How will Fermi perform?

January 13, 2010 3:13:45 AM

Based on what we know so far, how well do you think Fermi will perform?

I mean like with the current 40nm TSMC yield boost, things can't be too shabby for Nvidia, right?

Fermi does look big though.

January 13, 2010 3:18:07 AM

We don't know how their GPGPU implementation is going to affect gaming performance. It might all transfer over and perform quite well, or it might leave half the chip sitting there doing nothing. We won't really know until it gets benched, which won't be for several weeks yet.

While 40nm will let it run cooler, simply using a smaller process doesn't help by itself; good architecture has always made more of a difference than being able to double up what you already had, and we don't know yet whether Fermi has either.
January 13, 2010 3:28:25 AM

Latest rumors:
30+% better than the 5870, rumored from nVidia, so take it with even more salt.
Other rumors say 20% better than the 5870, which puts it about the same distance ahead in performance as the GTX 280 was of the 4870, and is more likely.
January 13, 2010 3:28:58 AM

Well, definitely with all the CUDA cores, this baby will be hell at GPGPU. From the way I've read the technical stuff it just looks really brute force, like simply adding more and more CPU cores to a die when you shrink it, with DX11 slapped on.

I wonder what Fermi's market position will be, though, given that it's been almost a full six-month cycle since AMD first launched the HD 5xxx series. I don't think Nvidia's biggest concern is the HD 5xxx series, since in about two months we should see AMD's 6xxx series, assuming AMD keeps to the traditional six-month graphics generational cycle.
January 13, 2010 3:33:18 AM

I'm assuming Fermi will outperform the 5870 by a noticeable margin, but only by a few frames in most games. I'm also predicting it will be much more expensive, making the performance gain hard to justify. I also predict it will overclock badly due to heat output and, when compared to an OC'd 5870, will fall behind.

All in all, I imagine Fermi will be good, but not good enough.
January 13, 2010 3:36:57 AM

Well, obviously Nvidia has problems with Fermi.

Otherwise they wouldn't have delayed it this long.

Unless they knew Fermi was vastly superior and are stocking up on inventory for a surprise attack on AMD.
January 13, 2010 3:39:50 AM

Someone said in another thread that they'd be the first to say Fermi = fail. I thought about that, and this is more telling: each day it's not here is failure.
The GPGPU abilities could be astounding within nVidia's own scope, which is fine in most of the apps it's aimed at, but on the desktop? It may leave it somewhat too proprietary, though it's still early.
January 13, 2010 3:43:58 AM

Well if Fermi becomes a really good GPGPU card it might make a nice little niche for itself.

Like what VIA did with low-power CPUs (until Intel Atom of course)
January 13, 2010 3:50:39 AM

Yes, and with LRB's delay, there's time for it to get a foothold as well.
January 13, 2010 3:59:10 AM

Well... I am an enthusiast fiction writer so maybe.
January 13, 2010 4:04:04 AM

Heheh, ask me no questions, I'll tell you no lies? Heheh, or science fiction?
January 13, 2010 5:09:50 AM

amdfangirl said:
Well, definitely with all the CUDA cores, this baby will be hell at GPGPU.


Except for single-precision or integer calculations, where at expected launch speeds it will be just a bit faster than G200b and equal to or slower than the HD4870, let alone the HD5870. Even if they get it clocked as fast as the GTX 285, it will only have about half the peak throughput of Cypress (1.5 TFLOPS to 2.75).

Where it closes the gap is in double-precision calculations, where it is expected to be about 650 GFLOPS to the HD5870's 544 GFLOPS. This is good for HPC work, but not necessarily for other situations. It depends a lot on the app being used, and only a few would be 'all DP, all the time'.
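
A back-of-the-envelope sketch of where those theoretical peak figures come from (a rough illustration only; the HD 5870 numbers are its published specs, while the Fermi line assumes the full 512 cores at a GTX 285-level 1476 MHz shader clock, which was not confirmed at the time):

    def peak_gflops(units, flops_per_unit_per_clock, clock_mhz):
        # theoretical peak = execution units x FLOPs issued per unit per clock x clock
        return units * flops_per_unit_per_clock * clock_mhz / 1000.0

    cypress_sp = peak_gflops(1600, 2, 850)   # HD 5870: ~2720 GFLOPS single precision
    cypress_dp = cypress_sp / 5              # Cypress runs DP at 1/5 rate: ~544 GFLOPS
    fermi_sp   = peak_gflops(512, 2, 1476)   # full GF100 at a 1476 MHz shader clock: ~1511 GFLOPS
    fermi_dp   = fermi_sp / 2                # Fermi's claimed half-rate DP: ~756 GFLOPS peak;
                                             # the ~650 figure above implies lower clocks or fewer enabled cores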

And remember, the launch consumer card will likely not have the full complement of 'cores', so it will likely be pretty close to the HD5870 there too.

And then you have the HD5970 with way more than even the best estimates for Fermi.

Now, the thing that will differentiate Fermi will be its memory and cache structures, which will help some HPC calculations, but not most consumer-level GPGPU apps.

Quote:
I don't think Nvidia's biggest concern is the HD 5xxx series, since in about two months we should see AMD's 6xxx series, assuming AMD keeps to the traditional six-month graphics generational cycle.


Since when did AMD have a 'tradition' of 6 month GPU cycles? More like 18-24 months with a mid-generation refresh. Expect an HD5890 refresh this year, and an HD6xxx early NEXT year.
January 13, 2010 5:15:30 AM

Wonder if we will see a late risk card on 28nm? Something similar to a 5850/5830? I say this because it's always been the full chip, cut down, as opposed to a 57xx on down.
January 13, 2010 5:30:34 AM

Could be, but remember it hasn't always been the full chip cut down. The biggest such jumps were the X800XL, HD3870 and HD4770, but there were also the R9600, X1650 Pro and HD2600 as mid-range jumps.
January 13, 2010 5:38:27 AM

I realized that after I typed it. I have a feeling, though, that it may be more important just getting transistor counts up at these lower nodes, as nVidia is already crying about "perfection" on a 3.2-billion-transistor chip. It won't be getting easier, heheh.
January 13, 2010 6:01:49 AM

Which bank should I rob so that I can buy a Fermi GPU?

January 13, 2010 7:34:32 AM

I wanna see a Mars edition Fermi, lol. Put 3 or 4 of those in SLI and see what FPS you get in Crysis.
January 13, 2010 7:41:21 AM

xrodney said:
It's more than likely AMD will try low-end graphics chips on 28nm to mature the process and prepare for high-end chips this year.
So far it seems GF is ahead of schedule :) 
http://www.semiaccurate.com/2010/01/09/global-foundries...

Not sure how the 28nm process is going for TSMC.


It is posted on a rumour site, though.
January 13, 2010 7:43:28 AM

I heard it was going to be 20% faster than the 5870 and really, really hot because of the size.
January 13, 2010 11:14:16 AM

I want to see it ray trace. Beyond that I couldn't care less how it does.
January 13, 2010 1:01:22 PM

amdfangirl said:
It is posted on a rumour site, though.


Are you serious !?!

You say this in a thread that asks "How will Fermi Perform?" :pt1cable: 
January 13, 2010 1:31:02 PM

Oh, the irony
January 13, 2010 6:05:10 PM

Simply put... we'll know when it hits the streets. Well, maybe two weeks before.
January 13, 2010 6:30:00 PM

Call me crazy, but would it really be that hard to create a 3-slot card with 8+8+8 power pins? :) 
January 13, 2010 6:50:52 PM

Don't have the old links, but yes, it's been thought of: a dual-slot card, stacked 3 high.
I'd just caution, as I cautioned with the 5xxx series: expect the worst, hope for the best.
Worst case is 5870 level and hot; best case is 40% above 5870 level, and not hot, IMO.
January 13, 2010 6:55:24 PM

@Randomizer: +1 to raytracing, I just saw that Full-CG VOTD and had to wipe drool off my keyboard.

@Zirbmonkey: F@H FTW. I can only imagine how many PPD we could get... And yet how will it compare once we have GPU3 on 5970's?

@Paperfox: Dude, you're not crazy, you're brilliant. Why haven't we seen this yet?

I'm confused, is GF100 the flagship card? Bigger numbers = better performance, right? That's why red cars are faster than blue cars and racing stripes win races!
Have a groovy day everyone, I LOVE threads like this.
January 13, 2010 7:05:14 PM

Being a programmer (I've used CUDA, ATI Stream and OpenCL), I would love to see what the compute power is.

As for gaming, I could probably make do with my 4870 1GB for a while still.
January 13, 2010 7:14:16 PM

My $.02:

I think Fermi will see a reversal of positions between the Nvidia and ATI design philosophies. Historically, Nvidia GPUs have been designed to run today's games faster, while ATI has been faster to fully support new features.

Fermi is designed to be flexible and powerful. The more complicated the graphics, the more it will shine. It will love advanced shader effects, antialiasing, and tessellation, but it won't run simple graphics any faster than today's cards, so you won't see much speed-up in games like WoW.

The games Fermi will like will be the ones from the tech shops; expect Rage and the next Unreal engine to shine.

It's a big shift in architecture, so expect the first round of drivers to be slow. Once good drivers come out, I'd expect to see a 20% speed increase across the board.

January 13, 2010 7:35:26 PM

People seem to be pretty optimistic here. I have my doubts. Nvidia knows how fast the card will run... why are they silent? From a marketing perspective it just makes sense to advertise the hell out of your product prior to its launch in order to slow AMD sales down. If they could really give us 20% or more power over the 5870, I think they would be screaming the specs. Instead, all I hear is GPGPU, PhysX, and CUDA.
January 13, 2010 8:18:07 PM

TheGreatGrapeApe said:
Are you serious !?!

You say this in a thread that asks "How will Fermi Perform?" :pt1cable: 


What I meant was that we should come to conclusions based on the evidence we already have.

I need more than 2 hours of sleep a day, don't I?
January 13, 2010 11:46:24 PM

Quote:

As for price, the fact that the GPU chip is 40% bigger means it'll cost 40% more to make: $400*1.4 = $560 if the costs were flat.


That's not how it works. It may be 40% bigger, which seems a little optimistic, but die cost is a geometric question of how many dies you can fit per wafer, and it's not a linear '40% bigger = 40% more expensive'. It's a fixed cost per wafer, currently thought to be $7,000 per wafer at TSMC, and the die count per wafer is essentially 100:170, so divide the wafer cost by the dies and you get $70 per die vs $41.18 per die, which is about 70% higher cost. And that's holding yields equal, ignoring the current 20% vs 40% rumours, which would make it more expensive still. And that's just the chips; the Fermi card is more expensive as well (more complex PCB with more memory, more traces, more power-management components, and what will likely be a more expensive heatsink/cooler). The total may be higher or lower than 40%, but it's far from being in direct ratio to die size. :heink: 
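
A quick back-of-the-envelope version of that wafer math (the $7,000 wafer cost, the ~100:170 die counts and the 20%/40% yields are the rumoured figures quoted above, not confirmed numbers):

    WAFER_COST = 7000.0                      # rumoured TSMC 40nm wafer price
    FERMI_DIES, CYPRESS_DIES = 100, 170      # rough candidate dies per wafer

    fermi_die   = WAFER_COST / FERMI_DIES    # $70.00 per candidate die
    cypress_die = WAFER_COST / CYPRESS_DIES  # ~$41.18 per candidate die
    ratio = fermi_die / cypress_die          # ~1.70, i.e. roughly 70% higher, not 40%

    # Folding in the rumoured 20% vs 40% yields widens the gap further:
    fermi_good   = WAFER_COST / (FERMI_DIES * 0.20)    # ~$350 per good die
    cypress_good = WAFER_COST / (CYPRESS_DIES * 0.40)  # ~$103 per good die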

Quote:
I'm sure it'll be a F@H champ for sure.


It's not that simple, or else the HD4K would've destroyed the GTX280. The client has to be optimized for the GPU, and the DP and cache advantage may not matter unless they redesign the clients.
January 14, 2010 4:19:03 AM

They are redesigning the GPU folding program with the new GPU 3 implementation. This new version finally takes advantage of the arch of AMD GPUs so the race should be close between the two.
January 14, 2010 5:03:55 AM

My money's on ATI, if the client's decent, unless Fermi surprises; and for older nVidia cards, no way.
January 14, 2010 5:20:13 AM

Well the HD 4850 had 1 TFlop of processing power :) 
January 14, 2010 5:37:55 AM

Exactly, and while most complain about FurMark and heat (the majority wearing green, for some odd reason), they never mention the numbers FurMark pulls on those cards. So too will a good client, just not too good, or the heat issues will return, but that'll be worked out.
January 14, 2010 5:56:12 AM

Nvidia always seem to pull something out of the bag; I see no reason why they won't now. As for aethm's comment, maybe they are just that confident? Or maybe, now that they have seen AMD's GPUs, they are rushing to make something faster and are in trouble with heat. Who knows.

Either way, I've always had Nvidia, but I'm not a fanboy; I get the best component at the time that I need it. I can't wait to see the results of testing, as I do have high hopes for it.
January 14, 2010 6:09:52 AM

snipe0876 said:
Or maybe, now that they have seen AMD's GPUs, they are rushing to make something faster and are in trouble with heat. Who knows.

Development on Fermi started ages ago. You don't draw up a design and put it into mass production in the space of 6 months ;) 
January 14, 2010 7:13:21 AM

shubham1401 said:
Which bank should I rob so that I can buy a Fermi GPU?


You will need to rob two banks: one to buy it and another to pay the electricity bills afterwards.
January 14, 2010 7:58:42 AM

"Faster than AMD's best GPU" would technically mean it beats the 5870, not necessarily the 5970. I don't think anyone expects it to beat the latter anyway, at least not in single-GPU form. But it's just PR, and I know what NVIDIA's PR are like.
January 14, 2010 8:05:36 AM

If Lloyd had said it, it'd actually mean something. I liked it, as he was honest about how hard it's been, and how slow to market, thus painful, and it left no doubt as to who was talking, and when.
January 14, 2010 11:48:48 AM

Well, it's good to see something finally, after all the hush-hush.
January 14, 2010 12:14:06 PM

snipe0876 said:
I see Tom's has got a sneak peek at one of the unfinished Fermi products: http://www.tomshardware.co.uk/ces-2010-fermi,review-317...


And I hated that article, because the computer was running the Unigine benchmark but the performance estimate given came from the PR guy, not from the machine itself.
January 14, 2010 1:12:22 PM

Yep, I don't trust nV PR any more after the wood-screw fiasco. Unless they bench it and provide numbers to people, someone telling someone else it's faster carries little weight. I'd give it some credence if it was even Lloyd seeing a benchmark result with it performing faster than an HD5870 or 5970 (and even then nV could 'floptimize' as they have in the past), yet we don't even have that; we have 'well, the sales guys say their product is better than the competitor's', which is a surprise to whom?

Also, if the cooling apparatus isn't final, the clocks aren't final, and the power draw isn't final, how are performance estimates something you can trust based on word of mouth? We get early benchmark numbers from unreleased hardware and I don't trust even that more solid target, so why would I trust an nV guy saying "we're #1!"?


January 14, 2010 1:16:24 PM

amdfangirl said:
They are redesigning the GPU folding program with the new GPU 3 implementation. This new version finally takes advantage of the arch of AMD GPUs so the race should be close between the two.


GPU3 will likely favour the HD5870 & 5970, especially since they've had no Fermi to optimize on. I expect the OpenCL version to be the interesting one: it should require less optimization and should work across more architectures right out of the box with more universal code, so it would be a better comparison between the two. However, I wouldn't be surprised if the GPU client works faster once optimized by the IHVs' people, like Mike Houston.
January 14, 2010 4:57:40 PM

So then, what is your final take on this card before it comes out? Will it be a winner and come out on top, or will it flop, based on what you know and all the speculation as well?
January 14, 2010 5:03:58 PM

TheGreatGrapeApe said:
GPU3 will likely favour the HD5870 & 5970, especially since they've had no Fermi to optimize on. I expect the OpenCL version to be the interesting one: it should require less optimization and should work across more architectures right out of the box with more universal code, so it would be a better comparison between the two. However, I wouldn't be surprised if the GPU client works faster once optimized by the IHVs' people, like Mike Houston.


Well, GPU3 is being designed around AMD's arch, so I think that will perform better than if they made it OpenCL.

It depends.

Was the OpenCL spec around when they started work on the new core?
January 14, 2010 5:12:10 PM

Yep, they actually talked about focusing on the OpenCL core more than the update to GPU3, which kinda annoyed a lot of HD4K owners who would benefit from the GPU3 update more than the OpenCL path.

There's a thread at the Folding forum in which Mike Houston comments on this.

If you look at how MilkyWay performs on the HD4K cards compared to their nV counterparts, you see the true potential that is just not being used there.