
Guestimate on how much fermi would cost?

Last response: in Graphics & Displays
November 20, 2009 2:06:07 AM

I am sure they will have at least two variants, a cheaper one (like the GTX 260) and a really expensive one (like the GTX 280). How much do you think they will cost?


November 20, 2009 2:44:15 AM

Easy formula:

value of the US$ / value of the Taiwanese dollar X the yield @ TSMC / the yield at GF for 32/28nm X (the distance from the moon to the surface of the Earth - [the distance from the core to the surface of the Earth at TSMC + the distance from the core to the surface of the Dresden GF fab]/2 X all the tea in China [mainland - [Hong Kong/Taiwan]]) + the square root of working A1 Fermi chips X (the remaining number of loyal fanbois who think Fermi will launch before Groundhog Day / the total # of GPU buyers - 1) X (the price of the HD5970 on Boxing Day/K1 - the price of two 5770s on the last day of Kwanzaa).

I think there's another variable in there, like the natural log of the gravitational constant somewhere, but I think that's only for the special overclocked and watercooled editions.

Hope that helps. :hello:
November 20, 2009 3:44:56 AM

The expensive one will be at least $450 at launch, as a guestimate.
November 20, 2009 12:44:07 PM

Seeing as the G300 is larger than the G200, and yields at 40nm aren't so good, the prices will be high. The one advantage Nvidia has this time is that AMD fired first, so they know what the rest of the market is aiming for. The G300 top end will be around the power of the new 5970 (pure guess on my part), so you can expect ~$600.
November 20, 2009 1:18:09 PM

I hope it has the power of a GT300 for sure, and then when they go 28/32nm they can get a dual Fermi out; now that would be serious for future games at 2560x1600. That would be a perfect replacement for a GTX 295.

But there are doubts: it's a bit different from last year, as the 4800 launched after the GT200 and brought a reduction in prices, but what will happen this year? Launch at lower prices? From Nvidia's history, they like to launch at sell-your-old-motorbike-and-get-this-card prices!

Secondly, Fermi is HPC-optimized, but how will that play out for gaming and DX11 performance? Perhaps they will use all those double-precision flops and caches and other tidbits, write solid drivers (they can do that), and take the performance bar to new heights. I think that's what they have in mind; they are moving a bit away from gaming-only cards, as ATI said, but they won't "abandon" gaming.

Even more future speculation: the Radeon HD 6000 will be radically different from the HD 5000.
November 20, 2009 4:32:53 PM

Bottom of the range at $200, top of the line for $799.
November 20, 2009 6:15:16 PM

TheGreatGrapeApe said:
Easy formula:

value of the US$ / value of the Taiwanese dollar X the yield @ TSMC / the yield at GF for 32/28nm X (the distance from the moon to the surface of the Earth - [the distance from the core to the surface of the Earth at TSMC + the distance from the core to the surface of the Dresden GF fab]/2 X all the tea in China [mainland - [Hong Kong/Taiwan]]) + the square root of working A1 Fermi chips X (the remaining number of loyal fanbois who think Fermi will launch before Groundhog Day / the total # of GPU buyers - 1) X (the price of the HD5970 on Boxing Day/K1 - the price of two 5770s on the last day of Kwanzaa).

I think there's another variable in there, like the natural log of the gravitational constant somewhere, but I think that's only for the special overclocked and watercooled editions.

Hope that helps. :hello:


lol +1

So by looking at that formula, it seems that they will cost quite a bit, or shall I say, more than what ATI is charging... :o 
November 20, 2009 6:37:33 PM

Figure that no one will buy one if they can't justify the cost over a comparable ATI card......

....So if it beats the 5970, it can't cost more than $699
....So if it beats the 5870, it can't cost more than $499
.....and so on ....

and if it loses, no one will buy any so price is moot :) 
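The rule of thumb in the post above can be sketched as a tiny lookup. This is just a toy, and the tiers and dollar caps are the post's own guesses; the 5850/$299 row is my extrapolation of the "...and so on" (an assumption, not anything from the thread):

```python
# Toy sketch of the rule of thumb above: the price ceiling for a new
# card is set by the fastest ATI card it manages to beat. The first two
# rows come from the post; the third is an assumed extrapolation.
PRICE_CAPS = [
    ("HD 5970", 699),
    ("HD 5870", 499),
    ("HD 5850", 299),  # assumption, extrapolating the "...and so on"
]

def max_launch_price(cards_beaten):
    """Highest defensible launch price; None means it loses across
    the board and, as the post puts it, the price is moot."""
    for card, cap in PRICE_CAPS:
        if card in cards_beaten:
            return cap
    return None

print(max_launch_price({"HD 5870", "HD 5850"}))  # 499
```

The ordering matters: the list is walked from the fastest tier down, so a card that beats everything gets the top cap.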
November 20, 2009 6:49:19 PM

Oh, plenty will still buy, just for the NVidia name.

But anyway, my bet is one billion dollars (from TGGA's formula, he just forgot the square root of the age of the Universe).
November 20, 2009 6:50:52 PM

^^ daym, then that makes it even more expensive per unit.... =(
November 20, 2009 7:18:09 PM

Well, doesn't it all depend on the benchmarks vs. the competition?

Example: Fermi releases its top-of-the-line single-card solution and it's right on par with or very slightly better than the 5870. In the end, it should be priced the same as the 5870... it's all based on price/performance. One would be considered dumb to buy a card that has the same performance but costs more.
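The price/performance argument can be made concrete as dollars per frame. All the numbers below are invented placeholders for illustration, not benchmark results:

```python
# Hypothetical dollars-per-frame comparison. Prices and average fps
# below are made up to illustrate the argument, not real data.
cards = {
    "HD 5870":       {"price": 399, "avg_fps": 60},
    "Fermi top-end": {"price": 499, "avg_fps": 62},  # "very slightly better"
}

for name, c in cards.items():
    print(f"{name}: ${c['price'] / c['avg_fps']:.2f} per average fps")
```

A card that is only marginally faster but notably pricier loses on dollars per frame, which is exactly the point the post is making.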

November 20, 2009 7:24:56 PM

Exactly, that's why Fermi has to be superior to the 5870 REGARDLESS, or there is no point in purchasing it in the first place. Same as the GTX 260 vs. the 4890: the 4890 is superior yet costs the same or less, depending on the model...
November 20, 2009 7:40:26 PM

OvrClkr said:
Exactly, that's why Fermi has to be superior to the 5870 REGARDLESS, or there is no point in purchasing it in the first place. Same as the GTX 260 vs. the 4890: the 4890 is superior yet costs the same or less, depending on the model...


Yet people still buy many GTX 260s.

Never underestimate the power of a brand, or the stupidity of a consumer.

I'm sure there is someone at Nvidia that has already figured out how much money they will make at any given final performance level. I bet you would be surprised how much they figure they will make even if the thing performs barely better than a GTX 285. They don't have to match ATI's price/performance, they only have to come close... they will probably profit even without being all that close. I'd bet JH and company are more than able to tank a generation. As sad as it is, the 5000 series could be on every store shelf right now, and Fermi could not come out until June, yet Nvidia would not lose more than a couple % in market share over that time. Though things like that hurt long term if it goes on for too long.
November 20, 2009 7:45:44 PM

Don't forget, people will defend the use of PhysX, CUDA etc., and be willing to pay that extra
November 20, 2009 7:55:58 PM

In regards to CUDA/PhysX/etc., I believe that it is all a matter of preference there. JDJ & daedalus685 are right, people stick to their brands, but over the last year or so I have seen a lot of converts...

These are very exciting times in the GPU market, lots of competition... let's hope laugh-a-bee can hold a candle, it'll make things even more interesting.
November 20, 2009 8:04:42 PM

daedalus685 said:
Yet people still buy many GTX 260s.

Never underestimate the power of a brand, or the stupidity of a consumer.

I'm sure there is someone at Nvidia that has already figured out how much money they will make at any given final performance level. I bet you would be surprised how much they figure they will make even if the thing performs barely better than a GTX 285. They don't have to match ATI's price/performance, they only have to come close... they will probably profit even without being all that close. I'd bet JH and company are more than able to tank a generation. As sad as it is, the 5000 series could be on every store shelf right now, and Fermi could not come out until June, yet Nvidia would not lose more than a couple % in market share over that time. Though things like that hurt long term if it goes on for too long.


Well, just for the record, I am not one of those people :na: , I had the option to upgrade one of my GTS 250s (1GB) to a GTX 260 through BFG's trade-up program. But I might have to settle for another GTX 260 if Fermi does not land on Earth anytime soon. I'm just tired of the wait :( 
November 20, 2009 8:18:39 PM

I'll defend the use of PhysX, hell, any "physics" engine, when it does and is used for what people think of as physics, i.e. moving single rigid bodies, instead of "ooh, debris, cloth, smoke, WATER! By YOUR powers combined, I am Captain Planet!"
November 20, 2009 8:24:21 PM

Mousemonkey said:
$2499 - $18995


I was going to say $600~800 USD but if the economy implodes that will be cheap with your numbers.
November 21, 2009 12:58:27 PM

nforce4max said:
I was going to say $600~800 USD but if the economy implodes that will be cheap with your numbers.

The Fermi cards are most likely going to cost that much; the cards 'based' on Fermi, like the GF100, will hopefully cost a lot less, but that's because they are not the full-fat Fermi cards. The 'Fermi' cards are the equivalent of the current Quadro cards and should not be confused with what will be their desktop equivalents which, as rumour has it, will be known as GF100. The OP's question was "Guestimate on how much fermi would cost?", not "Guestimate on how much fermi based top of the range desktop cards would cost?"

November 21, 2009 1:17:21 PM

I'm guessing at least $1, unless it's sold in a 99 cents store.
November 21, 2009 1:50:32 PM

jaguarskx said:
I'm guessing at least $1, unless it's sold in a 99 cents store.


Sure, if we went on a gold standard and ten to fifteen years had passed, then it should be free.

Best solution

November 21, 2009 1:59:49 PM

These are just speculations

Top end card: 512 cores, 384-bit, High clocks ------ performance between gtx295 and 5970 ---------- $599

Second Card: 448 cores, 320-bit, Moderate clocks ---- performance between 5870 and gtx295 ---------- $399

Third card: 256 cores, 256-bit, High clocks ------- performance around 5850 ------ $249
November 21, 2009 3:25:08 PM

Those prices are not impossible as far as the die sizes are concerned, as the 512-core Fermi is rumored to be 576 mm2, exactly as big as the GTX 280, which launched at $650. Even more delicious, consider a 28nm Fermi-based shrink, like the GTX 285 is today, at around $350! That would be sweet, and the dual Fermi for $550 (1024 cores).

Oh my, save us from all the guessing and bless Nvidia to release the next one, godspeed!
November 21, 2009 3:45:03 PM

I'd add, we haven't seen the implementation of HKMG in GPUs yet, which should be available at 28nm.
HKMG brings with it cooler cores at the same speeds, or faster cores all around.
It's like a shrink in itself, so I'm looking forward to its usage. Combined with the 28nm process, I see it as similar to a doubling of process change, meaning much higher clocks, and better cooling or cooler cores.
November 21, 2009 4:05:10 PM

The main things IMO for HKMG are lower idle power and higher clocks, both due to less leakage.

You could say lower overall power and heat, but really, does anyone think either company is going to do that versus clocking them as fast as possible to satisfy the whiners who cry for a doubling of performance every launch?
November 21, 2009 4:13:19 PM

LOL, true.
It depends on whether Intel has an alternative by then also.
They're touting their "green" approach, using resource shutdown within the die etc., which, like you, I find ironic when talking gfx.
November 21, 2009 4:27:40 PM

Yeah, well, the idle and 2D or 3D desktop should be fine with whatever a single cluster of SPUs would be (heck, you really only need ~4 SPUs for a good Aero desktop with basic 3D), probably one ROP cluster and even a single PCIe lane/channel. That'd be nice for low power, but when gaming, it's gonna be as fast as they can go, Go GO!
November 21, 2009 5:01:59 PM

Which only means all those LRB cores will be cranked, using lotsa juice.
Personally I can't wait for HKMG in GPUs, the jump will be very nice, but no soup for LRB, as it'll already have it coming in.
November 22, 2009 3:59:07 AM

Thanks for the HKMG info, didn't know this, what potential! Depends on TSMC then... Intel's fabs are of course the latest for giving out LRB juice!
November 22, 2009 4:24:13 AM

Quote:
Those prices are not impossible as far as the die sizes are concerned, as the 512-core Fermi is rumored to be 576 mm2, exactly as big as the GTX 280


How could that be? It has over twice the shaders and God knows what else in there. The GTX 280 is on 55nm, and the G300 should be 40nm. I would think it would be bigger than the GTX 280.
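A back-of-the-envelope area check makes the disagreement concrete: area per transistor scales roughly with the square of the linear feature size, so a 55nm to 40nm move ideally buys a bit under a 2x transistor budget in the same die area. (This uses the post's 55nm figure; the original GTX 280 actually launched on 65nm, which would make the gap wider still, and real layouts never shrink this cleanly.)

```python
# Idealized die-area scaling: area per transistor goes as the square of
# the linear feature size. Numbers follow the post's 55nm -> 40nm claim.
old_nm, new_nm = 55.0, 40.0
area_scale = (new_nm / old_nm) ** 2          # ~0.53x area per transistor
transistor_budget_multiplier = 1.0 / area_scale

print(f"area per transistor: {area_scale:.2f}x")
print(f"transistor budget in the same die area: "
      f"~{transistor_budget_multiplier:.1f}x")  # ~1.9x in the ideal case
```

So "over twice the shaders" in the same 576 mm2 is borderline even in the ideal case, which is why the die was expected to grow.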
November 22, 2009 5:32:06 AM

The die shrink just makes it possible for them to fit more power in a given area. How big they want to make that area is all up to them. Like it matters; GPUs and CPUs are all pretty much the size of your thumb at best, the rest is interface and heatspreader.
November 22, 2009 5:50:39 AM

Quote:
Like it matters...


It does matter, Izzy. Nvidia EOL'd their GTX series before coming out with the replacements. The die was simply too large to compete with what AMD had out. If they more than doubled their shaders, then the die will get even larger. The move to 40nm will help, as will any tweaks they make to the die. Other than that, we could see a repeat of what happened with the GT200 series. They simply have to get the die size down.
November 22, 2009 9:14:48 AM

Hiya All,

First, I apologise in advance for mistakes etc., as this is the first forum I've joined rather than just read. I first posted and joined about 24 hours ago, in a thread about the required power supply size and case for at least a single HD 5970 and, as I understood the question, with the ability to run two HD 5970s in CrossFire. I haven't found that thread again yet to read any replies. It will likely be 2010 or near it before I can answer those questions more accurately, as the two HD 5970s I pre-ordered are supposedly not due until after Christmas.

Now to this one, with my newbie questions too: I've no idea what assumptions we should use. Some commented there will be more than one variant; my understanding is at least three, with the slowest a GT100. Prices about $100 over the AMD equivalent once benchmarked seem logical, but a big part is how much Nvidia can afford to lose.

One question I have is how much structure each of these has. It seems many members are assuming Nvidia will arrive with a 28nm core rather than 40nm? 40nm is what is still currently being worked on, as I'd assume, and it's already far behind the target release date, so I'd assume Nvidia is not going to wait another year. It's about a year yet before 28nm is ready even for ramp-up production rates, which is a long, long time to delay! Which it arrives as will affect pricing in at least two ways, the obvious being lower cost at 28nm along with the ability to clock faster. But the biggest question for the curious is what 28nm Nvidias would arrive priced against, meaning which family of ATI cards, and I assume they'd be HD 6000 models!

Then, if shrunk to 28nm, they will be up against not the current 40nm HD 5000 cards but either even faster RV970-based HD 6000 models that are mainly an improved (more streams etc.) and refined RV870 core or, from what I read mostly, HD 6000 models with an RV970 that is finally at least a partial redesign. I've heard the 28nm RV970s are a redesign with some "limited increase of CPU ability", but nowhere near the fraction of the core die devoted to "CPU", the 1/3 of the die I hear the Nvidia GT300 is supposed to have as CPU!

After all that, I hear that late 2010 is supposed to be a three-way race, with Intel finally entering the ring!

Hope you all had a great weekend!
December 5, 2009 9:28:24 AM

Mousemonkey said:
The Fermi cards are most likely going to cost that much; the cards 'based' on Fermi, like the GF100, will hopefully cost a lot less, but that's because they are not the full-fat Fermi cards. The 'Fermi' cards are the equivalent of the current Quadro cards and should not be confused with what will be their desktop equivalents which, as rumour has it, will be known as GF100. The OP's question was "Guestimate on how much fermi would cost?", not "Guestimate on how much fermi based top of the range desktop cards would cost?"


Hell, the "decent" Quadro cards on Newegg cost about $789.99 and slightly above, while the crazy GTX 280-like Quadro cards cost around $1,600, so I wouldn't even suppose that the Fermi card (GTX 380-spec, as one might call it) would be close to that price. I'm thinking starting at $800 would be more like it, which would be half of what the Quadro FX 4800 is ($1,600-1,700). They are not workstation cards; they are gaming cards with the ability to interact more with browsers, multi-core CPUs, apps like CUDA and PhysX, and perhaps "share" GPU power virtually? (The last one I'm wishing for.)

I do wish they were Hybrid SLI, though, like the older 9xxx GTs and GTXs...
December 5, 2009 9:36:14 AM

ironsides7 said:
Hiya All,

First, I apologise in advance for mistakes etc., as this is the first forum I've joined rather than just read. I first posted and joined about 24 hours ago, in a thread about the required power supply size and case for at least a single HD 5970 and, as I understood the question, with the ability to run two HD 5970s in CrossFire. I haven't found that thread again yet to read any replies. It will likely be 2010 or near it before I can answer those questions more accurately, as the two HD 5970s I pre-ordered are supposedly not due until after Christmas.

Now to this one, with my newbie questions too: I've no idea what assumptions we should use. Some commented there will be more than one variant; my understanding is at least three, with the slowest a GT100. Prices about $100 over the AMD equivalent once benchmarked seem logical, but a big part is how much Nvidia can afford to lose.

One question I have is how much structure each of these has. It seems many members are assuming Nvidia will arrive with a 28nm core rather than 40nm? 40nm is what is still currently being worked on, as I'd assume, and it's already far behind the target release date, so I'd assume Nvidia is not going to wait another year. It's about a year yet before 28nm is ready even for ramp-up production rates, which is a long, long time to delay! Which it arrives as will affect pricing in at least two ways, the obvious being lower cost at 28nm along with the ability to clock faster. But the biggest question for the curious is what 28nm Nvidias would arrive priced against, meaning which family of ATI cards, and I assume they'd be HD 6000 models!

Then, if shrunk to 28nm, they will be up against not the current 40nm HD 5000 cards but either even faster RV970-based HD 6000 models that are mainly an improved (more streams etc.) and refined RV870 core or, from what I read mostly, HD 6000 models with an RV970 that is finally at least a partial redesign. I've heard the 28nm RV970s are a redesign with some "limited increase of CPU ability", but nowhere near the fraction of the core die devoted to "CPU", the 1/3 of the die I hear the Nvidia GT300 is supposed to have as CPU!

After all that, I hear that late 2010 is supposed to be a three-way race, with Intel finally entering the ring!

Hope you all had a great weekend!



Interesting that you mentioned the "limited increase of CPU ability", with that being nowhere near the percentage of the core die devoted to "CPU", the 1/3 of the die I hear the Nvidia GT300 is supposed to have as CPU!

So you are saying that the newer GPUs could actually help automatically overclock the CPUs and use their cores systematically, giving each one a task, like one core for shaders, another for vectors, another for rendering, another for better AA, and so forth? I like the creativity this forum is getting at... imagine six full cores with this type of GPU, a motherboard with Hybrid SLI (they should bring it back), and a great Larrabee chip to go along with 16GB of DDR3 memory, USB 3.0 support, and SSD speed... the PC of the next two years.
December 7, 2009 12:24:37 AM

Well,
This is the fee structure that I have for the Fermis:

C2050:
$1625 (Academic cost)

C2070: ~$3900 (Academic)

It's a bit more for regular schmoes. There are a number of packages where you get the presently available C1060 and upgrade, where you either keep the C1060 or trade up.

Those are for the cards. The S1070, S2050, and S2070 run $7995 / $8995 / $22,900, respectively, with trade-ups of the S1070 card. Again, there are all sorts of trade-up combinations, depending upon dealers.
Regards,
Particleman529
December 7, 2009 12:54:13 AM

liquidsnake718 said:
Interesting that you mentioned the "limited increase of CPU ability", with that being nowhere near the percentage of the core die devoted to "CPU", the 1/3 of the die I hear the Nvidia GT300 is supposed to have as CPU!

So you are saying that the newer GPUs could actually help automatically overclock the CPUs and use their cores systematically, giving each one a task, like one core for shaders, another for vectors, another for rendering, another for better AA, and so forth? I like the creativity this forum is getting at... imagine six full cores with this type of GPU, a motherboard with Hybrid SLI (they should bring it back), and a great Larrabee chip to go along with 16GB of DDR3 memory, USB 3.0 support, and SSD speed... the PC of the next two years.




That quote pertained to the expected CPU capability of next year's ATI RV970, which will have a little more "CPU" in it than the HD 5000 (RV870) family. The Nvidia Fermi GT300 (if we ever see one) is the one with about 1B of its 3B transistors devoted to "CPU", which is now considered too much for a "graphics" chip.

Dane
January 8, 2010 11:13:33 PM

jonpaul37 said:
Well, doesn't it all depend on the benchmarks vs. the competition?

Example: Fermi releases its top-of-the-line single-card solution and it's right on par with or very slightly better than the 5870. In the end, it should be priced the same as the 5870... it's all based on price/performance. One would be considered dumb to buy a card that has the same performance but costs more.

Come on, man, you know very well that Nvidia will NEVER price a NEW GPU, much less a promising one with new technology and DX11 features, at the same price as the direct competition, in this case the 5870 and the 5850. Their GT180 or GT160 (if those are truly the product names) will not be priced equivalent to the 5870 (= GT180)... Nvidia always overprices and usually prices their GPUs above ATI's exact competing card. It's their branding strategy and image. It is to make the average fanboy or PC builder think that Nvidia has a slight premium and quality advantage over ATI... It's psychological marketing, and for 5 different generations I believed it and actually bought 5 different Nvidia cards. That is, until the greatly priced 5850s came along and smashed not only Nvidia's GTX 285, but even the GTX 295 in certain games!! The price? $260, priceless... I mean, come on, the GTX 285 is still sub-$300!!!
January 8, 2010 11:22:27 PM

JAYDEEJOHN said:
Don't forget, people will defend the use of PhysX, CUDA etc., and be willing to pay that extra



No no no no. I truly have to admit, as sad as it is for me to say, I have actually bought a Sapphire 5850 to replace my 9800 GTX+... The 5850 is a beast and can play Crysis on Very High settings and 2x AA at 35-plus fps on my PC, and it works great, blah blah... I do have to say, even my older (come to think of it, way older, g92-architecture) 9800 GTX+ felt and came out better in terms of physics, due to PhysX. I will tell you that, upon throwing barrels, Koreans, shooting trees, turtles, birds, and water, Crysis with PhysX is much better. This is comparing a 9800 GTX+ to a 5850!!!!! Unfortunately, Tom's and other websites do not compare physics, lighting, color palettes, shadows, shaders, textures, fluidity, bump mapping, or other small details in their comparison charts, understandably, because there would be too much to report and differentiate.

However, if you have two PCs, or if you change your GPU and compare details like physics and color, you will see that Nvidia does have a better physics system. I can swear by this, as you can feel the difference in the game. That is one positive thing that a beautiful game like Crysis can give gamers and testers: its programming and use of physics, CPU and GPU power, and AA...
Anonymous
January 8, 2010 11:27:57 PM

You should stick that 9800 GTX+ back in your PC and see how it runs on Very High settings. Then play it the way you used to play it.

You will quickly conclude that PhysX is a total gimmick and you would never pass up higher quality settings and fps in preference to it.
January 8, 2010 11:43:00 PM

itisravi said:
These are just speculations

Top end card: 512 cores, 384-bit, High clocks ------ performance between gtx295 and 5970 ---------- $599

Second Card: 448 cores, 320-bit, Moderate clocks ---- performance between 5870 and gtx295 ---------- $399

Third card: 256 cores, 256-bit, High clocks ------- performance around 5850 ------ $249

Even though these may not be the final specs, I suppose these numbers are realistic, perhaps +$100 for the GF/GT180 variant, leaving it likely to be $699. Heck, the 8800 GTX and the 8800 Ultra were $699-799 when they first came out in 2006; the enthusiasts who bought those cards can tell you how good they were, as we can see incarnations of that card (not exactly the same, but close) in the GTS 250, still being sold and used today...
February 8, 2010 11:09:54 PM

Months later and still no word, even for marketing reasons; Nvidia is truly behind in marketing their product...
February 8, 2010 11:11:45 PM

This topic has been closed by Mousemonkey