
Nvidia Betting its CUDA GPU Future With 'Fermi'

Last response: in News comments
October 1, 2009 3:55:48 PM

Wow, now that is sweet.
Score
1
October 1, 2009 4:03:22 PM

If you're going to put a logo on your chart, common sense states you shouldn't cover up what's on the chart...
Score
46
Anonymous
October 1, 2009 4:04:50 PM

What does that say under the TomsHardware logo in the picture..?
Score
25
October 1, 2009 4:09:33 PM

The computing capability is great and all but I am personally more interested in affordable GPUs. Let's see if NVIDIA can deliver here.
Score
17
October 1, 2009 4:15:47 PM

Hi, I would like a programmable CPU/GPU/GPGPU unit that allows virtual instruments and effects to be processed on it. Otherwise, this is just more of the same CRAP!
Score
2
October 1, 2009 4:24:59 PM

That's nice and all, but when are they gonna start selling the darn thing? Besides, even though an evolution of CUDA is nice and everything, proprietary APIs like that are kind of a hard sell. I think it's cool that it will get some C++ support; we'll see how that goes. But since OpenCL and DirectCompute are more open, how this chip compares to AMD's in those will matter more than its CUDA performance.
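
To give a feel for what that C++ support means, here's a rough sketch of a templated CUDA kernel (a made-up example, not something from Nvidia's materials):

#include <cuda_runtime.h>

// Templated kernel: the same source can be instantiated for float or double,
// which plain C-style CUDA kernels could not express.
template <typename T>
__global__ void scale_add(const T* x, T* y, T a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    // Launch the float instantiation of the template.
    scale_add<float><<<(n + 255) / 256, 256>>>(x, y, 2.0f, n);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}

Whether the toolchain handles heavier C++ (classes, virtual functions and so on) this cleanly remains to be seen.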
Score
9
October 1, 2009 4:32:13 PM

If the price/performance fits the same shoes as ATI's latest release(s), I will be sold and Nvidia will again be an option in my future. Not to say it isn't now; I'm just saying ATI has some nice stuff for a low-ish price.
Score
4
October 1, 2009 4:33:28 PM

Cool, how much will it cost? I'd most likely have to work for a month at $10 US an hour just to get one.
Score
1
October 1, 2009 4:50:01 PM

d
Score
-3
Anonymous
October 1, 2009 4:50:58 PM

The logo'd out part of the chart reads:

L1 Cache: Configurable 16K or 48K
L2 Cache: 768K
ECC: Yes
Concurrent Kernels: Up to 16

...from another source. Gotta love automated processes like logo stamping :) 
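
For anyone curious, here's a rough sketch of how a configurable L1/shared split and concurrent kernel launches look through the CUDA runtime API (made-up kernels; treat it as an illustration, not Nvidia's official example):

#include <cuda_runtime.h>

// Two trivial placeholder kernels, purely for illustration.
__global__ void kernel_a(float* p) { p[threadIdx.x] += 1.0f; }
__global__ void kernel_b(float* p) { p[threadIdx.x] *= 2.0f; }

int main()
{
    float *buf_a, *buf_b;
    cudaMalloc(&buf_a, 256 * sizeof(float));
    cudaMalloc(&buf_b, 256 * sizeof(float));
    cudaMemset(buf_a, 0, 256 * sizeof(float));
    cudaMemset(buf_b, 0, 256 * sizeof(float));

    // The configurable 16K/48K split: prefer a bigger L1 for kernel_a,
    // a bigger shared-memory slice for kernel_b.
    cudaFuncSetCacheConfig(kernel_a, cudaFuncCachePreferL1);
    cudaFuncSetCacheConfig(kernel_b, cudaFuncCachePreferShared);

    // Concurrent kernels: launches in different non-default streams
    // may overlap on hardware that supports it.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    kernel_a<<<1, 256, 0, s1>>>(buf_a);
    kernel_b<<<1, 256, 0, s2>>>(buf_b);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(buf_a);
    cudaFree(buf_b);
    return 0;
}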
Score
10
October 1, 2009 4:58:58 PM

Quote:
Nvidia did reveal that its upcoming Fermi GPU will pack 3 billion transistors, making it one mammoth chip – bigger than anything from ATI.


Not until the card is out, and it's not coming out until the first quarter of 2010. Besides, with the 5870 X2 around the corner and the 5770, 5850, and so on, ATI should still hold the best price/performance value.


http://www.hardwarecanucks.com/news/video/ati-radeon-hd...
Score
4
October 1, 2009 5:00:09 PM

The Quadro and Tesla product lines have always been based on the GeForce line. Is this a turnaround? Are they going to design for the Tesla line and then remove features for the Quadro and GeForce? I hope the pricing of these isn't going to reflect all the capabilities this chip has that will be completely unused by the majority of GeForce owners (other than Folding@home).
Score
3
October 1, 2009 5:00:29 PM

wow the money in my pocket is getting really hot
Score
0
October 1, 2009 5:03:00 PM

If it does not bottleneck itself and can manage data flow, it could do well. "Just one DVI output - keep in mind this is NOT a gaming card, but the Tesla model for supercomputing." I would like to put in a card that takes care of everything else in the background of my computer that bogs down the CPU. If it helps languages like C, Java, and Python, or OpenCL and DirectCompute, perform better, why not? But it would have to be more than a 10% increase for me to be interested.
Score
5
October 1, 2009 5:06:17 PM

I used to see tests with 4 GPUs; what happened to those days? I would still love to rip through games and have no game bring me to my knees. I am not sure if motherboards have enough bus bandwidth to take advantage of this. Hmm?
Score
1
October 1, 2009 5:18:16 PM

Well, it's a nice-looking GPU, but with its delayed entry, its focus on computing rather than gaming, and the high possibility that it will be pretty high on the cost scale, I would say the odds that they will one-up ATI are slim. ATI's chips are already out, they are priced reasonably, and they have already been shown to haul serious ass in gaming. With ATI's soon-to-be-full line of DX11 products covering all price levels of the market, I would be surprised if Nvidia can pull this one out of their hat.
Score
1
October 1, 2009 5:32:15 PM

I think it's safe to assume that if it has the stated capabilities, then it really won't have any issues at all playing games, even the next-gen stuff. nVidia would be insane to sell a card in the consumer market without making it a kick-ass gaming beast, or the reviews would tear it apart and within a month all gamers would be buying ATI, and we know ATI would capitalize on that shift so much that it would force nVidia into crisis mode! No need to worry; these cards will be premium gaming cards, with the added benefit of expanded potential. I HOPE!
Score
-1
October 1, 2009 5:39:07 PM

...will this run crysis? :) 
Score
1
October 1, 2009 5:40:09 PM

Ahh... technology marches on. It is odd that Tom's put its logo not just on top of the chart, but over the part of the chart showing info on the new GPU!
Score
0
October 1, 2009 6:13:02 PM

Whoa... that's insane. Wish I programmed now.
Score
0
October 1, 2009 6:31:48 PM

Quite a monster of a GPU... Great for Folding@home and PhysX, I suppose.
And don't worry, with that kind of compute power it will be fine in gaming too ;-)
The other question is entirely different: is this more sensible than the ATI 58xx series for games? With 3.0 billion transistors, this is not going to be a "cheap" alternative to the 5870. Nvidia is going to take the fastest-card title at any cost. But even the GPU world needs to have its Ferrari; it may not be sensible for normal use, but it can be fun if you have the money for it.
This seems to be reasonably modular, so there is some hope for a new generation of Nvidia mid-range cards... ATI is doing a great job in that respect, and we need some competition there too. If Nvidia gives the G80 (yet another) facelift in the low and middle range, graphics development could stagnate badly.
Score
0
October 1, 2009 7:14:20 PM

From the Anandtech article:
Quote:
Correct answer isn't to target a lower price point first, but rather build big chips efficiently. And build them so that you can scale to different sizes/configurations without having to redo a bunch of stuff.


So this seems to be scalable, so we really could see competition in the not-so-high-end segments as well. But how soon?

Also interesting is that ATI uses smaller chips and uses more of them for more power, while Nvidia makes one big chip and reduces SM units or something like that to make a smaller chip for a cheaper price range.
All in all, it's good that Nvidia has a plan and there is going to be some real competition next year!
Maybe cheaper 58xx cards?
Score
1
October 1, 2009 7:54:09 PM

Yes, that's really nice for scientific purposes and stuff. Now where are your DX11 GPU and new motherboard chipsets?

I want to see if they can come up with something interesting enough to make them my next choice, otherwise I'll definitely be moving over to AMD/ATI.
Score
1
October 1, 2009 8:34:49 PM

If NVDA can provide the floating-point and processing power required to render in REAL TIME (which is a feat in itself, since one realistic image, 1 frame of the 60 needed per second, takes 23 hours on a quad-core CPU), they could take a chunk from Intel and AMD. I'd be all over supporting NVDA, but I have been since their first 3D accelerator (I believe they were Diamond at the time, or bought Diamond at the time).
Score
-1
October 1, 2009 8:42:52 PM

Antilycus: If NVDA can provide the floating-point and processing power required to render in REAL TIME (which is a feat in itself, since one realistic image, 1 frame of the 60 needed per second, takes 23 hours on a quad-core CPU), they could take a chunk from Intel and AMD. I'd be all over supporting NVDA, but I have been since their first 3D accelerator (I believe they were Diamond at the time, or bought Diamond at the time).

Hell, if this happened nVidia would have a MASSIVE advantage. I'm sure many pro 3D designers will be willing to shell out $5k+ for a card like this considering the time saved in the long run.

Then again, this probably won't happen very soon. Also, I'm assuming it would be true ray-traced rendering.
Score
0
October 1, 2009 9:17:00 PM

This sounds like the new line of nVidia GPUs to compete with the new HD 58xx...
Score
-1
October 1, 2009 10:29:00 PM

I think this is a logical step by Nvidia to head in the GPGPU direction if it's to survive. Hats off to them if they do release the product.

But rest assured, AMD/ATI will come out with a GPU in Q2/Q3 2010 with similar features. You never know, the GT300 might not make it out to the market by then :-)
Score
0
October 1, 2009 10:39:19 PM

saint19: This sounds like the new line of nVidia GPUs to compete with the new HD 58xx...

It sounds like a new GPU line that will crush their entire FirePro 3D line.
Score
1
October 1, 2009 10:42:12 PM

The professional applications and scientific computing market is big; not as big as the video game business, but the number of professional and scientific applications taking advantage of CUDA after only a couple of years is huge, and Nvidia has an installed base way larger than ATI's. If they can keep that base, as they likely will, the entire graphic design, movie production, animation, GIS, and oil & gas industries, plus academia, may shift to optimizing their tools for CUDA. I personally think those developers would be better off designing for OpenCL, so that they can use ATI or Nvidia cards for GPU acceleration; it might not be as optimized as if it were specifically tuned for one architecture, but it would hedge their bets. Considering we could all have maybe 20 TFLOPS with 4 x 5870 X2s on our desktops by the end of this year, it's amazing; what would four of these new cards be capable of?
Score
0
October 1, 2009 11:48:15 PM

I'd still rather use a platform independent api.
Score
1
October 2, 2009 1:11:32 AM

I am not a programmer. I am not a scientist. I am not a doctor.

I am a gamer.

Do I really need all of that?

Let's just say I want to play Crysis with maxed out settings on my modest 1680×1050 LCD display.

Do I really need to pay a premium price for a bunch of stuff I won't even need?

Are all of those bells and whistles to make people scream "ME WANT! ME WANT!"?

I see that this product will be very valuable for programmers, scientists, medical imaging, NASA, and stuff. But I'm just a gamer!
Score
1
October 2, 2009 4:14:50 AM

Methinks higher-frequency binned parts will emerge from ATi as the 5890 (1 GHz?), and everyone will be happy again. Except NVDA shareholders.

Nothing is known yet about the power consumption. ATi again has a smaller design that's scalable to an X2. 3,000,000,000 transistors will output a LOT of heat. She's gonna be HOT, she's gonna be expensive, and she'll lose to both a 5870 X2 and a 5850 X2 by large margins. Heck, there isn't even a speculated release date here!

Come on, you're gonna make a 3-billion-transistor part and disable a billion of them to make a cheaper part? You just wasted so much money making it! New PCB design cost issues, maybe?
Score
0
October 2, 2009 10:06:58 AM

joeman42: No, because it uses Havok instead of PhysX. For the same reason Nvidia sabotaged ATI on physics, it will be unable to process the effects in CryEngine. The silicon that could have been used to accelerate it (like ATI did) will be wasted idling. Larrabee, on the other hand, being a collection of x86 engines, can be reconfigured on the fly to use all of its muscle on each game. Nvidia is going out on a proprietary limb which may very well be an evolutionary dead end.

=D
I like the idea of Larrabee because of its massive compatibility and functionality. Shitloads of x86 cores is as good as it gets.

saint19: This sounds like the new line of nVidia GPUs to compete with the new HD 58xx...

At first glance at the chart, I thought "GTX 300!! YAY!"... Hopefully this was a sneak preview of the (hopefully soon-to-be-released) GTX 3xx line.
Score
0
October 2, 2009 11:14:56 AM

Tom's, I hope you take notice that during the presentation the Fermi board shown was a fake (they probably had real ones behind closed doors, or pre-rendered the demos), but there's a lot of evidence all over of why they are fakes (at least the one they held up and called "this puppy is Fermi").
Score
1
October 2, 2009 12:08:01 PM

Yummy boy spinning it again.
Still, at least half a year to launch...
Score
0
October 3, 2009 5:03:03 AM

Are they running out of stuff to name their products after?? Enrico Fermi wasn't even that involved with semiconductors. Maybe just having Eg/2 (the Fermi level) named after him merits having some advanced piece of silicon named after him too; what do I know.
Score
0
October 4, 2009 4:36:47 AM

Antilycus: If NVDA can provide the floating-point and processing power required to render in REAL TIME (which is a feat in itself, since one realistic image, 1 frame of the 60 needed per second, takes 23 hours on a quad-core CPU), they could take a chunk from Intel and AMD. I'd be all over supporting NVDA, but I have been since their first 3D accelerator (I believe they were Diamond at the time, or bought Diamond at the time).


nVidia bought 3dfx... just like they did Ageia. Diamond sells ATI cards... www.diamondmm.com ... the same website they've had for the last 10 years. nVidia has always been nVidia. nVidia's first GPU was the NV1, released in 1995... which failed miserably.
Score
1
October 6, 2009 11:21:45 AM

It sure is nice of them to put their logo on top of the chart so that you can't read it!
Score
0
October 15, 2009 5:31:55 AM

Since the table compares Fermi against the current GT200 and G80 GPUs, both of which were used in Tesla as well as GeForce products, it indicates that Fermi will be used in Nvidia's upcoming graphics cards. If these are the specs for the new GeForce cards and the rumors about GDDR5 memory are right, then no ATi card will match the performance of the next-generation GeForce series. But pricing will still be a factor, though that shouldn't bother Nvidia fans.
Score
0