
Nvidia Betting its CUDA GPU Future With 'Fermi'

Source: Tom's Hardware US

This chip is going to be huge for the supercomputing market -- if Nvidia has its way.

The video card has evolved to the point where it is now termed the GPU, thanks to the growing capability of the hardware. Now the GPU is about to take its next big leap, becoming a specialized GPGPU (of course, we realize that the terms "specialized" and "general purpose" are somewhat contradictory).

Nvidia is betting heavily that GPGPUs will become a major force in the computing market. While we'll still need our GPUs to push pixels in our 3D games, Nvidia has just revealed its next-generation CUDA architecture, codenamed "Fermi."

Nvidia bills Fermi as an entirely new, ground-up design that will finally realize the potential of GPU computing. Although Nvidia made big strides with the G80 and later the GT200, the graphics maker has made Fermi a much more pleasant and useful tool for programmers.

“The first two generations of the CUDA GPU architecture enabled Nvidia to make real in-roads into the scientific computing space, delivering dramatic performance increases across a broad spectrum of applications,” said Bill Dally, chief scientist at Nvidia.

“It is completely clear that GPUs are now general purpose parallel computing processors with amazing graphics, and not just graphics chips anymore,” said Jen-Hsun Huang, co-founder and CEO of Nvidia. “The Fermi architecture, the integrated tools, libraries and engines are the direct results of the insights we have gained from working with thousands of CUDA developers around the world. We will look back in the coming years and see that Fermi started the new GPU industry.”

At the unveiling, Nvidia did not give anything away in terms of clock speeds or any of the other specifications that hardcore 3D gamers focus on. Instead, it talked about technical features that lend themselves specifically to GPU computing. Such technologies include:

  • C++, complementing existing support for C, Fortran, Java, Python, OpenCL and DirectCompute.
  • ECC, a critical requirement for datacenters and supercomputing centers deploying GPUs on a large scale.
  • 512 CUDA cores featuring the new IEEE 754-2008 floating-point standard, surpassing even the most advanced CPUs.
  • 8x the peak double precision arithmetic performance of Nvidia’s last-generation GPU. Double precision is critical for high-performance computing (HPC) applications such as linear algebra, numerical simulation, and quantum chemistry.
  • Nvidia Parallel DataCache, the world’s first true cache hierarchy in a GPU, which speeds up algorithms such as physics solvers, raytracing, and sparse matrix multiplication, where data addresses are not known beforehand.
  • Nvidia GigaThread Engine with support for concurrent kernel execution, where different kernels of the same application context can execute on the GPU at the same time (e.g., PhysX fluid and rigid body solvers; see the sketch after this list).
  • Nexus, the world’s first fully integrated heterogeneous computing application development environment within Microsoft Visual Studio.
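
To make the concurrent-kernel point concrete, here is a minimal CUDA sketch (our illustration, not Nvidia's code; the kernel names, sizes and values are hypothetical). It issues two independent double-precision kernels on separate streams, the pattern Fermi's GigaThread engine is designed to overlap; earlier GPUs accept the same API but simply run the launches back to back.

    #include <cstdio>
    #include <cuda_runtime.h>

    // Two small double-precision kernels standing in for independent
    // solvers (think of the PhysX fluid and rigid-body example above).
    __global__ void scaleKernel(double *x, double a, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            x[i] *= a;
    }

    __global__ void offsetKernel(double *y, double b, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] += b;
    }

    int main()
    {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(double);
        double *x, *y;
        cudaMalloc(&x, bytes);
        cudaMalloc(&y, bytes);
        cudaMemset(x, 0, bytes);  // all-zero bits is 0.0 for IEEE doubles
        cudaMemset(y, 0, bytes);

        // Separate streams are what give the scheduler permission to
        // overlap the kernels; in the default stream they would serialize.
        cudaStream_t s1, s2;
        cudaStreamCreate(&s1);
        cudaStreamCreate(&s2);

        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        scaleKernel<<<blocks, threads, 0, s1>>>(x, 2.0, n);
        offsetKernel<<<blocks, threads, 0, s2>>>(y, 1.0, n);

        cudaDeviceSynchronize();  // wait for both streams to finish

        cudaStreamDestroy(s1);
        cudaStreamDestroy(s2);
        cudaFree(x);
        cudaFree(y);
        printf("launched two double-precision kernels on separate streams\n");
        return 0;
    }

Whether the two kernels actually overlap is a hardware scheduling decision; the source code is identical either way, which is part of what makes the feature transparent to developers.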

Oak Ridge National Laboratory (ORNL) has already announced plans for a new supercomputer that will use Fermi to conduct research in areas such as energy and climate change. ORNL’s supercomputer is expected to be 10 times more powerful than today’s fastest supercomputer.

“This would be the first co-processing architecture that Oak Ridge has deployed for open science, and we are extremely excited about the opportunities it creates to solve huge scientific challenges,” Jeff Nichols, ORNL associate lab director for Computing and Computational Sciences said. “With the help of Nvidia technology, Oak Ridge proposes to create a computing platform that will deliver exascale computing within ten years.”

Nvidia did reveal that its upcoming Fermi GPU will pack 3 billion transistors, making it one mammoth chip – bigger than anything from ATI. Of course, Nvidia's aspirations in the GPU space are far more ambitious than AMD's. It'll be interesting to see if and how the two head-to-head rivals diverge from their shared focus on 3D gaming technologies toward greater GPGPU application.

Comments
  • 1 Hide
    lucuis , October 1, 2009 3:55 PM
    Wow, now that is sweet.
  • 46 Hide
    magicandy , October 1, 2009 4:03 PM
    If you're going to put a logo on your chart, common sense states you shouldn't cover up what's on the chart...
  • 25 Hide
    Anonymous , October 1, 2009 4:04 PM
    What does that say under the TomsHardware logo in the picture..?
  • 17 Hide
    crisisavatar , October 1, 2009 4:09 PM
    The computing capability is great and all but I am personally more interested in affordable GPUs. Let's see if NVIDIA can deliver here.
  • 2 Hide
    mlopinto2k1 , October 1, 2009 4:15 PM
    Hi, I would like a programmable CPU/GPU/GPGPU unit that allowed Virtual Instruments and Effects to be processed on it. Otherwise, this is just more of the same CRAP!
  • 9 Hide
    megamanx00 , October 1, 2009 4:24 PM
    That's nice and all, but when are they gonna start selling the darn thing? Besides, even though an evolution of CUDA is nice and everything, proprietary APIs like that are kind of a hard sell. I think it's cool that it will get some C++ support, we'll see how that one goes, but as OpenCL and DirectCompute are more open, it will be more important how this chip compares to AMD's in the performance of those rather than CUDA.
  • 4 Hide
    jonpaul37 , October 1, 2009 4:32 PM
    If the performance/price fits the same shoes as ATI's latest release(s), I will be sold and Nvidia will again be an option in my future. Not to mean it isn't now, I'm just saying, ATI has some nice stuff for a low-ish price.
  • 1 Hide
    nforce4max , October 1, 2009 4:33 PM
    Cool, how much will it cost? I'd most likely have to work for a month at $10 US an hour just to get one.
  • 10 Hide
    Anonymous , October 1, 2009 4:50 PM
    The logo'd out part of the chart reads:

    L1 Cache: Configurable 16K or 48K
    L2 Cache: 768K
    ECC: Yes
    Concurrent Kernels: Up to 16

    ...from another source. Gotta love automated processes like logo stamping :) 
  • 2 Hide
    thomaseron , October 1, 2009 4:52 PM
    http://media.bestofmicro.com/6/Z/225755/original/Nvidia-Fermi-Overview.png
  • 4 Hide
    Nakecat , October 1, 2009 4:58 PM
    Quote:
    Nvidia did reveal that its upcoming Fermi GPU will pack 3 billion transistors, making it one mammoth chip – bigger than anything from ATI.

    Not until the card is out, and it's not coming out until the first quarter of 2010. Besides, with the 5870x2 around the corner and the 5770, 5850xx... ATI should still hold the best price / performance value.

    http://www.hardwarecanucks.com/news/video/ati-radeon-hd-5870-x2-images-surface/
  • 3 Hide
    Jenoin , October 1, 2009 5:00 PM
    The Quadro and Tesla product lines have always been based off the GeForce line. Is this a turnaround? Are they going to design for the Tesla line and then remove features for the Quadro and GeForce? I hope the pricing of these isn't going to reflect all the capabilities this chip has that will be completely unused by the majority of GeForce owners (other than Folding@home).
  • 0 Hide
    viometrix , October 1, 2009 5:00 PM
    wow the money in my pocket is getting really hot
  • 5 Hide
    dreamer77dd , October 1, 2009 5:03 PM
    If it does not bottleneck itself and can manage data flow, it could do well. "Just one DVI output - keep in mind this is NOT a gaming card, but the Tesla model for super computing." I would like to put in a card that takes care of everything else in the background of my computer that bogs down the CPU. If it helps programs in languages like C, Java, Python, OpenCL and DirectCompute perform better, why not, but it would have to be more than a 10% increase for me to be interested.
  • 1 Hide
    dreamer77dd , October 1, 2009 5:06 PM
    I used to see tests with 4 GPUs; what happened to those days? I still would love to rip through games and have no game bring me to my knees. I am not sure if motherboards have enough bus bandwidth to take advantage of this. Hmm?
  • 1 Hide
    njkid3 , October 1, 2009 5:18 PM
    Well, it's a nice looking GPU, but with its delayed entry, focus on computing rather than gaming, and the high possibility that it will be pretty high on the cost scale, I would have to say the odds that they will one-up ATI are slim. ATI's chips are already out, they are priced reasonably, and they have already been shown to haul serious ass in gaming. With a soon-to-be full line of DX11 products covering all price levels of the market, I would be surprised if Nvidia can pull this one out of their hats.
  • -1 Hide
    omnimodis78 , October 1, 2009 5:32 PM
    I think it's safe to assume that if it has the stated capabilities then it really won't have any issues at all playing games, even the next-gen stuff. nVidia would be insane to sell a card in the consumer market without making it a kick-ass gaming beast, or the reviews would tear it apart and within a month all gamers would be buying ATI, and we know they would capitalize on that shift so much that it would force nVidia to go into crisis mode! No need to worry, these cards will be premium gaming cards, with the added benefit of an expanded potential. I HOPE!
  • 1 Hide
    yang , October 1, 2009 5:39 PM
    ...will this run crysis? :) 
  • 0 Hide
    wildwell , October 1, 2009 5:40 PM
    Ahh... technology marches on. It is odd that Tom's put their logo not just on the chart, but over the part of the chart showing info on the new GPU!