
Applications Of GPGPU Computing

Exclusive Interview: Nvidia's Ian Buck Talks GPGPU
One of the interesting things about Larrabee is the theoretical ability to do things like recursion on the chip. How would that compare to a theoretical approach of incorporating a lightweight x86 processor on a GPU for "housekeeping tasks?"

I don’t think recursion is critical to the success of GPU computing, as almost all codes that run on the GPU are the performance-critical inner loops of an application. It is always best to inline and avoid things like recursions for performance reasons. We certainly could support recursion today, but prefer to allow our compiler to optimize without it. Regarding a lightweight CPU, there’s already a CPU in the system and we’ve focused on providing a razor-thin driver stack to keep things as efficient as possible. Where just-in-time processor scheduling is required, we’ve found dedicated hardware is almost always more area and power efficient at these critical tasks than a heavyweight x86 processor.

Right now, GPGPU applications have largely been limited to scientific computing and video decoding/transcoding. Where do you see consumers benefiting from GPGPU technology in realms outside video?

The next wave of GPU computing consumer applications will be accelerating video editing, image processing, and gaming physics. We think your spreadsheet might already be fast enough. While video processing was an obvious application to accelerate, novel applications in computer vision, speech, and handwriting recognition for the consumer market can equally benefit from the massive performance potential of the GPU that is readily available in every PC.

Where do you see GPGPU going in the future?

Consumers are already benefiting from GPU computing. Companies like OptiTex are using CUDA to design clothing for the mass market. Car companies are designing next-generation cars with GPU ray tracing using CUDA. Physics engines in games are also migrating to the GPU. Moving forward, we’ll continue to see opportunities in personal media; sorting and searching photos based on image content (faces, location, and so on) is an incredibly compute-intensive operation.

Some of the work I’m most proud of is in medical imaging and cancer research. Techniscan is a company using our Tesla GPUs to improve a doctor’s ability to detect and diagnose breast cancer earlier and more accurately than traditional methods. The National Cancer Institute is reporting a 12x CUDA-enabled speedup in protein ligand calculations used to design new drugs for diseases such as cancer and Alzheimer's. It is wonderful to see GPU computing being used in some of the fundamental research that will save lives.

Top Comments
  • 10
    matt87_50 , September 3, 2009 7:21 AM
    I ported my simple sphere-and-plane raytracer to the GPU (DX9) using RenderMonkey; it was so simple and easy, only took a few hours, nearly EXACTLY the same code (using HLSL, which is basically C). And it was so beautiful: what took seconds on the CPU was now running at over 30fps at 1680x1050.

    a monumental speed increase with hardly any effort (even without a GPGPU language)

    it's going to be nothing short of a revolution when GPGPU goes mainstream (hopefully soon, with DX11).
    Computer down on power? Don't replace the CPU, just dump another GPU in one of the many spare PCIe x16 slots and away you go, no fussing around with SLI or CrossFire and the compatibility issues they bring. It will just be seen as another pile of cores that can be used!

    even for tasks that can't be done easily on the GPU architecture, most will still probably run faster than they would on the CPU, because the brute power the GPU has over the CPU is so immense, and, as he kind of said, most of the tasks that aren't suited to GPGPU don't need speeding up anyway.
Other Comments
  • 8
    Anonymous , September 3, 2009 7:00 AM
    How about some GPU acceleration for Linux! I'd love Blu-ray and HD content to be GPU accelerated by VLC or Totem. Nvidia?
  • 3
    Anonymous , September 3, 2009 9:49 AM
    shuffman37: Nvidia does have GPU-accelerated video on Linux; see the Wikipedia article on VDPAU: http://en.wikipedia.org/wiki/VDPAU. It's gaining support from a lot of different media players.
  • -1
    Anonymous , September 3, 2009 3:15 PM
    NVIDIA, saying that the "spreadsheet is already fast enough" may be misleading. Business users have the money. Spreadsheets are already installed (a huge existing user base). Many financial spreadsheets are very complicated: 24 layers, 4,000 lines, with built-in Monte Carlo simulations.

    Making all these users instantly benefit from faster computing may be the road to success for NVIDIA.

    Dr. Drey
    Bloomington, IN
  • 3
    raptor550 , September 3, 2009 8:44 PM
    Although I appreciate his work... I had to AdBlock his face. Sorry, it's just creepy.
  • -1
    techpops , September 3, 2009 8:55 PM
    While I can't get enough of GPGPU articles, it really saddens me that Nvidia is completely ignoring Linux, and not just because I'm a Linux user. Ignoring Linux stops the GPU from becoming the main source for rendering in 3D software that is also available under Linux. So in my case, where I use Cinema 4D under Windows, I'll never see the massive speedups possible, because Maxon would never develop for a Windows- and Mac-only platform.

    It's worth pointing out here that I saw video of CUDA-accelerated global illumination from a single Nvidia graphics card going up against an 8-core CPU beast. Beautiful globally illuminated images were taking 2-3 minutes to render, just for a single image, on the 8-core PC. The CUDA one, rendering to the same quality, was running at up to 10 frames per second! That speedup is astonishing and really makes an upgrade to a massive 8-core PC system seem pathetic in the face of that kind of performance.

    One can only imagine what would be possible with multiple graphics cards.

    I also think the killer app for the GPU is ultimately not going to be graphics at all. While in the early days it will be, further down the line I think it will be augmented reality that takes over as the main GPU use. Right now it's pretty shoddy using a smartphone for augmented reality applications; everything is dependent on GPS, and that's totally unreliable and will remain so. What's needed for silky-smooth AR apps is a lot of processing power to recognize shapes and interpret all the visual data you get through a camera to work with the GPS. So if you're standing in front of a building, an arrow can point on the floor leading to the building's entrance, because the GPS has located the building and the GPU has worked out where the windows and doors are and made overlaid graphics that are motion-locked to the video.

    I think AR is going to change everything with portable computers, but only when enough compute power is in a device to make it a smooth experience, rather than the jerky, unreliable experimental toy it is on today's smartphones.
  • 0
    pinkzeppelin97 , September 4, 2009 12:09 AM
    zipzoomflyhigh: If my forehead was that big due to a retreating hairline, I would shave my head.


    amen to that
  • -2
    tubers , September 4, 2009 12:25 AM
    CPU and GPU combined? Will that bring more profit to each of their respective companies? Hmm.
  • 0
    jibbo , September 4, 2009 1:24 AM
    shuffman37: How about some GPU acceleration for Linux! I'd love Blu-ray and HD content to be GPU accelerated by VLC or Totem. Nvidia?


    There is GPU acceleration for Linux. I believe NVIDIA's provided a CUDA driver, compiler, toolkit, etc. for Linux since day 1.

  • -6
    Anonymous , September 4, 2009 5:06 AM
    linux is almost as gay as its users, stfu noobs
  • 0
    matt87_50 , September 4, 2009 6:13 AM
    surely there is nothing stopping OpenCL going to Linux?
  • 2
    jibbo , September 4, 2009 7:01 AM
    Matt87_50: surely there is nothing stopping OpenCL going to Linux?


    NVIDIA released Windows and Linux OpenCL drivers in June.
  • 0
    Soul_keeper , September 4, 2009 8:22 AM
    Excellent article, thank you.
  • 0
    techpops , September 4, 2009 8:40 AM
    @jibbo It's my understanding that you need DirectX 11 to really make use of CUDA, and if you wanted a cross-platform app to work in Windows and Linux you'd have major hurdles adapting CUDA to work with both. Enough of a hassle that at least one huge 3D company has turned its nose up at CUDA and is just waiting however many years until OpenCL is something worth looking at.
  • -2
    void_pointer , September 4, 2009 5:30 PM
    Quote:
    GPU computing programs tend to be much larger, more complex, and benefit from more complex optimization


    Larger and more complex than what, exactly? Please. What a load of egotistical bollocks.
  • -2
    jasperjones , September 4, 2009 6:37 PM
    Thanks THW for this one. Really interesting stuff!
  • 2
    Anonymous , September 4, 2009 8:03 PM
    Quote:
    linux is almost as gay as its users, stfu noobs

    How does using something besides Windows dictate my sexual preference? If you wouldn't mind, I'd like to know your logic behind that statement.
  • 3
    sumitg , September 4, 2009 11:24 PM
    There is a GPU (CUDA) accelerator plug-in for Excel from SciComp:
    http://www.scicomp.com/parallel_computing/GPU_OpenMP/
  • 0
    Anonymous , September 7, 2009 7:30 AM
    I wish he would have confirmed that "570 times faster" comment from the big honcho. 570x faster in 6 years sounds like a load of marketing garbage to me. Then he said CPUs will only be 3 times faster in 6 years.

    I agree parallel computing is looking like the wave of the future, but claiming that in 6 years you will be producing a processor that is nearly 600 times faster than the competition is ridiculous. The fact that the head of this company said that makes anything coming out of these guys' mouths sound like marketing instead of solid science.

    The fact that you had this interview and didn't feel the need to put that question to him suggests this interview is old, or you just didn't feel like asking a very pertinent question.

    Guess that's why I don't read this site much anymore. Amateur hour.
  • -1
    JohnnyLucky , September 7, 2009 5:30 PM
    I read the article but I didn't understand the technobabble.