Exclusive Interview: Nvidia's Ian Buck Talks GPGPU

Applications Of GPGPU Computing

One of the interesting things about Larrabee is the theoretical ability to do things like recursion on the chip. How would that compare to a theoretical approach of incorporating a lightweight x86 processor on a GPU for "housekeeping tasks"?

I don’t think recursion is critical to the success of GPU computing, as almost all codes that run on the GPU are the performance-critical inner loops of an application. It is always best to inline and avoid things like recursion for performance reasons. We certainly could support recursion today, but prefer to allow our compiler to optimize without it. Regarding a lightweight CPU, there’s already a CPU in the system and we’ve focused on providing a razor-thin driver stack to keep things as efficient as possible. Where just-in-time processor scheduling is required, we’ve found dedicated hardware is almost always more area and power efficient at these critical tasks than a heavyweight x86 processor.
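To make that concrete, here is a minimal sketch, ours rather than Nvidia's, of what a recursion-free inner loop looks like in CUDA: a per-thread reduction written as a flat loop the compiler can fully inline. The kernel and function names are hypothetical.

```cuda
#include <cuda_runtime.h>

// A sum written as a flat loop: no call stack, trivially inlinable,
// which is the style GPU compilers optimize best.
__device__ float sum_iterative(const float *data, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc += data[i];
    return acc;
}

// One thread per row: each thread reduces one row of a row-major matrix.
__global__ void row_sums(const float *rows, float *out, int num_rows, int row_len)
{
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < num_rows)
        out[row] = sum_iterative(rows + row * row_len, row_len);
}
```

The same reduction written recursively would force the compiler to maintain a per-thread call stack; the flat version keeps everything in registers, which is the trade-off Buck is describing.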

Right now, the majority of GPGPU applications have been limited to scientific computing and video decoding/transcoding. Where do you see consumers benefiting from GPGPU technology in realms outside video? 

The next wave of GPU computing consumer applications will be accelerating video editing, image processing, and gaming physics. We think your spreadsheet might already be fast enough. While video processing was an obvious application to accelerate, novel consumer applications in computer vision, speech recognition, and handwriting recognition can equally benefit from the massive performance potential of the GPU that is readily available in every PC.

Where do you see GPGPU going in the future?

Consumers are already benefiting from GPU computing. Companies like OptiTex are using CUDA to design clothing for the mass market. Car companies are designing next-generation cars with GPU ray tracing using CUDA. Physics engines in games are also migrating to the GPU. Moving forward, we’ll continue to see opportunities in personal media; for example, sorting and searching photos based on image content (faces, location, and so on) is an incredibly compute-intensive operation.

Some of the work I’m most proud of is in medical imaging and cancer research. Techniscan is a company using our Tesla GPUs to improve a doctor’s ability to detect and diagnose breast cancer earlier and more accurately than traditional methods. The National Cancer Institute is reporting a 12x CUDA-enabled speedup in protein-ligand calculations used to design new drugs for diseases such as cancer and Alzheimer's. It is wonderful to see GPU computing being used in some of the fundamental research that will save lives.

  • shuffman37
    How about some GPU acceleration for Linux! I'd love Blu-ray and HD content to be GPU accelerated by VLC or Totem. Nvidia?
  • matt87_50
    I ported my simple sphere-and-plane raytracer to the GPU (DX9) using RenderMonkey. It was so simple and easy, only took a few hours, and it was nearly EXACTLY the same code (using HLSL, which is basically C). And it was so beautiful: what took seconds on the CPU was now running at over 30 fps at 1680x1050.

    A monumental speed increase with hardly any effort (even without a GPGPU language).

    It's going to be nothing short of a revolution when GPGPU goes mainstream (hopefully soon, with DX11).
    Computer down on power? Don't replace the CPU; just drop another GPU into one of the many spare PCIe x16 slots and away you go, with no fussing around with SLI or CrossFire and the compatibility issues they bring. It will just be seen as another pile of cores that can be used!

    Even for tasks that don't map easily onto the GPU architecture, most will probably still run faster than they would on the CPU, because the brute power the GPU has over the CPU is so immense; and, as he sort of said, most tasks that aren't suited to GPGPU don't need speeding up anyway. (A sketch of the per-pixel ray test at the heart of such a raytracer follows this comment.)
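    The port described above comes down to one ray test per pixel. As a rough illustration, and assuming nothing about the commenter's actual HLSL, the core ray-sphere intersection might look like this in CUDA; all names here are ours:

    ```cuda
    #include <cuda_runtime.h>
    #include <math.h>

    struct Ray    { float3 o, d; };        // origin, unit-length direction
    struct Sphere { float3 c; float r; };  // center, radius

    // Nearest positive hit distance along the ray, or -1.0f on a miss.
    __device__ float hit_sphere(Ray ray, Sphere s)
    {
        float3 oc = make_float3(ray.o.x - s.c.x, ray.o.y - s.c.y, ray.o.z - s.c.z);
        float b    = oc.x * ray.d.x + oc.y * ray.d.y + oc.z * ray.d.z; // dot(oc, d)
        float c    = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - s.r * s.r;
        float disc = b * b - c;            // discriminant of t^2 + 2bt + c = 0
        if (disc < 0.0f) return -1.0f;     // the ray misses the sphere
        float t = -b - sqrtf(disc);        // nearer of the two roots
        return (t > 0.0f) ? t : -1.0f;
    }
    ```

    Because every pixel runs this test independently, the GPU can evaluate millions of them in parallel, which is where the speedup the commenter saw comes from.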
  • shuffman37: Nvidia does have GPU-accelerated video on Linux; see the Wikipedia article on VDPAU (http://en.wikipedia.org/wiki/VDPAU). It's gaining support from a lot of different media players.
  • NVIDIA, saying that the "spreadsheet is already fast enough" may be misleading. Business users have the money. Spreadsheets are already installed (a huge existing user base). Many financial spreadsheets are very complicated: 24 layers, 4,000 lines, with built-in Monte Carlo simulations.

    Making all these users instantly benefit from faster computing may be the road to success for NVIDIA. (A sketch of the kind of Monte Carlo loop involved follows this comment.)

    Dr. Drey
    Bloomington, IN
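    As a rough sketch of the workload Dr. Drey describes, here is what one Monte Carlo path simulation per thread could look like in CUDA, assuming a simple log-normal price model; the kernel name and parameters are illustrative, not taken from any real product:

    ```cuda
    #include <curand_kernel.h>

    // One thread simulates one price path; payoff[] gets the terminal price.
    __global__ void mc_paths(float *payoff, float s0, float drift, float vol,
                             int steps, unsigned long long seed)
    {
        int tid = blockIdx.x * blockDim.x + threadIdx.x;

        curandState rng;
        curand_init(seed, tid, 0, &rng);   // independent random stream per thread

        float s = s0;
        for (int i = 0; i < steps; ++i) {
            float z = curand_normal(&rng); // standard normal draw
            s *= expf(drift + vol * z);    // one log-normal step
        }
        payoff[tid] = s;
    }
    ```

    Because each thread owns an independent cuRAND stream, thousands of simulated paths run concurrently with no coordination, which is exactly the shape of workload a spreadsheet's Monte Carlo add-in spends its time on.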
  • raptor550
    Although I appreciate his work... I had to AdBlock his face. Sorry, it's just creepy.
  • techpops
    While I can't get enough of GPGPU articles, it really saddens me that Nvidia is completely ignoring Linux, and not because I'm a Linux user. Ignoring Linux keeps the GPU from becoming the main renderer in 3D software that is also available under Linux. In my case, I use Cinema 4D under Windows, but I'll never see the massive speedups possible, because Maxon would never build on a platform that is Windows- and Mac-only.

    It's worth pointing out here that I saw video of CUDA-accelerated global illumination on a single Nvidia graphics card going up against an 8-core CPU beast. Beautiful globally illuminated images took 2-3 minutes to render on the 8-core PC, just for a single frame. The CUDA version, rendering to the same quality, ran at up to 10 frames per second! That speedup is astonishing and makes an upgrade to a massive 8-core system seem pathetic in the face of that kind of performance.

    One can only imagine what would be possible with multiple graphics cards.

    I also think the killer app for the GPU is ultimately not going to be graphics at all. In the early days it will be, but further down the line I think augmented reality will take over as the main GPU use. Right now, using a smartphone for augmented reality is pretty shoddy: everything depends on GPS, which is totally unreliable and will remain so. What's needed for silky-smooth AR apps is a lot of processing power to recognize shapes and interpret all the visual data coming through the camera, working alongside the GPS. So if you're standing in front of a building, an arrow can be drawn on the ground leading to the building's entrance, because the GPS has located the building and the GPU has worked out where the windows and doors are, overlaying graphics that are motion-locked to the video.

    I think AR is going to change everything about portable computers, but only when devices have enough compute power to make it a smooth experience rather than the jerky, unreliable, experimental toy it is on today's smartphones.
  • pinkzeppelin97
    zipzoomflyhigh: "If my forehead was that big due to a retreating hairline, I would shave my head."
    Amen to that.
  • tubers
    CPU and GPU combined? Will that bring more profit to each of their respective companies? Hmm.
  • jibbo
    shuffman37: "How about some GPU acceleration for Linux! I'd love Blu-ray and HD content to be GPU accelerated by VLC or Totem. Nvidia?"
    There is GPU acceleration for Linux. I believe NVIDIA has provided a CUDA driver, compiler, toolkit, etc. for Linux since day one.