Nvidia Does Accelerated Programming

Santa Clara (CA) - Back when we first saw the capabilities of Nvidia’s CUDA technology and Tesla acceleration cards, it was clear to us that the company had all the tools necessary to change the way we use computers today: the enormous computing horsepower of graphics cards opens up possibilities we have talked about for some time, but did not think were achievable in the foreseeable future. Now, for the first time, the company is challenging developers to exploit that hidden potential of graphics cards in a mainstream application.

Nvidia was the first to come up with a development framework that offers a relatively easy-to-learn way to accelerate traditional CPU-centric applications on a graphics processor. But while CUDA, which is based on standard C with a set of GPU-specific extensions, is generally available, Nvidia has pitched the technology mainly to universities, scientists and industries with a need for floating-point-heavy applications - such as financial institutions and the oil and gas sector.
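For readers who have never seen it, the sketch below shows roughly what that C-with-extensions model looks like. It is our own minimal illustration, not code from Nvidia: a function marked __global__ runs on the GPU, one lightweight thread per data element, and is launched from ordinary host code with CUDA's triple-angle-bracket syntax.

#include <cuda_runtime.h>

// Kernel: each GPU thread scales one element of the array.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        data[i] *= factor;
}

// Host-side launch; d_data must point to GPU memory allocated with cudaMalloc.
void scale_on_gpu(float *d_data, int n)
{
    scale<<<(n + 255) / 256, 256>>>(d_data, 0.5f, n);   // grid of 256-thread blocks
    cudaDeviceSynchronize();                            // wait for the GPU to finish
}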

Both Nvidia and ATI have shown consumer-oriented applications built on GPGPU technology, but neither company has seriously targeted the mainstream software segment yet. When we asked Nvidia CEO Jen-Hsun Huang when CUDA would move into the mainstream market, he told us that such a move would depend on Microsoft and its efforts to provide a Windows interface for GPGPUs.

It appears that Nvidia is shifting away from its enterprise-only strategy and turning its focus to the mainstream opportunity as well. In a contest announced today, the company is looking for the "most talented CUDA programmers in the world". Nvidia will provide a "partially GPU-optimized version of an MP3 LAME encoder" and asks developers to "optimize [the software] to run as fast as possible on a Cuda-enabled GPU." The encoder has to be built in the CUDA programming environment and must deliver a measurable speed-up in run time.
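To give an idea of the kind of work contestants face, here is a rough, hypothetical sketch - not Nvidia's contest code; the filterbank kernel and the WINDOW and STEP values are our own illustrative assumptions rather than LAME internals - of offloading a filterbank-style inner loop of the sort an MP3 encoder spends much of its time in, timed with CUDA events since the contest is judged on measured run time.

#include <cuda_runtime.h>
#include <cstdio>

#define WINDOW 512   // assumed analysis window length (illustrative, not LAME's)
#define STEP    32   // assumed hop between output samples (illustrative)

// One thread per output sample: a windowed dot product over the input signal.
__global__ void filterbank(const float *in, const float *window,
                           float *out, int n_out)
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= n_out) return;
    float acc = 0.0f;
    for (int k = 0; k < WINDOW; ++k)
        acc += in[s * STEP + k] * window[k];
    out[s] = acc;
}

int main()
{
    const int n_out = 1 << 16;
    const int n_in  = n_out * STEP + WINDOW;
    float *d_in, *d_win, *d_out;
    cudaMalloc(&d_in,  n_in   * sizeof(float));
    cudaMalloc(&d_win, WINDOW * sizeof(float));
    cudaMalloc(&d_out, n_out  * sizeof(float));
    // ... in a real encoder, audio frames would be copied in with cudaMemcpy ...

    cudaEvent_t start, stop;          // GPU-side timers - the contest is judged
    cudaEventCreate(&start);          // on measured run time, not theory
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);
    filterbank<<<(n_out + 255) / 256, 256>>>(d_in, d_win, d_out, n_out);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("kernel time: %.3f ms\n", ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_in);
    cudaFree(d_win);
    cudaFree(d_out);
    return 0;
}

In a real entry, the host-to-GPU copy traffic and the rest of the encoder pipeline would matter just as much as the kernel itself.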

So the challenge in this contest is not to port a mainstream application to CUDA, but to optimize one - to squeeze as many gigaflops out of the GPU as possible. That may sound easier than it really is. Researchers at the University of Illinois' Beckman Institute and the National Center for Supercomputing Applications have told us before that getting an application to run on a GPGPU is the simple part; accelerating it is what takes up most of the time - and knowledge.
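To make that distinction concrete, here is one more illustrative sketch, building on the hypothetical filterbank kernel above (same assumed WINDOW and STEP constants): the computation is unchanged, but the reused window coefficients are staged once per block in the GPU's fast on-chip shared memory instead of being re-read from graphics DRAM on every pass - exactly the kind of memory tuning where, by the researchers' account, most of the effort and most of the speed live.

// Same computation as the filterbank kernel above, with one change: the window
// coefficients are loaded into on-chip shared memory once per block, instead of
// being fetched from device DRAM by every thread on every pass.
__global__ void filterbank_shared(const float *in, const float *window,
                                  float *out, int n_out)
{
    __shared__ float w[WINDOW];                     // visible to all threads in the block
    for (int k = threadIdx.x; k < WINDOW; k += blockDim.x)
        w[k] = window[k];                           // cooperative, one-time load
    __syncthreads();                                // wait until the window is staged

    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= n_out) return;
    float acc = 0.0f;
    for (int k = 0; k < WINDOW; ++k)
        acc += in[s * STEP + k] * w[k];             // coefficient reads now stay on-chip
    out[s] = acc;
}

On hardware of this class, trimming redundant trips to graphics memory in this way is typically where the large multiples come from; the arithmetic itself rarely changes.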

Those scientific GPGPU applications simulating fluid dynamics or biological processes are impressive to watch, but of course we are interested to see what these processors are capable of in mainstream applications. AMD previously demonstrated its stream processors in an application that captured a user’s hand through a webcam, rendered it in near real time and let it replace the mouse for moving objects around on the screen.

Optimizing an MP3 encoder falls well short of the sophistication of such an application, but it is a first step.

jk

  • cool...
  • randomizer
    Running Vista on GPGPU... sounds like fun :D
  • christian summer
    If you compare the power efficiency of GPGPU processing to that of normal Intel-based chips, you will also see many disadvantages. Sure, we all have a miniature supercomputer inside each graphics card; sure, we have super-high system bandwidth over the PCI Express 2.0 bus and extremely fast, high-capacity video RAM. But GPUs eat a hell of a lot more power under load than a general-purpose processor does.

    While it would be great to take advantage of the GPU horsepower, especially for FPU-intensive processing, I don't see the GPU completely replacing the processor anytime soon. I am an artist who does a lot of music and video, and it would be great to offload a lot of the processing, but when I am running Word or surfing the internet I don't need my computer eating quite as many watts as it does playing COD4.

    -c
  • mr roboto
    Nvidia needs to get their asses in gear and bring Folding@Home to their GPUs. ATI has had their GPUs ready for a while, yet Nvidia refuses to simply optimize their drivers for this. I guess they want people to buy supercomputers to accomplish this task. I love Nvidia's cards, but this really pisses me off. Assholes.
  • Horhe
    There is a lot of potential in multi-core processors that isn't being used, and they want to use the GPU, which is the most power-hungry component in a system. That's retarded. I hope that Larrabee will be a success so we can get rid of graphics cards. (I'm not an Intel fanboy, I just think that their approach is the most efficient.)
  • fransizzle
    Although I don't see the end of the CPU anytime in the near future, there are certain tasks that a GPU could, at least in theory, do much, much faster, and I personally can't wait for it to happen. Anything that can make my computer substantially faster with the hardware I already have is awesome by me. Nvidia needs to hurry up and get this out and working already.
  • dogman-x
    I think NVidia's approach is perfect. Certain things work better on CPUs, and certain things work better on GPUs. In particular, the hardware structures in GPUs and other accelerators vastly outperform multi-core CPUs for many math-intensive tasks, particularly for imaging, video, financial, geology, etc., while CPUs are still quite necessary for decision-based logic and control. So you need both types of processors to be effective. CUDA is a perfect development tool to enable this, and LAME is a perfect mainstream application that can benefit from acceleration.

    We're past the days when we could just raise the clock speed. New programming models are necessary. Homogeneous multi-core designs (e.g. Larrabee) will fall short; heterogeneous multi-core (many different types of cores) will dominate in the future. Although the bandwidth of the PCIe 2.0 bus is very capable, the latency of this bus will be an issue. The best designs will have all the different types of cores on the same chip. So while NVidia has a great development tool with CUDA, hardware designs along the lines of AMD's Fusion may be the way of the future.
  • JAYDEEJOHN
    Since the beginning we've had CPUs. Almost all programming has been aimed at CPUs since we've had transistors; that's our history. Given the opportunity, I believe we will see huge benefits from GPU processing. You read about a lot of these supercomputers with thousands of CPUs being replaced by handfuls of GPUs and still tripling their output. Running something like this is less expensive, costs less up front, and has higher potential than any CPU-based system. I think there's going to be more and more of a trend in this direction for supercomputing; the CPU's function is slowly being replaced there. Soon we will see it more and more in servers, and someday on the desktop. The GPU isn't dead. Intel says it is, while they invest billions in them. What a joke. They know what's going on here, and I'm not buying that the GPU is dead while they (Intel) invest all that money in them.
  • I like it. And my ass hurts. And it's hot outside. And ... why are you reading it, dork?