
Nvidia Does Accelerated Programming

Source: Tom's Hardware

Santa Clara (CA) - Back when we first saw the capabilities of Nvidia’s CUDA technology and Tesla acceleration cards, it was clear to us that the company had all the tools necessary to change the way we use computers today - the enormous computing horsepower of graphics cards opens up possibilities we have talked about for some time, but didn’t think were possible in the foreseeable future. The company is now, for the first time, challenging developers to exploit the hidden potential of graphics cards in a mainstream application.

Nvidia was the first to come up with a development framework that offers a relatively easy-to-learn way to accelerate traditional CPU-centric applications through a graphics processor. But while CUDA, which is based on C with a set of GPU extensions, is generally available, Nvidia has pitched the technology mainly to universities, scientists and industries with a need for floating-point-heavy applications - such as financial institutions and the oil and gas sector.
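
For readers who have never seen the programming model, here is a minimal sketch of what a CUDA-accelerated routine looks like: a kernel written in C with a few GPU extensions, launched across thousands of threads in place of a serial CPU loop. The kernel name scale_samples and the surrounding host code are our own illustration, not Nvidia sample code, and error checking is omitted.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Kernel: each GPU thread scales one audio sample, replacing a serial CPU loop.
__global__ void scale_samples(float *samples, float gain, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        samples[i] *= gain;
}

int main(void)
{
    const int n = 1 << 20;                 // about one million samples
    size_t bytes = n * sizeof(float);

    float *h_samples = (float *)malloc(bytes);
    for (int i = 0; i < n; i++)
        h_samples[i] = 1.0f;

    // Copy the data to the GPU, run the kernel, copy the result back.
    float *d_samples;
    cudaMalloc((void **)&d_samples, bytes);
    cudaMemcpy(d_samples, h_samples, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;   // enough blocks to cover all samples
    scale_samples<<<blocks, threads>>>(d_samples, 0.5f, n);

    cudaMemcpy(h_samples, d_samples, bytes, cudaMemcpyDeviceToHost);
    printf("first sample: %f\n", h_samples[0]);

    cudaFree(d_samples);
    free(h_samples);
    return 0;
}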

Both Nvidia and ATI have been showing mainstream applications based on GPGPU technologies, but neither one has targeted the mainstream application segment yet. When we asked Nvidia CEO Jen-Hsun Huang when CUDA would go into the mainstream market, he told us that such a move would depend on Microsoft and its efforts to provide a Windows interface for GPGPUs.

It appears that Nvidia is shifting away from its enterprise-only strategy and turning its focus to a mainstream opportunity as well. In a contest announced today, the company is looking for the "most talented CUDA programmers in the world". Nvidia will provide a "partially GPU-optimized version of an MP3 LAME encoder" and asks developers to "optimize [the software] to run as fast as possible on a CUDA-enabled GPU." The encoder has to be created in the CUDA programming environment and must achieve a speed-up in run time.
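
The announcement does not say how that speed-up will be measured. One plausible approach, sketched below with a hypothetical stand-in kernel (encode_stage) rather than any actual LAME code, is to time the GPU stage with CUDA events and compare the figure against the CPU baseline.

#include <cuda_runtime.h>
#include <stdio.h>

// Hypothetical stand-in for a GPU-side encode stage - not real MP3 encoding.
__global__ void encode_stage(const float *pcm, unsigned char *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = (unsigned char)(pcm[i] * 127.0f + 128.0f);
}

// Times one launch of the stage with CUDA events and returns milliseconds.
float time_gpu_stage(const float *d_pcm, unsigned char *d_out, int n)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;

    cudaEventRecord(start);
    encode_stage<<<blocks, threads>>>(d_pcm, d_out, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}

int main(void)
{
    const int n = 1 << 20;
    float *d_pcm;
    unsigned char *d_out;
    cudaMalloc((void **)&d_pcm, n * sizeof(float));
    cudaMalloc((void **)&d_out, n);
    cudaMemset(d_pcm, 0, n * sizeof(float));

    printf("GPU stage took %.3f ms\n", time_gpu_stage(d_pcm, d_out, n));

    cudaFree(d_pcm);
    cudaFree(d_out);
    return 0;
}

The same stage would then be timed on the CPU, and the ratio of the two numbers reported as the speed-up.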

So, the challenge in this contest is not to port a mainstream application to CUDA, but rather to optimize it to squeeze as many gigaflops out of the GPU as possible. That may sound easier than it really is: researchers at the University of Illinois’ Beckman Institute and the National Center for Supercomputing Applications told us earlier that getting an application to run on a GPGPU is the simple task, while accelerating it takes up most of the time - and knowledge.
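
The gap between getting code to run on a GPU and getting it to run fast is easy to illustrate. The two kernels below, our own example rather than anything from the contest material, compute the same array sum: the naive version serializes every thread on a single global atomic, while the second stages partial sums in fast on-chip shared memory and touches global memory only once per block.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// Naive version: correct, but every thread contends for one global atomic.
// Note: atomicAdd on floats needs a GPU with compute capability 2.0 or later.
__global__ void sum_naive(const float *x, float *result, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        atomicAdd(result, x[i]);
}

// Optimized version: threads cooperate through shared memory,
// so each 256-thread block issues only a single global atomic.
__global__ void sum_shared(const float *x, float *result, int n)
{
    __shared__ float partial[256];          // assumes blocks of 256 threads
    int tid = threadIdx.x;
    int i = blockIdx.x * blockDim.x + tid;

    partial[tid] = (i < n) ? x[i] : 0.0f;
    __syncthreads();

    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (tid < stride)
            partial[tid] += partial[tid + stride];
        __syncthreads();
    }

    if (tid == 0)
        atomicAdd(result, partial[0]);
}

int main(void)
{
    const int n = 1 << 20;
    float *d_x, *d_sum, h_sum;
    cudaMalloc((void **)&d_x, n * sizeof(float));
    cudaMalloc((void **)&d_sum, sizeof(float));

    // Fill the input with ones so the expected sum is simply n.
    float *h_x = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) h_x[i] = 1.0f;
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 256, blocks = (n + threads - 1) / threads;

    cudaMemset(d_sum, 0, sizeof(float));
    sum_naive<<<blocks, threads>>>(d_x, d_sum, n);
    cudaMemcpy(&h_sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);
    printf("naive sum:  %.0f\n", h_sum);

    cudaMemset(d_sum, 0, sizeof(float));
    sum_shared<<<blocks, threads>>>(d_x, d_sum, n);
    cudaMemcpy(&h_sum, d_sum, sizeof(float), cudaMemcpyDeviceToHost);
    printf("shared sum: %.0f\n", h_sum);

    cudaFree(d_x); cudaFree(d_sum); free(h_x);
    return 0;
}

Both kernels return the same result, but on real hardware the shared-memory version typically runs far faster - and that kind of restructuring is where the time and knowledge go.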

Scientific GPGPU applications simulating fluid dynamics or biological processes are impressive to watch, but of course we are interested to see what these processors are capable of in mainstream applications. AMD previously demonstrated its stream processors in an application that rendered a user’s hand, captured by a webcam, in near real-time and let it replace the mouse for moving objects around on a screen.

Optimizing an MP3 encoder is far from the sophistication of such an application, but it is a first step.

jk

14 Comments
This thread is closed for comments.
  • Anonymous, May 20, 2008 1:42 AM (+2)
    cool...
  • randomizer, May 20, 2008 1:54 AM (+2)
    Running Vista on GPGPU... sounds like fun :D 
  • christian summer, May 20, 2008 3:54 AM (0)
    if you compare the power efficiency of gpgpu processing to that of normal intel-based chips you will also see many disadvantages...sure, we all have a miniature supercomputer inside each graphics card...sure, we have super high system bandwidth over the PCIe (2.0) bus and extremely fast, high-capacity video ram...but the gpus eat a hell of a lot more power under load than a general purpose processor...

    while it would be great to take advantage of the gpu horsepower, especially in fpu-intensive processing, i don't see the gpu completely replacing the processor anytime soon...i am an artist who does a lot of music and video, and it would be great to offload a lot of the processing, but when i am running word or surfing the internet i don't need my computer eating as many watts as it does playing cod4...

    -c
  • mr roboto, May 20, 2008 5:07 AM (+2)
    Nvidia needs to get their asses in gear and bring Folding@Home to their GPUs. ATI has had their GPUs ready for a while, yet Nvidia refuses to simply optimize their drivers for this. I guess they want people to buy supercomputers to accomplish this task. I love Nvidia's cards but this really pisses me off. Assholes.
  • Horhe, May 20, 2008 5:09 AM (-2)
    There is a lot of potential in multi-core processors which isn't used, and they want to use the GPU, which is the most power-hungry component in a system. That's retarded. I hope that Larrabee will be a success so we could get rid of graphics cards. (I'm not an Intel fanboy, I just think that their approach is the most efficient)
  • fransizzle, May 20, 2008 6:59 AM (+3)
    Although I don't see the end of the CPU anytime in the near future, there are certain tasks that a GPU could, at least in theory, do much much faster and I personally can't wait for it to happen. Anything that can make my computer substantially faster with the hardware I already have is awesome by me. Nvidia needs to hurry up and get this out and working already.
  • dogman-x, May 20, 2008 12:52 PM (+1)
    I think NVidia's approach is perfect. Certain things work better on CPUs, and certain things work better on GPUs. In particular, the hardware structures in GPUs and other accelerators vastly outperform multi-core CPUs for many math-intensive tasks, particularly in imaging, video, finance, geology, etc., while CPUs are still quite necessary for decision-based logic and control. So you need both types of processors to be effective. CUDA is a perfect development tool to enable this, and LAME is a perfect mainstream application that can benefit from acceleration.

    We're past the days when we could just raise the clock speed. New programming models are necessary. Homogeneous multi-core designs (e.g. Larrabee) will fall short. Heterogeneous multi-core (many different types of cores) will dominate in the future. Although the bandwidth of the PCIe 2.0 bus is very capable, the latency of this bus will be an issue. The best designs will have all the different types of cores on the same chip. So while NVidia has a great development tool with CUDA, hardware designs along the lines of AMD's Fusion may be the way of the future.
  • JAYDEEJOHN, May 20, 2008 8:21 PM (+1)
    Since the beginning we've had CPUs. Almost all programming has been aimed at CPUs since we've had transistors. That's our history. Given the opportunity, I believe we will see huge benefits from GPU processing. You read about a lot of these supercomputers with thousands of CPUs in them being replaced by handfuls of GPUs and still tripling their output. Running something like this costs less up front, is less expensive to operate, and has higher potential than any CPU-based system. I think there's going to be more and more of a trend in this direction for supercomputing. The CPU's function is slowly being replaced there. Soon we will see it more and more in servers, and someday on the desktop. The GPU isn't dead. Intel says it is, while they invest billions in them. What a joke. They know what's going on here, but I'm not buying that the GPU is dead while they (Intel) invest all that money in them.
  • Anonymous, May 20, 2008 9:10 PM (-2)
    I like it. And my ass hurts. And its hot outside. And ... why are you reading it, dork?
  • Anonymous, May 21, 2008 9:45 AM (0)
    Wintel will never allow this to happen..
  • cryogenic, May 21, 2008 11:20 AM (0)
    Quote:
    Wintel will never allow this to happen..


    They can't do a damn thing about it if nVidia's programming model gets adopted and becomes a de facto standard before Intel has a chance to unveil its own model with Larrabee. Just like what happened with AMD64: by the time Intel wanted to implement its own 64-bit instruction set, the AMD one was already supported by Windows, Linux, Unix, Solaris and many more, and none of the software companies wanted to support yet another standard that is different but basically offers the same thing.

    nVidia is wise on this; it knows that it must push GPU computing into the mainstream before Intel has a chance to do it with Larrabee. Unfortunately, to succeed in doing so it will need support from software giants like Microsoft, Sun, Oracle, the Linux crowd and the like. I don't think that just providing a CUDA development environment will be enough; they might need OS support at the core (something which Intel will likely manage to obtain shortly after they release Larrabee).


  • techguy911, May 21, 2008 1:53 PM (0)
    CUDA is useless unless nvidia optimizes their drivers for its use; otherwise it's just a novelty.
  • wild9, May 21, 2008 6:27 PM (0)
    I think that from a design viewpoint such hardware would really show the advantages of AMD's architecture (HyperTransport especially). I just can't help feeling this technology is being held back for the same reason it would be if someone managed to get cars to run on water..all current technology would be dead in the water, with severe losses.

    Quote:
    There is a lot of potential in multi-core processors which isn't used, and they want to use the GPU, which is the most power-hungry component in a system. That's retarded.


    Some tasks require more number-crunching capability than those CPUs can muster, and it would take tens or even hundreds of them to even begin to match the capability of a few GPUs..imagine the power consumption, not to mention the footprint.

    I don't think everything can, or should, be ported - it's too complex and in some cases completely needless. I think you'll still have powerful CPUs, just that they'll act as bridges/interfaces rather than as the sole number-crunching device. The closest I ever saw to this 'transputer' type hardware was the Amiga range of computers, which had multi-tasking built into the hardware, and those systems were a joy to use. I'd like to see a similar thing happen on the PC, and would not mind buying a GPGPU chip (or several) to speed up my applications, but I don't think we'll see it just yet, not in mainstream use anyway. Too many conflicting interests here, most of which are of a commercial nature..