
CUDA-Enabled Apps: Measuring Mainstream GPU Performance


I like an eye-popping benchmark as well as the next guy. But at the end of the day, I’m a user. I use computers to do useful things. And on days when I have to give back the Tom’s Hardware Lear jet, and all the ski bunnies go back to their warrens, I have a modest computer with modest components and not much budget to spare for $500 upgrades. I need technology that’s going to help me do what I want more efficiently, whether it’s play games, edit video, or help model genetic sequences.

Nvidia's lineup of GPGPU solutions

Some applications are linear in nature and merely want to crank as quickly as possible on a single processing thread until the cows come home. Others are built to leverage parallelism. Everything from Unreal Engine 3 to Adobe Premiere has shown us the benefits of CPU-based multi-threading, but what if four, eight, or even 16 threads were just the beginning?

This is the promise behind Nvidia’s CUDA computing architecture, which, according to the company’s definition, can run thousands of threads simultaneously.
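To make that claim concrete, here is a minimal sketch of the data-parallel pattern CUDA is built around; the kernel, array size, and launch configuration below are hypothetical illustrations, not code from any application discussed in this article. The idea is simply that every element of a large array gets its own lightweight GPU thread, so a one-million-element array translates into roughly a million threads.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread scales exactly one array element.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // guard the partial last block
        data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;          // ~1 million elements -> ~1 million threads
    float *d_data = 0;
    cudaMalloc((void **)&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // 256 threads per block, enough blocks to cover every element.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scale<<<blocks, threadsPerBlock>>>(d_data, 2.0f, n);
    cudaDeviceSynchronize();        // wait for the GPU to finish

    printf("Launched %d blocks x %d threads = %d GPU threads\n",
           blocks, threadsPerBlock, blocks * threadsPerBlock);
    cudaFree(d_data);
    return 0;
}
```

The <<<blocks, threads>>> launch syntax is the CUDA-specific part; the rest is ordinary C.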

We've written about CUDA in the past, so hopefully you're no stranger to the technology (if you did miss our coverage, check out Nvidia's CUDA: The End of the CPU?). For better or worse, though, most CUDA coverage in the press has focused on high-end hardware, even though the supporting logic has been present in Nvidia GPUs since the dawn of the GeForce 8. When you consider the huge enterprise dollars wrapped up in the high-performance computing (HPC) and professional graphics workstation markets (targeted by Nvidia's Tesla and Quadro lines, respectively), it's no wonder this is where so much of Nvidia's marketing attention has been.

But in 2009, we finally see a change. CUDA has come to the masses. There's a huge install base of compatible desktop graphics cards, and the mainstream applications able to exploit that built-in CUDA support are hitting the market one after another.
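If you're curious whether your own card is part of that install base, the CUDA runtime can tell you. The sketch below is a generic device query (it assumes the CUDA toolkit and a current Nvidia driver are installed), not something taken from the applications we're testing; any device it lists can run CUDA code, and GeForce 8-class parts report compute capability 1.x.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        printf("No CUDA-capable GPU found.\n");
        return 1;
    }

    for (int dev = 0; dev < count; ++dev) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // Any device reported here can run CUDA code; GeForce 8/9-era
        // parts show up with compute capability 1.x.
        printf("Device %d: %s, compute capability %d.%d, %d multiprocessors\n",
               dev, prop.name, prop.major, prop.minor, prop.multiProcessorCount);
    }
    return 0;
}
```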

From Nothing To Now

The first consumer-friendly CUDA app was Folding@Home, a university distributed computing project out of Stanford in which each user can crunch a chunk of raw data about protein behavior so as to better understand (and hopefully cure) several of humanity’s worst diseases. The application transitioned to CUDA compatibility in the second half of 2008. Very shortly afterward came Badaboom, the video transcoder from Elemental Technologies that, according to Elemental, can transcode up to 18 times faster than a CPU-only implementation.

Then came a whole slew of media applications for CUDA: Adobe Creative Suite 4, TMPGEnc 4.0 XPress, CyberLink PowerDirector 7, MotionDSP vReveal, Loilo LoiLoScope, Nero Move it, and more. Mirror’s Edge looks to be the first AAA game title to fully leverage CUDA-based PhysX technology for increasing visual complexity, allegedly by 10x to 20x. Expect to see more titles emerge in this vein—a lot more. While AMD and its ATI Stream technology have been mired in setbacks, Nvidia has been hyping its finished and proven CUDA to everyone who will listen...and developers now seem to be taking the message to heart.

That's all well and good, but CUDA's incendiary capabilities have largely been proven on high-end GPUs. I'm on a tight budget. Friends are getting mowed down around me by lay-offs and wage-cuts like bubonic plague victims. You bet, I'd love to drop ten or twelve Benjamins on a 3-way graphics overhaul, but the reality is that, like many of you, I've only got one or two C-notes to spare. On a good day. So the question all of us who can't afford the graphics equivalent of a five-star ménage à trois should be asking is, "Does CUDA mean anything to me when all I can afford is a budget-friendly card for my existing system?"

Let’s find out. Today, we'll be looking at some of the most promising titles and measuring the speed-up garnered from a pair of mid-range GPUs.
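A quick note on terminology: "speed-up" here simply means the ratio of run times, CPU-only versus CUDA-assisted. The figures in the example below are hypothetical, just to show the arithmetic, and are not results from our testing.

$$\text{speed-up} = \frac{t_{\text{CPU-only}}}{t_{\text{CUDA}}}, \qquad \text{e.g. } \frac{60\ \text{min}}{12\ \text{min}} = 5\times$$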

Comments (56)
This thread is closed for comments.
  • SpadeM, May 18, 2009 7:04 AM (+1)
    The 8800GS, or under its new name the 9600GSO, goes for $60 and delivers 96 stream processors. Would it be correct to assume that it would perform between the 9600 GT and 9800 GTX you reviewed?

    Other than that, great article; I've been waiting for it since we got a sneak preview from Chris last week.
  • curnel_D, May 18, 2009 7:08 AM (+6)
    And I'll never take Nvidia marketing seriously until they either stop singing about CUDA being the holy grail of computing, or this changes: "Aside from Folding@home and SETI@home, every single application on Nvidia’s consumer CUDA list involves video editing and/or transcoding."
  • Anonymous, May 18, 2009 7:15 AM (0)
    As more software uses CUDA, we will see a great boost in performance not only for, e.g., video, but for parallel programming in general. This will skyrocket this business into a new age!
  • curnel_D, May 18, 2009 7:18 AM (+4)
    l0bd0n: As more software uses CUDA, we will see a great boost in performance not only for, e.g., video, but for parallel programming in general. This will skyrocket this business into a new age!

    Honestly, I don't think a proprietary language will do this. If anything, it's likely to be GPGPUs in general, driven by the Open Computing Language (OpenCL).
  • one-shot, May 18, 2009 7:23 AM (+4)
    Are we both thinking about the same "Pirates 2"? Or am I missing something...
  • IzzyCraft, May 18, 2009 7:35 AM (+2)
    Who knows? It's just a clip he used; he could be naming it anything for the hell of it.

    CUDA transcoding is very nice for someone who does H.264 transcoding at High Profile and lacks a $300+ CPU, someone who would otherwise spend hours transcoding a DVD at High Profile settings.

    Other than that, CUDA acceleration has been more of a feature than a main event, although it can easily be the main attraction for someone who does a steady flow of H.264 transcoding/encoding.

    Encoding/transcoding in H.264 High Profile can easily make someone who is very content with their CPU and its power become sad very quickly when they see the estimated time for a 30-minute clip.
  • Anonymous, May 18, 2009 7:38 AM (0)
    I'm using CoreAVC since support was added for CUDA h264 decoding. I kinda feel stupid for buying a high end CPU (at the time) since playing all videos, no matter the resolution or bit-rate, leaves the CPU at near-idle usage.
    Vid card: 8600GTS
    CPU: E6700
  • IzzyCraft, May 18, 2009 7:49 AM (0)
    Well, you lucked out, considering not all of the GeForce 8 series supports H.264 decoding, etc.
  • ohim, May 18, 2009 8:01 AM (+2)
    They should remove the Adobe CS4 suite from there, since CUDA transcoding is only possible with Nvidia CX video cards, not with normal gaming cards which support CUDA.
  • adbat, May 18, 2009 8:05 AM (-2)
    CUDA means "miracle" in my language :-) I hope it will do those.
    The sad thing is that ATI does not truly compete in the CUDA department, and there is no standard for it.
  • JeanLuc, May 18, 2009 8:26 AM (0)
    I was only really interested in the Badaboom benchmarks, and I was fairly impressed, but I seem to remember that the last time you guys did an article based on GPU-accelerated apps (CUDA vs. Stream), Badaboom suffered from output quality issues, something that hasn't been mentioned in this article. It's all very well for a 9800GTX to encode HD video content in half the time, but not if the final product is no good.
  • cangelini, May 18, 2009 8:56 AM (+1)
    Jean,

    Actually, I don't believe we've done a comparison between the two. However, I have read that comparison at other sites, and it's actually ATI's Stream app that has the quality issues. Version two of the software is on the way, and it purportedly fixes the quality issues (though it still isn't demonstrating much GPU scaling, from what I've seen thus far).
  • ohim, May 18, 2009 9:17 AM (-1)
    cangelini: Jean, Actually, I don't believe we've done a comparison between the two. However, I have read that comparison at other sites, and it's actually ATI's Stream app that has the quality issues. Version two of the software is on the way, and it purportedly fixes the quality issues (though it still isn't demonstrating much GPU scaling, from what I've seen thus far).
    Yeah, but choose your words carefully, since readers could be misled on this one :) The quality of the transcoding is related to the application used, not to the computing technology, like CUDA or Stream.
  • Anonymous, May 18, 2009 9:27 AM (0)
    Cangelini, Badaboom definitely has lower quality output compared to the newest x264 builds. I'd definitely like to take advantage of my 9600 GT, but not unless I can use it with Handbrake or some other app on my own terms (NOT BASELINE OR MAIN PROFILE.)
  • stlunatic, May 18, 2009 11:46 AM (-8)
    I can haz chezberger?

    ATI

    CUDA

    CONA
  • randomizer, May 18, 2009 12:34 PM (+1)
    SpadeM: The 8800GS, or under its new name the 9600GSO, goes for $60 and delivers 96 stream processors.

    The 9600GSO has 2 versions (ignoring VRAM variations), one with only 48 SPs (essentially a castrated G94, not G92).
  • Anonymous, May 18, 2009 1:12 PM (0)
    There is a plugin for people who do audio engineering/recording/mixing/mastering from this guy:

    http://www.nilsschneider.de

    It runs on CUDA but, TBH, it hasn't manifested itself as anything special just yet; it's more a "proof of concept". And speaking as someone who's been doing that kind of thing for years, any quad-core ever made is good enough for real-time audio work, so there's not much point in CUDA acceleration.
  • jgoette, May 18, 2009 2:31 PM (-5)
    Measuing? Do you not even have spellcheck now?
  • Anonymous, May 18, 2009 2:38 PM (0)
    I enjoyed the article, and just like in the dual-core versus quad-core debate, there remain few applications that can fully exploit CUDA.

    By the way, I have a quick correction. The author writes, "...that can leverage parallelism in a way that jives with CUDA's architecture." The correct word is "jibe," not "jive."
  • 1raflo, May 18, 2009 2:51 PM (-2)
    CUDA is mostly about hype. Nothing else, really.