Nvidia's CUDA: The End of the CPU?

In Practice

Once you’ve pored over Nvidia’s documentation, it’s hard to resist getting your hands a little dirty. After all, what better way is there of judging an API than trying to write a little program using it? That’s where most of the problems come to the surface, even if everything looks perfect on paper. It’s also the best way to see if you’ve assimilated all the concepts described in the CUDA documentation.

And it’s actually quite easy to dive into such a project, since plenty of high-quality free tools are available. For this test we used Visual C++ 2005 Express Edition, which had everything we needed. The hardest part was finding a program simple enough to port to the GPU without spending weeks on it, yet interesting enough to make the adventure worthwhile. We ended up choosing a code snippet we already had that takes a height map and calculates the corresponding normal map. We won’t go into the details of the function, which isn’t of much interest in itself at this point. In short, it’s a convolution: for each pixel of the source image, a small matrix of its neighbors determines the color of the corresponding pixel in the output image, via a more or less complicated formula. The advantage of this function is that each output pixel can be computed independently of the others, so it’s very easily parallelizable, making it an ideal test of what CUDA is capable of.
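The article doesn’t reproduce the code, but a minimal sketch of such a kernel might look like the following. The kernel name, the 16×16 block size, and the simple central-difference formula are our own assumptions for illustration, not the actual implementation used in the test:

```cpp
// Hypothetical sketch: one thread per pixel computes a normal from the
// height map by taking central differences of the neighboring heights.
__global__ void heightToNormal(const float* height, float3* normal,
                               int width, int heightPx, float scale)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= heightPx)
        return;

    // Clamp neighbor coordinates at the image border.
    int xm = max(x - 1, 0), xp = min(x + 1, width - 1);
    int ym = max(y - 1, 0), yp = min(y + 1, heightPx - 1);

    // Central differences give the slope in x and y.
    float dx = (height[y * width + xp] - height[y * width + xm]) * scale;
    float dy = (height[yp * width + x] - height[ym * width + x]) * scale;

    // The surface normal is (-dx, -dy, 1), normalized.
    float invLen = rsqrtf(dx * dx + dy * dy + 1.0f);
    normal[y * width + x] = make_float3(-dx * invLen, -dy * invLen, invLen);
}

// Host-side launch, assuming device buffers d_height and d_normal:
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (heightPx + 15) / 16);
//   heightToNormal<<<grid, block>>>(d_height, d_normal, width, heightPx, 1.0f);
```

Each thread maps to exactly one output pixel, which is what makes this kind of convolution such a natural fit for the CUDA execution model.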

The other advantage is that we already had a CPU implementation we could easily compare the result of our CUDA version with – which avoided, as programmers say, having to reinvent the wheel. (When a programmer uses that expression, it means that the time saved can be spent more productively in exhaustive testing of a recent FPS game or close observation of an athletic contest via the medium of HDTV – and we’re no exception.)
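In practice, that comparison can be as simple as running both versions on the same height map and checking the outputs element by element within a small tolerance. A sketch of such a check, with names of our own choosing, might look like this:

```cpp
// Hypothetical check: compare the GPU output against the CPU reference.
// Floating-point results won't match bit for bit, so use a small tolerance.
#include <cmath>
#include <cstdio>

bool resultsMatch(const float* cpu, const float* gpu, int count, float eps = 1e-4f)
{
    for (int i = 0; i < count; ++i) {
        if (std::fabs(cpu[i] - gpu[i]) > eps) {
            std::printf("Mismatch at %d: CPU=%f GPU=%f\n", i, cpu[i], gpu[i]);
            return false;
        }
    }
    return true;
}
```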

We should repeat that the purpose of this test was to familiarize ourselves with the tools in the CUDA SDK, not to run a rigorous benchmark of a CPU version against a GPU version. Since this was our first attempt at a CUDA program, we didn’t have high expectations for its performance. And since this wasn’t a critical piece of code, the CPU version wasn’t all that optimized either, so a direct comparison of the timings wouldn’t really mean much.
