
Nvidia's CUDA: The End of the CPU?

The advent of GPGPU

The idea of using graphics accelerators for mathematical calculation is not recent. The first traces of it go back to the 1990s. Initially it was very primitive – limited mostly to the use of certain hard-wired functions like rasterization and Z-buffers to accelerate tasks like pathfinding or drawing Voronoi diagrams.

In 2003, with the appearance of highly programmable shaders, a new stage was reached – this time performing matrix calculations on the graphics hardware of the day. That was the year when an entire section of SIGGRAPH (“Computations on GPUs”) was dedicated to this new fringe area of computing. The early initiative took on the name GPGPU (for General-Purpose computation on GPUs). One early turning point in this area was the appearance of BrookGPU.

To really understand the role of Brook, you need to see how things were done before it appeared. In 2003, the only way to get access to the GPU’s resources was through one of the two graphics APIs – Direct3D or OpenGL. Consequently, researchers who wanted to harness the GPU’s processing power had to work with these APIs. The problem was that those individuals weren’t necessarily experts in graphics programming, which seriously complicated access to the technology. Where 3D programmers talk in terms of shaders, textures, and fragments, specialists in parallel programming talk about streams, kernels, scatter, and gather. So the first difficulty was to find analogies between two distinct worlds:

  • a stream – that is, a flow of elements of the same type – can be represented on the GPU by a texture. To give you an idea of this, consider that the equivalent in classic programming languages is simply an array.
  • a kernel – the function that will be applied independently to each element of the stream – is the equivalent of a pixel shader. Conceptually, it can be seen as an internal loop in a classic program – the one that will be applied to the largest number of elements.
  • to read the results of applying a kernel to a stream, the output has to be rendered to a texture. There’s no equivalent on a CPU, which has full access to memory.
  • to control the location where a memory write is to take place (in a scatter operation), it has to be done in a vertex shader, since a pixel shader can’t modify the coordinates of the pixel currently being processed.

  • CUDA software enables GPUs to do tasks normally reserved for CPUs. We look at how it works and its real and potential performance advantages.

  • pulasky
    CRAP "TECH"
  • Well, if the technology was used just to play games then yes, it would be crap tech – spending billions just so we can play Quake doesn't make much sense ;)
  • MTLance
    Wow a gaming GFX into a serious work horse LMAO.
  • dariushro
    The best thing that could happen is for M$ to release an API similar to DirectX for developers. That way both ATI and Nvidia can support the API.
  • dmuir
    And no mention of OpenCL? I guess there's not a lot of details about it yet, but I find it surprising that you look to M$ for a unified API (who have no plans to do so that we know of), when Apple has already announced that they'll be releasing one next year. (unless I've totally misunderstood things...)
  • neodude007
    I'm not gonna bother reading this article, I just thought the title was funny, seeing as Nvidia claims CUDA in NO way replaces the CPU and that is simply not their goal.
  • LazyGarfield
    I'd like it better if DirectX weren't used.

    Anyway, NV wants to sell CUDA, so why would they change to DX? ;-)
  • I think the best way to go for MS is to announce support for OpenCL like Apple. That way it will make things a lot easier for the developers, and it makes MS look good to support the open standard.
  • Shadow703793
    Quoting Mr Roboto: "Very interesting. I'm anxiously awaiting the RapiHD video encoder. Everyone knows how long it takes to encode a standard-definition video, let alone an HD or multiple HD videos. If a 10x speedup can materialize from the CUDA API, let's just say it's more than welcome. I understand from the launch of the GTX280 and GTX260 that Nvidia has a broader outlook for the use of these GPUs. However, I don't buy it fully, especially when they cost so much to manufacture and use so much power. The GTX 280 has been reported as using upwards of 300W. That doesn't translate to that much money in electrical bills over the span of a year, but nevertheless it's still moving backwards. Also, don't expect the GTX series to come down in price anytime soon. The 8800GTX and its 384-bit bus is a prime example of how much these devices cost to make. Unless CUDA becomes standardized, it's just another niche product fighting against other niche products from ATI and Intel. On the other hand, I was reading on AnandTech that Nvidia is sticking 4 of these cards (each with 4GB RAM) in a 1U form factor, using CUDA to create ultra-cheap supercomputers. For the scientific community this may be just what they're looking for. Maybe I was misled into believing that these cards were for gaming and anything else would be an added benefit. With the price and power consumption, this makes much more sense now."

    Agreed. Also, I predict in a few years we will have a Linux distro that will run mostly on a GPU.