Nvidia's CUDA: The End of the CPU?

BrookGPU

As you can see, even with those analogies in mind, the task is no simple one, and that’s where Brook came in. Brook was a set of extensions to the C language – "C with streams," as its creators at Stanford presented it. Concretely, Brook set out to encapsulate all the 3D API management and expose the GPU as a coprocessor for parallel calculations. To do this, Brook consisted of a compiler, which took a .br file containing C++ code plus the extensions and generated standard C++ code, which was then linked against a run-time library offering several backends (DirectX, OpenGL ARB, OpenGL NV3x, x86).
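
To give a concrete idea of what "C with streams" looked like, here is a minimal sketch in the style of Brook's classic saxpy example (the syntax is reproduced from memory of Brook's documentation, so details may differ from a given release). The kernel runs once per stream element, and streamRead()/streamWrite() move data between ordinary arrays and streams that live in GPU memory:

    // Kernel: the Brook compiler turns this into a shader for the
    // selected backend (DirectX, OpenGL or the x86 CPU fallback).
    kernel void saxpy(float a, float4 x<>, float4 y<>, out float4 result<>) {
        result = a * x + y;
    }

    int main(void) {
        float4 X[100], Y[100], R[100];   // ordinary arrays in CPU memory
        float4 x<100>, y<100>, r<100>;   // streams, stored on the GPU

        /* ... fill X and Y ... */
        streamRead(x, X);                // copy inputs into the streams
        streamRead(y, Y);
        saxpy(2.0f, x, y, r);            // the run-time dispatches the kernel to the GPU
        streamWrite(r, R);               // copy the result back to CPU memory
        return 0;
    }

All the texture setup, render-target handling and shader compilation that would otherwise be required stay hidden inside the run-time library, which is exactly the encapsulation described above.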

Brook had several merits, the first of which was to bring GPGPU out of the shadows and make it known to the “general public.” Indeed, when the project was announced, several IT websites reported on the arrival of Brook, sometimes oversimplifying to the point of caricature: “The CPU is dead! GPUs are much more powerful and may soon replace them.” Five years later, that still hasn’t happened, and let’s be clear about this: it never will. On the other hand, looking at the successive changes in CPUs, which are orienting more and more toward parallelism (more cores, simultaneous multithreading, wider SIMD units), and in GPUs, which are conversely moving toward greater and greater flexibility (support for single-precision floating-point calculation, integer calculation and soon double-precision calculation), it seems obvious that the two are bound to meet eventually. So what’ll happen then? Will GPUs be absorbed by CPUs, the way math coprocessors were? Possibly. Intel and AMD are both working on projects of this type. But in the meantime, a lot can still change.

But let’s get back to our topic. While Brook’s initial merit was to popularize the concept of GPGPU, the API wasn’t limited to a PR role. It also greatly simplified access to the GPU’s resources, which enabled many more people to start learning the new programming model. Still, despite all its qualities, Brook had a long way to go before it could make GPUs credible computing units.

One of the problems stemmed from the different layers of abstraction, and in particular from the overhead generated by the 3D API, which could be considerable. But the real problem, one over which Brook’s developers had no control, was compatibility. GPU manufacturers optimize their drivers regularly, especially given the heavy competition between them. While these optimizations are (most of the time) a good thing for gamers, they could break Brook’s compatibility overnight. That made it hard to imagine using the API in industrial-quality code intended for deployment, and so for a long time Brook remained the province of curious researchers and programmers.
