Nvidia's CUDA: The End of the CPU?
Conclusion
Nvidia introduced CUDA with the release of the GeForce 8800. At the time, the promises it was making were extremely seductive, but we kept our enthusiasm in check. After all, wasn't this likely to be just a way of staking out territory and riding the GPGPU wave? With no SDK available, you couldn't blame us for suspecting it was all a marketing operation and that nothing really concrete would come of it. It wouldn't be the first time a good initiative was announced too early and never saw the light of day for lack of resources, especially in such a competitive sector. Now, a year and a half after the announcement, we can say that Nvidia has kept its word.
Not only was the SDK available quickly, in a beta version in early 2007, but it has also been updated frequently, proving the importance of this project for Nvidia. Today CUDA has developed nicely: the SDK is available as a 2.0 beta for the major operating systems (Windows XP and Vista, and Linux, with version 1.1 for Mac OS X), and Nvidia devotes an entire section of its developer site to it.
On a more personal level, the impression we got from our first steps with CUDA was extremely positive. Even if you're familiar with the GPU's architecture, it's natural to be apprehensive about programming it, and while the API looks clear at first glance, you can't help thinking it won't be easy to get convincing results out of the architecture. Won't the gain in processing time be siphoned off by the multiple CPU-GPU transfers? And how do you make good use of those thousands of threads with almost no synchronization primitives? We started our experimentation with all these uncertainties in mind, but they soon evaporated when the first version of our algorithm, trivial as it was, already proved to be significantly faster than the CPU implementation.
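To give an idea of the programming model, here is a minimal sketch of the typical CUDA workflow: allocate memory on the card, copy the input across the bus, launch a kernel across thousands of threads, then copy the result back. This is an illustrative example only, not the algorithm we benchmarked for this article.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Kernel: each thread scales one element of the array.
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                      // guard the last, partially filled block
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1 << 20;                  // one million elements
        const size_t bytes = n * sizeof(float);

        float *h_data = (float *)malloc(bytes); // host-side input
        for (int i = 0; i < n; ++i)
            h_data[i] = (float)i;

        float *d_data;                          // device-side copy
        cudaMalloc((void **)&d_data, bytes);
        cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

        // 256 threads per block, enough blocks to cover all n elements.
        int threads = 256;
        int blocks = (n + threads - 1) / threads;
        scale<<<blocks, threads>>>(d_data, 2.0f, n);

        // Copy the result back; this call waits for the kernel to finish.
        cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
        printf("h_data[1000] = %f\n", h_data[1000]);  // expect 2000.0

        cudaFree(d_data);
        free(h_data);
        return 0;
    }

The two cudaMemcpy calls bracketing the launch are precisely the CPU-GPU transfers we were worried about: if the kernel does too little work per element, they will eat most of the speedup.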
So, CUDA is not a gimmick intended for researchers who want to cajole their university into buying them a GeForce. CUDA is genuinely usable by any programmer who knows C, provided he or she is ready to make a small investment of time and effort to adapt to this new programming paradigm. That effort won’t be wasted provided your algorithms lend themselves to parallelization. We should also tip our hat to Nvidia for providing ample, quality documentation to answer all the questions of beginning programmers.
Comments
Well, if the technology was used just to play games, then yes, it would be crap tech; spending billions just so we can play Quake doesn't make much sense ;)
dariushro: The best thing that could happen is for M$ to release an API similar to DirectX for developers. That way both ATI and Nvidia can support the API.
dmuir: And no mention of OpenCL? I guess there aren't a lot of details about it yet, but I find it surprising that you look to M$ for a unified API (they have no plans to do so that we know of), when Apple has already announced that it will be releasing one next year. (Unless I've totally misunderstood things...)
neodude007: I'm not going to bother reading this article; I just thought the title was funny, seeing as how Nvidia claims CUDA in NO way replaces the CPU and that is simply not their goal.
LazyGarfield: I'd like it better if DirectX weren't used. Anyway, NV wants to sell CUDA, so why would they change to DX? ;-)
I think the best way to go for MS is to announce support for OpenCL, like Apple. That would make things a lot easier for developers, and it would make MS look good for supporting the open standard.
Shadow703793, quoting Mr Roboto: "Very interesting. I'm anxiously awaiting the RapiHD video encoder. Everyone knows how long it takes to encode a standard-definition video, let alone an HD or multiple HD videos. If a 10x speedup can materialize from the CUDA API, let's just say it's more than welcome. I understand from the launch of the GTX 280 and GTX 260 that Nvidia has a broader outlook for the use of these GPUs. However, I don't buy it fully, especially when they cost so much to manufacture and use so much power. The GTX 280 has been reported as using upwards of 300 W. That doesn't translate to that much money in electricity bills over a year, but nevertheless it's still moving backwards. Also, don't expect the GTX series to come down in price anytime soon. The 8800 GTX and its 384-bit bus are a prime example of how much these devices cost to make. Unless CUDA becomes standardized, it's just another niche product fighting against other niche products from ATI and Intel. On the other hand, I was reading on AnandTech that Nvidia is sticking four of these cards (each with 4 GB of RAM) in a 1U form factor, using CUDA to create ultra-cheap supercomputers. For the scientific community this may be just what they're looking for. Maybe I was misled into believing that these cards were for gaming and anything else would be an added benefit. With the price and power consumption, this makes much more sense now." Agreed. Also, I predict in a few years we will have a Linux distro that will run mostly on a GPU.