So what does CUDA need in order to become the API to reckon with? In a word: portability. We know the future of IT lies in parallel computing – everyone is preparing for the change, and initiatives in both software and hardware are heading in that direction. In terms of development paradigms, though, we're still in prehistory: creating threads by hand and carefully planning access to shared resources is manageable today, when the number of processor cores can be counted on the fingers of one hand, but in a few years, when cores number in the hundreds, it won't be. With CUDA, Nvidia is proposing a first step toward solving this problem – but the solution is reserved for its own GPUs, and not even all of them: only the GeForce 8 and 9 series (and their Quadro/Tesla derivatives) can currently run CUDA programs.
Nvidia may boast that it has sold 70 million CUDA-compatible GPUs worldwide, but that's still not enough to impose CUDA as the de facto standard. All the more so since its competitors aren't standing idly by: AMD offers its own SDK (Stream Computing), and Intel has announced a solution of its own (Ct), though it isn't available yet. So the war is on, and there won't be room for three competitors – unless another player, say Microsoft, were to step in and sweep the board with a common API, which developers would certainly welcome.
So Nvidia still has plenty of challenges to meet to make CUDA stick. Technologically it's undeniably a success, but the task now is to convince developers that it's a credible platform – and that doesn't look easy. Still, judging by the steady stream of recent announcements around the API, the outlook is far from bleak.
See our review of Nvidia’s GT200 GPUs for more on CUDA.