Nvidia's CUDA is Already 5 Years Old
The platform, which enables developers to exploit Nvidia GPUs (as well as x86 CPUs) for general-purpose GPU computing, was introduced on November 15, 2006 with the GeForce 8 series. Since then, Nvidia claims to have sold more than 350 million CUDA-enabled GPUs. The CUDA toolkit has been downloaded more than 1 million times, and more than 500 universities around the globe now teach CUDA classes.
CUDA was, from the very beginning, designed to drive GPUs into high-performance computing applications in military, academic and industrial environments. While it was somewhat slow to start, Nvidia has been successful: three of the five fastest supercomputers in the world now integrate Tesla acceleration cards, the primary delivery vehicle for CUDA-based accelerators. CUDA, whose applications are written in C/C++-like code with specific extensions, was the first generally available high-level language for easily accessing the processing horsepower in widely available and relatively affordable GPUs.
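To make the "C-like code with specific extensions" concrete, here is a minimal, hypothetical vector-addition sketch; the kernel name and sizes are illustrative, but the __global__ qualifier, the built-in thread indices and the <<<...>>> launch syntax are the CUDA-specific additions the article alludes to.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// The __global__ qualifier and the threadIdx/blockIdx built-ins are the
// "specific extensions" layered on otherwise ordinary C/C++ code.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // The <<<blocks, threads>>> launch syntax is another CUDA-specific extension.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}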
CUDA, which is still positioned against open high-level platforms, especially OpenCL, survived a looming battle with Intel's canceled Larrabee graphics card and floating-point accelerator, but it has frequently been criticized for not being as easy to deploy as Nvidia claims. For example, while basic access to the GPU via CUDA is considered relatively easy, the remaining 5 to 10 percent of performance hidden in a GPU can only be unlocked with detailed knowledge of the GPU's architecture, especially its memory architecture.
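As a rough illustration of the kind of memory-architecture knowledge involved, the hypothetical kernel below stages a matrix transpose through on-chip shared memory so that both reads and writes to global memory stay coalesced; the tile size and padding trick are assumptions drawn from common CUDA practice, not from the article.

// Hypothetical tiled transpose: __shared__ staging turns strided global-memory
// writes into coalesced ones, the sort of memory-architecture detail the
// article says is needed to reach the last few percent of performance.
#define TILE 32

__global__ void transposeTiled(const float *in, float *out, int width, int height)
{
    __shared__ float tile[TILE][TILE + 1];  // +1 column of padding avoids shared-memory bank conflicts

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;
    if (x < width && y < height)
        tile[threadIdx.y][threadIdx.x] = in[y * width + x];  // coalesced read

    __syncthreads();

    // Swap the block indices so that writes to 'out' are also coalesced.
    x = blockIdx.y * TILE + threadIdx.x;
    y = blockIdx.x * TILE + threadIdx.y;
    if (x < height && y < width)
        out[y * height + x] = tile[threadIdx.x][threadIdx.y];  // coalesced write
}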
In June of this year, Nvidia rolled out, with a delay of more than two years, an x86 CUDA compiler that runs CUDA code on multi-core Intel and AMD processors.
From Nvidia:
Adobe, Microsoft, and others use CUDA in several of their apps. CUDA also comes in handy in game and graphics development. The benefits also go beyond that. Just because "games" and such don't take advantage of CUDA doesn't mean it hasn't helped...
I didn't say that it was useless.
I said that it hasn't improved much (hasn't gotten better).
They haven't personally wronged you, Alidan. Quit your crying.
There already are several open solutions. OpenCL, for one, is coming along nicely. But that requires the hardware manufacturers to meet THEIR specs. With CUDA, the software meets Nvidia's specs.
There are simply things OpenCL cannot do that CUDA can. And OpenCL is excellent, of course, but AMD cards and Nvidia cards are simply built differently. It's apples and oranges when it comes to GPU compute performance. For gaming it would be apples to apples, but raw computation is a different monster...
If you've ever used iray (mental images) or V-Ray RT for GPU rendering, you'd know the difference. Since you haven't, that gives you a point to research.
Since when is 10% a big deal?
Shouldn't that be quiet instead of quite?
Amen to that! CUDA has been a dream for us video editors using Premiere! Not needing to render much (if at all) has enabled my CUDA cards to pay for themselves over and over.
I'm pretty certain that PhysX uses CUDA, so in reality, games do take advantage of it.