
NVIDIA's CUDA turns GPUs into high-powered CPUs

May 25, 2007 10:38:21 AM

Here.
Sounds pretty interesting. If this turns out to be as big as they say it will, will AMD be in even more trouble?
May 25, 2007 12:16:19 PM

It's hard to imagine AMD being in even more trouble than it is now! 8)

CUDA is pretty neat (not as nice as if it had a Hemi in it, though), but I don't know if getting it to run the full gamut of Windows software is gonna be easy!
May 25, 2007 12:51:24 PM

This isn't really new, but it'll be interesting to see if CUDA is as great as Nvidia claims. The article said:
Quote:
Folding@Home already runs on NVIDIA and ATI chips
What Nvidia cards currently run Folding@Home?
May 25, 2007 8:43:22 PM

Quote:
Sounds pretty interesting. If this turns out to be as big as they say it will, will AMD be in even more trouble?


...as big as they say it will...
I didn't notice anyone saying it would achieve big volumes, just big (MIPS/folding/...) benchies (and even that was implied rather than directly stated). Big benchies may well translate pretty easily into big mindshare, but that doesn't necessarily result in big sales. As far as I can see, people who already have powerful rigs fold, etc., because they can, but few people buy more powerful rigs just to get big folding numbers.

The extent to which I can see this causing trouble for AMD is the extent to which it causes trouble for the graphics-chip-company-that-used-to-be ATI.

There are a number of specialist fields, basically scientific simulation - weather, SPICE, CFD, combustion simulation (and folding) - where this or the Cell processor approach potentially offers big gains. But that potential is only realised if it is easy enough to access. That's where this has real worth: if the libraries make it easy enough to compile in a whole load more speed (see the sketch below). The trouble is that may only end up selling a few thousand more cards, globally.
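For what it's worth, that is roughly the route NVIDIA is pushing with the CUBLAS library that ships with CUDA: you keep your existing program and swap the matrix-math calls for GPU-backed ones. Here's a rough sketch of what that looks like (going from memory of the CUDA 1.0 CUBLAS interface, so treat the exact signatures as approximate, not gospel):

Code:
#include "cublas.h"

/* C = A * B for n x n single-precision matrices, computed on the GPU.
   h_A, h_B, h_C are ordinary host arrays in column-major (Fortran) order. */
void gemm_on_gpu(int n, const float *h_A, const float *h_B, float *h_C)
{
    float *d_A, *d_B, *d_C;

    cublasInit();                                      /* start the library  */
    cublasAlloc(n * n, sizeof(float), (void **)&d_A);  /* GPU-side buffers   */
    cublasAlloc(n * n, sizeof(float), (void **)&d_B);
    cublasAlloc(n * n, sizeof(float), (void **)&d_C);

    cublasSetMatrix(n, n, sizeof(float), h_A, n, d_A, n);   /* copy host -> GPU */
    cublasSetMatrix(n, n, sizeof(float), h_B, n, d_B, n);

    /* Same calling style as a plain BLAS sgemm, but it runs on the card. */
    cublasSgemm('n', 'n', n, n, n, 1.0f, d_A, n, d_B, n, 0.0f, d_C, n);

    cublasGetMatrix(n, n, sizeof(float), d_C, n, h_C, n);   /* copy result back */

    cublasFree(d_A); cublasFree(d_B); cublasFree(d_C);
    cublasShutdown();
}

If the accelerated libraries stay that close to the BLAS/LAPACK interfaces people already call, the "compile in more speed" scenario is at least plausible; if every code has to be rewritten by hand, it probably stays a niche.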

On the other hand, if you are faced with the choice between two graphics cards, both of which give similar and adequate frame rates in today's games, yet one gives a big kick to your folding scores, which are you going to choose? And you'll probably do that even if you don't immediately want to take up folding as a pastime, because something might use it, even if it's not folding.

But that will only sway the decision when it is close. If one card gives twice the frame rates of the other in today's games (and you won't know the rates in tomorrow's, even though that's what you really want to know), you'll go with the faster one.
May 26, 2007 1:35:43 PM

http://www.extremetech.com/article2/0,1697,2136264,00.asp

Quote:
Several customers have already shifted their projects over to Nvidia's GPGPU architecture, Keane said. Some examples:


Massachusetts General Hospital: Previously touted as an Nvidia success story, the hospital uses the G80 to perform digital tomosynthesis, or continuous low-power X-rays that can be combined to form a real-time image. The hospital saw a 100X improvement over its old solution, a 35-node workstation.

Acceleware electromagnetic field simulation: Essentially a repeated calculation of Maxwell's Law, measuring the field strength and direction of an electromagnetic field. One use? Cell-phone irradiation. Improvement: 5X per GPU, versus a normal CPU.

Headwave visualization application: Headwave is used to visualize terabytes of data for analysis.

NAMD molecular dynamics: Computational biology, which is seeing 705 Gflops of computing power using three GPUs running in a 700-watt test box. The program claims a 100X improvement over its previous solution.

Evolved Machines sensory simulation: The software is being used by one scientist to develop models of neurons characterized as circuit designs. The idea is to one day simulate senses, such as smell, Keane said.

MATLAB: A common software application for modeling mathematics in code, the software sees a 10X speedup when making calls to the GPU, Keane said.

This is Nvidia PR, so take it as such.
May 26, 2007 1:53:02 PM

I'm not sure if they really mean running general-purpose x86 applications on it.
But even if that were possible (which I doubt), GPUs are massively parallel architectures with relatively low clock speeds, while CPUs are mostly sequential machines with high clock speeds.
Typical code executed by CPUs is mostly scalar/sequential with a limited degree of parallelism, so most of the GPU's execution units would sit idle and its clock speed would become the main performance bottleneck.
GPUs would also perform poorly on heavily branching code.
GPUs can be used as number-crunching vector co-processors, but they can't replace CPUs for general-purpose applications.
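To put it concretely, the kind of code a GPU is good at looks like the sketch below: thousands of lightweight threads each doing the same small piece of arithmetic on one element of a big array, with the CPU still in charge of setting everything up. This is just an illustrative example in CUDA's C-style syntax that I've knocked together, not anything from NVIDIA's docs:

Code:
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Each thread computes one element: y[i] = a*x[i] + y[i].
   Massively parallel, no branching to speak of -- ideal GPU work. */
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    const int n = 1 << 20;              /* one million elements */
    size_t bytes = n * sizeof(float);

    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    /* The CPU still drives everything: allocate, copy, launch, copy back. */
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, dx, dy);   /* 4096 blocks of 256 threads */

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", hy[0]);       /* expect 5.0 */

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}

Scalar, branch-heavy code like a word processor or an OS kernel simply doesn't decompose into that shape, which is why the "high-powered CPU" framing in the headline is misleading.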
May 26, 2007 2:51:19 PM

Quote:
Here.
Sounds pretty interesting. If this turns out to be as big as they say it will, will AMD be in even more trouble?

nVidia had Gelato long before this, I think; it's a GPU-accelerated 3D renderer that is supposed to tremendously accelerate render times. I have tried it and didn't even feel those speedups, and the render quality and options suck :roll:
I think this will only come true when Fusion comes into play, and even then it will take several long, painful years for programmers to catch up. As an example, SSE2 instructions were introduced back in 2001 with the first Pentium 4s, but only in the last two years have they gotten strong (not sporadic) support from the market. SSE3 first appeared in 2003 and very little software today is optimized for it. Now, just imagine the lag in software development when dealing with totally new instruction sets and different CPU/GPU communication protocols. As you can see, developers are pretty lazy about adopting even minor x86 improvements; I wouldn't bet on ALL (or even most) of them embracing something like this!