Opinions: GPUs going to a lower node than CPUs

Soon, in 1H '09, we will see GPUs at the 40nm node, surpassing the older 45nm CPU node. Thoughts or comments welcome. It's been known for a while now that GPUs have been outpacing their CPU counterparts, and in a few newer games you won't get great performance unless you overclock your CPU, since CPUs become a bottleneck at stock speeds. This trend will continue until we see higher clock speeds from CPUs, regardless of multithreading, as most games are adding more and more physics and AI and using that extra threading for those purposes. Will we see higher clocks and smaller nodes in CPUs in the near future, or have GPUs passed them up for good?
 
Example: I'm not sure which is the tick and which is the tock, but typically a CPU gains 5-12%, usually 5-7%, with every tick, and after two years we get a tock, if you will, where we may see 20% or more over the older architecture. GPUs have been increasing in speed much faster, and now in node adoption as well.
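
Rough math on that cadence, just to put numbers on it. The percentages in this little Python sketch are only my guesses from above, not official figures:

# Compound the guessed per-generation CPU gains over a few years.
# Both percentages are assumptions taken from the post above.
tick_gain = 0.06   # ~6% in the smaller yearly step
tock_gain = 0.20   # ~20% in the new-architecture year

perf = 1.0
for year in range(1, 5):              # four years = two full tick/tock cycles
    perf *= (1 + tick_gain) if year % 2 else (1 + tock_gain)
    print(f"year {year}: {perf:.2f}x of the starting CPU")
# ends around 1.62x after four years, which is why GPU gains per
# generation look so much bigger by comparison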
 
Actually, the bottleneck doesn't exist as much as you would think on newer CPUs and quad cores.

It is a good thing to see them finally going to a smaller node, but Intel already has working 32nm, so I don't expect GPUs to overtake CPUs and stay ahead.

I think it will go back and forth either way. But GPUs tend to put out more heat than a CPU, so even at 40nm I wonder how well they will hold up at such high temperatures.
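
A quick back-of-the-envelope check on the heat side. The die sizes and TDPs in this Python sketch are approximate public specs for a GTX 280 and a 45nm Core 2 Quad, so treat it as ballpark only:

# Ballpark power-density comparison; figures are rough public specs.
gpu_tdp_w, gpu_die_mm2 = 236.0, 576.0   # GTX 280 (GT200, 65nm), roughly
cpu_tdp_w, cpu_die_mm2 = 95.0, 214.0    # 45nm Core 2 Quad, two ~107 mm2 dies, roughly

print(f"GPU power density: {gpu_tdp_w / gpu_die_mm2:.2f} W/mm2")  # ~0.41
print(f"CPU power density: {cpu_tdp_w / cpu_die_mm2:.2f} W/mm2")  # ~0.44
# The GPU burns far more total watts, but per square millimetre the two
# are in the same ballpark, so the real question at 40nm is how much of
# that power gets packed into a smaller area.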
 

hannibal

Distinguished
GPUs are also easier to design, so it is easier to move to a smaller production node. Also, there is not enough competition on the CPU side... Intel is too far ahead in production technology and capacity...
 
Yeah, TSMC is working on 30nm currently, I think, and production at 40nm starts at the beginning of the new year. The thing is, GPUs have always been a node or two behind. If the new games pan out the way they're looking to, CPUs will be a bit more of a bottleneck; this wasn't the case five months ago. The demands on the GPU are going up at a hugely fast pace, and its design node is shrinking faster than any CPU's ever has over a comparable timeline. I'm just wondering if it'll continue.
 

BaronMatrix

Splendid
I thought this might happen last year, as ATi went really quickly from 65nm to 55nm. Since GPUs don't have the design cycle of CPUs, it figures they would get ahead in tech, especially since GPUs are above 200W in some cases.

They need to get smaller faster. R700 uses 525W+ for two cards.
 

MarkG

Distinguished

I take it you haven't designed a CPU or GPU...

Having worked with GPU designers, I wouldn't want to bet on which is easier to design; GPUs have many copies of small pipelines, but CPUs have multiple copies of the same cores and tons of cache. And laying out a chip the size of the recent Nvidia monstrosities is far from easy.
 
GPUs do have more copies of smaller blocks, but as you said, the layout on a monstrosity like the Nvidia chips is far from easy. I'd say both are quite hard to do properly, as shown by the fact that companies with many years of experience still have a hard time getting it right.