CPU bottlenecking newest GPUs

As most of us have read in the new reviews of the 5970 cards, CPU bottlenecks are beginning to show up.
If this trend continues, we could end up with the same "fast enough" carry-down effect we currently have with CPUs.
Next-gen cards should typically be 40-60% faster, and it appears we've really run into a software bottleneck with games and multi-threading usage.
I mentioned this a while back; at the time no one conceded the possibility, but it's just starting to show up now. ATI's Eyefinity is but one workaround for it, but again, it's here, and we do need much better CPUs, at least for gaming, IMHO.
 
Look at the resolutions, or did you miss my older comments on this?
Do we need to max out the highest available resolution before people see this? It's here even at 2560x1600, and next-gen GPUs won't have anywhere to go, as that's the top.
The refresh GPUs will only put more pressure on this before next gen arrives.
If LRB is supposed to be so much faster, for instance, it simply can't show it, as we've reached the end.
We need faster CPUs.
 
Look at, say, the i7's roughly 20% IPC lead over Phenom II, which is a whole generation ahead, plus a much more mature process; that'll be eaten up quickly by the next-gen GPUs.
And that's what we're seeing now.
So even if next-gen CPUs are 20% faster, which is a huge performance gain for CPUs, it wouldn't change much, if anything, from where we are currently.
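
To put rough numbers on why a CPU gain gets swallowed, here's a minimal sketch (Python; the millisecond figures are made up for illustration, not measurements) of the usual bottleneck model, where frame rate is capped by whichever of the CPU or GPU takes longer per frame:

```python
# Minimal bottleneck model: a frame can't finish faster than the
# slower of the CPU and GPU portions. Millisecond figures below are
# illustrative, not measurements.

def fps(cpu_ms, gpu_ms):
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms, gpu_ms = 12.0, 10.0  # assume the CPU is already the longer leg

today    = fps(cpu_ms, gpu_ms)              # ~83 fps, CPU-bound
gpu_only = fps(cpu_ms, gpu_ms / 1.5)        # 50% faster GPU: still ~83 fps
both     = fps(cpu_ms / 1.2, gpu_ms / 1.5)  # add a 20% faster CPU: ~100 fps

print(f"today {today:.0f} fps | faster GPU alone {gpu_only:.0f} fps | "
      f"plus faster CPU {both:.0f} fps")
```

Once the CPU side is the longer leg, a 50% faster GPU buys exactly nothing, and a 20% faster CPU buys only 20%. That's the point.
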
I know AMD's been struggling and Intel's on top for now, but something has to change.
More demanding games also put much more stress on the CPU as we move further into AI, plus physics, unless those run on a mix of both CPU and GPU, since the CPU is falling behind.
At the pace games are adopting the new DX model, by the time DX11 is in full use GPUs will be 100% faster and should easily handle those changes.
Look at Crysis: games like it will have to become the norm, and that simply won't happen anytime soon as far as a huge jump goes, and it requires a fast CPU as well.
 
Ubisoft confirmed what they previously promised: the Dunia Engine really benefits a lot from multi-core CPUs. As a result, a Core 2 Quad Q6600 at 2.4 GHz is as fast as a Core 2 Duo E8400 at 3 GHz. Unlike Crysis, for example, Far Cry 2 still benefits from a faster CPU even with a Radeon HD 4870 running at 1,680 x 1,050 with 4x FSAA and 16:1 AF - it seems the workload is divided among the individual components in a better way.
http://www.pcgameshardware.com/aid,663817/Far-Cry-2-GPU-and-CPU-benchmarks/Reviews/?page=2
Keep in mind this was done on a 4870, which offers only around 1/3 the power of the 5970.
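
As a back-of-the-envelope check on that Q6600-vs-E8400 claim (assuming per-core speed scales with clock, since both are the same Core 2 microarchitecture, which is a simplification), here's what the equivalence implies about Dunia's multi-core scaling:

```python
# Back-of-the-envelope check of the PCGH claim: a quad at 2.4 GHz
# matching a dual at 3.0 GHz. Assumes per-core speed scales with
# clock (same Core 2 microarchitecture) -- a simplification.

q6600_capacity = 4 * 2.4   # 9.6 "core-GHz" of raw capacity
e8400_capacity = 2 * 3.0   # 6.0 "core-GHz", assumed fully used

effective_cores = e8400_capacity / 2.4             # ~2.5 of the quad's 4 cores
efficiency      = e8400_capacity / q6600_capacity  # ~62%

print(f"Dunia effectively uses {effective_cores:.1f} of 4 cores "
      f"({efficiency:.0%} of the quad's raw capacity)")
```

Roughly 2.5 cores' worth of work out of 4 is well past the near-two-core ceiling most engines showed at the time.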
 

ElMoIsEviL

The problem is twofold: 1. consoles, 2. nVIDIA.

nVIDIA has been holding back gaming for some time now. Their G80 did a real number on gaming: instead of the ambitious 4:1 ALU:TEX ratio ATi had envisioned (more DirectCompute/cinematic realism), nVIDIA went for a far more conservative 2:1 ratio (and 3:1 with GT200). This essentially pushes games programmed for G80/GT200 (TWIMTBP, or pretty much 99.9% of games) to rely more on traditional rendering and effects techniques (texel/s- and triangle/s-fillrate-reliant techniques rather than ALU/compute-shader techniques).

Couple that with the fact that most games are now developed to work on the Xbox 360, PS3 and Wii, and you've got a problem. Essentially, DX10/10.1 games look like DX9c games, as there is no real use of the extra compute-shader techniques available in DX10/DX10.1.

It's really a messed-up situation, as we have all this GPU power and no real titles to make use of it (RV770/RV870 will generally only run at 30-50% GPU usage when looking at the Catalyst Overdrive panel).

The proof is in how ATi have tackled the issue: going from 16/16 (TMU/ROP) with RV670 to 40/16 for RV770 and now 80/32 with RV870. ATi has had to grow the number of texture mapping units as well as raster operators in order to increase performance (they've grown their shaders too, but as we know those don't get much usage).
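
To see what that unit growth means in raw terms, here's a quick sketch of spec-sheet throughput for the three chips, using the reference-card clocks and unit counts for the HD 3870, HD 4870 and HD 5870; treat these as ballpark paper math, not benchmarks:

```python
# Spec-sheet math for the three generations mentioned above.
# (name, core MHz, shader ALUs, TMUs, ROPs) -- reference-card values.
gpus = [
    ("RV670 / HD 3870", 775,  320, 16, 16),
    ("RV770 / HD 4870", 750,  800, 40, 16),
    ("RV870 / HD 5870", 850, 1600, 80, 32),
]

for name, mhz, alus, tmus, rops in gpus:
    ghz    = mhz / 1000.0
    gflops = alus * 2 * ghz  # 1 MAD = 2 flops per ALU per clock
    texel  = tmus * ghz      # Gtexels/s
    pixel  = rops * ghz      # Gpixels/s
    print(f"{name}: {gflops:.0f} GFLOPS, {texel:.1f} GT/s, {pixel:.1f} GP/s")
```

Counting ATi's 5-wide units (64, 160, 320), the ALU:TEX ratio stays pinned at 4:1 across all three chips, so the fixed-function side has had to scale in lockstep just to turn the shader growth into frames.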
 
True, and it goes to part of my point.
We do see more and more games becoming CPU-bottlenecked.
I'd add that Intel's inept attempts at IGPs for gaming use have left the lowest common denominator for devs too low, and that has contributed as well.
Yeah, the DX10.1 thing has been a bone of contention with me, as have other things nVidia has done.
But currently we've seen what "they" want: "they" don't really want to push gaming development until 2012, and by then the minimum low end will be a 4850 or better, as IGPs should be that good by then, unless Intel continues its current poor showing.
In the meantime we have CPUs too slow to even handle today's games at the highest resolutions, and that's a concern.
It's showing that a "heavy" game that uses a lot of CPU resources may make devs choose other paths.
Look at some games: those shaders are needed, but doubling them isn't perfect scaling and never has been. We saw the 4870, with 2.5 times the shaders of the 3870, not scale perfectly either, and it was also opened up more than needed as far as bandwidth is concerned, which could be the scenario with the 5xxx series, as they've reached the bandwidth threshold with these cards.
GRID comes to mind for shader usage.
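
On the imperfect-scaling point, a quick back-of-the-envelope sketch: the 2.5x ALU count is from the spec sheets, but the ~1.6x in-game speedup is an assumed ballpark for illustration, not a benchmark result.

```python
# Back-of-the-envelope shader-scaling check, 3870 -> 4870.
# The 2.5x unit count is spec-sheet fact; the ~1.6x real-world
# speedup is an assumed ballpark for illustration, not a benchmark.

shader_ratio    = 800 / 320  # 2.5x the ALUs
assumed_speedup = 1.6        # hypothetical average in-game gain

scaling_efficiency = assumed_speedup / shader_ratio
print(f"scaling efficiency: {scaling_efficiency:.0%}")  # ~64%
```

Anything well under 100% here means the extra ALUs are stalling on something else (bandwidth, setup, or the CPU), which is exactly the worry with the 5xxx doubling.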