Personally I don't believe it, because ATi engineers supposedly don't like to talk about pixel pipelines and pigeon-hole them like that. BUT, let's discuss this as if it's true for the technical aspect, because while it could be totally FUD/false, it's fun to imagine what it means.
On a note that makes me particularly suspicious of The Inquirer's claims, they refer to them as "pipelines." Most of us who've paid close attention know that ATi completely ditched the traditional symmetric pipeline architecture back with the release of the R520;
Yeah, but I think it is going to be more R500 than R520. We suspected it was going to be a combo, but the way textures are handled on the R500 is VERY different than on the R520, and this would be even further removed from any previous design, building on unification at the texture level as well.
Taken together, these factors mean that if The Inquirer is right, this will be the biggest surprise they've ever given me; I've largely been thinking that ATi has been going for a larger number of shaders than TMUs, given shaders' increased importance in next-gen gaming (Oblivion, etc.)
However, picture this... 64 fully unified shader units that can also do texturing, not via a separate texture crossbar that needs to cross-communicate, but as something that can do it 1:1 mid-stream. This would help two-fold, giving you the flexibility of what you had before in the R580 while adding the power to do what you were missing. As an example, remember that HDR requires lots of shader ops, but at the core the first step is a texture fetch, which is IMO why the GF7 series isn't as handicapped as expected compared to the massive shader power of the R580 (for those sensitive people out there, by massive I just mean in number). I'd love to see that, but oy, like I said, the transistor penalty could be huge, unless they found another efficiency.
The second benefit I could see: you would remove the texture ALU crossbar and avoid another layer of potential problems going back and forth for texture lookups from both pixel & vertex, which means having to maintain the context information for each thread simply for that operation.
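To make that second point concrete, here's a toy sketch in Python (purely illustrative, NOT ATi's actual design) of why the decoupled-TMU model has to park thread contexts across a crossbar, while a unit that textures mid-stream doesn't:

```python
# Toy sketch, purely illustrative -- not real hardware. It contrasts a
# decoupled-TMU model (R520/R580-style), where a TEX op hands the request
# across a crossbar and the thread's context must stay parked until the
# result returns, with a speculated unified unit that textures mid-stream.
from collections import deque

def run_decoupled(program):
    """Decoupled TMUs: every TEX op parks a context on the crossbar queue."""
    crossbar = deque()
    parked = 0
    for op in program:
        if op == "TEX":
            crossbar.append("fetch request")  # hand-off across the crossbar
            parked += 1                       # context kept alive just for this
        # ALU ops (MUL, ADD, MAD...) execute locally either way
    return parked

def run_unified(program):
    """Speculated unified unit: TEX is just another in-stream op."""
    for op in program:
        pass  # ALU and TEX run on the same unit; no hand-off, nothing parked
    return 0

program = ["TEX", "MUL", "ADD", "TEX", "MAD"]  # an HDR-ish fragment: fetch first
print("decoupled: contexts parked =", run_decoupled(program))  # -> 2
print("unified:   contexts parked =", run_unified(program))    # -> 0
```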
given that they still lose some current-gen benchmarks to nVidia, ATi may "fall back" on this for a generation;
And that was somewhat my thinking too when I first read this: is this ATi's own 'hybrid' response, since the X1600 and X1900 didn't give them as much of a boost in most applications (although there are some; just check AOE3, where X1800 vs X1900 is almost 2 times the difference at the same clock)? This to me would be a huge price to pay in transistors, but it would give them the PR wins; then, when they do feel they are better suited to return to the unbalanced design, they would go for the transistor savings. I don't like the plan, but it would explain a decision to do so.
I've been feeling that the R600 will go for 64 pooled shaders (128 ALUs total) but 24 TMUs, giving it a 2.66:1 ratio, rather than a 3:1 ratio.
And that was pretty much the consensus IMO on the shader part, 1 full + 1 mini + 1 branch unit(s), but the TMU count is higher than previously believed.
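Just to spell out the arithmetic behind those numbers (everything here is rumor, nothing confirmed):

```python
# The rumored R600 shader math, spelled out (all figures speculation).
shaders, tmus = 64, 24
alus_per_shader = 2                 # 1 full + 1 mini; branch unit not counted
print(shaders * alus_per_shader)    # -> 128 ALUs total
print(round(shaders / tmus, 2))     # -> 2.67, the ~2.66:1 shader:TMU ratio
print(round(48 / 16, 2))            # -> 3.0, R580's 48:16 for comparison
```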
This would match the texturing fill-rate per clock of the G70, and given that ATi's GPU will almost certainly post far higher clock speeds than G80, we may just see them make the texturing difference between R600 and G80 close to nothing,
Or with an advantage, depending on which G80 design you put the most faith in: 32/24/16 unified (V+G) or 32/32/16U.
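Quick back-of-envelope on the texel rates, since the G70 comparison is per-clock (24 TMUs matches the G70's 24). The MHz figures below are clocks I'm assuming purely for illustration, not leaked specs:

```python
# Texel throughput = TMUs (texels/clock) * core clock.
def gtexels_per_s(tmus, clock_mhz):
    return tmus * clock_mhz / 1000  # -> GTexels/s

print(gtexels_per_s(24, 750))  # R600, assumed ~750 MHz        -> 18.0
print(gtexels_per_s(32, 575))  # G80 32-TMU config, ~575 MHz   -> 18.4
print(gtexels_per_s(24, 575))  # G80 24-TMU config, same clock -> 13.8
```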
leaving ATi's shader advantage. (and possibly RAM advantage, if we get either that rumored 512-bit interface,
Yeah, I just don't buy the 512-bit yet; the transistor count increases a lot, and the trace count goes up enormously on an already packed board (likely meaning another 1-2 layers on the PCB IMO).
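For what it's worth, the reason 512-bit keeps coming up despite that cost is that bandwidth scales linearly with bus width; the 2.0 GT/s GDDR4 data rate below is just an assumed round number to show the scaling:

```python
# Bandwidth = (bus width in bytes) * data rate.
def gb_per_s(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

print(gb_per_s(256, 2.0))  # 256-bit -> 64.0 GB/s
print(gb_per_s(512, 2.0))  # 512-bit -> 128.0 GB/s, double, at the PCB cost above
```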
or even if nVidia can't incorporate GDDR4 support for G80)
I would suspect GDDR4 is a given for the G80; you'd need it built into the VPU for at least a future refresh, unless they think the G80 won't last that long (quickly replaced by the G90). They could launch a board that doesn't sport GDDR4, but I'd think the support would be in the chip.
So, according to the unified architecture
Did you have something more there? It seemed to end abruptly.
Anywhoo, hope my post was fodder for thought/discussion, but it's been a wicked WICKED busy day at work, so kinda rushing this out before getting the heck out of here! 8)