An FX repeat for nVidia's next gen?

Gamer_369

Distinguished
May 29, 2005
183
0
18,680
Ok, first off, check out this article from Xbit claiming that the G80/DX10 generation from nVidia will *not* be based on a unified shader architecture.
http://www.xbitlabs.com/news/video/display/20060220100915.html

Now, my question is: does anybody think that perhaps MS may have let ATI in on its API for WGF 2.0 earlier than nVidia, and on how it would be unified, allowing ATI to develop a fully unified architecture (R600) knowing that's where DX10 was heading? And that they withheld that info from nVidia, and that's why we'll be seeing ATI with a unified architecture and nVidia without (according to the Xbit article)?

So instead of a unified architecture not "making sense," perhaps nVidia just didn't know about it until it was too late to redesign NV50/G80?
 
So instead of a unified architecture not "making sense," perhaps nVidia just didn't know about it until it was too late to redesign NV50/G80?

Nice theory, but I don't buy it, since Intel and PowerVR have also been developing unified shader solutions.

Sounds like a transistor use decision. Just like ATi decided to avoid FP32 on the X800 because of transistor count, it appears nV has decided the transistor savings aren't worth the added complexity. A non-unified architecture can do the same work as a unified one, only not as flexibly: a 32+16+16 design can match a unified 48-unit design, but it can't shift the distribution should more pixel, vertex or geometry power be needed. Say another game like X2 were released where there was no pixel shader work and it was all vertex: the non-unified design has only its 16 dedicated vertex 'pipelines', whereas the unified design could assign all 48 units to vertex work. We're already seeing the advantage of such non-traditional designs in the X1900 with its 3 shaders per TMU, and even the GF7800 with its number of pipelines being independent of its ROPs.
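To make the flexibility point concrete, here's a rough Python sketch. The workloads, unit counts, and proportional-scheduler model are all made up for illustration, not real GPU behaviour; it just compares a fixed 32+16+16 split against a unified 48-unit pool:

```python
# Hypothetical sketch: fixed 32+16+16 split vs. a unified 48-unit pool.
# All numbers are illustrative, not real GPU figures.

def frames_per_tick(work, units):
    """Throughput is limited by the worst-served stage (the bottleneck)."""
    return min(units[s] / work[s] for s in work if work[s] > 0)

fixed = {"pixel": 32, "vertex": 16, "geometry": 16}

def unified(work, total=48):
    """Idealized unified scheduler: assign units in proportion to demand."""
    demand = sum(work.values())
    return {s: total * work[s] / demand for s in work}

shooter   = {"pixel": 40, "vertex": 6,  "geometry": 2}  # pixel-heavy game
space_sim = {"pixel": 2,  "vertex": 44, "geometry": 2}  # X2-style, nearly all vertex

for name, work in [("shooter", shooter), ("space sim", space_sim)]:
    print(name,
          "fixed:", round(frames_per_tick(work, fixed), 2),
          "unified:", round(frames_per_tick(work, unified(work)), 2))
```

With these made-up numbers, the fixed design stalls badly on the vertex-heavy case (its 16 vertex units become the bottleneck), while the unified pool keeps every unit busy in both cases.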

There's no solid reason not to move to a unified shader design other than the added complexity.

It should give you a very powerful VPU at a far lower transistor count, and thus more dies per wafer and hopefully lower costs, as long as yields are good.
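For a back-of-the-envelope feel for the dies-per-wafer side of that, here's a quick Python sketch using the standard first-order dies-per-wafer approximation. The die areas are hypothetical, not actual G80/R600 figures, and defects/yield are ignored:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # First-order approximation: wafer area / die area, minus an
    # edge-loss term for partial dies around the rim.
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Hypothetical die sizes -- NOT real G80/R600 numbers.
for area in (300, 350):
    print(f"{area} mm^2 die: ~{dies_per_wafer(300, area)} candidates per 300 mm wafer")
```

With those assumed numbers, shaving the die from 350 mm^2 down to 300 mm^2 buys roughly 19% more die candidates per 300 mm wafer, which is the whole economic argument in a nutshell.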
 

Gamer_369

Distinguished
May 29, 2005
183
0
18,680
Agreed.

But the question remains: had nVidia known about WGF 2.0 and Microsoft's API (assuming they didn't), would they have gone with a unified architecture? Perhaps still not.

Also, could an argument against my theory be that Vista has been delayed so long that by the time nVidia did fully realize the capabilities of WGF 2.0, they would have had enough time to react? Vista was originally scheduled for a 2004 release but has now slipped almost two years, to late 2006.
 

Dresden

Distinguished
Oct 4, 2005
118
0
18,680
I love a conspiracy theory, HELLS YEA! Supposedly the issue with Nvidia is the transistor count, but they also say the old technology (dedicated pixel and vertex processors) has 'mileage' left...strange. I kind of agree, mostly because of a link that Grape left in some forum a while ago that outlined the DX10 structure and release. It seems the whole foundation might not take off as fast as gamers think it will. And will games utilize it right off? Who knows?

What's strange, though, is that Nvidia was the first to jump on the SM3.0 bandwagon while ATI was crying out that SM2.0 tech wasn't finished yet. So I see two possibilities: 1) ATI was right last time, so Nvidia learned their lesson and are playing their cards right by letting ATI take the plunge, or 2) ATI was right the first time, will be right again, and Nvidia have screwed themselves. I doubt Microsoft would withhold anything from Nvidia, but ATI is closer to Microsoft than Nvidia is...

Fun topic, nice post.