OK, WTF is the point of this thread now?
A very similar thread started at [H] with better results;
http://www.hardforum.com/showthread.php?p=1030583692
The n00b FAQfest is getting really annoying again. :roll:
The R600 being DX10.1 compliant has been a long-standing rumour. ATi doesn't confirm/deny it, which is normal for them, especially since, if they already knew in November about the production delays, they wouldn't tip their hand knowing it would just give nV more info for their refreshes. Some people may remember the old rumours (partially spawned from EB's article, IMO);
http://www.tweaktown.com/news/6703/r600_will_be_directx_10_1/index.html
As for DX10.1 itself, get yourselves educated before posting;
http://www.elitebastards.com/cms/index.php?option=com_content&task=view&id=103&Itemid=29
http://download.microsoft.com/download/5/b/9/5b97017b-e28a-4bae-ba48-174cf47d23cd/PRI103_WH06.ppt
The big difference they are going to address is not multi-core VPUs, but the change in AA functionality and the performance improvements. Check slides 30-31;
http://download.microsoft.com/download/5/b/9/5b97017b-e28a-4bae-ba48-174cf47d23cd/PRI022_WH06.ppt
And OMFG, they're already talking about DX10.2, whatever will we do!
It's not that important; like every previous generation, by the time anyone exploits the new version well enough to make a difference, there will be better/cheaper hardware out there to do it.
Regardless of whether or not the R600 truly supports all of the D3D10.1 features, it's unlikely to matter anytime soon. The G80 definitely falls slightly short of the full spec right now, but I doubt it will matter more than theoretically. What will matter most is current performance in games/apps. Just like the GF6600 'supported' SM3.0, but the X800XT clobbered it in gaming. If the top R600 only performs as well as the GF8800GTS, no one will care about future cube-mapping support unless they are keeping the card for 4 years, or are developers who need the features and not the performance.
"you also forgot the fact that the R600 supposedly has 64 4-way shaders, i.e. complex shaders doing 4 operations, while the 8800 has only simple shaders."
So the R600 has 64 shaders which can perform 4 operations
And the 8800GTX has 128 shaders which can perform 2 operations
Isn't it the same so far?
No, the G80 is single-operation dual-issue; the R600 is said to be Vec4 and dual-issue. So that would be 2:1 per clock in the R600's favour, but the question is whether that will make a big difference when shaders are running simple functions. Also, the granularity of the R600 may be further reduced/improved over previous generations to improve branching and GPGPU performance, whereas the GF8800 caught up to the X8/X1K series.
It's a theoretical benefit with likely limited early exposure, just like the 3:1 shader advantage of the X1900 over the X1800.
Now do not forget their clock speed:
I don't know about the R600, but I am guessing something like 675MHz
And the 8800GTX 1350MHz
Your guess is well below the current writings of 800MHz+, and no one knows for sure whether all parts are synchronous like previous designs, or asynchronous like the GF8800.
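To put the 'better specs' argument in perspective, here's the peak-throughput arithmetic with the numbers being thrown around in this thread. A rough sketch only: the R600 shader count, Vec4/dual-issue layout, and 800MHz+ clock are all rumours, not confirmed specs, and the G80 shader clock is the one figure here that's actually known.

```python
# Back-of-the-envelope shader throughput using the numbers from this
# thread -- the R600 figures are rumours/guesses, NOT confirmed specs.

def ops_per_second(shaders, ops_per_shader_per_clock, clock_mhz):
    """Peak shader ops per second, in billions (GOps/s)."""
    return shaders * ops_per_shader_per_clock * clock_mhz * 1e6 / 1e9

# R600 (rumoured): 64 Vec4 dual-issue shaders -> 4 x 2 = 8 ops/clock each,
# at the 800MHz+ clock currently being written about.
r600 = ops_per_second(64, 8, 800)

# G80 (8800GTX): 128 scalar dual-issue shaders -> 2 ops/clock each,
# at the 1350MHz shader clock.
g80 = ops_per_second(128, 2, 1350)

print(f"R600 (rumoured): {r600:.1f} GOps/s")
print(f"G80  (8800GTX):  {g80:.1f} GOps/s")
```

Note that neither peak number tells you anything about real game performance, which is exactly the point: change the rumoured assumptions and the "winner" flips.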
So you see, the 8800 has better specs
Whatever. :roll:
Commenting on 'better specs' when you obviously don't know the specs one way or the other is just a ridiculous exercise in fanboyism.
If anyone is truly concerned, then wait for the launch of the R600, but I doubt it'll matter much for the time being; likely just another checkbox feature at best until 2008.
Oh, BTW, I can guarantee it's not fully DX10.2 capable, so what now? Wait 'til the R680/700 & G90?