I have an 8800 GTS and 16 days left on my Step-Up program. Now I'm a bit torn: I have a chance to sell a gaming PC in a couple of months, and I was thinking of throwing my card in with it and getting the R600. Otherwise, my plan was to SLI my GTS later once prices come down.
I have no lack of performance right now, since I only have a 19" widescreen and everything plays maxed out. The only performance difference I'd see is in 3DMark, but I'm always looking to the future.
That said, looking to the future, I don't know how the R600 will stack up against the G80. I know ATI has some big advantages over Nvidia here: this is their second unified-shader design, and they're much further ahead on the driver front.
I know this is all speculation, since nothing has been released and nothing is official, but the leaked specs at this point in development seem correct. The G80's leaked specs at the same stage were, so I'm thinking these are roughly accurate, ignoring any clock speeds.
Let's talk about each company's unified shader design, starting with ATI's. According to the leaked specs, ATI has 64 4-way shading units that can perform 128 shading operations per cycle. To me it looks like they've taken the traditional approach to shader design: a straight throughput of information. Having the unified shaders inline with the normal pipeline, with shading operations running at the core clock, is a drawback compared to Nvidia's design, where the shaders run completely independently of the core, so more work can be done on a pixel before it's sent to the ROPs. These differences probably account for the R600's high memory bandwidth and core clock speed. The high core clock also makes up for the R600 having only 16 ROPs.
The R600's high core clock makes up for Nvidia's independently clocked shaders, and the high memory bandwidth is probably for storing pixels in main memory, to counter Nvidia's independent texture units, which let the G80 store information locally and recycle it back for more stream processing.
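The tradeoff above (fewer, wider units at the core clock versus many scalar units on a separate, faster shader clock) can be sketched with some back-of-the-envelope arithmetic. The clock figures here are assumptions for illustration: the R600's core clock is unconfirmed, and only the G80's shader clock (1350 MHz on the 8800 GTX) is a known shipping spec.

```python
def shader_ops_per_second(units, ops_per_unit, clock_mhz):
    """Peak shader operations per second for a shader array:
    number of units * ops each unit issues per cycle * clock rate."""
    return units * ops_per_unit * clock_mhz * 1_000_000

# R600-style: 64 4-way units running at the core clock
# (800 MHz is a guess, not a confirmed spec)
r600 = shader_ops_per_second(units=64, ops_per_unit=4, clock_mhz=800)

# G80-style: 128 scalar stream processors on an independent
# shader clock (1350 MHz on the 8800 GTX)
g80 = shader_ops_per_second(units=128, ops_per_unit=1, clock_mhz=1350)

print(f"R600 (assumed): {r600 / 1e9:.1f} billion shader ops/s")
print(f"G80:            {g80 / 1e9:.1f} billion shader ops/s")
```

Under these assumed numbers the two come out in the same ballpark, which is the point: a high core clock on wide units can offset a separate, faster shader clock on narrow ones.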
The reason I think I'm right about ATI using a traditional "pipeline" style of design is that they are done with all of their drivers across the board, even Linux. To me it looks like this: remember, the R600 has been in development for years, and once ATI found out the G80 had shaders clocked independently of the core, they had to go through respin after respin to bring their clock speeds up. The underlying core design stayed the same, which let the driver team start work long ago. The only real obstacles in the driver were writing for the unified shaders and handling the reprocessing of the geometry shaders after the vertex calculations are done, making the overall driver design relatively simple to adapt.
I think the reason Nvidia is taking so long is that they want to fully exploit the stream processors' independence, along with heavy use of the texture filtering units, to get a much higher throughput of shader operations to the core. Making that happen is a driver-design nightmare, because the completely independent design of everything on the G80 (independent shader clock, independent texture units) demands an entirely new approach to driver design. Looking at these points, I wonder if Nvidia will ever be able to write a driver that uses the G80's full potential.
By the way, I'm not an Nvidia fanboy; this is the first Nvidia product I've owned, and this is all up for general discussion. I'm not claiming I'm right, so if I've made any mistakes or wrong assumptions, please don't be a dink; just point them out and correct me. I'm also thinking I'll just wait for that sale in a few months and spend the difference on either an 8800 GTX or an R600, but suggestions are welcome there too.