
Step Up program or Not / R600 vs G80

Last response: in Graphics & Displays
January 24, 2007 4:42:03 PM

I have an 8800 GTS and 16 days left on my step-up program. Now, though, I'm a bit torn: I have a chance to sell a gaming PC in a couple of months, and I was thinking of throwing my card in with it and getting the R600, or else keeping the GTS and going SLI later once prices come down.

I have no lack of performance right now, since I only have a 19" widescreen and everything plays maxed out. The only performance difference I would see is in 3DMark, but I'm always looking to the future.

That said, looking to the future I don't know how the R600 will stack up against the G80. I know ATI has some big advantages over Nvidia: this is their second unified-shader design, and they're much further ahead of Nvidia on the driver front.

I know this is all speculation because nothing is released and nothing is official, but specs leaked at about this point in development tend to be correct. The G80's leaked specs were, so I'm assuming these are roughly accurate, ignoring any clock speeds.

If we're comparing the two unified shader designs, let's look at ATI's first. According to the leaked specs, ATI has 64 four-way shading units that can perform 128 shading operations per cycle. To me it looks like they have taken the standard approach to shader design: a straight throughput of information. Having the unified shaders inline with the normal pipeline, with the shading operations running at the core clock, is a drawback compared to Nvidia's design, where the shaders run completely independently of the core, so more work can be done on a pixel before sending it to the ROPs. These differences probably account for the R600's wide memory bus and high core clock; the high core clock also makes up for the R600 having only 16 ROPs.
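The clock-speed-versus-independent-shaders tradeoff above can be sketched with some back-of-the-envelope math. To be clear about assumptions: the R600 core clock below is a placeholder guess (no official figure exists yet), while the G80 numbers are the 8800 GTX's published 128 stream processors at 1350 MHz.

```python
# Rough shader-throughput comparison. The R600 clock is an ASSUMED
# placeholder, not a leaked or official figure.

def shader_gops(units, ops_per_unit, clock_mhz):
    """Theoretical shader operations per second, in billions (GOPS)."""
    return units * ops_per_unit * clock_mhz * 1e6 / 1e9

# R600 (per the leaked specs discussed above): shaders run at the core
# clock, so a high core clock is what drives throughput. 800 MHz is a guess.
r600 = shader_gops(units=64, ops_per_unit=4, clock_mhz=800)

# G80 (8800 GTX, known): 128 scalar stream processors on their own
# independent 1350 MHz shader clock.
g80 = shader_gops(units=128, ops_per_unit=1, clock_mhz=1350)

print(f"R600 (assumed clock): {r600:.1f} GOPS")
print(f"G80  (8800 GTX):      {g80:.1f} GOPS")
```

The point of the sketch is just that a same-clock inline design needs a higher core clock to match independently clocked shaders, which fits the respin rumors.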

The R600's high core clock makes up for Nvidia's independently clocked shaders, and its high bandwidth is probably for storing pixels in main memory, to compensate for Nvidia's independent texture units, which allow information to be stored on them and then recycled back for more stream processing.
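For the bandwidth side of that argument, peak memory bandwidth is just bus width times effective memory clock. The G80 figures below are the 8800 GTX's known specs; the R600's 512-bit bus comes from the leaked specs, but its memory clock is purely an assumed figure for illustration.

```python
# Peak memory bandwidth from bus width and effective (DDR) memory clock.
# R600's memory clock is an ASSUMPTION; its 512-bit bus is from the leaks.

def bandwidth_gbs(bus_bits, effective_clock_mhz):
    """Peak bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_bits / 8) * effective_clock_mhz * 1e6 / 1e9

g80 = bandwidth_gbs(384, 1800)   # 8800 GTX: 384-bit bus, 1800 MHz effective
r600 = bandwidth_gbs(512, 1600)  # rumored 512-bit bus, assumed 1600 MHz

print(f"G80  (8800 GTX): {g80:.1f} GB/s")
print(f"R600 (rumored):  {r600:.1f} GB/s")
```

Even at a conservative assumed memory clock, a 512-bit bus would give the R600 noticeably more headroom for shuttling intermediate pixel data through main memory.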

The reason I think I'm right about ATI using a traditional "pipeline" style of design is that they are done with all of their drivers across the board, even for Linux. Remember, the R600 has been in development for some years now. To me it looks like, after they found out what the G80 had done with its design, with shaders clocked independently of the core, they had to go through respin after respin to bring their clock speeds up. The initial core design stayed the same, which let the driver team start long ago; the only real obstacles in the driver design were writing for the unified shaders and the reprocessing of the geometry shaders after the vertex calculations are done, making the general design of the driver relatively simple to adapt.

I think the reason Nvidia is taking so long is that they want to fully exploit the stream processors' independence, along with heavy use of the texture filtering units, to allow a much higher throughput of shader operations. Making this happen is a driver-design nightmare because of the completely independent design of everything on the G80 (independent shader clock and independent texture units), which needs a whole new approach to driver design. Looking at these points, I wonder if Nvidia will ever be able to write a driver that uses the G80's full potential.

By the way, I'm not an Nvidia fanboy; this is the first Nvidia product I've owned, and this is up for general discussion, so tell me what you all think. I'm not trying to say I'm right, and if I made any mistakes or wrong assumptions please don't be a dink; just point them out and correct me. I also think I'm going to wait for that sale in a few months and either spend the difference on an 8800 GTX or an R600, but that's up for suggestions too.


January 24, 2007 5:24:04 PM

Too long for people to read? Come on, there are some good arguments in there.
January 24, 2007 6:20:44 PM

Dr. Phil will tell you "people do what works."
So, is the G80 working fine for you? It will work great at least for a while, until they dig up something even newer on the API side (say, DX10.1 or something).

What I would suggest is this:

When the R600 comes out, there's no guarantee ATI will have a card at the right price point for you (it will definitely surpass the G80 in price for the first few weeks). So unless you can guarantee that selling your GTS will cover it, a better choice is to wait a bit, since you're starting with an Nvidia mobo, which lets you SLI your card (and not CrossFire).
January 24, 2007 6:38:57 PM

I actually have a P5B, which is CrossFire, and in the whole build I'll either swap mobos or not.
I'm just starting to think that the G80 architecture has more potential, if the drivers can be fully optimised. I'm basing all this on the leaked specs, which look too conservative not to be real; if they aren't (the main thing I'm watching is whether there are more shader operations than stated), then it's some good mud. Relatively speaking, if someone's going to lie about a new product they usually go overboard: the X1800 was supposed to have 32 pixel pipelines, then the G80 was, then the R600 was, and other lies going all the way back to when Intel's next big thing was going to be a Pentium 5 at 5 GHz (mind you, I think the Inquirer said all of those).