Radeon HD 7970 or GTX 680?

a b U Graphics card
August 14, 2012 4:16:13 PM

I ended up selling my two XFX Radeon HD 6970s and I am in the market for a very high-end single-GPU card, with the intention of buying a second one later on for SLI/CrossFireX. I'm mainly looking at a GeForce GTX 680 or a Radeon HD 7970, but the 7950 and GTX 670 are still on the table. I've seen several benchmarks, but I am just looking for some honest opinions. I don't much care one way or the other, but I have enjoyed my Radeon cards a bit more than my GeForce cards.

Here are the models I'm looking at; my system is in my sig.

7970

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

7950

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

GTX 680

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

http://www.newegg.com/Product/Product.aspx?Item=N82E168...


GTX 670

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

http://www.newegg.com/Product/Product.aspx?Item=N82E168...


a c 125 U Graphics card
August 14, 2012 5:02:18 PM

In my opinion, a GTX 670 or HD 7950 isn't enough of an upgrade from an HD 6970 to be worth bothering with.

The GTX 680 is generally faster than the HD 7970 but costs more. Since you haven't mentioned a budget, I would say the GTX 680 is the best choice.

I really like EVGA products, so I would opt for the first GTX 680 in your list. However, if you decide to go for the ASUS, make sure your motherboard has good PCI-E spacing, as the ASUS card takes up three slots (something to consider for SLI).
a c 535 U Graphics card
August 14, 2012 9:04:34 PM

I agree with the Asus DCUII GTX 680. It's whisper silent at full gaming load. I can't seem to find it, but I saw a really good review of the EVGA Signature 2 where it was leading out several high-end 680's. I actually think I would go with the EVGA.

Really, this decision is not so much about performance; you can pull out any number of scenarios where one beats the other, but usually only by a few FPS one way or the other. The big question is whether you place any value on the Nvidia ecosystem. Since AMD does not seem to offer anything special with their cards (maybe GCN or MLAA 2.0?), you should decide if you have any interest in the extra settings and features that come with an Nvidia card. These include PhysX, Adaptive VSync, FXAA, TXAA, good driver support, and good relationships with game developers that mean many new releases are fully supported the day they come out. I look at it this way: by buying an AMD card, you are betting that you will NEVER want to play a game with PhysX, Adaptive VSync, forced FXAA, or TXAA. TXAA, in particular, is very promising and such an unknown that I would not want to bet against it.
http://www.hardocp.com/article/2012/04/16/nvidia_adapti...
http://alienbabeltech.com/main/?p=31233

On the SLI/Crossfire issue, HardOCP did several articles where they made note of driver problems with Crossfire and also noted that Crossfire seems to be less smooth. Take what you want from the articles; here they are:
http://www.hardocp.com/article/2012/01/17/amd_crossfire...
http://hardocp.com/article/2012/03/28/nvidia_kepler_gef...

Lastly, the issue of AMD drivers. Much has been said about the performance of the 12.7 drivers, and some of it has been valid. Despite that, there are several concerns:
- First, it took six months for them to be released, halfway through the product cycle, assuming the 8000 series comes out at the end of the year.
- Second, the 12.7 drivers are almost exclusively geared towards performance on the 7000 series cards. As a 6970 owner, you may already be aware of this. Purchasing a 7000 series card may be setting yourself up for disappointment when the driver focus shifts to the 8000 series towards the end of the year.
- Third, AMD has dropped support for its pre-5000 series cards. Obviously this means that obsolescence is hardwired into the business model. How long it will take before support for the 7000 series is dropped is anyone's guess, but it will happen.
http://benchmark3d.com/amd-catalyst-12-6-whql-12-7-beta...

In the end, there is more of a gamble with the AMD cards, and virtually none with the Nvidia cards.
a b U Graphics card
August 14, 2012 10:22:09 PM

7970
a c 86 U Graphics card
August 14, 2012 10:58:02 PM

The GTX 670 is nearly identical to the 680 in performance, so the 680 is a fairly stupid purchase now that the 670 is out. The 7970 is faster than the 7950, but only because it has higher clock frequencies. Its mere 256-shader count advantage (and the corresponding increase in texture units and such) is largely irrelevant. A 7950 at 925MHz core and 1375MHz memory will be as close to the 7970 as the 670 is to the 680. The 7950 overclocks just as well as 7970s that use the same PCB and cooler, so not only is the 680 a poor purchase, but so too is the 7970.
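
To put rough numbers on that, here is a minimal Python sketch using the reference shader counts and clocks; the shaders-times-clock model is a deliberate simplification (real games scale less than linearly with either figure), so treat it only as a ballpark.

# Ballpark ALU-throughput model: shader count x core clock, in arbitrary units.
# Reference specs assumed: HD 7970 = 2048 shaders @ 925 MHz, HD 7950 = 1792 @ 800 MHz.
def alu_throughput(shaders, core_mhz):
    return shaders * core_mhz

hd7970_stock = alu_throughput(2048, 925)
hd7950_stock = alu_throughput(1792, 800)
hd7950_oc = alu_throughput(1792, 925)      # 7950 bumped to the 7970's core clock

print(f"7950 stock vs 7970 stock: {hd7950_stock / hd7970_stock:.1%}")   # ~75.7%
print(f"7950 @ 925 MHz vs 7970:   {hd7950_oc / hd7970_stock:.1%}")      # ~87.5%
# At 1375 MHz (5.5 GT/s effective) memory, the overclocked 7950 also matches the
# 7970's 264 GB/s of paper bandwidth.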

So it's pretty much down to the 7950 versus the 670, and then we also have the 7970 GHz Edition to consider. The over-$100 price hike over the 7950 is not worth the minuscule binning advantage, IMO. So I'd say that the 670 and the 7950 are the two main options to choose between for a top-end single-GPU graphics card. With overclocking, they perform just as well as their bigger brothers (even if you overclock those bigger brothers too), and they use less power at the same time.

7950 versus 670, I'd have to hand it to the 7950 for most games because the 7950 is not nearly as memory-bandwidth bottlenecked as the 670, overclocks a little better, and is significantly cheaper. However, if the OP wants the 670 anyway, then go ahead and get it. It's still a great card. The only truly bad choices would be the 680, 7970, and 7970 GHz Edition.

Some people say that the AMD drivers are inferior, but that simply isn't true, at least not with current drivers.
a c 86 U Graphics card
August 14, 2012 11:05:47 PM

17seconds said:
I agree with the Asus DCUII GTX 680. It's whisper silent at full gaming load. I can't seem to find it, but I saw a really good review of the EVGA Signature 2 where it was leading out several high-end 680's. I actually think I would go with the EVGA.

Really, this decision is not so much about performance, you can pull out any number of scenarios where one beats another, but usually only by a few FPS one way or the other. The big question is whether you place any value on the Nvidia ecosystem. Since AMD does not seem to have anything special they offer with their cards (maybe GCN or MLAA2?), you should decide if you have any interest in the extra available settings and features that come with an Nvidia card. These include PhysX, Adaptive VSync, FXAA, TXAA, good driver support, and good relationships with game developers that mean many new releases are fully supported the day they come out. I look at it this way, by buying an AMD card, you are betting that you will NEVER want to play a game with PhysX, Adaptive VSync, forced FXAA, or TXAA. TXAA, in particular is very promising and such an unknown, I would not want to place a bet against it.
http://www.hardocp.com/article/2012/04/16/nvidia_adapti...
http://alienbabeltech.com/main/?p=31233

With the SLI/Crossfire issue, HardOCP did several articles where they made note of driver problems with Crossfire and also noted that Crossfire seems to be less smooth. Take what you want from the articles, here they are:
http://www.hardocp.com/article/2012/01/17/amd_crossfire...
http://hardocp.com/article/2012/03/28/nvidia_kepler_gef...

Lastly, the issue of AMD drivers. Much has been said about the performance of the 12.7 drivers, some has been valid. Despite that, there are several concerns:
- First of all, is the fact that it took 6 months for them to be released, half way through the product cycle, assuming the 8000 series comes out at the end of the year.
- Second, the 12.7 drivers are almost exclusively geared towards performance in the 7000 series cards. As a 6970 owner, you may already be aware of this. Purchasing a 7000 series card seems like you may be setting up for disappointment when the driver focus shifts to the 8000 series towards the end of the year.
- Third, is the fact that AMD has dropped support for it's pre-5000 series cards. Obviously this means that obsolescence is hardwired into the business model. How long it will take before support for the 7000 series is dropped is anyone's guess, but it will happen.
http://benchmark3d.com/amd-catalyst-12-6-whql-12-7-beta...

In the end, there is more of a gamble with the AMD cards, and virtually none with the Nvidia cards.


PhysX is hardly any better than no PhysX. TXAA is something Nvidia needs to try to make up for their low memory bandwidth, and it doesn't actually look better than MSAA. Adaptive V-Sync is a good arguing point. FXAA is crap. The rest of that first paragraph is not as accurate as you'd like where the drivers are concerned, and the rest is irrelevant at this point because AMD's cards have been out long enough to solve those problems.

HardOCP's review was using the beta driver for the Radeon 7000 cards. The BETA DRIVER! Of course it didn't work perfectly with a seven-month-old driver that was made before the 7970 even hit retail.

How long it took for good drivers to be released is irrelevant because they are out now. How geared towards the Radeon 7000 cards the 12.7 driver is doesn't matter if the OP upgrades to them. Driver focus can shift all it wants because the current driver is incredible. AMD didn't drop support for the pre-5000 cards; they slowed down driver releases for them. Nvidia did the same thing with their older cards too. Try getting an 8800 GT to run with the current newest driver supported by Kepler. AMD won't slow down support for the Radeon 7000 cards until 2015 or 2016, by which time they would truly be obsolete, as would the GTX 600 cards.

The only good point you had was that AMD lacks adaptive V-Sync, a feature that often doesn't work anyway; granted, in the future it could become a great advantage if AMD doesn't get something similar.

If we want to dwell on past problems, then how about the fact that Nvidia had many severe stuttering problems, V-Sync problems, and underclocking problems for more than a month until a few weeks ago when they finally fixed them with a new driver release.
August 14, 2012 11:13:20 PM

17seconds said:
I agree with the Asus DCUII GTX 680. It's whisper silent at full gaming load. I can't seem to find it, but I saw a really good review of the EVGA Signature 2 where it was leading out several high-end 680's. I actually think I would go with the EVGA.

Really, this decision is not so much about performance, you can pull out any number of scenarios where one beats another, but usually only by a few FPS one way or the other. The big question is whether you place any value on the Nvidia ecosystem. Since AMD does not seem to have anything special they offer with their cards (maybe GCN or MLAA2?), you should decide if you have any interest in the extra available settings and features that come with an Nvidia card. These include PhysX, Adaptive VSync, FXAA, TXAA, good driver support, and good relationships with game developers that mean many new releases are fully supported the day they come out. I look at it this way, by buying an AMD card, you are betting that you will NEVER want to play a game with PhysX, Adaptive VSync, forced FXAA, or TXAA. TXAA, in particular is very promising and such an unknown, I would not want to place a bet against it.
http://www.hardocp.com/article/2012/04/16/nvidia_adapti...
http://alienbabeltech.com/main/?p=31233

With the SLI/Crossfire issue, HardOCP did several articles where they made note of driver problems with Crossfire and also noted that Crossfire seems to be less smooth. Take what you want from the articles, here they are:
http://www.hardocp.com/article/2012/01/17/amd_crossfire...
http://hardocp.com/article/2012/03/28/nvidia_kepler_gef...

Lastly, the issue of AMD drivers. Much has been said about the performance of the 12.7 drivers, some has been valid. Despite that, there are several concerns:
- First of all, is the fact that it took 6 months for them to be released, half way through the product cycle, assuming the 8000 series comes out at the end of the year.
- Second, the 12.7 drivers are almost exclusively geared towards performance in the 7000 series cards. As a 6970 owner, you may already be aware of this. Purchasing a 7000 series card seems like you may be setting up for disappointment when the driver focus shifts to the 8000 series towards the end of the year.
- Third, is the fact that AMD has dropped support for it's pre-5000 series cards. Obviously this means that obsolescence is hardwired into the business model. How long it will take before support for the 7000 series is dropped is anyone's guess, but it will happen.
http://benchmark3d.com/amd-catalyst-12-6-whql-12-7-beta...

In the end, there is more of a gamble with the AMD cards, and virtually none with the Nvidia cards.


Really?
PhysX is worthless, and if you seriously want to sacrifice your FPS for some pebbles being kicked around or some tearing cloth, then be my guest.
Adaptive VSync is completely useless for a high-end card.
FXAA is a complete joke. (Quality is horrible compared to MSAA.)
TXAA is very new, has not been picked up by devs, and will not be commonly utilized for a long while.
Crossfire scaling is better with 7970s than SLI scaling is with 680s.
AMD's Tahiti has great compute performance while Nvidia's Kepler has abysmal compute performance.
At an average price of $430, the 7970s will remain a great value even at the release of the 8970s, which will most likely start at around $550-$600.
Lastly, AMD did not release the 12.7 drivers for a long while because, before the Kepler series, they dominated every other card on the market. The fact that the 7000 series, which was made to destroy the GTX 500 series, can still beat the GTX 600 series, which was released a matter of months later, should tell you all you need to know about Tahiti GPUs.
a c 86 U Graphics card
August 14, 2012 11:19:23 PM

nacos said:
Really?
Adaptive vsync is completely useless for a high end card.
FXAA is a complete joke. (Quality is horrible compared to MSAA)
TXAA is very new and has not been picked up by devs and will not be commonly utilized for a long while.
Crossfire scaling is better with 7970s than SLI scaling with 680s.
AMD Tahiti has great compute performance while Nvidia kepler has abysmal compute performance.
At an average price of $430, the 7970s will remain a great value even at the release of the 8970s which will most likely start at around $550-$600.
Lastly, AMD did not release the 12.7 drivers for a long while because before the kepler series, they dominated every other card on the market. The fact that the 7000 series which was made to destroy the gtx 500 series can still beat the gtx 600 series which was released a matter of months after should tell you all you need to know about Tahiti gpus.


As much as I am in favor of the OP buying an AMD card in this scenario, I have to call you out for making poor points in support of AMD, or else I'd be a hypocrite.

TXAA isn't as poorly supported as you might think because the drivers can now force it in many games. Crossfire scaling with the 7900 cards is not really much better than SLI scaling with the GTX 600 cards. It sometimes seems that way because of the 7900's memory bandwidth advantage, but it is not actually true. Compute performance, although very important for some people, is not something that everyone cares a whole lot about. It is supposedly going to become more important in future games, so it's a good thing to mention, but the fact that it is only supposed to matter years from now should be mentioned as well.

Radeon 7000 wasn't made to kill GTX 500. It was made to replace Radeon 6000. It killed GTX 500 because it is a whole generation ahead of GTX 500. It also doesn't really kill GTX 600; GTX 600 competes quite well, although it could be better. Also keep in mind that the Tahiti GPU is not nearly as fast as the GK104 GPU for gaming. It only competes because of GK104's huge memory bandwidth bottleneck holding back that very fast GPU.
August 14, 2012 11:24:52 PM

I am giving my vote to the 7970 just because the performance difference is no longer wide enough to justify the $70 premium of the 680. Sure, you can compare it to the 670, but the extra 1GB of VRAM is enough to convince me that for a mere $10 difference, it is worth it. BF3 is using close to 2GB of VRAM already; an extra GB buys some future-proofing.

EDIT: Scaling is better on the Radeon cards, and if you are using multi-monitor setups, the extra VRAM is definitely worth it.
a c 535 U Graphics card
August 14, 2012 11:42:37 PM

blazorthon said:
PhysX is hardly any better than no PhysX, TXAA is needed by Nvidia to attempt to make up for the low memory bandwidth and doesn't actually look better than FXAA, adaptive V-Sync is a good arguing point, FXAA is crap, and the rest of that first paragraph is not as accurate as you'd like with the drivers and the rest is irrelevant at this point because AMD's cards have been out long enough to solve those problems.

HardwareOCP's review was using the beta driver for the Radeon 7000 cxards. The BETA DRIVER! Of course it didn't work perfectly with a seven month old driver that was made before the 7970 even hit retail.

How long it took for good drivers to be released is irrelevant because they are out now. How geared towards the Radeon 700 cards the 12.7 driver is doesn't matter if OP upgrades to them. Driver focus can shift all it wants because the current driver is incredible. AMD didn't drop support for the pre-5000 cards, they slowed down driver releases for them. Nvidia did the same thing with their older cards too. Try getting an 8800 GT to run with the current newest driver supported by Kepler. AMD won't slow down support for Radeon 7000 cards until 2015 or 2016, by which time they would truly be obsolete as would the GTX 600 cards.

The only good point that you had was that AMD lacks adaptive V-Sync, a feature that often doesn't work anyway, granted that in the future, it could become a great advantage if AMD doesn't get something similar. You also ignored the fact that Nvidia was the one having driver problems recently, not AMD. Nvidia had stuttering problems and underclocking problems for months that they finally fixed with one of their driver releases a few weeks ago.

So your whole argument is "no it isn't". Honestly, it's hard to take the word of an AMD user when it comes to writing about the things that come with an Nvidia card.

But see, the real problem is that your argument should be: "well, here's what AMD has, and it's better". And I've asked this repeatedly: what does AMD offer the gamer that's unique and adds value? I'm wondering if you were to stack up the pros and cons of each, what exactly would you list as pros for AMD? If anyone can answer this, I know it's you (said with respect).

"PhysX is hardly any better than no PhysX"
A preview of the PhysX effects in Borderlands 2. The reviewers from Gamespot, Eurogamer, and Destructoid would seem to disagree with your assessment:
http://physxinfo.com/news/7865/borderlands-2-will-be-en...

"TXAA is needed by Nvidia to attempt to make up for the low memory bandwidth and doesn't actually look better than FXAA"
You would need to provide an example of this rather than just state it.

"FXAA is crap"
Really, where do you come by this information? Why is it increasingly being used by game developers, and why do reviews state: "What Is FXAA, And Why Has It Made Anti-Aliasing As We Know It Obsolete?"
http://www.kotaku.com.au/2011/12/what-is-fxaa/

"HardwareOCP's review was using the beta driver for the Radeon 7000 cxards. The BETA DRIVER!"
So you're okay if reviews use the 12.7 beta drivers, but not under other circumstances if it doesn't serve your purposes? The problems with the Crossfire drivers are certainly not a big secret.

"Try getting an 8800 GT to run with the current newest driver supported by Kepler"
Sorry, but all the latest WHQL drivers from Nvidia not only support the 8800's, they also support the 7800's.

"AMD lacks adaptive V-Sync, a feature that often doesn't work anyway"
Again, where do you get this stuff?
According to HardOCP: "As a gamer, I personally prefer to play with no screen tearing, but also want the best performance possible up to my refresh rate. The answer for me is clear, I want Adaptive VSync technology."
http://www.hardocp.com/article/2012/04/16/nvidia_adapti...
August 14, 2012 11:59:04 PM

17seconds said:
So your whole argument is "no it isn't". Honestly, it's hard to take the word of an AMD user when it comes to writing about the things that come with an Nvidia card.

But see, the real problem is that your argument should be: "well, here's what AMD has, and it's better". And I've asked this repeatedly: what does AMD offer the gamer that's unique and adds value? I'm wondering if you were to stack up the pros and cons of each, what exactly would you list as pros for AMD? If anyone can answer this, I know it's you (said with respect).

"PhysX is hardly any better than no PhysX"
A preview of the PhysX effects in Borderlands 2. The reviewers from Gamespot, Eurogamer, and Destructoid would seem to disagree with your assessment:
http://physxinfo.com/news/7865/borderlands-2-will-be-en...

"TXAA is needed by Nvidia to attempt to make up for the low memory bandwidth and doesn't actually look better than FXAA"
You would need to provide an example of this rather than just state it.

"FXAA is crap"
Really, where do you come by this information? Why is it increasingly being used by game developers, and why do reviews state: "What Is FXAA, And Why Has It Made Anti-Aliasing As We Know It Obsolete?"
http://www.kotaku.com.au/2011/12/what-is-fxaa/

"HardwareOCP's review was using the beta driver for the Radeon 7000 cxards. The BETA DRIVER!"
So you're okay if reviews use the 12.7 beta drivers, but not under other circumstances if it doesn't serve your purposes? The problems with Crossfire drivers is certainly not a big secret.

"Try getting an 8800 GT to run with the current newest driver supported by Kepler"
Sorry, but all the latest WHQL drivers from Nvidia not only support the 8800's, they also support the 7800's.

"AMD lacks adaptive V-Sync, a feature that often doesn't work anyway"
Again, where do you get this stuff?
According to HardOCP: "As a gamer, I personally prefer to play with no screen tearing, but also want the best performance possible up to my refresh rate. The answer for me is clear, I want Adaptive VSync technology."
http://www.hardocp.com/article/2012/04/16/nvidia_adapti...


Adaptive VSync only helps when you drop below your refresh rate. I never drop below my refresh rate. I also never use vsync because the screen tearing is only a mild drawback compared to the input lag that comes with vsync. FXAA is not anti aliasing. FXAA is a blur effect used to disguise the problem, not fix it.
Let me tell you what AMD has. AMD has very powerful graphics cards that are kept at very reasonable price points because they do not charge 15% more for sprinkles on the cake.
a c 535 U Graphics card
August 15, 2012 12:04:31 AM

nacos said:
Adaptive VSync only helps when you drop below your refresh rate. I never drop below my refresh rate. I also never use vsync because the screen tearing is only a mild drawback compared to the input lag that comes with vsync. FXAA is not anti aliasing. FXAA is a blur effect used to disguise the problem, not fix it.
Let me tell you what AMD has. AMD has very powerful graphics cards that are kept at very reasonable price points because they do not charge 15% more for sprinkles on the cake.

Now that I can get behind. Thanks.
Both of you.
a c 86 U Graphics card
August 15, 2012 12:05:50 AM

17seconds said:
So your whole argument is "no it isn't". Honestly, it's hard to take the word of an AMD user when it comes to writing about the things that come with an Nvidia card.

But see, the real problem is that your argument should be: "well, here's what AMD has, and it's better". And I've asked this repeatedly: what does AMD offer the gamer that's unique and adds value? I'm wondering if you were to stack up the pros and cons of each, what exactly would you list as pros for AMD? If anyone can answer this, I know it's you (said with respect).

"PhysX is hardly any better than no PhysX"
A preview of the PhysX effects in Borderlands 2. The reviewers from Gamespot, Eurogamer, and Destructoid would seem to disagree with your assessment:
http://physxinfo.com/news/7865/borderlands-2-will-be-en...

"TXAA is needed by Nvidia to attempt to make up for the low memory bandwidth and doesn't actually look better than FXAA"
You would need to provide an example of this rather than just state it.

"FXAA is crap"
Really, where do you come by this information? Why is it increasingly being used by game developers, and why do reviews state: "What Is FXAA, And Why Has It Made Anti-Aliasing As We Know It Obsolete?"
http://www.kotaku.com.au/2011/12/what-is-fxaa/

"HardwareOCP's review was using the beta driver for the Radeon 7000 cxards. The BETA DRIVER!"
So you're okay if reviews use the 12.7 beta drivers, but not under other circumstances if it doesn't serve your purposes? The problems with Crossfire drivers is certainly not a big secret.

"Try getting an 8800 GT to run with the current newest driver supported by Kepler"
Sorry, but all the latest WHQL drivers from Nvidia not only support the 8800's, they also support the 7800's.

"AMD lacks adaptive V-Sync, a feature that often doesn't work anyway"
Again, where do you get this stuff?
According to HardOCP: "As a gamer, I personally prefer to play with no screen tearing, but also want the best performance possible up to my refresh rate. The answer for me is clear, I want Adaptive VSync technology."
http://www.hardocp.com/article/2012/04/16/nvidia_adapti...


Actually, I use both AMD and Nvidia cards. I owned a GTX 560 Ti up until it died earlier this year, and I replaced it with a Radeon 7850. I also build computers for many people, and I've worked with some GTX 600 cards lately. What AMD has that's better with the 7950 versus the 670 (seeing as I've already explained why the 7970, 680, and 7970 GHz Edition aren't worth buying) is price, overclocking performance, much more memory bandwidth (which helps future-proofing a lot in this scenario), and a 1GB VRAM capacity advantage, which also helps future-proofing, albeit not as much as the bandwidth advantage.

Sure, FXAA isn't really crap, but it is not an advantage. I've used it and it is a much lighter form of AA (if even AA at all) in both performance and quality. TXAA is not an advantage strictly because of Kepler's huge memory bandwidth bottlenecks. It is very light on performance but heavy on quality, a great combination. It would be an advantage if not for the memory bandwidth of Kepler being so low that Radeon 7900 MSAA can still outperform GTX 600 TXAA in many situations, especially in very high-end setups. For what it does here, it does very well. If not for TXAA, any time the memory bandwidth and/or capacity becomes too problematic for MSAA on Kepler, Nvidia users would have to resort to lower-quality FXAA or no AA at all.

I see adaptive V-Sync not working quite often, from my own tests and from having helped several forum members who had problems with it (I'm almost in the double digits for helping people with adaptive V-Sync problems); there was also a Tom's article that partially addressed this. The numerous Nvidia forum threads full of people having problems were also a clue that it was happening. The problems have improved greatly, but it is still not perfect. When it is, like I said before, it will be one of Nvidia's greatest advantages. I have no doubt that this won't take much longer; Nvidia is usually pretty good at fixing problems like this quickly and effectively.

Maybe an 8800 GT will work with current drivers; I don't have one to test with. However, my 8500 GT does not. It needed an older driver, and that did not make me happy, to say the least.
a c 86 U Graphics card
August 15, 2012 12:07:57 AM

Quote:
Please tell me you're not going to be using them on a puny 1080p screen. Tell me you have 3 screens or a 30-inch 2560x1600 one; anything less than what I mentioned is a waste for those cards. I say a 680 or two, but I'm biased right now.

http://i71.photobucket.com/albums/i145/Soldier36/20120726_221503.jpg


1080p with settings maxed out and some AA can bring even the GTX 670 and Radeon 7900 cards under 60 FPS averages in several games.
a c 86 U Graphics card
August 15, 2012 12:46:34 AM

Well OP, here are the arguments for each card. This is the particular 7950 model that I'd recommend:
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

I have friends who have this card and have compared it to several other 7950s, and it was the best in every way except for its cooler taking up a little more space than other coolers need. The current top review of this card is actually from one of my friends, and I consider it quite reliable and useful.
a c 216 U Graphics card
August 15, 2012 1:00:15 AM

If you are going to go on and on about how the 7970 has more bandwidth, at least show some benchmarks to prove it. I see nothing in these to show there is any advantage, and I can't find anything that shows an advantage. They trade blows.

http://hardocp.com/article/2012/05/14/geforce_680_670_v...
http://hardocp.com/article/2012/05/14/geforce_680_670_v...
(The 2nd link shows OC'ed comparisons)

Anyways, FXAA is most comparable to MLAA, but much better; I haven't tried MLAA 2.0, though, which could be improved. It's useful in two ways: it doesn't require much power to use, and it works on any game, even when other forms of AA don't work.

We don't know much about TXAA, because only 1 game has it, and it's an MMO I've never tried.

PhysX isn't often useful, but if you do play some of these games, it may have appeal to you.

3D Vision can be useful too, if you want 3D (I do, and it's the reason I have Nvidia cards).
a c 535 U Graphics card
August 15, 2012 1:00:38 AM

blazorthon said:
Well OP, here are the arguments for each card. This is the particular 7950 model that I'd recommend:
http://www.newegg.com/Product/Product.aspx?Item=N82E168...

I have friends whom have this card and have compared it to several other 7950s and it was the best in every way except for its cooler taking up a little more space than other coolers need. The current top review of this card is actually from one of my friends and I consider it quite reliable and useful.

It's not a good time to buy a 7950, or any card really, just a couple days before the GTX 660 Ti gets released.
a c 86 U Graphics card
August 15, 2012 1:15:16 AM

17seconds said:
It's not a good time to buy a 7950, or any card really, just a couple days before the GTX 660 Ti gets released.


The 660 Ti is a 670 with a 25% cut in memory interface width (192-bit versus 256-bit) and maybe some frequency changes. It can't compete with the 7950 when overclocking is considered, if even when it's not. The 660 Ti, at best, would be able to compete with the 7870 when overclocking is considered. I would agree with you if the OP were considering the 7870, but not the 7950 or above.
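
For the bandwidth side of that comparison, here's a small sketch using the published bus widths and the reference GDDR5 data rates (assumed figures; factory-overclocked retail cards will vary a bit).

# Paper memory bandwidth in GB/s = bus width (bits) / 8 * effective data rate (GT/s).
# Assumed reference rates: 6.0 GT/s for the GTX 660 Ti and 670, 5.0 GT/s for the HD 7950.
def bandwidth_gbps(bus_bits, data_rate_gtps):
    return bus_bits / 8 * data_rate_gtps

print("GTX 660 Ti (192-bit):", bandwidth_gbps(192, 6.0), "GB/s")   # 144.0
print("GTX 670    (256-bit):", bandwidth_gbps(256, 6.0), "GB/s")   # 192.0
print("HD 7950    (384-bit):", bandwidth_gbps(384, 5.0), "GB/s")   # 240.0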
a c 535 U Graphics card
August 15, 2012 1:37:13 AM

blazorthon said:
The 660 Ti is a 670 with a 25% cut in memory interface width (192-bit versus 256-bit) and maybe some frequency changes. It can't compete with the 7950 when overclocking is considered, if even when it's not. The 660 Ti, at best, would be able to compete with the 7870 when overclocking is considered. I would agree with you if the OP were considering the 7870, but not the 7950 or above.

That's contradicted by the advance reviews and leaked benchmarks. But still, the logic stands: the release of the GTX 660 Ti will likely either bring a card that beats the 7950 for less money, or cause the prices of the 7950 to drop, or both.
a b U Graphics card
August 15, 2012 2:09:46 AM

That's... a lot of information. Thanks guys. Quick couple of questions.

Is it worth saving my money for a Radeon 8000 or GTX 700 series card? With school starting up in a week and a half, and being as I'm a third year premed student with a very difficult course load coming up, I won't have very much time left for gaming. I could easily put my machine aside until December. I'd just stick my old GTX 460 in there so it's operable.

Is it worth waiting or should I just jump on a pair of GTX 680's?
a c 535 U Graphics card
August 15, 2012 2:18:40 AM

stant1rm said:
Thats... A lot of information. Thanks guys. Quick couple of questions.

Is it worth saving my money for a Radeon 8000 or GTX 700 series card? With school starting up in a week and a half, and being as I'm a third year premed student with a very difficult course load coming up, I won't have very much time left for gaming. I could easily put my machine aside until December. I'd just stick my old GTX 460 in there so it's operable.

Is it worth waiting or should I just jump on a pair of GTX 680's?

Personally, I'm waiting out this round because the jump in performance is not enough to justify the upgrade from a GTX 580. If you can get by with a GTX 460, which is still a good card for gaming, then wait.

I do think that it's more likely that the 8000 series will be released by the end of this year, and less likely that the GTX 780 will. All the same, the GK110 is sitting in a lab somewhere being prepped for whatever comes next. It's also a good bet that Nvidia will unleash it to counter the 8000 cards. I guess the bottom line is, you just never know what will be released and when in terms of the next generations. Maybe an argument for getting the two 680's now?
a c 216 U Graphics card
August 15, 2012 2:42:54 AM

17seconds said:
Personally, I'm waiting out this round because the jump in performance is not enough to justify the upgrade from a GTX 580. If you can get by with a GTX 460, which is still a good card for gaming, then wait.

I do think that it's more likely that the 8000 series will be released by the end of this year, and less likely that the GTX 780 will. All the same, the GK110 is sitting in a lab somewhere being prepped for whatever comes next. It's also a good bet that Nvidia will unleash it to counter the 8000 cards. I guess the bottom line is, you just never know what will be released and when in terms of the next generations. Maybe an argument for getting the two 680's now?


You may be right about a lot of that, but I believe the GK110 has already been slated to be their workstation card. It doesn't have the GPGPU parts cut out, and I believe they plan to continue to keep that out of their gaming cards. I'm expecting a whole new arch for the 700 series.
a c 535 U Graphics card
August 15, 2012 2:48:35 AM

bystander said:
You may be right about a lot of that, but I believe the GK110 has already been slated to be their workstation card. It doesn't have the GPGPU parts cut out, and I believe they plan to continue to keep that out of their gaming cards. I'm expecting a whole new arch for the 700 series.

My thinking, and I don't know if it's true, is that the GK110 can be adapted for either workstation or gaming, as I believe it was originally intended. The success of the mid-range GK104 chip really took the pressure off, and now they have the luxury of perfecting the GK110 and holding on to it until just the right time. If you recall, the release of the 6970 was a dud because Nvidia came out of nowhere with the GTX 580 and caught everyone by surprise. I can see the same happening this time too.
a c 86 U Graphics card
August 15, 2012 2:56:12 AM

bystander said:
If you are going to go on and on about how the 7970 has more bandwidth, at least show some benchmarks to proove it. I see nothing in these to show there is any advantage, and I can't find anything that shows an advantage. They trade blows.

http://hardocp.com/article/2012/05/14/geforce_680_670_v...
http://hardocp.com/article/2012/05/14/geforce_680_670_v...
(The 2nd link shows OC'ed comparisons)

Anyways, FXAA is most comparable to MLAA, but much better, but I haven't tried MLAA 2.0, which could be improved. It's useful in 2 ways. It doesn't require much power to use, and it works on any game, even when other forms of AA doesn't work.

We don't know much about TXAA, because only 1 game has it, and it's an MMO I've never tried.

PhysX isn't often useful, but if you do play some of these games, it may have appeal to you.

3D Vision can be useful too, if you want 3D (I do, and the reason I have Nvidia cards).


Okay, I'll explain the impact of memory bandwidth for those who don't yet understand it. I don't need to show benchmarks to prove that the 7970 has more memory bandwidth; that is a simple fact. A 384-bit-wide GDDR5 interface at 1.375GHz has much more bandwidth than a 256-bit-wide GDDR5 interface at about 1.5GHz. How this affects performance can vary greatly between situations. For example, increasing the amount of AA, the resolution, and the texture quality can greatly increase the GPU's need for memory bandwidth. Having insufficient memory bandwidth hurts performance scaling as the workload increases.
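
To make those figures concrete, here's a minimal sketch of the paper bandwidth math, plus a deliberately crude illustration of why higher resolution and MSAA lean on it (real frames also move depth, G-buffer, and texture data, so treat the buffer sizes as a lower bound; the data rates are the reference specs).

# Paper bandwidth: bus width / 8 * effective GDDR5 rate. The 1.375 GHz and ~1.5 GHz
# base clocks above correspond to 5.5 and 6.0 GT/s effective.
def bandwidth_gbps(bus_bits, effective_gtps):
    return bus_bits / 8 * effective_gtps

hd7970 = bandwidth_gbps(384, 5.5)   # ~264 GB/s
gtx680 = bandwidth_gbps(256, 6.0)   # ~192 GB/s
print(f"HD 7970: {hd7970:.0f} GB/s, GTX 680: {gtx680:.0f} GB/s")

# A 32-bit color target grows linearly with pixel count and with MSAA sample count.
def color_target_mib(width, height, msaa_samples=1, bytes_per_pixel=4):
    return width * height * msaa_samples * bytes_per_pixel / 2**20

for (w, h), aa in [((1920, 1080), 1), ((2560, 1600), 4), ((5760, 1200), 8)]:
    print(f"{w}x{h} @ {aa}x MSAA -> {color_target_mib(w, h, aa):.0f} MiB per color target")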

Radeon 7900 gets faster relative to GTX 600 as the workload increases, and this is why. The 7900 cards have enough memory bandwidth to handle the heavier workloads, but the GTX 600 cards do not, so they falter and the 7900s pull further and further ahead at higher resolutions. This is especially obvious in particularly intense games that can really push the memory bandwidth to its limits. Such games show a great preference for Radeon 7900, to such an extent that the GTX 670 and GTX 680 can actually almost be overshadowed by the GTX 580, which has roughly the same memory bandwidth.

If you want proof, you merely have to look through Tom's and Anand's benchmarks (keep in mind that outdated benchmarks, although just as good for showing this phenomenon as up-to-date ones, do not show how well the cards perform today in the same situations). Take Metro 2033 as an example. At lower resolutions, such as 1080p, the GTX 680 was able to stay ahead of the Radeon 7970 when the 680 first came out.

However, increase the resolution to 2560x1600 and then to 5760x1080 or 5760x1200 and the 680 actually lost to the 7970 by increasingly large margins, so long as AA wasn't dropped. Increasing AA made it even worse for Nvidia, especially when the memory bandwidth and capacity would occasionally become such bottlenecks that the game would be unplayable without decreased settings, even though the GPU itself was more than capable of keeping up; the memory just couldn't.

Memory bandwidth was a good advantage that Nvidia had with the Fermi cards in several comparisons against the VLIW5/VLIW4 cards, but back then games simply weren't as demanding as some games are today, and this phenomenon needed an extremely high-end setup for the time to show up. Nvidia's SLI scaling being inferior to VLIW4 CF also helped to mask it in such situations. SLI with the GTX 600 cards is much more efficient than previous versions, and it is Nvidia who has less memory bandwidth now, not AMD, so Nvidia gave themselves a big double whammy in this case by not giving their cards wider memory interfaces.

New games get progressively more intense as time goes on; that is a fact of gaming's technological progression. As games get increasingly intense, decreasing or keeping the same memory bandwidth is going to hurt Nvidia in the long run much worse than even their near abandonment of double-precision compute performance in their consumer cards. One thing that has to be faced when buying Nvidia is that, although they keep their older drivers updated fairly well, they truly do seem to design their cards to be upgraded often, because they continually give them bottlenecks. Although the Fermi cards had good memory bandwidth, like many of their predecessors they had poor memory capacity. The GTX 570 hurts the most from this among the Fermi cards, in that even at 1080p some games need settings such as AA kept in check for the memory capacity of the 1.25GiB models not to become a severe bottleneck.

With Radeon 7900 versus the GTX 670/680, the GTX cards have such low memory bandwidth that the much slower Tahiti GPU can keep up or even win in many situations. The 7900 cards will get better and better relative to GTX 600 because newer games won't be nearly as memory bandwidth and capacity limited on them as they would be on the GTX 600 cards. When SLI/CF is considered, most people increase the load compared to a single card rather than leaving it the same for 120Hz display gaming. This increases the load on the memory bandwidth and capacity more than a single card would with lower settings. For example, comparing a GTX 670 and a Radeon 7970 at 2560x1600 against GTX 670 SLI and Radeon 7970 CF at 2560x1600 with some serious AA shows the 7970s pulling ahead a little in most games, especially the very memory bandwidth intensive ones, more than they pull ahead without the AA. Heck, without the AA the 7970s might lose, yet win with the AA. This is actually common.

TXAA is Nvidia's chance to make up for their memory bandwidth and capacity shortfall. By reducing the need for memory bandwidth and capacity at a given picture quality, Nvidia can compete much better in high-end systems that can push the GK104's best memory configurations to their limits when using MSAA.

If you want a better example of memory bandwidth holding back a GPU, compare the performance of the Trinity A10 APUs to that of the desktop Radeon 6670. The 6670 is faster at first glance, but the A10's 384-core VLIW4 GPU is actually more powerful than the 6670's 480-core VLIW5 GPU. The A10 is held back by its memory bandwidth, and this is easily proven by overclocking the memory: its performance increases almost linearly with linearly increased memory bandwidth. The GTX 670 is not that memory bandwidth bottlenecked, but keep in mind that the GTX 670 and the GTX 680 have the same memory bandwidth and roughly the same performance despite the fact that the 680's GPU is over 15% faster than the 670's. Over 15% faster, yet the 680 is only 2-4% faster than the 670 in most situations. Heck, like I said, the GTX 580 can creep up on the GTX 670 and 680 in several situations!
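
As a sanity check on that 670-versus-680 point, here is the same naive shaders-times-clock estimate from earlier, with the reference base clocks assumed (it ignores boost behavior and everything else, so it only bounds the raw GPU gap):

# GK104 reference specs assumed: GTX 680 = 1536 cores @ 1006 MHz base,
# GTX 670 = 1344 cores @ 915 MHz base; both share a 256-bit, ~192 GB/s memory bus.
def alu_throughput(cores, core_mhz):
    return cores * core_mhz

gap = alu_throughput(1536, 1006) / alu_throughput(1344, 915) - 1
print(f"GTX 680 raw shader-throughput advantage over the 670: {gap:.0%}")   # ~26%
# Yet in games the measured gap is only a few percent, which fits the shared
# memory subsystem being the limiting factor.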

How much memory bandwidth affects performance can vary, but its impact is increased mostly by increased AA and texture quality. How much memory bandwidth is needed increases less than linearly with linearly increased GPU performance, but AA can almost reverse that trend.
a c 86 U Graphics card
August 15, 2012 3:00:07 AM

17seconds said:
My thinking, don't know if it's true, is that the GK110 can be adopted for either workstation or gaming, as I believe it was originally intended. The success of the mid-range GK104 chip really took the pressure off and now they have the luxury of perfecting the GK110 and holding on to it until just the right time. If you recall the release of the 6970 was a dud because Nvidia came out of nowhere with the GTX 580 and caught everyone by surprise. I can see the same happening this time too.


The GK110 lacks many components needed for a gaming GPU. It would need substantial redesigning to be modified for gaming purposes. It is literally almost nothing but a huge compute GPU in its current form.
a c 86 U Graphics card
August 15, 2012 3:04:39 AM

17seconds said:
That's contradicted by the advance reviews and leaked benchmarks. But still, the logic stands, the release of the GTX 660 Ti will likely either be a card that beats the 7950 for less money, and/or will cause the prices of the 7950 to drop.


No, it isn't contradicted by the current reviews. The 7950 might drop in price, but that would only be on the basis of stock performance. The overclocking performance of the 7950 is still far higher than that of the 660 Ti. That is why I said it is better suited to competing against the 7870 rather than the 7950. The 7950 is literally a 7970 with a small number of parts disabled and a much lower clock frequency, and that clock frequency drop accounts for nearly 100% of the performance loss compared to the 7970. Simply bumping the frequency up to the 7970's lets it almost exactly match the 7970, similar to how the 670 matches the 680, although for different reasons (the 670 matches the 680 because of identical memory bandwidth, whereas the 7950 matches the 7970 because of nearly identical GPU performance). The 660 Ti might be able to beat the 7950 in stock performance (entirely plausible), but it loses substantially in overclocking performance.
a c 216 U Graphics card
August 15, 2012 3:12:34 AM

blazorthon said:
Okay, I'll explain the impact of memory bandwidth for those who don't yet understand it. I don't need to show benchmarks to prove that the 7970 has more memory bandwidth. This is a simple fact. 384 bit-wide GDDR5 interface at 1.375GHz has much more bandwidth than a 256 bit-wide GDDR5 interface at about 1.5GHz. How this affects performance can vary greatly between different situations. For example, increasing the amount of AA, the resolution, and texture quality can greatly increase the GPU's need for memory bandwidth. Having insufficient memory bandwidth hurts performance scaling as the workload increases.

Radeon 7900 gets faster relative to GTX 600 as the workload increases and this is why it happens. They have enough memory bandwidth to handle the heavier workloads, but the GTX 600 cards do not have enough to handle heavier and heavier workloads as well as Radeon 7900 does, so they falter and 7900 pulls further and further ahead with higher resolutions. This is especially obvious in particularly intense games that can really push the memory bandwidth to its limits. Such games show great preferences for RAdeon 6900 to such an extent that the GTX 670 and the GTX 680 can actually almost be overshadowed by the GTX 580 that has roughly the same memory bandwidth.

If you want proof, you merely have to look through Tom's and Anand's benchmarks (keep in mind that outdated benchmarks, although just as good for showing such phenomenon as up-to-date benchmarks, do not show how well cards will perform today in the same situations). Take Metro 2033 as an example. At lower resolutions, such as 1080p, the GTX 680 was able to stay ahead of the Radeon 7970 when the 680 first came out.

However, increase the resolution to 2560x1600 and then to 5760x1080 or 5760x1200 and the 680 actually lost to the 7970 by increasingly large margins so lnog as AA isn't dropped. Increasing AA made it even worse for Nvidia, especially when the memory bandwidth and capacity would occasionally become such bottle-necks that the game would be unplayable without decreased settings even though the GPU was far more than capable of keeping up, but the memory just couldn't.

Memory bandwidth was a good advantage that Nvidia had with the Fermi cards in several comparisons to the VLIW5/VLIW4 cards, but back then games simply weren't all as demanding as some games are today and these phenomenon needed extremely high end setups for the time to show up. Nvidia's inferior SLI scaling to VLIW4 CF helped to mask this very well in such situations. SLI with the GTX 600 cards is much more efficient than previous versions and it is Nvidia who has less memory bandwidth now, not AMD, so Nvidia gave themselves a big double whammy in this case by not giving their cards wider memory interfaces.

New games get progressively more and more intense as time goes on. This is a fact of gaming technological progression. As we get increasingly intense games, decreasing or leaving the same memory bandwidth is going to hurt Nvidia in the long run much worse than even their near abandonment of dual-precision compute performance in their consumer cards. One thing that has to be faced when buying Nvidia is that although they keep their older drivers updated and such fairly well, they truly do seem to design their cards to be upgraded often because they continually give them bottle-necks. Although the Fermi cards had good memory bandwidth, like many of their predecessors, they had poor memory capacity. The GTX 570 hurts the most from this in the Fermi cards in that even in 1080p, some games need to have settings such as AA kept in check for the memory bandwidth of the 1.25GiB models to not become a severe bottle-neck.

With Radeon 7900 versus GTX 670/680, the GTX cards have such low memory bandwidth that the much slower Tahiti GPU can keep up or even win in many situations. The 7900 cards will get better and better relative to GTX 600 because newer games won't be nearly as memory bandwidth and capacity limited as they would be with GTX 600 cards. When SLI/CF is considered, most people increase the load compared to the single card rather than leaving it the same for 120Hz display gaming. This increases the load on the memory bandwidth and capacity more than a single card would do with lower settings. For example, comparing 2560x1600 on a GTX 670 and a Radeon 7970 to 2560x1600 with some serious AA on GTX 670 SLI and RAdeon 7970 CF shows the 7970s pulling ahead a little in most games, especially in very memory bandwidth intensive games, more than they pull ahead without the AA. Heck, without the AA, the 7970s might lose, yet win with the AA. This is actually common.

TXAA is Nvidia's chance to make up for their memory bandwidth and capacity fallacy. By reducing the need for memory bandwidth and capacity for a given picture quality, Nvidia can compete much better in high end systems that can push the GK104's best memory configurations to there limits when using MSAA.

If you want a better example of memory bandwidth holding back a GPU, then compare the performance of Trinity A10 APUs to that of the desktop Radeon 6670. You see the 6670 is faster at first, but the A10's 384 core VLIW4 GPU is actually more powerful than the 6670's 480 core VLIW5 GPU. The A10 is held back by its memory bandwidth and this is easily proven by overclocking the memory. When the GPU is overclocked, it can increase in performance almost linearly with linearly increased memory bandwidth. The GTX 670 is not this memory bandwidth bottle-necked, but keep in mind that the GTX 670 and the GTX 680 have the same memory bandwidth and roughly the same performance despite the fact that the 680's GPU is over 15% faster than the GTX 670's GPU. 15% faster, yet the 680 is only less than even 2-4% faster than the 670 in most situations. Heck, like I said, the GTX 580 can creep up on the GTX 670 and 680 in several situations!

How much memory bandwidth affects performance can vary, but its impact is increased mostly by increased AA and texture quality. How much memory bandwidth is needed increases less than linearly with linearly increased GPU performance, but AA can almost reverse that trend.


I'm not going to read a multi-page essay on the subject. What I am saying is: can you show any benchmarks that prove it has any significant impact on performance? Theory is great. More bandwidth is great, but is it needed? Does it actually help?

The benchmarks I provided, as well as many others, would say no.
a c 86 U Graphics card
August 15, 2012 3:27:47 AM

bystander said:
I'm not going to read a multipage essay on the subject. What I am saying is can you show any benchmarks that prove it has any significant impact on performance? Theory is great. More bandwidth is great, but is it needed? Does it actually help?

The benchmarks I provided, as well as many others, would say no.


Actually, your benchmarks do not say that; you just don't understand how the graphics cards work (no offense intended by this statement) and how the benchmarks reflect how the graphics cards work. This is not a surprise, especially given that many *professional* testers, even Tom's, often don't have the best understanding of this either, and they make that fairly clear when they state that they don't know why some cards perform the way they do, although they can often give fairly accurate estimations.

http://www.anandtech.com/show/5818/nvidia-geforce-gtx-6...

The second and third benchmarks on this page show how an increase in resolution, with nothing else changing, affects performance. We can clearly see that with the higher memory load this imposes, the 7970 made up a lot of lost ground: its result went from 83% of the 680's to 94% of the 680's. The first benchmark, at 5760x1080, might at first seem to discredit this, but it actually puts less work on the memory interface than the 2560x1600 benchmark does, because the much lower FPS means fewer frames (and thus less data) passing through the interface, and the AA is disabled, which greatly reduces memory traffic as well.

Looking at any Llano/Trinity benchmarks that involve widely varying RAM bandwidth shows this to an even greater and more easily seen extent. A good way to prove it beyond any doubt would be comparing the 7970 and the 680 against a 7970 with the memory frequency brought down to 1GHz. Maybe I can get some friends to help with a comparison if I don't get any clients looking for computers with these cards anytime soon.
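
For what it's worth, here's why that particular test would be clean: dropping the 7970's memory to 1GHz puts its paper bandwidth right on the 680's figure (reference data rates assumed), so bandwidth becomes the matched variable.

# 1.0 GHz GDDR5 base clock = 4.0 GT/s effective on the 7970's 384-bit bus.
def bandwidth_gbps(bus_bits, effective_gtps):
    return bus_bits / 8 * effective_gtps

print("HD 7970 stock          :", bandwidth_gbps(384, 5.5), "GB/s")   # 264.0
print("HD 7970 at 1 GHz memory:", bandwidth_gbps(384, 4.0), "GB/s")   # 192.0
print("GTX 680 stock          :", bandwidth_gbps(256, 6.0), "GB/s")   # 192.0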
a c 216 U Graphics card
August 15, 2012 3:31:51 AM

blazorthon said:
Actually, your benchmarks do not say that, you just don't understand how the graphics cards work (no offense intended by this statement) and how the benchmarks reflect how the graphics cards work. This is not a surprise, especially given that many *professional* testers, such as even Tom's, often don't have great understanding of this either.

http://www.anandtech.com/show/5818/nvidia-geforce-gtx-6...

The second and third benchmark on this page show how an increase of resolution and nothing else changed affects performance. We can clearly see that with the higher memory load imposed by this, the 7970 made up a lot of lost ground. Its performance number went from being 83% of the 680 to being 94% of the 680. The first benchmark, the 5760x1080 benchmark, might at first seem to discredit this, but it actually puts less work on the memory interface than the 2560x1600 benchmark does because it reduces the FPS greatly (fewer frames being transferred means fewer data passing through the interface) and the AA is disabled, greatly reducing the amount of data passing through the memory interface.

Looking at any Llano/Trinity benchmarks that involve widely varying RAM bandwidth show this to an even greater and more easily shown extent. A good way to prove this beyond any doubt would be comparing the 7970 and the 680 to a 7970 with the memory frequency brought down to 1GHz. Maybe I can get some friends to help with a comparison if I don't get any clients looking for computers with these cards anytime soon.


Alright, so what you are saying is that because of the extra bandwidth, in situations using 2560x1600 monitors, it just doesn't lose by as much. In three-monitor situations it doesn't change, which I assume is because each monitor gets its own interface.

The difference is there, but not significant enough to change the card which performs best.
a c 86 U Graphics card
August 15, 2012 3:37:24 AM

bystander said:
Alright, so what you are saying is that because of the extra bandwidth, in situations of using 2560x1600 monitors, it just doesn't lose by as much. In 3 monitor situations, it doesn't change, which I assume is because of each monitor getting it's own interface.

The difference is there, but not significant enough to change the card which performs best.


Sorry, but that is not what I'm saying. The three-monitor test didn't show this because it had much less of a memory bandwidth limitation; it simply used much less memory bandwidth, so the 680 showed less of a memory bandwidth bottleneck. This phenomenon can be even greater, especially when overclocking is considered (increasing the 680's RAM frequency linearly does not increase its bandwidth by as much as linearly increasing the 7970's memory frequency does, because of the 7970's wider bus). Overclocking shows this phenomenon to a greater extent, and current drivers actually let the 7970 pull ahead at 2560x1600 in many (perhaps most) games, especially the 7970 GHz Edition, which is just a 7970 with better-binned and thus higher-clocked parts. Keep in mind that these tests were performed when the 670 launched, so they use outdated drivers.
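
A quick sketch of that overclocking point, with the reference data rates assumed: the same percentage memory overclock adds more absolute GB/s on the 384-bit bus than on the 256-bit one, even though the ratio between the two cards stays fixed.

# Paper bandwidth again: bus width / 8 * effective GDDR5 rate.
def bandwidth_gbps(bus_bits, effective_gtps):
    return bus_bits / 8 * effective_gtps

for oc in (1.00, 1.10):   # stock, then a hypothetical +10% memory overclock on both
    hd7970 = bandwidth_gbps(384, 5.5 * oc)
    gtx680 = bandwidth_gbps(256, 6.0 * oc)
    print(f"{oc - 1:+.0%} memory OC -> 7970: {hd7970:.0f} GB/s, "
          f"680: {gtx680:.0f} GB/s, gap: {hd7970 - gtx680:.0f} GB/s")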

The difference is actually often more than enough to change which card performs best, or to let AMD pull further ahead when they're already ahead, especially when SLI/CF is used. The 7800 cards versus the 660 TI might be able to repeat this in the mid-range market thanks to their wider GDDR5 bus, though to a lesser extent since the difference in bus width is smaller.
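
To put very rough numbers on why the no-AA triple-monitor test leans on the memory interface less than 2560x1600 with AA does (a crude estimate only; real traffic also includes texture reads, blending and resolves and is reduced by compression and caching, and the frame rates below are just placeholders, so only the ratio means anything):

def framebuffer_traffic_gbs(width, height, msaa_samples, fps):
    bytes_per_sample = 4 + 4  # assumed: 32-bit color + 32-bit depth written per sample
    per_frame_bytes = width * height * msaa_samples * bytes_per_sample
    return per_frame_bytes * fps / 1e9

print(framebuffer_traffic_gbs(2560, 1600, 4, 60))  # 2560x1600, 4xAA, 60fps  -> ~7.9 GB/s of raw framebuffer writes
print(framebuffer_traffic_gbs(5760, 1080, 1, 35))  # 5760x1080, no AA, 35fps -> ~1.7 GB/s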
a c 216 U Graphics card
August 15, 2012 3:42:54 AM

blazorthon said:
Sorry, but that is not what I'm saying. The three-monitor test didn't show this because it had much less of a memory bandwidth limitation. It simply used much less memory bandwidth, so the 680 showed less of a memory bandwidth bottle-neck. This phenomenon can be even greater, especially when overclocking is considered (linearly increasing the 680's RAM frequency does not increase its bandwidth as much as linearly increasing the 7970's memory frequency does, because of the 7970's wider bus). Overclocking shows this phenomenon to a greater extent, and current drivers actually let the 7970 pull ahead at 2560x1600 in many (perhaps most) games (especially the 7970 GHz Edition, which is just a 7970 with better-binned and thus better-clocked parts). Keep in mind that these tests were performed when the 670 launched, so they use outdated drivers.

The difference is actually often more than enough to change which card performs the best or let AMD pull ahead further, especially when SLI/CF is used.


I've been looking at a lot of benchmarks of multi and single 7950/70 and 670/80 setups, and I'm not seeing any that show a big difference between cards, no matter the OC. I have also noticed, like you are pointing out, that the ones which favor AMD the most are the 1600p setups, but somehow the gap at 5760x1080 is almost identical to the gap at normal 1080p. I'll see if I can't show you what I saw in a bit. I'm currently in the middle of a driver update.

EDIT: Here are some benchmarks. These show Crossfire/SLI setups:
http://www.hardwareluxx.de/index.php/artikel/hardware/g...
http://www.hardwareluxx.de/index.php/artikel/hardware/g...
http://www.tomshardware.com/reviews/geforce-gtx-690-ben...
(The 5760x1080 results looked to be bottlenecked, as they show nearly the same numbers for all 3 setups.)
a c 86 U Graphics card
August 15, 2012 3:53:51 AM

bystander said:
I've been looking at a lot of benchmarks of multi and single 7950/70 and 670/80 setups, and I'm not seeing any that show a big difference between cards, no matter the OC. I have also noticed, like you are pointing out, that the ones which favor AMD the most are the 1600p setups, but somehow the gap at 5760x1080 is almost identical to the gap at normal 1080p. I'll see if I can't show you what I saw in a bit. I'm currently in the middle of a driver update.


One more major problem with the triple-monitor setups on older AMD drivers is that Eyefinity didn't work well and was often very unstable except with the driver from January. Sorry that I forgot to mention this. Benchmarks of the Eyefinity setups with Catalyst 12.6 and Catalyst 12.7 should work much better. The main problem with many tests is that they are often adjusted to hit specific FPS numbers with at least some of the cards, those numbers being considered at least reasonable FPS for gaming (generally well above 30-40FPS).

For example, many triple-1080p and triple-1920x1200 reviews have either low or no AA, maybe lower tessellation and texture quality, or other settings such as AF lowered or disabled. This throws a lot of complexity into the tests, and it can be extremely difficult even for the best to pull comparable information out of them. Even worse is when we aren't told about all the settings (some tests leave out the levels of AF, tessellation, and other info in their descriptions and charts), so we don't even know that they aren't comparable until we see tests that do have this information.

What I meant by "overclocking shows this to a greater extent" is overclocking combined with increased settings, not just overclocking. Overclocking by itself, although it increases the difference in memory bandwidth, still increases the 670/680's memory bandwidth, so overclocking alone would probably do the opposite of what I said. However, take a 7970 and a 680, overclock both to, say, a 1.2GHz GPU clock and a 1.7GHz or 1.8GHz memory clock, and raise the AA and such enough to justify the increase, and the difference between the two cards will grow. The 7970 would get a roughly 80GB/s memory bandwidth increase while the 680 would get a roughly 38GB/s increase. The 7970 simply has more bandwidth for the AA and such to work with.
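
Rough math for those numbers (again assuming reference memory clocks of 1375MHz on the 7970 and 1502MHz on the 680, and a 1.8GHz memory overclock on both):

def gddr5_bandwidth_gbs(mem_clock_mhz, bus_width_bits):
    # GDDR5 moves 4 bits per pin per clock; bus width converted to bytes
    return mem_clock_mhz * 1e6 * 4 * (bus_width_bits / 8) / 1e9

print(gddr5_bandwidth_gbs(1800, 384) - gddr5_bandwidth_gbs(1375, 384))  # 7970: ~82 GB/s gained
print(gddr5_bandwidth_gbs(1800, 256) - gddr5_bandwidth_gbs(1502, 256))  # 680:  ~38 GB/s gained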
a c 216 U Graphics card
August 15, 2012 3:58:08 AM

It's interesting following the benchmarks. Sometimes they show what you are saying, though not massively, but sometimes they do the opposite. Look at the BF3 and Skyrim benchmarks; the gap seems to widen as you go up in resolution.
a c 216 U Graphics card
August 15, 2012 4:01:50 AM

blazorthon said:
One more major problem with the triple-monitor setups on older AMD drivers is that Eyefinity didn't work well and was often very unstable except with the driver from January. Sorry that I forgot to mention this. Benchmarks of the Eyefinity setups with Catalyst 12.6 and Catalyst 12.7 should work much better. The main problem with many tests is that they are often adjusted to hit specific FPS numbers with at least some of the cards, those numbers being considered at least reasonable FPS for gaming (generally well above 30-40FPS).

For example, many triple-1080p and triple-1920x1200 reviews have either low or no AA, maybe lower tessellation and texture quality, or other settings such as AF lowered or disabled. This throws a lot of complexity into the tests, and it can be extremely difficult even for the best to pull comparable information out of them. Even worse is when we aren't told about all the settings (some tests leave out the levels of AF, tessellation, and other info in their descriptions and charts), so we don't even know that they aren't comparable until we see tests that do have this information.


All the benchmarks I posted are from May 2012, which is several months after the 7970 was released. The drivers may not be as good as today's, but they aren't the early drivers either. The 670/680/690s have also had major driver improvements since then.

Anyways, yes, there are lots of different settings being used. The end result is that these settings are what you can actually use at those resolutions. Theory may say one thing, but in reality we play at settings similar to the benchmarks, and we don't see much disparity.
a c 535 U Graphics card
August 15, 2012 4:07:39 AM

My perception has been that there are differences/advantages in certain areas, but the rhetoric has magnified them beyond what the data says. This goes for memory bandwidth, overclock scaling, Crossfire scaling, VRAM amounts, and the performance advantage from the 12.7 drivers. In a way, it's a masterful way of getting into the echo chamber and repeating the same thing over and over until it becomes accepted fact.
a c 86 U Graphics card
August 15, 2012 4:11:26 AM

bystander said:
All the benchmarks I posted are from May 2012, which is several months after the 7970 was released. The drivers may not be as good as today's, but they aren't the early drivers either. The 670/680/690s have also had major driver improvements since then.

Anyways, yes, there are lots of different settings being used. The end result is that these settings are what you can actually use at those resolutions. Theory may say one thing, but in reality we play at settings similar to the benchmarks, and we don't see much disparity.


The problem is that the benchmarks can paint an inaccurate picture of the performance differences when the settings aren't all comparable. For example, we might see 1080p benchmarks and 2560x1600 benchmarks that have the same settings except for the resolution (or with other settings disabled), but most people don't have 2560x1600 monitors. They care about what they will get at 1080p, and that means using very high AA to put the performance of these cards to real-world use.

In order to compare cards such as the GTX 670, 680, 660 TI, 690, Radeon 7970, 7970 GHz Edition, and 7950, people often use settings at which lower models, and even lower models from previous generations, can post respectable numbers. However, this gives Nvidia an inflated image, where their memory bandwidth disadvantage at more realistic settings is masked by the low memory bandwidth demands of lower settings that allow unrealistically high frame rates.

Another problem is that benchmarks are often cherry-picked in what they test to show one card beating another even when it shouldn't. On top of that, different people might play at different settings that give similar performance on one card but wildly different performance on another. For example, trading AA and texture quality for sheer high-resolution gaming can work similarly well on the 680, but the 7970 might behave very differently. The reverse can be true with other settings differences, especially when CF/SLI is brought into the equation.

Also, Catalyst 12.7 is much more of a performance improvement over previous AMD drivers than Nvidia's current drivers are over previous Kepler drivers. Tom's showed this to great effect in their coverage of the 7970 GHz Edition's launch.
a c 216 U Graphics card
August 15, 2012 4:15:25 AM

blazorthon said:
The problem is that the benchmarks can paint an inaccurate picture of the performance differences when the settings aren't all comparable. For example, we might see 1080p benchmarks and 2560x1600 benchmarks that have the same settings except for the resolution (or with other settings disabled), but most people don't have 2560x1600 monitors. They care about what they will get at 1080p, and that means using very high AA to put the performance of these cards to real-world use.

In order to compare cards such as the GTX 670, 680, 660 TI, 690, Radeon 7970, 7970 GHz Edition, and 7950, people often use settings at which lower models, and even lower models from previous generations, can post respectable numbers. However, this gives Nvidia an inflated image, where their memory bandwidth disadvantage at more realistic settings is masked by the low memory bandwidth demands of lower settings that allow unrealistically high frame rates.

Another problem is that benchmarks are often cherry-picked in what they test to show one card beating another even when it shouldn't. On top of that, different people might play at different settings that give similar performance on one card but wildly different performance on another. For example, trading AA and texture quality for sheer high-resolution gaming can work similarly well on the 680, but the 7970 might behave very differently. The reverse can be true with other settings differences, especially when CF/SLI is brought into the equation.

Also, Catalyst 12.7 is much more of a performance improvement over previous AMD drivers than Nvidia's current drivers are over previous Kepler drivers. Tom's showed this to great effect in their coverage of the 7970 GHz Edition's launch.


You can talk about all the things that can distort the benchmarks, but can you provide benchmarks of what you are talking about? I have seen a lot of different benchmarks from all kinds of different sources, and I keep taking the same thing away; they are close no matter the resolution.

Theory is great, but at some point, you have to show actual benchmarks to back up those claims.
a c 86 U Graphics card
August 15, 2012 4:18:32 AM

17seconds said:
My perception has been that there are differences/advantages in certain areas, but the rhetoric has magnified them beyond what the data says. This goes for memory bandwidth, overclock scaling, Crossfire scaling, VRAM amounts, and the performance advantage from the 12.7 drivers. In a way, it's a masterful way of getting into the echo chamber and repeating the same thing over and over until it becomes accepted fact.


Crossfire scaling was a huge advantage back in the Fermi versus VLIW4 days. Heck, two 6950s could compete with two 570s despite the huge price difference. 6970s could almost compete with 580s, but the memory bandwidth difference (among others) often let the 580 stay at least somewhat ahead, and the fact that many drivers back then simply weren't up to par could also hold Crossfire back. SLI scaling has pretty much caught up with the Kepler cards and might actually be a little better with two GPUs, although GCN CF generally takes the win with three and four GPUs right now; granted, that only applies to setups so high-end that it is almost irrelevant. Nvidia did a great job of catching up here.

Memory bandwidth can be a big deal. Llano/Trinity prove this by an undeniable margin. More memory-bandwidth-heavy games, such as Metro 2033, can show this to a lesser extent with the GTX 600 cards versus the comparable Radeon 7000 cards across various resolutions and levels of AA.

The VRAM capacity is easily shown to be a huge factor in performance when it is pushed to its limits. Try running GTX 570 1.25GiB in SLI with 2560x1600 in any DX11 game and compare it to Radeon 6950 2GiB CF in the same scenario.
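
For a rough sense of scale (the six-target multisampled setup here is just an assumed example of a deferred-style DX11 renderer; textures, shadow maps and other buffers come on top of this):

def render_target_mib(width, height, samples, num_targets, bytes_per_sample=4):
    # assumed: each target stores 4 bytes per sample (e.g. RGBA8 or a 32-bit depth format)
    return width * height * samples * num_targets * bytes_per_sample / 2**20

print(render_target_mib(2560, 1600, 4, 6))  # ~375 MiB for six multisampled targets, before any textures

On a 1.25GiB card that doesn't leave a lot of room for high-resolution textures, while a 2GiB card still has some headroom.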

Tom's coverage of the 7970 GHz Edition shows how improved the Catalyst 12.7 driver is over even the 12.6 driver. If you don't believe me, then just look at their tests.

Overclocking is a more complex matter to tackle with reasoning. AMD's 7970 and 7950 generally scale better than the Kepler cards, but not by huge margins except in memory bandwidth, and even then its performance impact is not huge, although it is considerable under a heavy enough workload.
a c 86 U Graphics card
August 15, 2012 4:27:02 AM

bystander said:
You can talk about all the things that can distort the benchmarks, but can you provide benchmarks of what you are talking about? I have seen a lot of different benchmarks from all kinds of different sources, and I keep taking the same thing away; they are close no matter the resolution.

Theory is great, but at some point, you have to show actual benchmarks to back up those claims.


I don't need to show benchmarks to prove that varying levels of tessellation, ambient occlusion, AF, and other settings affect performance. Just looking at benchmarks from different sites that used the same hardware and drivers and list the same settings, yet sometimes report significantly different numbers, shows that there must be something else going on.
a c 216 U Graphics card
August 15, 2012 4:30:44 AM

blazorthon said:
I don't need to show benchmarks to prove that varying levels of tessellation, ambient occlusion, AF, and other settings affect performance. Just looking at benchmarks from different sites that used the same hardware and drivers and list the same settings, yet sometimes report significantly different numbers, shows that there must be something else going on.


I have; I posted them, and you continue to go on and on about how much better the 7970 does, when the benchmarks show otherwise.

It is true that different settings perform better on different cards, and different games do as well. That doesn't change the fact that the cards are close to each other regardless of resolution, even in CF/SLI.
a c 86 U Graphics card
August 15, 2012 4:41:20 AM

http://hardocp.com/article/2012/04/25/geforce_gtx_680_3...

Look at the apples-to-apples comparisons near the bottom. The 680s couldn't do any MSAA at all, yet the 7970s had no trouble with it. The 680s couldn't do better than FXAA, at which point the 7970s, even on their crap driver at the time, still performed better, though they were too stuttery to win the match. This was repeated in other games, and in other situations with the same game, several times. Even Nvidia-optimized games would show that the Nvidia cards were memory bottle-necked just by changing settings around. Again, Kepler's saving grace is having much faster GPUs than AMD, and this is what lets their performance vary so much even within the same game.

EDIT: Keep in mind that this was tested with the original driver for the 7970s, and their performance could be extremely different with Catalyst 12.6, let alone 12.7.
a c 216 U Graphics card
August 15, 2012 6:07:31 AM

blazorthon said:
http://hardocp.com/article/2012/04/25/geforce_gtx_680_3...

Look at the apples-to-apples comparisons near the bottom. The 680s couldn't do any MSAA at all, yet the 7970s had no trouble with it. The 680s couldn't do better than FXAA, at which point the 7970s, even on their crap driver at the time, still performed better, though they were too stuttery to win the match. This was repeated in other games, and in other situations with the same game, several times. Even Nvidia-optimized games would show that the Nvidia cards were memory bottle-necked just by changing settings around. Again, Kepler's saving grace is having much faster GPUs than AMD, and this is what lets their performance vary so much even within the same game.

EDIT: Keep in mind that this was tested with the original driver for the 7970s, and their performance could be extremely different with Catalyst 12.6, let alone 12.7.


That was interesting, although I'm not sure that has anything to do with bandwidth. I'd be more likely to assume it's a problem with the 2GB limitation, though I have noticed with Skyrim that the Nvidia card doesn't like to mix FXAA and MSAA, so it could be that.

Another odd thing is that in multiplayer, the Nvidia setup performed better. They used slightly higher settings at the same fps (within 2):
http://hardocp.com/article/2012/04/25/geforce_gtx_680_3...

In the rest of the benchmarks they showed, the Nvidia cards clearly outperformed the AMD cards by a lot.

It would be easier to pin down the problem in single-player BF3 if they showed how it changed with resolution, which could point to bandwidth (not likely), the 2GB VRAM limitation (more likely), or an MSAA problem, which I've heard BF3 has issues with in general. Anyways, those benchmarks definitely show the 680 clearly had the edge in every benchmark but single-player BF3.
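
As a rough sanity check on the 2GB theory (very rough; BF3 uses deferred shading, so I'm assuming a hypothetical multisampled setup of around six 4-byte render targets, with textures and everything else on top):

def render_targets_mib(width, height, samples, num_targets, bytes_per_sample=4):
    # assumed: each target stores 4 bytes per sample
    return width * height * samples * num_targets * bytes_per_sample / 2**20

print(render_targets_mib(5760, 1200, 4, 6))  # ~633 MiB of render targets alone at 5760x1200 with 4xMSAA

Add BF3's textures and streaming on top of that and 2GB starts to look like a plausible wall.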
a c 86 U Graphics card
August 15, 2012 7:34:13 AM

bystander said:
That was interesting, although I'm not sure that has anything to do with bandwidth. I'd be more likely to assume it's a problem with the 2GB limitation, though I have noticed with Skyrim that the Nvidia card doesn't like to mix FXAA and MSAA, so it could be that.

Another odd thing is that in multiplayer, the Nvidia setup performed better. They used slightly higher settings at the same fps (within 2):
http://hardocp.com/article/2012/04/25/geforce_gtx_680_3...

In the rest of the benchmarks they showed, the Nvidia cards clearly outperformed the AMD cards by a lot.

It would be easier to pin down the problem in single-player BF3 if they showed how it changed with resolution, which could point to bandwidth (not likely), the 2GB VRAM limitation (more likely), or an MSAA problem, which I've heard BF3 has issues with in general. Anyways, those benchmarks definitely show the 680 clearly had the edge in every benchmark but single-player BF3.


Like I said, keep in mind that those results were with the original driver, because up until Catalyst 12.6 that was the only driver that supported Eyefinity properly. The performance is very different with Catalyst 12.6 and 12.7.

The 680s being unable to play with MSAA in some games is due to the memory capacity. However, their inferior AA efficiency when it did work was because of the memory bandwidth. The main point of that link was to show that memory capacity can also be a big factor in performance, because that was listed as something that was supposedly blown out of proportion. Two 680s or 7970s can also easily run into memory capacity problems, and I've already explained and shown how the memory bandwidth affects performance too. The GTX 600 cards (especially the 680 and the 660 TI) are highly memory-bandwidth-bottle-necked cards, and this can affect their performance quite significantly, as can their lower memory capacity.

I'd attribute most of the oddities of that review to the very immature driver, but the review did show how much memory capacity can matter, and I think I've shown enough evidence for how the bandwidth can affect performance too.
a c 86 U Graphics card
August 15, 2012 9:32:36 AM

What do you want to do that could use more than even 2GB or 3GB of VRAM?
a c 216 U Graphics card
August 15, 2012 3:39:35 PM

blazorthon said:
Like I said, keep in mind that those results were with the original driver, because up until Catalyst 12.6 that was the only driver that supported Eyefinity properly. The performance is very different with Catalyst 12.6 and 12.7.

The 680s being unable to play with MSAA in some games is due to the memory capacity. However, their inferior AA efficiency when it did work was because of the memory bandwidth. The main point of that link was to show that memory capacity can also be a big factor in performance, because that was listed as something that was supposedly blown out of proportion. Two 680s or 7970s can also easily run into memory capacity problems, and I've already explained and shown how the memory bandwidth affects performance too. The GTX 600 cards (especially the 680 and the 660 TI) are highly memory-bandwidth-bottle-necked cards, and this can affect their performance quite significantly, as can their lower memory capacity.

I'd attribute most of the oddities of that review to the very immature driver, but the review did show how much memory capacity can matter, and I think I've shown enough evidence for how the bandwidth can affect performance too.


I believe you've gotten ahead of yourself here. You can't say they are very bottlenecked based on a single instance in an unusual setup. At 5760x1200, in one of the biggest memory hogs of a game, only in single player, in a game that is played almost exclusively in multiplayer, with 3 cards in SLI, it runs into a problem. That takes a pretty extreme situation, and we aren't even sure of the cause. I could just as easily look at the Skyrim results in that same review and say the same thing about the 7970, but a single instance of falling behind is a big stretch to base generalizations on.
August 15, 2012 4:54:35 PM

Well, I guess two GTX 685 4GB cards (GK110), coming in autumn, will have enough GPU power to more or less utilize most of their 4GB of VRAM.
On the other hand, based on what I read in other threads, even three 670 4GB cards had too little GPU power to utilize 4GB of VRAM. And they will also have more power than a GTX 690.

But, well, who knows what the 7990 will bring.

best
revro
a c 86 U Graphics card
August 15, 2012 6:03:50 PM

bystander said:
I believe you've gotten ahead of yourself here. You can't say they are very bottlenecked based on a single instance in an unusual setup. At 5760x1200, in one of the biggest memory hogs of a game, only in single player, in a game that is played almost exclusively in multiplayer, with 3 cards in SLI, it runs into a problem. That takes a pretty extreme situation, and we aren't even sure of the cause. I could just as easily look at the Skyrim results in that same review and say the same thing about the 7970, but a single instance of falling behind is a big stretch to base generalizations on.


Pretty much any benchmark that has apples-to-apples tests will show the same thing. I showed you one example; if you want more, all you have to do is look for them. Again, Tom's 7970 GHz Edition review is a very up-to-date review, and it covers these cards as well. I haven't looked at it recently, but it's bound to have at least a few apples-to-apples tests like the much older HardOCP review did. Beyond that, I notice that the 7900 cards win in the memory-bandwidth-limited games far more often and by larger margins than in other games. You didn't really think that I based my claims on a single test, did you?