Both AMD and Nvidia claim to be offering unprecedented performance per watt of power consumed, and we believe that both companies are telling the truth.
Nvidia is taking the extra step, though, of adjusting its clock rate and voltage in real time, based on the premise that no two workloads exert the same power demands. As a result, we can’t simply test one game, divide its average frame rate by average power use, and expect you to believe the outcome is representative of all games. But we also don’t have time to test every game at every resolution (yes, power consumption changes based on resolution, detail settings, and so on). So, we took the games from our suite, set them all to 1920x1080 using the most demanding settings possible, and charted the power behavior for each on all six cards.
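The reduction described above can be sketched in a few lines: each benchmark run boils down to an average power figure and an average frame rate. The sample values below are hypothetical placeholders, not the article's logged data.

```python
# Reduce one benchmark run to the two numbers used later:
# average system power and average frame rate.
# All sample values are hypothetical placeholders.

power_samples_w = [305.0, 318.0, 322.0, 310.0, 296.0]  # watts, logged over the run
frames_rendered = 5400
run_seconds = 60.0

avg_power = sum(power_samples_w) / len(power_samples_w)
avg_fps = frames_rendered / run_seconds

print(f"average power: {avg_power:.1f} W")       # 310.2 W
print(f"average frame rate: {avg_fps:.1f} FPS")  # 90.0 FPS
```

In practice this is repeated per game, per card, which is why a single title can't stand in for the whole suite.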
This starts a little messy, but it gets easier as we go, so bear with us. First up: four different graphics cards across six games. We have the data for GeForce GTX 590 and Radeon HD 6990 as well, but those two cards are just ugly...
It doesn’t matter that some of these tests wrap up before the others. What’s important is that we have the power captured, along with the performance generated during the test run. Charting everything out on a line graph simply shows you the upper and lower bounds for system power use in each game—and that no two games are identical.
Averaging all of the games together, we come up with an average power use figure for each card. AMD’s Radeon HD 7950 uses the least power, on average, followed by Nvidia’s GeForce GTX 680.
We already know that the Radeon HD 7970 is a faster graphics card than Nvidia’s GeForce GTX 580. The fact that it also uses less power tells us it’s more efficient without needing this next graph.
Averaging the frame rates for all six games in our 1920x1080 runs gives us an index of sorts as well, represented in frames per second. The GeForce GTX 680 easily captures the top position, followed by AMD’s Radeon HD 7970. The GeForce GTX 580 takes third place, followed closely by the Radeon HD 7950.
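The two averaging steps (mean power per card, mean frame rate per card) can be sketched as below. The per-game numbers are hypothetical stand-ins for the article's measurements.

```python
# Per-card averages across the game suite.
# Each card maps to a list of (avg FPS, avg watts) pairs, one per game.
# All values here are hypothetical placeholders.
from statistics import mean

results = {
    "Card A": [(90.0, 300.0), (60.0, 280.0), (75.0, 290.0)],
    "Card B": [(80.0, 250.0), (55.0, 240.0), (70.0, 245.0)],
}

for card, runs in results.items():
    avg_fps = mean(fps for fps, _ in runs)
    avg_watts = mean(watts for _, watts in runs)
    print(f"{card}: {avg_fps:.1f} FPS average, {avg_watts:.1f} W average")
```

Dividing one average by the other gives each card's performance-per-watt figure.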
Update (3/23/2012): The original chart on this page showed GeForce GTX 680 at 172% of GeForce GTX 580's performance per watt. This result was derived from an Excel division error, which was noticed by German reader csc. It has since been corrected, yielding a more modest number. The overall effect remains the same, though we're certainly a lot further from Nvidia's original claim of a 2x improvement over GeForce GTX 580. Our apologies for the mistake.
Now, the GeForce GTX 580 is our frame of reference. We want to know how AMD’s and Nvidia’s respective architectures perform in comparison. We set the GTX 580 as 100%, and the rest of the results speak for themselves.
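The normalization step is simple enough to show directly: each card's performance per watt is divided by the GTX 580's and expressed as a percentage. The FPS/W figures below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Index a card's performance per watt against a baseline card (baseline = 100%).
# The numeric inputs in the example are hypothetical placeholders.

def index_vs_baseline(card_fpw: float, baseline_fpw: float) -> float:
    """Return performance/watt as a percentage of the baseline card's."""
    return card_fpw / baseline_fpw * 100.0

# e.g. a card delivering 0.25 FPS/W against a 0.125 FPS/W baseline
# indexes at 200%:
print(index_vs_baseline(0.25, 0.125))
```

Note that an error in exactly this division is what the update above corrects, so it pays to sanity-check the spreadsheet math.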
The Radeon HD 7970 and 7950 both deliver more performance per watt than the GeForce GTX 580, and by a significant margin. But the GeForce GTX 680 sits far above them all.
As a gamer, do you care about this? Not nearly as much as absolute performance, we imagine. And I personally doubt I’d ever pay more for a card specifically because it gave me better performance/watt. But with AMD and Nvidia both talking about their efficiency this generation, thanks to 28 nm manufacturing and new architectural decisions, the exercise is still interesting.