Performance Per Watt
By comparing graphics card performance and power consumption, we can get an idea of energy efficiency. Doing so, of course, requires measuring performance. Ideally, we use the same settings and resolution for all graphics cards tested, then divide the results (either frame rate or benchmark score) by the power consumed. For example, we can run Crysis using the Very High detail settings at 1920x1080 on a test platform with various graphics cards, such as the Radeon HD 5670, HD 5770, and HD 5870. Simultaneously, we record peak power consumption during those runs. Divide the two numbers, and you get the performance per watt for each graphics card in that one application.
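The calculation above is simple division. As a minimal sketch, here it is in Python; the frame-rate and power figures are hypothetical placeholders, not measurements from this article:

```python
# Hypothetical results for one game at one resolution.
# fps = average frame rate, watts = peak board power during the run.
# These numbers are illustrative only.
results = {
    "Radeon HD 5670": {"fps": 14.2, "watts": 61.0},
    "Radeon HD 5770": {"fps": 21.5, "watts": 108.0},
    "Radeon HD 5870": {"fps": 38.9, "watts": 188.0},
}

def perf_per_watt(fps, watts):
    """Frames per second delivered per watt of power consumed."""
    return fps / watts

for card, r in results.items():
    print(f"{card}: {perf_per_watt(r['fps'], r['watts']):.3f} FPS/W")
```

Note that the metric only holds for the one application and settings combination tested, which is exactly the third limitation discussed below.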
This approach does have some disadvantages. First, the benchmark has to scale perfectly with processing power, which doesn’t always happen. Such synthetic cases don't translate well to the real world, where variables like processing power and bus bandwidth/latency come into play. Second, you have to choose settings conservative enough that the lowest-performing contender can still run acceptably. If we were to use the above-mentioned settings, frame rates would hardly be ideal for gameplay (well below 30 FPS). Third, the results are only relevant to the application tested. If the software we test scales well and emphasizes the GPU, we won’t see the same results in an application that's CPU- or system-limited.
We can try to address some of those inherent limitations. By nature and design, the cards will likely fall into groups separated by playable frame rates at different resolutions. So, a high-enough quality setting will only be playable at low resolutions on more mainstream cards, while boards with more graphics processing horsepower will be able to offer the same frame rate at higher resolutions. Frame buffer capacity will come into play, particularly at very high resolutions. Once the cards fall into place, power consumption measurements will tell us which card requires more power to be playable at a certain resolution. Then it's just a matter of picking the card that offers the best performance with the least power consumed.
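The selection logic described above can be sketched as a short filter-then-rank step. This is an assumption-laden illustration: the 30 FPS playability cutoff and the card figures are placeholders, not data from this article:

```python
# Among cards that stay playable at a given resolution, pick the one
# with the best FPS-per-watt. All figures below are hypothetical.
PLAYABLE_FPS = 30.0

cards = [
    # (name, average fps at the target resolution, peak watts)
    ("Card A", 34.0, 110.0),
    ("Card B", 58.0, 190.0),
    ("Card C", 24.0, 65.0),  # falls below the playability cutoff
]

# Step 1: drop anything that isn't playable at this resolution.
playable = [(name, fps, w) for name, fps, w in cards if fps >= PLAYABLE_FPS]

# Step 2: of the remainder, rank by efficiency (FPS per watt).
best = max(playable, key=lambda t: t[1] / t[2])
print(best[0])  # the most efficient card that is still playable
```

The two-step order matters: ranking by efficiency alone would favor a card that is frugal but too slow to play on.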
These measurements will work, but they do not tell us the whole story. They will only tell us the typical power consumption of graphics cards running games, which can make use of all GPU resources. Thus, we’ll also run a handful of other applications and workloads capable of giving us a more complete picture of power consumption and efficiency.
Efficiency At Idle
Outside of playing games or running Direct3D and OpenGL applications, the GPU is hardly used. Not surprisingly, we’d expect the graphics processor’s power consumption to dip as low as possible in these situations. This is why we measure idle power consumption.
Idle measurements are typically recorded on the desktop when the PC isn’t running any other applications. That means the results aren’t reflective of power consumption when a piece of software that uses the GPU is left idling. Cinema 4D is a good example.
Power Consumption In Different Scenarios
What if we wanted to know exactly how much power is consumed in situations that fall between the maximum load and minimum idle numbers we record for most GPU reviews? What kind of power draw can we expect? This is the focus of today’s story.
We believe there is more to graphics card power consumption and energy efficiency than what can be gleaned from power consumption numbers at the two extremes. By measuring power consumption and performance across various GPU applications, we get a better understanding of overall efficiency.
Today, we're looking at a handful of graphics solutions from AMD: the Radeon HD 5770, HD 5670, HD 5870 1 GB, and HD 5870 2 GB. As reference points, we’re using a Radeon HD 3300, an integrated GPU included with AMD's 790GX chipset, and an older Radeon HD 2900 XT. These two references were chosen for different reasons. The Radeon HD 3300 integrated graphics is a good baseline with no discrete graphics installed at all. The Radeon HD 2900 XT offers roughly the performance of today's DX11 mainstream cards. Using it, we can gauge how the architectural improvements in AMD’s latest-generation design affect overall performance per watt.
There is one minor consideration to bear in mind as you look at the power measurement results: variation between samples. In discussing this piece with AMD, company representatives made it a point to mention that variations from one card to another exist. They may be due to the components used, board design, and even the graphics chip itself. Your own efficiency testing may consequently look a little different from what we have here.
Oh, wait, this just in:
My next PC will be used mostly for movie DVDs and Diablo 3. Apparently if I get a 5870 1GB I get the best of both worlds - speed in Diablo and low power consumption when playing movies.
How about nVidia cards, would I get the same behavior with a GTX 480 for example?
Next questions: First, where does the HD5750 fall in this? Second, if you do the same kinds of manual tweaking for power saving that you did in your Cool-n-Quiet analysis, how will that change the results? And finally, if you run a F@H client, what does that do to "idle" scores, when the GPU is actually quite busy processing a work unit?
I'd love to see Nvidia cards and beefier CPUs used as well. Normal, non-green HDDs too. Just how big of a difference in speed/power do they make?
Thank you for sharing.
Thanks for reading the article.
We have no 5750 sample yet, but it should be relatively close to the 5770. For this article, we simply chose the best bin for each series (Redwood, Juniper and Cypress).
As for the second question, what happens when you tweak the chip? Glad you asked! I can't say much yet, but you'll be surprised what the 5870 1 GB can do.
As for NVIDIA cards, I'm hoping to have the chance to test GF100 and derivatives very soon.