- An Eye For Power
- Performance Per Watt
- The Tests
- Test Setup And A Side Note
- Test System
- Benchmark Results: Crysis, The Classic Approach
- Benchmark Results: Desktop Usage, Less-Than-Ideal Conditions
- Benchmark Results: Cinebench R11
- Benchmark Results: Cyberlink PowerDVD 9
- Benchmark Results: Cyberlink PowerDirector
- GPU Vs. CPU
- Measuring Power Consumption: Let's Recap
- Don't Forget Idle Power Consumption
Performance Per Watt
By comparing graphics card performance and power consumption, we can get an idea of energy efficiency. We of course have to measure performance to do this. Ideally, we use the same settings and resolutions for all graphics cards tested and divide the results (either frame rate or benchmark score) by the power consumed. For example, we can run Crysis using the Very High detail settings at 1920x1080 on a test platform with various graphics cards, such as the Radeon HD 5670, HD 5770, and HD 5870. Simultaneously, we record peak power consumption during those runs. Divide the first number by the second, and you get the performance per watt for each graphics card in that one application.
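The arithmetic is simple enough to sketch. The card names below are real, but the frame rates and wattages are made-up placeholders for illustration, not our measured results:

```python
def perf_per_watt(fps, watts):
    """Average frames per second delivered per watt of peak power drawn."""
    return fps / watts

# Hypothetical Crysis results at 1920x1080, Very High: card -> (FPS, peak watts).
# These numbers are placeholders, not measurements.
crysis_run = {
    "Radeon HD 5670": (11.0, 61.0),
    "Radeon HD 5770": (16.0, 97.0),
    "Radeon HD 5870": (29.0, 188.0),
}

for card, (fps, watts) in crysis_run.items():
    print(f"{card}: {perf_per_watt(fps, watts):.3f} FPS/W")
```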
This approach does have some disadvantages. First, the benchmark has to scale perfectly with processing power, which doesn’t always happen. Such synthetic cases don't translate well to the real world, where variables like processing power and bus bandwidth/latency come into play. Second, you have to choose settings that are conservative enough to allow the lowest-performing contender to still run well enough. If we were to use the above-mentioned settings, frame rates would hardly be ideal for gameplay (well below 30 FPS). Third, the results are only relevant to the application tested. If the software we test scales well and emphasizes the GPU, we won’t see the same results in an application that's CPU/system limited.
We can try to address some of those inherent limitations. By nature and design, the cards will likely fall into groups separated by playable frame rates at different resolutions. So, a high-enough quality setting will only be playable at low resolutions on more mainstream cards, while boards with more graphics processing horsepower will be able to offer the same frame rate at higher resolutions. Frame buffer capacity will come into play, particularly at very high resolutions. Once the cards fall into place, power consumption measurements will tell us which card requires more power to be playable at a certain resolution. Then it's just a matter of picking the card that offers the best performance with the least power consumed.
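The selection step described above can be sketched as a small helper: keep only the cards that clear a playable frame-rate threshold at the target resolution, then pick the one drawing the least power. The function name and the data shape are our own illustration, and the numbers in the usage example are placeholders:

```python
def most_efficient_playable(cards, min_fps=30.0):
    """Among cards sustaining a playable frame rate at a given resolution,
    return the name of the one drawing the least peak power (None if none qualify).
    `cards` maps card name -> (fps, peak_watts)."""
    playable = {name: watts for name, (fps, watts) in cards.items() if fps >= min_fps}
    if not playable:
        return None
    return min(playable, key=playable.get)

# Placeholder data, not measured results:
sample = {
    "Card A": (35.0, 150.0),
    "Card B": (40.0, 120.0),
    "Card C": (25.0, 80.0),  # fastest per watt, but not playable here
}
print(most_efficient_playable(sample))
```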
These measurements will work, but they do not tell us the whole story. They will only tell us the typical power consumption of graphics cards running games, which can make use of all GPU resources. Thus, we’ll also run a handful of other applications and workloads capable of giving us a more complete picture of power consumption and efficiency.
Efficiency At Idle
Outside of playing games or running Direct3D and OpenGL applications, the GPU is hardly used. Not surprisingly, we’d expect the graphics processor’s power consumption to dip as low as possible in these situations. This is why we measure idle power consumption.
Idle measurements are typically recorded on the desktop when the PC isn’t running any other applications. That means the results aren’t reflective of power consumption when a piece of software that uses the GPU is left idling. Cinema 4D is a good example.
Power Consumption In Different Scenarios
What if we wanted to know exactly how much power is consumed in situations that fall in between the maximum load and minimum idle numbers we record for most GPU reviews? What kind of power draw can we expect? This is the focus of today’s story.
We believe there is more to graphics card power consumption and energy efficiency than what can be gleaned from power consumption numbers at the two extremes. By measuring power consumption and performance across a range of GPU applications, we get a better understanding of overall efficiency.
Today, we're looking at a handful of graphics solutions from AMD--the Radeon HD 5770, HD 5670, HD 5870 1 GB, and HD 5870 2 GB. As reference points, we’re using a Radeon HD 3300, the integrated GPU in AMD's 790GX chipset, and an older Radeon HD 2900 XT. These two references were chosen for different reasons. The Radeon HD 3300 serves as a baseline for a system with no discrete graphics installed at all. The Radeon HD 2900 XT offers roughly the performance of today's mainstream DX11 cards; using it, we can gauge how much the architectural improvements in AMD’s latest-generation design affect overall performance per watt.
There is one minor consideration to bear in mind as you look at the power measurement results: variation between samples. In discussing this piece with AMD, company representatives made it a point to mention that variations from one card to another exist. They may be due to the components used, board design, and even the graphics chip itself. Your own efficiency testing may consequently look a little different from what we have here.