Most of our graphics card reviews include power measurements at idle and load. But how do applications tax your GPU in between those two extremes? We line up a handful of different programs and monitor power use across a selection of AMD's latest cards.
Next to the CPU, the graphics card receives the lion's share of attention from the analytical eye of hardware reviewers. For the uninitiated, modern graphics processors largely determine the performance your PC delivers in applications dominated by 3D and video. They've very quickly become some of the most complex pieces of hardware inside your system, evolving from simple display adapters into fully parallel processors able to handle general-purpose computing workloads.
Flagship GPUs sport higher transistor budgets than most CPUs, so it’s hardly a surprise that we have to be more diligent than ever about the power these components consume. While we generally make an effort to measure idle and load consumption in our graphics card reviews, we wanted to take a more granular and focused look at power use in specific applications.
The Runaway Power Consumption of GPUs
Most of us really can't complain about the increased muscle of modern graphics products. After all, they're driving the push toward realism in games, parallel processing, and technologies like Blu-ray 3D. But if there were one quibble we'd cite, it'd be the escalating power draw of architectures like Nvidia's Fermi (especially right after AMD's flagship Cypress GPU demonstrated impressively low power use). Even with power-saving strategies like clock throttling and power gating (shutting off unused portions of the GPU), graphics cards seem to consume more and more power with each successive generation. These days, it's common to see even mainstream graphics cards with auxiliary six-pin power connectors, and many require a pair of extra inputs. Heat is becoming a problem, too. Just look at the TDP numbers for high-end graphics cards like Nvidia's GeForce GTX 480/470 (250/215 W) and the Radeon HD 5870/5970 (188/294 W). In comparison, the most power-hungry processors from AMD and Intel are rated at 140 W and 130 W, respectively.
Of course, those figures represent the absolute highest board power each product can draw, measured and cited by each vendor in a different way (we've already demonstrated the GeForce GTX 480 using more power than a Radeon HD 5970).
And what about idle power, the card's draw when you're working on the Windows desktop? Idle board power on a Radeon HD 5970 is rated at 42 W. Believe it or not, an entire PC with integrated graphics can idle at 40 W (Patrick even built a Core i5-based machine that idles under 25 W). Thankfully, newer cards, such as the Radeon HD 5870, HD 5770, and HD 5670, consume less power at idle (around 18-20 W).
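To put those idle numbers in perspective, a quick back-of-the-envelope calculation shows what the gap between a 42 W and a 20 W idle adds up to for a machine left on around the clock. The electricity rate below ($0.12/kWh) is an assumption for illustration only:

```python
# Annual energy and cost of a given idle power draw.
# NOTE: the $0.12/kWh rate is a hypothetical figure, not from the article.
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.12

def annual_cost(idle_watts):
    """Return (kWh per year, dollars per year) for a constant draw."""
    kwh = idle_watts * HOURS_PER_YEAR / 1000  # watt-hours -> kWh
    return kwh, kwh * RATE_USD_PER_KWH

kwh_5970, cost_5970 = annual_cost(42)  # Radeon HD 5970 rated idle
kwh_5870, cost_5870 = annual_cost(20)  # Radeon HD 5870 approximate idle

print(f"HD 5970 idle: {kwh_5970:.0f} kWh/yr (${cost_5970:.2f})")
print(f"HD 5870 idle: {kwh_5870:.0f} kWh/yr (${cost_5870:.2f})")
print(f"Difference:   ${cost_5970 - cost_5870:.2f} per year")
```

Roughly $23 a year separates the two idle figures at that rate, which is why idle consumption matters for a desktop that spends most of its life doing nothing strenuous.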
What Most Power Consumption Tests Don't Tell You
A majority of graphics card reviews measure power consumption at full load and at absolute idle. For load measurements, FurMark is typically used to push a graphics card to use all of its available processing power. The reason is simple: you want to know the maximum power draw, making it easy to compare one card against another and to evaluate noise in a worst-case scenario. Maximum power draw is also used to determine whether a given power supply is adequate for driving a certain graphics card.
Unfortunately (or perhaps not), we already know for a fact that AMD employs hardware and software optimizations that detect an unrealistic workload like FurMark and throttle back clock rates and power to help protect the GPU. Thus, FurMark's most taxing modes are effectively defeated.
Under normal usage, power is rarely pushed to such an extreme anyway. Even in graphically demanding games, consumption stays below what you see in FurMark. This doesn't mean testing with FurMark isn't useful (we use some of its less-demanding settings to generate comparable numbers). It just means you will typically never encounter such an extreme usage scenario.
These two screenshots from the Radeon BIOS Editor (RBE) indicate clocks and voltages used by the graphics cards under several different modes: default performance mode, idle, and video playback (employing UVD).
What about idle conditions? As with a typical CPU, a GPU is, in most cases, never completely idle. If you use Windows Vista or 7, Aero takes advantage of GPU acceleration when it’s enabled. Video decoding tasks for MPEG-1/2, VC-1, and H.264 are also offloaded to the GPU. Then there's GPU acceleration for certain general-purpose applications, like transcoding. These kinds of scenarios are rarely considered when it comes to measuring a graphics card’s power consumption.
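One way to see how far a real workload sits below FurMark's worst case is to log the GPU's reported power over time while an application runs. The sketch below assumes a card and driver that expose a software power sensor; NVIDIA's `nvidia-smi --query-gpu=power.draw` is one such interface (the AMD cards in this article were measured with a hardware meter instead, so this is an illustration, not our method):

```python
import subprocess
import time

def read_power_watts():
    """Poll the GPU's reported board power via nvidia-smi.
    Requires an NVIDIA card/driver exposing the power.draw field."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw",
         "--format=csv,noheader,nounits"], text=True)
    return float(out.strip())

def summarize(samples):
    """Return (average, peak) of a list of power readings in watts."""
    return sum(samples) / len(samples), max(samples)

def log_power(duration_s=60, interval_s=1.0):
    """Sample board power for duration_s seconds and summarize."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        samples.append(read_power_watts())
        time.sleep(interval_s)
    return summarize(samples)

# Example with canned readings (watts) from a hypothetical game session:
avg, peak = summarize([96.0, 121.5, 133.0, 118.5, 101.0])
print(f"average {avg:.1f} W, peak {peak:.1f} W")
```

Averaging over a session, rather than grabbing one worst-case spike, is exactly the distinction this article is after: a game's typical draw tells you far more about day-to-day consumption than a synthetic stress test does.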
- An Eye For Power
- Performance Per Watt
- The Tests
- Test Setup And A Side Note
- Test System
- Benchmark Results: Crysis, The Classic Approach
- Benchmark Results: Desktop Usage, Less-Than-Ideal Conditions
- Benchmark Results: Cinebench R11
- Benchmark Results: Cyberlink PowerDVD 9
- Benchmark Results: Cyberlink PowerDirector
- GPU Vs. CPU
- Measuring Power Consumption: Let's Recap
- Don't Forget Idle Power Consumption


My next PC will be used mostly for movie DVDs and Diablo 3. Apparently if I get a 5870 1GB I get the best of both worlds - speed in Diablo and low power consumption when playing movies.
How about Nvidia cards? Would I get the same behavior with a GTX 480, for example?
Next questions: First, where does the HD5750 fall in this? Second, if you do the same kinds of manual tweaking for power saving that you did in your Cool-n-Quiet analysis, how will that change the results? And finally, if you run a F@H client, what does that do to "idle" scores, when the GPU is actually quite busy processing a work unit?
I'd love to see nvidia cards and beefier CPUs used as well. Normal non green hdds too. Just how big of a difference in speed/power do they make?
Thank you for sharing.
Thanks for reading the article.
I have no 5750 sample yet, but it should be relatively close to the 5770. For this article, we simply chose the best bin for each series (Redwood, Juniper, and Cypress).
The second question: what happens when you tweak the chip? Glad you asked! I can't say much yet, but you'll be surprised what the 5870 1 GB can do.
As for NVIDIA cards, I'm hoping to have the chance to test GF100 and derivatives very soon.
Take care.
Interesting comments about Furmark.
I have to disagree; there are several ways a user can fully load their graphics card in normal use. I have found that my GPU utilization and fan speed go to 100% when I play the dice mini-game in The Witcher. The game only has to render a small game board, and the frame rate goes into the 200-300 range. Something similar occurs when I hit the pause key in S.T.A.L.K.E.R.
Well, I think this article was inspired by the video AMD released lately.
I think the video is called "Misunderstanding"; here's the link: http://www.youtube.com/watch?v=2QkyfGJgcwQ
As we all know, AMD is innovative in power consumption as well as in graphics. I read a similar review on Anandtech.com. Just wow: loads of noise and power from the Fermi cards. In this review we see smooth performance from the 5670 and 5770.
Another thing worth noting: Fermi was released six months after the 5000 series. I think its performance is good, but not good enough for a six-month delay, and it's awful in power consumption, noise, and heat!
Take care guys