AMD's Radeon HD 5000-Series: Measuring Power Efficiency

An Eye For Power

Next to the CPU, graphics cards receive the lion’s share of attention from the analytical eye of hardware reviewers. For the uninitiated, modern graphics processors help determine the performance your PC puts out in applications dominated by 3D and video. They’ve very quickly become some of the most complex pieces of hardware inside your system, evolving from simple display adapters into fully parallel processors able to handle general-purpose computing workloads.

Flagship GPUs sport higher transistor budgets than most CPUs, so it’s hardly a surprise that we have to be more diligent than ever about the power these components consume. While we generally make an effort to measure idle and load consumption in our graphics card reviews, we wanted to take a more granular and focused look at power use in specific applications.

The Runaway Power Consumption of GPUs

Most of us really can’t complain about the increased muscle of modern graphics products. After all, they’re driving the push toward realism in games, parallel processing, and technologies like Blu-ray 3D. But if there were one quibble we’d cite, it’d be the escalating power draw of architectures like Nvidia’s Fermi (especially right after AMD’s flagship Cypress GPU demonstrated impressively low power use). Even with power-saving strategies like clock throttling and power gating (shutting off unused pieces of the GPU), graphics cards seem to consume more and more power with each successive generation. These days, it’s common to see even mainstream graphics cards with auxiliary six-pin power connectors, and many require a pair of extra inputs. Heat is also becoming a problem. Just look at the TDP numbers for high-end graphics cards like Nvidia’s GeForce GTX 480/470 (250/215 W) and the Radeon HD 5870/5970 (188/294 W). In comparison, the most power-hungry processors from AMD and Intel are rated at 140 W and 130 W, respectively.

Of course, those figures represent the absolute highest board power each product can draw, measured and cited by each vendor in a different way (we’ve already demonstrated the GeForce GTX 480 using more power than a Radeon HD 5970).

And what about idle power, the card’s draw when you’re working on the Windows desktop? Idle board power on a Radeon HD 5970 is rated at 42 W. Believe it or not, an entire PC with integrated graphics can idle at around 40 W (Patrick even built a Core i5-based machine that idles under 25 W). Thankfully, newer cards, such as the Radeon HD 5870, HD 5770, and HD 5670, consume less power at idle (around 18-20 W).

What Most Power Consumption Tests Don't Tell You

A majority of graphics card reviews measure power consumption at full load and absolute idle. For load measurements, FurMark is typically used to push the graphics card to use all of its available processing power. The reason is simple: you want to know the maximum power draw, making it easy to compare one card against another and evaluate noise in a worst-case scenario. Maximum power draw is also used to determine whether a given power supply is adequate for driving a certain graphics card.
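
To put that power-supply point in perspective, here is a minimal back-of-the-envelope sketch, in Python, of the kind of arithmetic you might do with those maximum-draw figures. The component wattages and the 80 percent loading rule of thumb below are illustrative assumptions, not measurements from this article.

```python
# Rough PSU-headroom sketch (illustrative numbers, not measured values).
# All peak draws below are assumptions chosen for the sake of the example.
peak_draw_watts = {
    "cpu": 130,                       # e.g., a high-end quad-core under load
    "gpu": 250,                       # e.g., a GeForce GTX 480 at its rated TDP
    "motherboard_ram_drives": 60,     # everything else, lumped together
}

psu_rating_watts = 550
psu_loading_rule_of_thumb = 0.80      # keep sustained load below ~80% of the rating

total_draw = sum(peak_draw_watts.values())
comfortable_ceiling = psu_rating_watts * psu_loading_rule_of_thumb

print(f"Estimated peak system draw: {total_draw} W")
print(f"Comfortable ceiling for a {psu_rating_watts} W PSU: {comfortable_ceiling:.0f} W")
print("Adequate" if total_draw <= comfortable_ceiling
      else "Cutting it close; consider a larger unit")
```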

Unfortunately (or perhaps not), we already know for a fact that AMD employs hardware and software optimizations that detect an unrealistic workload like FurMark and throttle back clock rates and power to help protect the GPU. Thus, FurMark’s most taxing modes are effectively defeated.

Under normal usage, power is rarely pushed to such an extreme anyway. Even in graphically demanding games, consumption stays below what you see in FurMark. This doesn't mean testing with FurMark isn't useful (we use some of its less-demanding settings to generate comparable numbers); it just means you will typically never encounter such an extreme usage scenario.

These two screenshots from the Radeon BIOS Editor (RBE) indicate clocks and voltages used by the graphics cards under several different modes: default performance mode, idle, and video playback (employing UVD).

What about idle conditions? As with a typical CPU, a GPU is rarely, if ever, completely idle. If you use Windows Vista or 7, Aero takes advantage of GPU acceleration when it’s enabled. Video decoding tasks for MPEG-1/2, VC-1, and H.264 are also offloaded to the GPU. Then there's GPU acceleration for certain general-purpose applications, like transcoding. These kinds of scenarios are rarely considered when it comes to measuring a graphics card’s power consumption.

  • tony singh
    Very innovative article Tom, keep it up!! A similar article covering various CPUs would be really useful.
  • tacoslave
    gtx 480 and 460 for reference?
  • Lutfij
    ^ nvidia would lose this battle. period.
  • spidey81
    I know the FPS/watt wouldn't be as good, but what if the 5670 was crossfired. Would it still be a better alternative, efficiency wise, than say a 5850?
  • nforce4max
    Remember the R600 (2900xt) has an 80nm core while the 5870 has a 40nm core. Shrink the R600 and you get the 3870 (55nm), which barely uses any power.
  • rhino13
    And now just for fun we should compare to Fermi.

    Oh, wait, this just in:
    There is a Fermi comparison chart that was available, but you needed two screens to display the bar graph for Fermi's power consumption and temperature. So the decision was made to provide readers with the single-screen-only version.
  • aevm
    I loved this part:

    A mere 20 watts separate the Radeon HD 3300, HD 5670, HD 5770, and HD 5870 1 GB. So, in certain cases, the Radeon HD 5870 1 GB can still save enough power to close in on its more mainstream derivatives. Again, this is the case because the cards use a fixed-function video engine to assist in decoding acceleration, which is the same from one board to the next. Thus, even a high-end card behaves like a lower-end product in such a workload. This is very important, as you will see later on.

    My next PC will be used mostly for movie DVDs and Diablo 3. Apparently if I get a 5870 1GB I get the best of both worlds - speed in Diablo and low power consumption when playing movies.

    How about nVidia cards, would I get the same behavior with a GTX 480 for example?
  • Onus
    For those not needing the absolute maximum eye candy at high resolutions in their games, the HD5670 looks like a very nice choice for a do-it-all card that won't break the budget.
    Next questions: First, where does the HD5750 fall in this? Second, if you do the same kinds of manual tweaking for power saving that you did in your Cool-n-Quiet analysis, how will that change the results? And finally, if you run a F@H client, what does that do to "idle" scores, when the GPU is actually quite busy processing a work unit?
  • eodeo
    Very interesting article indeed.

    I'd love to see nvidia cards and beefier CPUs used as well. Normal, non-green HDDs too. Just how big of a difference in speed/power do they make?

    Thank you for sharing.
  • arnawa_widagda
    Hi guys,

    Thanks for reading the article.

    Next questions: First, where does the HD5750 fall in this? Second, if you do the same kinds of manual tweaking for power saving that you did in your Cool-n-Quiet analysis, how will that change the results? And finally, if you run a F@H client, what does that do to "idle" scores, when the GPU is actually quite busy processing a work unit?

    I have no 5750 sample yet, but it should be relatively close to the 5770. For this article, we simply chose the best bin for each series (Redwood, Juniper, and Cypress).

    The second question, what will happen when you tweak the chip? Glad you asked!! I can't say much yet, but you'll be surprised what the 5870 1 GB can do.

    As for NVIDIA cards, I'm hoping to have the chance to test GF100 and derivatives very soon.

    Take care.
