
PowerTune: Changing The Way You Overclock

Radeon HD 6970 And 6950 Review: Is Cayman A Gator Or A Crock?
Then

Over time, AMD and Nvidia have integrated specific capabilities to help their hardware cope with the rigors of taxing applications and then gracefully scale back when the load isn’t as high.  

Under extreme duress, usually in a piece of software like FurMark specifically written to apply atypically-intense workloads, both companies are able to throttle voltages and clock rates to protect against an unsustainable thermal situation. Additionally, they’ve incorporated protection mechanisms for voltage regulator circuitry that’ll also drop GPU clocks if an overvoltage occurs, even before the graphics processor heats up.

Power-monitoring circuitry added to GeForce GTX 580

At the other end of the spectrum, Radeon and GeForce boards spin down when they’re not being taxed. This translates to significant power savings (not to mention better thermal and acoustic properties). The flexibility to scale up and down like this is what makes it possible to drop a desktop-class piece of silicon into a notebook and still end up with a useable system.

AMD’s suite of power management technologies has, for many generations, gone by the name of PowerPlay (apropos, given ATI’s origins in the Toronto area). It’s best known on the mobile side, because that’s where thermal constraints most affect what a given GPU can do. But PowerPlay is fairly rudimentary in the grand scheme of things. It supports an idle state, a peak power state, and intermediate states for things like video playback. However, each voltage/frequency combination is static, like rungs on a ladder.

The problem is that applications don’t all behave the same way. So, even if a GPU is in its highest performance state, a piece of software like FurMark might trigger 260 W of power draw, while an application like Crysis pushes the card to consume 220 W. If you’re designing a graphics board, you can’t set the clocks and voltages with Crysis in mind; you have to make sure it’s stable in FurMark, too. That sort of worst-case combination of factors is what goes into the thermal design power we so often cite in our reviews.

A company like AMD or Nvidia defines a thermal design power for an entire graphics card, representing maximum power draw for reliable operation of the GPU, voltage regulation, memory, and so on. When they set the voltage and clock rate for that top P-state, three things are being taken into consideration: the TDP, the highest stable frequency at a given voltage, and the power characteristics of applications, which end up determining draw under full load, since some push hardware much harder than others.
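To make that interplay concrete, here’s a minimal sketch (with hypothetical wattage figures, not AMD’s actual methodology) of how the worst-case application ends up dictating a board’s shipping clock, assuming power scales roughly linearly with clock at a fixed voltage:

```python
# Illustrative only: hypothetical 250 W TDP and per-app power draws.

TDP_WATTS = 250.0

# Measured draw of each app at a 900 MHz reference clock (hypothetical values).
app_draw_at_900mhz = {
    "FurMark": 290.0,   # worst-case "power virus"
    "Crysis": 230.0,
    "WoW": 180.0,
}

def max_safe_clock(reference_clock_mhz: float, draw_watts: float, tdp: float) -> float:
    """Highest clock (MHz) keeping this app at or under the TDP,
    assuming draw scales linearly with clock at fixed voltage."""
    return reference_clock_mhz * min(1.0, tdp / draw_watts)

# Without per-app throttling, the shipping clock is dictated by the worst case:
worst = min(max_safe_clock(900, w, TDP_WATTS) for w in app_draw_at_900mhz.values())
print(f"Static clock bound by worst-case app: {worst:.0f} MHz")
```

In this toy model, Crysis and WoW would happily run at 900 MHz, but the shipping clock has to be set around 776 MHz because of FurMark alone.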

Source: AMD

Now, in some cases, you might have to artificially cap a GPU’s performance to get it to fit within a given power envelope--this is particularly common in notebooks. Similarly, it might become necessary to limit clock speed to fit within the PCI Express specification. The unfortunate result could be that you limit the performance of World of Warcraft because you have to cap clocks with 3DMark Vantage’s Perlin Noise test in mind, preventing instability when that test runs. Suddenly, it makes a lot more sense why both GPU manufacturers hate programs like FurMark and OCCT so much. Those "outlier" apps, as they call them, artificially hobble what their cards can do.

At the end of the day, graphics cards are protected from damage, but the protection mechanism hammers performance in the name of safety. And if you’re running an application that doesn’t reach the board’s power limit, you wind up leaving the hardware underutilized—that’s performance left on the table.

Now

AMD claims that its PowerTune technology addresses both “power problems” that GPU vendors face through dynamic TDP management.

Instead of scaling up and down static power states, PowerTune dynamically calculates a GPU engine clock based on current power draw—right up to its highest possible state. Should you dial in an overclock and run an application that pushes the card beyond its TDP, PowerTune is supposed to keep the GPU in its highest P-state, but cut back on power use by dynamically reducing clock speed.
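A minimal sketch of that control loop, assuming (as a simplification, not AMD’s disclosed algorithm) that power scales linearly with engine clock at fixed voltage:

```python
# PowerTune-style clock modulation, sketched: each control interval, estimate
# power draw and scale the engine clock down just enough to stay inside the
# TDP, instead of dropping to a lower static P-state.

TDP_WATTS = 250.0       # hypothetical board power cap
MAX_CLOCK_MHZ = 880.0   # Radeon HD 6970 reference engine clock

def powertune_clock(estimated_draw_watts: float) -> float:
    """Engine clock for the next interval: full speed if there is headroom,
    otherwise reduced proportionally (linear power-vs-clock assumption)."""
    if estimated_draw_watts <= TDP_WATTS:
        return MAX_CLOCK_MHZ
    return MAX_CLOCK_MHZ * TDP_WATTS / estimated_draw_watts

print(powertune_clock(220.0))  # light load: stays at the full 880 MHz
print(powertune_clock(290.0))  # power virus: clock trimmed to fit the cap
```

The key difference from PowerPlay is granularity: the output is a continuously computed clock, not a hop between a few fixed rungs.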

Source: AMD. The difference is more granularity

This is not to say that PowerTune will prevent you from crashing if you get too aggressive on your overclock. We tried upping the Radeon HD 6970’s clocks in AMD’s Catalyst Control Center software, keeping PowerTune at its factory setting, and still managed to get Just Cause 2 and Metro 2033 to crater. Also, it’s worth noting that actually using PowerTune is akin to overclocking. Should your shiny new 6000-series card’s death turn out to be PowerTune-related, a warranty won’t cover it. Sounds a little like Nissan equipping its GT-R with launch control, and then denying warranty claims when someone pops the tranny. Nevertheless, you’ve been warned.

How does PowerTune help performance? Well, rather than designing the Radeon HD 6000s with worst-case applications in mind, AMD is able to dial in a higher core clock at the factory (880 MHz in the case of the 6970) and rely on PowerTune to modulate performance down in the applications that would have previously forced the company to ship at, say, 750 MHz.

How It Works

So, let’s say you’re overclocking your card, leaving PowerTune at its default setting in AMD’s driver. If you run an application that wasn’t TDP-constrained by the default clock, and still isn’t constrained by the higher clock, you’ll see the scaling you expected. If the application wasn’t TDP-limited before the overclock, but does cross that threshold afterward, you’ll realize a smaller performance boost. Finally, if the application was already pushing the card’s TDP, overclocking isn’t going to get you anything extra—PowerTune was already modulating performance.
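The three outcomes above can be sketched numerically (all wattage figures hypothetical, with the same linear power-vs-clock simplification as before):

```python
# Delivered clock is the requested clock, capped when an application would
# push the board past the TDP (hypothetical 250 W cap).

TDP_WATTS = 250.0

def delivered_clock(requested_mhz: float, draw_at_requested_watts: float) -> float:
    """Clock actually delivered: capped when the app would exceed the TDP."""
    if draw_at_requested_watts <= TDP_WATTS:
        return requested_mhz
    return requested_mhz * TDP_WATTS / draw_at_requested_watts

# Hypothetical per-MHz power draw of three workloads (watts per MHz):
apps = {"light game": 0.22, "heavy game": 0.28, "power virus": 0.32}

for name, watts_per_mhz in apps.items():
    stock = delivered_clock(880, 880 * watts_per_mhz)  # stock 880 MHz
    oc = delivered_clock(915, 915 * watts_per_mhz)     # overclocked to 915 MHz
    print(f"{name}: {stock:.0f} -> {oc:.0f} MHz")
```

In this toy model, the light game scales fully (880 to 915 MHz), the heavy game crosses the cap only after the overclock and realizes just part of the step, and the power virus was already TDP-bound, so it gains nothing.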

AMD gives you a way around this, though. In the Catalyst Control Center, under the AMD Overdrive tab, there’s a PowerTune slider that goes from -20% to +20%. Sliding down the scale reins in maximum TDP, helping you save energy at the cost of performance. Moving the other direction creates thermal headroom, allowing higher performance in apps that might have been capped previously.
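The slider itself amounts to scaling the power cap up or down; a sketch (the 250 W base TDP is a hypothetical figure):

```python
# The Overdrive PowerTune slider maps -20%..+20% onto the board's power cap.

def effective_tdp(base_tdp_watts: float, slider_percent: int) -> float:
    """Scale the TDP cap by the PowerTune slider setting (-20 to +20)."""
    if not -20 <= slider_percent <= 20:
        raise ValueError("slider out of range")
    return base_tdp_watts * (1 + slider_percent / 100)

print(effective_tdp(250, -20))  # tighter cap: saves power, may cost speed
print(effective_tdp(250, 0))    # factory setting
print(effective_tdp(250, 20))   # extra headroom for TDP-bound apps
```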

In order to test this out, we fired up a few games to spot check PowerTune’s behavior, eventually settling on Metro 2033—the same app we use to log power consumption later in this piece. We also dialed in a slight overclock on our Radeon HD 6970 (915/1400 MHz). With the slider set to -20%, we saw 48.54 frames per second at the game’s High detail setting. At the default 0%, performance jumped to 56.43 FPS. At +20%, performance increased slightly to 57.33 FPS.

The logged power chart tells the tale. By dropping the PowerTune slider, it’s clear that the capability pulls down peak power use (the difference is about 27 W average). But because Metro is already using a lot of power, there isn’t any headroom left for the card to drive extra performance.

In a sense, AMD has already extracted much of the overclocking headroom you might have otherwise pursued in order to make the 6900-series cards more competitive. You can use the PowerTune slider to make more headroom available, but at the end of the day, the gains you see will be application-dependent.

AMD says that PowerTune is a silicon-level feature enabled by counters placed throughout the GPU. It works in real-time without relying on driver or application support. It’s programmable, too, so you can expect it to make a reappearance when Cayman is turned into a mobile part called Blackcomb.
