Graphics Card Power Consumption and Efficiency Tested


How much power do the best graphics cards use? It's an important question, and while the performance we show in our GPU benchmarks hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We're here to press the reset button on GPU power measurements and do things the right way.

There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate approach is to measure power draw between the power supply (PSU) and the card, though it requires a lot more work.
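
To make the software approach concrete: on Nvidia cards, tools like GPU-Z surface the driver's own board-power counter, which can be queried directly through NVML. Here's a minimal sketch using the pynvml bindings (assumes an Nvidia GPU and the pynvml package; the card index and 1 Hz sampling rate are arbitrary choices):

```python
# Minimal sketch: poll the driver-reported board power that tools like
# GPU-Z surface. Assumes an Nvidia GPU and the pynvml package.
import time
from pynvml import (nvmlInit, nvmlShutdown,
                    nvmlDeviceGetHandleByIndex, nvmlDeviceGetPowerUsage)

nvmlInit()
handle = nvmlDeviceGetHandleByIndex(0)        # first GPU in the system
try:
    for _ in range(10):
        mw = nvmlDeviceGetPowerUsage(handle)  # reported in milliwatts
        print(f"{mw / 1000:.1f} W")
        time.sleep(1.0)                       # sample at 1 Hz
finally:
    nvmlShutdown()
```

Whichever tool reads it, this is still just the number the hardware chooses to report, which is exactly where the inaccuracies discussed below come in.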

We've used GPU-Z in the past, but it had some clear inaccuracies. Depending on the GPU, it can be off by anywhere from a few watts to potentially 50W or more. Thankfully, the latest generation AMD Big Navi and Nvidia Ampere GPUs tend to report relatively accurate data, but we're doing things the right way. And by "right way," we mean measuring in-line power consumption using hardware devices. Specifically, we're using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.

After assembling the necessary bits and pieces — some soldering is required, and we have a list of the best soldering irons to help — the testing process is relatively straightforward. Plug in a graphics card and the power leads, boot the PC, and run some tests that put a load on the GPU while logging power use.

We've done that with all the legacy GPUs we have from the past six years or so, and we do the same for every new GPU launch. We've updated this article with the latest data from the GeForce RTX 3090, RTX 3080, RTX 3070, RTX 3060 Ti, and RTX 3060 12GB from Nvidia; and the Radeon RX 6900 XT, RX 6800 XT, RX 6800, and RX 6700 XT from AMD. We use the reference models whenever possible, which means only the EVGA RTX 3060 is a custom card.

If you want to see power use and other metrics for custom cards, all of our graphics card reviews include power testing. So for example, the RX 6800 XT roundup shows that many custom cards use about 40W more power than the reference designs, thanks to factory overclocks.

Test Setup

We're using our standard graphics card testbed for these power measurements, and it's what we'll use on graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, Intel Core i9-9900K CPU, NZXT Z73 cooler, 32GB Corsair DDR4-3200 RAM, a fast M.2 SSD, and the other various bits and pieces you see to the right. This is an open test bed, because the Powenetics equipment essentially requires one.

There's a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.
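
With every tap instrumented, total board power is just volts times amps, summed across the slot rails and the PEG connectors. Here's a toy version of that bookkeeping (the rail names and readings are illustrative; Powenetics does this aggregation for us):

```python
# Sketch: combine per-rail readings into total board power. The rail
# names and sample values are illustrative -- Powenetics performs this
# aggregation internally.
def board_power(samples: dict[str, tuple[float, float]]) -> float:
    """samples maps rail name -> (volts, amps) for one time slice."""
    return sum(volts * amps for volts, amps in samples.values())

# One hypothetical time slice for a heavily loaded dual 8-pin card:
slice_readings = {
    "slot_12v": (12.1, 4.2),    # ~51 W through the PCIe slot, 12V rail
    "slot_3v3": (3.3, 1.0),     # ~3 W on the slot's 3.3V rail
    "peg1_12v": (12.0, 11.8),   # ~142 W on the first 8-pin connector
    "peg2_12v": (12.0, 11.5),   # ~138 W on the second 8-pin connector
}
print(f"{board_power(slice_readings):.1f} W")   # ~334 W total
```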

Powenetics equipment in hand, we set about testing and retesting all of the current and previous generation GPUs we could get our hands on. You can see the full list of everything we've tested in the list to the right.

From AMD, all of the latest generation Big Navi / RDNA2 GPUs use reference designs, as do the previous gen RX 5700 XT and RX 5700 cards, Radeon VII, Vega 64, and Vega 56. AMD doesn't do 'reference' models on most other GPUs, so we've used third-party designs to fill in the blanks.

For Nvidia, all of the Ampere GPUs are Founders Edition models, except for the EVGA RTX 3060 card. With Turing, everything from the RTX 2060 and above is a Founders Edition card — which includes the 90 MHz overclock and slightly higher TDP on the non-Super models — while the other Turing cards are all AIB partner cards. Older GTX 10-series and GTX 900-series cards use reference designs as well, except where indicated.

Note that all of the cards are running 'factory stock,' meaning no manual overclocking or undervolting is involved. Yes, the various cards might run better with some tuning and tweaking, but this is the way the cards will behave if you just pull them out of their box and install them in your PC. (RX Vega cards in particular benefit from tuning, in our experience.)

Our testing uses the Metro Exodus benchmark looped five times at 1440p ultra (except on cards with 4GB or less VRAM, where we loop 1080p ultra — that uses a bit more power). We also run FurMark for ten minutes. These are both demanding tests, and FurMark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia both tend to cope with it just fine. We're only focusing on power draw for this article; we still use GPU-Z to gather the temperature, fan speed, and GPU clock data.
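
The logging side of the procedure is simple in outline: start the load, then sample power at a fixed interval until it exits. Here's a skeletal sketch (the read_power() stub is a hypothetical placeholder for the measurement rig; this isn't the actual Powenetics tooling):

```python
# Sketch: launch a load test and log one power sample per second to CSV.
# read_power() is a hypothetical stand-in for the measurement rig
# (e.g. the rail-summing sketch above), not a real Powenetics API.
import csv
import subprocess
import time

def read_power() -> float:
    # Placeholder: replace with a real meter readout.
    return 0.0

def log_power_during(cmd: list[str], out_csv: str, interval: float = 1.0) -> None:
    proc = subprocess.Popen(cmd)            # start the load (game, FurMark, etc.)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "watts"])
        start = time.time()
        while proc.poll() is None:          # sample until the load exits
            writer.writerow([round(time.time() - start, 1), read_power()])
            time.sleep(interval)

# e.g. log_power_during(["C:/path/to/benchmark.exe"], "power_log.csv")
```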


GPU Power Use While Gaming: Metro Exodus 

Due to the number of cards being tested, we have multiple charts. The average power use charts show average power consumption during the approximately 10-minute test. These charts do not include the time between test runs, where power use dips for about 9 seconds, so it's a realistic view of the sort of power use you'll see when playing a game for hours on end.
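
Mechanically, that means masking out the idle dips before averaging. A small sketch of the idea (the 50%-of-peak cutoff is an illustrative threshold, not the exact filter we apply):

```python
# Sketch: average power while ignoring the brief idle dips between
# benchmark runs. The 50%-of-peak cutoff is an illustrative threshold,
# not the exact rule used for the charts.
def gaming_average(watts: list[float]) -> float:
    cutoff = 0.5 * max(watts)               # anything below counts as "between runs"
    loaded = [w for w in watts if w >= cutoff]
    return sum(loaded) / len(loaded)

# Example: ~320 W under load, with a ~60 W dip between two loops.
trace = [318.0, 322.5, 319.8, 61.2, 59.8, 320.4, 321.1]
print(f"{gaming_average(trace):.1f} W")     # averages only the loaded samples
```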

Besides the bar chart, we have separate line charts segregated into groups of up to 12 GPUs, and we've grouped cards from similar generations into each chart. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12 GPUs per chart limit is to try and keep the charts mostly legible, and the division of what GPU goes on which chart is somewhat arbitrary.

Kicking things off with the latest generation GPUs, the overall power use is relatively similar. The 3090 and 3080 use the most power (for the reference models), followed by the three Navi 21 cards. The RTX 3070, RTX 3060 Ti, and RX 6700 XT are all pretty close, with the RTX 3060 dropping power use by around 35W. AMD does lead Nvidia in pure power use when looking at the RX 6800 XT and RX 6900 XT compared to the RTX 3080 and RTX 3090, but then Nvidia's GPUs are a bit faster, so it mostly equals out.

Step back one generation to the Turing GPUs and Navi 1x, and Nvidia had far more GPU models available than AMD. There were 15 Turing variants — six GTX 16-series and nine RTX 20-series — while AMD only had five RX 5000-series GPUs. Comparing similar performance levels, Nvidia Turing generally comes in ahead of AMD, despite using a 12nm process compared to 7nm. That's particularly true when looking at the GTX 1660 Super and below versus the RX 5500 XT cards, though the RTX models are closer to their AMD counterparts (while offering extra features).

It's pretty obvious how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. RX Vega 64 was particularly egregious, with the reference card using nearly 300W. If you're still running an older generation AMD card, this is one good reason to upgrade. The same is true of the legacy cards, though we're missing many models from these generations of GPU. Perhaps the less said, the better, so let's move on.


GPU Power with FurMark 

FurMark, as we've frequently pointed out, is basically a worst-case scenario for power use. Some GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed their official TDPs. Few, if any, games can tax a GPU quite like FurMark, though things like cryptocurrency mining can come close with some algorithms (but not Ethereum's Ethash, which tends to be limited by memory bandwidth). The chart setup is the same as above, with average power use charts followed by detailed line charts.

The latest Ampere and RDNA2 GPUs are relatively evenly matched, with all of the cards using a bit more power in FurMark than in Metro Exodus. One thing we're not showing here is average GPU clocks, which tend to be far lower than in gaming scenarios — you can see that data, along with fan speeds and temperatures, in our graphics card reviews.

The Navi / RDNA1 and Turing GPUs start to separate a bit more, particularly in the budget and midrange segments. AMD didn't really have anything to compete against Nvidia's top GPUs, as the RX 5700 XT only matched the RTX 2070 Super at best. Note the gap in power use between the RTX 2060 and RX 5600 XT, though. In gaming, the two GPUs were pretty similar, but in FurMark the AMD chip uses nearly 30W more power. Actually, the 5600 XT used more power than the RX 5700, but that's probably because the Sapphire Pulse we used for testing has a modest factory overclock. The RX 5500 XT cards also draw more power than any of the GTX 16-series cards.

Moving to the Pascal, Polaris, and Vega generations, AMD's GPUs fall toward the bottom. The Vega 64 and Radeon VII both use nearly 300W, and considering the Vega 64 competes with the GTX 1080 in performance, that's pretty awful. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official 150W power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That's thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.

The legacy charts are even worse for AMD. The R9 Fury X and R9 390 go well over 300W with FurMark, though perhaps that's more of an issue with the hardware not throttling to stay within spec. Anyway, it's great to see that AMD no longer trails Nvidia as badly as it did five or six years ago!

Analyzing GPU Power Use and Efficiency 

It's worth noting that we're not showing or discussing GPU clocks, fan speeds, or GPU temperatures in this article. Power, performance, temperature, and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we just wanted to focus on the power charts here. If you see discrepancies between previous and future GPU reviews, this is why.

The good news is that, using these testing procedures, we can properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It's not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!

Here's the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt. The power data listed is a weighted geometric mean of the Metro Exodus and FurMark power consumption, while the FPS comes from our GPU benchmarks hierarchy and uses the geometric mean of nine games tested at six different settings and resolution combinations (so 54 results, summarized into a single fps score).

| Graphics Card | GPU FPS (9 Games) | GPU Power (Watts) | Efficiency Score |
| --- | --- | --- | --- |
| RX 6800 | 130.8 | 235.4 | 100.0% |
| RTX 3070 | 116.6 | 219.3 | 95.7% |
| RX 6700 XT | 112.0 | 215.5 | 93.5% |
| RTX 3060 Ti | 106.3 | 205.5 | 93.1% |
| RTX 3060 12GB | 83.6 | 171.8 | 87.6% |
| RX 6900 XT | 148.1 | 308.5 | 86.4% |
| RX 5700 | 78.4 | 165.8 | 85.1% |
| RX 6800 XT | 142.8 | 303.4 | 84.7% |
| GTX 1660 Super | 57.9 | 124.2 | 83.9% |
| GTX 1660 Ti | 57.8 | 124.0 | 83.8% |
| RTX 2080 Ti | 118.2 | 259.4 | 82.0% |
| RX 5600 XT | 71.1 | 158.3 | 80.8% |
| GTX 1650 GDDR6 | 36.4 | 83.3 | 78.7% |
| RTX 2080 | 95.5 | 219.0 | 78.5% |
| RTX 2060 Super | 77.2 | 177.3 | 78.4% |
| RTX 2060 | 68.5 | 159.2 | 77.5% |
| RTX 3080 | 142.1 | 333.0 | 76.8% |
| RTX 2070 Super | 91.0 | 213.3 | 76.8% |
| GTX 1650 Super | 43.5 | 102.3 | 76.4% |
| RTX 3090 | 152.7 | 361.0 | 76.1% |
| Titan RTX | 121.4 | 287.7 | 75.9% |
| Titan V | 104.9 | 249.4 | 75.6% |
| GTX 1660 | 50.1 | 119.2 | 75.6% |
| RTX 2080 Super | 102.1 | 246.8 | 74.4% |
| RTX 2070 | 81.0 | 196.1 | 74.4% |
| RX 5700 XT | 87.1 | 215.1 | 72.9% |
| GTX 1050 Ti | 24.5 | 61.0 | 72.3% |
| GTX 1070 | 56.1 | 141.7 | 71.3% |
| GTX 1650 | 31.9 | 82.5 | 69.6% |
| GTX 1080 | 69.1 | 180.4 | 68.9% |
| Titan Xp | 93.3 | 249.5 | 67.3% |
| RX 5500 XT 8GB | 48.6 | 133.7 | 65.4% |
| GTX 1070 Ti | 63.9 | 175.7 | 65.4% |
| GTX 1080 Ti | 88.2 | 246.8 | 64.3% |
| GTX 1060 6GB | 40.4 | 115.0 | 63.2% |
| GTX 1050 | 18.6 | 54.7 | 61.2% |
| Radeon VII | 89.9 | 266.7 | 60.7% |
| RX 5500 XT 4GB | 43.3 | 133.1 | 58.5% |
| GTX 1060 3GB | 34.0 | 108.6 | 56.4% |
| RX Vega 56 | 65.3 | 210.5 | 55.8% |
| RX 560 4GB | 19.1 | 65.1 | 52.9% |
| RX Vega 64 | 74.0 | 297.0 | 44.8% |
| RX 570 4GB | 38.5 | 163.1 | 42.5% |
| GTX 980 | 40.4 | 173.2 | 41.9% |
| GTX Titan X | 53.9 | 232.1 | 41.8% |
| GTX 980 Ti | 50.3 | 219.3 | 41.3% |
| RX 590 | 49.4 | 219.2 | 40.6% |
| GTX 970 | 33.8 | 150.4 | 40.4% |
| RX 580 | 47.2 | 214.2 | 39.6% |
| R9 Fury X | 50.0 | 261.1 | 34.4% |
| R9 390 | 41.5 | 263.6 | 28.3% |

This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU (currently the RX 6800). It's a telling look at how far behind AMD was, and how far it's come with the latest Big Navi architecture.
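
For the curious, the scoring arithmetic is easy to reproduce. The sketch below pulls a few (fps, watts) pairs straight from the table; the 0.75/0.25 Metro/FurMark weighting in the helper is only an illustrative guess, as we don't publish the exact split:

```python
# Sketch: reproducing the efficiency column. The power column is a
# weighted geometric mean of Metro Exodus and FurMark draw; the
# 0.75/0.25 weights below are an illustrative guess, not the
# published split.
def weighted_geomean(values: list[float], weights: list[float]) -> float:
    total = sum(weights)
    result = 1.0
    for v, w in zip(values, weights):
        result *= v ** (w / total)
    return result

# e.g. hypothetical Metro/FurMark draws collapsing to a single figure:
combined = weighted_geomean([230.0, 252.0], [0.75, 0.25])   # ~235 W

# The score is performance per watt, scaled to the best card:
table = {   # (fps, combined watts) taken straight from the table above
    "RX 6800":  (130.8, 235.4),
    "RTX 3070": (116.6, 219.3),
    "RTX 3090": (152.7, 361.0),
}
best = max(fps / w for fps, w in table.values())
for name, (fps, w) in table.items():
    print(f"{name}: {100 * (fps / w) / best:.1f}%")   # 100.0%, 95.7%, 76.1%
```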

Efficiency isn't the only important metric for a GPU; performance definitely matters. Note also that the performance data doesn't include newer technologies like ray tracing and DLSS.

The most efficient GPUs are a mix of AMD's Big Navi GPUs and Nvidia's Ampere cards, along with some first generation Navi and Nvidia Turing chips. AMD claims the top spot with the Navi 21-based RX 6800, and Nvidia takes second place with the RTX 3070. Seven of the top ten spots are occupied by either RDNA2 or Ampere cards. However, Nvidia's GDDR6X-equipped GPUs, the RTX 3080 and 3090, rank 17th and 20th, respectively.

Given the current GPU shortages, finding a new graphics card in stock is difficult at best. By the time things settle down, we might even have RDNA3 and Hopper GPUs on the shelves. If you're still hanging on to an older generation GPU, upgrading might be problematic right now, but at some point it will be the smart move, considering the added performance and efficiency offered by more recent cards.

Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • King_V
    Thanks for this.

    I have to admit, and maybe it's just the cooling limitation, but I did not at all expect the Vega 56 and Vega 64 to measure pretty much exactly what their official TDP numbers of 210W and 295W are.

    and I may add a few earlier cards when I get some time
    Don't taunt me like this, LOL
  • salgado18
    and I may add a few earlier cards when I get some time
    Geforce 8800GT? ;)
  • JarredWaltonGPU
    King_V said:
    Thanks for this.

    I have to admit, and maybe it's just the cooling limitation, but I did not at all expect the Vega 56 and Vega 64 to measure pretty much exactly what their official TDP numbers of 210W and 295W are.


    Don't taunt me like this, LOL
    LOL. I don't have all of the older generation GPUs, and in fact I'm missing a whole lot of potentially interesting models. But I do have 980 Ti, 980, 970, Fury X, and R9 390 that might be fun to check.

    Yeah, what the heck -- I'm going to stuff in the 900 series while I work on some other writing. Thankfully, it's not too hard to just capture the data in the background while I work. Swap card, run Metro for 10 minutes, run FurMark for 10 minutes. Repeat. Check back to see new charts in... maybe by the end of the day. (I have I think a 770 and 780 as well sitting around. And R9 380 if I'm feeling ambitious.)

    On Vega, the one thing that's shocking to me is the gap between GPU-Z and Powenetics. The Vega 64 appears to use about 80W on "other stuff" besides the GPU, if GPU-Z's power numbers are accurately reporting GPU-only power. The Vega 56 isn't nearly as high -- about a 45W difference between GPU-Z and Powenetics. That suggests the 18% increase in HBM2 clocks causes about a 75% increase in HBM2 power use (though some of it is probably VRMs and such as well).

    Full figures:
    Vega 64 Powenetics: 296.7W avg power during Metro, GPU-Z says 219.2W.
    Vega 56 Powenetics: 209.4W avg power during Metro, GPU-Z says 164.6W.
  • JarredWaltonGPU
    salgado18 said:
    Geforce 8800GT? ;)
    Why stop there? I really do sort of wish I had an FX 5900 Ultra floating around. :D
  • bit_user
    Thanks for doing this!

    JarredWaltonGPU said:
    I do have 980 Ti, ... Fury X, and R9 390 that might be fun to check.
    These three are most interesting to me. I have a GTX 980 Ti and the Fury X was its AMD counterpart.

    What's intriguing about the R9 390 is that it was the last 512-bit card. Hawaii was hot, in general (275 W?).
  • King_V
    JarredWaltonGPU said:
    LOL. I don't have all of the older generation GPUs, and in fact I'm missing a whole lot of potentially interesting models. But I do have 980 Ti, 980, 970, Fury X, and R9 390 that might be fun to check.

    Yeah, what the heck -- I'm going to stuff in the 900 series while I work on some other writing. Thankfully, it's not too hard to just capture the data in the background while I work. Swap card, run Metro for 10 minutes, run FurMark for 10 minutes. Repeat. Check back to see new charts in... maybe by the end of the day. (I have I think a 770 and 780 as well sitting around. And R9 380 if I'm feeling ambitious.)

    On Vega, the one thing that's shocking to me is the gap between GPU-Z and Powenetics. The Vega 64 appears to use about 80W on "other stuff" besides the GPU, if GPU-Z's power numbers are accurately reporting GPU-only power. The Vega 56 isn't nearly as high -- about a 45W difference between GPU-Z and Powenetics. That suggests the 18% increase in HBM2 clocks causes about a 75% increase in HBM2 power use (though some of it is probably VRMs and such as well).

    Full figures:
    Vega 64 Powenetics: 296.7W avg power during Metro, GPU-Z says 219.2W.
    Vega 56 Powenetics: 209.4W avg power during Metro, GPU-Z says 164.6W.

    I'd most certainly be curious as to the R9 380 results, but that's because, if I understand it correctly, it's basically a slightly overclocked R9 285. Since I had an R9 285 (Gigabyte's Windforce OC version), well, the curiosity is there, LOL

    And yep, with the Vegas, I was talking about the Powenetics numbers. I had assumed they'd blow past their official TDP numbers, but no, they're right in line, give or take 1-2 watts.
  • JarredWaltonGPU
    King_V said:
    I'd most certainly be curious as to the R9 380 results, but that's because, if I understand it correctly, it's basically a slightly overclocked R9 285. Since I had an R9 285 (Gigabyte's Windforce OC version), well, the curiosity is there, LOL

    And yep, with the Vegas, I was talking about the Powenetics numbers. I had assumed they'd blow past their official TDP numbers, but no, they're right in line, give or take 1-2 watts.
    So, my old R9 380 4GB card is dead. Actually, it works, but one of the fans is busted and it crashed multiple times in both Metro Exodus and FurMark, so I'm not going to put any more effort into that one. RIP, 380...

    Anyway, I didn't show GPU clockspeeds, which is another dimension of the Vega numbers. The throttling in some of the tests is ... severe. This is why people undervolt, because otherwise the power use is just brutal on Vega. But undervolting can create instability and isn't a panacea.

    In Metro, the Vega 56 averages GPU clocks of 1230MHz -- not too bad, considering the official spec is 1156MHz base, 1471MHz boost. Vega 64 averages 1494MHz, again with base of 1247MHz and boost of 1546MHz. But FurMark... Vega 56 drops to 835MHz average clocks, and Vega 64 is at 1270MHz. That's on 'reference' models. The PowerColor V56 is higher power and clocks, obviously. MSI V64 is also better, possibly just because of binning and being made a couple of months after the initial Vega launch.
  • salgado18
    JarredWaltonGPU said:
    Why stop there? I really do sort of wish I had an FX 5900 Ultra floating around. :D
    That would be awesome, right? If only you could get an AGP motherboard :ROFLMAO:

    Hey, throw some emails around, you might get some old cards borrowed. It would be very interesting to see the evolution in efficiency.
  • gdmaclew
    There must be hundreds of thousands of RX580s out there but you chose NOT to test them?
    Perhaps you have other articles that do that?
    You have something against Polaris?
  • King_V
    gdmaclew said:
    There must be hundreds of thousands of RX580s out there but you chose NOT to test them?
    Perhaps you have other articles that do that?
    You have something against Polaris?

    Oh, OBVIOUSLY he has something against Polaris, which is why he tested the RX 570 and RX 590. Clearly. :rolleyes: