How much power do the best graphics cards use? It's an important question, and while the performance we show in our GPU hierarchy is useful, one of the true measures of a GPU is how efficient it is. To determine GPU power efficiency, we need to know both performance and power use. Measuring performance is relatively easy, but measuring power can be complex. We're here to press the reset button on GPU power measurements and do things the right way — plus it's good preparation for AMD Big Navi, Nvidia Ampere, and Intel Xe Graphics.
There are various ways to determine power use, with varying levels of difficulty and accuracy. The easiest approach is via software like GPU-Z, which will tell you what the hardware reports. Alternatively, you can measure power at the outlet using something like a Kill-A-Watt power meter, but that only captures total system power, including PSU inefficiencies. The best and most accurate means of measuring the power use of a graphics card is to measure power draw in between the power supply (PSU) and the card, but it requires a lot more work.
We've been using GPU-Z for the past six months, but it has some clear inaccuracies, so it's time to go back to doing things the right way. And by "right way," we mean measuring in-line power consumption using hardware devices. Specifically, we're using Powenetics software in combination with various monitors from TinkerForge. You can read our Powenetics project overview for additional details.
I've spent the past couple of weeks soldering together the necessary bits and pieces, followed by testing. Let me just say, soldering is not for the faint of heart. I managed not to burn myself (barely), and everything works, but it was way more difficult than building a PC. Go ahead, flame me if you think soldering is fun — I'd rather go to the dentist. It would certainly take a lot less time. Also, my test bed now has an alarming number of wires coming out of it. But I digress.
The main problem with GPU-Z is that it's prone to 'cheating' of sorts. Nvidia's GPU-Z power metrics are reasonably accurate, particularly on more recent Turing GPUs. However, AMD GPUs only report GPU power use — the rest of the graphics card, including VRAM, VRMs, etc. aren't part of the equation. How big of a difference does that make? According to our renewed Powenetics testing, many of AMD's Navi GPUs have graphics card power consumption that's 25-35W higher than just the GPU alone … and the less said about Polaris and Vega, the better. (But don't worry, we have the charts! Oh boy, do we have some charts.)
Since we have a backlog of recent graphics card reviews that used a different method of reading power, we're taking this opportunity to set the record straight. How much power does an AMD Radeon RX 5600 XT or RX 5500 XT really use — and is it more or less than the competing Nvidia parts? Now we can definitively answer that question. We're also testing previous generation hardware, more as a point of reference, so we'll have GTX 10-series and AMD Polaris and Vega as well (and I may add a few earlier cards when I get some time).
We're using our standard graphics card testbed for these power measurements, and it's what we'll use on future graphics card reviews. It consists of an MSI MEG Z390 Ace motherboard, Intel Core i9-9900K CPU, NZXT Z73 cooler, 32GB Corsair DDR4-3200 RAM, a fast M.2 SSD, and the other various bits and pieces you see to the right. This is an open test bed, because the Powenetics equipment essentially requires one.
There's a PCIe x16 riser card (which is where the soldering came into play) that slots into the motherboard, and then the graphics cards slot into that. This is how we accurately capture actual PCIe slot power draw, from both the 12V and 3.3V rails. There are also 12V kits measuring power draw for each of the PCIe Graphics (PEG) power connectors — we cut the PEG power harnesses in half and run the cables through the power blocks. RIP, PSU cable.
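For clarity, here's a minimal sketch of how the per-rail measurements combine into total board power: each monitored rail (the slot's 12V and 3.3V rails, plus each PEG connector) contributes volts times amps, and the sum is the card's total draw. The rail names and sample values below are hypothetical placeholders, not actual measurements from our equipment.

```python
# Sketch of combining per-rail voltage/current samples into total board
# power. Rail names and numbers are illustrative, not real Powenetics data.

def board_power(samples):
    """Sum instantaneous power (volts * amps) across every measured rail."""
    return sum(volts * amps for volts, amps in samples.values())

# One instant in time: PCIe slot 12V and 3.3V, plus two 8-pin PEG connectors
sample = {
    "slot_12v": (12.05, 4.10),   # (volts, amps)
    "slot_3v3": (3.31,  1.20),
    "peg1_12v": (12.08, 9.50),
    "peg2_12v": (12.07, 8.90),
}

print(round(board_power(sample), 1))  # total watts at this instant
```

Logging a reading like this many times per second over a benchmark run is what produces the real-time power curves shown in the line charts below.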
Powenetics equipment in hand, I set about retesting all of the current and previous generation GPUs I could get my hands on. Mostly I tested reference cards, at least for higher-end AMD and Nvidia GPUs. However, reference models don't always exist for budget and mid-range GPUs. I've included a few additional GPUs as well as points of reference, and of course all future GPUs will be tested using the same approach. Here's the list of what we've tested:
From AMD, I have reference Radeon RX 5700 XT and 5700 cards, along with the Radeon VII, Vega 64 and Vega 56, but AMD doesn't do 'reference' models on most other GPUs. I've also included a couple of non-reference cards for comparison, and as we'll see, there's some variation between different models of the same GPU. We'll include third party cards in our results in future reviews as well, so this is more the baseline measurement for current GPUs.
For Nvidia, everything from the RTX 2060 and above is a reference Founders Edition card — which includes the 90 MHz overclock and slightly higher TDP on the non-Super models — while the other Turing cards are all AIB partner cards. A few run at reference clocks, and others come with modest factory overclocks, which is basically the same as the non-reference AMD models. Previous generation GTX 10-series cards are also Founders Edition models, except for the 1060 3GB and lower that use partner cards.
Update: I've added several older GPUs, which is basically everything I have available for testing. The legacy cards are Nvidia's GTX 980 Ti, 980, 970, and 780, along with the AMD R9 Fury X and R9 390.
Note that all of the cards are running 'factory stock,' meaning no manual overclocking or undervolting is involved. Yes, the various cards might run better with some tuning and tweaking, but this is the way the cards will behave if you just pull them out of the box and install them in your PC.
Our actual testing remains the same as recent reviews. We loop the Metro Exodus benchmark five times at 1440p ultra, and we also run FurMark for ten minutes. These are both demanding tests, and FurMark can push some GPUs beyond their normal limits, though the latest models from AMD and Nvidia both tend to cope with it just fine. We're only focusing on power draw for this article; the temperature, fan speed, and GPU clock results still come from GPU-Z.
GPU Power Use While Gaming: Metro Exodus
Due to the number of cards being tested, we have multiple charts. The overall power chart will show average power consumption during the approximately 10 minute long test — it's actually 15 seconds shy of 10 minutes, if we're being precise. This chart does not include the time in between test runs, where power use dips for about 9 seconds, so it's a realistic view of the sort of power use you'll see when playing a game for hours on end.
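As a rough illustration of that averaging (a sketch, not our actual logging tooling), you can compute mean power while skipping the between-run dips by filtering out samples below a load threshold. The 75W cutoff and the sample values here are arbitrary placeholders:

```python
# Illustrative sketch: average logged power samples while excluding the
# low-power dips between benchmark runs. Threshold and data are made up.

def average_load_power(watts, idle_cutoff=75.0):
    """Mean of the samples above the cutoff, i.e. while the GPU is loaded."""
    loaded = [w for w in watts if w > idle_cutoff]
    return sum(loaded) / len(loaded) if loaded else 0.0

# Five loaded samples with a brief between-run dip in the middle
log = [218.4, 220.1, 219.7, 30.2, 29.8, 221.3, 219.9]
print(round(average_load_power(log), 1))
```

The same idea, applied to the full ten-minute capture, yields the averages in the overall bar chart.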
Besides the bar chart, we have separate line charts roughly segregated into budget, midrange and high-end categories, each with up to 12 GPUs. These show real-time power draw over the course of the benchmark using data from Powenetics. The 12-GPU limit is there to keep the charts mostly legible, and the division of which GPU goes on which chart is somewhat arbitrary. We've tried to group GPUs in a sensible fashion, though we couldn't fit every GPU on the ideal chart. (There's no clean break between 'budget' and 'mainstream,' plus previous generation GPUs mean we have more than 12 GPUs in some categories.)
In the overall standings, where less power is better, it's pretty easy to see how far AMD fell behind Nvidia prior to the Navi generation GPUs. The various Vega and Polaris AMD cards use significantly more power than their Nvidia counterparts. Even now, with 7nm Navi GPUs going up against 12nm GPUs, AMD is only roughly equal to Nvidia. How things will change with upcoming Nvidia Ampere and AMD Big Navi launches is something we're definitely looking forward to seeing, what with AMD's claims of 50% improvements in performance per watt with RDNA 2.
Digging into the line charts, in the first grouping of slower / lower power GPUs, Nvidia's GTX 1650 series cards come in below the competing AMD RX 5500 XT models, though performance is certainly still a factor when choosing between the cards. It's interesting that the 4GB and 8GB 5500 XT are basically identical in power use — more VRAM doesn't inherently mean substantially higher power use. Meanwhile, the RX 590 and RX 570 4GB are a big step up in power consumption, while the RX 560 4GB is the only card in our test suite that doesn't include a 6-pin or 8-pin PEG power connector and thus remains below 75W.
The second line chart highlights the big jump AMD saw with its Navi GPUs. The three GTX 1660 models (vanilla, Super and Ti) aren't on the same chart as the RX 5500 XT models, but they're pretty much tied for power use. Stepping up in performance, the RX 5700 and RX 5600 XT are right in the thick of things compared to Turing and the Vega 56. And speaking of Vega 56, the PowerColor model comes with a modest overclock and a much larger cooler, which allows it to remain cool. That means higher clocks and higher power use, along with better framerates.
Last, the highest performance cards can draw a lot of power, with Vega 64 actually surpassing the reference model Radeon VII. The RX 5700 XT meanwhile delivers nearly the same performance as the VII while using substantially less power. It's also interesting to see that the previous Nvidia Pascal cards (GTX 1080 and GTX 1070 Ti) still use less power than their 'replacement' Turing models, and slightly more power than the 2060 Super. That's expected, since both architectures use TSMC's similar 12nm / 16nm process technology. Moving to 7nm ought to provide a substantial boost in power efficiency and performance for Nvidia's next generation GPUs.
GPU Power with FurMark
Let's put gaming behind us and move on to the FurMark test, which, as we've frequently pointed out, is basically a worst-case scenario for power use. Some GPUs tend to be more aggressive about throttling with FurMark, while others go hog wild and dramatically exceed their official TDPs. Few if any games will tax a GPU quite like FurMark, though workloads like cryptocurrency mining can come close. The chart setup is the same as above, with a high-level overview followed by three detailed line charts.
In the overall chart, most of AMD's GPUs move toward the bottom — and lower is better here, so that's not good. Radeon VII power use jumps 30W compared to the Metro Exodus testing, and the Vega and Polaris GPUs see a big spike as well. The RX 570 4GB (an MSI Gaming X model) actually exceeds the official power spec for an 8-pin PEG connector with FurMark, pulling nearly 180W. That's thankfully the only GPU to go above spec, for the PEG connector(s) or the PCIe slot, but it does illustrate just how bad things can get in a worst-case workload.
The remaining GPUs, meaning AMD's latest Navi parts and Nvidia's Turing and Pascal chips, mostly don't change power use too much. The various Nvidia RTX cards are all within about 5W of the Metro Exodus numbers, and the same applies to Pascal. There are only a few exceptions: The GTX 1660 power use under FurMark jumps by 15W, actually surpassing the power use of the 1660 Ti, and the GTX 1060 3GB, 1050 Ti, and 1050 all see larger jumps as well.
AMD's Navi GPUs split the difference between Turing and Vega, with the RX 5500 XT cards the worst of the bunch, jumping 45W. The 5600 XT shows a smaller 20W delta, the RX 5700 only changes by 10W, and the RX 5700 XT shows a mere 3W difference.
It's interesting that the budget chips from both companies seem to get hit a lot harder by FurMark than by games, and perhaps it's just a case of the budget models not being designed to detect and throttle FurMark. We've checked other settings on the budget GPUs in Metro, though, and can't hit the same power levels as FurMark. Part of the problem may simply be that demanding games push beyond the GPUs' capabilities whereas synthetic loads like FurMark are able to max out power draw.
One thing we're not showing is the GPU-Z power data, though we have it. While Nvidia's Pascal GPUs have real power use typically within 5W of the GPU-Z number, and Turing GPUs are practically bang on, AMD's Navi and Polaris GPUs have total board power use that's 25-35W higher than the GPU-only power use shown in GPU-Z. And Vega? There's up to an 80W delta between GPU-Z and Powenetics with the Vega 64. Hopefully AMD reconsiders how it reports power in future GPUs, as it would be far more helpful to report board power rather than only GPU power.
There's not much else to say about the line charts, other than noting that power use is higher and this time most of the GPUs are hitting close to their rated power and then staying there. There are no major fluctuations, except on the two Vega 64 cards. It's also interesting how the vanilla GTX 1660 with GDDR5 uses more power than the Super and Ti, showing one of the other benefits of GDDR6 besides bandwidth.
Analyzing GPU Power Use and Efficiency
It's worth noting that we're not showing or discussing GPU clocks, fan speeds or GPU temperatures in this article. Power, performance, temperature and fan speed are all interrelated, so a higher fan speed can drop temperatures and allow for higher performance and power consumption. Alternatively, a card can drop GPU clocks in order to reduce power consumption and temperature. We dig into this in our individual GPU and graphics card reviews, but we just wanted to focus on resetting the power charts for now. If you see discrepancies between previous and future GPU reviews, this is why.
Looking forward, the switch back to in-line power measurements also prepares us for the upcoming launches of AMD RDNA 2, Nvidia Ampere and Intel Xe Graphics cards. Hopefully AMD and Nvidia improve even further on efficiency, and we're ready to test when the cards arrive. Intel meanwhile is something of a wild card. Current Intel integrated graphics can be very power efficient, but they're also pathetically slow. What will happen when Intel attempts to make a dedicated GPU, and will Intel report accurate power consumption to utilities like GPU-Z? Of course, with Powenetics we won't have to worry about that.
We can now properly measure the real graphics card power use and not be left to the whims of the various companies when it comes to power information. It's not that power is the most important metric when looking at graphics cards, but if other aspects like performance, features and price are the same, getting the card that uses less power is a good idea. Now bring on the new GPUs!
Here's the final high-level overview of our GPU power testing, showing relative efficiency in terms of performance per watt:
|GPU||Efficiency Score|
|GTX 1660 Ti||100.0%|
|GTX 1660 Super||96.7%|
|GTX 1650 GDDR6||96.3%|
|RX 5600 XT||93.8%|
|RTX 2060 Super FE||93.8%|
|RTX 2080 FE||92.1%|
|GTX 1650 Super||91.7%|
|RTX 2060 FE||90.1%|
|GTX 1050 Ti||89.5%|
|RTX 2070 Super FE||88.7%|
|RTX 2070 FE||88.1%|
|RTX 2080 Ti FE||88.0%|
|GTX 1070 FE||85.9%|
|RTX 2080 Super FE||84.9%|
|RX 5700 XT||83.7%|
|GTX 1080 FE||82.9%|
|GTX 1060 6GB FE||76.9%|
|GTX 1070 Ti FE||76.8%|
|RX 5500 XT 8GB||76.6%|
|GTX 1080 Ti FE||74.5%|
|GTX 1060 3GB||74.3%|
|RX 5500 XT 4GB||69.2%|
|RX Vega 56||65.2%|
|RX 560 4GB||64.1%|
|GTX 980 Ti||51.3%|
|RX Vega 64||49.2%|
|RX 570 4GB||45.3%|
|R9 Fury X||39.1%|
This table combines the performance data for all of the tested GPUs with the power use data discussed above, sorts by performance per watt, and then scales all of the scores relative to the most efficient GPU. It's a telling look at how far behind AMD was, how far it's come with the Navi architecture, and the work that yet remains.
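For the curious, the scaling works like this (a hedged sketch with made-up numbers, not our measured results): divide each card's average frame rate by its average power draw, then express every card's result as a percentage of the best performance per watt.

```python
# Sketch of the efficiency-score scaling used in the table above:
# performance per watt, normalized so the most efficient card scores 100%.
# The card names, fps, and watt figures below are placeholders.

def efficiency_scores(results):
    """results: {card: (avg_fps, avg_watts)} -> {card: % of best perf/W}."""
    ppw = {card: fps / watts for card, (fps, watts) in results.items()}
    best = max(ppw.values())
    return {card: round(100 * value / best, 1) for card, value in ppw.items()}

scores = efficiency_scores({
    "Card A": (60.0, 120.0),   # 0.50 fps per watt
    "Card B": (90.0, 225.0),   # 0.40 fps per watt
})
print(scores)
```

Note that a faster card can still score lower: Card B delivers 50% more performance here but draws proportionally more power, so it lands at 80% of Card A's efficiency.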
Efficiency isn't the only important metric for a GPU, and performance definitely matters. However, cards often pointed to as being extremely good bargains can have a dark side. The Radeon RX 570 4GB, for example, has been one of the top picks for budget GPUs for the past year. Often priced at only $120, it delivers decent gaming performance. Power use on the other hand can be roughly double that of newer cards that deliver similar performance, like the GTX 1650 GDDR6, and it sits near the bottom of our relative performance per watt table.
The most efficient GPUs end up as a mix of AMD's Navi 10 cards and Nvidia's Turing chips, though Nvidia holds both an efficiency and a numerical advantage: where AMD only has five different Navi parts right now, Nvidia has seven RTX Turing GPUs and six more GTX Turing chips. The RX 5700 places second, just behind the GTX 1660 Ti, showing that balanced mid-range GPUs and even budget chips make for a potent efficiency combination. Nvidia's most efficient ray tracing card is the RTX 2060 Super, which ranks sixth, while AMD's higher-clocked RX 5700 XT sits clear down in 18th place.