Nvidia's new RTX 3090 Ti might be a power hog, with a high price to match. However, when power-limited to 300W, it seemingly turns into one of the most power-efficient GPUs on the Ampere architecture. In a recent Igor's Lab review, Igor Wallossek tested an MSI Suprim X RTX 3090 Ti that was power-limited to just 300W and, unsurprisingly, saw some amazing efficiency gains.
From the factory, the GeForce RTX 3090 Ti has the highest power consumption we've ever recorded from an Nvidia GPU, with a reference specification of around 450W. That number increases to as much as 550W for select AIB partner cards (yes, that's an additional 100W over reference).
In other words, while the RTX 3090 Ti might be the fastest gaming GPU on the planet, it throws power efficiency right out of the window to claim that title. In general, it seems to have a roughly 5-10% lead over the vanilla RTX 3090 (depending on resolution), and that card consumes 100W less power under load. That's a big jump in power for a relatively small increase in performance.
So what happens when you take the RTX 3090 Ti and throw away its massive power budget? Igor set out to find the answer, and the results are impressive, though you can generally get improved efficiency from any GPU by adjusting power limits and voltages.
Igor used MSI Afterburner to drop the power limit to 300W, then adjusted clock speeds on the voltage-frequency (VF) curve. For the VF curve, Igor duplicated the one Nvidia uses on the RTX A6000, which has a 300W power limit and the same GA102 die as the 3090 Ti (though it uses GDDR6 instead of GDDR6X memory).
This sees the RTX 3090 Ti peaking at a maximum of 2050MHz on the highest voltage points within the curve. For reference, this is the maximum clock speed GPU Boost 4.0 can pull from the card when conditions are optimal.
For comparison, Igor used similar Suprim X versions of the RTX 3080 10GB/12GB, RTX 3080 Ti, and RTX 3090 to keep everything consistent, as the Suprim X SKUs from MSI are all heavily factory overclocked. For AMD, Igor used MSI's Gaming X versions of the RX 6800 XT and RX 6900 XT, which are also factory overclocked.
Ten games were tested: Borderlands 3, Control, Far Cry 6, Ghost Recon Breakpoint, Horizon Zero Dawn, Metro Exodus Enhanced Edition, Shadow of the Tomb Raider, Watch Dogs Legion, Wolfenstein Youngblood, and World War Z. All games were tested at 4K resolution.
When comparing all these games, the power-limited RTX 3090 Ti's gaming performance is nearly identical to the Suprim X RTX 3080 Ti's, with an average FPS across all titles of 96.3 for the 3090 Ti and 97.7 for the 3080 Ti, a difference of just 1.4%. However, the gap in power consumption between the two cards is far more drastic.
At its peak, the power-limited RTX 3090 Ti pulled a maximum of 314 watts, while the 3080 Ti drew an additional 95 watts at 409W. That works out to almost a 30% improvement in power efficiency for the RTX 3090 Ti. The "300W" RTX 3090 Ti's power consumption is also better than the Radeon RX 6800 XT's (319W) while still being 16% faster in average gaming performance. Similarly, the RTX 3080 10GB pulled 351 watts but came in 11% slower than the 300W 3090 Ti.
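The efficiency comparison is easy to reproduce from the numbers above. A quick perf-per-watt sketch; note that the FPS figures for the RX 6800 XT and RTX 3080 10GB aren't stated directly in the review and are back-computed here from the quoted percentage deltas, so treat them as approximations:

```python
# Perf-per-watt from the figures quoted above (4K averages across the
# ten-game suite). FPS for the RX 6800 XT and RTX 3080 10GB are
# back-computed from the quoted percentage deltas, so treat them as
# approximate.
cards = {
    "RTX 3090 Ti @ 300W": (96.3, 314),
    "RTX 3080 Ti": (97.7, 409),
    "RX 6800 XT": (96.3 / 1.16, 319),     # ~16% slower than the 300W 3090 Ti
    "RTX 3080 10GB": (96.3 / 1.11, 351),  # ~11% slower
}

for name, (fps, watts) in cards.items():
    print(f"{name}: {fps / watts:.3f} FPS per watt")

fps_per_watt = {name: fps / watts for name, (fps, watts) in cards.items()}
gain = fps_per_watt["RTX 3090 Ti @ 300W"] / fps_per_watt["RTX 3080 Ti"] - 1
print(f"Efficiency advantage over the 3080 Ti: {gain:.0%}")  # ~28%
```

That lands at roughly 28%, matching the "almost 30%" figure above.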
All in all, the power efficiency of the 3090 Ti at 300W is surprisingly good, and demonstrates the well-known penalty of pushing too far up the voltage and frequency curve. Nvidia's Ampere architecture can be very power-hungry, particularly when used in a factory overclocked configuration. But like AMD's old R9 Nano, clamping down on power use can deliver a huge win in power efficiency.
Of course, buying a $2,000 graphics card and then making it run like a $1,200 card just to save 100W of power isn't exactly sensible. At typical power costs, it would only take around 3,300 days of 24/7 use to make up for the $800 difference. We're not sure what sort of PC hardware we'll have in nine years, but it's almost certainly going to be better than a power limited RTX 3090 Ti.
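The payback arithmetic behind that ~3,300-day figure is straightforward. A quick sketch; the electricity rate is our assumption (roughly $0.10/kWh), since the article doesn't state one:

```python
# Days of 24/7 use needed for a 100W saving to recoup the $800 price gap.
# The $0.10/kWh electricity rate is an assumption, not from the article.
def payback_days(watts_saved, price_gap_usd, usd_per_kwh):
    kwh_per_day = watts_saved * 24 / 1000  # 100W around the clock = 2.4 kWh/day
    return price_gap_usd / (kwh_per_day * usd_per_kwh)

days = payback_days(100, 800, 0.10)
print(f"{days:.0f} days, or about {days / 365:.1f} years")  # 3333 days, ~9.1 years
```

At higher electricity rates the break-even point comes sooner, but even at $0.30/kWh it's still about three years of nonstop use.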
Still, as mentioned, it's not hard to believe that you can gain a lot of efficiency by not running a chip at its maximum frequency (+55% power consumption for +11% frequency). Then again, the 3090 Ti was designed to be an FPS chart-topper. Throw all common sense out the window.
So I took this further and undervolted my card. With my current settings, it's limited to about 1935MHz at 0.950V and consumes about 75%-80% as much power. I also added another profile in MSI Afterburner to see how low I could go while still running the stock boost speed of 1800MHz; power drops to as little as 65%. Normally, if I let the card do its thing, it can get up to about 2050MHz. So I could get a theoretical performance gain of ~14%, but I'd have to consume at least 70 more watts to get there.
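Those numbers line up with the usual first-order dynamic-power model (P scales with f * V^2). A quick sanity check; the stock voltage of ~1.075V is a guess, since the comment doesn't state it:

```python
# First-order dynamic-power scaling: P ∝ f * V^2. Real GPUs also have
# static/leakage draw, so this is a rough sanity check, not a measurement.
def relative_power(f_mhz, volts, f0_mhz, v0):
    return (f_mhz / f0_mhz) * (volts / v0) ** 2

# Undervolted point vs. an assumed stock point of 2050MHz at 1.075V
# (the stock voltage is an assumption; the comment doesn't give it).
print(f"Relative power: {relative_power(1935, 0.950, 2050, 1.075):.0%}")

# Clock headroom given up vs. the 1800MHz stock boost:
print(f"Theoretical gain at max boost: {2050 / 1800 - 1:.1%}")
```

The model predicts about 74% relative power, right in line with the 75%-80% observed, and a ~13.9% clock delta matching the "~14%" theoretical gain.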
Similarly, I noticed during some Handbrake runs that I had left my CPU's turbo disabled by accident. I only lost about 15% performance, but the CPU was also consuming just 60% of the power.
I'm starting to not see the point of full on turbo boosting unless I need to polish my e-peen.
AMD/Intel/Nvidia can get (essentially) free performance out of boosting frequency at the cost of power. As long as their competition isn't absolutely blowing them out of the water on power draw and/or power draw is manageable for heat dissipation and noise, they know that there will only be a handful of press reviews exploring/mentioning power efficiency. And in the end, the consumer defaults to the performance chart. So, like Intel, if you're topping the performance charts, people don't seem to care much that it's costing nearly 2x the power draw compared to AMD.
There's just so little literature out there to help the consumer realize how little performance they'd give up for a >>> decrease in power consumption/heat/noise.
FWIW - I run my 3060Ti at 165W TDP.
In an air-conditioned climate, you not only pay the direct power cost, but nearly as much to pump the excess heat out of your home as well.
If you live somewhere cold it does make a good space heater :)
They already had 'em beat with ray tracing and DLSS - I guess NVENC and Shadowplay too - but geez...