If you've followed the world of CPU overclocking, you've probably noticed how AMD and Intel are both pushing their CPUs closer and closer to their practical limits. I still have fond memories of 50% overclocks on some old CPUs, like the Celeron 300A. These days, the fastest CPUs often have only a few hundred MHz of headroom available, and then only with substantial cooling. I begin with that preface because the RTX 3080 very much feels like it's also running close to the limit.
For the GPU core, while Nvidia specs the nominal boost clock at 1710 MHz, in practice the GPU boosts quite a bit higher. Depending on the game, we saw sustained boost clocks of at least 1830 MHz, and in some cases clocks as high as 1950 MHz. That's not much different from the Turing GPUs, or even Pascal. The real question is how far we could push the clocks beyond that.
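If you want to see what your own card actually boosts to, it's easy to log clocks while a game runs. Here's a minimal Python sketch using the nvidia-ml-py (pynvml) bindings to Nvidia's NVML library; the one-second polling interval and GPU index 0 are assumptions to adjust for your own setup.

```python
import time

import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust if needed

try:
    while True:
        # Current graphics (core) and memory clocks, in MHz, plus GPU temperature
        core = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
        mem = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_MEM)
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"core: {core} MHz, memory: {mem} MHz, temp: {temp} C")
        time.sleep(1.0)
except KeyboardInterrupt:
    pynvml.nvmlShutdown()
```

Run it in a second window while your game or benchmark loops, and the sustained boost clock becomes obvious.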
The answer: not far. I started with a modest +50 MHz bump to the clock speed, which seemed to go fine. Then I pushed it to +100 MHz and crashed. Through a bit of trial and error, I ended up at +75 MHz as the best stable speed I could hit, and that's after increasing the voltage by 100 mV using EVGA Precision X1 and ramping up fan speeds to keep the GPU cool. The result was boost clocks in the 1950-2070 MHz range, typically settling right around the 2 GHz mark.
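That trial-and-error process is really just a binary search for the largest offset that survives a stress test. For the curious, here's a hedged Python sketch of the logic; apply_offset and passes_stress_test are hypothetical hooks standing in for your overclocking tool and stability test of choice, not real Precision X1 APIs.

```python
def find_stable_offset(apply_offset, passes_stress_test, low=0, high=200, step=25):
    """Binary-search the largest stable clock offset, in MHz.

    apply_offset(mhz) and passes_stress_test() are hypothetical hooks:
    the first would drive your overclocking tool, the second would run
    a stress test (e.g., a Unigine Heaven loop) and report pass/fail.
    """
    best = low
    while high - low > step:
        mid = (low + high) // 2
        apply_offset(mid)
        if passes_stress_test():
            best, low = mid, mid   # stable: search higher
        else:
            high = mid             # crashed: back off
    return best
```

With the defaults above, the search visits +100 (crash), +50 (pass), then +75 (pass) and stops, matching the sequence we stumbled through by hand.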
Memory overclocking ended up being far more promising. I started with 250 MHz increments: +250, +500, +750, and even +1000 MHz went by without a problem, before my test (Unigine Heaven) crashed at +1250 MHz. Stepping back to +1200 MHz, everything seemed okay. And then I ran some benchmarks.
Remember that bit about EDR we mentioned earlier? It works. Because the memory subsystem detects errors and retries the failed transfers rather than crashing, an unstable overclock shows up as lost performance instead of a hard failure. +1200 MHz appeared stable, but performance was worse than at stock memory clocks. I started stepping down the memory overclock and eventually ended up at +750 MHz, yielding an effective speed of 20.5 Gbps. I was really hoping to sustain 21 Gbps, in honor of the 21st anniversary of the GeForce 256, but it was not meant to be. We'll include the RTX 3080 overclocked results in the ultra quality charts (we didn't bother testing the overclock at medium quality).
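The practical upshot for GDDR6X tuning: 'didn't crash' is not the same as 'faster,' so any stepping loop has to compare benchmark scores rather than merely survive. Here's a sketch of that logic in the same hedged style; apply_mem_offset and run_benchmark are hypothetical hooks, with run_benchmark returning a score and raising an error on a hard crash.

```python
def find_best_memory_offset(apply_mem_offset, run_benchmark, max_offset=1500, step=250):
    """Step up the memory offset, keeping it only while the score improves.

    apply_mem_offset(mhz) and run_benchmark() are hypothetical hooks.
    Because EDR silently retries failed transfers, a higher clock can be
    'stable' yet slower, so the benchmark score is the signal that matters.
    """
    apply_mem_offset(0)
    best_score, best_offset = run_benchmark(), 0
    for offset in range(step, max_offset + step, step):
        apply_mem_offset(offset)
        try:
            score = run_benchmark()
        except RuntimeError:      # hard crash: stop pushing
            break
        if score <= best_score:   # EDR kicking in: past the sweet spot
            break
        best_score, best_offset = score, offset
    apply_mem_offset(best_offset)
    return best_offset, best_score
```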
Combined, we ended up with stable performance using the +75 MHz core overclock and +750 MHz GDDR6X overclock. That's a relatively small 4% overclock on the GPU, and a slightly more significant 8% memory overclock, but neither one is going to make a huge difference in gaming performance. Overall performance at 4K ultra improved by about 6%.
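As a sanity check, those percentages follow from the numbers above: +75 MHz against the roughly 1900 MHz boost clocks we observed, and a +750 MHz memory offset that (judging by the 20.5 Gbps result) applies at double data rate on top of the stock 19 Gbps. A quick back-of-the-envelope check in Python, with those figures as assumptions:

```python
observed_boost = 1900              # MHz, typical observed boost (approximate)
core_offset = 75                   # MHz
base_rate = 19.0                   # Gbps, stock GDDR6X effective data rate
mem_offset_gbps = 2 * 750 / 1000   # +750 MHz offset at double data rate = +1.5 Gbps

print(f"core OC: {core_offset / observed_boost:.1%}")         # ~3.9% -> 'about 4%'
print(f"memory OC: {mem_offset_gbps / base_rate:.1%}")        # ~7.9% -> 'about 8%'
print(f"effective rate: {base_rate + mem_offset_gbps} Gbps")  # 20.5 Gbps
```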
GeForce RTX 3080: Test Setup
We're using our standard GPU test bed for this review. However, the Core i9-9900K CPU is now approaching its second birthday, and the Core i9-10900K might be a bit faster. Never fear! While we didn't have nearly enough time to retest all of the graphics cards on a new test bed, we did run a series of RTX 3080 CPU Scaling benchmarks on five other processors. Check that article out for the full results, but the short summary is that the 9900K and 10900K are effectively tied at 1440p and 4K.
You can see the full specs of our GPU test PC to the right. We've equipped it with some top-tier hardware, including 32GB of DDR4-3200 CL16 memory, a potent AIO liquid cooler, and a 2TB M.2 NVMe SSD. We actually have two nearly identical test PCs; only one, the OpenBenchTable build, is equipped for testing power use. Our second PC has a Phanteks Enthoo Pro M case, which we test with the side panel removed because of all the GPU swapping. (We've checked with the panel installed as well, and there are enough case fans that GPU temperatures don't really change either way.)
Our current gaming test suite consists of nine games, some of which are getting a bit long in the tooth, and none with ray tracing or DLSS effects enabled. We're adding 14 additional 'bonus' graphics tests, all at 4K ultra with DLSS enabled (where possible) on a limited selection of GPUs to show how pushing the RTX 3080 to the limit changes things … or doesn't, in this case.
Also, Microsoft enabled GPU hardware scheduling in Windows 10 a few months back, and Nvidia's latest drivers support the feature. We tested the RTX 3080 with hardware scheduling both enabled and disabled; all of our other GPU results were run with the feature disabled. It's not that the feature doesn't help, but any gains end up within the margin of error at most settings. We'll include the HW scheduling enabled results in our charts as well.
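For reference, the Windows toggle (Settings > System > Display > Graphics settings) is backed by the HwSchMode registry value under the GraphicsDrivers key, where 2 means enabled and 1 means disabled. A minimal Python check, assuming a standard Windows 10 version 2004 or later install:

```python
import winreg  # Windows only; part of the standard library

KEY = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"

def hw_scheduling_enabled():
    """Read the HwSchMode DWORD: 2 = enabled, 1 = disabled."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            value, _ = winreg.QueryValueEx(key, "HwSchMode")
            return value == 2
    except FileNotFoundError:  # value absent: assume the default (off)
        return False

print("HW-accelerated GPU scheduling:", "on" if hw_scheduling_enabled() else "off")
```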