Nvidia RTX 4070 Founders Edition Overclocking
In the images above, you can see the results of our power testing, showing clocks, power, and temperatures. The big takeaway is that the card ran quite cool on both the GPU and VRAM, and clocks were hitting well over 2.7 GHz in the gaming test. As for overclocking, there's trial and error as usual, and because each card behaves slightly differently, our results may not be representative of other cards. We attempt to dial in stable settings while running some stress tests, but what at first appears to work just fine may still crash once we start running through our gaming suite.
We start by maxing out the power limit, which in this case was 110%; again, that can vary by card manufacturer and model. Most of the RTX 40-series cards we've tested can't do more than about +150 MHz on the GPU cores, but we were able to hit up to +250 MHz on the RTX 4070. We backed off slightly to a +225 MHz GPU overclock to keep things fully stable.
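For those who script this sort of tuning, here's a minimal sketch of raising the power limit programmatically through NVML, via the nvidia-ml-py (pynvml) bindings. It assumes the card is device 0 and that the driver exposes the same 110% ceiling; the core clock offset itself is normally applied through a separate tool such as MSI Afterburner.

```python
# Minimal sketch: raise the GPU's power limit to the driver-reported maximum
# using NVML via the nvidia-ml-py (pynvml) bindings. Requires admin rights.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the 4070 is device 0

# NVML works in milliwatts; the constraints are the (min, max) the board allows.
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)
print(f"Default {default_mw / 1000:.0f} W, max {max_mw / 1000:.0f} W "
      f"({100 * max_mw / default_mw:.0f}% of default)")

# Slide the limit to its maximum, the equivalent of the 110% setting here.
pynvml.nvmlDeviceSetPowerManagementLimit(handle, max_mw)
pynvml.nvmlShutdown()
```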
The GDDR6X memory was stable at +1450 MHz, with graphical corruption showing up at +1500 MHz. Because Nvidia's memory subsystem has error detection and retry, failed transfers get resent rather than crashing the card, which can silently cost performance near the limit. That generally means you don't want to fully max out the memory speed, so we backed off to +1350 MHz. With both the GPU and VRAM overclocks, and a fan curve set to ramp from 30% at 30C up to 100% at 80C, we were able to run our full suite of gaming tests at 1080p and 1440p ultra without any issues.
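That fan curve is just a linear ramp between two anchor points. Here's a quick sketch of the math; the endpoints come from our settings above, while clamping outside the range is an assumption about how most tuning tools behave:

```python
def fan_speed_percent(temp_c: float) -> float:
    """Linear fan ramp: 30% at 30C up to 100% at 80C, clamped outside that range."""
    low_t, low_pct = 30.0, 30.0
    high_t, high_pct = 80.0, 100.0
    if temp_c <= low_t:
        return low_pct
    if temp_c >= high_t:
        return high_pct
    # Linear interpolation between the two anchor points.
    return low_pct + (temp_c - low_t) * (high_pct - low_pct) / (high_t - low_t)

print(fan_speed_percent(55.0))  # midpoint of the ramp: 65.0
```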
As with other RTX 40-series cards, there's no way to increase the GPU voltage short of doing a voltage mod (not something we wanted to do), and that seems to be a limiting factor. GPU clocks did break the 3 GHz mark at times, and we'll have the overclocking results in our charts for reference.
Nvidia RTX 4070 Test Setup
We updated our GPU test PC at the end of last year with a Core i9-13900K, though we also continue to test reference GPUs on our 2022 system with a Core i9-12900K for our GPU benchmarks hierarchy. (We'll be updating that later today, once the embargo has passed.) For the RTX 4070, our review will focus on the 13900K performance, which ensures (as much as possible) that we're not CPU limited.
Our test systems are the Tom's Hardware Intel 13th Gen PC and the Tom's Hardware 2022 PC.

Other graphics cards tested:
AMD RX 7900 XT
AMD RX 6950 XT
AMD RX 6900 XT
AMD RX 6800 XT
AMD RX 6800
Nvidia RTX 4080
Nvidia RTX 4070 Ti
Nvidia RTX 3080 Ti
Nvidia RTX 3080 (10GB)
Nvidia RTX 3070 Ti
Nvidia RTX 3070
Multiple games have been updated over the past few months, so we retested all of the cards for this review (though some of the tests were done with the previous Nvidia drivers, as we've only had the review drivers for about a week). We're running Nvidia's 531.41 and 531.42 drivers, and AMD's 23.3.2 drivers. Our professional and AI workloads were also tested on the 12900K PC, since that allowed us to multitask better during testing.
Our current test suite consists of 15 games. Of these, nine support DirectX Raytracing (DXR), but we only enable the DXR features in six of the games. At the time of testing, 12 of the games support DLSS 2, five support DLSS 3, and five support FSR 2. We'll cover performance with the various upscaling modes enabled in a separate gaming section.
We tested all of the GPUs at 4K, 1440p, and 1080p using "ultra" settings: basically the highest supported preset if there is one, and in some cases maxing out all the other settings for good measure (except for MSAA or super sampling). We also tested at 1080p "medium" to show the sort of higher frame rates games can hit. Our PC is hooked up to a 32-inch Samsung Odyssey Neo G8, one of the best gaming monitors around, so we could fully experience some of the higher frame rates that might be available; G-Sync and FreeSync were enabled, as appropriate.
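For concreteness, those combinations boil down to a small test matrix. This purely illustrative sketch enumerates it; the names are ours, not from any actual test harness:

```python
# Illustrative enumeration of the benchmark matrix described above.
ULTRA_RESOLUTIONS = ["3840x2160", "2560x1440", "1920x1080"]
test_matrix = [(res, "ultra") for res in ULTRA_RESOLUTIONS]
test_matrix.append(("1920x1080", "medium"))  # extra run to show higher-FPS potential

for resolution, preset in test_matrix:
    print(f"benchmark pass: {resolution} @ {preset}")
```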
When we assembled the new test PC, we installed all of the then-latest Windows 11 updates. We're running Windows 11 22H2, but we've used InControl to lock our test PC to that major release for the foreseeable future (security updates still get installed on occasion).
Our new test PC includes Nvidia's PCAT v2 (Power Capture and Analysis Tool) hardware, which means we can grab real power use, GPU clocks, and more during all of our gaming benchmarks. We'll cover those results in our page on power use.
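PCAT is dedicated capture hardware sitting between the power supply and the card, so it can't be replicated in software, but polling NVML while a benchmark runs gives a rough software-side analogue. Here's a sketch using the nvidia-ml-py (pynvml) bindings, with an arbitrary one-second poll interval:

```python
# Rough software analogue of power/clock capture: poll NVML once per second.
# PCAT measures at the power connectors; NVML reports the driver's own estimate.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the GPU under test is device 0

for _ in range(60):  # log for one minute while a benchmark runs alongside
    power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # reported in milliwatts
    core_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
    temp_c = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
    print(f"{power_w:6.1f} W  {core_mhz:4d} MHz  {temp_c:3d} C")
    time.sleep(1)

pynvml.nvmlShutdown()
```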
Finally, because GPUs aren't purely for gaming these days, we've run some professional application tests, and we also ran some Stable Diffusion benchmarks to see how AI workloads scale on the various GPUs.
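We won't spell out the exact Stable Diffusion configuration here, so as a generic illustration, this timing sketch uses Hugging Face's diffusers library; the model ID, prompt, and step counts are assumptions rather than our actual benchmark setup:

```python
# Generic Stable Diffusion throughput check (not the review's exact setup).
import time
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model; any SD checkpoint works
    torch_dtype=torch.float16,
).to("cuda")

prompt = "a photo of a graphics card on a workbench"
pipe(prompt, num_inference_steps=10)  # warmup pass to exclude load/compile overhead

start = time.time()
pipe(prompt, num_inference_steps=50)
elapsed = time.time() - start
print(f"50 steps in {elapsed:.1f} s, {50 / elapsed:.1f} iterations/s")
```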