Nvidia GeForce RTX 4070 Ti Review: A Costly 70-Class GPU

Open up your wallet and say ouch

Nvidia GeForce RTX 4070 Ti
(Image: © Tom's Hardware)


Asus RTX 4070 Ti Overclocking

In the images above, you can see our stock and overclocked test settings. Overclocking involves trial and error, and results are not guaranteed; each card behaves slightly differently. We attempt to dial in stable settings while running stress tests, but what at first appears to work fine may crash once we start running through our gaming suite.

We start by maxing out the power limit, which in this case was only 110%; again, that can vary by card manufacturer and model. We ultimately ended up with a 175 MHz overclock on the GPU core, while the memory was stable all the way up to +1600 MHz (+1800 MHz crashed when we attempted it while running FurMark). We also set a custom fan curve that ramps from 30% at 30°C up to 100% at 80°C, though we never approached 80°C in testing.
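If you want to see how much power-limit headroom your own card reports, the sketch below queries it through NVML's Python bindings and also reproduces the linear fan curve just described. This is purely illustrative and assumes the nvidia-ml-py (pynvml) package is installed; the actual tuning for this review was done with vendor overclocking tools, not this script.

```python
# Illustrative sketch: query NVML for power-limit headroom and model the
# linear fan curve described above (30% at 30C ramping to 100% at 80C).
# Assumes the nvidia-ml-py (pynvml) package; not the tool used in this review.
import pynvml

def power_headroom_percent(index: int = 0) -> float:
    """Return the maximum power limit as a percentage of the default limit."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)
        _min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
        return 100.0 * max_mw / default_mw
    finally:
        pynvml.nvmlShutdown()

def fan_speed_percent(temp_c: float) -> float:
    """Linear ramp: 30% fan at 30C, 100% at 80C, clamped outside that range."""
    if temp_c <= 30:
        return 30.0
    if temp_c >= 80:
        return 100.0
    return 30.0 + (temp_c - 30) * (100.0 - 30.0) / (80 - 30)

if __name__ == "__main__":
    print(f"Max power limit: {power_headroom_percent():.0f}% of default")
    for t in (30, 50, 65, 80):
        print(f"{t}C -> {fan_speed_percent(t):.0f}% fan")
```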

As with other RTX 40-series cards, there's no way to increase the GPU voltage short of doing a voltage mod (not something we wanted to attempt), and that seems to be a limiting factor. GPU clocks did break the 3 GHz mark in some games, though the FurMark screen capture, as usual, ended up at far lower clocks. We'll include the 4K overclocked results in our charts.

Nvidia RTX 4070 Ti Test Setup

(Image credit: Tom's Hardware)

We updated our GPU test PC and gaming suite in early 2022, but with the RTX 40-series launch we found more and more games becoming CPU-limited at anything below 4K. As such, we've upgraded our GPU test system again… twice! (Thanks, AMD and Intel!) Our GPU benchmarks hierarchy still uses the older Core i9-12900K PC, as do the professional workloads, but we're working on shifting everything over to the new PCs.

We've now retested several more GPUs using the latest AMD and Nvidia drivers: 22.12.2 for the RX 7900 series and 22.11.2 for other AMD GPUs; mostly 527.62 for the Nvidia GPUs, though a few were tested with the previous 527.56 drivers. The exception is Forza Horizon 5, which we have now retested using the 4070 Ti's preview drivers.

AMD provided us with its latest Ryzen 7000-series processor and socket AM5 platform, which we're using as a secondary test system. As with the RX 7900 XTX and XT review, we've run the full gaming test suite on the RTX 4070 Ti using the AMD platform, and we'll have those results in the charts. We'll do the same for future GPUs, and at some point we may put together a second set of charts once we have enough data points. As you'll see, performance varies a bit from game to game, but overall it tends to slightly favor the Intel platform. Anyway, we now have three test PCs, as you can see in the boxout.

AMD and Nvidia both recommend either the AMD Ryzen 9 7950X or Intel Core i9-13900K to get the most out of their new graphics cards. We're going to focus mostly on our test results using the 13900K. MSI provided the Z790 DDR5 motherboard, G.Skill got the nod on memory, and Sabrent was good enough to send over a beefy 4TB SSD, which we promptly filled to about half its capacity. The AMD rig has slightly different components, like the ASRock X670E Taichi motherboard and G.Skill Trident Z5 Neo memory with EXPO profile support for AMD systems. Both power supplies are ATX 3.0 compliant, rated for 1500W, and carry 80 Plus Platinum or Titanium certification.

Time constraints prevented us from retesting every GPU on both the AMD and Intel PCs, but we now have results for about a dozen cards on the Intel PC. We've also slightly reworked our benchmark lineup, thanks to some game updates invalidating our old results (looking at you, Fortnite). For reference, the RTX 3090 generally performs about the same as the RTX 3080 Ti (a few percent faster), while the RTX 3080 12GB tends to be a few percent slower than the 3080 Ti. We'll retest those cards in the future, but we left them off for this review. AMD's RX 6900 XT likewise lands roughly between the 6950 XT and 6800 XT, while the RX 6800 would be 10–15 percent slower than the 6800 XT.

Also of note: we have PCAT v2 (Power Capture and Analysis Tool) hardware from Nvidia on both the AMD 7950X and Intel 13900K PCs, which means we can grab real power use, GPU clocks, and more during all of our gaming benchmarks. We'll have the full details of our power testing in a few pages.
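PCAT measures board power externally at the power connectors, which software alone can't replicate. For a rough software-side approximation, though, you can poll the driver's own power and clock readings through NVML, as in this sketch (again assuming pynvml; this is not how our charts were generated):

```python
# Rough software-side analogue to hardware power capture: poll NVML's board
# power reading and graphics clock while a benchmark runs. PCAT measures power
# externally at the connectors; this only reports what the driver sees.
import time
import pynvml

def log_power(duration_s: float = 10.0, interval_s: float = 0.1, index: int = 0):
    """Sample GPU power draw (watts) and core clock (MHz) for duration_s seconds."""
    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(index)
        samples = []
        end = time.time() + duration_s
        while time.time() < end:
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
            core_mhz = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)
            samples.append((watts, core_mhz))
            time.sleep(interval_s)
        avg_w = sum(w for w, _ in samples) / len(samples)
        print(f"{len(samples)} samples, average {avg_w:.1f} W")
        return samples
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    log_power()
```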

For all of our testing, we've run the latest Windows 11 updates. Our gaming tests now consist of a standard suite of nine games without ray tracing enabled (even if the game supports it), and a separate ray tracing suite of six games that all use multiple RT effects. We tested all of the GPUs at 4K, 1440p, and 1080p using "ultra" settings: basically the highest supported preset if there is one, in some cases maxing out all the other settings for good measure (except for MSAA or supersampling). We've also hooked our test PCs up to the Samsung Odyssey Neo G8 32, one of the best gaming monitors around, so we could fully experience the higher frame rates on offer; G-Sync and FreeSync were enabled as appropriate.

In games that support the technology, we've also tested RTX 4070 Ti performance with DLSS enabled, including DLSS 3 where applicable. We're only using Quality mode, which is roughly 2x upscaling, as that's the only mode that generally looks as good as native (sometimes better). We'll label the results "DLSS2" and "DLSS3" in the charts. Note that Minecraft doesn't let you specify an upscaling mode and simply uses Quality at 1080p, Balanced at 1440p, and Performance at 4K.
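As a quick illustration of what those modes mean for internal render resolution, the snippet below applies the commonly cited DLSS per-axis scale factors (approximately 66.7% for Quality, 58% for Balanced, and 50% for Performance; treat the exact numbers as approximations) to Minecraft's resolution-to-mode mapping:

```python
# Illustration of DLSS internal render resolutions. Per-axis scale factors
# are approximately: Quality 66.7%, Balanced 58%, Performance 50%.
SCALE = {"Quality": 2 / 3, "Balanced": 0.58, "Performance": 0.50}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Internal resolution DLSS renders at before upscaling to (out_w, out_h)."""
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

# Minecraft's mapping as described above: the mode follows the output resolution.
for (w, h), mode in [((1920, 1080), "Quality"),
                     ((2560, 1440), "Balanced"),
                     ((3840, 2160), "Performance")]:
    rw, rh = render_resolution(w, h, mode)
    print(f"{w}x{h} {mode}: renders at {rw}x{rh}")
```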

Besides the gaming tests, we also have a collection of professional and content creation benchmarks that can leverage the GPU. We're using SPECviewperf 2020 v3, Blender 3.3.0, OTOY OctaneBench, and V-Ray Benchmark. Time constraints prevented us from finishing our video encoding benchmarks, but we'll revisit that topic in the coming days.

Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.