Nvidia GeForce RTX 2070 Founders Edition Review: Replacing GeForce GTX 1080
Deep Learning Super-Sampling: Faster Than GeForce GTX 1080 Ti?
Before we get into the performance of GeForce RTX 2070 across our benchmark suite, let’s acknowledge the elephant in the room: a month in, there still aren't any games with real-time ray tracing or DLSS to test. We do, however, have access to a demo of Final Fantasy XV Windows Edition with DLSS support. Details of the implementation are somewhat light, aside from a note that DLSS allows Turing-based GPUs to use half the number of input samples for rendering. The architecture’s Tensor cores fill in the rest to create a final image.
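To make that division of labor concrete, here's a minimal Python sketch of the idea as Nvidia describes it: the rasterizer produces a frame from a reduced sample count, and a trained network infers the missing detail. Every name here (the functions, the box-blur "model") is a hypothetical stand-in for illustration, not Nvidia's actual pipeline.

```python
import numpy as np

def render_half_samples(width, height, rng):
    """Stand-in for the rasterizer: a frame rendered with half the
    usual shading samples (hypothetical; real rendering happens on
    the GPU, not in NumPy)."""
    return rng.random((height, width, 3)).astype(np.float32)

def dlss_model(frame):
    """Stand-in for the trained network running on Tensor cores.
    A real model infers the detail the skipped samples would have
    contributed; a 3x3 box blur is a trivial placeholder here."""
    padded = np.pad(frame, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(frame)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    return out / 9.0

rng = np.random.default_rng(0)
# Toy dimensions to keep the sketch fast; the demo targets 3840x2160.
half_sampled = render_half_samples(960, 540, rng)  # cheaper raster pass
final_frame = dlss_model(half_sampled)             # Tensor cores fill in the rest
print(final_frame.shape)                           # (540, 960, 3)
```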
Nvidia says the demo always runs at 4K and maximum graphics fidelity, but it doesn't provide any way to see what those quality settings include. The HUD simply shows Resolution: 3840x2160, Graphics: Custom, and a score that increases as the demo runs. Unfortunately, there's no way for us to specify 2560x1440 (a more practical target for GeForce RTX 2070).
Despite the overly ambitious resolution, GeForce RTX 2070 Founders Edition picks up a 38% speed-up with DLSS active compared to applying TAA at 4K. That makes it faster than GeForce GTX 1080 Ti and RTX 2080 using TAA. Looking a year into the future, TU106's Tensor cores may surprise enthusiasts by becoming this architecture's most useful feature, particularly if real-world optimizations for DLSS prove as compelling as Nvidia's demos.
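To put that 38% figure in concrete terms, here's the arithmetic as a quick Python sketch; the baseline frame rate is a hypothetical placeholder, not one of our measured results.

```python
# Hypothetical baseline chosen only to illustrate the math; our measured
# frame rates appear in the benchmark charts later in this review.
taa_fps = 30.0              # assumed RTX 2070 4K TAA baseline
dlss_fps = taa_fps * 1.38   # the measured 38% uplift applied

print(f"TAA: {taa_fps:.1f} fps -> DLSS: {dlss_fps:.1f} fps")  # 30.0 -> 41.4
```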
But until DLSS really proves itself, we expect gamers to be skeptical of the idea that input samples can be dropped to save rendering budget and then filled in using AI. We pored over the demo, running both versions over and over to identify any differences that stood out.
In the clip below, we make two observations. First, Noct’s textured shirt is affected by banding/shimmering due to DLSS. In the TAA version, his chest does not exhibit the same effect. Second, as Noct casts his fishing rod, there’s a pronounced ghosting artifact that remains on-screen with TAA active. DLSS does away with this entirely. Neither solution is perfect.
Might Final Fantasy XV's DLSS implementation improve over time? According to Nvidia, the DLSS model is trained on a set of data until the quality of its inferred results eventually flattens out. So, in a sense, the DLSS model does mature. But the company's supercomputing cluster is constantly training with new data on new games, so improvements may roll out as time goes on. If certain areas demonstrate an issue of some sort, the DLSS model can be reviewed and tweaked. This may involve providing additional "correct" data to train with.
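For a sense of what that training process looks like conceptually, here's a hedged PyTorch sketch: a small network learns to map reduced-sample frames to high-quality reference renders, and fixing a problem area amounts to adding more (input, reference) pairs to the dataset. The architecture and data are invented for illustration; Nvidia hasn't published its real model.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the supercomputer-side training loop: the
# network learns to map reduced-sample frames to "correct" reference
# frames. Architecture and data are invented purely for illustration.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

def train_step(low_sample_frame, reference_frame):
    """One supervised step: push the inferred image toward the
    ground-truth render. Quality plateaus as the dataset is exhausted;
    patching a problem scene is just more (input, reference) pairs."""
    opt.zero_grad()
    loss = loss_fn(model(low_sample_frame), reference_frame)
    loss.backward()
    opt.step()
    return loss.item()

# Toy tensors standing in for rendered frames: (batch, channels, H, W)
lo = torch.rand(1, 3, 64, 64)
hi = torch.rand(1, 3, 64, 64)
print(train_step(lo, hi))
```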
Beyond DLSS' implications for performance and image quality, we were also curious whether utilizing TU106's Tensor cores affected clock rates or power consumption.
Over 300 seconds of the Final Fantasy XV demo, power consumption looks very similar with DLSS and with TAA. Notably, the dips between scenes consistently drop lower with DLSS enabled.
The same goes for clock rate, though we might guess that TU106 is able to run at a slightly higher frequency because it's only rendering a fraction of the input samples and using Tensor cores to fill in the rest.
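Our power and clock measurements come from dedicated instrumentation, but if you want to reproduce a rough version of this comparison at home, NVML exposes both quantities. The sketch below assumes the nvidia-ml-py bindings and logs board power and graphics clock once per second over the same 300-second window.

```python
import time
import pynvml  # NVIDIA Management Library bindings (pip install nvidia-ml-py)

# A software-side approximation of the comparison above: log board power
# and graphics clock at 1 Hz while the demo runs, once per AA mode.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

try:
    for _ in range(300):  # 300 seconds, matching the comparison window
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
        clock_mhz = pynvml.nvmlDeviceGetClockInfo(
            handle, pynvml.NVML_CLOCK_GRAPHICS
        )
        print(f"{power_w:6.1f} W  {clock_mhz:4d} MHz")
        time.sleep(1)
finally:
    pynvml.nvmlShutdown()
```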
MORE: Best Graphics Cards
MORE: Desktop GPU Performance Hierarchy Table
MORE: All Graphics Content