Could DLSS be the Future of Entry-Level GPUs?
We wanted to dig deep into Nvidia's DLSS technology, beyond what the company was willing to divulge at launch. And although we made progress, the results are far from conclusive. One observation is clear, though: DLSS starts by rendering at a lower resolution and then upscaling by 150% on each axis to reach its target output resolution. As far as quality goes, DLSS at 4K looks a lot better than DLSS at 2560 x 1440.
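To make the numbers concrete, here's a minimal sketch of the resolution math behind that observation, assuming a fixed 1.5x upscale on each axis (the 150% figure is our measurement, and the function name below is ours, not part of any Nvidia API):

```python
DLSS_SCALE = 1.5  # assumed per-axis upscale factor, based on our observations

def dlss_render_resolution(target_w: int, target_h: int, scale: float = DLSS_SCALE):
    """Return the internal render resolution implied by a given output resolution."""
    return round(target_w / scale), round(target_h / scale)

print(dlss_render_resolution(3840, 2160))  # (2560, 1440) behind a 4K output
print(dlss_render_resolution(2560, 1440))  # (1707, 960) behind a 1440p output
```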
The steps between rendering at a lower resolution and upscaling to the target are where the purported "magic" happens, and we have few specifics beyond Nvidia's original description. Thanks to advances in machine learning, there are a number of techniques for cleanly enlarging an image. The field is especially mature, since image upscaling was one of the earlier problems tackled with deep learning.
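To give a sense of what that research looks like, here's a minimal sketch of one such technique: a sub-pixel convolution network in the style of ESPCN, written in PyTorch. It's representative of the field in general, not of whatever network Nvidia actually trained for DLSS:

```python
import torch
import torch.nn as nn

class TinySuperRes(nn.Module):
    """A toy ESPCN-style upscaler: convolutions in low-resolution space,
    then a pixel shuffle to rearrange channels into a larger image."""
    def __init__(self, scale: int = 2, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=5, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Produce scale^2 sub-pixel channels, then rearrange them
            # into a higher-resolution frame.
            nn.Conv2d(32, channels * scale * scale, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

low_res = torch.rand(1, 3, 720, 1280)      # a 1280 x 720 frame
high_res = TinySuperRes(scale=2)(low_res)
print(high_res.shape)                       # torch.Size([1, 3, 1440, 2560])
```

A network like this learns its upscaling from pairs of low- and high-resolution images, which is broadly the kind of training Nvidia describes doing on its supercomputing cluster.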
Moreover, we can't help but see similarities between what DLSS does for gaming and Nvidia's Ansel AI Up-Res technology, which lets gamers take 1080p-based captures and generate massive 8K photos via inference. We suspect that the same sort of inferencing happens in real time on each frame before it's upscaled in the DLSS pipeline. Also note the application of an anti-aliasing filter (TAA? DLAA?) just before the upscale step.
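Putting those pieces together, here's a hedged sketch of the per-frame flow we think we're seeing: render at a reduced resolution, run an anti-aliasing pass, then upscale via inference. Every function here is a hypothetical stand-in of our own, not an actual Nvidia interface:

```python
def render_frame(scene, width, height):
    # Stand-in for the game engine producing a low-resolution frame.
    return {"scene": scene, "width": width, "height": height}

def apply_temporal_aa(frame):
    # Stand-in for the anti-aliasing pass (TAA? DLAA?) we suspect runs
    # before the upscale.
    frame["anti_aliased"] = True
    return frame

def infer_upscale(frame, target_width, target_height):
    # Stand-in for the Tensor-core inference step that fills in detail
    # while scaling to the output resolution.
    frame["width"], frame["height"] = target_width, target_height
    return frame

def dlss_frame(scene, target_width, target_height, scale=1.5):
    # Conjectured order of operations, based on our observations.
    render_w, render_h = round(target_width / scale), round(target_height / scale)
    low_res = render_frame(scene, render_w, render_h)
    anti_aliased = apply_temporal_aa(low_res)
    return infer_upscale(anti_aliased, target_width, target_height)

print(dlss_frame("demo scene", 3840, 2160))
```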
Some of this sounds familiar from Nvidia's original description of DLSS. But the company omitted interesting technical details, such as explicitly rendering at a lower resolution, applying anti-aliasing, and upscaling. Those big performance gains certainly make a lot more sense now, though.
DLSS: Still One Of Turing's Most Promising Features
Inner workings aside, DLSS remains one of the Turing architecture's most interesting capabilities, for several reasons. First, the technology consistently yields excellent image quality. If you watch any of the DLSS-enabled demos in real time, it's difficult to distinguish between native 4K with TAA and the same scene enhanced by DLSS.
Second, we're told that DLSS should only get better with time. According to Nvidia, the DLSS model is trained on a data set until the quality of its inferred results eventually flattens out. So, in a sense, the DLSS model does mature. But the company's supercomputing cluster is constantly training on new data from new games, so improvements may keep rolling out.
Finally, this is a technology that might be viable on entry-level Turing-based GPUs (as opposed to ray tracing, which requires a minimum level of performance to be useful), if those graphics processors end up with Tensor cores. We'd love to see a low-end GPU play through AAA games at 1920 x 1080 based on a 720p render.
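For what it's worth, that scenario lines up neatly with the scaling we observed at 4K; assuming the same 1.5x-per-axis factor holds (our assumption, not a published spec), a 720p render lands exactly on 1080p:

```python
# 1280 x 720 scaled by the assumed 1.5x factor on each axis
print(1280 * 1.5, 720 * 1.5)  # 1920.0 1080.0
```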
MORE: Best Graphics Cards
MORE: Desktop GPU Performance Hierarchy Table
MORE: All Graphics Content