Nvidia's DLSS Technology Analyzed: It All Starts With Upscaling

Nvidia's Deep Learning Super Sampling Technology, Explored

Nvidia's new DLSS (Deep Learning Super Sampling) technology is one of the Turing architecture's most promising features, as we first showed in our GeForce RTX 2080 Ti Founders Edition review. But it is also the most mysterious: the company isn't going into much depth on how DLSS works. Of course, we wanted to know more. So, after hours of testing and image analysis, we think we have the answer.

DLSS, According to Nvidia

In its descriptions of DLSS' inner workings, Nvidia tends to stay fairly superficial. In Nvidia’s Turing Architecture Explored: Inside the GeForce RTX 2080, the company presented DLSS as a feature that delivers better performance than traditional anti-aliasing at QHD and 4K while simultaneously achieving better picture quality. It's the claim of higher-quality visuals at faster frame rates that perplexed us most, so we naturally spent time comparing the performance and output of DLSS against TAA (Temporal Anti-Aliasing, a technique for smoothing out the crawling and flickering seen in motion while playing a game) in our first GeForce RTX reviews.

Most recently, Nvidia's GeForce RTX 2070 reviewer's guide explained DLSS by saying, "DLSS leverages a deep neural network to extract multidimensional features of the rendered scene and intelligently combine details from multiple frames to construct a high-quality final image. This allows Turing GPUs to use half the samples for rendering and use AI to fill in information to create the final image." This explanation left us imagining that the graphics processor was only shading part of each frame, leaving the architecture's Tensor cores to reproduce the missing pixels through AI.

Might DLSS more simply be explained as an upscaling technique, perfected through the application of AI? It seems plausible, especially since image processing is one of the most compelling applications of AI. It's also possible that DLSS involves a mix of upscaling, anti-aliasing, and filling in missing pixels.
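To make that hypothesis concrete, here is a minimal sketch of the pipeline we suspect is at work: the GPU renders each frame at a lower internal resolution, then a reconstruction step produces the final output. The plain bilinear filter below is only a stand-in of our own; Nvidia's actual approach would substitute a trained neural network running on the Tensor cores, one that also draws on data from previous frames.

```python
# Hypothetical sketch of an "upscaling-based DLSS" pipeline -- not Nvidia's code.
# A trained network (running on the Tensor cores) would replace bilinear_upscale().
import numpy as np

def bilinear_upscale(frame, out_w, out_h):
    """Naive bilinear upscale of an RGB frame (H x W x 3); a learned model would go here."""
    in_h, in_w, _ = frame.shape
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, in_h - 1), np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0).astype(np.float32)[:, None, None]
    wx = (xs - x0).astype(np.float32)[None, :, None]
    top = frame[y0][:, x0] * (1 - wx) + frame[y0][:, x1] * wx
    bot = frame[y1][:, x0] * (1 - wx) + frame[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

# The hypothesis for "4K DLSS": render internally near 2560x1440, output at 3840x2160.
# The demo below uses 1/10-scale frames so it runs instantly.
internal = np.random.rand(144, 256, 3).astype(np.float32)  # stands in for the low-res render
final = bilinear_upscale(internal, out_w=384, out_h=216)
print(internal.shape, "->", final.shape)  # (144, 256, 3) -> (216, 384, 3)
```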

The First Visual Cues

Sometimes DLSS looks better than TAA, and sometimes it looks worse. In either case, the technology's output looks very good. Our analysis focuses on individual frames with hand-picked regions zoomed in, but in real-time gameplay it's tough to differentiate between DLSS and TAA at 3840 x 2160, and in certain scenes, artifacts that plague TAA leave the DLSS-based picture unscathed.

Interestingly, we've found that DLSS runs more efficiently at 4K than at QHD, yielding a cleaner-looking output. However, certain image captures contain hints that the picture is rendered at a lower resolution than claimed. We were also able to modify the configuration files of Nvidia's Infiltrator and Final Fantasy XV DLSS-enabled demos to run them without AA, which helped immensely with our analysis.

All of our screenshots facilitate comparisons of strictly identical images (aside from some unavoidable variations due to lighting effects). The enlargements are done without filtering to preserve each picture's authenticity. Click to view the images in PNG format (lossless and at their original size).

When DLSS Works Wonderfully

In this picture, it's difficult to distinguish between technologies. DLSS does a great job, and you can even see in the background vegetation that it offers superior image quality compared to TAA. Remarkable.

At this early stage, the Final Fantasy XV demo represents the best implementation of DLSS we've seen. The Infiltrator demo is somewhat less flattering, though its DLSS rendering also looks great to the naked eye in real time. Again, in some scenes, DLSS is very effective indeed. In the following image, DLSS comes close to perfection.

This is close to perfection

When DLSS Shows its Limits

After zooming in on dozens of screenshots to get more familiar with DLSS and its strengths/weaknesses, we were able to uncover flaws that made us wonder whether DLSS was natively rendering at 4K (3840 x 2160) or QHD (2560 x 1440).
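A quick back-of-the-envelope calculation (ours, not a figure from Nvidia) shows why the distinction matters, and why it lines up with the "half the samples" wording in Nvidia's own description:

```python
# Pixel counts: if "4K DLSS" were internally rendered at QHD, the GPU would be
# shading roughly half the samples of native 4K.
four_k = 3840 * 2160   # 8,294,400 pixels
qhd = 2560 * 1440      # 3,686,400 pixels
print(f"QHD / 4K = {qhd / four_k:.2f}")   # ~0.44, i.e. a bit under half the work
```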

Here's the first frame of a new scene

And this is the first frame of a new scene at QHD, not zoomed-in

We also noticed that DLSS betrays its true resolution on the first frame of every new scene (see above), which makes sense for a technique that combines details from multiple frames: on the first frame there is simply no history to draw from. In the image below, screen captures taken 40 frames later show DLSS smoothing the jaggies with great efficiency. At 4K especially, the output quality of DLSS becomes difficult to distinguish from true 4K with TAA applied.

DLSS looks almost perfect 40 frames after a new scene starts

Aliasing is sometimes visible in the middle of a sequence with DLSS active, though, and it persists through the scene. Check out the image below, where jagged edges are more prominent in the 4K DLSS capture compared to TAA at 3840 x 2160.
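The rough first frame and the cleanup seen 40 frames later are both consistent with a technique that accumulates detail across frames, as Nvidia's own description hints. Below is a small, purely illustrative sketch (our assumption, not Nvidia's implementation) of exponential frame accumulation: with no history, the first frame is just the raw render, while after a few dozen frames the accumulated image converges.

```python
# Toy temporal accumulation -- illustrative only, not how DLSS is implemented.
import numpy as np

def accumulate(history, current, blend=0.1):
    """Exponentially blend the current frame into the accumulated history."""
    if history is None:              # first frame of a new scene: no history yet
        return current
    return (1.0 - blend) * history + blend * current

rng = np.random.default_rng(0)
target = rng.random((1440, 2560)).astype(np.float32)   # stand-in "ideal" image

history = None
for frame in range(41):
    # each rendered frame is a noisy approximation of the ideal image
    noisy = target + rng.normal(scale=0.2, size=target.shape).astype(np.float32)
    history = accumulate(history, noisy)
    if frame in (0, 40):
        print(f"frame {frame:2d}: mean error {np.abs(history - target).mean():.3f}")
# frame 0 shows the full noise; by frame 40 the accumulated image is far cleaner.
```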

We wanted to know more about what was going on under the hood to yield such different results. And in the following pages, we finally figured it out...

The Strengths and Weaknesses of DLSS in One Screenshot

In the shot with DLSS enabled, the background and its vegetation look better than they do in the captures with no AA or with TAA. But aliasing is more pronounced on the edges of the car. As a final clue, the text on the license plate in our DLSS example reveals a lack of definition compared to 4K with and without AA. In short, DLSS can't always perform miracles.

Note: This story originally appeared on Tom's Hardware FR.


MORE: Best Graphics Cards

MORE: Desktop GPU Performance Hierarchy Table


MORE: All Graphics Content

  • hixbot
OMG this might be a decent article, but I can't tell because the autoplay video that hovers over the text makes it impossible to read.
  • richardvday
I keep hearing about the autoplay videos, yet I never see them.
    I come here on my phone and my PC and never have this problem. I use Chrome; what browser does that?
  • bit_user
    21435394 said:
    Most surprising is that 4K with DLSS enabled runs faster than 4K without any anti-aliasing.
Thank you! I was waiting for someone to try this. It seems I was vindicated when I previously claimed that it's upsampling.

    Now, if I could just remember where I read that...
  • bit_user
    21435394 said:
    Notice that there is very little difference in GDDR6 usage between the runs with and without DLSS at 4K.
    You only compared vs TAA. Please compare against no AA, both in 2.5k and 4k.
  • bit_user
    21435394 said:
    In the Reflections demo, we have to wonder if DLAA is invoking the Turing architecture's Tensor cores to substitute in a higher-quality ground truth image prior to upscaling?
    I understand what you're saying, but it's incorrect to refer to the output of an inference pipeline as "ground truth". A ground truth is only present during training or evaluation.

    Anyway, thanks. Good article!
  • redgarl
    So, 4k no AA is better... like I noticed a long time ago. No need for AA at 4k, you are killing performances for no gain. At 2160p you don't see jaggies.
  • coolitic
    So... just "smart" upscaling. I'd still rather use no AA, or MSAA/SSAA if applicable.
  • bit_user
    21436668 said:
    So, 4k no AA is better... like I noticed a long time ago.
    That's not what I see. Click on the images and look @ full resolution. Jagged lines and texture noise are readily visible.

    21436668 said:
    No need for AA at 4k, you are killing performances for no gain.
    If you read the article, DLSS @ 4k is actually faster than no AA @ 4k.

    21436668 said:
    At 2160p you don't see jaggies.
    Depending on monitor size, response time, and framerate. Monitors with worse response times will have some motion blurring that helps obscure artifacts. And, for any monitor, running at 144 Hz would blur away more of the artifacts than at 45 or 60 Hz.
  • Lasselundberg
I hate your forced videos... and why is there no 2080 Ti FE in stock anywhere?
  • s1mon7
Using a 4K monitor on a daily basis, I find aliasing much less of an issue than seeing low-res textures on 4K content. With that in mind, the DLSS samples immediately gave me the uncomfortable feeling of low-res rendering. Sure, it is obvious on the license plate screenshot, but it is also apparent on the character in the first screenshot and on the foliage. They lack detail and have that "blurriness" of "this was not rendered in 4K" that daily users of 4K screens quickly grow to avoid, as it removes the biggest benefit of 4K screens - the crispness and life-like appearance of characters and objects. It's the perceived resolution of things on screen that matters most, and DLSS takes that away.


The way I see it, DLSS does the opposite of what truly matters in 4K once you actually get used to it and its pains, and I would not find it usable outside of really fast-paced games where you don't take the time to appreciate the vistas. Those are also the games that usually aren't as demanding at 4K anyway, nor do they require 4K in the first place.

This technology is much more useful at low resolutions, where aliasing is the far larger problem and the textures, even when rendered natively, don't deliver the same "wow" effect you expect from 4K anyway, so turning them down a notch is far less noticeable.