Nvidia's DLSS Technology Analyzed: It All Starts With Upscaling

Nvidia's new DLSS (Deep Learning Super Sampling) technology is one of the Turing architecture's most promising, as we first showed in our GeForce RTX 2080 Ti Founders Edition review. But it is also the most mysterious. The company isn't going into depth on how DLSS works. Of course, we wanted to know more. So, after hours of testing and image analysis, we think we have the answer.

DLSS, According to Nvidia

In its descriptions of DLSS' inner workings, Nvidia tends to stay fairly superficial. In Nvidia’s Turing Architecture Explored: Inside the GeForce RTX 2080, the company presented DLSS as a feature that delivers better performance than traditional anti-aliasing at QHD and 4K while simultaneously achieving better picture quality. It's the claim of higher-quality visuals at faster frame rates that perplexed us most. We naturally spent time comparing the performance and output of DLSS versus TAA (Temporal Anti-Aliasing, a technique for smoothing out the crawling and flickering seen in motion while playing a game) in our first GeForce RTX reviews.

Most recently, Nvidia's GeForce RTX 2070 reviewer's guide explained DLSS by saying, "DLSS leverages a deep neural network to extract multidimensional features of the rendered scene and intelligently combine details from multiple frames to construct a high-quality final image. This allows Turing GPUs to use half the samples for rendering and use AI to fill in information to create the final image." This explanation left us imagining that the graphics processor was only shading part of each frame, leaving the architecture's Tensor cores to reproduce the missing pixels through AI.

Might DLSS more simply be explained as an upscaling technique, perfected through the application of AI? It seems plausible, especially since image processing is one of the most compelling applications of AI. It's also possible that DLSS involves a mix of upscaling, anti-aliasing, and filling in missing pixels.
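
To make that hypothesis concrete, here is a minimal sketch of what a render-then-reconstruct pipeline could look like. This is our own illustration, not Nvidia's implementation: the network, its inputs, and its weights are not public, so the reconstruction step below is just a plain bilinear resize standing in for the inference pass that would run on Turing's Tensor cores.

```python
# Illustrative sketch only: Nvidia has not published how DLSS works, so the
# "reconstruction" step below is a plain bilinear resize standing in for the
# neural network that would run on the Tensor cores.
import numpy as np

def render_frame(width, height):
    # Stand-in for the engine's raster output (random pixels here).
    return np.random.rand(height, width, 3).astype(np.float32)

def bilinear_upscale(img, new_w, new_h):
    # The non-AI baseline: plain bilinear interpolation to the target size.
    h, w, _ = img.shape
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def hypothetical_dlss(low_res_frame, target_w, target_h):
    # Where a real implementation would run a trained network, this sketch
    # simply upscales; the point is only to show where inference would sit.
    return bilinear_upscale(low_res_frame, target_w, target_h)

# Render at roughly half the target pixel count, present at the target size
# (scaled-down dimensions here; the real ratio would be 2560x1440 -> 3840x2160).
frame_low = render_frame(256, 144)
frame_out = hypothetical_dlss(frame_low, 384, 216)
print(frame_out.shape)  # (216, 384, 3)
```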

The First Visual Cues

Sometimes DLSS looks better than TAA, and sometimes it looks worse. In either case, the technology's output looks very good. Our analysis focuses on individual frames with hand-picked regions zoomed in. But real-time gameplay makes it tough to differentiate between DLSS and TAA at 3840 x 2160, and in certain scenes, artifacts that plague TAA leave the DLSS-based picture unscathed.

Interestingly, we've found that DLSS runs more efficiently at 4K than at QHD, yielding a cleaner-looking output. However, when we look at certain image captures, specific hints suggest the picture is rendered at a lower resolution than claimed. We were also able to modify the configuration files of Nvidia's Infiltrator and Final Fantasy XV DLSS-enabled demos to run them without AA. This helped immensely with our analysis.

All of our screenshots facilitate comparisons of strictly identical images (aside from some unavoidable variations due to lighting effects). The enlargements are done without filtering to preserve each picture's authenticity. Click to view the images in PNG format (lossless and at their original size).
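
For readers who want to inspect the images the same way, the snippet below shows one way to produce unfiltered (nearest-neighbor) enlargements with Python and Pillow. It is purely illustrative; the file name and crop coordinates are placeholders rather than the exact ones used for our captures.

```python
# One possible way to make unfiltered (nearest-neighbor) enlargements for
# pixel-level comparisons. File name and crop box are placeholders.
from PIL import Image

def zoom_crop(path, box, factor=4):
    # Crop the region `box` (left, upper, right, lower) and enlarge it without
    # any smoothing, so every source pixel stays a hard-edged square.
    crop = Image.open(path).crop(box)
    return crop.resize((crop.width * factor, crop.height * factor),
                       resample=Image.NEAREST)

# Example: zoom_crop("dlss_4k.png", (1800, 900, 1928, 996)).save("zoomed.png")
```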

When DLSS Works Wonderfully

In this picture, it's difficult to distinguish between technologies. DLSS does a great job, and you can even see in the background vegetation that it offers superior image quality compared to TAA. Remarkable.

At this early stage, the Final Fantasy XV demo represents the best implementation of DLSS that we've seen. The Infiltrator demo is somewhat less flattering, though its DLSS rendering also looks great to the naked eye in real time. Again, in some scenes, DLSS is very effective indeed. In the following image, DLSS comes close to perfection.

This is close to perfection

When DLSS Shows its Limits

After zooming in on dozens of screenshots to get more familiar with DLSS and its strengths/weaknesses, we were able to uncover flaws that made us wonder whether DLSS was natively rendering at 4K (3840 x 2160) or QHD (2560 x 1440).
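
A quick pixel-count check shows why QHD is the natural suspect: a QHD frame contains roughly 44 percent of the pixels of a 4K frame, which lines up with Nvidia's statement that Turing renders half the samples and fills in the rest.

```python
# Pixel budget of QHD versus 4K (UHD) - roughly the "half the samples"
# that Nvidia's reviewer's guide describes.
uhd_pixels = 3840 * 2160        # 8,294,400
qhd_pixels = 2560 * 1440        # 3,686,400
print(qhd_pixels / uhd_pixels)  # ~0.444
```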

Here's the first frame of a new scene

And this is the first frame of a new scene at QHD, not zoomed in

We also noticed that DLSS betrays its true resolution on the first frame of every new scene (see above). In the image below, screen captures taken 40 frames later show DLSS smoothing the jaggies with great efficiency. Especially at 4K, the output quality of DLSS is difficult to distinguish from true 4K with TAA applied.

DLSS looks almost perfect 40 frames after a new scene starts
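
That first-frame weakness is consistent with Nvidia's statement that DLSS combines details from multiple frames. Nvidia hasn't detailed the temporal side of the algorithm, so the sketch below is only a generic history-buffer accumulation, not Nvidia's method; it simply illustrates why the first frame of a scene can look no better than a single low-resolution render, while frame 40 has had dozens of frames' worth of samples blended in.

```python
# Generic temporal accumulation, not Nvidia's actual algorithm: each output
# frame blends the newest render with a running history buffer. On a scene
# cut the history is empty, so the first frame exposes the raw render;
# later frames converge toward a much cleaner result.
import numpy as np

def accumulate(new_frame, history, alpha=0.1):
    # Exponential moving average over frames; reset history on a scene cut.
    if history is None:
        return new_frame            # frame 1: nothing to blend with yet
    return alpha * new_frame + (1 - alpha) * history

history = None
for frame_index in range(40):
    new_frame = np.random.rand(144, 256, 3).astype(np.float32)  # placeholder render
    history = accumulate(new_frame, history)
```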

Aliasing is sometimes visible in the middle of a sequence with DLSS active, though, and it persists through the scene. Check out the image below, where jagged edges are more prominent in the 4K DLSS capture compared to TAA at 3840 x 2160.

We wanted to know more about what was going on under the hood to yield such different results. And in the following pages, we finally figured it out...

The Strengths and Weaknesses of DLSS in One Screenshot

In the shot with DLSS enabled, the background and its vegetation look better than the screen captures with no AA or with TAA enabled. But aliasing is more pronounced on the edges of the car. As a final clue, the text on the license plate in our DLSS example reveals a lack of definition compared to 4K with and without AA. In short, DLSS can't always perform miracles.

Note: This story originally appeared on Tom's Hardware FR.


Comments
  • hixbot
    OMG this might be a decent article, but I can't tell because the autoplay video that hovers over the text makes it impossible to read.
  • richardvday
    I keep hearing about the autoplay videos, yet I never see them.
    I come here on my phone and my PC and never have this problem. I use Chrome; what browser does that?
  • bit_user
    Quote:
    Most surprising is that 4K with DLSS enabled runs faster than 4K without any anti-aliasing.

    Thank you! I was waiting for someone to try this. It seems I was vindicated, when I previously claimed that it's upsampling.

    Now, if I could just remember where I read that...
  • bit_user
    Quote:
    Notice that there is very little difference in GDDR6 usage between the runs with and without DLSS at 4K.

    You only compared vs TAA. Please compare against no AA, both in 2.5k and 4k.
  • bit_user
    Quote:
    In the Reflections demo, we have to wonder if DLAA is invoking the Turing architecture's Tensor cores to substitute in a higher-quality ground truth image prior to upscaling?

    I understand what you're saying, but it's incorrect to refer to the output of an inference pipeline as "ground truth". A ground truth is only present during training or evaluation.

    Anyway, thanks. Good article!
  • redgarl
    So, 4k no AA is better... like I noticed a long time ago. No need for AA at 4k, you are killing performances for no gain. At 2160p you don't see jaggies.
  • coolitic
    So... just "smart" upscaling. I'd still rather use no AA, or MSAA/SSAA if applicable.
  • bit_user
    redgarl said:
    So, 4k no AA is better... like I noticed a long time ago.

    That's not what I see. Click on the images and look @ full resolution. Jagged lines and texture noise are readily visible.

    redgarl said:
    No need for AA at 4k, you are killing performances for no gain.

    If you read the article, DLSS @ 4k is actually faster than no AA @ 4k.

    redgarl said:
    At 2160p you don't see jaggies.

    Depending on monitor size, response time, and framerate. Monitors with worse response times will have some motion blurring that helps obscure artifacts. And, for any monitor, running at 144 Hz would blur away more of the artifacts than at 45 or 60 Hz.
  • Lasselundberg
    I hate your forced videos... and why is there no 2080 Ti FE in stock anywhere?
  • s1mon7
    Using a 4K monitor on a daily basis, aliasing is much less of an issue than seeing low res textures on 4K content. With that in mind, the DLSS samples immediately gave me the uncomfortable feeling of low res rendering. Sure, it is obvious on the license plate screenshot, but it is also apparent on the character on the first screenshot and foliage. They lack detail and have that "blurriness" of "this was not rendered in 4K" that daily users of 4K screens quickly grow to avoid, as it removes the biggest benefit of 4K screens - the crispness and life-like appearance of characters and objects. It's the perceived resolution of things on the screen that is the most important factor there, and DLSS takes that away.


    The way I see it, DLSS does the opposite of what truly matters in 4K after you actually get used to it and its pains, and I would not find it usable outside of really fast paced games where you don't take the time to appreciate the vistas. Those are also the games that aren't usually as demanding in 4K anyway, nor require 4K in the first place.

    This technology is much more useful for low resolutions, where aliasing is the far larger problem, and the textures, even when rendered natively, don't deliver the same "wow" effect you expect from 4K anyway, thus turning them down a notch is far less noticeable.
  • s1mon7
    bit_user said:
    redgarl said:
    So, 4k no AA is better... like I noticed a long time ago.
    That's not what I see. Click on the images and look @ full resolution. Jagged lines and texture noise are readily visible.


    Yet DLSS looks very clearly lower res, even without zooming in. I'd argue that the vast majority of users would be far less likely to worry about jagged lines on non-AA 4K content than lowered perceived image resolution. The only case where this is not true is in really fast paced games or if someone uses their large-screen TV as a monitor up-close.

    Otherwise, DLSS makes 4K content look like it's not really 4K content, because it really isn't. It's just good upscaling, still with lower res image.
  • mr.mujx
    Can you guys test whether it adds input lag or not? Because I feel it does, since it does more work on every frame.
    And can it be used effectively at 1080p or 2K to get more fps?
  • rantoc
    Few images seem to have been taken in motion, which is where TAA shows its ugly face; it would be interesting to see how stable DLSS is in that regard.
  • cane.phoenix
    How is the upscaling part still confusing for tech sites? Jensen even mentioned that they used it, in his own keynote, when the technology was launched... People just associate Super Sample with antialiasing. But the term is correct, since the technique uses samples, for it to work.

    And of course you will get more fps if you render at a lower resolution and then upscale it, compared to rendering at a higher resolution. nVIDIA did provide a chart that showed that DLSS would provide between 30-50% more fps.

    And who cares if the first frame in a new scene has a lower resolution? It is only shown for 17 milliseconds @ 60fps...

    Also all the previous bashing of RTX and how they are not worth their money. DLSS is super easy to implement. It took the FF15 team 1 week to do it. It provides almost indistinguishable quality from native 4K, gives 30-50% more fps (on top of the already better architecture = more fps than Pascal).
    nVIDIA actually gave people what they wanted, which was 4K @ 60fps...
  • uglyduckling81
    hixbot said:
    OMG this might be a decent article, but I can't tell because the autoplay video that hovers over the text makes it impossible to read.


    Install NoScript and make sure everything is blocked except TH site itself. You won't see that horrible video.

    Edit: Also I saw somewhere that DLSS renders at 1800p and upscales.
  • bit_user
    cane.phoenix said:
    How is the upscaling part still confusing for tech sites? Jensen even mentioned that they used it, in his own keynote, when the technology was launched... People just associate Super Sample with antialiasing. But the term is correct, since the technique uses samples, for it to work.

    What I recall him saying is that supersampling was used to create the ground truth (something about 64 jittered samples per pixel, IIRC). The deep learning model then serves the purpose of inferring what the supersampled output would be, based on a non-supersampled input.

    If there's anywhere he actually said it renders at a lower resolution than the target, please tell us what time in the presentation he said that (preferably via timestamped youtube link).

    cane.phoenix said:
    And of course you will get more fps if you render at a lower resolution and then upscale it, compared to rendering at a higher resolution.

    Traditionally, but using methods much, much cheaper than DLSS. DLSS involves probably between 100x and 1000x the amount of computation of something like bicubic interpolation. So, it's not a given that the time to run DLSS would be less than the difference between rendering at its input resolution and native.

    cane.phoenix said:
    nVIDIA did provide a chart that showed that DLSS would provide between 30-50% more fps.

    They were comparing it with TAA. They never compared it with native 4k @ no AA, so you couldn't tell if it was faster just because TAA was so expensive.

    cane.phoenix said:
    And who cares if the first frame in a new scene has a lower resolution? It is only shown for 17 milliseconds @ 60fps...

    Your eyes are surprisingly good at detecting certain types of changes in images. Once you start to notice a pop or shift between the first and second frames, you might start to feel that you can no longer ignore it. I'm just saying it could get annoying - especially if you're playing something that runs at a lower framerate.

    cane.phoenix said:
    Also all the previous bashing of RTX and how they are not worth their money. DLSS is super easy to implement. It took the FF15 team 1 week to do it.

    They were probably also using Nvidia's GameWorks SDK. Game engines that don't use it might have to forego this feature, entirely. I don't know if that's true, but you can imagine Nvidia trying to use this as leverage to make developers buy into their SDK ecosystem and further disadvantage AMD hardware.

    cane.phoenix said:
    nVIDIA actually gave people what they wanted, which was 4K @ 60fps...

    But people want it for all titles. Existing and those upcoming titles not built on Nvidia's SDKs.

    DLSS is a trick. It's a darn good one, but it's still a trick (or hack, if you prefer). And as such, it has downsides relative to native 4k.
  • cryoburner
    Quote:
    In the shot with DLSS enabled, the background and its vegetation look better than the screen captures with no AA or with TAA enabled.

    You seem to have missed something big here, that I noticed immediately in the first two comparison shots, and again in that car image. The reason the background looks "better" in these stills, is that DLSS is effectively removing much of the depth of field effect. The backgrounds are supposed to be blurry in those shots, because those parts of the scene are intended to be out of focus, to simulate a camera lens, giving the image some depth. Not being as blurry as it should be in those parts of the scene is another artifact that effectively makes the DLSS image quality worse. DLSS is applying a sort of sharpening filter to the upscaled output, and while that helps the image to look sharper than just a regular upscale, it has the side effect of also sharpening things that shouldn't be sharpened.

    You should be able to see this well in that first comparison image of food when viewed at full size. With no AA, the central part of the image is sharp and in focus, but the background to the upper-right, as well as the edge of the tortilla in the foreground, both show soft focus effects, as they should. With TAA applied, the entire scene gets a bit blurry, though the out of focus areas are still relatively out of focus, maintaining some depth. Now in the DLSS image, the central part of the shot that is supposed to be sharp and in-focus is actually a lot blurrier than TAA. However, the background and foreground are actually sharper than they should be, since the sharpening filter has effectively removed most of the focal effect that was supposed to be there. The net result is that instead of having the subject of the image sharp and in focus, and the background and foreground blurred to provide depth and help make the subject stand out, everything is at roughly the same somewhat-blurry level of focus, making the DLSS image look flatter.

    You can clearly see this artifact again in the "bending over" image, as well as in the car image. In both cases, the trees in the background get sharper than they should be, while the subject of the image, the person or car, gets blurrier than even TAA. Some people may prefer to not have the depth of field effects, but in that case, turn them off. If the effect were disabled, you would clearly see that using no AA produces the sharpest image, TAA is somewhat blurrier but removes aliasing, and DLSS is significantly blurrier still. The only reason it looks "better" in some specific parts of some images is that it's counteracting a graphical effect that's supposed to be there.

    Now, presumably a game could apply depth of field after the upscale and sharpen process to avoid this removal of the effect. In that case, however, everything would be blurrier than TAA with the effect active, and I suspect that was not done for this demo, since Nvidia likely preferred to make at least some parts of the scene look sharper than TAA, while providing better performance.

    And of course, it sounds like DLSS will also provide the option for simulated supersampling, as its name implies, rather than just upsampling from a lower resolution. This should increase performance demands over rendering at native resolution though, but not as much as actual supersampling.

    Quote:
    Finally, this is a technology that might be viable on entry-level Turing-based GPUs (as opposed to ray tracing, which requires a minimum level of performance to be useful), if those graphics processors end up with Tensor cores. We'd love to see low-end GPU play through AAA games at 1920 x 1080 based off of a 720p render.

    Maybe, but with larger pixel sizes, the loss of detail should be even more noticeable than at these high resolutions. I guess it could potentially be good for real low-end hardware, where it might mean the difference between medium and high settings in a game, but it also brings into question how much cost it would add to the cards to include enough tensor cores to perform the upscaling to 1080p, and whether simply including more traditional cores might be better.

    s1mon7 said:
    The way I see it, DLSS does the opposite of what truly matters in 4K after you actually get used to it and its pains, and I would not find it usable outside of really fast paced games where you don't take the time to appreciate the vistas. Those are also the games that aren't usually as demanding in 4K anyway, nor require 4K in the first place.

    On the other hand, you could think of it as running a game at 1440p with upscaling to 4K in a way that looks better than traditional forms of upscaling. If you are gaming at 4K, I'm sure you encounter games that you simply can't run at max settings while maintaining smooth performance. From an image quality standpoint, I'm sure there are cases where running a game at 1440p with max settings will look better than running it at 4K with medium settings. Raytraced effects might be one such example of this, where it might simply not be practical to run those effects in a game at native 4K, but with DLSS rendering the base image at a lower resolution, could keep things running smoother. More resolution isn't all that matters for image quality, after all.
  • bit_user
    cryoburner said:
    The only reason it looks "better" in some specific parts of some images, is that it's counteracting a graphical effect that's supposed to be there.

    This judgement is too selective. There are some very nicely anti-aliased edges in the DLSS output that look notably better than TAA and (of course) no-AA native res.

    cryoburner said:
    And of course, it sounds like DLSS will also provide the option for simulated supersampling, as its name implies, rather than just upsampling from a lower resolution. This should increase performance demands over rendering at native resolution though, but not as much as actual supersampling.

    They don't provide such an option. I think the rationale behind the name is that it was trained on a supersampled ground truth. They intend that it already looks supersampled. To some extent, I think they're right.

    Don't just look at edges, but also at details in the texture. DLSS cleans up a lot of noise, there, some of which you really can't claim was intentional.

    In the end, a true verdict depends on gameplay. Do let us know if/when you actually try it in person. I don't even trust Twitch/youtube videos, since the video compression blurs a lot of fine details and adds artifacts of its own.
  • cryoburner
    bit_user said:
    This judgement is too selective. There are some very nicely anti-aliased edges in the DLSS output that look notably better than TAA and (of course) no-AA native res.

    From an image quality standpoint ignoring the performance gains, DLSS doesn't look particularly good in this implementation compared to TAA. The edges might be softened, but that's because everything has been softened. Everything that's supposed to be sharp looks quite muddy here, and looking at the areas of the scene where things are supposed to be in focus, it clearly looks worse than TAA. The purpose of DLSS in this demo is to improve performance at the cost of image quality.

    bit_user said:
    They don't provide such an option. I think the rationale behind the name is that it was trained on a supersampled ground truth. They intend that it already looks supersampled. To some extent, I think they're right.

    I think you may be wrong on this. When they announced the RTX cards, I'm pretty sure it was mentioned that DLSS could be used to improve image quality or performance. And there's no reason for such an implementation not to work. This implementation renders only half the pixels and upscales the results to improve frame rates at the expense of image quality, but you could likewise render the scene at native resolution, use DLSS to double the pixels, then scale that back down again. Actually here, I found something in an Nvidia article that seems to imply that...

    https://news.developer.nvidia.com/dlss-what-does-it-mean-for-game-developers/

    Quote:
    Question: Will consumers be able to see the difference DLSS makes? Answer: Absolutely! The difference in both frame rate and image quality (depending on the mode selected) is quite pronounced. For instance, in many games that we’re working on, DLSS allows games to jump to being comfortably playable at 4K without stutters or lagged FPS.

    Notice the "depending on the mode selected" part. So, I think we'll also see this used for actual supersampling, at a reduction in performance over native resolution, even if this limited tech demo didn't do that.

    bit_user said:
    Don't just look at edges, but also at details in the texture. DLSS cleans up a lot of noise, there, some of which you really can't claim was intentional.

    Again, everything looks too blurry to tell. I did notice that the character's hair looks softer and less pixelated with DLSS than with TAA, due to the process being applied indiscriminately across the entire scene, but the hair, and the entire character in general looks much blurrier, with the surface texture of his leather jacket completely lost. And the aliasing and loss of detail on the car looks much worse than the TAA example. Plus, as I previously pointed out, the backgrounds have had their focus effects improperly removed in this implementation by what is effectively a sharpening routine. The occasional jagged pixels getting past TAA is arguably less of a concern than having everything appear a bit muddy and flat.

    Of course, the performance gains could still make this worthwhile, as it likely looks better than other means of upscaling a lower resolution render target. And if it can provide a supersampling equivalent at a reduced performance impact, that could be good as well, and might actually provide some notable image quality gains over something like TAA.
  • bit_user
    cryoburner said:
    bit_user said:
    They don't provide such an option. I think the rationale behind the name is that it was trained on a supersampled ground truth. They intend that it already looks supersampled. To some extent, I think they're right.
    I think you may be wrong on this. When they announced the RTX cards, I'm pretty sure it was mentioned that DLSS could be used to improve image quality or performance.

    No, they are not saying it reduces quality. Show me where they ever said that.

    Moreover, in the link you provided, they do explicitly state what I recall - that it's always trained on a supersampled ground truth @ the output resolution.
    Quote:
    During training, the DLSS model is fed thousands of aliased input frames and its output is judged against the “perfect” accumulated frames. This has the effect of teaching the model how to infer a 64 sample per pixel supersampled image from a 1 sample per pixel input frame.

    That's why they call it Deep Learning Super Sampling - because deep learning is used to achieve the effect of supersampling, instead of doing it by brute force.

    cryoburner said:
    you could likewise render the scene at native resolution, use DLSS to double the pixels, then scale that back down again.

    That's just silly. Why would you scale it up, and then back down? You would just train a DLSS filter that accepts native resolution input, instead of input at a lower resolution. But the model would still be trained on supersampled ground truth, and infer what that would look like.


    cryoburner said:
    Actually here, I found something in an Nvidia article that seems to imply that... https://news.developer.nvidia.com/dlss-what-does-it-mean-for-game-developers/
    Quote:
    Question: Will consumers be able to see the difference DLSS makes? Answer: Absolutely! The difference in both frame rate and image quality (depending on the mode selected) is quite pronounced. For instance, in many games that we’re working on, DLSS allows games to jump to being comfortably playable at 4K without stutters or lagged FPS.
    Notice the "depending on the mode selected" part. So, I think we'll also see this used for actual supersampling, at a reduction in performance over native resolution, even if this limited tech demo didn't do that.

    The "mode" is probably referring to this bit:
    Quote:
    DLSS is also flexible enough to allow developers to choose the level of performance and resolution scaling they wish rather than being locked to certain multiples of the physical monitor or display size

    So, they mean depending on the ratio of input to output resolution.
  • cryoburner
    bit_user said:
    That's just silly. Why would you scale it up, and then back down? You would just train a DLSS filter that accepts native resolution input, instead of input at a lower resolution. But the model would still be trained on supersampled ground truth, and infer what that would look like.

    You seemed to be saying in your previous post that they won't provide an option to apply DLSS to a native resolution render to add additional samples for simulated supersampling, and I was simply pointing out how it could be used to achieve that goal, not that they would necessarily use that exact method. Undoubtedly there's more efficient things they could do to accomplish a similar result. Already, Nvidia's cards support Dynamic Super Resolution though, which is traditional supersampling, so at the very least, performing this process could be a matter of doing just that. On a 1440p screen, you could start with a native resolution render and use DLSS to fill in details for a 4K render requiring less performance than true 4K, then DSR could scale that back down to 1440p. The obvious reason for doing that would be that you won't lose detail compared to the native resolution render like you do here.

    As it stands, DLSS as implemented in this demo does not offer what I would consider to be "supersampled quality", and it is clearly a reduction in quality in most ways over even TAA. The whole point of the demo was to show how Nvidia's cards could use DLSS to provide higher frame rates at a "similar" quality level to TAA (which itself tends to be a bit blurry), not to show off better image quality. Better image quality is likely possible though, by starting with a native resolution render and using DLSS to generate additional samples. This article shows that the scene is being rendered here at half-resolution and DLSS is used to fill in missing pixels to recoup some of the lost quality, but it can undoubtedly be applied to a full-resolution render to improve quality as well.
  • bit_user
    cryoburner said:
    bit_user said:
    That's just silly. Why would you scale it up, and then back down? You would just train a DLSS filter that accepts native resolution input, instead of input at a lower resolution. But the model would still be trained on supersampled ground truth, and infer what that would look like.
    You seemed to be saying in your previous post that they won't provide an option to apply DLSS to a native resolution render to add additional samples for simulated supersampling,

    I was referring to what the article said about not having such an option. After seeing the link you posted, it does sound like they might not rule out the case where input resolution == output.
  • Randy_82
    4K no AA is the way to go, if there are no performance issues. The only time you would want AA at 4K is if you're taking screenshots and need a flat, clean edge-to-edge image. However, I can see where DLSS is a game changer for those of us who own 4K monitors, and that's performance in games which can't quite hit 60fps. Trying to play 2560x1440 on a native 4K monitor is way worse than what DLSS can offer, and no form of AA can save it from displaying a blurry mess. On my LG 43UD79-B 43" 4K display, I tried native 1440p with no AA; image quality is ugly as f and too distracting to play, with shimmering foliage all over the place. I tried 1440p with TAA, and I think the game made me go blind, as image quality is too blurry to enjoy.
  • t.enzenebner
    Can someone explain what's going on in the last picture and why the background is SO much better compared to no-AA? Even the cliff texture.