Nvidia Says Native Resolution Gaming is Out, DLSS is Here to Stay

Cyberpunk 2077 Ray Tracing: Overdrive Mode
(Image credit: YouTube - Nvidia GeForce)

Digital Foundry recently posted a roundtable discussion video featuring Jakub Knapik from CD Projekt Red, alongside Jacob Freeman and Bryan Catanzaro from Nvidia, in which they discuss the implications of Nvidia's DLSS 3.5 technology in Cyberpunk 2077, as well as the broader advantages of AI-based upscaling and rendering.

During the discussion with Digital Foundry's Alex Battaglia and PCMR's Pedro Valadas, Bryan Catanzaro, Nvidia's VP of Applied Deep Learning Research, stated that native-resolution rendering is no longer the best solution for maximum graphical fidelity. Catanzaro went on to say that the gaming graphics industry is headed toward ever-heavier reliance on AI image reconstruction and AI-based rendering.

Catanzaro's statement came in response to a question from Valadas about whether Nvidia planned to prioritize native-resolution performance in its GPUs. Catanzaro argued that improving graphics fidelity through sheer brute force is no longer an ideal solution because, in his words, "Moore's Law is dead." As a result, he explained, smarter technologies like DLSS need to be implemented to improve graphics fidelity and work around the otherwise modest gen-on-gen performance improvements seen in today's graphics hardware.

In the case of Cyberpunk 2077, both Valadas and CD Projekt Red's Jakub Knapik said that full path tracing would have been impossible in that game without the full suite of DLSS technologies, especially image upscaling and frame generation. They also noted that DLSS adds considerable performance headroom to modern GPUs, allowing Cyberpunk 2077 to run a full, realistic light simulation in real time and output a far more realistic, detailed image than traditional rendering produces.
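To put that headroom in rough numbers: path-tracing cost scales roughly with the number of pixels shaded, so rendering at a lower internal resolution and reconstructing the output slashes the per-frame ray budget. Below is a minimal back-of-the-envelope sketch in Python; the per-axis scale factors are the commonly cited DLSS 2.x presets and should be treated as illustrative assumptions rather than official constants.

```python
# Rough estimate of shading savings from upscaling: pixels shaded per frame
# at each internal resolution versus native 4K output.
# Scale factors are commonly cited DLSS presets (assumed, not official specs).
TARGET_W, TARGET_H = 3840, 2160  # 4K output resolution

PRESETS = {
    "Native":            1.0,
    "Quality":           1 / 1.5,   # ~66.7% per axis
    "Balanced":          1 / 1.72,  # ~58% per axis
    "Performance":       1 / 2.0,   # 50% per axis
    "Ultra Performance": 1 / 3.0,   # ~33.3% per axis
}

for name, scale in PRESETS.items():
    w, h = round(TARGET_W * scale), round(TARGET_H * scale)
    savings = (TARGET_W * TARGET_H) / (w * h)
    print(f"{name:>17}: {w}x{h} internal -> {savings:.2f}x fewer pixels shaded")
```

At the Performance preset, for instance, the path tracer shades only a quarter as many pixels as native 4K before the network reconstructs the full-resolution image, and frame generation then interpolates additional output frames on top of that.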

When asked about the future of machine learning in gaming graphics, Catanzaro said that DLSS and AI will eventually be able to replace traditional rendering entirely. He continued by saying the industry has realized it can learn far more complicated functions by training on large data sets than it can by building algorithms from the ground up, as traditional rendering techniques do.
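The contrast Catanzaro draws can be illustrated with a toy example: instead of writing a function by hand, you fit a model to sampled inputs and outputs of it. The sketch below, which is a generic function-fitting demo and nothing like the actual DLSS networks, trains a tiny neural network to reproduce the Reinhard tone-mapping curve x/(1+x) purely from data.

```python
# Toy demo: learn a hand-written graphics function from examples alone.
# A one-hidden-layer network fits the Reinhard tonemap y = x / (1 + x).
# Purely pedagogical; unrelated to how DLSS itself is built or trained.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 4.0, size=(1024, 1))  # HDR luminance samples
y = x / (1.0 + x)                          # "ground truth" outputs

W1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)  # hidden layer
W2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)   # output layer
lr = 0.05

for _ in range(5000):
    h = np.tanh(x @ W1 + b1)        # forward pass
    pred = h @ W2 + b2
    err = pred - y                  # gradient of 0.5 * MSE
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)          # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2            # gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1

test = np.array([[0.5], [2.0]])
learned = np.tanh(test @ W1 + b1) @ W2 + b2
print(np.hstack([test / (1 + test), learned]))  # analytic vs learned
```

The learned outputs should land close to the analytic curve even though the function itself never appears in the model, only examples of its behavior; scale that idea up by many orders of magnitude and you get the learn-it-from-data approach Catanzaro describes.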

Nvidia already showed us a glimpse of this future at NeurIPS 2018, where a driving demo was rendered entirely using AI.

Catanzaro's statements seem to confirm that the gaming world as we know it may be on its last legs. With the alleged "death" of Moore's Law, AI-based image manipulation may be the only thing that continues to drive 3D graphics forward for the foreseeable future. Catanzaro's statements also subtly confirm that Nvidia plans to prioritize AI performance in future graphics cards to power AI frame-rendering technologies like DLSS.

Aaron Klotz
Freelance News Writer

Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.

  • -Fran-
    No thanks, nVidia.

    I had a longer rant, but you know what... It boils down to: I do not want a future where nVidia is the sole provider of technologies to make games look good. If nVidia wants a pass, they need to make this not just accessible to other GPU vendors, but maybe include it as standardised API access across all engines. The industry does not need another "Glide 3D" moment.

    Been there, done that and hated it.

    Regards.
  • Order 66
    Nvidia, remind me of this again in 10 years, when AI solutions actually look better than native. Also, maybe ten years from now Nvidia's APIs will be the standard. Maybe Nvidia should make their budget GPUs actually good value so that they can play games at decent settings at NATIVE resolution. (genius, I know) /s
  • Kamen Rider Blade
    Sorry, but no UpScaling of any kind for me.

    It's either Native Rendering or Down Sampled from a higher resolution to my current resolution.

    I want Real Image quality, not AI-manipulated nonsense and artificial frame interpolation to boost frame rates.

    I want Ultra-Low Latency, Real Image Quality, not AI BS or any sort of "UpScaling".
  • RichardtST
    So... more laggy imaginary imagery is the new normal, eh? Sounds suspiciously to me like they're just making excuses to feed us more new hardware that we don't need and don't want. They wouldn't do that, would they?

    Hint for the sarcasm-impaired: Yes! That last statement in the previous paragraph is loaded with sarcasm. Revel in it.
  • elforeign
    I'm excited to see how AI and generative learning models will continue to transform graphics rendering.

    For all the people who have doubts about the underlying technology, you only need to look to the scientific community: they create and use enormous models to simulate and predict the real-world behavior of physical systems.

    I think Nvidia and Bryan are right to be exploring how to use better and larger models to help inform graphics rendering to decrease the computational expense and increase image fidelity.

    I think people are right to be wary of proprietary technology though, so I understand the backlash when one can assume Nvidia would further lock down its technology to drive hardware sales. But then again, that's capitalism.
  • NeoMorpheus
    Of course they would push for this, since it keeps everyone locked to their hardware.

    But the influencers and the so-called fair and unbiased reviewers never point that out.

    Goodbye to the open platform that was PC gaming.
  • Sleepy_Hollowed
    Yeah no, bye nvidia.
  • Elusive Ruse
    Not to forget that it's upscaling on THEIR GPUs or nothing. Once they force every reviewer they send GPUs to into singing DLSS's praises and adding it to their benchmarks, like they did with ray tracing, the masses will gobble it up.
  • umeng2002_2
    They do have a point. DLAA looks fantastic, but only DLSS Quality is good enough to even be considered a replacement for native rendering with a good TAA technique.
  • derekullo
    Kamen Rider Blade said:
    Sorry, but no UpScaling of any kind for me.

    It's either Native Rendering or Down Sampled from a higher resolution to my current resolution.

    I want Real Image quality, not AI-manipulated nonsense and artificial frame interpolation to boost frame rates.

    I want Ultra-Low Latency, Real Image Quality, not AI BS or any sort of "UpScaling".
    You forget that ignorance is bliss!

    My eyes don't care if the car was 100% rasterized with 30-year-old technology, with bloom, ambient occlusion, and a tiny bit of ray tracing, or if the developer simply puts {Red, Ferrari, SP90, photorealistic} into the AI generator field for the car and the AI generates it instead.
    With enough real red Ferrari SP90s in the imagery model, it will create a realistic-looking car.

    In the past, upscaling was a dirty word, one that still brings back bad memories of blurry pictures, but with AI you can fill in the blanks and get a high-resolution, non-blurry scene. (The Jurassic Park analogy of filling in the holes in the genes isn't lost on me!)

    I'm not saying we're quite there yet with AI, but with transistors approaching the size of large atoms, we can't rely on Moore's Law for much longer. (An atom of potassium is 0.231 nanometers wide ... only 12.9 times smaller than a very recent 3-nanometer transistor.)