Nvidia Announces More RTX and DLSS Games at Gamescom 2021

Marketing image for Nvidia RTX technology.
(Image credit: Nvidia)

Nvidia today announced, via a blog post, a veritable slew of upcoming games with support for the company's proprietary DLSS technology and ray tracing, either at launch or via post-launch updates. The green team's near-monopoly on upscaling technologies is being challenged from all sides by AMD's FidelityFX Super Resolution (AMD FSR) and the upcoming Intel XeSS. However, Nvidia is riding the momentum of being the first adopter, leveraging its relations with game developers and game engines to bring DLSS support to as many titles as it can before its competitors flood the market with their own solutions.

While Nvidia may take this opportunity to extol its virtues in the ray tracing realm, nothing stops AMD users from activating ray tracing effects on their RX 6000 series graphics cards, since ray tracing itself isn't proprietary, unlike DLSS. However, as we've shown in our look at Nvidia vs. AMD Ray Tracing, Nvidia does offer generally superior ray tracing performance, and that's without even enabling DLSS.

Nvidia also used the blog post to announce something that will surely make Linux gamers happy: Next month, the company will extend DLSS support to dozens (emphasis ours) of DirectX 11 and DirectX 12 games, including Control, Cyberpunk 2077, Death Stranding, F1 2020, Mechwarrior 5: Mercenaries, and Necromunda: Hired Gun, building upon the recently introduced DLSS support for Linux under Steam's Proton layer.

First up in today's announcement from Nvidia is Marvel's Guardians of the Galaxy, which drops on October 26. The PC version is confirmed to support Nvidia's DLSS and ray traced reflections at launch.

Dying Light 2: Stay Human, the awaited sequel to the original Dying Light, will once again look to push the envelope in graphics and world rendering. The Techland-developed game will feature ray traced global illumination, shadows, and reflections, which covers most of the major ray tracing features a game can tout. To help with all those gazillions of rays and their bounding-volume intersections, Techland is also adding DLSS support, providing a much-needed performance boost for such a ray tracing-heavy game. Techland hasn't announced a hard release date, but the game should drop before the end of the year.

If you are a Myst fan, the game's remaster, coming on August 26, is sure to have piqued your interest already. This is one of those games where graphics are among the most important elements for immersion, and the latest iteration of the classic point-and-click adventure will feature ray traced reflections and Nvidia DLSS. The addition of DLSS becomes all the more important when one considers that this release supports VR headsets, where framerates are as important to immersion as they are to avoiding the sometimes floor-spattering side effects of low performance in a VR environment. Nvidia promises that DLSS will more than double performance in Myst, and that its machine-learning-based upsampling algorithm will allow the game to render at up to 4864x2448 while maintaining the 90 fps VR target.
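For a rough sense of the pixel savings behind a claim like that, here's a back-of-the-envelope sketch in Python. The per-axis scale factors are Nvidia's published values for DLSS 2.x quality modes; the helper function and its name are our own, purely for illustration:

```python
# Per-axis render scale factors for DLSS 2.x quality modes (Nvidia's
# published values): the GPU renders at this fraction of the output
# resolution on each axis, then DLSS upscales to the target.
SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 1 / 2,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w, out_h, mode):
    """Approximate internal render resolution for a given output target."""
    s = SCALE[mode]
    return round(out_w * s), round(out_h * s)

# The 4864x2448 output is the VR target quoted in the article.
for mode in SCALE:
    w, h = internal_resolution(4864, 2448, mode)
    print(f"{mode}: renders {w}x{h}, upscaled to 4864x2448")
```

In Performance mode, for instance, the GPU shades only a quarter of the output pixels (half on each axis), which is where the "more than doubled" framerate headroom comes from.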

Another game featuring support for both DLSS and ray tracing (in the form of ray traced reflections and shadows) is SYNCED: Off-Planet, a PvPvE and co-op action game that's being developed by Tencent Studios.

This is just a selection of the games being showcased by Nvidia. Other titles using Nvidia's technologies include Battlefield 2042 (DLSS and Nvidia Reflex support, but no ray tracing, at least at launch), Bright Memory Infinite (ray traced reflections, shadows, caustics, and global illumination, as well as DLSS), Loopmancer (ray traced reflections, shadows, and global illumination sans DLSS, so it remains to be seen how well the Unreal Engine 4 game runs without an upsampling technology to claw back the framerate lost to all that eye candy), and Faraday Protocol (a story-driven puzzle game built on Unreal Engine 4 that introduces support for DLSS).

Naraka: Bladepoint is yet another game that will now feature DLSS, with the battle royale title also supporting Nvidia Reflex. Cyberpunk twin-stick shooter The Ascent is already a known (and excellent) quantity, and supports Nvidia's DLSS as well as ray traced reflections and shadows. Black Myth: Wukong dazzled audiences with last week's Unreal Engine 5 gameplay reveal, and even if the game sheds ray tracing in favor of Unreal Engine 5's shader-based techniques, the pairing with Nvidia DLSS promises a performant and beautiful action adventure. Lastly, GRIT and Chivalry 2 will both see the addition of Nvidia DLSS for increased performance on any Nvidia RTX graphics card.

If you don't have a ray tracing-capable graphics card yet and want to play these games with the settings cranked up to the max, check out our list of the best graphics cards, and good luck navigating the current GPU availability landscape.

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.

  • -Fran-
    "However, Nvidia is riding the momentum of being the first adopter, leveraging its relations with game developers and game engines to bring DLSS support to as many titles as it can before its competitors flood the market with their own solutions".

    Ah, you mean strong arming them and telling them about their "editorial direction" when choosing technologies to use or no big fat marketing paycheck? Of course. Very nVidia-like.

    Looking forward to an ARM future with them in charge... Not.

    Regards.
  • Blacksad999
    83% of the market owns a Nvidia GPU, so it would make sense to cover something that's relevant to...83% of people. It's not some grand conspiracy. XD
  • -Fran-
    Blacksad999 said:
    83% of the market owns a Nvidia GPU, so it would make sense to cover something that's relevant to...83% of people. It's not some grand conspiracy. XD
    It's not a conspiracy though. That's how nVidia rolls.

    Regards.
  • VforV
    Blacksad999 said:
    83% of the market owns a Nvidia GPU, so it would make sense to cover something that's relevant to...83% of people. It's not some grand conspiracy. XD
    Rofl, sure 83%, but out of that 83% do you know how many can use RTX and DLSS, how many actually have Turing and Ampere to use them?

    About 20%! That's how many, they are still the minority.

    The rest of the nvidia users, which are the majority, are on older generations and are left in the dust...

    So, thanks AMD for FSR and soon intel for XeSS, which work on older GPUs. nvidia can shove their RTX and DLSS where the sun don't shine.
  • Krotow
    VforV said:
    Rofl, sure 83%, but out of that 83% do you know how many can use RTX and DLSS, how many actually have Turing and Ampere to use them?

    Correct.

    VforV said:
    So, thanks AMD for FSR and soon intel for XeSS, which work on older GPUs. nvidia can shove their RTX and DLSS where the sun don't shine.

    DLSS 2.0 is already here and demonstrated very good results. Can't say that about FSR now. No Intel Arc hardware was given to reviewers yet so XeSS is still floating in clouds. Although it is good to know that competition to DLSS 2.0 is coming. And I hope that with Intel appearance on serious GPU stage GPU prices at last will slide down to bearable level.

    I'm actually curious who from these three elephants will release working asset loading from NVMe drives into GPU memory. I mean RTX IO + DirectStorage solution which Nvidia is cooking for 2 years already.
  • VforV
    Krotow said:
    Correct.



    DLSS 2.0 is already here and demonstrated very good results. Can't say that about FSR now. No Intel Arc hardware was given to reviewers yet so XeSS is still floating in clouds. Although it is good to know that competition to DLSS 2.0 is coming. And I hope that with Intel appearance on serious GPU stage GPU prices at last will slide down to bearable level.

    I'm actually curious who from these three elephants will release working asset loading from NVMe drives into GPU memory. I mean RTX IO + DirectStorage solution which Nvidia is cooking for 2 years already.
    You mean an elephant, a rhino and a zebra, right? :p

    Joke aside, FSR already demonstrated good results too, if you count the good things as:
    Better than DLSS 1.0 was at launch, much better.
    Implemented in more games than DLSS 1.0, many more (DLSS 1.0 had zero at launch and for 2 years less than what FSR has already).
    Although slightly lower IQ than DLSS 2.0, but still good enough at 1440p and 4k to not matter the difference.
    Sharper image vs DLSS and better in motion and not introducing extra artifacts on the image from its side.
    Lower overhead than DLSS, which results in less extra input lag than DLSS over the native image. So great for latency sensitive games.
    Available for all GPUs from the past 7 (10?) years, not only on 2 expensive RTX generations, like DLSS is.
Open source and much much easier and faster and cheaper to implement.
Hey, would you look at all those FSR pluses.

Between FSR and XeSS (backed up by Intel money) I have no worries that DLSS will become less and less relevant in the next 1 to 2 years, unless nvidia goes open source too or has some new magic ace up their sleeve...
  • Krotow
    VforV said:
    You mean an elephant, a rhino and a zebra, right? :p

    Of course :D

    VforV said:
    ... I have no worries that DLSS, will become less and less relevant in the next 1 to 2 years ...

Artificial upscaling like DLSS from Nvidia will only become more and more popular for performance reasons in complex scenes at large resolution. Especially when AAA titles will enjoy it for real and later - when 8K screens in consumer console market for rich and dumb will go mainstream.

    Particularly I have no fixation on GPU brand. I want DLSS 2.0 like upscaling, ray tracing and possible RTX IO compatible asset loading boost together with quiet cooling and working drivers regardless who from these three will release that first. I'll continue to wait while price + VAT (I live in Europe) for card like RTX 3080 or equal AMD/Intel alternative will slide down below 900€. No reason to overspend IMHO.

    One thing though which kinda deter me from AMD is video encoding quality in hardware. I occasionally need that feature. 5700 XT had bad artifacts in hardware encoded videos.
  • -Fran-
    Krotow said:
Artificial upscaling like DLSS from Nvidia will only become more and more popular for performance reasons in complex scenes at large resolution. Especially when AAA titles will enjoy it for real and later - when 8K screens in consumer console market for rich and dumb will go mainstream.

    Particularly I have no fixation on GPU brand. I want DLSS 2.0 like upscaling, ray tracing and possible RTX IO compatible asset loading boost together with quiet cooling and working drivers regardless who from these three will release that first. I'll continue to wait while price + VAT (I live in Europe) for card like RTX 3080 or equal AMD/Intel alternative will slide down below 900€. No reason to overspend IMHO.

    One thing though which kinda deter me from AMD is video encoding quality in hardware. I occasionally need that feature. 5700 XT had bad artifacts in hardware encoded videos.
There is always a point of "stupidity" where you're dedicating more hardware to faking upscaling than actually producing better images. This is the "stupidity" I find in nVidia using Tensor cores in consumer instead of actually using general purpose calculations for their solution. Particularly, and this is something that I'm sure some people have asked, but no one knows for sure: what if those Tensor cores were just utilized as regular Stream processor space? Raster would work even faster than now for sure and RT maybe won't be as fast, but it would still work plenty fast, no?

    History tells me that Intel will try and push the same division as when MMX was introduced. AMD and nVidia should actually be worried, but nVidia more than AMD. If they double down on Tensor cores for consumer graphics, there will be an even bigger gap they'll have to account for. Definitely an inflection point for them, but I'm sure they'll try to shoehorn Tensor cores until they just can't. Like PhysX.

    Regards.
  • spongiemaster
    VforV said:
    Rofl, sure 83%, but out of that 83% do you know how many can use RTX and DLSS, how many actually have Turing and Ampere to use them?

    About 20%! That's how many, they are still the minority.
    The main group of people that will benefit from upsampling are gamers using ray tracing or 4k gamers. Nvidia likely controls more than 90% of that market.
  • Krotow
    Yuka said:
    There is always a point of "stupidity" where you're dedicating more hardware to faking upscaling than actually producing better images. This is the "stupidity" I find in nVidia using Tensor cores in consumer instead of actually using general purpose calculations for their solution. Particularly, and this is something that I'm sure some people has asked, but no one knows for sure: what if those Tensor cores are just utilized as regular Stream processor space? Raster would work even faster than now for sure and RT maybe won't be as fast, but it would still work plenty fast, no?

    History tells me that Intel will try and push the same division as when MMX was introduced. AMD and nVidia should actually be worried, but nVidia more than AMD. If they double down on Tensor cores for consumer graphics, there will be an even bigger gap they'll have to account for. Definitely an inflection point for them, but I'm sure they'll try to shoehorn Tensor cores until they just can't. Like PhysX.

    Intel also use specific matrix multiplicator cores in upcoming Arc GPU architecture. Only Intel's cores are called XeSS :)

    Intel is also using dedicated Xe-cores in its upcoming GPUs to power its XeSS technology, with dedicated Xe Matrix eXtensions (XMX) matrix engines inside to offer hardware-accelerated AI processing.
    Source: https://www.theverge.com/2021/8/19/22631061/intel-arc-gpu-alchemist-xe-ss-super-sampling-ai-architecture-day-preview

    Seems battle in gaming area is now going around performance in 4K resolution and subsequent screen streaming support. With my 1440p monitor which I expect to use for at least next 5 years, I'm not complaining.