AMD Details FidelityFX Super Resolution 2.0

AMD FSR 2.0 Deathloop screenshot
(Image credit: Arkane / AMD)

AMD presented its FidelityFX Super Resolution 2.0 algorithm at GDC 2022, which we'll abbreviate to FSR2 going forward. Since GDC caters to game developers, and since FSR and FSR2 are open source algorithms, AMD went into the nitty-gritty of how everything works and the choices and optimizations that were made. It also covered some of the benefits and difficulties of FSR 1.0, and how things will change with FSR2.

At a high level, FSR2 is designed to offer significant quality improvements over FSR1. It's an all-new algorithm rather than an extension of the existing FSR work, which means integrating the new API will require reworking some code. AMD also expects FSR2 to be more demanding on the hardware side, part of the compromise needed to improve image quality. It's still an open API, designed to work on GPUs from AMD's competitors, including older generation GPUs, but the performance gains may be limited on low-end hardware.

FSR 1.0 was a spatial upscaling algorithm, designed for high performance and ease of integration into games, but that approach had shortcomings. For example, FSR 1.0 required a high-quality anti-aliased source image, meaning a game without anti-aliasing had to implement it before it could use FSR. The quality of the upscaling also depended on the AA: a poor quality AA produced inferior upscaled results. FSR 1.0 likewise struggled at lower resolutions, where the lack of information in the source frame could result in shimmering, poor edge reconstruction, and other artifacts.

FSR2 switches to a temporal upscaling solution, and the input requirements are quite a bit more complex than with FSR. Instead of just a frame, FSR2 takes the scene color (i.e., the final rendered frame), scene depth buffer, and a motion vector buffer. This is similar to DLSS. From these, it produces a higher quality upscaled output. Unlike DLSS, however, no machine learning hardware is required or utilized.
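To illustrate why a temporal upscaler needs those extra inputs, here is a deliberately simplified, hypothetical 1D sketch of temporal accumulation (our own illustration, not AMD's actual FSR2 code): each output pixel blends the current frame with history reprojected through the motion vectors.

```python
def temporal_accumulate(history, current, motion, alpha=0.9):
    """Blend the current frame with history reprojected via motion vectors.

    history, current: lists of pixel values (1D for simplicity)
    motion: per-pixel offsets giving where each pixel was last frame
    alpha: weight given to the accumulated history
    """
    out = []
    for i, c in enumerate(current):
        j = i - motion[i]  # reproject: where pixel i came from last frame
        h = history[j] if 0 <= j < len(history) else c  # off-screen fallback
        out.append(alpha * h + (1 - alpha) * c)
    return out
```

Real implementations work in 2D at a higher output resolution, validate the reprojected history against the depth buffer to reject disoccluded pixels, and tune the blend weight per pixel; this sketch only shows why motion vectors and a frame history are required inputs at all.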

As far as integration goes, FSR2 now takes an aliased image and produces anti-aliased results. Games that already implement DLSS or other temporal upscaling solutions should be able to easily integrate FSR2. FSR2 also supports dynamic resolution scaling (DRS), so it can replace other dynamic upscaling solutions.
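As a rough illustration of how dynamic resolution scaling typically works (a generic sketch, not FSR2's actual heuristic), the engine measures frame time and nudges the render scale up or down to hold a target, while the upscaler fills the gap to the output resolution:

```python
def adjust_render_scale(scale, frame_ms, target_ms, step=0.05,
                        min_scale=0.5, max_scale=1.0):
    """Nudge the render-resolution scale toward a target frame time.

    If frames run over budget, drop the scale (render fewer pixels);
    if there's headroom, raise it. The values here are illustrative.
    """
    if frame_ms > target_ms * 1.05:      # over budget
        scale -= step
    elif frame_ms < target_ms * 0.95:    # headroom
        scale += step
    return max(min_scale, min(max_scale, scale))
```

Because FSR2 accepts a varying input resolution per frame, a controller like this can feed it directly, which is what lets it replace a game's existing dynamic upscaling path.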

AMD has also made some adjustments to the various upscaling modes. Previously, FSR1 had four presets: Ultra Quality, Quality, Balanced, and Performance. With FSR2, AMD has aligned its mode names with DLSS, so there will be Quality, Balanced, Performance, and an optional Ultra Performance mode. The scaling factors appear unchanged from DLSS's equivalents, as far as we can tell: Quality at 4K upscales from 2560x1440 to 3840x2160 (a 1.5X upscaling factor in each dimension), Balanced uses a 1.7X factor, Performance uses 2X, and Ultra Performance (mostly for extreme resolutions like 8K) uses 3X.
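Given those per-dimension scaling factors, the internal render resolution for any output resolution follows directly (a quick sketch of the arithmetic; the dictionary and function names are our own):

```python
# Upscaling factors per mode (per dimension), per AMD's GDC presentation.
FSR2_MODES = {
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
    "Ultra Performance": 3.0,
}

def render_resolution(out_w, out_h, mode):
    """Return the internal render resolution for a given output size."""
    factor = FSR2_MODES[mode]
    return round(out_w / factor), round(out_h / factor)
```

For 3840x2160 output that works out to 2560x1440 in Quality mode, roughly 2259x1271 in Balanced, 1920x1080 in Performance, and 1280x720 in Ultra Performance.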

FSR2 is currently in beta, and further refinements are being made. AMD will put the full slide deck and video online today or in the near future, and FSR2 is slated to leave beta by June 2022. You can see a preview of FSR2 running in Deathloop in the video below.

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • drivinfast247
So you get noticeably improved image quality at the expense of slightly less fps? Malarkey!!
  • Mandark
    ya
  • Metal Messiah.
Nice informative article. Really helps understand this new tech. Thanks, @Jarred!
  • -Fran-
    While it is implied in the article, it's not explicitly mentioned: AMD has given a guide to know which GPU families should/would work best at what resolutions. Also, as it's not ML-dependent, it can work with as many cards as FSR1.0 can: almost all of them. The asterisk is the amount of computing necessary to actually do the "temporal" stuff, which is the elephant in the room. While you can run this with, say, a 1050ti or lower, the "performance benefit" is going to be really questionable. I'd be willing to say the same with iGPUs and other lower end and older cards.

    Regardless, big kudos to AMD for not being as big a sleazy turd as the green team.

    Regards.
  • hotaru.hino
    -Fran- said:
    Regardless, big kudos to AMD for not being as big a sleazy turd as the green team.
    For what? Incorporating what they thought would be really cool technology into GPUs and finding something that would use it?

    Just because AMD decided not to throw in any tensor cores into their GPUs doesn't mean NVIDIA is a sleazeball for not making DLSS open. Besides, it's not like the methods DLSS uses are patented or whatever. If anything, the secret sauce that NVIDIA wants to keep is all the training data for the AI to work with. The method itself is pretty much wide open for anyone to use to implement their own version. I mean, it's cool that AMD was able to come up with a generic method, but at the end of the day, you have to make your product stand out from the crowd. If NVIDIA finds that something to make them stand out, they shouldn't be forced to share it with the rest of the world.

However, time will tell how long AMD keeps earning these kudos. Intel seems to have AI acceleration in mind for their GPUs. Assuming their ARC GPUs can compete in the higher end, AMD's going to be left in the dust once either side can do the same thing but better.

    EDIT: Another thing to point out is AMD continuing to make their features hardware agnostic is fine and all, but it doesn't convince me to buy their hardware. Any checkbox they add also gets added to everyone else's.
  • -Fran-
    hotaru.hino said:
    For what? Incorporating what they thought would be really cool technology into GPUs and finding something that would use it?

    Just because AMD decided not to throw in any tensor cores into their GPUs doesn't mean NVIDIA is a sleazeball for not making DLSS open. Besides, it's not like the methods DLSS uses are patented or whatever. If anything, the secret sauce that NVIDIA wants to keep is all the training data for the AI to work with. The method itself is pretty much wide open for anyone to use to implement their own version. I mean, it's cool that AMD was able to come up with a generic method, but at the end of the day, you have to make your product stand out from the crowd. If NVIDIA finds that something to make them stand out, they shouldn't be forced to share it with the rest of the world.

However, time will tell how long AMD keeps earning these kudos. Intel seems to have AI acceleration in mind for their GPUs. Assuming their ARC GPUs can compete in the higher end, AMD's going to be left in the dust once either side can do the same thing but better.

    EDIT: Another thing to point out is AMD continuing to make their features hardware agnostic is fine and all, but it doesn't convince me to buy their hardware. Any checkbox they add also gets added to everyone else's.
    LOL, no.

    AMD could have just made it proprietary/closed and exclusive to their GPUs like nVidia and call it a day. If it was reversed (FSR2.0 out first and DLSS just now), I'm sure nVidia would have still made it closed and exclusive pushing their own egocentric narrative. This doesn't mean AMD won't ever do it, but as I said, it's good they're still not in full "sleaze bag" mode.

    You can also thank nVidiots about the huge price hikes and laugh at how nVidia is literally saying it to everyone: "look, these idiots are willing to pay more, so we'll just charge more". And AMD is following suit, of course. As you say, they are still a big Corp. This is completely off-topic, but worth mentioning in the context of being a "sleaze bag".

So yeah, I do think AMD deserves kudos for making something that's useful for everyone, no matter the company they get their hardware from. Well, I'm also assuming this will be available for Intel, and XeSS will likewise be cross-hardware. Think about that.

    Regards.