AMD's FSR 2.0 Delivers Again in God of War

God of War PC Version screenshots
(Image credit: Sony Interactive Entertainment)

God of War has been updated with FidelityFX Super Resolution 2.0, making it the third game so far to use AMD's new temporal upscaling technology, after Deathloop and Farming Simulator 22. That should give even the best gaming GPUs a frame rate boost while retaining better image quality than what we saw with FSR 1.0 in the game's original release.

To quickly recap, FSR 2.0 is the latest iteration of AMD's FidelityFX Super Resolution upscaling technology and a significant departure from version 1.0. The biggest addition in 2.0 is temporal upscaling, which draws on image data from multiple previous frames rather than just the current one, along with motion vectors and the z-buffer.
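
FSR 2.0's source code isn't covered in this article, but the core idea of temporal accumulation can be sketched in a few lines. The C++ below is only a conceptual illustration under our own assumptions (the blend factor, depth tolerance, and function names are invented for the example); AMD's actual FidelityFX code adds jittered sampling, resampling filters, and far more sophisticated history rectification.

```cpp
// Conceptual sketch of temporal accumulation, the core idea behind
// FSR 2.0-style upscalers. Not AMD's algorithm; names and constants
// here are illustrative assumptions.
#include <cmath>

struct Color { float r, g, b; };

// Blend the current low-resolution sample with the history color that the
// motion vector reprojected to this pixel. The z-buffer (depth) is used to
// reject stale history after a disocclusion, which avoids ghosting.
Color resolveTemporal(Color current, Color history,
                      float depthNow, float depthPrev,
                      float blendFactor = 0.1f,      // how much new data per frame (assumed)
                      float depthTolerance = 0.01f)  // history rejection threshold (assumed)
{
    // Depth mismatch: the surface seen last frame is gone, so fall back
    // to the current sample instead of smearing old data.
    if (std::fabs(depthNow - depthPrev) > depthTolerance)
        return current;

    // Otherwise accumulate exponentially: detail builds up over many frames,
    // which is how a low internal resolution converges toward native quality.
    return { history.r + blendFactor * (current.r - history.r),
             history.g + blendFactor * (current.g - history.g),
             history.b + blendFactor * (current.b - history.b) };
}
```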

This algorithm change provided FSR 2.0 with a massive jump in image quality over FSR 1.0, based on the results we've seen so far. The image quality improvement is enough to put FSR in direct competition with Nvidia's more mature Deep Learning Super Sampling (DLSS) counterpart, which of course requires an RTX GPU.

As far as we can tell, the only requirement for FSR 2.0 is a DirectX 11/12 compatible GPU. AMD recommends at least a Radeon RX 590 or GeForce GTX 1070 for upscaling to 1080p, though a lot depends on the settings you select. We were able to run Deathloop on Intel Gen11 and Gen12 integrated graphics, sort of, and still saw a modest increase in frame rates.

FSR 2.0 Image Quality Comparisons

We conducted our own testing with FSR 2.0 in God of War, comparing image quality against native rendering as well as DLSS 2.3. We found the practical differences to be nearly imperceptible, though if you look closely there are some areas where DLSS may still hold a very slight advantage (e.g., on the foliage). If you're just playing the game rather than hunting for minor differences, you'll appreciate the boost in performance and can simply move on.

Putting both algorithms under a microscope (you'll need to view the full-size 4K images on a PC in the above gallery for this), there's a bit more blur on the pine needles with FSR 2.0 than with DLSS, but that's about the only difference we could find. That's a big improvement over FSR 1.0, where artifacts were far more visible.

Switching to Performance mode, image quality does change a little. Blowing up our God of War screenshots, we found DLSS retains slightly better image clarity than FSR 2.0's Performance mode. DLSS can look over-sharpened, however, so it's not an absolute victory.

Run around in the game, however, and it's very difficult to spot the difference between the two upscaling solutions. That's a different situation from the screenshots above, which were taken with the camera stationary; a static camera is basically the best-case scenario for upscaling. Even in our limited testing, though, both solutions looked good in motion.

God of War FSR 2.0 Performance

Besides image quality, we also did some limited testing of performance. We used a GeForce RTX 3080 card, running at 4K with ultra settings. At native rendering, it averaged 71 fps in our test sequence, so the game was already running quite well. Quality mode in DLSS bumped that up 25% to 88 fps, while FSR 2.0 ran 20% faster at 85 fps. Performance mode also favored DLSS slightly, with 101 fps (42% faster) compared to 98 fps (39% faster) for FSR 2.0.
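
For context on what each mode asks the GPU to render, the short sketch below computes the internal resolution at a 4K output, assuming the per-axis scale factors AMD documents for FSR 2.0 (1.5x Quality, 1.7x Balanced, 2.0x Performance, 3.0x Ultra Performance); the exact resolutions a given game reports may differ slightly due to rounding.

```cpp
// Internal render resolution per FSR 2.0 quality mode at a 4K output.
// Scale factors are applied per axis, so Performance mode renders only
// a quarter of the output pixels.
#include <cstdio>

int main() {
    const int outW = 3840, outH = 2160; // 4K output, as in our testing
    struct Mode { const char* name; float scale; };
    const Mode modes[] = {
        {"Quality",           1.5f},
        {"Balanced",          1.7f},
        {"Performance",       2.0f},
        {"Ultra Performance", 3.0f},
    };
    for (const Mode& m : modes) {
        printf("%-17s -> %4d x %4d internal\n",
               m.name, int(outW / m.scale), int(outH / m.scale));
    }
    return 0;
}
```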

Other GPUs may show slightly different percentage gains, and we're likely still running into a bit of a CPU bottleneck, as we tested with a Core i9-9900K (Jarred is away from home, so that's all he had access to). Still, who wouldn't want a 20–25 percent boost in performance, particularly when it comes with almost no loss in image quality?

Overall, we're still quite pleased with FSR 2.0's results. AMD has proven that you don't need a complex deep learning algorithm to get good upscaling quality, and that you can achieve comparable results without specialized hardware. DLSS may still hold a tiny advantage, and if you have an RTX card and the game gives you the choice, there's no reason not to enable it. But for everyone who doesn't have an RTX GPU, FSR 2.0 is a Godsend of War.

Aaron Klotz
Contributing Writer

Aaron Klotz is a contributing writer for Tom’s Hardware, covering news related to computer hardware such as CPUs and graphics cards.

  • KananX
    “Godsend of War” well said.
  • hotaru.hino
    It might be time for NVIDIA to consider leaving out tensor cores on future consumer GPUs then. There doesn't seem to be any real consumer need for them outside of DLSS.

    Although I'm sure there are a lot of ML hobbyists who'd put up torches and pitchforks if this happened.
  • TechyInAZ
    hotaru.hino said:
    It might be time for NVIDIA to consider leaving out tensor cores on future consumer GPUs then. There doesn't seem to be any real consumer need for them outside of DLSS.

    Although I'm sure there are a lot of ML hobbyists who'd put up torches and pitchforks if this happened.

    That's quite true, especially with core counts growing exponentially it seems. The more cores you have, the greater the performance enhancements FSR 2.0 should bring, theoretically.

    But for now, DLSS I believe still has a performance advantage over FSR 2.0 on some RTX GPUs and in some games.
  • KananX
    hotaru.hino said:
    It might be time for NVIDIA to consider leaving out tensor cores on future consumer GPUs then. There doesn't seem to be any real consumer need for them outside of DLSS.

    Although I'm sure there are a lot of ML hobbyists who'd put up torches and pitchforks if this happened.
    I would agree if not for the creator features that also use tensor cores, and then RT denoising uses tensor cores as well, so they are quite needed, not just for dlss.
  • hotaru.hino
    KananX said:
    I would agree if not for the creator features that also use tensor cores, and then RT denoising uses tensor cores as well, so they are quite needed, not just for dlss.
    Denoising is a subset of the operation of "filling in the blanks." If we have a system to do this such that tensor cores do not provide a significant performance benefit, then there's less of a need for such.

    AMD provided performance metrics with ray tracing enabled, so FSR 2.0 is clearly helping in denoising.
  • KananX
    hotaru.hino said:
    Denoising is a subset of the operation of "filling in the blanks." If we have a system to do this such that tensor cores do not provide a significant performance benefit, then there's less of a need for such.

    AMD provided performance metrics with ray tracing enabled, so FSR 2.0 is clearly helping in denoising.
    Denoising of RT for Radeon cards is done via shaders, it has nothing to do with FSR or any other software.
  • hotaru.hino
    KananX said:
    Denoising of RT for Radeon cards is done via shaders, it has nothing to do with FSR or any other software.
    Well either way at the end of the day, unless someone has done a complete frame time profile of the hardware, we'll never figure out how much of an influence the tensor cores actually provides. And as far as I can tell, it's not clear there's still much of an advantage.

    The only thing I can find is from NVIDIA themselves, but they labeled the time on tensor cores as DLSS working.
  • spongiemaster
    hotaru.hino said:
    It might be time for NVIDIA to consider leaving out tensor cores on future consumer GPUs then. There doesn't seem to be any real consumer need for them outside of DLSS.

    Although I'm sure there are a lot of ML hobbyists who'd put up torches and pitchforks if this happened.
The professional cards, formerly known as Quadros, use the same dies as the RTX gaming cards. Nvidia isn't going to add a 3rd architecture for every generation. Tensor cores aren't going anywhere as long as there is demand for them from professionals who are willing to spend a whole lot more money for a GPU than gamers.
  • KananX
    hotaru.hino said:
    Well either way at the end of the day, unless someone has done a complete frame time profile of the hardware, we'll never figure out how much of an influence the tensor cores actually provides. And as far as I can tell, it's not clear there's still much of an advantage.

    The only thing I can find is from NVIDIA themselves, but they labeled the time on tensor cores as DLSS working.
    As long as DLSS needs tensor cores and it’s useful for work, it will be used in the gaming arch, unless they change DLSS fundamentally and introduce a gaming architecture without Tensor cores at the same time, which is unlikely since it’s not really needed.
  • hotaru251
    FSR 2.0 still suffers 1 issue: to get "native" like quality w/ performance boost you need high end gpu & running 4k.

    FSR 2.0 is w/e at 1080p (usually not worth it as quality gets hit hard)

    1440p is game dependent usually.

    4k is about only resolution that actually benefits w/o a quality loss.

    as it needs a boatload of pixels to work...which 1080p lacks (hence why it's so bad in comparison)


    DLSS doesn't really have that issue; you can run weak hardware at low res and usually get better performance without losing much quality, even at 1080p.


    also tensor cores and stuff could be utilized by other parts of PC in future. (virus scans are example)