AMD FSR 3 With Fluid Frames Promises 2x FPS Uplift Over FSR 2

AMD RX 7000 Series Reference Card
(Image credit: AMD)

During yesterday's RX 7000 series live stream, AMD announced version 3.0 of its FidelityFX Super Resolution (FSR). This new version promises to be a DLSS 3 competitor by combining AMD's Fluid Motion Video frame generation technology with FSR upscaling. FSR 3 will arrive sometime in 2023, with promises of up to twice the frame rate of FSR 2.

Details on FSR 3's inner workings were slim to none, with AMD only providing rough performance estimates during the announcement. However, we expect AMD to share the full details closer to the technology's release later in 2023.

But if FSR 3 follows in the footsteps of FSR 1 and 2, it could be an open-source frame generation and upscaling platform for all GPUs. FSR has never required proprietary hardware the way Nvidia's DLSS does, so there's a good chance FSR 3 will follow suit.

The biggest takeaway from the announcement was that AMD's Fluid Motion Frames technology will be used in conjunction with FSR temporal upscaling in FSR 3, confirming that FSR 3 will indeed employ some form of frame generation.

If you are not aware, AMD already has a frame generation technology known as AMD Fluid Motion Video. As the name suggests, it smooths out video playback by inserting additional interpolated frames. We can extrapolate that Fluid Motion Frames is the 3D-rendering counterpart of this technology and will bring frame generation to FSR 3.

However, one thing worth mentioning is that DLSS 3 requires both Tensor cores and an optical flow accelerator, which is only supported on RTX 40-series GPUs. So AMD might need a hardware solution for FSR 3 to work. The optical flow accelerator in DLSS 3 measures motion between two rendered frames so that DLSS can generate and inject an artificial frame between the two real ones.

As a result, AMD will need an equivalent solution for measuring motion between two frames if it wants to offer frame generation, though we don't yet know how AMD will solve this problem. It could go the same route as Nvidia and build a dedicated motion estimation unit right into the GPU, or it could go the software route with a motion estimation algorithm that runs on the GPU's shader cores.
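To make the software route concrete, here is a toy sketch in Python with NumPy — purely illustrative, not AMD's or Nvidia's actual algorithm, and all function names are made up. It estimates a per-block motion vector between two frames by brute-force block matching, then synthesizes an in-between frame by sampling halfway along each vector in both frames:

```python
import numpy as np

def estimate_motion(prev, curr, block=8, search=4):
    """Naive block matching: for each block of `curr`, find the
    best-matching block in `prev` within a small search window.
    Returns an array of per-block (dy, dx) motion vectors."""
    h, w = curr.shape
    vecs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            target = curr[y:y + block, x:x + block]
            best_err, best_vec = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = prev[yy:yy + block, xx:xx + block]
                    err = np.abs(cand - target).sum()  # sum of absolute differences
                    if err < best_err:
                        best_err, best_vec = err, (dy, dx)
            vecs[by, bx] = best_vec
    return vecs

def interpolate_midframe(prev, curr, vecs, block=8):
    """Synthesize a frame halfway between `prev` and `curr` by sampling
    each block halfway along its motion vector in both frames and blending."""
    h, w = curr.shape
    mid = np.zeros_like(curr)
    for by in range(vecs.shape[0]):
        for bx in range(vecs.shape[1]):
            y, x = by * block, bx * block
            dy, dx = vecs[by, bx]
            # Clamp the half-vector sample positions to the image bounds.
            hy1 = min(max(y + dy // 2, 0), h - block)
            hx1 = min(max(x + dx // 2, 0), w - block)
            hy2 = min(max(y - dy // 2, 0), h - block)
            hx2 = min(max(x - dx // 2, 0), w - block)
            mid[y:y + block, x:x + block] = 0.5 * (
                prev[hy1:hy1 + block, hx1:hx1 + block]
                + curr[hy2:hy2 + block, hx2:hx2 + block])
    return mid
```

A real implementation would run per-pixel on the GPU, handle occlusions, and use a far smarter search — but the principle stands: motion estimation is just computation, so it can in principle run on shader cores instead of dedicated hardware, at some performance cost.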

If AMD goes the former route, FSR 3 will potentially be limited to RDNA 3 GPUs or any other AMD GPUs with the necessary hardware inside. But if it's the latter, FSR 3 could run on any GPU with enough raw horsepower to power through its computational requirements.

Aaron Klotz
Freelance News Writer

Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.

  • cryoburner
    So AMD might need a hardware solution for FSR 3 to work.
    It seems unlikely that you would need specialized hardware for frame generation to work, though having access to it might improve performance at a given quality level. Even without dedicated hardware, as long as the frame generation calculations take significantly less time to perform than rendering a new frame, the feature could be beneficial.

    It sounds a lot like the frame-doubling used to reduce performance drops on VR headsets, like "Asynchronous Spacewarp" for Oculus, a feature that they added through a software update six years ago. That just looks at the previous frame's depth and motion vector buffers and uses that information to deform the frame to simulate a new one, shifting objects to their new, estimated positions. There can be artifacts in some scenarios though, since missing regions behind moving objects need to be filled in, and I wouldn't expect either Nvidia or AMD's solutions to look perfect either, but it could be an effective way to make low frame rates feel a lot smoother.
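In rough terms, that kind of reprojection amounts to splatting the previous frame's pixels forward along their motion vectors. A toy NumPy sketch — illustrative only, not Oculus's implementation, with a made-up function name:

```python
import numpy as np

def reproject(frame, motion):
    """Forward-warp the last rendered frame along per-pixel motion
    vectors (in pixels per frame) to fake the next frame. Holes that
    nothing lands on stay black — exactly the regions that need the
    fill-in heuristics where artifacts show up."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination of each pixel, rounded and clamped to the image bounds.
    ny = np.clip((ys + motion[..., 0]).round().astype(int), 0, h - 1)
    nx = np.clip((xs + motion[..., 1]).round().astype(int), 0, w - 1)
    out[ny, nx] = frame  # scatter; colliding writes are resolved arbitrarily
    return out
```

A real warper would also consult the depth buffer to decide which of two colliding pixels is in front; here collisions are simply resolved by write order.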

    Maybe Nvidia thinks they can sell more graphics cards by artificially restricting the feature to 40-series hardware, but I see little reason why such a feature couldn't be enabled on all cards.
  • bit_user
    cryoburner said:
    It seems unlikely that you would need specialized hardware for frame generation to work, though having access to it might improve performance at a given quality level.
    IMO, that depends a lot on how intrusive they want to make it. If you can make it at least as intrusive as TAA, then the computational burden should be relatively light.

    cryoburner said:
    Even without dedicated hardware, as long as the frame generation calculations take significantly less time to perform than rendering a new frame, the feature could be beneficial.
Yeah, but it would be even better if it hardly ate into the compute budget for rendering at all. As good as generated frames might be, actually rendering new frames should remain the preferred option. So, if it's implemented as a non-intrusive feature, then having some hardware motion extrapolation engine would be very beneficial.

    cryoburner said:
    It sounds a lot like the frame-doubling used to reduce performance drops on VR headsets, like "Asynchronous Spacewarp" for Oculus, a feature that they added through a software update six years ago. That just looks at the previous frame's depth and motion vector buffers and uses that information to deform the frame to simulate a new one, shifting objects to their new, estimated positions. There can be artifacts in some scenarios though, since missing regions behind moving objects need to be filled in, and I wouldn't expect either Nvidia or AMD's solutions to look perfect either, but it could be an effective way to make low frame rates feel a lot smoother.
    A cool thing you could do, if you make the feature sufficiently intrusive, is to have the game engine fill the more significant holes by rendering them. As I say that, I'm reminded somewhat of Microsoft's ill-fated Talisman project.

    https://en.wikipedia.org/wiki/Microsoft_Talisman