Intrepid modder builds Frame Warp demo from Nvidia Reflex 2 binaries — tech remains mysteriously shelved despite greatly reducing latency

A modder's demo of Nvidia's Reflex 2 tech with Frame Warp
(Image credit: Future)

Nvidia's Reflex 2 latency-reduction tech, introduced alongside its Blackwell architecture, was meant to bring spatial reprojection - aka "Frame Warp" - to fast-paced games in order to lessen perceived input lag. It also holds promise alongside Blackwell's Multi Frame Generation: MFG carries a substantial input-latency penalty when enabled, and Frame Warp could help mitigate it.

As a quick refresher, Reflex 2 with Frame Warp uses fresh mouse-input data collected while a frame is rendering to predict and reproject the camera position of that in-progress frame right before it's sent to the display, a process Nvidia claims heightens the sense of "connectedness" and responsiveness delivered by the PC.
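To make the idea concrete, here's a minimal sketch in Python/NumPy of what such a late warp conceptually does. It is illustrative only: it assumes a pinhole camera and pure yaw, approximates the rotation as a horizontal pixel shift, and uses made-up function names rather than anything from Nvidia's SDK.

```python
# Hypothetical, simplified illustration of late camera reprojection
# ("frame warp"): shift a finished frame by the mouse movement that
# arrived while it was rendering, just before it is sent to the display.
import numpy as np

def late_warp(frame: np.ndarray, yaw_delta_rad: float,
              fov_x_rad: float = np.radians(90.0)) -> np.ndarray:
    """Approximate a small pure-yaw camera rotation as a horizontal shift.

    frame: H x W x 3 image rendered from the now-stale camera pose.
    yaw_delta_rad: extra yaw from mouse input since rendering started.
    """
    h, w = frame.shape[:2]
    # Pixels per radian at the screen center for a pinhole camera.
    px_per_rad = (w / 2) / np.tan(fov_x_rad / 2)
    shift = int(round(yaw_delta_rad * px_per_rad))
    warped = np.roll(frame, -shift, axis=1)
    # Columns revealed at the edge have no rendered data; a real
    # implementation would inpaint these instead of blanking them.
    if shift > 0:
        warped[:, w - shift:] = 0
    elif shift < 0:
        warped[:, :-shift] = 0
    return warped

# Usage: warp a dummy 1080p frame by half a degree of late yaw.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
warped = late_warp(frame, yaw_delta_rad=np.radians(0.5))
```

The blanked edge columns are exactly the gap a production implementation must cover, either by overrendering a margin around the visible frame or by inpainting, which is what the reader comment below digs into.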



  • edzieba
"It's not clear whether these lingering artifacts are an inherent limitation of the tech that remains to be ironed out, or something that developers can or will need to tune as part of their integrations, but it's not invisible."
Like with reprojection in the VR space for the last near-decade (first rotational implementation in 2016), there are three aspects to significantly reducing reprojection artefacts:
1) Overrendering around the edges of the screen. The more additional 'off screen' pixels, the faster you can turn before you start hitting empty regions that need to be generated from whole cloth rather than reprojected. Using past-frame motion vectors to estimate future motion can help limit overrendering to only the edge where it will be needed.

2) Generating a motion-vector field and depth map and passing those to the reprojection stage. The mvec field usage is obvious, but the depth map aids in identifying disoccluded edges that will need inpainting rather than smearing pixels from neighbouring regions.

3) The inpainting algorithm itself. This is where Nvidia has pre-made solutions available in their existing frame-gen system. (A rough sketch of points 2 and 3 follows below.)
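To make points 2) and 3) concrete, here is a minimal Python/NumPy sketch under heavily simplified assumptions: a small sideways camera translation only (no rotation), a crude left-to-right smear standing in for a real inpainter, and entirely hypothetical function names. It shows the core mechanics the comment describes: depth-aware forward reprojection with a z-test, disocclusion detection, and hole filling.

```python
# Hypothetical, heavily simplified sketch of depth-aware reprojection:
# forward-warp a frame for a small sideways camera move, flag disoccluded
# holes, and fill them with a crude smear in place of a real inpainter.
import numpy as np

def reproject_with_depth(frame, depth, cam_dx, fov_x_rad=np.radians(90.0)):
    """frame: H x W x 3 color; depth: H x W view-space depth (larger = farther).

    Parallax rule: pixel shift = fx * cam_dx / depth, so near pixels move
    farther than distant ones, which is what opens up disoccluded edges.
    """
    h, w = frame.shape[:2]
    fx = (w / 2) / np.tan(fov_x_rad / 2)
    shift = fx * cam_dx / depth                    # per-pixel shift
    out = np.zeros_like(frame)
    zbuf = np.full((h, w), np.inf)                 # z-test buffer
    for y in range(h):
        for x in range(w):
            t = int(np.clip(x - shift[y, x], 0, w - 1))
            # Nearer surfaces win when several pixels land on one target.
            if depth[y, x] < zbuf[y, t]:
                zbuf[y, t] = depth[y, x]
                out[y, t] = frame[y, x]
    holes = np.isinf(zbuf)                         # nothing landed here
    # Point 3, crudely: smear the last valid pixel into each hole.
    for y in range(h):
        last = None
        for x in range(w):
            if not holes[y, x]:
                last = out[y, x].copy()
            elif last is not None:
                out[y, x] = last
    return out, holes

# Usage: a two-plane scene (near box over a far background), camera
# moved 5 cm to the right; holes open along the box's trailing edge.
frame = np.full((120, 160, 3), 40, dtype=np.uint8)
depth = np.full((120, 160), 10.0)
frame[40:80, 60:100] = 200                         # near box, depth 1 m
depth[40:80, 60:100] = 1.0
warped, holes = reproject_with_depth(frame, depth, cam_dx=0.05)
```

In the usage example the near box shifts several pixels while the distant background barely moves, so a strip of the background the box used to cover has no source pixels at all; those are the disoccluded holes that a production system would hand to a proper inpainting pass.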