XeSS SDK 2.1 release opens up Intel's framegen tech to compatible AMD and Nvidia GPUs — Xe Low Latency also goes cross-platform if framegen is enabled

Intel XeSS and how it works (Image credit: Intel)

Intel has officially released the XeSS 2.1.0 SDK, a major update to its Xe Super Sampling (XeSS) framework that brings broader GPU support and unlocks frame generation for AMD and Nvidia cards. The update extends Xe Frame Generation to any GPU with Shader Model 6.4 or higher, including GeForce GTX 10-series and newer and AMD’s Radeon RX 5000 series onward. However, Intel recommends a GeForce RTX 30-series or Radeon RX 6000-series GPU or newer for the best experience.
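
Because the new baseline is expressed as a Shader Model version rather than a fixed GPU list, an engine can confirm it at runtime with Direct3D 12's standard feature query. The sketch below shows that check using only the stock D3D12 API; it is not part of the XeSS SDK and is only meant to illustrate what the Shader Model 6.4 requirement maps to in code.

```cpp
// Minimal sketch: query whether the active D3D12 device reports Shader Model 6.4,
// the baseline XeSS 2.1 lists for its fallback path on non-Intel GPUs.
// Standard D3D12 API only; nothing here comes from the XeSS SDK.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

bool SupportsShaderModel6_4(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_SHADER_MODEL sm = { D3D_SHADER_MODEL_6_4 };
    // CheckFeatureSupport clamps HighestShaderModel to what the runtime/driver
    // actually supports, and fails outright on runtimes too old to know SM 6.4,
    // which we also treat as "not supported" here.
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_SHADER_MODEL, &sm, sizeof(sm))))
        return false;
    return sm.HighestShaderModel >= D3D_SHADER_MODEL_6_4;
}

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device))))
        return 1;
    std::printf("Shader Model 6.4: %s\n",
                SupportsShaderModel6_4(device.Get()) ? "supported" : "not supported");
    return 0;
}
```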

Previously, only Intel Arc users had access to the full XeSS 2 feature set, which includes super resolution (XeSS-SR), frame generation (XeSS-FG), and low-latency mode (XeLL). With version 2.1.0, all three features can now be implemented on non-Intel GPUs. However, on non-Intel cards XeLL only kicks in when frame generation is active, so users and developers will need to enable both features for the latency benefits to apply.

Although this opens the door to wider adoption, support for the XeSS 2.1 SDK isn't automatic. Even if a game already supports XeSS 2, developers will still need to integrate the new SDK version to enable compatibility for Nvidia and AMD GPUs. For engines that don’t yet support XeSS at all, developers must link the updated SDK and adjust internal configuration to target other vendors.
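
For reference, the basic integration flow for the super resolution component hasn't changed with 2.1: the game creates an XeSS context on its D3D12 device, initializes it with the output resolution and a quality preset, and records an execute call each frame with its color, motion vector, and output resources. The sketch below is a minimal outline of that flow; the function, struct, and enum names follow Intel's published XeSS SDK headers (xess.h / xess_d3d12.h) but should be checked against the 2.1.0 release, and any 2.1-specific configuration for targeting non-Intel GPUs or enabling XeSS-FG and XeLL is deliberately left out.

```cpp
// Minimal sketch of the XeSS-SR D3D12 integration flow. Names are taken from
// Intel's published XeSS SDK headers and should be verified against the 2.1.0
// release; frame generation, XeLL, and any vendor-targeting flags are omitted.
#include <d3d12.h>
#include "xess_d3d12.h"  // from the XeSS SDK's include directory

bool InitXess(ID3D12Device* device, uint32_t outW, uint32_t outH,
              xess_context_handle_t* outCtx, xess_2d_t* outRenderRes)
{
    if (xessD3D12CreateContext(device, outCtx) != XESS_RESULT_SUCCESS)
        return false;

    // Ask the runtime what render resolution to feed it for the chosen preset.
    xess_2d_t outputRes = { outW, outH };
    if (xessGetInputResolution(*outCtx, &outputRes, XESS_QUALITY_SETTING_QUALITY,
                               outRenderRes) != XESS_RESULT_SUCCESS)
        return false;

    xess_d3d12_init_params_t initParams = {};
    initParams.outputResolution = outputRes;
    initParams.qualitySetting   = XESS_QUALITY_SETTING_QUALITY;
    initParams.initFlags        = XESS_INIT_FLAG_NONE;
    return xessD3D12Init(*outCtx, &initParams) == XESS_RESULT_SUCCESS;
}

void UpscaleFrame(xess_context_handle_t ctx, ID3D12GraphicsCommandList* cmdList,
                  ID3D12Resource* color, ID3D12Resource* motionVectors,
                  ID3D12Resource* output, xess_2d_t renderRes,
                  float jitterX, float jitterY)
{
    xess_d3d12_execute_params_t execParams = {};
    execParams.pColorTexture    = color;          // jittered, render-resolution color
    execParams.pVelocityTexture = motionVectors;  // per-pixel motion vectors
    execParams.pOutputTexture   = output;         // full output-resolution target
    execParams.jitterOffsetX    = jitterX;
    execParams.jitterOffsetY    = jitterY;
    execParams.inputWidth       = renderRes.x;
    execParams.inputHeight      = renderRes.y;
    xessD3D12Execute(ctx, cmdList, &execParams);
    // On shutdown, the context is released with xessDestroyContext(ctx).
}
```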

Under the hood, XeSS 2.1 uses DP4a instructions to run its convolutional neural networks on non-Intel GPUs—offering a fallback path where Intel's XMX cores aren't available. On Intel Arc Alchemist and Battlemage GPUs, XMX still delivers better efficiency and performance for frame generation due to dedicated matrix acceleration. In contrast, Nvidia and AMD cards use a compute shader-based version that could deliver slightly lower image quality at a slightly higher cost in resources.
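
DP4a here refers to a packed 8-bit dot-product-with-accumulate operation, which Shader Model 6.4 exposes to HLSL as the dot4add_i8packed / dot4add_u8packed intrinsics; it lets the network's int8 convolutions run on general-purpose shader ALUs rather than dedicated matrix engines. As a rough illustration of the arithmetic a single DP4a performs, here is a CPU-side C++ equivalent; it is not XeSS code, just the primitive the fallback path leans on.

```cpp
// CPU-side illustration of the arithmetic one DP4a instruction performs:
// a dot product of two packed int8x4 vectors, accumulated into a 32-bit integer.
// Shader Model 6.4 exposes the same operation to HLSL as dot4add_i8packed.
#include <cstdint>
#include <cstdio>

int32_t dp4a_i8(uint32_t a_packed, uint32_t b_packed, int32_t acc)
{
    for (int lane = 0; lane < 4; ++lane) {
        // Extract one signed 8-bit lane from each packed operand.
        int8_t a = static_cast<int8_t>((a_packed >> (lane * 8)) & 0xFF);
        int8_t b = static_cast<int8_t>((b_packed >> (lane * 8)) & 0xFF);
        acc += static_cast<int32_t>(a) * static_cast<int32_t>(b);
    }
    return acc;
}

int main()
{
    // Pack the vectors (1, -2, 3, 4) and (5, 6, -7, 8) into 32-bit words, low lane first.
    uint32_t a = (uint32_t)(uint8_t)1 | ((uint32_t)(uint8_t)-2 << 8) |
                 ((uint32_t)(uint8_t)3 << 16) | ((uint32_t)(uint8_t)4 << 24);
    uint32_t b = (uint32_t)(uint8_t)5 | ((uint32_t)(uint8_t)6 << 8) |
                 ((uint32_t)(uint8_t)-7 << 16) | ((uint32_t)(uint8_t)8 << 24);
    // 1*5 + (-2)*6 + 3*(-7) + 4*8 = 4
    std::printf("dp4a result: %d\n", dp4a_i8(a, b, 0));
    return 0;
}
```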

Despite the SDK's capability expansion, just 22 titles support XeSS 2 right now, according to Intel’s own tracking, and to our knowledge, that number hasn't recently increased. Between the developer effort required to update games to the latest SDK and the slow uptake of XeSS 2 in general, it's possible that the real-world availability of XeSS 2.1 and its cross-platform benefits will be highly limited.

Still, the fact that integrating the XeSS 2 SDK now enables frame generation and low-latency features across all GPU vendors could make it more appealing to developers weighing the time and resource cost of integration against the expected benefit to players.

AMD's cross-platform support for upscaling and framegen in its FSR 3.x suite of technologies has won wide adoption for those features, so we can only hope that developers see the same appeal in XeSS 2.1. As cutting-edge upscaling tech becomes increasingly vendor-locked (see FSR 4 and DLSS), Intel's cross-platform support for both upscaling and framegen could become an increasingly rare approach.


Hassam Nasir
Contributing Writer

Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.

  • palladin9479
    Eventually everyone will settle on a standard that works everywhere. Both FSR and XeSS are getting more open with XeSS being the more "open" of the two, which is hilarious cause Intel.
  • Bikki
    Need DLL injection for this to work with all DLSS games ASAP.
  • DS426
    palladin9479 said:
    Eventually everyone will settle on a standard that works everywhere. Both FSR and XeSS are getting more open with XeSS being the more "open" of the two, which is hilarious cause Intel.
    Upscaling, frame-gen, and low-latency all need to be tightly woven into the graphics APIs -- DirectX, Vulkan, and Metal. From there, if the vendors want to have some differentiated features based on their AI software and hardware, that's fine -- I say go for it. For example, that might be to reduce overhead on shaders, so performance aspects could vary across vendors, but quality levels would generally be the same across the board.

    And yes, funny that Intel are so open on this technology, though it's out of necessity: to encourage adoption by game devs.
  • palladin9479
    DS426 said:
    Upscaling, frame-gen, and low-latency all need to be tightly woven into the graphics APIs -- DirectX, Vulkan, and Metal. From there, if the vendors want to have some differentiated features based on their AI software and hardware, that's fine -- I say go for it. For example, that might be to reduce overhead on shaders, so performance aspects could vary across vendors, but quality levels would generally be the same across the board.

    And yes, funny that Intel are so open on this technology, though it's out of necessity: to encourage adoption by game devs.


    Pretty much every "modern" graphics feature started off as some sort of vendor-proprietary item that eventually morphed into an API standard. Modern texture compression started off as a proprietary feature of S3's MeTaL API that was present in the S3 Savage 3D cards. I even had one way back in the day. So I have no doubt that upscaling and similar techniques based on AI algorithms will eventually have a vendor-neutral standard.
  • Bikki
    palladin9479 said:
    Pretty much every "modern" graphics feature started off as some sort of vendor-proprietary item that eventually morphed into an API standard. Modern texture compression started off as a proprietary feature of S3's MeTaL API that was present in the S3 Savage 3D cards. I even had one way back in the day. So I have no doubt that upscaling and similar techniques based on AI algorithms will eventually have a vendor-neutral standard.
    Yes, this transition is fundamental. Everything starts off as vendor proprietary because it gives a head-start advantage that pays off the R&D investment. Then, once the technology saturates and everyone else has something similar, it quickly becomes a hassle for users and developers alike, which prompts vendors to consolidate their tech into a standard that everyone quickly switches over to.