Intel will retire rarely-used 16x MSAA support on Xe3 GPUs — AI upscalers like XeSS, FSR, and DLSS provide better, more efficient results

Intel Arc Battlemage B580 and B570
(Image credit: Intel)

Intel has begun phasing out 16x MSAA support in its upcoming Xe3 graphics architecture. As engineer Kenneth Graunke explained in a recent Mesa driver commit, “16x MSAA isn’t supported at all on certain Xe3 variants, and on its way out on the rest. Most vendors choose not to support it, and many apps offer more modern multisampling and upscaling techniques these days. Only 2/4/8x are supported going forward.” The change has already landed in the Mesa 25.3-devel branch and is being backported to the earlier 25.1 and 25.2 releases.
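In practice, titles and engines that hard-code a 16x sample count will simply need to fall back to the highest level the driver reports. As a rough illustration (plain C with OpenGL, assuming an extension loader such as libepoxy is already set up; the function names here are made up), an application might clamp its request like this:

#include <epoxy/gl.h>
#include <stdio.h>

/* Clamp a requested MSAA sample count to what the driver reports.
 * On Xe3 with the updated Mesa behavior, GL_MAX_SAMPLES would come
 * back as 8, so a hard-coded request for 16x quietly drops to 8x. */
static GLint choose_msaa_samples(GLint requested)
{
    GLint max_samples = 0;
    glGetIntegerv(GL_MAX_SAMPLES, &max_samples);

    if (requested > max_samples) {
        fprintf(stderr, "MSAA %dx unsupported, falling back to %dx\n",
                requested, max_samples);
        return max_samples;
    }
    return requested;
}

/* Allocate the multisampled render target with the clamped count. */
static void create_msaa_renderbuffer(GLuint rbo, GLint width, GLint height)
{
    GLint samples = choose_msaa_samples(16); /* the app asked for 16x */
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples,
                                     GL_RGBA8, width, height);
}

Vulkan applications make the equivalent check against the sample-count bitmasks in VkPhysicalDeviceLimits before creating their attachments.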

Intel phasing out 16x MSAA support on Xe3

(Image credit: Future)
Hassam Nasir
Contributing Writer

Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.

  • coolitic
    I don't know why we care about Redditor opinions for news articles, but the lack of MSAA in most games has little to do w/ the performance cost (it's not that much more expensive than the more complicated temporal techniques), but rather mainly to do w/ its incompatibility w/ deferred shading techniques, which have dominated the games industry since like 2010.

    The "incompatible w/ transparency" isn't strictly true either (but there are certainly complications), and it's actually quite amusing/ironic considering that deferred-shading itself pairs quite poorly w/ transparency, in a way that's (imo) significantly worse in terms of performance and/or appearance, versus a simple absence of MSAA ("absent" meaning strictly in the transparent portions).

    As an aside: the reason deferred-shading became dominant at that time is that it allowed for negligible per-light cost for dynamic lighting (at least, for shadow-less lights), whereas traditional forward-shading scales quite poorly w/ dynamic light-count (4-ish being the conventional peak).

    However, "clustered-forward" shading catches up to deferred-shading w/ regards to dynamic-light cost, and doesn't suffer from many of the same limitations/difficulties that deferred does (ie. transparency, MSAA). The two main reasons (that I see) for clustered-forward not being widely-used is 1. complacency/tech-lag and 2. many screen-space techniques make use of the G-buffer that deferred provides.