Microsoft prepares DirectX to support neural rendering for AI-powered graphics — a key feature of the update will be Cooperative Vector support
DirectX will soon gain neural rendering capabilities.

Microsoft is advancing its DirectX API to support neural rendering, signaling a transformative shift in graphics rendering by incorporating AI and machine learning. This development, highlighted in a recent blog post, is designed to enhance visual quality and efficiency in gaming and other graphics-intensive applications.
Neural rendering uses machine learning models to generate or enhance visual elements such as textures and lighting, and to handle tasks like image upscaling. By offloading complex rendering tasks to AI, this approach can improve both performance and visual fidelity while reducing the computational burden on traditional rendering pipelines. Technologies like Nvidia’s DLSS and AMD’s FSR have already demonstrated the potential of AI-enhanced rendering. Microsoft’s initiative seeks to provide a standardized, open framework for such capabilities within the widely used DirectX API.
A key feature of the forthcoming DirectX update is Cooperative Vector support, which accelerates the matrix-vector operations at the heart of AI workloads such as training, fine-tuning, and inferencing. It allows AI tasks to run in different shader stages, so a small neural network can execute efficiently inside a pixel shader, for example, without monopolizing the GPU. By integrating neural graphics into DirectX applications, the feature gives developers access to AI-accelerator hardware across platforms, opening the door to more immersive experiences.
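To make "matrix-vector operations" concrete, the sketch below shows the kind of computation a neural shader would evaluate for every pixel: one small fully connected layer (a matrix-vector multiply, a bias add, and an activation). This is a plain C++ illustration of the math only, not the actual HLSL Cooperative Vector API, which Microsoft has not yet published in final form; the layer sizes, weights, and function names are hypothetical.

```cpp
// Illustrative sketch only: the core math behind one layer of a tiny neural
// network of the sort a neural shader might evaluate per pixel.
// Sizes, weights, and names are hypothetical, not part of any DirectX API.
#include <algorithm>
#include <array>
#include <cstdio>

constexpr int kInputs  = 8;  // e.g. features gathered for one pixel
constexpr int kOutputs = 4;  // e.g. outputs of one hidden layer

// One fully connected layer: out = ReLU(W * in + b).
std::array<float, kOutputs> DenseLayer(
    const std::array<float, kInputs>& in,
    const std::array<std::array<float, kInputs>, kOutputs>& W,
    const std::array<float, kOutputs>& b)
{
    std::array<float, kOutputs> out{};
    for (int row = 0; row < kOutputs; ++row) {
        float acc = b[row];
        for (int col = 0; col < kInputs; ++col)
            acc += W[row][col] * in[col];   // matrix-vector multiply-accumulate
        out[row] = std::max(acc, 0.0f);     // ReLU activation
    }
    return out;
}

int main() {
    std::array<float, kInputs> pixelFeatures{};
    pixelFeatures.fill(0.5f);               // stand-in for per-pixel shader inputs

    std::array<std::array<float, kInputs>, kOutputs> weights{};
    for (auto& row : weights) row.fill(0.1f);  // stand-in trained weights
    std::array<float, kOutputs> bias{};

    auto result = DenseLayer(pixelFeatures, weights, bias);
    for (float v : result) std::printf("%f\n", v);
    return 0;
}
```

The idea behind Cooperative Vector support, as described in Microsoft's blog post, is that work like this whole product can be handed to the GPU's matrix-math units (such as Nvidia's Tensor Cores) from within a shader stage, rather than being executed one multiply-add at a time in general-purpose shader ALUs.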
Microsoft has confirmed that Cooperative Vectors will leverage the Tensor Cores in Nvidia's new RTX 50-series GPUs to enable neural shaders, which can enhance game asset visualization, optimize geometry for improved path tracing, and support tools for creating photorealistic game characters.
Microsoft’s High-Level Shading Language (HLSL) team is said to be working closely with major GPU manufacturers, including AMD, Intel, Nvidia, and Qualcomm, to ensure these new capabilities are optimized for a wide range of hardware architectures.
By embedding neural rendering capabilities into DirectX, Microsoft could broaden the adoption of AI-driven graphics across multiple platforms. Potential applications range from enhanced real-time ray tracing to adaptive resolution scaling for high-definition displays. While proprietary AI rendering technologies have been limited to specific ecosystems, Microsoft’s open approach could democratize access, fostering greater innovation and competition.
Though the updates are still in development and lack a definitive release date, they highlight the increasing role of AI in shaping the future of graphics technology.
Kunal Khullar is a contributing writer at Tom’s Hardware. He is a long time technology journalist and reviewer specializing in PC components and peripherals, and welcomes any and every question around building a PC.
-
jlake3 I'm sorry... but what? I'm struggling to make sense of this. Sounds like they're introducing some vendor-agnostic matrix-vector instructions into DirectX... which I guess is different from what DirectML offers?
This feature allows AI tasks to run in different shader stages, enabling efficient execution of neural networks, such as in a pixel shader, without monopolizing the GPU
And I guess these new AI instructions run on standard shaders rather than dedicated AI hardware, leading to better utilization.
Microsoft has confirmed that Cooperative vectors will leverage Tensor Cores in Nvidia's new RTX 50-series GPUs to enable neural shaders
...except when they use dedicated AI hardware anyway?
Maybe this is really in-the-weeds, high-level developer stuff that's not intended for me and I'm not supposed to get it, but it sounds like they're just trying to stuff as many AI buzzwords as they can into a press release because that's the thing everyone does these days. -
hotaru251 ahh yes, let's help devs cut corners optimizing games even more e_e..
By offloading complex rendering tasks to AI, this approach improves both performance and visual fidelity while reducing the computational burden on traditional rendering pipelines. Technologies like Nvidia’s DLSS and AMD’s FSR have already demonstrated the potential of AI-enhanced rendering.
...are we ignoring the fact that it's not all good stuff? there ARE downsides to using it.... -
Scott_Tx
jlake3 said: Maybe this is really in-the-weeds, high-level developer stuff that's not intended for me and I'm not supposed to get it, but it sounds like they're just trying to stuff as many AI buzzwords as they can into a press release because that's the thing everyone does these days.
To catch a thief, send a thief. Toss it all into chatgpt and ask it to decode it for you :) -
AngelusF Why is it taking so long to write? Can't they just get AI to code it in a few minutes?
Yeah right. -
Geef
ezst036 said: They probably just want to flash advertisements inside of games at you.
Oh gawd, just imagine opening Doom and your character putting on the helmet. As the UI starts up it starts showing a small cat food commercial in the corner of the faceplate! 😺 😱 Noooo! -
Pierce2623
jlake3 said: I'm sorry... but what? I'm struggling to make sense of this. Sounds like they're introducing some vendor-agnostic matrix-vector instructions into DirectX... which I guess is different from what DirectML offers?
And I guess these new AI instructions run on standard shaders rather than dedicated AI hardware, leading to better utilization.
...except when they use dedicated AI hardware anyway?
Maybe this is really in-the-weeds, high-level developer stuff that's not intended for me and I'm not supposed to get it, but it sounds like they're just trying to stuff as many AI buzzwords as they can into a press release because that's the thing everyone does these days.
Basically it’s reducing the load on the ROPs and doing much of that computation that the ROPs normally would through matrix math instead, because it’s quicker and more power efficient. -
nitrium I think just about everything here was written by ChatGPT. The Microsoft link reads exactly like an AI wrote it complete with all the flowery hype words - which is ironic (and appropriate) in a sort of perverted way. https://devblogs.microsoft.com/directx/enabling-neural-rendering-in-directx-cooperative-vector-support-coming-soon/ -
bit_user
jlake3 said: I'm sorry... but what? I'm struggling to make sense of this. Sounds like they're introducing some vendor-agnostic matrix-vector instructions into DirectX... which I guess is different from what DirectML offers?
It's the level of integration that's different. I'm pretty sure DirectML operates in a different, high-level context, limiting your ability to utilize it from within the graphics pipeline and increasing the overhead of doing so. This new change allows simple AI models to be used directly within shader invocations.
jlake3 said: And I guess these new AI instructions run on standard shaders rather than dedicated AI hardware, leading to better utilization.
If a GPU supports the extension, then the GPU should run them on tensor cores or whatever is the best alternative it has. No matter what, it's going to run on something within the GPU, though. There's no way it's shipping packets of instructions & data off to a separate NPU, or anything like that.
jlake3 said: Maybe this is really in-the-weeds, high-level developer stuff that's not intended for me and I'm not supposed to get it, but it sounds like they're just trying to stuff as many AI buzzwords as they can into a press release because that's the thing everyone does these days.
One thing this brings to mind is Jensen's comments about DLSS4 and predictive framegen, where they're conceivably doing some AI fill of areas in the predicted frame that weren't visible in prior frames. That would require executing an AI model directly in the graphics pipeline and is something you might even do from within a shader, if you could.
Neural textures are another use case that comes to mind. However, shaders are used for more than that. For instance, tessellation shaders are used to synthesize geometry on-the-fly, and neural compression techniques should also be applicable there. Geometry shaders execute even earlier in the pipeline and can control instancing and transformations, making them an option for implementing AI-driven kinematics. -
bit_user
Pierce2623 said: Basically it’s reducing the load on the ROPs and doing much of that computation that the ROPs normally would through matrix math instead, because it’s quicker and more power efficient.
None of this aligns with my understanding of ROPs.
https://en.wikipedia.org/wiki/Render_output_unit
Yes, you could substitute some functions usually handled by a ROP using shader code, but it wouldn't be more efficient than letting a hard-wired ROP handle it (which is why hard-wired ROPs are a thing). The only reason you'd do so would be functional, not for efficiency's sake.