VRAM-friendly neural texture compression inches closer to reality — enthusiast shows massive compression benefits with Nvidia and Intel demos
AI texture decompression promises better image quality, lower resource usage

Nvidia's Blackwell architecture touts support for a raft of AI-powered "neural rendering" features. Among the more interesting of these is neural texture compression, or NTC. As developers pursue more realistic gaming experiences, texture sizes have grown in kind, putting more pressure on limited hardware resources like VRAM. An enthusiast has now demoed the technology in action on both Nvidia and Intel test systems, showing dramatic improvements in compression ratios that could ultimately let developers do more with less VRAM, or pack more features into the same GPU memory capacity.
NTC promises to greatly reduce texture sizes on disk and in memory and to improve the image quality of rendered scenes compared to the block-based texture compression techniques widely used today. It allows developers to use a small neural network optimized for each material in a scene to decompress those textures.
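The idea can be sketched in a few lines of plain Python. Everything below is a hypothetical toy, not Nvidia's implementation — real NTC runs a trained, quantized network on the GPU's matrix engines inside the sampling shader — but it shows the shape of the technique: a compact grid of latent vectors plus a tiny per-material MLP that reconstructs texels on demand.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def dense(x, weights, biases):
    # One fully connected layer: y = W.x + b
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def ntc_sample(latents, w1, b1, w2, b2, u, v):
    """Toy NTC-style texel fetch: look up a small latent vector at (u, v),
    then run it through a tiny per-material MLP to reconstruct RGBA."""
    rows, cols = len(latents), len(latents[0])
    lat = latents[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]
    hidden = relu(dense(lat, w1, b1))
    return dense(hidden, w2, b2)  # reconstructed RGBA texel

# Hypothetical hand-picked weights; real NTC trains these per material.
latents = [[[0.1, 0.2], [0.3, 0.4]],
           [[0.5, 0.6], [0.7, 0.8]]]          # 2x2 grid of 2-dim latents
w1, b1 = [[1, 0], [0, 1], [1, 1]], [0, 0, 0]  # 3-wide hidden layer
w2, b2 = [[1, 0, 0]] * 4, [0] * 4             # every channel reads hidden[0]
print(ntc_sample(latents, w1, b1, w2, b2, 0.9, 0.9))  # [0.7, 0.7, 0.7, 0.7]
```

The compression win comes from the fact that the latent grid plus the network weights are far smaller than the original texture data, while still reconstructing texels at sample time.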
To enable neural rendering features like these, Nvidia, Microsoft, and other vendors have worked together to create a DirectX feature called Cooperative Vectors that gives developers fine-grained access to the matrix acceleration engines in modern Nvidia, Intel, and AMD GPUs. (Nvidia calls these Tensor Cores, Intel calls them XMX engines, and AMD calls them AI Accelerators).
NTC hasn't appeared in a shipping game yet, but the pieces are coming together. A new video from YouTuber Compusemble shows that an NTC-powered future could be a bright one. Compusemble walks us through two practical demonstrations of NTC: one from Intel and the other from Nvidia.

Intel's demo shows a walking T-Rex. The textures decompressed via NTC for this example are visibly crisper and sharper relative to those using the block compression method commonly employed today. The results from NTC look much closer to the native, uncompressed texture.
At least on Compusemble's system, which includes an RTX 5090, the average pass time increases from 0.045 ms to 0.111 ms at 4K, roughly a 2.5x increase. Even so, that's a tiny portion of the overall frame time.
More strikingly, NTC without Cooperative Vectors enabled requires a whopping 5.7 ms of pass time, demonstrating that Cooperative Vectors and matrix acceleration engines are essential for making this technique practical.
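For context, a quick back-of-the-envelope check of those figures (a minimal sketch using only the pass times quoted above):

```python
block_ms = 0.045     # block-compressed texture pass (Compusemble, RTX 5090, 4K)
ntc_coop_ms = 0.111  # NTC with Cooperative Vectors enabled
ntc_plain_ms = 5.7   # NTC falling back to generic shader math

print(round(ntc_coop_ms / block_ms, 2))      # 2.47 -> the "2.5x" increase
print(round(ntc_plain_ms / ntc_coop_ms, 1))  # 51.4 -> penalty without matrix engines
budget_ms = 1000 / 60                        # ~16.7 ms frame budget at 60 FPS
print(round(100 * ntc_coop_ms / budget_ms, 2))  # 0.67 -> % of a 60 FPS frame
```

In other words, with Cooperative Vectors the NTC pass consumes well under 1% of a 60 FPS frame budget; without them it would eat more than a third of it.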
Nvidia's demo shows the benefits of NTC for VRAM usage. Uncompressed, the textures for the flight helmet in this demo occupy 272 MB. Block compression reduces that to 98 MB, but NTC provides a further dramatic reduction to 11.37 MB.
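Those sizes translate into compression ratios like so (simple arithmetic on the figures quoted above):

```python
uncompressed_mb = 272.0  # flight helmet textures, uncompressed
block_mb = 98.0          # after block compression
ntc_mb = 11.37           # after neural texture compression

print(round(uncompressed_mb / block_mb, 1))  # 2.8  -> block compression ratio
print(round(uncompressed_mb / ntc_mb, 1))    # 23.9 -> NTC vs uncompressed
print(round(block_mb / ntc_mb, 1))           # 8.6  -> NTC vs block compression
```

That is, NTC delivers roughly a 24x reduction over uncompressed data in this demo, and its output is still almost 9x smaller than what block compression achieves.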
As with Intel's dinosaur demo, there's a small computational cost to enabling NTC, but that tradeoff seems well worth it in exchange for the more efficient usage of fixed resources that the technique enables.
Overall, these demos show that neural texture compression could have dramatic and exciting benefits for developers and gamers, whether that's reducing VRAM pressure for a given scene or increasing the visual complexity possible within a set amount of resources. We hope developers begin taking advantage of this technique soon.

As the Senior Analyst, Graphics at Tom's Hardware, Jeff Kampman covers everything to do with GPUs, gaming performance, and more. From integrated graphics processors to discrete graphics cards to the hyperscale installations powering our AI future, if it's got a GPU in it, Jeff is on it.
-Fran- If this is going to be a DirectX (and Vulkan, I hope?) base feature, then it's a big W for everyone.
Regards.
thesyndrome I hadn't heard of Intel's take on texture compression, but that gives me a lot of hope. Despite owning an Nvidia GPU (probably going to be my last one, tbh, based on how the company has acted over the last few years and AMD closing the gap), I thought this was a proprietary technique that relied on Nvidia hardware, based on Jensen's comments while speaking about it at GDC 2025. But to know that it's something potentially every GPU company can do seems like a huge boon for the industry.
I have lamented texture issues since the days of Unreal Engine 3, with some games opting for low-fidelity textures to avoid the pop-in caused by streaming large textures, and other games barely bothering to compress at all, which leads to ludicrous install sizes. It would be nice if we could get games under 80GB again without sacrificing quality or causing performance issues (frankly, I'd like to see them get below 50GB again, but I'm not holding my breath).
JarredWaltonGPU Both of these demos raise some interesting questions. It's noted on the Intel T-Rex demo that the texture pass time (on an RTX 5090?) increases from 0.045 ms to 0.111 ms, but we don't know how much VRAM was being used. The Nvidia demo, meanwhile, notes a texture size that goes from 272MB uncompressed down to 98MB with BTC, and further drops to 11.37MB with NTC... but then we don't get a pass time.
So what happens if a game uses even 2GB of NTC-compressed textures? That should run just fine in terms of VRAM on even 8GB cards like the 5060 Ti 8GB and 5060, and potentially AMD's 9060 XT 8GB as well. But it takes 0.111 ms for a workload that uses a paltry amount of textures — if T-Rex is anything like the Flight Helmet demo, we could be looking at less than 50MB of textures when compressed — and that's on an RTX 5090! So what happens when we shift to 2GB of textures on an RTX 5060?
We can guess. RTX 5090 offers 5.4X more AI compute than the RTX 5060. That means potentially the same T-Rex demo that was taking 0.111 ms on the 5090 might now require 0.60 ms on the 5060. And then if we were to just guesstimate that a full game is using 40 times as much texture data as these simplistic demos, we're now talking about potentially spending 24 ms just on the texturing pass.
If you can pipeline things so that the whole engine doesn't stall while waiting for texture decompression, that would still mean 40-ish FPS at best. Drop the resolution to 1080p or even 1440p and we could potentially double that performance. But again, these are just rough estimates.
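That back-of-the-envelope scaling works out as follows; aside from the measured 0.111 ms pass time, every input here is a rough guess from the comment above, not a measurement:

```python
pass_5090_ms = 0.111   # measured T-Rex NTC pass time on an RTX 5090
compute_ratio = 5.4    # rough RTX 5090 vs. RTX 5060 AI throughput gap
texture_scale = 40     # guess: full-game vs. demo texture volume

pass_5060_ms = pass_5090_ms * compute_ratio   # ~0.6 ms on an RTX 5060
game_pass_ms = pass_5060_ms * texture_scale   # ~24 ms for a full game
fps_ceiling = 1000 / game_pass_ms             # ~41.7 FPS if fully serialized
print(round(pass_5060_ms, 2), round(game_pass_ms, 1), round(fps_ceiling, 1))
```

The FPS ceiling assumes the decompression pass fully serializes with the rest of the frame; any overlap the engine achieves raises that ceiling.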
I suspect there's a good reason we haven't seen any of this tech in a shipping game yet. It will take a lot of work to create the assets, both the uncompressed and NTC variants, and games will still need to work on GPUs without NTC support. In that sense, it's the same story as ray tracing yet again. Game publishers and developers are waiting for the proverbial chicken to arrive before they start building eggs into their games.
nightbird321 If NTC is vendor neutral and quick to implement for devs, then awesome. If it requires specific hardware to run, it may be 10 years before it can be more than another of many graphics options.