Why Does 4K Gaming Require so Much VRAM?
There's more going on than just a resolution increase.
When you're looking to buy one of the best graphics cards, performance often ranks as the most important aspect (along with pricing for many of us, naturally) — check our GPU benchmarks hierarchy if you want to see how the various cards stack up. But lately, we're seeing more games that seem to need lots of VRAM, possibly 16GB or more. Star Wars Jedi: Survivor, The Last of Us Part 1, Warhammer 40K: Darktide, Hogwarts Legacy, Final Fantasy 7 Remake, Elden Ring... the list is getting quite large for games that can exceed 8GB or even 12GB of VRAM use, depending on your chosen settings and resolution. What exactly is going on, though, and how much VRAM do you really need?
Superficially, you might think it's just a case of higher resolutions naturally requiring more VRAM. 3840x2160 (4K) is four times as many pixels as 1920x1080 (1080p), and 2.25X as many pixels as 2560x1440 (1440p). But while that can dramatically increase the number of calculations your GPU needs to perform, on its own it doesn't actually make a game use that much more VRAM.
Games have lots of buffers these days. There are framebuffers, depth buffers, geometry buffers, buffers for shadow maps and lighting, deferred rendering buffers, and potentially buffers for upscaling techniques like DLSS and FSR2. There can also be additional storage requirements for ray tracing, like the bounding volume hierarchy (BVH) structure. Anyway, not to get too far into the weeds, but there are many things that need memory.
Those memory requirements scale with resolution. For the framebuffer as an example, going to a resolution that's four times higher will generally mean using four times as much memory. Doing that for each buffer seems like it might be a big deal. But when you do the math, it's actually not that bad.
Using a 1080p resolution means 8,294,400 bytes are required for each buffer (1920 * 1080 pixels, with four bytes per pixel). 4K quadruples that to 33,177,600 bytes, while the in-between 1440p requires 14,745,600 bytes. Suppose there are ten such buffers used in a game. The difference between 4K and 1080p would still only end up being about 237 MiB — and there may be real-time compression / decompression techniques present in modern GPUs that reduce the storage requirements for certain buffers.
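The arithmetic above can be sketched in a few lines of Python. This is purely illustrative: it assumes every buffer uses a simple four-bytes-per-pixel format (e.g. RGBA8), while real engines mix depth, HDR, and other formats.

```python
BYTES_PER_PIXEL = 4   # e.g. RGBA8
NUM_BUFFERS = 10      # hypothetical count of full-resolution buffers

def buffer_bytes(width, height):
    """Memory for one full-resolution buffer at the given display size."""
    return width * height * BYTES_PER_PIXEL

res_1080p = buffer_bytes(1920, 1080)  # 8,294,400 bytes
res_1440p = buffer_bytes(2560, 1440)  # 14,745,600 bytes
res_4k    = buffer_bytes(3840, 2160)  # 33,177,600 bytes

# Difference between 4K and 1080p across all ten buffers, in MiB:
delta_mib = (res_4k - res_1080p) * NUM_BUFFERS / (1024 ** 2)
print(f"{delta_mib:.0f} MiB")  # ~237 MiB
```

Even with generous assumptions about buffer counts, the resolution-driven buffer growth stays in the hundreds of megabytes, not gigabytes.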
Most modern graphics cards have at least 8GB of VRAM, so an increase of around 0.2GB in memory requirements shouldn't matter much. But if you've ever gone from running a game mostly fine at 1440p to seeing your graphics card choke at 4K, you know something is causing problems. It may not be the larger buffers causing issues at 4K, but there's clearly something going on.
(Side note: Nvidia let us know that its current architectures do not compress the framebuffers, "based on cost/benefit tradeoffs." However, we're not certain whether AMD GPUs compress the framebuffer. Other buffers may or may not use some form of lossless compression, depending on the particular buffer and application.)
The actual culprit ends up being all the textures, which includes shadow maps, environment maps, and any other texture-related items that get stored in VRAM. You're probably saying, "Duh! Everyone already knows that." But it's not just a case of there being higher quality textures. There's more going on than you might have considered.
Take a game like Far Cry 6 as an example. This is one of the games in our current test suite where we know it can experience VRAM issues when you have the HD texture pack loaded and try to run it at 4K on a graphics card with 8GB of VRAM. Far Cry 6 will run just fine at 1440p with ultra settings (but without ray tracing) on the RTX 3070, averaging just over 100 fps. Bump to 4K however and performance can tank. Sometimes it will get just under 60 fps, but in testing we'll also get instances where it only runs at 10–20 fps. It's a bit weird in that it fluctuates so much between runs, but that's a different story.
Total War: Warhammer 3 is another game that can show a big drop in performance when going from 1440p to 4K on cards with 8GB VRAM. The RTX 3070 averages 65 fps at 1440p ultra and only 28 fps at 4K ultra, a 57% decrease in performance. AMD's RX 6650 XT drops from 36 fps at 1440p to just 14 fps at 4K, a 61% drop. Meanwhile, an RTX 3060 with 12GB goes from 44 fps to 23 fps — still a significant hit to performance, but now it's 'only' 48% slower. That has all the earmarks of running out of VRAM, but what exactly is happening behind the scenes?
The main culprit is textures, and it ties into something called MIP mapping. Mipmaps have been used in computer graphics for decades (they were invented by Lance Williams in 1983, according to Wikipedia). The idea is to pre-calculate lower resolution versions of a texture, typically using a high quality resize operation like bicubic filtering. That might be too expensive to run in real-time, or at least it was back in the earlier days of computer graphics, so game developers would pre-compute the mipmaps.
The benefit of doing so is that you can improve the image quality, reducing moiré patterns, aliasing, and other artifacts. But a potentially bigger benefit is that mipmaps can also reduce the amount of memory required to store all the textures — a GPU only needs to keep the highest accessed resolution of a texture in VRAM (along with all the lower resolution mipmaps as well, but those combined are less than half the size of the primary mipmap). That last bit is the key to understanding the difficulty with 4K compared to 1440p, but it's not very clear, so let's unpack that a bit.
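On that parenthetical, the cost of the lower mips is easy to quantify: each level is a quarter the size of the one above it, so the whole chain forms a geometric series. A quick sketch (assuming an uncompressed square RGBA8 texture; real assets are block-compressed, but the ratio is the same):

```python
def mip_chain_bytes(size, bytes_per_pixel=4):
    """Total bytes for a square texture plus all its lower resolution
    mipmaps, halving each dimension down to 1x1."""
    total = 0
    while size >= 1:
        total += size * size * bytes_per_pixel
        size //= 2
    return total

base_2k = 2048 * 2048 * 4          # base level alone: 16 MiB
chain_2k = mip_chain_bytes(2048)   # base level plus every lower mip

# The lower mips sum to 1/4 + 1/16 + ... of the base level, so the full
# chain is only about 4/3 the size of the base level.
print(chain_2k / base_2k)  # ~1.33
```

In other words, all the lower mips together add only about a third on top of the highest resolution level that's resident, which is why evicting just the top level saves so much.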
I should note that there are different ways of handling texture storage in VRAM. What I am describing here is one way of handling things, where for example only the 1K and lower resolution mipmaps are loaded into VRAM if 1K is the highest resolution that's been accessed. I'm also not accounting for things like virtual textures, where only part of a texture is put into VRAM.
However, it can be beneficial to keep everything in VRAM if possible. For example, this would avoid a potential stutter when an object goes from using 512x512 to 1Kx1K textures and the higher resolution texture needs to be pulled into VRAM. This is why some games have strict VRAM requirements (Red Dead Redemption 2 and Doom Eternal come to mind — though note that RDR2 even at 4K and max settings only needs a bit less than 8GB). If you don't have enough VRAM, they'll try to prevent you from even attempting to use certain settings.
How a game engine actually implements VRAM allocation can vary, in other words. DirectX 12 and Vulkan in particular give developers a lot more control over texture management.
Typically, when a game engine applies textures, it will use a texture resolution that's one step higher than the number of pixels (proportionally) that the object will cover on the display — assuming such a texture resolution is available. So let's say there's an unobstructed rectangular polygon that covers a third of a 1080p display's width and height. That would mean the polygon occupies 640x360 pixels, so at most the game engine would use a 1024x1024 texture for it.
Move the viewport closer to the polygon, so that it exactly fills the whole display. Now it covers 1920x1080 pixels, and the game engine could select a 2048x2048 mipmap. Move closer still so that only half of the polygon is visible but it fills the whole screen. Then and only then would (typical) mipmapping opt for a 4096x4096 texture — assuming one is available.
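That selection rule can be sketched as follows. It's a simplification — real GPUs compute mip levels per pixel from texture-coordinate derivatives — but the "one step higher than the on-screen footprint" heuristic from the examples above looks roughly like this:

```python
import math

def chosen_mip_size(pixels_covered, max_texture_size=4096):
    """Pick the smallest power-of-two mip at least as large as the
    surface's on-screen footprint, capped at the largest mip the asset
    actually ships with."""
    size = 1 << max(0, math.ceil(math.log2(max(1, pixels_covered))))
    return min(size, max_texture_size)

print(chosen_mip_size(640))    # 1024 -> a third of a 1080p screen's width
print(chosen_mip_size(1080))   # 2048 -> fills a 1080p screen's height
print(chosen_mip_size(2160))   # 4096 -> fills a 4K screen's height
```

Run the same footprints through at 4K and the selected mips jump a full level, which is exactly where the extra VRAM pressure comes from.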
What that means is that games that support 2K textures (2048x2048 resolution) are effectively "maxed out" on texture quality if you're only gaming at 1080p. And with the more complex geometry and environments used in modern games, most polygons and textures are likely getting nowhere near needing even 2K textures — 1K would suffice. Even 1440p will often only need to use 2K (or lower resolution) textures. That's because, even if 4K textures (4096x4096) are technically available, most polygons will cover far less than 2048 pixels in width or height, the exception being if you're very close to the surface so that it passes that threshold.
You can probably see where this is going. Bump up the display resolution to 4K, and suddenly the game engine and GPU will see a need to store and use higher resolution textures in VRAM far more often. The image quality probably won't even increase that much, but the VRAM requirements can basically triple. Hello fractionally higher image quality, goodbye performance — at least if the game exceeds the capacity of your VRAM.
The above gallery shows Redfall at epic settings, except we've turned down the texture quality to high, medium, and low in the subsequent images. (You'll want to view the full-size images on a 4K display if you're trying to pixel peep.) The differences, in this case, are very limited, with only the carpet on the right really showing a loss in fidelity at the medium preset, while the low preset shows a reduction in detail on some of the other objects. While we don't know for certain, we'd guess that the epic setting allows for 2K textures, high uses 1K textures, then 512 for medium, and 256 for low. Maybe we're off by a factor of two in each case, but either way, you can see that it's not a dramatic change.
There's plenty of debate that could be had over whether 2K and 4K textures are even necessary. Most GPUs can do a 2X upscale of a texture without drastically reducing the image quality, and things like DLSS, FSR2, and XeSS can potentially regain some of the quality through accumulation of data over multiple frames. Certainly, using 8K textures wouldn't help image quality on 4K and lower resolution displays — and some people would say that a simple sharpening filter looks better than extreme resolution textures.
It's not just about texture resolution, either. Textures are typically stored in a compressed format: BTC (Block Truncation Coding), S3TC, DirectX's BCn variants, or some other similar format. These formats all have one major thing in common: they're high speed and allow for random access, with compression ratios typically falling in the 4X to 8X range. Algorithms like JPEG can achieve much better compression ratios, but they can't be accessed randomly.
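To make the fixed-ratio point concrete, here's the math for BC1 (a DXT/S3TC-style format) as one example: every 4x4 pixel block is stored in 8 bytes, so any block can be located and fetched directly — the random access that JPEG-style formats can't offer.

```python
def bc1_bytes(width, height):
    """BC1 storage: each 4x4 pixel block compresses to 8 bytes.
    Assumes dimensions divisible by 4, as block formats require."""
    blocks = (width // 4) * (height // 4)
    return blocks * 8

raw = 2048 * 2048 * 4          # uncompressed RGBA8: 16 MiB
bc1 = bc1_bytes(2048, 2048)    # BC1: 2 MiB
print(raw // bc1)              # 8 -> a fixed 8:1 ratio vs. RGBA8
```

Other BCn variants trade ratio for quality (BC7 uses 16 bytes per block, for 4:1), but all of them are fixed-rate, which is what keeps texture fetches cheap and addressable.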
This is one of the reasons why Nvidia's Neural Texture Compression (NTC) sounds so promising. Figuring out ways to store higher-quality versions of textures in the same amount of memory, or use similar quality textures with potentially one tenth as much memory, would be very beneficial. But textures need to be accessible in a random fashion, in real-time, and that adds complexity to the task. AI algorithms may be better able to cope with this than traditional approaches.
But NTC isn't here yet, at least not for shipping games, and for practical use it may end up requiring new hardware and architectural changes. That means it might also require an Nvidia GPU, so unless Nvidia can get it into a future DirectX (and Vulkan) specification, it could be a while before it gains traction. We'll have to wait and see, in other words, even if it sounds really promising. (Check out the article and look at some of the images, and then imagine if games could cut texture memory use by 90%. It's possible but perhaps not fully practical for most graphics cards just yet.)
Back to the subject. Because of the jump in texture storage requirements — and yes, larger buffers, more geometry, etc. — brought on by moving to 4K rendering, upscaling technologies like DLSS, FSR2, and XeSS also become a lot more useful as performance boosters. We see this regularly in testing, where the benefit of 2X upscaling (i.e. Quality mode) at 1080p may only be a moderate 10–20 percent increase in performance, while at 4K you could see a 50% improvement. With upscaling, even at 4K, the game engine would have the memory requirements of a lower resolution. (Yes, CPU bottlenecks can also be a factor, but even on modest cards like an RTX 3050, DLSS scaling tends to be far better at higher resolutions.)
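A quick sketch of what Quality-mode upscaling does to the rendering workload. This assumes a 1.5x per-axis scale factor (roughly the common Quality preset, and about 2X by pixel count); exact factors vary by upscaler and game.

```python
def internal_resolution(out_w, out_h, per_axis_scale=1.5):
    """Internal render resolution for a given output resolution and
    per-axis upscaling factor (assumed, not from any specific SDK)."""
    return round(out_w / per_axis_scale), round(out_h / per_axis_scale)

w, h = internal_resolution(3840, 2160)
print(w, h)                     # 2560 1440 -> 4K output, 1440p workload
print((3840 * 2160) / (w * h))  # 2.25x fewer pixels rendered per frame
```

Since mip selection and most buffers follow the internal resolution, a 4K output with Quality upscaling largely inherits 1440p-class VRAM behavior.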
Whether or not you want to game at 4K, whether or not you want to enable the highest quality settings, and whether you want to enable some form of upscaling: These are all ultimately choices you can make. There's a compromise either way: higher image quality on the one hand, higher performance on the other. Regardless, it's useful to understand exactly why 4K represents such a big jump in performance requirements for your GPU and VRAM.
Even while AMD has been pushing the benefits of more VRAM, that's often been limited to its highest-end GPUs. With the RX 6000-series, you needed an RX 6800 or higher to get 16GB. The RX 6700/6750 XT dropped that to 12GB, while the mainstream RX 6600-series cards are back to 8GB and the RX 6500/6400 have just 4GB. Not surprisingly, the RX 6700-series parts were primarily marketed as being good for 1440p, while the RX 6600-series GPUs were targeted at 1080p gaming. Nvidia hasn't talked about VRAM as much, but higher VRAM capacities typically only come on higher priced parts — the RTX 3060 being an exception to that rule.
With upcoming graphics cards, we're certainly going to hear more noise about VRAM capacities. All indications are that Nvidia's rumored RTX 4060 Ti will come with 8GB of VRAM (and possibly a 16GB variant as well), and the same goes for AMD's alleged RX 7600. Except, Nvidia will probably charge quite a bit more money for its 4060 Ti card, either model, and it will likely be more of an RX 7700-series competitor (whenever such cards arrive). But neither the 4060 Ti 8GB nor the RX 7600 are likely to focus on 4K gaming, and it's very likely they'll still prove adequate in most games for 1080p and potentially 1440p at maxed-out settings.
And if not, turning down texture and shadow quality settings a notch should keep them viable. Go look at those Redfall images above again, comparing the epic versus high texture quality. Redfall and many other games simply don't show much of a visual benefit to using the highest quality textures possible. We might like the idea of having 16GB or more VRAM on all future graphics cards, but realistically there's still a place for mainstream parts with only 8GB.
Editor's note: I reached out to both AMD and Nvidia for comment, sending along an earlier version of the text. Representatives from both companies confirmed that the above explanation of how VRAM works is largely correct, though as always individual games and engines may do things in a different manner.
Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.
-Fran- Good article, Jarred. It would be nice to dig a bit deeper on the different ways graphical engines allocate VRAM and support it by contacting some Devs to give more context. AMD and nVidia will just give generic marketing answers and I don't really trust them, unless it's their engineers giving those answers.
In any case, it is definitely important to tell everyone this "VRAM" debacle is nuanced. And that nuance is super important to get a better idea where it matters.
That being said, and as a rule of thumb, you will always want more VRAM, just like you want more RAM. At the same price point and similar-ish performance level, do yourself a favour and always pick the GPU with more VRAM. Like it or not, it will just help the card last longer, meaning it'll save you money (I guess?) down the line.
Regards.
DavidLejdar I suppose the size of the screen has some relevance to image quality though, doesn't it? Like, at 4K, medium texture quality won't show as much of an image quality loss on a 27'' display as it would on a 45+'' display. Of course, with a larger screen, one is supposed to sit further away from it, which then may not make an image quality loss as visible.
But strictly speaking, if one (such as myself) looks specifically i.e. at city builder games and transport tycoon games and similar, and for sake of such games would go at least 32'' 4K, would a bit more VRAM make a visible impact (assuming the game has high-end support of course) ? Or would it in case of a city builder not likely matter as much due to there being a lot of small objects when zoomed out? Or would it even zoomed out still have quite a demand on the VRAM?
Not that I would be in a rush to upgrade. Currently RX 6700 XT with 12 GB VRAM (and AM5 CPU) works nice for running pretty much everything on max at 1440p. Just wondering about the context of VRAM on some other types of games, such as a city builder where one has a bustling metropolis, and wants to keep it at 60+ FPS when scrolling around and when zooming in and out. -
btmedic04 There definitely is a place for 8GB equipped video cards: the sub-$300 market. 8GB VRAM buffers have been around for about 10 years now. It's not unreasonable to think that it needs to be lower on the pricing tier; after all, nobody in their right mind would pay $300+ for an 8GB kit of DDR4 or DDR5 today.
JarredWaltonGPU
DavidLejdar said: ...would a bit more VRAM make a visible impact (assuming the game has high-end support of course)? Or would it in case of a city builder not likely matter as much due to there being a lot of small objects when zoomed out? Or would it even zoomed out still have quite a demand on the VRAM?

This all depends on the game, the textures available, and the size of objects. It's why things like the rug in those Redfall images show the most pronounced change with texture quality: It's a big, flat object that may occupy over 1,000 pixels in width. With city builders, a lot of the textures probably aren't even larger than 512x512, or maybe even 256x256. Like, if a building is only going to occupy a small 1–2 percent portion of the display, then even at 4K resolution the actual object size would be maybe 384x216 (i.e. one tenth of the display resolution).
JarredWaltonGPU
-Fran- said: Good article, Jarred. It would be nice to dig a bit deeper on the different ways graphical engines allocate VRAM and support it by contacting some Devs to give more context. AMD and nVidia will just give generic marketing answers and I don't really trust them, unless it's their engineers giving those answers.

I did talk with some technical people, and that's why there's a sidebar effectively saying, "Almost all of this is ultimately up to the game developers, so some engines and games will do things one way (e.g. Unreal Engine does a lot of texture swapping, AFAICT), while others like to precache everything into memory and avoid stutters (Doom Eternal, and to a lesser extent Red Dead Redemption 2)."
One of the other aspects is that games could do tiling in DirectX 12. Then they would only need to load parts of a texture into memory. But tiling also requires effectively knowing which parts are needed, and predicting that can be difficult / impossible, which means it can cause stuttering as other bits need to be loaded until eventually the whole texture is in VRAM.
Another option, which I'm pretty sure various games have used, is that they determine the highest resolution textures to load into VRAM based on your resolution. Let's say a game has up to 2K textures available (which is pretty common these days, I think — 4K textures are rarely used, for the reasons I mentioned in the article). If you're running at 1080p or lower, there's very little benefit to be had from using 2K textures, so the game might just cap the texture size at 1K. And in fact, it might do that at 1440p as well. But at 4K, then it would opt to load the 2K textures (where available — I didn't mention this but obviously a game doesn't need to use the same size textures everywhere). -
PlaneInTheSky PCs are falling behind consoles, and it's all because Nvidia and AMD refuse to give low-end and midrange cards enough VRAM.
-Last gen PS4-pro and Xbox-One-X had 8GB VRAM.
PC kept up. Low-end GTX1060 and RX580 had 6-8GB VRAM to match.
-Today, PS5 and Xbox-Series-X have 16GB VRAM.
PCs are not keeping up but have fallen behind, with 8GB of VRAM that is insufficient to store all the texture assets.
People talk about "badly optimized ports". But there's little you can "optimize" to deal with a lack of VRAM. Developers are not going to redo all their textures for PC. -
russell_john
PlaneInTheSky said: PCs are falling behind consoles, and it's all because Nvidia and AMD refuse to give low-end and midrange cards enough VRAM. ... Today, PS5 and Xbox-Series-X have 16GB VRAM.

The PS5 and Series X DO NOT have 16 GB of VRAM; they have 16 GB of TOTAL RAM, for both the GPU and CPU to use. If they use 10 GB for VRAM, then the CPU only has 6 GB left to work with, which is why most quality settings are limited to 30 FPS: the CPU is starved for memory. Plus, in Sony's case, they ramp up the GPU frequency but have to lower the CPU frequency to keep package TDP under spec, starving the CPU even more and creating a CPU bottleneck that limits the FPS.
Also very few games on the PS5 and Series X are true native 4K they are upscaled from a lower resolution -
PlaneInTheSky
russell_john said: The PS5 and Series X DO NOT have 16 GB of VRAM, they have 16 GB of TOTAL RAM, that's for both the GPU and CPU to use.

No, they have 16GB VRAM.
The PS5 has 16GB of blazing fast GDDR6 VRAM, combined with a custom texture decompressor. It's a beast of a console.
Slow PC system memory is completely useless for the GPU. That's why adding faster DDR5 makes diddly-squat difference for gaming. GDDR6 VRAM is far faster than DDR5 system memory.

Textures and shaders need to be in VRAM to be usable by the GPU.
Directstorage is literally designed to bypass the PC system memory bottleneck.
https://i.postimg.cc/LHQMg3r9/Saitre.jpg
You can build a PC with 128GB DDR5 system memory, and your game will still stutter and run like garbage with a GPU with only 8GB VRAM.
The system memory on PC in no way compensates for the lack of VRAM compared to consoles.
To compete with a console with 16GB VRAM, PC need GPU with 16GB VRAM. That was the case last gen with 8GB, and it is the case today.
PC GPU should have 16GB VRAM, there's no excuse anymore. PC GPU with 8GB VRAM can not handle today's console ports. -
atomicWAR
russell_john said: The PS5 and Series X DO NOT have 16 GB of VRAM, they have 16 GB of TOTAL RAM, that's for both the GPU and CPU to use. ... Also very few games on the PS5 and Series X are true native 4K, they are upscaled from a lower resolution.

Yeah, comparing consoles to PCs is frustrating due to the inherent differences in both platforms, but generally speaking you've covered it. I honestly wish this gen had just focused more on 90-120Hz. When you have a cross-gen game, you're likely to get a good performance mode going over 60fps, but not so much for current-gen-only titles. 60fps should absolutely be the floor for frame rates, as too often 30fps just ends up looking like trash in 3D games.
I love Zelda: Tears of the Kingdom, but the only time its 30fps looks OK is when I am playing it in handheld mode, where the smaller screen 'smooths' things out a little. That is, when the frame time delivery is consistent, as there are a few spots where it gets wonky (technical term). Point being, this rush to higher resolutions isn't always the best answer to better looking gameplay. Nintendo was right to do something different, BUT for docked mode I would love 60fps to have been standard. Yes, I do realize the age and lack of power of the chip; it has aged impressively, considering. Nintendo needs to move on to new hardware, but that's another conversation.
PlaneInTheSky said: System memory is completely useless for the GPU, textures and shaders needs to be in VRAM to be usable. The system memory on PC in no way compensates for the lack of VRAM compared to consoles. To compete with a console with 16GB VRAM, PC need GPU with 16GB VRAM. That was the case last gen, and it is the case today.

You're a bit off base here. russell_john pretty much nailed the rough outline of how the consoles use RAM. They still need to run the OS and other base functions. In the case of the PS5, it uses around 2-2.5GB of that 16GB for the OS, leaving 13.5-14GB free for games, according to various devs (XBSX 13.5GB usable, XBSS 8GB usable). PCs don't run the OS on the video card, so the VRAM isn't limited in the same fashion as a console where memory is shared.
https://www.engadget.com/xbox-series-s-game-developer-memory-increase-184932629.html
And sorry, system RAM does actually help compensate for a lack of VRAM, though admittedly it's not ideal, and games need to have been coded correctly to use it. This is where bad ports come in. So at the end of the day, no, a PC does not need 16GB of VRAM to compete with current-gen consoles. 8GB is basically entry level at this point, like the Series S, but even those newer cards (RTX 3060 Ti+/4060+) should outperform it with overall better settings, resolution, and frame rates... and current-gen 12GB cards should easily beat any of the big boy consoles on the market (bad ports excluded).
Saying PC is behind consoles at the moment just isn't true. There are differences, and some of them make ports less than ideal, but the general basis of your argument is wrong. Consoles don't have 16GB of VRAM available for games, and system RAM can help in VRAM-limited scenarios in many games (not all).
PlaneInTheSky
atomicWAR said: You're a bit off base here. russell_john pretty much nailed the rough outline of how the consoles use RAM. ... In the case of the PS5, it uses around 2-2.5GB of that 16GB for the OS, leaving 13.5-14GB free for games, according to various devs.

Playstation does not disclose the size of the OS at runtime, so it's pointless for you to make guesses.
The OS itself likely takes up barely any space.
PS5 uses a custom version of FreeBSD completely developed in C. The footprint of the OS is likely extremely low, much lower than your guesses. We're talking megabytes, not gigabytes.
What takes up some space is the rendering protocols.
The size on disc is very large, indicating it has many custom libraries that it can dynamically load at runtime to minimize the footprint.