Nvidia GeForce RTX 4060 Ti 16GB Review: Does More VRAM Help?

Double the memory, still the same underlying chip.

Gigabyte's Nvidia GeForce RTX 4060 Ti 16GB Gaming OC (Image credit: Tom's Hardware)


The RTX 4060 Ti 16GB costs $499, which makes Nvidia's assertion that it's mostly for 1080p gaming somewhat ludicrous. Even accounting for inflation, $499 sits firmly in the high-end category, and as such, we'd expect it to handle 1440p gaming and even 4K at reduced settings. Saying that it's for 1080p feels more like an attempt to save face, since relative performance tends to drop off at higher resolutions thanks to the 128-bit memory interface. But let's start with the 1080p results at both medium and ultra settings, taking ultra first, considering the price of the 4060 Ti 16GB.

Our new test regimen gives us a global view of performance using the geometric mean of all 15 games, spanning both the ray tracing and rasterization test suites. Then we've got separate charts for the rasterization and ray tracing suites, plus charts for the individual games. If you don't like the "overall performance" chart, the other two charts present the same views we've used in previous reviews.
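
As a quick illustration of that overall metric, here's a minimal sketch of the calculation, assuming simple per-game average fps values; the numbers below are placeholders rather than our measured results.

```python
# Sketch of the "overall performance" metric: the geometric mean of per-game fps.
# The fps values below are placeholders, not measured results from this review.
from math import prod

def geometric_mean(fps_values):
    """Geometric mean of a list of per-game average fps results."""
    return prod(fps_values) ** (1 / len(fps_values))

raster_fps = [112.0, 98.5, 87.3, 140.2, 76.9, 105.4, 93.1, 88.8, 121.6]  # 9 rasterization games
dxr_fps = [54.2, 61.7, 48.9, 70.3, 44.1, 58.6]                           # 6 ray tracing games

overall = geometric_mean(raster_fps + dxr_fps)  # all 15 games
print(f"Overall geomean: {overall:.1f} fps")
```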

Our test suite is intentionally heavier on ray tracing games than what you might normally encounter. That's largely because ray tracing games tend to be the most demanding options, so if a new card can handle ray tracing reasonably well, it should do just fine with less demanding games. Ray tracing also feels increasingly like something we can expect to run well when optimized properly, and Nvidia's hardware proves that's possible. Mainstream and certainly high-end graphics cards need to be capable of running demanding settings, including ray tracing, at reasonable frame rates.

Nvidia GeForce RTX 4060 Ti 16GB performance charts


At 1080p ultra, ostensibly the target resolution for this card (it's the "most popular Steam gaming resolution," if such things hold any weight with you), there's little to no benefit to the extra VRAM in our test suite. That's not too surprising, as most games don't need a ton of memory at 1080p (see: Why does 4K require so much VRAM?). It doesn't matter whether we look at the rasterization or ray tracing results, either: the 4060 Ti 8GB and 4060 Ti 16GB generally land within the margin of error of each other.

That's not the only takeaway, however. While the 4060 Ti 16GB card isn't clearly better (or worse) than the 8GB model, look at the other results. The RTX 3070, originally a $499 card, delivers effectively the same level of performance. That's over 2.5 years later with no increase in performance (unless you include DLSS 3 Frame Generation, which is a different can of worms). The RX 6800, which now starts at around $450, also keeps pace even with a lot of DXR testing included, and the RX 6800 XT, which only costs a bit more at $520, delivers superior performance.

There are workloads besides 1080p gaming, some of which can make use of the extra VRAM. Those will mostly be special cases, however, rather than the rule. Gaming in general needs more than just a 16GB pool of VRAM: if VRAM capacity is a factor, then memory bandwidth and compute are as well, and the 4060 Ti is somewhat lacking in both of those areas.
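
To put rough numbers on the bandwidth side of that argument, here's a minimal sketch of the arithmetic. Peak GDDR6 bandwidth is simply the per-pin data rate times the bus width divided by eight; the 18 Gbps, 128-bit configuration matches the 4060 Ti, while the 192-bit case is purely illustrative of what a wider bus would buy.

```python
# Rough bandwidth arithmetic (a sketch, not data from the review's testing).
def gddr6_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) * bus width (bits) / 8."""
    return data_rate_gbps * bus_width_bits / 8

print(gddr6_bandwidth_gb_s(18, 128))  # 288.0 GB/s -- the RTX 4060 Ti's configuration
print(gddr6_bandwidth_gb_s(18, 192))  # 432.0 GB/s -- a hypothetical 192-bit variant
```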

Looking just at the nine rasterization games, the overall Nvidia versus Nvidia story doesn't change: The 4060 Ti 16GB card basically matches the 8GB variant, as well as the previous generation RTX 3070. Meanwhile, the Nvidia versus AMD story skews in favor of the red team.

The RX 6800 delivers 11% better rasterization performance overall, while the RX 6800 XT increases that to 26%. That's a pretty huge difference, and yes, the AMD cards do use quite a bit more power to get there. How important is efficiency compared to performance, though? That's more of a personal decision, and plenty of gamers would accept worse efficiency for more performance.

As for the individual game results, the 8GB model often beats the 16GB card by up to a few percent. That's probably due to variance between the card models, boost clocks, power limits, and increased VRAM — the same L2 cache size with double the VRAM may end up with slightly lowered hit rates. But it's all basically margin of error.

With AMD GPUs, there's a much wider level of variance among the games. Total War: Warhammer 3 ends up as the best result for the 4060 Ti 16GB: it's only 11% slower than the 6800 XT. We could also include Flight Simulator, but that's still hitting CPU limits and so the 3% difference isn't particularly meaningful. Elsewhere, AMD's GPU is up to 55% faster in Borderlands 3, which is a long-standing trend for that AMD-promoted game.

Naturally, the RTX 4060 Ti 16GB improves its standing in the ray tracing games, at least relative to the AMD GPUs. But the 8GB card and RTX 3070 still deliver equivalent performance, while the 3070 Ti comes out slightly ahead. Against AMD, the 4060 Ti delivers 5% better performance overall than the 6800 XT and 22% better performance than the vanilla RX 6800.

Turning to the individual game charts, there's a wide range of relative performance when looking at AMD and Nvidia GPUs. The 4060 Ti 16GB only beats the RX 6800 by 3.5% in Metro Exodus Enhanced, 7.7% in Spider-Man: Miles Morales, and 6.9% in Control Ultimate Edition. But then you get stuff like the Bright Memory Infinite Benchmark, where Nvidia is 24% faster, Cyberpunk 2077, where it's 37% faster, and Minecraft, where the gap is a whopping 65%.

We should note that we have retested all of the GPUs in Minecraft for this review. Long story short, we found that vsync was still not fully disengaged even when forced off in the various driver options. (Intel Arc doesn't have a "vsync off" option at all, with Smooth Sync being the closest alternative.) But if you manually edit the options.txt file (located in %LocalAppData%\Packages\Microsoft.MinecraftUWP_8wekyb3d8bbwe\LocalState\games\com.mojang\minecraftpe) and change gfx_vsync from 1 to 0, you can force it off.
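
If you'd rather script that tweak than edit the file by hand, here's a minimal sketch. The folder path and the gfx_vsync key come straight from the paragraph above; the assumption is that options.txt stores its settings as simple key:value lines.

```python
# Sketch: force vsync off in Minecraft by rewriting options.txt.
# Assumes the file uses one "key:value" pair per line (an assumption, not verified here).
import os

options_path = os.path.expandvars(
    r"%LocalAppData%\Packages\Microsoft.MinecraftUWP_8wekyb3d8bbwe"
    r"\LocalState\games\com.mojang\minecraftpe\options.txt"
)

with open(options_path, "r", encoding="utf-8") as f:
    lines = f.readlines()

with open(options_path, "w", encoding="utf-8") as f:
    for line in lines:
        if line.startswith("gfx_vsync:"):
            f.write("gfx_vsync:0\n")  # 0 forces vsync off; 1 leaves it on
        else:
            f.write(line)
```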

We found that Minecraft performance improved by about 5–10 percent for AMD GPUs, around 20 percent for Nvidia GPUs, and 50–100 percent for Intel Arc GPUs. So Intel benefited the most, but this also affected the other cards. If you don't play Minecraft and don't care about its full ray tracing graphics, leaving it out of the equation narrows the overall DXR gap quite a bit.

Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • newtechldtech
    This card is good if you want to edit movies and need more VRAM.
  • JarredWaltonGPU
    I just want to clarify something here: The score is a result of both pricing as well as performance and features, plus I took a look (again) at our About Page and the scores breakdown. This is most definitely a "Meh" product right now. Some of my previous reviews may have been half a point (star) higher than warranted. I've opted to "correct" my scale and thus this lands at the 2.5-star mark.

    I do feel our descriptions of some of the other scores are way too close together. My previous reviews were based more on my past experience and an internal ranking that's perhaps not what the TH text would suggest. Here's how I'd break it down:

    5 = Practically perfect
    4.5 = Excellent
    4 = Good
    3.5 = Okay, possibly a bad price
    3 = Okay but with serious caveats (pricing, performance, and/or other factors)
    2.5 = Meh, niche use cases
    ...

    The bottom four categories are still basically fine as described. Pretty much the TH text has everything from 3-star to 5-star as a "recommended" and that doesn't really jive with me. 🤷‍♂️

    This would have been great as a 3060 Ti replacement if it had 12GB and a 192-bit bus with a $399 price point. Then the 4060 Ti 8GB could have been a 3060 replacement with 8GB and a 128-bit bus at the $329 price point. And RTX 4060 would have been a 3050 replacement at $249.

    Fundamentally, this is a clearly worse value and specs proposition than the RTX 4060 Ti 8GB and the RTX 4070. It's way too close to the former and not close enough to the latter to warrant the $499 price tag.

    All of the RTX 40-series cards have generally been a case of "good in theory, priced too high." Everything from the 4080 down to the 4060 so far got a score of 3.5 stars from me. There's definitely wiggle room, and the text is more important than just that one final score. In retrospect, I still waffle on how the various parts actually rank.

    Here's an alternate ranking, based on retrospect and the other parts that have come out:

    4090: 4.5 star. It's an excellent halo part that gives you basically everything. Expensive, yes, but not really any worse than the previous gen 3090 / 3090 Ti and it's actually justifiable.

    4080: 3-star. It's fine on performance, but the generational price increase was just too much. 3080 Ti should have been a $999 (at most) part, and this should be $999 or less.

    4070 Ti: 3-star. Basically the same story as the 4080. It's fine performance, priced way too high generationally.

    4070: 3.5-star. Still higher price than I'd like, but the overall performance story is much better.

    4060 Ti 16GB: 2.5-star. Clearly a problem child, and there's a reason it wasn't sampled by Nvidia or its partners. (The review would have been done a week ago but I had a scheduled vacation.) This is now on the "Jarred's adjusted ranking."

    4060 Ti 8GB: 3-star. Okay, still a higher price than we'd like and the 128-bit interface is an issue.

    4060: 3.5-star. This isn't an amazing GPU, but it's cheaper than the 3060 launch price and so mostly makes up for the 128-bit interface, 8GB VRAM, and 24MB L2. Generally better as an overall pick than many of the other 40-series GPUs.

    AMD's RX 7000-series parts are a similar story. I think at the current prices, the 7900 XTX works as a $949 part and warrants the 4-star score. 7900 XT has dropped to $759 and also warrants the 4-star score, maybe. The 7600 at $259 is still a 3.5-star part. So, like I said, there's wiggle room. I don't think any of the charts or text are fundamentally out of line, and a half-star adjustment is basically easily justifiable on almost any review I've done.
  • Lord_Moonub
    Jarred, thanks for this review. I do wonder if there is more silver lining on this card we might be missing though. Could it act as a good budget 4K card? What happens if users dial back settings slightly at 4K (eg no Ray tracing, no bleeding edge ultra features ) and then make the most of DLSS 3 and the extra 16GB VRAM? I wonder if users might get something visually close to top line experience at a much lower price.
  • JarredWaltonGPU
    Lord_Moonub said:
    Jarred, thanks for this review. I do wonder if there is more silver lining on this card we might be missing though. Could it act as a good budget 4K card? What happens if users dial back settings slightly at 4K (eg no Ray tracing, no bleeding edge ultra features ) and then make the most of DLSS 3 and the extra 16GB VRAM? I wonder if users might get something visually close to top line experience at a much lower price.
    If you do those things, the 4060 Ti 8GB will be just as fast. Basically, dialing back settings to make this run better means dialing back settings so that more than 8GB isn't needed.
  • Elusive Ruse
    Damn, @JarredWaltonGPU went hard! Appreciate the review and the clarification of your scoring system.
  • InvalidError
    More memory doesn't do you much good without the bandwidth to put it to use. The 4060(Ti) needed 192 bits to strike the practically perfect balance between capacity and bandwidth. It would have brought the 4060(Ti) launches from steaming garbage to at least being a consistent upgrade over the 3060(Ti).
  • Greg7579
    Jarred, I'm building with the 4090 but love reading your GPU reviews, even the ones that are far below what I would build with because I learn something every time.
    I am not a gamer but a GFX Medium Format photographer and have multiple TB of high-res 200MB raw files that I work extensively with in LightRoom and Photoshop. I build every 4 years and update as I go. I build the absolute top-end of the PC arena, which is way overkill, but I do it anyway.
    As you know, Lightroom has many new amazing AI masking and noise reduction features that are like magic, but so many users (photographers) are now grinding to a halt on their old rigs and laptops. Photographers tend to be behind the gamers on PC / laptop power. It is common knowledge on the photo and Adobe forums that these new AI capabilities eat VRAM like Skittles and extensively use the GPU for the grind. (Adobe LR & PS were always behind on using the GPU with the CPU for editing and export tasks but are now going at it with gusto.) When I run an AI DeNoise on a big GFX 200MB file, my old rig with the 3080 (I'm building again soon with the 4090) takes about 12 seconds to grind out the AI DeNoise task. Other rigs photographers use take several minutes or just crash. The Adobe and LightRoom forums are full of howling and gnashing of teeth about this. I tell them to start upgrading, but here is my question.... I can't wait to see what the 4090 will do with these photography-related workflow tasks in LR.
    Can you comment on this and tell me if indeed this new Lightroom AI masking and DeNoise (which is a miracle for photographers) is so VRAM intensive that doubling the VRAM on a card like this would really help a lot? Isn't it true that Nvidia made some decisions 3 years ago that resulted in not having enough (now far cheaper) VRAM in the 40 series? It should be double or triple what it is, right? Anything you can teach me about increased GPU power and VRAM in Adobe LR for us photographers?
  • Brian D Smith
    Good card for the OEM masses. No one else need worry about it.
  • hotaru251
    The 4060 Ti should have been closer to a 4070.

    The gap between them is huge and the cost is way too high. (Doubly so since it requires DLSS 3 support to not get crippled by the limited bus.)
  • atomicWAR
    JarredWaltonGPU said:
    Some of my previous reviews may have been half a point (star) higher than warranted. I've opted to "correct" my scale and thus this lands at the 2.5-star mark.

    Thank you for listening, Jarred. I was one of those claiming on multiple recent GPU reviews that your scores were about a half star off, though I wasn't alone in that sentiment. I was quick to defend you from trolls, though, as you clearly were not shilling for Nvidia either. This post proves my faith in you was well placed. Thank you for being a straight arrow!