AMD Radeon RX 6600 XT Review: The Memory Compromise

Navi 23 only has 8GB GDDR6 on a 128-bit interface

AMD Radeon RX 6600 XT
(Image: © Tom's Hardware)


Radeon RX 6600 XT 1080p Gaming Performance

AMD pitches the Radeon RX 6600 XT primarily as a 1080p gaming solution, partly thanks to the 8GB VRAM and 32MB Infinity Cache, but mostly based on the level of performance it provides. We'll also include results at 1440p and 4K below. We test at ultra settings (or equivalent) for the primary test suite, though we also have a suite of nine games tested at medium quality that factors into our GPU benchmarks hierarchy — which we'll update with the RX 6600 XT soon enough.

As you'd expect from a borderline high-end GPU, performance at 1080p ultra is good across our test suite. Every game ran at over 60 fps, so if 1080p60 is your target, the RX 6600 XT will suffice. Of course, we're not just interested in the RX 6600 XT, but also how it stacks up to the competition. It's a bit of a mixed bag there.

Let's start with some of the important wins. AMD's newcomer beats Nvidia's RTX 3060 by 13% on average. Both cards were factory overclocked, so that's a straight-up win for AMD. There's not a single game in our list where the RTX 3060 came out ahead, though there were several effective ties — Far Cry 5, Horizon Zero Dawn, Red Dead Redemption 2, and Strange Brigade all showed less than a 3% difference in performance. On the other hand, Assassin's Creed Valhalla, Borderlands 3, and Forza Horizon 4 all gave the RX 6600 XT more than a 25% lead. The RX 6600 XT also beats the RTX 2070 Super and thus any lower-tier RTX 20-series GPU.

Another interesting point is the RX 6600 XT's performance on the AMD Ryzen 9 5900X PC (remember, our standard configuration uses the 9900K). Overall performance improved by 3.6%, with Dirt 5, Final Fantasy XIV, and Forza Horizon 4 showing the biggest gains. Most of the other games show a minor 1–3% improvement. We didn't test whether the improvement was due to the faster CPU, the PCIe Gen4 interface, or both. We also don't have current benchmarks for all of the other GPUs on the Ryzen 9 5900X, but mostly we wanted to show that AMD's high-end CPU did provide a minor improvement compared to the three-year-old (has it really been that long?) Core i9-9900K.

Finally, let's talk about generational performance comparisons. The RX 6600 XT ended up edging past the RX 5700 XT by 5% — not a huge improvement, but a win nonetheless. A few games were basically tied in performance, and the 5700 XT even came out (barely) ahead in a few cases, but at least at 1080p, Navi 23 beat Navi 10. That shows the Infinity Cache still helps a lot, even with only 32MB, because otherwise the drop in memory bandwidth would be far more noticeable.

Radeon RX 6600 XT 1440p Gaming Performance

We find 1440p to be the sweet spot for gaming, as it combines high refresh rates with the added crispness of a higher resolution. Here's where we start to feel the smaller Infinity Cache. Average fps on the RX 6600 XT dropped by 28% compared to 1080p ultra, while the increased resolution only dropped performance on the RX 6700 XT by 23%. Also, three games (Metro Exodus, Red Dead Redemption 2, and Watch Dogs Legion) fell below 60 fps, and Assassin's Creed Valhalla is right on the threshold. Gaming at 1440p is still absolutely possible, but you won't be pushing a 144Hz display in most games, and you might need to drop a few settings down a notch or two to get above 60 fps.
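For context on that 28% figure: 1440p pushes roughly 1.78x as many pixels as 1080p, so even the RX 6600 XT's drop is smaller than raw pixel count alone would predict. A quick back-of-the-envelope sketch of the arithmetic (the 28% drop is taken from our averages above; this is just illustrative math, not a benchmark):

```python
# Rough resolution-scaling check for the RX 6600 XT, 1080p ultra vs. 1440p ultra.
pixels_1080p = 1920 * 1080
pixels_1440p = 2560 * 1440

pixel_ratio = pixels_1440p / pixels_1080p  # ~1.78x more pixels to shade

# A 28% average fps drop means 1440p runs at 72% of the 1080p frame rate.
fps_ratio = 1 - 0.28

# Pixels-per-second throughput relative to 1080p (1.0 = no gain in pixel throughput).
throughput_ratio = fps_ratio * pixel_ratio
print(f"{pixel_ratio:.2f}x pixels, {throughput_ratio:.2f}x pixel throughput")
```

In other words, the card actually pushes about 28% more pixels per second at 1440p than at 1080p; it's the fps number, not the raw throughput, that suffers.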

Because 1440p pushes the bottleneck mostly to the GPU, there's also less of a difference between the 9900K and the 5900X results. AMD's Ryzen CPU still came out with the overall win (by 2.6%), but the biggest lead was only 5% this time. Similar drops in relative performance occurred with other GPU comparisons as well. The RX 6600 XT led the RTX 3060 by 8.5% overall, and Nvidia's card came out ahead in a couple of games. The older RX 5700 XT also delivered better performance in a few games, narrowing the gap to just 2.2%.

The high-end RX 6000 GPUs also increased their margin of victory, at least in part thanks to having more memory bandwidth and Infinity Cache. Where the 6700 XT was 20% faster than the 6600 XT at 1080p, it was 28% faster at 1440p. Likewise, the RX 6800 improved from 37% faster at 1080p to 52% at 1440p. And it's not just AMD GPUs: The RTX 3060 Ti went from a 12% lead at 1080p to a 19% lead at 1440p.

That last comparison could be the most important, at least if/when GPU supply improves. The RTX 3060 Ti nominally costs $399, only $20 more than the RX 6600 XT's official starting price. Both have 8GB of GDDR6 memory, but the 3060 Ti has a 256-bit bus compared to the 128-bit bus on the 6600 XT, not to mention quite a bit more theoretical compute performance (16.2 TFLOPS). Too bad you can't find an RTX 3060 Ti for anywhere close to $400 — in our most recent GPU price index update, the average price on eBay was still nearly $1,000!
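The raw bandwidth gap behind that bus-width comparison is easy to quantify. Theoretical bandwidth is simply bus width times per-pin data rate; assuming reference memory speeds (16 Gbps GDDR6 on the RX 6600 XT and 14 Gbps on the RTX 3060 Ti — factory-overclocked cards may differ), a minimal sketch:

```python
def gddr6_bandwidth_gbps(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Theoretical bandwidth in GB/s: bus width (bits) x per-pin data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

rx_6600_xt = gddr6_bandwidth_gbps(128, 16.0)   # 128-bit bus at 16 Gbps
rtx_3060_ti = gddr6_bandwidth_gbps(256, 14.0)  # 256-bit bus at 14 Gbps
print(rx_6600_xt, rtx_3060_ti)  # 256.0 GB/s vs. 448.0 GB/s
```

That's a 1.75x raw bandwidth advantage for the 3060 Ti, before factoring in whatever effective bandwidth the 6600 XT's 32MB Infinity Cache recovers — which helps explain why Nvidia's card pulls further ahead as the resolution climbs.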

Radeon RX 6600 XT 4K Gaming Performance

This is where the wheels fall off, relatively speaking. The RX 6600 XT managed pretty well up to now, but the memory and bandwidth demands of 4K ultra prove to be a bit much. The RTX 3060 now holds a slight overall lead of 1.4%, with better performance — sometimes significantly so — in eight of the 13 games we tested. And that's not even allowing for DLSS or DXR performance, which we'll get to next. The RX 5700 XT also came out ahead, leading by 7.1%, and faster GPUs increased their leads: RX 6700 XT was 37% faster, RX 6800 was 74% faster, and RTX 3060 Ti was 34% faster than the RX 6600 XT.

Looking at the individual games, only one (Forza Horizon 4) averaged more than 60 fps, and Strange Brigade came close at 59.9 fps. Dropping the settings would obviously help, but in general, we don't recommend buying an RX 6600 XT for 4K gaming — at least, not unless you want the console experience of 4K at closer to 30 fps, perhaps with FSR or some other form of upscaling to smooth out the dips. 


Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • Zarax
I know this is a lot to ask, but given the ridiculous MSRP, you might want to design a benchmark suite of discounted games (you could use isthereanydeal to see which ones have been at least 50% off at some point) that would be good to use with lower-end cards, or with cards available used at acceptable prices.

    Something like "Budget gaming: how do the cheapest cards on ebay perform?" could be a very interesting read, especially given your high standards in writing and testing.
    Reply
  • salgado18
    I like the decision to lower memory bus width to 128 bits. It lowers mining performance without affecting gaming performance, and can't be undone like Nvidia's software-based solution.
    Reply
  • ottonis
Due to production capacity constraints, AMD's main problem is that they can't produce nearly as many GPUs as they would like and are thus being far outsold by Nvidia.

It's pretty obvious that AMD had one goal in mind with Navi 23: increase production output as much as possible by shrinking die size while maintaining competitive 1080p gaming performance.
Apparently, they accomplished that task. Whether or not the MSRP will have to be adapted, we will see, but I guess not as long as the global GPU shortage lasts.
    Reply
  • InvalidError
    salgado18 said:
    I like the decision to lower memory bus width to 128 bits. It lowers mining performance without affecting gaming performance, and can't be undone like Nvidia's software-based solution.
Still, having a 128-bit bus on a $400 GPU is outrageous, especially if a VRAM bandwidth bottleneck is a major contributor to the 6600 (XT)'s collapse at higher resolutions and in DXR.

With only 8GB of VRAM, the GPU can only work on one ETH DAG at a time anyway, so narrowing the bus to 128 bits shouldn't hurt too much. A good chunk of the reason why 12GB GPUs have a significant hash rate advantage is that they can work on two DAGs at a time while 16GB ones can do three, and extra memory channels help with that concurrency.
    Reply
  • -Fran-
    salgado18 said:
    I like the decision to lower memory bus width to 128 bits. It lowers mining performance without affecting gaming performance, and can't be undone like Nvidia's software-based solution.
Sorry, but you're not entirely correct there. It does affect performance. This is a very "at this moment in time" type of thing where you don't see it being a severe bottleneck, but crank the resolution up to 1440p and it falls behind the 5700 XT almost consistently; that's not a positive look for the future of this card, even at 1080p. There's also the PCIe 3.0 x8 link, which will remove about 5% of performance. HUB already tested it, and the biggest drop was DOOM Eternal with a whopping 20% drop in performance. That's massive and shameful.

I have no idea why AMD made this card this way, but they're definitely trying to anger a lot of people with it... Me included. This card cannot be over $300, and that's the hill I will die on.

    Regards.
    Reply
  • ezst036
    The 6600 XT looks like a good Linux gaming card for Steam.
    Reply
  • InvalidError
    Yuka said:
I have no idea why AMD made this card this way, but they're definitely trying to anger a lot of people with it... Me included. This card cannot be over $300, and that's the hill I will die on.
    Were it not for the GPU market going nuts over the last four years, increases in raw material costs and logistics costs, this would have been a $200-250 part.
    Reply
  • -Fran-
    InvalidError said:
    Were it not for the GPU market going nuts over the last four years, increases in raw material costs and logistics costs, this would have been a $200-250 part.
I would buy that argument if it weren't for the fact that both AMD and nVidia are raking in cash like fishermen on a school of a million fish.

Those are just excuses to screw people. I was definitely giving them the benefit of the doubt at the start, but not so much anymore. Their earnings reports are the damning evidence that they are just taking advantage of the situation, and their excuses are just that: excuses. They can lower prices, period.

    Regards.
    Reply
  • ottonis
    Yuka said:
I would buy that argument if it weren't for the fact that both AMD and nVidia are raking in cash like fishermen on a school of a million fish.

Those are just excuses to screw people. I was definitely giving them the benefit of the doubt at the start, but not so much anymore. Their earnings reports are the damning evidence that they are just taking advantage of the situation, and their excuses are just that: excuses. They can lower prices, period.

    Regards.

    The market has its own rules. As long as there is larger demand than the amount of GPUs AMD can produce, they will keep the prices high. That's just how (free) markets work.
    You can't blame a company for maximizing their profits within the margins the market provides to them.
For a bottle of water, you usually pay less than a dollar. Now, in the desert, with the next station 500 miles away, you would even pay 10 dollars (or 100?) for a bottle of water if you are thirsty.
This will not change as long as the global GPU shortage lasts.
    Reply
  • -Fran-
    ottonis said:
    The market has its own rules. As long as there is larger demand than the amount of GPUs AMD can produce, they will keep the prices high. That's just how (free) markets work.
    You can't blame a company for maximizing their profits within the margins the market provides to them.
For a bottle of water, you usually pay less than a dollar. Now, in the desert, with the next station 500 miles away, you would even pay 10 dollars (or 100?) for a bottle of water if you are thirsty.
This will not change as long as the global GPU shortage lasts.
You're misunderstanding the argument: I do not care about their profit over my own money expenditure. I understand perfectly well that they're companies and that their only purpose is maximizing profit for their shareholders.

So sure, you can defend the free market and their behaviour all you want, but why? Are you looking after their well-being? Are you a stakeholder? Do you have a vested interest in their market value? Are you getting paid to defend their scummy behaviour towards consumers? Do you want to pay more and more each generation for no performance increase per tier? Do you want to pay a car's worth for a video card at some point? Maybe a house's worth?

Do not misunderstand arguments about AMD and nVidia being scummy. You should be aware that you have to complain and not buy products at bad price points, or they'll just continue to push the limit, because that's what they do.

    Regards.
    Reply