AMD Radeon RX 6600 XT Review: The Memory Compromise

Navi 23 only has 8GB GDDR6 on a 128-bit interface

AMD Radeon RX 6600 XT (Image credit: Tom's Hardware)

Tom's Hardware Verdict

The Radeon RX 6600 XT cuts VRAM, interface width, Infinity Cache, and GPU cores, all in an attempt to enhance value, but given its price point, the trimming may have gone too far.

Pros

  • Generally faster than RX 5700 XT and RTX 3060
  • Power-efficient design
  • Good performance at 1080p
  • Supply should be better than the other RX 6000 GPUs

Cons

  • Only 8GB VRAM on a 128-bit bus, with 32MB Infinity Cache
  • Poor ray tracing performance
  • Expensive for a 1080p-oriented GPU
  • It's still going to sell out


The AMD Radeon RX 6600 XT is a product of the times, delivering a reasonable overall package but with somewhat lackluster specs and performance given the price. It's by far the most expensive 'mainstream' part we've seen so far, though such discussions are basically meaningless in the wake of ongoing GPU shortages. Look at our GPU price index, and it's obvious that anything worthy of being on our list of the best graphics cards will end up selling out, then getting scalped on places like eBay. MSRP or SEP (suggested e-tail price) will only matter to the small fraction of people lucky enough to snag a card directly from AMD or Nvidia (via Best Buy) at the 'official' price. Most cards will end up selling at whatever price the market dictates, and sadly that means far more than AMD's nominal $379 launch price for the RX 6600 XT.

The Navi 23 GPU that powers the RX 6600 XT makes some interesting compromises as well. Normally, we'd expect to see a trimmed-down Navi 22 part by this point in the life cycle, but either yields are so good that AMD doesn't have enough harvested chips to launch something like the previously rumored Radeon RX 6700, or it's saving those GPUs for something else (laptops). Navi 22 has a maximum configuration of 40 CUs, 12GB of GDDR6 on a 192-bit memory bus, and 96MB of Infinity Cache. Navi 23 still has 32 CUs, a relatively small reduction, but tops out at 8GB of GDDR6 on a 128-bit bus with only 32MB of Infinity Cache. AMD even cuts the PCIe interface to x8 Gen4, which technically offers the same bandwidth as x16 Gen3, but if you're on a Gen3 board you only get half the interface bandwidth. The result is a significantly smaller die: 237mm^2 for Navi 23 versus 335mm^2 for Navi 22, a 29% reduction.
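
For anyone who wants to sanity-check those interface and die-size numbers, here's a quick back-of-the-envelope sketch in Python. It assumes nothing beyond the figures quoted above and the published PCIe per-lane transfer rates; the helper function is purely illustrative, not anything AMD provides.

```python
# Rough sanity check of the RX 6600 XT's PCIe and die-size trims.
# Numbers come from the specs quoted above; this is napkin math, not a benchmark.

def pcie_bandwidth_gb_s(gen: int, lanes: int) -> float:
    """Approximate one-direction PCIe bandwidth in GB/s.

    PCIe Gen3 runs at 8 GT/s per lane and Gen4 at 16 GT/s, both using
    128b/130b encoding, so each Gen3 lane carries roughly 0.985 GB/s.
    """
    gt_per_s = {3: 8, 4: 16}[gen]
    return lanes * gt_per_s * (128 / 130) / 8

print(f"x8 Gen4 : {pcie_bandwidth_gb_s(4, 8):.1f} GB/s")   # ~15.8 GB/s
print(f"x16 Gen3: {pcie_bandwidth_gb_s(3, 16):.1f} GB/s")  # ~15.8 GB/s, same as x8 Gen4
print(f"x8 Gen3 : {pcie_bandwidth_gb_s(3, 8):.1f} GB/s")   # ~7.9 GB/s on a Gen3 board

# Die-size comparison: Navi 23 vs. Navi 22
navi23_mm2, navi22_mm2 = 237, 335
print(f"Die area reduction: {1 - navi23_mm2 / navi22_mm2:.0%}")  # ~29%
```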

GPU Specifications
Graphics Card          | RX 6600 XT | RX 6800 XT | RX 6800  | RX 6700 XT | RX 5700 XT
Architecture           | Navi 23    | Navi 21    | Navi 21  | Navi 22    | Navi 10
Process Technology     | TSMC N7    | TSMC N7    | TSMC N7  | TSMC N7    | TSMC N7
Transistors (Billion)  | 11.1       | 26.8       | 26.8     | 17.2       | 10.3
Die size (mm^2)        | 237        | 519        | 519      | 336        | 251
CUs                    | 32         | 72         | 60       | 40         | 40
GPU Cores              | 2048       | 4608       | 3840     | 2560       | 2560
Ray Accelerators       | 32         | 72         | 60       | 40         | N/A
Infinity Cache (MB)    | 32         | 128        | 128      | 96         | N/A
Game Clock (MHz)       | 2359       | 2250       | 2105     | 2424       | 1755
VRAM Speed (Gbps)      | 16         | 16         | 16       | 16         | 14
VRAM (GB)              | 8          | 16         | 16       | 12         | 8
VRAM Bus Width (bits)  | 128        | 256        | 256      | 192        | 256
ROPs                   | 64         | 128        | 96       | 64         | 64
TMUs                   | 128        | 288        | 240      | 160        | 160
TFLOPS FP32 (Boost)    | 9.7        | 20.7       | 16.2     | 12.4       | 9.0
Bandwidth (GBps)       | 256        | 512        | 512      | 384        | 448
PCIe Slot Interface    | x8 Gen4    | x16 Gen4   | x16 Gen4 | x16 Gen4   | x16 Gen4
TBP (watts)            | 160        | 300        | 250      | 230        | 225
Launch Date            | Aug 2021   | Nov 2020   | Nov 2020 | Mar 2021   | Jul 2019
Launch Price           | $379       | $649       | $579     | $479       | $399

The problem is that all the trimming of the fat will inevitably affect the flavor of the steak. While the computational performance of the RX 6600 XT looks good (it's only about 20% lower than the RX 6700 XT), the reduction in memory bandwidth, capacity, and L3 cache will also reduce performance. We'll see just how much that matters in a bit, but the RX 5700 XT will be an interesting point of comparison, considering it has significantly more bandwidth but runs on the original RDNA architecture.
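
To put rough numbers behind that comparison, the short sketch below recomputes theoretical FP32 throughput and raw memory bandwidth from the specs in the table above. Treat it as napkin math using those published figures, not a performance model; Infinity Cache, drivers, and the game itself all muddy the real-world picture.

```python
# Napkin math from the spec table: peak FP32 rate and raw memory bandwidth.
# These are theoretical ceilings; actual game performance depends on far more.

def fp32_tflops(shaders: int, clock_mhz: int) -> float:
    """Peak FP32 TFLOPS: shaders x 2 ops per clock (FMA) x clock."""
    return shaders * 2 * clock_mhz / 1e6

def mem_bandwidth_gb_s(bus_bits: int, speed_gbps: float) -> float:
    """Raw GDDR6 bandwidth in GB/s: (bus width / 8) bytes x data rate."""
    return bus_bits / 8 * speed_gbps

#                 shaders, clock (MHz), bus (bits), VRAM speed (Gbps)
cards = {
    "RX 6600 XT": (2048, 2359, 128, 16),
    "RX 6700 XT": (2560, 2424, 192, 16),
    "RX 5700 XT": (2560, 1755, 256, 14),
}

for name, (shaders, clock, bus, speed) in cards.items():
    print(f"{name}: {fp32_tflops(shaders, clock):.1f} TFLOPS, "
          f"{mem_bandwidth_gb_s(bus, speed):.0f} GB/s")

# RX 6600 XT vs. RX 6700 XT: 9.7 vs. 12.4 TFLOPS, roughly a 22% compute deficit.
# RX 6600 XT vs. RX 5700 XT: 256 vs. 448 GB/s, about 43% less raw bandwidth.
```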

It's AMD's own fault for pushing VRAM capacities with earlier RX 6000 series launches. When Nvidia launched the RTX 3080 with 10GB of VRAM and then kept the RTX 3070 Ti, RTX 3070, and RTX 3060 Ti at 8GB, AMD effectively made a statement with the RX 6700 XT, RX 6800, RX 6800 XT, and RX 6900 XT that 8GB simply wasn't sufficient. Of course, that's not entirely true, but there are certainly games and settings that will now have issues on GPUs that 'only' have 8GB of memory (games promoted by AMD, not surprisingly).

We'd argue that 8GB is still reasonable for mainstream options in our GPU benchmarks hierarchy, but then we run into problem number two: AMD has given the RX 6600 XT a price that belongs more at the bottom of the high-end market than in mainstream territory. Its closest competitor, Nvidia's RTX 3060 12GB, offers 50% more memory at a theoretically 13% lower price, so while the RX 6600 XT might be a reasonable option, it's not a clear winner by any stretch.

Architecturally, Navi 23 sticks with the same general formula as the other Big Navi and RDNA2 GPUs. It comes with DirectX Raytracing (DXR) support and implements the full DirectX 12 Ultimate features list, including Variable Rate Shading (VRS), mesh shaders, and sampler feedback. But AMD says the 32MB of Infinity Cache was chosen specifically for the target 1080p gaming audience, and we've seen in the past that higher resolutions tend to benefit from more L3 cache. That means while the RX 6600 XT should certainly do well at 1080p, it may not scale as well to higher resolutions like 1440p and 4K.

We're still more concerned with the 128-bit memory interface, though. That gives the RX 6600 XT exactly half the bandwidth and memory of the RX 6800 XT, with a quarter of the L3 cache. Will that mean half the performance as well? And what does that look like against competing GPUs from both Nvidia and AMD — and not just current generation cards, but also the previous generation?

These are good questions, though the most important one will be how many GPUs AMD can manufacture and get to market. There's plenty of evidence that Nvidia currently outsells AMD by a ratio of at least 10-to-1 (based on the latest Steam Hardware Survey, as well as our GPU pricing index data), and that could boil down to production capacity. RX 6600 XT performance won't matter much if gamers can't actually go out and buy the card.

MORE: Best Graphics Cards

MORE: GPU Benchmarks and Hierarchy

MORE: All Graphics Content

Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • Zarax
    I know this is a lot to ask, but given the ridiculous MSRP, you might want to design a benchmark suite of discounted games (you could use isthereanydeal to see which ones have been 50% off at least once) that would be a good fit for lower-end cards or ones available used at acceptable prices.

    Something like "Budget gaming: how do the cheapest cards on ebay perform?" could be a very interesting read, especially given your high standards in writing and testing.
    Reply
  • salgado18
    I like the decision to lower memory bus width to 128 bits. It lowers mining performance without affecting gaming performance, and can't be undone like Nvidia's software-based solution.
    Reply
  • ottonis
    AMD's main problem is production capacity: they can't produce nearly as many GPUs as they would like, and they're thus being outsold by Nvidia by a wide margin.

    It's pretty obvious that AMD had one goal in mind with Navi 23: increase production output as much as possible by shrinking the die while maintaining competitive 1080p gaming performance.
    Apparently, they accomplished that task. Whether or not the MSRP will have to be adjusted, we will see, but I guess not as long as the global GPU shortage lasts.
    Reply
  • InvalidError
    salgado18 said:
    I like the decision to lower memory bus width to 128 bits. It lowers mining performance without affecting gaming performance, and can't be undone like Nvidia's software-based solution.
    Still, having a 128-bit bus on a $400 GPU is outrageous, especially if the VRAM bandwidth bottleneck is a major contributor to the 6600 (XT)'s collapse at higher resolutions and in DXR.

    With only 8GB of VRAM, the GPU can only work on one ETH DAG at a time anyway, so narrowing the bus to 128 bits shouldn't hurt too much. A good chunk of the reason why 12GB GPUs have a significant hash rate advantage is that they can work on two DAGs at a time while 16GB ones can do three, and extra memory channels help with that concurrency.
    Reply
  • -Fran-
    salgado18 said:
    I like the decision to lower memory bus width to 128 bits. It lowers mining performance without affecting gaming performance, and can't be undone like Nvidia's software-based solution.
    Sorry, but you're not entirely correct there. It does affect performance. It's very much an "at this moment in time" type of thing where you don't see it being a severe bottleneck, but crank the resolution up to 1440p and it falls behind the 5700 XT almost consistently; that doesn't bode well for the future of this card, even at 1080p. There's also the x8 link on PCIe 3.0, which will shave off about 5% performance. HUB already tested it, and the biggest drop was DOOM Eternal with a whopping 20% loss in performance. That's massive and shameful.

    I have no idea why AMD made this card this way, but they're definitely trying to anger a lot of people with it... Me included. This card cannot be over $300, and that's the hill I will die on.

    Regards.
    Reply
  • ezst036
    The 6600 XT looks like a good Linux gaming card for Steam.
    Reply
  • InvalidError
    Yuka said:
    I have no idea why AMD made this card this way, but they're definitely trying to anger a lot of people with it... Me included. This card cannot be over $300, and that's the hill I will die on.
    Were it not for the GPU market going nuts over the last four years, increases in raw material costs and logistics costs, this would have been a $200-250 part.
    Reply
  • -Fran-
    InvalidError said:
    Were it not for the GPU market going nuts over the last four years, increases in raw material costs and logistics costs, this would have been a $200-250 part.
    I would buy that argument if it weren't for the fact that both AMD and nVidia are raking in the cash like fishermen on a school of a million fish.

    Those are just excuses to screw people. I was definitely giving them the benefit of the doubt at the start, but not so much anymore. Their earnings reports are the damning evidence that they are just taking advantage of the situation, and their excuses are just that: excuses. They can lower prices, period.

    Regards.
    Reply
  • ottonis
    Yuka said:
    I would buy that argument if it weren't for the fact that both AMD and nVidia are raking in the cash like fishermen on a school of a million fish.

    Those are just excuses to screw people. I was definitely giving them the benefit of the doubt at the start, but not so much anymore. Their earnings reports are the damning evidence that they are just taking advantage of the situation, and their excuses are just that: excuses. They can lower prices, period.

    Regards.

    The market has its own rules. As long as demand outstrips the number of GPUs AMD can produce, they will keep prices high. That's just how (free) markets work.
    You can't blame a company for maximizing its profits within the margins the market provides.
    For a bottle of water, you usually pay less than a dollar. But in the desert, with the next station 500 miles away, you might pay 10 dollars (or 100?) for a bottle of water if you're thirsty.
    This will not change as long as the global GPU shortage lasts.
    Reply
  • -Fran-
    ottonis said:
    The market has its own rules. As long as demand outstrips the number of GPUs AMD can produce, they will keep prices high. That's just how (free) markets work.
    You can't blame a company for maximizing its profits within the margins the market provides.
    For a bottle of water, you usually pay less than a dollar. But in the desert, with the next station 500 miles away, you might pay 10 dollars (or 100?) for a bottle of water if you're thirsty.
    This will not change as long as the global GPU shortage lasts.
    You're misunderstanding the argument: I do not care about their profit over my own money expenditure. I understand perfectly well they're Companies and their only purpose in their usable life is maximizing profit for their shareholders.

    So sure, you can defend the free market and their behaviour all you want, but why? Are you looking after their well-being? Are you a stakeholder? Do you have a vested interest in their market value? Are you getting paid to defend their scummy behaviour towards consumers? Do you want to pay more and more each generation for no performance increase per tier? Do you want to pay a car's worth for a video card at some point? Maybe a house's worth?

    Do not misunderstand arguments about AMD and nVidia being scummy. You should be aware you have to complain and not buy products at bad price points or they'll just continue to push the limit, because that's what they do.

    Regards.
    Reply