AMD Radeon RX 6600 XT Review: The Memory Compromise

Navi 23 has only 8GB of GDDR6 on a 128-bit interface

AMD Radeon RX 6600 XT
(Image: © Tom's Hardware)


(Image credit: Tom's Hardware)

Officially, the RX 6600 XT has a TDP (or TBP, typical board power) of 160W, but AMD doesn't have a reference card, and the ASRock card we received definitely uses more than 160W. Many other AIB custom cards will come factory overclocked with higher power use as well, but we can only test what we have on hand; we'll look at testing additional RX 6600 XT cards as they become available. We run our normal suite of Powenetics testing to check GPU power consumption and other aspects of the card, using Metro Exodus at 1440p ultra and the FurMark stress test at 900p.
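
For reference, the summary numbers we report are simple statistics over thousands of per-sample readings. Here's a minimal sketch of that post-processing step, assuming a hypothetical CSV of per-rail wattage samples (the real Powenetics capture format and file name will differ):

```python
# Minimal sketch of summarizing per-sample power logs from a Powenetics-style
# capture. The CSV layout, column names, and file name are hypothetical.
import csv

def summarize_power(path):
    """Return (average_w, peak_w) from a log of per-rail wattage samples."""
    totals = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Total board power per sample is the sum of all rail columns.
            totals.append(sum(float(v) for k, v in row.items() if k != "time"))
    return sum(totals) / len(totals), max(totals)

avg_w, peak_w = summarize_power("metro_1440p_ultra.csv")
print(f"average: {avg_w:.1f} W, peak: {peak_w:.1f} W")
```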

In Metro Exodus, power use basically matches that of the RTX 3060, landing just above 170W. That's the TDP of the Nvidia GPU and 10W higher than the TBP of the RX 6600 XT. Switch over to FurMark, and power use increases more on the ASRock card than on the EVGA card, coming in at 181W vs. 175W. Among current-generation GPUs from AMD and Nvidia, these are the two lowest-power options, with power consumption scaling up to 350W or more on the RTX 3090 and various custom RTX 3080, RX 6900 XT, and RX 6800 XT cards.

Average clock speeds for the RX 6600 XT set a new record of 2640MHz during the Metro Exodus benchmark. It's a pretty steady clock speed as well, and above the official AMD boost clock of 2589MHz — but it's a factory overclocked card, and ASRock doesn't list a maximum boost clock. Meanwhile, the average clock speed in FurMark still comes in at an impressive 2353MHz, basically matching the RX 6700 XT. By comparison, Nvidia's GPUs all clock at least 700MHz lower, and in some cases, more than 1GHz lower than the RX 6600 XT. We'd be worried AMD was heading down the NetBurst and Bulldozer path to higher performance, chasing clock speeds at the cost of efficiency, were it not for the relatively tame power use.

Fan speeds directly affect temperatures, and this is where the overengineered design of the ASRock Phantom Gaming comes into play. A 170W or even 180W card doesn't strictly need a triple-slot cooler and three fans (the GeForce GTX 980 had a 180W TDP back in 2014 and did just fine with a single blower fan), but the larger cooler lets ASRock keep temperatures and fan speeds low. It's particularly interesting just how quickly the GPU temperature falls below 50C and the fans shut down. Some of the other GPUs show a slight dip in fan speeds between benchmark iterations, but the ASRock card runs cool enough that the fans halt for 7–8 seconds during the brief loading screen.
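
That fan-stop behavior is typically implemented as a simple hysteresis loop: the fans halt below one temperature and don't spin up again until the GPU crosses a higher one, which prevents constant on/off cycling. Here's a minimal sketch of that logic; the 50C stop point matches what we observed, but the restart threshold and ramp are illustrative assumptions, as ASRock doesn't publish its fan curve:

```python
# Toy model of a zero-RPM ("fan stop") control loop with hysteresis.
# The 50C stop point matches observed behavior; the 60C restart threshold
# and the linear ramp are illustrative assumptions, not ASRock's values.
STOP_BELOW_C = 50
RESTART_ABOVE_C = 60

def fan_step(temp_c, fans_running):
    """Return (duty_percent, fans_running) for one control-loop step."""
    if fans_running and temp_c < STOP_BELOW_C:
        return 0, False  # cool enough: halt the fans entirely
    if not fans_running and temp_c <= RESTART_ABOVE_C:
        return 0, False  # stay stopped until temps climb past the gap
    # Assumed linear ramp once the fans are spinning.
    duty = min(100, 20 + 2 * (temp_c - STOP_BELOW_C))
    return duty, True

running = True
for temp in [62, 55, 49, 52, 58, 61, 70]:  # simulated temps across a loading screen
    duty, running = fan_step(temp, running)
    print(f"{temp}C -> {duty}% duty, fans {'on' if running else 'off'}")
```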

Even when the fans are running, they don't hit high RPMs. Maximum fan speed was just over 1400 RPM, and peak temperature during Metro testing was 60C. Temperatures were a bit higher in our FurMark stress test, but the constant load allowed the fans to settle at a similar level, and peak fan speed was actually slightly lower.

Lower fan speeds naturally mean lower noise levels. The noise floor of our test environment and equipment measures 33 dB(A) at a distance of 15cm from the side of the GPU. We place the SPL (sound pressure level) meter close to the GPU fans to focus on their noise rather than case fans or other noise sources.

The ASRock RX 6600 XT measured 37.0 dB(A) while running Metro (not in a benchmark loop this time; for noise testing, we launch the game and just sit in the train watching the shadows crawl across the floor for 15 minutes). Fan speed was reported as just 35%, but that was the maximum of the three fans; unfortunately, our software didn't show average or individual fan speeds, though manually setting the fans to 27% produced a similar noise level. Meanwhile, the ASRock card can get pretty loud at a static 75% fan speed, hitting 60.0 dB(A). Thankfully, there aren't many situations (playing in Death Valley, maybe) where the card would get hot enough to run the fans at 75%.
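
One caveat when reading SPL numbers this close to the floor: the meter reports the combined level of the card plus the 33 dB(A) ambient floor, and decibels don't subtract linearly. A quick worked example shows the card's own contribution to our 37.0 dB(A) reading is roughly 34.8 dB(A):

```python
import math

def source_level_db(measured_db, floor_db):
    """Estimate the source's own level by removing the noise floor
    in the power domain (decibels don't subtract linearly)."""
    return 10 * math.log10(10 ** (measured_db / 10) - 10 ** (floor_db / 10))

# 37.0 dB(A) measured against a 33 dB(A) floor -> ~34.8 dB(A) from the card alone.
print(f"{source_level_db(37.0, 33.0):.1f} dB(A)")
```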

Radeon RX 6600 XT Mining Performance

(Image credit: Tom's Hardware)

Given the dependence of Ethereum mining on GPU memory bandwidth, it shouldn't come as much of a surprise that the RX 6600 XT isn't a particularly good GPU for mining. We use NiceHashMiner to check mining performance with a variety of algorithms, though most of the tests failed to complete. Unfortunately, that's pretty common for new GPU architectures, and where the tests did run, performance was quite low.

At stock clocks, without any tuning (which is what you see in the screenshot above), the ASRock card managed just 28.3 MH/s in Ethash (aka DaggerHashimoto, though technically the two aren't quite the same). That's lower than you can get from an RX 570/580 8GB card, at least after tuning. Overclocking the card's GDDR6 and tuning clocks boosts performance a bit, but still only to 31 MH/s. That's good news for gamers, as it should mean miners won't try to snap up all the RX 6600 XT cards. Not that mining profitability is what it once was, but it's still high enough that large firms are doing it.
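
The reason the tuned result tops out near 31 MH/s is easy to show: Ethash is essentially memory-bandwidth bound, with each hash reading roughly 8KiB from the DAG, so raw bandwidth divided by bytes per hash gives a hard ceiling. A back-of-the-envelope estimate using the card's rated 16 Gbps GDDR6 on a 128-bit bus (256 GB/s) lands almost exactly on the measured result; treat it as a rough model, not a mining benchmark:

```python
# Back-of-the-envelope Ethash ceiling: each hash reads ~8 KiB from the DAG
# (64 mix rounds x 128-byte fetches), so raw memory bandwidth sets the limit.
BYTES_PER_HASH = 64 * 128  # 8192 bytes

def ethash_ceiling_mhs(bus_width_bits, gbps_per_pin):
    """Theoretical hash-rate ceiling in MH/s from raw memory bandwidth."""
    bandwidth_bytes_per_s = bus_width_bits * gbps_per_pin * 1e9 / 8
    return bandwidth_bytes_per_s / BYTES_PER_HASH / 1e6

# RX 6600 XT: 128-bit bus, 16 Gbps GDDR6 -> 256 GB/s -> ~31 MH/s ceiling,
# right about where the tuned card lands.
print(f"{ethash_ceiling_mhs(128, 16):.2f} MH/s")
```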

If there's one bright spot to AMD using a 128-bit memory interface, this is it. Gaming performance is still good, basically matching or slightly exceeding the previous-generation RX 5700 XT. However, the 5700 XT can do around 55 MH/s after tuning and optimization, and as a result, average prices for sold RX 5700 XT listings on eBay are around $800. With only 30 MH/s on tap, miners would likely pay around $500 or less for the RX 6600 XT.
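
That $500 figure is just the 5700 XT's going rate scaled by relative hash rate; a quick sanity check on the arithmetic, using the numbers above:

```python
# Scale the RX 5700 XT's eBay price by relative hash rate to estimate what
# miners might pay for an RX 6600 XT. Inputs are the figures quoted above.
rx5700xt_price_usd, rx5700xt_mhs = 800, 55
rx6600xt_mhs = 30
estimate = rx5700xt_price_usd * rx6600xt_mhs / rx5700xt_mhs
print(f"~${estimate:.0f}")  # ~$436, consistent with "$500 or less"
```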

MORE: Best Graphics Cards

MORE: GPU Benchmarks and Hierarchy

MORE: All Graphics Content

Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • Zarax
    I know this is a lot to ask, but given the ridiculous MSRP, you might want to design a benchmark suite of discounted games (you could use isthereanydeal to see which ones have been 50% off at least once) that would be a good fit for lower-end cards, or for cards available used at acceptable prices.

    Something like "Budget gaming: how do the cheapest cards on ebay perform?" could be a very interesting read, especially given your high standards in writing and testing.
  • salgado18
    I like the decision to lower memory bus width to 128 bits. It lowers mining performance without affecting gaming performance, and can't be undone like Nvidia's software-based solution.
  • ottonis
    AMD's main problem is production capacity: they can't produce nearly as many GPUs as they would like, and are thus being far outsold by Nvidia.

    It's pretty obvious that AMD had one goal in mind with Navi 23: increase production output as much as possible by shrinking the die size while maintaining competitive 1080p gaming performance.
    Apparently, they accomplished that task. Whether or not the MSRP will have to be adjusted, we will see, but I'd guess not as long as the global GPU shortage lasts.
  • InvalidError
    salgado18 said:
    I like the decision to lower memory bus width to 128 bits. It lowers mining performance without affecting gaming performance, and can't be undone like Nvidia's software-based solution.
    Still, a 128-bit bus on a $400 GPU is outrageous, especially if the VRAM bandwidth bottleneck is a major contributor to the 6600 (XT)'s collapse at higher resolutions and in DXR.

    With only 8GB of VRAM, the GPU can only work on one ETH DAG at a time anyway, so narrowing the bus to 128 bits shouldn't hurt too much. A good chunk of the reason 12GB GPUs have a significant hash-rate advantage is that they can work on two DAGs at a time while 16GB ones can do three, and extra memory channels help with that concurrency.
  • -Fran-
    salgado18 said:
    I like the decision to lower memory bus width to 128 bits. It lowers mining performance without affecting gaming performance, and can't be undone like Nvidia's software-based solution.
    Sorry, but you're not entirely correct there. It does affect performance. It's very much an "at this moment in time" situation where you don't see it being a severe bottleneck, but crank the resolution up to 1440p and it falls behind the 5700 XT almost consistently; that doesn't bode well for the future of this card, even at 1080p. There's also the PCIe 3.0 x8 link, which shaves off about 5% of performance. HUB already tested it, and the biggest drop was in DOOM Eternal, with a whopping 20% loss. That's massive and shameful.

    I have no idea why AMD made this card this way, but they're definitely trying to anger a lot of people with it... Me included. This card cannot be over $300, and that's the hill I will die on.

    Regards.
  • ezst036
    The 6600 XT looks like a good Linux gaming card for Steam.
  • InvalidError
    Yuka said:
    I have no idea why AMD made this card this way, but they're definitely trying to anger a lot of people with it... Me included. This card cannot be over $300, and that's the hill I will die on.
    Were it not for the GPU market going nuts over the last four years, plus increases in raw material and logistics costs, this would have been a $200-250 part.
  • -Fran-
    InvalidError said:
    Were it not for the GPU market going nuts over the last four years, plus increases in raw material and logistics costs, this would have been a $200-250 part.
    I would buy that argument if it weren't for the fact that both AMD and nVidia are raking in the cash like fishermen on a school of a million fish.

    Those are just excuses to screw people. I was definitely giving them the benefit of the doubt at the start, but not so much anymore. Their earnings reports are the damning evidence that they are just taking advantage of the situation, and their excuses are just that: excuses. They can lower prices, period.

    Regards.
  • ottonis
    Yuka said:
    I would buy that argument if it weren't for the fact that both AMD and nVidia are raking in the cash like fishermen on a school of a million fish.

    Those are just excuses to screw people. I was definitely giving them the benefit of the doubt at the start, but not so much anymore. Their earnings reports are the damning evidence that they are just taking advantage of the situation, and their excuses are just that: excuses. They can lower prices, period.

    Regards.

    The market has its own rules. As long as demand exceeds the number of GPUs AMD can produce, they will keep prices high. That's just how (free) markets work.
    You can't blame a company for maximizing its profits within the margins the market provides.
    For a bottle of water, you usually pay less than a dollar. But in the desert, with the next station 500 miles away, you would pay $10 (or $100?) for a bottle of water if you were thirsty.
    This will not change as long as the global GPU shortage lasts.
  • -Fran-
    ottonis said:
    The market has its own rules. As long as demand exceeds the number of GPUs AMD can produce, they will keep prices high. That's just how (free) markets work.
    You can't blame a company for maximizing its profits within the margins the market provides.
    For a bottle of water, you usually pay less than a dollar. But in the desert, with the next station 500 miles away, you would pay $10 (or $100?) for a bottle of water if you were thirsty.
    This will not change as long as the global GPU shortage lasts.
    You're misunderstanding the argument: I do not care about their profit over my own money expenditure. I understand perfectly well that they're companies and that their only purpose is maximizing profit for their shareholders.

    So sure, you can defend the free market and their behaviour all you want, but why? Are you looking after their well-being? Are you a stakeholder? Do you have a vested interest in their market value? Are you getting paid to defend their scummy behaviour towards consumers? Do you want to pay more and more each generation for no performance increase per tier? Do you want to pay a car's worth for a video card at some point? Maybe a house's worth?

    Do not misunderstand the arguments about AMD and nVidia being scummy. You should be aware that you have to complain and not buy products at bad price points, or they'll just continue to push the limit, because that's what they do.

    Regards.