AMD Radeon RX 9060 XT 16GB review: plenty of performance with 16GB

Be wary of the 8GB models, which are a completely different ballgame.

AMD Radeon RX 9060 XT 16GB
Editor's Choice


This is mostly going to be a rehash of what we've said in other recent reviews, as our testing hasn't changed. At the end of last year, just in time for the Intel Arc B580 launch, we revamped our test suite and our test PC, wiping the slate clean and requiring new benchmarks for every graphics card we want to have in our GPU benchmarks hierarchy.

We've updated the GPU benchmarks to use the new test suite and PC (older results are on pages two and three), and we're basically finished testing all current and previous-generation GPUs. After countless hours of testing and retesting (we've rerun the benchmarks on Intel's Arc cards, as newer drivers plus game patches over the past six months have likely impacted performance), things are in pretty good shape.

All our primary testing takes place at native resolutions of 1920x1080, 2560x1440, and 3840x2160. DLSS, FSR, and XeSS upscaling can be used to render at a lower resolution, with either AI-based algorithms (DLSS, XeSS, and FSR4) or hand-coded algorithms (FSR2/3) reconstructing the result at the higher output resolution. We view such an approach as a viable solution, but given the differences in how the algorithms work, what hardware they support, and how they ultimately look, we feel it's best to use native rendering as the baseline.
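To make the resolution relationship concrete, here's a minimal sketch of the arithmetic (our illustration, not part of the test methodology; the per-axis scale factors below are the commonly published defaults and can vary by game and upscaler version):

```python
# Illustrative only: typical per-axis scale factors for common upscaling
# quality modes (actual values can vary by game, algorithm, and version).
UPSCALE_FACTORS = {
    "quality": 0.667,
    "balanced": 0.58,
    "performance": 0.50,
    "ultra_performance": 0.333,
}

def internal_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Return the approximate resolution a game renders at internally
    before the upscaler reconstructs the final output frame."""
    scale = UPSCALE_FACTORS[mode]
    return round(out_w * scale), round(out_h * scale)

# 4K output with performance-mode upscaling renders at roughly native 1080p,
# which is why relative GPU standings tend to carry over between the two.
print(internal_resolution(3840, 2160, "performance"))  # (1920, 1080)
```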

Frame generation can be used to smooth out the presentation of frames to the monitor, and at times we've seen 80–100 percent higher "framerates" using framegen. Then there's Nvidia's MFG (multi-frame generation), which inserts even more AI-generated frames but still samples user input at the lower base framerate. Framegen isn't inherently bad, but differences in how each game implements the various solutions make it harder to compare GPUs. We view these software features as extras: things that might be worth having, but which make testing and comparisons far more difficult than we'd like.
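A simplified model of why we put "framerates" in quotes (illustrative numbers only; real framegen overhead also reduces the base framerate somewhat):

```python
# Illustrative arithmetic (not from our benchmarks): with frame generation,
# the display receives base_fps * factor frames per second, but the game
# still only samples player input once per *rendered* frame.
def framegen_summary(base_fps: float, factor: int) -> dict:
    return {
        "displayed_fps": base_fps * factor,   # what the fps counter shows
        "input_samples_per_sec": base_fps,    # how often input actually lands
        "uplift_pct": (factor - 1) * 100,     # ideal case; real games see less
    }

# 2x framegen on a 55 fps base: ~110 fps on screen (the "80-100 percent
# higher" range once overhead is factored in), yet input is still
# sampled 55 times per second.
print(framegen_summary(55, 2))
# 4x MFG: 220 fps displayed, input still sampled at the 55 fps base rate.
print(framegen_summary(55, 4))
```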

Fundamentally, all upscaling and framegen solutions boost performance (and/or smoothness), often to similar degrees, and potentially at the cost of some image fidelity. If GPU X runs faster than GPU Y at native 1080p rendering, it should also be faster, by a similar percentage, at 4K with performance mode upscaling, since that renders internally at roughly 1080p before reconstruction. However, depending on the supported algorithms and how they're implemented, the final output on your monitor may or may not look the same.

Our GPU test PC has an AMD Ryzen 7 9800X3D processor, the fastest current CPU for gaming purposes. We also have 32GB of DDR5-6000 memory from G.Skill, with AMD EXPO timings enabled (CL30) on an ASRock X670E Taichi motherboard. We're running Windows 11 24H2, with the latest drivers at the time of testing.

We’re including 17 different GPUs for this review, and we’ve used a mixture of drivers released in the past few months for our testing. The RX 9060 XT was tested with AMD’s launch 25.6.1 drivers, and the RTX 5060 and RTX 5060 Ti 8GB used Nvidia’s latest 576.52 drivers. Intel’s Arc GPUs were most recently tested with the 6790 drivers from early May. We’ve also retested a few games where it was clear that recent patches completely changed the performance (Baldur’s Gate 3 being a prime example).

Our PC is hooked up to a 4K 240Hz display that supports G-Sync and Adaptive-Sync, allowing us to properly experience the higher framerates that the latest GPUs can provide. Most games won't get anywhere close to the 240Hz limit of the monitor at 4K when rendering at native resolution, which is where framegen and MFG can be helpful.

Our GPU test suite has been trimmed down to 18 games for now: four are tested with RT enabled, while the remaining 14 run in pure rasterization mode. All games are tested at 1080p 'medium' settings (the specifics vary by game and are noted in the chart headers), along with 1080p, 1440p, and 4K 'ultra' settings. This provides a good overview of performance in a variety of situations. Depending on the GPU, some of those settings make less sense than others, but everything so far has managed to (mostly) run up to 4K ultra.

Our OS has all the latest updates applied. We're also using Nvidia's PCAT v2 (Power Capture Analysis Tool) hardware, which means we can grab real power use, GPU clocks, and more during our gaming benchmarks. We'll cover those results on page eight.

Finally, because GPUs aren't purely for gaming these days, we run professional and AI application tests. We've previously tested Stable Diffusion, using various custom scripts, but to level the playing field and hopefully make things a bit more manageable, we're turning to standardized benchmarks.

We use the following:

• Procyon for the AI Vision test as well as the Stable Diffusion 1.5 and XL tests.
• MLPerf Client 0.5 preview for AI text generation (not the newer version 0.6, though we may change at some point).
• SPECworkstation 4.0 for Handbrake transcoding, AI inference, and professional applications.
• 3DMark DXR Feature Test to check raw hardware RT performance.
• Blender Benchmark 4.3.0 for professional 3D rendering (and a preview build of Blender 4.4.0 for the RDNA 4 GPUs, as Blender Benchmark currently doesn't support the latest AMD cards).

  • thestryker
    While I still feel like there should have only been a single 9060 XT, the 16GB is definitely what passes for a good deal in price vs. performance, despite the upsell pricing. Hopefully MSRP will be hit over the lifetime of the card.
    Reply
  • JamesJones44
    Feels like if one is going to step up to a 16 GB model, the 5060 Ti looks like a better choice for $40 more. Otherwise one is just looking to save $90 by sticking with the 8 GB model.
    Reply
  • Alvar "Miles" Udell
    AMD showing again why they don't care about gaining market share: they have a product that can compete with Nvidia, but they don't price it anywhere near what it would take to get people to buy it if they're already Nvidia users.
    Reply
  • palladin9479
    I was hoping to see 9060 XT 16GB vs. 8GB charts, the same as the 5060 Ti got, as a way to see where the cutoff is instead of the misinformation that gets spread. It also entirely depends on what the market cost is gonna be.
    Reply
  • 3ogdy
    AMD Radeon RX 9060 XT 16GB : plenty of money to pay for an x60 card at $400
    Reply
  • tvargek
    when will TH add last gen xx60 class cards to their GPU ranking charts??
    Reply
  • virgult
    Alvar Miles Udell said:
    AMD showing again why they don't care about gaining market share: they have a product that can compete with Nvidia, but they don't price it anywhere near what it would take to get people to buy it if they're already Nvidia users.
    That's because it cannot compete. It's a bit worse, for a bit more power, if you're a gamer. Non-gaming workloads run abysmally compared to Nvidia, due to AMD's neglect of HIP, ROCm, and any effort to make pro workloads run well.
    This is not a competitive product, which is why it should be priced way lower.
    Reply
  • tvargek
    but don't forget the 5060 Ti has lower performance on older motherboards because of its narrow lane configuration, and all those hoping to upgrade their older system with a 5060-series card should also buy a new MB+CPU+MEM to gain the full advantage of the 5060 Ti
    Reply
  • palladin9479
    tvargek said:
    but don't forget the 5060 Ti has lower performance on older motherboards because of its narrow lane configuration, and all those hoping to upgrade their older system with a 5060-series card should also buy a new MB+CPU+MEM to gain the full advantage of the 5060 Ti

    Ehh, that really depends. PCIe bandwidth, which is what you are talking about, is only involved when data gets transmitted from system RAM to GPU VRAM. When you have plenty of VRAM you really don't need to worry about that; it only matters if you are in a VRAM-constrained situation that requires graphics resources to be swapped in and out of system RAM across the PCIe bus. A PCIe 4.0 x16 slot is 32GB/s one way, and PCIe 5.0 x16 is 64GB/s one way. System memory is much faster and therefore not the bottleneck. Honestly, if someone is in a situation where they are swapping texture data across the PCIe bus, they are already having a bad experience and need to either turn down texture resolution or upgrade to a newer card.
    Reply
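A quick sanity check of the figures in that comment (illustrative sketch; PCIe generations 3 through 5 use 128b/130b encoding, so the commonly quoted 32GB/s and 64GB/s numbers are rounded):

```python
# One-way PCIe bandwidth: transfer rate per lane (GT/s) * lane count *
# encoding efficiency / 8 bits per byte. Gens 3-5 use 128b/130b encoding.
PCIE_GT_PER_LANE = {3: 8, 4: 16, 5: 32}  # GT/s per lane

def pcie_one_way_gbps(gen: int, lanes: int) -> float:
    return PCIE_GT_PER_LANE[gen] * lanes * (128 / 130) / 8  # GB/s, one direction

print(round(pcie_one_way_gbps(4, 16), 1))  # ~31.5 GB/s, the quoted "32GB/s"
print(round(pcie_one_way_gbps(5, 16), 1))  # ~63.0 GB/s, the quoted "64GB/s"
# The RTX 5060 Ti's physical x8 link halves these figures, which is the
# concern about older boards: ~15.8 GB/s when dropped to PCIe 4.0 x8.
print(round(pcie_one_way_gbps(4, 8), 1))
```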
  • GravtheGeek
    I had no problem getting the XFX model of the 16GB for MSRP ($350) via Newegg. Lots of $350 models out there. If you live near a Micro Center, the selection is even better.

    One major thing to note about the PowerColor Reaper: it's only 200mm x 39mm for the 16GB version. That makes it one of the best cards for smaller SFF builds out there for the money.
    Reply