Sapphire RX 7900 XT Pulse Review: Quiet a Performance

A $120 price drop makes this card more enticing.

Our power, clocks, and temperature testing now uses the same test suite as our gaming benchmarks, as the PCAT v2 hardware and FrameView software let us collect this data alongside frametimes. We're also using our updated Core i9-13900K platform, so we're less likely to have CPU or platform limitations playing a role.
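
As a rough illustration of how per-sample logs turn into suite-wide averages like the ones discussed below, here's a minimal Python sketch; the CSV column names and filenames are hypothetical placeholders, not FrameView's actual export format.

```python
# Minimal sketch: turning per-sample power/clock logs into suite averages.
# The CSV column names and filenames below are hypothetical placeholders.
import csv
from statistics import mean

def game_averages(log_path):
    """Average GPU power (W) and clock (MHz) over one benchmark run."""
    power, clocks = [], []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            power.append(float(row["GPU Power (W)"]))
            clocks.append(float(row["GPU Clock (MHz)"]))
    return mean(power), mean(clocks)

# Weight each of the 15 games equally in the suite-wide average.
results = [game_averages(f"game{i:02d}_1440p.csv") for i in range(1, 16)]
print(f"Suite average power: {mean(r[0] for r in results):.0f}W")
print(f"Suite average clock: {mean(r[1] for r in results):.0f} MHz")
```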

We have 1440p and 4K charts for power, GPU clocks, and temperatures below. Then we run Metro Exodus at settings that are demanding for the GPU being tested — 1440p ultra in the case of the Sapphire RX 7900 XT — and after letting the game run for 15 minutes or more, we check noise levels. This basically represents the worst-case workload and maximum noise, though in practice a lot of games will end up at similar levels.

We'll present additional tables and information about efficiency (FPS/W) and value (FPS/$) at the bottom of the page.

The Sapphire RX 7900 XT does as advertised and uses slightly more power across our test suite than the reference 7900 XT, at both 1440p and 4K. In both cases, the Sapphire card averaged 322W of power use, while the reference card used 307W and 308W. Nvidia's RTX 4070 Ti meanwhile needed just 246W on average at 1440p, and 262W at 4K.

This is an ongoing issue with AMD's RDNA 3 architecture. If you look at RDNA 2 and Ampere, the previous generation GPUs, AMD generally held an advantage in efficiency. Part of that was because Nvidia used Samsung 8N (a 10nm-class node) while AMD used TSMC N7. Now, they're basically on the same 5nm-class node (TSMC 4N and N5, for Nvidia and AMD, respectively). It seems like Nvidia's architecture is overall simply more efficient.

There's also the question of how much extra power AMD had to use by opting for GPU chiplets instead of a monolithic die. We may never know for certain, but certainly there's room for improvement from AMD. Sapphire for its part managed to provide equivalent performance while drawing more power, which definitely isn't the best way of doing things.

GPU clock speeds on their own don't mean too much, unless you're comparing within the same architecture and even GPU. And that's where things get a bit interesting with the Sapphire RX 7900 XT. On paper, it has a higher 2450 MHz boost clock while the reference card 'only' has a 2400 MHz boost clock. But we know from experience that AMD's latest generation GPUs are likely to exceed that conservative boost clock.

With the reference RX 7900 XT, across our 15-game suite, we measured average GPU clocks of 2573 MHz — 173 MHz above the advertised boost clock. The Sapphire model meanwhile only averaged 2488 MHz, 38 MHz above its advertised speed. That's at 1440p, incidentally. The clocks drop quite a bit at 4K, where the reference card still averaged 2494 MHz, but the Sapphire card dropped to just 2290 MHz.

Looking at these results, we can't help but wonder if our Sapphire card was a bit of a lemon. We've since returned the card to the company, but other sites' results don't seem to corroborate ours. There's a good chance a different Pulse card would have performed better than what we've shown here.

Finally, we have to look at temperatures and noise levels as a linked pair. Higher fan speeds can drop temperatures at the cost of extra noise, and conversely lower fan speeds can result in higher temperatures but less noise. There's a balancing act that needs to be maintained, and generally speaking GPU core temperatures of less than 80C aren't a concern.

Looking just at the thermals, the Sapphire RX 7900 XT ran about 3–5C hotter than the reference card. Neither one averaged more than 70C, so temperatures aren't a concern, but we can only get a better idea of how the cards compare by also looking at fan speeds and/or noise levels.

We check noise levels using an SPL (sound pressure level) meter placed 10cm from the card, with the mic aimed right at the center of the middle fan (on the Sapphire card). This helps minimize the impact of other noise sources like the fans on the CPU cooler. The noise floor of our test environment and equipment is less than 32 dB(A).

After running Metro Exodus for over 15 minutes, the Sapphire RX 7900 XT settled at a fan speed of 38% with a noise level of 40.0 dB(A). While not the quietest GPU we've tested — some of the really large cards run at 37–38 dB(A) — it's equal to or better than most other competing GPUs. The reference AMD 7900 XT on the other hand had a 48.7 dB(A) result, with the fans running at 63%.

There's no question in my mind that the Sapphire cooling solution ends up performing better overall, even if the actual temperatures are slightly higher. The reference cards get relatively loud, while Sapphire's fans and cooler generally do a better job of keeping noise levels in check.

We also tested with a static fan speed of 75%, which caused the Sapphire RX 7900 XT to generate 56.2 dB(A) of noise. That's similar to a lot of other GPUs, but not hugely important as you shouldn't normally see the fans spinning at more than 50% or so.

GPU Value and Efficiency

Graphics Card Value and Efficiency

| Graphics Card | FPS/$ (1440p) | FPS/W (1440p) | 1080p FPS | 1440p FPS | 4K FPS | Online Price | Power | PC FPS/$ (rank) |
|---|---|---|---|---|---|---|---|---|
| GeForce RTX 4060 Ti | 0.137 | 0.382 | 80.1 | 54.9 | 27.9 | $400 | 144W | 0.0372 (#10) |
| Radeon RX 6800 XT | 0.131 | 0.222 | 90.1 | 65.4 | 35.1 | $500 | 294W | 0.0415 (#8) |
| GeForce RTX 4070 | 0.125 | 0.386 | 101.8 | 73.2 | 39.3 | $585 | 190W | 0.0441 (#6) |
| Radeon RX 6950 XT | 0.118 | 0.230 | 100.4 | 74.5 | 40.2 | $630 | 324W | 0.0437 (#7) |
| Radeon RX 6800 | 0.115 | 0.247 | 78.6 | 56.3 | 30.1 | $490 | 228W | 0.0360 (#11) |
| GeForce RTX 4070 Ti | 0.115 | 0.367 | 121.7 | 90.5 | 50.0 | $790 | 246W | 0.0486 (#2) |
| Radeon RX 7900 XT Sapphire | 0.111 | 0.267 | | 86.2 | 48.3 | $780 | 322W | 0.0465 (#4) |
| Radeon RX 7900 XT | 0.111 | 0.281 | 113.2 | 86.2 | 47.8 | $780 | 307W | 0.0465 (#5) |
| Radeon RX 7900 XTX | 0.099 | 0.278 | 123.5 | 96.6 | 56.3 | $980 | 347W | 0.0470 (#3) |
| GeForce RTX 4080 | 0.098 | 0.412 | 139.0 | 108.3 | 62.7 | $1,108 | 263W | 0.0496 (#1) |
| Radeon RX 6900 XT | 0.090 | 0.234 | 95.4 | 70.2 | 37.9 | $780 | 300W | 0.0379 (#9) |

This final gallery of images shows the full performance test suite, along with the above power, clocks, and temperature information. Latency is also provided, at least in some of the games (depending on the GPU and drivers used).

We've calculated efficiency in FPS/W for the various games, plus value in FPS/$ using the best current online prices we could find (usually at Newegg or Amazon, though B&H and Best Buy were also checked). We've summarized those results in the above table (based on 1440p performance and power), sorted by overall value.
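
Both ratios are simple division; as a sanity check, here's a minimal sketch using the Sapphire card's 1440p figures from the table above. The table itself is computed from unrounded data, so the last digit can differ slightly.

```python
# Reproducing the table's value and efficiency ratios for the Sapphire card.
fps_1440p = 86.2  # average 1440p FPS across the test suite
price = 780       # best online price, in dollars
power = 322       # average power draw at 1440p, in watts

print(f"FPS/$: {fps_1440p / price:.3f}")  # 0.111
print(f"FPS/W: {fps_1440p / power:.3f}")  # 0.268 here; the table's 0.267
                                          # comes from unrounded source data
```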

As expected, even though cards like the RTX 4060 Ti have a (perhaps deservedly) bad rap, their lower prices still make them a "better value" than the more expensive GPUs. It's the typical diminishing returns in practice. However, we should note that we're only factoring in the cost of the graphics card. If you were to include a decent high-end PC, that changes the equation a lot. That's what the "PC FPS/$" column at the far right shows.

For the system cost, we used a Core i7-13700K, Arctic Freezer II 240 cooler, ASRock Z790 PG Lightning motherboard, 32GB G.Skill DDR5-6400 CL32 memory, Solidigm P44 Pro 2TB SSD, Phanteks Eclipse P400A case, and Phanteks Revolt Pro 850W gold PSU. Combined, those cost $1,074 at the time of writing. For a complete PC, the "best value" (in quotes because there's an element of subjectivity involved) switches to the most expensive RTX 4080 first, then the RTX 4070 Ti, followed by the RX 7900 XTX and RX 7900 XT. Food for thought if you're planning a complete system update!
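
The "PC FPS/$" math then just swaps the denominator: 1440p FPS divided by the card price plus the $1,074 platform cost. This minimal sketch, using four of the cards from the table, reproduces the ranking described above.

```python
# Sketch of the "PC FPS/$" column: 1440p FPS over (card price + system cost).
SYSTEM_COST = 1074  # dollars, per the parts list above

cards = {  # name: (1440p FPS, card price in dollars)
    "GeForce RTX 4080":    (108.3, 1108),
    "GeForce RTX 4070 Ti": (90.5, 790),
    "Radeon RX 7900 XTX":  (96.6, 980),
    "Radeon RX 7900 XT":   (86.2, 780),
}

ranked = sorted(cards.items(),
                key=lambda kv: kv[1][0] / (kv[1][1] + SYSTEM_COST),
                reverse=True)
for name, (fps, price) in ranked:
    print(f"{name}: {fps / (price + SYSTEM_COST):.4f} PC FPS/$")
# Prints the same order as the review: 4080 (0.0496), 4070 Ti (0.0486),
# 7900 XTX (0.0470), 7900 XT (0.0465).
```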

Efficiency also puts the RTX 4080 in first, followed by the RTX 4070, RTX 4060 Ti, RTX 4070 Ti, and then finally the RX 7900 XT (reference card). Sapphire's GPU ranks two steps lower, just behind the RX 7900 XTX, thanks to its higher power draw. There's still a decent gap in efficiency between it and the RX 6800, however, which was the highest efficiency GPU from AMD's RX 6000-series.

Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • cknobman
    If they get the 7900XT down to $699 I might consider getting one.
    Reply
  • Makaveli
    I have to agree, only the high-end options from both NV and AMD this gen are worth it. Anything below the 4080/4090 and 7900 XT/7900 XTX just looks like trash.
    Reply
  • Alvar "Miles" Udell
    Or you can just recognize that HD texture packs often do very little for image fidelity and stick with settings that won't exceed 12GB (unless a game is poorly coded, which unfortunately seems to be the case with a lot of recent ports).

    With that attitude you don't need to buy a higher end GPU, just buy a console for much less than $600!
    Reply
  • -Fran-
    Alvar Miles Udell said:
    With that attitude you don't need to buy a higher end GPU, just buy a console for much less than $600!
    This is actually a fair take. We're not talking about pennies here. These cards are mega expensive and saying "just compromise" feels wrong to say.

    I mean, people who buy a Ferrari Enzo won't use it to haul big cargo or go off-road, but the cards above $300 start nudging the "if you need to compromise, just get a console" lever for me.

    As for the rest of the review, thanks for it. This card is a tad underrated as "cheap non-reference" cards go. The XTX version of this is the same price as reference and a tad better (both come with the 3 8-pin connectors so you can OC them, IIRC), so you can give them a good run for the money if you want. Also, they're better for water-cooling enthusiasts, as they keep the 3 8-pin connectors and aren't as expensive as the higher-end cooled ones.

    Everything else is: "this is a 7900XT", haha.

    I wish you could give VR games a quick try and comparison, since friends with these are crying when I tell them my 6900XT is performing better at lower power than theirs.

    Regards.
    Reply
  • Alvar "Miles" Udell
    -Fran- said:
    This is actually a fair take. We're not talking about pennies here. These cards are mega expensive and saying "just compromise" feels wrong to say.

    That's what I was going for. Mainstream and entry level cards have compromises, high end designated cards should only have ray tracing as their compromise (and that's not much of one in many cases).

    I also don't agree with TH's testing methodology here of requiring the use of ray tracing, since that is one area that usually brings a significant performance detriment for very little actual visual gain. In my opinion, ray tracing shouldn't even be counted as a detail for the purposes of defining "max details", but as a processing enhancement effect. Techpowerup's review did not use ray tracing for their average FPS chart (it's in a separate chart). They used 25 games, some of which differ from TH's choices, but it provides a far better real-world result. I hope TH will adopt a policy requiring all GPU tests to be run without RT, with RT results placed in a separate chart, at least until such time as RT carries no more than a 10% reduction in performance.
    Reply
  • JarredWaltonGPU
    Alvar Miles Udell said:
    That's what I was going for. Mainstream and entry level cards have compromises, high end designated cards should only have ray tracing as their compromise (and that's not much of one in many cases).

    I also don't agree with TH's testing methodology here of requiring the use of ray tracing, since that is one area that usually brings a significant performance detriment for very little actual visual gain. In my opinion ray tracing shouldn't even be counted as a detail for the purposes of defining "max details", but a processing enhancement effect.
    If we're going down that road, we shouldn't even test at ultra settings, we should just run everything at medium or high. And for those games that actually have ultra settings that actually do look better? Those are just "processing enhancement effects." We should also just test at 1080p, because 1440p and 4K are "resolution enhancement effects." Or put more bluntly, discounting a chunk of what modern GPUs can do just because you don't like how it impacts GPU rankings isn't something I condone or intend to do.

    You'll note in the articles where I look at new games, the conclusion is often (though not always) that ultra and high are basically equivalent quality but ultra requires more GPU resources for minimal gains. Ray tracing, at least in some games, actually does way more than the minor differences between high and ultra. Weakly/poorly done RT of course doesn't do much; that's the case in games like Far Cry 6, World of Warcraft, Shadow of the Tomb Raider, Dirt 5, etc. But when it's actually used more extensively, it can make a bigger difference, as in Minecraft, Cyberpunk 2077, and a few other games.

    If you're willing to discount ray tracing hardware entirely, you can discount a lot of other stuff as well and end up with consoles. But if you're willing to compromise on ray tracing just because it's an area where AMD GPUs in particular perform much worse than their Nvidia counterparts, that's just intentionally limiting your view of a graphics card to favor one brand.
    Reply
  • gg83
    What's better, the 6950 XT for $600 or the 7900 XT on sale?
    Reply
  • Makaveli
    gg83 said:
    What's better, the 6950 XT for $600 or the 7900 XT on sale?
    What is the difference in price?
    Reply
  • atomicWAR
    Makaveli said:
    What is the difference in price?
    Two hundred atm.
    Reply
  • sherhi
    Alvar Miles Udell said:
    I also don't agree with TH's testing methodology here of requiring the use of ray tracing, since that is one area that usually brings a significant performance detriment for very little actual visual gain. In my opinion...
    People have different opinions about visuals, but these cards are usually within margin of error of the reference models, and this test shows it as well. I'm sure they can make a rasterization-only chart for you personally if you are interested in this model, but again, I bet it's within margin of error of the reference model, and you can always check that here: https://www.tomshardware.com/reviews/gpu-hierarchy,4388.html
    It's standard these days to separate those measurements. In this article I don't see 1080p and I am okay with that; it's a high-end GPU, and if it's good at 1440p then I don't really need another page or two (which takes maybe even an hour or two to write) for 1080p, because it won't say much.
    Alvar Miles Udell said:
    With that attitude you don't need to buy a higher end GPU, just buy a console for much less than $600!
    I agree. Is there any online chart comparing the same games' performance across all modern GPUs and consoles? I know Digital Foundry does comparisons like PS5 vs Series X vs high-end PC, but mixing consoles' FPS into these GPU charts would be interesting.
    JarredWaltonGPU said:
    If we're going down that road, we shouldn't even test at ultra settings, we should just run everything at medium or high. And for those games that actually have ultra settings that actually do look better? Those are just "processing enhancement effects." We should also just test at 1080p, because 1440p and 4K are "resolution enhancement effects."
    I get your point, but it's not the best example; resolutions are standardized, and it's common for GPUs to behave differently since their power curve is often non-linear across resolutions... anyway, RT should stay, that's for sure.
    Reply