AMD Radeon RX 6800 XT and RX 6800 Review: Nipping at Ampere's Heels

The AMD Radeon RX 6800 XT and Radeon RX 6800 have arrived, joining the ranks of the best graphics cards and making some headway into the top positions in our GPU benchmarks hierarchy. Nvidia has had a virtual stranglehold on the GPU market for cards priced $500 or more, going back to at least the GTX 700-series in 2013. That's left AMD to mostly compete in the sub-$500 high-end, mid-range, and budget GPU markets. "No longer!" says Team Red.

Big Navi, aka Navi 21, aka RDNA2, has arrived, bringing some impressive performance gains. AMD also finally joins the ray tracing fray, both with its PC desktop graphics cards and the next-gen PlayStation 5 and Xbox Series X consoles. How do AMD's latest GPUs stack up to the competition, and could this be AMD's GPU equivalent of the Ryzen debut of 2017? That's what we're here to find out.

We've previously discussed many aspects of today's launch, including details of the RDNA2 architecture, the GPU specifications, features, and more. Now it's time to take all the theoretical aspects and lay some rubber on the track. We'll dig into the finer details of RDNA2 below as well, but if you're just here for the benchmarks, skip down a few screens because, hell yeah, do we have benchmarks. We've got our standard testbed using an 'ancient' Core i9-9900K CPU, but we wanted something a bit more for the fastest graphics cards on the planet, so we've also added benchmarks on both a Core i9-10900K and a Ryzen 9 5900X. With the arrival of Ryzen 5000, running AMD GPUs with AMD CPUs finally means no compromises.

Update: We've added additional results to the CPU scaling charts. This review was originally published on November 18, 2020, but we'll continue to update related details as needed.

AMD Radeon RX 6800 Series: Specifications and Architecture 

Let's start with a quick look at the specifications, which have been mostly known for at least a month. We've also included the previous generation RX 5700 XT as a reference point. 

Graphics Card | RX 6800 XT | RX 6800 | RX 5700 XT
GPU | Navi 21 (XT) | Navi 21 (XL) | Navi 10 (XT)
Process (nm) | 7 | 7 | 7
Transistors (billion) | 26.8 | 26.8 | 10.3
Die size (mm^2) | 519 | 519 | 251
CUs | 72 | 60 | 40
GPU cores | 4608 | 3840 | 2560
Ray Accelerators | 72 | 60 | N/A
Game Clock (MHz) | 2015 | 1815 | 1755
Boost Clock (MHz) | 2250 | 2105 | 1905
VRAM Speed (MT/s) | 16000 | 16000 | 14000
VRAM (GB) | 16 | 16 | 8
Bus width (bits) | 256 | 256 | 256
Infinity Cache (MB) | 128 | 128 | N/A
ROPs | 128 | 96 | 64
TMUs | 288 | 240 | 160
TFLOPS (boost) | 20.7 | 16.2 | 9.7
Bandwidth (GB/s) | 512 | 512 | 448
TBP (watts) | 300 | 250 | 225
Launch Date | Nov. 2020 | Nov. 2020 | July 2019
Launch Price | $649 | $579 | $399
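If you want to sanity-check the compute and bandwidth figures above, they fall straight out of the other table entries. Here's a quick sketch in Python; the factor of 2 assumes one fused multiply-add (two FP32 operations) per shader core per clock.

```python
# Rough sanity check of the spec table's RX 6800 XT numbers.
# Assumes 2 FP32 ops (one fused multiply-add) per shader core per clock.
cores = 4608
boost_clock_ghz = 2.250                  # 2250 MHz boost clock
tflops = 2 * cores * boost_clock_ghz / 1000
print(f"FP32 boost: {tflops:.1f} TFLOPS")          # ~20.7

# Raw memory bandwidth: bus width (bits) * transfer rate (MT/s) / 8 bits per byte.
bus_width_bits = 256
transfer_rate_mts = 16000
bandwidth_gbs = bus_width_bits * transfer_rate_mts / 8 / 1000
print(f"Raw bandwidth: {bandwidth_gbs:.0f} GB/s")  # 512
```

Run the same math with the RX 6800's 3840 cores at a 2105MHz boost clock and you get the 16.2 TFLOPS shown in the table.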

When AMD fans started talking about "Big Navi" as far back as last year, this is pretty much what they hoped to see. AMD has just about doubled every important aspect of its architecture, while also adding a huge amount of L3 cache and Ray Accelerators to handle ray/triangle intersection calculations for ray tracing. Clock speeds are also higher, and — spoiler alert! — the 6800 series cards actually exceed the Game Clock and can even go past the Boost Clock in some cases. Memory capacity has doubled, ROPs have doubled, TFLOPS have more than doubled, and the die size is also more than double.

Support for ray tracing is probably the most visible new feature, but RDNA2 also supports Variable Rate Shading (VRS), mesh shaders, and everything else that's part of the DirectX 12 Ultimate spec. There are other tweaks to the architecture, like support for 8K AV1 decode and 8K HEVC encode. But a lot of the underlying changes don't show up as an easily digestible number.

For example, AMD says it reworked much of the architecture to focus on a high-speed design. That's where the greater-than-2GHz clocks come from, and those aren't just fantasy numbers. Playing around with overclocking a bit — the third-party software to do this is still missing, so we had to stick with AMD's built-in overclocking tools — we actually hit clocks of over 2.5GHz. Yeah. I saw the supposed leaks before the launch claiming 2.4GHz and 2.5GHz and thought, "There's no way." I was wrong.

AMD's cache hierarchy is arguably one of the biggest changes. Besides the shared L1 caches (128KB per shader array, 1MB in total across the GPU), there's a 4MB L2 cache and a whopping 128MB L3 cache that AMD calls the Infinity Cache. It also ties into the Infinity Fabric, but fundamentally, it's there to reduce memory access latency and improve effective bandwidth. Thanks to the 128MB cache, much of the working set — including the framebuffer — ends up being cached, which drastically cuts down on trips to GDDR6 memory. AMD says the effective bandwidth of the GDDR6 memory ends up being 119 percent higher than the raw numbers would suggest.
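To see how a large cache can more than double effective bandwidth, here's a simplified back-of-the-envelope model (our own illustration, not AMD's methodology): if some fraction of memory requests hit the Infinity Cache, only the misses consume GDDR6 bandwidth, so the same 512 GB/s of raw bandwidth can back proportionally more total traffic. The hit rate below is an illustrative assumption chosen to land near AMD's figure, not a published number.

```python
# Simplified illustration (not AMD's methodology): treat the Infinity Cache as
# absorbing a fraction of memory requests, so only misses touch GDDR6. Assumes
# the cache itself is never the bottleneck for the requests it serves.
raw_bandwidth_gbs = 512       # RX 6800 XT: 256-bit GDDR6 at 16000 MT/s
assumed_hit_rate = 0.54       # illustrative assumption, not an AMD-published figure

# If only (1 - hit_rate) of traffic reaches DRAM, the same DRAM bandwidth
# can service 1 / (1 - hit_rate) times as much total request traffic.
effective_bandwidth = raw_bandwidth_gbs / (1 - assumed_hit_rate)
uplift_percent = (effective_bandwidth / raw_bandwidth_gbs - 1) * 100
print(f"Effective bandwidth: ~{effective_bandwidth:.0f} GB/s ({uplift_percent:.0f}% higher)")
```

The achievable hit rate depends on resolution and workload, which is part of why the Infinity Cache looks relatively stronger at 1080p and 1440p than at 4K in the benchmarks later on.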

The large cache also helps to reduce power consumption, which all ties into AMD's targeted 50 percent performance per Watt improvements. This doesn't mean power requirements stayed the same — RX 6800 has a slightly higher TBP (Total Board Power) than the RX 5700 XT, and the 6800 XT and upcoming 6900 XT are back at 300W (like the Vega 64). However, AMD still comes in at a lower power level than Nvidia's competing GPUs, which is a bit of a change of pace from previous generation architectures.
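As a very rough cross-check on that 50 percent figure, you can divide the boost TFLOPS in the spec table by TBP. Peak TFLOPS per Watt is not the same thing as measured gaming performance per Watt, so treat this as a ballpark sanity check rather than a confirmation.

```python
# Ballpark perf-per-Watt comparison using only spec-table numbers (TFLOPS / TBP).
# Measured gaming performance per Watt is the real metric; this is just a sanity check.
rx_5700_xt_tflops_per_watt = 9.7 / 225    # ~0.043
rx_6800_tflops_per_watt = 16.2 / 250      # ~0.065
ratio = rx_6800_tflops_per_watt / rx_5700_xt_tflops_per_watt
print(f"RX 6800 vs. RX 5700 XT: {ratio:.2f}x TFLOPS per Watt")  # ~1.50x
```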

It's not entirely clear how AMD's Ray Accelerators stack up against Nvidia's RT cores. Much like Nvidia puts one RT core in each SM, AMD puts one Ray Accelerator into each CU. (It seems we're missing an acronym. Should we call the ray accelerators RA? The sun god, casting down rays! Sorry, been up all night, getting a bit loopy here...) The thing is, Nvidia is on its second-gen RT cores, which are supposed to be around 1.7X as fast as its first-gen RT cores. AMD's Ray Accelerators are supposedly 10 times as fast as doing the RT calculations via shader hardware, which is similar to what Nvidia claimed for its Turing RT cores. In practice, it looks as though Nvidia will maintain a lead in ray tracing performance.

That doesn't even get into the whole DLSS and Tensor core discussion. AMD's RDNA2 chips can do FP16 via shaders, but they're still a far cry from the computational throughput of Tensor cores. That may or may not matter, as perhaps the FP16 throughput is enough for real-time inference to do something akin to DLSS. AMD has talked about FidelityFX Super Resolution, which it's working on with Microsoft, but it's not available yet, and of course, no games are shipping with it yet either. Meanwhile, DLSS is in a couple of dozen games now, and it's also in Unreal Engine, which means uptake of DLSS could explode over the coming year.
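For a rough sense of scale, RDNA2 can execute packed FP16 math at twice its FP32 rate, so peak FP16 throughput works out to roughly double the boost TFLOPS from the spec table. Whether that's enough headroom to run a DLSS-style upscaler alongside a game is the open question; the sketch below just does the multiplication.

```python
# Peak packed-FP16 throughput on RDNA2 (2x the FP32 rate), using spec-table boost TFLOPS.
fp32_boost_tflops = {"RX 6800 XT": 20.7, "RX 6800": 16.2}
for card, tflops in fp32_boost_tflops.items():
    print(f"{card}: ~{2 * tflops:.1f} TFLOPS FP16 (packed)")
```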

Anyway, that's enough of the architectural talk for now. Let's meet the actual cards.

Meet the Radeon RX 6800 XT and RX 6800 Reference Cards 

We've already posted an unboxing of the RX 6800 cards, which you can see in the above video. The design is pretty traditional, building on previous cards like the Radeon VII. There's no blower this round, which is probably for the best if you're worried about noise levels. Otherwise, you get a similar industrial design and aesthetic with both the reference 6800 and 6800 XT. The only real change is that the 6800 XT has a fatter heatsink and weighs 115g more, which helps it cope with the higher TBP.

Both cards are triple fan designs, using custom 77mm fans that have an integrated rim. We saw the same style of fan on many of the RTX 30-series GPUs, and it looks like the engineers have discovered a better way to direct airflow. Both cards have a Radeon logo that lights up in red, but it looks like the 6800 XT might have an RGB logo — it's not exposed in software yet, but maybe that will come.

Otherwise, you get dual 8-pin PEG power connections, which might seem a bit overkill on the 6800 — it's a 250W card, after all, so why would it need the potential for up to 375W of power? But we'll get into the power stuff later. If you're into collecting hardware boxes, the 6800 XT box is also larger and a bit nicer, but there's no real benefit otherwise.
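As for where that 375W ceiling comes from: each 8-pin PEG connector is rated for 150W, and the PCIe x16 slot can supply up to another 75W.

```python
# Maximum power available to the card with two 8-pin connectors plus the PCIe slot.
peg_8pin_watts = 150     # per-connector rating in the PCIe spec
pcie_slot_watts = 75     # power available from the x16 slot
max_board_power = 2 * peg_8pin_watts + pcie_slot_watts
print(f"Available power budget: {max_board_power}W")  # 375W vs. a 250W TBP on the RX 6800
```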

The one potential concern with AMD's reference design is the video ports. There are two DisplayPort outputs, a single HDMI 2.1 connector, and a USB Type-C port. It's possible to use four displays with the cards, but the most popular gaming displays still use DisplayPort, and very few options exist for the Type-C connector. There also aren't any HDMI 2.1 monitors that I'm aware of, unless you want to use a TV for your monitor. But those will eventually come. Anyway, if you want a different port selection, keep an eye on the third party cards, as I'm sure they'll cover other configurations.

And now, on to the benchmarks.

Radeon RX 6800 Series Test Systems 

It seems AMD is having a microprocessor renaissance of sorts right now. First, it has Zen 3 coming out and basically demolishing Intel in every meaningful way in the CPU realm. Sure, Intel can compete on a per-core basis … but only up to 10-core chips without moving into HEDT territory. The new RX 6800 cards might just be the equivalent of AMD's Ryzen CPU launch. This time, AMD isn't making any apologies. It intends to go up against Nvidia's best. And of course, if we're going to test the best GPUs, maybe we ought to look at the best CPUs as well?

For this launch, we have three test systems. First is our old and reliable Core i9-9900K setup, which we still use as the baseline and for power testing. We're adding both AMD Ryzen 9 5900X and Intel Core i9-10900K builds to flesh things out. In retrospect, trying to do two new testbeds may have been a bit too ambitious, as we have to test each GPU on each testbed. We had to cut a bunch of previous-gen cards from our testing, and the hardware varies a bit among the PCs.

For the AMD build, we've got an MSI X570 Godlike motherboard, which is one of only a handful that supports AMD's new Smart Access Memory technology. Patriot supplied us with two kits of single-rank DDR4-4000 memory, which means we have 4x8GB instead of our normal 2x16GB configuration. We also have the Patriot Viper VP4100 2TB SSD holding all of our games. Remember when 1TB used to feel like a huge amount of SSD storage? And then Call of Duty: Modern Warfare (2019) happened, sucking down over 200GB. Which is why we need 2TB drives.

Meanwhile, the Intel LGA1200 PC has an Asus Maximus XII Extreme motherboard, 2x16GB of DDR4-3600 HyperX memory, and a 2TB XPG SX8200 Pro SSD. (I'm not sure if it's the old 'fast' version or the revised 'slow' variant, but it shouldn't matter for these GPU tests.) Full specs are in the table below.

Anyway, the slightly slower RAM might be a bit of a handicap on the Intel PCs, but this isn't a CPU review — we just wanted to use the two fastest CPUs, and time constraints and lack of duplicate hardware prevented us from going full apples-to-apples. The internal comparisons among GPUs on each testbed will still be consistent. Frankly, there's not a huge difference between the CPUs when it comes to gaming performance, especially at 1440p and 4K.

Besides the testbeds, I've also got a bunch of additional gaming tests. First is the suite of nine games we've used on recent GPU reviews like the RTX 30-series launch. We've done some 'bonus' tests on each of the Founders Edition reviews, but we're shifting gears this round. We're adding four new/recent games that will be tested on each of the CPU testbeds: Assassin's Creed Valhalla, Dirt 5, Horizon Zero Dawn, and Watch Dogs Legion — and we've enabled DirectX Raytracing (DXR) on Dirt 5 and Watch Dogs Legion.

There are some definite caveats, however. First, the beta DXR support in Dirt 5 doesn't look all that different from the regular mode, and it's an AMD promoted game. Coincidence? Maybe, but it's probably more likely that AMD is working with Codemasters to ensure it runs suitably on the RX 6800 cards. The other problem is probably just a bug, but AMD's RX 6800 cards seem to render the reflections in Watch Dogs Legion with a bit less fidelity.

Besides the above, we have a third suite of ray tracing tests: nine games (or benchmarks of future games) and 3DMark Port Royal. Of note, Wolfenstein Youngblood with ray tracing (which uses Nvidia's pre-VulkanRT extensions) wouldn't work on the AMD cards, and neither would the Bright Memory Infinite benchmark. Also, Crysis Remastered had some rendering errors with ray tracing enabled (on the nanosuits). Again, that's a known bug.

Radeon RX 6800 Gaming Performance

We've retested all of the RTX 30-series cards on our Core i9-9900K testbed … but we didn't have time to retest the RTX 20-series or RX 5700 series GPUs. The system has been updated with the latest 457.30 Nvidia drivers and AMD's pre-launch RX 6800 drivers, as well as Windows 10 20H2 (the October 2020 update to Windows). It looks like the combination of drivers and/or Windows updates may have dropped performance by about 1-2 percent overall, though there are other variables in play. Anyway, the older GPUs are included mostly as a point of reference.

We have 1080p, 1440p, and 4K ultra results for each of the games, as well as the combined average of the nine titles. We're going to dispense with the commentary for individual games right now (because of a time crunch), but we'll discuss the overall trends below.

[Charts: 9-game average, plus individual results for Borderlands 3, The Division 2, Far Cry 5, Final Fantasy XIV, Forza Horizon 4, Metro Exodus, Red Dead Redemption 2, Shadow of the Tomb Raider, and Strange Brigade]

AMD's new GPUs definitely make a good showing in traditional rasterization games. At 4K, Nvidia's 3080 leads the 6800 XT by three percent, but it's not a clean sweep — AMD comes out on top in Borderlands 3, Far Cry 5, and Forza Horizon 4. Meanwhile, Nvidia gets modest wins in The Division 2, Final Fantasy XIV, Metro Exodus, Red Dead Redemption 2, and Shadow of the Tomb Raider, with its largest lead coming in Strange Brigade. But that's only at the highest resolution, where AMD's Infinity Cache may not be quite as effective.

Dropping to 1440p, the RTX 3080 and 6800 XT are effectively tied — again, AMD wins several games and Nvidia wins others, but the average performance is the same. At 1080p, AMD even pulls ahead by two percent overall. Not that we really expect most gamers forking over $650 or $700 or more for a graphics card to stick with a 1080p display, unless it's a 240Hz or 360Hz model.

Flipping over to the vanilla RX 6800 and the RTX 3070, AMD does even better. On average, the RX 6800 leads by 11 percent at 4K ultra, nine percent at 1440p ultra, and seven percent at 1080p ultra. Here the 8GB of GDDR6 memory on the RTX 3070 simply can't keep pace with the 16GB of higher clocked memory — and the Infinity Cache — that AMD brings to the party. The best Nvidia can do is one or two minor wins (e.g., Far Cry 5 at 1080p, where the GPUs are more CPU limited) and slightly higher minimum fps in FFXIV and Strange Brigade.

But as good as the RX 6800 looks against the RTX 3070, we prefer the RX 6800 XT from AMD. It only costs $70 more, which is basically the cost of one game and a fast food lunch. Or put another way, it's 12 percent more money, for 12 percent more performance at 1080p, 14 percent more performance at 1440p, and 16 percent better 4K performance. You also get AMD's Rage Mode pseudo-overclocking (really just increased power limits).
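Putting this review's own numbers together, the math on the 6800 XT upsell looks like this (the performance deltas are the percentages quoted above):

```python
# RX 6800 XT vs. RX 6800: price premium vs. performance gains (figures from this review).
price_premium_pct = (649 - 579) / 579 * 100           # ~12% more money
perf_gain_pct = {"1080p": 12, "1440p": 14, "4K": 16}  # percent faster at ultra settings
print(f"Price premium: {price_premium_pct:.0f}%")
for resolution, gain in perf_gain_pct.items():
    print(f"{resolution}: +{gain}% performance for +{price_premium_pct:.0f}% price")
```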

Radeon RX 6800 CPU Scaling and Overclocking

Our traditional gaming suite is due for retirement, but we didn't want to toss it out at the same time as a major GPU launch — it might look suspicious. We didn't have time to do a full suite of CPU scaling tests, but we did run 13 games on the five most recent high-end/extreme GPUs on our three test PCs. Here's the next series of charts, again with commentary below. 

13-Game Average