Meet Nvidia's GeForce RTX 2080 Super 8GB
The GeForce RTX 2080 that launched last September was unquestionably fast, but it wasn’t much quicker than the GeForce GTX 1080 Ti. Worse, the Founders Edition model sold for $800, meaning you paid an extra $100 for a slight performance advantage over Nvidia’s previous-gen flagship.
The $699 GeForce RTX 2080 Super tips the scales back in favor of gamers with high-refresh QHD monitors and 4K displays. It serves up average frame rates that are 6% higher than GeForce RTX 2080 (and 17% better than GTX 1080 Ti) at a price $100 lower than the old 2080 Founders Edition card. It also performs far ahead of AMD's new Navi-powered Radeon RX 5700 XT, but that's not surprising given that Nvidia's card costs 75% more.
It’s not going to make anyone with a high-end graphics card want to upgrade. But if you were waiting for a refresh to get your hands on better value than what the first round of Turing-based cards offered, know that GeForce RTX 2080 Super gives you 21% better performance per dollar than GeForce RTX 2080 Founders Edition (or a scant 6% improvement over the least expensive 2080s).
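That performance-per-dollar figure follows directly from the review's own numbers; as a quick sketch of the arithmetic (the 6% performance delta and the $699/$799 prices are the figures quoted above):

```python
# Performance per dollar: RTX 2080 Super vs. RTX 2080 Founders Edition.
# Inputs are this review's figures: ~6% faster, $699 vs. $799.
price_fe, price_super = 799, 699
relative_perf = 1.06  # 2080 Super average frame rate, with 2080 FE = 1.0

perf_per_dollar_fe = 1.0 / price_fe
perf_per_dollar_super = relative_perf / price_super

improvement = perf_per_dollar_super / perf_per_dollar_fe - 1
print(f"{improvement:.0%}")  # -> 21%
```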
Editor's Note: An earlier version of this review carried the headline "Leaving Navi in the Dust." We've revised the title to better reflect the content of the article, which primarily compares RTX 2080 Super to other cards in its price range.
Meet GeForce RTX 2080 Super
Even more so than the GeForce RTX 2060 Super and 2070 Super, the 2080 Super looks nearly identical to its predecessor. Nvidia now has an ample supply of flawless TU104 processors, allowing it to build GeForce RTX 2080 Super without disabling any of the chip's on-die resources.
As a brief refresher on TU104 and its vital specs, TSMC manufactures the GPU on its 12nm FinFET node. A total of 13.6 billion transistors are crammed into a 545 mm² die, which is naturally smaller than Nvidia’s massive TU102 processor but still quite a bit larger than last generation’s 471 mm² flagship (GP102).
TU104 is constructed with the same building blocks as TU102; it just features fewer of them. Streaming Multiprocessors still sport 64 CUDA cores, eight Tensor cores, one RT core, four texture units, 16 load/store units, 256KB of register space, and 96KB of L1 cache/shared memory. TPCs are still composed of two SMs and a PolyMorph geometry engine. Only here, there are four TPCs per GPC, and six GPCs spread across the processor. Therefore, a fully enabled TU104 wields 48 SMs, 3,072 CUDA cores, 384 Tensor cores, 48 RT cores, 192 texture units, and 24 PolyMorph engines. TU104 also loses an eight-lane NVLink connection compared to TU102, limiting it to one x8 link and 50 GB/s of bi-directional throughput.
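Those per-block counts multiply out to the totals quoted; a minimal sketch of the arithmetic, using the figures from the paragraph above:

```python
# Derive a fully enabled TU104's unit totals from its building blocks.
GPCS = 6          # Graphics Processing Clusters per die
TPCS_PER_GPC = 4  # Texture Processing Clusters per GPC
SMS_PER_TPC = 2   # Streaming Multiprocessors per TPC

sms = GPCS * TPCS_PER_GPC * SMS_PER_TPC    # 48 SMs
cuda_cores = sms * 64                      # 64 CUDA cores per SM
tensor_cores = sms * 8                     # 8 Tensor cores per SM
rt_cores = sms * 1                         # 1 RT core per SM
texture_units = sms * 4                    # 4 texture units per SM
polymorph_engines = GPCS * TPCS_PER_GPC    # 1 PolyMorph engine per TPC

print(sms, cuda_cores, tensor_cores, rt_cores, texture_units, polymorph_engines)
# -> 48 3072 384 48 192 24
```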
A correspondingly narrower back end feeds the compute resources through eight 32-bit GDDR6 memory controllers (256-bit aggregate) attached to 64 ROPs and 4MB of L2 cache. But rather than populating that 256-bit path with 14 Gb/s GDDR6 modules from Micron, Nvidia switches to 8GB of Samsung’s K4Z80325BC-HC16, a 16 Gb/s part clocked down to 15.5 Gb/s for GeForce RTX 2080 Super. Why de-tune the data rate? Jumping to 16 Gb/s would have required a PCB modification, and the gain wouldn't have been worth the added cost and complexity. Nvidia says the memory should still overclock to 16 Gb/s manually, though. What's more, GPU overclocks are more effective for improving performance since the chip isn't bandwidth-starved.
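The bandwidth figure falls out of the bus width and per-pin data rate; a quick check of the numbers above, including what the rated 16 Gb/s modules would deliver if overclocked back to spec:

```python
# Memory bandwidth = bus width (bits) x data rate (Gb/s per pin) / 8 bits per byte
bus_width_bits = 256   # eight 32-bit GDDR6 controllers
data_rate_gbps = 15.5  # Samsung 16 Gb/s parts clocked down to 15.5 Gb/s

bandwidth_gbs = bus_width_bits * data_rate_gbps / 8
print(bandwidth_gbs)       # -> 496.0 (GB/s, as shipped)

rated_bandwidth_gbs = bus_width_bits * 16 / 8
print(rated_bandwidth_gbs)  # -> 512.0 (GB/s, at the modules' rated 16 Gb/s)
```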
| | GeForce RTX 2060 Super | GeForce RTX 2070 Super | GeForce RTX 2080 Super |
|---|---|---|---|
| Architecture (GPU) | Turing (TU106) | Turing (TU104) | Turing (TU104) |
| CUDA Cores | 2176 | 2560 | 3072 |
| Peak FP32 Compute | 7.2 TFLOPS | 9.1 TFLOPS | 11.2 TFLOPS |
| Tensor Cores | 272 | 320 | 384 |
| RT Cores | 34 | 40 | 48 |
| Texture Units | 136 | 160 | 192 |
| Base Clock Rate | 1470 MHz | 1605 MHz | 1650 MHz |
| GPU Boost Rate | 1650 MHz | 1770 MHz | 1815 MHz |
| Memory Capacity | 8GB GDDR6 | 8GB GDDR6 | 8GB GDDR6 |
| Memory Bus | 256-bit | 256-bit | 256-bit |
| Memory Bandwidth | 448 GB/s | 448 GB/s | 496 GB/s |
| ROPs | 64 | 64 | 64 |
| L2 Cache | 4MB | 4MB | 4MB |
| TDP | 175W | 215W | 250W |
| Transistor Count | 10.8 billion | 13.6 billion | 13.6 billion |
| Die Size | 445 mm² | 545 mm² | 545 mm² |
| SLI Support | No | Yes | Yes |
The cumulative effect of a more capable TU104 is amplified by higher clock rates. Whereas GeForce RTX 2080 Founders Edition had a 1,515 MHz base and 1,800 MHz GPU Boost rating, GeForce RTX 2080 Super starts at 1,650 MHz and typically operates closer to 1,815 MHz. Peak FP32 compute performance rises from 10.6 TFLOPS to 11.2 TFLOPS, and memory bandwidth increases to 496 GB/s, up from 448 GB/s.
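The peak FP32 figures follow from the CUDA core count, two FMA operations per core per clock, and the GPU Boost frequency; a quick sketch:

```python
# Peak FP32 throughput = CUDA cores x 2 ops/clock (fused multiply-add) x boost clock
def peak_fp32_tflops(cuda_cores: int, boost_mhz: int) -> float:
    return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

print(round(peak_fp32_tflops(3072, 1815), 1))  # -> 11.2 (RTX 2080 Super)
print(round(peak_fp32_tflops(2944, 1800), 1))  # -> 10.6 (RTX 2080 FE)
```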
Those more aggressive specs do influence GeForce RTX 2080 Super’s power consumption, bumping its board rating up to 250W. However, Nvidia still gets by with eight- and six-pin auxiliary power connectors.
For that matter, GeForce RTX 2080 Super and GeForce RTX 2080 Founders Edition are almost identical from the outside, other than some Super branding on the backplate and an RTX 2080 Super logo over a reflective sticker applied to the front. A pair of 13-blade, 8.5cm axial fans pushes air through the heatsink to carry heat away from TU104 as quickly as possible.
The same forged aluminum shroud holds them in place over a dense fin stack soldered onto a vapor chamber.
Inside, you’re looking at the same 8 (GPU) + 2 (memory)-phase power supply. Six of the GPU’s phases are fed by the power connectors, while the other two draw current from the PCIe slot.
Up front, Nvidia exposes the same display outputs: three DisplayPort 1.4 connectors, one HDMI 2.0b port, and one USB Type-C interface with VirtualLink support.
How We Tested GeForce RTX 2080 Super
With the GeForce RTX 2060 Super and 2070 Super reviews behind us, along with our coverage of Radeon RX 5700 and 5700 XT, we were able to fill in a few more blanks in our benchmark data using a brand-new platform. The machine we’re testing on now is powered by Intel’s Core i7-8086K six-core CPU on a Z370 Aorus Ultra Gaming motherboard with 64GB of a Corsair CMK128GX4M8A2400OC14 kit. We’re still using a couple of 500GB Crucial MX200 SSDs for our gaming suite, along with Noctua’s NH-D15S heat sink/fan combo.
Our latest library of data already included GeForce RTX 2080, GeForce RTX 2070, GeForce RTX 2060, GeForce GTX 1080 Ti, GeForce GTX 1080, GeForce GTX 1070 Ti, and GeForce GTX 1070. To that list, we added GeForce RTX 2080 Ti. All of those cards are represented by Nvidia’s own Founders Edition models except for the 1070 Ti, which is an MSI GeForce GTX 1070 Ti Gaming 8G. AMD’s own Radeon VII is part of the comparison as well, along with Sapphire’s Nitro+ Radeon RX Vega 64 and Nitro+ Radeon RX Vega 56. Those partner cards ensure we don’t see the frequency/throttling issues encountered with our reference models.
Our benchmark selection includes Battlefield V, Destiny 2, Far Cry 5, Final Fantasy XV, Forza Horizon 4, Grand Theft Auto V, Metro Exodus, Shadow of the Tomb Raider, Strange Brigade, Tom Clancy’s The Division 2, Tom Clancy’s Ghost Recon Wildlands, The Witcher 3 and Wolfenstein II: The New Colossus.
The testing methodology we're using comes from PresentMon: Performance In DirectX, OpenGL, And Vulkan. In short, these games are evaluated using a combination of OCAT and our own in-house GUI for PresentMon, with logging via GPU-Z.
We tested GeForce RTX 2060 Super and 2070 Super with driver build 431.16, and the previous-gen Nvidia cards with build 430.86. Earlier this month, Nvidia published driver build 431.36, which changed performance in Tom Clancy’s The Division 2, Strange Brigade, and Metro Exodus, so we re-tested GeForce RTX 2080, 2070, and 2060 with that update. On AMD’s side, we’re using Adrenalin 2019 Edition 19.6.3 for all three cards.