Nvidia GeForce RTX 2080 Founders Edition Review: Faster, More Expensive Than GeForce GTX 1080 Ti

GeForce RTX 2080 doesn’t get a day in the sun. It’s thrust upon us, born alongside a handsomer, more athletic GeForce RTX 2080 Ti sibling. Enthusiasts fawn over that card’s ability to dribble through 4K resolutions at maximum quality without breaking a sweat. Though the 2080 Ti is obscenely expensive, it knows no equal and therefore sets a new bar for the competition to ogle. We all love a winner.

And then there’s GeForce RTX 2080. No slouch itself, the TU104-based board was bound to be fast by virtue of genetics. Indeed, Nvidia’s Founders Edition implementation generally outperforms the GeForce GTX 1080 Ti, a onetime king of gaming performance. But it’s burdened by an $800 price tag. At a time when you can still find GTX 1080 Tis for $700, slightly higher frame rates from a more expensive RTX 2080 fail to impress. And so we wait, either for the supply of previous-gen Pascal GPUs to dry up or for third-party 2080s to appear at the $700 price point Nvidia promised back when Turing was announced.

Fortunately, GeForce RTX 2080’s prospects for the future are promising. Not only does the card serve up GTX 1080 Ti-class performance, but it also supports the Turing-exclusive features that we know Nvidia is working hard to make available: real-time ray tracing via fixed-function RT cores, DLSS and AI denoising through its Tensor cores, mesh shaders, variable rate shading—all of the capabilities covered in Nvidia’s Turing Architecture Explored: Inside the GeForce RTX 2080.

TU104: Turing With Middle Child Syndrome

Like the TU102 GPU found in GeForce RTX 2080 Ti, TSMC manufactures TU104 on its 12nm FinFET node. But a transistor count of 13.6 billion results in a smaller 545 mm² die. “Smaller,” of course, requires a bit of context. Turing Jr out-measures the last generation’s 471 mm² flagship (GP102).

TU104 is constructed with the same building blocks as TU102; it just features fewer of them. Streaming Multiprocessors still sport 64 CUDA cores, eight Tensor cores, one RT core, four texture units, 16 load/store units, 256KB of register space, and 96KB of L1 cache/shared memory. TPCs are still composed of two SMs and a PolyMorph geometry engine. Only here, there are four TPCs per GPC, and six GPCs spread across the processor. Therefore, a fully enabled TU104 wields 48 SMs, 3072 CUDA cores, 384 Tensor cores, 48 RT cores, 192 texture units, and 24 PolyMorph engines. A correspondingly narrower back end feeds the compute resources through eight 32-bit GDDR6 memory controllers (256-bit aggregate) attached to 64 ROPs and 4MB of L2 cache.
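The arithmetic above is easy to verify: multiply the per-SM resources through the SM-per-TPC, TPC-per-GPC, and GPC counts. A quick sketch in Python (the constants simply restate the figures in the text):

```python
# Per-SM resources, as described above (same building blocks as TU102)
CUDA_PER_SM = 64
TENSOR_PER_SM = 8
RT_PER_SM = 1
TEX_PER_SM = 4

SMS_PER_TPC = 2    # two SMs plus a PolyMorph engine per TPC
TPCS_PER_GPC = 4
GPCS = 6

sms = GPCS * TPCS_PER_GPC * SMS_PER_TPC
print("SMs:              ", sms)                    # 48
print("CUDA cores:       ", sms * CUDA_PER_SM)      # 3072
print("Tensor cores:     ", sms * TENSOR_PER_SM)    # 384
print("RT cores:         ", sms * RT_PER_SM)        # 48
print("Texture units:    ", sms * TEX_PER_SM)       # 192
print("PolyMorph engines:", GPCS * TPCS_PER_GPC)    # 24 (one per TPC)
```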

TU104 also loses an eight-lane NVLink connection, limiting it to one x8 link and 50 GB/s of bi-directional throughput.

GeForce RTX 2080: TU104 Gets A (Tiny) Haircut

After seeing the GeForce RTX 2080 Ti serve up respectable performance in Battlefield V at 1920x1080 with ray tracing enabled, we can’t help but wonder if GeForce RTX 2080 is fast enough to maintain playable frame rates. Even a complete TU104 GPU is limited to 48 RT cores compared to TU102’s 68. But because Nvidia goes in and turns off one of TU104’s TPCs to create GeForce RTX 2080, another pair of RT cores is lost (along with 128 CUDA cores, eight texture units, 16 Tensor cores, and so on).

Unfortunately, we’ll have to wait for another day to measure RTX 2080’s alacrity in ray-traced games. There simply aren’t any available yet. UL did send us its 3DMark Ray Tracing Tech Demo to check out, and we were able to record some video from the Star Wars Reflections demo running on a GeForce RTX 2080 Ti. But the real excitement happens in a couple of months, when game developers implement the first hybrid rendering paths. Until then, GeForce RTX 2080’s ability to keep up in those workloads remains a mystery.

So, in the end, GeForce RTX 2080 struts onto the scene with 46 SMs hosting 2944 CUDA cores, 368 Tensor cores, 46 RT cores, 184 texture units, 64 ROPs, and 4MB of L2 cache. Eight gigabytes of 14 Gb/s GDDR6 on a 256-bit bus move up to 448 GB/s of data, adding more than 100 GB/s of memory bandwidth beyond what GeForce GTX 1080 could do.
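The bandwidth math is straightforward: per-pin data rate times bus width, converted from bits to bytes. A quick sketch:

```python
def gddr_bandwidth_gbs(data_rate_gbps_per_pin, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate x bus width, bits -> bytes."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

rtx2080 = gddr_bandwidth_gbs(14, 256)   # 14 Gb/s GDDR6 on a 256-bit bus
gtx1080 = gddr_bandwidth_gbs(10, 256)   # GTX 1080: 10 Gb/s GDDR5X, 256-bit
print(rtx2080, gtx1080, rtx2080 - gtx1080)  # 448.0 320.0 128.0
```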


| | GeForce RTX 2080 Ti FE | GeForce RTX 2080 FE | GeForce GTX 1080 Ti FE | GeForce GTX 1080 FE |
| --- | --- | --- | --- | --- |
| Architecture (GPU) | Turing (TU102) | Turing (TU104) | Pascal (GP102) | Pascal (GP104) |
| CUDA Cores | 4352 | 2944 | 3584 | 2560 |
| Peak FP32 Compute | 14.2 TFLOPS | 10.6 TFLOPS | 11.3 TFLOPS | 8.9 TFLOPS |
| Tensor Cores | 544 | 368 | N/A | N/A |
| RT Cores | 68 | 46 | N/A | N/A |
| Texture Units | 272 | 184 | 224 | 160 |
| Base Clock Rate | 1350 MHz | 1515 MHz | 1480 MHz | 1607 MHz |
| GPU Boost Rate | 1635 MHz | 1800 MHz | 1582 MHz | 1733 MHz |
| Memory Capacity | 11GB GDDR6 | 8GB GDDR6 | 11GB GDDR5X | 8GB GDDR5X |
| Memory Bus | 352-bit | 256-bit | 352-bit | 256-bit |
| Memory Bandwidth | 616 GB/s | 448 GB/s | 484 GB/s | 320 GB/s |
| ROPs | 88 | 64 | 88 | 64 |
| L2 Cache | 5.5MB | 4MB | 2.75MB | 2MB |
| TDP | 260W | 225W | 250W | 180W |
| Transistor Count | 18.6 billion | 13.6 billion | 12 billion | 7.2 billion |
| Die Size | 754 mm² | 545 mm² | 471 mm² | 314 mm² |
| SLI Support | Yes (x8 NVLink, x2) | Yes (x8 NVLink) | Yes (MIO) | Yes (MIO) |

Nvidia’s Founders Edition card sports a 1515 MHz base frequency and 1800 MHz GPU Boost rating. Peak FP32 compute performance of 10.6 TFLOPS puts GeForce RTX 2080 behind GeForce GTX 1080 Ti (11.3 TFLOPS), but well ahead of GeForce GTX 1080 (8.9 TFLOPS). Of course, the faster Founders Edition model also uses more power. Its 225W TDP is 10W higher than the reference GeForce RTX 2080, and a full 45W above last generation’s GeForce GTX 1080. Still, 225W is low enough that Nvidia gets away with one six- and one eight-pin supplementary power connector (versus RTX 2080 Ti’s pair of eight-pin connectors).
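Those TFLOPS figures follow from the standard rule of thumb of two FP32 operations (one fused multiply-add) per CUDA core per clock at the rated GPU Boost frequency:

```python
def peak_fp32_tflops(cuda_cores, boost_mhz):
    # Each CUDA core retires one FMA (2 FLOPs) per clock at the rated boost.
    return 2 * cuda_cores * boost_mhz * 1e6 / 1e12

print(round(peak_fp32_tflops(2944, 1800), 1))  # RTX 2080 FE -> 10.6
print(round(peak_fp32_tflops(3584, 1582), 1))  # GTX 1080 Ti -> 11.3
print(round(peak_fp32_tflops(2560, 1733), 1))  # GTX 1080    -> 8.9
```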

With its thermal solution removed, the GeForce RTX 2080’s PCB looks a little tidier than what we found on GeForce RTX 2080 Ti. After all, it hosts far fewer components. The power supply, for example, is a conventional 8 (GPU) + 2 (memory)-phase design. Nvidia didn’t need any of the trickery we discovered on its flagship. Six of the GPU’s phases are fed by the aforementioned power connectors (along with the memory’s phases), while the other two originate at the PCIe slot.

The PWM controller responsible for the GPU’s power phases is surface-mounted around back, while the one corresponding to Micron’s GDDR6 modules is up toward the top, under a PCIe connector.

It’s easy to tell where the memory phases are located; they’re up top as well, next to the higher-inductance coils.

GPU Power Supply

Front and center in this design is uPI's uP9512 eight-phase buck controller specifically designed to support next-gen GPUs. Per uPI, "the uP9512 provides programmable output voltage and active voltage positioning functions to adjust the output voltage as a function of the load current, so it is optimally positioned for a load current transient."

The uP9512 supports Nvidia's Open Voltage Regulator Type 4i+ technology with PWMVID. This input is buffered and filtered to produce a very accurate reference voltage. The output voltage is then precisely controlled to the reference input. An integrated SMBus interface offers enough flexibility to optimize performance and efficiency, while also facilitating communication with the appropriate software.

All 10 voltage regulation circuits are equipped with an ON Semiconductor FDMF3160 Smart Power Stage module with integrated PowerTrench MOSFETs and driver ICs.

As usual, the coils rely on encapsulated ferrite cores, but this time they are rectangular to make room for the voltage regulator circuits.

Memory Power Supply

Micron's MT61K256M32JE-14:A memory ICs are powered by two phases coming from a second uP9512. The same FDMF3160 Smart Power Stage modules crop up yet again. The 470nH coils offer greater inductance than the ones found on the GPU power phases, but they're completely identical in terms of physical dimensions.

The input filtering takes place via three 1μH coils, and each of the three supply lines also carries a matching shunt: a very low-value resistance whose voltage drop is measured and passed on to the telemetry circuitry. Through these readings, Nvidia can limit board power precisely.
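The telemetry arithmetic itself is just Ohm's law. A minimal sketch (the shunt value and readings below are illustrative, not measurements from this card):

```python
def rail_power_w(v_drop_mv, r_shunt_mohm, rail_v):
    """Power drawn through one supply line, inferred from its shunt.

    Current follows from the voltage drop across a known shunt resistance
    (I = V / R; mV divided by mOhm yields amps), then multiplied by the
    rail voltage to get watts.
    """
    current_a = v_drop_mv / r_shunt_mohm
    return current_a * rail_v

# e.g. a 25 mV drop across a hypothetical 5 mOhm shunt on the 12V rail:
print(rail_power_w(25, 5, 12.0))  # -> 60.0 W through that line
```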

Unfortunately for the folks who like a bit of redundancy, this card only comes equipped with one BIOS.

How We Tested GeForce RTX 2080

Nvidia’s latest and greatest will no doubt be found in one of the many high-end platforms now available from AMD and Intel. Our graphics station still employs an MSI Z170 Gaming M7 motherboard with an Intel Core i7-7700K CPU at 4.2 GHz, though. The processor is complemented by G.Skill’s F4-3000C15Q-16GRR memory kit. Crucial’s MX200 SSD remains, joined by a 1.4TB Intel DC P3700 loaded down with games.

As far as competition goes, we can assume that GeForce RTX 2080 is bested by GeForce RTX 2080 Ti and Titan V, both of which we have in our test pool. We also compare GeForce GTX 1080 Ti, Titan X, GeForce GTX 1080, GeForce GTX 1070 Ti, and GeForce GTX 1070 from Nvidia. AMD is represented by the Radeon RX Vega 64 and 56. All cards are either Founders Edition or reference models. We do have some partner boards in-house from both Nvidia and AMD, and plan to use those for third-party reviews.

Our benchmark selection now includes Ashes of the Singularity: Escalation, Battlefield 1, Civilization VI, Destiny 2, Doom, Far Cry 5, Forza Motorsport 7, Grand Theft Auto V, Metro: Last Light Redux, Rise of the Tomb Raider, Tom Clancy’s The Division, Tom Clancy’s Ghost Recon Wildlands, The Witcher 3, and World of Warcraft: Battle for Azeroth. We were working on adding Monster Hunter: World, Shadow of the Tomb Raider, Wolfenstein II, and a couple of others, but had to scrap those plans due to very limited time with Nvidia’s final driver for its Turing-based cards.

The testing methodology we're using comes from PresentMon: Performance In DirectX, OpenGL, And Vulkan. In short, all of these games are evaluated using a combination of OCAT and our own in-house GUI for PresentMon, with logging via AIDA64.
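For readers who want to reproduce this kind of analysis, every tool in that chain ultimately logs per-frame present intervals, which reduce to the usual metrics. A minimal sketch (illustrative, not our in-house GUI; it assumes PresentMon's standard MsBetweenPresents column):

```python
import csv
import statistics

def fps_stats(presentmon_csv_path):
    """Summarize an OCAT/PresentMon frame-time capture.

    Reads the per-frame 'MsBetweenPresents' column and returns
    (average FPS, 99th-percentile frame time in ms).
    """
    with open(presentmon_csv_path, newline="") as f:
        frame_times_ms = [float(row["MsBetweenPresents"])
                          for row in csv.DictReader(f)]
    avg_fps = 1000.0 / statistics.mean(frame_times_ms)
    idx = max(0, int(0.99 * len(frame_times_ms)) - 1)
    p99_ms = sorted(frame_times_ms)[idx]
    return avg_fps, p99_ms
```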

All of the numbers you see in today’s piece are fresh, using updated drivers. For Nvidia, we’re using build 411.51 for GeForce RTX 2080 Ti and 2080. The other cards were tested with build 398.82. Titan V’s results were spot-checked with 411.51 to ensure performance didn’t change. AMD’s cards utilize Radeon Software Adrenalin Edition 18.8.1, which was the latest at test time.


Comment from the forums
  • chaosmassive
    thank you for your thorough review on these cards,
    finally the card has been demystified and indeed for the price is it not worth the buy considering 1080 ti in such a low price..

    turned out I dont need ray tracing in my life before I die.
  • wh3resmycar
    failure, this is a failure.

    this gtx20 series looks like it won't be worth it.
  • velocityg4
    What? No “just buy it” in your conclusion.

    Considering Founders edition usually starts about $100 more that standard edition. Plus, it is new to market. If a 2080 can be had for $100 more than a 1080 Ti. The price is as expected.
  • tojumikie
    another price-panicked pundit
  • Krazie_Ivan
    2080 should have been the 2070, as it barely beats a 1080ti and is the TU104 die. and given the 30mo since Pascal launch, we should almost be looking at 3000 series benches. combine those two with the insane pricing, and Turing/RTX is a huge disappointment. DLSS could be nice and i'm glad Nvidia is pushing for RT development, but there's not enough positives here to justify the costs. $380 2080 / $500 2080ti (and relabel them to match their die codes, like Keplar-Pascal)... otherwise, no thx.
  • shrapnel_indie
    Quote:
    After seeing the GeForce RTX 2080 Ti serve up respectable performance in Battlefield V at 1920x1080 with ray tracing enabled,

    Quote:
    Unfortunately, we’ll have to wait for another day to measure RTX 2080’s alacrity in ray-traced games. There simply aren’t any available yet.


    Odd... either ray tracing graphics games are available or they're not. You can't test what isn't available for testing... and RT for BF5, last I heard was a zero-day patch... (or was it the modifications to RT that was supposed to improve FPS to acceptable levels.)
  • cangelini
    Anonymous said:
    Quote:
    After seeing the GeForce RTX 2080 Ti serve up respectable performance in Battlefield V at 1920x1080 with ray tracing enabled,

    Quote:
    Unfortunately, we’ll have to wait for another day to measure RTX 2080’s alacrity in ray-traced games. There simply aren’t any available yet.


    Odd... either ray tracing graphics games are available or they're not. You can't test what isn't available for testing... and RT for BF5, last I heard was a zero-day patch... (or was it the modifications to RT that was supposed to improve FPS to acceptable levels.)


    They're not available, but we've seen Battlefield 5 in action with ray tracing enabled ;)
  • WINTERLORD
    wait a minute the 2080 has only one RT core and the 2080 has 72 RT cores? I think there may be an error in the review. update spoke to soon i think that means 1rt cluster...

    first page says " TU104 is constructed with the same building blocks as TU102; it just features fewer of them. Streaming Multiprocessors still sport 64 CUDA cores, eight Tensor cores, one RT core, four texture units, 16 load/store units, 256KB of register space, and 96KB of L1 cache/shared memory. "
  • jimmysmitty
    Anonymous said:
    failure, this is a failure.

    this gtx20 series looks like it won't be worth it.


    I think sales will determine that and if history is anything without stiff competition from AMD I am sure they will sell just fine especially once the AiB cards come out.

    Anonymous said:
    What? No “just buy it” in your conclusion.

    Considering Founders edition usually starts about $100 more that standard edition. Plus, it is new to market. If a 2080 can be had for $100 more than a 1080 Ti. The price is as expected.


    Chris has never been like that.

    That said, the pricing should be decent for AiB after a few months. When they launch they get price gouged. Still I would have loved a GTX 1080 price number. That GPU outperformed the 980 Ti by a good margin and was cheaper at launch.

    Maybe AMD will come out with something sometime soon. Otherwise we wont see pricing drop. That or AMD will take advantage of the pricing increase and up theirs too.
  • hixbot
    Such a shame 2.5 years after pascal launches, performance per dollar does not improve.
  • mapesdhs
    Wait, are you comparing to a 1080 Ti FE here? But who has that? Most people would have AIB versions of the 1080 Ti, in which case the margin between it and the 2080 FE will be smaller, and in more cases the 2080 FE will be slower. The charts really should include at least one typically decent AIB card, like an FTW3 or something.
  • jimmysmitty
    Anonymous said:
    Wait, are you comparing to a 1080 Ti FE here? But who has that? Most people would have AIB versions of the 1080 Ti, in which case the margin between it and the 2080 FE will be smaller, and in more cases the 2080 FE will be slower. The charts really should include at least one typically decent AIB card, like an FTW3 or something.


    True but the AiB versions of the 20 series will be out soon as well meaning they should also increase performance with higher stock clocks/faster VRAM.
  • TMTOWTSAC
    Looks like the 2080 ti is the first true no-compromise 4k card. That's going to be worth it to a lot of people regardless of price. The 2080 will live or die based on its performance in RT titles, and whether or not RT games take off quickly enough of course.

    All of which makes me think the 2070 is DOA. There's no way it can be as fast as the 1080 ti. It might not even be cheaper. And RT performance? It has half the RT cores of the 2080 ti running at a lower clock speed. If the 2080 ti is targeting 60fps@1080p with RT on, there's no way the 2070 can produce acceptable framerates. Is there anywhere in the product stack for that card?
  • saunupe1911
    You guys are missing the big picture here. It's the later half of 2018 and we still can't get 4K 60 FPS gaming at Ultra settings on a graphics card for under $1K. I would rather just stick to my little 1070 and crank settings down here and there to achieve what I need smh. I could care less about 144 MHz at 1440p.

    And I don't even want to read a 2070 review. That card simply won't be worth it.
  • cryoburner
    I'm glad to see the review put a heavy focus on price compared to the 1080 Ti, rather than just saying "Wowzers, look at how much faster it is than a 1080!"

    Also, I found it interesting that the performance of these cards if often a bit more similar to Vega. Not the specific performance levels, but the performance of the architectures in general relative to Pascal. In the games that Vega hits harder against Pascal, like BF1 or Forza, Turing tends to as well, while the games where Vega Falls behind, Turing's results are also less impressive. That is, aside from maybe Division, where Vega does exceptionally well, but Turing does somewhat poorly. Perhaps upcoming cards that show more performance overlap between the two companies will perform more similar than we've seen in recent years though.

    Of course, there's also raytracing performance that could affect things to some unknown degree. Turing should handle raytraced effects far better than Pascal, but it's yet to be seen whether AMD's next cards will offer competitive raytracing performance as well, or even if raytraced effects will become common enough within the next couple years for it to even matter much. The same goes for DLSS, which could potentially provide better or faster antialiasing. Though I would take Nvidia's cherry-picked Final Fantasy tech demo with a grain of salt, since it's difficult to say what exact settings were used, or whether they selected a certain scene where the feature worked atypically well. Until proper games are able to be tested with the feature, it's anyone's guess.

    Anonymous said:
    Looks like the 2080 ti is the first true no-compromise 4k card.

    I wouldn't say that. As soon as games start adding raytraced effects, it might might be lucky to maintain 60fps at 1080p. : P It would certainly be a compromise having to disable major visual effects just to get anywhere close to pushing 4K resolution on a $1200 card. It's certainly possible that developers will greatly tone down the quality of the effects to get them running better though, but that means these technology demonstrations of RTX that Nvidia has been showing off might not actually be all that representative of how the effects will look when they appear in actual games. I know Battlefield V's developer was already talking about having to scale back the raytracing effects from what was shown at Nvidia's conference. Either way, I don't expect the 2080 Ti to be running next year's games at max graphics settings with stable frame rates at 4K.
  • TJ Hooker
    It looks like the 2080 consumers nearly as much power as the 1080 Ti. That means that, in addition to not offering a meaningful improvement to performance per dollar compared to a card that came out 1.5 years ago, it doesn't significantly improve on performance per watt either...
  • newsonline5000000
    Last time the 1060 replaced the 980

    and the 1070 was faster than 980 and same speed of 980 ti


    and now after 2.5 years Nvidia is giving us What exactly ? RTX 2080 for $800 ??? The RTX 2080 should replace 1070 at $450
  • none12345
    About what i expected. The 2080ti is the money is no object champion. For anyone who can afford it, its the card to get.

    The rest of the RTX series is not worth it. Performance/$ goes down, which is absurd. Performance/$ has never gone down before for a new line of gpus. Especially after waiting over 2 years(which is a long time to wait for a new gen of gpus compared to the past).

    I don't have a 16nm card, but I'll wait for 7nm cards, i want a performance/$ increase on a new generation or no sale. And i don't want to buy into a >2 year old generation, so im not buying a 1080 at this point either. I'll wait for something better at a reasonable price.

    While i have been wanting high fidelity real time ray tracing for the past 20 years...i am not willing to pay a premium for a first draft, especially when there are no games to play. Ill be looking at ray tracing when the next iteration is out. When the bugs have been worked out, when games are out, and when performance is acceptable. For now, i only care about rasterization. History has taught us that when new gpu features come out in a card, the first gen isnt usually very good at it.
  • cryoburner
    Anonymous said:
    It looks like the 2080 consumers nearly as much power as the 1080 Ti. That means, in addition to not offering a meaningful improvement to performance per dollar compared to a card that came out 1.5 years ago, it doesn't significantly improve on performance per watt either...

    It makes some sense, since it has fewer graphics cores, but the ones that are there are clocked higher to make up the difference, and it is also adding RT and Tensor cores, while the efficiency gains moving from 16nm to 12nm should be a lot smaller than the jump from 28nm to 16nm.

    I'm curious how much something like hybrid raytracing will affect the power use as well. It seems like the card sticks hard to a 225 watt limit though, so perhaps it will cut into the graphics core clocks when RTX is active to divert power to the RT cores.
  • TJ Hooker
    Anonymous said:
    Anonymous said:
    It looks like the 2080 consumers nearly as much power as the 1080 Ti. That means, in addition to not offering a meaningful improvement to performance per dollar compared to a card that came out 1.5 years ago, it doesn't significantly improve on performance per watt either...

    It makes some sense, since it has fewer graphics cores, but the ones that are there are clocked higher to make up the difference, and it is also adding RT and Tensor cores, while the efficiency gains moving from 16nm to 12nm should be a lot smaller than the jump from 28nm to 16nm.

    Sure, but you can still get efficiency gains from architecture, independent of process node. Just look at Maxwell vs Kepler, both on 28 nm. It looks like any gains here are either very small or offset by RT/tensor cores as you suggest, making the inclusion of that hardware even more of a gamble on things like ray tracing and DLSS taking off.