GPU Benchmarks and Hierarchy 2022: Graphics Cards Ranked

GPU Benchmarks and Performance Hierarchy
(Image credit: Tom's Hardware)

Our GPU benchmarks hierarchy ranks all the current and previous generation graphics cards by performance, including all of the best graphics cards. Whether it's playing games or doing high-end creative work like 4K video editing, your graphics card typically plays the biggest role in determining performance, and even the best CPUs for gaming take a secondary role.

We've revamped our GPU testbed and updated all of our benchmarks for 2022, and have now finished retesting nearly every graphics card from the past several generations, plus some even older GPUs. Our full GPU hierarchy using traditional rendering comes first; below that is our ray tracing GPU benchmarks hierarchy. The latter requires a ray tracing capable GPU, so only AMD's RX 6000-series, Intel's Arc, and Nvidia's RTX cards are present. Note that all results were gathered without DLSS, DLSS 3, or XeSS enabled.

Nvidia launched the GeForce RTX 4090, which, as expected, leaped to the top of our GPU benchmarks. It's definitely a fast card, so fast that if you're not gaming at 4K, you might not need everything it can offer. Nvidia followed up with the GeForce RTX 4080, which uses the same Ada Lovelace architecture but has far fewer shaders and less compute.

At the other end of the pricing spectrum, the Intel Arc A770 and Intel Arc A750 finally arrived, and while they're not the fastest GPUs, they have promise and are priced below their direct Nvidia competition (and probably closer to tied with their AMD equivalents). Speaking of, we also have the Sapphire RX 6700 10GB in the list now — we'll have the review done as soon as we're able. Meanwhile, AMD revealed specs for its RX 7900 XTX / XT and details of the RDNA 3 architecture as well, with cards set to arrive on December 13.

Below our main tables, you'll find our 2020–2021 benchmark suite, which has all of the previous generation GPUs running our older test suite on a Core i9-9900K testbed. We also have the legacy GPU hierarchy (without benchmarks) at the bottom of the article for reference purposes.

The following tables sort everything solely by our performance-based GPU gaming benchmarks, at 1080p "ultra" for the main suite and at 1080p "medium" for the DXR suite. Price, graphics card power consumption, overall efficiency, and features aren't factored into the rankings here. We've switched to a new Alder Lake Core i9-12900K testbed, changed up our test suite, and retested the past several generations of GPUs. Now let's hit the benchmarks and tables.

GPU Benchmarks Ranking 2022

For our latest benchmarks, we test (nearly) all GPUs at 1080p medium and 1080p ultra, and sort the table by the 1080p ultra results. Where it makes sense, we also test at 1440p ultra and 4K ultra. All of the scores are scaled relative to the top-ranking 1080p ultra card, which in our new suite is the RTX 4090; note that at 1080p medium the RX 6950 XT actually comes out slightly ahead, which is why it shows more than 100% in that column.

You can also see the above summary chart showing the relative performance of the cards we've tested across the past several generations of hardware at 1080p ultra — swipe through the above gallery if you want to see the 1080p medium, 1440p and 4K ultra images. There are a few missing options (e.g., the GT 1030, RX 550, and several Titan cards), but otherwise it's basically complete. We do have data in the table below for some of the other (older) GPUs.

The eight games we're using for our standard GPU benchmarks hierarchy are Borderlands 3 (DX12), Far Cry 6 (DX12), Flight Simulator (DX11/DX12), Forza Horizon 5 (DX12), Horizon Zero Dawn (DX12), Red Dead Redemption 2 (Vulkan), Total War Warhammer 3 (DX11), and Watch Dogs Legion (DX12). The fps score is the geometric mean (equal weighting) of the eight games.
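The scoring method above can be sketched in a few lines of Python: take the equal-weighted geometric mean of the per-game fps results, then scale each card's score against the fastest card. The fps values below are made up for illustration, not actual benchmark data.

```python
import math

def geometric_mean(fps_values):
    """Equal-weighted geometric mean of per-game fps results."""
    return math.prod(fps_values) ** (1 / len(fps_values))

# Hypothetical per-game fps for two cards across an eight-game suite.
card_a = [120.0, 95.0, 80.0, 140.0, 110.0, 90.0, 130.0, 100.0]
card_b = [90.0, 70.0, 65.0, 100.0, 85.0, 60.0, 95.0, 75.0]

score_a = geometric_mean(card_a)
score_b = geometric_mean(card_b)

# Scale relative to the fastest card, as in the hierarchy tables.
relative_b = 100.0 * score_b / max(score_a, score_b)
print(f"card_a: {score_a:.1f} fps, card_b: {score_b:.1f} fps ({relative_b:.1f}%)")
```

The geometric mean is used rather than a simple average so that no single high-fps game dominates the overall score.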

Tom's Hardware GPU Benchmarks Hierarchy
Graphics Card | 1080p Ultra | 1080p Medium | 1440p Ultra | 4K Ultra | Specifications (Links to Review)
GeForce RTX 4090 | 100.0% (147.4fps) | 100.0% (188.0fps) | 100.0% (143.2fps) | 100.0% (116.3fps) | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 4080 | 95.5% (140.8fps) | 96.6% (181.7fps) | 90.8% (130.0fps) | 78.5% (91.4fps) | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W
Radeon RX 6950 XT | 93.6% (137.9fps) | 101.6% (191.0fps) | 80.6% (115.4fps) | 60.4% (70.3fps) | Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W
Radeon RX 6900 XT | 90.1% (132.9fps) | 99.0% (186.2fps) | 75.1% (107.6fps) | 55.7% (64.8fps) | Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3090 Ti | 89.8% (132.4fps) | 95.8% (180.1fps) | 79.5% (113.9fps) | 65.1% (75.7fps) | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
Radeon RX 6800 XT | 86.4% (127.4fps) | 96.1% (180.7fps) | 71.2% (102.0fps) | 51.9% (60.4fps) | Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W
GeForce RTX 3090 | 85.9% (126.6fps) | 94.7% (178.1fps) | 74.4% (106.5fps) | 59.1% (68.8fps) | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
GeForce RTX 3080 12GB | 84.5% (124.5fps) | 94.8% (178.2fps) | 72.6% (104.0fps) | 57.0% (66.3fps) | GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W
GeForce RTX 3080 Ti | 83.7% (123.4fps) | 93.1% (174.9fps) | 72.2% (103.4fps) | 57.1% (66.5fps) | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W
Radeon RX 6800 | 79.1% (116.7fps) | 93.1% (174.9fps) | 63.5% (90.9fps) | 45.2% (52.5fps) | Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W
GeForce RTX 3080 | 78.9% (116.3fps) | 92.3% (173.4fps) | 66.7% (95.5fps) | 52.1% (60.6fps) | GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W
Radeon RX 6750 XT | 71.5% (105.3fps) | 90.5% (170.2fps) | 54.6% (78.2fps) | 37.0% (43.1fps) | Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W
GeForce RTX 3070 Ti | 70.6% (104.1fps) | 86.4% (162.4fps) | 57.6% (82.6fps) | 40.2% (46.8fps) | GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W
Titan RTX | 68.5% (101.0fps) | 84.2% (158.2fps) | 56.2% (80.5fps) | 41.5% (48.3fps) | TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W
Radeon RX 6700 XT | 67.7% (99.8fps) | 86.2% (162.1fps) | 51.3% (73.4fps) | 34.8% (40.5fps) | Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W
GeForce RTX 3070 | 67.7% (99.8fps) | 83.9% (157.7fps) | 54.1% (77.5fps) | 37.1% (43.2fps) | GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W
GeForce RTX 2080 Ti | 65.1% (96.0fps) | 80.7% (151.6fps) | 52.6% (75.3fps) | 38.3% (44.6fps) | TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W
GeForce RTX 3060 Ti | 62.1% (91.5fps) | 79.6% (149.7fps) | 48.7% (69.7fps) | | GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W
Radeon RX 6700 10GB | 59.5% (87.7fps) | 78.1% (146.8fps) | 44.3% (63.5fps) | 28.5% (33.1fps) | Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W
GeForce RTX 2080 Super | 57.6% (84.9fps) | 73.3% (137.8fps) | 45.3% (64.9fps) | 29.7% (34.5fps) | TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W
GeForce RTX 2080 | 55.8% (82.2fps) | 70.8% (133.1fps) | 43.6% (62.4fps) | | TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Intel Arc A770 16GB | 54.5% (80.3fps) | 66.0% (124.0fps) | 43.5% (62.3fps) | 31.6% (36.7fps) | ACM-G10, 4096 shaders, 2100MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W
Radeon RX 6650 XT | 54.1% (79.8fps) | 73.6% (138.4fps) | 39.6% (56.7fps) | | Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W
Radeon RX 6600 XT | 52.9% (78.0fps) | 72.6% (136.5fps) | 38.3% (54.9fps) | | Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W
GeForce RTX 2070 Super | 51.8% (76.4fps) | 66.0% (124.1fps) | 40.1% (57.4fps) | | TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W
Radeon RX 5700 XT | 50.0% (73.7fps) | 66.9% (125.8fps) | 37.2% (53.3fps) | 25.1% (29.3fps) | Navi 10, 2560 shaders, 1905MHz, 8GB GDDR6@14Gbps, 448GB/s, 225W
Intel Arc A750 | 48.5% (71.4fps) | 61.9% (116.4fps) | 38.6% (55.2fps) | 27.3% (31.8fps) | ACM-G10, 3584 shaders, 2050MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W
GeForce RTX 3060 | 47.6% (70.2fps) | 63.2% (118.8fps) | 36.7% (52.6fps) | | GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W
Radeon VII | 47.3% (69.7fps) | 60.7% (114.0fps) | 37.0% (53.0fps) | 27.0% (31.4fps) | Vega 20, 3840 shaders, 1750MHz, 16GB HBM2@2.0Gbps, 1024GB/s, 300W
GeForce RTX 2070 | 46.1% (67.9fps) | 58.9% (110.7fps) | 35.6% (51.0fps) | | TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon RX 6600 | 45.2% (66.7fps) | 62.7% (117.8fps) | 32.2% (46.1fps) | | Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W
GeForce GTX 1080 Ti | 45.1% (66.5fps) | 58.8% (110.6fps) | 35.1% (50.3fps) | 25.4% (29.5fps) | GP102, 3584 shaders, 1582MHz, 11GB GDDR5X@11Gbps, 484GB/s, 250W
GeForce RTX 2060 Super | 44.1% (65.1fps) | 56.3% (105.9fps) | 33.7% (48.2fps) | | TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W
Radeon RX 5700 | 44.0% (64.8fps) | 59.2% (111.3fps) | 32.9% (47.2fps) | | Navi 10, 2304 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 180W
Radeon RX 5600 XT | 39.4% (58.1fps) | 53.5% (100.6fps) | 29.3% (42.0fps) | | Navi 10, 2304 shaders, 1750MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
Radeon RX Vega 64 | 38.5% (56.8fps) | 50.2% (94.3fps) | 29.0% (41.6fps) | 20.2% (23.5fps) | Vega 10, 4096 shaders, 1546MHz, 8GB HBM2@1.89Gbps, 484GB/s, 295W
GeForce RTX 2060 | 37.4% (55.2fps) | 51.5% (96.8fps) | 27.0% (38.7fps) | | TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W
GeForce GTX 1080 | 36.1% (53.1fps) | 47.9% (90.0fps) | 27.5% (39.4fps) | | GP104, 2560 shaders, 1733MHz, 8GB GDDR5X@10Gbps, 320GB/s, 180W
GeForce RTX 3050 | 34.9% (51.4fps) | 47.6% (89.4fps) | 26.3% (37.6fps) | | GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
GeForce GTX 1070 Ti | 34.7% (51.1fps) | 45.6% (85.8fps) | 26.5% (37.9fps) | | GP104, 2432 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 180W
Radeon RX Vega 56 | 34.3% (50.6fps) | 44.9% (84.4fps) | 25.8% (37.0fps) | | Vega 10, 3584 shaders, 1471MHz, 8GB HBM2@1.6Gbps, 410GB/s, 210W
GeForce GTX 1660 Super | 30.7% (45.3fps) | 44.1% (82.8fps) | 22.6% (32.4fps) | | TU116, 1408 shaders, 1785MHz, 6GB GDDR6@14Gbps, 336GB/s, 125W
GeForce GTX 1660 Ti | 30.5% (45.0fps) | 43.8% (82.4fps) | 22.5% (32.2fps) | | TU116, 1536 shaders, 1770MHz, 6GB GDDR6@12Gbps, 288GB/s, 120W
GeForce GTX 1070 | 30.4% (44.8fps) | 39.9% (75.1fps) | 23.1% (33.1fps) | | GP104, 1920 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 150W
GeForce GTX 1660 | 27.3% (40.2fps) | 39.9% (75.1fps) | 19.9% (28.5fps) | | TU116, 1408 shaders, 1785MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W
Radeon RX 5500 XT 8GB | 27.0% (39.8fps) | 38.6% (72.6fps) | 19.9% (28.5fps) | | Navi 14, 1408 shaders, 1845MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W
Radeon RX 590 | 26.7% (39.4fps) | 36.5% (68.6fps) | 20.3% (29.1fps) | | Polaris 30, 2304 shaders, 1545MHz, 8GB GDDR5@8Gbps, 256GB/s, 225W
GeForce GTX 980 Ti | 24.3% (35.9fps) | 33.3% (62.6fps) | 18.6% (26.7fps) | | GM200, 2816 shaders, 1075MHz, 6GB GDDR5@7Gbps, 336GB/s, 250W
Radeon R9 Fury X | 24.0% (35.4fps) | 34.3% (64.4fps) | | | Fiji, 4096 shaders, 1050MHz, 4GB HBM@1Gbps, 512GB/s, 275W
Radeon RX 580 8GB | 24.0% (35.3fps) | 32.8% (61.7fps) | 18.2% (26.0fps) | | Polaris 20, 2304 shaders, 1340MHz, 8GB GDDR5@8Gbps, 256GB/s, 185W
GeForce GTX 1650 Super | 23.0% (33.9fps) | 36.2% (68.0fps) | 16.0% (23.0fps) | | TU116, 1280 shaders, 1725MHz, 4GB GDDR6@12Gbps, 192GB/s, 100W
Radeon RX 5500 XT 4GB | 22.7% (33.5fps) | 35.6% (66.9fps) | | | Navi 14, 1408 shaders, 1845MHz, 4GB GDDR6@14Gbps, 224GB/s, 130W
GeForce GTX 1060 6GB | 21.9% (32.2fps) | 30.9% (58.0fps) | 16.1% (23.0fps) | | GP106, 1280 shaders, 1708MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W
Radeon RX 6500 XT | 20.9% (30.8fps) | 35.0% (65.8fps) | 12.6% (18.0fps) | | Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W
Radeon R9 390 | 20.2% (29.8fps) | 27.2% (51.2fps) | | | Grenada, 2560 shaders, 1000MHz, 8GB GDDR5@6Gbps, 384GB/s, 275W
GeForce GTX 980 | 19.6% (28.9fps) | 28.6% (53.7fps) | | | GM204, 2048 shaders, 1216MHz, 4GB GDDR5@7Gbps, 256GB/s, 165W
GeForce GTX 1650 GDDR6 | 19.5% (28.8fps) | 30.1% (56.7fps) | | | TU117, 896 shaders, 1590MHz, 4GB GDDR6@12Gbps, 192GB/s, 75W
Intel Arc A380 | 19.2% (28.3fps) | 29.1% (54.7fps) | 13.6% (19.5fps) | | ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W
Radeon RX 570 4GB | 19.2% (28.3fps) | 28.5% (53.6fps) | 13.9% (20.0fps) | | Polaris 20, 2048 shaders, 1244MHz, 4GB GDDR5@7Gbps, 224GB/s, 150W
GeForce GTX 1060 3GB | 18.8% (27.8fps)* | 28.0% (52.6fps) | | | GP106, 1152 shaders, 1708MHz, 3GB GDDR5@8Gbps, 192GB/s, 120W
GeForce GTX 1650 | 18.2% (26.9fps) | 27.2% (51.1fps) | | | TU117, 896 shaders, 1665MHz, 4GB GDDR5@8Gbps, 128GB/s, 75W
GeForce GTX 970 | 18.0% (26.5fps) | 26.1% (49.1fps) | | | GM204, 1664 shaders, 1178MHz, 4GB GDDR5@7Gbps, 256GB/s, 145W
Radeon RX 6400 | 16.0% (23.7fps) | 27.6% (52.0fps) | | | Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W
GeForce GTX 780 | 14.9% (22.0fps)* | 20.5% (38.5fps) | | | GK110, 2304 shaders, 900MHz, 3GB GDDR5@6Gbps, 288GB/s, 230W
GeForce GTX 1050 Ti | 13.5% (19.8fps) | 20.2% (38.0fps) | | | GP107, 768 shaders, 1392MHz, 4GB GDDR5@7Gbps, 112GB/s, 75W
GeForce GTX 1630 | 11.4% (16.9fps) | 18.0% (33.9fps) | | | TU117, 512 shaders, 1785MHz, 4GB GDDR6@12Gbps, 96GB/s, 75W
GeForce GTX 1050 | 10.0% (14.8fps)* | 15.8% (29.8fps) | | | GP107, 640 shaders, 1455MHz, 2GB GDDR5@7Gbps, 112GB/s, 75W
Radeon RX 560 4GB | 10.0% (14.8fps) | 16.9% (31.8fps) | | | Baffin, 1024 shaders, 1275MHz, 4GB GDDR5@7Gbps, 112GB/s, 60-80W
Radeon RX 550 4GB | | 10.4% (19.6fps) | | | Lexa, 640 shaders, 1183MHz, 4GB GDDR5@7Gbps, 112GB/s, 50W
GeForce GT 1030 | | 7.7% (14.5fps) | | | GP108, 384 shaders, 1468MHz, 2GB GDDR5@6Gbps, 48GB/s, 30W

*: GPU couldn't run all tests, so the overall score is slightly skewed at 1080p ultra.

While the RTX 4090 does technically take first place at 1080p ultra, it's the 1440p and especially 4K numbers that impress. It's only 7% faster than the next closest RX 6950 XT at 1080p ultra, but that increases to 24% at 1440p and 65% at 4K. Against the RTX 3090 Ti, it's also a major upgrade: 11% faster at 1080p, 26% faster at 1440p, and 54% faster at 4K. (Just in case you check our reviews and notice a difference in scores, note that the above fps numbers incorporate both the average and 99th percentile fps into a single score, with the average given more weight.)
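That blending of average and 99th-percentile fps can be sketched as a simple weighted combination. The 75/25 split below is an illustrative assumption, since the exact weighting isn't published; only the "average counts for more" part comes from the text.

```python
def blended_fps(avg_fps, p99_fps, avg_weight=0.75):
    """Combine average fps and 99th-percentile (minimum) fps into one score.

    The average is weighted more heavily than the 99th percentile;
    the 0.75/0.25 split is an assumed example weighting.
    """
    return avg_weight * avg_fps + (1.0 - avg_weight) * p99_fps

# A card averaging 120 fps with 80 fps 99th-percentile lows:
print(blended_fps(120.0, 80.0))  # 110.0
```

Blending in the 99th-percentile lows this way penalizes cards that post high averages but stutter, without letting a single bad dip dominate the score.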

Again, keep in mind that we're not including any ray tracing or DLSS results in the above table, as we intend to use the same test suite with the same settings on all current and previous generation graphics cards. Since only RTX cards support DLSS (and only RTX 40-series cards support DLSS 3), that would drastically limit which cards we could directly compare.

Of course the RTX 4090 comes at a steep price, though it's not that much higher than the previous generation RTX 3090's launch price. In fact, we'd say the 4090 is a far better value, as the 3090 offered only a modest performance improvement over the 3080 at launch. Nvidia seems to have pulled out all the stops with the 4090, increasing core counts, clock speeds, and power limits to push it beyond all contenders.

We're still waiting to see how the other RTX 40-series cards stack up, and AMD's RX 7000-series RDNA 3 GPUs aren't here yet. Once those arrive, we strongly suspect we'll see competitive performance from AMD at much lower prices than the 4090. The first RDNA 3 cards, the RX 7900 XTX and XT, are set to arrive on December 13.

Turning to the previous generation GPUs, the RTX 20-series and GTX 16-series chips end up scattered throughout the results, along with the RX 5000-series. The general rule of thumb is that you get one or two "model upgrades" with the newer architectures, so for example the RTX 2080 Super comes in just below the RTX 3060 Ti, while the RX 5700 XT lands a few percent behind the RX 6600 XT.

Go back far enough, and you can see how modern games at ultra settings severely punish cards that don't have more than 4GB VRAM. We've been saying for a few years now that 4GB is just scraping by, and 6GB or more is desirable. The GTX 1060 3GB, GTX 1050, and GTX 780 actually failed to run some of our tests, which skews their results a bit, even though they do better at 1080p medium.

Now let's switch over to the ray tracing hierarchy.


Ray Tracing GPU Benchmarks Ranking 2022

Enabling ray tracing, particularly with demanding games like those we're using in our DXR test suite, can cause framerates to drop off a cliff. We're testing with "medium" and "ultra" ray tracing settings. Medium means using medium graphics settings but turning on ray tracing effects (set to "medium" if that's an option; otherwise, "on"), while ultra turns on all of the RT options at more or less maximum quality.

Because ray tracing is so much more demanding, we're sorting these results by the 1080p medium scores. That's also because the RX 6500 XT and 6400 along with the Arc A380 basically can't handle ray tracing even at these settings, and testing at anything more than 1080p medium would be fruitless. We've finished testing all the current ray tracing capable GPUs, though there will be more cards in the near future.

The six ray tracing games we're using are Bright Memory Infinite, Control Ultimate Edition, Cyberpunk 2077, Fortnite, Metro Exodus Enhanced, and Minecraft — all of these use the DirectX 12 / DX12 Ultimate API. The fps score is the geometric mean (equal weighting) of the six games, and the percentage is scaled relative to the fastest GPU in the list, which in this case is the GeForce RTX 4090.

Tom's Hardware Ray Tracing GPU Benchmarks Hierarchy
Graphics Card | 1080p Medium | 1080p Ultra | 1440p Ultra | 4K Ultra | Specifications (Links to Review)
GeForce RTX 4090 | 100.0% (164.9fps) | 100.0% (135.2fps) | 100.0% (97.4fps) | 100.0% (52.5fps) | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 4080 | 85.6% (141.2fps) | 80.0% (108.2fps) | 74.8% (72.9fps) | 70.8% (37.2fps) | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W
GeForce RTX 3090 Ti | 71.7% (118.2fps) | 62.4% (84.4fps) | 58.8% (57.2fps) | 55.5% (29.1fps) | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W
GeForce RTX 3090 | 65.7% (108.4fps) | 56.0% (75.7fps) | 52.1% (50.8fps) | 48.4% (25.4fps) | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W
GeForce RTX 3080 Ti | 64.0% (105.6fps) | 54.7% (73.9fps) | 50.6% (49.2fps) | 47.0% (24.7fps) | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W
GeForce RTX 3080 12GB | 63.5% (104.7fps) | 53.6% (72.4fps) | 49.2% (47.9fps) | 45.2% (23.7fps) | GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W