Nvidia GeForce RTX 3090 Ti Officially Launches, Starting at $1,999

GeForce RTX 3090 Ti launch images
(Image credit: Nvidia)

It's been a bit of a winding road to get here, but the GeForce RTX 3090 Ti officially launched today, with full specifications and pricing revealed about two months later than originally expected. If you're after maximum performance — "Damn the torpedoes, full speed ahead!" sort of thinking — the RTX 3090 Ti should now reign as the fastest option in our GPU benchmarks hierarchy, and possibly as the best graphics card for prosumer content creators who don't want to move up to Nvidia's professional RTX A-series offerings (formerly Quadro).

So, where's the review? We're still awaiting our sample, as Nvidia elected not to seed reviewers with its Founders Edition. We should have an AIC (add-in card) partner sample shortly, and we'll post a full review with the usual suite of benchmarks once it arrives — including some extra professional visualization testing in content creation workloads. If you're mostly interested in gaming performance, take the GeForce RTX 3090 and tack on roughly 10% more performance (Nvidia says it's 9% faster overall), and you'll largely end up with the 3090 Ti.

While we wait for our card to arrive, here's a quick rundown of the official specs.

Nvidia GeForce RTX 30-Series High-End Lineup
Graphics Card           RTX 3090 Ti    RTX 3090       RTX 3080 Ti    RTX 3080
Architecture            GA102          GA102          GA102          GA102
Process Technology      Samsung 8N     Samsung 8N     Samsung 8N     Samsung 8N
Transistors (Billion)   28.3           28.3           28.3           28.3
Die Size (mm^2)         628.4          628.4          628.4          628.4
SMs                     84             82             80             68
GPU Cores               10752          10496          10240          8704
Tensor Cores            336            328            320            272
RT Cores                84             82             80             68
Base Clock (MHz)        1560           1395           1370           1440
Boost Clock (MHz)       1860           1695           1665           1710
VRAM Speed (Gbps)       21             19.5           19             19
VRAM (GB)               24             24             12             10
VRAM Bus Width (bits)   384            384            384            320
ROPs                    112            112            112            96
TMUs                    336            328            320            272
TFLOPS FP32 (Boost)     40.0           35.6           34.1           29.8
TFLOPS FP16 (Tensor)    160 (320)      142 (285)      136 (273)      119 (238)
Bandwidth (GBps)        1008           936            912            760
TBP (watts)             450            350            350            320
Launch Date             Mar 2022       Sep 2020       Jun 2021       Sep 2020
Starting Price          $1,999         $1,499         $1,199         $699
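
If you want to sanity check the derived rows, the TFLOPS and bandwidth figures fall straight out of the core counts, clocks, and memory specs. Here's a minimal sketch; the 2 FLOPS per core per clock and the 4x/8x Ampere tensor multipliers are the usual rule-of-thumb constants, stated here as assumptions rather than anything Nvidia publishes per card.

```python
# Rough sketch reproducing the derived rows of the spec table from the raw specs.
# Assumptions: 2 FLOPS per CUDA core per clock (FMA), and Ampere tensor cores at
# roughly 4x the FP32 rate for dense FP16 (8x with 2:4 structured sparsity).

cards = {
    # name:        (CUDA cores, boost MHz, VRAM Gbps, bus width in bits)
    "RTX 3090 Ti": (10752, 1860, 21.0, 384),
    "RTX 3090":    (10496, 1695, 19.5, 384),
    "RTX 3080 Ti": (10240, 1665, 19.0, 384),
    "RTX 3080":    ( 8704, 1710, 19.0, 320),
}

for name, (cores, boost_mhz, mem_gbps, bus_bits) in cards.items():
    fp32_tflops = cores * 2 * boost_mhz / 1e6   # TFLOPS = cores * 2 * GHz / 1000
    fp16_dense  = fp32_tflops * 4               # assumed dense tensor FP16 rate
    fp16_sparse = fp32_tflops * 8               # assumed sparse tensor FP16 rate
    bandwidth   = mem_gbps * bus_bits / 8       # GB/s = Gbps per pin * pins / 8
    print(f"{name:11s}  {fp32_tflops:4.1f} TFLOPS FP32   "
          f"{fp16_dense:3.0f} ({fp16_sparse:.0f}) TFLOPS FP16   {bandwidth:4.0f} GB/s")
```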

Considering the past 18 months of extreme GPU shortages and inflated GPU prices, you should definitely take the last line in the above table with a healthy serving of salt. There's a clear downward trend in recent graphics card prices, including a 25% observed drop in EU pricing during March, but we're not out of the woods just yet. Our latest data for the US using mid-March eBay GPU prices puts most of these extreme GPUs at around 30–50% over MSRP, except for the RTX 3080 (10GB) that's still floating at closer to double the MSRP. The (*cough*) 'good' news is that with a much higher starting MSRP, the actual RTX 3090 Ti prices may land a bit closer to Nvidia's hypothetical starting point — sort of like how the RTX 3080 Ti is only 30% over MSRP since it was priced over 70% higher than the 3080 as a baseline.

Moving past the pricing elephant in the room, there are some other eyebrow-raising items of note. We've long expected the memory to clock in at 21Gbps, and credible rumors indicate that's a major reason for the two-month delay in Nvidia spilling the beans on the 3090 Ti. The GPU also uses the fully armed and operational GA102 chip, sporting 84 streaming multiprocessors (SMs) and 10752 CUDA cores, with boost clocks 165MHz higher than the RTX 3090.

But there's a catch, and it's a pretty big one: The RTX 3090 Ti has a TBP (Total Board Power) rating of 450W, 100W higher than the 3090 and 3080 Ti. That's nearly a 30% increase in power use, which isn't too surprising given the higher boost clock and memory speed. So basically, Nvidia is pushing to the far right of the voltage/frequency curve and maxing out performance at the cost of higher power consumption. Considering the recent Nvidia Hopper H100 reveal, this could be a taste of things to come for the Ada / RTX 40-series graphics cards.
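
Put another way, if Nvidia's claimed 9% overall gain holds up (an assumption until we've run our own benchmarks), performance per watt actually goes backward relative to the RTX 3090. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope perf-per-watt check, taking Nvidia's claimed ~9% gain
# over the RTX 3090 at face value (an assumption until we have benchmarks).
perf_3090,    tbp_3090    = 1.00, 350   # normalized performance, board power (W)
perf_3090_ti, tbp_3090_ti = 1.09, 450   # Nvidia's claim, official TBP

power_increase = tbp_3090_ti / tbp_3090 - 1
perf_per_watt  = (perf_3090_ti / tbp_3090_ti) / (perf_3090 / tbp_3090)
print(f"Power increase: {power_increase:.1%}")    # ~28.6%
print(f"Relative perf/W: {perf_per_watt:.2f}x")   # ~0.85x, i.e. ~15% less efficient
```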

What can you expect from the increased power, pricing, core counts, and clock speeds? As noted already, we don't have the card in hand just yet, but we do have benchmarks from all the other GPUs. In gaming performance, the RTX 3090 was only 2.4% faster than the RTX 3080 Ti overall, with a slightly larger 3.0% advantage if we focus purely on 4K gaming performance. Even if we switch over to ray tracing games with our DXR benchmark suite, the 3090 was still only 2.9% faster than the 3080 Ti. There's a bigger gap between the RTX 3090 and RTX 3080, with the 3090 leading by 16% on average and by 20% at 4K, but there's not a lot of gas left in the GA102 tank, it seems, even when paired with the Core i9-12900K, currently the fastest gaming CPU.

On paper, the 3090 has 4.4% more compute and 2.6% more memory bandwidth than the RTX 3080 Ti, and 19.5% more compute and 23.2% more memory bandwidth than the RTX 3080. Real-world performance thus scales reasonably close to the paper specs, with compute mattering more than bandwidth. The RTX 3090 Ti, in turn, theoretically delivers 12.4% more compute and 7.7% more bandwidth than the 3090. At best, then, the 3090 Ti could be about 12% faster than the 3090, but in general we expect it to land closer to 10%, and the margin of victory will shrink further in workloads that are CPU limited.
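
For those who want to reproduce the percentages above, they come straight from the TFLOPS and bandwidth rows in the spec table; here's a minimal sketch:

```python
# Theoretical deltas pulled straight from the spec table (FP32 TFLOPS, GB/s).
specs = {
    "RTX 3090 Ti": (40.0, 1008),
    "RTX 3090":    (35.6,  936),
    "RTX 3080 Ti": (34.1,  912),
    "RTX 3080":    (29.8,  760),
}

def advantage(a, b):
    """Return (compute, bandwidth) advantage of card a over card b."""
    (tf_a, bw_a), (tf_b, bw_b) = specs[a], specs[b]
    return tf_a / tf_b - 1, bw_a / bw_b - 1

for a, b in [("RTX 3090", "RTX 3080 Ti"),
             ("RTX 3090", "RTX 3080"),
             ("RTX 3090 Ti", "RTX 3090")]:
    tf, bw = advantage(a, b)
    print(f"{a} vs. {b}: +{tf:.1%} compute, +{bw:.1%} bandwidth")
```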

Note that because Nvidia isn't seeding reviewers with reference clocked RTX 3090 Ti Founders Edition cards, there's a good chance that comparisons will be made using a factory overclocked card that can reach even higher levels of performance. It's likely only going to be a 2–4% bump, but we can't help but think the lack of reference card sampling was at least partly done in order to make the custom 3090 Ti cards look a bit better. They're still a highly questionable value, basically bringing back Titan RTX levels of pricing without a few of the extra Titan features.

(Image credit: Nvidia)

Designed for Content Creation

Like the RTX 3090 before it, Nvidia isn't pitching the RTX 3090 Ti primarily as a gaming GPU. Instead, it's a card designed for content creation. The 24GB of VRAM can matter a great deal in intense content creation workloads, though those often end up being an all-or-nothing affair: either the card has enough memory for the job, or it fails outright. There's a reason the Nvidia RTX A6000 has 48GB of slower GDDR6 memory, for example. The 3090 Ti has half as much VRAM, which limits it to models and workflows that stay under 24GB, but that's still double what any GeForce card below the RTX 3090 offers.
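
To illustrate that all-or-nothing behavior, here's a toy example; the asset sizes below are entirely made up, but the point stands that a scene budget either fits in VRAM or the job simply doesn't run:

```python
# Toy illustration of the "fits or fails" nature of VRAM-heavy workloads.
# Every number below is hypothetical; the point is that the total either fits
# in a card's memory or the job falls over with an out-of-memory error.
GiB = 1024 ** 3

scene = {                     # hypothetical scene budget for a GPU render
    "geometry":      6 * GiB,
    "8k_textures":  14 * GiB,
    "volumetrics":   9 * GiB,
    "framebuffers":  1 * GiB,
}
required = sum(scene.values())

for card, vram in [("RTX 3080 (10GB)", 10 * GiB),
                   ("RTX 3090 Ti (24GB)", 24 * GiB),
                   ("RTX A6000 (48GB)", 48 * GiB)]:
    verdict = "fits" if required <= vram else "fails to run (out of VRAM)"
    print(f"{card}: needs {required / GiB:.0f} GiB -> {verdict}")
```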

Nvidia went so far as to provide a guide to testing "large memory workflows" on the RTX 3090 Ti. We're not opposed to that, but when the results of testing on GPUs with less than 24GB of VRAM end up with 'failed to run,' it's less about comparative benchmarking and more about portraying the extreme GPUs in the best light possible.

"Oh, you don't have an RTX 3090 Ti, 3090, or Titan RTX? Sorry, you can't do this particular task in this particular fashion." Again, that might be true, and it certainly can be relevant to content creators, but it's weird that these professional applications can't just run in a fallback-to-system-RAM fashion.

Anyway, if you have a need for a GPU that can handle 24GB VRAM workflows, the RTX 3090 Ti now supplants the RTX 3090 with better performance and a rather significant $500 bump in pricing. If you need even more VRAM, you'll have to step up to something like the Nvidia RTX A6000, which will have the added benefit of providing fully ISV-certified drivers for professional applications.

GeForce RTX 3090 Ti launch images

(Image credit: Nvidia)

Can the RTX 3090 Ti Handle 8K Gaming?

Besides content creation, the other aspect of the RTX 3090 Ti that Nvidia is once again pushing is the potential for 8K gaming. Frankly, it's a bit ludicrous, as the slight bump in performance relative to the 3090 won't suddenly make 8K more viable. Practically speaking, it's only going to be games that support DLSS Ultra Performance mode (or some other form of upscaling) that will reach higher framerates — well, those as well as the old and lightweight games that are kind enough to support 8K. If you're only after 30 fps, though, it can probably manage quite a few games at medium detail settings.
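
For a sense of scale, the arithmetic behind the 8K talk is straightforward: 8K pushes four times the pixels of 4K, and DLSS Ultra Performance gets its headline framerates by rendering internally at one third of each axis (one ninth of the pixels) and upscaling. A quick sketch:

```python
# Raw pixel counts behind the 8K marketing, plus the internal resolution DLSS
# Ultra Performance upscales from (1/3 of each axis, i.e. 1/9 of the pixels).
four_k  = 3840 * 2160
eight_k = 7680 * 4320
print(f"8K renders {eight_k / four_k:.0f}x the pixels of 4K")   # 4x

internal_w, internal_h = 7680 // 3, 4320 // 3
print(f"DLSS Ultra Performance at 8K renders internally at "
      f"{internal_w}x{internal_h}")                              # 2560x1440
```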

Honestly, if you actually have an 8K display and you want to hook it up to a PC, go right ahead and buy the RTX 3090 Ti, because clearly you can afford it. We don't have access to an 8K display for testing purposes, but even 4K still proves a bit much for the RTX 3090 at maximum quality without some form of upscaling. Ten percent faster than "not fast enough" likely isn't going to make or break the card, and we're definitely a long way from 8K becoming anything close to mainstream. That's probably for the best, or at least something your wallet will greatly appreciate.

We'll have a full review of an RTX 3090 Ti card up in the near future, once we have a card we can put through its paces. Stay tuned.

Jarred Walton

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • -Fran-
    Comparing the 3090 and now the TIE sibling to the "proper" Titan line is just hurting nVidia itself. Talk about a self-inflicted wound... Until they release a driver that unlocks proper CUDA and OPs for the GA102 die, it will still be a "kiddies" toy. This is something Linus already reviewed back when the comparison first came up, and nVidia, it seems, took the hint and didn't draw any more parallels.

    Anyway, that's just beside the point of this card. People that can make use of it will get it regardless. As someone that doesn't need this card, I can only be thankful that I don't. $2K starting price is just too bonkers for not being a pro card. But in benchmarks it'll do or die. Looking at the numbers, this thing will be at best 10% better than the 3090 while using 100W more and costing over ~30% more ($500 over MSRP, but probably more like $1K on the streets).

    I'd also love to see "room" temperature tests. I am really curious how much this thing raises room temperature and how the warmer room affects the card in turn. We're getting to the point that where you live is going to matter, which is quite frankly bonkers.

    Regards.
    Reply
  • derekullo
    -Fran- said:
    Comparing the 3090 and now the TIE sibling to the "proper" Titan line is just hurting nVidia itself. Talk about a self-inflicted wound... Until they release a driver that unlocks proper CUDA and OPs for the GA102 die, it will still be a "kiddies" toy. This is something Linus already reviewed back when the comparison first came up, and nVidia, it seems, took the hint and didn't draw any more parallels.

    Anyway, that's just beside the point of this card. People that can make use of it will get it regardless. As someone that doesn't need this card, I can only be thankful that I don't. $2K starting price is just too bonkers for not being a pro card. But in benchmarks it'll do or die. Looking at the numbers, this thing will be at best 10% better than the 3090 while using 100W more and costing over ~30% more ($500 over MSRP, but probably more like $1K on the streets).

    I'd also love to see "room" temperature tests. I am really curious how much this thing raises room temperature and how the warmer room affects the card in turn. We're getting to the point that where you live is going to matter, which is quite frankly bonkers.

    Regards.
    If you live near the Arctic Circle, you could get a gaming PC and a heater all in one.
    Reply
  • -Fran-
    derekullo said:
    If you live near the Arctic Circle, you could get a gaming PC and a heater all in one.
    Or if you live near a lake or sea, you have a free sauna room!

    Regards.
    Reply
  • Friesiansam
    -Fran- said:
    $2K starting price is just too bonkers for not being a pro card.
    Those gamers with a lot more money than sense, and a pathological need to be able to boast they have the best, will hoover up every 3090 Ti that's made, regardless of price.
    Reply
  • JarredWaltonGPU
    -Fran- said:
    Comparing the 3090 and now the TIE sibling to the "proper" Titan line is just hurting nVidia itself. Talk about a self-inflicted wound... Until they release a driver that unlocks proper CUDA and OPs for the GA102 die, it will still be a "kiddies" toy. This is something Linus already reviewed back when the comparison first came up, and nVidia, it seems, took the hint and didn't draw any more parallels.

    Anyway, that's just beside the point of this card. People that can make use of it will get it regardless. As someone that doesn't need this card, I can only be thankful that I don't. $2K starting price is just too bonkers for not being a pro card. But in benchmarks it'll do or die. Looking at the numbers, this thing will be at best 10% better than the 3090 while using 100W more and costing over ~30% more ($500 over MSRP, but probably more like $1K on the streets).

    I'd also love to see "room" temperature tests. I am really curious how much this thing raises room temperature and how the warmer room affects the card in turn. We're getting to the point that where you live is going to matter, which is quite frankly bonkers.

    Regards.
    Nvidia still did mention the Titan RTX in its Reviewer's Guide, rather than the 3090 -- which makes sense, as this is only a modest bump from the 3090.
    Reply
  • SkyBill40
    This card is, in a word, dumb. It should come as little surprise though, as Nvidia is going to milk as much money as they can out of those chips. They'd be better off, in my opinion, making more 3080 cards, seeing as that's where the bulk of high-end buyers are. The minuscule differences between the 3090 and this card just don't justify the monetary outlay, even for e-peen braggarts.
    Reply
  • -Fran-
    JarredWaltonGPU said:
    Nvidia still did mention the Titan RTX in its Reviewer's Guide, rather than the 3090 -- which makes sense, as this is only a modest bump from the 3090.
    That's shocking, TBH. I thought, honestly, they had actually remained silent because they got the hint... I guess not!

    Regards.
    Reply
  • TJ Hooker
    -Fran- said:
    Until they don't release a driver that unlocks proper CUDA and OPs for the GA102 die, then it will still be a "kiddies" toy.
    What exactly are you referring to here, FP64 (double precision) performance? Titans have always used the same drivers available to regular gaming cards.
    Reply
  • -Fran-
    TJ Hooker said:
    What exactly are you referring to here, FP64 (double precision) performance? Titans have always used the same drivers available to regular gaming cards.
    I had to go and refresh my memory, because I do remember the Titan being better at FP compute than the regular GF cards. That changed slightly with later generations, but the difference was made via driver locks, mainly. Linus already proved that, so I won't go there again. In certain aspects of compute, the Titan cards are just different, or at least used to be.

    So, in short, from what I can see, while they do use the same driver suite, there are still differences in how they behave.

    Regards.
    Reply
  • TJ Hooker
    -Fran- said:
    I had to go and refresh my memory, because I do remember the Titan being better at FP compute than the regular GF cards. That changed slightly with later generations, but the difference was made via driver locks, mainly. Linus already proved that, so I won't go there again. In certain aspects of compute, the Titan cards are just different, or at least used to be.

    So, in short, from what I can see, while they do use the same driver suite, there are still differences in how they behave.

    Regards.
    Yes, the FP64 performance is limited by drivers/FW, but that isn't new. The majority of Titans have had equivalent compute (including FP64) performance to regular Geforce cards (or only slightly higher, if they have a few extra cores).

    The only consistent, defining characteristic of Titan cards that I have found is that they cost more than all the regular Geforce cards at the time they are released. All other definitions I've seen people come up with, with respect to what constitutes a 'real' Titan (usually in contrast to an RTX 3090), are broken by at least one previous Titan.
    Reply