
Nvidia GeForce RTX 3090 and GA102: Everything We Know

GeForce RTX 3090 details
(Image credit: Nvidia)

The Nvidia GeForce RTX 3090 is now confirmed as the next halo graphics card from Team Green, and Nvidia CEO Jensen Huang has spilled the beans (most of them, anyway) on specs and performance. If you want the best performance from Nvidia's Ampere architecture, get ready to take out a small loan, because the king of the GPU hierarchy and the best graphics card ('best' as in 'fastest') won't come cheap. The RTX 3090 sets a new high bar for single-GPU pricing at $1,499, not counting Nvidia's Titan series, which it's apparently meant to replace. Here's everything we know about the GeForce RTX 3090.

We've covered the high-level view of Nvidia's new Ampere GPUs elsewhere, and you can read about the GeForce RTX 3080 and GeForce RTX 3070 in their own dedicated articles. The focus here is on the RTX 3090. After months of speculation and waiting, we finally have the hard details. It's big, quite literally. Nvidia's RTX 3090 Founders Edition sports a triple-slot cooler and has a 350W TDP. You might need a PSU, case, and CPU upgrade to make the most of this bad boy.

The GeForce RTX 3090 is the first 90-series suffix we've seen from Nvidia since the GTX 690 back in 2012. That was a dual-GPU variant of the GTX 680, back when multi-GPU was a thing. Technically it still is, but support has been seriously lacking of late. Regardless, the RTX 3090 is the only Ampere GeForce GPU with NVLink support this round, just in case you have $3,000 sitting around. (Don't do it!) But let's hit the specs.

Nvidia GeForce RTX 3090 At A Glance: 

  • 24GB GDDR6X at 19.5Gbps
  • 10496 CUDA cores and 35.6 TFLOPS of FP32 compute
  • Samsung 8N manufacturing process
  • Up to 1.9x the performance per watt of Turing (per Nvidia)
  • Release Date: September 24, 2020
  • Price: $1,499

Nvidia GeForce RTX 3090 Specifications
GPU: GA102
Graphics Card: GeForce RTX 3090
Process: Samsung 8N
Transistors (billion): 28.3
Die Size (mm²): 628.4
SMs: 82
CUDA Cores: 10496
RT Cores: 82
Tensor Cores: 328
Boost Clock (MHz): 1695
VRAM Speed (Gbps): 19.5
VRAM (GB): 24
Bus Width (bits): 384
ROPs: 112
TMUs: 328
GFLOPS FP32: 35581
RT TFLOPS: 69
Tensor TFLOPS FP16 (Sparsity): 142 (285)
Bandwidth (GB/s): 936
TBP (watts): 350
Launch Date: September 24, 2020
Launch Price: $1,499

Nvidia GeForce RTX 3090 Specifications

Holy TFLOPS, Batman! No, seriously: Wow! We heard various rumors. We heard Nvidia might double the number of shader cores per SM. What we didn't expect was double the FP32 shaders while still packing 82 SMs. And Nvidia could have theoretically gone even bigger (the GA100 is an 826mm² chip, where GA102 is apparently only around 628mm²). Still, the resulting 36 TFLOPS of compute is going to be a massive boost to performance ... provided the rest of your PC can keep up.
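If you want to sanity check that headline figure, peak FP32 throughput is just the shader count, times two operations per clock (one fused multiply-add), times the boost clock. Here's a quick back-of-the-envelope calculation in Python using the spec-table numbers above; real clocks typically run higher than the rated boost, so treat this as the theoretical floor rather than a measured result:

```python
# Peak FP32 throughput = CUDA cores * 2 FLOPS per clock (one FMA) * boost clock
cuda_cores = 10496
boost_clock_ghz = 1.695

fp32_tflops = cuda_cores * 2 * boost_clock_ghz / 1000
print(f"Peak FP32: {fp32_tflops:.2f} TFLOPS")  # ~35.58 TFLOPS
```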

Raw compute power is 150% more than the RTX 2080 Ti, for both the CUDA cores and the Tensor cores. As in, on paper the RTX 3090 is 2.5 times as fast as the previous king. Actually, maybe that's not fair — it should be compared with the Titan RTX, right? Then it's only 2.2 times as fast, plus it costs $1,000 less.

Except ... it's not that fast. See, the Turing architecture added a second pipeline for INT workloads, and Nvidia says games use a ratio of about 65:35 for FP32:INT. Ampere takes the INT pipeline and turns it into an FP32 or INT pipeline. That means perhaps two thirds of the secondary pipeline will be kept busy with INT operations, while the dedicated FP32 pipeline plugs along at 'only' 17.8 TFLOPS. Or combined, the GPU will behave somewhat like a 23.1 TFLOPS GPU with 12.5 TOPS of INT performance. Still, it's plenty fast.
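To make that reasoning concrete, here's a simplistic Python model of the shared datapath using the 65:35 FP32:INT split cited above. This isn't how the scheduler actually allocates work; it's just the arithmetic behind the 23.1 TFLOPS / 12.5 TOPS estimate:

```python
# Rough model of Ampere's split datapath: one pipeline is FP32-only, the
# other handles FP32 or INT32. Assume 35% of issued math ops are INT32.
peak_tflops = 35.6               # both pipelines running pure FP32
per_pipeline = peak_tflops / 2   # ~17.8 TFLOPS each

int_share = 0.35                 # 65:35 FP32:INT ratio, per the text above
effective_fp32 = peak_tflops * (1 - int_share)   # ~23.1 TFLOPS
int_tops = peak_tflops * int_share               # ~12.5 TOPS

print(f"Per pipeline: {per_pipeline:.1f} TFLOPS")
print(f"Effective mix: {effective_fp32:.1f} FP32 TFLOPS + {int_tops:.1f} INT32 TOPS")
```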

I'm a bit sad that the GDDR6X memory 'only' clocks in at 19.5Gbps, and I'm in need of a GPU hat to eat (chocolate, please!), but we're still looking at 24GB of memory and 936 GBps of bandwidth. That's a 52% increase relative to the RTX 2080 Ti, and Nvidia likely has architectural improvements that make better effective use of that bandwidth. Note that GDDR6/GDDR6X have a half-width mode where each chip can run on a 16-bit interface, which is how you get 24 chips on a 384-bit interface rather than a 768-bit interface.
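The bandwidth math is straightforward: per-pin data rate times bus width, converted to bytes. A quick sketch, with the RTX 2080 Ti's 14 Gbps, 352-bit configuration included for the comparison:

```python
# Memory bandwidth = per-pin data rate (Gbps) * bus width (bits) / 8
def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    return data_rate_gbps * bus_width_bits / 8

rtx_3090 = bandwidth_gbs(19.5, 384)     # 936 GB/s
rtx_2080_ti = bandwidth_gbs(14.0, 352)  # 616 GB/s

print(f"RTX 3090: {rtx_3090:.0f} GB/s, RTX 2080 Ti: {rtx_2080_ti:.0f} GB/s")
print(f"Uplift: {rtx_3090 / rtx_2080_ti - 1:.0%}")  # ~52%
```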

Finally, ray tracing performance is 69 TFLOPS of RT computation. Nvidia rated the previous Turing GPUs in gigarays per second, but that was misleading, so we're now getting RT-TFLOPS. The RTX 2080 Ti incidentally had 34 TFLOPS of RT prowess. Again, Nvidia is looking at more than double the computational power on all the core metrics, and a bit more than 50% more memory bandwidth.

That is one big graphics card ... a BFGPU if you prefer (Image credit: Nvidia)

Meet the Nvidia GeForce RTX 3090

The GeForce RTX 3090 isn't just the most expensive GeForce card to date; it's the largest graphics card Nvidia has sold. We've seen various third-party designs push the limits of good sense (in a good way, provided you have a large PC), but Nvidia has previously limited its own designs to dual-slot solutions. No more! The RTX 3090 is a triple-slot card, measuring 12.3 inches long and 5.4 inches tall.

It's a monster! And I love it. Be still, my heart! Forgive techno-lust, but this is definitely an exciting GPU. Soon, it will be here. My precious... Ahem.

As listed above, the RTX 3090 also sports a 350W TDP (or TGP if you prefer, meaning total power for the GPU and its memory). To help cope with the added thermal output, Nvidia has significantly altered the cooling design compared to previous-generation GPUs.

(Image credit: Nvidia)

The above image is from Nvidia's RTX 3080, but the GeForce RTX 3090 uses the same fundamental design, only bigger. We're not positive, but it looks like the 3090 will have a bigger fan (120mm?) to go along with the wider heatsink. The PCB, meanwhile, is smaller than on previous-generation cards, so the extra size really is all about cooling.

It's not too surprising to see Nvidia take this approach. Even though the Turing architecture was very efficient overall, it still ran into power and thermal limits on the fastest models (RTX 2080 Super and above). The only way around that is to increase the TDP, and that meant improving the cooling, since the top RTX 20-series Founders Edition cards could already get quite hot.

Part of the redesign also involved moving from dual 8-pin power connectors to a single 12-pin connector, at least for Nvidia's reference design. Third party cards appear to be sticking with dual 8-pin or even triple 8-pin connectors, and the 12-pin cable doesn't necessarily deliver more power. It's just a more compact connector, rotated 90 degrees to free up even more board space.

How will the new design fare against third-party cards? We're certainly interested to find out. Will it be quieter, run cooler, or both? Check back in a few weeks and we'll have the details.

(Image credit: Nvidia)

Nvidia GeForce RTX 3090 Features

Nvidia's GeForce RTX 3090 will make use of Micron's GDDR6X memory. This GPU and the RTX 3080 are the only ones slated to use GDDR6X for now, and Nvidia had to further improve signal delivery to reach those speeds. There's also new EDR tech for the RAM: Error Detection and Retry. Basically, if the RAM detects an error in transmission, it will retry until it succeeds (or crashes?). This will change memory overclocking somewhat: at some point you'll keep increasing clocks but end up with lower performance, because EDR retries start eating into throughput.
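Here's a toy illustration of that behavior in Python. The retry-rate curve is entirely invented for illustration (Micron hasn't published such figures); the point is simply that once retries kick in, effective throughput can fall even as the dialed-in clock keeps rising:

```python
# Toy model of EDR-limited overclocking: past some stability limit, the
# fraction of transfers that must be retried grows quickly, so effective
# throughput drops. The retry curve below is invented purely for illustration.
def effective_rate(clock_gbps, stable_limit=20.5, steepness=4.0):
    overshoot = max(0.0, clock_gbps - stable_limit)
    retry_fraction = min(0.9, (overshoot * steepness) ** 2 / 100)
    return clock_gbps * (1 - retry_fraction)

for clock in (19.5, 20.0, 20.5, 21.0, 21.5, 22.0):
    print(f"{clock:4.1f} Gbps dialed in -> {effective_rate(clock):5.2f} Gbps effective")
```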

HDMI 2.1 makes its debut in a graphics card, but the three DisplayPort connectors remain stuck at 1.4a. Both standards can drive an 8K display, but where HDMI 2.1 can do 8K120 via DSC, DisplayPort 1.4a requires DSC just to get to 8K60.
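A rough sense of why DSC is required: at 10-bit RGB, the raw pixel stream alone outruns what either link can carry. The payload figures below are approximate effective rates (HDMI 2.1 FRL at roughly 42.7 Gbps, DisplayPort 1.4a HBR3 at roughly 25.9 Gbps), and blanking overhead is ignored:

```python
# Raw (uncompressed) video bandwidth vs. approximate link payload capacity.
HDMI_2_1_PAYLOAD_GBPS = 42.7   # FRL, effective
DP_1_4A_PAYLOAD_GBPS = 25.9    # HBR3, effective

def raw_gbps(width, height, refresh_hz, bits_per_pixel=30):  # 10-bit RGB
    return width * height * refresh_hz * bits_per_pixel / 1e9

for hz in (60, 120):
    need = raw_gbps(7680, 4320, hz)
    print(f"8K{hz}: ~{need:.0f} Gbps raw "
          f"(fits HDMI 2.1: {need <= HDMI_2_1_PAYLOAD_GBPS}, "
          f"fits DP 1.4a: {need <= DP_1_4A_PAYLOAD_GBPS})")
```

With DSC's roughly 3:1 visually lossless compression, 8K120 fits within HDMI 2.1's budget, while DP 1.4a tops out around 8K60, which matches the limits described above.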

Nvidia has also added PCIe Gen4 support to its Ampere GPUs. It's worth pointing out that this probably won't matter much for gaming performance, as the large 24GB of VRAM means there should be less data going back and forth over the PCIe bus. The other problem, of course, is that the fastest gaming CPUs still come from Intel, and Intel doesn't have a desktop PCIe Gen4 solution yet. That will come with next year's Rocket Lake processors, which will yet again use Intel's 14nm++(++) process. Intel's Alder Lake will also support PCIe Gen4 and will be the first SuperFIN (or post-SuperFIN) desktop CPU from Intel.
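For reference, here's roughly how much bandwidth is on the table. These are theoretical x16 link rates (8 GT/s for Gen3, 16 GT/s for Gen4, both with 128b/130b encoding), not measured game traffic:

```python
# Theoretical PCIe x16 throughput: transfer rate * 16 lanes * 128b/130b
# encoding efficiency, converted from bits to bytes.
def pcie_x16_gbs(gt_per_s):
    return gt_per_s * 16 * (128 / 130) / 8

print(f"PCIe 3.0 x16: ~{pcie_x16_gbs(8):.1f} GB/s")    # ~15.8 GB/s
print(f"PCIe 4.0 x16: ~{pcie_x16_gbs(16):.1f} GB/s")   # ~31.5 GB/s
```

Doubling the link only helps when the GPU actually has to stream data across it, which is exactly what the 24GB of onboard memory is there to avoid.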

Does that mean AMD's X570 platform with a Ryzen 9 3900X is the better choice, since you get Gen4 support? Almost certainly not. We recently ran a full suite of benchmarks on ten GPUs and compared the performance of the Core i9-9900K vs. Ryzen 9 3900X. There were a few edge cases where the 3900X came out ahead, but it was more like a tie. For the RX 5700 XT at least, having a faster PCIe interface didn't appear to matter. Hopefully Zen 3 can further close the CPU gap, but that's a topic for another day.

(Image credit: Nvidia)

Nvidia GeForce RTX 3090 Performance

What's the net result of all these improvements? Citius, Altius, Fortius. Or in Nvidia's case: Bigger, faster, better. There's no doubt the RTX 3090 will emerge as Nvidia's fastest GPU ever, and we'd be extremely surprised to see AMD's Big Navi match it. It's not impossible, but it seems very unlikely. But then it's also $1,500, which is enough to build a complete gaming PC with a GeForce RTX 3070.

How much faster will RTX 3090 be in practice, though? That's the tougher question. We've tested a lot of games and graphics cards over the years, and just in 2020 we've encountered several games where the RTX 2080 Ti didn't come in first place until at least 1080p ultra, and sometimes 1440p ultra. In short, the CPU was a bottleneck, and with significantly more GPU performance, it's going to be an even bigger bottleneck with the RTX 3090.

First piece of advice, then: Don't even bother with the RTX 3090 if you're not using a 4K display (or perhaps 3440x1440 ultrawide). There will be games where it's still the fastest GPU even at 1080p, particularly if they use ray tracing, but I just can't see 1080p being a good fit for this GPU. If you're playing esports games like CSGO, even an RTX 3070 should suffice for 360 fps. And if you're not playing esports titles, many games won't get above 200 fps even at minimum settings, because the CPU becomes the limiting factor.

Nvidia provided the above comparison of ray tracing performance, but it didn't state whether this was for RTX 3090 vs. RTX 2080 Ti, or some other comparison like RTX 3080 vs. RTX 2080. Regardless, even at 4K, in most cases the new Ampere GPU isn't able to double the performance of Turing. Maybe there are other factors, but we wager the CPU is holding the graphics card back at least a little.

(Image credit: Activision)

If you do have a high-end display or one of the best gaming monitors, however, and you want to run with all the bells and whistles? The GeForce RTX 3090 is your best chance at maintaining silky smooth framerates. That will be even more important as the next-generation consoles launch and we start to see games use more ray tracing effects.

Besides Cyberpunk 2077, which will use ray tracing for shadows, reflections, ambient occlusion, and diffuse illumination (plus DLSS 2.0), Call of Duty Black Ops Cold War will similarly support dynamic lights, shadows, ambient occlusion, and DLSS 2.0. The Fortnite RTX patch also adds multiple ray tracing effects. And the RTX 3090 is beefy enough that you can probably even use all of those in multiplayer without tanking your framerates.

In short, while the first generation of ray tracing enhanced games wasn't necessarily a great showcase for the technology (Control and perhaps Minecraft RTX being exceptions), it's not going anywhere. Nearly every movie these days makes use of ray tracing. It might be in its infancy as far as gaming is concerned, but over the coming years we expect to see more games pushing better effects. Just don't be surprised when RTX 4090 or RTX 5090 show up in the coming years and make even the RTX 3090 look anemic.

(Image credit: Asus)

Nvidia GeForce RTX 3090: The Bottom Line

The biggest unknowns have now been answered. The GeForce RTX 3090 is a graphical tour de force, with a massive 10496 GPU cores delivering up to 36 TFLOPS of compute performance. It will officially launch on September 24, priced at $1,500. Don't be surprised if it sells out, and don't be surprised if many custom designs end up costing far more than $1,500.

That's actually the good part about the GeForce RTX 3090 branding instead of Titan branding. Nvidia always kept its Titan cards in-house, and third-party designs weren't allowed. The result was that the cards cost an arm and a leg (and a kidney as well) for a relatively small increase in performance. This is still an extremely expensive graphics card, of course, but it's $1,000 less than the Titan RTX.

Will the RTX 3090 be worth the price?  Probably not, at least in terms of fps per dollar spent. You certainly don't need a $1,500 graphics card to enjoy playing games, and you could buy three RTX 3070 cards for the same asking price. It's arguably more about bragging rights and being the fastest GPU on the planet, price be damned.

If you've got deep pockets, by all means, pre-order an RTX 3090 and get ready to rub your friends' noses in it. Why not order two plus an NVLink connector while you're at it? You could even toss in an 8K display for good measure. The rest of us will wait to see what the real-world gaming performance looks like come September 24.

  • JarredWaltonGPU
    Let me just post this here as well: There are some minor issues in the Micron "Categories of Ultra-Bandwidth Memory" table. RX5700XT sits under Titan RTX for GDDR6, but then the table says 12 placements and 12GB -- clearly not for Navi 10. The HBM2E AI Training Accelerator also says 6 placements and then lists 16-32GB, while the Nvidia A100 has 6 placements but only 5 active, with 40GB total memory.

    Given everything that's been said about Ampere and RTX 30-series hardware, I think it's far more likely that those two items are the errors, rather than the GDDR6X reference to RTX 3090 being an error. 21, 21, 21, 21 ... too many instances of that number come up. Maybe Nvidia will change it to 20 Gbps and ship me an edible GPU hat to eat, just for fun? Mmmm, GPU hats...

    We're basically two weeks away from final confirmation of many specs, though, and this is probably the last chance I'll have to publicly say anything before the launch. Or maybe not; we'll see. Micron pulled down the PDF, though, and I doubt Nvidia would have made a stink and had them remove the PDF if it didn't have a bunch of accurate details. It's my opinion and I'm sticking to it, but you know what they say about opinions.
  • daeros
    Man, you guys really love whatever swag team green throws your way. This thing needs three 8-pin PEG connections, but Radeon VII draws too much power. You complain that it's only slightly faster than the 5700xt while ignoring that it beats the Titan RTX in compute workloads. And you're defending the idea of a $1500-2000 launch price. Tom was right, and I'll be filing this right along your 'Just buy it' article.
  • JarredWaltonGPU
    daeros said:
    Man, you guys really love whatever swag team green throws your way. This thing needs three 8-pin PEG connections, but Radeon VII draws too much power. You complain that it's only slightly faster than the 5700xt while ignoring that it beats the Titan RTX in compute workloads. And you're defending the idea of a $1500-2000 launch price. Tom was right, and I'll be filing this right along your 'Just buy it' article.
    I’m not defending the price at all, I’m just saying what I think the price will be. And we don’t know how many 8-pin connectors this will use, or the price. The Radeon VII was okay at best, but at launch it basically tied the already old 1080 Ti in most workloads — except specific compute tests. It’s disingenuous to accuse us of bias while clearly exhibiting your own.

    For the record, Nvidia has not provided anything for this article — no information at all, other than what is known about the A100. It’s okay for enthusiasts to be excited about a new extreme GPU, and I am a GPU enthusiast. That’s what this article is about.

    If this is a $2000 card that’s twice as fast as RTX 2080 Ti, it will be impressive and yet too expensive by half. At $1500, it’s at least somewhat acceptable but still very much out of reach of even high-end gamers. I’d love to see it land at $1000, but that’s a pipe dream.
  • gtarayan
    This card should retail $1299.
  • JarredWaltonGPU
    gtarayan said:
    This card should retail $1299.
    That would be awesome if it happens. I’m trying to brace myself for far less awesome prices, though.
  • IceQueen0607
    Predictions on AUD Pricing: I'm going to go for $4000 AUD. The xx90 might push it into a whole new pricing tier
    Things are very expensive here, but current climate has pushed GPU prices 30%-40% higher than same time last year.
  • drtweak
    My first PC (P1 66Mhz) Had a Geforce 256 in it. That thing sucked and was not the "First" GPU XD
  • Gurg
    If Nvidia prices this too high they will lose unit sales. In July 2020 Steam survey the 1080ti had a 1.57% market share vs just .91% for 2080ti. When released in March 2017 the 1080ti was priced at $700. whereas the 2080ti market price was around $1300 in Sept. 2018. The combined 2080 and super have a 1.7% market share priced at $700-750.

    While the demand from the top end gamer is relatively inelastic with that segment willing to pay for the best, for everyone else the demand is elastic. If Nvidia gets too aggressive on pricing the 3000 series, many potential buyers especially in current economy will opt to just stay pat with their 1080ti and above which will run even a 60hz 4K or all lower resolution monitor at playable frame rates.

    While the 3090 should certainly max out a 60hz 4K monitor, to take full advantage of its potential you would need to upgrade to a 144hz 4K monitor which starts at $700 . For me that is simply a bridge too far. If the 3080 is priced around $750 and offered enough performance increase to max out my existing 60hz 4K monitor then that would be a buy after initial pricing spike fades and partner cooling is available.

    If you spend $2,000 plus on a 3090 and a 144 hz 4K monitor you will most likely only be half way to saturating the 144 hz and won't be able to fully saturate it until the 5090 in 4-6 years.
  • vinay2070
    Gurg said:
    If Nvidia prices this too high they will lose unit sales. In July 2020 Steam survey the 1080ti had a 1.57% market share vs just .91% for 2080ti. When released in March 2017 the 1080ti was priced at $700. whereas the 2080ti market price was around $1300 in Sept. 2018. The combined 2080 and super have a 1.7% market share priced at $700-750.

    So they essentially made more profit from the 2080Ti compared to 1080TI???

    Gurg said:
    While the 3090 should certainly max out a 60hz 4K monitor, to take full advantage of its potential you would need to upgrade to a 144hz 4K monitor which starts at $700 . For me that is simply a bridge too far. If the 3080 is priced around $750 and offered enough performance increase to max out my existing 60hz 4K monitor then that would be a buy after initial pricing spike fades and partner cooling is available.

    If you spend $2,000 plus on a 3090 and a 144 hz 4K monitor you will most likely only be half way to saturating the 144 hz and won't be able to fully saturate it until the 5090 in 4-6 years.

    A lot of consumers will be using the Ultra Wide QHD and QHD at 144Hz and some even 200 Hz, so its not just 4K that would require a 3090. Also if you max out RTX, that would bring performance down a lil bit if there is no DLSS implementation.
  • Jim90
    Quite simply, you would have to be a fully certified idiot to purchase an Ampere card BEFORE RDNA2 cards appear. We've seen what RDNA2 can do when significantly scaled down in the new consoles (i.e. Unreal 5 demo on the PS5 @4K) and we've read plenty. Now it's time to wait till BOTH companies release their products and we finally see proper reviews.