
Nvidia Ada Lovelace and GeForce RTX 40-Series: Everything We Know

Nvidia's Ada architecture and the presumed GeForce RTX 40-series graphics cards are slated to arrive by the end of the year, likely in the September to October time frame. That's two years after the Nvidia Ampere architecture and basically right on schedule, given the slowing down (or if you prefer, death) of Moore's 'Law.' Thanks to the Nvidia hack earlier this year, we have a seemingly good amount of information on what to expect. We've collected it all into this central hub detailing everything we know and expect from Nvidia's Ada architecture and the RTX 40-series family.

There are plenty of rumors swirling around now, and Nvidia has said precious little about its plans for Ada, which some alternately call Lovelace. What we do know is that Nvidia has detailed its data center Hopper H100 GPU, and we suspect that, much like with the Volta V100 and Ampere A100, the consumer products will follow in the not-too-distant future.

That last point is perhaps the best indication of what to expect. The A100 was formally revealed in May 2020, with the consumer Ampere GPUs launching in the form of the RTX 3080 and RTX 3090 about four months later. If Nvidia follows a similar release schedule with Ada Lovelace GPUs, we can expect the RTX 40-series to arrive sometime in August or September. Let's start with a high-level overview of the rumored specs for the Ada series of GPUs.

Nvidia GeForce RTX 40-Series "Ada" Rumored Specs
GPU                   AD102       AD103       AD104       AD106       AD107
Process Technology    TSMC 4N     TSMC 4N     TSMC 4N     TSMC 4N     TSMC 4N
Transistor Count      60B?        40B?        30B?        20B?        15B?
SMs / CUs             144         84          60          36          24
GPU Cores             18432       10752       7680        4608        3072
Tensor Cores          576         336         240         144         96
RT Cores              144         84          60          36          24
Boost Clock (MHz)     1600-2000   1600-2000   1600-2000   1600-2000   1600-2000
Total L2 Cache (MB)   96          64          48          32          32
VRAM Speed (Gbps)     21-24       21-24       16-21       16-21       14-21
VRAM Bus Width        384-bit     256-bit     192-bit     128-bit     128-bit
ROPs                  128-192?    112?        96?         64?         48?
TMUs                  576         336         240         144         96
TFLOPS FP32 (Boost)   59-73.7     34.4-43     24.6-30.7   14.7-18.4   9.8-12.3
TFLOPS FP16 (Tensor)  472-590     275-344     197-246     118-147     79-98
Bandwidth (GB/s)      1008-1152   672-768     384-504     256-336     224-336
TDP (watts)           <600        <450        <300        <225        <150
Price Estimate        $1,000+     $600-$1,000 $450-$600   $300-$450   $200-$300

First off, huge helpings of salt need to be applied to the above information. We've put in tentative clock speed estimates of 1.6 to 2.0 GHz for the GPUs, which is in line with Nvidia's previous Ampere, Turing, and even Pascal architectures. It's entirely possible that Nvidia will exceed those clocks, so consider these conservative estimates.
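To show where the FP32 numbers in the table above come from: theoretical throughput is just the shader count, times two FMA operations per clock, times the clock speed. Here's a minimal sketch in Python using the rumored AD102 figures (all of which are assumptions at this point):

```python
# Theoretical FP32 throughput = shader cores * 2 FMA ops per clock * clock speed.
# The core count and clocks below are rumored/assumed values, not confirmed specs.
def fp32_tflops(cores: int, clock_ghz: float) -> float:
    return cores * 2 * clock_ghz / 1000  # cores * 2 * GHz = GFLOPS; /1000 = TFLOPS

for clock_ghz in (1.6, 2.0):
    print(f"AD102 @ {clock_ghz} GHz: {fp32_tflops(18432, clock_ghz):.1f} TFLOPS")
# Prints 59.0 and 73.7 TFLOPS, matching the AD102 range in the table above.
```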

We're going on the assumption that Nvidia will use TSMC's 4N process — "4nm Nvidia" — on all of the Ada GPUs, which again might prove incorrect. We know Hopper H100 uses TSMC's 4N node, which appears to be a tweaked variation of the N5 node that's widely used in Apple's smartphone and laptop chips. That N5-class node has also been rumored as the basis for Ada, as well as for AMD's Zen 4 and RDNA 3.

Frankly, the node name doesn't matter nearly as much as the actual GPU specs and performance. "A rose by any other name would smell as sweet," in other words. We have long since passed the point where process node names have any real connection with physical features on a chip. Where 250nm (0.25 micron) chips had elements you could actually point at and measure at 0.25um width, physical scaling has slowed over the past several process nodes, and the names are now mostly marketing.

Nvidia's Hopper H100 hints at where Nvidia might go with the Ada architecture. (Image credit: Nvidia)

Transistor counts are a best guess for now. We do know that Hopper H100 will have 80 billion transistors (an approximate figure, but we'll roll with it). The A100 GPU had 54 billion transistors, roughly double the count of the GA102 consumer halo chip, but there are indications that Nvidia will be "going big" with the AD102 GPU and that it might be closer in size to H100 than GA102 was to GA100. We'll update the table if and when reliable information becomes available, but for now, any claims of transistor counts are simply different guesses than ours.

In theory, based on the "leaked" information we've seen so far, Ada looks to be a monster. It will pack in far more SMs and associated cores than the current Ampere GPUs, which should provide a substantial boost to performance. Even if Ada ends up being less than what the leaks claim, it's a safe bet that the top GPU — perhaps an RTX 4090, though Nvidia may change nomenclature again — will be a big step up from the RTX 3090 Ti.

The RTX 3080, for example, was about 30% faster than the RTX 2080 Ti at launch, and the RTX 3090 added another 15%, at least if you pushed the GPU to its limits by running at 4K ultra. That's also something to keep in mind: if you're currently running a more modest processor rather than one of the absolute best CPUs for gaming, meaning the Core i9-12900K or Ryzen 7 5800X3D, you could very well end up CPU limited even at 1440p ultra. A larger system upgrade will likely be necessary to get the most out of the fastest Ada GPUs.

Ada Will Massively Boost Compute Performance

(Image credit: Shutterstock)

With the high-level overview out of the way, let's get into the specifics. The most noticeable change with Ada GPUs will be the number of SMs compared to the current Ampere generation. At the top, AD102 potentially packs 71% more SMs than the GA102. Even if nothing else were to significantly change in the architecture, we expect a huge increase in performance.

That will apply not just to graphics but to other elements as well. We're using Ampere calculations on the Tensor core performance, and a fully enabled AD102 chip running at close to 2GHz would have deep learning/AI compute of up to 590 TFLOPS in FP16. The GA102 in the RTX 3090 Ti by comparison tops out at around 321 TFLOPS FP16 (using Nvidia's sparsity feature). That's a theoretical 84% increase, based on core counts and clock speeds. The same theoretical 84% boost in performance should apply to ray tracing hardware as well.
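For those wondering where those numbers come from, here's the back-of-the-napkin math. It assumes Ada keeps Ampere's per-SM tensor throughput of 2,048 FP16 operations per clock (with sparsity), which is our assumption rather than anything Nvidia has confirmed:

```python
# Assumption: Ada keeps Ampere's 2048 FP16 ops per SM per clock (with sparsity).
# SM counts and clocks for AD102 are rumored; 1.86 GHz is the 3090 Ti boost clock.
def fp16_tensor_tflops(sms: int, clock_ghz: float, ops_per_sm: int = 2048) -> float:
    return sms * ops_per_sm * clock_ghz / 1000

ad102 = fp16_tensor_tflops(144, 2.0)   # ~590 TFLOPS
ga102 = fp16_tensor_tflops(84, 1.86)   # ~320 TFLOPS
print(f"AD102: {ad102:.0f}, GA102: {ga102:.0f}, uplift: {ad102 / ga102 - 1:.0%}")  # ~84%
```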

That’s unless Nvidia reworks the RT cores and Tensor cores for the respective third-generation and fourth-generation implementations. We suspect there's not much need for massive changes on the Tensor cores — the big improvements in deep learning hardware will be more for Hopper H100 than Ada AD102 — though we could be wrong. Meanwhile, the RT cores could easily see refinements that improve per-core RT performance by another 25–50% over Ampere, just like Ampere was about 75% faster per RT core than Turing.

Worst-case, even if Nvidia just ports the Ampere architecture from Samsung Foundry's 8N process to TSMC's 4N (or 5N or whatever) without really changing anything else, adding more cores and keeping similar clocks should provide more than enough of a generational performance increase. Nvidia might do far more than the minimum, but even the bottom tier AD107 chip would represent a solid 30% or more improvement over the current RTX 3050.
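As a rough sanity check on that last claim, here's how the rumored AD107 stacks up against the RTX 3050's theoretical compute (2,560 CUDA cores at a 1,777 MHz boost clock); the AD107 figures are of course unconfirmed:

```python
# RTX 3050: 2560 cores at 1.777 GHz boost. AD107 core count and clocks are rumored.
def fp32_tflops(cores: int, clock_ghz: float) -> float:
    return cores * 2 * clock_ghz / 1000

rtx_3050 = fp32_tflops(2560, 1.777)  # ~9.1 TFLOPS
ad107 = fp32_tflops(3072, 2.0)       # ~12.3 TFLOPS at the top of our clock estimate
print(f"AD107 vs RTX 3050: {ad107 / rtx_3050 - 1:.0%} more compute")  # ~35% at 2.0 GHz
```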

Keep in mind that the SM counts listed are for the complete chip, and most likely Nvidia will be using partially disabled chips to improve yields. Hopper H100 as an example has 144 potential SMs, but only 132 SMs are enabled on the SXM5 variant, while the PCIe 5.0 card will have 114 SMs enabled. We'll probably see Nvidia launch a top-end AD102 solution (i.e. RTX 4090) with somewhere between 132 and 140 SMs enabled, with lower tier models using fewer SMs. That of course leaves the door open for a future card (i.e. RTX 4090 Ti) with a fully enabled AD102, after yields have improved.

Guessing at Ada's ROPs

We've put question marks after the ROPs counts (render outputs) on all of the Ada GPUs, as we don't know yet how they're configured. With Ampere, Nvidia tied the ROPs to the GPCs, the Graphics Processing Clusters. Each GPC contains a certain number of SMs (Streaming Multiprocessors), which can be disabled in pairs. Even if we know the number of SMs, however, we don't know how they're split up into GPCs.

Take the AD102 with 144 SMs. That could be 12 GPCs of 12 SMs each, 8 GPCs with 18 SMs each, or 9 GPCs of 16 SMs each. Other possibilities also exist, but those are the three that we think are most likely. Nvidia isn't new to the GPU game, so whatever the arrangement, ultimately we should expect it to fit the needs of the GPU.

We've seen some guesses suggesting AD102 will have 12 GPCs of 12 SMs each, which would yield 192 ROPs as the maximum. It's not out of the question, but do note that Hopper H100 has eight GPCs of 18 SMs each, so that seems a reasonable configuration for AD102 as well — just without HBM3 and with less focus on deep learning hardware.
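Here's a quick sketch of how the ROP count falls out of the GPC arrangement, assuming Ada keeps Ampere's 16 ROPs per GPC (an assumption on our part):

```python
# Candidate GPC layouts for a 144 SM AD102, assuming Ampere-style 16 ROPs per GPC.
ROPS_PER_GPC = 16

for gpcs, sms_per_gpc in [(12, 12), (9, 16), (8, 18)]:
    print(f"{gpcs} GPCs x {sms_per_gpc} SMs = {gpcs * sms_per_gpc} SMs, "
          f"{gpcs * ROPS_PER_GPC} ROPs")
# Prints 192, 144, or 128 ROPs, which is why our table hedges on the ROP counts.
```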

Questionable Leaks and Rumors

Hopper H100 has 144 SMs, spread out over 8 GPCs. Replace the HBM3 with GDDR6X, take out most of the FP64 cores and dumb down the tensor cores, then add in RT cores and you theoretically end up with Ada AD102. Maybe.  (Image credit: Nvidia)

Again, a disclaimer is in order. The 144 SMs figure for AD102 is… suspicious. The Hopper H100 chip coincidentally also has 144 SMs total, of which 132 are enabled in the top tier offering right now. For Ada and Hopper to both have the same 144 SMs would be very surprising. GA100 had a maximum of 120 SMs, so with the H100 Nvidia has only increased the SM count by 20%. In contrast, the supposed leaks have the AD102 sporting 71% more SMs than GA102.

We don't have anything better to go on right now, so we're reporting on the rumored 144 SM figure, but don't be surprised if that turns out to be totally bunk. Just because Nvidia was hacked and data was leaked doesn't mean everything that got out was accurate. Nvidia would potentially be better off tuning the architecture for higher clocks and using fewer SMs, similar to what AMD did with RDNA 2, but that could require a more significant overhaul of the underlying architecture.

On the other hand, there's at least one good reason for AD102 to be a huge chip: professional GPUs. Nvidia doesn't make completely separate silicon for the consumer and professional markets, as evidenced by the A-series chips like the RTX A6000. That card uses the same GA102 chip as the RTX 3080 through 3090 Ti, just with a few extra features turned on in the drivers. Ray tracing hasn't really set the gaming world on fire, but it's a big deal for the professional market, and packing in even more RT cores would be a boon to 3D rendering farms. Also note that Hopper H100 doesn't include any ray tracing hardware, just like the GA100 it replaces.

The various Ada GPUs will also be used for inference platforms running AI and ML algorithms, which again means more Tensor cores and compute can be put to use. So, the bottom line is that the supposed maximum 144 SMs isn't totally out of the question, but it certainly warrants a healthy dose of skepticism. Perhaps the Nvidia hack found outdated information, or people interpreted it incorrectly. We'll know more in the coming months. 

Memory Subsystem: GDDR6X Rides Again

 The Ampere GA102 supports up to twelve 32-bit memory channels populated by GDDR6X, and we suspect AD102 will use a similar layout — just with faster memory speeds. (Image credit: Nvidia)

Recently, Micron announced it has roadmaps for GDDR6X memory running at speeds of up to 24Gbps. The latest RTX 3090 Ti only uses 21Gbps memory, and Nvidia is currently the only company using GDDR6X for anything. That immediately raises the question of what will use 24Gbps GDDR6X, and the only reasonable answer seems to be Nvidia Ada. The lower-tier GPUs are more likely to stick with standard GDDR6, which tops out at 18Gbps, rather than GDDR6X.

This represents a bit of a problem, as GPUs generally need compute and bandwidth to scale proportionally to realize the promised amount of performance. The RTX 3090 Ti for example has 12% more compute than the 3090, and the higher clocked memory provides 8% more bandwidth. If our compute estimates from above prove even close to accurate, there's a huge disconnect brewing. A hypothetical RTX 4090 could have around 80% more compute than the RTX 3090 Ti, but only 14% more bandwidth.
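The bandwidth figures themselves are easy to check: bus width (in bytes) times the per-pin data rate. A minimal sketch comparing the RTX 3090 Ti to a maxed-out AD102 configuration (assuming the rumored 384-bit bus and 24Gbps GDDR6X):

```python
# Memory bandwidth (GB/s) = bus width in bits / 8 * per-pin data rate in Gbps.
def bandwidth_gb_s(bus_bits: int, speed_gbps: float) -> float:
    return bus_bits / 8 * speed_gbps

rtx_3090_ti = bandwidth_gb_s(384, 21)  # 1008 GB/s
ad102_max = bandwidth_gb_s(384, 24)    # 1152 GB/s with rumored 24Gbps GDDR6X
print(f"Bandwidth uplift: {ad102_max / rtx_3090_ti - 1:.0%}")  # ~14%
```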

There's far more room for bandwidth to grow on the lower tier GPUs, assuming GDDR6X power consumption can be kept in check. The current RTX 3050 through RTX 3070 all use standard GDDR6 memory, clocked at 14–15Gbps. We already know GDDR6 running at 18Gbps will be available in time for Ada, so a hypothetical RTX 4050 with 18Gbps GDDR6 ought to easily keep up with the increase in GPU computational power. If Nvidia still needs more bandwidth, it could tap GDDR6X for the lower tier GPUs as well.

There's also a slim chance that the higher tier Ada GPUs could end up being paired with GDDR7, or perhaps Samsung's "GDDR6+" that reportedly will hit speeds of up to 27Gbps. We haven't heard concrete details on either of those, however, and at this stage Nvidia will need its partners to be ramping up memory production. More production would inevitably lead to more leaks, and since we haven't seen leaks of GDDR7 or GDDR6+ production, we're going to assume it won't be here in time.

More likely is that Nvidia won't need massive increases in pure memory bandwidth, because instead it will rework the architecture, similar to what we saw AMD do with RDNA 2 compared to the original RDNA architecture. 

Ada Looks to Cash in on L2 Cache

One great way of reducing the need for more raw memory bandwidth is something that has been known and used for decades. Slap more cache on a chip and you get more cache hits, and every cache hit means the GPU doesn't need to pull data from the GDDR6/GDDR6X memory. AMD's Infinity Cache allowed the RDNA 2 chips to basically do more with less raw bandwidth, and leaked Nvidia Ada L2 cache information suggests Nvidia will take a somewhat similar approach.

AMD uses a massive L3 cache of up to 128MB on the Navi 21 GPU, with 96MB on Navi 22, 32MB on Navi 23, and just 16MB on Navi 24. Surprisingly, even the smaller 16MB cache does wonders for the memory subsystem. We didn't think the Radeon RX 6500 XT was a great card overall, but it basically keeps up with cards that have almost twice the memory bandwidth.

The Ada architecture appears to pair an 8MB L2 cache with each 32-bit memory controller. That means the cards with a 128-bit memory interface will get 32MB of total L2 cache, and the 384-bit interface cards at the top of the stack will have 96MB of L2 cache. While that's less than AMD's Infinity Cache in some cases, we don't know latencies or other aspects of the design yet. L2 cache tends to have lower latencies than L3 cache, so a slightly smaller L2 could definitely keep up with a larger but slower L3 cache.
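If the 8MB-per-controller figure holds, the total L2 falls straight out of the bus width, which is where the cache column in our table comes from. A minimal sketch, using the rumored bus widths:

```python
# Rumor: 8MB of L2 per 32-bit memory controller. Bus widths are also rumored.
L2_PER_CONTROLLER_MB = 8

for gpu, bus_bits in [("AD102", 384), ("AD103", 256), ("AD104", 192),
                      ("AD106", 128), ("AD107", 128)]:
    controllers = bus_bits // 32
    print(f"{gpu}: {controllers} controllers -> {controllers * L2_PER_CONTROLLER_MB}MB L2")
# Prints 96, 64, 48, 32, and 32MB, matching the table at the top.
```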

If we look at AMD's RX 6700 XT as an example, it has about 35% more compute than the previous generation RX 5700 XT. Performance in our GPU benchmarks hierarchy meanwhile is about 32% higher at 1440p ultra, so performance overall scaled pretty much in line with compute. Except, the 6700 XT has a 192-bit interface and only 384 GB/s of bandwidth, 14% lower than the RX 5700 XT's 448 GB/s. That means the big Infinity Cache gave AMD a 50% boost to effective bandwidth.

Assuming Nvidia can get similar results with Ada, take the 14% increase in bandwidth that comes via 24Gbps memory and then pair that with a 50% increase in effective bandwidth. That would give AD102 roughly 71% more effective bandwidth, which is close enough to the increase in GPU compute that everything should play out nicely.
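In other words, the back-of-the-napkin math looks like this, using RDNA 2's scaling as a stand-in for what a large cache might do for Ada (a big assumption, to be clear):

```python
# RX 6700 XT vs RX 5700 XT: ~32% more performance from ~14% less raw bandwidth,
# implying the Infinity Cache boosted effective bandwidth by roughly 50%.
cache_boost = 1.32 / (1 - 0.14)   # ~1.54x

# Apply a similar ~1.5x factor to AD102's rumored 14% raw bandwidth increase.
ad102_effective = 1.14 * 1.5      # ~1.71x, i.e. ~71% more effective bandwidth
print(f"Implied cache boost: {cache_boost:.2f}x, AD102 effective: {ad102_effective:.2f}x")
```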

More disclaimers on the cache rumors are in order, however. Nvidia has released plenty of details on Hopper H100. It does indeed carry a larger L2 cache size than the previous generation GA100, but it's not 8MB per memory controller. In fact, the total L2 cache on H100 checks in at 50MB, compared to the A100's 40MB of L2. But Hopper also uses HBM3 memory and will be used with massive data sets, which is why it has 80GB of graphics memory. Anything that can't fit in 40MB isn't likely to fit in 50MB either, or even 150MB. Consumer workloads, and games in particular, are far more likely to benefit from a larger cache. Nvidia might be following in AMD's footsteps here, or the rumors might end up being completely wrong.

Ada Power Consumption

RTX 3090 Ti cards like the Asus TUF Gaming OC are already pushing 500W or more.  (Image credit: Tom's Hardware)

One element of the Ada architecture that's sure to raise an eyebrow or two will be power consumption. Igor of Igor's Lab was the first to go on record with rumors of a 600W TBP (Typical Board Power) for Ada, and the first time we heard that we laughed. "No way," we thought. Nvidia graphics cards topped out at close to 250W for many years, and Ampere's jump to 350W on the RTX 3090 (and later RTX 3080 Ti) already felt somewhat excessive. Then Nvidia announced the Hopper H100 specs and released the RTX 3090 Ti, and suddenly 600W didn't feel so unlikely.

It all goes back to the end of Dennard scaling, right along with the death of Moore's Law. Put simply, Dennard scaling — also called MOSFET scaling — observed that with every generation, dimensions could be scaled down by about 30%. That reduced overall area by 50% (scaling in both length and width), voltage dropped a similar 30%, and circuit delays would decrease by 30% as well. Furthermore, frequencies would increase by around 40% and total power consumption would decrease by 50%.

If that all sounds too good to be true, it's because Dennard scaling effectively stopped happening around 2007. Like Moore's Law, it didn't totally fail, but the gains became far less pronounced. Clock speeds in integrated circuits have only increased from a maximum of around 3.7GHz in 2004 with the Pentium 4 Extreme Edition to today's maximum of 5.5GHz in the Core i9-12900KS. That's still almost a 50% increase in frequency, but it's come over six generations (or more, depending on how you want to count) of process node improvements. Put another way, if Dennard scaling hadn't died, modern CPUs would clock as high as 28GHz. RIP, Dennard scaling, you'll be missed.
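That 28GHz figure comes from compounding the roughly 40% per-node frequency gain Dennard scaling used to promise across about six node transitions:

```python
# If Dennard scaling had held: ~40% higher clocks per node, compounded over ~6 nodes.
base_clock_ghz = 3.7  # Pentium 4 Extreme Edition, 2004
hypothetical = base_clock_ghz * 1.4 ** 6
print(f"Hypothetical clock: {hypothetical:.0f} GHz vs. today's 5.5 GHz")  # ~28 GHz
```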

It's not just the frequency scaling that died, but power and voltage scaling as well. Today, a new process node can improve transistor density, but voltages and frequencies need to be balanced. If you want a chip that's twice as fast, you might need to use nearly twice as much power. Alternatively, you can build a chip that's more efficient, but it won't be any faster. Nvidia seems to be going after the first option with Ada.

Take a 350W Ampere GPU like GA102, and boost performance by 70–80%. Doing that will thus mean using 70–80% more power. 350W then becomes 595–630W. Nvidia might get slightly better than linear scaling, and 600W will very likely be the maximum power use on the reference cards, but we're already hearing word that some next-gen third party overclocked cards might include dual 16-pin power connectors. Zot! 

Will Ada Actually Become the RTX 40-Series?

There's still a question of what the next generation Nvidia GPUs will be called. We've suggested RTX 40-series, sticking with the pattern established by the last several generations, but Nvidia could always change things. One potential reason for a change: the Chinese 'dislike' of the number four, which sounds like the word for death in Cantonese and Mandarin.

Is that a good enough reason to switch things up? Perhaps not. Certainly we've seen plenty of graphics cards and other PC products with "4" in the model number over the years. Nvidia has invested a lot of money in its RTX brand, and while it might not be as exciting if everyone accurately guesses the names of the next series of GPUs, sales are ultimately what matters.

Whatever the Ada graphics cards end up being called won't change their performance or features. Most of us are reasonably convinced Nvidia will use RTX 40-series names, but it's not the end of the world if Nvidia changes things up. 

How Much Will RTX 40-Series Cards Cost?

(Image credit: Shutterstock)

The short answer, and the true answer, is that they will cost as much as Nvidia can get away with charging. Nvidia launched Ampere with one set of financial models, and those proved to be completely wrong for the Covid pandemic era. Real-world prices shot up and scalpers profiteered, and that was before cryptocurrency miners started paying two to three times the official recommended prices. Even now, we're still seeing markups of 30% or more. The good news is that GPU prices are coming down.

More likely than not, generational GPU prices will go up with Ada and the RTX 40-series. However, the supposed large L2 caches and relatively limited increases in memory bandwidth should mean Ada delivers only a modest increase in mining performance over Ampere, just like AMD's RDNA 2 cards are only slightly faster at mining than the RDNA models. That means mining alone almost certainly won't be able to sustain the hugely inflated prices we saw from late 2020 until early 2022, even if mining profitability were to "recover" before Ada arrives.

Depending on where supply and demand of existing cards land come September, we wouldn't be surprised to see the top AD102 graphics cards launch with a starting price of $999 for the base model (likely RTX 4080), with a higher performance "RTX 4090" taking over the $1,999 price point of the RTX 3090 Ti. Maybe Nvidia will even resurrect the Titan brand for Ada, though more likely than not that ship has sailed — too many professionals were using Titan to avoid buying a Quadro or A-series card, it seems.

As we'll discuss in the next section, there's no reason for Nvidia to immediately shift all of its GPU production from Ampere to Ada either. We'll likely see RTX 30-series GPUs still being produced for quite some time, especially since no other GPUs or CPUs are competing for Samsung Foundry's 8N manufacturing. Nvidia stands to gain more by introducing high-end Ada cards first, using all of the capacity it can get from TSMC, and cutting prices on existing RTX 30 cards if necessary to plug any holes.

The reality is that pricing is one of the final pieces of the puzzle to get nailed down. In the past, we've seen last minute changes in price on quite a few graphics cards. AMD's RX 5700 XT and RX 5700 were announced to the press at $499 and $399, respectively, and then dropped to $399 and $349 a week later for the actual launch. Right now, months before retail availability, no one knows for sure where prices will end up. There are multiple contingency plans, and which one gets selected will be determined a week or two before the cards officially go on sale.

Will Nvidia Change the Founders Edition Design?

Nvidia's GeForce RTX 3080 looked different from anything that came before it, but we've become far less enamored with the design.  (Image credit: Nvidia)

Nvidia made a lot of claims about its new Founders Edition card design at the launch of the RTX 3080 and 3090. While the cards generally work fine, what we've discovered over the past 18 months is that traditional axial cooling cards from third party AIC partners tend to cool better and run quieter, even while using more power. The GeForce RTX 3080 Ti Founders Edition was a particularly egregious example of how temperatures and fan speeds couldn't keep up with hotter running GPUs.

Now, factor in the rumored power draw that's nearly double what we saw with Ampere in some cases, and it's difficult to imagine Nvidia sticking with the current industrial design. Perhaps only a few tweaks are needed, but there's a reason all the RTX 3090 Ti cards occupy three or more slots. If Nvidia really has a 600W part in the works, it will need to provide some exceptional cooling to wick the heat away, ideally venting it out of the case.

There haven't been any leaks that we're aware of purporting to show what Ada cards will look like, either from Nvidia or its partners. That makes sense, as we're still quite a few months away from retail availability. Leaked images tend to show up a month or two before an official launch, so as long as nothing has leaked, the big reveal is probably at least a couple of months out.

When Will Ada GPUs Launch?

We've mentioned a September timeframe for the launch of Ada and the RTX 40-series GPUs multiple times, but it's important to keep in mind that the first Ada cards will only be the tip of the iceberg. Nvidia launched the RTX 3080 and RTX 3090 in September 2020, the RTX 3070 arrived one month later, then the RTX 3060 Ti arrived just over a month after that. The RTX 3060 didn't come out until late February 2021, then Nvidia refreshed the series with the RTX 3080 Ti and RTX 3070 Ti in June 2021. The budget-friendly RTX 3050 didn't arrive until January 2022, and finally the RTX 3090 Ti was just launched at the end of March 2022.

We expect a staggered launch for the Ada cards as well, starting with the fastest models and trickling down through the high-end and mainstream offerings, with the budget-oriented AD106 and AD107 likely not coming until 2023 at the earliest. As we just noted, the RTX 3050 only launched in late January, so it won't be due for a replacement for at least another year, if not longer. Then again, we still need true budget offerings to take over from the GTX 1660 and GTX 1650 series. Could we get a new GTX series, or a true budget RTX card for under $200? It's possible, but don't count on it, as Nvidia seems content to let AMD and Intel fight it out in the sub-$200 range.

There will inevitably be a refresh of the Ada offerings about a year after the initial launch as well. Whether those end up being "Ti" models or "Super" models or something else is anyone's guess at this stage, but you can pretty much mark it on your calendar.

More Competition in the GPU Space

Intel's Arc Alchemist GPUs will finally enter the discrete graphics space in the coming months.  (Image credit: Intel)

Nvidia has been the dominant player in the graphics card space for a couple of decades now. It controls roughly 80% of the discrete GPU market and has largely been able to dictate the creation and adoption of new technologies like ray tracing and DLSS. However, with the continuing increase in the importance of AI and compute for scientific research and other computational workloads, and their reliance on GPU-like processors, numerous other companies are looking to break into the industry, chief among them Intel.

Intel hasn't made a proper attempt at a dedicated graphics card since the late 90s, unless you count the aborted Larrabee. This time, Intel Arc appears to be the real deal — or at least a foot in the door. It looks like Intel has focused more on media capabilities, and the jury is very much still out when it comes to Arc's gaming and general compute performance. From what we know, the top consumer models will only be in the 18 TFLOPS range at best, which based on our table above means Arc would only compete with AD106.

But Arc Alchemist is merely the first in a regular cadence of GPU architectures that Intel has planned. Battlemage could easily double down on Alchemist's capabilities, and if Intel can get that out sooner rather than later, it could start to eat into Nvidia's market share, especially in the gaming laptop space.

AMD won't be standing still either, and it has said several times that it's "on track" to launch its RDNA 3 architecture by the end of the year. We expect AMD to move to TSMC's N5 node, meaning it will likely compete directly with Nvidia for wafers and both will have to make similar design decisions. AMD has so far avoided putting any form of deep learning hardware into its consumer GPUs (unlike its MI200 series), but with Arc also including Xe Matrix cores, it may need to rethink that approach.

There's also no question that Nvidia currently delivers far better ray tracing performance than AMD's RX 6000-series cards, but AMD hasn't been nearly as vocal about ray tracing hardware or the need for RT effects in games. Intel, for its part, looks like it may deliver even less RT performance than AMD. But as long as most games continue to run faster and look good without RT effects, it's an uphill battle convincing people to upgrade their graphics cards just for ray tracing.

It's been a long two years of GPU droughts and overpriced cards. 2022 is shaping up to bring the first real excitement in the GPU space since 2020. Hopefully this round will see far better availability and pricing. It could hardly be worse than what we've seen for the past 18 months.

Jarred Walton is a senior editor at Tom's Hardware focusing on everything GPU. He has been working as a tech journalist since 2004, writing for AnandTech, Maximum PC, and PC Gamer. From the first S3 Virge '3D decelerators' to today's GPUs, Jarred keeps up with all the latest graphics trends and is the one to ask about game performance.

  • -Fran-
    One nitpick with this way of phrasing: "That means the big Infinity Cache gave AMD a 50% boost to effective bandwidth".

    The Cache on the GPUs doesn't make it so the card has a higher bandwidth, much like AMD's 3D VCache is not making DDR4 magically have more bandwidth. I know what the implied point is, but I think it shouldn't be explained that way at all. Preventing using the GDDR/DDR BUS to fetch data is not the same as increasing the effective bandwidth of it. You saturate that cache and you're back to using the slow lane. On initial load, you still use the slow lane. Etc...

    Other than that, thanks for the information. I do not look forward to 600W GPUs. Ugh.

    Regards.
  • thisisaname
    The one thing we know for sure is it is not going to be cheap!
  • escksu
    I reckon 1000w gpu isnt that far away...

    Not a good thing for power consumption to keep going up when pple are all talking about climate change and going green
  • drivinfast247
    escksu said:
    I reckon 1000w gpu isnt that far away...

    Not a good thing for power consumption to keep going up when pple are all talking about climate change and going green
    People talk a lot and most of it is only to hear their own voice.
  • spongiemaster
    -Fran- said:
    Other than that, thanks for the information. I do not look forward to 600W GPUs. Ugh.
    Unless you're shopping for a $2000+ GPU, you're not going to have to worry about 600W any time soon. These new flagships are going to be the equivalent of SLI setups from years ago minus the headaches of needing SLI profiles for proper performance. You'll only need one physical slot, but the cooler is going to take up 4 like old school dual slot card SLI.
  • Tom Sunday
    thisisaname said:
    The one thing we know for sure is it is not going to be cheap!

    I will be happy to snagging a basic RTX 3090 (dreaming it costing me around $700 for a GPU generation almost 2-years old) perhaps next year in January or so? Then a 4K TV as more money becomes available. The RTX 40-series is totally crazy in my view as how much power can one ever need. Besides you are right, it will also not be cheap.
  • warezme
    I have been using a 1000w PS for many years but it's getting really long on the tooth so I purchased a new 1200w, waiting to get installed one of these days. I don't really like the idea of needing so much power but I remember the days of 2 and 4 card SLI , I used to run and that was excessive. Now a single card can run circles around all that without the driver and game compatibility issues so it is better.
  • hannibal
    So these prices are for the first 5 seconds and after that we get normal 250% increase in price?
    Have to start saving for the 2000w PSU and getting a bank loan for the GPU!
    Good times ahead ;)
  • DougMcC
    What are GPUs going to do in the next (50) generation? If power increased this much again, we'd be bumping up against the maximum wattage for a north american wall outlet.
  • sizzling
    DougMcC said:
    What are GPUs going to do in the next (50) generation? If power increased this much again, we'd be bumping up against the maximum wattage for a north american wall outlet.
    Until the current generation there had been slight decreases or staying about the same for several generations. It seems after all other consumer electronics have upped their game for decreasing their products power requirements the gpu industry has gone the other way.