Nvidia GeForce RTX 40 ‘Ada’ GPUs Reportedly Shun PCIe 5.0 Support
Nvidia allegedly will stick with a PCIe 4.0 interface for Ada
Nvidia’s Ampere GPU architecture has been with us for nearly two years, so gaming enthusiasts are eager to see what its successor, Nvidia Ada, delivers for the consumer market. Unfortunately, since Nvidia hasn’t officially announced Ada — that is expected to happen during the latter half of 2022 — we don’t have any concrete details for the assumed GeForce RTX 40 Series. However, one assumption was that the graphics cards would use the PCIe 5.0 interface for data.
A new tweet from serial leaker kopite7kimi casts doubt on those PCIe 5.0 claims. According to kopite7kimi, Ada will stick with the PCIe 4.0 interface, which provides roughly 64 GB/s of total (bidirectional) bandwidth over an x16 link.
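For readers curious where those headline figures come from, here is a minimal back-of-the-envelope sketch (not an official spec calculation; it assumes x16 links and the 128b/130b line encoding used by PCIe 3.0 through 5.0, and the names in the code are purely illustrative):

```python
# Approximate per-direction PCIe bandwidth: transfer rate (GT/s) x 128/130 encoding
# efficiency, divided by 8 bits per byte, multiplied by the lane count. Rough figures only.
TRANSFER_RATE_GTPS = {"PCIe 3.0": 8.0, "PCIe 4.0": 16.0, "PCIe 5.0": 32.0}

def link_bandwidth_gbps(gen: str, lanes: int = 16) -> float:
    """Approximate one-direction bandwidth in GB/s for a given generation and lane count."""
    return TRANSFER_RATE_GTPS[gen] * (128 / 130) / 8 * lanes

for gen in TRANSFER_RATE_GTPS:
    one_way = link_bandwidth_gbps(gen)
    print(f"{gen} x16: ~{one_way:.0f} GB/s per direction, ~{2 * one_way:.0f} GB/s bidirectional")

# This lands at roughly 32/63 GB/s for PCIe 4.0 x16 and 63/126 GB/s for PCIe 5.0 x16,
# in line with the ~64 GB/s and ~128 GB/s totals quoted for Ada and Hopper H100.
```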
"PCIe Gen4" (embedded tweet from kopite7kimi, April 24, 2022)
The previous assumptions seemed plausible because Ada supports the PCIe 5.0 (12VHPWR) 16-pin power connector (12 + 4 pin), which can supply up to 600W of power to a graphics card. Further fueling this speculation is that Nvidia’s enterprise-class Hopper H100 GPU uses a PCIe 5.0 data bus, capable of delivering total bandwidth of 128 GB/sec.
But there could be a very good reason why Ada won't adopt PCIe 5.0: it likely isn't needed. Current-generation graphics cards like the GeForce RTX 30 and Radeon RX 6000 families cannot saturate the PCIe 4.0 bus. So, Ada may happily hum along just fine with PCIe 4.0, while Hopper GPUs can actually leverage the additional headroom afforded by PCIe 5.0.
Retaining PCIe 4.0 support would also allow gamers to extract the best performance from the GeForce RTX 40 cards on currently available platforms from AMD (i.e., X570) with Zen 3 processors. Intel already supports PCIe 5.0 with its Alder Lake platform, and AMD's upcoming AM5 Zen 4 platform will also support the standard.
We must caution that we should take this new information with a big spoon of salt. Nothing is set in stone until we hear it from Jensen Huang’s mouth, so there’s still a possibility that kopite7kimi got it wrong. And we still don’t know what AMD’s plans are for RDNA 3. If AMD adopts PCIe 5.0 while Nvidia opts out, it will give the former a marketing advantage. We also wouldn’t put it past Intel to bring PCIe 5.0 support to its Arc Alchemist (or perhaps its sequel, Battlemage) discrete desktop GPUs scheduled to arrive later this year.
According to recent reporting, the Ada AD102 GPU recently entered the testing phase as Nvidia gears up to add a new entry to our list of best graphics cards for gaming. Also, be sure to read everything we know so far about Ada and the GeForce RTX 40 Series.
Brandon Hill is a senior editor at Tom's Hardware. He has written about PC and Mac tech since the late 1990s with bylines at AnandTech, DailyTech, and Hot Hardware. When he is not consuming copious amounts of tech news, he can be found enjoying the NC mountains or the beach with his wife and two sons.
hotaru.hino: Well, considering the RTX 3080 barely loses anything going to PCIe 3.0, sounds like a sensible move.
kal326: There was already a rumor that the 3090 Ti PCB was compatible with the next-gen chips. So given that it's 4.0, and that board or a similar PCB would be used for the next generation's initial halo card, it's not really that much of a stretch.
InvalidError:
"Retaining PCIe 4.0 support would also allow gamers to extract the best performance from the GeForce RTX 40 cards using currently available platforms"
Supporting PCIe 5.0 on the GPU wouldn't prevent this either; 4.0 platforms would still be able to run at 4.0 speeds for the best performance they are capable of. All it would do is possibly allow even better performance on PCIe 5.0 platforms.
It may not be needed right now, but I can easily imagine GPUs using 10+ GB of system memory for DirectStorage asset caching and streaming a few years down the road; then having 50+ GB/s of bandwidth to/from system memory may be handy for reducing noticeable asset pops.
Mr.Vegas:
hotaru.hino said: "Well, considering the RTX 3080 barely loses anything going to PCIe 3.0, sounds like a sensible move."
I disagree; you miss one important fact here.
Since there is no HEDT right now (and I mean a good HEDT platform that also does good gaming), we're stuck with whatever we have right now.
Basically, because I have a bunch of PCIe devices, my GPU is always running at x8 Gen 4.0. It's enough for my 3090 (it's like Gen 3.0 x16), but for the newer-gen GPUs I expected Gen 5.0.
It would also let us cut the PCIe lanes even further, to x4/x4/x4/x4, all from the CPU; PCIe Gen 5.0 x4 = PCIe Gen 4.0 x8 = PCIe Gen 3.0 x16, and that's plenty (see the sketch below).
The main issue with Alder Lake is the lack of PCIe bifurcation options; it only offers x16 or x8/x8.
Previous generations had many more options, like x8/x4/x4.
I hope the next-gen CPUs will allow splitting the CPU PCIe lanes like previous-gen CPUs did.
That would be plenty for a high-end mobo: Gen 5.0 x8/x4/x4, plus Gen 4.0 x4 from the PCH, and even four more Gen 3.0 x1 lanes, would give an insane amount of expansion options.
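As an aside, here is a minimal sketch of the lane/generation equivalence the comment above leans on (illustrative per-lane figures, using the same 128b/130b encoding assumption as the earlier sketch):

```python
# Each PCIe generation doubles the per-lane transfer rate, so halving the lane width at the
# next generation keeps roughly the same bandwidth. Approximate figures, one direction only.
PER_LANE_GBPS = {"Gen 3.0": 0.985, "Gen 4.0": 1.969, "Gen 5.0": 3.938}

for gen, lanes in [("Gen 5.0", 4), ("Gen 4.0", 8), ("Gen 3.0", 16)]:
    print(f"{gen} x{lanes}: ~{PER_LANE_GBPS[gen] * lanes:.1f} GB/s")

# All three configurations come out to roughly 15.8 GB/s each way, which is the
# Gen 5.0 x4 = Gen 4.0 x8 = Gen 3.0 x16 equivalence the comment relies on.
```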
Chung Leong: I expect bandwidth requirements to drop in the coming years as newer game engines take better advantage of sampler feedback to more intelligently stream assets into the GPU.
hotaru.hino, quoting Mr.Vegas's comment above ("I disagree; you miss one important fact here. [...]"):
Okay, let me add on something else: The RTX 3080 doesn't have much of a performance loss even at PCIe 2.0 x16 (https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-pci-express-scaling/27.html). And even then, PCIe bandwidth is only a major concern if the card ran out of VRAM (like, really ran out of VRAM), which isn't going to be a problem unless you play at 4K with the amount of VRAM you get on a flagship card.