AMD RDNA 3 GPU Specs: Up To 12,288 ALUs, 96MB Infinity Cache

Radeon GPU (Image credit: AMD)

Although AMD revealed its RDNA 3 architecture in June, the chipmaker declined to share any specifications for the next-generation graphics cards. However, Angstronomics has shared new information about AMD's upcoming Navi 3x silicon. The outlet claims that the specifications were finalized in 2020 and haven't changed since. Nonetheless, treat the information with some caution.

Angstronomics details three Navi 3x dies: Navi 31, Navi 32, and Navi 33. Going from largest to smallest, Navi 31 is the flagship silicon, touted as the world's first GPU with a chiplet design. Navi 31, which carries the GFX1100 ID (codename Plum Bonito), reportedly features one Graphics Compute Die (GCD) with six accompanying Memory Cache Dies (MCDs). The publication believes that the GCD uses TSMC's 5nm process node, while the MCD is a product of the 6nm node.

The GCD, which measures approximately 308 mm², houses up to 48 Workgroup Processors (WGPs). Breaking that down, each WGP has two compute units (CUs), meaning that the Navi 31 die sports 96 CUs, or 12,288 ALUs. The MCD, meanwhile, is around 37.5 mm² in size. Each MCD carries 16MB of AMD's Infinity Cache, so Navi 31-powered graphics cards will rock up to 96MB of Infinity Cache. Navi 31 comes with a 384-bit memory interface. Navi 31's Infinity Cache is smaller than Navi 21's, which totals 128MB. Angstronomics reckons that AMD is prepping a 3D-stacked MCD (1-hi) that would double the Infinity Cache. However, the performance uplift isn't substantial given the increase in cost, so mainstream Navi 31 will stick to 96MB. In fact, a beefed-up version with 288MB of Infinity Cache (2-hi) was previously in AMD's plans, but the chipmaker may have canned it due to the abysmal benefit-cost ratio.

As per Angstronomics' information, AMD may offer a cut-down version of the Navi 31 SKU. The lesser variant may arrive with only 42 WGPs (84 CUs, or 10,752 ALUs). The silicon purportedly has one fewer MCD, maxing out at an 80MB Infinity Cache configuration and a 320-bit memory bus.
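The rumored configurations all follow the same simple ratios. A minimal sketch, assuming the per-WGP, per-CU, and per-MCD figures implied above (2 CUs per WGP, 128 ALUs per CU, and 16MB of Infinity Cache plus a 64-bit memory slice per MCD; none of this is confirmed by AMD):

```python
# Derive the rumored RDNA 3 configurations from WGP and MCD counts.
# Assumed ratios (inferred from the article's figures, not confirmed by AMD):
#   2 CUs per WGP, 128 ALUs per CU, 16MB Infinity Cache and a 64-bit
#   memory slice per MCD.

def rdna3_config(wgps: int, mcds: int) -> dict:
    cus = wgps * 2
    return {
        "CUs": cus,
        "ALUs": cus * 128,
        "Infinity Cache (MB)": mcds * 16,
        "Memory bus (bit)": mcds * 64,
    }

full_navi31 = rdna3_config(wgps=48, mcds=6)
cut_navi31 = rdna3_config(wgps=42, mcds=5)
print(full_navi31)  # 96 CUs, 12,288 ALUs, 96MB, 384-bit
print(cut_navi31)   # 84 CUs, 10,752 ALUs, 80MB, 320-bit
```

The same arithmetic reproduces the Navi 32 (30 WGPs, 4 MCDs) and Navi 33 (16 WGPs, 128-bit) numbers quoted below, which is consistent with the cache and bus width scaling directly with the MCD count on the chiplet designs.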

Barring any changes, the Navi 31-based reference design seemingly sports an updated triple-fan cooling system that's moderately taller than the existing Navi 21 design. The makeover also includes a three-red-stripe accent on the heatsink fins. In terms of power requirements, Navi 31 could make do with just two 8-pin PCIe power connectors.

AMD RDNA 3 Specifications*

| | Navi 31 | Navi 32 | Navi 33 |
|---|---|---|---|
| GFX ID | GFX1100 | GFX1101 | GFX1102 |
| Codename | Plum Bonito | Wheat Nas | Hotpink Bonefish |
| Design | Chiplet: 1x GCD, 6x MCD | Chiplet: 1x GCD, 4x MCD | Monolithic (TSMC N6, ~203 mm²) |
| GCD | TSMC N5, ~308 mm² | TSMC N5, ~200 mm² | N/A |
| MCD | TSMC N6, ~37.5 mm² | TSMC N6, ~37.5 mm² | N/A |
| WGPs | 48 | 30 | 16 |
| CUs | 96 | 60 | 32 |
| ALUs | 12,288 | 7,680 | 4,096 |
| Infinity Cache (MB) | 96 | 64 | 32 |
| Memory Interface | 384-bit | 256-bit | 128-bit |

*Specifications are unconfirmed.

Navi 32, aka GFX1101 (codename Wheat Nas), is the smaller version of Navi 31, targeting both mobile and desktop segments. The GCD and MCD measure 200 mm² and 37.5 mm², respectively. The GCD only has 30 WGPs, amounting to 60 CUs (7,680 ALUs). Navi 32 only has four MCDs, limiting the Infinity Cache to 64MB, once again less than Navi 22's 96MB configuration. Angstronomics thinks that AMD contemplated a 128MB (1-hi) variant for Navi 32, but the benefit couldn't justify the higher cost, so that model may not make it to market.

In contrast, Navi 33, GFX1102 (Hotpink Bonefish), sticks to a monolithic design measuring around 203 mm². According to Angstronomics' report, AMD had planned to make Navi 33 a chiplet design with 18 WGPs and two MCDs, but since the volume and cost didn't meet AMD's goals, the chipmaker reportedly stuck with a monolithic die.

We'll see Navi 33 on both mobile and desktop graphics cards. However, AMD's priority is to push Navi 33 to mobile devices first, especially with the AMD Advantage initiative, so laptops will arrive before their desktop counterparts.

Navi 33 has 16 WGPs, which equals 32 CUs (4,096 ALUs). Navi 33 boasts drop-in compatibility with Navi 23 PCBs, facilitating adoption among board vendors. It has 32MB of Infinity Cache and a 128-bit memory interface. According to Angstronomics, Navi 33 outperforms Intel's highest-tier Arc Alchemist offering while costing only half as much to produce and being more power efficient.

AMD will launch the company's high-end RDNA 3 graphics cards before the end of the year. Nvidia will counter RDNA 3 with its new GeForce RTX 40-series lineup, rumored to launch in August or September.

Zhiye Liu
News Editor and Memory Reviewer

Zhiye Liu is a news editor and memory reviewer at Tom’s Hardware. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.

  • -Fran-
    Hm... So if the original plan was to have more cache per group via vertical stacking, it'll mean to make up for that difference they'll probably have to clock the memory a tad higher or can't harvest the dies reducing the BUS width. This may lower their efficiency target or will have to play with the clocks a bit better.

    I'm most curious how they'll strike that balance with the chiplets.

    As for Navi33, it looks like a straightforward upgrade from the current Navi23. I just hope they give the damn thing a full X16 width* instead of the crappy X8. I know it's intended for mobile first and PCIe5 (supposedly?), but come on AMD. Make a Navi34 use the X8 instead! I know, I know; it's completely moot at this point and more than likely X8, but I'll be preemptively salty about it anyway! xD

    Regards.
    Reply
  • Alvar "Miles" Udell
    And as someone who was an early adopter of AMD's previous first MCM design, the Zen 1 based 1800X and the huge dumpster fire that turned out to be that ended up costing me over $150 more thanks to having to buy a new license for Windows and RMA shipping, I wouldn't touch these cards if they cost $1.
    Reply
  • TCA_ChinChin
    Alvar Miles Udell said:
    And as someone who was an early adopter of AMD's previous first MCM design, the Zen 1 based 1800X and the huge dumpster fire that turned out to be that ended up costing me over $150 more thanks to having to buy a new license for Windows and RMA shipping, I wouldn't touch these cards if they cost $1.
    Sorry for your bad experience, but it'll be your loss if they're good. Recent AMD products have much improved reliability compared to their Zen 1 stuff.

    On another note, I'm a little bit sad that they couldn't ship with more cache, but I guess we'll see how well they perform soon.
    Reply
  • Alvar "Miles" Udell
    TCA_ChinChin said:
    Sorry for your bad experience, but it'll be your loss if they're good. Recent AMD products have much improved reliability compared to their Zen 1 stuff.

    On another note, I'm a little bit sad that they couldn't ship with more cache, but I guess we'll see how well they perform soon.

    I had a number of negative experiences with AMD over the years, only owned AMD from the 9600XT to my Fury Nano because I disliked nVidia, but now, having switched, it would take a lot for AMD to impress me enough to switch back.
    Reply
  • giorgiog
    My experience with AMD video cards' drivers a decade ago still keeps me from considering them again. On the other hand, my 1st gen Ryzen 1950X Threadripper has been rock solid (OC'd to 4.0ghz) since day one (nearly 5 years ago.) So there's hope.
    Reply
  • cryoburner
    Alvar Miles Udell said:
    And as someone who was an early adopter of AMD's previous first MCM design, the Zen 1 based 1800X and the huge dumpster fire that turned out to be that ended up costing me over $150 more thanks to having to buy a new license for Windows and RMA shipping, I wouldn't touch these cards if they cost $1.
    Zen 1 and the 1800X were not MCM designs. They were monolithic. The multi-chip approach didn't make an appearance until the 3000-series.

    Also, it's unclear what any of that would have to do with buying a Windows license or paying for RMA shipping. >_>

    -Fran- said:
    As for Navi33, it looks like a straightforward upgrade from the current Navi23. I just hope they give the damn thing a full X16 width* instead of the crappy X8. I know it's intended for mobile first and PCIe5 (supposedly?), but come on AMD. Make a Navi34 use the X8 instead! I know, I know; it's completely moot at this point and more than likely X8, but I'll be preemptively salty about it anyway! xD
    PCIe 5.0 x2. : D
    Reply
  • jp7189
    When I first heard about chiplet GPUs, I assumed that meant multiple GCDs tied together with cache. That concerned me from a frametime consistency point of view, but excited me with the idea of 1x, 2x, and 4x configs. 48k ALUs in a single package would change the face of gaming forever. Now that I see we're talking only a single GCD, I'm kinda meh. I'm sure it will be a good and competitive product without glitches, but it won't be earth shaking.
    Reply
  • dipique
    -Fran- said:
    I just hope they give the damn thing a full X16 width* instead of the crappy X8. I know it's intended for mobile first and PCIe5 (supposedly?), but come on AMD. Make a Navi34 use the X8 instead! I know, I know; it's completely moot at this point and more than likely X8, but I'll be preemptively salty about it anyway! xD

    I'm confused why you would want them to consume additional pcie lanes when they can't saturate the ones they already have. And this is on machines with limited pcie lanes.
    Reply
  • -Fran-
    dipique said:
    I'm confused why you would want them to consume additional pcie lanes when they can't saturate the ones they already have. And this is on machines with limited pcie lanes.
    Because people buying low end cards are people, more than likely, upgrading old systems which may not even have PCIe4.

    Regards.
    Reply
  • InvalidError
    -Fran- said:
    Because people buying low end cards are people, more than likely, upgrading old systems which may not even have PCIe4.
    As long as you keep your settings at a level where resources can stay in the GPU's VRAM, there is almost no difference between 3.0x8 and 4.0x16 until you reach 150-200fps where the amount of scene setup traffic and associated latency start becoming an issue in some titles. It is far more problematic at the 4GB low-end where 4.0x4 vs 3.0x4 can be a 50-80% loss due to having to do asset swaps from system memory much sooner and more often.
    Reply