According to a LinkedIn profile, AMD is working on another chiplet-based GPU — UDNA could herald the return of 2.5D/3.5D chiplet-based configurations
Multi-GPU is back?

AMD's current-generation Radeon RX 9000-series product line-up, based on the RDNA 4 architecture, does not attempt to challenge Nvidia in the high-end desktop GPU market. Its range-topping Radeon RX 9070 XT rivals Nvidia's mid-range GeForce RTX 5070 Ti, one of the best graphics cards around. But it looks like the company's graphics division has an ace or two up its sleeve for the next generation, according to the LinkedIn profile of one of its senior fellows.
Laks Pappu, senior fellow and chief system-on-chip (SoC) architect at AMD, appears to be in charge of AMD's data center GPU development as well as the architecture of Radeon products for cloud gaming and the Navi 4x and Navi 5x generations, according to his LinkedIn profile. He describes his job as 'building next-generation competitive 2.5D/3.5D chiplet-based and monolithic graphics SoCs on various packaging technologies,' which strongly implies that AMD's next-generation graphics processors will use both monolithic and multi-chiplet arrangements.
Laks Pappu joined AMD in August 2022 after over 25 years at Intel, where he was in charge of Intel's discrete graphics processors codenamed DG1, Alchemist, and Battlemage. He also explored 'multi-tile GPUs' for high-end graphics cards, though for now dual-GPU Battlemage products are aimed at AI workloads rather than graphics.
High-end GPUs for gaming and data centers typically follow a 2.5 to 3.5-year development cycle from architecture conception to final product: architecture definition and block-level planning take about a year, physical implementation takes another 1 – 1.5 years depending on design complexity and transistor count, and tape-out and silicon bring-up take roughly another year. When Pappu joined AMD in August 2022, the RDNA 4 and CDNA 4 architectures had already been defined, but he could still have significantly influenced physical implementation, block configuration, power/performance tradeoffs, and final silicon tuning. So while he was not in charge of RDNA 4 and CDNA 4 architecture definition or development, his influence on the Radeon RX 9000-series and Instinct MI350-series products was significant.
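As a back-of-the-envelope check, the phase durations listed above can be summed to project a timeline. Note that the phases as listed actually add up to 3 – 3.5 years; the 2.5-year lower bound presumably assumes some overlap between phases. The figures below are the article's own approximations, not official AMD schedules:

```python
# Rough GPU development-cycle arithmetic using the illustrative phase
# durations from the article (all figures are approximations in years).
phases = {
    "architecture definition & block-level planning": (1.0, 1.0),  # (min, max)
    "physical implementation": (1.0, 1.5),
    "tape-out & silicon bring-up": (1.0, 1.0),
}

lo = sum(mn for mn, _ in phases.values())  # best-case total
hi = sum(mx for _, mx in phases.values())  # worst-case total
print(f"Total development cycle: {lo:.1f} to {hi:.1f} years")
# Prints: Total development cycle: 3.0 to 3.5 years

# Projecting from August 2022 (when Pappu joined AMD), assuming a
# clean-sheet design started then and no phase overlap:
start = 2022 + 8 / 12
print(f"Projected completion window: ~{start + lo:.1f} to ~{start + hi:.1f}")
```

This is only a sanity check on the article's timeline, but it lands in the same ballpark as the late-2026/early-2027 window discussed later in the piece.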
Meanwhile, as he is involved in the Navi 5x generation and probably the Instinct MI500-series generation, these would be the first architectures he has led from the ground up, giving him influence over the full development cycle. Apparently, Navi 5x could use 2.5D or 3.5D packaging, if Pappu's job description on LinkedIn is accurate.
While AMD's Instinct MI300-series and Nvidia's Blackwell-series data center GPUs for AI and HPC use disaggregated designs, no existing client GPU relies on a multi-tile architecture (except for Navi 31, which disaggregates the design in a different way).
Building multi-tile consumer GPUs is extremely challenging due to the tightly coupled nature of graphics processing workloads and the need for ultra-fast, low-latency communication between processing units. Unlike CPUs, which can tolerate some latency across cores or chiplets, GPUs rely on thousands of parallel threads that must coordinate precisely and quickly, specifically within warps or thread groups. Disaggregating shader cores across multiple dies introduces synchronization overhead, latency penalties, and complex coherency requirements that can significantly reduce performance or increase power consumption. Moreover, maintaining high bandwidth between tiles demands advanced packaging technologies and interconnects (like Infinity Fabric or CoWoS), which increase cost and power consumption. Additionally, software and drivers must present the multi-tile GPU as a single, unified device to operating systems and game engines, adding another layer of complexity. Combined, these architectural, manufacturing, and software hurdles have kept multi-tile designs mostly confined to data center and HPC GPUs, where the economics and workloads better justify the trade-offs.
However, as it is getting harder and more expensive to build large gaming GPUs (like Nvidia's GB202), at some point it may finally make sense to build multi-tile consumer-oriented graphics processors. Disaggregated multi-tile designs improve yields at the silicon level (though advanced packaging also consumes some yield), but costs increase due to packaging complexity and interposer/bridge dies. So, if AMD figures out compute disaggregation, it may well build a multi-tile GPU for client applications.
AMD was the first company to use a multi-chiplet design for data center and consumer CPUs, so it would not be a surprise if the company disaggregates graphics processing units in the future. In fact, AMD's Radeon RX 7900-series Navi 31 processors already feature a disaggregated design consisting of one main graphics compute die (GCD) and six cache/memory controller/PHY chiplets, so it may well be considered an experiment for a multi-tile GPU. Furthermore, the floor plan of Navi 31's GCD indicates that its design is very symmetrical, which means the chip could be 'halved' if needed, assuming AMD figures out how to disaggregate the design at the logical level and make software think it is dealing with a monolithic GPU. For Navi 31, such a design enabled AMD to create multiple product tiers from a single design (Radeon RX 7900 XTX, RX 7900 XT, RX 7900 GRE, RX 7900M), but in theory it could well have built a multi-tile GPU had it figured out compute disaggregation.
Yet, judging from Pappu's LinkedIn profile, he has indeed envisioned multi-tile 'halo' GPUs while at Intel and is now working on 'next-generation competitive 2.5D/3.5D chiplet-based and monolithic graphics SoCs,' perhaps based on the RDNA 5 architecture.
When to expect RDNA 5 is an interesting question, though. AMD's usual GPU cadence follows a two-year cycle. The company could have launched its RDNA 4-based Radeon RX 9070-series products in late 2024, but delayed them to March 2025. Hence, it is entirely reasonable to expect RDNA 5 to arrive in late 2026 or early 2027.
As of August 2025, RDNA 5 (Navi 5x) is most likely in the tape-out or early post-tape-out phase, meaning the architectural and RTL design stages are complete, physical design and verification are wrapping up, and AMD is either finalizing or has just delivered the GPU's GDSII files to TSMC for initial silicon fabrication. This aligns with a late 2026 to early 2027 launch window and places RDNA 5 at a stage where real hardware is still months away, but performance projections, firmware development, and initial driver work are well underway internally. That said, AMD will learn over the next several months, based on testing real hardware, whether a multi-tile design makes sense for consumer GPUs. Hence, we are probably going to see some very interesting leaks over the next few months. Stay tuned.

Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
hannibal: First gen chiplet GPUs did not work very well; the 7000 series was a rather big disappointment for AMD. Let's see if they manage better this time, or if they only use chiplets in data center GPUs…
tamalero (replying to hannibal): Well, Ryzen 1st gen wasn't all gold either. I still remember they had a lot of issues with the memory controller.
Notton: Having a look at the usual suspects, the next gen of chiplet products appears to point toward unifying AMD's PlayStation, Xbox, handheld, mobile, and dGPU lineups into a less sparse one.
As in, the Xbox custom solution could share the same GPU tile as a desktop graphics card.
Same goes for the other semi-custom chips that AMD currently makes, of which there are way too many.
As for a multi-tile GPU, I think that'll only exist for AI workloads, where high idle power consumption is not important.