AMD to split flagship AI GPUs into specialized lineups for AI and HPC, add UALink — Instinct MI400-series models take a different path

(Image credit: AMD)

Right now, AMD's Instinct MI300-series accelerators target both AI and HPC, which makes them universal but caps peak performance in both types of workloads. Starting with its next-generation Instinct MI400 series, AMD will offer distinct processors for AI and for supercomputers in a bid to maximize performance for each workload, according to SemiAnalysis. However, there might be a problem with the scalability of these compute GPUs.

AMD plans to offer the Instinct MI450X for AI and the Instinct MI430X for HPC sometime in the second half of 2026. Both processors will rely on subsets of the CDNA Next architecture, but each will be tailored either for low-precision AI compute (FP4, FP8, BF16) or for high-precision HPC compute (FP32, FP64). This bifurcation will let AMD strip FP32 and FP64 logic from the MI450X, and FP4, FP8, and BF16 logic from the MI430X, freeing die area for the datatypes each product actually targets.

In addition to these workload optimizations, AMD's Instinct MI400-series accelerators will feature both Infinity Fabric and UALink interconnects, making them some of the first AI and HPC GPUs to support UALink, a technology designed to challenge Nvidia's NVLink. But there is a major problem with UALink.

Support for UALink will be limited in 2026 due to the absence of switching silicon from external vendors, including Astera Labs, Auradine, Enfabrica, and XConn. As a result, the Instinct MI430X will only be usable in small configurations with topologies like mesh or torus, as there will be no UALink switches next year. AMD does not develop its own UALink switches and therefore relies entirely on partners, who may not be ready in the second half of next year.
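To illustrate why switchless point-to-point topologies limit scale, here is a minimal sketch of a 2D torus (an assumption for illustration only; neither AMD nor SemiAnalysis has detailed the exact switchless topology). Each accelerator links directly to just four neighbors, so the worst-case hop count grows with the mesh size, whereas a switch would give every GPU a single-hop path to any peer:

```python
# Illustrative sketch, not AMD's actual design: GPUs arranged in a w x h
# torus, where each GPU has direct links only to its four wrap-around
# neighbors instead of a single-hop switched fabric.

def torus_neighbors(x, y, w, h):
    """Return the four directly linked peers of GPU (x, y) in a w x h torus."""
    return [((x + 1) % w, y), ((x - 1) % w, y),
            (x, (y + 1) % h), (x, (y - 1) % h)]

def torus_diameter(w, h):
    """Worst-case hop count between any two GPUs (wrap-around shortest path)."""
    return w // 2 + h // 2

# An 8 x 8 torus of 64 GPUs: only 4 direct links per GPU, and traffic between
# distant GPUs may cross up to 8 intermediate links, consuming bandwidth on
# every hop. A UALink switch would make every pair of GPUs one hop apart.
print(len(torus_neighbors(0, 0, 8, 8)), torus_diameter(8, 8))  # 4 8
```

The takeaway matches the article's point: without switch silicon, all-to-all bandwidth degrades as the configuration grows, which is why switchless deployments stay small.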

Progress in UALink development has been slow due to coordination delays in the standards body. According to SemiAnalysis, chipmakers like Broadcom view the market for such switches as too small and are not assigning enough engineering resources to accelerate timelines. By contrast, networking initiatives under the Ultra Ethernet Consortium are advancing more quickly and already have compatible hardware available commercially.

In a bid to compete against Nvidia with its own rack-scale solutions, AMD intends to offer systems called Instinct MI450X IF64 and MI450X IF128 that will rely on the Infinity Fabric technology, possibly over Ethernet. SemiAnalysis believes that such solutions could be competitive with Nvidia's VR200 NVL144 platforms in the second half of 2026, though it remains to be seen how these systems will stack up.


Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • bit_user
    The article said:
    Right now, AMD's Instinct MI300-series accelerators are aimed at both AI and HPC, which makes them universal but lowers maximum performance for both types of workloads. Starting from its next-generation Instinct MI400-series, AMD will offer distinct processors for AI and supercomputers in a bid to maximize performance for each workload, according to SemiAnalysis.
    Yes, it's been obvious this would happen. Nvidia was first to show signs of moving in this direction, if you look at how Blackwell started to back away from fp64 compute, in favor of more AI horsepower.

    In the chiplet era, I could imagine HPC and AI chiplets being combined in a package. You'd have models with all AI chiplets, some with a 50/50 mix of AI and HPC... not sure if there's still enough of a market for 100% HPC chiplet accelerators, but they could do it if there were.