Micron teams up with TSMC to deliver HBM4E, targeted for 2027 — collaboration could enable further customization
High bandwidth, high configurability

Micron has confirmed it will partner with TSMC to manufacture the base logic die for its next-generation HBM4E memory, with production targeted for 2027. The announcement, made during the company’s fiscal Q4 earnings call on September 23, adds yet more detail to an already busy roadmap.
Micron is shipping early HBM4 samples at speeds above 11 Gbps per pin, providing up to 2.8 TB/s of bandwidth, and it has already locked down most of its 2026 HBM3E supply agreements. But the big takeaway is that Micron will hand TSMC the task of fabricating both standard and custom HBM4E logic dies, opening the door to tailored memory solutions for AI workloads.
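That headline bandwidth figure follows directly from the pin rate and HBM4's interface width. Here is a quick back-of-envelope sketch, in Python, of the arithmetic; the 2048-bit width is the JEDEC HBM4 interface figure, and the result should be read as an approximate peak rather than sustained bandwidth:

```python
# Back-of-envelope: per-stack HBM bandwidth = per-pin data rate x interface width.
# The 2048-bit width is the JEDEC HBM4 interface; the pin rate is Micron's quoted
# "above 11 Gbps" figure, so treat the result as an approximate peak number.

def hbm_stack_bandwidth_tbps(pin_rate_gbps: float, interface_width_bits: int = 2048) -> float:
    """Peak per-stack bandwidth in TB/s (decimal units)."""
    return pin_rate_gbps * interface_width_bits / 8 / 1000  # Gbit/s -> GB/s -> TB/s

print(hbm_stack_bandwidth_tbps(11))   # ~2.82 TB/s, in line with Micron's ~2.8 TB/s claim
print(hbm_stack_bandwidth_tbps(10))   # ~2.56 TB/s for SK hynix's 10 GT/s HBM4
```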
The decision also places Micron squarely in the middle of the next wave of AI system design, aligning with previous reporting on HBM roadmaps across Micron, SK hynix, and Samsung, and with earlier analysis of how Micron views HBM4E as a platform for customization.
A semi-configurable subsystem
The industry is already familiar with the HBM cadence: HBM3E today, HBM4 in 2025–2026, and HBM4E around 2027, with each new generation bringing higher per-pin data rates and taller stacks. SK hynix has already confirmed 12-Hi HBM4 with a full 2048-bit interface running at 10 GT/s, while Samsung is plotting similar capacities with its own logic processes. Micron is shipping its own HBM4 stacks and claims more than 20% better power efficiency than HBM3E.
HBM4E is the extension of that roadmap, but Micron is treating it as something more. The company highlighted that the base die will be fabricated at TSMC, not in-house, and that custom logic-die designs will be offered to customers willing to pay a premium. By opening the base die to customization, Micron is effectively turning HBM into a semi-configurable subsystem. Instead of a one-size-fits-all interface layer, GPU vendors could request additional SRAM, dedicated compression engines, or tuned signal paths.
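To make the "semi-configurable subsystem" idea concrete, here is a purely illustrative sketch of the kind of options a custom base die might expose, drawn from the examples above; none of these fields correspond to an actual Micron or TSMC offering:

```python
# Purely illustrative: the kind of knobs a custom HBM4E base die might expose,
# based on the examples named above (extra SRAM, a compression engine, tuned
# signal paths). None of these fields correspond to a real Micron or TSMC offering.
from dataclasses import dataclass

@dataclass
class CustomBaseDieRequest:
    extra_sram_mb: int = 0                    # additional on-die SRAM for the GPU vendor
    compression_engine: bool = False          # dedicated (de)compression block on the base die
    signal_tuning_profile: str = "standard"   # e.g. tuned for a specific package/interposer reach

# A hypothetical accelerator vendor might request something like:
request = CustomBaseDieRequest(extra_sram_mb=64, compression_engine=True,
                               signal_tuning_profile="short-reach")
print(request)
```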
That approach mirrors what we have seen from SK hynix, which has already described customizable base dies as part of its HBM4 strategy. Customized memory is stickier, more profitable, and more valuable to customers trying to squeeze every watt and every cycle out of an AI accelerator, so this is likely to become a lucrative segment of the market.
The importance of AI
The timing of Micron’s plans for HBM4E looks to be no accident. Nvidia and AMD both have next-generation data center GPUs slated for 2026 that will introduce HBM4, and HBM4E looks perfectly aligned to their successors.
Nvidia’s Rubin architecture, expected to follow Blackwell in 2026, is built around HBM4. Rubin-class GPUs are projected to deliver around 13 TB/s of memory bandwidth and up to 288 GB of capacity, a jump from the roughly 8 TB/s ceiling of Blackwell with HBM3E. A follow-on platform, Rubin Ultra, is already on Nvidia’s roadmap for 2027. That platform specifically calls for HBM4E, with each GPU supporting up to a terabyte of memory and aggregate rack-level bandwidth measured in petabytes per second.
AMD’s trajectory is just as aggressive. Its Instinct MI400 family, expected around the same time as Rubin, is also moving to HBM4. Leaks suggest as much as 432 GB of HBM4 and 19.6 TB/s of bandwidth, more than double what AMD's MI350 delivers today. Like Rubin, MI400 uses a chiplet design tied together by ultra-wide memory buses, making HBM4 a necessity. After that comes HBM4E, slated for 2027 or 2028 depending on yields and ecosystem readiness.
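As a rough sanity check on those aggregate-bandwidth figures, the implied per-pin data rate can be worked backwards from the totals. The sketch below assumes eight HBM4 stacks per GPU and the 2048-bit interface; the stack count is an assumption for illustration, not a confirmed configuration:

```python
# Rough sanity check on the leaked aggregate-bandwidth figures: given an assumed
# stack count, work backwards to the implied per-stack bandwidth and per-pin data
# rate on HBM4's 2048-bit interface. The eight-stacks-per-GPU figure is an
# assumption for illustration, not a confirmed configuration.

def implied_pin_rate_gbps(aggregate_tbps: float, stacks: int, interface_width_bits: int = 2048) -> float:
    per_stack_gb_s = aggregate_tbps * 1000 / stacks      # TB/s -> GB/s per stack
    return per_stack_gb_s * 8 / interface_width_bits     # GB/s -> Gbit/s per pin

print(implied_pin_rate_gbps(13.0, 8))   # ~6.3 Gbps per pin for a Rubin-class GPU
print(implied_pin_rate_gbps(19.6, 8))   # ~9.6 Gbps per pin for the leaked MI400 figure
```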
This cadence makes Micron’s partnership with TSMC particularly important. By shifting the base die to a leading-edge logic process and offering customization, Micron can synchronize its roadmap with the needs of Rubin Ultra, MI400 successors, and whatever comes next in the accelerator space.
Looking at the bigger picture, Micron’s partnership with TSMC raises the question of how widely HBM4E might proliferate into AI data centers. Right now, only the highest-end GPUs and TPUs use HBM, with the majority of servers still relying on DDR5 or LPDDR. That could change dramatically as workloads keep ballooning in size.
Micron has already said that its HBM customer base has grown to six, with Nvidia among them. The company is also working with Nvidia on deploying LPDDR in servers. The partnership with TSMC suggests that Micron intends to make HBM4E a broadly adopted piece of AI infrastructure, potentially making HBM4E the standard tier of memory for AI nodes in the second half of the decade.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.