HBM3 Spec Reaches 819 GBps of Bandwidth and 64GB of Capacity

(Image credit: JEDEC)

The evolution of High Bandwidth Memory (HBM) continues with the JEDEC Solid State Technology Association finalizing and publishing the HBM3 specification today. The standout features include up to 819 GBps of bandwidth, stacks up to 16-Hi, and up to 64GB of capacity.

We have seen telltale indicators of what to expect in prior months, with news regarding JEDEC member company developments in HBM3. In November, we reported on an SK hynix 24GB HBM3 demo, and Rambus announced its HBM3-ready combined PHY and memory controller with some detailed specs back in August, for example. However, it is good to see the JEDEC specification now agreed upon so the industry of HBM makers and users can move forward. In addition, the full spec is now downloadable from JEDEC.

(Image credit: Micron)

If you have followed the previous HBM3 coverage, you will know that the central promise of HBM3 is to double the per-pin data rate compared to HBM2. Indeed, the new spec sets a standard 6.4 Gbps data rate, which works out to 819 GBps of bandwidth per stack. The key architectural change behind this speed-up is the doubling of the number of independent memory channels to 16. Moreover, HBM3 supports two pseudo channels per channel, for virtual support of 32 channels.
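As a quick sanity check, the headline bandwidth figure follows directly from the per-pin rate and the 1024-bit interface that HBM stacks use (a back-of-the-envelope sketch; the bus width is standard across HBM generations):

```python
# HBM3 per-stack bandwidth = per-pin rate x interface width / 8 bits per byte.
PINS = 1024        # HBM uses a 1024-bit-wide interface per stack
RATE_GBPS = 6.4    # HBM3 standard per-pin data rate, in Gbit/s

bandwidth_gbytes = RATE_GBPS * PINS / 8
print(f"{bandwidth_gbytes:.1f} GBps per stack")  # 819.2 GBps
```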

Another welcome advance with the move to HBM3 is in potential capacity. With HBM die stacking using TSV technology, capacity grows with denser memory dies and taller stacks. HBM3 will enable capacities from 4GB (8Gb dies, 4-high) to 64GB (32Gb dies, 16-high). However, JEDEC states that 16-high TSV stacks are reserved for a future extension, so HBM3 makers will be limited to 12-high stacks within the current spec (i.e., a maximum of 48GB).
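Those capacity endpoints are simply per-die density times stack height, which a quick calculation confirms:

```python
# HBM3 stack capacity = per-die density (gigabits) x dies per stack / 8 bits per byte.
def stack_capacity_gb(die_gbit: int, stack_height: int) -> float:
    return die_gbit * stack_height / 8

print(stack_capacity_gb(8, 4))    # 4.0 GB  -- 8Gb dies, 4-high: spec minimum
print(stack_capacity_gb(32, 12))  # 48.0 GB -- 32Gb dies, 12-high: current spec maximum
print(stack_capacity_gb(32, 16))  # 64.0 GB -- 32Gb dies, 16-high: future extension
```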

Meanwhile, the first HBM3 devices are expected to be based on 16Gb memory layers, says JEDEC. The range of densities and stack options in the HBM3 spec gives device makers a wide range of configurations.

JEDEC also highlights HBM3's strong platform-level RAS (reliability, availability, serviceability), on-die ECC, and real-time error reporting, plus energy efficiency from 0.4V signaling and a 1.1V operating voltage. All these qualities are very attractive to the target market of HPC and AI processing customers.


| JEDEC spec | HBM3 | HBM2 / HBM2E | HBM |
| --- | --- | --- | --- |
| Per-Pin Transfer Rate (I/O Speed) | 6.4 Gbps | 3.2 Gbps / 3.65 Gbps | 1 Gbps |
| Maximum Dies Per Stack | 12, with up to 16 (16-Hi) on the way | 8 (8-Hi) / 12 (12-Hi) | 4 (4-Hi) |
| Max Package Capacity | 64 GB | 24 GB | 4 GB |
| Max Bandwidth Per Stack | 819 GBps | 410 GBps / 460 GBps | 128 GBps |

With the HBM3 spec now finalized and published, we should start to see progressive adoption of this memory tech as we travel through 2022. However, as hinted at above, it is likely to be highly focused on data centers, enterprise, HPC customers, and the like. We got a fleeting taste of HBM in consumer PC graphics cards back in the mid-2010s, but the implementation made AMD cards too expensive for the performance benefits that could be gained.

Mark Tyson
Freelance News Writer

Mark Tyson is a Freelance News Writer at Tom's Hardware US. He enjoys covering the full breadth of PC tech, from business and semiconductor design to products approaching the edge of reason.