What Are HBM, HBM2 and HBM2E? A Basic Definition

(Image credit: ScotXW/Wikimedia Commons)

HBM stands for high bandwidth memory, a memory interface for 3D-stacked DRAM (dynamic random access memory). It is used in AMD GPUs (aka graphics cards), as well as in the server, high-performance computing, networking and client spaces. Samsung and SK Hynix make HBM chips.

HBM Specs

| | HBM | Original HBM2 | HBM2 / HBM2E (Current) | HBM3 (Upcoming) |
|---|---|---|---|---|
| Max Pin Transfer Rate | 1 Gbps | 2 Gbps | 2.4 Gbps | ? |
| Max Capacity | 4GB | 8GB | 24GB | 64GB |
| Max Bandwidth | 128 GBps | 256 GBps | 307 GBps | 512 GBps |

HBM uses less power yet delivers higher bandwidth than DDR4 or GDDR5 memory, all in smaller chips, making it appealing to graphics card vendors.

(Image credit: AMD)

HBM technology works by vertically stacking memory chips on top of one another. The stacked dies are interconnected by through-silicon vias (TSVs) and microbumps. Additionally, with two 128-bit channels per die, HBM’s memory bus is wider than that of other types of DRAM.

HBM2 and HBM2E

HBM2 debuted in 2016, and in December 2018 JEDEC updated the HBM2 standard. The updated standard is commonly referred to as both HBM2 and HBM2E (the latter to denote the deviation from the original HBM2 standard).

The HBM2 standard allows up to 12 dies per stack for a maximum capacity of 24GB. The standard also pegs peak memory bandwidth at 307 GBps per stack, delivered across a 1,024-bit memory interface split into eight independent channels.

Originally, the HBM2 standard called for up to eight dies in a stack (as with HBM) with an overall bandwidth of 256 GBps.
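The bandwidth figures above follow directly from the bus width and per-pin transfer rate. A quick sanity check, sketched in Python (the function name is illustrative, not from any HBM tooling):

```python
BUS_WIDTH_BITS = 1024  # bits per HBM stack, per the JEDEC standard


def peak_bandwidth_gbps(pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in GBps: bus width (bits) x per-pin rate (Gbps) / 8 bits per byte."""
    return BUS_WIDTH_BITS * pin_rate_gbps / 8


print(peak_bandwidth_gbps(1.0))  # HBM at 1 Gbps per pin -> 128.0 GBps
print(peak_bandwidth_gbps(2.0))  # original HBM2 at 2 Gbps -> 256.0 GBps
print(peak_bandwidth_gbps(2.4))  # HBM2E at 2.4 Gbps -> 307.2 GBps
```

The same arithmetic implies HBM3's quoted 512 GBps would require roughly 4 Gbps per pin on an unchanged 1,024-bit interface.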

HBM3

While not yet available, the HBM3 standard is currently under discussion.

According to an Ars Technica report, HBM3 is expected to support up to 64GB capacities and speeds up to 512 GBps.

HBM3 will deliver more dies per stack and more than 2x the density per die with a similar power budget. It’s expected to come out by 2020.

This article is part of the Tom's Hardware Glossary.
