HBM stands for High Bandwidth Memory, a type of 3D-stacked DRAM (dynamic random access memory) interface used in AMD graphics cards, as well as in the server, high-performance computing (HPC), networking, and client spaces. Samsung and SK Hynix make HBM chips.
HBM Specs
| | HBM2 / HBM2E (Current) | HBM | Original HBM2 | HBM3 (Upcoming) |
|---|---|---|---|---|
| Max Pin Transfer Rate | 2.4 Gbps | 1 Gbps | 2 Gbps | ? |
| Max Capacity | 24GB | 4GB | 8GB | 64GB |
| Max Bandwidth | 307 GBps | 128 GBps | 256 GBps | 512 GBps |
HBM uses less power yet delivers higher bandwidth than DDR4 or GDDR5 memory, all with smaller chips, making it appealing to graphics card vendors.
HBM technology works by vertically stacking memory chips on top of one another. The chips are connected by through-silicon vias (TSVs) and microbumps. Additionally, with two 128-bit channels per die, HBM’s memory bus is wider than that of other types of DRAM.
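To make the bandwidth math concrete, here is a minimal Python sketch using the figures above. The channel width and channel count come from this article; the formula itself (bus width × pin transfer rate) is the standard way to compute peak theoretical throughput, not something specific to any vendor's implementation.

```python
# Peak theoretical bandwidth of one HBM stack, from the figures above:
#   peak GBps = bus width (bits) * pin transfer rate (Gbps) / 8 bits-per-byte

BITS_PER_CHANNEL = 128    # two 128-bit channels per die (per the article)
CHANNELS_PER_STACK = 8    # eight channels per stack (per the article)

bus_width_bits = BITS_PER_CHANNEL * CHANNELS_PER_STACK  # the 1,024-bit interface

def peak_bandwidth_gbps(pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in GBps at a given per-pin transfer rate."""
    return bus_width_bits * pin_rate_gbps / 8

# Pin rates taken from the spec table above
for name, pin_rate in [("HBM", 1.0), ("Original HBM2", 2.0), ("HBM2E", 2.4)]:
    print(f"{name}: {peak_bandwidth_gbps(pin_rate):.0f} GBps per stack")

# HBM: 128 GBps per stack
# Original HBM2: 256 GBps per stack
# HBM2E: 307 GBps per stack
```

The results line up with the Max Bandwidth row of the spec table above.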
HBM2 and HBM2E
HBM2 debuted in 2016, and JEDEC updated the HBM2 standard in December 2018. The updated standard is commonly referred to as both HBM2 and HBM2E, with the latter name denoting the deviation from the original HBM2 standard.
The HBM2 standard allows up to 12 dies per stack for a max capacity of 24GB. The standard also pegs memory bandwidth at 307 GBps, delivered across a 1,024-bit memory interface split into eight channels on each stack.
Originally, the HBM2 standard called for up to eight dies per stack (as with HBM), with a maximum bandwidth of 256 GBps.
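The density implications fall out of simple division: each standard's maximum stack capacity over its maximum die count gives the per-die capacity it implies. These are derived figures, not numbers quoted directly in the standard.

```python
# Per-die capacity implied by each standard's limits (derived, not quoted):
#   per-die GB = max stack capacity / max dies per stack

specs = {
    "Original HBM2": {"max_capacity_gb": 8, "max_dies": 8},
    "HBM2/HBM2E":    {"max_capacity_gb": 24, "max_dies": 12},
}

for name, s in specs.items():
    print(f"{name}: {s['max_capacity_gb'] / s['max_dies']:g} GB per die")

# Original HBM2: 1 GB per die
# HBM2/HBM2E: 2 GB per die
```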
HBM3
While not yet available, the HBM3 standard is currently in discussion.
According to an Ars Technica report, HBM3 is expected to support capacities up to 64GB and bandwidth up to 512 GBps.
HBM3 is also expected to deliver more dies per stack and more than double the density per die within a similar power budget, and to arrive by 2020.
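As a rough sanity check, and assuming HBM3 keeps the same 1,024-bit per-stack interface as its predecessors (an assumption, since the standard is still in discussion), the reported 512 GBps figure would imply a pin transfer rate of about 4 Gbps:

```python
# Implied HBM3 pin rate, ASSUMING the 1,024-bit interface carries over.
# The HBM3 standard is still in discussion, so treat this as an estimate.

bus_width_bits = 1024        # assumed to match HBM/HBM2
expected_gbps = 512          # reported expected bandwidth (GBps)

implied_pin_rate = expected_gbps * 8 / bus_width_bits
print(f"Implied pin rate: {implied_pin_rate:g} Gbps")  # -> 4 Gbps
```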
This article is part of the Tom's Hardware Glossary.
Further reading:
- Best Graphics Cards for Gaming
- GPU Performance Hierarchy: Video Cards Ranked from Fastest to Slowest
- Graphics Cards Reviews