What Are HBM and HBM2? A Basic Definition

Credit: ScotXW/Wikimedia Commons

HBM stands for high bandwidth memory and is a type of memory interface used with 3D-stacked DRAM (dynamic random access memory). Developed by AMD and SK Hynix, and also manufactured by Samsung, it is used primarily in GPUs (aka graphics cards).

HBM uses less power yet delivers higher bandwidth than DDR4 or GDDR5 memory. It does this by stacking as many as eight DRAM dies and connecting them vertically with through-silicon vias (TSVs) and microbumps. The HBM memory bus is far wider than that of other DRAM types: each die provides two 128-bit channels, so an eight-die stack exposes a 1,024-bit interface. This wide-interface architecture lets HBM run at low clock speeds for low power while still achieving high throughput: at 1 GT/s (gigatransfer per second) per pin, a stack delivers an overall bandwidth of 128 gigabytes per second (GB/s).
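The arithmetic behind that 128GB/s figure is simple: bus width in bytes times transfers per second. A short sketch (the helper function and its name are mine, not part of any spec; figures are peak theoretical bandwidth per stack):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, rate_gt_s: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bytes) x (billions of transfers per second)."""
    return bus_width_bits / 8 * rate_gt_s

# First-generation HBM: 8 channels x 128 bits = 1,024-bit stack interface, 1 GT/s per pin
print(peak_bandwidth_gb_s(1024, 1.0))  # 128.0 GB/s per stack
```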


HBM2 debuted in 2016. In December 2018, JEDEC updated the HBM2 standard. It now allows up to 12 dies per stack for a maximum capacity of 24GB. The standard also pegs memory bandwidth at 307GB/s per stack, delivered across a 1,024-bit memory interface split into eight independent channels.

Originally, the HBM2 standard called for up to eight dies per stack (as with HBM) and an overall bandwidth of 256GB/s.
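The same bus-width-times-rate arithmetic accounts for both HBM2 figures. A quick sketch (the helper is my own illustration; the 2.4 GT/s and 2 GT/s per-pin rates are inferred from the quoted bandwidths, not stated in the article):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, rate_gt_s: float) -> float:
    # Peak GB/s = bytes per transfer x billions of transfers per second
    return bus_width_bits / 8 * rate_gt_s

# Original HBM2: 1,024-bit stack interface at 2 GT/s per pin
print(peak_bandwidth_gb_s(1024, 2.0))  # 256.0 GB/s
# The updated standard's ~307GB/s implies roughly 2.4 GT/s per pin
print(peak_bandwidth_gb_s(1024, 2.4))  # 307.2 GB/s
```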

HBM3 and HBM4

While not yet available, HBM3 and HBM4 standards are currently being discussed.

HBM3 is expected to increase density over HBM2 with more dies per stack, more density per die and higher efficiency. The per-pin transfer rate is being pegged at 4GT/s, and bandwidth is expected to reach 512GB/s to 1TB/s. According to an Ars Technica report, Samsung is planning production in 2019 or 2020.
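Those projections are consistent with the usual per-pin arithmetic, assuming the stack interface stays 1,024 bits wide (the article does not state this; the helper function is my own illustration):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, rate_gt_s: float) -> float:
    # Peak GB/s = bytes per transfer x billions of transfers per second
    return bus_width_bits / 8 * rate_gt_s

# Projected HBM3: 1,024-bit stack interface at 4 GT/s per pin
print(peak_bandwidth_gb_s(1024, 4.0))  # 512.0 GB/s per stack
# Two such stacks would land at the 1TB/s upper end of the projection
print(2 * peak_bandwidth_gb_s(1024, 4.0) / 1000)  # 1.024 TB/s
```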

HBM4, meanwhile, will follow, but there's little information available on the specification. A Wccftech report claims a bandwidth of 4TB/s, but that figure remains unconfirmed.

This article is part of the Tom's Hardware Glossary.

