Credit: ScotXW/Wikimedia Commons
HBM uses less power yet delivers higher bandwidth than graphics cards that rely on DDR4 or GDDR5 memory. It does this by stacking as many as eight DRAM dies and connecting them with through-silicon vias (TSVs) and microbumps. The HBM memory bus is wider than that of other DRAM types, with two 128-bit channels per die. This wide-interface architecture is what gives HBM its low power draw and high speed: each pin transfers 1 bit of data at 1 gigatransfer per second (GT/s), for an overall bandwidth of 128GB/s per stack.
HBM2 debuted in 2016. The specification again allows up to eight dies per stack, but it doubles the per-pin transfer rate to 2GT/s, for an overall bandwidth of 256GB/s per stack.
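The bandwidth figures above follow from simple arithmetic: interface width in bits, times transfer rate per pin, divided by 8 bits per byte. A minimal sketch, assuming the standard 1,024-bit stack interface (eight 128-bit channels):

```python
def stack_bandwidth_gbps(width_bits: int, rate_gtps: float) -> float:
    """Peak per-stack bandwidth in GB/s: bits transferred per second / 8."""
    return width_bits * rate_gtps / 8

# HBM: 1,024-bit interface at 1 GT/s per pin
print(stack_bandwidth_gbps(1024, 1.0))  # 128.0 GB/s

# HBM2: same interface width, 2 GT/s per pin
print(stack_bandwidth_gbps(1024, 2.0))  # 256.0 GB/s
```

A card with multiple stacks multiplies this figure accordingly; four HBM2 stacks, for example, would total roughly 1TB/s.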
HBM3 and HBM4
While not yet available, HBM3 and HBM4 standards are currently being discussed.
HBM3 is expected to increase density over HBM2 with more dies per stack, more density per die and higher efficiency. The transfer rate is pegged at 4GT/s, and bandwidth is expected to be 512GB/s to 1TB/s. According to an Ars Technica report, Samsung is planning production in 2019 or 2020.
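The projected 512GB/s-to-1TB/s range is consistent with the same per-stack arithmetic as earlier generations. A quick check, assuming the interface width stays at 1,024 bits (the 1TB/s end of the range would then correspond to a card carrying two stacks):

```python
# Projected HBM3 bandwidth: 1,024 bits x 4 GT/s per pin / 8 bits per byte.
per_stack_gbps = 1024 * 4 / 8
print(per_stack_gbps)      # 512.0 GB/s per stack
print(per_stack_gbps * 2)  # 1024.0 GB/s, i.e. roughly 1TB/s with two stacks
```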
HBM4, meanwhile, will follow, but little information about the specification is available. A Wccftech report claims a bandwidth of 4TB/s, but that figure remains unconfirmed.
This article is part of the Tom's Hardware Glossary.