HBM2 Standard Beefed Up to Support 24GB Capacity, 307 GB/s per Stack

(Image credit: JEDEC)

The JEDEC Solid State Technology Association has updated the existing HBM2 (High Bandwidth Memory 2) standard to support capacities of up to 24GB per stack and bandwidth of up to 307 GB/s per stack.

HBM is commonly employed in graphics cards, high-performance computing (HPC), servers, networking equipment and client devices. The standard stacks memory dies vertically, with each die connected to the next via through-silicon vias (TSVs) and microbumps. Compared to DDR4 and GDDR5, HBM packages are considerably smaller, faster and more power efficient. Traits like these are what prompted graphics card manufacturers, such as AMD and Nvidia, to adopt HBM in their products.

The original HBM2 specification (JESD235A, the successor to the HBM standard JESD235) supports up to eight dies per stack for a maximum capacity of 8GB and a memory bandwidth of 256 GB/s per package. JEDEC's latest update to the HBM2 specification (JESD235B) opens the door for manufacturers to pack up to 12 dies per stack to achieve a maximum capacity of 24GB. JEDEC has also increased the memory bandwidth to 307 GB/s, delivered across a 1,024-bit memory interface that's split into eight independent channels on each stack.
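As a rough sanity check of those figures, the Python sketch below recomputes the per-stack numbers. The per-pin data rates (2.0 and 2.4 Gb/s) and die densities (8Gb and 16Gb) are assumptions chosen to match the bandwidths and capacities quoted above; they are not taken from the article.

```python
# Back-of-the-envelope check of the HBM2 per-stack figures.

INTERFACE_WIDTH_BITS = 1024  # pins per HBM2 stack, split into eight channels


def stack_bandwidth_gbs(pin_rate_gbit_s: float) -> float:
    """Peak bandwidth of one stack in GB/s for a given per-pin data rate."""
    return pin_rate_gbit_s * INTERFACE_WIDTH_BITS / 8  # bits -> bytes


def stack_capacity_gb(dies: int, die_density_gbit: int) -> float:
    """Capacity of one stack in GB for a given die count and die density."""
    return dies * die_density_gbit / 8


# Original HBM2: 8-Hi stack of 8Gb dies at an assumed 2.0 Gb/s per pin.
print(stack_bandwidth_gbs(2.0), "GB/s,", stack_capacity_gb(8, 8), "GB")    # 256.0 GB/s, 8.0 GB

# Updated HBM2 (JESD235B): 12-Hi stack of 16Gb dies at an assumed 2.4 Gb/s per pin.
print(stack_bandwidth_gbs(2.4), "GB/s,", stack_capacity_gb(12, 16), "GB")  # 307.2 GB/s, 24.0 GB
```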

AMD's Radeon Instinct MI60 and Nvidia's Quadro GV100, Titan V CEO Edition and Tesla V100 graphics cards already feature 32GB of HBM2 memory. It will certainly be interesting to see whether future professional and mainstream graphics cards can take advantage of JEDEC's update.
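Assuming the familiar four-stack layout those 32GB cards use today (four 8GB stacks around the GPU), 24GB stacks would put as much as 96GB of HBM2 within reach. A minimal continuation of the sketch above:

```python
STACKS_PER_GPU = 4  # assumption: the common four-stack layout of current 32GB cards

print(STACKS_PER_GPU * stack_capacity_gb(8, 8), "GB")    # 32.0 GB, as shipped today
print(STACKS_PER_GPU * stack_capacity_gb(12, 16), "GB")  # 96.0 GB with 12-Hi, 16Gb-die stacks
```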

Zhiye Liu
RAM Reviewer and News Editor

Zhiye Liu is a Freelance News Writer at Tom’s Hardware US. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.