Nvidia Readies New AI and HPC GPUs for China Market: Report

Image: Nvidia Hopper H100 GPU and DGX systems (Image credit: Nvidia)

Nvidia is preparing three new GPUs for artificial intelligence (AI) and high-performance computing (HPC) applications tailored for the Chinese market and designed to comply with U.S. export requirements, according to ChinaStarMarket.cn. The new units are reportedly based on the Ada Lovelace and Hopper architectures.

The AI and HPC products in question are the HGX H20, L20 PCIe, and L2 PCIe GPUs, and all of them are reportedly already heading to Chinese server makers. Meanwhile, HKEPC has published a slide claiming that the new HGX H20 with 96 GB of HBM3 memory is based on the Hopper architecture and either uses severely cut-down flagship H100 silicon or a new Hopper-based AI and HPC GPU design. Since this is unofficial information, take it with a pinch of salt.

GPU                | HGX H20                 | L20 PCIe                          | L2 PCIe
Architecture       | Hopper                  | Ada Lovelace                      | Ada Lovelace
Memory             | 96 GB HBM3              | 48 GB GDDR6 w/ ECC                | 24 GB GDDR6 w/ ECC
Memory Bandwidth   | 4.0 TB/s                | 864 GB/s                          | 300 GB/s
INT8 / FP8 Tensor  | 296 / 296 TFLOPS        | 239 / 239 TFLOPS                  | 193 / 193 TFLOPS
BF16 / FP16 Tensor | 148 / 148 TFLOPS        | 119.5 / 119.5 TFLOPS              | 96.5 / 96.5 TFLOPS
TF32 Tensor        | 74 TFLOPS               | 59.8 TFLOPS                       | 48.3 TFLOPS
FP32               | 44 TFLOPS               | 59.8 TFLOPS                       | 24.1 TFLOPS
FP64               | 1 TFLOPS                | N/A                               | N/A
RT Core            | N/A                     | Yes                               | Yes
MIG                | Up to 7 MIG             | N/A                               | N/A
L2 Cache           | 60 MB                   | 96 MB                             | 36 MB
Media Engine       | 7 NVDEC, 7 NVJPEG       | 3 NVENC (+AV1), 3 NVDEC, 4 NVJPEG | 2 NVENC (AV1), 4 NVDEC, 4 NVJPEG
Power              | 400 W                   | 275 W                             | TBD
Form Factor        | 8-way HGX               | 2-slot FHFL                       | 1-slot LP
Interface          | PCIe Gen5 x16: 128 GB/s | PCIe Gen4 x16: 64 GB/s            | PCIe Gen4 x16: 64 GB/s
NVLink             | 900 GB/s                | -                                 | -
Samples            | November 2023           | November 2023                     | November 2023
Production         | December 2023           | December 2023                     | December 2023
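
One detail worth highlighting is the HGX H20's balance of memory bandwidth against compute. Working only from the leaked figures in the table above (none of which are official Nvidia specifications), a quick back-of-the-envelope calculation shows how much bandwidth each part offers per unit of dense FP16/BF16 tensor throughput, a rough first-order indicator of suitability for memory-bound inference workloads. The script below is purely illustrative:

```python
# Rough arithmetic based on the leaked figures in the table above.
# Compares each GPU's memory bandwidth to its FP16/BF16 tensor throughput
# (bytes of bandwidth per FLOP). These are not official Nvidia specs.

specs = {
    # name: (memory bandwidth in GB/s, FP16/BF16 tensor TFLOPS)
    "HGX H20":  (4000.0, 148.0),
    "L20 PCIe": (864.0, 119.5),
    "L2 PCIe":  (300.0, 96.5),
}

for name, (bandwidth_gbs, fp16_tflops) in specs.items():
    # bytes per FLOP = (GB/s * 1e9) / (TFLOPS * 1e12)
    bytes_per_flop = (bandwidth_gbs * 1e9) / (fp16_tflops * 1e12)
    print(f"{name}: {bytes_per_flop:.3f} bytes of bandwidth per FP16 FLOP")

# Approximate output:
# HGX H20:  0.027 bytes of bandwidth per FP16 FLOP
# L20 PCIe: 0.007 bytes of bandwidth per FP16 FLOP
# L2 PCIe:  0.003 bytes of bandwidth per FP16 FLOP
```

By this crude measure, the H20 offers roughly four times as much bandwidth per unit of FP16 compute as the L20, which, if the leaked numbers are accurate, would be consistent with a part aimed more at memory-bound large-model inference than at raw training throughput.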
Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.