H100
Latest about H100

DeepSeek's AI breakthrough bypasses industry-standard CUDA for some functions, uses Nvidia's assembly-like PTX programming instead
By Anton Shilov
DeepSeek used Nvidia's PTX ISA to fine-tune the performance of its AI model training.

Updated US export restrictions may have a significant impact on Israel
By Anton Shilov
Under the new U.S. export rules, Israel might be unable to obtain enough AI processors for its AI projects, including Intel's Gaudi processors, which are developed in Israel.

Chinese AI company's model breakthrough highlights limits of US sanctions
By Anton Shilov
DeepSeek trains its 671-billion-parameter DeepSeek-V3 model on a cluster of 2,048 GPUs.

Russian firm starts shipments of HPC system based on homegrown CPU
By Anton Shilov
Russian company Graviton launches its first AI and HPC machine based on homegrown CPUs, which will still have to be paired with Nvidia's H100 GPUs.

Indian firms secretly funneled AMD, Nvidia AI GPUs to Russia — sanctions reportedly skirted on hundreds of millions of dollars of hardware
By Anton Shilov
India becomes the second-largest supplier of restricted technology to Russia after Indian companies shipped AMD's Instinct MI300X and Nvidia's H100 processors there.

Distributor claims Nvidia has stopped taking orders for HGX H20 processors
By Anton Shilov
Some dealers in China have reportedly stopped taking orders for Nvidia's HGX H20 processor.

Intel launches Gaudi 3 accelerator for AI: Slower than Nvidia's H100 AI GPU, but also cheaper
By Anton Shilov
Intel formally introduces its Gaudi 3 AI accelerators, claiming massive price and total cost of ownership (TCO) advantages over Nvidia's H100.

Nvidia publishes first Blackwell B200 MLPerf results: Up to 4X faster than its H100 predecessor when using FP4
By Anton Shilov
There are quite a few caveats and qualifications to that figure.

Elon Musk shows off Cortex AI supercluster
By Dallin Grimm
Another of Musk’s new supercomputers makes headway.

Faulty Nvidia H100 GPUs and HBM3 memory caused half of failures during Llama 3 training: one failure every three hours for Meta's 16,384-GPU training cluster
By Anton Shilov
In a cluster of 16,384 H100 GPUs, something broke down every three hours on average, and in most cases the H100 GPUs were to blame, according to Meta.