xAI
Latest about xAI

Nvidia's Spectrum-X Ethernet to enable the world's largest AI supercomputer
By Anton Shilov published
Elon Musk's xAI Colossus AI supercomputer with 200,000 H200 GPUs uses Nvidia's Spectrum-X Ethernet to connect servers.

Elon Musk is doubling the world's largest AI GPU cluster
By Mark Tyson published
Billionaire Elon Musk boasts that his remarkable xAI Colossus data center is set to double its firepower 'soon.'

First in-depth look at Elon Musk's 100,000 GPU AI cluster
By Sunny Grimm published
Now, witness the firepower of this fully armed and operational AI supercluster

Elon Musk set up 100,000 Nvidia H200 GPUs in 19 days — Jensen says the process normally takes 4 years
By Aaron Klotz published
Elon Musk and the team behind xAI purportedly set up a total of 100,000 H200 Nvidia GPUs in just 19 days — a feat that would normally take four years to complete.

Oracle will use three small nuclear reactors to power new 1-gigawatt AI data center
By Jowi Morales published
Oracle revealed during its quarterly earnings call that it has secured permits to build a trio of small modular nuclear reactors that could deliver power for a 1-gigawatt AI data center.

xAI Colossus supercomputer with 100K H100 GPUs comes online
By Anton Shilov published
Now that X's Colossus supercomputer with 100,000 H100 GPUs is online, Elon Musk hints at its further expansion with 50,000 more H200 GPUs.

Faulty Nvidia H100 GPUs and HBM3 memory caused half of failures during Llama 3 training — one failure every three hours for Meta's 16,384 GPU training cluster
By Anton Shilov published
In Meta's 16,384-GPU H100 cluster, something broke down roughly every three hours. In most cases, the H100 GPUs themselves were to blame, according to Meta.

Elon Musk powers new 'World's Fastest AI Data Center' with gargantuan portable power generators to sidestep electricity supply constraints
By Jowi Morales published
Elon Musk deployed 14 mobile generators at the xAI Memphis Supercluster, generating 35 MWe to power 32,000 H100 GPUs.

Elon Musk reveals photos of Dojo D1 Supercomputer cluster — roughly equivalent to 8,000 Nvidia H100 GPUs for AI training
By Jowi Morales published
Elon Musk says he'll have 90,000 Nvidia H100s, 40,000 AI4 chips, and the equivalent of 8,000 H100 GPUs in Dojo D1 processors by the end of 2024.