xAI
Latest about xAI

Elon Musk's xAI reportedly shifts $6 billion AI server order from troubled Supermicro to its rivals
By Anton Shilov
Dell, Inventec, and Wistron land new orders from xAI as Supermicro faces significant financial challenges.

Elon Musk's massive AI data center gets unlocked
By Jowi Morales
The Tennessee Valley Authority approved xAI's request for 150 MW to power its AI supercomputer used for training Grok.

Elon Musk spent roughly $10 billion on AI training hardware in 2024
By Anton Shilov
xAI and Tesla spent billions of dollars on AI training hardware in 2024.

Nvidia's Spectrum-X Ethernet to enable the world's largest AI supercomputer
By Anton Shilov
Elon Musk's xAI Colossus AI supercomputer with 200,000 H200 GPUs uses Nvidia's Spectrum-X Ethernet to connect servers.

Elon Musk is doubling the world's largest AI GPU cluster
By Mark Tyson
Billionaire Elon Musk boasts that his remarkable xAI Colossus data center is set to double its firepower 'soon.'

First in-depth look at Elon Musk's 100,000 GPU AI cluster
By Sunny Grimm
Now, witness the firepower of this fully armed and operational AI supercluster.

Elon Musk set up 100,000 Nvidia H200 GPUs in 19 days - Jensen says process normally takes 4 years
By Aaron Klotz
Elon Musk and the team behind xAI purportedly set up a total of 100,000 Nvidia H200 GPUs in just 19 days, a feat that Jensen Huang says would normally take four years to complete.

Oracle will use three small nuclear reactors to power new 1-gigawatt AI data center
By Jowi Morales
Oracle revealed during its quarterly earnings call that it has secured permits to build a trio of small modular nuclear reactors that could deliver power for a 1-gigawatt AI data center.

xAI Colossus supercomputer with 100K H100 GPUs comes online
By Anton Shilov
Now that xAI's Colossus supercomputer with 100,000 H100 GPUs is online, Elon Musk hints at its further expansion with 50,000 more H200 GPUs.

Faulty Nvidia H100 GPUs and HBM3 memory caused half of failures during Llama 3 training — one failure every three hours for Meta's 16,384-GPU training cluster
By Anton Shilov
In Meta's 16,384-GPU H100 cluster, something broke down roughly every three hours, and according to Meta, faulty H100 GPUs and their HBM3 memory accounted for about half of those failures.