Elon Musk's xAI plans to build 'Gigafactory of Compute' by fall 2025 — using 100,000 Nvidia H100 GPUs

xAI's Grok chatbot
(Image credit: xAI)

xAI, Elon Musk's AI startup, plans to build a massive supercomputer to enhance its AI chatbot, Grok, reports Reuters citing The Information. The supercomputer, which Elon Musk refers to as the Gigafactory of Compute, is projected to be ready by fall 2025 and might involve a collaboration with Oracle. With this development, Musk aims to significantly surpass rival GPU clusters in both size and capability.

In a presentation for investors, Elon Musk revealed that the new supercomputer will use as many as 100,000 of Nvidia's H100 GPUs based on the Hopper architecture, making it at least four times larger than the largest existing GPU clusters, according to The Information. Nvidia's H100 GPUs are highly sought after in the AI data center chip market, although strong demand made them difficult to obtain last year. However, they are no longer Nvidia's range-topping GPUs: the company is about to ship its H200 compute GPUs for AI and HPC applications and is preparing its Blackwell-based B100 and B200 GPUs for the second half of the year.
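For a rough sense of scale, the aggregate throughput of a 100,000-GPU cluster can be sketched with a back-of-envelope calculation. The per-GPU figure used below (~989 TFLOPS of dense BF16 tensor throughput for an H100 SXM) comes from Nvidia's published specifications and is an assumption for illustration, not a number from the report:

```python
# Back-of-envelope estimate of aggregate compute for the rumored cluster.
# Assumption: ~989 TFLOPS dense BF16 tensor throughput per H100 SXM
# (Nvidia's published spec; real-world training throughput is lower).
H100_BF16_TFLOPS = 989
GPU_COUNT = 100_000

total_eflops = H100_BF16_TFLOPS * GPU_COUNT / 1_000_000  # TFLOPS -> EFLOPS
print(f"~{total_eflops:.1f} EFLOPS dense BF16")  # roughly 98.9 EFLOPS
```

Sustained throughput in practice would be well below this peak figure once interconnect, memory, and utilization losses are accounted for.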

It is unclear why xAI decided to use what is essentially previous-generation technology for its 2025 supercomputer, but the substantial hardware investment reflects the scale of xAI's ambitions. Keep in mind that this is an unofficial report and the plans could change. Still, Musk reportedly holds himself 'personally responsible for delivering the supercomputer on time,' as the project is critical to developing the company's large language models.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.