Nvidia and OpenAI forge $100 billion alliance to deliver 10 gigawatts of Nvidia hardware for AI datacenters
All of a sudden, the 1.21 gigawatts in Back To The Future aren't impressive anymore.

Tech industry giants OpenAI and Nvidia have announced a pivotal partnership that will see the deployment of 10 gigawatts' worth of AI datacenters, backed by up to $100 billion in investment.
OpenAI has committed to building multiple datacenters with Nvidia as its "preferred strategic compute and networking partner," with the first expected to come online in the second half of 2026. The partnership will see OpenAI build at a fervent pace until the total combined power budget of those datacenters reaches "at least" 10 gigawatts. For its part, Nvidia dove into its war chest, returning the favor by progressively investing up to $100 billion in OpenAI as each gigawatt is deployed, presumably via share purchases.
Additionally, and perhaps most interestingly, both companies commit to "co-optimize" their respective roadmaps. It's not hard to imagine that the hands of Nvidia's AI clients already guide the chipmaker's designs, but this statement could imply that OpenAI will have a bigger say in Nvidia's plans than before.
The companies also point out that the new collaboration dovetails nicely with their existing agreements with the likes of Microsoft, Oracle, and SoftBank. OpenAI is already the exclusive AI partner for Microsoft, which pledged in January to invest $80 billion in AI datacenters.
Meanwhile, OpenAI's Sam Altman remarks that "compute infrastructure will be the basis for the economy of the future", a statement that would have seemed more like hyperbole a mere two or three years ago.
OpenAI's next datacenters will use Nvidia's Vera Rubin platform (and presumably Rubin Ultra). A rack-scale Vera Rubin NVL144 system packs 75 TB of HBM4 memory and should be capable of 3.6 exaflops of FP4 inference and 1.2 exaflops of FP8 training. The fact that the "exa" prefix is becoming commonplace is exciting and scary in equal measure.
The Rubin GPUs and Vera CPUs taped out in late August and are now being manufactured at TSMC facilities. Meanwhile, Rubin Ultra is expected to deliver 15 exaflops of FP4 inference and 5 exaflops of FP8 training per rack, courtesy of 365 TB of HBM4e memory, which Nvidia pitches as roughly 14 times the performance of today's GB300 NVL72 systems.
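If you want to eyeball how the two platforms stack up, the arithmetic is simple enough to do in a few lines of Python. The figures below are the ones quoted above; the comparison itself is just our own back-of-envelope math:

```python
# Generational scaling between Vera Rubin NVL144 and Rubin Ultra,
# using only the per-rack figures quoted in this article.
vera_rubin  = {"fp4_inference_ef": 3.6,  "fp8_training_ef": 1.2, "memory_tb": 75}
rubin_ultra = {"fp4_inference_ef": 15.0, "fp8_training_ef": 5.0, "memory_tb": 365}

for key in vera_rubin:
    ratio = rubin_ultra[key] / vera_rubin[key]
    print(f"{key}: {ratio:.1f}x")
# Prints roughly 4.2x for inference, 4.2x for training, and 4.9x for memory:
# compute and memory capacity scale nearly in lockstep between the two platforms.
```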
To put the 10-gigawatt figure into perspective, a contemporary U.S. nuclear reactor produces around 1 gigawatt, meaning these new datacenters will gobble up ten reactors' worth of juice to do their thing. That's a scale that's hard to wrap one's head around. While the technological advancement is definitely impressive, it also raises hard questions about the environmental cost.
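Here's the napkin math behind that comparison, assuming (generously) that the datacenters draw their full 10-gigawatt budget around the clock:

```python
# A rough sense of scale for the 10-gigawatt figure. Assumes continuous
# draw at the full power budget, which is a simplification; real
# datacenters rarely run flat-out 24/7.
power_gw = 10                 # combined power budget from the announcement
reactor_gw = 1                # ballpark output of a modern U.S. reactor
hours_per_year = 24 * 365     # 8,760 hours

reactors = power_gw / reactor_gw
annual_twh = power_gw * hours_per_year / 1000  # GWh -> TWh

print(f"Equivalent reactors: {reactors:.0f}")
print(f"Annual energy at full draw: {annual_twh:.1f} TWh")
# ~10 reactors and ~87.6 TWh per year, in the neighborhood of the annual
# electricity consumption of a mid-sized European country.
```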
