OpenAI and Oracle ink deal to build massive Stargate data center, total project will power 2 million AI chips — Stargate partner SoftBank not involved in the project

(Image credit: Oracle)

Among the concerns raised about the Stargate project, a partnership between OpenAI, Oracle, and SoftBank, was the scarcity of detail about its supporting infrastructure. Little by little, the companies have disclosed their intentions, and on Tuesday, OpenAI and Oracle announced plans to build an additional 4.5 gigawatts (GW) of Stargate data center infrastructure in the U.S., pushing OpenAI’s total planned capacity beyond 5 GW. Interestingly, SoftBank is not involved in financing this buildout, despite being a Stargate partner.

Under the terms of the plan announced in January, OpenAI, Oracle, and SoftBank intend to build 20 data centers, each measuring 500,000 square feet (46,450 square meters). However, it was unclear how they planned to power these facilities: the U.S. grid does not appear to have enough spare capacity for the additional AI servers, cooling systems, and networking equipment that AI data centers require unless additional power infrastructure is built.

The announced 4.5 GW of infrastructure indeed refers primarily to electrical power availability, which is among the limiting factors for AI development these days.

OpenAI claims that the expanded 5 GW of infrastructure will enable its data centers to power over two million AI processors, though it does not disclose whether that figure assumes 1.4 kW Blackwell Ultra processors or 3.6 kW Rubin Ultra processors. If all 5 GW went solely to AI GPUs, it could feed roughly 3.57 million Blackwell Ultra or 1.39 million Rubin Ultra GPUs. However, AI accelerators typically account for only about half of a data center's total power draw, and that is before factoring in power usage effectiveness (PUE), so the actual number of supported GPUs would be lower.
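As a rough sanity check on those figures, here is a minimal back-of-envelope sketch in Python. The per-GPU power figures come from the paragraph above; the 50% accelerator share and the PUE value of 1.2 are illustrative assumptions, not numbers disclosed by OpenAI or Oracle.

```python
# Back-of-envelope estimate: how many GPUs a given power budget can feed.
# Assumed figures: 1.4 kW per Blackwell Ultra, 3.6 kW per Rubin Ultra,
# accelerators drawing ~50% of IT power, and an illustrative PUE of 1.2.

SITE_POWER_W = 5e9  # 5 GW of planned Stargate capacity

GPU_POWER_W = {
    "Blackwell Ultra": 1_400,
    "Rubin Ultra": 3_600,
}

def gpu_count(site_power_w: float, gpu_power_w: float,
              accelerator_share: float = 1.0, pue: float = 1.0) -> int:
    """Estimate how many GPUs a site can power.

    accelerator_share: fraction of IT power that goes to accelerators.
    pue: power usage effectiveness (total facility power / IT power).
    """
    it_power = site_power_w / pue  # power left after cooling and other overhead
    return int(it_power * accelerator_share / gpu_power_w)

for name, watts in GPU_POWER_W.items():
    naive = gpu_count(SITE_POWER_W, watts)                # GPUs get every watt
    realistic = gpu_count(SITE_POWER_W, watts, 0.5, 1.2)  # with assumed overheads
    print(f"{name}: {naive:,} (GPUs only) vs {realistic:,} (50% share, PUE 1.2)")
```

With the assumed overheads, the estimate lands in the same ballpark as OpenAI's "over two million" figure for the lower-power chips and well below it for the higher-power ones, which is why the mix of processors matters.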

The new 4.5 GW-capable facilities may be built in states such as Texas, Michigan, Wisconsin, and Wyoming, though exact locations are still being finalized. This is in addition to an existing site under construction in Abilene, Texas, which OpenAI considers a proof-of-concept facility to ensure its ability to deploy infrastructure at scale and speed. OpenAI believes that lessons learned from Abilene will help with the execution of subsequent sites. 

Parts of the Abilene facility — Stargate I — are now active as Oracle began installing server racks based on Nvidia's GB200 platform last month. OpenAI has begun utilizing this infrastructure to conduct early-stage AI training and inference tasks as part of its next-generation research initiatives.


Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.