Elon Musk confirms xAI is buying an overseas power plant and shipping the whole thing to the U.S. to power its new data center — 1 million AI GPUs and up to 2 Gigawatts of power under one roof, equivalent to powering 1.9 million homes

xAI Colossus Memphis Supercluster
(Image credit: xAI)

Elon Musk's next xAI data centers are expected to house millions of AI chips and consume so much power that Musk has reportedly bought a power plant overseas and intends to ship it to the U.S., according to Dylan Patel of SemiAnalysis, who outlined xAI's recent progress in a podcast. Interestingly, Musk confirmed the claim in a subsequent tweet.

Elon Musk's current xAI Colossus AI supercomputer is already one of the most powerful and power-hungry machines on the planet, housing some 200,000 Nvidia Hopper GPUs and drawing an astounding 300 MW, and xAI has faced significant headwinds in supplying it with enough power.

The challenges only become more intense as the company moves forward — Musk faces a monumental challenge with powering his next AI data center, one that is predicted to house one million AI GPUs, thus potentially consuming the same amount of power as 1.9 million households. Here's how the data center could consume that much power, and how Musk plans to deliver it.

Beyond Colossus

Elon Musk's xAI has assembled vast computing resources and a team of talented researchers to advance the company's Grok AI models, Patel said. However, even bigger challenges lie ahead.

It is no secret that Elon Musk has already run into trouble powering his existing xAI data center. The company's main facility, Colossus, which houses 200,000 Nvidia Hopper GPUs, is located near Memphis, Tennessee. To power this machine, xAI installed 35 gas turbines capable of producing 420 MW combined and deployed Tesla Megapack systems to smooth out power draw. However, things are going to get much more serious going forward.

Beyond the Colossus buildout, xAI is rapidly acquiring and developing new facilities. The company has purchased a factory in Memphis that is being converted into additional data center space, big enough to house around 125,000 eight-way GPU servers, along with all supporting hardware, including networking, storage, and cooling.

A million Nvidia Blackwell GPUs will consume between 1,000 MW (1 GW) and 1,400 MW (1.4 GW), depending on the accelerator models (B200, GB200, B300, GB300) used and their configuration.

However, the GPUs are not the only load on the power system: CPUs, DDR5 memory, storage, networking gear, cooling, air conditioning, power-conversion losses, and even lighting all draw power too. In large AI clusters, a useful approximation is that this overhead adds another 30% to 50% on top of the AI GPU power draw, which corresponds to a PUE (power usage effectiveness) of roughly 1.3 to 1.5.

Thus, depending on which Blackwell accelerators xAI plans to use, a million-GPU data center will consume between 1,400 MW and 1,960 MW (assuming a PUE of 1.4). What can possibly power a data center with a million high-performance GPUs for AI training and inference is the big question, as the undertaking is comparable to powering roughly 1.9 million homes.

A power plant?

A large-scale solar power plant alone is not viable for a 24/7 compute load of this magnitude, as one would need several gigawatts of panels, plus massive battery storage, which is prohibitively expensive and land-intensive.

The most practical and commonly used option is building multiple natural gas combined-cycle gas turbine (CCGT) plants, each capable of producing roughly 500 MW to 1,500 MW. This approach is relatively fast to deploy (a few years), scalable in phases, and easier to integrate with existing electrical grids. Perhaps this is what xAI plans to import to the U.S.

Alternatives like nuclear reactors could technically meet the load with fewer units (each producing around 1,000 MW) and no direct carbon emissions, but nuclear plants take far longer to design, permit, and build (up to 10 years). It is unlikely that Musk has managed to buy a nuclear power plant overseas with plans to ship it to the U.S.

In practice, any organization attempting a 1.4–1.96 GW deployment, as xAI is, will effectively become a major industrial energy buyer. For now, xAI's Colossus generates power onsite and purchases power from the grid, so it is likely that the company's next data center will follow suit, combining a dedicated onsite plant with grid interconnections.

Apparently, because acquiring a power plant in the U.S. can take too long, xAI is reportedly buying one overseas and shipping it in, a move that highlights how AI development now hinges not only on compute hardware and software but also on securing massive energy supplies quickly.

There's no other way

Without a doubt, a data center housing a million AI accelerators with a dedicated power plant appears to be an extreme measure. However, Patel points out that most leading AI companies are ultimately converging on similar strategies: concentrating enormous compute clusters, hiring top-tier researchers, and training ever-larger AI models. To that end, if xAI plans to stay ahead of the competition, it needs to build even more advanced and power-hungry data centers.


Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.

  • Findecanor
    So much for being a champion for renewable energy and a clean future ...
  • Jame5
    There's no other way

    Sure there is. But right now we are in the part of the curve where people are throwing more hardware at the problem instead of trying to figure out better ways to solve the problem, or if the problem is even a problem to begin with.
  • SomeoneElse23
    Findecanor said:
    So much for being a champion for renewable energy and a clean future ...
    Everyone "championed" it when it was "the thing to do".

    Now no one seems to care. Or they never cared, they just do what looks good.
  • SomeoneElse23
    Jame5 said:
    Sure there is. But right now we are in the part of the curve where people are throwing more hardware at the problem instead of trying to figure out better ways to solve the problem, or if the problem is even a problem to begin with.
    It's the current fad.
    Or there's something they aren't telling us.
  • jp7189
    First, what kind of power plant is this? I see speculations, but nothing definitive.
    Second, if they are willing to ship a power plant, why not build the expansion in some other country with less regulations and social backlash? Surely, starlink can handle connectivity wherever they go.
  • Dementoss
    The massive and wasteful cost, of a massive ego being allowed to run riot.
  • SomeoneElse23
    Dementoss said:
    The massive and wasteful cost, of a massive ego being allowed to run riot.
    Some people just love to hate Elon. :(
  • anti68
    Is he planning a rapid scheduled disassembly?
  • ezst036
    Elon is actually forward thinking on this.

    Electricity prices are already through the roof and why would he want peasants with pitchforks in front of his campuses protesting about skyrocketing electricity costs through the roof due to electricity-guzzling AIs?

    This would not be a good look for any tech titan from Nvidia to xAI. Gotta keep the protests away.

    Is it entirely self-serving? Absolutely. However note that every last tech titan is doing electricity and nuclear. None of them want protests in front of their campuses.
  • JRStern
    All the technological signs are that a million GPU facility is pointless.
    The resources needed to generate a new ChatGPT 4o level model have already fallen like 90% in the last few years, and will probably fall another 90% in the next three years, ... etc.