IBM CEO warns that ongoing trillion-dollar AI data center buildout is unsustainable — says there is 'no way' that infrastructure costs can turn a profit
Krishna’s cost model challenges the economics behind multi-gigawatt AI campuses.
IBM CEO Arvind Krishna used an appearance on The Verge’s Decoder podcast to question whether the capital spending now underway in pursuit of AGI can ever pay for itself. Krishna said today’s figures for constructing and populating large AI data centers place the industry on a trajectory where roughly $8 trillion of cumulative commitments would require around $800 billion of annual profit simply to service the cost of capital.
Krishna tied the claim directly to assumptions about current hardware, its depreciation, and energy costs, rather than to any firm long-term forecast, but it lands at a moment when several companies are one-upping one another with unprecedented, multi-year infrastructure projects.
Krishna estimated that filling a one-gigawatt AI facility with compute hardware costs around $80 billion. The issue is that deployments of this scale are moving off the drawing board and into practical planning, with leading AI companies proposing buildouts of tens of gigawatts — and in some cases, beyond 100 gigawatts — each. Krishna said that, taken together, public and private announcements point to roughly one hundred gigawatts of currently planned capacity dedicated to AGI-class workloads.
At $80 billion per gigawatt, the total reaches $8 trillion. He tied those figures to the five-year refresh cycles common across accelerator fleets, arguing that the need to replace most of the hardware inside those data centers within that window creates a compounding effect on long-term capex requirements. He also placed the likelihood that current LLM-centric architectures reach AGI at between zero and 1% without new forms of knowledge integration.
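Krishna's back-of-the-envelope arithmetic can be sketched in a few lines. Note that the 10% cost of capital is an inference from the article's $8 trillion and $800 billion figures, and the annualized refresh spend is an extension of his five-year-replacement point rather than a number he quoted:

```python
# Sketch of Krishna's cost model, as described in the article.
# Assumptions (inferred, not stated by IBM):
#   - a ~10% annual cost of capital links $8T of capex to $800B/yr of profit
#   - hardware is fully replaced on a straight five-year refresh cycle

COST_PER_GW = 80e9        # ~$80B to fill a 1 GW facility with compute
PLANNED_GW = 100          # ~100 GW of announced AGI-class capacity
COST_OF_CAPITAL = 0.10    # assumed rate implied by the $800B figure
REFRESH_YEARS = 5         # typical accelerator depreciation window

total_capex = COST_PER_GW * PLANNED_GW               # $8 trillion
annual_capital_cost = total_capex * COST_OF_CAPITAL  # $800 billion per year
annual_refresh_capex = total_capex / REFRESH_YEARS   # $1.6 trillion per year

print(f"Total buildout:       ${total_capex / 1e12:.1f}T")
print(f"Capital servicing:    ${annual_capital_cost / 1e9:.0f}B per year")
print(f"Five-year refresh:    ${annual_refresh_capex / 1e12:.1f}T per year")
```

The refresh line is what drives the "compounding" concern: on these assumptions, replacing the fleet every five years implies recurring annual spending on the same order as the servicing cost itself.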
Krishna pointed to depreciation as the part of the calculation most underappreciated by investors. AI accelerators are typically written down over five years, and he argued that the pace of architectural change means fleets must be replaced rather than extended. “You've got to use it all in five years because at that point, you've got to throw it away and refill it,” he said.
Recent financial-market criticism has centered on similar concerns. Investor Michael Burry, for example, has questioned whether hyperscalers can keep stretching useful-life assumptions if performance gains and growing model sizes force accelerated retirement of older GPUs.
The IBM chief said that ultimately, he expects generative-AI tools in their current form to drive substantial enterprise productivity, but that his concern is the relationship between the physical scale of next-gen AI infrastructure and the economics required to support it. Companies committing to these huge, multi-gigawatt campuses and compressed refresh schedules must therefore demonstrate returns that match the unprecedented capital expenditure that Krishna outlined.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
tennis2: Where is all the investment money coming from? That's my question. Even if we're only talking about the data centers' hardware and infrastructure, this is a massive amount of money. Where is it being divested from?

Eximo: It isn't; it's borrowed against stock-price collateral. A giant house of cards, with the only real winners being the large capital firms collecting the interest while it is still growing. They all think they can find a way to make a profit eventually.