Meta sets up 'Meta Compute' organization for gigawatt-scale AI data centers, an initiative said to consume hundreds of gigawatts over time
Next-generation data centers will need special treatment.
Meta is setting up Meta Compute, an organization that will be responsible for an aggressive expansion of its computing infrastructure, reports Reuters. The organization plans to deploy infrastructure that will consume tens of gigawatts of power this decade and scale to hundreds of gigawatts over a longer horizon, Mark Zuckerberg announced on Monday. Meta Compute will be jointly led by Santosh Janardhan, head of global infrastructure and co-head of engineering, and Daniel Gross.
"Today we are establishing a new top-level initiative called Meta Compute," Zuckerberg wrote in a post over at Threads. "Meta is planning to build tens of gigawatts this decade, and hundreds of gigawatts or more over time. How we engineer, invest, and partner to build this infrastructure will become a strategic advantage."
Janardhan's remit remains broad and deeply technical: it spans Meta's overall system architecture, in-house silicon efforts, software stack, and developer tools, as well as the buildout and day-to-day operation of the company's worldwide data center fleet and network.
Gross will run a newly created group focused on long-range capacity planning and on building a supply chain capable of delivering equipment that consumes gigawatts of power (in other words, an enormous number of chips and servers; see the sketch below). His responsibilities include defining Meta's future compute needs, managing strategic supplier relationships, tracking industry dynamics, and developing the planning and business models needed to support infrastructure expansion at multi-gigawatt scale.
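To put those power figures in perspective, here is a minimal back-of-envelope sketch in Python of how a gigawatt-scale power budget translates into accelerator counts. The per-accelerator wattage, system overhead, and PUE values below are illustrative assumptions, not figures from Meta.

```python
# Back-of-envelope: how many AI accelerators a given power budget can feed.
# All constants are illustrative assumptions, not Meta's actual numbers.

ASSUMED_WATTS_PER_ACCELERATOR = 1_000  # rough draw of a modern AI GPU, in watts
ASSUMED_SYSTEM_OVERHEAD = 1.8          # CPUs, memory, networking, storage per GPU
ASSUMED_PUE = 1.3                      # power usage effectiveness (cooling, losses)

def accelerators_for(gigawatts: float) -> int:
    """Rough count of accelerators a facility power budget can support."""
    it_watts = gigawatts * 1e9 / ASSUMED_PUE  # watts left for IT gear after overhead
    return int(it_watts / (ASSUMED_WATTS_PER_ACCELERATOR * ASSUMED_SYSTEM_OVERHEAD))

for gw in (1, 10, 100):
    print(f"{gw:>3} GW ~ {accelerators_for(gw):,} accelerators")
```

On these assumptions, one gigawatt feeds roughly 430,000 accelerators, so tens of gigawatts implies millions of chips and hundreds of gigawatts implies tens of millions, which is the scale of supply chain Gross's group is being asked to secure.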
Given the responsibilities of Janardhan and Gross, Meta is establishing Meta Compute to expand AI-scale infrastructure systematically, at a level far beyond traditional data-center growth. With advanced AI models demanding compute measured in tens of gigawatts now and hundreds of gigawatts over time, Meta needs to secure sophisticated hardware and construct the buildings to house it. A dedicated organization allows Meta to plan power, land, networking, and system architecture years in advance, rather than scaling reactively as demand rises.
The new structure also centralizes ownership of the full technical stack — from software and system architecture to in-house silicon, networks, and data centers — to ensure that hardware and software decisions are made together to maximize efficiency. At the same time, Meta Compute separates operational execution from long-term capacity strategy and supply chain creation.
Both executives will coordinate closely with Dina Powell McCormick, who has joined Meta as president and vice chair. She will work with the compute and infrastructure organizations to ensure that Meta's multi-billion-dollar investments align with the company's objectives and deliver tangible economic benefits in the regions where it operates. In addition, she will lead efforts to establish new strategic capital alliances and develop new approaches to boost Meta's long-term investment capacity.
Meta is establishing its Meta Compute organization at an interesting moment. On the one hand, the company spent $72 billion on its AI initiatives in 2025 alone. On the other hand, those investments have yet to pay off: the Llama 4 model received a muted response, and Meta is not regarded as a major AI leader in the way Google, Microsoft, or OpenAI are.

Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.