Elon Musk wants foundry partners to build an astounding '100–200 billion AI chips' per year — Musk says chipmaking industry can't deliver on his goals
This is orders of magnitude more than the industry can build these days.
It's no secret that Elon Musk has tremendous ambitions when it comes to artificial intelligence, but apparently, they are so tremendous that he wants to get more AI processors than the industry produces, or even can produce. As it turns out, Tesla might need '100 – 200 billion AI chips per year' and if it cannot get them from existing foundry partners, then the company will consider building its own fabs, which Musk discussed several weeks ago. Now he has elaborated on those goals further.
"I have tremendous respect for TSMC and Samsung, we work with both TSMC and Samsung at Tesla and SpaceX. They are great companies and we want them to make our chip as quickly as they can and scale up to the highest possible volume that they are comfortable doing," said Elon Musk during his conversation with Ron Baron. "But it doesn't appear to be fast enough. When I asked how long it would take, from start to finish, to get a new chip fab built, they said five years to get to production. Five years for me is eternity. My timelines are one year, two years. […] I cannot even see past three years. This is not going to be fast enough. If they change their minds and say, yeah, they are going to go faster and they want to provide us with 100 billion, 200 billion AI chips a year in the time frame that we need them, that is great."
Starman @elonmusk joins our Founder and CEO @RonBaronAnalyst for a virtual fireside chat to discuss the future. Livestream starts at 1:05pm ET. https://t.co/6ceIb5OHTe — November 14, 2025
Musk did not say when Tesla and SpaceX would require those 100 to 200 billion AI processors a year, but that number is pretty insane, assuming that he meant units, not dollars. To put it into context, the industry supplied 1.5 trillion semiconductor devices globally in 2023, according to the Semiconductor Industry Association. Yet, even that number is misleading, because the term 'chip' covers a wide variety of devices, ranging from tiny microcontrollers and sensors to memory chips and logic devices. Logic devices like Nvidia's H100 or B200/B300 AI GPUs are huge pieces of silicon that are hard to make, and thus have the longest lead and production times.
Musk recently said he believed power consumption for his AI5 AI processors could drop to as low as 250W. A chip's power rating (TDP) can often serve as a decent relative proxy for its die size; by comparison, Nvidia's B200 GPUs can consume up to 1,200W, or nearly five times as much power, implying that the AI5 will be a much smaller chip. Regardless, there absolutely isn't enough production capacity to meet Musk's targets, even if his chips are much smaller.
As one of the biggest clients of TSMC, Nvidia has supplied four million Hopper GPUs worth $100 billion (not counting China) throughout the active lifespan of the architecture, which was about two calendar years. With Blackwell, Nvidia has sold around six million GPUs, which equate to three million GPU packages, in the first four quarters of their lifespan.
If Musk indeed meant 200 billion units, then he would like to get orders of magnitude more AI processors than the industry (the bulk of which TSMC produces) can build in a year. Yet, if he by any chance was referring to $100–$200 billion worth of AI processors, then TSMC and Samsung Foundry could certainly produce that volume in the coming years. However, given that Musk is not satisfied with how quickly TSMC and Samsung build fabs, it looks like he indeed believes he needs more than these companies can supply.
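Taking the article's figures at face value (roughly 1.5 trillion total semiconductor units shipped in 2023, and roughly three million Blackwell GPU packages built in that architecture's first year), a rough back-of-envelope sketch shows the scale gap — all numbers are the approximations quoted above, not independent data:

```python
# Back-of-envelope scale check for the 100-200 billion AI chips/year claim.
musk_low, musk_high = 100e9, 200e9           # claimed annual demand, in units
industry_total_2023 = 1.5e12                 # ALL semiconductor devices shipped in 2023 (SIA)
blackwell_packages_per_year = 3e6            # approx. leading-edge AI GPU packages built per year

# Even against total output of every chip type (sensors, MCUs, memory included):
share_low = musk_low / industry_total_2023   # ~0.067
share_high = musk_high / industry_total_2023 # ~0.133
print(f"Share of ALL 2023 chip units: {share_low:.0%} to {share_high:.0%}")

# Against actual leading-edge AI GPU output, the gap is four to five orders of magnitude:
print(f"Multiple of current AI GPU output: "
      f"{musk_low / blackwell_packages_per_year:,.0f}x to "
      f"{musk_high / blackwell_packages_per_year:,.0f}x")
```

In other words, the low end of the claim alone would consume roughly 7% of every chip of any kind made in 2023, and would exceed current leading-edge AI GPU production by a factor in the tens of thousands.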
"We will be using TSMC fabs in Taiwan and Arizona, Samsung fabs in Korea and Texas," said Musk. "From their standpoint, they are moving like lightning. I am just saying that, nonetheless, it would be a limiting factor for us. They're going as fast as they can, but from their standpoint, it's 'pedal to the metal.' They just never had someone, a company, with our sense of urgency. It might just be that the only way to get to scale at the rate that we want to get to scale is to build up a real big fab, or be limited in output of Optimus and self-driving cars because of AI chip [supply]."
Whether Tesla and SpaceX really need 100–200 billion chips per year remains unclear. Tesla sold 1.79 million vehicles in 2024, so it does not need more than a couple of million chips for its cars. Of course, the company might need millions more AI processors for its AI training efforts, though we have reasonable doubts that it can build AI clusters powered by billions of GPUs any time soon. Also, while humanoid Optimus robots, also powered by Tesla's AI hardware, could become a big market, that market will take years to develop.

Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
hotaru251: then he can fund em 100% right?
the bubble will pop. No foundry is going to ramp up a ton of production for something that will pop, leaving them in the red by a massive amount when those fabs are no longer useful.
blppt: "My timelines are one year, two years. I cannot even see past three years."
Oh, that's just a PHENOMENAL business model.
logainofhades: Want in one hand and :poop: in the other. He's ambitious to say the least, but I do not see this as being realistic.
logainofhades: AI datacenters kind of remind me of how big computers in the 1950's were. Hopefully the performance advances enough soon that such size and scale is unnecessary. That many chips just for AI datacenters is going to be a lot of e-waste.
bit_user:
logainofhades said: "AI datacenters kind of remind me of how big computers in the 1950's were. Hopefully the performance advances enough soon that such size and scale is unnecessary. That many chips just for AI datacenters is going to be a lot of e-waste."
We know enough about the roadmap of silicon fabrication (i.e., if you look at material from IMEC and the roadmaps of TSMC and Samsung) to say there aren't going to be any massive breakthroughs in density or efficiency. The pace of improvement is slowing, and yet the seemingly unending hunger of AI companies for more compute and bandwidth means they're unlikely to be satisfied with those gains and will continue to build datacenters on massive scales.
It's possible that the breakthroughs could come on the AI front, not from silicon manufacturing. However, the race to build the best and most capable AI still has me expecting they'll take any efficiency wins they can get and just use them to enable even more sophisticated models.
IMO, the saddest part is that these aren't even like gaming GPUs that enthusiasts could buy up on the secondary market. Once they reach an age above 5 years or so, they become uneconomical for anyone to do anything with. At least, on any kind of substantial scale.
bit_user: If you just sanity-check what he's saying: he thinks Tesla will be shipping more than 100x as many AI chips as the number of cell phones we currently produce. 100B is like 10 AI robots (of various forms, including drones & cars) for every human on the planet! And this is per-year???
If he said 100–200 million, that I could understand. That's within an order of magnitude of global automotive production. So, if Tesla had a large share of that and put the rest of them into silly robots, then I could at least understand his thinking. However, he clearly didn't make such a simple mistake, or else he wouldn't be talking about building fabs on a scale nobody thinks is possible.
Building robots at such a scale would have the Earth looking like Cybertron, in no time! :D
bit_user:
razor512 said: "He needs to get his own chip fab built, and then have it make DRAM."
He'll absolutely need to make more DRAM, as well! NAND, too.
thestryker:
bit_user said: "IMO, the saddest part is that these aren't even like gaming GPUs that enthusiasts could buy up on the secondary market. Once they reach an age above 5 years or so, they become uneconomical for anyone to do anything with. At least, on any kind of substantial scale."
The amount of waste generated by these datacenters is something I hope has been accounted for, though I've seen no evidence. Depreciation has always happened, but I can't think of another time when there has been this kind of generational volume.