Elon Musk says idling Tesla cars could create a massive 100-million-vehicle-strong computer for AI — 'bored' vehicles could offer 100 gigawatts of distributed compute power

Tesla cars (Image credit: Getty / Bloomberg)

During Tesla’s Q3 2025 earnings call, the firm’s CEO, Elon Musk, proposed that its cars take part in "a giant distributed inference fleet," tapping into their considerable onboard compute power "if they are bored." Musk went on to estimate that, at some point, the advanced car fleet could summon "100 gigawatts of inference." Predictably, Musk’s latest musings have met with a mixed response on social media. So, let’s take a closer look at exactly what Musk said.

Video: Tesla Q3 2025 Financial Results and Q&A Webcast (YouTube)

Expanding Tesla production and incentives to buy

During the Q&A session, Emmanuel Rosner from Wolfe Research asked about Musk’s intentions to expand the production of Tesla vehicles. He also queried what kind of incentives would be required to make such a production hike a reasonable business proposition.

Musk answered that an annualized production rate of three million vehicles should be achievable within 24 months. He added that the “single biggest expansion in production will be the Cybercab, which starts production in Q2 next year.” This will be a comfort-optimized autonomous transport vehicle squarely targeting the cab market.

Killer app – letting people stay glued to their smartphone screens while the car drives

Beyond that project, the Tesla boss asserted that his team is looking closely at a killer app for new model cars with advanced processing. “If you tell someone, yes, the car is now so good, you can be on your phone and text the entire time while you're in the car. Anyone who can buy the car - will buy the car - end of story.”

Then, not for the first time, Musk heralded an “Autopilot safety game changer.” Elaborating on this, the Tesla CEO pledged, “I am 100% confident that we can solve unsupervised full self-driving at a safety level much greater than a human.”

Musk backed up his confidence by pointing to the capabilities of the Tesla AI4 computer, also known as Hardware 4 (HW4). He indicated that, despite its muscle, AI4 is already set to be eclipsed by AI5, which reportedly outperforms it by as much as 40-fold in tests. It was suggested in the Q&A that this sizable shot of extra performance could underpin autonomous driving systems that are 10x safer.

"A giant distributed inference fleet… [with] 100 gigawatts of inference"

Tesla chip (Image credit: Tesla)

Using talk of computing power as a springboard, Musk then openly pondered whether the upcoming systems “might almost be too much intelligence for a car.” To address the decidedly first-world problem of owning a car “that might get bored,” the Tesla CEO went off on an interesting tangent about tapping into idle car processing power, effectively turning the Tesla fleet into a giant distributed inference network.

“One of the things I thought: if we got all these cars that maybe are bored… we could actually have a giant distributed inference fleet,” Musk said.

Obviously plucking numbers from the air, the Tesla boss went on to optimistically project that this fleet could expand to, say, 100 million vehicles, with a baseline of a kilowatt of inference capability per vehicle. “That's 100 gigawatts of inference, distributed, with cooling and power conversion taken care of,” Musk told the financial experts on the earnings call. “So that seems like a pretty significant asset.”
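
Taken at face value, Musk's headline figure is simple multiplication. Here is a minimal back-of-envelope sketch in Python, assuming, as Musk does, that a projected 100 million cars each contribute a full kilowatt of inference hardware whenever they are idle:

# Back-of-envelope version of Musk's fleet-compute claim.
# Both inputs are Musk's projections, not Tesla specifications.
FLEET_SIZE = 100_000_000   # vehicles in the hypothetical fleet
KW_PER_VEHICLE = 1.0       # assumed inference hardware per car, in kW

total_gw = FLEET_SIZE * KW_PER_VEHICLE / 1_000_000  # kW -> GW
print(f"Aggregate inference power: {total_gw:,.0f} GW")  # 100 GW

Note that this treats electrical power as a proxy for useful compute; it says nothing about utilization, connectivity, or how work would actually be scheduled across cars.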

At its core, Musk’s idea invites comparisons with classic distributed computing projects like SETI@home and Folding@home. Still, the Tesla fleet proposal could make for an interesting commercial moonshot in the eyes of investors and business analysts.

Meanwhile, users will probably be more concerned about their bought-and-paid-for vehicles being used for someone else’s advantage, perhaps consuming extra electricity and putting their computer systems under prolonged heat stress. There would probably have to be a clear benefit for end users to incentivize them to sign up for such a compute power-sharing scheme.

Mark Tyson
News Editor

Mark Tyson is a news editor at Tom's Hardware. He enjoys covering the full breadth of PC tech, from business and semiconductor design to products approaching the edge of reason.

  • hotaru251
    the Tesla CEO pledged, “I am 100% confident that we can solve unsupervised full self-driving at a safety level much greater than a human.”

    so how many times for how many years did he say FSD on gen 1 tesla?
    Then said it was just bravado and nobody should think he was for real?

    users will probably be more concerned about their bought-and-paid-for vehicles being used for someone else’s advantage, perhaps consuming extra electricity and putting their computer systems under prolonged heat stress.

    it would likely be in the terms when you buy the thing so they dont have to pay anyone or give benefits. you either accept it or you dont get the car. (and they are betting on people interested in tesla to not care about it and just sign)
  • BTM18
    Shut up Elon.
  • bit_user
    How much memory do they each have? That would seem to be the most immediate limitation, since it would restrict model size.

    I'd be quite annoyed if my car started getting worse mileage, or used extra power when plugged in, unless I both had discretion over whether & when the inferencing happened and was compensated for it.

    Also, just because it's distributed doesn't mean it's not taxing the same grid as data centers. Sure, not everywhere is stressed equally, but some of those cars will be located and charging from networks that are already under strain.
  • Rabohinf
    Fortunately, many of us have evolved to the degree we'll never need or use an electric vehicle.
  • USAFRet
    Elon - Why hasn't the same 'distributed compute power' already emerged with the hundreds of millions of PCs around the world?
  • bit_user
    USAFRet said:
    Elon - Why hasn't the same 'distributed compute power' already emerged with the hundreds of millions of PCs around the world?
    It's a good point, but here's where I think the question about memory size enters the picture. CPUs don't have enough inferencing horsepower and dGPUs don't generally have enough memory for inferencing on the kinds of models I think he's talking about.

    So, if whatever new Tesla self-driving chip has enough memory - and we know they have oodles of compute power - then he might have at least a superficial argument.
  • USAFRet
    bit_user said:
    It's a good point, but here's where I think the question about memory size enters the picture. CPUs don't have enough inferencing horsepower and dGPUs don't generally have enough memory for inferencing on the kinds of models I think he's talking about.

    So, if whatever new Tesla self-driving chip has enough memory - and we know they have oodles of compute power - then he might have at least a superficial argument.
    For HW4, the current system.
    "The custom System on a chip (SoC) is called "FSD Computer 2". According to a teardown of a production HW4 unit in August 2023, the board has 16 GB of RAM and 256 GB of storage"

    https://en.wikipedia.org/wiki/Tesla_Autopilot_hardware
    ------------------------------------------------
    Tesla Intel Atom (MCU 2) and AMD Ryzen (MCU 3): Feature Differences and How to Tell What You Have
    https://www.notateslaapp.com/news/2417/tesla-intel-atom-mcu-2-and-amd-ryzen-mcu-3-feature-differences-and-how-to-tell-what-you-have
  • bit_user
    USAFRet said:
    For HW4, the current system.
    "The custom System on a chip (SoC) is called "FSD Computer 2". According to a teardown of a production HW4 unit in August 2023, the board has 16 GB of RAM and 256 GB of storage"
    Thank you!

    Okay, so yeah. Anyone with >= 16 GB dGPUs of the past couple generations should be applicable for whatever he's talking about. Not "1 kW of inferencing horsepower", but some significant fraction of that.
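
    For rough scale, here is a minimal Python sketch of that memory ceiling, assuming model weights alone must fit in HW4's reported 16 GB of RAM (an illustrative upper bound, not a Tesla spec; activations, the OS, and the driving stack would shrink it considerably in practice):

    # Rough upper bound on model size for a 16 GB inference node.
    RAM_BYTES = 16 * 1024**3  # HW4's reported 16 GB

    for precision, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
        max_params_b = RAM_BYTES / bytes_per_param / 1e9
        print(f"{precision}: ~{max_params_b:.0f}B parameters, tops")
    # Prints roughly: FP16 ~9B, INT8 ~17B, INT4 ~34B parameters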
  • vanadiel007
    Computing on a vehicle will never work properly unless they can solve the issue of connection speed with the vehicles.

    There's a reason why we have data centers: all the units are connected with each other using super fast interconnect speeds so they can act "as one".

    Having to wait until a Tesla uploads the result using Starlink will simply take too long, so the "network of cars" will in reality only be a network of a single car.
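
    To put rough numbers on that latency gap, here is a toy Python comparison using ballpark figures (assumptions, not measurements): data-center fabrics synchronize in microseconds, while a satellite round trip takes tens of milliseconds.

    # Toy illustration of interconnect latency in distributed inference.
    # Latency values below are rough ballparks, not measurements.
    LATENCY_SECONDS = {
        "data-center fabric (InfiniBand-class)": 5e-6,   # ~5 microseconds
        "satellite round trip (Starlink-class)": 40e-3,  # ~40 milliseconds
    }
    SYNC_STEPS = 1_000  # hypothetical synchronizations for one workload

    for link, latency in LATENCY_SECONDS.items():
        print(f"{link}: {SYNC_STEPS * latency:.3f} s spent just waiting")
    # ~0.005 s vs ~40.000 s: nearly four orders of magnitude apart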
  • USAFRet
    bit_user said:
    Thank you!

    Okay, so yeah. Anyone with >= 16 GB dGPUs of the past couple generations should be applicable for whatever he's talking about. Not "1 kW of inferencing horsepower", but some significant fraction of that.
    And we have had distributed computing for a very long time, with (unfortunately) minimal uptake.
    Folding@Home, for instance.