Tesla scraps custom Dojo wafer-level processor initiative, dismantles team — Musk to lean on Nvidia and AMD more

Tesla D1 Dojo Supercomputer Chip
(Image credit: Tesla)

Tesla's custom Dojo wafer-level processor has long been an ambitious and promising hardware project, but despite early success with the bespoke chip, Tesla has continued to use Nvidia GPUs alongside Dojo. That effort now reportedly comes to an end: Tesla has decided to dismantle its Dojo supercomputer program, reassign its remaining staff to other computing projects, and turn more heavily to outside technology providers such as AMD and Nvidia, according to Bloomberg. Tesla has not formally confirmed the Dojo shutdown.

Elon Musk, Tesla's chief executive, has reportedly ordered the Dojo effort wound down entirely. As a result, Peter Bannon, the head of the Dojo project at Tesla, is set to leave the company. About 20 members of the team have already departed in recent weeks to join DensityAI, a startup created by former Tesla executives, according to Bloomberg. If the report is accurate, those who remain will move to other data center and computing roles within Tesla.

(Image credit: TSMC)

Tesla officially started its Dojo supercomputer project in 2021, aiming to build wafer-scale processors for AI training. The company planned to use a cluster built on its own proprietary hardware to train the AI behind the full self-driving (FSD) capability of its cars and the Optimus humanoid robot. However, Tesla never relied entirely on Dojo supercomputers and used third-party hardware as well.

"We are pursuing the dual path of Nvidia and Dojo," Musk said at an earnings call in 2023. "But I would think of Dojo as a long shot. It is a long shot worth taking because the payoff is potentially very high." 

However, Dojo had its own limitations when it came to memory capacity, and servers based on a wafer-scale processor were hard to produce because they used many proprietary components. In fact, the rollout of Dojo 2 hardware has been slow: the company expected to have a cluster equivalent to 100,000 of Nvidia's H100 GPUs up and running in 2026. Essentially, this meant Tesla would reach roughly the scale that xAI's Colossus hit in Fall 2024, but two years later. Recently, Musk implied that he would like Tesla cars' hardware and Dojo supercomputer hardware to run on the same architecture. 

"I think about Dojo 3 and the AI6 as the first [converged architecture designs]," Musk said in a July 23 earnings call (via Investing.com). "It seems like intuitively, we want to try to find convergence there where it is basically the same chip that is used where we use, say, two of them in a car or an Optimus and maybe a larger number on a on a [server] board, a kind of 5 - 12 twelve on a board or something like that. […] That sort of seems like intuitively the sensible way to go." 

If Tesla follows Musk's direction, it will continue developing its own hardware for both edge devices and data centers, this time based on a converged architecture that avoids exotic design decisions and proprietary components. That plan, however, has not been formally confirmed. 


Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.
