Chip designer Jim Keller says Intel still has 'a lot of work to do' — would consider it for Tenstorrent AI chip production, already in talks with TSMC, Rapidus, and Samsung for 2nm tech
Also working with Japanese firm Rapidus to get swift access to the smallest process nodes possible.
AI chip design startup Tenstorrent has announced it's working with a range of companies to build out its next-generation AI chips. These include TSMC, Samsung, and Japanese firm Rapidus, all of which will provide their latest 2nm process nodes for future AI hardware. CEO Jim Keller, an AMD and Apple veteran, has also said he'd consider working with Intel, but that it "still [has] a lot of work to do," according to Nikkei Asia.
Tenstorrent was founded in 2016, with Jim Keller coming on as CTO in 2020 and then CEO in 2023. It's taking a different approach from giants like Nvidia in its chip production, focusing on cutting costs and maximizing efficiency. Its current chips, like the Blackhole AI accelerator, are built on TSMC's 6nm node, while an upcoming Quasar chip design uses Samsung's 4nm process. Beyond that, it wants 2nm for whatever comes next.
It's rare for companies to work with such a range of manufacturers for cutting-edge chips, but Tenstorrent claims it can do so because it uses chiplets in its designs. That lets different fabricators build different chiplets for it, which it can then combine in a single package.
Alongside TSMC and Samsung, it is also working with Rapidus, a Japanese startup founded as recently as 2022 and backed by a range of Japanese businesses. Its entire business model is built around producing cutting-edge 2nm hardware by 2027, partly to reinvigorate Japan's semiconductor industry and partly to build domestic capacity for advanced semiconductor production, as many countries are looking to do amid the AI boom.
Tenstorrent has also worked with GlobalFoundries in the past, and has said it won't rule out using Intel's process technology in the future. Keller seemingly just wants to see where that technology goes, especially following recent investments from Nvidia and the US government.
Speaking to Nikkei Asia about Intel, Keller said that "they still have a lot of work to do ... to deliver a really solid technology roadmap."
More immediately, Keller is looking to undercut the competition by targeting smaller companies that still want to leverage locally run AI, not just those building out giant multi-billion-dollar data centers.
"Everybody says Nvidia, OpenAI, Google ... Well, the long tail of small applications is very large, too," Keller said. "We have developers who buy a $10,000 workstation and they're really happy. ... There's a lot of them, and that will lead to bigger business."

Jon Martindale is a contributing writer for Tom's Hardware. For the past 20 years, he's been writing about PC components, emerging technologies, and the latest software advances. His deep and broad journalistic experience gives him unique insights into the most exciting technology trends of today and tomorrow.
-
S58_is_the_goat Intel Foundry be like...
https://uploads.disquscdn.com/images/8efd962ef461b9600a8cc5e6c017e6592c8efdaac7d59928b57fbf5f97413851.gif -
DS426 Very sensible man -- always has been. Certainly knows the benefits of chiplets. ;)
"Everybody says Nvidia, OpenAI, Google ... Well, the long tail of small applications is very large, too," Keller said. "We have developers who buy a $10,000 workstation and they're really happy. ... There's a lot of them, and that will lead to bigger business." - Jim Keller
That's right. Even $5K AI workstations can be plenty capable for their AI workflows. -
bit_user I'm glad to see Tenstorrent still in the fight, but they need to start shipping in meaningful volumes, or else I think they risk losing credibility. So many AI chip startups have already come and gone. Tenstorrent is looking to me like they might be starting to head in that direction, too. I hope I'm wrong. -
Blacktie75 I have nothing but respect for Jim, but a lot of what's come out of his mouth since leaving Intel makes me think his leaving had more to do with specific people within Intel than with the company itself.
This is the problem when a company gets so big that the CEO has no idea what the actual engineers are doing. The only problem I had with Pat was he should have started laying off the dead weight layers of management on day one.
Even today Intel doesn't have an accounts receivable problem; they have an accounts payable problem. They need to cut another 20k unnecessary jobs and focus on anyone that's actually working on moving the company forward. -
blppt
DS426 said: Very sensible man -- always has been. Certainly knows the benefits of chiplets. ;) That's right. Even $5K AI workstations can be plenty capable for their AI workflows.
Keller has always been the man. Saved AMD from collapse twice. If he couldn't get anything done at Intel afterwards, the problem was Intel. -
razor512 Intel should consider using their chip fabs to produce some NAND, as well as other types of flash storage that has higher write endurance, which will be better received now that home NAS builds are becoming far more common.
They need to produce Optane SSDs that offer more storage and faster reads and writes.
It is also becoming a more competitive field since demand is expected to increase vastly. -
bit_user
razor512 said: Intel should consider using their chip fabs to produce some NAND, as well as other types of flash storage that has higher write endurance
That would be a U-turn, because they just sold off their NAND fabs to SK Hynix about 3 years ago. As for higher endurance, I think the Optane fab was owned by Micron, but Intel sued to prevent them from making any 3D-XPoint memory on their own and discontinued their Optane products.
Intel is not in a good financial position to make the large investments needed to get back into the storage industry. I don't know the details, but it requires a substantially different fab process to make high-density memories than it does to make logic dies.
razor512 said: They need to produce Optane SSDs that offer more storage and faster reads and writes.
NAND can get in the same ballpark as Optane, on both endurance and performance. The one thing it can't do as well is low-QD read latency. However, that's not what the industry needs, anyhow.
https://www.tomshardware.com/pc-components/ssds/custom-pcie-5-0-ssd-with-3d-xl-flash-debuts-special-optane-like-flash-memory-delivers-up-to-3-5-million-random-iops
What's better about NAND is its efficiency and density. Optane really couldn't compete. -
ejolson Except for the name, I like the approach taken by the Blackhole accelerator from Tenstorrent. Unfortunately, the price immediately increased from $1,299 to $1,399, which gave me the perception of an unreliable supplier.
I know the price is still significantly cheaper than current H and B accelerators from Nvidia. However, for individual developers looking for cost effective hardware, used V100 accelerators are now cheap on eBay, have AI capabilities and can also run 64-bit science and engineering computations after the bubble bursts.
On the other hand, the Blackhole looks more interesting, if only because it's quite different. Although CUDA 13 just dropped support for the V100, given the lower volume of Blackhole sales, it's not clear at present which will be supported further into the future. -
bit_user
ejolson said: I know the price is still significantly cheaper than current H and B accelerators from Nvidia. However, for individual developers looking for cost effective hardware, used V100 accelerators are now cheap on eBay,
IMO, it's more cost-effective to use an RTX 5090, now that they're available near list price. You get the same 32 GB of memory, but more than double the non-sparse fp16 performance (419 vs. 194).
The Blackhole p150's only real advantage is scalability.
ejolson said: have AI capabilities and can also run 64-bit science and engineering computations after the bubble bursts.
Well, after the bubble, you can do even better than a V100. At least go for an A100, if not an H100!
ejolson said: given the lower volume of Blackhole sales, it's not clear at present which will be supported further into the future.
Yeah, that's always a risk of buying into equipment from a startup, but at least all of their stuff is open source (I'm pretty sure). Tenstorrent has a better chance than most, and they did secure a deal with LG to provide inferencing for their embedded processors, but the only truly safe bet is to buy newer Nvidia hardware (i.e. Ada, Hopper, or Blackwell).