Report: Arm developing custom CPU for OpenAI's in-house accelerator — core IP would underpin 10GW of installed AI capacity
The Information reports that Arm is developing a CPU for OpenAI’s custom Broadcom-built accelerator, part of a sweeping expansion plan.

OpenAI is reportedly working with SoftBank-owned Arm on a new CPU to complement the custom AI accelerator it is co-developing with Broadcom. The collaboration, first reported by The Information, would see Arm design a server-class CPU that anchors OpenAI’s next-generation AI racks, potentially representing one of Arm’s biggest steps into the data center market to date.
The accelerator in question is OpenAI's in-house AI chip, part of plans announced on October 13 to deploy custom accelerators and rack systems in collaboration with Broadcom. The SoC, specialized for inference workloads, is expected to enter production in late 2026, with deployments scaling to roughly 10 gigawatts of compute capacity through 2029. The chip, said to be fabricated by TSMC, has been in development for roughly 18 months.
According to The Information, Arm's new role goes well beyond supplying architectural blueprints. The company has recently begun designing and selling complete chips of its own rather than just licensing cores to partners, and it sees the OpenAI contract as a chance to advance its server ambitions. People familiar with the discussions told the outlet that OpenAI could use the Arm-designed CPU not only with its Broadcom chip, but also alongside systems from Nvidia and AMD.
The potential revenue from OpenAI’s CPU program could reach into the billions, the report also said, representing a major windfall for SoftBank, which owns nearly 90% of Arm and has borrowed heavily against its stake. SoftBank has also pledged to invest tens of billions of dollars into OpenAI’s data center build-out and to buy AI technology from the startup to help accelerate Arm’s own chip development cycle.
Together with earlier agreements with Nvidia and AMD, OpenAI says its chip programs now total as much as 26GW of planned data center capacity. Analysts estimate that building out that much capacity, spanning the custom silicon as well as the Nvidia and AMD deployments, could cost more than $1 trillion in construction and equipment.
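For reference, the 26GW figure lines up with the capacities reported at each announcement; the per-partner split below is drawn from those earlier reports rather than broken out in this story, so treat it as a back-of-the-envelope tally:

```python
# Rough tally of OpenAI's announced compute commitments.
# Per-partner figures are as reported at each announcement, not official breakdowns.
commitments_gw = {
    "Nvidia": 10,               # systems deal announced in September
    "AMD": 6,                   # Instinct deployment announced in early October
    "Broadcom custom chip": 10, # the in-house accelerator program covered here
}

total = sum(commitments_gw.values())
print(f"Planned capacity: {total} GW")  # -> 26 GW, matching OpenAI's stated total
```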
The OpenAI–Broadcom chip could also give the ChatGPT developer more leverage in pricing talks with Nvidia, whose Hopper and Blackwell GPUs still dominate the AI training market. If Broadcom and TSMC can scale production, OpenAI's inference chips may offer a partial hedge against the tight GPU supply that has constrained AI labs for much of the past year.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.
-
bit_user
The article said: "... potentially representing one of Arm's biggest steps into the data center market to date."

Amazon, Google, Microsoft, and Nvidia already have server CPUs with ARM-designed IP. Nvidia's Grace is probably the most similar to what they're doing.
So, their presence in the datacenter isn't new, but perhaps how much of that value chain they're incorporating is the point of distinction.
The article said: "The potential revenue from OpenAI's CPU program could reach into the billions ..."

That's what ARM needs. Its IP licensing simply hasn't generated adequate revenue to fuel continued growth. The whole Qualcomm legal drama was their attempt to extract more licensing revenue, but that didn't work. So, they're having to get deeper into the value chain and start competing with some of their licensees in more direct ways.
IMO, ARM should resolve conflicts of interest by establishing something akin to the RISC-V Foundation, to oversee the evolution and licensing of the ISA. Since Architecture Licenses aren't a big source of revenue and don't even appear to be big points of leverage, there's not much to lose by it, and perhaps it'd stave off competition from RISC-V for a bit longer.
The article said: "If Broadcom and TSMC can scale production, OpenAI's inference chips may offer a partial hedge against the tight GPU supply that has constrained AI labs for much of the past year."

Only if they can get enough memory. HBM production is booked out well into the future. Being inference-oriented, maybe they'll instead use GDDR7, but I'm not sure that has enough slack in the supply chain, either.
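To put rough numbers on that trade-off (ballpark public per-device figures, nothing confirmed for OpenAI's part, and the device counts are just typical configurations):

```python
# Rough memory-bandwidth comparison for a hypothetical inference accelerator.
# All numbers are ballpark public figures, not specs for OpenAI's chip.

def bw_gbs(pin_gbps: float, bus_bits: int) -> float:
    """Peak bandwidth in GB/s for one device: pin rate x bus width / 8 bits per byte."""
    return pin_gbps * bus_bits / 8

hbm3e_stack = bw_gbs(9.6, 1024)  # ~1229 GB/s per HBM3E stack (9.6 Gb/s pins, 1024-bit)
gddr7_chip = bw_gbs(32.0, 32)    # ~128 GB/s per GDDR7 device (32 Gb/s pins, 32-bit)

print(f"8 x HBM3E stacks: {8 * hbm3e_stack / 1000:.1f} TB/s")                  # ~9.8 TB/s
print(f"16 x GDDR7 chips (512-bit board): {16 * gddr7_chip / 1000:.1f} TB/s")  # ~2.0 TB/s
```

Even granting that inference is less bandwidth-hungry than training, that's roughly a 5x gap, which is why GDDR7 looks like a supply-chain compromise rather than a drop-in substitute.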