OpenAI might be building its own chip, but it’ll still be dependent on Nvidia — custom chip developed with Broadcom reportedly slips to Q3 2026
A $10 billion Broadcom deal won’t free OpenAI from Jensen Huang’s orbit.

OpenAI's long-rumored $10 billion partnership with Broadcom is already showing cracks. The two companies are widely understood to be developing a custom chip tailored to OpenAI's inference workloads, but according to a report from The Information citing individuals familiar with the matter, the project has "hit snags": OpenAI wanted more power, sooner, than Broadcom could deliver, and an internal push to roll the chip out in Q2 2026 has already slipped to Q3 at the earliest.
The project, which has been kept deliberately quiet, will have its manufacturing run through TSMC. Once live, the chip could handle inference jobs across OpenAI’s growing fleet of data centers, cutting the company’s exposure to GPU bottlenecks and potentially lowering costs.
However, even as OpenAI lays the groundwork for its own silicon, it’s doubling down on Nvidia. A recent infrastructure agreement between the two companies, potentially worth more than $100 billion, would see Nvidia supply GPUs for the next wave of OpenAI-hosted AI clusters. Nvidia CEO Jensen Huang recently said that OpenAI “is likely going to be the next multi-trillion-dollar hyperscaler company,” and OpenAI remains a cornerstone customer for Nvidia’s highest-end systems.
This highlights the same paradoxical situation we’ve seen time and again with AI: Amazon, Google, Microsoft, Meta, and now OpenAI are all building their own chips to reduce their reliance on Nvidia, while simultaneously relying on Nvidia more than ever.
A hedge with no clear endgame
Broadcom executives first confirmed what is believed to be the OpenAI deal late last year, saying that a large AI customer had booked billions in long-term orders. Reports quickly tied the deal to OpenAI, which has been growing a small, specialized in-house silicon team since at least mid-2023. The chip is understood to be designed for internal inference tasks and is not intended for commercial release. Broadcom handles the physical design, with TSMC expected to fabricate the chips.
This deal made OpenAI the latest entrant in a long line of hyperscalers trying to build their own chips. Amazon has its Trainium and Inferentia platforms. Google is now on its seventh-generation TPU. Microsoft is working on its Maia accelerators. Each chip was billed as a shift away from GPU dependency; each company still runs major workloads on Nvidia silicon.
OpenAI doesn’t shy away from this fact. Its GPT-4 model was reportedly trained on Nvidia A100s, and its hosting partners — including CoreWeave and Microsoft — continue to deploy Nvidia hardware at scale. The new custom chip effort might eventually take over some inference jobs, but there’s no evidence it will replace H100 or Blackwell-class GPUs for training. And even if the silicon performs well, it won’t come bundled with Nvidia’s mature software stack.
There’s no matching CUDA
This is the piece challengers still can’t match. Nvidia’s CUDA platform remains the default target for nearly every AI framework in use today. From PyTorch and TensorFlow to popular model compilers and quantization toolkits, most of the AI software stack is optimized for Nvidia’s architecture. Migrating off it means rewriting core libraries, retraining engineers, and adapting models to new hardware, which, ultimately, is a cost few companies are willing to absorb.
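To make that lock-in concrete, here’s a minimal, hypothetical PyTorch sketch (illustrative only, not drawn from OpenAI’s codebase) showing how deeply CUDA is baked in as the assumed default:

```python
import torch

# Most AI code assumes CUDA as the implicit default backend. Moving to
# custom silicon means revisiting every assumption like this one, along
# with the tuned Nvidia kernels and libraries underneath it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 1024).to(device)
x = torch.randn(8, 1024, device=device)

# On CUDA, this forward pass dispatches to Nvidia-tuned kernels
# (cuBLAS and friends); alternative hardware needs a PyTorch backend
# that provides equally optimized equivalents.
y = model(x)
print(y.shape, device)
```

Alternative accelerators can plug into frameworks like PyTorch through their own backends, but matching years of CUDA kernel tuning across the whole stack is the hard part, and that is exactly the gap custom-silicon efforts face.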
OpenAI, like others, is unlikely to abandon CUDA without a compelling reason. Broadcom doesn’t offer its own software ecosystem, which means OpenAI’s team would need to build its own toolchain or adopt one of the open standards still struggling to reach parity. In the meantime, the easiest, fastest way to build and run large-scale models is still with Nvidia’s chips and software.
Jensen Huang knows this. Such is his iron grip over the industry that the likes of Amazon and Google reportedly give him a heads-up before announcing a new chip that might compete with Nvidia’s. This all happens quietly and, according to reports, has become something of an unwritten rule. It’s not required, but it happens, and it shows the degree to which Nvidia still commands power over its customers, even those building chips to hedge against it.
It’s not difficult to understand why this is the case. Nvidia is pouring billions into partnerships, infrastructure, and component sourcing. It recently agreed to buy up to $6.3 billion in unused GPU capacity from CoreWeave, invested nearly $1 billion to license Enfabrica’s networking tech, and invested $5 billion in Intel as part of a joint development pact. It even agreed to support OpenAI’s next generation of GPU data centers despite OpenAI’s clear intent to use its own chips at some point.
Supply chain headwinds
Even if the OpenAI chip meets its performance goals, it faces supply chain headwinds. CoWoS packaging is still bottlenecked at TSMC, with Nvidia and AMD claiming much of the near-term capacity. Advanced HBM memory is also under pressure, with SK hynix and Samsung prioritizing existing customers. So, while Broadcom can bring design expertise, it has no control over the back end. Nor does OpenAI.
There’s also the question of scale. Nvidia’s Blackwell platform combines multi-chip modules, enormous memory bandwidth, and proprietary NVLink switching, a tightly integrated package that Broadcom can’t offer. If OpenAI’s chip is simpler, it may be cheaper or more efficient per watt, but it won’t be competitive on peak performance, which limits its value for training future large models.
All of this points toward a long-term hybrid model, where OpenAI uses both Nvidia hardware and its own custom silicon depending on the workload. Which, again, is what all the other hyperscalers are already doing.
The Broadcom partnership does make some sense for OpenAI from a strategic standpoint. If the chip ships on time (which looks unlikely) and performs well, it could reduce cost per token and give the company a touch more control over its infrastructure. But early signs aren't encouraging, and, in any case, it won’t be a silver bullet that replaces Nvidia's hardware for training cutting-edge models.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.