
OpenAI signs AMD deal for 6GW of AI GPUs with a massive equity kicker: the ChatGPT maker can obtain up to 160 million AMD shares at one cent apiece

Dr. Lisa Su (Image credit: AMD)

OpenAI has secured up to 6 gigawatts of AMD GPU compute in a landmark supply agreement that could see the ChatGPT maker take a 10% stake in AMD.

The deal, announced on October 6, begins with AMD’s next-generation Instinct MI450 series and spans multiple future product cycles. The first one-gigawatt tranche is scheduled to be delivered in the second half of 2026, with follow-on deployments ramping up based on delivery and performance milestones. OpenAI’s warrant vests in stages and caps out at 160 million shares — almost a tenth of AMD’s outstanding stock — assuming the full 6GW is deployed and AMD’s share price triples from current levels.

That would make OpenAI one of AMD’s largest shareholders, and it gives both sides a reason to scale the partnership fast. But the deal doesn’t represent a break with Nvidia. Sam Altman said on X that the AMD agreement is incremental and that OpenAI will continue to expand its Nvidia purchases alongside the MI450 deployments, adding that “The world needs much more compute…”

Last month, OpenAI and Nvidia outlined a separate 10GW roadmap built around Nvidia’s next-generation platforms.

More than a backup option

The size of the partnership dispels the notion that AMD is merely a backup option for when Nvidia is unavailable. AMD confirmed that its work with OpenAI spans multiple generations of silicon, starting with MI450 and extending into future co-developed architectures, with AMD’s Forrest Norrod calling it “transformative” in remarks to Reuters.

OpenAI, for its part, has reportedly already evaluated AMD’s current-gen MI300X parts and is believed to be running production inference workloads on them now. While most high-profile model training still happens on Nvidia H100 clusters, the MI300X’s larger memory pool and high-bandwidth design make it particularly well-suited to LLM inference. AMD has pitched that advantage before, but this is the first time it has landed a customer of this size to prove it.

MI450 is expected to push those numbers even further, and it will arrive around the same time Nvidia’s next-generation platforms begin volume deployment. AMD has been unusually direct about the comparison. If it can deliver, AMD stands to capture not just incremental demand but new market share. Nvidia’s H100 is already allocation-bound, despite the company’s insistence that supply is plentiful, and its Blackwell parts are expected to face similar constraints. AMD, starting from a smaller base, may have more headroom to scale in 2026 and 2027. For OpenAI, that flexibility may be more valuable than squeezing out every last bit of throughput per watt.

Equity for volume

The equity structure backing the deal is aggressive. AMD is offering OpenAI a warrant for up to 160 million shares at $0.01 each, but it only vests in full if OpenAI deploys the entire 6GW and AMD’s stock hits a series of pre-agreed price targets along the way. The top target of $600 per share would represent more than three times AMD’s pre-announcement trading price, implying a market cap north of half a trillion dollars. The company added nearly $80 billion in value when the deal was announced.
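For a rough sense of scale, using only the figures above (a back-of-the-envelope sketch, not disclosed deal terms):

\[
\text{Exercise cost} \approx 160{,}000{,}000 \times \$0.01 \approx \$1.6\ \text{million}, \qquad
\text{Value at } \$600/\text{share} \approx 160{,}000{,}000 \times \$600 \approx \$96\ \text{billion}.
\]

In other words, the penny strike price is effectively symbolic; nearly all of the warrant’s value comes from AMD’s shares hitting those targets.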

It’s not the first time OpenAI has used its purchasing power to secure financial upside. Microsoft’s initial investment in the company included infrastructure credits and profit participation. But this is the first time a chip supplier has offered equity in exchange for volume, and the first time OpenAI’s compute roadmap has been tied to a public stock price.

From AMD’s side, the warrant is a performance bet. It only vests if OpenAI buys and the market responds. AMD doesn’t hand over a single share until the first gigawatt is installed, and even then, only partially. That structure also helps insulate AMD from the downside risk of OpenAI’s custom silicon efforts, which are still in development with Broadcom and have reportedly slipped behind schedule.

AMD now has a foothold

OpenAI is already building out the infrastructure to support the six-gigawatt deal. Its first Stargate data center campus in Texas is being provisioned with nearly a gigawatt of on-site power generation just to keep pace with internal demand. That kind of buildout creates long lead-time demand not just for GPUs, but also for HBM, substrates, packaging, and data center cooling systems.

AMD’s MI450 and successors will face the same upstream constraints that have dogged Nvidia for the last two years, and AMD has not disclosed how it plans to scale supply at the required rate. It’s likely to pull from TSMC’s advanced packaging lines and may tap Intel Foundry or U.S.-based subcontractors if TSMC’s U.S. fabs aren’t ready in time.

There are also obvious questions regarding software readiness. AMD’s ROCm platform has made progress, and OpenAI’s endorsement will accelerate that. But most large-scale deployments today still assume CUDA-first development. Running the same model across Nvidia, AMD, and OpenAI’s future custom chips will require new levels of framework abstraction and operator portability, something OpenAI will need to tackle head-on as it balances workloads across suppliers.
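To illustrate the shape of that portability problem, here is a minimal sketch (illustrative only, not a description of OpenAI’s actual stack). PyTorch’s ROCm builds expose AMD GPUs through the same torch.cuda interface used for Nvidia hardware, so naive device selection already carries across vendors; the harder engineering sits below this level, in operator coverage, tuned kernels, and collectives.

```python
# Minimal sketch: vendor-agnostic device selection in PyTorch.
# On ROCm builds, AMD GPUs are reported through the same torch.cuda API,
# so this code path covers both Nvidia and AMD hardware unchanged.
import torch

def pick_device() -> torch.device:
    # True on both CUDA (Nvidia) and ROCm (AMD) builds when a GPU is present.
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device()
backend = "ROCm/HIP" if torch.version.hip else ("CUDA" if torch.version.cuda else "CPU")
print(f"Running on {device} via {backend}")

# A toy layer runs unchanged on either backend; production workloads still
# need per-vendor tuning for attention kernels, communication, and memory layout.
model = torch.nn.Linear(4096, 4096).to(device)
x = torch.randn(8, 4096, device=device)
y = model(x)
```

The point is narrow: framework-level portability exists today, but performance parity and operator coverage are where the real effort goes, and that is the work OpenAI would be signing up for at scale.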

What this deal shows is that balance is OpenAI’s goal. The company isn’t walking away from Nvidia, but it isn’t waiting around either. AMD now has a foothold and, if it can execute, a customer with a direct stake in its future that will be difficult to ignore.


Luke James
Contributor

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.