Amazon and Google tip off Jensen Huang before announcing information about their AI chips, says report — companies tread carefully to avoid surprising Nvidia
The quiet deference to Nvidia’s CEO reveals how much power the company still holds over AI’s most ambitious companies.

A new report claims that before Amazon or Google reveals anything about their latest artificial intelligence chips, they first notify Nvidia CEO Jensen Huang. The practice, described by sources in The Information’s recent report on Nvidia’s internal dealmaking, reflects the quiet reality of the AI hardware market: Nvidia is still the dominant supplier of training compute, and its customers are trying not to get cut off.
According to the report, Amazon and Google each provide advance notice to Huang before unveiling updates to their custom silicon. The reason, sources say, is that Nvidia is still deeply embedded in their cloud operations and neither wants to surprise the person who effectively controls their AI infrastructure supply. Nvidia accounts for the overwhelming majority of the accelerators used to train large language models, and its GPUs also handle a growing share of inference tasks in the public cloud.
This deference comes as Nvidia is pouring billions into customers, suppliers, and competitors alike in a bid to tighten its grip on the market. In September alone, Nvidia signed a deal to buy up to $6.3 billion worth of unused GPU capacity from CoreWeave over the next seven years. It also invested $700 million in British data center startup Nscale and spent more than $900 million to acquire the CEO and key engineers of networking startup Enfabrica while licensing its chip technology. That followed news of a $5 billion investment in Intel to support joint chip development and a letter of intent with OpenAI to back a 10-gigawatt GPU data center buildout that could cost up to $100 billion.
The scale and timing of those investments show how aggressively Nvidia is trying to preempt the rise of non-GPU accelerators. Companies like Amazon, Google, and OpenAI are all pursuing in-house silicon efforts designed to reduce dependency on Nvidia hardware and improve performance or cost at scale. But even with years of effort and tens of billions of dollars behind them, their platforms are still heavily reliant on Nvidia’s CUDA ecosystem, software tooling, and manufacturing pipeline.
According to The Information, Nvidia’s dominance has created a dynamic where it acts like a financial backstop to the entire AI supply chain. The company can now fund suppliers, rent out capacity, and underwrite long-term purchases to support continued demand for its hardware. That makes it harder for any individual customer to walk away, even as they build competing products.
For now, the biggest names in cloud computing are still briefing Huang before revealing their next chips. That sort of dynamic may not last forever, but it says a lot about where the power still sits today.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.