The Senate's new SAFE bill is set to curb China's access to advanced chips, but that won't slow down the AI war: training workloads still rely heavily on Nvidia, while alternatives remain inefficient
A new bipartisan bill in the Senate could pause shipments, but there are ways around it.
A new bipartisan bill in the U.S. Senate threatens to put the brakes on Nvidia's efforts to sell its latest AI-training hardware to Chinese customers, even as the Trump administration mulls allowing sales of lower-powered versions of that hardware. China, for its part, is looking to restrict access to this kind of hardware in favor of domestic chip firms, a move that would harden its supply chains and reduce its exposure to trade turbulence. However, with no real alternative to Nvidia's GPUs for training workloads and numerous ways to circumvent sanctions, tariffs, and trade barriers, it's hard to imagine Nvidia completely exiting the region.
Nvidia CEO Jensen Huang spent much of last week meeting with U.S. legislators, including President Trump and Republican members of the Senate Banking Committee, which oversees U.S. export control programs. Huang clearly wasn't persuasive enough, though: the proposed Secure and Feasible Exports Act (SAFE) would require the Commerce Department to halt export licenses for sales of the latest chips to U.S. adversaries, including China and Russia, for 30 months.
The ban could cover all existing chips, as well as anything more powerful that the major chipmakers develop over that same period. Although it primarily targets Nvidia's Blackwell GPUs, it would also cover Nvidia's last-generation Hopper designs, AMD's graphics chips, and Google's latest TPU designs.
This is devastating news for Nvidia and many of its chip-manufacturing contemporaries. China is a massive market for hardware and AI development, but it has hardly proven to be the most willing of markets.
Chinese authorities have spent months pushing back against the on-again, off-again availability of Nvidia hardware by encouraging domestic companies to use local chip suppliers where possible. Beijing has mandated that Chinese companies use at least 50% domestically produced hardware and, more recently, claimed that new packaging and assembly techniques can close the performance gap between Nvidia and local producers.
Chinese chip firms have responded with gusto, too, announcing enormous plans to manufacture several times as many chips as they managed in 2025, as soon as next year. It's not clear whether those plans are physically achievable in such a short time frame, but they're shooting for the moon nonetheless.
But even if the companies can fabricate these chips, there's no guarantee they'll be used, despite the double-ended carrot-and-stick approach of both the U.S. and Chinese authorities.
Inference is one thing, training is another
China has made major leaps in its AI hardware development over the past few years, particularly in the past year, as it's sought to build more reliable access to powerful AI hardware, while the U.S. turned the tap on and off at the whim of its mercurial commander-in-chief. These conditions have led Huawei to make tremendous advances and to design high-power systems that scale well, at the expense of efficiency.
But that's mainly in the realm of inference, which is the day-to-day running of an AI model after it has been fully trained. The versatility of Nvidia's GPUs makes them particularly well-suited to AI training, and in that arena they have no real rival.
There have been some semi-hyperbolic claims about a new Chinese chip design that leverages 3D hybrid bonding techniques and is said to deliver performance comparable to Nvidia's 4nm silicon in training workloads. Given the restrictions on China's access to ASML's EUV machines, it's an interesting area of expansion.
None of that is proven yet, and questions remain over the design's efficiency, its thermal dissipation (bonding memory and compute directly raises serious overheating concerns), and whether such a complicated design can be produced at scale without yield issues.
But even if all the claims about this hardware prove true and it really is competitive with Nvidia, why wouldn't the companies that need this hardware at scale right now just keep using Nvidia anyway? When DeepSeek's developers were forced to use locally produced chips for training, they ended up switching back to Nvidia hardware because the performance just wasn't there.
Despite all the blocks and barriers from various governments and organizations, it allegedly hasn't been too difficult for companies to get their hands on Nvidia hardware.
Singaporean companies have allegedly been used to circumvent trade blocks, and leasing computing power from international partners effectively allows Chinese companies to use whatever hardware they like. There are always mules willing to help get the hardware across the border for a fee, too.
Speed is everything
So, even if new barriers make it harder for Nvidia to ship hardware to China, those shipments will probably still happen. Nvidia's hardware is better for training than anything Chinese producers can make, it's still readily available (albeit through ever-more convoluted routes), and the companies that want it are competing against markets with better access to it. As the latest DeepSeek 3.2 whitepaper shows, the race for AGI is entering a stage where those with the best pre-training compute may push ahead with breakthroughs. Now, the AI race is turning into a question of scale, regardless of who is making the chips.

Jon Martindale is a contributing writer for Tom's Hardware. For the past 20 years, he's been writing about PC components, emerging technologies, and the latest software advances. His deep and broad journalistic experience gives him unique insights into the most exciting technology trends of today and tomorrow.