Top China silicon figure calls on country to stop using Nvidia GPUs for AI — says current AI development model could become 'lethal' if not addressed
Develop AI-specific ASICs instead.

Wei Shaojun, vice president of the China Semiconductor Industry Association and a senior Chinese academic and government adviser, has called on China and other Asian countries to stop using Nvidia GPUs for AI training and inference. Speaking at a forum in Singapore, he warned that reliance on U.S.-origin hardware poses long-term risks for China and its regional peers, reports Bloomberg.
Wei criticized the current AI development model across Asia, which closely mirrors the American path of using compute GPUs from Nvidia or AMD for training large language models such as ChatGPT and DeepSeek. He argued that this imitation limits regional autonomy and could become 'lethal' if not addressed. According to Wei, Asia's strategy must diverge from the U.S. template, particularly in foundational areas like algorithm design and computing infrastructure.
The restrictions that the U.S. government imposed in 2023 on the performance of AI and HPC processors that can be shipped to China created significant hardware bottlenecks in the People's Republic and slowed the training of leading-edge AI models. Despite these challenges, Wei pointed to the rise of DeepSeek as evidence that Chinese companies can make significant algorithmic advances even without cutting-edge hardware.
He also pointed to Beijing's stance against using Nvidia's H20 chip as a sign of the country's push for true independence in AI infrastructure. At the same time, he acknowledged that while China's semiconductor industry has made progress, it remains years behind America and Taiwan, so the chances that China-based companies can build AI accelerators with performance comparable to Nvidia's high-end offerings are slim.
Wei proposed that China develop a new class of processors tailored specifically to large language model training rather than continue to rely on GPU architectures, which were originally designed for graphics processing. While he did not outline a concrete design, his remarks amount to a call for domestic innovation at the silicon level to support China's AI ambitions. He did not, however, explain how China plans to catch up with Taiwan and the U.S. in the semiconductor production race.
He concluded on a confident note, stating that China remains well-funded and determined to continue building its semiconductor ecosystem despite years of export controls and political pressure from the U.S. The overall message was clear: China must stop following and start leading by developing unique solutions suited to its own technological and strategic needs.
Nvidia GPUs became dominant in AI because their massively parallel architecture was ideal for accelerating matrix-heavy operations in deep learning, offering far greater efficiency than CPUs. Also, the CUDA software stack introduced in 2006 enabled developers to write general-purpose code for GPUs, paving the way for deep learning frameworks like TensorFlow and PyTorch to standardize on Nvidia hardware.
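As a rough illustration of that parallel advantage, the sketch below times the same matrix multiplication on a CPU and on a GPU using PyTorch. It assumes PyTorch is installed and a CUDA-capable Nvidia GPU is present, and the 4096 x 4096 matrix size is an arbitrary choice rather than a benchmark standard.

```python
# Minimal sketch: timing the same matrix multiplication on CPU and GPU with PyTorch.
# Assumes PyTorch is installed and a CUDA-capable Nvidia GPU is available.
import time
import torch

N = 4096  # square matrix dimension; large enough that parallelism dominates

a_cpu = torch.randn(N, N)
b_cpu = torch.randn(N, N)

start = time.perf_counter()
torch.matmul(a_cpu, b_cpu)
print(f"CPU matmul: {time.perf_counter() - start:.3f} s")

if torch.cuda.is_available():
    a_gpu = a_cpu.to("cuda")
    b_gpu = b_cpu.to("cuda")
    torch.matmul(a_gpu, b_gpu)   # warm-up launch so timing excludes one-time initialization
    torch.cuda.synchronize()     # GPU kernels run asynchronously; wait before timing
    start = time.perf_counter()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()
    print(f"GPU matmul: {time.perf_counter() - start:.3f} s")
```

On typical hardware the GPU run finishes in a small fraction of the CPU time, which is the efficiency gap the article describes.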
Over time, Nvidia reinforced its lead with specialized hardware (Tensor Cores, mixed-precision formats), tight software integration, and widespread cloud and OEM support, making its GPUs the default compute backbone for AI training and inference. Nvidia's modern data center architectures, such as Blackwell, are packed with optimizations for AI training and inference and have almost nothing to do with graphics. By contrast, the kind of special-purpose ASICs that Wei Shaojun advocates have yet to gain comparable traction for either training or inference.
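To show the kind of workload that Tensor Cores and mixed-precision formats target, here is a minimal, hypothetical training-loop sketch using PyTorch's automatic mixed precision. The model, batch size, and hyperparameters are placeholders chosen for illustration, not anything Wei or Nvidia prescribes.

```python
# Minimal sketch of mixed-precision training in PyTorch, the kind of workload Tensor Cores accelerate.
# The model, data shapes, and hyperparameters below are arbitrary placeholders.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))  # rescales gradients to avoid FP16 underflow

inputs = torch.randn(32, 1024, device=device)
targets = torch.randint(0, 10, (32,), device=device)

for step in range(10):
    optimizer.zero_grad(set_to_none=True)
    # Ops inside autocast run in FP16 where safe, which maps onto Tensor Cores on Nvidia GPUs.
    with torch.autocast(device_type=device, dtype=torch.float16, enabled=(device == "cuda")):
        loss = nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The point of the sketch is that this acceleration comes largely from the software stack and hardware features Nvidia has layered on over the years, which is exactly the dependence Wei wants China to engineer its way out of.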

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.