Alibaba and ByteDance allegedly train Qwen and Doubao LLMs using Nvidia chips, despite export controls — Southeast Asian data center leases skirt around U.S. chip restrictions
Alibaba and ByteDance lease Southeast Asian data centers to work around U.S. restrictions.
Chinese technology giants, including Alibaba and ByteDance, are increasingly training their most advanced artificial intelligence models in Southeast Asia, taking advantage of overseas data centers equipped with high-end Nvidia GPUs, according to new reporting by the Financial Times. The shift reflects how leading AI labs in China are navigating U.S. export controls by leasing compute from non-Chinese operators based in Singapore and Malaysia.
Over the past year, Alibaba’s Qwen and ByteDance’s Doubao large language models have risen into the top tier of global LLM benchmarks. Both have allegedly been trained, at least in part, on Nvidia accelerators housed in offshore clusters.
Singapore-based operators told the FT that demand from Chinese firms has grown since April, when the Trump administration tightened its embargo to cover Nvidia’s H20 and other export-compliant chips. Shortly afterward, the so-called “AI diffusion rule,” which was intended to block overseas leasing, was rolled back under revised policy.
U.S. export controls currently prohibit Nvidia from selling its most advanced GPUs directly to China, and China has banned foreign AI chips from its state-funded data centers. But leasing compute from foreign-owned data centers abroad — even if the end user is Chinese — remains legal under the current rules.
A May 2025 notice withdrew proposed Biden-era restrictions known as the "AI diffusion rule" that would have treated such arrangements as indirect violations of the export ban. In effect, that allows companies to use H100- and A100-class accelerators outside China, provided the hardware is owned and managed by a compliant third party.
ByteDance and Alibaba are not the only firms pursuing this route, but they represent the most visible examples. Their arrangements allow them to train new models with performance targets on par with those of Western AI labs. The resulting weights can then be run inside China for inference on domestically sourced silicon. Chinese companies are increasingly using chips from Huawei and other local suppliers to handle deployment and user interactions, which now make up a growing share of AI workloads.
One exception is DeepSeek, a Hangzhou-based firm that stockpiled Nvidia parts ahead of the U.S. ban and continues to train inside China. The company, which is also thought to be using shell companies to evade restrictions, has partnered with Huawei to optimize future training runs on local silicon.
While training clusters are migrating abroad, private data still cannot leave China. That constraint means fine-tuning or retraining based on Chinese user data must take place domestically, even when the base model was developed offshore.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.