U.S. GPU Export Restrictions Hit AMD, China's Tech Giants

(Image credit: AMD)

AMD has confirmed that the U.S. Department of Commerce now requires the company to obtain an export license to ship some of its high-performance compute GPUs to China, a change that will only marginally affect its data center business. Meanwhile, the new high-performance GPU export rules imposed by the DoC will hit almost all of China's high-tech companies hard, as they rely on artificial intelligence and high-performance compute GPUs from Nvidia.

AMD has notified its Chinese operations that from now on, it will have to obtain an export license from the U.S. Department of Commerce to sell its Instinct MI250 and MI250X compute GPUs to Chinese clients, reports Nikkei, citing two sources familiar with the matter. AMD confirmed to Nikkei that it had received an alert from the DoC about a new export requirement for high-end compute GPUs. Nvidia received a similar notice in late August.

AMD does not sell many compute GPUs these days (and most of them go into supercomputers in the U.S. and Europe), so the new China export restrictions won't significantly impact the company's data center business. By contrast, Nvidia sells a boatload of compute GPUs to clients in China, which is why its data center sales may take a $400 million hit this quarter because of the new export requirements. In addition, the DoC restricted sales of Nvidia's A100, A100X, H100, and more powerful compute GPUs, which is why the company will try to divert some of the orders to A30 compute GPUs.

The U.S. DoC restricts exports of high-performance compute GPUs because it does not want these parts to fall into the hands of the Chinese military or associated government agencies, which could use supercomputers based on these GPUs to develop new types of weapons (or new ways to optimize chip designs for arms and weaponry development). Modern supercomputers used to design weapons rely on both AI for pathfinding and HPC for simulations.

Nvidia's A100 and more advanced compute GPUs are extremely potent in AI workloads, whereas AMD's Instinct MI200-series compute GPUs offer formidable FP64 performance for HPC workloads (see the table below for details). Notably, AMD's Instinct MI210 offers considerably higher FP64 performance than Nvidia's A100 and can even challenge the upcoming H100 in FP64 matrix operations, but it falls considerably behind in AI performance. Meanwhile, the MI210 can be sold to China without an export license, according to Nikkei.

| | Instinct MI210 | Instinct MI250 | Instinct MI250X | Nvidia A100 | Nvidia H100 |
|---|---|---|---|---|---|
| Compute Units | 104 | 208 | 220 | 108 SMs | 132 SMs |
| Stream Processors | 6,656 | 13,312 | 14,080 | 6,912 | 16,896 |
| FP64 Vector (Tensor) | 22.6 TFLOPS | 45.3 TFLOPS | 47.9 TFLOPS | 19.5 TFLOPS | 60 TFLOPS |
| FP32 Vector (Tensor) | 22.6 TFLOPS | 45.3 TFLOPS | 47.9 TFLOPS | 156 / 312* TFLOPS | 500 / 1000* TFLOPS |
| Peak FP16 | 181 TFLOPS | 362.1 TFLOPS | 383 TFLOPS | 312 / 624* TFLOPS | 1000 / 2000* TFLOPS |
| Peak bfloat16 | 181 TFLOPS | 362.1 TFLOPS | 383 TFLOPS | 312 / 624* TFLOPS | 1000 / 2000* TFLOPS |
| INT8 | 181 TOPS | 362.1 TOPS | 383 TOPS | 624 / 1248* TOPS | 2000 / 4000* TOPS |
| HBM2E ECC Memory | 64GB | 128GB | 128GB | 80GB | 80GB |
| Memory Bandwidth | 1.6 TB/s | 3.2 TB/s | 3.2 TB/s | 2.039 TB/s | 3.0 TB/s |
| Form-Factor | PCIe card | OAM | OAM | SXM4 | SXM5 |

*with sparsity

Judging by the official performance numbers from AMD and Nvidia against the DoC's license requirement, which covers everything that equals or exceeds Nvidia's A100, the DoC appears to be more concerned about AI performance than HPC performance.
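That reading can be sanity-checked against the spec table. The sketch below compares a few parts to the A100's dense figures on an AI-leaning metric (FP16) and an HPC-leaning metric (FP64); the thresholding logic is purely illustrative and is not the DoC's actual licensing formula.

```python
# Peak dense throughput figures from the spec table above.
# The comparison logic is a hypothetical illustration of the
# "equal to or exceeds Nvidia's A100" threshold, not official criteria.

A100 = {"fp64_tflops": 19.5, "fp16_tflops": 312}

gpus = {
    "Instinct MI210":  {"fp64_tflops": 22.6, "fp16_tflops": 181},
    "Instinct MI250X": {"fp64_tflops": 47.9, "fp16_tflops": 383},
    "Nvidia H100":     {"fp64_tflops": 60.0, "fp16_tflops": 1000},
}

for name, spec in gpus.items():
    beats_ai = spec["fp16_tflops"] >= A100["fp16_tflops"]   # AI-leaning metric
    beats_hpc = spec["fp64_tflops"] >= A100["fp64_tflops"]  # HPC-leaning metric
    print(f"{name}: matches/exceeds A100 in AI: {beats_ai}, in HPC: {beats_hpc}")
```

On these numbers the MI210 exceeds the A100 only in FP64, not FP16, which is consistent with Nikkei's report that it escapes the license requirement while an FP16-based threshold would still catch the MI250X and H100.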

But AI is used far more widely than in supercomputer-based research. For example, many commercial companies, such as Alibaba, Baidu, and Tencent, use artificial intelligence for their services, so without Nvidia's higher-end chips, they will have to stick to lower-performance A30 compute GPUs. Alternatively, they could use AI cloud instances from AWS or Google, reports Reuters.

"It is a resource impact," said a former executive from AMD China in a conversation with Reuters. "They will still work on the same projects, they will still be moving forward; it just slows them down." 

Meanwhile, numerous Chinese companies are developing GPUs domestically and producing them at TSMC. Some of those compute GPUs and AI accelerators, such as Biren's BR100 or Baidu's Kunlun II, can even challenge Nvidia's A100 in terms of performance and the H100 in terms of complexity.

That said, while export license requirements for GPUs that are comparable to or better than Nvidia's A100 might slow down some projects in China (assuming that the DoC does not approve some customers), they will not stop them completely.

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom's Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.