U.S. GPU Export Restrictions Hit AMD, China's Tech Giants
AMD's shipments of advanced compute GPUs to China are now subject to U.S. export license requirements.
AMD has confirmed that the U.S. Department of Commerce now requires the company to obtain an export license to ship some of its high-performance compute GPUs to China, which will only marginally affect its data center business. Meanwhile, the new high-performance GPU export rules imposed by the DoC will hit almost all of China's high-tech companies hard, as they rely on artificial intelligence and high-performance compute GPUs from Nvidia.
AMD has notified its Chinese operations that from now on, it will have to obtain an export license from the U.S. Department of Commerce to sell its Instinct MI250 and MI250X compute GPUs to Chinese clients, reports Nikkei, citing two sources familiar with the matter. AMD confirmed to Nikkei that it received an alert from the DoC about a new export requirement for high-end compute GPUs. Nvidia received a similar notice in late August.
AMD does not sell many compute GPUs these days (and most of them go into supercomputers in the U.S. and Europe), so the new China export restrictions won't significantly impact the company's data center business. By contrast, Nvidia sells a boatload of compute GPUs to clients in China, which is why its data center sales may take a $400 million hit this quarter because of the new export requirements. In addition, the DoC restricted sales of Nvidia's A100, A100X, H100, and more powerful compute GPUs, which is why the company will try to divert some of the orders to A30 compute GPUs.
The U.S. DoC restricts exports of high-performance compute GPUs because it does not want these parts to fall into the hands of the Chinese military or associated government agencies, which could use supercomputers based on these GPUs to develop new types of weapons or new ways to optimize chip designs for arms development. Modern weapons-design supercomputers rely on both AI (for pathfinding) and HPC (for simulations).
Nvidia's A100 and more advanced compute GPUs are extremely potent in AI workloads, whereas AMD's Instinct MI200-series compute GPUs offer formidable FP64 performance for HPC workloads (see the table for details). Apparently, AMD's Instinct MI210 offers considerably higher FP64 performance than Nvidia's A100 and can even challenge the upcoming H100 in FP64 matrix operations, but it falls considerably behind in AI performance. Meanwhile, the MI210 can be sold to China without an export license, according to Nikkei.
Specification | Instinct MI210 | Instinct MI250 | Instinct MI250X | Nvidia A100 | Nvidia H100
Compute Units | 104 | 208 | 220 | 108 SMs | 132 SMs
Stream Processors | 6,656 | 13,312 | 14,080 | 6,912 | 16,896
FP64 Vector | 22.6 TFLOPS | 45.3 TFLOPS | 47.9 TFLOPS | 9.7 TFLOPS | 30 TFLOPS
FP64 Matrix/Tensor | 45.3 TFLOPS | 90.5 TFLOPS | 95.7 TFLOPS | 19.5 TFLOPS | 60 TFLOPS
FP32 Vector | 22.6 TFLOPS | 45.3 TFLOPS | 47.9 TFLOPS | 19.5 TFLOPS | 60 TFLOPS
FP32 Matrix (Nvidia: TF32 Tensor) | 45.3 TFLOPS | 90.5 TFLOPS | 95.7 TFLOPS | 156/312* TFLOPS | 500/1000* TFLOPS
Peak FP16 | 181 TFLOPS | 362.1 TFLOPS | 383 TFLOPS | 312/624* TFLOPS | 1000/2000* TFLOPS
Peak bfloat16 | 181 TFLOPS | 362.1 TFLOPS | 383 TFLOPS | 312/624* TFLOPS | 1000/2000* TFLOPS
INT8 | 181 TOPS | 362.1 TOPS | 383 TOPS | 624/1248* TOPS | 2000/4000* TOPS
HBM2E ECC Memory | 64GB | 128GB | 128GB | 80GB | 80GB
Memory Bandwidth | 1.6 TB/s | 3.2 TB/s | 3.2 TB/s | 2.039 TB/s | 3.0 TB/s
Form Factor | PCIe card | OAM | OAM | SXM4 | SXM5
*with sparsity
Applying the official performance numbers from AMD and Nvidia to the DoC's license requirement, which covers everything that matches or exceeds Nvidia's A100, suggests that the DoC is more concerned about AI performance than HPC performance.
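That threshold logic can be sketched in a few lines of code. The comparison below is an illustration only: the "equal to or exceeds the A100" test is a simplification of the DoC's actual criteria, and the figures are the vendors' published peak dense (no-sparsity) throughput numbers.

```python
# Illustrative sketch only, not the DoC's actual licensing criteria:
# compare each part's published peak throughput against an A100 baseline.

# Peak throughput in TFLOPS, per AMD's and Nvidia's official specs.
GPUS = {
    "Instinct MI210":  {"fp64_matrix": 45.3, "fp16": 181.0},
    "Instinct MI250":  {"fp64_matrix": 90.5, "fp16": 362.1},
    "Instinct MI250X": {"fp64_matrix": 95.7, "fp16": 383.0},
    "Nvidia A100":     {"fp64_matrix": 19.5, "fp16": 312.0},
    "Nvidia H100":     {"fp64_matrix": 60.0, "fp16": 1000.0},
}

BASELINE = GPUS["Nvidia A100"]

for name, spec in GPUS.items():
    hpc = spec["fp64_matrix"] >= BASELINE["fp64_matrix"]  # HPC-style metric
    ai = spec["fp16"] >= BASELINE["fp16"]                 # AI-style metric
    print(f"{name}: matches/exceeds A100 in FP64 matrix: {hpc}, in FP16: {ai}")
```

Run this way, the MI210 exceeds the A100 in FP64 matrix throughput yet trails it in FP16, which is consistent with Nikkei's report that the MI210 remains exportable while the MI250 and MI250X, which exceed the A100 on both metrics, require a license.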
But AI is used far more widely than in supercomputer-based research. Many commercial companies, such as Alibaba, Baidu, and Tencent, use artificial intelligence for their services, so without higher-end Nvidia chips, they will have to stick to lower-performance A30 compute GPUs. Alternatively, they could use AI cloud instances from AWS or Google, reports Reuters.
"It is a resource impact," said a former executive from AMD China in a conversation with Reuters. "They will still work on the same projects, they will still be moving forward; it just slows them down."
Meanwhile, numerous Chinese companies are developing their own GPUs domestically and producing them at TSMC. Some of those compute GPUs or AI accelerators, such as Biren's BR100 or Baidu's Kunlun II, can even challenge Nvidia's A100 in terms of performance and the H100 in terms of complexity.
That said, while export license requirements for GPUs that are comparable to or better than Nvidia's A100 might slow down some projects in China (assuming that the DoC does not grant licenses to some customers), they will not stop them completely.
Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.