Huawei’s Ascend AI chip ecosystem scales up as China pushes for semiconductor independence, but the company still lags on efficiency and performance
Huawei’s AI server hardware, domestic packaging suppliers, and EDA partners now form the backbone of a China-first chip supply chain.
Huawei’s in-house Ascend processors and their surrounding supplier network are being positioned as the foundation of a national effort to build an independent, fully domestic semiconductor ecosystem in China. That includes everything from high-end AI chips and custom optical networks, through to packaging materials, photoresists, and gas delivery systems.
More than 60 semiconductor companies are now backed by Huawei’s investment arm Hubble, while local partners like Empyrean are advancing design toolchains to support a parallel AI software ecosystem independent of Nvidia and other U.S. vendors, according to reporting by Nikkei Asia and TrendForce.
This growing web of suppliers was showcased at the China Hi-Tech Fair in Shenzhen, where Huawei’s CloudMatrix 384 system, an AI server system integrating 384 Ascend 910C processors, was positioned as a direct alternative to Nvidia’s GB200 platform. Though clear performance and efficiency trade-offs remain, the system highlights how far Huawei has come since the U.S. first restricted its access to foundry services and IP in 2019.
Competing with Blackwell by scale
The foundation of Huawei’s server strategy is the Ascend 910C, a dual-chiplet accelerator built using stacked HBM2E memory and a DaVinci NPU architecture tailored for AI workloads. The chip delivers up to 780 TFLOPS of dense BF16 compute, with the entire package consuming 350 watts.
That trails Nvidia’s Hopper-based H100 or Blackwell-based B200 in both peak throughput and power efficiency, but Huawei offsets the difference by scaling up. The CloudMatrix 384 system, for example, combines twelve racks of Ascend modules with four optical interconnect racks, creating a 384-processor fabric that delivers around 300 PFLOPS in total. The network is entirely optical, with 6,912 pluggable transceivers forming a high-bandwidth, all-to-all topology.
The system draws around 559 kilowatts at peak load, which is nearly four times the power draw of Nvidia’s GB200-based DGX system. But Chinese data centers face fewer regulatory constraints on energy use, and local power costs remain significantly lower than in the U.S. That trade-off, paired with large-scale domestic chip availability, makes the Ascend stack a viable foundation for training large-scale AI models in-country. Huawei’s internal tests claim CloudMatrix outperforms Nvidia H100 platforms on specific model classes, although public benchmarks remain scarce.
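For a sense of how those headline numbers fit together, the short Python sketch below reproduces the back-of-the-envelope arithmetic using only the figures cited in this article; it is a rough estimate rather than a benchmark, and it ignores interconnect overheads, memory bandwidth, and sparsity.

```python
# Back-of-the-envelope scaling math using only the figures cited in this
# article (a sketch, not a benchmark; ignores interconnect and sparsity).
ASCEND_910C_BF16_TFLOPS = 780   # dense BF16 per chip
ASCEND_910C_POWER_W = 350       # per-package draw
NUM_CHIPS = 384                 # CloudMatrix 384
SYSTEM_POWER_KW = 559           # quoted peak draw for the full system

system_pflops = ASCEND_910C_BF16_TFLOPS * NUM_CHIPS / 1000    # ~300 PFLOPS
chip_power_kw = ASCEND_910C_POWER_W * NUM_CHIPS / 1000        # ~134 kW in accelerators alone
pflops_per_kw = system_pflops / SYSTEM_POWER_KW               # ~0.54 PFLOPS per kW

print(f"Aggregate dense BF16: {system_pflops:.0f} PFLOPS")
print(f"Accelerator share of power: {chip_power_kw:.0f} kW of {SYSTEM_POWER_KW} kW")
print(f"System efficiency: {pflops_per_kw:.2f} PFLOPS/kW")
```

By that rough math, the accelerators alone account for roughly 134 kW, with the rest of the 559 kW going to host CPUs, optical networking, and cooling overhead, leaving the full system at a little over half a petaflop per kilowatt.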
The software stack around Ascend is also maturing. Huawei’s CANN programming environment and MindSpore framework support common model architectures through a translation layer that can ingest PyTorch or TensorFlow graphs. While CUDA remains dominant globally, Huawei is planning to open-source more of its toolchain to accelerate local development and draw interest from non-domestic partners where export controls permit.
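For illustration, the snippet below is a minimal MindSpore sketch of what targeting Ascend looks like from a developer's perspective. It assumes a standard MindSpore 2.x installation with CANN and the Ascend drivers present; the model, layer sizes, and names are hypothetical and not drawn from any Huawei example.

```python
# Minimal MindSpore sketch targeting Ascend (illustrative only; assumes a
# MindSpore 2.x install with CANN and Ascend drivers; details vary by version).
import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

ms.set_context(mode=ms.GRAPH_MODE, device_target="Ascend")  # select the Ascend backend

class TinyMLP(nn.Cell):
    """Toy two-layer network; MindSpore uses construct() where PyTorch uses forward()."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Dense(128, 256)
        self.act = nn.ReLU()
        self.fc2 = nn.Dense(256, 10)

    def construct(self, x):
        return self.fc2(self.act(self.fc1(x)))

net = TinyMLP()
x = Tensor(np.random.randn(32, 128).astype(np.float32))
print(net(x).shape)  # (32, 10)
```

The structure closely mirrors PyTorch's nn.Module/forward pattern (an nn.Cell with a construct method), which lowers the porting barrier the translation layer is meant to address.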
Building the supply chain from the bottom up
What makes Huawei’s AI hardware strategy notable is how tightly integrated it has become with the broader Chinese chip supply chain. Hubble, Huawei’s investment arm, has taken minority stakes in dozens of component and material suppliers since its formation. These firms are now expanding capacity or acquiring competitors, often with local government backing, to ensure domestic resilience against future sanctions.
Jiangsu-based HHCK Advanced Materials, in which Huawei holds a 2% stake, is one example. In November, the company acquired a rival producer of heat-resistant epoxy resins for around $255 million. In parallel, Vertilite, 4%-owned by Huawei, opened a new compound semiconductor facility in Jiangsu. It produces lasers and modulators for high-speed optical links, key to Huawei’s full-rack optical mesh interconnects. Meanwhile, Shanghai Winscene Technology is scaling up production of photoresists, which are essential to lithographic processes. Aerotech, another Huawei-linked firm, is expanding capacity for gas flow systems and valves used in chipmaking equipment.
Each of these firms targets a known vulnerability in China’s chip manufacturing pipeline. Huawei has also been linked to domestic efforts in electronic design automation. While Huawei is not believed to have an equity stake in Empyrean Technology, China’s leading EDA developer, sources say the two collaborate closely on tool development and circuit verification.
To compensate for its lack of access to EUV lithography tools, Huawei and its affiliate SiCarrier have jointly developed DUV-based multi-patterning techniques that could push logic nodes to the 5nm range, albeit with significant yield and cost penalties. SiCarrier holds patents in this area, and Huawei is reportedly helping validate the approach in early production.
A parallel stack, but not a level playing field
Despite progress, Huawei still lags behind Nvidia in per-chip performance, software adoption, and global market share. Nvidia’s GPUs remain standard in nearly all major machine learning frameworks and benefit from tight integration with CUDA, cuDNN, and optimized libraries for training and inference. Huawei’s MindSpore framework is still developing comparable capabilities and lacks widespread support outside China.
Ascend’s per-chip performance is also behind. Each 910C delivers roughly one-third the BF16 throughput of Nvidia’s B200. Even though Huawei can match or exceed total system performance by scaling horizontally, it still takes more silicon and more power to achieve parity. For now, that is a cost Huawei is willing to absorb, especially as it builds a stack for use within China’s own borders and regulatory environment.
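That parity cost can be roughed out from the figures above. The sketch below inverts the "roughly one-third" per-chip comparison and assumes a commonly cited ballpark of about 1 kW for a B200 package, a figure that is an assumption on our part rather than something stated in this article.

```python
# Rough per-chip parity math (a sketch; the B200 power figure is an assumed
# round number, not taken from this article).
import math

ASCEND_910C_TFLOPS = 780               # dense BF16, per the figures above
ASCEND_910C_POWER_W = 350
B200_TFLOPS = 3 * ASCEND_910C_TFLOPS   # inverting "roughly one-third" per chip
B200_POWER_W = 1000                    # commonly cited ballpark; assumption

chips_needed = math.ceil(B200_TFLOPS / ASCEND_910C_TFLOPS)   # 3
parity_power_w = chips_needed * ASCEND_910C_POWER_W          # 1,050 W

print(f"910C packages to match one B200: {chips_needed}")
print(f"Accelerator power at parity: {parity_power_w} W vs ~{B200_POWER_W} W")
# The gap widens at system level once interconnect, host CPUs, and cooling are included.
```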
Foundry access is another constraint. SMIC and other Chinese fabs are pushing 7nm-class production using DUV techniques, but yields remain behind TSMC and Samsung, and Huawei cannot currently access either company’s advanced nodes. However, recent reporting by Bloomberg suggests Huawei could be receiving backdoor foundry support, given that smuggled dies and Samsung HBM memory were recently found in its new Ascend 910C AI chip.
In policy terms, the growth of Huawei’s semiconductor network fits neatly into China’s next five-year plan. The current draft, which runs through 2030, names chip self-reliance as a strategic priority. The national Big Fund, now in its third phase, has committed over $47 billion to semiconductor development. Hubble, Huawei and other private actors operate within this framework, often co-investing with local governments and state-owned capital vehicles.
The result is a vertically integrated, Huawei-centric supply chain that can design and deploy AI chips at volume without U.S. or European support. Whether it will deliver leading-edge performance and efficiency in the future remains uncertain. What is clearer is that the company, and the country around it, now have a credible fallback plan if sanctions tighten further or if access to key imports collapses.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.