Huawei's GPU Reportedly Matches Nvidia's A100

(Image credit: Huawei)

Huawei's compute GPU capabilities are now on par with Nvidia's A100 GPUs, Liu Qingfeng, founder and chairman of Chinese AI company iFlytek, said at the 19th Summer Summit of the 2023 Yabuli China Entrepreneurs Forum (via IT Home).

Liu Qingfeng stated that Huawei has made significant strides in the GPU sector, achieving capabilities and performance comparable to Nvidia's A100 GPU. If true, this would be a remarkable accomplishment considering Nvidia's longstanding dominance in high-performance computing and AI. Liu also noted that three board members from Huawei are collaborating closely with iFlytek's special team, highlighting the strategic importance of this tech advancement for Huawei and the broader Chinese AI ecosystem. 

It is noteworthy that Huawei has never confirmed that it is developing its own compute GPUs, so iFlytek's comments are essentially the first confirmation that such a product exists. Huawei does already offer Ascend 910 AI accelerators, which power its Atlas 900 Pod A2 AI training cluster and have underpinned the company's AI efforts for a while. Yet, the head of iFlytek specifically noted that he was talking about a compute GPU, which points to a new product.

Liu noted that while AI algorithms developed in China are robust, the computational capabilities of domestic hardware have traditionally lagged behind Nvidia's. He cited the challenges that Chinese companies have faced in training large-scale AI models, work that is primarily done on Nvidia hardware. Since Chinese companies' access to Nvidia's hardware is constrained by U.S. export curbs aimed at the country's supercomputer sector, reliance on Nvidia's compute GPUs is a major limitation for Chinese AI companies.

In addition, Liu revealed iFlytek's ambitious plans to launch a general-purpose AI model by October that can compete with ChatGPT, and to match GPT-4 by the first half of 2024. Back in May, iFlytek officially launched a cognitive large-scale model featuring seven core abilities: text generation, language understanding, knowledge-based question-answering, logical reasoning, mathematical ability, code ability, and multi-modal capability. The model's roadmap includes planned breakthroughs in open-ended question answering and multi-modal interaction, making it a multifaceted tool in the AI landscape.

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • Nikolay Mihaylov
    Is it a GPU or an AI accelerator?
  • atomicWAR
    Nikolay Mihaylov said:
    Is it a GPU or an AI accelerator?
Sounds like it's a bit of both, just like Nvidia's offerings. Question is, can it play Crysis? /s
  • Kamen Rider Blade
    Question is, how much of nVIDIA hardware IP did they steal?
  • P1nky
    A report that reportedly reports. (y)
  • atomicWAR
    Kamen Rider Blade said:
    Question is, how much of nVIDIA hardware IP did they steal?
Most, if not all, if China's recent tech history is any indication (not being political, just facts). Though we do need more competition in the GPU space, stealing is not the way. So even if they released a gaming card...between spying concerns and IP theft, no thank you.
    P1nky said:
    A report that reportedly reports. (y)
And I thought it was a report that reportedly reports a reporting. My bad, thanks for clarifying the issue.
  • gg83
    I don't believe they have a competitive anything. Unless it's an a100 with a huawei sticker.
  • bit_user
    Nikolay Mihaylov said:
    Is it a GPU or an AI accelerator?
    Given the article says:
    "the head of iFlytek specifically noted that he was talking about a compute GPU"
    I'd assume it's a GPU-like compute accelerator with no graphics capabilities.
  • bit_user
    gg83 said:
    I don't believe they have a competitive anything. Unless it's an a100 with a huawei sticker.
    Yeah, I really don't trust anything that's not from an independent reviewer.

    And that goes for not just the hardware, but also talk of AI models:
    "Back in May, iFlytek officially launched a cognitive large-scale model featuring seven core abilities: text generation, language understanding, knowledge-based question-answering, logical reasoning, mathematical ability, code ability, and multi-modal capability."
    It might have those capabilities, in theory, but we need to know if those features are actually usable and how they compare with GPT-4 and other LLMs.
  • fleurdelis
They never claimed it was a GPU; they said it's an AI accelerator. So ... just like Google's TPUs, which have been competitive with Nvidia for 7 years now.

Whether the press release actually matches reality is one thing, but the comments in here saying that China is not capable of doing this without IP theft are just wishful thinking. It's an embarrassingly parallel problem space that boils down to lots of parallel FMA/FMAC units.

    The harder part is the software compatibility (esp pytorch & tensorflow). Google getting this right is why TPUs have been viable. Without that you might have a supercomputer but you don't have something that's going to gain mass adoption (example: AMD's competitiveness in the HPC space but complete failure in AI).
  • bit_user
    fleurdelis said:
Whether the press release actually matches reality is one thing, but the comments in here saying that China is not capable of doing this without IP theft are just wishful thinking. It's an embarrassingly parallel problem space that boils down to lots of parallel FMA/FMAC units.
    Ah, the hubris of inexperience. If it were that simple, Nvidia wouldn't be so dominant.

You're in plenty of company, though. I'm sure there have been hundreds of AI ASICs and IP blocks designed by now, the vast majority of which have fallen by the wayside once they came up against the harsh realities of what it takes to be truly competitive.

It's one thing to make a toy IP block for doing inference on tiny CNN models. It's a completely different universe when you look at the challenges of training complex, multi-billion-parameter models.

    fleurdelis said:
    The harder part is the software compatibility (esp pytorch & tensorflow). Google getting this right is why TPUs have been viable. Without that you might have a supercomputer but you don't have something that's going to gain mass adoption (example: AMD's competitiveness in the HPC space but complete failure in AI).
It's funny you should say that, because a lot of AI chipmakers have long claimed such compatibility, including AMD. As for Google, you know they created TensorFlow, right?