Researchers at the Beijing Academy of Artificial Intelligence (BAAI) announced this week a natural language processing model called WuDao 2.0 that, per the South China Morning Post, is more advanced than similar models developed by OpenAI and Google.
The report said WuDao 2.0 uses 1.75 trillion parameters to "simulate conversational speech, write poems, understand pictures and even generate recipes." The models developed by OpenAI and Google are supposed to do similar things, but they use fewer parameters to do so — which, by that metric alone, would suggest WuDao 2.0 has the edge at those tasks.
The new model was specifically compared to the GPT-3 model OpenAI announced in 2020, which relies on 175 billion parameters, and Google's Switch Transformer, which pushed parameter counts past one trillion in January. Both models were massive accomplishments; now WuDao 2.0 looms over them.
The number of parameters isn't the only thing that matters, though. Models also have to be trained, and the amount of data used in that process informs their performance. The South China Morning Post said BAAI trained WuDao 2.0 on "4.9 terabytes of images and texts, including 1.2 terabytes each of Chinese and English texts."
But more data isn't necessarily better data, so it's hard to make a direct comparison. OpenAI said that GPT-3 was trained on just 570GB of data, for example, but that's after it filtered the original dataset three times to improve its quality. The organization effectively separated 570GB of wheat from 45TB of chaff.
All of which is to say the raw numbers associated with WuDao 2.0 are impressive, yes, but they might not be indicative of the model's performance. Not that it will take long for the model to be stress-tested: BAAI has reportedly partnered with 22 companies, including smartphone maker Xiaomi, to bring WuDao 2.0 to their stacks.
Western organizations are pushing further into AI, too — just look at Nvidia's plans for its first CPU, Google's increasing reliance on AI to solve problems such as failing HDDs, and Arm's push to improve machine learning performance via the Armv9 instruction set architecture. Supercomputers have also gained even more prominence.
WuDao 2.0 shows that China isn't resting on its laurels, though, and for now it's claimed the natural language processing crown. No matter how good the model is at making conversation, we suspect it will be the subject of many in the near future.
Nathaniel Mott is a freelance news and features writer for Tom's Hardware US, covering breaking news, security, and the silliest aspects of the tech industry.