China foes get worse results using DeepSeek, research suggests — CrowdStrike finds nearly twice as many flaws in AI-generated code for IS, Falun Gong, Tibet, and Taiwan
DeepSeek has yet to officially respond to the findings.

Research suggests that DeepSeek AI results can be of drastically lower quality if your prompt touches topics that are geopolitically sensitive or banned in China. In tests conducted by U.S. security firm CrowdStrike, code generated for a computer system purportedly operated by the Islamic State militant group contained nearly twice as many flaws as it would otherwise have had. Other trigger topics included Falun Gong, Tibet, and Taiwan, according to a new Washington Post report.
One of the key findings highlighted by the source: when DeepSeek generated code for a program to run an industrial control system, 22.8% of the code typically contained flaws. When the same request was made on behalf of an Islamic State project, the flaw rate rose sharply, to 42.1%.
Rather than delivering faulty code, DeepSeek would sometimes refuse to generate code for the likes of professed Islamic State backers or devotees of the spiritual movement Falun Gong. Refusals to aid those groups would occur 61% and 45% of the time, respectively. Notably, both movements are banned in China.
However, DeepSeek's apparent degrading of code quality when it generates code for such organizations has surprised some. “That is something people have worried about — largely without evidence,” Helen Toner, from the Center for Security and Emerging Technology at Georgetown University, told the Washington Post.
DeepSeek's reasons for downgrading AI-generated code destined for places like Tibet and Taiwan are less clear-cut. Code generated for those regions was also flawed more often than the baseline, though less severely than code generated for the Islamic State.
What is happening? A few theories.
The Washington Post has sought comment from the makers of DeepSeek regarding CrowdStrike’s research findings, but has yet to get a response. It has a few theories about what might be happening, though…
One possibility the source muses over is that quietly producing flawed code is a subtler sabotage technique than outright refusal, blunting the efforts of perceived foes. Flawed code could also present a wider attack surface for subsequent hacking.
Another possibility is that, as the most secure code found during testing was for projects destined for American clients, DeepSeek is trying harder to penetrate this market.
The source also ponders whether code quality varies with the target market because of the regional material available for training; there are presumably far more relevant training resources for coders working in the U.S. than in Tibet, for example.
Last but not least, perhaps DeepSeek is working on its own initiative to supply more error-prone code to entities and regions governed by what it has learned are ‘rebels.’ All of these are mere hypotheticals, but the AI outfit is not without attachments to Beijing. In August, it was reported that DeepSeek switched to training its models on Huawei hardware instead of Nvidia at the behest of China, leading to delays caused by hardware failures.

Mark Tyson is a news editor at Tom's Hardware. He enjoys covering the full breadth of PC tech; from business and semiconductor design to products approaching the edge of reason.