Anthropic blocks Chinese-controlled firms from Claude AI — cites 'legal, regulatory, and security risks'

China is known for its draconian control over its local Internet

Anthropic has updated its terms of service to block access to its Claude AI models for any company that’s majority-owned or controlled by Chinese entities, regardless of where those companies are based.

The company says this decision is about “legal, regulatory, and security risks” and ensuring that “authoritarian” regimes do not have access to its cutting-edge models.

All Claude models are affected

The decision covers all Claude models, including Claude 3.5 Sonnet, and all developer-facing tools. It also includes subsidiaries and joint ventures that fall under Chinese ownership. In practice, this means firms like ByteDance, Tencent, and Alibaba, as well as any portfolio companies or foreign-incorporated divisions, are now cut off.

Anthropic has acknowledged that this will impact revenue in the “low hundreds of millions of dollars,” but maintains the policy is necessary to protect against the misuse of U.S. AI technology in sensitive or strategic contexts.

This isn’t the first time that Chinese users have been blocked from U.S.-developed models, but it is the first instance of a provider pre-emptively cutting off access based on corporate ownership rather than geography or specific use cases.

Model migration begins

Within hours of the block coming to light, Chinese AI startup Zhipu released a migration toolkit aimed at Claude users. It reportedly offers plug-and-play switching to its GLM-4.5 API, alongside a developer package that costs a fraction of Claude’s pricing. According to Zhipu, the package includes 20 million free tokens and throughput three times higher than Claude’s base tier. The company has also promised support for large context windows and compatibility with existing Claude workflows.
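
In practice, a switch like this usually comes down to changing an endpoint and a model name. As a minimal sketch, assuming Zhipu exposes an OpenAI-compatible chat endpoint (the base URL, API key handling, and model identifier below are illustrative placeholders rather than values taken from Zhipu's documentation):

```python
# Minimal sketch of pointing an existing OpenAI-style client at a GLM endpoint.
# The base_url and model name are assumptions for illustration only; consult
# Zhipu's migration toolkit for the actual values.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_ZHIPU_API_KEY",                      # issued by Zhipu, not Anthropic
    base_url="https://open.bigmodel.cn/api/paas/v4/",  # assumed OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="glm-4.5",                                   # assumed model identifier
    messages=[{"role": "user", "content": "Summarize this support ticket."}],
)

print(response.choices[0].message.content)
```

Teams that coded directly against Anthropic's own SDK rather than an OpenAI-compatible layer would still need to rework their client code, which is precisely the gap Zhipu's toolkit claims to close.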

Last year, Alibaba Cloud responded in a similar way after OpenAI restricted API access for developers in China, launching a migration program that encouraged developers to switch to its Qwen-Plus model with free tokens and pricing pitched as competitive with ChatGPT.

But the stakes are higher now. Many enterprise users are already building on Claude for tasks like customer service and internal code generation. Those affected by the restrictions now face a choice: rebuild around local models or seek exemptions through multicloud setups. Anthropic has not yet clarified how it will enforce the policy around global cloud resellers.
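
For teams weighing that choice, one common hedge against this kind of policy risk is a thin provider-abstraction layer, so the model behind an internal tool can be swapped by configuration rather than by rewrite. A minimal sketch, assuming both backends speak an OpenAI-compatible chat API (the URLs, model names, and environment variables are hypothetical):

```python
# Illustrative provider-switching layer: route the same chat call to whichever
# backend an environment variable selects. Endpoints and model names are
# placeholders, not vendor documentation.
import os
from openai import OpenAI

PROVIDERS = {
    "primary": {  # e.g. a US-hosted model, reachable while terms allow
        "base_url": "https://api.primary-provider.example/v1",
        "model": "primary-model",
    },
    "local": {    # e.g. a self-hosted or domestic alternative
        "base_url": "http://localhost:8000/v1",
        "model": "local-model",
    },
}

def chat(prompt: str) -> str:
    """Send one chat turn to the provider selected by LLM_PROVIDER."""
    cfg = PROVIDERS[os.environ.get("LLM_PROVIDER", "local")]
    client = OpenAI(
        api_key=os.environ.get("LLM_API_KEY", "none"),
        base_url=cfg["base_url"],
    )
    resp = client.chat.completions.create(
        model=cfg["model"],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    print(chat("Draft a reply to this customer complaint."))
```

The design choice here is simply that the application never hard-codes a vendor: if a provider becomes unavailable for contractual rather than technical reasons, switching is a configuration change instead of a rebuild.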


Luke James
Contributor

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.

  • hotaru251
    Anthropic says that it wants to ensure that “authoritarian” and “adversarial” regimes cannot access its models.

    I wanna say something but i know mods would say its political :|
  • Amdlova
    hotaru251 said:
    I wanna say something but i know mods would say its political :|
    They're watching from the sky...
    In today's world, keeping your thoughts to yourself can save you from headaches.
  • johnnycanadian
    I'll say it, then: when is Anthropic going to cut off the USA?