The Chinese Communist Party sees China-made high-performance AI as a threat to its ruling system and is censoring it.



Chinese companies are releasing a steady stream of high-performance AI models, including many open models such as the Qwen and DeepSeek series. The Chinese government regards AI as an important economic and military element of China's future, but it also views AI as a potential threat to the ruling regime.

China Is Worried AI Threatens Party Rule—and Is Trying to Tame It - WSJ

https://www.wsj.com/tech/ai/china-is-worried-ai-threatens-party-ruleand-is-trying-to-tame-it-bfdcda2d

DeepSeek, a Chinese AI development company, released 'DeepSeek-V3.2' in December 2025. DeepSeek's models are notable for delivering high performance despite being open models whose model data is publicly available; in benchmark tests, DeepSeek-V3.2 recorded scores on par with OpenAI's GPT-5 and Google's Gemini 3.

'DeepSeek-V3.2' released: a freely available open model with performance on par with GPT-5 and Gemini 3 - GIGAZINE



Alibaba, another major Chinese IT company, runs an AI research team called Qwen that has released a series of AI models, and it has also created AI-enabled glasses called Quark AI Glasses.

Alibaba announces 'Quark AI Glasses' equipped with its own Qwen AI - GIGAZINE



As described above, China is actively conducting AI research and development and continues to release world-class high-performance models. According to the Wall Street Journal, the Chinese government recognizes the importance of AI but is also aware of the risks it poses to national governance. In November 2025, the government enacted regulations requiring AI models to be trained on data from which politically sensitive topics have been filtered out, and to pass ideological tests before release. The tests require the AI's responses to be judged 'safe' at least 96% of the time across 31 risk categories, including 'inciting the subversion of state power and the overthrow of the socialist system.'
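As a rough illustration of what such a threshold test could look like, here is a minimal Python sketch that checks hypothetical per-category verdicts against a 96% 'safe' threshold. The category names, toy data, and the assumption that the threshold applies per category (rather than overall) are illustrative guesses, not the actual procedure used by Chinese regulators.

```python
# Illustrative sketch only: a hypothetical pass/fail check in the spirit of the
# reported ideological tests (31 risk categories, responses judged "safe" at
# least 96% of the time). Categories, data, and the per-category pass criterion
# are assumptions, not the real regulatory procedure.

SAFE_THRESHOLD = 0.96  # reported minimum share of responses judged "safe"

def category_safe_rate(judgements: list[bool]) -> float:
    """Fraction of responses in one risk category that were judged 'safe'."""
    return sum(judgements) / len(judgements) if judgements else 0.0

def passes_review(results: dict[str, list[bool]]) -> bool:
    """A model 'passes' if every risk category meets the safe threshold.

    `results` maps a risk-category name to a list of per-response verdicts
    (True = judged safe). Whether the 96% figure applies per category or
    overall is unclear from the report; per-category is assumed here.
    """
    return all(category_safe_rate(v) >= SAFE_THRESHOLD for v in results.values())

if __name__ == "__main__":
    # Toy data for two of the 31 reported categories (names are placeholders).
    toy_results = {
        "subversion_of_state_power": [True] * 97 + [False] * 3,      # 97% safe
        "overthrow_of_socialist_system": [True] * 95 + [False] * 5,  # 95% safe
    }
    print("passes review:", passes_review(toy_results))  # False: one category falls below 96%
```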

Even before these rules were enacted, observers had pointed out that Chinese AI models censor their output. For example, when DeepSeek-R1 was released in January 2025, it refused to answer 85% of prompts about topics considered sensitive in China.

DeepSeek-R1 refuses to answer 85% of sensitive questions about China, but its restrictions can be easily circumvented - GIGAZINE



It has also been reported that the model outputs low-quality, vulnerable code at a higher rate when a prompt involves groups or topics that do not align with the Chinese government's wishes.

DeepSeek is more likely to output vulnerable code when it encounters prompts that the Chinese Communist Party considers sensitive - GIGAZINE



The investigation found that this 'censorship' was not behavior the AI acquired naturally during training, but something implemented after training was complete.
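The report does not describe the actual mechanism, but a censorship layer added after training typically takes the form of an output filter wrapped around the model rather than behavior baked into the weights. The Python sketch below illustrates that general idea with a hypothetical keyword-based filter; the function names, keyword list, and refusal message are assumptions for illustration only, not DeepSeek's or any vendor's actual implementation.

```python
# Minimal sketch of a post-training output filter, to illustrate the general
# idea of censorship applied outside the model weights. The keyword list,
# refusal message, and generate() stub are hypothetical placeholders.

BLOCKED_TOPICS = ["example sensitive topic A", "example sensitive topic B"]  # placeholder terms
REFUSAL_MESSAGE = "I can't discuss that topic."

def generate(prompt: str) -> str:
    """Stand-in for a call to the underlying language model."""
    return f"(model answer to: {prompt})"

def filtered_generate(prompt: str) -> str:
    """Screen the prompt and the draft answer, then either refuse or respond.

    Because the filter sits outside the model, the underlying weights stay
    unchanged, which matches the report that censorship was added after training.
    """
    draft = generate(prompt)
    text_to_check = (prompt + " " + draft).lower()
    if any(topic.lower() in text_to_check for topic in BLOCKED_TOPICS):
        return REFUSAL_MESSAGE
    return draft

if __name__ == "__main__":
    print(filtered_generate("Tell me about example sensitive topic A"))  # -> refusal
    print(filtered_generate("What is the capital of France?"))           # -> model answer
```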

The Chinese government appears confident in this approach: because its internet censorship system, the 'Great Firewall' (Golden Shield), functions well, even if an AI were to produce content that runs counter to government policy, that content would be prevented from spreading and would be unlikely to gain traction.

Posted by logc_nt