Replying to Tim Bouma

Didn’t see that one coming…

—————-

Beijing Worries AI Threatens Party Rule

BY STU WOO

The Wall Street Journal

Dec 26, 2025

China is enforcing strict guidelines to make sure chatbots don’t misbehave

Concerned that artificial intelligence could threaten Communist Party rule, Beijing is taking extraordinary steps to keep it under control.

Although China’s government sees AI as crucial to the country’s economic and military future, regulations and recent purges of online content show it also fears that AI could destabilize society. Chatbots pose a particular problem: Their ability to think for themselves could generate responses that spur people to question party rule.

In November, Beijing formalized rules it has been working on with AI companies to ensure their chatbots are trained on data filtered for politically sensitive content, and that they can pass an ideological test before going public. All AI-generated texts, videos and images must be explicitly labeled and traceable, making it easier to track and punish anyone spreading undesirable content.

Authorities recently said they removed 960,000 pieces of what they regarded as illegal or harmful AI-generated content during a three-month enforcement campaign. They have classified AI as a major potential threat, adding it alongside earthquakes and epidemics in the country's National Emergency Response Plan.

Chinese authorities don’t want to regulate too much, people familiar with the government’s thinking said. Doing so could extinguish innovation and condemn China to second-tier status in the global AI race behind the U.S., which is taking a more hands-off approach toward policing AI.

But Beijing also can’t afford to let AI run amok. Chinese leader Xi Jinping said earlier this year that AI brought “unprecedented risks,” according to state media.

There are signs that China is, for now, finding a way to thread the needle.

Chinese models are scoring well in international rankings, both overall and in specific areas such as computer coding, even as they censor responses about the Tiananmen Square massacre, human rights and other sensitive topics. Major American AI models are mainly unavailable in China.

It could become harder for DeepSeek and other Chinese models to keep up with U.S. models as AI systems become more sophisticated.

Researchers outside China who have reviewed both Chinese and American models also say that China’s regulatory approach has some benefits: Its chatbots are often safer by some metrics, with less violence and pornography, and they are less likely to steer people toward self-harm.

“The Communist Party’s top priority has always been regulating political content, but there are people in the system who deeply care about the other social impacts of AI, especially on children,” said Matt Sheehan, who studies Chinese AI at the Carnegie Endowment for International Peace.

But he added that recent testing shows that compared with American chatbots, Chinese ones queried in English can also be easier to “jailbreak”—the process by which users bypass filters using tricks, such as asking AI how to assemble a bomb for an action-movie scene.

“A motivated user can still use tricks to get dangerous information out of them,” he said.

When AI systems train on content from the Chinese internet, it is already scrubbed as part of China’s so-called Great Firewall, the system Beijing set up years ago to block online content it finds objectionable. But to remain globally competitive, Chinese companies also incorporate materials from foreign websites, such as Wikipedia, that address taboos such as the Tiananmen Square massacre.

Developers of ChatGLM, a top Chinese model, say in a research paper that companies sometimes deal with this issue by filtering sensitive keywords and webpages from a pre-defined blacklist.

But when American researchers downloaded and ran Chinese models on their own computers in the U.S., much of the censorship vanished. Their conclusion: While some censorship is baked into Chinese AI models’ brains, much of the censorship happens later, after the models are trained.

Chinese government agencies overseeing AI didn’t respond to requests for comment.

American AI companies also regulate content to try to limit the spread of violent or other inappropriate material, in part to avoid lawsuits and bad publicity.

But Beijing’s efforts—at least for models operating inside China—typically go much further, researchers say. They reflect the country’s longstanding efforts to control public discourse.

Shared via PressReader

lol this is hilarious.

Especially the line saying that what concerns the CCP is that “chatbots can think for themselves.”

It just shows how they perceive the average sheep/NPC.

I would be insulted if people thought a chatbot could think more critically than I can… no way that’s the case, since all I do is think (sometimes way too deeply).
