China Unveils AI Safety Governance Framework 2.0 Amid Global Concerns on Speech Control

Image Credit: Rafik Wahba | Unsplash

China has unveiled an updated artificial intelligence safety framework addressing risks such as sudden leaps in AI intelligence and potential loss of human control, even as international critics accuse Beijing of deploying AI to restrict freedom of speech under national security and social stability justifications that they describe as pretexts for censorship.

The Artificial Intelligence Safety Governance Framework 2.0 was released under the guidance of the Cyberspace Administration of China (CAC) and organised by the National Internet Emergency Center (CNCERT/CC), with involvement from the National Technical Committee 260 on Cybersecurity (TC260). It refines risk management strategies while aligning with broader AI regulations that include content controls, and arrives amid claims that China's AI ecosystem builds in mechanisms to maintain social harmony and state authority.

Background and Development

China's AI governance efforts trace back to the early 2010s, building on the Great Firewall's foundational internet controls, which have since been augmented with AI-enhanced systems for automated moderation. The initial AI Safety Governance Framework appeared on September 9, 2024, setting principles for safe AI amid a boom in domestic AI companies, which now exceed 5,000 according to the Ministry of Industry and Information Technology.

Version 2.0, launched on September 15, 2025, at China Cybersecurity Week in Kunming, Yunnan province, incorporates collaborative input from research bodies and industry to adapt to global trends. Parallel developments include the 2017 Cybersecurity Law, which mandates data localisation and content oversight and has reportedly been enforced through AI-driven real-time content moderation in tools from firms like Baidu and Tencent. Recent initiatives, such as a January 14, 2025 proposal by a delegate at Shanghai's political meetings calling for AI to be used to control social media content, underscore the blend of safety and control objectives.

This trajectory reflects China's ambition to lead in AI by 2030, as seen in the Global AI Governance Action Plan from the July 2025 World Artificial Intelligence Conference, which promotes international cooperation on ethical standards.

Key Risks, Measures and Practices

The framework categorises risks into inherent, application, and derivative types, graded by scenario and scale, with responses ranging from low to extremely serious, according to expert summaries. It highlights scenarios such as the emergence of AI self-consciousness through sudden intelligence leaps, leading to unauthorised resource acquisition and threats to national security, alongside concerns in the biotechnology and nuclear fields.

Mitigation includes proposed safeguards such as circuit breakers and human overrides, plus multi-stakeholder monitoring involving developers and regulators. For open-source AI, it urges risk disclosure to curb misuse.
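The framework does not prescribe implementations, but the circuit-breaker idea can be pictured as a wrapper that blocks an AI system's actions once a monitored risk signal crosses a threshold and then waits for a human operator to intervene. The Python sketch below is purely illustrative: the risk scores, thresholds, and operator-reset hook are hypothetical examples, not details drawn from the framework itself.

```python
# Illustrative sketch only: a minimal "circuit breaker with human override"
# gate around an AI system's actions. Thresholds and hooks are hypothetical.

from dataclasses import dataclass


@dataclass
class CircuitBreaker:
    risk_threshold: float = 0.8  # hypothetical cutoff for a monitored risk score
    max_trips: int = 3           # trips allowed before a hard stop
    trips: int = 0
    halted: bool = False

    def check(self, risk_score: float) -> bool:
        """Return True if the action may proceed automatically."""
        if self.halted:
            return False
        if risk_score >= self.risk_threshold:
            self.trips += 1
            if self.trips >= self.max_trips:
                self.halted = True  # hard stop until a human intervenes
            return False
        return True

    def human_reset(self, operator_id: str) -> None:
        """Human override: an operator explicitly re-enables the system."""
        print(f"Operator {operator_id} reset the breaker.")
        self.trips = 0
        self.halted = False


def run_action(breaker: CircuitBreaker, risk_score: float) -> str:
    # Every action is gated on the breaker before execution.
    if breaker.check(risk_score):
        return "action executed"
    return "action blocked; escalated for human review"


if __name__ == "__main__":
    breaker = CircuitBreaker()
    for score in (0.2, 0.9, 0.95, 0.99):  # simulated monitored risk scores
        print(score, "->", run_action(breaker, score))
    breaker.human_reset("op-42")  # human override restores normal operation
```

In this toy version, repeated high-risk readings latch the system into a halted state that only an explicit human action can clear, mirroring the human-in-the-loop emphasis attributed to the framework.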

Critics, including groups like Reporters Without Borders, point to AI's reported role in censorship, alleging that models trained on state media suppress dissent on topics such as Tiananmen Square or Uyghur issues. The alleged practices extend to mass surveillance in Xinjiang and disinformation campaigns targeting Taiwan, with generative AI used for efficient content flagging. Beijing defends these measures as countermeasures against false information and threats to stability.

Reasons for the Approach

Rapid AI advancements prompted the framework's update to fill gaps in risk handling, informed by real-world biases and global incidents. Authorities frame it within the 14th Five-Year Plan's goals for a secure digital economy.

Broader motivations include preserving social order, with analysts viewing AI as a tool to preempt unrest of the kind seen in past global events. This state-centric model integrates safety with content alignment to socialist values, which detractors say enables repression through potentially biased training data.

Impacts on AI, Security and Rights

The framework bolsters defences against AI threats to infrastructure and data sovereignty, providing oversight guidelines that mitigate cyber risks and support economic resilience. It also offers regulatory clarity that can foster innovation, though potentially at the cost of limiting high-risk endeavours.

On rights, critics contend these measures restrict expression, contributing to China's press freedom ranking of 178th out of 180 in 2025. AI reportedly enables more precise repression, from penalising dissent via social credit systems to influencing global discourse, heightening international tensions over tech standards.

Future Trends and Outlook

Adaptive regulations are anticipated, with updates for emerging technologies such as multimodal AI, alongside TC260's push for new risk standards. Internationally, China advocates consensus on high-risk areas through forums like BRICS.

As AI evolves, experts foresee increasingly sophisticated controls, prompting calls for ethical frameworks that balance safety with freedoms by the late 2020s. This positions Beijing to shape global norms, though it may spur Western countermeasures on exports and on human rights in tech governance.

