China’s AI Censorship and Propaganda Tools Spark Global Governance Concerns

Image Credit: Jacky Lee

Recent reports detail China's deployment of large language models (LLMs) to enhance online censorship, embed state-aligned biases in AI systems, and shape international narratives. Its governance frameworks, meanwhile, prioritize state control over individual rights protections.

AI-Enhanced Censorship Systems

A March 2025 investigation found an unsecured, Baidu-hosted Elasticsearch database containing roughly 133,000 labeled items (through December 2024) used for LLM-assisted ‘public opinion’ control. Entries prioritized Taiwan and military topics for immediate suppression and flagged indirect dissent, such as the idiom ‘when the tree falls, the monkeys scatter’. This marks an evolution from traditional keyword filters and manual oversight toward AI-driven efficiency in controlling public discourse. The Chinese government, through entities such as the Cyberspace Administration of China under President Xi Jinping, views the internet as a frontline for public opinion management. Freedom House continues to rate China among the worst countries for internet freedom, with AI enhancing both censorship and surveillance capabilities.
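The shift described above can be illustrated with a minimal sketch. The keyword list, labels, and the toy heuristic standing in for an LLM classifier below are all hypothetical, invented for illustration; none are taken from the leaked database. The point is only the structural difference: a keyword filter catches exact matches, while an LLM-style labeler can also flag indirect or idiomatic dissent.

```python
# Illustrative sketch only: contrasts a traditional keyword filter with
# LLM-assisted labeling. Keywords, labels, and the mock "model" are
# hypothetical stand-ins, not content from the leaked dataset.

KEYWORDS = {"example-banned-term"}  # hypothetical blocklist entry

def keyword_filter(text: str) -> bool:
    """Classic approach: flag only literal keyword matches."""
    lowered = text.lower()
    return any(k in lowered for k in KEYWORDS)

def llm_label(text: str) -> str:
    """Stand-in for an LLM classifier. A real system would call a model;
    here a toy heuristic plays that role, catching an idiom that a bare
    keyword list would miss."""
    lowered = text.lower()
    indirect_markers = ("tree falls", "monkeys scatter")  # idiom cited in the report
    if any(m in lowered for m in indirect_markers):
        return "flag: indirect dissent"
    if keyword_filter(text):
        return "flag: keyword match"
    return "allow"
```

In this sketch, ‘when the tree falls, the monkeys scatter’ passes the keyword filter untouched but is flagged by the classifier stage, which is the efficiency gain the reporting attributes to LLM-assisted systems.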

Biases Embedded in Chinese LLMs

Independent tests and reporting show that Chinese LLMs commonly refuse or sanitize politically sensitive topics, consistent with regulations mandating alignment with ‘core socialist values’ and with official model vetting and filing requirements. DeepSeek, for instance, uses an application-level safety layer that blocks certain content. The CSIS Critical Foreign Policy Decisions (CFPD) benchmark (~400 scenarios) finds that some models, including Qwen2 and DeepSeek, skew more escalatory than others in crisis-decision prompts, with preferences varying by country dyad.

Global Export of AI Technologies

China is promoting its AI systems internationally through state-backed initiatives, open-source strategies, and the Belt and Road Initiative, which spans about 150 countries (figures vary by source and date). This includes exporting surveillance technologies to authoritarian regimes, potentially embedding CCP ideologies into global digital infrastructure. Platform takedowns and security research document PRC-linked networks using LLMs to generate, translate, and monitor content; separate work shows AI-generated ‘news anchors’ appearing in pro-China videos since 2023, with limited reach so far but an established technique.

The global proliferation of models like DeepSeek raises the risk of subtly spreading political biases: users worldwide adopt them for everyday tasks such as coding, even as the models carry embedded preferences, including a tilt toward hawkish foreign-policy advice. Analysts warn this could shape international relations by normalizing state-controlled narratives in AI outputs.

Governance Frameworks and Human Rights Considerations

China's AI governance, shaped by initiatives like the 2023 Global AI Governance Initiative (GAIGI), emphasizes state sovereignty, mutual respect, and capacity-building for the Global South, including co-hosted UN events with countries like Zambia. In December 2024, China co-chaired with Zambia the UN Group of Friends for International Cooperation on AI Capacity-Building, with related events continuing into 2025. Regulations focus on aligning AI with CCP directives, such as barring content that undermines national unity, rather than on protecting individual rights. This contrasts with Western models, which stress protections against discrimination and privacy violations.

Critics highlight how this approach normalizes surveillance, as seen in AI applications for monitoring minorities like Uyghurs, potentially exacerbating human rights concerns. China's participation in international standard-setting aims to counter perceived Western dominance, but it risks embedding authoritarian norms globally.

Future Trends and Implications

As China pursues its 2020-2025 plan to invest approximately $1.4 trillion in digital infrastructure and registers more than 490 generative AI models domestically as of July 2025, AI censorship is expected to become more granular and efficient, integrating into broader surveillance ecosystems. Globally, this may intensify competition in AI standards, with China positioning itself as a leader in the Global South through "win-win" cooperation.

Analysts recommend ongoing evaluations and fine-tuning to mitigate biases, emphasizing the need for balanced international governance to address risks to free expression and democratic processes. Without consensus, divergent approaches could deepen divides in global AI ethics and security.

TheDayAfterAI News

