Australia’s First National AI Plan: AUD 29.9M for New Safety Institute
Released on 2 December 2025, the National AI Plan shifts policy focus to sector-led regulation and establishes a new advisory AI Safety Institute. The strategy aims to balance economic opportunity with safety by relying on existing legal frameworks rather than imposing mandatory national guardrails.
AI in Democracy: 150,000 Votes in Tokyo and 55% Legal Adoption in Brazil
In their new book Rewiring Democracy, experts examine how AI is reshaping global politics, from Brazil’s judicial systems to an AI avatar engaging voters in Japan. The analysis highlights efficiency gains alongside significant risks of bias found in unregulated German election tools.
As Global AI Usage Hits 800M, Hong Kong Faces Access Restrictions
Despite major 2025 upgrades from OpenAI and Google, Hong Kong users face restricted access to global chatbots due to regulatory friction and corporate geoblocking. This report examines the impact on local businesses and the city’s strategic pivot toward developing sovereign AI models like HKChat.
10,000 Users Rate Political Bias Across 24 Major AI Models
A review of research published between mid-2024 and late 2025 highlights persistent political and social biases in large language models. Studies from Stanford and other institutions show how training data and safety filters shape model outputs, potentially influencing public opinion and automated decision-making in sectors like recruitment.
How ISO Standards Are Shaping the Potential $15.7T AI Economy in 2025
International standards are rapidly evolving to support the safe deployment of artificial intelligence. This report details the 2025 expansion of the ISO/IEC ecosystem, including new guidelines for impact assessment, auditing, and environmental sustainability, and examines the growing adoption of the ISO/IEC 42001 management system by global industry leaders.
AI Risk Barometer: How Experts See AGI Threats in a US$375 Billion AI World
The AI Risk Barometer, a joint project by the Institute for Security and Technology and the Future of Life Institute, surveys national security and AI experts on emerging risks from advanced systems such as AGI. By tracking views across policy, defence and technical communities, it aims to clarify where experts see the greatest threats and how current governance tools measure up.
AI Deepfakes Hit 38 Countries: How Synthetic Media Is Shaping 2025 Elections
Election-related deepfakes have become a routine challenge worldwide, with 38 countries affected and 33 of 87 recent voting nations reporting synthetic media in their campaigns. From Canada’s viral deepfake of Prime Minister Mark Carney to Romania’s rerun election, this report explores how AI-generated content is shaping voter perceptions, straining public trust, and prompting governments and tech companies to strengthen defences.
Deloitte Repays AU$97,587 Final Tranche After AI Errors in AU$440k Welfare Review
Deloitte has refunded the final AU$97,587 instalment of an AU$440,000 contract to the Australian Government after fabricated citations and a misquoted court judgment were identified in an AI-assisted welfare compliance review. The revised report removed the errors but retained its original findings and recommendations. The incident has heightened scrutiny of generative AI use in public policy work and strengthened calls for clearer verification and disclosure requirements.
California Enacts First U.S. Frontier AI Transparency Law: SB 53 Signed by Newsom
California has introduced the nation’s first legal framework focused on transparency in the development of frontier artificial intelligence systems. SB 53 requires major AI developers to disclose safety practices, report significant incidents, and protect whistleblowers, with enforcement led by the state Attorney General. The legislation responds to expert guidance on emerging AI risks while aiming to support ongoing innovation.
Generative AI Threatens Africa’s 2025 Elections: Experts Warn of Propaganda Surge
With elections looming across Africa in 2025, the surge of generative artificial intelligence is raising alarms among analysts. These tools, capable of producing realistic text, images, audio and video, are already being used to manufacture endorsements, stir unrest and bypass traditional moderation efforts. Experts call for continental cooperation and tailored detection tools to protect democratic integrity.
China Unveils AI Safety Governance Framework 2.0 Amid Global Concerns on Speech Control
China has released the Artificial Intelligence Safety Governance Framework 2.0, expanding its approach to managing AI risks, ethics, and national security. The update seeks to ensure AI systems remain safe and controllable while aligning with broader regulatory goals. Observers note the framework’s balance between innovation, oversight, and information control.
Trump’s AI Order Sets 120 Days for Agencies to Shift to “Ideologically Neutral” Models
President Trump has issued Executive Order 14319, requiring federal agencies to procure AI systems that follow principles of truth-seeking and ideological neutrality. The directive sets new procurement standards, calls for OMB guidance within 120 days, and forms part of a broader AI strategy alongside companion orders and an AI Action Plan.
China’s AI Censorship and Propaganda Tools Spark Global Governance Concerns
China’s rapid development of AI-driven censorship and propaganda systems is reshaping online control and global influence. From leaked datasets showing large-scale training of content filters to state-aligned biases in major Chinese LLMs, analysts warn that Beijing’s export of these technologies could embed authoritarian norms internationally, challenging efforts to safeguard human rights and democratic discourse.
U.S. Congressional Hearing Probes AI Risks Amid U.S.-China Rivalry
The U.S. House Select Committee on Strategic Competition with the Chinese Communist Party held a June 25 hearing on artificial intelligence, featuring testimony from Anthropic's Jack Clark, CSBA's Thomas Mahnken, and AI Policy Network's Mark Beall. Witnesses highlighted the double-edged potential of AI, national security concerns, and the need for export controls, federal testing, and legislative action to address risks of AGI and superintelligence.
Trump Administration Lifts Nvidia H20 Export Ban to China, Reshaping AI Trade Dynamics
The U.S. government has approved export licenses for Nvidia’s H20 chips to China, reversing an earlier ban introduced in April 2025. The decision follows high-level meetings, trade negotiations, and pressure from industry leaders. While it may support U.S. tech competitiveness, concerns remain over national security and China’s AI advancement.
China’s AI Censorship Goes Global: How LLMs Are Reshaping Digital Control and Governance
China is rapidly advancing its use of large language models (LLMs) to power next-generation digital censorship tools, aiming to control online narratives both domestically and abroad. This evolution from manual moderation to AI-driven surveillance is reshaping internet governance, sparking global debates about free expression, technology ethics, and state influence.
EU Releases General-Purpose AI Code of Practice Ahead of 2025 Compliance Deadline
The European Commission unveiled its General-Purpose AI Code of Practice on July 10, 2025, offering voluntary guidance for AI developers on transparency, copyright, and safety. This initiative aims to support compliance with the EU AI Act, which enforces new obligations on GPAI systems starting August 2, 2025.
AI Regulations May Undermine Defense Capabilities, Warns Atlantic Council Report
A new Atlantic Council report warns that civil artificial intelligence regulations, even where they formally exclude military applications, may still have unintended consequences for defense sectors. The study calls for stronger defense community involvement in shaping AI policy to mitigate risks and maintain strategic advantage.
Anthropic Launches Claude Gov AI for U.S. Defense, Rivals OpenAI in Government AI Market
Anthropic has introduced Claude Gov, a secure AI model suite developed for U.S. defense and intelligence use. Launched on June 5, 2025, Claude Gov is tailored to operate in classified environments, supporting national security tasks such as threat detection and strategic planning. The release positions Anthropic as a key player in the expanding government AI landscape, alongside competitors like OpenAI’s ChatGPT Gov.
UNESCO-UNDP Report Explores AI's Impact on Freedom of Expression in Elections
A new report by UNESCO and UNDP analyzes how artificial intelligence is influencing electoral freedom of expression worldwide. It explores both the opportunities AI presents for civic engagement and the challenges it poses to information integrity, offering policy recommendations for inclusive, ethical governance.
