AI Risk Barometer: How Experts See AGI Threats in a US$375 Billion AI World


A new initiative examining how national security professionals perceive the risks of advanced artificial intelligence aims to surface differences in priorities between policymakers and technical researchers, underscoring the need for more aligned governance strategies as AI systems grow more capable.

The Institute for Security and Technology (IST), a San Francisco Bay Area–based think tank focused on emerging technologies and national security, announced the launch of its AI Risk Barometer project on October 14. Developed in partnership with the Future of Life Institute (FLI), a U.S.-based nonprofit dedicated to ensuring that transformative technologies benefit society, the Barometer aims to provide a structured assessment of expert perceptions surrounding advanced AI, including artificial general intelligence (AGI) and artificial superintelligence (ASI).

The initiative comes amid rapid advances in frontier AI models and intense geopolitical competition, particularly between the United States and China, that is reshaping defence planning, cyber strategy, and global technology policy. Without a more unified understanding of short- and long-term AI risks, experts warn that governments may struggle to anticipate or manage emerging challenges.

A Structured Approach Inspired by Historical Precedents

IST notes that the Barometer draws methodological inspiration from the Manhattan Project’s “Compton Constant”, a 1940s tool designed to quantify catastrophic nuclear risks by surveying scientists and military leaders. In a similar spirit, the Barometer will gather input from national security professionals, technical specialists, and policymakers to better understand how different communities evaluate AI risk pathways.

While IST has not yet released detailed quantitative findings, the project intends to track how respondents assess timelines, risk scenarios, governance measures, and strategic vulnerabilities. Survey responses will be collected confidentially to encourage candour, particularly on sensitive defence- and security-related questions.

This work is designed to complement existing academic surveys of AI researchers by incorporating perspectives rooted in operational and national security contexts, which often differ from those of laboratory-based experts.
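
IST has not yet described how individual responses will be combined into headline figures. As a purely illustrative sketch, an index of this kind could aggregate each expert's subjective probability estimate for a given failure mode into a single number, for example using a trimmed mean so that a few outlier responses do not dominate the result; the function, parameters, and figures below are hypothetical and do not represent IST's methodology.

```python
# Illustrative sketch only: not IST's published methodology.
# One simple way a "risk barometer" could aggregate confidential expert
# probability estimates into a single headline index, using a trimmed
# mean to blunt the influence of outlier responses.

from statistics import mean

def barometer_index(estimates: list[float], trim: float = 0.1) -> float:
    """Aggregate expert probability estimates (0.0-1.0) into one index.

    estimates: hypothetical survey responses, e.g. each expert's
        subjective probability of a given catastrophic failure mode.
    trim: fraction of extreme responses dropped from each tail.
    """
    ranked = sorted(estimates)
    k = int(len(ranked) * trim)
    core = ranked[k:len(ranked) - k] if k else ranked
    return mean(core)

# Hypothetical responses from ten experts to a single risk question.
responses = [0.01, 0.02, 0.03, 0.05, 0.05, 0.08, 0.10, 0.15, 0.30, 0.90]
print(f"Headline risk index: {barometer_index(responses):.2f}")
```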

Likely Focus Areas: Control, Cybersecurity, and Strategic Misalignment

Although the Barometer has not yet published early results, IST's broader AI security work highlights several themes likely to shape the project's focus:

  • Loss of human control over increasingly capable models, particularly if systems pursue unintended objectives at high speed.

  • Cybersecurity vulnerabilities, including AI-enabled intrusion techniques or automated exploitation tools that surpass current defensive capabilities.

  • Diffusion of autonomous weapons, reflecting concerns about low barriers to entry and accelerating arms-race dynamics.

  • Divergent timelines and priorities between policymakers, who often emphasise immediate risks like cyberattacks or disinformation, and AI researchers, who warn about longer-term alignment and systemic-failure scenarios.

This disconnect, IST notes in its broader policy work, stems from differing incentives, planning horizons, and professional cultures across the national security and AI research communities.

Assessing Governance Tools Under Innovation Pressure

The Barometer also aims to inform how governments evaluate existing AI governance mechanisms, including:

  • U.S. export controls limiting the transfer of advanced AI chips and high-end compute to strategic competitors.

  • International governance proposals inspired by mechanisms such as the Biological Weapons Convention.

  • Safety measures such as independent model evaluations, audits, transparency reporting, and incident disclosure requirements.

IST observes that while these mechanisms can influence how frontier AI is developed and deployed, open-source models and declining compute costs present enforcement challenges. These concerns align with themes highlighted in FLI’s AI Safety Index, a recurring assessment evaluating transparency and safety practices across leading AI developers.

Both organisations stress the need for evidence-based, practical governance that mitigates risks without stifling beneficial innovation. This balance is increasingly important as mission-critical sectors — from defence logistics to Australian mining automation — adopt AI at scale.

Clarity on Definitions, Thresholds, and Risk Appetite

A recurring issue in national and international AI governance is the absence of shared definitions for concepts such as AGI, frontier models, and high-risk systems. IST and FLI emphasise that without clearer thresholds, consistency across regulatory frameworks will remain difficult.

The Barometer also encourages policymakers to explicitly consider risk tolerance, particularly how governments should respond if credible experts assign a non-zero probability to catastrophic AI failure modes, even when uncertainty remains high.

According to IST’s Deputy Director for AI Security Policy Mariami Tkeshelashvili and FLI’s AI and National Security Lead Hamza Chaudhry, the goal is not to predict AI futures with precision but to ensure that those directly responsible for preventing strategic risks have a structured mechanism to express concerns and shape governance priorities.

An Expanding Policy Landscape

The Barometer arrives as governments intensify regulatory activity around frontier AI:

  • California’s SB-53, enacted in 2025, introduces new reporting, transparency, and risk-planning obligations for high-compute frontier AI developers.

  • Recent U.S. National Defense Authorization Act (NDAA) cycles include provisions to strengthen defence-oriented AI governance, testing, and oversight.

  • The UN High-Level Advisory Body on AI released its final governance recommendations in 2024, outlining pathways for international coordination.

  • Global AI capital expenditure is accelerating rapidly: Goldman Sachs projected in 2023 that investment could reach US$200 billion annually by 2025, while more recent forecasts put 2025 spending at around US$375 billion, with multi-trillion-dollar outlays expected over the decade.

With frontier systems advancing quickly, IST and FLI expect the Barometer to serve as a recurring benchmark for how expert sentiment evolves — similar to economic confidence or geopolitical risk indices.

From Reaction to Resilience

As AI capabilities continue to advance, the AI Risk Barometer aims to provide policymakers with a clearer, evidence-based understanding of where experts believe the most pressing risks lie. Rather than amplifying fears, the project is intended to create a more informed and resilient foundation for global AI governance.

If successful, it could help shift governments from reactive responses to proactive risk management, ensuring that advanced AI strengthens, rather than destabilises, national and international security.
