European Parliament Study Advocates Strict Liability for High-Risk AI Systems

A study commissioned by the European Parliament has recommended establishing a dedicated strict liability regime for high-risk artificial intelligence systems, aiming to address gaps in existing EU rules and ensure effective compensation for harms caused by AI technologies.

The proposal emerges amid ongoing debates over AI governance in the 27-member European Union, where regulators seek to balance innovation with public protection following the adoption of the bloc's landmark AI Act last year.

Background

The study, titled "Artificial Intelligence and Civil Liability – A European Perspective", was authored by Andrea Bertolini, an associate professor of private law at the Sant'Anna School of Advanced Studies in Pisa, Italy, and completed in July 2025 at the request of the Parliament's Committee on Legal Affairs. It evaluates the adequacy of current EU liability frameworks for AI, including the Product Liability Directive of 1985 and its revision adopted on Oct. 23, 2024, as Directive (EU) 2024/2853.

Under the original Product Liability Directive, manufacturers are liable for defective products without proof of fault, but the regime struggles with AI's characteristics, such as autonomy and opacity, which complicate establishing defect or causation. The 2024 revision expands coverage to software and AI systems and introduces measures such as evidence disclosure and rebuttable presumptions to ease claimants' burdens, yet it retains limitations, including a focus on safety rather than performance and the exclusion of damage to the product itself.

This analysis follows the European Commission's decision on Feb. 11, 2025, to withdraw its 2022 proposal for an AI Liability Directive, which had aimed to adapt fault-based rules for AI by facilitating evidence access in claims. The withdrawal, part of the Commission's 2025 work program, stemmed from stalled negotiations and concerns over regulatory overlap, with no immediate agreement anticipated among member states.

The broader context is shaped by the AI Act, Regulation (EU) 2024/1689, adopted in 2024. The act categorizes AI systems by risk level and imposes safety requirements on high-risk systems used in sectors such as education, employment and critical infrastructure management, including electricity and water supply networks where AI supports digital security functions. Without harmonized liability rules, the study warns, national regimes may diverge, as countries such as Germany and Italy consider their own AI-specific rules.

Recommendations

The report proposes a new standalone EU regulation to impose strict liability on operators of high-risk AI systems, defined in line with the AI Act as those posing significant risks to health, safety or fundamental rights. Liability would target a single operator—typically the entity controlling the system and deriving economic benefit, such as the provider or deployer—without requiring proof of negligence.

Covered harms include physical injury, property damage and significant non-material losses, such as data privacy violations, with compensation extending to damage to the AI system itself, unlike under the Product Liability Directive. Suggested caps would limit liability to 2 million euros for harms to life, health or physical integrity, and 1 million euros for other categories. A 30-year limitation period would apply, alongside mandatory insurance to facilitate risk management.

For non-high-risk systems, a fault-based approach is advised, with a presumption of fault to address evidentiary challenges. The framework would integrate with the AI Act's conformity assessments to identify high-risk status in advance, minimizing disputes in court. Examples of high-risk AI include autonomous vehicles, medical diagnostics and algorithmic trading, where failures could have widespread impact, including on the digital security of critical networks.

Analysis

Existing rules fall short because AI's black-box nature makes it difficult to trace faults through supply chains and prove causation, often resulting in under-compensation and inconsistent national application. The withdrawn AI Liability Directive, while easing some procedural hurdles, was critiqued for complexity and limited harmonization, potentially leading to judicial inconsistencies.

The withdrawal exposed divisions: some member states and industry viewed it as reducing regulatory burdens to support growth, while the Parliament warned that a regulatory vacuum could erode trust in AI. Absent EU action, fragmented national laws could complicate cross-border operations, particularly for AI in digital security, where systems detect cyber threats in essential services.

Potential effects include streamlined victim redress for incidents like AI-driven security breaches, but elevated compliance costs for operators, potentially influencing pricing or innovation strategies.

Pros and Cons

Advocates highlight the regime's potential to boost accountability through predictable rules, deter unsafe AI deployments and foster market unity, enhancing public confidence in technologies like AI for digital security. It could encourage ethical practices by internalizing risks via insurance.

Detractors argue the regime risks overdeterrence, with strict liability potentially burdening smaller firms and stifling experimentation, though the proposed caps and mandatory insurance are intended to keep that exposure manageable. Lawmaker Axel Voss criticized the directive's withdrawal as creating a "Wild West" favoring large companies, while civil society called for prompt alternatives to safeguard rights. Tech lobbies praised the pullback for easing regulations.

Future Trends

The study may prompt the Commission to revisit AI liability, possibly proposing a revised regulation by 2026 to avert fragmentation. As AI embeds further in digital security—such as threat prediction in smart grids—demand for cohesive rules could intensify, aligning with global shifts toward risk-proportionate oversight, though EU competitiveness might suffer without unified standards.
