AI Deepfakes Hit 38 Countries: How Synthetic Media Is Shaping 2025 Elections

Image Credit: Louis Hansel | Unsplash

Generative artificial intelligence has rapidly become a defining factor in elections worldwide, emerging as both a tool for political communication and a vector for deception. Research by Surfshark finds that 38 countries have already experienced election-related deepfake incidents, affecting an estimated 3.8 billion people. These cases range from synthetic videos targeting high-profile leaders to AI-generated audio designed to mislead voters during critical moments.

While no single deepfake has been proven to overturn a national result, analysts warn that the cumulative effect is steadily undermining public confidence. As generative tools become cheaper and more accessible, the sheer volume of synthetic media poses challenges for election authorities, fact-checkers and social platforms.

Foundations in Tech's Quiet Revolution

The rise of politically impactful deepfakes traces back to the deployment of powerful generative models in the early 2020s. Tools once requiring specialised skills can now produce highly realistic content in minutes, allowing lone actors, political operatives and foreign influence networks to scale up manipulation efforts.

A pivotal early incident occurred during Slovakia’s 2023 parliamentary election, when AI-generated audio recordings falsely portrayed opposition leader Michal Šimečka discussing vote manipulation. The clips circulated widely on social media during a sensitive blackout period, demonstrating how last-minute deepfakes can shape voter perceptions before debunking efforts catch up.

In the 2024 US election cycle, thousands of New Hampshire voters received robocalls that mimicked President Joe Biden’s voice, telling Democrats to stay home. Investigators later confirmed the audio was created with a commercial voice-cloning service, prompting US regulators to propose substantial fines and to declare AI-generated voices in robocalls unlawful without the recipient’s consent.

Similar cases have appeared in countries such as Argentina, where alleged AI-generated audio has been used to smear political advisers or cloud genuine controversies. As Stanford’s AI Index reports, the cost of producing synthetic media has fallen dramatically, enabling a wide range of actors to deploy convincing disinformation with minimal expertise.

US intelligence assessments consistently identify Russia, China and Iran as leading external contributors to AI-assisted disinformation campaigns, blending synthetic content with long-established influence strategies. Their operations increasingly leverage generative models to tailor messages in multiple languages and exploit polarised political environments.

2025's Testing Grounds: From Local Manipulation to National Turmoil

Elections held across 2025 revealed how deeply AI manipulation has permeated democratic processes.

In Australia, the 3 May 2025 federal election proceeded without major breaches, according to the Australian Electoral Commission. Still, authorities acknowledged the heightened risk environment and expanded partnerships with technology companies such as Microsoft to strengthen monitoring systems capable of spotting potentially deceptive AI-generated content before it could spread widely.

A more consequential case emerged in Canada following the 2025 federal election. A viral deepfake video circulated online depicting newly elected Prime Minister Mark Carney announcing sweeping automotive restrictions. The clip used real footage paired with synthetic audio and spread on TikTok and X, accumulating significant engagement before it was flagged and debunked by fact-checkers. Analysts noted the video built on earlier fakes involving Carney and formed part of a broader pattern of AI-driven misinformation targeting Canadian politics.

Across Europe, AI-driven influence campaigns continued to evolve. In Germany, Russia-linked networks used AI-generated websites and visuals to amplify divisive narratives and bolster far-right messaging. Meanwhile, Ireland’s 2025 presidential election saw a fabricated video, styled as a national broadcaster’s news report, falsely claim that candidate Catherine Connolly had withdrawn from the race; platforms removed the clip after fact-checkers identified it as synthetic.

A landmark institutional response came from Romania, where the Constitutional Court annulled the 2024 presidential election in December over alleged Russian digital interference and improper campaign financing. A rerun held in May 2025 resulted in a victory for centrist candidate Nicușor Dan over nationalist George Simion. While online manipulation played a role in the political crisis, publicly available evidence has not detailed specific deepfake incidents on the scale initially speculated. The case nonetheless shows how digital interference, potentially including AI elements, can trigger extraordinary legal remedies.

Beyond Europe, countries across Asia and Latin America reported growing use of AI-generated harassment targeting female politicians and journalists. In India, Indonesia, and Mexico, fact-checking groups documented synthetic intimate images and manipulated videos circulated via WhatsApp and other platforms, reflecting how generative tools can reinforce gender-based intimidation during election periods.

Echoes in Trust and Turnout

Although quantifying turnout effects remains difficult, surveys consistently show rising public uncertainty about distinguishing authentic content from synthetic media. The Reuters Institute Digital News Report finds that in many countries around half of respondents worry about their ability to identify misinformation online, with deepfakes frequently cited as a major factor.

Fact-checking organisations have reported a noticeable increase in public inquiries about whether viral political clips are real or AI-generated, particularly during tight races or major political announcements.

Women and minority candidates face distinct challenges. NGOs and academic researchers describe a chilling effect from sexualised or abusive deepfakes, discouraging public participation even though the scale of this impact is not yet quantifiable with precision.

At the same time, AI has shown constructive potential. In India, for example, Prime Minister Narendra Modi’s speeches have been translated and dubbed into multiple regional languages using government-supported tools such as Bhashini, enabling broader outreach to multilingual communities. Civil society groups in other nations are exploring similar methods to improve accessibility in diverse electorates.

Defences Taking Shape Amid Ongoing Challenges

Governments, platforms and election authorities have introduced countermeasures, though implementation remains uneven.

At the 2024 Munich Security Conference, major technology companies, including OpenAI, Google, Meta, Microsoft and TikTok, signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, pledging steps such as improving detection tools, restricting impersonation features and collaborating with election bodies during major contests.

The EU’s AI Act introduces new transparency rules, including requirements for labelling certain forms of synthetic media. These obligations will roll out in phases, with further guidance expected from EU regulators.
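To make the idea concrete, the minimal Python sketch below shows one way a publisher could attach a machine-readable "AI-generated" disclosure to an image file. It is an illustration only: the AI Act does not prescribe this format, the metadata keys and values are hypothetical, and it assumes the Pillow imaging library is installed.

```python
# Illustrative sketch only: embedding a synthetic-media disclosure as PNG
# text metadata. The EU AI Act does not prescribe this exact format; the
# key names below are hypothetical. Requires Pillow (pip install Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_synthetic_image(src_path: str, dst_path: str) -> None:
    """Copy an image, embedding an 'AI-generated' label as PNG text chunks."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")           # hypothetical key
    meta.add_text("generator", "example-model-v1")  # hypothetical value
    img.save(dst_path, pnginfo=meta)                # dst must be a .png


def read_disclosure(path: str) -> dict:
    """Return any PNG text metadata found in the image, e.g. the label above."""
    return dict(Image.open(path).text)


if __name__ == "__main__":
    # File names are placeholders for this example.
    label_synthetic_image("campaign_ad.png", "campaign_ad_labeled.png")
    print(read_disclosure("campaign_ad_labeled.png"))
```

A limitation worth noting: metadata like this survives only as long as platforms preserve it, which is one reason regulators and standards bodies are also exploring cryptographically signed provenance rather than plain labels.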

In the United States, lawmakers have introduced bills aimed at requiring disclosures on AI-generated political ads and expanding remedies for victims of deepfake impersonation, though most proposals remain in early legislative stages.

Election commissions worldwide are also adapting. Some have begun issuing voter guidance on identifying synthetic media, while others collaborate with tech firms to accelerate takedown or verification processes around election periods. However, existing detection systems still struggle with subtle manipulations or low-resolution uploads, and watermarking or provenance solutions are far from universal.
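The fragility described above is easy to demonstrate. The hedged sketch below implements the simplest possible provenance check, comparing a file's SHA-256 digest against a hash the original publisher is assumed to have released; real schemes such as C2PA sign far richer manifests, and the file name and digest here are placeholders.

```python
# A minimal sketch of hash-based provenance checking, for illustration only.
# Real provenance standards (e.g. C2PA) sign structured manifests; this toy
# check only confirms a file is byte-identical to a published original.
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks and return its hex SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_published_hash(path: str, published_hex: str) -> bool:
    """True only if the file exactly matches the publisher's released hash.

    This illustrates the weakness the article points to: any re-encode,
    resize or platform re-compression changes the bytes, so a legitimate
    clip can fail the check while a fake simply ships with no hash at all.
    """
    return sha256_of(path) == published_hex.lower()


if __name__ == "__main__":
    # 'official_clip.mp4' and the digest are placeholders for this example.
    print(matches_published_hash("official_clip.mp4", "ab12cd34..."))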

Media-literacy campaigns, ranging from public broadcasters’ explainer videos to NGO-led workshops, aim to help citizens verify content before sharing. Yet these initiatives frequently reach younger, urban digital users more than older or disconnected communities.

Trajectories: Guardrails or Escalation?

Looking toward 2026, analysts expect AI’s electoral influence to deepen. Three broad trends stand out:

  • More precise targeting
    Generative tools will enhance micro-targeting by enabling political actors to tailor messages by language group, region or demographic.

  • Greater blurring between legitimate and deceptive uses
    AI-assisted translations, avatars and campaign visuals will coexist with fabricated endorsements and synthetic scandals, complicating regulatory boundaries.

  • Closer integration with broader cyber operations
    Influence campaigns are likely to merge generative AI with hacking, data leaks and coordinated amplification networks.

Experts widely agree that bans on AI use in politics are impractical. Instead, they advocate shared provenance standards, clearer disclosure rules for campaigns and stronger cross-border cooperation to identify and contain synthetic threats.

The core question is no longer whether AI will shape elections, but whether democracies can develop the guardrails needed to harness its benefits while limiting its capacity for manipulation and division.
