AI Chatbots Prove More Persuasive Than Humans in Online Debates, Study Finds

Image Credit: Ant Rozetsky | Unsplash
A recent study published in Nature Human Behaviour on May 19, 2025, finds that artificial intelligence chatbots, specifically OpenAI’s GPT-4, can be more persuasive than humans in online debates—particularly when given access to basic demographic information about their opponents. The research, which involved 900 U.S. participants debating various sociopolitical topics, underscores growing concerns about AI’s ability to shape public opinion.
Study Design and Methodology
Led by Francesco Salvi at the Swiss Federal Institute of Technology in Lausanne (EPFL), the study enlisted 900 U.S.-based participants for 10-minute online debates on issues such as the death penalty, climate change, school uniforms, fossil fuel bans, and the societal impact of AI. Each participant was randomly paired with either another human or OpenAI’s GPT-4 chatbot. In some matches, debaters, whether human or AI, were given demographic details about their opponents, including gender, age, ethnicity, education, employment, and political affiliation; participants were not told whether their opponent was human or machine. The debates followed a simplified academic format, and participants’ opinions were measured before and after each debate to quantify any shift.
The results showed that GPT-4 out-persuaded its human counterparts in 64.4% of debates when both had access to demographic data about their opponents. When no such information was available, however, GPT-4’s advantage disappeared and its persuasiveness was comparable to that of humans.
Key Findings and Implications
The AI’s persuasive edge came from its ability to tailor arguments to demographic information, a tactic that human debaters in the study did not exploit as effectively. GPT-4 leaned on analytical reasoning and evidence-based points, which participants found compelling, especially on less polarizing issues. The study raises concerns that such AI tools could be misused for disinformation campaigns or political manipulation if not properly regulated.
Sandra Wachter, a professor at the University of Oxford, called the findings “quite alarming,” noting the potential for AI to spread misinformation. The research team and other experts recommend that platforms and policymakers consider safeguards similar to those applied to targeted advertising to address the risks of AI-driven persuasion. They also call for further research, including studies of other AI models such as Meta’s Llama and Anthropic’s Claude.
Broader Context and Expert Perspectives
This study represents one of the first direct, real-time comparisons of AI and human persuasiveness in online debate settings. AI expert Junade Ali pointed out a limitation: the study did not compare GPT-4 to trained or professional debaters, and results may differ in such scenarios.
Public and Ethical Considerations
Notably, about 75% of participants correctly identified when they were debating an AI opponent, suggesting that transparency can mitigate some risks. In real-world settings where an AI’s identity is not disclosed, however, its persuasive capabilities could substantially influence public discourse, elections, or targeted campaigns. While AI could also be harnessed to reduce polarization or promote healthy behaviors, experts such as Oxford’s Michael Wooldridge warn of risks including radicalization and propaganda. The study highlights the urgent need for robust safeguards and ethical standards to guide the responsible use of persuasive AI in public forums.