UK Universities See Surge in AI-Assisted Cheating, Cases Triple in One Year

Image Credit: Markus Winkler | Unsplash

Almost 7,000 UK university students were caught using artificial intelligence tools such as ChatGPT to cheat in assessments during the 2023-24 academic year, more than tripling the rate from the previous year. The findings, based on freedom of information requests to 155 institutions, underscore a rapid escalation in AI-related academic misconduct amid ongoing discussions about the technology's place in education.

The data showed 5.1 confirmed cases of AI cheating per 1,000 students in 2023-24, up from 1.6 per 1,000 in 2022-23. Partial figures for 2024-25, covering the period to May, pointed to a further rise to about 7.5 cases per 1,000 students. Meanwhile, traditional plagiarism cases fell from 19 per 1,000 students in 2019-20 to 15.2 in 2023-24, with estimates suggesting a decline to 8.5 this year.

Background on AI's Emergence in Education

Generative AI tools like ChatGPT, released by OpenAI in November 2022, have seen swift adoption in academic environments. A February 2025 survey by the Higher Education Policy Institute (HEPI) and Kortext revealed that 88% of 1,041 UK undergraduate students had used generative AI for assessments, reflecting an "unprecedented increase" from 53% the previous year. Overall, 92% of respondents reported using AI tools in their studies. Technology firms have facilitated this growth, with Google providing a free 15-month upgrade to its Gemini AI for university students and OpenAI offering discounts to college students in the US and Canada.

Experts link the surge to AI's advanced capabilities and accessibility, including tutorials on platforms like TikTok that guide students on "humanising" AI-generated text to avoid detection. Dr Peter Scarfe, an associate professor at the University of Reading, noted that proving AI misuse is "near impossible" compared to traditional plagiarism, describing reported cases as "the tip of the iceberg". A 2024 study at the University of Reading found that AI-generated exam answers went undetected 94% of the time.

Development of Detection and Reporting Practices

The Guardian's data, gathered from 131 responding universities, exposed inconsistencies in monitoring AI misconduct, with more than 27% of institutions not categorising it separately in 2023-24. This uneven tracking has prompted concerns about underreporting, building on a 2024 Times Higher Education analysis that documented rising AI-related cases at Russell Group universities.

Dr Thomas Lancaster, an academic integrity expert at Imperial College London, stated that AI misuse is "very hard to prove" when students refine outputs, stressing the importance of educating students on assessment objectives. Several universities declined to use tools like Turnitin's AI detector in 2023, citing concerns over false positives, though subsequent research has addressed these issues.

Impacts on Education and Ethics

The increase has burdened university assessment frameworks, creating conflicts between faculty and students over evidence and accusations. It has also spotlighted ethical challenges, including AI's benefits for students with dyslexia or learning disabilities, as highlighted by UK Science and Technology Secretary Peter Kyle, who has advocated for AI to "level up" educational opportunities.

Wider consequences include risks of over-reliance on AI, potentially eroding critical thinking, and a transition from conventional plagiarism as AI supplants direct copying. A 2025 analysis in the International Journal for Educational Integrity examined the reassessment of academic integrity amid AI advancements, warning of threats to educational quality.

Future Trends in AI and Education

AI's integration in UK education is poised to grow, with the global AI education market forecast to reach US$112.3 billion by 2034. Emerging developments include personalised learning via generative AI for tailored content and progress monitoring, as well as combinations with augmented reality for interactive experiences.

By 2030, AI is expected to reshape 70% of job skills, requiring education systems to adapt for workforce preparation. Surveys indicate that over 80% of UK teachers and K-12 students find AI tools beneficial, but frameworks such as UNESCO's Recommendation on the Ethics of Artificial Intelligence emphasise transparency, bias mitigation, and human oversight. To combat misconduct, universities may shift toward in-person examinations or assessments that incorporate AI, with experts anticipating broader changes across the sector.

TheDayAfterAI News
