UK universities are facing a growing challenge with students increasingly using artificial intelligence (AI) tools like ChatGPT to cheat in exams and assignments. According to a recent investigation by The Guardian, nearly 7,000 students were caught using AI inappropriately during the 2023–24 academic year, marking a sharp increase from previous years. The trend reflects a major shift in academic misconduct, with AI misuse overtaking traditional plagiarism.
While older forms of cheating such as copy-and-paste plagiarism are in decline, AI-generated content is proving harder to detect. In 2019–20, plagiarism accounted for nearly two-thirds of all misconduct cases; by 2023–24, confirmed plagiarism cases had fallen to 15.2 per 1,000 students, and early figures from the 2024–25 academic year suggest the rate could dip as low as 8.5 per 1,000.
This evolution has left many universities unprepared. Of the 155 institutions contacted, 131 responded, and most do not yet record AI-related cheating as a separate category. Many also lack the tools and policies to identify, let alone effectively address, AI misuse. An experiment at the University of Reading found that AI-generated essays bypassed standard detection systems 94% of the time, underlining how difficult the technology is to spot.
Students are also becoming more sophisticated in how they use AI. While some directly submit AI-generated work, many use tools to generate outlines, paraphrase texts, or simplify dense academic readings. For students with learning disabilities like dyslexia, AI is often viewed as an enabler rather than a cheat code. Platforms like TikTok and YouTube now host thousands of tutorials on how to "humanise" AI-generated responses to avoid detection.
This shift is prompting UK universities to reevaluate traditional forms of assessment. Exams alone may not address the deeper issue, especially when AI can assist with comprehension and writing structure without outright completing tasks for students. Experts are urging institutions to focus more on teaching critical thinking, collaboration, and communication: the kinds of human-centric skills that AI cannot easily replicate.
There is also a growing consensus that universities must help students understand the purpose behind their assignments rather than treating them as mere performance metrics. The UK government has responded by supporting skills-based learning initiatives and exploring how AI can be integrated constructively into education. Still, with AI advancing rapidly and detection tools lagging behind, higher education institutions remain in a reactive phase. The challenge lies not just in stopping misuse, but in ensuring that learning remains meaningful, fair, and future-proof in the age of AI.









