A number of UK universities are facing a new wave of academic cheating: students using artificial intelligence tools such as ChatGPT to complete assignments and exams. Nearly 7,000 cases of misconduct were confirmed in the 2023-24 academic year, more than three times the figure for the previous year.
Notably, education experts say these figures are just the "tip of the iceberg". Meanwhile, plagiarism, previously the most common form of academic cheating, is in sharp decline.
AI changes the nature of school cheating
Before AI tools became widespread, plagiarism accounted for two-thirds of misconduct at UK universities. But as ChatGPT and writing assistance platforms have become increasingly sophisticated and accessible, the nature of academic cheating has changed.
AI not only helps students "write" essays, but also suggests structures, references, and even more "natural" phrasing designed to evade cheating-detection systems.
By May 2024, the rate of AI-related academic violations had risen to 7.5 per 1,000 students, while traditional plagiarism had fallen to about 8.5 per 1,000. This reversal is making violations harder for universities to detect and handle.
A survey by the UK Higher Education Policy Institute found that up to 88% of students admitted using AI to help them study and complete assignments. In a trial at the University of Reading, AI-generated essays passed through the marking system undetected in up to 94% of cases.
Dr Peter Scarfe, a lecturer in psychology at the University of Reading, warned that detecting AI-generated content is almost impossible if users know how to edit it.
"Unlike plagiarism, where you can compare the copied text, AI tools create new content that is difficult to verify. Unless the student admits it, it is very hard to prove misconduct," said Dr Scarfe.
Many students even consider using AI an integral part of their studies. One business administration student admitted that he often uses ChatGPT to brainstorm ideas and find references, and that “everyone uses it to some extent.”
“AI is not bad if used properly,” said academic Thomas Lancaster from Imperial College London. “But clearly, many students are crossing this line.”
What direction for higher education?
Despite the rapid increase in violations, more than 27% of UK universities still do not classify AI misuse as a separate category of misconduct, making such cases difficult to record, monitor and address.
Meanwhile, on platforms like TikTok, videos showing students how to "circumvent the rules" are going viral: from software that paraphrases AI-generated text to tips for bypassing detectors that flag unnatural writing. These tools "humanize" the output, making it appear as if it were written by a person.
Students are not the only ones involved: technology companies also see potential in this user group. Google is offering students a free 15-month upgrade to its Gemini tools, while OpenAI offers discounted pricing for students in the US and Canada.
UK Science and Technology Secretary Peter Kyle recently said AI should be used to support students, especially those with learning difficulties. However, he also stressed that integrating AI into teaching, learning and assessment needs to be carefully considered to avoid abuse.
Source: https://baovanhoa.vn/cong-nghe/bao-dong-tinh-trang-gian-lan-thi-cu-bang-ai-tai-anh-143326.html