On Wednesday (26 June), the University of Reading released a new study demonstrating that the use of AI can go undetected in the grading of university papers.
The researchers created 33 fictitious students and used ChatGPT to generate exam answers and essays for first-, second-, and third-year modules. The fabricated answers were submitted to markers who were unaware of the project.
The study, led by Associate Professor Peter Scarfe and Professor Etienne Roesch, revealed that 94% of the AI submissions were undetected.
The first and second-year submissions from the AI students attained higher grades than those of the human undergraduates.
However, an anomaly of the study was that the human third-year undergraduates attained higher grades than the AI students. This finding is “consistent with the notion” that AI cannot comprehend “abstract reasoning” in the way that a human mind can, said the researchers.
In 2023, US-based Turnitin, a company that develops software for online assessment submission, released new software designed to detect AI-generated submissions. Turnitin’s software has been used extensively by universities to detect the use of AI in online coursework and exam submissions.
Turnitin originally claimed that its new software could detect “97% of ChatGPT and GPT-3 authored writing” with a “false positive rate” of “less than 1/100”. However, just seven weeks after its launch, Turnitin revealed that there had been a much “higher incidence of false positives”.
Not only have markers been unable to reliably detect the use of AI in students’ work, but existing detection software has also proven ineffective.
New policies must be developed to protect the “integrity” of academia, and the work of humans, as we move towards a world where the use of AI is the “new normal”, said the study.