New FSU research shows statistical analysis can detect when ChatGPT is used to cheat on multiple-choice chemistry exams
As use of generative artificial intelligence continues to extend into all reaches of education, much of the concern related to its impact on cheating has focused on essays, essay exam questions and other narrative assignments. The use of AI tools such as ChatGPT to cheat on multiple-choice exams has largely gone unexamined.
A Florida State University chemist is half of a research partnership whose latest work is changing what we know about this type of cheating: their findings show how the use of ChatGPT to cheat on general chemistry multiple-choice exams can be detected through specific statistical methods. The work was published in .
“While many educators and researchers try to detect AI-assisted cheating in essays and open-ended responses, using tools such as Turnitin’s AI detection, as far as we know, this is the first time anyone has proposed detecting its use on multiple-choice exams,” said Ken Hanson, an associate professor in the FSU Department of Chemistry and Biochemistry. “By evaluating differences in performance between student- and ChatGPT-based multiple-choice chemistry exams, we were able to identify ChatGPT instances across all exams with a false positive rate of almost zero.”
Researchers collected previous FSU student responses from five semesters’ worth of exams, input nearly 1,000 questions into ChatGPT and compared the outcomes. Average scores and raw statistics were not enough to identify ChatGPT-like behavior, because certain questions ChatGPT always answered correctly, and others it always answered incorrectly, resulting in an overall score that was indistinguishable from the students’.
“That’s the thing about ChatGPT: it can generate content, but it doesn’t necessarily generate correct content,” Hanson said. “It’s simply an answer generator. It’s trying to look like it knows the answer, and to someone who doesn’t understand the material, it probably does look like a correct answer.”
By using fit statistics, researchers fixed the ability parameters and refit the outcomes, finding ChatGPT鈥檚 response pattern was clearly different from that of the students.
On exams, high-performing students frequently answer both difficult and easy questions correctly, while average students tend to answer some difficult questions and most easy questions correctly. Low-performing students typically answer only easy questions correctly. But on repeated attempts by ChatGPT to complete an exam, the AI tool sometimes answered every easy question incorrectly and every hard question correctly. Hanson and Sorenson used these behavior differences to detect the use of ChatGPT with almost 100-percent accuracy.
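That inverted pattern is exactly what a standard Rasch fit statistic flags. The sketch below, with made-up item difficulties and response vectors (not the authors' actual code or data), computes the outfit mean-square, which is near 1 when responses follow the Rasch model and well above 1 for unexpected patterns such as missing easy questions while hitting hard ones:

```python
import math

def rasch_p(theta, b):
    """Rasch model probability of a correct answer for ability theta
    and item difficulty b: 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def outfit_msq(responses, theta, difficulties):
    """Outfit mean-square: mean of squared standardized residuals.
    Values near 1 mean the response pattern fits the model; values
    well above 1 flag unexpected responses."""
    total = 0.0
    for x, b in zip(responses, difficulties):
        p = rasch_p(theta, b)
        total += (x - p) ** 2 / (p * (1.0 - p))
    return total / len(responses)

# Hypothetical five-item exam, easiest (-2) to hardest (+2),
# scored at an average ability level (theta = 0).
difficulties = [-2, -1, 0, 1, 2]
theta = 0.0

student = [1, 1, 1, 0, 0]  # typical: easy right, hard wrong
chatbot = [0, 0, 1, 1, 1]  # inverted: easy wrong, hard right

print(outfit_msq(student, theta, difficulties))  # near or below 1
print(outfit_msq(chatbot, theta, difficulties))  # well above 1 -> flagged
```

With these illustrative values, the typical student pattern fits the model while the inverted pattern produces a much larger misfit, which is the kind of separation the researchers exploited.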
The duo’s strategy of employing a technique known as Rasch modeling, together with fit statistics, can be readily applied to any generative AI chatbot; each will exhibit its own distinctive response pattern, helping educators identify its use in completing multiple-choice exams.
The research is the latest publication in a seven-year collaboration between Hanson and machine learning engineer Ben Sorenson.
Hanson and Sorenson, who first met in third grade, both attended St. Cloud State University in Minnesota for their undergraduate degrees and stayed in touch after moving into their careers. As a faculty member at FSU, Hanson became curious about measuring how much knowledge his students retained from lectures, courses and lab work.
“This was a conversation that I brought to Ben, who’s great with statistics, computer science and data processing,” said Hanson, who is part of a group of FSU faculty working to improve student success in gateway STEM courses such as general chemistry and college algebra. “He said we could use statistical tools to understand if my exams are good, and in 2017, we started analyzing exams.”
The core of the Rasch model is that a student’s probability of getting any test question correct is a function of two things: how difficult the question is and the student’s ability to answer it. In this case, a student’s ability reflects how much knowledge they have — how many of the components necessary to answer the question they possess. Viewing the outcomes of an exam in this way provides powerful insights, researchers said.
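The standard form of that relationship is a logistic function of the gap between ability and difficulty. A minimal sketch, with illustrative ability and difficulty values chosen here for demonstration:

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch model: P(correct) = 1 / (1 + exp(-(ability - difficulty)))."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability exactly matches difficulty, the chance of answering
# correctly is 50%.
print(rasch_probability(0.0, 0.0))   # 0.5
# A strong student (ability +2) on an easy item (difficulty -1)
# is very likely to answer correctly.
print(rasch_probability(2.0, -1.0))  # ~0.95
# A weak student (ability -2) on a hard item (difficulty +1)
# is very unlikely to answer correctly.
print(rasch_probability(-2.0, 1.0))  # ~0.05
```

Only the difference between ability and difficulty matters, which is what lets the researchers fix ability parameters and ask whether a response pattern is consistent with the model.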
“The collaboration between Ken and me, though remote, has been a really seamless, smooth process,” Sorenson said. “Our work is a great way to provide supporting evidence when educators might already suspect that cheating may be happening. What we didn’t expect was that the patterns of artificial intelligence would be so easy to identify.”
Hanson earned his doctorate in chemistry from the University of Southern California in 2010 and completed a postdoctoral position at the University of North Carolina at Chapel Hill before joining FSU’s chemistry faculty in 2013. His lab, the Hanson Research Group, focuses on molecular photochemistry and photophysics, or the study of light (photons) and light’s interaction with molecules. Hanson, a member of the American Chemical Society, has published more than 100 papers and holds over a dozen patents.
To learn more about Hanson鈥檚 research and the FSU Department of Chemistry and Biochemistry, visit .