Assessing AI detectors in identifying AI-generated code: Implications for education

Detailed Bibliography
Title: Assessing AI detectors in identifying AI-generated code: Implications for education
Authors: PAN, Wei Hung, CHOK, Ming Jie, WONG, Jonathan Leong Shan, SHIN, Yung Xin, POON, Yeong Shian, YANG, Zhou, CHONG, Chun Yong, LO, David, LIM, Mei Kuan
Source: Research Collection School Of Computing and Information Systems
Publisher Information: Institutional Knowledge at Singapore Management University
Year of Publication: 2024
Collection: Institutional Knowledge (InK) at Singapore Management University
Subjects: Software Engineering Education, AI-Generated Code, AI-Generated Code Detection, Artificial Intelligence and Robotics, Software Engineering
Description: Educators are increasingly concerned about the use of Large Language Models (LLMs) such as ChatGPT in programming education, particularly the potential exploitation of imperfections in Artificial Intelligence Generated Content (AIGC) detectors for academic misconduct. In this paper, we present an empirical study in which the LLM is examined for its attempts to bypass detection by AIGC detectors. This is achieved by generating code in response to a given question using different prompt variants. We collected a dataset comprising 5,069 samples, each consisting of a textual description of a coding problem and its corresponding human-written Python solution code. These samples were obtained from various sources, including 80 from Quescol, 3,264 from Kaggle, and 1,725 from LeetCode. From the dataset, we created 13 sets of code problem variant prompts, which were used to instruct ChatGPT to generate the outputs. Subsequently, we assessed the performance of five AIGC detectors. Our results demonstrate that existing AIGC detectors perform poorly in distinguishing between human-written code and AI-generated code.
Document Type: text
File Description: application/pdf
Language: English
Relation: https://ink.library.smu.edu.sg/sis_research/9244; https://ink.library.smu.edu.sg/context/sis_research/article/10244/viewcontent/3639474.3640068.pdf
Availability: https://ink.library.smu.edu.sg/sis_research/9244
https://ink.library.smu.edu.sg/context/sis_research/article/10244/viewcontent/3639474.3640068.pdf
Rights: http://creativecommons.org/licenses/by-nc-nd/4.0/
Accession Number: edsbas.767B3516
Database: BASE