Effects of adaptive feedback generated by a large language model: A case study in teacher education

Bibliographic Details
Title: Effects of adaptive feedback generated by a large language model: A case study in teacher education
Authors: Annette Kinder, Fiona J. Briese, Marius Jacobs, Niclas Dern, Niels Glodny, Simon Jacobs, Samuel Leßmann
Source: Computers and Education: Artificial Intelligence, Vol 8, Art. 100349 (2025)
Publisher Information: Elsevier, 2025.
Publication Year: 2025
Collection: LCC:Electronic computers. Computer science
Subject Terms: AI-Generated feedback, Diagnostic reasoning, Large language model (LLM), Higher education, Randomized controlled trial, Pre-service teachers, Electronic computers. Computer science, QA75.5-76.95
Description: This study investigates the effects of adaptive feedback generated by large language models (LLMs), specifically ChatGPT, on performance in a written diagnostic reasoning task among German pre-service teachers (n = 269). Additionally, the study analyzed user evaluations of the feedback and feedback processing time. Diagnostic reasoning, a critical skill for making informed pedagogical decisions, was assessed through a writing task integrated into a teacher preparation course. Participants were randomly assigned to receive either adaptive feedback generated by ChatGPT or static feedback prepared in advance by a human expert, which was identical for all participants in that condition, before completing a second writing task. The findings reveal that ChatGPT-generated adaptive feedback significantly improved the quality of justification in the students’ writing compared to the static feedback written by an expert. However, no significant difference was observed in decision accuracy between the two groups, suggesting that the type and source of feedback did not impact decision-making processes. Additionally, students who had received LLM-generated adaptive feedback spent more time processing the feedback and subsequently wrote longer texts, indicating longer engagement with the feedback and the task. Participants also rated adaptive feedback as more useful and interesting than static feedback, aligning with previous research on the motivational benefits of adaptive feedback. The study highlights the potential of LLMs like ChatGPT as valuable tools in educational settings, particularly in large courses where providing adaptive feedback is challenging.
Document Type: article
File Description: electronic resource
Language: English
ISSN: 2666-920X
Relation: http://www.sciencedirect.com/science/article/pii/S2666920X24001528; https://doaj.org/toc/2666-920X
DOI: 10.1016/j.caeai.2024.100349
Access URL: https://doaj.org/article/7706d2dd4d28492788154e4986343a96
Accession Number: edsdoj.7706d2dd4d28492788154e4986343a96
Database: Directory of Open Access Journals