Comparing the quality of human and ChatGPT feedback of students’ writing


Bibliographic Details
Published in: Learning and Instruction, Vol. 91, p. 101894
Main Authors: Steiss, Jacob; Tate, Tamara; Graham, Steve; Cruz, Jazmin; Hebert, Michael; Wang, Jiali; Moon, Youngsun; Tseng, Waverly; Warschauer, Mark; Olson, Carol Booth
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.06.2024
Subjects:
ISSN:0959-4752
Online Access: Full text
Description
Abstract: Offering students formative feedback on their writing is an effective way to facilitate writing development. Recent advances in AI (i.e., ChatGPT) may function as an automated writing evaluation tool, increasing the amount of feedback students receive and diminishing the burden on teachers to provide frequent feedback to large classes. We examined the ability of generative AI (ChatGPT) to provide formative feedback. We compared the quality of human and AI feedback by scoring the feedback each provided on secondary student essays. We scored the degree to which feedback (a) was criteria-based, (b) provided clear directions for improvement, (c) was accurate, (d) prioritized essential features, and (e) used a supportive tone. We scored 200 pieces of human-generated formative feedback and 200 pieces of AI-generated formative feedback for the same essays. We examined whether ChatGPT and human feedback differed in quality for the whole sample, for compositions that differed in overall quality, and for native English speakers and English learners by comparing descriptive statistics and effect sizes. Human raters were better at providing high-quality feedback to students in all categories other than criteria-based. AI and humans showed differences in feedback quality based on essay quality. Feedback did not vary by language status for humans or AI. Well-trained evaluators provided higher quality feedback than ChatGPT. Considering the ease of generating feedback through ChatGPT and its overall quality, generative AI may still be useful in some contexts, particularly in formative early drafts or instances where a well-trained educator is unavailable.
•Human feedback was better than ChatGPT feedback for 4 of 5 elements of formative feedback.
•Differences between ChatGPT and humans were modest when considering the overall quality of feedback and time savings.
•ChatGPT and human feedback varied in quality based on the score of the targeted essay.
•ChatGPT has potential as an evaluative tool given tradeoffs between quality and time.
DOI:10.1016/j.learninstruc.2024.101894