The Effects of Explanations in Automated Essay Scoring Systems on Student Trust and Motivation

Detailed Bibliography
Published in: Journal of Learning Analytics, Vol. 10, No. 1, pp. 37-53
Main Authors: Conijn, Rianne; Kahr, Patricia; Snijders, Chris
Format: Journal Article
Language: English
Publication Date: 12 March 2023
ISSN: 1929-7750
Description
Summary: Ethical considerations, including transparency, play an important role when using artificial intelligence (AI) in education. Explainable AI has been proposed as a solution to provide more insight into the inner workings of AI algorithms. However, carefully designed user studies on how to design explanations for AI in education are still limited. The current study aimed to identify the effect of explanations of an automated essay scoring system on students' trust and motivation. The explanations were designed using a needs-elicitation study with students, in combination with guidelines and frameworks from explainable AI. Two types of explanations were tested: full-text global explanations and an accuracy statement. The results showed that neither type of explanation had an effect on student trust or motivation compared to no explanations. Interestingly, the grade provided by the system, and especially the difference between the student's self-estimated grade and the system grade, had a large influence. Hence, it is important to consider the effects of the outcome of the system (here: the grade) when evaluating the effect of explanations of AI in education.
DOI: 10.18608/jla.2023.7801