Artificial intelligence in international English language testing system writing assessments: A comparative study of human ratings and DeepAI

Detailed bibliography
Published in: Technology in Language Teaching & Learning, Volume 7, Issue 4, p. 103131
Main authors: Fathali, Somayeh; Mohajeri, Fatemeh
Format: Journal Article
Language: English
Published: Castledown Publishers, 17 November 2025
ISSN: 2652-1687
Description
Summary: The International English Language Testing System (IELTS) is a high-stakes exam in which Writing Task 2 significantly influences overall scores, making reliable evaluation essential. While trained human raters perform this task, concerns about subjectivity and inconsistency have led to growing interest in artificial intelligence (AI)-based assessment tools. However, little empirical evidence exists on AI in high-stakes testing, and no study has examined DeepAI in this context. Accordingly, using a repeated measures design, this study investigated the comparability and reliability of human and DeepAI ratings of 145 IELTS Writing Task 2 essays collected from the official IELTS Tehran Test Centre. These essays had been previously scored by certified human examiners and were subsequently rescored by DeepAI using a rubric-based prompt aligned with IELTS standards. Statistical analyses, including paired sample t-tests and multivariate analysis of variance, were conducted to explore rater differences and scoring alignment. The results revealed no significant differences in overall band scores between the human and AI assessments; however, minor differences were observed in some specific criteria. Additionally, DeepAI showed strong intra-rater reliability, producing consistent scores over a two-week interval. These findings suggest that DeepAI may serve as a reliable supplementary tool in high-stakes writing assessments. However, full replacement of human judgment remains premature, and a combination of human judgment and AI support may be the most effective approach.
DOI: 10.29140/tltl.v7n4.103131