Artificial intelligence in International English Language Testing System writing assessments: A comparative study of human ratings and DeepAI
| Published in: | Technology in Language Teaching & Learning Vol. 7; No. 4; pp. 103131 |
|---|---|
| Main authors: | , |
| Format: | Journal Article |
| Language: | English |
| Published: | Castledown Publishers, 17.11.2025 |
| Subjects: | |
| ISSN: | 2652-1687 |
| Online access: | Full text |
| Abstract: | The International English Language Testing System (IELTS) is a high-stakes exam in which Writing Task 2 significantly influences the overall score, making reliable evaluation essential. While trained human raters perform this task, concerns about subjectivity and inconsistency have led to growing interest in artificial intelligence (AI)-based assessment tools. However, little empirical evidence exists on AI in high-stakes testing, and no study has examined DeepAI in this context. Accordingly, using a repeated measures design, this study investigated the comparability and reliability of human and DeepAI ratings of 145 IELTS Writing Task 2 essays collected from the official IELTS Tehran Test Centre. These essays had been previously scored by certified human examiners and were subsequently rescored by DeepAI using a rubric-based prompt aligned with IELTS standards. Statistical analyses, including paired sample t-tests and multivariate analysis of variance, were conducted to explore rater differences and scoring alignment. The results revealed no significant differences in overall band scores between the human and AI assessments; however, minor differences were observed on some specific criteria. Additionally, DeepAI showed strong intra-rater reliability, producing consistent scores over a two-week interval. These findings suggest that DeepAI may serve as a reliable supplementary tool in high-stakes writing assessments. However, full replacement of human judgment remains premature, and a combination of human judgment and AI support may be the most effective approach. |
|---|---|
| ISSN: | 2652-1687 |
| DOI: | 10.29140/tltl.v7n4.103131 |