Standards-aligned annotations reveal organizational patterns in argumentative essays at scale

Bibliographic Details
Title: Standards-aligned annotations reveal organizational patterns in argumentative essays at scale
Authors: Amy Burkhardt, Suhwa Han, Sherri Woolf, Allison Boykin, Frank Rijmen, Susan Lottridge
Source: Frontiers in Education, Vol 10 (2025)
Publisher Information: Frontiers Media SA, 2025.
Publication Year: 2025
Subject Terms: argumentative writing assessment, writing feedback, machine learning, rubrics, assessment, Education (General), L7-991, automated scoring and feedback
Description: While scoring rubrics are widely used to evaluate student writing, they often fail to provide actionable feedback. Delivering such feedback—especially in an automated, scalable manner—requires the standardized detection of finer-grained information within a student’s essay. Achieving this level of detail demands the same rigor in development and training as creating a high-quality rubric. To this end, we describe the development of annotation guidelines aligned with state standards for detecting these elements, outline the annotator training process, and report strong inter-rater agreement results from a large-scale annotation effort involving nearly 20,000 essays. To further validate this approach, we connect annotations to broader patterns in student writing using Latent Class Analysis (LCA). Through this analysis, we identify distinct writing patterns from these fine-grained annotations and demonstrate their meaningful associations with overall rubric scores. Our findings suggest that fine-grained analysis of argumentative essays can, at scale, help students become more effective argumentative writers. (An illustrative sketch of the agreement and LCA computations follows this record.)
Document Type: Article
ISSN: 2504-284X
DOI: 10.3389/feduc.2025.1569529
Access URL: https://doaj.org/article/8f2662d6b96d42b9ae1231be1c7bd80d
Rights: CC BY
Accession Number: edsair.doi.dedup.....5990ec214b77bfe50be3b556c7b55776
Database: OpenAIRE
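
The abstract names two quantitative steps: inter-rater agreement on the element annotations, and a Latent Class Analysis (LCA) over annotated elements to surface writing patterns associated with rubric scores. The sketch below is a rough, hypothetical illustration only, not the authors' pipeline: it computes Cohen's kappa for a simulated double-annotated subset, then fits a binary-indicator LCA with a hand-rolled EM loop. The element count, class count, and all data are invented for demonstration.

```python
# Hypothetical sketch (not the authors' code) of the two analyses the
# abstract describes: inter-rater agreement, then LCA on binary indicators.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# --- Inter-rater agreement on a double-annotated subset --------------------
# 1 = "element present", 0 = "absent"; two raters judging the same 200 essays
# (simulated so the raters agree on roughly 85% of items).
rater_a = rng.integers(0, 2, size=200)
rater_b = np.where(rng.random(200) < 0.85, rater_a, 1 - rater_a)
print(f"Cohen's kappa: {cohen_kappa_score(rater_a, rater_b):.2f}")

# --- LCA on binary element indicators via EM --------------------------------
# X[i, j] = 1 if essay i contains element j (e.g., claim, evidence, ...).
n_essays, n_elements, n_classes = 1000, 5, 3
true_profiles = rng.random((n_classes, n_elements))      # ground truth
z = rng.integers(0, n_classes, size=n_essays)            # true class labels
X = (rng.random((n_essays, n_elements)) < true_profiles[z]).astype(float)

pi = np.full(n_classes, 1 / n_classes)                   # class proportions
theta = rng.uniform(0.25, 0.75, (n_classes, n_elements)) # P(element | class)

for _ in range(200):
    # E-step: posterior P(class | essay) under a Bernoulli mixture.
    log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
    log_lik -= log_lik.max(axis=1, keepdims=True)
    resp = np.exp(log_lik)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: update mixing proportions and conditional element probabilities.
    pi = resp.mean(axis=0)
    theta = (resp.T @ X) / resp.sum(axis=0)[:, None]
    theta = theta.clip(1e-6, 1 - 1e-6)                   # avoid log(0)

print("Estimated class sizes:", np.round(pi, 2))
print("P(element | class):\n", np.round(theta, 2))
```

In the paper's setting, each indicator would correspond to a standards-aligned element detected in an essay, and the fitted classes would then be examined against overall rubric scores; the sketch stops at the estimated class profiles.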