Linguistic features of student responses as indicators of performance in critical online reasoning tasks.

Saved in:
Detailed bibliography
Title: Linguistic features of student responses as indicators of performance in critical online reasoning tasks.
Alternate Title: Linguistische Merkmale studentischer Antworten als Indikatoren für die Performanz in kritischen Online-Reasoning-Aufgaben. (German)
Authors: Mehler, Alexander, Bisang, Walter, Konca, Maxim, Czerwinski, Patryk, Graf, Jeremias Josef, Fritsch, Jana
Source: Zeitschrift für Erziehungswissenschaft; Feb 2026, Vol. 29 Issue 1, p91-147, 57p
Abstract (English): Language plays a central role in learning processes in higher education, both in the acquisition and processing of information and in the production of written responses to academic tasks. When relying on online sources, these processes can be situated within the framework of Critical Online Reasoning (COR), which addresses students' ability to search for, evaluate, and integrate online information in order to solve scenario-based tasks in a self-regulated manner. While COR research has mostly considered source-related and processual dimensions of online reasoning, the role of specific grammatical features as indicators of students' task performance has received little attention. Addressing this gap, the present pilot study tests the hypothesis that a small set of grammatical features is sufficient to predict response quality, thereby supporting the inference of task-specific student performance in COR tasks. To test this hypothesis, we propose an integrated qualitative-quantitative approach applied to written responses from economics students. The qualitative analysis examines grammatical features at the levels of semantics (e.g., modality) and syntax (e.g., adverbial clauses), and relates them to expert evaluations of response quality. The resulting linguistic model is then operationalized computationally and evaluated on a larger dataset using machine-learning methods. The results provide evidence for the predictive, though still limited, validity of the linguistic model and show that its feature set can be substantially reduced while improving predictive performance. We compare the model against similarly low-dimensional approaches, identifying promising alternatives from quantitative linguistics. Using evolutionary search and contrast analysis, we ultimately reduce the model to two features.
Given the increasing number of AI-based approaches to automated essay scoring, our findings demonstrate the feasibility of a fully linguistically controlled, low-dimensional model that remains interpretable from an educational-science perspective while being computationally efficient. [ABSTRACT FROM AUTHOR]
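The pipeline summarized in the abstract — predicting response quality from a handful of grammatical features, then shrinking that feature set by evolutionary search — can be sketched roughly as follows. Everything here is an illustrative assumption: the feature names, the synthetic data, the nearest-centroid classifier, and the simple (1+1) evolutionary strategy are stand-ins, not the study's actual features, data, or implementation.

```python
import random

# Hypothetical grammatical features; the first two are constructed to carry
# signal about response quality, the last two are pure noise.
FEATURES = ["modal_verbs", "adverbial_clauses", "passives", "nominalizations"]

def make_toy_data(n=200, seed=0):
    """Synthetic 'responses': feature counts plus a binary quality label."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        label = rng.randint(0, 1)  # 0 = low, 1 = high response quality
        row = {
            "modal_verbs":       rng.gauss(2.0 + 2.0 * label, 1.0),
            "adverbial_clauses": rng.gauss(1.0 + 1.5 * label, 1.0),
            "passives":          rng.gauss(3.0, 1.0),
            "nominalizations":   rng.gauss(4.0, 1.0),
        }
        data.append((row, label))
    return data

def accuracy(data, subset):
    """Nearest-centroid classification accuracy using only `subset` features."""
    cents = {}
    for lbl in (0, 1):
        rows = [r for r, l in data if l == lbl]
        cents[lbl] = {f: sum(r[f] for r in rows) / len(rows) for f in subset}
    hits = 0
    for row, lbl in data:
        pred = min((0, 1), key=lambda c: sum((row[f] - cents[c][f]) ** 2
                                             for f in subset))
        hits += (pred == lbl)
    return hits / len(data)

def evolve_subset(data, generations=200, seed=1):
    """(1+1) evolutionary search over feature subsets: flip one feature's
    membership per step; keep the mutant if it is strictly more accurate,
    or equally accurate but smaller."""
    rng = random.Random(seed)
    best = set(FEATURES)
    best_acc = accuracy(data, sorted(best))
    for _ in range(generations):
        cand = set(best)
        cand.symmetric_difference_update({rng.choice(FEATURES)})
        if not cand:
            continue
        acc = accuracy(data, sorted(cand))
        if acc > best_acc or (acc == best_acc and len(cand) < len(best)):
            best, best_acc = cand, acc
    return sorted(best), best_acc
```

On data built this way, the search tends to discard the noise features, since dropping them cannot lower (and usually raises) the centroid classifier's accuracy — mirroring, in miniature, the abstract's finding that a reduced feature set can improve predictive performance.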
Copyright of Zeitschrift für Erziehungswissenschaft is the property of Springer Nature and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
ISSN: 1434-663X
DOI: 10.1007/s11618-026-01388-6