A Comparison of Peer- and Tutor-Grading of an Introductory R Coding Assessment

Bibliographic Details
Title: A Comparison of Peer- and Tutor-Grading of an Introductory R Coding Assessment
Authors: Charlotte M. Jones-Todd, Amy Renelle
Source: Journal of Statistics and Data Science Education, pp. 1-13 (2025)
Publisher Information: Informa UK Limited, 2025.
Publication Year: 2025
Subjects: Peer grading, Peer marking, Statistics, Programming, LC8-6691 (Special aspects of education), QA273-280 (Probabilities. Mathematical statistics)
Description: We investigate the level of agreement between tutor and peer grading of an introductory R programming assessment. Comparing peer and tutor grades, we find a strong correlation of 0.848, 95% CI = (0.809, 0.880). Using standard multivariate data analysis techniques, we find that tutors and peers grade similarly given a prescriptive criterion. However, when given a subjective criterion, tutors and peers use different schemas to grade. We find that tutors grade the subjective criterion autonomously from the other rubric criteria, whereas peers grade in line with the prescriptive criteria that evaluate the components and structure of the code. In addition, we estimate between-assessor and between-submission variation using a discrete-Beta mixed model and show that between-submission variation is greater than between-assessor variation for both peers and tutors. Finally, we advocate for the use of peer assessment as a learning exercise and encourage readers to adapt the activity accordingly.
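The description quotes a correlation of 0.848 with a 95% CI of (0.809, 0.880) between peer and tutor grades. As a rough illustration only (the paper itself models the grades with a discrete-Beta mixed model, which is not reproduced here), the sketch below shows how a Pearson correlation with a Fisher-z confidence interval could be computed for paired grades; the data, sample size, and variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def pearson_ci(x, y, level=0.95):
    """Pearson correlation with an approximate Fisher-z confidence interval."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    r, _ = stats.pearsonr(x, y)
    n = len(x)
    z = np.arctanh(r)                      # Fisher z-transform of r
    se = 1.0 / np.sqrt(n - 3)              # approximate standard error on the z scale
    zcrit = stats.norm.ppf(1 - (1 - level) / 2)
    lo, hi = np.tanh(z - zcrit * se), np.tanh(z + zcrit * se)
    return r, (lo, hi)

# Hypothetical paired grades, one value per submission on a 0-1 scale
rng = np.random.default_rng(1)
tutor = rng.uniform(0.4, 1.0, size=120)
peer = np.clip(tutor + rng.normal(0, 0.08, size=120), 0, 1)

r, (lo, hi) = pearson_ci(peer, tutor)
print(f"r = {r:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```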
Publication Type: Article
Language: English
ISSN: 2693-9169
DOI: 10.1080/26939169.2025.2520205
Access URL: https://doaj.org/article/fbb4c0d1b9ac468aa276ea9bbbb96c18
Rights: CC BY-NC
Document Code: edsair.doi.dedup.....2f46ab1ef86ab7d1b813d939269ffb33
Database: OpenAIRE