ASTRO: a semi-automated grading and feedback system for programming assignments

Saved in:
Bibliographic Details
Title: ASTRO: a semi-automated grading and feedback system for programming assignments
Authors: Browning, Jonathan; Bustard, John; Anderson, Neil
Source: Browning, J, Bustard, J & Anderson, N 2025, ASTRO: a semi-automated grading and feedback system for programming assignments. in 2025 IEEE Frontiers in Education Conference (FIE): Proceedings. IEEE Frontiers in Education Conference (FIE): Proceedings, Institute of Electrical and Electronics Engineers Inc., IEEE Frontiers in Education 2025, Nashville, Tennessee, United States, 02/11/2025.
Publisher: Institute of Electrical and Electronics Engineers Inc.
Year of publication: 2025
Collection: Queen's University Belfast: Research Portal
Subjects: semi-automated grading, feedback system, programming assignments
Description: This innovative practice full paper describes the design, implementation, and evaluation of the Abstract Syntax Tree Reviewer and Output Tester (ASTRO), a semi-automated grading system for programming assignments. The motivation for this work stems from the challenges associated with manual grading in large programming courses, including time inefficiency, inconsistent evaluation, and limited actionable feedback for students. ASTRO addresses these issues by leveraging automated processes to improve scalability and reliability while providing detailed feedback tailored to individual student submissions. ASTRO integrates static code analysis, runtime testing, and semantic evaluation to deliver consistent and actionable assessments. Its unique features include the ability to process non-standard submissions, handle runtime anomalies, and categorize student performance into conceptual bands. Unlike many automated systems that prioritize correctness alone, ASTRO emphasizes conceptual understanding and provides feedback that is deterministic, transparent, and actionable. The system was implemented to streamline grading for a first-year software engineering course, reducing grading time while maintaining fairness and pedagogical rigor. The development of ASTRO draws on established literature in automated grading systems and programming pedagogy. Systems like Web-CAT and SALP informed ASTRO’s design, particularly in integrating dynamic and static analysis for assessment. However, ASTRO advances beyond existing tools by addressing limitations in handling edge cases and providing conceptual feedback, as highlighted by recent research in automated assessment and semantic analysis. ASTRO was evaluated using a cohort of 128 students, comparing its performance with manual grading methods used in the previous academic year. Results showed that ASTRO reduced grading time from three weeks to two days.
Statistical analysis revealed that ASTRO produced grades comparable to manual grading while offering a broader grade ...
Document type: conference paper
Language: English
Availability: https://pure.qub.ac.uk/en/publications/7126d8a7-30ec-4e61-beca-f3659f3e66a4
Rights: info:eu-repo/semantics/embargoedAccess
Accession number: edsbas.5E2D6BBF
Database: BASE