Improving Assessment of Programming Pattern Knowledge through Code Editing and Revision


Bibliographic Details
Published in: IEEE/ACM International Conference on Software Engineering: Software Engineering Education and Training (Online), pp. 58-69
Main Authors: Nurollahian, Sara; Rafferty, Anna N.; Wiese, Eliane
Format: Conference Paper
Language: English
Published: IEEE, 01.05.2023
ISSN: 2832-7578
Description
Summary: How well do code-writing tasks measure students' knowledge of programming patterns and anti-patterns? How can we assess this knowledge more accurately? To explore these questions, we surveyed 328 intermediate CS students and measured their performance on different types of tasks, including writing code, editing someone else's code, and, if applicable, revising their own alternatively-structured code. Our tasks targeted returning a Boolean expression and using unique code within an if and else. We found that code writing sometimes underestimated student knowledge. For tasks targeting returning a Boolean expression, over 55% of students who initially wrote with non-expert structure successfully revised to expert structure when prompted, even though the prompt did not include guidance on how to improve their code. Further, over 25% of students who initially wrote non-expert code could properly edit someone else's non-expert code to expert structure. These results show that non-expert code is not a reliable indicator of deep misconceptions about the structure of expert code. Finally, although code writing is correlated with code editing, the relationship is weak: a model with code writing as the sole predictor of code editing explains less than 15% of the variance. Model accuracy improves when we include additional predictors that reflect other facets of knowledge, namely the identification of expert code and selection of expert code as more readable than non-expert code. Together, these results indicate that a combination of code writing, revising, editing, and identification tasks can provide a more accurate assessment of student knowledge of programming patterns than code writing alone.
DOI: 10.1109/ICSE-SEET58685.2023.00012
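
For context, the two code-structure patterns named in the summary can be illustrated with a minimal sketch. The Java code below is hypothetical (the class name, method names, and values are illustrative, not the paper's actual task items): it contrasts non-expert and expert structure for returning a Boolean expression, and for keeping only branch-unique code inside an if and else.

public class PatternSketch {

    // Non-expert structure: returning a Boolean through an if/else.
    static boolean isAdultNonExpert(int age) {
        if (age >= 18) {
            return true;
        } else {
            return false;
        }
    }

    // Expert structure: return the Boolean expression directly.
    static boolean isAdultExpert(int age) {
        return age >= 18;
    }

    // Non-expert structure: the same statement duplicated in both branches.
    static void greetNonExpert(boolean member) {
        if (member) {
            System.out.println("Welcome!");
            System.out.println("Member discount applied.");
        } else {
            System.out.println("Welcome!");
        }
    }

    // Expert structure: shared code hoisted out; only unique code in the if.
    static void greetExpert(boolean member) {
        System.out.println("Welcome!");
        if (member) {
            System.out.println("Member discount applied.");
        }
    }

    public static void main(String[] args) {
        // Both forms compute the same results; only the structure differs.
        System.out.println(isAdultNonExpert(20) == isAdultExpert(20)); // true
        greetNonExpert(true);
        greetExpert(true);
    }
}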