From Pilots to Practices: A Scoping Review of GenAI-Enabled Personalization in Computer Science Education.

Saved in:
Detailed bibliography
Title: From Pilots to Practices: A Scoping Review of GenAI-Enabled Personalization in Computer Science Education.
Authors: Reihanian, Iman; Hou, Yunfei; Sun, Qingquan
Source: AI; Jan 2026, Vol. 7, Issue 1, p. 6, 23 pp.
Subjects: GENERATIVE artificial intelligence, INTELLIGENT tutoring systems, CONTEXTUAL analysis, EDUCATION ethics, INDIVIDUALIZED instruction, COMPUTER science, EDUCATIONAL outcomes, FAIRNESS
Abstract: Generative AI enables personalized computer science education at scale, yet questions remain about whether such personalization supports or undermines learning. This scoping review synthesizes 32 studies (2023–2025) purposively sampled from 259 records to map personalization mechanisms and effectiveness signals in higher-education CS contexts. We identify five application domains—intelligent tutoring, personalized materials, formative feedback, AI-augmented assessment, and code review—and analyze how design choices shape learning outcomes. Designs incorporating explanation-first guidance, solution withholding, graduated hint ladders, and artifact grounding (student code, tests, and rubrics) consistently show more positive learning processes than unconstrained chat interfaces. Successful implementations share four patterns: context-aware tutoring anchored in student artifacts, multi-level hint structures requiring reflection, composition with traditional CS infrastructure (autograders and rubrics), and human-in-the-loop quality assurance. We propose an exploration-first adoption framework emphasizing piloting, instrumentation, learning-preserving defaults, and evidence-based scaling. Four recurrent risks—academic integrity, privacy, bias and equity, and over-reliance—are paired with operational mitigations. Critical evidence gaps include longitudinal effects on skill retention, comparative evaluations of guardrail designs, equity impacts at scale, and standardized replication metrics. The evidence supports generative AI as a mechanism for precision scaffolding when embedded in exploration-first, audit-ready workflows that preserve productive struggle while scaling personalized support. [ABSTRACT FROM AUTHOR]
Copyright of AI is the property of MDPI and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
ISSN: 2673-2688
DOI: 10.3390/ai7010006