XAI-Ed: Half-Day Workshop on Pedagogy-Founded Explainable AI for Transparent, User-Centered AI in Education
Saved in:
| Title: | XAI-Ed: Half-Day Workshop on Pedagogy-Founded Explainable AI for Transparent, User-Centered AI in Education |
|---|---|
| Authors: | Hasan Abu-Rasheed, Mutlu Cukurova, Hassan Khosravi, Jeroen Ooge, Christian Weber |
| Source: | Communications in Computer and Information Science, ISBN: 9783031992667 |
| Publisher information: | Springer Nature Switzerland, 2025. |
| Publication year: | 2025 |
| Keywords: | User-centered XAI, Educational stakeholders, Pedagogy, Explainable AI in Education, Human oversight |
| Description: | As educational technology evolves, integrating Artificial Intelligence (AI) into Technology-Enhanced Learning (TEL), Educational Technologies (EdTech), and AI in Education (AIED) offers significant potential for learning and educational stakeholders. However, the opacity of many AI systems complicates the interpretation of their decision-making processes and their predictions, such as personalized learning recommendations. Explainable AI (XAI) addresses this challenge by providing insights into how AI algorithms arrive at their predictions, thereby enabling stakeholders to make more informed decisions. This workshop will explore the convergence of XAI with AIED, TEL, and EdTech in general, aiming to empower learners and educators through transparent and understandable insights into the inner workings of AI algorithms. Although recent advances within the AIED community have underscored the growing demand for explainable and transparent AI systems, the focus has mainly been on the technical side of XAI, leaving a strong need to address the pedagogical aspects of XAI in education. These are not limited to the pedagogical foundations of XAI but also include the actual learning value of implementing XAI, how this value aligns with learning theories, and how XAI’s ability to achieve it can be evaluated. In educational contexts, where learner autonomy and agency are essential for a fair, learner-centered process, XAI is expected to provide meaningful explanations for algorithmic predictions. Successfully implementing XAI in EdTech and AIED, therefore, requires a robust understanding of its pedagogical foundations, technical realization, regulatory limitations, and the perspectives of educational stakeholders, including learners, educators, developers, and institutions. |
| Publication type: | Book chapter; conference object |
| Language: | English |
| DOI: | 10.1007/978-3-031-99267-4_26 |
| Access URL: | https://research-portal.uu.nl/en/publications/b8e6e3fa-16c2-44f4-a06e-12b36758cc1c ; https://doi.org/10.1007/978-3-031-99267-4_26 |
| Rights: | Springer Nature TDM |
| Document code: | edsair.doi.dedup.....7082e8128bb9528bdb2bd2fb227c97c3 |
| Database: | OpenAIRE |