From Black Box to Glass Box: A Practical Review of Explainable Artificial Intelligence (XAI)

Detailed bibliography
Published in: AI (Basel), Volume 6, Issue 11, p. 285
Main authors: Liu, Xiaoming; Huang, Danni; Yao, Jingyu; Dong, Jing; Song, Litong; Wang, Hui; Yao, Chao; Chu, Weishen
Format: Journal Article; Review
Language: English
Published: Basel: MDPI AG, 01.11.2025
ISSN: 2673-2688
Description
Summary: Explainable Artificial Intelligence (XAI) has become essential as machine learning systems are deployed in high-stakes domains such as security, finance, and healthcare. Traditional models often act as “black boxes”, limiting trust and accountability. However, most existing reviews treat explainability either as a technical problem or a philosophical issue, without connecting interpretability techniques to their real-world implications for security, privacy, and governance. This review fills that gap by integrating theoretical foundations with practical applications and societal perspectives. We define transparency and interpretability as core concepts and introduce new economics-inspired notions of marginal transparency and marginal interpretability to highlight diminishing returns in disclosure and explanation. Methodologically, we examine model-agnostic approaches such as LIME and SHAP, alongside model-specific methods including decision trees and interpretable neural networks. We also address ante-hoc vs. post-hoc strategies, local vs. global explanations, and emerging privacy-preserving techniques. To contextualize XAI’s growth, we integrate capital investment and publication trends, showing that research momentum has remained resilient despite market fluctuations. Finally, we propose a roadmap for 2025–2030, emphasizing evaluation standards, adaptive explanations, integration with Zero Trust architectures, and the development of self-explaining agents supported by global standards. By combining technical insights with societal implications, this article provides both a scholarly contribution and a practical reference for advancing trustworthy AI.
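
The abstract leaves marginal transparency and marginal interpretability informal. As a purely illustrative formalization, assuming hypothetical utility functions U_T (trust gained from disclosure level d) and U_I (understanding gained from explanation depth e) that are not defined in the source, the diminishing-returns claim can be written as:

```latex
% Illustrative sketch only: U_T, U_I, d, and e are assumed notation,
% not taken from the article.
\[
  MT(d) = U_T'(d) > 0, \qquad U_T''(d) < 0
\]
\[
  MI(e) = U_I'(e) > 0, \qquad U_I''(e) < 0
\]
% Positive first derivatives: more disclosure/explanation still helps.
% Negative second derivatives: each extra unit helps less than the last.
```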
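
As a concrete instance of the model-agnostic methods the review surveys, the following is a minimal sketch of a SHAP workflow; the dataset, model, and sample sizes are illustrative assumptions, not taken from the article.

```python
# Minimal SHAP sketch (illustrative; model/data choices are assumptions).
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train an opaque model; SHAP treats it as the function to explain.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles;
# shap.KernelExplainer is the fully model-agnostic (slower) alternative.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])

# Local explanation: per-feature attributions for a single prediction.
# Global explanation: mean |attribution| per feature across many rows,
# which is what the summary plot aggregates.
shap.summary_plot(shap_values, X.iloc[:200])
```

A LIME-based local explanation would play the same role via lime.lime_tabular.LimeTabularExplainer and its explain_instance method, fitting a simple surrogate model around a single prediction.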
DOI: 10.3390/ai6110285