From Black Box to Glass Box: A Practical Review of Explainable Artificial Intelligence (XAI)
| Published in: | AI (Basel) Vol. 6; no. 11; p. 285 |
|---|---|
| Main Authors: | , , , , , , , |
| Format: | Journal Article; Book Review |
| Language: | English |
| Published: | Basel: MDPI AG, 01.11.2025 |
| Subjects: | |
| ISSN: | 2673-2688 |
| Online Access: | Get full text |
| Summary: | Explainable Artificial Intelligence (XAI) has become essential as machine learning systems are deployed in high-stakes domains such as security, finance, and healthcare. Traditional models often act as “black boxes”, limiting trust and accountability. However, most existing reviews treat explainability either as a technical problem or a philosophical issue, without connecting interpretability techniques to their real-world implications for security, privacy, and governance. This review fills that gap by integrating theoretical foundations with practical applications and societal perspectives. We define transparency and interpretability as core concepts and introduce new economics-inspired notions of marginal transparency and marginal interpretability to highlight diminishing returns in disclosure and explanation. Methodologically, we examine model-agnostic approaches such as LIME and SHAP, alongside model-specific methods including decision trees and interpretable neural networks. We also address ante hoc vs. post hoc strategies, local vs. global explanations, and emerging privacy-preserving techniques. To contextualize XAI’s growth, we integrate capital investment and publication trends, showing that research momentum has remained resilient despite market fluctuations. Finally, we propose a roadmap for 2025–2030, emphasizing evaluation standards, adaptive explanations, integration with Zero Trust architectures, and the development of self-explaining agents supported by global standards. By combining technical insights with societal implications, this article provides both a scholarly contribution and a practical reference for advancing trustworthy AI. (An illustrative SHAP sketch follows this record.) |
|---|---|
| DOI: | 10.3390/ai6110285 |
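
The summary above names LIME and SHAP as model-agnostic explanation methods. The sketch below is not taken from the reviewed article; it only illustrates, under assumed choices of model and data (a scikit-learn random forest on the breast-cancer dataset), what a local, model-agnostic SHAP explanation looks like in practice.

```python
# Minimal sketch (not from the reviewed article) of a model-agnostic, local
# explanation with the `shap` library. Model, dataset, and parameter choices
# are illustrative assumptions, not the article's setup.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def predict_positive(rows):
    """Single-output prediction function: probability of the positive class."""
    return model.predict_proba(rows)[:, 1]

# KernelExplainer is model-agnostic: it queries the model only through its
# prediction function and uses a background sample to approximate Shapley values.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(predict_positive, background)

# Local explanation: per-feature contributions to one instance's predicted
# probability of the positive class.
instance = X[:1]
contributions = explainer.shap_values(instance, nsamples=200)[0]

# Print the five most influential features for this single prediction.
top = np.argsort(-np.abs(contributions))[:5]
for i in top:
    print(f"{data.feature_names[i]:<25s} {contributions[i]:+.4f}")
```

Because KernelExplainer sees the model only through `predict_positive`, swapping in any other classifier with a `predict_proba` method would leave the explanation code unchanged, which is what "model-agnostic" means in the summary.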