Explainable artificial intelligence for medical imaging systems using deep learning: a comprehensive review
| Published in: | Cluster Computing, Vol. 28, No. 7, p. 469 |
|---|---|
| Main authors: | , , , |
| Medium: | Journal Article |
| Language: | English |
| Published: | New York: Springer US, 01.09.2025 (Springer Nature B.V.) |
| Subjects: | |
| ISSN: | 1386-7857, 1573-7543 |
| Summary: | The world recently witnessed strong growth in artificial intelligence (AI) use across various sectors, driven by the digital revolution that began in 2016. Despite this progress, significant concerns persist regarding the black-box nature of AI. Intelligent systems provide decisions without explanations, which has raised pressing issues, particularly in critical domains such as medicine. In medicine, errors can lead to disastrous consequences, putting lives at risk. The "unexplainable" nature of AI is a heavily debated topic in biomedical informatics and computing. Many "black-box" algorithms and systems obscure the logic behind their decisions, leaving users and even developers in the dark about how results are derived. Researchers have developed the field of explainable artificial intelligence (XAI), which holds significant promise for fostering confidence and openness between AI systems and their users. Unlike traditional AI methods, such as deep learning (DL), XAI provides mechanisms for decision-making while offering explanations that are understandable to humans. Medical imaging (MI) plays a crucial role in diagnosing and monitoring a broad range of diseases, and advancements in computer vision, image processing, and the availability of medical image datasets have revolutionized automated MI analysis. However, trust in these systems remains a challenge. To gain the trust of clinicians, authorities, and patients, diagnostic methodologies must be transparent, interpretable, and explainable, clearly conveying the rationale behind specific decisions. This paper reviews the current landscape of XAI methods for medical imaging, including methodologies, techniques, and applications. It covers various types of explanations, such as visual explanations, textual justifications, and example-based reasoning, emphasizing their significance in medical imaging contexts. Furthermore, the paper presents a comparative analysis of XAI methods, evaluating their effectiveness, interpretability, and alignment with medical standards. By identifying research gaps and exploring potential advancements, this review aims to contribute to the development of robust, interpretable, and reliable XAI systems for critical applications like medical imaging, ensuring accountability and fostering trust in AI-powered healthcare systems. |
|---|---|
| DOI: | 10.1007/s10586-025-05281-5 |
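The abstract groups XAI outputs into visual explanations, textual justifications, and example-based reasoning. As a rough illustration of the first category only, the sketch below computes a Grad-CAM-style heatmap with PyTorch; the ResNet-18 backbone, the hooked layer, and the random stand-in image are assumptions made for this example and are not drawn from the reviewed paper.

```python
# Minimal Grad-CAM-style sketch of a "visual explanation" (illustrative only).
# The backbone, target layer, and random input are placeholder assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

# weights=None keeps the example self-contained; in practice a model trained
# on medical images (or at least pretrained weights) would be used.
model = models.resnet18(weights=None)
model.eval()

store = {}

def save_activations(module, inputs, output):
    # Feature maps of the hooked layer from the forward pass.
    store["act"] = output.detach()

def save_gradients(module, grad_input, grad_output):
    # Gradient of the class score with respect to those feature maps.
    store["grad"] = grad_output[0].detach()

# Hook the last residual block (a common, but not mandatory, choice).
target_layer = model.layer4[-1]
target_layer.register_forward_hook(save_activations)
target_layer.register_full_backward_hook(save_gradients)

# Stand-in for a preprocessed medical image: 1 x 3 x 224 x 224.
image = torch.randn(1, 3, 224, 224)

logits = model(image)
target_class = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, target_class].backward()

# Grad-CAM: weight each feature map by its average gradient, sum, apply ReLU.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)        # 1 x C x 1 x 1
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # scale to [0, 1]

# `cam` is a heatmap the same size as the input; overlaid on the image, it
# highlights the regions that most influenced the predicted class.
print(target_class, cam.shape)
```

Overlaying such a heatmap on the input scan is one way a diagnostic model can convey the rationale behind a specific decision, which is the kind of transparency the review argues is needed to earn the trust of clinicians, authorities, and patients.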