The role of explainable artificial intelligence in disease prediction: a systematic literature review and future research directions

Detailed bibliography
Published in: BMC Medical Informatics and Decision Making, Vol. 25, No. 1, pp. 110-17
Main authors: Alkhanbouli, Razan; Matar Abdulla Almadhaani, Hour; Alhosani, Farah; Simsekler, Mecit Can Emre
Format: Journal Article
Language: English
Published: London: BioMed Central, 4 March 2025
ISSN: 1472-6947
Description
Summary: Explainable Artificial Intelligence (XAI) enhances transparency and interpretability in AI models, which is crucial for trust and accountability in healthcare. A potential application of XAI is disease prediction using various data modalities. This study conducts a Systematic Literature Review (SLR) following the PRISMA protocol, synthesizing findings from 30 selected studies to examine XAI's evolving role in disease prediction. It explores commonly used XAI methods, such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), and their impact on disease prediction across medical fields. The review highlights key gaps, including limited dataset diversity, model complexity, and reliance on single data types, emphasizing the need for greater interpretability and data integration. Addressing these issues is crucial for advancing AI in healthcare. This study contributes by outlining current challenges and potential solutions, and by suggesting directions for future research toward more reliable and robust XAI methods.
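
As context for the SHAP method named in the summary, the following is a minimal, illustrative sketch of how SHAP is typically applied to a tabular disease-prediction model, assuming the shap and scikit-learn packages. The synthetic data and feature names (age, bmi, glucose, blood_pressure) are hypothetical and are not drawn from the reviewed studies.

    # Minimal sketch: SHAP attributions for a tabular disease-prediction model.
    # Synthetic data and feature names are hypothetical, for illustration only.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(0)
    feature_names = ["age", "bmi", "glucose", "blood_pressure"]  # hypothetical
    X = rng.normal(size=(500, len(feature_names)))
    # Synthetic label: risk driven mostly by "glucose" and "age".
    y = (X[:, 2] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = GradientBoostingClassifier(random_state=0).fit(X, y)

    # TreeExplainer computes exact Shapley values for tree ensembles; for a
    # binary gradient-boosted classifier the attributions are in log-odds units.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

    # Per-feature attribution for the first patient: positive values push the
    # prediction toward the positive (disease) class.
    for name, value in zip(feature_names, shap_values[0]):
        print(f"{name}: {value:+.3f}")

In this kind of workflow, each patient's prediction is decomposed into per-feature contributions, which is the local, post-hoc interpretability the review attributes to SHAP and LIME.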
DOI: 10.1186/s12911-025-02944-6