Choose your explanation: a comparison of SHAP and Grad-CAM in human activity recognition


Detailed bibliography
Published in: Applied Intelligence (Dordrecht, Netherlands), Volume 55, Issue 17, p. 1107
Main authors: Tempel, Felix; Groos, Daniel; Ihlen, Espen Alexander F.; Adde, Lars; Strümke, Inga
Medium: Journal Article
Language: English
Published: New York: Springer US, 01.11.2025 (Springer Nature B.V.)
ISSN: 0924-669X, 1573-7497
Description
Summary: Explaining machine learning (ML) models using eXplainable AI (XAI) techniques has become essential to make them more transparent and trustworthy. This is especially important in high-risk environments like healthcare, where understanding model decisions is critical to ensure ethical, sound, and trustworthy outcome predictions. However, users are often confused about which explainability method to choose for their specific use case. We present a comparative analysis of two explainability methods, Shapley Additive Explanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM), within the domain of human activity recognition (HAR) utilizing graph convolutional networks (GCNs). By evaluating these methods on skeleton-based input representations from two real-world datasets, including a healthcare-critical cerebral palsy (CP) case, this study provides vital insights into both approaches' strengths, limitations, and differences, offering a roadmap for selecting the most appropriate explanation method based on specific models and applications. We qualitatively and quantitatively compare the two methods, focusing on feature importance ranking and model sensitivity through perturbation experiments. While SHAP provides detailed input feature attribution, Grad-CAM delivers faster, spatially oriented explanations, making the two methods complementary depending on the application's requirements. Given the importance of XAI in enhancing trust and transparency in ML models, particularly in sensitive environments like healthcare, our research demonstrates how SHAP and Grad-CAM can complement each other to provide model explanations.
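
To make the contrast in the summary concrete, the following sketch shows, in broad strokes, how a Grad-CAM-style heatmap and a crude occlusion-based attribution (a rough stand-in for SHAP, which averages over many feature coalitions rather than single perturbations) could be computed for a toy skeleton-input classifier. The model, joint and frame counts, and pooling choices are illustrative assumptions and are not taken from the paper.

    # Minimal sketch (not the authors' code): Grad-CAM vs. occlusion-style
    # attribution on a toy skeleton classifier. Shapes and model are assumed.
    import torch
    import torch.nn as nn

    N_JOINTS, N_FRAMES, N_CLASSES = 25, 64, 10  # assumed skeleton layout

    class TinyHAR(nn.Module):
        """Stand-in for a GCN-based HAR model: a conv over (frames, joints)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 = (x, y, z)
                nn.ReLU(),
            )
            self.head = nn.Linear(16, N_CLASSES)

        def forward(self, x):                      # x: (B, 3, T, V)
            fmap = self.features(x)                # (B, 16, T, V)
            pooled = fmap.mean(dim=(2, 3))         # global average pool
            return self.head(pooled), fmap

    model = TinyHAR().eval()
    x = torch.randn(1, 3, N_FRAMES, N_JOINTS)

    # Grad-CAM style: channel weights from pooled gradients of the target logit,
    # then a gradient-weighted sum of feature maps gives a (time, joint) heatmap.
    logits, fmap = model(x)
    target = logits[0].argmax()
    grads = torch.autograd.grad(logits[0, target], fmap)[0]   # (B, 16, T, V)
    weights = grads.mean(dim=(2, 3), keepdim=True)            # per-channel weights
    cam = (weights * fmap).sum(dim=1).clamp(min=0)[0]         # (T, V) heatmap
    joint_saliency = cam.mean(dim=0)                          # importance per joint

    # Occlusion-style attribution: zero out each joint and measure the drop in
    # the target logit. Real SHAP (e.g. shap.GradientExplainer) is far more
    # principled; this single-feature perturbation is only illustrative.
    attributions = torch.zeros(N_JOINTS)
    with torch.no_grad():
        base = model(x)[0][0, target]
        for v in range(N_JOINTS):
            x_masked = x.clone()
            x_masked[..., v] = 0.0
            attributions[v] = base - model(x_masked)[0][0, target]

    print("Grad-CAM joint saliency :", joint_saliency.detach().numpy().round(3))
    print("Occlusion attributions  :", attributions.numpy().round(3))

The two outputs highlight the trade-off described in the summary: the gradient-weighted map is obtained from a single backward pass and is spatially structured, while the perturbation-based attribution requires one forward pass per feature but attributes importance directly to the input joints.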
DOI: 10.1007/s10489-025-06968-3