Choose your explanation: a comparison of SHAP and Grad-CAM in human activity recognition
| Published in: | Applied Intelligence (Dordrecht, Netherlands), Vol. 55, No. 17, p. 1107 |
|---|---|
| Main Authors: | , , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: Springer US, 01.11.2025 (Springer Nature B.V.) |
| Subjects: | |
| ISSN: | 0924-669X, 1573-7497 |
| Summary: | Explaining machine learning (ML) models using eXplainable AI (XAI) techniques has become essential to make them more transparent and trustworthy. This is especially important in high-risk environments like healthcare, where understanding model decisions is critical to ensure ethical, sound, and trustworthy outcome predictions. However, users are often confused about which explainability method to choose for their specific use case. We present a comparative analysis of two explainability methods, Shapley Additive Explanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM), within the domain of human activity recognition (HAR) utilizing graph convolutional networks (GCNs). By evaluating these methods on skeleton-based input representations from two real-world datasets, including a healthcare-critical cerebral palsy (CP) case, this study provides vital insights into both approaches’ strengths, limitations, and differences, offering a roadmap for selecting the most appropriate explanation method based on specific models and applications. We qualitatively and quantitatively compare the two methods, focusing on feature importance ranking and model sensitivity through perturbation experiments. While SHAP provides detailed input feature attribution, Grad-CAM delivers faster, spatially oriented explanations, making both methods complementary depending on the application’s requirements. Given the importance of XAI in enhancing trust and transparency in ML models, particularly in sensitive environments like healthcare, our research demonstrates how SHAP and Grad-CAM could complement each other to provide model explanations. |
|---|---|
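
The abstract contrasts SHAP’s per-feature attributions with Grad-CAM’s faster, spatially oriented heatmaps and mentions perturbation experiments as the quantitative sensitivity check. The sketch below is not taken from the paper; it is a minimal, hypothetical PyTorch illustration of a Grad-CAM-style explanation (layer activations weighted by the class gradient) on a toy skeleton-sequence classifier, followed by a simple perturbation check that masks the highest-ranked joints and measures the drop in class score. The toy model, chosen layer, and tensor shapes are assumptions for illustration only.

```python
# Illustrative sketch only (not the paper's code or data).
import torch
import torch.nn as nn

class ToySkeletonNet(nn.Module):
    """Tiny stand-in for a GCN-style HAR classifier; input shape (B, 3, frames, joints)."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):
        feats = self.features(x)           # (B, 32, T, V)
        return self.fc(self.pool(feats).flatten(1))

def grad_cam(model, x, target_class, layer):
    """Grad-CAM: weight the layer's activations by the spatially averaged
    gradient of the target class score, then ReLU and normalize."""
    acts, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))
    try:
        logits = model(x)
        model.zero_grad()
        logits[0, target_class].backward()
        a, g = acts["a"], grads["g"]                # both (1, C, T, V)
        weights = g.mean(dim=(2, 3), keepdim=True)  # one weight per channel
        cam = torch.relu((weights * a).sum(dim=1))  # (1, T, V)
        cam = cam / (cam.max() + 1e-8)
    finally:
        h1.remove()
        h2.remove()
    return cam.squeeze(0).detach()                  # (T, V) heatmap

def perturbation_drop(model, x, target_class, joint_scores, k=5):
    """Mask the k joints ranked most important and report the change in the
    target-class probability (a larger drop suggests a more faithful ranking)."""
    with torch.no_grad():
        base = torch.softmax(model(x), dim=1)[0, target_class]
        top = joint_scores.argsort(descending=True)[:k]
        x_masked = x.clone()
        x_masked[:, :, :, top] = 0.0                # zero out those joints
        pert = torch.softmax(model(x_masked), dim=1)[0, target_class]
    return (base - pert).item()

if __name__ == "__main__":
    model = ToySkeletonNet().eval()
    x = torch.randn(1, 3, 50, 25)                   # (batch, channels, frames, joints)
    cls = model(x).argmax(dim=1).item()
    cam = grad_cam(model, x, cls, model.features[2])  # explain the second conv layer
    per_joint = cam.mean(dim=0)                       # importance per joint
    print("score drop after masking top joints:",
          perturbation_drop(model, x, cls, per_joint))
```

A SHAP-based ranking could, in principle, be fed into the same perturbation check (for example, by averaging the magnitude of per-joint SHAP values from a gradient-based SHAP explainer), which is the kind of side-by-side sensitivity comparison the abstract describes.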
| DOI: | 10.1007/s10489-025-06968-3 |
|---|---|