Cluster-CAM: Cluster-weighted visual interpretation of CNNs’ decision in image classification

Bibliographic Details
Published in: Neural Networks, Vol. 178, Article 106473
Main Authors: Feng, Zhenpeng; Ji, Hongbing; Daković, Miloš; Cui, Xiyang; Zhu, Mingzhe; Stanković, Ljubiša
Format: Journal Article
Language: English
Published: Elsevier Ltd, United States, 01.10.2024
ISSN: 0893-6080 (print); 1879-2782 (online)
Abstract Despite the tremendous success of convolutional neural networks (CNNs) in computer vision, the mechanism of CNNs still lacks a clear interpretation. Currently, class activation mapping (CAM), a well-known visualization technique for interpreting a CNN's decisions, has drawn increasing attention. Gradient-based CAMs are efficient, but their performance is heavily affected by gradient vanishing and exploding. In contrast, gradient-free CAMs avoid computing gradients and produce more understandable results; however, they are quite time-consuming because hundreds of forward inferences per image are required. In this paper, we propose Cluster-CAM, an effective and efficient gradient-free CNN interpretation algorithm. Cluster-CAM significantly reduces the number of forward propagations by splitting the feature maps into clusters. Furthermore, we propose a strategy to forge a cognition-base map and cognition-scissors from the clustered feature maps. The final saliency heatmap is produced by merging these cognition maps. Qualitative results clearly show that Cluster-CAM produces heatmaps whose highlighted regions match human cognition more precisely than those of existing CAMs. The quantitative evaluation further demonstrates the superiority of Cluster-CAM in both effectiveness and efficiency.
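The abstract's core pipeline (cluster the feature maps so that one forward pass is needed per cluster rather than per channel, then merge score-weighted cluster maps into a heatmap) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: it uses a tiny k-means for the clustering step, a `score_fn` callable standing in for a forward pass of the network on the masked input, and a simple score-weighted sum in place of the paper's cognition-base map / cognition-scissors construction, which is not reproduced here.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means on row vectors X (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # squared distance of every sample to every center -> (n, k)
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):  # skip empty clusters
                centers[j] = X[labels == j].mean(0)
    return labels

def cluster_cam(feature_maps, score_fn, n_clusters=4):
    """Gradient-free CAM sketch: cluster the K channel maps, then call
    score_fn once per cluster (n_clusters << K forward passes) and merge
    the cluster-averaged maps weighted by their scores."""
    K, H, W = feature_maps.shape
    labels = kmeans(feature_maps.reshape(K, -1), n_clusters)
    heatmap = np.zeros((H, W))
    for j in range(n_clusters):
        members = feature_maps[labels == j]
        if members.size == 0:
            continue
        mask = members.mean(0)  # cluster-averaged activation map
        m = (mask - mask.min()) / (mask.max() - mask.min() + 1e-8)
        w = score_fn(m)  # stand-in for the model's confidence on the masked input
        heatmap += w * m
    heatmap = np.maximum(heatmap, 0)  # keep positive evidence only
    return heatmap / (heatmap.max() + 1e-8)
```

With K channels and C clusters, the cost drops from K forward passes (as in per-channel gradient-free CAMs such as Score-CAM) to C, which is the efficiency gain the abstract claims.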
ArticleNumber 106473
Author_xml – sequence: 1
  givenname: Zhenpeng
  orcidid: 0000-0002-0383-4794
  surname: Feng
  fullname: Feng, Zhenpeng
  organization: School of Electronic Engineering, Xidian University, Xi’an, China
– sequence: 2
  givenname: Hongbing
  surname: Ji
  fullname: Ji, Hongbing
  email: hbji@xidian.edu.cn
  organization: School of Electronic Engineering, Xidian University, Xi’an, China
– sequence: 3
  givenname: Miloš
  orcidid: 0000-0002-3317-3632
  surname: Daković
  fullname: Daković, Miloš
  organization: Faculty of Electrical Engineering, University of Montenegro, Podgorica, Montenegro
– sequence: 4
  givenname: Xiyang
  surname: Cui
  fullname: Cui, Xiyang
  organization: School of Electronic Engineering, Xidian University, Xi’an, China
– sequence: 5
  givenname: Mingzhe
  orcidid: 0000-0002-7962-3344
  surname: Zhu
  fullname: Zhu, Mingzhe
  organization: School of Electronic Engineering, Xidian University, Xi’an, China
– sequence: 6
  givenname: Ljubiša
  orcidid: 0000-0002-9736-9036
  surname: Stanković
  fullname: Stanković, Ljubiša
  organization: Faculty of Electrical Engineering, University of Montenegro, Podgorica, Montenegro
BackLink: https://www.ncbi.nlm.nih.gov/pubmed/38941740 (View this record in MEDLINE/PubMed)
ContentType Journal Article
Copyright 2024 Elsevier Ltd
Copyright © 2024 Elsevier Ltd. All rights reserved.
DOI 10.1016/j.neunet.2024.106473
DatabaseName CrossRef
PubMed
MEDLINE - Academic
Discipline Computer Science
EISSN 1879-2782
ExternalDocumentID 38941740
10_1016_j_neunet_2024_106473
S0893608024003976
Genre Journal Article
IsPeerReviewed true
IsScholarly true
Keywords Explainable artificial intelligence
Clustering algorithm
Class activation mapping
Image classification
Language English
License Copyright © 2024 Elsevier Ltd. All rights reserved.
ORCID 0000-0002-9736-9036
0000-0002-0383-4794
0000-0002-7962-3344
0000-0002-3317-3632
PMID 38941740
PQID 3073652846
PQPubID 23479
PublicationDate 2024-10-01
PublicationPlace United States
PublicationTitle Neural networks
PublicationTitleAlternate Neural Netw
PublicationYear 2024
Publisher Elsevier Ltd
  doi: 10.1109/MSP.2017.2696572
StartPage 106473
SubjectTerms Class activation mapping
Clustering algorithm
Explainable artificial intelligence
Image classification
Title Cluster-CAM: Cluster-weighted visual interpretation of CNNs’ decision in image classification
URI https://dx.doi.org/10.1016/j.neunet.2024.106473
https://www.ncbi.nlm.nih.gov/pubmed/38941740
https://www.proquest.com/docview/3073652846
Volume 178