On Interpretability of CNNs for Multimodal Medical Image Segmentation

Detailed bibliography
Published in: 2022 30th European Signal Processing Conference (EUSIPCO), pp. 1417-1421
Main authors: Lazendic, Srdan; Janssens, Jens; Huang, Shaoguang; Pizurica, Aleksandra
Format: Conference paper
Language: English
Publication details: EUSIPCO, 29.08.2022
ISSN:2076-1465
Online access: Get full text
Description
Summary: Despite their huge potential, deep learning-based models are still not trustworthy enough to warrant their adoption in clinical practice. Research on the interpretability and explainability of deep learning is therefore attracting considerable attention. The Multilayer Convolutional Sparse Coding (ML-CSC) data model provides a model-based explanation of convolutional neural networks (CNNs). In this article, we extend the ML-CSC framework towards multimodal data for medical image segmentation, and propose a merged joint feature extraction ML-CSC model. This work generalizes and improves upon our previous model by deriving a more elegant approach that merges feature extraction and convolutional sparse coding in a unified framework. A segmentation study on a multimodal magnetic resonance imaging (MRI) dataset confirms the effectiveness of the proposed approach. We also provide an interpretability study of the involved model parameters.
DOI: 10.23919/EUSIPCO55093.2022.9909776