On Interpretability of CNNs for Multimodal Medical Image Segmentation


Detailed Bibliography
Published in: 2022 30th European Signal Processing Conference (EUSIPCO), pp. 1417–1421
Main Authors: Lazendic, Srdan, Janssens, Jens, Huang, Shaoguang, Pizurica, Aleksandra
Format: Conference Paper
Language: English
Published: EUSIPCO, 29.08.2022
Subjects:
ISSN:2076-1465
Online Access: Get full text
Description
Summary: Despite their huge potential, deep learning-based models are still not trustworthy enough to warrant their adoption in clinical practice. Research on the interpretability and explainability of deep learning is currently attracting huge attention. The Multilayer Convolutional Sparse Coding (ML-CSC) data model provides a model-based explanation of convolutional neural networks (CNNs). In this article, we extend the ML-CSC framework towards multimodal data for medical image segmentation, and propose a merged joint feature extraction ML-CSC model. This work generalizes and improves upon our previous model by deriving a more elegant approach that merges feature extraction and convolutional sparse coding in a unified framework. A segmentation study on a multimodal magnetic resonance imaging (MRI) dataset confirms the effectiveness of the proposed approach. We also provide an interpretability study of the involved model parameters.
DOI:10.23919/EUSIPCO55093.2022.9909776
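The ML-CSC view summarized above rests on the observation that a CNN forward pass is structurally a layered thresholding pursuit: each layer applies a convolutional analysis operator followed by a shrinkage nonlinearity. The sketch below illustrates that generic connection only; it is not the paper's merged joint feature extraction model, and the 1-D filters and thresholds are placeholder values chosen for illustration.

```python
import numpy as np

def soft_threshold(z, beta):
    # Soft-thresholding: the proximal operator of the l1 penalty
    return np.sign(z) * np.maximum(np.abs(z) - beta, 0.0)

def layered_thresholding(x, filters, thresholds):
    """Layered soft-thresholding pursuit for an ML-CSC model (1-D sketch).

    Each layer correlates the current representation with that layer's
    convolutional filter (the analysis operator D^T) and soft-thresholds
    the result -- the same computation as a CNN layer with a shrinkage
    nonlinearity, which is the core of the ML-CSC explanation of CNNs.
    """
    gamma = x
    for d, beta in zip(filters, thresholds):
        gamma = soft_threshold(np.correlate(gamma, d, mode="same"), beta)
    return gamma

# Toy signal and a single placeholder layer (illustrative values only)
signal = np.array([0.0, 0.0, 3.0, 0.0, 0.0])
codes = layered_thresholding(signal, [np.array([0.0, 1.0, 0.0])], [1.0])
# The centered delta filter passes the signal through unchanged, so the
# layer only shrinks the single active coefficient from 3 to 2.
```

Stacking several such layers gives a multilayer sparse code; interpretability then comes from inspecting the learned filters and thresholds rather than treating the network as a black box.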