On Interpretability of CNNs for Multimodal Medical Image Segmentation

Bibliographic Details
Published in: 2022 30th European Signal Processing Conference (EUSIPCO), pp. 1417-1421
Main Authors: Lazendic, Srdan, Janssens, Jens, Huang, Shaoguang, Pizurica, Aleksandra
Format: Conference Proceeding
Language: English
Published: EUSIPCO 29.08.2022
ISSN: 2076-1465
Description
Summary: Despite their huge potential, deep learning-based models are still not trustworthy enough to warrant their adoption in clinical practice. Research on the interpretability and explainability of deep learning is currently attracting considerable attention. The Multilayer Convolutional Sparse Coding (ML-CSC) data model provides a model-based explanation of convolutional neural networks (CNNs). In this article, we extend the ML-CSC framework to multimodal data for medical image segmentation and propose a merged joint feature extraction ML-CSC model. This work generalizes and improves upon our previous model by deriving a more elegant approach that merges feature extraction and convolutional sparse coding in a unified framework. A segmentation study on a multimodal magnetic resonance imaging (MRI) dataset confirms the effectiveness of the proposed approach. We also provide an interpretability study of the involved model parameters.
DOI: 10.23919/EUSIPCO55093.2022.9909776