Joint Sequence Learning and Cross-Modality Convolution for 3D Biomedical Segmentation


Detailed Bibliography
Published in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3739–3746
Main Authors: Kuan-Lun Tseng, Yen-Liang Lin, Winston Hsu, Chung-Yang Huang
Format: Conference Paper
Language: English
Published: IEEE, 01.07.2017
ISSN: 1063-6919
Online Access: Get full text
Description
Summary: Deep learning models such as convolutional neural networks have been widely used in 3D biomedical segmentation and achieve state-of-the-art performance. However, most of them either adopt a single modality or stack multiple modalities as different input channels, which ignores the correlations among them. To leverage the multiple modalities, we propose a deep convolutional encoder-decoder structure with fusion layers to incorporate different modalities of MRI data. In addition, we exploit a convolutional LSTM (convLSTM) to model a sequence of 2D slices, and jointly learn the multi-modality fusion and the convLSTM in an end-to-end manner. To avoid converging to certain labels, we adopt a re-weighting scheme and two-phase training to handle the label imbalance. Experimental results on BRATS-2015 [13] show that our method outperforms state-of-the-art biomedical segmentation approaches.
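The re-weighting scheme mentioned in the abstract addresses the fact that, in brain-tumor segmentation, background voxels vastly outnumber tumor voxels, so an unweighted loss collapses toward the dominant label. A minimal sketch of one common variant, inverse-frequency class weighting, is shown below; the paper's exact scheme may differ, and `class_weights` and the toy label map are illustrative assumptions, not the authors' code.

```python
import numpy as np

def class_weights(labels, num_classes):
    """Per-class weights inversely proportional to voxel frequency.

    Hypothetical sketch of inverse-frequency re-weighting for
    segmentation label imbalance; not the paper's exact scheme.
    """
    counts = np.bincount(labels.ravel(), minlength=num_classes).astype(float)
    counts[counts == 0] = 1.0  # avoid division by zero for absent classes
    # Normalize so a perfectly balanced label map would give weight 1.0 each.
    return counts.sum() / (num_classes * counts)

# Toy 2D label slice: background (0) dominates, tumor classes (1, 2) are rare.
labels = np.zeros((64, 64), dtype=np.int64)
labels[10:14, 10:14] = 1   # 16 voxels of class 1
labels[30:32, 30:32] = 2   # 4 voxels of class 2
w = class_weights(labels, num_classes=3)
print(w)  # rarer classes receive larger weights than background
```

These weights would typically multiply the per-voxel cross-entropy terms during the first training phase, so gradients from rare tumor classes are not drowned out by the background.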
DOI: 10.1109/CVPR.2017.398