Spatial feature fusion in 3D convolutional autoencoders for lung tumor segmentation from 3D CT images

Published in: Biomedical Signal Processing and Control, Volume 78, Article 103996
Main authors: Najeeb, Suhail; Bhuiyan, Mohammed Imamul Hassan
Format: Journal Article
Language: English
Publication details: Elsevier Ltd, 01.09.2022
ISSN: 1746-8094, 1746-8108
Description
Summary: Accurate detection and segmentation of lung tumors from volumetric CT scans is a critical area of research for the development of computer-aided diagnosis systems for lung cancer. Several existing methods of 2D biomedical image segmentation based on convolutional autoencoders show decent performance for the task. However, it is imperative to make use of volumetric data for 3D segmentation tasks. Existing 3D segmentation networks are computationally expensive and have several limitations. In this paper, we introduce a novel approach which makes use of the spatial features learned at different levels of a 2D convolutional autoencoder to create a 3D segmentation network capable of more efficiently utilizing spatial and volumetric information. Our studies show that, without any major changes to the underlying architecture and with minimal computational overhead, our proposed approach can improve lung tumor segmentation performance by 1.61%, 2.25%, and 2.42% for the 3D-UNet, 3D-MultiResUNet, and Recurrent-3D-DenseUNet networks, respectively, on the LOTUS dataset in terms of mean 2D dice coefficient. Our proposed models also report improvements of 7.58%, 2.32%, and 4.28%, respectively, in terms of 3D dice coefficient. The proposed modified version of the 3D-MultiResUNet network outperforms existing segmentation architectures on the dataset with a mean 2D dice coefficient of 0.8669. A key feature of our proposed method is that it can be applied to different convolutional autoencoder based segmentation networks to improve segmentation performance.
• Segmentation of lung tumor volumes using 3D convolutional autoencoders.
• Extraction of spatial features from 2D architectures.
• Spatial feature fusion allows efficient utilization of both spatial and volumetric information.
• Proposed models SFF-3D-UNet, SFF-3D-MultiResUNet, and SFF-Recurrent-3D-DenseUNet show improvements of 1.61%, 2.25%, and 2.42% in terms of 2D dice coefficient and 7.58%, 2.32%, and 4.28% in terms of 3D dice coefficient, respectively.
• The best proposed model, SFF-3D-MultiResUNet, achieves a 2D dice score of 0.8669 on the LOTUS Benchmark.
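The 2D and 3D dice coefficients reported above can be illustrated with a short sketch. The following Python snippet is an illustrative example only, not the authors' code: it computes the standard dice overlap between binary masks and a mean 2D dice averaged over a volume's slices; the paper's exact smoothing constant and handling of empty slices may differ.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (works for 2D slices or 3D volumes).

    eps is a small smoothing constant (an assumption here) so that two
    empty masks score ~1 instead of dividing by zero.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def mean_2d_dice(pred_vol, target_vol):
    """Mean 2D dice: average the per-slice dice over the first (slice) axis."""
    return float(np.mean([dice_coefficient(p, t)
                          for p, t in zip(pred_vol, target_vol)]))
```

A 3D dice score applies `dice_coefficient` to the whole volume at once, so large and small slices are weighted by voxel count rather than equally per slice, which is why the two metrics in the record differ.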
DOI: 10.1016/j.bspc.2022.103996