Deep3DSCan: Deep residual network and morphological descriptor based framework for lung cancer classification and 3D segmentation

Bibliographic Details
Published in: IET Image Processing Vol. 14; no. 7; pp. 1240 - 1247
Main Authors: Bansal, Gaurang, Chamola, Vinay, Narang, Pratik, Kumar, Subham, Raman, Sundaresan
Format: Journal Article
Language:English
Published: The Institution of Engineering and Technology 29.05.2020
ISSN:1751-9659, 1751-9667
Description
Summary: With the increasing incidence of lung cancer, early diagnosis could help reduce the mortality rate. However, accurate recognition of cancerous lesions is immensely challenging owing to factors such as low contrast variation, heterogeneity, and visual similarity between benign and malignant nodules. Deep learning techniques have been very effective at natural image segmentation, with robustness to previously unseen situations, reasonable scale invariance, and the ability to detect even minute differences. However, they usually fail to learn domain-specific features because of the limited amount of available data and the domain-agnostic nature of these techniques. This work presents an ensemble framework, Deep3DSCan, for lung cancer segmentation and classification. The deep 3D segmentation network generates the 3D volume of interest from computed tomography scans of patients. The deep features and handcrafted descriptors are extracted using a fine-tuned residual network and morphological techniques, respectively. Finally, the fused features are used for cancer classification. The experiments were conducted on the publicly available LUNA16 dataset. For segmentation, the authors report an accuracy of 0.927, a significant improvement over the template matching technique, which had achieved an accuracy of 0.927. For detection, the previous state-of-the-art result is 0.866, while the proposed method achieves 0.883.
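The fusion step described in the summary (concatenating deep network features with handcrafted morphological descriptors before classification) can be illustrated with a minimal sketch. The descriptor choices here (voxel volume, bounding-box extent, and a compactness ratio) and all function names are illustrative assumptions for a toy 3D nodule mask, not the paper's actual feature set:

```python
import numpy as np

def morphological_descriptors(mask):
    """Hypothetical handcrafted descriptors from a binary 3D nodule mask:
    voxel volume, bounding-box extent, and a compactness ratio."""
    coords = np.argwhere(mask)
    volume = float(len(coords))
    if volume == 0:
        return np.zeros(3)
    # volume of the axis-aligned bounding box enclosing the nodule
    extent = float((coords.max(axis=0) - coords.min(axis=0) + 1).prod())
    compactness = volume / extent  # how fully the nodule fills its box
    return np.array([volume, extent, compactness])

def fuse_features(deep_features, mask):
    """Concatenate deep (e.g. ResNet) features with handcrafted descriptors,
    producing the fused vector fed to the final classifier."""
    return np.concatenate([deep_features, morphological_descriptors(mask)])

# Toy example: a 2x2x2 "nodule" inside a 4x4x4 volume.
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True
deep = np.random.rand(8)          # stand-in for a deep feature embedding
fused = fuse_features(deep, mask)
print(fused.shape)                # (11,) = 8 deep + 3 handcrafted
```

In the actual framework, the deep features would come from the fine-tuned residual network applied to the segmented volume of interest rather than a random vector.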
DOI:10.1049/iet-ipr.2019.1164