Retinal vessel image segmentation algorithm based on encoder-decoder structure

Bibliographic Details
Published in:Multimedia tools and applications Vol. 81; no. 23; pp. 33361 - 33373
Main Authors: Zhai, ZhengLi, Feng, Shu, Yao, Luyao, Li, Penghui
Format: Journal Article
Language:English
Published: New York: Springer US, 01.09.2022
Springer Nature B.V.
ISSN:1380-7501, 1573-7721
Description
Summary: Accurate segmentation of retinal vessel images is significant for the early diagnosis of some diseases. A retinal vessel image segmentation algorithm based on an encoder-decoder structure is proposed. In the encoding stage, the Inception module is used, applying convolution kernels of different scales to extract features and obtain multi-scale information from the image. To enable the model to perceive blood vessels of various shapes and to improve the segmentation accuracy of small vessels, multiple pyramid pooling modules are adopted in the decoding stage to aggregate more contextual information, and multi-scale, multi-local-area feature fusion is used to improve the segmentation result. In addition, a feature fusion method is applied during upsampling to merge low-order semantic features, recovering more low-level detail and further improving segmentation. Experimental results on the DRIVE and STARE fundus image datasets show that the algorithm achieves higher sensitivity, accuracy, and AUC than comparison algorithms, and produces better segmentation.
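The abstract does not give the exact form of the paper's pyramid pooling module, but the general idea (as in PSPNet-style context aggregation) is to average-pool a feature map into grids of several bin sizes, upsample each pooled grid back to the original resolution, and stack the results with the input so later layers see both local detail and wider context. A minimal single-channel numpy sketch, with bin sizes chosen for illustration only:

```python
import numpy as np

def pyramid_pool(feat, bin_sizes=(1, 2, 4)):
    """Aggregate multi-scale context from a 2-D feature map.

    For each bin size b, average-pool `feat` into a b x b grid, then
    nearest-neighbour upsample the grid back to the original (H, W).
    Returns the original map stacked with one context map per bin size,
    shape (1 + len(bin_sizes), H, W).
    """
    H, W = feat.shape
    outputs = [feat]
    for b in bin_sizes:
        # average-pool into a b x b grid of (roughly) equal regions
        pooled = np.zeros((b, b))
        hs = np.linspace(0, H, b + 1).astype(int)
        ws = np.linspace(0, W, b + 1).astype(int)
        for i in range(b):
            for j in range(b):
                pooled[i, j] = feat[hs[i]:hs[i + 1], ws[j]:ws[j + 1]].mean()
        # nearest-neighbour upsample back to (H, W)
        rows = np.arange(H) * b // H
        cols = np.arange(W) * b // W
        outputs.append(pooled[np.ix_(rows, cols)])
    return np.stack(outputs)

feat = np.arange(64, dtype=float).reshape(8, 8)
ctx = pyramid_pool(feat)
print(ctx.shape)  # (4, 8, 8)
```

The bin size 1 branch reduces the whole map to its global mean, so the smallest-scale context map is constant; larger bin sizes preserve progressively more spatial layout. In the paper's network these stacked maps would be fused back into the decoder features by convolution.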
Bibliography:ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
DOI:10.1007/s11042-022-13176-5