CNNs for automatic glaucoma assessment using fundus images: an extensive validation

Saved in:
Detailed bibliography
Title: CNNs for automatic glaucoma assessment using fundus images: an extensive validation
Authors: Díaz Pinto, Andrés; Morales Martínez, Sandra; Naranjo Ornedo, Valeriana; Köhler, Thomas; Mossi García, José Manuel; Navea Tejerina, Amparo
Contributors: Producción Científica UCH 2019, UCH. Departamento de Cirugía (Extinguido), UCH. Departamento de Medicina y Cirugía
Publisher: Springer Nature
Year of publication: 2019
Subjects: Glaucoma - Databases, Glaucoma - Imaging, Glaucoma - Diagnostic imaging, Eye - Diseases - Imaging, Eye - Diseases - Diagnostic imaging, Neural networks (Neurobiology)
Description: This article has been definitively published at: https://biomedical-engineering-online.biomedcentral.com/articles/10.1186/s12938-019-0649-y ; The following authors also contributed to this article: Sandra Morales, Valery Naranjo, Thomas Köhler, Jose M. Mossi and Amparo Navea. ; Background: Most current algorithms for automatic glaucoma assessment using fundus images rely on handcrafted features based on segmentation, which are affected by the performance of the chosen segmentation method and the quality of the extracted features. Among other strengths, convolutional neural networks (CNNs) are known for their ability to learn highly discriminative features from raw pixel intensities. Methods: In this paper, we employed five different ImageNet-trained models (VGG16, VGG19, InceptionV3, ResNet50 and Xception) for automatic glaucoma assessment using fundus images. Results from an extensive validation using cross-validation and cross-testing strategies were compared with previous works in the literature. Results: Using five public databases (1707 images), an average AUC of 0.9605 with a 95% confidence interval of 95.92–97.07%, an average specificity of 0.8580 and an average sensitivity of 0.9346 were obtained with the Xception architecture, significantly improving on other state-of-the-art works. Moreover, a new clinical database, ACRIMA, has been made publicly available, containing 705 labelled images. It is composed of 396 glaucomatous images and 309 normal images, making it the largest public database for glaucoma diagnosis. The high specificity and sensitivity obtained with the proposed approach are supported by an extensive validation using not only the cross-validation strategy but also cross-testing validation on, to the best of the authors' knowledge, all publicly available glaucoma-labelled databases. Conclusions: These results suggest that using ImageNet-trained models is a robust alternative for automatic glaucoma screening systems.
All images, CNN weights and software used to fine-tune and ...
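The abstract reports screening performance as average sensitivity (0.9346) and specificity (0.8580). For readers unfamiliar with these metrics, a minimal sketch of how they are computed from a binary confusion matrix follows; the per-cell counts below are hypothetical, chosen only to mirror the ACRIMA class sizes (396 glaucomatous, 309 normal), and are not taken from the paper.

```python
# Sensitivity and specificity for a binary glaucoma screening task,
# computed from raw confusion-matrix counts (pure stdlib, no deps).

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: fraction of glaucomatous images correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: fraction of normal images correctly passed."""
    return tn / (tn + fp)

# Hypothetical counts on a 705-image set (396 glaucomatous, 309 normal).
tp, fn = 370, 26   # glaucomatous images: detected / missed
tn, fp = 265, 44   # normal images: correctly passed / falsely flagged

print(f"sensitivity = {sensitivity(tp, fn):.4f}")  # 370/396
print(f"specificity = {specificity(tn, fp):.4f}")  # 265/309
```

Note the asymmetry the paper's numbers imply: a screening system typically favours high sensitivity (few missed glaucoma cases) at some cost in specificity (more false referrals of healthy eyes).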
Document type: article in journal/newspaper
File format: application/pdf
Language: English
Relation: This work was funded by the Ministerio de Economía y Competitividad of the Spanish Government, ACRIMA Project (TIN2013-46751-R) and GALAHAD Project (H2020-ICT-2016-2017, 732613). Andrés Díaz-Pinto was funded by the Generalitat Valenciana through a Santiago Grisolía grant (GRISOLIA/2015/027).; BioMedical Engineering OnLine, vol. 18 (20 Mar. 2019).; H2020-ICT-2016-2017, 732613; TIN2013-46751-R; http://hdl.handle.net/10637/10762
DOI: 10.1186/s12938-019-0649-y
Availability: http://hdl.handle.net/10637/10762
https://doi.org/10.1186/s12938-019-0649-y
Rights: http://creativecommons.org/licenses/by/4.0/deed.es
Accession number: edsbas.5E76CA31
Database: BASE