Multi-Parametric Fusion of 3D Power Doppler Ultrasound for Fetal Kidney Segmentation Using Fully Convolutional Neural Networks

Saved in:
Detailed bibliography
Title: Multi-Parametric Fusion of 3D Power Doppler Ultrasound for Fetal Kidney Segmentation Using Fully Convolutional Neural Networks
Authors: Nipuna H. Weerasinghe, Nigel H. Lovell, Alec W. Welsh, Gordon N. Stevenson
Source: IEEE Journal of Biomedical and Health Informatics, 25:2050-2057
Publisher information: Institute of Electrical and Electronics Engineers (IEEE), 2021.
Year of publication: 2021
Subjects: Kidney; Kidney Disease; Neural Networks, Computer; Image Processing, Computer-Assisted; Ultrasonography, Doppler; Humans; Reproducibility of Results; Bioengineering; Clinical Research; Biomedical Imaging; Networking and Information Technology R&D (NITRD); anzsrc-for: 46 Information and Computing Sciences; anzsrc-for: 4601 Applied Computing; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering; 03 medical and health sciences; 0302 clinical medicine
Description: Kidney development is key to the long-term health of the fetus. Renal volume and vascularity assessed by 3D ultrasound (3D-US) are known markers of wellbeing; however, the lack of a real-time image segmentation solution precludes these measures from being used in a busy clinical environment. In this work, we aimed to automate kidney segmentation using fully convolutional neural networks (fCNNs). We used multi-parametric input fusion incorporating 3D B-Mode and power Doppler (PD) volumes, aiming to improve segmentation accuracy. Three fusion strategies were assessed against a single-input (B-Mode) network. Early input-level fusion provided the best segmentation accuracy, with an average Dice similarity coefficient (DSC) of 0.81 and Hausdorff distance (HD) of 8.96 mm, an improvement of 0.06 DSC and a reduction of 1.43 mm HD compared to our baseline network. Repeatability against manual segmentation for all models was assessed by intra-class correlation coefficients (ICC), indicating good to excellent reproducibility (ICC 0.93). The framework was extended to support multiple graphics processing units (GPUs) to better handle volumetric data, dense fCNN models, batch normalization, and complex fusion networks. This work and the available source code provide a framework for increasing the parameter space of encoder-decoder-style fCNNs across multiple GPUs, and show that applying multi-parametric 3D-US in fCNN training improves segmentation accuracy.
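The two ideas central to the abstract can be sketched briefly: "early" input-level fusion stacks the co-registered B-Mode and PD volumes along a channel axis before they enter the network, and the Dice similarity coefficient scores overlap between a predicted and a manual mask. This is an illustrative NumPy sketch only, not the authors' released code; the function names are hypothetical.

```python
import numpy as np

def early_fuse(b_mode, power_doppler):
    """Early input-level fusion: stack two co-registered 3D volumes
    into a single 2-channel network input of shape (2, D, H, W)."""
    assert b_mode.shape == power_doppler.shape, "volumes must be co-registered"
    return np.stack([b_mode, power_doppler], axis=0)

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

if __name__ == "__main__":
    b = np.random.rand(16, 16, 16)   # toy B-Mode volume
    pd_vol = np.random.rand(16, 16, 16)  # toy power Doppler volume
    fused = early_fuse(b, pd_vol)
    print(fused.shape)  # (2, 16, 16, 16)
```

The intermediate- and decision-level fusion variants compared in the paper would instead merge feature maps or predictions deeper in the network, which this metadata record does not detail.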
Document type: Article
File description: application/pdf
ISSN: 2168-2208; 2168-2194
DOI: 10.1109/jbhi.2020.3027318
Access URLs: https://pubmed.ncbi.nlm.nih.gov/32991292
https://europepmc.org/article/MED/32991292
https://ieeexplore.ieee.org/document/9209048
https://www.ncbi.nlm.nih.gov/pubmed/32991292
https://dblp.uni-trier.de/db/journals/titb/titb25.html#WeerasingheLWS21
Rights: IEEE Copyright; CC BY-NC-ND
Accession number: edsair.doi.dedup.....d0b842acb74a4ce45c8fe543a8929872
Database: OpenAIRE