Underwater Color Restoration Using U-Net Denoising Autoencoder

Bibliographic Details
Published in:2019 11th International Symposium on Image and Signal Processing and Analysis (ISPA), pp. 117-122
Main Authors: Hashisho, Yousif, Albadawi, Mohamad, Krause, Tom, von Lukas, Uwe Freiherr
Format: Conference Proceeding
Language:English
Published: IEEE 01.09.2019
Subjects:
ISSN:1849-2266
Description
Summary:Visual inspection of underwater structures by vehicles, e.g. remotely operated vehicles (ROVs), plays an important role in the scientific, military, and commercial sectors. However, the automatic extraction of information using software tools is hindered by the characteristics of water, which degrade the quality of captured videos. As a contribution to restoring the color of underwater images, an Underwater Denoising Autoencoder (UDAE) model is developed using a denoising autoencoder with a U-Net architecture. The proposed network takes both accuracy and computation cost into consideration to enable real-time use in underwater visual tasks with an end-to-end autoencoder network. Underwater vehicles' perception is improved by reconstructing captured frames, thereby obtaining better performance in underwater tasks. Related learning methods use generative adversarial networks (GANs) to generate color-corrected underwater images; to our knowledge, this paper is the first to show that a single autoencoder is capable of producing the same or better results. Moreover, image pairs are constructed for training the proposed network, since such a dataset is hard to obtain from underwater scenery. Finally, the proposed model is compared to a state-of-the-art method.
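The abstract describes an end-to-end denoising autoencoder with a U-Net architecture, i.e. an encoder-decoder network whose encoder feature maps are concatenated into the decoder via skip connections. The sketch below illustrates that general structure in PyTorch; the layer sizes, depth, and class name `UDAE` here are illustrative assumptions, not the authors' actual configuration.

```python
import torch
import torch.nn as nn

class UDAE(nn.Module):
    """Minimal U-Net-style denoising autoencoder sketch (hypothetical layout)."""

    def __init__(self):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.bottleneck = nn.Sequential(nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
        self.out = nn.Conv2d(16, 3, 1)

    def forward(self, x):
        s1 = self.enc1(x)           # full-resolution features, kept for the skip
        s2 = self.enc2(s1)          # downsample by 2
        b = self.bottleneck(s2)
        u = self.up(b)              # upsample back to full resolution
        u = torch.cat([u, s1], 1)   # U-Net skip connection: concatenate encoder features
        return torch.sigmoid(self.out(self.dec1(u)))

model = UDAE()
x = torch.rand(1, 3, 64, 64)  # a toy "degraded" underwater frame
y = model(x)                  # restored frame, same spatial size, values in [0, 1]
print(tuple(y.shape))         # (1, 3, 64, 64)
```

Training would pair degraded inputs with color-correct targets and minimize a reconstruction loss (e.g. L1 or MSE), matching the image-pair training scheme the abstract mentions.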
DOI:10.1109/ISPA.2019.8868679