Underwater scene prior inspired deep underwater image and video enhancement

Bibliographic Details
Published in: Pattern Recognition, Volume 98, p. 107038
Main Authors: Li, Chongyi; Anwar, Saeed; Porikli, Fatih
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.02.2020
ISSN: 0031-3203, 1873-5142
Description
Summary:

Highlights:
• An underwater image and video synthesis approach is required by data-driven methods.
• An underwater scene prior is helpful for underwater image and video enhancement.
• The light-weight network structure can be easily extended to underwater video.

In underwater scenes, wavelength-dependent light absorption and scattering degrade the visibility of images and videos. The degraded underwater images and videos affect the accuracy of pattern recognition, visual understanding, and key feature extraction in underwater scenes. In this paper, we propose an underwater image enhancement convolutional neural network (CNN) model based on an underwater scene prior, called UWCNN. Instead of estimating the parameters of the underwater imaging model, the proposed UWCNN model directly reconstructs the clear latent underwater image, benefiting from an underwater scene prior that can be used to synthesize underwater image training data. In addition, thanks to its light-weight network structure and effective training data, the UWCNN model can be easily extended to underwater videos for frame-by-frame enhancement. Specifically, combining an underwater imaging physical model with the optical properties of underwater scenes, we first synthesize underwater image degradation datasets that cover a diverse set of water types and degradation levels. Then, a light-weight CNN model is designed for each underwater scene type and trained on the corresponding data. Finally, this UWCNN model is directly extended to underwater video enhancement. Experiments on real-world and synthetic underwater images and videos demonstrate that our method generalizes well to different underwater scenes.
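The synthesis step summarized above combines an underwater imaging physical model with scene optical properties. A minimal sketch of such a degradation step in Python, assuming the commonly used simplified model I_c(x) = J_c(x) * t_c(x) + B_c * (1 - t_c(x)) with transmission t_c(x) = exp(-beta_c * d(x)); the per-channel attenuation coefficients and background light below are illustrative placeholders, not the water-type parameters used in the paper:

import numpy as np

def synthesize_underwater(clean_rgb, depth_m,
                          beta_rgb=(0.20, 0.05, 0.03),
                          background_light=(0.05, 0.45, 0.55)):
    # clean_rgb: float array in [0, 1], shape (H, W, 3); depth_m: scene depth in metres, shape (H, W)
    # Simplified model: I_c = J_c * t_c + B_c * (1 - t_c), with t_c = exp(-beta_c * d)
    t = np.exp(-np.asarray(beta_rgb) * depth_m[..., None])  # per-channel transmission, (H, W, 3)
    B = np.asarray(background_light)                         # homogeneous veiling light
    degraded = clean_rgb * t + B * (1.0 - t)                 # direct signal plus back-scatter
    return np.clip(degraded, 0.0, 1.0)

The enhancement network itself is described only as a light-weight CNN trained per water type; a toy residual-learning stand-in (the block count and channel width are assumptions, not the UWCNN architecture) could look like:

import torch
import torch.nn as nn

class TinyEnhancer(nn.Module):
    # Illustrative light-weight enhancer: predict a correction and add it to the degraded input.
    def __init__(self, width=16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.clamp(x + self.body(x), 0.0, 1.0)  # keep output in [0, 1]

Pairs produced by the synthesis function would serve as (degraded, clean) training data for one such network per water type, mirroring the per-scene-type training the abstract describes.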
DOI: 10.1016/j.patcog.2019.107038