VAE-CoGAN: Unpaired image-to-image translation for low-level vision

Detailed bibliography
Published in: Signal, Image and Video Processing, Volume 17, Issue 4, pp. 1019–1026
Main authors: Zhang, Juan; Lang, Xiaoqi; Huang, Bo; Jiang, Xiaoyan
Format: Journal Article
Language: English
Publication details: London: Springer London (Springer Nature B.V.), 01.06.2023
ISSN: 1863-1703, 1863-1711
Description
Summary: Low-level vision problems, such as single-image haze removal and single-image rain removal, are usually solved by restoring a clear image from an input image using a paired dataset. However, for many problems, a paired training dataset is not available. In this paper, we propose an unpaired image-to-image translation method based on coupled generative adversarial networks (CoGAN), called VAE-CoGAN, to solve this problem. Unlike the basic CoGAN, we introduce a shared-latent space and a variational autoencoder (VAE) into the framework. We use synthetic datasets and real-world images to evaluate our method. Extensive evaluation and comparison results show that the proposed method can be effectively applied to numerous low-level vision tasks, with favorable performance against state-of-the-art methods.
DOI: 10.1007/s11760-022-02307-y
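
The abstract describes an unpaired translation framework that couples two GANs through a shared latent space learned with VAE encoders. The sketch below is a minimal, hypothetical PyTorch illustration of such a shared-latent VAE-CoGAN pair; the module structure, layer sizes, and loss weights are assumptions made for illustration and do not reproduce the paper's implementation.

```python
# Hypothetical sketch of a shared-latent VAE-CoGAN pair (PyTorch).
# Module names, layer sizes, and loss weights are illustrative assumptions,
# not the published VAE-CoGAN architecture.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an image from one domain to the mean/log-variance of a shared latent code."""
    def __init__(self, in_ch=3, z_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc_mu = nn.Linear(256, z_dim)
        self.fc_logvar = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.conv(x).flatten(1)
        return self.fc_mu(h), self.fc_logvar(h)

class Generator(nn.Module):
    """Decodes a shared latent code into an image of one domain."""
    def __init__(self, out_ch=3, z_dim=256):
        super().__init__()
        self.fc = nn.Linear(z_dim, 256 * 8 * 8)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, out_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 256, 8, 8)
        return self.deconv(h)

class Discriminator(nn.Module):
    """Scores whether an image looks like a real sample of one domain."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, 1, 0),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))  # average patch logits to one score per image

def reparameterize(mu, logvar):
    """VAE reparameterization trick: sample z ~ N(mu, sigma^2)."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

# Two domain-specific encoder/generator/discriminator triples sharing one latent space,
# e.g. domain 1 = degraded (hazy/rainy) images, domain 2 = clear images.
E1, E2 = Encoder(), Encoder()
G1, G2 = Generator(), Generator()
D1, D2 = Discriminator(), Discriminator()

x1 = torch.randn(2, 3, 64, 64)          # unpaired batch from domain 1
mu1, logvar1 = E1(x1)
z = reparameterize(mu1, logvar1)        # shared latent code
x1_rec = G1(z)                          # within-domain reconstruction (VAE branch)
x12 = G2(z)                             # cross-domain translation (e.g. restored clear image)

# Illustrative generator-side losses: VAE reconstruction + KL, plus an adversarial term on D2.
recon = nn.functional.l1_loss(x1_rec, x1)
kl = -0.5 * torch.mean(1 + logvar1 - mu1.pow(2) - logvar1.exp())
adv = nn.functional.binary_cross_entropy_with_logits(D2(x12), torch.ones(2))
total = recon + 0.01 * kl + adv
```

In this sketch, both encoders map their domain into one latent distribution, so an image encoded by E1 can be decoded by G2 into the other domain without paired supervision; the VAE terms regularize the shared code while the discriminators push translated images toward the target domain. The symmetric losses for domain 2 and the discriminator updates would follow the same pattern.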