VAE-CoGAN: Unpaired image-to-image translation for low-level vision

Bibliographic details
Published in: Signal, Image and Video Processing, Vol. 17, Issue 4, pp. 1019-1026
Authors: Zhang, Juan; Lang, Xiaoqi; Huang, Bo; Jiang, Xiaoyan
Format: Journal Article
Language: English
Published: Springer London (Springer Nature B.V.), London, 01.06.2023
ISSN: 1863-1703, 1863-1711
Description
Abstract: Low-level vision problems, such as single-image haze removal and single-image rain removal, usually restore a clear image from an input image using a paired dataset. However, for many problems a paired training dataset is not available. In this paper, we propose an unpaired image-to-image translation method based on coupled generative adversarial networks (CoGAN), called VAE-CoGAN, to solve this problem. Unlike the basic CoGAN, we introduce a shared-latent space and a variational autoencoder (VAE) into the framework. We use synthetic datasets and real-world images to evaluate our method. Extensive evaluation and comparison results show that the proposed method can be effectively applied to numerous low-level vision tasks with favorable performance against state-of-the-art methods.
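The abstract describes a coupled-GAN setup in which encoders from the two image domains (e.g., hazy and clear) map inputs into a shared latent space via VAE reparameterization, and decoders with shared high-level layers generate images in either domain. The following is a minimal PyTorch sketch of that idea only; the layer sizes, weight-sharing depth, and the Encoder/Generator module names are illustrative assumptions and not the authors' published architecture.

# Minimal sketch of a shared-latent-space VAE-CoGAN, assuming standard
# VAE-GAN building blocks; sizes and sharing depth are placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Domain-specific encoder producing mean/log-variance of the shared latent code."""
    def __init__(self, in_ch=3, z_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc_mu = nn.Linear(256, z_dim)
        self.fc_logvar = nn.Linear(256, z_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.fc_mu(h), self.fc_logvar(h)

class Generator(nn.Module):
    """Decoder from the shared latent code to an image; the first deconv block
    is shared across the two domains (CoGAN-style weight sharing)."""
    def __init__(self, shared_block, out_ch=3, z_dim=128):
        super().__init__()
        self.fc = nn.Linear(z_dim, 256 * 8 * 8)
        self.shared = shared_block          # shared high-level layers
        self.private = nn.Sequential(       # domain-specific low-level layers
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, out_ch, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 256, 8, 8)
        return self.private(self.shared(h))

def reparameterize(mu, logvar):
    """VAE reparameterization trick: z = mu + sigma * eps."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

# One shared deconv block ties both generators to the same latent space.
shared = nn.Sequential(nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU())
E_a, E_b = Encoder(), Encoder()             # e.g. hazy / clear domains
G_a, G_b = Generator(shared), Generator(shared)

# Translation a -> b: encode a degraded image into the shared latent code,
# then decode it with the clear-domain generator.
x_a = torch.randn(1, 3, 64, 64)             # dummy degraded input
mu, logvar = E_a(x_a)
z = reparameterize(mu, logvar)
x_ab = G_b(z)                               # translated (restored) image
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # VAE prior term

In a full training loop this would be combined with per-domain discriminators and adversarial losses; the shared layers and the KL term are what enforce the common latent code between the unpaired domains.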
DOI: 10.1007/s11760-022-02307-y