Image super-resolution based on conditional generative adversarial network

Detailed Bibliography
Published in: IET Image Processing, Volume 14, Issue 13, pp. 3006-3013
Main Authors: Gao, Hongxia; Chen, Zhanhong; Huang, Binyang; Chen, Jiahe; Li, Zhifu
Format: Journal Article
Language: English
Published: The Institution of Engineering and Technology, 01.11.2020
ISSN: 1751-9659, 1751-9667
Description
Summary: The generative adversarial network (GAN) is one of the most prevalent generative models and can synthesise realistic high-frequency details. However, a mismatch between the input and the output may arise when a GAN is applied directly to image super-resolution. To alleviate this issue, the authors adopt a conditional GAN (cGAN) in this study. The cGAN discriminator attempts to guess whether an unknown high-resolution (HR) image was produced by the generator, with the aid of the original low-resolution (LR) image. They propose a novel discriminator that only penalises at the scale of the patch and therefore has relatively few parameters to train. The generator of the cGAN is an encoder–decoder with skip connections that shuttle shared low-level information directly across the network. To better preserve low-frequency information and recover high-frequency information, they design a generator loss function that combines an adversarial loss term with an L1 loss term: the former benefits the synthesis of fine-grained textures, while the latter is responsible for learning the overall structure of the LR input. The experiments reveal that the proposed method can generate HR images with richer details and less over-smoothing.
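
The loss described in the summary follows the familiar conditional-GAN recipe of an adversarial term plus a weighted L1 reconstruction term. Below is a minimal PyTorch-style sketch of such a generator loss; the weight lambda_l1, the binary cross-entropy formulation, and all names are illustrative assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

def generator_loss(disc_logits_fake: torch.Tensor,
                   sr_image: torch.Tensor,
                   hr_image: torch.Tensor,
                   lambda_l1: float = 100.0) -> torch.Tensor:
    # Adversarial term: push the patch-level discriminator D(LR, G(LR)) to
    # label generated patches as real, encouraging fine-grained textures.
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))
    # L1 term: keeps the overall (low-frequency) structure of the output
    # close to the ground-truth HR image.
    recon = F.l1_loss(sr_image, hr_image)
    return adv + lambda_l1 * recon

With a large lambda_l1 the reconstruction term dominates the global structure while the adversarial term mainly contributes texture detail; the actual weighting used by the authors is not stated in this record.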
DOI: 10.1049/iet-ipr.2018.5767