Exploring Fast and Flexible Zero‐Shot Low‐Light Image/Video Enhancement

Detailed bibliography
Published in: Computer Graphics Forum, Volume 43, Issue 7
Main authors: Han, Xianjun; Bao, Taoli; Yang, Hongyu
Medium: Journal Article
Language: English
Publication details: Oxford: Blackwell Publishing Ltd, 01.10.2024
ISSN: 0167-7055, 1467-8659
Description
Summary: Low‐light image/video enhancement is a challenging task when images or video are captured under harsh lighting conditions. Existing methods mostly formulate this task as an image‐to‐image translation problem solved via supervised or unsupervised learning. However, such methods require extremely large amounts of training data, whether paired or unpaired. In addition, they are tied to their specific training data, making it difficult for the trained model to enhance other types of images or video. In this paper, we explore a novel, fast and flexible, zero‐shot framework for low‐light image or video enhancement. Without relying on prior training or on relationships among neighboring frames, we estimate the illumination of the input image/frame with a carefully designed network. The proposed zero‐shot architecture comprises illumination estimation and residual correction modules. The network is very concise and requires no paired or unpaired data, so enhancement can be performed with a few simple iterations. Despite its simplicity, the method is fast and generalizes well to diverse lighting conditions. Extensive experiments on various images and videos demonstrate, both qualitatively and quantitatively, the advantages of our method over state‐of‐the‐art methods.
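
The abstract describes the approach only at a high level. For orientation, below is a minimal PyTorch sketch of one plausible zero-shot, per-image optimization loop in that spirit: a tiny network predicts an illumination map and a residual correction for the single input frame and is optimized with non-reference losses, using no pretraining and no paired or unpaired data. The Retinex-style formulation J = I / L + R, the layer sizes, the smoothness and exposure losses, and the 0.6 exposure target are all illustrative assumptions, not the paper's actual design.

    import torch
    import torch.nn as nn

    class ZeroShotEnhancer(nn.Module):
        """Tiny CNN with two heads: an illumination map L and a residual R.
        Hypothetical stand-in for the paper's illumination-estimation and
        residual-correction modules; the real layer layout is not given here."""
        def __init__(self, ch=16):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            )
            self.illum = nn.Sequential(nn.Conv2d(ch, 3, 3, padding=1), nn.Sigmoid())
            self.resid = nn.Sequential(nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh())

        def forward(self, x):
            feat = self.trunk(x)
            # Retinex-style assumption: x = J * L, so J = x / L. Keeping
            # L >= x elementwise keeps the enhanced image J within [0, 1].
            L = torch.clamp(self.illum(feat), min=1e-3).max(x)
            R = 0.05 * self.resid(feat)  # small additive residual correction
            return (x / L + R).clamp(0, 1), L

    def enhance(img, iters=100, lr=1e-3):
        """img: (1, 3, H, W) float tensor in [0, 1]. Optimized per image,
        with no pretraining or paired/unpaired data (zero-shot)."""
        net = ZeroShotEnhancer()
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        for _ in range(iters):
            J, L = net(img)
            # Non-reference losses (placeholders for the paper's objectives):
            # spatial smoothness of L plus a mean-exposure prior on J.
            smooth = (L[..., :, 1:] - L[..., :, :-1]).abs().mean() \
                   + (L[..., 1:, :] - L[..., :-1, :]).abs().mean()
            exposure = (J.mean() - 0.6).abs()  # 0.6 target is an assumption
            loss = smooth + exposure
            opt.zero_grad()
            loss.backward()
            opt.step()
        with torch.no_grad():
            return net(img)[0]

Since the abstract stresses that the method does not rely on relationships among neighboring frames, video would be handled by running the same per-frame loop independently on each frame.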
DOI: 10.1111/cgf.15210