Deep Flow Rendering: View Synthesis via Layer‐aware Reflection Flow

Bibliographic Details
Published in: Computer Graphics Forum, Vol. 41, Issue 4, pp. 139-148
Main authors: Dai, Pinxuan; Xie, Ning
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.07.2022
ISSN: 0167-7055, 1467-8659
Description
Abstract: Novel view synthesis (NVS) generates images from unseen viewpoints based on a set of input images. It is challenging because of inaccurate lighting optimization and geometry inference. Although current neural rendering methods have made significant progress, they still struggle to reconstruct global illumination effects such as reflections, and they exhibit ambiguous blurs in highly view-dependent areas. This work addresses high-quality view synthesis, with an emphasis on reflections on non-concave surfaces. We propose Deep Flow Rendering, which optimizes direct and indirect lighting separately by leveraging texture mapping, appearance flow, and neural rendering. A learnable texture is used to predict view-independent features while enabling efficient reflection extraction. To accurately fit view-dependent effects, we adopt a constrained neural flow that transfers image-space features from nearby views to the target view in an edge-preserving manner. A fusing renderer then combines the predictions of both layers into the output image. Experiments demonstrate that our method outperforms state-of-the-art methods at synthesizing various scenes with challenging reflection effects.
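The abstract describes a two-layer design: a view-independent layer driven by a learnable texture, a view-dependent layer that warps features from nearby views with a constrained flow, and a fusing renderer that combines both. The PyTorch sketch below illustrates how such a split could be wired up; the class and module names (LayerAwareRenderer, flow_net, fuse), the channel counts, and the magnitude clamp used to constrain the flow are all illustrative assumptions, not the authors' released implementation.

```python
# Minimal two-layer rendering sketch: view-independent neural texture +
# flow-warped view-dependent layer, fused into the output image.
# Names, shapes, and the flow constraint are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerAwareRenderer(nn.Module):
    def __init__(self, tex_res=512, tex_ch=8, out_ch=3):
        super().__init__()
        # View-independent layer: a learnable neural texture, sampled with
        # per-pixel UV coordinates produced by rasterizing the scene mesh.
        self.neural_texture = nn.Parameter(
            torch.randn(1, tex_ch, tex_res, tex_res) * 0.01)
        # View-dependent layer: predicts a 2D flow field that warps a
        # nearby input view toward the target view.
        self.flow_net = nn.Sequential(
            nn.Conv2d(tex_ch + 3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh())  # flow in [-1, 1]
        # Fusing renderer: combines both layers into the final image.
        self.fuse = nn.Sequential(
            nn.Conv2d(tex_ch + 3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1))

    def forward(self, uv, nearby_rgb, max_flow=0.1):
        # uv:         (B, H, W, 2) texture coordinates in [-1, 1]
        # nearby_rgb: (B, 3, H, W) RGB image of a neighbouring input view
        B = nearby_rgb.shape[0]
        tex = self.neural_texture.expand(B, -1, -1, -1)
        diffuse = F.grid_sample(tex, uv, align_corners=False)

        # Clamp the flow magnitude so the warp stays near the identity;
        # this stands in for the paper's constrained neural flow.
        flow = self.flow_net(torch.cat([diffuse, nearby_rgb], 1)) * max_flow
        identity = F.affine_grid(
            torch.eye(2, 3, device=uv.device).expand(B, -1, -1),
            nearby_rgb.shape, align_corners=False)
        warped = F.grid_sample(nearby_rgb,
                               identity + flow.permute(0, 2, 3, 1),
                               align_corners=False)

        # Fuse the view-independent and view-dependent layers.
        return self.fuse(torch.cat([diffuse, warped], 1))

# Example usage with random stand-in inputs:
model = LayerAwareRenderer()
uv = torch.rand(2, 256, 256, 2) * 2 - 1   # rasterized UVs per pixel
nearby = torch.rand(2, 3, 256, 256)       # nearest input view
image = model(uv, nearby)                 # (2, 3, 256, 256)
```

In training, a rendering loss against the ground-truth target view would drive the texture, the flow network, and the fusing network jointly, so the texture absorbs view-independent appearance while the flow layer accounts for reflections.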
DOI: 10.1111/cgf.14593