NeRF-FF: a plug-in method to mitigate defocus blur for runtime optimized neural radiance fields

Bibliographic Details
Title: NeRF-FF: a plug-in method to mitigate defocus blur for runtime optimized neural radiance fields
Authors: Tristan Wirth, Arne Rak, Max von Buelow, Volker Knauthe, Arjan Kuijper, Dieter W. Fellner
Source: The Visual Computer, 40:5043–5055
Publisher: Springer Science and Business Media LLC, 2024.
Publication Year: 2024
Keywords: Research Line: Computer vision (CV); Research Line: Machine learning (ML); Research Line: Computer graphics (CG); Image restoration; Image deblurring; Deep learning; Realtime rendering; LTA: Machine intelligence, algorithms, and data structures (incl. semantics); Sector: Information Technology; 02 engineering and technology; 0202 electrical engineering, electronic engineering, information engineering
Abstract: Neural radiance fields (NeRFs) have revolutionized novel view synthesis, leading to an unprecedented level of realism in rendered images. However, the reconstruction quality of NeRFs suffers significantly from out-of-focus regions in the input images. We propose NeRF-FF, a plug-in strategy that estimates image masks based on Focus Frustums (FFs), i.e., the visible volume in the scene space that is in-focus. NeRF-FF enables a subsequently trained NeRF model to omit out-of-focus image regions during the training process. Existing methods to mitigate the effects of defocus-blurred input images often leverage dynamic ray generation. This makes them incompatible with the static ray assumptions employed by runtime-performance-optimized NeRF variants, such as Instant-NGP, leading to high training times. Our experiments show that NeRF-FF outperforms state-of-the-art approaches in training time by two orders of magnitude, reducing it to under 1 min on end-consumer hardware, while maintaining comparable visual quality. (An illustrative sketch of the focus-mask idea follows this record.)
Publication Type: Article
File Description: text
Language: English
ISSN: 1432-2315 (electronic), 0178-2789 (print)
DOI: 10.1007/s00371-024-03507-y; 10.24406/publica-3454; 10.26083/tuprints-00029081
Access URL: https://tuprints.ulb.tu-darmstadt.de/29081/3/371_2024_3507_MOESM1_ESM.mp4
https://tuprints.ulb.tu-darmstadt.de/29081/1/00371_2024_Article_3507.pdf
Rights: CC BY
Document Code: edsair.doi.dedup.....3bcf78ff36c6fe94563cf77f6549cbfd
Database: OpenAIRE
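
The abstract above describes estimating in-focus image masks from Focus Frustums and letting a subsequently trained, runtime-optimized NeRF skip out-of-focus pixels. The paper's own algorithm is not reproduced here; as a rough illustration of the general idea only, the Python sketch below derives an in-focus mask from a thin-lens circle-of-confusion test on estimated per-pixel depth and drops masked-out pixels from the training-ray set. All names and parameters (in_focus_mask, coc_threshold_px, sensor_px_per_mm, and so on) are hypothetical assumptions, not taken from NeRF-FF.

```python
# Hedged sketch, not the authors' implementation: mark a pixel as
# in-focus when its thin-lens circle of confusion stays below a
# pixel-size threshold, then keep only in-focus pixels as training rays.
import numpy as np


def in_focus_mask(depth_m, focus_dist_m, focal_length_mm=50.0,
                  f_number=2.8, sensor_px_per_mm=150.0,
                  coc_threshold_px=1.5):
    """Approximate an in-focus mask from per-pixel depth.

    depth_m: (H, W) array of scene depths in meters (e.g. from SfM/MVS).
    Returns a boolean (H, W) mask; True = pixel treated as sharp.
    All camera parameters here are illustrative placeholders.
    """
    f = focal_length_mm / 1000.0          # focal length in meters
    aperture = f / f_number               # aperture diameter in meters
    # Thin-lens circle-of-confusion diameter on the sensor (meters).
    coc_m = (aperture * np.abs(depth_m - focus_dist_m)
             / np.maximum(depth_m, 1e-6)
             * f / max(focus_dist_m - f, 1e-6))
    coc_px = coc_m * 1000.0 * sensor_px_per_mm  # convert to pixels
    return coc_px <= coc_threshold_px


def select_training_rays(ray_origins, ray_dirs, colors, mask):
    """Keep only rays whose source pixel lies inside the in-focus mask."""
    keep = mask.reshape(-1)
    return ray_origins[keep], ray_dirs[keep], colors[keep]


if __name__ == "__main__":
    # Toy example: a fronto-parallel depth ramp with the focus plane at 2 m.
    depth = np.linspace(0.5, 6.0, 640)[None, :].repeat(480, axis=0)
    mask = in_focus_mask(depth, focus_dist_m=2.0)
    print("in-focus fraction:", mask.mean())
```

In an Instant-NGP-style pipeline, such a mask would only restrict which pixels contribute rays to the training batch; ray generation itself stays static, which is consistent with the abstract's point about static ray assumptions.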