NoPe‐NeRF++: Local‐to‐Global Optimization of NeRF with No Pose Prior


Bibliographic details
Published in: Computer Graphics Forum, Vol. 44, No. 2
Main authors: Shi, Dongbo; Cao, Shen; Wu, Bojian; Guo, Jinhui; Fan, Lubin; Chen, Renjie; Liu, Ligang; Ye, Jieping
Format: Journal Article
Language: English
Published: Oxford: Blackwell Publishing Ltd, 01.05.2025
Subjects:
ISSN: 0167-7055, 1467-8659
Online access: Full text
Description
Abstract: In this paper, we introduce NoPe‐NeRF++, a novel local‐to‐global optimization algorithm for training Neural Radiance Fields (NeRF) without requiring pose priors. Existing methods, particularly NoPe‐NeRF, focus solely on local relationships within images and often struggle to recover accurate camera poses in complex scenarios. To overcome these challenges, our approach begins with relative pose initialization based on explicit feature matching, followed by a local joint optimization that enhances the pose estimates used to train a more robust NeRF representation. This significantly improves the quality of the initial poses. Additionally, we introduce a global optimization phase that incorporates geometric consistency constraints through bundle adjustment, integrating feature trajectories to further refine the poses and collectively boost the quality of the NeRF. Notably, our method is the first to seamlessly combine local and global cues with NeRF, and it outperforms state‐of‐the‐art methods in both pose estimation accuracy and novel view synthesis. Extensive evaluations on benchmark datasets demonstrate superior performance and robustness, even in challenging scenes, validating our design choices.
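The pipeline described in the abstract (feature-based relative pose initialization, local joint optimization of poses and the radiance field, then a global bundle-adjustment stage over feature trajectories) can be outlined schematically. Below is a minimal sketch in PyTorch, assuming a toy MLP radiance field, a 6-DoF pose parameterization, and placeholder photometric and trajectory-consistency losses; it is not the authors' implementation, and all function and parameter names are hypothetical.

```python
# Schematic local-to-global training loop. Stage names follow the abstract;
# the model, losses, and data here are simplifying placeholders.
import torch
import torch.nn as nn


class TinyNeRF(nn.Module):
    """Stand-in radiance field: maps a 3D point to (rgb, density)."""

    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, x):
        out = self.mlp(x)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])


def init_relative_poses(num_frames):
    # Stage 1: relative pose initialization. In the paper this comes from explicit
    # feature matching between neighbouring frames; here we simply start every
    # frame from the identity (6-DoF axis-angle + translation) as a placeholder.
    return nn.Parameter(torch.zeros(num_frames, 6))


def photometric_loss(nerf, poses, frame_ids):
    # Placeholder for the volume-rendering loss on the selected frames.
    pts = torch.randn(len(frame_ids), 32, 3) + poses[frame_ids, 3:].unsqueeze(1)
    rgb, _ = nerf(pts)
    return (rgb - 0.5).pow(2).mean()


def trajectory_consistency_loss(poses, trajectories):
    # Placeholder for the bundle-adjustment term over feature trajectories:
    # penalise disagreement between poses of frames that observe the same track.
    loss = 0.0
    for frames in trajectories:
        loss = loss + poses[frames].var(dim=0).sum()
    return loss


def train(num_frames=10, window=3):
    nerf = TinyNeRF()
    poses = init_relative_poses(num_frames)
    opt = torch.optim.Adam([*nerf.parameters(), poses], lr=1e-3)

    # Stage 2: local joint optimization over sliding windows of neighbouring
    # frames, refining the initial poses together with the radiance field.
    for start in range(0, num_frames - window + 1):
        frame_ids = torch.arange(start, start + window)
        for _ in range(50):
            opt.zero_grad()
            photometric_loss(nerf, poses, frame_ids).backward()
            opt.step()

    # Stage 3: global optimization, adding a bundle-adjustment-style geometric
    # consistency term over (hypothetical) feature trajectories spanning frames.
    trajectories = [torch.arange(0, num_frames, 2), torch.arange(1, num_frames, 2)]
    for _ in range(200):
        opt.zero_grad()
        loss = photometric_loss(nerf, poses, torch.arange(num_frames))
        loss = loss + 0.1 * trajectory_consistency_loss(poses, trajectories)
        loss.backward()
        opt.step()
    return nerf, poses


if __name__ == "__main__":
    train()
```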
Bibliography:
These authors contributed equally to this work.
DOI: 10.1111/cgf.70012