Nonrigid Structure-From-Motion via Differential Geometry With Recoverable Conformal Scale

Published in: IEEE Transactions on Robotics, Vol. 41, pp. 6229–6249
Main Authors: Chen, Yongbo; Zhang, Yanhao; Parashar, Shaifali; Zhao, Liang; Huang, Shoudong
Format: Journal Article
Language: English
Publisher: IEEE, 2025
ISSN:1552-3098, 1941-0468
Description
Abstract: Nonrigid structure-from-motion (NRSfM), a promising technique for addressing the mapping challenges in monocular visual deformable simultaneous localization and mapping, has attracted growing attention. We introduce a novel method, called Con-NRSfM, for NRSfM under conformal deformations, encompassing isometric deformations as a subset. Our approach performs point-wise reconstruction using 2-D selected image warps optimized through a graph-based framework. Unlike existing methods that rely on strict assumptions, such as locally planar surfaces or locally linear deformations, and fail to recover the conformal scale, our method eliminates these constraints and accurately computes the local conformal scale. In addition, our framework decouples constraints on depth and conformal scale, which are inseparable in other approaches, enabling more precise depth estimation. To address the sensitivity of the formulated problem, we employ a parallel separable iterative optimization strategy. Furthermore, a self-supervised learning framework, utilizing an encoder-decoder network, is incorporated to generate dense 3-D point clouds with texture. Simulation and experimental results using both synthetic and real datasets demonstrate that our method surpasses existing approaches in terms of reconstruction accuracy and robustness.
DOI: 10.1109/TRO.2025.3621422