Effective fusion module with dilation convolution for monocular panoramic depth estimate.

Saved in:
Bibliographic details
Title: Effective fusion module with dilation convolution for monocular panoramic depth estimate.
Authors: Han, Cheng; Cai, Yongqing; Pan, Xinpeng; Wang, Ziyun
Source: IET Image Processing (Wiley-Blackwell); 3/27/2024, Vol. 18 Issue 4, p1073-1082, 10p
Subject terms: MONOCULARS, STEREO image processing, VIRTUAL reality, MAP projection, SHARED virtual environments, CONVOLUTION codes
Abstract: Depth estimation from monocular panoramic images is a crucial step in 3D reconstruction, which has a close relationship with virtual reality and metaverse technologies. In recent years, some methods, such as HRDFuse, BiFuse++, and UniFuse, have employed a two‐branch neural network leveraging two common projections: equirectangular and cubemap projections (CMPs). The equirectangular projection (ERP) provides a complete field of view but introduces distortion, while the CMP avoids distortion but introduces discontinuity at the boundaries of the cube. To address the issues of distortion and discontinuity, the authors propose an efficient depth estimation fusion module to balance the feature mappings of the two projections. Moreover, for the ERP, the authors propose a novel dilated network architecture to extend the receptive field and effectively harness visual information. Extensive experiments show that the authors' method predicts clearer boundaries and more accurate depth results while outperforming mainstream panoramic depth estimation algorithms. [ABSTRACT FROM AUTHOR]
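The abstract contrasts the two projections fused by such two-branch networks. A minimal sketch of the relationship between them (not the authors' code): mapping an ERP pixel to spherical angles, then selecting the cubemap face by the dominant axis of the unit direction vector. The function names and the ERP coordinate convention (longitude left-to-right, latitude top-to-bottom) are illustrative assumptions.

```python
import math

def erp_to_sphere(u, v, width, height):
    """Map an equirectangular (ERP) pixel to spherical angles.

    Longitude spans [-pi, pi) across the image width; latitude spans
    [pi/2, -pi/2] from the top row to the bottom row.
    """
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    return lon, lat

def sphere_to_cubemap_face(lon, lat):
    """Pick the cubemap (CMP) face a spherical direction lands on.

    The face is chosen by the dominant axis of the unit direction
    vector, the standard cubemap face-selection rule.
    """
    x = math.cos(lat) * math.cos(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.sin(lon)
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:
        return '+x' if x > 0 else '-x'
    if ay >= ax and ay >= az:
        return '+y' if y > 0 else '-y'
    return '+z' if z > 0 else '-z'

# The center pixel of a 512x256 ERP image faces straight ahead (+x);
# the top row points straight up (+y).
print(sphere_to_cubemap_face(*erp_to_sphere(256, 128, 512, 256)))  # +x
print(sphere_to_cubemap_face(*erp_to_sphere(256, 0, 512, 256)))    # +y
```

The seams of this face assignment are exactly the cube-boundary discontinuities the abstract refers to, while the ERP branch sees the full field of view at the cost of latitude-dependent distortion.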
Copyright of IET Image Processing (Wiley-Blackwell) is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites without the copyright holder's express written permission. Additionally, content may not be used with any artificial intelligence tools or machine learning technologies. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Biomedical Index
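The "dilation convolution" of the title refers to dilated (atrous) convolutions, which enlarge the receptive field without extra parameters. A small sketch of that effect using the standard receptive-field recurrence (layer format and numbers are illustrative, not taken from the paper):

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.

    Each layer is (kernel, dilation, stride); applies the standard
    recurrence rf += (kernel - 1) * dilation * jump, jump *= stride.
    """
    rf, jump = 1, 1
    for k, d, s in layers:
        rf += (k - 1) * d * jump
        jump *= s
    return rf

# Three 3x3, stride-1 convs: plain vs dilated with rates 1, 2, 4.
plain = receptive_field([(3, 1, 1)] * 3)                      # 7
dilated = receptive_field([(3, 1, 1), (3, 2, 1), (3, 4, 1)])  # 15
print(plain, dilated)
```

With the same three layers, doubling the dilation rate at each step roughly doubles the receptive field, which is the mechanism the abstract invokes for harnessing wider visual context in the ERP branch.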
ISSN:17519659
DOI:10.1049/ipr2.13007