Self-supervised Multi-view Learning via Auto-encoding 3D Transformations.

Saved in:
Bibliographic Details
Title: Self-supervised Multi-view Learning via Auto-encoding 3D Transformations.
Authors: Xiang Gao, Wei Hu, Guo-Jun Qi
Source: ACM Transactions on Multimedia Computing, Communications & Applications; Jan 2024, Vol. 20 Issue 1, p1-23, 23p
Subject Terms: SUPERVISED learning, OBJECT recognition (Computer vision), COMPUTER vision, DEEP learning
Abstract: 3D object representation learning is a fundamental challenge in computer vision for inferring about the 3D world. Recent advances in deep learning have shown their efficiency in 3D object recognition, among which view-based methods have performed best so far. However, feature learning of multiple views in existing methods is mostly performed in a supervised fashion, which often requires a large amount of data labels with high costs. In contrast, self-supervised learning aims to learn multi-view feature representations without involving labeled data. To this end, we propose a novel self-supervised framework to learn Multi-View Transformation Equivariant Representations (MV-TER), exploring the equivariant transformations of a 3D object and its projected multiple views that we derive. Specifically, we perform a 3D transformation on a 3D object and obtain multiple views before and after the transformation via projection. Then, we train a representation encoding module to capture the intrinsic 3D object representation by decoding the 3D transformation parameters from the fused feature representations of multiple views before and after the transformation. Experimental results demonstrate that the proposed MV-TER significantly outperforms the state-of-the-art view-based approaches in 3D object classification and retrieval tasks, and show its generalization to real-world datasets. [ABSTRACT FROM AUTHOR]
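The self-supervision signal described in the abstract can be sketched as a data-generation step: apply a random 3D transformation to an object, render multiple views before and after, and keep the transformation parameters as the regression target for the decoder. The sketch below is an illustrative assumption, not the paper's rendering pipeline: it uses Euler-angle rotations as the transformation and simple orthographic projections in place of rendered images.

```python
import numpy as np

def rotation_matrix(angles):
    # Compose rotations about the x, y, z axes (Euler angles in radians).
    # Euler angles are an assumed parameterization for illustration.
    ax, ay, az = angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(ax), -np.sin(ax)],
                   [0, np.sin(ax),  np.cos(ax)]])
    Ry = np.array([[ np.cos(ay), 0, np.sin(ay)],
                   [0, 1, 0],
                   [-np.sin(ay), 0, np.cos(ay)]])
    Rz = np.array([[np.cos(az), -np.sin(az), 0],
                   [np.sin(az),  np.cos(az), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def render_views(points, n_views=4):
    # Orthographic projections from n_views cameras spaced around the y-axis,
    # a toy stand-in for the paper's multi-view rendering.
    views = []
    for k in range(n_views):
        cam = rotation_matrix((0.0, 2 * np.pi * k / n_views, 0.0))
        views.append((points @ cam.T)[:, :2])  # drop the depth axis
    return np.stack(views)

rng = np.random.default_rng(0)
points = rng.standard_normal((1024, 3))        # toy 3D object (point cloud)
theta = rng.uniform(-np.pi / 6, np.pi / 6, 3)  # transformation params: the SSL target
transformed = points @ rotation_matrix(theta).T
views_before = render_views(points)            # multiple views before transformation
views_after = render_views(transformed)        # multiple views after transformation
```

In the MV-TER setup, `views_before` and `views_after` would be encoded, their features fused, and a decoder trained to regress `theta`, so that the learned representation becomes equivariant to the applied 3D transformation.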
Database: Complementary Index
ISSN: 1551-6857
DOI: 10.1145/3597613