Contrastive Attention-Based Network for Self-Supervised Point Cloud Completion

Bibliographic Details
Published in: IEEE Signal Processing Letters, Vol. 32, pp. 1–5
Main Authors: Kumari, Seema; Kumar, Preyum; Mandal, Srimanta; Raman, Shanmuganathan
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
ISSN: 1070-9908, 1558-2361
Online Access: Full text
Description
Abstract: Point cloud completion aims to reconstruct complete 3D shapes from partial observations, often requiring multiple views or complete data for training. In this paper, we propose an attention-driven, self-supervised autoencoder network that completes 3D point clouds from a single partial observation. Multi-head self-attention captures robust contextual relationships, while residual connections in the autoencoder enhance geometric feature learning. In addition, we incorporate a contrastive learning-based loss, which encourages the network to better distinguish structural patterns even in highly incomplete observations. Experimental results on benchmark datasets demonstrate that the proposed approach achieves state-of-the-art performance in single-view point cloud completion.
DOI: 10.1109/LSP.2025.3631424
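
Illustrative sketch: the abstract above describes multi-head self-attention with residual connections inside an autoencoder, trained with a contrastive loss on partial observations. The minimal PyTorch sketch below shows how such an attention block and an NT-Xent-style contrastive loss could be wired together; the layer sizes, the per-point feature extractor, the specific loss formulation, and the temperature are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionEncoderBlock(nn.Module):
    # Multi-head self-attention over per-point features with residual connections.
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):                        # x: (B, N, dim) per-point features
        a, _ = self.attn(x, x, x)                # contextual relations between points
        x = self.norm1(x + a)                    # residual connection around attention
        x = self.norm2(x + self.ff(x))           # residual connection around feed-forward
        return x

def nt_xent(z1, z2, tau=0.1):
    # NT-Xent contrastive loss: two embeddings of the same partial cloud are positives.
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    z = torch.cat([z1, z2], dim=0)               # (2B, D)
    sim = z @ z.t() / tau                        # cosine similarities scaled by temperature
    sim.fill_diagonal_(float("-inf"))            # a sample is not its own positive
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)]).to(z.device)
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    points = torch.randn(2, 1024, 3)             # batch of partial point clouds
    embed = nn.Linear(3, 256)                    # toy per-point embedding (assumption)
    enc = AttentionEncoderBlock()
    feats = embed(points)
    z1 = enc(feats).max(dim=1).values            # global shape code, view 1
    z2 = enc(feats + 0.01 * torch.randn_like(feats)).max(dim=1).values  # perturbed view
    print("contrastive loss:", nt_xent(z1, z2).item())

In this toy usage, the two "views" are the same partial cloud with and without a small feature perturbation; in a self-supervised setting one would instead derive the views from the partial observation itself (e.g., different crops or augmentations), which is left as an assumption here.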