Contrastive Attention-Based Network for Self-Supervised Point Cloud Completion

Detailed Bibliography
Published in: IEEE Signal Processing Letters, Vol. 32, pp. 4444-4448
Main authors: Kumari, Seema; Kumar, Preyum; Mandal, Srimanta; Raman, Shanmuganathan
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
ISSN: 1070-9908, 1558-2361
Description
Summary: Point cloud completion aims to reconstruct complete 3D shapes from partial observations, a task for which existing methods often require multiple views or complete shapes for training. In this paper, we propose an attention-driven, self-supervised autoencoder network that completes 3D point clouds from a single partial observation. Multi-head self-attention captures robust contextual relationships, while residual connections in the autoencoder enhance geometric feature learning. In addition, we incorporate a contrastive learning-based loss, which encourages the network to better distinguish structural patterns even in highly incomplete observations. Experimental results on benchmark datasets demonstrate that the proposed approach achieves state-of-the-art performance in single-view point cloud completion.
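The record only summarizes the paper and gives no implementation details. Below is a minimal PyTorch-style sketch of the kind of components the abstract names: a residual multi-head self-attention block over per-point features and a contrastive (NT-Xent-style) loss between embeddings of two partial views of the same shape. The class and function names, layer sizes, number of heads, and temperature are illustrative assumptions, not the authors' code.

# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualSelfAttentionBlock(nn.Module):
    """Multi-head self-attention with residual connections over point features."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):                    # x: (B, N, dim) per-point features
        attn_out, _ = self.attn(x, x, x)     # contextual relationships across points
        x = self.norm1(x + attn_out)         # residual connection around attention
        return self.norm2(x + self.mlp(x))   # residual connection around the MLP

def contrastive_loss(z1, z2, temperature=0.1):
    """NT-Xent-style loss: embeddings of two partial views of the same shape are
    pulled together; embeddings of other shapes in the batch are pushed apart."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                     # (2B, dim)
    sim = z @ z.t() / temperature                      # cosine similarity logits
    sim.fill_diagonal_(float("-inf"))                  # exclude self-similarity
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets.to(z.device))
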
DOI: 10.1109/LSP.2025.3631424