Imbalanced low-rank tensor completion via latent matrix factorization

Bibliographic Details
Published in: Neural Networks, Vol. 155, pp. 369-382
Authors: Qiu, Yuning; Zhou, Guoxu; Zeng, Junhua; Zhao, Qibin; Xie, Shengli
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.11.2022
ISSN: 0893-6080, 1879-2782
Description
Abstract: Tensor completion has been widely used in computer vision and machine learning. Most existing tensor completion methods empirically assume that the intrinsic tensor is simultaneously low-rank in all modes. However, tensor data recorded in real-world applications may conflict with this assumption; e.g., face images taken from different subjects often lie in a union of low-rank subspaces, which may result in a quite high-rank or even full-rank structure in the sample mode. To this end, this paper proposes an imbalanced low-rank tensor completion method, which can flexibly estimate the low-rank incomplete tensor by decomposing it into a mixture of multiple latent tensor ring (TR) rank components. Specifically, each latent component is approximated using low-rank matrix factorization based on its TR unfolding matrix. In addition, an effective proximal alternating minimization algorithm is developed and theoretically proven to possess the global convergence property, that is, the whole sequence of iterates converges to a critical point. Extensive experiments on both synthetic and real-world tensor data demonstrate that the proposed method achieves more favorable completion results with lower computational cost than state-of-the-art tensor completion methods.
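
To make the core idea of completing a tensor through low-rank matrix factorization of an unfolding more concrete, the following is a minimal, self-contained sketch, not the authors' implementation: it completes a partially observed 3-way tensor by factorizing a single mode unfolding as U V^T with alternating ridge-regularized least-squares updates on the observed entries. The paper's method instead operates on TR unfolding matrices of multiple latent components and uses proximal alternating minimization; the function name, rank, and regularization weight below are hypothetical choices for illustration only.

```python
# Illustrative sketch of unfolding-based low-rank completion (assumed toy setup,
# not the imbalanced TR method from the paper).
import numpy as np

def complete_via_unfolding(T_obs, mask, rank=5, n_iters=100, lam=1e-3, seed=0):
    """T_obs: tensor with zeros at missing entries; mask: True where observed."""
    I, J, K = T_obs.shape
    M = T_obs.reshape(I, J * K)          # mode-1 unfolding of the data tensor
    W = mask.reshape(I, J * K)
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((I, rank))
    V = rng.standard_normal((J * K, rank))
    for _ in range(n_iters):
        # Update each row of U from the entries observed in that row.
        for i in range(I):
            idx = W[i]
            if idx.any():
                Vi = V[idx]
                U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(rank),
                                       Vi.T @ M[i, idx])
        # Update each row of V from the entries observed in that column.
        for j in range(J * K):
            idx = W[:, j]
            if idx.any():
                Uj = U[idx]
                V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(rank),
                                       Uj.T @ M[idx, j])
    return (U @ V.T).reshape(I, J, K)

# Toy usage: recover a synthetic low-rank tensor from 40% observed entries.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3))
B = rng.standard_normal((300, 3))
T_true = (A @ B.T).reshape(20, 30, 10)
mask = rng.random(T_true.shape) < 0.4
T_hat = complete_via_unfolding(T_true * mask, mask, rank=3)
print("relative error:", np.linalg.norm(T_hat - T_true) / np.linalg.norm(T_true))
```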
DOI: 10.1016/j.neunet.2022.08.023