Scalable Proximal Jacobian Iteration Method With Global Convergence Analysis for Nonconvex Unconstrained Composite Optimizations



Bibliographic details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 30, No. 9, pp. 2825-2839
Main authors: Zhang, Hengmin; Qian, Jianjun; Gao, Junbin; Yang, Jian; Xu, Chunyan
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.09.2019
ISSN: 2162-237X, 2162-2388
Online access: Full text
Description
Abstract: Recent studies have found that nonconvex relaxation functions usually perform better than their convex counterparts in <inline-formula> <tex-math notation="LaTeX">l_{0} </tex-math></inline-formula>-norm and rank function minimization problems. However, because these problems lack convexity, developing efficient algorithms with convergence guarantees becomes very challenging. Inspired by the basic ideas of both the Jacobian alternating direction method of multipliers (JADMM) for solving linearly constrained problems with separable objectives and the proximal gradient methods (PGMs) for optimizing unconstrained problems with one variable, this paper focuses on extending the PGMs to proximal Jacobian iteration methods (PJIMs) for handling a family of nonconvex composite optimization problems with two splitting variables. To reduce the total computational complexity by decreasing the number of iterations, we devise an accelerated version of PJIMs through the well-known Nesterov acceleration strategy and further extend both methods to multivariable cases. Most importantly, we provide a rigorous convergence analysis showing that the generated variable sequence globally converges to a critical point, by exploiting the Kurdyka-Łojasiewicz (KŁ) property for a broad class of functions. Furthermore, we establish linear and sublinear convergence rates of the obtained variable sequence in the objective function. As a specific application to nonconvex sparse and low-rank recovery problems, several numerical experiments verify that the newly proposed algorithms not only retain fast convergence but also achieve high precision.
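The abstract combines proximal gradient steps with Nesterov's acceleration strategy. As a rough illustration of that combination (not the paper's PJIM, and using a convex <inline-formula> <tex-math notation="LaTeX">l_{1} </tex-math></inline-formula> surrogate whose proximal map is soft-thresholding, rather than the nonconvex penalties studied in the paper), a minimal accelerated proximal gradient sketch might look like:

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def accelerated_proximal_gradient(A, b, lam, n_iter=500):
    """Nesterov-accelerated proximal gradient for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    Illustrative stand-in: the nonconvex relaxations in the paper
    would replace soft_threshold with their own proximal maps."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y = x.copy()                         # extrapolated point
    t = 1.0                              # momentum parameter
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)         # gradient of the smooth part at y
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

On a small noiseless sparse-recovery instance with a mild regularization weight, the iterate recovers the sparse ground truth closely; the acceleration reduces the iteration count relative to a plain proximal gradient loop, which matches the abstract's motivation for the accelerated PJIM variant.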
DOI:10.1109/TNNLS.2018.2885699