Decentralized Proximal Gradient Algorithms With Linear Convergence Rates


Bibliographic Details
Published in: IEEE Transactions on Automatic Control, Vol. 66, No. 6, pp. 2787-2794
Authors: Alghunaim, Sulaiman A., Ryu, Ernest K., Yuan, Kun, Sayed, Ali H.
Format: Journal Article
Language: English
Published: New York, The Institute of Electrical and Electronics Engineers, Inc. (IEEE), June 1, 2021
ISSN: 0018-9286, 1558-2523
Description
Abstract: This article studies a class of nonsmooth decentralized multiagent optimization problems in which the agents aim to minimize a sum of local strongly convex, smooth components plus a common nonsmooth term. We propose a general primal-dual algorithmic framework that unifies many existing state-of-the-art algorithms. We establish linear convergence of the proposed method to the exact minimizer in the presence of the nonsmooth term. Moreover, for the more general class of problems with agent-specific nonsmooth terms, we show that linear convergence cannot be achieved (in the worst case) by the class of algorithms that uses the gradients and the proximal mappings of the smooth and nonsmooth parts, respectively. We further provide a numerical counterexample showing how some state-of-the-art algorithms fail to converge linearly for strongly convex objectives and distinct local nonsmooth terms.
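To make the problem setup concrete, the sketch below shows a generic decentralized proximal gradient iteration of the kind this class of problems involves: each agent averages its iterate with its neighbors' iterates through a doubly stochastic mixing matrix W, takes a gradient step on its private smooth loss f_i, and applies the proximal mapping of the common nonsmooth term g. The quadratic losses, the l1 regularizer, the ring network, the step size, and the data are illustrative assumptions, not taken from the article, and this diffusion-style recursion is not the paper's proposed primal-dual method; with a constant step size such a simple scheme is generally expected to settle only near the exact minimizer, which is the kind of gap the article's linearly convergent framework targets.

import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, lam, alpha, iters = 4, 5, 0.1, 0.05, 500

# Hypothetical local data: agent i privately holds (A_i, b_i) and the
# smooth, strongly convex local loss f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
b = [rng.standard_normal(10) for _ in range(n_agents)]

# Doubly stochastic mixing matrix for a 4-agent ring network.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 1.0 / 3.0
    W[i, (i - 1) % n_agents] = 1.0 / 3.0
    W[i, (i + 1) % n_agents] = 1.0 / 3.0

def soft_threshold(v, t):
    # Proximal mapping of t * ||.||_1, the common nonsmooth term g here.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

X = np.zeros((n_agents, dim))  # row i holds agent i's local iterate
for _ in range(iters):
    grads = np.stack([A[i].T @ (A[i] @ X[i] - b[i]) for i in range(n_agents)])
    # Each agent mixes with its neighbors, steps along its local gradient,
    # then applies the prox of the shared nonsmooth term.
    X = soft_threshold(W @ X - alpha * grads, alpha * lam)

print("consensus gap:", np.linalg.norm(X - X.mean(axis=0)))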
DOI: 10.1109/TAC.2020.3009363