Stochastic Proximal Gradient Consensus Over Random Networks

Bibliographic Details
Published in: IEEE Transactions on Signal Processing, Vol. 65, No. 11, pp. 2933-2948
Main Authors: Hong, Mingyi; Chang, Tsung-Hui
Format: Journal Article
Language: English
Published: IEEE, 01.06.2017
ISSN: 1053-587X, 1941-0476
Online Access: Full text
Description
Summary: We consider solving a convex optimization problem with possibly stochastic gradients, over a randomly time-varying multiagent network. Each agent has access to some local objective function, and it only has unbiased estimates of the gradients of the smooth component. We develop a dynamic stochastic proximal-gradient consensus algorithm with the following key features: (1) it works for both static and certain randomly time-varying networks; (2) it allows the agents to utilize either exact or stochastic gradient information; (3) it converges at a provable rate. In particular, the proposed algorithm converges to a global optimal solution at a rate of O(1/r) [resp. O(1/√r)] when the exact (resp. stochastic) gradient is available, where r is the iteration counter. Interestingly, the developed algorithm establishes a close connection among a number of (seemingly unrelated) distributed algorithms, such as EXTRA, PG-EXTRA, IC/IDC-ADMM, DLM, and the classical distributed subgradient method.
DOI: 10.1109/TSP.2017.2673815
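
The summary above refers to a distributed proximal-gradient consensus scheme that can run with either exact or stochastic (unbiased, noisy) gradients. The Python sketch below is not the paper's algorithm itself; it is a minimal illustration, under assumed problem data (local least-squares terms plus a shared l1 regularizer), of the general pattern the abstract describes: each agent mixes its iterate with neighbors through a doubly stochastic weight matrix and then takes a proximal step using a noisy local gradient with a diminishing step size, the regime in which O(1/√r) rates are typically obtained. The ring network, the helper names (noisy_grad, prox_l1), and the step-size schedule are all illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only (not the authors' exact method): N agents jointly
# minimize sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1 over a ring network.
import numpy as np

rng = np.random.default_rng(0)
N, d, lam = 5, 10, 0.1                         # agents, dimension, l1 weight (assumed)

# Local data for each agent (assumed synthetic problem).
A = [rng.standard_normal((20, d)) for _ in range(N)]
x_true = rng.standard_normal(d)
b = [Ai @ x_true + 0.1 * rng.standard_normal(20) for Ai in A]

# Doubly stochastic mixing matrix for a ring network.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

def noisy_grad(i, x):
    """Unbiased estimate of the smooth local gradient A_i^T (A_i x - b_i)."""
    g = A[i].T @ (A[i] @ x - b[i])
    return g + 0.01 * rng.standard_normal(d)   # additive noise models the stochastic oracle

def prox_l1(v, t):
    """Proximal operator of t*lam*||.||_1 (soft-thresholding), applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

X = np.zeros((N, d))                           # one row per agent
for r in range(1, 501):
    step = 0.5 / np.sqrt(r)                    # diminishing step size for noisy gradients
    X_mix = W @ X                              # consensus (neighbor averaging) step
    G = np.vstack([noisy_grad(i, X[i]) for i in range(N)])
    X = prox_l1(X_mix - step * G, step)        # per-agent proximal-gradient step

print("disagreement across agents:", np.linalg.norm(X - X.mean(axis=0)))
```

The diminishing step size is what one would use with unbiased but noisy gradients; with exact gradients a constant step size (and the faster O(1/r) behavior mentioned in the summary) would be the natural counterpart.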