Stochastic Proximal Gradient Consensus Over Random Networks

Detailed bibliography
Published in: IEEE Transactions on Signal Processing, Vol. 65, No. 11, pp. 2933-2948
Main Authors: Hong, Mingyi; Chang, Tsung-Hui
Format: Journal Article
Language: English
Published: IEEE, 01.06.2017
ISSN: 1053-587X, 1941-0476
Description
Summary: We consider solving a convex optimization problem with possibly stochastic gradient, and over a randomly time-varying multiagent network. Each agent has access to some local objective function, and it only has unbiased estimates of the gradients of the smooth component. We develop a dynamic stochastic proximal-gradient consensus algorithm, with the following key features: (1) it works for both static and certain randomly time-varying networks; (2) it allows the agents to utilize either exact or stochastic gradient information; (3) it is convergent with a provable rate. In particular, the proposed algorithm converges to a global optimal solution, with a rate of O(1/r) [resp. O(1/√r)] when the exact (resp. stochastic) gradient is available, where r is the iteration counter. Interestingly, the developed algorithm establishes a close connection among a number of (seemingly unrelated) distributed algorithms, such as EXTRA, PG-EXTRA, IC/IDC-ADMM, DLM, and the classical distributed subgradient method.
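The abstract describes an algorithm that combines three ingredients: consensus averaging with neighbors, a (possibly stochastic) gradient step on the smooth local component, and a proximal step on the nonsmooth component. As a rough illustration only, the toy sketch below runs a generic prox-DGD-style iteration of this kind (not the paper's exact dynamic stochastic proximal-gradient consensus updates) for a distributed lasso problem on a static ring network; all problem data, step sizes, and the mixing matrix are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distributed lasso: n agents jointly minimize
#   sum_i 0.5*||A_i x - b_i||^2 + lam*||x||_1,
# where agent i only sees (A_i, b_i) and noisy gradients of its smooth term.
# All sizes and constants here are invented for illustration.
n, m, d = 4, 5, 3           # agents, samples per agent, dimension
A = rng.standard_normal((n, m, d))
b = rng.standard_normal((n, m))
lam = 0.1                   # l1 regularization weight

# Doubly stochastic mixing matrix for a static 4-node ring network.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 (the nonsmooth step)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros((n, d))        # one local copy of the decision variable per agent
step = 0.01
for r in range(2000):
    # Unbiased stochastic gradient of the smooth term: exact gradient + noise.
    residual = np.einsum('imj,ij->im', A, x) - b
    grad = np.einsum('imj,im->ij', A, residual)
    grad += 0.01 * rng.standard_normal(grad.shape)
    # One iteration: mix with neighbors, gradient step, then proximal step.
    x = soft_threshold(W @ x - step * grad, step * lam)

# With a constant step size the local copies agree only up to a small error;
# the paper's algorithm drives exact consensus with the stated rates.
spread = float(np.max(np.std(x, axis=0)))
```

The point of the sketch is the per-iteration structure (mix, gradient, prox), not the convergence guarantee; the O(1/r) and O(1/√r) rates in the abstract are specific to the paper's algorithm and analysis.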
DOI: 10.1109/TSP.2017.2673815