Distributed Stochastic Mirror Descent Algorithm Over Time-varying Network


Bibliographic Details
Published in:IEEE International Conference on Control and Automation (Print) pp. 716 - 721
Main Authors: Wang, Yinghui, Zhou, Hongbing, Hong, Yiguang
Format: Conference Proceeding
Language:English
Published: IEEE 01.06.2018
Subjects:
ISSN:1948-3457
Description
Summary: In this paper, we propose a distributed stochastic mirror descent algorithm for solving a general (non-differentiable) distributed convex optimization problem over a time-varying multi-agent network. We adopt the Bregman divergence, rather than the Euclidean distance, as the augmented distance-measuring function to solve the distributed first-order Lagrangian-based convex optimization problem. With a fixed step-size, our algorithm achieves a convergence rate of O(1/T) with an error bound, which is the best known convergence rate for distributed first-order algorithms. Numerical experiments demonstrate the performance of the proposed algorithm.
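To illustrate the idea behind the abstract, the following is a minimal sketch of a distributed stochastic mirror descent update, not the authors' exact algorithm. It uses the negative-entropy mirror map (whose Bregman divergence is the KL divergence), so iterates stay on the probability simplex, and fixes one doubly stochastic ring network for brevity, whereas the paper allows a time-varying graph. The local objectives, step-size, and noise model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim, eta, T = 4, 5, 0.1, 200

# Each agent i privately holds a target c_i on the simplex and
# minimizes f_i(x) = ||x - c_i||^2; the network goal is to
# minimize the sum of the f_i (illustrative choice, not from the paper).
targets = rng.dirichlet(np.ones(dim), n_agents)

def stoch_grad(i, x):
    # Noisy first-order oracle: gradient of f_i plus zero-mean noise.
    return 2.0 * (x - targets[i]) + 0.01 * rng.standard_normal(dim)

# Doubly stochastic mixing matrix for a fixed ring network.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i + 1) % n_agents] = 0.25
    W[i, (i - 1) % n_agents] = 0.25

x = np.full((n_agents, dim), 1.0 / dim)  # all agents start at the simplex center
for t in range(T):
    mixed = W @ x                         # consensus (weighted averaging) step
    for i in range(n_agents):
        g = stoch_grad(i, mixed[i])
        y = mixed[i] * np.exp(-eta * g)   # entropic mirror (exponentiated-gradient) step
        x[i] = y / y.sum()                # Bregman projection back onto the simplex
```

With the entropic mirror map the update stays multiplicative and the projection is a simple normalization, which is exactly the kind of geometry-adapted step that a Euclidean (projected-gradient) method would not give on the simplex.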
DOI:10.1109/ICCA.2018.8444276