A Nesterov-Like Gradient Tracking Algorithm for Distributed Optimization Over Directed Networks

Bibliographic Details
Published in: IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 51, No. 10, pp. 6258 - 6270
Main Authors: Lu, Qingguo; Liao, Xiaofeng; Li, Huaqing; Huang, Tingwen
Format: Journal Article
Language: English
Published: New York: IEEE, 01.10.2021
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN: 2168-2216, 2168-2232
Online Access: Full text
Description
Summary: In this article, we address the distributed optimization problem over a directed network, where each unit possesses its own convex cost function and the goal is to minimize a global cost function (the average of all local cost functions) while respecting the network connectivity structure. Most existing methods, such as the push-sum strategy, eliminate the imbalance induced by the directed network by using column-stochastic weights, which may be infeasible because the distributed implementation then requires each unit to have access to (at least) its own out-degree information. In contrast, to suit directed networks with row-stochastic weights, we propose a new directed distributed Nesterov-like gradient tracking algorithm, named D-DNGT, that incorporates gradient tracking into the distributed Nesterov method with momentum terms and employs nonuniform step-sizes. D-DNGT extends a number of notable consensus algorithms over strongly connected directed networks. The implementation of D-DNGT is straightforward: each unit locally chooses a suitable step-size and privately regulates the weights on the information it acquires from its in-neighbors. Provided that the cost functions are smooth and strongly convex, we prove that D-DNGT converges linearly to the optimal solution if the largest step-size and the maximum momentum coefficient are positive and sufficiently small. We provide numerical experiments that confirm the findings in this article and compare D-DNGT with recently proposed distributed optimization approaches.
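
The abstract describes the algorithm only in words. As a concrete illustration of the ingredients it names (row-stochastic weights, gradient tracking, nonuniform step-sizes, and a Nesterov-like momentum term), the following NumPy sketch combines these pieces under stated assumptions: quadratic local costs, a directed ring network, and a FROST-style eigenvector-rescaling correction for the row-stochastic imbalance. The variable names, network, and parameter values are illustrative assumptions, not the paper's exact D-DNGT update.

```python
# A minimal sketch of gradient tracking with row-stochastic weights, nonuniform
# step-sizes, and a Nesterov-like momentum term. NOT the paper's exact D-DNGT
# update; costs, network, and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3                                  # agents, decision-variable dimension

# Assumed strongly convex local costs f_i(x) = 0.5 * ||Q_i x - b_i||^2.
Q = rng.standard_normal((n, d, d)) + 3.0 * np.eye(d)
b = rng.standard_normal((n, d))

def grad(i, x):
    """Gradient of the i-th local cost at x."""
    return Q[i].T @ (Q[i] @ x - b[i])

# Row-stochastic weights on a strongly connected directed ring with self-loops:
# each agent only weights what it receives from its in-neighbor (no out-degree
# knowledge is needed, matching the abstract's motivation).
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5
    A[i, (i - 1) % n] = 0.5

alpha = 0.005 * (1.0 + 0.1 * rng.random(n))  # small nonuniform local step-sizes
beta = 0.05                                  # small momentum coefficient

x = rng.standard_normal((n, d))              # local estimates of the minimizer
x_prev = x.copy()
y = np.eye(n)                                # estimates of A's Perron eigenvector
g = np.array([grad(i, x[i]) for i in range(n)])  # rescaled gradients (diag(y) = 1)
z = g.copy()                                 # gradient trackers

for _ in range(10000):
    s = x + beta * (x - x_prev)              # Nesterov-like extrapolation step
    x_prev = x
    x = A @ s - alpha[:, None] * z           # consensus + tracked-gradient descent
    y = A @ y                                # eigenvector estimation (FROST-style)
    # Dividing by diag(y) corrects the imbalance of row-stochastic weights.
    g_new = np.array([grad(i, x[i]) / y[i, i] for i in range(n)])
    z = A @ z + g_new - g                    # track the rescaled average gradient
    g = g_new

# Compare against the centralized minimizer of the summed (equivalently, average) cost.
x_star = np.linalg.solve(sum(Q[i].T @ Q[i] for i in range(n)),
                         sum(Q[i].T @ b[i] for i in range(n)))
print("max agent error:", np.max(np.linalg.norm(x - x_star, axis=1)))
```

Under these assumptions, all local estimates approach the minimizer of the average cost, consistent with the abstract's linear-convergence claim for smooth, strongly convex costs with sufficiently small step-sizes and momentum coefficient.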
DOI: 10.1109/TSMC.2019.2960770