A Nesterov-Like Gradient Tracking Algorithm for Distributed Optimization Over Directed Networks
| Published in: | IEEE Transactions on Systems, Man, and Cybernetics: Systems, Volume 51, Issue 10, pp. 6258-6270 |
|---|---|
| Main authors: | , , , |
| Format: | Journal Article |
| Language: | English |
| Published: | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2021 |
| ISSN: | 2168-2216, 2168-2232 |
| Abstract: | In this article, we address the distributed optimization problem over a directed network, where each unit possesses its own convex cost function and the goal is to minimize a global cost function (the average of all local cost functions) while respecting the network's connectivity structure. Most existing methods, such as the push-sum strategy, eliminate the imbalance induced by the directed network by using column-stochastic weights, which may be infeasible because the distributed implementation then requires each unit to have access to (at least) its out-degree information. In contrast, to suit directed networks with row-stochastic weights, we propose a new directed distributed Nesterov-like gradient tracking algorithm, named D-DNGT, which incorporates gradient tracking into the distributed Nesterov method with momentum terms and employs nonuniform step-sizes. D-DNGT extends a number of well-known consensus algorithms over strongly connected directed networks. The implementation of D-DNGT is straightforward: each unit locally chooses a suitable step-size and privately regulates the weights on the information it acquires from its in-neighbors. If the largest step-size and the maximum momentum coefficient are positive and sufficiently small, we prove that D-DNGT converges linearly to the optimal solution provided that the cost functions are smooth and strongly convex. We provide numerical experiments to confirm the theoretical findings and to compare D-DNGT with recently proposed distributed optimization approaches. |
|---|---|
| DOI: | 10.1109/TSMC.2019.2960770 |
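The abstract describes row-stochastic gradient tracking with Nesterov-style momentum and nonuniform step-sizes, but this record does not spell out the update equations. The sketch below is a minimal, self-contained NumPy illustration of a generic iteration of that type: consensus over a strongly connected directed graph with row-stochastic weights, a Perron-eigenvector correction as used in row-stochastic gradient-tracking schemes such as FROST, a momentum extrapolation, and per-unit step-sizes, all run on a toy least-squares problem. The exact recursion, graph, step-sizes, and momentum coefficient here are illustrative assumptions, not the paper's D-DNGT updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: n agents, local costs f_i(x) = 0.5 * ||B_i x - b_i||^2
# (smooth and strongly convex); the global cost is their average.
n, d = 5, 3
B = rng.normal(size=(n, 4, d))
b = rng.normal(size=(n, 4))
x_star = np.linalg.solve(
    sum(Bi.T @ Bi for Bi in B), sum(B[i].T @ b[i] for i in range(n))
)

def grad(i, x):
    return B[i].T @ (B[i] @ x - b[i])

# Directed ring with self-loops: strongly connected, and the weights are
# ROW-stochastic, so each unit normalizes only over its in-neighbors and
# never needs its out-degree (the property the abstract emphasizes).
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = A[i, (i - 1) % n] = 0.5

alpha = 0.01 * (1.0 + 0.5 * rng.random(n))  # nonuniform local step-sizes (assumed)
beta = 0.1                                  # momentum coefficient (assumed)

x_prev = np.zeros((n, d))                   # estimates at iteration k-1
x = np.zeros((n, d))                        # estimates at iteration k
g = np.array([grad(i, x[i]) for i in range(n)])
y = g.copy()                                # gradient trackers, y_0 = grad(x_0)
v = np.eye(n)                               # Perron-eigenvector estimates

for _ in range(5000):
    s = x + beta * (x - x_prev)             # Nesterov-style momentum extrapolation
    x_prev = x
    x = A @ s - alpha[:, None] * y          # consensus step minus tracked gradient
    d_old = v.diagonal().copy()
    v = A @ v                               # diag(v) -> left Perron eigenvector of A
    d_new = v.diagonal()
    g_new = np.array([grad(i, x[i]) for i in range(n)])
    # Gradient tracking with eigenvector rescaling, which corrects the bias
    # a merely row-stochastic A would otherwise introduce.
    y = A @ y + g_new / d_new[:, None] - g / d_old[:, None]
    g = g_new

print("max deviation from optimum:", np.abs(x - x_star).max())
```

With these assumed values the iterates contract toward x_star, consistent with the abstract's claim that a positive, sufficiently small largest step-size and momentum coefficient give linear convergence for smooth, strongly convex costs; larger alpha or beta can destabilize the recursion, which is why both are kept small here.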