A Fenchel dual gradient method enabling regularization for nonsmooth distributed optimization over time-varying networks

Published in: Optimization Methods & Software, Vol. 38, No. 4, pp. 813-836
Main authors: Wu, Xuyang; Sou, Kin Cheong; Lu, Jie
Medium: Journal Article
Language: English
Publication details: Abingdon: Taylor & Francis, 04.07.2023
ISSN: 1055-6788, 1029-4937
Description
Summary: In this paper, we develop a regularized Fenchel dual gradient method (RFDGM), which allows nodes in a time-varying undirected network to find a common decision, in a fully distributed fashion, for minimizing the sum of their local objective functions subject to their local constraints. Unlike most existing distributed optimization algorithms that also cope with time-varying networks, RFDGM is able to handle problems with general convex objective functions and distinct local constraints, and still has non-asymptotic convergence results. Specifically, under a standard network connectivity condition, we show that RFDGM is guaranteed to reach ϵ-accuracy in both optimality and feasibility within iterations. Such iteration complexity can be improved to if the local objective functions are strongly convex but not necessarily differentiable. Finally, simulation results demonstrate the competence of RFDGM in practice.
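To make the abstract's idea concrete: a Fenchel-dual approach dualizes the consensus (coupling) constraints and runs gradient ascent on the dual, with a regularization term that makes the dual problem strongly concave and thereby enables non-asymptotic rates. The record contains only the abstract, so the sketch below is NOT the paper's actual RFDGM update; it is a minimal toy of regularized dual decomposition for consensus over a time-varying graph, where the local data `a`, the step size `alpha`, and the regularization weight `mu` are all illustrative assumptions.

```python
import numpy as np

# Illustrative assumption: each node i holds f_i(x) = 0.5 * (x - a_i)**2
# and all nodes must agree on a common scalar decision.
a = np.array([1.0, 3.0, 5.0, 7.0])
n = len(a)

# Time-varying undirected links: the union of the edge sets over any two
# consecutive rounds is connected (a standard joint-connectivity condition).
edge_schedule = [[(0, 1), (2, 3)], [(1, 2), (3, 0)]]

alpha = 0.3   # dual step size (assumed)
mu = 1e-3     # dual regularization weight (assumed)
lam = {}      # one dual variable per undirected edge, stored at its endpoints

x = a.copy()
for k in range(500):
    active = edge_schedule[k % len(edge_schedule)]
    # Primal step: each node minimizes its Lagrangian term
    # f_i(x) + x * (net dual pressure), using the duals it has stored.
    s = np.zeros(n)
    for (i, j), lij in lam.items():
        s[i] += lij
        s[j] -= lij
    x = a - s
    # Regularized dual ascent, restricted to the links present this round;
    # the (1 - alpha * mu) factor shrinks the dual toward zero.
    for (i, j) in active:
        lij = lam.get((i, j), 0.0)
        lam[(i, j)] = (1 - alpha * mu) * lij + alpha * (x[i] - x[j])

print(x)  # all four entries end up near mean(a) = 4.0
```

The regularization trades a small, controllable bias in the final solution for strong concavity of the dual, which is what makes explicit iteration-complexity bounds of the kind stated in the abstract possible.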
DOI: 10.1080/10556788.2023.2189713