Distributed Randomized Gradient-Free Convex Optimization With Set Constraints Over Time-Varying Weight-Unbalanced Digraphs

Detailed bibliography
Published in: IEEE Transactions on Network Science and Engineering, Vol. 12, No. 2, pp. 610-622
Main authors: Zhu, Yanan; Li, Qinghai; Li, Tao; Wen, Guanghui
Medium: Journal Article
Language: English
Publication details: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.03.2025
ISSN: 2327-4697, 2334-329X
Description
Summary: This paper explores a class of distributed constrained convex optimization problems in which the objective function is a sum of $N$ convex local objective functions. These functions are locally non-smooth yet Lipschitz continuous, and the optimization is further constrained by $N$ distinct closed convex sets. To describe the structure of information exchange among agents, a sequence of time-varying weight-unbalanced directed graphs is introduced. Furthermore, this study introduces a novel algorithm, the distributed randomized gradient-free constrained optimization algorithm. This algorithm marks a significant advancement by substituting the conventional requirement for precise gradient or subgradient information in each iterative update with a random gradient-free oracle, thereby addressing scenarios where accurate gradient information is hard to obtain. A thorough convergence analysis is provided based on the smoothing parameters of the local objective functions, the Lipschitz constants, and a series of standard assumptions. Notably, the proposed algorithm converges to an approximate optimal solution within a predetermined error threshold for the considered optimization problem, achieving the same convergence rate of $\mathcal{O}(\frac{\ln(k)}{\sqrt{k}})$ as general randomized gradient-free algorithms when the decaying step size is selected appropriately. When at least one of the local objective functions is strongly convex, the proposed algorithm achieves a faster convergence rate of $\mathcal{O}(\frac{1}{k})$. Finally, simulation results verify the theoretical findings.
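As a rough illustration of the ingredients named in the abstract (a two-point randomized gradient-free oracle, mixing over time-varying weight-unbalanced digraphs, and projection onto local closed convex sets), the following Python sketch combines them in a generic distributed loop. It is not the paper's algorithm: the problem data (A, b), the box constraint, the row-stochastic mixing rule, the smoothing parameter mu, and the step-size constant are all illustrative assumptions, and the simple mixing step does not reproduce the paper's treatment of weight imbalance or its convergence guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical problem data: N agents, decision variable in R^n ---
N, n = 5, 3
A = rng.standard_normal((N, n))
b = rng.standard_normal(N)

def f_local(i, x):
    """Non-smooth but Lipschitz local objective of agent i (illustrative choice)."""
    return abs(A[i] @ x - b[i])

def project_box(x, lo=-5.0, hi=5.0):
    """Projection onto agent i's local closed convex set (a box, for illustration)."""
    return np.clip(x, lo, hi)

def gf_oracle(i, x, mu):
    """Two-point randomized gradient-free oracle based on Gaussian smoothing."""
    u = rng.standard_normal(n)
    return (f_local(i, x + mu * u) - f_local(i, x)) / mu * u

def random_digraph_weights(N):
    """Random row-stochastic mixing matrix; generally not column-stochastic,
    i.e. the digraph is weight-unbalanced (illustrative construction)."""
    W = rng.random((N, N)) * (rng.random((N, N)) < 0.5)
    np.fill_diagonal(W, 1.0)                 # self-loops keep row sums positive
    return W / W.sum(axis=1, keepdims=True)

x = rng.standard_normal((N, n))              # each agent keeps its own iterate
mu = 1e-3                                    # smoothing parameter (assumed)
for k in range(1, 2001):
    alpha = 1.0 / np.sqrt(k)                 # decaying step size (assumed constant c = 1)
    W = random_digraph_weights(N)            # a new digraph at every iteration
    x_mix = W @ x                            # mix with in-neighbors' iterates
    for i in range(N):
        # gradient-free step followed by projection onto the local set
        x[i] = project_box(x_mix[i] - alpha * gf_oracle(i, x_mix[i], mu))

print("final iterates (rows = agents):")
print(np.round(x, 3))
```

Under these assumptions the agents' iterates drift toward a common point that approximately minimizes the sum of the local objectives over the intersection of the boxes, which is the qualitative behavior the abstract describes.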
DOI: 10.1109/TNSE.2024.3506732