Distributed Online Optimization With Long-Term Constraints

Published in: IEEE Transactions on Automatic Control, Volume 67, Issue 3, pp. 1089-1104
Main authors: Yuan, Deming; Proutiere, Alexandre; Shi, Guodong
Format: Journal Article
Language: English
Publication details: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 1 March 2022
ISSN: 0018-9286, 1558-2523
Description
Summary: In this article, we consider distributed online convex optimization problems, where the distributed system consists of various computing units connected through a time-varying communication graph. In each time step, each computing unit selects a constrained vector, experiences a loss equal to an arbitrary convex function evaluated at this vector, and may communicate to its neighbors in the graph. The objective is to minimize the system-wide loss accumulated over time. We propose a decentralized algorithm with regret and cumulative constraint violation in $\mathcal{O}(T^{\max\{c,1-c\}})$ and $\mathcal{O}(T^{1-c/2})$, respectively, for any $c\in(0,1)$, where $T$ is the time horizon. When the loss functions are strongly convex, we establish improved regret and constraint violation upper bounds in $\mathcal{O}(\log(T))$ and $\mathcal{O}(\sqrt{T\log(T)})$. These regret scalings match those obtained by state-of-the-art algorithms and fundamental limits in the corresponding centralized online optimization problem (for both convex and strongly convex loss functions). In the case of bandit feedback, the proposed algorithms achieve a regret and constraint violation in $\mathcal{O}(T^{\max\{c,1-c/3\}})$ and $\mathcal{O}(T^{1-c/2})$ for any $c\in(0,1)$. We numerically illustrate the performance of our algorithms for the particular case of distributed online regularized linear regression problems on synthetic and real data.
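
To make the problem setup concrete, below is a minimal Python sketch of a generic distributed online gradient method with a primal-dual penalty for a shared long-term constraint. The ring mixing matrix W, the step sizes eta and gamma, the ball radius, the regression losses, and the constraint g(x) = sum(x) - 1 are all illustrative assumptions; this is not the authors' exact algorithm and it does not reproduce their regret or violation guarantees.

import numpy as np

# Hypothetical sketch: distributed online gradient descent with a primal-dual
# penalty for a shared long-term constraint g(x) <= 0. All modeling choices
# below (losses, constraint, step sizes, graph) are illustrative assumptions.

rng = np.random.default_rng(0)

n_nodes, dim, T = 4, 3, 2000
c = 0.5
eta = T ** (-c)          # primal step size, echoing the O(T^{-c}) scaling in the bounds
gamma = T ** (-c / 2)    # dual step size (illustrative choice)
radius = 5.0             # each node picks its vector in a Euclidean ball of this radius

# Doubly stochastic mixing matrix for a fixed ring graph (the paper allows
# time-varying graphs; a static ring keeps the sketch short).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i + 1) % n_nodes] = 0.25
    W[i, (i - 1) % n_nodes] = 0.25

def project_ball(x, r):
    # Euclidean projection onto the ball of radius r.
    nrm = np.linalg.norm(x)
    return x if nrm <= r else (r / nrm) * x

def loss_grad(x, a, b, lam=0.1):
    # Gradient of a time-varying regularized linear regression loss at one node.
    return 2.0 * a * (a @ x - b) + 2.0 * lam * x

def g(x):
    # Long-term constraint g(x) = sum(x) - 1 <= 0 (an assumption for the sketch).
    return np.sum(x) - 1.0

x = np.zeros((n_nodes, dim))   # local primal iterates
mu = np.zeros(n_nodes)         # local dual variables for the constraint
cum_violation = 0.0

for t in range(T):
    a = rng.normal(size=(n_nodes, dim))   # per-node data revealed at time t
    b = rng.normal(size=n_nodes)

    x_mix = W @ x                          # consensus step: average neighbors' iterates
    x_new = np.empty_like(x)
    for i in range(n_nodes):
        grad = loss_grad(x_mix[i], a[i], b[i]) + mu[i] * np.ones(dim)  # d g / d x = 1
        x_new[i] = project_ball(x_mix[i] - eta * grad, radius)
        mu[i] = max(0.0, mu[i] + gamma * g(x_new[i]))                  # dual ascent on the constraint
        cum_violation += max(0.0, g(x_new[i]))
    x = x_new

print("cumulative constraint violation:", cum_violation)

The pattern above (consensus averaging, a projected gradient step on the local loss plus a Lagrangian penalty, and a dual update that tracks accumulated constraint violation) is the standard template for this class of problems; the paper's contribution lies in the specific algorithm and the regret/violation bounds stated in the summary.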
DOI: 10.1109/TAC.2021.3057601