A conditional gradient algorithm for distributed online optimization in networks

Bibliographic details
Published in: IET control theory & applications, Vol. 15, No. 4, pp. 570-579
Main authors: Shen, Xiuyu; Li, Dequan; Fang, Runyue; Dong, Qiao
Format: Journal Article
Language: English
Published: Wiley, 1 March 2021
ISSN:1751-8644, 1751-8652
Description
Summary: This paper addresses a network of computing nodes aiming to solve an online convex optimisation problem in a distributed manner, that is, by means of local estimation and communication, without any central coordinator. An online distributed algorithm based on the conditional gradient method is developed, which effectively tackles the high time complexity of distributed online optimisation. The proposed algorithm decomposes the global objective function into a sum of local objective functions, and the nodes collectively minimise the sum of local time-varying objective functions while the communication pattern among nodes is captured by a connected undirected graph. By adding a regularisation term to the local objective function of each node, the proposed algorithm constructs a new time-varying objective function. The proposed algorithm also replaces the projection operation with a local linear optimisation oracle, which effectively improves the regret bound of the algorithm. By introducing the nominal regret and the global regret, the convergence properties of the proposed algorithm are also analysed theoretically. It is shown that, if the objective function of each agent is strongly convex and smooth, both types of regret grow sublinearly with the order O(log T), where T is the time horizon. Numerical experiments also demonstrate the advantages of the proposed algorithm over existing distributed optimisation algorithms.
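To illustrate the projection-free idea the abstract describes, here is a minimal sketch (not the paper's exact algorithm) of one synchronous round of a distributed conditional-gradient update. The feasible set, mixing matrix `W`, local gradient callables, and step size schedule are all illustrative assumptions: the feasible set is taken to be the unit L1 ball (whose linear minimisation oracle is simply a signed coordinate vector), `W` is assumed doubly stochastic and consistent with the communication graph, and the paper's regularisation term is folded into each node's gradient callable.

```python
import numpy as np

def lmo_l1_ball(g, radius=1.0):
    """Linear minimisation oracle over the L1 ball:
    argmin_{||s||_1 <= radius} <g, s> is a signed coordinate vector.
    (Illustrative choice of feasible set, not taken from the paper.)"""
    k = np.argmax(np.abs(g))
    s = np.zeros_like(g)
    s[k] = -radius * np.sign(g[k])
    return s

def distributed_cg_round(X, W, grads, gamma):
    """One synchronous round: each node averages its neighbours' iterates,
    queries the LMO on its local gradient, and moves toward the returned atom.
    X: (n_nodes, dim) current iterates; W: (n_nodes, n_nodes) doubly
    stochastic mixing matrix; grads: per-node gradient callables (assumed to
    include any regularisation term); gamma: step size in (0, 1]."""
    Y = W @ X                      # consensus (gossip) step with neighbours
    X_next = np.empty_like(X)
    for i in range(X.shape[0]):
        g = grads[i](Y[i])         # local gradient at the averaged point
        s = lmo_l1_ball(g)         # projection-free: LMO replaces projection
        X_next[i] = (1 - gamma) * Y[i] + gamma * s   # convex combination
    return X_next
```

Because each update is a convex combination of feasible points, every iterate stays feasible without a projection; a classical Frank-Wolfe schedule such as gamma_t = 2/(t + 2) can be used, though the paper's step sizes and regularisation may differ.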
DOI:10.1049/cth2.12062