Distributed Subgradient Method With Random Quantization and Flexible Weights: Convergence Analysis

Detailed Bibliography
Published in: IEEE Transactions on Cybernetics, Volume 54, Issue 2, pp. 1–13
Main Authors: Xia, Zhaoyue; Du, Jun; Jiang, Chunxiao; Poor, H. Vincent; Han, Zhu; Ren, Yong
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 1 February 2024
ISSN: 2168-2267, 2168-2275
Description
Abstract: The distributed subgradient (DSG) method is a widely used algorithm for coping with large-scale distributed optimization problems in machine-learning applications. Most existing works on DSG assume ideal communication between cooperative agents, where the shared information is exact and perfect. This assumption, however, raises potential privacy concerns and is not feasible when wireless transmission links are of poor quality. To meet this challenge, a common approach is to quantize the data locally before transmission, which avoids exposing raw data and significantly reduces the data size. Compared with perfect data, quantization poses fundamental challenges to maintaining data accuracy, which in turn affects the convergence of the algorithms. To overcome this problem, we propose a DSG method with random quantization and flexible weights and provide comprehensive results on the convergence of the algorithm for (strongly/weakly) convex objective functions. We also derive upper bounds on the convergence rates in terms of the quantization error, the distortion, the step sizes, and the number of network agents. Our analysis extends existing results, which consider special cases of step sizes and convex objective functions, to general conclusions for weakly convex cases. Numerical simulations in convex and weakly convex settings support our theoretical results.
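The abstract describes the method only at a high level. As a rough illustration of the two ingredients it names, quantize-before-transmit and weighted consensus mixing combined with a subgradient step, the sketch below is a toy Python/NumPy example under stated assumptions. It is not the authors' algorithm or code: the unbiased dithered quantizer, the l1 objective, the uniform mixing matrix W, the grid spacing delta, and the 1/sqrt(k) step size are all choices made here for illustration.

```python
# A minimal sketch of distributed subgradient descent with unbiased random
# quantization. Everything below (objective, weights, step size, quantizer
# grid) is an illustrative assumption, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def random_quantize(x, delta=0.1):
    """Unbiased stochastic quantization onto a grid with spacing delta:
    each coordinate rounds up with probability equal to its fractional
    offset, so that E[Q(x)] = x."""
    low = np.floor(x / delta) * delta
    p_up = (x - low) / delta                  # probability of rounding up
    return low + delta * (rng.random(x.shape) < p_up)

n, d = 5, 2                                   # number of agents, dimension
targets = rng.normal(size=(n, d))
# Local objectives f_i(x) = ||x - targets[i]||_1; the minimizer of the sum
# is the coordinatewise median of the targets, which gives a ground truth
# to check the iterates against.
W = np.full((n, n), 1.0 / n)                  # doubly stochastic mixing weights
x = rng.normal(size=(n, d))                   # one local iterate per agent (rows)

for k in range(1, 2001):
    q = random_quantize(x)                    # agents exchange quantized iterates only
    g = np.sign(x - targets)                  # subgradient of the l1 objective
    x = W @ q - g / np.sqrt(k)                # mix quantized values, then subgradient step

print("consensus estimate:", x.mean(axis=0))
print("true minimizer:    ", np.median(targets, axis=0))
```

With the diminishing step size, the residual error in this toy run is governed by the quantizer spacing delta, which loosely mirrors the abstract's point that the convergence bounds depend on the quantization error, the step sizes, and the number of agents.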
DOI: 10.1109/TCYB.2023.3336842