Loopless Variance Reduced Stochastic ADMM for Equality Constrained Problems in IoT Applications

Bibliographic Details
Published in: IEEE Internet of Things Journal, Vol. 9, No. 3, pp. 2293-2303
Authors: Liu, Yuanyuan; Geng, Jiacheng; Shang, Fanhua; An, Weixin; Liu, Hongying; Zhu, Qi
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.02.2022
ISSN: 2327-4662
Abstract: The alternating direction method of multipliers (ADMM) is an efficient optimization method for solving equality-constrained problems in Internet of Things (IoT) applications. Recently, several stochastic variance-reduced ADMM algorithms (e.g., SVRG-ADMM) have made exciting progress, such as linear convergence for strongly convex (SC) problems. However, SVRG-ADMM and its variants have an outer loop in which the full gradient at a snapshot point is computed, and this outer loop contains an inner loop in which a large number of variance-reduced gradients are estimated from random samples. This loopy design makes these methods harder to analyze and makes the inner-loop length difficult to determine: it must be proportional to the condition number to achieve the best convergence rate, and it is often set to the suboptimal choice $\mathcal{O}(n)$, where $n$ is the number of samples. To tackle these issues, we propose an efficient loopless variance-reduced stochastic ADMM algorithm, called LVR-SADMM. In LVR-SADMM, we remove the outer loop and replace it with a biased coin-flip, in which the snapshot is updated with a small probability to trigger the full gradient computation. Moreover, we theoretically analyze the convergence of LVR-SADMM and show that it enjoys a fast linear convergence rate for SC problems. We also present an accelerated loopless SVRG-ADMM (LAVR-SADMM) method for both SC and non-SC problems. Experimental results on many real-world data sets verify that the proposed methods achieve an average speedup of $2\times$ in the SC case and $5\times$ in the non-SC case over their loopy counterparts.
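
The loopless snapshot mechanism described in the abstract can be illustrated with a short sketch. The Python code below applies an SVRG-style variance-reduced gradient inside a linearized stochastic ADMM loop for a generic lasso-type problem, min_x (1/2n)||Fx - y||^2 + lam*||z||_1 subject to Ax - z = 0, and refreshes the snapshot by a biased coin-flip with small probability p instead of an explicit outer loop. The problem instance, step size eta, penalty rho, and the specific update rules are illustrative assumptions for exposition only; this is not the exact LVR-SADMM update of the paper.

import numpy as np

def loopless_vr_sadmm(F, y, A, lam=0.1, eta=0.01, rho=1.0, p=None,
                      n_iters=2000, seed=0):
    """Sketch: loopless variance-reduced stochastic ADMM (illustrative only).

    Solves  min_x (1/2n)||F x - y||^2 + lam*||z||_1  s.t.  A x - z = 0
    with a linearized x-update driven by an SVRG-style gradient estimator
    whose snapshot is refreshed by a biased coin-flip with probability p.
    """
    rng = np.random.default_rng(seed)
    n, d = F.shape
    m = A.shape[0]
    if p is None:
        p = 1.0 / n                         # small snapshot-refresh probability

    x = np.zeros(d)
    z = np.zeros(m)
    u = np.zeros(m)                         # scaled dual variable

    w = x.copy()                            # snapshot point
    mu = F.T @ (F @ w - y) / n              # full gradient at the snapshot

    def grad_i(v, i):                       # gradient of the i-th sample loss
        return F[i] * (F[i] @ v - y[i])

    for _ in range(n_iters):
        i = rng.integers(n)
        # SVRG-style variance-reduced gradient estimate at the current x
        g = grad_i(x, i) - grad_i(w, i) + mu
        # linearized (gradient-step) x-update of the augmented Lagrangian
        x = x - eta * (g + rho * A.T @ (A @ x - z + u))
        # z-update: soft-thresholding, the prox of lam*||.||_1
        v = A @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update
        u = u + A @ x - z
        # loopless part: a biased coin-flip replaces the explicit outer loop
        if rng.random() < p:
            w = x.copy()
            mu = F.T @ (F @ w - y) / n      # full gradient only on "heads"
    return x, z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    F = rng.standard_normal((200, 50))
    x_true = rng.standard_normal(50) * (rng.random(50) < 0.2)
    y = F @ x_true + 0.01 * rng.standard_normal(200)
    x_hat, _ = loopless_vr_sadmm(F, y, np.eye(50))
    print("recovery error:", np.linalg.norm(x_hat - x_true))

With p set to roughly 1/n, the expected number of stochastic steps between full-gradient computations is about n, which mirrors the common O(n) inner-loop length of loopy SVRG-ADMM without having to fix that length in advance.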
DOI: 10.1109/JIOT.2021.3095561