A hybrid stochastic alternating direction method of multipliers for nonconvex and nonsmooth composite optimization

Published in: European Journal of Operational Research, Vol. 329, No. 1, pp. 63–78
Main authors: Zeng, Yuxuan; Bai, Jianchao; Wang, Shengjia; Wang, Zhiguo; Shen, Xiaojing
Format: Journal Article
Language: English
Published: Elsevier B.V., 16 February 2026
ISSN: 0377-2217
Description
Summary:
• A hybrid stochastic ADMM with a hybrid gradient estimator is proposed.
• An optimal complexity bound for the algorithm is established.
• Theoretical guidance is provided by analyzing the roles of the hybrid and penalty parameters.
• Connections and distinctions with the method of Dinh et al. (2022) are clarified.
• Comparison experiments demonstrate the efficiency of the algorithm.

Nonconvex and nonsmooth composite optimization problems with linear constraints have gained significant attention in practical applications. This paper proposes a hybrid stochastic Alternating Direction Method of Multipliers (ADMM) that leverages a novel hybrid gradient estimator to solve such problems with expectation or finite-sum objective functions. Compared to existing double-loop stochastic ADMMs, our method features simpler updates enabled by a single-loop, single-sample framework, while avoiding the need for checkpoint selection. Under mild conditions, we analyze the explicit relationships between key parameters using refined Lyapunov functions and rigorously establish sublinear convergence. To the best of our knowledge, this is the first single-loop stochastic ADMM for solving both expectation and finite-sum problems that matches the best-known oracle complexity bound of state-of-the-art double-loop stochastic ADMMs. Numerical experiments on several nonconvex minimization tasks demonstrate the superior performance of the proposed method.
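For orientation, the minimal sketch below illustrates the kind of single-sample hybrid gradient estimator the summary refers to, assuming it follows the common form popularized by Dinh et al. (2022): a convex combination of a SARAH-type recursive correction and a plain stochastic gradient. The toy finite-sum least-squares problem, the step size, and the bare gradient step standing in for the full ADMM primal-dual updates are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# A minimal sketch, assuming the hybrid estimator takes the common form
#   v_t = beta * (v_{t-1} + g(x_t; xi_t) - g(x_{t-1}; xi_t)) + (1 - beta) * g(x_t; zeta_t),
# i.e. a convex combination of a SARAH-type recursive term and a plain
# stochastic gradient. Problem data, step size, and the plain gradient step
# (in place of the ADMM subproblem updates) are illustrative assumptions.

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def stoch_grad(x, i):
    # Stochastic gradient of f(x) = (1/2n) * ||Ax - b||^2 using component i;
    # uniformly sampled i gives an unbiased estimate of the full gradient.
    return A[i] * (A[i] @ x - b[i])

beta = 0.9                                 # hybrid parameter: beta=1 is SARAH-like, beta=0 is plain SGD
x_prev = np.zeros(d)
v = stoch_grad(x_prev, rng.integers(n))    # initialize the estimator from one sample
x = x_prev - 0.1 * v

for t in range(1000):
    xi, zeta = rng.integers(n), rng.integers(n)    # two independent single samples per iteration
    v = beta * (v + stoch_grad(x, xi) - stoch_grad(x_prev, xi)) \
        + (1.0 - beta) * stoch_grad(x, zeta)
    x_prev, x = x, x - 0.1 * v             # single-loop update; no inner loop or checkpoint selection

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))
```

With beta close to 1 the recursive correction dominates, giving the low-variance behavior of SARAH-type estimators, while beta = 0 recovers plain SGD; a single hybrid parameter thus trades variance reduction against per-iteration noise, which is the role the summary says the paper's analysis makes explicit.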
DOI: 10.1016/j.ejor.2025.10.024