A hybrid stochastic alternating direction method of multipliers for nonconvex and nonsmooth composite optimization

Bibliographic Details
Published in: European Journal of Operational Research, Vol. 329, No. 1, pp. 63-78
Main Authors: Zeng, Yuxuan; Bai, Jianchao; Wang, Shengjia; Wang, Zhiguo; Shen, Xiaojing
Format: Journal Article
Language: English
Published: Elsevier B.V., 16.02.2026
ISSN: 0377-2217
Description
Summary:

Highlights:
• A hybrid stochastic ADMM with a hybrid gradient estimator is proposed.
• An optimal oracle complexity of the algorithm is established.
• Theoretical guidance is provided by analyzing the roles of the hybrid and penalty parameters.
• Connections to and distinctions from the method of Dinh et al. (2022) are clarified.
• Comparison experiments demonstrate the efficiency of the algorithm.

Abstract: Nonconvex and nonsmooth composite optimization problems with linear constraints have gained significant attention in practical applications. This paper proposes a hybrid stochastic Alternating Direction Method of Multipliers (ADMM) that leverages a novel hybrid estimator to solve such problems with expectation or finite-sum objective functions. Compared to existing double-loop stochastic ADMMs, our method features simpler updates enabled by a single-loop, single-sample framework, while avoiding the need for checkpoint selection. Under mild conditions, we analyze the explicit relationships between key parameters using refined Lyapunov functions and rigorously establish sublinear convergence. To the best of our knowledge, this is the first single-loop stochastic ADMM for solving both expectation and finite-sum problems while matching the best-known oracle complexity bound of state-of-the-art double-loop stochastic ADMMs. Numerical experiments on several nonconvex minimization tasks demonstrate the superior performance of the proposed method.
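The record gives only the abstract, so the paper's exact update rules are not shown here. As a rough illustration of the kind of hybrid gradient estimator the abstract refers to (in the spirit of the method of Dinh et al. cited in the highlights), the minimal Python sketch below mixes a SARAH-type recursive correction with a plain unbiased stochastic gradient on a toy unconstrained least-squares problem. The mixing weight beta, the step size, the problem data, and the use of two fresh single samples per iteration are all assumptions made for illustration; the paper's ADMM machinery for the nonsmooth term and the linear constraints is omitted.

import numpy as np

# Toy finite-sum problem: f(x) = (1/n) * sum_i 0.5 * (a_i^T x - b_i)^2.
# All problem data and parameters here are hypothetical.
rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def stoch_grad(x, i):
    # Stochastic gradient of f at x from the single component i.
    return (A[i] @ x - b[i]) * A[i]

x = np.zeros(d)
x_prev = x.copy()
v = stoch_grad(x, rng.integers(n))  # initialize with one plain sample
beta, step = 0.9, 0.05              # assumed mixing weight and step size

for t in range(500):
    i, j = rng.integers(n), rng.integers(n)  # fresh single samples
    # Hybrid estimator: convex combination of a SARAH-type recursive
    # correction (sample i) and an unbiased stochastic gradient (sample j).
    v = beta * (v + stoch_grad(x, i) - stoch_grad(x_prev, i)) \
        + (1.0 - beta) * stoch_grad(x, j)
    x_prev = x.copy()
    x = x - step * v  # plain gradient step; the paper's ADMM instead
                      # alternates primal updates with a multiplier update

print("final objective:", 0.5 * np.mean((A @ x - b) ** 2))

In estimators of this family, beta close to 1 favors the low-variance recursive term, while the (1 - beta) unbiased term limits accumulated bias; the interplay between such a hybrid parameter and the ADMM penalty parameter is what the theoretical-guidance highlight above refers to.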
DOI: 10.1016/j.ejor.2025.10.024