Single-Loop Variance-Reduced Stochastic Algorithm for Nonconvex-Concave Minimax Optimization


Detailed Description

Saved in:
Bibliographic Details
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (2025), pp. 1-5
Main authors: Jiang, Xia; Zhu, Linglingzhi; Zheng, Taoli; So, Anthony Man-Cho
Format: Conference paper
Language: English
Published: IEEE, 06.04.2025
ISSN: 2379-190X
Online access: Full text
Description
Summary: Nonconvex-concave (NC-C) finite-sum minimax problems have broad applications in decentralized optimization and various machine learning tasks. However, the nonsmooth nature of NC-C problems makes it challenging to design effective variance reduction techniques. Existing vanilla stochastic algorithms using uniform samples for gradient estimation often exhibit slow convergence rates and require bounded variance assumptions. In this paper, we develop a novel probabilistic variance reduction updating scheme and propose a single-loop algorithm called the probabilistic variance-reduced smoothed gradient descent-ascent (PVR-SGDA) algorithm. The proposed algorithm achieves an iteration complexity of $\mathcal{O}(\varepsilon^{-4})$, surpassing the best-known rates of stochastic algorithms for NC-C minimax problems and matching the performance of the best deterministic algorithms in this context. Finally, we demonstrate the effectiveness of the proposed algorithm through numerical simulations.
DOI:10.1109/ICASSP49660.2025.10889086
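The record contains no pseudocode for PVR-SGDA itself. As a rough illustration of the general idea the summary describes (a single-loop gradient descent-ascent with a probabilistic variance-reduced gradient estimator), the sketch below runs a PAGE-style estimator on a toy nonconvex-concave finite-sum problem of my own construction; the problem, step sizes, and switch probability `p` are all hypothetical, and the smoothing component of the authors' method is omitted:

```python
import numpy as np

# Toy finite-sum minimax problem (hypothetical, not from the paper):
#   f(x, y) = (1/n) * sum_i [cos(a_i * x) + y * (x - b_i) - y^2 / 2],
# which is nonconvex in x and strongly concave in y.
rng = np.random.default_rng(0)
n = 50
a = rng.uniform(0.5, 1.5, n)
b = rng.normal(0.0, 1.0, n)

def grads(x, y, idx):
    """Gradients of f in x and y, averaged over the component index set idx."""
    gx = np.mean(-a[idx] * np.sin(a[idx] * x) + y)
    gy = np.mean(x - b[idx] - y)
    return gx, gy

def pvr_sgda(x0=1.0, y0=0.0, iters=2000, batch=5, p=0.1,
             eta_x=0.02, eta_y=0.1):
    """Single-loop GDA with a probabilistic variance-reduced estimator:
    with probability p recompute the full-batch gradient, otherwise
    recursively correct the previous estimate using a small minibatch
    evaluated at both the new and the old iterate."""
    x, y = x0, y0
    gx, gy = grads(x, y, np.arange(n))        # initial full gradient
    for _ in range(iters):
        x_new = x - eta_x * gx                # descent step on x
        y_new = y + eta_y * gy                # ascent step on y
        if rng.random() < p:
            # occasional full-batch refresh keeps the estimator anchored
            gx, gy = grads(x_new, y_new, np.arange(n))
        else:
            # recursive minibatch correction (same indices at both points)
            idx = rng.integers(0, n, batch)
            gx_new, gy_new = grads(x_new, y_new, idx)
            gx_old, gy_old = grads(x, y, idx)
            gx += gx_new - gx_old
            gy += gy_new - gy_old
        x, y = x_new, y_new
    return x, y

x, y = pvr_sgda()
```

Because the correction terms are evaluated on the same minibatch at consecutive iterates, their variance shrinks as the iterates stabilize, which is what lets such estimators avoid the bounded-variance assumptions mentioned in the summary.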