Accelerated variance-reduced methods for saddle-point problems
| Published in: | EURO Journal on Computational Optimization, Vol. 10, p. 100048 |
|---|---|
| Main authors: | |
| Format: | Journal Article |
| Language: | English |
| Published: | Elsevier Ltd, 2022 |
| Subjects: | |
| ISSN: | 2192-4406 |
| Online access: | Full text |
| Abstract: | We consider composite minimax optimization problems where the goal is to find a saddle point of a large sum of non-bilinear objective functions augmented by simple composite regularizers for the primal and dual variables. For such problems, under the average-smoothness assumption, we propose accelerated stochastic variance-reduced algorithms with complexity bounds that are optimal up to logarithmic factors. In particular, we consider strongly-convex-strongly-concave, convex-strongly-concave, and convex-concave objectives. To the best of our knowledge, these are the first nearly-optimal algorithms for this setting. |
|---|---|
| Highlights: | Optimal accelerated stochastic variance-reduced algorithms for composite saddle-point problems. Saddle-point problems with different strong-convexity and strong-concavity parameters. Upper bounds for composite saddle-point problems with a finite-sum structure. Matching the lower bounds for composite saddle-point problems with finite-sum structure up to logarithmic factors. |
| DOI: | 10.1016/j.ejco.2022.100048 |
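
The abstract refers to a composite finite-sum saddle-point problem; the following LaTeX sketch shows one common way to write such a formulation (the symbols f_i, g, h, n and the dimensions are illustrative assumptions, not notation taken from the paper):

```latex
% Composite finite-sum minimax problem (illustrative notation):
% the average of n smooth, possibly non-bilinear couplings f_i, plus simple
% convex composite regularizers g (primal) and h (dual).
\min_{x \in \mathbb{R}^{d_x}} \; \max_{y \in \mathbb{R}^{d_y}} \;
  \frac{1}{n} \sum_{i=1}^{n} f_i(x, y) \;+\; g(x) \;-\; h(y)
```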
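
For readers unfamiliar with variance reduction, the sketch below illustrates a generic SVRG-style estimator of the finite-sum primal-dual (saddle-point) operator. It is only an illustration of the variance-reduction idea, not the accelerated algorithm proposed in the paper, and all names are hypothetical:

```python
import numpy as np

def svrg_operator_estimate(operators, z, z_snapshot, full_operator_snapshot, rng):
    """Generic SVRG-style estimate of the averaged operator (1/n) * sum_i G_i(z).

    operators              : list of per-component operators G_i(z) -> ndarray,
                             e.g. G_i(z) = (grad_x f_i(x, y), -grad_y f_i(x, y))
    z                      : current primal-dual point (ndarray)
    z_snapshot             : reference point where the full operator was last computed
    full_operator_snapshot : precomputed (1/n) * sum_i G_i(z_snapshot)
    rng                    : numpy random Generator, e.g. np.random.default_rng()
    """
    i = rng.integers(len(operators))  # sample one component uniformly at random
    # Control-variate correction: unbiased, with variance shrinking as z -> z_snapshot.
    return operators[i](z) - operators[i](z_snapshot) + full_operator_snapshot
```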