Variance reduction for root-finding problems
Saved in:
| Published in: | Mathematical Programming, Vol. 197, Issue 1, pp. 375–410 |
|---|---|
| Main author: | |
| Format: | Journal Article |
| Language: | English |
| Published: | Berlin/Heidelberg: Springer Berlin Heidelberg, 01.01.2023 |
| Subjects: | |
| ISSN: | 0025-5610, 1436-4646 |
| Online access: | Full text |
| Summary: | Minimizing finite sums of smooth and strongly convex functions is an important task in machine learning. Recent work has developed stochastic gradient methods that optimize these sums with less computation than methods that do not exploit the finite sum structure. This speedup results from using efficiently constructed stochastic gradient estimators, which have variance that diminishes as the algorithm progresses. In this work, we ask whether the benefits of variance reduction extend to fixed point and root-finding problems involving sums of nonlinear operators. Our main result shows that variance reduction offers a similar speedup when applied to a broad class of root-finding problems. We illustrate the result on three tasks involving sums of *n* nonlinear operators: averaged fixed point, monotone inclusions, and nonsmooth common minimizer problems. In certain “poorly conditioned regimes,” the proposed method offers an *n*-fold speedup over standard methods. |
| DOI: | 10.1007/s10107-021-01758-4 |
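
The summary describes variance-reduced estimators only at a high level, so a small sketch may help make the idea concrete. The Python snippet below is a minimal SVRG-style illustration on a toy root-finding problem G(x) = (1/n) Σᵢ Gᵢ(x) = 0 with affine, strongly monotone components. It is not the paper's algorithm: the operators `G_i`, the step size, and the epoch length are all hypothetical choices made for illustration.

```python
import numpy as np

# Toy root-finding problem: G(x) = (1/n) * sum_i G_i(x) = 0, with
# hypothetical affine components G_i(x) = A_i x - b_i. Each A_i is
# positive definite, so the averaged operator is strongly monotone
# and has a unique root.
rng = np.random.default_rng(0)
n, d = 50, 3
A = np.stack([np.eye(d) + 0.1 * (M @ M.T)
              for M in rng.standard_normal((n, d, d))])
b = rng.standard_normal((n, d))

def G_i(i, x):
    return A[i] @ x - b[i]

def G_full(x):
    return np.mean([G_i(i, x) for i in range(n)], axis=0)

x = np.zeros(d)
step = 0.05          # hypothetical step size
for epoch in range(30):
    x_ref = x.copy()         # snapshot point
    g_ref = G_full(x_ref)    # one full operator evaluation per epoch
    for _ in range(n):
        i = rng.integers(n)
        # Variance-reduced estimator: unbiased for G(x), with variance
        # that vanishes as x and x_ref both approach the root.
        v = G_i(i, x) - G_i(i, x_ref) + g_ref
        x = x - step * v

print("residual ||G(x)||:", np.linalg.norm(G_full(x)))
```

The key line is the estimator `v = G_i(i, x) - G_i(i, x_ref) + g_ref`: it remains unbiased for the full operator while its variance shrinks as the iterate and the snapshot converge, which is the diminishing-variance mechanism the summary refers to.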