Convergence of Stochastic Proximal Gradient Algorithm



Detailed Bibliography
Published in: Applied Mathematics & Optimization, Volume 82, Issue 3, pp. 891–917
Main authors: Rosasco, Lorenzo; Villa, Silvia; Vũ, Bằng Công
Format: Journal Article
Language: English
Published: New York: Springer US, 01.12.2020 (Springer Nature B.V.)
ISSN: 0095-4616, 1432-0606
Description
Summary: We study the extension of the proximal gradient algorithm where only a stochastic gradient estimate is available and a relaxation step is allowed. We establish convergence rates for function values in the convex case, as well as almost sure convergence and convergence rates for the iterates under further convexity assumptions. Our analysis avoids averaging the iterates and does not rely on error summability assumptions, which might not be satisfied in applications, e.g., in machine learning. Our proof technique extends classical ideas from the analysis of deterministic proximal gradient algorithms.
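
The abstract does not state the iteration itself, so the following is only a minimal sketch of a stochastic proximal gradient step with relaxation, assuming a composite objective f + g with g an l1 regularizer, a diminishing step size gamma_n, and a constant relaxation parameter; the function names and the quadratic least-squares example are illustrative assumptions, not the paper's setup.

# Sketch of stochastic proximal gradient with a relaxation step (illustrative).
import numpy as np

def prox_l1(v, t):
    # Proximity operator of t * ||.||_1 (soft-thresholding); an assumed choice of g.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_proximal_gradient(grad_estimate, x0, n_iters, reg=0.1):
    x = x0.copy()
    for n in range(1, n_iters + 1):
        gamma = 1.0 / np.sqrt(n)       # assumed diminishing step-size schedule
        lam = 0.5                      # relaxation parameter in (0, 1]
        y = prox_l1(x - gamma * grad_estimate(x), gamma * reg)
        x = (1.0 - lam) * x + lam * y  # relaxation step between x_n and the proximal point
    return x

# Usage: minimize an expected least-squares loss plus l1 penalty, using one
# randomly sampled row per iteration as the stochastic gradient estimate.
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
def grad_estimate(x):
    i = rng.integers(len(b))
    return (A[i] @ x - b[i]) * A[i]
x_hat = stochastic_proximal_gradient(grad_estimate, np.zeros(5), 5000)

With lam = 1 the relaxation step reduces to the plain stochastic proximal gradient update; values below 1 average the current iterate with the proximal point.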
DOI: 10.1007/s00245-019-09617-7