Random Gradient-Free Minimization of Convex Functions

Bibliographic Details
Published in: Foundations of Computational Mathematics, Vol. 17, No. 2, pp. 527–566
Main authors: Nesterov, Yurii; Spokoiny, Vladimir
Format: Journal Article
Language: English
Published: New York: Springer US (Springer Nature B.V.), 01.04.2017
ISSN: 1615-3375, 1615-3383
Online access: Full text
Description
Abstract: In this paper, we prove new complexity bounds for methods of convex optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most n times more iterations than the standard gradient methods, where n is the dimension of the space of variables. This conclusion is true for both nonsmooth and smooth problems. For the latter class, we also present an accelerated scheme with the expected rate of convergence O(n^2/k^2), where k is the iteration counter. For stochastic optimization, we propose a zero-order scheme and justify its expected rate of convergence O(n/k^{1/2}). We also give some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, for both smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.
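
The kind of scheme the abstract describes admits a compact illustration: a finite-difference estimate of the gradient along a random Gaussian direction, computed from two function values, drives a gradient-style update. The following is a minimal Python sketch under stated assumptions, not the paper's exact algorithm; the smoothing parameter mu, the step size h, the quadratic test function, and the iteration count are all illustrative choices.

```python
import numpy as np

def gradient_free_step(f, x, mu=1e-6, h=0.05, rng=None):
    """One step of a two-point random gradient-free scheme.

    The search direction u is a standard Gaussian vector, and the
    gradient is replaced by the oracle
        g_mu(x) = (f(x + mu*u) - f(x)) / mu * u,
    which needs only two function values. mu (smoothing) and h (step
    size) are illustrative parameters, not values from the paper.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)       # Gaussian search direction
    g = (f(x + mu * u) - f(x)) / mu * u    # finite-difference gradient estimate
    return x - h * g                       # gradient-style update

# Illustrative run on the smooth convex quadratic f(x) = ||x||^2 / 2.
def f(x):
    return 0.5 * float(np.dot(x, x))

x = np.ones(10)
for _ in range(5000):
    x = gradient_free_step(f, x)
print(f(x))  # converges toward the minimum value 0
```

The estimator g_mu(x) is unbiased for the gradient of a Gaussian-smoothed version of f rather than of f itself, and its variance grows with the dimension; this is what produces the extra factor of n in the iteration bounds quoted in the abstract.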
DOI: 10.1007/s10208-015-9296-2