Exponentiated Gradient versus Gradient Descent for Linear Predictors


Bibliographic Details
Published in: Information and Computation, Vol. 132, Issue 1, pp. 1-63
Authors: Kivinen, Jyrki; Warmuth, Manfred K.
Format: Journal Article
Language: English
Published: San Diego, CA: Elsevier Inc., 10 January 1997
ISSN: 0890-5401, 1090-2651
Online access: Full text
Description
Abstract: We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG±. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG± algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG± and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG± has a much smaller loss if only a few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are quite tight already on simple artificial data.
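The abstract contrasts an additive (GD) and a multiplicative (EG±) update for on-line linear prediction under squared loss. The sketch below illustrates the two rules in Python; the learning rate eta, the total-weight parameter U, and the toy data are illustrative assumptions, not values or notation taken from the paper, so consult the article for the exact formulation.

```python
# Minimal sketch of the two update rules described in the abstract, assuming the
# standard on-line squared-loss setting. Parameters (eta, U) and data are illustrative.
import numpy as np

def gd_update(w, x, y, eta=0.1):
    """GD: subtract the gradient of the squared prediction error from the weights."""
    y_hat = w @ x
    return w - eta * (y_hat - y) * x

def eg_pm_update(w_pos, w_neg, x, y, eta=0.1, U=1.0):
    """EG+/-: multiply each weight by an exponential of the corresponding gradient
    component, then renormalize so the total weight mass stays at U."""
    y_hat = (w_pos - w_neg) @ x
    f_pos = w_pos * np.exp(-eta * (y_hat - y) * x)
    f_neg = w_neg * np.exp(+eta * (y_hat - y) * x)
    Z = (f_pos.sum() + f_neg.sum()) / U
    return f_pos / Z, f_neg / Z

# Toy sparse target: only the first of 20 input components is relevant,
# the regime in which the abstract suggests EG+/- incurs much smaller loss.
rng = np.random.default_rng(0)
d, T = 20, 200
w_gd = np.zeros(d)
w_pos = np.full(d, 0.5 / d)   # start EG+/- from uniform positive/negative weights
w_neg = np.full(d, 0.5 / d)
for _ in range(T):
    x = rng.uniform(-1.0, 1.0, d)
    y = x[0]                  # target depends on a single component
    w_gd = gd_update(w_gd, x, y)
    w_pos, w_neg = eg_pm_update(w_pos, w_neg, x, y)
```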
DOI: 10.1006/inco.1996.2612