Regular Article
Exponentiated Gradient versus Gradient Descent for Linear Predictors☆☆

https://doi.org/10.1006/inco.1996.2612

Abstract

We consider two algorithms for on-line prediction based on a linear model. The algorithms are the well-known gradient descent (GD) algorithm and a new algorithm, which we call EG±. They both maintain a weight vector using simple updates. For the GD algorithm, the update is based on subtracting the gradient of the squared error made on a prediction. The EG± algorithm uses the components of the gradient in the exponents of factors that are used in updating the weight vector multiplicatively. We present worst-case loss bounds for EG± and compare them to previously known bounds for the GD algorithm. The bounds suggest that the losses of the algorithms are in general incomparable, but EG± has a much smaller loss if only a few components of the input are relevant for the predictions. We have performed experiments which show that our worst-case upper bounds are already quite tight on simple artificial data.
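The abstract describes the two update rules only informally. The following is a minimal sketch of one on-line update step for each algorithm under the squared loss, in the standard formulation where EG± keeps a pair of nonnegative weight vectors whose difference is the effective weight vector. The learning rate eta and the total-weight scale U are illustrative parameter names, not values taken from the paper.

```python
import numpy as np

def gd_update(w, x, y, eta):
    """One GD step: subtract the gradient of the squared error (y_hat - y)**2."""
    y_hat = w @ x
    return w - eta * 2.0 * (y_hat - y) * x

def eg_pm_update(w_pos, w_neg, x, y, eta, U=1.0):
    """One EG± step: multiply each weight by a factor whose exponent is the
    corresponding gradient component, then renormalize to total weight U."""
    y_hat = (w_pos - w_neg) @ x
    grad = 2.0 * (y_hat - y) * x            # gradient of the squared error w.r.t. the weights
    r_pos = w_pos * np.exp(-eta * grad)     # multiplicative update of the positive part
    r_neg = w_neg * np.exp(eta * grad)      # mirrored update of the negative part
    z = (r_pos.sum() + r_neg.sum()) / U     # normalization keeps the weights summing to U
    return r_pos / z, r_neg / z

# Example: one prediction/update round on a single instance (x, y).
x, y = np.array([1.0, 0.0, -1.0]), 0.5
w = np.zeros(3)
w_pos = w_neg = np.full(3, 1.0 / 6.0)       # uniform start, total weight U = 1
w = gd_update(w, x, y, eta=0.1)
w_pos, w_neg = eg_pm_update(w_pos, w_neg, x, y, eta=0.1)
```

The multiplicative form is what makes EG± favor sparse targets: weights of irrelevant components shrink geometrically, which is the regime in which the abstract's loss bounds favor EG± over GD.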


Final manuscript received September 10, 1996.

☆☆

An extended abstract appeared in “Proceedings of the 27th Annual ACM Symposium on the Theory of Computing,” pp. 209–218, ACM Press, New York, May 1995.

Supported by the Academy of Finland, Emil Aaltonen Foundation, and ESPRIT Project NeuroCOLT.

Supported by NSF Grant IRI-9123692. E-mail: [email protected].