Abstract
We study on-line generalized linear regression with multidimensional outputs, i.e., neural networks with multiple output nodes but no hidden nodes. At the final layer we allow transfer functions, such as the softmax function, that depend on the linear activations of all the output neurons. The weight vectors used to produce the linear activations are represented indirectly: we maintain separate parameter vectors and obtain each weight vector by applying a particular parameterization function to the corresponding parameter vector. Upon seeing a new example, the parameter vectors are updated additively, as in the usual gradient descent update. However, because the parameterization function between the parameter vectors and the weight vectors may be nonlinear, the resulting update of the weight vector can be quite different from a true gradient descent update. To analyze such updates, we define a notion of a matching loss function and apply it both to the transfer function and to the parameterization function. The loss function that matches the transfer function measures the goodness of the algorithm's predictions. The loss function that matches the parameterization function serves both as a measure of divergence between models, motivating the update rule of the algorithm, and as a measure of progress in analyzing its performance relative to an arbitrary fixed model. The result is a unified treatment that generalizes earlier results for the gradient descent and exponentiated gradient algorithms to multidimensional outputs, including multiclass logistic regression.
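To make the scheme concrete, the following is a minimal Python sketch of the kind of update the abstract describes; it is not taken from the paper, and the function names, learning rate, and choice of softmax as both transfer and parameterization function are illustrative assumptions. With the identity parameterization the update reduces to ordinary gradient descent on the weights; with a normalized-exponential parameterization it behaves like an exponentiated gradient update.

```python
import numpy as np

def softmax(a):
    """Transfer function: map linear activations to a probability vector."""
    e = np.exp(a - np.max(a))
    return e / np.sum(e)

def matching_loss_softmax(y, y_hat, eps=1e-12):
    """Matching loss of the softmax transfer function: the relative entropy
    between the desired output y and the prediction y_hat."""
    return float(np.sum(y * (np.log(y + eps) - np.log(y_hat + eps))))

# Illustrative parameterization functions f mapping a parameter row to a weight row.
def identity_param(theta_row):    # f = identity: recovers plain gradient descent
    return theta_row

def softmax_param(theta_row):     # f = normalized exponential: an EG-style update
    return softmax(theta_row)

def online_step(theta, x, y, f, eta=0.1):
    """One on-line trial: predict with weights W = f(theta), then update theta additively."""
    W = np.array([f(row) for row in theta])   # weight matrix, one row per output node
    a = W @ x                                 # linear activations of the output nodes
    y_hat = softmax(a)                        # prediction via the transfer function
    # For a matching loss, the gradient with respect to the weight rows is
    # (y_hat - y) x^T; the update is additive in the parameters, not in the weights.
    theta = theta - eta * np.outer(y_hat - y, x)
    return theta, y_hat

# Tiny usage example on random data (illustration only): 3 output classes, 4 inputs.
rng = np.random.default_rng(0)
theta = np.zeros((3, 4))
for _ in range(100):
    x = rng.normal(size=4)
    y = np.eye(3)[rng.integers(3)]
    theta, y_hat = online_step(theta, x, y, softmax_param)
```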
Cite this article
Kivinen, J., Warmuth, M.K. Relative Loss Bounds for Multidimensional Regression Problems. Machine Learning 45, 301–329 (2001). https://doi.org/10.1023/A:1017938623079