Optimality of Robust Online Learning

Authors: Zheng-Chu Guo, Andreas Christmann, Lei Shi

Published in: Foundations of Computational Mathematics | Issue 5/2024

Abstract

In this paper, we study an online learning algorithm with a robust loss function \(\mathcal {L}_{\sigma }\) for regression over a reproducing kernel Hilbert space (RKHS). The loss function \(\mathcal {L}_{\sigma }\), which involves a scaling parameter \(\sigma >0\), covers a wide range of commonly used robust losses. The proposed algorithm is a robust alternative to online least squares regression, aiming to estimate the conditional mean function. For properly chosen \(\sigma \) and step size, we show that the last iterate of this online algorithm achieves the optimal capacity-independent convergence rate in the mean square distance. Moreover, if additional information on the underlying function space is known, we also establish optimal capacity-dependent rates for strong convergence in the RKHS. To the best of our knowledge, both results are new to the existing literature on online learning.
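To make the setting concrete, the following is a minimal sketch of one possible instance of the kind of algorithm the abstract describes: online (stochastic) kernel gradient descent in an RKHS with a robust loss. The specific loss (a Welsch/correntropy-type loss), the Gaussian kernel, the polynomially decaying step size, and all function names are illustrative assumptions, not the exact specification used in the paper.

```python
import numpy as np

# Sketch of online robust kernel regression (illustrative assumptions):
#   L_sigma(u) = (sigma^2 / 2) * (1 - exp(-u^2 / sigma^2)),  so
#   L_sigma'(u) = u * exp(-u^2 / sigma^2).
# The iterate f_t is kept in its kernel expansion f_t = sum_i c_i K(x_i, .)
# and updated by f_{t+1} = f_t - eta_t * L_sigma'(f_t(x_t) - y_t) * K(x_t, .).

def gaussian_kernel(x, z, bandwidth=1.0):
    """Gaussian (RBF) kernel on R^d."""
    diff = np.asarray(x) - np.asarray(z)
    return np.exp(-np.dot(diff, diff) / (2.0 * bandwidth ** 2))

def robust_loss_derivative(residual, sigma):
    """Derivative of the Welsch-type robust loss L_sigma at the residual."""
    return residual * np.exp(-(residual ** 2) / sigma ** 2)

def online_robust_kernel_regression(stream, sigma=1.0, eta0=0.5, theta=0.5,
                                    kernel=gaussian_kernel):
    """One pass over the data stream; returns the last iterate f_T as a callable."""
    centers, coeffs = [], []
    for t, (x_t, y_t) in enumerate(stream, start=1):
        # Evaluate the current iterate at the new input.
        f_xt = sum(c * kernel(x_i, x_t) for c, x_i in zip(coeffs, centers))
        # Stochastic gradient step on the robust loss with decaying step size.
        eta_t = eta0 * t ** (-theta)
        grad = robust_loss_derivative(f_xt - y_t, sigma)
        centers.append(x_t)
        coeffs.append(-eta_t * grad)
    return lambda x: sum(c * kernel(x_i, x) for c, x_i in zip(coeffs, centers))

# Example usage on synthetic data with heavy-tailed noise.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_t(df=2, size=500)
f_hat = online_robust_kernel_regression(zip(X, y), sigma=1.0)
print(f_hat(np.array([0.25])))
```

The scaling parameter sigma controls how strongly large residuals are downweighted: for large sigma the update behaves like online least squares, while small sigma suppresses the influence of outliers, which is the robustness trade-off the paper's choice of \(\sigma\) balances against the step size.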


Metadata
Title
Optimality of Robust Online Learning
Authors
Zheng-Chu Guo
Andreas Christmann
Lei Shi
Publication date
26-07-2023
Publisher
Springer US
Published in
Foundations of Computational Mathematics / Issue 5/2024
Print ISSN: 1615-3375
Electronic ISSN: 1615-3383
DOI
https://doi.org/10.1007/s10208-023-09616-9
