Published in: Neural Computing and Applications 23/2020

27.05.2020 | Original Article

Online Bayesian shrinkage regression

Authors: Waqas Jamil, Abdelhamid Bouchachia



Abstract

This work introduces a new online regression method that extends shrinkage via the limit of the Gibbs sampler (SLOG) to the online-learning setting. In particular, we show theoretically how the proposed online SLOG (OSLOG) is derived within the Bayesian framework without resorting to the Gibbs sampler or to a hierarchical representation. To establish a performance guarantee for OSLOG, we derive an upper bound on its cumulative squared loss; it is the only sparse online regression algorithm with a logarithmic regret bound. Finally, we compare OSLOG empirically with two state-of-the-art algorithms along three aspects, namely normality, sparsity and multicollinearity, and show that it achieves an excellent trade-off between these properties.
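The OSLOG derivation itself is not reproduced on this page. As a rough illustration of the sequential setting the abstract describes (predict, observe the label, update a shrinkage-regularized estimate, with logarithmic regret on the cumulative squared loss), the following sketch implements plain online ridge regression in NumPy. The function `online_ridge` and the prior scale `a` are illustrative assumptions for this sketch, not the paper's algorithm, which uses an l1-type (lasso) shrinkage instead of the l2 penalty shown here.

```python
import numpy as np

def online_ridge(X, y, a=1.0):
    """Online ridge regression (recursive least squares with l2 shrinkage).

    At each round t, predict y_t using only rounds 1..t-1, then update.
    Illustrative sketch only; NOT the paper's OSLOG algorithm.
    """
    n, d = X.shape
    A = a * np.eye(d)          # regularized Gram matrix: a*I + sum_s x_s x_s^T
    b = np.zeros(d)            # accumulated moment vector: sum_s y_s x_s
    preds = np.empty(n)
    for t in range(n):
        x = X[t]
        w = np.linalg.solve(A, b)   # current ridge (Gaussian MAP) estimate
        preds[t] = w @ x            # predict before seeing y_t
        A += np.outer(x, x)         # rank-one update with the new example
        b += y[t] * x
    return preds

# Toy usage: on noiseless linear data the online predictions
# approach the true targets as more rounds are observed.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
preds = online_ridge(X, y)
print(abs(preds[-1] - y[-1]))
```

Note that the first-round prediction is 0 (the prior mean), and the per-round error shrinks as the Gram matrix accumulates data; this is the mechanism behind logarithmic-regret analyses of online regression such as Vovk's [22].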


Footnotes
1
This is a mild assumption that is always satisfied in practice. Without such an assumption, counterintuitive results such as the Banach–Tarski paradox can arise. For details, see, for example, [18].
 
2
All algorithms are available from the SOLMA library: https://github.com/proteus-h2020/proteus-solma.
 
References
1.
Tibshirani R (1996) Regression shrinkage and selection via the lasso. J R Stat Soc Ser B (Methodol) 58:267–288
3.
Rajaratnam B, Roberts S, Sparks D, Dalal O (2016) Lasso regression: estimation and shrinkage via the limit of Gibbs sampling. J R Stat Soc Ser B (Stat Methodol) 78(1):153–174
4.
Sambasivan R, Das S, Saha SK (2018) A Bayesian perspective of statistical machine learning for big data. arXiv preprint arXiv:1811.04788
5.
Langford J, Li L, Zhang T (2009) Sparse online learning via truncated gradient. J Mach Learn Res 10(Mar):777–801
6.
Gerchinovitz S (2013) Sparsity regret bounds for individual sequences in online linear regression. J Mach Learn Res 14(Mar):729–769
7.
Duchi J, Singer Y (2009) Efficient online and batch learning using forward backward splitting. J Mach Learn Res 10(Dec):2899–2934
8.
Shalev-Shwartz S, Tewari A (2011) Stochastic methods for l1-regularized loss minimization. J Mach Learn Res 12(Jun):1865–1892
9.
Zinkevich M (2003) Online convex programming and generalized infinitesimal gradient ascent. Technical Report CMU-CS-03-110, School of Computer Science, Carnegie Mellon University
10.
Hazan E, Agarwal A, Kale S (2007) Logarithmic regret algorithms for online convex optimization. Mach Learn 69(2–3):169–192
11.
Orabona F, Cesa-Bianchi N, Gentile C (2012) Beyond logarithmic bounds in online learning. In: Artificial intelligence and statistics, pp 823–831
13.
Andrews DF, Mallows CL (1974) Scale mixtures of normal distributions. J R Stat Soc Ser B (Methodol) 36:99–102
14.
Murphy K (2014) Machine learning, a probabilistic perspective. Taylor & Francis, London
15.
Kotowicz J (1990) Convergent real sequences. Upper and lower bound of sets of real numbers. Formaliz Math 1(3):477–481
17.
Rudin W (1976) Principles of mathematical analysis, vol 3. McGraw-Hill, New York
18.
Tao T (2011) An introduction to measure theory. American Mathematical Society, Providence
19.
Kivinen J, Warmuth M (1999) Averaging expert predictions. In: Computational learning theory. Springer, p 638
20.
Kakade SM, Ng AY (2005) Online bounds for Bayesian algorithms. In: Advances in neural information processing systems, pp 641–648
21.
Beckenbach EF, Bellman R (2012) Inequalities, vol 30. Springer, Berlin
22.
Vovk V (2001) Competitive on-line statistics. Int Stat Rev/Revue Internationale de Statistique 69:213–248
23.
Quinonero-Candela J, Dagan I, Magnini B, d'Alché BF (2006) Machine learning challenges: evaluating predictive uncertainty, visual object classification, and recognizing textual entailment. In: First Pascal Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, 11–13 April 2005, Revised Selected Papers, vol 3944. Springer
24.
Akbilgic O, Bozdogan H, Balaban ME (2014) A novel hybrid RBF neural networks model as a forecaster. Stat Comput 24(3):365–375
Metadata
Title
Online Bayesian shrinkage regression
Authors
Waqas Jamil
Abdelhamid Bouchachia
Publication date
27.05.2020
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 23/2020
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-020-04947-y
