
07.06.2018 | Original Article

Large-scale support vector regression with budgeted stochastic gradient descent

Authors: Zongxia Xie, Yingda Li

Published in: International Journal of Machine Learning and Cybernetics | Issue 6/2019


Abstract

Support vector regression (SVR) is a widely used regression technique owing to its strong predictive performance. However, non-linear SVR is time-consuming on large-scale tasks because of the curse of kernelization. Recently, a budgeted stochastic gradient descent (BSGD) method was developed to train large-scale kernelized support vector classification (SVC). In this paper, we extend the BSGD method to non-linear regression tasks. Based on the performance of different budget maintenance strategies, we combine stochastic gradient descent (SGD) with the merging strategy. Experimental results on real-world datasets show that the proposed kernelized SVR with BSGD achieves competitive accuracy and good computational efficiency compared to several state-of-the-art algorithms.
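To make the idea in the abstract concrete, the sketch below illustrates one plausible form of budgeted SGD for kernel SVR: Pegasos-style stochastic updates on the epsilon-insensitive loss with an RBF kernel, where exceeding the support-vector budget triggers a merging step. This is a minimal illustration, not the authors' implementation; in particular, the merging rule here (combining the two closest support vectors into a coefficient-weighted point) is a simplification of the merging strategy studied in the BSGD literature, and all function names and hyperparameters are assumptions.

```python
import numpy as np

def rbf(x, Z, gamma=1.0):
    # Gaussian (RBF) kernel between one point x and each row of Z
    return np.exp(-gamma * np.sum((Z - x) ** 2, axis=1))

def bsgd_svr(X, y, budget=50, lam=1e-4, eps=0.1, gamma=1.0, epochs=5, seed=0):
    """Budgeted SGD for kernel SVR with the epsilon-insensitive loss.

    Keeps at most `budget` support vectors; when the budget is exceeded,
    the two closest support vectors are merged (simplified merging step).
    """
    rng = np.random.default_rng(seed)
    sv = np.empty((0, X.shape[1]))      # support vectors
    alpha = np.empty(0)                 # their coefficients
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)                       # Pegasos-style step size
            alpha *= (1.0 - eta * lam)                  # shrink from regularization
            pred = alpha @ rbf(X[i], sv, gamma) if len(sv) else 0.0
            err = y[i] - pred
            if abs(err) > eps:                          # outside the epsilon tube
                sv = np.vstack([sv, X[i]])
                alpha = np.append(alpha, eta * np.sign(err))
                if len(sv) > budget:                    # budget maintenance: merge
                    d = np.sum((sv[None, :, :] - sv[:, None, :]) ** 2, axis=2)
                    np.fill_diagonal(d, np.inf)
                    m, n = np.unravel_index(np.argmin(d), d.shape)
                    w = abs(alpha[m]) + abs(alpha[n]) + 1e-12
                    merged = (abs(alpha[m]) * sv[m] + abs(alpha[n]) * sv[n]) / w
                    merged_alpha = alpha[m] + alpha[n]
                    keep = [k for k in range(len(sv)) if k not in (m, n)]
                    sv = np.vstack([sv[keep], merged])
                    alpha = np.append(alpha[keep], merged_alpha)
    return sv, alpha

def predict(sv, alpha, X, gamma=1.0):
    # Kernel expansion f(x) = sum_i alpha_i * k(sv_i, x)
    return np.array([alpha @ rbf(x, sv, gamma) for x in X])
```

Because the model never stores more than `budget` support vectors, each update costs O(budget) kernel evaluations regardless of the training-set size, which is the point of the budgeted approach; the merging step trades a small approximation error for this bounded cost.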

Metadata
Title
Large-scale support vector regression with budgeted stochastic gradient descent
Authors
Zongxia Xie
Yingda Li
Publication date
07.06.2018
Publisher
Springer Berlin Heidelberg
Published in
International Journal of Machine Learning and Cybernetics / Issue 6/2019
Print ISSN: 1868-8071
Electronic ISSN: 1868-808X
DOI
https://doi.org/10.1007/s13042-018-0832-7
