Published in: International Journal of Machine Learning and Cybernetics 1/2015

01-02-2015 | Original Article

A fast algorithm for training support vector regression via smoothed primal function minimization

Author: Songfeng Zheng

Abstract

The support vector regression (SVR) model is usually fitted by solving a quadratic programming problem, which is computationally expensive. To improve the computational efficiency, we propose to directly minimize the objective function in the primal form. However, the loss function used by SVR is not differentiable, which prevents the well-developed gradient-based optimization methods from being applicable. As such, we introduce a smooth function to approximate the original loss function in the primal form of SVR, which transforms the original quadratic program into a convex unconstrained minimization problem. The properties of the proposed smoothed objective function are discussed, and we prove that the solution of the smoothly approximated model converges to the original SVR solution. A conjugate gradient algorithm is designed for minimizing the proposed smoothly approximated objective function in a sequential minimization manner. Extensive experiments on real-world datasets show that, compared to the quadratic programming based SVR, the proposed approach can achieve similar prediction accuracy with significantly improved computational efficiency: it is hundreds of times faster for the linear SVR model and several times faster for the nonlinear SVR model.
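The idea described in the abstract — replacing the non-differentiable ε-insensitive loss with a smooth surrogate so that the primal objective can be minimized by a gradient-based method — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a standard smoothing of the plus function, max(x, 0) ≈ (1/α) log(1 + exp(αx)), which recovers the original loss as α → ∞, and it uses SciPy's off-the-shelf conjugate gradient solver rather than the paper's own sequential minimization algorithm.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_plus(x, alpha):
    # Smooth, numerically stable approximation of max(x, 0):
    # (1/alpha) * log(1 + exp(alpha * x)); tends to the plus
    # function as alpha grows.
    return np.logaddexp(0.0, alpha * x) / alpha

def smoothed_svr_objective(w, X, y, C, eps, alpha):
    # Primal linear SVR objective with the eps-insensitive loss
    # max(|r| - eps, 0) replaced by its smooth surrogate, so the
    # problem becomes a differentiable unconstrained minimization.
    r = y - X @ w
    loss = smooth_plus(r - eps, alpha) + smooth_plus(-r - eps, alpha)
    return 0.5 * w @ w + C * np.sum(loss)

# Synthetic regression data for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=200)

# Minimize the smoothed primal objective with conjugate gradients.
res = minimize(smoothed_svr_objective, np.zeros(5),
               args=(X, y, 10.0, 0.1, 100.0), method="CG")
print(res.x)
```

With a moderate smoothing parameter (α = 100 here), the minimizer of the surrogate lands close to the true coefficients; the paper proves that, in the limit, the smoothed solution converges to the original SVR solution.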

Footnotes
1
The source code is available upon request.
 
Metadata
Title
A fast algorithm for training support vector regression via smoothed primal function minimization
Author
Songfeng Zheng
Publication date
01-02-2015
Publisher
Springer Berlin Heidelberg
Published in
International Journal of Machine Learning and Cybernetics / Issue 1/2015
Print ISSN: 1868-8071
Electronic ISSN: 1868-808X
DOI
https://doi.org/10.1007/s13042-013-0200-6
