Published in: 4OR 2/2018

23.05.2018 | Invited Survey

Nonlinear optimization and support vector machines

Authors: Veronica Piccialli, Marco Sciandrone


Abstract

Support Vector Machines (SVMs) form one of the most important classes of machine learning models and algorithms, and have been successfully applied in various fields. Nonlinear optimization plays a crucial role in SVM methodology, both in defining the machine learning models and in designing convergent and efficient algorithms for large-scale training problems. In this paper we present the convex programming problems underlying SVMs, focusing on supervised binary classification. We analyze the most important and widely used optimization methods for SVM training problems, and we discuss how the properties of these problems can be exploited in the design of useful algorithms.
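To make the convex training problem concrete, the following is a minimal sketch of the primal soft-margin formulation, min (1/2)||w||² + C Σᵢ max(0, 1 − yᵢ(w·xᵢ + b)), minimized here by plain subgradient descent on a synthetic two-class data set. The learning rate, epoch count, and toy data are illustrative choices, not taken from the paper, which surveys far more refined methods (decomposition, SMO, interior-point, cutting-plane) for this same problem class.

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """Subgradient descent on the primal soft-margin SVM objective:
    (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w.x_i + b))."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1  # only margin violators contribute to the hinge subgradient
        grad_w = w - C * (y[mask, None] * X[mask]).sum(axis=0)
        grad_b = -C * y[mask].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy separable data: class +1 around (2, 2), class -1 around (-2, -2)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 0.5, (20, 2)), rng.normal(-2.0, 0.5, (20, 2))])
y = np.hstack([np.ones(20), -np.ones(20)])

w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
acc = (pred == y).mean()  # well-separated clusters: training accuracy reaches 1.0
```

The same problem is more commonly attacked through its dual, a quadratic program with a single linear equality constraint and box constraints, which is the form most of the surveyed algorithms exploit.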

Metadata
Title
Nonlinear optimization and support vector machines
Authors
Veronica Piccialli
Marco Sciandrone
Publication date
23.05.2018
Publisher
Springer Berlin Heidelberg
Published in
4OR / Issue 2/2018
Print ISSN: 1619-4500
Electronic ISSN: 1614-2411
DOI
https://doi.org/10.1007/s10288-018-0378-2
