Published in: Neural Computing and Applications 9/2018

02.09.2016 | Original Article

A divide-and-conquer method for large scale ν-nonparallel support vector machines

Authors: Xuchan Ju, Yingjie Tian



Abstract

Recently, the nonparallel support vector machine (NPSVM), a branch of support vector machines (SVMs), has been developed and has attracted considerable interest. One of its variants, the ν-nonparallel support vector machine (ν-NPSVM), inherits the advantages of the ν-support vector machine (ν-SVM) and achieves higher classification accuracy with less time spent tuning parameters. However, applying ν-NPSVM to large data sets is seriously hampered by its excessive training time. In this paper, we apply a divide-and-conquer technique to obtain a large scale ν-nonparallel support vector machine (DC-νNPSVM) that overcomes this burden. In the division step, we first divide the whole sample set into several smaller subsets and solve the resulting subproblems with ν-NPSVM independently. We show theoretically and experimentally that the objective function value, solution, and support vectors obtained by DC-νNPSVM are close to those of the full ν-NPSVM. In the conquer step, the subproblem solutions are used to initialize a coordinate descent solver, which then converges to the optimal solution quickly. Moreover, a multi-level DC-νNPSVM is adopted to balance accuracy and efficiency. DC-νNPSVM achieves higher accuracy by tuning parameters over a smaller range and controls the number of support vectors efficiently. Extensive experiments show that DC-νNPSVM outperforms state-of-the-art SVM methods on large scale data sets.
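Since only the abstract is available here, the divide-and-conquer warm-start idea it describes can be sketched as follows. This is not the paper's ν-NPSVM solver: as a stand-in, the sketch uses a plain linear SVM dual solved by dual coordinate descent (in the style of Hsieh et al.), and it partitions the samples randomly, whereas the paper's division step operates on subsamples produced for ν-NPSVM subproblems. The function names `dcd_svm` and `dc_svm` and all parameter choices are illustrative assumptions.

```python
import numpy as np

def dcd_svm(X, y, C=1.0, alpha0=None, n_epochs=20, seed=0):
    """Dual coordinate descent for the (bias-free) linear SVM dual:
    min 0.5*a'Qa - e'a, 0 <= a_i <= C, with Q_ij = y_i y_j x_i.x_j."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n) if alpha0 is None else alpha0.copy()
    w = (alpha * y) @ X                        # primal w implied by alpha
    Qii = np.einsum('ij,ij->i', X, X)          # diagonal of Q
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            G = y[i] * (w @ X[i]) - 1.0        # partial gradient in alpha_i
            a_new = np.clip(alpha[i] - G / max(Qii[i], 1e-12), 0.0, C)
            w += (a_new - alpha[i]) * y[i] * X[i]  # keep w consistent
            alpha[i] = a_new
    return alpha, w

def dc_svm(X, y, C=1.0, n_parts=4, seed=0):
    """Divide step: solve independent subproblems on a partition of the data.
    Conquer step: the concatenated sub-solutions warm-start a full solve,
    so the final coordinate descent starts near the optimum."""
    rng = np.random.default_rng(seed)
    n = len(y)
    part = rng.integers(0, n_parts, size=n)    # random partition (illustrative)
    alpha0 = np.zeros(n)
    for p in range(n_parts):
        idx = np.where(part == p)[0]
        a_p, _ = dcd_svm(X[idx], y[idx], C, n_epochs=10, seed=seed)
        alpha0[idx] = a_p                      # carry dual variables over
    # few epochs suffice when warm-started from the sub-solutions
    return dcd_svm(X, y, C, alpha0=alpha0, n_epochs=5, seed=seed)
```

The key design point mirrored from the abstract is that the subproblem dual variables remain feasible for the full problem (each still lies in [0, C]), which is what makes them a valid initial point for the final solve.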


Metadata
Title
A divide-and-conquer method for large scale ν-nonparallel support vector machines
Authors
Xuchan Ju
Yingjie Tian
Publication date
02.09.2016
Publisher
Springer London
Published in
Neural Computing and Applications / Issue 9/2018
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-016-2574-3
