Published in: Neural Computing and Applications 2/2019

22-06-2017 | Original Article

Generalization improvement for regularized least squares classification

Authors: Haitao Gan, Qingshan She, Yuliang Ma, Wei Wu, Ming Meng


Abstract

Over the past decades, regularized least squares classification (RLSC) has been a commonly used supervised classification method in machine learning because it can be solved through simple matrix analysis and admits a closed-form solution. Recently, some studies have conjectured that the margin distribution is more crucial to generalization performance than a single minimum margin. From the viewpoint of margin distribution, however, RLSC considers only the first-order statistic (i.e., the margin mean) and ignores higher-order statistics of the margin distribution. In this paper, we propose a novel RLSC that takes the second-order statistic (i.e., the margin variance) into account. From a geometric viewpoint, a small margin variance is intuitively expected to improve the generalization performance of RLSC. We incorporate the margin variance into the objective function of RLSC and obtain the optimal classifier by minimizing it. To evaluate the performance of our algorithm, we conduct a series of experiments on several benchmark datasets, comparing against RLSC, kernel minimum squared error, the support vector machine and the large margin distribution machine. The empirical results verify the effectiveness of our algorithm and indicate that the margin distribution is helpful for improving classification performance.
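The abstract refers to two ingredients that can be sketched concretely: the closed-form solution of standard RLSC, and the first- and second-order margin statistics (mean and variance) that the proposed method builds on. The following minimal Python sketch is not the authors' code; it shows plain linear RLSC (ridge-style objective, no bias term, regularization weight `lam` and the helper names are illustrative assumptions) and computes the margin statistics on a toy dataset:

```python
import numpy as np

def rlsc_fit(X, y, lam=1.0):
    """Closed-form RLSC: minimize ||Xw - y||^2 + lam * ||w||^2.

    The minimizer solves (X^T X + lam*I) w = X^T y, which is the
    'simple matrix analysis' the abstract refers to."""
    n_features = X.shape[1]
    A = X.T @ X + lam * np.eye(n_features)
    return np.linalg.solve(A, X.T @ y)

def rlsc_predict(X, w):
    """Predict labels in {-1, +1} from the sign of the linear score."""
    return np.sign(X @ w)

# Toy data: two Gaussian clusters with labels -1 and +1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+2.0, 1.0, (20, 2)),
               rng.normal(-2.0, 1.0, (20, 2))])
y = np.concatenate([np.ones(20), -np.ones(20)])

w = rlsc_fit(X, y, lam=0.1)
acc = np.mean(rlsc_predict(X, w) == y)

# Margin of sample i is y_i * f(x_i); the proposed method penalizes
# the variance of these margins in addition to fitting their mean.
margins = y * (X @ w)
margin_mean = margins.mean()
margin_var = margins.var()
```

The margin-variance term is incorporated into the RLSC objective in the paper itself; the sketch above only illustrates the quantities involved, not the modified optimization.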


Metadata
Title
Generalization improvement for regularized least squares classification
Authors
Haitao Gan
Qingshan She
Yuliang Ma
Wei Wu
Ming Meng
Publication date
22-06-2017
Publisher
Springer London
Published in
Neural Computing and Applications / Issue Special Issue 2/2019
Print ISSN: 0941-0643
Electronic ISSN: 1433-3058
DOI
https://doi.org/10.1007/s00521-017-3090-9
