
17.02.2018 | Issue 2/2018

Neural Processing Letters 2/2018

Ultra-Sparse Classifiers Through Minimizing the VC Dimension in the Empirical Feature Space

Submitted to the Special Issue on “Off the Mainstream: Advances in Neural Networks and Machine Learning for Pattern Recognition”

Journal:
Neural Processing Letters > Issue 2/2018
Authors:
Jayadeva, Mayank Sharma, Sumit Soman, Himanshu Pant
Important notes

Electronic supplementary material

The online version of this article (https://doi.org/10.1007/s11063-018-9793-9) contains supplementary material, which is available to authorized users.

Abstract

Sparse representations have gained much interest due to the rapid growth of intelligent embedded systems, the need to reduce the time to mine large datasets, and the demand to shrink the footprint of recognition-based applications on portable devices. Computational learning theory tells us that the Vapnik–Chervonenkis (VC) dimension of a learning model directly impacts its structural risk and generalization ability. The minimal complexity machine (MCM) was recently proposed as a way to learn a hyperplane classifier by minimizing a tight bound on the VC dimension; results show that it learns very sparse representations while yielding test set accuracies comparable to the state of the art. The MCM formulation works in the primal itself, both when the classifier is learnt in the input space and when it is learnt implicitly in a higher dimensional feature space. In the latter case, the hyperplane is constructed in the empirical feature space (EFS), the finite dimensional vector space spanned by the images of the training samples in the higher dimensional feature space. In this paper, we examine the hyperplane restricted to the EFS. Since the VC dimension of a linear hyperplane classifier is exactly the number of features it uses, the dimension of the EFS is a direct measure of both the sparsity of the model and its VC dimension. This allows us to formulate optimization problems that learn sparse representations and yet generalize well. We derive an EFS version of the MCM that allows us to minimize model complexity and improve sparsity, and we also propose a novel least squares version of the MCM in the EFS. Experimental results demonstrate that the EFS variants yield sparse models with generalization comparable to the state of the art.
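
For intuition, the following is a minimal sketch of the empirical-kernel-map idea the abstract builds on: each sample is represented by its vector of kernel evaluations against the training set, so a hyperplane in the EFS has one coefficient per training sample, and the number of nonzero coefficients measures both the sparsity and, by the argument above, the VC dimension. The RBF kernel, the function names, and the ridge-style least squares solve are illustrative assumptions, not the authors' MCM or LSMCM formulations, which additionally minimize a bound on the VC dimension.

import numpy as np

def empirical_feature_map(X_train, X, gamma=1.0):
    # Empirical kernel map: represent each row x of X by its kernel
    # evaluations [k(x, x_1), ..., k(x, x_n)] over the n training points.
    # The RBF kernel is an assumption; the paper keeps the kernel generic.
    sq_dists = (
        np.sum(X ** 2, axis=1)[:, None]
        + np.sum(X_train ** 2, axis=1)[None, :]
        - 2.0 * X @ X_train.T
    )
    return np.exp(-gamma * sq_dists)  # shape (m, n): EFS coordinates

def fit_least_squares_efs(X_train, y_train, gamma=1.0, reg=1e-3):
    # Hypothetical ridge-style least squares fit of a hyperplane over EFS
    # coordinates (not the paper's LSMCM objective): solve
    # (K^T K + reg * I) w = K^T y for the coefficient vector w.
    K = empirical_feature_map(X_train, X_train, gamma)
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + reg * np.eye(n), K.T @ y_train)

# Toy usage with labels in {-1, +1}; the prediction is the sign of the
# EFS score.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
w = fit_least_squares_efs(X, y)
preds = np.sign(empirical_feature_map(X, X) @ w)
print("training accuracy:", float(np.mean(preds == y)))

Coefficients of w that are (near) zero can be pruned; the number of surviving coordinates is the effective dimension of the EFS representation, which is exactly the sparsity measure the abstract ties to the VC dimension.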


Supplementary material
Supplementary material 1 (PDF, 125 KB)
11063_2018_9793_MOESM1_ESM.pdf