
2021 | Original Paper | Book Chapter

4. Classification

Authors: Vladimir Shikhman, David Müller

Published in: Mathematical Foundations of Big Data Analytics

Publisher: Springer Berlin Heidelberg


Abstract

Classification is a process by which new objects, events, people, or experiences are assigned to a class on the basis of characteristics shared by members of that class and features distinguishing them from members of other classes. In data science it is often necessary to categorize new, unlabeled information based on its relevance to known, labeled data. Typical applications include the credit assessment of a potential client, given current or previous clients with disclosed financial histories. Another important application is analytical quality control: a decision has to be made whether a patient is likely to be virus-infected by comparing the patient's test results with those of other patients. In this chapter, we use linear classifiers to assign a newcomer to a particular class; the assignment depends on whether the newcomer's corresponding performance exceeds a certain bound. Three types of linear classifiers are discussed. First, we introduce the statistically motivated Fisher's discriminant, which maximizes the sample variance between the classes while minimizing the variance of the data within the classes; its computation leads to a nicely structured eigenvalue problem. Second, the celebrated support-vector machine is studied. It is geometrically motivated and maximizes the margin between the two classes; the detection of an optimal separating hyperplane is based on convex duality. Third, the naïve Bayes classifier is derived. Rooted in the application of Bayes' theorem, it is of probabilistic origin: the Bernoulli probabilities of assignment to one class or the other, conditioned on the observed data, are compared.
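For two classes, Fisher's discriminant has a well-known closed form: the optimal direction is proportional to the inverse within-class scatter matrix applied to the difference of the class means (the chapter's eigenvalue formulation covers the general case). A minimal NumPy sketch on hypothetical toy data — all names, seeds, and parameters below are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Fisher's discriminant for two classes: w is proportional to
# Sw^{-1} (mu1 - mu0), where Sw is the within-class scatter matrix.
rng = np.random.default_rng(0)
X0 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))   # class 0 around the origin
X1 = rng.normal([2.0, 2.0], 0.5, size=(50, 2))   # class 1 around (2, 2)

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# within-class scatter matrix (sum of per-class scatter)
Sw = (X0 - mu0).T @ (X0 - mu0) + (X1 - mu1).T @ (X1 - mu1)
w = np.linalg.solve(Sw, mu1 - mu0)               # Fisher direction
threshold = w @ (mu0 + mu1) / 2                  # bound between projected means

def classify(x):
    # assign the newcomer to class 1 iff its projection exceeds the bound
    return int(w @ x > threshold)
```

Projecting onto `w` maximizes between-class separation relative to within-class spread; the newcomer is then classified by comparing its projection with the midpoint of the projected class means, matching the "performance exceeds a certain bound" rule described above.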
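The chapter obtains the maximum-margin hyperplane via convex duality. Purely as an illustration of the same objective, the sketch below instead minimizes the equivalent primal hinge-loss formulation by subgradient descent; the toy data, function name, and step sizes are assumptions of this sketch:

```python
import numpy as np

def svm_train(X, y, lam=0.01, lr=0.1, epochs=500):
    """Soft-margin SVM via subgradient descent on the primal objective
    (1/n) sum max(0, 1 - y_i (w.x_i + b)) + (lam/2) ||w||^2.
    X: data matrix, y: labels in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                       # margin violators
        grad_w = lam * w - (y[active, None] * X[active]).sum(axis=0) / n
        grad_b = -y[active].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy separable data: class -1 near the origin, class +1 near (3, 3)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [3., 3.], [3., 4.], [4., 3.]])
y = np.array([-1., -1., -1., 1., 1., 1.])
w, b = svm_train(X, y)
pred = np.sign(X @ w + b)
```

The separating hyperplane is `w.x + b = 0`; only the points violating the margin (the would-be support vectors) contribute to the subgradient, which mirrors the role support vectors play in the dual solution.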
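The naïve Bayes comparison of conditional Bernoulli probabilities can be sketched as follows; the binary toy data, the Laplace smoothing constant, and the function names are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def bernoulli_nb_fit(X, y):
    # class priors and smoothed Bernoulli parameters theta[c, j] = P(x_j = 1 | class c)
    priors = np.array([(y == c).mean() for c in (0, 1)])
    theta = np.array([(X[y == c].sum(axis=0) + 1.0) / ((y == c).sum() + 2.0)
                      for c in (0, 1)])            # Laplace smoothing
    return priors, theta

def bernoulli_nb_predict(x, priors, theta):
    # compare the two log-posteriors log P(c) + sum_j log P(x_j | c)
    log_post = np.log(priors) + (x * np.log(theta)
                                 + (1 - x) * np.log(1 - theta)).sum(axis=1)
    return int(np.argmax(log_post))

# toy binary data: feature 0 indicates class 0, feature 2 indicates class 1
X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]])
y = np.array([0, 0, 1, 1])
priors, theta = bernoulli_nb_fit(X, y)
```

By Bayes' theorem the posterior of each class is proportional to prior times likelihood, so comparing the two log-posteriors is exactly the comparison of conditional class probabilities described above; the smoothing merely avoids zero probabilities for unseen feature values.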


References
C. Cortes, V.N. Vapnik, Support-vector networks. Machine Learning 20, 273–297 (1995)
R.A. Fisher, The use of multiple measurements in taxonomic problems. Annals of Eugenics 7, 179–188 (1936)
C.-J. Hsieh, K.-W. Chang, C.-J. Lin, S. Keerthi, S. Sellamanickam, A dual coordinate descent method for large-scale linear SVM, in Proceedings of the 25th International Conference on Machine Learning, 2008
H.T. Jongen, K. Meer, E. Triesch, Optimization Theory (Kluwer Academic Publishers, Boston, 2004)
S.Y. Kung, Kernel Methods and Machine Learning (Cambridge University Press, New York, 2014)
R. Mathar, G. Alirezaei, E. Balda, A. Behboodi, Fundamentals of Data Analytics (Springer, Cham, Switzerland, 2020)
Yu. Nesterov, Lectures on Convex Optimization (Springer, Cham, Switzerland, 2018)
Metadata
Copyright Year: 2021
DOI: https://doi.org/10.1007/978-3-662-62521-7_4
