
2023 | Original Paper | Book Chapter

10. k Nearest Neighbors

Author: Frank Acito

Published in: Predictive Analytics with KNIME

Publisher: Springer Nature Switzerland


Abstract

K Nearest Neighbors (kNN) is a powerful and intuitive data mining model for classification and regression tasks. As an instance-based or memory-based learning algorithm, kNN classifies new objects based on their similarity to known objects in the training data. Unlike parametric models, kNN is non-parametric and does not rely on assumptions about data distributions.
The main advantages of kNN are its simplicity and the absence of a training phase. Its principal drawback is that every new observation must be compared against the entire training set at prediction time, which can be slow for large datasets.
The kNN algorithm calculates the distances between the new observation and all existing data points. The k nearest neighbors are selected based on the smallest distances, and their majority class or average value is used for classification or regression.
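The steps just described can be sketched in a few lines of Python. This is an illustrative minimal implementation, not code from the chapter; the function name and toy data are invented for the example, and Euclidean distance is assumed as the metric.

```python
import math
from collections import Counter

def knn_classify(train, new_point, k=3):
    """Classify new_point by majority vote among its k nearest training points.

    train: list of (features, label) pairs; features are numeric tuples.
    """
    # Compute the Euclidean distance from new_point to every training point.
    distances = [
        (math.dist(features, new_point), label)
        for features, label in train
    ]
    # Keep the k smallest distances and take the majority class among them.
    nearest = sorted(distances)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters labeled "A" and "B".
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_classify(train, (2, 2), k=3))  # → A
```

Note that all the work happens inside the function call: nothing is precomputed or "fitted", which is exactly the lazy-learning behavior described next.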
For classification tasks, kNN is considered a “lazy” algorithm because it does not create an explicit model during training. Instead, it stores the entire dataset and defers all computation until a new observation must be classified. In contrast, “eager” algorithms, like logistic regression, build a model during training that is then used for predictions.
In addition to classification, kNN can also be used for regression tasks. It can capture non-linear relationships between predictors and continuous target variables without requiring a predefined model.
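For regression, the only change from the classification sketch above is the aggregation step: the prediction is the mean of the neighbors' target values rather than a majority vote. Again a hedged illustration with invented names and toy data, assuming Euclidean distance:

```python
import math

def knn_regress(train, new_point, k=3):
    """Predict a continuous target as the mean of the k nearest neighbors' targets.

    train: list of (features, target) pairs; features are numeric tuples.
    """
    # Sort all training points by distance to the query point.
    distances = sorted(
        (math.dist(features, new_point), target)
        for features, target in train
    )
    # Average the target values of the k closest points.
    nearest = distances[:k]
    return sum(target for _, target in nearest) / k

# A non-linear relationship (target = x^2) that kNN can track
# without any predefined functional form.
train = [((x,), x * x) for x in range(10)]
print(knn_regress(train, (3.4,), k=3))  # mean of targets at x = 3, 4, 2
```

Because the prediction is a local average, the method adapts to curvature in the data automatically; no polynomial degree or link function has to be specified in advance.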
While kNN is flexible and robust to different target variables and distributions, it requires standardizing the predictors so that variables measured on large scales do not dominate the distance calculation. It also suffers from the “curse of dimensionality”: performance degrades in high-dimensional spaces because the data become increasingly sparse and distances less informative.
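The standardization point can be made concrete with a small z-score sketch (an assumed preprocessing step, not code from the chapter; the function name and sample values are invented):

```python
import statistics

def standardize_columns(rows):
    """Z-score each column so no predictor dominates the distance metric."""
    cols = list(zip(*rows))
    means = [statistics.mean(c) for c in cols]
    # pstdev can be 0 for a constant column; fall back to 1.0 to avoid division by zero.
    stds = [statistics.pstdev(c) or 1.0 for c in cols]
    return [
        tuple((value - m) / s for value, m, s in zip(row, means, stds))
        for row in rows
    ]

# Before scaling, income differences (thousands) would swamp age differences (years)
# in any Euclidean distance computation.
rows = [(25, 40_000), (30, 60_000), (45, 52_000)]
scaled = standardize_columns(rows)
```

After scaling, every column has mean 0 and unit standard deviation, so each predictor contributes comparably to the distances kNN computes.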
Despite its limitations, kNN remains a valuable tool in data mining, especially when dealing with non-linear relationships and a lack of strict assumptions about the data. Careful data preprocessing and optimization of the value of k can help improve its performance in various applications.


Metadata
Title: k Nearest Neighbors
Author: Frank Acito
Copyright year: 2023
DOI: https://doi.org/10.1007/978-3-031-45630-5_10