2023 | Original Paper | Book Chapter

11. Neural Networks

Author: Frank Acito

Published in: Predictive Analytics with KNIME

Publisher: Springer Nature Switzerland


Abstract

This chapter explores neural networks, focusing on their applications and underlying principles. Neural networks have gained immense popularity due to their flexibility and accuracy in supervised data mining tasks. They can effectively handle problems with categorical and continuous target variables, making them a versatile tool in predictive modeling. The chapter introduces the concept of artificial neural networks, which mimic the structure and function of human brain neurons.
The mathematical model of a neuron, first proposed by McCulloch and Pitts, serves as the foundation for neural networks. However, early attempts to implement neural networks faced challenges, leading to a period of reduced interest. The breakthrough came in the 1980s with the development of algorithms like backpropagation, which enabled the estimation of weights in multilayer networks.
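
In the standard textbook form (a reconstruction, not a quotation from the chapter), the McCulloch-Pitts neuron fires when a weighted sum of its inputs reaches a threshold θ:

    y = \begin{cases} 1 & \text{if } \sum_{i=1}^{n} w_i x_i \ge \theta \\ 0 & \text{otherwise} \end{cases}

Later networks replace this hard threshold with smooth activation functions so that the weights can be learned by gradient-based methods such as backpropagation.
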
The chapter discusses the learning process for neural networks, which involves adjusting the model weights iteratively to minimize an error function. Different activation functions are explored, each influencing the output of the neurons. Notably, the ReLU activation function enabled the development of deep learning models with three or more hidden layers.
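
As a rough illustration of the activation functions typically compared (a Python sketch; the chapter itself works in KNIME's codeless nodes):

    # Common activation functions; standard definitions, not code from the chapter.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))  # squashes any input into (0, 1)

    def tanh(z):
        return np.tanh(z)                # squashes any input into (-1, 1)

    def relu(z):
        return np.maximum(0.0, z)        # zero for negatives, identity otherwise

    z = np.array([-2.0, 0.0, 2.0])
    print(sigmoid(z), tanh(z), relu(z))

Because ReLU's gradient does not shrink toward zero for positive inputs, it avoids the vanishing-gradient problem that hampered the training of networks with many hidden layers.
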
An example of a single-layer artificial neuron demonstrates the calculations with various activation functions. This is followed by an example of a multilayer perceptron, showcasing the real power of neural networks with multiple layers and nodes. Neural network applications using KNIME are illustrated in the context of credit screening and predicting used car prices.
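
A minimal sketch of the forward pass such an example steps through, with made-up weights and inputs (the chapter's actual numbers are not reproduced here):

    # Forward pass: 2 inputs -> 3 hidden nodes (ReLU) -> 1 sigmoid output.
    # All values are illustrative assumptions, not the chapter's example.
    import numpy as np

    x  = np.array([0.5, -1.2])                             # input vector
    W1 = np.array([[0.2, -0.4], [0.7, 0.1], [-0.3, 0.5]])  # hidden weights (3 x 2)
    b1 = np.array([0.1, -0.2, 0.0])                        # hidden biases
    W2 = np.array([[0.6, -0.8, 0.3]])                      # output weights (1 x 3)
    b2 = np.array([0.05])                                  # output bias

    h = np.maximum(0.0, W1 @ x + b1)                       # hidden layer, ReLU
    y = 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))               # sigmoid output in (0, 1)
    print(y)  # interpretable as, e.g., a probability in credit screening

During training, backpropagation runs this computation in reverse, attributing the output error to each weight so the weights can be adjusted.
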
The chapter also emphasizes the importance of proper data preparation, including normalization and oversampling to address class imbalance in classification problems. Overfitting, a common challenge with neural networks, is discussed, along with techniques to mitigate it.
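
For instance, min-max normalization derives its parameters from the training partition alone and then applies them to the other partitions (a generic sketch, not the chapter's KNIME workflow):

    # Min-max normalization: fit on training data, apply to test data.
    # Test values outside the training range map outside [0, 1] (see footnote 5).
    import numpy as np

    train = np.array([10.0, 20.0, 30.0])
    test  = np.array([5.0, 25.0, 35.0])

    lo, hi = train.min(), train.max()
    train_norm = (train - lo) / (hi - lo)  # -> [0.0, 0.5, 1.0]
    test_norm  = (test - lo) / (hi - lo)   # -> [-0.25, 0.75, 1.25]
    print(train_norm, test_norm)
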
The chapter provides a comprehensive overview of neural networks, highlighting their strengths and challenges. Neural networks offer great potential for complex, non-linear problems but require careful consideration of data preparation, model complexity, and validation to ensure reliable and accurate predictions.


Footnotes
1
“New Navy Device Learns by Doing: Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser.” 1958. The New York Times, July 8, 1958, p. 25.
 
2
See “A concise history of neural networks” (Jaspreet, 2016) for more on the development of neural nets.
 
3
While it is possible to have a linear activation function, most activation functions are non-linear. A network built only from linear activations collapses to ordinary linear regression.
 
4
It is possible that the accuracy with the test or validation set is higher than with the training set. This should not be a concern since it is most likely due to the randomness in creating the subsets of data.
 
5
When applying the normalization from the training set to the validation or test sets, an error is sometimes raised since values found in the latter two data sets might be beyond the range of values in the training set. This is usually not a problem and can be ignored.
 
6
Grömping, U. (2019). South German Credit Data: Correcting a Widely Used Data Set. Retrieved from https://archive.ics.uci.edu/ml/datasets/South+German+Credit+
 
References
Hastie, T., Tibshirani, R., & Friedman, J. (2009). The elements of statistical learning. New York: Springer.
McCulloch, W. S., & Pitts, W. H. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 114–133.
Mercioni, M. A., & Stefan, H. (2020). The most used activation functions: Classic versus current. In 2020 International Conference on Development and Application Systems (DAS), Suceava, Romania, pp. 141–145.
Minsky, M., & Papert, S. (1969). Perceptrons: An introduction to computational geometry. MIT Press.
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6), 386–408.
Rumelhart, D. E., Hinton, G. E., & Williams, R. J. (1986). Learning internal representations by error propagation. In D. E. Rumelhart (Ed.), Parallel distributed processing: Explorations in the microstructures of cognition (pp. 318–362). MIT Press.
Silipo, R., & Melcher, K. (2020). Codeless deep learning with KNIME. Packt Publishing.
Metadata
Title
Neural Networks
Author
Frank Acito
Copyright Year
2023
DOI
https://doi.org/10.1007/978-3-031-45630-5_11