Abstract
This chapter explores neural networks, focusing on their applications and underlying principles. Neural networks have gained immense popularity due to their flexibility and accuracy in supervised data mining tasks. They can effectively handle problems with categorical and continuous target variables, making them a versatile tool in predictive modeling. The chapter introduces the concept of artificial neural networks, which mimic the structure and function of human brain neurons.
The mathematical model of a neuron, first proposed by McCulloch and Pitts, serves as the foundation for neural networks. However, early attempts to implement neural networks faced challenges, leading to a period of reduced interest. The breakthrough came in the 1980s with the development of algorithms like backpropagation, which enabled the estimation of weights in multilayer networks.
The chapter discusses the learning process for neural networks, which involves adjusting the model weights iteratively to minimize an error function. Different activation functions are explored, each influencing the output of the neurons. Notably, the ReLU activation function enabled the development of deep learning models with three or more hidden layers.
An example of a single-layer artificial neuron demonstrates the calculations with various activation functions. This is followed by an example of a multilayer perceptron, showcasing the real power of neural networks with multiple layers and nodes. Neural network applications using KNIME are illustrated in the context of credit screening and predicting used car prices.
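The single-neuron calculation described above can be sketched in a few lines of Python. This is an illustrative example, not the chapter's worked example: the input values, weights, and bias below are made up, and the helper `neuron_output` is a hypothetical name.

```python
import math

def neuron_output(inputs, weights, bias, activation):
    """Weighted sum of inputs plus bias, passed through an activation function."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(z)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    return max(0.0, z)

# Illustrative inputs, weights, and bias (not taken from the chapter)
x = [0.5, -1.0, 2.0]
w = [0.4, 0.3, 0.1]
b = 0.2

# Same weighted sum (z = 0.3), different activations give different outputs
print(neuron_output(x, w, b, sigmoid))
print(neuron_output(x, w, b, relu))
print(neuron_output(x, w, b, math.tanh))
```

The choice of activation function changes only the final nonlinearity applied to the same weighted sum, which is why the chapter can compare several activations on one example neuron.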
The chapter also emphasizes the importance of proper data preparation, including normalization and the use of oversampling to address class imbalance in classification problems. Overfitting, a common challenge in neural networks, is discussed, and techniques to mitigate it are presented.
The chapter provides a comprehensive overview of neural networks, highlighting their strengths and challenges. Neural networks offer great potential for complex and non-linear problems but require careful consideration of data preparation, model complexity, and validation to ensure reliable and accurate predictions.
While a linear activation function is possible, most activation functions are non-linear: a network built only from linear activations simply reproduces ordinary linear regression.
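This point can be demonstrated numerically: stacking layers with identity (linear) activations collapses to a single linear transformation. The following sketch uses random illustrative weights, which are not from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))          # 5 samples, 3 features
W1 = rng.normal(size=(3, 4))         # "hidden layer" weights
b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 1))         # "output layer" weights
b2 = rng.normal(size=1)

# Two layers with linear (identity) activation...
two_layer = (x @ W1 + b1) @ W2 + b2

# ...are equivalent to one linear model with combined weights and bias
W = W1 @ W2
b = b1 @ W2 + b2
one_layer = x @ W + b

print(np.allclose(two_layer, one_layer))
```

No matter how many such layers are stacked, the result is a single linear map of the inputs, i.e., ordinary regression; non-linear activations are what give multilayer networks their extra expressive power.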
It is possible that the accuracy with the test or validation set is higher than with the training set. This should not be a concern since it is most likely due to the randomness in creating the subsets of data.
When applying the normalization from the training set to the validation or test sets, an error is sometimes raised since values found in the latter two data sets might be beyond the range of values in the training set. This is usually not a problem and can be ignored.
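As a sketch of why this happens, consider min-max normalization fitted on the training set and then applied to a validation set: validation values outside the training range map outside [0, 1]. The helper names and data below are illustrative assumptions, not the chapter's.

```python
def fit_min_max(train_col):
    """Learn the normalization parameters from the training data only."""
    return min(train_col), max(train_col)

def apply_min_max(col, lo, hi):
    """Apply the training-set normalization to any data set."""
    return [(v - lo) / (hi - lo) for v in col]

train = [10.0, 20.0, 30.0]
valid = [5.0, 25.0, 35.0]   # contains values outside the training range

lo, hi = fit_min_max(train)
print(apply_min_max(valid, lo, hi))  # [-0.25, 0.75, 1.25]
```

The values -0.25 and 1.25 fall outside [0, 1], which is what can trigger a warning or error in some tools; as noted above, this is usually harmless.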