2013 | OriginalPaper | Chapter
Artificial Neural Networks
Author: Hanspeter A. Mallot
Published in: Computational Neuroscience
Publisher: Springer International Publishing
In models of large networks of neurons, the behavior of individual neurons is treated much more simply than in the Hodgkin-Huxley theory presented in Chapter 1: activity is usually represented by a binary variable (1 = firing; 0 = silent), and time is modeled as a discrete sequence of time steps running in synchrony for all neurons in the net. Besides activity, the most interesting state variable of such networks is synaptic strength, or weight, which determines the influence of one neuron on its neighbors in the network. Synaptic weights may change according to so-called “learning rules,” which allow the network to find connectivities optimized for the performance of various tasks. The networks are thus characterized by two state variables: a vector of neuron activities per time step and a matrix of neuron-to-neuron transmission weights describing the connectivity, which also depends on time. In this chapter, we will discuss the basic approach and a number of important network architectures for tasks such as pattern recognition, learning of input-output associations, or the self-organization of representations that are optimal in a certain, well-defined sense. The mathematical treatment is largely based on linear algebra (vectors and matrices) and, as in the other chapters, will be explained “on the fly”.
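The two state variables described above, a binary activity vector updated in synchronous time steps and a weight matrix coupling the neurons, can be sketched in a few lines of NumPy. The weights and threshold below are arbitrary illustrative values, not taken from the chapter:

```python
import numpy as np

def step(activity, weights, threshold=0.0):
    """Synchronous update: each neuron sums its weighted inputs and
    fires (1) if the sum exceeds the threshold, else is silent (0)."""
    return (weights @ activity > threshold).astype(int)

# Toy 3-neuron network; entry W[i, j] is the weight from neuron j to neuron i.
W = np.array([[0.0,  1.0, -0.5],
              [0.5,  0.0,  1.0],
              [1.0, -1.0,  0.0]])

x = np.array([1, 0, 1])   # activity vector at time t
x_next = step(x, W)       # activity vector at time t+1
print(x_next)             # prints [0 1 1]
```

In this formulation a learning rule would be a separate update acting on `W` between (or alongside) activity steps, which is how the later architectures in the chapter separate fast activity dynamics from slow weight dynamics.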