Abstract
Neural network models are algorithms for cognitive tasks, such as learning and optimization, which are in a loose sense based on concepts derived from research into the nature of the brain. In mathematical terms a neural network model is defined as a directed graph with the following properties:
1. A state variable n_i is associated with each node i.
2. A real-valued weight ω_ik is associated with each link (ik) between two nodes i and k.
3. A real-valued bias ϑ_i is associated with each node i.
4. A transfer function f_i[n_k, ω_ik, ϑ_i, (k ≠ i)] is defined for each node i, which determines the state of the node as a function of its bias, of the weights of its incoming links, and of the states of the nodes connected to it by these links.
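As a minimal illustration of the definition above (a hypothetical sketch, not code from the chapter), the state update of a single node can be written in Python; the sigmoid used here is only one common choice of transfer function, since the definition leaves f_i open:

```python
import math

def sigmoid(x):
    # One common choice of transfer function f_i; the definition
    # above does not fix a particular form.
    return 1.0 / (1.0 + math.exp(-x))

def node_state(weights, states, bias, f=sigmoid):
    """Compute the state of node i according to property 4:
    n_i = f_i( sum_k omega_ik * n_k - theta_i ),
    where the sum runs over the incoming links (k != i)."""
    activation = sum(w * n for w, n in zip(weights, states)) - bias
    return f(activation)

# Node with two incoming links (weights 1.0 and -0.5),
# neighbor states 1 and 0, and bias 0.2:
print(node_state([1.0, -0.5], [1, 0], 0.2))  # ≈ 0.69
```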
Notes
The terms “neural computation” and “neural computer”, which are frequently used in this context, are somewhat misleading: neural networks do not appear to be very adept at solving classical computational problems, such as adding, multiplying, or dividing numbers. The most promising area of application concerns cognitive tasks that are hard to cast into algebraic equations.
Technically, even the simple perceptron was a three-layer device, since a preprocessing layer of sensory units was located in front of the first layer of computing neurons, which is here called the “input” layer. The connections between the sensory units and the following (input) layer of neurons were not adjustable, i.e. no learning occurred at this level.
In fact, the XOR problem can be easily solved by a feed-forward network with three layers, but no practical algorithm for the construction of the weights w_ij of such generalized perceptrons was known at the time.
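For concreteness, one such three-layer solution can be written down by hand (an illustrative sketch with hand-chosen weights, not the chapter's construction): with threshold units, a hidden layer computing OR and NAND feeds an output unit computing AND, which yields XOR.

```python
def step(x):
    # Threshold (Heaviside) transfer function.
    return 1 if x >= 0 else 0

def neuron(weights, inputs, bias):
    # Threshold unit: fires if the weighted input sum exceeds the bias.
    return step(sum(w * x for w, x in zip(weights, inputs)) - bias)

def xor_net(x1, x2):
    # Hidden layer: one unit computes OR, the other NAND.
    h1 = neuron([1, 1], [x1, x2], 0.5)     # x1 OR x2
    h2 = neuron([-1, -1], [x1, x2], -1.5)  # NOT (x1 AND x2)
    # Output layer: AND of the hidden units gives XOR.
    return neuron([1, 1], [h1, h2], 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))  # prints the XOR truth table
```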
On the other hand, it is highly questionable whether biological assemblies of neurons can operate synchronously, except in very special cases. Most likely, reality lies somewhere between the two extremes; precisely where remains a sharply debated subject [Cl85].
The stochastic evolution law introduced by Little may actually represent a real feature of biological neurons, viz. that nerve cells can fire spontaneously without external excitation, leading to a persistent noise level in the network.
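A stochastic rule of the kind introduced by Little is commonly formulated as a firing probability controlled by a temperature-like noise parameter; the following is a sketch under that assumption (the functional form and parameter names are illustrative, not taken from the chapter):

```python
import math
import random

def firing_probability(local_field, temperature):
    """Probability that a neuron fires, given its local field
    h_i = sum_k w_ik * n_k - theta_i. For temperature -> 0 this
    approaches the deterministic threshold rule; for temperature > 0
    the neuron can fire even against its input, producing the
    persistent noise level mentioned in the text."""
    return 1.0 / (1.0 + math.exp(-2.0 * local_field / temperature))

def stochastic_update(local_field, temperature, rng=random.random):
    # Draw the new binary state (1 = firing, 0 = quiescent) at random.
    return 1 if rng() < firing_probability(local_field, temperature) else 0
```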
The two goals are sometimes hard to reconcile: the excitement generated by the recent progress in “neural computing” has an innate tendency to distract interest from research into the principles governing the human mind. A critical view has been voiced by F. Crick [Cr89].
Indeed, testing the correctness of conventional (von Neumann) computer programs is a difficult problem as well. For example, there is no general algorithm that checks in finite time whether a given program always terminates. But a variety of clever testing techniques allows one to establish with very high probability that a given program is correct.
This view is supported by the observation that the daily duration of dream sleep gradually decreases during the life of a person, and that the longest periods of dreaming occur in newly born, or even unborn, babies. If this hypothesis is correct, attempts to remember the contents of dreams, as advocated by Freud and his followers, would be counterproductive rather than helpful!
For example, focusing attention, one can follow the words of a single speaker in a room where many people talk simultaneously, often far exceeding the speaker’s voice level. Among researchers of the auditory process this is known as the “cocktail party effect”.
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
Cite this chapter
Müller, B., Reinhardt, J., Strickland, M.T. (1995). Neural Networks Introduced. In: Neural Networks. Physics of Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-57760-4_2
DOI: https://doi.org/10.1007/978-3-642-57760-4_2
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-60207-1
Online ISBN: 978-3-642-57760-4
eBook Packages: Springer Book Archive