Abstract
Before we enter into the discussion of how one can derive a learning rule for multilayered perceptrons, it is useful to consider a simple example. We choose the exclusive-OR (XOR) function, with which we are already familiar. In order to circumvent the “no-go” theorem derived for simple perceptrons in the previous section, we add a hidden layer containing two neurons which receive signals from the input neurons and feed the output neuron (see Fig. 6.1). We denote the states of the hidden neurons by the variables s_j (j = 1, …, N_h).¹ The synaptic connections between the hidden neurons and the output neurons are denoted by w_ij; those between the input layer and the hidden layer by w_jk. The threshold potentials of the output neurons are called ϑ_i; those of the hidden neurons are called ϑ_j.
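To make the architecture concrete, the following minimal Python sketch implements such a 2-2-1 network with hard step activations and one hand-picked set of weights and thresholds that realizes XOR. The numerical values are illustrative choices, not the ones from Fig. 6.1: the first hidden neuron fires like an OR gate, the second like an AND gate, and the output neuron fires exactly when OR is true but AND is not.

```python
# A minimal sketch of the 2-2-1 XOR network described above, with
# hand-picked weights and thresholds (illustrative values, not Fig. 6.1).

def step(x):
    """Heaviside step activation: the neuron fires (1) if its net input is positive."""
    return 1 if x > 0 else 0

w_jk = [[1.0, 1.0],          # input -> hidden weights; row j feeds hidden neuron j
        [1.0, 1.0]]
theta_hidden = [0.5, 1.5]    # hidden thresholds: neuron 1 acts as OR, neuron 2 as AND

w_ij = [1.0, -1.0]           # hidden -> output weights
theta_out = 0.5              # output threshold: fires for OR-and-not-AND, i.e. XOR

def xor_net(x1, x2):
    s = [step(w_jk[j][0] * x1 + w_jk[j][1] * x2 - theta_hidden[j])
         for j in range(2)]                                   # hidden states s_j
    return step(w_ij[0] * s[0] + w_ij[1] * s[1] - theta_out)  # output S

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"{x1} XOR {x2} = {xor_net(x1, x2)}")   # reproduces the XOR truth table
```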
Notes
1. Capital letters S will henceforth symbolize output neurons, while hidden neurons are denoted by lower-case letters s.
2. Even though error back-propagation may be biologically implausible, its use as a computational algorithm can be helpful in neurobiological studies. For example, the authors of [Lo89] were able to understand the function of interneurons found in the nervous system of the leech by comparing it with a simulated neural network trained with back-propagation. (A schematic back-propagation training loop is sketched after these notes.)
3. A neuron with this property is often referred to as a “grandmother neuron”. The term derives from the (probably incorrect) notion that a specific neuron in the brain is activated when one thinks about a certain concept, such as one’s grandmother.
4. It belongs to the class of NP-complete problems (see Footnote 1 in Sect. 11.1).
5. It can actually be shown that even a single layer of hidden neurons has sufficient flexibility to represent any continuous function [Fi80, He87b, Ho89d, Cy89]. The proof, which is based on a general theorem of Kolmogorov concerning the representation of functions of several variables [Ko57] (stated below), is rather formal and does not guarantee the existence of a reasonable representation in practice, in contrast to the constructive proof that can be given for networks with two hidden layers.
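As a companion to note 2, here is a schematic back-propagation training loop for the same 2-2-1 architecture, written in Python with NumPy. Sigmoid units replace the step functions so that gradients exist; the learning rate, epoch count, and random initialization are arbitrary illustrative choices, and the biases play the role of the (negative) thresholds ϑ.

```python
# Schematic back-propagation on the 2-2-1 XOR network (sigmoid units).
# Hyperparameters below are illustrative, not taken from the text.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR targets

W1 = rng.normal(size=(2, 2))   # input -> hidden weights w_jk
b1 = np.zeros(2)               # hidden biases (negative thresholds)
W2 = rng.normal(size=(2, 1))   # hidden -> output weights w_ij
b2 = np.zeros(1)               # output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 0.5                      # learning rate (illustrative)
for epoch in range(20000):
    s = sigmoid(X @ W1 + b1)   # forward pass: hidden states s_j
    S = sigmoid(s @ W2 + b2)   # forward pass: output S
    dS = (S - y) * S * (1 - S)        # output error signal (quadratic cost)
    ds = (dS @ W2.T) * s * (1 - s)    # error propagated back to the hidden layer
    W2 -= eta * s.T @ dS;  b2 -= eta * dS.sum(axis=0)
    W1 -= eta * X.T @ ds;  b1 -= eta * ds.sum(axis=0)

# Outputs approach [0, 1, 1, 0]; XOR training can settle in a local
# minimum, so a different random seed may occasionally be needed.
print(np.round(S.squeeze(), 2))
```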
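For reference, the Kolmogorov theorem invoked in note 5 can be stated compactly. In the standard formulation (the notation below is the usual modern one, not necessarily that of [Ko57]), every continuous function of n variables on the unit cube is an exact superposition of continuous one-variable functions Φ_q and φ_{q,p}:

```latex
% Kolmogorov's superposition theorem: for continuous f on [0,1]^n
f(x_1, \ldots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right)
```

A single hidden layer mimics this structure, with the inner sums playing the role of the hidden neurons’ net inputs and the outer functions that of the output stage; this is why the theorem lends plausibility, though not a practical construction, to the representation results cited in note 5.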
Copyright information
© 1995 Springer-Verlag Berlin Heidelberg
Cite this chapter
Müller, B., Reinhardt, J., Strickland, M.T. (1995). Multilayered Perceptrons. In: Neural Networks. Physics of Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-57760-4_6
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-60207-1
Online ISBN: 978-3-642-57760-4