Multilayered Perceptrons

Chapter in the book Neural Networks, part of the book series Physics of Neural Networks.

Abstract

Before we enter into the discussion of how one can derive a learning rule for multilayered perceptrons, it is useful to consider a simple example. We choose the exclusive-OR (XOR) function, with which we are already familiar. In order to circumvent the “no-go” theorem derived for simple perceptrons in the previous section, we add a hidden layer containing two neurons which receive signals from the input neurons and feed the output neuron (see Fig. 6.1). We denote the states of the hidden neurons by the variables $s_j$ $(j = 1, \dots, N_h)$.¹ The synaptic connections between the hidden neurons and the output neurons are denoted by $w_{ij}$; those between the input layer and the hidden layer by $w_{jk}$. The threshold potentials of the output neurons are called $\vartheta_i$; those of the hidden neurons are called $\vartheta_j$.
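
To make the architecture concrete, the following minimal Python sketch realizes such a network with step-function neurons. The weights and thresholds are hand-picked illustrative values (one of several possible solutions, not necessarily those of Fig. 6.1): the first hidden neuron computes OR, the second AND, and the output neuron fires exactly when OR holds but AND does not.

    def step(x):
        # Heaviside threshold unit: fires (1) when its input exceeds 0.
        return 1 if x > 0 else 0

    def xor_net(x1, x2):
        # Hidden layer (lower-case s, cf. note 1):
        s1 = step(x1 + x2 - 0.5)       # acts as OR:  weights (1, 1), threshold 0.5
        s2 = step(x1 + x2 - 1.5)       # acts as AND: weights (1, 1), threshold 1.5
        # Output layer (capital S): fires for "OR but not AND", i.e. XOR.
        return step(s1 - 2 * s2 - 0.5) # weights (1, -2), threshold 0.5

    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, "->", xor_net(x1, x2))  # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0

The hidden layer is what evades the no-go theorem: neither hidden neuron computes XOR, but each computes a linearly separable function, and the output neuron combines them into the linearly non-separable one.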

Notes

  1. Capital letters $S$ will henceforth symbolize output neurons, while hidden neurons are denoted by lower-case letters $s$.

  2. Even though error back-propagation may be biologically implausible, its use as a computational algorithm can be helpful in neurobiological studies. For example, the authors of [Lo89] were able to understand the function of interneurons found in the nervous system of a leech by comparison with a simulated neural network trained with back-propagation.

  3. A neuron with this property is often referred to as the “grandmother neuron”. This term derives from the (probably incorrect) notion that a specific neuron is activated in the brain when one thinks about a certain concept, such as one’s grandmother.

  4. It belongs to the class of NP-complete problems (see Footnote 1 in Sect. 11.1).

  5. It can actually be shown that even a single layer of hidden neurons has sufficient flexibility to represent any continuous function [Fi80, He87b, Ho89d, Cy89]; a schematic statement of this result follows these notes. The proof, which is based on a general theorem of Kolmogorov concerning the representation of functions of several variables [Ko57], is rather formal and does not guarantee the existence of a reasonable representation in practice, in contrast to the constructive proof that can be given for networks with two hidden layers.

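In formulas, the single-hidden-layer representation asserted in note 5 takes the schematic form below. Here $\sigma$ denotes a fixed sigmoidal activation and the output coefficients $c_j$ are introduced for illustration; the precise conditions are those given in [Cy89, Ho89d]:

$$
f(x_1, \dots, x_n) \;\approx\; \sum_{j=1}^{N_h} c_j\, \sigma\!\Big(\sum_{k=1}^{n} w_{jk}\, x_k - \vartheta_j\Big),
$$

where the approximation can be made arbitrarily accurate on a compact domain by choosing the number of hidden neurons $N_h$ sufficiently large.
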

Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Müller, B., Reinhardt, J., Strickland, M.T. (1995). Multilayered Perceptrons. In: Neural Networks. Physics of Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-57760-4_6

  • DOI: https://doi.org/10.1007/978-3-642-57760-4_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60207-1

  • Online ISBN: 978-3-642-57760-4

