
Neural Networks Introduced

Chapter in the book Neural Networks, part of the book series Physics of Neural Networks.

Abstract

Neural network models are algorithms for cognitive tasks, such as learning and optimization, which are in a loose sense based on concepts derived from research into the nature of the brain. In mathematical terms a neural network model is defined as a directed graph with the following properties:

  1. A state variable $n_i$ is associated with each node $i$.

  2. A real-valued weight $w_{ik}$ is associated with each link $(ik)$ between two nodes $i$ and $k$.

  3. A real-valued bias $\vartheta_i$ is associated with each node $i$.

  4. A transfer function $f_i[n_k, w_{ik}, \vartheta_i, (ki)]$ is defined for each node $i$; it determines the state of the node as a function of its bias, of the weights of its incoming links, and of the states of the nodes connected to it by these links. (A minimal code sketch of this definition follows the list.)
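
In code, the definition above maps onto a small data structure. The sketch below is not from the book: the class and method names are invented for illustration, and the sigmoid form of $f_i$ is one common concrete choice where the text deliberately leaves the transfer function general.

```python
import math

class NeuralNetworkGraph:
    """Directed graph with states n_i, weights w_ik, biases theta_i,
    and a per-node transfer function f_i, as defined above."""

    def __init__(self):
        self.state = {}    # n_i: state variable of node i
        self.bias = {}     # theta_i: real-valued bias of node i
        self.weight = {}   # weight[(i, k)]: real-valued weight of link (ik)

    def add_node(self, i, state=0.0, bias=0.0):
        self.state[i] = state
        self.bias[i] = bias

    def add_link(self, i, k, w):
        # Link (ik) feeds the state of node k into node i.
        self.weight[(i, k)] = w

    def update_node(self, i):
        # f_i determines n_i from the bias, the incoming weights w_ik,
        # and the states n_k of the connected nodes.  The sigmoid of the
        # weighted input sum minus the bias is an assumed concrete choice.
        h = sum(w * self.state[k]
                for (j, k), w in self.weight.items() if j == i)
        self.state[i] = 1.0 / (1.0 + math.exp(-(h - self.bias[i])))

# Tiny usage example: one link (ab) feeding node b's state into node a.
net = NeuralNetworkGraph()
net.add_node("a", bias=0.5)
net.add_node("b", state=1.0)
net.add_link("a", "b", w=2.0)
net.update_node("a")   # n_a = f(w_ab * n_b - theta_a) = sigmoid(1.5)
```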


Notes


  1. The terms “neural computation” and “neural computer”, which are frequently used in this context, are somewhat misleading: neural networks do not appear to be very adept at solving classical computational problems, such as adding, multiplying, or dividing numbers. The most promising area of application concerns cognitive tasks that are hard to cast into algebraic equations.

  2. Technically, even the simple perceptron was a three-layer device, since a preprocessing layer of sensory units was located in front of the first layer of computing neurons, which is here called the “input” layer. The connections between the sensory units and the following (input) layer of neurons were not adjustable, i.e. no learning occurred at this level.

  3. In fact, the XOR problem is easily solved by a feed-forward network with three layers, but no practical algorithm for constructing the weights $w_{ij}$ of such generalized perceptrons was known at the time. (One hand-built solution is sketched below.)
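
Such a three-layer solution can be written down by hand. The sketch below is illustrative and not taken from the book; the particular weights and thresholds are assumptions, chosen so that the two hidden units compute OR and AND of the inputs.

```python
def step(x):
    # Heaviside threshold unit: fires (1) when its net input is positive.
    return 1 if x > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)          # hidden unit 1: x1 OR x2
    h2 = step(x1 + x2 - 1.5)          # hidden unit 2: x1 AND x2
    return step(h1 - 2 * h2 - 0.5)    # output: OR but not AND, i.e. XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))    # prints 0, 1, 1, 0 for the four inputs
```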

  4. On the other hand, it is highly questionable whether biological assemblies of neurons can operate synchronously, except in very special cases. Most likely, reality lies somewhere between the two extremes; precisely where remains a sharply debated subject [Cl85]. (The two extremes are contrasted in the sketch below.)
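
The two extremes can be stated precisely in code. The sketch below is not from the book and assumes ±1 threshold units: in the synchronous case every neuron updates at once from the old state vector, while in the asynchronous case a single randomly chosen neuron updates at a time.

```python
import random

def local_field(w, n, i):
    # h_i = sum_k w[i][k] * n_k   (biases omitted for brevity)
    return sum(w[i][k] * n[k] for k in range(len(n)))

def synchronous_step(w, n):
    # All neurons switch simultaneously, each seeing only the old state.
    return [1 if local_field(w, n, i) > 0 else -1 for i in range(len(n))]

def asynchronous_step(w, n):
    # One randomly chosen neuron updates; all others keep their state.
    i = random.randrange(len(n))
    n = list(n)
    n[i] = 1 if local_field(w, n, i) > 0 else -1
    return n
```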

  5. The stochastic evolution law introduced by Little may actually represent a real feature of biological neurons, viz. that nerve cells can fire spontaneously without external excitation, leading to a persistent noise level in the network. (A standard formalization is sketched below.)
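
A standard way to formalize such a stochastic law (a sketch under common textbook conventions, not code from the book) makes the firing probability a sigmoid of the local field at a noise temperature T, so a neuron has a nonzero chance of firing even without net excitation:

```python
import math, random

def stochastic_update(h, T):
    """New state (+1 = firing, -1 = resting) of a neuron with local field h.

    At noise temperature T > 0 the neuron fires with probability
    p = 1 / (1 + exp(-2h / T)); for h < 0 this probability is small
    but nonzero, giving the spontaneous activity described above."""
    p_fire = 1.0 / (1.0 + math.exp(-2.0 * h / T))
    return 1 if random.random() < p_fire else -1
```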

  6. The two goals are sometimes hard to reconcile: the excitement generated by the recent progress in “neural computing” has an innate tendency to distract interest from research into the principles governing the human mind. A critical view has been voiced by F. Crick [Cr89].

  7. Indeed, testing the correctness of conventional (von Neumann) computer programs is a difficult problem as well. For example, there is no general algorithm that decides in finite time whether a given program always terminates. But a variety of clever testing techniques allows the correctness of computer programs to be checked with very high probability.

  8. This view is supported by the observation that the daily duration of dream sleep gradually decreases during the life of a person, and that the longest periods of dreaming occur in newly born, or even unborn, babies. If this hypothesis is correct, attempts to remember the contents of dreams, as advocated by Freud and his followers, would be counterproductive rather than helpful!

  9. For example, by focusing attention one can follow the words of a single speaker in a room where many people talk simultaneously, often at levels far exceeding the speaker’s voice. Among researchers of the auditory process this is known as the “cocktail party effect”.


Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Müller, B., Reinhardt, J., Strickland, M.T. (1995). Neural Networks Introduced. In: Neural Networks. Physics of Neural Networks. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-57760-4_2

  • DOI: https://doi.org/10.1007/978-3-642-57760-4_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-60207-1

  • Online ISBN: 978-3-642-57760-4

