Learning sets of filters using back-propagation

https://doi.org/10.1016/0885-2308(87)90026-X

Abstract

A learning procedure, called back-propagation, for layered networks of deterministic, neuron-like units has been described previously. The ability of the procedure automatically to discover useful internal representations makes it a powerful tool for attacking difficult problems like speech recognition. This paper describes further research on the learning procedure and presents an example in which a network learns a set of filters that enable it to discriminate formant-like patterns in the presence of noise. The generality of the learning procedure is illustrated by a second example in which a similar network learns an edge detection task. The speed of learning is strongly dependent on the shape of the surface formed by the error measure in “weight space”. Examples are given of the error surface for a simple task and an acceleration method that speeds up descent in weight space is illustrated. The main drawback of the learning procedure is the way it scales as the size of the task and the network increases. Some preliminary results on scaling are reported and it is shown how the magnitude of the optimal weight changes depends on the fan-in of the units. Additional results show how the amount of interaction between the weights affects the learning speed. The paper is concluded with a discussion of the difficulties that are likely to be encountered in applying back-propagation to more realistic problems in speech recognition, and some promising approaches to overcoming these difficulties.
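The procedure the abstract describes, gradient descent on an error measure in weight space, accelerated with a momentum term, can be sketched in a few lines. This is a minimal illustrative example, not the paper's filter-learning network: the task (XOR), layer sizes, learning rate, and momentum coefficient are all assumptions chosen for brevity.

```python
import numpy as np

# Minimal sketch of back-propagation with a momentum ("acceleration")
# term: each weight accumulates a velocity that smooths descent across
# the error surface in weight space. Task and hyperparameters are
# illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
T = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(0, 0.5, (2, 4)); b1 = np.zeros(4)  # hidden layer
W2 = rng.normal(0, 0.5, (4, 1)); b2 = np.zeros(1)  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss():
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    return float(np.mean((y - T) ** 2))

lr, alpha = 0.5, 0.9  # learning rate and momentum coefficient (assumed)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

initial = loss()
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: propagate error derivatives layer by layer.
    dy = (y - T) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    # Momentum update: velocity decays by alpha, then takes a gradient step.
    vW2 = alpha * vW2 - lr * (h.T @ dy); W2 += vW2
    vb2 = alpha * vb2 - lr * dy.sum(0);  b2 += vb2
    vW1 = alpha * vW1 - lr * (X.T @ dh); W1 += vW1
    vb1 = alpha * vb1 - lr * dh.sum(0);  b1 += vb1

final = loss()
```

After training, `final` should be well below `initial`, showing descent on the error surface; the momentum term typically reaches a given error in fewer sweeps than plain gradient descent with the same learning rate.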



This research was supported by contract N00014-86-K-00167 from the Office of Naval Research and an R. K. Mellon Fellowship to David Plaut.
