Original contribution
A minimum error neural network (MNN)

https://doi.org/10.1016/0893-6080(93)90007-J

Abstract

A minimum error neural network (MNN) model is presented together with an appropriate network architecture. The associated one-pass learning rule requires estimation of the input densities, which is accomplished with local Gaussian functions. A major distinction between this network and other Gaussian-based estimators lies in the selection of covariance matrices: in MNN, each local function has its own covariance matrix, and the Gram-Schmidt orthogonalization process is used to obtain these matrices. In comparison with the well-known probabilistic neural network (PNN), the proposed network shows improved performance.
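The abstract describes estimating input densities with local Gaussian functions, each carrying its own covariance matrix. A minimal sketch of such a kernel density estimate is given below; the function name, the uniform kernel weighting, and the per-kernel covariance inputs are illustrative assumptions, and the paper's Gram-Schmidt-based covariance construction is not reproduced here.

```python
import numpy as np

def gaussian_kernel_density(x, centers, covariances):
    """Estimate p(x) as an equally weighted sum of Gaussian kernels,
    each centered on a training point and each with its OWN covariance
    matrix (the per-kernel covariances distinguish this from PNN-style
    estimators, which share a single smoothing parameter)."""
    d = centers.shape[1]          # input dimensionality
    total = 0.0
    for mu, cov in zip(centers, covariances):
        diff = x - mu
        inv = np.linalg.inv(cov)  # assumes each covariance is full rank
        norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
        total += norm * np.exp(-0.5 * diff @ inv @ diff)
    return total / len(centers)   # average over all local kernels
```

As in PNN, such class-conditional density estimates can drive a Bayes-style classifier: evaluate the estimate separately for each class's training points and assign the input to the class with the largest estimated density.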

References (13)

  • M.T. Musavi et al.

    On the training of radial basis function classifiers

    Neural Networks

    (1992)
  • D.F. Specht

    Probabilistic neural networks

    Neural Networks

    (1990)
  • V.A. Epanechnikov

    Non-parametric estimation of a multivariate probability density

    Theory of Probability and Its Applications

    (1969)
  • P.S. Maloney et al.

    The use of probabilistic neural networks to improve solution times for hull-to-emitter correlation problems

  • D. Montana

    A weighted probabilistic neural network

    Advances in Neural Information Processing Systems

    (1991)
  • A. Papoulis

    Probability, random variables, and stochastic processes

    (1984)
There are more references available in the full text version of this article.
