
Neurocomputing

Volume 31, Issues 1–4, March 2000, Pages 167-183

Observations on morphological associative memories and the kernel method

https://doi.org/10.1016/S0925-2312(99)00176-9

Abstract

The ability of human beings to retrieve information on the basis of associated cues continues to elicit great interest among researchers. Investigations of how the brain is capable of making such associations from partial information have led to a variety of theoretical neural network models that act as associative memories. Several researchers have had significant success in retrieving complete stored patterns from noisy or incomplete input pattern keys by using morphological associative memories. Thus far, morphological associative memories have been employed in two different ways: a direct approach, which is suitable for input patterns containing either dilative or erosive noise, and an indirect one based on kernel vectors, which handles arbitrarily corrupted input patterns. In a recent paper (P. Sussner, in: Proceedings of the International ICSC/IFAC Symposium on Neural Computation, Vienna, September 1998), we suggested how to select these kernel vectors and we deduced exact statements on the amount of noise which is permissible for perfect recall. In this paper, we establish the proofs for all our claims made about the choice of kernel vectors and perfect recall in kernel method applications. Moreover, we provide arguments for the success of both approaches beyond the experimental results presented up to this point.

Introduction

The concept of morphological neural networks grew out of the theory of image algebra developed by Ritter [20]. A sub-algebra of image algebra can be viewed as the mathematical background not only for morphological image processing but also for morphological neural networks [16], [6]. A number of researchers devised morphological neural networks for very specialized applications. Davidson employed morphological neural networks in order to solve template identification and target classification problems [5], [4]. Suarez-Araujo applied morphological neural networks to compute homothetic auditory and visual invariances [21]. Another interesting network consisting of a morphological net and a classical feedforward network used for feature extraction and classification was designed by Won et al. [24], [25].

The properties of morphological neural networks differ drastically from those of traditional neural network models. These differences are due to the fact that traditional neural network operations consist of linear operations followed by an application of nonlinear activation functions, whereas in morphological neural computing the next state of a neuron, or the computation performed by the next network layer, involves the nonlinear operation of adding neural values to the corresponding synaptic strengths, followed by taking the maximum of the results. A fairly comprehensive and rigorous basis for computing with morphological neural networks appeared in [17].
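As a minimal sketch of this difference (our own illustration, not code from [17]; all variable names are ours), compare the two update rules for a single neuron:

```python
import numpy as np

x = np.array([0.2, 0.7, 0.1])   # input values arriving at the neuron
w = np.array([0.5, -0.3, 0.9])  # synaptic strengths

# Classical neuron: linear weighted sum, then a nonlinear activation.
classical = np.tanh(np.dot(w, x))

# Morphological neuron: add each input to its synaptic strength,
# then take the maximum of the results (a nonlinear, lattice-based operation).
morphological = np.max(x + w)

print(classical, morphological)
```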

One of the first goals achieved in the development of morphological neural networks was the establishment of a morphological associative memory network (MAM). In its basic form, this model of an associative memory resembles the well-known correlation memory or linear associative memory [9]. As is the case in correlation recording, the morphological associative memory provides a simple method for adding new associations. Correlation recording requires orthogonality of the key vectors in order to exhibit perfect recall of the fundamental associations. The morphological auto-associative memory does not restrict the domain of the key vectors in any way. Thus, as many associations as desired can be encoded into the memory [19]. In the binary case, the limit is 2^n, where n is the length of the patterns. In comparison, McEliece et al. showed that the (asymptotic) capacity of the Hopfield associative memory is n/(2 log n) if, with high probability, each fundamental memory is to be uniquely recovered, except for a vanishingly small fraction of fundamental memories [13]. Evidently, the information storage capacity (the number of bits which can be stored and recalled associatively) of the morphological auto-associative memory also exceeds the respective number for certain linear matrix associative memories, which was calculated by Palm [15], [23].
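The unlimited-storage claim is easy to check numerically. The sketch below assumes the auto-associative memory W_XX of [19], with entries w_ij = min_ξ (x_i^ξ − x_j^ξ) and recall by the morphological max product; the pattern count and length are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(50, 20)).astype(float)  # 50 binary patterns of length n = 20

# Auto-associative recording: w_ij = min over all stored patterns of (x_i - x_j).
W = np.min(X[:, :, None] - X[:, None, :], axis=0)

# Recall via the morphological max product: output_i = max_j (w_ij + x_j).
recall = lambda x: np.max(W + x[None, :], axis=1)

# Every stored pattern is recalled perfectly, regardless of how many are stored.
assert all(np.array_equal(recall(x), x) for x in X)
```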

The discrete correlation-recorded Hopfield net is one of the most popular methods for auto-association of binary patterns [7], [8], [1], [12]. Unlike the Hopfield network, which is a recurrent neural network, our model provides the final result in one pass through the network without any significant amount of training. We previously used a number of experiments to demonstrate the MAM's effectiveness in dealing with either dilative or erosive changes to the input patterns, including incomplete patterns [18]. Thus, the morphological associative memory model exhibits almost all of the desired general characteristics of an associative memory, with one notable exception: the network's inability to deal with patterns which include erosive as well as dilative noise at the same time.

We suggested the kernel method as a possible solution to this dilemma. The kernel method depends on the choice of a number of suitable vectors called kernel vectors. In a recent paper, we presented a characterization of kernel vectors which greatly simplifies their choice in the binary case [22]. We were furthermore able to answer the following question: What is the amount of corruption of the key vectors which is admissible in order to maintain perfect recall when using the kernel method? This paper provides the necessary proofs for these results. Furthermore, it includes various observations on the direct approach as well as on the kernel method which confirm our previous assumptions about morphological associative memories.

Section snippets

Computational basis for morphological neural networks

Artificial neural network models are specified by the network topology, node characteristics, and training or learning rules. The underlying algebraic system used in these models is the set of real numbers R together with the operations of addition and multiplication and the laws governing these operations. This algebraic system, known as a ring, is commonly denoted by (R,+,·).

The basic computations occurring in the proposed morphological network are based on the algebraic lattice structure (R_±∞, ∨, ∧, +, +′), where R_±∞ denotes the set of extended real numbers, ∨ and ∧ denote the binary operations of maximum and minimum, respectively, and + and +′ denote addition and its dual extension to ±∞.
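For concreteness, the two matrix products built on this lattice structure can be sketched as follows (a minimal NumPy rendering under our reading of the definitions; the function names are ours):

```python
import numpy as np

def max_product(A, B):
    """Morphological max product: c_ij = max_k (a_ik + b_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def min_product(A, B):
    """Morphological min product: c_ij = min_k (a_ik + b_kj)."""
    return np.min(A[:, :, None] + B[None, :, :], axis=1)

# Example: applying a 2x3 weight matrix to a length-3 column vector.
A = np.array([[0., -1., 2.], [1., 0., -2.]])
x = np.array([[3.], [5.], [4.]])
print(max_product(A, x).ravel(), min_product(A, x).ravel())  # [6. 5.] [3. 2.]
```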

Introduction to morphological associative memories

An associative memory is an input–output system that describes a relation R⊆Rm×Rn. If (x,y)∈R, i.e. if the input x produces the output y, then the associative memory is said to store or record the memory association (x,y). Unlike conventional computer associative memories, neural network models serving as associative memories are generally capable of retrieving a complete output pattern y even if the input pattern x is corrupted or incomplete. The purpose of auto-associative memories is the retrieval of a stored pattern x from a corrupted or incomplete version of x itself, i.e. they record memory associations of the form (x,x).
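A small hetero-associative sketch, assuming the recording recipes of [19] (w_ij = min_ξ (y_i^ξ − x_j^ξ) for W_XY and m_ij = max_ξ (y_i^ξ − x_j^ξ) for M_XY, with recall by the max and min products, respectively); the toy patterns are our own:

```python
import numpy as np

def record(X, Y):
    """Record associations (x^xi, y^xi); rows of X and Y are paired patterns."""
    D = Y[:, :, None] - X[:, None, :]            # D[xi, i, j] = y_i^xi - x_j^xi
    return np.min(D, axis=0), np.max(D, axis=0)  # W_XY, M_XY

X = np.array([[1., 0., 0., 1.],
              [0., 1., 1., 0.]])
Y = np.array([[1., 0.],
              [0., 1.]])
W, M = record(X, Y)

y_w = np.max(W + X[0][None, :], axis=1)  # recall y^1 from x^1 via the max product
y_m = np.min(M + X[0][None, :], axis=1)  # recall y^1 from x^1 via the min product
print(y_w, y_m)                          # both print [1. 0.]
```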

Conditions for kernels

The previous section indicates that morphological associative memories outperform “conventional” associative memories such as correlation recording and the Hopfield net in many respects. In fact, the morphological associative memory model exhibits almost all of the desired general characteristics of an associative memory, with one notable exception: the network's inability to deal with patterns which include erosive and dilative noise at the same time.

The memory W_XX is suitable for application to input patterns corrupted by erosive noise, whereas the memory M_XX is suitable for input patterns corrupted by dilative noise.
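A hedged toy experiment illustrating this division of labor (our own construction; perfect recall under noise holds only under conditions of the kind established in the text, and the corrupted patterns below are chosen to satisfy them):

```python
import numpy as np

X = np.array([[1., 1., 0., 0.],   # x^1
              [0., 1., 1., 0.]])  # x^2

D = X[:, :, None] - X[:, None, :]
W, M = np.min(D, axis=0), np.max(D, axis=0)  # W_XX and M_XX

eroded  = np.array([1., 0., 0., 0.])  # x^1 with the 1 at index 1 flipped to 0
dilated = np.array([1., 1., 0., 1.])  # x^1 with the 0 at index 3 flipped to 1

print(np.max(W + eroded[None, :],  axis=1))  # W_XX repairs the erosion -> [1. 1. 0. 0.]
print(np.min(M + dilated[None, :], axis=1))  # M_XX repairs the dilation -> [1. 1. 0. 0.]
```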

Conditions for perfect recall of corrupted patterns

In view of the fact that the results of the previous section induce a practical method for selecting kernels in an association problem, this section poses the following question: Given a collection of kernel vectors whose choice is based on Theorem 5, what is the amount of corruption of the input patterns which is admissible in order to maintain perfect recall? In other words, given a corrupted version x̃^γ of x^γ, under what conditions is W_ZY ∨ (M_ZZ ∧ x̃^γ) = y^γ, where ∨ and ∧ denote the morphological max and min matrix products? Theorem 5 again comes to the rescue when answering this question.
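For the auto-associative case (Y = X), the two-stage kernel recall can be sketched as follows; the patterns and kernel vectors are our own toy choices, picked so that the corruption stays within admissible bounds of the kind discussed here:

```python
import numpy as np

X = np.array([[1., 1., 1., 0., 0., 0.],   # x^1
              [0., 0., 1., 1., 1., 0.]])  # x^2
Z = np.array([[1., 0., 0., 0., 0., 0.],   # kernel z^1 <= x^1
              [0., 0., 0., 0., 1., 0.]])  # kernel z^2 <= x^2

M_ZZ = np.max(Z[:, :, None] - Z[:, None, :], axis=0)
W_ZX = np.min(X[:, :, None] - Z[:, None, :], axis=0)

# x^1 corrupted by mixed noise: the 1 at index 1 eroded, the 0 at index 5 dilated.
x_tilde = np.array([1., 0., 1., 0., 0., 1.])

z = np.min(M_ZZ + x_tilde[None, :], axis=1)  # stage 1: strip the corruption down to kernel z^1
x = np.max(W_ZX + z[None, :], axis=1)        # stage 2: grow the kernel back to the full pattern
print(z)  # [1. 0. 0. 0. 0. 0.]
print(x)  # [1. 1. 1. 0. 0. 0.]
```

Note how the min-product stage tolerates the dilated entry while the max-product stage tolerates the eroded one; this is the intuition behind combining M and W in the kernel method.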

Concluding remarks

Morphological associative memories (MAMs) are a recent development in the theory of artificial neural networks. Interesting results on MAMs such as infinite storage capacity, the ability to recognize partially occluded patterns, and the ease of incorporating additional pattern associations into the memory have already been established. A combination of morphological associative memories M and W, forming an input → M → W → output system, has been used to deal with input patterns which are corrupted by both erosive and dilative noise.


References (25)

  • G.X. Ritter et al., Image algebra: an overview, Comput. Vision Graphics Image Processing (1990)
  • Y. Abu-Mostafa et al., Information capacity of the Hopfield model, IEEE Trans. Inf. Theory (1985)
  • J.S. Albus, A new approach to manipulator control: the cerebellar model articulation controller, J. Dyn. Syst. Meas. Control Trans. ASME (1975)
  • J.A. Austin, ADAM, a distributed associative memory for scene analysis, in: Proceedings of the IEEE First International Conference on Neural Networks (1987)
  • J.L. Davidson, Simulated annealing and morphological neural networks, in: Image Algebra and Morphological Image Processing III, Proc. SPIE (1992)
  • J.L. Davidson et al., Morphology neural networks: an introduction with applications, Circuits Systems Signal Processing (1993)
  • J.L. Davidson, G.X. Ritter, A theory of morphological neural networks, in: Digital Optical Computing II, Proc. SPIE, Vol. 1215 (1990)
  • J.J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA, Vol. 79 (1982)
  • J.J. Hopfield, Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. USA, Vol. 81 (1984)
  • T. Kohonen, Correlation matrix memories, IEEE Trans. Comput. (1972)
  • T. Kohonen et al., Representation of associated data by matrix operators, IEEE Trans. Comput. (1973)
  • B. Kosko, Adaptive bidirectional associative memories, Applied Optics (1987)

Peter Sussner is currently a faculty member of the Institute of Mathematics, Statistics, and Scientific Computation at the State University of Campinas, Brazil. He received the Diploma degree in Mathematics from the University of Erlangen, Germany, in 1990. In August 1991, he went to the University of Florida on a Fulbright Scholarship, where he joined a research team at the Center for Computer Vision and Visualization. He completed his Ph.D. in Mathematics at the University of Florida in August 1996.

His research interests include neural networks, image algebra, image processing and numerical optimization.
