
Neurocomputing

Volume 21, Issues 1–3, 6 November 1998, Pages 91-100

Applications of the growing self-organizing map¹

Thomas Villmann and Hans-Ulrich Bauer

https://doi.org/10.1016/S0925-2312(98)00037-X

Abstract

The growing self-organizing map (GSOM), an extension of Kohonen’s self-organizing map algorithm, adapts not only the positions of the map weight vectors in the input space, but also the topology of the map output space grid. This additional feature allows for the unsupervised generation of dimension-reducing projections with optimal neighborhood preservation, even if the effective dimensionality of the input data set is not known. In three case studies involving real-world data sets we show that the GSOM reproducibly generates projections with a very good degree of neighborhood preservation. For one of the data sets, an experimentally obtained time series from a nonlinear system, the correct dimensionality d_A ≈ 3 of the underlying attractor is known from other methods; here the GSOM leads to maps with output space grids which are also three-dimensional.

Introduction

The self-organization of topographic maps is a broadly pursued research topic, both in the biological realm (development of maps in sensory and motor areas of the brain [2], [18], [20], [23]) and in signal processing applications. Here, neural maps are utilized as neighborhood preserving vector quantizers which map data from a (possibly high-dimensional) input space V ⊆ R^{d_V} onto neurons in a map output space A. In both fields, Kohonen’s self-organizing map (SOM) algorithm provides the basis for many investigations (for a brief introduction to the SOM, see the accompanying article [15]; for an overview of applications of the SOM, see [16]). The neurons are located at positions r_k, k = 1, …, N, in the map output space A and have associated with them weight vectors m_{r_k} which determine a particular position in the map input space V. The mapping Ψ_{V→A} is realized by a winner-takes-all rule

    Ψ_{V→A}: v ↦ s = argmin_{r_k ∈ A} ‖v − m_{r_k}‖.

To achieve this projection, SOMs adapt the weight vectors m_{r_k} in learning steps which involve the presentation of a data point (or stimulus) v, followed by the adaptation step

    Δm_{r_k} = ε h_{r_k s} (v − m_{r_k}).

The neighborhood function h_{r_k s}, usually chosen to be of Gaussian shape,

    h_{r_k s} = exp(−‖r_k − s‖² / (2σ²)),

enforces that neurons which are close in the map output space keep their weight vectors close by in the map input space.
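These update equations translate directly into a few lines of NumPy. The following is a minimal sketch for illustration, not code from the paper; the function name som_step, the array layout, and the convention of passing ε and σ explicitly are our own assumptions:

```python
import numpy as np

def som_step(weights, grid, v, eps, sigma):
    """One SOM learning step for a single stimulus v.

    weights : (N, d_V) array of weight vectors m_{r_k}
    grid    : (N, d_A) array of output space positions r_k
    """
    # Winner-takes-all rule: s = argmin_{r_k} ||v - m_{r_k}||
    s = np.argmin(np.linalg.norm(weights - v, axis=1))
    # Gaussian neighborhood function h_{r_k s}, evaluated in the output space
    d2 = np.sum((grid - grid[s]) ** 2, axis=1)
    h = np.exp(-d2 / (2.0 * sigma**2))
    # Adaptation step: delta m_{r_k} = eps * h_{r_k s} * (v - m_{r_k})
    weights += eps * h[:, None] * (v - weights)
    return s
```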

In this way, a neighborhood preserving mapping from input to output space is established, provided the map output space matches the topology of the data manifold in the input space. If this is not the case, as in the famous example of a square input space which is mapped onto a line of neurons as an output space, the SOM algorithm cannot possibly achieve a neighborhood preserving projection (the issue of defining or measuring neighborhood preservation is subtle; for a more detailed discussion see [4], [13], [21]). As the effective dimensionality of a data manifold in the input space is most often not known a priori, straightforward application of the SOM with a fixed output space topology does not guarantee optimal neighborhood preservation. The growing self-organizing map algorithm (GSOM, [3]) provides a method to circumvent this problem. In the GSOM, the output space has the shape of a generalized hypercube, parametrized by extensions n_1 × n_2 × n_3 × … along the different dimensions. During a growth procedure, these extensions, as well as the weight vectors m_{r_k}, are subject to adaptation. A result of n_j = 1 for all j > d_A means that the output space grid is d_A-dimensional. In the following sections, we first describe the operation of the GSOM in some detail, and then proceed to three case studies demonstrating the application of the GSOM to the neighborhood preserving projection of real-world data sets.
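To make this parametrization concrete, the sketch below (our illustration, not code from [3]) enumerates the neuron positions of such a generalized hypercube and reads off the effective grid dimensionality from the extensions:

```python
import itertools
import numpy as np

def hypercube_grid(extensions):
    """Neuron positions r_k on a generalized hypercube n_1 x n_2 x ...

    Dimensions with n_j = 1 contribute nothing, so the effective grid
    dimensionality is the number of extensions larger than one.
    """
    positions = np.array(
        list(itertools.product(*(range(n) for n in extensions))), dtype=float
    )
    effective_dim = sum(1 for n in extensions if n > 1)
    return positions, effective_dim

# Extensions 4 x 3 x 1 x 1 describe an effectively two-dimensional grid:
pos, d = hypercube_grid([4, 3, 1, 1])   # pos.shape == (12, 4), d == 2
```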

Section snippets

The growing self-organizing map

Initially, a GSOM starts with just two neurons, i.e. a minimal one-dimensional configuration. After adaptation of the two corresponding weight vectors following the regular SOM procedure, either a third neuron is added to form a line of increased length, or it is decided to open up a further output space dimension, resulting in a 2×2 network. In all further growth steps, the map output space grid is enlarged by increasing the extension along one of the existing dimensions by one, i.e. n_i …
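The choice among the candidate growth directions can be sketched as follows. This is a simplified illustration only: the actual GSOM criterion in [3] is based on reconstruction errors back-projected along the grid directions rather than on retraining and comparing candidates, and train_som is a hypothetical helper that trains a SOM on a given lattice and returns its weight matrix:

```python
import numpy as np

def quantization_error(weights, data):
    # Mean distance of each data point to its winner unit.
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return d.min(axis=1).mean()

def grow_step(extensions, train_som, data):
    """One GSOM growth step: choose how to enlarge the output space grid.

    Candidates: increase one existing extension n_i by one neuron, or
    open up a new output space dimension of size 2.
    """
    candidates = []
    for i in range(len(extensions)):
        cand = list(extensions)
        cand[i] += 1
        candidates.append(cand)
    candidates.append(list(extensions) + [2])  # open a new dimension
    errors = [quantization_error(train_som(c, data), data) for c in candidates]
    return candidates[int(np.argmin(errors))]
```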

Results of simulations

The GSOM procedure has been shown to work well on synthetic data sets [3]. In particular, the influence of noise on the map development of the GSOM was investigated: the data were embedded in a high-dimensional data space, with noise in the additional data dimensions. The GSOM was shown to detect the relevant shape of the data up to a noise level of 5%. For this reason, we here only refer to [3] for a further analysis.
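For concreteness, a hypothetical construction of such a synthetic test set; the dimensions, point count, and noise model are our assumptions, and the exact setup in [3] may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-dimensional manifold embedded in a ten-dimensional data space;
# the eight additional coordinates carry only noise, here at the 5%
# level relative to the unit extension of the manifold.
n_points, noise_level = 1000, 0.05
manifold = rng.uniform(0.0, 1.0, size=(n_points, 2))
noise = noise_level * rng.normal(size=(n_points, 8))
data = np.hstack([manifold, noise])   # shape (1000, 10)
```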

However, more challenging tests for such an …

Discussion

As pointed out in the introduction, the necessity of specifying an output space lattice in advance and the desire to achieve an optimal preservation of neighborhoods in SOMs are somewhat opposing constraints. One simple solution to the dilemma is to test several sensible map lattice structures, and to choose the best among them using some measure of “how well the map matches the data” (various measures have been proposed along these lines [1], [6], [19], [21], [24]). A more elegant solution consists in …
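As a crude stand-in for such measures, one can correlate pairwise distances in the input and output spaces; the published measures [1], [6], [19], [21], [24] are considerably more refined, so the sketch below illustrates the idea only:

```python
import numpy as np

def distance_correlation(weights, grid):
    """Correlation between pairwise input space distances of the weight
    vectors and output space distances of the neuron positions."""
    iu = np.triu_indices(len(weights), k=1)
    dv = np.linalg.norm(weights[:, None, :] - weights[None, :, :], axis=2)[iu]
    da = np.linalg.norm(grid[:, None, :] - grid[None, :, :], axis=2)[iu]
    # A value close to 1 indicates good agreement between the two metrics.
    return np.corrcoef(dv, da)[0, 1]
```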


References (24)

  • J.M. Digman, Personality structure: emergence of the five-factor model, Ann. Rev. Psychol. (1990)
  • B. Fritzke, Growing grid: a self-organizing network with constant neighborhood range and adaptation strength, Neural Processing Lett. (1995)
Cited by (39)

  • A survey on machine learning for recurring concept drifting data streams
    2023, Expert Systems with Applications
    Citation excerpt: “For this reason, it has been adapted in different research works (Liu & Ban, 2015; Prudent & Ennaji, 2005; Si, Lin, & Vuong, 2000) that have introduced dynamic (growing) methods for online learning. An example of these is growing self-organising maps (GSOM) (Fritzke, 1996; Villmann & Bauer, 1998). It grows nodes at the edges of the map when the total distance of an example exceeds a threshold, which allows it to track regions that may present dynamic behaviours when the original SOM would stabilise and lose its capacity to re-shape.”

  • Topological structural analysis based on self-adaptive growing neural network for shape feature extraction
    2022, Neurocomputing
    Citation excerpt: “For this reason, many researchers improved the algorithm on the basis of SOM, hoping to automatically adjust the network structure according to the inherent properties in the input data [16–18]. GSOM (growing self-organizing map) can dynamically and adaptively grow neurons from the boundary of the network in the process of network training [19]. Growing Grid is a kind of self-growing neural network, which grows by adding a row or a column of neurons, so it maintains a square network topology [20].”

  • Online state space generation by a growing self-organizing map and differential learning for reinforcement learning
    2020, Applied Soft Computing
    Citation excerpt: “This allows the network to be kept growing until the accuracy is sufficiently good. Villmann and Bauer presented an SOM in which growing is not restricted to rows or columns, and the grid becomes a hypercube to achieve low recognition errors and preserves neighborhoods between the data space and the map output space [21,22]. On the other hand, a growing hierarchical SOM proposed by Dittenbach et al. has a hierarchical architecture composed of independent growing SOMs [23].”

  • A hand shape instruction recognition and learning system using growing SOM with asymmetric neighborhood function
    2016, Neurocomputing
    Citation excerpt: “To overcome these problems, different improved SOM variants have been proposed. For example, to deal with the exhaustion of units, Growing SOM (GSOM) is proposed in [22–25], and Transient-SOM (T-SOM) is proposed in our previous works [26,27]. To keep stable learning convergence and use less parameters, Parameterless SOM (PL-SOM) is proposed by Berglund & Sitte in [28] and we adopted its idea to GSOM as a Parameterless Growing SOM (PL-G-SOM) [7–9,29].”


Thomas Villmann was born in 1963 in Potsdam, Germany. He studied mathematics and received a Diploma degree in 1990 and a PhD in computer science in 1996, both from the University of Leipzig, Germany. Presently, he is with the Clinic for Psychotherapy and Psychosomatic Medicine of the University of Leipzig. His research interests include the theory of artificial neural networks, in particular neural maps, neural modelling, evolutionary algorithms and hybrid systems. In addition, he applies the developed approaches to several research areas, such as data analysis in psychotherapy research and optimization in VLSI design.

Hans-Ulrich Bauer was born in 1961 in Germany. He studied physics and received a Master’s degree from the University of California at San Diego, a Diploma degree from the Technical University of Munich, and a PhD from the University of Frankfurt in 1990. His interest in neural networks began several years ago, when he investigated the stability of recurrent networks and their application to speech processing during his graduate work. Currently he is at the Max-Planck-Institut für Strömungsphysik in Göttingen, where he applies ideas from nonlinear dynamics to the analysis of neural systems. One of his particular interests involves the self-organization of neural maps. A different line of research is concerned with the modeling and functional interpretation of dynamical neural responses, primarily in the visual system.

¹ This work has been supported by the Deutsche Forschungsgemeinschaft, SFB 185 “Nichtlineare Dynamik”, TP E6.
