Neurocomputing

Volume 70, Issues 13–15, August 2007, Pages 2552-2560

A self-organizing map of sigma–pi units

https://doi.org/10.1016/j.neucom.2006.05.014

Abstract

In a frame of reference transformation, an input variable in one coordinate system is transformed into an output variable in a different coordinate system, depending on another input variable. If the variables are represented as neural population codes, then a sigma–pi network is a natural way of coding this transformation. By multiplying two inputs it detects coactivations of input units, and by summing over the multiplied inputs, one output unit can respond invariantly to different combinations of coactivated input units. Here, we present a sigma–pi network and a learning algorithm by which the output representation self-organizes to form a topographic map. This network solves the frame of reference transformation problem by unsupervised learning.

Section snippets

Model architecture

The network architecture is schematically displayed in Fig. 2. With the number of units in the corresponding layers being Nx, Ny and Nz, the total number of possible sigma–pi connections {wijk} is Nx·Ny·Nz. This is on the order of, but still less than, that of the basis function networks [6], [12], [16]. A unit i on the top layer is activated by the input vectors x and y via the relation $a_i = \sum_{j,k} w_{ijk}\, x_j y_k$. Hence, a sigma–pi weight wijk is effective if unit j of the input vector x is
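To make this activation rule concrete, the following minimal NumPy sketch computes the output activities from two population-coded inputs. The Gaussian population code, the layer sizes and the random initial weights are illustrative assumptions rather than the paper's parameters, and all function names are hypothetical.

    import numpy as np

    def population_code(mu, n_units, sigma=1.0):
        """Gaussian population code over units with preferred values
        0..n_units-1, peaked at mu (an assumed encoding, for illustration)."""
        prefs = np.arange(n_units)
        return np.exp(-(prefs - mu) ** 2 / (2.0 * sigma ** 2))

    def sigma_pi_activation(w, x, y):
        """a_i = sum_{j,k} w[i, j, k] * x[j] * y[k]: the product x[j] * y[k]
        detects coactivation of input units j and k, and the weighted sum
        pools these products into output unit i."""
        return np.einsum('ijk,j,k->i', w, x, y)

    # Illustrative layer sizes and small random initial weights
    Nx, Ny, Nz = 20, 20, 20
    rng = np.random.default_rng(0)
    w = 0.01 * rng.random((Nz, Nx, Ny))      # Nz x Nx x Ny sigma-pi weights w_ijk

    x = population_code(mu=5.0, n_units=Nx)  # input population code with peak at 5
    y = population_code(mu=8.0, n_units=Ny)  # input population code with peak at 8
    a = sigma_pi_activation(w, x, y)         # activity of the Nz output units

Storing w as an array of shape (Nz, Nx, Ny) makes the count of Nx·Ny·Nz possible sigma–pi connections explicit.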

General idea

For simplicity of notation we will term the output quantity the “sum location”, envisaging the relation μx+μy=μz as a paramount example. For a given sum location μz, there are many possible pairs of inputs (μx,μy) which lead to the same sum. Therefore, learning is about generating responses that are invariant to variations of input pairs which belong to the same sum location.

In order to generate these invariances, we will supply the learning algorithm with sets of input pairs that shall lead to
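Continuing the sketch above, such sets of input pairs can be generated as in the following illustrative sampler; it is an assumption for demonstration purposes and does not reproduce the paper's training procedure.

    def sample_pairs_for_sum(mu_z, n_pairs, nx, ny, sigma=1.0, rng=None):
        """Sample population-coded input pairs (x, y) whose peak locations obey
        mu_x + mu_y = mu_z, i.e. pairs that all belong to the same sum location
        (illustrative sampler, not the paper's training procedure)."""
        rng = np.random.default_rng() if rng is None else rng
        pairs = []
        for _ in range(n_pairs):
            # choose mu_x such that both mu_x and mu_y = mu_z - mu_x stay in range
            lo, hi = max(0.0, mu_z - (ny - 1)), min(nx - 1, mu_z)
            mu_x = rng.uniform(lo, hi)
            mu_y = mu_z - mu_x
            pairs.append((population_code(mu_x, nx, sigma),
                          population_code(mu_y, ny, sigma)))
        return pairs

    # ten input pairs that all share the sum location mu_z = 12
    training_set = sample_pairs_for_sum(mu_z=12.0, n_pairs=10, nx=Nx, ny=Ny)

During learning, every pair in such a set should come to drive the same output unit, which is the invariance described above.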

One-dimensional maps

Simple case: Fig. 6 shows the resulting connections of trained networks. In Fig. 6(a) the weights of each unit fall onto a diagonal line in input space along which the sum μx+μy is constant. This constant decreases linearly from left to right, indicating an “inverted” polarity of the map. Different random initializations of the weights can lead to the opposite polarity. Test transformations of this network are displayed in Fig. 7(b) in response to the input shown in Fig. 7(a). The map units
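The diagonal weight structure described here can be checked with the sketch above: hand-built weights in which each output unit pools the input combinations along one diagonal μx+μy = const produce the same output peak for every input pair with that sum. The construction below is illustrative and fixes a non-inverted polarity; it is not the learned weights themselves.

    def diagonal_weights(nx, ny, nz, sigma=2.0):
        """Weights of the kind the trained map develops (cf. Fig. 6(a)):
        output unit i pools input combinations (j, k) lying near one diagonal
        j + k = const (illustrative construction with fixed polarity)."""
        w = np.zeros((nz, nx, ny))
        j = np.arange(nx)[:, None]
        k = np.arange(ny)[None, :]
        for i in range(nz):
            target = i * (nx + ny - 2) / (nz - 1)  # map unit i <-> one sum location
            w[i] = np.exp(-(j + k - target) ** 2 / (2.0 * sigma ** 2))
        return w

    w_diag = diagonal_weights(Nx, Ny, Nz)
    for mu_x, mu_y in [(3.0, 9.0), (6.0, 6.0), (9.0, 3.0)]:  # same sum location, 12
        a = sigma_pi_activation(w_diag, population_code(mu_x, Nx),
                                population_code(mu_y, Ny))
        print(mu_x, mu_y, '-> output peak at unit', int(np.argmax(a)))

All three pairs yield the same output peak, which is the invariance a trained map exhibits.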

Discussion

Based on our recent approach to a neural frame of reference transformation trained by supervised learning [17], we intend to use the model presented in this paper in the context of a neurally controlled robot docking maneuver. The supervised system has been tested on a robot simulator, and Fig. 10 illustrates the geometry for our PeopleBot robot.

The overall neural system which controls a robot to pick up an object will consist of three parts: (i) a visual system provides the horizontal

Acknowledgments

This research is part of the MirrorBot project supported by the EU FET-IST programme, Grant IST-2001-35282, coordinated by Prof. Wermter.



Cited by (39)

  • A novel nature inspired firefly algorithm with higher order neural network: Performance analysis

    2016, Engineering Science and Technology, an International Journal
    Citation Excerpt:

    It was able to find an appropriate input–output mapping of various chaotic financial time series data with a good performance in learning speed and generalization capability. A sigma-pi network trained with an online learning algorithm for solving the frame of reference transformation problem has been presented by Cornelius Weber and Stefan Wermter [45]. An online gradient algorithm for Pi-Sigma neural networks with stochastic inputs, with improved computational efficiency, has been proposed by X. Kang et al. [46].

  • A novel Chemical Reaction Optimization based Higher order Neural Network (CRO-HONN) for nonlinear classification

    2015, Ain Shams Engineering Journal
    Citation Excerpt:

    Li [48] has suggested a memory-based Sigma–Pi–Sigma neural network for excellent learning convergence along with reducing the memory size and overcoming the possible extensive memory requirement problem. Weber and Wermter [49] have presented a sigma-pi network trained with an online learning algorithm for solving the frame of reference transformation problem. For financial time series prediction, a novel application of a Ridge polynomial network, formed by adding different degrees of Pi–Sigma neural networks, has been suggested by Ghazali et al. [50], which is able to find an appropriate input–output mapping of various chaotic financial time series data with a good performance in learning speed and generalization capability.

  • Convergence of batch gradient learning algorithm with smoothing L1/2 regularization for Sigma-Pi-Sigma neural networks

    2015, Neurocomputing
    Citation Excerpt:

    Sigma–Pi–Sigma neural networks (SPSNNs) are considered efficient high-order neural networks which can learn to implement the static mappings that multilayer neural networks and radial basis function networks usually do [1], since the output of the SPSNNs has the sum of product-of-sum form. A self-organizing map of Sigma–Pi units was provided in [2]. The applicability of networks built on Sigma–Pi units with Elman topology was explored in [3].


Cornelius Weber has been a Junior Fellow at the Frankfurt Institute for Advanced Studies in Germany since March 2006. He graduated in physics in Bielefeld, Germany in 1995 and received his PhD in computer science in Berlin in 2000. He then worked in the group of Alexandre Pouget in Brain and Cognitive Sciences, University of Rochester, USA. From 2002 to 2005 he worked in Hybrid Intelligent Systems at the University of Sunderland, UK, throughout the EU-funded MirrorBot project. His research interests are in computational neuroscience, focusing on visual and motor systems, and robotic applications. In December 2003 he won the Machine Intelligence Prize of the British Computer Society in Cambridge, demonstrating the “visually guided grasping robot MIRA”. This publication is motivated by the goal of extending the robot's grasping range in such a scenario.

Stefan Wermter is professor in Intelligent Systems at the University of Sunderland, UK and is the Director of the Centre for Hybrid Intelligent Systems. His research interests are in intelligent systems, neural networks, cognitive neuroscience, hybrid systems, language processing and learning robots. He has a Diploma from the University of Dortmund, an MSc from the University of Massachusetts and a PhD and Higher Doctorate (Habilitation) from the University of Hamburg, all in computer science. He was a Research Scientist at ICSI, Berkeley in 1997 before accepting the Chair in Intelligent Systems at the University of Sunderland in 1998.

Professor Wermter has written or edited five books and published about 150 articles in this research area. His books include “Hybrid Connectionist Natural Language Processing”, “Connectionist, Statistical, and Symbolic Approaches to Learning for Natural Language Processing”, “Hybrid Neural Systems”, “Emergent Neural Computational Architectures based on Neuroscience” and “Biomimetic Neural Learning for Intelligent Robots”.

He is an Associate Editor of the journals “Connection Science”, “International Journal for Hybrid Intelligent Systems” and “Knowledge and Information Systems”. He is on the editorial board of the journals “Neural Networks”, “Cognitive Systems Research”, “Neural Computing Surveys”, “Neural Information Processing” and “Journal of Computational Intelligence”. Furthermore, he leads the EU project MirrorBot on biomimetic multimodal learning in a mirror neuron-based robot and coordinates the EmerNet network on “emerging computational neural architectures based on neuroscience”.
