
Neurocomputing

Volume 48, Issues 1–4, October 2002, Pages 17–37

Error-backpropagation in temporally encoded networks of spiking neurons

https://doi.org/10.1016/S0925-2312(01)00658-0

Abstract

For a network of spiking neurons that encodes information in the timing of individual spikes, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation. With this algorithm, we demonstrate how networks of spiking neurons with biologically reasonable action potentials can perform complex non-linear classification in fast temporal coding just as well as rate-coded networks can. We perform experiments for the classical XOR problem, posed in a temporal setting, as well as for a number of other benchmark datasets. A comparison of the (implicit) number of spiking neurons required to encode the interpolated XOR problem shows that temporal coding is a viable code for fast neural information processing and, as such, requires fewer neurons than instantaneous rate-coding. Furthermore, we find that reliable temporal computation in the spiking networks is only accomplished when using spike response functions with a time constant longer than the coding interval, as predicted by theoretical considerations.

Introduction

Due to its success in artificial neural networks, the sigmoidal neuron is reputed to be a successful model of biological neuronal behavior. By modeling the rate at which a single biological neuron discharges action potentials (spikes) as a monotonically increasing function of input match, many useful applications of artificial neural networks have been built [22], [7], [37], [34], and substantial theoretical insights into the behavior of connectionist structures have been obtained [41], [27].

However, the spiking nature of biological neurons has recently led to explorations of the computational power associated with temporal information coding in single spikes [32], [21], [13], [26], [20], [17], [49]. In [29] it was proven that networks of spiking neurons can simulate arbitrary feedforward sigmoidal neural nets and can thus approximate any continuous function; it has further been shown that neurons which convey information by individual spike times are computationally more powerful than neurons with sigmoidal activation functions [30].

As spikes can be described by “event” coordinates (place, time) and the set of active (spiking) neurons is typically sparse, artificial spiking neural networks allow for efficient implementations of large neural networks [48], [33]. Single-spike-time computing has also been suggested as a new paradigm for VLSI neural network implementations [28], where it would offer a significant speed-up.

Network architectures based on spiking neurons that encode information in the individual spike times have yielded, amongst others, a self-organizing map akin to Kohonen's SOM [39] and networks for unsupervised clustering [35], [8]. The principle of coding input intensity by relative firing time has also been successfully applied to a network for character recognition [10]. However, for applications of temporally encoded spiking neural networks, no practical supervised learning algorithm has been developed so far; even in [10], the authors resort to traditional sigmoidal backpropagation to learn to discriminate the histograms of different spike-time responses.

To enable useful supervised learning with the temporal coding paradigm, we develop a learning algorithm for single spikes that keeps the advantages of spiking neurons while allowing for learning at least as powerful as in sigmoidal neural networks. We derive an error-backpropagation-based supervised learning algorithm for networks of spiking neurons that convey information in the timing of a single spike. The method is analogous to the derivation by Rumelhart et al. [40]; to overcome the discontinuous nature of spiking neurons, we approximate the thresholding function. We show that the algorithm is capable of learning complex non-linear tasks in spiking neural networks with accuracy similar to that of traditional sigmoidal neural networks. This is demonstrated experimentally for the classical XOR classification task, as well as for a number of real-world datasets.

We believe that our results are also of interest to the broader connectionist community, as the possibility of coding information in spike times has been receiving considerable attention. In particular, we demonstrate empirically that networks of biologically reasonable spiking neurons can perform complex non-linear classification in a fast temporal encoding just as well as rate-coded networks. Although this paper primarily describes a learning rule applicable to a class of artificial neural networks, spiking neurons are significantly closer to biological neurons than sigmoidal ones. For computing with a rate-code on a very short time scale, it is generally believed that in biological neural systems the responses of a large number of spiking neurons are pooled to obtain an instantaneous measure of average firing rate. Although expensive in neurons, such a pooled response has the advantage of robustness because of the large number of participating neurons. It is well known that temporal coding with single spikes requires significantly fewer spiking neurons than such an instantaneous rate-code, albeit at the cost of robustness. In this paper, we present, to the best of our knowledge, the first spiking neural network that is trainable in a supervised manner, and as such we demonstrate the viability and efficiency of a functional spiking neural network as a function approximator.

We also present results that support the prediction that the length of the rising segment of the post-synaptic potential needs to be longer than the relevant temporal structure in order to allow for reliable temporal computation [28]. For spiking neurons, the post-synaptic potential describes the dynamics of a spike impinging onto a neuron, and is typically modeled as the difference of two exponentially decaying functions [19]. The effective rise and decay time of such a function is modeled after the membrane potential time constants of biological neurons. As noted, from a computational point of view, our findings support the theoretical predictions in [28]. From a biological perspective, these findings counter the common opinion among neuroscientists that fine temporal processing in spiking neural networks is prohibited by the relatively long time constants of cortical neurons (as noted for example in [15]).
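For concreteness, a common parameterization of such a difference-of-exponentials post-synaptic potential (our notation, not quoted from the paper; τ_s and τ_m denote the synaptic rise and membrane decay time constants) is

    \varepsilon(s) =
      \begin{cases}
        e^{-s/\tau_m} - e^{-s/\tau_s}, & s > 0, \\
        0,                             & s \le 0,
      \end{cases}
      \qquad \tau_m > \tau_s,

which peaks at s^* = \frac{\tau_m \tau_s}{\tau_m - \tau_s} \ln(\tau_m / \tau_s). The prediction discussed above then says that this rising segment, whose length is governed by both time constants, must outlast the relevant temporal structure of the input.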

The rest of this paper is organized as follows: in Section 2 the spiking neural network is formally defined. In Section 3, we derive the error-backpropagation algorithm. In Section 4 we test our algorithm on the classical XOR example, and we also study the learning behavior of the algorithm. By encoding real-valued input dimensions into a temporal code by means of receptive fields, we show results for a number of other benchmark problems in Section 5. The results of the experiments are discussed in Section 6. In a separate subsection, we consider the relevance of our findings for biological systems.

Section snippets

A network of spiking neurons

The network architecture consists of a feedforward network of spiking neurons with multiple delayed synaptic terminals (Fig. 1, as described by Natschläger and Ruf [35]). The neurons in the network generate action potentials, or spikes, when the internal neuron state variable, called “membrane potential”, crosses a threshold ϑ. The relationship between input spikes and the internal state variable is described by the spike response model (SRM), as introduced by Gerstner [18]. Depending on the …
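As a concrete illustration, the following sketch computes an SRM membrane potential for a neuron with multiple delayed synaptic terminals per connection. The α-shaped spike response function and all names are our assumptions for illustration, not code from the paper.

    import math

    def epsilon(s, tau=7.0):
        # Alpha-shaped spike response function: zero before the spike
        # arrives, then a PSP that rises and decays on time scale tau.
        if s <= 0:
            return 0.0
        return (s / tau) * math.exp(1.0 - s / tau)

    def membrane_potential(t, spike_times, weights, delays, tau=7.0):
        # x_j(t): weighted sum of delayed PSPs over all presynaptic
        # neurons i (firing at t_i) and all synaptic terminals k
        # (with delay d_k and weight weights[i][k]).
        x = 0.0
        for i, t_i in enumerate(spike_times):
            for k, d_k in enumerate(delays):
                x += weights[i][k] * epsilon(t - t_i - d_k, tau)
        return x

    # Neuron j fires its (single) spike at the first time t for which
    # membrane_potential(t, ...) crosses the threshold from below.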

Error-backpropagation

We derive error-backpropagation analogous to the derivation by Rumelhart et al. [40]. Equations are derived for a fully connected feedforward network with layers labeled H (input), I (hidden), and J (output); the resulting algorithm applies equally well to networks with more hidden layers.

The target of the algorithm is to learn a set of target firing times, denoted {t_j^d}, at the output neurons j ∈ J for a given set of input patterns {P[t_1…t_h]}, where P[t_1…t_h] defines a single input pattern of spike times for the input neurons h ∈ H.
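In outline (our notation, summarizing the derivation rather than quoting it): with t_j^a the actual and t_j^d the desired firing time of output neuron j, the rule performs gradient descent with learning rate η on the least-squares error over the output spike times, and circumvents the threshold discontinuity by linearizing the membrane potential x_j(t) around the firing time:

    E = \frac{1}{2} \sum_{j \in J} \bigl( t_j^{a} - t_j^{d} \bigr)^{2},
    \qquad
    \Delta w_{ij}^{k} = -\eta \, \frac{\partial E}{\partial w_{ij}^{k}},
    \qquad
    \frac{\partial t_j}{\partial x_j(t_j)} \approx
      -\left( \left. \frac{\partial x_j(t)}{\partial t} \right|_{t = t_j} \right)^{-1}.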

The XOR-problem

In this section, we will apply the SpikeProp algorithm to the XOR problem. The XOR function is a classical example of a non-linear problem that requires hidden units to transform the input into the desired output.

To encode the XOR function in spike-time patterns, we associate a binary 0 with a “late” firing time and a binary 1 with an “early” firing time. Using the specific values 0 (early) and 6 (late) for the input spike times, and 10 (early) and 16 (late) for the output spike times, we use the following temporally encoded XOR:

Input pattern    Output pattern
0 0              16
0 6              10
6 0              10
6 6              16
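Written out as data, the encoded task is simply four patterns of input spike times with one target output spike time each (a sketch; the variable names are ours):

    # Temporal XOR: binary 1 -> "early" input spike (t = 0),
    # binary 0 -> "late" input spike (t = 6). The class is read off the
    # output neuron's firing time: 10 ("early") vs. 16 ("late").
    xor_patterns = [
        # ((t_input1, t_input2), t_output_target)
        ((0.0, 0.0), 16.0),
        ((0.0, 6.0), 10.0),
        ((6.0, 0.0), 10.0),
        ((6.0, 6.0), 16.0),
    ]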

Other benchmark problems

In this section, we perform experiments with the SpikeProp algorithm on a number of standard benchmark problems: the Iris dataset, the Wisconsin breast-cancer dataset and the Statlog Landsat dataset.

First, however, we introduce a method for encoding input variables into temporal spike-time patterns by population coding. We are not aware of any previous encoding method for transforming real-world data into spike-time patterns and therefore describe the method in detail; a sketch of the idea follows below.
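The idea can be sketched as follows (our parameterization for illustration, not the paper's exact one): each real-valued input is fed to a small population of neurons with overlapping Gaussian receptive fields, and each neuron converts its activation into a firing time, firing early when strongly activated, late when weakly activated, and not at all below some threshold.

    import math

    def encode_value(value, n_fields=8, v_min=0.0, v_max=1.0,
                     t_max=10.0, fire_threshold=0.1):
        # Population-coding sketch: n_fields neurons with evenly spaced
        # Gaussian receptive fields cover [v_min, v_max]. Strong
        # activation -> early spike; weak activation -> late spike;
        # activation below fire_threshold -> no spike (None).
        width = (v_max - v_min) / (n_fields - 1)   # assumed spacing = width
        spike_times = []
        for i in range(n_fields):
            center = v_min + i * width
            activation = math.exp(-0.5 * ((value - center) / width) ** 2)
            if activation < fire_threshold:
                spike_times.append(None)           # neuron stays silent
            else:
                spike_times.append(t_max * (1.0 - activation))
        return spike_times

    # Example: encode_value(0.3) yields early spikes for the fields
    # centered near 0.3 and late or no spikes for the distant ones.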

Discussion

We discuss two kinds of implications: the computational implications of the algorithm we presented, and some considerations regarding biological relevance.

Conclusion

In this paper, we derived a learning rule for feedforward spiking neural networks by back-propagating the temporal error at the output. By linearizing the relationship between the post-synaptic input and the resultant spiking time, we were able to circumvent the discontinuity associated with thresholding. The result is a learning rule that works well for smaller learning rates and for time constants of the post-synaptic potential larger than the maximal temporal coding range. This latter result …


References (52)

  • Guo-qiang Bi et al., Distributed synaptic modification in neural networks induced by patterned stimulation, Nature (1999)
  • W. Bialek et al., Reading a neural code, Science (1991)
  • Ch.M. Bishop, Neural Networks for Pattern Recognition (1995)
  • S.M. Bohte, J.N. Kok, H. La Poutré, Unsupervised classification in a layered network of spiking neurons, Proceedings of …
  • M. Brecht, W. Singer, A.K. Engel, Role of temporal codes for sensorimotor integration in the superior colliculus, …
  • D.V. Buonomano et al., A neural network model of temporal code generation and position-invariant pattern recognition, Neural Comput. (1999)
  • C.E. Carr et al., A circuit for detection of interaural time differences in the brain stem of the barn owl, J. Neurosci. (1990)
  • G. Deco et al., The coding of information by spiking neurons: an analytical study, Network: Comput. Neural Systems (1998)
  • M. Diesmann et al., Stable propagation of synchronous spiking in cortical neural networks, Nature (1999)
  • C.W. Eurich et al., Multidimensional encoding strategy of spiking neurons, Neural Comput. (2000)
  • W. Gerstner, Time structure of the activity in neural network models, Phys. Rev. E (1995)
  • W. Gerstner, Spiking neurons
  • W. Gerstner et al., A neuronal learning rule for sub-millisecond temporal coding, Nature (1996)
  • W. Gerstner et al., Why spikes? Hebbian learning and retrieval of time-resolved excitation patterns, Biol. Cybern. (1993)
  • S. Haykin, Neural Networks: A Comprehensive Foundation (1994)
  • W. Heiligenberg, Neural Nets in Electric Fish (1991)

Sander Bohte is a Ph.D. student in the group of Han La Poutré at the Netherlands Centre for Mathematics and Computer Science (CWI). He obtained his M.Sc. in physics from the University of Amsterdam. His research interests include spiking neural networks, dynamic binding in connectionist networks, and emerging behavior in adaptive distributed systems.

Han La Poutré is professor at the Eindhoven University of Technology in the School of Technology Management, and head of the group Evolutionary Systems and Applied Algorithmics at CWI. He received his Ph.D. degree in Computer Science from Utrecht University. His research interests include neural networks, discrete algorithms, evolutionary systems, and agent-based computational economics.

Joost Kok is professor in Fundamental Computer Science at the Leiden Institute of Advanced Computer Science of Leiden University in the Netherlands. His research interests are the coordination of software components, data mining, and optimization.
