Elsevier

Neural Networks

Volume 45, September 2013, Pages 39-49
2013 Special Issue
Computing with networks of spiking neurons on a biophysically motivated floating-gate based neuromorphic integrated circuit

https://doi.org/10.1016/j.neunet.2013.02.011

Abstract

Results are presented from several spiking network experiments performed on a novel neuromorphic integrated circuit. The networks are discussed in terms of their computational significance, which includes applications such as arbitrary spatiotemporal pattern generation and recognition, winner-take-all competition, stable generation of rhythmic outputs, and volatile memory. Analogies to the behavior of real biological neural systems are also noted. The alternatives for implementing the same computations are discussed and compared from a computational efficiency standpoint, with the conclusion that implementing neural networks on neuromorphic hardware is significantly more power efficient than numerical integration of model equations on traditional digital hardware.

Introduction

It has been said that in order to fully understand something, you must figure out how to build it. While this blanket statement may not be true in all cases, we believe that developing neuromorphic hardware will ultimately lead to a better understanding of biological neural systems and the principles upon which their computation is based. We design neuromorphic systems with the expectation that they will be useful for improving our knowledge of neuroscience and computational science. These sciences in turn inform the design of improved neuromorphic systems. This feedback path is illustrated in Fig. 1, which emphasizes the role of computational experiments performed on neuromorphic hardware. Such experiments not only inform future neuromorphic designs, but also operate under significantly different constraints than numerical simulations do. Analog implementations model biology using the physics inherent to the devices, which is more power efficient than building a complex digital system to represent the same equations. Therefore, in signal processing applications where low power and modest precision are required, analog neuromorphic systems are more desirable than their digital equivalents (Douglas, Mahowald, & Mead, 1995). Experiments of this kind are thus expected to stimulate creative approaches to thinking about neural systems and computation. In this work, we demonstrate results from real-time “simulations” of neural networks with up to 100 neurons. In this context, a “simulation” is an experiment run on our neuromorphic hardware, in which we attempt to reproduce the behavior of biological neural networks using our silicon models of neurobiology. These networks perform computationally relevant functions such as arbitrary spatiotemporal pattern generation and recognition, winner-take-all competition, stable generation of rhythmic outputs, and volatile memory.
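One of the network functions listed above, winner-take-all competition, can be illustrated with a minimal software analogue: leaky integrate-and-fire neurons coupled by mutual inhibition, so that the most strongly driven neuron suppresses the rest. This is a sketch of the computational principle only; all parameter values here are illustrative assumptions, not values from the chip.

```python
import numpy as np

def simulate_wta(inputs, t_steps=2000, dt=1e-4):
    """Leaky integrate-and-fire winner-take-all: each spike inhibits
    all other neurons, so the most strongly driven neuron dominates."""
    n = len(inputs)
    tau, v_th, v_reset = 20e-3, 1.0, 0.0   # membrane time constant, threshold, reset
    w_inh = 0.6                            # strength of cross-inhibition (illustrative)
    v = np.zeros(n)
    spike_counts = np.zeros(n, dtype=int)
    for _ in range(t_steps):
        v += dt / tau * (-v + inputs)      # leaky integration (forward Euler)
        spiked = v >= v_th
        if spiked.any():
            spike_counts[spiked] += 1
            v[spiked] = v_reset
            v[~spiked] -= w_inh * spiked.sum()  # inhibit the losers
            np.maximum(v, v_reset, out=v)       # clamp at the reset potential
    return spike_counts

counts = simulate_wta(np.array([1.2, 1.5, 1.1]))
print(counts.argmax())  # neuron 1, with the largest input, wins
```

Even though every input here is strong enough to drive its neuron past threshold in isolation, the first neuron to fire repeatedly resets its competitors, and only the winner accumulates spikes.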

Previously we reported the design and measurements of a neuron integrated circuit (IC) with 100 neurons, 30,000 synapses, the capability to implement spike-timing-dependent plasticity (STDP), and an address-event representation (AER) interface (Brink et al., 2013). The present study takes this further: rather than simply showing a working IC capable of implementing models, we put this IC to work as a platform for investigating those models. This demonstration serves several purposes. First, it highlights some interesting approaches to network computation that fit nicely with the neuromorphic circuit models on this particular system. Further, it allows a kind of benchmarking comparison among the various neuromorphic hardware systems in use today. It also strengthens the case for the neuromorphic very-large-scale integration (VLSI) approach, as another instance of a neuromorphic design facilitating the study of spiking networks.

Section 2 describes the system on which the simulations were performed; it is intended to be accessible to readers who do not have a circuits background. Section 3 covers the basic building blocks that inform the intuition for understanding the network results. Section 4 describes the networks that were simulated and presents the measured data. Section 6 analyzes the computation performed by these networks and by the neuromorphic platform, with a comparison to alternative approaches.

Section snippets

Neuromorphic platform overview

This section covers the details of the neuromorphic platform that are required to understand the system results. Fig. 2 shows the block-level design of the neuromorphic IC as well as the die photo; the IC consists of 100 model neurons, 30,000 model synapses, and an address-event representation (AER) module for digital communication of spiking inputs and outputs.
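The essence of AER is that spikes from many neurons share one digital bus, with each spike transmitted as the address of the neuron that fired. A minimal software sketch of this encoding (with an explicit timestamp per event, and with all names and values chosen for illustration rather than taken from the chip's interface) might look like:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AEREvent:
    """A single address-event: which neuron spiked, and when.
    In AER, spikes share one digital bus; each spike is sent as the
    address of the neuron that fired, here with an explicit timestamp."""
    address: int       # neuron index, e.g. 0..99 for a 100-neuron chip
    timestamp_us: int  # event time in microseconds

def merge_streams(*streams):
    """Multiplex several per-neuron spike trains onto one AER stream,
    ordered by time, roughly as an AER arbiter would."""
    events = [e for s in streams for e in s]
    return sorted(events, key=lambda e: e.timestamp_us)

# Two neurons firing; the bus carries their interleaved addresses.
n0 = [AEREvent(0, 100), AEREvent(0, 300)]
n7 = [AEREvent(7, 200)]
bus = merge_streams(n0, n7)
print([e.address for e in bus])  # [0, 7, 0]
```

The same representation works symmetrically for input and output: external events addressed to a neuron stimulate it, and on-chip spikes leave the chip as addresses.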

The model neurons are biophysically inspired channel-based models, which were originally introduced in Farquhar and Hasler

Dot matrix network

In order to demonstrate the basic functionality of all of the components of the signal flow, a dot matrix network was simulated, wherein each neuron can be caused to spike by applying an AER input to it. All 100 neurons in the chip were given a stream of input events designed to result in a desired pattern when viewed in a raster plot format. Fig. 5 shows the network topology and the measured result.
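The input stream for such a test can be generated mechanically from a binary image whose rows are neuron addresses and whose columns are time bins. The following sketch (function name and bin width are illustrative assumptions) shows one way to do it:

```python
import numpy as np

def pattern_to_events(pattern, bin_ms=10):
    """Turn a binary image (rows = neuron addresses, columns = time bins)
    into a time-sorted list of (neuron, time_ms) input events.  Driving
    each neuron at the listed times reproduces the image in a raster plot."""
    events = [(int(row), int(col) * bin_ms)
              for row, col in zip(*np.nonzero(pattern))]
    return sorted(events, key=lambda e: e[1])

# A tiny 3x4 "pattern": a diagonal of spikes.
img = np.eye(3, 4, dtype=bool)
print(pattern_to_events(img))  # [(0, 0), (1, 10), (2, 20)]
```

Scaling the image to 100 rows yields an event stream for the full chip, with any deviation between the intended image and the measured raster exposing faults in the signal path.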

A few aberrations in the result can be seen. For example, a few neurons around the edges of the

Spatiotemporal pattern generation and detection

Spatiotemporal processing in neural networks has been an area of interest in neuromorphic engineering for many years (Liu & Douglas, 2004). The repeatable timings demonstrated in our synfire chain make it useful for creating some networks wherein spike timing is important in the information encoding scheme. For instance, arbitrary spatiotemporal patterns can be detected by a network that has a distinct synfire chain for each distinct input channel. The approach is illustrated for a sequence of
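The delay-line principle behind such a detector can be sketched in software: each input channel's spike is delayed by the complement of its position in the template, so that a matching input makes all spikes coincide at the detector. In the networks described here the synfire chains supply those delays; the sketch below uses explicit numeric delays instead, and all channel names, timings, and the tolerance are illustrative assumptions.

```python
def detect_pattern(events, template, tolerance_ms=2):
    """Delay-line pattern detector: each channel's spike is delayed by
    (max template time - its template time), so an input matching the
    template makes all delayed spikes coincide at the detector."""
    horizon = max(template.values())
    delayed = [t + (horizon - template[ch]) for ch, t in events
               if ch in template]
    if len(delayed) < len(template):
        return False  # at least one template channel never fired
    # A match means all delayed spikes land within the tolerance window.
    return max(delayed) - min(delayed) <= tolerance_ms

template = {"A": 0, "B": 15, "C": 30}       # desired pattern: A, then B, then C
match    = [("A", 5), ("B", 20), ("C", 35)] # same relative timing -> detected
shuffled = [("C", 5), ("B", 20), ("A", 35)] # wrong order -> rejected
print(detect_pattern(match, template), detect_pattern(shuffled, template))
# True False
```

Because only relative timing matters, the detector is invariant to when the pattern starts, which is exactly the property a per-channel synfire chain provides in spiking hardware.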

Power efficiency

If one is interested not only in the most expedient solution to a computational problem, but instead has a specific desire to see how the problem might be solved by a neural structure, then one is forced to somehow replicate the behavior of biological systems. In this case, perhaps the most common approach is to use a digital computer to numerically integrate model equations describing a network and its constituent neurons and synapses. From a computational efficiency standpoint, it is
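The digital alternative referred to here, numerically integrating the model equations, can be made concrete with a toy example: forward-Euler integration of a leaky integrate-and-fire membrane. Every timestep costs explicit arithmetic that a digital simulator must execute, whereas an analog implementation obtains the equivalent dynamics from device physics. The operation count and all parameters below are illustrative, not a measured comparison.

```python
def euler_lif(i_in, t_end=0.1, dt=1e-4, tau=20e-3, v_th=1.0):
    """Forward-Euler integration of a leaky integrate-and-fire membrane,
    dv/dt = (-v + i_in) / tau -- the per-timestep arithmetic a digital
    simulator must pay for, and that analog device physics performs
    implicitly.  Returns the spike times and a rough operation count."""
    v, spikes, ops = 0.0, [], 0
    steps = int(t_end / dt)
    for k in range(steps):
        v += dt / tau * (-v + i_in)   # one Euler update per neuron per step
        ops += 2                      # roughly: one multiply, one add
        if v >= v_th:
            spikes.append(k * dt)     # record spike time, then reset
            v = 0.0
    return spikes, ops

spikes, ops = euler_lif(1.5)
print(len(spikes), ops)  # spike count over 100 ms, and arithmetic ops spent
```

Even this single simplified neuron requires thousands of operations per simulated second; channel-based models and dense synaptic connectivity multiply that cost, which is the root of the power-efficiency argument for the analog approach.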

Conclusion

Having demonstrated modeling of multiple neural structures on this neuromorphic IC, we turn our attention to the topic of what computation the networks are performing, and how efficiently they are doing it. It is difficult to compare the computations done by a network of neurons to those performed by a traditional computing structure such as a microprocessor. The network of neurons is unable to find all of the eigenvalues or the inverse of a matrix, while this task is straightforward for a

References (31)

  • M. Abeles et al.

    Spatiotemporal firing patterns in the frontal cortex of behaving monkeys

    Journal of Neurophysiology

    (1993)
  • M. Abeles et al.

    Modeling compositionality by dynamic binding of synfire chains

    Journal of Computational Neuroscience

    (2004)
  • H. Arnoldi et al.

    Translation-invariant pattern recognition based on synfire chains

    Biological Cybernetics

    (1999)
  • J. Arthur et al.

    Learning in silicon: timing is everything

    Advances in Neural Information Processing Systems

    (2006)
  • A. Basu et al.

    Bifurcations in a silicon neuron

  • S. Brink et al.

    A learning-enabled neuron array IC based upon transistor models of biological phenomena

    IEEE Transactions on Biomedical Circuits and Systems

    (2013)
  • P. Camilleri et al.

    A neuromorphic aVLSI network chip with configurable plastic synapses

  • Capocaccia (2010). Exploring network architectures with the FACETS hardware and PyNN....
  • Choudhary, S., Sloan, S., Fok, S., Neckar, A., Trautmann, E., & Gao, P. et al. (2012). Silicon neurons that compute. In...
  • A. Davison et al.

    PyNN: a common interface for neuronal network simulators

    Frontiers in Neuroinformatics

    (2008)
  • R. Douglas et al.

    Neuromorphic analogue VLSI

    Annual Review of Neuroscience

    (1995)
  • D. Durstewitz et al.

    Neurocomputational models of working memory

    Nature Neuroscience

    (2000)
  • E. Farquhar et al.

    A bio-physically inspired silicon neuron

    IEEE Transactions on Circuits and Systems I: Regular Papers

    (2005)
  • P. Gao et al.

    Dynamical system guided mapping of quantitative neuronal models onto neuromorphic hardware

    IEEE Transactions on Circuits and Systems I: Regular Papers

    (2012)
  • D.H. Goldberg et al.

    Probabilistic synaptic weighting in a reconfigurable network of VLSI integrate-and-fire neurons

    Neural Networks

    (2001)