About this Book

This book covers all major aspects of cutting-edge research in the field of neuromorphic hardware engineering involving emerging nanoscale devices. Special emphasis is given to leading works in hybrid low-power CMOS-nanodevice design. The book offers readers a bidirectional (top-down and bottom-up) perspective on designing efficient bio-inspired hardware. At the nanodevice level, it focuses on various flavors of emerging resistive memory (RRAM) technology. At the algorithm level, it addresses optimized implementations of supervised and stochastic learning paradigms such as spike-timing-dependent plasticity (STDP), long-term potentiation (LTP), long-term depression (LTD), extreme learning machines (ELM), and early adoptions of restricted Boltzmann machines (RBM), to name a few. The contributions discuss system-level power/energy/parasitic trade-offs and complex real-world applications. The book is suited to both advanced researchers and students interested in the field.

Table of Contents

Frontmatter

Hardware Spiking Artificial Neurons, Their Response Function, and Noises

This chapter overviews hardware-based spiking artificial neurons that encode neuronal information by means of action potentials, viz. spikes, in hardware artificial neural networks (ANNs). Ongoing attempts to realize neuronal behaviours on Si 'to a limited extent' are addressed in comparison with biological neurons. Note that 'to a limited extent' in this context implicitly means 'sufficiently' for realizing the key features of neurons as information processors. This ambiguous definition is perhaps open to the question of which neuronal behaviours those key features encompass. The key features are delimited within the framework of neuromorphic engineering, and thus they are approximately (i) integrate-and-fire behaviour; (ii) the neuronal response function, i.e. the change of spike-firing rate with synaptic current; and (iii) noise in the neuronal response function. Hardware-based spiking artificial neurons aim at these goals, which are ambitious albeit achievable. Surveying the attempts made to date reveals roughly two distinct approaches: a mainstream approach using conventional active circuit elements, e.g. complementary metal-oxide-semiconductor (CMOS) devices, and an emerging one using monostable resistive switching devices, i.e. threshold switches. This chapter covers both approaches with particular emphasis on the latter; for instance, the available types of threshold switches, classified according to their underlying physics, are dealt with in detail.

Doo Seok Jeong
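
As a rough illustration of the integrate-and-fire behaviour and the neuronal response function discussed in this chapter, the following Python sketch simulates a leaky integrate-and-fire neuron and prints its firing rate for a range of constant synaptic currents. All parameter values (membrane time constant, resistance, threshold) are illustrative placeholders, not figures from the chapter.

```python
import numpy as np

def lif_firing_rate(i_syn, tau=20e-3, r_m=1e8, v_th=0.02, v_reset=0.0,
                    dt=1e-4, t_sim=1.0):
    """Simulate a leaky integrate-and-fire neuron for a constant synaptic
    current i_syn (A) and return its mean firing rate (Hz).
    All parameters are illustrative, not tied to any device in the book."""
    v, spikes = v_reset, 0
    for _ in range(int(t_sim / dt)):
        # Leaky integration: dv/dt = (-v + R*I) / tau
        v += dt * (-v + r_m * i_syn) / tau
        if v >= v_th:       # threshold crossed: fire a spike...
            v = v_reset     # ...and reset the membrane potential
            spikes += 1
    return spikes / t_sim

# Response function: the firing rate grows with the synaptic current
for i in np.linspace(0.0, 1e-9, 5):
    print(f"I = {i:.2e} A -> rate = {lif_firing_rate(i):.1f} Hz")
```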

Synaptic Plasticity with Memristive Nanodevices

This chapter provides a comprehensive overview of current research on nanoscale memory devices suitable for implementing aspects of synaptic plasticity. Without being exhaustive about the different forms of plasticity that could be realized, we propose an overall classification and analysis of a few of them, which can serve as a basis for entering the field of neuromorphic computing. More precisely, we present how nanoscale memory devices, implemented in a spike-based context, can be used for synaptic plasticity functions such as spike-rate-dependent plasticity, spike-timing-dependent plasticity, short-term plasticity, and long-term plasticity.

Selina La Barbera, Fabien Alibart
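
For concreteness, here is a minimal sketch of the classic pair-based STDP window, one of the plasticity functions surveyed in this chapter: the weight change depends exponentially on the timing difference between the pre- and postsynaptic spikes. The amplitudes and time constant below are illustrative.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP window: weight change as a function of
    delta_t = t_post - t_pre (s). All constants are illustrative."""
    if delta_t > 0:   # pre fires before post: potentiation (LTP)
        return a_plus * np.exp(-delta_t / tau)
    else:             # post fires before pre: depression (LTD)
        return -a_minus * np.exp(delta_t / tau)

for dt_ in (-40e-3, -10e-3, 10e-3, 40e-3):
    print(f"dt = {dt_ * 1e3:+.0f} ms -> dw = {stdp_dw(dt_):+.5f}")
```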

Neuromemristive Systems: A Circuit Design Perspective

Neuromemristive systems (NMSs) are brain-inspired, adaptive computer architectures based on emerging resistive memory technology (memristors). NMSs adopt a mixed-signal design approach with closely coupled memory and processing, resulting in high area and energy efficiency. Existing work suggests that NMSs could even supplant conventional architectures in niche application domains. However, given the infancy of the field, there are still a number of open design questions, particularly in the area of circuit realization, that must be explored for the research to move forward. This chapter reviews a number of theoretical and practical concepts related to NMS circuit design, with particular focus on neuron, synapse, and plasticity circuits.

Cory Merkel, Dhireesha Kudithipudi

Memristor-Based Platforms: A Comparison Between Continuous-Time and Discrete-Time Cellular Neural Networks

In this chapter, the theory, circuit design methodologies, and possible applications of Cellular Nanoscale Networks (CNNs) exploiting memristor technology are reviewed. Memristor-based CNN platforms (MCNNs) use memristors to realize the analog multiplication circuits that are essential for performing CNN computation with low power and small area.

Young-Su Kim, Sang-Hak Shin, Jacopo Secco, Kyeong-Sik Min, Fernando Corinto
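
The sketch below shows the standard continuous-time CNN cell dynamics, dx/dt = -x + A*y + B*u + z with the piecewise-linear output y = 0.5(|x+1| - |x-1|), integrated with forward Euler. This is the generic Chua-Yang formulation, not the memristor circuits of the chapter, and the edge-detection-style templates are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, z, dt=0.05):
    """One forward-Euler step of the continuous-time CNN state equation."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))   # piecewise-linear output
    dx = (-x + convolve2d(y, A, mode="same")
             + convolve2d(u, B, mode="same") + z)
    return x + dt * dx

# Illustrative edge-detection-style templates (not taken from the chapter)
A = np.array([[0, 0, 0], [0, 2, 0], [0, 0, 0]], float)
B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)
u = np.zeros((8, 8)); u[2:6, 2:6] = 1.0        # input image: a bright square
x = np.zeros_like(u)
for _ in range(200):
    x = cnn_step(x, u, A, B, z=-0.5)
# Settled output: +1 on the square's edges, -1 elsewhere
print(np.round(0.5 * (np.abs(x + 1) - np.abs(x - 1)), 1))
```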

Reinterpretation of Magnetic Tunnel Junctions as Stochastic Memristive Devices

Spin-transfer torque magnetic random access memory (STT-MRAM) is currently under intense academic and industrial development, since it features nonvolatility, high write and read speed, and outstanding endurance. The basic cell of STT-MRAM, the spin-transfer torque magnetic tunnel junction (STT-MTJ), is a resistive memory that can be switched by electrical current. STT-MTJs are nevertheless usually not considered memristors, as they feature only two stable memory states. Their specific stochastic behavior, however, can be particularly interesting for synaptic applications and allows reinterpreting STT-MTJs as "stochastic memristive devices." In this chapter, we introduce basic concepts relating to STT-MTJ behavior and its possible use to implement learning-capable synapses. Using system-level simulations of an example neuroinspired architecture, we highlight the potential of this technology for learning systems. We also compare the different programming regimes of STT-MTJs with regard to learning and evaluate the robustness of a learning system based on STT-MTJs to device variations and imperfections. These results open the way to unexplored applications of magnetic memory in low-power, cognitive-type systems.

Adrien F. Vincent, Nicolas Locatelli, Damien Querlioz
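
As a toy model of the stochastic behavior described here, the sketch below treats each STT-MTJ as a binary synapse that switches with some probability per programming pulse, so that a population of such devices behaves like an analog weight on average. The switching probability is an illustrative placeholder; in practice it depends on pulse amplitude and duration.

```python
import random

random.seed(0)

def potentiate(state, p_switch=0.1):
    """A potentiating pulse switches an MTJ from its low state (0) to its
    high state (1) with probability p_switch; a device already in state 1
    is unaffected. p_switch is an illustrative placeholder."""
    if state == 0 and random.random() < p_switch:
        return 1
    return state

# A population of binary stochastic synapses acts like an analog weight
# on average: the switched fraction grows gradually with pulse count.
synapses = [0] * 1000
for _ in range(20):                    # 20 potentiating pulses
    synapses = [potentiate(s) for s in synapses]
print(sum(synapses) / len(synapses))   # ~1 - 0.9**20, i.e. about 0.88
```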

Multiple Binary OxRAMs as Synapses for Convolutional Neural Networks

Oxide-based resistive memory (OxRAM) devices find applications in memory, logic, and neuromorphic computing systems. Among the different dielectrics proposed for OxRAM stacks, hafnium oxide (HfO$_2$) has attracted growing interest because of its compatibility with typical BEOL advanced CMOS processing and its promising performance in terms of endurance (higher than Flash) and switching speed (a few tens of ns). This chapter describes an artificial synapse composed of multiple binary HfO$_2$-based OxRAM cells connected in parallel, thereby providing analog synaptic behavior. The vertical RRAM (VRRAM) technology is presented as a possible way to save area with respect to planar approaches by realizing one VRRAM pillar per synapse. The HfO$_2$-based OxRAM synapse has been proposed for the hardware implementation of power-efficient Convolutional Neural Networks for visual pattern recognition applications. Finally, the synaptic weight resolution and the robustness of the network to device variability have been investigated. A statistical evaluation of device variability is obtained on a 16 kbit OxRAM memory array integrated into advanced 28 nm CMOS technology.

E. Vianello, D. Garbin, O. Bichler, G. Piccolboni, G. Molas, B. De Salvo, L. Perniola
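
A minimal sketch of the multi-cell idea: with n binary OxRAM cells in parallel, the synaptic weight is the number of cells in the low-resistance state, giving n+1 analog levels from binary devices. The SET/RESET switching probabilities below are illustrative placeholders, not measured values from the chapter.

```python
import random

random.seed(0)

class MultiCellSynapse:
    """Synapse built from n binary OxRAM cells in parallel: the weight is
    the number of cells in the low-resistance state (LRS), so n+1 analog
    levels emerge from binary devices. Probabilities are illustrative."""
    def __init__(self, n_cells=4):
        self.cells = [0] * n_cells   # 0 = HRS, 1 = LRS

    def weight(self):
        return sum(self.cells)       # parallel conductances add up

    def potentiate(self, p_set=0.5):
        # A SET pulse switches each HRS cell with probability p_set
        self.cells = [1 if c == 0 and random.random() < p_set else c
                      for c in self.cells]

    def depress(self, p_reset=0.5):
        # A RESET pulse switches each LRS cell with probability p_reset
        self.cells = [0 if c == 1 and random.random() < p_reset else c
                      for c in self.cells]

s = MultiCellSynapse()
s.potentiate(); print("after LTP pulse:", s.weight())
s.depress();    print("after LTD pulse:", s.weight())
```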

Nonvolatile Memory Crossbar Arrays for Non-von Neumann Computing

In the conventional von Neumann (VN) architecture, data—both operands and the operations to be performed on those operands—makes its way from memory to a dedicated central processor. With the end of Dennard scaling and the resulting slowdown of Moore's law, the IT industry is turning its attention to non-von Neumann (non-VN) architectures, and in particular to computing architectures motivated by the human brain. One family of such non-VN computing architectures is artificial neural networks (ANNs). To be competitive with conventional architectures, such ANNs will need to be massively parallel, with many neurons interconnected by a vast number of synapses, working together efficiently to compute problems of significant interest. Emerging nonvolatile memories, such as phase-change memory (PCM) or resistive memory (RRAM), could prove very helpful here by providing inherently analog synaptic behavior in densely packed crossbar arrays suitable for on-chip learning. We discuss our recent research investigating the characteristics needed from such nonvolatile memory elements for the implementation of high-performance ANNs. We describe experiments on a 3-layer perceptron network with 164,885 synapses, each implemented using 2 NVM devices. A variant of the backpropagation weight-update rule suitable for NVM+selector crossbar arrays is shown and implemented in a mixed hardware-software experiment using an available, non-crossbar PCM array. Extensive tolerancing results are enabled by precise matching of our NN simulator to the conditions of the hardware experiment. This tolerancing shows clearly that NVM-based neural networks are highly resilient to random effects (NVM variability, yield, and stochasticity) but highly sensitive to gradient effects that act to steer all synaptic weights. Simulations of ANNs with both PCM and non-filamentary bipolar RRAM based on Pr$_{1-x}$Ca$_x$MnO$_3$ (PCMO) are also discussed. PCM exhibits smooth, slightly nonlinear partial-SET (conductance-increase) behavior, but the asymmetry of its abrupt RESET introduces difficulties; in contrast, PCMO offers continuous conductance change in both directions but exhibits significant nonlinearities (the degree of conductance change depends strongly on the absolute conductance). The quantitative impact of these issues on ANN performance (classification accuracy) is discussed.

Severin Sidler, Jun-Woo Jang, Geoffrey W. Burr, Robert M. Shelby, Irem Boybat, Carmelo di Nolfo, Pritish Narayanan, Kumar Virwani, Hyunsang Hwang
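
One common way to realize signed weights with two NVM devices per synapse is w = g_plus - g_minus; the sketch below uses that encoding together with a simplified, clipped backprop-style update. It is a stand-in under that assumption, not the chapter's exact crossbar-compatible rule, and all sizes and conductance bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
g_min, g_max = 0.1, 1.0   # illustrative conductance bounds (a.u.)

# One synapse = a pair of NVM conductances; effective weight w = g+ - g-
g_plus = rng.uniform(g_min, g_max, (n_in, n_out))
g_minus = rng.uniform(g_min, g_max, (n_in, n_out))

def forward(x, g_plus, g_minus):
    """Crossbar read: column currents sum the per-device contributions."""
    return x @ (g_plus - g_minus)

def update(g_plus, g_minus, x, delta, lr=0.01):
    """Map a backprop-style update dw = lr * x * delta onto the pair:
    positive changes increase g_plus, negative ones increase g_minus,
    both clipped to the device's conductance range."""
    dw = lr * np.outer(x, delta)
    g_plus = np.clip(g_plus + np.maximum(dw, 0.0), g_min, g_max)
    g_minus = np.clip(g_minus + np.maximum(-dw, 0.0), g_min, g_max)
    return g_plus, g_minus

x = rng.uniform(0.0, 1.0, n_in)
delta = rng.normal(0.0, 1.0, n_out)   # error signal from backprop
g_plus, g_minus = update(g_plus, g_minus, x, delta)
print(forward(x, g_plus, g_minus))
```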

Novel Biomimetic Si Devices for Neuromorphic Computing Architecture

Neuromorphic computing requires low-power devices and circuits in a cross-point architecture. On-chip learning is a significant challenge that requires the implementation of learning rules like spike-timing-dependent plasticity (STDP)—a method that modifies synaptic strength depending on the time correlation between the presynaptic and postsynaptic neuron spikes according to a specific function. To implement this capability in phase-change memory (PCM) or resistive RAM (RRAM)-based cross-point arrays, two schemes have been proposed in the literature, where the time correlations are captured either by an address-event-representation scheme using a universal bus or by the superposition of long custom waveforms. In biology, by comparison, the pulses are sharp and the time-correlation information is processed at the synapse by the synapse's natural dynamics. These are attractive attributes for minimizing power and complexity/area. Another challenge is an area- and power-efficient implementation of the electronic neuron. Leaky integrate-and-fire (LIF) neurons have been implemented using analog and digital circuits, which are highly power- and area-inefficient. To improve area and power efficiency, we have recently proposed: (i) a Si diode-based synaptic device in which the internal charge-carrier dynamics are used to capture the time correlation based on sharp pulses (100$\times$ sharper than custom waveforms, improving energy per spike), which can operate 10$^{3}$-10$^{6}$ times faster than biology (providing accelerated learning), and (ii) a compact Si neuronal device that has a 60$\times$ area and 5$\times$ power benefit compared to an analog implementation of neurons. These novel devices are based on SiGe CMOS technology and are highly manufacturable. The synaptic devices are based on the natural transients of the impact-ionization-based n$^{+}$pn$^{+}$ diode (I-NPN diode). STDP and Hebbian learning rules have been implemented. The neuron requires a further modification of the I-NPN diode, adding a gating structure and some simple circuits. A leaky integrate-and-fire (LIF) neuron has also been demonstrated. Given their device-level area and power efficiency, the system-level power and area of neural networks will be greatly improved.

U. Ganguly, Bipin Rajendran

Exploiting Variability in Resistive Memory Devices for Cognitive Systems

In the literature, different approaches point to the use of different resistive memory (RRAM) device families, such as PCM [1], OxRAM, CBRAM [2], and STT-MRAM [3], for synaptic emulation in dedicated neuromorphic hardware. Most of these works justify the use of RRAM devices in hybrid learning hardware on the grounds of their inherent advantages, such as ultra-high density, high endurance, high retention, CMOS compatibility, the possibility of 3D integration, and low power consumption [4]. However, with the advent of more complex learning and weight-update algorithms (of the beyond-STDP kind), for example those inspired by machine learning, the peripheral synaptic circuit overhead increases considerably. Thus, the use of RRAM cannot be justified on the merits of device properties alone. A more application-oriented approach, one that exploits the device properties also for peripheral nonsynaptic and learning circuitry beyond the usual synaptic application, is needed to further strengthen the case for RRAM devices in such systems. In this chapter, we discuss two novel designs utilizing the inherent variability in resistive memory devices to successfully implement modified versions of Extreme Learning Machines and Restricted Boltzmann Machines in hardware.

Vivek Parmar, Manan Suri
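
To make the variability-as-a-resource idea concrete, the sketch below trains a toy Extreme Learning Machine: the fixed random input weights, which an RRAM implementation could in principle obtain from device-to-device variability instead of a pseudo-random generator, feed a nonlinear hidden layer, and only the linear readout is solved in one shot. Data, sizes, and distributions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two noisy 2-D clusters with binary labels
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
t = np.hstack([np.zeros(50), np.ones(50)])

# ELM: the input layer is a fixed random projection. In an RRAM system,
# these weights could come "for free" from device variability.
n_hidden = 20
W_in = rng.normal(0, 1, (2, n_hidden))   # stand-in for device variability
b = rng.normal(0, 1, n_hidden)
H = np.tanh(X @ W_in + b)                # hidden-layer activations

# Only the readout is trained, via a one-shot least-squares solve
beta = np.linalg.pinv(H) @ t
pred = (H @ beta > 0.5).astype(float)
print("training accuracy:", (pred == t).mean())
```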

Theoretical Analysis of Spike-Timing-Dependent Plasticity Learning with Memristive Devices

Several recent works, described in other chapters of the present series, have shown that memristive devices can naturally emulate variations of the biological spike-timing-dependent plasticity (STDP) learning rule and can enable the design of learning systems. Such systems can be built with memristive devices of extremely diverse physics and behaviors and are particularly robust to device variations and imperfections. The present work investigates the theoretical roots of their STDP learning. By revisiting works developed in the field of computational neuroscience, it is suggested that STDP learning can approximate the machine-learning algorithm of Expectation-Maximization, with the neural network operation implementing the "Expectation" steps and STDP itself implementing the "Maximization" steps. This process allows a system to perform Bayesian inference over the values of a latent variable present in the input. This theoretical analysis helps interpret how STDP differs across device physics and why it is robust to device mismatch. It can also provide guidelines for designing STDP-based learning systems.

Damien Querlioz, Olivier Bichler, Adrien F. Vincent, Christian Gamrat
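
A loose sketch of the Expectation-Maximization reading of STDP: a soft winner-take-all stage stands in for the posterior over a latent variable (the "Expectation" step), and an STDP-like update applied to the winner's synapses stands in for the "Maximization" step. This is a caricature for intuition, with made-up data and constants, not the formulation analyzed in the chapter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy input: 6-bit patterns generated by one of two hidden causes
protos = np.array([[1, 1, 1, 0, 0, 0],
                   [0, 0, 0, 1, 1, 1]], float)

def sample():
    k = rng.integers(2)
    return (rng.uniform(size=6) < 0.8 * protos[k] + 0.1).astype(float)

W = rng.uniform(0.4, 0.6, (2, 6))   # synapses of two competing neurons
lr = 0.05
for _ in range(2000):
    x = sample()
    # "Expectation": soft winner-take-all as a crude stand-in for the
    # posterior over the latent variable (factor 5 sharpens competition)
    scores = W @ x
    p = np.exp(5.0 * (scores - scores.max())); p /= p.sum()
    k = rng.choice(2, p=p)          # the stochastic winner spikes
    # "Maximization": STDP-like update on the winner only; active inputs
    # are potentiated, inactive ones depressed
    W[k] += lr * (x - W[k])
print(np.round(W, 2))   # rows approach the statistics of the two causes
```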

Erratum to: Novel Biomimetic Si Devices for Neuromorphic Computing Architecture

U. Ganguly, Bipin Rajendran