Published in: Journal of Computational Electronics 1/2020

Open Access 26.12.2019

Memristive-synapse spiking neural networks based on single-electron transistors

Authors: Keliu Long, Xiaohong Zhang

DOI: https://doi.org/10.1007/s10825-019-01437-w

Abstract

In recent decades, with the rapid development of artificial intelligence technologies and bionic engineering, the spiking neural network (SNN), inspired by biological neural systems, has become one of the most promising research topics, enjoying numerous applications in various fields. Due to its complex structure, the simplification of SNN circuits requires serious consideration, along with their power consumption and space occupation. In this regard, the use of SNN circuits based on single-electron transistors (SETs) and modified memristor synapses is proposed herein. A prominent feature of SETs is Coulomb oscillation, which has characteristics similar to the pulses produced by spiking neurons. Here, a novel window function is used in the memristor model to improve the linearity of the memristor and solve the boundary and terminal lock problems. In addition, we modify the memristor synapse to achieve better weight control. Finally, to test the SNN constructed with SETs and memristor synapses, an associative memory learning process, including memory construction, loss, reconstruction, and change, is implemented in the circuit using the PSPICE simulator.

1 Introduction

Nearly all human activities are controlled by the brain, which is a low-power-consumption system (roughly 10 W [1]) offering high-speed processing of large amounts of data. Moreover, the human brain is a parallelized system with a certain amount of fault-tolerance capability [2]. To reproduce brain behaviors, artificial neural networks (ANNs) have been proposed and used in many ways, e.g., to identify failures in transmission lines and for diagnostic imaging in the biomedical field [3, 4]. In a traditional artificial neuron, information is transmitted in the form of continuous signals. In biological systems, however, neurons do not use continuous signals but rather pulses to process information [5]. Systems based on continuous signals therefore cannot imitate brain behaviors well, and it is hard to use them to perform actions that are extremely common in biological systems, such as complex classification, pattern recognition, adaptive learning, and outcome prediction [6]. Therefore, a third generation of ANNs, viz. spiking neural networks (SNNs), was created to overcome these difficulties [7]. In recent years, there has been great development in the SNN field [8–10], and many kinds of computational and hardware spiking neuron models have been proposed, such as the Hodgkin–Huxley neuron model [11], the Izhikevich neuron model [12], the leaky integrate-and-fire model [13], etc. As mentioned above, the human brain is a high-density and extremely low-power-consumption system with nearly \(10^{10}\) neurons and \(10^{14}\) synapses [14]. Thus, a single neuron may be connected to up to \(10^{4}\) other neurons by synapses [15]. This is the greatest challenge that must be overcome in the microelectronic design of ANNs. Thus, the desired features of any devices used to build ANNs are high density, low power dissipation, and strong driving ability. Traditionally, different kinds of field-effect transistors (FETs) have been used to construct neurons and synapses, but they have an ultimate density limit on the order of \(10^{10}\,\text{cm}^{-2}\). Thus, Cantley et al. [6] proposed a spiking neuron circuit based on nanoscale noncrystalline silicon thin-film transistors (TFTs), which solved the issue of the driving ability of SNNs, although it ignored the complexity of the circuit.
Based on this discussion, devices offering a simplified circuit, high integration, and low power dissipation are highly desirable for use in microelectronic circuit design and related fields. In this sense, single-electron transistors (SETs) have attracted attention from researchers due to their smaller space occupation, simpler structure, and lower energy consumption compared with traditional FETs [16]. The basic operating principles of a SET are Coulomb blockade oscillation and the tunneling effect, with only a few electrons being transported [17]. Based on these remarkable characteristics, many circuits have been designed [18, 19]. Liu et al. [20] built a simple hardware-oriented spiking neuron model based on the Coulomb oscillation effect of a SET; their proposed SNN circuit structure is simple and can be used to realize a few basic SNN functions. However, for some special applications such as updating synapse weights, their circuit may be ineffective. To validate neuron models with SNNs, Pês et al. [2] designed logic gates based on SETs and modified Wenpeng's SET model into one that can operate at room temperature. Although SETs offer many advantages, their serious drawback is that their output impedance must be large enough (no less than the quantum resistance of 25 kΩ) [21]. Clearly, the driving ability of SETs is insufficient for long-distance transmission, nor can they handle too much load. For this reason, hybrid SET–FET circuits have been used in several applications [22–25], exploiting the high driving ability of metal–oxide–semiconductor field-effect transistors (MOSFETs) to provide an effective solution to the driving problem of SET-based neurons. However, a serious problem remains in ANN circuit design, viz. that the number of synapses is thousands of times greater than the number of neurons in biological systems; thus, synapse density is a critical parameter when devising neural networks.
Chua [26] theoretically proposed the memristor in 1971, and these devices have gained popularity since their physical implementation by Williams et al. [27] at HP Labs in 2008. In recent years, a considerable number of memristor models have been designed and applications identified due to their excellent characteristics, such as nanoscale size and energy-saving attributes [28–30]. The most remarkable feature of memristors is their memory ability; more specifically, a memristor can remember the charge that has flowed through it, i.e., its resistance varies with its current history. Consequently, memristors and corresponding hybrid circuits have been widely used as synapses in ANNs [31–33]. Memristor bridge synapse circuits have been widely used to control weights in ANNs [34]; however, such bridge structures require at least four memristors, which negatively affects circuit integration and energy efficiency. To overcome these shortcomings, many researchers have used only one memristor as a synapse, in spite of the nonlinearity of its weight refreshing. In this situation, hybrid circuits of memristors have been devised, including memristor–FET circuits, etc. [35, 36]. Yang et al. [49] designed a two-memristor synapse with a reverse series connection to simplify the weight adjustment and the circuit. However, such a series connection of two memristors is only effective for linear HP memristors. Moreover, window functions and boundary effects were not discussed in such memristor models.
Therefore, a new SNN neuron model is proposed herein based on a SET and FET hybrid circuit, considering not only the complexity and driving ability of the neuron circuit but also the power and space consumption. To validate the model, a modified memristor synapse is constructed and used to connect isolated neurons into a simple network. The memristors considered herein are equipped with a novel simple window function that can simplify the memristor synapse weight adjustment. Finally, a series of simulations are performed to verify the design.

2 Basic nanoscale devices

2.1 Single-electron transistors

2.1.1 Brief introduction to SET properties

A SET is a promising nanoscale device. In recent decades, the physical characteristics of SETs have been described sufficiently by analytic expressions and simulations, and recent research on SETs has achieved huge breakthroughs [37, 38]. As shown in Fig. 1, a SET can be used as a three-terminal device just like a traditional transistor, where the back gate (G2) of the SET is usually connected to the ground or reference voltage in some specific applications, e.g., the design of logic gates [2].
As mentioned above, besides their ultralow energy and space consumption, another prominent characteristic of SETs is Coulomb oscillation: the current between the source (S) and drain (D) changes periodically as the gate voltage (G1) is swept linearly. In other words, there is a periodic relationship between the gate voltage (G1) and the source–drain current. Figure 2 shows the periodic change of the source–drain current versus the linear gate voltage (G1). The gate voltage (G1) changes linearly at a rate of 2 V/s in this simulation, so G1 ramps from 0 to 0.5 V in 0.25 s while G2 is grounded, resulting in the illustrated time-varying Coulomb oscillation.
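To make this concrete, the short Python toy below generates current peaks with the textbook gate-voltage spacing of e/Cg and a thermally broadened lineshape. The capacitance, temperature, bias, and peak conductance are illustrative assumptions; the paper's Fig. 2 itself comes from the full SET model rather than from this toy.

```python
import numpy as np

# Toy Coulomb oscillation: current peaks repeat with gate-voltage period e/Cg
# and have a thermally broadened cosh^-2 lineshape. All values illustrative.
e, kB = 1.602e-19, 1.381e-23
Cg, T, Vds, G0 = 6e-18, 4.2, 5e-3, 1e-7    # Ref. [20]-like Cg; assumed T, bias
period = e / Cg                            # ~27 mV between adjacent peaks

t = np.linspace(0.0, 0.25, 20_000)         # the 0.25 s window of Fig. 2
Vg = 2.0 * t                               # 2 V/s linear gate ramp
dV = (Vg + period / 2) % period - period / 2   # offset from the nearest peak
I = G0 * Vds / np.cosh(e * dV / (2.5 * kB * T)) ** 2
print(f"e/Cg = {period*1e3:.1f} mV -> ~{0.5/period:.0f} peaks over the 0.5 V sweep")
```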
Liu et al. found that the Coulomb oscillation phenomenon of SETs shares some features with the pulses produced by a spiking neuron [20]. Based on this important observation, Liu et al. constructed a simple spiking neuron with a single SET. The SET spiking neuron model can exhibit diverse forms of spiking, just as the Izhikevich model [39] does, e.g., tonic spiking, class 1 excitable spiking, inhibition-induced spiking, and so on. Moreover, some researchers found that the SET spiking neuron model is able to synchronize and integrate information in a neural network, and it can also implement phase encoding [20]. With these outstanding features, SET spiking neurons have bright prospects in hardware design.

2.1.2 Choosing the simulation platform and SET model

To explore the characteristics of SETs effectively and to execute more accurate circuit simulations of various SET applications, simulation models and platforms are of considerable importance. Although several excellent single-electron circuit simulation platforms such as MOSES and SIMON [40] already exist, they do not include sufficient peripheral circuit components, such as transistors and diodes, which is extremely inconvenient for hybrid SET circuit design and property testing. For design convenience, PSPICE is chosen in this work to simulate the SET circuits and their corresponding hybrid circuits because of its rich and extensive device library. Besides the abundant digital and analog devices available in PSPICE, it is also a widely used simulation program, so its simulation files are highly compatible with other platforms such as Multisim.
Three main simulation methods are used for SET circuits: analytical modeling, macromodeling, and the Monte Carlo method. In macromodeling, an equivalent circuit of the SET is designed to solve the Kirchhoff's voltage law (KVL) and Kirchhoff's current law (KCL) equations; common devices, i.e., resistors and diodes, are used in the macromodel. Macromodeling is well suited to simplifying circuit structures and reducing simulation times, but the precision of such simulations may not be high enough, and the internal states of the SET cannot be measured. The Monte Carlo method calculates the probability of tunneling events occurring on the Coulomb island [17]. However, this method consumes too much time performing probability calculations, which is highly undesirable in the simulation process.
To overcome these shortcomings, the SET SPICE model proposed by Lientschnig et al. [21] is adopted herein. This model is considerably fast in hybrid SET simulations because of its stationary master-equation approach, and it captures the full orthodox theory of single-electron tunneling. For a detailed description of the model, see Ref. [21].
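To give a flavor of what the master-equation approach evaluates, the Python sketch below computes the orthodox-theory tunneling rate and the steady state of a minimal two-charge-state master equation. The junction resistance, temperature, and free-energy gains are illustrative assumptions, not values derived from the adopted model.

```python
import math

e, kB = 1.602e-19, 1.381e-23

def rate(W, R, T):
    """Orthodox-theory rate through a junction of resistance R when the
    tunneling event lowers the system free energy by W (W > 0 favorable)."""
    x = W / (kB * T)
    if abs(x) < 1e-12:
        return kB * T / (e**2 * R)         # thermal limit, avoids 0/0
    return (W / (e**2 * R)) / (1.0 - math.exp(-x))

# Two island charge states n = 0, 1 with assumed free-energy gains (set in a
# real SET by the gate and bias voltages and the island capacitances):
R, T = 1e5, 4.2
W01, W10 = 1.0e-22, -0.5e-22               # J: 0->1 downhill, 1->0 uphill
G01, G10 = rate(W01, R, T), rate(W10, R, T)
p1 = G01 / (G01 + G10)                     # stationary two-state solution
print(f"p(n=1) = {p1:.3f}; current scale ~ e*G01 = {e * G01:.2e} A")
```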

2.2 Memristor-based synapse construction

2.2.1 Introduction to the memristor model

After real memristors were fabricated by HP Labs, numerous memristor models with diverse characteristics and structures were proposed, such as the sub-nanosecond switching of a tantalum oxide memristor [41], ferroelectric memristors [42], ferroelectric tunnel memristors [43], etc. However, the mathematical expressions for these memristors are extremely complex, and it is difficult to represent them using specific expressions [44]. In light of these problems, the linear drift model proposed by HP Labs is adopted herein. As shown in Fig. 3, the memristor is composed of two regions, viz. a doped region (gray) and an undoped region (white), sandwiched between two metal electrodes (black). The change in the resistance can thus be simply conceived as a movement of the boundary between the two regions under an external voltage.
The HP drift model can be expressed as
$$M(t) = R_{{\text{ON}}} x(t) + R_{{\text{OFF}}} (1 - x(t)),$$
(1)
$$\frac{{\text{d}x(t)}}{{\text{d}t}} = ki(t),$$
(2)
$$v(t) = M(t) \times i(t),$$
(3)
where \(M(t)\) is the resistance of the memristor, and \(R_{{\text{ON}}}\) and \(R_{{\text{OFF}}}\) are the minimum and maximum resistance, respectively. The term \(x(t) = \frac{w(t)}{D} \in (0,1)\) is the doped fraction of the device, \(w(t)\) is the width of the doped region, D is the length of the memristor, \(k = \frac{{u_{\text{v}} R{}_{{\text{ON}}}}}{{D^{2} }}\) is a constant coefficient, and \(u_{\text{v}}\) is the ion mobility, which takes a constant value. The HP linear drift model reflects real physical memristors well, but it ignores the boundary effects of devices; i.e., \(x(t) = \frac{w(t)}{D}\) may leave the range [0, 1] when a continuous source is imposed on the memristor. To improve the performance of the HP drift model and solve this boundary problem, various window functions have been proposed.
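As a quick numerical illustration of this overflow, the Python sketch below integrates Eqs. (1)–(3) with the parameter values used later in Table 2 and an assumed constant 1 V drive; without a window function, x runs straight past 1.

```python
# Euler integration of the HP linear drift model, Eqs. (1)-(3), without a
# window function. Parameters follow Table 2; the 1 V drive is an assumption.
R_ON, R_OFF = 100.0, 20e3            # ohms
u_v, D = 2e-14, 10e-9                # m^2 s^-1 V^-1, m
k = u_v * R_ON / D**2                # = 2e4, as used in Sect. 3.1

x, t, dt, v = 0.5, 0.0, 1e-6, 1.0
while x < 1.0:
    M = R_ON * x + R_OFF * (1.0 - x) # Eq. (1): memristance
    i = v / M                        # Eq. (3): Ohm's law
    x += k * i * dt                  # Eq. (2): boundary drift
    t += dt
print(f"x reaches 1 after {t*1e3:.0f} ms and keeps growing -> window needed")
```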

2.2.2 Novel memristor window function

As mentioned above, when modeling real physical memristors, a window function \(F(x)\) must be added to Eq. (2), modifying it to
$$\frac{{\text{d}x(t)}}{{\text{d}t}} = kF(x)i(t).$$
(4)
HP Labs proposed a window function, namely the HP window function [27]:
$$F_{{\text{HP}}} (x) = x - x^{2}.$$
(5)
This function addresses the boundary problem, since it vanishes at \(x = 0\) and \(x = 1\), but it strengthens the nonlinear drift, and boundary-related problems still remain. To address this, Joglekar and Wolf designed a new window function that solves the boundary problem perfectly [45]:
$$F_{{\text{Jo}}} (x) = 1 - (2x - 1)^{2p},$$
(6)
where \(x \in [0,1]\) and \(p \in N^{*}\). However, this function suffers from the termination lock problem; i.e., the state variable of the memristor cannot move from \(x = 0\) or \(x = 1\), even when the current reverses direction. To address this, the Biolek window function was proposed [46]:
$$F_{{\text{Bi}}} (x) = 1 - (x - \text{stp}( - i))^{2p},$$
(7)
$$\text{stp}(i) = \begin{cases} 1, & i \ge 0 \\ 0, & i < 0, \end{cases}$$
(8)
where \(x \in [0,1]\) and \(p \in N^{*}\). Even though the Biolek window function solves the terminal lock and boundary problems, its expression is so complex that its application is limited. Moreover, the parameter p must be a positive integer. To alleviate this situation, based on Prodromakis's work [47], Zhou et al. [48] introduced the following window function:
$$F_{{\text{Zh}}} (x) = j\left( {\frac{2}{{1.5 + \text{sgn} (i)(x - 0.5)}} - 1} \right)^{p},$$
(9)
$$\text{sgn}(i) = \begin{cases} 1, & i \ge 0 \\ -1, & i < 0, \end{cases}$$
(10)
where \(p \ge 0\) and \(j \ge 0\). Although Zhou's function broadens the admissible range of the parameter p, it adds a further coefficient, j, to an already complex window function.
Therefore, a simple, novel, and useful window function is designed herein to overcome these defects (i.e., terminal lock, boundary problem, complexity of expression, etc.). The mathematical expression for the window function is
$$F(x) = \text{stp}( - \text{sign}(i) \cdot x + \text{stp}(i)),$$
(11)
where \(\text{stp}(i)\) is the step function introduced in expression (8) and \(\text{sign}(i)\) is the sign function of expression (10). The window function proposed herein is compared with those mentioned above in Fig. 4. In the proposed function, the parameters p and j are omitted, and the whole expression is composed of only two internal memristor variables. This simplifies the window function greatly and broadens its range of application. Most existing window functions are nonlinear over the range [0, 1], which degrades memristor performance. Although this nonlinearity problem can be mitigated to some extent, e.g., by the Biolek window function, that comes at the cost of a more complex memristor model. Conversely, the novel window function shown in Fig. 4 is constant when \(0 < x < 1\), and its value changes to zero when \(x = 0\) or \(x = 1\). Moreover, even when the value of the novel window function becomes zero, no terminal lock occurs if the memristor current reverses direction.
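For reference, all of the window functions above transcribe directly into code. The Python snippet below does so and evaluates them at a few arbitrary interior states, showing that the proposed window is constant inside (0, 1) while the classic windows are nonlinear there; in a discrete-time update, its value drops to zero as soon as x steps onto or past a boundary.

```python
# Window functions of Eqs. (5)-(11), transcribed from the text.
def stp(i):                     # step function, Eq. (8)
    return 1.0 if i >= 0 else 0.0

def sgn(i):                     # sign function, Eq. (10)
    return 1.0 if i >= 0 else -1.0

def F_hp(x):                    # HP window, Eq. (5)
    return x - x**2

def F_joglekar(x, p=1):         # Joglekar window, Eq. (6)
    return 1.0 - (2.0*x - 1.0)**(2*p)

def F_biolek(x, i, p=1):        # Biolek window, Eq. (7)
    return 1.0 - (x - stp(-i))**(2*p)

def F_novel(x, i):              # proposed window, Eq. (11)
    return stp(-sgn(i)*x + stp(i))

for x in (0.1, 0.5, 0.9):       # arbitrary interior test points, i > 0
    print(x, F_hp(x), F_joglekar(x), F_biolek(x, 1e-3), F_novel(x, 1e-3))
```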

2.2.3 Analysis of the modified memristor synapse

Researchers have proposed many different synapse circuits constructed using various memristor models, where these circuits have diverse characteristics that mostly depend on the memristor itself. Accordingly, Yang et al. [49] put forward a synapse circuit structured with two HP linear drift memristor models in series. Compared with the memristor bridge structure, this synaptic circuit uses fewer memristors. However, the boundary problem was neglected in their memristor model; that is, a window function was not applied. Moreover, the structure of their synapse was too simple to be used for sophisticated functions. Thus, the modified memristor synapse shown in Fig. 5 is proposed herein.
In Fig. 5, two completely identical memristors (M1, M2) are connected in reverse series, and the transistors function as switches controlling the synapse output and weight modification. The conduction of T10 or T20 is controlled by Vmemcon, which can be in a high-impedance state (no signal), take a positive value (Vn), or take a negative value (−Vn). The threshold values of T10 (N-channel) and T20 (P-channel) are set as −VTth and VTth, respectively. Both Vn and VTth are positive constants, with Vn larger than VTth. According to the transistor characteristics, when Vmemcon carries a signal (−Vn or Vn), T10 or T20 shuts down (allowing no more current to pass), and the Vmemcon signal is applied to Vin to modify the weight of the synapse. Conversely, when Vmemcon has no signal (high impedance) and Vin has an input signal, the channels of T10 and T20 are conductive, and the synapse output depends on the synapse weight and Vin. The input signal Vin has no effect on the transistors T10 and T20, because its value is much lower than the transistor threshold.
In the initial state, the total memristance of the synapse is
$$M(0) = M1(0) + M2(0),$$
(12)
where
$$M1(0) = R_{{\text{ON}}} x_{10} + R_{{\text{OFF}}} (1 - x_{10} ),$$
(13)
$$M2(0) = R_{{\text{OFF}}} x_{20} + R_{{\text{ON}}} (1 - x_{20} ),$$
(14)
\(M1(0)\) and \(M2(0)\) are the initial memristances of the two memristors, respectively, and \(x_{10}\) and \(x_{20}\) are the initial state variables of the corresponding memristors. The memristors in any state can be described by
$$\begin{aligned} M1(\Delta t) & = (R_{{\text{ON}}} - R_{{\text{OFF}}} )(x_{10} + \Delta x) + R_{{\text{OFF}}} \\ & = M1(0) + k(R_{{\text{ON}}} - R_{{\text{OFF}}} ) \\ &\quad \times \int_{0}^{\Delta t} {F_{ + } (x_{10} + \Delta x)i(\Delta t)\text{d}t}. \\ \end{aligned}$$
(15)
Meanwhile, as M2 is connected in reverse,
$$\begin{aligned} M2(\Delta t) & = (R_{{\text{OFF}}} - R_{{\text{ON}}} )(x_{20} - \Delta x) + R_{{\text{ON}}} \\ & = M2(0) - k(R_{{\text{ON}}} - R_{{\text{OFF}}} ) \\ & \quad \times \int_{0}^{\Delta t} {F_{ - } (x_{20} - \Delta x)i(\Delta t)\text{d}t}. \\ \end{aligned}$$
(16)
Accordingly, the total memristance \(M(\Delta t)\) of the two memristors can be deduced as follows:
$$\begin{aligned} M(\Delta t) & = M1(0) + M2(0) + k(R_{{\text{ON}}} - R_{{\text{OFF}}} ) \\ &\quad \times \int_{0}^{\Delta t} {(F_{ + } (x_{10} + \Delta x) - F_{ - } (x_{20} - \Delta x)) \times i(\Delta t)\text{d}t}, \\ \end{aligned}$$
(17)
where \(F_{ + } ( \cdot )\) and \(F_{ - } ( \cdot )\) denote the value of the window function when the memristor undergoes forward or reverse conduction, respectively. It can be seen that the total memristance of the synapse remains the sum of the two initial memristances whenever the term \(F_{ + } (x_{10} + \Delta x) - F_{ - } (x_{20} - \Delta x)\) is 0. Unfortunately, due to the symmetry and nonlinearity of existing window functions, this term becomes 0 only when \(x_{10} + x_{20} = 1\). The novel window function proposed herein removes this restriction on the initial memristance setting; that is, with the novel window function, \(F_{ + } (x_{10} + \Delta x)\) equals \(F_{ - } (x_{20} - \Delta x)\) for any state lying between 0 and 1, because of its constant value there. Therefore, the total resistance of the synapse is a constant value, the current through the synapse is likewise invariant when a constant-voltage source is imposed on the synapse, and the two memristors applied in the synapse can be described by
$$\begin{aligned} M1(\Delta t) & = M1(0) + k(R_{{\text{ON}}} - R_{{\text{OFF}}} ) \\ & \quad \times \int_{0}^{\Delta t} {F_{ + } (x_{10} + \Delta x)i(\Delta t)\text{d}t} \\ & = M1(0) + k^{'} i\Delta t, \\ \end{aligned}$$
(18)
$$\begin{aligned} M2(\Delta t) & = M2(0) - k(R_{{\text{ON}}} - R_{{\text{OFF}}} ) \\ & \quad \times \int_{0}^{\Delta t} {F_{ - } (x_{20} - \Delta x)i(\Delta t)\text{d}t} \\ & = M2(0) - k^{'} i\Delta t, \\ \end{aligned}$$
(19)
where \(M1(\Delta t)\) and \(M2(\Delta t)\) change linearly with time under a definite constant voltage, i is a constant current, and \(k^{'} = k(R_{{\text{ON}}} - R_{{\text{OFF}}} )\). Thus, when the new window function is applied, the synapse constructed from two HP linear drift memristor models is a linear device, which greatly simplifies the weight adjustment process (Fig. 6). This can be summarized as follows:
(a) The output voltages of the synapses, which reflect the weight values, increase linearly when the input voltage is positive.
(b) If there is no input voltage to a synapse circuit, the weights of the synapses remain unchanged.
(c) Conversely, there is a linear reduction in the synaptic weights when a negative input voltage is applied to the synapse circuit.
In the real world, almost all devices exhibit significant nonlinearity, e.g., in their I–V curves, state changes, etc. However, in the design presented herein, the memristor-based synapse is used to adjust the weight between neurons, and the rate of change of the weight is comparatively slow when it has a proper initial value. Therefore, the whole weight-training process can be simplified as linear, resulting in a useful linear synapse. Although absolutely linear devices are relatively scarce, a combination of multiple devices can implement linear circuits to some degree; indeed, this section has discussed the linearity of memristors connected in series under the proposed window function.
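A minimal numerical sketch of this linearity is given below (Python), assuming the Table 2 memristor parameters, the initial memristances used later in Sect. 4.1.2, and an illustrative 50 ms adjustment pulse of VP = 1.5 V; the series memristance stays constant while the weight rises linearly, per Eqs. (18)–(19).

```python
# Reversed-series synapse (Fig. 5) during weight adjustment: M1 + M2 is
# constant, so the current under a constant VP is constant and each
# memristance drifts linearly (Eqs. (18)-(19)). Duration is illustrative.
R_ON, R_OFF = 100.0, 20e3
k = 2e-14 * R_ON / (10e-9)**2        # = 2e4 (Table 2 parameters)

M1, M2 = 2e3, 18e3                   # initial memristances (Sect. 4.1.2)
V, dt = 1.5, 1e-4                    # VP = 1.5 V across the pair
for _ in range(500):                 # 50 ms of adjustment
    i = V / (M1 + M2)                # constant: the sum never changes
    dM = k * (R_ON - R_OFF) * i * dt # k' * i * dt, with k' < 0
    M1 += dM                         # Eq. (18): forward memristor shrinks
    M2 -= dM                         # Eq. (19): reversed memristor grows
print(f"M1 = {M1:.0f}, M2 = {M2:.0f}, weight M2/(M1+M2) = {M2/(M1+M2):.3f}")
```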

3 Memristive spiking neuron network based on SETs

3.1 Parameter settings of the primary nanoscale devices

The SET model proposed in Ref. [21] works well under arbitrarily high gate and bias voltages, and it can handle a considerable range of hybrid circuit styles. Nevertheless, the model was not originally built for constructing spiking neurons. Moreover, the firing rate of a biological neuron may reach several hundred hertz (i.e., millisecond periods) [50], a situation the SET model of Ref. [21] cannot emulate well. Therefore, the parameters of the SET must be redefined. Figure 7 shows several Coulomb oscillations created by a SET with different parameter values, where the corresponding key parameters are presented in Table 1.
Table 1  The key parameter values of the SET

Scheme    | Cs, Cd | Cg | Rs, Rd
Ref. [20] | \(10^{-20}\,\text{F}\) | \(6 \times 10^{-18}\,\text{F}\) | \(10^{8}\,\Omega\)
Ref. [21] | \(10^{-18}\,\text{F}\) | \(10^{-18}\,\text{F}\) | \(10^{5}\,\Omega\)
This work | \(10^{-20}\,\text{F}\) | \(10^{-17}\,\text{F}\) | \(10^{5}\,\Omega\)
As shown in Fig. 7, the difference between Figs. 7 and 2 is that the drain voltage is used to show the Coulomb oscillation of the SETs with different parameters, obtained by grounding the SET drain through a series-connected resistor. The gate voltage (G1) ramps linearly at 1 V/s in this simulation, while the source voltage is 5 mV. The amplitude of the output voltage pulse of the SET built with the parameters of Ref. [20] is obviously smaller than the others, reaching merely 1 mV against the 5 mV source value. The terms Rs and Rd are related to the amplitude of the output pulse, i.e., the higher the resistance, the smaller the pulse amplitude. Accordingly, Rs and Rd are adjusted to \(10^{5}\,\Omega\) to increase the pulse amplitude. Additionally, the gate capacitance Cg determines the maximum frequency of the output pulse, which decreases with an increase of Cg. Moreover, the frequency of biological pulses is mostly concentrated around values on the order of tens of hertz, whereas the pulse frequency of the SET built with the parameters of Ref. [20] or [21] is relatively low compared with those produced by biological bodies. Thus, the value of Cg is modified to \(10^{-17}\) F to better imitate the characteristics of biological neurons, resulting in a SET frequency of about 30 Hz.
The memristor is a key component in the proposed neural network circuit, which is used to adjust the connection weights between neuron pairs. Several parameters of the memristor in this work are presented in Table 2, as used directly in the circuit simulation.
Table 2  The crucial parameters of the memristor

\(R_{\text{ON}}\) | \(R_{\text{OFF}}\) | \(u_{\text{v}}\) | D
\(100\,\Omega\) | \(20\,\text{k}\Omega\) | \(2 \times 10^{-14}\,\text{m}^{2}\,\text{s}^{-1}\,\text{V}^{-1}\) | \(10\,\text{nm}\)
As mentioned in Sect. 2.2.1, the term \(k = \frac{{u_{\text{v}} R_{{\text{ON}}} }}{{D^{2} }}\) is the rate of movement of the interface between the doped and undoped regions. According to the parameters in Table 2, a value of \(k = 2 \times 10^{4}\) is used. Additionally, the parameters of the window function used in this work are described in Sect. 2.2.2. Under a periodic stimulus, the voltage–current (V–I) curve produced by the HP linear drift model with the novel window function is depicted in Fig. 8. The V–I curve exhibits a figure-of-eight-shaped pinched hysteresis loop whose area decreases as the source frequency increases, indicating that the memristor model used in this simulation behaves as a standard memristive device.
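As a cross-check of this frequency dependence, the Python sketch below drives the HP model with the proposed window using an assumed sinusoidal source; the amplitude, the two test frequencies, and the crude loop-area proxy are illustrative choices rather than the exact setup behind Fig. 8.

```python
import numpy as np

# HP model + proposed window under a sinusoidal drive: tracing i against v
# gives a pinched hysteresis loop whose area shrinks as frequency grows.
R_ON, R_OFF, k = 100.0, 20e3, 2e4

def stp(s): return 1.0 if s >= 0 else 0.0
def F(x, i): return stp(-(1.0 if i >= 0 else -1.0) * x + stp(i))  # Eq. (11)

def run(freq, A=1.0, x0=0.5, dt=1e-5, cycles=2):
    x, vs, cs = x0, [], []
    for t in np.arange(0.0, cycles / freq, dt):
        v = A * np.sin(2 * np.pi * freq * t)
        i = v / (R_ON * x + R_OFF * (1.0 - x))
        x += k * F(x, i) * i * dt
        vs.append(v); cs.append(i)
    return np.array(vs), np.array(cs)

for f in (5.0, 50.0):                        # illustrative frequencies
    v, i = run(f)
    area = abs(np.sum(i[:-1] * np.diff(v)))  # crude enclosed-area proxy
    print(f"{f:5.1f} Hz: loop-area proxy = {area:.3e}")
```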

3.2 Spiking neurons connected via memristor synapses

As mentioned above, biological neural systems are extremely complex with numerous neurons and connection synapses, and the activities of biological bodies are not the result of single neurons or individual brain regions, but rather arise from the action of the whole system. To understand neural systems better and imitate their functionalities more completely, multilayered neuron structures and multiple connections between neurons must be considered. The characteristics of single spiking neurons modeled using the SET as proposed herein are discussed in previous sections, so connections between multiple neurons and related neural activities should now be considered.
To begin with, it is worth noting that the connection between two neurons is fundamental in neural systems, making it necessary to analyze and reproduce all the relevant activities between them. Figure 9 displays the conceptual structure of two neurons (Ni, Nj) connected by a synapse (\(\omega_{ij}\)). Generally, a single neuron abides by the “integrate-and-fire” rule; i.e., the neuron integrates presynaptic spikes until reaching a threshold, at which point it outputs pulses. Specifically, as shown in Fig. 9, the integration process (symbol \(\varSigma\)) in Ni can be expressed as
$$s_{i} = \sum\limits_{k = 1}^{n} {v_{k} } \omega_{ki},$$
(20)
where \(s_{i}\) denotes the summed inputs of Ni, \(v_{k}\) is the output voltage of the \(k\text{th}\) presynaptic neuron, and \(\omega_{ki}\) denotes the weight between Ni and the \(k\text{th}\) presynaptic neuron. Once the weighted summation exceeds the threshold \(v_{{\text{th}}}\) of Ni, Ni enters an active state and gives an output voltage \(v_{{i\text{out}}}\) to drive the synapses directly connected to it. The neuron activation function is a step function:
$$g_{i} = \begin{cases} v_{{i\text{out}}}, & s_{i} > v_{{\text{th}}} \\ 0, & s_{i} \le v_{{\text{th}}}, \end{cases}$$
(21)
where \(v_{{i\text{out}}}\) is the output voltage of the neuron and \(s_{i}\) denotes the sum of its inputs.
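Equations (20) and (21) transcribe directly into code. In the Python sketch below, the input voltages, weights, and threshold are illustrative values that loosely echo the 5 mV source and 2 mV threshold appearing later in Table 3.

```python
# Weighted summation and hard threshold, Eqs. (20)-(21).
def neuron(v_pre, w, v_th, v_out):
    s = sum(vk * wk for vk, wk in zip(v_pre, w))   # Eq. (20)
    return v_out if s > v_th else 0.0              # Eq. (21)

print(neuron([5e-3, 0.0, 0.0], [0.9, 0.1, 0.1], v_th=2e-3, v_out=5e-3))  # fires
print(neuron([5e-3, 0.0, 0.0], [0.1, 0.9, 0.1], v_th=2e-3, v_out=5e-3))  # 0.0
```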
Finally, the input voltage that Nj receives from Ni is \(v_{{i\text{out}}} \omega_{ij}\). As seen in Fig. 9, when Nj is activated, there exists a backward feedback signal, \(v_{jf}\), with the same amplitude and sign as the output voltage, \(v_{{j\text{out}}}\). Note that the feedback voltage plays a vital role in modifying the synaptic weight; the detailed process of synaptic weight refreshing is introduced in the circuit simulation section. To verify the SNN, classic neural activities should be reproduced by the network. As is well known, the structure of neural systems is considerably complex, with many functions present in a single system. One of the most famous functions is associative learning or memory, as demonstrated by Pavlov's well-known dog experiment [51], which occurs prevalently in most biological behaviors. Inspired by this interesting behavior, the Hebbian theorem was proposed, that is, "neurons that fire together wire together." In recent years, various structures of neural learning and memory networks based on memristor synapses have been proposed. To the best of the authors' knowledge, however, compared with associative memory construction and loss, associative memory change and reconstruction have rarely been discussed. In most works, associative memory losses are merely attributed to the self-forgetting behavior of synapses, rather than being the result of neural activities. Therefore, in this work, associative memory construction, loss, reconstruction, and change in SET-based spiking neural networks are described.
Based on the two-neuron connection and the aforementioned associative memory, Fig. 10 shows a diagram of a simple neural network built with four spiking neurons connected by three memristor synapses (introduced in Sect. 2.2.3), where the presynaptic neurons N1–N3 are connected with the postsynaptic neuron N4 via a memristor synapse. In this diagram, the synapse weight adjustment circuit is omitted; therefore, the control signal terminals (Vmemcon1–Vmemcon3) of synapses M1–M3 are floating.
In Fig. 10, memristor symbols with gray or white backgrounds denote high- or low-resistance states, respectively. Note that the memristive synapse S24 between N2 and N4 is grounded via a memristor with a gray background, indicating that the synapse has an inherently large weight; namely, N2 can activate N4 directly when N2 is in an excitatory state. This phenomenon is commonly seen in daily life, e.g., unconscious salivation when chewing gum, and is normally called an unconditioned reflex. Conversely, S14 and S34 are connection synapses with weak weights, which means N1 or N3 cannot initially activate N4; the process of strengthening the weights of S14 and S34 is the so-called learning process. For instance, once the weight of S14 rises sufficiently, with the help of the unconditioned reflex occurring between N2 and N4 simultaneously, N1 can directly activate N4 without N2 being in an active state. This is referred to as a conditioned reflex, or associative memory building. As for associative memory loss, a synapse loses its memory when its presynaptic neuron is kept in an inactive state for a long time. However, there is another way for an associative memory to be lost, as shown in Pavlov's dog experiment: if the postsynaptic neuron N4 is always activated by another neuron (e.g., N2) rather than by N1, then the connection strength between N1 and N4 becomes weaker and weaker until it disappears completely.
After associative memory losses, the process of associative memory reconstruction is exactly the same as memory construction. The essence of associative memory change is the associative memory construction between N3 and N4 accompanied by memory loss in S14.
For simplicity, the specific peripheral circuits of the network are omitted from this schematic sketch; the specific connection modes among the various devices are introduced in the next section.

4 The circuit simulation of a memristive SNN

4.1 The connection between two spiking neurons

4.1.1 Basic modules of SNNs

Figure 11 presents basic modules of SET spiking neural networks in PSPICE, where the SET and memristor models, and their corresponding parameters, were introduced in former sections. All of the basic modules contain eight MOSFETs (T1–T8), two linear drift memristors (M1, M2) with novel window functions, five resistors (R1–R5), a capacitor (C1), a linear voltage source (V1), and a comparator (LM219). R6 and R7 are used to facilitate the memductance test of the corresponding memristor but have no effect on the whole circuit. In addition to the SET and memristors configured in the previous sections, the remaining parameters of all the module circuits are presented in Table 3, as used directly in all the circuit simulations unless otherwise noted.
Table 3  The parameters of the basic circuit modules

Parameter | Value
C1 | 50 μF
R1 | 100 MΩ
R2, R6, R7 | 1 kΩ
R3, R4, R5 | 100 MΩ
V1 | 25 mV/s
VCC | 5 mV
VP | 1.5 V
VN | −1.5 V
VT | 2 mV

Device | Style | Threshold (V)
T1 | NMOS | 1
T2 | NMOS | 1
T3 | NMOS | −1
T4 | PMOS | 1
T5 | NMOS | 1
T6 | NMOS | 1
T7 | NMOS | 1
T8 | PMOS | 1
Table 3 shows that the values of R1 and C1 may deviate from practical values to some extent, but with these values C1 and R1 can reproduce the pulse-integration behavior of biological neurons (as shown by the simulation result in Fig. 13). Moreover, the designed spiking neuron comprises an LM219 comparator, which needs a comparatively large capacitor to drive it; hence these values are retained in the model. However, as circuit integration advances, the comparator could become smaller and consume less power, in which case the capacitor and resistor used in the neuron design could also shrink, something achievable in the near future. The remaining MOSFET parameters are set to the default values of the standard Shichman–Hodges transistor model in PSPICE.
As shown in Fig. 11, there are three circuit modules marked by three colored frames: a SET spiking neuron (solid red line), a memristor synapse (dotted blue), and a synapse weight adjustment circuit (dashed green).
In the spiking neuron module, an RC circuit consisting of a capacitor (C1) and a resistor (R1) functions as a signal integrator. Once the voltage across C1 exceeds the threshold VT, the spiking neuron enters an active state and outputs pulses. After the neuron has been activated, it produces a control voltage (Vcon) for the weight adjustment circuit, which can modulate the weights of the synapses according to the states of the other neurons connected to it. Specifically, if two connected neurons (one presynaptic, one postsynaptic) are activated synchronously, both neurons send a voltage (Vcon) to the weight adjustment circuit, which then applies a positive voltage (VP) to increase the weight of the synapse circuit connecting them. Conversely, if both neurons are in a resting state, there is no output to the weight adjustment circuit, and the synapse weights remain unchanged. It is noteworthy that the weight adjustment circuit imposes a negative voltage (VN) on the synapse circuit when the postsynaptic neuron is in an excitatory state while the presynaptic one is inhibitory. However, the weight adjustment circuit has no effect when only the presynaptic neuron is activated. The detailed operation of the weight adjustment circuit is presented as a logic table in Table 4.
Table 4  The logic table of the weight adjustment circuit

Vcon1 | Vcon2 | Vmemcon
1 | 1 | VP
1 | 0 | VN
0 | 1 | \
0 | 0 | \
In Table 4, “1” and “0” indicate that the input (Vcon1 or Vcon2) is larger or smaller than the transistor threshold, respectively, while “\” means no output.
As stated above, there can be positive or negative voltage feedback in a synapse circuit, which depends entirely on the states of the presynaptic and postsynaptic neurons. As shown in Fig. 11, a PMOSFET and an NMOSFET in series are attached to the central point of the two reversed series-connected memristors in the synapse circuit, and all the MOSFETs are conductive when the corresponding synapse weight adjustment circuit is idle. However, T4 shuts down and the synapse weight \(\omega = \frac{M2}{M1 + M2}\) gradually increases when the weight adjustment circuit outputs a positive voltage, VP. Conversely, under a negative voltage VN, T3 shuts down, and the weight decreases slowly. Based on this control rule, both VP and VN can be set to any value depending on the circuit design.
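Table 4 can be captured by a small behavioral function. In the Python sketch below, Vcon1 is interpreted as the postsynaptic control signal and Vcon2 as the presynaptic one; this pairing is an assumption inferred from the text, since the table does not label its two inputs.

```python
VP, VN = 1.5, -1.5               # volts, from Table 3

def weight_adjust(vcon_post, vcon_pre):
    """Table 4: returns Vmemcon, or None for the high-impedance state."""
    if vcon_post and vcon_pre:
        return VP                # fired together -> strengthen (Hebbian)
    if vcon_post and not vcon_pre:
        return VN                # post fires without pre -> weaken
    return None                  # pre alone, or both silent -> no output

for post in (1, 0):
    for pre in (1, 0):
        print(post, pre, weight_adjust(post, pre))
```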

4.1.2 Simulating the connection between two spiking neurons

Based on the basic circuit modules introduced above, a circuit containing two spiking neurons connected by a memristor synapse is shown in Fig. 12, corresponding to the conceptual diagram of Fig. 9. Once N1 is in an excitatory state, the state of N2 depends entirely on the weight of the synapse between N1 and N2; specifically, N1 can activate N2 when the weight is large enough, but N2 cannot be activated if the weight is too small. Therefore, to validate the SET spiking neuron model and lay the foundations for the neural network construction, only the connection between the two neurons is discussed in this simulation, and the weight adjustment process is not taken into consideration. Moreover, a constant voltage source (5 mV) lasting 0.5 s is imposed directly on the source terminal (S) of the SET in N1, meaning that N1 is in an active state from 0 to 0.5 s. The remaining parameters are as introduced in Sect. 4.1.1.
First, suppose that N1 can activate N2, namely that the synapse connection between N1 and N2 is strong. Based on this hypothesis, M1 and M2 are set to 2 kΩ and 18 kΩ, respectively; thus, the large weight value is \(\omega_{{\text{big}}} = \frac{M2}{M1 + M2} = 0.9\). The PSPICE simulation result is shown in Fig. 13a. Conversely, a small weight value, \(\omega_{{\text{small}}} = 1 - \omega_{{\text{big}}} = 0.1\), indicates a weak connection between N1 and N2; the corresponding simulation results are shown in Fig. 13b.
In Fig. 13a, there is a 0.25 s delay in N2’s output [V(N2OUTPUT)] due to the integration characteristic of the spiking neurons. However, in essence, this is the result of the RC circuit’s charging process. After N2 receives four pulses via a memristor synapse, the postsynaptic potential [V(PSP) in Fig. 13] exceeds the threshold of N2, and N2 becomes active and outputs pulses. It is noteworthy that N2 remains in an active state (between two dashed lines from 0.5 s in Fig. 13a) for about 0.074 s after the presynaptic neuron N1 [V(N1OUTPUT) in Fig. 13] changes into an inactive state, because the RC circuit needs a certain amount of time to discharge. However, when the connection weight value changes to 0.1, Fig. 13b shows that the postsynaptic potential cannot reach the threshold, thus N1 cannot activate N2 directly, and N2 does not provide an output.
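The charge and discharge delays follow ordinary RC dynamics: an integrator driven toward a steady-state voltage \(V_{\infty}\) crosses a threshold \(V_{\text{th}}\) after \(t = -\tau \ln(1 - V_{\text{th}}/V_{\infty})\). The Python check below uses an illustrative time constant, not the effective constant of the simulated circuit, which is also shaped by the pulsed (rather than constant) drive.

```python
import math

def time_to_threshold(tau, v_inf, v_th):
    # First-order RC charging: v(t) = v_inf * (1 - exp(-t / tau))
    return -tau * math.log(1.0 - v_th / v_inf)

tau = 0.1                                              # s, illustrative
print(time_to_threshold(tau, v_inf=5e-3, v_th=2e-3))   # ~0.051 s
```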
Generally, in this PSPICE simulation of a connection between two spiking neurons, the spiking neurons based on SETs work well and the synapse constructed from the memristors is useful.

4.2 Connections between four SET-based spiking neurons

4.2.1 A circuit of connections for four spiking neurons

On the basis of the circuit connecting two spiking neurons and the aforementioned connection diagram in Fig. 10, a neural network circuit consisting of four SET spiking neurons is designed in PSPICE (Fig. 14). According to Fig. 10, the connection between N2 and N4 is inherently strong and stable, and the effects of the feedback controlling voltage on S24 should be omitted; thus, there exists no feedback circuit between N2 and N4, which allows the circuit to be simplified further. The initial values of the memristors are presented in Table 5.
Table 5  The initial values of the memristors

Memristor | Initial value
M1, M4, M5 | 18 kΩ
M2, M3, M6 | 2 kΩ
As more presynaptic neurons are added to the neuron network, only two additional MOSFETs are needed by each synapse weight adjustment circuit (Fig. 11), because all the weight-controlling voltages (Vcon) that the synapses receive from N4 are the same (for clarity, all the synapse weight adjustment circuits are drawn in full in Fig. 14). The parameters of all the components used in this circuit are the same as in Sect. 4.1.

4.2.2 The associative memory implementation in SNN

In this section, by using the SNN circuit shown in Fig. 14, an experiment is carried out to reveal any associative activities including associative memory construction, loss, reconstruction, and change. The whole simulation procedure is divided into the 12 stages listed in Table 6, where “N” and “Y” denote the inactive and active state of presynaptic spiking neurons, respectively, and each stage lasts 0.6 s.
Table 6  The states of the presynaptic spiking neurons

Stage    | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12
N1 state | Y | N | N | Y | Y | N | Y | Y | Y | N  | N  | Y
N2 state | N | Y | N | Y | N | Y | N | Y | N | Y  | N  | N
N3 state | N | N | Y | N | N | N | N | N | N | Y  | Y  | N
According to Table 6 and the procedures related to associative learning, the states of the presynaptic neurons can be described in detail as follows (a compact behavioral encoding of this protocol is sketched after the list):
  • Testing, phase 1 (stages 1, 2, 3): These stages test the connection strength between the presynaptic neurons N1–N3 and N4. As mentioned above, only N2 can initially activate N4 because of its inherently strong relationship with N4.
  • Associative learning (stage 4): To strengthen the synapse S14 between N1 and N4, both N1 and N2 become active, following the Hebbian rule, while N3 remains inactive.
  • Testing, phase 2 (stage 5): After the associative learning step of stage 4, synapse S14 has been sufficiently enhanced. To check the learning result, only N1 is set to an active state.
  • Memory loss (stage 6): According to Pavlov's dog experiment, the established associative memory is not eternal and disappears after a period of time. Therefore, only N2 is activated, while N1 and N3 are kept in resting states.
  • Testing, phase 3 (stage 7): After the memory loss step, only N1 is in an active state.
  • Associative memory reconstruction (stage 8): This stage is exactly the same as stage 4 (i.e., associative learning): N1 and N2 are active, while N3 rests.
  • Testing, phase 4 (stage 9): Just like stage 5, N1 is excited, while N2 and N3 are inactive.
  • Associative memory change (stage 10): N2 and N3 are activated simultaneously, while N1 becomes inactive.
  • Testing, phase 5 (stages 11, 12): To test the result of the associative memory change, only one presynaptic neuron is activated in each stage (N3 in stage 11, N1 in stage 12).
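The protocol above can be replayed behaviorally in a few lines of Python: the sketch below drives a toy threshold neuron with Table 6 and updates the weights by the Table 4 rule. The learning rate and threshold are assumptions chosen so that a single 0.6 s stage suffices to build or erase a memory; this is a behavioral stand-in for the PSPICE circuit, not a circuit simulation.

```python
# Behavioral replay of Table 6 with the Table 4 weight-adjustment rule.
stages = {                      # stage -> (N1, N2, N3) active?
     1: (1, 0, 0),  2: (0, 1, 0),  3: (0, 0, 1),  4: (1, 1, 0),
     5: (1, 0, 0),  6: (0, 1, 0),  7: (1, 0, 0),  8: (1, 1, 0),
     9: (1, 0, 0), 10: (0, 1, 1), 11: (0, 0, 1), 12: (1, 0, 0),
}
w = [0.1, 0.9, 0.1]             # weights of S14, S24 (inherently strong), S34
theta, eta = 0.5, 0.8           # firing threshold and assumed learning rate

for stage, pre in sorted(stages.items()):
    n4 = sum(p * wi for p, wi in zip(pre, w)) > theta   # Eqs. (20)-(21)
    for j in (0, 2):            # S24 has no feedback circuit (Fig. 14)
        if n4 and pre[j]:
            w[j] = min(w[j] + eta, 0.9)                 # VP: strengthen
        elif n4:
            w[j] = max(w[j] - eta, 0.1)                 # VN: weaken
    print(f"stage {stage:2d}: N4 {'fires' if n4 else 'silent'}, w = {w}")
```

Running this reproduces the narrative of Sect. 4.2.3: N4 is silent in stages 1, 3, 7, and 12 and fires in the others.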

4.2.3 Simulation result and analysis

As shown in Fig. 15, the PSPICE simulation is divided into nine parts, consisting of a total of 12 stages (S stands for stage in Fig. 15). In the phase 1 test, only N2 can activate N4, which conforms well to the preset characteristic that only N2 has a strong connection with N4. In line with the two-neuron connection simulation results of Sect. 4.1.2, once a postsynaptic neuron has been activated by a presynaptic neuron, there is a 0.074 s delay before the postsynaptic neuron returns to a resting state after the presynaptic neuron becomes inactive. To separate the responses of N4 driven by different presynaptic neurons, each stage therefore contains a 0.1 s interval (marked with the symbol "0.1 s" in Fig. 15).
After finishing the associative memory construction stage (S4 in Fig. 15), Fig. 15 shows that N1 can activate N4 independently in the phase 2 testing stage (stage 5). In other words, with the help of N2, the associative relationship between N1 and N4 has been built.
However, if N1 does not fire while N2 keeps firing for a period of time, N1 gradually loses its associative memory in the memory loss stage (stage 6), after which N4 can no longer be activated by N1 (stage 7). Although the memory associated with N1 has disappeared, it can be rebuilt just as in stage 4, i.e., by firing N1 and N2 together for long enough (stage 8); the phase 4 testing (stage 9) shows that the memory was indeed rebuilt in stage 8. After N1 retrieves its memory, in order to change the associative memory, N3 fires together with N4 (the latter driven by N2) during the memory change stage (stage 10). The phase 5 testing, comprising stages 11 and 12, shows that N3 has built an associative memory (N3 can fire N4) while N1 has lost its memory (N1 can no longer activate N4).
Furthermore, the memristances and weights of all synapses in the simulation are shown in Fig. 16. Due to the memristor model and the new window function, the memristances and weights vary linearly in every stage, which greatly facilitates the weight adjustment process; i.e., an accurate weight value can be set directly by controlling the voltage imposed on the synapse. Moreover, in all testing phases, there is little change in the weights and memristances of all synapses. In particular, the weights and memristances remain unchanged in S24 because of its inherently strong and stable connection with N4. However, S14 and S34 change depending on the specific activity being performed in the neuron network (Fig. 16). When memory is lost, the weights of all the corresponding synapses will decrease; conversely, the weights will increase when memory is constructed/reconstructed.
Generally, the simulated results effectively reveal the characteristics of associative memory, such as associative memory construction, loss, reconstruction, and change. Moreover, they validate the feasibility of using SET spiking neurons in a network; that is, a neural network with a specific use can be constructed based on SET spiking neurons and memristor synapses. A neural network system is built herein from four neurons based on two-neuron connections. Thus, the system based on SETs and memristors exhibits scalability at a certain level.
More precisely, every SET neuron has its own independent driving source; therefore, pulses are hardly lost within a neuron, and signal transmission depends mostly on the synapse weight, a characteristic that suits multiple-layer neural network design. After building special-function neural networks from the three basic modules (SET neuron, memristor synapse, and weight adjustment circuit) according to the design goal, larger-scale neural networks with more layers and inputs could be constructed from these basic function networks. As a result, the scale of such neural networks could be enlarged further. Furthermore, the SET and the memristor are nanoscale, low-power-consumption devices, which also favors the scalability of the system.

5 Conclusions

An SNN based on SETs and memristor synapses is proposed, and detailed associative memory-related processes are implemented therein. More precisely, a SET model with appropriate parameters is used to construct a spiking neuron. Meanwhile, the memristor synapse is modified; specifically, a new window function is incorporated into the memristor model, considerably enhancing the linearity of the synapses. Moreover, using PSPICE simulations, two SET-based spiking neurons connected via a memristor synapse are tested; the results show that the neurons and memristors perform well in the simulated circuit. Furthermore, an SNN circuit is designed based on a previous two-neuron simulation, being constructed from SET spiking neurons and memristor synapses; then, associative memory activities are realized in the circuit simulation. These simulations provide not only an effective neural network circuit design but also an approach for combined application of SETs and memristors. Further work should concentrate on optimizing single SET spiking neurons for more useful applications.

Acknowledgements

This work is jointly supported by the National Natural Science Foundation of China (nos. 61763017, 51665019), Scientific Research Plan Projects of Jiangxi Education Department (no. GJJ150621), Natural Science Foundation of Jiangxi Province (nos. 20161BAB202053, 20161BAB206145), and Innovation Fund for Graduate Students in Jiangxi Province (grant no. YC2017-S302).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
2. Pês, B.D.S., Guimarães, J.G., Bonfim, M.J.D.C.: A modified nanoelectronic spiking neuron model. J. Comput. Electron. 16, 98–105 (2017)
3. Bowen, T., Roth, R.F.: Design of a scintillation counter K+ detector for a bubble chamber. IRE Trans. Nucl. Sci. 9, 340–344 (1962)
4. Jiang, J., Trundle, P., Ren, J.: Medical image analysis with artificial neural networks. Comput. Med. Imaging Graph. 34, 617–631 (2010)
5. Guimarães, J.G., Romariz, A.R.S.: Bio-inspired oscillators with single-electron transistors: circuit simulation and input encoding example. J. Comput. Theor. Nanosci. 10, 2563–2567 (2013)
6. Cantley, K.D., Subramaniam, A., Stiegler, H.J., Chapman, R.A., Vogel, E.M.: Neural learning circuits utilizing nano-crystalline silicon transistors and memristors. IEEE Trans. Neural Netw. Learn. 23, 565–573 (2012)
9. Ponulak, F., Kasinski, A.: Introduction to spiking neural networks: information processing, learning and applications. Acta Neurobiol. Exp. 71, 409–433 (2011)
10. Cristini, A., Salerno, M., Susi, G.A.: Continuous-time spiking neural network paradigm. In: Bassis, S., Esposito, A., Morabito, F. (eds.) Advances in Neural Networks: Computational and Theoretical Issues. Smart Innovation, Systems and Technologies, pp. 49–60. Springer, Cham (2015)
12. Izhikevich, E.M.: Simple model of spiking neurons. IEEE Trans. Neural Netw. 14, 1569–1572 (2003)
13. Kornijcuk, V., Lim, H., Seok, J.Y., Kim, G., Kim, S.K., Kim, I., Choi, B.J., Jeong, D.S.: Leaky integrate-and-fire neuron circuit based on floating-gate integrator. Front. Neurosci. 10, 212–227 (2016)
14. Tang, Y., Nyengaard, J.R., De Groot, D.M., Gundersen, H.J.: Total regional and global number of synapses in the human brain neocortex. Synapse 41, 258–273 (2001)
15. Hawkins, J., Blakeslee, S.: On Intelligence: How a New Understanding of the Brain Will Lead to the Creation of Truly Intelligent Machines. MacMillan, Basingstoke (2007)
19. Sharifi, M.J.: A theoretical study of the performance of a single-electron transistor buffer. IEICE Trans. Electron. E94-C, 1105–1111 (2011)
21. Lientschnig, G., Weymann, I., Hadley, P.: Simulating hybrid circuits of single-electron transistors and field-effect transistors. Jpn. J. Appl. Phys. 42, 6467–6472 (2003)
22. Jain, A., Ghosh, A., Singh, N.B., Sarkar, S.K.: Stability and reliability analysis of hybrid CMOS-SET circuits—a new approach. J. Comput. Theor. Nanosci. 11, 2519–2525 (2014)
24. Ghosh, A., Jain, A., Singh, N.B., Sarkar, S.K.: Design and implementation of SET-CMOS hybrid half subtractor. In: IEEE India Conference, pp. 1–4 (2014)
26. Chua, L.O.: Memristor—the missing circuit element. IEEE Trans. Circuit Theory 18, 507–519 (1971)
27. Strukov, D.B., Snider, G.S., Stewart, D.R., Williams, R.S.: The missing memristor found. Nature 453, 80–83 (2008)
30. Gupta, I., Serb, A., Khiat, A., Prodromakis, T.: Towards a memristor-based spike-sorting platform. In: IEEE Biomedical Circuits and Systems Conference, Shanghai (2016)
31. Jo, S.H., Chang, T., Ebong, I., Bhadviya, B.B.: Nanoscale memristor device as synapse in neuromorphic systems. Nano Lett. 10, 1297–1301 (2010)
32. Kim, H., Sah, M.P., Yang, C., Roska, T., Chua, L.O.: Memristor bridge synapses. Proc. IEEE 100, 2061–2070 (2012)
33. Wu, X.Y., Saxena, V., Zhu, K.H.: A CMOS spiking neuron for dense memristor-synapse connectivity for brain-inspired computing. In: International Joint Conference on Neural Networks, Killarney (2015)
34. Adhikari, S.P., Kim, H., Budhathoki, R.K., Yang, C., Chua, L.O.: A circuit-based learning architecture for multilayer neural networks with memristor bridge synapses. IEEE Trans. Circuits Syst. I Regul. Pap. 62, 215–223 (2015)
35. Anasane, K.J., Kshirsagar, U.A.: Memristor MOS content addressable memory (MCAM) design using 22 nm VLSI technology. Int. J. Adv. Res. Comput. Commun. Eng. 4, 189–194 (2015)
36. Tabassum, S., Parveen, F., Rashid, A.M.H.: Low power high speed ternary content addressable memory design using 8 MOSFETs and 4 memristors: hybrid structure. In: 8th International Conference on Electrical and Computer Engineering, Dhaka (2014)
37. Yu, Y.S., Oh, J.H., Hwang, S.W., Ahn, D.: Equivalent circuit approach for single electron transistor model for efficient circuit simulation by SPICE. Electron. Lett. 38, 850–852 (2002)
40. Boubaker, A., Troudi, M., Sghaier, N., Souifi, A., Baboux, N., Kalboussi, A.: Electrical characteristics and modelling of multi-island single-electron transistor using SIMON simulator. Microelectron. J. 40, 543–546 (2009)
42. Chanthbouala, A., Garcia, V., Cherifi, R.O., Bouzehouane, K., Fusil, S., Moya, X., Xavier, S., Yamada, H., Deranlot, C., Mathur, N.D., Bibes, M., Barthélémy, A., Grollier, J.: A ferroelectric memristor. Nat. Mater. 11, 860–864 (2012)
43. Kim, D.J., Lu, H., Ryu, S., Bark, C.W., Eom, C.B., Tsymbal, E.Y., Gruverman, A.: Ferroelectric tunnel memristor. Nano Lett. 12, 5697–5702 (2012)
44. Bao, B.C.: Introduction to Memristor Circuit. Science Press, Changzhou (2014)
46. Biolek, Z., Biolek, D., Biolkova, V.: SPICE model of memristor with nonlinear dopant drift. Radioengineering 18, 210–214 (2009)
47. Prodromakis, T., Peh, B.P., Papavassiliou, C., Toumazou, C.: A versatile memristor model with nonlinear dopant kinetics. IEEE Trans. Electron Devices 58, 3099–3105 (2011)
48. Zhou, E.R., Fang, L., Liu, R.L., Tang, Z.S.: An improved memristor model for brain-inspired computing. Chin. Phys. B 26, 537–543 (2017)
50. Cantley, K.D., Subramaniam, A., Stiegler, H.J., Chapman, R.A., Vogel, E.M.: SPICE simulation of nanoscale non-crystalline silicon TFTs in spiking neuron circuits. In: IEEE International Midwest Symposium on Circuits and Systems, Seattle, WA, pp. 1202–1205 (2010)
51. Pavlov, I.P.: Conditioned reflex: an investigation of the physiological activity of the cerebral cortex. Ann. Neurosci. 8, 136–141 (2010)