Published in: Neural Processing Letters 2/2019

Open Access 27.08.2018

Stability of Inertial Neural Network with Time-Varying Delays Via Sampled-Data Control

Authors: Jingfeng Wang, Lixin Tian


Abstract

In this paper, the stability problem is studied for inertial neural networks with time-varying delays. The sampled-data control method is employed for the system design. First, by choosing a proper variable substitution, the original system is transformed into first-order differential equations. Then, an input delay approach is applied to deal with the stability of the sampled-data system. Based on the Lyapunov function method, several sufficient conditions are derived to guarantee the global stability of the equilibrium. Furthermore, when an error-feedback control term is applied to the slave neural network, parallel criteria for the synchronization of the master and slave neural networks are also obtained. Finally, some examples are given to illustrate the theoretical results.

1 Introduction

Recurrent neural networks (RNNs), especially Hopfield neural networks, cellular neural networks (CNNs), Cohen–Grossberg neural networks (CGNNs) and bidirectional associative memory (BAM) neural networks, have been extensively investigated during the past years owing to their potential applications in pattern recognition, associative memories, signal processing, fixed-point computations, and so on. In the design of neural networks, dynamical properties such as the stability of the networks play important roles, and there is a large body of literature on the stability of neural networks [1–14].
It has been confirmed that time delay, which is an inherent feature of signal transmission between neurons, is one of the main sources of oscillation, divergence or instability in neural networks. Therefore, stability analysis for neural networks with time delays has been an attractive subject of research in the past few years [1–21]. Note that most previous studies focused on neural networks involving only the first derivative of the states, whereas it is also of significant importance to introduce an inertial term, or equivalently the influence of inductance, into artificial neural networks, because the inertial term is considered a critical tool for generating complicated bifurcation behavior and chaos [22, 23]. Babcock and Westervelt pointed out that the dynamics may become complex when the neuron couplings contain an inertial nature [24]. Therefore, the dynamical characteristics of inertial neural networks have been investigated extensively [15–20, 25–27]. For instance, resorting to the homeomorphism and inequality techniques, the stability of inertial Cohen–Grossberg neural networks with time delays was studied in [16, 17]. In [18], matrix measure strategies were used to analyze the stability and synchronization of inertial BAM neural networks with time delays. The stability of an inertial BAM neural network with time-varying delay was studied via impulsive control in [19].
As we all know, external forcing or stimuli plays a very important role in nonlinear systems. By adding certain types of forcing, rich phenomena can be observed in physical and ecological systems, and dynamic features, including stability types, can be greatly affected by external forcing. For master–slave neural networks, a control term can be added to the slave system in order to guarantee the synchronization between the master system and the slave system. There are many control strategies, such as impulsive control [7–9, 19, 25], feedback control [10, 28], intermittent control [11, 12, 15], sampled-data control [13, 14] and so on, to stabilize RNNs. It should be noted that sampled-data control drastically reduces the amount of transmitted information and increases the efficiency of bandwidth usage, because its control signals are kept constant during the sampling period and are allowed to change only at the sampling instants. On the other hand, due to the rapid growth of digital hardware technologies, the sampled-data control method is more efficient, secure and useful. Owing to these advantages, there have been many results on the sampled-data control of finite-dimensional systems over the past decades [13, 14, 29–37]. For example, the stabilization of BAM neural networks with time-varying delays in the leakage terms was studied via sampled-data control [14]. Using sampled-data control, the asymptotical synchronization problem of chaotic Lur’e systems with time delays was investigated in [31]. Based on the fuzzy-model-based control approach and the LMI technique, several conditions were derived to guarantee the stability of T-S fuzzy delayed neural networks with Markovian jumping parameters via sampled-data control [33].
Motivated by the above discussions, this paper focuses on the fundamental problem of the stability of a class of inertial neural networks with time-varying delays via sampled-data control. To the best of our knowledge, this extension has not been reported in the literature at the present stage. The main contributions of this paper are as follows. (1) We make the first attempt to address the sampled-data stability control problem for inertial neural networks with time-varying delays. (2) Because sampled-data control signals are kept constant during the sampling period and are allowed to change only at the sampling instants, the control signals have a discontinuous form. For the sake of analysis, the sampled system is changed into a continuous time-delay system by using the input delay approach. (3) A sampled-data controller is designed to achieve the synchronization of master–slave neural networks.
The remainder of the paper is organized as follows. Section 2 describes some preliminaries, including a definition, an assumption and lemmas. The main result is stated in Sect. 3. An application concerning the synchronization of master–slave systems is investigated in Sect. 4. Section 5 gives some numerical examples to verify the theoretical analysis. Finally, conclusions are drawn in Sect. 6.

2 Preliminaries

For the sake of convenience, we introduce some notations. \(R^n\) denotes the n-dimensional Euclidean space. The notation \(X > Y\) (\(X \ge Y\)), where X and Y are symmetric matrices, means that \(X-Y\) is positive definite (positive semidefinite). The superscript “T” represents the transpose, and diag{...} stands for a block-diagonal matrix. For a symmetric matrix, “\(*\)” denotes the symmetric elements in it. Let \(\lambda _{\max } (P)\) and \(\lambda _{\min } (P)\) be the maximum and minimum eigenvalues of the matrix P, respectively.
In this paper, we consider the following inertial neural network:
$$\begin{aligned} \frac{d^2u_i (t)}{dt^2}= & {} - a_i \frac{du_i (t)}{dt} - b_i u_i (t) +\sum \limits _{j = 1}^n {c_{ij} g_j (u_j (t))}\nonumber \\&+ \sum \limits _{j = 1}^n {d_{ij}g_j (u_j (t - \tau (t)))}+ I_i ,\quad i = 1,2, \ldots ,n, \end{aligned}$$
(2.1)
where \(u_i (t)\) denotes the state variable of the ith neuron at time t, and the second derivative of \(u_i (t)\) is called the inertial term of system (2.1); \(a_i> 0\), \(b_i > 0\) are constants, \(c_{ij}\), \(d_{ij}\) are constants denoting the connection weights of the neural network, \(g_j ( \cdot )\) denotes the activation function of the jth neuron, \(\tau (t)\) is the time-varying delay, which satisfies \(0 \le \tau (t) \le \tau \), \({\dot{\tau }}(t) \le {\bar{\tau }} < 1\), and \(I_i \) is the external input on the ith neuron from the neural field.
Assumption 1
For any \( x_1 \ne x_2 \in R \), the neuron activation functions \(g_i ( \cdot )\) satisfy
$$\begin{aligned} l_i^ - \le \frac{g_i (x_1 ) - g_i (x_2 )}{x_1 - x_2 } \le l_i^ + \,,\quad i = 1,2, \ldots ,n, \end{aligned}$$
(2.2)
with \(l_i^ - \,,\,l_i^ + \) being constant scalars.
Denote \( L^ - = diag\{l_1^- ,l_2^ - , \ldots ,l_n^ - \} \) and \( L^ + = diag\{l_1^ + ,l_2^ + , \ldots ,l_n^ + \} \) for convenience.
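For instance, the commonly used activation \(g(x) = \tanh (x)\) satisfies (2.2) with \(l^ - = 0\) and \(l^ + = 1\); the following minimal numerical sketch (the activation and the sample points are illustrative assumptions, not taken from the examples of Sect. 5) checks this sector bound on random pairs of points.

```python
import numpy as np

# Numerical illustration of Assumption 1 for the (assumed) activation g(x) = tanh(x):
# every difference quotient (g(x1) - g(x2)) / (x1 - x2) should lie in [l^-, l^+] = [0, 1].
rng = np.random.default_rng(1)
x1 = rng.uniform(-5.0, 5.0, 10000)
x2 = x1 + rng.uniform(0.1, 5.0, 10000)       # keep the two points well separated

q = (np.tanh(x1) - np.tanh(x2)) / (x1 - x2)  # difference quotients
print(q.min() >= 0.0, q.max() <= 1.0)        # expected: True True
```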
Next, by introducing the following variable transformation:
$$\begin{aligned} v_i (t) = \frac{du_i (t)}{dt} + \xi _i u_i (t),\quad i = 1,2, \ldots ,n, \end{aligned}$$
the original neural network (2.1) can be written as
$$\begin{aligned} \frac{du_i (t)}{dt}= & {} -\, \xi _i u_i (t) + v_i (t), \nonumber \\ \frac{dv_i (t)}{dt}= & {} -\, [b_i \, - \xi _i (a_i - \xi _i )]u_i (t) - (a_i - \xi _i )v_i (t) + \sum \limits _{j = 1}^n {c_{ij} g_j (u_j (t))} \nonumber \\&+\,\sum \limits _{j = 1}^n {d_{ij} g_j (u_j (t - \tau (t)))} + I_i, \end{aligned}$$
(2.3)
for \(i = 1,2, \ldots ,n.\)
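Indeed, differentiating the substitution and using (2.1) gives
$$\begin{aligned} \frac{dv_i (t)}{dt} = \frac{d^2u_i (t)}{dt^2} + \xi _i \frac{du_i (t)}{dt} = - (a_i - \xi _i )\frac{du_i (t)}{dt} - b_i u_i (t) + \sum \limits _{j = 1}^n {c_{ij} g_j (u_j (t))} + \sum \limits _{j = 1}^n {d_{ij} g_j (u_j (t - \tau (t)))} + I_i , \end{aligned}$$
and replacing \(\frac{du_i (t)}{dt}\) by \(v_i (t) - \xi _i u_i (t)\) on the right-hand side yields the second equation of (2.3).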
Denoting \(u(t) = (u_1 (t),u_2 (t), \ldots ,u_n (t))^T\) and \(v(t) = (v_1 (t),v_2 (t), \ldots ,v_n (t))^T\), the network (2.3) can be written in vector form as
$$\begin{aligned} {\frac{du(t)}{dt}}= & {} - \varLambda u(t) + v(t), \nonumber \\ {\frac{dv(t)}{dt}}= & {} - Au(t) - Bv(t) + Cg(u(t)) + Dg(u(t - \tau (t))) + I, \end{aligned}$$
(2.4)
where \(\varLambda = diag\{\xi _1 ,\xi _2 , \ldots ,\xi _n \}\), \(A = diag\{\alpha _1 ,\alpha _2 , \ldots ,\alpha _n \}\), \(B = diag\{\beta _1 ,\beta _2 , \ldots ,\beta _n \}\), \( C = (c_{ij} )_{n\times n} ,\,D = (d_{ij} )_{n\times n},\, I = diag\{I_1 ,I_2 , \ldots ,I_n \}, \quad \alpha _i = b_i - \xi _i (a_i - \xi _i ),\,\beta _i = a_i - \xi _i.\)
Remark 1
In order to obtain less conservative results, one can use the more general transformation \( v_i (t) = \zeta _i \frac{du_i (t)}{dt} + \xi _i u_i (t),\quad i = 1,2, \ldots ,n.\) Here, for simplicity, we take \(\zeta _i = 1\) and keep only the parameter \(\xi _i \) in the transformation.
Definition 1
The point \((u^ * ,v^ * )\) with \(u^ * = (u_1^ * ,u_2^ * ,\ldots ,u_n^ * )^T,\,v^ * = (v_1^ * ,v_2^ * ,\ldots ,v_n^ * )^T\) is called an equilibrium of the neural network (2.4) if
$$\begin{aligned}&{ - \varLambda u^ * + v^ * = 0,} \nonumber \\&{ - Au^*- Bv^*+ Cg(u^ * ) + Dg(u^ * ) + I = 0.} \end{aligned}$$
(2.5)
Let \((u^ * ,v^ * )\) be the equilibrium point of (2.4). For the purpose of simplicity, we can shift the equilibrium point \((u^ * ,v^ * )\) to the origin by letting \(x = u - u^ * ,\,y = v - v^ * \), then the network (2.4) can be transformed into
$$\begin{aligned}&{\frac{dx(t)}{dt}} = - \varLambda x(t) + y(t), \nonumber \\&{\frac{dy(t)}{dt}} = - Ax(t) - By(t) + Cf(x(t)) + Df(x(t - \tau (t))), \end{aligned}$$
(2.6)
where \(x(t) = (x_1 (t),x_2 (t), \ldots ,x_n (t))^T \), \( y(t) = (y_1 (t),y_2 (t), \ldots ,y_n (t))^T \) are the state vectors of the transformed networks. It follows from (2.2) that the function \(f(x) = g(x + u^ * ) - g(u^ * )\) satisfies
$$\begin{aligned} L^ - \le \frac{f(x)}{x} \le L^ +, \end{aligned}$$
(2.7)
for any \(x \ne 0 \in R\).
Remark 2
From (2.7), we can obtain \( - [f(x(t)) - L^ - x(t)]^TU_1[f(x(t)) - L^ + x(t)] \ge 0 \) and \( - [f(x(t - \tau (t))) - L^ - x(t - \tau (t))]^TU_2[f(x(t - \tau (t))) - L^ + x(t - \tau (t))] \ge 0 , \) where \( U_1, U_2 \) are positive diagonal matrices.
In the following, we consider the sampled-data stabilization problem of the neural network (2.4). Let \(\mu (t) = Kx(t_k )\), \(\nu (t) = My(t_k )\), where \(K, M \in R^{n\times n}\) are the sampled-data feedback controller gain matrices to be designed, and \(t_k \) denotes the sampling instants, which satisfy \(0 = t_0< t_1< \cdots< t_k < \cdots \) and \(\mathop {\lim }\nolimits _{k \rightarrow + \infty } t_k = + \infty \). Moreover, we assume that there exists a positive constant h such that \(t_{k + 1} - t_k \le h\) for all \(k \in N\).
Under the sampled-data control, (2.6) can be modeled as
$$\begin{aligned} {\frac{dx(t)}{dt}}= & {} - \varLambda x(t) + y(t) + \mu (t), \nonumber \\ {\frac{dy(t)}{dt}}= & {} - Ax(t) - By(t) + Cf(x(t)) + Df(x(t - \tau (t))) + \nu (t). \end{aligned}$$
(2.8)
Letting \(h(t) = t - t_k \) for \(t \in [t_k,\,t_{k + 1} )\), we have
$$\begin{aligned} {\frac{dx(t)}{dt}}= & {} - \varLambda x(t) + y(t) + Kx(t - h(t)), \nonumber \\ {\frac{dy(t)}{dt}}= & {} - Ax(t) - By(t) + Cf(x(t)) + Df(x(t - \tau (t))) + My(t - h(t)). \end{aligned}$$
(2.9)
The initial conditions of (2.9) are given as
$$\begin{aligned} {\left\{ \begin{array}{ll} {x(t) = \varphi (t),\quad t \in [ - {\bar{h}},0]}, \\ {y(t) = \phi (t),\quad t \in [ - h,0],\quad } \\ \end{array}\right. } \end{aligned}$$
(2.10)
where \({\bar{h}} = \max \{\,h,\tau \}.\)
Remark 3
Based on the input delay approach [14, 33, 38], system (2.8) is rewritten as the continuous-time delay system (2.9), since it is difficult to handle the stabilization problem of (2.8) directly with the discrete terms \(\mu (t) = Kx(t_k ), \nu (t) = My(t_k )\).
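As a quick illustration of this reformulation, the following minimal sketch (the sampling period and the test trajectory are illustrative assumptions) checks numerically that the held value \(x(t_k)\) coincides with the delayed value \(x(t - h(t))\), where \(h(t) = t - t_k\) is a sawtooth function bounded by h.

```python
import numpy as np

# Minimal sketch of the input delay approach (Remark 3): on each interval [t_k, t_{k+1})
# the zero-order-hold value x(t_k) equals the delayed value x(t - h(t)) with h(t) = t - t_k.
# The sampling period h and the test trajectory x(.) below are illustrative assumptions.
h = 0.02                               # sampling period, t_{k+1} - t_k <= h
t = np.arange(0.0, 0.2, 1e-4)          # dense time grid

t_k = np.floor(t / h) * h              # latest sampling instant not exceeding t
h_t = t - t_k                          # artificial input delay h(t) = t - t_k

x = lambda s: np.sin(5.0 * s)          # any scalar state trajectory
held = x(t_k)                          # sampled-and-held signal x(t_k)
delayed = x(t - h_t)                   # the same signal written as x(t - h(t))

print(np.max(np.abs(held - delayed)))  # ~0: the two representations coincide
print(h_t.max())                       # the sawtooth delay stays within one period h
```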
In order to investigate the stabilization problem of the network (2.9), the following lemmas are needed.
Lemma 1
[39] For any constant matrix \(M > 0\), any scalars a and b with \(a < b\), and a vector function \(x(t):[a,b] \rightarrow R^n\) such that the integrals concerned are well defined, the following inequality holds:
$$\begin{aligned} \left[ \int _a^b x(s)ds\right] ^TM \left[ \int _a^b x(s)ds\right] \le (b - a)\int _a^b {x^T(s)Mx(s)ds} . \end{aligned}$$
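A quick numerical sanity check of Lemma 1 can be sketched as follows; the matrix M, the vector function x(s) and the integration grid are illustrative assumptions, and the integrals are only approximated by the trapezoidal rule.

```python
import numpy as np

# Numerical sanity check of Lemma 1 (Jensen's integral inequality); M and x(s) are
# illustrative choices, and the integrals are only approximated on a finite grid.
rng = np.random.default_rng(0)
n, a, b = 3, 0.0, 2.0
s = np.linspace(a, b, 2001)

R = rng.standard_normal((n, n))
M = R @ R.T + n * np.eye(n)                                # a positive-definite matrix

x = np.vstack([np.sin(2 * s), np.cos(3 * s), s**2 - s])    # x(s), shape (n, len(s))

ix = np.trapz(x, s, axis=1)                                # int_a^b x(s) ds, componentwise
lhs = ix @ M @ ix                                          # [int x]^T M [int x]
rhs = (b - a) * np.trapz(np.einsum('it,ij,jt->t', x, M, x), s)
print(lhs <= rhs, lhs, rhs)                                # expected: True ...
```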
Lemma 2
[19] If \(P \in R^{n\times n}\) is a symmetric positive definite matrix and \(Q \in R^{n\times n}\) is a symmetric matrix, then
$$\begin{aligned} \lambda _{\min } (P^{ - 1}Q)x^TPx \le x^TQx \le \lambda _{\max } (P^{ - 1}Q)x^TPx,\quad x \in R^n . \end{aligned}$$
Lemma 3
[40] If \(\omega (t):[t_0 , + \infty ) \rightarrow R \) is uniformly continuous, and \(\int _{t_0 }^t {\omega (s)ds} \) has a finite limit as \(t \rightarrow + \infty \), then \(\mathop {\lim }\nolimits _{t \rightarrow + \infty } \omega (t) = 0\).

3 Main Results

In this section, we shall establish some sufficient conditions to ensure the stability of network (2.9).
Theorem 1
Under Assumption 1, if there exist positive-definite matrices \( P_i (i = 1,\ldots ,9) \), \( Q_i (i = 1,2) \), positive-definite diagonal matrices \( U_i (i = 1,2) \) and matrices X, Y such that the following LMI holds:
$$\begin{aligned} \varPi = \left[ {{\begin{array}{*{20}c} {\pi _{1,1} } &{}\quad {\pi _{1,2} } &{}\quad {\pi _{1,3} } &{}\quad 0 &{}\quad {\pi _{1,5} } &{}\quad 0 &{}\quad {\pi _{1,7} } &{}\quad {\pi _{1,8} } &{}\quad 0 &{}\quad 0 &{}\quad {\pi _{1,11} } &{}\quad 0 \\ * &{}\quad {\pi _{2,2} } &{}\quad 0 &{}\quad 0 &{}\quad {\pi _{2,5} } &{}\quad 0 &{}\quad {\pi _{2,7} } &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad {\pi _{3,3} } &{}\quad {\pi _{3,4} } &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad {\pi _{3,12} } \\ * &{}\quad * &{}\quad * &{}\quad {\pi _{4,4} } &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\pi _{5,5} } &{}\quad {\pi _{5,6} } &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\pi _{6,6} } &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\pi _{7,7} } &{}\quad {\pi _{7,8} } &{}\quad {\pi _{7,9} } &{}\quad 0 &{}\quad {\pi _{7,11} } &{}\quad {\pi _{7,12} } \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\pi _{8,8} } &{}\quad {\pi _{8,9} } &{}\quad 0 &{}\quad {\pi _{8,11} } &{}\quad {\pi _{8,12} } \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\pi _{9,9} } &{}\quad {\pi _{9,10} } &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\pi _{10,10} } &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\pi _{11,11} } &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\pi _{12,12} } \\ \end{array} }} \right] < 0\nonumber \\ \end{aligned}$$
(3.11)
where
$$\begin{aligned}&\pi _{1,1} = P_3 - P_4 + P_5 - P_7 - (1 - {\bar{\tau }})P_9 - L^ - U_1 L^ + - 2Q_1 \varLambda , \\&\pi _{1,2} = P_1 - Q_1 - Q_1 \varLambda , \pi _{1,3} = P_4 + (1 - {\bar{\tau }})P_9 , \pi _{1,5} = P_7 + X, \pi _{1,7} = Q_1 - Q_2 A, \\&\pi _{1,8} = - Q_2 A, \pi _{1,11} = \frac{1}{2}U_1 (L^ - + L^ + ), \\&\pi _{2,2} = \tau ^2P_4 + h^2P_7 + \tau ^2P_9 - 2Q_1 , \pi _{2,5} = X, \pi _{2,7} = Q_1 , \\&\pi _{3,3} = - (1 - {\bar{\tau }})P_3 - 2P_4 - (1 - {\bar{\tau }})P_9 - L^ - U_2 L^ + , \pi _{3,4} = P_4 , \\&\pi _{3,12} = \frac{1}{2}U_2 (L^ - + L^ + ), \\&\pi _{4,4} = - P_4 , \\&\pi _{5,5} = - 2P_7 , \pi _{5,6} = P_7 , \\&\pi _{6,6} = - P_5 - P_7 , \\&\pi _{7,7} = P_6 - P_8 - 2Q_2 B, \pi _{7,8} = P_2 - Q_2 - Q_2 B, \pi _{7,9} = P_8 + Y, \\&\pi _{7,11} = Q_2C , \pi _{7,12} = Q_2D , \\&\pi _{8,8} = h^2P_8-2Q_2 , \pi _{8,9} = Y , \pi _{8,11} = Q_2C , \pi _{8,12} = Q_2D , \\&\pi _{9,9} = -2P_8 , \pi _{9,10} = P_8 , \\&\pi _{10,10} = -P_6 - P_8 , \\&\pi _{11,11} = -U_1 , \\&\pi _{12,12} = -U_2 , \end{aligned}$$
then the equilibrium of network (2.9) is globally asymptotically stable. Moreover, the desired controller gain matrices are given as \( K = Q_1^{ - 1} X,\,M = Q_2^{ - 1} Y \) .
Proof
Consider the following Lyapunov–Krasovskii functional:
$$\begin{aligned} V(t) = \sum \limits _{i = 1}^6 {V_i (t)} \end{aligned}$$
(3.12)
where
$$\begin{aligned} \begin{array}{l} V_1 (t) = x^T(t)P_1 x(t) + y^T(t)P_2 y(t),\\ V_2 (t) = \int _{t - \tau (t)}^t {x^T(s)P_3 x(s)ds},\\ V_3 (t) = \tau \int _{ - \tau }^0 {\int _{t + \theta }^t {{\dot{x}}^T(s)P_4 {\dot{x}}(s)dsd\theta } } ,\\ V_4 (t) = \int _{t - h}^t {x^T(s)P_5 x(s)ds} + \int _{t - h}^t {y^T(s)P_6 y(s)ds},\\ V_5 (t) = h\int _{ - h}^0 {\int _{t + \theta }^t {{\dot{x}}^T(s)P_7 \dot{x}(s)dsd\theta } } + h\int _{ - h}^0 {\int _{t + \theta }^t {{\dot{y}}^T(s)P_8 {\dot{y}}(s)dsd\theta } },\\ V_6 (t) = \tau \int _{ - \tau (t)}^0 {\int _{t + \theta }^t {{\dot{x}}^T(s)P_9 {\dot{x}}(s)dsd\theta } }.\\ \end{array} \end{aligned}$$
Then, taking the derivative of \(V(t)\) along the trajectories of system (2.9), we have
$$\begin{aligned} {\dot{V}}_1 (t)= & {} 2x^T(t)P_1 {\dot{x}}(t) + 2y^T(t)P_2 {\dot{y}}(t), \end{aligned}$$
(3.13)
$$\begin{aligned} {\dot{V}}_2 (t)\le & {} x^T(t)P_3 x(t) - (1 - {\bar{\tau }})x^T(t - \tau (t))P_3 x(t - \tau (t)), \end{aligned}$$
(3.14)
$$\begin{aligned} {\dot{V}}_3 (t)= & {} \tau ^2{\dot{x}}^T(t)P_4 {\dot{x}}(t) - \tau \int _{t - \tau }^t {{\dot{x}}^T(s)P_4 {\dot{x}}(s)ds},\nonumber \\\le & {} \tau ^2{\dot{x}}^T(t)P_4 {\dot{x}}(t) - 2x^T(t - \tau (t))P_4 x(t - \tau (t)) + 2x^T(t - \tau (t))P_4 x(t - \tau ) \nonumber \\&-\, x^T(t - \tau )P_4 x(t - \tau ) - x^T(t)P_4 x(t) + 2x^T(t)P_4 x(t - \tau (t)) \end{aligned}$$
(3.15)
$$\begin{aligned} {\dot{V}}_4 (t)= & {} x^T(t)P_5 x(t) - x^T(t - h)P_5 x(t - h) + y^T(t)P_6 y(t) - y^T(t - h)P_6 y(t - h),\nonumber \\ \end{aligned}$$
(3.16)
$$\begin{aligned} {\dot{V}}_5 (t)= & {} h^2{\dot{x}}^T(t)P_7 {\dot{x}}(t) - h\int _{t - h}^t {\dot{x}^T(s)P_7 {\dot{x}}(s)ds} + h^2{\dot{y}}^T(t)P_8 {\dot{y}}(t) \nonumber \\&-\, h\int _{t - h}^t {{\dot{y}}^T(s)P_8 {\dot{y}}(s)ds}\nonumber \\\le & {} h^2{\dot{x}}^T(t)P_7 {\dot{x}}(t) - 2x^T(t - h(t))P_7 x(t - h(t)) + 2x^T(t - h(t))P_7 x(t - h) \nonumber \\&-\, x^T(t - h)P_7 x(t - h)- x^T(t)P_7 x(t) + 2x^T(t)P_7 x(t - h(t)) \nonumber \\&+\,h^2{\dot{y}}^T(t)P_8 {\dot{y}}(t)- 2y^T(t - h(t))P_8 y(t - h(t)) + 2y^T(t -h(t))P_8 y(t - h)\nonumber \\&-\, y^T(t - h)P_8 y(t - h) - y^T(t)P_8 y(t) + 2y^T(t)P_8 y(t - h(t)), \end{aligned}$$
(3.17)
$$\begin{aligned} {\dot{V}}_6 (t)\le & {} \tau ^2{\dot{x}}^T(t)P_9 {\dot{x}}(t) - \tau (1 - \bar{\tau })\int _{t - \tau (t)}^t {{\dot{x}}^T(s)P_9 {\dot{x}}(s)ds}\nonumber \\\le & {} \tau ^2{\dot{x}}^T(t)P_9 {\dot{x}}(t) - (1 - {\bar{\tau }})x^T(t)P_9 x(t) + 2(1 - {\bar{\tau }})x^T(t)P_9 x(t - \tau (t)) \nonumber \\&-\, (1 - {\bar{\tau }})x^T(t - \tau (t))P_9 x(t - \tau (t)). \end{aligned}$$
(3.18)
In addition,
$$\begin{aligned} 0= & {} 2[x^T(t) + {\dot{x}}^T(t)]Q_1 [ - {\dot{x}}(t) + {\dot{x}}(t)] \nonumber \\= & {} 2[x^T(t) + {\dot{x}}^T(t)]Q_1 [ - {\dot{x}}(t) - \varLambda x(t) + y(t) + Kx(t - h(t))] \nonumber \\= & {} - 2x^T(t)Q_1 {\dot{x}}(t) - 2x^T(t)Q_1 \varLambda x(t) + 2x^T(t)Q_1 y(t) + 2x^T(t)Q_1 Kx(t - h(t)) \nonumber \\&-\, 2{\dot{x}}^T(t)Q_1 {\dot{x}}(t) - 2{\dot{x}}^T(t)Q_1 \varLambda x(t) + 2\dot{x}^T(t)Q_1 y(t) + 2{\dot{x}}^T(t)Q_1 Kx(t - h(t)).\nonumber \\ \end{aligned}$$
(3.19)
$$\begin{aligned} 0= & {} 2[y^T(t) + {\dot{y}}^T(t)]Q_2 [ - {\dot{y}}(t) + {\dot{y}}(t)]\nonumber \\= & {} 2[y^T(t) + {\dot{y}}^T(t)]Q_2 [ - {\dot{y}}(t) - Ax(t) - By(t) + Cf(x(t)) + Df(x(t - \tau (t)))\nonumber \\&+\, My(t - h(t))]\nonumber \\= & {} - 2y^T(t)Q_2 {\dot{y}}(t) - 2y^T(t)Q_2 Ax(t) - 2y^T(t)Q_2 By(t) + 2y^T(t)Q_2 Cf(x(t)) \nonumber \\&+\, 2y^T(t)Q_2 Df(x(t - \tau (t))) + 2y^T(t)Q_2 My(t - h(t)) \nonumber \\&-\, 2{\dot{y}}^T(t)Q_2 {\dot{y}}(t) - 2{\dot{y}}^T(t)Q_2 Ax(t) - 2\dot{y}^T(t)Q_2 By(t) + 2{\dot{y}}^T(t)Q_2 Cf(x(t)) \nonumber \\&+ \,2{\dot{y}}^T(t)Q_2 Df(x(t - \tau (t))) + 2{\dot{y}}^T(t)Q_2 My(t - h(t)). \end{aligned}$$
(3.20)
$$\begin{aligned} 0\le & {} - [f(x(t)) - L^ - x(t)]^TU_1 [f(x(t)) - L^ + x(t)] \nonumber \\= & {} - f^T(x(t))U_1 f(x(t)) + x^T(t)U_1 (L^ - + L^ + )f(x(t)) - x^T(t)L^ - U_1 L^ + x(t). \end{aligned}$$
(3.21)
$$\begin{aligned} 0\le & {} - [f(x(t - \tau (t))) - L^ - x(t - \tau (t))]^TU_2 [f(x(t - \tau (t))) - L^ + x(t - \tau (t))] \nonumber \\= & {} - f^T(x(t - \tau (t)))U_2 f(x(t - \tau (t))) + x^T(t - \tau (t))U_2 (L^ - + L^ + )f(x(t - \tau (t))) \nonumber \\&-\, x^T(t - \tau (t))L^ - U_2 L^ + x(t - \tau (t)). \end{aligned}$$
(3.22)
Based on (3.13)–(3.22), let
$$\begin{aligned} \xi (t)= & {} \left[ x^T(t),{\dot{x}}^T(t), x^T(t - \tau (t)), x^T(t - \tau ), x^T(t - h(t)), x^T(t - h), \right. \nonumber \\&\left. y^T(t), {\dot{y}}^T(t),y^T(t - h(t)), y^T(t - h),f^T(x(t)), f^T(x(t - \tau (t)))\right] ^T \end{aligned}$$
we have
$$\begin{aligned} {\dot{V}}(t) \le \xi ^T(t)\varPi \xi (t). \end{aligned}$$
(3.23)
From (3.23), it can be seen that
$$\begin{aligned} V(t) - \int _0^t {\xi ^T(s)\varPi \xi (s)} ds \le V(0),\quad t \ge 0. \end{aligned}$$
(3.24)
Moreover,
$$\begin{aligned} \begin{aligned} V(0)&\le [\lambda _{max} (P_1 ) + \tau \lambda _{max} (P_3 ) + \frac{1}{2}\tau ^3\lambda _{max} (P_4 ) + h\lambda _{max} (P_5 ) + \frac{1}{2}h^3\lambda _{max} (P_7 ) \\&\quad + \frac{1}{2}\tau ^3\lambda _{max} (P_9 )]\varPhi ^2 + [\lambda _{max} (P_2 ) + h\lambda _{max} (P_6 ) + \frac{1}{2}h^3\lambda _{max} (P_8 )]\varPsi ^2 \\&< +\infty , \end{aligned} \end{aligned}$$
(3.25)
where
$$\begin{aligned} \varPhi = max\{\mathop {sup}\limits _{t \in [ - {\bar{h}},0]} \left\| {\varphi (t)} \right\| ,\mathop {sup}\limits _{t \in [ - {\bar{h}},0]} \left\| {{\dot{\varphi }}(t)} \right\| \},\\ \varPsi = max\{\mathop {sup}\limits _{t \in [ - h,0]} \left\| {\phi (t)} \right\| ,\mathop {sup}\limits _{t \in [ - h,0]} \left\| {{\dot{\phi }}(t)} \right\| \}. \end{aligned}$$
On the other hand, by the definition of V(t), we get
$$\begin{aligned} \begin{aligned} V(t)&\ge x^T(t)P_1 x(t) + y^T(t)P_2 y(t) \\&\ge \lambda _{\min } (P_1 )x^T(t)x(t) + \lambda _{\min } (P_2 )y^T(t)y(t) \\&= \lambda _{\min } (P_1 )\left\| {x(t)} \right\| ^2 + \lambda _{\min } (P_2 )\left\| {y(t)} \right\| ^2 \\&\ge \min \{\lambda _{\min } (P_1 ),\lambda _{\min } (P_2 )\}[\left\| {x(t)} \right\| ^2 + \left\| {y(t)} \right\| ^2]. \end{aligned} \end{aligned}$$
(3.26)
Then, combining (3.24), (3.25) and (3.26), one has
$$\begin{aligned} \left\| {x(t)} \right\| ^2 + \left\| {y(t)} \right\| ^2 \le \frac{V(0)}{\min \{\lambda _{\min } (P_1 ),\lambda _{\min } (P_2 )\}} < + \infty \end{aligned}$$
(3.27)
which demonstrates that the solution of (2.9) is uniformly bounded on \([0, + \infty )\). Next, we shall prove that \((\left\| {x(t)} \right\| ,\left\| {y(t)} \right\| ) \rightarrow (0,0)\) as \(t \rightarrow + \infty \). On the one hand, the boundedness of \(\left\| {{\dot{x}}(t)} \right\| \) and \(\left\| {{\dot{y}}(t)} \right\| \) can be deduced from (2.9) and (3.27). On the other hand, from (3.23), we have
$$\begin{aligned} {\dot{V}}(t) \le \xi ^T(t)\varPi \xi (t) \le \lambda _{\max } (\varPi )(x^T(t)x(t) + y^T(t)y(t)) \end{aligned}$$
(3.28)
Since \(\varPi < 0\), integrating (3.28) and using (3.24) and (3.25) yields \(\int _0^t {x^T(s)x(s)} ds < + \infty \) and \(\int _0^t {y^T(s)y(s)} ds < + \infty \) for all \(t \ge 0\). Together with the uniform continuity of \(\left\| {x(t)} \right\| ^2\) and \(\left\| {y(t)} \right\| ^2\) implied by the boundedness of \({\dot{x}}(t)\) and \({\dot{y}}(t)\), Lemma 3 gives \((\left\| {x(t)} \right\| ,\left\| {y(t)} \right\| ) \rightarrow (0,0)\). That is, the equilibrium point of the system (2.9) is globally asymptotically stable.
\(\square \)
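In practice, a condition such as (3.11) is checked with a semidefinite-programming solver: the unknowns \(P_i, Q_i, U_i, X, Y\) are declared as matrix variables, the block matrix is required to be negative definite, and the gains are then recovered from the change of variables \(X = Q_1 K\), \(Y = Q_2 M\). The following sketch only illustrates this workflow on a much simpler, standard state-feedback LMI (it is not the 12-block matrix \(\varPi \) of Theorem 1); the system data, the tolerance and the analogous substitution \(X = KP\) are assumptions made for illustration.

```python
import numpy as np
import cvxpy as cp

# Generic sketch of LMI-based gain synthesis (not the specific LMI (3.11)):
# find P > 0 and X such that A P + P A^T + B X + X^T B^T < 0, then K = X P^{-1}.
A = np.array([[-2.0, 1.0],
              [ 8.9, -2.95]])              # assumed example data
B = np.eye(2)
n, eps = 2, 1e-3

P = cp.Variable((n, n), symmetric=True)    # Lyapunov matrix variable
X = cp.Variable((n, n))                    # change of variables X = K P

lmi = A @ P + P @ A.T + B @ X + X.T @ B.T
constraints = [P >> eps * np.eye(n), lmi << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

if prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE):
    K = X.value @ np.linalg.inv(P.value)   # recover the feedback gain
    print("feasible, K =\n", K)
else:
    print("LMI infeasible for this data")
```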
Remark 4
Letting \( - L^- = L^+ > 0 \) in Assumption 1, we obtain the corresponding result for inertial neural networks with Lipschitz-type activation functions, which were discussed in [18, 19]. Thus, our results have a wider range of application.
Remark 5
In Theorem 1, the matrix \( \varLambda \) determines whether condition (3.11) can hold, and different \( \xi _i\ (i=1,2,\ldots ,n) \) correspond to different variable transformations. Compared with fixing a particular variable substitution, Theorem 1 is less conservative because \( \varLambda \) is introduced as a free-weighting matrix.
Remark 6
The construction of an appropriate Lyapunov function is one of the difficulties of the Lyapunov theory. The Lyapunov functional constructed here depends on the time delays, which makes our conclusions less conservative.
Remark 7
Synchronization means that the dynamical behaviors of coupled systems reach the same temporal-spatial state. In the real world, the stability and synchronization of chaotic systems are very important in view of their potential applications in various areas, including secure communication, biological systems, optics, information processing and so on [37]. Therefore, based on the proposed results, the synchronization problem of master–slave neural networks is discussed below.

4 Application

In this section, the previous results are applied to the master–slave synchronization problem.
The master neural network is given as
$$\begin{aligned} \frac{d^2u_i (t)}{dt^2}= & {} - a_i \frac{du_i (t)}{dt} - b_i u_i (t) + \sum \limits _{j = 1}^n {c_{ij} g_j (u_j (t))}\nonumber \\&+ \sum \limits _{j = 1}^n {d_{ij} g_j (u_j (t - \tau (t)))} + I_i \end{aligned}$$
(4.29)
and the slave neural network
$$\begin{aligned} \frac{d^2\omega _i (t)}{dt^2}= & {} - a_i \frac{d\omega _i (t)}{dt} - b_i \omega _i (t) + \sum \limits _{j = 1}^n {c_{ij} g_j (\omega _j (t))} \nonumber \\&+ \sum \limits _{j = 1}^n {d_{ij} g_j (\omega _j (t - \tau (t)))} + I_i + J_i (t) \end{aligned}$$
(4.30)
where \(J_i (t)\) is the coupling control term to be designed; the two neural networks may have different initial values. The notations \( a_i, b_i ,c_{ij}, d_{ij}, g_j ( \cdot )\) and \( I_i \) are the same as those in Sect. 2. Let
$$\begin{aligned} {\tilde{u}}_i (t) = \frac{du_i (t)}{dt} + \xi _i u_i (t),\quad i = 1, 2, \ldots ,n, \\ {\tilde{\omega }}_i (t) = \frac{d\omega _i (t)}{dt} + \xi _i \omega _i (t),\quad i = 1, 2, \ldots ,n, \end{aligned}$$
and denote \( u(t) = (u_1 (t),u_2 (t), \ldots ,u_n (t))^T \), \( \tilde{u}(t) = ({\tilde{u}}_1 (t),{\tilde{u}}_2 (t), \ldots ,{\tilde{u}}_n (t))^T \), \( \omega (t) = (\omega _1 (t),\omega _2 (t), \ldots ,\omega _n (t))^T \), \( {\tilde{\omega }}(t) = ({\tilde{\omega }}_1 (t),\tilde{\omega }_2 (t), \ldots ,{\tilde{\omega }}_n (t))^T \), the above two systems can be written as
$$\begin{aligned} {\frac{du(t)}{dt}}= & {} - \varLambda u(t) + {\tilde{u}}(t) \nonumber \\ {\frac{d{\tilde{u}}(t)}{dt}}= & {} - Au(t) - B{\tilde{u}}(t) + Cg(u(t)) + Dg(u(t - \tau (t))) + I \end{aligned}$$
(4.31)
and
$$\begin{aligned} {\frac{d\omega (t)}{dt}}= & {} - \varLambda \omega (t) + {\tilde{\omega }}(t) \nonumber \\ {\frac{d{\tilde{\omega }}(t)}{dt}}= & {} - A\omega (t) - B{\tilde{\omega }}(t) + Cg(\omega (t)) + Dg(\omega (t - \tau (t))) + I + J(t) \end{aligned}$$
(4.32)
where \( \varLambda = diag\{\xi _1 ,\xi _2 , \ldots ,\xi _n \},\)\( A = diag\{\alpha _1 ,\alpha _2 , \ldots ,\alpha _n \},\)\( B = diag\{\beta _1 ,\beta _2 , \ldots ,\beta _n \},\)\( \alpha _i = b_i \, - \xi _i (a_i - \xi _i ),\)\( \beta _i = a_i - \xi _i ,\)\( C = (c_{ij} )_{n\times n}, \)\( D = (d_{ij} )_{n\times n},\)\( I = diag\{I_1 ,I_2 , \ldots ,I_n \}, J(t) = diag\{J_1 (t),J_2 (t), \ldots ,J_n (t)\}.\)
Here J(t) is taken as an error-feedback control term of the form \( J(t) = Kx(t_k)+ My(t_k )\), where \( t_k \) satisfies \( 0 = t_0< t_1< \cdots< t_k < \cdots \), \(\mathop {\lim }\nolimits _{k \rightarrow + \infty } t_k = + \infty \) and \( t_{k + 1} - t_k \le h \) for all \( k \in N \), and K, M are the sampled-data feedback controller gain matrices to be designed. Let \( h(t) = t - t_k \) for \( t \in [t_k ,\,t_{k + 1} )\) and define the synchronization error signals as \( x(t) = \omega (t) - u(t) \), \( y(t) = {\tilde{\omega }}(t) - {\tilde{u}}(t).\) Subtracting (4.31) from (4.32), the error system is obtained as follows
$$\begin{aligned} {\frac{dx(t)}{dt}}= & {} - \varLambda x(t) + y(t) \nonumber \\ {\frac{dy(t)}{dt}}= & {} - Ax(t) - By(t) + Cf(x(t)) + Df(x(t - \tau (t))) + Kx(t-h(t))\nonumber \\&+\,My(t- h(t)) \end{aligned}$$
(4.33)
Theorem 2
Under Assumption 1, if there exist positive-definite symmetric matrices \( P_i (i = 1,\ldots ,9) \), \( Q_i (i = 1,2) \), positive-definite diagonal matrices \( U_i (i = 1,2) \) and matrices X, Y such that the following LMI holds:
$$\begin{aligned} \varSigma = \left[ {{\begin{array}{*{20}c} {\varSigma _{1,1} } &{}\quad {\varSigma _{1,2} } &{}\quad {\varSigma _{1,3} } &{}\quad 0 &{}\quad {\varSigma _{1,5} } &{}\quad 0 &{}\quad {\varSigma _{1,7} } &{}\quad {\varSigma _{1,8} } &{}\quad 0 &{}\quad 0 &{}\quad {\varSigma _{1,11} } &{}\quad 0 \\ * &{}\quad {\varSigma _{2,2} } &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad {\varSigma _{2,7} } &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad {\varSigma _{3,3} } &{}\quad {\varSigma _{3,4} } &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad {\varSigma _{3,12} } \\ * &{}\quad * &{}\quad * &{}\quad {\varSigma _{4,4} } &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\varSigma _{5,5} } &{}\quad {\varSigma _{5,6} } &{}\quad {\varSigma _{5,7}} &{}\quad {\varSigma _{5,8}} &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\varSigma _{6,6} } &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\varSigma _{7,7} } &{}\quad {\varSigma _{7,8} } &{}\quad {\varSigma _{7,9} } &{}\quad 0 &{}\quad {\varSigma _{7,11} } &{}\quad {\varSigma _{7,12} } \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\varSigma _{8,8} } &{}\quad {\varSigma _{8,9} } &{}\quad 0 &{}\quad {\varSigma _{8,11} } &{}\quad {\varSigma _{8,12} } \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\varSigma _{9,9} } &{}\quad {\varSigma _{9,10} } &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\varSigma _{10,10} } &{}\quad 0 &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\varSigma _{11,11} } &{}\quad 0 \\ * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad * &{}\quad {\varSigma _{12,12} } \\ \end{array} }} \right] < 0\nonumber \\ \end{aligned}$$
(4.34)
where
$$\begin{aligned}&\varSigma _{1,1} = P_3 - P_4 + P_5 - P_7 - (1 - {\bar{\tau }})P_9 - L^ - U_1 L^ + - 2Q_1 \varLambda ,\\&\varSigma _{1,2} = P_1 - Q_1 - Q_1 \varLambda ,\varSigma _{1,3} = P_4 + (1 - {\bar{\tau }})P_9 , \varSigma _{1,5} = P_7 , \varSigma _{1,7} = Q_1 - Q_2 A, \\&\varSigma _{1,8} = - Q_2 A,\varSigma _{1,11} = \frac{1}{2}U_1 (L^ - + L^ + ),\\&\varSigma _{2,2} = \tau ^2P_4 + h^2P_7 + \tau ^2P_9 - 2Q_1 , \varSigma _{2,7} = Q_1 ,\\&\varSigma _{3,3} = - (1 - {\bar{\tau }})P_3 - 2P_4 - (1 - {\bar{\tau }})P_9 - L^ - U_2 L^ + , \varSigma _{3,4} = P_4 ,\\&\varSigma _{3,12} = \frac{1}{2}U_2 (L^ - + L^ + ),\\&\varSigma _{4,4} = - P_4 ,\\&\varSigma _{5,5} = - 2P_7 , \varSigma _{5,6} = P_7 , \varSigma _{5,7} = X, \varSigma _{5,8} = X , \\&\varSigma _{6,6} = - P_5 - P_7 ,\\&\varSigma _{7,7} = P_6 - P_8 - 2Q_2 B, \varSigma _{7,8} = P_2 - Q_2 - Q_2 B, \varSigma _{7,9} = P_8 + Y, \varSigma _{7,11} = Q_2C , \\&\varSigma _{7,12} = Q_2D , \\&\varSigma _{8,8} = h^2P_8-2Q_2 , \varSigma _{8,9} = Y , \varSigma _{8,11} = Q_2C , \varSigma _{8,12} = Q_2D , \\&\varSigma _{9,9} = -2P_8 , \varSigma _{9,10} = P_8 , \\&\varSigma _{10,10} = -P_6 - P_8 , \\&\varSigma _{11,11} = -U_1 , \\&\varSigma _{12,12} = -U_2 , \end{aligned}$$
then the master system (4.29) and the slave system (4.30) are globally synchronized, with the desired controller gain matrices given as \( K = Q_2^{ - 1} X,\,M = Q_2^{ - 1} Y \).
Proof
Noting that
$$\begin{aligned} 0= & {} 2[x^T(t) + {\dot{x}}^T(t)]Q_1 [ - {\dot{x}}(t) + {\dot{x}}(t)] \nonumber \\= & {} 2[x^T(t) + {\dot{x}}^T(t)]Q_1 [ - {\dot{x}}(t) - \varLambda x(t) + y(t) ] \nonumber \\= & {} - 2x^T(t)Q_1 {\dot{x}}(t) - 2x^T(t)Q_1 \varLambda x(t) + 2x^T(t)Q_1 y(t)\nonumber \\&-\, 2{\dot{x}}^T(t)Q_1 {\dot{x}}(t) - 2{\dot{x}}^T(t)Q_1 \varLambda x(t) + 2\dot{x}^T(t)Q_1 y(t). \end{aligned}$$
(4.35)
$$\begin{aligned} 0= & {} 2[y^T(t) + {\dot{y}}^T(t)]Q_2 [ - {\dot{y}}(t) + {\dot{y}}(t)]\nonumber \\= & {} 2[y^T(t) + {\dot{y}}^T(t)]Q_2 [ - {\dot{y}}(t) - Ax(t) - By(t) + Cf(x(t)) + Df(x(t - \tau (t)))\nonumber \\&+\, Kx(t - h(t))+ My(t - h(t))]\nonumber \\= & {} - 2y^T(t)Q_2 {\dot{y}}(t) - 2y^T(t)Q_2 Ax(t) - 2y^T(t)Q_2 By(t) +2y^T(t)Q_2 Cf(x(t)) \nonumber \\&+\, 2y^T(t)Q_2 Df(x(t - \tau (t)))+ 2y^T(t)Q_2Kx(t - h(t)) + 2y^T(t)Q_2 My(t - h(t)) \nonumber \\&-\, 2{\dot{y}}^T(t)Q_2 {\dot{y}}(t) - 2{\dot{y}}^T(t)Q_2 Ax(t) - 2\dot{y}^T(t)Q_2 By(t) + 2{\dot{y}}^T(t)Q_2 Cf(x(t)) \nonumber \\&+\, 2{\dot{y}}^T(t)Q_2 Df(x(t - \tau (t)))+ 2{\dot{y}}^T(t)Q_2Kx(t - h(t)) \nonumber \\&+\, 2{\dot{y}}^T(t)Q_2 My(t - h(t)). \end{aligned}$$
(4.36)
then we obtain
$$\begin{aligned} {\dot{V}}(t) \le \xi ^T(t)\varSigma \xi (t). \end{aligned}$$
(4.37)
where \( V(t)\) and \( \xi (t) \) are the same as in Sect. 3.
Similar to the proof of Theorem 1, we conclude that the slave system (4.30) is globally synchronized to the master system (4.29) under the controller gains \( K = Q_2^{ - 1} X,\,M = Q_2^{ - 1} Y \). \(\square \)

5 Illustrative Examples

In this section, two examples are given to illustrate the effectiveness of our results.
Example 1
Consider the following delayed inertial neural network
$$\begin{aligned} \left\{ \begin{array}{l} \frac{d^2x_1(t)}{dt^2} = - \frac{dx_1 (t)}{dt} - 1.5x_1 (t) + 0.3g_1 (x_1 (t)) + 0.1g_2 (x_2 (t))\\ \quad \qquad \qquad +\,0.4g_1 (x_1 (t - \tau (t))) + 0.2g_2 (x_2 (t - \tau (t))) + 1 \\ \frac{d^2x_2 (t)}{dt^2} = - \frac{dx_2 (t)}{dt} - 1.5x_2 (t) + 0.2g_1 (x_1 (t)) + 0.2g_2 (x_2 (t)) \\ \qquad \quad \qquad +\,0.2g_2 (x_2 (t - \tau (t))) +1 \\ \end{array} \right. \end{aligned}$$
(5.38)
where \( g_1 (x_1 (t)) = \sin (x_1 (t))\), \(g_2 (x_2 (t)) = \cos (x_2 (t))\), and \(\tau (t) = 0.5\sin ^2(t) + 0.5\).
Obviously, \( g_i ( \cdot ) \quad (i = 1,2)\) satisfy (2.2) with \(l_1^ - = l_2^ - = - 1\),\(l_1^ + = l_2^ + = 1\), and \(\tau = 1,{\bar{\tau }} = 0.5.\)
Choosing \( \xi _1 = \xi _2 = 2 \), the inertial neural network (5.38) can be rewritten in the form (2.4), and the corresponding matrices can be easily obtained
$$\begin{aligned} \varLambda= & {} \left( {{\begin{array}{*{20}c} 2 &{}\quad 0 \\ 0 &{}\quad 2 \\ \end{array} }} \right) \,,\,A = \left( {{\begin{array}{*{20}c} {3.5} &{}\quad 0 \\ 0 &{}\quad {3.5} \\ \end{array} }} \right) \,,\,B = \left( {{\begin{array}{*{20}c} { - 1} &{}\quad 0 \\ 0 &{}\quad { - 1} \\ \end{array} }} \right) ,\\C= & {} \left( {{\begin{array}{*{20}c} {0.3} &{}\quad {0.1} \\ {0.2} &{}\quad {0.2} \\ \end{array} }} \right) \,,\,D = \left( {{\begin{array}{*{20}c} {0.4} &{}\quad {0.2} \\ 0 &{}\quad {0.2} \\ \end{array} }} \right) ,I = \left( {{\begin{array}{*{20}c} 1 &{}\quad 0 \\ 0 &{}\quad 1 \\ \end{array} }} \right) . \end{aligned}$$
Taking the sampling instants \( t_k = 0.02k,\,k = 1,2, \ldots ,\) so that the sampling period is \( h = 0.02 \), and solving LMI (3.11), we obtain
$$\begin{aligned} X&\quad =&\quad \left( {{\begin{array}{*{20}c} { - 0.1996} &{}\quad {0.0005} \\ {0.0005} &{}\quad { - 0.2006} \\ \end{array} }} \right) , \quad Y = \left( {{\begin{array}{*{20}c} { - 0.4914} &{}\quad {0.0123} \\ {0.0123} &{}\quad { - 0.5132} \\ \end{array} }} \right) , \\ P_1&\quad =&\quad \left( {{\begin{array}{*{20}c} {1.9947} &{}\quad { - 0.0445} \\ { - 0.0445} &{}\quad {2.0740} \\ \end{array} }} \right) , \quad P_2 = \left( {{\begin{array}{*{20}c} {0.4082} &{}\quad { - 0.0075} \\ { - 0.0075} &{}\quad {0.4214} \\ \end{array} }} \right) , \\ P_3&\quad =&\quad \left( {{\begin{array}{*{20}c} {0.9164} &{}\quad {-0.0007} \\ {-0.0007} &{}\quad {0.9176} \\ \end{array} }} \right) , \quad P_4 = \left( {{\begin{array}{*{20}c} {0.2528} &{}\quad { - 0.0018} \\ { - 0.0018} &{}\quad {0.2559} \\ \end{array} }} \right) , \\ P_5&\quad =&\quad \left( {{\begin{array}{*{20}c} {0.4590} &{}\quad { - 0.0015} \\ { - 0.0015} &{}\quad {0.4615} \\ \end{array} }} \right) , \quad P_6 = \left( {{\begin{array}{*{20}c} {0.1638} &{}\quad { - 0.0008} \\ { - 0.0008} &{}\quad {0.1651} \\ \end{array} }} \right) , \\ P_7&\quad =&\quad \left( {{\begin{array}{*{20}c} {0.7210} &{}\quad {0.0009} \\ {0.0009} &{}\quad {0.7195} \\ \end{array} }} \right) , \qquad \quad P_8 = \left( {{\begin{array}{*{20}c} {2.3186} &{}\quad { - 0.0027} \\ { - 0.0027} &{}\quad {2.3234} \\ \end{array} }} \right) , \\ P_9&\quad =&\quad \left( {{\begin{array}{*{20}c} {0.1536} &{}\quad { - 0.0016} \\ { - 0.0016} &{}\quad {0.1565} \\ \end{array} }} \right) , \quad Q_1 = \left( {{\begin{array}{*{20}c} {0.7600} &{}\quad { - 0.0120} \\ { - 0.0120} &{}\quad {0.7814} \\ \end{array} }} \right) , \\ Q_2&\quad =&\quad \left( {{\begin{array}{*{20}c} {0.0979} &{}\quad { - 0.0055} \\ { - 0.0055} &{}\quad {0.1077} \\ \end{array} }} \right) , \quad U_1 = \left( {{\begin{array}{*{20}c} {0.5510} &{}\quad 0 \\ 0 &{}\quad {0.5510} \\ \end{array} }} \right) , \\ U_2&\quad =&\quad \left( {{\begin{array}{*{20}c} {0.3975} &{}\quad 0 \\ 0 &{}\quad {0.3975} \\ \end{array} }} \right) . \end{aligned}$$
The controller gain matrices can be obtained as follows:
$$\begin{aligned} K = Q_1^{ - 1} X = \left( {{\begin{array}{*{20}c} { - 0.2627} &{}\quad { - 0.0033} \\ { - 0.0033} &{}\quad { - 0.2567} \\ \end{array} }} \right) , \\ M = Q_2^{ - 1} Y = \left( {{\begin{array}{*{20}c} { - 5.0258} &{}\quad { - 0.1426} \\ { - 0.1426} &{}\quad { - 4.7716} \\ \end{array} }} \right) . \end{aligned}$$
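As a quick cross-check of this last step, the reported matrices (rounded to four decimals) can be substituted into \( K = Q_1^{-1}X \) and \( M = Q_2^{-1}Y \) numerically; this sketch verifies only the gain recovery, not the LMI solution itself.

```python
import numpy as np

# Cross-check of the controller gains in Example 1: K = Q1^{-1} X and M = Q2^{-1} Y,
# using the (rounded) matrices reported above.
Q1 = np.array([[0.7600, -0.0120], [-0.0120, 0.7814]])
Q2 = np.array([[0.0979, -0.0055], [-0.0055, 0.1077]])
X  = np.array([[-0.1996, 0.0005], [0.0005, -0.2006]])
Y  = np.array([[-0.4914, 0.0123], [0.0123, -0.5132]])

K = np.linalg.solve(Q1, X)     # Q1^{-1} X
M = np.linalg.solve(Q2, Y)     # Q2^{-1} Y
print(np.round(K, 4))          # approx. [[-0.2627 -0.0033], [-0.0033 -0.2567]] up to rounding
print(np.round(M, 4))          # approx. [[-5.0258 -0.1426], [-0.1426 -4.7716]] up to rounding
```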
From Figs. 1 and 2, we can see that the equilibrium is globally asymptotically stable, which verifies Theorem 1.
Example 2
We consider the following master–slave neural networks
$$\begin{aligned} \left\{ {{\begin{array}{*{20}l} {\begin{array}{l} \frac{d^2x_i (t)}{dt^2} = - 0.05\frac{dx_i (t)}{dt} - 0.05x_i (t) + 0.2\sin (x_i (t)) - 0.6\cos (y_i (t)) \\ \quad \quad \quad \qquad +\, 0.1\sin (x_i (t - \tau (t)))+ 0.1\cos (y_i (t - \tau (t))) + 1 \\ \end{array}} \\ {\begin{array}{l} \frac{d^2y_i (t)}{dt^2} = - 0.05\frac{dy_i (t)}{dt} - 0.05y_i (t) + 0.5\sin (x_i (t)) + 0.3\cos (y_i (t)) \\ \quad \quad \quad \qquad -\, 0.1\sin (x_i (t - \tau (t)))+ 0.1\cos (y_i (t - \tau (t))) + 1 \\ \end{array}} \\ \end{array} }} \right. \end{aligned}$$
(5.39)
where \( i = 1,2\), \(\tau (t) = \frac{e^t}{1 + e^t}\), \((x_1 ,y_1 )^T \) is the state of the master neural network and \( (x_2 ,y_2 )^T \) is the state of the slave neural network. When no control term is added to the slave neural network, Fig. 3 indicates that the error between the two neural networks persists.
Taking the sampling instants \( t_k=0.1k,\,k=1,2,\ldots ,\) with sampling period \( h=0.1 \), and choosing
$$\begin{aligned} \varLambda = \left( {{\begin{array}{*{20}c} 3 &{}\quad 0 \\ 0 &{}\quad 3 \\ \end{array} }} \right) \end{aligned}$$
by using Theorem 2, the corresponding controller gain matrices are obtained as
$$\begin{aligned} K = \left( {{\begin{array}{*{20}c} { 6.1683 } &{}\quad { -0.1105 } \\ { -0.1105 } &{}\quad { 6.0854 } \\ \end{array} }} \right) ,\quad M = \left( {{\begin{array}{*{20}c} { - 6.7376 } &{}\quad {0.0050} \\ {0.0050} &{}\quad { - 6.7339 } \\ \end{array} }} \right) \end{aligned}$$
From Fig. 4, we can see that the slave neural network is globally synchronized to the master neural network, which verifies our criteria.
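A minimal fixed-step simulation sketch of this experiment is given below; it implements the master and slave systems (5.39) with the sampled-data error-feedback term \( J(t) = Kx(t_k) + My(t_k) \) held between sampling instants, using the gains and \(\varLambda \) reported above. The explicit Euler scheme, the step size, the horizon and the constant initial histories are illustrative assumptions, so the output is only a rough approximation of Fig. 4.

```python
import numpy as np

# Fixed-step (explicit Euler) sketch of Example 2: master and slave (5.39) coupled by
# the sampled-data error feedback J(t) = K x(t_k) + M y(t_k) of Theorem 2.
# K, M and Lambda = diag(3, 3) are taken from the text; the step size, horizon and
# constant initial histories are illustrative assumptions.
K = np.array([[6.1683, -0.1105], [-0.1105, 6.0854]])
M = np.array([[-6.7376, 0.0050], [0.0050, -6.7339]])
Lam = 3.0 * np.eye(2)

dt, T, hs = 1e-3, 40.0, 0.1                 # Euler step, horizon, sampling period h
N, buf = int(T / dt), int(1.0 / dt)         # tau(t) = e^t/(1 + e^t) < 1, so 1 s of history
sps = int(round(hs / dt))                   # Euler steps per sampling interval

def rhs(z, zd):
    """z = (x, y, dx/dt, dy/dt) of one network, zd = delayed (x, y)."""
    x, y, dx, dy = z
    xd, yd = zd
    ddx = -0.05*dx - 0.05*x + 0.2*np.sin(x) - 0.6*np.cos(y) \
          + 0.1*np.sin(xd) + 0.1*np.cos(yd) + 1.0
    ddy = -0.05*dy - 0.05*y + 0.5*np.sin(x) + 0.3*np.cos(y) \
          - 0.1*np.sin(xd) + 0.1*np.cos(yd) + 1.0
    return np.array([dx, dy, ddx, ddy])

m = np.tile([0.5, -0.3, 0.0, 0.0], (N + buf, 1))    # master history/trajectory
s = np.tile([-1.0, 0.8, 0.0, 0.0], (N + buf, 1))    # slave history/trajectory
J = np.zeros(2)                                      # zero-order-hold value of J(t)

for n in range(buf, N + buf - 1):
    t = (n - buf) * dt
    tau = np.exp(t) / (1.0 + np.exp(t))
    d = n - int(round(tau / dt))                     # grid index of t - tau(t)
    if (n - buf) % sps == 0:                         # sampling instant t_k: update J
        e_u = s[n, :2] - m[n, :2]                    # x(t_k) = omega - u
        e_v = (s[n, 2:] + Lam @ s[n, :2]) - (m[n, 2:] + Lam @ m[n, :2])
        J = K @ e_u + M @ e_v
    m[n + 1] = m[n] + dt * rhs(m[n], m[d, :2])
    fs = rhs(s[n], s[d, :2])
    fs[2:] += J                                      # J(t) enters the second-order equations
    s[n + 1] = s[n] + dt * fs

print("final synchronization error:", np.linalg.norm(s[-1, :2] - m[-1, :2]))
```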

6 Conclusion

This paper is concerned with the stability of inertial neural networks with time-varying delays under sampled-data control. By employing the input delay approach and the Lyapunov theory, some sufficient conditions are obtained to ensure the stability of the equilibrium. Furthermore, several criteria are given to guarantee the synchronization of master–slave neural networks under error-feedback sampled-data control. Meanwhile, the variable transformation we introduced makes our results less conservative. Two examples are used to demonstrate that the theoretical results are correct and effective.

Acknowledgements

This research is supported by the National Natural Science Foundation of China (Nos. 91546118, 71690242).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1.
Ozcan N (2011) A new sufficient condition for global robust stability of delayed neural networks. Neural Process Lett 34(3):305–316
2.
Zhang Z, Liu W, Zhou D (2012) Global asymptotic stability to a generalized Cohen–Grossberg BAM neural networks of neural type delays. Neural Netw 25:94–105
3.
Huang H, Huang T, Chen X, Qian C (2013) Exponential stabilization of delayed recurrent neural networks: a state estimation based approach. Neural Netw 48:153–157
4.
Zhou LQ (2013) Delay-dependent exponential stability of cellular neural networks with multi-proportional delays. Neural Process Lett 38:347–359
5.
Bao HM (2016) Existence and exponential stability of periodic solution for BAM fuzzy Cohen–Grossberg neural networks with mixed delays. Neural Process Lett 43:871–885
6.
Xin YM, Li YX, Cheng ZS, Huang X (2016) Global exponential stability for switched memristive neural networks with time-varying delays. Neural Netw 80:34–42
7.
Wei LN, Chen WH, Huang GJ (2015) Globally exponential stabilization of neural networks with mixed time delays via impulsive control. Appl Math Comput 260:10–26
8.
Song QK, Yan H, Zhao ZJ, Liu YR (2016) Global exponential stability of complex-valued neural networks with both time-varying delays and impulsive effects. Neural Netw 79:108–116
9.
Wang JL, Wu HN, Guo L (2013) Stability analysis of reaction-diffusion Cohen–Grossberg neural networks under impulsive control. Neurocomputing 106:21–30
10.
Wang LM, Shen Y, Sheng Y (2016) Finite-time robust stabilization of uncertain delayed neural networks with discontinuous activations via delayed feedback control. Neural Netw 76:46–54
11.
Zhang ZM, He Y, Zhang CK, Wu M (2016) Exponential stabilization of neural networks with time-varying delay by periodically intermittent control. Neurocomputing 207:469–475
12.
Yang SJ, Li CD, Huang TW (2016) Exponential stabilization and synchronization for fuzzy model of memristive neural networks by periodically intermittent control. Neural Netw 75:162–172
13.
Wu ZG, Shi P (2014) Exponential stabilization for sampled-data neural-network-based control systems. IEEE Trans Neural Netw Learn Syst 25(12):2180–2190
14.
Li L, Yang YQ, Lin G (2016) The stabilization of BAM neural network with time-varying delays in the leakage terms via sampled-data control. Neural Comput Appl 27:447–457
15.
Zhang W, Li CD, Huang TW, Tan J (2015) Exponential stability of inertial BAM neural networks with time varying delay via periodically intermittent control. Neural Comput Appl 26:1781–1787
16.
Ke Y, Miao C (2013) Stability analysis of inertial Cohen–Grossberg-type neural networks with time delays. Neurocomputing 117:196–205
17.
Yu S, Zhang Z, Quan Z (2015) New global exponential stability conditions for inertial Cohen–Grossberg neural networks with time delays. Neurocomputing 151:1446–1454
18.
Cao J, Wan Y (2014) Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays. Neural Netw 53:165–172
19.
Qi J, Li C, Huang T (2015) Stability of inertial BAM neural network with time-varying delay via impulsive control. Neurocomputing 161:162–167
20.
Rakkiyappan R, Premalatha S, Chandrasekar A, Cao J (2016) Stability and synchronization analysis of inertial memristive neural networks with time delays. Cogn Neurodyn 10(5):437–451
21.
Zhang HX, Hu J, Zou L, Yu XY, Wu ZH (2018) Event-based state estimation for time-varying stochastic coupling networks with missing measurements under uncertain occurrence probabilities. Int J Gen Syst 47(5):506–521
22.
He X, Li CD, Shu YL (2012) Bogdanov–Takens bifurcation in a single inertial neuron model with delay. Neurocomputing 89:193–201
23.
He X, Yu JZ, Huang TW, Li CD, Li CJ (2014) Neural network for solving Nash equilibrium problem in application of multiuser power control. Neural Netw 57:73–78
24.
Babcock KL, Westervelt RM (1986) Stability and dynamics of simple electronic neural networks with added inertia. Phys D 23:464–469
25.
Tu ZW, Cao JD, Hayat T (2016) Matrix measure based dissipativity analysis for inertial delayed uncertain neural networks. Neural Netw 75:47–55
26.
Wan P, Jian JG (2017) Global convergence analysis of impulsive inertial neural networks with time-varying delays. Neurocomputing 245:68–76
27.
Tu ZW, Cao JD, Hayat T (2016) Global exponential stability in Lagrange sense for inertial neural networks with time-varying delays. Neurocomputing 171:524–531
28.
Ebrahimi B, Tafreshi R, Franchek M (2014) A dynamic feedback control strategy for control loops with time-varying delay. Int J Control 87(5):887–897
29.
Lee TH, Park JH, Kwon OM (2013) Stochastic sampled-data control for state estimation of time-varying delayed neural networks. Neural Netw 46(5):99–108
30.
Rakkiyappan R, Chandrasekar A, Park JH, Kwon OM (2014) Exponential synchronization criteria for Markovian jumping neural networks with time-varying delays and sampled-data control. Nonlinear Anal Hybrid Syst 14:16–37
31.
Hua CC, Ge C, Guan XP (2015) Synchronization of chaotic Lur’e systems with time delays using sampled-data control. IEEE Trans Neural Netw Learn Syst 26(6):1214–1221
32.
Zhang WY, Li JM, Xing KY, Ding CY (2016) Synchronization for distributed parameter NNs with mixed time delays via sampled-data control. Neurocomputing 175:265–277
33.
SyedAli M, Gunasekaran N, Zhu QX (2017) State estimation of T-S fuzzy delayed neural networks with Markovian jumping parameters using sampled-data control. Fuzzy Sets Syst 306:87–104
34.
Wu HQ, Li RX, Wei HZ (2015) Synchronization of a class of memristive neural networks with time delays via sampled-data control. Int J Mach Learn Cybern 6(3):365–373
35.
Pakkiyappan R, Dharani S, Cao JD (2015) Synchronization of neural networks with control packet loss and time-varying delay via stochastic sampled-data controller. IEEE Trans Neural Netw Learn Syst 26(12):3215–3226
36.
Zhang WB, Tang Y, Huang TW, Kurths J (2017) Sampled-data consensus of linear multi-agent systems with packet losses. IEEE Trans Neural Netw Learn Syst 28(11):2516–2527
37.
Huang DS, Jiang MH, Jian JG (2017) Finite-time synchronization of inertial memristive neural networks with time-varying delays via sampled-data control. Neurocomputing 293:100–107
38.
Fridman EM (1992) Use models with aftereffect in the problem of design of optimal digital control. Autom Remote Control 53:1523–1528
39.
Liu YR, Wang ZD, Liu XH (2006) Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Netw 19:667–67
40.
Slotine JJE, Li W (2004) Applied nonlinear control. China Machine Press, Beijing
Metadata
Title
Stability of Inertial Neural Network with Time-Varying Delays Via Sampled-Data Control
Authors
Jingfeng Wang
Lixin Tian
Publication date
27.08.2018
Publisher
Springer US
Published in
Neural Processing Letters / Issue 2/2019
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI
https://doi.org/10.1007/s11063-018-9905-6
