Published in: Neural Processing Letters 2/2024

Open Access 01.04.2024

New Insights on Bidirectional Associative Memory Neural Networks with Leakage Delay Components and Time-Varying Delays Using Sampled-Data Control

Authors: S. Ravi Chandra, S. Padmanabhan, V. Umesha, M. Syed Ali, Grienggrai Rajchakit, Anuwat Jirawattanpanit



Abstract

The sampled-data control of bidirectional associative memory (BAM) neural networks with leakage delay is considered in this article. The BAM model incorporates mixed delays that combine a distributed delay, a discrete time-varying delay, and a delay in the leakage term. The sampled-data system is then converted to a continuous time-delay system using the input delay method. In order to obtain sufficient conditions in the form of linear matrix inequalities (LMIs), we build a new Lyapunov–Krasovskii functional (LKF) in conjunction with the free-weighting-matrix approach. Finally, simulation results are given to show the efficiency of the theoretical approach.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

A neural network is a processing device whose design was inspired by the structure and functioning of the human brain and its components. There is no idle memory containing data and programs; rather, each neuron is programmed and continuously active. Neural networks have many applications, the most likely being (i) classification, (ii) association, and (iii) reasoning. Time delays are unavoidable in many practical systems, such as biological systems and artificial neural networks [4–6]. Recently, the stability analysis of dynamical systems has received great attention and has become an emerging area of research, owing to successful applications in image processing, optimization, pattern recognition and other areas [7–9]. Various types of time-delay systems and delayed dynamical networks have been investigated, and many significant results have been reported [10–15]. From the point of view of nonlinear dynamics, the study of neural networks with delays is useful and important for solving problems both theoretically and practically.
Bidirectional associative memory (BAM) neural networks, an extension of the unidirectional auto-associator of the Hopfield neural network, were first introduced by Kosko [16]. Since then, BAM neural networks with delays have attracted considerable attention and have been widely investigated. A BAM network is composed of neurons arranged in two layers, the X-layer and the Y-layer. The neurons in one layer are fully interconnected with the neurons in the other layer, while there are no interconnections among neurons in the same layer. Through iterations of forward and backward information flow between the two layers, it performs a two-way associative search for stored bipolar vector pairs and generalizes the single-layer auto-associative Hebbian correlation to a two-layer pattern-matched hetero-associative circuit. It therefore has good applications in the fields of pattern recognition and artificial intelligence [17–21]. In addition, the addressable memories or patterns of BAM neural networks can be stored with a two-way associative search. Accordingly, the BAM neural network has been widely studied both in theory and in applications: it has been successfully applied to image processing, pattern recognition, automatic control, associative memory, parallel computation, and optimization problems. Therefore, it is interesting and important to study the stability of BAM neural networks, which has been widely investigated [22–30].
Recently, Gopalsamy [31] introduced delays in the "forgetting" or leakage terms, and some results have been obtained (see, for example, [32–36] and the references cited therein). Unfortunately, the leakage delays considered in most of the literature listed above are constants, and few authors have considered the dynamics of BAM neural networks with time-varying delays in the leakage terms [31]. Therefore, it is important and interesting to further investigate the dynamical behavior of BAM neural networks with time-varying leakage delays.
Up to now, various control approaches have been adopted to stabilize unstable systems, including dynamic feedback control, fuzzy logic control, impulsive control, \(H_\infty \) control, sliding mode control and sampled-data control [37–42]. Sampled-data control handles a continuous system by sampling its data at discrete times using computers, sensors, filters and network communication. Hence it is preferable to use digital controllers instead of analogue circuits: this drastically reduces the amount of transmitted information and improves control efficiency. Compared with continuous control, sampled-data control is more efficient, secure and useful [43–49]. In [50], the author considered the synchronization problem of coupled chaotic neural networks with time delay in the leakage term using sampled-data control. R. Sakthivel et al. [36] established a state estimator for BAM neural networks with leakage delays with the help of the Lyapunov technique and the LMI framework.
However to the best of our knowledge, the sampled-data control design of BAM neural network with time delay in the leakage term has not been investigated until now. This motivates our work.
The main contribution of this paper lies in three aspects:
(i).
It is the first attempt to study the sampled-data control of delayed BAM neural networks. An efficient approach is presented to deal with it.
 
(ii).
Based on the Lyapunov-Krasovskii theory and LMI framework, a new set of sufficient conditions is obtained to ensure that the dynamical system is globally asymptotically stable.
 
(iii).
It is worth pointing out that a novel Lyapunov–Krasovskii functional (LKF) is constructed with the augmented terms
 
$$\begin{aligned}&\sum \limits _{i=0}^{1}\tau _{(3-i)}\int _{-\tau _{(3-i)}}^{0}\int _{t+\theta }^{t} \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)}\\ \maltese &{} Z_{(3-2i)} \end{array}\right] \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] dsd\theta ,\\&\sum \limits _{i=1}^{2}\tau _{(2i-1)}\int _{-\tau _{(2i-1)}}^{0}\int _{t+\theta }^{t} \left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(2i)} &{} Y_{(2i)}\\ \maltese &{} Z_{(2i)} \end{array}\right] \left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] dsd\theta . \end{aligned}$$
The main objective of this paper is to study the sampled-data control of bidirectional associative memory neural networks with leakage delay components. By constructing a suitable triple-integral LKF and utilizing the free-weighting-matrix method and the reciprocally convex method, a unified linear matrix inequality (LMI) approach is developed to establish sufficient conditions under which the leakage BAM neural network is globally stable. Finally, a numerical example is given to illustrate the usefulness and effectiveness of the proposed method (Table 1).
Notations: Throughout the paper, we have used the standard notations in [36].
Table 1
Comparison with existing results on BAM neural networks (BAMNNs)

BAMNNs | [5] | [17, 19] | [18, 20, 21] | [26, 27] | [28] | [38] | This paper
Leakage delay | \(\times \) | \(\times \) | \(\times \) | \(\times \) | \(\times \) | \(\times \) | \(\surd \)
Distributed delay | \(\times \) | \(\surd \) | \(\times \) | \(\times \) | \(\surd \) | \(\surd \) | \(\surd \)
Sampled-data control | \(\surd \) | \(\times \) | \(\times \) | \(\times \) | \(\times \) | \(\times \) | \(\surd \)
LMI approach | \(\surd \) | \(\times \) | \(\times \) | \(\surd \) | \(\surd \) | \(\times \) | \(\surd \)
Stabilization | \(\times \) | \(\times \) | \(\times \) | \(\times \) | \(\times \) | \(\surd \) | \(\surd \)

2 Problem Description and Preliminaries

We investigate the sampled-data stabilization of BAM neural networks with leakage delays. The following BAM neural network with leakage delay components is considered:
$$\begin{aligned} \left\{ \begin{array}{ll} \dot{x}^{(1)}_i(t)=-a_ix^{(1)}_i(t-\delta _1)+\sum \limits _{j=1}^{n}{{w}^{(1)}_{ij}}\tilde{f}^{(1)}_j(y^{(1)}_j(t-\tau _1(t)))\\ +\sum \limits _{j=1}^{n}{{w}^{(2)}_{ij}}\int _{t-\tau _1}^{t}\tilde{f}^{(2)}_j(y^{(1)}_j(s))ds+J^{(1)}_i,\\ \dot{y}^{(1)}_j(t)=-b_jy^{(1)}_j(t-\delta _2)+\sum \limits _{i=1}^{n}{{v}^{(1)}_{ij}}\tilde{g}^{(1)}_i(x^{(1)}_i(t-\tau _2(t)))\\ +\sum \limits _{i=1}^{n}{{v}^{(2)}_{ij}}\int _{t-\tau _2}^{t}\tilde{g}^{(2)}_i(x^{(1)}_i(s))ds+J^{(2)}_j. \end{array} \right. \end{aligned}$$
(1)
Here \(x^{(1)}_i(t)\) and \(y^{(1)}_j(t)\) are the state variables representing the states of the neurons at time t. \(a_i>0\) and \(b_j>0\) are constants that signify the time scales of the respective network layers. \({{w}^{(1)}_{ij}},{{w}^{(2)}_{ij}},{{v}^{(1)}_{ij}}\) and \({{v}^{(2)}_{ij}}\) are the synaptic connection weights. The external inputs are represented by \(J^{(1)}_i\) and \(J^{(2)}_j\); \(\delta _1\) and \(\delta _2\) are leakage delays; \(\tilde{f}^{(1)}_j(\cdot ),\ \tilde{f}^{(2)}_j(\cdot ),\ \tilde{g}^{(1)}_i(\cdot )\) and \(\tilde{g}^{(2)}_i(\cdot )\) are the neuron activation functions. The time-varying delay components \(\tau _1(t)>0\) and \(\tau _2(t)>0\) are differentiable, bounded and satisfy
$$\begin{aligned} 0\le \tau _1(t)\le \tau _1, \quad {\dot{\tau }}_1(t)\le \mu _1,\ 0\le \tau _2(t)\le \tau _2, \quad {\dot{\tau }}_2(t)\le \mu _2, \end{aligned}$$
(2)
where \(\tau _1,\ \tau _2,\ \delta _1,\ \delta _2,\ \mu _1\) and \(\mu _2\) are positive constants.
Let \((x^{(1)^*},y^{(1)^*})^T\) be an equilibrium point of equation (1). Then, \((x^{(1)^*},y^{(1)^*})^T\) satisfies the following equations:
$$\begin{aligned} \left\{ \begin{array}{ll} &{}0=-a_ix^{(1)^*}_i+\sum \limits _{j=1}^{n}{{w}^{(1)}_{ij}}\tilde{f}^{(1)}_j(y_j^{(1)^*}) +\tau _1\sum \limits _{j=1}^{n}{{w}^{(2)}_{ij}}\tilde{f}^{(2)}_j(y_j^{(1)^*})+J^{(1)}_i,\\ &{}0=-b_jy^{(1)^*}_j+\sum \limits _{i=1}^{n}{{v}^{(1)}_{ij}}\tilde{g}^{(1)}_i(x_i^{(1)^*}) +\tau _2\sum \limits _{i=1}^{n}{{v}^{(2)}_{ij}}\tilde{g}^{(2)}_i(x_i^{(1)^*})+J^{(2)}_j. \end{array} \right. \end{aligned}$$
(3)
By shifting the equilibrium point \((x^{(1)^*},y^{(1)^*})^T\) to the origin using the transformation \(x_{i}(t)=x^{(1)}_i(t)-x^{(1)^*}_i,\ y_{i}(t)=y^{(1)}_i(t)-y^{(1)^*}_i\), system (1) can be written as:
$$\begin{aligned} \left\{ \begin{array}{ll} &{}\dot{x}(t)=-Ax(t-\delta _1)+W_1f_1(y(t-\tau _1(t))) +W_2\int _{t-\tau _1}^{t}f_2(y(s))ds,\\ &{}\dot{y}(t)=-By(t-\delta _2)+V_1g_1(x(t-\tau _2(t))) +V_2\int _{t-\tau _2}^{t}g_2(x(s))ds, \end{array} \right. \end{aligned}$$
(4)
where \(A=diag\{a_1,...,a_n\}>0,\ B=diag\{b_1,...,b_n\}>0,\ W_1=(w^{(1)}_{ij})_{n\times n}\in R^{n\times n},\ W_2=(w^{(2)}_{ij})_{n\times n}\in R^{n\times n},\ V_1=(v^{(1)}_{ij})_{n\times n}\in R^{n\times n},\ V_2=(v^{(2)}_{ij})_{n\times n}\in R^{n\times n},\ f_1(y(t))=\tilde{f}_1(y(t)+y^{(1)^*})-\tilde{f}_1(y^{(1)^*}),\ f_2(y(t))=\tilde{f}_2(y(t)+y^{(1)^*})-\tilde{f}_2(y^{(1)^*}),\ g_1(x(t))=\tilde{g}_1(x(t)+x^{(1)^*})-\tilde{g}_1(x^{(1)^*})\) and \(g_2(x(t))=\tilde{g}_2(x(t)+x^{(1)^*})-\tilde{g}_2(x^{(1)^*})\) with \(f_1(0)=f_2(0)=g_1(0)=g_2(0)=0\).
Assumption (A). The neuron activation functions \(f_{ki}(\cdot ),\ g_{kj}(\cdot )\) in (4) are non-decreasing and bounded, and there exist constants \(F^{-}_{ki},\ F^{+}_{ki},\ G^{-}_{kj},\ G^{+}_{kj}\) such that
$$\begin{aligned} F^{-}_{ki}\le \dfrac{f_{ki}(\alpha )-f_{ki}(\beta )}{\alpha -\beta }\le F^{+}_{ki} ,\ i=1,2,..,m,\nonumber \\ G^{-}_{kj}\le \dfrac{g_{kj}(\alpha )-g_{kj}(\beta )}{\alpha -\beta }\le G^{+}_{kj} ,\ j=1,2,..,n. \end{aligned}$$
(5)
Here \(k=1,2\) and \(\alpha ,\beta \in \mathbb {R}\) with \(\alpha \ne \beta \). We define the following matrices for ease of notation:
$$\begin{aligned}&F_{k1}=diag\Big \{F^{-}_{k1}F^{+}_{k1},F^{-}_{k2}F^{+}_{k2},...,F^{-}_{km}F^{+}_{km}\Big \},\\&F_{k2}=diag\bigg \{\frac{F^{-}_{k1}+F^{+}_{k1}}{2},\frac{F^{-}_{k2}+F^{+}_{k2}}{2},..., \frac{F^{-}_{km}+F^{+}_{km}}{2}\bigg \},\\&G_{k1}=diag\Big \{G^{-}_{k1}G^{+}_{k1},G^{-}_{k2}G^{+}_{k2},...,G^{-}_{kn}G^{+}_{kn}\Big \},\\&G_{k2}=diag\bigg \{\frac{G^{-}_{k1}+G^{+}_{k1}}{2},\frac{G^{-}_{k2}+G^{+}_{k2}}{2},..., \frac{G^{-}_{kn}+G^{+}_{kn}}{2}\bigg \}. \end{aligned}$$
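As a concrete illustration of Assumption (A), the sector condition (5) can be checked numerically for a standard activation such as \(\tanh \), which satisfies it with \(F^{-}=0\) and \(F^{+}=1\); the sketch below (illustrative values only, not taken from the paper) also builds the diagonal matrices \(F_{k1}\) and \(F_{k2}\) used later in the LMIs:

```python
import numpy as np

# Spot-check the sector condition (5) for f(s) = tanh(s),
# which satisfies it with F^- = 0 and F^+ = 1.
F_minus, F_plus = 0.0, 1.0

rng = np.random.default_rng(0)
alpha = rng.uniform(-5, 5, 1000)
beta = rng.uniform(-5, 5, 1000)
mask = np.abs(alpha - beta) > 1e-9          # require alpha != beta
quot = (np.tanh(alpha[mask]) - np.tanh(beta[mask])) / (alpha[mask] - beta[mask])
assert np.all(quot >= F_minus - 1e-12) and np.all(quot <= F_plus + 1e-12)

# The diagonal matrices used in the LMIs (here m = 3 identical neurons):
m = 3
F_k1 = np.diag([F_minus * F_plus] * m)        # = 0 for tanh
F_k2 = np.diag([(F_minus + F_plus) / 2] * m)  # = 0.5 I
```

For \(\tanh \) the difference quotient always lies in (0, 1), so \(F_{k1}\) vanishes and \(F_{k2}=\tfrac{1}{2}I\).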
The system (4) with control is given by
$$\begin{aligned} \left\{ \begin{array}{ll} &{}\dot{x}(t)=-Ax(t-\delta _1)+W_1f_1(y(t-\tau _1(t))) +W_2\int _{t-\tau _1}^{t}f_2(y(s))ds+u(t),\\ &{}\dot{y}(t)=-By(t-\delta _2)+V_1g_1(x(t-\tau _2(t))) +V_2\int _{t-\tau _2}^{t}g_2(x(s))ds+v(t), \end{array} \right. \end{aligned}$$
(6)
where u(t) and v(t) are the control inputs. We define the sampled-data controllers as:
$$\begin{aligned} u(t)=\mathcal {K}x(t_k),\ v(t)=\mathcal {M}y(t_k), \end{aligned}$$
(7)
where the sampled-data gain matrices are \(\mathcal {K},\mathcal {M}\in \mathcal {R}^{n\times n}\). The sampling instants \(t_k\) satisfy \(0=t_0<t_1<\cdots<t_k<\cdots \) and \(\lim \limits _{k\rightarrow \infty }t_k=+\infty .\) Additionally, there is a positive constant \(\tau _3\) such that \(t_{k+1}-t_k\le \tau _3,\ \forall k\in {\textbf {N}}\). If \(\tau _3(t)=t-t_k\) for \(t\in [t_k,t_{k+1})\), then \(t_k=t-\tau _3(t)\) with \(0\le \tau _3(t)\le \tau _3\). According to this control law, the delayed BAM network with leakage delay (6) may be rewritten as:
$$\begin{aligned} \left\{ \begin{array}{ll} &{}\dot{x}(t)=-Ax(t-\delta _1)+W_1f_1(y(t-\tau _1(t))) +W_2\int _{t-\tau _1}^{t}f_2(y(s))ds+\mathcal {K}x(t-\tau _3(t)),\\ &{}\dot{y}(t)=-By(t-\delta _2)+V_1g_1(x(t-\tau _2(t))) +V_2\int _{t-\tau _2}^{t}g_2(x(s))ds+\mathcal {M}y(t-\tau _3(t)). \end{array} \right. \end{aligned}$$
(8)
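The closed-loop behavior of (8) can be illustrated with a minimal forward-Euler simulation of the scalar (n = 1) case. All parameter values below are illustrative choices, not taken from the paper; the sampled-data terms \(\mathcal {K}x(t_k)\), \(\mathcal {M}y(t_k)\) are implemented as zero-order holds updated every h seconds, and the distributed-delay integrals are approximated by Riemann sums over the stored history:

```python
import numpy as np

# Forward-Euler simulation of the scalar closed-loop system (8);
# all parameters are illustrative only.
dt, T = 1e-3, 10.0
delta1 = delta2 = 0.1        # leakage delays
tau1 = tau2 = 0.2            # discrete/distributed delays (held constant here)
h = 0.05                     # sampling period (so tau_3 = h)
a, b = 2.0, 2.0
w1 = w2 = v1 = v2 = 0.1
K, M = -1.0, -1.0            # sampled-data gains

nd = round(max(delta1, delta2, tau1, tau2) / dt)     # history length in steps
N = round(T / dt)
x = np.concatenate([np.ones(nd + 1), np.zeros(N)])   # constant history x = 1
y = np.concatenate([-np.ones(nd + 1), np.zeros(N)])  # constant history y = -1
steps_per_sample = round(h / dt)
xs, ys = x[nd], y[nd]        # zero-order-hold values x(t_k), y(t_k)

for k in range(nd, nd + N):
    if (k - nd) % steps_per_sample == 0:             # sampling instant t_k
        xs, ys = x[k], y[k]
    id1, id2 = k - round(delta1 / dt), k - round(delta2 / dt)
    it1, it2 = k - round(tau1 / dt), k - round(tau2 / dt)
    dist_y = np.tanh(y[it1:k]).sum() * dt            # ~ int_{t-tau1}^t f2(y(s)) ds
    dist_x = np.tanh(x[it2:k]).sum() * dt
    x[k + 1] = x[k] + dt * (-a * x[id1] + w1 * np.tanh(y[it1]) + w2 * dist_y + K * xs)
    y[k + 1] = y[k] + dt * (-b * y[id2] + v1 * np.tanh(x[it2]) + v2 * dist_x + M * ys)

print(abs(x[-1]), abs(y[-1]))   # both decay toward the origin
```

With these (stabilizing) gains the trajectories contract toward the origin, which is the behavior the LMI conditions of Sect. 3 certify rigorously.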
We present the following lemmas, which will be employed in the main theorems.
Lemma 2.1
[51] For any positive definite matrix \(N>0\) and scalars \(\beta>\alpha >0\), provided the following integrals are well defined, we have
$$\begin{aligned}&-(\beta -\alpha )\int _{\alpha }^{\beta }e^T(s)Ne(s) ds\le -\Big (\int _{\alpha }^{\beta }e(s)ds\Big )^T\ N \Big (\int _{\alpha }^{\beta }e(s)ds\Big ),\\&-\frac{\beta ^2}{2!}\int _{-\beta }^{0}\int _{t+\delta }^{t}e^T(s)Ne(s)dsd\delta \le -\Big (\int _{-\beta }^{0}\int _{t+\delta }^{t}e(s)dsd\delta \Big )^T\ N\ \Big (\int _{-\beta }^{0}\int _{t+\delta }^{t}e(s)dsd\delta \Big ). \end{aligned}$$
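The first (Jensen-type) inequality of Lemma 2.1 can be spot-checked numerically; the sketch below uses a random positive definite N and a random sampled function e(s) on a uniform grid (all values illustrative):

```python
import numpy as np

# Numerical spot-check of the Jensen-type bound:
#   (int e ds)^T N (int e ds) <= (beta - alpha) * int e^T N e ds.
rng = np.random.default_rng(1)
n, m = 3, 400
alpha, beta = 0.0, 2.0
ds = (beta - alpha) / m

A = rng.standard_normal((n, n))
N = A @ A.T + n * np.eye(n)                 # positive definite weight

e = rng.standard_normal((m, n))             # samples of e(s) on a uniform grid
lhs = e.sum(axis=0) * ds                    # Riemann sum for int_alpha^beta e(s) ds
lhs_val = lhs @ N @ lhs
rhs_val = (beta - alpha) * np.einsum('ij,jk,ik->', e, N, e) * ds

assert lhs_val <= rhs_val + 1e-9            # the bound holds
```

For the uniform grid the discrete analogue holds exactly (it is the discrete Jensen inequality for the quadratic form induced by N), so the assertion passes for any sample.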
Lemma 2.2
[52] For any constant matrix \(R\in R^{n \times n}\) with \(R=R^T>0\), scalar \(\tau _M>0\), and vector function \(\dot{e}:[-\tau _M,0]\rightarrow R^n\) such that the following integral is well defined, it holds that
$$\begin{aligned}&-\tau _M\int _{t-\tau _M}^{t} \dot{e}^T(s)R \dot{e}(s)ds \le \left[ \begin{array} {cc} e(t)\\ e(t-\tau _M) \end{array}\right] ^T \left[ \begin{array} {cc} -R &{} R\\ R &{} -R \end{array}\right] \ \left[ \begin{array} {cc} e(t)\\ e(t-\tau _M) \end{array}\right] . \end{aligned}$$
Lemma 2.3
[53] Let \(f_1,f_2,...,f_N:R^m\mapsto R \) have positive values in an open subset D of \(R^m\). Then, the reciprocally convex combination of \(f_i\) over D satisfies
$$\begin{aligned} \min _{\{\alpha _i|\alpha _i>0,\sum _{i}\alpha _i=1\}} \sum \limits _{i}\frac{1}{\alpha _i}f_i(t)=\sum \limits _{i}f_i(t)+ \max _{g_{ij}(t)}\sum \limits _{i\ne j}g_{i,j}(t) \end{aligned}$$
subject to
$$\begin{aligned} \bigg \{g_{i,j}(t):R^m\mapsto R,g_{i,j}(t)\triangleq g_{j,i}(t), \left[ \begin{array}{cc} f_i(t) &{} g_{i,j}(t) \\ g_{i,j}(t) &{} f_j(t) \end{array}\right] \ge 0 \bigg \}. \end{aligned}$$

3 Main results

This section examines the stability of system (8) and derives sufficient conditions that guarantee it.
Theorem 3.1
Under Assumption (A), for given gain matrices \(\mathcal {K},\ \mathcal {M}\) and scalars \(\tau _1\), \(\tau _2\), \(\tau _3\), \(\delta _1\), \(\delta _2\), \(\mu _1\), and \(\mu _2\), the equilibrium point of system (8) is globally asymptotically stable if there exist matrices \(\mathcal {P}>0\), \(\mathcal {Q}>0\), \(P_i>0\), \((i=1,2,...,8)\), \(Q_j>0\), \(R_j>0\), \(\left[ \begin{array} {cc} X_{j} &{} Y_{j}\\ \maltese &{} Z_{j} \end{array}\right] >0,\) positive-definite diagonal matrices \(U_{j},\ J_{j}, \) and any matrices \( \left[ \begin{array} {cc} M_{j} &{} N_{j}\\ H_{j} &{} T_{j} \end{array}\right] ,\ (j=1,2,...,4), {\tilde{R}}_1,\ {\tilde{R}}_2, S_1 \) and \(S_2\) such that the following LMIs hold:
$$\begin{aligned}&\left[ \begin{array}{c|c} \left[ \begin{array} {cc} X_{k} &{} Y_{k} \\ \maltese &{} Z_{k} \end{array}\right] &{} \left[ \begin{array}{cc} M_{k} &{} N_{k} \\ H_{k} &{} T_{k} \end{array}\right] \\ \hline \maltese &{} \left[ \begin{array} {cc} X_{k} &{} Y_{k} \\ \maltese &{} Z_{k} \end{array}\right] \end{array} \right]>0,\ \left[ \begin{array} {cc} R_l &{} {\tilde{R}}_l\\ \maltese &{} R_l \end{array}\right] >0,\ k=1,...,4,\ l=1,2, \end{aligned}$$
(9)
$$\begin{aligned}&\Omega =\left[ \begin{array}{c|c} \left[ \begin{array} {cc} \Omega _{11} &{} [0]_{11n\times 11n} \\ \maltese &{} \Omega _{22} \end{array}\right] &{} [0]_{24n\times 24n} \\ \hline \maltese &{} \left[ \begin{array} {cc} \Omega _{33} &{} [0]_{11n\times 11n}\\ \maltese &{} \Omega _{44} \end{array}\right] \end{array} \right] <0, \end{aligned}$$
(10)
where
$$\begin{aligned} \Omega _{11}&=\left[ \begin{array}{cccccccccccc} (1,1) &{} (1,2) &{} -S_1A &{} Q_1 &{} 0 &{} Q_3+S_1\mathcal {K} &{} 0 &{} G_{12}J_1 &{} S_1W_1 &{} G_{22}J_2 &{} 0 &{} S_1W_2\\ \maltese &{} (2,2) &{} -S_1A &{} 0 &{} 0 &{} S_1\mathcal {K} &{} 0 &{} 0 &{} S_1W_1 &{} 0 &{} 0 &{} S_1W_2 \\ \maltese &{} \maltese &{} -\delta _1P_1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{} (4,4) &{} Q_1 &{} 0 &{} 0 &{} 0 &{} G_{32}J_3 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{}\maltese &{} (5,5) &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -2Q_3 &{} Q_3 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} (7,7) &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -J_1 &{} 0&{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -J_3 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} (10,10) &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -J_4 &{} 0 \end{array}\right] ,\\ \Omega _{22}&=\left[ \begin{array}{cccccccccccccc} -R_4 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} -X_1 &{} -Y_1 &{} -M_1 &{} -N_1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} -Z_1 &{} -H_1 &{} -T_1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} -X_1 &{} -Y_1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -R_1 &{} -\tilde{R_1} &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} 
\maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -R_1 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -X_3 &{} -Y_3 &{} -M_3 &{} -N_3 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_3 &{} -H_3 &{} -T_3 \\ \maltese &{}\maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -X_3 &{} -Y_3 \\ \maltese &{}\maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_3 \end{array}\right] ,\\ \Omega _{33}&=\left[ \begin{array}{cccccccccccc} (1,1)^* &{} (1,2)^* &{} -S_2B &{} Q_2 &{} 0 &{} Q_4+S_2\mathcal {M} &{} 0 &{} F_{12}U_1 &{} S_2V_1 &{} F_{22}U_2 &{} 0 &{} S_2V_2\\ \maltese &{} (2,2)^* &{} -S_2B &{} 0 &{} 0 &{} S_2\mathcal {M} &{} 0 &{} 0 &{} S_2V_1 &{} 0 &{} 0 &{} S_2V_2 \\ \maltese &{} \maltese &{} -\delta _2P_2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{} (4,4)^* &{} Q_2 &{} 0 &{} 0 &{} 0 &{} F_{32}U_3 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{}\maltese &{} (5,5)^* &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -2Q_4 &{} Q_4 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} (7,7)^* &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -U_1 &{} 0&{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -U_3 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} (10,10)^* &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} 
-U_4 &{} 0 \end{array}\right] ,\\ \Omega _{44}&=\left[ \begin{array}{ccccccccccc} -R_3 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} -X_2 &{} -Y_2 &{} -M_2 &{} -N_2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} -Z_2 &{} -H_2 &{} -T_2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} -X_2 &{} -Y_2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -R_2 &{} -{\tilde{R}}_2 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -R_2 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -X_4 &{} -Y_4 &{} -M_4 &{} -N_4 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_4 &{} -H_4 &{} -T_4 \\ \maltese &{}\maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -X_4 &{} -Y_4 \\ \maltese &{}\maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_4 \end{array}\right] , \end{aligned}$$
$$\begin{aligned}&(1,1)=\delta _1P_1+P_3+P_5+P_7-Q_1-Q_3+\tau _2^2X_1+\tau _3^2X_3-G_{11}J_1-G_{21}J_2,\\&(1,1)^*=\delta _2P_2+P_4+P_6+P_8-Q_2-Q_4+\tau _1^2X_2+\tau _3^2X_4-F_{11}U_1-F_{21}U_2,\\&(1,2)=\mathcal {P}+\tau _2^2Y_1+\tau _3^2Y_3-S_1,\ (1,2)^*=\mathcal {Q}+\tau _1^2Y_2+\tau _3^2Y_4-S_2,\\&(2,2)=\tau _2^2Q_1+\tau _3^2Q_3+\tau _2^2Z_1+\tau _3^2Z_3+\bigg (\frac{\tau _2^2}{2!}\bigg )^2R_1-S_1-S_1^T,\\&(2,2)^*=\tau _1^2Q_2+\tau _3^2Q_4+\tau _1^2Z_2+\tau _3^2Z_4+\bigg (\frac{\tau _1^2}{2!}\bigg )^2R_2-S_2-S_2^T,\\&(4,4)=-(1-\mu _2)P_3-2Q_1-G_{31} J_3-G_{41} J_4,\\&(4,4)^*=-(1-\mu _1)P_4-2Q_2-F_{31}U_3-F_{41}U_4,\\&(5,5)=-P_5-Q_1,\ (7,7)=-Q_3-P_7, \ (10,10)=-J_2+\tau ^2_2R_4,\\&(5,5)^*=-P_6-Q_2,\ (7,7)^*=-Q_4-P_8, \ (10,10)^*=-U_2+\tau ^2_1R_3. \end{aligned}$$
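Once candidate matrices are available (e.g. from an SDP solver), block conditions such as the first LMI in (9) can be verified by assembling the full symmetric matrix (\(\maltese \) denotes the symmetric entry) and checking its smallest eigenvalue. The sketch below uses arbitrary illustrative values, not a feasible solution of the actual theorem:

```python
import numpy as np

# Verify the first block LMI in (9) for given candidate matrices;
# the numerical values are illustrative only.
n = 2
I = np.eye(n)
X, Y, Z = 2 * I, 0.5 * I, 2 * I              # diagonal block [[X, Y], [*, Z]]
Mk, Nk, Hk, Tk = 0.1 * I, 0.1 * I, 0.1 * I, 0.1 * I

top = np.block([[X, Y], [Y.T, Z]])           # the maltese entry is Y^T
cross = np.block([[Mk, Nk], [Hk, Tk]])
lmi = np.block([[top, cross], [cross.T, top]])

min_eig = np.linalg.eigvalsh(lmi).min()
print(min_eig > 0)                           # True: this candidate satisfies the LMI
```

The same eigenvalue test applies to \(\Omega <0\) in (10) after flipping the sign (check that the largest eigenvalue of \(\Omega \) is negative).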
Proof
We construct the Lyapunov–Krasovskii functional as:
$$\begin{aligned} V(x_{t},y_{t},t)=\sum \limits _{i=1}^{7} V_{i}(x_{t},y_{t},t), \end{aligned}$$
(11)
where
$$\begin{aligned} V_1(x_{t},y_{t},t)=&x^T(t)\mathcal {P}x(t)+y^T(t)\mathcal {Q}y(t),\\ V_2(x_{t},y_{t},t)=&\delta _1\int _{t-\delta _1}^{t}x^T(s)P_1x(s)ds +\delta _2\int _{t-\delta _2}^{t}y^T(s)P_2y(s)ds,\\ V_3(x_{t},y_{t},t)=&\int _{t-\tau _2(t)}^{t}x^T(s)P_3x(s)ds +\int _{t-\tau _1(t)}^{t}y^T(s)P_4y(s)ds+\int _{t-\tau _2}^{t}x^T(s)P_5x(s)ds\\&+ \int _{t-\tau _1}^{t}y^T(s)P_6y(s)ds+\int _{t-\tau _3}^{t}x^T(s)P_7x(s)ds+\int _{t-\tau _3}^{t}y^T(s)P_8y(s)ds, \\ V_4(x_{t},y_{t},t)=&\tau _2\int _{-\tau _2}^{0}\int _{t+\theta }^{t}\dot{x}^T(s)Q_1\dot{x}(s)dsd\theta + \tau _1\int _{-\tau _1}^{0}\int _{t+\theta }^{t}\dot{y}^T(s)Q_2\dot{y}(s)dsd\theta \\&+ \tau _3\int _{-\tau _3}^{0}\int _{t+\theta }^{t}\dot{x}^T(s)Q_3\dot{x}(s)dsd\theta + \tau _3\int _{-\tau _3}^{0}\int _{t+\theta }^{t}\dot{y}^T(s)Q_4\dot{y}(s)dsd\theta ,\\ V_5(x_{t},y_{t},t)=&\sum \limits _{i=0}^{1}\tau _{(3-i)}\int _{-\tau _{(3-i)}}^{0}\int _{t+\theta }^{t} \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)}\\ \maltese &{} Z_{(3-2i)} \end{array}\right] \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] dsd\theta \\+&\sum \limits _{i=1}^{2}\tau _{(2i-1)}\int _{-\tau _{(2i-1)}}^{0}\int _{t+\theta }^{t} \left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(2i)} &{} Y_{(2i)}\\ \maltese &{} Z_{(2i)} \end{array}\right] \left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] dsd\theta ,\\ V_6(x_{t},y_{t},t)=&\frac{\tau _2^2}{2!}\int _{-\tau _2}^{0}\int _{\theta }^{0}\int _{t+\vartheta }^{t} \dot{x}^T(s) R_1\dot{x}(s)dsd\vartheta d\theta \\&+\frac{\tau _1^2}{2!}\int _{-\tau _1}^{0}\int _{\theta }^{0} \int _{t+\vartheta }^{t}\dot{y}^T(s) R_2\dot{y}(s)dsd\vartheta d\theta ,\\ V_7(x_{t},y_{t},t)=&\tau _1\int _{-\tau _1}^{0}\int _{t+\theta }^{t}f^T_2(y(s))R_3f_2(y(s))dsd\theta \\&+ \tau _2\int _{-\tau _2}^{0}\int _{t+\theta }^{t}g^T_2(x(s))R_4g_2(x(s))dsd\theta . \end{aligned}$$
Calculating the derivatives \(\dot{V}_i(x_{t},y_{t},t),\ i=1,2,...,7\) along the trajectories of system (8) gives
$$\begin{aligned} \dot{V}_1(x_{t},y_{t},t)=&2x^T(t)\mathcal {P}\dot{x}(t)+2y^T(t)\mathcal {Q}\dot{y}(t),\end{aligned}$$
(12)
$$\begin{aligned} \dot{V}_2(x_{t},y_{t},t)=&\delta _1[x^T(t)P_1x(t)-x^T(t-\delta _1)P_1x(t-\delta _1)] +\delta _2[y^T(t)P_2y(t)-y^T(t-\delta _2)P_2y(t-\delta _2)],\end{aligned}$$
(13)
$$\begin{aligned} \dot{V}_3(x_{t},y_{t},t)\le&x^T(t)[P_3+P_5+P_7]x(t)+y^T(t)[P_4+P_6+P_8]y(t) -(1-\mu _2)x^T(t-\tau _2(t))P_3x(t-\tau _2(t))\nonumber \\&-(1-\mu _1)y^T(t-\tau _1(t))P_4y(t-\tau _1(t))-x^T(t-\tau _2)P_5x(t-\tau _2)-y^T(t-\tau _1) P_6y(t-\tau _1)\nonumber \\&-x^T(t-\tau _3)P_7x(t-\tau _3)-y^T(t-\tau _3)P_8y(t-\tau _3),\end{aligned}$$
(14)
$$\begin{aligned} \dot{V}_4(x_{t},y_{t},t)=&\dot{x}^T(t)[\tau _2^2Q_1+\tau _3^2Q_3]\dot{x}(t)+ \dot{y}^T(t)[\tau _1^2Q_2+\tau _3^2Q_4]\dot{y}(t)-\tau _2\int _{t-\tau _2}^{t}\dot{x}^T(s)Q_1\dot{x}(s)ds\nonumber \\&-\tau _1\int _{t-\tau _1}^{t}\dot{y}^T(s)Q_2\dot{y}(s)ds-\tau _3\int _{t-\tau _3}^{t}\dot{x}^T(s)Q_3\dot{x}(s)ds -\tau _3\int _{t-\tau _3}^{t}\dot{y}^T(s)Q_4\dot{y}(s)ds,\end{aligned}$$
(15)
$$\begin{aligned} \dot{V}_5(x_{t},y_{t},t)=&\sum \limits _{i=0}^{1}\tau ^2_{(3-i)} \left[ \begin{array} {cc} x(t)\\ \dot{x}(t) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)}\\ \maltese &{} Z_{(3-2i)} \end{array}\right] \left[ \begin{array} {cc} x(t)\\ \dot{x}(t) \end{array}\right] \nonumber \\ +&\sum \limits _{i=1}^{2}\tau ^2_{(2i-1)} \left[ \begin{array} {cc} y(t)\\ \dot{y}(t) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(2i)} &{} Y_{(2i)}\\ \maltese &{} Z_{(2i)} \end{array}\right] \left[ \begin{array} {cc} y(t)\\ \dot{y}(t) \end{array}\right] \nonumber \\ -&\sum \limits _{i=0}^{1}\tau _{(3-i)}\int _{t-\tau _{(3-i)}}^{t} \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)}\\ \maltese &{} Z_{(3-2i)} \end{array}\right] \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ds\nonumber \\ -&\sum \limits _{i=1}^{2}\tau _{(2i-1)}\int _{t-\tau _{(2i-1)}}^{t} \left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(2i)} &{} Y_{(2i)}\\ \maltese &{} Z_{(2i)} \end{array}\right] \left[ \begin{array} {cc} y(s)\\ \dot{y}(s), \end{array}\right] ds\end{aligned}$$
(16)
$$\begin{aligned} \dot{V}_6(x_{t},y_{t},t)=&\bigg (\frac{\tau _2^2}{2!}\bigg )^2\dot{x}^T(t)R_1\dot{x}(t) +\bigg (\frac{\tau _1^2}{2!}\bigg )^2\dot{y}^T(t)R_2\dot{y}(t) -\frac{\tau _2^2}{2!}\int _{-\tau _2}^{0}\int _{t+\theta }^{t}\dot{x}^T(s)R_1\dot{x}(s)dsd\theta \nonumber \\&-\frac{\tau _1^2}{2!}\int _{-\tau _1}^{0}\int _{t+\theta }^{t}\dot{y}^T(s)R_2\dot{y}(s)dsd\theta ,\end{aligned}$$
(17)
$$\begin{aligned} \dot{V}_7(x_{t},y_{t},t)=&f_2^T(y(t))\tau _1^2R_3 f_2(y(t))+g_2^T(x(t))\tau _2^2R_4 g_2(x(t))\nonumber \\&-\tau _1\int _{t-\tau _1}^{t}f_2^T(y(s))R_3 f_2(y(s))ds-\tau _2\int _{t-\tau _2}^{t}g_2^T(x(s))R_4 g_2(x(s))ds, \end{aligned}$$
(18)
The integral terms in (15) can be split as follows:
$$\begin{aligned} -\tau _2\int _{t-\tau _2}^{t}\dot{x}^T(s)Q_1\dot{x}(s)ds=&-\tau _2 \int _{t-\tau _2}^{t-\tau _2(t)}\dot{x}^T(s)Q_1\dot{x}(s)ds- \tau _2\int _{t-\tau _2(t)}^{t}\dot{x}^T(s)Q_1\dot{x}(s)ds,\\ -\tau _1\int _{t-\tau _1}^{t}\dot{y}^T(s)Q_2\dot{y}(s)ds=&- \tau _1\int _{t-\tau _1}^{t-\tau _1(t)}\dot{y}^T(s)Q_2\dot{y}(s)ds-\tau _1\int _{t-\tau _1(t)}^{t}\dot{y}^T(s)Q_2\dot{y}(s)ds,\\ -\tau _3\int _{t-\tau _3}^{t}\dot{x}^T(s)Q_3\dot{x}(s)ds=&- \tau _3\int _{t-\tau _3}^{t-\tau _3(t)}\dot{x}^T(s)Q_3\dot{x}(s)ds-\tau _3\int _{t-\tau _3(t)}^{t}\dot{x}^T(s)Q_3\dot{x}(s)ds,\\ -\tau _3\int _{t-\tau _3}^{t}\dot{y}^T(s)Q_4\dot{y}(s)ds=&- \tau _3\int _{t-\tau _3}^{t-\tau _3(t)}\dot{y}^T(s)Q_4\dot{y}(s)ds-\tau _3\int _{t-\tau _3(t)}^{t}\dot{y}^T(s)Q_4\dot{y}(s)ds. \end{aligned}$$
By applying Lemma 2.2, we obtain
$$\begin{aligned}&-\tau _2\int _{t-\tau _2}^{t-\tau _2(t)}\dot{x}^T(s)Q_1\dot{x}(s)ds\le&\left[ \begin{array} {cc} x(t-\tau _2(t))\\ x(t-\tau _2) \end{array}\right] ^T \left[ \begin{array} {cc} -Q_1 &{} Q_1\\ \maltese &{} -Q_1 \end{array}\right] \left[ \begin{array} {cc} x(t-\tau _2(t))\\ x(t-\tau _2) \end{array}\right] , \end{aligned}$$
(19)
$$\begin{aligned}&-\tau _2\int _{t-\tau _2(t)}^{t}\dot{x}^T(s)Q_1\dot{x}(s)ds\le&\left[ \begin{array} {cc} x(t)\\ x(t-\tau _2(t)) \end{array}\right] ^T \left[ \begin{array} {cc} -Q_1 &{} Q_1\\ \maltese &{} -Q_1 \end{array}\right] \left[ \begin{array} {cc} x(t)\\ x(t-\tau _2(t)) \end{array}\right] ,\end{aligned}$$
(20)
$$\begin{aligned}&-\tau _1\int _{t-\tau _1}^{t-\tau _1(t)}\dot{y}^T(s)Q_2\dot{y}(s)ds\le&\left[ \begin{array} {cc} y(t-\tau _1(t))\\ y(t-\tau _1) \end{array}\right] ^T \left[ \begin{array} {cc} -Q_2 &{} Q_2\\ \maltese &{} -Q_2 \end{array}\right] \left[ \begin{array} {cc} y(t-\tau _1(t))\\ y(t-\tau _1) \end{array}\right] ,\end{aligned}$$
(21)
$$\begin{aligned}&-\tau _1\int _{t-\tau _1(t)}^{t}\dot{y}^T(s)Q_2\dot{y}(s)ds\le&\left[ \begin{array} {cc} y(t)\\ y(t-\tau _1(t)) \end{array}\right] ^T \left[ \begin{array} {cc} -Q_2 &{} Q_2\\ \maltese &{} -Q_2 \end{array}\right] \left[ \begin{array} {cc} y(t)\\ y(t-\tau _1(t)) \end{array}\right] ,\end{aligned}$$
(22)
$$\begin{aligned}&-\tau _3\int _{t-\tau _3}^{t-\tau _3(t)}\dot{x}^T(s)Q_3\dot{x}(s)ds\le&\left[ \begin{array} {cc} x(t-\tau _3(t))\\ x(t-\tau _3) \end{array}\right] ^T \left[ \begin{array} {cc} -Q_3 &{} Q_3\\ \maltese &{} -Q_3 \end{array}\right] \left[ \begin{array} {cc} x(t-\tau _3(t))\\ x(t-\tau _3) \end{array}\right] ,\end{aligned}$$
(23)
$$\begin{aligned}&-\tau _3\int _{t-\tau _3(t)}^{t}\dot{x}^T(s)Q_3\dot{x}(s)ds\le&\left[ \begin{array} {cc} x(t)\\ x(t-\tau _3(t)) \end{array}\right] ^T \left[ \begin{array} {cc} -Q_3 &{} Q_3\\ \maltese &{} -Q_3 \end{array}\right] \left[ \begin{array} {cc} x(t)\\ x(t-\tau _3(t)) \end{array}\right] ,\end{aligned}$$
(24)
$$\begin{aligned}&-\tau _3\int _{t-\tau _3}^{t-\tau _3(t)}\dot{y}^T(s)Q_4\dot{y}(s)ds\le&\left[ \begin{array} {cc} y(t-\tau _3(t))\\ y(t-\tau _3) \end{array}\right] ^T \left[ \begin{array} {cc} -Q_4 &{} Q_4\\ \maltese &{} -Q_4 \end{array}\right] \left[ \begin{array} {cc} y(t-\tau _3(t))\\ y(t-\tau _3) \end{array}\right] ,\end{aligned}$$
(25)
$$\begin{aligned}&-\tau _3\int _{t-\tau _3(t)}^{t}\dot{y}^T(s)Q_4\dot{y}(s)ds\le&\left[ \begin{array} {cc} y(t)\\ y(t-\tau _3(t)) \end{array}\right] ^T \left[ \begin{array} {cc} -Q_4 &{} Q_4\\ \maltese &{} -Q_4 \end{array}\right] \left[ \begin{array} {cc} y(t)\\ y(t-\tau _3(t)) \end{array}\right] . \end{aligned}$$
(26)
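The bounds (20)–(26) rest on the Jensen-type integral inequality of Lemma 2.1, whose discrete analogue states that \(m\sum _{k}v_k^TQv_k\ge \big (\sum _{k}v_k\big )^TQ\big (\sum _{k}v_k\big )\) for any \(Q>0\). The following sketch is a purely illustrative numerical sanity check; the sampled "trajectory" and the matrix \(Q\) are arbitrary and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete analogue of Lemma 2.1 (Jensen): for Q > 0 and samples v_1..v_m,
# m * sum_k v_k^T Q v_k  >=  (sum_k v_k)^T Q (sum_k v_k).
def jensen_gap(V, Q):
    m = V.shape[0]
    lhs = m * sum(v @ Q @ v for v in V)   # m times the sum of quadratic forms
    s = V.sum(axis=0)
    rhs = s @ Q @ s                       # quadratic form of the summed vector
    return lhs - rhs                      # nonnegative by Cauchy-Schwarz

M = rng.standard_normal((2, 2))
Q = M @ M.T + 2 * np.eye(2)               # a random positive definite matrix
V = rng.standard_normal((50, 2))          # grid samples standing in for x_dot(s)
assert jensen_gap(V, Q) >= 0.0
```

The gap is exactly the slack discarded when each of (20)–(26) replaces an integral of quadratic forms by a single quadratic form of the integrated state.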
Applying Lemmas 2.1 and 2.3 to \(\dot{V}_5(x_{t},y_{t},t)\), we obtain the following results:
$$\begin{aligned}&-\sum \limits _{i=0}^{1}\tau _{(3-i)}\int _{t-\tau _{(3-i)}}^{t} \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)}\\ \maltese &{} Z_{(3-2i)} \end{array}\right] \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ds\\&\quad =-\sum \limits _{i=0}^{1}\Bigg [\frac{\tau _{(3-i)}}{\tau _{(3-i)}(t)}\times \tau _{(3-i)}(t)\int _{t-\tau _{(3-i)}(t)}^{t} \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)} \\ \maltese &{} Z_{(3-2i)} \end{array}\right] \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ds\\&\qquad +\frac{\tau _{(3-i)}}{\tau _{(3-i)}-\tau _{(3-i)}(t)}\times \big (\tau _{(3-i)}-\tau _{(3-i)}(t)\big )\int _{t-\tau _{(3-i)}}^{t-\tau _{(3-i)}(t)} \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ^T \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)} \\ \maltese &{} Z_{(3-2i)} \end{array}\right] \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ds\Bigg ]\\&\quad \le -\sum \limits _{i=0}^{1}\Bigg [\frac{\tau _{(3-i)}}{\tau _{(3-i)}(t)}\int _{t-\tau _{(3-i)}(t)}^{t} \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ^T ds \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)} \\ \maltese &{} Z_{(3-2i)} \end{array}\right] \int _{t-\tau _{(3-i)}(t)}^{t} \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ds\\&\qquad +\frac{\tau _{(3-i)}}{\tau _{(3-i)}-\tau _{(3-i)}(t)}\int _{t-\tau _{(3-i)}}^{t-\tau _{(3-i)}(t)} \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ^T ds \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)} \\ \maltese &{} Z_{(3-2i)} \end{array}\right] \int _{t-\tau _{(3-i)}}^{t-\tau _{(3-i)}(t)} \left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ds\Bigg ], \end{aligned}$$
$$\begin{aligned} \le -\sum \limits _{i=0}^{1}\left[ \begin{array}{c} \int _{t-\tau _{(3-i)}(t)}^{t}\left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ds\\ \hline \int _{t-\tau _{(3-i)}}^{t-\tau _{(3-i)}(t)}\left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ds \end{array} \right] ^T&\left[ \begin{array}{c|c} \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)} \\ \maltese &{} Z_{(3-2i)} \end{array}\right] &{} \left[ \begin{array}{cc} M_{(3-2i)} &{} N_{(3-2i)} \\ H_{(3-2i)} &{} T_{(3-2i)} \end{array}\right] \\ \hline \maltese &{} \left[ \begin{array} {cc} X_{(3-2i)} &{} Y_{(3-2i)} \\ \maltese &{} Z_{(3-2i)} \end{array}\right] \end{array} \right] \nonumber \\ {}&\times \left[ \begin{array}{c} \int _{t-\tau _{(3-i)}(t)}^{t}\left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ds\\ \hline \int _{t-\tau _{(3-i)}}^{t-\tau _{(3-i)}(t)}\left[ \begin{array} {cc} x(s)\\ \dot{x}(s) \end{array}\right] ds \end{array} \right] . \end{aligned}$$
(27)
The remaining terms in inequality (16) can be estimated in a similar way to (27):
$$\begin{aligned}&-\sum \limits _{i=1}^{2}\tau _{(2i-1)}\int _{t-\tau _{(2i-1)}}^{t} \left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] ^T \left[ \begin{array} {cc} X_{2i} &{} Y_{2i}\\ \maltese &{} Z_{2i} \end{array}\right] \left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] ds\nonumber \\ {}&\le -\sum \limits _{i=1}^{2} \left[ \begin{array}{c} \begin{array}{c} \int _{t-\tau _{(2i-1)}(t)}^{t}\left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] ds\\ \hline \int _{t-\tau _{(2i-1)}}^{t-\tau _{(2i-1)}(t)}\left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] ds \end{array} \\ \end{array} \right] ^T \left[ \begin{array}{c|c} \left[ \begin{array} {cc} X_{(2i)} &{} Y_{(2i)} \\ \maltese &{} Z_{(2i)} \end{array}\right] &{} \left[ \begin{array}{cc} M_{(2i)} &{} N_{(2i)} \\ H_{(2i)} &{} T_{(2i)} \end{array}\right] \\ \hline \maltese &{} \left[ \begin{array} {cc} X_{(2i)} &{} Y_{(2i)} \\ \maltese &{} Z_{(2i)} \end{array}\right] \end{array} \right] \nonumber \\ {}&\hspace{2cm}\times \left[ \begin{array}{c} \begin{array}{c} \int _{t-\tau _{(2i-1)}(t)}^{t}\left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] ds\\ \hline \int _{t-\tau _{(2i-1)}}^{t-\tau _{(2i-1)}(t)}\left[ \begin{array} {cc} y(s)\\ \dot{y}(s) \end{array}\right] ds \end{array} \\ \end{array} \right] . \end{aligned}$$
(28)
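The combination step in (27) and (28) uses the reciprocally convex lemma (Lemma 2.3): if \(\left[ {\begin{matrix} X &{} {\bar{M}}\\ {\bar{M}}^T &{} X \end{matrix}}\right] \ge 0\), then \(\frac{1}{\alpha }a^TXa+\frac{1}{1-\alpha }b^TXb\ge \left[ {\begin{matrix} a\\ b \end{matrix}}\right] ^T\left[ {\begin{matrix} X &{} {\bar{M}}\\ {\bar{M}}^T &{} X \end{matrix}}\right] \left[ {\begin{matrix} a\\ b \end{matrix}}\right] \) for all \(\alpha \in (0,1)\). A numerical sketch follows; the particular choice \({\bar{M}}=0.5X\), made only so that the block matrix is positive semidefinite, is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2
A_ = rng.standard_normal((n, n))
X = A_ @ A_.T + np.eye(n)          # a random X > 0
Mbar = 0.5 * X                      # illustrative cross term keeping the block PSD
block = np.block([[X, Mbar], [Mbar.T, X]])
assert np.linalg.eigvalsh(block).min() >= -1e-10   # hypothesis of Lemma 2.3

# Reciprocally convex bound: (1/a) x'Xx + (1/(1-a)) y'Xy >= [x;y]' block [x;y].
for _ in range(100):
    a = rng.standard_normal(n)
    b = rng.standard_normal(n)
    alpha = rng.uniform(0.05, 0.95)
    lhs = a @ X @ a / alpha + b @ X @ b / (1 - alpha)
    v = np.concatenate([a, b])
    assert lhs >= v @ block @ v - 1e-9
```

This is exactly the inequality that merges the two weighted Jensen terms into the single block-matrix bound of (27).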
The upper bound of the reciprocally convex combination in \(\dot{V}_6(x_{t},y_{t},t)\) can be obtained as
$$\begin{aligned}&-\frac{\tau _2^2}{2!}\int _{-\tau _2}^{0}\int _{t+\theta }^{t}\dot{x}^T(s)R_1\dot{x}(s)dsd\theta \nonumber \\&\quad \le \left[ \begin{array} {cc} \int _{-\tau _2(t)}^{0}\int _{t+\theta }^{t}\dot{x}(s)dsd\theta \\ \int _{-\tau _2}^{-\tau _2(t)}\int _{t+\theta }^{t}\dot{x}(s)dsd\theta \end{array}\right] ^T \left[ \begin{array} {cc} -R_1 &{} -{\tilde{R}}_1\\ \maltese &{} -R_1 \end{array}\right] \left[ \begin{array} {cc} \int _{-\tau _2(t)}^{0}\int _{t+\theta }^{t}\dot{x}(s)dsd\theta \\ \int _{-\tau _2}^{-\tau _2(t)}\int _{t+\theta }^{t}\dot{x}(s)dsd\theta \end{array}\right] ,\end{aligned}$$
(29)
$$\begin{aligned}&-\frac{\tau _1^2}{2!}\int _{-\tau _1}^{0}\int _{t+\theta }^{t}\dot{y}^T(s)R_2\dot{y}(s)dsd\theta \nonumber \\&\quad \le \left[ \begin{array} {cc} \int _{-\tau _1(t)}^{0}\int _{t+\theta }^{t}\dot{y}(s)dsd\theta \\ \int _{-\tau _1}^{-\tau _1(t)}\int _{t+\theta }^{t}\dot{y}(s)dsd\theta \end{array}\right] ^T \left[ \begin{array} {cc} -R_2 &{} -{\tilde{R}}_2\\ \maltese &{} -R_2 \end{array}\right] \left[ \begin{array} {cc} \int _{-\tau _1(t)}^{0}\int _{t+\theta }^{t}\dot{y}(s)dsd\theta \\ \int _{-\tau _1}^{-\tau _1(t)}\int _{t+\theta }^{t}\dot{y}(s)dsd\theta \end{array}\right] . \end{aligned}$$
(30)
According to Lemma 2.1, the following inequalities hold:
$$\begin{aligned}&-\tau _1\int _{t-\tau _1}^{t}f_2^T(y(s))R_3 f_2(y(s))ds\le - \Bigg (\int _{t-\tau _1}^{t}f_2(y(s))ds\Bigg )^T R_3\Bigg (\int _{t-\tau _1}^{t}f_2(y(s))ds\Bigg ),\end{aligned}$$
(31)
$$\begin{aligned}&-\tau _2\int _{t-\tau _2}^{t}g_2^T(x(s))R_4 g_2(x(s))ds\le - \Bigg (\int _{t-\tau _2}^{t}g_2(x(s))ds\Bigg )^T R_4\Bigg (\int _{t-\tau _2}^{t}g_2(x(s))ds\Bigg ). \end{aligned}$$
(32)
On the other hand, for any matrices \(S_1\) and \(S_2\) with appropriate dimensions, the following equations hold:
$$\begin{aligned}&0=\big [x^T(t)+\dot{x}^T(t)\big ] S_1\Big [-\dot{x}(t)-Ax(t-\delta _1)+W_1f_1(y(t-\tau _1(t))) +W_2\int _{t-\tau _1}^{t}f_2(y(s))ds+\mathcal {K}x(t-\tau _3(t))\Big ],\end{aligned}$$
(33)
$$\begin{aligned}&0=\big [y^T(t)+\dot{y}^T(t)\big ]S_2\Big [-\dot{y}(t)-By(t-\delta _2)+V_1g_1(x(t-\tau _2(t))) +V_2\int _{t-\tau _2}^{t}g_2(x(s))ds+\mathcal {M}y(t-\tau _3(t))\Big ]. \end{aligned}$$
(34)
Furthermore, based on Assumption (A), we have
$$\begin{aligned}&\Big [f_{ki}(y_i(t))-F_{ki}^{-}y_i(t)\Big ]^T\Big [f_{ki}(y_i(t))-F_{ki}^{+}y_i(t)\Big ]\le 0,\ i=1,2,...,n,\\&\Big [g_{kj}(x_j(t))-G_{kj}^{-}x_j(t)\Big ]^T\Big [g_{kj}(x_j(t))-G_{kj}^{+}x_j(t)\Big ]\le 0,\ j=1,2,...,n, \end{aligned}$$
where \(k=1,2\),
which is equivalent to
$$\begin{aligned}&\left[ \begin{array} {cc} y(t)\\ f_{k}(y(t)) \end{array}\right] ^T \left[ \begin{array} {cc} F_{ki}^{-}F_{ki}^{+}y_iy_i^T &{} -\frac{F_{ki}^{-}+F_{ki}^{+}}{2}y_iy_i^T\\ -\frac{F_{ki}^{-}+F_{ki}^{+}}{2}y_iy_i^T &{} y_iy_i^T \end{array}\right] \left[ \begin{array} {cc} y(t)\\ f_{k}(y(t)) \end{array}\right] \le 0,\\&\left[ \begin{array} {cc} x(t)\\ g_{k}(x(t)) \end{array}\right] ^T \left[ \begin{array} {cc} G_{kj}^{-}G_{kj}^{+}x_jx_j^T &{} -\frac{G_{kj}^{-}+G_{kj}^{+}}{2}x_jx_j^T\\ -\frac{G_{kj}^{-}+G_{kj}^{+}}{2}x_jx_j^T &{} x_jx_j^T \end{array}\right] \left[ \begin{array} {cc} x(t)\\ g_{k}(x(t)) \end{array}\right] \le 0. \end{aligned}$$
where \(y_i\) and \(x_j\) denote the unit column vectors with element 1 in the \(i^{th}\) and \(j^{th}\) row, respectively, and zeros elsewhere. Let \(U_k=diag\{u^k_{11},u^k_{12},...,u^k_{1n}\}>0\) and \(J_k=diag\{{\overline{j}}^k_{11},{\overline{j}}^k_{12},...,{\overline{j}}^k_{1n}\}>0\); then
$$\begin{aligned}&\sum \limits _{i=1}^{n}u^k_{1i} \left[ \begin{array} {cc} y(t)\\ f_{k}(y(t)) \end{array}\right] ^T \left[ \begin{array} {cc} F_{ki}^{-}F_{ki}^{+}y_iy_i^T &{} -\frac{F_{ki}^{-}+F_{ki}^{+}}{2}y_iy_i^T\\ -\frac{F_{ki}^{-}+F_{ki}^{+}}{2}y_iy_i^T &{} y_iy_i^T \end{array}\right] \left[ \begin{array} {cc} y(t)\\ f_{k}(y(t)) \end{array}\right] \le 0,\\&\sum \limits _{j=1}^{n}{\overline{j}}^k_{1j} \left[ \begin{array} {cc} x(t)\\ g_{k}(x(t)) \end{array}\right] ^T \left[ \begin{array} {cc} G_{kj}^{-}G_{kj}^{+}x_jx_j^T &{} -\frac{G_{kj}^{-}+G_{kj}^{+}}{2}x_jx_j^T\\ -\frac{G_{kj}^{-}+G_{kj}^{+}}{2}x_jx_j^T &{} x_jx_j^T \end{array}\right] \left[ \begin{array} {cc} x(t)\\ g_{k}(x(t)) \end{array}\right] \le 0, \end{aligned}$$
That is,
$$\begin{aligned}&\left[ \begin{array} {cc} y(t)\\ f_{k}(y(t)) \end{array}\right] ^T \left[ \begin{array} {cc} -F_{k1}U_{k} &{} F_{k2}U_{k}\\ F_{k2}U_{k} &{} -U_{k} \end{array}\right] \left[ \begin{array} {cc} y(t)\\ f_{k}(y(t)) \end{array}\right] \ge 0,\end{aligned}$$
(35)
$$\begin{aligned}&\left[ \begin{array} {cc} x(t)\\ g_{k}(x(t)) \end{array}\right] ^T \left[ \begin{array} {cc} -G_{k1}J_{k} &{} G_{k2}J_{k}\\ G_{k2}J_{k} &{} -J_{k} \end{array}\right] \left[ \begin{array} {cc} x(t)\\ g_{k}(x(t)) \end{array}\right] \ge 0. \end{aligned}$$
(36)
Similar to above, for \(U_{k+2}=diag\{u^{k+2}_{11},u^{k+2}_{12},...,u^{k+2}_{1n}\}>0\), \(J_{k+2}=diag\{{\overline{j}}^{k+2}_{11},{\overline{j}}^{k+2}_{12},...,{\overline{j}}^{k+2}_{1n}\}>0,\) we can obtain the following inequalities:
$$\begin{aligned}&\left[ \begin{array} {cc} y(t-\tau _1(t))\\ f_{k}(y(t-\tau _1(t))) \end{array}\right] ^T \left[ \begin{array} {cc} -F_{k1}U_{k+2} &{} F_{k2}U_{k+2}\\ F_{k2}U_{k+2} &{} -U_{k+2} \end{array}\right] \left[ \begin{array} {cc} y(t-\tau _1(t))\\ f_{k}(y(t-\tau _1(t))) \end{array}\right] \ge 0,\end{aligned}$$
(37)
$$\begin{aligned}&\left[ \begin{array} {cc} x(t-\tau _2(t))\\ g_{k}(x(t-\tau _2(t))) \end{array}\right] ^T \left[ \begin{array} {cc} -G_{k1}J_{k+2} &{} G_{k2}J_{k+2}\\ G_{k2}J_{k+2} &{} -J_{k+2} \end{array}\right] \left[ \begin{array} {cc} x(t-\tau _2(t))\\ g_{k}(x(t-\tau _2(t))) \end{array}\right] \ge 0. \end{aligned}$$
(38)
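The sector conditions (35)–(38) originate in Assumption (A). For the common activation \(\tanh \), the global sector bounds \(F^{-}=0\) and \(F^{+}=1\) hold, since \(\tanh (y)/y\in (0,1]\); a quick numerical check of the defining product inequality (a sketch on a dense sample grid, not a proof):

```python
import numpy as np

# Sector check for f(y) = tanh(y) with F^- = 0 and F^+ = 1:
# the product (f(y) - F^- y)(f(y) - F^+ y) must be <= 0 for every y.
y = np.linspace(-10.0, 10.0, 100001)
f = np.tanh(y)
product = (f - 0.0 * y) * (f - 1.0 * y)
assert np.all(product <= 1e-12)   # nonpositive up to floating-point noise
```

The diagonal multipliers \(U_k,\ J_k\) in (35)–(38) simply weight this per-coordinate inequality, which is what lets the nonlinearity enter the LMI as the blocks \(F_{k1},F_{k2},G_{k1},G_{k2}\).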
Now, combining (12)–(38), we have
$$\begin{aligned} \dot{V}(x_{t},y_{t},t)\le -\zeta ^T(t)\Omega \zeta (t), \end{aligned}$$
(39)
where
$$\begin{aligned} \zeta ^T(t)=&[\zeta _1^T(t)\ \zeta _2^T(t)],\\ \zeta _1^T(t)=&\bigg [x^T(t)\quad \dot{x}^T(t)\quad x^T(t-\delta _1)\quad x^T(t-\tau _2(t))\quad x^T(t-\tau _2)\quad x^T(t-\tau _3(t))\quad x^T(t-\tau _3)\\&\quad g^T_1(x(t))\quad g^T_1(x(t-\tau _2(t)))\quad g^T_2(x(t))\quad g^T_2(x(t-\tau _2(t)))\quad \int _{t-\tau _2}^{t}g^T_2(x(s))ds\\&\quad \int _{t-\tau _2(t)}^{t}x^T(s)ds\quad \int _{t-\tau _2(t)}^{t}\dot{x}^T(s)ds\quad \int _{t-\tau _2}^{t-\tau _2(t)} x^T(s)ds\quad \int _{t-\tau _2}^{t-\tau _2(t)} \dot{x}^T(s)ds\\&\quad \int _{-\tau _2(t)}^{0}\int _{t+\theta }^{t}\dot{x}^T(s)dsd\theta \quad \int _{-\tau _2}^{-\tau _2(t)}\int _{t+\theta }^{t}\dot{x}^T(s)dsd\theta \quad \int _{t-\tau _3(t)}^{t}x^T(s)ds\\&\quad \int _{t-\tau _3(t)}^{t}\dot{x}^T(s)ds\quad \int _{t-\tau _3}^{t-\tau _3(t)} x^T(s)ds\quad \int _{t-\tau _3}^{t-\tau _3(t)}\dot{x}^T(s)ds\bigg ],\\ \zeta _2^T(t)=&\bigg [y^T(t)\quad \dot{y}^T(t)\quad y^T(t-\delta _2)\quad y^T(t-\tau _1(t))\quad y^T(t-\tau _1)\quad y^T(t-\tau _3(t))\quad y^T(t-\tau _3)\\&\quad f^T_1(y(t))\quad f^T_1(y(t-\tau _1(t)))\quad f^T_2(y(t))\quad f^T_2(y(t-\tau _1(t)))\quad \int _{t-\tau _1}^{t}f^T_2(y(s))ds\\&\quad \int _{t-\tau _1(t)}^{t}y^T(s)ds\quad \int _{t-\tau _1(t)}^{t}\dot{y}^T(s)ds\quad \int _{t-\tau _1}^{t-\tau _1(t)} y^T(s)ds\quad \int _{t-\tau _1}^{t-\tau _1(t)}\dot{y}^T(s)ds\\&\quad \int _{-\tau _1(t)}^{0}\int _{t+\theta }^{t}\dot{y}^T(s)dsd\theta \quad \int _{-\tau _1}^{-\tau _1(t)}\int _{t+\theta }^{t}\dot{y}^T(s)dsd\theta \quad \int _{t-\tau _3(t)}^{t}y^T(s)ds\\&\quad \int _{t-\tau _3(t)}^{t}\dot{y}^T(s)ds\quad \int _{t-\tau _3}^{t-\tau _3(t)} y^T(s)ds\quad \int _{t-\tau _3}^{t-\tau _3(t)}\dot{y}^T(s)ds\bigg ], \end{aligned}$$
and \(\Omega \) is defined as in (10).
From inequality (39), it may be inferred that
$$\begin{aligned} V(x_{t},y_{t},t)+\int _{0}^{t}\zeta ^T(s)\Omega \zeta (s)ds\le V(0)<\infty , \end{aligned}$$
(40)
where
$$\begin{aligned} V(0)\le \big [&\lambda _{max}(\mathcal {P})+\delta _1\lambda _{max}(P_1)+\tau _2\lambda _{max}(P_3)+\tau _2\lambda _{max}(P_5) +\tau _3\lambda _{max}(P_7)+\frac{\tau _2^2}{2!}\lambda _{max}(Q_1)\\ +&\frac{\tau _3^2}{2!}\lambda _{max}(Q_3)+\sum \limits _{i=0}^{1}\frac{\tau _{(3-i)}^2}{2!}\lambda _{max}\left[ \begin{array} {cc} X_{(2i+1)} &{} Y_{(2i+1)}\\ \maltese &{} Z_{(2i+1)} \end{array}\right] + \frac{\tau _2^3}{3!}\lambda _{max}(R_1)+\frac{\tau _2^2}{2!}\lambda _{max}(R_3)\big ] \mathbf \Psi _{x_t}^2\\+&\big [\lambda _{max}(\mathcal {Q})+\delta _2\lambda _{max}(P_2)+\tau _1\lambda _{max}(P_4) +\tau _1\lambda _{max}(P_6)+\tau _3\lambda _{max}(P_8)+\frac{\tau _1^2}{2!}\lambda _{max}(Q_2)\\ +&\frac{\tau _3^2}{2!}\lambda _{max}(Q_4)+\sum \limits _{i=1}^{2}\frac{\tau _{(2i-1)}^2}{2!}\lambda _{max}\left[ \begin{array} {cc} X_{(2i)} &{} Y_{(2i)}\\ \maltese &{} Z_{(2i)} \end{array}\right] +\frac{\tau _1^3}{3!}\lambda _{max}(R_2)+ \frac{\tau _1^2}{2!}\lambda _{max}(R_4)\big ]\mathbf \Psi _{y_t}^2, \end{aligned}$$
$$\begin{aligned} =\varSigma _1\mathbf \Psi _{x_t}^2+\varSigma _2\mathbf \Psi _{y_t}^2, \end{aligned}$$
(41)
where
$$\begin{aligned}&\mathbf \Psi _{x_t}=max\{\sup \limits _{\theta \in [-\upsilon _{x_t},0]}\Vert \psi _{x_t} (\theta )\Vert ,\ \sup \limits _{\theta \in [-\upsilon _2,0]}\Vert {\dot{\psi }}_{x_t}(\theta )\Vert \},\\ {}&\mathbf \Psi _{y_t} =max\{\sup \limits _{\theta \in [-\upsilon _{y_t},0]}\Vert \psi _{y_t} (\theta )\Vert ,\ \sup \limits _{\theta \in [-\upsilon _2,0]}\Vert {\dot{\psi }}_{y_t}(\theta )\Vert \}. \end{aligned}$$
On the other hand, by the definition of \(V(x_{t},y_{t},t)\), we get
$$\begin{aligned} V(x_{t},y_{t},t)\ge&x^T(t)\mathcal {P}x(t)+y^T(t)\mathcal {Q}y(t),\nonumber \\ \ge&\lambda _{min}(\mathcal {P})x^T(t)x(t)+\lambda _{min}(\mathcal {Q})y^T(t)y(t),\nonumber \\ =&\lambda _{min}(\mathcal {P})\Vert x(t)\Vert ^2+\lambda _{min}(\mathcal {Q})\Vert y(t)\Vert ^2,\nonumber \\ \ge&\min \{\lambda _{min}(\mathcal {P}),\lambda _{min}(\mathcal {Q})\}\big (\Vert x(t)\Vert ^2+\Vert y(t)\Vert ^2\big ). \end{aligned}$$
(42)
Then, combining (41) and (42), we obtain
$$\begin{aligned} \Vert x(t)\Vert ^2+\Vert y(t)\Vert ^2\le \frac{V(0)}{\min \{\lambda _{min}\mathcal {P},\lambda _{min}\mathcal {Q}\}}. \end{aligned}$$
(43)
It is clear by Lyapunov stability theory that system (8) is globally asymptotically stable for \(\Omega <0\). This completes the proof. \(\square \)

4 Sampled-Data Stabilization

Theorem 4.1
Under Assumption (A), for given scalars \(\tau _1,\ \tau _2,\ \tau _3,\ \delta _1,\ \delta _2,\ \mu _1\), and \(\mu _2\), the equilibrium point of system (8) is stable if there exist matrices \(\mathcal {P}>0,\ \mathcal {Q}>0,\ P_i>0,\ (i=1,2,...,8), \ Q_j>0,\ R_j>0,\ \left[ \begin{array} {cc} X_{j} &{} Y_{j}\\ \maltese &{} Z_{j} \end{array}\right] >0,\) positive-definite diagonal matrices \(U_{j},\ J_{j} \) and any matrices \( \left[ \begin{array} {cc} M_{j} &{} N_{j}\\ H_{j} &{} T_{j} \end{array}\right] ,\ (j=1,2,...,4), {\tilde{R}}_1,\ {\tilde{R}}_2, {L_1}, {L_2} \) such that the following LMIs hold:
$$\begin{aligned} \left[ \begin{array}{c|c} \left[ \begin{array} {cc} X_{k} &{} Y_{k} \\ \maltese &{} Z_{k} \end{array}\right] &{} \left[ \begin{array}{cc} M_{k} &{} N_{k} \\ H_{k} &{} T_{k} \end{array}\right] \\ \hline \maltese &{} \left[ \begin{array} {cc} X_{k} &{} Y_{k} \\ \maltese &{} Z_{k} \end{array}\right] \end{array} \right]>0,\ \left[ \begin{array} {cc} R_l &{} {\tilde{R}}_l\\ \maltese &{} R_l \end{array}\right] >0,\ k=1,...,4,\ l=1,2, \end{aligned}$$
(44)
$$\begin{aligned} {\overline{\Omega }}=\left[ \begin{array}{c|c} \left[ \begin{array} {cc} {\overline{\Omega }}_{11} &{} [0]_{11n\times 11n} \\ \maltese &{} {\overline{\Omega }}_{22} \end{array}\right] &{} [0]_{24n\times 24n} \\ \hline \maltese &{} \left[ \begin{array} {cc} {\overline{\Omega }}_{33} &{} [0]_{11n\times 11n}\\ \maltese &{} {\overline{\Omega }}_{44} \end{array}\right] \end{array} \right] <0, \end{aligned}$$
(45)
where
$$\begin{aligned} {\overline{\Omega }}_{11}&=\left[ \begin{array}{cccccccccccc} (1,1)^{\diamond } &{} (1,2)^\diamond &{} -S_1A &{} Q_1 &{} 0 &{} Q_3+{L_1} &{} 0 &{} G_{12}J_1 &{} S_1W_1 &{} G_{22}J_2 &{} 0 &{} S_1W_2\\ \maltese &{} (2,2)^\diamond &{} -S_1A &{} 0 &{} 0 &{} {L_1} &{} 0 &{} 0 &{} S_1W_1 &{} 0 &{} 0 &{} S_1W_2 \\ \maltese &{} \maltese &{} -\delta _1P_1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{} (4,4)^\diamond &{} Q_1 &{} 0 &{} 0 &{} 0 &{} G_{32}J_3 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{}\maltese &{} (5,5)^\diamond &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -2Q_3 &{} Q_3 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} (7,7)^\diamond &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -J_1 &{} 0&{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -J_3 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} (10,10)^\diamond &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -J_4 &{} 0 \end{array}\right] ,\\ {\overline{\Omega }}_{22}&=\left[ \begin{array}{cccccccccccccc} -R_4 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} -X_1 &{} -Y_1 &{} -M_1 &{} -N_1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} -Z_1 &{} -H_1 &{} -T_1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} -X_1 &{} -Y_1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_1 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} 
\maltese &{} -R_1 &{} -\tilde{R_1} &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -R_1 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -X_3 &{} -Y_3 &{} -M_3 &{} -N_3 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_3 &{} -H_3 &{} -T_3 \\ \maltese &{}\maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -X_3 &{} -Y_3 \\ \maltese &{}\maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_3 \end{array}\right] ,\\ {\overline{\Omega }}_{33}&=\left[ \begin{array}{cccccccccccc} (1,1)^\dag &{} (1,2)^\dag &{} -S_2B &{} Q_2 &{} 0 &{} Q_4+{L_2} &{} 0 &{} F_{12}U_1 &{} S_2V_1 &{} F_{22}U_2 &{} 0 &{} S_2V_2\\ \maltese &{} (2,2)^\dag &{} -S_2B &{} 0 &{} 0 &{} {L_2} &{} 0 &{} 0 &{} S_2V_1 &{} 0 &{} 0 &{} S_2V_2 \\ \maltese &{} \maltese &{} -\delta _2P_2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{} (4,4)^\dag &{} Q_2 &{} 0 &{} 0 &{} 0 &{} F_{32}U_3 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{}\maltese &{} (5,5)^\dag &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0&{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -2Q_4 &{} Q_4 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} (7,7)^\dag &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -U_1 &{} 0&{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -U_3 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} (10,10)^\dag &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{}
\maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -U_4 &{} 0 \end{array}\right] ,\\ {\overline{\Omega }}_{44}&=\left[ \begin{array}{cccccccccccccc} -R_3 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} -X_2 &{} -Y_2 &{} -M_2 &{} -N_2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} -Z_2 &{} -H_2 &{} -T_2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} -X_2 &{} -Y_2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_2 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -R_2 &{} -\tilde{R_2} &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -R_2 &{} 0 &{} 0 &{} 0 &{} 0 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -X_4 &{} -Y_4 &{} -M_4 &{} -N_4 \\ \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_4 &{} -H_4 &{} -T_4 \\ \maltese &{}\maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -X_4 &{} -Y_4 \\ \maltese &{}\maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} \maltese &{} -Z_4 \end{array}\right] , \end{aligned}$$
$$\begin{aligned}&(1,1)^\diamond =\delta _1P_1+P_3+P_5+P_7-Q_1-Q_3+\tau _2^2X_1+\tau _3^2X_3-G_{11}J_1-G_{21}J_2,\\&(1,1)^\dag =\delta _2P_2+P_4+P_6+P_8-Q_2-Q_4+\tau _1^2X_2+\tau _3^2X_4-F_{11}U_1-F_{21}U_2,\\&(1,2)^\diamond =\mathcal {P}+\tau _2^2Y_1+\tau _3^2Y_3-S_1,\ (1,2)^\dag =\mathcal {Q}+\tau _1^2Y_2+\tau _3^2Y_4-S_2,\\&(2,2)^\diamond =\tau _2^2Q_1+\tau _3^2Q_3+\tau _2^2Z_1+\tau _3^2Z_3+\bigg (\frac{\tau _2^2}{2!}\bigg )^2R_1-S_1-S_1^T,\\&(2,2)^\dag =\tau _1^2Q_2+\tau _3^2Q_4+\tau _1^2Z_2+\tau _3^2Z_4+\bigg (\frac{\tau _1^2}{2!}\bigg )^2R_2-S_2-S_2^T,\\&(4,4)^\diamond =-(1-\mu _2)P_3-2Q_1-G_{31} J_3-G_{41} J_4,\\&(4,4)^\dag =-(1-\mu _1)P_4-2Q_2-F_{31}U_3-F_{41}U_4,\\&(5,5)^\diamond =-P_5-Q_1,\ (7,7)^\diamond =-Q_3-P_7, \ (10,10)^\diamond =-J_2+\tau ^2_2R_4,\\&(5,5)^\dag =-P_6-Q_2,\ (7,7)^\dag =-Q_4-P_8, \ (10,10)^\dag =-U_2+\tau ^2_1R_3. \end{aligned}$$
Moreover, the desired controller gain matrices are given by
$$\begin{aligned} \mathcal {K}=S_1^{-1}{L_1},\ \mathcal {M}=S_2^{-1}{L_2}. \end{aligned}$$
(46)
Proof
The proof is similar to that of Theorem 3.1 and is therefore omitted here. \(\square \)
Remark 4.2
By adopting the input delay approach, the proposed model can be transformed into a continuous-time system. Since the sampled-data control strategy handles a continuous system by sampling its data at discrete instants, we make a first attempt to address the sampled-data synchronization control problem for the presented model. A novel sampled-data control strategy is adopted to ensure that the drive system achieves synchronization with the response system. Note that sampled-data control drastically reduces the amount of transmitted information and increases the efficiency of bandwidth usage, because its control signals are kept constant during the sampling period and are allowed to change only at the sampling instants. Moreover, owing to the rapid growth of digital hardware technologies, the sampled-data control method is efficient, secure, and practical. A great deal of remarkable research has thus been carried out in this area; see [54–57].
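The input delay representation can be illustrated directly: with sampling instants \(t_k=kh\), the held state \(x(t_k)\) equals \(x(t-\tau _3(t))\) with the sawtooth delay \(\tau _3(t)=t-t_k\), which is bounded by the sampling period. A minimal sketch (the period value below is illustrative):

```python
import numpy as np

# Input-delay view of sampled-data control: between instants t_k = k*h the
# held sample x(t_k) equals x(t - tau(t)) with tau(t) = t - t_k, a sawtooth
# that satisfies 0 <= tau(t) < h for all t.
h = 0.03                                    # illustrative sampling period
t = np.linspace(0.0, 1.0, 2001, endpoint=False)
tau = t - np.floor(t / h) * h               # sawtooth input delay tau(t)
assert tau.min() >= 0.0 and tau.max() < h   # delay bounded by the period
```

This bound is precisely why the sampling period enters the LMIs of Theorem 4.1 as an upper bound on the time-varying delay \(\tau _3(t)\).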
Remark 4.3
The leakage delay, which appears in the negative feedback term of the BAM model, has recently emerged as a research topic of great significance. In [58], Gopalsamy was the first to investigate the stability of BAM neural networks with constant leakage delays. It was found that time delays in the leakage terms have an essential impact on the dynamical performance and can destabilize a neural network model. The study of the dynamic behaviour of systems including a leakage (forgetting) term, which arises in the negative feedback term of the system model, can thus be traced back to 1992. Later on, substantial achievements were made regarding the dynamics of BAM with delay in the leakage term [59–62]. On the other hand, it is natural that the BAM contains certain information about the derivative of the past state. This is expressed via the inclusion of a delay in the neutral derivative of the BAM, which often appears in the study of automatic control, population dynamics, and vibrating masses attached to an elastic bar; the reader may consult [63–66] for more details.
Table 2
Example 5.1. Calculated upper bounds of \(\tau _1=\tau _2\) for different \(\delta _1,\ \delta _2,\ \tau _3\), with \(\mu _1=\mu _2=1\)
Delays
\(\tau _1,\tau _2\)
Gain matrices
\(\delta _1,\delta _2,\tau _3=0.01\)
2.4743
\(\mathcal {K}=\left[ \begin{array}{cc} -21.1412 &{} -0.3062\\ -0.3071 &{} -20.6219 \end{array}\right] ,\ \mathcal {M}=\left[ \begin{array}{cc} -20.0062 &{} -1.4845 \\ -1.4039 &{} -26.2567 \end{array}\right] , \)
\(\delta _1,\delta _2,\tau _3=0.02\)
1.4290
\(\mathcal {K}=\left[ \begin{array}{cc} -17.4824 &{} -0.1314\\ -0.1314 &{} -17.2493 \end{array}\right] ,\ \mathcal {M}=\left[ \begin{array}{cc} -14.4491 &{} -0.8397\\ -0.7976 &{} -18.8227 \end{array}\right] , \)
\(\delta _1,\delta _2,\tau _3=0.03\)
0.5746
\(\mathcal {K}=\left[ \begin{array}{cc} -15.1595 &{} -0.0495\\ -0.0494 &{} -15.0783 \end{array}\right] ,\ \mathcal {M}=\left[ \begin{array}{cc} -12.3263 &{} -0.3975\\ -0.3764 &{} -15.6087 \end{array}\right] , \)
\(\delta _1,\delta _2,\tau _3=0.04\)
Infeasible
\(\mathcal {K}=\left[ \begin{array}{cc} -16.1686 &{} -0.1477\\ -0.1483 &{} -15.8738 \end{array}\right] ,\ \mathcal {M}=\left[ \begin{array}{cc} -12.8771 &{} -0.7226\\ -0.2716 &{} -17.6965 \end{array}\right] . \)

5 Numerical Examples

The usefulness of the proposed theoretical methods is demonstrated in this section by a numerical example.
Example 5.1
We consider the following BAM neural networks with leakage delays:
$$\begin{aligned} \left\{ \begin{array}{ll} &{}\dot{x}(t)=-Ax(t-\delta _1)+W_1f_1(y(t-\tau _1(t))) +W_2\int _{t-\tau _1}^{t}f_2(y(s))ds+\mathcal {K}x(t-\tau _3(t)),\\ &{}\dot{y}(t)=-By(t-\delta _2)+V_1g_1(x(t-\tau _2(t))) +V_2\int _{t-\tau _2}^{t}g_2(x(s))ds+\mathcal {M}y(t-\tau _3(t)), \end{array} \right. \end{aligned}$$
(47)
with the following parameters:
$$\begin{aligned}&A= \left[ \begin{array}{cc} 5 &{} 0 \\ 0 &{} 5 \end{array}\right] ,\ W_1= \left[ \begin{array}{cc} 0.03 &{} 0.02 \\ 0.02 &{} 0.01 \end{array}\right] ,\ W_2= \left[ \begin{array}{cc} 0.09 &{} 0.02 \\ 0.02 &{} 0.01 \end{array}\right] ,\\&B= \left[ \begin{array}{cc} 4 &{} 0 \\ 0 &{} 5 \end{array}\right] ,\ V_1= \left[ \begin{array}{cc} 0.1 &{} 0.2 \\ 0.2 &{} 0.2 \end{array}\right] ,\ V_2= \left[ \begin{array}{cc} 0.03 &{} 0.05 \\ 0.02 &{} 0.04 \end{array}\right] . \end{aligned}$$
Let the activation functions be \(f_1(x)=f_2(x)=g_1(x)=g_2(x)=tanh(x).\) It is obvious that Assumption (A) is satisfied, and we obtain
$$\begin{aligned} F_{11}=F_{21}=G_{11}=G_{21}=0,\ F_{12}=F_{22}=G_{12}=G_{22}=0.4I. \end{aligned}$$
The allowable upper bounds of \(\tau _1,\ \tau _2 \) and the corresponding gain matrices for various values of \( \mu _1,\ \mu _2 \) and leakage delays \( \delta _1,\ \delta _2 \) are obtained by solving the LMIs in (45) with the above parameter values using the LMI control toolbox. These results are listed in Table 2. For \(\tau _1=\tau _2=0.5746\) and \(\mu _1=\mu _2=1\), the BAM neural network (47) is stable. The feasible solutions are shown below.
$$\begin{aligned}&\mathcal {P}=10^{-03}\times \left[ \begin{array}{cc} 0.9842 &{} -0.0028\\ -0.0028 &{} 0.9901 \end{array}\right] ,\ \mathcal {Q}=\left[ \begin{array}{cc} 0.0009 &{} 0.0000\\ 0.0000 &{} 0.0012 \end{array}\right] ,\ P_1=\left[ \begin{array}{cc} 0.0592 &{} -0.0001\\ -0.0001 &{} 0.0595 \end{array}\right] ,\\&P_2=\left[ \begin{array}{cc} 0.0489 &{} 0.0034\\ 0.0034 &{} 0.0746 \end{array}\right] ,\ P_3=10^{-05}\times \left[ \begin{array}{cc} 0.9160 &{} -0.0427\\ -0.0427 &{} 0.9800 \end{array}\right] ,\ P_4=10^{-04}\times \left[ \begin{array}{cc} 0.1719 &{} -0.0596\\ -0.0596 &{} 0.0207 \end{array}\right] ,\\&Q_1=10^{-05}\times \left[ \begin{array}{cc} 0.4919 &{} 0.0387\\ 0.0387 &{} 0.4522 \end{array}\right] ,\ Q_2=10^{-04}\times \left[ \begin{array}{cc} 0.3667 &{} 0.0561\\ 0.0561 &{} 0.1904 \end{array}\right] ,\\&R_1=10^{-04}\times \left[ \begin{array}{cc} 0.7117 &{} -0.0382\\ -0.0382 &{} 0.7690 \end{array}\right] ,\\&R_2=10^{-03}\times \left[ \begin{array}{cc} 0.1542 &{} -0.0535\\ -0.0535 &{} 0.0186 \end{array}\right] ,\ R_3=10^{-04}\times \left[ \begin{array}{cc} 0.8700 &{} -0.0032\\ -0.0032 &{} 0.5637 \end{array}\right] ,\\&U_1=10^{-06}\times \left[ \begin{array}{cc} 0.3773 &{} 0\\ 0 &{} 0.0455 \end{array}\right] ,\\&J_1=10^{-03}\times \left[ \begin{array}{cc} 0.1113 &{} 0\\ 0 &{} 0.1189 \end{array}\right] ,\ L_1=\left[ \begin{array}{cc} -0.0010 &{} 0.0000\\ 0.0000 &{} -0.0010 \end{array}\right] ,\ L_2=\left[ \begin{array}{cc} -0.0009 &{} -0.0000\\ -0.0000 &{} -0.0012 \end{array}\right] ,\\&S_1=10^{-04}\times \left[ \begin{array}{cc} 0.6801 &{} -0.0043\\ -0.0043 &{} 0.6882 \end{array}\right] ,\ S_2=10^{-04}\times \left[ \begin{array}{cc} 0.7692 &{} -0.0034\\ -0.0034 &{} 0.7825 \end{array}\right] . \end{aligned}$$
The following gain matrices are obtained by taking the sampling instants \( t_k=0.01k,\ k=1,2,\ldots ,\) and the sampling period \(\tau _3=0.03\).
$$\begin{aligned} \mathcal {K}=S_1^{-1} L_1= \left[ \begin{array}{cc} -15.1595 & -0.0495\\ -0.0494 & -15.0783 \end{array}\right] ,\quad \mathcal {M}=S_2^{-1} L_2= \left[ \begin{array}{cc} -12.3263 & -0.3975\\ -0.3764 & -15.6087 \end{array}\right] . \end{aligned}$$
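The gain computation \(\mathcal {K}=S_1^{-1}L_1\), \(\mathcal {M}=S_2^{-1}L_2\) can be reproduced directly from the printed solution entries. The sketch below inverts the \(2\times 2\) matrices by the adjugate formula; since the printed matrices are rounded to four decimals, the recovered gains agree with the reported values only approximately.

```python
# Recompute the controller gains K = S1^{-1} L1 and M = S2^{-1} L2 from the
# rounded solution entries printed above.  Rounding of the four-decimal
# entries means the recovered gains only approximate the reported ones.

def inv_2x2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]] via the adjugate formula."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul_2x2(x, y):
    """Product of two 2x2 matrices."""
    return [[sum(x[i][k] * y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

S1 = [[0.6801e-4, -0.0043e-4], [-0.0043e-4, 0.6882e-4]]
L1 = [[-0.0010, 0.0], [0.0, -0.0010]]
S2 = [[0.7692e-4, -0.0034e-4], [-0.0034e-4, 0.7825e-4]]
L2 = [[-0.0009, 0.0], [0.0, -0.0012]]

K = matmul_2x2(inv_2x2(S1), L1)
M = matmul_2x2(inv_2x2(S2), L2)
print("K ~", K)  # diagonal near -14.5 to -14.7 (reported: about -15.1)
print("M ~", M)  # diagonal near -11.7 and -15.3 (reported: -12.3, -15.6)
```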
Thus every requirement of Theorem 4.1 is satisfied, and system (47) is stable under the specified sampled-data feedback control.

6 Conclusion

A novel sampled-data control approach, together with its stability analysis, has been investigated in this paper for the regulation of BAM neural networks with leakage delays. We first examined the instability phenomena caused by leakage delays, and then applied a sampled-data control scheme to stabilize the resulting unstable systems. Sufficient conditions in the form of LMIs were derived to determine the gain matrices of the designed sampled-data controllers. Finally, a numerical example with accompanying simulations was presented to demonstrate the effectiveness of the theoretical findings. Future work will address more intricate control schemes, such as event-triggered control, as well as bifurcation analysis and applications to systems biology with leakage delays.

Acknowledgements

The work of the author was supported by the National Research Council of Thailand (Talented Mid-Career Researchers) under Research Grant No. N42A650250.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References
1.
2. Chandrasekar A, Radhika T, Zhu Q (2022) State estimation for genetic regulatory networks with two delay components by using second-order reciprocally convex approach. Neural Process Lett 54(1):327–345
3. Chandrasekar A, Rakkiyappan R, Li X (2016) Effects of bounded and unbounded leakage time-varying delays in memristor-based recurrent neural networks with different memductance functions. Neurocomputing 202:67–83
4. Haykin S (1994) Neural networks: a comprehensive foundation. Prentice Hall, New York
5. Gunasekaran N, Syed Ali M (2021) Design of stochastic passivity and passification for delayed BAM neural networks with Markov jump parameters via non-uniform sampled-data control. Neural Process Lett 53:391–404
6. Xu S, Lam J, Ho WC, Zou Y (2005) Delay-dependent exponential stability for a class of neural networks with time delays. J Comput Appl Math 183:16–28
7. Kwon OM, Lee SM, Park JH, Cha EJ (2012) New approaches on stability criteria for neural networks with interval time-varying delays. Appl Math Comput 218:9953–9964
8. Arik S (2014) An improved robust stability result for uncertain neural networks with multiple time delays. Neural Netw 54:1–10
9. Zeng HB, He Y, Wu M, Zhang CF (2011) Complete delay-decomposing approach to asymptotic stability for neural networks with time-varying delays. IEEE Trans Neural Netw 22:806–812
10. Liu Y, Lee SM, Kwon OM, Park JH (2015) New approach to stability criteria for generalized neural networks with interval time-varying delays. Neurocomputing 149:1544–1551
11. Arik S (2014) An analysis of stability of neutral-type neural systems with constant time delays. J Franklin Inst 351:4949–4959
12. Li X, Caraballo T, Rakkiyappan R, Xiuping H (2015) On the stability of impulsive functional differential equations with infinite delays. Math Methods Appl Sci 38(14):3130–3140
13. Dan Y, Li X, Qiu J (2019) Output tracking control of delayed switched systems via state-dependent switching and dynamic output feedback. Nonlinear Anal Hybrid Syst 32:294–305
14. Li X, Yang X, Huang T (2019) Persistence of delayed cooperative models: impulsive control method. Appl Math Comput 342:130–146
15. Ali MS, Gunasekaran N, Zhu Q (2017) State estimation of T-S fuzzy delayed neural networks with Markovian jumping parameters using sampled-data control. Fuzzy Sets Syst 306:87–104
17. Gopalsamy K, He XZ (1994) Delay independent stability in bidirectional associative memory networks. IEEE Trans Neural Netw 5:998–1002
18. Cao J, Dong M (2003) Exponential stability of delayed bi-directional associative memory networks. Appl Math Comput 135:105–112
19. Arik S (2005) Global asymptotic stability of bidirectional associative memory neural networks with time delays. IEEE Trans Neural Netw 16:580–586
20. Senan S, Arik S (2007) Global robust stability of bidirectional associative memory neural networks with multiple time delays. IEEE Trans Syst Man Cybern B 37:1375–1381
21. Cao J, Wan Y (2014) Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays. Neural Netw 53:165–172
22. Cao J, Song Q (2006) Stability in Cohen-Grossberg-type bidirectional associative memory neural networks with time-varying delays. Nonlinearity 19:1601–1617
23. Syed Ali M, Gunasekaran N, Rani ME (2017) Robust stability of Hopfield delayed neural networks via an augmented LK functional. Neurocomputing 234:198–204
24. Syed Ali M, Gunasekaran N, Ahn CK, Shi P (2016) Sampled-data stabilization for fuzzy genetic regulatory networks with leakage delays. IEEE/ACM Trans Comput Biol Bioinform 15:271–285
25. Gunasekaran N, Ali MS (2021) Design of stochastic passivity and passification for delayed BAM neural networks with Markov jump parameters via non-uniform sampled-data control. Neural Process Lett 53:391–404
26. Bao H, Cao J (2011) Robust state estimation for uncertain stochastic bidirectional associative memory networks with time-varying delays. Phys Scr 83:065004
27. Mathiyalagan K, Sakthivel R, Marshal Anthoni S (2012) New robust passivity criteria for stochastic fuzzy BAM neural networks with time-varying delays. Commun Nonlinear Sci Numer Simul 17:1392–1407
28. Bao H, Cao J (2012) Exponential stability for stochastic BAM networks with discrete and distributed delays. Appl Math Comput 218:6188–6199
29. Thoiyab NM, Muruganantham P, Zhu Q, Gunasekaran N (2021) Novel results on global stability analysis for multiple time-delayed BAM neural networks under parameter uncertainties. Chaos Solitons Fractals 152:111441
30. Thoiyab NM, Muruganantham P, Gunasekaran N (2021) Global robust stability analysis for hybrid BAM neural networks. In: 2021 IEEE second international conference on control, measurement and instrumentation (CMI), pp 93–98
32. Wang H, Li C, Xu H (2009) Existence and global exponential stability of periodic solution of cellular neural networks with impulses and leakage delay. Int J Bifurcat Chaos 19:831–842
33. Li X, Cao J (2010) Delay-dependent stability of neural networks of neutral type with time delay in the leakage term. Nonlinearity 23:1709–1726
34. Li X, Fu X, Balasubramaniam P, Rakkiyappan R (2010) Existence, uniqueness and stability analysis of recurrent neural networks with time delay in the leakage term under impulsive perturbations. Nonlinear Anal Real World Appl 11:4092–4108
35. Lakshmanan S, Park JH, Jung HY, Balasubramaniam P (2012) Design of state estimator for neural networks with leakage, discrete and distributed delays. Appl Math Comput 218:11297–11310
36. Sakthivel R, Vadivel P, Mathiyalagan K, Arunkumar A, Sivachitra M (2015) Design of state estimator for bidirectional associative memory neural networks with leakage delays. Inf Sci 296:263–274
37. Jagannathan S, Vandegrift M, Lewis FL (2000) Adaptive fuzzy logic control of discrete-time dynamical systems. Automatica 36:229–241
38. Stamova I, Stamov G (2012) Impulsive control on global asymptotic stability for a class of impulsive bidirectional associative memory neural networks with distributed delays. Math Comput Model 53:824–831
39. Li B, Shen H, Song X, Zhao J (2014) Robust exponential \(H_\infty \) control for uncertain time-varying delay systems with input saturation: a Markov jump model approach. Appl Math Comput 237:190–202
40. Behrouz E, Reza T, Matthew F (2014) A dynamic feedback control strategy for control loops with time-varying delay. Int J Control 87:887–897
41. Peng C, Zhang J (2015) Event-triggered output-feedback \(H_\infty \) control for networked control systems with time-varying sampling. IET Control Theory Appl 9:1384–1391
42. Zhu XL, Wang Y (2011) Stabilization for sampled-data neural-network-based control systems. IEEE Trans Syst Man Cybern 41:210–221
43. Zhang W, Yu L (2010) Stabilization of sampled-data control systems with control inputs missing. IEEE Trans Autom Control 55:447–452
44. Zhu XL, Wang Y (2011) Stabilization for sampled-data neural-networks-based control systems. IEEE Trans Syst Man Cybern 41:210–221
45. Yoneyama J (2012) Robust sampled-data stabilization of uncertain fuzzy systems via input delay approach. Inf Sci 198:169–176
46. Su L, Shen H (2015) Mixed \(H_\infty \)/passive synchronization for complex dynamical networks with sampled-data control. Appl Math Comput 259:931–942
47. Hui G, Zhang H, Wu Z, Wang Y (2014) Control synthesis problem for networked linear sampled-data control systems with band-limited channels. Inf Sci 275:385–399
48. Wu Z, Shi P, Su H, Chu J (2013) Stochastic synchronization of Markovian jump neural networks with time-varying delay using sampled-data. IEEE Trans Cybern 43:1796–1806
49. Weng Y, Chao Z (2014) Robust sampled-data \(H_\infty \) output feedback control of active suspension system. Int J Innov Comput Inf Control 10:281–292
50. Gan Q, Liang Y (2012) Synchronization of chaotic neural networks with time-delay in the leakage term and parametric uncertainties based on sampled-data control. J Franklin Inst 349:1955–1971
51. Feng Z, Lam J (2012) Integral partitioning approach to robust stabilisation for uncertain distributed time-delay systems. Int J Robust Nonlinear Control 22:679–689
52. Balasubramaniam P, Krishnasamy R, Rakkiyappan R (2012) Delay-dependent stability of neutral systems with time-varying delays using delay-decomposition approach. Appl Math Model 36:2253–2261
53. Park P, Ko JW, Jeong C (2011) Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47:235–238
54. Wang J, Tian L (2019) Stability of inertial neural network with time-varying delays via sampled-data control. Neural Process Lett 50:1123–1138
55. Saravanakumar R, Amini A, Datta R, Cao Y (2022) Reliable memory sampled-data consensus of multi-agent systems with nonlinear actuator faults. IEEE Trans Circuits Syst II Exp Briefs 69(4):2201–2205
56. Wang Z, Wu H (2017) Robust guaranteed cost sampled-data fuzzy control for uncertain nonlinear time-delay systems. IEEE Trans Syst Man Cybern Syst 49(5):964–975
57. Hua C, Wu S, Guan X (2020) Stabilization of T-S fuzzy system with time delay under sampled-data control using a new looped-functional. IEEE Trans Fuzzy Syst 28(2):400–407
59. Peng S (2010) Global attractive periodic solutions of BAM neural networks with continuously distributed delays in the leakage terms. Nonlinear Anal Real World Appl 11:2141–2151
60. Balasubramaniam P, Kalpana M, Rakkiyappan R (2011) Global asymptotic stability of BAM fuzzy cellular neural networks with time delay in the leakage term, discrete and unbounded distributed delays. Math Comput Model 53:839–853
61. Liu B (2013) Global exponential stability for BAM neural networks with time-varying delays in the leakage terms. Nonlinear Anal Real World Appl 14:559–566
62. Lakshmanan S, Park J, Lee T, Jung H, Rakkiyappan R (2013) Stability criteria for BAM neural networks with leakage delays and probabilistic time-varying delays. Appl Math Comput 219:9408–9423
63. Zhang H (2014) Existence and stability of almost periodic solutions for CNNs with continuously distributed leakage delays. Neural Comput Appl 24:1135–1146
64. Feng J, Xu S, Zou Y (2009) Delay-dependent stability of neutral type neural networks with distributed delays. Neurocomputing 72:2576–2580
65. Li YK, Li YQ (2013) Existence and exponential stability of almost periodic solution for neutral delay BAM neural networks with time-varying delays in leakage terms. J Franklin Inst 350:2808–2825
66. Xu CJ, Li PL (2016) Existence and exponential stability of anti-periodic solutions for neutral BAM neural networks with time-varying delays in the leakage terms. J Nonlinear Sci Appl 9(3):1285–1305
Metadata
Title: New Insights on Bidirectional Associative Memory Neural Networks with Leakage Delay Components and Time-Varying Delays Using Sampled-Data Control
Authors: S. Ravi Chandra, S. Padmanabhan, V. Umesha, M. Syed Ali, Grienggrai Rajchakit, Anuwat Jirawattanpanit
Publication date: 01.04.2024
Publisher: Springer US
Published in: Neural Processing Letters, Issue 2/2024
Print ISSN: 1370-4621
Electronic ISSN: 1573-773X
DOI: https://doi.org/10.1007/s11063-024-11499-y
