In this paper, further analysis of the stability of delayed neural networks is presented via the impulsive delay differential inequality recently obtained by Li. Based on this inequality, some new sufficient conditions ensuring the global exponential stability of impulsive delay neural networks are derived, and estimates of the exponential convergence rates are also obtained. The conditions are less conservative and restrictive than those established in earlier references. In addition, some numerical examples are given to show the effectiveness of the obtained results.
1. Introduction and preliminaries
In recent years, extensive research has been done on neural networks such as Hopfield neural networks, Cohen-Grossberg neural networks, cellular neural networks, and bidirectional associative memory neural networks because of their potential applications in pattern recognition, image processing, associative memory, and so on; see [1‐28]. Recently, a new type of neural network has emerged: impulsive neural networks, which display a combination of characteristics of both continuous-time and discrete-time systems and provide an appropriate description of abrupt qualitative dynamical changes in essentially continuous-time systems; see [4, 9, 13‐22]. The stability of impulsive delay neural networks has become an important topic of theoretical study and has been investigated by many researchers via different approaches; see [9, 13‐16, 20‐22] and the references cited therein. For example, Liu et al. [14] obtained some sufficient conditions for global exponential stability by utilizing the impulsive delay differential inequality given by Yue et al. [18] for impulsive high-order Hopfield neural networks with time-varying delays of the form:
(1.1)
In [20, 21], Xu and Yang investigated the global exponential stability of impulsive delay neural networks by establishing a delay differential inequality with impulsive initial conditions; their results extend and improve the earlier works [23, 24]. More recently, Yang et al. [22] investigated global exponential stability via a Lyapunov function and the Halanay inequality for impulsive extended BAM type Cohen-Grossberg neural networks with delays and variable coefficients of the form:
(1.2)
where
Although some stability conditions for impulsive delay neural networks were proposed in [9, 14, 15, 18‐22], they are conservative to some extent, and there is still room for further improvement.
Recently, Li [25] established a new impulsive delay differential inequality, as follows:
Lemma 1.1. Let α, β, r and τ denote nonnegative constants, and let the function f ∈ PC(ℝ, ℝ+) satisfy the scalar impulsive differential inequality
where 0 < σ ≤ +∞, ak, bk ∈ ℝ+, and k(·) ∈ PC([0, σ], ℝ+) satisfies  for some positive constant η0 > 0 in the case σ = +∞. Moreover, when σ = +∞, the interval [t - σ, t] is understood to be replaced by (-∞, t].
Assume that
(i)
(ii) There exist constants M > 0, η > 0 such that
where λ ∈ (0, η0) satisfies
Then,
In particular, it includes the special case:
Lemma 1.2. Let α, β and τ denote nonnegative constants, let ak, bk ∈ ℝ+, and let the function f ∈ PC(ℝ, ℝ+) satisfy
(1.3)
Assume that
(i) α > β ≥ 0.
(ii) There exist constants M > 0, η > 0 such that
where λ > 0 satisfies
Then
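In the examples below, the requirement on λ takes the concrete scalar form λ ≤ a - b e^{λτ} with a > b ≥ 0 (see Example 2.1 and Section 4). The following is a minimal numerical sketch, under that assumed form, for bracketing the largest admissible λ by bisection; the helper name max_decay_rate is our own, not taken from [25]:

```python
import math

def max_decay_rate(a, b, tau, tol=1e-10):
    """Largest lam > 0 with lam <= a - b*exp(lam*tau), found by bisection.
    Assumes a > b >= 0, so the feasibility margin is positive at lam = 0."""
    margin = lambda lam: a - b * math.exp(lam * tau) - lam  # strictly decreasing in lam
    if margin(0.0) <= 0.0:
        raise ValueError("a > b >= 0 is required for a positive rate")
    lo, hi = 0.0, 1.0
    while margin(hi) > 0.0:        # expand the bracket until the margin turns negative
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:           # bisect: margin(lo) > 0 >= margin(hi)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if margin(mid) > 0.0 else (lo, mid)
    return lo

print(max_decay_rate(10.2628, 2.3814, 0.5))   # ~2.39, the setting of Example 2.1
```

Since the margin a - b e^{λτ} - λ is strictly decreasing and positive at λ = 0 whenever a > b ≥ 0, the bisection is guaranteed to converge to the largest admissible rate.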
The purpose of this paper is to improve the results in [9, 14, 15, 18‐22] via Lemma 1.2 above, which is a special case of the result in [25]. We will derive some new sufficient conditions ensuring the global exponential stability of the equilibrium point for the impulsive delay Hopfield neural networks (1.1) and the BAM type Cohen-Grossberg neural networks (1.2). The main advantages of the obtained exponential stability conditions include:
(I)
In [9, 14, 15, 18, 22], all of the results require that the time sequence {tk} satisfies , but this restriction will not be required in our results.
(II)
Even for the case , our results can still be applied to cases not covered in [19, 20].
In addition, some illustrative examples are also given to demonstrate the effectiveness of the obtained results.
2. Global exponential stability analysis for HNNs
In this section, we give some new sufficient conditions for the global exponential stability of the equilibrium point of the neural network (1.1). The conditions are less restrictive and conservative than those given in [14].
System (1.1) may be rewritten in the following matrix form:
(2.1)
Remark 2.1. For detailed information about (2.1), see [14].
Theorem 2.1. Assume that conditions (i), (ii) in Theorem 1 in [14] hold, and
(iii)
there exists a constant η > 0 such that
where,
and λ > 0 satisfies
Then the equilibrium point of the system (1.1) is globally exponentially stable with the approximate exponential convergence rate λ.
Remark 2.2. For the proof of Theorem 2.1, we need only mention a few points, since the rest is the same as in the proof of Theorem 1 in [14]. First, one may similarly define V(t) = x^T(t)Px(t), and it can be deduced that
Then, using Lemma 1.2 of this paper in place of Lemma 1 in [14], Theorem 2.1 is obtained.
Similarly, we can obtain another stability criterion, corresponding to Theorem 2 in [14], as follows:
Theorem 2.2. Assume that condition (i) in Theorem 2 in [14] holds, and
(iii)
there exists a constant η > 0 such that
where,
and λ > 0 satisfies
Then the equilibrium point of the system (1.1) is globally exponentially stable with the approximate exponential convergence rate λ.
Remark 2.3. In [14], under the assumption that , Liu et al. obtained some theorems on the exponential stability of (1.1). Note that in our Theorems 2.1 and 2.2, we only require that . Thus, our results improve the previous findings.
Example 2.1. Consider the three-neuron Hopfield neural network (1.1) with g1(u1) = tanh(0.63u1), g2(u2) = tanh(0.78u2), g3(u3) = tanh(0.46u3), h1(u1) = tanh(0.09u1), h2(u2) = tanh(0.02u2), h3(u3) = tanh(0.17u3), C = diag(C1, C2, C3) = diag(0.89, 0.88, 0.53), R = diag(R1, R2, R3) = diag(0.16, 0.12, 0.03), D = diag(d1, d2, d3) = diag(-0.95, -0.84, -0.99), 0 ≤ τi(t) ≤ 0.5, i = 1, 2, 3, and
(2.2)
In this example, similarly to [14], one may choose P = diag(0.9, 0.7, 0.8), ε1 = 1, ε2 = 2 such that Ω < 0 in Theorem 2.1 and a = 10.2628 > 2.3814 = b. Also, we can compute that ρ = 1. Thus, by Theorem 2.1, the equilibrium point of (2.2) is globally exponentially stable with the approximate convergence rate λ for , where λ > 0 satisfies the inequality λ ≤ 10.2628 - 2.3814e^{0.5λ}.
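As a sanity check, the quoted inequality can be verified directly for a trial rate (a minimal sketch; λ = 2.39 is our own trial value, not taken from [14]):

```python
import math

a, b, tau = 10.2628, 2.3814, 0.5           # constants a, b and delay bound from Example 2.1
lam = 2.39                                  # hypothetical trial rate
print(lam <= a - b * math.exp(lam * tau))   # True: lam is an admissible convergence rate
```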
Remark 2.4. In [14], Liu et al. showed that the equilibrium point of (2.2) is globally exponentially stable for , which is more restrictive and conservative than our result. Therefore, the result in this paper applies under more general conditions.
3. Global exponential stability analysis for BAM type CGNNs
In this section, we will reconsider the global exponential stability of impulsive BAM type Cohen-Grossberg neural networks (1.2).
Theorem 3.1. Assume that (H1)-(H3) and (i), (ii) in Theorem 2 in [22] hold; moreover, suppose that
(iii)
there exists a constant η > 0 such that
where,
and λ > 0 satisfies
Then the equilibrium point of the system (1.2) is globally exponentially stable with the approximate exponential convergence rate λ.
Proof. Consider the following Lyapunov function:
Then, proceeding as in the proof of Theorem 2 in [22], we arrive at
Then by Lemma 1.2, the result holds. □
Remark 3.1. In [22], Yang et al. obtained a sufficient condition for the global asymptotic stability of (1.2) under the assumption that , while our result does not impose this restriction.
Example 3.1. Consider the following extended BAM neural networks:
(3.1)
where uk = wk = vk = ek = 1 + (-1)^k δ(t - tk), the impulse times tk satisfy 0 ≤ t0 < t1 < ⋯ < tk < ⋯, limk→+∞ tk = +∞, and . Let τ = 18.
By simple calculation, we can obtain , , , , , where λ > 0 satisfies the inequality . We may choose λ = 0.16; then M ≈ 11.511 < 12.932 = exp{16λ}. By Theorem 3.1, the equilibrium point of (3.1) is globally exponentially stable with the approximate convergence rate 0.007.
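The quoted numbers can be checked directly (a minimal arithmetic sketch; e^{2.56} ≈ 12.936, so the quoted 12.932 reflects rounding in λ):

```python
import math

lam, M = 0.16, 11.511
bound = math.exp(16 * lam)    # exp{16*0.16} = e^{2.56} ~ 12.936
print(M < bound)              # True: the condition M < exp{16*lam} holds
```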
Remark 3.2. It can easily be verified that conditions (iv) and (v) in Theorem 2 in [22] are violated in the above example. Thus, our results improve those in [22].
4. A new inequality
In this section, we give a new inequality that differs from Lemma 1.2 and can be applied to cases not covered in [19, 20].
Theorem 4.1. Suppose that
(i)
;
(ii)
tk - tk-1 > τ, and there exist constants M > 0, γ ≥ 0 such that
where λ > 0 satisfies
(4.1)
Then,
Proof. Condition (i) implies that there exists a sufficiently small λ > 0 such that the inequality (4.1) holds.
Next, we show
where a0 = 1, b0 = 0.
It is clear that for t ∈ [t0 - τ, t0] by the definition of .
Taking k = 0, we shall show that, for t ∈ [t0, t1)
(4.2)
Suppose, on the contrary, that this is not the case; then there exists some t ∈ [t0, t1) such that
Let
then t⋆ ∈ [t0, t1) and
(1)
f(t⋆) = W0(t⋆);
(2)
f(t) ≤ W0 (t), t ∈ [t0, t⋆];
(3)
.
Since , t⋆ ∈ [t0, t1), we get
Hence, we have
Thus, by the definitions of λ and W0, we have
which contradicts (3). So we get that (4.2) holds for all t ∈ [t0, t1).
Now, we assume that for t ∈ [tm-1, tm ), m ∈ ℤ+
(4.3)
We shall show that for t ∈ [tm , tm+1), m ∈ ℤ+
(4.4)
By (4.3) and the fact that tm - tm-1 > τ, we know
Hence,
(4.5)
If (4.4) is not true, then there exists some t ∈ [tm, tm+1) such that
By (4.5), we define
where
then t* ∈ [tm , tm+1) and
(4)
f(t*) = Wm (t*);
(5)
f(t) ≤ Wm (t), t ∈ [tm , t*];
(6)
.
Since , t* ∈ [tm , tm+1), we get
In fact, when t* - τ ≥ tm , from (5), we have
When t* - τ < tm, noting that tk - tk-1 > τ, we have
This, together with (4), leads to
Hence, we obtain
which contradicts (6). Hence, (4.4) holds for all t ∈ [tm, tm+1), m ∈ ℤ+. Thus, by induction, we get, for t ∈ [tk, tk+1)
By condition (ii), we have
where λ satisfies (4.1). The proof of Theorem 4.1 is therefore completed. □
Remark 4.1. If there exists a constant  such that  holds for all k ∈ ℤ+, then we can choose γ = 0 in Theorem 4.1.
If we let bk = 0, k ∈ ℤ+, in Theorem 4.1, then we obtain the following result.
Corollary 4.1. Suppose that
(iii)
;
(iv)
tk - tk-1 > τ, and there exist constants M > 0, γ ≥ 0 such that
where λ > 0 satisfies
Then,
In the following, the superiority of the present approach over [19, 20] will be demonstrated by an example. The main tool for studying the neural networks in [19, 20] is the following lemma:
Lemma 4.1. Suppose that α > β ≥ 0 and that f(t) satisfies the scalar impulsive differential inequality
where
and f(t) is continuous except at each tk, k ∈ ℤ+, where it has jump discontinuities. The sequence {tk} satisfies 0 ≤ t0 < t1 < ⋯ < tk < ⋯, limk→+∞ tk = +∞.
Then,
(4.6)
where
Consider a particular network of two neurons as follows:
(4.7)
where tk - tk-1 = 0.25, t0 = 0, k ∈ ℤ+, τi ∈ (0, 0.25), i = 1, 2, and
Let τ = max {τ1, τ2}, then τ ∈ (0, 0.25).
Choose V(t) = |x(t)| + |y(t)|; then
where .
Moreover,
where
Choosing M = e^2 and γ = 0 in Corollary 4.1, we get
(4.8)
where λ > 0 satisfies λ ≤ 4 - 0.5e^2 exp{λτ}. Hence, the equilibrium point (0, 0) of (4.7) is globally exponentially stable with the approximate convergence rate λ.
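For a concrete delay, the admissible rate in (4.8) can be bracketed numerically. A minimal bisection sketch, assuming the sample value τ = 0.2 ∈ (0, 0.25), chosen by us for illustration:

```python
import math

beta = 0.5 * math.exp(2)                 # the coefficient 0.5*e^2 in the rate condition
margin = lambda lam, tau=0.2: 4.0 - beta * math.exp(lam * tau) - lam
lo, hi = 0.0, 1.0                        # margin(0) ~ 0.305 > 0 > margin(1)
for _ in range(60):                      # plain bisection on the decreasing margin
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if margin(mid) > 0.0 else (lo, mid)
print(round(lo, 3))                      # ~0.175: an admissible rate lam for tau = 0.2
```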
On the other hand, we point out that the inequality (4.6) is not feasible here.
In fact, by using the inequality (4.6), we get, for t ∈ [tk, tk+1),
since λ > 0 satisfies λ ≤ 4 - 0.5 exp{λτ}. This makes it very difficult to obtain an estimate of the form (4.8). Therefore, our method is, to some degree, less conservative than those in [19, 20].
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Competing interests
The author declares that he has no competing interests.
Authors' contributions
The study was carried out and the manuscript written by Jiayu Wang independently.