1 Introduction
Neural networks can be modeled either as static neural network models or as local field neural network models, depending on the modeling approach [1, 2]. Typical static neural networks include recurrent back-propagation networks and projection networks, while the Hopfield neural network is a typical example of a local field neural network. Both types have attained broad applications in knowledge acquisition, combinatorial optimization, pattern recognition, and other areas [3]. However, the two types are not equivalent in general: static neural networks can be transformed into local field neural networks only under certain preconditions, and these preconditions are usually not satisfied. Hence, it is necessary to study static neural networks in their own right.
In practical implementations of neural networks, time delays are inevitably encountered and may lead to instability or significantly degraded performance [4]. Therefore, the dynamics of delayed systems, including delayed neural networks, have received considerable attention in recent years, and many results have been obtained [5–19]. As is well known, the Lyapunov-Krasovskii functional method is the most commonly used approach for investigating the dynamics of delayed neural networks. Its conservativeness stems mainly from two aspects: the construction of the Lyapunov-Krasovskii functional and the estimation of its time derivative. To obtain less conservative results, a variety of methods have been proposed. First, several types of Lyapunov-Krasovskii functionals have been presented, such as the augmented functional of [20] and the delay-decomposing functional of [21]. Second, novel integral inequalities have been proposed to obtain tighter upper bounds on the integrals occurring in the time derivative of the functional; Wirtinger’s integral inequality [22], the free-matrix-based integral inequality [23], and an integral inequality involving a double integral [24] are typical examples.
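For reference, Wirtinger’s integral inequality, in the form commonly associated with [22], can be stated as follows: for a matrix \(R=R^{T}>0\) and a function \(x\) differentiable on \([a,b]\),

$$ \int_{a}^{b}\dot{x}^{T}(s)R\dot{x}(s)\,ds \geq \frac{1}{b-a}\Omega_{1}^{T}R\Omega_{1}+\frac{3}{b-a}\Omega_{2}^{T}R\Omega_{2}, $$

where \(\Omega_{1}=x(b)-x(a)\) and \(\Omega_{2}=x(b)+x(a)-\frac{2}{b-a}\int_{a}^{b}x(s)\,ds\). Since the first term alone is Jensen’s inequality, the resulting bound is never looser than the Jensen-based one.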
In practical situations, it is impossible to fully acquire the state information of all neurons in a neural network because of its complicated structure, so it is worthwhile to investigate the state estimation of neural networks. Recently, some results on this problem have been obtained [25–27]. In addition, in analog VLSI implementations of neural networks, uncertainties, which can be modeled as an energy-bounded input noise, should be taken into account because of the tolerances of the electronic elements used. Therefore, it is of practical significance to study the \(\mathcal{H}_{\infty}\) state estimation of delayed neural networks, and significant results on this issue have been reported [28–34]. For instance, in [28], the state estimation problem with guaranteed \(\mathcal{H}_{\infty}\) performance for static neural networks was discussed. In [31], based on the reciprocally convex combination technique and a double-integral inequality, a delay-dependent condition was derived under which the error system is globally exponentially stable with a prescribed \(\mathcal{H}_{\infty}\) performance. In [32], further improved results were obtained by using zero equalities and the reciprocally convex approach [35].
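For completeness, the reciprocally convex combination lemma of [35] is commonly stated as follows: for \(R=R^{T}>0\), a scalar \(\alpha\in(0,1)\), and vectors \(x\), \(y\), if there exists a matrix \(S\) such that

$$ \begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix}\geq0, \quad\text{then}\quad \frac{1}{\alpha}x^{T}Rx+\frac{1}{1-\alpha}y^{T}Ry \geq \begin{bmatrix} x \\ y \end{bmatrix}^{T} \begin{bmatrix} R & S \\ S^{T} & R \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}. $$

This allows the delay-fraction-dependent terms \(1/\alpha\) and \(1/(1-\alpha)\) to be bounded jointly rather than by their separate worst-case values.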
In the results mentioned above [28–33], the lower bound of the time-varying delay was always assumed to be 0. In the real world, however, the delay may be an interval delay, that is, its lower bound is not restricted to be 0. In this case, the criteria of [28–33] guaranteeing the \(\mathcal{H}_{\infty}\) performance of the state estimation cannot be applied, since they do not exploit the information of the lower delay bound. In [34], by constructing an augmented Lyapunov-Krasovskii functional, the guaranteed \(\mathcal{H}_{\infty}\) performance state estimation problem for static neural networks with interval time-varying delay was discussed; slack variables were introduced to derive less conservative results, but at the cost of an increased computational burden. Thus, there remains room to improve the results of [34], which is one of the motivations of this paper.
In this paper, the \(\mathcal{H}_{\infty}\) state estimation problem for static neural networks with interval time-varying delay is investigated. The activation function is assumed to satisfy a sector-bounded condition. On the one hand, a modified Lyapunov-Krasovskii functional is constructed that takes the lower bound of the time-varying delay into account. Compared with the functional in [34], the one proposed here is simpler, since some terms such as \(V_{4}(t)\) in [34] are removed. On the other hand, Wirtinger’s integral inequality, which provides a tighter upper bound than those derived from Jensen’s inequality, is employed to handle the integrals appearing in the derivative. Based on the constructed functional and this integral inequality, an improved delay-dependent criterion is derived under which the resulting error system is globally asymptotically stable with guaranteed \(\mathcal{H}_{\infty}\) performance. Compared with existing relevant results, the criterion in this paper is less conservative and has a lower computational burden. In addition, when the lower bound of the time-varying delay is 0, a new delay-dependent \(\mathcal{H}_{\infty}\) state estimation condition is obtained, which provides better performance than existing results. Simulation results demonstrate the effectiveness of the proposed method.
2 Problem description and preliminaries
In this paper, the following delayed static neural network subject to noise disturbance is considered:
$$ \begin{aligned} &\dot{x}(t) = -Ax(t)+f\bigl(Wx\bigl(t-h(t) \bigr)+J\bigr)+B_{1}w(t), \\ &y(t) = Cx(t)+Dx\bigl(t-h(t)\bigr)+B_{2}w(t), \\ &z(t) = Hx(t), \\ &x(t) = \psi(t),\quad t\in[-h_{2}, 0], \end{aligned} $$
(1)
where \(x(t)=[x_{1}(t), x_{2}(t), \ldots, x_{n}(t)]^{T}\in R^{n}\) is the state vector of the neural network with n neurons, \(y(t) \in R^{m}\) is the network output measurement, \(z(t) \in R^{p}\), to be estimated, is a linear combination of the states, and \(w(t) \in R^{q}\) is the noise input belonging to \(\mathcal{L}_{2}[0, \infty)\). \(A=\operatorname{diag}\{a_{1}, a_{2}, \ldots, a_{n}\}>0\), where \(a_{i}\) describes the rate with which the ith neuron resets its potential to the resting state in isolation when disconnected from the network and external inputs; \(W=[w_{ij}]_{n\times n}\) is the delayed connection weight matrix; and \(B_{1}\), \(B_{2}\), C, D, and H are known real constant matrices with appropriate dimensions. \(f(x(t))=[f_{1}(x_{1}(t)), f_{2}(x_{2}(t)), \ldots, f_{n}(x_{n}(t))]^{T}\) denotes the continuous activation function, \(J=[J_{1}, J_{2}, \ldots, J_{n}]^{T}\) is the exogenous input vector, and \(\psi(t)\) is the initial condition.
\(h(t)\) denotes the time-varying delay satisfying
$$ 0\leq h_{1} \leq h(t) \leq h_{2}, \qquad \dot{h}(t)\leq\mu, $$
(2)
where \(h_{1}\), \(h_{2}\), and μ are known constants.
The neuron activation function \(f_{i}(\cdot)\) satisfies
$$ k^{-}_{i}\leq\frac{f_{i}(x)-f_{i}(y)}{x-y}\leq k^{+}_{i}, \quad i=1,2,\ldots ,n, x\neq y, $$
(3)
where \(k^{-}_{i}\) and \(k^{+}_{i}\) are known constants.
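As an illustration, the dynamics (1) can be simulated with a simple forward-Euler scheme and a history buffer for the delayed state. The sketch below uses tanh activations, which satisfy the sector condition (3) with \(k_{i}^{-}=0\) and \(k_{i}^{+}=1\), and a constant delay \(h(t)\equiv h\); all matrices and parameter values are hypothetical, chosen only for this sketch and not taken from the paper’s examples.

```python
import numpy as np

def simulate_static_nn(A, W, B1, J, h, w, x0, dt=1e-3, T=5.0):
    """Forward-Euler simulation of
        x'(t) = -A x(t) + f(W x(t - h(t)) + J) + B1 w(t)
    with f = tanh, a constant delay h(t) = h, and constant initial
    history psi(t) = x0 on [-h, 0] (illustrative only)."""
    steps = round(T / dt)
    d = round(h / dt)                                   # delay in time steps
    x = np.tile(np.asarray(x0, float), (steps + 1, 1))  # row k stores x(k*dt)
    for k in range(steps):
        x_del = x[max(k - d, 0)]        # x(t - h), clamped to the initial history
        dx = -A @ x[k] + np.tanh(W @ x_del + J) + B1 @ w(k * dt)
        x[k + 1] = x[k] + dt * dx
    return x

# Hypothetical parameters (A diagonal positive definite, ||W|| < min a_i).
A  = np.diag([1.5, 2.0])
W  = np.array([[0.3, -0.2], [0.1, 0.4]])
B1 = np.array([[0.1], [0.05]])
J  = np.zeros(2)
w  = lambda t: np.array([np.exp(-t)])   # energy-bounded noise in L2[0, inf)
traj = simulate_static_nn(A, W, B1, J, h=0.5, w=w, x0=[1.0, -1.0])
```

With this (hypothetical) choice of parameters the self-feedback dominates the delayed coupling, so the state trajectory decays toward the origin as the disturbance dies out.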
For the neural network (1), we construct a state estimator for the estimation of \(z(t)\):
$$ \begin{aligned} &\dot{\hat{x}}(t)=-A\hat{x}(t)+f\bigl(W \hat{x}\bigl(t-h(t)\bigr)+J\bigr)+K\bigl(y(t)-\hat {y}(t)\bigr), \\ &\hat{y}(t)=C\hat{x}(t)+D\hat{x}\bigl(t-h(t)\bigr), \\ &\hat{z}(t)=H\hat{x}(t), \\ &\hat{x}(t)=0,\quad t\in[-h_{2}, 0], \end{aligned} $$
(4)
where \(\hat{x}(t)\in R^{n}\) denotes the estimated state, \(\hat{z}(t)\in R^{p}\) denotes the estimate of \(z(t)\), and K is the estimator gain matrix to be determined.
Define the errors \(r(t)=x(t)-\hat{x}(t)\) and \(\bar{z}(t)=z(t)-\hat{z}(t)\). Then, based on (1) and (4), we can easily obtain the error system of the form
$$ \begin{aligned} &\dot{r}(t) = -(A+KC)r(t)-KDr\bigl(t-h(t) \bigr)+g\bigl(Wr\bigl(t-h(t)\bigr)\bigr) +(B_{1}-KB_{2})w(t), \\ &\bar{z}(t) = Hr(t), \end{aligned} $$
(5)
where \(g(Wr(t))=f(Wx(t)+J)-f(W\hat{x}(t)+J)\).
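To see the error system (5) in action, one can co-simulate the network (1) and the estimator (4) for a candidate gain K and compute the sample-path ratio \(\Vert\bar{z}\Vert_{2}/\Vert w\Vert_{2}\), which lower-bounds the \(\mathcal{H}_{\infty}\) norm of (5). The sketch below does this with f = tanh and a constant delay; the gain K and all matrices are ad hoc hypothetical values, not ones produced by the paper’s design criterion.

```python
import numpy as np

def empirical_l2_gain(A, W, B1, B2, C, D, H, K, J, h, w, dt=1e-3, T=10.0):
    """Co-simulate (1) and (4) with f = tanh, constant delay h, zero initial
    histories, and return ||z - zhat||_2 / ||w||_2 over [0, T]."""
    n = A.shape[0]
    steps = round(T / dt)
    d = round(h / dt)
    x = np.zeros((steps + 1, n))    # plant state history
    xh = np.zeros((steps + 1, n))   # estimator state history
    num = den = 0.0
    for k in range(steps):
        xd, xhd = x[max(k - d, 0)], xh[max(k - d, 0)]
        wk = w(k * dt)
        y  = C @ x[k] + D @ xd + B2 @ wk          # measured output of (1)
        yh = C @ xh[k] + D @ xhd                  # estimator output of (4)
        x[k + 1]  = x[k]  + dt * (-A @ x[k]  + np.tanh(W @ xd  + J) + B1 @ wk)
        xh[k + 1] = xh[k] + dt * (-A @ xh[k] + np.tanh(W @ xhd + J) + K @ (y - yh))
        zbar = H @ (x[k] - xh[k])                 # estimation error output
        num += dt * float(zbar @ zbar)
        den += dt * float(wk @ wk)
    return np.sqrt(num / den)

# Hypothetical data; K chosen ad hoc for illustration only.
A = np.diag([1.5, 2.0]); W = np.array([[0.3, -0.2], [0.1, 0.4]])
B1 = np.array([[0.1], [0.05]]); B2 = np.array([[0.02]])
C = np.array([[1.0, 0.0]]); D = np.array([[0.1, 0.0]]); H = np.eye(2)
K = np.array([[0.5], [0.2]]); J = np.zeros(2)
gamma_emp = empirical_l2_gain(A, W, B1, B2, C, D, H, K, J,
                              h=0.5, w=lambda t: np.array([np.exp(-t)]))
```

Any delay-dependent criterion guaranteeing a prescribed level γ must yield a value of γ no smaller than such empirical ratios, which makes this a convenient sanity check when comparing designed gains.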
To proceed, we need the following useful definition and lemmas.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.