Neurocomputing

Volume 74, Issue 5, February 2011, Pages 838-845

Stochastic dissipativity analysis on discrete-time neural networks with time-varying delays

https://doi.org/10.1016/j.neucom.2010.11.018

Abstract

In this paper, the problems of global dissipativity and global exponential dissipativity are investigated for discrete-time stochastic neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov–Krasovskii functionals and employing stochastic analysis techniques, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks are established in terms of linear matrix inequalities (LMIs). Furthermore, when parameter uncertainties appear in the discrete-time stochastic neural networks with time-varying delays, delay-dependent robust dissipativity criteria are also presented. Two examples are given to demonstrate the effectiveness and reduced conservatism of the proposed criteria.

Introduction

Over the past decades, delayed neural networks have attracted considerable attention because of their extensive applications in many areas such as signal processing, pattern recognition, associative memories, parallel computation, and optimization solvers [1]. In such applications, the qualitative analysis of the dynamical behaviors is a necessary step for the practical design of neural networks [2]. Many important results on the dynamical behaviors have been reported for delayed neural networks, see [1], [2], [3], [4], [5], [6], [7], [8], [9], [10] and the references therein for some recent publications.

Although neural networks are mostly studied in the continuous-time setting, they are often discretized for experimental or computational purposes. The dynamic characteristics of discrete-time neural networks have been extensively investigated, and many results have been obtained, for example, see [11], [12], [13], [14], [15], [16], [17], [18], [19], [20] and the references cited therein.

As pointed out in [10], stochastic disturbance is probably the main source of performance degradation in implemented neural networks. Therefore, the dynamical behaviors of stochastic neural networks with time delay have become increasingly significant. Many important and interesting results for checking dynamical behaviors have been reported for stochastic neural networks with time delay; see, for example, [10], [14], [16], [20] and the references cited therein.

It is well known that the stability problem is central to the analysis of a dynamic system, and various types of stability of an equilibrium point have captured the attention of researchers. Nevertheless, from a practical point of view, it is not always the case that every neural network has its orbits approach a single equilibrium point; in some situations no equilibrium point exists at all. The concept of dissipativity was therefore introduced [21]. As pointed out in [22], dissipativity is also an important concept in dynamical neural networks. It is a more general notion than stability and has found applications in areas such as stability theory, chaos and synchronization theory, system norm estimation, and robust control [22]. Recently, the dissipativity of delayed neural networks has attracted increasing attention, and some sufficient conditions for checking the dissipativity of neural networks with time delays have been derived; see, for example, [22], [23], [24], [25], [26], [27], [28], [29], [30] and the references therein. In [22], [23], the authors analyzed the dissipativity of neural networks with constant delays and derived some sufficient conditions for their global dissipativity and global exponential dissipativity. In [24], [25], the authors considered the global dissipativity and global robust dissipativity of neural networks with both time-varying delays and unbounded distributed delays, and several sufficient conditions for checking these properties were obtained. In [26], [27], by using the linear matrix inequality technique, the authors investigated the global dissipativity of neural networks with both discrete and distributed time-varying delays.
In [28], the uniform dissipativity of a class of non-autonomous neural networks with time-varying delays was investigated by employing the M-matrix approach and inequality techniques. In [29], the authors considered the global dissipativity in mean for stochastic neural networks with constant delays, and several sufficient conditions for checking the dissipativity were given. However, all of the above-mentioned works on the dissipativity of delayed neural networks concern the continuous-time case. Very recently, the global dissipativity of uncertain discrete-time neural networks was investigated, and several criteria for checking the global dissipativity and global exponential dissipativity were given [30]. To the best of the author's knowledge, few authors have considered the dissipativity of discrete-time stochastic neural networks. Therefore, the study of the dissipativity of discrete-time stochastic neural networks remains a challenging problem.

Motivated by the above discussions, the objective of this paper is to study the problems of global dissipativity and global exponential dissipativity for discrete-time stochastic neural networks with time-varying delays and general activation functions. By employing appropriate Lyapunov–Krasovskii functionals and stochastic analysis techniques, we obtain several new sufficient conditions for checking the global dissipativity and global exponential dissipativity of the addressed neural networks.

Notations: The notations are quite standard. Throughout this paper, I represents the identity matrix with appropriate dimensions; ℕ stands for the set of nonnegative integers; ℝⁿ and ℝⁿˣᵐ denote, respectively, the n-dimensional Euclidean space and the set of all n×m real matrices. The superscript "T" denotes matrix transposition, and the asterisk "∗" denotes the elements below the main diagonal of a symmetric block matrix. The notation X ≥ Y (respectively, X > Y) means that X and Y are symmetric matrices and that X − Y is positive semidefinite (respectively, positive definite). ‖·‖ is the Euclidean norm in ℝⁿ. λ_min(A) (respectively, λ_max(A)) denotes the least (respectively, largest) eigenvalue of a symmetric matrix A. For a positive constant a, [a] denotes the integer part of a. For integers a, b with a < b, ℕ[a, b] denotes the discrete interval given by ℕ[a, b] = {a, a+1, …, b−1, b}. C(ℕ[−τ̄, 0], ℝⁿ) denotes the set of all functions ϕ: ℕ[−τ̄, 0] → ℝⁿ. Let (Ω, 𝓕, {𝓕_t}_{t≥0}, P) be a complete probability space with a filtration {𝓕_t}_{t≥0} satisfying the usual conditions (i.e., it is right continuous and 𝓕₀ contains all P-null sets). E{·} stands for the mathematical expectation operator with respect to the given probability measure P. Denote by L²_{𝓕₀}(ℕ[−τ̄, 0], ℝⁿ) the family of all 𝓕₀-measurable C(ℕ[−τ̄, 0], ℝⁿ)-valued random variables ψ = {ψ(s): s ∈ ℕ[−τ̄, 0]} such that sup_{s∈ℕ[−τ̄,0]} E{|ψ(s)|²} < ∞. ΔV(k) denotes the difference of a function V(k), given by ΔV(k) = V(k+1) − V(k). Matrices, if not explicitly specified, are assumed to have compatible dimensions.

Section snippets

Model description and preliminaries

In this paper, we consider the following discrete-time stochastic neural network model:

x(k+1) = Cx(k) + Ag(x(k)) + Bg(x(k−τ(k))) + u + σ(x(k), x(k−τ(k)))ω(k)

for k ∈ ℕ, where x(k) = (x₁(k), x₂(k), …, xₙ(k))ᵀ ∈ ℝⁿ and xᵢ(k) is the state of the ith neuron at time k; g(x(k)) = (g₁(x₁(k)), g₂(x₂(k)), …, gₙ(xₙ(k)))ᵀ ∈ ℝⁿ and gⱼ(xⱼ(k)) denotes the activation function of the jth neuron at time k; u = (u₁, u₂, …, uₙ)ᵀ is the input vector; the positive integer τ(k) corresponds to the transmission delay and satisfies τ ≤ τ(k) ≤ τ̄ (τ ≥ 0 and τ̄ ≥ 0 …
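The update above can be sketched numerically. In the snippet below, every concrete quantity (dimension, weight matrices, delay law, noise intensity, activation) is an illustrative placeholder chosen to satisfy the model's assumptions, not data from the paper:

```python
import numpy as np

# Sketch of model (1): x(k+1) = C x(k) + A g(x(k)) + B g(x(k-tau(k))) + u
#                               + sigma(x(k), x(k-tau(k))) * omega(k).
# All numbers below are hypothetical placeholders.
rng = np.random.default_rng(0)
n = 2
C = np.diag([0.1, 0.2])                        # hypothetical state matrix
A = np.array([[0.05, -0.02], [0.01, 0.03]])    # hypothetical connection weights
B = np.array([[0.02, 0.01], [-0.01, 0.02]])    # hypothetical delayed weights
u = np.array([0.3, -0.2])                      # constant external input
g = np.tanh                                    # a bounded activation

tau_lo, tau_hi = 2, 5                          # delay bounds tau <= tau(k) <= tau_bar

def tau(k):
    # hypothetical time-varying delay law staying inside [tau_lo, tau_hi]
    return tau_lo + (k % (tau_hi - tau_lo + 1))

def sigma(x, xd):
    # hypothetical noise-intensity vector, linear in x(k) and x(k - tau(k))
    return 0.1 * x + 0.1 * xd

K = 200
x = np.zeros((K + tau_hi + 1, n))
x[:tau_hi + 1] = rng.normal(size=(tau_hi + 1, n))   # random initial segment

for k in range(tau_hi, K + tau_hi):
    xd = x[k - tau(k)]
    omega = rng.normal()                       # scalar white-noise increment
    x[k + 1] = C @ x[k] + A @ g(x[k]) + B @ g(xd) + u + sigma(x[k], xd) * omega

print(np.max(np.abs(x[-50:])))                 # trajectories remain bounded
```

With small weights and a Schur-stable C, the sample paths settle into a bounded set, which is the qualitative behavior dissipativity formalizes.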

Main results

In this section, we shall establish our main criteria based on the LMI approach. For presentation convenience, in the following we denote

G₁ = diag{G₁⁻, G₂⁻, …, Gₙ⁻},  G₂ = diag{G₁⁺, G₂⁺, …, Gₙ⁺},

G₃ = diag{G₁⁻G₁⁺, G₂⁻G₂⁺, …, Gₙ⁻Gₙ⁺},  G₄ = diag{(G₁⁻ + G₁⁺)/2, (G₂⁻ + G₂⁺)/2, …, (Gₙ⁻ + Gₙ⁺)/2},

δ = (τ + τ̄)/2.
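These shorthand matrices are straightforward to compute from the activation slope bounds Gᵢ⁻, Gᵢ⁺ and the delay bounds. The sketch below uses the numbers that appear in Example 1 later in the paper:

```python
import numpy as np

# Shorthand matrices used in the dissipativity criteria, computed from the
# activation slope bounds G_i^- , G_i^+ and the delay bounds (Example 1 data).
g_minus = np.array([-0.1, -0.2])   # G_1^- , G_2^-
g_plus = np.array([0.8, 0.6])      # G_1^+ , G_2^+
tau_lo, tau_hi = 5, 9              # tau and tau_bar

G1 = np.diag(g_minus)
G2 = np.diag(g_plus)
G3 = np.diag(g_minus * g_plus)           # entrywise products G_i^- G_i^+
G4 = np.diag((g_minus + g_plus) / 2)     # entrywise midpoints (G_i^- + G_i^+)/2
delta = (tau_lo + tau_hi) / 2

print(G3)     # diag(-0.08, -0.12)
print(G4)     # diag(0.35, 0.2)
print(delta)  # 7.0
```

The printed values match the ones reported in Example 1, which confirms the reconstruction of the definitions above.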

Theorem 1

Suppose that (H1) and (H2) hold. If there exist eleven symmetric positive definite matrices P > 0, Wᵢ > 0 (i = 1, 2), Rᵢ > 0 (i = 1, 2, 3), Qᵢ > 0 (i = 1, 2, 3, 4, 5), four positive diagonal matrices Dᵢ > 0 (i = 1, 2) and Hᵢ > 0 (i = 1, 2), and a positive constant λ > 0 such …
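The full LMI condition of Theorem 1 is truncated in this preview. As a much simpler sanity check in the same Lyapunov spirit (not the theorem's actual delay-dependent criterion), one can verify that the delay-free, noise-free part of the network in Example 1 below, x(k+1) = Cx(k), is Schur stable by solving a discrete Lyapunov equation:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# C from Example 1 below. This certifies only the trivial delay-free part,
# not the full delay-dependent LMI of Theorem 1.
C = np.diag([0.01, 0.03])
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) solves a X a^T - X + q = 0; with a = C^T this
# gives C^T P C - P + Q = 0, and P > 0 certifies Schur stability of C.
P = solve_discrete_lyapunov(C.T, Q)
residual = C.T @ P @ C - P + Q

print(np.linalg.eigvalsh(P))   # both eigenvalues positive
```

A full check of Theorem 1 would instead pose the (truncated) block LMI over P, Wᵢ, Rᵢ, Qᵢ, Dᵢ, Hᵢ, λ and test feasibility with an SDP solver, as the paper does with the Matlab LMI Control Toolbox.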

Examples

Example 1

Consider a discrete-time stochastic neural network (1) with

C = [0.01, 0; 0, 0.03],  A = [0.08, 0.03; 0.04, 0.01],  B = [0.01, 0.05; 0.02, 0.02],  u = (0.31, 0.23)ᵀ,

g₁(x) = tanh(0.7x) − 0.1 sin x,  g₂(x) = tanh(0.4x) + 0.2 cos x,

τ(k) = 7 − 2 sin(kπ/2),  σ(x, y) = (0.1x − 0.2y, 0.2x + 0.3y)ᵀ.

It is easy to check that assumptions (H1) and (H2) are satisfied, with G₁⁻ = −0.1, G₁⁺ = 0.8, G₂⁻ = −0.2, G₂⁺ = 0.6, τ = 5, τ̄ = 9, ρ₁ = 0.09, ρ₂ = 0.17. Thus

G₁ = diag{−0.1, −0.2},  G₂ = diag{0.8, 0.6},  G₃ = diag{−0.08, −0.12},  G₄ = diag{0.35, 0.2},  δ = 7.

By the Matlab LMI Control Toolbox, we can find a solution to the LMIs in (3) as …
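Example 1 can also be simulated directly. Some signs in the printed data may have been lost in extraction, so the entries below follow the reconstruction above and should be treated as indicative rather than authoritative:

```python
import numpy as np

# Simulation of Example 1 as reconstructed above. Sign information for the
# entries of A, B, u may have been lost in extraction; treat as indicative.
rng = np.random.default_rng(1)
C = np.diag([0.01, 0.03])
A = np.array([[0.08, 0.03], [0.04, 0.01]])
B = np.array([[0.01, 0.05], [0.02, 0.02]])
u = np.array([0.31, 0.23])

def g(x):
    return np.array([np.tanh(0.7 * x[0]) - 0.1 * np.sin(x[0]),
                     np.tanh(0.4 * x[1]) + 0.2 * np.cos(x[1])])

def tau(k):
    # tau(k) = 7 - 2 sin(k*pi/2) takes values in {5, 7, 9}
    return int(round(7 - 2 * np.sin(k * np.pi / 2)))

def sigma(x, y):
    # noise-intensity vector, componentwise as printed
    return np.array([0.1 * x[0] - 0.2 * y[0], 0.2 * x[1] + 0.3 * y[1]])

K, tau_max = 300, 9
x = np.zeros((K + tau_max + 1, 2))
x[:tau_max + 1] = rng.normal(size=(tau_max + 1, 2))   # random initial segment

for k in range(tau_max, K + tau_max):
    xd = x[k - tau(k)]
    x[k + 1] = C @ x[k] + A @ g(x[k]) + B @ g(xd) + u + sigma(x[k], xd) * rng.normal()

print(np.max(np.abs(x[-100:])))   # sample paths settle into a bounded set
```

The sample paths remain bounded, consistent with the global dissipativity the LMI criteria certify for this example.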

Conclusions

In this paper, the global dissipativity and global exponential dissipativity have been investigated for discrete-time stochastic neural networks with time-varying delays and general activation functions. By constructing appropriate Lyapunov–Krasovskii functionals and employing stochastic analysis techniques, several new delay-dependent criteria for checking the global dissipativity and global exponential dissipativity of the addressed neural networks have been derived in terms of LMIs, which can be checked easily by the Matlab LMI Control Toolbox.

Acknowledgements

The author would like to thank the reviewers and the editor for their valuable suggestions and comments which have led to a much improved paper. This work was supported by the National Natural Science Foundation of China under Grants 60974132 and 10772152.


References (32)


Qiankun Song was born in 1964. He received the B.S. degree in Mathematics in 1986 from Sichuan Normal University, Chengdu, China, and the M.S. degree in Applied Mathematics in 1996 from Northwestern Polytechnical University, Xi'an, China. He attended a refresher class in the Department of Mathematics, Sichuan University, Chengdu, China, from September 1989 to July 1990.

From July 1986 to December 2000, he was with the Department of Mathematics, Sichuan University of Science and Engineering, Sichuan, China. From January 2001 to June 2006, he was with the Department of Mathematics, Huzhou University, Zhejiang, China. In July 2006, he moved to the Department of Mathematics, Chongqing Jiaotong University, Chongqing, China. Currently, he is a Professor at Chongqing Jiaotong University and is working toward the Ph.D. degree in Applied Mathematics at Sichuan University, Chengdu, China. He is the author or coauthor of more than 40 journal papers and two edited books. His current research interests include dissipativity theory of neural networks and chaos synchronization.
