Elsevier

Neural Networks

Volume 44, August 2013, Pages 64-71

A new upper bound for the norm of interval matrices with application to robust stability analysis of delayed neural networks

https://doi.org/10.1016/j.neunet.2013.03.014

Abstract

A central difficulty in the robust stability analysis of neural networks is finding an upper bound on the norms of the intervalized interconnection matrices. In the previous literature, three major upper bounds for the norms of intervalized interconnection matrices have been reported and successfully applied to derive new sufficient conditions for the robust stability of delayed neural networks. One of the main contributions of this paper is the derivation of a new upper bound for the norm of the intervalized interconnection matrices of neural networks. Then, by exploiting this new upper bound and using the stability theory of Lyapunov functionals together with the theory of homeomorphic mapping, we obtain new sufficient conditions for the existence, uniqueness and global asymptotic stability of the equilibrium point for the class of neural networks with discrete time delays under parameter uncertainties and with continuous, slope-bounded activation functions. The results obtained in this paper are shown to be new, and they can be considered alternatives to previously published results. We also give some illustrative and comparative numerical examples to demonstrate the effectiveness and applicability of the proposed robust stability condition.

Introduction

In this paper, we consider the neural network model whose dynamical behavior is described by the following set of nonlinear differential equations
$$\frac{dx_i(t)}{dt} = -c_i x_i(t) + \sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\tau_j)) + u_i, \quad i=1,2,\ldots,n \tag{1}$$
where $n$ is the number of neurons, $x_i(t)$ denotes the state of neuron $i$ at time $t$, $f_i(\cdot)$ denote the activation functions, $a_{ij}$ and $b_{ij}$ denote the strengths of connectivity between neurons $j$ and $i$ at time $t$ and $t-\tau_j$, respectively, $\tau_j$ represents the time delay required in transmitting a signal from neuron $j$ to neuron $i$, $u_i$ is the constant input to neuron $i$, and $c_i$ is the charging rate of neuron $i$.

The neural network model defined by (1) can be written in the matrix–vector form
$$\dot{x}(t) = -Cx(t) + Af(x(t)) + Bf(x(t-\tau)) + u \tag{2}$$
where $x(t)=(x_1(t),x_2(t),\ldots,x_n(t))^T\in\mathbb{R}^n$, $A=(a_{ij})_{n\times n}$, $B=(b_{ij})_{n\times n}$, $C=\operatorname{diag}(c_i>0)$, $u=(u_1,u_2,\ldots,u_n)^T\in\mathbb{R}^n$, $f(x(t))=(f_1(x_1(t)),f_2(x_2(t)),\ldots,f_n(x_n(t)))^T\in\mathbb{R}^n$ and $f(x(t-\tau))=(f_1(x_1(t-\tau_1)),f_2(x_2(t-\tau_2)),\ldots,f_n(x_n(t-\tau_n)))^T\in\mathbb{R}^n$.
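As a quick illustration of the dynamics in (2), the delayed system can be integrated with a simple forward-Euler scheme. All parameter values below are hypothetical, chosen only so that the trajectory settles; they are not taken from the paper.

```python
import numpy as np

# Hypothetical 2-neuron instance of model (2); values are illustrative only.
C = np.diag([1.0, 1.2])            # positive charging rates c_i
A = np.array([[0.2, -0.1],
              [0.1,  0.3]])        # non-delayed connection weights a_ij
B = np.array([[0.1,  0.05],
              [-0.2, 0.1]])        # delayed connection weights b_ij
u = np.array([0.5, -0.3])          # constant external inputs u_i
tau = 1.0                          # common delay tau_j = tau, for simplicity
f = np.tanh                        # a slope-bounded activation (k_i = 1)

dt, T = 0.01, 20.0
steps = int(T / dt)
d = int(tau / dt)                  # delay expressed in integration steps
x = np.zeros((steps + 1, 2))       # zero initial history on [-tau, 0]

for t in range(steps):
    x_delay = x[t - d] if t >= d else x[0]
    dx = -C @ x[t] + A @ f(x[t]) + B @ f(x_delay) + u
    x[t + 1] = x[t] + dt * dx

print(x[-1])   # for these values the trajectory settles near an equilibrium
```

Euler integration with a stored state history is the simplest way to handle the discrete delay; for stiff systems or long delays a dedicated delay-differential-equation solver would be preferable.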

The main approach to establishing the desired equilibrium and stability properties of dynamical neural networks is first to determine the character of the activation functions $f_i$, and then to impose some constraint conditions on the interconnection weight matrices $A$ and $B$.

The activation functions $f_i$ are assumed to be slope-bounded in the sense that there exist positive constants $k_i$ such that the following conditions hold:
$$0 \le \frac{f_i(x)-f_i(y)}{x-y} \le k_i, \quad i=1,2,\ldots,n, \quad \forall x,y\in\mathbb{R},\ x\neq y.$$
This class of functions will be denoted by $f\in K$. The functions in this class are not required to be bounded, differentiable, or monotonically increasing. In the recent literature, many authors have considered unbounded but slope-bounded activation functions in the equilibrium and stability analysis of different types of neural networks and presented various robust stability conditions for the neural network models considered (Baese et al., 2009, Balasubramaniam and Ali, 2010, Cao, 2001, Cao and Ho, 2005, Cao et al., 2005, Cao and Wang, 2005, Deng et al., 2011, Ensari and Arik, 2010, Faydasicok and Arik, 2012, Guo and Huang, 2009, Han et al., 2011, Huang et al., 2007, Huang et al., 2009, Kao et al., 2012, Kwon and Park, 2008, Liao and Wong, 2004, Liu et al., 2009, Lou et al., 2012, Mahmoud and Ismail, 2010, Ozcan and Arik, 2006, Pan et al., 2011, Qi, 2007, Shao et al., 2010, Shen and Wang, 2012, Shen et al., 2011, Singh, 2007, Wang et al., 2010, Wu et al., 2012, Zeng and Wang, 2009, Zhang et al., 2010, Zhang et al., 2011, Zhou and Wan, 2010). It is well known from Brouwer’s fixed point theorem that a neural network always possesses an equilibrium point if its activation functions are bounded. However, if the activation functions are unbounded, the existence of an equilibrium point cannot always be guaranteed. Therefore, in the next sections, we first investigate the existence of a unique equilibrium point for neural networks, which is necessary for global robust asymptotic stability of the neural network model (1).
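The slope-bound condition above can be checked empirically by sampling difference quotients. The sketch below (with a hypothetical helper `slope_bounded`) tests it for `tanh`, a piecewise-linear saturation, and the unbounded identity map, all with $k_i = 1$.

```python
import numpy as np

def slope_bounded(f, k, n_pairs=10000, span=10.0, seed=0):
    """Return True if sampled difference quotients of f all lie in [0, k]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-span, span, n_pairs)
    y = rng.uniform(-span, span, n_pairs)
    mask = x != y                              # the condition requires x != y
    q = (f(x[mask]) - f(y[mask])) / (x[mask] - y[mask])
    return bool(np.all(q >= -1e-12) and np.all(q <= k + 1e-12))

print(slope_bounded(np.tanh, 1.0))                                   # bounded
print(slope_bounded(lambda s: 0.5 * (np.abs(s + 1) - np.abs(s - 1)), 1.0))
print(slope_bounded(lambda s: s, 1.0))        # unbounded yet slope-bounded
```

The last example illustrates the point made above: slope-boundedness does not imply boundedness, which is why existence of an equilibrium must be established separately.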

When we wish to employ a stable neural network that is robust against deviations in the elements of the non-delayed and delayed interconnection matrices of the neural system, we need to know the exact parameter intervals for these matrices. A standard approach to formulating this problem is to intervalize the system matrices $A=(a_{ij})$, $B=(b_{ij})$ and $C=\operatorname{diag}(c_i>0)$ as follows:
$$C_I = \{C=\operatorname{diag}(c_i): 0<\underline{C}\le C\le \overline{C},\ \text{i.e.,}\ 0<\underline{c}_i\le c_i\le \overline{c}_i,\ \forall i\}$$
$$A_I = \{A=(a_{ij}): \underline{A}\le A\le \overline{A},\ \text{i.e.,}\ \underline{a}_{ij}\le a_{ij}\le \overline{a}_{ij},\ i,j=1,2,\ldots,n\}$$
$$B_I = \{B=(b_{ij}): \underline{B}\le B\le \overline{B},\ \text{i.e.,}\ \underline{b}_{ij}\le b_{ij}\le \overline{b}_{ij},\ i,j=1,2,\ldots,n\}. \tag{3}$$
In most robust stability analyses of the neural network model (1), the norms of the matrices $A=(a_{ij})$ and $B=(b_{ij})$ inevitably enter the stability conditions. When $A$ and $B$ are only known to lie in the parameter ranges defined by (3), expressing the exact contribution of their norms to the stability conditions requires an upper bound on the norms of these matrices. In the previous literature, three different upper bounds for the norms of $A$ and $B$ have been reported. A key contribution of the current paper is to define a new upper bound for the norms of $A$ and $B$.
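Analyses of interval families like (3) typically work with the center matrix $A^{*}=\frac{1}{2}(\overline{A}+\underline{A})$ and radius matrix $A_{*}=\frac{1}{2}(\overline{A}-\underline{A})$, so that every admissible $A$ satisfies $|A-A^{*}|\le A_{*}$ entrywise. A small sketch with illustrative interval endpoints:

```python
import numpy as np

# Illustrative entrywise interval bounds A_lo <= A <= A_hi (hypothetical values).
A_lo = np.array([[-0.2, 0.1],
                 [ 0.0, -0.3]])
A_hi = np.array([[ 0.4, 0.5],
                 [ 0.2,  0.1]])

A_star  = (A_hi + A_lo) / 2.0     # center matrix A*
A_under = (A_hi - A_lo) / 2.0     # radius matrix A_*

rng = np.random.default_rng(1)
A = rng.uniform(A_lo, A_hi)       # an arbitrary member of the interval family

# Every admissible A deviates from the center by at most the radius, entrywise.
assert np.all(np.abs(A - A_star) <= A_under + 1e-12)
print(A_star)
print(A_under)
```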


Preliminaries

In this section, we first review some basic vector and matrix norms. Let $v=(v_1,v_2,\ldots,v_n)^T\in\mathbb{R}^n$. The three commonly used vector norms $\|v\|_1$, $\|v\|_2$, $\|v\|_\infty$ are defined as
$$\|v\|_1=\sum_{i=1}^{n}|v_i|,\quad \|v\|_2=\sqrt{\sum_{i=1}^{n}v_i^2},\quad \|v\|_\infty=\max_{1\le i\le n}|v_i|.$$
If $Q=(q_{ij})_{n\times n}$, then $\|Q\|_1$, $\|Q\|_2$ and $\|Q\|_\infty$ are defined as
$$\|Q\|_1=\max_{1\le i\le n}\sum_{j=1}^{n}|q_{ji}|,\quad \|Q\|_2=[\lambda_{\max}(Q^TQ)]^{1/2},\quad \|Q\|_\infty=\max_{1\le i\le n}\sum_{j=1}^{n}|q_{ij}|.$$
We use the following notation. For the vector $v=(v_1,v_2,\ldots,v_n)^T$, $|v|$ will denote $|v|=(|v_1|,|v_2|,\ldots,|v_n|)^T$. For any real matrix $Q=(q_{ij})_{n\times n}$, $|Q|$ will denote $|Q|=(|q_{ij}|)_{n\times n}$.
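These three matrix norms are easy to implement directly and to cross-check against a library routine; the matrix `Q` below is an arbitrary example.

```python
import numpy as np

Q = np.array([[1.0, -2.0],
              [3.0,  0.5]])

norm_1   = np.max(np.sum(np.abs(Q), axis=0))            # maximum column sum
norm_inf = np.max(np.sum(np.abs(Q), axis=1))            # maximum row sum
norm_2   = np.sqrt(np.max(np.linalg.eigvalsh(Q.T @ Q))) # spectral norm

# Cross-check against NumPy's built-in matrix norms.
assert np.isclose(norm_1,   np.linalg.norm(Q, 1))
assert np.isclose(norm_inf, np.linalg.norm(Q, np.inf))
assert np.isclose(norm_2,   np.linalg.norm(Q, 2))
print(norm_1, norm_inf, norm_2)
```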

Existence and uniqueness analysis of equilibrium point

In this section, we obtain new sufficient conditions that ensure the existence and uniqueness of the equilibrium point of the neural network model (2).

We first present the following result:

Theorem 1

For the neural system defined by (2), let $f\in K$ and the network parameters satisfy (3). Then, the neural network model (2) has a unique equilibrium point for each $u$ if there exists a positive diagonal matrix $P=\operatorname{diag}(p_i>0)$ such that
$$\Phi_1 = 2\underline{C}PK^{-1} - \bigl(PA^{*}+A^{*T}P+\|PA_{*}+A_{*}^{T}P\|_2 I\bigr) - 2\|P\|_2^2\,\bigl\||B^{*T}B^{*}|+2|B^{*T}|B_{*}+B_{*}^{T}B_{*}\bigr\|_2 I > 0$$
where $A^{*}=\frac{1}{2}(\overline{A}+\underline{A})$, $A_{*}=\frac{1}{2}(\overline{A}-\underline{A})$, $B^{*}=\frac{1}{2}(\overline{B}+\underline{B})$, $B_{*}=\frac{1}{2}(\overline{B}-\underline{B})$ and $K=\operatorname{diag}(k_i>0)$.
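One plausible numerical reading of the Theorem 1 test, reconstructed from the truncated snippet above, is sketched below: form the center/radius matrices of the intervals, build $\Phi_1$ with the paper's new norm estimate for $B$, and check positive definiteness. All interval endpoints and the trial matrix $P$ are hypothetical.

```python
import numpy as np

spec = lambda M: np.linalg.norm(M, 2)   # spectral norm ||.||_2

# Hypothetical interval data for a 2-neuron network and a trial diagonal P.
C_lo = np.diag([5.0, 5.0])              # lower bound of C (charging rates)
K    = np.diag([1.0, 1.0])              # slope bounds k_i
P    = np.diag([1.0, 1.0])              # candidate P = diag(p_i > 0)

A_lo = np.array([[-0.1, 0.0], [0.0, -0.1]]); A_hi = np.array([[0.2, 0.1], [0.1, 0.2]])
B_lo = np.array([[-0.1, 0.0], [0.0, -0.1]]); B_hi = np.array([[0.1, 0.1], [0.1, 0.1]])

A_s, A_u = (A_hi + A_lo) / 2, (A_hi - A_lo) / 2   # A*, A_*
B_s, B_u = (B_hi + B_lo) / 2, (B_hi - B_lo) / 2   # B*, B_*

I = np.eye(2)
# Reconstructed reading of the paper's new estimate for ||B||_2^2.
new_bound_sq = spec(np.abs(B_s.T @ B_s) + 2 * np.abs(B_s.T) @ B_u + B_u.T @ B_u)

Phi1 = (2 * C_lo @ P @ np.linalg.inv(K)
        - (P @ A_s + A_s.T @ P + spec(P @ A_u + A_u.T @ P) * I)
        - 2 * spec(P) ** 2 * new_bound_sq * I)

print(np.linalg.eigvalsh(Phi1).min() > 0)   # Phi_1 > 0 holds for this data
```

In practice one would search over diagonal `P` (e.g. via an LMI solver) rather than fixing it by hand.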

Stability analysis of equilibrium point

In this section, we show that the conditions obtained in Theorem 1, Theorem 2 for the existence and uniqueness of the equilibrium point also imply the asymptotic stability of the equilibrium point of neural system (2). In order to proceed further, we denote the equilibrium point of (2) by $x^{*}$ and use the transformation $z_i(\cdot)=x_i(\cdot)-x_i^{*}$, $i=1,2,\ldots,n$, to put system (2) in the form
$$\dot{z}_i(t)=-c_i z_i(t)+\sum_{j=1}^{n}a_{ij}g_j(z_j(t))+\sum_{j=1}^{n}b_{ij}g_j(z_j(t-\tau_j))$$
where $g_i(z_i(\cdot))=f_i(z_i(\cdot)+x_i^{*})-f_i(x_i^{*})$, $i=1,2,\ldots,n$. It can be seen

A comparative numerical example

In this section, we give an example comparing our results with corresponding results from the previous literature. We first restate one of those results:

Theorem 5

Qi, 2007

For the neural system defined by (2), let $f\in K$ and the network parameters satisfy (3). Then, the neural network model (2) is globally asymptotically robustly stable if there exists a positive diagonal matrix $P=\operatorname{diag}(p_i>0)$ such that
$$\Phi_2 = 2\underline{C}PK^{-1} - \bigl(PA^{*}+A^{*T}P+\|PA_{*}+A_{*}^{T}P\|_2 I\bigr) - 2\|P\|_2^2\,\sigma_2^2(B) I > 0$$
where $A^{*}=\frac{1}{2}(\overline{A}+\underline{A})$, $A_{*}=\frac{1}{2}(\overline{A}-\underline{A})$, $K=\operatorname{diag}(k_i>0)$, $\sigma_2(B)=\|B^{*}\|_2+\|B_{*}\|_2$, $B^{*}=\frac{1}{2}(\overline{B}+\underline{B})$ and $B_{*}=\frac{1}{2}(\overline{B}-\underline{B})$.
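Under the usual center/radius decomposition, the classical estimate $\sigma_2^2(B)=(\|B^{*}\|_2+\|B_{*}\|_2)^2$ can be compared numerically with the new estimate used in this paper on random interval data. Since the two bounds are alternatives rather than strict improvements of one another, the sketch below simply counts how often each is tighter; all data are randomly generated.

```python
import numpy as np

spec = lambda M: np.linalg.norm(M, 2)   # spectral norm ||.||_2

rng = np.random.default_rng(0)
new_tighter = 0
trials = 100
for _ in range(trials):
    B_lo = rng.uniform(-1, 1, (4, 4))
    B_hi = B_lo + rng.uniform(0, 1, (4, 4))          # ensure B_lo <= B_hi
    B_s, B_u = (B_hi + B_lo) / 2, (B_hi - B_lo) / 2  # B*, B_*
    old = (spec(B_s) + spec(B_u)) ** 2               # sigma_2(B)^2 of Theorem 5
    # Reconstructed reading of the paper's new estimate for ||B||_2^2.
    new = spec(np.abs(B_s.T @ B_s) + 2 * np.abs(B_s.T) @ B_u + B_u.T @ B_u)
    if new < old:
        new_tighter += 1
print(f"new bound tighter in {new_tighter}/{trials} random trials")
```

A comparison of this kind only illustrates relative tightness on sampled data; the paper's formal comparison is via the resulting stability conditions.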

Conclusions

We have first introduced a new upper bound for the norm of the interval matrices. Then, by applying this result to dynamical analysis of delayed neural networks together with employing Lyapunov stability and homomorphic mapping theorems, we have derived some new sufficient conditions for the global robust asymptotic stability of the equilibrium point for this class of neural networks. By giving a comparative numerical example, we have shown that our proposed results are new and can be

