Physics Letters A

Volume 356, Issues 4–5, 14 August 2006, Pages 346–352

Exponential stability of delayed recurrent neural networks with Markovian jumping parameters

https://doi.org/10.1016/j.physleta.2006.03.078

Abstract

In this Letter, the global exponential stability analysis problem is considered for a class of recurrent neural networks (RNNs) with time delays and Markovian jumping parameters. The jumping parameters are generated by a continuous-time Markov process with a discrete and finite state space. The purpose of the problem addressed is to derive easy-to-test conditions under which the dynamics of the neural network is stochastically exponentially stable in the mean square, independently of the time delay. By employing a new Lyapunov–Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish the desired sufficient conditions; the global exponential stability in the mean square for the delayed RNNs can therefore be checked easily by utilizing the numerically efficient Matlab LMI toolbox, and no tuning of parameters is required. A numerical example is exploited to show the usefulness of the derived LMI-based stability conditions.

Introduction

In the past few decades, the mathematical properties of the recurrent neural networks (RNNs), such as the stability, the attractivity and the oscillation, have been intensively investigated. RNNs have been successfully applied in many areas, including image processing, pattern recognition, associative memory, and optimization problems. In fact, the stability analysis issue for RNNs with time delays has been an attractive subject of research in the past few years, where the time delays under consideration can be classified as constant delays, time-varying delays, and distributed delays. Various sufficient conditions, either delay-dependent or delay-independent, have been proposed to guarantee the global asymptotic or exponential stability for the RNNs with time-delays, see e.g. [1], [2], [3], [4] for some recent publications, where many methods have been exploited, such as the LMI approach and M-matrix approach.

Traditional RNNs assume that the continuous variables propagate from one processing unit to the next. However, RNNs sometimes have problems in catching long-term dependencies in the input stream. As the temporal sequences increase in length, the early components of the sequence have less impact on the network output. Such a phenomenon is referred to as the problem of information latching [5]. A widely used approach to dealing with the information latching problem is to extract finite state representations (also called clusters, patterns, or modes) from trained networks [6], [7], [8], [9]. In other words, the RNNs may have finite modes, and the modes may switch (or jump) from one to another at different times. Recently, it has been shown in [10] that the switching (or jumping) between different RNN modes can be governed by a Markovian chain. Hence, an RNN with such a "jumping" character may be modeled as a hybrid one; that is, the state space of the RNN contains both discrete and continuous states. For a specific mode, the dynamics of the RNN is continuous, but the parameter jumps among different modes may be seen as discrete events. Note that the concept of Markovian neural networks has already been used in some papers, see e.g. [11]. Therefore, RNNs with Markovian jumping parameters are of great significance in modeling a class of neural networks with finite network modes. It should be pointed out that, up to now, the stability analysis problem for RNNs with Markovian switching has received very little research attention, despite its practical importance.
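The mode switching described above can be illustrated with a short simulation (a sketch, not part of the Letter): a two-mode continuous-time Markov chain whose holding time in each mode is exponentially distributed, with hypothetical jump rates q01 and q10.

```python
import random

def simulate_modes(q01, q10, t_end, seed=0):
    """Simulate a two-mode continuous-time Markov chain.

    q01: rate of jumping from mode 0 to mode 1
    q10: rate of jumping from mode 1 to mode 0
    Returns a list of (switch_time, new_mode) pairs, starting in mode 0.
    """
    rng = random.Random(seed)
    t, mode = 0.0, 0
    path = [(0.0, 0)]
    while t < t_end:
        rate = q01 if mode == 0 else q10
        t += rng.expovariate(rate)  # exponential holding time in current mode
        if t >= t_end:
            break
        mode = 1 - mode             # jump to the other mode
        path.append((t, mode))
    return path

path = simulate_modes(q01=7.0, q10=6.0, t_end=1.0)
```

The RNN parameters would be held fixed between successive switch times and replaced by the other mode's parameters at each jump, which is exactly the hybrid continuous/discrete behavior described above.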

In this Letter, we are concerned with the analysis issue for the global exponential stability of RNNs with mixed time delays and Markovian jumping parameters. To the best of the authors' knowledge, this is the first attempt to introduce and investigate delayed RNNs with Markovian switching. It is worth mentioning that the control and filtering problems for Markovian jump systems (MJSs) have already been widely studied in the control and signal processing communities, see e.g. [12], [13]. The main purpose of this Letter is to establish LMI-based stability criteria for testing whether the network dynamics is stochastically exponentially stable in the mean square, independently of the time delay. Since LMIs can be solved efficiently with the numerically attractive Matlab LMI toolbox, the proposed results are practically useful. We will use a simple example to illustrate the usefulness of the derived LMI-based stability conditions.


Problem formulation

Notation

The notations in this Letter are quite standard. R^n and R^{n×m} denote, respectively, the n-dimensional Euclidean space and the set of all n×m real matrices. The superscript "T" denotes matrix transposition, and the notation X ≥ Y (respectively, X > Y), where X and Y are symmetric matrices, means that X − Y is positive semi-definite (respectively, positive definite). I is the identity matrix of compatible dimension. We let h > 0 and C([−h, 0]; R^n) denote the family of continuous functions φ from [−h, 0] to R^n with
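The ordering X ≥ Y can be checked numerically by testing the eigenvalues of the symmetric difference X − Y; a minimal sketch (not from the Letter), using numpy:

```python
import numpy as np

def geq(X, Y, tol=1e-10):
    """Return True if X - Y is positive semi-definite, i.e. X >= Y."""
    D = X - Y
    # The ordering is defined only for symmetric matrices.
    assert np.allclose(D, D.T)
    # X >= Y iff the smallest eigenvalue of X - Y is nonnegative.
    return np.linalg.eigvalsh(D).min() >= -tol

X = np.array([[2.0, 0.0], [0.0, 3.0]])
Y = np.eye(2)
```

Here geq(X, Y) is True while geq(Y, X) is False, since X − Y = diag(1, 2) is positive definite.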

Main results and proofs

Let us first give the following lemmas which will be frequently used in the proofs of our main results in this Letter.

Lemma 1

Let x ∈ R^n, y ∈ R^n and ε > 0. Then we have x^T y + y^T x ≤ ε x^T x + ε^{−1} y^T y.

Proof

The proof follows immediately from the inequality (ε^{1/2} x − ε^{−1/2} y)^T (ε^{1/2} x − ε^{−1/2} y) ≥ 0. □
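Lemma 1 can be spot-checked numerically (an illustrative sketch, not part of the Letter); note x^T y + y^T x = 2 x^T y for real vectors:

```python
import numpy as np

def lemma1_holds(x, y, eps):
    """Check x^T y + y^T x <= eps * x^T x + (1/eps) * y^T y."""
    lhs = 2.0 * float(x @ y)
    rhs = eps * float(x @ x) + float(y @ y) / eps
    return lhs <= rhs + 1e-12  # small tolerance for floating point

x = np.array([1.0, -2.0, 0.5])
y = np.array([0.3, 1.1, -0.7])
```

The inequality holds for every ε > 0, which is what makes ε a free tuning parameter in the stability proofs.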

Lemma 2

[14] Given constant matrices Ω1, Ω2, Ω3, where Ω1 = Ω1^T and 0 < Ω2 = Ω2^T, then Ω1 + Ω3^T Ω2^{−1} Ω3 < 0 if and only if

[ Ω1    Ω3^T ]
[ Ω3    −Ω2  ] < 0,   or equivalently   [ −Ω2   Ω3 ]
                                        [ Ω3^T  Ω1 ] < 0.
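This Schur-complement equivalence can be verified on a concrete instance (a sketch with arbitrarily chosen matrices, not from the Letter):

```python
import numpy as np

def is_neg_def(M):
    """True if the symmetric matrix M is negative definite."""
    return np.linalg.eigvalsh(M).max() < 0

n = 2
O1 = -2.0 * np.eye(n)   # Omega_1 = Omega_1^T
O2 = np.eye(n)          # Omega_2 = Omega_2^T > 0
O3 = 0.5 * np.eye(n)

# Small condition: Omega_1 + Omega_3^T Omega_2^{-1} Omega_3 < 0
small = O1 + O3.T @ np.linalg.inv(O2) @ O3

# Equivalent block condition: [[Omega_1, Omega_3^T], [Omega_3, -Omega_2]] < 0
big = np.block([[O1, O3.T], [O3, -O2]])
```

Here small = −1.75·I and both tests agree, as the lemma guarantees: the block form lets the nonlinear term Ω3^T Ω2^{−1} Ω3 be absorbed into a single LMI.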

The main results of this Letter are given as follows, showing that the network dynamics of (5) is

A numerical example

We present a simple example here in order to illustrate the usefulness of our main results. Our aim is to examine the global exponential stability of a given delayed neural network with Markovian jumping parameters.

Consider a two-neuron delayed neural network (5) with two modes. The network parameters are given as follows:

A1 = [1.6  0 ;  0  1.8],    A2 = [1.2  0 ;  0  1.5],
G0 = [0.2  0 ;  0  0.3],    G1 = [0.4  0 ;  0  0.6],
W01 = [1.2  1.5 ;  1.7  1.2],    W02 = [0.6  0.1 ;  0.1  0.2],
W11 = [1.1  0.5 ;  0.5  0.8],    W12 = [0.8  0.2 ;  0.2  0.3],
Γ = [−7  7 ;  6  −6].
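For completeness, the example parameters can be written out and sanity-checked as below (a sketch; the entries are transcribed row-wise from the Letter, and the diagonal signs of Γ are inferred from the generator property that each row of a Markov transition-rate matrix sums to zero):

```python
import numpy as np

# Two-mode example parameters, transcribed row-wise.
A1 = np.diag([1.6, 1.8])
A2 = np.diag([1.2, 1.5])
G0 = np.diag([0.2, 0.3])
G1 = np.diag([0.4, 0.6])
W01 = np.array([[1.2, 1.5], [1.7, 1.2]])
W02 = np.array([[0.6, 0.1], [0.1, 0.2]])
W11 = np.array([[1.1, 0.5], [0.5, 0.8]])
W12 = np.array([[0.8, 0.2], [0.2, 0.3]])

# Transition-rate (generator) matrix of the two-state Markov chain.
Gamma = np.array([[-7.0, 7.0],
                  [6.0, -6.0]])
```

A valid generator has nonnegative off-diagonal rates and zero row sums, which the assertions below confirm for Γ.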

By using the Matlab LMI toolbox

Conclusions

In this Letter, we have dealt with the problem of global exponential stability analysis for a class of general recurrent neural networks with both time delays and Markovian jumping parameters. We have removed the traditional monotonicity and smoothness assumptions on the activation function. A linear matrix inequality (LMI) approach has been developed to solve the problem addressed. The conditions for the global exponential stability have been derived in terms of the positive definite

References (19)

  • J. Cao et al., Chaos Solitons Fractals (2005)
  • H. Huang et al., Appl. Math. Comput. (2003)
  • J. Liang et al., Phys. Lett. A (2003)
  • Z. Wang et al., Phys. Lett. A (2005)
  • J.L. Elman, Cognitive Sci. (1990)
  • M. Kovacic, Eur. J. Oper. Res. (1993)
  • Y. Bengio, P. Frasconi, P. Simard, The problem of learning long-term dependencies in recurrent networks, in: ...
  • D. Bolle et al., J. Phys. A (1992)
  • M.P. Casey, Neural Comput. (1996)
There are more references available in the full text version of this article.


This work was supported in part by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Nuffield Foundation of the UK under Grant NAL/00630/G, and the Alexander von Humboldt Foundation of Germany.
