Elsevier

Automatica

Volume 38, Issue 2, February 2002, Pages 343-349

Brief Paper
The H2-control for jump linear systems: cluster observations of the Markov state

https://doi.org/10.1016/S0005-1098(01)00210-2

Abstract

The H2-norm control problem for discrete-time Markov jump linear systems is addressed in this paper when part or all of the Markov states are not accessible to the controller. The non-observed Markov states are grouped into a number of clusters of observations; the case with a single cluster retrieves the situation in which no Markov state is observed. The control action is given in linear feedback form, invariant on each cluster, and this restricted-complexity setting is adopted with computable solutions in mind. We explore a recent result by de Oliveira, Bernussou, and Geromel (Systems Control Lett. 37 (1999) 261) involving an LMI characterization to establish an H2 solution that is stabilizing in the mean-square sense. The novelty of the method is that it can handle, in LMI form, situations ranging from no Markov state observation to complete state observation. In addition, when the state observation is complete, the optimal H2-norm solution is retrieved.

Introduction

Markov jump linear systems (MJLS) form a class of processes whose structure, or mode, changes according to an underlying Markov chain. Between jumps of the Markov state, the system evolves according to the linear description associated with the current Markov state. The ability to model systems subject to abrupt variations in their structure, due to failures or repairs, sudden changes in the environment, or changes in exogenous economic variables, among others, has attracted interest. The theory of stability, optimal LQ control and H∞ control for MJLS is fairly complete and can be found in several papers spanning more than a decade.

There are several works dealing with control under complete state observation (Markov and linear states) for MJLS, but fewer involving partial state observation. The information structure available to the controller is generally intrinsic to the problem the MJLS is meant to model. For instance, if the changes in the Markov chain are associated with failures of components of non-critical significance, or if they indicate transitions between different operating points of a nonlinear plant, or if some changes are simply difficult to measure, it is quite possible that the associated Markov states are not accessible to the controller. Although this scenario seems, from the point of view of applications, more natural than complete state observation, partial state observation imposes serious limitations on the analytical front. This explains in part the considerable development and flourishing of the literature on complete state observation: in many situations a parallel can be drawn with the large body of linear system theory, a parallel that is weaker under partial state observation. Some bridges can be drawn when the restriction concerns the linear state observations, e.g. see de Farias, Geromel, do Val, and Costa (2000) or Ji and Chizeck (1992). However, for partial observation of the Markov state, no simple parallel to linear system theory is known.

In principle, the analysis can only rely on general stochastic methods, which are able to provide solutions, but rarely in explicit or computable form. One approach to circumvent this difficulty is to adopt linear feedback controls in a restricted-complexity setting; along this line we can mention Caines and Zhang (1995), Costa, do Val, and Geromel (1997), do Val and Basar (1999) and Pan and Bar-Shalom (1996). In Caines and Zhang (1995) and Pan and Bar-Shalom (1996) the stability of the closed-loop system is studied for the problem with no observation of the Markov state, but the analyses do not yield a synthesis of stabilizing controls. In Costa et al. (1997) a solution is attempted but cannot be posed in LMI form, and in do Val and Basar (1999) the receding horizon control problem is studied with no observation of the Markov state or with observations restricted to clusters of states; the optimal LQ control sequence is determined, but stability cannot be assured.

In this paper, we deal with the problem of H2-norm control of MJLS with no observation of the Markov state or with observations restricted to clusters of the Markov state, to be defined more precisely in Section 3. We adopt the point of view of restricted-complexity synthesis by imposing control actions in linear feedback form. We build on an LMI characterization presented in de Oliveira, Bernussou, and Geromel (1999), originally developed for deterministic robust control problems. We adapt it to MJLS to verify stability in the mean-square sense, and to formulate the H2-norm problem as a convex problem. The result shows that any solution of the convex problem is stabilizing in the mean-square sense and, in addition, that the optimal H2 solution is retrieved whenever complete Markov state observation is allowed. This characterization is presented in Theorem 6; the case with no observation of the Markov state is also addressed. Finally, to expose the technique, two examples are presented in Section 4.

Section snippets

Basic formulation and concepts

Let X={1,…,N} be an index set, and consider the collections of real matrices: A=(A1,…,AN), dim(Ai)=n×n, E=(E1,…,EN), dim(Ei)=n×m and C=(C1,…,CN), dim(Ci)=p×n, i=1,…,N. Let us consider a discrete-time homogeneous Markov chain, Θ={θk;k⩾0} having X as state space and P=[pij], i,j=1,…,N as the transition probability matrix. The probability distribution of the Markov chain at the initial time is given by μ=(μ1,…,μN) in such a way that P(θ0=i)=μi.
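The chain Θ defined above can be sampled directly from P and μ. The following is a minimal NumPy sketch; the function name and the two-state data are illustrative, not taken from the paper:

```python
import numpy as np

def simulate_chain(P, mu, horizon, rng):
    """Sample a path of a homogeneous Markov chain with transition
    matrix P (rows summing to 1) and initial distribution mu."""
    N = len(mu)
    theta = [rng.choice(N, p=mu)]
    for _ in range(horizon - 1):
        # next state is drawn from the row of P indexed by the current state
        theta.append(rng.choice(N, p=P[theta[-1]]))
    return np.array(theta)

# Hypothetical two-state chain
P = np.array([[0.9, 0.1],
              [0.8, 0.2]])
mu = np.array([0.5, 0.5])
path = simulate_chain(P, mu, 1000, np.random.default_rng(0))
```

With these numbers the chain spends most of its time in state 0, since both rows place high probability on it.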

Consider a fundamental probability space (Ω,F,{Fk},P).

The H2-control problem

The main objective of this paper is to study controlled MJLS with partial observations of the Markov state. Let B=(B1,…,BN), dim(Bi)=n×r and D=(D1,…,DN), dim(Di)=q×m for each i=1,…,N, be some associated set of matrices, and consider the stochastic system

G: xk+1 = A(θk)xk + E(θk)wk + B(θk)uk,
   zk = C(θk)xk + D(θk)uk,  k⩾0,

with w∈ℓ2m, E(|x0|2)<∞, θ0∼μ, where u={uk; k⩾0} is the control action. We assume that only part of the states of the Markov chain is accessible, and the control action can rely only on information on
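Once a feedback u has been fixed, the closed-loop dynamics in each mode become a single matrix per mode, and mean-square stability can be checked numerically with the standard second-moment criterion for MJLS (due to Costa and Fragoso, not stated in this excerpt): the system is MS-stable iff the operator T(X)_j = Σ_i p_ij A_i X_i A_i' has spectral radius below one. A minimal NumPy sketch, with an illustrative helper name and scalar data:

```python
import numpy as np

def ms_stable(A_list, P):
    """Mean-square stability test for x_{k+1} = A(theta_k) x_k.
    Builds the matrix representation of the second-moment operator
    T(X)_j = sum_i p_ij A_i X_i A_i'; its (j, i) block is
    p_ij * kron(A_i, A_i). MS-stable iff its spectral radius < 1."""
    N = len(A_list)
    n = A_list[0].shape[0]
    M = np.zeros((N * n * n, N * n * n))
    for j in range(N):
        for i in range(N):
            M[j*n*n:(j+1)*n*n, i*n*n:(i+1)*n*n] = \
                P[i, j] * np.kron(A_list[i], A_list[i])
    return bool(max(abs(np.linalg.eigvals(M))) < 1.0)

# Illustrative scalar example: an evenly mixing two-state chain
P = np.array([[0.5, 0.5], [0.5, 0.5]])
stable = ms_stable([np.array([[0.5]]), np.array([[0.5]])], P)    # True
unstable = ms_stable([np.array([[2.0]]), np.array([[2.0]])], P)  # False
```

For a cluster-invariant feedback as in the paper, one would pass the closed-loop matrices A_i + B_i K_{c(i)}, with K constant on each cluster c(i).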

Examples

Example 1

It is drawn from Ji and Chizeck (1990, Example 6.4). The system data are

A1 = [2 2; 3 1], B1 = [2; 1], E1 = [0.5 0; 0 0.4], C1 = [1 −1; 1 1; 0 0], D1 = [0; 0; 1],
A2 = [1 0; 0.5 1], B2 = [0; 0], E2 = [1 0; 0 0.8], C2 = [1 0; 0 1; 0 0], D2 = [0; 0; 1],
μ = [0 1], P = [0.9 0.1; 0.8 0.2].

The second mode represents an actuator failure. The results obtained for this example are presented in Table 1. The guaranteed cost solution refers to the deterministic worst-case synthesis as in Geromel, Peres, and Souza (1993). The examples in this section were implemented using the LMIsol package (Oliveira, Faria,
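As a sanity check on this data, assuming a row-wise reading of the printed matrices, neither mode is asymptotically stable on its own, which is what makes a jump-aware synthesis necessary; a short NumPy sketch:

```python
import numpy as np

# Example 1 data, assuming a row-wise reading of the printed matrices
# (verify against Ji & Chizeck, 1990, Example 6.4).
A1 = np.array([[2.0, 2.0], [3.0, 1.0]])
A2 = np.array([[1.0, 0.0], [0.5, 1.0]])
B1 = np.array([[2.0], [1.0]])
B2 = np.array([[0.0], [0.0]])           # mode 2: the actuator has failed
P  = np.array([[0.9, 0.1], [0.8, 0.2]])  # rows sum to 1

rho1 = max(abs(np.linalg.eigvals(A1)))  # 4.0: mode 1 alone is unstable
rho2 = max(abs(np.linalg.eigvals(A2)))  # 1.0: mode 2 is not asymptotically stable
```

Since B2 = 0, no control authority is available in the failure mode; stabilization must exploit the chain's tendency to return to mode 1.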

Conclusions

In this work a solution to the H2-control problem for MJLS with incomplete observation of the Markov state is developed in LMI form. The cases with no observation of the Markov state, or with observation by clusters, are dealt with in the restricted-complexity setting, involving linear state-feedback controls in an appropriate class.

The solution retrieves the optimal H2 solution when complete Markov state observation is allowed, and it provides controllers that are stabilizing in the mean-square sense.



João B.R. do Val was born in São Paulo State, Brazil, in 1955. He received the B.S. and M.S. degrees in Electrical Engineering from the University of Campinas (UNICAMP), Campinas, Brazil, in 1977 and 1981, respectively, and the Ph.D. degree in Electrical Engineering from the Imperial College of Science and Technology, London, in 1985. He also received the Diploma of Imperial College in 1985. Since 1978 he has held a position at the Faculty of Electrical Engineering of UNICAMP. During 1996 and 1997 he was a Visiting Scholar at the Decision and Control Laboratory (Coordinated Science Laboratory) at the University of Illinois at Urbana-Champaign. Since 2000 he has been the editor of the journal "Controle & Automação", published by the Brazilian Society of Automatica. His research interests include stochastic systems and control and jump processes, with applications in operations research and communication problems.

José C. Geromel was born in Itatiba, Brazil, in 1952. He received the B.S. and M.S. degrees in Electrical Engineering from the University of Campinas (UNICAMP), Campinas, Brazil, in 1975 and 1976, respectively. He received the Docteur d'Etat degree from the University Paul Sabatier, Toulouse, France, in 1979. In 1975, he joined the School of Electrical Engineering, UNICAMP, where he is presently Professor of Control Theory. In 1987, he was a Visiting Professor at the Politecnico di Milano, Milan, Italy. His current research interests include convex programming theory, robust control systems design, robust filtering, and joint location and control systems design. Prof. Geromel is a member of the Editorial Board of Studies in Informatics and Control and Subject Editor of the International Journal of Robust and Nonlinear Control. In 1994, he was awarded the Zeferino Vaz Award for his teaching and research activities at UNICAMP. Since 1991, he has been a Fellow of CNPq, the Brazilian Council for Research and Development. In 1999, he was named Chevalier dans l'Ordre des Palmes Académiques by the Minister of National Education of France. Since 1998, he has been the Dean for Graduate Studies at UNICAMP and a member of the Brazilian Academy of Science. He is co-author of the book Control Theory and Design: An RH2 and RH∞ Viewpoint (with P. Colaneri and A. Locatelli, New York: Academic Press, 1997).

Alim P.C. Gonçalves was an undergraduate student at the School of Electrical and Computing Engineering, UNICAMP from 1995 to 2000, when he received his B.S. degree in Electrical Engineering. He was awarded a scholarship in an undergraduate program, to spend one year at the University of Stuttgart, Germany. He also received a scholarship during his studies on Jump Markov Systems. Presently, he holds a position at Lucent Technologies in Brazil.

This paper was not presented at any IFAC meeting. This paper was recommended for publication in revised form by Associate Editor R. Srikant under the direction of Editor Tamer Basar. Research supported in part by CNPq, Grant 300721/86-2(RN) and the PRONEX Grant 015/98 ‘Control of Dynamical Systems’.
