
About this book

The importance of introducing mathematical models that take into account possible sudden changes in the dynamical behavior of high-integrity or safety-critical systems is nowadays widely recognized. Such systems can be found in aircraft control, nuclear power stations, robotic manipulator systems, integrated communication networks, and large-scale flexible structures for space stations, and they are inherently vulnerable to abrupt changes in their structure caused by component or interconnection failures. In this regard, a particularly interesting class of models is the so-called Markov jump linear systems (MJLS), which have been used in numerous applications including robotics, economics, and wireless communication. Combining probability and operator theory, the present volume provides a unified and rigorous treatment of recent results in the control theory of continuous-time MJLS. This unique approach is of great interest to experts working in the field of linear systems with Markovian jump parameters or in stochastic control. The volume focuses on one of the few cases of stochastic control problems with an actual explicit solution and offers material well suited to coursework, introducing students to an interesting and active research area.

The book is addressed to researchers working in control and signal processing engineering. Prerequisites include a solid background in classical linear control theory, basic familiarity with continuous-time Markov chains and probability theory, and some elementary knowledge of operator theory.

Table of contents

Frontmatter

Chapter 1. Introduction

Recent advances in technology have led to dynamical systems of increasing complexity, which in turn demand ever more efficient and reliable control systems. It has been widely recognized that requirements of specific behavior and stringent performance call for possible failures to be taken into account in modern control design. Therefore, whether for safety or for efficiency reasons, a system failure is a critical issue that has to be considered in the design of a controller. In view of this, dynamical systems subject to abrupt changes have been a theme of increasing investigation in recent years, and a variety of approaches for analyzing this class of systems has emerged over the last decades. A particularly interesting class of models within this framework is the so-called Markov jump linear systems (MJLS), which are the subject matter of this book. The goal of this first chapter is to highlight, in a rather informal way, some of the main characteristics of MJLS through illustrative examples of possible applications of this class of systems.
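To fix ideas, a continuous-time MJLS is commonly written in the state-space form below; this is only a generic sketch, and the precise notation and standing assumptions are introduced in the following chapters:
\[
\dot{x}(t) = A_{\theta(t)}\,x(t) + B_{\theta(t)}\,u(t), \qquad \theta(t)\in\{1,\dots,N\},
\]
where \(\theta(t)\) is a continuous-time Markov chain with finite state space and transition rate matrix \(\Pi=[\lambda_{ij}]\), so that the system matrices \((A_i,B_i)\) switch abruptly whenever the chain jumps from one mode of operation to another.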
Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov

Chapter 2. A Few Tools and Notations

This chapter consists primarily of some background material, with the selection of topics being dictated by our later needs. We introduce some notation and definitions that will be used throughout the book, recall some definitions and properties of semigroup operators and infinitesimal generators, and present some fundamental results on the existence and uniqueness of solutions of a differential equation. We also recall some basic definitions and results on continuous-time Markov chains with finite state space and introduce the spaces that are appropriate for our approach. Finally, we show some important auxiliary results regarding the stability of some operators and recall some basic facts regarding linear matrix inequalities.
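Among the basic facts recalled here is the infinitesimal characterization of a continuous-time Markov chain with finite state space; in one standard notation (which may differ slightly from the book's),
\[
\mathbb{P}\big(\theta(t+h)=j \mid \theta(t)=i\big)=
\begin{cases}
\lambda_{ij}\,h+o(h), & j\neq i,\\
1+\lambda_{ii}\,h+o(h), & j=i,
\end{cases}
\qquad \lambda_{ii}=-\sum_{j\neq i}\lambda_{ij},
\]
so that the transition rate matrix \(\Pi=[\lambda_{ij}]\) plays the role of the infinitesimal generator of the chain.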
Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov

Chapter 3. Mean-Square Stability

One of the features that distinguish MJLS from linear systems is the fact that stability (instability) of each mode of operation does not guarantee the stability (instability) of the system as a whole. This chapter provides a broad account of mean-square stability (MSS) for continuous-time MJLS. We follow an operator-theoretic approach to this subject, tracing as much as possible a parallel with the stability theory of continuous-time linear systems. In this way the MSS of MJLS is studied via the spectrum of an augmented matrix or via the existence of a positive-definite solution of a set of coupled Lyapunov equations. Besides the homogeneous case, we consider two scenarios regarding additive disturbances: one in which the disturbances are characterized via a Wiener process, and one in which they are given by an arbitrary function in \(L^{m}_{2}\). Finally, we treat the concepts of mean-square stabilizability and detectability and make a brief incursion into the case with partial information.
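As a sketch of the Lyapunov-type characterization mentioned above (stated here in one common notation; the chapter gives the precise standing assumptions), the homogeneous system \(\dot{x}(t)=A_{\theta(t)}x(t)\) is mean-square stable if and only if, for any given matrices \(Q_i>0\), there exist \(P_i>0\) solving the coupled Lyapunov equations
\[
A_i^{\mathsf T}P_i + P_i A_i + \sum_{j=1}^{N}\lambda_{ij}P_j = -Q_i, \qquad i=1,\dots,N.
\]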
Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov

Chapter 4. Quadratic Optimal Control with Complete Observations

This chapter focuses on one of the few cases of stochastic control problems with an actual explicit solution. It deals with the quadratic optimal control problem for continuous-time MJLS in the usual finite- and infinite-horizon frameworks. It is assumed that both the state variable x(t) and the jump variable θ(t) are available to the controller. The setup adopted in this chapter is based on Dynkin's formula for the Markov process formed by the state x(t) and the Markov chain θ(t). Under this approach, the class of admissible controllers consists of those in feedback form (on x(t) and θ(t)) satisfying a Lipschitz condition. It is shown that the solution of these problems relies, in part, on the study of a finite set of coupled differential and algebraic Riccati equations (CDRE and CARE, respectively).
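For the infinite-horizon problem, and with generic weighting matrices \(Q_i\ge 0\), \(R_i>0\) (a sketch in one common notation; the book's formulation may arrange the weights differently), the CARE takes the form
\[
A_i^{\mathsf T}P_i + P_i A_i + Q_i - P_i B_i R_i^{-1}B_i^{\mathsf T}P_i + \sum_{j=1}^{N}\lambda_{ij}P_j = 0, \qquad i=1,\dots,N,
\]
with the optimal feedback given by \(u(t) = -R_{\theta(t)}^{-1}B_{\theta(t)}^{\mathsf T}P_{\theta(t)}\,x(t)\); the CDRE is the corresponding differential version arising in the finite-horizon case.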
Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov

Chapter 5. \(H_2\) Optimal Control with Complete Observations

The purpose of this chapter is to revisit the infinite-horizon quadratic optimal control of continuous-time MJLS, studied in Chap. 4, from another point of view, usually known in the linear systems literature as \(H_2\) control. We assume here that both the state variable x(t) and the jump parameter θ(t) are available to the controller, which is why this is called the case with complete observations. The transition rate matrix Π may be subject to polytopic uncertainties. The armory of concepts and mathematical techniques includes the definitions of robust and quadratic mean-square stabilizability, the controllability and observability Gramians, and the \(H_2\)-norm for MJLS. The \(H_2\) control problem, both with and without uncertainties on Π, is solved using linear matrix inequality optimization tools. For the case in which there are no uncertainties, it is shown that the \(H_2\) formulation and the infinite-horizon quadratic cost formulation analyzed in Chap. 4 coincide.
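As an illustration of how the Gramians enter the picture (a sketch only; the exact definition of the \(H_2\)-norm and the weighting over the modes follow the convention adopted in the chapter), the observability Gramians \(S_i\) of a mean-square stable closed-loop system solve the coupled equations
\[
A_i^{\mathsf T}S_i + S_i A_i + \sum_{j=1}^{N}\lambda_{ij}S_j + C_i^{\mathsf T}C_i = 0, \qquad i=1,\dots,N,
\]
and the squared \(H_2\)-norm is then recovered from a trace formula of the type \(\|G\|_2^2=\sum_{i=1}^{N}\nu_i\,\mathrm{tr}\big(J_i^{\mathsf T}S_i J_i\big)\), where \(J_i\) denotes the disturbance input matrix of mode \(i\) and \(\nu_i\) a weighting given by the distribution of the Markov chain.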
Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov

Chapter 6. Quadratic and \(H_2\) Optimal Control with Partial Observations

This chapter deals with the finite-horizon quadratic optimal control problem and the \(H_2\) control problem for continuous-time MJLS when the state variable x(t) is not directly accessible to the controller. It is assumed that only an output y(t) and the jump process θ(t) are available. The main goal is to derive the so-called separation principle for this problem. The admissible controllers are those in the class of Markov jump dynamic output-feedback, observer-based controllers. Tracing a parallel with classical LQG theory, it is shown that the optimal control is explicitly obtained from two sets of coupled differential (for the finite-horizon case) and algebraic (for the \(H_2\) case) Riccati equations. One set is associated with the optimal control problem when the state variable is available, as analyzed in Chaps. 4 and 5, and the other is associated with the optimal filtering problem.
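Schematically, and in one common notation (the gain symbols below are illustrative, not the book's), the observer-based controllers considered here have the structure
\[
\dot{\hat{x}}(t) = A_{\theta(t)}\hat{x}(t) + B_{\theta(t)}u(t) + M_{\theta(t)}\big(y(t)-L_{\theta(t)}\hat{x}(t)\big), \qquad u(t)=K_{\theta(t)}\hat{x}(t),
\]
where \(L_i\) is the output matrix of mode \(i\); in the spirit of the separation principle, the control gains \(K_i\) come from the control Riccati equations of Chaps. 4 and 5, while the filter gains \(M_i\) come from the filtering Riccati equations, and the two sets can be computed independently of each other.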
Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov

Chapter 7. Best Linear Filter with Unknown (x(t),θ(t))

It is a well-known fact that the optimal nonlinear filter for continuous-time MJLS, in the general case in which both the state variable and the jump parameter are unknown, cannot be given in terms of a closed finite system of stochastic differential equations (it is not a finite filter). The aim of this chapter is to derive the best linear mean-square estimator for continuous-time MJLS in the scenario described above, i.e., assuming that only an output is available. The idea is to derive a filter that retains the desirable properties of the Kalman filter: a recursive scheme, suitable for computer implementation, that allows some offline computation to alleviate the computational burden. The filter is derived as a function of the error covariance matrix, whose dynamics is governed by two matrix differential equations: one associated with the second moment of the state variable and the other associated with the second moment of the estimator. Both the finite- and infinite-horizon cases are considered.
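Schematically, the estimator obtained here is a linear, innovation-driven recursion of the Kalman type; the matrices \(F(t)\), \(H(t)\) and the gain \(K(t)\) below are placeholders only, and their precise expressions, in terms of the second-moment matrices mentioned above, are derived in the chapter:
\[
d\hat{x}(t) = F(t)\,\hat{x}(t)\,dt + K(t)\big(dy(t) - H(t)\,\hat{x}(t)\,dt\big).
\]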
Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov

Chapter 8. \(H_\infty\) Control

This chapter is devoted to the \(H_\infty\) control of continuous-time MJLS in the infinite-horizon setting. In order to study this problem, we start by deriving a bounded real lemma for the class of systems at hand. This allows us to characterize, in terms of the feasibility of linear matrix inequalities, whether a system is mean-square stable with a prescribed degree of performance. We then proceed to study the disturbance attenuation problem, which consists of determining whether a given system can be stabilized and driven to a desired \(H_\infty\) performance level by the action of control, and how such a controller can be designed. Two cases are considered: the static state feedback and the dynamic output feedback control problems. The main results are presented through explicit formulae, with the corresponding design algorithms for obtaining the controllers of interest.
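In its simplest form (no direct feedthrough from the disturbance to the controlled output, and in one common notation; the chapter states the general version), the bounded real lemma asserts that the system is mean-square stable with \(H_\infty\) performance level \(\gamma>0\) whenever there exist \(P_i>0\) satisfying
\[
A_i^{\mathsf T}P_i + P_i A_i + \sum_{j=1}^{N}\lambda_{ij}P_j + C_i^{\mathsf T}C_i + \gamma^{-2}P_i J_i J_i^{\mathsf T}P_i < 0, \qquad i=1,\dots,N,
\]
which, via a Schur complement, can be rewritten as a set of linear matrix inequalities in the variables \(P_i\); here \(J_i\) denotes the disturbance input matrix of mode \(i\).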
Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov

Chapter 9. Design Techniques

In this chapter we present some design techniques, expressed as linear matrix inequality optimization problems, for continuous-time MJLS. The linear matrix inequality paradigm offers a flexible and efficient framework for computational applications, for which many powerful numerical packages exist. The chapter begins with a study of the stability radii of MJLS, which includes an algorithm and a spectral approach for obtaining upper bounds in the real and complex cases, plus a connection between stability radii and uncertainties in the transition rate matrix of the Markov jump process. Next, we proceed to the design of robust controllers satisfying a suboptimal \(H_2\) criterion, which includes a study of robust mixed \(H_2/H_\infty\) controllers. At the end of the chapter, a linear matrix inequality solution is presented for the stationary robust linear filtering problem.
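As a minimal illustration of the linear matrix inequality paradigm referred to here (not one of the design procedures of this chapter), the sketch below checks mean-square stability of a two-mode MJLS by testing the feasibility of the coupled Lyapunov LMIs of Chap. 3 with CVXPY; all system data are made up for the example.

    # Illustrative sketch (not from the book): mean-square stability of a two-mode
    # continuous-time MJLS, checked via feasibility of the coupled Lyapunov LMIs
    #   A_i' P_i + P_i A_i + sum_j lambda_ij P_j < 0,  with  P_i > 0.
    import numpy as np
    import cvxpy as cp

    # Hypothetical system data: mode dynamics and transition rate matrix (rows sum to 0).
    A = [np.array([[0.0, 1.0], [-2.0, -1.0]]),
         np.array([[0.0, 1.0], [-1.0, -3.0]])]
    Lam = np.array([[-1.0, 1.0],
                    [2.0, -2.0]])
    N, n = len(A), A[0].shape[0]

    P = [cp.Variable((n, n), symmetric=True) for _ in range(N)]
    eps = 1e-6
    constraints = []
    for i in range(N):
        constraints.append(P[i] >> eps * np.eye(n))          # P_i > 0
        lyap = (A[i].T @ P[i] + P[i] @ A[i]
                + sum(Lam[i, j] * P[j] for j in range(N)))   # coupled Lyapunov term
        constraints.append(lyap << -eps * np.eye(n))          # strict negativity

    problem = cp.Problem(cp.Minimize(0), constraints)         # pure feasibility problem
    problem.solve()
    print("MSS certified" if problem.status == cp.OPTIMAL else "LMIs infeasible")

The same feasibility setup, augmented with suitable objective functions and additional variables, underlies the \(H_2\), mixed \(H_2/H_\infty\), and filtering designs described in this chapter.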
Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov

Chapter 10. Some Numerical Examples

In this chapter some numerical applications of continuous-time MJLS are treated by means of the theoretical results introduced earlier in the book, in particular the design techniques presented for \(H_2\) control, robust \(H_2\) guaranteed cost control, mixed \(H_2/H_\infty\) control, and stationary filtering. The first problem studied in this chapter is the control of Samuelson's multiplier–accelerator macroeconomic model in different scenarios. We provide a numerical comparison between the standard \(H_2\) approach and the robust control methods previously introduced in the book. Further on, we provide estimates for stability radii and compare the numerical results yielded by different robust control approaches in a study of the coupling between electrical machines. The chapter proceeds with a study of the robust control of an underactuated robotic manipulator, as well as a numerical example regarding the optimal stationary filtering techniques derived in the book.
Oswaldo L.V. Costa, Marcelo D. Fragoso, Marcos G. Todorov

Backmatter
