
About this book

“I believe that the authors have written a first-class book which can be used for a second or third year graduate level course in the subject... Researchers working in the area will certainly use the book as a standard reference....”

SIAM Review (Review of the First Edition)

“This book is devoted to one of the fastest developing fields in modern control theory---the so-called 'H-infinity optimal control theory'... In the authors' opinion 'the theory is now at a stage where it can easily be incorporated into a second-level graduate course in a control curriculum'. It seems that this book justifies this claim.”

Mathematical Reviews (Review of the First Edition)

“This book is a second edition of this very well-known text on H-infinity theory...This topic is central to modern control and hence this definitive book is highly recommended to anyone who wishes to catch up with this important theoretical development in applied mathematics and control.”

Short Book Reviews (Review of the Second Edition)

Table of contents

Frontmatter

Chapter 1. A General Introduction to Minimax (H∞-Optimal) Designs

Abstract
A fundamental problem of theoretical and practical interest, lying at the heart of control theory, is the design of controllers that yield acceptable performance not for a single plant under known inputs, but rather for a family of plants under various types of inputs and disturbances. The importance of this problem has long been recognized, and over the years various scientific approaches have been developed and tested [55]. A common initial phase of all these approaches has been the formulation of a mathematically well-defined problem, usually in the form of the optimization of a performance index, which is then followed either by the use of available tools or by the development of the requisite new mathematical tools for the solution of these problems. Two of these approaches, the sensitivity approach and linear-quadratic-Gaussian (LQG) design, dominated the field in the 1960s and early 1970s, with the former allowing small perturbations around an adopted nominal model ([49], [67]) and the latter ascribing some statistical description (specifically, Gaussian statistics) to the disturbances or unknown inputs [3]. During this period, the role of game theory in the design of robust (minimax) controllers was also recognized ([56], [157], [158], [129]), with the terminology “minimax controller” adopted from the statistical decision theory of the 1950s [130]. Here the objective is to obtain a design that minimizes a given performance index under the worst possible disturbances or parameter variations (which maximize the same performance index). Since the desired controller must have a dynamic structure, this game-theoretic approach naturally requires the setting of dynamic (or differential) games; but with differential game theory (particularly with regard to information structures) being in its infancy in the 1960s, these initial attempts did not lead to sufficiently general constructive methods for the design of robust controllers.
Tamer Başar, Pierre Bernhard

Chapter 2. Basic Elements of Static and Dynamic Games

Abstract
Since our approach in this book is based on (dynamic) game theory, it will be useful to present at the outset some of the basic notions of zero-sum game theory and some general results on the existence and characterization of saddle points. We first discuss, in the next section, static zero-sum games, that is, games where the actions of the players are selected independently of each other; in this case we also say that the players’ strategies are constants. We then discuss in Sections 2.2 and 2.3 some general properties of dynamic games (with possibly nonlinear dynamics), first in the discrete time and then in the continuous time, with the latter class of games known also as differential games. In both cases we also introduce the important notions of representation of a strategy, strong time consistency, and noise insensitivity.
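The saddle-point notion mentioned above can be stated compactly. Writing $J(u,w)$ for a generic zero-sum cost, minimized by the first player's choice $u$ and maximized by the second player's choice $w$ (this notation is illustrative, not taken from the abstract), a pair $(u^{*}, w^{*})$ is a saddle point if

```latex
J(u^{*}, w) \;\le\; J(u^{*}, w^{*}) \;\le\; J(u, w^{*})
\qquad \text{for all admissible } u,\, w .
```

The common quantity $J(u^{*}, w^{*})$ is then the value of the game.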
Tamer Başar, Pierre Bernhard

Chapter 3. The Discrete-Time Minimax Design Problem with Perfect-State Measurements

Abstract
In this chapter we study the discrete-time minimax controller design problem, as formulated by (1.2a)–(1.3b), when the controller is allowed perfect access to the system state, either without delay or with a one-step delay.
Tamer Başar, Pierre Bernhard

Chapter 4. Continuous-Time Systems with Perfect-State Measurements

Abstract
In this chapter we develop the continuous-time counterparts of the results presented in Chapter 3. The disturbance attenuation problem will be the one formulated in Chapter 1, through (1.5a)–(1.7), under only perfect-state measurements. This will include the closed-loop perfect-state (CLPS), sampled-data perfect-state (SDPS), and delayed perfect-state (DPS) information patterns, which were introduced in Chapter 2 (Section 2.3).
Tamer Başar, Pierre Bernhard

Chapter 5. The Continuous-Time Problem with Imperfect-State Measurements

Abstract
We now turn to the class of continuous-time problems originally formulated in Section 1.2, where the state variable is no longer available to the controller, but only a disturbance-corrupted linear output is. The system is therefore described by the following equations
$$\dot x(t) = A(t)x(t) + B(t)u(t) + D(t)w(t), \qquad x(0) = x_0$$
(5.1)
$$y(t) = C(t)x(t) + E(t)w(t).$$
(5.2)
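As a purely illustrative sketch (the constant matrices, inputs, and step size below are assumptions, not values from the text), the state equation (5.1) can be integrated numerically with a forward-Euler step when $u$ and $w$ are held piecewise constant:

```python
import numpy as np

def euler_state(A, B, D, x0, u, w, h, steps):
    """Forward-Euler integration of x'(t) = A x + B u + D w, x(0) = x0,
    with constant A, B, D and piecewise-constant inputs u, w (hypothetical
    example data; step size h is an assumption)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + h * (A @ x + B @ u + D @ w)   # Euler step of (5.1)
    return x
```

With $A=0$, $B=1$, $D=0$, and a constant input $u \equiv 1$, ten steps of size $0.1$ drive a zero initial state to approximately $1$, as expected from integrating a constant.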
Tamer Başar, Pierre Bernhard

Chapter 6. The Discrete-Time Problem with Imperfect-State Measurements

Abstract
We study in this chapter the discrete-time counterpart of the problem of Chapter 5; that is, the problem of Chapter 3, but with the measured output corrupted by the disturbance:
$$x_{k + 1} = A_k x_k + B_k u_k + D_k w_k ,$$
(6.1)
$$y_k = C_k x_k + E_k w_k .$$
(6.2)
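For illustration only, the recursion (6.1)–(6.2) is straightforward to simulate; the matrices and input sequences below are hypothetical placeholders, not values from the book:

```python
import numpy as np

def simulate(A, B, C, D, E, x0, u_seq, w_seq):
    """Propagate x_{k+1} = A x_k + B u_k + D w_k and collect the
    disturbance-corrupted outputs y_k = C x_k + E w_k (time-invariant
    matrices for simplicity; the general case has A_k, B_k, ...)."""
    x = np.asarray(x0, dtype=float)
    xs, ys = [x], []
    for u, w in zip(u_seq, w_seq):
        ys.append(C @ x + E @ w)          # measured output (6.2)
        x = A @ x + B @ u + D @ w         # state update (6.1)
        xs.append(x)
    return np.array(xs), np.array(ys)
```

For instance, with scalar $A = 0.5$, zero inputs and disturbances, and $x_0 = 1$, the state decays as $1, 0.5, 0.25, \ldots$, and the noise-free output simply reproduces the state.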
Tamer Başar, Pierre Bernhard

Chapter 7. Minimax Estimators and Performance Levels

Abstract
In the previous two chapters, in the development of the solutions to the disturbance attenuation problem with imperfect measurements in continuous and discrete time, we have encountered filter equations which resemble the standard Kalman filter when the weighting on the state in the cost function (i.e., Q) is set equal to zero. In this chapter we study such problems directly and show that the appearance of the Kalman filter, or Kalman filter-like structures, is actually not a coincidence. The commonality of the analysis of this chapter with those of the previous ones is again the use of game-theoretic techniques.
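As a rough sketch of the predict/update structure the chapter's filters resemble (scalar case; the notation and the noise intensities W, V below are illustrative assumptions, not taken from the text):

```python
def kalman_step(x_hat, P, y, A=1.0, C=1.0, W=0.1, V=0.2):
    """One scalar Kalman filter cycle for x_{k+1} = A x_k + w_k,
    y_k = C x_k + v_k, with process/measurement noise intensities W, V
    (hypothetical example values)."""
    # Predict: propagate the estimate and its error variance
    x_pred = A * x_hat
    P_pred = A * P * A + W
    # Update: correct with the innovation y - C * x_pred
    K = P_pred * C / (C * P_pred * C + V)   # Kalman gain
    x_new = x_pred + K * (y - C * x_pred)
    P_new = (1.0 - K * C) * P_pred
    return x_new, P_new
```

The minimax filters of Chapters 5 and 6 share this predictor-corrector form; the game-theoretic analysis of this chapter explains why that resemblance is structural rather than coincidental.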
Tamer Başar, Pierre Bernhard

Chapter 8. Robustness to Regular and Singular Perturbations

Abstract
The objective of this chapter is to present a framework wherein robustness properties of H∞ controllers obtained in Chapters 4 and 5 can be studied in the presence of small perturbations on the original system dynamics — such as small nonlinearities (for linear systems), unmodeled fast dynamics, and unmodeled weak coupling between subsystems of a large-scale system. This general framework will first be introduced in a context where the controller has access to perfect-state measurements (as in Chapter 4), and then extended to the imperfect-state measurements case only for linear systems. This will allow us to state precise robustness results for H∞ controllers.
Tamer Başar, Pierre Bernhard

Chapter 9. Appendix A: Conjugate Points and Existence of Value

Abstract
We provide in the first part of this appendix a self-contained introduction to conjugate points as they arise in dynamic linear-quadratic optimization, and in particular in the context of finite-horizon linear-quadratic differential games. In the second part we present the counterpart of these results in the infinite horizon, where also the issue of stability becomes important. The material included in this appendix is used extensively in Chapter 4, in the proof of some key results in the solution of the continuous-time disturbance attenuation problem, and also partly in Chapters 5, 7, and 8.
Tamer Başar, Pierre Bernhard

Chapter 10. Appendix B: Danskin’s Theorem

Abstract
In this appendix we state and prove a theorem due to Danskin, which was used in Chapter 5, in the proof of Theorem 5.1. We also show how this result applies to prove a stronger version of Theorem 5.1, under weaker hypotheses (than those given in Chapter 5).
Tamer Başar, Pierre Bernhard

Backmatter
