
2012 | Book

The Robust Maximum Principle

Theory and Applications

Authors: Vladimir G. Boltyanski, Alexander S. Poznyak

Publisher: Birkhäuser Boston

Book series: Systems & Control: Foundations & Applications


About this Book

Both refining and extending previous publications by the authors, the material in this monograph has been class-tested in mathematical institutions throughout the world. Covering some of the key areas of optimal control theory (OCT)—a rapidly expanding field developed to analyze the optimal behavior of a constrained process over time—the authors use new methods to set out a version of OCT’s more refined ‘maximum principle’ designed to solve the problem of constructing optimal control strategies for uncertain systems where some parameters are unknown. Known as a ‘min-max’ problem, this type of problem occurs frequently when dealing with finite uncertain sets.
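The ‘min-max’ formulation described above can be sketched in symbols (the notation below is illustrative, not the book’s own):

```latex
% Uncertain dynamics: alpha ranges over a given finite (or compact) set A,
% and the controller must guard against the worst admissible alpha:
\dot{x}^{\alpha}(t) = f\bigl(x^{\alpha}(t), u(t), \alpha\bigr),
\qquad \alpha \in \mathcal{A},
% Min-max (worst-case) optimal control problem over admissible controls u(.):
\min_{u(\cdot)} \; \max_{\alpha \in \mathcal{A}} \; J^{\alpha}\bigl(u(\cdot)\bigr).
```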

The text begins with a standalone section that reviews classical optimal control theory. Moving on to examine the tent method in detail, the book then presents its core material, which is a more robust maximum principle for both deterministic and stochastic systems. The results obtained have applications in production planning, reinsurance-dividend management, multi-model sliding mode control, and multi-model differential games.

Using powerful new tools in optimal control theory, this book explores material that will be of great interest to post-graduate students, researchers, and practitioners in applied mathematics and engineering, particularly in the area of systems and control.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
In this introductory chapter we present the main topic of the book, the so-called Min-Max control problem, in which the dynamics of the system under consideration is described by a first-order vector ordinary differential equation whose right-hand side depends on a parameter running over a given parametric set (finite or compact). The performance index is given in the joint (Bolza) form, containing a terminal term as well as an integral one, defined on either a finite or an infinite horizon. The problem consists in designing an admissible control law that provides the minimum value of the performance index under the worst parameter selection in the plant dynamics. In fact, this Min-Max problem is an optimization problem in a Banach (infinite-dimensional) space. We first consider a Min-Max problem in a finite-dimensional Euclidean space, in order to understand which specific features of Min-Max solutions arise, what we may expect from their extension to infinite-dimensional Min-Max problems, and to verify whether these properties remain valid. We show that two main properties of the solution hold.
  • The joint Hamiltonian (the negative Lagrangian) of the initial optimization problem is equal to the sum (integral) of the individual Hamiltonians calculated over the given parametric set.
  • At the optimal point, all loss functions corresponding to “active indices” (those for which the Lagrange multipliers are strictly positive) turn out to be equal.
The main question arising here is: “Do these two principal properties, formulated for finite-dimensional Min-Max problems, remain valid for the infinite-dimensional case, formulated in a Banach space for a Min-Max optimal control problem?” The answer is: YES, they do! A detailed justification of this positive answer forms the main contribution of this manuscript.
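The two properties above admit a compact finite-dimensional sketch (the symbols below are illustrative, not the book’s exact notation):

```latex
% For losses f_alpha, alpha = 1,...,N, consider  min_x max_alpha f_alpha(x).
% (1) The joint Hamiltonian is a weighted sum of the individual Hamiltonians:
H(x, \psi, \mu) = \sum_{\alpha=1}^{N} \mu_{\alpha}\, H_{\alpha}(x, \psi),
\qquad \mu_{\alpha} \ge 0, \quad \sum_{\alpha=1}^{N} \mu_{\alpha} = 1.
% (2) At the optimum, all losses with "active" indices coincide:
f_{\alpha}(x^{*}) = f_{\beta}(x^{*})
\quad \text{whenever } \mu_{\alpha} > 0 \text{ and } \mu_{\beta} > 0.
```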
Vladimir G. Boltyanski, Alexander S. Poznyak

Topics of Classical Optimal Control

Frontmatter
Chapter 2. The Maximum Principle
Abstract
This chapter presents the basic concepts of Classical Optimal Control related to the Maximum Principle. The general optimal control problem is formulated in the Bolza (as well as the Mayer and the Lagrange) form. The Maximum Principle, which gives necessary conditions of optimality, is formulated and proven for various problems with fixed and variable horizons. All necessary mathematical claims are given in the Appendix, which makes this material self-contained.
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 3. Dynamic Programming
Abstract
The Dynamic Programming Method is discussed in this chapter, and the corresponding HJB equation, defining sufficient conditions for the optimality of an admissible control, is derived. Its smooth and nonsmooth (viscosity) solutions are discussed.
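The HJB equation mentioned in this abstract can be sketched in a standard textbook form (illustrative notation; the chapter’s exact conventions may differ):

```latex
% Value function V(t,x); g is the running cost, h the terminal cost,
% f the system dynamics, U the admissible control set:
-\frac{\partial V}{\partial t}(t, x)
  = \min_{u \in U} \Bigl[ g(x, u)
      + \frac{\partial V}{\partial x}(t, x)\, f(x, u) \Bigr],
\qquad V(T, x) = h(x).
```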
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 4. Linear Quadratic Optimal Control
Abstract
This chapter deals with the optimal control design for linear models described by a linear (possibly nonstationary) ODE. The cost functional is considered for both finite and infinite horizons. The finite-horizon optimal control is shown to be a linear nonstationary feedback with a gain matrix generated by a backward differential matrix Riccati equation. For stationary models without measurable uncontrollable inputs and with an infinite horizon, the optimal control is a linear stationary feedback with a gain matrix satisfying an algebraic matrix Riccati equation. A detailed analysis of this matrix equation is presented, and conditions on the parameters of the linear system are given that guarantee the existence and uniqueness of a positive-definite solution, which forms part of the gain matrix in the corresponding optimal linear feedback control.
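The objects described in this abstract can be sketched in the common textbook convention (illustrative notation; signs and weighting matrices may differ from the book’s):

```latex
% Dynamics x' = A x + B u, cost  x(T)'G x(T) + \int_0^T (x'Qx + u'Ru) dt.
% Finite horizon: linear feedback with a backward matrix Riccati equation
u^{*}(t) = -R^{-1} B^{\top} P(t)\, x(t),
\qquad
-\dot{P}(t) = A^{\top} P(t) + P(t) A - P(t) B R^{-1} B^{\top} P(t) + Q,
\quad P(T) = G.
% Infinite horizon (stationary case): P solves the algebraic Riccati equation
A^{\top} P + P A - P B R^{-1} B^{\top} P + Q = 0.
```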
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 5. Time-Optimization Problem
Abstract
This chapter presents a detailed analysis of the so-called time-optimization problem. A switching control is shown to be optimal for this problem. The “Theorem on n-intervals” is proven for a class of linear stationary systems. Some examples illustrate the main results.
Vladimir G. Boltyanski, Alexander S. Poznyak

Tent Method

Frontmatter
Chapter 6. The Tent Method in Finite-Dimensional Spaces
Abstract
The Tent Method is shown to be a general tool for solving a wide spectrum of extremal problems. First, we show its workability in finite-dimensional spaces. Then topology is applied for the justification of some results in variational calculus. A short historical remark on the Tent Method is made and the idea of the proof of the Maximum Principle is explained in detail, paying special attention to the necessary topological tools. The finite-dimensional version of the Tent Method allows one to establish the Maximum Principle and to obtain a generalization of the Kuhn–Tucker Theorem in Euclidean spaces.
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 7. Extremal Problems in Banach Spaces
Abstract
This chapter deals with the extension of the Tent Method to Banach spaces. The Abstract Extremal Problem is formulated as an intersection problem. Subspaces in general position are introduced. A necessary condition for the separability of a system of convex cones is derived, and a criterion of separability in Hilbert spaces is presented. Then the analog of the Kuhn–Tucker Theorem for Banach spaces is discussed in detail.
Vladimir G. Boltyanski, Alexander S. Poznyak

Robust Maximum Principle for Deterministic Systems

Frontmatter
Chapter 8. Finite Collection of Dynamic Systems
Abstract
The general approach to the Min-Max Control Problem for uncertain systems, based on the suggested version of the Robust Maximum Principle, is presented. The uncertainty set is assumed to be finite, which leads to a direct numerical procedure realizing the suggested approach. It is shown that the Hamilton function used in this Robust Maximum Principle is equal to the sum of the standard Hamiltonians corresponding to a fixed value of the uncertainty parameter. The families of differential equations of the state and conjugate variables together with transversality and complementary slackness conditions are demonstrated to form a closed system of equations, sufficient to construct a corresponding robust optimal control.
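The structure of the robust Hamiltonian described above can be sketched as follows (illustrative notation for a finite uncertainty set):

```latex
% Each fixed alpha in {1,...,N} has a standard Hamiltonian H_alpha with its
% own state x^alpha and conjugate (adjoint) variable psi^alpha; the robust
% Hamiltonian is the sum over the uncertainty set:
H(x, u, \psi)
  = \sum_{\alpha=1}^{N} H_{\alpha}\bigl(x^{\alpha}, u, \psi^{\alpha}\bigr),
% and the robust maximum condition selects the control maximizing this sum:
u^{*}(t) = \arg\max_{u \in U} \sum_{\alpha=1}^{N}
  H_{\alpha}\bigl(x^{\alpha}(t), u, \psi^{\alpha}(t)\bigr).
```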
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 9. Multimodel Bolza and LQ Problem
Abstract
In this chapter the Robust Maximum Principle is applied to the Min-Max Bolza Multimodel Problem in a general form, where the cost function contains a terminal term as well as an integral one, and a fixed horizon and a terminal set are considered. For the class of stationary models without external inputs, the robust optimal controller is also designed for the infinite-horizon problem. The necessary conditions of robust optimality are derived for the class of uncertain systems given by an ordinary differential equation with parameters in a given finite set. As an illustration of the suggested approach, the Min-Max Linear Quadratic Multimodel Control Problem is considered in detail. It is shown that the design of the Min-Max optimal controller reduces to a finite-dimensional optimization problem posed on the corresponding simplex containing the weight parameters to be found.
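The finite-dimensional reduction mentioned at the end of this abstract can be sketched as a search over a weight simplex (illustrative notation):

```latex
% Weights lambda lie in the N-dimensional simplex S_N:
S_{N} = \Bigl\{ \lambda \in \mathbb{R}^{N} :
  \lambda_{\alpha} \ge 0, \ \sum_{\alpha=1}^{N} \lambda_{\alpha} = 1 \Bigr\},
% and the min-max LQ design reduces to a finite-dimensional problem over S_N,
% with u_lambda denoting the LQ-optimal control for the lambda-weighted cost:
\lambda^{*} = \arg\min_{\lambda \in S_{N}}
  \max_{\alpha \in \{1,\dots,N\}} J^{\alpha}\bigl(u_{\lambda}\bigr).
```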
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 10. Linear Multimodel Time Optimization
Abstract
Robust time optimality can be considered as a particular case of the Lagrange problem, and therefore, the results obtained in the previous chapters allow us to formulate directly the Robust Maximum Principle for this time-optimization problem. As is shown in Chap. 8, the Robust Maximum Principle appears only as a necessary condition for robust optimality. But the specific character of the linear time-optimization problem permits us to obtain more profound results: in this case the Robust Maximum Principle appears as a necessary and sufficient condition. Moreover, for the linear robust time optimality it is possible to establish some additional results: the existence and uniqueness of robust controls, the piecewise constancy of robust controls for a polyhedral resource set, and a Feldbaum-type estimate for the number of intervals of constancy (or “switching”). All these aspects are studied below in detail.
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 11. A Measurable Space as Uncertainty Set
Abstract
The purpose of this chapter is to extend the possibilities of the Maximum Principle approach to the class of Min-Max Control Problems dealing with the construction of optimal control strategies for uncertain systems given by a system of ordinary differential equations with unknown parameters from a given compact measurable set. The problem considered belongs to the class of optimization problems of the Min-Max type. Below, a version of the Robust Maximum Principle applied to the Min-Max Mayer problem with a terminal set is presented; a fixed horizon is considered. The main contribution of this material is the statement of the robust (Min-Max) version of the Maximum Principle formulated for compact measurable sets of unknown parameters involved in a model description. It is shown that the robust optimal control, minimizing the worst parametric value of the terminal functional, maximizes the Lebesgue–Stieltjes integral of the standard Hamiltonian function (calculated under a fixed parameter value) taken over the given uncertainty parametric set. In some sense, this chapter generalizes the results of the previous chapters: the case of a finite uncertainty set is a special case of a compact uncertainty set supplied with an atomic measure.
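The maximum condition described in this abstract can be sketched as follows (illustrative notation; mu denotes a measure on the compact uncertainty set):

```latex
% The finite sum of Hamiltonians is replaced by a Lebesgue-Stieltjes
% integral over the compact measurable parametric set A:
u^{*}(t) = \arg\max_{u \in U}
  \int_{\mathcal{A}} H\bigl(x^{\alpha}(t), u, \psi^{\alpha}(t), \alpha\bigr)\,
  d\mu(\alpha).
% Choosing mu atomic recovers the finite-set case of the previous chapters.
```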
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 12. Dynamic Programming for Robust Optimization
Abstract
In this chapter we extend the Dynamic Programming (DP) approach to multimodel Optimal Control Problems (OCPs). We deal with the robust optimization of multimodel control systems and are particularly interested in the Hamilton–Jacobi–Bellman (HJB) equation for the above class of problems. Here we study a variant of the HJB for multimodel OCPs and examine the natural relationship between the Bellman DP techniques and the Robust Maximum Principle (MP). Moreover, we describe how to carry out the practical calculations in the context of multimodel LQ problems and derive the associated Riccati-type equation. In this chapter we follow Azhmyakov et al. (Nonlinear Anal. 72:1110–1119, 2010).
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 13. Min-Max Sliding-Mode Control
Abstract
This chapter deals with the Min-Max Sliding-Mode Control design where the original linear time-varying system with unmatched disturbances and uncertainties is replaced by a finite set of dynamic models such that each one describes a particular uncertain case including exact realizations of possible dynamic equations as well as external bounded disturbances. Such a trade-off between an original uncertain linear time-varying dynamic system and a corresponding higher order multimodel system with complete knowledge leads to a linear multimodel system with known bounded disturbances. Each model from a given finite set is characterized by a quadratic performance index. The developed Min-Max Sliding-Mode Control strategy gives an optimal robust sliding-surface design algorithm, which is reduced to a solution of the equivalent LQ Problem that corresponds to the weighted performance indices with weights from a finite-dimensional simplex. An illustrative numerical example is presented.
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 14. Multimodel Differential Games
Abstract
In this chapter we focus on the construction of robust Nash strategies for a class of multimodel games described by a system of ordinary differential equations with parameters from a given finite set. Such strategies entail the “Robust equilibrium” being applied to all scenarios (or models) of the game simultaneously. The multimodel concept allows one to improve the robustness of the designed strategies in the presence of some parametric uncertainty. The game solution corresponds to a Nash-equilibrium point of this game. In LQ dynamic games the equilibrium strategies obtained are shown to be linear functions of the so-called weighting parameters from a given finite-dimensional vector simplex. This technique permits us to transform the initial game problem, formulated in a Banach space (the control functions are to be found) to a static game given in finite-dimensional space (simplex). The corresponding numerical procedure is discussed. The weights obtained appear in an extended coupled Riccati differential equation. The effectiveness of the designed controllers is illustrated by a two-dimensional missile guidance problem.
Vladimir G. Boltyanski, Alexander S. Poznyak

Robust Maximum Principle for Stochastic Systems

Frontmatter
Chapter 15. Multiplant Robust Control
Abstract
In this chapter the Robust Stochastic Maximum Principle (in the Mayer form) is presented for a class of nonlinear continuous-time stochastic systems containing an unknown parameter from a given finite set and subject to terminal constraints. Its proof is based on the use of the Tent Method with the special technique specific for stochastic calculus. The Hamiltonian function used for these constructions is equal to the sum of the standard stochastic Hamiltonians corresponding to a fixed value of the uncertain parameter. The corresponding robust optimal control can be calculated numerically (a finite-dimensional optimization problem should be solved) for some simple situations.
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 16. LQ-Stochastic Multimodel Control
Abstract
The main goal of this chapter is to illustrate the possibilities of the MP approach for a class of Min-Max Control Problems for uncertain systems described by a system of linear stochastic differential equations with controlled drift and diffusion terms and unknown parameters within a given finite set. The problem belongs to the class of Min-Max Optimization Problems on a fixed finite horizon (where the cost function contains both an integral and a terminal term) and on an infinite one (where the loss function is a time-averaged functional). The solution is based on the Robust Stochastic Maximum Principle (RSMP) derived in the previous chapter. The construction of the Min-Max LQ optimal controller is shown to reduce to a finite-dimensional optimization problem related to the solution of a Riccati equation parametrized by the weights to be found.
Vladimir G. Boltyanski, Alexander S. Poznyak
Chapter 17. A Compact Uncertainty Set
Abstract
This chapter extends the possibilities of the MP approach for a class of Min-Max control problems for uncertain models given by a system of stochastic differential equations with a controlled diffusion term and unknown parameters within a given measurable compact set. For simplicity, we consider the Min-Max problem belonging to the class of optimization problems with a fixed finite horizon where the cost function contains only a terminal term (without an integral part). The proof is based on the Tent Method in a Banach space, discussed in detail in Part II; it permits us to formulate the necessary conditions of optimality in the Hamiltonian form.
Vladimir G. Boltyanski, Alexander S. Poznyak
Backmatter
Metadata
Title
The Robust Maximum Principle
Authors
Vladimir G. Boltyanski
Alexander S. Poznyak
Copyright year
2012
Publisher
Birkhäuser Boston
Electronic ISBN
978-0-8176-8152-4
Print ISBN
978-0-8176-8151-7
DOI
https://doi.org/10.1007/978-0-8176-8152-4
