
2003 | Book

Impulsive Control in Continuous and Discrete-Continuous Systems

Authors: Boris M. Miller, Evgeny Ya. Rubinovich

Publisher: Springer US


About this book

Impulsive Control in Continuous and Discrete-Continuous Systems is an up-to-date introduction to the theory of impulsive control in nonlinear systems. This is a new branch of optimal control theory, tightly connected to the theory of hybrid systems. The text introduces the reader to the interesting area of optimal control problems with discontinuous solutions, discussing the application of a new and effective method of discontinuous time transformation. With a large number of examples, illustrations, and applied problems arising in the area of observation control, this book is suitable as a textbook for a senior- or graduate-level course on the subject, as well as a reference for researchers in related fields.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
The concept of a discrete-continuous or hybrid system (DCS or HS) arose in scientific practice around the turn of the 1950s and 1960s. Its appearance was connected with the development of digital automatic controls for continuous plants [199], [200]. These systems usually have continuous and discrete (impulsive) parts which operate together, giving the system new properties. It soon became clear that the presence of elements operating in a discrete or impulsive mode may lead to substantial changes in the system characteristics and properties. This gave rise to new directions and branches in automatic control theory: the theory of dynamic systems with delay, the theory of discrete deterministic and stochastic processes, and the development of special methods of stability, controllability, and optimization [18], [87], [169], [199], [200]. In the course of further investigations of various technical, economic, and biological systems, the concept of impulsive control came to be understood as an action causing instantaneous changes in the system states. These changes occur much more quickly than the proper dynamic processes, so that in the natural time scale they appear instantaneous. It was therefore natural to replace these quick changes by jumps in the mathematical models of such systems. This substitution made it possible to simplify the description of those systems, at the expense of reducing the dimension of the control actions and replacing a functional description by a parametric one. Various examples of such systems were considered in flight dynamics [81], [82], [98], [100], [110], [117], [154], [155], [156], [173], [202], [215], [216], in optimizing the tactics of chemo- and radiotherapy for cancer and other diseases [80], [166], in economic analysis [14], [46], [69], [70], in the control of information processes [31], [36], [66], [107], [114], [115], and in queuing systems [108], [116]. A number of further application areas of impulsive control are presented in the monographs [12], [57], which, besides those mentioned above, include power production control and stock management.
Boris M. Miller, Evgeny Ya. Rubinovich
Chapter 2. Discrete-continuous systems with impulse control
Abstract
Consider the evolution of a dynamical system whose state is described by the variable X(t) ∈ R^n defined on some interval [0, T]. Suppose that X(t) satisfies the differential equation
$$ \dot{X}(t) = F(X(t), t), $$
(2.1)
with a given initial condition \( X(0) = x_0 \in R^n \) and the following intermediate conditions
$$ X(\tau_i) = X(\tau_i -) + \Psi(X(\tau_i -), \tau_i, \omega_i), $$
(2.2)
which are given for some sequence of instants \( \{ \tau_i,\; i = 1, \dots, N \le \infty \} \) satisfying the inequalities
$$ 0 \le \tau_1 < \tau_2 < \dots < \tau_i < \dots < \tau_N \le T. $$
(2.3)
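To fix ideas, here is a minimal simulation sketch of a system of the form (2.1)-(2.3): integrate the continuous dynamics between jump instants, then apply the intermediate condition. The flow F, the jump map Ψ, the instants τ_i, and the parameters ω_i below are illustrative assumptions of ours, not data from the book.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative data (not from the book): a damped scalar flow with jumps.
def F(t, x):                      # right-hand side of (2.1)
    return -0.5 * x

def Psi(x_minus, tau, omega):     # jump map of (2.2), hypothetical form
    return omega - 0.1 * x_minus

T = 10.0
taus = [2.0, 5.0, 8.0]            # jump instants, ordered as in (2.3)
omegas = [1.0, -0.5, 0.8]         # impulse parameters omega_i

x, t0 = np.array([1.0]), 0.0
path = [(t0, x.copy())]
for tau, omega in zip(taus, omegas):
    sol = solve_ivp(F, (t0, tau), x)      # integrate (2.1) up to tau-
    x = sol.y[:, -1]                      # X(tau-)
    x = x + Psi(x, tau, omega)            # apply (2.2): X(tau)
    path.append((tau, x.copy()))
    t0 = tau
sol = solve_ivp(F, (t0, T), x)            # final arc on [tau_N, T]
path.append((T, sol.y[:, -1]))
print(path)
```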
Boris M. Miller, Evgeny Ya. Rubinovich
Chapter 3. Optimal impulse control problem with restricted number of impulses
Abstract
The classical statement of impulse control problems [73], [98], [122], [218], [121] presumes, as a rule, energy-type constraints imposed on the total intensity of the control actions. However, if no restrictions are imposed on the total number of impulses and/or on the impulse repetition rate, then an impulse sliding mode can appear as an optimal solution (see [73], [122] and Examples 1.2 and 1.3). The realization of such modes requires an extremely high impulse repetition rate, which may be inadmissible in some technical systems. One of the available methods for taking these constraints into account is to restate the optimal control problem as a problem of mathematical programming. In this way one can obtain optimality conditions of Kuhn-Tucker type and solve the problem with the aid of numerical procedures. Meanwhile, optimal control offers more powerful tools, such as the Pontryagin maximum principle [18], [59], [169], which is much more effective than mathematical programming methods. However, to derive the maximum principle in DCS, one has to justify the convexity of the attainability set of the system state after one impulse application [3], [4], [206]. Generally this is a rather complicated problem which can be effectively solved only for linear systems.
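As a hedged illustration of the mathematical-programming restatement (a generic sketch, not the book's procedure): with the number of impulses fixed at N, the instants τ_i and magnitudes ω_i become finite-dimensional decision variables, and an off-the-shelf NLP solver delivers a Kuhn-Tucker point. The scalar dynamics and cost below are invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

T, N = 1.0, 2        # horizon and fixed number of impulses

def terminal_state(z):
    """Propagate dx/dt = -x between impulses; impulse i adds omega_i."""
    taus, omegas = np.sort(z[:N]), z[N:]
    x, t = 1.0, 0.0
    for tau, omega in zip(taus, omegas):
        x = x * np.exp(-(tau - t)) + omega    # closed-form flow, then jump
        t = tau
    return x * np.exp(-(T - t))

def cost(z):
    # terminal deviation plus an energy-type penalty on impulse intensity
    return terminal_state(z) ** 2 + 0.1 * np.sum(z[N:] ** 2)

z0 = np.concatenate([np.linspace(0.2, 0.8, N), np.zeros(N)])
bounds = [(0.0, T)] * N + [(-5.0, 5.0)] * N   # instants in [0, T], bounded magnitudes
res = minimize(cost, z0, bounds=bounds)
print(res.x[:N], res.x[N:], res.fun)
```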
Boris M. Miller, Evgeny Ya. Rubinovich
Chapter 4. Representation of generalized solutions via differential equations with measures
Abstract
In this Chapter we discuss the control of a nonlinear dynamic system whose state is governed by the nonlinear differential equation \( \dot{X}(t) = F(X(t), u(t), w(t), t) \), where F(x, u, w, t) is a given function, \( x \in R^n \), \( t \in [0, T] \), \( X(0) = x_0 \) is the initial condition, and \( u(t), w(t) \) are measurable controls on [0, T]: \( u(t) \in R^k \) is an ordinary control component, and \( w(t) \in R^m \) is a generalized one.
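Schematically, and only as orientation (this rendering is our gloss, not a formula quoted from the chapter), a differential equation with a measure replaces the generalized component by a vector measure dμ of bounded variation, so that the path satisfies
$$ dX(t) = f(X(t), u(t), t)\,dt + g(X(t), t)\,d\mu(t), \qquad X(0) = x_0, $$
where dμ carries both the absolutely continuous and the atomic (jump) parts of the generalized control action.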
Boris M. Miller, Evgeny Ya. Rubinovich
Chapter 5. Optimal control problems within the class of generalized solutions
Abstract
In this Chapter we return to the general nonlinear system described by the equation
$$\dot{X}(t) = F(X(t),u(t),w(t),t),$$
which satisfies Assumption 4.1 of the previous Chapter, so that F(x, u, w, t) is a given continuous function, Lipschitz with respect to \( (x, w) \in R^n \times R^m \), and of linear growth with respect to x and w.
Boris M. Miller, Evgeny Ya. Rubinovich
Chapter 6. Optimality conditions in control problems within the class of generalized solutions
Abstract
The derivation of optimality conditions is the basic problem of optimal control. Well-known necessary optimality conditions in the form of the maximum principle were obtained at the beginning of the 1960s and have since been widely used in optimal control practice as a powerful tool for the solution of applied problems and for the development of optimization algorithms and software. In its typical form the maximum principle reduces the infinite-dimensional optimization problem to a boundary-value problem for a system of differential equations. However, in view of the specific features of systems with impulse control, the problem of optimality conditions did not have an adequate solution, especially for nonlinear systems. As follows from the results of the preceding chapters, the optimal solutions in systems with impulse control require a special class of equations, namely, differential equations with measures. Meanwhile, the general methods of deriving necessary optimality conditions, based on the classical Dubovitskii-Milyutin scheme [47], [63], cannot be directly applied to this class of equations, particularly when the measure itself serves as an additional control component. Indeed, as was shown in the Introduction, "small" variations of the measure (or impulse control) may generate "strong" variations of the paths, so the general scheme, which relies on a linear correspondence between control variations and path variations, is inapplicable.
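To illustrate the reduction named above on the simplest textbook example (ordinary, not impulsive, control): for the scalar problem of minimizing \( \int_0^1 (x^2 + u^2)\,dt \) with \( \dot{x} = u \) and \( x(0) = 1 \), the maximum principle gives \( u = -p/2 \) together with the two-point boundary-value problem \( \dot{x} = -p/2 \), \( \dot{p} = -2x \), \( x(0) = 1 \), \( p(1) = 0 \), which a standard BVP solver handles directly.

```python
import numpy as np
from scipy.integrate import solve_bvp

# State/costate system from the maximum principle for
#   min ∫₀¹ (x² + u²) dt,  ẋ = u,  x(0) = 1   =>   u* = -p/2.
def odes(t, y):
    x, p = y
    return np.vstack((-p / 2.0, -2.0 * x))

def bc(ya, yb):
    return np.array([ya[0] - 1.0,   # x(0) = 1
                     yb[1]])        # transversality: p(1) = 0

t = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))
u_optimal = -sol.sol(t)[1] / 2.0    # recover the optimal control
print(sol.status, u_optimal)
```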
Boris M. Miller, Evgeny Ya. Rubinovich
Chapter 7. Observation control problems in discrete-continuous stochastic systems
Abstract
The most common way to formulate a stochastic control problem is to let the control affect only the evolution of the state but not the observation program, which is usually supposed to be fixed and continuous. However, in many practical situations we also have the possibility of controlling the observation program in a way that affects both the timing and the composition of the observations. This, in turn, leads to a control problem where one tries to choose the control to maximize the information content of the observations regarding the state, while taking into account various constraints and possible penalties imposed on the control effort; this is the observation control problem.
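A toy version of such a problem, offered as our own illustration rather than the book's model: for a scalar linear-Gaussian system, choose which observation instants to use (up to a budget) so as to minimize the terminal filtering error variance plus a per-observation cost. The variance recursion is the standard discrete Kalman filter.

```python
import itertools

a, q, r = 1.05, 0.1, 0.2    # dynamics gain, process and measurement noise
M, K, c = 8, 3, 0.05        # horizon, observation budget, cost per observation

def terminal_variance(obs_set):
    P = 1.0                          # initial error variance
    for k in range(M):
        P = a * a * P + q            # prediction step
        if k in obs_set:             # measurement update at chosen instants
            P -= P * P / (P + r)
    return P

candidates = (s for n in range(K + 1)
                for s in itertools.combinations(range(M), n))
best = min(candidates, key=lambda s: terminal_variance(s) + c * len(s))
print(best, terminal_variance(best))
```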
Boris M. Miller, Evgeny Ya. Rubinovich
Chapter 8. Appendix. Differential equations with measures
Abstract
We begin the appendix with a description of some properties of functions of bounded variation. We do not intend to give a complete exposition of this very important area of real analysis. Our aim is only to present the necessary results, to make this book more or less self-contained. For more details and proofs we refer to [89], [152], [179], [192].
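For reference, the central notion is the standard one from real analysis: the total variation of f on [0, T],
$$ V_0^T(f) = \sup_{0 = t_0 < t_1 < \dots < t_k = T} \; \sum_{i=1}^{k} \left| f(t_i) - f(t_{i-1}) \right|, $$
with the supremum taken over all finite partitions; f is of bounded variation when \( V_0^T(f) < \infty \), and exactly such functions induce the Lebesgue-Stieltjes measures that drive the differential equations with measures used throughout the book.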
Boris M. Miller, Evgeny Ya. Rubinovich
Backmatter
Metadata
Title
Impulsive Control in Continuous and Discrete-Continuous Systems
Authors
Boris M. Miller
Evgeny Ya. Rubinovich
Copyright Year
2003
Publisher
Springer US
Electronic ISBN
978-1-4615-0095-7
Print ISBN
978-1-4613-4921-1
DOI
https://doi.org/10.1007/978-1-4615-0095-7