
2010 | Book

Optimal Control


About this book

"Each chapter contains a well-written introduction and notes. They include the author's deep insights on the subject matter and provide historical comments and guidance to related literature. This book may well become an important milestone in the literature of optimal control." —Mathematical Reviews "Thanks to a great effort to be self-contained, [this book] renders accessibly the subject to a wide audience. Therefore, it is recommended to all researchers and professionals interested in Optimal Control and its engineering and economic applications. It can serve as an excellent textbook for graduate courses in Optimal Control (with special emphasis on Nonsmooth Analysis)." —Automatica "The book may be an essential resource for potential readers, experts in control and optimization, as well as postgraduates and applied mathematicians, and it will be valued for its accessibility and clear exposition." —Applications of Mathematics

Table of Contents

Frontmatter
Chapter 1. Overview
Abstract
Optimal Control emerged as a distinct field of research in the 1950s, to address in a unified fashion optimization problems arising in scheduling and the control of engineering devices, beyond the reach of traditional analytical and computational techniques. Aerospace engineering is an important source of such problems, and the relevance of Optimal Control to the American and Russian space programs gave powerful initial impetus to the field.
Richard Vinter
Chapter 2. Measurable Multifunctions and Differential Inclusions
Abstract
Differential inclusions
$$\dot x(t) \in F(t,x(t)) \quad \text{a.e. } t \in I \qquad (2.1)$$
feature prominently in modern treatments of Optimal Control. This has come about for several reasons. One is that Condition (2.1), summarizing constraints on allowable velocities, provides a convenient framework for stating hypotheses under which optimal control problems have solutions and optimality conditions may be derived. Another is that, even when we choose not to formulate an optimal control problem in terms of a differential inclusion, in cases when the data are nonsmooth, often the very statement of optimality conditions makes reference to differential inclusions. It is convenient then at this stage to highlight important properties of multifunctions and differential inclusions of particular relevance in Optimal Control.
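To make the connection concrete, here is a standard illustration (ours, not quoted from the chapter) of how a controlled differential equation \(\dot x = f(t,x,u)\), with controls taking values in a set \(U\), is recast as a differential inclusion of the form (2.1): the multifunction collects all velocities attainable at \((t,x)\),
$$F(t,x) := f(t,x,U) = \{\, f(t,x,u) : u \in U \,\},$$
so that every state trajectory of the control system satisfies \(\dot x(t) \in F(t,x(t))\) a.e. \(t \in I\).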
Richard Vinter
Chapter 3. Variational Principles
Abstract
The name “variational principle” is traditionally attached to a law of nature asserting that some quantity is minimized. Examples are Dirichlet’s Principle (the spatial distribution of an electrostatic field minimizes some quadratic functional), Fermat’s Principle (a light ray follows a shortest path), and Hamilton’s Principle of Least Action (the evolution of a dynamical system is an “extremum” for the action functional). These principles are called “variational” because working through their detailed implications entails solving problems in the Calculus of Variations.
Richard Vinter
Chapter 4. Nonsmooth Analysis
Abstract
Let \(\bar x \in R^k\) be a point in the manifold \(C := \{x : g_i(x) = 0 \ \text{for}\ i = 1,\ldots,m\}\), in which \(g_i : R^k \to R\), \(i = 1,\ldots,m\), are given continuously differentiable functions such that \(\nabla g_1(\bar x),\ldots,\nabla g_m(\bar x)\) are linearly independent.
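For a concrete instance of this setting (an illustration of ours, not taken from the abstract), take \(k = 2\), \(m = 1\), and \(g_1(x) = x_1^2 + x_2^2 - 1\), so that \(C\) is the unit circle. Then
$$\nabla g_1(\bar x) = (2\bar x_1,\, 2\bar x_2) \neq 0 \quad \text{for every } \bar x \in C,$$
so the (single-element) family of constraint gradients is linearly independent at each point of \(C\), as the hypotheses require.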
Richard Vinter
Chapter 5. Subdifferential Calculus
Abstract
In this chapter, we assemble a number of useful rules for calculating and estimating the limiting subdifferentials of composite functions in terms of their constituent mappings.
Richard Vinter
Chapter 6. The Maximum Principle
Abstract
This chapter focuses on a set of optimality conditions known as the Maximum Principle. Many competing sets of optimality conditions are now available, but the Maximum Principle retains a special significance. An early version of the Maximum Principle due to Pontryagin et al. was after all the breakthrough marking the emergence of Optimal Control as a distinct field of research. Also, whatever additional information about minimizers is provided by Dynamic Programming, higher-order conditions, and the analysis of the geometry of state trajectories, first-order necessary conditions akin to the Maximum Principle remain the principal vehicles for the solution of specific optimal control problems (either directly or indirectly via the computational procedures they inspire), or at least for generating “suspects” for their solution.
Richard Vinter
Chapter 7. The Extended Euler–Lagrange and Hamilton Conditions
Abstract
The distinguishing feature of optimal control problems, as compared with traditional variational problems, is the presence of constraints on the velocity variable.
Richard Vinter
Chapter 8. Necessary Conditions for Free End-Time Problems
Abstract
Our investigation of the properties of optimal strategies has, up till now, been confined to optimal control problems for which the underlying time interval [S,T] has been fixed.
Richard Vinter
Chapter 9. The Maximum Principle for State Constrained Problems
Abstract
In this chapter, we return to the framework of Chapter 6, in which the dynamic constraint takes the form of a differential equation parameterized by control functions. Our goal is to extend the earlier derived necessary conditions of optimality, in the form of a Maximum Principle, to allow for pathwise constraints on the state trajectories.
Richard Vinter
Chapter 10. Necessary Conditions for Differential Inclusion Problems with State Constraints
Abstract
In this chapter, we continue our investigation of necessary conditions for optimal control problems with pathwise state constraints. Now, however, the class of optimal control problems considered is one in which the dynamic constraint is formulated as a differential inclusion.
Richard Vinter
Chapter 11. Regularity of Minimizers
Abstract
In this chapter we seek information about regularity of minimizers. When do minimizing arcs have essentially bounded derivatives, higher-order derivatives, or other qualitative properties of interest in applications?
Richard Vinter
Chapter 12. Dynamic Programming
Abstract
Consider the optimal control problem:
$$(P)\quad\left\{\begin{array}{l}
\text{Minimize } g(x(T)) \\
\text{over arcs } x \in W^{1,1}([S,T];R^n) \ \text{satisfying} \\
\dot x(t) \in F(t,x(t)) \quad \text{a.e.,} \\
x(S) = x_0,
\end{array}\right.$$
the data for which comprise an interval \([S,T] \subset R\), a function \(g : R^n \to R \cup \{+\infty\}\), a multifunction \(F: [S,T]\times R^n \rightsquigarrow R^n\), and a point \(x_0 \in R^n\).
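As a sketch of the standard dynamic programming apparatus for (P) (our notation; the chapter's precise definitions may differ), one introduces the value function
$$V(t,\xi) := \inf\big\{\, g(x(T)) : x \in W^{1,1}([t,T];R^n),\ \dot x(s) \in F(s,x(s))\ \text{a.e.},\ x(t) = \xi \,\big\},$$
which records the best cost achievable from an intermediate time \(t\) and state \(\xi\). The infimum in (P) is then \(V(S,x_0)\), and along any admissible arc the map \(t \mapsto V(t,x(t))\) is nondecreasing, remaining constant along minimizers (the principle of optimality).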
Richard Vinter
Backmatter
Metadata
Title
Optimal Control
Author
Richard Vinter
Copyright Year
2010
Publisher
Birkhäuser Boston
Electronic ISBN
978-0-8176-8086-2
Print ISBN
978-0-8176-4990-6
DOI
https://doi.org/10.1007/978-0-8176-8086-2
