
1998 | Book | 2nd Edition

Mathematical Control Theory

Deterministic Finite Dimensional Systems

Author: Eduardo D. Sontag

Publisher: Springer New York

Book series: Texts in Applied Mathematics


About this book

Mathematics is playing an ever more important role in the physical and biological sciences, provoking a blurring of boundaries between scientific disciplines and a resurgence of interest in the modern as well as the classical techniques of applied mathematics. This renewal of interest, both in research and teaching, has led to the establishment of the series Texts in Applied Mathematics (TAM).

The development of new courses is a natural consequence of a high level of excitement on the research frontier as newer techniques, such as numerical and symbolic computer systems, dynamical systems, and chaos, mix with and reinforce the traditional methods of applied mathematics. Thus, the purpose of this textbook series is to meet the current and future needs of these advances and to encourage the teaching of new courses.

TAM will publish textbooks suitable for use in advanced undergraduate and beginning graduate courses, and will complement the Applied Mathematics Sciences (AMS) series, which will focus on advanced textbooks and research-level monographs.

Preface to the Second Edition

The most significant differences between this edition and the first are as follows:

• Additional chapters and sections have been written, dealing with:
  - nonlinear controllability via Lie-algebraic methods,
  - variational and numerical approaches to nonlinear control, including a brief introduction to the Calculus of Variations and the Minimum Principle,
  - time-optimal control of linear systems,
  - feedback linearization (single-input case),
  - nonlinear optimal feedback,
  - controllability of recurrent nets, and
  - controllability of linear systems with bounded controls.

Table of contents

Frontmatter
Chapter 1. Introduction
Abstract
Mathematical control theory is the area of application-oriented mathematics that deals with the basic principles underlying the analysis and design of control systems. To control an object means to influence its behavior so as to achieve a desired goal. In order to implement this influence, engineers build devices that incorporate various mathematical techniques. These devices range from Watt’s steam engine governor, designed during the English Industrial Revolution, to the sophisticated microprocessor controllers found in consumer items —such as CD players and automobiles— or in industrial robots and airplane autopilots.
Eduardo D. Sontag
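The core idea of the chapter, influencing a system's behavior toward a desired goal, can be sketched in a few lines. The scalar system, gain, and setpoint below are our own toy choices (a minimal sketch, not an example from the book):

```python
# Proportional feedback steering the scalar discrete-time system
# x[t+1] = x[t] + u[t] toward a setpoint, in the spirit of Watt's governor.

def simulate(x0, setpoint, gain, steps):
    """Apply u = gain * (setpoint - x) at each step; return the trajectory."""
    x, traj = x0, [x0]
    for _ in range(steps):
        u = gain * (setpoint - x)
        x = x + u
        traj.append(x)
    return traj

traj = simulate(x0=0.0, setpoint=1.0, gain=0.5, steps=10)
# With gain 0.5 the tracking error halves at every step.
print(round(traj[-1], 6))  # 0.999023
```

The closed-loop error obeys e[t+1] = (1 - gain) e[t], so any gain in (0, 2) gives convergence; this is the simplest instance of the stability questions studied later in the book.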
Chapter 2. Systems
Abstract
This Chapter introduces concepts and terminology associated with discrete- and continuous-time systems, linearization, and input/output expansions, and establishes some of their elementary properties.
Eduardo D. Sontag
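Linearization, one of the topics of this chapter, can be illustrated numerically: the A and B matrices of the linearization of x' = f(x, u) at an equilibrium are the Jacobians of f. The finite-difference scheme and the pendulum example below are our own illustration, not the book's construction:

```python
import math

def jacobians(f, x0, u0, eps=1e-6):
    """Approximate A = df/dx and B = df/du at (x0, u0) by forward differences."""
    n, m = len(x0), len(u0)
    def col(g, base, i):
        # i-th column of the Jacobian of g at base.
        p = list(base); p[i] += eps
        fp, f0 = g(p), g(base)
        return [(fp[k] - f0[k]) / eps for k in range(n)]
    A = [[0.0] * n for _ in range(n)]
    B = [[0.0] * m for _ in range(n)]
    for j in range(n):
        c = col(lambda x: f(x, u0), x0, j)
        for i in range(n): A[i][j] = c[i]
    for j in range(m):
        c = col(lambda u: f(x0, u), u0, j)
        for i in range(n): B[i][j] = c[i]
    return A, B

# Damped pendulum with torque input, linearized at the downward equilibrium.
f = lambda x, u: [x[1], -math.sin(x[0]) - 0.1 * x[1] + u[0]]
A, B = jacobians(f, [0.0, 0.0], [0.0])
# A ≈ [[0, 1], [-1, -0.1]],  B ≈ [[0], [1]]
```

The resulting pair (A, B) is exactly what the later chapters' linear tests (controllability, stabilizability) are applied to.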
Chapter 3. Reachability and Controllability
Abstract
In all of the definitions to follow, Σ = (T, X, U, φ) is an arbitrary system.
Eduardo D. Sontag
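For linear systems, the reachability question treated in this chapter reduces to a rank test: x' = Ax + Bu is controllable iff the Kalman matrix [B, AB, ..., A^{n-1}B] has rank n. The following sketch of that standard test (our code, not the book's) uses plain Gaussian elimination:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def rank(M, tol=1e-9):
    # Row-echelon rank via Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if piv is None or abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            fac = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= fac * M[r][j]
        r += 1
    return r

def controllable(A, B):
    """Kalman rank condition for the pair (A, B)."""
    n = len(A)
    blocks, P = [], B
    for _ in range(n):
        blocks.append(P)
        P = matmul(A, P)
    R = [[blk[i][j] for blk in blocks for j in range(len(B[0]))]
         for i in range(n)]
    return rank(R) == n

# Double integrator: position and velocity, force input — controllable.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
print(controllable(A, B))  # True
```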
Chapter 4. Nonlinear Controllability
Abstract
In this chapter we study controllability questions for time-invariant continuous-time systems ẋ = f(x, u).
Eduardo D. Sontag
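The Lie-algebraic methods of this chapter are built on the Lie bracket [f, g](x) = Dg(x) f(x) − Df(x) g(x) of two vector fields. As an illustration (our code and example, not the book's), the bracket can be approximated by finite differences; for unicycle-like fields, driving and steering do not commute, and their bracket is the sideways motion that neither field provides:

```python
import math

def lie_bracket(f, g, x, eps=1e-5):
    """Approximate [f, g](x) = Dg(x) f(x) - Df(x) g(x) numerically."""
    n = len(x)
    def jac_times(h, v):
        # Directional derivative Dh(x) v via a symmetric difference.
        xp = [x[i] + eps * v[i] for i in range(n)]
        xm = [x[i] - eps * v[i] for i in range(n)]
        hp, hm = h(xp), h(xm)
        return [(hp[i] - hm[i]) / (2 * eps) for i in range(n)]
    return [a - b for a, b in
            zip(jac_times(g, f(x)), jac_times(f, g(x)))]

# State (x, y, heading): f drives forward, g turns in place.
f = lambda x: [math.cos(x[2]), math.sin(x[2]), 0.0]
g = lambda x: [0.0, 0.0, 1.0]
print([round(v, 6) for v in lie_bracket(f, g, [0.0, 0.0, 0.0])])
# [0.0, -1.0, 0.0]  — motion in y, reachable only through bracketing
```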
Chapter 5. Feedback and Stabilization
Abstract
The introductory Sections 1.2 to 1.5, which the reader is advised to review at this point, motivated the search for feedback laws to control systems. One is led then to the general study of the effect of feedback and more generally to questions of stability for linear and nonlinear systems. This Chapter develops basic facts about linear feedback and related topics in the algebraic theory of control systems including a proof of the Pole-Shifting Theorem described in Chapter 1, as well as an elementary introduction to Lyapunov’s direct method and a proof of a “linearization principle” for stability. Some more “advanced” topics on nonlinear stabilization are also included, mostly to indicate some of the directions of current research.
Eduardo D. Sontag
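The Pole-Shifting Theorem mentioned above can be checked by hand on the simplest interesting case. The system, desired poles, and gains below are our own illustrative choices (a sketch of the standard construction, not the book's proof):

```python
import math

# Double integrator x'' = u in controllable canonical form:
# A = [[0, 1], [0, 0]], B = [[0], [1]].  Feedback u = -k1*x - k2*x'
# gives the closed-loop matrix [[0, 1], [-k1, -k2]], with characteristic
# polynomial s^2 + k2 s + k1.  To place the poles at -1 and -2 we match
# (s+1)(s+2) = s^2 + 3s + 2, i.e. k1 = 2, k2 = 3.
k1, k2 = 2.0, 3.0

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]], assuming a real discriminant."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

print(eig2(0.0, 1.0, -k1, -k2))  # [-2.0, -1.0]
```

Any other pair of desired (conjugate-symmetric) poles is obtained the same way, by matching coefficients; this is the single-input case of the theorem.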
Chapter 6. Outputs
Abstract
Except for the basic definitions given in Chapter 2, the other major ingredient in control theory, taking into account the constraints imposed by the impossibility of measuring all state variables, has not been examined yet. These constraints were illustrated in Chapter 1 through the example of proportional-only (as opposed to proportional-derivative) control. Section 1.6, in particular, should be reviewed at this point, for a motivational introduction to observability, observers, and dynamic feedback, topics which will be developed next.
Eduardo D. Sontag
Chapter 7. Observers and Dynamic Feedback
Abstract
Section 1.6 in Chapter 1 discussed the advantages of using integration in order to average out noise when obtaining state estimates. This leads us to the topic of dynamic observers. In this chapter, we deal with observers and with the design of controllers for linear systems using only output measurements.
Eduardo D. Sontag
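A dynamic observer of the kind studied here can be sketched in a few lines. The discrete-time system, output map, and deadbeat gain below are our own hypothetical choices (an illustration, not the book's design):

```python
# Luenberger observer  xh+ = A xh + L (y - C xh)  for the system
# x+ = A x,  y = C x.  The gain L = [2, 10] places both eigenvalues of
# A - L C at zero (deadbeat), so the estimation error vanishes in two steps.

A = [[1.0, 0.1], [0.0, 1.0]]   # discrete-time double integrator
C = [1.0, 0.0]                  # only position is measured
L = [2.0, 10.0]                 # deadbeat observer gain

def step(x):
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def observer_step(xh, y):
    innov = y - (C[0] * xh[0] + C[1] * xh[1])   # output prediction error
    pred = step(xh)
    return [pred[0] + L[0] * innov, pred[1] + L[1] * innov]

x, xh = [1.0, -1.0], [0.0, 0.0]   # true state vs. initial estimate
for _ in range(3):
    y = C[0] * x[0] + C[1] * x[1]
    x, xh = step(x), observer_step(xh, y)
err = [x[0] - xh[0], x[1] - xh[1]]
print([round(abs(e), 6) for e in err])  # [0.0, 0.0]
```

The error obeys e+ = (A − LC) e, so placing the eigenvalues of A − LC is exactly the dual of the pole-placement problem for feedback.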
Chapter 8. Optimality: Value Function
Abstract
Chapter 3 dealt with the abstract property of controllability, the possibility of using inputs in order to force a system to change from one state to another. Often, however, one is interested not merely in effecting such a transition, but in doing so in a “best” possible manner. One example of this was provided in that chapter, when we calculated the control of minimal norm that achieved a given transfer, in a fixed time interval.
Eduardo D. Sontag
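The value-function viewpoint of this chapter can be illustrated on a scalar discrete-time LQR problem. The system and cost data below are our own toy choices (a sketch, not an example from the book): for x+ = a x + b u with stage cost q x² + r u², the optimal cost-to-go is V(x) = P x², where P solves a fixed-point (Riccati) equation, found here by value iteration:

```python
a, b, q, r = 1.0, 1.0, 1.0, 1.0

def riccati_fixed_point(tol=1e-12):
    """Iterate the scalar discrete-time Riccati map to convergence."""
    P = q
    while True:
        Pn = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
        if abs(Pn - P) < tol:
            return Pn
        P = Pn

P = riccati_fixed_point()
gain = a * b * P / (r + b * b * P)   # optimal feedback is u = -gain * x
# For a = b = q = r = 1 the fixed point is (1 + sqrt(5))/2, the golden ratio.
print(round(P, 6))  # 1.618034
```

Value iteration here is dynamic programming run backward to its infinite-horizon limit: each pass computes the best one-step trade-off against the current estimate of the cost-to-go.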
Chapter 9. Optimality: Multipliers
Abstract
As described in the introduction to Chapter 8, an alternative approach to optimal control relies upon Lagrange multipliers in order to link static optimization problems. In this chapter, we provide a brief introduction to several selected topics in variational, or multiplier-based, optimal control, namely: minimization of Lagrangians (and the associated Hamiltonian formalism) for open input-value sets, the basic result in the classical Calculus of Variations seen as a special case, some remarks on numerical techniques, and the Pontryagin Minimum (or Maximum, depending on conventions) Principle for arbitrary control-value sets but free final state. The area of nonlinear optimal control is very broad, and technically subtle, and, for a more in-depth study, the reader should consult the extensive literature that exists on the subject.
Eduardo D. Sontag
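The variational and numerical themes of this chapter can be seen in a toy Calculus of Variations problem. The functional and discretization below are our own (illustrative only): minimizing J(x) = ∫ x'(t)² dt with fixed endpoints has Euler-Lagrange equation x'' = 0, i.e. the straight line, and gradient descent on the discretized functional recovers it:

```python
# Discretize J as sum of (x[i+1] - x[i])^2 / h over a grid of N steps,
# fix the endpoints, and run gradient descent on the interior nodes.
N = 20
h = 1.0 / N
x = [0.0] * (N + 1)
x[N] = 1.0                       # boundary data x(0) = 0, x(1) = 1

for _ in range(2000):
    grad = [0.0] * (N + 1)
    for i in range(1, N):
        # d/dx[i] of the discrete functional: a discrete Laplacian.
        grad[i] = 2 * (2 * x[i] - x[i - 1] - x[i + 1]) / h
    for i in range(1, N):
        x[i] -= 0.01 * grad[i]

# The minimizer is the straight line x(t) = t.
print(round(x[N // 2], 4))  # 0.5
```

Setting the discrete gradient to zero is exactly the discretized Euler-Lagrange equation, which is the multiplier-free special case of the necessary conditions discussed in the chapter.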
Chapter 10. Optimality: Minimum-Time for Linear Systems
Abstract
We consider time-invariant continuous-time linear systems

ẋ = Ax + Bu,    (10.1)

with the control-value set U being a compact convex subset of ℝ^m. As usual, a control is a measurable map ω: [0,T] → ℝ^m so that ω(t) ∈ U for almost all t ∈ [0,T]. We denote by L_∞^m(0,T) the set consisting of measurable essentially bounded maps from [0,T] into ℝ^m (when m = 1, just L_∞(0,T)) and view the set of all controls as a subset L_U^∞(0,T) ⊆ L_∞^m(0,T). In this chapter, we write simply L_U instead of L_U^∞, because, U being compact, all maps into U are essentially bounded.
Eduardo D. Sontag
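A worked instance of the minimum-time problem (the standard double-integrator example; the specific numbers are ours, not the book's): for x'' = u with |u| ≤ 1, transferring from rest at x = x0 > 0 to rest at the origin, the optimal control is bang-bang with a single switch, u = −1 until the switching curve and u = +1 afterward, with total time T = 2√x0:

```python
import math

def bang_bang(x0):
    """Closed-form bang-bang trajectory for x'' = u, |u| <= 1, (x0, 0) -> (0, 0)."""
    ts = math.sqrt(x0)            # switching time
    T = 2.0 * ts                  # total (minimum) time
    def state(t):
        if t <= ts:               # braking phase, u = -1
            return (x0 - t * t / 2.0, -t)
        s = t - ts                # accelerating phase, u = +1
        return (x0 / 2.0 - ts * s + s * s / 2.0, -ts + s)
    return T, state

T, state = bang_bang(1.0)
xT, vT = state(T)
print(T, round(xT, 12), round(vT, 12))  # 2.0 0.0 0.0
```

The control takes only extreme values of the compact set U = [−1, 1], illustrating why the chapter works with essentially bounded measurable controls into a compact control-value set.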
Backmatter
Metadata
Title
Mathematical Control Theory
Author
Eduardo D. Sontag
Copyright year
1998
Publisher
Springer New York
Electronic ISBN
978-1-4612-0577-7
Print ISBN
978-1-4612-6825-3
DOI
https://doi.org/10.1007/978-1-4612-0577-7