
About this book

This textbook, now in its second edition, results from lectures, practical problems, and workshops on Optimal Control given by the authors at Irkutsk State University (Irkutsk) and Far Eastern Federal University (Vladivostok), both in Russia, and at Kwangwoon University (Seoul, South Korea).
In this work, the authors cover the theory of linear and nonlinear systems, touching on the basic problem of establishing necessary and sufficient conditions for optimal processes. Readers will find two new chapters, with results of potential interest to researchers focused on the theory of optimal control, as well as to those interested in applications in engineering and related sciences. In addition, several improvements have been made throughout the text.
This book is structured in three parts. Part I starts with a gentle introduction to the basic concepts in Optimal Control. In Part II, the theory of linear control systems is constructed on the basis of the separation theorem and the concept of a reachability set. The authors prove the closedness of the reachability set in the class of piecewise continuous controls and address the problems of controllability, observability, identification, minimum time, and terminal control. Part III is devoted to nonlinear control systems. Using the method of variations and the Lagrange multiplier rule for nonlinear problems, the authors prove the Pontryagin maximum principle for problems with movable trajectory endpoints.
Problem sets at the end of each chapter and a list of additional tasks in the appendix are offered for students seeking to master the subject. The exercises have been chosen not only as a way to assimilate the theory but also as a way to apply this knowledge to more advanced problems.

Table of Contents

Frontmatter

Introduction

Frontmatter

Chapter 1. The Subject of Optimal Control

Abstract
Using the example of controlling a mechanical system, we illustrate the features of the optimal control problem. The main issues of optimal control theory are covered: its origins, its subject, its objectives, and its relation to other mathematical disciplines.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal

Chapter 2. Mathematical Model for Controlled Object

Abstract
The basic concepts of optimal control – controlled object, control, trajectory, process, and mathematical model – are introduced. Questions of the correctness of the mathematical model – the unambiguous description of processes – are discussed. We introduce the types of linear models and give illustrative examples.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal

Control of Linear Systems

Frontmatter

Chapter 3. Reachability Set

Abstract
For linear control systems, we prove the Cauchy formula, which represents the trajectory of the system with the help of the fundamental matrix. We list the properties of the fundamental matrix, introduce the notion of the reachability set of a linear system, and establish its basic properties: boundedness, convexity, closedness, and continuity. The relation of a special family of extreme controls to the boundary of the reachability set is shown.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal
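The Cauchy formula treated in this chapter can be sketched as follows, in assumed standard notation (not taken from the book): for a linear system $\dot{x} = A(t)x + B(t)u$ with initial state $x(t_0) = x_0$,

```latex
x(t) = \Phi(t, t_0)\, x_0 + \int_{t_0}^{t} \Phi(t, s)\, B(s)\, u(s)\, ds,
```

where $\Phi(t,s)$ is the fundamental matrix, i.e. the solution of $\partial \Phi(t,s)/\partial t = A(t)\,\Phi(t,s)$ with $\Phi(s,s) = I$.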

Chapter 4. Controllability of Linear Systems

Abstract
We study the controllability of linear systems – the existence of processes with specified conditions on the ends of a trajectory. Criteria for point-to-point and complete controllability are established. We also investigate the features of uncontrollable systems.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal
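For stationary systems, the complete controllability criterion reduces to the well-known Kalman rank condition: the system $\dot{x} = Ax + Bu$ with $x \in \mathbb{R}^n$ is completely controllable iff $\mathrm{rank}\,[B, AB, \dots, A^{n-1}B] = n$. A minimal sketch of this check (not the book's code; NumPy assumed):

```python
import numpy as np

def controllability_matrix(A, B):
    """Build the Kalman controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

def is_completely_controllable(A, B):
    """Kalman rank criterion: controllable iff the matrix has full rank n."""
    K = controllability_matrix(A, B)
    return np.linalg.matrix_rank(K) == A.shape[0]

# Double integrator x1' = x2, x2' = u: completely controllable
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(is_completely_controllable(A, B))  # True
```

Note that `matrix_rank` uses a numerical tolerance, so for badly scaled systems the verdict should be treated with care.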

Chapter 5. Minimum Time Problem

Abstract
We consider the two-point minimum time problem of translating a controlled object from one position to another along a trajectory of a linear system in minimal time. Conditions for the solvability of the problem, optimality criteria, and the relationship with Pontryagin's maximum principle are established. The stationary minimum time problem is studied in detail.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal
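In assumed standard notation (not taken from the book), the two-point minimum time problem reads:

```latex
T \to \min, \qquad \dot{x} = A(t)\,x + B(t)\,u, \qquad x(0) = x_0, \quad x(T) = x_1, \quad u(t) \in U,
```

where $U$ is the set of admissible control values and the terminal time $T$ is free.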

Chapter 6. Synthesis of the Optimal System Performance

Abstract
The concepts of synthesized control and the synthesis of a time-optimal system are introduced. The reverse motion method is described. Examples of applying this method to the synthesis of stationary second-order systems are given.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal
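The classic illustration of such a synthesis (a standard textbook example, stated here in assumed notation) is the double integrator $\ddot{x} = u$, $|u| \le 1$, steered to the origin in minimum time; the reverse motion construction yields the feedback

```latex
u(x_1, x_2) = -\,\mathrm{sign}\!\left( x_1 + \tfrac{1}{2}\, x_2\, |x_2| \right),
```

with $u = -\mathrm{sign}(x_2)$ on the switching curve $x_1 + \tfrac{1}{2}\, x_2 |x_2| = 0$, where $x_1$ is position and $x_2$ velocity.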

Chapter 7. The Observability Problem

Abstract
The problem of observability – the possibility of determining the position of a controlled object at a given time from observed data – is explored. We establish criteria of observability for homogeneous, non-homogeneous, and stationary systems. The relationship between observability and controllability is shown.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal
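For a stationary system $\dot{x} = Ax$, $y = Cx$, the observability criterion takes a rank form dual to the controllability condition: $\mathrm{rank}\,[C;\, CA;\, \dots;\, CA^{n-1}] = n$. A minimal sketch (not the book's code; NumPy assumed):

```python
import numpy as np

def observability_matrix(A, C):
    """Stack [C; CA; ...; CA^{n-1}]; full rank n means the pair (A, C) is observable."""
    n = A.shape[0]
    rows = [C]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

def is_observable(A, C):
    return np.linalg.matrix_rank(observability_matrix(A, C)) == A.shape[0]

# Double integrator with position measurement y = x1: observable
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[1.0, 0.0]])
print(is_observable(A, C))  # True
```

Measuring only the velocity ($C = [0, 1]$) makes this system unobservable, since the initial position cannot be reconstructed — a quick way to see the duality with controllability in action.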

Chapter 8. Identification Problem

Abstract
The problem of identification – the possibility of determining the unknown parameters of a controlled object from observed data – is introduced. We establish criteria for identifiability. The application of these criteria to the recovery of unknown parameters is demonstrated.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal

Control of Nonlinear Systems

Frontmatter

Chapter 9. Types of Optimal Control Problems

Abstract
We describe the specific elements of optimal control problems: objective functionals, mathematical models, and constraints. The necessary terminology is introduced. We distinguish several classes of problems: the Simplest problem, the Two-point minimum time problem, the General problem with movable ends of the integral curve, the Problem with intermediate states, and the Common problem of optimal control.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal

Chapter 10. Small Increments of a Trajectory

Abstract
With the aid of "small" variations of a fixed basis process, we construct and describe a family of "close" processes by means of linear approximation. We clarify the concepts of small variations and close processes.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal

Chapter 11. The Simplest Problem of Optimal Control

Abstract
We state the maximum principle (necessary optimality conditions) for the Simplest problem of optimal control. We discuss related issues: the continuity of the Hamiltonian, the boundary value problem, sufficiency, and application to linear systems.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal
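In one common form (sign conventions vary between textbooks; the notation here is assumed, not taken from the book), the maximum principle for the simplest problem — minimize $J = \int_{t_0}^{t_1} f^0(x,u,t)\,dt$ subject to $\dot{x} = f(x,u,t)$, $x(t_0) = x_0$, $u(t) \in U$ — asserts that along an optimal process there exists an adjoint function $\psi(t)$ such that

```latex
H(\psi, x, u, t) = \psi^{\top} f(x, u, t) - f^0(x, u, t), \qquad
\dot{\psi} = -\frac{\partial H}{\partial x}, \qquad
H(\psi(t), x(t), u(t), t) = \max_{v \in U} H(\psi(t), x(t), v, t)
```

for almost all $t \in [t_0, t_1]$.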

Chapter 12. General Optimal Control Problem

Abstract
The necessary optimality conditions in the General problem of optimal control are set out in several stages. First, for the optimal process we construct a parametric family of "close" varied processes. The requirement that the varied processes be admissible leads to a finite-dimensional auxiliary nonlinear programming problem that depends on the variation parameters. Analysis of the auxiliary problem and passage to the limit in the variation parameters yield the required necessary optimality conditions in the form of Pontryagin's maximum principle. We consider the use of the maximum principle in various particular cases of the General problem.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal

Chapter 13. Problem with Intermediate States

Abstract
The maximum principle for a problem with intermediate phase constraints is proved. The application of the results to optimal control problems with discontinuous right-hand sides of the differential equations is shown.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal

Chapter 14. Extremals Field Theory

Abstract
We introduce the concept of a field of extremals and prove the main result of this chapter on the optimality of extremals – the sufficiency of the maximum principle. We also consider the application of extremals field theory to the problem of constructing invariant systems.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal

Chapter 15. Sufficient Optimality Conditions

Abstract
We introduce the Krotov method for obtaining sufficient optimality conditions for optimal control problems with mixed constraints. The application of the sufficient conditions to the solution of particular examples and to the problem of analytical design of a regulator is illustrated. The relationship between the sufficient optimality conditions and Bellman's method of dynamic programming is considered.
Leonid T. Ashchepkov, Dmitriy V. Dolgy, Taekyun Kim, Ravi P. Agarwal
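The connection with dynamic programming mentioned above runs through the Bellman function $V(t,x)$ — the optimal cost-to-go — which in assumed standard notation satisfies the Hamilton–Jacobi–Bellman equation

```latex
-\frac{\partial V}{\partial t} = \min_{u \in U} \left[\, f^0(x, u, t) + \frac{\partial V}{\partial x}^{\!\top} f(x, u, t) \,\right],
```

with the boundary condition fixed by the cost at the terminal time. Krotov's functions generalize $V$ by relaxing its smoothness and the pointwise minimization requirement.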

Backmatter
