
2008 | Book

Optimal Control of Nonlinear Processes

With Applications in Drugs, Corruption, and Terror

Written by: Dieter Grass, Jonathan P. Caulkins, Gustav Feichtinger, Gernot Tragler, Doris A. Behrens

Publisher: Springer Berlin Heidelberg

About this Book

Dynamic optimization is rocket science – and more. This volume teaches how to harness the modern theory of dynamic optimization to solve practical problems, not only from space flight but also in emerging social applications such as the control of drugs, corruption, and terror. These innovative domains are usefully thought about in terms of populations, incentives, and interventions, concepts which map well into the framework of optimal dynamic control. This volume is designed to be a lively introduction to the mathematics and a bridge to these hot topics in the economics of crime for current scholars. We celebrate Pontryagin’s Maximum Principle – that crowning intellectual achievement of human understanding – and push its frontiers by exploring models that display multiple equilibria whose basins of attraction are separated by higher-dimensional DNSS "tipping points". That rich theory is complemented by numerical methods available through a companion website.

Table of Contents

Frontmatter

Background

1. Introduction
The methods of dynamic optimization are rocket science – and more. Quite literally: when NASA or the European Space Agency plan space missions, they use the methods described in this book to determine when to launch, how much fuel to carry, and how fast and how long to fire thrusters. That's exciting, but it's old news. Engineers have appreciated the power of this branch of mathematics for decades. What is news is the extent to which these methods are now contributing to business, economics, public health, and public safety.
The common attribute across these diverse domains, from medicine to robotics, is the need to control or modify the behavior of dynamical systems to achieve desired goals, typically maximizing (or minimizing) a performance index. The mathematics of optimal control theory makes this possible. In particular, the discovery of the Maximum Principle for optimal paths of a system is what led the way to successful designs of trajectories for space missions like Sputnik and the Apollo program and myriad applications here on Earth.
2. Continuous-Time Dynamical Systems
The importance of continuous dynamical systems for optimal control theory is twofold. First, dynamical systems already occur in the problem formulation, in which the evolution of the states to be controlled is formulated as a differential equation. Second, and more important, the techniques for calculating and analyzing the solutions of optimal control problems, in the form in which we introduce them, profoundly rely on results provided by the theory of continuous dynamical systems. Therefore in this chapter we elaborate the theory in some detail.
To help the reader not acquainted with dynamical systems, we first provide a historical introduction and then present the simple case of a one-dimensional dynamical system, introducing important concepts in an informal manner. Subsequently we restate these concepts and the required theory in a rigorous way.
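As a minimal illustration in generic notation (our sketch, not a quotation from the chapter), a one-dimensional dynamical system is an initial value problem
\dot{x}(t) = f(x(t)), \qquad x(0) = x_0, \qquad x(t) \in \mathbb{R},
where an equilibrium \hat{x} is a state with f(\hat{x}) = 0; it is locally asymptotically stable if f'(\hat{x}) < 0 and unstable if f'(\hat{x}) > 0. Concepts of this kind are introduced informally first and restated rigorously afterwards.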

Applied Optimal Control

3. Tour d'Horizon: Optimal Control
This chapter presents the fundamentals of optimal control theory. In the first section we give a short historical survey, introducing the reader to the main ideas and notions. Subsequently we introduce the standard problem of optimal control theory.
We state Pontryagin's Maximum Principle, distinguishing between the cases without and with mixed path or pure state inequality constraints. The Hamilton–Jacobi–Bellman equation is used to give an informal proof of the Maximum Principle. Then the Maximum Principle is extended to the case of an infinite planning horizon. This is followed by the presentation of a one-dimensional optimal control model, and we give an economic interpretation of the Maximum Principle.
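For orientation, the core conditions can be sketched in generic notation for the discounted problem (the statement in the chapter is more general and more precise):
\max_{u(\cdot)} \int_0^T e^{-rt} g(x(t), u(t)) \, dt \quad \text{s.t.} \quad \dot{x}(t) = f(x(t), u(t)), \quad x(0) = x_0.
With the current-value Hamiltonian H(x, u, \lambda) = g(x, u) + \lambda f(x, u), an optimal pair (x^*, u^*) admits a costate \lambda(\cdot) satisfying
u^*(t) \in \arg\max_u H(x^*(t), u, \lambda(t)), \qquad \dot{\lambda}(t) = r\lambda(t) - H_x(x^*(t), u^*(t), \lambda(t)),
together with an appropriate transversality condition (e.g., \lambda(T) = 0 when the terminal state is free).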
4. The Path to Deeper Insight: From Lagrange to Pontryagin
In the last chapter we presented Pontryagin's Maximum Principle and the Hamilton–Jacobi–Bellman equation as tools for solving optimal control problems. Now we approach this material from a different point of view. In this chapter we first describe an intuitive approach to the Maximum Principle and its results, in which we reformulate the continuous-time problem of optimal control theory as a discrete-time problem. Then we introduce the problem of static optimization under equality and inequality constraints. The Lagrangian approach is used to find first-order necessary optimality conditions for the optimal solution. From this formulation we derive a discrete form of Pontryagin's Maximum Principle. Thereafter it is easy to understand how the discrete principle might be extended to the continuous case. In the course of this informal approach we also introduce the main ideas of the calculus of variations, the forerunner of optimal control theory. This allows us not only to derive the necessary conditions of the Maximum Principle in a concise way but also to give an explanation for the jump condition for the costate in the case of pure state constraints.
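As a sketch of the static building block in generic notation (the chapter's treatment is more complete): for \max_x g(x) subject to h(x) = 0 and k(x) \ge 0, the Lagrangian and the first-order necessary conditions read
L(x, \lambda, \mu) = g(x) + \lambda^{\top} h(x) + \mu^{\top} k(x),
\nabla_x L = 0, \qquad h(x) = 0, \qquad k(x) \ge 0, \qquad \mu \ge 0, \qquad \mu^{\top} k(x) = 0.
Applying such conditions period by period to the discretized control problem, with the multipliers on the state equations playing the role of costates, yields the discrete Maximum Principle mentioned above.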
5. Multiple Equilibria, Points of Indifference, and Thresholds
This chapter addresses the interesting and important topics of multiplicity and history-dependence of optimal solutions. Multiplicity means that for given initial states there exist multiple optimal solutions; thus the decision-maker is indifferent about which to choose. This explains why such initial states are called points of indifference. In contrast, history-dependence occurs when the optimal solution depends on the problem's temporal history.
The existence of multiple equilibria has long been recognized in physics and has more recently provided an important enrichment of economic control models. In policy-making, it may be crucial to recognize whether or not a given problem has multiple stable optimal equilibria and, if so, to locate the thresholds separating the basins of attraction surrounding these different equilibria. At such a threshold, different optimal courses of action are separated. Thus, in general, starting at a threshold, a rational economic agent is indifferent between moving toward one or the other equilibrium. Small movements away from the threshold can destroy the indifference and motivate a unique optimal course of action. Among the economic consequences of such an unstable threshold is history-dependence (also called path-dependence): the optimal long-run stationary solution toward which an optimally controlled system converges can depend on the initial conditions.
In the following section we present general results on discounted, autonomous problems, which are applied to a typical problem in the subsequent section. Here multiplicity and history-dependence are explained by means of a simple one-state model that already exhibits many of the interesting features connected to indifference points. In the next section we introduce a definition of multiplicity and history-dependence for optimal control problems with an arbitrary number of states.
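In generic notation, the central object of the chapter can be sketched as follows (an informal paraphrase, not the formal definition given there): an initial state \hat{x} is an indifference (DNSS) point if there exist two distinct optimal solutions
x_1^*(\cdot) \ne x_2^*(\cdot), \qquad x_1^*(0) = x_2^*(0) = \hat{x},
attaining the same optimal objective value V(\hat{x}) but converging to different long-run equilibria. Small perturbations of \hat{x} single out one of the two solutions, which is precisely the history-dependence described above.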

Advanced Topics

6. Higher-Dimensional Models
The theory of optimal control is wonderful. Now, how does one actually apply it? In particular, how does one compute the solutions numerically, including for higher-dimensional systems (since solving these systems can be substantially harder than solving one-dimensional models)? This third part of the book sheds light on the practical side of applying optimal control theory.
By “practical side” we still mean mathematical calculations, not political or organizational issues associated with persuading decision-makers to implement a solution, or even statistical issues associated with data collection and model parameterization. Too often, however, texts on optimal control tell the reader what to do but not how, or even when and why, to do it.
The “practical side” of how to solve optimal control problems involves writing computer programs because few models have analytic solutions. The “when and why” questions concern how to organize all of the components of an overall analysis. Given a particular mathematical formulation, what does one do first? What second? What does one do when one analytic strategy hits a dead end? We illustrate how to answer such questions with three innovative examples that are worked in depth. We offer one example each from the three main application areas discussed in this book: drug control, corruption, and counter-terror.
This chapter presents the model formulations and the meta-analytic strategies employed to analyze them. Points in the analysis that require nontrivial computations are flagged, with a forward reference to the following chapter on numerical methods. In that subsequent Chap. 7, the discussion walks the reader step by step through the details of how to do computations of that sort, including references to the computer code available on the companion website for performing these numerical calculations.
7. Numerical Methods for Discounted Systems of Infinite Horizon
In the previous chapters we presented both the theoretical background of optimal control theory and some interesting examples from the application fields of drugs, corruption, and terror. What remains is the question of how to actually compute the optimal solutions. How should we apply the theoretical results to derive algorithms for numerical calculations? A broad range of numerical algorithms can be used to solve optimal control problems, and presenting the underlying theory is far beyond the scope of this book. We therefore restrict our considerations to the special class of autonomous, discounted, infinite time horizon problems. We concentrate on this restricted class of optimal control models because they are the most commonly investigated problems in an economic context. Furthermore, we provide a collection of MATLAB files (the OCMat toolbox) enabling the numerical calculation of such optimal control problems.
Several approaches can be chosen to solve optimal control problems. The method presented here uses Pontryagin's Maximum Principle to establish the corresponding canonical system. In essence, solving an optimal control problem is translated into the problem of analyzing the canonical system (see Definition 3.6). Before we go into further detail we have to introduce some notational specifics and general techniques that are used throughout this chapter.
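In generic notation, the canonical system for this problem class takes the following form (a sketch; Definition 3.6 gives the precise statement). With the current-value Hamiltonian H(x, u, \lambda) = g(x, u) + \lambda f(x, u) and the Hamiltonian-maximizing control u^*(x, \lambda) \in \arg\max_u H(x, u, \lambda), the canonical system reads
\dot{x} = f(x, u^*(x, \lambda)), \qquad \dot{\lambda} = r\lambda - H_x(x, u^*(x, \lambda), \lambda).
Its equilibria are the candidates for optimal long-run steady states, and the numerical task is essentially to locate them and to compute the stable paths converging to them.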
8. Extensions of the Maximum Principle
In the preceding seven chapters we explored the basic principles of optimal control theory applied to problems typically arising in economics and management science. In particular we illustrated how the interplay of analytical and numerical methods can be used to obtain insight when fighting drugs, corruption, and terror. Before concluding our tour we should briefly sketch several important extensions of “standard-type” optimal control theory.
Usually, in optimal control theory, systems evolve continuously in time, but in reality sudden discontinuous changes may happen at certain points in time. Those changes may pertain not only to the model's parameter values but also to sudden modifications of the model's functional forms. In Sect. 8.1 we give a short introduction to the important and growing field of multi-stage models. Section 8.2 provides a brief introduction to the fundamentals of differential games. The Nash and the Stackelberg concepts are introduced, and examples from the fields of corruption and terrorism are provided to illustrate the flavor of differential games.
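For flavor, a two-player differential game can be sketched in generic notation (an illustration, not the section's formal setup): each player i = 1, 2 chooses a control u_i to maximize
\int_0^\infty e^{-r_i t} g_i(x(t), u_1(t), u_2(t)) \, dt \quad \text{s.t.} \quad \dot{x}(t) = f(x(t), u_1(t), u_2(t)).
In a Nash equilibrium neither player can improve his or her payoff by deviating unilaterally; in a Stackelberg equilibrium one player, the leader, commits to a strategy first and the other, the follower, best-responds.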
The homogeneity of economic agents is admittedly a fiction. There are several avenues to introduce heterogeneity. One of them has been discussed in Chap. 6, namely multi-compartment models. Partial differential equations (PDEs) are another powerful tool for distinguishing individuals according to various characteristics evolving over time. In Sect. 8.3 we present some basic facts about distributed parameter control.
Finally, in Sect. 8.4 we mention a number of other important topics in optimal control modeling, including stochastic models, impulse control, delayed systems, and nonsmooth systems.

Appendices

9. Mathematical Background
We aim to keep the theory presented in this textbook strictly target-oriented and therefore confine ourselves to the spaces, terms, and concepts necessary to understand nonlinear dynamics and optimal control theory. The reader seeking more information about the mathematics behind our straightforward approach is referred to additional literature in areas that go beyond the scope of this textbook. Thus this section is a concise survey of the basics, serving as a minimum standard.
10. Derivations and Proofs of Technical Results
This appendix summarizes technicalities and proofs which may be of interest to the reader but would have interrupted the flow of exposition in the main text.
Backmatter
Metadata
Title
Optimal Control of Nonlinear Processes
Written by
Dieter Grass
Jonathan P. Caulkins
Gustav Feichtinger
Gernot Tragler
Doris A. Behrens
Copyright Year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-77647-5
Print ISBN
978-3-540-77646-8
DOI
https://doi.org/10.1007/978-3-540-77647-5
