
About this book

The purpose of the present book is to offer an up-to-date account of the theory of viscosity solutions of first order partial differential equations of Hamilton-Jacobi type and its applications to optimal deterministic control and differential games. The theory of viscosity solutions, initiated in the early 80's by the papers of M.G. Crandall and P.L. Lions [CL81, CL83], M.G. Crandall, L.C. Evans and P.L. Lions [CEL84] and P.L. Lions' influential monograph [L82], provides an extremely convenient PDE framework for dealing with the lack of smoothness of the value functions arising in dynamic optimization problems. The leading theme of this book is a description of the implementation of the viscosity solutions approach to a number of significant model problems in optimal deterministic control and differential games. We have tried to emphasize the advantages offered by this approach in establishing the well-posedness of the corresponding Hamilton-Jacobi equations and to point out its role (when combined with various techniques from optimal control theory and nonsmooth analysis) in the important issue of feedback synthesis.

Table of contents

Frontmatter

Chapter I. Outline of the main ideas on a model problem

Abstract
The purpose of this introductory chapter is to motivate the relevance of the notion of viscosity solution of partial differential equations of the form
$$F(x,\,u(x),\,Du(x))\, = \,0$$
in a Dynamic Programming approach to deterministic optimal control theory.
Martino Bardi, Italo Capuzzo-Dolcetta

Chapter II. Continuous viscosity solutions of Hamilton-Jacobi equations

Abstract
This chapter is devoted to the basic theory of continuous viscosity solutions of the Hamilton-Jacobi equation
$$F(x,\,u(x),\,Du(x)) = 0, \qquad x \in \Omega ,$$
(HJ)
where Ω is an open domain of ℝ^N and the Hamiltonian F = F(x, r, p) is a continuous real valued function on Ω × ℝ × ℝ^N.
Martino Bardi, Italo Capuzzo-Dolcetta
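For orientation, the notion named in this abstract can be recalled in its standard form (the following is a sketch of the usual definition, not a quotation from the book): a continuous function u on Ω is a viscosity subsolution of (HJ) if, for every test function φ ∈ C¹(Ω) and every local maximum point x₀ ∈ Ω of u − φ,
$$F(x_0,\,u(x_0),\,D\varphi(x_0)) \,\le\, 0,$$
and a viscosity supersolution if, for every local minimum point x₁ ∈ Ω of u − φ,
$$F(x_1,\,u(x_1),\,D\varphi(x_1)) \,\ge\, 0.$$
A viscosity solution is a function that is simultaneously a subsolution and a supersolution.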

Chapter III. Optimal control problems with continuous value functions: unrestricted state space

Abstract
In this Chapter we consider several optimal control problems whose value function is defined and continuous on the whole space ℝ N . This setting is suitable for those problems where no a priori constraint is imposed on the state of the control system. For all the problems considered we establish the Dynamic Programming Principle and derive from it the appropriate Hamilton-Jacobi-Bellman equation for the value function. This allows us to apply the theory of Chapter II, and some extensions of it, to prove that the value function can in fact be characterized as the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation.
Martino Bardi, Italo Capuzzo-Dolcetta
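A representative special case of the problems described here is the infinite horizon discounted problem (the notation below is illustrative and may differ from the book's): for a control system y′ = f(y, a), y(0) = x, with controls a(·) taking values in a set A, discount rate λ > 0 and running cost ℓ, the value function is
$$v(x) \,=\, \inf_{a(\cdot)} \int_0^{\infty} e^{-\lambda t}\, \ell\big(y_x(t),\,a(t)\big)\, dt ,$$
and the associated Hamilton-Jacobi-Bellman equation reads, at least formally,
$$\lambda v(x) \,+\, \sup_{a \in A}\, \big\{ -f(x,a)\cdot Dv(x) \,-\, \ell(x,a) \big\} \,=\, 0, \qquad x \in \mathbb{R}^N .$$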

Chapter IV. Optimal control problems with continuous value functions: restricted state space

Abstract
In this chapter we continue the study of optimal control problems with continuous value functions and consider cost functionals involving the exit time from a given domain, in particular time-optimal control, and infinite horizon problems with constraints on the state variables. The continuity of the value function for these problems is not as easy as in the previous chapter. For time-optimal control this is essentially the problem of small-time local controllability. We give the proof of just a few simple results on this topic, and state without proof several others. For each problem we characterize the value function as the unique viscosity solution of the appropriate Hamilton-Jacobi-Bellman equation and boundary conditions. We do not give all the applications of this theory, as verification functions and conditions of optimality: most of them can be obtained by the arguments of Chapter III and are left as exercises for the reader.
Martino Bardi, Italo Capuzzo-Dolcetta

Chapter V. Discontinuous viscosity solutions and applications

Abstract
In this chapter we extend the theory of continuous viscosity solutions developed in Chapter II to include solutions that are not necessarily continuous. This has two motivations. The first is that many optimal control problems have a discontinuous value function and we want to extend to these problems the results of Chapters III and IV, in particular the characterization of the value function as the unique solution of the Hamilton-Jacobi-Bellman equation with suitable boundary conditions. The second motivation is more technical: viscosity solutions are stable with respect to certain relaxed semi-limits, that we call weak limits in the viscosity sense, which are semicontinuous sub- or supersolutions. These weak limits are used extensively in Chapters VI and VII to study the convergence of approximation schemes and several asymptotic limits, even for control problems where the value function is continuous.
Martino Bardi, Italo Capuzzo-Dolcetta
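The relaxed semi-limits mentioned in this abstract are usually defined as follows (a standard formulation, given here for orientation): for a family of functions u_ε,
$$\overline{u}(x) \,:=\, \limsup_{\varepsilon \to 0,\; y \to x} u_\varepsilon(y), \qquad \underline{u}(x) \,:=\, \liminf_{\varepsilon \to 0,\; y \to x} u_\varepsilon(y).$$
Under suitable assumptions, the upper semi-limit is an upper semicontinuous subsolution and the lower semi-limit a lower semicontinuous supersolution of the limit equation.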

Chapter VI. Approximation and perturbation problems

Abstract
In this chapter we consider some approximation and perturbation problems for Hamilton-Jacobi equations.
Martino Bardi, Italo Capuzzo-Dolcetta

Chapter VII. Asymptotic problems

Abstract
In this chapter we consider several asymptotic problems in optimal control. Our approach is to pass to the limit as the relevant parameter goes to zero in the Hamilton-Jacobi-Bellman equation satisfied by the value function and characterize the limit value function as the viscosity solution of the limit equation.
Martino Bardi, Italo Capuzzo-Dolcetta

Chapter VIII. Differential Games

Abstract
In this chapter we consider two-person zero-sum differential games. Let us describe them.
Martino Bardi, Italo Capuzzo-Dolcetta
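The value functions of such games are tied, at least formally, to the Isaacs equations; as a standard illustration (with notation that may differ from the book's, and up to sign and min/max conventions), for an infinite horizon game with dynamics f(x, a, b) and running cost ℓ(x, a, b), one value function is associated with
$$\lambda v(x) \,+\, \max_{b \in B}\, \min_{a \in A}\, \big\{ -f(x,a,b)\cdot Dv(x) \,-\, \ell(x,a,b) \big\} \,=\, 0,$$
and the other with the analogous equation with min and max interchanged; the two coincide when the Isaacs (min-max) condition holds.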

Backmatter
