
About this book

This book offers a systematic introduction to optimal stochastic control theory via the dynamic programming principle, which is a powerful tool for analyzing control problems.

First we consider completely observable control problems with finite horizons. Using a time discretization we construct a nonlinear semigroup related to the dynamic programming principle (DPP), whose generator provides the Hamilton–Jacobi–Bellman (HJB) equation, and we characterize the value function via the nonlinear semigroup, in addition to characterizing it through viscosity solution theory. When we control not only the dynamics of a system but also the terminal time of its evolution, control-stopping problems arise. These problems are treated within the same framework, via the nonlinear semigroup, and the results are applicable to the American option pricing problem.
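
For orientation, in a generic notation (which may differ from the book's), consider the value function \(V(t,x) =\inf _{\gamma (\cdot )}E[\int _{t}^{T}f(s,X(s),\gamma (s))\,ds + g(X(T))]\) of a controlled diffusion with drift \(b\) and diffusion coefficient \(\sigma\). The DPP then takes the semigroup-like form
$$\displaystyle{V(t,x) =\inf _{\gamma (\cdot )}E\Big[\int _{t}^{t+h}f(s,X(s),\gamma (s))\,ds + V(t + h,X(t + h))\Big],\qquad 0 \leq h \leq T - t,}$$
and letting \(h \downarrow 0\) formally yields the HJB equation
$$\displaystyle{\partial _{t}V +\inf _{a}\Big\{\frac{1}{2}\mathrm{tr}\,\big(\sigma \sigma ^{\top }(t,x,a)D^{2}V \big) + b(t,x,a) \cdot DV + f(t,x,a)\Big\} = 0,\qquad V(T,x) = g(x).}$$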

Zero-sum two-player time-homogeneous stochastic differential games and viscosity solutions of the Isaacs equations arising from such games are studied via a nonlinear semigroup related to DPP (the min-max principle, to be precise). Using semi-discretization arguments, we construct the nonlinear semigroups whose generators provide lower and upper Isaacs equations.
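
To fix ideas, in one common convention (sketched here; the book's sign and player conventions may differ) the lower and upper Isaacs equations differ only in the order of optimization in the Hamiltonian:
$$\displaystyle{\partial _{t}V^{\pm } + H^{\pm }(x,DV^{\pm },D^{2}V^{\pm }) = 0,\qquad H^{-} =\min _{z}\max _{y}\big\{L^{y,z}V + f(x,y,z)\big\},\quad H^{+} =\max _{y}\min _{z}\big\{L^{y,z}V + f(x,y,z)\big\},}$$
where \(L^{y,z}\) is the generator of the diffusion controlled by the two players. When the min–max and max–min Hamiltonians coincide (the Isaacs condition), the lower and upper values coincide and the game has a value.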

Concerning partially observable control problems, we turn to stochastic parabolic equations driven by colored Wiener noises, in particular the Zakai equation. The existence, uniqueness, and regularity of solutions, as well as Itô's formula, are established. A control problem for the Zakai equations gives rise to a nonlinear semigroup whose generator provides the HJB equation on a Banach space. The value function turns out to be the unique viscosity solution of this HJB equation under mild conditions.

This edition provides a more general treatment of the topic than the earlier book Lectures on Stochastic Control Theory (ISI Lecture Notes 9), which dealt with time-homogeneous cases. Here, for finite time-horizon control problems, the DPP is formulated as a one-parameter nonlinear semigroup, whose generator provides the HJB equation, by using a time-discretization method. The semigroup corresponds to the value function and is characterized as the envelope of Markovian transition semigroups of responses for constant control processes. Besides finite time-horizon controls, the book discusses control-stopping problems in the same framework.

Table of contents

Frontmatter

Chapter 1. Stochastic Differential Equations

Abstract
The purpose of this chapter is to review elements of the theory of stochastic differential equations, based on Wiener processes, for use in the subsequent chapters. This theory was founded by K. Itô in 1942 (Itô, Zenkoku Shijo Sugaku Danwakai 244:1352–1400, 1942; Itô, On stochastic differential equations. Memoirs of the American Mathematical Society, vol 4. AMS, New York City, 1951). His aim was to construct Markov processes governed by Kolmogorov's differential equations via Wiener processes and to analyze their sample paths. Since then, stochastic differential equations have been used to describe dynamical processes in random environments across many fields. Here we consider stochastic differential equations with random coefficients, because we aim at studying stochastic control problems. The chapter is organized as follows. Section 1.1 contains preliminaries: the basic definitions and results on stochastic processes are collected for later use. Stochastic differential equations and stochastic analysis are introduced in Sect. 1.2. Section 1.3 deals with asset pricing problems as an application of the previous results.
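
For reference, the basic object of the chapter can be sketched in generic notation as the SDE
$$\displaystyle{dX(t) = b(t,X(t),\omega )\,dt +\sigma (t,X(t),\omega )\,dW(t),\qquad X(0) = x,}$$
with random (adapted) coefficients \(b\) and \(\sigma\) and a Wiener process \(W\); for \(f \in C^{1,2}\), Itô's formula reads
$$\displaystyle{df(t,X(t)) =\Big (\partial _{t}f + b \cdot Df + \frac{1}{2}\mathrm{tr}\,(\sigma \sigma ^{\top }D^{2}f)\Big)\,dt + Df \cdot \sigma \,dW(t).}$$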
Makiko Nisio

Chapter 2. Optimal Control for Diffusion Processes

Abstract
This chapter deals with completely observable stochastic control problems for diffusion processes described by SDEs. The decision maker chooses an optimal decision from among all admissible ones to achieve a given goal. Namely, for a control process, its response evolves according to a (controlled) SDE, and a payoff on a finite time interval is given. The controller wants to minimize (or maximize) the payoff by choosing an appropriate control process. Here we consider three types of control processes:
1. \((\mathcal{F}_{t})\)-progressively measurable processes.
2. Brownian-adapted processes.
3. Feedback controls.
In order to analyze these problems, we mainly use the dynamic programming principle (DPP) for the value function. The remainder of this chapter is organized as follows. Section 2.1 presents the formulation of control problems and basic properties of value functions, as preliminaries for later sections. Section 2.2 focuses on the DPP. Although the DPP is known as a two-stage optimization method, we will formulate it by using a semigroup and characterize the value function via that semigroup; a sketch follows below. In Sect. 2.3, we deal with verification theorems, which give recipes for finding optimal Markovian policies. Section 2.4 considers a class of Merton-type optimal investment models as an application of the previous results.
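
As a sketch of the semigroup formulation (in generic time-homogeneous notation, which may differ from the book's), one sets
$$\displaystyle{(S_{t}\phi )(x) =\inf _{\gamma (\cdot )}E\Big[\int _{0}^{t}f(X(s),\gamma (s))\,ds +\phi (X(t))\Big],}$$
so that the DPP becomes the semigroup property \(S_{t+s} = S_{t}S_{s}\), and the formal generator \(\lim _{t\downarrow 0}t^{-1}(S_{t}\phi -\phi ) =\inf _{a}\{L^{a}\phi + f(\cdot ,a)\}\) is exactly the HJB operator, where \(L^{a}\) denotes the generator of the diffusion under the constant control \(a\).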
Makiko Nisio

Chapter 3. Viscosity Solutions for HJB Equations

Abstract
The theory of viscosity solutions originated with M.G. Crandall and P.L. Lions in the early 1980s for the Hamilton–Jacobi equations, and P.L. Lions later developed it for the HJB equations (Lions, Commun PDE 8:1101–1134, 1983; Acta Math 161:243–278, 1988; Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in infinite dimensions. Part II: Optimal control of Zakai equation. In: Da Prato, Tubaro (eds) Stochastic partial differential equations and applications II. Lecture notes in mathematics, vol 1390. Springer, Berlin/Heidelberg, pp 147–170, 1989; J Funct Anal 86:1–18, 1989). In Chap. 2, we have seen the relation between the value function and the HJB equations. If the value function is smooth, then it provides a classical solution of the HJB equations. Unfortunately, when the diffusion coefficient is degenerate, smoothness does not necessarily hold, even in simple cases, and the HJB equations may in general have no classical solution either. However, the theory of viscosity solutions gives powerful tools for studying stochastic control problems. For viscosity solutions of the HJB equations, we require only continuity of a solution, not differentiability. It has been shown that under mild conditions the value function is the unique viscosity solution of the HJB equation. We will revisit this fact in terms of semigroups in Sect. 3.1.3. This chapter is organized as follows. In Sects. 3.1 and 3.2, we recall some basic results on viscosity solutions for (nonlinear) parabolic equations for later use. In Sect. 3.3 we consider stochastic optimal control-stopping problems in a framework similar to that of finite time horizon controls.
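
For the reader's convenience, the standard definition runs as follows (with sign conventions matching a backward HJB equation \(\partial _{t}v + H(x,Dv,D^{2}v) = 0\); conventions vary across texts). A continuous function \(v\) is a viscosity subsolution if, for every smooth test function \(\varphi\) and every local maximum point \((\hat{t},\hat{x})\) of \(v -\varphi\),
$$\displaystyle{\partial _{t}\varphi (\hat{t},\hat{x}) + H(\hat{x},D\varphi (\hat{t},\hat{x}),D^{2}\varphi (\hat{t},\hat{x})) \geq 0,}$$
a viscosity supersolution if the reverse inequality holds at every local minimum point of \(v -\varphi\), and a viscosity solution if it is both; no differentiability of \(v\) itself is required.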
Makiko Nisio

Chapter 4. Stochastic Differential Games

Abstract
In this chapter, we will deal with zero-sum two-player time-homogeneous stochastic differential games and viscosity solutions of the Isaacs equations arising from such games, via the dynamic programming principle. In Sect. 4.1, we are concerned with basic concepts and definitions and we introduce stochastic differential games, referring to Fleming and Soner (Controlled Markov Processes and Viscosity Solutions, 2nd edn. Springer, New York, 2006), Chapter XI. Then, using a semi-discretization argument, we study the DPP for lower- and upper-value functions in Sect. 4.2. In Sect. 4.3, we will consider the Isaacs equations via semigroups related to DPP. In Sect. 4.4, we consider a link between stochastic controls and differential games via risk-sensitive controls.
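
As a pointer to the link in Sect. 4.4 (a standard formulation, not necessarily the book's exact setup), a risk-sensitive control problem minimizes an exponential-of-integral criterion such as
$$\displaystyle{J^{\theta }(\gamma (\cdot )) = \frac{1}{\theta }\log E\Big[\exp \Big(\theta \int _{0}^{T}f(X(s),\gamma (s))\,ds\Big)\Big],\qquad \theta > 0;}$$
the logarithmic transformation of the associated HJB equation produces a min–max Hamiltonian, so the risk-sensitive problem can be read as a zero-sum game in which a second, adversarial player controls a disturbance.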
Makiko Nisio

Chapter 5. Stochastic Parabolic Equations

Abstract
This chapter is devoted to stochastic evolution equations in Hilbert spaces, in particular stochastic parabolic type equations of the form
$$\displaystyle{du(t,x,\omega ) = \mathcal{L}(t,\omega )u(t,x,\omega )\,dt + \mathcal{M}(t,\omega )u(t,x,\omega )\,dW_{Q}(t),}$$
where \(\mathcal{L}\) and \(\mathcal{M}\) are second-order elliptic and first-order differential operators, respectively, and \(W_{Q}\) is a colored Wiener process (see Example 5.1).
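For orientation, a colored Wiener process with trace-class covariance operator \(Q\) admits the standard expansion (as in the Da Prato–Zabczyk framework; the book's Example 5.1 may present it differently)
$$\displaystyle{W_{Q}(t) =\sum _{k=1}^{\infty }\sqrt{\lambda _{k}}\,\beta _{k}(t)\,e_{k},\qquad Qe_{k} =\lambda _{k}e_{k},\quad \mathrm{tr}\,Q =\sum _{k}\lambda _{k} < \infty ,}$$
where \(\{e_{k}\}\) is an orthonormal basis of eigenvectors of \(Q\) and \(\{\beta _{k}\}\) are independent one-dimensional standard Brownian motions.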
These equations are generalizations of finite-dimensional SDEs and appear in the study of random phenomena in the natural sciences and of the unnormalized conditional probability of finite-dimensional diffusion processes (see Sect. 5.5), related to the filtering equations derived in Fujisaki et al. (Osaka J Math 9:19–40, 1972) and Kushner (J Differ Equ 3:179–190, 1967).
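The filtering connection can be summarized (in standard notation) by the Kallianpur–Striebel formula: if \(u(t)\) denotes the unnormalized conditional density solving the Zakai equation and \(\mathcal{Y}_{t}\) the observation filtration, then
$$\displaystyle{E[\phi (X(t))\mid \mathcal{Y}_{t}] = \frac{\langle u(t),\phi \rangle }{\langle u(t),1\rangle }}$$
for suitable test functions \(\phi\).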
In Sect. 5.1 we collect basic definitions and results for Hilbert space-valued processes; in particular, for continuous martingales, quadratic variations and correlation operators are treated. Stochastic integrals are introduced in Sect. 5.2. Section 5.3 is devoted to the study of stochastic parabolic equations from the viewpoint of Hilbert space-valued SDEs, following Rozovskii (Stochastic evolution systems. Kluwer Academic, Dordrecht/Boston, 1990). By using the results presented, we also consider a semilinear stochastic parabolic equation with Lipschitz nonlinearity in Sect. 5.3.4. Section 5.4 deals with Itô’s formula and in Sect. 5.5 Zakai equations related to filtering problems are given.
Makiko Nisio

Chapter 6. Optimal Controls for Zakai Equations

Abstract
This chapter is an application of the previous one. In Sect. 5.5, we introduced the (controlled) Zakai equation, which is a stochastic linear parabolic equation with a Brownian-adapted control process. By using the results in Chap. 5, we will study control problems for Zakai equations related to partially observable diffusions. The control problem for partially observable diffusions turns out to be a completely observable control problem on a Hilbert space, by using the unnormalized conditional probability density given by the Zakai equation (cf. Bensoussan A, Stochastic control of partially observable systems. Cambridge University Press, Cambridge/New York, 1992; Lions, Commun PDE 8:1101–1134, 1983, I, II; Gozzi and Świech, J Funct Anal 172:466–510, 2000). Section 6.1 is devoted to the analysis of controlled Zakai equations. In Sect. 6.2, we formulate control problems for a system governed by Zakai equations, in the same way as in Chap. 2. When a control process \(\gamma (\cdot )\) is chosen, the cost on a time interval \([T_{0},t]\) is given by \(\int _{T_{0}}^{t}r(u^{\gamma (\cdot )}(s),\gamma (s))\,ds + F(u^{\gamma (\cdot )}(t))\), where \(u^{\gamma (\cdot )}(\cdot )\) is the response to \(\gamma (\cdot )\). By choosing a suitable control process, we want to minimize (or maximize) the expectation of this cost. In Sect. 6.3 we formulate the DPP via the semigroup constructed from the value function, whose generator is related to the HJB equation on a Hilbert space. The viscosity solution of the HJB equation is introduced following Gozzi and Świech (J Funct Anal 172:466–510, 2000) in Sect. 6.4. Example 6.1 makes explicit the connection between controlled Zakai equations and the control of partially observable diffusions.
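
In this formulation, the value function sketched from the cost functional above (the book's precise definition may differ in details) is
$$\displaystyle{V(t,\phi ) =\inf _{\gamma (\cdot )}E\Big[\int _{T_{0}}^{t}r(u^{\gamma (\cdot )}(s),\gamma (s))\,ds + F(u^{\gamma (\cdot )}(t))\Big],\qquad u^{\gamma (\cdot )}(T_{0}) =\phi ,}$$
with the infimum taken over Brownian-adapted control processes; the semigroup of Sect. 6.3 acts on functions of the Hilbert-space variable \(\phi\), and its generator formally yields the HJB equation there.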
Makiko Nisio

Backmatter
