
2013 | Book

Optimal Stochastic Control, Stochastic Target Problems, and Backward SDE


About this book

This book collects some recent developments in stochastic control theory with applications to financial mathematics. We first address standard stochastic control problems from the viewpoint of the recently developed weak dynamic programming principle. Special emphasis is put on regularity issues and, in particular, on the behavior of the value function near the boundary. We then provide a quick review of the main tools from viscosity solutions, which allow us to overcome all regularity problems. We next address the class of stochastic target problems, which extends the standard stochastic control problems in a nontrivial way. Here the theory of viscosity solutions plays a crucial role in the derivation of the dynamic programming equation as the infinitesimal counterpart of the corresponding geometric dynamic programming principle. The various developments of this theory have been stimulated by applications in finance and by relevant connections with geometric flows: the second-order extension was motivated by illiquidity modeling, and the controlled-loss version was introduced following the problem of quantile hedging. The third part is an overview of backward stochastic differential equations and their extensions to the quadratic case.

Table of Contents

Frontmatter
Chapter 1. Introduction
Abstract
These notes have been prepared for the graduate course taught at the Fields Institute, Toronto, during the thematic program on quantitative finance which was held from January to June, 2010.
Nizar Touzi
Chapter 2. Conditional Expectation and Linear Parabolic PDEs
Abstract
Throughout this chapter, \((\Omega,\mathcal{F}, \mathbb{F},P)\) is a filtered probability space with filtration \(\mathbb{F} =\{ {\mathcal{F}}_{t},\ t \geq 0\}\) satisfying the usual conditions. Let \(W =\{ W_{t},\ t \geq 0\}\) be a Brownian motion valued in \({\mathbb{R}}^{d}\), defined on \((\Omega,\mathcal{F}, \mathbb{F},P)\).
Nizar Touzi
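The link announced in the chapter title is of Feynman–Kac type; as a hedged illustration in generic notation of our own (the functions \(b, \sigma, k, g\) and the representation below are not quoted from the book), if \(v\) is a smooth solution, with suitable growth, of the linear parabolic equation
\[ \partial_{t}v + b\cdot Dv + \tfrac{1}{2}\mathrm{Tr}\big[\sigma\sigma^{\mathrm{T}}D^{2}v\big] - kv = 0 \ \text{ on } [0,T)\times{\mathbb{R}}^{d}, \qquad v(T,\cdot) = g, \]
then, under standard conditions, it admits the stochastic representation
\[ v(t,x) = \mathbb{E}\Big[e^{-\int_{t}^{T}k(s,X_{s})\,ds}\,g(X_{T})\ \Big|\ X_{t}=x\Big], \qquad dX_{s} = b(s,X_{s})\,ds + \sigma(s,X_{s})\,dW_{s}. \]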
Chapter 3. Stochastic Control and Dynamic Programming
Abstract
In this chapter, we assume that the filtration \(\mathbb{F}\) is the \(\mathbb{P}\)-augmentation of the canonical filtration of the Brownian motion W. This restriction is only needed in order to simplify the presentation of the proof of the dynamic programming principle.
Nizar Touzi
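For orientation, a schematic version of the objects involved, in generic notation of our own rather than the book's: for a controlled diffusion \(dX_{s} = b(s,X_{s},\nu_{s})\,ds + \sigma(s,X_{s},\nu_{s})\,dW_{s}\) and value function
\[ V(t,x) = \sup_{\nu}\ \mathbb{E}\Big[\int_{t}^{T}f(s,X_{s}^{t,x,\nu},\nu_{s})\,ds + g(X_{T}^{t,x,\nu})\Big], \]
the dynamic programming principle formally leads, wherever \(V\) is smooth, to the Hamilton–Jacobi–Bellman equation
\[ -\partial_{t}V - \sup_{u\in U}\Big\{b(t,x,u)\cdot DV + \tfrac{1}{2}\mathrm{Tr}\big[\sigma\sigma^{\mathrm{T}}(t,x,u)D^{2}V\big] + f(t,x,u)\Big\} = 0, \qquad V(T,\cdot) = g. \]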
Chapter 4. Optimal Stopping and Dynamic Programming
Abstract
As in the previous chapter, we assume here that the filtration \(\mathbb{F}\) is defined as the \(\mathbb{P}\)-augmentation of the canonical filtration of the Brownian motion W defined on the probability space \((\Omega,\mathcal{F}, \mathbb{P})\).
Nizar Touzi
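Again for orientation only (a generic formulation, not quoted from the chapter): the optimal stopping value
\[ V(t,x) = \sup_{\tau\in\mathcal{T}_{[t,T]}}\mathbb{E}\big[g\big(\tau, X_{\tau}^{t,x}\big)\big], \]
where \(\mathcal{T}_{[t,T]}\) denotes the stopping times with values in \([t,T]\), is formally associated with the obstacle form of the dynamic programming equation
\[ \min\big\{-\partial_{t}V - \mathcal{A}V,\ V - g\big\} = 0, \qquad V(T,\cdot) = g(T,\cdot), \]
with \(\mathcal{A}\) the infinitesimal generator of \(X\).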
Chapter 5. Solving Control Problems by Verification
Abstract
In this chapter, we present a general argument, based on Itô’s formula, which allows us to show that some “guess” of the value function is indeed equal to the unknown value function. Namely, given a smooth solution v of the dynamic programming equation, we give sufficient conditions which allow us to conclude that v coincides with the value function V. This is the so-called verification argument. The statement of this result is heavy, but its proof is simple and relies essentially on Itô’s formula. However, depending on the problem at hand, the verification of the conditions which must be satisfied by the candidate solution can be difficult.
Nizar Touzi
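A compressed sketch of how such an argument typically runs, in our own schematic notation rather than the book's precise statement: if \(v\) is a smooth supersolution of the HJB equation with \(v(T,\cdot)\geq g\), then for any admissible control \(\nu\), Itô's formula and the supersolution property give (modulo integrability and localization)
\[ v(t,x) = \mathbb{E}\Big[v\big(T,X_{T}^{t,x,\nu}\big) - \int_{t}^{T}\big(\partial_{t}v + \mathcal{L}^{\nu_{s}}v\big)\big(s,X_{s}^{t,x,\nu}\big)\,ds\Big] \geq \mathbb{E}\Big[g\big(X_{T}^{t,x,\nu}\big) + \int_{t}^{T}f\big(s,X_{s}^{t,x,\nu},\nu_{s}\big)\,ds\Big], \]
hence \(v \geq V\); the reverse inequality is obtained by exhibiting a control which attains the supremum in the HJB equation.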
Chapter 6. Introduction to Viscosity Solutions
Abstract
Throughout this chapter, we provide the main tools from the theory of viscosity solutions for the purpose of our applications to stochastic control problems. For a deeper presentation, we refer to the excellent overview paper by Crandall et al. [14].
Nizar Touzi
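For readers meeting the notion for the first time, here is the standard definition in a generic form (ours, not copied from the chapter): an upper semicontinuous function \(u\) is a viscosity subsolution of a degenerate elliptic equation \(F(x,u,Du,D^{2}u) = 0\) if, for every smooth test function \(\varphi\) and every local maximizer \(x_{0}\) of \(u - \varphi\),
\[ F\big(x_{0}, u(x_{0}), D\varphi(x_{0}), D^{2}\varphi(x_{0})\big) \leq 0. \]
Viscosity supersolutions are defined symmetrically, with local minimizers and the reverse inequality, and a viscosity solution is both.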
Chapter 7. Dynamic Programming Equation in the Viscosity Sense
Abstract
We now turn to the stochastic control problem introduced in Sect. 3.1. The chief goal of this chapter is to use the notion of viscosity solutions in order to relax the smoothness condition on the value function V in the statement of Propositions 3.4 and 3.5. Notice that the following proofs are obtained by a slight modification of the corresponding proofs in the smooth case.
Nizar Touzi
Chapter 8. Stochastic Target Problems
Abstract
In this chapter, we study a special class of stochastic target problems which avoids some technical difficulties but reflects in a transparent way the main ideas and arguments needed to handle this new class of stochastic control problems.
Nizar Touzi
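To fix ideas, in generic notation of our own (not quoted from the book): a stochastic target problem looks for the smallest initial position of a controlled state \(Y\) that can be steered above a given target at time \(T\),
\[ V(t,x) = \inf\big\{y : \ Y_{T}^{t,x,y,\nu} \geq g\big(X_{T}^{t,x,\nu}\big)\ \mathbb{P}\text{-a.s. for some admissible control } \nu\big\}, \]
the prototypical example in finance being the superhedging price, with \(X\) the underlying asset and \(Y\) the wealth process.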
Chapter 9. Second Order Stochastic Target Problems
Abstract
In this chapter, we extend the class of stochastic target problems of the previous chapter to the case where the quadratic variation of the control process ν is involved in the optimization problem. This new class of problems is motivated by applications in financial mathematics.
Nizar Touzi
Chapter 10. Backward SDEs and Stochastic Control
Abstract
In this chapter, we introduce the notion of backward stochastic differential equation (BSDE hereafter), which allows us to relate standard stochastic control to stochastic target problems. More importantly, the general theory in this chapter will be developed in the non-Markov framework. The Markovian framework of the previous chapters and the corresponding PDEs will be obtained under a specific construction. From this viewpoint, BSDEs can be viewed as the counterpart of PDEs in the non-Markov framework.
Nizar Touzi
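In its now-standard generic form (our notation): given a terminal condition \(\xi\), measurable with respect to \(\mathcal{F}_{T}\), and a generator \(f\), a solution of the BSDE is a pair of adapted processes \((Y,Z)\) with suitable integrability satisfying
\[ Y_{t} = \xi + \int_{t}^{T}f(s,Y_{s},Z_{s})\,ds - \int_{t}^{T}Z_{s}\,dW_{s}, \qquad 0 \leq t \leq T. \]
In the Markovian case \(\xi = g(X_{T})\) for a forward diffusion \(X\), one expects \(Y_{t} = v(t,X_{t})\) and \(Z_{t} = \sigma^{\mathrm{T}}Dv(t,X_{t})\) for the associated semilinear PDE, which is the connection with PDEs referred to in the abstract.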
Chapter 11. Quadratic Backward SDEs
Abstract
In this chapter, we consider an extension of the notion of BSDEs to the case where the dependence of the generator on the variable z has quadratic growth. In the Markovian case, this corresponds to a problem of second-order semilinear PDE with quadratic growth in the gradient term. The first existence and uniqueness result in this context was established by M. Kobylanski in her Ph.D. thesis by adapting some previously established PDE techniques to the non-Markov BSDE framework. In this chapter, we present an alternative argument introduced recently by Tevzadze [39].
Nizar Touzi
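The growth condition in question is, schematically and in our own generic notation: there exist constants \(C,\gamma > 0\) such that
\[ |f(t,y,z)| \leq C\big(1 + |y|\big) + \tfrac{\gamma}{2}|z|^{2}, \]
so that the classical Lipschitz-in-\(z\) estimates no longer apply, and arguments based on bounded terminal conditions and BMO martingales typically take over.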
Chapter 12. Probabilistic Numerical Methods for Nonlinear PDEs
Abstract
In this chapter, we introduce a backward probabilistic scheme for the numerical approximation of the solution of a nonlinear partial differential equation. The scheme is decomposed into two steps.
Nizar Touzi
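One common way to organize such a backward scheme, sketched under our own generic assumptions (the chapter's precise decomposition may differ): on a time grid \(t_{0} < \dots < t_{n} = T\), with \(\hat{X}\) a simulable approximation of the forward diffusion,
\[ v^{h}(t_{i},x) = \mathbb{E}\big[v^{h}\big(t_{i+1},\hat{X}_{t_{i+1}}\big)\ \big|\ \hat{X}_{t_{i}} = x\big] + \Delta t\, F\big(t_{i},x,\mathcal{D}v^{h}(t_{i},x)\big), \]
where the conditional expectation and the derivative features \(\mathcal{D}v^{h}\) are estimated by Monte Carlo regression (the probabilistic step), and the nonlinearity \(F\) of the PDE is then applied to these estimates (the analytic step).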
Chapter 13. Introduction to Finite Differences Methods
Abstract
In this lecture, I discuss the practical aspects of designing finite difference methods for Hamilton–Jacobi–Bellman equations of parabolic type arising in quantitative finance. The approach is based on the very powerful and simple framework developed by Barles–Souganidis [4]; see the review in the previous chapter. The key property here is monotonicity, which guarantees that the scheme satisfies the same ellipticity condition as the HJB operator. I will provide a number of examples of monotone schemes in these notes. In practice, pure finite difference schemes are only useful in 1, 2, or at most 3 spatial dimensions. One of their merits is that they are quite simple and easy to implement. Also, as shown in the previous chapter, they can be combined with Monte Carlo methods to solve nonlinear parabolic PDEs.
Agnès Tourin
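As a toy illustration of the monotonicity requirement (a minimal sketch in Python, not code from the lecture notes; the model equation, grid sizes, coefficient bounds, and boundary treatment below are all simplifying assumptions), consider the one-dimensional parabolic HJB equation \(-\partial_{t}v - \sup_{a\in[a_{\min},a_{\max}]}\{a\,\partial_{x}v + \tfrac{1}{2}\sigma^{2}\partial_{xx}v\} = 0\), \(v(T,\cdot) = g\), discretized backward in time with upwind differences for the drift and a centered difference for the diffusion:

import numpy as np

# Toy explicit monotone finite difference scheme for the 1D parabolic HJB
#   -v_t - sup_{a in [a_min, a_max]} { a v_x + 0.5 sigma^2 v_xx } = 0,
#   v(T, x) = g(x),
# solved backward in time. Upwind differencing of the drift and a centered
# second difference keep the scheme monotone under the CFL-type restriction
# on dt below. Illustrative sketch only.

T, sigma = 1.0, 0.3
a_min, a_max = -1.0, 1.0
x = np.linspace(-3.0, 3.0, 301)           # spatial grid
dx = x[1] - x[0]

# Time step small enough for monotonicity of the explicit scheme.
dt = 0.9 / (sigma**2 / dx**2 + max(abs(a_min), abs(a_max)) / dx)
n_steps = int(np.ceil(T / dt))
dt = T / n_steps

def g(y):
    # Terminal condition (call-like payoff, for illustration only).
    return np.maximum(y, 0.0)

v = g(x)
controls = np.linspace(a_min, a_max, 21)  # discretized control set

for _ in range(n_steps):
    fwd = (v[2:] - v[1:-1]) / dx          # forward difference (used when a >= 0)
    bwd = (v[1:-1] - v[:-2]) / dx         # backward difference (used when a < 0)
    vxx = (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    # Pointwise maximization of the Hamiltonian over the discretized controls.
    best = np.full_like(vxx, -np.inf)
    for a in controls:
        drift = a * (fwd if a >= 0 else bwd)
        best = np.maximum(best, drift + 0.5 * sigma**2 * vxx)
    v_new = v.copy()
    v_new[1:-1] = v[1:-1] + dt * best
    # Crude boundary treatment: the two end nodes are simply frozen.
    v = v_new

print("approximate value at x = 0:", np.interp(0.0, x, v))

The choice of one-sided differences according to the sign of the drift is exactly what makes the discrete operator monotone, mirroring the ellipticity of the HJB operator mentioned in the abstract.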
Backmatter
Metadata
Title
Optimal Stochastic Control, Stochastic Target Problems, and Backward SDE
Author
Nizar Touzi
Copyright Year
2013
Publisher
Springer New York
Electronic ISBN
978-1-4614-4286-8
Print ISBN
978-1-4614-4285-1
DOI
https://doi.org/10.1007/978-1-4614-4286-8