2008 | OriginalPaper | Chapter
Stochastic optimal control and applications
Published in: Stochastic Calculus for Fractional Brownian Motion and Applications
Publisher: Springer London
Stochastic control is an important branch of mathematics with many applications. Several textbooks cover the fundamental theory and applications of stochastic control for systems driven by standard Brownian motion (see, for example, [96], [97], [182], [231]). In this chapter we shall deal with stochastic control problems in which the controlled system is driven by a fBm.
Even for stochastic optimal control of systems driven by Brownian motion, or indeed for deterministic optimal control, explicit solutions are difficult to obtain except for linear systems with a quadratic cost. There are several approaches to the classical stochastic control problem: one is the Pontryagin maximum principle, another is the Bellman dynamic programming principle. For linear-quadratic control one can use the technique of completing the square. There are also other methods for specific problems. For example, a famous problem in finance is the optimal consumption and portfolio problem studied by Merton (see [162]), and one of the main methods of solution is the martingale method combined with Lagrange multipliers. See [135] and the references therein.
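As a rough numerical illustration of the linear-quadratic case mentioned above: completing the square reduces the finite-horizon scalar LQ problem to a Riccati equation solved backward in time. The function name, parameter values, and Euler discretization below are assumptions chosen for the sketch, not material from the chapter.

```python
# Sketch only: scalar finite-horizon LQ control. Minimizing
#   J = s*x(T)**2 + integral of (q*x**2 + r*u**2) dt
# subject to dx = (a*x + b*u) dt leads, after completing the square,
# to the Riccati ODE  -dP/dt = 2*a*P + q - (b*P)**2 / r,  P(T) = s,
# with optimal feedback u(t) = -(b/r) * P(t) * x(t).

def solve_riccati(a, b, q, r, s, T, n_steps=100_000):
    """Integrate the Riccati ODE backward from P(T) = s with explicit
    Euler and return P(0)."""
    dt = T / n_steps
    P = s
    for _ in range(n_steps):
        # Stepping backward in time, so P decreases in t as dP/dt says,
        # which means P increases by (2aP + q - (bP)^2/r) * dt per step.
        P += (2 * a * P + q - (b * P) ** 2 / r) * dt
    return P

# With a = 0, b = 1, q = r = 1, s = 0 the exact backward solution is
# P(t) = tanh(T - t), so for a long horizon P(0) is close to 1.
P0 = solve_riccati(a=0.0, b=1.0, q=1.0, r=1.0, s=0.0, T=10.0)
print(round(P0, 4))  # close to 1.0
```

For this choice of constants the numerical value can be checked against the closed-form solution, which is exactly the kind of special case where explicit answers are available.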
The dynamic programming method seems difficult to extend to fBm, since fBm, and solutions of stochastic differential equations driven by fBm, are not Markov processes. However, we shall extend the Pontryagin maximum principle to general stochastic optimal control problems for systems driven by fBms. To do this we need to consider backward stochastic differential equations driven by fBm.
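For readers who want a concrete feel for the driving noise itself, one standard way to simulate fBm on a grid is exact sampling via a Cholesky factorization of its covariance E[B_s^H B_t^H] = (s^{2H} + t^{2H} - |t - s|^{2H})/2. This is a generic technique, not a method from the chapter, and the function name and parameters below are assumptions for the sketch.

```python
import numpy as np

def fbm_sample(H, n, T=1.0, seed=None):
    """Return (times, path): one fBm path with Hurst index H, sampled
    exactly at times T/n, 2T/n, ..., T via Cholesky factorization of
    the covariance matrix cov[i, j] = (t_i^2H + t_j^2H - |t_i - t_j|^2H)/2."""
    rng = np.random.default_rng(seed)
    t = np.linspace(T / n, T, n)          # exclude t = 0, where B^H = 0
    s, u = np.meshgrid(t, t, indexing="ij")
    cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
    L = np.linalg.cholesky(cov)           # cov is positive definite here
    return t, L @ rng.standard_normal(n)

t, path = fbm_sample(H=0.7, n=200, seed=0)
# H = 1/2 recovers standard Brownian motion; for H > 1/2 the increments
# are positively correlated, which is why the process is not Markov.
```

The non-Markov character noted above is visible in the covariance: increments over disjoint intervals are correlated whenever H differs from 1/2.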