
Existence of Risk-Sensitive Optimal Stationary Policies for Controlled Markov Processes

Published in: Applied Mathematics and Optimization

Abstract.

In this paper we are concerned with the existence of optimal stationary policies for infinite-horizon risk-sensitive Markov control processes with denumerable state space, unbounded cost function, and long-run average cost. Introducing a discounted cost dynamic game, we prove that its value function satisfies an Isaacs equation, and we study its relationship with the risk-sensitive control problem. Using the vanishing discount approach, we prove that the risk-sensitive dynamic programming inequality holds, and we derive an optimal stationary policy.
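For orientation, the following is a minimal sketch of the risk-sensitive long-run average cost criterion as it is commonly written in this literature; the notation (risk factor γ, one-stage cost c, transition law P, bias W, value λ) is chosen here for illustration and is not quoted from the article itself.

\[
J_{\gamma}(x,\pi) \;=\; \limsup_{n\to\infty}\,\frac{1}{n\gamma}\,
\log \mathbb{E}_{x}^{\pi}\!\left[\exp\!\Bigl(\gamma \sum_{t=0}^{n-1} c(x_t,a_t)\Bigr)\right],
\qquad \gamma>0 .
\]

A risk-sensitive dynamic programming inequality of the type referred to in the abstract is typically of the form

\[
\gamma\lambda + W(x) \;\ge\; \min_{a\in A(x)}
\Bigl[\,\gamma\, c(x,a) \;+\; \log \sum_{y} P(y\mid x,a)\, e^{W(y)}\Bigr],
\]

and, under suitable conditions, a stationary policy attaining the minimum on the right-hand side is average optimal with optimal value λ; the exact hypotheses and the precise statement are those of the article.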

Accepted: 1 October 1997

Cite this article

Hernández-Hernández, D., Marcus, S. Existence of Risk-Sensitive Optimal Stationary Policies for Controlled Markov Processes. Appl Math Optim 40, 273–285 (1999). https://doi.org/10.1007/s002459900126
