Abstract.
An adaptive control problem is formulated for a discrete-time Markov process that is completely observed in a fixed recurrent domain and partially observed elsewhere, and a solution is given by constructing an approximately self-optimal strategy. The state space of the Markov process is either a closed subset of Euclidean space or a countable set. A second adaptive control problem is also solved, in which the process is only partially observed at all times but there is a family of random times at which the process forms a sequence of independent, identically distributed random variables.
Accepted 26 April 1996
Cite this article
Duncan, T., Pasik-Duncan, B. & Stettner, L. Adaptive Control of a Partially Observed Discrete Time Markov Process. Appl Math Optim 37, 269–293 (1998). https://doi.org/10.1007/s002459900077