
2018 | Book

Finite Approximations in Discrete-Time Stochastic Control

Quantized Models and Asymptotic Optimality


About this Book

In a unified form, this monograph presents fundamental results on the approximation of centralized and decentralized stochastic control problems with uncountable state, measurement, and action spaces. It demonstrates how quantization provides a system-independent and constructive method for the reduction of a system with Borel spaces to one with finite state, measurement, and action spaces. In addition to this constructive view, the book considers both the information transmission approach for the discretization of actions and the computational approach for the discretization of states and actions. Part I of the text discusses Markov decision processes and their finite-state or finite-action approximations, while Part II builds from there to finite approximations in decentralized stochastic control problems.
This volume is well suited for researchers and graduate students interested in stochastic control. With the tools presented, readers will be able to establish the convergence of approximation models to original models, and the methods are general enough that researchers can obtain corresponding approximation results, typically with no additional assumptions.

Table of Contents

Frontmatter
Chapter 1. Introduction and Summary
Abstract
Control and optimization of dynamical systems in the presence of stochastic uncertainty is a mature field with a large range of applications. A comprehensive treatment of such problems can be found in excellent books and other resources including [7, 16, 29, 68, 84, 95, 104], and [6]. To date, there exists a nearly complete theory regarding the existence and structure of optimal solutions under various formulations, as well as computational methods to obtain such optimal solutions for problems with finite state and control spaces. However, there still exist substantial computational challenges for problems with large state and action spaces, such as standard Borel spaces. For such state and action spaces, obtaining optimal policies is in general computationally infeasible.
Naci Saldi, Tamás Linder, Serdar Yüksel

Finite Model Approximations in Stochastic Control

Frontmatter
Chapter 2. Prelude to Part I
Abstract
Part I involves classical stochastic control problems, in which a single decision maker acts repeatedly over time and its information set grows at each time stage.
Naci Saldi, Tamás Linder, Serdar Yüksel
Chapter 3. Finite-Action Approximation of Markov Decision Processes
Abstract
In this chapter, we study the finite-action approximation of optimal control policies for discrete-time Markov decision processes (MDPs) with Borel state and action spaces, under discounted and average cost criteria. One main motivation for considering this problem stems from the optimal information transmission problem in networked control systems. In many applications of networked control, perfect transmission of the control actions to an actuator is infeasible when there is a communication channel of finite capacity between a controller and an actuator. Hence, the actions of the controller must be discretized (quantized) to facilitate reliable transmission. Although the problem of optimal information transmission from a plant/sensor to a controller has been studied extensively (see, e.g., [148] and references therein), much less is known about the problem of transmitting actions from a controller to an actuator. Such transmission schemes usually require a simple encoding/decoding rule since the actuator does not have the computational capability of the controller to use complex algorithms. For this reason, time-invariant scalar quantization is a practically useful encoding method for controller-actuator communication.
Naci Saldi, Tamás Linder, Serdar Yüksel
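As a concrete illustration of the encoding method mentioned in this abstract, the following minimal Python sketch implements a time-invariant scalar quantizer over a bounded action interval; the interval bounds, grid size, and sample action are illustrative assumptions, not taken from the book.

```python
# Illustrative sketch only: a uniform, time-invariant scalar quantizer for a
# bounded action interval, of the kind motivated by controller-to-actuator
# communication over a finite-capacity channel. All numbers are assumptions.
import numpy as np

def uniform_action_grid(a_min: float, a_max: float, n: int) -> np.ndarray:
    """Uniform n-point quantization grid on the action interval [a_min, a_max]."""
    return np.linspace(a_min, a_max, n)

def quantize(action: float, grid: np.ndarray) -> float:
    """Nearest-neighbor scalar quantizer: maps an action to the closest grid point."""
    return float(grid[np.argmin(np.abs(grid - action))])

# Example: an 8-level quantizer requires only 3 bits per transmitted action.
grid = uniform_action_grid(-1.0, 1.0, n=8)
print(quantize(0.37, grid))
```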
Chapter 4. Finite-State Approximation of Markov Decision Processes
Abstract
In this chapter we study the finite-state approximation problem for computing near-optimal policies for discrete-time MDPs with Borel state and action spaces, under discounted and average cost criteria. Even though the existence and structural properties of optimal policies of MDPs have been studied extensively in the literature, computing such policies is generally a challenging problem for systems with uncountable state spaces. This situation also arises in the fully observed reduction of a partially observed Markov decision process, even when the original system has finite state and action spaces. Here we show that one way to compute approximately optimal solutions for such MDPs is to construct a reduced model with a new transition probability and one-stage cost function by quantizing the state space, i.e., by discretizing it on a finite grid. It is reasonable to expect that when the one-stage cost function and the transition probability of the original model have certain continuity properties, the cost of the optimal policy for the approximating finite model converges to the optimal cost of the original model as the discretization becomes finer. Moreover, under additional continuity conditions on the transition probability and the one-stage cost function, we also obtain bounds on the accuracy of the approximation in terms of the number of points used to discretize the state space, thereby providing a tradeoff between the computational cost and the performance loss in the system. In particular, we study the following two problems.
Naci Saldi, Tamás Linder, Serdar Yüksel
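The construction sketched in this abstract can be made concrete with a small, purely illustrative Python example: a one-dimensional MDP on [0, 1] is discretized on a uniform grid, a finite transition matrix and one-stage cost are estimated at the grid points, and discounted value iteration is run on the resulting finite model. The dynamics, cost, noise model, and all sizes below are invented for illustration and are not the book's construction.

```python
# Illustrative finite-state reduction of a 1-D MDP on [0, 1]: quantize the
# state space on a uniform grid, estimate a reduced transition matrix by
# Monte Carlo with nearest-neighbor projection, and solve the finite model.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, beta = 50, 5, 0.9
states = np.linspace(0.0, 1.0, n_states)        # quantized state grid
actions = np.linspace(-0.5, 0.5, n_actions)     # finite action set

def f(x, a, w):                                  # illustrative dynamics
    return np.clip(0.8 * x + a + 0.1 * w, 0.0, 1.0)

def c(x, a):                                     # illustrative one-stage cost
    return x**2 + 0.1 * a**2

# Reduced kernel P[i, k, j] ~ Pr(grid_j | grid_i, action_k), estimated by
# sampling the dynamics and projecting each next state onto the grid.
n_mc = 200
P = np.zeros((n_states, n_actions, n_states))
for i, x in enumerate(states):
    for k, a in enumerate(actions):
        nxt = f(x, a, rng.standard_normal(n_mc))
        idx = np.argmin(np.abs(nxt[:, None] - states[None, :]), axis=1)
        P[i, k] = np.bincount(idx, minlength=n_states) / n_mc

C = c(states[:, None], actions[None, :])         # one-stage cost on the grid

# Discounted value iteration on the finite model.
V = np.zeros(n_states)
for _ in range(500):
    Q = C + beta * np.einsum('ikj,j->ik', P, V)
    V = Q.min(axis=1)
policy = Q.argmin(axis=1)                        # near-optimal grid policy
print(V[0], actions[policy[0]])
```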
Chapter 5. Approximations for Partially Observed Markov Decision Processes
Abstract
This chapter studies the finite-model approximation of discrete-time partially observed Markov decision processes (POMDPs). We will find that by performing the standard reduction, in which one transforms a partially observed model into a belief-based fully observed model, we can apply and suitably generalize the results of the preceding chapters to obtain approximation results. The versatility of approximation results under weak continuity conditions becomes particularly evident when investigating their applicability to the partially observed case. We also provide systematic procedures for the quantization of the set of probability measures on the state space of a POMDP, which is the state space of the associated belief-MDP.
Naci Saldi, Tamás Linder, Serdar Yüksel
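To illustrate the belief-MDP viewpoint underlying this chapter, the following hedged Python sketch performs the Bayes-filter belief update for a two-state POMDP and projects each updated belief onto a finite grid on the belief simplex; the model matrices, grid, and observation sequence are assumptions made only for illustration.

```python
# Illustrative two-state POMDP: Bayes belief update plus nearest-neighbor
# quantization of the belief simplex. Model matrices and grid are assumptions.
import numpy as np

T = np.array([[0.9, 0.1],
              [0.2, 0.8]])                  # transition matrix T[x, x']
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])                  # observation likelihood O[x', y]

def belief_update(b: np.ndarray, y: int) -> np.ndarray:
    """One Bayes-filter step: predict with T, then correct with observation y."""
    b_pred = b @ T
    b_post = b_pred * O[:, y]
    return b_post / b_post.sum()

def belief_grid(m: int) -> np.ndarray:
    """Finite grid on the 2-state belief simplex: beliefs (i/m, 1 - i/m)."""
    return np.array([[i / m, 1.0 - i / m] for i in range(m + 1)])

def quantize_belief(b: np.ndarray, grid: np.ndarray) -> np.ndarray:
    """Project a belief onto the nearest grid belief (Euclidean distance)."""
    return grid[np.argmin(np.linalg.norm(grid - b, axis=1))]

grid = belief_grid(m=10)
b = np.array([0.5, 0.5])
for y in [0, 1, 1]:                         # an illustrative observation sequence
    b = quantize_belief(belief_update(b, y), grid)
print(b)                                    # quantized belief after three updates
```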
Chapter 6. Approximations for Constrained Markov Decision Problems
Abstract
This chapter studies the finite-state approximation of a discrete-time constrained Markov decision process with compact state space, under the discounted and average cost criteria. Using the linear programming formulation of the constrained discounted problem, we prove the convergence of the optimal value function of the finite-state model to the optimal value function of the original model. Under further continuity conditions on the transition probability of the original discounted model, we also establish a method to compute approximately optimal policies. For the average cost criterion, instead of using the finite-state linear programming approximation method, we use a direct method to establish analogous results under drift and minorization conditions which guarantee the geometric ergodicity of Markov chains induced by stationary policies.
Naci Saldi, Tamás Linder, Serdar Yüksel
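For the discounted constrained problem, the linear programming formulation mentioned in this abstract can be sketched in a few lines for a finite model: the decision variable is the discounted occupation measure, the flow constraints encode the dynamics, and the constraint cost enters as a linear inequality. Everything below (sizes, costs, kernel, and the bound theta) is an illustrative assumption; scipy's linprog is used only as a generic LP solver.

```python
# Illustrative occupation-measure LP for a finite discounted constrained MDP.
# Sizes, costs, kernel, and the bound theta are assumptions for illustration.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
nx, na, beta = 4, 3, 0.9
P = rng.random((nx, na, nx))
P /= P.sum(axis=2, keepdims=True)            # P[x, a, y] = Pr(y | x, a)
c = rng.random((nx, na))                     # cost to be minimized
d = rng.random((nx, na))                     # constrained cost
nu0 = np.full(nx, 1.0 / nx)                  # initial distribution
theta = 0.9 * d.mean() / (1 - beta)          # bound chosen to keep the LP feasible

# Variable: occupation measure mu[x, a] >= 0, flattened to length nx * na.
# Flow constraints: sum_a mu(y, a) - beta * sum_{x,a} P(y|x,a) mu(x, a) = nu0(y).
A_eq = np.zeros((nx, nx * na))
for y in range(nx):
    for x in range(nx):
        for a in range(na):
            A_eq[y, x * na + a] = float(x == y) - beta * P[x, a, y]

# Constraint cost: sum_{x,a} d(x, a) mu(x, a) <= theta.
res = linprog(c.ravel(), A_ub=d.ravel()[None, :], b_ub=[theta],
              A_eq=A_eq, b_eq=nu0, bounds=(0, None))
mu = res.x.reshape(nx, na)
policy = mu / mu.sum(axis=1, keepdims=True)  # randomized stationary policy
print(res.fun, policy)
```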

Finite Model Approximations in Decentralized Stochastic Control

Frontmatter
Chapter 7. Prelude to Part II
Abstract
In Part II, we focus on decentralized stochastic control problems and their applications. In Chapter 8, we present our results on the finite model approximation of multi-agent stochastic control problems (team decision problems). We show that optimal strategies obtained from finite models approximate the optimal cost with arbitrary precision under mild technical assumptions. In particular, we show that quantized team policies are asymptotically optimal. In Chapter 9, the results are applied to Witsenhausen’s counterexample and the Gaussian relay channel problem.
Naci Saldi, Tamás Linder, Serdar Yüksel
Chapter 8. Finite Model Approximations in Decentralized Stochastic Control
Abstract
In this chapter, we study the approximation of static and dynamic team problems using finite models which are obtained through the uniform discretization, on a finite grid, of the observation and action spaces of agents. In particular, we are interested in the asymptotic optimality of quantized policies.
Naci Saldi, Tamás Linder, Serdar Yüksel
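A minimal and purely illustrative sketch of the discretization just described: for a static two-agent team, splitting each observation interval into bins and restricting actions to a finite grid turns a team policy into a pair of lookup tables, whose expected cost can then be estimated by Monte Carlo. The cost function, noise levels, and grid sizes are assumptions, not the book's model.

```python
# Illustrative static two-agent team: uniform discretization of observation
# and action spaces makes the policy space finite. All numbers are made up.
import numpy as np

rng = np.random.default_rng(2)
n_bins, n_levels = 8, 8
obs_edges = np.linspace(-3.0, 3.0, n_bins + 1)   # observation bins
action_grid = np.linspace(-2.0, 2.0, n_levels)   # finite action set

def bin_index(y):
    """Map observations to bin indices (clipped to the grid's range)."""
    return np.clip(np.digitize(y, obs_edges) - 1, 0, n_bins - 1)

def team_cost(table1, table2, n_mc=20000):
    """Monte Carlo estimate of E[(x - u1 - u2)^2] for a quantized policy pair."""
    x = rng.standard_normal(n_mc)                # hidden state
    y1 = x + 0.5 * rng.standard_normal(n_mc)     # agent 1's noisy observation
    y2 = x + 0.5 * rng.standard_normal(n_mc)     # agent 2's noisy observation
    u1 = action_grid[table1[bin_index(y1)]]
    u2 = action_grid[table2[bin_index(y2)]]
    return np.mean((x - u1 - u2) ** 2)

# A quantized team policy is just a pair of integer lookup tables; here each
# agent plays (roughly) half the midpoint of its observation bin.
mids = 0.5 * (obs_edges[:-1] + obs_edges[1:])
table = np.argmin(np.abs(action_grid[None, :] - 0.5 * mids[:, None]), axis=1)
print(team_cost(table, table))
```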
Chapter 9. Asymptotic Optimality of Finite Models for Witsenhausen’s Counterexample and Beyond
Abstract
In this chapter, we study the approximation of Witsenhausen's counterexample and the Gaussian relay channel problem using the results of the previous chapter. In particular, our goal is to establish that finite models obtained through the uniform quantization of the observation and action spaces result in a sequence of policies whose costs converge to the value function. We note that quantization has typically been the method used to show that a nonlinear policy can perform better than an optimal linear policy, both for Witsenhausen's counterexample [10, 86] and the Gaussian relay channel problem [88, 152]. Our findings show that, for a large class of problems, quantized policies not only may perform better than linear policies, but are in fact almost optimal.
Naci Saldi, Tamás Linder, Serdar Yüksel
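As a hedged illustration of the quantized policies discussed in this abstract, the following sketch evaluates one quantized first-stage policy for Witsenhausen's counterexample: the first controller moves x0 to the nearest point of a uniform grid, so x1 is discrete, and the second controller applies the exact conditional mean E[x1 | y]. The grid, k, and sigma are arbitrary choices made only so the script runs; this is not the book's analysis.

```python
# Illustrative quantized policy for Witsenhausen's counterexample:
# x1 = x0 + u1 with x1 forced onto a grid, y = x1 + N(0, 1), cost
# k^2 u1^2 + (x1 - u2)^2. All parameters below are assumptions.
import numpy as np

rng = np.random.default_rng(3)
k, sigma, n_mc = 0.2, 5.0, 200000
grid = 2.0 * sigma * np.array([-2, -1, 0, 1, 2], dtype=float)  # quantizer levels

x0 = sigma * rng.standard_normal(n_mc)
idx = np.argmin(np.abs(x0[:, None] - grid[None, :]), axis=1)
x1 = grid[idx]                      # quantized first-stage output
u1 = x1 - x0                        # first-stage control, cost k^2 u1^2

y = x1 + rng.standard_normal(n_mc)  # second stage sees x1 through unit noise

# Since x1 is supported on the grid, E[x1 | y] has a closed form: a posterior
# over the grid points under Gaussian observation noise.
p = np.array([np.mean(x1 == g) for g in grid])            # empirical priors
lik = np.exp(-0.5 * (y[:, None] - grid[None, :]) ** 2) * p[None, :]
u2 = (lik * grid[None, :]).sum(axis=1) / lik.sum(axis=1)

cost = np.mean(k**2 * u1**2 + (x1 - u2) ** 2)
print(cost)   # compare against the best linear policy for the same k, sigma
```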
Backmatter
Metadata
Title
Finite Approximations in Discrete-Time Stochastic Control
Authors
Naci Saldi
Tamás Linder
Serdar Yüksel
Copyright Year
2018
Electronic ISBN
978-3-319-79033-6
Print ISBN
978-3-319-79032-9
DOI
https://doi.org/10.1007/978-3-319-79033-6