
About this Book

This book presents the latest findings on stochastic dynamic programming models and on solving optimal control problems in networks. It includes the authors’ new results on determining optimal solutions of discrete optimal control problems in networks and on solving game variants of Markov decision problems in the context of computational networks. Chapter 1 studies Markov processes with a finite state space, reviews the existing methods and algorithms for determining the main characteristics of Markov chains, and then proposes new approaches based on dynamic programming and combinatorial methods. Chapter 2 is dedicated to infinite horizon stochastic discrete optimal control models and Markov decision problems with average and expected total discounted optimization criteria, while Chapter 3 develops a special game-theoretical approach to Markov decision processes and stochastic discrete optimal control problems. The final chapter, Chapter 4, is devoted to finite horizon stochastic control problems and Markov decision processes. The algorithms developed represent a valuable contribution to the important field of computational network theory.

Table of Contents

Frontmatter

Chapter 1. Discrete Stochastic Processes, Numerical Methods for Markov Chains and Polynomial Time Algorithms

Abstract
In this chapter we consider stochastic discrete systems with a finite set of states. We study stationary and non-stationary discrete Markov processes. Our main attention is devoted to the problems of determining the state-time probabilities, calculating the limiting and differential probability matrices, determining the expected total cost, and calculating the average and expected total discounted costs for finite-state Markov processes.
Dmitrii Lozovanu, Stefan Pickl
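The state-time probabilities mentioned in the abstract obey the recursion pi_{t+1} = pi_t * P, i.e. pi_t = pi_0 * P^t for a stationary chain. A minimal sketch of that recursion (the two-state transition matrix and initial distribution below are invented for illustration and are not taken from the book):

```python
# Sketch: state-time probabilities of a finite-state Markov chain,
# pi_t = pi_0 * P^t, computed by repeated vector-matrix products.
# The matrix P and the distribution pi0 are made-up example data.

def step(pi, P):
    """One transition of the chain: pi' = pi * P (row vector times matrix)."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

def state_time_probabilities(pi0, P, t):
    """Distribution over the states after t steps of a stationary chain."""
    pi = list(pi0)
    for _ in range(t):
        pi = step(pi, P)
    return pi

# Two-state example: from state 0 the chain stays with probability 0.9.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi0 = [1.0, 0.0]          # start in state 0 with certainty
pi10 = state_time_probabilities(pi0, P, 10)
```

For a non-stationary process the same recursion applies with a time-dependent matrix P_t, and the limiting probability matrix discussed in the chapter is the limit of P^t when it exists.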

Chapter 2. Stochastic Optimal Control Problems and Markov Decision Processes with Infinite Time Horizon

Abstract
The aim of this chapter is to develop methods and algorithms for determining the optimal solutions of stochastic discrete control problems and Markov decision problems with an infinite time horizon.
Dmitrii Lozovanu, Stefan Pickl
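For the expected total discounted cost criterion on an infinite horizon, a classical solution method is value iteration on the Bellman operator. The sketch below is a generic textbook version of that method, not the specific algorithms developed in this chapter, and the two-state decision problem (states, actions, transition probabilities P, stage costs c) is an invented toy instance:

```python
# Value iteration for a discounted Markov decision problem (cost
# minimization).  All problem data below are made up for illustration.

def value_iteration(states, actions, P, c, gamma, tol=1e-10):
    """Iterate the Bellman operator until successive values differ by < tol.

    P[s][a][u] : transition probability from s to u under action a,
    c[s][a]    : stage cost, gamma : discount factor in (0, 1).
    """
    V = {s: 0.0 for s in states}
    while True:
        V_new = {s: min(c[s][a] + gamma * sum(P[s][a][u] * V[u] for u in states)
                        for a in actions[s])
                 for s in states}
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

states = [0, 1]
actions = {0: ["stay", "go"], 1: ["stay"]}
P = {0: {"stay": {0: 1.0, 1: 0.0}, "go": {0: 0.0, 1: 1.0}},
     1: {"stay": {0: 0.0, 1: 1.0}}}
c = {0: {"stay": 2.0, "go": 1.0}, 1: {"stay": 0.0}}
V = value_iteration(states, actions, P, c, gamma=0.9)
# Staying in state 1 is free, so V[1] = 0; from state 0 the one-time
# cost of moving to state 1 gives V[0] = 1.
```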

Chapter 3. A Game-Theoretical Approach to Markov Decision Processes, Stochastic Positional Games and Multicriteria Control Models

Abstract
In this chapter we formulate and study a class of stochastic positional games, applying the game-theoretical concept to Markov decision problems with average and expected total discounted cost optimization criteria.
Dmitrii Lozovanu, Stefan Pickl
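In a positional game each state is controlled by exactly one player, who chooses the action there. One way to sketch the idea for the discounted zero-sum case is a Shapley-style iteration that minimizes at the first player's states and maximizes at the second player's states. The partition, costs, and transitions below are an invented toy instance, not an example from the book:

```python
# Hedged sketch of a discounted zero-sum positional game: the owner of
# each state picks the action (minimizer or maximizer).  All data are
# made up for illustration.

def game_value_iteration(states, owner, actions, P, c, gamma, tol=1e-10):
    """Iterate the min/max Bellman operator until the values stabilize."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {}
        for s in states:
            vals = [c[s][a] + gamma * sum(P[s][a][u] * V[u] for u in states)
                    for a in actions[s]]
            V_new[s] = min(vals) if owner[s] == "min" else max(vals)
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

states = [0, 1]
owner = {0: "min", 1: "max"}
actions = {0: ["stay", "go"], 1: ["low", "high"]}
P = {0: {"stay": {0: 1.0, 1: 0.0}, "go": {0: 0.0, 1: 1.0}},
     1: {"low": {0: 0.0, 1: 1.0}, "high": {0: 0.0, 1: 1.0}}}
c = {0: {"stay": 2.0, "go": 1.0}, 1: {"low": 0.0, "high": 5.0}}
V = game_value_iteration(states, owner, actions, P, c, gamma=0.9)
# The maximizer repeats the costly action in state 1, so V[1] = 5/(1-0.9) = 50;
# the minimizer therefore avoids state 1 and V[0] = 2/(1-0.9) = 20.
```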

Chapter 4. Dynamic Programming Algorithms for Finite Horizon Control Problems and Markov Decision Processes

Abstract
In this chapter we study stochastic discrete control problems and Markov decision processes with a finite time horizon. We assume that the set of states of the dynamical system is finite and that the starting and final states are fixed.
Dmitrii Lozovanu, Stefan Pickl
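The standard dynamic programming scheme for a finite horizon is backward induction on the cost-to-go. A minimal sketch under invented data: the fixed final state is enforced here by a large terminal penalty on every other state, which is one common modelling device, not necessarily the book's formulation:

```python
# Backward induction for a finite-horizon decision problem (horizon T).
# The instance (states, costs, terminal penalty) is made up for
# illustration; the penalty forces trajectories to end in state 1.

def backward_induction(states, actions, P, c, T, terminal_cost):
    """Return the cost-to-go at time 0 and an optimal policy per stage."""
    V = dict(terminal_cost)       # V_T(s): cost of finishing in state s
    policy = [None] * T
    for t in reversed(range(T)):
        V_t, pi_t = {}, {}
        for s in states:
            def q(a):             # expected cost of action a at (t, s)
                return c[s][a] + sum(P[s][a][u] * V[u] for u in states)
            best = min(actions[s], key=q)
            pi_t[s], V_t[s] = best, q(best)
        V, policy[t] = V_t, pi_t
    return V, policy

states = [0, 1]
actions = {0: ["stay", "go"], 1: ["stay"]}
P = {0: {"stay": {0: 1.0, 1: 0.0}, "go": {0: 0.0, 1: 1.0}},
     1: {"stay": {0: 0.0, 1: 1.0}}}
c = {0: {"stay": 0.5, "go": 1.0}, 1: {"stay": 0.0}}
terminal = {0: 1e6, 1: 0.0}       # must end in the fixed final state 1
V0, policy = backward_induction(states, actions, P, c, 2, terminal)
# From the fixed starting state 0, moving to state 1 once costs 1.0.
```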

Errata to: Optimization of Stochastic Discrete Systems and Control on Complex Networks

Without Abstract
Dmitrii Lozovanu, Stefan Pickl

Backmatter
