
1997 | Book

Markov Processes for Stochastic Modeling

Author: Masaaki Kijima

Publisher: Springer US


About this book

This book presents an algebraic development of the theory of countable state space Markov chains with discrete- and continuous-time parameters. A Markov chain is a stochastic process characterized by the Markov property: the distribution of the future depends only on the current state, not on the whole history. Despite this simple form of dependency, the Markov property has enabled us to develop a rich system of concepts and theorems and to derive many results that are useful in applications. In fact, the areas that can be modeled, with varying degrees of success, by Markov chains are vast and still expanding. The aim of this book is a discussion of the time-dependent behavior, called the transient behavior, of Markov chains. From the practical point of view, when modeling a stochastic system by a Markov chain, there are many instances in which limiting results such as stationary distributions have no meaning. Even when the stationary distribution is of some importance, it is often dangerous to use the stationary result alone without knowing the transient behavior of the Markov chain. Few books have paid much attention to this topic, despite its obvious importance.

Table of Contents

Frontmatter
1. Introduction
Abstract
Stochastic processes are sequences of random variables generated by probabilistic laws. The word ‘stochastic’ comes from the Greek and means ‘random’ or ‘chance’. Markov processes are a class of stochastic processes that are distinguished by the Markov property and have many applications in, for example, operations research, biology, engineering, and economics. In this chapter, we introduce some basic concepts of Markov processes.
Masaaki Kijima
2. Discrete-time Markov chains
Abstract
This chapter concerns discrete-time Markov chains defined on a finite or denumerably infinite state space N. The Markov chains under consideration are assumed to be homogeneous. We assume without loss of generality that the state space consists of nonnegative integers, N = {0, 1, ..., N}, where N < ∞ or N = ∞.
Masaaki Kijima
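As a minimal sketch of the objects this chapter studies, the following simulates a homogeneous discrete-time Markov chain from a row-stochastic transition matrix. The matrix P below is a hypothetical 3-state example, not one taken from the book; homogeneity appears in the code as the same P being applied at every step.

```python
import random

# Hypothetical 3-state chain on N = {0, 1, 2}; row i of P holds the
# one-step transition probabilities out of state i (each row sums to 1).
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
]

def step(state, P, rng=random):
    """Draw the next state from row `state` of the transition matrix P."""
    u, cum = rng.random(), 0.0
    for j, p in enumerate(P[state]):
        cum += p
        if u < cum:
            return j
    return len(P[state]) - 1  # guard against floating-point round-off

def simulate(x0, n, P, seed=0):
    """Return a sample path X_0, ..., X_n starting from x0.

    Homogeneity: the same matrix P governs every transition."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n):
        path.append(step(path[-1], P, rng))
    return path

print(simulate(0, 10, P))
```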
3. Monotone Markov chains
Abstract
In this chapter, we consider the monotonicity properties of discrete-time Markov chains, where each monotonicity is characterized in terms of transition matrices. A Markov chain {X_n} is said to be increasing (respectively, decreasing) if X_{n+1} ≻ X_n (X_n ≻ X_{n+1}) for all n = 0, 1, ..., where ≻ denotes an ordering relation in some stochastic sense; in either case we call {X_n} internally monotone, or monotone for short. External monotonicity means that, for two Markov chains {X_n} and {Y_n}, we have X_n ≻ Y_n for all n. Monotonicity properties are important both theoretically and practically because they lead to a variety of structural insights. In particular, they are a basic tool for deriving many useful inequalities in Markov chains for stochastic modeling.
Masaaki Kijima
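One concrete monotonicity condition on transition matrices is stochastic monotonicity in the usual stochastic order: for every threshold j, the tail probability Σ_{k≥j} p_{ik} must be nondecreasing in the starting state i. The sketch below checks this criterion numerically; both example matrices are hypothetical illustrations, not taken from the text.

```python
def is_stochastically_monotone(P):
    """Check the usual-stochastic-order criterion: for every threshold j,
    the tail sum sum_{k >= j} P[i][k] is nondecreasing in the row index i."""
    n = len(P)
    for j in range(n):
        tails = [sum(row[j:]) for row in P]
        if any(tails[i] > tails[i + 1] + 1e-12 for i in range(n - 1)):
            return False
    return True

# Monotone example: higher starting states shift mass toward higher states.
P_mono = [
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.1, 0.3, 0.6],
]
# Non-monotone example: the middle row puts most of its mass on state 0.
P_not = [
    [0.1, 0.1, 0.8],
    [0.8, 0.1, 0.1],
    [0.1, 0.1, 0.8],
]
print(is_stochastically_monotone(P_mono))  # True
print(is_stochastically_monotone(P_not))   # False
```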
4. Continuous-time Markov chains
Abstract
In this chapter, we consider the continuous-time analogs of discrete-time Markov chains. As in the discrete-time case, they are characterized by the Markov property that, given the present state, the future of the process is stochastically independent of the past.
Masaaki Kijima
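A standard way to realize a continuous-time chain is the jump-chain construction: the process holds in state i for an exponential time with rate −q_ii, then jumps to j ≠ i with probability q_ij / (−q_ii). The sketch below implements this for a hypothetical 3-state generator Q (an assumption for illustration; any Q-matrix with zero row sums would do).

```python
import random

# Hypothetical generator (Q-matrix) of a 3-state continuous-time chain:
# off-diagonal entries are jump rates; each row sums to zero.
Q = [
    [-1.0, 0.6, 0.4],
    [0.5, -1.5, 1.0],
    [0.3, 0.7, -1.0],
]

def simulate_ctmc(x0, t_end, Q, seed=0):
    """Simulate a sample path up to time t_end as a list of (jump time,
    new state) pairs: hold in state i for an Exp(-Q[i][i]) time, then
    jump to j != i with probability Q[i][j] / (-Q[i][i])."""
    rng = random.Random(seed)
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        rate = -Q[x][x]
        t += rng.expovariate(rate)  # exponential holding time
        if t >= t_end:
            return path
        u, cum = rng.random(), 0.0
        for j, q in enumerate(Q[x]):
            if j == x:
                continue
            cum += q / rate
            if u < cum:
                x = j
                break
        path.append((t, x))

path = simulate_ctmc(0, 5.0, Q)
print(path[-1])
```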
5. Birth-death processes
Abstract
In the preceding chapter, we saw birth-death processes as a special class of continuous-time Markov chains. Let {X(t)} denote a birth-death process. In Example 4.4, X(t) represents the size of a population at time t: a 'birth' increases the size by 1 and a 'death' decreases it by 1. Despite this simple description, birth-death processes form a rich and important class for modeling a variety of phenomena, not only in biology but also in, e.g., operations research, demography, economics, and engineering. Typical examples of problems that can be formulated as birth-death processes are the following.
Masaaki Kijima
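The population interpretation above can be sketched directly: from state n, births occur at rate λ_n and deaths at rate μ_n, and every transition changes the state by exactly ±1. The linear rates used below (per-individual birth rate 0.1, death rate 0.12) are a hypothetical choice for illustration, not an example from the book.

```python
import random

def simulate_birth_death(x0, t_end, birth, death, seed=0):
    """Simulate a birth-death process up to time t_end as (time, state)
    pairs: from state n, births occur at rate birth(n) and deaths
    (only when n > 0) at rate death(n); each event moves n by +1 or -1."""
    rng = random.Random(seed)
    t, n, path = 0.0, x0, [(0.0, x0)]
    while True:
        lam = birth(n)
        mu = death(n) if n > 0 else 0.0
        total = lam + mu
        if total == 0.0:
            return path  # absorbing state (e.g. extinction at n = 0)
        t += rng.expovariate(total)
        if t >= t_end:
            return path
        n += 1 if rng.random() < lam / total else -1
        path.append((t, n))

# Hypothetical linear rates: lambda_n = 0.1 n, mu_n = 0.12 n, so state 0
# (extinction) is absorbing.
path = simulate_birth_death(10, 20.0, lambda n: 0.1 * n, lambda n: 0.12 * n)
print(path[-1])
```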
A. Review of matrix theory
Abstract
In this section we discuss nonnegative matrices A = (a_ij), i.e., matrices with a_ij ≥ 0 for all i and j, in which case we write A ≥ O. If a_ij > 0 for all i and j, we write A > O. For two matrices A and B, we write A ≥ B if and only if A − B ≥ O, and A > B if and only if A − B > O. Throughout this section, we assume that matrices are finite and square.
Masaaki Kijima
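The entrywise orderings defined above translate directly into code. A small sketch, with hypothetical example matrices:

```python
def ge_O(A):
    """A >= O: every entry a_ij is nonnegative."""
    return all(a >= 0 for row in A for a in row)

def gt_O(A):
    """A > O: every entry a_ij is strictly positive."""
    return all(a > 0 for row in A for a in row)

def ge(A, B):
    """A >= B if and only if A - B >= O, entrywise."""
    return all(a - b >= 0 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

A = [[2.0, 1.0], [0.5, 3.0]]
B = [[1.0, 0.0], [0.5, 2.0]]
print(ge_O(A), gt_O(B), ge(A, B))  # True False True
```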
B. Generating functions and Laplace transforms
Abstract
All the probabilistic properties of a random variable X are contained in its distribution function F(x). When working with the random variable X, therefore, we must start with the distribution function F(x). However, it is often true that working with some transformation of F(x) is much easier than working with F(x) itself. In this appendix, we discuss two important transformations, generating functions and Laplace transforms, the former being particularly useful for discrete random variables and the latter for nonnegative, absolutely continuous random variables.
Masaaki Kijima
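For a nonnegative integer-valued random variable, the generating function is G(z) = Σ_n p_n zⁿ, with G(1) = 1 and G′(1) = E[X]. The sketch below evaluates the generating function of a Poisson(2) distribution (a hypothetical choice for illustration) truncated at n = 40, and recovers the mean by a numerical derivative at z = 1.

```python
import math

def pgf(probs, z):
    """Probability generating function G(z) = sum_n p_n z^n of a
    distribution given as a list with probs[n] = P(X = n)."""
    return sum(p * z**n for n, p in enumerate(probs))

# Poisson(2) probabilities, truncated at n = 40 (the tail mass beyond
# this point is negligible).
lam = 2.0
probs = [math.exp(-lam) * lam**n / math.factorial(n) for n in range(41)]

# G(1) = 1 (total probability); G'(1) = E[X] = lam, estimated here by a
# central difference.
g1 = pgf(probs, 1.0)
mean = (pgf(probs, 1.0 + 1e-6) - pgf(probs, 1.0 - 1e-6)) / 2e-6
print(round(g1, 6), round(mean, 4))
```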
C. Total positivity
Abstract
In this appendix, we provide some information about total positivity. The theory of totally positive functions is very rich and the results provided here are indeed only ‘the tip of the iceberg’. The reader interested in a complete discussion of the theory of total positivity should consult Karlin (1968). Throughout this appendix, X, Y and Z represent either intervals of the real line R ≡ (−∞, ∞) or a countable or finite set of discrete values along R.
Masaaki Kijima
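The lowest-order case, TP_2, requires every 2×2 minor of the kernel to be nonnegative, and this is easy to verify numerically for a kernel given as a matrix. The Gaussian kernel K(x, y) = exp(−(x − y)²) used below is a classical totally positive kernel; the grid of points is a hypothetical choice for illustration.

```python
import math

def is_tp2(K, tol=1e-12):
    """Check TP_2 for a kernel given as a matrix: every 2x2 minor
    K[i1][j1]*K[i2][j2] - K[i1][j2]*K[i2][j1], with i1 < i2 and
    j1 < j2, must be nonnegative (up to a small tolerance)."""
    m, n = len(K), len(K[0])
    for i1 in range(m):
        for i2 in range(i1 + 1, m):
            for j1 in range(n):
                for j2 in range(j1 + 1, n):
                    if K[i1][j1] * K[i2][j2] - K[i1][j2] * K[i2][j1] < -tol:
                        return False
    return True

# The Gaussian kernel exp(-(x - y)^2) is totally positive, hence TP_2.
xs = [0.0, 0.5, 1.0, 1.5]
K = [[math.exp(-(x - y) ** 2) for y in xs] for x in xs]
print(is_tp2(K))  # True
```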
Backmatter
Metadata
Title
Markov Processes for Stochastic Modeling
Author
Masaaki Kijima
Copyright Year
1997
Publisher
Springer US
Electronic ISBN
978-1-4899-3132-0
Print ISBN
978-0-412-60660-1
DOI
https://doi.org/10.1007/978-1-4899-3132-0