
2000 | Book

Computational Probability

Editor: Winfried K. Grassmann

Publisher: Springer US

Book Series: International Series in Operations Research & Management Science


About this book

Great advances have been made in recent years in the field of computational probability. In particular, the state of the art - as it relates to queueing systems, stochastic Petri nets, and reliability modelling - has benefited significantly from these advances. The objective of this book is to make these topics accessible to researchers, graduate students, and practitioners. Great care was taken to make the exposition as clear as possible. Every line in the book has been evaluated, and changes have been made whenever it was felt that the initial exposition was not clear enough for the intended readership.
The work of major research scholars in this field comprises the individual chapters of Computational Probability. The first chapter describes, in nonmathematical terms, the challenges in computational probability. Chapter 2 describes the methodologies available for obtaining the transition matrices for Markov chains, with particular emphasis on stochastic Petri nets. Chapter 3 discusses how to find transient probabilities and transient rewards for these Markov chains. The next two chapters indicate how to find steady-state probabilities for Markov chains with a finite number of states. Both direct and iterative methods are described in Chapter 4. Details of these methods are given in Chapter 5. Chapters 6 and 7 deal with infinite-state Markov chains, which occur frequently in queueing because one often does not want to impose an artificial bound on the queue lengths. Chapter 8 deals with transforms, in particular Laplace transforms. The work of Ward Whitt and his collaborators, who have recently developed a number of numerical methods for Laplace transform inversion, is emphasized in this chapter. Finally, if one wants to optimize a system, one way to do so is through Markov decision chains, described in Chapter 9. Markov modeling has found applications in many areas, three of which are described in detail: Chapter 10 analyzes discrete-time queues, Chapter 11 describes networks of queues, and Chapter 12 deals with reliability theory.

Table of Contents

Frontmatter
1. Computational Probability: Challenges and Limitations
Abstract
Computational probability is the science of calculating probabilities and expectations. As this book demonstrates, there are many mathematical challenges in the area of computational probability. To set the stage, we discuss the objectives of computational probability in more detail, and we point out the difficulties one encounters in this area. We also contrast computational probability with other approaches.
Winfried K. Grassmann
2. Tools for Formulating Markov Models
Abstract
Many man-made systems, especially those in the areas of computer and communication, are so complex that it is essential to study them with simplified mathematical models during their design, prototyping, and deployment.
Gianfranco Ciardo
3. Transient Solutions for Markov Chains
Abstract
Much of the theory developed for solving Markov chain models is devoted to obtaining steady-state measures, that is, measures for which the observation interval (0, t) is “sufficiently large” (t → ∞). These measures are indeed approximations of the behavior of the system for a finite, but long, time interval, where long means with respect to the interval of time between occurrences of events in the system. However, an increasing number of applications requires the calculation of measures during a relatively “short” period of time. These are the so-called transient measures. In these cases the steady-state measures are not good approximations for the transient, and one has to resort to different techniques to obtain the desired quantities.
Edmundo de Souza e Silva, H. Richard Gail
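To make the notion of a transient measure concrete, the following minimal Python sketch computes the transient state probabilities p(t) = p(0)·exp(Qt) of a small continuous-time Markov chain by uniformization, one standard technique (the chapter surveys this and other methods). The 3-state generator Q and initial distribution are illustrative and not taken from the book.

# A minimal sketch of computing transient state probabilities of a small
# continuous-time Markov chain by uniformization (one standard technique).
# The generator Q below is a made-up 3-state example, not from the book.
import numpy as np
from scipy.stats import poisson

def transient_probs(Q, p0, t, eps=1e-12):
    """Return p(t) = p0 * exp(Q t) via the uniformization series."""
    Lam = max(-np.diag(Q))            # uniformization rate >= max exit rate
    P = np.eye(Q.shape[0]) + Q / Lam  # discrete-time uniformized chain
    K = poisson.ppf(1 - eps, Lam * t).astype(int)   # series truncation point
    weights = poisson.pmf(np.arange(K + 1), Lam * t)
    v, result = p0.copy(), np.zeros_like(p0, dtype=float)
    for k in range(K + 1):
        result += weights[k] * v
        v = v @ P
    return result

Q = np.array([[-2.0, 1.0, 1.0],
              [ 1.0, -3.0, 2.0],
              [ 0.0, 4.0, -4.0]])
p0 = np.array([1.0, 0.0, 0.0])
print(transient_probs(Q, p0, t=0.5))   # transient distribution at t = 0.5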
4. Numerical Methods for Computing Stationary Distributions of Finite Irreducible Markov Chains
Abstract
In this chapter our attention will be devoted to numerical methods for computing stationary distributions of finite irreducible Markov chains. We let q_ij denote the rate at which an n-state Markov chain moves from state i to state j. The n × n matrix Q whose off-diagonal elements are q_ij and whose i-th diagonal element is given by −∑_{j≠i} q_ij is called the infinitesimal generator of the Markov chain. It may be shown that the stationary probability vector π, a row vector whose k-th element denotes the stationary probability of being in state k, can be obtained by solving the homogeneous system of equations πQ = 0. Alternatively, the problem may be formulated as an eigenvalue problem πP = π, where P = QΔt + I is the stochastic matrix of transition probabilities (Δt must be chosen sufficiently small so that the probability of two or more transitions occurring in time Δt is negligible, i.e., of order o(Δt)). Mathematically, the problem is therefore quite simple. Unfortunately, problems arise from the computational point of view because of the large number of states which many systems may occupy. As indicated in Chapters 1 and 2, it is not uncommon for thousands of states to be generated even for simple applications.
William J. Stewart
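As a concrete illustration of solving πQ = 0 directly, here is a minimal Python sketch in which one balance equation is replaced by the normalization condition; the 3-state generator is a made-up example, and the chapter discusses direct and iterative methods that scale to much larger state spaces.

# A minimal sketch of the direct approach described above: solve pi * Q = 0
# together with the normalization sum(pi) = 1 by replacing one equation.
# The 3-state generator Q is an illustrative example, not taken from the book.
import numpy as np

Q = np.array([[-0.5, 0.3, 0.2],
              [ 0.4, -0.9, 0.5],
              [ 0.1, 0.6, -0.7]])

A = Q.copy()
A[:, -1] = 1.0                 # replace one balance equation by normalization
b = np.zeros(Q.shape[0]); b[-1] = 1.0
pi = np.linalg.solve(A.T, b)   # solves pi @ A = b, i.e. pi Q = 0 with sum(pi) = 1
print(pi, pi @ Q)              # pi @ Q should be ~0 in every component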
5. Stochastic Automata Networks
Abstract
A Stochastic Automata Network (SAN) consists of a number of individual stochastic automata that operate more or less independently of each other. Each individual automaton, A, is represented by a number of states and rules that govern the manner in which it moves from one state to the next. The state of an automaton at any time t is just the state it occupies at time t, and the state of the SAN at time t is given by the state of each of its constituent automata.
Brigitte Plateau, William J. Stewart
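The following minimal sketch shows the structural idea usually exploited for SANs, assuming the simplest case of automata with no synchronizing events: the global generator is then the Kronecker sum of the small local generators, so the global state space never has to be stored explicitly. The two 2-state automata are illustrative; synchronizing events and functional rates, treated in the chapter, add further Kronecker-product terms.

# A minimal sketch of the Kronecker-sum structure behind SANs: for automata
# that evolve independently (no synchronizing events), the global generator is
# the Kronecker sum of the local generators.  The two 2-state automata below
# are illustrative examples only.
import numpy as np

Q1 = np.array([[-1.0, 1.0],
               [ 2.0, -2.0]])     # local generator of automaton A1
Q2 = np.array([[-3.0, 3.0],
               [ 0.5, -0.5]])     # local generator of automaton A2

I1, I2 = np.eye(2), np.eye(2)
Q = np.kron(Q1, I2) + np.kron(I1, Q2)   # Kronecker sum: global 4x4 generator
print(Q)
# Global state (i, j) corresponds to row/column 2*i + j: the SAN state is the
# pair of local states, and Q can be handled through its small factors.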
6. Matrix Analytic Methods
Abstract
This chapter shows how to find the equilibrium probabilities in processes of GI/M/1, M/G/1, and GI/G/1 type by matrix analytic methods. GI/M/1-type processes are Markov chains with transition matrices having the same structure as the imbedded Markov chain of a GI/M/1 queue, except that the entries are matrices rather than scalars. Similarly, M/G/1-type processes have transition matrices of the same form as the imbedded Markov chain of the M/G/1 queue, except that the entries are matrices. In the imbedded Markov chain of the GI/M/1 queue, all columns but the first have the same entries, except that they are displaced so that the diagonal block entry is common to all. Similarly, in the M/G/1 queue, all rows except the first one are equal once they are shifted so that the diagonal block entry is common to all.
Winfried K. Grassmann, David A. Stanford
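As a small illustration of the matrix analytic approach, the sketch below treats a quasi-birth-and-death process, the simplest GI/M/1-type structure, and computes the rate matrix R as the minimal nonnegative solution of R = A0 + R·A1 + R²·A2 by naive successive substitution (the chapter presents more refined algorithms). The blocks A0, A1, A2 are made-up examples.

# A minimal sketch of the matrix-analytic idea for the QBD special case of a
# GI/M/1-type chain: the rate matrix R is the minimal nonnegative solution of
# R = A0 + R*A1 + R^2*A2, found here by simple fixed-point iteration.  The
# blocks A0 (level up), A1 (same level), A2 (level down) are illustrative.
import numpy as np

A0 = 0.1 * np.eye(2)                      # transitions one level up
A1 = np.full((2, 2), 0.3)                 # transitions within a level
A2 = 0.3 * np.eye(2)                      # transitions one level down
assert np.allclose((A0 + A1 + A2).sum(axis=1), 1.0)   # stochastic blocks

R = np.zeros((2, 2))
for _ in range(200):                      # successive substitution
    R_new = A0 + R @ A1 + R @ R @ A2
    if np.max(np.abs(R_new - R)) < 1e-12:
        break
    R = R_new
# Once R is known, the level probabilities satisfy pi_{n+1} = pi_n @ R,
# i.e. the stationary distribution is "matrix geometric".
print(R)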
7. Use of Characteristic Roots for Solving Infinite State Markov Chains
Abstract
In this chapter, our interest is in determining the stationary distribution of an irreducible positive recurrent Markov chain with an infinite state space. In particular, we consider the solution of such chains using roots or zeros. A root of an equation f(z) = 0 is a zero of the function f(z), and so for notational convenience we use the terms root and zero interchangeably. A natural class of chains that can be solved using roots consists of those with a transition matrix that has an almost Toeplitz structure. Specifically, the classes of M/G/1-type chains and G/M/1-type chains lend themselves to solution methods that utilize roots. In the M/G/1 case, it is natural to transform the stationary equations and solve for the stationary distribution using generating functions. However, in the G/M/1 case the stationary probability vector itself is given directly in terms of roots or zeros. Although our focus in this chapter is on the discrete-time case, we will show how the continuous-time case can be handled by the same techniques. The M/G/1 and G/M/1 classes can be solved using the matrix analytic method [Neuts, 1981, Neuts, 1989], and we will also discuss the relationship between the approach using roots and this method.
H. Richard Gail, Sidney L. Hantler, B. Alan Taylor
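For the scalar case, the root-based approach can be illustrated with the GI/M/1 queue: the embedded queue-length distribution is geometric with ratio σ, the unique root in (0, 1) of σ = Â(µ(1 − σ)), where Â is the Laplace-Stieltjes transform of the interarrival time and µ the service rate. The sketch below, a minimal example with exponential interarrivals chosen so that the answer should reduce to λ/µ, finds this root numerically; the chapter develops the matrix generalizations.

# A minimal sketch of the root-finding idea: for the GI/M/1 queue the embedded
# queue-length distribution is geometric, pi_n = (1 - sigma) * sigma**n, where
# sigma is the unique root in (0, 1) of sigma = A_hat(mu * (1 - sigma)) and
# A_hat is the Laplace-Stieltjes transform of the interarrival time.  With
# exponential interarrivals (rate lam) the root should come out as lam/mu.
from scipy.optimize import brentq

lam, mu = 0.8, 1.0
A_hat = lambda s: lam / (lam + s)          # LST of exponential interarrivals

sigma = brentq(lambda z: z - A_hat(mu * (1.0 - z)), 1e-9, 1.0 - 1e-9)
print(sigma)                               # ~0.8 = lam/mu, as expected
pi = [(1.0 - sigma) * sigma**n for n in range(5)]
print(pi)                                  # first few geometric probabilities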
8. An Introduction to Numerical Transform Inversion and Its Application to Probability Models
Abstract
Numerical transform inversion has an odd place in computational probability. Historically, transforms were exploited extensively for solving queueing and related probability models, but only rarely was numerical inversion attempted. The model descriptions were usually left in the form of transforms. Vivid examples are the queueing books by Takács [Takács, 1962] and Cohen [Cohen, 1982]. When possible, probability distributions were calculated analytically by inverting transforms, e.g., by using tables of transform pairs. Also, moments of probability distributions were computed analytically by differentiating the transforms and, occasionally, approximations were developed by applying asymptotic methods to transforms, but only rarely did anyone try to compute probability distributions by numerically inverting the available transforms. However, there were exceptions, such as the early paper by Gaver [Gaver, 1966]. (For more on the history of numerical transform inversion, see our earlier survey [Abate and Whitt, 1992a].) Hence, in the application of probability models to engineering, transforms became regarded more as mathematical toys than practical tools. Indeed, the conventional wisdom was that numerical transform inversion was very difficult. Even numerical analysts were often doubtful of the numerical stability of inversion algorithms. In queueing, both theorists and practitioners lamented the “Laplace curtain” obscuring our understanding of system behavior.
Joseph Abate, Gagan L. Choudhury, Ward Whitt
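To illustrate that numerical inversion is indeed practical, here is a minimal sketch using the Gaver-Stehfest algorithm, which requires only real transform evaluations; it is shown purely for illustration and is not the Fourier-series method of Abate and Whitt emphasized in the chapter. The test transform F(s) = 1/(s + 1) has the known inverse e^(−t).

# A minimal sketch of numerical Laplace transform inversion using the
# Gaver-Stehfest algorithm (real transform values only).  Illustrative
# example; the chapter emphasizes other inversion methods.
import math

def stehfest_weights(N):                  # N must be even
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s)."""
    V = stehfest_weights(N)
    ln2_t = math.log(2.0) / t
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

F = lambda s: 1.0 / (s + 1.0)
print(invert(F, 1.0), math.exp(-1.0))     # the two values should nearly agree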
9. Optimal Control of Markov Chains
Abstract
Add costs and decisions to a Markov chain and you have a Markov decision chain (MDC), the subject of this chapter. Roughly speaking, the problem is how to control a Markov chain to achieve an economic objective. Control is exercised by taking a sequence of actions, each of which may depend on the currently observed state and may influence both the immediate cost and the next state transition.
Shaler Stidham Jr.
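A minimal sketch of one standard solution technique, value iteration for a discounted Markov decision chain, follows; the two-state, two-action example (transition matrices, one-step costs, and discount factor) is invented for illustration and is not taken from the chapter.

# A minimal sketch of value iteration for a discounted Markov decision chain.
# The two-state, two-action example (transition matrices P[a] and costs c[a])
# is made up for illustration.
import numpy as np

P = {0: np.array([[0.9, 0.1],     # action 0: transition probabilities
                  [0.2, 0.8]]),
     1: np.array([[0.5, 0.5],     # action 1: transition probabilities
                  [0.6, 0.4]])}
c = {0: np.array([2.0, 1.0]),     # action 0: one-step cost in each state
     1: np.array([0.5, 3.0])}     # action 1: one-step cost in each state
beta = 0.9                        # discount factor

v = np.zeros(2)
for _ in range(1000):             # value iteration: v <- min_a (c_a + beta P_a v)
    v_new = np.minimum(c[0] + beta * P[0] @ v, c[1] + beta * P[1] @ v)
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new
policy = np.argmin(np.stack([c[0] + beta * P[0] @ v,
                             c[1] + beta * P[1] @ v]), axis=0)
print(v, policy)                  # optimal discounted cost and action per state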
10. On Numerical Computations of Some Discrete-Time Queues
Abstract
Since Erlang did pioneering work in the application of queues to telephony, a lot has been written about queues in continuous time (see, for example, [Asmussen, 1987, Baccelli and Bremaud, 1994, Bhat and Basawa, 1992, Boxma and Syski, 1988, Bunday, 1986, Bunday, 1996, Chaudhry and Templeton, 1983, Cohen, 1982, Cooper, 1981, Daigle, 1992, Gnedenko and Kovalenko, 1989, Gross and Harris, 1985, Kalashnikov, 1994, Kashyap and Chaudhry, 1988, Kleinrock, 1975, Lipsky, 1992, Medhi, 1991, Prabhu, 1965, Robertazzi, 1990, Srivastava and Kashyap, 1982, Tijms, 1986, White et al., 1975]). In comparison to that large body of literature, not much has been written about queues in discrete time.
Mohan L. Chaudhry
11. The Product Form Tool for Queueing Networks
Abstract
Queueing networks are used widely as modelling and evaluation tools in manufacturing, telecommunications, computer networking, and related areas. Much of the research effort has been devoted to so-called Jackson networks, that is, networks with Poisson arrivals, exponential service times, and routing independent of the state of the system and the history of the customer. The steady-state distribution of Jackson networks can be expressed in a so-called product form. This computationally attractive form will be shown to be directly related to the principle of balance per station. This principle will be used to provide practical insights concerning the following questions:
1. When can a product form be expected?
2. Why is this product form often violated in practice?
3. How can one restore a product form to obtain simple bounds?
Nico M. van Dijk, Winfried Grassmann
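The sketch below illustrates the product form for an open Jackson network: the traffic equations give the effective arrival rate at each station, and the joint steady-state distribution then factorizes into per-station geometric terms. The two-station tandem example (arrival rate, routing matrix, and service rates) is illustrative and not taken from the chapter.

# A minimal sketch of the product form for an open Jackson network: solve the
# traffic equations for the effective arrival rates, then the steady-state
# distribution factorizes into per-station geometric terms.
import numpy as np

lam = np.array([1.0, 0.0])            # external arrival rate at each station
R = np.array([[0.0, 1.0],             # routing: station 1 -> station 2 -> out
              [0.0, 0.0]])
mu = np.array([2.0, 3.0])             # service rates

gamma = np.linalg.solve(np.eye(2) - R.T, lam)   # traffic equations: gamma = lam + gamma R
rho = gamma / mu                                # utilization of each station

def pi(n):
    """Product-form probability of the joint queue-length vector n."""
    return np.prod((1.0 - rho) * rho ** np.array(n))

print(rho, pi([0, 0]), pi([2, 1]))
# sanity check: probabilities over a large truncated grid should sum to ~1
print(sum(pi([i, j]) for i in range(60) for j in range(60)))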
12. Techniques for System Dependability Evaluation
Abstract
A major application area for the probabilistic and numerical techniques explored in the earlier chapters is characterizing the behavior of complex computer and communication systems. While system performance has received a lot of attention in the past, system dependability is increasingly gaining importance. The proliferation of computer and computer-based communication systems has contributed to this in no small measure. This chapter is thus a step toward summarizing the techniques, tools, and recent developments in the field of system dependability evaluation.
Jogesh K. Muppala, Ricardo M. Fricks, Kishor S. Trivedi
Backmatter
Metadata
Title: Computational Probability
Editor: Winfried K. Grassmann
Copyright Year: 2000
Publisher: Springer US
Electronic ISBN: 978-1-4757-4828-4
Print ISBN: 978-1-4419-5100-7
DOI: https://doi.org/10.1007/978-1-4757-4828-4