
## About this Book

A long time ago I started writing a book about Markov chains, Brownian motion, and diffusion. I soon had two hundred pages of manuscript and my publisher was enthusiastic. Some years and several drafts later, I had a thousand pages of manuscript, and my publisher was less enthusiastic. So we made it a trilogy: *Markov Chains*, *Brownian Motion and Diffusion*, and *Approximating Countable Markov Chains*, familiarly known as MC, B & D, and ACM. I wrote the first two books for beginning graduate students with some knowledge of probability; if you can follow Sections 10.4 to 10.9 of *Markov Chains*, you're in. The first two books are quite independent of one another, and completely independent of this one, which is a monograph explaining one way to think about chains with instantaneous states. The results here are supposed to be new, except when there are specific disclaimers. It's written in the framework of Markov chains; we wanted to reprint in this volume the MC chapters needed for reference, but this proved impossible. Most of the proofs in the trilogy are new, and I tried hard to make them explicit. The old ones were often elegant, but I seldom saw what made them go. With my own, I can sometimes show you why things work. And, as I will argue in a minute, my demonstrations are easier technically. If I wrote them down well enough, you may come to agree.

## Table of Contents

### 1. Restricting the Range

Abstract
Let X be a Markov chain with state space I, stationary standard transitions P, and smooth sample functions. For simplicity, suppose I forms one recurrent class of states relative to P. Let J be a finite subset of I. Let X^J be X watched only when in J; namely, X^J is obtained from X by ignoring the times t with X(t) ∉ J. Call X^J the restriction of X to J. This operation has been considered by Lévy (1951, 1952, 1958) and Williams (1966). The process X^J is defined formally in Section 5. The idea is to introduce γ_J(t), the rightmost time on the X-scale to the left of which X spends time t in J. Then X^J(t) = X[γ_J(t)]. Suppose J and K are finite subsets of I, with J ⊂ K. Then X^J = (X^K)^J, as shown in Section 5. The process X^J is Markov with stationary standard transitions, say P^J on J. This is proved in Section 6. The semigroups {P^J : J ⊂ I} are equicontinuous; consequently, X^J converges to X in probability and in q-lim with probability 1 as J increases to I; in particular, P^J converges to P as J increases to I. These results are proved in Section 7.
David Freedman
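The restriction operation and the consistency relation X^J = (X^K)^J can be made concrete on a single simulated sample path. The sketch below is my own illustration, not material from the book: the helper names `simulate_ctmc` and `restrict` are hypothetical, the chain is a small 3-state example with an assumed generator, and paths are represented as lists of (state, holding time) pairs.

```python
import random

def simulate_ctmc(Q, start, t_max, rng):
    """Simulate a finite-state chain with generator Q (dict of dicts)
    up to time t_max; return a list of (state, holding_time) pairs."""
    path, t, state = [], 0.0, start
    while t < t_max:
        rate = -Q[state][state]                 # total jump rate out of state
        hold = min(rng.expovariate(rate), t_max - t)
        path.append((state, hold))
        t += hold
        if t >= t_max:
            break
        r = rng.random() * rate                 # choose the next state
        for j, q in Q[state].items():
            if j != state:
                r -= q
                if r < 0:
                    state = j
                    break
    return path

def restrict(path, J):
    """X watched only when in J: drop the intervals spent outside J,
    close up the gaps, and merge adjacent intervals in the same state."""
    out = []
    for s, h in path:
        if s in J:
            if out and out[-1][0] == s:
                out[-1] = (s, out[-1][1] + h)
            else:
                out.append((s, h))
    return out

# Consistency X^J = (X^K)^J for J within K, checked on one sample path:
rng = random.Random(1)
Q = {0: {0: -2.0, 1: 1.0, 2: 1.0},
     1: {0: 1.0, 1: -2.0, 2: 1.0},
     2: {0: 1.0, 1: 1.0, 2: -2.0}}
path = simulate_ctmc(Q, 0, 20.0, rng)
J, K = {0}, {0, 1}
assert [s for s, _ in restrict(path, J)] == \
       [s for s, _ in restrict(restrict(path, K), J)]
```

Restricting to K and then to J filters and merges the same intervals as restricting to J directly, so the two routes agree exactly on the visited-state sequence and up to rounding on the holding times.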

### 2. Restricting the Range: Applications

Abstract
Some general theorems on Markov chains are proved in Sections 2 through 6, using the theory developed in Chapter 1. In Section 7, there are hints for dealing with the transient case. A summary can be found at the beginning of Chapter 1. The sections of this chapter are almost independent of one another. For Sections 2 through 6, continue in the setting of Section 1.5. Namely, I is a finite or countably infinite set, with the discrete topology; and Ī = I for finite I, while Ī = I ∪ {φ} is the one-point compactification of I for infinite I. There is a standard stochastic semigroup P on I, for which each i is recurrent and communicates with each j. For a discussion, see Section 1.4. The process X on the probability triple (Ω, P_i) is Markov with stationary transitions P, starting state i, and smooth sample functions. Namely, the sample functions are quasiregular and have metrically perfect level sets with infinite Lebesgue measure. For finite J ⊂ I, the process X^J is X watched only when in J. This process has J-valued right continuous sample functions, which visit each state on a set of times of infinite Lebesgue measure. Relative to P_i, the process X^J is Markov with stationary transitions P^J, and generator Q^J = (P^J)′(0). For a discussion, see Section 1.6. Recall that X^J visits states ξ_{J,0}, ξ_{J,1}, … with holding times τ_{J,0}, τ_{J,1}, …. Recall that µ_J(t) is the time on the X^J-scale corresponding to time t on the X-scale, while γ_J(t) is the largest time on the X-scale corresponding to time t on the X^J-scale.
David Freedman
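The two time changes µ_J and γ_J recalled above have direct counterparts for a step-function sample path. The sketch below is an illustration of my own, under the assumption that a path is a list of (state, holding time) intervals; the names `PATH`, `mu`, `gamma`, and `state_at` are hypothetical, and the specific path is made up. Note that X[γ_J(t)] is then the value of the restricted process X^J at time t.

```python
# A sample path as (state, holding_time) intervals; right continuous.
PATH = [(0, 1.0), (2, 0.5), (1, 2.0), (0, 1.5)]

def mu(path, J, t):
    """mu_J(t): Lebesgue measure of {s <= t : X(s) in J}, i.e. the time
    on the X^J-scale corresponding to time t on the X-scale."""
    acc, clock = 0.0, 0.0
    for s, h in path:
        seg = min(h, max(0.0, t - clock))
        if s in J:
            acc += seg
        clock += h
        if clock >= t:
            break
    return acc

def gamma(path, J, t):
    """gamma_J(t): the largest X-time to the left of which X has spent
    time t in J, computed here as inf{s : mu_J(s) > t}."""
    acc, clock = 0.0, 0.0
    for s, h in path:
        if s in J:
            if acc + h > t:
                return clock + (t - acc)
            acc += h
        clock += h
    return clock  # t exceeds the J-time accumulated on this finite path

def state_at(path, u):
    """X(u) for a right continuous step path."""
    clock = 0.0
    for s, h in path:
        if clock <= u < clock + h:
            return s
        clock += h
    return path[-1][0]
```

With J = {0, 1}, γ_J(1.0) = 1.5: exactly one unit of J-time has elapsed when the excursion through state 2 begins, and γ_J jumps over that excursion, so X[γ_J(1.0)] is the state 1 that the restricted process occupies at X^J-time 1.0.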

### 3. Constructing the General Markov Chain

Abstract
In this chapter, I will construct the general Markov chain with continuous time parameter, countable state space, and stationary standard transitions. Let I be the countable state space. Let I_n be a sequence of finite subsets of I which swell to I. For each n, suppose X_n is Markov with continuous time parameter, has stationary standard transitions, and has right continuous I_n-valued step functions for sample functions. Suppose that X_n is the restriction of X_{n+1} to I_n, for all n. Here is the necessary and sufficient condition for the existence of a process X whose restriction to each I_n is X_n: the sum of the lengths of the I_2-intervals, I_3-intervals, … occurring before X_1-time t is finite. If X is chosen with moderate care, it is automatically Markov with stationary standard transitions and smooth sample functions. Sections 1.5–6 show this construction is general. I will only do the work when I forms one recurrent class; but it is quite easy to drop this condition.
David Freedman
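Both ingredients of this construction can be checked mechanically on finite horizons: the consistency requirement (each X_n is the restriction of X_{n+1} to I_n) and the quantity the existence condition bounds (the time inserted by the larger ranges before a given X_1-time). The sketch below is my own illustration, not the book's construction: the helpers `restrict`, `compatible`, and `excess_before` are hypothetical names, and the nested example I_1 ⊂ I_2 ⊂ I is made up.

```python
def restrict(path, J):
    """X watched only when in J (intervals outside J dropped, gaps
    closed up, adjacent same-state intervals merged)."""
    out = []
    for s, h in path:
        if s in J:
            if out and out[-1][0] == s:
                out[-1] = (s, out[-1][1] + h)
            else:
                out.append((s, h))
    return out

def compatible(chains, subsets):
    """Check the consistency requirement: each X_n must be the
    restriction of X_{n+1} to I_n (state sequences compared exactly,
    holding times up to rounding)."""
    for n in range(len(chains) - 1):
        r = restrict(chains[n + 1], subsets[n])
        if [s for s, _ in r] != [s for s, _ in chains[n]]:
            return False
        if any(abs(x - y) > 1e-9 for (_, x), (_, y) in zip(r, chains[n])):
            return False
    return True

def excess_before(path, J, t):
    """Total length of the non-J intervals occurring before the chain
    has spent time t in J -- the quantity that must stay finite for
    the limiting process to exist."""
    acc, extra = 0.0, 0.0
    for s, h in path:
        if s in J:
            if acc + h >= t:
                return extra
            acc += h
        else:
            extra += h
    return extra

# Nested ranges I_1 = {0} within I_2 = {0, 1} within I = {0, 1, 2}:
X = [(0, 1.0), (1, 0.5), (2, 0.25), (0, 1.0), (2, 0.125), (1, 0.5), (0, 1.0)]
I1, I2 = {0}, {0, 1}
assert compatible([restrict(X, I1), restrict(X, I2), X], [I1, I2])
```

In the example, `excess_before(X, I1, 2.0)` is 0.75: the intervals in states 1 and 2 occurring before two units of I_1-time have elapsed. In the actual construction this sum is taken over an infinite sequence of swelling ranges, and its finiteness for each t is exactly what is needed and sufficient.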


### Backmatter

Further Information