1993 | Book

An Introduction to Probability and Stochastic Processes

Author: Marc A. Berger

Publisher: Springer New York

Book Series: Springer Texts in Statistics

About this book

These notes were written as a result of my having taught a "nonmeasure theoretic" course in probability and stochastic processes a few times at the Weizmann Institute in Israel. I have tried to follow two principles. The first is to prove things "probabilistically" whenever possible without recourse to other branches of mathematics and in a notation that is as "probabilistic" as possible. Thus, for example, the asymptotics of Pn for large n, where P is a stochastic matrix, is developed in Section V by using passage probabilities and hitting times rather than, say, pulling in Perron–Frobenius theory or spectral analysis. Similarly, in Section II the joint normal distribution is studied through conditional expectation rather than quadratic forms. The second principle I have tried to follow is to prove results only in their simple forms and to eliminate any minor technical computations from proofs, so as to expose the most important steps. Steps in proofs or derivations that involve algebra or basic calculus are not shown; only steps involving, say, the use of independence, a dominated convergence argument, or an assumption in a theorem are displayed. For example, in proving inversion formulas for characteristic functions I omit steps involving evaluation of basic trigonometric integrals and display details only where use is made of Fubini's Theorem or the Dominated Convergence Theorem.
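The convergence of Pn mentioned above is easy to check numerically. A minimal sketch (the 2×2 matrix is our own illustrative choice, not an example from the book): for an irreducible, aperiodic stochastic matrix P, every row of Pn converges to the stationary distribution π.

```python
import numpy as np

# An illustrative stochastic matrix (rows sum to 1); our own choice.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# P^n for large n: each row approaches the stationary distribution pi.
Pn = np.linalg.matrix_power(P, 100)

# pi solves pi P = pi with sum(pi) = 1; for this chain pi = (2/3, 1/3).
pi = np.array([2 / 3, 1 / 3])

print(Pn)  # both rows are approximately (2/3, 1/3)
```

The book develops this limit via passage probabilities and hitting times; the numerical check above only illustrates the statement being proved.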

Table of Contents

Frontmatter
Section I. Univariate Random Variables
Abstract
These are real-valued functions X defined on a probability space, taking on a finite or countably infinite number of values {x1,x2, …}. They can be described by a discrete density function
$${p_X}(x) = \mathbb{P}(X = x).$$
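A small concrete instance of a discrete density (our own example, not the book's): let X be the sum of two fair dice, and tabulate p_X(x) = P(X = x) exactly with rational arithmetic.

```python
from collections import Counter
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two fair dice; our own example.
outcomes = list(product(range(1, 7), repeat=2))

# Discrete density p_X(x) = P(X = x) for X = sum of the two dice.
counts = Counter(a + b for a, b in outcomes)
p_X = {x: Fraction(c, 36) for x, c in counts.items()}

print(p_X[7])  # Fraction(1, 6): 7 is the most likely sum
```

As a density must, the values p_X(x) are nonnegative and sum to 1 over the countable range of X.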
Marc A. Berger
Section II. Multivariate Random Variables
Abstract
Until now we have been restricted in our consideration of two random variables X and Y together. We could only talk about, say, the distribution of X + Y or some function f(X, Y), in the special case where Y is a (Borel) function of X, or where X and Y are both (Borel) functions of some third random variable Z. Now we shall discuss the analysis of joint random variables X and Y in a more general setting.
Marc A. Berger
Section III. Limit Laws
Abstract
In Section II we dealt with finite families X1,…,Xn of joint random variables. Now we shall be considering full sequences X1, X2,… of an infinity of joint random variables. The distribution of such a sequence is determined by the various finite-dimensional d.f.s \({F_{{X_{{n_1}}}, \ldots ,{X_{{n_k}}}}}\), but the jump to infinity introduces many new considerations. In particular, we shall deal with limits, events that occur infinitely often (i.o.), tail events, and various modes of convergence.
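One of the limit laws treated in such a setting, the strong law of large numbers, can be illustrated by simulation (the coin-flip sequence and sample size are our own choices): sample means of i.i.d. fair-coin flips settle near the expected value 1/2 as n grows.

```python
import random

# Simulate X1, X2, ... i.i.d. Bernoulli(1/2); our own illustrative sequence.
random.seed(0)
n = 100_000
flips = [random.randint(0, 1) for _ in range(n)]

# By the strong law of large numbers, (X1 + ... + Xn)/n -> 1/2 a.s.
sample_mean = sum(flips) / n
print(sample_mean)  # close to 0.5
```

This illustrates almost-sure convergence of one sequence of averages; the distinctions between the various modes of convergence are the subject of the section itself.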
Marc A. Berger
Section IV. Markov Chains—Passage Phenomena
Abstract
My treatment of Markov chains in the three chapters which follow is modelled after the material in “Introduction to Stochastic Processes” by Hoel, Port and Stone (Ref. [28]), and is presented here with their kind permission. I have adopted their notation and style, because I feel it is the best way to introduce Markov chains in the spirit of these notes—namely, an approach which combines intuition (of the dynamics) with probabilistic reasoning. The presentation here is compressed and condensed. For a more leisurely account of this material, replete with many examples, problems and related topics, I recommend the Hoel, Port and Stone text. In addition, their text discusses the important topics of differentiation and integration of stochastic processes, Brownian motion and stochastic differential equations, which are not contained herein.
Marc A. Berger
Section V. Markov Chains—Stationary Distributions and Steady State
Abstract
Let Nn(y) denote the number of visits of the Markov chain {Xn} to y during times m = 1,…,n. That is,
$${N_n}(y) = \sum\limits_{m = 1}^n {{I_{\{ y\} }}({X_m})} $$
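The count Nn(y) defined above is straightforward to compute from a simulated trajectory. A minimal sketch (the two-state chain and its transition probabilities are our own illustrative choices, not an example from the book):

```python
import random

# Transition probabilities of a two-state chain; our own example.
# P[x] lists (next state y, probability P_xy).
P = {0: [(0, 0.9), (1, 0.1)],
     1: [(0, 0.2), (1, 0.8)]}

def step(x):
    """Sample X_{m+1} given X_m = x by inverting the cumulative row."""
    r, acc = random.random(), 0.0
    for y, p in P[x]:
        acc += p
        if r < acc:
            return y
    return P[x][-1][0]

random.seed(1)
n, x = 50_000, 0
visits = {0: 0, 1: 0}       # N_n(y) = sum over m = 1..n of I_{y}(X_m)
for _ in range(n):
    x = step(x)
    visits[x] += 1

print(visits[0] / n)        # roughly 2/3, the long-run fraction at state 0
```

For this chain the long-run fractions Nn(y)/n approach the stationary distribution (2/3, 1/3), which is the steady-state phenomenon this section develops.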
Marc A. Berger
Section VI. Markov Jump Processes
Abstract
We want to describe Markov processes that evolve through continuous time t ≥ 0, but in a discrete state space ℒ. The prescription for such a process has two ingredients. There are random jump times 0 < τ1 < τ2 < … < τn < … when the process jumps away from the state it is at, and there are transition probabilities Qxy that govern the transitions at these jump times. The process {X(t): t ≥ 0} itself has piecewise constant paths, which we can take to be right-continuous:
$$X(t) = \left\{ \begin{gathered} {x_0}, 0 \leqslant t < {\tau _1}, \hfill \\ {x_1}, {\tau _1} \leqslant t < {\tau _2}, \hfill \\ {x_2}, {\tau _2} \leqslant t < {\tau _3}, \hfill \\ \ldots \ldots \ldots \ldots \ldots \ldots \ldots \hfill \\ \end{gathered} \right.$$
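The two ingredients of this prescription can be sketched in simulation. In the sketch below (our own illustrative choices: exponential holding times with rates q_x, and a deterministic two-state jump matrix that flips 0 ↔ 1) the path X(t) is piecewise constant and right-continuous, as above.

```python
import random

rates = {0: 1.0, 1: 2.0}   # holding-time rate q_x in each state; our choice
Q = {0: 1, 1: 0}           # at a jump time, 0 -> 1 and 1 -> 0; our choice

def sample_path(x0, t_end, rng):
    """Return the list of (jump time tau_k, new state x_k), starting at x0."""
    t, x = 0.0, x0
    jumps = [(0.0, x0)]
    while True:
        t += rng.expovariate(rates[x])   # exponential holding time in state x
        if t >= t_end:
            return jumps
        x = Q[x]                         # transition governed by Q_xy
        jumps.append((t, x))

def X(t, jumps):
    """Evaluate the right-continuous path: X(t) = x_k for tau_k <= t < tau_{k+1}."""
    state = jumps[0][1]
    for tau, x in jumps:
        if tau <= t:
            state = x
        else:
            break
    return state

rng = random.Random(2)
path = sample_path(0, 10.0, rng)
print(X(0.0, path))   # starts in state 0
```

Exponential holding times are what make the resulting process Markov; the jump chain by itself only records the sequence of states visited.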
Marc A. Berger
Section VII. Ergodic Theory with an Application to Fractals
Abstract
We have already seen examples of limit theorems in Sections V and VI that assert the convergence of temporal averages to spatial averages. Thus if x is a positive recurrent state of an aperiodic irreducible Markov chain, then \(\mathop {\lim }\limits_{n \to \infty } {N_n}(x)/n = \pi (x)\). That is, the temporal average fraction of time the chain spends at state x converges to the spatial average π(x). Similarly, for Markov jump processes \(\frac{1}{t}\int_0^t {{I_{\{ x\} }}(X(s))\,ds} \to \pi (x)\) when x is positive recurrent.
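The jump-process version of this temporal average can be illustrated directly (a self-contained sketch; the two-state process and its rates are our own choices). With holding rates q_0 = 1 and q_1 = 2 and jumps that flip between the two states, the mean holding times are 1 and 1/2, so π = (2/3, 1/3), and the fraction of time spent in state 0 over [0, t] should approach 2/3.

```python
import random

rng = random.Random(3)
rates = [1.0, 2.0]          # holding rates q_0, q_1; our illustrative choice
t_end, t, x = 20_000.0, 0.0, 0
occupation = [0.0, 0.0]     # total time spent in each state over [0, t_end]

while t < t_end:
    hold = rng.expovariate(rates[x])
    occupation[x] += min(hold, t_end - t)   # clip the final holding interval
    t += hold
    x = 1 - x                               # alternate between the two states

# Temporal average (1/t) * integral of I_{0}(X(s)) ds over [0, t_end]
print(occupation[0] / t_end)   # roughly 2/3 = pi(0)
```

This is the same temporal-to-spatial convergence as the discrete-time statement, with occupation time in place of the visit count Nn(x).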
Marc A. Berger
Backmatter
Metadata
Title
An Introduction to Probability and Stochastic Processes
Author
Marc A. Berger
Copyright Year
1993
Publisher
Springer New York
Electronic ISBN
978-1-4612-2726-7
Print ISBN
978-1-4612-7643-2
DOI
https://doi.org/10.1007/978-1-4612-2726-7