
## About this Book

Assuming only calculus and linear algebra, this book introduces the reader in a technically complete way to measure theory and probability, discrete martingales, and weak convergence. It is self-contained and rigorous, with a tutorial approach that leads the reader to develop basic skills in analysis and probability. While the original goal was to bring discrete martingale theory to a wide readership, the book has been extended to cover the basic topics of measure theory as well and to give an introduction to the Central Limit Theorem and weak convergence. Students of pure mathematics and statistics can expect to acquire a sound introduction to basic measure theory and probability, while a reader with a background in finance, business, or engineering should be able to acquire a technical understanding of discrete martingales in the equivalent of one semester.

J. C. Taylor is a Professor in the Department of Mathematics and Statistics at McGill University in Montreal. He is the author of numerous articles on potential theory, both probabilistic and analytic, and is particularly interested in the potential theory of symmetric spaces.

## Table of Contents

### Chapter I. Probability Spaces

Abstract
An introduction to analysis usually begins with a study of the properties of ℝ, the set of real numbers. It will be assumed that you know something about them. More specifically, it is assumed that you realize that

(i) they form a field (which means that you can add, multiply, subtract, and divide in the usual way), and

(ii) they are (totally) ordered (i.e., for any two numbers a, b ∈ ℝ, either a < b, a = b, or a > b), and a ≥ b if and only if a − b ≥ 0. Note that a < b and 0 < c imply ac < bc; a ≤ b implies a + c ≤ b + c for any c ∈ ℝ; and 0 < 1 and −1 < 0.

J. C. Taylor

### Chapter II. Integration

Abstract
In elementary probability, if Ω is finite, say Ω = {1, 2, …, n}, F is the collection of all subsets of Ω, and P gives weight aᵢ ≥ 0 to {i} with ∑ᵢ₌₁ⁿ aᵢ = 1, then the (mathematical) expectation E[X] of a random variable X on Ω (i.e., a function X : Ω → ℝ) is defined to be ∑ᵢ₌₁ⁿ X(i)aᵢ = ∑ᵢ₌₁ⁿ X(i)P({i}). When aᵢ = 1/n for each i, this expectation is the usual average value of X. Heuristically, this number is what we expect as the average of a large number of “observations” of X (see the weak law of large numbers in Chapter IV). Also, if X is non-negative, then E[X] can be conceived as the “area” under the graph of X: over each i one may imagine a rectangle of width P({i}) and height X(i); then ∑ᵢ₌₁ⁿ X(i)P({i}) is the sum of the “areas” of the rectangles that make up the set under the graph.
J. C. Taylor
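On a finite space, the expectation described in the abstract is just a weighted sum. As a minimal sketch (the space Ω, the weights aᵢ, and the random variable X below are hypothetical choices, not taken from the book):

```python
# Hypothetical finite probability space Ω = {1, ..., n} with weights a_i ≥ 0
# summing to 1; X is an arbitrary illustrative random variable on Ω.
omega = [1, 2, 3, 4]
a = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}   # weights a_i
assert abs(sum(a.values()) - 1.0) < 1e-12

def X(i):
    """A random variable on Ω, i.e. a function X : Ω → ℝ."""
    return i * i

# E[X] = Σ_{i=1}^{n} X(i) a_i  (= Σ X(i) P({i}))
EX = sum(X(i) * a[i] for i in omega)
print(EX)  # ≈ 10.0 for these weights
```

With aᵢ = 1/n this reduces to the ordinary average of the values X(1), …, X(n), as noted in the abstract.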

### Chapter III. Independence and Product Measures

Abstract
Let (Ω, F, P) be a probability space and let X : Ω → ℝ² be a vector-valued function, i.e., a function whose values are vectors in ℝ². For each ω ∈ Ω, let X(ω) = (X₁(ω), X₂(ω)), where X₁(ω) and X₂(ω) are the components of X(ω) with respect to the canonical basis of ℝ² consisting of e₁ = (1, 0) and e₂ = (0, 1).
J. C. Taylor
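The decomposition of a vector-valued map into coordinate components can be sketched concretely. In this hedged example the sample space (outcomes of two coin tosses) and the map X are illustrative assumptions, not taken from the book:

```python
# Hypothetical Ω: outcomes of two coin tosses, encoded as pairs of 0/1.
omega = [(h1, h2) for h1 in (0, 1) for h2 in (0, 1)]

def X(w):
    """Vector-valued map X : Ω → ℝ²."""
    return (float(w[0]), float(w[1]))

def X1(w):
    """First component of X(w), i.e. its coordinate along e₁ = (1, 0)."""
    return X(w)[0]

def X2(w):
    """Second component of X(w), i.e. its coordinate along e₂ = (0, 1)."""
    return X(w)[1]

# X(ω) = (X₁(ω), X₂(ω)) for every ω ∈ Ω
for w in omega:
    assert X(w) == (X1(w), X2(w))
```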

### Chapter IV. Convergence of Random Variables and Measurable Functions

Abstract
In what follows, the results will be stated and proved for probability spaces. Their extension to general σ-finite measure spaces is to be taken for granted unless commented upon. The probabilist’s notation E[X] for the integral ∫ X dP or ∫ X dμ will be used frequently.
J. C. Taylor

### Chapter V. Conditional Expectation and an Introduction to Martingales

Abstract
In this chapter, the conditional expectation operator will be defined and then used in the study of martingales. To begin, one considers the simplest cases of conditional expectation, which are closely related to conditional probability. Then, one proves the Riesz representation theorem for continuous linear functionals on a Hilbert space as a tool for defining conditional expectation for square-integrable random variables. Given this, it is then easy to define the conditional expectation of integrable random variables.
J. C. Taylor
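In the simplest case alluded to above, where the sub-σ-field is generated by a finite partition, conditional expectation amounts to averaging over each atom. A minimal sketch under that assumption (the space, partition, and X below are hypothetical, with P uniform):

```python
# Hedged illustration: on a finite Ω with uniform P, E[X | G] for G generated
# by a partition of Ω is the function that is constant on each atom, equal
# there to the average of X over the atom.
omega = list(range(6))                 # Ω = {0, ..., 5}, P uniform
partition = [{0, 1}, {2, 3, 4}, {5}]   # atoms generating the sub-σ-field G

def X(w):
    return float(w)

def cond_exp(X, partition):
    """Return E[X | G] as a function constant on each atom of the partition."""
    avg = {}
    for atom in partition:
        m = sum(X(w) for w in atom) / len(atom)  # uniform P ⇒ plain average
        for w in atom:
            avg[w] = m
    return lambda w: avg[w]

EXG = cond_exp(X, partition)
print([EXG(w) for w in omega])  # [0.5, 0.5, 3.0, 3.0, 3.0, 5.0]

# Sanity check of the tower property E[E[X | G]] = E[X]:
assert abs(sum(EXG(w) for w in omega) / len(omega)
           - sum(X(w) for w in omega) / len(omega)) < 1e-12
```

For square-integrable X, this atom-wise average is exactly the orthogonal projection of X onto the subspace of G-measurable functions, which is the Hilbert-space viewpoint the chapter develops.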

### Chapter VI. An Introduction to Weak Convergence

Abstract
Let (Xₙ)ₙ≥₁ be a sequence of i.i.d. random variables on a probability space (Ω, F, P). Let F be the distribution function of the common law Q of the random variables.
J. C. Taylor
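The distribution function F(x) = P(X ≤ x) of the common law can be approximated from i.i.d. samples by the empirical distribution function, in the spirit of the convergence results this chapter introduces. A hedged sketch, assuming uniform(0, 1) variables (for which F(x) = x on [0, 1]); the sample size and seed are arbitrary choices:

```python
import random

# Hypothetical illustration: i.i.d. uniform(0,1) samples; the empirical
# distribution function F_n(x) approximates F(x) = P(X ≤ x) = x on [0, 1].
random.seed(0)
n = 10_000
xs = [random.random() for _ in range(n)]

def F_n(x):
    """Empirical distribution function of the sample: fraction of values ≤ x."""
    return sum(1 for v in xs if v <= x) / n

for x in (0.25, 0.5, 0.75):
    print(x, round(F_n(x), 3))  # F_n(x) ≈ F(x) = x for uniform(0,1)
```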

### Backmatter

Further Information