
1996 | Book | 2nd Edition

Probability

Author: A. N. Shiryaev

Publisher: Springer New York

Book Series: Graduate Texts in Mathematics


About this book

In the Preface to the first edition, originally published in 1980, we mentioned that this book was based on the author's lectures in the Department of Mechanics and Mathematics of the Lomonosov University in Moscow, which were issued, in part, in mimeographed form under the title "Probability, Statistics, and Stochastic Processes, I, II" and published by that University. Our original intention in writing the first edition of this book was to divide the contents into three parts: probability, mathematical statistics, and theory of stochastic processes, which corresponds to an outline of a three-semester course of lectures for university students of mathematics. However, in the course of preparing the book, it turned out to be impossible to realize this intention completely, since a full exposition would have required too much space. In this connection, we stated in the Preface to the first edition that only probability theory and the theory of random processes with discrete time were really adequately presented.

Essentially all of the first edition is reproduced in this second edition. Changes and corrections are, as a rule, editorial, taking into account comments made by both Russian and foreign readers of the Russian original and of the English and German translations [S11]. The author is grateful to all of these readers for their attention, advice, and helpful criticisms. In this second English edition, new material has also been added, as follows: in Chapter III, §5, §§7-12; in Chapter IV, §5; in Chapter VII, §§8-10.

Table of Contents

Frontmatter
Introduction
Abstract
The subject matter of probability theory is the mathematical analysis of random events, i.e., of those empirical phenomena which, under certain circumstances, can be described by saying that:
They do not have deterministic regularity (observations of them do not yield the same outcome);
whereas at the same time
They possess some statistical regularity (indicated by the statistical stability of their frequency).
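The "statistical stability of frequency" mentioned above can be seen numerically. The following is a minimal illustration (not from the book, with an arbitrary fixed seed): the relative frequency of heads in repeated simulated fair-coin tosses settles near 1/2 as the number of tosses grows.

```python
# Illustrative sketch: statistical stability of the frequency of heads.
import random

random.seed(0)  # fixed seed so the run is reproducible

def frequency_of_heads(n_tosses: int) -> float:
    """Relative frequency of heads in n_tosses simulated fair-coin tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

# Frequencies for increasingly long series of tosses; they cluster near 0.5.
freqs = {n: frequency_of_heads(n) for n in (100, 10_000, 1_000_000)}
```

The individual outcomes are unpredictable, yet the frequency for the longest series deviates from 1/2 by only a fraction of a percent.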
A. N. Shiryaev
Chapter I. Elementary Probability Theory
Abstract
Let us consider an experiment of which all possible results are included in a finite number of outcomes \( \omega_1, \ldots, \omega_N \). We do not need to know the nature of these outcomes, only that there are a finite number N of them.
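A finite experiment of this kind can be modelled directly. The sketch below (a hypothetical example, not taken from the text) uses two fair dice, so the sample space has N = 36 equally likely outcomes and the classical probability of an event A is |A|/N.

```python
# Classical (uniform) probability on a finite sample space: two fair dice.
from fractions import Fraction
from itertools import product

Omega = list(product(range(1, 7), repeat=2))   # outcomes omega = (a, b)
N = len(Omega)                                 # N = 36

def prob(event) -> Fraction:
    """P(A) = |A| / N for an event A, given as a predicate on outcomes."""
    return Fraction(sum(1 for w in Omega if event(w)), N)

p_seven = prob(lambda w: w[0] + w[1] == 7)     # 6 favourable outcomes out of 36
```

Exact rational arithmetic via `Fraction` keeps the elementary probabilities free of rounding.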
A. N. Shiryaev
Chapter II. Mathematical Foundations of Probability Theory
Abstract
The models introduced in the preceding chapter enabled us to give a probabilistic-statistical description of experiments with a finite number of outcomes. For example, the triple \( (\Omega, \mathscr{A}, P) \) with
$$\Omega = \left\{ \omega : \omega = (a_1, \ldots, a_n),\ a_i = 0, 1 \right\},\quad \mathscr{A} = \left\{ A : A \subseteq \Omega \right\}$$
and \( p(\omega) = p^{\sum a_i} q^{n - \sum a_i} \) is a model for the experiment in which a coin is tossed n times "independently" with probability p of falling heads. In this model the number N(Ω) of outcomes, i.e., the number of points in Ω, is the finite number \( 2^n \).
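This model can be checked mechanically: with Ω the set of 0/1 strings of length n, the \(2^n\) probabilities \(p^{\sum a_i} q^{n-\sum a_i}\) sum to 1. The sketch below does this in exact arithmetic; the values n = 5 and p = 1/3 are illustrative choices, not from the text.

```python
# Verifying the n-fold coin-tossing model: |Omega| = 2^n and sum of p(omega) = 1.
from fractions import Fraction
from itertools import product

n = 5
p = Fraction(1, 3)            # probability of heads (illustrative)
q = 1 - p

Omega = list(product((0, 1), repeat=n))

def P(omega) -> Fraction:
    k = sum(omega)            # number of heads among the n tosses
    return p**k * q**(n - k)

N_Omega = len(Omega)                 # the finite number 2^n
total = sum(P(w) for w in Omega)     # exact arithmetic: equals 1
```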
A. N. Shiryaev
Chapter III. Convergence of Probability Measures. Central Limit Theorem
Abstract
Many of the fundamental results in probability theory are formulated as limit theorems. Bernoulli's law of large numbers was formulated as a limit theorem; so was the De Moivre-Laplace theorem, which can fairly be called the origin of a genuine theory of probability and, in particular, which led the way to numerous investigations that clarified the conditions for the validity of the central limit theorem. Poisson's theorem on the approximation of the binomial distribution by the "Poisson" distribution in the case of rare events was formulated as a limit theorem. After the example of these propositions, and of results on the rapidity of convergence in the De Moivre-Laplace and Poisson theorems, it became clear that in probability it is necessary to deal with various kinds of convergence of distributions, and to establish the rapidity of convergence connected with the introduction of various "natural" measures of the distance between distributions.

In the present chapter we shall discuss some general features of the convergence of probability distributions and of the distance between them. In this section we take up questions in the general theory of weak convergence of probability measures in metric spaces. The De Moivre-Laplace theorem, the progenitor of the central limit theorem, finds a natural place in this theory. From §3, it is clear that the method of characteristic functions is one of the most powerful means for proving limit theorems on the weak convergence of probability distributions in \( R^n \). In §6, we consider questions of metrizability of weak convergence. Then, in §8, we turn our attention to a different kind of convergence of distributions (stronger than weak convergence), namely convergence in variation. Proofs of the simplest results on the rapidity of convergence in the central limit theorem and Poisson's theorem will be given in §§10 and 11.
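A numerical glimpse of the De Moivre-Laplace theorem: for \(S_n \sim \mathrm{Bin}(n, p)\), the local approximation says \(P(S_n = k) \approx \varphi(x)/\sqrt{npq}\) with \(x = (k - np)/\sqrt{npq}\). The sketch below (parameters chosen for illustration) compares the exact binomial probability with this normal approximation.

```python
# De Moivre-Laplace local approximation vs. the exact binomial probability.
import math

def binom_pmf(n: int, k: int, p: float) -> float:
    """Exact P(S_n = k) for S_n ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_approx(n: int, k: int, p: float) -> float:
    """Local normal approximation phi((k - np)/sigma) / sigma, sigma = sqrt(npq)."""
    sigma = math.sqrt(n * p * (1 - p))
    x = (k - n * p) / sigma
    return math.exp(-x * x / 2) / (sigma * math.sqrt(2 * math.pi))

n, p, k = 1000, 0.5, 500
exact = binom_pmf(n, k, p)
approx = normal_approx(n, k, p)
```

For these values the two quantities agree to within a fraction of a percent, in line with the known \(O(1/n)\) rate at the centre of the distribution.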
A. N. Shiryaev
Chapter IV. Sequences and Sums of Independent Random Variables
Abstract
The series \( \sum_{n = 1}^\infty (1/n) \) diverges and the series \( \sum_{n = 1}^\infty (-1)^n (1/n) \) converges. We ask the following question. What can we say about the convergence or divergence of a series \( \sum_{n = 1}^\infty (\xi_n / n) \), where \( \xi_1, \xi_2, \ldots \) is a sequence of independent identically distributed Bernoulli random variables with \( P(\xi_1 = +1) = P(\xi_1 = -1) = 1/2 \)? In other words, what can be said about the convergence of a series whose general term is ±1/n, where the signs are chosen in a random manner, according to the sequence \( \xi_1, \xi_2, \ldots \)?
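One realization of this random-sign series can be simulated. Since \(\sum \mathrm{Var}(\xi_n/n) = \sum 1/n^2 < \infty\), Kolmogorov's two-series theorem gives almost-sure convergence, so the partial sums of a single realization should settle down. The sketch below (a simulation with an arbitrary fixed seed, not from the text) records partial sums at a few checkpoints.

```python
# One realization of sum xi_n / n with xi_n = +-1 equally likely.
import random

random.seed(1)  # fixed seed for reproducibility

partials = {}
s = 0.0
for n in range(1, 100_001):
    s += random.choice((-1.0, 1.0)) / n
    if n in (1_000, 10_000, 100_000):
        partials[n] = s
```

The later partial sums differ only slightly, reflecting that the tail \(\sum_{n > m} \xi_n/n\) has standard deviation of order \(1/\sqrt{m}\).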
A. N. Shiryaev
Chapter V. Stationary (Strict Sense) Random Sequences and Ergodic Theory
Abstract
Let \( (\Omega, \mathscr{F}, P) \) be a probability space and \( \xi = (\xi_1, \xi_2, \ldots) \) a sequence of random variables or, as we say, a random sequence. Let \( \theta_k \xi \) denote the sequence \( (\xi_{k+1}, \xi_{k+2}, \ldots) \).
A. N. Shiryaev
Chapter VI. Stationary (Wide Sense) Random Sequences. L²-Theory
Abstract
According to the definition given in the preceding chapter, a random sequence \( \xi = (\xi_1, \xi_2, \ldots) \) is stationary in the strict sense if, for every set \( B \in \mathscr{B}(R^\infty) \) and every n ≥ 1,
$$ P\{ (\xi_1, \xi_2, \ldots) \in B\} = P\{ (\xi_{n+1}, \xi_{n+2}, \ldots) \in B\}. $$
(1)
A. N. Shiryaev
Chapter VII. Sequences of Random Variables That Form Martingales
Abstract
The study of the dependence of random variables arises in various ways in probability theory. In the theory of stationary (wide sense) random sequences, the basic indicator of dependence is the covariance function, and the inferences made in this theory are determined by the properties of that function. In the theory of Markov chains (§12 of Chapter I; Chapter VIII) the basic dependence is supplied by the transition function, which completely determines the development of the random variables involved in Markov dependence.
A. N. Shiryaev
Chapter VIII. Sequences of Random Variables That Form Markov Chains
Abstract
In Chapter I (§12), for finite probability spaces, we took the basic idea to be that of Markov dependence between random variables. We also presented a variety of examples and considered the simplest regularities that are possessed by random variables that are connected by a Markov chain.
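The role of the transition function is easy to see on a toy example. Below is a hypothetical two-state chain (the numbers are illustrative, not from the text): the transition matrix alone determines how an initial distribution evolves, via \( \pi_{k+1} = \pi_k P \), and iteration drives it toward the stationary distribution.

```python
# A toy two-state Markov chain driven entirely by its transition matrix.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(pi, P):
    """One step of the chain: (pi P)_j = sum_i pi_i P_{ij}."""
    m = len(P)
    return [sum(pi[i] * P[i][j] for i in range(m)) for j in range(m)]

pi = [1.0, 0.0]          # start deterministically in state 0
for _ in range(60):
    pi = step(pi, P)
# pi is now very close to the stationary distribution (5/6, 1/6),
# the solution of pi = pi P with pi_0 + pi_1 = 1.
```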
A. N. Shiryaev
Backmatter
Metadata
Title
Probability
Author
A. N. Shiryaev
Copyright Year
1996
Publisher
Springer New York
Electronic ISBN
978-1-4757-2539-1
Print ISBN
978-1-4757-2541-4
DOI
https://doi.org/10.1007/978-1-4757-2539-1