
About this Book

Today the theory of random processes is a large field of mathematics with many different branches, and the task of choosing topics for a brief introduction to this theory is far from simple. This introduction to the theory of random processes uses mathematical models that are simple but of some importance for applications. We consider various processes whose development in time depends on random factors. The fundamental problem can be stated briefly as follows: given some relatively simple characteristics of a process, compute the probability of another, possibly very complicated event, or estimate a random variable related to the behaviour of the process. The models are chosen in such a way that the different methods of the theory of random processes can be discussed with reference to them. The book starts with a treatment of homogeneous Markov processes with a countable number of states. The main topics are the ergodic theorem and the method of Kolmogorov's differential equations (Secs. 1-4), and the Brownian motion process, the connecting link being the transition from Kolmogorov's differential-difference equations for random walk to a limit diffusion equation (Sec. 5).

Table of Contents

Frontmatter

Section 1. Random Processes with Discrete State Space

Examples
Abstract
We consider the process of radioactive decay where radium Ra is converted into radon Rn over a period of time. In this process, at the time of disintegration, the Ra atom emits an α-particle (the nucleus of the helium atom He) and changes from Ra into Rn. It is well known that this process is a random process.
Yuriĭ A. Rozanov
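The random character of decay can be sketched in a few lines of Python: each atom decays after an independent exponential waiting time, so the fraction of Ra atoms still intact at time t should be close to e^(-λt). The decay rate and atom count below are illustrative, not taken from the text.

```python
import math
import random

def surviving_fraction(n_atoms, lam, t, rng):
    """Simulate i.i.d. exponential decay times with rate lam and count
    the atoms that have not yet decayed (are still Ra) at time t."""
    alive = sum(1 for _ in range(n_atoms) if rng.expovariate(lam) > t)
    return alive / n_atoms

rng = random.Random(0)
lam, t = 0.5, 2.0
frac = surviving_fraction(200_000, lam, t, rng)
# Law of radioactive decay: the expected surviving fraction is exp(-lam * t).
expected = math.exp(-lam * t)
```

With 200 000 atoms the simulated fraction agrees with the exponential law to within a few tenths of a percent.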

Section 2. Homogeneous Markov Processes with a Countable Number of States

Kolmogorov’s Differential Equations
Abstract
We shall consider a system whose state at time t is ξ(t). Let the number of possible states be finite or countable. As usual, we designate each state by a number i = 0, 1, … . We suppose that the transition of the system from one state to another is caused by chance and obeys the laws described in (1.9), (1.10), with transition probabilities
$$ p_{ij}(t) = P\left\{ \xi(t) = j \mid \xi(0) = i \right\}, \quad i, j = 0, 1, \ldots \qquad \Bigl( \sum\limits_j p_{ij}(t) = 1 \Bigr). $$
(2.1)
We shall call ξ(t), t ≥ 0, a homogeneous Markov process.
Yuriĭ A. Rozanov
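For a finite state space the transition probabilities p_ij(t) form the matrix exp(tQ) for a generator matrix Q, and Kolmogorov's differential equations express exactly this fact. A minimal pure-Python sketch, with a hypothetical two-state generator, checks the normalization Σ_j p_ij(t) = 1 of (2.1) and the semigroup property P(s + t) = P(s)P(t):

```python
def mat_mul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(Q, t, terms=60):
    """P(t) = exp(tQ) via the Taylor series (adequate for small tQ)."""
    n = len(Q)
    P = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in P]                 # running term (tQ)^k / k!
    tQ = [[t * q for q in row] for row in Q]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in mat_mul(term, tQ)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

# Hypothetical generator: rate 1.0 out of state 0, rate 2.0 out of state 1.
Q = [[-1.0, 1.0], [2.0, -2.0]]
P1 = mat_exp(Q, 1.0)
row_sums = [sum(row) for row in P1]   # each row should sum to 1
P2 = mat_exp(Q, 2.0)
CK = mat_mul(P1, P1)                  # semigroup property: P(2) = P(1) P(1)
```

The semigroup identity is the matrix form of the Markov property for the transition probabilities.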

Section 3. Homogeneous Markov Processes with a Countable Number of States

Convergence to a Stationary Distribution
Abstract
Considering a homogeneous Markov process ξ(t), t ≥ 0, with a countable number of states j, we say that the probability distribution \({p_j^*} \) is stationary, if
$$ p_j^* = \sum\limits_k p_k^* \, p_{kj}(t), \quad j = 0, 1, \ldots , $$
(3.1)
where {pkj(t)} are the transition probabilities of the process. It follows from the general formulas (2.3) and (2.4) with a stationary initial distribution \(p_i^0 = p_i^* \) that
$$ P\left\{ \xi(t_1 + t) = i_1, \ldots, \xi(t_n + t) = i_n \right\} = P\left\{ \xi(t_1) = i_1, \ldots, \xi(t_n) = i_n \right\}, $$
(3.2)
i.e. the joint probability distribution of the random variables ξ(t1), …, ξ(tn) does not change under a time shift t ≥ 0. In particular, the probability distribution of the variable ξ(t) is exactly the same for all t:
$$ p_j \left( t \right) = P\left\{ {\xi \left( t \right) = j} \right\} = p_j^* ,\,j = 0,1, \ldots . $$
(3.3)
Yuriĭ A. Rozanov
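The defining property (3.1) says that a stationary distribution is a fixed point of the transition operator. A small sketch (pure Python, using the hypothetical two-state generator Q = [[-1, 1], [2, -2]], whose exact stationary distribution is (2/3, 1/3)) runs a discrete skeleton chain P ≈ I + hQ and observes convergence to that fixed point:

```python
def step(p, P):
    """One transition: p_j <- sum_k p_k P[k][j], as in (3.1)."""
    n = len(P)
    return [sum(p[k] * P[k][j] for k in range(n)) for j in range(n)]

# Discrete skeleton P = I + hQ of the generator Q = [[-1, 1], [2, -2]];
# its stationary distribution coincides with that of Q, namely (2/3, 1/3).
h = 0.001
P = [[1 - 1 * h, 1 * h], [2 * h, 1 - 2 * h]]

p = [1.0, 0.0]             # start in state 0
for _ in range(20_000):    # run up to time t = 20
    p = step(p, P)

# Stationarity: the limit is (numerically) invariant under a further step.
p_next = step(p, P)
```

Convergence is geometric here, governed by the spectral gap of the chain.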

Section 4. Branching Processes

Method of Generating Functions
Abstract
We now consider a branching process ξ(t), t ≥ 0, for example the transformation of particles of one type, which follows the principle that each particle existing at time s is, independently of the past (up to time s), transformed into n particles with probability pn(t), n = 0, 1, … . We characterize the state of the process at time t by the total number ξ(t) of particles existing at that moment (we do not exclude the possibility ξ(t) = ∞).
Yuriĭ A. Rozanov
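A standard consequence of the generating-function method is that the extinction probability of a branching process is the smallest non-negative root of s = f(s), where f is the offspring generating function, and that this root is reached by iterating f from 0. A sketch with a hypothetical offspring law (p0 = 1/4, p1 = 1/4, p2 = 1/2, so the exact root is 1/2):

```python
def extinction_probability(f, iters=200):
    """Smallest non-negative root of q = f(q), obtained by the monotone
    iteration q_{n+1} = f(q_n) starting from q_0 = 0."""
    q = 0.0
    for _ in range(iters):
        q = f(q)
    return q

# Hypothetical offspring distribution p0 = 1/4, p1 = 1/4, p2 = 1/2:
# generating function f(s) = 1/4 + s/4 + s^2/2, mean offspring 5/4 > 1.
f = lambda s: 0.25 + 0.25 * s + 0.5 * s * s
q = extinction_probability(f)   # the exact fixed point here is 1/2
```

Since the mean offspring number exceeds 1, the process is supercritical and the extinction probability is strictly less than 1.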

Section 5. Brownian Motion

The Diffusion Equation and Some Properties of the Trajectories
Abstract
We consider a particle moving in a homogeneous fluid. It undergoes chaotic collisions with the molecules of the fluid and, as a result, performs a continuous disordered motion, which is called Brownian motion.
Yuriĭ A. Rozanov
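The passage from random walk to the diffusion limit mentioned in the book's overview can be illustrated numerically: a symmetric walk with space step √h and time step h has, at time t, mean 0 and variance t, matching the Brownian motion process. A minimal simulation with illustrative parameters:

```python
import random

def walk_value(n_steps, h, rng):
    """Value at time t = n_steps * h of a symmetric random walk with
    space step sqrt(h); this approximates Brownian motion as h -> 0."""
    dx = h ** 0.5
    return sum(dx if rng.random() < 0.5 else -dx for _ in range(n_steps))

rng = random.Random(1)
t, h = 1.0, 0.01
n = int(t / h)
samples = [walk_value(n, h, rng) for _ in range(20_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# For Brownian motion w(t): M w(t) = 0 and M w(t)^2 = t.
```

The sample histogram of these values also approaches the Gaussian density that solves the limit diffusion equation.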

Section 6. Random Processes in Multi-Server Systems

Abstract
We shall consider two examples of random processes arising in different multi-server systems.
Yuriĭ A. Rozanov

Section 7. Random Processes as Functions in Hilbert Space

Abstract
One of the most useful approaches to the study of random processes is, as we shall see in the following, the introduction of the Hilbert space H of random variables ξ, M|ξ|2 < ∞, with scalar product
$$ \left( {\xi _1 ,\xi _2 } \right) = M\xi _1 \bar \xi _2 $$
(7.1)
and the quadratic-mean norm
$$ \left\| \xi \right\| = \left( {M\left| \xi \right|^2 } \right)^{1/2} . $$
(7.2)
This space is complete; hence each fundamental sequence of variables ξn ∈ H, i.e. each sequence with
$$ \left\| \xi_n - \xi_m \right\| \to 0 $$
(7.3)
for n, m → ∞, has a limit in quadratic mean ξ = limn→∞ ξn in the space H; i.e., there exists a variable ξ ∈ H such that
$$ \left\| \xi_n - \xi \right\| \to 0 \quad \text{for } n \to \infty . $$
(In this space H we identify variables that are equal with probability 1.)
Yuriĭ A. Rozanov
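On a finite probability space the Hilbert-space structure of (7.1)-(7.2) is concrete: random variables are tuples of values, the scalar product is a weighted sum, and inequalities such as Cauchy-Schwarz follow from the axioms alone. A sketch with a hypothetical three-point distribution:

```python
# Random variables on a finite space Omega = {0, 1, 2} are tuples of
# (complex) values; M denotes expectation with respect to probs.
probs = [0.2, 0.3, 0.5]        # hypothetical distribution on Omega

def inner(x, y):
    """(x, y) = M x * conj(y), the scalar product of (7.1)."""
    return sum(p * a * b.conjugate() for p, a, b in zip(probs, x, y))

def norm(x):
    """||x|| = (M |x|^2)^(1/2), the quadratic-mean norm of (7.2)."""
    return abs(inner(x, x)) ** 0.5

xi1 = (1 + 1j, 2.0, -1.0)
xi2 = (0.5, 1j, 3.0)
# Cauchy-Schwarz, valid in any space with such a scalar product:
lhs = abs(inner(xi1, xi2))
rhs = norm(xi1) * norm(xi2)
```

The norm of the constant variable 1 equals 1, since the probabilities sum to 1.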

Section 8. Stochastic Measures and Integrals

Abstract
The usual tools of mathematical analysis and of the theory of ordinary differential equations cannot be applied to random functions of the type of the Brownian motion process, which arise in several branches of probability theory and are important for applications. The reason is that these functions turn out not to be differentiable. In the theory of random processes one therefore uses stochastic analysis and stochastic differential equations, whose basic element is the stochastic integral, with which we shall deal now.
Yuriĭ A. Rozanov

Section 9. The Stochastic Ito Integral and Stochastic Differentials

Abstract
Suppose that for each t a σ-algebra of events Bt is given; we interpret it as the totality of events up to time t. Then, accordingly,
$$ B_s \subseteq B_t, \quad s \leqslant t. $$
(9.1)
We generalize the idea of the stochastic integral to random functions ϕ(t), the values of which at each time t are random variables
$$ \phi(t) = \phi(\omega, t), \quad \omega \in \Omega, $$
on the space of elementary events Ω; these functions are measurable with respect to the corresponding σ-algebra of events Bt. We shall call such random functions non-anticipating. We suppose that the stochastic measure η(Δ) with the properties described in (8.1)–(8.5) has the mean value
$$ M\eta(\Delta) = 0, $$
(9.2)
where for each Δ = (s, t] the random variable η(Δ) is measurable with respect to the σ-algebra Bt and does not depend on the σ-algebra Bs of events up to time s.
Yuriĭ A. Rozanov
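The non-anticipating requirement is what forces the left-endpoint choice in the approximating sums of the Ito integral. A sketch (illustrative step count and seed) computes ∫₀ᵀ w dw by left-endpoint sums along a simulated Wiener path and compares it with the value (w(T)² − T)/2 given by Ito's formula:

```python
import random

rng = random.Random(2)
T, n = 1.0, 200_000
h = T / n
w, integral = 0.0, 0.0
for _ in range(n):
    dw = rng.gauss(0.0, h ** 0.5)   # Wiener increment over a step of length h
    integral += w * dw              # left endpoint: the non-anticipating choice
    w += dw
# Ito's formula gives the exact value of the integral: (w(T)^2 - T) / 2.
exact = (w * w - T) / 2
```

The extra −T/2, absent from ordinary calculus, comes from the quadratic variation of the Wiener path.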

Section 10. Stochastic Differential Equations

Abstract
In this section we shall consider (real) random processes ξ(t), t ≥ t0, characterized by the stochastic differential
$$ \begin{array}{*{20}{c}} {d\xi(t) = \alpha(t)\,dt + \beta(t)\,d\eta(t),}\\ {\alpha(t) = a(t, \xi(t)), \quad \beta(t) = b(t, \xi(t)),} \end{array} $$
where a(t, x), b(t, x) are non-random functions of the parameters t ≥ t0 and −∞ < x < ∞.
Yuriĭ A. Rozanov
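Equations of this form can be integrated numerically with the Euler-Maruyama scheme, a standard method not discussed in the text; one checks it by noting that for b = 0 it reduces to Euler's method for an ordinary differential equation. A hedged sketch with illustrative coefficients:

```python
import math
import random

def euler_maruyama(a, b, x0, t0, T, n, rng):
    """Euler-Maruyama scheme for d xi = a(t, xi) dt + b(t, xi) d eta(t):
    advance by the drift times h plus the diffusion times a Wiener
    increment of variance h."""
    h = (T - t0) / n
    t, x = t0, x0
    for _ in range(n):
        x += a(t, x) * h + b(t, x) * rng.gauss(0.0, math.sqrt(h))
        t += h
    return x

rng = random.Random(3)
# Sanity check with b = 0: the scheme reduces to Euler's method for the
# ordinary equation x' = x, x(0) = 1, whose solution at T = 1 is e.
x = euler_maruyama(lambda t, x: x, lambda t, x: 0.0,
                   1.0, 0.0, 1.0, 100_000, rng)
err = abs(x - math.e)
```

With noise switched on (b ≠ 0) the same loop produces sample paths of the diffusion defined by a and b.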

Section 11. Diffusion Processes

Kolmogorov’s Differential Equations
Abstract
Suppose that the probability densities p(s, x, t, y), −∞ < y < ∞, depend on the parameters t > s > t0, −∞ < x < ∞ in such a way that the Kolmogorov–Chapman equation holds:
$$ p(s, x, t, y) = \int_{ - \infty }^\infty p(s, x, u, z)\, p(u, z, t, y)\, dz, \quad s < u < t. $$
(11.1)
Yuriĭ A. Rozanov
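Equation (11.1) can be checked numerically for the prototypical diffusion, the Brownian motion process, whose transition density is Gaussian with mean x and variance t − s. The time points and grid below are illustrative:

```python
import math

def p(s, x, t, y):
    """Transition density of Brownian motion: Gaussian in y with
    mean x and variance t - s."""
    v = t - s
    return math.exp(-(y - x) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

# Verify the Kolmogorov-Chapman equation (11.1) by a Riemann sum over z:
# p(s,x,t,y) = Integral of p(s,x,u,z) p(u,z,t,y) dz, for s < u < t.
s, x, u, t, y = 0.0, 0.3, 0.7, 1.5, -0.4
dz = 0.01
integral = sum(p(s, x, u, z) * p(u, z, t, y) * dz
               for z in (k * dz for k in range(-1000, 1001)))
direct = p(s, x, t, y)
```

The agreement reflects the fact that the sum of independent Gaussian increments is again Gaussian, with variances adding.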

Section 12. Linear Stochastic Differential Equations and Linear Random Processes

Abstract
We know that the general solution of the linear differential equation
$$ x^{\left( n \right)} \left( t \right) - a_1 \left( t \right)x^{\left( {n - 1} \right)} \left( t \right) - \ldots - a_n \left( t \right)x\left( t \right) = 0,\,t > t_0 $$
(12.1)
(with variable coefficients) can be written in the form
$$ x\left( t \right) = \sum\limits_{k = 0}^{n - 1} {\omega _k \left( {t,t_0 } \right)x_k ,\,t \geqslant t_0 ,} $$
(12.2)
where we denote by x0, …, xn-1 the initial values
$$ x_0 = x(t_0), \ldots, x_{n - 1} = x^{(n - 1)}(t_0) $$
and by ωk(t, t0) the special solutions with initial values xk = 1 and xj = 0 for j ≠ k.
Yuriĭ A. Rozanov
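The superposition formula (12.2) can be checked numerically for the simple instance n = 2, a1 = 0, a2 = 1 of (12.1), i.e. x'' = x, whose fundamental solutions with t0 = 0 are cosh t and sinh t. A sketch using plain Euler integration (step count illustrative):

```python
import math

def solve(x0, v0, t, n=100_000):
    """Euler integration of x'' = x, i.e. (12.1) with n = 2, a1 = 0,
    a2 = 1, from initial values x(0) = x0, x'(0) = v0."""
    h = t / n
    x, v = x0, v0
    for _ in range(n):
        x, v = x + h * v, v + h * x
    return x

t = 1.0
w0 = solve(1.0, 0.0, t)    # fundamental solution with x_0 = 1, x_1 = 0
w1 = solve(0.0, 1.0, t)    # fundamental solution with x_0 = 0, x_1 = 1
# Superposition (12.2): x(t) = w0 * x_0 + w1 * x_1 for any initial data.
x = solve(2.0, -1.0, t)
superposed = 2.0 * w0 - 1.0 * w1
err_super = abs(x - superposed)
err_cosh = abs(w0 - math.cosh(t))   # w0 should approximate cosh(t)
```

Linearity of the equation makes the superposition identity hold exactly for the discrete scheme as well, up to rounding.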

Section 13. Stationary Processes

Spectral Analysis and Linear Transformations
Abstract
The random process ξ(t), M|ξ(t)|2 < ∞, on the real line −∞ < t < ∞ is called stationary in the wide sense if its expectation Mξ(t) does not depend on t (we set Mξ(t) = 0) and if the correlation function
$$ B(t, s) = M\xi(t)\,\overline{\xi(s)}, \quad -\infty < s, t < \infty, $$
depends only on the difference t − s:
$$ M\xi(t)\,\overline{\xi(s)} = B(t - s). $$
(13.1)
The corresponding function B(t), −∞ < t < ∞, is also called the correlation function of the stationary process.
Yuriĭ A. Rozanov
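For a process with a discrete spectrum, ξ(t) = Σk ζk e^(iλk t) with uncorrelated amplitudes ζk, the correlation function is Σk bk e^(iλk t) e^(−iλk s) with bk = M|ζk|², which depends on t − s only. A sketch with a hypothetical spectrum checks this shift invariance numerically:

```python
import cmath

# Hypothetical discrete spectrum: pairs (lam_k, b_k) of frequencies and
# weights b_k = M |zeta_k|^2.
spectrum = [(-1.0, 0.5), (0.3, 1.0), (2.0, 0.25)]

def B(t, s):
    """M xi(t) * conj(xi(s)) for xi(t) = sum_k zeta_k exp(i lam_k t):
    uncorrelated amplitudes kill the cross terms in the mean, leaving
    sum_k b_k exp(i lam_k t) exp(-i lam_k s)."""
    return sum(b * cmath.exp(1j * lam * t) * cmath.exp(-1j * lam * s)
               for lam, b in spectrum)

# Wide-sense stationarity (13.1): the value depends on t - s only.
v1 = B(2.7, 1.2)
v2 = B(101.5, 100.0)   # same difference t - s = 1.5, shifted far in time
```

This is the simplest instance of the spectral representation of a stationary correlation function.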

Section 14. Some Problems of Optimal Estimation

Abstract
Let us consider the model of a random process ξ(t), t ≥ 0, with stochastic differential
$$ d\xi \left( t \right) = \theta \left( t \right)dt + d\eta \left( t \right). $$
(14.1)
We suppose that θ(t) = θ is an unknown constant, which has to be estimated from the values ξ(t), 0 ≤ t ≤ T. Assume we are given the random process ξ(t), t ≥ 0, but we do not know the drift Mξ(t) = θt, t ≥ 0. The deviation of ξ(t) from the expectation Mξ(t) is described by the standard Wiener process \(\eta \left( t \right) = \xi \left( t \right) - \theta t,\,t\, \geqslant 0\).
Yuriĭ A. Rozanov
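A natural estimator in this model (a sketch, not necessarily the estimator developed in the text) is θ̂ = ξ(T)/T: it is unbiased, and since ξ(T) = θT + η(T) with η(T) of variance T, its error has variance 1/T. A simulation with illustrative parameters:

```python
import random

rng = random.Random(4)
theta, T, n = 0.7, 100.0, 100_000
h = T / n
xi = 0.0
# Simulate d xi = theta dt + d eta with eta a standard Wiener process.
for _ in range(n):
    xi += theta * h + rng.gauss(0.0, h ** 0.5)
# Estimator theta_hat = xi(T) / T: unbiased, with variance 1 / T,
# so longer observation intervals give sharper estimates.
theta_hat = xi / T
```

With T = 100 the standard deviation of the estimate is 0.1, so the simulated value lands close to the true θ.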

Section 15. A Filtration Problem

Kalman-Bucy Filter
Abstract
We consider again a random process ξ(t), t ≥ t0, with stochastic differential
$$ d\xi \left( t \right) = \theta \left( t \right)dt + d\eta \left( t \right) $$
(15.1)
and initial value ξ(t0) = 0 where, in contrast to (14.1), θ(t) is a random function (continuous in quadratic mean). We may interpret θ(t), t ≥ t0, as a “signal”, which has to be filtered out of the superposed (random) noise, characterized in (15.1) by the standard Wiener process η(t), t ≥ t0. We suppose that η(t), t ≥ t0, does not depend on θ(t), t ≥ t0.
Yuriĭ A. Rozanov
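In the scalar linear case dθ = aθ dt + b dw, dξ = θ dt + dη, the Kalman-Bucy filtering error variance γ(t) satisfies the standard Riccati equation γ' = 2aγ + b² − γ² and settles at its positive equilibrium. A sketch (coefficients illustrative, plain Euler integration) checks this convergence:

```python
import math

# Scalar Kalman-Bucy error-variance (Riccati) equation for the model
# d theta = a * theta dt + b dw,  d xi = theta dt + d eta:
#     gamma' = 2 * a * gamma + b**2 - gamma**2.
a, b = -0.5, 1.0
gamma, h = 2.0, 0.001
for _ in range(20_000):            # integrate up to t = 20
    gamma += h * (2 * a * gamma + b * b - gamma * gamma)

# Steady state: the positive root of gamma**2 - 2*a*gamma - b**2 = 0.
gamma_inf = a + math.sqrt(a * a + b * b)
```

The equilibrium value is the long-run filtering error variance: the filter cannot do better than γ∞ no matter how long it observes.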

Backmatter
