
About this Book

This textbook is based on 20 years of teaching a graduate-level course in random processes to a constituency extending beyond signal processing, communications, control, and networking, and including in particular circuits, RF and optics graduate students. In order to accommodate today’s circuits students’ needs to understand noise modeling, while covering classical material on Brownian motion, Poisson processes, and power spectral densities, the author has inserted discussions of thermal noise, shot noise, quantization noise and oscillator phase noise. At the same time, techniques used to analyze modulated communications and radar signals, such as the baseband representation of bandpass random signals, or the computation of power spectral densities of a wide variety of modulated signals, are presented. This book also emphasizes modeling skills, primarily through the inclusion of long problems at the end of each chapter, where starting from a description of the operation of a system, a model is constructed and then analyzed.

Provides semester-length coverage of random processes, applicable to the analysis of electrical and computer engineering systems;
Designed to be accessible to students with varying backgrounds in undergraduate mathematics and engineering;
Includes solved examples throughout the discussion, as well as extensive problem sets at the end of every chapter;
Develops and reinforces students' modeling skills, with inclusion of modeling problems in every chapter;
Solutions for instructors included.

Table of Contents

Frontmatter

1. Introduction

Abstract
Random processes have a wide range of applications outside mathematics to fields as different as physics and chemistry, engineering, biology, or economics and mathematical finance. When addressed to an audience in one of its fields of application, random processes take on a slightly different flavor since each discipline tends to use tools which are best adapted to the class of problems it seeks to analyze. The goal of this book is to present random processes techniques applicable to the analysis of electrical and computer engineering systems. In this context, random process analysis has traditionally played an important role in several areas: the analysis of noise in electronic, radio-frequency (RF) and optical devices, the study of communications signals in the presence of noise, and the evaluation of the performance of computer and networking systems.
Bernard C. Levy

Background

Frontmatter

2. Probability and Random Variables

Abstract
This chapter reviews the concepts of probability space, random variable, jointly distributed random variables, and random vectors, as well as transformations of random variables and vectors, expectations, moments, conditional expectations, and characteristic and moment generating functions. The presentation is fast paced and is not intended as a first-time introduction to the above topics.
Bernard C. Levy

3. Convergence and Limit Theorems

Abstract
The construction of models of random phenomena often relies on simplifications and approximations whose validity is typically justified by asymptotic arguments. Such arguments require extending to random variables the familiar convergence analysis of real numbers. We recall that if x_n, n ≥ 1 is a sequence of real numbers, it converges to a real limit x if for each 𝜖 > 0, there exists an integer n(𝜖) such that |x_n − x| < 𝜖 for all n ≥ n(𝜖). The main difficulty in extending this concept to sequences X_n(ω), n ≥ 1 of random variables is that random variable sequences can be viewed as families of real number sequences indexed by the outcome ω ∈ Ω. So convergence can be defined in a variety of ways depending on whether we look at each sequence individually, or whether we allow averaging across outcomes to measure the closeness of X_n to its limit X. This chapter describes different modes of convergence: almost sure convergence, mean-square convergence, convergence in probability, and convergence in distribution, as well as their relations.
Convergence concepts play an important role in deriving key results of statistics such as the law of large numbers and the central limit theorem. However, these results hold only under specific conditions, and in the case of the central limit theorem, are often misapplied. The second part of this chapter discusses in detail the derivation of limit theorems and the conditions under which they are valid.
Bernard C. Levy
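As a supplement to this abstract (not part of the book), the following Python sketch illustrates the two limit theorems mentioned above by Monte Carlo simulation; NumPy is assumed, and the exponential distribution and sample sizes are arbitrary illustrative choices.

import numpy as np

rng = np.random.default_rng(0)

# Law of large numbers: the sample mean of n iid Exp(1) variables
# approaches the true mean 1 as n grows.
for n in (10, 1_000, 100_000):
    x = rng.exponential(scale=1.0, size=n)
    print(f"n = {n:6d}, sample mean = {x.mean():.4f}")

# Central limit theorem: the standardized quantity sqrt(n) * (sample mean - 1)
# is approximately N(0, 1) for large n, even though Exp(1) is skewed.
n, trials = 1_000, 10_000
means = rng.exponential(scale=1.0, size=(trials, n)).mean(axis=1)
z = np.sqrt(n) * (means - 1.0)            # Exp(1) has variance 1
print("empirical std of z:", z.std())     # should be close to 1
print("P(z <= 0)         :", np.mean(z <= 0.0))   # should be close to 0.5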

Main Topics

Frontmatter

4. Specification of Random Processes

Abstract
The purpose of this chapter is to describe random processes and analyze their specification in terms of finite joint distributions for ordered time samples of the process. Particular emphasis is given to the computation of the second-order statistics of random processes, such as the autocorrelation, since these are heavily used by electrical engineers for signal analysis purposes, even though they provide only a very incomplete characterization of the stochastic behavior of processes, except in the Gaussian case. Several categories of random processes, such as Gaussian and Markov processes, which admit simple parametrizations, are introduced, as well as the class of stationary independent increments processes. This class contains the Wiener and Poisson processes, which play a major role in modeling random phenomena arising in electrical engineering systems.
Bernard C. Levy
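To make the role of second-order statistics concrete, here is a small illustrative Python sketch (not taken from the book) that estimates the mean and autocorrelation of a random-phase sinusoid, a standard WSS example; NumPy is assumed and all parameters are arbitrary.

import numpy as np

rng = np.random.default_rng(1)

# Random-phase sinusoid X(t) = A cos(w0 t + Theta), Theta ~ Uniform[0, 2*pi):
# a standard WSS example whose autocorrelation is (A**2 / 2) * cos(w0 * tau).
A, w0 = 1.0, 2 * np.pi * 0.05
t = np.arange(200)
realizations = 5_000

theta = rng.uniform(0.0, 2 * np.pi, size=(realizations, 1))
X = A * np.cos(w0 * t + theta)             # each row is one sample path

# Ensemble estimates of the mean and of R_X(tau) = E[X(tau) X(0)].
mean_est = X.mean(axis=0)                  # should be close to 0 for all t
R_est = (X * X[:, [0]]).mean(axis=0)       # estimate of R_X(tau) for tau = t
R_theory = (A**2 / 2) * np.cos(w0 * t)

print("max |mean estimate|       :", np.abs(mean_est).max())
print("max autocorrelation error :", np.abs(R_est - R_theory).max())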

5. Discrete-Time Finite Markov Chains

Abstract
In this chapter, we examine some of the properties of discrete-time finite Markov chains. These processes are discrete valued, and to keep the analysis as elementary as possible, we restrict our attention to homogeneous, i.e., time-invariant, and finite chains, i.e., chains where the process X(t) can take only a finite number of values. Under these simplifying assumptions, Markov chains can be analyzed by using tools of linear algebra. Yet, the class of finite homogeneous Markov chains covers a wide range of engineering systems, since it is in essence identical to the class of synchronous stochastic automata.
Bernard C. Levy
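The claim that finite homogeneous chains can be analyzed with tools of linear algebra can be illustrated with a short Python sketch (not from the book); the 3-state transition matrix below is a hypothetical example, and NumPy is assumed.

import numpy as np

# A hypothetical 3-state homogeneous chain; row i holds P(X(t+1) = j | X(t) = i).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.7, 0.2],
              [0.2, 0.3, 0.5]])

# n-step transition probabilities are matrix powers of P.
P10 = np.linalg.matrix_power(P, 10)

# The stationary distribution pi satisfies pi P = pi with sum(pi) = 1,
# i.e. pi is a left eigenvector of P for the eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()

print("10-step transition matrix:\n", P10)
print("stationary distribution  :", pi)
print("check pi P = pi          :", np.allclose(pi @ P, pi))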

6. Wiener Process and White Gaussian Noise

Abstract
The Wiener process is a mathematical idealization of the physical Brownian motion describing the movement of suspended particles subjected to collisions by smaller atoms in a liquid. This phenomenon, first studied by botanist Robert Brown in 1827, was investigated by Einstein, Langevin and other physicists.
The Wiener process was first defined mathematically by MIT mathematician Norbert Wiener and its properties were later studied in detail by Paul Lévy. It has the interesting feature that its trajectories are fractals, since they are continuous without being differentiable anywhere. Nevertheless, the derivative of the Wiener process, called white Gaussian noise, can be defined in a generalized sense. This noise is frequently used by electrical engineers in their analyses, in particular to model the effect of thermal noise in electrical circuits.
Bernard C. Levy
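As an illustration (not part of the book), the Wiener process can be approximated numerically by accumulating independent Gaussian increments; the Python sketch below, with arbitrary step size and horizon, checks the zero-mean and variance-equals-time properties.

import numpy as np

rng = np.random.default_rng(2)

# Discrete-time approximation of the Wiener process on [0, T]:
# W(0) = 0 and increments W(t + dt) - W(t) ~ N(0, dt), independent of the past.
T, dt = 1.0, 1e-3
n = int(T / dt)
paths = 5_000

dW = rng.normal(loc=0.0, scale=np.sqrt(dt), size=(paths, n))
W = np.cumsum(dW, axis=1)                 # each row is one approximate sample path

# Spot checks of the defining properties: E[W(t)] = 0 and Var[W(t)] = t.
t_idx = n // 2                            # corresponds to t = 0.5
print("mean at t = 0.5                    :", W[:, t_idx].mean())
print("variance at t = 0.5 (should be ~0.5):", W[:, t_idx].var())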

7. Poisson Process and Shot Noise

Abstract
While the Wiener process has continuous sample paths and continuously changes by small increments, the Poisson process is integer valued with discontinuous sample paths and changes infrequently by unit increments. So, it is well adapted to model discrete random phenomena, such as photon counts in optics, the number of tasks processed by a computer system, or random spikes in a neural pathway. Like the Wiener process, the Poisson process has independent increments and can be constructed in several equivalent ways, which are described in Sect. 7.2. The times at which the Poisson process undergoes jumps are called epochs, and the interarrival times (the time between two epochs) are exponentially distributed. In this context, as explained in Sect. 7.3, the memoryless property of exponential distributions has an interesting consequence concerning the distribution of the residual time until the occurrence of the next epoch after an arbitrary reference time. As shown in Sect. 7.5, Poisson processes also have the interesting feature that they remain Poisson after either merging, or splitting randomly into two lower rate processes. Finally, Sect. 7.6 examines the statistical properties of shot noise, which is a form of noise occurring in electronic devices, such as vacuum tubes or optical detectors, when electrons or photons arriving at random instants (modeled typically by the epochs of a Poisson process) trigger an electronic circuit response.
Bernard C. Levy
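A common way to make the epoch/interarrival description concrete is to simulate it; the Python sketch below (not from the book) builds Poisson epochs from exponential interarrival times and checks that the count over [0, T] has mean and variance λT. NumPy is assumed, and the rate and horizon are arbitrary.

import numpy as np

rng = np.random.default_rng(3)

# Build Poisson epochs on [0, T] from iid Exp(lam) interarrival times, then
# check that the count N(T) has mean and variance close to lam * T.
lam, T, trials = 4.0, 10.0, 20_000
m = int(3 * lam * T) + 20            # comfortably more interarrivals than needed

gaps = rng.exponential(scale=1.0 / lam, size=(trials, m))
epochs = np.cumsum(gaps, axis=1)     # epoch k is the sum of the first k gaps
counts = (epochs <= T).sum(axis=1)   # N(T): number of epochs in [0, T]

print("E[N(T)] estimate  :", counts.mean(), "(theory:", lam * T, ")")
print("Var[N(T)] estimate:", counts.var(), "(theory:", lam * T, ")")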

8. Processing and Frequency Analysis of Random Signals

Abstract
In this chapter, we discuss the effect of basic signal processing operations performed on random signals, such as linear filtering and modulation. Most of the attention is focused on WSS random signals, since such signals can be analyzed in the Fourier domain by using their power spectral density (PSD), which as explained in Sect. 8.3 provides a picture of the frequency composition of the signal of interest. The effect of linear filtering on the mean and autocorrelation of random signals is described in Sect. 8.2. The PSD and its properties are analyzed in Sect. 8.3. In this context, it is explained that WGN is indistinguishable from bandlimited WGN when passed through filters with a finite bandwidth, which explains why WGN is so convenient for performing noise computations. The Nyquist–Johnson model of noisy resistors is extended to RLC circuits in thermal equilibrium in Sect. 8.4. Techniques for evaluating the PSDs of PAM and Markov-chain modulated signals are described in Sect. 8.5. Then in Sect. 8.6, it is shown that a bandlimited WSS signal with bandwidth B can be reconstructed in the mean-square sense from its uniform samples provided the sampling frequency is greater than twice the signal bandwidth. This result extends to the WSS case the classical reconstruction result of Nyquist and Shannon for deterministic signals. Finally, Sect. 8.7 presents Rice’s model of bandpass WSS signals, which represents a bandpass WSS signal with occupied bandwidth 2B centered about ω c as a quadrature modulated signal with carrier frequency ω c and WSS in-phase and quadrature components of bandwidth B. This model is used routinely to analyze modulated bandpass communications or radar receivers by examining an equivalent baseband problem, thereby removing the effect of modulation from consideration.
Bernard C. Levy
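As an illustration of the PSD-shaping relation S_Y(f) = |H(f)|^2 S_X(f) discussed in this chapter, here is a Python sketch (not from the book) that filters white Gaussian noise and compares a Welch PSD estimate against the theoretical curve; NumPy and SciPy are assumed, and the filter choice is arbitrary.

import numpy as np
from scipy import signal

rng = np.random.default_rng(4)

# Unit-variance white Gaussian noise has a flat PSD.  Passing it through an
# LTI filter H shapes the output PSD into S_Y(f) = |H(f)|^2 * S_X(f).
fs = 1.0                                    # normalized sampling rate
x = rng.normal(size=2**18)                  # discrete-time WGN, variance 1

b, a = signal.butter(4, 0.2)                # 4th-order low-pass, cutoff 0.1 * fs
y = signal.lfilter(b, a, x)

# Welch estimate of the one-sided output PSD, compared with |H(f)|^2 times the
# one-sided input PSD, which equals 2 for unit-variance WGN sampled at fs = 1.
f, Pyy = signal.welch(y, fs=fs, nperseg=4096)
_, H = signal.freqz(b, a, worN=f, fs=fs)
S_theory = 2.0 * np.abs(H) ** 2

band = (f > 0) & (f < 0.08)                 # stay inside the filter passband
print("mean estimated/theoretical PSD ratio in band:",
      np.mean(Pyy[band] / S_theory[band]))  # should be close to 1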

Advanced Topics

Frontmatter

9. Ergodicity

Abstract
The concept of ergodicity finds its origin in the statistical mechanics work of Boltzmann, who proposed the ergodic hypothesis, whereby ensemble averages could be replaced by time averages. A further insight was provided by Gibbs, who observed that for a Hamiltonian system in equilibrium with a large number of particles, Liouville’s theorem ensures that the probability density of particles in phase space is constant, thus ensuring that the probability distribution is SSS of order 1. Accordingly, ergodicity is analyzed in the context of either SSS or WSS processes. Unfortunately, ergodicity is not always guaranteed for SSS processes. Indeed, whereas for an iid process X(t), the SLLN ensures that the time average
$$ \frac{1}{t} \sum_{s=0}^{t-1} X(s) \stackrel{a.s.}{\rightarrow} m_X = E[X(t)], $$
if we consider a process X(t) = X, where X is a fixed random variable, X(t) is SSS, but the average converges to X, which is a random variable, but not to its mean m_X. It was shown by Birkhoff and von Neumann that time averages of SSS processes converge almost surely and in the mean square, respectively, to a random variable which in some circumstances equals m_X, the mean of X(t). In this chapter, we present several criteria under which time averages are equal asymptotically to ensemble averages.
Bernard C. Levy
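The contrast between an ergodic iid process and the non-ergodic frozen process X(t) = X mentioned in the abstract can be seen in a few lines of Python (an illustrative sketch, not from the book; NumPy assumed).

import numpy as np

rng = np.random.default_rng(5)

T = 100_000

# Ergodic case: an iid N(0, 1) process.  The time average of a single
# realization converges to the ensemble mean m_X = 0 (strong law of large numbers).
x_iid = rng.normal(size=T)
print("iid process, time average of one path   :", x_iid.mean())

# Non-ergodic case: X(t) = X for all t, with X ~ N(0, 1) drawn once per path.
# The process is SSS with m_X = 0, yet the time average of each path equals
# that path's frozen value of X, not the ensemble mean.
for _ in range(3):
    X = rng.normal()
    x_frozen = np.full(T, X)
    print("frozen process, time average of one path:", x_frozen.mean())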

10. Scalar Markov Diffusions and Ito Calculus

Abstract
In this chapter, we examine scalar Markov diffusions. These processes have “Brownian-motion-like” properties since their sample paths are continuous almost surely, but their dynamics are richer in the sense that unlike the Wiener process, which is homogeneous in both time and space, the process dynamics depend on both the current time t and position X(t) = x of the process. Diffusion processes can be used to describe a great variety of physical phenomena, such as oscillators in noise, the dynamics of airplanes subjected to turbulence, or the evolution of the price of financial securities, like stock options.
The PDF of Markov diffusions can be propagated forward in time by a parabolic partial differential equation called the Fokker-Planck or forward Kolmogorov equation. Another feature is that the expectation of a function of the diffusion process in the future, conditioned on the present value, can be propagated backwards in time by using the backward Kolmogorov equation.
Diffusion processes arise naturally as the solution of stochastic differential equations (SDEs) driven by Wiener process increments. The construction of SDE solutions relies on a special form of calculus due to Japanese mathematician Kiyoshi Ito, which accounts for the fact that unlike in ordinary differential equations, quadratic diffusion terms of the form (dX(t))2 cannot be neglected.
Bernard C. Levy
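As an illustration (not from the book), the Euler-Maruyama scheme below simulates one standard example of a scalar diffusion, the Ornstein-Uhlenbeck SDE dX(t) = -theta X(t) dt + sigma dW(t), and checks its stationary variance sigma^2 / (2 theta); NumPy is assumed and the parameters are arbitrary.

import numpy as np

rng = np.random.default_rng(6)

# Euler-Maruyama discretization of the Ornstein-Uhlenbeck SDE
#   dX(t) = -theta * X(t) dt + sigma dW(t),
# a scalar diffusion whose stationary variance is sigma**2 / (2 * theta).
theta, sigma = 1.5, 0.8
T, dt = 10.0, 1e-3
n = int(T / dt)
paths = 2_000

X = np.zeros(paths)
for _ in range(n):
    dW = rng.normal(scale=np.sqrt(dt), size=paths)   # Wiener increments
    X = X + (-theta * X) * dt + sigma * dW

print("sample variance at t = 10:", X.var())
print("stationary variance      :", sigma**2 / (2 * theta))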

Applications

Frontmatter

11. Wiener Filtering

Abstract
An important application of linear time-invariant filtering of WSS random processes is the design of optimum estimation filters. In this problem, given some observations Y(t) related to a process X(t) of interest, one seeks to design a filter which is optimal in the sense that if Y(t) is the filter input, the estimate \(\hat {X} (t)\) produced at the filter output minimizes the mean-square error with respect to the desired process X(t). The level of difficulty of this problem depends on whether the estimation filter is required to be causal or not. The first complete solution of this problem for both the noncausal and causal cases was proposed by Wiener, and optimal estimation filters are therefore often referred to as Wiener filters.
Bernard C. Levy
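To illustrate the noncausal case, the Python sketch below (not from the book) applies the frequency-domain Wiener filter H(f) = S_X(f) / (S_X(f) + S_V(f)) to an AR(1) signal observed in additive white noise; NumPy is assumed and all parameters are arbitrary.

import numpy as np

rng = np.random.default_rng(7)

# Noncausal Wiener filter for Y(t) = X(t) + V(t), with X an AR(1) signal and V
# white noise, both zero mean and uncorrelated.  In the frequency domain the
# optimum filter is H(f) = S_X(f) / (S_X(f) + S_V(f)).
a, sigma_w, sigma_v = 0.95, 1.0, 1.0
N = 2**16

# Generate the AR(1) signal X(t) = a X(t-1) + W(t) and the noisy observation.
w = rng.normal(scale=sigma_w, size=N)
X = np.zeros(N)
for t in range(1, N):
    X[t] = a * X[t - 1] + w[t]
Y = X + rng.normal(scale=sigma_v, size=N)

# Theoretical PSDs on the DFT frequency grid, and the Wiener filter H(f).
f = np.fft.fftfreq(N)
S_X = sigma_w**2 / np.abs(1.0 - a * np.exp(-2j * np.pi * f)) ** 2
S_V = sigma_v**2 * np.ones(N)
H = S_X / (S_X + S_V)

# Apply the (noncausal, zero-phase) filter by pointwise multiplication of the DFT.
X_hat = np.real(np.fft.ifft(H * np.fft.fft(Y)))

print("MSE without filtering :", np.mean((Y - X) ** 2))
print("MSE with Wiener filter:", np.mean((X_hat - X) ** 2))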

12. Quantization Noise and Dithering

Abstract
While various circuit noise sources, such as thermal noise, shot noise or 1/f noise, have a physical origin, other types of noise, like quantization or roundoff noise, occur as a consequence of computational operations performed by circuits. Specifically, the digitization of bandlimited signals with analog-to-digital converters (ADCs) involves not only signal sampling in time, but also amplitude quantization. The quantization operation introduces an error, which is called quantization noise. This noise plays an important role in analyzing the performance of DSP systems and in ADC design. Bennett observed in 1948 that for a sufficiently small discretization step, quantization noise can be viewed approximately as uniformly distributed and white. In a comprehensive study of quantization noise, Widrow was able to derive precise conditions for these properties to hold. Unfortunately, because it is a function of the input signal, the quantization noise is not independent of the input, which creates undesirable artifacts in audio systems. To overcome this problem, an independent dithering noise can be added to the input signal and removed after quantization. This chapter presents a comprehensive analysis of the properties of quantization noise and dithering.
Bernard C. Levy
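The uniform/white approximation and the benefit of dithering can be illustrated with a short Python sketch (not from the book): a uniform quantizer is applied to a slowly varying sine, with and without subtractive dither; NumPy is assumed, and the step size and test signal are arbitrary.

import numpy as np

rng = np.random.default_rng(8)

# Uniform quantizer with step Delta applied to a slowly varying sine wave.
Delta = 0.1
t = np.arange(200_000)
x = 0.9 * np.sin(2 * np.pi * 1e-4 * t)

def quantize(v):
    # Mid-tread uniform quantizer.
    return Delta * np.round(v / Delta)

def lag1_corr(e):
    # Correlation between consecutive error samples (near 0 for white noise).
    return np.corrcoef(e[:-1], e[1:])[0, 1]

# Without dither the error is a deterministic function of the slowly varying
# input, so successive error samples are strongly correlated (not white).
e_plain = quantize(x) - x

# Subtractive dither: add d ~ Uniform(-Delta/2, Delta/2) before quantizing and
# subtract it afterwards; the residual error is then white and uniform with
# variance Delta**2 / 12, independent of the input.
d = rng.uniform(-Delta / 2, Delta / 2, size=x.shape)
e_dither = (quantize(x + d) - d) - x

print("Delta^2 / 12               :", Delta**2 / 12)
print("error variance, no dither  :", e_plain.var())
print("error variance, dithered   :", e_dither.var())
print("lag-1 error corr, no dither:", lag1_corr(e_plain))
print("lag-1 error corr, dithered :", lag1_corr(e_dither))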

13. Phase Noise in Autonomous Oscillators

Abstract
Due to its importance in circuit design, the problem of modeling phase noise in autonomous oscillators has attracted considerable attention over the last 50 years. Noteworthy phase noise studies include a spectrum model based on a feedback circuit oscillator parametrization proposed by Leeson and the impulse sensitivity function model proposed by Hajimiri and Lee. Unfortunately, even though these models offer valuable insights for oscillator design, they exhibit significant defects, such as their failure to predict a Lorentzian power spectral density for the oscillator tones affected by noise. The primary reason for this failure is that most early phase noise theories were based on linear perturbations. The first satisfactory nonlinear phase noise theory was proposed by Kaertner, with significant later refinements by Demir, Mehrotra, and Roychowdhury. The key insight of this approach is that for a stable oscillator, random fluctuations in directions away from the oscillator trajectory remain contained due to the inherent trajectory stability, but fluctuations along the trajectory direction accumulate since they are not subjected to a restoring force. The model constructed by this approach relies on rather simplistic approximations, such as the neglect of amplitude fluctuations about the noiseless oscillator trajectory, but it results in a single scalar diffusion equation for the phase noise. This model can be simplified further by noticing that the phase noise evolves on a time scale much slower than the oscillation itself, so that an averaging method can be employed to describe the slow time-scale evolution of the phase noise. This averaging operation reveals that on a slow time scale, the phase noise behaves like a scaled Wiener process, where the scaling parameter is therefore the only, and thus the most important, parameter of the model. This slow time-scale model can then be used to compute the oscillator autocorrelation and its power spectral density, which shows that the effect of phase noise is to smear the pure impulsive spectral lines of the noiseless oscillator into narrow Lorentzian shaped spectra, which conform with experimental observations.
Bernard C. Levy
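As an illustration of the slow time-scale model (a sketch, not the book's derivation), the Python code below simulates an oscillator whose phase is a scaled Wiener process and estimates the width of the resulting Lorentzian spectral line; NumPy and SciPy are assumed, and all parameters are arbitrary.

import numpy as np
from scipy import signal

rng = np.random.default_rng(9)

# Oscillator with phase noise modeled as a scaled Wiener process:
#   X(t) = cos(2*pi*f0*t + phi(t)),   d phi(t) = sqrt(c) dW(t).
# The autocorrelation then decays exponentially, so the spectral line at f0 is
# smeared into a Lorentzian of full width c / (2*pi) Hz at half maximum.
fs = 1_000.0          # sampling rate (arbitrary units)
T = 2_000.0           # long record, so the narrow Lorentzian is resolved
n = int(fs * T)
f0 = 50.0             # oscillation frequency
c = 2.0               # phase diffusion constant (rad^2 per unit time)

dphi = rng.normal(scale=np.sqrt(c / fs), size=n)   # Wiener increments of phi
phi = np.cumsum(dphi)
x = np.cos(2 * np.pi * f0 * np.arange(n) / fs + phi)

f, Pxx = signal.welch(x, fs=fs, nperseg=2**16)

# Locate the spectral peak and measure its full width at half maximum.
k = np.argmax(Pxx)
above = np.where(Pxx >= Pxx[k] / 2)[0]
fwhm = f[above[-1]] - f[above[0]]

print("peak frequency    :", f[k], "(f0 =", f0, ")")
print("estimated FWHM    :", fwhm, "Hz")
print("Lorentzian theory :", c / (2 * np.pi), "Hz")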