
2010 | Book

Applied Probability


About this Book

Applied Probability presents a unique blend of theory and applications, with special emphasis on mathematical modeling, computational techniques, and examples from the biological sciences. It can serve as a textbook for graduate students in applied mathematics, biostatistics, computational biology, computer science, physics, and statistics. Readers should have a working knowledge of multivariate calculus, linear algebra, ordinary differential equations, and elementary probability theory.

Chapter 1 reviews elementary probability and provides a brief survey of relevant results from measure theory. Chapter 2 is an extended essay on calculating expectations. Chapter 3 deals with probabilistic applications of convexity, inequalities, and optimization theory. Chapters 4 and 5 touch on combinatorics and combinatorial optimization. Chapters 6 through 11 present core material on stochastic processes. If supplemented with appropriate sections from Chapters 1 and 2, there is sufficient material for a traditional semester-long course in stochastic processes covering the basics of Poisson processes, Markov chains, branching processes, martingales, and diffusion processes. The second edition adds two new chapters on asymptotic and numerical methods and an appendix that separates some of the more delicate mathematical theory from the steady flow of examples in the main text.

Besides the two new chapters, the second edition includes a more extensive list of exercises, many additions to the exposition of combinatorics, new material on rates of convergence to equilibrium in reversible Markov chains, a discussion of basic reproduction numbers in population modeling, and better coverage of Brownian motion. Because many chapters are nearly self-contained, mathematical scientists from a variety of backgrounds will find Applied Probability useful as a reference.

Table of Contents

Frontmatter
1. Basic Notions of Probability Theory
Abstract
This initial chapter covers background material that every serious student of applied probability should master. In no sense is the chapter meant as a substitute for a previous course in applied probability or for a future course in measure-theoretic probability. Our comments are merely meant as reminders and as a bridge. Many mathematical facts will be stated without proof. This is unsatisfactory, but it is even more unsatisfactory to deny students the most powerful tools in the probabilist’s toolkit. Quite apart from specific tools, the language and intellectual perspective of modern probability theory also furnish an intuitive setting for solving practical problems. Probability involves modes of thought that are unique within mathematics. As a brief illustration of the material reviewed, we derive properties of the multivariate normal distribution in the final section of this chapter. Later chapters will build on the facts and vocabulary mentioned here and provide more elaborate applications.
Kenneth Lange
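As a small computational companion to the multivariate normal material the chapter closes with, the standard device for sampling from N(mu, Sigma) is to multiply a Cholesky factor of the covariance matrix against independent standard normals. A hedged Python sketch (our own illustration, not from the book; the function names are ours), specialized to two dimensions:

```python
import random

def chol2(sigma):
    """Cholesky factor L (lower triangular) of a 2 x 2 positive
    definite covariance matrix, so that L L^T = sigma."""
    l11 = sigma[0][0] ** 0.5
    l21 = sigma[1][0] / l11
    l22 = (sigma[1][1] - l21 * l21) ** 0.5
    return [[l11, 0.0], [l21, l22]]

def mvn_sample(mu, sigma, rng):
    """One draw from N(mu, sigma): if Z is standard bivariate
    normal, then mu + L Z has mean mu and covariance L L^T."""
    L = chol2(sigma)
    z0, z1 = rng.gauss(0, 1), rng.gauss(0, 1)
    return (mu[0] + L[0][0] * z0,
            mu[1] + L[1][0] * z0 + L[1][1] * z1)
```

Averaging the sample covariance over many draws recovers sigma, which is a quick empirical check of the L L^T identity.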
2. Calculation of Expectations
Abstract
Many of the hardest problems in applied probability revolve around the calculation of expectations of one sort or another. On one level, these are merely humble exercises in integration or summation. However, we should not be so quick to dismiss the intellectual challenges. Readers are doubtless already aware of the clever applications of characteristic and moment generating functions. This chapter is intended to review and extend some of the tools that probabilists routinely call on. Readers can consult the books [34, 59, 60, 78, 80, 166] for many additional examples of these tools in action.
Kenneth Lange
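One workhorse tool the chapter reviews is the generating function. For a random variable on the nonnegative integers with probability generating function G(s), the mean is G'(1) and the variance is G''(1) + G'(1) - G'(1)^2. A minimal exact sketch for a finite pmf (our illustration; the function name is ours):

```python
from fractions import Fraction

def pgf_moments(pmf):
    """Mean and variance from the probability generating function
    G(s) = sum_k p_k s^k of a nonnegative integer random variable:
    E X = G'(1) and Var X = G''(1) + G'(1) - G'(1)^2.  For a finite
    pmf both derivatives are exact finite sums."""
    g1 = sum(Fraction(k) * p for k, p in enumerate(pmf))            # G'(1)
    g2 = sum(Fraction(k * (k - 1)) * p for k, p in enumerate(pmf))  # G''(1)
    return g1, g2 + g1 - g1 * g1
```

For a fair die this yields mean 7/2 and variance 35/12 exactly, with no integration at all.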
3. Convexity, Optimization, and Inequalities
Abstract
Convexity is one of the key concepts of mathematical analysis and has interesting consequences for optimization theory, statistical estimation, inequalities, and applied probability. Despite this fact, students seldom see convexity presented in a coherent fashion. It always seems to take a backseat to more pressing topics. The current chapter is intended as a partial remedy to this pedagogical gap.
Kenneth Lange
4. Combinatorics
Abstract
Combinatorics is the bane of many a student of probability theory. Even elementary combinatorial problems can be frustratingly subtle. The cure for this ill is more exposure, not less. Because combinatorics has so many important applications, serious students of the mathematical sciences neglect it at their peril. Here we explore a few topics in combinatorics that have maximum intersection with probability. Our policy is to assume that readers have a nodding familiarity with combinations and permutations. Based on this background, we discuss bijections, inclusion-exclusion (sieve) methods, Catalan numbers, Stirling numbers of the first and second kind, and the pigeonhole principle. Along the way we meet some applications that we hope will whet readers’ appetites for further study. The books [21, 22, 26, 59, 78, 139, 207] are especially recommended.
Kenneth Lange
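The Catalan and Stirling numbers mentioned above are easy to compute from their standard formulas. A short Python sketch (ours, not the book's): the closed form C_n = binom(2n, n)/(n+1) for Catalan numbers, and the familiar recurrence for Stirling numbers of the second kind.

```python
from math import comb

def catalan(n):
    """n-th Catalan number, C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

def stirling2(n, k):
    """Stirling number of the second kind S(n, k): the number of
    partitions of an n-set into k nonempty blocks, computed via
    the recurrence S(n, k) = k S(n-1, k) + S(n-1, k-1)."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
```

Summing S(n, k) over k gives the Bell numbers, a handy consistency check.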
5. Combinatorial Optimization
Abstract
Combinatorial averaging is a supple tool for understanding the solutions of discrete optimization problems. Computer scientists have designed many algorithms to solve such problems. Traditionally, these algorithms have been classified by their worst-case performance. Such an analysis can lead to undue pessimism. The average behavior of an algorithm is usually more relevant. Of course, to evaluate the average complexity of an algorithm, we must have some probability model for generating typical problems on which the algorithm operates. The examples in this chapter on sorting, data compression, and graph coloring illustrate some of the underlying models and the powerful techniques probabilists have created for analyzing algorithms.
Kenneth Lange
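To make the average-case viewpoint concrete, consider quicksort under the standard random-permutation model, where the expected number of comparisons is 2(n+1)H_n - 4n with H_n the n-th harmonic number. A hedged Monte Carlo sketch (our illustration; it counts comparisons in the usual "m - 1 comparisons per partition of m elements" accounting):

```python
import random

def quicksort_comparisons(a):
    """Comparison count for quicksort on list a with the first
    element as pivot; partitioning m elements costs m - 1."""
    if len(a) < 2:
        return 0
    pivot = a[0]
    left = [x for x in a[1:] if x < pivot]
    right = [x for x in a[1:] if x >= pivot]
    return (len(a) - 1 + quicksort_comparisons(left)
            + quicksort_comparisons(right))

def average_comparisons(n, trials, rng):
    """Monte Carlo estimate of the expected comparison count on a
    uniformly random permutation of n items."""
    total = 0
    for _ in range(trials):
        a = list(range(n))
        rng.shuffle(a)
        total += quicksort_comparisons(a)
    return total / trials
```

The simulated average tracks the theoretical 2(n+1)H_n - 4n closely, while the worst case remains quadratic, which is precisely the gap average-case analysis is meant to expose.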
6. Poisson Processes
Abstract
The Poisson distribution rivals the normal distribution in importance. It occupies this position of eminence because of its connection to Poisson processes [59, 60, 80, 96, 106, 114, 170]. A Poisson process models the formation of random points in space or time. Most textbook treatments of Poisson processes stress one-dimensional processes. This is unfortunate because many of the important applications occur in higher dimensions, and the underlying theory is about as simple there. In this chapter, we emphasize multidimensional Poisson processes, their transformation properties, and computational tools for extracting information about them.
Kenneth Lange
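The multidimensional viewpoint is easy to illustrate. A homogeneous Poisson process on a rectangle can be simulated in two steps: draw a Poisson count with mean intensity times area, then scatter that many independent uniform points. A Python sketch (our own, with Knuth's product-of-uniforms sampler, which is adequate for modest rates):

```python
import math
import random

def poisson_sample(rate, rng):
    """Draw from Poisson(rate) by Knuth's product-of-uniforms method."""
    L = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poisson_process_2d(intensity, width, height, rng):
    """Homogeneous Poisson process on [0, width] x [0, height]:
    the total count is Poisson(intensity * area), and given the
    count the points are independent uniforms on the rectangle."""
    n = poisson_sample(intensity * width * height, rng)
    return [(rng.uniform(0, width), rng.uniform(0, height))
            for _ in range(n)]
```

The same two-step recipe works in any dimension, which is why the higher-dimensional theory is scarcely harder than the one-dimensional one.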
7. Discrete-Time Markov Chains
Abstract
Applied probability thrives on models. Markov chains are one of the richest sources of good models for capturing dynamical behavior with a large stochastic component [23, 24, 59, 80, 106, 107, 118]. In this chapter we give a few examples and a quick theoretical overview of discrete-time Markov chains. The highlight of our theoretical development, Proposition 7.4.1, relies on a coupling argument. Because coupling is one of the most powerful and intuitively appealing tools available to probabilists, we examine a few of its general applications as well. We also stress reversible Markov chains. Reversibility permits explicit construction of the long-run or equilibrium distribution of a chain when such a distribution exists. Chapter 8 will cover continuous-time Markov chains.
Kenneth Lange
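The equilibrium distribution mentioned above can be approximated numerically by power iteration: repeatedly apply the transition matrix to an initial distribution until it stops changing. A small sketch (ours, not the book's) for an ergodic finite chain:

```python
def stationary_distribution(P, tol=1e-12, max_iter=100000):
    """Approximate the equilibrium distribution of an ergodic chain
    with transition matrix P (a list of rows) by iterating
    pi <- pi P until successive iterates agree to within tol."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, new)) < tol:
            return new
        pi = new
    return pi
```

For a reversible chain the result also satisfies detailed balance, pi_i p_ij = pi_j p_ji, which gives an independent check on the computation.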
8. Continuous-Time Markov Chains
Abstract
This chapter introduces the subject of continuous-time Markov chains [23, 52, 59, 80, 106, 107, 118, 152]. In practice, continuous-time chains are more useful than discrete-time chains. For one thing, continuous-time chains permit variation in the waiting times for transitions between neighboring states. For another, they avoid the annoyances of periodic behavior. Balanced against these advantages is the disadvantage of a more complex theory involving linear differential equations. The primary distinction between the two types of chains is the substitution of transition intensities for transition probabilities. Once one grasps this difference, it is straightforward to formulate relevant continuous-time models. Implementing such models numerically and understanding them theoretically then require the matrix exponential function. Kendall’s birth-death-immigration process, treated at the end of the chapter, involves an infinite number of states and transition intensities that depend on time.
Kenneth Lange
9. Branching Processes
Abstract
A branching process models the reproduction of particles such as human beings, cells, or neutrons. In the simplest branching processes, time is measured discretely in generations, and particles are of only one type. Each particle is viewed as living one generation; during this period it produces offspring contributing to the next generation. The key assumption that drives the theory is that particles reproduce independently according to the same probabilistic law. Interactions between particles are forbidden.
Kenneth Lange
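The central quantity for such a process is its extinction probability: the smallest root in [0, 1] of s = f(s), where f is the offspring generating function. Iterating s_{n+1} = f(s_n) from s_0 = 0 converges monotonically to this root. A hedged sketch (our illustration, for a finitely supported offspring distribution):

```python
def extinction_probability(offspring_pmf, tol=1e-12, max_iter=10000):
    """Extinction probability of a single-type branching process:
    the smallest fixed point in [0, 1] of the offspring generating
    function f(s) = sum_k p_k s^k, found by iterating s <- f(s)
    from s = 0, which converges monotonically upward."""
    def f(s):
        return sum(p * s ** k for k, p in enumerate(offspring_pmf))
    s = 0.0
    for _ in range(max_iter):
        new = f(s)
        if abs(new - s) < tol:
            return new
        s = new
    return s
```

For offspring probabilities (1/4, 1/4, 1/2) the mean is 1.25 and the extinction probability is exactly 1/2; when the mean is at most one, the iteration climbs to 1, reflecting certain extinction.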
10. Martingales
Abstract
Martingales generalize the notion of a fair game in gambling. Theory to the contrary, many gamblers still believe that they simply need to hone their strategies to beat the house. Probabilists know better. The real payoff with martingales is their practical value throughout probability theory. This chapter introduces martingales, develops some relevant theory, and delves into a few applications. As a prelude, readers are urged to review the material on conditional expectations in Chapter 1. In the current chapter we briefly touch on the convergence properties of martingales, the optional stopping theorem, and large deviation bounds via Azuma’s inequality. More extensive treatments of martingale theory appear in the books [23, 24, 53, 80, 106, 118, 208]. Our other referenced sources either provide elementary accounts comparable in difficulty to the current material [129, 170] or interesting special applications [4, 134, 186, 201].
Kenneth Lange
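Azuma's inequality can be checked empirically. For a martingale with increments bounded by 1 (a fair plus-or-minus-one walk is the simplest case), it gives P(S_n >= t) <= exp(-t^2 / (2n)). A small sketch (ours) comparing the bound with a simulated tail probability:

```python
import math
import random

def azuma_bound(n, t):
    """Azuma-Hoeffding bound: for a martingale with increments
    bounded in absolute value by 1, P(S_n >= t) <= exp(-t^2/(2n))."""
    return math.exp(-t * t / (2 * n))

def tail_estimate(n, t, reps, rng):
    """Monte Carlo estimate of P(S_n >= t) for a fair +-1 walk."""
    hits = 0
    for _ in range(reps):
        s = sum(rng.choice((-1, 1)) for _ in range(n))
        hits += s >= t
    return hits / reps
```

The simulated tail sits well below the bound, as expected: Azuma trades sharpness for generality.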
11. Diffusion Processes
Abstract
Despite their reputation for sophistication, diffusion processes are widely applied throughout science and engineering. Here we survey the theory at an elementary level, stressing intuition rather than rigor. Readers with the time and mathematical inclination should follow up this brief account by delving into serious presentations of the mathematics [80, 107]. A good grounding in measure theory is indispensable in understanding the theory. At the highest level of abstraction, diffusion processes can be treated via the Ito stochastic integral [30, 38]. As background for this chapter, the reader is advised to review the material in Section 1.8 on the multivariate normal distribution.
Kenneth Lange
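At the level of intuition stressed here, standard Brownian motion is simulated by summing independent mean-zero normal increments with variance equal to the time step. A minimal sketch (our illustration, not from the book):

```python
import math
import random

def brownian_path(t_end, n_steps, rng):
    """Standard Brownian motion on [0, t_end] sampled at n_steps
    equally spaced times: partial sums of independent N(0, dt)
    increments, starting from 0."""
    dt = t_end / n_steps
    b, path = 0.0, [0.0]
    for _ in range(n_steps):
        b += rng.gauss(0.0, math.sqrt(dt))
        path.append(b)
    return path
```

Whatever the step size, the endpoint B(t_end) is exactly N(0, t_end) in distribution, a fact worth verifying by simulation.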
12. Asymptotic Methods
Abstract
Long before computers revolutionized numerical analysis, applied mathematicians devised many clever techniques for finding approximate answers to hard problems. Because approximate solutions focus on dominant contributions, they often provide more insight than exact solutions. In this chapter we take up the subject of asymptotic analysis. Although this material is old, dating back centuries in some cases, it still has its charms and utility. Our choice of topics differs from the typical syllabus of mathematical statistics, where the emphasis is on large sample theory and convergence in distribution [61, 132, 181]. Here we stress advanced calculus and combinatorics.
Kenneth Lange
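A classic of the genre is Stirling's formula, n! ~ sqrt(2 pi n) (n/e)^n, whose leading term already carries a relative error of only about 1/(12n). A one-line sketch (ours) with a numerical check:

```python
import math

def stirling(n):
    """Leading term of Stirling's approximation to n!:
    sqrt(2 pi n) * (n / e)^n.  The relative error is roughly
    1/(12 n), and the approximation always undershoots."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n
```

Already at n = 10 the approximation is within one percent of the truth, which is exactly the kind of insight asymptotic analysis delivers: the dominant contribution, with a quantified error.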
13. Numerical Methods
Abstract
Stochastic modeling relies on a combination of exact solutions and numerical methods. As scientists and engineers tackle more realistic models, the symmetries supporting exact solutions fade. The current chapter sketches a few of the most promising numerical techniques. Further improvements in computing, statistics, and data management are bound to drive the rapidly growing and disorganized discipline of computational probability for decades to come.
Kenneth Lange
14. Poisson Approximation
Abstract
In the past few years, mathematicians have developed a powerful technique known as the Chen-Stein method for approximating the distribution of a sum of weakly dependent Bernoulli random variables [11, 18, 187]. In contrast to many asymptotic methods, this approximation carries with it explicit error bounds.
Kenneth Lange
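For a sum of n i.i.d. Bernoulli(p) variables the explicit bound reads d_TV(Binomial(n, p), Poisson(np)) <= (1 - e^{-lam})/lam * n p^2 with lam = np, which sharpens Le Cam's classical n p^2. A sketch (ours) computing the exact total variation distance and checking it against the bound:

```python
import math

def poisson_pmf(lam, k):
    """Poisson(lam) pmf, computed iteratively to avoid factorials."""
    p = math.exp(-lam)
    for i in range(1, k + 1):
        p *= lam / i
    return p

def tv_binomial_poisson(n, p):
    """Total variation distance between Binomial(n, p) and
    Poisson(np): half the l1 distance between the two pmfs."""
    lam = n * p
    total = 0.0
    for k in range(n + 1):
        binom = math.comb(n, k) * p ** k * (1 - p) ** (n - k)
        total += abs(binom - poisson_pmf(lam, k))
    # Poisson mass above n has no binomial counterpart
    total += 1.0 - sum(poisson_pmf(lam, k) for k in range(n + 1))
    return 0.5 * total

def chen_stein_bound(n, p):
    """Chen-Stein bound for identical Bernoulli summands:
    d_TV <= (1 - exp(-lam)) / lam * n * p^2, lam = n p."""
    lam = n * p
    return (1.0 - math.exp(-lam)) / lam * n * p * p
```

The factor (1 - e^{-lam})/lam is always below one, so the bound never loses to n p^2, and the exact distance stays below both.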
15. Number Theory
Abstract
Number theory is one of the richest and oldest branches of mathematics. It is notable for its many unsolved but easily stated conjectures. The current chapter touches on issues surrounding prime numbers and their density. In particular, the chapter and book culminate with a proof of the prime number theorem. This highlight of 19th century mathematics was surmised by Legendre and Gauss, attacked by Riemann and Chebyshev, and finally proved by Hadamard and de la Vallée Poussin. These mathematicians created a large part of analytic function theory in the process. In the mid-20th century, Erdős and Selberg succeeded in crafting a proof that avoids analytic functions. Even so, their elementary proof is longer and harder to comprehend than the classical proofs. Our treatment follows the recent trail blazed by Newman [150] and Zagier [211] that uses a minimum of analytic function theory. We particularly stress the connections and insight provided by probability.
Kenneth Lange
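The prime number theorem asserts that pi(x), the number of primes up to x, is asymptotic to x / ln x. The slow convergence is easy to witness numerically with a sieve. A small sketch (ours, not the book's):

```python
import math

def prime_count(x):
    """pi(x): the number of primes <= x, via the sieve of
    Eratosthenes."""
    if x < 2:
        return 0
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, math.isqrt(x) + 1):
        if sieve[i]:
            for j in range(i * i, x + 1, i):
                sieve[j] = False
    return sum(sieve)
```

Even at x = 100000 the ratio pi(x) / (x / ln x) is still about 1.10, a reminder of how slowly the asymptotic regime sets in.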
Appendix: Mathematical Review
Kenneth Lange
Backmatter
Metadata
Title
Applied Probability
Author
Kenneth Lange
Copyright Year
2010
Publisher
Springer New York
Electronic ISBN
978-1-4419-7165-4
Print ISBN
978-1-4419-7164-7
DOI
https://doi.org/10.1007/978-1-4419-7165-4