
## About This Book

Leave nothing to chance. This cliché embodies the common belief that randomness has no place in carefully planned methodologies: every step should be spelled out, each i dotted and each t crossed. In discrete mathematics, at least, nothing could be further from the truth. Introducing random choices into algorithms can improve their performance. The application of probabilistic tools has led to the resolution of combinatorial problems which had resisted attack for decades. The chapters in this volume explore and celebrate this fact.

Our intention was to bring together, for the first time, accessible discussions of the disparate ways in which probabilistic ideas are enriching discrete mathematics. These discussions are aimed at mathematicians with a good combinatorial background but require only a passing acquaintance with the basic definitions in probability (e.g. expected value, conditional probability). A reader who already has a firm grasp on the area will be interested in the original research, novel syntheses, and discussions of ongoing developments scattered throughout the book.

Some of the most convincing demonstrations of the power of these techniques are randomized algorithms for estimating quantities which are hard to compute exactly. One example is the randomized algorithm of Dyer, Frieze and Kannan for estimating the volume of a polyhedron. To illustrate these techniques, we consider a simple related problem. Suppose S is some region of the unit square defined by a system of polynomial inequalities: p_i(x, y) ≥ 0.
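The estimation problem just described can be attacked by the simplest of randomized algorithms: sample points uniformly at random from the unit square and count the fraction that satisfy all the inequalities. A minimal sketch (the quarter-disc region below is our own illustrative choice, not an example from the text):

```python
import random

def estimate_area(inequalities, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the area of the region S of the unit
    square on which every inequality p_i(x, y) >= 0 holds."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if all(p(x, y) >= 0 for p in inequalities):
            hits += 1
    return hits / n_samples

# Illustrative region (our choice): the quarter disc x^2 + y^2 <= 1,
# whose true area is pi/4, approximately 0.785.
area = estimate_area([lambda x, y: 1 - x * x - y * y])
```

The estimator is unbiased, and its standard error shrinks like 1/sqrt(n_samples), independently of how complicated the defining inequalities are.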

## Table of Contents

### The Probabilistic Method

Abstract
Erdős is usually credited as being the pioneer of the probabilistic method, beginning with his seminal 1947 paper [21], although the method had been used on at least two previous occasions: by Turán in 1934 [66] and by Szele in 1943 [63]. By now, it is widely recognized as one of the most important techniques in the field of combinatorics. In this short survey, we will introduce a few of the basic tools and describe some of the areas in which the method has had impact.
Michael Molloy

### Probabilistic Analysis of Algorithms

Abstract
Rather than analyzing the worst case performance of algorithms, one can investigate their performance on typical instances of a given size. This is the approach we investigate in this paper. Of course, the first question we must answer is: what do we mean by a typical instance of a given size?
Alan M. Frieze, Bruce Reed

### An Overview of Randomized Algorithms

Abstract
A randomized algorithm makes random choices during its execution. The behavior of such an algorithm may thus be random even on a fixed input. The process of designing and analyzing a randomized algorithm focuses on establishing that it is likely to behave “well” on every input. The likelihood in such a statement depends only on the probabilistic choices made by the algorithm during execution and not on any assumptions about the input. It is especially important to distinguish a randomized algorithm from the average-case analysis of algorithms, where one analyzes an algorithm assuming that its input is drawn from a fixed probability distribution. With a randomized algorithm, in contrast, no assumption is made about the input.
Rajeev Motwani, Prabhakar Raghavan
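The distinction drawn in this abstract is illustrated by the textbook example of randomized quicksort: by choosing the pivot at random, the expected number of comparisons is O(n log n) on *every* input, with the probability taken only over the algorithm's own coin flips. A short sketch (the function name is ours):

```python
import random

def randomized_quicksort(a, rng=None):
    """Quicksort with a uniformly random pivot. The expectation of the
    running time is over the pivot choices, not over the input, so no
    input ordering is bad in expectation."""
    rng = rng or random.Random()
    if len(a) <= 1:
        return list(a)
    pivot = rng.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return randomized_quicksort(less, rng) + equal + \
        randomized_quicksort(greater, rng)

result = randomized_quicksort([3, 1, 2, 2, 5, 4])
```

By contrast, deterministic quicksort with a fixed pivot rule has worst-case inputs; an average-case analysis of that variant would instead assume a distribution over inputs.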

### Mathematical Foundations of the Markov Chain Monte Carlo Method

Summary
The Markov chain Monte Carlo (MCMC) method exploits the idea that information about a set of combinatorial objects may be obtained by performing an appropriately defined random walk on those objects. In the area of statistical physics, MCMC algorithms have been in use for many years for the purpose of estimating various quantities of physical interest, often expectations of random variables on “configurations” of a statistical model. The running time of MCMC algorithms depends on the rate at which the random walk converges to equilibrium; only when a condition of near-equilibrium has been achieved can the algorithm discover what “typical” objects are like. In the past decade or so, it has become possible to derive a priori bounds on the rate of convergence to equilibrium of random walks underlying MCMC algorithms of practical interest. In cases where a priori bounds cannot be derived, it may still be possible to conduct rigorously grounded experiments. Many of the main ideas and techniques are set out here, with the recent developments being discussed at greater length.
Mark Jerrum
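A toy instance of the random-walk idea described above (our own illustrative choice, not an example from the chapter) is Glauber dynamics on the independent sets of a small graph: its stationary distribution is uniform, so after the walk nears equilibrium it visits every independent set about equally often.

```python
import random

def glauber_independent_sets(adj, steps, seed=0):
    """Glauber dynamics on the independent sets of a graph: pick a
    vertex uniformly at random; if none of its neighbours is occupied,
    occupy it with probability 1/2, otherwise leave it unoccupied.
    The stationary distribution is uniform over all independent sets.
    Returns how often each independent set was visited."""
    rng = random.Random(seed)
    occupied = set()
    counts = {}
    vertices = list(adj)
    for _ in range(steps):
        v = rng.choice(vertices)
        occupied.discard(v)
        if not (adj[v] & occupied) and rng.random() < 0.5:
            occupied.add(v)
        state = frozenset(occupied)
        counts[state] = counts.get(state, 0) + 1
    return counts

# The path 1-2-3 has exactly 5 independent sets, each with mass 1/5.
path = {1: {2}, 2: {1, 3}, 3: {2}}
counts = glauber_independent_sets(path, 200_000)
```

On this three-vertex path the walk mixes almost instantly; the substance of the chapter is in proving such convergence rates a priori for walks on exponentially large state spaces.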

### Percolation and the Random Cluster Model: Combinatorial and Algorithmic Problems

Abstract
In 1961 Harry Frisch, John Hammersley and I [13] carried out what were in those days massive Monte Carlo experiments attempting to determine the critical percolation probabilities of the various standard lattices. The constraints at that time were, as today, machine-induced. The programmes were written in machine code on a computer which was the size of a large room with less power than a modern-day calculator. Today the situation has radically changed. Several of the critical probabilities which we were trying to estimate are now known exactly. However, the problems posed then have been replaced by problems of just as much charm and seeming intractability, and it is some of these that I shall address in these lectures.
Dominic Welsh

### Concentration

Summary
Upper bounds on probabilities of large deviations for sums of bounded independent random variables may be extended to handle functions which depend in a limited way on a number of independent random variables. This ‘method of bounded differences’ has over the last dozen or so years had a great impact in probabilistic methods in discrete mathematics and in the mathematics of operational research and theoretical computer science. Recently Talagrand introduced an exciting new method for bounding probabilities of large deviations, which often proves superior to the bounded differences approach. In this chapter we introduce and survey these two approaches and some of their applications.
Colin McDiarmid
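The flavour of the large-deviation bounds surveyed here can be seen by comparing an empirical tail probability with the corresponding Hoeffding bound for sums of independent [0, 1]-valued variables. A small simulation (the parameters are our own illustrative choices):

```python
import math
import random

def empirical_tail(n=100, t=10, trials=20_000, seed=0):
    """Estimate P(|S_n - n/2| >= t) for S_n a sum of n independent fair
    coin flips, and return it together with the Hoeffding bound
    2 * exp(-2 t^2 / n) for variables taking values in [0, 1]."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(rng.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2)
        if abs(s - n / 2) >= t:
            hits += 1
    return hits / trials, 2 * math.exp(-2 * t * t / n)

empirical, bound = empirical_tail()
```

The method of bounded differences extends bounds of exactly this exponential form to functions that depend in a limited way on many independent variables, not just to sums.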

### Branching Processes and Their Applications in the Analysis of Tree Structures and Tree Algorithms

Summary
We give a partial overview of some results from the rich theory of branching processes and illustrate their use in the probabilistic analysis of algorithms and data structures. The branching processes we discuss include the Galton-Watson process, the branching random walk, the Crump-Mode-Jagers process, and conditional branching processes. The applications include the analysis of the height of random binary search trees, random m-ary search trees, quadtrees, union-find trees, uniform random recursive trees and plane-oriented recursive trees. All these trees have heights that grow logarithmically in the size of the tree. A different behavior is observed for the combinatorial models of trees, where one considers the uniform distribution over all trees in a certain family of trees. In many cases, such trees are distributed like trees in a Galton-Watson process conditioned on the tree size. This fact allows us to review Cayley trees (random labeled free trees), random binary trees, random unary-binary trees, random oriented plane trees, and indeed many other species of uniform trees. We also review a combinatorial optimization problem first suggested by Karp and Pearl. The analysis there is particularly beautiful and shows the flexibility of even the simplest branching processes.
Luc Devroye
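As a minimal illustration of the simplest process discussed above, one can estimate the extinction probability of a Galton-Watson process by direct simulation. The Poisson offspring law and all parameters below are our own illustrative choices; for Poisson offspring with mean μ, the extinction probability is the smallest root of q = e^{μ(q−1)}, roughly 0.417 when μ = 1.5.

```python
import math
import random

def poisson(rng, lam):
    """Sample a Poisson(lam) variate (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def extinction_probability(mean_offspring=1.5, runs=20_000,
                           max_generations=50, seed=0):
    """Estimate the extinction probability of a Galton-Watson process
    with Poisson(mean_offspring) offspring by simulating many runs."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(runs):
        population = 1
        for _ in range(max_generations):
            if population == 0:
                break
            # Cap the number of reproducing individuals so supercritical
            # runs stay cheap; once the population is this large,
            # extinction is astronomically unlikely anyway.
            population = sum(poisson(rng, mean_offspring)
                             for _ in range(min(population, 200)))
        if population == 0:
            extinct += 1
    return extinct / runs

estimate = extinction_probability()
```

A subcritical or critical process (mean offspring at most 1) dies out with probability 1; the supercritical case above survives forever with positive probability, which is what makes it useful for modelling trees of logarithmic height.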

### Backmatter

Further Information