
About This Book

Thomas M. Cover and B. Gopinath. The papers in this volume are the contributions to a special workshop on problems in communication and computation conducted in the summers of 1984 and 1985 in Morristown, New Jersey, and the summer of 1986 in Palo Alto, California. The structure of this workshop was unique: no recent results, no surveys. Instead, we asked for outstanding open problems in the field. There are many famous open problems, including the question P = NP?, the simplex conjecture in communication theory, the capacity region of the broadcast channel, and the two-helper problem in information theory. Beyond these well-defined problems are certain grand research goals. What is the general theory of information flow in stochastic networks? What is a comprehensive theory of computational complexity? What about a unification of algorithmic complexity and computational complexity? Is there a notion of energy-free computation? And if so, where do information theory, communication theory, computer science, and physics meet at the atomic level? Is there a duality between computation and communication? Finally, what is the ultimate impact of algorithmic complexity on probability theory? And what is its relationship to information theory? The idea was to present problems on the first day, try to solve them on the second day, and present the solutions on the third day. In actual fact, only one problem was solved during the meeting: El Gamal's problem on noisy communication over a common line.




Chapter I. Introduction

The papers in this volume are the contributions to a special workshop on problems in communication and computation conducted in the summers of 1984 and 1985 in Morristown, New Jersey, and the summer of 1986 in Palo Alto, California. The structure of this workshop was unique: no recent results, no surveys. Instead, we asked for outstanding open problems in the field. There are many famous open problems, including the question $$P = NP?$$, the simplex conjecture in communication theory, the capacity region of the broadcast channel, and the two-helper problem in information theory.

Thomas M. Cover, B. Gopinath



Chapter II. FRACTRAN: A Simple Universal Programming Language for Arithmetic

To play the fraction game corresponding to a given list $${{f}_{1}},{{f}_{2}}, \ldots ,{{f}_{k}}$$ of fractions and starting integer N, you repeatedly multiply the integer you have at any stage (initially N) by the earliest $$f_i$$ in the list for which the answer is integral. Whenever there is no such $$f_i$$, the game stops.
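The rule above is easy to mechanize. The sketch below is a minimal interpreter for the fraction game; the one-fraction "adder" program used as a demonstration is an illustrative choice, not taken from this excerpt.

```python
from fractions import Fraction

def fractran(fractions, n, max_steps=10_000):
    """Run Conway's fraction game: repeatedly multiply n by the first
    fraction in the list giving an integer result; stop when none does."""
    trace = [n]
    for _ in range(max_steps):
        for f in fractions:
            m = n * f
            if m.denominator == 1:   # integral result: take it and restart scan
                n = int(m)
                trace.append(n)
                break
        else:                        # no fraction gave an integer: game over
            return trace
    return trace

# Illustrative adder: the one-fraction program [3/2] turns 2^a * 3^b
# into 3^(a+b), since each step trades one factor of 2 for one of 3.
trace = fractran([Fraction(3, 2)], 2**3 * 3**1)
# trace == [24, 36, 54, 81], ending at 3**4
```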

J. H. Conway

Problems in Communication


3.1. Some Basic Mathematical Problems of Multiuser Shannon Theory

At the present state of development of multiuser Shannon theory, the main interest is in single-letter characterizations of achievable rate regions (capacity regions) of various source (channel) networks, such as source coding with side information, multiple descriptions, and broadcast channels. The mathematical background of most such problems is very similar, namely, an entropy or image size characterization in the sense of [1].

I. Csiszár

3.2. The Information Theory of Perfect Hashing

Fredman and Komlós [1] have used an interesting information-theoretic technique to derive the hitherto sharpest converse (nonexistence) bounds for the problem of perfect hashing. It seems to me that this is the first use of “hard core information theory” in combinatorics.

János Körner

3.3. The Concept of Single-Letterization in Information Theory

Inherent in the definition of Shannon theory problems is an asymptotic characterization of the performance (rates and error probabilities) of all possible code constructions in the given context. Then the results one is looking for give so-called single-letter characterizations of these performance measures. Yet nobody has put forward a mathematically valid explanation of the key notion of single-letter characterization.

János Körner

3.4. Is the Maximum Entropy Principle Operationally Justifiable?

Let X be a random variable originally believed to have distribution Q. When new information is obtained suggesting that the distribution of X actually belongs to a set of distributions Π not containing the original guess Q, this should be updated to conform with the new information. Intuitively a proper updating should be that element of Π which is closest to the original guess Q. It remains to specify the measure of distance between distributions to be used to find this closest element.

I. Csiszár

3.5. Eight Problems in Information Theory

Problem 1: So far, the capacity regions of multiway channels have been characterized in only a few cases. The main difficulty consists of finding appropriate methods for single-letterization.

R. Ahlswede

3.6. Optimum Signal Set for a Poisson Type Optical Channel

A simple model of an optical communication channel is the following. The channel input is a waveform x(t) which satisfies $$ 0 \le a \le x (t) \le b < \infty, \; \; 0\le t < \infty, $$ and the corresponding channel output is a Poisson jump process or counting process v(t) with intensity function x(t).

A. D. Wyner

3.7. Spectra of Bounded Functions

We are concerned here with waveforms x(t), −∞ < t < ∞, which satisfy an amplitude constraint, |x(t)| ≤ A < ∞, and their spectra. We pose two open problems. The first is the maximization of the energy of a filtered version of an amplitude-constrained pulse with finite support. The second is the question of how close the power spectral density of a stationary amplitude-constrained random process can be to a flat band-limited spectrum. These questions appear to be difficult, but answers to them will shed light on certain aspects of storage in magnetic media (disks, tapes, etc., which are inherently amplitude limited) and on communication over microwave radio links.

A. D. Wyner

3.8. A Stochastic Decision Problem

In a team decision problem there are n agents. Agent i observes random variable Yi and, as a function of this observation, takes decision ui from a given set Ui of possible decisions. Denoting the decision function by γi, the problem is to choose (γ1,...γn) so as to optimize the expectation of a criterion C(u1, ..., un, Z), where Z is a random variable and the joint distribution of Z and the Yi is given [1]. Note that by conditioning one can assume that Z is the n-tuple of all observations Yi. Outside a few special cases, team problems are of high complexity [2].

H. S. Witsenhausen

3.9. Unsolved Problems Related to the Covering Radius of Codes

Some of the principal unsolved problems related to the covering radius of codes are described. For example, although it is almost 20 years since it was built, Elwyn Berlekamp’s light-bulb game is still unsolved.

N. J. A. Sloane

3.10. A Complexity Problem

Combinatorial extremal problems involving more than one operation are usually very difficult. Complexity problems fall into this category. We propose here an approach to the construction of monotone Boolean functions of large formula size (and large combinational complexity) via the following extremal problem, which involves only one operation.

R. Ahlswede

3.11. Codes as Orbits

For a finite set $$\chi$$ and natural number n, we call $$U \subset \chi^n$$ m-orbital if there exist $$V \subset U$$, |V| = m, and a subgroup G of the symmetric group $$\Sigma_n$$ such that $$VG = U.$$

R. Ahlswede

3.12. Reliable Communication of Highly Distributed Information

Shannon’s theory of information [1] and subsequent generalizations to multiple users (for a survey see [2]) consider the situation of a small number of users each with an unlimited amount of information. The users communicate over a noisy channel with the goal of exchanging their information reliably. Here, we consider a complementary model. We assume a very large number of users, each with a small amount of information. We also assume that the communication takes place over a noisy channel but assume that the goal of the users is to compute a function reliably. This highly distributed information model is motivated by problems of decision making in a network. The users could be either a large number of processors, human beings, or simply the components of a logic circuit. In all cases, the noise is an inevitable physical limitation.

Abbas El Gamal

3.13. Instability in a Communication Network

The problems described here are concerned with a stochastic model of a communication network. The model represents the interactions between the random demands placed on a network, and the aim is to understand its stationary behavior. In particular, we are interested in any clues that the network may exhibit instabilities, with perhaps various distinct modes of behavior possible.

F. P. Kelly

3.14. Conjecture: Feedback doesn’t Help Much

Consider the additive Gaussian noise channel with stationary time-dependent noise $$Y(k) = X(k) + Z(k)$$, where {Z(k)} has power spectral density N(f). A (2nR, n) feedback code for such a channel is given by a collection of functions $$x_k^{(n)} (W, Y_1, Y_2, \dots, Y_{k-1})$$, $$k = 1, 2, \dots, n, \; W \in \{1, 2, \dots, 2^{nR}\}$$, and a decoding function $$g^{(n)} : R^n \rightarrow \{1, 2, \dots, 2^{nR}\}$$.

Thomas M. Cover

3.15. The Capacity of the Relay Channel

Consider the following seemingly simple discrete memoryless relay channel: Here Y1, Y2 are conditionally independent and conditionally identically distributed given X, that is, $$p(y_1,\, y_2\, |\, x) = p(y_1\, |\, x) p(y_2\, |\, x)$$. Also, the channel from Y1 to Y2 does not interfere with Y2. A (2nR, n) code for this channel is a map $$x : 2^{nR} \rightarrow X^n$$, a relay function $$r : Y_1^{n} \rightarrow 2^{nC_{0}}$$, and a decoding function $$g : 2^{nC_{0}} \times Y_2^{n} \rightarrow 2^{nR}$$. The probability of error is given by $$P_e^{(n)} = P\{\,g(r(y_1), y_2) \ne W\}$$, where W is uniformly distributed over $$\{1, \dots, 2^{nR}\}$$ and $$p(w, y_1, y_2) = 2^{-nR}\, \prod_{i=1}^{n} p(y_{1i}\,|\,x_i(w)) \, \prod_{i=1}^{n} p(y_{2i}\,|\,x_i(w)).$$

Thomas M. Cover

3.16. Simplex Conjecture

It may not be widely known that the famous simplex conjecture in communication theory can be reduced to the following geometrical problem.

Thomas M. Cover

3.17. Essential Average Mutual Information

Consider two dependent random variables (S, C) and suppose that $$\hat{C}$$ is the optimal estimate of C when only S is known. I(S; C) is a measure of how much S tells us about C, and $$I(\hat{C}; C)$$ is a measure of how much our optimal estimate $$\hat{C}$$ tells us about C. What can we say about $$I(\hat{C}; C)$$ if we know that I(S; C) = 3 bits, for example? The optimality of $$\hat{C}$$ suggests that $$I(\hat{C}; C)$$ should also be close to 3 bits. This is what we address in this problem. Let (S, C) be jointly distributed according to p(s, c), where S = {0, ..., N−1} and C = {0, ..., M−1}. Let $$\hat{C} : \{0, \ldots, N-1\} \to \{0, \ldots, M-1\}$$ denote an arbitrary function of the outcomes of S. The problem is to estimate the numbers α(N, M) defined by $$\alpha(N, M) = \mathop{\inf}\limits_{p : I(S; C) > 0} \mathop{\max}\limits_{\hat{C} = \hat{C}(S)} \left( \frac{I(\hat{C}; C)}{I(S; C)} \right).$$ Since $$I(\hat{C}; C) \leqslant I(S; C)$$ (data processing inequality), α(N, M) ≤ 1. In fact, α(N, M) < 1, as shown in the following example for α(3, 2).

Yaser S. Abu-Mostafa

3.18. Pointwise Universality of the Normal Form

The problems posed here arise in the context of combinational complexity of Boolean functions whose truth tables cannot be concisely specified [2]. This class of functions arises in the study of computation and decision-making based on natural data, such as the case of pattern recognition in uncontrolled environments. The main feature of these functions is the lack of a structure that would allow an efficient systematic implementation. This leaves us with a large number of essentially unrelated cases to account for, which puts a lower bound on the complexity of these functions. However, an exhaustive solution is not necessary either, since the essential dimensionality of the data is typically far less than the actual dimensionality.

Yaser S. Abu-Mostafa

3.19. On Classification with Partial Statistics and Universal Data Compression

Classification of finite alphabet sources with partial statistics is studied. Efficient universal discriminant functions are introduced and are shown to be closely related to universal data compression.

Jacob Ziv

3.20. Are Bayes Rules Consistent in Information?

Bayes’ rule provides a method for constructing estimators of probability density functions in both parametric and nonparametric cases. Let X1, X2, ..., Xn be a random sample from an unknown probability measure P0 with density function p0(x) with respect to a dominating measure λ(dx). Let µ be a prior probability measure on the space of all probability measures P which have densities p(x) = dP/dλ. Then the mean of the posterior yields the following estimator of the density function $${{\hat{p}}_{n}}(x) = \hat{p}(x; X_1, X_2, \ldots, X_n) = \frac{\int p(x) \left(\prod_{i=1}^{n} p(X_i)\right) d\mu}{\int \left(\prod_{i=1}^{n} p(X_i)\right) d\mu}.$$
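When the prior µ is supported on finitely many candidate densities, the integrals in this estimator reduce to finite sums. The sketch below computes the posterior-mean density in that special case; the two-candidate coin example is hypothetical, not from the text.

```python
def posterior_mean_density(candidates, prior, sample):
    """Posterior-mean density estimate for a prior supported on finitely
    many candidate densities: p_hat(x) = sum_j w_j p_j(x), where the
    weights w_j are prior times likelihood, normalized."""
    weights = []
    for p, w in zip(candidates, prior):
        like = w
        for x in sample:
            like *= p(x)        # likelihood of the observed sample under p
        weights.append(like)
    total = sum(weights)
    weights = [w / total for w in weights]
    return lambda x: sum(w * p(x) for w, p in zip(weights, candidates))

# Hypothetical example: fair vs. biased coin "densities" on {0, 1}
# with respect to counting measure, uniform prior, four observed heads.
fair   = lambda x: 0.5
biased = lambda x: 0.9 if x == 1 else 0.1
p_hat = posterior_mean_density([fair, biased], [0.5, 0.5], [1, 1, 1, 1])
```

After four heads the estimate leans heavily toward the biased candidate, while remaining a proper density (its values on {0, 1} sum to one).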

Andrew R. Barron

3.21. On Finding Maximally Separated Signals for Digital Communications

The Lp norm of a function f : [0,∞) → R (real numbers) is given by $$\parallel f{{\parallel }_{p}} \equiv {{\left( {\int\limits_{0}^{\infty } {|f(t){{|}^{p}}dt} } \right)}^{{1/p}}}.$$

D. J. Hajela, Michael L. Honig

3.22. Frequency Assignment in Cellular Radio

Cellular radio uses a number of channels or frequencies (e.g., 7 × 44 = 308) divided into local cells (hexagons here) such that the same frequency can be reused in cells at graph distance 3 or greater; the hexagonal cells tile the whole plane. If we allow “call rearrangement,” we can think of assigning channels after we see the list of all call requests. Any number ≥ 0 of calls can be requested from any cell. The calls are being made to stationary, not mobile, phones, so one call corresponds to one channel. Suppose we have a bound on demand of the form “total number of calls requested in every 1-sphere is at most M.”

Edward C. Posner

Problems in Computation


4.1. In Search of a One-Way Function

Consider straight-line (SL) algorithms over a finite field with q elements.

Jacob Ziv

4.2. Average Case Complete Problems

Many interesting combinatorial problems were found to be NP-complete. Since there is little hope to solve them fast in the worst case, researchers look for algorithms which are fast just “on average.” This matter is sensitive to the choice of a particular NP-complete problem and a probability distribution of its instances. Some of these tasks are easy and some not. But one needs a way to distinguish the “difficult on average” problems. Such negative results could not only save “positive” efforts but may also be used in areas (like cryptography) where hardness of some problems is a frequent assumption. It is shown in [1] that the Tiling problem with uniform distribution of instances has no polynomial “on average” algorithm, unless every NP-problem with every simple probability distribution has one.

Leonid A. Levin

4.3. Does a Single Bit Accumulate the Hardness of the Inverting Problem?

Yao [1] demonstrated what a crucial role information theory can play in the theory of computation. These matters deserve more consideration.

Leonid A. Levin

4.4. Computing the Busy Beaver Function

Efforts to calculate values of the noncomputable Busy Beaver function are discussed in the light of algorithmic information theory.

Gregory J. Chaitin

4.5. The Complexity of Computing Discrete Logarithms and Factoring Integers

Practically all knapsack public key cryptosystems have been broken in the last few years, and so essentially the only public key cryptosystems that still have some credibility and are widely known are those whose security depends on the difficulty of factoring integers (the RSA scheme and its variants) and those whose security depends on the difficulty of computing discrete logarithms in finite fields. Therefore, the computational complexity of these two problems is of great interest.

A. M. Odlyzko

4.6. Knapsack Used in Factoring

Suppose we are given l integers x1, x2, ..., xl, in the range from $$-l^{1.5}$$ to $$+l^{1.5}$$. These integers may be thought of as being random and uniformly distributed in their range.

Don Coppersmith

4.7. Reliable Computation with Asynchronous Cellular Arrays

The homogeneous construction and local connectivity of cellular arrays make them the natural domain for the formulation of certain general questions concerning reliable computation. We have addressed the problem of reliable computation in discrete time in two works. Gacs [1] constructs a (fairly complex) one-dimensional array, while Gacs and Reif [2], based on Toom’s work, construct a very simple three-dimensional array. Even if built of unreliable components, these arrays can simulate any one-dimensional cellular array reliably.

Peter Gacs

4.8. Finite Memory Clocks

How does one tell time when the number of states in the clock is insufficient to count the elapsed time? For that matter, how good are humans at estimating the passage of time?

Thomas M. Cover

4.9. Distributed Shortest Path Algorithms

Consider a graph G(V,E) with a distinguished node called the root and with some positive weight associated with each direction on each edge. The length of a path in the graph is the sum of the weights in the direction of the path over the edges of the path. The shortest path problem is to find a minimum weight path from each node to the root. In the special case where each edge has unit weight, we call the shortest path problem the minimum hop problem.

R. G. Gallager

4.10. The Scope Problem

By a system we will mean a finite sequence S1,..., Sm of finite sets of positive integers. Denote by (a, i) the occurrence of integer a in set Si. The scope of (a, i) is the union of the sets Sα with j ≤ α ≤ k, where 1 ≤ j ≤ i ≤ k ≤ m and j is as low and k is as high as possible subject to the condition that for all β satisfying j < β < k, one has a ∈ Sβ. This means that the scope consists of the sets in the run of a’s to which (a, i) belongs, extended at each end of the run by one additional set, unless that end of the run is one end of the system.
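The definition above translates directly into code. A minimal sketch, using 1-based positions as in the text; the example system in the test is hypothetical.

```python
def scope(system, a, i):
    """Scope of the occurrence (a, i) in a system S_1, ..., S_m of sets:
    the union of the sets in the maximal run of consecutive sets
    containing a around position i, extended by one extra set at each
    end of the run unless that end touches an end of the system.
    Positions are 1-based, matching the text."""
    m = len(system)
    assert a in system[i - 1], "(a, i) must be an occurrence"
    j = i
    while j > 1 and a in system[j - 2]:   # walk left through the run of a's
        j -= 1
    k = i
    while k < m and a in system[k]:       # walk right through the run
        k += 1
    j = max(1, j - 1)                     # extend one set past each end,
    k = min(m, k + 1)                     # clipped to the system's ends
    out = set()
    for idx in range(j, k + 1):
        out |= system[idx - 1]
    return out
```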

H. S. Witsenhausen

4.11. A Conjectured Generalized Permanent Inequality and a Multiaccess Problem

Let k and n be positive integers, and let I denote the set of k-tuples, I = {1, 2, ..., n}k. For 1 ≤ j ≤ k, let Sj denote the collection of subsets L of I such that L has cardinality n and no two elements of L have the same jth coordinate. Let $$S = \mathop{ \cup }\limits_{j} {{S}_{j}}$$. Finally, let Fnk be the multinomial in variables x = (xi : i ∈ I) defined by $${{F}_{{n,k}}}(x) = \sum\limits_{{L \in S}} {\prod\limits_{{i \in L}} {{{x}_{i}}.} }$$

Bruce Hajek

4.12. Rotation Distance

In this note we summarize our recent results on rotation distance, a distance measure on binary trees with computer science applications. Our main result is that the maximum rotation distance between any two n-node binary trees is at most 2n - 6 for n ≥ 11, and this bound is tight for infinitely many n.

Daniel D. Sleator, Robert E. Tarjan, William P. Thurston

4.13. Efficient Digital Signature Schemes Based on Multivariate Polynomial Equations

In 1983, Ong, Schnorr, and Shamir proposed a new type of digital signature scheme, based on multivariate polynomial equations modulo composite numbers. The scheme had some unique features (such as a constant arithmetic complexity and a universal modulus capability), which made it an attractive alternative to the RSA signature scheme. Unfortunately, the first two incarnations of this scheme (based on binary quadratic equations and ternary cubic equations) were shown to be breakable by J. M. Pollard. The major open problem concerning this scheme is whether there exists a safe incarnation which is still attractive from a practical point of view.

Adi Shamir

4.14. Some Results for the Problem “Waiting for Godot”

Problem Statement: Consider an M/D/1 queueing system (Poisson arrival process, deterministic service times) and a test customer. The test customer is waiting for a friend whose arrival time is an exponentially distributed random variable. The test customer can either join the queue, if one exists, or wait outside the queue. Once the test customer joins the queue, he must stay in the queue until he reaches the server. If the test customer reaches the server after his friend arrives, he is served. Otherwise, he can either join the back of the queue, or wait outside the queue. What policy should the test customer follow to minimize the mean delay until service?

Michael L. Honig

4.15. Problems on Tiling, Independent Sets, and Trigonometric Polynomials

Problem 1: Given $$S \subseteq Z^n$$ and $$x \in Z^n$$, a translate of S by x is $$S + x = \{ s + x \,|\, s \in S\}$$.

D. Hajela

4.16. Communication Complexity of Shifts

Let X = (X1, X2, ..., Xn), where Xi ~ Bernoulli (1/2). Let $$Y = (X_{T+1}, X_{T+2}, \ldots, X_{T+n})$$, where T is uniformly distributed over {0, 1, 2, ..., n−1}. Thus Y is a cyclic T-shift of X. Here T + k is taken modulo n.

Thomas M. Cover

4.17. A Coding Problem Concerning Simultaneous Threshold Detection

We define a threshold detection system (TDS) of order N to be a collection of N binary codewords, V1, V2, ..., VN, and N binary decision trees, T1, T2, ..., TN, such that the tree Ti on input Vj reports “no” if j < i, and “yes” otherwise.

Michael L. Fredman

4.18. Cooling Schedules for Optimal Annealing

We study a technique inspired by statistical mechanics, called simulated annealing [2] or stochastic relaxation [1], when applied to the maximum matching problem. The technique appears useful [2] for solving large, difficult (e.g., NP-hard) problems. Our motivation for studying the relatively simple maximum matching problem is to obtain sharp results concerning sufficient convergence rates. Numerous extensions can be readily conjectured.

Bruce Hajek

Problems in the Cracks


5.1. Pick the Largest Number

Player 1 writes down any two distinct numbers on separate slips of paper. Player 2 randomly chooses one of these slips of paper and looks at the number. Player 2 must decide whether the number in his hand is the larger of the two numbers. He can be right with probability one-half. It seems absurd that he can do better.
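One way Player 2 can in fact do better is a randomized threshold rule: draw a threshold T from a distribution with full support and say “larger” exactly when the observed number exceeds T. The simulation below estimates the resulting success probability; the particular pair of numbers and the normal threshold distribution are illustrative choices, not from the text.

```python
import random

def guess_larger(seen, threshold):
    """Claim 'mine is larger' iff the observed number exceeds the threshold."""
    return seen > threshold

def win_probability(x, y, trials=200_000, seed=1):
    """Estimate Player 2's success rate against the fixed pair (x, y)
    when T is drawn from a standard normal (full support on the reals)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        # Player 2 picks one of the two slips uniformly at random.
        picked, other = (x, y) if rng.random() < 0.5 else (y, x)
        t = rng.gauss(0.0, 1.0)
        if guess_larger(picked, t) == (picked > other):
            wins += 1
    return wins / trials
```

Whenever T lands strictly between the two hidden numbers, the rule is automatically correct; otherwise it is right half the time, so the success probability strictly exceeds one-half for any fixed pair.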

Thomas M. Cover

5.2. Ergodic Process Selection

Let $$\{ ({{X}_{i}},{{Y}_{i}})\} _{{i = 1}}^{\infty }$$ be a jointly ergodic stationary stochastic process. Define a selection function $${{\delta }_{n}}:{{X}^{{n - 1}}} \times {{Y}^{{n - 1}}} \to \{ 0,1\} ,n = 1,2, \ldots$$ We wish to maximize $$\begin{array}{*{20}{c}} {\mathop{{\lim }}\limits_{{n \to \infty }} \frac{1}{n}\sum\limits_{{i = 1}}^{n} {({{\delta }_{i}}({{X}_{1}}, \ldots ,{{X}_{{i - 1}}},{{Y}_{1}},{{Y}_{2}}, \ldots ,{{Y}_{{i - 1}}}){{X}_{i}}} } \hfill \\ { + (1 - {{\delta }_{i}}({{X}_{1}}, \ldots ,{{X}_{{i - 1}}},{{Y}_{1}}, \ldots ,{{Y}_{{i - 1}}})){{Y}_{i}})} \hfill \\ \end{array}$$ over all selection functions. Thus δi chooses either Xi or Y i to add to the running average.

Thomas M. Cover

5.3. Finding the Oldest Person

There are N people. Each person’s age is independently and uniformly distributed over [0, 1]. You want to find who the oldest person is (not the person’s age) with the minimum expected number of questions when the questions are structured as follows.

Pravin Varaiya

5.4. Gambler’s Ruin: A Random Walk on the Simplex

It is known that if two gamblers with capitals p and 1 − p, respectively, engage in a fair game (we can model it by Brownian motion on the unit interval starting at p) until one of the gamblers goes broke, then the gambler with initial capital p will win the game with probability p. Now suppose that there are m gamblers with capitals corresponding to a point p in the simplex pi ≥ 0, ∑pi = 1. A random walk in the simplex occurs, and the gamblers go broke one by one. Once a gambler goes broke, he stays broke. What is the induced probability distribution on the order in which the gamblers go broke?
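Absent a closed form, the question can be probed numerically. The sketch below replaces the random walk in the simplex by a crude discrete surrogate, fair one-unit transfers between randomly chosen solvent gamblers; this surrogate dynamics is an assumption made purely for illustration.

```python
import random
from collections import Counter

def ruin_order(capitals, rng):
    """One play: repeatedly pick two solvent gamblers and move one unit
    between them on a fair coin flip; return the order of going broke,
    with the eventual winner listed last."""
    capitals = list(capitals)
    order = []
    solvent = [i for i, c in enumerate(capitals) if c > 0]
    while len(solvent) > 1:
        a, b = rng.sample(solvent, 2)
        if rng.random() < 0.5:      # fair: either player may be the loser
            a, b = b, a
        capitals[a] += 1
        capitals[b] -= 1
        if capitals[b] == 0:
            order.append(b)
            solvent.remove(b)
    order.append(solvent[0])
    return tuple(order)

# Tally the distribution over ruin orders for three equal gamblers;
# by symmetry each of the six orders should be roughly equally likely.
rng = random.Random(0)
counts = Counter(ruin_order([5, 5, 5], rng) for _ in range(3000))
```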

Thomas M. Cover

5.5. Linear Separability

Let $$(X_i, \theta_i), i = 1, 2, \ldots, n$$, be i.i.d. random pairs, where {θi} is Bernoulli with parameter 1/2, and $$X_i \sim f_{\theta_i}(x)$$, $$x_i \in R^d$$. We say $$(X_i, \theta_i), i = 1, 2, \ldots, n$$, is linearly separable if there exists a vector $$w \in R^d$$ and a real number T such that $$w^{t} x_i \geqslant T$$ if $$\theta_i = 1$$, and $$w^{t} x_i < T$$ if $$\theta_i = 0$$, for $$i = 1, 2, \ldots, n$$.

Thomas M. Cover

5.6. The Generic Rank of A 2

We define a structured matrix A to be the set of all matrices (of a given dimension n × n) in which certain entries are constrained to be zero. We then define the generic rank of A to be the maximum of the ranks of any A ∈ A. It turns out that the generic rank of A may be computed easily. Form a bipartite graph G = (V, E), where the set of vertices is $$V = \{1, 2, \ldots, n; 1', \ldots, n'\}$$. For any $$(i, j) \in \{1, \ldots, n\}^2$$, the edge (i, j′) belongs to E if and only if the ijth entry of matrices in A is not constrained to be zero. Then, the generic rank of A equals the maximum number of edges in any bipartite matching of that graph.
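The recipe above is directly implementable: the generic rank is the size of a maximum matching in the bipartite graph of unconstrained entries. A minimal sketch using Kuhn's augmenting-path algorithm; the input encoding (`pattern[i][j]` truthy iff entry (i, j) is free to be nonzero) is an assumed convention.

```python
def generic_rank(pattern):
    """Generic rank of an n-by-n zero pattern: the size of a maximum
    matching between rows and columns, with an edge (i, j) wherever
    entry (i, j) is not constrained to be zero."""
    n = len(pattern)
    match_col = [-1] * n          # match_col[j] = row currently matched to column j

    def augment(i, seen):
        # Try to match row i, rerouting previously matched rows if needed.
        for j in range(n):
            if pattern[i][j] and j not in seen:
                seen.add(j)
                if match_col[j] == -1 or augment(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    return sum(augment(i, set()) for i in range(n))
```

For instance, the pattern with free entries (1,1), (1,2), (2,2) has generic rank 2, while a pattern whose free entries all sit in one column has generic rank 1.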

John N. Tsitsiklis

5.7. The Stability of the Products of a Finite Set of Matrices

Let $$F = \{A_1, \ldots, A_N\}$$ be a set of n × n matrices. Given a sequence $$S = \{A_{i_k}\}_{k=1}^{\infty}$$, with $$A_{i_k} \in F$$, we consider products of the form $$B_{M,S} = \prod\limits_{k=1}^{M} A_{i_k}$$. We are interested in questions of the following type:
1. Is the set {BM,S : M = 1, 2, ...} bounded for all sequences S? (We will then say that F is stable.) Does BM,S converge to zero as M → ∞ for all S?
2. What happens if we impose some restrictions on the set of allowed sequences S?
3. What are some simple classes of matrices for which the answers to 1 and 2 become simpler?

John N. Tsitsiklis

5.8. Electrical Tomography

Tomography deduces a physical function σ(P) (say a density), at points P inside a living organ, from measurements made on the outside. With suitable interpretation, σ(P) may reveal tumors or other abnormalities. In X-ray tomography, σ(P) is an attenuation coefficient, external measurements supply integrals $$\alpha(L) = \int\limits_{L} \sigma(P)\, ds \qquad (1)$$ along straight-line rays L through the organ, and the integral equation (1) is solved for σ(P) (see [1]). In another kind of tomography, using nuclear magnetic resonance measurements, σ(P) is deduced from integrals over planes instead of lines (see [2]).

E. N. Gilbert, L. A. Shepp

5.9. Figure-Ground Problem for Sound

The impossible tuning fork is a good example of a figure-ground optical illusion. Tracing the body of the tuning fork leads to the background. What is figure and what is ground?

Thomas M. Cover

5.10. The Entropy Power Inequality and the Brunn-Minkowski Inequality

The Brunn-Minkowski inequality states that the nth root of the volume of the set sum of two sets in Euclidean n-space is greater than or equal to the sum of the nth roots of the volumes of the individual sets. The entropy power inequality states that the effective variance of the sum of two independent random variables with densities in n-space is greater than or equal to the sums of their effective variances. Formally, the inequalities can be seen to be similar. We are interested in determining whether this occurs by chance or whether there is a fundamental idea underlying both inequalities.

Thomas M. Cover

5.11. The Weird and Wonderful Chemistry of Audioactive Decay

Suppose we start with a string of digits (i.e., positive integers), say 5 5 5 5 5.
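The “decay” step, not restated in this excerpt, is Conway's look-and-say transformation: read each run of equal digits aloud and write down the run length followed by the digit. A minimal sketch:

```python
from itertools import groupby

def decay(s):
    """One audioactive step: describe each run of equal digits
    ('five 5s' -> '55', 'one 2 one 5' -> '1215')."""
    return "".join(str(len(list(g))) + d for d, g in groupby(s))

# Decay the starting string 5 5 5 5 5 a few times.
seq = ["55555"]
for _ in range(4):
    seq.append(decay(seq[-1]))
# seq == ['55555', '55', '25', '1215', '11121115']
```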

J. H. Conway

Solutions to Six of the Problems


6.1. On the Spectral Density of some Stochastic Processes

We prove the following theorem, which was motivated by a question that Wyner raised in [1].

S. Boyd, D. J. Hajela

6.2. Ergodic Process Selection

The purpose of this note is to give a partial solution to the following problem posed by Thomas M. Cover [1]. Let (X, Y) = (Xi, Yi, i ∈ Z) be a jointly ergodic stationary stochastic process. A random process δ = (δi, i ∈ Z) is called a selection strategy if δi ∈ {0,1} with probability one for each i, and a selection strategy δ is called sequential if for each i ≥ 1, δi is measurable with respect to $$\sigma(X_{i-1}, Y_{i-1}, X_{i-2}, Y_{i-2}, \ldots, X_1, Y_1),$$ which represents the finite past.

Bruce Hajek

6.3. Gambler’s Ruin: A Random Walk on the Simplex

The purpose of this note is to give a solution to a problem of Thomas M. Cover (see Chapter V, Section 5.4). Suppose there are three gamblers with respective capitals $$p_a$$, $$p_b$$, and $$p_c$$, where $$p_a + p_b + p_c = 1$$. The players engage in a symmetric three-way game modeled by Brownian motion in the two-dimensional simplex $$p_i \geq 0$$, $$p_a + p_b + p_c = 1$$. When one of the players goes broke, play continues between the remaining two players, where the play is now modeled by a Brownian motion in one dimension, until a second player loses, and the remaining player is declared a winner. Doob’s optional sampling theorem implies that player i will be a winner with probability $$p_i$$. Cover’s problem is to find the probability that the players lose in a specific order. For example, we would like to find the probability that player 3 loses first and then player 2 loses. We provide a “messy” solution.

Bruce Hajek

6.4. Finding Parity in a Broadcast Network

Consider a broadcast network of N nodes in which each binary digit transmitted by each node is received by each other node via a binary symmetric channel whose crossover probability ε is independent over transmitters, receivers, and time. Each node has a binary state, and the problem is to construct a distributed algorithm to find the parity of the set of states with some given reliability. This problem was first formulated by A. El Gamal (see Chapter III, Section 3.12) and is of interest because it is one of the simplest distributed algorithm problems involving noise.

R. G. Gallager

6.5. An Optimal Strategy for a Conflict Resolution Problem

Relevant to the design of multiple access protocols is the problem of finding the largest of N i.i.d. numbers X1, ..., XN uniformly distributed over [0,1] using the minimum number of questions of the following type. We pick a set A(1) ⊂ [0,1] and ask which Xi ∈ A(1). Depending on the response, we pick another subset A(2) and ask which Xi ∈ A(2), and so on, until we identify the largest Xi. It is shown that the optimum sequence of questions must be of the type A(k) = (a(k), 1]; the best sequence {a(k)} can then be determined by dynamic programming following the work of Arrow, Pesotchinsky, and Sobel. Thus [3] is resolved.

V. Anantharam, P. Varaiya

6.6. Coordination Complexity and the Rank of Boolean Functions

The MEX machine is a model for describing the coordination between concurrent processes in a distributed protocol. (See Figure 1.) The discrete recursion operates as follows: Every second is divided into two equal periods. There is a bus connecting all processes, and all information needed for the coordination of the processes is transmitted over the bus. During the first period, a state of the bus is selected. In the second period, each process “resolves” its task by changing its state according to the selected bus state. Once the bus state is given, the state transitions at the processes are independent.

B. Gopinath, V. K. Wei

