
2006 | Book

General Theory of Information Transfer and Combinatorics

Edited by: Rudolf Ahlswede, Lars Bäumer, Ning Cai, Harout Aydinian, Vladimir Blinovsky, Christian Deppe, Haik Mashurian

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Introduction

Introduction

Among the directions of research established in GTIT-C, we present in Chapters I-IX those contributions of participants that took shape in written form. The papers were thoroughly refereed. For ease of reference they are numbered and in addition labelled with the letter B, which refers to this book.

Rudolf Ahlswede
Rudolf Ahlswede — From 60 to 66

On September 15th, 2004 Rudolf Ahlswede celebrated his 66th birthday, and we describe here developments in his work in the last six years. For an account of the first 60 years we refer to the preface of the book “Numbers, Information and Complexity”, a special volume in honour of R. Ahlswede on the occasion of his 60th birthday, edited by Ingo Althöfer, Ning Cai, Gunter Dueck, Levon Khachatrian, Mark S. Pinsker, Andras Sárközy, Ingo Wegener and Zhen Zhang, Kluwer Academic Publishers, Boston, Dordrecht, London, 2000. From there we just cite the following paragraph describing Ahlswede’s approach to Information Theory:

“Ahlswede’s path to Information Theory, where he has been world-wide a leader for several decades, is probably unique, because it went without any engineering background through Philosophy: Between knowing and not knowing there are several degrees of knowledge with probability, which can even quantitatively be measured – unheard of in classical Philosophy.”

During the period described here, on January 30, 2002, the message of the sudden and unexpected passing of his collaborator and friend Levon Khachatrian came as a shock. His untimely death disrupted a very fruitful cooperation. We refer the reader to the memorial text on Levon Khachatrian in this volume for an appreciation of his person and work.

Rudolf Ahlswede
Information Theory and Some Friendly Neighbors – Ein Wunschkonzert

No Abstract

Rudolf Ahlswede

Probabilistic Models

Identification for Sources

The classical transmission problem deals with the question: how many possible messages can we transmit over a noisy channel? Transmission means there is an answer to the question “What is the actual message?” In the identification problem we deal with the question: how many possible messages can the receiver of a noisy channel identify? Identification means there is an answer to the question “Is the actual message u?” Here u can be any member of the set of possible messages.

R. Ahlswede, B. Balkenhol, C. Kleinewächter
On Identification

In Shannon’s classical model of transmitting a message over a noisy channel we have the following situation:

There are two persons called sender and receiver. Sender and receiver can communicate via a channel. In the simplest case the sender just puts some input letters into the channel and the receiver gets some output letters. Usually the channel is noisy, i.e. the channel output is a random variable whose distribution is governed by the input letters. This model can be extended in several ways: channels with passive feedback, for example, give the output letters back to the sender. Multiuser channels like multiple access channels or broadcast channels (which will not be considered in this paper) have several senders or receivers which want to communicate simultaneously. Common to all these models of transmission is the task that sender and receiver have to perform: both have a common message set M and the sender is given a message i ∈ M. He has to encode the message (i.e. transform it into a sequence of input letters for the channel) in such a way that the receiver can decode the sequence of output letters and decide with a small probability of error what the message i was. The procedures for encoding and decoding are called a code for the channel, and the number of times the channel is used to transmit one message is called the blocklength of the code.

C. Kleinewächter
Identification and Prediction

In this work the concept of identification is applied in the theory of prediction. This approach was suggested to us by our advisor Professor R. Ahlswede. This and other directions of research can also be found in [2]. Shannon’s theory of transmission of messages over a noisy channel [15] is well known. Using the framework of Shannon’s channel model, a new concept of information transfer, called identification, was introduced by Ahlswede and Dueck in [1].

L. Bäumer
Watermarking Identification Codes with Related Topics on Common Randomness

Watermarking identification codes were introduced by Y. Steinberg and N. Merhav. In their model they assumed that

(1) the attacker uses a single channel to attack the watermark, and both the information hider and the decoder know the attack channel;

(2) the decoder either knows the covertext completely or knows nothing about it.

They then suggested, instead of the first assumption, studying more robust models, and, instead of the second assumption, considering the case where the information hider is allowed to send a secret key to the decoder depending on the covertext.

In response to the first suggestion, in this paper we assume that the attacker chooses an unknown (to both information hider and decoder) channel from a set of channels, i.e. a compound channel, to attack the watermark. In response to the second suggestion we present two models. In the first model the information hider generates side information componentwise from the output sequence of the covertext and uses it as the secret key. In the second model the only constraint on the key space is an upper bound for its rate.

We present lower bounds for the identification capacities in the above models, which include the Steinberg and Merhav results on lower bounds. To obtain our lower bounds we introduce the corresponding models of common randomness. For the models with a single channel, we obtain the capacities of common randomness. For the models with a compound channel, we have lower and upper bounds, and the differences between them are due to the exchange and different orders of the max–min operations.

R. Ahlswede, N. Cai
Notes on Conditions for Successive Refinement of Information

The successive refinement of information (or source divisibility) under error exponent or reliability constraint is discussed. By rehabilitating Maroutian’s result on multiple descriptions [13], we refine our previous proof on successive refinability conditions reported in [7] and restate the result by Tuncel and Rose [17]. In particular, it is noted that the successive refinement in “purely” reliability sense is always possible.

A. N. Harutyunyan
Coding for the Multiple-Access Adder Channel

The coding problem for the multiple-access adder channel is considered, both for the case of permanent user activity and partial user activity. For permanent user activity, Khachatrian [10] has written an excellent survey for general, symmetric and non-symmetric rates. In this survey, we only deal with the special symmetric rate case, where all users have two codewords. The length of the shortest possible code is characterized, and amongst others, we present the code construction of Chang and Weldon [5]. We also deal with the case of signature coding (where we mean that one of the codewords for each user is the zero vector). As a code construction of this kind, we show Lindström’s one [12].

We also consider partial user activity. For this case, the resulting upper and lower bounds on the length of the shortest possible code differ by a factor of two. There are some constructions for suboptimal codes, but we do not know of constructions with length approaching the upper bound. The signature code is similar to the $B_m$ code examined by D’yachkov and Rykov [7]. It is interesting that the upper and lower bounds for the length of $B_m$ codes are the same as for signature codes.

B. Laczay
Bounds of E-Capacity for Multiple-Access Channel with Random Parameter

The discrete memoryless multiple-access channel with random parameter is investigated. Various situations, when the state of the channel is known or unknown to the encoders and the decoder, are considered. Some bounds on the E-capacity and on the capacity regions for average error probability are obtained.

M. E. Haroutunian
Huge Size Codes for Identification Via a Multiple Access Channel Under a Word-Length Constraint

It is well-known from pioneering papers published since 1989 that, for identification via a communication channel of given Shannon capacity ${\cal C}$, the number N(n) of messages which can be reliably identified using an identification code grows doubly exponentially with the wordlength n, as n → ∞. This paper provides contributions to the study of identification plus transmission (IT) codes under a wordlength constraint n ≤ n_0 for sending an identifier and message content over a noiseless one-way channel. While the false identification probability no longer vanishes under such a constraint, it can be drastically suppressed both by serial and parallel versions of an appropriately defined non-algebraic forward error control (FEC). The main result of this paper consists of exact and approximate expressions for the huge size N(n), and the corresponding second-order rate, of an asymptotically optimal IT code sequence under a wordlength constraint, both with and without either of the two proposed FEC schemes. Also, upper bounds are obtained on the false identification probability under such a wordlength constraint for both FEC versions, showing the drastic reduction possible when applying FEC. Furthermore, it is outlined in this paper how the simultaneous transfer of identifiers and message contents by several active users via a noiseless multiple access channel (MAC) could be handled, using the concept of a least length single control sequence. Huge size identifier sets might be of design interest for certain prospective remote alarm services, such as when occasional alarm services are to be conveyed via a simple MAC which is controlled by the same single common cyclically permutable sequence at all nodes involved. It is pointed out under which circumstances the use of huge IT codes might be worth further detailed investigation by designers interested in such prospective services.

S. Csibi, E. von der Meulen
Codes with the Identifiable Parent Property and the Multiple-Access Channel

We begin with

I. The identifiable parent property and some first results about it

If ${\mathcal C}$ is a q-ary code of length n and $a^n$ and $b^n$ are two codewords, then $c^n$ is called a descendant of $a^n$ and $b^n$ if $c_t\in\{a_t,b_t\}$ for t = 1,...,n. We are interested in codes ${\mathcal C}$ with the property that, given any descendant $c^n$, one can always identify at least one of the ‘parent’ codewords in ${\mathcal C}$. We study bounds on F(n,q), the maximal cardinality of a code ${\mathcal C}$ with this property, which we call the identifiable parent property. Such codes play a role in schemes that protect against piracy of software.

R. Ahlswede, N. Cai
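As an illustration of the identifiable parent property defined in the preceding abstract, here is a minimal Python sketch (not from the paper) that brute-forces, for a tiny code, whether every descendant allows identification of at least one parent; all names and the toy code are chosen for this example only.

```python
# A minimal sketch (illustration only) of the identifiable parent property (IPP):
# every descendant of two codewords must point back to at least one parent.
from itertools import combinations, product

def descendants(a, b):
    """All words c with c_t in {a_t, b_t} for every position t."""
    return {tuple(choice) for choice in product(*zip(a, b))}

def has_ipp(code):
    """True iff for every descendant the intersection of all candidate
    parent pairs is non-empty, i.e. one parent can always be identified."""
    code = [tuple(c) for c in code]
    pairs = list(combinations(range(len(code)), 2)) + [(i, i) for i in range(len(code))]
    produced = {}          # descendant -> list of parent pairs that could produce it
    for i, j in pairs:
        for d in descendants(code[i], code[j]):
            produced.setdefault(d, []).append({i, j})
    return all(set.intersection(*parent_pairs) for parent_pairs in produced.values())

# toy example: a ternary code of length 3
print(has_ipp([(0, 0, 0), (1, 1, 1), (2, 2, 2)]))   # True
```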

Cryptology – Pseudo Random Sequences

Transmission, Identification and Common Randomness Capacities for Wire-Tape Channels with Secure Feedback from the Decoder

We analyze wire-tape channels with secure feedback from the legitimate receiver. We present a lower bound on the transmission capacity (Theorem 1), which we conjecture to be tight and which is proved to be tight (Corollary 1) for Wyner’s original (degraded) wire-tape channel and also for the reversely degraded wire-tape channel for which the legitimate receiver gets a degraded version from the enemy (Corollary 2).

Somewhat surprisingly we completely determine the capacities of secure common randomness (Theorem 2) and secure identification (Theorem 3 and Corollary 3). Unlike for the DMC, these quantities are different here, because identification is linked to non-secure common randomness.

R. Ahlswede, N. Cai
A Simplified Method for Computing the Key Equivocation for Additive-Like Instantaneous Block Encipherers

We study the problem of computing the key equivocation rate for secrecy systems with additive-like instantaneous block (ALIB) encipherers. In general it is difficult to compute the exact value of the key equivocation rate for a secrecy system $(f, {\cal C})$ with ALIB encipherer when the block length n becomes large. In this paper, we propose a simplified method for computing the key equivocation rate for two classes of secrecy systems with ALIB encipherers. 1) The function f is additive-like and the block encipherer ${\cal C}$ is the set of all n-length key words (sequences) of type P. 2) The function f is additive and the block encipherer ${\cal C}$ is a linear (n, m) code in the n-dimensional vector space ${\rm GF}(q)^n$. The method has a potential use for more classes of secrecy systems.

Z. Zhang
Secrecy Systems for Identification Via Channels with Additive-Like Instantaneous Block Encipherer

In this paper we propose a model of secrecy systems for identification via channels with ALIB encipherers and find the smallest asymptotic key rate of the ALIB encipherers needed for the requirement of security.

R. Ahlswede, N. Cai, Z. Zhang
Large Families of Pseudorandom Sequences of k Symbols and Their Complexity – Part I

In earlier papers we introduced the measures of pseudorandomness of finite binary sequences [13], introduced the notion of f-complexity of families of binary sequences, constructed large families of binary sequences with strong PR (= pseudorandom) properties [6], [12], and we showed that one of the earlier constructions can be modified to obtain families with high f-complexity [4]. In another paper [14] we extended the study of pseudorandomness from binary sequences to sequences on k symbols (“letters”). In [14] we also constructed one “good” pseudorandom sequence of a given length on k symbols. However, in the applications we need not only a few good sequences but large families of them, and in certain applications (cryptography) the complexity of the family of these sequences is more important than its size. In this paper our goal is to construct “many” “good” PR sequences on k symbols, to extend the notion of f-complexity to the k-symbol case, and to study this extended f-complexity concept.

R. Ahlswede, C. Mauduit, A. Sárközy
Large Families of Pseudorandom Sequences of k Symbols and Their Complexity – Part II

We continue the investigation of Part I, keep its terminology, and also continue the numbering of sections, equations, theorems etc.

Consequently we start here with Section 6. As mentioned in Section 4, we present now criteria for a triple (r, t, p) to be k-admissible. Then we consider the f-complexity (extended now to k-ary alphabets) $\Gamma_k(\mathcal{F})$ of a family $\mathcal{F}$. It serves again as a performance parameter of key spaces in cryptography. We give a lower bound for the f-complexity of a family of the type constructed in Part I. In the last sections we explain what can be said about the theoretically best families $\mathcal{F}$ with respect to their f-complexity $\Gamma_k(\mathcal{F})$. We begin with straightforward extensions of the results of [4] for k = 2 to general k by using the same Covering Lemma as in [1].

R. Ahlswede, C. Mauduit, A. Sárközy
On a Fast Version of a Pseudorandom Generator

In an earlier paper I constructed a large family of pseudorandom sequences by using the discrete logarithm. While the sequences in this construction have strong pseudorandom properties, they can be generated only very slowly, since no fast algorithm is known to compute ind n. The purpose of this paper is to modify this family slightly so that the members of the new family can be generated much faster, and they have almost as good pseudorandom properties as the sequences in the original family.

K. Gyarmati
On Pseudorandom Sequences and Their Application

A large family of finite pseudorandom binary sequences is presented, and also tested “theoretically” for pseudorandomness. The optimal way of implementation is discussed and running time analysis is given. Numerical calculations are also presented.

J. Rivat, András Sárközy
Authorship Attribution of Texts: A Review

We survey the authorship attribution of documents given some prior stylistic characteristics of the author’s writing extracted from a corpus of known works, e.g., authentication of disputed documents or literary works. Although the pioneering paper, based on word length histograms, appeared at the very end of the nineteenth century, the resolution power of this and other stylometry approaches is yet to be studied, both theoretically and on case studies in which additional information can assist in finding the correct attribution.

We survey several theoretical approaches, including ones approximating the apparently nearly optimal approach based on Kolmogorov conditional complexity, and some case studies: attributing the Shakespeare canon and newly discovered works, as well as newly discovered works allegedly by M. Twain, binary discrimination of the Federalist papers (Madison vs. Hamilton) using Naive Bayes and other classifiers, and steganography presence testing. The latter topic is complemented by a sketch of an anagram ambiguity study based on Shannon’s cryptography theory.

M. B. Malyutov

Quantum Models

Raum-Zeit und Quantenphysik – Ein Geburtstagsständchen für Hans-Jürgen Treder

When asked for a talk for Hans-Jürgen Treder’s 75th birthday, I had answered on the spur of the moment that I wanted to say something about the relationship between relativity theory and quantum theory. That was somewhat reckless, for I have only a few observations to offer, not solutions.

A. Uhlmann
Quantum Information Transfer from One System to Another One

The topics of the paper are: a) Anti-linear maps governing EPR tasks in the absence of distinguished reference bases. b) Imperfect quantum teleportation and its composition rule. c) Quantum teleportation by distributed measurements. d) Remarks on EPR with an arbitrary mixed state, and triggered by a Lüders measurement.

A. Uhlmann
On Rank Two Channels

Based on some identities for the determinant of completely positive maps of rank two, concurrences are calculated or estimated from below.

A. Uhlmann
Universal Sets of Quantum Information Processing Primitives and Their Optimal Use

This paper considers several concepts of universality in quantum information processing and deals with various (sometimes surprising) universal sets of quantum primitives as well as with their optimal use.

J. Gruska
An Upper Bound on the Rate of Information Transfer by Grover’s Oracle

Grover discovered a quantum algorithm for identifying a target element in an unstructured search universe of N items in approximately $\pi/4 \sqrt{N}$ queries to a quantum oracle. For classical search using a classical oracle, the search complexity is of order N/2 queries, since on average half of the items must be searched. In work preceding Grover’s, Bennett et al. had shown that no quantum algorithm can solve the search problem in fewer than $O(\sqrt{N})$ queries. Thus, Grover’s algorithm has optimal order of complexity. Here, we present an information-theoretic analysis of Grover’s algorithm and show that the square-root speed-up by Grover’s algorithm is the best possible by any algorithm using the same quantum oracle.

E. Arikan
A Strong Converse Theorem for Quantum Multiple Access Channels

With the wringing technique developed by the first author for classical multiple access channels we show that the strong converse theorem holds also for quantum multiple access channels, if classical messages are transmitted.

R. Ahlswede, N. Cai
Identification Via Quantum Channels in the Presence of Prior Correlation and Feedback

Continuing our earlier work (quant-ph/0401060), we give two alternative proofs of the result that a noiseless qubit channel has identification capacity 2: the first is direct, by a “maximal code with random extension” argument; the second is by showing that 1 bit of entanglement (which can be generated by transmitting 1 qubit) and negligible (quantum) communication has identification capacity 2. This generalizes a random hashing construction of Ahlswede and Dueck: that 1 shared random bit together with negligible communication has identification capacity 1.

We then apply these results to prove capacity formulas for various quantum feedback channels: passive classical feedback for quantum-classical channels, a feedback model for classical-quantum channels, and “coherent feedback” for general channels.

A. Winter
Additive Number Theory and the Ring of Quantum Integers

Let m and n be positive integers. For the quantum integer $[n]_q = 1+q+q^2+\cdots+q^{n-1}$ there is a natural polynomial addition $\oplus_q$ such that $[m]_q \oplus_q [n]_q = [m+n]_q$ and a natural polynomial multiplication $\otimes_q$ such that $[m]_q \otimes_q [n]_q = [mn]_q$. These definitions are motivated by elementary decompositions of intervals of integers in combinatorics and additive number theory. This leads to the construction of the ring of quantum integers and the field of quantum rational numbers.

M. B. Nathanson
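The two identities above can be checked symbolically for small m and n. The sketch below assumes the standard explicit forms of the operations, namely $[m]_q \oplus_q [n]_q = [m]_q + q^m[n]_q$ and $[m]_q \otimes_q [n]_q = [m]_{q^n}\cdot[n]_q$, which the abstract itself does not spell out.

```python
# A minimal sketch verifying the quantum-integer identities for small m, n.
# The explicit forms used here (addition as [m]_q + q^m [n]_q, multiplication
# as [m]_{q^n} * [n]_q) are assumptions made for illustration.
import sympy as sp

q = sp.symbols('q')

def quantum_int(n, var=q):
    """[n]_q = 1 + q + ... + q^(n-1)."""
    return sum(var**i for i in range(n))

for m in range(1, 6):
    for n in range(1, 6):
        add = sp.expand(quantum_int(m) + q**m * quantum_int(n))
        mul = sp.expand(quantum_int(m, q**n) * quantum_int(n))
        assert add == sp.expand(quantum_int(m + n)), (m, n)
        assert mul == sp.expand(quantum_int(m * n)), (m, n)

print("addition and multiplication identities hold for 1 <= m, n <= 5")
```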
The Proper Fiducial Argument

The paper describes the proper interpretation of the fiducial argument, as given by Fisher in (only) his first papers on the subject. It argues that, far from being a quaint, little, isolated idea, this was the first attempt to build a bridge between aleatory probabilities (the only ones used by Neyman) and epistemic probabilities (the only ones used by Bayesians), by implicitly introducing, as a new type, frequentist epistemic probabilities. Some (partly rather unknown) reactions by other statisticians are discussed, and some rudiments of a new, unifying general theory of statistics are given which uses upper and lower probabilities and puts fiducial probability into a larger framework. Then Fisher’s pertaining 1930 paper is reread in the light of present understanding, followed by some short sections on the (legitimate) a posteriori interpretation of confidence intervals, and on fiducial probabilities as limits of lower probabilities.

F. Hampel
On Sequential Discrimination Between Close Markov Chains

The appropriateness of the Wald-type logarithmic asymptotics for the mean length of sequential discrimination strategies between close alternatives has already been challenged in the well-known controversy of the seventies over the comparative performance of the asymptotically optimal Chernoff discrimination strategies and the ad hoc heuristic rules of Box and Hill.

We continue this discussion by showing, via simulation, the poor performance of the Wald-type asymptotic bounds for the mean length of asymptotically optimal sequential discrimination strategies between the simplest types of Markov chains. We propose some weak remedies against this disaster and some alternative asymptotic tools.

M. B. Malyutov, D. M. Malyutov
Estimating with Randomized Encoding the Joint Empirical Distribution in a Correlated Source

In order to put the present model and our results into the right perspective, we first describe key steps in multiuser source coding theory.

R. Ahlswede, Z. Zhang
On Logarithmically Asymptotically Optimal Hypothesis Testing for Arbitrarily Varying Sources with Side Information

The asymptotic interdependence of the error probability exponents (reliabilities) in optimal hypothesis testing is studied for arbitrarily varying sources with the state sequence known to the statistician. The case when the states are not known to the decision maker was studied by Fu and Shen.

R. Ahlswede, Ella Aloyan, E. Haroutunian
On Logarithmically Asymptotically Optimal Testing of Hypotheses and Identification

We introduce a new aspect of the influence of information-theoretical methods on statistical theory. The procedures of probability distribution identification for K (≥ 1) random objects, each having one distribution from a known set of M (≥ 2) distributions, are studied. N-sequences of discrete independent random variables represent the results of N observations for each of the K objects. On the basis of such samples, decisions must be made concerning the probability distributions of the objects. For $N \longrightarrow \infty$ the exponential decrease of the test’s error probabilities is considered. The reliability matrices of logarithmically asymptotically optimal procedures are investigated for some models and formulations of the identification problems. The optimal subsets of reliabilities whose values may be given beforehand, and conditions guaranteeing positiveness of all the reliabilities, are investigated.

R. Ahlswede, E. Haroutunian
Correlation Inequalities in Function Spaces

We give a condition for a Borel measure on $\mathbb{R}^{[0,1]}$ which is sufficient for the validity of an AD-type correlation inequality in the function space.

R. Ahlswede, V. Blinovsky
Lower Bounds for Divergence in the Central Limit Theorem

A method for finding asymptotic lower bounds on information divergence is developed and used to determine the rate of convergence in the Central Limit Theorem.

Peter Harremoës

Information Measures – Error Concepts – Performance Criteria

Identification Entropy

Shannon (1948) has shown that a source $({\mathcal {U}},P,U)$ with output U satisfying Prob(U = u) = $P_u$ can be encoded in a prefix code ${\mathcal{C}}=\{c_u:u\in{\mathcal {U}}\}\subset\{0,1\}^*$ such that for the entropy

$ H(P)=\sum\limits_{u\in{\mathcal {U}}}-p_u\log p_u\leq\sum p_u|| c_u|| \leq H(P)+1,$

where $||c_u||$ is the length of $c_u$.

We use a prefix code $\mathcal{C}$ for another purpose, namely noiseless identification, that is, every user who wants to know whether a u $(u\in{\mathcal {U}})$ of his interest is the actual source output or not can consider the RV C with $C=c_u=(c_{u_1},\dots,c_{u || c_u ||})$ and check whether $C=(C_1,C_2,\dots)$ coincides with $c_u$ in the first, second, etc. letter, stopping when the first different letter occurs or when $C=c_u$. Let $L_{\mathcal{C}}(P,u)$ be the expected number of checkings if code $\mathcal{C}$ is used.

Our discovery is an identification entropy, namely the function

$H_I(P)=2\left(1-\sum\limits_{u\in{\mathcal {U}}}P_u^2\right).$

We prove that

$L_{\mathcal{C}}(P,P)=\sum\limits_{u\in{\mathcal {U}}}P_u L_{\mathcal{C}}(P,u)\geq H_I(P)$

and thus also that

$ L(P)=\min\limits_{\mathcal{C}}\max\limits_{u\in{\mathcal {U}}}L_{\mathcal{C}}(P,u)\geq H_I(P)$

and related upper bounds, which demonstrate the operational significance of identification entropy in noiseless source coding, just as Shannon entropy has operational significance in noiseless data compression.

Also other averages such as $\bar L_{\mathcal{C}}(P)=\frac1{|{\mathcal {U}}|} \sum\limits_{u\in{\mathcal {U}}}L_{\mathcal{C}}(P,u)$ are discussed, in particular for Huffman codes, where classically equivalent Huffman codes may now be different.

We also show that prefix codes, where the codewords correspond to the leaves in a regular binary tree, are universally good for this average.

R. Ahlswede
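A minimal Python sketch of the quantities in the preceding abstract, assuming the natural reading that a user interested in u compares the letters of the random codeword C with those of $c_u$ until the first disagreement (or until $c_u$ is exhausted); the source and prefix code below are toy examples.

```python
# A minimal sketch (illustration only) computing the identification entropy
# H_I(P) and the expected number of symbol checks L_C(P, u) for a prefix code.
def identification_entropy(P):
    return 2.0 * (1.0 - sum(p * p for p in P.values()))

def expected_checks(P, code, u):
    """L_C(P, u): expected number of letters of c_u compared with the random codeword."""
    cu = code[u]
    total = 0.0
    for v, p in P.items():
        cv = code[v]
        k = 0
        while k < len(cu) and k < len(cv) and cu[k] == cv[k]:
            k += 1                                   # matching prefix length
        checks = k + 1 if k < len(cu) else len(cu)   # +1 for the mismatching letter
        total += p * checks
    return total

# toy source with a binary prefix code
P = {'a': 0.5, 'b': 0.25, 'c': 0.25}
code = {'a': '0', 'b': '10', 'c': '11'}
print("H_I(P) =", identification_entropy(P))
for u in P:
    print(u, "L_C(P,u) =", expected_checks(P, code, u))
```

For this toy code the average $\sum_u P_u L_{\mathcal{C}}(P,u)$ equals 1.25 = $H_I(P)$, so the stated lower bound is met with equality here.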
Optimal Information Measures for Weakly Chaotic Dynamical Systems

The study of weakly chaotic dynamical systems suggests that an important indicator for their classification is the quantity of information that is needed to describe their orbits. The information can be measured by the use of suitable compression algorithms. The algorithms are “optimal” for this purpose if they compress zero-entropy strings very efficiently. We discuss a definition of optimality in this sense. We also show, by exhibiting a concrete example, that the set of optimal algorithms is not empty. We prove that the algorithms which are optimal according to the above definition are suitable to measure the information needed to describe the orbits of the Manneville maps: in these examples the information content measured by these algorithms has the same asymptotic behavior as the algorithmic information content.

V. Benci, S. Galatolo
Report on Models of Write–Efficient Memories with Localized Errors and Defects

Write-efficient memories (WEM) as a model for storing and updating information were introduced by R. Ahlswede and Z. Zhang [2]. We now consider three new models of WEM with localized errors and defects, respectively.

In the situation $(E^+, D^-)$, where the encoder is informed but the decoder is not informed about the previous state of the memory, we study

1. WEM codes correcting defects,

2. WEM codes detecting localized errors.

Finally, in the situation $(E^+, D^+)$, where both the encoder and the decoder are informed about the previous state of the memory, we study

3. WEM codes correcting localized errors.

In all three cases we determine, for the binary alphabet, the optimal rates under a cost constraint defined in terms of the Hamming distance.

R. Ahlswede, M. S. Pinsker
Percolation on a k-Ary Tree

Starting from the root, extend k branches and append k children with probability p, or terminate with probability q = 1 − p. Then we have a finite k-ary tree with probability one if 0 ≤ p ≤ 1/k. Moreover, we give the expectation and variance of the length of ideal codewords for representing the finite trees. Furthermore, we establish the probability of obtaining an infinite tree, that is, of penetrating to infinity without termination, for the case 1/k < p ≤ 1.

K. Kobayashi, H. Morita, M. Hoshi
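A small simulation sketch of the branching process described in the preceding abstract (illustration only; a node cap stands in for an infinite tree, which is an approximation):

```python
# Grow a random k-ary tree in which each node either spawns k children
# (probability p) or terminates (probability q = 1 - p), and estimate how
# often the growth exceeds a cap, as a proxy for an infinite tree.
import random

def grows_forever(k, p, max_nodes=100_000):
    frontier = 1          # nodes whose branching decision is still pending
    total = 1
    while frontier:
        children = sum(k for _ in range(frontier) if random.random() < p)
        total += children
        if total > max_nodes:
            return True
        frontier = children
    return False

k, trials = 3, 2000
for p in (0.2, 1 / k, 0.5):
    infinite = sum(grows_forever(k, p) for _ in range(trials))
    print(f"p = {p:.3f}: fraction exceeding cap ~ {infinite / trials:.3f}")
```

Consistent with the abstract, for p ≤ 1/k the fraction stays essentially zero, while for p above 1/k a positive fraction of trees keeps growing.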
On Concepts of Performance Parameters for Channels

Among the most commonly investigated parameters for noisy channels are code size, error probability in decoding, block length; rate, capacity, reliability function; delay, complexity of coding. There are several statements about connections between these quantities. They carry names like “coding theorem”, “converse theorem” (weak, strong, ...), “direct theorem”, “capacity theorem”, “lower bound”, “upper bound”, etc. There are analogous notions for source coding.

This note has become necessary after the author noticed that Information Theory suffers from a lack of precision in terminology. Its purpose is to open a discussion about this situation with the goal to gain more clarity.

There is also some confusion concerning the scopes of analytical and combinatorial methods in probabilistic coding theory, particularly in the theory of identification. We present a covering (or approximation) lemma for hypergraphs, which especially makes strong converse proofs in this area transparent and dramatically simplifies them.

R. Ahlswede
Appendix: On Common Information and Related Characteristics of Correlated Information Sources

This is a literal copy of a manuscript from 1974. References have been updated. It contains a critical discussion of the then recent concepts of “common information” and also suggests alternative definitions. (Compare pages 402–405 in the book by I. Csiszár and J. Körner, “Information Theory: Coding Theorems for Discrete Memoryless Systems”, Akademiai Kiado, Budapest 1981.) One of our definitions gave rise to the now well-known source coding problem for two helpers (formulated in 2.) on page 7.

More importantly, an extension of one concept to “common information with list knowledge” has recently (R. Ahlswede and V. Balakirsky, “Identification under Random Processes”, invited paper in honor of Mark Pinsker, Sept. 1995) turned out to play a key role in analyzing the contribution of a correlated source to the identification capacity of a channel.

Thus the old ideas have now led to concepts of operational significance and are therefore made accessible here.

R. Ahlswede, J. Körner

Search – Sorting – Ordering – Planning

Q-Ary Ulam-Renyi Game with Constrained Lies

The Ulam-Rényi game is a classical model for the problem of finding an unknown number in a finite set using as few questions as possible when up to a finite number e of the answers may be lies. In the variant we consider in this paper, questions with q many possible answers are allowed, q fixed and known beforehand, and lies are constrained as follows: Let ${\mathcal Q} = \{0,1,\dots, q-1\}$ be the set of possible answers to a q-ary question. For each $k \in {\mathcal Q}$, when the sincere answer to the question is k, the responder can choose a mendacious answer only from a set $L(k) \subseteq {\mathcal Q} \setminus \{k\}$. For each $k \in {\mathcal Q}$, the set L(k) is fixed before the game starts and known to the questioner. The classical q-ary Ulam-Rényi game, in which the responder is completely free in choosing the lies, corresponds in our setting to the particular case $L(k) = {\mathcal Q} \setminus \{k\}$ for each $k \in {\mathcal Q}$. The problem we consider here is suggested by the counterpart of the Ulam-Rényi game in the theory of error-correcting codes, where (the counterparts of) lies are due to the noise in the channel carrying the answers. We shall use our assumptions on the noise and its effects (as represented by the constraints L(k) over the possible error patterns) with the aim of producing the most efficient search strategies. We solve the problem by assuming some symmetry on the sets L(k): specifically, we assume that there exists a constant $d \leq q-1$ such that |L(k)| = d for each k, and that the number of indices j such that $k \in L(j)$ is equal to d. We provide a lower bound on the number of questions needed to solve the problem and prove that in infinitely many cases this bound is attained by (optimal) search strategies. Moreover we prove that, in the remaining cases, at most one question more than the lower bound is always sufficient to successfully find the unknown number. Our results are constructive and search strategies are actually provided. All our strategies also enjoy the property that, among all possible adaptive strategies, they use the minimum amount of adaptiveness during the search process.

F. Cicalese, Christian Deppe
Search with Noisy and Delayed Responses

It is well–known that search problems with a stochastic response matrix acting independently for the questions can be equivalently formulated as transmission problems for a discrete memoryless channel (DMC) with feedback.

This is explained in Chapter 3 of the book Search Problems by R. Ahlswede and I. Wegener (Wiley 1987, translation of Suchprobleme, Teubner 1979).

There, Ahlswede’s coding schemes achieving the capacities of the DMC and of the arbitrarily varying channel (AVC) are also described. The latter can be viewed as a robust model for search.

In this paper we analyse this robust model with a time delay for the noiseless feedback. In the terminology of search this means that the answers are given with delay.

We determine the (asymptotically) optimal performances, that is, we find the capacities, for the cases where the delay is constant and where it is linear in the blocklength. Finally we also give the corresponding results for the DMC with zero-error probability.

R. Ahlswede, N. Cai
A Kraft–Type Inequality for d–Delay Binary Search Codes

Among the models of delayed search discussed in [1], [2], the simplest one can be formulated as the following two-player game. One player, say A, holds a secret number $m\in{\mathcal M}\triangleq\{1,2,\dots,M\}$ and another player, say Q, tries to learn the secret number by asking A at time i questions like “Is $m \leq x_i$?”, where $x_i$ is a number chosen by Q. The rule is that at time i + d, A must answer Q’s question of time i correctly, and at time j, Q can choose $x_j$ according to all answers he has received.

R. Ahlswede, N. Cai
Threshold Group Testing

We introduce a natural generalization of the well-studied group testing problem: A test gives a positive (negative) answer if the pool contains at least u (at most l) positive elements, and an arbitrary answer if the number of positive elements is between these fixed thresholds l and u. We show that the p positive elements can be determined up to a constant number of misclassifications, bounded by the gap between the thresholds. This is in a sense the best possible result. Then we study the number of tests needed to achieve this goal if n elements are given. If the gap is zero, the complexity is, similarly to classical group testing, O(p log n) for any fixed u. For the general case we propose a two-phase strategy consisting of a Distill and a Compress phase. We obtain some tradeoffs between classification accuracy and the number of tests.

P. Damaschke
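A minimal sketch of the threshold test oracle described in the preceding abstract, with an arbitrary (here: random) answer in the gap between the thresholds; the item universe and hidden positive set are made up for illustration.

```python
# Threshold group test: positive if the pool holds >= u positives, negative
# if it holds <= l positives, arbitrary otherwise (illustration only).
import random

def threshold_test(pool, positives, l, u):
    count = len(set(pool) & positives)
    if count >= u:
        return True
    if count <= l:
        return False
    return random.choice([True, False])   # gap region: answer is arbitrary

positives = {2, 5, 11, 17}                 # hidden positives among items 0..19
print(threshold_test(range(10), positives, l=1, u=3))   # count 2: arbitrary answer
print(threshold_test(range(20), positives, l=1, u=3))   # count 4 >= u: True
```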
A Fast Suffix-Sorting Algorithm

We present an algorithm to sort all suffixes of $x^n=(x_1,\dots,x_n) \in {\cal X}^n$ lexicographically, where ${\cal X}=\{0,\dots,q-1\}$. Fast and efficient sorting of a large amount of data according to its suffix structure (suffix-sorting) is a useful technology in many fields of application, foremost in the field of data compression, where it is used e.g. for the Burrows and Wheeler Transformation (BWT for short), a block-sorting transformation ([3],[9]).

R. Ahlswede, B. Balkenhol, C. Deppe, M. Fröhlich
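For orientation, here is a naive suffix-sorting and BWT sketch (illustration only; the paper presents a fast algorithm, which this naive version is not):

```python
# Naive lexicographic suffix sorting, plus the Burrows-Wheeler transform
# obtained from sorted cyclic rotations, to show what suffix order is used for.
def suffix_array(x):
    """Indices of the suffixes of x in lexicographic order (naive O(n^2 log n))."""
    return sorted(range(len(x)), key=lambda i: x[i:])

def bwt(x, sentinel='\x00'):
    """Burrows-Wheeler transform via sorted cyclic rotations of x + sentinel."""
    s = x + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return ''.join(rot[-1] for rot in rotations)

text = "abracadabra"
print(suffix_array(text))   # the shortest suffix "a" (index 10) comes first
print(repr(bwt(text)))
```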
Monotonicity Checking

In our thesis we considered the complexity of the monotonicity checking problem: given a finite poset and an unknown real-valued function on it, find out whether this function is monotone. Two decision models were considered: the comparison model, where the queries are usual comparisons, and the linear model, where the queries are comparisons of linear combinations of the input. This is a report on our results.

M. Kyureghyan
Algorithmic Motion Planning: The Randomized Approach

Algorithms based on randomized sampling have proved to be the only viable algorithmic tool for quickly solving motion planning problems involving many degrees of freedom. Information on the configuration space is acquired by generating samples and finding simple paths among them. Paths and samples are stored in a suitable data structure. Following this paradigm, in recent years a large number of algorithmic techniques have been proposed and some approaches are now widely used. This survey reviews the main algorithms, outlining their advantages and drawbacks, as well as the knowledge recently acquired in the field.

S. Carpin

Language Evolution – Pattern Discovery – Reconstructions

Information Theoretic Models in Language Evolution

We study a model for language evolution which was introduced by Nowak and Krakauer ([12]). We analyze discrete distance spaces and prove a conjecture of Nowak for all metrics with a positive semidefinite associated matrix. This natural class of metrics includes all metrics studied by different authors in this connection. In particular it includes all ultra-metric spaces.

Furthermore, the role of feedback is explored and multi-user scenarios are studied. In all models we give lower and upper bounds for the fitness.

R. Ahlswede, E. Arikan, L. Bäumer, C. Deppe
Zipf’s Law, Hyperbolic Distributions and Entropy Loss

Zipf’s law – or Estoup-Zipf’s law – is an empirical fact of computational linguistics which relates rank and frequency of words in natural languages. The law suggests modelling by distributions of “hyperbolic type”. We present a satisfactory general definition and an information theoretical characterization of the resulting hyperbolic distributions. When applied to linguistics this leads to a property of stability and flexibility, explaining that a language can develop towards higher and higher expressive powers without changing its basic structure.

P. Harremoës, F. Topsoe
Bridging Lossy and Lossless Compression by Motif Pattern Discovery

We present data compression techniques hinged on the notion of a motif, interpreted here as a string of intermittently solid and wild characters that recurs more or less frequently in an input sequence or family of sequences. This notion arises originally in the analysis of sequences, particularly biomolecules, due to its multiple implications in the understanding of biological structure and function, and it has been the subject of various characterizations and studies. Correspondingly, motif discovery techniques and tools have been devised. This task is made hard by the circumstance that the number of motifs identifiable in general in a sequence can be exponential in the size of that sequence. A significant gain in the direction of reducing the number of motifs is achieved through the introduction of irredundant motifs, which in intuitive terms are motifs whose structure and list of occurrences cannot be inferred from a combination of other motifs’ occurrences. Although suboptimal, the available procedures for the extraction of some such motifs are not prohibitively expensive. Here we show that irredundant motifs can be usefully exploited in lossy compression methods based on textual substitution and suitable for signals as well as text. Actually, once the motifs in our lossy encodings are disambiguated into corresponding lossless codebooks, they still prove capable of yielding savings over popular methods in use. Preliminary experiments with these fungible strategies at the crossroads of lossless and lossy data compression show performances that improve over popular methods (i.e. GZip) by more than 20% in lossy and 10% in lossless implementations.

A. Apostolico, M. Comin, L. Parida
Reverse–Complement Similarity Codes

In this paper, we discuss a general notion of similarity function between two sequences which is based on their common subsequences. This notion arises in some applications of molecular biology [14]. We introduce the concept of similarity codes and study the logarithmic asymptotics for the size of optimal codes. Our mathematical results announced in [13] correspond to the longest common subsequence (LCS) similarity function [2], which leads to a special subclass of these codes called reverse-complement (RC) similarity codes. RC codes for additive similarity functions have been studied in previous papers [9,10,11,12].

A. D’yachkov, D. Torney, P. Vilenkin, S. White
On Some Applications of Information Indices in Chemical Graph Theory

Information theory has been used in various branches of science. During recent years it has been applied extensively in chemical graph theory for describing chemical structures and for providing good correlations between physico-chemical and structural properties by means of information indices. The application of information indices to the problem of characterizing molecular structures is presented in the paper. The information indices based on the distance in a graph are considered with respect to their correlating ability and discriminating power.

Mathematics Subject Classification 2000: 94A15, 94A17, 94C15.

E. V. Konstantinova
Largest Graphs of Diameter 2 and Maximum Degree 6

The results of computer generation of the largest graphs of diameter 2 and maximum degree 6 are presented. The order of such graphs is equal to 32. There are exactly 6 graphs of diameter 2 and maximum degree 6 on 32 vertices, including one vertex-transitive graph.

S. G. Molodtsov

Network Coding

An Outside Opinion

In order to get an outside opinion about Network Coding we cite here the Network Coding Homepage (www.networkcoding.info) of a leading expert, Ralf Koetter.

Welcome to the Network Coding Home Page. This site is meant to provide a service to the community by summarizing the main developments in network coding. Our hope is that this site can serve as a repository and resource for researchers and scientists in the field.

R. Ahlswede
Problems in Network Coding and Error Correcting Codes Appended by a Draft Version of S. Riis “Utilising Public Information in Network Coding”

In most of today’s information networks, messages are sent in packets of information that are not modified or mixed with the content of other packets during transmission. This holds on the macro level (e.g. the internet, wireless communications) as well as on the micro level (e.g. communication within processors, communication between a processor and external devices).

Today messages in wireless communication are sent in a manner where each active communication channel carries exactly one “conversation”. This approach can be improved considerably by a cleverly designed but sometimes rather complicated channel sharing scheme (network coding). The approach is very new and is still in its pioneering phase. Worldwide only a handful of papers in network coding were published in 2001 or before, 8 papers in 2002, 23 papers in 2003 and over 25 papers already in the first half of 2004 (according to the database developed by R. Koetter). The first conference on network coding and applications is scheduled for Trento, Italy, April 2005. Research into network coding is growing fast, and Microsoft, IBM and other companies have research teams who are researching this new field. A few American universities (Princeton, MIT, Caltech and Berkeley) have also established research groups in network coding.

S. Riis, R. Ahlswede

Combinatorial Models

Coverings

On the Thinnest Coverings of Spheres and Ellipsoids with Balls in Hamming and Euclidean Spaces

In this paper, we present some new results on the thinnest coverings that can be obtained in Hamming or Euclidean spaces if spheres and ellipsoids are covered with balls of some radius ε. In particular, we tighten the bounds currently known for the ε-entropy of Hamming spheres of an arbitrary radius r. New bounds for the ε-entropy of Hamming balls are also derived. If both parameters ε and r are linear in the dimension n, then the upper bounds exceed the lower ones by an additive term of order log n. We also present uniform bounds valid for all values of ε and r.

In the second part of the paper, new sufficient conditions are obtained which allow one to verify the validity of the asymptotic formula for the size of an ellipsoid in a Hamming space. Finally, we survey recent results concerning coverings of ellipsoids in Hamming and Euclidean spaces.

I. Dumer, M. S. Pinsker, V. V. Prelov
Appendix: On Set Coverings in Cartesian Product Spaces

Consider $(X,{\mathcal E})$, where X is a finite set and ${\mathcal E}$ is a system of subsets whose union equals X. For every natural number n ∈ ℕ define the cartesian products $X_n=\prod_1^n X$ and ${\mathcal E}_n=\prod_1^n{\mathcal E}$. The following problem is investigated: how many sets of ${\mathcal E}_n$ are needed to cover $X_n$? Let this number be denoted by c(n). It is proved that for all n ∈ ℕ

$\exp\{C\cdot n\}\leq c(n)\leq\exp\{Cn+\log n+\log\log|X|\}+1.$

A formula for C is given. The result generalizes to the case where X and ${\mathcal E}$ are not necessarily finite and also to the case of non-identical factors in the product. As applications one obtains estimates on the minimal size of an externally stable set in cartesian product graphs and also estimates on the minimal number of cliques needed to cover such graphs.

R. Ahlswede

Partitions

Testing Sets for 1-Perfect Code

This paper continues the research of [1,2]. In [1] it was shown that a 1-perfect code is uniquely determined by its vertices at the middle levels of the hypercube, and in [2] the corresponding formula was obtained. Now we prove that the vertices at the r-th level, r ≤ (n−1)/2, of such a code of length n uniquely determine all code vertices at the lower levels.

S. V. Avgustinovich, A. Yu. Vasil’eva
On Partitions of a Rectangle into Rectangles with Restricted Number of Cross Sections

Consider partitions of a rectangle into rectangles with a restricted number of cross sections. The problem has an information theoretic flavor (in the spirit of [3]): the richness of two-dimensional pictures under one-dimensional restrictions, that is, passing from local information to global information.

We present two approaches to upper bound the cardinality of such partitions: a combinatorial recursion and harmonic analysis.

R. Ahlswede, A. A. Yudin

Isoperimetry

On Attractive and Friendly Sets in Sequence Spaces

To a large extent the present work is far from being conclusive; instead, new directions of research in combinatorial extremal theory are started. Questions concerning generalizations are also immediately noticeable.

The incentive came from problems in several fields such as Algebra, Geometry, Probability, Information and Complexity Theory. Like several basic combinatorial problems they may play a role in other fields. For scenarios of interplay we refer also to [9].

R. Ahlswede, L. Khachatrian
Remarks on an Edge Isoperimetric Problem

Among all collections of a given number of k-element subsets of an n-element groundset, find a collection which maximizes the number of pairs of subsets which intersect in k−1 elements.

This problem was solved for k = 2 by Ahlswede and Katona, and is open for k > 2.

We survey some linear algebra approaches which yield estimates for the maximum number of pairs, and we present another short proof of the Ahlswede-Katona result.

C. Bey
Appendix: On Edge–Isoperimetric Theorems for Uniform Hypergraphs

Denote by Ω = {1,...,n} an n-element set. For all $A,B\in\binom{\Omega}k$, the k-element subsets of Ω, define the relation ~ as follows: A ~ B iff A and B have a common shadow, i.e. there is a $C\in\binom{\Omega}{k-1}$ with C ⊂ A and C ⊂ B. For a fixed integer α, our goal is to find a family ${\mathcal A}$ of k-subsets of size α having as many ~-relations among pairs of its elements as possible. For k = 2 this was achieved by Ahlswede and Katona [2] many years ago.

R. Ahlswede, N. Cai

Isodiametry

Appendix: Solution of Burnashev’s Problem and a Sharpening of the Erdős/Ko/Rado Theorem

Motivated by a coding problem for Gaussian channels, Burnashev came to the following Geometric Problem (which he stated at the Information Theory Meeting in Oberwolfach, Germany, April 1982). For every δ > 0, does there exist a constant λ(δ) > 0 such that the following is true: “Every finite set $\{x_1,\dots,x_N\}$ in a Hilbert space H has a subset $\{x_{i_1},\dots,x_{i_M}\}$, M ≥ λ(δ)N, without ‘bad’ triangles. (A triangle is bad if one side is longer than 1+δ and the two others are shorter (≤) than 1.)”?

R. Ahlswede

Networks

Realization of Intensity Modulated Radiation Fields Using Multileaf Collimators

In the treatment of cancer using high-energy radiation, the problem arises of how to irradiate the tumor without damaging the healthy tissue in the immediate vicinity. In order to do this as efficiently as possible, intensity modulated radiation therapy (IMRT) is used. A modern way to modulate the homogeneous radiation field delivered by an external accelerator is to use a multileaf collimator in the static or in the dynamic mode. In this paper several aspects of the construction of optimal treatment plans are discussed and some algorithms for this task are described.

T. Kalinowski
Sparse Asymmetric Connectors in Communication Networks

An (n, N, d)-connector is an acyclic digraph with n inputs and N outputs in which, for any injective mapping of input vertices into output vertices, there exist n vertex-disjoint paths of length d joining each input to its corresponding output. We consider the problem of constructing sparse (n, N, 2)-connectors (depth-2 connectors) when $n \ll N$. The probabilistic argument in [1] shows the existence of (n, N, 2)-connectors of size (number of edges) O(N) if $n\leq N^{1/2-\varepsilon},\ \varepsilon >0$. However, the known explicit constructions with $n\leq\sqrt{N}$ in [6],[1],[2] are of size O$(N\sqrt{n})$. Here we present a simple combinatorial construction for (n, N, 2)-connectors of size O(N log² n). We also consider depth-2 fault-tolerant connectors under arc or node failures.

R. Ahlswede, H. Aydinian

Problem Section

Finding $C_{\text{NRI}}({\mathcal W})$, the Identification Capacity of the AVC ${\mathcal W}$, if Randomization in the Encoding Is Excluded

For a DMC W it is not hard to show that $C_{\text{NRI}}(W)$ equals the logarithm of the number of different row-vectors in W (see [A145]). Also, for an AVC ${\mathcal W}_0$ with only 0-1-matrices as transmission matrices, $C_{\text{NRI}}({\mathcal W}_0)$ equals the capacity $C({\mathcal W}_0)$ for transmission, which in turn equals Shannon’s zero error capacity $C_0(\overline{W})$, where $\overline{W}=\frac 1{|{\mathcal W}_0|}\sum\limits_{W\in{\mathcal W}_0}W$ (see [A7]).

R. Ahlswede
Intersection Graphs of Rectangles and Segments

Let F be a finite family of sets and G(F) be the intersection graph of F (the vertices of G(F) are the sets of the family F and the edges of G(F) correspond to intersecting pairs of sets). The transversal number τ(F) is the minimum number of points meeting all sets of F. The independence (stability) number α(F) is the maximum number of pairwise disjoint sets in F. The clique number ω(F) is the maximum number of pairwise intersecting sets in F. The coloring number q(F) is the minimum number of classes in a partition of F into pairwise disjoint sets.

R. Ahlswede, I. Karapetyan
Cutoff Rate Enhancement

The cutoff rate of a discrete memoryless channel (DMC) $W:{\cal X} \longrightarrow {\cal Y}$ is defined as

$R_0(W) = \max_Q -\log \sum\limits_{y \in {\cal Y}} \left[ \sum\limits_{x \in {\cal X}} Q(x) \sqrt{W(y|x)}\right]^2$

where the maximum is over all probability distributions on ${\cal X}$. This parameter serves as a figure of merit for coding applications. There is also a decoding algorithm known as ‘sequential decoding’ that can readily achieve rates up to $R_0$.

E. Arikan
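A minimal sketch evaluating the cutoff-rate expression above for a binary symmetric channel, using base-2 logarithms and a crude grid search over input distributions (illustration only):

```python
# Evaluate -log2 sum_y [ sum_x Q(x) sqrt(W(y|x)) ]^2 for a given Q, then
# maximize over binary input distributions by grid search.
import math

def cutoff_objective(W, Q):
    """W[x][y] = transition probability, Q[x] = input probability."""
    total = 0.0
    for y in range(len(W[0])):
        inner = sum(Q[x] * math.sqrt(W[x][y]) for x in range(len(W)))
        total += inner * inner
    return -math.log2(total)

p = 0.1                                   # binary symmetric channel
W = [[1 - p, p], [p, 1 - p]]
best = max(cutoff_objective(W, [q, 1 - q]) for q in [i / 1000 for i in range(1001)])
print("R_0 ~", best)   # maximum is attained at the uniform input distribution
```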
Some Problems in Organic Coding Theory

Organic coding theory describes, analyses, and simulates the structure and function of organic codes (OC), viz., of codes used in living systems as sets of arbitrary rules of encoding and decoding between two independent subsystems [3]. A very well-known example of an OC is the genetic code (GC), a degenerate quaternary code of length 3 whose codewords (mRNA triplets) encode amino acids, which are component parts of the primary structure of proteins, and the beginning and end of encoding mRNA sequences. From a semiotic point of view, the GC is of great interest because it breaks free from a purely symbolic way of encoding, which can be characterized as resulting in codes wherein the mutual Kolmogorov complexity of the encoding and the encoded structure is nearly equal to the Kolmogorov complexity of each one of these structures [2]. The GC has been analysed thoroughly for fifty years, but organic coding theory is still in its infancy. Many problems of formal, empirical, and methodological nature are open. In the following, I present three of them: the empirical problem of the evolutionary function of OCs (2), the methodological problem of the arbitrariness of OCs (3), and the formal problem of selecting an adequate model of OCs (4).

S. Artmann
Generalized Anticodes in Hamming Spaces

Let ${\cal H}(n)=\{0,1\}^n$ denote the binary Hamming space with the Hamming distance $d_H$. The Hamming weight is denoted by $\mathrm{wt}_H$. Given integers $l\geq 1,\ 1\leq\delta <n$, let ${\cal A\subset H}(n)$ satisfy Condition (D): for every subset $A\subset{\cal A}$ with $|A| = l+1$ there exist two distinct points $a, b \in A$ with $d_H(a,b) \leq \delta$.

H. Aydinian
Two Problems from Coding Theory

We suggest proving the following two conjectures about the properties of some special functions.

First consider the following polynomial in $\lambda\in [0, (q-1)/q],\ q=2,3,\ldots$:

$f^{q}_L (\lambda )=\sum\limits_{j_i :\ \sum_{i=1}^{q}j_i =L+1}\left( 1-\frac{\max\{ j_1 , \ldots ,j_q \}}{L+1}\right){L+1\choose j_1 ,\ldots ,j_{q}}\left(\frac{\lambda}{q-1}\right)^{L+1-j_q} (1-\lambda )^{j_q} .$

It arises in the problem of obtaining an upper bound for the rate of multiple packing of the q-ary Hamming space. The problem is to prove that this function is ∩-convex.

V. Blinovsky
Private Capacity of Broadcast Channels

The broadcast channel was introduced by T. M. Cover in 1972 [7]. In its simplified version, it has one sender (or encoder) E and two users (or decoders) $D_l, \ l=1, \ 2$. The sender E is required to send the messages $m_1$ and $m_2$, uniformly chosen from the message sets ${\cal M}_1$ and ${\cal M}_2$ respectively, to $D_1$ and $D_2$ correctly with probability close to one. That is, the sender encodes the message $(m_1, m_2)$ into an input sequence $x^n$ over a finite input alphabet ${\cal X}$ and sends it to the two users via two noisy channels $W^n$ and $V^n$, respectively. The first (second) user $D_1$ ($D_2$) decodes the output sequence $y^n$ over the finite output alphabet ${\cal Y}$ of the channel $W^n$ (the output sequence $z^n$ over the finite output alphabet ${\cal Z}$ of the channel $V^n$) to the first message $\hat{m}_1$ (the second message $\hat{m}_2$). In general, the capacity regions for this kind of channel are still unknown. Their determination is probably one of the hardest open problems in multi-user Shannon theory.

N. Cai
A Short Survey on Upper and Lower Bounds for Multidimensional Zero Sums

After giving some background on sums of residue classes we explained the following problem on multidimensional zero sums, which is well known in combinatorial number theory:

Let f(n,d) denote the least integer such that any choice of f(n,d) elements in $\mathbb{Z}_n^d$ contains a subset of size n whose sum is zero. Harborth [12] proved that $(n-1)2^d+1 \leq f(n,d) \leq (n-1)n^d+1$. The lower bound follows from the example in which there are n−1 copies of each of the $2^d$ vectors with entries 0 or 1. The upper bound follows since any set of $(n-1)n^d+1$ vectors must contain, by the pigeonhole principle, n vectors which are equivalent modulo n.

C. Elsholtz
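A brute-force sketch of f(n,d) for toy parameters, just to make Harborth's bounds concrete (illustration only; the search is exponential and only feasible for the smallest cases):

```python
# Compute f(n, d): the least m such that every multiset of m elements of
# (Z_n)^d contains a subset of size n summing to zero componentwise.
from itertools import combinations_with_replacement, combinations, product

def has_zero_sum_subset(multiset, n):
    return any(all(sum(v[i] for v in sub) % n == 0 for i in range(len(sub[0])))
               for sub in combinations(multiset, n))

def f(n, d):
    elements = list(product(range(n), repeat=d))
    m = n
    while True:
        if all(has_zero_sum_subset(ms, n)
               for ms in combinations_with_replacement(elements, m)):
            return m
        m += 1

n, d = 2, 2
print(f(n, d), "vs bounds", (n - 1) * 2**d + 1, (n - 1) * n**d + 1)  # 5 vs 5, 5
```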
Binary Linear Codes That Are Optimal for Error Correction

When a binary linear [n, k] code C is used for error correction on the binary symmetric channel with bit error probability p, and the decoding algorithm is maximum likelihood decoding, the probability of correct decoding $P_{\rm cd}(C,p)$ is given by

$$P_{\rm cd}(C,p)=\sum_{i=0}^n\alpha_i \, p^i(1-p)^{n-i},$$

where $\alpha_i$ is the number of correctable errors (coset leaders) of weight i.

An [n, k] code C is optimal for p, where 0 < p < 1/2, if

$$P_{\rm cd}(C,p)\geq P_{\rm cd}(D,p)$$

for all [n, k] codes D.

T. Kløve
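A minimal sketch computing $P_{\rm cd}(C,p)$ for a small binary linear code by enumerating cosets and their leaders (illustration only; the generator matrix below is a toy example):

```python
# Compute the coset-leader weight distribution alpha_i of a small binary
# linear code and from it the probability of correct decoding P_cd(C, p).
from itertools import product

def pcd(generator_rows, p):
    k, n = len(generator_rows), len(generator_rows[0])
    codewords = set()
    for bits in product((0, 1), repeat=k):            # all 2^k codewords
        codewords.add(tuple(sum(b * g[j] for b, g in zip(bits, generator_rows)) % 2
                            for j in range(n)))
    leader_weight = {}                                # coset -> min weight in it
    for word in product((0, 1), repeat=n):
        coset = frozenset(tuple((w + c) % 2 for w, c in zip(word, cw))
                          for cw in codewords)
        leader_weight[coset] = min(sum(word), leader_weight.get(coset, n + 1))
    alpha = [0] * (n + 1)
    for w in leader_weight.values():
        alpha[w] += 1
    return sum(a * p**i * (1 - p)**(n - i) for i, a in enumerate(alpha))

# [3,1] repetition code corrects every single error:
print(pcd([(1, 1, 1)], 0.1))   # (1-p)^3 + 3 p (1-p)^2 = 0.972
```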
Capacity Problem of Trapdoor Channel

The capacity problem of the trapdoor channel is one of the famous long-standing problems. The trapdoor channel considered by Blackwell [1] is a typical example of a channel with memory. Ash [2] expressed the channel by using two channel matrices with four states, although this expression does not necessarily make the problem tractable. In [5] we provided an interesting recurrence giving the explicit expression of the conditional distributions of the channel output sequences given input sequences of length n. In order to determine the capacity of this special channel, we have to understand the fractal structure buried in the channel matrices between input and output sequences of long length.

K. Kobayashi
Hotlink Assignment on the Web

Here, we explain the problem of assigning hotlinks to web pages. We indicate the known results for this problem and the open questions.

E. S. Laber
The Rigidity of Hamming Spaces

We call a set B a base of a metric space L if every point of L is uniquely determined by its distances to the points of B.

The minimal possible number of points of a base is called the rigidity of the metric space and is denoted by r(L).

The n-dimensional q-ary Hamming space is denoted by $H_{n,q}$.

V. S. Lebedev
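A brute-force sketch of the rigidity of a small binary Hamming space, directly following the definition above (illustration only; the search over bases is exponential and only works for tiny parameters):

```python
# Find the smallest base B of H_{n,q}: a set of points whose distance
# profile identifies every point of the space uniquely.
from itertools import combinations, product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def rigidity(n, q):
    points = list(product(range(q), repeat=n))
    for size in range(1, len(points) + 1):
        for base in combinations(points, size):
            profiles = {tuple(hamming(p, b) for b in base) for p in points}
            if len(profiles) == len(points):   # all distance profiles distinct
                return size
    return len(points)

print(rigidity(2, 2))   # the 2-cube needs a base of size 2
print(rigidity(3, 2))   # the 3-cube needs a base of size 3
```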
A Conjecture in Finite Fields

Let $q\equiv 1\textrm{ (mod 4)}$ be a prime power. For a primitive element α of $\mathbb{F}:=\textrm{GF}(q)$ and an element $0\ne t\in\mathbb{F}$ we define a graph G = G(α, t) on the vertex set $\mathbb{F}$ by $E(G)=E_1\cup E_2$, where

$E_1=\left\{\{\alpha^{2i},\alpha^{2i+1}\}\mid i=0,1,\dots, (q-3)/2\right\},$

$E_2=\left\{\{\alpha^{2i+1}+t,\alpha^{2i+2}+t\}\mid i=0,1,\dots, (q-3)/2\right\}.$

It is easy to verify that $E_1\cap E_2=\emptyset$. Hence, all vertices of G have degree two, except 0 and t, which have degree one. In other words, one component of G is a path connecting 0 and t, and all other components are cycles (of even length). Moreover, if $0\ne t,t'\in\mathbb{F}$, then G(α, t) and G(α, t′) are isomorphic [4], justifying the notation G(α).

U. Leck
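A minimal sketch constructing G(α, t) for a small prime q ≡ 1 (mod 4), so that GF(q) reduces to arithmetic mod q (illustration only; the prime q = 13 and the choice t = 1 are arbitrary):

```python
# Build the edge sets E_1 and E_2 defined above and check the stated degree
# pattern: 0 and t have degree one, every other vertex degree two.
def primitive_root(q):
    for a in range(2, q):
        if len({pow(a, i, q) for i in range(1, q)}) == q - 1:
            return a

def edges(q, t):
    a = primitive_root(q)
    e1 = {frozenset({pow(a, 2 * i, q), pow(a, 2 * i + 1, q)})
          for i in range((q - 1) // 2)}                           # i = 0..(q-3)/2
    e2 = {frozenset({(pow(a, 2 * i + 1, q) + t) % q, (pow(a, 2 * i + 2, q) + t) % q})
          for i in range((q - 1) // 2)}
    assert not (e1 & e2)                                          # E_1 and E_2 disjoint
    return e1 | e2

q, t = 13, 1
degree = {v: 0 for v in range(q)}
for e in edges(q, t):
    for v in e:
        degree[v] += 1
print(sorted(v for v, d in degree.items() if d == 1))   # expect [0, t] = [0, 1]
```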
Multiparty Computations in Non-private Environments

Private multi-party computation is an intensively studied subject of modern cryptography. In general, private computation can be defined as follows: Consider a set of players, where each player knows an individual secret. The goal is to compute a function depending on these secrets such that after the computation none of the players knows anything about the secrets of the others that cannot be derived from his own input and the result of the function. To compute the function, the players exchange messages with each other using secure links. For a formal definition of cryptographically secure privacy see [8], and for privacy in the information theoretic sense see [5,2].

M. Liśkiewicz
Some Mathematical Problems Related to Quantum Hypothesis Testing

We present two open problems related to the asymptotics of quantum hypothesis testing, together with some discussions about their mutual relation, the classical counterparts and a possible conjecture.

Hiroshi Nagaoka
Designs and Perfect Codes

A Steiner triple system of order n (briefly STS(n)) is a family of 3-element blocks (subsets or triples) of the set N = {1,2,...,n} such that each unordered pair of elements of N appears in exactly one block.

Two STSs of order n are called isomorphic if there exists a permutation of the set N which transforms them into one another. It is well known that an STS(n) exists if and only if n ≡ 1 or 3 (mod 6).

F. I. Solov’eva
Backmatter
Metadata
Title
General Theory of Information Transfer and Combinatorics
Edited by
Rudolf Ahlswede
Lars Bäumer
Ning Cai
Harout Aydinian
Vladimir Blinovsky
Christian Deppe
Haik Mashurian
Copyright year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-46245-3
Print ISBN
978-3-540-46244-6
DOI
https://doi.org/10.1007/11889342
