
2013 | Book

Information Theory, Combinatorics, and Search Theory

In Memory of Rudolf Ahlswede

Editors: Harout Aydinian, Ferdinando Cicalese, Christian Deppe

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

About this book

This volume is dedicated to the memory of Rudolf Ahlswede, who passed away in December 2010. The Festschrift contains 36 thoroughly refereed research papers from a memorial symposium, which took place in July 2011.

The four macro-topics of this workshop (theory of games and strategic planning; combinatorial group testing and database mining; computational biology and string matching; information coding and spreading and patrolling on networks) provide a comprehensive picture of the vision Rudolf Ahlswede put forward of a broad and systematic theory of search.

Table of Contents

Frontmatter

Information Theory

Two New Results for Identification for Sources

We provide two new results for identification for sources. The first result is about block codes. In [Ahlswede and Cai, IEEE-IT, 52(9), 4198-4207, 2006] it is proven that the q-ary identification entropy $H_{I,q}(P)$ is a lower bound for the average number $L(P,P)$ of expected checkings during the identification process. A necessary assumption for this proof is that the uniform distribution minimizes the symmetric running time $L_{\mathcal C}(P,P)$ for binary block codes $\mathcal C=\{0,1\}^k$. This assumption is proved in Sect. 2 not only for binary block codes but for any q-ary block code. The second result is about upper bounds for the worst-case running time. In [Ahlswede, Balkenhol and Kleinewächter, LNCS, 4123, 51-61, 2006] the authors proved in Theorem 3 that $L(P) < 3$ by an inductive code construction. We discover an alteration of their scheme which strengthens this upper bound significantly.
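
For orientation (the abstract does not restate it), the q-ary identification entropy of the cited 2006 paper by Ahlswede and Cai is usually written as

$$H_{I,q}(P)=\frac{q}{q-1}\left(1-\sum_{u\in\mathcal U}p_u^2\right),$$

the first-order companion of the second-order identification entropy displayed in the following abstract; the cited paper should be consulted for the exact conventions.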

Christian Heup
L-Identification for Uniformly Distributed Sources and the q-ary Identification Entropy of Second Order

In this article we generalize the concept of identification for sources, which was introduced by Ahlswede, to the concept of L-identification for sources. This means that we do not only consider a discrete source but a discrete memoryless source (DMS) with L outputs. The task of L-identification is now to check for any previously given output whether it is part of the L outputs of the DMS. We establish a counting lemma and use it to show that, if the source is uniformly distributed, the L-identification symmetric running time asymptotically equals the rational number

$$K_{L,q}=-\sum_{l=1}^L(-1)^l{L \choose l}\frac{q^l}{q^l-1}\enspace.$$

We then turn to general distributions and aim to establish a lower bound for the symmetric 2-identification running time. In order to use the above asymptotic result we first concatenate a given code sufficiently many times and show that for 2-identification the uniform distribution is optimal, thus yielding a first lower bound. This lower bound contains the symmetric (1-)identification running time negatively signed, so that (1-)identification entropy cannot be applied immediately. However, using the fact that the (1-)identification entropy is attained iff the probability distribution consists only of q-powers, we can show that our lower bound is in this case also exactly met for 2-identification. We then prove that the obtained expression is in general a lower bound for the symmetric 2-identification running time and that it obeys fundamental properties of entropy functions. Hence, the following expression is called the q-ary identification entropy of second order:

$$H_{\mbox{\tiny ID}}^{2,q}(P) =2\frac{q}{q-1}\left(1-\sum_{u\in{\mathcal U}}p_u^2\right)-\frac{q^2}{q^2-1}\left(1-\sum_{u\in{\mathcal U}}p_u^3\right)\enspace.$$
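
As a small numerical companion (not taken from the paper), the two displayed expressions can be evaluated directly as stated in the abstract; the function and variable names below are ours:

from math import comb

def K(L, q):
    # K_{L,q} = -sum_{l=1}^{L} (-1)^l * C(L,l) * q^l / (q^l - 1), as displayed above
    return -sum((-1) ** l * comb(L, l) * q ** l / (q ** l - 1) for l in range(1, L + 1))

def H_ID_2(p, q):
    # Second-order q-ary identification entropy of a probability vector p,
    # following the second displayed formula in the abstract.
    s2 = sum(pu ** 2 for pu in p)
    s3 = sum(pu ** 3 for pu in p)
    return 2 * q / (q - 1) * (1 - s2) - q ** 2 / (q ** 2 - 1) * (1 - s3)

print(K(2, 2))                # 8/3, the uniform binary case of the first formula
print(H_ID_2([0.25] * 4, 2))  # 1.75 for the uniform distribution on 4 symbols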

Christian Heup
Optimal Rate Region of Two-Hop Multiple Access Channel via Amplify-and-Forward Scheme

We study a two-hop multiple access channel (MAC), where two source nodes communicate with the destination node via a set of amplify-and-forward (AF) relays. To characterize the optimal rate region, we focus on deriving its boundary points, which is formulated as a weighted sum rate maximization problem. In the first part, we are concerned with the scenario where all relays are under a sum power constraint. Although the optimal AF rate region for this case has been obtained, we revisit the results by an alternative method. The first step is to investigate the algebraic structures of the three SNR functions in the rate set of the two-hop MAC with a specific AF scheme. Then an equivalent optimization problem is established for deriving each boundary point of the optimal rate region. From the geometric perspective, the problem has a simple solution by optimizing a one-dimensional problem without constraint. In the second part, the optimal rate region of a two-hop MAC under individual power constraints is discussed, which is still an open problem. An algorithm is proposed to compute the maximum individual and sum rates along with the corresponding AF schemes.

Binyue Liu, Ning Cai
Strong Secrecy for Multiple Access Channels

We show strongly secret achievable rate regions for two different wiretap multiple-access channel coding problems. In the first problem, each encoder has a private message and both together have a common message to transmit. The encoders have entropy-limited access to common randomness. If no common randomness is available, then the achievable region derived here does not allow for the secret transmission of a common message. The second coding problem assumes that the encoders do not have a common message nor access to common randomness. However, they may have a conferencing link over which they may iteratively exchange rate-limited information. This can be used to form a common message and common randomness to reduce the second coding problem to the first one. We give the example of a channel where the achievable region equals zero without conferencing or common randomness and where conferencing establishes the possibility of secret message transmission. Both coding problems describe practically relevant networks which need to be secured against eavesdropping attacks.

Moritz Wiese, Holger Boche
Capacity Results for Arbitrarily Varying Wiretap Channels

In this work the arbitrarily varying wiretap channel (AVWC) is studied. We derive a lower bound on the random code secrecy capacity for the average error criterion and the strong secrecy criterion in the case of a best channel to the eavesdropper by using Ahlswede’s robustification technique for ordinary AVCs. We show that in the case of a non-symmetrisable channel to the legitimate receiver the deterministic code secrecy capacity equals the random code secrecy capacity, a result similar to Ahlswede’s dichotomy result for ordinary AVCs. Using this we can derive that the lower bound is also valid for the deterministic code capacity of the AVWC. The proof of the dichotomy result is based on the elimination technique introduced by Ahlswede for ordinary AVCs. We further prove upper bounds on the deterministic code secrecy capacity in the general case, which results in a multi-letter expression for the secrecy capacity in the case of a best channel to the eavesdropper. Using techniques of Ahlswede, developed to guarantee the validity of a reliability criterion, the main contribution of this work is to integrate the strong secrecy criterion into these techniques.

Igor Bjelaković, Holger Boche, Jochen Sommerfeld
On Oblivious Transfer Capacity

Upper and lower bounds to the oblivious transfer (OT) capacity of discrete memoryless channels and multiple sources are obtained, for 1 of 2 strings OT with honest but curious participants. The upper bounds hold also for one-string OT. The results provide the exact value of OT capacity for a specified class of models, and the necessary and sufficient condition of its positivity, in general.

Rudolf Ahlswede, Imre Csiszár
Achieving Net Feedback Gain in the Linear-Deterministic Butterfly Network with a Full-Duplex Relay

A symmetric butterfly network (BFN) with a full-duplex relay operating in a bi-directional fashion for feedback is considered. This network is relevant for a variety of wireless networks, including cellular systems dealing with cell-edge users. Upper bounds on the capacity region of the general memoryless BFN with feedback are derived based on cut-set and cooperation arguments and then specialized to the linear deterministic BFN with relay-source feedback. It is shown that the upper bounds are achievable using combinations of the compute-forward strategy and the classical decode-and-forward strategy, thus fully characterizing the capacity region. It is shown that net rate gains are possible in certain parameter regimes.

Anas Chaaban, Aydin Sezgin, Daniela Tuninetti
Uniformly Generating Origin Destination Tables

Given a network and the total flows into and out of each of the sink and source nodes, it is useful to select uniformly at random an origin-destination (O-D) matrix for which the total in and out flows at sinks and sources (column and row sums) match the given data. We give an algorithm for small networks (fewer than 16 nodes) for sampling such O-D matrices with exactly the uniform distribution and apply it to traffic network analysis. This algorithm can also be applied to communication networks and used in the statistical analysis of contingency tables.
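
As an illustration of the task (a brute-force sketch for toy instances, not the algorithm of the paper; all names are ours), one can enumerate every nonnegative integer matrix with the prescribed row and column sums and pick one uniformly:

import random
from itertools import product

def compositions(total, parts):
    # All ways to write `total` as an ordered sum of `parts` nonnegative integers.
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in compositions(total - first, parts - 1):
            yield (first,) + rest

def uniform_od_matrix(row_sums, col_sums):
    # Enumerate candidate tables row by row, keep those whose column sums match,
    # then draw one uniformly at random.  Feasible only for very small margins.
    cols = len(col_sums)
    tables = [rows for rows in product(*(list(compositions(r, cols)) for r in row_sums))
              if all(sum(row[j] for row in rows) == col_sums[j] for j in range(cols))]
    return random.choice(tables)

print(uniform_od_matrix([2, 1], [1, 2]))  # one of the two feasible 2x2 tables, each with probability 1/2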

David M. Einstein, Lee K. Jones
Identification via Quantum Channels

We review the development of the quantum version of Ahlswede and Dueck’s theory of identification via channels. As is often the case in quantum probability, there is not just one but several quantizations: we know at least two different concepts of identification of classical information via quantum channels, and three different identification capacities for quantum information.

In the present summary overview we concentrate on conceptual points and open problems, referring the reader to the small set of original articles for details.

Andreas Winter
Classical-Quantum Arbitrarily Varying Wiretap Channel

We derive a lower bound on the secrecy capacity of the classical-quantum arbitrarily varying wiretap channel, both with and without channel state information at the transmitter.

Vladimir Blinovsky, Minglai Cai
Arbitrarily Varying and Compound Classical-Quantum Channels and a Note on Quantum Zero-Error Capacities

We consider compound as well as arbitrarily varying classical-quantum channel models. For classical-quantum compound channels, we give an elementary proof of the direct part of the coding theorem. A weak converse under the average error criterion to this statement is also established. We use this result together with the robustification and elimination technique developed by Ahlswede in order to give an alternative proof of the direct part of the coding theorem for finite classical-quantum arbitrarily varying channels with the criterion of success being average error probability. Moreover, we provide a proof of the strong converse to the random coding capacity in this setting.

The notion of symmetrizability for the maximal error probability is defined and it is shown to be both necessary and sufficient for the capacity for message transmission with maximal error probability criterion to equal zero.

Finally, it is shown that the connection between zero-error capacity and certain arbitrarily varying channels is, just like in the case of quantum channels, only partially valid for classical-quantum channels.

Igor Bjelaković, Holger Boche, Gisbert Janßen, Janis Nötzel
On the Value of Multiple Read/Write Streams for Data Compression

We study whether, when restricted to using polylogarithmic memory and polylogarithmic passes, we can achieve qualitatively better data compression with multiple read/write streams than we can with only one. We first show how we can achieve universal compression using only one pass over one stream. We then show that one stream is not sufficient for us to achieve good grammar-based compression. Finally, we show that two streams are necessary and sufficient for us to achieve entropy-only bounds.

Travis Gagie
How to Read a Randomly Mixed Up Message

We review results on scenery reconstruction obtained over the past decade and place the problem into the context of information retrieval, when the message is damaged by a special form of a reading error.

Matthias Löwe
Multiple Objects: Error Exponents in Hypotheses Testing and Identification

We survey a series of investigations of optimal testing of multiple hypotheses concerning various multiobject models.

These studies are a prominent instance of application of methods and techniques developed in Shannon information theory for solution of typical statistical problems.

Evgueni Haroutunian, Parandzem Hakobyan

Combinatorics

Family Complexity and VC-Dimension

In 2003 Ahlswede, Khachatrian, Mauduit and Sárközy introduced the notion of family complexity of binary sequences, and in 2006 Ahlswede, Mauduit and Sárközy extended this definition to sequences of k symbols. Since then, several further related papers have been published on this subject. In this paper our main goal is to present a survey of all these papers. We will also answer a question of Csiszár and Gács on the connection of family complexity and VC-dimension.

Christian Mauduit, András Sárközy
The Restricted Word Shadow Problem

Recently we introduced and studied the shadow minimization problem under the word-subword relation. In this paper we consider this problem for the restricted case and give an optimal solution.

Rudolf Ahlswede, Vladimir Lebedev
Mixed Orthogonal Arrays, k-Dimensional M-Part Sperner Multifamilies, and Full Multitransversals

Aydinian et al. [J. Combinatorial Theory A 118(2) (2011), 702–725] substituted the usual BLYM inequality for L-Sperner families with a set of M inequalities for $(m_1,m_2,\ldots,m_M;L_1,L_2,\ldots,L_M)$ type M-part Sperner families and showed that if all inequalities hold with equality, then the family is homogeneous. Aydinian et al. [Australasian J. Comb. 48 (2010), 133–141] observed that all inequalities hold with equality if and only if the transversal of the Sperner family corresponds to a simple mixed orthogonal array with constraint M, strength M − 1, using $m_i+1$ symbols in the i-th column. In this paper we define k-dimensional M-part Sperner multifamilies with parameters $L_P:~P\in\binom{[M]}{k}$ and prove $\binom{M}{k}$ BLYM inequalities for them. We show that if k < M and all inequalities hold with equality, then these multifamilies must be homogeneous with profile matrices that are strength M − k mixed orthogonal arrays. For k = M, homogeneity is not always true, but some necessary conditions are given for certain simple families. These results extend to products of posets which have the strong normalized matching property. Following the methods of Aydinian et al. [Australasian J. Comb. 48 (2010), 133–141], we give new constructions of simple mixed orthogonal arrays with constraint M, strength M − k, using $m_i+1$ symbols in the i-th column. We extend the convex hull method to k-dimensional M-part Sperner multifamilies, and allow additional conditions providing new results even for simple 1-part Sperner families.
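
For context (not part of the abstract), the classical BLYM (Bollobás–Lubell–Yamamoto–Meshalkin) inequality referred to above states that every Sperner family (antichain) $\mathcal F$ of subsets of an $n$-element set satisfies

$$\sum_{F\in\mathcal F}\binom{n}{|F|}^{-1}\le 1,$$

with equality only for a full level; the M-part and multifamily versions discussed in the paper generalize this inequality.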

Harout Aydinian, Éva Czabarka, László A. Székely
Generic Algorithms for Factoring Strings

In this paper we describe algorithms for factoring words over sets of strings known as circ-UMFFs, generalizations of the well-known Lyndon words based on lexorder, whose properties were first studied in 1958 by Chen, Fox and Lyndon. In 1983 Duval designed an elegant linear-time sequential (RAM) Lyndon factorization algorithm; a corresponding parallel (PRAM) algorithm was described in 1994 by Daykin, Iliopoulos and Smyth. In 2003 Daykin and Daykin introduced various circ-UMFFs, including one based on V-words and V-ordering; in 2011 linear string comparison and sequential factorization algorithms based on V-order were given by Daykin, Daykin and Smyth. Here we first describe generic RAM and PRAM algorithms for factoring a word over any circ-UMFF; then we show how to customize these generic algorithms to yield optimal parallel Lyndon-like V-word factorization.
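
For the sequential case mentioned above, Duval's 1983 linear-time Lyndon factorization admits the following standard rendering (a textbook version for lexorder, not the generic circ-UMFF algorithm of the paper):

def lyndon_factorization(s):
    # Duval's algorithm: returns the unique factorization of s into Lyndon words
    # w1 >= w2 >= ... (lexicographically non-increasing), in O(len(s)) time.
    n, i = len(s), 0
    factors = []
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        while i <= k:
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

print(lyndon_factorization("banana"))  # ['b', 'an', 'an', 'a']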

David E. Daykin, Jacqueline W. Daykin, Costas S. Iliopoulos, W. F. Smyth
On Data Recovery in Distributed Databases

We present an approach for data encoding and recovery of lost information in a distributed database system. The dependencies between the informational redundancy of the database and its recovery rate are investigated, and fast recovery algorithms are developed.

Sergei L. Bezrukov, Uwe Leck, Victor P. Piotrowski
An Unstable Hypergraph Problem with a Unique Optimal Solution

There is a variety of problems in extremal combinatorics for which there is a unique configuration achieving the optimum value. Moreover, as the size of the problem grows, configurations that “almost achieve” the optimal value can be shown to be “almost equal” to the extremal configuration. This phenomenon, known as stability, has been formalized by Simonovits [A Method for Solving Extremal Problems in Graph Theory, Stability Problems, Theory of Graphs (Proc. Colloq., Tihany, 1966), 279–319] in the context of graphs, but has since been considered for several combinatorial structures. In this work, we describe a hypergraph extremal problem with an unusual combinatorial feature, namely, while the problem is unstable, it has a unique optimal solution up to isomorphism. To the best of our knowledge, this is the first such example in the context of (simple) hypergraphs.

More precisely, for fixed positive integers r and ℓ with 1 ≤ ℓ < r, and given an r-uniform hypergraph H, let κ(H, 4, ℓ) denote the number of 4-colorings of the set of hyperedges of H for which any two hyperedges in the same color class intersect in at least ℓ elements. Consider the function $\mathrm{KC}(n,r,4,\ell)=\max_{H\in{\mathcal H}_{n,r}} \kappa (H, 4,\ell)$, where the maximum runs over the family ${\mathcal H}_{n,r}$ of all r-uniform hypergraphs on n vertices. We show that, for n large, there is a unique n-vertex hypergraph H for which κ(H, 4, ℓ) = KC(n, r, 4, ℓ), despite the fact that the problem of determining KC(n, r, 4, ℓ) is unstable for every fixed r > ℓ ≥ 2.

Carlos Hoppen, Yoshiharu Kohayakawa, Hanno Lefmann
Multiparty Communication Complexity of Vector–Valued and Sum–Type Functions

Rudolf Ahlswede’s work on communication complexity dealt with functions defined on direct sums: vector–valued functions and sum–type functions. He was interested in single–letter characterizations and provided several lower bound techniques to this aim. In this paper we shall review these lower bounds and extend them to the “number in hand” multiparty model of communication complexity.

Ulrich Tamm
Threshold Functions for Distinct Parts: Revisiting Erdős–Lehner

We study four problems: put n distinguishable/non-distinguishable balls into k non-empty distinguishable/non-distinguishable boxes randomly. What is the threshold function k = k(n) to make it almost sure that no two boxes contain the same number of balls? The non-distinguishable ball problems are very close to the Erdős–Lehner asymptotic formula for the number of partitions of the integer n into k parts with $k = o(n^{1/3})$. The problem is motivated by the statistics of an experiment where we can only tell whether outcomes are identical or different.

Éva Czabarka, Matteo Marsili, László A. Székely
On Some Structural Properties of Star and Pancake Graphs

In this paper we give a report on some results for the Star and Pancake graphs obtained after the conference “Search Methodologies”, which was held in October 2010. The graphs are defined as Cayley graphs on the symmetric group with the generating sets of all prefix-transpositions and prefix-reversals, respectively. They are also known as the Star and Pancake networks in computer science. In this paper we give the full characterization of perfect codes for these graphs. We also investigate the cycle structure of the Pancake graph and present an explicit description of small cycles as well as their number in the graph.

Elena Konstantinova

Search Theory

Threshold and Majority Group Testing

We consider two generalizations of group testing: threshold group testing (introduced by Damaschke [11]) and majority group testing (a further generalization, including threshold group testing and a model introduced by Lebedev [20]).

We show that each separating code gives a nonadaptive strategy for threshold group testing for some parameters. This is a generalization of a result in [4] on “guessing secrets”, introduced in [9].

We introduce threshold codes and show that each threshold code gives a nonadaptive strategy for threshold group testing. Threshold codes also include the construction of [6]. In contrast to [8], where the number of defectives is bounded, we consider the case when the number of defectives is known. We show that we can improve the rate in this case.

We consider majority group testing if the number of defective elements is unknown but bounded (otherwise it reduces to threshold group testing). We show that cover-free codes and separating codes give strategies for majority group testing. We give a lower bound for the rate of majority group testing.

Rudolf Ahlswede, Christian Deppe, Vladimir Lebedev
Superimposed Codes and Threshold Group Testing

We will discuss superimposed codes and non-adaptive group testing designs arising from the potentialities of compressed genotyping models in molecular biology. This paper was motivated by the 30th anniversary of the D’yachkov-Rykov recurrent upper bound on the rate of superimposed codes, published in 1982. We were also inspired by recent results obtained for non-adaptive threshold group testing which develop the theory of superimposed codes.

Arkadii D’yachkov, Vyacheslav Rykov, Christian Deppe, Vladimir Lebedev
New Construction of Error-Tolerant Pooling Designs

In this paper a new class of error-tolerant pooling designs associated with finite vector spaces is presented. We construct $d^z$-disjunct inclusion matrices using packings in finite projective spaces. For certain parameters our construction gives better performance than previously known ones. In particular, the construction gives a family of disjunct matrices with near-optimal parameters.

Rudolf Ahlswede, Harout Aydinian
Density-Based Group Testing

In this paper we study a new, generalized version of the well-known group testing problem. In the classical model of group testing we are given n objects, some of which are considered to be defective. We can test certain subsets of the objects to determine whether they contain at least one defective element. The goal is usually to find all defectives using as few tests as possible. In our model the presence of defective elements in a test set Q can be recognized if and only if their number is large enough compared to the size of Q. More precisely, for a test Q the answer is yes if and only if there are at least α|Q| defective elements in Q for some fixed α.
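
The test model just described can be transcribed directly; a minimal sketch of a single test answer (names are ours):

def density_test(query, defectives, alpha):
    # Answer of one test in the density-based model: 'yes' iff the query set
    # contains at least alpha * |query| defective elements.
    hits = len(set(query) & set(defectives))
    return hits >= alpha * len(query)

print(density_test({1, 2, 3, 4}, {2, 4, 7}, 0.5))  # True: 2 defectives >= 0.5 * 4
print(density_test({1, 2, 3, 4}, {4, 7}, 0.5))     # False: only 1 defective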

Dániel Gerbner, Balázs Keszegh, Dömötör Pálvölgyi, Gábor Wiener
Group Testing with Multiple Mutually-Obscuring Positives

Group testing is a frequently used tool to identify an unknown set of defective (positive) elements out of a large collection of elements by testing subsets (pools) for the presence of defectives. Various models have been studied in the literature. The most studied case concerns only two types (defective and non-defective) of elements in the given collection. This paper studies a novel and natural generalization of group testing, where more than one type of defective is allowed, with the additional assumption that certain obscuring phenomena occur among different types of defectives. This paper proposes some algorithms for this problem, trying to optimize different measures of performance: the total number of tests required, the number of stages needed to perform all tests, and the decoding complexity.

Hong-Bin Chen, Hung-Lin Fu
An Efficient Algorithm for Combinatorial Group Testing

In the (d, n) group testing problem n items have to be identified as either good or defective, and the number of defective items is known to be d. A test on an arbitrary group (subset) of items reveals either that all items in the group are good or that at least one of the items is defective, but not how many or which items are defective. We present a new algorithm which in the worst case needs less than $0.255d+\frac{1}{2}\log d+5.5$ tests more than the information lower bound $\left\lceil \log\binom{n}{d}\right\rceil$ for $\frac{n}{d}\geq2$. For $\frac{n}{d}\geq38$, the difference decreases to less than $0.187d+\frac{1}{2}\log d+5.5$ tests. For d ≥ 10, this is a considerable improvement over the d − 1 additional tests given for the best previously known algorithm by Hwang. We conjecture that the behaviour of the difference for large n and d is optimal for $\frac{n}{d}\leq4$. This implies that the $\frac{1}{2}-\log\frac{32}{27}=0.255$ tests per defective given in the bound above are the best possible.
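
The two quantities compared in the abstract are easy to evaluate; a small sketch (the excess term below is the one stated for n/d >= 2, and the function names are ours):

from math import ceil, comb, log2

def information_lower_bound(n, d):
    # ceil(log2 C(n, d)): any strategy needs at least this many binary test outcomes.
    return ceil(log2(comb(n, d)))

def stated_worst_case_bound(n, d):
    # Information lower bound plus the excess 0.255 d + 0.5 log2(d) + 5.5 claimed
    # in the abstract for n/d >= 2.
    assert n / d >= 2
    return information_lower_bound(n, d) + 0.255 * d + 0.5 * log2(d) + 5.5

n, d = 1000, 10
print(information_lower_bound(n, d), stated_worst_case_bound(n, d))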

Andreas Allemann
Randomized Post-optimization for t-Restrictions

Search, test, and measurement problems in sparse domains often require the construction of arrays in which every t or fewer columns satisfy a simply stated combinatorial condition. Such t-restriction problems often ask for the construction of an array satisfying the t-restriction while having as few rows as possible. Combinatorial, algebraic, and probabilistic methods have been brought to bear for specific t-restriction problems; yet in most cases they do not succeed in constructing arrays with a number of rows near the minimum, at least when the number of columns is small. To address this, an algorithmic method is proposed that, given an array satisfying a t-restriction, attempts to improve the array by removing rows. The key idea is to determine the necessity of the entry in each cell of the array in meeting the t-restriction, and to repeatedly replace unnecessary entries, with the goal of producing an entire row of unnecessary entries. Such a row can then be deleted, improving the array, and the process can be iterated. For certain t-restrictions, it is shown that by determining conflict graphs, entries that are necessary can nonetheless be changed without violating the t-restriction. This permits a richer set of ways to improve the arrays. The efficacy of these methods is demonstrated via computational results.

Charles J. Colbourn, Peyman Nayeri
Search for Sparse Active Inputs: A Review

The theory of Compressed Sensing (highly popular in recent years) has a close relative that was developed around thirty years earlier and has been almost forgotten since – the design of screening experiments. For both problems, the main assumption is sparsity of active inputs, and the fundamental feature in both theories is the threshold phenomenon: reliable recovery of sparse active inputs is possible when the rate of design is less than the so-called capacity threshold, and impossible with higher rates.

Another close relative of both theories is multi-access information transmission. We survey a collection of tight and almost tight screening capacity bounds for both adaptive and non-adaptive strategies, which correspond to either having or not having feedback in information transmission. These bounds are inspired by results from multi-access capacity theory. We also compare these bounds with the simulated performance of two analysis methods: (i) linear programming relaxation methods akin to basis pursuit used in compressed sensing, and (ii) greedy methods of low complexity for both non-adaptive and adaptive strategies.
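
To make method (i) concrete, here is a minimal sketch of the linear programming relaxation (basis pursuit) in its standard compressed sensing form; it illustrates the general technique only and is not code from the survey:

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    # min ||x||_1 subject to A x = y, via the split x = u - v with u, v >= 0:
    # minimize sum(u) + sum(v) subject to [A, -A] [u; v] = y.
    n = A.shape[1]
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 8))          # 4 measurements of 8 unknowns
x_true = np.zeros(8); x_true[3] = 2.0    # 1-sparse "active input" pattern
print(np.round(basis_pursuit(A, A @ x_true), 3))  # typically recovers x_true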

Mikhail Malyutov
Search When the Lie Depends on the Target

The following model is considered. There is exactly one unknown element x of the n-element set S. A question is a partition of S into three classes (A, L, B). If x ∈ A then the answer is “yes” (or 1), if x ∈ B then the answer is “no” (or 0), and finally if x ∈ L then the answer can be either “yes” or “no”. In other words, if the answer “yes” is obtained then we know that x ∈ A ∪ L, while in the case of a “no” answer the conclusion is x ∈ B ∪ L. The mathematical problem is to minimize the minimum number of questions under certain assumptions on the sizes of A, B and L. This problem has been solved under the condition |L| ≥ k by the author and Krisztián Tichler in previous papers for both the adaptive and non-adaptive cases. In this paper we suggest to solve the problem under the conditions |A| ≤ a, |B| ≤ b. We exhibit some partial results for both the adaptive and non-adaptive cases. We also show that the problem is closely related to some known combinatorial problems. Let us mention that the case b = n − a has been more or less solved in earlier papers.

Gyula O. H. Katona, Krisztián Tichler
A Heuristic Solution of a Cutting Problem Using Hypergraphs

We consider a cutting problem which has a practical application. In this problem, items are being cut from larger items, for example textile patterns from a panel of cloth. We deal with steel tubes of given lengths, which have to be sawed from longer steel tubes. These longer steel tubes come with given costs, and the sawing has to be planned such that the total costs are minimized.
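
Since the abstract only sketches the setting, here is one standard way such a problem is formalized (a generic cutting-stock formulation under the assumption that demands and costs are as described; the paper itself uses a hypergraph-based heuristic rather than this integer program):

$$\min \sum_{p\in\mathcal P} c_p\, x_p \quad\text{subject to}\quad \sum_{p\in\mathcal P} a_{ip}\, x_p \ge d_i \ \text{for every demanded length } i,\qquad x_p\in\mathbb Z_{\ge 0},$$

where $\mathcal P$ is the set of feasible sawing patterns of the available long tubes, $c_p$ the cost of the long tube used by pattern $p$, $a_{ip}$ the number of pieces of length $i$ that pattern $p$ yields, $d_i$ the demand for length $i$, and $x_p$ the number of times pattern $p$ is used.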

Christian Deppe, Christian Wischmann
Remarks on History and Presence of Game Tree Search and Research

One hundred years ago, in 1912 game tree search was introduced as a scientific field by Ernst Zermelo in a concise 4-page paper. Almost four decades later the first computers were there, and three more or less concrete proposals for a Chess computer program were made by Norbert Wiener, Claude Shannon, and Alan Turing. After a long march of craftsmanship, in 1997 computer Deep Blue beat the best human Chess player in a match with six games.

The other big classic in the world of games is Go from Asia. The approach from computer Chess does not work in Go. But in 2006 a Monte Carlo tree search procedure became the starting point of a triumphal march. Within the following six years computer Go programs have reached a level near that of the best Western amateur players. Also in other games like Havannah, Monte Carlo search led to tremendous progress in computer playing strength.

We describe the origins of game tree search in the early 20th century and discuss some of the waves of progress. With the help of C. Donninger we also meditate about the twilight role of science and scientific research for progress in game programming.

Ingo Althöfer
Multiplied Complete Fix-Free Codes and Shiftings Regarding the 3/4-Conjecture

Given a nonnegative sequence α of integers with Kraft sum at most 3/4, Ahlswede, Balkenhol and Khachatrian proposed the existence of a fix-free code with exactly $\alpha_n$ words for any length n. In this article complete thin fix-free codes are constructed, and both so-called n-closed systems and multiplication are used to enlarge this class. In addition, a sufficient criterion is given in terms of elementary sequence-shifting preserving the fix-freedom of the associated code.
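
For reference, the statement alluded to above (the 3/4 conjecture of Ahlswede, Balkenhol and Khachatrian, here in its binary form) reads: if $(\alpha_n)_{n\ge 1}$ is a sequence of nonnegative integers with Kraft sum $\sum_{n\ge 1}\alpha_n 2^{-n}\le \frac{3}{4}$, then there exists a fix-free code containing exactly $\alpha_n$ codewords of length $n$ for every $n$.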

Michael Bodewig
Creating Order and Ballot Sequences

Rudolf Ahlswede introduced the theory of creating order roughly at the same time as his theory of identification. He was always surprised that it did not achieve the same popularity as identification. We shall here present a multi-user model in which, in contrast to Ahlswede’s original model, the size of the memory may vary in time. The influence of the maximum size of the memory device on the expected occurrence of the first 0 in the sequence produced by the organizer is studied. In the case that there is one outgoing bit in each time unit, two steps of a simple random walk on the lattice can be combined into one step of a random walk for the exhaustion of the memory.

Ulrich Tamm
Backmatter
Metadata
Title
Information Theory, Combinatorics, and Search Theory
Editors
Harout Aydinian
Ferdinando Cicalese
Christian Deppe
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-36899-8
Print ISBN
978-3-642-36898-1
DOI
https://doi.org/10.1007/978-3-642-36899-8
