
2010 | Book

Computing and Combinatorics

16th Annual International Conference, COCOON 2010, Nha Trang, Vietnam, July 19-21, 2010. Proceedings

Editors: My T. Thai, Sartaj Sahni

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Invited Talks

Understanding and Inductive Inference

This talk will be about different kinds of understanding, including especially that provided by deductive and inductive inference. Examples will be drawn from Galileo, Kepler, and Fermat. Applications will be given to the physicist’s problem of inferring the laws of nature, and to the restricted case of inferring sequences in Sloane’s Encyclopedia of Integer Sequences.

While there is much theory and understanding of deductive inference, there is relatively little understanding of the process of inductive inference. Inductive inference is about hypothesizing, hopefully inferring, the laws of physics from observation of the data. It is what Kepler did in coming up with his three laws of planetary motion.

Once laws are in place, deductive inference can be used to derive their consequences, which then uncovers much of the understanding in physics.

This talk will define inductive inference, state some of its open problems, and give applications to a simpler but still largely unexplored task, namely, inferring a large natural class of algorithmically generated integer sequences that includes (as of this writing) 20% of the sequences in Sloane’s Encyclopedia of Integer Sequences.

Manuel Blum
Computing with Cells: Membrane Systems

Membrane computing is a part of the general research effort of describing and investigating computing models, ideas, architectures, and paradigms abstracted from processes taking place in nature, especially in biology. It identifies an unconventional computing model, namely a P system, which abstracts from the way living cells process chemical compounds in their compartmental structure. P systems are a class of distributed, maximally parallel computing devices of a biochemical type. We give a brief overview of the area and present results that answer some interesting and fundamental questions concerning issues such as determinism versus nondeterminism, membrane and alphabet-size hierarchies, various notions of parallelism, reachability, and computational complexity. We also look at neural-like systems, called spiking neural P systems, which incorporate the idea of spiking neurons into membrane computing. We study various classes and characterize their computing power and complexity.

Oscar H. Ibarra

Complexity and Inapproximability

Boxicity and Poset Dimension

Let G be a simple, undirected, finite graph with vertex set V(G) and edge set E(G). A k-dimensional box is a Cartesian product of closed intervals $[a_1,b_1]\times[a_2,b_2]\times\cdots\times[a_k,b_k]$. The boxicity of G, ${\rm box}(G)$, is the minimum integer k such that G can be represented as the intersection graph of k-dimensional boxes, i.e., each vertex is mapped to a k-dimensional box and two vertices are adjacent in G if and only if their corresponding boxes intersect. Let ${\mathcal P} =(S,P)$ be a poset, where S is the ground set and P is a reflexive, anti-symmetric and transitive binary relation on S. The dimension of ${\mathcal P}$, ${\rm dim}({\mathcal P})$, is the minimum integer t such that P can be expressed as the intersection of t total orders.

Let $G_{\mathcal P}$ be the underlying comparability graph of ${\mathcal P}$. It is a well-known fact that posets with the same underlying comparability graph have the same dimension. The first result of this paper links the dimension of a poset to the boxicity of its underlying comparability graph. In particular, we show that for any poset ${\mathcal P}$, ${\rm box}(G_{\mathcal P})/(\chi(G_{\mathcal P})-1) \leq {\rm dim}({\mathcal P})\leq 2\,{\rm box}(G_{\mathcal P})$, where $\chi(G_{\mathcal P})$ is the chromatic number of $G_{\mathcal P}$ and $\chi(G_{\mathcal P})\neq 1$.

The second result of the paper relates the boxicity of a graph G with a natural partial order associated with its extended double cover, denoted $G_c$. Let ${\mathcal P}_c$ be the natural height-2 poset associated with $G_c$, obtained by making A the set of minimal elements and B the set of maximal elements. We show that $\frac{{\rm box}(G)}{2} \leq {\rm dim}({\mathcal P}_c) \leq 2\,{\rm box}(G)+4$.

These results have some immediate and significant consequences. The upper bound ${\rm dim}({\mathcal P})\leq 2\,{\rm box}(G_{\mathcal P})$ allows us to derive hitherto unknown upper bounds for poset dimension. In the other direction, using the already known bounds for partial order dimension we get the following: (1) The boxicity of any graph with maximum degree Δ is $O(\Delta\log^2\Delta)$, which is an improvement over the best known upper bound of $\Delta^2+2$. (2) There exist graphs with boxicity $\Omega(\Delta\log\Delta)$; this disproves a conjecture that the boxicity of a graph is O(Δ). (3) There exists no polynomial-time algorithm to approximate the boxicity of a bipartite graph on n vertices within a factor of $O(n^{0.5-\epsilon})$ for any ε > 0, unless NP = ZPP.
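The intersection-graph definition above is easy to make concrete. Below is a minimal Python sketch (my illustration, not from the paper) that, given an assignment of a k-dimensional box to each vertex, builds the resulting intersection graph; a graph G has boxicity at most k exactly when some such assignment reproduces G.

```python
from itertools import combinations

def intersection_graph(boxes):
    """boxes: dict mapping vertex -> list of k closed intervals (a_i, b_i).
    Returns the edge set of the intersection graph of the boxes."""
    edges = set()
    for u, v in combinations(boxes, 2):
        # Two k-dimensional boxes intersect iff their intervals
        # overlap in every one of the k coordinates.
        if all(a1 <= b2 and a2 <= b1
               for (a1, b1), (a2, b2) in zip(boxes[u], boxes[v])):
            edges.add(frozenset((u, v)))
    return edges

# A 4-cycle 1-2-3-4 realized with 2-dimensional boxes, so box(C4) <= 2.
boxes = {
    1: [(0, 1), (0, 3)],
    2: [(1, 2), (0, 1)],
    3: [(2, 3), (0, 3)],
    4: [(0, 3), (2, 3)],
}
print(intersection_graph(boxes) ==
      {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (1, 4)]})  # True
```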

Abhijin Adiga, Diptendu Bhowmick, L. Sunil Chandran
On the Hardness against Constant-Depth Linear-Size Circuits

The notion of average-case hardness is a fundamental one in complexity theory. In particular, it plays an important role in the research on derandomization, as there are general derandomization results which are based on the assumption that average-case hard functions exist. However, to achieve a complete derandomization, one usually needs a function which is extremely hard against a complexity class, in the sense that any algorithm in the class fails to compute the function on a $1/2 - 2^{-\Omega(n)}$ fraction of its n-bit inputs. Unfortunately, lower bound results are very rare and they are only known for very restricted complexity classes, and achieving such extreme hardness seems even more difficult. Motivated by this, we study the hardness against linear-size circuits of constant depth in this paper. We show that the parity function is extremely hard for them: any such circuit must fail to compute the parity function on at least a $1/2 - 2^{-\Omega(n)}$ fraction of inputs.
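The hardness measure used here, the fraction of n-bit inputs on which a circuit disagrees with parity, can be estimated empirically. A minimal Python sketch (my illustration; the constant-0 "circuit" is a trivial stand-in, and hypothetically any candidate constant-depth circuit could be plugged in as `circuit`):

```python
import random

def parity(bits):
    """Parity of a tuple of 0/1 values."""
    return sum(bits) % 2

def disagreement_fraction(circuit, n, samples=100_000):
    """Monte-Carlo estimate of the fraction of n-bit inputs
    on which `circuit` fails to compute parity."""
    fails = 0
    for _ in range(samples):
        x = tuple(random.randint(0, 1) for _ in range(n))
        if circuit(x) != parity(x):
            fails += 1
    return fails / samples

# The constant-0 function already fails on about half of all inputs,
# which is the benchmark that extreme hardness approaches.
print(disagreement_fraction(lambda x: 0, n=20))  # ~0.5
```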

Chi-Jen Lu, Hsin-Lung Wu
A K-Provers Parallel Repetition Theorem for a Version of No-Signaling Model

The parallel repetition theorem states that for any two-prover one-round game with value at most 1 − ε (for ε < 1/2), the value of the game repeated n times in parallel is at most $(1-\epsilon^3)^{\Omega(n/\log s)}$, where s is the size of the answer set [Raz98], [Hol07]. It is not known how the value of the game decreases when there are three or more players. In this paper we address the problem of the error decrease of the parallel repetition game for k provers, where k > 2. We consider a special case of the No-Signaling model and show that the error of the parallel repetition of a k-prover one-round game, for k > 2, in this model decreases exponentially, depending only on the error of the original game and on the number of repetitions. There were no prior results for k-prover parallel repetition for k > 2 in any model.

Ricky Rosen
The Curse of Connectivity: t-Total Vertex (Edge) Cover

We investigate the effect of certain natural connectivity constraints on the parameterized complexity of two fundamental graph covering problems, namely k-Vertex Cover and k-Edge Cover. Specifically, we impose the additional requirement that each connected component of a solution have at least t vertices (resp. edges from the solution), and call the resulting problem t-total vertex cover (resp. t-total edge cover). We show that:

both problems remain fixed-parameter tractable with these restrictions, with running times of the form ${\mathcal O}^{*}\left(c^{k}\right)$ for some constant c > 0 in each case;

for every t ≥ 2, t-total vertex cover has no polynomial kernel unless the Polynomial Hierarchy collapses to the third level;

for every t ≥ 2, t-total edge cover has a linear vertex kernel of size $\frac{t+1}{t}k$.
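As an illustration of the t-total condition (my sketch, not part of the paper): the following Python checker verifies that a vertex set covers every edge and that every connected component of the subgraph it induces has at least t vertices.

```python
def is_t_total_vertex_cover(edges, cover, t):
    """edges: iterable of vertex pairs; cover: set of vertices;
    t: minimum size of each component of the induced subgraph."""
    cover = set(cover)
    # Every edge must have an endpoint in the cover.
    if any(u not in cover and v not in cover for u, v in edges):
        return False
    # Adjacency lists of the induced subgraph G[cover].
    adj = {v: [] for v in cover}
    for u, v in edges:
        if u in cover and v in cover:
            adj[u].append(v)
            adj[v].append(u)
    # DFS: each connected component of G[cover] needs >= t vertices.
    seen = set()
    for s in cover:
        if s in seen:
            continue
        size, stack = 0, [s]
        seen.add(s)
        while stack:
            u = stack.pop()
            size += 1
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if size < t:
            return False
    return True

# Path on 4 vertices: {1, 2} is a 2-total vertex cover.
print(is_t_total_vertex_cover([(0, 1), (1, 2), (2, 3)], {1, 2}, t=2))  # True
```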

Henning Fernau, Fedor V. Fomin, Geevarghese Philip, Saket Saurabh
Counting Paths in VPA Is Complete for $\#NC^1$

We give a $\#NC^1$ upper bound for the problem of counting accepting paths in any fixed visibly pushdown automaton. Our algorithm involves a non-trivial adaptation of the arithmetic formula evaluation algorithm of Buss, Cook, Gupta and Ramachandran [9]. We also show that the problem is $\#NC^1$-hard. Our results show that the difference between $\#BWBP$ and $\#NC^1$ is captured exactly by the addition of a visible stack to a nondeterministic finite-state automaton.

Andreas Krebs, Nutan Limaye, Meena Mahajan
Depth-Independent Lower Bounds on the Communication Complexity of Read-Once Boolean Formulas

We show lower bounds of $\Omega(\sqrt{n})$ and $\Omega(n^{1/4})$ on the randomized and quantum communication complexity, respectively, of all n-variable read-once Boolean formulas. Our results complement the recent lower bounds of $\Omega(n/8^d)$ by Leonardos and Saks [LS09] and $\Omega(n/2^{O(d\log d)})$ by Jayram, Kopparty and Raghavendra [JKR09] for the randomized communication complexity of read-once Boolean formulas with depth d.

We obtain our result by “embedding” either the Disjointness problem or its complement in any given read-once Boolean formula.

Rahul Jain, Hartmut Klauck, Shengyu Zhang

Approximation Algorithms

Multiplying Pessimistic Estimators: Deterministic Approximation of Max TSP and Maximum Triangle Packing

We give a generalization of the method of pessimistic estimators [14], in which we compose estimators by multiplying them. We give conditions on the pessimistic estimators of two expectations, under which the product of the pessimistic estimators is a pessimistic estimator of the product of the two expectations. This approach can be useful when derandomizing algorithms for which one needs to bound a certain probability that can be expressed as the probability of an intersection of multiple events: using our method, one can define pessimistic estimators for the probabilities of the individual events, and then multiply them to obtain a pessimistic estimator for the probability of the intersection. We apply this method to give derandomizations of all known approximation algorithms for the maximum traveling salesman problem and the maximum triangle packing problem: we define simple pessimistic estimators based on the analysis of known randomized algorithms and show that we can multiply them to obtain pessimistic estimators for the expected weight of the solution. This gives deterministic algorithms with better approximation guarantees than what was previously known.

Anke van Zuylen
Clustering with or without the Approximation

We study algorithms for clustering data that were recently proposed by Balcan, Blum and Gupta in SODA’09 [4] and that have already given rise to two follow-up papers. The input for the clustering problem consists of points in a metric space and a number k, specifying the desired number of clusters. The algorithms find a clustering that is provably close to a target clustering, provided that the instance has the “(1 + α, ε)-property”, which means that all solutions to the k-median problem whose objective value is at most (1 + α) times the optimal objective value correspond to clusterings that misclassify at most an ε fraction of the points with respect to the target clustering. We investigate the theoretical and practical implications of their results.

Our main contributions are as follows. First, we show that instances that have the (1 + α, ε)-property and for which, additionally, the clusters in the target clustering are large, are easier than general instances: the algorithm proposed in [4] is a constant factor approximation algorithm with an approximation guarantee that is better than the known hardness of approximation for general instances. Further, we show that it is NP-hard to check if an instance satisfies the (1 + α, ε)-property for a given (α, ε); however, the algorithms in [4] need such α and ε as input parameters. We propose ways to use their algorithms even if we do not know values of α and ε for which the assumption holds. Finally, we implement these methods and other popular methods, and test them on real world data sets. We find that on these data sets there are no α and ε such that the data set has both the (1 + α, ε)-property and sufficiently large clusters in the target solution. For the general case, we show that on our data sets the performance guarantee proved by [4] is meaningless for the values of α and ε such that the data set has the (1 + α, ε)-property. The algorithm nonetheless gives reasonable results, although it is outperformed by other methods.

Frans Schalekamp, Michael Yu, Anke van Zuylen
A Self-stabilizing 3-Approximation for the Maximum Leaf Spanning Tree Problem in Arbitrary Networks

The maximum leaf spanning tree (MLST) is a good candidate for constructing a virtual backbone in self-organized multihop wireless networks, but is practically intractable (NP-complete). Self-stabilization is a general technique that permits recovery from catastrophic transient failures in self-organized networks without human intervention. We propose a fully distributed self-stabilizing approximation algorithm for the MLST problem on arbitrary topology networks. Our algorithm is the first self-stabilizing protocol that is specifically designed for the construction of an MLST. It improves on previous self-stabilizing solutions both in generality (arbitrary topology graphs vs. unit disk graphs or generalized disk graphs, respectively) and in approximation ratio, as it guarantees that the number of its leaves is at least 1/3 of the maximum. The time complexity of our algorithm is $O(n^2)$ rounds.

Sayaka Kamei, Hirotsugu Kakugawa, Stéphane Devismes, Sébastien Tixeuil
Approximate Weighted Farthest Neighbors and Minimum Dilation Stars

We provide an efficient reduction from the problem of querying approximate multiplicatively weighted farthest neighbors in a metric space to the unweighted problem. Combining our techniques with core-sets for approximate unweighted farthest neighbors, we show how to find approximate farthest neighbors that are farther than a factor (1 − ε) of optimal in time $O(\log n)$ per query in D-dimensional Euclidean space, for any constants D and ε. As an application, we find an $O(n\log n)$ expected-time algorithm for choosing the center of a star topology network connecting a given set of points, so as to approximately minimize the maximum dilation between any pair of points.

John Augustine, David Eppstein, Kevin A. Wortman
Approximated Distributed Minimum Vertex Cover Algorithms for Bounded Degree Graphs

In this paper, two distributed algorithms for the minimum vertex cover problem are given. In the unweighted case, we propose a 2.5-approximation algorithm with round complexity O(Δ), where Δ is the maximum degree of G, improving the previous 3-approximation result with the same round complexity O(Δ). For the weighted case, we give a 4-approximation algorithm with round complexity O(Δ).

Yong Zhang, Francis Y. L. Chin, Hing-Fung Ting

Graph Theory and Algorithms

Maximum Upward Planar Subgraph of a Single-Source Embedded Digraph

We show how to compute a maximum upward planar single-source subgraph of a single-source embedded DAG $G_\varphi$. We first show that finding a maximum upward planar subgraph of a single-source embedded digraph is NP-complete. We then give a new characterization of upward planar single-source digraphs. We use this characterization to present an algorithm that computes a maximum upward planar single-source subgraph of a single-source embedded DAG. This algorithm takes $O(n^4)$ time in the worst case and $O(n^3)$ time on average.

Aimal Rextin, Patrick Healy
Triangle-Free 2-Matchings Revisited

A 2-matching in an undirected graph $G = (V_G, E_G)$ is a function $x: E_G \to \{0,1,2\}$ such that for each node $v \in V_G$ the sum of the values x(e) over all edges e incident to v does not exceed 2. The size of x is the sum $\sum_e x(e)$. If $\{e \in E_G \mid x(e) \neq 0\}$ contains no triangles, then x is called triangle-free.

Cornuéjols and Pulleyblank devised a combinatorial O(mn)-algorithm that finds a triangle-free 2-matching of maximum size (hereinafter $n := |V_G|$, $m := |E_G|$) and also established a min-max theorem.

We claim that this approach is, in fact, superfluous by demonstrating how their results may be obtained directly from the Edmonds–Gallai decomposition. Applying the algorithm of Micali and Vazirani, we are able to find a maximum triangle-free 2-matching in $O(m\sqrt{n})$ time. Also we give a short self-contained algorithmic proof of the min-max theorem.

Next, we consider the case of regular graphs. It is well-known that every regular graph admits a perfect 2-matching. One can easily strengthen this result and prove that every d-regular graph (for d ≥ 3) contains a perfect triangle-free 2-matching. We give the following algorithms for finding a perfect triangle-free 2-matching in a d-regular graph: an O(n)-algorithm for d = 3, an $O(m + n^{3/2})$-algorithm for d = 2k (k ≥ 2), and an $O(n^2)$-algorithm for d = 2k + 1 (k ≥ 2).

Maxim Babenko, Alexey Gusakov, Ilya Razenshteyn
The Cover Time of Deterministic Random Walks

The rotor router model is a popular deterministic analogue of a random walk on a graph. Instead of moving to a random neighbor, the neighbors are served in a fixed order. We examine how fast this “deterministic random walk” covers all vertices (or all edges). We present general techniques to derive upper bounds for the vertex and edge cover time and derive matching lower bounds for several important graph classes. Depending on the topology, the deterministic random walk can be asymptotically faster, slower or equally fast compared to the classical random walk.
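The rotor router model is straightforward to simulate. Below is a minimal Python sketch (my illustration) that runs the deterministic walk on an arbitrary undirected graph and reports the vertex cover time, i.e., the number of steps until every vertex has been visited.

```python
def rotor_router_cover_time(adj, start=0):
    """adj: dict vertex -> list of neighbors (a fixed service order).
    Each vertex keeps a rotor (pointer into its neighbor list) and
    serves its neighbors round-robin instead of choosing at random."""
    pointer = {v: 0 for v in adj}
    visited = {start}
    v, steps = start, 0
    while len(visited) < len(adj):
        w = adj[v][pointer[v] % len(adj[v])]  # next neighbor in fixed order
        pointer[v] += 1                       # advance the rotor
        v = w
        visited.add(v)
        steps += 1
    return steps

# Cycle on 6 vertices: the walk sweeps around and covers it in 5 steps.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(rotor_router_cover_time(cycle))  # 5
```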

Tobias Friedrich, Thomas Sauerwald
Finding Maximum Edge Bicliques in Convex Bipartite Graphs

A bipartite graph G = (A, B, E) is convex on B if there exists an ordering of the vertices of B such that, for any vertex v ∈ A, the vertices adjacent to v are consecutive in B. A complete bipartite subgraph of a graph G is called a biclique of G. In this paper, we study the problem of finding the maximum edge-cardinality biclique in convex bipartite graphs. Given a bipartite graph G = (A, B, E) which is convex on B, we present a new algorithm that computes the maximum edge-cardinality biclique of G in $O(n\log^3 n\log\log n)$ time and O(n) space, where n = |A|. This improves the current $O(n^2)$ time bound available for the problem.

Doron Nussbaum, Shuye Pu, Jörg-Rüdiger Sack, Takeaki Uno, Hamid Zarrabi-Zadeh
A Note on Vertex Cover in Graphs with Maximum Degree 3

We show that the k-Vertex Cover problem in degree-3 graphs can be solved in $O^*(1.1616^k)$ time, which improves the previous results of $O^*(1.1940^k)$ by Chen, Kanj and Xia and $O^*(1.1864^k)$ by Razgon. In this paper, we present a new way to analyze algorithms for the problem. We use $r=k-\frac{2}{5}n$ to measure the size of the search tree, and then get a simple $O(1.6651^{k-\frac{2}{5}n_0})$-time algorithm, where $n_0$ is the number of vertices of degree ≥ 2 in the graph. Combining this result with fast algorithms for the Maximum Independent Set problem in degree-3 graphs, we improve the upper bound for the k-Vertex Cover problem in degree-3 graphs.

Mingyu Xiao
Computing Graph Spanners in Small Memory: Fault-Tolerance and Streaming

Let G be an undirected graph with m edges and n vertices. A spanner of G is a subgraph which preserves approximate distances between all pairs of vertices. An f-vertex fault-tolerant spanner is a subgraph which preserves approximate distances under the failure of any set of at most f vertices. The contribution of this paper is twofold: we present algorithms for computing fault-tolerant spanners, and propose streaming algorithms for computing spanners in very small internal memory. In particular, we give algorithms for computing f-vertex fault-tolerant (3,2)- and (2,1)-spanners of G with the following bounds: our (3,2)-spanner contains $O(f^{4/3}n^{4/3})$ edges and can be computed in time $\tilde{O}\left(f^2m\right)$, while our (2,1)-spanner contains $O(fn^{3/2})$ edges and can be computed in time $\tilde{O}\left(fm\right)$. Both algorithms improve significantly on previously known bounds.

Assume that the graph G is presented as an input stream of edges, which may appear in any arbitrary order, and that we do not know m and n in advance. We show how to compute efficiently (3,2)- and (2,1)-spanners of G, using only very small internal memory and a slow-access external memory device. Our spanners have asymptotically optimal size, and the I/O complexity of our algorithms for computing such spanners is optimal up to a polylogarithmic factor. Our f-vertex fault-tolerant (3,2)- and (2,1)-spanners can also be computed efficiently in the same computational model described above.

Giorgio Ausiello, Paolo G. Franciosa, Giuseppe F. Italiano, Andrea Ribichini
Factorization of Cartesian Products of Hypergraphs

In this article we present the L2-section, a tool used to represent a hypergraph in terms of an “advanced graph”, and results leading to a first algorithm, running in O(nm) time, that factorizes a bounded-rank, bounded-degree hypergraph H into its prime factors with respect to the Cartesian product. The paper puts a premium on the characterization of the prime factors of a hypergraph, exploiting isomorphisms between the layers in the 2-section, as returned by a standard graph factorization algorithm such as the one designed by Imrich and Peterin.

Alain Bretto, Yannick Silvestre

Graph Drawing and Coloring

Minimum-Segment Convex Drawings of 3-Connected Cubic Plane Graphs
(Extended Abstract)

A convex drawing of a plane graph G is a plane drawing of G where each vertex is drawn as a point, each edge is drawn as a straight line segment, and each face is drawn as a convex polygon. A maximal segment is a drawing of a maximal set of edges that form a straight line segment. A minimum-segment convex drawing of G is a convex drawing of G where the number of maximal segments is the minimum among all possible convex drawings of G. In this paper, we present a linear-time algorithm to obtain a minimum-segment convex drawing Γ of a 3-connected cubic plane graph G of n vertices, where the drawing is not a grid drawing. We also give a linear-time algorithm to obtain a convex grid drawing of G on an $(\frac{n}{2}+1)\times (\frac{n}{2}+1)$ grid with at most $s_n+1$ maximal segments, where $s_n=\frac{n}{2}+3$ is the lower bound on the number of maximal segments in a convex drawing of G.

Sudip Biswas, Debajyoti Mondal, Rahnuma Islam Nishat, Md. Saidur Rahman
On Three Parameters of Invisibility Graphs

The invisibility graph I(X) of a set $X \subseteq {\mathbb R}^d$ is a (possibly infinite) graph whose vertices are the points of X, and two vertices are connected by an edge if and only if the straight-line segment connecting the two corresponding points is not fully contained in X.

We settle a conjecture of Matoušek and Valtr claiming that for invisibility graphs of planar sets, the chromatic number cannot be bounded in terms of the clique number.

Josef Cibulka, Jan Kynčl, Viola Mészáros, Rudolf Stolař, Pavel Valtr
Imbalance Is Fixed Parameter Tractable

In the Imbalance Minimization problem we are given a graph G = (V, E) and an integer b, and are asked whether there is an ordering $v_1 \ldots v_n$ of V such that the sum of the imbalances of all the vertices is at most b. The imbalance of a vertex $v_i$ is the absolute value of the difference between the number of its neighbors to the left and to the right of $v_i$. The problem is also known as the Balanced Vertex Ordering problem, and it finds many applications in graph drawing. We show that this problem is fixed-parameter tractable and provide an algorithm that runs in time $2^{O(b\log b)} \cdot n^{O(1)}$. This resolves an open problem of Kára et al. [COCOON 2005].
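Computing the objective for a fixed ordering is simple, which may help make the definition concrete; the hard part is minimizing over all orderings. A minimal Python sketch (my illustration):

```python
def total_imbalance(order, edges):
    """order: list of vertices, left to right; edges: iterable of pairs.
    Imbalance of v = |#neighbors left of v - #neighbors right of v|."""
    pos = {v: i for i, v in enumerate(order)}
    adj = {v: [] for v in order}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    total = 0
    for v in order:
        left = sum(1 for w in adj[v] if pos[w] < pos[v])
        right = len(adj[v]) - left
        total += abs(left - right)
    return total

# Star K_{1,3}: every vertex has odd degree, so every ordering
# yields total imbalance at least 4, and 4 is achievable.
star = [('c', 'a'), ('c', 'b'), ('c', 'd')]
print(total_imbalance(['a', 'c', 'b', 'd'], star))  # 4
```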

Daniel Lokshtanov, Neeldhara Misra, Saket Saurabh
The Ramsey Number for a Linear Forest versus Two Identical Copies of Complete Graphs

Let H be a graph with chromatic number h and chromatic surplus s. A connected graph G of order n is called H-good if $R(G,H) = (n-1)(h-1) + s$. We show that $P_n$ is $2K_m$-good for n ≥ 3. Furthermore, we obtain the Ramsey number $R(L, 2K_m)$, where L is a linear forest. In addition, we also give the Ramsey number $R(L, H_m)$, which is an extension of $R(kP_n, H_m)$ proposed by Ali et al. [1], where $H_m$ is a cocktail party graph on 2m vertices.

I W. Sudarsana, Adiwijaya, S. Musdalifah

Computational Geometry

Optimal Binary Space Partitions in the Plane

An optimal BSP for a set S of disjoint line segments in the plane is a BSP for S that produces the minimum number of cuts. We study optimal BSPs for three classes of BSPs, which differ in the splitting lines that can be used when partitioning a set of fragments in the recursive partitioning process: free BSPs can use any splitting line, restricted BSPs can only use splitting lines through pairs of fragment endpoints, and auto-partitions can only use splitting lines containing a fragment. We obtain the following two results:

It is NP-hard to decide whether a given set of segments admits an auto-partition that does not make any cuts.

An optimal restricted BSP makes at most 2 times as many cuts as an optimal free BSP for the same set of segments.

Mark de Berg, Amirali Khosravi
Exact and Approximation Algorithms for Geometric and Capacitated Set Cover Problems

First, we study geometric variants of the standard set cover problem, motivated by the assignment of directional antennas and by shipping with deadlines, providing the first known polynomial-time exact solutions.

Next, we consider the following general (not necessarily geometric) capacitated set cover problem. We are given a set of elements with real weights and a family of sets over the elements. One can use a set if it is a subset of one of the sets in the family and the sum of the weights of its elements is at most one. The goal is to cover all the elements with the allowed sets.

We show that any polynomial-time algorithm that approximates the uncapacitated version of the set cover problem with ratio r can be converted to an approximation algorithm for the capacitated version with ratio r + 1.357.

The composition of these two results yields a polynomial-time approximation algorithm with ratio 2.357 for the problem of covering a set of customers, represented by a weighted n-point set, with a minimum number of antennas of variable angular range and fixed capacity.

Piotr Berman, Marek Karpinski, Andrzej Lingas
Effect of Corner Information in Simultaneous Placement of K Rectangles and Tableaux

We consider the optimization problem of finding k nonintersecting rectangles and tableaux in an n × n pixel plane, where each pixel has a real-valued weight. We discuss the existence of efficient algorithms if a corner point of each rectangle/tableau is specified.

Shinya Anzai, Jinhee Chun, Ryosei Kasai, Matias Korman, Takeshi Tokuyama
Detecting Areas Visited Regularly

We are given a trajectory $\mathcal T$ and an area $\mathcal A$. $\mathcal T$ might intersect $\mathcal A$ several times, and our aim is to detect whether $\mathcal T$ visits $\mathcal A$ with some regularity; e.g., what is the longest time span over which a GPS-GSM equipped elephant visited a specific lake on a daily (weekly or yearly) basis, where the elephant has to visit the lake most of the days (weeks or years), but not necessarily on every day (week or year). We call this a regular pattern with period of one day (week or year, respectively).

We consider the most general version of the problem defined in [8], the case where we are not given the period length of the regular pattern but have to find the longest regular pattern over all possible period lengths. We give an exact algorithm with ${\mathcal O}(n^{3.5} \log^3 n)$ running time and an approximate algorithm with ${\mathcal O}(\frac{1}{\varepsilon} n^3 \log^2 n)$ running time.

We also consider the problem of finding a region that is visited regularly if one is not given. We give exact and approximate algorithms for this problem when the period length is fixed.

Bojan Djordjevic, Joachim Gudmundsson
Tile-Packing Tomography Is ${\mathbb{NP}}$-hard

Discrete tomography deals with reconstructing finite spatial objects from their projections. The objects we study in this paper are called tilings or tile-packings, and they consist of a number of disjoint copies of a fixed tile, where a tile is defined as a connected set of grid points. A row projection specifies how many grid points are covered by tiles in a given row; column projections are defined analogously. For a fixed tile, is it possible to reconstruct its tilings from their projections in polynomial time? It is known that the answer to this question is affirmative if the tile is a bar (its width or height is 1), while for some other types of tiles ${\mathbb{NP}}$-hardness results have been shown in the literature. In this paper we present a complete solution to this question by showing that the problem remains ${\mathbb{NP}}$-hard for all tiles other than bars.

Marek Chrobak, Christoph Dürr, Flavio Guíñez, Antoni Lozano, Nguyen Kim Thang
The Rectilinear k-Bends TSP

We study a hard geometric problem. Given n points in the plane and a positive integer k, the Rectilinear k-Bends Traveling Salesman Problem asks if there is a piecewise-linear tour through the n points with at most k bends, where every line segment in the path is either horizontal or vertical. The problem has applications in VLSI design. We prove that this problem belongs to the class FPT (fixed-parameter tractable). We give an algorithm that runs in $O(kn^2 + k4^kn)$ time by kernelization. We present two variations on the main result. These variations are derived from the distinction between line segments and lines. Note that a rectilinear tour with k bends is a cover with k line segments, and therefore a cover by lines. We derive FPT-algorithms using bounded-search-tree techniques and improve the time complexity for these variants.

Vladimir Estivill-Castro, Apichat Heednacram, Francis Suraweera
Tracking a Generator by Persistence

Persistent homology provides a mathematical tool to describe “features” in a principled manner. The persistence algorithm proposed by Edelsbrunner et al. [5] can compute not only the persistent homology for a filtered simplicial complex, but also representative generating cycles for persistent homology groups. However, if there are dynamic changes either in the filtration or in the underlying simplicial complex, the representative generating cycle can change wildly. In this paper, we consider the problem of tracking generating cycles with temporal coherence. Specifically, our goal is to track a chosen essential generating cycle so that the changes in it are “local”. This requires reordering simplices in the filtration. To handle reordering operations, we build upon the matrix framework proposed by Cohen-Steiner et al. [3] to swap two consecutive simplices, so that we can process a reordering directly. We present an application showing how our algorithm can track an essential cycle in a complex constructed out of point cloud data.

Oleksiy Busaryev, Tamal K. Dey, Yusu Wang
Auspicious Tatami Mat Arrangements

We introduce tatami tilings and present some of the many interesting questions that arise when studying them. Roughly speaking, we are considering tilings of rectilinear regions with 1×2 dimer tiles and 1×1 monomer tiles, with the constraint that no four corners of the tiles meet. Typical problems are to minimize the number of monomers in a tiling, or to count the number of tilings of a particular shape. We determine the underlying structure of tatami tilings of rectangles and use this to prove that the number of tatami tilings of an n × n square with n monomers is $n2^{n-1}$. We also prove that, for fixed height, the number of tatami tilings of a rectangle is a rational function, and outline an algorithm that produces the coefficients of the two polynomials of the numerator and the denominator.

Alejandro Erickson, Frank Ruskey, Mark Schurch, Jennifer Woodcock

Automata, Logic, Algebra and Number Theory

Faster Generation of Shorthand Universal Cycles for Permutations

A universal cycle for the k-permutations of 〈n〉 = {1,2,...,n} is a circular string of length $(n)_k$ that contains each k-permutation exactly once as a substring. Jackson (Discrete Mathematics, 149 (1996) 123–129) proved their existence for all k ≤ n − 1. Knuth (The Art of Computer Programming, Volume 4, Fascicle 2, Addison-Wesley, 2005) pointed out the importance of the k = n − 1 case, where each (n − 1)-permutation is “shorthand” for exactly one permutation of 〈n〉. Ruskey and Williams (ACM Transactions on Algorithms, in press) answered Knuth’s request for an explicit construction of a shorthand universal cycle for permutations, and gave an algorithm that creates successive symbols in worst-case O(1) time. This paper provides two new algorithmic constructions that create successive blocks of n symbols in O(1) amortized time within an array of length n. The constructions are based on: (a) an approach known to bell-ringers for over 300 years, and (b) the recent shift Gray code by Williams (SODA, (2009) 987–996). For (a), we show that the majority of changes between successive permutations are full rotations; asymptotically, the proportion of them is (n − 2)/n.

Alexander Holroyd, Frank Ruskey, Aaron Williams
The Complexity of Word Circuits

A word circuit [1] is a directed acyclic graph in which each edge holds a w-bit word (i.e. some $x \in \{0,1\}^w$) and each node is a gate computing some binary function $g: \{0,1\}^w \times \{0,1\}^w \to \{0,1\}^w$. The following problem was studied in [1]: how many binary gates are needed to compute a ternary function $f: (\{0,1\}^w)^3 \to \{0,1\}^w$? They proved that $(2+o(1))2^w$ binary gates are enough for any ternary function, and that there exists a ternary function which requires word circuits of size $(1-o(1))2^w$. One of the open problems in [1] is to make these bounds tight up to a low-order term. In this paper we solve this problem by constructing new word circuits for ternary functions of size $(1+o(1))2^w$. We also investigate the problem in a general setting: how many k-input word gates are needed for computing an n-input word function $f: (\{0,1\}^w)^n \to \{0,1\}^w$ (here n ≥ k)? We show that for any fixed n, $(1-o(1))2^{(n-k)w}$ basic gates are necessary and $(1+o(1))2^{(n-k)w}$ gates are sufficient (assuming w is sufficiently large). Since the word circuit is a natural generalization of the Boolean circuit, we also consider the case when w is a constant and the number of inputs n is sufficiently large. We show that $(1\pm o(1))\frac{2^{wn}}{(k-1)n}$ basic gates are necessary and sufficient in this case.

Xue Chen, Guangda Hu, Xiaoming Sun
On the Density of Regular and Context-Free Languages

The density of a language is defined as the function $d_L(n) = | L \cap \Sigma^n |$ and counts the number of words of a certain length accepted by L. The study of the density of regular and context-free languages has attracted some attention, culminating in the fact that such languages are either sparse, when the density can be bounded by a polynomial, or dense otherwise. We show that for all unambiguous context-free languages the number of accepted words of a given length n can also be computed recursively from a finite combination of the numbers of accepted words of lengths smaller than n, i.e., $d_L(n) = \sum_{j=1}^k u_j\, d_L(n-j)$. This extends an old result by Chomsky and provides us with a more expressive description of, and new insights into, possible applications of the density function for such languages, as well as possible characterizations of the density of higher languages.
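For regular languages the density can be computed by a simple dynamic program over a DFA: the number of length-n words reaching each state satisfies exactly this kind of linear recurrence. A minimal Python sketch (my illustration, using a small DFA for binary strings with no two consecutive 1s, where the density is a Fibonacci number):

```python
def density(n, states, delta, start, accepting):
    """Number of words of length n accepted by a (partial) DFA.
    delta: dict (state, symbol) -> state; missing transitions reject."""
    # count[q] = number of length-i words leading from start to q
    count = {q: 0 for q in states}
    count[start] = 1
    for _ in range(n):
        nxt = {q: 0 for q in states}
        for (q, a), r in delta.items():
            nxt[r] += count[q]
        count = nxt
    return sum(count[q] for q in accepting)

# DFA over {0,1} avoiding "11": d_L(n) = d_L(n-1) + d_L(n-2).
delta = {('a', '0'): 'a', ('a', '1'): 'b', ('b', '0'): 'a'}
print([density(n, {'a', 'b'}, delta, 'a', {'a', 'b'}) for n in range(8)])
# [1, 2, 3, 5, 8, 13, 21, 34]
```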

Michael Hartwig
Extensions of the Minimum Cost Homomorphism Problem

Assume D is a finite set and R is a finite set of functions from D to the natural numbers. An instance of the minimum R-cost homomorphism problem (${\rm MinHom}_R$) is a set of variables V subject to specified constraints, together with a positive weight $c_{vr}$ for each combination of v ∈ V and r ∈ R. The aim is to find a function $f: V \to D$ such that f satisfies all constraints and $\sum_{v \in V}\sum_{r \in R} c_{vr}\, r(f(v))$ is maximized.

This problem unifies well-known optimization problems such as the minimum cost homomorphism problem and the maximum solution problem, and this makes it a computationally interesting fragment of the valued CSP framework for optimization problems. We parameterize ${\rm MinHom}_R$ by constraint languages, i.e. sets Γ of relations that are allowed in constraints. A constraint language is called conservative if every unary relation is a member of it; such constraint languages play an important role in understanding the structure of constraint problems. The dichotomy conjecture for ${\rm MinHom}_R$ is the following statement: if Γ is a constraint language, then ${\rm MinHom}_R$ is either polynomial-time solvable or NP-complete. For MinHom the dichotomy result has been recently obtained [Takhanov, STACS, 2010], and the goal of this paper is to expand this result to the case of ${\rm MinHom}_R$ with conservative constraint languages. For arbitrary R this problem is still open, but assuming certain restrictions on R we prove a dichotomy. As a consequence of this result we obtain a dichotomy for the conservative maximum solution problem.

Rustem Takhanov
The Longest Almost-Increasing Subsequence

Given a sequence of n elements, we introduce the notion of an almost-increasing subsequence in two contexts. The first notion is the longest subsequence that can be converted to an increasing subsequence by possibly adding a value, that is at most a fixed constant c, to each of the elements. We show how to optimally construct such a subsequence in $O(n\log k)$ time, where k is the length of the output subsequence. As an exercise, we show how to produce in $O(n^2\log k)$ time a special type of subsequence, which we call a subsequence obeying the triangle inequality, by using our algorithm for the above case as a subroutine. The second notion is the longest subsequence where every element is at least the value of a monotonically non-decreasing function of the r preceding elements (or even with respect to every r elements among those preceding it). We show how to construct such a subsequence in $O(nr\log k)$ time.
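To make the first notion concrete, here is a minimal Python checker (my sketch, not the paper's O(n log k) construction; it assumes the added value lies in [0, c] and that "increasing" means non-decreasing) that tests whether an entire sequence is almost-increasing for a given c. A greedy left-to-right lift, raising each element as little as possible, is enough.

```python
def almost_increasing(seq, c):
    """Can we add a value in [0, c] to each element so the sequence
    becomes non-decreasing?  Lift each element as little as possible."""
    lifted = None
    for s in seq:
        y = s if lifted is None else max(s, lifted)
        if y > s + c:        # would need to add more than c to s
            return False
        lifted = y
    return True

print(almost_increasing([3, 1, 4, 2, 5], c=2))  # True
print(almost_increasing([3, 1, 4, 2, 5], c=1))  # False
```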

Amr Elmasry
Universal Test Sets for Reversible Circuits
(Extended Abstract)

A set of test vectors is complete for a reversible circuit if it covers all stuck-at faults on the wires of the circuit. It has been known that any reversible circuit has a surprisingly small complete test set, while it is NP-hard to generate a minimum complete test set for a reversible circuit. A test set is universal for a family of reversible circuits if it is complete for any circuit in the family. We show minimum universal test sets for some families of CNOT circuits.

Satoshi Tayu, Shota Fukuyama, Shuichi Ueno
Approximate Counting with a Floating-Point Counter

When many objects are counted simultaneously in large data streams, as in the course of network traffic monitoring, or Webgraph and molecular sequence analyses, memory becomes a limiting factor. Robert Morris [Communications of the ACM, 21:840–842, 1978] proposed a probabilistic technique for approximate counting that is extremely economical. The basic idea is to increment a counter containing the value X with probability $2^{-X}$. As a result, the counter contains an approximation of $\lg n$ after n probabilistic updates, stored in $\lg\lg n$ bits. Here we revisit the original idea of Morris. We introduce a binary floating-point counter that combines a d-bit significand with a binary exponent, stored together in $d+\lg\lg n$ bits. The counter yields a simple formula for an unbiased estimate of n with a standard deviation of about $0.6\cdot n2^{-d/2}$.

We analyze the floating-point counter’s performance in a general framework that applies to any probabilistic counter. In that framework, we provide practical formulas to construct unbiased estimates, and to assess the asymptotic accuracy of any counter.
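The basic Morris scheme described above is a few lines of code. A minimal Python sketch (my illustration of the classic counter, not of the paper's floating-point variant):

```python
import random

def morris_count(n):
    """Feed n events into a Morris counter; return (X, estimate).
    X approximates lg(n), and 2^X - 1 is an unbiased estimate of n."""
    x = 0
    for _ in range(n):
        if random.random() < 2.0 ** (-x):  # increment w.p. 2^{-X}
            x += 1
    return x, 2 ** x - 1

random.seed(1)
# X is around lg(100000) ~ 17, so the estimate is ~1e5 on average.
print(morris_count(100_000))
```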

Miklós Csűrös

Network Optimization and Scheduling Algorithm

Broadcasting in Heterogeneous Tree Networks

We consider the broadcasting problem in heterogeneous tree networks. A heterogeneous tree network is represented by a weighted tree T = (V, E) such that the weight of each edge denotes the communication time between its two end vertices. The broadcasting problem is to find a broadcast center such that the maximum communication time from the broadcast center to all vertices is minimized. In this paper, we propose a linear-time algorithm for the broadcasting problem in a heterogeneous tree network following the postal model. As a byproduct of the algorithm, we can compute in linear time the broadcasting time of any vertex in the tree, i.e., the maximum time required to transmit messages from the vertex to every other vertex in the tree. Furthermore, an optimal sequence by which the broadcast center broadcasts its messages to all vertices in T can also be determined in linear time.

Yu-Hsuan Su, Ching-Chi Lin, D. T. Lee
Contention Resolution in Multiple-Access Channels: k-Selection in Radio Networks

In this paper, contention resolution among k contenders on a multiple-access channel is explored. The problem studied has been modeled as k-Selection in Radio Networks, in which every contender has to have exclusive access to a shared communication channel at least once. The randomized adaptive protocol presented shows that, for a probability of error $2\varepsilon$, all the contenders get access to the channel in time $(e+1+\xi)k + O(\log^2(1/\varepsilon))$, where ε ≤ 1/(n + 1), ξ > 0 is any constant arbitrarily close to 0, and n is the total number of potential contenders. The above time complexity is asymptotically optimal for any significant ε. The protocol works even if the number of contenders k is unknown and collisions cannot be detected.

Antonio Fernández Anta, Miguel A. Mosteiro
Online Preemptive Scheduling with Immediate Decision or Notification and Penalties

We consider online preemptive scheduling problems where jobs have deadlines and the objective is to maximize the total value of jobs completed before their deadlines. In the first problem, preemptions are not free but incur a penalty. In the second problem, a job has to be accepted or rejected immediately upon arrival, and possibly allocated a fixed scheduling interval as well; if these accepted jobs are eventually not completed they incur a penalty (on top of not getting the value of the job). We give an algorithm with the optimal competitive ratio for the first problem, and new and improved algorithms for the second problem.

Stanley P. Y. Fung

Computational Biology and Bioinformatics

Discovering Pairwise Compatibility Graphs

Let T be an edge-weighted tree, let $d_T(u,v)$ be the sum of the weights of the edges on the path from u to v in T, and let $d_{\min}$ and $d_{\max}$ be two non-negative real numbers such that $d_{\min} \leq d_{\max}$. Then a pairwise compatibility graph of T for $d_{\min}$ and $d_{\max}$ is a graph G = (V, E), where each vertex u′ ∈ V corresponds to a leaf u of T and there is an edge (u′, v′) ∈ E if and only if $d_{\min} \leq d_T(u,v) \leq d_{\max}$. A graph G is called a pairwise compatibility graph (PCG) if there exist an edge-weighted tree T and two non-negative real numbers $d_{\min}$ and $d_{\max}$ such that G is a pairwise compatibility graph of T for $d_{\min}$ and $d_{\max}$. Kearney et al. conjectured that every graph is a PCG [3]. In this paper, we refute the conjecture by showing that not all graphs are PCGs. We also show that the well-known tree power graphs and some of their extensions are PCGs.
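The definition is easy to operationalize: given a weighted tree, the leaf-to-leaf distances and the two thresholds determine the graph. A minimal Python sketch (my illustration) that builds the pairwise compatibility graph of a small tree:

```python
from itertools import combinations

def pcg(tree, leaves, dmin, dmax):
    """tree: dict v -> list of (neighbor, weight); leaves: leaf labels.
    Returns the edges {u, v} with dmin <= d_T(u, v) <= dmax."""
    def dists(src):
        # Distances from src to every vertex, by DFS over the tree.
        d, stack = {src: 0}, [src]
        while stack:
            u = stack.pop()
            for v, w in tree[u]:
                if v not in d:
                    d[v] = d[u] + w
                    stack.append(v)
        return d
    dist = {leaf: dists(leaf) for leaf in leaves}
    return {frozenset((u, v)) for u, v in combinations(leaves, 2)
            if dmin <= dist[u][v] <= dmax}

# A star with center 'c' and leaves at distances 1, 2, 3:
# d(a,b) = 3 and d(a,d) = 4 fall in [3, 4], d(b,d) = 5 does not.
tree = {'c': [('a', 1), ('b', 2), ('d', 3)],
        'a': [('c', 1)], 'b': [('c', 2)], 'd': [('c', 3)]}
print(pcg(tree, ['a', 'b', 'd'], dmin=3, dmax=4))
```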

Muhammad Nur Yanhaona, Md. Shamsuzzoha Bayzid, Md. Saidur Rahman
Near Optimal Solutions for Maximum Quasi-bicliques

The maximum quasi-biclique problem has been proposed for finding interacting protein group pairs from large protein-protein interaction (PPI) networks. The problem is defined as follows:

The Maximum Quasi-biclique Problem: Given a bipartite graph G = (X ∪ Y, E) and a number 0 < δ ≤ 0.5, find a subset $X_{opt}$ of X and a subset $Y_{opt}$ of Y such that any vertex $x \in X_{opt}$ is incident to at least $(1-\delta)|Y_{opt}|$ vertices in $Y_{opt}$, any vertex $y \in Y_{opt}$ is incident to at least $(1-\delta)|X_{opt}|$ vertices in $X_{opt}$, and $|X_{opt}| + |Y_{opt}|$ is maximized.

The problem was proved to be NP-hard [2]. We design a polynomial time approximation scheme that, for any ε > 0, gives a quasi-biclique $(X_a, Y_a)$ with $X_a \subseteq X$ and $Y_a \subseteq Y$ satisfying $|X_a| \geq (1-\epsilon)|X_{opt}|$ and $|Y_a| \geq (1-\epsilon)|Y_{opt}|$, such that any vertex $x \in X_a$ is incident to at least $(1-\delta-\epsilon)|Y_a|$ vertices in $Y_a$ and any vertex $y \in Y_a$ is incident to at least $(1-\delta-\epsilon)|X_a|$ vertices in $X_a$, where $X_{opt}$ and $Y_{opt}$ form the optimal solution.

Lusheng Wang
Fast Coupled Path Planning: From Pseudo-Polynomial to Polynomial

The coupled path planning (CPP) problem models the motion paths of the leaves of a multileaf collimator for optimally reproducing the prescribed dose in intensity-modulated radiation therapy (IMRT). Two versions of the CPP problem, unconstrained and constrained CPP, are studied, depending on whether the starting and ending positions of the leaf paths are specified. By exploring underlying properties of the problem such as submodularity and L$^\natural$-convexity, we solve both CPP problems in polynomial time using path-end hopping, local searching and proximity scaling techniques, improving the current best known pseudo-polynomial time algorithms. Our algorithms are simple and easy to implement. Experimental results on real medical data show that our CPP algorithms outperform the previous best-known algorithm by at least one order of magnitude.

Yunlong Liu, Xiaodong Wu
Constant Time Approximation Scheme for Largest Well Predicted Subset

The largest well predicted subset problem is formulated for the comparison of two predicted 3D protein structures from the same sequence. Given two ordered point sets $A = \{a_1, \cdots, a_n\}$ and $B = \{b_1, b_2, \cdots, b_n\}$ containing n points each, and a threshold d, the largest well predicted subset problem is to find the rigid transformation T for a largest subset $B_{opt}$ of B such that the distance between $a_i$ and $T(b_i)$ is at most d for every $b_i$ in $B_{opt}$. A meaningful prediction requires that the size of $B_{opt}$ is at least αn for some constant α [8]. We use LWPS(A, B, d, α) to denote the largest well predicted subset problem with meaningful prediction. A $(1+\delta_1, 1-\delta_2)$-approximation for LWPS(A, B, d, α) is to find a transformation T that brings a subset $B' \subseteq B$ of size at least $(1-\delta_2)|B_{opt}|$ such that for each $b_i \in B'$, the Euclidean distance between the two points satisfies ${\rm distance}(a_i, T(b_i)) \leq (1+\delta_1)d$. We develop a constant time $(1+\delta_1, 1-\delta_2)$-approximation algorithm for LWPS(A, B, d, α) for arbitrary positive constants $\delta_1$ and $\delta_2$.

Bin Fu, Lusheng Wang
On Sorting Permutations by Double-Cut-and-Joins

The problem of sorting permutations by double-cut-and-joins (SBD) arises when we perform double-cut-and-join (DCJ) operations on pairs of unichromosomal genomes without gene strandedness information. In this paper we show it is an NP-hard problem by reduction to an equivalent previously-known problem, called breakpoint graph decomposition (BGD), which calls for a largest collection of edge-disjoint alternating cycles in a breakpoint graph. To obtain a better approximation algorithm for the SBD problem, we make a suitable modification to Lin and Jiang's algorithm, which was initially proposed to approximate the BGD problem, and then carry out a rigorous performance analysis via fractional linear programming. The approximation ratio thus achieved for the SBD problem is $\frac{17}{12}+\epsilon \approx 1.4167 +\epsilon$, for any positive ε.

Xin Chen
A Three-String Approach to the Closest String Problem

Given a set of n strings of length L and a radius d, the closest string problem asks for a new string $t_{sol}$ that is within a Hamming distance of d to each of the given strings. It is known that the problem is NP-hard and its optimization version admits a polynomial time approximation scheme (PTAS). Parameterized algorithms have then been developed to solve the problem when d is small. In this paper, with a new approach (called the 3-string approach), we first design a parameterized algorithm for binary strings that runs in $O\left(nL + nd^3 6.731^d\right)$ time, while the previous best runs in $O\left(nL + nd 8^d\right)$ time. We then extend the algorithm to arbitrary alphabet sizes, obtaining an algorithm that runs in $O\left(nL + nd 1.612^d \left(\alpha^2 + 1 - 2\alpha^{-1} + \alpha^{-2}\right)^{3d}\right)$ time, where $\alpha = \sqrt[3]{\mathstrut \sqrt{|\Sigma|-1}+1}$. This new time bound is better than the previous best for small alphabets, including the very important case where |Σ| = 4 (i.e., the case of DNA strings).
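A solution to this problem is easy to verify even though finding one is hard. A minimal Python sketch (my illustration):

```python
def hamming(s, t):
    """Hamming distance between two equal-length strings."""
    return sum(a != b for a, b in zip(s, t))

def is_closest_string(candidate, strings, d):
    """Check that `candidate` is within Hamming distance d of every
    input string, i.e., it is a valid center of radius d."""
    return all(hamming(candidate, s) <= d for s in strings)

strings = ["ACGT", "ACGA", "TCGT"]
print(is_closest_string("ACGT", strings, d=1))  # True: distances 0, 1, 1
```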

Zhi-Zhong Chen, Bin Ma, Lusheng Wang
A 2k Kernel for the Cluster Editing Problem

The cluster editing problem for a given graph G and a given parameter k asks if one can apply at most k edge insertion/deletion operations on G so that the resulting graph is a union of disjoint cliques. The problem has attracted much attention because of its applications in bioinformatics. In this paper, we present a polynomial time kernelization algorithm for the problem that produces a kernel of size bounded by 2k, improving the previously best kernel of size 4k for the problem.

Jianer Chen, Jie Meng

Data Structure and Sampling Theory

On the Computation of 3D Visibility Skeletons

The 3D visibility skeleton is a data structure that encodes the global visibility information of a set of 3D objects. While it is useful in answering global visibility queries, its large size often limits its practical use. In this paper, we address this issue by proposing a subset of the visibility skeleton, which is empirically about 25% to 50% of the whole set. We show that the rest of the data structure can be recovered from the subset as needed, partially or completely. The running time complexity, which we analyze in terms of output size, is efficient. We also prove that the subset is minimal in the sense that the complexity bound ceases to hold if the subset is restricted further.

Sylvain Lazard, Christophe Weibel, Sue Whitesides, Linqiao Zhang
The Violation Heap: A Relaxed Fibonacci-Like Heap

We give a priority queue that achieves the same amortized bounds as Fibonacci heaps. Namely, find-min requires O(1) worst-case time; insert, meld and decrease-key require O(1) amortized time; and delete-min requires $O(\log n)$ amortized time. Our structure is simple and promises an efficient practical behavior when compared to other known Fibonacci-like heaps. The main idea behind our construction is to propagate rank updates instead of performing cascaded cuts following a decrease-key operation, allowing for a relaxed structure.

Amr Elmasry
Threshold Rules for Online Sample Selection

We consider the following sample selection problem. We observe in an online fashion a sequence of samples, each endowed by a quality. Our goal is to either select or reject each sample, so as to maximize the aggregate quality of the subsample selected so far. There is a natural trade-off here between the rate of selection and the aggregate quality of the subsample. We show that for a number of such problems extremely simple and oblivious “threshold rules” for selection achieve optimal tradeoffs between rate of selection and aggregate quality in a probabilistic sense. In some cases we show that the same threshold rule is optimal for a large class of quality distributions and is thus oblivious in a strong sense.
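A concrete instance of such a rule (my sketch; the fixed threshold values are illustrative assumptions, not tradeoff-optimal choices from the paper): select a sample exactly when its quality exceeds a fixed threshold, obliviously of everything seen so far.

```python
import random

def threshold_select(stream, threshold):
    """Oblivious threshold rule: keep each sample iff its quality
    exceeds the fixed threshold, regardless of history."""
    selected = [q for q in stream if q > threshold]
    rate = len(selected) / len(stream)
    aggregate = sum(selected)
    return rate, aggregate

random.seed(0)
stream = [random.random() for _ in range(10_000)]  # i.i.d. qualities
# Raising the threshold lowers the selection rate but raises the
# average quality of the subsample that is kept.
for thr in (0.0, 0.5, 0.9):
    print(thr, threshold_select(stream, thr))
```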

Eric Bach, Shuchi Chawla, Seeun Umboh
Heterogeneous Subset Sampling

In this paper, we consider the problem of heterogeneous subset sampling. Each element of a domain set has a different probability of being included in a sample, which is a subset of the domain set. Drawing a sample from a domain set of size n takes O(n) time if a Naive algorithm is employed. We propose a Hybrid algorithm that requires O(n) preprocessing time and O(n) extra space. On average, it draws a sample in $O(1+n\sqrt{p^{*}})$ time, where $p^*$ is $\min(p_\mu, 1-p_\mu)$ and $p_\mu$ denotes the mean of the inclusion probabilities. In the worst case, it takes O(n) time to draw a sample. In addition to the theoretical analysis, we evaluate the performance of the Hybrid algorithm via experiments and present an application for particle-based simulations of the spread of a disease.
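For reference, the Naive O(n) baseline that the Hybrid algorithm improves on average is one independent biased coin flip per element. A minimal Python sketch (my illustration):

```python
import random

def naive_subset_sample(probs):
    """probs: list where probs[i] is element i's inclusion probability.
    One independent biased coin per element: O(n) time per sample."""
    return [i for i, p in enumerate(probs) if random.random() < p]

random.seed(0)
probs = [0.9, 0.01, 0.5, 0.2]
print(naive_subset_sample(probs))  # [0, 2]
```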

Meng-Tsung Tsai, Da-Wei Wang, Churn-Jung Liau, Tsan-sheng Hsu

Cryptography, Security, Coding and Game Theory

Identity-Based Authenticated Asymmetric Group Key Agreement Protocol

In identity-based public-key cryptography, an entity’s public key can be easily derived from its identity. The direct derivation of public keys in identity-based public-key cryptography eliminates the need for certificates and solves certain public key management problems in traditional public-key cryptosystems. Recently, the notion of asymmetric group key agreement was introduced, in which the group members merely negotiate a common encryption key which is accessible to any entity, but they hold respective secret decryption keys. In this paper, we first propose a security model for identity-based authenticated asymmetric group key agreement (IB-AAGKA) protocols. We then propose an IB-AAGKA protocol which is proven secure under the Bilinear Diffie-Hellman Exponent assumption.

Lei Zhang, Qianhong Wu, Bo Qin, Josep Domingo-Ferrer
Zero-Knowledge Argument for Simultaneous Discrete Logarithms

In Crypto’92, Chaum and Pedersen introduced a widely-used protocol (CP protocol for short) for proving the equality of two discrete logarithms (EQDL) with unconditional soundness, which plays a central role in DL-based cryptography. Somewhat surprisingly, the CP protocol has never been improved in the nearly two decades since its advent. We note that the CP protocol is usually used as a non-interactive proof via the Fiat-Shamir heuristic, which inevitably relies on the random oracle model (ROM) and assumes that the adversary is computationally bounded. In this paper, we present an EQDL protocol in the ROM which saves ≈40% of the computational cost and ≈33% of the prover’s uploading bandwidth. Our idea naturally extends to simultaneously showing the equality of n discrete logarithms with an O(1)-size commitment, in contrast to the n-element adaptation of the CP protocol, which requires O(n) size. This improvement benefits a variety of interesting cryptosystems, ranging from signatures and anonymous credential systems to verifiable secret sharing and threshold cryptosystems. As an example, we present a signature scheme that takes only one (offline) exponentiation to sign, without utilizing pairings, relying on the standard decisional Diffie-Hellman assumption.

Sherman S. M. Chow, Changshe Ma, Jian Weng
Directed Figure Codes: Decidability Frontier

We consider directed figures defined as labelled polyominoes with designated start and end points, with a partial catenation. As we are interested in codicity verification for sets of figures, we show that it is decidable for sets of figures with parallel translation vectors, which is one of the decidability border cases in this setting. We give a constructive proof which leads to a straightforward algorithm.

Michał Kolarz
Backmatter
Metadata
Title
Computing and Combinatorics
Editors
My T. Thai
Sartaj Sahni
Copyright Year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-14031-0
Print ISBN
978-3-642-14030-3
DOI
https://doi.org/10.1007/978-3-642-14031-0
