
2008 | Book

Algorithms and Computation

19th International Symposium, ISAAC 2008, Gold Coast, Australia, December 15-17, 2008. Proceedings

Edited by: Seok-Hee Hong, Hiroshi Nagamochi, Takuro Fukunaga

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this Book

This volume contains the proceedings of the 19th International Symposium on Algorithms and Computation (ISAAC 2008), held on the Gold Coast, Australia, December 15–17, 2008. In the past, it was held in Tokyo (1990), Taipei (1991), Nagoya (1992), Hong Kong (1993), Beijing (1994), Cairns (1995), Osaka (1996), Singapore (1997), Daejeon (1998), Chennai (1999), Taipei (2000), Christchurch (2001), Vancouver (2002), Kyoto (2003), Hong Kong (2004), Hainan (2005), Kolkata (2006), and Sendai (2007). ISAAC is an annual international symposium that covers the very wide range of topics in the field of algorithms and computation. The main purpose of the symposium is to provide a forum for researchers working in algorithms and theory of computation from all over the world. In response to our call for papers, we received 229 submissions from 40 countries. The task of selecting the papers in this volume was done by our Program Committee and many other external reviewers. After an extremely rigorous review process and extensive discussion, the Committee selected 78 papers. We hope all accepted papers will eventually appear in scientific journals in a more polished form. Two special issues, one of Algorithmica and one of the International Journal on Computational Geometry and Applications, with selected papers from ISAAC 2008 are in preparation.

Table of Contents

Frontmatter

Invited Talk

Constant-Working-Space Algorithms: How Fast Can We Solve Problems without Using Any Extra Array?

In this talk I will present a new direction of algorithms which do not use any extra working array. More formally, we want to design efficient algorithms which require no extra array of size depending on the input size n, but use only constant working storage cells (variables), each having O(log n) bits. As an example, consider the problem of finding the median among n given numbers. A linear-time algorithm for the problem is well known. An ordinary implementation of the algorithm requires another array of the same size. It is not very hard to implement the algorithm without using any additional array, in other words, to design an in-place algorithm. Unfortunately, it is not a constant-working-space algorithm in our model since it requires some space, say O(log n) space, for maintaining recursive calls. The good news is that an efficient algorithm is known which finds the median in O(n^{1+ε}) time using O(1/ε) working space for any small positive constant ε. The algorithm finds the median without altering any element of the input array; in other words, the input array is treated as a read-only array.
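The model can be illustrated with a far simpler (and slower) read-only median algorithm than the O(n^{1+ε}) one mentioned above: counting ranks. This sketch is my own illustration of constant working space, not the algorithm from the talk; it runs in O(n²) time with O(1) working variables and never modifies the input.

```python
def median_readonly(a):
    """Lower median of a read-only sequence using O(1) extra variables.

    For each candidate a[i], count how many elements are smaller and how
    many are equal; the median is the element whose rank interval contains
    (n - 1) // 2.  O(n^2) time, and the input is never modified.
    """
    n = len(a)
    target = (n - 1) // 2  # zero-based rank of the lower median
    for i in range(n):
        smaller = sum(1 for x in a if x < a[i])
        equal = sum(1 for x in a if x == a[i])
        if smaller <= target < smaller + equal:
            return a[i]

print(median_readonly([7, 1, 5, 3, 9]))  # 5
```

Each `sum` pass streams over the array with a generator, so only a constant number of counters is live at any time.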

Tetsuo Asano
Some Constrained Notions of Planarity

Graph Drawing is the art and science of making pictures of graphs. Planarity has always played a central role in graph drawing research. Effective, efficient and elegant methods for drawing planar graphs were developed over the course of the last century by Wagner, Hopcroft and Tarjan, Read, de Fraysseix, Pach and Pollack, amongst others.

Peter Eades
Reachability Problems on Directed Graphs

I will present recent work and open problems on two directed graph reachability problems, one dynamic, one static. The dynamic problem is to detect the creation of a cycle in a directed graph as arcs are added. Much progress has been made recently on this problem, but intriguing questions remain. The static problem is to compute dominators and related information on a flowgraph. This problem has been solved, but the solution is complicated, and there are related problems that are not so well understood. The work to be discussed is by colleagues, other researchers, and the speaker.

Robert E. Tarjan

1A Approximation Algorithm I

Greedy Construction of 2-Approximation Minimum Manhattan Network

Given a set T of n points in the plane, a Manhattan network G is a network with all its edges horizontal or vertical segments, such that for all p, q ∈ T there exists in G a path (named a Manhattan path) of length exactly the Manhattan distance between p and q. The Minimum Manhattan Network problem is to find a Manhattan network of minimum length, i.e., the total length of the segments of the network is to be minimized. In this paper we present a 2-approximation algorithm with time complexity O(n log n), which improves on the 2-approximation algorithm with time complexity O(n²). Moreover, in contrast with other 2-approximation algorithms employing linear programming or dynamic programming techniques, ours shows for the first time that a greedy strategy alone suffices to obtain a 2-approximation network.

Zeyu Guo, He Sun, Hong Zhu
The Complexity of Minimum Convex Coloring

A coloring of the vertices of a graph is called convex if each subgraph induced by all vertices of the same color is connected. We consider three variants of recoloring a colored graph with minimal cost such that the resulting coloring is convex. Two variants of the problem are shown to be ${\mathcal{NP}}$-hard on trees even if in the initial coloring each color is used to color only a bounded number of vertices. For graphs of bounded treewidth, we present a polynomial-time (2 + ε)-approximation algorithm for these two variants and a polynomial-time algorithm for the third variant. Our results also show that, unless ${\mathcal{NP}} \subseteq DTIME(n^{O(\log \log n)})$, there is no polynomial-time approximation algorithm with a ratio of size (1 − o(1)) ln ln n for the following problem: given pairs of vertices in an undirected graph of bounded treewidth, determine the minimal possible number l for which all except l pairs can be connected by disjoint paths.

Frank Kammer, Torsten Tholey
On the Complexity of Reconfiguration Problems

Reconfiguration problems arise when we wish to find a step-by-step transformation between two feasible solutions of a problem such that all intermediate results are also feasible. We demonstrate that a host of reconfiguration problems derived from NP-complete problems are PSPACE-complete, while some are also NP-hard to approximate. In contrast, several reconfiguration versions of problems in P are solvable in polynomial time.
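The step-by-step transformations described above can be explored directly as a shortest-path search in the "reconfiguration graph" whose nodes are feasible solutions. The sketch below (my own illustration, not an algorithm from the paper) does this for independent-set reconfiguration under single-token moves, via BFS:

```python
from itertools import combinations
from collections import deque

def reconfigure(adj, start, goal):
    """BFS in the reconfiguration graph of independent sets.

    One move jumps a single token to any other vertex, and every
    intermediate set must remain independent.  Returns a shortest
    transformation as a list of sorted vertex lists, or None.
    """
    def independent(s):
        return all(v not in adj[u] for u, v in combinations(s, 2))

    start, goal = frozenset(start), frozenset(goal)
    parent = {start: None}
    queue = deque([start])
    while queue:
        s = queue.popleft()
        if s == goal:
            seq = []
            while s is not None:          # walk parents back to the start
                seq.append(sorted(s))
                s = parent[s]
            return seq[::-1]
        for u in s:                       # pick up one token ...
            for v in adj:                 # ... and drop it elsewhere
                if v not in s:
                    t = (s - {u}) | {v}
                    if independent(t) and t not in parent:
                        parent[t] = s
                        queue.append(t)
    return None

# Path 0-1-2-3: transform the independent set {0, 2} into {1, 3}.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(reconfigure(path, {0, 2}, {1, 3}))  # [[0, 2], [0, 3], [1, 3]]
```

The PSPACE-hardness results in the paper explain why this brute-force search is essentially unavoidable in general: the reconfiguration graph can be exponentially large.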

Takehiro Ito, Erik D. Demaine, Nicholas J. A. Harvey, Christos H. Papadimitriou, Martha Sideri, Ryuhei Uehara, Yushi Uno
Multiobjective Disk Cover Admits a PTAS

We introduce multiobjective disk cover problems and study their approximability. We construct a polynomial-time approximation scheme (PTAS) for the multiobjective problem where k types of points (customers) in the plane have to be covered by disks (base stations) such that the number of disks is minimized and, for each type of points, the number of covered points is maximized. Our approximation scheme can be extended so that it works with the following additional features: interferences, different services for different types of customers, different shapes of supply areas, weighted customers, individual costs for base stations, and payoff for the quality of the obtained service.

Furthermore, we show that it is crucial to solve this problem in a multiobjective way, where all objectives are optimized at the same time. The constrained approach (i.e., the restriction of a multiobjective problem to a single objective) often used for such problems can significantly degrade their approximability. We can show non-approximability results for several single-objective restrictions of multiobjective disk cover problems. For example, if there are 2 types of customers, then maximizing the supplied customers of one type is not even approximable within a constant factor, unless P = NP.

Christian Glaßer, Christian Reitwießner, Heinz Schmitz

1B Online Algorithm

Data Stream Algorithms via Expander Graphs

We present a simple way of designing deterministic algorithms for problems in the data stream model via lossless expander graphs. We illustrate this by considering two problems, namely, k-sparsity testing and estimating the frequency of items.

Sumit Ganguly
Improving the Competitive Ratio of the Online OVSF Code Assignment Problem

Online OVSF code assignment has an important application to wireless communications. Recently, this problem was formally modeled as an online problem, and the performance of online algorithms has been analyzed by competitive analysis. The previous best upper and lower bounds on the competitive ratio were 10 and 5/3, respectively. In this paper, we improve them to 7 and 2, respectively. We also show that our analysis of the upper bound is tight by giving an input sequence for which the competitive ratio of our algorithm is 7 − ε for arbitrary ε > 0.

Shuichi Miyazaki, Kazuya Okamoto
Optimal Key Tree Structure for Deleting Two or More Leaves

We study the optimal tree structure for the key management problem. In the key tree, when two or more leaves are deleted or replaced, the updating cost is defined to be the number of encryptions needed to securely update the remaining keys. Our objective is to find the optimal tree structure for which the worst-case updating cost is minimum. We first prove the degree upper bound (k + 1)² − 1 when k leaves are deleted from the tree. Then we focus on the 2-deletion problem and prove that the optimal tree is a balanced tree with certain root degree 5 ≤ d ≤ 7, where the number of leaves in the subtrees differs by at most one and each subtree is a 2-3 tree.

Weiwei Wu, Minming Li, Enhong Chen
Comparing First-Fit and Next-Fit for Online Edge Coloring

We study the performance of the algorithms First-Fit and Next-Fit for two online edge coloring problems. In the min-coloring problem, all edges must be colored using as few colors as possible. In the max-coloring problem, a fixed number of colors is given, and as many edges as possible should be colored. Previous analysis using the competitive ratio has not separated the performance of First-Fit and Next-Fit, but intuition suggests that First-Fit should be better than Next-Fit. We compare First-Fit and Next-Fit using the relative worst order ratio, and show that First-Fit is better than Next-Fit for the min-coloring problem. For the max-coloring problem, we show that First-Fit and Next-Fit are not strictly comparable, i.e., there are graphs for which First-Fit is better than Next-Fit and graphs where Next-Fit is slightly better than First-Fit.
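The First-Fit rule itself is compact: each arriving edge receives the lowest-numbered color absent from both endpoints, which yields a proper edge coloring online. The sketch below is my own illustration of the rule for the min-coloring setting, not the paper's worst-order analysis.

```python
from collections import defaultdict

def first_fit_edge_coloring(edges):
    """Color edges online: each edge takes the lowest color not already
    used on an edge sharing one of its endpoints (a proper edge coloring)."""
    used = defaultdict(set)  # vertex -> colors on its incident edges
    coloring = {}
    for u, v in edges:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

# Path a-b-c-d: adjacent edges must differ, and First-Fit reuses color 0.
print(first_fit_edge_coloring([("a", "b"), ("b", "c"), ("c", "d")]))
```

Next-Fit differs only in that it cycles through a fixed palette instead of always restarting the search at color 0, which is what the relative worst order ratio separates.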

Martin R. Ehmsen, Lene M. Favrholdt, Jens S. Kohrt, Rodica Mihai

2A Data Structure and Algorithm

Selecting Sums in Arrays

In an array of n numbers each of the $\binom{n}{2}+n$ contiguous subarrays defines a sum. In this paper we focus on algorithms for selecting and reporting maximal sums from an array of numbers. First, we consider the problem of reporting k subarrays inducing the k largest sums among all subarrays of length at least l and at most u. For this problem we design an optimal O(n + k) time algorithm. Secondly, we consider the problem of selecting a subarray storing the k'th largest sum. For this problem we prove a time bound of Θ(n · max{1, log(k/n)}) by describing an algorithm with this running time and by proving a matching lower bound. Finally, we combine the ideas and obtain an O(n · max{1, log(k/n)}) time algorithm that selects a subarray storing the k'th largest sum among all subarrays of length at least l and at most u.
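For intuition, the k = 1 case with no length constraints is the classic maximum-sum subarray problem, solvable in O(n) time by Kadane's scan (a standard algorithm, not the paper's heap-based machinery):

```python
def max_subarray_sum(a):
    """Kadane's algorithm: the largest sum over all nonempty contiguous
    subarrays of a, in a single O(n) pass."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)   # extend the current subarray or restart at x
        best = max(best, cur)
    return best

print(max_subarray_sum([3, -7, 2, 5, -1, 4]))  # 10, from the subarray [2, 5, -1, 4]
```

The paper's contribution is the jump from this single best sum to selecting and reporting among all Θ(n²) subarray sums without materializing them.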

Gerth Stølting Brodal, Allan Grønlund Jørgensen
Succinct and I/O Efficient Data Structures for Traversal in Trees

We present two results for path traversal in trees, where the traversal is performed in an asymptotically optimal number of I/Os and the tree structure is represented succinctly. Our first result is for bottom-up traversal that starts with a node in the tree T and traverses a path to the root. For blocks of size B, a tree on N nodes, and a path of length K, we design data structures that permit traversal of the bottom-up path in O(K/B) I/Os using only $2N + \frac{\epsilon N }{\log_B{N}} + o(N)$ bits, for an arbitrarily selected constant ε, where 0 < ε < 1. Our second result is for top-down traversal in binary trees. We store T using (3 + q)N + o(N) bits, where q is the number of bits required to store a key, while top-down traversal can still be performed in an asymptotically optimal number of I/Os.

Craig Dillabaugh, Meng He, Anil Maheshwari
Space-Time Tradeoffs for Longest-Common-Prefix Array Computation

The suffix array, a space efficient alternative to the suffix tree, is an important data structure for string processing, enabling efficient and often optimal algorithms for pattern matching, data compression, repeat finding and many problems arising in computational biology. An essential augmentation to the suffix array for many of these tasks is the Longest Common Prefix (LCP) array. In particular the LCP array allows one to simulate bottom-up and top-down traversals of the suffix tree with significantly less memory overhead (but in the same time bounds). Since 2001 the LCP array has been computable in Θ(n) time, but the algorithm (even after subsequent refinements) requires relatively large working memory. In this paper we describe a new algorithm that provides a continuous space-time tradeoff for LCP array construction, running in O(nv) time and requiring $n + O(n/\sqrt{v} + v)$ bytes of working space, where v can be chosen to suit the available memory. Furthermore, the algorithm processes the suffix array, and outputs the LCP, strictly left-to-right, making it suitable for use with external memory. We show experimentally that for many naturally occurring strings our algorithm is faster than the linear time algorithms, while using significantly less working memory.
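The Θ(n)-time construction referred to above is Kasai et al.'s 2001 algorithm, which walks the text in position order and reuses the previous LCP value. The following is a standard textbook rendition (it uses the plain linear-memory representation, not the space-reduced scheme of this paper):

```python
def lcp_array(text, sa):
    """Kasai et al. (2001): LCP array from text and suffix array in O(n).

    lcp[i] is the length of the longest common prefix of the suffixes
    at sa[i-1] and sa[i]; lcp[0] is 0 by convention.
    """
    n = len(text)
    rank = [0] * n
    for i, s in enumerate(sa):
        rank[s] = i
    lcp = [0] * n
    h = 0
    for i in range(n):
        if rank[i] > 0:
            j = sa[rank[i] - 1]  # suffix preceding suffix i in sorted order
            while i + h < n and j + h < n and text[i + h] == text[j + h]:
                h += 1
            lcp[rank[i]] = h
            if h:
                h -= 1           # key invariant: LCP drops by at most one
        else:
            h = 0
    return lcp

text = "banana"
sa = sorted(range(len(text)), key=lambda i: text[i:])  # toy O(n^2 log n) SA
print(sa)                   # [5, 3, 1, 0, 4, 2]
print(lcp_array(text, sa))  # [0, 1, 3, 0, 0, 2]
```

Note the algorithm reads the suffix array in text order rather than left-to-right, which is exactly the access pattern this paper's external-memory-friendly construction avoids.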

Simon J. Puglisi, Andrew Turpin
Power Domination in $\mathcal{O}^*(1.7548^n)$ Using Reference Search Trees

The Power Dominating Set problem is an extension of the well-known domination problem on graphs in a way that we enrich it by a second propagation rule: given a graph G(V, E), a set P ⊆ V is a power dominating set if every vertex is observed after we have applied the next two rules exhaustively. First, a vertex v is observed if v ∈ P or it has a neighbor in P. Secondly, if an observed vertex has exactly one unobserved neighbor u, then u will be observed as well. We show that Power Dominating Set remains $\mathcal{NP}$-hard on cubic graphs. We design an algorithm solving this problem in time $\mathcal{O}^*(1.7548^n)$ on general graphs. To achieve this we use a new notion of search trees called reference search trees, which provide non-local pointers.
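The two observation rules translate directly into a fixed-point computation; the checker below (my own sketch of the definition, unrelated to the exponential-time search itself) decides whether a given set P is a power dominating set:

```python
def is_power_dominating(adj, P):
    """Apply the two observation rules exhaustively and check coverage.

    adj: dict mapping each vertex to its set of neighbors.
    Rule 1: v is observed if v is in P or has a neighbor in P.
    Rule 2: an observed vertex with exactly one unobserved neighbor
            makes that neighbor observed.
    """
    observed = set()
    for v in adj:
        if v in P or adj[v] & P:
            observed.add(v)          # domination rule
    changed = True
    while changed:                   # propagation rule, to a fixed point
        changed = False
        for v in list(observed):
            unobserved = adj[v] - observed
            if len(unobserved) == 1:
                observed |= unobserved
                changed = True
    return observed == set(adj)

# On a path 0-1-2-3-4 the single vertex {0} power-dominates everything:
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(is_power_dominating(path, {0}))  # True
```

The propagation rule is what makes the problem harder than ordinary domination: observation can cascade arbitrarily far from P, as the path example shows.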

Daniel Raible, Henning Fernau

2B Game Theory

The Isolation Game: A Game of Distances

We introduce a new multi-player geometric game, which we will refer to as the isolation game, and study its Nash equilibria and best or better response dynamics. The isolation game is inspired by the Voronoi game, competitive facility location, and geometric sampling. In the Voronoi game studied by Dürr and Thang, each player's objective is to maximize the area of her Voronoi region. In contrast, in the isolation game, each player's objective is to position herself as far away from other players as possible in a bounded space. Even though this game has a simple definition, we show that its game-theoretic behaviors are quite rich and complex. We consider various measures of farness from one player to a group of players and analyze their impact on the existence of Nash equilibria and on the convergence of the best or better response dynamics: We prove that it is NP-hard to decide whether a Nash equilibrium exists, using either a very simple farness measure in an asymmetric space or a slightly more sophisticated farness measure in a symmetric space. Complementing these hardness results, we establish existence theorems for several special families of farness measures in symmetric spaces: We prove that for isolation games where each player wants to maximize her distance to her m-th nearest neighbor, for any m, equilibria always exist. Moreover, there is always a better response sequence starting from any configuration that leads to a Nash equilibrium. We show that when m = 1 the game is a potential game (no better response sequence has a cycle), but when m > 1 the games are not potential. More generally, we study farness functions that give different weights to a player's distances to others based on the distance rankings, and obtain both existence and hardness results when the weights are monotonically increasing or decreasing. Finally, we present results on the hardness of computing best responses when the space has a compact representation as a hypercube.

Yingchao Zhao, Wei Chen, Shang-Hua Teng
On a Non-cooperative Model for Wavelength Assignment in Multifiber Optical Networks

We study path multicoloring games that describe situations in which selfish entities possess communication requests in a multifiber all-optical network. Each player is charged according to the maximum fiber multiplicity that her color (wavelength) choice incurs and the social cost is the maximum player cost. We investigate the price of anarchy of such games and provide two different upper bounds for general graphs—namely the number of wavelengths and the minimum length of a path of maximum disutility, over all worst-case Nash Equilibria—as well as matching lower bounds which hold even for trees; as a corollary we obtain that the price of anarchy in stars is exactly 2. We also prove constant bounds for the price of anarchy in chains and rings in which the number of wavelengths is relatively small compared to the load of the network; in the opposite case we show that the price of anarchy is unbounded.

Evangelos Bampas, Aris Pagourtzis, George Pierrakos, Katerina Potika
The Complexity of Rationalizing Matchings

Given a set of observed economic choices, can one infer preferences and/or utility functions for the players that are consistent with the data? Questions of this type are called

rationalization

or

revealed preference

problems in the economic literature, and are the subject of a rich body of work.

From the computer science perspective, it is natural to study the complexity of rationalization in various scenarios. We consider a class of rationalization problems in which the economic data is expressed by a collection of matchings, and the question is whether there exist preference orderings for the nodes under which all the matchings are

stable

.

We show that the rationalization problem for one-one matchings is NP-complete. We propose two natural notions of approximation, and show that the problem is hard to approximate to within a constant factor, under both. On the positive side, we describe a simple algorithm that achieves a 3/4 approximation ratio for one of these approximation notions. We also prove similar results for a version of many-one matching.

Shankar Kalyanaraman, Christopher Umans
A Game Theoretic Approach for Efficient Graph Coloring

We give an efficient local search algorithm that computes a good vertex coloring of a graph G. In order to better illustrate this local search method, we view local moves as selfish moves in a suitably defined game. In particular, given a graph G = (V, E) of n vertices and m edges, we define the graph coloring game Γ(G) as a strategic game where the set of players is the set of vertices and the players share the same action set, which is a set of n colors. The payoff that a vertex v receives, given the actions chosen by all vertices, equals the total number of vertices that have chosen the same color as v, unless a neighbor of v has also chosen the same color, in which case the payoff of v is 0. We show:

The game Γ(G) always has pure Nash equilibria. Each pure equilibrium is a proper coloring of G. Furthermore, there exists a pure equilibrium that corresponds to an optimum coloring.

We give a polynomial time algorithm $\mathcal{A}$ which computes a pure Nash equilibrium of Γ(G).

The total number, k, of colors used in any pure Nash equilibrium (and thus achieved by $\mathcal{A}$) satisfies $k\leq\min\{\Delta_2+1, \frac{n+\omega}{2}, \frac{1+\sqrt{1+8m}}{2}, n-\alpha+1\}$, where ω and α are the clique number and the independence number of G, and Δ₂ is the maximum degree that a vertex can have subject to the condition that it is adjacent to at least one vertex of equal or greater degree. (Δ₂ is no more than the maximum degree Δ of G.)

Thus, in fact, we propose here a new, efficient coloring method that achieves a number of colors satisfying (together) the known general upper bounds on the chromatic number χ. Our method is also an alternative general way of proving, constructively, all these bounds.

Finally, we show how to strengthen our method (staying in polynomial time) so that it avoids "bad" pure Nash equilibria (i.e., those admitting a number of colors k far away from χ). In particular, we show that our enhanced method colors optimally dense random q-partite graphs (of fixed q) with high probability.
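The selfish dynamics are easy to simulate. The sketch below is my own illustration (the paper's algorithm $\mathcal{A}$ schedules moves more carefully to guarantee polynomial convergence): vertices repeatedly switch to a best-response color until no one can improve, and any stable point is a pure Nash equilibrium and hence a proper coloring, since with n colors a vertex always has a conflict-free color available.

```python
def payoff(adj, colors, v):
    """Size of v's color class, or 0 if some neighbor shares v's color."""
    c = colors[v]
    if any(colors[u] == c for u in adj[v]):
        return 0
    return sum(1 for u in colors if colors[u] == c)

def better_response_coloring(adj):
    """Let vertices selfishly switch to a best color until stable."""
    vertices = list(adj)
    n = len(vertices)
    colors = {v: i for i, v in enumerate(vertices)}  # start all-distinct
    improved = True
    while improved:
        improved = False
        for v in vertices:
            original = colors[v]
            best_c, best_p = original, payoff(adj, colors, v)
            for c in range(n):               # try every color for v
                colors[v] = c
                p = payoff(adj, colors, v)
                if p > best_p:
                    best_c, best_p = c, p
            colors[v] = best_c
            if best_c != original:
                improved = True
    return colors

# On the path 0-1-2-3 the dynamics merge color classes down to 2 colors.
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(better_response_coloring(path4))
```

The payoff structure is what drives colors to merge: joining a larger color class strictly increases a vertex's payoff as long as no neighbor is in it.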

Panagiota N. Panagopoulou, Paul G. Spirakis

3A Graph Algorithm I

Partitioning a Weighted Tree to Subtrees of Almost Uniform Size

Assume that each vertex of a graph G is assigned a nonnegative integer weight and that l and u are integers such that 0 ≤ l ≤ u. One wishes to partition G into connected components by deleting edges from G so that the total weight of each component is at least l and at most u. Such an "almost uniform" partition is called an (l, u)-partition. We deal with three problems to find an (l, u)-partition of a given graph: the minimum partition problem is to find an (l, u)-partition with the minimum number of components; the maximum partition problem is defined analogously; and the p-partition problem is to find an (l, u)-partition with a given number p of components. All these problems are NP-hard even for series-parallel graphs, but are solvable for paths in linear time and for trees in polynomial time. In this paper, we give polynomial-time algorithms to solve the three problems for trees, which are much simpler and faster than the known algorithms.
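A useful helper for any of the three problems is verifying a candidate partition. The check below is my own sketch of the definition, not the paper's algorithm: it deletes the chosen edges and tests every resulting component's weight against l and u.

```python
from collections import defaultdict

def is_lu_partition(edges, weight, removed, l, u):
    """Check that deleting the edges in `removed` leaves connected
    components whose total vertex weight lies in [l, u]."""
    adj = defaultdict(set)
    for a, b in edges:
        if (a, b) not in removed and (b, a) not in removed:
            adj[a].add(b)
            adj[b].add(a)
    seen = set()
    for start in weight:
        if start in seen:
            continue
        stack, comp = [start], set()     # DFS over one component
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        total = sum(weight[v] for v in comp)
        if not l <= total <= u:
            return False
    return True

# Path 1-2-3-4 with unit weights: cutting edge (2, 3) gives two
# components of weight 2 each, a valid (2, 2)-partition.
edges = [(1, 2), (2, 3), (3, 4)]
w = {1: 1, 2: 1, 3: 1, 4: 1}
print(is_lu_partition(edges, w, {(2, 3)}, 2, 2))  # True
```

The hardness of the optimization versions lies entirely in choosing which edges to delete; the verification itself is linear time.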

Takehiro Ito, Takeaki Uno, Xiao Zhou, Takao Nishizeki
An Improved Divide-and-Conquer Algorithm for Finding All Minimum k-Way Cuts

Given a positive integer k and an edge-weighted undirected graph G = (V, E; w), the minimum k-way cut problem is to find a subset of edges of minimum total weight whose removal separates the graph into k connected components. This problem is a natural generalization of the classical minimum cut problem and has been well studied in the literature.

A simple and natural method to solve the minimum k-way cut problem is divide and conquer: obtain a minimum k-way cut by properly separating the graph into two smaller graphs and then finding a minimum k′-way cut and a minimum k′′-way cut respectively in the two smaller graphs, where k′ + k′′ = k. In this paper, we present the first algorithm for the tight case of $k'=\lfloor k/2\rfloor$. Our algorithm runs in $O(n^{4k-\lg k})$ time and can enumerate all minimum k-way cuts, which improves all the previously known divide-and-conquer algorithms for this problem.

Mingyu Xiao
On the Algorithmic Effectiveness of Digraph Decompositions and Complexity Measures

We focus on the gap between treewidth's success in producing fixed-parameter polynomial algorithms for hard graph problems, specifically Hamiltonian Circuit and Max Cut, and the failure of its directed variants (directed tree-width [9], DAG-width [11] and Kelly-width [8]) to replicate it in the realm of digraphs. We answer the question of why this gap exists by giving two hardness results: we show that Directed Hamiltonian Circuit is W[2]-hard when the parameter is the width of the input graph, for any of these widths, and that Max Di Cut remains NP-hard even when restricted to DAGs, which have the minimum possible width under all these definitions. Our results also apply to directed pathwidth.

Michael Lampis, Georgia Kaouri, Valia Mitsou
An Efficient Scaling Algorithm for the Minimum Weight Bibranching Problem

Let G = (VG, AG) be a digraph and let $S \sqcup T$ be a bipartition of VG. A bibranching is a subset B ⊆ AG such that for each node s ∈ S there exists a directed s–T path in B and, vice versa, for each node t ∈ T there exists a directed S–t path in B.

Bibranchings generalize both branchings and bipartite edge covers. Keijsper and Pendavingh proposed a strongly polynomial primal-dual algorithm that finds a minimum weight bibranching in O(n′(m + n log n)) time.

In this paper we develop a weight-scaling $O(m \sqrt{n} \; \log n \log(nW))$-time algorithm for the minimum weight bibranching problem (where W denotes the maximum magnitude of arc weights).
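The defining condition is pure reachability, so checking whether a given arc set B is a bibranching reduces to breadth-first searches in B and in its reversal. This checker is a direct reading of the definition; the identifiers are my own.

```python
from collections import deque

def reaches(arcs, sources):
    """Set of vertices reachable from `sources` along directed arcs."""
    adj = {}
    for a, b in arcs:
        adj.setdefault(a, []).append(b)
    seen = set(sources)
    queue = deque(sources)
    while queue:
        v = queue.popleft()
        for w in adj.get(v, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

def is_bibranching(B, S, T):
    """Every s in S must reach some node of T in B, and every t in T
    must be reachable from S (checked on the reversed arcs)."""
    from_s = all(reaches(B, {s}) & T for s in S)
    rev = [(b, a) for a, b in B]
    to_t = all(reaches(rev, {t}) & S for t in T)
    return from_s and to_t

S, T = {"s1", "s2"}, {"t1", "t2"}
B = [("s1", "t1"), ("s2", "s1"), ("t1", "t2")]
print(is_bibranching(B, S, T))  # True
```

Note how the example exercises both generalized structures: arc ("s2", "s1") behaves like a branching arc inside S, while ("t1", "t2") extends coverage inside T.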

Maxim A. Babenko
The Balanced Edge Cover Problem

For an undirected graph G = (V, E), an edge cover is defined as a set of edges that covers all vertices of V. It is known that a minimum edge cover can be found in polynomial time and forms a collection of star graphs. In this paper, we consider the problem of finding a balanced edge cover, one in which the degrees of the star center vertices are balanced, which can be applied to optimize sensor network structures, for example. To this end, we formulate the problem as a minimization of the sum of strictly monotone increasing convex costs associated with the degrees of covered vertices, and show that optimality can be characterized as the non-existence of certain alternating paths. By using this characterization, we show that the optimal covers are also minimum edge covers, have the lexicographically smallest degree sequence of the covered vertices, and minimize the maximum degree of covered vertices. Based on the characterization we also present an O(|V||E|) time algorithm.

Yuta Harada, Hirotaka Ono, Kunihiko Sadakane, Masafumi Yamashita

3B Fixed Parameter Tractability

Firefighting on Trees: (1 − 1/e)–Approximation, Fixed Parameter Tractability and a Subexponential Algorithm

The firefighter problem is defined as follows. Initially, a fire breaks out at a vertex r of a graph G. In each subsequent time unit, a firefighter chooses a vertex not yet on fire and protects it, and the fire spreads to all unprotected neighbors of the vertices on fire. The objective is to choose a sequence of vertices for the firefighter to protect so as to save the maximum number of vertices. The firefighter problem can be used to model the spread of fire, diseases, computer viruses and suchlike at a macro-control level.

In this paper, we study algorithmic aspects of the firefighter problem on trees, which is NP-hard even for trees of maximum degree 3. We present a (1 − 1/e)-approximation algorithm based on LP relaxation and randomized rounding, and give several FPT algorithms using the random separation technique of Cai, Chan and Chan. Furthermore, we obtain a $2^{O(\sqrt{n}\log n)}$-time subexponential algorithm.
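The process itself is easy to simulate. This sketch (mine, independent of the paper's LP machinery) plays a given protection sequence against the spreading fire and counts the saved vertices:

```python
def saved_vertices(adj, root, protect_seq):
    """Simulate the firefighter process on graph adj (vertex -> neighbor set).

    Each time step, the firefighter protects one untouched vertex, then the
    fire spreads to every unprotected neighbor of a burning vertex.
    Returns the set of vertices that never catch fire."""
    on_fire = {root}
    protected = set()
    for p in protect_seq:
        if p not in on_fire and p not in protected:
            protected.add(p)
        spread = {w for v in on_fire for w in adj[v]} - protected - on_fire
        on_fire |= spread
    while True:  # after the sequence ends, let the fire burn out
        spread = {w for v in on_fire for w in adj[v]} - protected - on_fire
        if not spread:
            break
        on_fire |= spread
    return set(adj) - on_fire

# Root 0 on fire; children 1 and 2 each have two leaves.  Protecting 1
# first saves its whole subtree; protecting 5 next saves one more leaf.
tree = {0: {1, 2}, 1: {0, 3, 4}, 2: {0, 5, 6}, 3: {1}, 4: {1}, 5: {2}, 6: {2}}
print(saved_vertices(tree, 0, [1, 5]))
```

On trees, protecting a vertex cuts off its entire subtree, which is exactly the structure the LP relaxation and the random separation arguments exploit.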

Leizhen Cai, Elad Verbin, Lin Yang
A New Algorithm for Finding Trees with Many Leaves

We present an algorithm that finds trees with at least k leaves in undirected and directed graphs. These problems are known as Maximum Leaf Spanning Tree for undirected graphs, and, respectively, Directed Maximum Leaf Out-Tree and Directed Maximum Leaf Spanning Out-Tree in the case of directed graphs. The run time of our algorithm is $O({\it poly}(|V|) + 4^k k^2)$ on undirected graphs, and O(4^k |V| · |E|) on directed graphs. This improves over the previously fastest algorithms for these problems with run times of $O({\it poly}(|V|) + 6.75^k {\it poly}(k))$ and $2^{O(k \log k)} {\it poly}(|V|)$, respectively.

Joachim Kneis, Alexander Langer, Peter Rossmanith
Faster Parameterized Algorithms for Minimum Fill-In

We present two parameterized algorithms for the Minimum Fill-In problem, also known as Chordal Completion: given an arbitrary graph G and integer k, can we add at most k edges to G to obtain a chordal graph? Our first algorithm requires only polynomial space and improves the base of the exponential part of the best known parameterized running time for this problem so far. We are able to improve this running time even further, at the cost of more space: our second algorithm achieves a faster running time but no longer works in polynomial space.

Hans L. Bodlaender, Pinar Heggernes, Yngve Villanger
Graph Layout Problems Parameterized by Vertex Cover

In the framework of parameterized complexity, one of the most commonly used structural parameters is the treewidth of the input graph. The reason for this is that most natural graph problems turn out to be fixed parameter tractable when parameterized by treewidth. However, Graph Layout problems are a notable exception. In particular, no fixed parameter tractable algorithms are known for the Cutwidth, Bandwidth, Imbalance and Distortion problems parameterized by treewidth. In fact, Bandwidth remains NP-complete even restricted to trees. A possible way to attack graph layout problems is to consider structural parameterizations that are stronger than treewidth. In this paper we study graph layout problems parameterized by the size of the minimum vertex cover of the input graph. We show that all the mentioned problems are fixed parameter tractable. Our basic ingredient is a classical algorithm for Integer Linear Programming when parameterized by dimension, designed by Lenstra and later improved by Kannan. We hope that our results will serve to re-emphasize the importance and utility of this algorithm.

Michael R. Fellows, Daniel Lokshtanov, Neeldhara Misra, Frances A. Rosamond, Saket Saurabh
A Linear Kernel for the k-Disjoint Cycle Problem on Planar Graphs

We consider the following problem: given a planar graph G = (V, E) and integer k, find if possible at least k vertex-disjoint cycles in G. This problem is known to be NP-complete. In this paper, we give a number of simple data reduction rules. Each rule transforms the input to an equivalent smaller input, and can be carried out in polynomial time. We show that inputs on which no rule can be carried out have size linear in k. Thereby we obtain that the k-Disjoint Cycles problem on planar graphs has a kernel of linear size. We also present a parameterized algorithm with a running time of $O(c^{\sqrt{k}} + n^2)$.

Hans L. Bodlaender, Eelko Penninkx, Richard B. Tan

4A Distributed Algorithm

How to Guard a Graph?

We initiate the study of the algorithmic foundations of games in which a set of cops has to guard a region in a graph (or digraph) against a robber. The robber and the cops are placed on vertices of the graph; they take turns in moving to adjacent vertices (or staying). The goal of the robber is to enter the guarded region at a vertex with no cop on it. The problem is to find the minimum number of cops needed to prevent the robber from entering the guarded region. The problem is highly non-trivial even if the robber’s or the cops’ regions are restricted to very simple graphs. The computational complexity of the problem depends heavily on the chosen restriction. In particular, if the robber’s region is only a path, then the problem can be solved in polynomial time. When the robber moves in a tree, then the decision version of the problem is NP-complete. Furthermore, if the robber is moving in a DAG, the problem becomes PSPACE-complete.

Fedor V. Fomin, Petr A. Golovach, Alexander Hall, Matúš Mihalák, Elias Vicari, Peter Widmayer
Tree Decontamination with Temporary Immunity

Consider a tree network that has been contaminated by a persistent and active virus: when infected, a network site will continuously attempt to spread the virus to all its neighbours. The decontamination problem is that of disinfecting the entire network using a team of mobile antiviral system agents, called cleaners, avoiding any recontamination of decontaminated areas. A cleaner is able to decontaminate any infected node it visits; once the cleaner departs, the decontaminated node is immune for t ≥ 0 time units to viral attacks from infected neighbours. After the immunity time t has elapsed, re-contamination can occur. The primary research objective is to determine the minimum team size, as well as the solution strategy, that is, the protocol that would enable such a minimal team of cleaners to perform the task. The network decontamination problem has been extensively investigated in the literature, and a very large number of studies exist on the subject. However, all the existing work is limited to the special case t = 0. In this paper we examine the tree decontamination problem for any value t ≥ 0. We determine the minimum team size necessary to disinfect any given tree with immunity time t. Further, we show how to compute for all nodes of the tree the minimum team size and, implicitly, the solution strategy starting from each starting node; these computations use a total of Θ(n) time (serially) or Θ(n) messages (distributively). We then provide a complete structural characterization of the class of trees that can be decontaminated with k agents and immunity time t; we do so by identifying the forbidden subgraphs and analyzing their properties. Finally, we consider generic decontamination algorithms, i.e., protocols that work unchanged in a large class of trees, with little knowledge of their topological structure. We prove that, for each immunity time t ≥ 0, all trees of height at most h can be decontaminated by a team of $k=\lfloor \frac{2h}{t+2} \rfloor$ agents whose only knowledge of the tree is the bound h. The proof is constructive.
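As a quick arithmetic illustration of the team-size bound $k=\lfloor \frac{2h}{t+2} \rfloor$ stated above (the helper function is our own naming, not from the paper):

```python
def generic_team_size(h, t):
    """Team size bound from the abstract: floor(2h / (t + 2)) cleaners
    suffice for any tree of height at most h with immunity time t,
    when the only knowledge of the tree is the bound h."""
    return (2 * h) // (t + 2)

# With no immunity (t = 0) a tree of height 10 needs at most 10 cleaners;
# an immunity time of t = 2 halves the bound.
k0 = generic_team_size(10, 0)
k2 = generic_team_size(10, 2)
```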

Paola Flocchini, Bernard Mans, Nicola Santoro
Reconfiguration of Cube-Style Modular Robots Using O(log n) Parallel Moves

We consider a model of reconfigurable robot, introduced and prototyped by the robotics community. The robot consists of independently manipulable unit-square atoms that can extend/contract arms on each side and attach/detach from neighbors. The optimal worst-case number of sequential moves required to transform one connected configuration to another was shown to be Θ(n) at ISAAC 2007. However, in principle, atoms can all move simultaneously. We develop a parallel algorithm for reconfiguration that runs in only O(log n) parallel steps, although the total number of operations increases slightly to Θ(n log n). The result is the first (theoretically) almost-instantaneous universally reconfigurable robot built from simple units.

Greg Aloupis, Sébastien Collette, Erik D. Demaine, Stefan Langerman, Vera Sacristán, Stefanie Wuhrer
Squaring the Circle with Weak Mobile Robots

The Circle Formation Problem (CFP) consists in the design of a protocol ensuring that, starting from an initial arbitrary configuration (where no two robots are at the same position), n robots eventually form a regular n-gon in finite time. None of the deterministic solutions proposed so far works for a system made of 4 or 3 robots. This comes from the fact that it is very difficult to maintain a geometric invariant with so few robots, e.g., the smallest enclosing circle, concentric cycles, properties of the convex hull, or a leader. As a matter of fact, due to the high rate of symmetric configurations, the problem was suspected to be unsolvable with 4 robots. In this paper, we disprove this conjecture. We present two non-trivial deterministic protocols that solve CFP with 4 and 3 robots, respectively. The proposed solutions do not require that each robot reaches its destination in one atomic step. Our result closes CFP for any number n (> 0) of robots in the semi-synchronous model.

Yoann Dieudonné, Franck Petit

4B Database

Evaluation of General Set Expressions

We consider the problem of evaluating an expression over sets. The sets are preprocessed and are therefore sorted, and the operators can be any of union, intersection, difference, complement, and symmetric difference (exclusive union). Given the expression as a formula and the sizes of the input sets, we are interested in the worst-case complexity of evaluation (in terms of the size of the sets). The problem is motivated by document retrieval in search engines, where a user query translates directly to an expression over the sets containing the user-entered words. Special cases of this problem have been studied [7,6] where the expression has a restricted form. In this paper, we present an efficient algorithm to evaluate the most general form of a set expression. We show a lower bound on this problem for expressions of the form $E_1$ or $E_1 - E_2$, where $E_1$ and $E_2$ are expressions with union, intersection, and symmetric difference operators. We demonstrate that the algorithm's complexity matches the lower bound in these instances. We, moreover, conjecture that the algorithm works optimally even when we allow difference and complement operations in $E_1$ and $E_2$.
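To make the setting concrete, here is a toy recursive evaluator over Python sets (an illustration only, not the paper's worst-case-optimal algorithm; the tuple encoding of expressions is our own):

```python
def evaluate(expr, sets, universe):
    """Recursively evaluate a set expression.  An expression is either a
    set name or a tuple (op, left, right) with op in {'|', '&', '-', '^'};
    ('~', sub) is complement with respect to the given universe."""
    if isinstance(expr, str):
        return sets[expr]
    op = expr[0]
    if op == '~':
        return universe - evaluate(expr[1], sets, universe)
    left = evaluate(expr[1], sets, universe)
    right = evaluate(expr[2], sets, universe)
    return {'|': left | right, '&': left & right,
            '-': left - right, '^': left ^ right}[op]

sets = {'a': {1, 2, 3}, 'b': {2, 3, 4}, 'c': {3, 4, 5}}
U = {1, 2, 3, 4, 5}
# (a ∩ b) symmetric-difference complement(c)  =  {2, 3} ^ {1, 2}
res = evaluate(('^', ('&', 'a', 'b'), ('~', 'c')), sets, U)
```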

Ehsan Chiniforooshan, Arash Farzan, Mehdi Mirzazadeh
Computing with Priced Information: When the Value Makes the Price

We study the function evaluation problem in the priced information framework introduced in [Charikar et al. 2002]. We characterize the best possible extremal competitive ratio for the class of game tree functions. Moreover, we extend the above result to the case when the cost of reading a variable depends on the value of the variable. In this new value-dependent cost variant of the problem, we also exactly evaluate the extremal competitive ratio for the whole class of monotone Boolean functions.

Ferdinando Cicalese, Martin Milanič
Deductive Inference for the Interiors and Exteriors of Horn Theories

In this paper, we investigate deductive inference for the interiors and exteriors of Horn knowledge bases, where interiors and exteriors were introduced by Makino and Ibaraki [11] to study stability properties of knowledge bases. We present a linear-time algorithm for deduction for the interiors and show that deduction for the exteriors is co-NP-complete. Under model-based representation, we show that the deduction problem for interiors is NP-complete, while the one for exteriors is co-NP-complete. As for Horn envelopes of the exteriors, we show that the problem is linearly solvable under model-based representation, while it is co-NP-complete under formula-based representation. We also discuss polynomially solvable cases for all the intractable problems.

Kazuhisa Makino, Hirotaka Ono
Leaf Powers and Their Properties: Using the Trees

A graph G on n vertices is a k-leaf power ($G \in {\cal G}_{k}$) if it is isomorphic to a graph that can be “generated” from a tree T that has n leaves, by taking the leaves to represent vertices of G, and making two vertices adjacent if and only if they are at distance at most k in T. We address two questions in this paper:

(1) As k increases, do we always have ${\cal G}_{k} \subseteq {\cal G}_{k+1}$? Answering an open question of Andreas Brandstädt and Van Bang Le [2,3,1], we show that the answer, perhaps surprisingly, is “no.”

(2) How should one design algorithms to determine, for k-leaf powers, if they have some property?

One way this can be done is to use the fact that k-leaf powers have bounded cliquewidth. This fact, plus the FPT cliquewidth approximation algorithm of Oum and Seymour [14], combined with the results of Courcelle, Makowsky and Rotics [7], allows us to conclude that properties expressible in a general logic formalism can be decided in FPT time for k-leaf powers, parameterizing by k. This is wildly inefficient. We explore a different approach, under the assumption that a generating tree is given with the graph. We show that one can use the tree directly to decide the property, by means of a finite-state tree automaton. (A more general theorem has been independently obtained by Blumensath and Courcelle [5].)

We place our results in a general context of “tree-definable” graph classes, of which k-leaf powers are one particular example.
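To illustrate the definition, here is a small brute-force sketch (our own, assuming the tree is given as an adjacency dict) that generates the k-leaf power of a tree via BFS distances between leaves:

```python
from collections import deque

def k_leaf_power(tree, k):
    """Given a tree as an adjacency dict, return the k-leaf power:
    vertices are the leaves of the tree, and two leaves are adjacent
    iff their distance in the tree is at most k."""
    leaves = [v for v, nbrs in tree.items() if len(nbrs) == 1]

    def dist_from(src):
        d = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in tree[u]:
                if w not in d:
                    d[w] = d[u] + 1
                    q.append(w)
        return d

    edges = set()
    for i, u in enumerate(leaves):
        d = dist_from(u)
        for v in leaves[i + 1:]:
            if d[v] <= k:
                edges.add(frozenset((u, v)))
    return set(leaves), edges

# A star: internal node 'c' joined to leaves 'x', 'y', 'z'.
tree = {'c': ['x', 'y', 'z'], 'x': ['c'], 'y': ['c'], 'z': ['c']}
V, E2 = k_leaf_power(tree, 2)   # every leaf pair is at tree distance 2
V, E1 = k_leaf_power(tree, 1)   # no leaf pair is at tree distance <= 1
```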

Michael R. Fellows, Daniel Meister, Frances A. Rosamond, R. Sritharan, Jan Arne Telle

5A Approximation Algorithm II

Deterministic Sparse Column Based Matrix Reconstruction via Greedy Approximation of SVD

Given a matrix $A \in \mathbb{R}^{m \times n}$ of rank r, and an integer k < r, the top k singular vectors provide the best rank-k approximation to A. When the columns of A have specific meaning, it is desirable to find (provably) “good” approximations to $A_k$ which use only a small number of columns of A. Proposed solutions to this problem have thus far focused on randomized algorithms. Our main result is a simple greedy deterministic algorithm with guarantees on the performance and the number of columns chosen. Specifically, our greedy algorithm chooses c columns from A with $c=O\left(\frac{k^2\log k}{\epsilon^2}\,\mu^2(A)\,\ln\frac{\sqrt{k}\,\|A_k\|_F}{\epsilon\,\|A-A_k\|_F}\right)$ such that $$\|A-C_{gr}C_{gr}^+A\|_F \leq (1+\epsilon)\,\|A-A_k\|_F,$$ where $C_{gr}$ is the matrix composed of the c columns, $C_{gr}^+$ is the pseudo-inverse of $C_{gr}$ ($C_{gr}C_{gr}^+A$ is the best reconstruction of A from $C_{gr}$), and μ(A) is a measure of the coherence in the normalized columns of A. The running time of the algorithm is O(SVD($A_k$) + mnc), where SVD($A_k$) is the running time complexity of computing the first k singular vectors of A. To the best of our knowledge, this is the first deterministic algorithm with performance guarantees on the number of columns and a (1 + ε) approximation ratio in Frobenius norm. The algorithm is quite simple and intuitive and is obtained by combining a generalization of the well-known sparse approximation problem from information theory with an existence result on the possibility of sparse approximation. Tightening the analysis along either of these two dimensions would yield improved results.

Ali Çivril, Malik Magdon-Ismail
Minimizing Total Flow-Time: The Unrelated Case

We consider the problem of minimizing flow-time in the unrelated machines setting. We introduce a notion of (α, β) variability to capture settings where processing times of jobs on machines are not completely arbitrary, and give an O(β log α) approximation for this setting. As special cases, we get (1) an O(k) approximation when there are only k different processing times; (2) an O(log P)-approximation if each job can only go on a specified subset of machines, but has the same processing requirement on each such machine (further, the machines can have different speeds; here P is the ratio of the largest to the smallest processing requirement); (3) an approximation algorithm for unrelated machines if we assume that our algorithm has machines which are an ε-factor faster than the optimum algorithm's machines. We also improve the lower bound on the approximability of minimizing flow time on parallel machines from $\Omega(\sqrt{\log P/ \log\log P})$ to $\Omega(\log^{1-\epsilon} P)$ for any ε > 0.

Naveen Garg, Amit Kumar, V. N. Muralidhara
Approximating the Volume of Unions and Intersections of High-Dimensional Geometric Objects

We consider the computation of the volume of the union of high-dimensional geometric objects. While showing that this problem is #P-hard already for very simple bodies (i.e., axis-parallel boxes), we give a fast FPRAS for all objects where one can: (1) test whether a given point lies inside the object, (2) sample a point uniformly, (3) calculate the volume of the object in polynomial time. All three oracles can be weak, that is, just approximate. This implies that Klee's measure problem and the hypervolume indicator can be approximated efficiently even though they are #P-hard and hence cannot be solved exactly in time polynomial in the number of dimensions unless P = NP. Our algorithm also allows us to approximate efficiently the volume of the union of convex bodies given by weak membership oracles.

For the analogous problem of the intersection of high-dimensional geometric objects, we prove #P-hardness for boxes and show that there is no multiplicative polynomial-time approximation for certain boxes unless NP = BPP, but give a simple additive polynomial-time ε-approximation.

Karl Bringmann, Tobias Friedrich
Space-Efficient Informational Redundancy

We study the relation of autoreducibility and mitoticity for polylog-space many-one reductions and log-space many-one reductions. For polylog-space these notions coincide, while proving the same for log-space is out of reach. More precisely, we show the following results with respect to nontrivial sets and many-one reductions.

1. polylog-space autoreducible ${\,\mathop{\Leftrightarrow}\,}$ polylog-space mitotic
2. log-space mitotic ⇒ log-space autoreducible ⇒ (log n · loglog n)-space mitotic
3. relative to an oracle, log-space autoreducible ⇏ log-space mitotic

The oracle is an infinite family of graphs whose construction combines arguments from Ramsey theory and Kolmogorov complexity.

Christian Glaßer

5B Computational Biology

Minkowski Sum Selection and Finding

Let $P, Q \subseteq \mathbb{R}^2$ be two n-point multisets and Ar ≥ b be a set of λ inequalities on x and y, where $A \in \mathbb{R}^{\lambda \times 2}$, $r=[^x_y]$, and $b \in \mathbb{R}^{\lambda}$. Define the constrained Minkowski sum $(P \oplus Q)_{Ar \geq b}$ as the multiset {(p + q) | p ∈ P, q ∈ Q, A(p + q) ≥ b}. Given P, Q, Ar ≥ b, an objective function f: ℝ²→ℝ, and a positive integer k, the Minkowski Sum Selection problem is to find the k-th largest objective value among all objective values of points in $(P \oplus Q)_{Ar \geq b}$. Given P, Q, Ar ≥ b, an objective function f: ℝ²→ℝ, and a real number δ, the Minkowski Sum Finding problem is to find a point (x*, y*) in $(P \oplus Q)_{Ar \geq b}$ such that |f(x*, y*) − δ| is minimized. For the Minkowski Sum Selection problem with linear objective functions, we obtain the following results: (1) optimal O(n log n) time algorithms for λ = 1; (2) O(n log² n) time deterministic algorithms and expected O(n log n) time randomized algorithms for any fixed λ > 1. For the Minkowski Sum Finding problem with linear objective functions or objective functions of the form $f(x,y)=\frac{by}{ax}$, we construct optimal O(n log n) time algorithms for any fixed λ ≥ 1. As a byproduct, we obtain improved algorithms for the Length-Constrained Sum Selection problem and the Density Finding problem.
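For reference, the Minkowski Sum Selection problem can be stated as a short brute-force program (an O(n² log n) enumeration of our own, not the paper's near-linear algorithms):

```python
import heapq

def minkowski_selection(P, Q, A, b, f, k):
    """Enumerate the constrained Minkowski sum of P and Q and return the
    k-th largest objective value among the points satisfying A r >= b."""
    values = []
    for (px, py) in P:
        for (qx, qy) in Q:
            x, y = px + qx, py + qy
            # keep (x, y) only if every inequality a1*x + a2*y >= bi holds
            if all(a1 * x + a2 * y >= bi for (a1, a2), bi in zip(A, b)):
                values.append(f(x, y))
    return heapq.nlargest(k, values)[-1]

P = [(0, 0), (1, 2)]
Q = [(1, 1), (2, 0)]
A, b = [(1, 0)], [2]          # single constraint: x >= 2
sel = minkowski_selection(P, Q, A, b, lambda x, y: x + y, 1)
```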

Cheng-Wei Luo, Hsiao-Fei Liu, Peng-An Chen, Kun-Mao Chao
Constructing the Simplest Possible Phylogenetic Network from Triplets

A phylogenetic network is a directed acyclic graph that visualises an evolutionary history containing so-called reticulations such as recombinations, hybridisations or lateral gene transfers. Here we consider the construction of a simplest possible phylogenetic network consistent with an input set T, where T contains at least one phylogenetic tree on three leaves (a triplet) for each combination of three taxa. To quantify the complexity of a network we consider both the total number of reticulations and the number of reticulations per biconnected component, called the level of the network. We give polynomial-time algorithms for constructing a level-1 (respectively, level-2) network that contains a minimum number of reticulations and is consistent with T (if such a network exists). In addition, we show that if T is precisely equal to the set of triplets consistent with some network, then we can construct such a network, which minimises both the level and the total number of reticulations, in time $O(|T|^{k+1})$, if k is a fixed upper bound on the level.

Leo van Iersel, Steven Kelk
New Results on Optimizing Rooted Triplets Consistency

A set of phylogenetic trees with overlapping leaf sets is consistent if it can be merged without conflicts into a supertree. In this paper, we study the polynomial-time approximability of two related optimization problems called the maximum rooted triplets consistency problem ($\textsc{MaxRTC}$) and the minimum rooted triplets inconsistency problem ($\textsc{MinRTI}$), in which the input is a set $\mathcal{R}$ of rooted triplets, and where the objectives are to find a largest cardinality subset of $\mathcal{R}$ which is consistent and a smallest cardinality subset of $\mathcal{R}$ whose removal from $\mathcal{R}$ results in a consistent set, respectively. We first show that a simple modification to Wu's Best-Pair-Merge-First heuristic [25] results in a bottom-up-based 3-approximation for $\textsc{MaxRTC}$. We then demonstrate how any approximation algorithm for $\textsc{MinRTI}$ could be used to approximate $\textsc{MaxRTC}$, and thus obtain the first polynomial-time approximation algorithm for $\textsc{MaxRTC}$ with approximation ratio smaller than 3. Next, we prove that for a set of rooted triplets generated under a uniform random model, the maximum fraction of triplets which can be consistent with any tree is approximately one third, and then provide a deterministic construction of a triplet set having a similar property, which is subsequently used to prove that both $\textsc{MaxRTC}$ and $\textsc{MinRTI}$ are NP-hard even if restricted to minimally dense instances. Finally, we prove that $\textsc{MinRTI}$ cannot be approximated within a ratio of Ω(log n) in polynomial time, unless P = NP.

Jaroslaw Byrka, Sylvain Guillemot, Jesper Jansson
A Method to Overcome Computer Word Size Limitation in Bit-Parallel Pattern Matching

The performance of pattern matching algorithms based on bit-parallelism degrades when the input pattern length exceeds the computer word size. Although several divide-and-conquer methods have been proposed to overcome that limitation, the resulting schemes are not very efficient and are hard to implement. This study introduces a new fast bit-parallel pattern matching algorithm that is capable of searching patterns of any length in a common bit-parallel fashion. The proposed bit-parallel length invariant matcher (BLIM) is compared with the Shift-Or and bit-parallel non-deterministic matching (BNDM) algorithms along with the standard Boyer-Moore and Sunday's quick search, which are known to be very fast in general. Benchmarks have been conducted on natural language, DNA sequence, and binary alphabet random texts. Besides the length invariant architecture of the algorithm, experimental results indicate that on average BLIM is 18%, 44%, and 6% faster than BNDM, which is accepted as one of the fastest algorithms of this genre, on natural language, DNA sequence, and binary random texts, respectively.
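For context, the Shift-Or baseline mentioned above fits in a few lines (a textbook sketch, not BLIM itself). The word-size limitation it suffers from is exactly what BLIM removes; Python's unbounded integers hide that limitation, whereas a C implementation with fixed-width words would not:

```python
def shift_or_search(text, pattern):
    """Classic Shift-Or bit-parallel matcher.  Bit i of the state is 0
    iff the pattern prefix of length i+1 matches ending at the current
    text position; a clear bit m-1 therefore signals a full match."""
    m = len(pattern)
    masks = {}                          # per-character bit masks
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, ~0) & ~(1 << i)
    state = ~0                          # all ones: nothing matched yet
    hits = []
    for j, c in enumerate(text):
        state = (state << 1) | masks.get(c, ~0)
        if state & (1 << (m - 1)) == 0:
            hits.append(j - m + 1)      # start index of the occurrence
    return hits
```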

M. Oǧuzhan Külekci

6A Computational Geometry I

Inducing Polygons of Line Arrangements

We show that an arrangement $\mathcal{A}$ of n lines in general position in the plane has an inducing polygon of size O(n). Additionally, we present a simple algorithm for finding an inducing n-path for $\mathcal{A}$ in O(n log n) time, and an algorithm that constructs an inducing n-gon for a special class of line arrangements within the same time bound.

Ludmila Scharf, Marc Scherfenberg
Free-Form Surface Partition in 3-D

We study the problem of partitioning a spherical representation S of a free-form surface F in 3-D, which is to partition a 3-D sphere S into two hemispheres such that a representative normal vector for each hemisphere optimizes a given global objective function. This problem arises in important practical applications, particularly surface machining in manufacturing. We model the spherical surface partition problem as processing multiple off-line sequences of insertions/deletions of convex polygons alternated with certain point queries on the common intersection of the polygons. Our algorithm combines nontrivial data structures, geometric observations, and algorithmic techniques. It takes $O(\min\{m^2n \log \log m + \frac{m^3 \log^2(mn) \log^2(\log m)}{\log^3 m}, m^3\log^2n+mn\})$ time, where m is the number of polygons, of size O(n) each, in one off-line sequence (generally, m ≤ n). This is a significant improvement over the previous best-known O(m²n²) time algorithm. As a by-product, our algorithm can process O(n) insertions/deletions of convex polygons (of size O(n) each) and queries on their common intersections in O(n² loglog n) time, improving over the “standard” O(n² log n) time solution for off-line maintenance of O(n²) insertions/deletions of points and queries. Our techniques may be useful in solving other problems.

Danny Z. Chen, Ewa Misiołek
Approximate Nearest Neighbor Search under Translation Invariant Hausdorff Distance

The Hausdorff distance is a measure for the resemblance of two geometric objects. Given a set of n point patterns and a query point pattern Q, the nearest neighbor of Q under the Hausdorff distance is the point pattern which minimizes this distance to Q. An extension of the Hausdorff distance is the translation invariant Hausdorff distance, which additionally allows the translation of the point patterns in order to minimize the distance. This paper introduces the first data structure which allows one to solve the nearest neighbor problem for the directed Hausdorff distance under translation in sublinear query time in a non-heuristic manner, in the sense that the quality of the results, the performance, and the space bounds are guaranteed. The data structure answers queries for both directions of the directed Hausdorff distance with a $\sqrt{d(s-1.5)}(1+\epsilon)$-approximation factor in $O(\log \frac{n}{\epsilon})$ query time for the nearest neighbor and O(k + log n) query time for the k-th nearest neighbor, for any ε > 0. (The O-notation of the latter runtime contains terms that are quadratic in $\epsilon^{-1}$.)

Furthermore, it is shown how to find the exact nearest neighbor under the directed Hausdorff distance without transformation of the point sets within some weaker time and storage bounds.
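The directed Hausdorff distance itself is simple to state in code (a brute-force O(|P||Q|) reference of our own; the paper's contribution is answering nearest-neighbor queries under this measure, plus translation, in sublinear time):

```python
from math import dist

def directed_hausdorff(P, Q):
    """Directed Hausdorff distance h(P, Q): for every p in P take the
    distance to its nearest q in Q, and return the largest such value."""
    return max(min(dist(p, q) for q in Q) for p in P)

P = [(0, 0), (3, 0)]
Q = [(0, 0), (1, 0)]
h = directed_hausdorff(P, Q)   # (3,0) is 2 away from its nearest point (1,0)
```

Note that the measure is asymmetric, which is why the data structure above answers queries for both directions.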

Christian Knauer, Marc Scherfenberg
Preprocessing Imprecise Points and Splitting Triangulations

Given a triangulation of a set of n points in the plane, each colored red or blue, we show how to compute a triangulation of just the blue points in time O(n). We apply this result to show that one can preprocess a set of disjoint regions (representing “imprecise points”) in the plane having total complexity n in O(n log n) time so that if one point per region is specified with precise coordinates, a triangulation of the n points can be computed in O(n) time.

Marc van Kreveld, Maarten Löffler, Joseph S. B. Mitchell
Efficient Output-Sensitive Construction of Reeb Graphs

The Reeb graph tracks topology changes in level sets of a scalar function and finds applications in scientific visualization and geometric modeling. This paper describes a near-optimal two-step algorithm that constructs the Reeb graph of a Morse function defined over manifolds in any dimension. The algorithm first identifies the critical points of the input manifold, and then connects these critical points in the second step to obtain the Reeb graph. A simplification mechanism based on topological persistence aids in the removal of noise and unimportant features. A radial layout scheme results in a feature-directed drawing of the Reeb graph. Experimental results demonstrate the efficiency of the Reeb graph construction in practice and its applications.

Harish Doraiswamy, Vijay Natarajan

6B Complexity I

Signature Theory in Holographic Algorithms

Valiant initiated a theory of holographic algorithms based on perfect matchings. These algorithms express computations in terms of signatures realizable by matchgates. We substantially develop the signature theory in terms of d-realizability and d-admissibility, where d measures the dimension of the basis subvariety on which a signature is feasible. Starting with 2-admissibility, we prove a Birkhoff-type theorem for the class of 2-realizable signatures. This gives a complete structural understanding of 2-realizability and 2-admissibility. This is followed by characterization theorems for 1-realizability and 1-admissibility.

Jin-Yi Cai, Pinyan Lu
The Complexity of SPP Formula Minimization

Circuit minimization is a useful procedure in the field of logic synthesis. Recently, it was proven that the minimization of (∨, ∧, ¬) formulae is hard for the second level of the polynomial hierarchy [BU08]. The complexity of minimizing more specialized formula models was left open, however. One model used in logic synthesis is a three-level model in which the third level is composed of parity gates, called SPPs. SPPs allow for small representations of Boolean functions and have efficient heuristics for minimization. However, little was known about the complexity of SPP minimization. Here, we show that SPP minimization is complete for the second level of the polynomial hierarchy under Turing reductions.

David Buchfuhrer
Understanding a Non-trivial Cellular Automaton by Finding Its Simplest Underlying Communication Protocol

In the present work we find a non-trivial communication protocol describing the dynamics of an elementary CA, and we prove that there are no simpler descriptions (protocols) for such a CA. This is, to our knowledge, the first time such a result has been obtained in the study of CAs. More precisely, we divide the cells of Rule 218 into two groups and we describe (and therefore understand) its global dynamics by finding a protocol taking place between these two parts. We assume that $x \in \{0,1\}^n$ is given to Alice while $y \in \{0,1\}^n$ is given to Bob. Let us call z(x, y) ∈ {0,1} the result of the dynamical interaction between the cells. We exhibit a protocol where Alice, instead of the n bits of x, sends 2⌈log(n)⌉ + 1 bits to Bob, allowing him to compute z(x, y). Roughly, she sends 2 particular positions of her string x. By proving that any one-round protocol computing z(x, y) must exchange at least 2⌈log(n)⌉ − 5 bits, we conclude the optimality of our construction (up to a constant).

Eric Goles, Cedric Little, Ivan Rapaport
Negation-Limited Inverters of Linear Size

An inverter is a circuit which outputs $\neg x_1, \neg x_2, \ldots, \neg x_n$ for any Boolean inputs $x_1, x_2, \ldots, x_n$. Beals, Nishino and Tanaka have given a construction of an inverter which has size O(n log n) and depth O(log n) and uses ⌈log(n + 1)⌉ NOT gates. In this paper we give a construction of an inverter which has size O(n) and depth $\log^{1+o(1)} n$ and uses $\log^{1+o(1)} n$ NOT gates. This is the first negation-limited inverter of linear size using only o(n) NOT gates.

Hiroki Morizumi, Genki Suzuki
3-Message NP Arguments in the BPK Model with Optimal Soundness and Zero-Knowledge

Under sub-exponential time hardness assumptions, we show that any language in NP has a 3-message argument system in the bare public key (BPK) model, that satisfies resettable zero-knowledge (i.e., it reveals no information to any cheating verifier that can even reset provers) and bounded-resettable soundness (i.e., a verifier cannot be convinced of a false theorem, even if the cheating prover resets the verifier up to a fixed polynomial number of sessions). Our protocol has essentially optimal soundness among 3-message protocols (in that all stronger known soundness notions cannot be achieved with only 3 messages) and zero-knowledge (in that it achieves the strongest known zero-knowledge notion). We also show an extension of this protocol so that it achieves polylogarithmic communication complexity, although under very strong assumptions.

Giovanni Di Crescenzo, Helger Lipmaa

7A Computational Geometry II

A Complete Approximation Algorithm for Shortest Bounded-Curvature Paths

We address the problem of finding a polynomial-time approximation scheme for shortest bounded-curvature paths in the presence of obstacles. Given an arbitrary environment $\mathcal{E}$ consisting of polygonal obstacles, two feasible configurations, a length ℓ, and an approximation factor ε, our algorithm either (i) verifies that every feasible bounded-curvature path joining the two configurations is longer than ℓ or (ii) constructs such a path Π whose length is at most (1 + ε) times the length of the shortest such path. The run time of our algorithm is polynomial in n (the total number of obstacle vertices and edges in $\mathcal{E}$), m (the bit precision of the input), $\epsilon^{-1}$, and ℓ.

For general polygonal environments, there is no known upper bound on the length, or description, of a shortest feasible bounded-curvature path as a function of n and m. Furthermore, even if the length and description of a shortest path are known to be linear in n and m, finding such a path is known to be NP-hard [14].

Previous results construct (1 + ε) approximations to the shortest ε-robust bounded-curvature path [11,3] in time that is polynomial in n and $\epsilon^{-1}$. (Intuitively, a path is ε-robust if it remains feasible when simultaneously twisted by some small amount at each of its environment contacts.) Unfortunately, ε-robust solutions do not exist for all problem instances that admit bounded-curvature paths. Furthermore, even if an ε-robust path exists, the shortest bounded-curvature path may be arbitrarily shorter than the shortest ε-robust path. In effect, these earlier results confound two distinct sources of problem difficulty, measured by $\epsilon^{-1}$ and ℓ. Our result is not only more general, but it also clarifies the critical factors contributing to the complexity of bounded-curvature motion planning.

Jonathan Backer, David Kirkpatrick
Detecting Commuting Patterns by Clustering Subtrajectories

In this paper we consider the problem of detecting commuting patterns in a trajectory. For this we search for similar subtrajectories. To measure spatial similarity we choose the Fréchet distance and the discrete Fréchet distance between subtrajectories, which are invariant under differences in speed. We give several approximation algorithms, and also show that the problem of finding the ‘longest’ subtrajectory cluster is as hard as MaxClique to compute and approximate.

Kevin Buchin, Maike Buchin, Joachim Gudmundsson, Maarten Löffler, Jun Luo
On the Stretch Factor of Convex Delaunay Graphs

Let C be a compact and convex set in the plane that contains the origin in its interior, and let S be a finite set of points in the plane. The Delaunay graph $\mathord{\it DG}_C(S)$ of S is defined to be the dual of the Voronoi diagram of S with respect to the convex distance function defined by C. We prove that $\mathord{\it DG}_C(S)$ is a t-spanner for S, for some constant t that depends only on the shape of the set C. Thus, for any two points p and q in S, the graph $\mathord{\it DG}_C(S)$ contains a path between p and q whose Euclidean length is at most t times the Euclidean distance between p and q.

Prosenjit Bose, Paz Carmi, Sébastien Collette, Michiel Smid
Covering a Simple Polygon by Monotone Directions

In this paper we study the problem of finding a set of k directions for a given simple polygon P, such that for each point p ∈ P there is at least one direction in which the line through p intersects the polygon only once. For k = 1, this is the classical problem of finding directions in which the polygon is monotone, and all such directions can be found in linear time for a simple n-gon. For k > 1, this problem becomes much harder; we give an O(n^5 log^2 n)-time algorithm for k = 2, and an O(n^{3k+2})-time algorithm for k ≥ 3. These results are the first on this generalization of the monotonicity problem.
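For a single fixed direction, monotonicity is easy to test directly: a simple polygon is monotone with respect to a sweep direction d exactly when the projection of its boundary onto d has one strict local minimum (and, symmetrically, one strict local maximum). A minimal sketch, assuming no two vertices project equally onto d:

```python
def is_monotone(poly, d):
    """Test whether a simple polygon (list of (x, y) vertices in cyclic
    order) is monotone w.r.t. sweep direction d = (dx, dy), i.e. lines
    orthogonal to d meet it in a single segment. Assumes all vertices
    have distinct projections onto d; counts strict local minima of the
    projected boundary, which must be exactly one."""
    n = len(poly)
    proj = [x * d[0] + y * d[1] for x, y in poly]
    minima = 0
    for i in range(n):
        prev, cur, nxt = proj[i - 1], proj[i], proj[(i + 1) % n]
        if cur < prev and cur < nxt:
            minima += 1
    return minima == 1
```

Running this over all candidate directions is the essence of the linear-time k = 1 case; the hedge about distinct projections matters because ties (edges parallel to the sweep line) need extra care in a full implementation.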

Hee-Kap Ahn, Peter Brass, Christian Knauer, Hyeon-Suk Na, Chan-Su Shin

7B Network

On the Stability of Web Crawling and Web Search

In this paper, we analyze a graph-theoretic property motivated by web crawling. We introduce the notion of a stable core: the set of web pages that are usually contained in the crawling buffer when the buffer size is smaller than the total number of web pages. We analyze the size of the core in a random graph model based on the bounded Pareto power-law distribution. We prove that a core of significant size exists for a large range of parameters 2 < α < 3 of the power law.

Reid Anderson, Christian Borgs, Jennifer Chayes, John Hopcroft, Vahab Mirrokni, Shang-Hua Teng
Average Update Times for Fully-Dynamic All-Pairs Shortest Paths

We study the fully-dynamic all-pairs shortest path problem for graphs with arbitrary non-negative edge weights. It is known for digraphs that an update of the distance matrix costs $\ensuremath{{\cal \tilde O}}(n^{2.75})$ worst-case time [Thorup, STOC ’05] and $\ensuremath{{\cal \tilde O}}(n^2)$ amortized time [Demetrescu and Italiano, J. ACM ’04], where n is the number of vertices. We present the first average-case analysis of the undirected problem. For a random update we show that the expected time per update is bounded by O(n^{4/3 + ε}) for all ε > 0.

Tobias Friedrich, Nils Hebbinghaus
Computing Frequency Dominators and Related Problems

We consider the problem of finding frequency dominators in a directed graph with a single source vertex and a single terminal vertex. A vertex x is a frequency dominator of a vertex y if and only if, in each source-to-terminal path, the number of occurrences of x is at least equal to the number of occurrences of y. This problem was introduced in a paper by Lee et al. [11] in the context of dynamic program optimization, where an efficient algorithm to compute the frequency dominance relation in reducible graphs was given. In this paper we show that frequency dominators can be efficiently computed in general directed graphs. Specifically, we present an algorithm that computes a sparse (O(n)-space), implicit representation of the frequency dominance relation in O(m + n) time, where n is the number of vertices and m is the number of arcs of the input graph. Given this representation, we can report all the frequency dominators of a vertex in time proportional to the output size, and answer queries of whether a vertex x is a frequency dominator of a vertex y in constant time. Therefore, we get the same asymptotic complexity as for the regular dominators problem. We also show that, given our representation of frequency dominance, we can partition the vertex set into regions in O(n) time, such that all vertices in the same region have an equal number of appearances in any source-to-terminal path. The computation of regions has applications in program optimization and parallelization.

Loukas Georgiadis
Computing Best Swaps in Optimal Tree Spanners

In a densely connected communication network, represented by a graph G with nonnegative edge weights, it is often advantageous to route all communication on a sparse, spanning subnetwork, typically a spanning tree of G. With the communication overhead in mind, we consider a spanning tree T of G which guarantees that for any two nodes, their distance in T is at most k times their distance in G, where k, called the stretch, is as small as possible. Such a spanning tree which minimizes the stretch is called an optimal tree spanner, and it can be used for efficient routing. However, for a communication tree, the failure of an edge is catastrophic; it disconnects the tree. Functionality can be restored by connecting both parts of the tree with another edge, while leaving the two parts themselves untouched. In situations where the failure can be repaired rapidly, such a quick fix is preferred over the recomputation of an entirely new optimal tree spanner, because it is much closer to the previous solution and hence requires far fewer adjustments in the routing scheme. We are therefore interested in the problem of finding, for any possibly failing edge in the spanner T, a best swap edge to replace it. The objective here is naturally to minimize the stretch of the new tree. We show how all these best swap edges can be computed in total time O(m^2 log n) in graphs with arbitrary nonnegative edge weights. For graphs with unit weight edges (also called unweighted graphs), we present an O(n^3) time algorithm. Furthermore, we present a distributed algorithm for computing the best swap for each edge in the spanner.

Shantanu Das, Beat Gfeller, Peter Widmayer

8A Optimization

Covering a Point Set by Two Disjoint Rectangles

Given a set S of n points in the plane, the disjoint two-rectangle covering problem is to find a pair of disjoint rectangles such that their union contains S and the area of the larger rectangle is minimized. In this paper we consider two variants of this optimization problem: (1) the rectangles are free to rotate but must remain parallel to each other, and (2) one rectangle is axis-parallel but the other rectangle is allowed to have an arbitrary orientation. For both of the problems, we present O(n^2 log n)-time algorithms using O(n) space.

Hee-Kap Ahn, Sang Won Bae
Computing the Maximum Detour of a Plane Graph in Subquadratic Time

Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm for this problem has Θ(n^2) running time, where n is the number of vertices. We show how to obtain O(n^{3/2} log^3 n) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n log^3 n) expected time.
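For intuition, a quadratic-time baseline restricted to vertex pairs looks as follows. The true maximum detour ranges over all points of G, not just vertices, so this only lower-bounds it; the graph representation (point list plus edge index pairs) is an assumption for illustration:

```python
import heapq
from math import dist

def max_vertex_detour(points, edges):
    """Lower bound on the maximum detour of a plane graph whose edges
    are line segments: max over vertex pairs of (graph distance with
    Euclidean edge lengths) / (straight-line distance)."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        w = dist(points[u], points[v])
        adj[u].append((v, w))
        adj[v].append((u, w))

    def dijkstra(src):
        d = [float("inf")] * n
        d[src] = 0.0
        pq = [(0.0, src)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > d[u]:
                continue
            for v, w in adj[u]:
                if du + w < d[v]:
                    d[v] = du + w
                    heapq.heappush(pq, (d[v], v))
        return d

    best = 1.0
    for s in range(n):
        ds = dijkstra(s)
        for t in range(s + 1, n):
            best = max(best, ds[t] / dist(points[s], points[t]))
    return best
```

Running Dijkstra from every vertex is what the Θ(n^2)-time barrier alludes to; the paper's contribution is beating this bound.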

Christian Wulff-Nilsen
Finding Long Paths, Cycles and Circuits

We present a polynomial-time algorithm to find a cycle of length $\exp(\Omega(\sqrt{\log \ell}))$ in an undirected graph having a cycle of length ≥ ℓ. This is a slight improvement over previously known bounds. In addition, the algorithm is more general, since it can similarly approximate the longest circuit, as well as the longest S-circuit (i.e., for S an arbitrary subset of vertices, a circuit that can visit each vertex in S at most once). We also show that any algorithm for approximating the longest cycle can approximate the longest circuit, with a square root reduction in length. For digraphs, we show that the long cycle and long circuit problems have the same approximation ratio up to a constant factor. We also give an algorithm to find a vw-path of length ≥ log n / log log n if one exists; this is within a log log n factor of a hardness result.

Harold N. Gabow, Shuxin Nie
Computing Best and Worst Shortcuts of Graphs Embedded in Metric Spaces

Given a graph embedded in a metric space, its dilation is the maximum over all distinct pairs of vertices of the ratio between their distance in the graph and the metric distance between them. Given such a graph G with n vertices and m edges and consisting of at most two connected components, we consider the problem of augmenting G with an edge such that the resulting graph has minimum dilation. We show that we can find such an edge in $O((n^4\log n)/\sqrt m)$ time using linear space, which solves an open problem of whether a linear-space algorithm with o(n^4) running time exists. We show that O(n^2 log n) time is achievable if G is a simple path or the union of two vertex-disjoint simple paths. Finally, we show how to find an edge that maximizes the dilation of the resulting graph in O(n^3) time with O(n^2) space, and in O(n^3 log n) time with linear space.

Jun Luo, Christian Wulff-Nilsen

8B Routing

On Labeled Traveling Salesman Problems

We consider labeled Traveling Salesman Problems, defined upon a complete graph of n vertices with colored edges. The objective is to find a tour of maximum (or minimum) number of colors. We derive results regarding hardness of approximation, and analyze approximation algorithms for both versions of the problem. For the maximization version we give a $\frac{1}{2}$-approximation algorithm and show that it is APX-hard. For the minimization version, we show that it is not approximable within n^{1−ε} for every ε > 0. When every color appears in the graph at most r times and r is an increasing function of n, the problem is not O(r^{1−ε})-approximable. For fixed constant r we analyze a polynomial-time (r + H_r)/2-approximation algorithm (H_r is the r-th harmonic number), and prove APX-hardness. The analysis of the studied algorithms is shown to be tight.

Basile Couëtoux, Laurent Gourvès, Jérôme Monnot, Orestis A. Telelis
Navigating in a Graph by Aid of Its Spanning Tree

Let G = (V, E) be a graph and T be a spanning tree of G. We consider the following strategy for advancing in G from a vertex x towards a target vertex y: from a current vertex z (initially, z = x), unless z = y, go to a neighbor of z in G that is closest to y in T (breaking ties arbitrarily). In this strategy, each vertex has full knowledge of its neighborhood in G and can use the distances in T to navigate in G. Thus, in addition to standard local information (the neighborhood N_G(v)), the only global information that is available to each vertex v is the topology of the spanning tree T (in fact, v can know only a very small piece of information about T and still be able to infer from it the necessary tree distances). For each source vertex x and target vertex y, a path, called a greedy routing path, is produced this way. Denote by g_{G,T}(x, y) the length of a longest greedy routing path that can be produced for x and y using this strategy and T. We say that a spanning tree T of a graph G is an additive r-carcass for G if g_{G,T}(x, y) ≤ d_G(x, y) + r for each ordered pair x, y ∈ V. In this paper, we investigate the problem, given a graph family ${\cal F}$, whether a small integer r exists such that any graph $G\in {\cal F}$ admits an additive r-carcass. We show that rectilinear p × q grids, hypercubes, distance-hereditary graphs, and dually chordal graphs (and, therefore, strongly chordal graphs and interval graphs) all admit additive 0-carcasses. Furthermore, every chordal graph G admits an additive (ω(G) + 1)-carcass (where ω(G) is the size of a maximum clique of G), each 3-sun-free chordal graph admits an additive 2-carcass, and each chordal bipartite graph admits an additive 4-carcass. In particular, any k-tree admits an additive (k + 2)-carcass. All those carcasses are easy to construct.
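The greedy routing strategy defined above can be sketched directly. This illustration computes tree distances naively with BFS, whereas the abstract notes each vertex can get by with far less information about T:

```python
from collections import deque

def tree_dist_to(t_adj, y):
    """Hop distances to y in the spanning tree T via BFS from y.
    t_adj: adjacency dict {vertex: list of tree neighbors}."""
    d = {y: 0}
    q = deque([y])
    while q:
        u = q.popleft()
        for v in t_adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

def greedy_route(g_adj, t_adj, x, y):
    """From x towards y: repeatedly step to the G-neighbor that is
    closest to y in T (ties broken by vertex order). Terminates
    because every tree edge is also a graph edge, so the chosen
    neighbor's tree distance to y strictly decreases each step."""
    dist = tree_dist_to(t_adj, y)
    path, z = [x], x
    while z != y:
        z = min(g_adj[z], key=lambda v: (dist[v], v))
        path.append(z)
    return path
```

On a 4-cycle with a path spanning tree, the strategy takes the non-tree chord when it helps, which is exactly why a good carcass can route along near-shortest graph paths rather than tree paths.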

Feodor F. Dragan, Martin Matamala
Single Vehicle Scheduling Problems on Path/Tree/Cycle Networks with Release and Handling Times

In this paper, we consider the single vehicle scheduling problem (SVSP) on networks. Each job, located at some node, has a release time and a handling time. The vehicle starts from a node (depot), processes all the jobs, and then returns to the depot. The processing of a job cannot be started before its release time, and its handling time indicates the time needed to process the job. The objective is to find a routing schedule of the vehicle that minimizes the completion time. When the underlying network is a path, we provide a simple 3/2-approximation algorithm for SVSP where the depot is arbitrarily located on the path, and a 5/3-approximation algorithm for SVSP where the vehicle’s starting depot and the ending depot are not the same. For the case when the network is a tree, we show that SVSP is polynomially approximable within 11/6 of optimal. All these results are improvements of the previous results [2,4]. The approximation ratio is improved further when the tree network has a constant number of leaf nodes. For cycle networks, we propose a 9/5-approximation algorithm and show that SVSP without handling times can be solved exactly in polynomial time. No such results on cycle networks were previously known.

Binay Bhattacharya, Paz Carmi, Yuzhuang Hu, Qiaosheng Shi
Bidirectional Core-Based Routing in Dynamic Time-Dependent Road Networks

In a recent work [1], we proposed a point-to-point shortest paths algorithm which applies bidirectional search on time-dependent road networks. The algorithm is based on A* and runs a backward search in order to bound the set of nodes that have to be explored by the forward search. In this paper we extend the bidirectional time-dependent search algorithm in order to allow core routing, which is a very effective technique introduced for static graphs that consists in carrying out most of the search on a subset of the original node set. Moreover, we tackle the dynamic scenario where the piecewise linear time-dependent arc cost functions are not fixed, but can have their coefficients updated. We provide extensive computational results to show that our approach is a significant speed-up with respect to the original algorithm, and that it is able to deal with the dynamic scenario requiring only a small computational effort to update the cost functions and related data structures.

Daniel Delling, Giacomo Nannicini

9A Graph Algorithm II

Bandwidth of Bipartite Permutation Graphs
(Extended Abstract)

The bandwidth problem is to find a linear layout of the vertices of a graph that minimizes the maximum distance between two vertices joined by an edge. The bandwidth problem is one of the classic ${\mathcal NP}$-complete problems; in particular, it remains ${\mathcal NP}$-complete even for trees. The bandwidth problem can be solved in polynomial time for only a few graph classes. We present efficient algorithms for computing the bandwidth for three graph classes: a linear time algorithm for threshold graphs, a linear time algorithm for chain graphs, and an O(n^2) time algorithm for bipartite permutation graphs. The former two algorithms improve the previously known upper bounds to optimal, the last one improves a recent result, and together they give positive answers to some open problems.
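For very small graphs, the definition of bandwidth can be checked by exhaustive search over layouts; this exponential-time sketch is purely illustrative and unrelated to the paper's polynomial algorithms:

```python
from itertools import permutations

def bandwidth(n, edges):
    """Exact bandwidth of a graph on vertices 0..n-1 by brute force:
    the minimum, over all linear layouts, of the maximum distance
    |pos(u) - pos(v)| taken over the edges. O(n! * m) time, so only
    suitable for tiny instances."""
    if not edges:
        return 0
    best = n  # any layout has width at most n - 1
    for layout in permutations(range(n)):
        pos = {v: i for i, v in enumerate(layout)}
        width = max(abs(pos[u] - pos[v]) for u, v in edges)
        best = min(best, width)
    return best
```

A path achieves bandwidth 1, while a star on four vertices already forces bandwidth 2, since its center cannot be adjacent in the layout to all three leaves.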

Ryuhei Uehara
König Deletion Sets and Vertex Covers above the Matching Size

A graph is König-Egerváry if the size of a minimum vertex cover equals the size of a maximum matching in the graph. We show that the problem of deleting at most k vertices to make a given graph König-Egerváry is fixed-parameter tractable with respect to k. This is proved using interesting structural theorems on matchings and vertex covers which could be useful in other contexts.

We also show an interesting parameter-preserving reduction from the vertex-deletion version of red/blue-split graphs [4,9] to a version of Vertex Cover and as a by-product obtain (1) the best-known exact algorithm for the optimization version of Odd Cycle Transversal [15]; and (2) fixed-parameter algorithms for several vertex-deletion problems, including the following: deleting k vertices to make a given graph (a) bipartite [17], (b) split [5], and (c) red/blue-split [7].
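The König-Egerváry property itself can be verified on tiny graphs by brute force, which makes the definition concrete (exponential time; illustrative only, and unrelated to the paper's FPT machinery):

```python
from itertools import combinations

def min_vertex_cover(n, edges):
    """Smallest k such that some k-subset of vertices touches every edge."""
    for k in range(n + 1):
        for s in combinations(range(n), k):
            cover = set(s)
            if all(u in cover or v in cover for u, v in edges):
                return k

def max_matching(edges):
    """Largest set of pairwise vertex-disjoint edges."""
    for k in range(len(edges), 0, -1):
        for combo in combinations(edges, k):
            endpoints = [v for e in combo for v in e]
            if len(endpoints) == len(set(endpoints)):
                return k
    return 0

def is_konig_egervary(n, edges):
    """A graph is König-Egerváry iff min vertex cover = max matching."""
    return min_vertex_cover(n, edges) == max_matching(edges)
```

Bipartite graphs always pass this test (König's theorem), whereas an odd cycle such as a triangle fails: its maximum matching has one edge but covering it takes two vertices.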

Sounaka Mishra, Venkatesh Raman, Saket Saurabh, Somnath Sikdar
Independent Sets of Maximum Weight in Apple-Free Graphs

We present the first polynomial-time algorithm to solve the Maximum Weight Independent Set problem for apple-free graphs, which are a common generalization of several important classes where the problem can be solved efficiently, such as claw-free graphs, chordal graphs and cographs. Our solution is based on a combination of two algorithmic techniques (modular decomposition and decomposition by clique separators) and a deep combinatorial analysis of the structure of apple-free graphs. Our algorithm is robust in the sense that it does not require the input graph G to be apple-free; the algorithm either finds an independent set of maximum weight in G or reports that G is not apple-free.

Andreas Brandstädt, Tilo Klembt, Vadim V. Lozin, Raffaele Mosca
Enumeration of Perfect Sequences of Chordal Graph

A graph is chordal if and only if it has no chordless cycle of length more than three. The set of maximal cliques in a chordal graph admits special tree structures called clique trees. A perfect sequence is a sequence of maximal cliques obtained by using the reverse order of repeatedly removing the leaves of a clique tree. This paper addresses the problem of enumerating all the perfect sequences. Although this problem has statistical applications, no efficient algorithm had been proposed. There are two difficulties in developing this type of algorithm. First, a chordal graph does not generally have a unique clique tree. Second, a perfect sequence can normally be generated by two or more distinct clique trees. Thus it is hard to generate perfect sequences in a straightforward way from each possible clique tree. In this paper, we propose a method to enumerate perfect sequences without constructing clique trees. As a result, we have developed the first polynomial delay algorithm for this problem. In particular, the time complexity of the algorithm is O(1) on average for each perfect sequence.

Yasuko Matsui, Ryuhei Uehara, Takeaki Uno
From Tree-Width to Clique-Width: Excluding a Unit Interval Graph

From the theory of graph minors we know that the class of planar graphs is the only critical class with respect to tree-width. In the present paper, we reveal a critical class with respect to clique-width, a notion generalizing tree-width. This class is known in the literature under different names, such as unit interval, proper interval or indifference graphs, and has important applications in various fields, including molecular biology. We prove that the unit interval graphs constitute a minimal hereditary class of unbounded clique-width. As an application, we show that list coloring is fixed-parameter tractable in the class of unit interval graphs.

Vadim V. Lozin

9B Complexity II

New Results on the Most Significant Bit of Integer Multiplication

Integer multiplication, as one of the basic arithmetic functions, has been the focus of several complexity-theoretical investigations, and ordered binary decision diagrams (OBDDs) are one of the most common dynamic data structures for Boolean functions. Analyzing the limits of symbolic graph algorithms for the reachability problem, Sawitzki (2006) presented the first exponential lower bound on the π-OBDD size for the most significant bit of integer multiplication according to one predefined variable order π. Since the choice of the variable order is a main issue in obtaining OBDDs of small size, this investigation is continued here. As a result, a new upper bound method and the first non-trivial upper bound on the size of OBDDs according to an arbitrary variable order are presented. Furthermore, Sawitzki’s lower bound is improved.

Beate Bollig, Jochen Klump
Sorting with Complete Networks of Stacks

Knuth introduced the problem of sorting numbers using a sequence of stacks. Tarjan extended this idea to sorting with acyclic networks of stacks (and queues), where items to be sorted move from a source through the network to a sink while they may be stored temporarily at nodes (the stacks). Both characterized which permutations are sortable this way; but they ignored the associated optimization problem (minimize the number of moves) and its complexity.

Given a complete, thus cyclic, network of k ≥ 2 stacks, any permutation is obviously sortable. The important question now is how to actually sort with a minimum number of shuffles, i.e., moves in between stacks. This is a natural algorithmic complement to the structural questions asked by Knuth, Tarjan, and others. It is the first time shuffles are considered in stack sorting, despite the great practical importance of this optimization version.

We show that it is NP-hard to approximate the minimum number of shuffles within $\mathcal{O}(n^{1-\epsilon})$ for networks of k ≥ 4 stacks, even when the problem is restricted to complete networks, by relating stack sorting to Min k-Partition on circle graphs (for which we prove a stronger inapproximability result of independent interest). For complete networks, a simple merge sort algorithm achieves an approximation ratio of $\mathcal{O}(n \log n)$ for k ≥ 3; however, closing the logarithmic gap to our lower bound appears to be an intriguing open question. Yet, on the positive side, we present a tight approximation algorithm which computes a solution with a linear approximation guarantee, using a resource augmentation to αk + 1 stacks, given an α-approximation algorithm for coloring circle graphs.

When there are constraints as to which items may be placed on top of each other, deciding about sortability becomes non-trivial again. We show that this problem is PSPACE-complete, for every fixed k ≥ 3.

Felix G. König, Marco E. Lübbecke
Quantum Query Complexity of Boolean Functions with Small On-Sets

The main objective of this paper is to show that the quantum query complexity Q(f) of an N-bit Boolean function f is bounded by a function of a simple and natural parameter, namely M = |{x | f(x) = 1}|, the size of f’s on-set. We prove that: (i) for $poly(N)\le M\le 2^{N^d}$ for some constant 0 < d < 1, the upper bound of Q(f) is $O(\sqrt{N\log M / \log N})$; this bound is tight, namely there is a Boolean function f such that $Q(f) = \Omega(\sqrt{N\log M / \log N})$. (ii) For the same range of M, the (also tight) lower bound of Q(f) is $\Omega(\sqrt{N})$. (iii) The average value of Q(f) is bounded from above and below by $Q(f) = O(\log M +\sqrt{N})$ and $Q(f) = \Omega (\log M/\log N+ \sqrt{N})$, respectively. The first bound gives a simple way of bounding the quantum query complexity of testing some graph properties. In particular, it is proved that the quantum query complexity of planarity testing for a graph with n vertices is Θ(N^{3/4}), where $N = \frac{n(n-1)}{2}$.

Andris Ambainis, Kazuo Iwama, Masaki Nakanishi, Harumichi Nishimura, Rudy Raymond, Seiichiro Tani, Shigeru Yamashita
Unbounded-Error Quantum Query Complexity

This work studies the quantum query complexity of Boolean functions in an unbounded-error scenario where it is only required that the query algorithm succeeds with a probability strictly greater than 1/2. We first show that, just as in the communication complexity model, the unbounded-error quantum query complexity is exactly half of its classical counterpart for any (partial or total) Boolean function. Next, connecting the query and communication complexity results, we show that the “black-box” approach to convert quantum query algorithms into communication protocols by Buhrman-Cleve-Wigderson [STOC ’98] is optimal even in the unbounded-error setting. We also study a related setting, called the weakly unbounded-error setting. In contrast to the case of communication complexity, we show a tight multiplicative Θ(log n) separation between quantum and classical query complexity in this setting for a partial Boolean function.

Ashley Montanaro, Harumichi Nishimura, Rudy Raymond
Super-Exponential Size Advantage of Quantum Finite Automata with Mixed States

Quantum finite automata with mixed states are proved to be super-exponentially more concise than quantum finite automata with pure states. It was proved earlier by A. Ambainis and R. Freivalds that quantum finite automata with pure states can have an exponentially smaller number of states than deterministic finite automata recognizing the same language. There was a never-published “folk theorem” proving that quantum finite automata with mixed states are no more than super-exponentially more concise than deterministic finite automata. It was not known whether the super-exponential advantage of quantum automata is really achievable.

We use a novel proof technique based on Kolmogorov complexity to prove that there is an infinite sequence of distinct integers n and languages L_n over a 4-letter alphabet such that there are quantum finite automata with mixed states with 2n + 1 states recognizing the language L_n with probability $\frac{3}{4}$, while any deterministic finite automaton recognizing L_n needs to have at least e^{O(n ln n)} states.

Rūsiņš Freivalds
Backmatter
Metadata
Title
Algorithms and Computation
Edited by
Seok-Hee Hong
Hiroshi Nagamochi
Takuro Fukunaga
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-92182-0
Print ISBN
978-3-540-92181-3
DOI
https://doi.org/10.1007/978-3-540-92182-0