
About this Book

The two-volume set LNCS 5125 and LNCS 5126 constitutes the refereed proceedings of the 35th International Colloquium on Automata, Languages and Programming, ICALP 2008, held in Reykjavik, Iceland, in July 2008. The 126 revised full papers presented together with 4 invited lectures were carefully reviewed and selected from a total of 407 submissions. The papers are grouped in three major tracks: on algorithms, automata, complexity and games; on logic, semantics, and theory of programming; and on security and cryptography foundations. LNCS 5125 contains 70 contributions of track A, selected from 269 submissions, as well as 2 invited lectures. The papers are organized in topical sections on complexity: boolean functions and circuits, data structures, random walks and random structures, design and analysis of algorithms, scheduling, codes and coding, coloring, randomness in computation, online and dynamic algorithms, approximation algorithms, property testing, parameterized algorithms and complexity, graph algorithms, computational complexity, games and automata, group testing, streaming, algorithmic game theory, and quantum computing.

Table of Contents

Graph Structure and Monadic Second-Order Logic: Language Theoretical Aspects

Graph structure is a flexible concept covering many different types of graph properties. Hierarchical decompositions yielding the notions of tree-width and clique-width, expressed by terms written with appropriate graph operations, are important tools for the construction of Fixed-Parameter Tractable algorithms and also for the extension of methods and results of Formal Language Theory to the description of sets of finite graphs. This informal overview presents the main definitions, results and open problems and tries to answer some frequently asked questions.

Bruno Courcelle

Internet Ad Auctions: Insights and Directions

On the Internet, there are advertisements (ads) of different kinds: image, text, video and other specially marked objects that are distinct from the underlying content of the page. There is an industry behind the management of such ads, and it faces a number of algorithmic challenges. This note presents a small selection of such problems, some insights, and open research directions.

S. Muthukrishnan

The Complexity of Boolean Formula Minimization

The Minimum Equivalent Expression problem is a natural optimization problem in the second level of the Polynomial-Time Hierarchy. It has long been conjectured to be $\Sigma_2^P$-complete and indeed appears as an open problem in Garey and Johnson [GJ79]. The depth-2 variant was only shown to be $\Sigma_2^P$-complete in 1998 [Uma98], and even resolving the complexity of the depth-3 version has been mentioned as a challenging open problem. We prove that the depth-$k$ version is $\Sigma_2^P$-complete under Turing reductions for all $k \geq 3$. We also settle the complexity of the original, unbounded-depth Minimum Equivalent Expression problem, by showing that it too is $\Sigma_2^P$-complete under Turing reductions.

David Buchfuhrer, Christopher Umans

Optimal Cryptographic Hardness of Learning Monotone Functions

A wide range of positive and negative results have been established for learning different classes of Boolean functions from uniformly distributed random examples. However, polynomial-time algorithms have thus far been obtained almost exclusively for various classes of monotone functions, while the computational hardness results obtained to date have all been for various classes of general (nonmonotone) functions. Motivated by this disparity between known positive results (for monotone functions) and negative results (for nonmonotone functions), we establish strong computational limitations on the efficient learnability of various classes of monotone functions.

We give several such hardness results which are provably almost optimal, since they nearly match known positive results. Some of our results show cryptographic hardness of learning polynomial-size monotone circuits to accuracy only slightly greater than $1/2 + 1/\sqrt{n}$; this accuracy bound is close to optimal by known positive results (Blum et al., FOCS ’98). Other results show that under a plausible cryptographic hardness assumption, a class of constant-depth, sub-polynomial-size circuits computing monotone functions is hard to learn; this result is close to optimal in terms of the circuit size parameter by known positive results as well (Servedio, Information and Computation ’04). Our main tool is a complexity-theoretic approach to hardness amplification via noise sensitivity of monotone functions that was pioneered by O’Donnell (JCSS ’04).

Dana Dachman-Soled, Homin K. Lee, Tal Malkin, Rocco A. Servedio, Andrew Wan, Hoeteck Wee

On Berge Multiplication for Monotone Boolean Dualization

Given the prime CNF representation $\varphi$ of a monotone Boolean function $f:\{0,1\}^n \to \{0,1\}$, the dualization problem calls for finding the corresponding prime DNF representation $\psi$ of $f$. A very simple method (called Berge multiplication [2, pages 52–53]) works by multiplying out the clauses of $\varphi$ from left to right in some order, simplifying whenever possible using the absorption law. We show that for any monotone CNF $\varphi$, Berge multiplication can be done in subexponential time, and for many interesting subclasses of monotone CNFs, such as CNFs with bounded size, bounded degree, bounded intersection, bounded conformality, and read-once formulae, it can be done in polynomial or quasi-polynomial time.

Endre Boros, Khaled Elbassioni, Kazuhisa Makino
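The left-to-right multiplication with absorption described above can be sketched in a few lines (our own illustration, not the paper's implementation; the names `absorb` and `berge_multiply` and the frozenset clause encoding are ours):

```python
# Sketch of Berge multiplication for monotone dualization. Clauses and
# DNF terms are frozensets of variable indices; terms are built by
# multiplying clauses left to right and simplifying by absorption.

def absorb(terms):
    """Keep only inclusion-minimal terms (the absorption law)."""
    minimal = []
    for t in sorted(terms, key=len):
        if not any(m <= t for m in minimal):
            minimal.append(t)
    return minimal

def berge_multiply(cnf):
    """Dualize a monotone CNF (list of clauses) into its prime DNF."""
    terms = [frozenset()]
    for clause in cnf:
        terms = absorb([t | {v} for t in terms for v in clause])
    return terms

# Example: (x1 or x2)(x2 or x3) dualizes to x2 or x1 x3
cnf = [frozenset({1, 2}), frozenset({2, 3})]
print(sorted(map(sorted, berge_multiply(cnf))))  # [[1, 3], [2]]
```

The intermediate term lists between clause multiplications are exactly what the paper bounds; absorption after every step keeps them prime.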

Diagonal Circuit Identity Testing and Lower Bounds

In this paper we give the first deterministic polynomial time algorithm for testing whether a diagonal depth-3 circuit $C(x_1,\ldots,x_n)$ (i.e. $C$ is a sum of powers of linear functions) is zero. We also prove an exponential lower bound showing that such a circuit will compute the determinant or permanent only if there are exponentially many linear functions. Our techniques generalize to the following new results:

1. Suppose we are given a depth-4 circuit (over any field $\mathbb{F}$) of the form:

$$C(x_1,\ldots,x_n):=\sum_{i=1}^k L_{i,1}^{e_{i,1}}\cdots L_{i,s}^{e_{i,s}}$$

where each $L_{i,j}$ is a sum of univariate polynomials in $\mathbb{F}[x_1,\ldots,x_n]$. We can test whether $C$ is zero deterministically in $\mathrm{poly}(\mathrm{size}(C),\max_i\{(1+e_{i,1})\cdots(1+e_{i,s})\})$ field operations. In particular, this gives a deterministic polynomial time identity test for general depth-3 circuits $C$ when $d := \deg(C)$ is logarithmic in $\mathrm{size}(C)$.

2. We prove that if the above circuit $C(x_1,\ldots,x_n)$ computes the determinant (or permanent) of an $m \times m$ formal matrix with a “small” $s=o\left(\frac{m}{\log m}\right)$, then $k = 2^{\Omega(m)}$. Our lower bounds work for all fields $\mathbb{F}$. (Previous exponential lower bounds for depth-3 only work for nonzero characteristic.)

3. We also present an exponentially faster identity test for homogeneous diagonal circuits (deterministic, in $\mathrm{poly}(nk\log(d))$ field operations over finite fields).

Nitin Saxena

Cell-Probe Proofs and Nondeterministic Cell-Probe Complexity

We study the nondeterministic cell-probe complexity of static data structures. We introduce cell-probe proofs (CPP), a proof system for the cell-probe model, which describes verifications instead of computations in the cell-probe model. We present a combinatorial characterization of CPP. With this novel tool, we prove the following lower bounds for the nondeterministic cell-probe complexity of static data structures:

We show that there exist data structure problems which have super-constant nondeterministic cell-probe complexity. In particular, we show that for the exact nearest neighbor search (NNS) problem or the partial match problem in high-dimensional Hamming space, there does not exist a static data structure with $\mathrm{poly}(n)$ cells, each of which contains $n^{o(1)}$ bits, such that the nondeterministic cell-probe complexity is $O(1)$, where $n$ is the number of points in the data set for the NNS or partial match problem.

For the polynomial evaluation problem, if single-cell nondeterministic probes are sufficient, then either the size of a single cell is close to the size of the whole polynomial, or the total size of the data structure is close to that of a naive data structure that stores results for all possible queries.

Yitong Yin

Constructing Efficient Dictionaries in Close to Sorting Time

The dictionary problem is among the oldest problems in computer science. Yet our understanding of the complexity of the dictionary problem in realistic models of computation has been far from complete. Designing highly efficient dictionaries without resorting to the use of randomness has appeared to be a particularly challenging task. We present solutions to the static dictionary problem that significantly improve the previously known upper bounds and bring them close to obvious lower bounds. Our dictionaries have a constant lookup cost and use linear space, which was known to be possible, but the worst-case cost of construction of the structures is proportional to only $\log\log n$ times the cost of sorting the input. Our claimed performance bounds are obtained in the word RAM model and in the external memory models; only the sorting procedures involved in the algorithms need to be changed between the models.

Milan Ružić

On List Update with Locality of Reference

We present a comprehensive study of the list update problem with locality of reference. More specifically, we present a combined theoretical and experimental study in which the theoretically proven and experimentally observed performance guarantees of algorithms match or nearly match. In the first part of the paper we introduce a new model of locality of reference that is based on the natural concept of runs. Using this model we develop refined theoretical analyses of popular list update algorithms. The second part of the paper is devoted to an extensive experimental study in which we have tested the algorithms on traces from benchmark libraries. It shows that the theoretical and experimental bounds differ by just a few percent. Our new bounds are substantially lower than those provided by standard competitive analysis. Another result is that the elegant Move-To-Front strategy exhibits the best performance, which confirms that it is the method of choice in practice.

Susanne Albers, Sonja Lauer
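As a reminder of the strategy the experiments single out, here is a minimal sketch of standard Move-To-Front (our own illustration, not code from the paper; the function name is ours). Each access costs the item's current 1-based position, and the accessed item is then moved to the front of the list:

```python
def mtf_access_cost(initial, requests):
    """Serve a request sequence with Move-To-Front and return total cost:
    an access costs the item's position, and the item moves to the front."""
    lst = list(initial)
    cost = 0
    for x in requests:
        i = lst.index(x)      # position of the requested item (0-based)
        cost += i + 1         # access cost is the 1-based position
        lst.insert(0, lst.pop(i))  # promote to the front
    return cost

print(mtf_access_cost(['a', 'b', 'c'], ['c', 'c', 'b']))  # 7
```

Runs of repeated requests, the basis of the locality model above, are exactly the sequences on which this promotion rule pays off: a repeated item costs 1 after its first access.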

A New Combinatorial Approach for Sparse Graph Problems

We give a new combinatorial data structure for representing arbitrary Boolean matrices. After a short preprocessing phase, the data structure can perform fast vector multiplications with a given matrix, where the runtime depends on the sparsity of the input vector. The data structure can also return minimum witnesses for the matrix-vector product. Our approach is simple and implementable: the data structure works by precomputing small problems and recombining them in a novel way. It can be easily plugged into existing algorithms, achieving an asymptotic speedup over previous results. As a consequence, we achieve new running time bounds for computing the transitive closure of a graph, all pairs shortest paths on unweighted undirected graphs, and finding a maximum node-weighted triangle. Furthermore, any asymptotic improvement on our algorithms would imply an $o(n^3/\log^2 n)$ combinatorial algorithm for Boolean matrix multiplication, a longstanding open problem in the area. We also use the data structure to give the first asymptotic improvement over $O(mn)$ for all pairs least common ancestors on directed acyclic graphs.

Guy E. Blelloch, Virginia Vassilevska, Ryan Williams

How to Explore a Fast-Changing World (Cover Time of a Simple Random Walk on Evolving Graphs)

Motivated by real-world networks and the use of random-walk-based algorithms on them, we study simple random walks on dynamic undirected graphs with a fixed underlying vertex set, i.e., graphs which are modified by inserting or deleting edges at every step of the walk. We are interested in the expected time needed to visit all the vertices of such a dynamic graph, the cover time, under the assumption that the graph is being modified by an oblivious adversary. It is well known that on connected static undirected graphs the cover time is polynomial in the size of the graph. On the contrary, and somewhat counter-intuitively, we show that there are adversary strategies which force the expected cover time of a simple random walk on connected dynamic graphs to be exponential. We relate this result to the cover time of static directed graphs. In addition we provide a simple strategy, the lazy random walk, that guarantees polynomial cover time regardless of the changes made by the adversary.

Chen Avin, Michal Koucký, Zvi Lotker
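The lazy walk mentioned above is easy to simulate on a static graph (an illustration under our own conventions, not the paper's dynamic-adversary setting; `lazy_cover_time` and the adjacency-list format are hypothetical names chosen here). The walk stays put with probability 1/2 and otherwise moves to a uniformly random neighbor:

```python
import random

def lazy_cover_time(adj, start=0, seed=0):
    """Simulate a lazy random walk on adjacency lists `adj` and count
    the steps taken until every vertex has been visited at least once."""
    rng = random.Random(seed)
    n = len(adj)
    seen = {start}
    u, steps = start, 0
    while len(seen) < n:
        steps += 1
        if rng.random() < 0.5:       # lazy: stay put with probability 1/2
            continue
        u = rng.choice(adj[u])       # otherwise move to a uniform neighbor
        seen.add(u)
    return steps

# 4-cycle: at least 3 moves are needed to see all four vertices
print(lazy_cover_time([[1, 3], [0, 2], [1, 3], [0, 2]]))
```

On a changing graph the same loop would simply reread `adj` each step; laziness is what defeats adversarial edge timing.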

Networks Become Navigable as Nodes Move and Forget

We propose a dynamic process for network evolution, aiming at explaining the emergence of the small world phenomenon, i.e., the statistical observation that any pair of individuals are linked by a short chain of acquaintances computable by a simple decentralized routing algorithm, known as greedy routing. Our model is based on the combination of two dynamics: a random walk (spatial) process, and a harmonic forgetting (temporal) process. Both processes reflect natural behaviors of the individuals, viewed as nodes in the network of inter-individual acquaintances. We prove that, in $k$-dimensional lattices, the combination of these two processes generates long-range links mutually independently distributed as a $k$-harmonic distribution. We analyze the performance of greedy routing in the stationary regime of our process, and prove that the expected number of steps for routing from any source to any target in any multidimensional lattice is a polylogarithmic function of the distance between the two nodes in the lattice. To the best of our knowledge, these results are the first formal proof that navigability in small worlds can emerge from a dynamic process for network evolution. Our dynamic process can find practical applications in the design of spatial gossip and resource location protocols.

Augustin Chaintreau, Pierre Fraigniaud, Emmanuelle Lebhar

Fast Distributed Computation of Cuts Via Random Circulations

We describe a new circulation-based method to determine cuts in an undirected graph. A circulation is an oriented labeling of edges with integers so that at each vertex, the sum of the in-labels equals the sum of the out-labels. For an integer $k$, our approach is based on simple algorithms for sampling a circulation (mod $k$) uniformly at random. We prove that with high probability, certain dependencies in the random circulation correspond to cuts in the graph. This leads to simple new linear-time sequential algorithms for finding all cut edges and cut pairs (a set of 2 edges that form a cut) of a graph, and hence 2-edge-connected and 3-edge-connected components.

In the model of distributed computing in a graph $G = (V, E)$ with $O(\log|V|)$-bit messages, our approach yields faster algorithms for several problems. The diameter of $G$ is denoted by ${\mathcal{D}}$. Previously, Thurimella [J. Algorithms, 1997] gave a $O({\mathcal{D}}+\sqrt{|V|}\log^* |V|)$-time algorithm to identify all cut vertices, 2-edge-connected components, and cut edges, and Tsin [Int. J. Found. Comput. Sci., 2006] gave a $O(|V|+{\mathcal{D}}^2)$-time algorithm to identify all cut pairs and 3-edge-connected components.

We obtain simple $O({\mathcal{D}})$-time distributed algorithms to find all cut edges, 2-edge-connected components, and cut pairs, matching or improving previous time bounds on all graphs. Under certain assumptions these new algorithms are universally optimal, due to a $\Omega({\mathcal{D}})$-time lower bound on every graph. These results yield the first distributed algorithms with sub-linear time for cut pairs and 3-edge-connected components. Let $\Delta$ denote the maximum degree. We obtain a $O({\mathcal{D}}+\Delta/\log|V|)$-time distributed algorithm for finding cut vertices; this is faster than Thurimella’s algorithm on all graphs with $\Delta, {\mathcal{D}} = O(\sqrt{|V|})$. The basic distributed algorithms are Monte Carlo, but can be made Las Vegas without increasing the asymptotic complexity.

David Pritchard
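To convey the flavor of the circulation idea, here is a sequential sketch under our own assumptions (not the paper's distributed algorithm; `bridges_monte_carlo` and the edge-list format are ours). Sample a circulation mod $k$ by giving each non-spanning-tree edge a random nonzero value and propagating subtree sums to the tree edges; a tree edge that lies on no cycle, i.e. a cut edge, always receives label 0, while other tree edges are nonzero with high probability:

```python
import random

def bridges_monte_carlo(n, edges, k=2**31):
    """Monte Carlo detection of cut edges via a random circulation mod k."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    parent, parent_edge = [-1] * n, [-1] * n
    seen, order = [False] * n, []
    for root in range(n):                 # spanning forest by iterative DFS
        if seen[root]:
            continue
        seen[root] = True
        stack = [root]
        while stack:
            u = stack.pop()
            order.append(u)
            for v, i in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    parent[v], parent_edge[v] = u, i
                    stack.append(v)
    tree = set(parent_edge) - {-1}
    acc = [0] * n
    for i, (u, v) in enumerate(edges):    # random value on each non-tree edge
        if i not in tree:
            r = random.randrange(1, k)
            acc[u] = (acc[u] + r) % k
            acc[v] = (acc[v] - r) % k
    sub = acc[:]                          # subtree sums label the tree edges
    for u in reversed(order):             # children are summed before parents
        if parent[u] != -1:
            sub[parent[u]] = (sub[parent[u]] + sub[u]) % k
    # a tree edge is a bridge iff its circulation label is 0 (w.h.p. exact)
    return sorted(edges[parent_edge[u]] for u in range(n)
                  if parent_edge[u] != -1 and sub[u] == 0)

# triangle 0-1-2 plus pendant edge (2, 3): only (2, 3) is a cut edge
print(bridges_monte_carlo(4, [(0, 1), (1, 2), (0, 2), (2, 3)]))
```

The one-sided error (a non-bridge labeled 0) has probability at most $1/k$ per edge, which is the "with high probability" statement above in miniature.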

Finding a Maximum Matching in a Sparse Random Graph in O(n) Expected Time

We present a linear expected time algorithm for finding maximum cardinality matchings in sparse random graphs. This is optimal and improves on previous results by a logarithmic factor.

Prasad Chebolu, Alan Frieze, Páll Melsted

Function Evaluation Via Linear Programming in the Priced Information Model

We determine the complexity of evaluating monotone Boolean functions in a variant of the decision tree model introduced in [Charikar et al. 2002]. In this model, reading different variables can incur different costs, and competitive analysis is employed to evaluate the performance of the algorithms. It is known that for a monotone Boolean function $f$, the size of the largest certificate, aka $\mathrm{PROOF}(f)$, is a lower bound for $\gamma(f)$, the best possible competitiveness achievable by an algorithm on $f$. This bound has been proved to be achievable for some subclasses of the monotone Boolean functions, e.g., read-once formulae and threshold trees. However, determining $\gamma(f)$ for an arbitrary monotone Boolean function has so far remained a major open question, with the best known upper bound being essentially a factor of 2 away from the above lower bound.

We close the gap and prove that for any monotone Boolean function $f$, $\gamma(f) = \mathrm{PROOF}(f)$. In fact, we prove that $\gamma(f) \leq \mathrm{PROOF}(f)$ holds for a class much larger than the set of monotone Boolean functions. This class also contains all Boolean functions.

Ferdinando Cicalese, Eduardo Sany Laber

Improved Approximation Algorithms for Budgeted Allocations

We provide a 3/2-approximation algorithm for an offline budgeted allocations problem with applications to sponsored search auctions. This is an improvement over the $e/(e-1)$ approximation of Andelman and Mansour [1] and the $e/(e-1) - \varepsilon$ approximation (for $\varepsilon \approx 0.0001$) of Feige and Vondrak [2] for the more general Maximum Submodular Welfare (SMW) problem. For a special case of our problem, we improve this ratio to $\sqrt{2}$. We also show that the problem is APX-hard.

Yossi Azar, Benjamin Birnbaum, Anna R. Karlin, Claire Mathieu, C. Thach Nguyen

The Travelling Salesman Problem in Bounded Degree Graphs

We show that the travelling salesman problem in bounded-degree graphs can be solved in time $O\bigl((2-\epsilon)^n\bigr)$, where $\epsilon > 0$ depends only on the degree bound but not on the number of cities, $n$. The algorithm is a variant of the classical dynamic programming solution due to Bellman, and, independently, Held and Karp. In the case of bounded integer weights on the edges, we also present a polynomial-space algorithm with running time $O\bigl((2-\epsilon)^n\bigr)$ on bounded-degree graphs.

Andreas Björklund, Thore Husfeldt, Petteri Kaski, Mikko Koivisto
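For reference, the classical Bellman–Held–Karp dynamic program that the above algorithm refines runs in $O(2^n n^2)$ time over subsets of cities; a minimal sketch (our own illustration, not the paper's bounded-degree variant):

```python
from itertools import combinations

def held_karp(dist):
    """Bellman–Held–Karp DP: dp[(S, j)] is the cheapest path that starts
    at city 0, visits exactly the cities in bitmask S, and ends at j."""
    n = len(dist)
    dp = {(1 << j, j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for S in combinations(range(1, n), size):
            bits = sum(1 << j for j in S)
            for j in S:
                prev = bits ^ (1 << j)          # S without the last city j
                dp[(bits, j)] = min(dp[(prev, i)] + dist[i][j]
                                    for i in S if i != j)
    full = (1 << n) - 2                          # every city except the start
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))

dist = [[0, 2, 9, 10], [1, 0, 6, 4], [15, 7, 0, 8], [6, 3, 12, 0]]
print(held_karp(dist))  # 21, via the tour 0 -> 2 -> 3 -> 1 -> 0
```

The paper's point is that in bounded-degree graphs most of the $2^n$ subsets can never occur as vertex sets of partial tours, which is what drives the base below 2.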

Treewidth Computation and Extremal Combinatorics

For a given graph $G$ and integers $b, f \geq 0$, let $S$ be a subset of vertices of $G$ of size $b+1$ such that the subgraph of $G$ induced by $S$ is connected and $S$ can be separated from the other vertices of $G$ by removing $f$ vertices. We prove that every graph on $n$ vertices contains at most $n\binom{b+f}{b}$ such vertex subsets. This result from extremal combinatorics appears to be very useful in the design of several enumeration and exact algorithms. In particular, we use it to provide algorithms that for a given $n$-vertex graph $G$:

compute the treewidth of $G$ in time $\mathcal{O}(1.7549^n)$ by making use of exponential space, and in time $\mathcal{O}(2.6151^n)$ and polynomial space;

decide in time $\mathcal{O}((\frac{2n+k+1}{3})^{k+1}\cdot kn^6)$ if the treewidth of $G$ is at most $k$;

list all minimal separators of $G$ in time $\mathcal{O}(1.6181^n)$ and all potential maximal cliques of $G$ in time $\mathcal{O}(1.7549^n)$.

This significantly improves previous algorithms for these problems.

Fedor V. Fomin, Yngve Villanger

Fast Scheduling of Weighted Unit Jobs with Release Times and Deadlines

We present a fast algorithm for the following classic scheduling problem: determine a maximum-weight schedule for a collection of unit jobs, each of which has an associated release time, deadline, and weight. All previous algorithms for this problem have at least quadratic worst-case complexity. This job scheduling problem can also be viewed as a special case of weighted bipartite matching: each job represents a vertex on the left side of the bipartite graph; each time slot represents a vertex on the right side; each job is connected by an edge to all time slots between its release time and deadline; all of the edges adjacent to a given job have weight equal to the weight of the job. Letting $U$ denote the set of jobs and $V$ denote the set of time slots, our algorithm runs in $O(|U| + k\log^2 k)$ time, where $k \leq \min\{|U|, |V|\}$ denotes the cardinality of a maximum-cardinality matching. Thus our algorithm runs in nearly linear time, a dramatic improvement over the previous quadratic bounds.

C. Greg Plaxton
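A simple quadratic-time baseline for this problem, included purely as an illustration and not the paper's near-linear algorithm, is the matroid greedy over the bipartite view described above: consider jobs in decreasing weight order and keep a job exactly when the kept set can still be matched to distinct time slots, checked with an augmenting path. All names below are ours:

```python
def max_weight_unit_schedule(jobs):
    """Matroid greedy for unit jobs given as (release, deadline, weight):
    keep a job iff the kept set still matches to distinct integer slots."""
    order = sorted(range(len(jobs)), key=lambda j: -jobs[j][2])
    slot_of = {}                      # time slot -> index of job placed there

    def augment(j, visited):
        release, deadline, _ = jobs[j]
        for t in range(release, deadline + 1):
            if t not in visited:
                visited.add(t)
                # take a free slot, or evict and re-place its occupant
                if t not in slot_of or augment(slot_of[t], visited):
                    slot_of[t] = j
                    return True
        return False

    return sum(jobs[j][2] for j in order if augment(j, set()))

# a naive per-slot greedy would pick weight 10 and lose the deadline-0 job
print(max_weight_unit_schedule([(0, 0, 1), (0, 1, 10)]))  # 11
```

Correctness follows because the schedulable job sets form a transversal matroid; the cost is one bipartite augmentation per job, which is exactly the quadratic barrier the paper's algorithm breaks.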

Approximation Algorithms for Scheduling Parallel Jobs: Breaking the Approximation Ratio of 2

In this paper we study variants of the non-preemptive parallel job scheduling problem where the number of machines is polynomially bounded in the number of jobs. For this problem we show that a schedule with length at most $(1+\varepsilon)\mathrm{OPT}$ can be calculated in polynomial time, which is the best possible result (in the sense of approximation ratio), since the problem is strongly NP-hard.

For the case when all jobs must be allotted to a subset of machines with consecutive indices, a schedule with length at most $(1.5+\varepsilon)\mathrm{OPT}$ can be calculated in polynomial time. The previously best known results are algorithms with absolute approximation ratio 2.

Klaus Jansen, Ralf Thöle

A PTAS for Static Priority Real-Time Scheduling with Resource Augmentation

We present a polynomial time approximation scheme for the real-time scheduling problem with fixed priorities when resource augmentation is allowed. For a fixed $\varepsilon > 0$, our algorithm computes an assignment using at most $(1+\varepsilon)\mathrm{OPT} + 1$ processors in polynomial time, which is feasible if the processors have speed $1+\varepsilon$. We also show that, unless $\mathrm{P} = \mathrm{NP}$, there does not exist an asymptotic FPTAS for this problem.

Friedrich Eisenbrand, Thomas Rothvoß

Optimal Monotone Encodings

Moran, Naor and Segev have asked what is the minimal $r = r(n,k)$ for which there exists an $(n,k)$-monotone encoding of length $r$, i.e., a monotone injective function from subsets of size up to $k$ of $\{1, 2, \ldots, n\}$ to $r$ bits. Monotone encodings are relevant to the study of tamper-proof data structures and arise also in the design of broadcast schemes in certain communication networks.

To answer this question, we develop a relaxation of $k$-superimposed families, which we call $\alpha$-fraction $k$-multi-user tracing ($(k,\alpha)$-FUT families). We show that $r(n,k) = \Theta(k\log(n/k))$ by proving tight asymptotic lower and upper bounds on the size of $(k,\alpha)$-FUT families and by constructing an $(n,k)$-monotone encoding of length $O(k\log(n/k))$.

We also present an explicit construction of an $(n,2)$-monotone encoding of length $2\log n + O(1)$, which is optimal up to an additive constant.

Noga Alon, Rani Hod

Polynomial-Time Construction of Linear Network Coding

Constructing $k$ independent sessions between $k$ source-sink pairs with the help of a linear operation at each vertex is one of the most standard problems in network coding. For an unbounded $k$, this is known to be NP-hard. Very recently, a polynomial-time algorithm was given for $k = 2$ [Wang and Shroff, ISIT 2007], but the problem was open for a general (constant) $k$. This paper gives a polynomial-time algorithm for this problem under the assumption that the size of the finite field for the linear operations is bounded by a fixed constant.

Kazuo Iwama, Harumichi Nishimura, Mike Paterson, Rudy Raymond, Shigeru Yamashita

Complexity of Decoding Positive-Rate Reed-Solomon Codes

The complexity of maximum likelihood decoding of the Reed-Solomon codes $[q-1, k]_q$ is a well-known open problem. The only known result [4] in this direction states that it is at least as hard as the discrete logarithm in some cases where the information rate unfortunately goes to zero. In this paper, we remove the rate restriction and prove that the same complexity result holds for any positive information rate. In particular, this resolves an open problem left in [4], and rules out the possibility of a polynomial time algorithm for the maximum likelihood decoding problem of Reed-Solomon codes of any rate under a well-known cryptographic hardness assumption. As a side result, we give an explicit construction of Hamming balls of radius bounded away from the minimum distance, which contain exponentially many codewords for Reed-Solomon codes of any positive rate less than one. The previous constructions in [2][7] only apply to Reed-Solomon codes of diminishing rates. We also give an explicit construction of Hamming balls of relative radius less than 1 which contain subexponentially many codewords for Reed-Solomon codes of rate approaching one.

Qi Cheng, Daqing Wan

Computational Complexity of the Distance Constrained Labeling Problem for Trees (Extended Abstract)

An $L(p,q)$-labeling of a graph is a labeling of its vertices by nonnegative integers such that the labels of adjacent vertices differ by at least $p$ and the labels of vertices at distance 2 differ by at least $q$. The span of such a labeling is the maximum label used. Distance constrained labelings are an important graph theoretical approach to the Frequency Assignment Problem applied in mobile and wireless networks.

In this paper we show that determining the minimum span of an $L(p,q)$-labeling of a tree is NP-hard whenever $q$ is not a divisor of $p$. This demonstrates a significant difference in the computational complexity of this problem for $q = 1$ and $q > 1$. In addition, we give a necessary and sufficient condition for the existence of an $H(p,q)$-labeling of a tree in the case when the metric on the label space is determined by a strongly vertex transitive graph $H$. This generalizes the problem of distance constrained labeling in the cyclic metric, which was known to be solvable in polynomial time for trees.

Jiří Fiala, Petr A. Golovach, Jan Kratochvíl
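As a quick illustration of the definition above (our own sketch; the function name and adjacency-list format are hypothetical), a candidate labeling can be verified directly from the two distance constraints:

```python
def is_lpq_labeling(adj, labels, p, q):
    """Verify an L(p, q)-labeling: labels of adjacent vertices differ by
    at least p, labels at distance exactly two differ by at least q."""
    for u, nbrs in enumerate(adj):
        for v in nbrs:
            if abs(labels[u] - labels[v]) < p:
                return False
            for w in adj[v]:
                # w is at distance exactly 2 from u: not u, not adjacent to u
                if w != u and w not in nbrs and abs(labels[u] - labels[w]) < q:
                    return False
    return True

# path 0-1-2 with p = 2, q = 1: labels 0, 2, 4 satisfy both constraints
print(is_lpq_labeling([[1], [0, 2], [1]], [0, 2, 4], 2, 1))  # True
```

Verification is easy; the paper's point is that minimizing the span of such a labeling is already NP-hard on trees whenever $q$ does not divide $p$.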

The Randomized Coloring Procedure with Symmetry-Breaking

A basic randomized coloring procedure has been used in probabilistic proofs to obtain remarkably strong results on graph coloring. These results include the asymptotic version of the List Coloring Conjecture due to Kahn, the extensions of Brooks’ Theorem to sparse graphs due to Kim and Johansson, and Luby’s fast parallel and distributed algorithms for graph coloring. The most challenging aspect of a typical probabilistic proof is showing adequate concentration bounds for key random variables. In this paper, we present a simple symmetry-breaking augmentation to the randomized coloring procedure that works well in conjunction with Azuma’s Martingale Inequality to easily yield the requisite concentration bounds. We use this approach to obtain a number of results in two areas: frugal coloring and weighted equitable coloring. A $\beta$-frugal coloring of a graph $G$ is a proper vertex-coloring of $G$ in which no color appears more than $\beta$ times in any neighborhood. Let $G = (V, E)$ be a vertex-weighted graph with weight function $w: V \to [0,1]$ and let $W = \sum_{v \in V} w(v)$. A weighted equitable coloring of $G$ is a proper $k$-coloring such that the total weight of every color class is “large”, i.e., “not much smaller” than $W/k$; this notion is useful in obtaining tail bounds for sums of dependent random variables.

Sriram Pemmaraju, Aravind Srinivasan
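The frugality condition defined above is straightforward to check mechanically (our own sketch, not from the paper; names are ours):

```python
from collections import Counter

def is_frugal_coloring(adj, colors, beta):
    """Check a proper vertex-coloring for beta-frugality: no color occurs
    more than beta times in the neighborhood of any vertex."""
    for u, nbrs in enumerate(adj):
        if any(colors[u] == colors[v] for v in nbrs):
            return False              # not a proper coloring
        counts = Counter(colors[v] for v in nbrs)
        if any(c > beta for c in counts.values()):
            return False              # some color is used too often nearby
    return True

# star with center 0: two leaves share a color, so it is 2-frugal, not 1-frugal
print(is_frugal_coloring([[1, 2, 3], [0], [0], [0]], [0, 1, 1, 2], 2))  # True
```

The hard part, which the symmetry-breaking augmentation addresses, is showing that the randomized procedure produces colorings that pass this check with positive probability.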

The Local Nature of List Colorings for Graphs of High Girth

We consider list coloring problems for graphs $\mathcal{G}$ of girth larger than $c\log_{\Delta-1} n$, where $n$ and $\Delta \geq 3$ are, respectively, the order and the maximum degree of $\mathcal{G}$, and $c$ is a suitable constant. First, we determine that the edge and total list chromatic numbers of these graphs are $\chi'_l({\mathcal{G}}) = \Delta$ and $\chi''_l({\mathcal{G}}) = \Delta + 1$. This proves that the general conjectures of Bollobás and Harris (1985), Behzad and Vizing (1969) and Juvan, Mohar and Škrekovski (1998) hold for this particular class of graphs.

Moreover, our proofs exhibit a certain degree of “locality”, which we exploit to obtain an efficient distributed algorithm able to compute both kinds of optimal list colorings.

Also, using an argument similar to one of Erdős, we show that our algorithm can compute $k$-list vertex colorings of graphs having girth larger than $c\log_{k-1} n$.

Flavio Chierichetti, Andrea Vattani

Approximating List-Coloring on a Fixed Surface

It is well known that approximating the chromatic number within a factor of $n^{1-\varepsilon}$ cannot be done in polynomial time for any $\varepsilon > 0$, unless $\mathrm{coRP} = \mathrm{NP}$. Also, it is known that computing the list-chromatic number is much harder than the chromatic number (assuming that the complexity classes NP and coNP are different). In fact, the problem of deciding if a given graph is $f$-list-colorable for a function $f: V \to \{k-1, k\}$ for $k \geq 3$ is $\Pi_2^p$-complete.

In this paper, we are concerned with the following questions:

1. Given a graph embedded on a surface of bounded genus, what is its list-chromatic number?

2. Given a graph embedded on a surface of bounded genus with list-chromatic number $k$, what is the least $l$ ($l \geq k$) such that the graph can be efficiently and legally colored given a list (coloring scheme) of order $l$?

The seminal result of Thomassen [19] gives rise to answers for these problems when a given graph is planar. In fact, he gave a polynomial time algorithm to 5-list-color a planar graph. Thomassen’s result, together with the hardness result (distinguishing between 3-, 4- and 5-list-colorability is NP-complete for planar graphs and bounded genus graphs), gives an additive approximation algorithm for list-coloring planar graphs within 2 of the list-chromatic number.

Our main result is to extend this result to bounded genus graphs. In fact, our algorithm gives a list-coloring when each vertex has a list with at least $\chi_l(G) + 2$ colors available. The time complexity is $O(n)$.

It also generalizes the other deep result of Thomassen [20], who gave an additive approximation algorithm for graph-coloring bounded genus graphs within 2 of the chromatic number.

This theorem can be compared to the result by Kawarabayashi and Mohar (STOC ’06), who gave an $O(k)$-approximation algorithm for list-coloring graphs with no $K_k$-minors. For minor-closed graphs, there is a 2-approximation algorithm for graph-coloring by Demaine, Hajiaghayi and Kawarabayashi (FOCS ’05), but it seems that there is a huge gap between list-coloring and graph-coloring in minor-closed families of graphs. On the other hand, this is not the case for bounded genus graphs, as we pointed out above.

Ken-ichi Kawarabayashi

Asymptotically Optimal Hitting Sets Against Polynomials

Our main result is an efficient construction of a hitting set generator against the class of polynomials of degree dᵢ in the i-th variable. The seed length of this generator is asymptotically log D, where log D = ∑ᵢ log(dᵢ + 1) is a lower bound on the seed length of any hitting set generator against this class. Our construction is the first to achieve asymptotically optimal seed length for every choice of the parameters dᵢ. In fact, we present a nearly linear time construction with this asymptotic guarantee. Furthermore, our results extend to classes of polynomials parameterized by upper bounds on the number of nonzero terms in each variable. Underlying our constructions is a general and novel framework that exploits the product structure common to the classes of polynomials we consider. This framework allows us to obtain efficient and asymptotically optimal hitting set generators from primitives that need not be optimal or efficient by themselves.

As our main corollary, we obtain the first blackbox polynomial identity tests with an asymptotically optimal randomness consumption.

Markus Bläser, Moritz Hardt, David Steurer

The Smoothed Complexity of Edit Distance

We initiate the study of the smoothed complexity of sequence alignment by proposing a semi-random model of edit distance between two input strings, generated as follows. First, an adversary chooses two binary strings of length d and a longest common subsequence A of them. Then, every character is perturbed independently with probability p, except that A is perturbed in exactly the same way inside the two strings.

We design two efficient algorithms that compute the edit distance on smoothed instances up to a constant factor approximation. The first algorithm runs in near-linear time, namely d^{1 + ε} for any fixed ε > 0. The second one runs in time sublinear in d, assuming the edit distance is not too small. These approximation and runtime guarantees are significantly better than the bounds known for worst-case inputs, e.g. a near-linear time algorithm achieving approximation roughly d^{1/3}, due to Batu, Ergün, and Sahinalp [SODA 2006].

Our technical contribution is twofold. First, we rely on finding matches between substrings in the two strings, where two substrings are considered a match if their edit distance is relatively small, a prevailing technique in commonly used heuristics, such as PatternHunter of Ma, Tromp and Li [Bioinformatics, 2002]. Second, we effectively reduce the smoothed edit distance to a simpler variant of (worst-case) edit distance, namely, edit distance on permutations (a.k.a. Ulam’s metric). We are thus able to build on algorithms developed for the Ulam metric, whose much better algorithmic guarantees usually do not carry over to general edit distance.

Alexandr Andoni, Robert Krauthgamer
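
The semi-random generation process described above is simple to simulate. The sketch below is an illustrative reconstruction of the model (not the authors' code): positions paired by the chosen common subsequence are flipped identically in both strings, all other positions independently.

```python
import random

def smooth_instance(x, y, lcs_pairs, p, rng):
    """Semi-random model sketch: flip each binary character with
    probability p, except that positions paired by the chosen common
    subsequence (lcs_pairs) are flipped identically in both strings."""
    x, y = list(x), list(y)
    flip = lambda c: '1' if c == '0' else '0'
    paired_x = {i for i, _ in lcs_pairs}
    paired_y = {j for _, j in lcs_pairs}
    for i, j in lcs_pairs:          # perturb A the same way in both strings
        if rng.random() < p:
            x[i], y[j] = flip(x[i]), flip(y[j])
    for i in range(len(x)):         # remaining positions: independent flips
        if i not in paired_x and rng.random() < p:
            x[i] = flip(x[i])
    for j in range(len(y)):
        if j not in paired_y and rng.random() < p:
            y[j] = flip(y[j])
    return ''.join(x), ''.join(y)
```

Here `lcs_pairs` lists the matched index pairs of the adversary's common subsequence A; both input strings have length d as in the model.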

Randomized Self-assembly for Approximate Shapes

In this paper we design tile self-assembly systems which assemble arbitrarily close approximations to target squares with arbitrarily high probability. This is in contrast to previous work which has only considered deterministic assemblies of a single shape. Our technique takes advantage of the ability to assign tile concentrations to each tile type of a self-assembly system. Such an assignment yields a probability distribution over the set of possible assembled shapes. We show that by considering the assembly of close approximations to target shapes with high probability, as opposed to exact deterministic assembly, we are able to achieve significant reductions in tile complexity. In fact, we restrict ourselves to constant sized tile systems, encoding all information about the target shape into the tile concentration assignment. In practice, this offers a potentially useful tradeoff, as large libraries of particles may be infeasible or require substantial effort to create, while the replication of existing particles to adjust relative concentration may be much easier. To illustrate our technique we focus on the assembly of n × n squares, a special class of shapes whose study has proven fruitful in the development of new self-assembly systems.

Ming-Yang Kao, Robert Schweller

Succinct Data Structures for Retrieval and Approximate Membership (Extended Abstract)

The retrieval problem is the problem of associating data with keys in a set. Formally, the data structure must store a function $f\colon U\to \{0,1\}^r$ that has specified values on the elements of a given set S ⊆ U, |S| = n, but may have any value on elements outside S. All known methods (e.g. those based on perfect hash functions) induce a space overhead of Θ(n) bits over the optimum, regardless of the evaluation time. We show that for any k, query time O(k) can be achieved using space that is within a factor 1 + e^{−k} of optimal, asymptotically for large n. The time to construct the data structure is O(n), expected. If we allow logarithmic evaluation time, the additive overhead can be reduced to O(log log n) bits whp. A general reduction transfers the results on retrieval into analogous results on approximate membership, a problem traditionally addressed using Bloom filters. Thus we obtain space bounds arbitrarily close to the lower bound for this problem as well. The evaluation procedures of our data structures are extremely simple. For the results stated above we assume free access to fully random hash functions. This assumption can be justified using space o(n) to simulate full randomness on a RAM.

Martin Dietzfelbinger, Rasmus Pagh
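
For contrast with the near-optimal space bounds above, approximate membership is traditionally handled by a Bloom filter. The following is a textbook Bloom filter sketch (the classical baseline the abstract refers to, not the paper's data structure); the hashing scheme via salted SHA-256 is an illustrative choice.

```python
import hashlib

class BloomFilter:
    """Classic Bloom filter: k hash positions per key in an m-bit array.
    No false negatives; false positives occur with some probability."""
    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray((m_bits + 7) // 8)

    def _positions(self, key):
        # Derive k pseudo-independent positions by salting one hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(key))
```

A filter storing n keys with m ≈ 1.44·k·n bits has false positive rate about 2^{−k}, which is the kind of space/error trade-off the paper's reduction improves to within 1 + e^{−k} of optimal.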

Competitive Weighted Matching in Transversal Matroids

Consider a bipartite graph with a set of left-vertices and a set of right-vertices. All the edges adjacent to the same left-vertex have the same weight. We present an algorithm that, given the set of right-vertices and the number of left-vertices, processes a uniformly random permutation of the left-vertices, one left-vertex at a time. In processing a particular left-vertex, the algorithm either permanently matches the left-vertex to a thus-far unmatched right-vertex, or decides never to match the left-vertex. The weight of the matching returned by our algorithm is within a constant factor of that of a maximum weight matching.

Nedialko B. Dimitrov, C. Greg Plaxton
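
The abstract specifies the online interface but not the rule itself. A classical sample-then-threshold (secretary-style) baseline with the same interface can be sketched as follows; the observation fraction and threshold rule here are hypothetical illustrations, not the authors' constant-competitive algorithm.

```python
import random

def online_transversal_matching(neighbors, weights, rng):
    """Online interface from the abstract: left-vertices arrive in uniformly
    random order; each is either permanently matched to a free right-vertex
    or discarded forever.  All edges of left-vertex v share weight weights[v]."""
    order = list(range(len(weights)))
    rng.shuffle(order)                      # uniformly random arrival order
    sample = max(1, len(order) // 3)        # observation phase: match nobody
    threshold = max(weights[v] for v in order[:sample])
    used, matching = set(), {}
    for v in order[sample:]:
        if weights[v] < threshold:
            continue                        # decide never to match v
        for u in neighbors[v]:
            if u not in used:
                used.add(u)                 # match is permanent
                matching[v] = u
                break
    return matching
```

The returned dictionary is always a valid partial matching; the competitive-ratio analysis of the paper concerns a more careful rule than this sketch.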

Scheduling for Speed Bounded Processors

We consider online scheduling algorithms in the dynamic speed scaling model, where a processor can scale its speed between 0 and some maximum speed T. The processor uses energy at rate s^α when run at speed s, where α > 1 is a constant. Most modern processors use dynamic speed scaling to manage their energy usage. This leads to the problem of designing execution strategies that are both energy efficient, and yet have almost optimum performance.

We consider two problems in this model and give essentially optimum possible algorithms for them. In the first problem, jobs with arbitrary sizes and deadlines arrive online and the goal is to maximize the throughput, i.e. the total size of jobs completed successfully. We give an algorithm that is 4-competitive for throughput and O(1)-competitive for the energy used. This improves upon the 14-throughput-competitive algorithm of Chan et al. [10]. Our throughput guarantee is optimal, as any online algorithm must be at least 4-competitive even if the energy concern is ignored [7]. In the second problem, we consider optimizing the trade-off between the total flow time incurred and the energy consumed by the jobs. We give a 4-competitive algorithm to minimize total flow time plus energy for unweighted unit size jobs, and a (2 + o(1))α/ln α-competitive algorithm to minimize fractional weighted flow time plus energy. Prior to our work, these guarantees were known only when the processor speed was unbounded (T = ∞) [4].

Nikhil Bansal, Ho-Leung Chan, Tak-Wah Lam, Lap-Kei Lee

Faster Algorithms for Incremental Topological Ordering

We present two online algorithms for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm takes O(m^{1/2}) amortized time per arc and our second algorithm takes O(n^{2.5}/m) amortized time per arc, where n is the number of vertices and m is the total number of arcs. For sparse graphs, our O(m^{1/2}) bound improves the best previous bound by a factor of log n and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the two-way search method of previous algorithms does not require an ordered search, but can be more general, allowing us to avoid the use of heaps (priority queues). Instead, the deterministic version of our algorithm uses (approximate) median-finding; the randomized version of our algorithm uses uniform random sampling. For dense graphs, our O(n^{2.5}/m) bound improves the best previously published bound by a factor of n^{1/4} and a recent bound obtained independently of our work by a factor of log n. Our main insight is that graph search is wasteful when the graph is dense and can be avoided by searching the topological order space instead. Our algorithms extend to the maintenance of strong components, in the same asymptotic time bounds.

Bernhard Haeupler, Telikepalli Kavitha, Rogers Mathew, Siddhartha Sen, Robert E. Tarjan
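
For context on the two-way search method mentioned above, the classical local-recomputation approach in the style of Pearce and Kelly can be sketched as follows (an illustrative baseline, not the authors' algorithm): keep a position for every vertex and, when a new arc violates the current order, search forward from the head and backward from the tail within the affected window, then reassign only those positions.

```python
def add_arc(adj, radj, pos, x, y):
    """Insert arc x->y, keeping pos[] a valid topological numbering.
    Returns False (and removes the arc) if the arc would create a cycle."""
    adj[x].add(y)
    radj[y].add(x)
    if pos[x] < pos[y]:                      # order already consistent
        return True
    fwd, stack, seen = [], [y], {y}
    while stack:                             # forward from y within the window
        u = stack.pop()
        fwd.append(u)
        for w in adj[u]:
            if w == x:                       # x reachable from y: cycle
                adj[x].discard(y)
                radj[y].discard(x)
                return False
            if w not in seen and pos[w] < pos[x]:
                seen.add(w)
                stack.append(w)
    bwd, stack, seen = [], [x], {x}
    while stack:                             # backward from x within the window
        u = stack.pop()
        bwd.append(u)
        for w in radj[u]:
            if w not in seen and pos[w] > pos[y]:
                seen.add(w)
                stack.append(w)
    # Reassign affected positions: vertices reaching x come first, then
    # vertices reachable from y, preserving relative order in each group.
    slots = sorted(pos[u] for u in bwd + fwd)
    for u, s in zip(sorted(bwd, key=pos.get) + sorted(fwd, key=pos.get), slots):
        pos[u] = s
    return True
```

Both searches here are ordered by position, which is exactly the per-arc cost the paper's unordered two-way search avoids.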

Dynamic Normal Forms and Dynamic Characteristic Polynomial

We present the first fully dynamic algorithm for computing the characteristic polynomial of a matrix. In the generic symmetric case our algorithm supports rank-one updates in O(n² log n) randomized time and queries in constant time, whereas in the general case the algorithm works in O(n² k log n) randomized time, where k is the number of invariant factors of the matrix. The algorithm is based on the first dynamic algorithm for computing normal forms of a matrix, such as the Frobenius normal form or the tridiagonal symmetric form. The algorithm can be extended to solve the matrix eigenproblem with relative error 2^{−b} in O(n log² n log b) time. Furthermore, it can be used to dynamically maintain the singular value decomposition (SVD) of a generic matrix. Together with the algorithm, the hardness of the problem is studied. For the symmetric case we present an Ω(n²) lower bound for rank-one updates and an Ω(n) lower bound for element updates.

Gudmund Skovbjerg Frandsen, Piotr Sankowski

Algorithms for ε-Approximations of Terrains

Consider a point set ${\mathcal{D}}$ with a measure function $\mu : {\mathcal{D}} \to \mathcal{R}$. Let ${\mathcal{A}}$ be the set of subsets of $\mathcal{D}$ induced by containment in a shape from some geometric family (e.g. axis-aligned rectangles, half planes, balls, k-oriented polygons). We say a range space $(\mathcal{D}, \mathcal{A})$ has an ε-approximation P if

$$\max_{R \in \mathcal{A}} \left| \frac{\mu(R \cap P)}{ \mu(P)} - \frac{\mu(R \cap \mathcal{D})}{ \mu(\mathcal{D})} \right| \leq \varepsilon.$$

We describe algorithms for deterministically constructing discrete ε-approximations for continuous point sets such as distributions or terrains. Furthermore, for certain families of subsets $\mathcal{A}$, such as those described by axis-aligned rectangles, we reduce the size of the ε-approximations by almost a square root from $O(\frac{1}{\varepsilon^2} \log \frac{1}{\varepsilon})$. This is often the first step in transforming a continuous problem into a discrete one for which combinatorial techniques can be applied. We describe applications of this result in geospatial analysis, biosurveillance, and sensor networks.

Jeff M. Phillips

An Approximation Algorithm for Binary Searching in Trees

In this work we consider the problem of computing efficient strategies for searching in trees. As a generalization of the classical binary search for ordered lists, suppose one wishes to find an (unknown) specific node of a tree by asking queries to its arcs, where each query indicates the endpoint closer to the desired node. Given the likelihood of each node being the one searched for, the objective is to compute a search strategy that minimizes the expected number of queries. Practical applications of this problem include file system synchronization and software testing. Here we present a linear time algorithm which is the first constant factor approximation for this problem. This represents a significant improvement over the previous O(log n)-approximation.

Eduardo Laber, Marco Molinaro
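
The query model generalizing binary search is easy to simulate: a query on arc {u, w} reveals which endpoint is closer to the hidden node, eliminating one side of the tree. The sketch below is an illustrative uniform baseline that greedily queries a balanced edge (it minimizes worst-case queries, not the expected number under a likelihood distribution as in the paper).

```python
from collections import deque

def side_of(adj, start, u, w):
    """Vertices connected to `start` when edge {u, w} is removed."""
    seen, queue = {start}, deque([start])
    while queue:
        a = queue.popleft()
        for b in adj[a]:
            if {a, b} == {u, w} or b in seen:
                continue
            seen.add(b)
            queue.append(b)
    return seen

def search(adj, target):
    """Locate `target`; each arc query reports the endpoint closer to it."""
    candidates = set(adj)
    queries = 0
    while len(candidates) > 1:
        # Greedy strategy: query the edge splitting the candidates most evenly.
        best = None
        for u in candidates:
            for w in adj[u]:
                if w in candidates and u < w:
                    k = len(side_of(adj, u, u, w) & candidates)
                    bal = max(k, len(candidates) - k)
                    if best is None or bal < best[0]:
                        best = (bal, u, w)
        _, u, w = best
        queries += 1
        u_side = side_of(adj, u, u, w) & candidates
        # Oracle answer: u if the target lies on u's side, else w.
        candidates = u_side if target in u_side else candidates - u_side
    return next(iter(candidates)), queries
```

On a path this degenerates to ordinary binary search, using about log₂ n queries.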

Algorithms for 2-Route Cut Problems

In this paper we study approximation algorithms for multi-route cut problems in undirected graphs. In these problems the goal is to find a minimum cost set of edges to be removed from a given graph such that the edge-connectivity (or node-connectivity) between certain pairs of nodes is reduced below a given threshold K. In the usual cut problems the edge-connectivity is required to be reduced below 1 (i.e. the pairs are disconnected). We consider the case of K = 2 and obtain poly-logarithmic approximation algorithms for fundamental cut problems including single-source, multiway-cut, multicut, and sparsest cut. These cut problems are dual to multi-route flows, which are of interest in fault-tolerant network flows. Our results show that the flow-cut gap between 2-route cuts and 2-route flows is poly-logarithmic in undirected graphs with arbitrary capacities. 2-route cuts are also closely related to well-studied feedback problems, and we obtain results on some new variants. Multi-route cuts pose interesting algorithmic challenges. The new techniques developed here are of independent technical interest, and may have applications to other cut and partitioning problems.

Chandra Chekuri, Sanjeev Khanna

The Two-Edge Connectivity Survivable Network Problem in Planar Graphs

Consider the following problem: given a graph with edge-weights and a subset Q of vertices, find a minimum-weight subgraph in which there are two edge-disjoint paths connecting every pair of vertices in Q. The problem is a failure-resilient analog of the Steiner tree problem, and arises in telecommunications applications. A more general formulation, also employed in telecommunications optimization, assigns a number (or requirement) rᵥ ∈ {0,1,2} to each vertex v in the graph; for each pair u, v of vertices, the solution network is required to contain min{rᵤ, rᵥ} edge-disjoint u-to-v paths.

We address the problem in planar graphs, considering a popular relaxation in which the solution is allowed to use multiple copies of the input-graph edges (paying separately for each copy). The problem is SNP-hard in general graphs and NP-hard in planar graphs. We give the first polynomial-time approximation scheme in planar graphs. The running time is O(n log n).

Under the additional restriction that the requirements are in {0,2} for vertices on the boundary of a single face of a planar graph, we give a linear-time algorithm to find the optimal solution.

Efficiently Testing Sparse GF(2) Polynomials

We give the first algorithm that is both query-efficient and time-efficient for testing whether an unknown function f : {0,1}ⁿ → {0,1} is an s-sparse GF(2) polynomial versus ε-far from every such polynomial. Our algorithm makes poly(s, 1/ε) black-box queries to f and runs in time n · poly(s, 1/ε). The only previous algorithm for this testing problem [DLM⁺07] used poly(s, 1/ε) queries, but had running time exponential in s and super-polynomial in 1/ε.

Our approach significantly extends the “testing by implicit learning” methodology of [DLM⁺07]. The learning component of that earlier work was a brute-force exhaustive search over a concept class to find a hypothesis consistent with a sample of random examples. In this work, the learning component is a sophisticated exact learning algorithm for sparse GF(2) polynomials due to Schapire and Sellie [SS96]. A crucial element of this work, which enables us to simulate the membership queries required by [SS96], is an analysis establishing new properties of how sparse GF(2) polynomials simplify under certain restrictions of “low-influence” sets of variables.

Ilias Diakonikolas, Homin K. Lee, Kevin Matulef, Rocco A. Servedio, Andrew Wan

Testing Properties of Sets of Points in Metric Spaces

Given query access to a set of points in a metric space, we wish to quickly check if it has a specific property. More precisely, we wish to distinguish sets of points that have the property from those that need to have at least an ε fraction of points modified to achieve it.

We show one-sided error testers that immediately follow from known characterizations of metric spaces. Among other things, we give testers for tree metrics and ultrametrics which are optimal among one-sided error testers. Our tester for embeddability into the line is optimal even among two-sided error testers, and runs in sublinear time. We complement our algorithms with several lower bounds. For instance, we present lower bounds for testing dimensionality reduction in the ℓ₁ and ℓ∞ metrics, which improve upon lower bounds given by Krauthgamer and Sasson (SODA 2003). All our lower bounds are constructed by using a generic approach.

We also look at the problem from a streaming perspective, and give a method for converting each of our property testers into a streaming tester.

Krzysztof Onak

An Expansion Tester for Bounded Degree Graphs

We consider the problem of testing graph expansion (either vertex or edge) in the bounded degree model [10]. We give a property tester that, given a graph with degree bound d, an expansion bound α, and a parameter ε > 0, accepts the graph with high probability if its expansion is more than α, and rejects it with high probability if it is ε-far from any graph with expansion α′ with degree bound d, where α′ < α is a function of α. For edge expansion, we obtain $\alpha' = \Omega(\frac{\alpha^2}{d})$, and for vertex expansion, we obtain $\alpha' = \Omega(\frac{\alpha^2}{d^2})$. In either case, the algorithm runs in time $\tilde{O}(\frac{n^{(1+\mu)/2}d^2}{\epsilon\alpha^2})$ for any given constant μ > 0.

Property Testing on k-Vertex-Connectivity of Graphs

We present an algorithm for testing the k-vertex-connectivity of graphs with given maximum degree. The time complexity of the algorithm is independent of the number of vertices and edges of the graph. A graph G with n vertices and maximum degree at most d is called ε-far from k-vertex-connectivity when at least $\frac{\epsilon dn}{2}$ edges must be added to or removed from G to obtain a k-vertex-connected graph with maximum degree at most d. The algorithm always accepts every graph that is k-vertex-connected and rejects every graph that is ε-far from k-vertex-connectivity with a probability of at least 2/3. The algorithm runs in ${O\left(d\left(\frac{c}{\epsilon d}\right)^{k}\log\frac{1}{\epsilon d}\right)}$ time (c > 1 is a constant) for given (k − 1)-vertex-connected graphs, and in ${O\left(d\left(\frac{ck}{\epsilon d}\right)^{k}\log\frac{k}{\epsilon d}\right)}$ time (c > 1 is a constant) for general graphs. It is the first constant-time k-vertex-connectivity testing algorithm for general k ≥ 4.

Yuichi Yoshida, Hiro Ito

Almost 2-SAT Is Fixed-Parameter Tractable (Extended Abstract)

We consider the following problem. Given a 2-cnf formula, is it possible to remove at most k clauses so that the resulting 2-cnf formula is satisfiable? This problem is known to different research communities in theoretical computer science under the names Almost 2-SAT, All-but-k 2-SAT, 2-cnf deletion, and 2-SAT deletion. The status of the fixed-parameter tractability of this problem is a long-standing open question in the area of parameterized complexity. We resolve this open question by proposing an algorithm that solves this problem in O(15^k · k · m³) time, showing that this problem is fixed-parameter tractable.

Igor Razgon, Barry O’Sullivan

On Problems without Polynomial Kernels (Extended Abstract)

Kernelization is a central technique used in parameterized algorithms, and in other approaches for coping with NP-hard problems. In this paper, we introduce a new method which allows us to show that many problems do not have polynomial size kernels under reasonable complexity-theoretic assumptions. These problems include k-Path, k-Cycle, k-Exact Cycle, k-Short Cheap Tour, k-Graph Minor Order Test, k-Cutwidth, k-Search Number, k-Pathwidth, k-Treewidth, k-Branchwidth, and several optimization problems parameterized by treewidth or cliquewidth.

Hans L. Bodlaender, Rodney G. Downey, Michael R. Fellows, Danny Hermelin

Faster Algebraic Algorithms for Path and Packing Problems

We study the problem of deciding whether an n-variate polynomial, presented as an arithmetic circuit G, contains a degree-k square-free term with an odd coefficient. We show that if G can be evaluated over the integers modulo 2^{k+1} in time t and space s, the problem can be decided with constant probability in O((kn + t)2^k) time and O(kn + s) space. Based on this, we present new and faster algorithms for two well studied problems: (i) an O*(2^{mk}) algorithm for the m-set k-packing problem and (ii) an O*(2^{3k/2}) algorithm for the simple k-path problem, or an O*(2^k) algorithm if the graph has an induced k-subgraph with an odd number of Hamiltonian paths. Our algorithms use poly(n) random bits, compared to the 2^{O(k)} random bits required in prior algorithms, while having similar low space requirements.

Ioannis Koutis

Understanding the Complexity of Induced Subgraph Isomorphisms

We study left-hand side restrictions of the induced subgraph isomorphism problem: fixing a class 𝒞 of graphs, for given graphs G from 𝒞 and arbitrary H we ask for induced subgraphs of H isomorphic to G.

For the homomorphism problem this kind of restriction has been studied by Grohe and by Dalmau, Kolaitis and Vardi for the decision problem, and by Dalmau and Jonsson for its counting variant.

We give a dichotomy result for both variants of the induced subgraph isomorphism problem. Under some assumption from parameterized complexity theory, these problems are solvable in polynomial time if and only if 𝒞 contains no arbitrarily large graphs.

All classifications are given by means of parameterized complexity. The results are presented for arbitrary structures of bounded arity, which implies, for example, analogous results for directed graphs.

Furthermore, we show that no such dichotomy is possible in the sense of classical complexity. That is, if P ≠ NP, there are classes 𝒞 such that the induced subgraph isomorphism problem on 𝒞 is neither in P nor NP-complete. This argument may be of independent interest, because it is applicable to various parameterized problems.

Yijia Chen, Marc Thurley, Mark Weyer

Spanners in Sparse Graphs

A t-spanner of a graph G is a spanning subgraph S in which the distance between every pair of vertices is at most t times their distance in G. If S is required to be a tree then S is called a tree t-spanner of G. In 1998, Fekete and Kremer showed that on unweighted planar graphs the tree t-spanner problem (the problem of deciding whether G has a tree t-spanner) is polynomial time solvable for t ≤ 3 and is NP-complete when t is part of the input. They also left as an open problem whether the tree t-spanner problem is polynomial time solvable for every fixed t ≥ 4. In this work we resolve this open problem and extend the solution in several directions. We show that for every fixed t, it is possible in polynomial time not only to decide if a planar graph G has a tree t-spanner, but also to decide if G has a t-spanner of bounded treewidth. Moreover, for all fixed values of t and k, the problem, for a given planar graph G, to decide if G has a t-spanner of treewidth at most k, is not only polynomial time solvable, but is fixed parameter tractable (with k and t being the parameters). In particular, the running time of our algorithm is linear with respect to the size of G. We extend this result from planar graphs to a much more general class of sparse graphs containing graphs of bounded genus. An apex graph is a graph obtained from a planar graph G by adding a vertex and making it adjacent to some vertices of G. We show that the problem of finding a t-spanner of treewidth k is fixed parameter tractable on graphs that do not contain some fixed apex graph as a minor, i.e. on apex-minor-free graphs. Graphs of bounded treewidth are sparse graphs, and our technique can be used to settle the complexity of the parameterized version of the sparse t-spanner problem, where for given t and m one must decide whether an n-vertex graph has a t-spanner with at most n − 1 + m edges. Our results imply that the sparse t-spanner problem is fixed parameter tractable on apex-minor-free graphs with t and m being the parameters. Finally we show that the tractability border of the t-spanner problem cannot be extended beyond the class of apex-minor-free graphs. In particular, we prove that for every t ≥ 4, the problem of finding a tree t-spanner is NP-complete on K₆-minor-free graphs. Thus our results are tight, in the sense that the restriction of the input graph being apex-minor-free cannot be replaced by H-minor-free for some non-apex fixed graph H.

Feodor F. Dragan, Fedor V. Fomin, Petr A. Golovach

Distance Oracles for Unweighted Graphs: Breaking the Quadratic Barrier with Constant Additive Error

Thorup and Zwick, in the seminal paper [Journal of the ACM, 52(1), 2005, pp. 1-24], showed that a weighted undirected graph on n vertices can be preprocessed in subcubic time to design a data structure which occupies only subquadratic space, and yet, for any pair of vertices, can answer a distance query approximately in constant time. The data structure is termed an approximate distance oracle. Subsequently, there has been improvement in their preprocessing time, and presently the best known algorithms [4,3] achieve expected O(n²) preprocessing time for these oracles. For a class of graphs, these algorithms indeed run in Θ(n²) time. In this paper, we are able to break this quadratic barrier at the expense of introducing a (small) constant additive error for unweighted graphs. In achieving this goal, we have been able to preserve the optimal size-stretch trade-offs of the oracles. One of our algorithms can be extended to weighted graphs, where the additive error becomes 2 · w_max(u, v); here w_max(u, v) is the weight of the heaviest edge on the shortest path between vertices u, v.

Surender Baswana, Akshay Gaur, Sandeep Sen, Jayant Upadhyay

All-Pairs Shortest Paths with a Sublinear Additive Error

We show that for every 0 ≤ p ≤ 1 there is an algorithm with running time of O(n^{2.575 − p/(7.4 − 2.3p)}) that, given a directed graph with small positive integer weights, estimates the length of the shortest path between every pair of vertices u, v in the graph to within an additive error of δ^p(u, v), where δ(u, v) is the exact length of the shortest path between u and v. This algorithm runs faster than the fastest algorithm for computing exact shortest paths for any 0 < p ≤ 1.

Previously, the only way to beat the running time of the exact shortest path algorithms was by applying an algorithm of Zwick [FOCS '98] that approximates the shortest path distances within a multiplicative error of (1 + ε). Our algorithm thus gives a smooth qualitative and quantitative transition between the fastest exact shortest paths algorithm and the fastest approximation algorithm with a linear additive error. In fact, the main ingredient we need in order to obtain the above result, which is also interesting in its own right, is an algorithm for computing (1 + ε) multiplicative approximations for the shortest paths, whose running time is faster than the running time of Zwick's approximation algorithm when ε ≪ 1 and the graph has small integer weights.

Liam Roditty, Asaf Shapira

Simpler Linear-Time Modular Decomposition Via Recursive Factorizing Permutations

Modular decomposition is fundamental for many important problems in algorithmic graph theory including transitive orientation, the recognition of several classes of graphs, and certain combinatorial optimization problems. Accordingly, there has been a drive towards a practical, linear-time algorithm for the problem. This paper presents such an algorithm: a linear-time modular decomposition algorithm that proceeds in four straightforward steps. This is achieved by introducing the notion of factorizing permutations to an earlier recursive approach. The only data structure used is an ordered list of trees, and each of the four steps amounts to simple traversals of these trees. Previous algorithms were either exceedingly complicated or resorted to impractical data structures.

Marc Tedder, Derek Corneil, Michel Habib, Christophe Paul

The Complexity of the Counting Constraint Satisfaction Problem

The Counting Constraint Satisfaction Problem (${\rm \#CSP}(\mathcal{H})$) over a finite relational structure $\mathcal{H}$ can be expressed as follows: given a relational structure $\mathcal{G}$ over the same vocabulary, determine the number of homomorphisms from $\mathcal{G}$ to $\mathcal{H}$. In this paper we characterize relational structures $\mathcal{H}$ for which ${\rm \#CSP}(\mathcal{H})$ can be solved in polynomial time and prove that for all other structures the problem is #P-complete.

Andrei A. Bulatov
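
The counting problem itself is easy to state programmatically. For structures with a single binary relation (digraphs), a brute-force homomorphism counter looks like the sketch below (exponential time, for illustration only; the dichotomy above characterizes exactly when something better is possible).

```python
from itertools import product

def count_homomorphisms(g_nodes, g_edges, h_nodes, h_edges):
    """#Hom(G, H) for structures with one binary relation (digraphs):
    count the maps f with (f(u), f(v)) in E(H) for every (u, v) in E(G)."""
    h_edge_set = set(h_edges)
    total = 0
    for image in product(h_nodes, repeat=len(g_nodes)):
        f = dict(zip(g_nodes, image))
        if all((f[u], f[v]) in h_edge_set for u, v in g_edges):
            total += 1
    return total
```

For example, counting homomorphisms into a fixed 2-element structure encoding 2-coloring counts the proper 2-colorings of the input graph.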

On the Hardness of Losing Weight

We study the complexity of local search for the Boolean constraint satisfaction problem (CSP), in the following form: given a CSP instance, that is, a collection of constraints, and a solution to it, the question is whether there is a better (lighter, i.e., having strictly less Hamming weight) solution within a given distance from the initial solution. We classify the complexity, both classical and parameterized, of such problems by a Schaefer-style dichotomy result, that is, with a restricted set of allowed types of constraints. Our results show that there is a considerable amount of such problems that are NP-hard, but fixed-parameter tractable when parameterized by the distance.

Andrei Krokhin, Dániel Marx

Product Theorems Via Semidefinite Programming

The tendency of semidefinite programs to compose perfectly under product has been exploited many times in complexity theory: for example, by Lovász to determine the Shannon capacity of the pentagon; to show a direct sum theorem for non-deterministic communication complexity and direct product theorems for discrepancy; and in interactive proof systems to show parallel repetition theorems for restricted classes of games. Despite all these examples of product theorems—some going back nearly thirty years—it was only recently that Mittal and Szegedy began to develop a general theory to explain when and why semidefinite programs behave perfectly under product. This theory captured many examples in the literature, but there were also some notable exceptions which it could not explain—namely, an early parallel repetition result of Feige and Lovász, and a direct product theorem for the discrepancy method of communication complexity by Lee, Shraibman, and Špalek. We extend the theory of Mittal and Szegedy to explain these cases as well. Indeed, to the best of our knowledge, our theory captures all examples of semidefinite product theorems in the literature.

Troy Lee, Rajat Mittal

Sound 3-Query PCPPs Are Long

We initiate the study of the tradeoff between the length of a probabilistically checkable proof of proximity (PCPP) and the maximal soundness that can be guaranteed by a 3-query verifier with oracle access to the proof. Our main observation is that a verifier limited to querying a short proof cannot obtain the same soundness as that obtained by a verifier querying a long proof. Moreover, we quantify the soundness deficiency as a function of the proof length and show that any verifier obtaining “best possible” soundness must query an exponentially long proof.

Eli Ben-Sasson, Prahladh Harsha, Oded Lachish, Arie Matsliah

Approximative Methods for Monotone Systems of Min-Max-Polynomial Equations

A monotone system of min-max-polynomial equations (min-max-MSPE) over the variables $X_1,\ldots,X_n$ has for every $i$ exactly one equation of the form $X_i = f_i(X_1,\ldots,X_n)$, where each $f_i(X_1,\ldots,X_n)$ is an expression built up from polynomials with non-negative coefficients and minimum- and maximum-operators. The question of computing least solutions of min-max-MSPEs arises naturally in the analysis of recursive stochastic games [5,6,14]. Min-max-MSPEs generalize MSPEs, for which convergence speed results for Newton’s method are established in [11,3]. We present the first methods for approximatively computing least solutions of min-max-MSPEs which converge at least linearly. Whereas the first one converges faster, a single step of the second method is cheaper. Furthermore, we compute $\varepsilon$-optimal positional strategies for the player who wants to maximize the outcome in a recursive stochastic game.
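For concreteness, a minimal two-variable instance of such a system might look as follows (a hypothetical illustration of the definition above, not an example from the paper):

```latex
\begin{align*}
X_1 &= \min\!\left(\tfrac{1}{2}X_1 X_2 + \tfrac{1}{2},\; X_2 + \tfrac{1}{4}\right)\\
X_2 &= \max\!\left(\tfrac{1}{3}X_1^2 + \tfrac{1}{3},\; \tfrac{1}{2}X_2\right)
\end{align*}
```

Each right-hand side combines polynomials with non-negative coefficients using min and max; the least non-negative solution of such a system is the object the approximation methods compute.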

Javier Esparza, Thomas Gawlitza, Stefan Kiefer, Helmut Seidl

Recursive Stochastic Games with Positive Rewards

We study the complexity of a class of Markov decision processes and, more generally, stochastic games, called 1-exit Recursive Markov Decision Processes (1-RMDPs) and 1-exit Recursive Simple Stochastic Games (1-RSSGs), with strictly positive rewards. These are a class of finitely presented countable-state zero-sum stochastic games with a total expected reward objective. They subsume standard finite-state MDPs and Condon’s simple stochastic games and correspond to optimization and game versions of several classic stochastic models, with rewards. Such stochastic models arise naturally as models of probabilistic procedural programs with recursion, and the problems we address are motivated by the goal of analyzing the optimal/pessimal expected running time in such a setting.

We give polynomial time algorithms for 1-exit Recursive Markov decision processes (1-RMDPs) with positive rewards. Specifically, we show that the exact optimal value of both maximizing and minimizing 1-RMDPs with positive rewards can be computed in polynomial time (this value may be ∞). For two-player 1-RSSGs with positive rewards, we prove a “stackless and memoryless” determinacy result, and show that deciding whether the game value is at least a given value $r$ is in NP ∩ coNP. We also prove that a simultaneous strategy improvement algorithm converges to the value and optimal strategies for these stochastic games. We observe that 1-RSSG positive reward games are “harder” than finite-state SSGs in several senses.

Kousha Etessami, Dominik Wojtczak, Mihalis Yannakakis

Complementation, Disambiguation, and Determinization of Büchi Automata Unified

We present a uniform framework for (1) complementing Büchi automata, (2) turning Büchi automata into equivalent unambiguous Büchi automata, and (3) turning Büchi automata into equivalent deterministic automata. We present the first solution to (2) which does not make use of McNaughton’s theorem (determinization) and an intuitive and conceptually simple solution to (3).

Our results are based on Muller and Schupp’s procedure for turning alternating tree automata into non-deterministic ones.

Detlef Kähler, Thomas Wilke

Tree Projections: Hypergraph Games and Minimality

A hypergraph-game characterization is provided for hypergraph tree projections (TPs) and, hence, for the special cases of generalized and fractional hypertree decompositions, where such a characterization was missing and asked for. In this game, as in the Robber and Cops game characterizing tree decompositions, the existence of winning strategies implies the existence of monotone ones, which are moreover preferable because they correspond to minimal tree projections. In fact, it is shown that minimal TPs enjoy a number of nice properties, such as the same kind of connection property as (minimal) tree decompositions of graphs. Finally, it is shown that this property is in a sense tight, by giving a negative answer to an open question about a slightly stronger notion of connection property, defined to speed up the computation of hypertree decompositions.

Gianluigi Greco, Francesco Scarcello

Explicit Non-adaptive Combinatorial Group Testing Schemes

Group testing is a long-studied problem in combinatorics: a small set of $r$ ill people should be identified out of a whole population of $n$ people by using only queries (tests) of the form “Does set X contain an ill person?”. In this paper we provide an explicit construction of a testing scheme which is better (smaller) than any known explicit construction. This scheme has ${\it \Theta}\left({\min[r^2 \ln n,n]}\right)$ tests, which is as many as the best non-explicit schemes have. In our construction we use a fact that may be of independent interest: linear error-correcting codes with parameters $[m, k, \delta m]_q$ meeting the Gilbert-Varshamov bound may be constructed quite efficiently, in ${\it \Theta}\left({q^k m}\right)$ time.
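As a sketch of the non-adaptive query model: once the pools (tests) are fixed, the standard decoder simply eliminates everyone who appears in a negative pool. The random pools below are illustrative only; they are not the paper's explicit code-based construction, and a random design this small is not guaranteed to identify the ill set exactly.

```python
import random

def decode(tests, outcomes, n):
    """Non-adaptive group-testing decoder: anyone appearing in a
    negative (all-healthy) pool is certainly healthy; for an
    r-disjunct test matrix the survivors are exactly the r ill people."""
    candidates = set(range(n))
    for pool, positive in zip(tests, outcomes):
        if not positive:
            candidates -= set(pool)
    return candidates

# Toy instance: n = 12 people, the ill set is {3, 7}.
random.seed(0)
n, ill = 12, {3, 7}
tests = [random.sample(range(n), 4) for _ in range(16)]
outcomes = [bool(set(pool) & ill) for pool in tests]
print(ill <= decode(tests, outcomes, n))  # → True: ill people are never eliminated
```

The guarantee printed at the end holds by construction: any pool containing an ill person tests positive, so ill people survive every elimination step.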

Ely Porat, Amir Rothschild

Tight Lower Bounds for Multi-pass Stream Computation Via Pass Elimination

There is a natural relationship between lower bounds in the multi-pass stream model and lower bounds in multi-round communication. However, this connection is less understood than the connection between single-pass stream computation and one-way communication. In this paper, we consider data-stream problems for which reductions from natural multi-round communication problems do not yield tight bounds or do not apply. While lower bounds are known for some of these data-stream problems, many of these only apply to deterministic or comparison-based algorithms, whereas the lower bounds we present apply to any (possibly randomized) algorithms. Our results are particularly relevant to evaluating functions that are dependent on the ordering of the stream, such as the longest increasing subsequence and a variant of tree pointer jumping in which pointers are revealed according to a post-order traversal.

Our approach is based on establishing “pass-elimination” results analogous to the round-elimination results of Miltersen et al. [23] and Sen [29]. We demonstrate our approach by proving tight bounds for a range of data-stream problems, including finding the longest increasing subsequence (a problem that has recently become very popular [22,16,30,15,12]; we resolve an open question of [30]), constructing convex hulls and fixed-dimensional linear programming (generalizing results of [8] to randomized algorithms), and the “greater-than” problem (improving results of [9]). These results also clarify one of the main messages of our work: sometimes it is necessary to prove lower bounds directly for stream computation, rather than proving a lower bound for a communication problem and then constructing a reduction to a data-stream problem.

Sudipto Guha, Andrew McGregor

Impossibility of a Quantum Speed-Up with a Faulty Oracle

We consider Grover’s unstructured search problem in the setting where each oracle call has some small probability of failing. We show that no quantum speed-up is possible in this case.

Oded Regev, Liron Schiff

Superpolynomial Speedups Based on Almost Any Quantum Circuit

The first separation between quantum polynomial time and classical bounded-error polynomial time was due to Bernstein and Vazirani in 1993. They first showed an $O(1)$ vs. $\Omega(n)$ quantum-classical oracle separation based on the quantum Hadamard transform, and then showed how to amplify this into an $n^{O(1)}$-time quantum algorithm and an $n^{\Omega(\log n)}$ classical query lower bound.

We generalize both aspects of this speedup. We show that a wide class of unitary circuits (which we call dispersing circuits) can be used in place of Hadamards to obtain an $O(1)$ vs. $\Omega(n)$ separation. The class of dispersing circuits includes all quantum Fourier transforms (including over nonabelian groups) as well as nearly all sufficiently long random circuits. Second, we give a general method for amplifying quantum-classical separations that allows us to achieve an $n^{O(1)}$ vs. $n^{\Omega(\log n)}$ separation from any dispersing circuit.

Sean Hallgren, Aram W. Harrow

The Speed of Convergence in Congestion Games under Best-Response Dynamics

We investigate the speed of convergence of congestion games with linear latency functions under best response dynamics. Namely, we estimate the social performance achieved after a limited number of rounds, during each of which every player performs one best response move. In particular, we show that the price of anarchy achieved after $k$ rounds, defined as the highest possible ratio between the total latency cost, that is, the sum of all players' latencies, and the minimum possible cost, is $O(\sqrt[2^{k-1}]{n})$, where $n$ is the number of players. For constant values of $k$ such a bound asymptotically matches the $\Omega(\sqrt[2^{k-1}]{n}/k)$ lower bound that we determine as a refinement of the one in [7]. As a consequence, we prove that on the order of $\log\log n$ rounds are not only necessary but also sufficient to achieve a constant price of anarchy, i.e. one comparable to that at a Nash equilibrium. This result is particularly relevant, as reaching an equilibrium may require a number of rounds exponential in $n$. We then provide a new lower bound of $\Omega(\sqrt[2^k-1]{n})$ for load balancing games, that is, congestion games in which every strategy consists of a single resource, thus showing that a number of rounds proportional to $\log\log n$ is necessary and sufficient also under this restriction.

Our results thus settle the important open question of the polynomial speed of convergence of linear congestion games to constant-factor solutions.
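The sufficiency of roughly $\log\log n$ rounds can be sanity-checked directly against the upper bound (a quick calculation, with logarithms base 2, not taken from the paper): substituting $k = \log\log n + 1$ gives

```latex
O\!\left(\sqrt[2^{k-1}]{n}\right)
  = O\!\left(n^{1/2^{\log\log n}}\right)
  = O\!\left(n^{1/\log n}\right)
  = O(2) = O(1),
```

a constant price of anarchy.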

Angelo Fanelli, Michele Flammini, Luca Moscardelli

Uniform Budgets and the Envy-Free Pricing Problem

We consider the unit-demand min-buying pricing problem, in which we want to compute revenue-maximizing prices for a set of products $\mathcal{P}$ assuming that each consumer from a set of consumer samples $\mathcal{C}$ will purchase her cheapest affordable product once prices are fixed. We focus on the special uniform-budget case, in which every consumer has only a single non-zero budget for some set of products. This constitutes a special case also of the unit-demand envy-free pricing problem.

We show that, assuming specific hardness of the balanced bipartite independent set problem in constant degree graphs or hardness of refuting random 3CNF formulas, the unit-demand min-buying pricing problem with uniform budgets cannot be approximated in polynomial time within $\mathcal{O}(\log^{\varepsilon} |\mathcal{C}|)$ for some $\varepsilon > 0$. This is the first result giving evidence that unit-demand envy-free pricing, as well, might be hard to approximate essentially better than within the known logarithmic ratio.

We then introduce a slightly more general problem definition in which consumers are given as an explicit probability distribution and show that in this case the envy-free pricing problem is inapproximable within $\mathcal{O}(|\mathcal{P}|^{\varepsilon})$ assuming NP $\nsubseteq \bigcap_{\delta > 0}$ BPTIME($2^{\mathcal{O}(n^{\delta})}$). Finally, we briefly argue that all the results apply to the important setting of pricing with single-minded consumers as well.
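To pin down the min-buying semantics, here is a small illustrative sketch (the product names, budgets, and prices are hypothetical, not taken from the paper): each consumer has a single budget and buys the cheapest product she can afford from her set.

```python
def min_buying_revenue(prices, consumers):
    """Revenue under min-buying semantics: each consumer is a pair
    (budget, products) and buys her cheapest product priced at or
    below her budget; consumers who can afford nothing buy nothing."""
    revenue = 0.0
    for budget, products in consumers:
        affordable = [prices[p] for p in products if prices[p] <= budget]
        if affordable:
            revenue += min(affordable)
    return revenue

prices = {"a": 3.0, "b": 5.0}
consumers = [(4.0, ["a", "b"]),  # buys "a" at 3.0 (cheapest affordable)
             (6.0, ["b"]),       # buys "b" at 5.0
             (2.0, ["a"])]       # can afford nothing
print(min_buying_revenue(prices, consumers))  # → 8.0
```

The pricing problem asks for the price vector maximizing this quantity, which is what the hardness results above concern.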

Patrick Briest

Bayesian Combinatorial Auctions

We study the following Bayesian setting: $m$ items are sold to $n$ selfish bidders in $m$ independent second-price auctions. Each bidder has a private valuation function that expresses complex preferences over all subsets of items. Bidders only have beliefs about the valuation functions of the other bidders, in the form of probability distributions. The objective is to allocate the items to the bidders in a way that provides a good approximation to the optimal social welfare value. We show that if bidders have submodular valuation functions, then every Bayesian Nash equilibrium of the resulting game provides a 2-approximation to the optimal social welfare. Moreover, we show that in the full-information game a pure Nash equilibrium always exists and can be found in time that is polynomial in both $m$ and $n$.

George Christodoulou, Annamária Kovács, Michael Schapira

Truthful Unification Framework for Packing Integer Programs with Choices

One of the most interesting research directions within the field of algorithmic mechanism design revolves around the study of hard combinatorial optimization problems. In this setting, many common algorithmic techniques cannot be utilized, as they violate certain monotonicity properties that are imperative for truthfulness. Consequently, it seems essential to develop alternative methods which can underlie truthful mechanisms. In particular, since many problems can be formulated as instances of integer linear programs, devising techniques that apply to integer linear programs is of significant importance.

In this paper, we focus our attention on packing integer programs with choices. Our main findings can be briefly summarized as follows:

1. We develop a framework which can be used as a building block to approximately solve packing integer programs with choices. The framework is built upon a novel unification technique that approximately solves an instance of a packing integer program with choices, given algorithms that approximately solve sub-instances of it. The framework is deterministic and monotone, and hence can underlie truthful deterministic mechanisms.

2. We demonstrate the applicability of the framework by applying it to several NP-hard problems. In particular, we focus on the bandwidth allocation problem in tree networks, and the multiple knapsack problem on bipartite graphs. Notably, using this framework, we attain the first non-trivial approximation guarantees for these problems in a game-theoretic setting.

Yossi Azar, Iftah Gamzu

Upper Bounds on the Noise Threshold for Fault-Tolerant Quantum Computing

We prove new upper bounds on the tolerable level of noise in a quantum circuit. Our circuits consist of unitary $k$-qubit gates, each of whose input wires is subject to depolarizing noise of strength $p$, and arbitrary one-qubit gates that are essentially noise-free. We assume the output of the circuit is the result of measuring some designated qubit in the final state. Our main result is that for $p > 1-{\it \Theta}(1/\sqrt{k})$, the output of any such circuit of large enough depth is essentially independent of its input, thereby making the circuit useless. For the important special case of $k = 2$, our bound is $p > 35.7\%$. Moreover, if the only gate on more than one qubit is the CNOT, then our bound becomes $29.3\%$. These bounds on $p$ are notably better than previous bounds, yet incomparable because of the somewhat different circuit model that we are using. Our main technique is a Pauli basis decomposition, which we believe should lead to further progress in deriving such bounds.

Julia Kempe, Oded Regev, Falk Unger, Ronald de Wolf

Finding Optimal Flows Efficiently

Among the models of quantum computation, the One-way Quantum Computer [11,12] is one of the most promising proposals for physical realisation [13], and opens new perspectives for parallelisation by taking advantage of quantum entanglement [2]. Since a One-way QC is based on quantum measurement, which is a fundamentally nondeterministic evolution, a sufficient condition for global determinism was introduced in [4] as the existence of a causal flow in a graph that underlies the computation. An $O(n^3)$ algorithm was introduced in [6] for finding such a causal flow when the numbers of output and input vertices in the graph are equal; otherwise no polynomial-time algorithm was known for deciding whether a graph has a causal flow. Our main contribution is an $O(m)$ algorithm for finding a causal flow, if one exists, where $m$ is the number of edges of the graph, whatever the numbers of input and output vertices are. This answers the open question stated by Danos and Kashefi [4] and by de Beaudrap [6]. Moreover, we prove that our algorithm produces a flow of minimal depth.

Whereas the existence of a causal flow is a sufficient condition for determinism, it is not a necessary one. A weaker version of the causal flow, called gflow (generalised flow), was introduced in [3] and proved to be a necessary and sufficient condition for a family of deterministic computations. Moreover, the depth of the quantum computation is upper bounded by the depth of the gflow. However, the existence of a polynomial-time algorithm that finds a gflow was stated as an open question in [3]. In this paper we answer it positively, with a polynomial-time algorithm that outputs an optimal gflow of a given graph and thus finds an optimal correction strategy for the nondeterministic evolution due to measurements.

Mehdi Mhalla, Simon Perdrix

Optimal Quantum Adversary Lower Bounds for Ordered Search

The goal of the ordered search problem is to find a particular item in an ordered list of $n$ items. Using the adversary method, Høyer, Neerbek, and Shi proved a quantum lower bound for this problem of $\frac{1}{\pi} \ln n + {\it \Theta}(1)$. Here, we find the exact value of the best possible quantum adversary lower bound for a symmetrized version of ordered search (whose query complexity differs from that of the original problem by at most 1). Thus we show that the best lower bound for ordered search that can be proved by the adversary method is $\frac{1}{\pi} \ln n + O(1)$. Furthermore, we show that this remains true for the generalized adversary method allowing negative weights.

Andrew M. Childs, Troy Lee

Quantum SAT for a Qutrit-Cinquit Pair Is QMA$_1$-Complete

We show that the quantum SAT problem is QMA$_1$-complete when restricted to interactions between a three-dimensional particle (a qutrit) and a five-dimensional particle (a cinquit). The best previously known result is for particles of dimensions 4 and 9. The main novel ingredient of our proof is a Hamiltonian construction named the Triangle Hamiltonian. It allows one to verify the application of a 2-qubit CNOT gate without explicitly generating interactions between pairs of workspace qubits. We believe this construction may contribute to progress in other Hamiltonian-related problems as well as in adiabatic computation.

Lior Eldar, Oded Regev

Backmatter
