
2012 | Book

Automata, Languages, and Programming

39th International Colloquium, ICALP 2012, Warwick, UK, July 9-13, 2012, Proceedings, Part I

Editors: Artur Czumaj, Kurt Mehlhorn, Andrew Pitts, Roger Wattenhofer

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This two-volume set of LNCS 7391 and LNCS 7392 constitutes the refereed proceedings of the 39th International Colloquium on Automata, Languages and Programming, ICALP 2012, held in Warwick, UK, in July 2012. A total of 123 revised full papers presented in this volume were carefully reviewed and selected from 432 submissions. They are organized in three tracks focusing on algorithms, complexity and games; logic, semantics, automata and theory of programming; and foundations of networked computation.

Table of Contents

Frontmatter

Track A – Algorithms, Complexity and Games

Unsatisfiability Bounds for Random CSPs from an Energetic Interpolation Method

The interpolation method, originally developed in statistical physics, transforms distributions of random CSPs to distributions of much simpler problems while bounding the change in a number of associated statistical quantities along the transformation path. After a number of further mathematical developments, it is now known that, in principle, the method can yield rigorous unsatisfiability bounds if one “plugs in an appropriate functional distribution”. A drawback of the method is that identifying appropriate distributions and plugging them in leads to major analytical challenges as the distributions required are, in fact, infinite dimensional objects. We develop a variant of the interpolation method for random CSPs on arbitrary sparse degree distributions which trades accuracy for tractability. In particular, our bounds only require the solution of a 1-dimensional optimization problem (which typically turns out to be very easy) and as such can be used to compute explicit rigorous unsatisfiability bounds.

Dimitris Achlioptas, Ricardo Menchaca-Mendez
The NOF Multiparty Communication Complexity of Composed Functions

We study the k-party ‘number on the forehead’ communication complexity of composed functions f ∘ g, where f : {0,1}^n → {±1}, g : {0,1}^k → {0,1}, and for (x_1, …, x_k) ∈ ({0,1}^n)^k, f ∘ g(x_1, …, x_k) = f(…, g(x_{1,i}, …, x_{k,i}), …). We show that there is an O(log^3 n) cost simultaneous protocol for $\textnormal{\textsc{sym}} \circ g$ when k > 1 + log n, where $\textnormal{\textsc{sym}}$ is any symmetric function and g is any function. Previously, an efficient protocol was only known for $\textnormal{\textsc{sym}} \circ g$ when g is symmetric and “compressible”. We also get a non-simultaneous protocol for $\textnormal{\textsc{sym}} \circ g$ of cost O((n/2^k) log n + k log n) for any k ≥ 2.

In the setting of k ≤ 1 + log n, we study more closely functions of the form $\textnormal{\textsc{majority}} \circ g$, $\textnormal{\textsc{mod}}_m \circ g$, and $\textnormal{\textsc{nor}} \circ g$, where the latter two are generalizations of the well-known and studied functions Generalized Inner Product and Disjointness respectively. We characterize the communication complexity of these functions with respect to the choice of g. As the main application of our results, we answer a question posed by Babai et al. (SIAM Journal on Computing, 33:137–166, 2004) and determine the communication complexity of $\textnormal{\textsc{majority}} \circ \textnormal{\textsc{qcsb}}_k$, where $\textnormal{\textsc{qcsb}}_k$ is the “quadratic character of the sum of the bits” function.

Anil Ada, Arkadev Chattopadhyay, Omar Fawzi, Phuong Nguyen
Quantum Strategies Are Better Than Classical in Almost Any XOR Game

We initiate a study of random instances of nonlocal games. We show that quantum strategies are better than classical for almost any 2-player XOR game. More precisely, for large n, the entangled value of a random 2-player XOR game with n questions to every player is at least 1.21... times the classical value, for a 1 − o(1) fraction of all 2-player XOR games.

Andris Ambainis, Artūrs Bačkurs, Kaspars Balodis, Dmitrijs Kravčenko, Raitis Ozols, Juris Smotrovs, Madars Virza
Efficient Submodular Function Maximization under Linear Packing Constraints

We study the problem of maximizing a monotone submodular set function subject to linear packing constraints. An instance of this problem consists of a matrix A ∈ [0,1]^{m × n}, a vector b ∈ [1, ∞)^m, and a monotone submodular set function f : 2^{[n]} → ℝ_+. The objective is to find a set S that maximizes f(S) subject to Ax_S ≤ b, where x_S stands for the characteristic vector of the set S. A well-studied special case of this problem is when f is linear. This special linear case captures the class of packing integer programs.

Our main contribution is an efficient combinatorial algorithm that achieves an approximation ratio of Ω(1/m^{1/W}), where W = min{b_i/A_{ij} : A_{ij} > 0} is the width of the packing constraints. This result matches the best known performance guarantee for the linear case. One immediate corollary of this result is that the algorithm under consideration achieves constant factor approximation when the number of constraints is constant or when the width of the constraints is sufficiently large. This motivates us to study the large width setting, trying to determine its exact approximability. We develop an algorithm that has an approximation ratio of (1 − ε)(1 − 1/e) when W = Ω(ln m / ε^2). This result essentially matches the theoretical lower bound of 1 − 1/e. We also study the special setting in which the matrix A is binary and k-column sparse. A k-column sparse matrix has at most k non-zero entries in each of its columns. We design a fast combinatorial algorithm that achieves an approximation ratio of Ω(1/(W k^{1/W})); that is, its performance guarantee depends only on the sparsity and width parameters.

Yossi Azar, Iftah Gamzu
Polynomial-Time Isomorphism Test for Groups with No Abelian Normal Subgroups
(Extended Abstract)

We consider the problem of testing isomorphism of groups of order n given by Cayley tables. The trivial n^{log n} bound on the time complexity for the general case has not been improved upon over the past four decades. We demonstrate that the obstacle to efficient algorithms is the presence of abelian normal subgroups; we show this by giving a polynomial-time isomorphism test for groups without nontrivial abelian normal subgroups. This concludes a project started by the authors and J. A. Grochow (SODA 2011). Two key new ingredients are: (a) an algorithm to test permutational isomorphism of permutation groups in time polynomial in the order and simply exponential in the degree; (b) the introduction of the “twisted code equivalence problem,” a generalization of the classical code equivalence problem obtained by admitting a group action on the alphabet. Both of these problems are of independent interest.

László Babai, Paolo Codenotti, Youming Qiao
Clustering under Perturbation Resilience

Motivated by the fact that distances between data points in many real-world clustering instances are often based on heuristic measures, Bilu and Linial [6] proposed analyzing objective-based clustering problems under the assumption that the optimum clustering to the objective is preserved under small multiplicative perturbations to distances between points. In this paper, we provide several results within this framework. For separable center-based objectives, we present an algorithm that can optimally cluster instances resilient to $(1 + \sqrt{2})$-factor perturbations, solving an open problem of Awasthi et al. [2]. For the k-median objective, we additionally give algorithms for a weaker, relaxed, and more realistic assumption in which we allow the optimal solution to change in a small fraction of the points after perturbation. We also provide positive results for min-sum clustering, which is a generally much harder objective than k-median (and also non-center-based). Our algorithms are based on new linkage criteria that may be of independent interest.

Maria Florina Balcan, Yingyu Liang
Secretary Problems with Convex Costs

We consider online resource allocation problems where given a set of requests our goal is to select a subset that maximizes a value minus cost type of objective. Requests are presented online in random order, and each request possesses an adversarial value and an adversarial size. The online algorithm must make an irrevocable accept/reject decision as soon as it sees each request. The “profit” of a set of accepted requests is its total value minus a convex cost function of its total size. This problem falls within the framework of secretary problems. Unlike previous work in that area, one of the main challenges we face is that the objective function can be positive or negative, and we must guard against accepting requests that look good early on but cause the solution to have an arbitrarily large cost as more requests are accepted. This necessitates new techniques. We study this problem under various feasibility constraints and present online algorithms with competitive ratios only a constant factor worse than those known in the absence of costs for the same feasibility constraints. We also consider a multi-dimensional version of the problem that generalizes multi-dimensional knapsack within a secretary framework. In the absence of feasibility constraints, we present an O(ℓ)-competitive algorithm where ℓ is the number of dimensions; this matches within constant factors the best known ratio for multi-dimensional knapsack secretary.

Siddharth Barman, Seeun Umboh, Shuchi Chawla, David Malec
Nearly Simultaneously Resettable Black-Box Zero Knowledge

An important open question in cryptography concerns the possibility of achieving secure protocols even in the presence of physical attacks. Here we focus on the case of proof systems where an adversary forces the honest player to re-use its randomness in different executions. In 2009, Deng, Goyal and Sahai [1] constructed a simultaneously resettable non-black-box zero-knowledge argument system that is secure against resetting provers and verifiers.

In this work we study the case of black-box use of the code of the adversary and show a nearly simultaneously resettable black-box zero-knowledge proof system under standard assumptions. Compared to [1], our protocol is a proof (rather than just an argument) system, but requires that the resetting prover can reset the verifier up to a bounded number of times (which is unavoidable for black-box simulation), while the verifier can reset the prover an arbitrary polynomial number of times. The main contribution of our construction is that the round complexity is independent of the above bound. To achieve our result, we construct a constant-round nearly simultaneously resettable coin-flipping protocol that we believe is of independent interest.

Joshua Baron, Rafail Ostrovsky, Ivan Visconti
Complexity of Complexity and Maximal Plain versus Prefix-Free Kolmogorov Complexity

Peter Gacs showed [2] that for every n there exists a bit string x of length n whose plain complexity C(x) has almost maximal conditional complexity relative to x, i.e., C(C(x)|x) ≥ log n − log^{(2)} n − O(1), where log^{(2)} i = log log i, etc. Following Elena Kalinina [4], we provide a game-theoretic proof of this result; modifying her argument, we get a better (and tight) bound log n − O(1). We also show the same bound for prefix-free complexity.

Robert Solovay showed [11] that infinitely many strings x have maximal plain complexity but not maximal prefix-free complexity (among the strings of the same length); i.e., for some c: |x| − C(x) ≤ c and |x| + K(|x|) − K(x) ≥ log^{(2)} |x| − c log^{(3)} |x|. Using the result above, we provide a short proof of Solovay’s result. We also generalize it by showing that for some c and for all n there are strings x of length n with n − C(x) ≤ c and n + K(n) − K(x) ≥ K(K(n)|n) − 3K(K(K(n)|n)|n) − c. This is very close to the upper bound K(K(n)|n) + O(1) proved by Solovay.

Bruno Bauwens
On Quadratic Programming with a Ratio Objective

Quadratic Programming (QP) is the well-studied problem of maximizing over {−1,1} values the quadratic form ∑_{i ≠ j} a_{ij} x_i x_j. QP captures many known combinatorial optimization problems, and assuming the Unique Games conjecture, Semidefinite Programming (SDP) techniques give optimal approximation algorithms. We extend this body of work by initiating the study of Quadratic Programming problems where the variables take values in the domain {−1,0,1}. The specific problem we study is

$$\begin{aligned} \textsf{QP-Ratio} &: \mbox{\ \ } \max_{\{-1,0,1\}^n} \frac{\sum_{i \not = j} a_{ij} x_i x_j}{\sum x_i^2} \end{aligned}$$

This is a natural relative of several well-studied problems (in fact Trevisan introduced a normalized variant as a stepping stone towards a spectral algorithm for Max Cut Gain). Quadratic ratio problems are good testbeds for both algorithms and complexity because the techniques used for quadratic problems on the {−1,1} and {0,1} domains do not seem to carry over to the {−1,0,1} domain. We give approximation algorithms and evidence for the hardness of approximating these problems.

We consider an SDP relaxation obtained by adding constraints to the natural eigenvalue (or SDP) relaxation for this problem. Using this, we obtain an $\tilde{O}(n^{1/3})$ approximation algorithm for QP-Ratio. We also give an $\tilde{O}(n^{1/4})$ approximation for bipartite graphs, and better algorithms for special cases.

As with other problems with ratio objectives (e.g. uniform sparsest cut), it seems difficult to obtain inapproximability results based on P ≠ NP. We give two results that indicate that QP-Ratio is hard to approximate to within any constant factor: one is based on the assumption that random instances of Max k-AND are hard to approximate, and the other makes a connection to a ratio version of Unique Games. We also give a natural distribution on instances of QP-Ratio for which an n^ε approximation (for ε roughly 1/10) seems out of reach of current techniques.
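To make the objective concrete, here is a brute-force evaluator of the QP-Ratio objective over {−1,0,1}^n (a sketch of our own; the function name and list-of-lists matrix encoding are illustrative choices, not from the paper). It is only feasible for tiny n, which is precisely why approximation algorithms are the subject of study.

```python
from itertools import product

def qp_ratio_brute_force(a):
    """Exhaustively maximize (sum_{i != j} a[i][j] * x_i * x_j) / (sum_i x_i^2)
    over x in {-1, 0, 1}^n, skipping the all-zero vector (undefined ratio)."""
    n = len(a)
    best = float('-inf')
    for x in product((-1, 0, 1), repeat=n):
        denom = sum(xi * xi for xi in x)
        if denom == 0:
            continue
        num = sum(a[i][j] * x[i] * x[j]
                  for i in range(n) for j in range(n) if i != j)
        best = max(best, num / denom)
    return best
```

For the 2×2 instance with a_{12} = a_{21} = 1, the maximum is attained at x = (1, 1) with ratio 1.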

Aditya Bhaskara, Moses Charikar, Rajsekar Manokaran, Aravindan Vijayaraghavan
De-amortizing Binary Search Trees

We present a general method for de-amortizing essentially any Binary Search Tree (BST) algorithm. In particular, by transforming Splay Trees, our method produces a BST that has the same asymptotic cost as Splay Trees on any access sequence while performing each search in O(log n) worst-case time. By transforming Multi-Splay Trees, we obtain a BST that is O(log log n)-competitive, satisfies the scanning theorem, the static optimality theorem, the static finger theorem, and the working set theorem, and performs each search in O(log n) worst-case time. Transforming OPT proves the existence of an O(1)-competitive offline BST algorithm which performs at most O(log n) BST operations between each access to the keys in the input sequence. Finally, we obtain that if there is an O(1)-competitive online BST algorithm, then there is also one that performs every search in O(log n) operations worst case.

Prosenjit Bose, Sébastien Collette, Rolf Fagerberg, Stefan Langerman
Efficient Sampling Methods for Discrete Distributions

We study the fundamental problem of the exact and efficient generation of random values from a finite and discrete probability distribution. Suppose that we are given n distinct events with associated probabilities p_1, …, p_n. We consider the problem of sampling a subset, which includes the i-th event independently with probability p_i, and the problem of sampling from the distribution, where the i-th event has a probability proportional to p_i. For both problems, we present, on two different classes of inputs – sorted and general probabilities – efficient preprocessing algorithms that allow for asymptotically optimal querying, and we prove almost matching lower bounds for their complexity.
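As a point of reference for the two query problems in this abstract, the following sketch (our own illustration, not the paper's algorithms) handles proportional sampling with O(n) preprocessing and O(log n) queries via prefix sums and binary search, and subset sampling by a naive O(n) scan:

```python
import bisect
import random

def build_sampler(probs):
    """Precompute prefix sums so that each proportional-sampling
    query takes O(log n) time via binary search (inverse CDF)."""
    prefix = []
    total = 0.0
    for p in probs:
        total += p
        prefix.append(total)
    return prefix

def sample_proportional(prefix, rng=random.random):
    """Return index i with probability p_i / sum(p)."""
    r = rng() * prefix[-1]
    return bisect.bisect_right(prefix, r)

def sample_subset(probs, rng=random.random):
    """Return the subset containing event i independently with
    probability p_i (each p_i must lie in [0, 1])."""
    return [i for i, p in enumerate(probs) if rng() < p]
```

The paper's contribution is to beat such baselines with asymptotically optimal query times after preprocessing, on both sorted and general inputs.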

Karl Bringmann, Konstantinos Panagiotou
Approximation Algorithms for Online Weighted Rank Function Maximization under Matroid Constraints

Consider the following online version of the submodular maximization problem under a matroid constraint. We are given a set of elements over which a matroid is defined. The goal is to incrementally choose a subset that remains independent in the matroid over time. At each time, a new weighted rank function of a different matroid (one per time) over the same elements is presented; the algorithm can add a few elements to the incrementally constructed set, and reaps a reward equal to the value of the new weighted rank function on the current set. The goal of the algorithm as it builds this independent set online is to maximize the sum of these (weighted rank) rewards. As in regular online analysis, we compare the rewards of our online algorithm to that of an offline optimum, namely a single independent set of the matroid that maximizes the sum of the weighted rank rewards that arrive over time. This problem is a natural extension of two well-studied streams of earlier work: the first is on online set cover algorithms (in particular for the max coverage version) while the second is on approximately maximizing submodular functions under a matroid constraint.

In this paper, we present the first randomized online algorithms for this problem with poly-logarithmic competitive ratio. To do this, we employ the LP formulation of a scaled reward version of the problem. Then we extend a weighted-majority type update rule along with uncrossing properties of tight sets in the matroid polytope to find an approximately optimal fractional LP solution. We use the fractional solution values as probabilities for an online randomized rounding algorithm. To show that our rounding produces a sufficiently large reward independent set, we prove and use new covering properties for randomly rounded fractional solutions in the matroid polytope that may be of independent interest.

Niv Buchbinder, Joseph (Seffi) Naor, R. Ravi, Mohit Singh
Improved LP-Rounding Approximation Algorithm for k-level Uncapacitated Facility Location

We study the k-level uncapacitated facility location problem, where clients need to be connected with paths crossing open facilities of k types (levels). In this paper we give an approximation algorithm that for any constant k, in polynomial time, delivers solutions of cost at most α_k times OPT, where α_k is an increasing function of k with lim_{k → ∞} α_k = 3.

Our algorithm rounds a fractional solution to an extended LP formulation of the problem. The rounding builds upon the technique of iteratively rounding fractional solutions on trees (Garg, Konjevod, and Ravi SODA’98) originally used for the group Steiner tree problem.

We improve the approximation ratio for k-UFL for all k ≥ 3; in particular, we obtain ratios equal to 2.02, 2.14, and 2.24 for k = 3, 4, and 5, respectively.

Jaroslaw Byrka, Bartosz Rybicki
Testing Coverage Functions

A coverage function f over a ground set [m] is associated with a universe U of weighted elements and m sets A_1, …, A_m ⊆ U; for any T ⊆ [m], f(T) is defined as the total weight of the elements in the union ∪_{j ∈ T} A_j. Coverage functions are an important special case of submodular functions, and arise in many applications, for instance as a class of utility functions of agents in combinatorial auctions.
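The definition above can be made concrete with a small sketch (our own illustration; the names are hypothetical): f(T) is just the total weight of the union of the chosen sets.

```python
def coverage_value(T, sets, weight):
    """f(T) = total weight of the union of A_j for j in T.
    `sets[j]` is the set A_j (a set of elements of U);
    `weight` maps each element of U to its weight."""
    union = set()
    for j in T:
        union |= sets[j]
    return sum(weight[e] for e in union)
```

For example, with A_1 = {a, b}, A_2 = {b, c} and weights w(a) = 1, w(b) = 2, w(c) = 3, we get f({1}) = 3 and f({1, 2}) = 6: element b is counted once, as the definition requires.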

Set functions such as coverage functions often lack succinct representations, and in algorithmic applications, access to a value oracle is assumed. In this paper, we ask whether one can test if a given oracle is that of a coverage function or not. We demonstrate an algorithm which makes O(m|U|) queries to an oracle of a coverage function and completely reconstructs it. This gives a polytime tester for succinct coverage functions, for which |U| is polynomially bounded in m. In contrast, we demonstrate a set function which is “far” from coverage but requires $2^{\tilde{\Theta}(m)}$ queries to distinguish it from the class of coverage functions.

Deeparnab Chakrabarty, Zhiyi Huang
Sparse Fault-Tolerant Spanners for Doubling Metrics with Bounded Hop-Diameter or Degree

We study fault-tolerant spanners in doubling metrics. A subgraph H for a metric space X is called a k-vertex-fault-tolerant t-spanner ((k,t)-VFTS or simply k-VFTS) if for any subset S ⊆ X with |S| ≤ k, it holds that d_{H ∖ S}(x, y) ≤ t · d(x, y) for any pair of x, y ∈ X ∖ S.

For any doubling metric, we give a basic construction of k-VFTS with stretch arbitrarily close to 1 that has optimal O(kn) edges. In addition, we also consider bounded hop-diameter, which is studied in the context of fault-tolerance for the first time even for Euclidean spanners. We provide a construction of k-VFTS with bounded hop-diameter: for m ≥ 2n, we can reduce the hop-diameter of the above k-VFTS to O(α(m, n)) by adding O(km) edges, where α is a functional inverse of Ackermann’s function.

Finally, we construct a fault-tolerant single-sink spanner with bounded maximum degree, and use it to reduce the maximum degree of our basic k-VFTS. As a result, we get a k-VFTS with O(k^2 n) edges and maximum degree O(k^2).

T. -H. Hubert Chan, Mingfei Li, Li Ning
A Dependent LP-Rounding Approach for the k-Median Problem

In this paper, we revisit the classical k-median problem. Using the standard LP relaxation for k-median, we give an efficient algorithm to construct a probability distribution on sets of k centers that matches the marginals specified by the optimal LP solution. Analyzing the approximation ratio of our algorithm presents significant technical difficulties: we are able to show an upper bound of 3.25. While this is worse than the current best known 3 + ε guarantee of [2], our approach is still of interest because: (1) it leads to 3.25-approximation algorithms for some generalizations of the k-median problem, including the k-facility location problem introduced in [10]; (2) our algorithm runs in $\tilde{O}(k^3 n^2/\delta^2)$ time to achieve a 3.25(1 + δ)-approximation, compared to the O(n^8) time required by the local search algorithm of [2] to guarantee a 3.25 approximation; and (3) our approach has the potential to beat the decade-old bound of 3 + ε for k-median.

We also give a 34-approximation for the knapsack median problem, which greatly improves the approximation constant in [13]. Using the same technique, we also give a 9-approximation for the matroid median problem introduced in [11], improving on their 16-approximation.

Moses Charikar, Shi Li
Node-Weighted Network Design in Planar and Minor-Closed Families of Graphs

We consider node-weighted network design in planar and minor-closed families of graphs. In particular we focus on the edge-connectivity survivable network design problem (EC-SNDP). The input consists of a node-weighted undirected graph G = (V, E) and integral connectivity requirements r(uv) for each pair of nodes uv. The goal is to find a minimum node-weighted subgraph H of G such that, for each pair uv, H contains r(uv) edge-disjoint paths between u and v. Our main result is an O(k)-approximation algorithm for EC-SNDP where k = max_{uv} r(uv) is the maximum requirement. This improves the O(k log n)-approximation known for node-weighted EC-SNDP in general graphs [15]. Our algorithm and analysis apply to the more general problem of covering a proper function with maximum requirement k. Our result is inspired by, and generalizes, the work of Demaine, Hajiaghayi and Klein [5], who gave constant factor approximation algorithms for node-weighted Steiner tree and Steiner forest problems (and more generally covering 0-1 proper functions) in planar and minor-closed families of graphs.

Chandra Chekuri, Alina Ene, Ali Vakilian
Computing the Visibility Polygon of an Island in a Polygonal Domain

Given a set $\mathcal{P}$ of h pairwise-disjoint polygonal obstacles with a total of n vertices in the plane, we study the problem of computing the (weakly) visibility polygon from a polygonal obstacle P* (an island) in $\mathcal{P}$. We give an O(n^2 h^2) time algorithm for it. Previously, the special case where P* is a line segment was solved in O(n^4) time, which is worst-case optimal. In addition, when all obstacles in $\mathcal{P}$ (including P*) are convex, our algorithm runs in O(n + h^4) time.

Danny Z. Chen, Haitao Wang
Directed Subset Feedback Vertex Set Is Fixed-Parameter Tractable

Given a graph G and an integer k, the Feedback Vertex Set (FVS) problem asks if there is a vertex set T of size at most k that hits all cycles in the graph. Bodlaender (WG ’91) gave the first fixed-parameter algorithm for FVS in undirected graphs. The fixed-parameter tractability status of FVS in directed graphs was a long-standing open problem until Chen et al. (STOC ’08) showed that it is fixed-parameter tractable by giving a 4^k k! n^{O(1)} algorithm. In the subset versions of these problems, we are given an additional subset S of vertices (resp. edges) and we want to hit all cycles passing through a vertex of S (resp. an edge of S). Indeed both the edge and vertex versions are known to be equivalent in the parameterized sense. Recently the Subset Feedback Vertex Set problem in undirected graphs was shown to be FPT by Cygan et al. (ICALP ’11) and Kakimura et al. (SODA ’12). We generalize the result of Chen et al. (STOC ’08) by showing that Subset Feedback Vertex Set in directed graphs can be solved in time $2^{2^{O(k)}}n^{O(1)}$, i.e., it is FPT parameterized by the size k of the solution. By our result, we complete the picture for feedback vertex set problems and their subset versions in undirected and directed graphs.

The technique of random sampling of important separators was used by Marx and Razgon (STOC ’11) to show that Undirected Multicut is FPT, and was generalized by Chitnis et al. (SODA ’12) to directed graphs to show that Directed Multiway Cut is FPT. In this paper we give a general family of problems (which includes Directed Multiway Cut and Directed Subset Feedback Vertex Set, among others) for which we can do random sampling of important separators and obtain a set which is disjoint from a minimum solution and covers its “shadow”. We believe this general approach will be useful for showing the fixed-parameter tractability of other problems in directed graphs.

Rajesh Chitnis, Marek Cygan, Mohammadtaghi Hajiaghayi, Dániel Marx
Max-Cut Parameterized above the Edwards-Erdős Bound

We study the boundary of tractability for the Max-Cut problem in graphs. Our main result shows that Max-Cut above the Edwards-Erdős bound is fixed-parameter tractable: we give an algorithm that for any connected graph with n vertices and m edges finds a cut of size

$$ \frac{m}{2} + \frac{n-1}{4} + k $$

in time 2^{O(k)} · n^4, or decides that no such cut exists.

This answers a long-standing open question from parameterized complexity that has been posed a number of times over the past 15 years.

Our algorithm is asymptotically optimal, under the Exponential Time Hypothesis, and is strengthened by a polynomial-time computable kernel of polynomial size.
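For intuition about the parameterization, a brute-force check of the excess k above the Edwards-Erdős bound on small graphs might look like this (our own illustrative sketch, exponential in n, unlike the 2^{O(k)} · n^4 algorithm of the paper):

```python
from itertools import product

def max_cut_above_edwards_erdos(n, edges):
    """For a connected graph on vertices 0..n-1, brute-force the
    maximum cut and return its excess k over the Edwards-Erdős
    bound m/2 + (n-1)/4, the parameter of the FPT algorithm."""
    m = len(edges)
    best = 0
    for side in product((0, 1), repeat=n):
        cut = sum(1 for u, v in edges if side[u] != side[v])
        best = max(best, cut)
    return best - (m / 2 + (n - 1) / 4)
```

On the triangle K_3 the maximum cut exactly meets the bound (excess 0), illustrating that the bound is tight for some connected graphs.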

Robert Crowston, Mark Jones, Matthias Mnich
Clique Cover and Graph Separation: New Incompressibility Results

The field of kernelization studies polynomial-time preprocessing routines for hard problems in the framework of parameterized complexity. In this paper we show that, unless $\textrm{NP} \subseteq \textrm{coNP}/\textrm{poly}$ and the polynomial hierarchy collapses to its third level, the following parameterized problems do not admit a polynomial-time preprocessing algorithm that reduces the size of an instance to polynomial in the parameter: Edge Clique Cover, parameterized by the number of cliques; Directed Edge/Vertex Multiway Cut, parameterized by the size of the cutset, even in the case of two terminals; Edge/Vertex Multicut, parameterized by the size of the cutset; and k-Way Cut, parameterized by the size of the cutset.

The existence of a polynomial kernelization for Edge Clique Cover was a seasoned veteran in open problem sessions. Furthermore, our results complement very recent developments in designing parameterized algorithms for cut problems by Marx and Razgon [STOC’11], Bousquet et al. [STOC’11], Kawarabayashi and Thorup [FOCS’11] and Chitnis et al. [SODA’12].

Marek Cygan, Stefan Kratsch, Marcin Pilipczuk, Michał Pilipczuk, Magnus Wahlström
The Inverse Shapley Value Problem

For f a weighted voting scheme used by n voters to choose between two candidates, the n Shapley-Shubik indices (or Shapley values) of f provide a measure of how much control each voter can exert over the overall outcome of the vote. Shapley-Shubik indices were introduced by Lloyd Shapley and Martin Shubik in 1954 [SS54] and are widely studied in social choice theory as a measure of the “influence” of voters. The Inverse Shapley Value Problem is the problem of designing a weighted voting scheme which (approximately) achieves a desired input vector of values for the Shapley-Shubik indices. Despite much interest in this problem, no provably correct and efficient algorithm was known prior to our work.

We give the first efficient algorithm with provable performance guarantees for the Inverse Shapley Value Problem. For any constant ε > 0 our algorithm runs in fixed poly(n) time (the degree of the polynomial is independent of ε) and has the following performance guarantee: given as input a vector of desired Shapley values, if any “reasonable” weighted voting scheme (roughly, one in which the threshold is not too skewed) approximately matches the desired vector of values to within some small error, then our algorithm explicitly outputs a weighted voting scheme that achieves this vector of Shapley values to within error ε. If there is a “reasonable” voting scheme in which all voting weights are integers at most poly(n) that approximately achieves the desired Shapley values, then our algorithm runs in time poly(n) and outputs a weighted voting scheme that achieves the target vector of Shapley values to within error ε = n^{−1/8}.
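For small n, the Shapley-Shubik indices in the forward direction can be computed by brute force over all n! voter orderings; the following sketch (our own illustration of the standard definition, not the paper's inverse algorithm) counts how often each voter is pivotal:

```python
from itertools import permutations

def shapley_shubik(weights, threshold):
    """Shapley-Shubik index of each voter in the weighted voting
    scheme that passes iff the total weight of 'yes' voters is at
    least `threshold`. Voter i's index is the fraction of the n!
    orderings in which i is pivotal, i.e. the voter whose arrival
    first pushes the running weight to the threshold."""
    n = len(weights)
    pivots = [0] * n
    count = 0
    for order in permutations(range(n)):
        count += 1
        running = 0
        for voter in order:
            running += weights[voter]
            if running >= threshold:
                pivots[voter] += 1
                break
    return [p / count for p in pivots]
```

With three equally weighted voters and majority threshold 2, symmetry gives each voter an index of 1/3; with weights (2, 1, 1) and threshold 3, the heavy voter's index rises to 2/3.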

Anindya De, Ilias Diakonikolas, Rocco Servedio
Zero-One Rounding of Singular Vectors

We propose a generic and simple technique called dyadic rounding for rounding real vectors to zero-one vectors, and show several of its applications in approximating singular vectors of matrices by zero-one vectors, cut decompositions of matrices, and norm optimization problems. Our rounding technique leads to the following consequences.

1. Given any $A \in \mathbb{R}^{m \times n}$, there exists $z \in \{0, 1\}^{n}$ such that
$$ \frac{\left\|Az\right\|_{q}}{\left\|z\right\|_{p}} \geq \Omega\left(p^{1 - \frac{1}{p}} (\log n)^{\frac{1}{p} - 1}\right) \left\|A\right\|_{p \mapsto q}, $$
where $\left\|A\right\|_{p \mapsto q} = \max_{x \neq 0} \left\|Ax\right\|_{q} / \left\|x\right\|_{p}$. Moreover, given any vector $v \in \mathbb{R}^{n}$ we can round it to a vector $z \in \{0, 1\}^{n}$ with the same approximation guarantee as above, but now the guarantee is with respect to $\left\|Av\right\|_{q}/\left\|v\right\|_{p}$ instead of $\left\|A\right\|_{p \mapsto q}$. Although stated for the $p \mapsto q$ norm, this generalizes to the case when $\left\|Az\right\|_{q}$ is replaced by any norm of $z$.

2. Given any $A \in \mathbb{R}^{m \times n}$, we can efficiently find $z \in \{0, 1\}^{n}$ such that
$$ \frac{\left\|Az\right\|}{\left\|z\right\|} \geq \frac{\sigma_{1}(A)}{2 \sqrt{2 \log n}}, $$
where $\sigma_{1}(A)$ is the top singular value of $A$. Extending this, we can efficiently find orthogonal $z_{1}, z_{2}, \ldots, z_{k} \in \{0, 1\}^{n}$ such that
$$ \frac{\left\|Az_{i}\right\|}{\left\|z_{i}\right\|} \geq \Omega\left(\frac{\sigma_{k}(A)}{\sqrt{k \log n}}\right), \quad \text{for all $i \in[k]$}. $$
We complement these results by showing that they are almost tight.

3. Given any $A \in \mathbb{R}^{m \times n}$ of rank r, we can approximate it (under the Frobenius norm) by a sum of $O(r \log^{2} m \log^{2} n)$ cut matrices, within an error of at most $\left\|A\right\|_{F}/\text{poly}(m, n)$. In comparison, the Singular Value Decomposition uses r rank-1 terms in the sum (but not necessarily cut matrices) and has zero error, whereas the cut decomposition lemma of Frieze and Kannan in their algorithmic version of Szemerédi's regularity partition [9,10] uses only $O(1/\epsilon^{2})$ cut matrices but has a large ${\epsilon} \sqrt{mn} \left\|A\right\|_{F}$ error (under the cut norm). Our algorithm is deterministic and more efficient for the corresponding error range.
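Consequence 2 can be sanity-checked numerically on a tiny matrix. The sketch below brute-forces the best zero-one vector and compares it against $\sigma_1(A)/(2\sqrt{2\log n})$ — an illustration, not the paper's efficient rounding procedure; the base-2 logarithm and the pure-Python power iteration (in place of a proper SVD) are assumptions of this sketch:

```python
import math, itertools, random

def top_singular_value(A, iters=300):
    """Estimate sigma_1(A) by power iteration on A^T A (pure Python)."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
        u = [sum(A[i][j] * w[i] for i in range(m)) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in u)) or 1.0
        v = [x / norm for x in u]
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return math.sqrt(sum(x * x for x in w))

def best_zero_one_ratio(A):
    """Maximize ||Az|| / ||z|| over nonzero z in {0,1}^n by exhaustive scan."""
    m, n = len(A), len(A[0])
    best = 0.0
    for bits in itertools.product([0, 1], repeat=n):
        if any(bits):
            Az = [sum(A[i][j] * bits[j] for j in range(n)) for i in range(m)]
            ratio = math.sqrt(sum(x * x for x in Az)) / math.sqrt(sum(bits))
            best = max(best, ratio)
    return best

random.seed(0)
A = [[random.uniform(-1, 1) for _ in range(6)] for _ in range(5)]
sigma1 = top_singular_value(A)
bound = sigma1 / (2 * math.sqrt(2 * math.log2(6)))
print(best_zero_one_ratio(A) >= bound)
```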

Amit Deshpande, Ravindran Kannan, Nikhil Srivastava
Label Cover Instances with Large Girth and the Hardness of Approximating Basic k-Spanner

We study the well-known Label Cover problem under the additional requirement that problem instances have large girth. We show that if the girth is some k, the problem is roughly $2^{(\log^{1-\epsilon} n)/k}$ hard to approximate for all constant ε > 0. A similar theorem was claimed by Elkin and Peleg [ICALP 2000] as part of an attempt to prove hardness for the basic k-spanner problem, but their proof was later found to have a fundamental error. Thus we give both the first non-trivial lower bound for the problem of Label Cover with large girth and the first full proof of strong hardness for the basic k-spanner problem, which is both the simplest problem in graph spanners and one of the few for which super-logarithmic hardness was not known. Assuming $NP \not\subseteq BPTIME(2^{polylog(n)})$, we show (roughly) that for every k ≥ 3 and every constant ε > 0 it is hard to approximate the basic k-spanner problem within a factor better than $2^{(\log^{1-\epsilon} n) / k}$. This improves over the previous best lower bound of only Ω(log n)/k from [17]. Our main technique is subsampling the edges of 2-query PCPs, which allows us to reduce the degree of a PCP to be essentially equal to the desired soundness. This turns out to be enough to basically guarantee large girth.

Michael Dinitz, Guy Kortsarz, Ran Raz
Space-Constrained Interval Selection

We study streaming algorithms for the interval selection problem: finding a maximum cardinality subset of disjoint intervals on the line. We develop a deterministic 2-approximation streaming algorithm for this problem, together with an algorithm for the special case of proper intervals that achieves an improved approximation ratio of 3/2. We complement these upper bounds by proving that they are essentially best possible in the streaming setting: an approximation ratio of 2 − ε (or 3/2 − ε for proper intervals) cannot be achieved unless the space is linear in the input size. In passing, we also answer an open question of Adler and Azar [1] regarding the space complexity of constant-competitive randomized preemptive online algorithms for the same problem.

Yuval Emek, Magnús M. Halldórsson, Adi Rosén
Polynomial Time Algorithms for Branching Markov Decision Processes and Probabilistic Min(Max) Polynomial Bellman Equations

We show that one can approximate the least fixed point solution for a multivariate system of monotone probabilistic max (min) polynomial equations in time polynomial in both the encoding size of the system of equations and in log(1/ε), where ε > 0 is the desired additive error bound of the solution. (The model of computation is the standard Turing machine model.)

These equations form the Bellman optimality equations for several important classes of infinite-state Markov Decision Processes (MDPs). Thus, as a corollary, we obtain the first polynomial time algorithms for computing, to within arbitrary desired precision, the optimal value vector for several classes of infinite-state MDPs which arise as extensions of classic, and heavily studied, purely stochastic processes. These include both the problem of maximizing and of minimizing the termination (extinction) probability of multi-type branching MDPs, stochastic context-free MDPs, and 1-exit Recursive MDPs. We also show that we can compute in P-time an ε-optimal policy for any given desired ε > 0.
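The least fixed point of such a monotone system can be approached from below by value iteration, as in the one-variable toy system below (made up for illustration). Note that plain value iteration can converge very slowly in general — which is why achieving time polynomial in log(1/ε), as the paper does, requires different techniques:

```python
def least_fixed_point(eps=1e-9, max_iter=200):
    """Value iteration from 0 for the toy monotone max-polynomial equation
    x = max(0.1 + 0.9*x**2, 0.55 + 0.2*x).
    Iterating the monotone operator from 0 converges upward to the
    least fixed point of the system."""
    x = 0.0
    for _ in range(max_iter):
        nxt = max(0.1 + 0.9 * x * x, 0.55 + 0.2 * x)
        if abs(nxt - x) < eps:
            return nxt
        x = nxt
    return x

# Here the second (linear) equation dominates along the whole trajectory,
# so the least fixed point solves x = 0.55 + 0.2x, i.e. x = 0.6875.
print(least_fixed_point())
```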

Kousha Etessami, Alistair Stewart, Mihalis Yannakakis
Succinct Indices for Range Queries with Applications to Orthogonal Range Maxima

We consider the problem of preprocessing N points in 2D, each endowed with a priority, to answer the following queries: given an axis-parallel rectangle, determine the point with the largest priority in the rectangle. Using the ideas of the effective entropy of range maxima queries and succinct indices for range maxima queries, we obtain a structure that uses O(N) words and answers the above query in O(log N log log N) time. This is a direct improvement of Chazelle's result from 1985 [10] for this problem: Chazelle required O(N/ε) words to answer queries in $O((\log N)^{1 + \epsilon})$ time for any constant ε > 0.

Arash Farzan, J. Ian Munro, Rajeev Raman
Universal Factor Graphs

The factor graph of an instance of a symmetric constraint satisfaction problem on n Boolean variables and m constraints (CSPs such as k-SAT, k-AND, k-LIN) is a bipartite graph describing which variables appear in which constraints. The factor graph describes the instance up to the polarity of the variables, and hence there are up to $2^{km}$ instances of the CSP that share the same factor graph. It is well known that factor graphs with certain structural properties make the underlying CSP easier to solve exactly (e.g., for tree structures) or approximately (e.g., for planar structures). We are interested in the following question: is there a factor graph for which, if one can solve every instance of the CSP with this particular factor graph, then one can solve every instance of the CSP regardless of the factor graph (and similarly, for approximation)? We call such a factor graph universal. As one needs different factor graphs for different values of n and m, this gives rise to the notion of a family of universal factor graphs.

We initiate a systematic study of universal factor graphs, and present some results for max-kSAT. Our work has connections with the notion of preprocessing as previously studied for closest codeword and closest lattice-vector problems, with proofs of the PCP theorem, and with tests for the long code. Many questions remain open.
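For concreteness, the factor graph forgets exactly the literal polarities, so instances differing only in signs are indistinguishable at this level. A small sketch using the usual DIMACS-style signed-integer literals (the example clauses are made up):

```python
def factor_graph(clauses):
    """Bipartite factor graph of a CNF instance: an edge (variable, clause)
    whenever the variable occurs in the clause, with either polarity."""
    return {(abs(lit), j) for j, clause in enumerate(clauses) for lit in clause}

# Two 3-SAT instances that differ only in literal polarities
# share the same factor graph.
f1 = factor_graph([(1, 2, 3), (-1, 2, -4)])
f2 = factor_graph([(-1, -2, 3), (1, -2, 4)])
print(f1 == f2)  # → True
```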

Uriel Feige, Shlomo Jozeph
Parameterized Approximation via Fidelity Preserving Transformations

We motivate and describe a new parameterized approximation paradigm which studies the interaction between performance ratio and running time for any parameterization of a given optimization problem. As a key tool, we introduce the concept of an α-shrinking transformation, for α ≥ 1. Applying such a transformation to a parameterized problem instance decreases the parameter value, while preserving an approximation ratio of α (or α-fidelity).

For example, it is well known that Vertex Cover cannot be approximated within any constant factor better than 2 [24] (under usual assumptions). Our parameterized α-approximation algorithm for k-Vertex Cover, parameterized by the solution size, has a running time of $1.273^{(2 - \alpha)k}$, where the running time of the best FPT algorithm is $1.273^{k}$ [10]. Our algorithms define a continuous tradeoff between running times and approximation ratios, allowing practitioners to appropriately allocate computational resources.

Moving even beyond the performance ratio, we call for a new type of approximative kernelization race. Our α-shrinking transformations can be used to obtain kernels which are smaller than the best known for a given problem. For the Vertex Cover problem we obtain a kernel size of 2(2 − α)k. The smaller "α-fidelity" kernels allow us to solve problem instances exactly more efficiently, while obtaining an approximate solution for the original instance.

We show that such transformations exist for several fundamental problems, including Vertex Cover, d-Hitting Set, Connected Vertex Cover and Steiner Tree. We note that most of our algorithms are easy to implement and are therefore practical in use.
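The parameterized baseline behind such results is the classic bounded search tree for k-Vertex Cover — shown below in its simple $O^*(2^k)$ form rather than the $1.273^k$ algorithm cited in the abstract — which branches on an endpoint of any uncovered edge:

```python
def vertex_cover_at_most_k(edges, k):
    """Decide whether the graph has a vertex cover of size <= k.
    Classic bounded search tree: some endpoint of any uncovered edge
    must be in the cover, so branch on the two choices. O(2^k * m)."""
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but budget is exhausted
    u, v = edges[0]
    return any(
        vertex_cover_at_most_k([e for e in edges if w not in e], k - 1)
        for w in (u, v)
    )

# A 4-cycle has a vertex cover of size 2 but not of size 1.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertex_cover_at_most_k(c4, 2), vertex_cover_at_most_k(c4, 1))  # → True False
```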

Michael R. Fellows, Ariel Kulik, Frances Rosamond, Hadas Shachnai
Backdoors to Acyclic SAT

Backdoor sets contain certain key variables of a CNF formula F that make it easy to solve the formula. More specifically, a weak backdoor set of F is a set X of variables such that there exists a truth assignment τ to X that reduces F to a satisfiable formula F[τ] that belongs to a polynomial-time decidable base class $\mathcal C$. A strong backdoor set is a set X of variables such that for all assignments τ to X, the reduced formula F[τ] belongs to $\mathcal C$.

We study the problem of finding backdoor sets of size at most k with respect to the base class of CNF formulas with acyclic incidence graphs, taking k as the parameter. We show that (1) the detection of weak backdoor sets is W[2]-hard in general but fixed-parameter tractable for r-CNF formulas, for any fixed r ≥ 3, and (2) the detection of strong backdoor sets is fixed-parameter approximable.

Result 1 is the first positive one for a base class that does not have a characterization with obstructions of bounded size. Result 2 is the first positive one for a base class for which strong backdoor sets are more powerful than deletion backdoor sets.

Not only SAT, but also #SAT can be solved in polynomial time for CNF formulas with acyclic incidence graphs. Hence Result 2 establishes a new structural parameter that makes #SAT fixed-parameter tractable and that is incomparable with known parameters such as treewidth and clique-width. We obtain the algorithms by a combination of an algorithmic version of the Erdős-Pósa Theorem, Courcelle's model checking for monadic second-order logic, and new combinatorial results on how disjoint cycles can interact with the backdoor set.

Serge Gaspers, Stefan Szeider
Dominators, Directed Bipolar Orders, and Independent Spanning Trees

We consider problems related to dominators and independent spanning trees in flowgraphs and provide linear-time algorithms for their solutions. We introduce the notion of a directed bipolar order, generalizing a previous notion of Plein and Cheriyan and Reif. We show how to construct such an order from information computed by several known algorithms for finding dominators. We show how to concurrently verify the correctness of a dominator tree D and a directed bipolar order O very simply, and how to construct from D and O two spanning trees whose paths are disjoint except for common dominators. Finally, we describe alternative ways to verify dominators without using a directed bipolar order.

Loukas Georgiadis, Robert E. Tarjan
Hardness of Approximation for Quantum Problems

The polynomial hierarchy plays a central role in classical complexity theory. Here, we define a quantum generalization of the polynomial hierarchy, and initiate its study. We show that not only are there natural complete problems for the second level of this quantum hierarchy, but that these problems are in fact hard to approximate. Our work thus yields the first known hardness of approximation results for a quantum complexity class. Using these techniques, we also obtain hardness of approximation for the class QCMA. Our approach is based on the use of dispersers, and is inspired by the classical results of Umans regarding hardness of approximation for the second level of the classical polynomial hierarchy (Umans 1999). We close by showing that a variant of the local Hamiltonian problem with hybrid classical-quantum ground states is complete for the second level of our quantum hierarchy.

Sevag Gharibian, Julia Kempe
The Complexity of Computing the Sign of the Tutte Polynomial (and Consequent #P-hardness of Approximation)

We study the complexity of computing the sign of the Tutte polynomial of a graph. As there are only three possible outcomes (positive, negative, and zero), this seems at first sight more like a decision problem than a counting problem. Surprisingly, however, there are large regions of the parameter space for which computing the sign of the Tutte polynomial is actually #P-hard. As a trivial consequence, approximating the polynomial is also #P-hard in this case. Thus, approximately evaluating the Tutte polynomial in these regions is as hard as exactly counting the satisfying assignments to a CNF Boolean formula. For most other points in the parameter space, we show that computing the sign of the polynomial is in FP, whereas approximating the polynomial can be done in polynomial time with an NP oracle. As a special case, we completely resolve the complexity of computing the sign of the chromatic polynomial: it is easily computable at q = 2 and when q ≤ 32/27, and is NP-hard to compute for all other values of the parameter q.
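The chromatic polynomial (a well-known specialization of the Tutte polynomial) can be evaluated at any q by the standard deletion-contraction recursion — exponential time, but enough to probe the sign at small parameter values on toy graphs:

```python
def chromatic_poly(n, edges, q):
    """Evaluate the chromatic polynomial P(G, q) by deletion-contraction:
    P(G) = P(G - e) - P(G / e), with P(edgeless graph on r vertices) = q**r.
    Exponential time; for small illustrative graphs only."""
    if not edges:
        return q ** n
    (u, v), rest = edges[0], edges[1:]
    # Contraction G/e: merge v into u, dropping loops and parallel copies
    # (parallel edges do not change proper colorings).
    merged = set()
    for (a, b) in rest:
        a, b = (u if a == v else a), (u if b == v else b)
        if a != b:
            merged.add((min(a, b), max(a, b)))
    return chromatic_poly(n, rest, q) - chromatic_poly(n - 1, sorted(merged), q)

triangle = [(0, 1), (1, 2), (0, 2)]
print(chromatic_poly(3, triangle, 3), chromatic_poly(3, triangle, 2))  # → 6 0
```

At q = 2 the value (and hence the sign) simply records bipartiteness, consistent with q = 2 being one of the easy cases above.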

Leslie Ann Goldberg, Mark Jerrum
Stochastic Vehicle Routing with Recourse

We study the classic Vehicle Routing Problem in the setting of stochastic optimization with recourse. StochVRP is a two-stage problem, where demand is satisfied using two routes: fixed and recourse. The fixed route is computed using only a demand distribution. Then, after observing the demand instantiations, a recourse route is computed — but costs here become more expensive by a factor λ.

We present an $O(\log^2 n \cdot \log(n\lambda))$-approximation algorithm for this stochastic routing problem, under arbitrary distributions. The main idea in this result is relating StochVRP to a special case of submodular orienteering, called knapsack rank-function orienteering. We also give a better approximation ratio for knapsack rank-function orienteering than what follows from prior work. Finally, we provide a Unique Games Conjecture based ω(1) hardness of approximation for StochVRP, even on star-like metrics on which our algorithm achieves a logarithmic approximation.

Inge Li Gørtz, Viswanath Nagarajan, Rishi Saket
The Online Metric Matching Problem for Doubling Metrics

In the online minimum-cost metric matching problem, we are given an instance of a metric space with k servers, and must match arriving requests to as-yet-unmatched servers so as to minimize the total distance from the requests to their assigned servers. We study this problem for the line metric and for doubling metrics in general. We give O(log k)-competitive randomized algorithms, which reduces the gap between the current $O(\log^2 k)$-competitive randomized algorithms and the constant-competitive lower bounds known for these settings.

We first analyze the "harmonic" algorithm for the line, which for each request chooses one of its two closest servers with probability inversely proportional to the distance to that server; this is O(log k)-competitive, with suitable guess-and-double steps to ensure that the metric has aspect ratio polynomial in k. The second algorithm embeds the metric into a random HST, and picks a server randomly from among the closest available servers in the HST, with the selection based upon how the servers are distributed within the tree. This algorithm is O(1)-competitive for HSTs obtained from embedding doubling metrics, and hence gives a randomized O(log k)-competitive algorithm for doubling metrics.
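A minimal sketch of the harmonic rule on the line, stripped of the guess-and-double wrapper that the analysis needs (server and request positions are made up for illustration):

```python
import random

def harmonic_match(servers, requests, rng):
    """One-pass 'harmonic' assignment on the line: each arriving request
    is matched to one of its two nearest free servers, chosen with
    probability inversely proportional to the distance to that server."""
    free = sorted(servers)
    cost = 0.0
    for r in requests:
        left = max((s for s in free if s <= r), default=None)
        right = min((s for s in free if s > r), default=None)
        if left is None:
            pick = right
        elif right is None or r == left:
            pick = left
        else:
            d1, d2 = r - left, right - r
            # P(left) = (1/d1) / (1/d1 + 1/d2) = d2 / (d1 + d2)
            pick = left if rng.random() < d2 / (d1 + d2) else right
        cost += abs(r - pick)
        free.remove(pick)
    return cost

print(harmonic_match([0, 2, 10], [1.5, 9, 0.5], random.Random(7)))
```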

Anupam Gupta, Kevin Lewi
Approximating Sparse Covering Integer Programs Online

A covering integer program (CIP) is a mathematical program of the form:
$$\begin{aligned} \min \{ c^\top \mathbf{x} \mid A\mathbf{x} \geq \mathbf{1},\; \mathbf{0} \leq \mathbf{x} \leq \mathbf{u},\; \mathbf{x} \in {\ensuremath{\mathbb{Z}}}^n\},\nonumber \end{aligned}$$
where $A \in {\ensuremath{\mathbb{R}}}_{\geq 0}^{m \times n}$ and $c, u \in {\ensuremath{\mathbb{R}}}_{\geq 0}^n$. In the online setting, the constraints (i.e., the rows of the constraint matrix A) arrive over time, and the algorithm can only increase the coordinates of x to maintain feasibility. As an intermediate step, we consider solving the covering linear program (CLP) online, where the requirement $\mathbf{x} \in {\ensuremath{\mathbb{Z}}}^n$ is replaced by $\mathbf{x} \in {\ensuremath{\mathbb{R}}}^n$.

Our main results are (a) an O(log k)-competitive online algorithm for solving the CLP, and (b) an O(log k · log ℓ)-competitive randomized online algorithm for solving the CIP. Here k ≤ n and ℓ ≤ m respectively denote the maximum number of non-zero entries in any row and any column of the constraint matrix A. By a result of Feige and Korman, this is the best possible for polynomial-time online algorithms, even in the special case of set cover (where $A \in \{0,1\}^{m \times n}$ and $c, u \in \{0,1\}^n$).

The novel ingredient of our approach is to allow the dual variables to increase and decrease throughout the course of the algorithm. We show that the previous approaches, which either only raise dual variables, or lower duals only within a guess-and-double framework, cannot give a performance better than O(log n), even when each constraint has only a single variable (i.e., k = 1).
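For background, the standard multiplicative-update scheme for online fractional covering (in the spirit of Buchbinder-Naor — explicitly not this paper's algorithm, whose point is the more careful dual management) looks as follows; the instance at the bottom is made up:

```python
def online_fractional_cover(rows, c, k):
    """Standard multiplicative-update online fractional covering sketch.
    Each arriving row a demands a.x >= 1; while violated, every x_j with
    a_j > 0 is raised multiplicatively, faster when a_j / c_j is large.
    x only increases, as the online model requires."""
    n = len(c)
    x = [0.0] * n
    for a in rows:
        while sum(a[j] * x[j] for j in range(n)) < 1:
            for j in range(n):
                if a[j] > 0:
                    x[j] = x[j] * (1 + a[j] / c[j]) + a[j] / (c[j] * k)
    return x

# Two set-cover-style constraints over three variables, unit costs,
# row sparsity k = 2.
x = online_fractional_cover([[1, 1, 0], [0, 1, 1]], [1.0, 1.0, 1.0], k=2)
print(x, all(s >= 1 for s in (x[0] + x[1], x[1] + x[2])))  # → [0.5, 1.5, 0.5] True
```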

Anupam Gupta, Viswanath Nagarajan
Streaming and Communication Complexity of Clique Approximation

We consider the classic clique (or, equivalently, the independent set) problem in two settings. In the streaming model, edges are given one by one in an adversarial order, and the algorithm aims to output a good approximation under space restrictions. In the communication complexity setting, two players each hold a graph on n vertices, and they wish to use a limited amount of communication to distinguish between the cases when the union of the two graphs has a low or a high clique number. The settings are related in that the communication complexity gives a lower bound on the space complexity of streaming algorithms.

We give several results that illustrate different tradeoffs between clique separability and the required communication/space complexity under randomization. The main result is a lower bound of $\Omega(\frac{n^2}{r^2\log^2{n}})$ space for any r-approximate randomized streaming algorithm for maximum clique. A simple random sampling argument shows that this is tight up to a logarithmic factor. For the case when r = o(log n), we present another lower bound of $\Omega(\frac{n^2}{r^4})$. In particular, it implies that any constant-approximation randomized streaming algorithm requires $\Omega(n^2)$ space, even if the algorithm runs in exponential time. Finally, we give a third lower bound that holds for the extremal case of s − 1 vs. $\mathcal{R}(s)-1$, where $\mathcal{R}(s)$ is the s-th Ramsey number. This is the extremal setting of clique numbers that can be separated. The proofs involve some novel combinatorial structures and sophisticated combinatorial constructions.

Magnús M. Halldórsson, Xiaoming Sun, Mario Szegedy, Chengu Wang
Distributed Private Heavy Hitters

In this paper, we give efficient algorithms and lower bounds for solving the heavy hitters problem while preserving differential privacy in the fully distributed local model. In this model, there are n parties, each of which possesses a single element from a universe of size N. The heavy hitters problem is to find the identity of the most common element shared amongst the n parties. In the local model, there is no trusted database administrator, and so the algorithm must interact with each of the n parties separately, using a differentially private protocol. We give tight information-theoretic upper and lower bounds on the accuracy to which this problem can be solved in the local model (giving a separation between the local model and the more common centralized model of privacy), as well as computationally efficient algorithms even in the case where the data universe N may be exponentially large.
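A standard local-model primitive for this setting is randomized response: each party perturbs its own element before reporting, and the curator debiases the aggregate. The sketch below illustrates the model only — it is not the paper's protocol, and the parameters and data are made up:

```python
import random
from collections import Counter

def rr_heavy_hitter(data, domain, p=0.75, rng=None):
    """Frequency estimation by randomized response.  Each party reports
    its true element with probability p and a uniformly random *other*
    element otherwise; the curator inverts the expected counts."""
    rng = rng or random.Random(0)
    N, n = len(domain), len(data)
    q = (1 - p) / (N - 1)   # chance of reporting a specific wrong item
    reports = Counter()
    for v in data:
        if rng.random() < p:
            reports[v] += 1
        else:
            reports[rng.choice([u for u in domain if u != v])] += 1
    # E[reports[v]] = n*q + (p - q) * freq(v), so invert affinely:
    return {v: (reports[v] - n * q) / (p - q) for v in domain}

rng = random.Random(1)
data = ["a"] * 1800 + ["b"] * 300 + ["c"] * 300 + ["d"] * 300 + ["e"] * 300
est = rr_heavy_hitter(data, ["a", "b", "c", "d", "e"], rng=rng)
print(max(est, key=est.get))
```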

Justin Hsu, Sanjeev Khanna, Aaron Roth
A Thirty Year Old Conjecture about Promise Problems

Even, Selman, and Yacobi [ESY84, SY82] formulated a conjecture that in current terminology asserts that there do not exist disjoint NP-pairs all of whose separators are NP-hard via Turing reductions. In this paper we consider a variant of this conjecture: there do not exist disjoint NP-pairs all of whose separators are NP-hard via bounded-truth-table reductions. We provide evidence for this conjecture. We also observe that if the original conjecture holds, then some of the known probabilistic public-key cryptosystems are not NP-hard to crack.

Andrew Hughes, A. Pavan, Nathan Russell, Alan Selman
Minimum Latency Submodular Cover

We study the submodular ranking problem in the presence of metric costs. The input to the minimum latency submodular cover (MLSC) problem consists of a metric (V, d) with source r ∈ V and m monotone submodular functions $f_1, f_2, \ldots, f_m : 2^V \to [0,1]$. The goal is to find a path originating at r that minimizes the total cover time of all functions; the cover time of function $f_i$ is the smallest value t such that $f_i$ has value one for the vertices visited within distance t along the path. This generalizes many previously studied problems, such as submodular ranking [1] when the metric is uniform, and group Steiner tree [14] when m = 1 and $f_1$ is a coverage function. We give a polynomial time $O(\log \frac{1}{\epsilon } \cdot \log^{2+\delta} |V|)$-approximation algorithm for MLSC, where ε > 0 is the smallest non-zero marginal increase of any $f_i$ and δ > 0 is any constant. This result is enabled by a simpler analysis of the submodular ranking algorithm from [1].

We also consider the stochastic submodular ranking problem, where the elements of V have random instantiations, and obtain an adaptive algorithm with an O(log 1/ε) approximation ratio, which is best possible. This result also generalizes several previously studied stochastic problems, e.g., adaptive set cover [15] and shared filter evaluation [24,23].

Sungjin Im, Viswanath Nagarajan, Ruben van der Zwaan
Constant-Time Algorithms for Sparsity Matroids

A graph G = (V, E) is called (k, ℓ)-sparse if |F| ≤ k|V(F)| − ℓ for any F ⊆ E with F ≠ ∅. Here, V(F) denotes the set of vertices incident to F. A graph G = (V, E) is called (k, ℓ)-full if G contains a (k, ℓ)-sparse subgraph with |V| vertices and k|V| − ℓ edges. The family of edge sets of (k, ℓ)-sparse subgraphs forms a family of independent sets of a matroid on E, known as the sparsity matroid of G. In this paper, we give a constant-time algorithm that approximates the rank of the sparsity matroid associated with a degree-bounded undirected graph. This algorithm leads to a constant-time tester for (k, ℓ)-fullness in the bounded-degree model (i.e., we can decide with high probability whether the input graph satisfies the property or is far from it). Depending on the values of k and ℓ, our algorithm can test various properties of graphs, such as connectivity, rigidity, and how many spanning trees can be packed, in a unified manner.
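The counting condition can be checked directly (in exponential time) on small graphs; for instance, (2, 3)-sparsity is Laman's condition from planar rigidity, and (1, 1)-sparsity characterizes forests:

```python
from itertools import combinations

def is_sparse(edges, k, l):
    """Brute-force (k, l)-sparsity test: every nonempty edge subset F must
    satisfy |F| <= k*|V(F)| - l.  Exponential; for tiny examples only."""
    for size in range(1, len(edges) + 1):
        for F in combinations(edges, size):
            verts = {v for e in F for v in e}
            if len(F) > k * len(verts) - l:
                return False
    return True

triangle = [(0, 1), (1, 2), (0, 2)]
k4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
print(is_sparse(triangle, 2, 3), is_sparse(k4, 2, 3))  # → True False
print(is_sparse(triangle, 1, 1))  # a triangle is not a forest → False
```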

Based on this result, we also propose a constant-time tester for (k, ℓ)-edge-connected-orientability in the bounded-degree model, where an undirected graph G is called (k, ℓ)-edge-connected-orientable if there exists an orientation $\vec{G}$ of G with a vertex r ∈ V such that $\vec{G}$ contains k arc-disjoint dipaths from r to each vertex v ∈ V and ℓ arc-disjoint dipaths from each vertex v ∈ V to r.

A tester is called a one-sided error tester for P if it always accepts a graph satisfying P. We show that, for any k ≥ 2 and (proper) ℓ ≥ 0, every one-sided error tester for (k, ℓ)-fullness and (k, ℓ)-edge-connected-orientability requires Ω(n) queries.

Hiro Ito, Shin-Ichi Tanigawa, Yuichi Yoshida
CRAM: Compressed Random Access Memory

We present a new data structure called the Compressed Random Access Memory (CRAM) that can store a dynamic string T of characters, e.g., representing the memory of a computer, in compressed form while achieving asymptotically almost-optimal bounds (in terms of empirical entropy) on the compression ratio. It allows short substrings of T to be decompressed and retrieved efficiently and, significantly, characters at arbitrary positions of T to be modified quickly during execution without decompressing the entire string. This can be regarded as a new type of data compression that can update a compressed file directly. Moreover, at the cost of slightly increasing the time spent per operation, the CRAM can be extended to also support insertions and deletions. Our key observation that the empirical entropy of a string does not change much after a small change to the string, as well as our simple yet efficient method for maintaining an array of variable-length blocks under length modifications, may be useful for many other applications as well.

Jesper Jansson, Kunihiko Sadakane, Wing-Kin Sung
Improving Quantum Query Complexity of Boolean Matrix Multiplication Using Graph Collision

The quantum query complexity of Boolean matrix multiplication is typically studied as a function of the matrix dimension, n, as well as the number of 1s in the output, ℓ. We prove an upper bound of $\widetilde{\mathrm{O}}(n\sqrt{\ell})$ for all values of ℓ. This is an improvement over previous algorithms for all values of ℓ. On the other hand, we show that for any ε < 1 and any $\ell \leq \epsilon n^2$, there is an $\Omega(n\sqrt{\ell})$ lower bound for this problem, showing that our algorithm is essentially tight.

We first reduce Boolean matrix multiplication to several instances of graph collision. We then provide an algorithm that takes advantage of the fact that the underlying graph in all of our instances is very dense to find all graph collisions efficiently.

Stacey Jeffery, Robin Kothari, Frédéric Magniez
Faster Fully Compressed Pattern Matching by Recompression

In this paper, a fully compressed pattern matching problem is studied. The compression is represented by straight-line programs (SLPs), i.e., context-free grammars generating exactly one string; the term fully means that both the pattern and the text are given in compressed form. The problem is approached using a recently developed technique of local recompression: the SLPs are refactored so that substrings of the pattern and text are encoded in both SLPs in the same way. To this end, the SLPs are locally decompressed and then recompressed in a uniform way.

This technique yields an $\mathcal{O}((n+m)\log M \log(n+m))$ algorithm for compressed pattern matching, where n (m) is the size of the compressed representation of the text (pattern, respectively), while M is the size of the decompressed pattern. Since $M \leq 2^m$, this substantially improves the previously best $\mathcal{O}(m^2n)$ algorithm.

Since the LZ compression standard reduces to SLP with log(N/n) overhead and in $\mathcal{O}(n \log(N/n))$ time, the presented algorithm can be applied also to the fully LZ-compressed pattern matching problem, yielding an $\mathcal{O}(s \log s \log M)$ running time, where s = n log(N/n) + m log(M/m).
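A straight-line program is simply a grammar in which every nonterminal has exactly one rule, so it derives a single string. The memoized expansion below shows why a size-m SLP can encode a string of length up to $2^m$ (the doubling grammar is a made-up example):

```python
def expand(slp, symbol):
    """Expand a straight-line program: each nonterminal maps to a tuple of
    symbols (terminals or other nonterminals); memoization keeps the
    expansion linear in the output size despite shared subtrees."""
    memo = {}
    def go(s):
        if s not in slp:            # terminal character
            return s
        if s not in memo:
            memo[s] = "".join(go(t) for t in slp[s])
        return memo[s]
    return go(symbol)

# X_i derives a string of length 2^i: a 4-rule SLP for "a" * 8.
slp = {"X0": ("a",), "X1": ("X0", "X0"), "X2": ("X1", "X1"), "X3": ("X2", "X2")}
print(expand(slp, "X3"))  # → aaaaaaaa
```

The fully compressed algorithm of the paper, of course, never performs this full expansion — that is exactly what local recompression avoids.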

Artur Jeż
NNS Lower Bounds via Metric Expansion for $l_\infty$ and EMD

We give new lower bounds for randomized NNS data structures in the cell probe model based on robust metric expansion for two metric spaces: $l_\infty$ and Earth Mover Distance (EMD) in high dimensions. In particular, our results imply stronger non-embeddability for these metric spaces into $l_1$. The main components of our approach are a strengthening of the isoperimetric inequality for the distribution on $l_\infty$ introduced by Andoni et al. [FOCS'08] and a robust isoperimetric inequality for EMD on quotients of the Boolean hypercube.

Michael Kapralov, Rina Panigrahy
Quantum Adversary (Upper) Bound

We describe a method for upper bounding the quantum query complexity of certain Boolean formula evaluation problems, using fundamental theorems about the general adversary bound. This nonconstructive method gives an upper bound on query complexity without producing an algorithm. For example, we describe an oracle problem that we prove (non-constructively) can be solved in O(1) queries, whereas the previous best quantum algorithm uses a polynomial number of queries. We then give an explicit O(1)-query algorithm based on span programs, and show that for a special case of this problem, there exists an O(1)-query algorithm that uses the quantum Haar transform. This special case is a potentially interesting problem in its own right, which we call the Haar Problem.

Shelby Kimmel
Solving Planar k-Terminal Cut in $O(n^{c \sqrt{k}})$ Time

The problem Planar k-Terminal Cut is as follows: given an undirected planar graph with edge-costs and with k vertices designated as terminals, find a minimum-cost set of edges whose removal pairwise separates the terminals. It was known that the complexity of this problem is $O(n^{2k-4} \log n)$. We show that there is a constant c such that the complexity is $O(n^{c\sqrt{k}})$. This matches a recent lower bound of Marx showing that the $c\sqrt{k}$ term in the exponent is best possible up to the constant c (assuming the Exponential Time Hypothesis).

Philip N. Klein, Dániel Marx
Fixed-Parameter Tractability of Multicut in Directed Acyclic Graphs

The Multicut problem, given a graph G, a set of terminal pairs $\ensuremath{\mathcal{T}}=\{(s_i,t_i)\ |\ 1\leq i\leq r\}$ and an integer p, asks whether one can find a cutset consisting of at most p non-terminal vertices that separates all the terminal pairs, i.e., after removing the cutset, $t_i$ is not reachable from $s_i$ for each 1 ≤ i ≤ r. The fixed-parameter tractability of Multicut in undirected graphs, parameterized by the size of the cutset only, has recently been proven by Marx and Razgon [2] and, independently, by Bousquet et al. [3], after resisting attacks as a long-standing open problem. In this paper we prove that Multicut is fixed-parameter tractable on directed acyclic graphs when parameterized both by the size of the cutset and the number of terminal pairs. We complement this result by showing that this is implausible for parameterization by the size of the cutset only, as this version of the problem remains W[1]-hard.
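The problem statement can be made concrete with a brute-force checker on a toy DAG — exhaustive enumeration of candidate cutsets, in contrast to the FPT algorithm of the paper (the digraph below is made up):

```python
from itertools import combinations

def has_multicut(n, arcs, pairs, p):
    """Brute-force Multicut: is there a set of at most p non-terminal
    vertices whose removal leaves no s_i -> t_i path?  Exponential in
    the number of non-terminals; illustration only."""
    terminals = {v for st in pairs for v in st}
    candidates = [v for v in range(n) if v not in terminals]

    def reachable(removed, s, t):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            if u == t:
                return True
            for (a, b) in arcs:
                if a == u and b not in removed and b not in seen:
                    seen.add(b)
                    stack.append(b)
        return False

    for size in range(p + 1):
        for cut in combinations(candidates, size):
            if not any(reachable(set(cut), s, t) for (s, t) in pairs):
                return True
    return False

# DAG with 0 -> 2 -> 1 and 0 -> 3 -> 1: separating (0, 1) needs both 2 and 3.
arcs = [(0, 2), (2, 1), (0, 3), (3, 1)]
print(has_multicut(4, arcs, [(0, 1)], 2), has_multicut(4, arcs, [(0, 1)], 1))  # → True False
```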

Stefan Kratsch, Marcin Pilipczuk, Michał Pilipczuk, Magnus Wahlström
Preserving Terminal Distances Using Minors

We introduce the following notion of compressing an undirected graph G with (nonnegative) edge-lengths and terminal vertices R ⊆ V(G). A distance-preserving minor is a minor G′ (of G) with possibly different edge-lengths, such that R ⊆ V(G′) and the shortest-path distance between every pair of terminals is exactly the same in G and in G′. We ask: what is the smallest $f^*(k)$ such that every graph G with k = |R| terminals admits a distance-preserving minor G′ with at most $f^*(k)$ vertices?

Simple analysis shows that $f^*(k) \leq O(k^4)$. Our main result proves that $f^*(k) \geq \Omega(k^2)$, significantly improving over the trivial $f^*(k) \geq k$. Our lower bound holds even for planar graphs G, in contrast to graphs G of constant treewidth, for which we prove that O(k) vertices suffice.

Robert Krauthgamer, Tamar Zondiner
A Rounding by Sampling Approach to the Minimum Size k-Arc Connected Subgraph Problem

In the k-arc connected subgraph problem, we are given a directed graph G and an integer k, and the goal is to find a subgraph of minimum cost such that there are at least k arc-disjoint paths between any pair of vertices. We give a simple (1 + 1/k)-approximation to the unweighted variant of the problem, where all arcs of G have the same cost. This improves on the (1 + 2/k)-approximation of Gabow et al. [GGTW09].

Similar to the 2-approximation algorithm for this problem [FJ81], our algorithm simply takes the union of a k-in-arborescence and a k-out-arborescence. The main difference is in the selection of the two arborescences. Here, inspired by the recent applications of the rounding by sampling method (see e.g. [AGM+10, MOS11, OSS11, AKS12]), we select the arborescences randomly by sampling from a distribution on unions of k arborescences that is defined based on an extreme point solution of the linear programming relaxation of the problem. In the analysis, we crucially utilize the sparsity property of the extreme point solution to upper-bound the size of the union of the sampled arborescences.

To complement the algorithm, we also show that the integrality gap of the minimum cost strongly connected subgraph problem (i.e., when k = 1) is at least 3/2 − ε, for any ε > 0. Our integrality gap instance is inspired by the integrality gap example of the asymmetric traveling salesman problem [CGK06], hence providing further evidence of connections between the approximability of the two problems.

Bundit Laekhanukit, Shayan Oveis Gharan, Mohit Singh
Classical and Quantum Partition Bound and Detector Inefficiency

We study randomized and quantum efficiency lower bounds in communication complexity. These arise from the study of zero-communication protocols in which players are allowed to abort. Our scenario is inspired by the physics setup of Bell experiments, where two players share a predefined entangled state but are not allowed to communicate. Each is given a measurement as input, which they perform on their share of the system. The outcomes of the measurements should follow a distribution predicted by quantum mechanics; however, in practice, the detectors may fail to produce an output in some of the runs. The efficiency of the experiment is the probability that neither of the detectors fails.

When the players share a quantum state, this leads to a new bound on quantum communication complexity (eff*) that subsumes the factorization norm. When players share randomness instead of a quantum state, the efficiency bound (eff) coincides with the partition bound of Jain and Klauck. This is one of the strongest lower bounds known for randomized communication complexity; it subsumes all the known combinatorial and algebraic methods, including the rectangle (corruption) bound, the factorization norm, and discrepancy. The lower bound is formulated as a convex optimization problem. In practice, the dual form is more feasible to use, and we show that it amounts to constructing an explicit Bell inequality (for eff) or Tsirelson inequality (for eff*). For one-way communication, we show that the quantum one-way partition bound is tight for classical communication with shared entanglement up to arbitrarily small error.

Sophie Laplante, Virginie Lerays, Jérémie Roland
Testing Similar Means

We consider the problem of testing a basic property of collections of distributions: having similar means. Namely, the algorithm should accept collections of distributions in which all distributions have means that do not differ by more than some given parameter, and should reject collections that are relatively far from having this property. By ‘far’ we mean that it is necessary to modify the distributions in a relatively significant manner (measured according to the $\ell_1$ distance averaged over the distributions) so as to obtain the property. We study this problem in two models. In the first model (the query model) the algorithm may ask for samples from any distribution of its choice, and in the second model (the sampling model) the distributions from which it gets samples are selected randomly. We provide upper and lower bounds in both models. In particular, in the query model the complexity of the problem is polynomial in 1/ε (where ε is the given distance parameter), while in the sampling model the complexity grows roughly as $m^{1-\mathrm{poly}(\epsilon)}$, where m is the number of distributions.

Reut Levi, Dana Ron, Ronitt Rubinfeld
The Parameterized Complexity of k-Edge Induced Subgraphs

We prove that finding a k-edge induced subgraph is fixed-parameter tractable, thereby answering an open problem of Leizhen Cai [2]. Our algorithm is based on several combinatorial observations, Gauss’ famous Eureka theorem [1], and a generalization of the well-known fpt-algorithm for the model-checking problem for first-order logic on graphs with locally bounded tree-width due to Frick and Grohe [13]. On the other hand, we show that two natural counting versions of the problem are hard. Hence, the k-edge induced subgraph problem is one of the very few known examples in parameterized complexity that are easy for decision while hard for counting.

Bingkai Lin, Yijia Chen
Converting Online Algorithms to Local Computation Algorithms

We propose a general method for converting online algorithms to local computation algorithms, by selecting a random permutation of the input and simulating a run of the online algorithm. We bound the number of steps of the algorithm using a query tree, which models the dependencies between queries. We improve previous analyses of query trees on graphs of bounded degree, and extend this improved analysis to the cases where the degrees are distributed binomially, and to a special case of bipartite graphs.

Using this method, we give a local computation algorithm for maximal matching in graphs of bounded degree, which runs in time and space $O(\log^3 n)$.

We also show how to convert a large family of load balancing algorithms (related to balls and bins problems) to local computation algorithms. This gives several local load balancing algorithms which achieve the same approximation ratios as the online algorithms, but run in $O(\log n)$ time and space.

Finally, we modify existing local computation algorithms for hypergraph 2-coloring and k-CNF and use our improved analysis to obtain better time and space bounds of $O(\log^4 n)$, removing the dependency on the maximal degree of the graph from the exponent.

Yishay Mansour, Aviad Rubinstein, Shai Vardi, Ning Xie
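The conversion idea, replacing the online arrival order by random ranks and answering a query by recursing only on lower-ranked dependencies, can be sketched for greedy maximal matching. This is a toy illustration under that assumption; the paper's $O(\log^3 n)$ bound rests on its query-tree analysis, which is not reproduced here.

```python
import random

def make_local_matching(edges, seed=0):
    """Local simulation of greedy online matching under a random order.

    edges: list of (u, v) pairs. Each edge gets a random rank standing in
    for its arrival position; an edge joins the greedy matching iff no
    adjacent edge of lower rank does. A query resolves only the edges it
    actually depends on, memoizing answers along the way.
    """
    rng = random.Random(seed)
    rank = {e: rng.random() for e in edges}
    adj = {}
    for (u, v) in edges:
        adj.setdefault(u, []).append((u, v))
        adj.setdefault(v, []).append((u, v))
    memo = {}

    def in_matching(e):
        if e in memo:
            return memo[e]
        u, v = e
        neighbors = [f for f in adj[u] + adj[v] if f != e]
        # e is matched iff no lower-ranked adjacent edge is matched;
        # ranks strictly decrease along the recursion, so it terminates.
        memo[e] = all(not in_matching(f)
                      for f in neighbors if rank[f] < rank[e])
        return memo[e]

    return in_matching

# Path a-b-c: the greedy matching contains exactly one of the two edges.
query = make_local_matching([("a", "b"), ("b", "c")])
print(query(("a", "b")) != query(("b", "c")))  # True
```

Each query inspects only a local neighborhood of the queried edge, which is exactly the source of the locality in the conversion.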
Assigning Sporadic Tasks to Unrelated Parallel Machines

We study the problem of assigning sporadic tasks to unrelated machines such that the tasks on each machine can be feasibly scheduled. Despite its importance for modern real-time systems, this problem has not been studied before. We present a polynomial-time algorithm which approximates the problem with a constant speedup factor of $11+4\sqrt{3} \approx{17.9}$ and show that any polynomial-time algorithm needs a speedup factor of at least 2, unless P = NP. In the case of a constant number of machines we give a polynomial-time approximation scheme. Key to these results are two new relaxations of the demand bound function which yield a sufficient and necessary condition for a task system on a single machine to be feasible.

Alberto Marchetti-Spaccamela, Cyriel Rutten, Suzanne van der Ster, Andreas Wiese
A Tight Lower Bound for Planar Multiway Cut with Fixed Number of Terminals

Given a planar graph with k terminal vertices, the Planar Multiway Cut problem asks for a minimum set of edges whose removal pairwise separates the terminals from each other. A classical algorithm of Dahlhaus et al. [2] solves the problem in time $n^{O(k)}$, which was very recently improved to $2^{O(k)}\cdot n^{O(\sqrt{k})}$ time by Klein and Marx [6]. Here we show the optimality of the latter algorithm: assuming the Exponential Time Hypothesis (ETH), there is no $f(k)\cdot n^{o(\sqrt{k})}$ time algorithm for Planar Multiway Cut. It also follows that the problem is W[1]-hard, answering an open question of Downey and Fellows [3].

Dániel Marx
The Power of Recourse for Online MST and TSP

We consider the online MST and TSP problems with recourse. The nodes of an unknown graph with metric edge cost appear one by one and must be connected in such a way that the resulting tree or tour has low cost. In the standard online setting, with irrevocable decisions, no algorithm can guarantee a constant competitive ratio. In our model we allow recourse actions by giving a limited budget of edge rearrangements per iteration. It has been an open question for more than 20 years whether an online algorithm equipped with a constant (amortized) budget can guarantee constant-approximate solutions [7].

As our main result, we answer this question affirmatively in an amortized setting. We introduce an algorithm that maintains a nearly optimal tree when given constant amortized budget. In the non-amortized setting, we specify a promising proof technique and conjecture a structural property of optimal solutions that would prove a constant competitive ratio with a single recourse action. It might seem rather optimistic that such a small budget should be sufficient for a significant cost improvement. However, we can prove such power of recourse in the offline setting, in which the sequence of node arrivals is known. Even this problem prohibits constant approximations if no recourse is allowed. Surprisingly, even the smallest recourse budget already improves the performance guarantee from non-constant to constant.

Unlike in classical TSP variants, the standard double-tree and shortcutting approach does not give constant guarantees in the online setting. However, a non-trivial robust shortcutting technique allows us to translate online MST results into TSP results at the loss of small factors.

Nicole Megow, Martin Skutella, José Verschae, Andreas Wiese
Geometry of Online Packing Linear Programs

We consider packing LPs with m rows where all constraint coefficients are normalized to be in the unit interval. The n columns arrive in random order and the goal is to set the corresponding decision variables irrevocably when they arrive so as to obtain a feasible solution maximizing the expected reward. Previous (1 − ε)-competitive algorithms require the right-hand side of the LP to be $\Omega (\frac{m}{\epsilon^2} \log \frac{n}{\epsilon})$, a bound that worsens with the number of columns and rows. However, the dependence on the number of columns is not required in the single-row case, and known lower bounds for the general case are also independent of n.

Our goal is to understand whether the dependence on n is required in the multi-row case, making it fundamentally harder than the single-row version. We refute this by exhibiting an algorithm which is (1 − ε)-competitive as long as the right-hand sides are $\Omega (\frac{m^2}{\epsilon^2} \log \frac{m}{\epsilon})$. Our techniques refine previous PAC-learning based approaches which interpret the online decisions as linear classifications of the columns based on sampled dual prices. The key ingredient of our improvement is a non-standard covering argument together with the realization that only when the columns of the LP belong to few 1-d subspaces can we obtain such small covers; bounding the size of the cover constructed also relies on the geometry of linear classifiers. General packing LPs are handled by perturbing the input columns, which can be seen as making the learning problem more robust.

Marco Molinaro, R. Ravi
Self-assembly with Geometric Tiles

In this work we propose a generalization of Winfree’s abstract Tile Assembly Model (aTAM) in which tile types are assigned rigid shapes, or geometries, along each tile face. We examine the number of distinct tile types needed to assemble shapes within this model, the temperature required for efficient assembly, and the problem of designing compact geometric faces to meet given compatibility specifications. We pose the following question: can complex geometric tile faces arbitrarily reduce the number of distinct tile types needed to assemble shapes? Within the most basic generalization of the aTAM, we show that the answer is no. For almost all n, at least $\Omega(\sqrt{\log n})$ tile types are required to uniquely assemble an n × n square, regardless of how much complexity is pumped into the face of each tile type. However, we show that for all n we can achieve a matching $O(\sqrt{\log n})$ tile types, beating the known lower bound of Θ(log n / loglog n) that holds for almost all n within the aTAM. Further, our result holds at temperature τ = 1. Our next result considers a geometric tile model that is a generalization of the 2-handed abstract tile assembly model, in which tile aggregates must move together through obstacle-free paths within the plane. Within this model we present a novel construction that harnesses the collision-free path requirement to allow for the unique assembly of any n × n square with a sleek O(loglog n) distinct tile types. This construction is of interest in that it is the first tile self-assembly result to harness collision-free planar translation to increase efficiency, whereas previous work has simply used the planarity restriction as a desirable quality that could be achieved at reduced efficiency. This surprisingly low tile type result further emphasizes a fundamental open question: is it possible to assemble n × n squares with O(1) distinct tile types? Essentially, how far can the trade-off between the number of distinct tile types required for an assembly and the complexity of each tile type itself be taken?

Bin Fu, Matthew J. Patitz, Robert T. Schweller, Robert Sheline
Quasi-polynomial Local Search for Restricted Max-Min Fair Allocation

The restricted max-min fair allocation problem (also known as the restricted Santa Claus problem) is one of few problems that enjoys the intriguing status of having a better estimation algorithm than approximation algorithm. Indeed, Asadpour et al. [1] proved that a certain configuration LP can be used to estimate the optimal value within a factor 1/(4 + ε), for any ε > 0, but at the same time it is not known how to efficiently find a solution with a comparable performance guarantee.

A natural question that arises from their work is whether the difference between these guarantees is inherent or due to a lack of suitable techniques. We address this problem by giving a quasi-polynomial approximation algorithm with the mentioned performance guarantee. More specifically, we modify the local search of [1] and provide a novel analysis that lets us significantly improve the bound on its running time: from $2^{O(n)}$ to $n^{O(\log n)}$. Our techniques also have the interesting property that although we use the rather complex configuration LP in the analysis, we never actually solve it, and therefore the resulting algorithm is purely combinatorial.

Lukas Polacek, Ola Svensson
Strictly-Black-Box Zero-Knowledge and Efficient Validation of Financial Transactions

Zero-Knowledge Proofs (ZKPs) are one of the most striking innovations in theoretical computer science. In practice, the prevalent ZKP methods are, at times, too complicated to be useful for real-life applications. In this paper we present a practically efficient method for ZKPs which has a wide range of applications. Specifically, motivated by the need to provide on-demand efficient validation of various financial transactions (e.g., high-volume Internet auctions), we have developed a novel, secure, and highly efficient method for validating correctness of the output of a transaction while keeping input values secret. The method applies to input values which are publicly committed to by employing generic commitment functions (even input values submitted using tamper-proof hardware with input/output access only can be used). We call these strictly-black-box (SBB) commitments. These commitments are typically much faster than public-key ones, and are the only cryptographic/security tool we give the poly-time players throughout. The general problem we solve in this work is: let SLC be a publicly known straight-line computation on n input values taken from a finite field and having k output values. The inputs are publicly committed to in an SBB manner. An Evaluator performs the SLC on the inputs and announces the output values. Upon demand the Evaluator, or a Prover acting on his behalf, can present to a Verifier a proof of correctness of the announced output values. This is done in a manner such that (1) the input values as well as all intermediate values of the SLC remain information-theoretically secret; (2) the probability that the Verifier will accept a false claim of correctness of the output values can be made exponentially small; (3) the Prover can supply any required number of proofs of correctness to multiple Verifiers; (4) the method is highly efficient. The application to financial processes is straightforward.

To this end, (1) we first use a novel technique for representation of values from a finite field which we call “split representation”; the two coordinates of the split representation are generically committed to; (2) next, the SLC is augmented by the Prover into a “translation” which is presented to the Verifier as a sequence of generically committed split representations of values; (3) using the translation, the Prover and Verifier conduct a secrecy-preserving proof of correctness of the announced SLC output values; (4) in order to exponentially reduce the probability of cheating by the Prover and also to enable multiple proofs, a novel, highly efficient method for preparation of any number of committed-to split representations of the n input values is employed. The extreme efficiency of these ZK methods is of decisive importance for large-volume applications. Secrecy-preserving validation of announced results of Vickrey auctions is our demonstrative example.

Michael O. Rabin, Yishay Mansour, S. Muthukrishnan, Moti Yung
Parameterized Tractability of Multiway Cut with Parity Constraints

In this paper, we study a parity-based generalization of the classical Multiway Cut problem. Formally, we study the Parity Multiway Cut problem, where the input is a graph G, vertex subsets $T_e$ and $T_o$ ($T = T_e \cup T_o$) called terminals, and a positive integer k, and the objective is to test whether there exists a k-sized vertex subset S such that S intersects all odd paths from v ∈ $T_o$ to T ∖ {v} and all even paths from v ∈ $T_e$ to T ∖ {v}. When $T_e = T_o$, this is precisely the classical Multiway Cut problem. If $T_o$ = ∅ then this is the Even Multiway Cut problem, and if $T_e$ = ∅ then this is the Odd Multiway Cut problem. We remark that even the problem of deciding whether there is a set of at most k vertices that intersects all odd paths between a pair of vertices s and t is NP-complete. Our primary motivation for studying this problem is the recently initiated parameterized study of parity versions of graph minors (Kawarabayashi, Reed and Wollan, FOCS 2011) and separation problems similar to Multiway Cut. The area of design of parameterized algorithms for graph separation problems has seen a lot of recent activity, which includes algorithms for Multi-Cut on undirected graphs (Marx and Razgon, STOC 2011; Bousquet, Daligault and Thomassé, STOC 2011), k-way cut (Kawarabayashi and Thorup, FOCS 2011), and Multiway Cut on directed graphs (Chitnis, Hajiaghayi and Marx, SODA 2012). A second motivation is that this problem serves as a good example to illustrate the application of a generalization of important separators which we introduce, and which can be applied even when most of the recently developed tools fail to apply. We believe that this could be a useful tool for several other separation problems as well. We obtain this generalization by dividing the graph into slices with small boundaries and applying a divide-and-conquer paradigm over these slices. We show that Parity Multiway Cut is fixed-parameter tractable (FPT) by giving an algorithm that runs in time $f(k)n^{{\mathcal{O}}(1)}$. More precisely, we show that instances of this problem with solutions of size ${\cal O}(\log \log n)$ can be solved in polynomial time. Along with this new notion of generalized important separators, our algorithm also combines several ideas used in previous parameterized algorithms for graph separation problems, including the notion of important separators and randomized selection of important sets to simplify the input instance.

Daniel Lokshtanov, M. S. Ramanujan
Set Cover Revisited: Hypergraph Cover with Hard Capacities

In this paper, we consider generalizations of classical covering problems to handle hard capacities. In the hard-capacitated set cover problem, each set additionally has a covering capacity which we are not allowed to exceed; in other words, after picking a set, we may cover at most a specified number of elements. Based on the classical results by Wolsey, an O(log n) approximation follows for this problem.

Chuzhoy and Naor [FOCS 2002] first studied the special case of unweighted vertex cover with hard capacities and developed an elegant 3-approximation for it based on rounding a natural LP relaxation. This was subsequently improved to a 2-approximation by Gandhi et al. [ICALP 2003]. These results are surprising in light of the fact that weighted vertex cover with hard capacities is at least as hard to approximate as set cover, which separates the unweighted problem from the weighted version.

The set cover hardness precludes the possibility of a constant factor approximation for the hard-capacitated vertex cover problem on weighted graphs. However, it was not known whether a better-than-logarithmic approximation is possible on unweighted multigraphs, i.e., graphs that may contain parallel edges. Neither the approach of Chuzhoy and Naor, nor the follow-up work of Gandhi et al., can handle the case of multigraphs. In fact, achieving a constant factor approximation for the hard-capacitated vertex cover problem on unweighted multigraphs was posed as an open question in Chuzhoy and Naor’s work. In this paper, we resolve this question by providing the first constant factor approximation algorithm for the vertex cover problem with hard capacities on unweighted multigraphs. Previous works also cannot handle hypergraphs, which is analogous to considering set systems where each element belongs to at most f sets. In this paper, we give an O(f)-approximation algorithm for this problem. Further, we extend these results to partial covers.

Barna Saha, Samir Khuller
On the Limits of Sparsification

Impagliazzo, Paturi and Zane (JCSS 2001) proved a sparsification lemma for k-CNFs: every k-CNF is a sub-exponential size disjunction of k-CNFs with a linear number of clauses. This lemma has subsequently played a key role in the study of the exact complexity of the satisfiability problem. A natural question is whether an analogous structural result holds for CNFs or even for broader non-uniform classes such as constant-depth circuits or Boolean formulae. We prove a very strong negative result in this connection: for every superlinear function f(n), there are CNFs of size f(n) which cannot be written as a disjunction of $2^{n-\epsilon n}$ CNFs each having a linear number of clauses, for any ε > 0. We also give a hierarchy of such non-sparsifiable CNFs: for every k, there is a k′ for which there are CNFs of size $n^{k'}$ which cannot be written as a sub-exponential size disjunction of CNFs of size $n^{k}$. Furthermore, our lower bounds hold not just against CNFs but against an arbitrary family of functions as long as the cardinality of the family is appropriately bounded.

As by-products of our result, we make progress both on questions about circuit lower bounds for depth-3 circuits and on satisfiability algorithms for constant-depth circuits. Improving on a result of Impagliazzo, Paturi and Zane, for any $f(n) = \omega(n \log n)$ we define a pseudo-random function generator with seed length f(n) such that with high probability, a function in the output of this generator does not have depth-3 circuits of size $2^{n-o(n)}$ with bounded bottom fan-in. We show that if we could decrease the seed length of our generator below n, we would get an explicit function which does not have linear-size logarithmic-depth series-parallel circuits, solving a long-standing open question.

Motivated by the question of whether CNFs sparsify into bounded-depth circuits, we show a simplification result for bounded-depth circuits: any bounded-depth circuit of linear size can be written as a sub-exponential size disjunction of linear-size constant-width CNFs. As a corollary, we show that if there is an algorithm for CNF satisfiability which runs in time $O(2^{\alpha n})$ for some fixed α < 1 on CNFs of linear size, then there is an algorithm for satisfiability of linear-size constant-depth circuits which runs in time $O(2^{(\alpha + o(1))n})$.

Rahul Santhanam, Srikanth Srinivasan
Certifying 3-Connectivity in Linear Time

One of the most noted construction methods for 3-vertex-connected graphs is due to Tutte and based on the following fact: every 3-vertex-connected graph G on more than 4 vertices contains a contractible edge, i.e., an edge whose contraction generates a 3-connected graph. This implies the existence of a sequence of edge contractions from G to $K_4$ such that every intermediate graph is 3-vertex-connected. A theorem of Barnette and Grünbaum yields a similar sequence using removals of edges instead of contractions.

We show how to compute both sequences in optimal time, improving the previously best known running time of $O(|V|^2)$ to O(|E|). Based on this result, we give a linear-time test of 3-connectivity that is certifying; finding such an algorithm has been a major open problem in the design of certifying algorithms in recent years. The 3-connectivity test is conceptually different from well-known linear-time tests of 3-connectivity; it uses a certificate that is easy to verify in time O(|E|). We also provide an optimal certifying test of 3-edge-connectivity.

Jens M. Schmidt
Epsilon-Net Method for Optimizations over Separable States

We give algorithms for the optimization problem $\max_\rho \left\langle Q , \rho\right\rangle$, where Q is a Hermitian matrix and the variable ρ is a bipartite separable quantum state. This problem lies at the heart of several problems in quantum computation and information, such as the complexity of QMA(2). While the problem is NP-hard, our algorithms are better than brute force for several instances of interest. In particular, they give PSPACE upper bounds on promise problems admitting a QMA(2) protocol in which the verifier performs only a logarithmic number of elementary gates on both proofs, as well as on the promise problem of deciding whether a bipartite local Hamiltonian has large or small ground energy. For Q ≥ 0, our algorithm runs in time exponential in $\|Q\|_F$. While the existence of such an algorithm was first proved recently by Brandão, Christandl and Yard [Proceedings of the 43rd Annual ACM Symposium on Theory of Computing, 343–352, 2011], our algorithm is conceptually simpler.

Yaoyun Shi, Xiaodi Wu
Faster Algorithms for Privately Releasing Marginals

We study the problem of releasing k-way marginals of a database $D \in (\{0,1\}^d)^n$ while preserving differential privacy. The answer to a k-way marginal query is the fraction of D’s records $x \in \{0,1\}^d$ with a given value in each of a given set of up to k columns. Marginal queries enable a rich class of statistical analyses of a dataset, and designing efficient algorithms for privately releasing marginal queries has been identified as an important open problem in private data analysis (cf. Barak et al., PODS ’07).

We give an algorithm that runs in time $d^{O(\sqrt{k})}$ and releases a private summary capable of answering any k-way marginal query with at most ±.01 error on every query, as long as $n \geq d^{O(\sqrt{k})}$. To our knowledge, ours is the first algorithm capable of privately releasing marginal queries with non-trivial worst-case accuracy guarantees in time substantially smaller than the number of k-way marginal queries, which is $d^{\Theta(k)}$ (for k ≪ d).

Justin Thaler, Jonathan Ullman, Salil Vadhan
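For concreteness, a k-way marginal query and the textbook Laplace-mechanism baseline for privatizing a single answer can be sketched as follows. This is the standard per-query mechanism only, not the paper's $d^{O(\sqrt{k})}$ summary, which answers all marginals at once.

```python
import math
import random

def marginal(db, cols, vals):
    """Exact k-way marginal: fraction of records matching `vals` on `cols`."""
    hits = sum(all(x[c] == v for c, v in zip(cols, vals)) for x in db)
    return hits / len(db)

def laplace_noise(scale, rng=random):
    """Sample Laplace(0, scale) by inverting the CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_marginal(db, cols, vals, epsilon=1.0):
    """One epsilon-differentially-private answer to a single marginal query.

    Changing one of the n records shifts the fraction by at most 1/n,
    so Laplace noise of scale 1/(epsilon * n) suffices for this query.
    """
    return marginal(db, cols, vals) + laplace_noise(1.0 / (epsilon * len(db)))

# d = 3 binary columns, n = 4 records; query the 2-way marginal
# "columns 0 and 2 both equal 1" (matched by two of the four records).
db = [(1, 0, 1), (1, 1, 1), (0, 0, 1), (1, 0, 0)]
print(marginal(db, cols=(0, 2), vals=(1, 1)))  # 0.5
```

Answering all $d^{\Theta(k)}$ marginals this way forces the noise scale to grow, which is exactly the tension the paper's summary-based release addresses.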
Stochastic Matching with Commitment

We consider the following stochastic optimization problem, first introduced by Chen et al. in [7]. We are given the vertex set of a random graph where each possible edge is present with probability $p_e$. We do not know which edges are actually present unless we scan/probe an edge. However, whenever we probe an edge and find it to be present, we are constrained to picking the edge, and both its endpoints are deleted from the graph. We wish to find a maximum matching in this model. We compare our results against the optimal omniscient algorithm that knows the edges of the graph, and present a 0.573-factor algorithm using a novel sampling technique. We also prove that no algorithm can attain a factor better than 0.898 in this model.

Kevin P. Costello, Prasad Tetali, Pushkar Tripathi
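The probe-and-commit constraint is easy to simulate. The sketch below uses a plain greedy probing order, a hypothetical baseline; the paper's 0.573-factor algorithm chooses the probing order by a more careful sampling technique.

```python
import random

def probe_and_commit(vertices, p, order, rng):
    """Greedy matching in the probe-and-commit model.

    p[e] is the presence probability of edge e. Probing an edge reveals
    it, and if it is present we MUST take it (commitment), deleting both
    endpoints. Returns the matching built by probing edges in `order`.
    """
    alive = set(vertices)
    matching = []
    for (u, v) in order:
        if u in alive and v in alive and rng.random() < p[(u, v)]:
            matching.append((u, v))  # committed: edge found present
            alive.discard(u)
            alive.discard(v)
    return matching

# Path a-b-c-d with all edges surely present: probing b-c first commits
# to it and blocks the larger matching {a-b, c-d} the omniscient
# algorithm would pick, illustrating the cost of commitment.
rng = random.Random(1)
p = {("a", "b"): 1.0, ("b", "c"): 1.0, ("c", "d"): 1.0}
m = probe_and_commit("abcd", p, order=[("b", "c"), ("a", "b"), ("c", "d")], rng=rng)
print(m)  # [('b', 'c')]
```

The gap between this greedy outcome and the omniscient matching of size 2 is precisely the ratio the paper's sampling-based probing order is designed to improve.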
Rademacher-Sketch: A Dimensionality-Reducing Embedding for Sum-Product Norms, with an Application to Earth-Mover Distance

Consider a sum-product normed space, i.e. a space of the form $Y=\ell_1^n \otimes X$, where X is another normed space. Each element in Y consists of a length-n vector of elements in X, and the norm of an element in Y is the sum of the norms of its coordinates. In this paper we show a constant-distortion embedding from the normed space $\ell_1^n \otimes X$ into a lower-dimensional normed space $\ell_1^{n'} \otimes X$, where n′ ≪ n is some value that depends on the properties of the normed space X (namely, on its Rademacher dimension). In particular, composing this embedding with another well-known embedding of Indyk [18], we get an O(1/ε)-distortion embedding from the earth-mover metric $\mathsf{EMD}_\Delta$ on the grid $[\Delta]^2$ to $\ell_1^{\Delta^{O(\epsilon )}} \otimes {\sf{EEMD}}_{\Delta^{\epsilon }}$ (where EEMD is a norm that generalizes earth-mover distance). This embedding is stronger (and simpler) than the sketching algorithm of Andoni et al. [4], which maps $\mathsf{EMD}_\Delta$ with O(1/ε) approximation into sketches of size $\Delta^{O(\epsilon)}$.

Elad Verbin, Qin Zhang
A Matrix Hyperbolic Cosine Algorithm and Applications

In this paper, we generalize Spencer’s hyperbolic cosine algorithm to the matrix-valued setting. We apply the proposed algorithm to several problems by analyzing its computational efficiency under two special cases of matrices: one in which the matrices have a group structure, and another in which they have rank one. As an application of the former case, we present a deterministic algorithm that, given the multiplication table of a finite group of size n, constructs an expanding Cayley graph of logarithmic degree in near-optimal $\mathcal{O}(n^2\log^3 n)$ time. For the latter case, we present a fast deterministic algorithm for spectral sparsification of positive semi-definite matrices, which implies an improved deterministic algorithm for spectral graph sparsification of dense graphs. In addition, we give an elementary connection between spectral sparsification of positive semi-definite matrices and element-wise matrix sparsification. As a consequence, we obtain improved element-wise sparsification algorithms for diagonally dominant-like matrices.

Anastasios Zouzias
Backmatter
Metadata
Title
Automata, Languages, and Programming
Editors
Artur Czumaj
Kurt Mehlhorn
Andrew Pitts
Roger Wattenhofer
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-31594-7
Print ISBN
978-3-642-31593-0
DOI
https://doi.org/10.1007/978-3-642-31594-7
