
2015 | Book

Automata, Languages, and Programming

42nd International Colloquium, ICALP 2015, Kyoto, Japan, July 6-10, 2015, Proceedings, Part I

Editors: Magnús M. Halldórsson, Kazuo Iwama, Naoki Kobayashi, Bettina Speckmann

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 9134 and LNCS 9135 constitutes the refereed proceedings of the 42nd International Colloquium on Automata, Languages and Programming, ICALP 2015, held in Kyoto, Japan, in July 2015. The 143 revised full papers presented were carefully reviewed and selected from 507 submissions. The papers are organized in the following three tracks: algorithms, complexity, and games; logic, semantics, automata, and theory of programming; and foundations of networked computation: models, algorithms, and information management.

Table of Contents

Frontmatter
Statistical Randomized Encodings: A Complexity Theoretic View

A randomized encoding of a function f(x) is a randomized function $$\hat{f}(x,r)$$ such that the “encoding” $$\hat{f}(x,r)$$ reveals f(x) and essentially no additional information about x. Randomized encodings of functions have found many applications in different areas of cryptography, including secure multiparty computation, efficient parallel cryptography, and verifiable computation.

We initiate a complexity-theoretic study of the class $$\mathsf{SRE}$$ of languages (or Boolean functions) that admit an efficient statistical randomized encoding. That is, $$\hat{f}(x,r)$$ can be computed in time poly(|x|), and its output distribution on input x can be sampled in time poly(|x|) given f(x), up to a small statistical distance.

We obtain the following main results.

Separating $$\mathsf{SRE}$$ from efficient computation: We give the first examples of promise problems and languages in $$\mathsf{SRE}$$ that are widely conjectured to lie outside $$\mathsf{P/poly}$$. Our candidate promise problems and languages are based on the standard Learning with Errors (LWE) assumption, a non-standard variant of the Decisional Diffie-Hellman (DDH) assumption, and the “Abelian Subgroup Membership” problem (which generalizes Quadratic Residuosity and a variant of DDH).

Separating $$\mathsf{SZK}$$ from $$\mathsf{SRE}$$: We explore the relationship of $$\mathsf{SRE}$$ with the class $$\mathsf{SZK}$$ of problems possessing statistical zero-knowledge proofs. It is known that $$\mathsf{SRE} \subseteq \mathsf{SZK}$$. We present an oracle separation demonstrating that a containment of $$\mathsf{SZK}$$ in $$\mathsf{SRE}$$ cannot be proved via relativizing techniques.

Shweta Agrawal, Yuval Ishai, Dakshita Khurana, Anat Paskin-Cherniavsky
Tighter Fourier Transform Lower Bounds

The Fourier Transform is one of the most important linear transformations used in science and engineering. Cooley and Tukey’s Fast Fourier Transform (FFT) from 1964 is a method for computing this transformation in time $$O(n\log n)$$. Achieving a matching lower bound in a reasonable computational model is one of the most important open problems in theoretical computer science. In 2014, improving on his previous work, Ailon showed that if an algorithm speeds up the FFT by a factor of $$b=b(n)\ge 1$$, then it must rely on computing, as an intermediate “bottleneck” step, a linear mapping of the input with condition number $$\Omega(b(n))$$. Our main result shows that a factor-$$b$$ speedup implies the existence of not just one but $$\Omega(n)$$ $$b$$-ill-conditioned bottlenecks occurring at $$\Omega(n)$$ different steps, each causing information from independent (orthogonal) components of the input to either overflow or underflow. This provides further evidence that beating the FFT is hard. Our result also gives the first quantitative tradeoff between computation speed and information loss in Fourier computation on fixed word size architectures. The main technical result is an entropy analysis of the Fourier transform under transformations of low trace, which is interesting in its own right.
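For concreteness, the $$O(n\log n)$$ Cooley-Tukey recursion that these lower bounds target can be sketched in a few lines; this is a textbook radix-2 implementation for illustration, not code from the paper.

```python
import cmath

def fft(a):
    """Radix-2 Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2]), fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddle = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddle          # butterfly: combine the two halves
        out[k + n // 2] = even[k] - twiddle
    return out
```

Each of the $$\log n$$ recursion levels performs $$O(n)$$ butterfly operations, yielding the $$O(n\log n)$$ total whose improvability the paper’s bottleneck argument addresses.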

Nir Ailon
Quantifying Competitiveness in Paging with Locality of Reference

The classical paging problem is to maintain a two-level memory system so that a sequence of requests to memory pages can be served with a small number of faults. Standard competitive analysis gives overly pessimistic results as it ignores the fact that real-world input sequences exhibit locality of reference. In this paper we study the paging problem using an intuitive and simple locality model that records inter-request distances in the input. A characteristic vector $$\mathcal{C}$$ defines a class of request sequences that satisfy certain properties on these distances. The concept was introduced by Panagiotou and Souza [19].

As a main contribution we develop new and improved bounds on the performance of important paging algorithms. A strength and novelty of the results is that they express algorithm performance in terms of locality parameters. In a first step we develop a new lower bound on the number of page faults incurred by an optimal offline algorithm opt. The bound is tight up to a small additive constant. Based on these expressions for opt’s cost, we obtain nearly tight upper and lower bounds on lru’s competitiveness, given any characteristic vector $$\mathcal{C}$$. The resulting ratios range between 1 and $$k$$, depending on $$\mathcal{C}$$. Furthermore, we compare lru to fifo and fwf. For the first time we show bounds that quantify the difference between lru’s performance and that of the other two strategies. The results imply that lru is strictly superior on inputs with a high degree of locality of reference. In particular, there exist general input families for which lru achieves constant competitive ratios whereas the guarantees of fifo and fwf tend to $$k$$, the size of the fast memory. Finally, we report on an experimental study that demonstrates that our theoretical bounds are very close to the experimentally observed ones. Hence we believe that our contributions bring competitive paging again closer to practice.
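The gap between lru and fifo is easy to observe empirically; below is a minimal fault-counting simulator (an illustration of the model, not the paper’s analysis), where the request sequence and cache size are made-up examples.

```python
def faults(requests, k, policy):
    """Count page faults for a fast memory of size k.
    policy is 'lru' or 'fifo'; cache[0] is the next eviction candidate."""
    cache, count = [], 0
    for p in requests:
        if p in cache:
            if policy == 'lru':          # refresh recency on a hit
                cache.remove(p)
                cache.append(p)
        else:
            count += 1
            if len(cache) == k:
                cache.pop(0)             # evict LRU page / oldest arrival
            cache.append(p)
    return count
```

On a sequence with locality such as [1, 2, 3, 1, 4, 1, 5] with k = 3, lru incurs 5 faults while fifo incurs 6, consistent with lru profiting from locality of reference.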

Susanne Albers, Dario Frascaria
Approximation Algorithms for Computing Maximin Share Allocations

We study the problem of computing maximin share guarantees, a recently introduced fairness notion. Given a set of n agents and a set of goods, the maximin share of a single agent is the best that she can guarantee to herself, if she would be allowed to partition the goods in any way she prefers, into n bundles, and then receive her least desirable bundle. The objective then in our problem is to find a partition, so that each agent is guaranteed her maximin share. In settings with indivisible goods, such allocations are not guaranteed to exist, hence, we resort to approximation algorithms. Our main result is a 2/3-approximation that runs in polynomial time for any number of agents. This improves upon the algorithm of Procaccia and Wang [14], which also produces a 2/3-approximation but runs in polynomial time only for a constant number of agents. We then investigate the intriguing case of 3 agents, for which it is already known that exact maximin share allocations do not always exist. We provide a 6/7-approximation algorithm for this case, improving on the currently known ratio of 3/4. Finally, we undertake a probabilistic analysis. We prove that in randomly generated instances, with high probability there exists a maximin share allocation. This can be seen as a justification of the experimental evidence reported in [5, 14] that maximin share allocations exist almost always.
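As a toy illustration of the definition (not of the paper’s algorithm), the maximin share of an agent with additive valuations can be brute-forced over all ways of splitting the goods into n bundles; the values below are made up.

```python
from itertools import product

def maximin_share(values, n):
    """Maximin share for additive valuations `values` over indivisible
    goods: the best, over all n-bundle partitions, of the worst bundle."""
    best = 0
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for value, b in zip(values, assignment):
            bundles[b] += value          # assign each good to its bundle
        best = max(best, min(bundles))   # agent receives the worst bundle
    return best
```

For example, with goods valued [7, 5, 4, 3, 2] and n = 2, the agent can guarantee 10 via the split {7, 3} versus {5, 4, 2}; every partition leaves one bundle at 10 or below.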

Georgios Amanatidis, Evangelos Markakis, Afshin Nikzad, Amin Saberi
Envy-Free Pricing in Large Markets: Approximating Revenue and Welfare

We study the classic setting of envy-free pricing, in which a single seller chooses prices for its many items, with the goal of maximizing revenue once the items are allocated. Despite the large body of work addressing such settings, most versions of this problem have resisted good approximation factors for maximizing revenue; this is true even for the classic unit-demand case. In this paper, we study envy-free pricing with unit-demand buyers, but unlike previous work we focus on large markets: ones in which the demand of each buyer is infinitesimally small compared to the size of the overall market. We assume that the buyer valuations for the items they desire have a nice (although reasonable) structure, i.e., that the aggregate buyer demand has a monotone hazard rate and that the values of every buyer type come from the same support.

For such large markets, our main contribution is a 1.88-approximation algorithm for maximizing revenue, showing that good pricing schemes can be computed when the number of buyers is large. We also give an $$(e,2)$$-bicriteria algorithm that simultaneously approximates both maximum revenue and welfare, thus showing that it is possible to obtain both good revenue and good welfare at the same time. We further generalize our results by relaxing some of our assumptions, and quantify the necessary tradeoffs between revenue and welfare in our setting. Our results are the first known approximations for large markets, and crucially rely on new lower bounds which we prove for the profit-maximizing solutions.

Elliot Anshelevich, Koushik Kar, Shreyas Sekar
Batched Point Location in SINR Diagrams via Algebraic Tools

The SINR model for the quality of wireless connections has been the subject of extensive recent study. It attempts to predict whether a particular transmitter is heard at a specific location, in a setting consisting of n simultaneous transmitters and background noise. The SINR model gives rise to a natural geometric object, the SINR diagram, which partitions the space into n regions where each of the transmitters can be heard and the remaining space where no transmitter can be heard.

Efficient point location in the SINR diagram, i.e., building a data structure that can determine, for a query point, whether any transmitter is heard there, and if so, which one, has recently been investigated in several papers. These planar data structures are constructed in time at least quadratic in n and support logarithmic-time approximate queries. Moreover, the performance of some of the proposed structures depends strongly not only on the number n of transmitters and on the approximation parameter $$\varepsilon$$, but also on some geometric parameters that cannot be bounded a priori as a function of n or $$\varepsilon$$.

In this paper, we address the question of batched point location queries, i.e., answering many queries simultaneously. Specifically, in one dimension, we can answer n queries exactly in amortized polylogarithmic time per query, while in the plane we can do it approximately.

All these results can handle arbitrary power assignments to the transmitters. Moreover, the amortized query time in these results depends only on n and $$\varepsilon$$.

Finally, these results demonstrate the (so far underutilized) power of combining algebraic tools with those of computational geometry and other fields.

Boris Aronov, Matthew J. Katz
On the Randomized Competitive Ratio of Reordering Buffer Management with Non-Uniform Costs

Reordering buffer management (RBM) is an elegant theoretical model that captures the tradeoff between buffer size and switching costs for a variety of reordering/sequencing problems. In this problem, colored items arrive over time and are placed in a buffer of size $$k$$. When the buffer becomes full, an item must be removed from the buffer. A penalty cost is incurred each time the sequence of removed items switches colors. In the non-uniform cost model, there is a weight $$w_c$$ associated with each color $$c$$, and the cost of switching to color $$c$$ is $$w_c$$. The goal is to minimize the total cost of the output sequence, using the buffer to rearrange the input sequence.

Recently, a randomized $$O(\log\log k)$$-competitive online algorithm was given for the case that all colors have the same weight (FOCS 2013). This is an exponential improvement over the nearly tight bound of $$O(\sqrt{\log k})$$ on the deterministic competitive ratio of that version of the problem (Adamaszek et al., STOC 2011). In this paper, we give an $$O((\log \log k\gamma)^2)$$-competitive algorithm for the non-uniform case, where $$\gamma$$ is the ratio of the maximum to minimum color weight. Our work demonstrates that randomness can achieve exponential improvement in the competitive ratio even for the non-uniform case.
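To make the model concrete, here is a simple greedy heuristic for the non-uniform cost model (purely illustrative, with made-up inputs; the paper’s online algorithm is far more involved): keep emitting the current color while the buffer still holds it, otherwise pay to switch to the cheapest color present.

```python
def greedy_rbm(seq, k, w):
    """Greedy reordering buffer: buffer up to k items; on eviction, stay
    with the current output color if possible, else pay w[c] to switch
    to the cheapest color c in the buffer."""
    buf, cur, cost = [], None, 0
    for c in seq + [None] * k:           # None entries flush the buffer
        if c is not None:
            buf.append(c)
            if len(buf) < k:
                continue                 # only evict once the buffer is full
        if not buf:
            continue
        if cur in buf:
            evict = cur                  # same color: no switching cost
        else:
            evict = min(set(buf), key=lambda x: w[x])
            cost += w[evict]             # color switch: pay its weight
            cur = evict
        buf.remove(evict)
    return cost
```

On input ['r', 'r', 'b', 'r', 'b'] with k = 2 and weights {'r': 1, 'b': 5}, the buffer lets the greedy output r, r, r, b, b, paying one switch of each color for a total cost of 6.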

Noa Avigdor-Elgrabli, Sungjin Im, Benjamin Moseley, Yuval Rabani
Serving in the Dark should be done Non-Uniformly

We study the following balls and bins stochastic game between a player and an adversary: there are B bins and a sequence of ball arrival and extraction events. In an arrival event a ball is stored in an empty bin chosen by the adversary and discarded if no bin is empty. In an extraction event, an algorithm selects a bin, clears it, and gains its content. We are interested in analyzing the gain of an algorithm which serves in the dark without any feedback at all, i.e., does not see the sequence, the content of the bins, or even the content of the cleared bins (an oblivious algorithm). We compare that gain to the gain of an optimal, open-eyes strategy that gets the same online sequence. We name this gain ratio the “loss of serving in the dark”.

The randomized algorithm that was previously analyzed chooses a bin independently and uniformly at random, which results in a competitive ratio of about 1.69. We show that although no information is ever provided to the algorithm, using a non-uniform probability distribution reduces the competitive ratio. Specifically, we design a 1.55-competitive algorithm and establish a lower bound of 1.5. We also prove a lower bound of 2 against any deterministic algorithm. This matches the performance of the 2-competitive round-robin strategy. Finally, we present an application relating to a prompt mechanism for bounded-capacity auctions.
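A tiny simulator makes the rules of the game concrete (the adversary here naively fills the first empty bin; this illustrates the model only, not the paper’s bounds):

```python
def play(events, B, choose):
    """Simulate the game. events: 'a' = arrival, 'x' = extraction.
    choose(t) is the oblivious algorithm's bin for the t-th extraction."""
    bins = [0] * B
    gain = t = 0
    for e in events:
        if e == 'a':
            for i in range(B):
                if bins[i] == 0:     # adversary stores in some empty bin
                    bins[i] = 1
                    break
            # if no bin was empty, the ball is simply discarded
        else:
            i = choose(t)
            t += 1
            gain += bins[i]          # gain the bin's content, then clear it
            bins[i] = 0
    return gain
```

The round-robin strategy mentioned above is choose = lambda t: t % B; on the sequence "aaxxax" with B = 2 it happens to collect all three balls.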

Yossi Azar, Ilan Reuven Cohen
Finding the Median (Obliviously) with Bounded Space

We prove that any oblivious algorithm using space $$S$$ to find the median of a list of $$n$$ integers from $$\{1,\ldots,2n\}$$ requires time $$\varOmega(n\log\log_S n)$$. This bound also applies to the problem of determining whether the median is odd or even. It is nearly optimal since Chan, following Munro and Raman, has shown that there is a (randomized) selection algorithm using only $$s$$ registers, each of which can store an input value or an $$O(\log n)$$-bit counter, that makes only $$O(\log\log_s n)$$ passes over the input. The bound also implies a size lower bound for read-once branching programs computing the low-order bit of the median, and implies the analog of $$\mathsf{P}\ne\mathsf{NP}\cap\mathsf{coNP}$$ for length-$$o(n\log\log n)$$ oblivious branching programs.

Paul Beame, Vincent Liew, Mihai Pǎtraşcu
Approximation Algorithms for Min-Sum k-Clustering and Balanced k-Median

We consider two closely related fundamental clustering problems in this paper. In the Min-Sum k-Clustering problem, one is given a metric space and has to partition the points into k clusters while minimizing the total pairwise distances between the points assigned to the same cluster. In the Balanced k-Median problem, the instance is the same and one has to obtain a partitioning into k clusters $$C_1,\ldots,C_k$$, where each cluster $$C_i$$ has a center $$c_i$$, while minimizing the total assignment costs for the points in the metric; here the cost of assigning a point j to a cluster $$C_i$$ is equal to $$|C_i|$$ times the distance between j and $$c_i$$ in the metric.

In this paper, we present an $$O(\log n)$$-approximation for both these problems, where n is the number of points in the metric that are to be served. This is an improvement over the $$O(\epsilon^{-1}\log^{1+\epsilon} n)$$-approximation (for any constant $$\epsilon > 0$$) obtained by Bartal, Charikar, and Raz [STOC ’01]. We also obtain a quasi-PTAS for Balanced k-Median in metrics with constant doubling dimension.

As in the work of Bartal et al., our approximation for general metrics uses embeddings into tree metrics. The main technical contribution in this paper is an O(1)-approximation for Balanced k-Median in hierarchically separated trees (HSTs). Our improvement comes from a more direct dynamic programming approach that heavily exploits properties of standard HSTs. In this way, we avoid the reduction to special types of HSTs that were considered by Bartal et al., thereby avoiding an additional $$O(\epsilon^{-1}\log^\epsilon n)$$ loss.
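For intuition on the Min-Sum k-Clustering objective, here is an exhaustive-search evaluation on a tiny made-up metric (illustrative only; it bears no resemblance to the paper’s tree-embedding algorithm):

```python
from itertools import product

def min_sum_k_clustering(dist, k):
    """Exhaustive Min-Sum k-Clustering on a tiny metric: minimize the
    total pairwise distance within clusters, over all k-labelings."""
    n = len(dist)
    best = float('inf')
    for label in product(range(k), repeat=n):
        cost = sum(dist[i][j]
                   for i in range(n) for j in range(i + 1, n)
                   if label[i] == label[j])   # only same-cluster pairs count
        best = min(best, cost)
    return best
```

Four points at 0, 1, 10, 11 on the line with k = 2 are optimally split into {0, 1} and {10, 11} for a total cost of 2.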

Babak Behsaz, Zachary Friggstad, Mohammad R. Salavatipour, Rohit Sivakumar
Solving Linear Programming with Constraints Unknown

What is the value of input information in solving linear programming? The celebrated ellipsoid algorithm tells us that the full information of input constraints is not necessary; the algorithm works as long as there exists an oracle that, on a proposed candidate solution, returns a violation in the form of a separating hyperplane. Can linear programming still be efficiently solved if the returned violation is in other formats?

Motivated by some real-world scenarios, we study this question in a trial-and-error framework: there is an oracle that, upon a proposed solution, returns the index of a violated constraint (with the content of the constraint still hidden). When more than one constraint is violated, two variants of the model are investigated. (1) The oracle returns the index of a “most violated” constraint, measured by the Euclidean distance between the proposed solution and the half-spaces defined by the constraints. In this case, the LP can be efficiently solved (under a mild non-degeneracy condition). (2) The oracle returns the index of an arbitrary (i.e., worst-case) violated constraint. In this case, we give an algorithm with running time exponential in the number of variables. We then show that the exponential dependence on n is unfortunately necessary even for the query complexity. Together, these results shed light on the amount of information one needs in order to solve a linear program efficiently.

The proofs of the results employ a variety of geometric techniques, including the weighted spherical Voronoi diagram and the furthest Voronoi diagram.

Xiaohui Bei, Ning Chen, Shengyu Zhang
Deterministic Randomness Extraction from Generalized and Distributed Santha-Vazirani Sources

A Santha-Vazirani (SV) source is a sequence of random bits where the conditional distribution of each bit, given the previous bits, can be partially controlled by an adversary. Santha and Vazirani show that deterministic randomness extraction from these sources is impossible. In this paper, we study the generalization of SV sources for non-binary sequences. We show that unlike the binary case, deterministic randomness extraction in the generalized case is sometimes possible. We present a necessary condition and a sufficient condition for the possibility of deterministic randomness extraction. These two conditions coincide in “non-degenerate” cases.

Next, we turn to a distributed setting. In this setting the SV source consists of a random sequence of pairs $$(a_1, b_1), (a_2, b_2), \ldots$$ distributed between two parties, where the first party receives the $$a_i$$’s and the second one receives the $$b_i$$’s. The goal of the two parties is to extract common randomness without communication. Using the notion of maximal correlation, we prove a necessary condition and a sufficient condition for the possibility of common randomness extraction from these sources. Based on these two conditions, the problem of common randomness extraction essentially reduces to the problem of randomness extraction from (non-distributed) SV sources. This result generalizes results of Gács and Körner, and of Witsenhausen, about common randomness extraction from i.i.d. sources to adversarial sources.

Salman Beigi, Omid Etesami, Amin Gohari
Limitations of Algebraic Approaches to Graph Isomorphism Testing

We investigate the power of graph isomorphism algorithms based on algebraic reasoning techniques like Gröbner basis computation. The idea of these algorithms is to encode two graphs into a system of equations that is satisfiable if and only if the graphs are isomorphic, and then to (try to) decide satisfiability of the system using, for example, the Gröbner basis algorithm. In some cases this can be done in polynomial time, in particular if the equations admit a bounded-degree refutation in an algebraic proof system such as Nullstellensatz or polynomial calculus. We prove linear lower bounds on the polynomial calculus degree over all fields of characteristic $$\ne 2$$, and also linear lower bounds for the degree of Positivstellensatz calculus derivations.

We compare this approach to recently studied linear and semidefinite programming approaches to isomorphism testing, which are known to be related to the combinatorial Weisfeiler-Lehman algorithm. We exactly characterise the power of the Weisfeiler-Lehman algorithm in terms of an algebraic proof system that lies between degree-k Nullstellensatz and degree-k polynomial calculus.

Christoph Berkholz, Martin Grohe
Fully Dynamic Matching in Bipartite Graphs

We present two fully dynamic algorithms for maximum cardinality matching in bipartite graphs. Our main result is a deterministic algorithm that maintains a $$(3/2 + \epsilon)$$ approximation in worst-case update time $$O(m^{1/4}\epsilon^{-2.5})$$. This algorithm is polynomially faster than all previous deterministic algorithms for any constant approximation, and faster than all previous algorithms (randomized included) that achieve a better-than-2 approximation. We also give stronger results for bipartite graphs whose arboricity is at most $$\alpha$$, achieving a $$(1+\epsilon)$$ approximation in worst-case update time $$O(\alpha(\alpha + \log(n)) + \epsilon^{-4}(\alpha + \log(n)) + \epsilon^{-6})$$, which is $$O(\alpha(\alpha + \log n))$$ for constant $$\epsilon$$. Previous results for small-arboricity graphs had similar update times but could only maintain a maximal matching (a 2-approximation). All these previous algorithms, however, were not limited to bipartite graphs.

Aaron Bernstein, Cliff Stein
Feasible Interpolation for QBF Resolution Calculi

In sharp contrast to classical proof complexity, we are currently short of lower bound techniques for QBF proof systems. We establish the feasible interpolation technique for all resolution-based QBF systems, whether modelling CDCL or expansion-based solving. This both provides the first general lower bound method for QBF calculi and largely extends the scope of classical feasible interpolation. We apply our technique to obtain new exponential lower bounds for all resolution-based QBF systems on a new class of QBF formulas based on the clique problem. Finally, we show how feasible interpolation relates to the recently established lower bound method based on strategy extraction [7].

Olaf Beyersdorff, Leroy Chew, Meena Mahajan, Anil Shukla
Simultaneous Approximation of Constraint Satisfaction Problems

Given k collections of 2SAT clauses on the same set of variables V, can we find one assignment that satisfies a large fraction of clauses from each collection? We consider such simultaneous constraint satisfaction problems, and design the first nontrivial approximation algorithms in this context.

Our main result is that for every CSP $${\mathcal {F}}$$, for , there is a polynomial-time constant-factor Pareto approximation algorithm for k simultaneous Max-$${\mathcal {F}}$$-CSP instances. Our methods are quite general, and we also use them to give an improved approximation factor for simultaneous Max-w-SAT (for ). In contrast, for $$k = \omega (\log n)$$, no nonzero approximation factor for k simultaneous Max-$${\mathcal {F}}$$-CSP instances can be achieved in polynomial time (assuming the Exponential Time Hypothesis).

These problems are a natural meeting point for the theory of constraint satisfaction problems and multiobjective optimization. We also suggest a number of interesting directions for future research.

These problems are a natural meeting point for the theory of constraint satisfaction problems and multiobjective optimization. We also suggest a number of interesting directions for future research.

Amey Bhangale, Swastik Kopparty, Sushant Sachdeva
Design of Dynamic Algorithms via Primal-Dual Method

In this paper, we develop a dynamic version of the primal-dual method for optimization problems, and apply it to obtain the following results. (1) For the dynamic set-cover problem, we maintain an $$O(f^2)$$-approximately optimal solution in $$O(f \cdot \log (m+n))$$ amortized update time, where $$f$$ is the maximum “frequency” of an element, $$n$$ is the number of sets, and $$m$$ is the maximum number of elements in the universe at any point in time. (2) For the dynamic $$b$$-matching problem, we maintain an $$O(1)$$-approximately optimal solution in $$O(\log^3 n)$$ amortized update time, where $$n$$ is the number of nodes in the graph.
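The static primal-dual template that such results dynamize can be sketched for set cover (a standard textbook f-approximation with hypothetical example data, not the paper’s dynamic algorithm): raise the dual variable of an uncovered element until some set containing it becomes tight, then add that set to the cover.

```python
def primal_dual_set_cover(sets, costs):
    """sets: name -> frozenset of elements; costs: name -> cost.
    Returns a cover of cost at most f * OPT, where f is the maximum
    number of sets any single element appears in."""
    universe = sorted(set().union(*sets.values()))
    slack = dict(costs)                  # remaining slack c_S - sum of duals
    cover, covered = [], set()
    for e in universe:
        if e in covered:
            continue
        containing = [s for s in sets if e in sets[s]]
        tight = min(containing, key=lambda s: slack[s])
        inc = slack[tight]               # raise y_e until one set goes tight
        for s in containing:
            slack[s] -= inc
        cover.append(tight)              # the tight set enters the cover
        covered |= sets[tight]
    return cover
```

On the toy instance {'A': {1, 2}, 'B': {2, 3}, 'C': {3}} with costs 1, 1, 0.5, the method buys A for element 1 and C for element 3, which is optimal here.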

Sayan Bhattacharya, Monika Henzinger, Giuseppe F. Italiano
What Percentage of Programs Halt?

Fix an optimal Turing machine U and for each n consider the ratio $$\rho^U_n$$ of the number of halting programs of length at most n to the total number of such programs. Does this quantity have a limit value? In this paper, we show that it does not, and further characterise the reals which can be the limsup of such a sequence $$\rho^U_n$$. We also study, for a given optimal machine U, how hard it is to approximate the domain of U from the point of view of coarse and generic computability.

Laurent Bienvenu, Damien Desfontaines, Alexander Shen
The Parity of Set Systems Under Random Restrictions with Applications to Exponential Time Problems

We reduce the problem of detecting the existence of an object to the problem of computing the parity of the number of objects in question. In particular, when given any non-empty set system, we prove that randomly restricting elements of its ground set makes the size of the restricted set system an odd number with significant probability. When compared to previously known reductions of this type, ours excel in their simplicity: for graph problems, restricting elements of the ground set usually corresponds to simple deletion and contraction operations, which can be encoded efficiently in most problems. We find three applications of our reductions:

1. An exponential-time algorithm: We show how to decide Hamiltonicity in directed $$n$$-vertex graphs with running time $$1.9999^n$$ provided that the graph has at most $$1.0385^n$$ Hamiltonian cycles. We do so by reducing to the algorithm of Björklund and Husfeldt (FOCS 2013) that computes the parity of the number of Hamiltonian cycles in time $$1.619^n$$.

2. A new result in the framework of Cygan et al. (CCC 2012) for analyzing the complexity of NP-hard problems under the Strong Exponential Time Hypothesis: If the parity of the number of Set Covers can be determined in time $$1.9999^n$$, then Set Cover can be decided in the same time.

3. A structural result in parameterized complexity: We define the parameterized complexity class $$\oplus$$W[1] and prove that it is at least as hard as W[1] under randomized fpt-reductions with bounded one-sided error; this is analogous to the classical result $$\mathrm{NP \subseteq RP^{\oplus P}}$$ by Toda (SICOMP 1991).

Andreas Björklund, Holger Dell, Thore Husfeldt
Spotting Trees with Few Leaves

We show two results related to the Hamiltonicity and $$k$$-Path algorithms in undirected graphs by Björklund [FOCS ’10] and Björklund et al. [arXiv ’10]. First, we demonstrate that the technique used can be generalized to finding some $$k$$-vertex tree with $$l$$ leaves in an $$n$$-vertex undirected graph in $$O^*(1.657^k 2^{l/2})$$ time. It can be applied as a subroutine to solve the $$k$$-Internal Spanning Tree ($$k$$-IST) problem in $$O^*(\min(3.455^k, 1.946^n))$$ time using polynomial space, improving upon previous algorithms for this problem. In particular, for the first time, we break the natural barrier of $$O^*(2^n)$$. Second, we show that the iterated random bipartition employed by the algorithm can be improved whenever the host graph admits a vertex coloring with few colors; it can be an ordinary proper vertex coloring, a fractional vertex coloring, or a vector coloring. In effect, we show improved bounds for $$k$$-Path and Hamiltonicity in any graph of maximum degree $$\Delta = 4,\ldots,12$$ or with vector chromatic number at most $$8$$.

Andreas Björklund, Vikram Kamat, Łukasz Kowalik, Meirav Zehavi
Constraint Satisfaction Problems over the Integers with Successor

A distance constraint satisfaction problem is a constraint satisfaction problem (CSP) whose constraint language consists of relations that are first-order definable over $$({\mathbb Z};{{\mathrm{succ}}})$$, i.e., over the integers with the successor function. Our main result says that every distance CSP is in P or NP-complete, unless it can be formulated as a finite-domain CSP, in which case the computational complexity is not known in general.

Manuel Bodirsky, Barnaby Martin, Antoine Mottet
Hardness Amplification and the Approximate Degree of Constant-Depth Circuits

We establish a generic form of hardness amplification for the approximability of constant-depth Boolean circuits by polynomials. Specifically, we show that if a Boolean circuit cannot be pointwise approximated by low-degree polynomials to within constant error in a certain one-sided sense, then an OR of disjoint copies of that circuit cannot be pointwise approximated even with very high error. As our main application, we show that for every sequence of degrees $$d(n)$$, there is an explicit depth-three circuit $$F: \{-1,1\}^n \rightarrow \{-1,1\}$$ of polynomial size such that any degree-$$d$$ polynomial cannot pointwise approximate $$F$$ to error better than $$1-\exp(-\tilde{\Omega}(nd^{-3/2}))$$. As a consequence of our main result, we obtain an $$\exp(-\tilde{\Omega}(n^{2/5}))$$ upper bound on the discrepancy of a function in AC$$^{0}$$, and an $$\exp(\tilde{\Omega}(n^{2/5}))$$ lower bound on the threshold weight of AC$$^{0}$$, improving over the previous best results of $$\exp(-\Omega(n^{1/3}))$$ and $$\exp(\Omega(n^{1/3}))$$, respectively.

Our techniques also yield a new lower bound of $$\Omega(n^{1/2}/\log^{(d-2)/2}(n))$$ on the approximate degree of the AND-OR tree of depth $$d$$, which is tight up to polylogarithmic factors for any constant $$d$$, as well as new bounds for read-once DNF formulas. In turn, these results imply new lower bounds on the communication and circuit complexity of these classes, and demonstrate strong limitations on existing PAC learning algorithms.

Mark Bun, Justin Thaler
Algorithms and Complexity for Turaev-Viro Invariants

The Turaev-Viro invariants are a powerful family of topological invariants for distinguishing between different 3-manifolds. They are invaluable for mathematical software, but current algorithms to compute them require exponential time.

The invariants are parameterised by an integer $$r \ge 3$$. We resolve the question of complexity for $$r=3$$ and $$r=4$$, giving simple proofs that computing Turaev-Viro invariants for $$r=3$$ is polynomial time, but for $$r=4$$ is #P-hard. Moreover, we give an explicit fixed-parameter tractable algorithm for arbitrary $$r$$, and show through concrete implementation and experimentation that this algorithm is practical—and indeed preferable—to the prior state of the art for real computation.

Benjamin A. Burton, Clément Maria, Jonathan Spreer
Big Data on the Rise?
Testing Monotonicity of Distributions

The field of property testing of probability distributions, or distribution testing, aims to provide fast and (most likely) correct answers to questions pertaining to specific aspects of very large datasets. In this work, we consider a property of particular interest, monotonicity of distributions. We focus on the complexity of monotonicity testing across different models of access to the distributions [5, 7, 8, 20]; and obtain results in these new settings that differ significantly (and somewhat surprisingly) from the known bounds in the standard sampling model [1].

Clément L. Canonne
Unit Interval Editing is Fixed-Parameter Tractable

Given a graph $$G$$ and integers $$k_1$$, $$k_2$$, and $$k_3$$, the unit interval editing problem asks whether $$G$$ can be transformed into a unit interval graph by at most $$k_1$$ vertex deletions, $$k_2$$ edge deletions, and $$k_3$$ edge additions. We give an algorithm solving the problem in $$2^{O(k\log k)}\cdot (n+m)$$ time, where $$k := k_1 + k_2 + k_3$$, and $$n, m$$ denote respectively the numbers of vertices and edges of $$G$$. Therefore, it is fixed-parameter tractable parameterized by the total number of allowed operations.

This implies the fixed-parameter tractability of the unit interval edge deletion problem, for which we also present a more efficient algorithm running in time $$O(4^k \cdot (n + m))$$. Another result is an $$O(6^k \cdot (n + m))$$-time algorithm for the unit interval vertex deletion problem, significantly improving the best-known algorithm running in time $$O(6^k \cdot n^6)$$.

Yixin Cao
Streaming Algorithms for Submodular Function Maximization

We consider the problem of maximizing a nonnegative submodular set function $$f:2^{\mathcal {N}} \rightarrow \mathbb {R}^+$$ subject to a p-matchoid constraint in the single-pass streaming setting. Previous work in this context has considered streaming algorithms for modular functions and monotone submodular functions. The main result is for submodular functions that are non-monotone. We describe deterministic and randomized algorithms that obtain an $$\varOmega (\frac{1}{p})$$-approximation using $$O(k \log k)$$ space, where k is an upper bound on the cardinality of the desired set. The model assumes value oracle access to f and membership oracles for the matroids defining the p-matchoid constraint.
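The abstract fixes the model (one pass, value-oracle access, memory of roughly k elements) without describing the algorithm itself. As a purely illustrative sketch of that model, here is a generic single-pass thresholding rule for the simplest special case, a cardinality constraint with a monotone coverage function; the threshold `tau` and all names are assumptions for illustration, not the paper's method.

```python
def stream_greedy(stream, f, k, tau):
    """Keep an element if its marginal value f(S+e) - f(S) is at least tau,
    storing at most k elements (one pass, O(k) memory)."""
    S = []
    for e in stream:
        if len(S) < k and f(S + [e]) - f(S) >= tau:
            S.append(e)
    return S

# Example oracle: a coverage function (monotone submodular).
sets = {1: {"a", "b"}, 2: {"b"}, 3: {"c", "d"}, 4: {"a"}}

def coverage(S):
    covered = set()
    for e in S:
        covered |= sets[e]
    return len(covered)

picked = stream_greedy([1, 2, 3, 4], coverage, k=2, tau=1)
# Element 2 adds nothing beyond {a, b}, so the pass keeps elements 1 and 3.
```

A real p-matchoid constraint would replace the `len(S) < k` test with membership-oracle calls to the underlying matroids.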

Chandra Chekuri, Shalmoli Gupta, Kent Quanrud
Multilinear Pseudorandom Functions

We define the new notion of a multilinear pseudorandom function (PRF), and give a construction with a proof of security assuming the hardness of the decisional Diffie-Hellman problem. A direct application of our construction yields (non-multilinear) PRFs with aggregate security from the same assumption, resolving an open question in [CGV15]. Additionally, multilinear PRFs give a new way of viewing existing algebraic PRF constructions: our main theorem implies they too satisfy aggregate security.

Aloni Cohen, Justin Holmgren
Zero-Fixing Extractors for Sub-Logarithmic Entropy

An (n, k)-bit-fixing source is a distribution on n bit strings that is fixed on $$n-k$$ of the coordinates, and jointly uniform on the remaining k bits. Explicit constructions of bit-fixing extractors by Gabizon, Raz and Shaltiel [SICOMP 2006] and Rao [CCC 2009] extract $$(1-o(1)) \cdot k$$ bits for $$k = \mathrm{poly}\log {n}$$, almost matching the probabilistic argument. Intriguingly, unlike other well-studied sources of randomness, a result of Kamp and Zuckerman [SICOMP 2006] shows that, for any k, some small portion of the entropy in an (n, k)-bit-fixing source can be extracted. Although the extractor does not extract all the entropy, it does extract $$\log (k)/2$$ bits.

In this paper we prove that when the entropy k is small enough compared to n, this exponential entropy loss is unavoidable. More precisely, we show that for $$n > \mathsf {Tower}(k^2)$$ one cannot extract more than $$\log (k)/2 + O(1)$$ bits from (n, k)-bit-fixing sources. The remaining entropy is inaccessible, information theoretically. By the Kamp-Zuckerman construction, this negative result is tight. For small enough k, this strengthens a result by Reshef and Vadhan [RSA 2013], who proved a similar bound for extractors computable by space-bounded streaming algorithms.

Our impossibility result also holds for what we call zero-fixing sources. These are bit-fixing sources where the fixed bits are set to 0. We complement our negative result by giving an explicit construction of an (n, k)-zero-fixing extractor that outputs $$\Omega (k)$$ bits for $$k \ge \mathrm{poly}\log \log {n}$$. Finally, we give a construction of an (n, k)-bit-fixing extractor that outputs $$k-O(1)$$ bits, for entropy $$k = (1+o(1)) \cdot \log \log {n}$$, with running time $$n^{O(( \log {\log {n}})^2)}$$. This answers an open problem by Reshef and Vadhan [RSA 2013].
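A minimal sampler for the source defined above may help fix ideas; `free_positions` and the helper name are illustrative choices, not notation from the paper.

```python
import random

def sample_bit_fixing(n, free_positions, fixed_value=0):
    """Sample from an (n, k)-bit-fixing source: bits at free_positions
    (k of them) are jointly uniform, every other coordinate is fixed.
    With fixed_value=0 this is a zero-fixing source."""
    x = [fixed_value] * n
    for i in free_positions:
        x[i] = random.randint(0, 1)
    return x

random.seed(0)
free = {1, 4, 6}  # k = 3 free coordinates
s = sample_bit_fixing(8, free)
# All coordinates outside `free` are fixed to 0.
assert all(s[i] == 0 for i in range(8) if i not in free)
```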

Gil Cohen, Igor Shinkar
Interactive Proofs with Approximately Commuting Provers

The class $$\mathrm {MIP}^*$$ of promise problems that can be decided through an interactive proof system with multiple entangled provers provides a complexity-theoretic framework for the exploration of the nonlocal properties of entanglement. Very little is known in terms of the power of this class. The only proposed approach for establishing upper bounds is based on a hierarchy of semidefinite programs introduced independently by Pironio et al. and Doherty et al. in 2006. This hierarchy converges to a value, the field-theoretic value, that is only known to coincide with the provers’ maximum success probability in a given proof system under a plausible but difficult mathematical conjecture, Connes’ embedding conjecture. No bounds on the rate of convergence are known.

We introduce a rounding scheme for the hierarchy, establishing that any solution to its $$N$$-th level can be mapped to a strategy for the provers in which measurement operators associated with distinct provers have pairwise commutator bounded by $$O(\ell ^2/\sqrt{N})$$ in operator norm, where $$\ell $$ is the number of possible answers per prover.

Our rounding scheme motivates the introduction of a variant of quantum multiprover interactive proof systems, called $$\mathrm {MIP}_\delta ^*$$, in which the soundness property is required to hold against provers allowed to operate on the same Hilbert space as long as the commutator of operations performed by distinct provers has norm at most $$\delta $$. Our rounding scheme implies the upper bound $$\mathrm {MIP}_\delta ^* \subseteq \mathrm {DTIME}(\exp (\exp ({{\mathrm{poly}}})/\delta ^2))$$. In terms of lower bounds we establish that $$\mathrm {MIP}^*_{2^{-{{\mathrm{poly}}}}}$$ contains $$\mathrm {NEXP}$$ with completeness $$1$$ and soundness $$1-2^{-{{\mathrm{poly}}}}$$. We discuss connections with the mathematical literature on approximate commutation and applications to device-independent cryptography.

Matthew Coudron, Thomas Vidick
Popular Matchings with Two-Sided Preferences and One-Sided Ties

We are given a bipartite graph $$G = (A \cup B, E)$$ where each vertex has a preference list ranking its neighbors: in particular, every $$a \in A$$ ranks its neighbors in a strict order of preference, whereas the preference lists of $$b \in B$$ may contain ties. A matching M is popular if there is no matching $$M'$$ such that the number of vertices that prefer $$M'$$ to M exceeds the number that prefer M to $$M'$$. We show that the problem of deciding whether G admits a popular matching or not is $$\mathsf {NP}$$-hard. This is the case even when every $$b \in B$$ either has a strict preference list or puts all its neighbors into a single tie. In contrast, we show that the problem becomes polynomially solvable in the case when each $$b \in B$$ puts all its neighbors into a single tie. That is, all neighbors of b are tied in b’s list and b desires to be matched to any of them. Our main result is an $$O(n^2)$$ algorithm (where $$n = |A \cup B|$$) for the popular matching problem in this model. Note that this model is quite different from the model where vertices in B have no preferences and do not care whether they are matched or not.
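The popularity definition above reduces to a pairwise vote count, which can be sketched directly for the single-tie model (each b is indifferent among its partners but prefers being matched to being unmatched); all function and variable names here are hypothetical, and this is only the comparison step, not the paper's $$O(n^2)$$ algorithm.

```python
def compare(M1, M2, rank_A, B):
    """Count how many vertices prefer M1 to M2 and vice versa.
    M1, M2: dicts mapping each matched vertex to its partner.
    rank_A: strict preference lists for the A-side.
    B-side vertices are indifferent among neighbors, but prefer
    being matched over being unmatched."""
    prefer1 = prefer2 = 0
    for a, ranking in rank_A.items():
        r1 = ranking.index(M1[a]) if a in M1 else len(ranking)  # unmatched = worst
        r2 = ranking.index(M2[a]) if a in M2 else len(ranking)
        if r1 < r2:
            prefer1 += 1
        elif r2 < r1:
            prefer2 += 1
    for b in B:
        if b in M1 and b not in M2:
            prefer1 += 1
        elif b in M2 and b not in M1:
            prefer2 += 1
    return prefer1, prefer2

rank_A = {"a1": ["b1", "b2"], "a2": ["b1"]}
M1 = {"a1": "b1", "b1": "a1"}
M2 = {"a1": "b2", "b2": "a1", "a2": "b1", "b1": "a2"}
votes = compare(M1, M2, rank_A, B=["b1", "b2"])
# a1 prefers M1; a2 and b2 prefer M2 — so M2 beats M1 here.
```

M is popular exactly when no M2 wins such a vote against it.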

Ágnes Cseh, Chien-Chung Huang, Telikepalli Kavitha
Block Interpolation: A Framework for Tight Exponential-Time Counting Complexity

We devise a framework for proving tight lower bounds under the counting exponential-time hypothesis $$\mathsf {\#ETH}$$ introduced by Dell et al. Our framework allows us to convert many known $$\mathsf {\#P}$$-hardness results for counting problems into results of the following type: if the given problem admits an algorithm with running time $$2^{o(n)}$$ on graphs with $$n$$ vertices and $$\mathcal {O}(n)$$ edges, then $$\mathsf {\#ETH}$$ fails. As exemplary applications of this framework, we obtain such tight lower bounds for the evaluation of the zero-one permanent, the matching polynomial, and the Tutte polynomial on all non-easy points except for two lines.

Radu Curticapean
On Convergence and Threshold Properties of Discrete Lotka-Volterra Population Protocols

In this work we focus on a natural class of population protocols whose dynamics are modeled by the discrete version of Lotka-Volterra equations with no linear term. In such protocols, when an agent $$a$$ of type (species) $$i$$ interacts with an agent $$b$$ of type (species) $$j$$ with $$a$$ as the initiator, then $$b$$’s type becomes $$i$$ with probability $$P_{ij}$$. In such an interaction, we think of $$a$$ as the predator, $$b$$ as the prey, and the type of the prey is either converted to that of the predator or stays as is. Such protocols capture the dynamics of some opinion spreading models and generalize the well-known Rock-Paper-Scissors discrete dynamics. We consider pairwise interactions among agents that are scheduled uniformly at random.

We start by considering the convergence time and show that any Lotka-Volterra-type protocol on an $$n$$-agent population converges to some absorbing state in time polynomial in $$n$$, w.h.p., when any pair of agents is allowed to interact. By contrast, when the interaction graph is a star, there exist protocols of the considered type, such as Rock-Paper-Scissors, which require exponential time to converge. We then study threshold effects exhibited by Lotka-Volterra-type protocols with 3 and more species under interactions between any pair of agents. We present a simple 4-type protocol in which the probability difference of reaching the two possible absorbing states is strongly amplified by the ratio of the initial populations of the two other types, which are transient, but “control” convergence. We then prove that the Rock-Paper-Scissors protocol reaches each of its three possible absorbing states with almost equal probability, starting from any configuration satisfying some sub-linear lower bound on the initial size of each species. That is, Rock-Paper-Scissors is a realization of a “coin-flip consensus” in a distributed system. Some of our techniques may be of independent value.
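The interaction rule described above is concrete enough to simulate directly. The sketch below (names and parameters are illustrative) schedules uniformly random ordered pairs and applies the conversion probability $$P_{ij}$$, with Rock-Paper-Scissors as the special case mentioned in the abstract.

```python
import random

def simulate(types, P, steps, seed=1):
    """One uniformly random ordered pair (initiator a, responder b) per
    step; if a has type i and b has type j, b's type becomes i with
    probability P[i][j]. Returns the final type vector."""
    rng = random.Random(seed)
    n = len(types)
    for _ in range(steps):
        a, b = rng.sample(range(n), 2)  # distinct agents, a is initiator
        i, j = types[a], types[b]
        if rng.random() < P[i][j]:
            types[b] = i
    return types

# Rock-Paper-Scissors as the special case: type i converts type j
# exactly when i beats j, i.e. P[i][j] = 1 if i == (j + 1) % 3 else 0.
P = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]
final = simulate([0, 1, 2] * 10, P, steps=5000)
```

Configurations in which only one species survives are the absorbing states referred to above: once the population is monochromatic, no interaction changes any type.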

Jurek Czyzowicz, Leszek Ga̧sieniec, Adrian Kosowski, Evangelos Kranakis, Paul G. Spirakis, Przemysław Uznański
Scheduling Bidirectional Traffic on a Path

We study the fundamental problem of scheduling bidirectional traffic along a path composed of multiple segments. The main feature of the problem is that jobs traveling in the same direction can be scheduled in quick succession on a segment, while jobs in opposing directions cannot cross a segment at the same time. We show that this tradeoff makes the problem significantly harder than the related flow shop problem, by proving that it is $$\mathsf {NP}$$-hard even for identical jobs. We complement this result with a PTAS for a single segment and non-identical jobs. If we allow some pairs of jobs traveling in different directions to cross a segment concurrently, the problem becomes $$\mathsf {APX}$$-hard even on a single segment and with identical jobs. We give polynomial algorithms for the setting with restricted compatibilities between jobs on a single and any constant number of segments, respectively.

Yann Disser, Max Klimm, Elisabeth Lübbecke
On the Problem of Approximating the Eigenvalues of Undirected Graphs in Probabilistic Logspace

We introduce the problem of approximating the eigenvalues of a given stochastic/symmetric matrix in the context of classical space-bounded computation. The problem can be exactly solved in $$\mathsf {DET}\subseteq \mathsf {NC}^2$$. Recently, it has been shown that the approximation problem can be solved by a quantum logspace algorithm. We show a $$\mathsf {BPL}$$ algorithm that approximates any eigenvalue with constant accuracy. The result we obtain falls short of achieving the polynomially-small accuracy that the quantum algorithm achieves. Thus, at our current state of knowledge, we can achieve polynomially-small accuracy with quantum logspace algorithms, constant accuracy with probabilistic logspace algorithms, and no non-trivial result is known for deterministic logspace algorithms. The quantum algorithm also has the advantage of working over arbitrary, possibly non-stochastic Hermitian operators.

Our work raises several challenges. First, a derandomization challenge: trying to achieve a deterministic algorithm approximating eigenvalues with some non-trivial accuracy. Second, a de-quantumization challenge: trying to decide whether the quantum logspace model is strictly stronger than the classical probabilistic one or not. It also casts the deterministic, probabilistic and quantum space-bounded models as problems in linear algebra with differences between symmetric, stochastic and arbitrary operators. We therefore believe the problem of approximating the eigenvalues of a graph is not only natural and important by itself, but also important for understanding the relative power of deterministic, probabilistic and quantum logspace computation.

Dean Doron, Amnon Ta-Shma
On Planar Boolean CSP

We give a partial classification of the complexity of Planar Boolean CSP, including a complete dichotomy for templates containing only relations of arity at most $$5$$.

Zdeněk Dvořák, Martin Kupec
On Temporal Graph Exploration

A temporal graph is a graph in which the edge set can change from step to step. The temporal graph exploration problem TEXP is the problem of computing a foremost exploration schedule for a temporal graph, i.e., a temporal walk that starts at a given start node, visits all nodes of the graph, and has the smallest arrival time. We consider only temporal graphs that are connected at each step. For such temporal graphs with n nodes, we show that it is $$\mathbf {NP}$$-hard to approximate TEXP with ratio $$O(n^{1-\varepsilon })$$ for any $$\varepsilon >0$$. We also provide an explicit construction of temporal graphs that require $$\Theta (n^2)$$ steps to be explored. We then consider TEXP under the assumption that the underlying graph (i.e., the graph that contains all edges that are present in the temporal graph in at least one step) belongs to a specific class of graphs. Among other results, we show that temporal graphs can be explored in $$O(n^{1.5}k^2\log n)$$ steps if the underlying graph has treewidth k and in $$O(n\log ^3 n)$$ steps if the underlying graph is a $$2 \times n$$ grid. We also show that sparse temporal graphs with regularly present edges can always be explored in O(n) steps.

Thomas Erlebach, Michael Hoffmann, Frank Kammer
Mind Your Coins: Fully Leakage-Resilient Signatures with Graceful Degradation

We construct a new leakage-resilient signature scheme. Our scheme remains unforgeable in the noisy leakage model, where the only restriction on the leakage is that it does not decrease the min-entropy of the secret key by too much. The leakage information can depend on the entire state of the signer; this property is sometimes known as fully leakage resilience.

An additional feature of our construction is that it offers a graceful degradation of security in situations where standard existential unforgeability is impossible. This property was recently put forward by Nielsen et al. (PKC 2014) in the bounded leakage model, to deal with settings in which the secret key is much larger than the size of a signature.

For security parameter $$\kappa $$, our scheme tolerates leakage on the entire state of the signer until $$\omega (\log \kappa )$$ bits of min-entropy are left in the secret key, and is proven secure in the standard model. While we describe our scheme in terms of generic building blocks, we also explain how to instantiate it efficiently under fairly standard number-theoretic assumptions.

Antonio Faonio, Jesper Buus Nielsen, Daniele Venturi
A $$(1+{\varepsilon })$$-Embedding of Low Highway Dimension Graphs into Bounded Treewidth Graphs

Graphs with bounded highway dimension were introduced in [Abraham et al., SODA 2010] as a model of transportation networks. We show that any such graph can be embedded into a distribution over bounded treewidth graphs with arbitrarily small distortion. More concretely, if the highway dimension of G is constant, we show how to randomly compute a subgraph of the shortest path metric of the input graph G with the following two properties: it distorts the distances of G by a factor of $$1+{\varepsilon }$$ in expectation and has a treewidth that is polylogarithmic in the aspect ratio of G. In particular, this result implies quasi-polynomial time approximation schemes for a number of optimization problems that naturally arise in transportation networks, including Travelling Salesman, Steiner Tree, and Facility Location.

To construct our embedding for low highway dimension graphs we extend Talwar’s [STOC 2004] embedding of low doubling dimension metrics into bounded treewidth graphs, which generalizes known results for Euclidean metrics. We add several non-trivial ingredients to Talwar’s techniques, and in particular thoroughly analyze the structure of low highway dimension graphs. Thus we demonstrate that the geometric toolkit used for Euclidean metrics extends beyond the class of low doubling metrics.

Andreas Emil Feldmann, Wai Shing Fung, Jochen Könemann, Ian Post
Lower Bounds for the Graph Homomorphism Problem

The graph homomorphism problem (HOM) asks whether the vertices of a given n-vertex graph G can be mapped to the vertices of a given h-vertex graph H such that each edge of G is mapped to an edge of H. The problem generalizes the graph coloring problem and at the same time can be viewed as a special case of the 2-CSP problem. In this paper, we prove several lower bounds for HOM under the Exponential Time Hypothesis (ETH) assumption. The main result is a lower bound $$2^{\Omega \left( \frac{n \log h}{\log \log h}\right) }$$. This rules out the existence of a single-exponential algorithm and shows that the trivial upper bound $$2^{\mathcal {O}(n\log {h})}$$ is almost asymptotically tight.

We also investigate what properties of graphs G and H make it difficult to solve HOM(G, H). An easy observation is that an $$\mathcal {O}(h^n)$$ upper bound can be improved to $$\mathcal {O}(h^{{\text {vc}}(G)})$$ where $${\text {vc}}(G)$$ is the minimum size of a vertex cover of G. The second lower bound $$h^{\Omega ({\text {vc}}(G))}$$ shows that the upper bound is asymptotically tight. As to the properties of the “right-hand side” graph H, it is known that HOM(G, H) can be solved in time $$(f(\Delta (H)))^n$$ and $$(f({\text {tw}}(H)))^n$$, where $$\Delta (H)$$ is the maximum degree of H and $${\text {tw}}(H)$$ is the treewidth of H. This gives single-exponential algorithms for graphs of bounded maximum degree or bounded treewidth. Since the chromatic number $$\chi (H)$$ does not exceed $${\text {tw}}(H)$$ and $$\Delta (H)+1$$, it is natural to ask whether similar upper bounds with respect to $$\chi (H)$$ can be obtained. We provide a negative answer by establishing a lower bound $$(f(\chi (H)))^n$$ for every function f. We also observe that similar lower bounds can be obtained for locally injective homomorphisms.

Fedor V. Fomin, Alexander Golovnev, Alexander S. Kulikov, Ivan Mihajlin
Parameterized Single-Exponential Time Polynomial Space Algorithm for Steiner Tree

In the Steiner tree problem, we are given as input a connected $$n$$-vertex graph with edge weights in $$\{1,2,\ldots ,W\}$$, and a subset of $$k$$ terminal vertices. Our task is to compute a minimum-weight tree that contains all the terminals. We give an algorithm for this problem with running time $${\mathcal O}(7.97^k\cdot n^4\cdot \log {W})$$ using $${\mathcal O}(n^3\cdot \log {nW} \cdot \log k)$$ space. This is the first single-exponential time, polynomial-space FPT algorithm for the weighted Steiner Tree problem.

Fedor V. Fomin, Petteri Kaski, Daniel Lokshtanov, Fahad Panolan, Saket Saurabh
Relative Discrepancy Does not Separate Information and Communication Complexity

Does the information complexity of a function equal its communication complexity? We examine whether any currently known techniques might be used to show a separation between the two notions. Ganor et al. recently provided such a separation in the distributional case for a specific input distribution. We show that in the non-distributional setting, the relative discrepancy bound is smaller than the information complexity, hence it cannot separate information and communication complexity. In addition, in the distributional case, we provide a linear program formulation for relative discrepancy and relate it to variants of the partition bound, resolving also an open question regarding the relation of the partition bound and information complexity. Last, we prove the equivalence between the adaptive relative discrepancy and the public-coin partition, implying that the logarithm of the adaptive relative discrepancy bound is quadratically tight with respect to communication.

Lila Fontes, Rahul Jain, Iordanis Kerenidis, Sophie Laplante, Mathieu Laurière, Jérémie Roland
A Galois Connection for Valued Constraint Languages of Infinite Size

A Galois connection between clones and relational clones on a fixed finite domain is one of the cornerstones of the so-called algebraic approach to the computational complexity of non-uniform Constraint Satisfaction Problems (CSPs). Cohen et al. established a Galois connection between finitely-generated weighted clones and finitely-generated weighted relational clones [SICOMP’13], and asked whether this connection holds in general. We answer this question in the affirmative for weighted (relational) clones with real weights and show that the complexity of the corresponding Valued CSPs is preserved.

Peter Fulla, Stanislav Živný
Approximately Counting H-Colourings is $$\#\mathrm {BIS}$$-Hard

We consider counting H-colourings from an input graph G to a target graph H. We show that for any fixed graph H without trivial components, this is as hard as the well-known problem $$\#\mathrm {BIS}$$, the problem of (approximately) counting independent sets in a bipartite graph. $$\#\mathrm {BIS}$$ is a complete problem in an important complexity class for approximate counting, and is believed not to have an FPRAS. If this is so, then our result shows that for every graph H without trivial components, the H-colouring counting problem has no FPRAS. This problem was studied a decade ago by Goldberg, Kelk and Paterson. They were able to show that approximately sampling H-colourings is $$\#\mathrm {BIS}$$-hard, but it was not known how to get the result for approximate counting. Our solution builds on non-constructive ideas using the work of Lovász. The full version is available at arxiv.org/abs/1502.01335. The theorem numbering here matches the full version.

Andreas Galanis, Leslie Ann Goldberg, Mark Jerrum
Taylor Polynomial Estimator for Estimating Frequency Moments

We present a randomized algorithm for estimating the p-th moment $$F_p$$ of the frequency vector of a data stream in the general update (turnstile) model to within a multiplicative factor of $$1 \pm \epsilon $$, for $$p > 2$$, with high constant confidence. For $$0 < \epsilon \le 1$$, the algorithm uses space $$O( n^{1-2/p} \epsilon ^{-2} + n^{1-2/p} \epsilon ^{-4/p} \log (n))$$ words. This improves over the current bound of $$O(n^{1-2/p} \epsilon ^{-2-4/p} \log (n))$$ words by Andoni et al. in [2]. Our space upper bound matches the lower bound of Li and Woodruff [17] for $$\epsilon = (\log (n))^{-\varOmega (1)}$$ and the lower bound of Andoni et al. [3] for $$\epsilon = \varOmega (1)$$.
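For reference, the quantity being estimated is the standard p-th frequency moment $$F_p = \sum_i |f_i|^p$$ of the net frequencies (this definition is standard, though not restated in the abstract). The exact offline computation is trivial; the point of a streaming algorithm is to approximate it in sublinear space.

```python
from collections import Counter

def exact_Fp(updates, p):
    """Exact p-th frequency moment from a list of turnstile updates
    (item, +/- count): F_p = sum over items of |net frequency|^p."""
    freq = Counter()
    for item, delta in updates:
        freq[item] += delta
    return sum(abs(f) ** p for f in freq.values())

# Net frequencies: x -> 3, y -> 1, so F_3 = 3**3 + 1**3 = 28.
result = exact_Fp([("x", 2), ("x", 1), ("y", 1)], p=3)
```

Note the absolute value: in the turnstile model, deletions can drive net frequencies negative.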

Sumit Ganguly
ETR-Completeness for Decision Versions of Multi-player (Symmetric) Nash Equilibria

As a result of some important works [4–6, 10, 15], the complexity of 2-player Nash equilibrium is by now well understood, even when equilibria with special properties are desired and when the game is symmetric. However, for multi-player games, when equilibria with special properties are desired, the only result known is due to Schaefer and Štefankovič [18]: that checking whether a 3-player NE (3-Nash) instance has an equilibrium in a ball of radius half in $$l_{\infty }$$-norm is ETR-complete, where ETR is the class Existential Theory of Reals.

Building on their work, we show that the following decision versions of 3-Nash are also ETR-complete: checking whether (i) there are two or more equilibria, (ii) there exists an equilibrium in which each player gets at least h payoff, where h is a rational number, (iii) a given set of strategies are played with non-zero probability, and (iv) all the played strategies belong to a given set.

Next, we give a reduction from 3-Nash to symmetric 3-Nash, hence resolving an open problem of Papadimitriou [14]. This yields ETR-completeness for symmetric 3-Nash for the last two problems stated above as well as completeness for the class $$\rm {FIXP_a}$$, a variant of FIXP for strong approximation. All our results extend to k-Nash, for any constant $$k \ge 3$$.

Jugal Garg, Ruta Mehta, Vijay V. Vazirani, Sadra Yazdanbod
Separate, Measure and Conquer: Faster Polynomial-Space Algorithms for Max 2-CSP and Counting Dominating Sets

We show a method resulting in the improvement of several polynomial-space, exponential-time algorithms. The method capitalizes on the existence of small balanced separators for sparse graphs, which can be exploited for branching to disconnect an instance into independent components. For this algorithm design paradigm, the challenge to date has been to obtain improvements in worst-case analyses of algorithms, compared with algorithms that are analyzed with advanced methods, such as Measure and Conquer. Our contribution is the design of a general method to integrate the advantage from the separator-branching into Measure and Conquer, for an improved running time analysis.

We illustrate the method with improved algorithms for Max $$(r,2)$$-CSP and #Dominating Set. For Max $$(r,2)$$-CSP instances with domain size r and m constraints, the running time improves from $${r}^{m/6}$$ to $${r}^{m/7.5}$$ for cubic instances and from $${r}^{0.19\cdot m}$$ to $${r}^{0.18\cdot m}$$ for general instances, omitting subexponential factors. For #Dominating Set instances with n vertices, the running time improves from $${1.4143}^{n}$$ to $${1.2458}^{n}$$ for cubic instances and from $${1.5673}^{n}$$ to $${1.5183}^{n}$$ for general instances. It is likely that other algorithms relying on local transformations can be improved using our method, which exploits a non-local property of graphs.

Serge Gaspers, Gregory B. Sorkin
Submatrix Maximum Queries in Monge Matrices Are Equivalent to Predecessor Search

We present an optimal data structure for submatrix maximum queries in $$n\times n$$ Monge matrices. Our result is a two-way reduction showing that the problem is equivalent to the classical predecessor problem in a universe of polynomial size. This gives a data structure of $$O(n)$$ space that answers submatrix maximum queries in $$O(\log \log n)$$ time, as well as a matching lower bound, showing that $$O(\log \log n)$$ query time is optimal for any data structure of size $$O(n\,\mathrm{polylog}(n))$$. Our result settles the problem, improving on the $$O(\log^2 n)$$ query time in SODA'12, and on the $$O(\log n)$$ query time in ICALP'14.

In addition, we show that partial Monge matrices can be handled in the same bounds as full Monge matrices. In both previous results, partial Monge matrices incurred additional inverse-Ackermann factors.

Paweł Gawrychowski, Shay Mozes, Oren Weimann
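The Monge property underlying this data structure can be verified locally: it suffices to check the quadrangle inequality on adjacent rows and columns. A minimal sketch (the helper name is ours, not from the paper):

```python
def is_monge(A):
    """Check the Monge property: for all i < i' and j < j',
    A[i][j] + A[i'][j'] <= A[i][j'] + A[i'][j].
    It suffices to verify the inequality on adjacent rows/columns."""
    rows, cols = len(A), len(A[0])
    return all(A[i][j] + A[i + 1][j + 1] <= A[i][j + 1] + A[i + 1][j]
               for i in range(rows - 1) for j in range(cols - 1))

# Distance-like matrices are a classic source of Monge matrices,
# e.g. A[i][j] = (i - j)^2 satisfies the quadrangle inequality.
A = [[(i - j) ** 2 for j in range(5)] for i in range(5)]
print(is_monge(A))
```

Negating the entries yields an inverse Monge matrix, for which `is_monge` returns `False`.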
Optimal Encodings for Range Top-$$k$$, Selection, and Min-Max

We consider encoding problems for range queries on arrays. In these problems the goal is to store a structure capable of recovering the answer to all queries that occupies the information-theoretic minimum space possible, to within lower-order terms. As input, we are given an array $$A[1..n]$$, and a fixed parameter $$k \in [1,n]$$. A range top-$$k$$ query on an arbitrary range $$[i,j] \subseteq [1,n]$$ asks us to return the ordered set of indices $$\{\ell_1, \ldots, \ell_{k}\}$$ such that $$A[\ell_m]$$ is the $$m$$-th largest element in $$A[i..j]$$, for $$1 \le m \le k$$. A range selection query for an arbitrary range $$[i,j] \subseteq [1,n]$$ and query parameter $$k' \in [1,k]$$ asks us to return the index of the $$k'$$-th largest element in $$A[i..j]$$. We completely resolve the space complexity of both of these heavily studied problems—to within lower-order terms—for all $$k = o(n)$$. Previously, the constant factor in the space complexity was known only for $$k=1$$. We also resolve the space complexity of another problem, that we call range min-max, in which the goal is to return the indices of both the minimum and maximum elements in a range.

Paweł Gawrychowski, Patrick K. Nicholson
2-Vertex Connectivity in Directed Graphs

Given a directed graph, two vertices v and w are 2-vertex-connected if there are two internally vertex-disjoint paths from v to w and two internally vertex-disjoint paths from w to v. In this paper, we show how to compute this relation in $$O(m+n)$$ time, where n is the number of vertices and m is the number of edges of the graph. As a side result, we show how to build in linear time an O(n)-space data structure, which can answer in constant time queries on whether any two vertices are 2-vertex-connected. Additionally, when two query vertices v and w are not 2-vertex-connected, our data structure can produce in constant time a “witness” of this property, by exhibiting a vertex or an edge that is contained in all paths from v to w or in all paths from w to v. We are also able to compute in linear time a sparse certificate for 2-vertex connectivity, i.e., a subgraph of the input graph that has O(n) edges and maintains the same 2-vertex connectivity properties as the input graph.

Loukas Georgiadis, Giuseppe F. Italiano, Luigi Laura, Nikos Parotsidis
Ground State Connectivity of Local Hamiltonians

The study of ground state energies of local Hamiltonians has played a fundamental role in quantum complexity theory. In this paper, we take a new direction by introducing the physically motivated notion of “ground state connectivity” of local Hamiltonians, which captures problems in areas ranging from quantum stabilizer codes to quantum memories. We show that determining how “connected” the ground space of a local Hamiltonian is can range from QCMA-complete to PSPACE-complete, as well as NEXP-complete for an appropriately defined “succinct” version of the problem. As a result, we obtain a natural QCMA-complete problem, a goal which has generally proven difficult since the conception of QCMA over a decade ago. Our proofs rely on a new technical tool, the Traversal Lemma, which analyzes the Hilbert space a local unitary evolution must traverse under certain conditions. We show that this lemma is essentially tight with respect to the length of the unitary evolution in question.

Sevag Gharibian, Jamie Sikora
Uniform Kernelization Complexity of Hitting Forbidden Minors

The $$\mathcal{F}$$-Minor-Free Deletion problem asks, for a fixed set $$\mathcal{F}$$ and an input consisting of a graph G and integer k, whether k vertices can be removed from G such that the resulting graph does not contain any member of $$\mathcal{F}$$ as a minor. Fomin et al. (FOCS 2012) showed that the special case when $$\mathcal{F}$$ contains at least one planar graph has a kernel of size $$f(\mathcal{F}) \cdot k^{g(\mathcal{F})}$$ for some functions f and g. They left open whether this Planar $$\mathcal{F}$$-Minor-Free Deletion problem has kernels whose size is uniformly polynomial, of the form $$f(\mathcal{F}) \cdot k^c$$ for some universal constant c. We prove that some Planar $$\mathcal{F}$$-Minor-Free Deletion problems do not have uniformly polynomial kernels (unless NP $$\subseteq$$ coNP/poly), not even when parameterized by the vertex cover number. On the positive side, we consider the problem of determining whether k vertices can be removed to obtain a graph of treedepth at most $$\eta$$. We prove that this problem admits uniformly polynomial kernels with $$\mathcal{O}(k^6)$$ vertices for every fixed $$\eta$$.

Archontia C. Giannopoulou, Bart M. P. Jansen, Daniel Lokshtanov, Saket Saurabh
Counting Homomorphisms to Square-Free Graphs, Modulo 2

We study the problem $$\oplus\textsc{HomsTo}H$$ of counting, modulo 2, the homomorphisms from an input graph to a fixed undirected graph H. A characteristic feature of modular counting is that cancellations make wider classes of instances tractable than is the case for exact (non-modular) counting, so subtle dichotomy theorems can arise. We show the following dichotomy: for any H that contains no 4-cycles, $$\oplus\textsc{HomsTo}H$$ is either in polynomial time or is $$\oplus\mathrm{P}$$-complete. This partially confirms a conjecture of Faben and Jerrum that was previously only known to hold for trees and for a restricted class of tree-width-2 graphs called cactus graphs. We confirm the conjecture for a rich class of graphs including graphs of unbounded tree-width. In particular, we focus on square-free graphs, which are graphs without 4-cycles. These graphs arise frequently in combinatorics, for example in connection with the strong perfect graph theorem and in certain graph algorithms. Previous dichotomy theorems required the graph to be tree-like so that tree-like decompositions could be exploited in the proof. We prove the conjecture for a much richer class of graphs by adopting a much more general approach.

Andreas Göbel, Leslie Ann Goldberg, David Richerby
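The counting problem itself is easy to state as a brute-force sketch (exponential in the input graph, unlike the dichotomy's polynomial-time cases; helper names are ours):

```python
from itertools import product

def count_homs_mod2(G_edges, G_verts, H_edges, H_verts):
    """Count homomorphisms from G to H, modulo 2, by brute force.

    A map f: V(G) -> V(H) is a homomorphism if every edge {u, v}
    of G is mapped to an edge {f(u), f(v)} of H."""
    H_adj = set()
    for a, b in H_edges:
        H_adj.add((a, b))
        H_adj.add((b, a))
    count = 0
    for values in product(H_verts, repeat=len(G_verts)):
        f = dict(zip(G_verts, values))
        if all((f[u], f[v]) in H_adj for u, v in G_edges):
            count += 1
    return count % 2

# The homomorphisms from a triangle to a triangle are exactly its
# 6 automorphisms, so the count mod 2 is 0.
tri = [(0, 1), (1, 2), (0, 2)]
print(count_homs_mod2(tri, [0, 1, 2], tri, [0, 1, 2]))
```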
Approximately Counting Locally-Optimal Structures

A locally-optimal structure is a combinatorial structure that cannot be improved by certain (greedy) local moves, even though it may not be globally optimal. An example is a maximal independent set in a graph. It is trivial to construct an independent set in a graph. It is easy to (greedily) construct a maximal independent set. However, it is NP-hard to construct a globally-optimal (maximum) independent set. This situation is typical. Constructing a locally-optimal structure is somewhat more difficult than constructing an arbitrary structure, and constructing a globally-optimal structure is more difficult than constructing a locally-optimal structure. The same situation arises with listing. The differences between the problems become obscured when we move from listing to counting, because nearly everything is $$\#\text{P}$$-complete. However, we highlight an interesting phenomenon that arises in approximate counting, where approximately counting locally-optimal structures is apparently more difficult than approximately counting globally-optimal structures. Specifically, we show that counting maximal independent sets is complete for $$\#\text{P}$$ with respect to approximation-preserving reductions, whereas counting all independent sets, or counting maximum independent sets, is complete for an apparently smaller class, #RH$$\varPi_1$$, which has a prominent role in the complexity of approximate counting. Motivated by the difficulty of approximately counting maximal independent sets in bipartite graphs, we also study counting problems involving minimal separators and minimal edge separators (which are also locally-optimal structures). Minimal separators have applications via fixed-parameter-tractable algorithms for constructing triangulations and phylogenetic trees. Although exact (exponential-time) algorithms exist for listing these structures, we show that the counting problems are as hard as they could possibly be. All of the exact counting problems are $$\#\text{P}$$-complete, and all of the approximation problems are complete for $$\#\text{P}$$ with respect to approximation-preserving reductions. A full version [14] containing detailed proofs is available at http://arxiv.org/abs/1411.6829. Theorem-numbering here matches the full version.

Leslie Ann Goldberg, Rob Gysel, John Lapinskas
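The trichotomy described in the abstract (arbitrary vs. locally-optimal vs. globally-optimal independent sets) can be made concrete with a toy greedy sketch (helper name is ours, not from the paper):

```python
def maximal_independent_set(adj):
    """Greedily grow a maximal independent set in a graph.

    adj: dict mapping each vertex to the set of its neighbours.
    The result cannot be extended (it is locally optimal), but it may
    be far smaller than a maximum independent set."""
    mis = set()
    for v in adj:  # any scan order yields some maximal set
        if all(u not in mis for u in adj[v]):
            mis.add(v)
    return mis

# A star K_{1,3}: scanning the centre first yields {0} (size 1),
# while the maximum independent set is the three leaves (size 3).
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(maximal_independent_set(star))
```

Both outcomes are maximal (no vertex can be added), illustrating why locally-optimal structures are nontrivial to count: they depend on which local moves were made, not just on the graph.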
Proofs of Proximity for Context-Free Languages and Read-Once Branching Programs
(Extended Abstract)

Proofs of proximity are probabilistic proof systems in which the verifier only queries a sub-linear number of input bits, and soundness only means that, with high probability, the input is close to an accepting input. In their minimal form, called Merlin-Arthur proofs of proximity ($$\mathcal{MAP}$$), the verifier receives, in addition to query access to the input, also free access to an explicitly given short (sub-linear) proof. A more general notion is that of an interactive proof of proximity ($$\mathcal{IPP}$$), in which the verifier is allowed to interact with an all-powerful, yet untrusted, prover. $$\mathcal{MAP}$$s and $$\mathcal{IPP}$$s may be thought of as the $$\mathcal{NP}$$ and $$\mathcal{IP}$$ analogues of property testing, respectively.

In this work we construct proofs of proximity for two natural classes of properties: (1) context-free languages, and (2) languages accepted by small read-once branching programs. Our main results are:

1. $$\mathcal{MAP}$$s for these two classes, in which, for inputs of length $$n$$, both the verifier's query complexity and the length of the $$\mathcal{MAP}$$ proof are $$\widetilde{O}(\sqrt{n})$$.

2. $$\mathcal{IPP}$$s for the same two classes with constant query complexity, poly-logarithmic communication complexity, and logarithmically many rounds of interaction.

Oded Goldreich, Tom Gur, Ron D. Rothblum
Fast Algorithms for Diameter-Optimally Augmenting Paths

We consider the problem of augmenting a graph with $$n$$ vertices embedded in a metric space, by inserting one additional edge in order to minimize the diameter of the resulting graph. We present an exact algorithm for the case when the input graph is a path; it runs in $$O(n \log^3 n)$$ time. We also present an algorithm that computes a $$(1+\varepsilon)$$-approximation in $$O(n + 1/\varepsilon^3)$$ time for paths in $$\mathbb{R}^{d}$$, where $$d$$ is a constant.

Ulrike Große, Joachim Gudmundsson, Christian Knauer, Michiel Smid, Fabian Stehn
Hollow Heaps

We introduce the hollow heap, a very simple data structure with the same amortized efficiency as the classical Fibonacci heap. All heap operations except delete and delete-min take $$O(1)$$ time, worst case as well as amortized; delete and delete-min take $$O(\log n)$$ amortized time. Hollow heaps are by far the simplest structure to achieve this. Hollow heaps combine two novel ideas: the use of lazy deletion and re-insertion to do decrease-key operations, and the use of a dag (directed acyclic graph) instead of a tree or set of trees to represent a heap. Lazy deletion produces hollow nodes (nodes without items), giving the data structure its name.

Thomas Dueholm Hansen, Haim Kaplan, Robert E. Tarjan, Uri Zwick
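The lazy-deletion idea can be illustrated with a toy priority queue (this is only an illustration of the principle on a binary heap, not the hollow heap structure itself): decrease-key re-inserts the item with its new key and marks the old entry stale, discarding stale entries only when they surface at the top.

```python
import heapq

class LazyDeletionPQ:
    """Toy priority queue illustrating lazy deletion: stale ("hollow")
    entries stay in the heap until they reach the minimum."""

    def __init__(self):
        self._heap = []    # entries: (key, serial, item)
        self._live = {}    # item -> serial of its current entry
        self._serial = 0

    def insert(self, item, key):
        self._serial += 1
        self._live[item] = self._serial
        heapq.heappush(self._heap, (key, self._serial, item))

    decrease_key = insert  # same code path: re-insert, stale the old entry

    def delete_min(self):
        while self._heap:
            key, serial, item = heapq.heappop(self._heap)
            if self._live.get(item) == serial:  # skip hollow entries
                del self._live[item]
                return item, key
        raise IndexError("delete_min from empty queue")

pq = LazyDeletionPQ()
pq.insert("a", 5)
pq.insert("b", 3)
pq.decrease_key("a", 1)
print(pq.delete_min())  # -> ('a', 1)
```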
Linear-Time List Recovery of High-Rate Expander Codes

We show that expander codes, when properly instantiated, are high-rate list recoverable codes with linear-time list recovery algorithms. List recoverable codes have been useful recently in constructing efficiently list-decodable codes, as well as explicit constructions of matrices for compressive sensing and group testing. Previous list recoverable codes with linear-time decoding algorithms have all had rate at most $$1/2$$; in contrast, our codes can have rate $$1 - \varepsilon$$ for any $$\varepsilon > 0$$. We can plug our high-rate codes into a framework of Alon and Luby (1996) and Meir (2014) to obtain linear-time list recoverable codes of arbitrary rates $$R$$, which approach the optimal trade-off between the number of non-trivial lists provided and the rate of the code.

While list-recovery is interesting on its own, our primary motivation is applications to list-decoding. A slight strengthening of our result would imply linear-time and optimally list-decodable codes for all rates. Thus, our result is a step in the direction of solving this important problem.

Brett Hemenway, Mary Wootters
Finding 2-Edge and 2-Vertex Strongly Connected Components in Quadratic Time

We present faster algorithms for computing the 2-edge and 2-vertex strongly connected components of a directed graph. While in undirected graphs the 2-edge and 2-vertex connected components can be found in linear time, in directed graphs with m edges and n vertices only rather simple $$O(mn)$$-time algorithms were known. We use a hierarchical sparsification technique to obtain algorithms that run in time $$O(n^2)$$. For 2-edge strongly connected components our algorithm gives the first running time improvement in 20 years. Additionally we present an $$O(m^2/\log n)$$-time algorithm for 2-edge strongly connected components, and thus improve over the $$O(mn)$$ running time also when $$m = O(n)$$. Our approach extends to $$k$$-edge and $$k$$-vertex strongly connected components for any constant $$k$$ with a running time of $$O(n^2 \log n)$$ for $$k$$-edge-connectivity and $$O(n^3)$$ for $$k$$-vertex-connectivity.

Monika Henzinger, Sebastian Krinninger, Veronika Loitzenbauer
Improved Algorithms for Decremental Single-Source Reachability on Directed Graphs

Recently we presented the first algorithm for maintaining the set of nodes reachable from a source node in a directed graph that is modified by edge deletions with $$o(mn)$$ total update time, where $$m$$ is the number of edges and $$n$$ is the number of nodes in the graph [Henzinger et al. STOC 2014]. The algorithm is a combination of several different algorithms, each for a different $$m$$ vs. $$n$$ trade-off. For the case of $$m = \varTheta(n^{1.5})$$ the running time is $$O(n^{2.47})$$, just barely below $$mn = \varTheta(n^{2.5})$$. In this paper we simplify the previous algorithm using new algorithmic ideas and achieve an improved running time of $$\tilde{O}(\min(m^{7/6} n^{2/3}, m^{3/4} n^{5/4 + o(1)}, m^{2/3} n^{4/3+o(1)} + m^{3/7} n^{12/7+o(1)}))$$. This gives, e.g., $$O(n^{2.36})$$ for the notorious case $$m = \varTheta(n^{1.5})$$. We obtain the same upper bounds for the problem of maintaining the strongly connected components of a directed graph undergoing edge deletions. Our algorithms are correct with high probability against an oblivious adversary.

Monika Henzinger, Sebastian Krinninger, Danupon Nanongkai
Weighted Reordering Buffer Improved via Variants of Knapsack Covering Inequalities

We consider the weighted Reordering Buffer Management problem. In this problem a set of n elements arrive over time one at a time, and the elements can be stored in a buffer of size k. When the buffer becomes full, an element must be output. Elements are colored, and if two elements with different colors are output consecutively then a switching cost is incurred. If the new color output is c, the cost is $$w_c$$. The objective is to reorder the elements to minimize the total switching cost in the output sequence.

In this paper, we give an improved randomized $$O(\log \log \log k\gamma)$$-approximation for this problem, where $$\gamma$$ is the ratio of the maximum to minimum weight of a color, improving upon the previous best $$O(\log \log k\gamma)$$-approximation. Our improvement builds on strengthening the standard linear program for the problem with non-standard knapsack covering inequalities. In particular, by leveraging the structure of these inequalities, our algorithm manages to render several random procedures more powerful and combine them effectively, thereby giving an exponential improvement upon the previous work.

Sungjin Im, Benjamin Moseley
Local Reductions

We reduce non-deterministic time $$T \ge 2^n$$ to a 3SAT instance $$\phi$$ of quasilinear size $$|\phi| = T \cdot \log^{O(1)} T$$ such that there is an explicit circuit C that on input an index i of $$\log |\phi|$$ bits outputs the i-th clause, and each output bit of C depends on $$O(1)$$ input bits. The previous best result was C in NC$$^1$$. Even in the simpler setting of polynomial size $$|\phi| = \mathrm{poly}(T)$$ the previous best result was C in AC$$^0$$.

More generally, for any time $$T \ge n$$ and parameter $$r \le n$$ we obtain $$\log_2 |\phi| = \max(\log T, n/r) + O(\log n) + O(\log \log T)$$ and each output bit of C is a decision tree of depth $$O(\log r)$$.

As an application, we tighten Williams’ connection between satisfiability algorithms and circuit lower bounds (STOC 2010; SIAM J. Comput. 2013).

Hamid Jahanjou, Eric Miles, Emanuele Viola
Query Complexity in Expectation

We study the query complexity of computing a function $$f:\{0,1\}^n\rightarrow \mathbb{R}_+$$ in expectation. This requires the algorithm on input $$x$$ to output a nonnegative random variable whose expectation equals $$f(x)$$, using as few queries to the input $$x$$ as possible. We exactly characterize both the randomized and the quantum query complexity by two polynomial degrees, the nonnegative literal degree and the sum-of-squares degree, respectively. We observe that the quantum complexity can be unboundedly smaller than the classical complexity for some functions, but can be at most polynomially smaller for Boolean functions. These query complexities relate to (and are motivated by) the extension complexity of polytopes. The linear extension complexity of a polytope is characterized by the randomized communication complexity of computing its slack matrix in expectation, and the semidefinite (psd) extension complexity is characterized by the analogous quantum model. Since query complexity can be used to upper bound communication complexity of related functions, we can derive some upper bounds on psd extension complexity by constructing efficient quantum query algorithms. As an example we give an exponentially-close entrywise approximation of the slack matrix of the perfect matching polytope with psd-rank only $$2^{n^{1/2+\varepsilon}}$$. Finally, we show that randomized and quantum query complexity in expectation correspond to the Sherali-Adams and Lasserre hierarchies, respectively.

Jedrzej Kaniewski, Troy Lee, Ronald de Wolf
Near-Linear Query Complexity for Graph Inference

How efficiently can we find an unknown graph using distance or shortest path queries between its vertices? Let $$G = (V,E)$$ be a connected, undirected, and unweighted graph of bounded degree. The edge set E is initially unknown, and the graph can be accessed using a distance oracle, which receives a pair of vertices (u, v) and returns the distance between u and v. In the verification problem, we are given a hypothetical graph $$\hat{G} = (V,\hat{E})$$ and want to check whether G is equal to $$\hat{G}$$. We analyze a natural greedy algorithm and prove that it uses $$n^{1+o(1)}$$ distance queries. In the more difficult reconstruction problem, $$\hat{G}$$ is not given, and the goal is to find the graph G. If the graph can be accessed using a shortest path oracle, which returns not just the distance but an actual shortest path between u and v, we show that extending the idea of greedy gives a reconstruction algorithm that uses $$n^{1+o(1)}$$ shortest path queries. When the graph has bounded treewidth, we further bound the query complexity of the greedy algorithms for both problems by $$\tilde{O}(n)$$. When the graph is chordal, we provide a randomized algorithm for reconstruction using $$\tilde{O}(n)$$ distance queries.

Sampath Kannan, Claire Mathieu, Hang Zhou
A QPTAS for the Base of the Number of Crossing-Free Structures on a Planar Point Set

The number of triangulations of a planar n-point set S is known to be $$c^n$$, where the base c lies between 2.43 and 30. Similarly, the number of spanning trees on S is known to be $$d^n$$, where the base d lies between 6.75 and 141.07. The fastest known algorithm for counting triangulations of S runs in $$O^*(2^n)$$ time, while that for counting spanning trees runs in $$O^*(7.125^n)$$ time. The fastest known arbitrarily close approximation algorithms for the base of the number of triangulations of S and the base of the number of spanning trees of S, respectively, run in time subexponential in n. We present the first quasi-polynomial approximation schemes for the base of the number of triangulations of S and the base of the number of spanning trees on S, respectively.

Marek Karpinski, Andrzej Lingas, Dzmitry Sledneu
Finding a Path in Group-Labeled Graphs with Two Labels Forbidden

The parity of the length of paths and cycles is a classical and well-studied topic in graph theory and theoretical computer science. The parity constraints can be extended to the label constraints in a group-labeled graph, which is a directed graph with a group label on each arc. Recently, paths and cycles in group-labeled graphs have been investigated, such as finding non-zero disjoint paths and cycles.

In this paper, we present a solution to finding an $$s$$-$$t$$ path in a group-labeled graph with two labels forbidden. This also leads to an elementary solution to finding a zero path in a $$\mathbb{Z}_3$$-labeled graph, which is the first nontrivial case of finding a zero path. This setting in fact generalizes the 2-disjoint paths problem in undirected graphs, which also motivates us to consider it. More precisely, we provide a polynomial-time algorithm for testing whether there are at most two possible labels of $$s$$-$$t$$ paths in a group-labeled graph, and for finding $$s$$-$$t$$ paths attaining at least three distinct labels if they exist. We also give a necessary and sufficient condition for a group-labeled graph to have exactly two possible labels of $$s$$-$$t$$ paths, and our algorithm is based on this characterization.

Yasushi Kawase, Yusuke Kobayashi, Yutaro Yamaguchi
Lower Bounds for Sums of Powers of Low Degree Univariates

We consider the problem of representing a univariate polynomial f(x) as a sum of powers of low degree polynomials. We prove a lower bound of $$\Omega\left(\sqrt{\frac{d}{t}}\right)$$ for writing an explicit univariate degree-d polynomial f(x) as a sum of powers of degree-t polynomials.

Neeraj Kayal, Pascal Koiran, Timothée Pecatte, Chandan Saha
Approximating CSPs Using LP Relaxation

This paper studies how well the standard LP relaxation approximates a $$k$$-ary constraint satisfaction problem (CSP) on label set $$[L]$$. We show that, assuming the Unique Games Conjecture, it achieves an approximation within $$O(k^3\cdot \log L)$$ of the optimal approximation factor. In particular we prove the following hardness result: let $$\mathcal{I}$$ be a $$k$$-ary CSP on label set $$[L]$$ with constraints from a constraint class $$\mathcal{C}$$, such that it is a $$(c,s)$$-integrality gap for the standard LP relaxation. Then, given an instance $$\mathcal{H}$$ with constraints from $$\mathcal{C}$$, it is NP-hard to decide whether $$\mathsf{opt}(\mathcal{H}) \ge \varOmega\left(\frac{c}{k^3\log L}\right)$$ or $$\mathsf{opt}(\mathcal{H}) \le 4\cdot s$$, assuming the Unique Games Conjecture. We also show the existence of an efficient LP rounding algorithm $$\mathsf{Round}$$ such that given an instance $$\mathcal{H}$$ from a permutation invariant constraint class $$\mathcal{C}$$ which is a $$(c,s)$$-rounding gap for $$\mathsf{Round}$$, it is NP-hard to decide whether $$\mathsf{opt}(\mathcal{H}) \ge \varOmega\left(\frac{c}{k^3\log L}\right)$$ or $$\mathsf{opt}(\mathcal{H}) \le O\left((\log L)^k\right)\cdot s$$, assuming the Unique Games Conjecture.

Subhash Khot, Rishi Saket
Comparator Circuits over Finite Bounded Posets

The comparator circuit model was originally introduced in [4] (and further studied in [2]) to capture problems which are not known to be P-complete but still not known to admit efficient parallel algorithms. The class CC is the complexity class of problems many-one logspace reducible to the Comparator Circuit Value Problem, and we know that NLOG $$\subseteq$$ CC $$\subseteq$$ P. Cook et al. [2] showed that CC is also the class of languages decided by polynomial size comparator circuits.

We study generalizations of the comparator circuit model that work over fixed finite bounded posets. We observe that there are universal comparator circuits even over arbitrary fixed finite bounded posets. Building on this, we show that general (resp. skew) comparator circuits of polynomial size over fixed finite distributive lattices characterize CC (resp. LOG). Complementing this, we show that general comparator circuits of polynomial size over arbitrary fixed finite lattices exactly characterize P, and that when the comparator circuit is skew they characterize NLOG. In addition, we show a characterization of the class NP by a family of polynomial sized comparator circuits over fixed finite bounded posets. These results generalize the results in [2] regarding the power of comparator circuits. As an aside, we consider generalizations of Boolean formulae over arbitrary lattices. We show that Spira's theorem [5] can be extended to this setting as well, and show that polynomial sized Boolean formulae over fixed finite lattices capture exactly NC$$^1$$.

Our techniques involve the design of comparator circuits and finite posets. We then use known results from lattice theory to show that the posets we obtain can be embedded into appropriate lattices. Our results give new methods to establish CC upper bounds for problems, and also indicate potential new approaches towards the problems P vs. CC and NLOG vs. LOG using lattice theoretic methods.

Balagopal Komarath, Jayalal Sarma, K. S. Sunil
Algebraic Properties of Valued Constraint Satisfaction Problem

The paper presents an algebraic framework for optimization problems expressible as Valued Constraint Satisfaction Problems. Our results generalize the algebraic framework for the decision version (CSPs) provided by Bulatov et al. [SICOMP 2005].

We introduce the notions of weighted algebras and varieties, and use the Galois connection due to Cohen et al. [SICOMP 2013] to link VCSP languages to weighted algebras. We show that the difficulty of VCSP depends only on the weighted variety generated by the associated weighted algebra.

Paralleling the results for CSPs we exhibit a reduction to cores and rigid cores which allows us to focus on idempotent weighted varieties. Further, we propose an analogue of the Algebraic CSP Dichotomy Conjecture; prove the hardness direction and verify that it agrees with known results for VCSPs on two-element sets [Cohen et al. 2006], finite-valued VCSPs [Thapper and Živný 2013], and conservative VCSPs [Kolmogorov and Živný 2013].

Marcin Kozik, Joanna Ochremiak
Towards Understanding the Smoothed Approximation Ratio of the 2-Opt Heuristic

The 2-Opt heuristic is a very simple, easy-to-implement local search heuristic for the traveling salesman problem. While it usually provides good approximations to the optimal tour in experiments, its worst-case performance is poor.

In an attempt to explain the approximation performance of 2-Opt, we analyze the smoothed approximation ratio of 2-Opt. We obtain a bound of $$O(\log(1/\sigma))$$ for the smoothed approximation ratio of 2-Opt. As a lower bound, we prove that the worst-case lower bound of $$\Omega(\frac{\log n}{\log \log n})$$ for the approximation ratio holds for $$\sigma = O(1/\sqrt{n})$$.

Our main technical novelty is that, different from existing smoothed analyses, we do not separately analyze objective values of the global and the local optimum on all inputs, but simultaneously bound them on the same input.

Marvin Künnemann, Bodo Manthey
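The heuristic analyzed here is indeed very short to implement. A plain sketch for Euclidean TSP (our own minimal version, without the smoothing model):

```python
import math

def tour_length(tour, pts):
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(pts):
    """2-Opt local search: repeatedly reverse a segment of the tour
    while doing so shortens it. Returns a 2-Opt-optimal tour (a local
    optimum, not necessarily a global one)."""
    n = len(pts)
    tour = list(range(n))
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # reversing the whole tour changes nothing
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # replace edges (a,b),(c,d) by (a,c),(b,d) if shorter
                if (math.dist(pts[a], pts[c]) + math.dist(pts[b], pts[d])
                        < math.dist(pts[a], pts[b]) + math.dist(pts[c], pts[d]) - 1e-12):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

pts = [(0, 0), (2, 0), (0, 1), (2, 1)]  # start order 0,1,2,3 crosses itself
t = two_opt(pts)
print(t, tour_length(t, pts))
```

On this rectangle the crossing start tour has length about 8.47, and 2-Opt uncrosses it to the optimal length 6.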
On the Hardest Problem Formulations for the $$0/1$$ Lasserre Hierarchy

The Lasserre/Sum-of-Squares (SoS) hierarchy is a systematic procedure for constructing a sequence of increasingly tight semidefinite relaxations. It is known that the hierarchy converges to the 0/1 polytope in $$n$$ levels and captures the convex relaxations used in the best available approximation algorithms for a wide variety of optimization problems.

In this paper we characterize the set of 0/1 integer linear problems and unconstrained 0/1 polynomial optimization problems that can still have an integrality gap at level $$n-1$$. These problems are the hardest for the Lasserre hierarchy in this sense.

Adam Kurpisz, Samuli Leppänen, Monaldo Mastrolilli
Replacing Mark Bits with Randomness in Fibonacci Heaps

A Fibonacci heap is a deterministic data structure implementing a priority queue with optimal amortized operation costs. An unfortunate aspect of Fibonacci heaps is that they must maintain a “mark bit” which serves only to ensure efficiency of heap operations, not correctness. Karger proposed a simple randomized variant of Fibonacci heaps in which mark bits are replaced by coin flips. This variant still has expected amortized cost $$O(1)$$ for insert, decrease-key, and merge. Karger conjectured that this data structure has expected amortized cost $$O(\log s)$$ for delete-min, where $$s$$ is the number of heap operations.

We give a tight analysis of Karger’s randomized Fibonacci heaps, resolving Karger’s conjecture. Specifically, we obtain matching upper and lower bounds of $$\varTheta (\log ^2 s / \log \log s)$$ for the runtime of delete-min. We also prove a tight lower bound of $$\varOmega (\sqrt{n})$$ on delete-min in terms of the number of heap elements $$n$$. The request sequence used to prove this bound also solves an open problem of Fredman on whether cascading cuts are necessary. Finally, we give a simple additional modification to these heaps which yields a tight runtime $$O(\log ^2 n / \log \log n)$$ for delete-min.

Jerry Li, John Peebles
A PTAS for the Weighted Unit Disk Cover Problem

We are given a set of weighted unit disks and a set of points in the Euclidean plane. The minimum weight unit disk cover (WUDC) problem asks for a subset of disks of minimum total weight that covers all given points. WUDC is one of the geometric set cover problems, which have been studied extensively for the past two decades (for many different geometric range spaces, such as (unit) disks, halfspaces, rectangles, and triangles). It is known that the unweighted version of the problem is NP-hard and admits a polynomial-time approximation scheme (PTAS). For the weighted problem, several constant-factor approximations have been developed. However, whether the problem admits a PTAS has been an open question. In this paper, we answer this question affirmatively by presenting the first PTAS for WUDC. Our result implies the first PTAS for the minimum weight dominating set problem in unit disk graphs. Combined with existing ideas, our result can also be used to obtain the first PTAS for the maximum lifetime coverage problem and an improved constant approximation ratio for the connected dominating set problem in unit disk graphs.

Jian Li, Yifei Jin
Approximating the Expected Values for Combinatorial Optimization Problems over Stochastic Points

We consider the stochastic geometry model where the location of each node is a random point in a given metric space, or the existence of each node is uncertain. We study the problems of computing the expected lengths of several combinatorial or geometric optimization problems over stochastic points, including closest pair, minimum spanning tree, $$k$$-clustering, minimum perfect matching, and minimum cycle cover. We also consider the problem of estimating the probability that the length of the closest pair, or the diameter, is at most, or at least, a given threshold. Most of the above problems are known to be $$\#\mathrm {P}$$-hard. We obtain an FPRAS (Fully Polynomial Randomized Approximation Scheme) for most of them in both the existential and locational uncertainty models. Our result for stochastic minimum spanning trees in the locational uncertainty model improves upon the previously known constant-factor approximation algorithm. To the best of our knowledge, our results for the other problems are the first known.

Lingxiao Huang, Jian Li
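
The existential uncertainty model above can be illustrated with a naive Monte Carlo estimator for the expected closest-pair distance (the paper's FPRAS is far more refined; this sketch only shows the quantity being estimated, and the convention that realizations with fewer than two present points contribute 0 is a modeling choice of this sketch):

```python
import math
import random

def expected_closest_pair(points, probs, samples=2000, rng=None):
    """Monte Carlo estimate of E[closest-pair distance] in the existential
    model: point i is present independently with probability probs[i]."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        present = [p for p, q in zip(points, probs) if rng.random() < q]
        if len(present) >= 2:
            # Brute-force closest pair over the realized point set.
            total += min(math.dist(a, b)
                         for i, a in enumerate(present)
                         for b in present[i + 1:])
    return total / samples
```

With all presence probabilities equal to 1 the estimator is exact, which gives a quick sanity check.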
Deterministic Truncation of Linear Matroids

Let $$M=(E,\mathcal{I})$$ be a matroid. A $$k$$-truncation of $$M$$ is a matroid $$M'=(E,\mathcal{I}')$$ such that for any $$A\subseteq E$$, $$A\in \mathcal{I}'$$ if and only if $$|A|\le k$$ and $$A\in \mathcal {I}$$. Given a linear representation of $$M$$ we consider the problem of finding a linear representation of the $$k$$-truncation of this matroid. This problem can be expressed as the following problem on matrices. Let $$M$$ be an $$n\times m$$ matrix over a field $$\mathbb {F}$$. A rank $$k$$-truncation of the matrix $$M$$ is a $$k\times m$$ matrix $$M_k$$ (over $${\mathbb F}$$ or a related field) such that for every subset $$I\subseteq \{1,\ldots ,m\}$$ of size at most $$k$$, the set of columns corresponding to $$I$$ in $$M$$ has rank $$|I|$$ if and only if the corresponding set of columns in $$M_k$$ has rank $$|I|$$. A common way to compute a rank $$k$$-truncation of an $$n \times m$$ matrix is to multiply the matrix with a random $$k\times n$$ matrix (with entries from a field of exponential size), yielding a simple randomized algorithm. So a natural question is whether it is possible to obtain a rank $$k$$-truncation of a matrix deterministically.

In this paper we settle this question for matrices over any field in which the field operations can be done efficiently. This includes any finite field and the field of rationals ($$\mathbb Q$$).

Our algorithms are based on the properties of the classical Wronskian determinant, and the folded Wronskian determinant, which was recently introduced by Guruswami and Kopparty [FOCS, 2013], and was implicitly present in the work of Forbes and Shpilka [STOC, 2012]. These were used in the context of subspace designs, and reducing randomness for polynomial identity testing and other related problems. Our main conceptual contribution in this paper is to show that the Wronskian determinant can also be used to obtain a representation of the truncation of a linear matroid in deterministic polynomial time. Finally, we use our results to derandomize several parameterized algorithms, including an algorithm for computing $$\ell $$-Matroid Parity, to which several problems like $$\ell $$-Matroid Intersection can be reduced.

Daniel Lokshtanov, Pranabendu Misra, Fahad Panolan, Saket Saurabh
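
The randomized baseline mentioned in the abstract (multiply by a random $$k\times n$$ matrix over a large field) can be sketched as follows; rank is computed by Gaussian elimination over a large prime field, and all names are illustrative:

```python
import random

P = (1 << 61) - 1  # a large prime, used as the field F_P

def rank_mod_p(rows, p=P):
    """Rank of a matrix (list of rows) via Gaussian elimination over F_p."""
    rows = [[x % p for x in r] for r in rows]
    ncols = len(rows[0]) if rows else 0
    rank, col = 0, 0
    while rank < len(rows) and col < ncols:
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)  # Fermat inverse of the pivot
        rows[rank] = [x * inv % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return rank

def random_truncation(M, k, rng=None, p=P):
    """Return R*M for a random k x n matrix R over F_p; with high
    probability this is a rank k-truncation of M."""
    rng = rng or random.Random(1)
    n, m = len(M), len(M[0])
    R = [[rng.randrange(p) for _ in range(n)] for _ in range(k)]
    return [[sum(R[i][t] * M[t][j] for t in range(n)) % p for j in range(m)]
            for i in range(k)]
```

The truncation property (equal column-subset ranks up to size $$k$$) can then be verified by brute force on small instances; the paper's contribution is achieving this deterministically.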
Linear Time Parameterized Algorithms for Subset Feedback Vertex Set

In the Subset Feedback Vertex Set (Subset FVS) problem, the input is a graph $$G$$ on $$n$$ vertices and $$m$$ edges, a subset of vertices $$T$$, referred to as terminals, and an integer $$k$$. The objective is to determine whether there exists a set of at most $$k$$ vertices intersecting every cycle that contains a terminal. The study of parameterized algorithms for this generalization of the Feedback Vertex Set problem has received significant attention over the last few years. In fact, the parameterized complexity of this problem was open until 2011, when two groups independently showed that the problem is fixed parameter tractable (FPT). Using tools from graph minors, Kawarabayashi and Kobayashi obtained an algorithm for Subset FVS running in time $${\mathcal {O}}(f(k)\cdot n^2 m)$$ [SODA 2012, JCTB 2012]. Independently, Cygan et al. [ICALP 2011, SIDMA 2013] designed an algorithm for Subset FVS running in time $$2^{{\mathcal {O}}(k \log k)}\cdot n^{{\mathcal {O}}(1)}$$. More recently, Wahlström obtained the first single exponential time algorithm for Subset FVS, running in time $$4^{k}\cdot n^{{\mathcal {O}}(1)}$$ [SODA 2014]. While the $$2^{{\mathcal {O}}(k)}$$ dependence on the parameter $$k$$ is optimal under the Exponential Time Hypothesis (ETH), the dependence of this algorithm, as well as those preceding it, on the input size is far from linear.

In this paper we design the first linear time parameterized algorithms for Subset FVS. More precisely, we obtain two new algorithms for Subset FVS.

A randomized algorithm for Subset FVS running in time $${\mathcal {O}}(25.6^k k^{{\mathcal {O}}(1)} (n + m))$$.

A deterministic algorithm for Subset FVS running in time $$2^{{\mathcal {O}}(k \log k)} (n + m)$$.

In particular, the first algorithm obtains the best possible dependence on both the parameter as well as the input size, up to the constant in the exponent. Both of our algorithms are based on “cut centrality”, in the sense that solution vertices are likely to show up in minimum size cuts between vertices sampled from carefully chosen distributions.

Daniel Lokshtanov, M. S. Ramanujan, Saket Saurabh
An Optimal Algorithm for Minimum-Link Rectilinear Paths in Triangulated Rectilinear Domains

We present a new algorithm for finding minimum-link rectilinear paths among $$h$$ rectilinear obstacles with a total of $$n$$ vertices in the plane. After the plane is triangulated, for any point $$s$$, our algorithm builds an $$O(n)$$-size data structure in $$O(n+h\log h)$$ time, such that given any query point $$t$$, we can compute a minimum-link rectilinear path from $$s$$ to $$t$$ in $$O(\log n+k)$$ time, where $$k$$ is the number of edges of the path, and the query time is $$O(\log n)$$ if we only want to know the value $$k$$. The previously best algorithm solves the problem in $$O(n\log n)$$ time.

Joseph S. B. Mitchell, Valentin Polishchuk, Mikko Sysikaski, Haitao Wang
Amplification of One-Way Information Complexity via Codes and Noise Sensitivity

We show a new connection between the information complexity of one-way communication problems under product distributions and a relaxed notion of list-decodable codes. As a consequence, we obtain a characterization of the information complexity of one-way problems under product distributions for any error rate based on covering numbers. This generalizes the characterization via VC dimension for constant error rates given by Kremer, Nisan, and Ron (CCC, 1999). It also provides an exponential improvement in the error rate, yielding tight bounds for a number of problems. In addition, our framework gives a new technique for analyzing the complexity of composition (e.g., XOR and OR) of one-way communication problems, connecting the difficulty of these problems to the noise sensitivity of the composing function. Using this connection, we strengthen the lower bounds obtained by Molinaro, Woodruff and Yaroslavtsev (SODA, 2013) for several problems in the distributed and streaming models, obtaining optimal lower bounds for finding the approximate closest pair of a set of points and the approximate largest entry in a matrix product. Finally, to illustrate the utility and simplicity of our framework, we show how it unifies proofs of existing one-way lower bounds for sparse set disjointness, the indexing problem, the greater-than function under product distributions, and the gap-Hamming problem under the uniform distribution.

Marco Molinaro, David P. Woodruff, Grigory Yaroslavtsev
A $$(2+\epsilon )$$-Approximation Algorithm for the Storage Allocation Problem

Packing problems are a fundamental class of problems studied in combinatorial optimization. Three particularly important and well-studied questions in this domain are the Unsplittable Flow on a Path problem (UFP), the Maximum Weight Independent Set of Rectangles problem (MWISR), and the 2-dimensional geometric knapsack problem. In this paper, we study the Storage Allocation Problem (SAP), which is a natural combination of those three questions. Given is a path with edge capacities and a set of tasks that are specified by start and end vertices, demands, and profits. The goal is to select a subset of the tasks that can be drawn as non-overlapping rectangles underneath the capacity profile, the height of a rectangle corresponding to the demand of the respective task. This problem arises naturally in settings where a certain available bandwidth has to be allocated contiguously to selected requests.

While for 2D-knapsack and UFP polynomial time $$(2+\epsilon )$$-approximation algorithms are known [Jansen and Zhang, SODA 2004] [Anagnostopoulos et al., SODA 2014], the best known approximation factor for SAP is $$9+\epsilon $$ [Bar-Yehuda, SPAA 2013]. In this paper, we level the understanding of SAP and the other two problems above by presenting a polynomial time $$(2+\epsilon )$$-approximation algorithm for SAP. A typically difficult special case of UFP and its variations arises if all input tasks are relatively large compared to the capacity of the smallest edge they are using. For that case, we even obtain a pseudopolynomial time exact algorithm for SAP.

Tobias Mömke, Andreas Wiese
Shortest Reconfiguration Paths in the Solution Space of Boolean Formulas

Given a Boolean formula and a satisfying assignment, a flip is an operation that changes the value of a variable in the assignment so that the resulting assignment remains satisfying. We study the problem of computing the shortest sequence of flips (if one exists) that transforms a given satisfying assignment $$s$$ to another satisfying assignment $$t$$ of the Boolean formula. Earlier work characterized the complexity of deciding the existence of a sequence of flips between two given satisfying assignments using Schaefer’s framework for classification of Boolean formulas. We build on it to provide a trichotomy for the complexity of finding the shortest sequence of flips and show that it is either in P, NP-complete, or PSPACE-complete. Our result adds to the small set of complexity results known for shortest reconfiguration sequence problems by providing an example where the shortest sequence can be found in polynomial time even though the path flips variables that have the same value in both $$s$$ and $$t$$. This is in contrast to all reconfiguration problems studied so far, where polynomial time algorithms for computing the shortest path were known only for cases where the path modified the symmetric difference only. Our proof uses Birkhoff’s representation theorem on a set system that we show to be a distributive lattice. The technique is insightful and can perhaps be used for other reconfiguration problems as well.

Amer E. Mouawad, Naomi Nishimura, Vinayak Pathak, Venkatesh Raman
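
For small formulas, the object studied above (the shortest flip sequence between satisfying assignments) can be computed by brute-force BFS over the solution graph; this generic sketch is not the paper's polynomial-time algorithm, and its names are illustrative:

```python
from collections import deque

def shortest_flip_sequence(sat, n, s, t):
    """BFS over satisfying assignments (tuples of n bools) from s to t,
    where neighbors differ in exactly one variable. Returns the shortest
    list of flipped variable indices, or None if no flip path exists."""
    if not (sat(s) and sat(t)):
        return None
    parent = {s: None}
    queue = deque([s])
    while queue:
        a = queue.popleft()
        if a == t:
            # Reconstruct the flip sequence back to s.
            path = []
            while parent[a] is not None:
                prev, i = parent[a]
                path.append(i)
                a = prev
            return path[::-1]
        for i in range(n):
            b = a[:i] + (not a[i],) + a[i + 1:]
            if b not in parent and sat(b):
                parent[b] = (a, i)
                queue.append(b)
    return None
```

For the formula $$x_0 \vee x_1$$, going from $$(1,0)$$ to $$(0,1)$$ requires two flips, since flipping $$x_0$$ first would leave the formula unsatisfied.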
Computing the Fréchet Distance Between Polygons with Holes

We study the problem of computing the Fréchet distance between subsets of Euclidean space. Even though the problem has been studied extensively for 1-dimensional curves, very little is known for $$d$$-dimensional spaces, for any $$d\ge 2$$. For general polygons in $$\mathbb {R}^2$$, it has been shown to be NP-hard, and the best known polynomial-time algorithm works only for polygons with at most a single puncture [Buchin et al., 2010]. Generalizing [Buchin et al., 2008], we give a polynomial-time algorithm for the case of arbitrary polygons with a constant number of punctures. Moreover, we show that approximating the Fréchet distance between polyhedral domains in $$\mathbb {R}^3$$ to within a factor of $$n^{1/\log \log n}$$ is NP-hard.

Amir Nayyeri, Anastasios Sidiropoulos
An Improved Private Mechanism for Small Databases

We study the problem of answering a workload of linear queries $$\mathcal {Q}$$, on a database of size at most $$n = o(|\mathcal {Q}|)$$ drawn from a universe $$\mathcal {U}$$ under the constraint of (approximate) differential privacy. Nikolov, Talwar, and Zhang [NTZ13] proposed an efficient mechanism that, for any given $$\mathcal {Q}$$ and $$n$$, answers the queries with average error that is at most a factor polynomial in $$\log |\mathcal {Q}|$$ and $$\log |\mathcal {U}|$$ worse than the best possible. Here we improve on this guarantee and give a mechanism whose competitiveness ratio is at most polynomial in $$\log n$$ and $$\log |\mathcal {U}|$$, and has no dependence on $$|\mathcal {Q}|$$. Our mechanism is based on the projection mechanism of [NTZ13], but in place of an ad-hoc noise distribution, we use a distribution which is in a sense optimal for the projection mechanism, and analyze it using convex duality and the restricted invertibility principle.

Aleksandar Nikolov
Binary Pattern Tile Set Synthesis Is NP-hard

We solve an open problem, stated in 2008, about the feasibility of designing efficient algorithmic self-assembling systems which produce 2-dimensional colored patterns. More precisely, we show that the problem of finding the smallest tile assembly system which will self-assemble an input pattern with 2 colors (i.e., $$2$$-Pats) is NP-hard. One crucial lemma makes use of a computer-assisted proof, which is a relatively novel but increasingly utilized paradigm for deriving proofs for complex mathematical problems. This tool is especially powerful for attacking combinatorial problems, as exemplified by the proof of the four color theorem and the recent important advance on the Erdős discrepancy problem using computer programs. In this paper, we push these techniques to a new order of magnitude: computational tasks corresponding to one CPU-year. We massively parallelize our program, and provide a full proof of its correctness. Its source code is freely available online.

Lila Kari, Steffen Kopecki, Pierre-Étienne Meunier, Matthew J. Patitz, Shinnosuke Seki
Near-Optimal Upper Bound on Fourier Dimension of Boolean Functions in Terms of Fourier Sparsity

We prove that the Fourier dimension of any Boolean function with Fourier sparsity $$s$$ is at most $$O\left( \sqrt{s} \log s\right) $$. This bound is tight up to a factor of $$O(\log s)$$ as the Fourier dimension and sparsity of the addressing function are quadratically related. We obtain our result by bounding the non-adaptive parity decision tree complexity, which is known to be equivalent to the Fourier dimension. A consequence of our result is that XOR functions have a one-way deterministic communication protocol of communication complexity $$O(\sqrt{r} \log r)$$, where $$r$$ is the rank of its communication matrix.

Swagato Sanyal
Condensed Unpredictability

We consider the task of deriving a key with high HILL entropy (i.e., being computationally indistinguishable from a key with high min-entropy) from an unpredictable source.

Previous to this work, the only known way to transform unpredictability into a key that was $$\epsilon $$-indistinguishable from having min-entropy was via pseudorandomness, for example by Goldreich-Levin (GL) hardcore bits. This approach has the inherent limitation that from a source with $$k$$ bits of unpredictability entropy one can derive a key of length (and thus HILL entropy) at most $$k-2\log (1/\epsilon )$$ bits. In many settings, e.g. when dealing with biometric data, such a $$2\log (1/\epsilon )$$ bit entropy loss is not an option. Our main technical contribution is a theorem that states that in the high entropy regime, unpredictability implies HILL entropy. Concretely, any variable $$K$$ with $$|K|-d$$ bits of unpredictability entropy has the same amount of so-called metric entropy (against real-valued, deterministic distinguishers), which is known to imply the same amount of HILL entropy. The loss in circuit size in this argument is exponential in the entropy gap $$d$$, and thus this result only applies for small $$d$$ (i.e., where the size of distinguishers considered is exponential in $$d$$).

To overcome the above restriction, we investigate whether it is possible to first “condense” unpredictability entropy and make the entropy gap small. We show that any source with $$k$$ bits of unpredictability can be condensed into a source of length $$k$$ with $$k-3$$ bits of unpredictability entropy. Our condenser simply “abuses” the GL construction and derives a $$k$$ bit key from a source with $$k$$ bits of unpredictability. The original GL theorem implies nothing when extracting that many bits, but we show that in this regime, GL still behaves like a “condenser” for unpredictability. This result comes with two caveats: (1) the loss in circuit size is exponential in $$k$$, and (2) we require that the source we start with has no HILL entropy (equivalently, one can efficiently check if a guess is correct). We leave it as an intriguing open problem to overcome these restrictions or to prove they are inherent.

Maciej Skórski, Alexander Golovnev, Krzysztof Pietrzak
Sherali-Adams Relaxations for Valued CSPs
Johan Thapper, Stanislav Živný
Two-sided Online Bipartite Matching and Vertex Cover: Beating the Greedy Algorithm

We consider the generalizations of two classical problems, online bipartite matching and ski rental, in the field of online algorithms, and establish a novel connection between them.

In the original setting of online bipartite matching, vertices from only one side of the bipartite graph are online. Motivated by market clearing applications where both buyers and sellers are online, we study the generalization, called two-sided online bipartite matching, in which all vertices can be online. An algorithm for it should maintain a $$b$$-matching and try to maximize its size. We show that this problem can be attacked by considering the complementary “dual” problem, two-sided online bipartite vertex cover, which in fact is a generalization of ski rental.

As the greedy algorithm is 1/2-competitive for both problems, the challenge is to beat the ratio of 1/2. In this paper, we present new 0.526-competitive algorithms for both problems under the large budget assumption. A key technical ingredient of our results is a charging-based framework for the design and analysis of water-filling type algorithms. This allows us to systematically establish approximation bounds for various water-filling algorithms.

On the hardness side, we show that no online randomized algorithm achieves a competitive ratio better than 0.570 and 0.625 respectively for these two problems. Our bounds show that the one-sided optimal ratio of $$1-1/e\approx 0.632$$ is indeed unattainable.

Yajun Wang, Sam Chiu-wai Wong
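
The 1/2-competitive greedy baseline that the work above sets out to beat can be sketched in the classical one-sided setting (this is only the simple greedy, not the paper's water-filling algorithms, and the interface is illustrative):

```python
def greedy_online_matching(offline, arrivals):
    """Each arriving online vertex comes with its list of offline
    neighbors and is greedily matched to the first free one.
    Returns the matching as {online_vertex: offline_vertex}."""
    free = set(offline)
    matching = {}
    for v, neighbors in arrivals:
        for u in neighbors:
            if u in free:
                free.remove(u)
                matching[v] = u
                break
    return matching
```

Any maximal matching produced this way has at least half the size of a maximum matching, which is exactly why beating the ratio 1/2 is the interesting challenge.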
The Simultaneous Communication of Disjointness with Applications to Data Streams

We study $$k$$-party number-in-hand set disjointness in the simultaneous message-passing model, and show that even if each element $$i\in [n]$$ is guaranteed to either belong to all $$k$$ parties or to at most $$O(1)$$ parties in expectation (and to at most $$O(\log n)$$ parties with high probability), then $$\varOmega (n \min (\log 1/\delta , \log k) / k )$$ communication is required by any $$\delta $$-error communication protocol for this problem (assuming $$k = \varOmega (\log n)$$).

We use the strong promise of our lower bound, together with a recent characterization of turnstile streaming algorithms as linear sketches, to obtain new lower bounds for the well-studied problem in data streams of approximating the frequency moments. We obtain a space lower bound of $$\varOmega (n^{1-2/p} \varepsilon ^{-2} \log M \log 1/\delta )$$ bits for any algorithm giving a $$(1+\varepsilon )$$-approximation to the $$p$$-th moment $$\sum _{i=1}^n |x_i|^p$$ of an $$n$$-dimensional vector $$x\in \{\pm M\}^n$$ with probability $$1-\delta $$, for any $$\delta \ge 2^{-o(n^{1/p})}$$. Our lower bound improves upon a prior $$\varOmega (n^{1-2/p} \varepsilon ^{-2} \log M)$$ lower bound which did not capture the dependence on $$\delta $$, and our bound is optimal whenever $$\varepsilon \le 1/\text {poly}(\log n)$$. This is the first example of a lower bound in data streams which uses a characterization in terms of linear sketches to obtain stronger lower bounds than obtainable via the one-way communication model; indeed, our set disjointness lower bound provably cannot hold in the one-way model.

Omri Weinstein, David P. Woodruff
An Improved Combinatorial Algorithm for Boolean Matrix Multiplication

We present a new combinatorial algorithm for triangle finding and Boolean matrix multiplication that runs in $$\hat{O}(n^3/\log ^4 n)$$ time, where the $$\hat{O}$$ notation suppresses poly(loglog) factors. This improves the previous best combinatorial algorithm by Chan [4] that runs in $$\hat{O}(n^3/\log ^3 n)$$ time. Our algorithm generalizes the divide-and-conquer strategy of Chan’s algorithm.

Moreover, we propose a general framework for detecting triangles in graphs and computing Boolean matrix multiplication. Roughly speaking, if we can find the “easy parts” of a given instance efficiently, we can solve the whole problem faster than $$n^3$$.

Huacheng Yu
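
The simplest combinatorial speedup for Boolean matrix multiplication is word-level bit-packing; the sketch below packs each row of $$B$$ into an integer so a row of the product is computed with word-parallel ORs (this illustrates the combinatorial flavor only, not the paper's far more refined algorithm):

```python
def boolean_matmul(A, B):
    """Boolean product of 0/1 matrices: C[i][j] = OR_t (A[i][t] AND B[t][j]).
    Rows of B are packed into Python integers (arbitrary-width bitsets),
    so each row of C is built by OR-ing whole rows instead of an inner
    loop over columns."""
    n, m = len(B), len(B[0])
    Brows = [sum(bit << j for j, bit in enumerate(row)) for row in B]
    C = []
    for arow in A:
        acc = 0
        for t in range(n):
            if arow[t]:
                acc |= Brows[t]  # OR in row t of B in one word operation
        C.append([(acc >> j) & 1 for j in range(m)])
    return C
```

On machine words of width $$w$$ this trick alone saves a factor of about $$w$$ over the naive triple loop; the log-shaving results above go well beyond it.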
Backmatter
Metadata
Title
Automata, Languages, and Programming
Editors
Magnús M. Halldórsson
Kazuo Iwama
Naoki Kobayashi
Bettina Speckmann
Copyright Year
2015
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-47672-7
Print ISBN
978-3-662-47671-0
DOI
https://doi.org/10.1007/978-3-662-47672-7
