
2009 | Book

Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques

12th International Workshop, APPROX 2009, and 13th International Workshop, RANDOM 2009, Berkeley, CA, USA, August 21-23, 2009. Proceedings

Edited by: Irit Dinur, Klaus Jansen, Joseph Naor, José Rolim

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Contributed Talks of APPROX

Approximation Algorithms and Hardness Results for Packing Element-Disjoint Steiner Trees in Planar Graphs

We study the problem of packing element-disjoint Steiner trees in graphs. We are given a graph and a designated subset of terminal nodes, and the goal is to find a maximum cardinality set of element-disjoint trees such that each tree contains every terminal node. An element means a non-terminal node or an edge. (Thus, each non-terminal node and each edge must be in at most one of the trees.) We show that the problem is APX-hard when there are only three terminal nodes, thus answering an open question.

Our main focus is on the special case when the graph is planar. We show that the problem of finding two element-disjoint Steiner trees in a planar graph is NP-hard. We design an algorithm for planar graphs that achieves an approximation guarantee close to 2. In fact, given a planar graph that is $k$ element-connected on the terminals ($k$ is an upper bound on the number of element-disjoint Steiner trees), the algorithm returns $\lfloor \frac{k}{2} \rfloor - 1$ element-disjoint Steiner trees. Using this algorithm, we get an approximation algorithm for the edge-disjoint version of the problem on planar graphs that improves on the previous approximation guarantees. We also show that the natural LP relaxation of the planar problem has an integrality ratio approaching 2.

Ashkan Aazami, Joseph Cheriyan, Krishnam Raju Jampani
Adaptive Sampling for k-Means Clustering

We show that adaptively sampled $O(k)$ centers give a constant factor bi-criteria approximation for the $k$-means problem, with constant probability. Moreover, these $O(k)$ centers contain a subset of $k$ centers which gives a constant factor approximation, and this subset can be found using the LP-based techniques of Jain and Vazirani [JV01] and Charikar et al. [CGTS02]. Both of these algorithms run in effectively $O(nkd)$ time and extend the $O(\log k)$-approximation achieved by the $k$-means++ algorithm of Arthur and Vassilvitskii [AV07].

Ankit Aggarwal, Amit Deshpande, Ravi Kannan
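The adaptive sampling referred to above is D²-style seeding as in k-means++: each new center is drawn with probability proportional to a point's squared distance to its nearest chosen center. A minimal sketch under that reading, with illustrative names (not code from the paper):

```python
import random

def adaptive_sample(points, num_centers, dist2):
    # First center: uniform over the data set.
    centers = [random.choice(points)]
    while len(centers) < num_centers:
        # Weight each point by its squared distance to the
        # nearest center chosen so far (D^2 weighting).
        weights = [min(dist2(p, c) for c in centers) for p in points]
        total = sum(weights)
        if total == 0:
            # Every point coincides with a center; any choice works.
            centers.append(random.choice(points))
            continue
        r = random.uniform(0, total)
        acc = 0.0
        for p, w in zip(points, weights):
            acc += w
            if acc >= r:
                centers.append(p)
                break
    return centers

# One-dimensional example with squared Euclidean distance.
centers = adaptive_sample(list(range(100)), 5, lambda a, b: (a - b) ** 2)
```

Sampling $O(k)$ centers this way (rather than exactly $k$) is what yields the bi-criteria guarantee discussed in the abstract.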
Approximations for Aligned Coloring and Spillage Minimization in Interval and Chordal Graphs

We consider the problem of aligned coloring of interval and chordal graphs. These problems have substantial applications to register allocation in compilers and have recently been proven NP-hard. We provide the first constant-factor approximations: a $\frac{4}{3}$-approximation for interval graphs and a $\frac{3}{2}$-approximation for chordal graphs. We extend our techniques to the problem of minimizing spillage in these graph types.

Douglas E. Carroll, Adam Meyerson, Brian Tagiku
Unsplittable Flow in Paths and Trees and Column-Restricted Packing Integer Programs

We consider the unsplittable flow problem (UFP) and the closely related column-restricted packing integer programs (CPIPs). In UFP we are given an edge-capacitated graph $G = (V,E)$ and $k$ request pairs $R_1, \ldots, R_k$, where each $R_i$ consists of a source-destination pair $(s_i, t_i)$, a demand $d_i$ and a weight $w_i$. The goal is to find a maximum weight subset of requests that can be routed unsplittably in $G$. Most previous work on UFP has focused on the no-bottleneck case, in which the maximum demand of the requests is at most the smallest edge capacity. Inspired by the recent work of Bansal et al. [3] on UFP on a path without the above assumption, we consider UFP on paths as well as trees. We give a simple $O(\log n)$ approximation for UFP on trees when all weights are identical; this yields an $O(\log^2 n)$ approximation for the weighted case. These are the first non-trivial approximations for UFP on trees. We develop an LP relaxation for UFP on paths that has an integrality gap of $O(\log^2 n)$; previously there was no relaxation with $o(n)$ gap. We also consider UFP in general graphs and CPIPs without the no-bottleneck assumption and obtain new and useful results.

Chandra Chekuri, Alina Ene, Nitish Korula
Truthful Mechanisms via Greedy Iterative Packing

An important research thread in algorithmic game theory studies the design of efficient truthful mechanisms that approximate the optimal social welfare. A fundamental question is whether an $\alpha$-approximation algorithm translates into an $\alpha$-approximate truthful mechanism. It is well known that plugging an $\alpha$-approximation algorithm into the VCG technique may not yield a truthful mechanism. Thus, it is natural to investigate properties of approximation algorithms that enable their use in truthful mechanisms.

The main contribution of this paper is to identify a useful and natural property of approximation algorithms, which we call loser-independence; this property is applicable in the single-minded and single-parameter settings. Intuitively, a loser-independent algorithm does not change its outcome when the bid of a losing agent increases, unless that agent becomes a winner. We demonstrate that loser-independent algorithms can be employed as sub-procedures in a greedy iterative packing approach while preserving monotonicity. A greedy iterative approach provides a good approximation in the context of maximizing a non-decreasing submodular function subject to independence constraints. Our framework gives rise to truthful approximation mechanisms for various problems, notably some that arise in online mechanism design.

Chandra Chekuri, Iftah Gamzu
Resource Minimization Job Scheduling

Given a set $J$ of jobs, where each job $j$ is associated with a release date $r_j$, a deadline $d_j$ and a processing time $p_j$, our goal is to schedule all jobs using the minimum possible number of machines. Scheduling a job $j$ requires selecting an interval of length $p_j$ between its release date and deadline, and assigning it to a machine, with the restriction that each machine executes at most one job at any given time. This is one of the basic settings in resource-minimization job scheduling, and the classical randomized rounding technique of Raghavan and Thompson provides an $O(\log n/\log\log n)$-approximation for it. This result has recently been improved to an $O(\sqrt{\log n})$-approximation, and moreover an efficient algorithm for scheduling all jobs on $O(\mathrm{OPT}^2)$ machines has been shown. We build on this prior work to obtain a constant factor approximation algorithm for the problem.

Julia Chuzhoy, Paolo Codenotti
The Power of Preemption on Unrelated Machines and Applications to Scheduling Orders

Scheduling jobs on unrelated parallel machines so as to minimize the makespan is one of the basic, well-studied problems in the area of machine scheduling. In the first part of the paper we prove that the power of preemption, i.e., the ratio between the makespan of an optimal nonpreemptive and an optimal preemptive schedule, is exactly 4. This result is a definite answer to an important basic open problem in scheduling. The proof of the lower bound is based on a clever iterative construction, while the rounding technique we use to prove the upper bound is an adaptation of Shmoys and Tardos' rounding for the generalized assignment problem. In the second part of the paper we apply this adaptation to the more general setting in which orders, consisting of several jobs, have to be processed on unrelated parallel machines so as to minimize the sum of weighted completion times of the orders. We obtain the first constant factor approximation algorithms for the preemptive and nonpreemptive case, improving and extending a recent result by Leung et al.

José R. Correa, Martin Skutella, José Verschae
New Hardness Results for Diophantine Approximation

We revisit simultaneous Diophantine approximation, a classical problem from the geometry of numbers which has many applications in algorithms and complexity. The input to the decision version of this problem consists of a rational vector $\alpha \in \mathbb{Q}^n$, an error bound $\varepsilon$ and a denominator bound $N \in \mathbb{N}_+$. One has to decide whether there exists an integer, called the denominator, $Q$ with $1 \leq Q \leq N$ such that the distance of each number $Q \cdot \alpha_i$ to its nearest integer is bounded by $\varepsilon$. Lagarias has shown that this problem is NP-complete, and optimization versions have been shown to be hard to approximate within a factor $n^{c/\log\log n}$ for some constant $c > 0$. We strengthen the existing hardness results and show that the optimization problem of finding the smallest denominator $Q \in \mathbb{N}_+$ such that the distances of $Q \cdot \alpha_i$ to the nearest integer are bounded by $\varepsilon$ is hard to approximate within a factor $2^n$ unless ${\textrm{P}} = {\rm NP}$.

We then outline two further applications of this strengthening: we show that a directed version of Diophantine approximation is also hard to approximate, and we prove that the mixing set problem with arbitrary capacities is NP-hard. This solves an open problem raised by Conforti, Di Summa and Wolsey.

Friedrich Eisenbrand, Thomas Rothvoß
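The optimization version defined above can be solved by brute force over $Q$ (time exponential in the bit length of $N$, so illustrative only; function names are ours):

```python
from fractions import Fraction

def smallest_denominator(alpha, N, eps):
    # Return the smallest integer Q with 1 <= Q <= N such that every
    # Q * alpha_i lies within eps of some integer, or None if no such
    # denominator exists (then the decision version answers "no").
    for Q in range(1, N + 1):
        if all(abs(Q * a - round(Q * a)) <= eps for a in alpha):
            return Q
    return None

# alpha = (1/3, 1/7): both Q * alpha_i are near-integral first at Q = 21.
q = smallest_denominator([Fraction(1, 3), Fraction(1, 7)], 25, Fraction(1, 10))
```

The hardness result above says that even estimating this smallest $Q$ within a factor $2^n$ is intractable, so no algorithm can do fundamentally better than such exhaustive search unless P = NP.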
PASS Approximation
A Framework for Analyzing and Designing Heuristics

We introduce a new framework for designing and analyzing algorithms. Our framework applies best to problems that are inapproximable according to the standard worst-case analysis. We circumvent such negative results by designing guarantees for classes of instances, parameterized according to properties of the optimal solution. We also make sure that our parameterized approximation, called PArametrized by the Signature of the Solution (PASS) approximation, is the best possible. We show how to apply our framework to problems with additive and submodular objective functions, such as the capacitated maximum facility location problems. We consider two types of algorithms for these problems. For greedy algorithms, our framework provides a justification for preferring a certain natural greedy rule over some alternative greedy rules that have been used in similar contexts. For LP-based algorithms, we show that the natural LP relaxation for these problems is not optimal in our framework. We design a new LP relaxation and show that this LP relaxation coupled with a new randomized rounding technique is optimal in our framework.

In passing, we note that our results strictly improve over previous results of Kleinberg, Papadimitriou and Raghavan [JACM 2004] concerning the approximation ratio of the greedy algorithm.

Uriel Feige, Nicole Immorlica, Vahab S. Mirrokni, Hamid Nazerzadeh
Optimal Sherali-Adams Gaps from Pairwise Independence

This work considers the problem of approximating fixed-predicate constraint satisfaction problems (MAX k-CSP$(P)$). We show that if the set of assignments accepted by $P$ contains the support of a balanced pairwise independent distribution over the domain of the inputs, then such a problem on $n$ variables cannot be approximated better than the trivial (random) approximation, even using $\Omega(n)$ levels of the Sherali-Adams LP hierarchy.

It was recently shown [3] that, under the Unique Games Conjecture, CSPs with predicates satisfying this condition cannot be approximated better than the trivial approximation. Our results can be viewed as an unconditional analogue of this result in the restricted computational model defined by the Sherali-Adams hierarchy. We also introduce a new generalization of techniques to define consistent "local distributions" over partial assignments to variables in the problem, which is often the crux of proving lower bounds for such hierarchies.

Konstantinos Georgiou, Avner Magen, Madhur Tulsiani
An Approximation Scheme for Terrain Guarding

We obtain a polynomial time approximation scheme for the terrain guarding problem improving upon several recent constant factor approximations. Our algorithm is a local search algorithm inspired by the recent results of Chan and Har-Peled [2] and Mustafa and Ray [15]. Our key contribution is to show the existence of a planar graph that appropriately relates the local and global optimum.

Matt Gibson, Gaurav Kanade, Erik Krohn, Kasturi Varadarajan
Scheduling with Outliers

In classical scheduling problems, we are given jobs and machines, and have to schedule all the jobs to minimize some objective function. What if each job has a specified profit, and we are no longer required to process all jobs? Instead, we can schedule any subset of jobs whose total profit is at least a (hard) target profit requirement, while still trying to approximately minimize the objective function.

We refer to this class of problems as scheduling with outliers. This model was initiated by Charikar and Khuller (SODA '06) for minimum max-response time in broadcast scheduling. In this paper, we consider three other well-studied scheduling objectives: the generalized assignment problem, average weighted completion time, and average flow time, for which LP-based approximation algorithms are provided. Our main results are:

For the minimum average flow time problem on identical machines, we give an LP-based logarithmic approximation algorithm for the unit profits case, and complement this result by presenting a matching integrality gap.

For the average weighted completion time problem on unrelated machines, we give a constant-factor approximation. The algorithm is based on randomized rounding of the time-indexed LP relaxation strengthened by knapsack-cover inequalities.

For the generalized assignment problem with outliers, we outline a simple reduction to GAP without outliers to obtain an algorithm whose makespan is within 3 times the optimum makespan, and whose cost is at most $(1+\varepsilon)$ times the optimal cost.

Anupam Gupta, Ravishankar Krishnaswamy, Amit Kumar, Danny Segev
Improved Inapproximability Results for Maximum k-Colorable Subgraph

We study the maximization version of the fundamental graph coloring problem. Here the goal is to color the vertices of a $k$-colorable graph with $k$ colors so that a maximum fraction of edges are properly colored (i.e., their endpoints receive different colors). A random $k$-coloring properly colors an expected fraction $1-\frac{1}{k}$ of edges. We prove that, given a graph promised to be $k$-colorable, it is NP-hard to find a $k$-coloring that properly colors more than a fraction $\approx 1-\frac{1}{33 k}$ of edges. Previously, only a hardness factor of $1- O\bigl(\frac{1}{k^2}\bigr)$ was known. Our result pins down the correct asymptotic dependence of the approximation factor on $k$. Along the way, we prove that approximating the Maximum 3-Colorable Subgraph problem within a factor greater than $\frac{32}{33}$ is NP-hard.

Using semidefinite programming, it is known that one can do better than a random coloring and properly color a fraction $1-\frac{1}{k} +\frac{2 \ln k}{k^2}$ of edges in polynomial time. We show that, assuming the 2-to-1 conjecture, it is hard to properly color (using $k$ colors) more than a fraction $1-\frac{1}{k} + O\left(\frac{\ln k}{k^2}\right)$ of edges of a $k$-colorable graph.

Venkatesan Guruswami, Ali Kemal Sinop
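The $1-\frac{1}{k}$ baseline mentioned above can be verified exactly on small graphs by averaging over all $k$-colorings; each edge is properly colored with probability exactly $1-\frac{1}{k}$. A brute-force check with illustrative names:

```python
from itertools import product

def expected_proper_fraction(edges, num_vertices, k):
    # Average, over all k^num_vertices colorings chosen uniformly at
    # random, of the fraction of properly colored edges.
    total = 0
    for coloring in product(range(k), repeat=num_vertices):
        total += sum(coloring[u] != coloring[v] for u, v in edges)
    return total / (k ** num_vertices * len(edges))

# A triangle with k = 3: each edge is properly colored with
# probability 1 - 1/3, so the expected fraction is 2/3.
frac = expected_proper_fraction([(0, 1), (1, 2), (0, 2)], 3, 3)
```

The hardness result above says that beating this trivial baseline by more than $\approx \frac{1}{33k}$ is NP-hard even on $k$-colorable graphs.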
Improved Absolute Approximation Ratios for Two-Dimensional Packing Problems

We consider the two-dimensional bin packing and strip packing problems, where a list of rectangles has to be packed into a minimal number of rectangular bins or a strip of minimal height, respectively. All packings have to be non-overlapping and orthogonal, i.e., axis-parallel. Our algorithm for strip packing has an absolute approximation ratio of 1.9396 and is the first algorithm to break the approximation ratio of 2, which was established more than a decade ago. Moreover, we present a polynomial time approximation scheme ($\mathcal{PTAS}$) for strip packing where rotations by 90 degrees are permitted, and an algorithm for two-dimensional bin packing with an absolute worst-case ratio of 2, which is optimal provided $\mathcal{P} \not= \mathcal{NP}$.

Rolf Harren, Rob van Stee
On the Optimality of Gluing over Scales

We show that for every $\alpha > 0$, there exist $n$-point metric spaces $(X,d)$ where every "scale" admits a Euclidean embedding with distortion at most $\alpha$, but the whole space requires distortion at least $\Omega(\sqrt{\alpha \log n})$. This shows that the scale-gluing lemma [Lee, SODA 2005] is tight, and disproves a conjecture stated there. This matching upper bound was known to be tight at both endpoints, i.e., when $\alpha = \Theta(1)$ and $\alpha = \Theta(\log n)$, but nowhere in between.

More specifically, we exhibit $n$-point spaces with doubling constant $\lambda$ requiring Euclidean distortion $\Omega(\sqrt{\log \lambda \log n})$, which also shows that the technique of "measured descent" [Krauthgamer et al., Geometric and Functional Analysis] is optimal. We extend this to $L_p$ spaces with $p > 1$, where one requires distortion at least $\Omega((\log n)^{1/q}(\log \lambda)^{1-1/q})$ when $q = \max\{p,2\}$, a result which is tight for every $p > 1$.

Alex Jaffe, James R. Lee, Mohammad Moharrami
On Hardness of Pricing Items for Single-Minded Bidders

We consider the following item pricing problem, which has received much attention recently. A seller has an infinite number of copies of $n$ items. There are $m$ buyers, each with a budget and an intention to buy a fixed subset of items. Given prices on the items, each buyer buys his subset of items, at the given prices, provided the total price of the subset is at most his budget. The objective of the seller is to determine the prices such that her total profit is maximized.

In this paper, we focus on the case where the buyers are interested in subsets of size at most two. This special case is known to be APX-hard (Guruswami et al. [1]). The best known approximation algorithm, by Balcan and Blum, gives a 4-approximation [2]. We show that there is indeed a gap of 4 for the combinatorial upper bound used in their analysis. We further show that a natural linear programming relaxation of this problem has an integrality gap of 4, even in this special case. Then we prove that the problem is NP-hard to approximate within a factor of 2 assuming the Unique Games Conjecture, and that it is unconditionally NP-hard to approximate within a factor of 17/16. Finally, we extend the APX-hardness of the problem to the special case in which the graph formed by items as vertices and buyers as edges is bipartite.

We hope that our techniques will be helpful for obtaining stronger hardness of approximation bounds for this problem.

Rohit Khandekar, Tracy Kimbrel, Konstantin Makarychev, Maxim Sviridenko
Real-Time Message Routing and Scheduling

Exchanging messages between nodes of a network (e.g., embedded computers) is a fundamental issue in real-time systems involving critical routing and scheduling decisions. In order for messages to arrive on time, one has to determine a suitable (short) origin-destination path for each message and resolve conflicts between messages whose paths share a communication link of the network. We provide efficient routing strategies yielding origin-destination paths of bounded dilation and congestion. In particular, we can give good a priori guarantees on the time required to send a given set of messages, which, under certain reasonable conditions, implies that all messages can be scheduled to reach their destination on time. Our algorithm uses a path-based LP-relaxation and iterative rounding. Finally, for message routing along a directed path (which is already $\mathcal{NP}$-hard), we identify a natural class of instances for which a simple scheduling heuristic yields provably optimal solutions.

Ronald Koch, Britta Peis, Martin Skutella, Andreas Wiese
Approximating Some Network Design Problems with Node Costs

We study several multi-criteria undirected network design problems with node costs and lengths, all related to the node-cost Multicommodity Buy at Bulk (MBB) problem, in which we are given a graph $G = (V,E)$, demands $\{d_{st} : s,t \in V\}$, and a family $\{c_v : v \in V\}$ of subadditive cost functions. For every $s,t \in V$ we seek to send $d_{st}$ flow units from $s$ to $t$ on a single path, so that $\sum_v c_v(f_v)$ is minimized, where $f_v$ is the total amount of flow through $v$. In the Multicommodity Cost-Distance (MCD) problem we are also given lengths $\{\ell(v) : v \in V\}$, and seek a subgraph $H$ of $G$ that minimizes $c(H) + \sum_{s,t \in V} d_{st} \cdot \ell_H(s,t)$, where $\ell_H(s,t)$ is the minimum $\ell$-length of an $st$-path in $H$. These two problems are equivalent with respect to approximation up to a factor arbitrarily close to 2. We give an $O(\log^3 n)$-approximation algorithm for both problems for the case of demands polynomial in $n$. The previously best known approximation ratio for these problems was $O(\log^4 n)$ [Chekuri et al., FOCS 2006; Chekuri et al., SODA 2007]. The technique seems quite robust and was already used to improve the ratio for Buy-at-Bulk with protection [Antonakopoulos et al., FOCS 2007] from $\log^3 h$ to $\log^2 h$.

We also consider the Maximum Covering Tree (MaxCT) problem, which is closely related to MBB: given a graph $G = (V,E)$, costs $\{c(v) : v \in V\}$, profits $\{p(v) : v \in V\}$, and a bound $C$, find a subtree $T$ of $G$ with $c(T) \leq C$ and $p(T)$ maximum. The best known approximation algorithm for MaxCT [Moss and Rabani, STOC 2001] computes a tree $T$ with $c(T) \leq 2C$ and $p(T) = \Omega(\mathrm{opt}/\log n)$. We provide the first nontrivial lower bound, and in fact a bicriteria lower bound on approximating this problem (which is stronger than the usual lower bound): the problem admits no better than $\Omega(1/\log\log n)$ approximation, assuming $\mbox{NP}\not\subseteq \mbox{Quasi(P)}$, even if the algorithm is allowed to violate the budget by any universal constant $\rho$. This disproves a conjecture of [Moss and Rabani, STOC 2001].

Another problem related to MBB is the Shallow-Light Steiner Tree (SLST) problem, in which we are given a graph $G = (V,E)$, costs $\{c(v) : v \in V\}$, lengths $\{\ell(v) : v \in V\}$, a set $U \subseteq V$ of terminals, and a bound $L$. The goal is to find a subtree $T$ of $G$ containing $U$ with $\mathrm{diam}(T) \leq L$ and $c(T)$ minimum. We give an algorithm that computes a tree $T$ with $c(T) = O(\log^2 n) \cdot \mathrm{opt}$ and $\mathrm{diam}(T) = O(\log n) \cdot L$. Previously, a polylogarithmic bicriteria approximation was known only for the case of edge costs and edge lengths.

Guy Kortsarz, Zeev Nutov
Submodular Maximization over Multiple Matroids via Generalized Exchange Properties

In this paper, we consider the problem of maximizing a non-negative submodular function $f$, defined on a (finite) ground set $N$, subject to matroid constraints. A function $f : 2^N \rightarrow \mathbb{R}$ is submodular if for all $S, T \subseteq N$, $f(S \cup T) + f(S \cap T) \leq f(S) + f(T)$. Furthermore, all submodular functions that we deal with are assumed to be non-negative. Throughout, we assume that our submodular function $f$ is given by a value oracle; i.e., for a given set $S \subseteq N$, an algorithm can query an oracle to find the value $f(S)$. Without loss of generality, we take the ground set $N$ to be $[n] = \{1,2,\ldots,n\}$.

Jon Lee, Maxim Sviridenko, Jan Vondrák
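The submodularity condition stated above can be checked exhaustively against a small value oracle. A sketch using a coverage function, a standard example of a submodular function (names are illustrative, not from the paper):

```python
from itertools import combinations

def is_submodular(f, ground):
    # Check f(S ∪ T) + f(S ∩ T) <= f(S) + f(T) for all subset pairs.
    subsets = [frozenset(c)
               for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r)]
    return all(f(S | T) + f(S & T) <= f(S) + f(T)
               for S in subsets for T in subsets)

# Value oracle for a coverage function f(S) = |union of covered sets|.
universe = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
def coverage(S):
    return len(set().union(set(), *(universe[x] for x in S)))

ok = is_submodular(coverage, universe.keys())
```

The check is exponential in $|N|$, which is exactly why algorithms in this setting only assume oracle access to $f$ rather than an explicit description.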
Robust Algorithms for Max Independent Set on Minor-Free Graphs Based on the Sherali-Adams Hierarchy

This work provides a Linear Programming-based Polynomial Time Approximation Scheme (PTAS) for two classical NP-hard problems on graphs when the input graph is guaranteed to be planar, or more generally minor-free. The algorithm applies a sufficiently large number $f(1/\varepsilon)$ of rounds of the so-called Sherali-Adams lift-and-project system to obtain a $(1+\varepsilon)$-approximation, where $f$ is some function that depends only on the graph that should be avoided as a minor. The problems we discuss are the well-studied Vertex Cover and Independent Set problems. A curious fact we expose is that in the world of minor-free graphs, one of these problems turns out to be harder in some sense than the other.

Our main result shows how to get a PTAS in the more general "noisy setting," in which input graphs are not assumed to be planar/minor-free, but only close to being so. In this setting we bound integrality gaps by $1+\varepsilon$, which in turn provides a $(1+\varepsilon)$-approximation of the optimum value; however, we do not know how to actually find a solution with this approximation guarantee. While there are known combinatorial algorithms for the non-noisy setting of the above graph problems, we know of no previous approximation algorithms in the noisy setting. Further, we give evidence that current combinatorial techniques will fail to generalize to this noisy setting.

Avner Magen, Mohammad Moharrami
Minimizing Average Shortest Path Distances via Shortcut Edge Addition

We consider adding $k$ shortcut edges (i.e., edges of small fixed length $\delta \geq 0$) to a graph so as to minimize the weighted average shortest path distance over all pairs of vertices. We explore several variations of the problem and give $O(1)$-approximations for each. We also improve the best known approximation ratio for metric $k$-median with penalties, as many of our approximations depend upon this bound. We give a $(1+2\frac{(p+1)}{\beta(p+1)-1},\beta)$-approximation with runtime exponential in $p$. If we set $\beta = 1$ (to be exact on the number of medians), this matches the best current $k$-median (without penalties) result.

Adam Meyerson, Brian Tagiku
Approximating Node-Connectivity Augmentation Problems

We consider the (undirected) Node Connectivity Augmentation (NCA) problem: given a graph $J = (V,E_J)$ and connectivity requirements $\{r(u,v) : u,v \in V\}$, find a minimum size set $I$ of new edges (any edge is allowed) so that $J + I$ contains $r(u,v)$ internally disjoint $uv$-paths, for all $u,v \in V$. In the Rooted NCA there is $s \in V$ so that $r(u,v) > 0$ implies $u = s$ or $v = s$. For large values of $k = \max_{u,v \in V} r(u,v)$, NCA is at least as hard to approximate as Label-Cover and thus is unlikely to admit a polylogarithmic approximation. Rooted NCA is at least as hard to approximate as Hitting-Set. The previously best approximation ratios for the problem were $O(k \ln n)$ for NCA and $O(\ln n)$ for Rooted NCA. In [Approximating connectivity augmentation problems, SODA 2005] the author posed the following open question: does there exist a function $\rho(k)$ so that NCA admits a $\rho(k)$-approximation algorithm? In this paper we answer this question by giving an approximation algorithm with ratios $O(k \ln^2 k)$ for NCA and $O(\ln^2 k)$ for Rooted NCA. This is the first approximation algorithm with ratio independent of $n$, and thus is a constant for any fixed $k$. Our algorithm is based on the following new structural result, which is of independent interest: if ${\cal D}$ is a set of node pairs in a graph $J$, then the maximum degree in the hypergraph formed by the inclusion-minimal tight sets separating at least one pair in ${\cal D}$ is $O(\ell^2)$, where $\ell$ is the maximum connectivity of a pair in ${\cal D}$.

Zeev Nutov
A 7/9-Approximation Algorithm for the Maximum Traveling Salesman Problem

We give a deterministic combinatorial 7/9-approximation algorithm for the symmetric maximum traveling salesman problem.

Katarzyna Paluch, Marcin Mucha, Aleksander Ma̧dry
Approximation Algorithms for Domatic Partitions of Unit Disk Graphs

We prove a new structural property regarding the "skyline" of uniform radius disks and use this to derive a number of new sequential and distributed approximation algorithms for well-known optimization problems on unit disk graphs (UDGs). Specifically, the paper presents new approximation algorithms for two problems: domatic partition and weighted minimum dominating set (WMDS) on UDGs, both of which are of significant interest to the distributed computing community because of applications to energy conservation in wireless networks. Using the aforementioned skyline property, we derive the first constant-factor approximation algorithm for the domatic partition problem on UDGs. Prior to our work, the best approximation factor for this problem was $O(\log n)$, obtained by simply using the approximation algorithm for general graphs. From the domatic partition algorithm, we derive a new and simpler constant-factor approximation for WMDS on UDGs. Because of "locality" properties that our algorithms possess, both algorithms have relatively simple constant-round distributed implementations in the $\mathcal{LOCAL}$ model, where there is no bound on the message size. In addition, we obtain $O(\log^2 n)$-round distributed implementations of these algorithms in the $\mathcal{CONGEST}$ model, where message sizes are bounded above by $O(\log n)$ bits per message.

Saurav Pandit, Sriram V. Pemmaraju, Kasturi Varadarajan
On the Complexity of the Asymmetric VPN Problem

We give the first constant factor approximation algorithm for the asymmetric Virtual Private Network (VPN) problem with arbitrary concave costs. We even show the stronger result that there is always a tree solution of cost at most $2 \cdot \mathrm{OPT}$, and that a tree solution of (expected) cost at most $49.84 \cdot \mathrm{OPT}$ can be determined in polynomial time.

For the case of linear cost we obtain a $(2+\varepsilon\frac{\mathcal R}{\mathcal S})$-approximation algorithm for any fixed $\varepsilon > 0$, where $\mathcal{S}$ and $\mathcal{R}$ ($\mathcal{R} \geq \mathcal{S}$) denote the outgoing and ingoing demand, respectively.

Furthermore, we answer an outstanding open question about the complexity status of the so-called balanced VPN problem by proving its NP-hardness.

Thomas Rothvoß, Laura Sanità

Contributed Talks of RANDOM

Deterministic Approximation Algorithms for the Nearest Codeword Problem

The Nearest Codeword Problem (NCP) is a basic algorithmic question in the theory of error-correcting codes. Given a point

$v \in \mathbb{F}_2^n$

and a linear space

$L\subseteq \mathbb{F}_2^n$

of dimension

k

NCP asks to find a point

l

 ∈ 

L

that minimizes the (Hamming) distance from

v

. It is well-known that the nearest codeword problem is NP-hard. Therefore approximation algorithms are of interest. The best efficient approximation algorithms for the NCP to date are due to Berman and Karpinski. They are a deterministic algorithm that achieves an approximation ratio of

O

(

k

/

c

) for an arbitrary constant

c

, and a randomized algorithm that achieves an approximation ratio of

O

(

k

/log

n

).

In this paper we present new deterministic algorithms for approximating the NCP that improve substantially upon the earlier work. Specifically, we obtain:

A polynomial time

O

(

n

/log

n

)-approximation algorithm;

An

n

O

(

s

)

time

O

(

k

log

(

s

)

n

/ log

n

)-approximation algorithm, where log

(

s

)

n

stands for

s

iterations of log, e.g., log

(2)

n

 = loglog

n

;

An

$n^{O(\log^* n)}$

time

O

(

k

/log

n

)-approximation algorithm.

We also initiate a study of the following Remote Point Problem (RPP). Given a linear space $L\subseteq \mathbb{F}_2^n$ of dimension k, RPP asks to find a point $v\in \mathbb{F}_2^n$ that is far from L. We say that an algorithm achieves a remoteness of r for the RPP if it always outputs a point v that is at least r-far from L. In this paper we present a deterministic polynomial time algorithm that achieves a remoteness of Ω(n log k / k) for all k ≤ n/2. We motivate the remote point problem by relating it to both the nearest codeword problem and the matrix rigidity approach to circuit lower bounds in computational complexity theory.

Noga Alon, Rina Panigrahy, Sergey Yekhanin
Strong Parallel Repetition Theorem for Free Projection Games

The parallel repetition theorem states that for any two-prover one-round game with value at most 1 − ε (for ε < 1/2), the value of the game repeated n times in parallel is at most $(1-\varepsilon^3)^{\Omega(n/\log s)}$, where s is the size of the answer set [Raz98], [Hol07]. For Projection Games the bound on the value of the game repeated n times in parallel was improved to $(1-\varepsilon^2)^{\Omega(n)}$ [Rao08] and was shown to be tight [Raz08]. In this paper we show that if the questions are taken according to a product distribution then the value of the repeated game is at most $(1-\varepsilon^2)^{\Omega(n/\log s)}$, and if in addition the game is a Projection Game we obtain a strong parallel repetition theorem, i.e., a bound of $(1-\varepsilon)^{\Omega(n)}$.

Boaz Barak, Anup Rao, Ran Raz, Ricky Rosen, Ronen Shaltiel
Random Low Degree Polynomials are Hard to Approximate

We study the problem of how well a typical multivariate polynomial can be approximated by lower degree polynomials over ${\mathbb{F}_{2}}$. We prove that, with very high probability, a random degree d + 1 polynomial has only an exponentially small correlation with all polynomials of degree d, for all degrees d up to $\Theta\left(n\right)$. That is, a random degree d + 1 polynomial does not admit a good approximation of lower degree. In order to prove this, we prove far tail estimates on the distribution of the bias of a random low degree polynomial. Recently, several results regarding the weight distribution of Reed–Muller codes were obtained. Our results can be interpreted as a new large deviation bound on the weight distribution of Reed–Muller codes.

Ido Ben-Eliezer, Rani Hod, Shachar Lovett
Composition of Semi-LTCs by Two-Wise Tensor Products

We continue the study of the local testability of error-correcting codes constructed by taking the two-wise tensor product of a "base-code" with itself. We show that if the base-code is any locally testable code (LTC) or any expander code, then the code obtained by taking the repeated two-wise tensor product of the base-code with itself is locally testable. This extends the results of Dinur et al. [11] in two ways. First, we answer a question posed in that paper by expanding the class of allowed base-codes to include all locally testable codes, and not just so-called uniform LTCs, whose associated tester queries all codeword entries with equal probability. Second, we show that repeating the two-wise tensor operation a constant number of times still results in a locally testable code, improving upon previous results, which only worked when the tensor product was applied once.

To obtain our results we define a new tester for the tensor product of LTCs. Our tester uses the distribution of the tester associated with the base-code to sample rows and columns of the product code. This construction differs from previously studied testers for tensor product codes, which sampled rows and columns uniformly.

Eli Ben-Sasson, Michael Viderman
On the Security of Goldreich’s One-Way Function

Goldreich (ECCC 2000) suggested a simple construction of a candidate one-way function f: {0,1}^n → {0,1}^m where each bit of output is a fixed predicate P of a constant number d of (random) input bits. We investigate the security of this construction in the regime m = Dn, where D(d) is a sufficiently large constant. We prove that for any predicate P that correlates with either one or two of its variables, f can be inverted with high probability.

We also prove an amplification claim regarding Goldreich's construction. Suppose we are given an assignment x′ ∈ {0,1}^n that has correlation ε > 0 with the hidden assignment x ∈ {0,1}^n. Then, given access to x′, it is possible to invert f on x with high probability, provided D = D(d, ε) is sufficiently large.
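The construction itself is easy to state in code. The sketch below is our illustration of the general shape of Goldreich's candidate function: m output bits, each a fixed d-ary predicate of d randomly chosen input bits. The particular predicate P and the parameters n, m, d here are arbitrary choices for demonstration, not the ones analyzed in the paper.

```python
import random

def goldreich_f(x, predicate, index_sets):
    """Goldreich-style candidate OWF: output bit j applies a fixed d-ary
    predicate to the input bits listed in index_sets[j]."""
    return [predicate(*(x[i] for i in S)) for S in index_sets]

# hypothetical predicate with d = 3: XOR of one bit with an AND of two others
def P(a, b, c):
    return a ^ (b & c)

n, m, d = 8, 16, 3          # toy sizes; the paper studies m = Dn, D large
rng = random.Random(0)
index_sets = [tuple(rng.randrange(n) for _ in range(d)) for _ in range(m)]
x = [rng.randint(0, 1) for _ in range(n)]
y = goldreich_f(x, P, index_sets)
```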

Andrej Bogdanov, Youming Qiao
Random Tensors and Planted Cliques

The r-parity tensor of a graph is a generalization of the adjacency matrix, where the tensor's entries denote the parity of the number of edges in subgraphs induced by r distinct vertices. For r = 2, it is the adjacency matrix with 1's for edges and −1's for nonedges. It is well known that the 2-norm of the adjacency matrix of a random graph is $O(\sqrt{n})$. Here we show that the 2-norm of the r-parity tensor is at most $f(r)\sqrt{n}\log^{O(r)}n$, answering a question of Frieze and Kannan [1] who proved this for r = 3. As a consequence, we get a tight connection between the planted clique problem and the problem of finding a vector that approximates the 2-norm of the r-parity tensor of a random graph. Our proof method is based on an inductive application of concentration of measure.

S. Charles Brubaker, Santosh S. Vempala
Sampling s-Concave Functions: The Limit of Convexity Based Isoperimetry

Efficient sampling, integration and optimization algorithms for logconcave functions [BV04, KV06, LV06a] rely on the good isoperimetry of these functions. We extend this to show that −1/(n − 1)-concave functions have good isoperimetry, and moreover, using a characterization of functions based on their values along every line, we prove that this is the largest class of functions with good isoperimetry in the spectrum from concave to quasi-concave. We give an efficient sampling algorithm based on a random walk for −1/(n − 1)-concave probability densities satisfying a smoothness criterion, which includes heavy-tailed densities such as the Cauchy density. In addition, the mixing time of this random walk for the Cauchy density matches the corresponding best known bounds for logconcave densities.

Karthekeyan Chandrasekaran, Amit Deshpande, Santosh Vempala
Average-Case Analyses of Vickrey Costs

We explore the average-case “Vickrey” cost of structures in three random settings: the Vickrey cost of a shortest path in a complete graph or digraph with random edge weights; the Vickrey cost of a minimum spanning tree (MST) in a complete graph with random edge weights; and the Vickrey cost of a perfect matching in a complete bipartite graph with random edge weights. In each case, in the large-size limit, the Vickrey cost is precisely 2 times the (non-Vickrey) minimum cost, but this is the result of case-specific calculations, with no general reason found for it to be true.

Separately, we consider the problem of sparsifying a complete graph with random edge weights so that all-pairs shortest paths are preserved approximately. The problem of sparsifying a given graph so that for every pair of vertices, the length of the shortest path in the sparsified graph is within some multiplicative factor and/or additive constant of the original distance has received substantial study in theoretical computer science. For the complete digraph ${\vec{K}_n}$ with random edge weights, we show that whp Θ(n ln n) edges are necessary and sufficient for a spanning subgraph to give good all-pairs shortest paths approximations.

Prasad Chebolu, Alan Frieze, Páll Melsted, Gregory B. Sorkin
A Hypergraph Dictatorship Test with Perfect Completeness

A hypergraph dictatorship test was first introduced by Samorodnitsky and Trevisan and serves as a key component in their unique games based PCP construction. Such a test has oracle access to a collection of functions and determines whether all the functions are the same dictatorship, or all their low degree influences are o(1). Their test makes q ≥ 3 queries and has amortized query complexity $1+O\left(\frac{\log q}{q}\right)$, but has an inherent loss of perfect completeness. In this paper we give an (adaptive) hypergraph dictatorship test that achieves both perfect completeness and amortized query complexity $1+O\left(\frac{\log q}{q}\right)$.

Victor Chen
Extractors Using Hardness Amplification

Zimand [24] presented simple constructions of locally computable strong extractors whose analysis relies on the direct product theorem for one-way functions and on the Blum–Micali–Yao generator. For N-bit sources of entropy γN, his extractor has seed length $O(\log^2 N)$ and extracts $N^{\gamma/3}$ random bits.

We show that his construction can be analyzed based solely on the direct product theorem for general functions. Using the direct product theorem of Impagliazzo et al. [6], we show that Zimand's construction can extract $\tilde \Omega_\gamma (N^{1/3})$ random bits. (As in Zimand's construction, the seed length is $O(\log^2 N)$ bits.)

We also show that a simplified construction can be analyzed based solely on the XOR lemma. Using Levin's proof of the XOR lemma [8], we provide an alternative simpler construction of a locally computable extractor with seed length $O(\log^2 N)$ and output length $\tilde \Omega_\gamma (N^{1/3})$.

Finally, we show that the derandomized direct product theorem of Impagliazzo and Wigderson [7] can be used to derive a locally computable extractor construction with O(log N) seed length and $\tilde \Omega (N^{1/5})$ output length. Zimand describes a construction with O(log N) seed length and $O(2^{\sqrt{\log N}})$ output length.

Anindya De, Luca Trevisan
How Well Do Random Walks Parallelize?

A random walk on a graph is a process that explores the graph in a random way: at each step the walk is at a vertex of the graph, and at each step it moves to a uniformly selected neighbor of this vertex. Random walks are extremely useful in computer science and in other fields. A very natural problem that was recently raised by Alon, Avin, Koucky, Kozma, Lotker, and Tuttle (though it was implicit in several previous papers) is to analyze the behavior of k independent walks in comparison with the behavior of a single walk. In particular, Alon et al. showed that in various settings (e.g., for expander graphs), k random walks cover the graph (i.e., visit all its nodes) Ω(k) times faster (in expectation) than a single walk. In other words, in such cases k random walks efficiently "parallelize" a single random walk. Alon et al. also demonstrated that, depending on the specific setting, this "speedup" can vary from logarithmic to exponential in k.

In this paper we initiate a more systematic study of multiple random walks. We give lower and upper bounds both on the cover time and on the hitting time (the time it takes to hit one specific node) of multiple random walks. Our study revolves around three alternatives for the starting vertices of the random walks: the worst starting vertices (those that maximize the hitting/cover time), the best starting vertices, and starting vertices selected from the stationary distribution. Among our results, we show that the speedup when starting the walks at the worst vertices cannot be too large: the hitting time cannot improve by more than an O(k) factor, and the cover time cannot improve by more than min{k log n, k²} (where n is the number of vertices). These results should be contrasted with the fact that there was no previously known upper bound on the speedup and that the speedup can even be exponential in k for random starting vertices. Some of these results were independently obtained by Elsässer and Sauerwald (ICALP 2009). We further show that for k that is not too large (as a function of various parameters of the graph), the speedup in cover time is O(k) even for walks that start from the best vertices (those that minimize the cover time). As a rather surprising corollary of our theorems, we obtain a new bound relating the cover time C and the mixing time mix of a graph. Specifically, we show that $C=O(m \sqrt{\mathrm{mix}}\log ^2n)$ (where m is the number of edges).
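The cover-time speedup of multiple walks is easy to observe empirically. The following small simulation, our own sketch rather than anything from the paper, measures the cover time of one walk versus four walks on a cycle (a graph where spreading the starting vertices helps substantially).

```python
import random

def cover_time(adj, starts, rng):
    """Steps until the walks (one per start vertex) have jointly visited
    every vertex; at each step every walk moves to a random neighbor."""
    walkers = list(starts)
    visited = set(walkers)
    steps = 0
    while len(visited) < len(adj):
        walkers = [rng.choice(adj[v]) for v in walkers]
        visited.update(walkers)
        steps += 1
    return steps

# cycle on 20 vertices; average cover times over 50 trials each
n = 20
adj = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
rng = random.Random(1)
single = sum(cover_time(adj, [0], rng) for _ in range(50)) / 50
four = sum(cover_time(adj, [0, 5, 10, 15], rng) for _ in range(50)) / 50
```

On the cycle a single walk needs on the order of n² steps to cover, while four spread-out walks each only need to cover a gap of n/4, so `four` comes out far smaller than `single`.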

Klim Efremenko, Omer Reingold
An Analysis of Random-Walk Cuckoo Hashing

In this paper, we provide a polylogarithmic bound that holds with high probability on the insertion time for cuckoo hashing under the random-walk insertion method. Cuckoo hashing provides a useful methodology for building practical, high-performance hash tables. The essential idea of cuckoo hashing is to combine the power of schemes that allow multiple hash locations for an item with the power to dynamically change the location of an item among its possible locations. Previous work on the case where the number of choices is larger than two has required a breadth-first search analysis, which is both inefficient in practice and currently has only a polynomial high probability upper bound on the insertion time. Here we significantly advance the state of the art by proving a polylogarithmic bound on the more efficient random-walk method, where items repeatedly kick out random blocking items until a free location for an item is found.
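The random-walk insertion method the abstract analyzes can be sketched in a few lines. This is our simplified illustration (toy hash functions, a plain dict as the table), not the paper's exact model: when all of a key's candidate slots are occupied, a uniformly random occupant is kicked out and reinserted in the same way.

```python
import random

def rw_cuckoo_insert(table, key, hashes, rng, max_kicks=500):
    """Random-walk cuckoo insertion with d hash choices: place the key in a
    free slot if one exists; otherwise evict a uniformly random blocking
    item and continue with the evicted key."""
    for _ in range(max_kicks):
        slots = [h(key) for h in hashes]
        free = [s for s in slots if table.get(s) is None]
        if free:
            table[free[0]] = key
            return True
        s = rng.choice(slots)           # kick out a random blocking item
        table[s], key = key, table[s]
    return False                        # give up (would trigger a rehash)

# toy example: d = 3 hypothetical hash functions into a table of size 16
hashes = [lambda k, a=a: (k * a + a) % 16 for a in (3, 5, 11)]
table = {}
rng = random.Random(2)
ok = all(rw_cuckoo_insert(table, k, hashes, rng) for k in range(10))
```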

Alan Frieze, Páll Melsted, Michael Mitzenmacher
Hierarchy Theorems for Property Testing

Referring to the query complexity of property testing, we prove the existence of a rich hierarchy of corresponding complexity classes. That is, for any relevant function q, we prove the existence of properties that have testing complexity Θ(q). Such results are proven in three standard domains often considered in property testing: generic functions, adjacency predicates describing (dense) graphs, and incidence functions describing bounded-degree graphs. While in two cases the proofs are quite straightforward, the techniques employed in the case of the dense graph model seem significantly more involved. Specifically, problems that arise and are treated in the latter case include (1) the preservation of distances between graphs under a blow-up operation, and (2) the construction of monotone graph properties that have local structure.

Oded Goldreich, Michael Krivelevich, Ilan Newman, Eyal Rozenberg
Algorithmic Aspects of Property Testing in the Dense Graphs Model

In this paper we consider two basic questions regarding the query complexity of testing graph properties in the adjacency matrix model. The first question refers to the relation between adaptive and non-adaptive testers, whereas the second question refers to testability within complexity that is inversely proportional to the proximity parameter, denoted ε. The study of these questions reveals the importance of algorithmic design (also) in this model. The highlights of our study are:

A gap between the complexity of adaptive and non-adaptive testers. Specifically, there exists a (natural) graph property that can be tested using ${\widetilde{O}}(\epsilon^{-1})$ adaptive queries, but cannot be tested using $o(\epsilon^{-3/2})$ non-adaptive queries.

In contrast, there exist natural graph properties that can be tested using ${\widetilde{O}}(\epsilon^{-1})$ non-adaptive queries, whereas $\Omega(\epsilon^{-1})$ queries are required even in the adaptive case.

We mention that the properties used in the foregoing conflicting results have a similar flavor, although they are of course different.

Oded Goldreich, Dana Ron
Succinct Representation of Codes with Applications to Testing

Motivated by questions in property testing, we search for linear error-correcting codes that have the "single local orbit" property: they are specified by a single local constraint and its translations under the symmetry group of the code. We show that the dual of every "sparse" binary code whose coordinates are indexed by elements of ${\mathbb{F}}_{2^n}$ for prime n, and whose symmetry group includes the group of non-singular affine transformations of ${\mathbb{F}}_{2^n}$, has the single local orbit property. (A code is sparse if it contains polynomially many codewords in its block length.) In particular this class includes the dual-BCH codes, for whose duals (BCH codes) simple bases were not known. Our result gives the first short (O(n)-bit, as opposed to $\exp(n)$-bit) description of a low-weight basis for BCH codes. If $2^n − 1$ is a Mersenne prime, then we get that every sparse cyclic code also has the single local orbit property.

Elena Grigorescu, Tali Kaufman, Madhu Sudan
Efficient Quantum Tensor Product Expanders and k-Designs

Quantum expanders are a quantum analogue of expanders, and k-tensor product expanders are a generalisation to graphs that randomise k correlated walkers. Here we give an efficient construction of constant-degree, constant-gap quantum k-tensor product expanders. The key ingredients are an efficient classical tensor product expander and the quantum Fourier transform. Our construction works whenever k = O(n/log n), where n is the number of qubits. An immediate corollary of this result is an efficient construction of an approximate unitary k-design, which is a quantum analogue of an approximate k-wise independent function, on n qubits for any k = O(n/log n). Previously, no efficient constructions were known for k > 2, while state designs, of which unitary designs are a generalisation, were constructed efficiently in [1].

Aram W. Harrow, Richard A. Low
Hellinger Strikes Back: A Note on the Multi-party Information Complexity of AND

The ${\textsc{And}}$ problem on t bits is a promise decision problem where either at most one bit of the input is set to 1 (${\textsc{No}}$ instance) or all t bits are set to 1 (${\textsc{Yes}}$ instance). In this note, I will give a new proof of an Ω(1/t) lower bound on the information complexity of ${\textsc{And}}$ in the number-in-hand model of communication. This was recently established by Gronemeier, STACS 2009. The proof exploits the information geometry of communication protocols via Hellinger distance in a novel manner and avoids the analytic approach inherent in previous work. As previously known, this bound implies an Ω(n/t) lower bound on the communication complexity of multiparty disjointness and consequently an $\Omega(n^{1-2/k})$ space lower bound on estimating the k-th frequency moment $F_k$.

T. S. Jayram
Pseudorandom Generators and Typically-Correct Derandomization

The area of derandomization attempts to provide efficient deterministic simulations of randomized algorithms in various algorithmic settings. Goldreich and Wigderson introduced a notion of “typically-correct” deterministic simulations, which are allowed to err on few inputs. In this paper we further the study of typically-correct derandomization in two ways.

First, we develop a generic approach for constructing typically-correct derandomizations based on seed-extending pseudorandom generators, which are pseudorandom generators that reveal their seed. We use our approach to obtain both conditional and unconditional typically-correct derandomization results in various algorithmic settings. We show that our technique strictly generalizes an earlier approach by Shaltiel based on randomness extractors, and simplifies the proofs of some known results. We also demonstrate that our approach is applicable in algorithmic settings where earlier work did not apply. For example, we present a typically-correct polynomial-time simulation for every language in BPP based on a hardness assumption that is weaker than the ones used in earlier work.

Second, we investigate whether typically-correct derandomization of BPP implies circuit lower bounds. Extending the work of Kabanets and Impagliazzo for the zero-error case, we establish a positive answer for error rates in the range considered by Goldreich and Wigderson. In doing so, we provide a simpler proof of the zero-error result. Our proof scales better than the original one and does not rely on the result by Impagliazzo, Kabanets, and Wigderson that NEXP having polynomial-size circuits implies that NEXP coincides with EXP.

Jeff Kinne, Dieter van Melkebeek, Ronen Shaltiel
Baum’s Algorithm Learns Intersections of Halfspaces with Respect to Log-Concave Distributions

In 1990, E. Baum gave an elegant polynomial-time algorithm for learning the intersection of two origin-centered halfspaces with respect to any symmetric distribution (i.e., any ${\mathcal D}$ such that ${\mathcal D}(E) = {\mathcal D}(-E)$) [3]. Here we prove that his algorithm also succeeds with respect to any mean zero distribution ${\mathcal D}$ with a log-concave density (a broad class of distributions that need not be symmetric). As far as we are aware, prior to this work, it was not known how to efficiently learn any class of intersections of halfspaces with respect to log-concave distributions.

The key to our proof is a "Brunn–Minkowski" inequality for log-concave densities that may be of independent interest.

Adam R. Klivans, Philip M. Long, Alex K. Tang
Tolerant Linearity Testing and Locally Testable Codes

We study tolerant linearity testing under general distributions. Given groups G and H, a distribution μ on G, and oracle access to a function f: G → H, we consider the task of approximating the smallest μ-distance of f to a homomorphism h: G → H, where the μ-distance between f and h is the probability that f(x) ≠ h(x) when x is drawn according to the distribution μ. This question is intimately connected to local testability of linear codes.

In this work, we give a general sufficient condition on the distribution μ for linearity to be tolerantly testable with a constant number of queries. Using this condition we show that linearity is tolerantly testable for several natural classes of distributions including low bias, symmetric and product distributions. This gives a new and simple proof of a result of Kaufman and Sudan which shows that sparse, unbiased linear codes over $\mathbb{Z}_2^n$ are locally testable.
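The quantity being approximated, the smallest μ-distance of f to a homomorphism, can be computed exactly by brute force when the groups are tiny. The sketch below (our illustration, usable only as a ground-truth check for testers, with $G = \{0,1\}^n$ under XOR and $H = \mathbb{Z}_2$) does exactly that.

```python
from itertools import product

def mu_distance_to_linear(f, n, mu):
    """Smallest mu-distance from f: {0,1}^n -> {0,1} to a homomorphism,
    i.e. an F_2-linear map x -> <a, x> mod 2. Exhaustive over all 2^n
    candidate maps, so only feasible for tiny n."""
    points = list(product([0, 1], repeat=n))
    best = 1.0
    for a in points:   # each a defines the homomorphism h_a(x) = <a, x> mod 2
        dist = sum(mu[x] for x in points
                   if f(x) != sum(ai * xi for ai, xi in zip(a, x)) % 2)
        best = min(best, dist)
    return best

# usage: a parity corrupted at one point, mu uniform on {0,1}^2
pts = list(product([0, 1], repeat=2))
mu = {x: 1 / 4 for x in pts}
f = lambda x: (x[0] ^ x[1]) ^ (1 if x == (1, 1) else 0)
dist = mu_distance_to_linear(f, 2, mu)   # one disagreement out of four points
```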

Swastik Kopparty, Shubhangi Saraf
Pseudorandom Bit Generators That Fool Modular Sums

We consider the following problem: for given n, M, produce a sequence $X_1, X_2, \ldots, X_n$ of bits that fools every linear test modulo M. We present two constructions of generators for such sequences. For every constant prime power M, the first construction has seed length $O_M(\log(n/\varepsilon))$, which is optimal up to the hidden constant. (A similar construction was independently discovered by Meka and Zuckerman [MZ].) The second construction works for every M, n, and has seed length $O(\log n + \log(M/\varepsilon)\log(M\log(1/\varepsilon)))$.

The problem we study is a generalization of the problem of constructing small bias distributions [NN], which are solutions to the M = 2 case. We note that even for the case M = 3 the best previously known constructions were generators fooling general bounded-space computations, and required $O(\log^2 n)$ seed length.

For our first construction, we show how to employ recently constructed generators for sequences of elements of ℤ_M that fool small-degree polynomials (modulo M). The most interesting technical component of our second construction is a variant of the derandomized graph squaring operation of [RV]. Our generalization handles a product of two distinct graphs with distinct bounds on their expansion. This is then used to produce pseudorandom walks where each step is taken on a different regular directed graph (rather than pseudorandom walks on a single regular directed graph as in [RTV, RV]).

Shachar Lovett, Omer Reingold, Luca Trevisan, Salil Vadhan
The Glauber Dynamics for Colourings of Bounded Degree Trees

We study the Glauber dynamics Markov chain for k-colourings of trees with maximum degree Δ. For k ≥ 3, we show that the mixing time on every tree is at most $n^{O(1 + \Delta/(k\log\Delta))}$. This bound is tight up to the constant factor in the exponent, as evidenced by the complete tree. Our proof uses a weighted canonical paths analysis and a variation of the block dynamics that exploits the differing relaxation times of blocks.

Brendan Lucier, Michael Molloy, Yuval Peres
Testing ±1-weight halfspace

We consider the problem of testing whether a Boolean function f: {−1,1}^n → {−1,1} is a ±1-weight halfspace, i.e. a function of the form f(x) = sgn(w_1 x_1 + w_2 x_2 + ⋯ + w_n x_n) where the weights w_i take values in {−1,1}. We show that the complexity of this problem is markedly different from the problem of testing whether f is a general halfspace with arbitrary weights. While the latter can be done with a number of queries that is independent of n [7], to distinguish whether f is a ±1-weight halfspace versus ε-far from all such halfspaces we prove that nonadaptive algorithms must make Ω(log n) queries. We complement this lower bound with a sublinear upper bound showing that $O(\sqrt{n}\cdot \mathrm{poly}(\frac{1}{\epsilon}))$ queries suffice.
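For tiny n, the distance of a function to the class of ±1-weight halfspaces can be computed exhaustively, which is a useful ground truth when experimenting with testers. This sketch is our illustration (exponential in n), not the paper's sublinear tester.

```python
import itertools

def pm1_halfspace(w):
    """f(x) = sgn(<w, x>) for weights w in {-1,+1}^n, inputs in {-1,+1}^n
    (choose n odd so the inner product is never zero)."""
    return lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1

def dist_to_nearest_pm1_halfspace(f, n):
    """Fraction of the 2^n inputs on which f differs from the closest
    +-1-weight halfspace -- exhaustive, for tiny n only."""
    cube = list(itertools.product([-1, 1], repeat=n))
    return min(sum(f(x) != pm1_halfspace(w)(x) for x in cube) / len(cube)
               for w in cube)   # weight vectors also range over {-1,+1}^n

maj = pm1_halfspace((1, 1, 1))              # majority = the all-ones weights
d = dist_to_nearest_pm1_halfspace(maj, 3)   # majority is itself in the class
```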

Kevin Matulef, Ryan O’Donnell, Ronitt Rubinfeld, Rocco A. Servedio
Small-Bias Spaces for Group Products

Small-bias, or ε-biased, spaces have found many applications in complexity theory, coding theory, and derandomization. We generalize the notion of small-bias spaces to the setting of group products. Besides being natural, our extension captures some of the difficulties in constructing pseudorandom generators for constant-width branching programs, a longstanding open problem. We provide an efficient deterministic construction of small-bias spaces for solvable groups. Our construction exploits the fact that solvable groups have nontrivial normal subgroups that are abelian and builds on the construction of Azar et al. [AMN98] for abelian groups. For arbitrary finite groups, we give an efficient deterministic construction achieving constant bias. We also construct pseudorandom generators fooling linear functions mod p for primes p.

Raghu Meka, David Zuckerman
Small Clique Detection and Approximate Nash Equilibria

Recently, Hazan and Krauthgamer showed [12] that if, for a fixed small ε, an ε-best ε-approximate Nash equilibrium can be found in polynomial time in two-player games, then it is also possible to find a planted clique in $G_{n,1/2}$ of size C log n, where C is a large fixed constant independent of ε. In this paper, we extend their result to show that if an ε-best ε-approximate equilibrium can be efficiently found for arbitrarily small ε > 0, then one can detect the presence of a planted clique of size (2 + δ) log n in $G_{n,1/2}$ in polynomial time for arbitrarily small δ > 0. Our result is optimal in the sense that graphs in $G_{n,1/2}$ have cliques of size (2 − o(1)) log n with high probability.

Lorenz Minder, Dan Vilenchik
Testing Computability by Width Two OBDDs

Property testing is concerned with deciding whether an object (e.g. a graph or a function) has a certain property or is "far" (for some definition of far) from every object with that property. In this paper we give lower and upper bounds for testing functions for the property of being computable by a read-once width-2 Ordered Binary Decision Diagram (OBDD), also known as a branching program, where the order of the variables is known. Width-2 OBDDs generalize two classes of functions that have been studied in the context of property testing: linear functions (over GF(2)) and monomials. In both these cases membership can be tested in time that is linear in 1/ε. Interestingly, unlike either of these classes, in which the query complexity of the testing algorithm does not depend on the number, n, of variables in the tested function, we show that (one-sided error) testing for computability by a width-2 OBDD requires Ω(log(n)) queries, and give an algorithm (with one-sided error) that tests for this property and performs $\tilde{O}(\log(n)/\epsilon)$ queries.

Dana Ron, Gilad Tsur
Improved Polynomial Identity Testing for Read-Once Formulas

An arithmetic read-once formula (ROF for short) is a formula (a circuit whose underlying graph is a tree) in which the operations are {+, ×} and such that every input variable labels at most one leaf. In this paper we study the problems of giving deterministic identity testing and reconstruction algorithms for ROFs. Our main result is an $n^{{\mathcal{O}}(k + \log n)}$ time deterministic algorithm for checking whether a black box holding the sum of k n-variate ROFs computes the zero polynomial. In other words, we provide a hitting set of size $n^{{\mathcal{O}}(k + \log n)}$ for the sum of k ROFs. This result greatly improves [27], where an $n^{{\mathcal{O}}(k^2 + \sqrt n)}$ algorithm was given for the problem.

Using our new results we obtain a deterministic reconstruction algorithm for read-once formulas that runs in time $n^{{\mathcal{O}}(\log n)}$.

In fact, our results also hold for the more general model of preprocessed read-once formulas that we define in this paper. In this model we are allowed to replace each variable x_i with a polynomial T_i(x_i).

Our techniques are very close to the techniques in [27]. The main difference is that we obtain several tighter versions of the tools first used there. In particular we obtain a better version of the hardness of representation approach, which was first used in [27]. This technique can be thought of as a very explicit way of transforming (mild) hardness of a very structured polynomial into an identity testing algorithm.

Amir Shpilka, Ilya Volkovich
Smooth Analysis of the Condition Number and the Least Singular Value

A few years ago, Spielman and Teng initiated the study of smooth analysis of the condition number and the least singular value of a matrix. Let x be a complex variable with mean zero and bounded variance. Let $N_n$ be the random matrix of size n whose entries are iid copies of x, and let M be a deterministic matrix of the same size. The goal of smooth analysis is to estimate the condition number and the least singular value of $M + N_n$.

Spielman and Teng considered the case when x is gaussian. We are going to study the general case when x is arbitrary. Our investigation reveals a new and interesting fact that, unlike the gaussian case, in the general case the "core" matrix M does play a role in the tail bounds for the least singular value of $M + N_n$. Consequently, our estimate involves the norm ∥M∥, and it is a challenging question to determine the right magnitude of this involvement. When ∥M∥ is relatively small, our estimate is nearly optimal and extends or refines several existing results.

Terence Tao, Van Vu
Erratum: Resource Minimization Job Scheduling

A claim on page 75 of this paper, stating that $J = J_1 \cup \ldots \cup J_k$, was found to be incorrect, invalidating the main result of the paper. The best current approximation ratio for this problem therefore remains $O(\sqrt{\log n/\log\log n})$. We thank Kirk Pruhs for pointing this error out to us.

Julia Chuzhoy, Paolo Codenotti
Backmatter
Metadaten
Titel
Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques
herausgegeben von
Irit Dinur
Klaus Jansen
Joseph Naor
José Rolim
Copyright-Jahr
2009
Verlag
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-03685-9
Print ISBN
978-3-642-03684-2
DOI
https://doi.org/10.1007/978-3-642-03685-9
