2009 | Book

Automata, Languages and Programming

36th International Colloquium, ICALP 2009, Rhodes, Greece, July 5-12, 2009, Proceedings, Part I

Edited by: Susanne Albers, Alberto Marchetti-Spaccamela, Yossi Matias, Sotiris Nikoletseas, Wolfgang Thomas

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Invited Lectures

Assigning Papers to Referees

Refereed conferences require every submission to be reviewed by members of a program committee (PC) in charge of selecting the conference program. A main responsibility of the PC chair is to organize the review process, in particular, to decide which papers are assigned to which member of the PC. The PC chair typically bases her decision on input from the PC, her knowledge of submissions and PC members, or scores that are computed automatically from keywords provided by authors and PC members. From now on, we call PC members reviewers or referees.

Kurt Mehlhorn
Algorithmic Game Theory: A Snapshot

Algorithmic game theory is the research area at the interface between the theories of algorithms, networks, and games; it emerged more than a decade ago, motivated by the advent of the Internet. “Snapshot” means several things: a very personal point of view, of topical and possibly ephemeral interest, and put together in a hurry.

Christos H. Papadimitriou

Contributed Papers

SDP-Based Algorithms for Maximum Independent Set Problems on Hypergraphs

This paper deals with approximations of maximum independent sets in non-uniform hypergraphs of low degree. We obtain the first performance ratio that is sublinear in terms of the maximum or average degree of the hypergraph. We extend this to the weighted case and give an $O(\bar{D} \log\log \bar{D}/\log \bar{D})$ bound, where $\bar{D}$ is the average weighted degree in a hypergraph, matching the best bounds known for the special case of graphs. Our approach is to use a semi-definite technique to sparsify a given hypergraph and then apply combinatorial algorithms to find a large independent set in the resulting sparser instance.

Geir Agnarsson, Magnús M. Halldórsson, Elena Losievskaja
Correlation Clustering Revisited: The “True” Cost of Error Minimization Problems

Correlation Clustering was defined by Bansal, Blum, and Chawla as the problem of clustering a set of elements based on a possibly inconsistent binary similarity function between element pairs. Their setting is agnostic in the sense that a ground truth clustering is not assumed to exist, and the cost of a solution is computed against the input similarity function. This problem has been studied in theory and in practice and has subsequently been proven to be APX-hard.

In this work we assume that there does exist an unknown correct clustering of the data. In this setting, we argue that it is more reasonable to measure the output clustering’s accuracy against the unknown underlying true clustering. We present two main results. The first is a novel method for continuously morphing a general (non-metric) function into a pseudometric. This technique may be useful for other metric embedding and clustering problems. The second is a simple algorithm for randomly rounding a pseudometric into a clustering. Combining the two, we obtain a certificate for the possibility of getting a solution of factor strictly less than 2 for our problem. This approximation coefficient could not have been achieved by considering the agnostic version of the problem unless P = NP.

Nir Ailon, Edo Liberty
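
To make the agnostic cost measure concrete, here is a minimal sketch (not from the paper; the dictionary-based encoding is our assumption) that counts the disagreements of a clustering against a binary similarity function:

```python
def disagreements(clustering, sim):
    """Agnostic correlation clustering cost: a pair counts as a mistake if it
    is marked similar (1) but split across clusters, or marked dissimilar (0)
    but placed in the same cluster.  `clustering` maps element -> cluster id;
    `sim` maps an unordered pair (u, v) -> 0 or 1."""
    cost = 0
    for (u, v), s in sim.items():
        together = clustering[u] == clustering[v]
        if (s == 1 and not together) or (s == 0 and together):
            cost += 1
    return cost
```

The paper's point is that, when a true clustering exists, measuring accuracy against it (rather than this agnostic cost) admits a factor strictly below 2.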
Sorting and Selection with Imprecise Comparisons

In experimental psychology, the method of paired comparisons was proposed as a means for ranking preferences amongst n elements of a human subject. The method requires performing all $\binom{n}{2}$ comparisons, then sorting elements according to the number of wins. The large number of comparisons is performed to counter the potentially faulty decision-making of the human subject, who acts as an imprecise comparator.

We consider a simple model of the imprecise comparisons: there exists some δ > 0 such that when a subject is given two elements to compare, if the values of those elements (as perceived by the subject) differ by at least δ, then the comparison will be made correctly; when the two elements have values that are within δ, the outcome of the comparison is unpredictable. This δ corresponds to the just noticeable difference unit (JND) or difference threshold in the psychophysics literature, but does not require the statistical assumptions used to define this value.

In this model, the standard method of paired comparisons minimizes the errors introduced by the imprecise comparisons at the cost of $\binom{n}{2}$ comparisons. We show that the same optimal guarantees can be achieved using $4n^{3/2}$ comparisons, and we prove the optimality of our method. We then explore the general tradeoff between the guarantees on the error that can be made and the number of comparisons for the problems of sorting, max-finding, and selection. Our results provide close-to-optimal solutions for each of these problems.

Miklós Ajtai, Vitaly Feldman, Avinatan Hassidim, Jelani Nelson
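
As a minimal illustration of this model (a sketch under our own naming, not the authors' code), the comparator below answers correctly only when the two perceived values are at least δ apart, and the classical paired-comparison baseline ranks by number of wins:

```python
import random

def imprecise_compare(x, y, delta):
    """Report whether x >= y; correct when |x - y| >= delta,
    unpredictable (here: a coin flip) otherwise."""
    if abs(x - y) >= delta:
        return x >= y
    return random.random() < 0.5

def paired_comparison_ranking(values, delta):
    """Classical method of paired comparisons: run all n-choose-2
    comparisons and order elements by their number of wins."""
    n = len(values)
    wins = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if imprecise_compare(values[i], values[j], delta):
                wins[i] += 1
            else:
                wins[j] += 1
    return sorted(range(n), key=lambda i: wins[i], reverse=True)
```

The paper's contribution is matching this method's error guarantee with only $4n^{3/2}$ comparisons; the quadratic routine above is the baseline being improved.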
Fast FAST

We present a randomized subexponential time, polynomial space parameterized algorithm for the k-Weighted Feedback Arc Set in Tournaments (k-FAST) problem. We also show that our algorithm can be derandomized by slightly increasing the running time. To derandomize our algorithm we construct a new kind of universal hash functions that we coin universal coloring families. For integers m, k and r, a family ${\mathcal F}$ of functions from [m] to [r] is called a universal (m, k, r)-coloring family if for any graph G on the set of vertices [m] with at most k edges, there exists an $f \in {\mathcal F}$ which is a proper vertex coloring of G. Our algorithm is the first non-trivial subexponential time parameterized algorithm outside the framework of bidimensionality.

Noga Alon, Daniel Lokshtanov, Saket Saurabh
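
The defining property can be checked by brute force; the following sketch illustrates the definition only (it is not the paper's construction), verifying it for an explicitly given family and an explicit list of candidate graphs:

```python
def is_proper_coloring(f, edges):
    """f is a list of length m mapping vertex -> color in [r]; it properly
    colors a graph iff the endpoints of every edge get distinct colors."""
    return all(f[u] != f[v] for (u, v) in edges)

def is_universal_coloring_family(family, graphs):
    """A family is a universal (m, k, r)-coloring family iff for every graph
    on [m] with at most k edges, some member properly colors it.  Here
    `graphs` is an explicit list of edge lists standing in for 'all graphs
    with at most k edges'."""
    return all(any(is_proper_coloring(f, g) for f in family) for g in graphs)
```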
Bounds on the Size of Small Depth Circuits for Approximating Majority

In this paper, we show that for every constant 0 < ε < 1/2 and for every constant d ≥ 2, the minimum size of a depth-d Boolean circuit that ε-approximates the Majority function on n variables is $\exp(\Theta(n^{1/(2d-2)}))$. The lower bound for every d ≥ 2 and the upper bound for d = 2 were previously shown by O’Donnell and Wimmer [ICALP’07]; the contribution of this paper is to give a matching upper bound for d ≥ 3.

Kazuyuki Amano
Counting Subgraphs via Homomorphisms

Counting homomorphisms between graphs has applications in a variety of areas, including extremal graph theory, properties of graph products, partition functions in statistical physics and property testing of large graphs. In this work we show a new application of counting graph homomorphisms to the areas of exact and parameterized algorithms.

We introduce a generic approach for counting subgraphs in a graph. The main idea is to relate counting subgraphs to counting graph homomorphisms. This approach provides new algorithms and unifies several well known results in algorithms and combinatorics including the recent algorithm of Björklund, Husfeldt and Koivisto for computing the chromatic polynomial, the classical algorithm of Kohn, Gottlieb, Kohn, and Karp for counting Hamiltonian cycles, Ryser’s formula for counting perfect matchings of a bipartite graph, and color coding based algorithms of Alon, Yuster, and Zwick.

Omid Amini, Fedor V. Fomin, Saket Saurabh
External Sampling

We initiate the study of sublinear-time algorithms in the external memory model [1]. In this model, the data is stored in blocks of a certain size B, and the algorithm is charged a unit cost for each block access. This model is well-studied, since it reflects the computational issues occurring when the (massive) input is stored on a disk. Since each block access operates on B data elements in parallel, many problems have external memory algorithms whose number of block accesses is only a small fraction (e.g. 1/B) of their main memory complexity.

However, to the best of our knowledge, no such reduction in complexity is known for any sublinear-time algorithm. One plausible explanation is that the vast majority of sublinear-time algorithms use random sampling and thus exhibit no locality of reference. This state of affairs is quite unfortunate, since both sublinear-time algorithms and the external memory model are important approaches to dealing with massive data sets, and ideally they should be combined to achieve the best performance.

In this paper we show that such a combination is indeed possible. In particular, we consider three well-studied problems: testing of distinctness, uniformity and identity of an empirical distribution induced by data. For these problems we show random-sampling-based algorithms whose number of block accesses is up to a factor of $1/\sqrt{B}$ smaller than the main memory complexity of those problems. We also show that this improvement is optimal for those problems.

Since these problems are natural primitives for a number of sampling-based algorithms for other problems, our tools improve the external memory complexity of other problems as well.

Alexandr Andoni, Piotr Indyk, Krzysztof Onak, Ronitt Rubinfeld
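
For context, a classical sample-based primitive behind uniformity testing is the collision statistic; this sketch shows the plain main-memory version (the paper's contribution is reorganizing such sampling to exploit blocks of B elements, which this sketch does not attempt):

```python
from itertools import combinations

def collision_rate(samples):
    """Fraction of equal pairs among all pairs of samples (needs >= 2 samples).
    For a uniform distribution over n items the expectation is 1/n;
    noticeably larger values indicate non-uniformity."""
    pairs = list(combinations(samples, 2))
    return sum(1 for a, b in pairs if a == b) / len(pairs)
```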
Functional Monitoring without Monotonicity

The notion of distributed functional monitoring was recently introduced by Cormode, Muthukrishnan and Yi to initiate a formal study of the communication cost of certain fundamental problems arising in distributed systems, especially sensor networks. In this model, each of k sites reads a stream of tokens and is in communication with a central coordinator, who wishes to continuously monitor some function f of σ, the union of the k streams. The goal is to minimize the number of bits communicated by a protocol that correctly monitors f(σ), to within some small error. As in previous work, we focus on a threshold version of the problem, where the coordinator’s task is simply to maintain a single output bit, which is 0 whenever f(σ) ≤ τ(1 − ε) and 1 whenever f(σ) ≥ τ. Following Cormode et al., we term this the (k, f, τ, ε) functional monitoring problem.

In previous work, some upper and lower bounds were obtained for this problem, with f being a frequency moment function, e.g., $F_0$, $F_1$, $F_2$. Importantly, these functions are monotone. Here, we further advance the study of such problems, proving three new classes of results. First, we provide nontrivial monitoring protocols when f is either H, the empirical Shannon entropy of a stream, or any of a related class of entropy functions (Tsallis entropies). These are the first nontrivial algorithms for distributed monitoring of non-monotone functions. Second, we study the effect of non-monotonicity of f on our ability to give nontrivial monitoring protocols, by considering $f = F_p$ with deletions allowed, as well as f = H. Third, we prove new lower bounds on this problem when $f = F_p$, for several values of p.

Chrisil Arackaparambil, Joshua Brody, Amit Chakrabarti
De-amortized Cuckoo Hashing: Provable Worst-Case Performance and Experimental Results

Cuckoo hashing is a highly practical dynamic dictionary: it provides amortized constant insertion time, worst case constant deletion time and lookup time, and good memory utilization. However, with a noticeable probability during the insertion of n elements some insertion requires Ω(log n) time. Whereas such an amortized guarantee may be suitable for some applications, in other applications (such as high-performance routing) this is highly undesirable.

Kirsch and Mitzenmacher (Allerton ’07) proposed a de-amortization of cuckoo hashing using queueing techniques that preserve its attractive properties. They demonstrated a significant improvement to the worst case performance of cuckoo hashing via experimental results, but left open the problem of constructing a scheme with provable properties.

In this work we present a de-amortization of cuckoo hashing that provably guarantees constant worst case operations. Specifically, for any sequence of polynomially many operations, with overwhelming probability over the randomness of the initialization phase, each operation is performed in constant time. In addition, we present a general approach for proving that the performance guarantees are preserved when using hash functions with limited independence instead of truly random hash functions. Our approach relies on a recent result of Braverman (CCC ’09) showing that poly-logarithmic independence fools $AC^0$ circuits, and may find additional applications in various similar settings. Our theoretical analysis and experimental results indicate that the scheme is highly efficient, and provides a practical alternative to the only other known approach for constructing dynamic dictionaries with such worst case guarantees, due to Dietzfelbinger and Meyer auf der Heide (ICALP ’90).

Yuriy Arbitman, Moni Naor, Gil Segev
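
For reference, here is textbook two-table cuckoo hashing (a baseline sketch, not the de-amortized, queue-based scheme of the paper; hash functions are simulated with salted Python hashing). The eviction chain in `insert` is exactly the occasional Ω(log n) cost the paper removes:

```python
import random

class CuckooHash:
    def __init__(self, m):
        self.m = m
        self.tables = [[None] * m, [None] * m]
        self.salts = [random.getrandbits(64) for _ in range(2)]

    def _h(self, i, key):
        return hash((self.salts[i], key)) % self.m

    def lookup(self, key):
        # Constant time: a key can live in only two cells.
        return any(self.tables[i][self._h(i, key)] == key for i in (0, 1))

    def insert(self, key, max_kicks=64):
        """Place the key, evicting occupants back and forth between the
        two tables; a long eviction chain is the slow worst case."""
        for _ in range(max_kicks):
            for i in (0, 1):
                pos = self._h(i, key)
                if self.tables[i][pos] is None:
                    self.tables[i][pos] = key
                    return True
                # Evict the occupant and keep trying to place it.
                self.tables[i][pos], key = key, self.tables[i][pos]
        return False  # a full implementation would rehash here
```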
Towards a Study of Low-Complexity Graphs

We propose the study of graphs that are defined by low-complexity distributed and deterministic agents. We suggest that this viewpoint may help introduce the element of individual choice in models of large scale social networks. This viewpoint may also provide interesting new classes of graphs for which to design algorithms.

We focus largely on the case where the “low complexity” computation is $AC^0$. We show that this is already a rich class of graphs that includes examples of lossless expanders and power-law graphs. We give evidence that even such low complexity graphs present a formidable challenge to algorithms designers. On the positive side, we show that many algorithms from property testing and data sketching can be adapted to give meaningful results for low-complexity graphs.

Sanjeev Arora, David Steurer, Avi Wigderson
Decidability of Conjugacy of Tree-Shifts of Finite Type

A one-sided (resp. two-sided) shift of finite type of dimension one can be described as the set of infinite (resp. bi-infinite) sequences of consecutive edges in a finite-state automaton. While the conjugacy of shifts of finite type is decidable for one-sided shifts of finite type of dimension one, the result is unknown in the two-sided case.

In this paper, we study the shifts of finite type defined by infinite trees. Indeed, infinite trees have a natural structure of one-sided shifts, between the shifts of dimension one and two. We prove a decomposition theorem for these tree-shifts, i.e. we show that a conjugacy between two tree-shifts of finite type can be broken down into a finite sequence of elementary transformations called in-splittings and in-amalgamations. We prove that the conjugacy problem is decidable for tree-shifts of finite type. This result makes the class of tree-shifts closer to the class of one-sided shifts of dimension one than to the class of two-sided ones. Our proof uses the notion of bottom-up tree automata.

Nathalie Aubrun, Marie-Pierre Béal
Improved Bounds for Speed Scaling in Devices Obeying the Cube-Root Rule

Speed scaling is a power management technique that involves dynamically changing the speed of a processor. This gives rise to dual-objective scheduling problems, where the operating system both wants to conserve energy and optimize some Quality of Service (QoS) measure of the resulting schedule. In the most investigated speed scaling problem in the literature, the QoS constraint is deadline feasibility, and the objective is to minimize the energy used. The standard assumption is that the power consumption is the speed raised to some constant power α. We give the first non-trivial lower bound, namely $e^{\alpha-1}/\alpha$, on the competitive ratio for this problem. This comes close to the best upper bound, which is about $2e^{\alpha+1}$.

We analyze a natural class of algorithms called qOA, where at any time, the processor works at q ≥ 1 times the minimum speed required to ensure feasibility assuming no new jobs arrive. For CMOS based processors, and many other types of devices, α = 3, that is, they satisfy the cube-root rule. When α = 3, we show that qOA is 6.7-competitive, improving upon the previous best guarantee of 27 achieved by the algorithm Optimal Available (OA). So when the cube-root rule holds, our results reduce the range for the optimal competitive ratio from [1.2, 27] to [2.4, 6.7]. We also analyze qOA for general α and give almost matching upper and lower bounds.

Nikhil Bansal, Ho-Leung Chan, Kirk Pruhs, Dmitriy Katz
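
A minimal sketch of the speed rule described above (our own formulation of "q times the minimum feasible speed"; the job representation is an assumption): at time t, the minimum constant speed keeping all remaining work feasible is the maximum, over deadlines, of work due by that deadline divided by the time until it.

```python
def qoa_speed(jobs, t, q):
    """jobs: list of (remaining_work, deadline) for released, unfinished jobs.
    Returns the speed qOA runs at time t: q times the minimum speed that
    would meet every deadline if no further jobs arrive."""
    best = 0.0
    for deadline in {d for (_, d) in jobs if d > t}:
        due_work = sum(w for (w, d) in jobs if d <= deadline)
        best = max(best, due_work / (deadline - t))
    return q * best
```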
Competitive Analysis of Aggregate Max in Windowed Streaming

We consider the problem of maintaining a fixed number k of items observed over a data stream, so as to optimize the maximum value over a fixed number n of recent observations. Unlike previous approaches, we use the competitive analysis framework and compare the performance of the online streaming algorithm against an optimal adversary that knows the entire sequence in advance. We consider the problem of maximizing the aggregate max, i.e., the sum of the values of the largest items in the algorithm’s memory over the entire sequence. For this problem, we prove an asymptotically tight competitive ratio, achieved by a simple heuristic, called partition-greedy, that performs stream updates efficiently and has almost optimal performance. In contrast, we prove that the problem of maximizing, for every time t, the value maintained by the online algorithm in memory, is considerably harder: in particular, we show a tight competitive ratio that depends on the maximum value of the stream. We further prove negative results for the closely related problem of maintaining the aggregate minimum and for the generalized version of the aggregate max problem in which every item comes with an individual window.

Luca Becchetti, Elias Koutsoupias
Faster Regular Expression Matching

Regular expression matching is a key task (and often the computational bottleneck) in a variety of widely used software tools and applications, for instance, the unix grep and sed commands, scripting languages such as awk and perl, programs for analyzing massive data streams, etc. We show how to solve this ubiquitous task in linear space and $O(nm(\log\log n)/(\log n)^{3/2} + n + m)$ time where m is the length of the expression and n the length of the string. This is the first improvement for the dominant $O(nm/\log n)$ term in Myers’ $O(nm/\log n + (n + m)\log n)$ bound [JACM 1992]. We also get improved bounds for external memory.

Philip Bille, Mikkel Thorup
A Fast and Simple Parallel Algorithm for the Monotone Duality Problem

We consider the monotone duality problem, i.e., checking whether given monotone CNF ϕ and DNF ψ are equivalent, which is a prominent open problem in NP-completeness. We construct a fast and simple parallel algorithm for the problem that runs in polylogarithmic time using quasi-polynomially many processors. The algorithm exhibits better parallel time complexity than the existing algorithms of Elbassioni [11]. By using a different threshold of the degree parameter ε of ϕ in the algorithm, we also present a stronger bound on the number of processors for polylogarithmic-time parallel computation and improve over the previously best known bound on the sequential time complexity of the problem in the case when the magnitudes of |ϕ|, |ψ| and n are different, e.g., $|\psi| = |\phi|^{\alpha} \gg n$ for α > 1, where n denotes the number of variables. Furthermore, we show that, for several interesting well-known classes of monotone CNFs ϕ such as those of bounded degree, clause-size, and intersection-size, our parallel algorithm runs in polylogarithmic time using polynomially many processors.

Endre Boros, Kazuhisa Makino
Unconditional Lower Bounds against Advice

We show several unconditional lower bounds for exponential time classes against polynomial time classes with advice, including:

1. For any constant c, ${\sf NEXP} \not\subseteq {\sf P}^{{\sf NP}[n^c]}/n^c$.

2. For any constant c, ${\sf MAEXP} \not\subseteq {\sf MA}/n^c$.

3. ${\sf BPEXP} \not\subseteq {\sf BPP}/n^{o(1)}$.

It was previously unknown even whether ${\sf NEXP} \subseteq {\sf NP}/n^{0.01}$. For the probabilistic classes, no lower bounds for uniform exponential time against advice were known before.

We also consider the question of whether these lower bounds can be made to work on almost all input lengths rather than on infinitely many. We give an oracle relative to which ${\sf NEXP} \subseteq {\sf io}\text{-}{\sf NP}$, which provides evidence that this is not possible with current techniques.

Harry Buhrman, Lance Fortnow, Rahul Santhanam
Approximating Decision Trees with Multiway Branches

We consider the problem of constructing decision trees for entity identification from a given table. The input is a table containing information about a set of entities over a fixed set of attributes. The goal is to construct a decision tree that identifies each entity unambiguously by testing the attribute values such that the average number of tests is minimized. The previously best known approximation ratio for this problem was $O(\log^2 N)$. In this paper, we present a new greedy heuristic that yields an improved approximation ratio of $O(\log N)$.

Venkatesan T. Chakaravarthy, Vinayaka Pandit, Sambuddha Roy, Yogish Sabharwal
Annotations in Data Streams

The central goal of data stream algorithms is to process massive streams of data using sublinear storage space. Motivated by work in the database community on outsourcing database and data stream processing, we ask whether the space usage of such algorithms can be further reduced by enlisting a more powerful “helper” who can annotate the stream as it is read. We do not wish to blindly trust the helper, so we require that the algorithm be convinced of having computed a correct answer. We show upper bounds that achieve a non-trivial tradeoff between the amount of annotation used and the space required to verify it. We also prove lower bounds on such tradeoffs, often nearly matching the upper bounds, via notions related to Merlin-Arthur communication complexity. Our results cover the classic data stream problems of selection, frequency moments, and fundamental graph problems such as triangle-freeness and connectivity. Our work is also part of a growing trend — including recent studies of multi-pass streaming, read/write streams and randomly ordered streams — of asking more complexity-theoretic questions about data stream processing. It is a recognition that, in addition to practical relevance, the data stream model raises many interesting theoretical questions in its own right.

Amit Chakrabarti, Graham Cormode, Andrew McGregor
The Tile Complexity of Linear Assemblies

The conventional Tile Assembly Model (TAM) developed by Winfree using Wang tiles is a powerful, Turing-universal theoretical framework which models varied self-assembly processes. We describe a natural extension to TAM called the Probabilistic Tile Assembly Model (PTAM) to model the inherent probabilistic behavior in physically realized self-assembled systems. A particular challenge in DNA nanoscience is to form linear assemblies or rulers of a specified length using the smallest possible tile set. These rulers can then be used as components for construction of other complex structures. In TAM, a deterministic linear assembly of length N requires a tile set of cardinality at least N. In contrast, for any given N, we demonstrate linear assemblies of expected length N with a tile set of cardinality Θ(log N) and prove a matching lower bound of Ω(log N). We also propose a simple extension to PTAM called κ-pad systems, in which we associate κ pads with each side of a tile, allowing abutting tiles to bind when at least one pair of corresponding pads match, and prove analogous results. All our probabilistic constructions are free from co-operative tile binding errors and can be modified to produce assemblies whose probability distribution of lengths has arbitrarily small tail bounds dropping exponentially with a given multiplicative factor increase in number of tile types. Thus, for linear assembly systems, we have shown that randomization can be exploited to get large improvements in tile complexity at a small expense of precision in length.

Harish Chandran, Nikhil Gopalkrishnan, John Reif
A Graph Reduction Step Preserving Element-Connectivity and Applications

Given an undirected graph G = (V, E) and a subset of terminals T ⊆ V, the element-connectivity $\kappa_G(u,v)$ of two terminals u, v ∈ T is the maximum number of u-v paths that are pairwise disjoint in both edges and non-terminals V ∖ T (the paths need not be disjoint in terminals). Element-connectivity is more general than edge-connectivity and less general than vertex-connectivity. Hind and Oellermann [18] gave a graph reduction step that preserves the global element-connectivity of the graph. We show that this step also preserves local connectivity, that is, all the pairwise element-connectivities of the terminals.

We give two applications of the step to connectivity and network design problems: First, we show a polylogarithmic approximation for the problem of packing element-disjoint Steiner forests in general graphs, and an O(1)-approximation in planar graphs. Second, we find a very short and intuitive proof of a spider-decomposition theorem of Chuzhoy and Khanna [10] in the context of the single-sink k-vertex-connectivity problem. Our results highlight the effectiveness of the element-connectivity reduction step; we believe it will find more applications in the future.

Chandra Chekuri, Nitish Korula
Approximating Matches Made in Heaven

Motivated by applications in online dating and kidney exchange, we study a stochastic matching problem in which we have a random graph G given by a node set V and probabilities p(i, j) on all pairs i, j ∈ V representing the probability that edge (i, j) exists. Additionally, each node has an integer weight t(i) called its patience parameter. Nodes represent agents in a matching market with dichotomous preferences, i.e., each agent finds every other agent either acceptable or unacceptable and is indifferent between all acceptable agents. The goal is to maximize the welfare, or produce a matching between acceptable agents of maximum size. Preferences must be solicited based on probabilistic information represented by p(i, j), and agent i can be asked at most t(i) questions regarding his or her preferences.

A stochastic matching algorithm iteratively probes pairs of nodes i and j with positive patience parameters. With probability p(i, j), an edge exists and the nodes are irrevocably matched. With probability 1 − p(i, j), the edge does not exist and the patience parameters of the nodes are decremented. We give a simple greedy strategy for selecting probes which produces a matching whose cardinality is, in expectation, at least a quarter of the size of the optimal algorithm’s matching. We additionally show that variants of our algorithm (and our analysis) can handle more complicated constraints, such as a limit on the maximum number of rounds, or the number of pairs probed in each round.

Ning Chen, Nicole Immorlica, Anna R. Karlin, Mohammad Mahdian, Atri Rudra
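
One natural reading of such a greedy probing strategy is sketched below (a simulation under our own data representation; the paper's precise probe order and analysis are more involved): probe remaining pairs in decreasing order of edge probability, matching irrevocably on success and spending patience on failure.

```python
import random

def greedy_probe(p, patience):
    """p: dict mapping an unordered pair (i, j) -> probability the edge exists.
    patience: dict mapping node -> remaining number of allowed probes."""
    patience = dict(patience)
    matched = {}
    for (i, j) in sorted(p, key=p.get, reverse=True):
        if i in matched or j in matched:
            continue
        if patience[i] <= 0 or patience[j] <= 0:
            continue
        if random.random() < p[(i, j)]:   # edge exists: match i and j
            matched[i], matched[j] = j, i
        else:                             # edge absent: spend one patience unit each
            patience[i] -= 1
            patience[j] -= 1
    return matched
```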
Strong and Pareto Price of Anarchy in Congestion Games

Strong Nash equilibria and Pareto-optimal Nash equilibria are natural and important strengthenings of the Nash equilibrium concept. We study these stronger notions of equilibrium in congestion games, focusing on the relationships between the price of anarchy for these equilibria and that for standard Nash equilibria (which is well understood). For symmetric congestion games with polynomial or exponential latency functions, we show that the price of anarchy for strong and Pareto-optimal equilibria is much smaller than the standard price of anarchy. On the other hand, for asymmetric congestion games with polynomial latencies the strong and Pareto prices of anarchy are essentially as large as the standard price of anarchy; while for asymmetric games with exponential latencies the Pareto and standard prices of anarchy are the same but the strong price of anarchy is substantially smaller. Finally, in the special case of linear latencies, we show that in asymmetric games the strong and Pareto prices of anarchy coincide exactly with the known value $\frac{5}{2}$ for standard Nash, but are strictly smaller for symmetric games.

Steve Chien, Alistair Sinclair
A Better Algorithm for Random k-SAT

Let Φ be a uniformly distributed random k-SAT formula with n variables and m clauses. We present a polynomial time algorithm that finds a satisfying assignment of Φ with high probability for constraint densities $m/n<(1-\varepsilon_k)2^k\ln(k)/k$, where $\varepsilon_k\to 0$. Previously no efficient algorithm was known to find solutions with non-vanishing probability beyond $m/n = 1.817\cdot 2^k/k$ [Frieze and Suen, Journal of Algorithms 1996].

Amin Coja-Oghlan
Exact and Approximate Bandwidth

In this paper we gather several improvements in the field of exact and approximate exponential-time algorithms for the Bandwidth problem. For graphs with treewidth t we present an $O(n^{O(t)} 2^n)$ exact algorithm. Moreover, for the same class of graphs we introduce a subexponential constant-approximation scheme – for any α > 0 there exists a (1 + α)-approximation algorithm running in $O(\exp(c(t + \sqrt{n/\alpha})\log n))$ time where c is a universal constant. These results seem interesting since Unger has proved that Bandwidth does not belong to APX even when the input graph is a tree (assuming P ≠ NP). So somewhat surprisingly, despite Unger’s result it turns out that not only a subexponential constant approximation is possible but also a subexponential approximation scheme exists. Furthermore, for any positive integer r, we present a (4r − 1)-approximation algorithm that solves Bandwidth for an arbitrary input graph in $O^*(2^{n/r})$ time and polynomial space. Finally we improve the currently best known exact algorithm for arbitrary graphs with an $O(4.473^n)$ time and space algorithm.

In the algorithms for small treewidth we develop a technique based on the Fast Fourier Transform, parallel to the Fast Subset Convolution techniques introduced by Björklund et al. This technique can also be used as a simple method of finding the chromatic number of all subgraphs of a given graph in $O^*(2^n)$ time and space, which matches the best known results.

Marek Cygan, Marcin Pilipczuk
Approximation Algorithms via Structural Results for Apex-Minor-Free Graphs

We develop new structural results for apex-minor-free graphs and show their power by developing two new approximation algorithms. The first is an additive approximation for coloring within 2 of the optimal chromatic number, which is essentially best possible, and generalizes the seminal result by Thomassen [32] for bounded-genus graphs. This result also improves our understanding from an algorithmic point of view of the venerable Hadwiger conjecture about coloring H-minor-free graphs. The second approximation result is a PTAS for unweighted TSP in apex-minor-free graphs, which generalizes PTASs for TSP in planar graphs and bounded-genus graphs [20,2,24,15].

We strengthen the structural results from the seminal Graph Minor Theory of Robertson and Seymour in the case of apex-minor-free graphs, showing that apices can be made adjacent only to vortices if we generalize the notion of vortices to “quasivortices” of bounded treewidth, proving a conjecture from [10]. We show that this structure theorem is a powerful tool for developing algorithms on apex-minor-free graphs, including for the classic problems of coloring and TSP. In particular, we use this theorem to partition the edges of a graph into k pieces, for any k, such that contracting any piece results in a bounded-treewidth graph, generalizing previous similar results for planar graphs [24] and bounded-genus graphs [15]. We also highlight the difficulties in extending our results to general H-minor-free graphs.

Erik D. Demaine, MohammadTaghi Hajiaghayi, Ken-ichi Kawarabayashi
Node-Weighted Steiner Tree and Group Steiner Tree in Planar Graphs

We improve the approximation ratios for two optimization problems in planar graphs. For node-weighted Steiner tree, a classical network-optimization problem, the best achievable approximation ratio in general graphs is Θ(log n), and nothing better was previously known for planar graphs. We give a constant-factor approximation for planar graphs. Our algorithm generalizes to allow as input any nontrivial minor-closed graph family, and also generalizes to address other optimization problems such as Steiner forest, prize-collecting Steiner tree, and network-formation games.

The second problem we address is group Steiner tree: given a graph with edge weights and a collection of groups (subsets of nodes), find a minimum-weight connected subgraph that includes at least one node from each group. The best approximation ratio known in general graphs is $O(\log^3 n)$, or $O(\log^2 n)$ when the host graph is a tree. We obtain an O(log n polyloglog n) approximation algorithm for the special case where the graph is planar embedded and each group is the set of nodes on a face. We obtain the same approximation ratio for the minimum-weight tour that must visit each group.

Erik D. Demaine, MohammadTaghi Hajiaghayi, Philip N. Klein
On Cartesian Trees and Range Minimum Queries

We present new results on Cartesian trees with applications in range minimum queries and bottleneck edge queries. We introduce a cache-oblivious Cartesian tree for solving the range minimum query problem, a Cartesian tree of a tree for the bottleneck edge query problem on trees and undirected graphs, and a proof that no Cartesian tree exists for the two-dimensional version of the range minimum query problem.

Erik D. Demaine, Gad M. Landau, Oren Weimann
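
For background, here is the classical linear-time stack construction of a (min-oriented) Cartesian tree, under which RMQ(i, j) becomes a lowest-common-ancestor query. This is the standard textbook routine, not the cache-oblivious variant introduced in the paper:

```python
def cartesian_tree(a):
    """Return parent/left/right arrays of the min-Cartesian tree of a:
    the root holds the minimum, and each subtree spans a contiguous range."""
    n = len(a)
    parent, left, right = [-1] * n, [-1] * n, [-1] * n
    stack = []
    for i in range(n):
        last = -1
        while stack and a[stack[-1]] > a[i]:
            last = stack.pop()          # i becomes the parent of the popped chain
        if last != -1:
            parent[last], left[i] = i, last
        if stack:
            parent[i], right[stack[-1]] = stack[-1], i
        stack.append(i)
    return parent, left, right
```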
Applications of a Splitting Trick

We study applications of a simple method for circumventing the “full randomness assumption” when building a hashing-based data structure for a set S of keys. The general approach is to “split” S into “pieces” $S_i$, by a splitting hash function. On a piece $S_i$, a method or data structure for generating full randomness is used that uses more space than $|S_i|$. Under certain circumstances, this data structure can be “shared” among the constructions for the pieces $S_i$, which leads to a tighter overall space bound. The method was introduced in the context of cuckoo hashing and its variants, but it seems to have wider applicability. To demonstrate its power and some subtleties, we study three new applications, improving previous constructions: (i) space-efficient simulation of full randomness on S (following work by Pagh and Pagh (2003/08) and Dietzfelbinger and Woelfel (2003)); (ii) construction of highly independent functions in the style of Siegel (1989/2004); (iii) one-probe schemes as in work by Buhrman, Miltersen, Radhakrishnan, and Venkatesh (2000/02) and Pagh and Pagh (2002).

Martin Dietzfelbinger, Michael Rink
Quasirandom Rumor Spreading: Expanders, Push vs. Pull, and Robustness

Randomized rumor spreading is an efficient protocol to distribute information in networks. Recently, a quasirandom version has been proposed and proven to work equally well on many graphs and better for sparse random graphs. In this work we show three main results for the quasirandom rumor spreading model.

We exhibit a natural expansion property for networks which suffices to make quasirandom rumor spreading inform all nodes of the network in logarithmic time with high probability. This expansion property is satisfied, among others, by many expander graphs, random regular graphs, and Erdős-Rényi random graphs.

For all network topologies, we show that if one of the push or pull model works well, so does the other. We also show that quasirandom rumor spreading is robust against transmission failures. If each message sent out gets lost with probability f, then the runtime increases only by a factor of $O(1/(1-f))$.

Benjamin Doerr, Tobias Friedrich, Thomas Sauerwald
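
The quasirandom model itself is easy to state in code: each node has a fixed cyclic list of neighbours and makes only one random choice ever, its starting position in that list, after which it pushes deterministically. A simplified synchronous simulation sketch (ours, for intuition only):

```python
import random

def quasirandom_push(adj, start, rounds):
    """adj: node -> list of neighbours (the fixed cyclic order).
    Every informed node chooses a random starting position once and then
    informs its neighbours in cyclic order, one per round."""
    pos = {start: random.randrange(len(adj[start]))}
    informed = {start}
    for _ in range(rounds):
        for v in list(informed):          # snapshot: new nodes act next round
            target = adj[v][pos[v] % len(adj[v])]
            pos[v] += 1
            if target not in informed:
                informed.add(target)
                pos[target] = random.randrange(len(adj[target]))
        if len(informed) == len(adj):
            break
    return informed
```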
Incompressibility through Colors and IDs

In parameterized complexity each problem instance comes with a parameter k, and a parameterized problem is said to admit a polynomial kernel if there are polynomial time preprocessing rules that reduce the input instance to an instance with size polynomial in k. Many problems have been shown to admit polynomial kernels, but it is only recently that a framework for showing the non-existence of polynomial kernels has been developed by Bodlaender et al. [4] and Fortnow and Santhanam [9]. In this paper we show how to combine these results with combinatorial reductions which use colors and IDs in order to prove kernelization lower bounds for a variety of basic problems:

We show that the Steiner Tree problem parameterized by the number of terminals and solution size k, and the Connected Vertex Cover and Capacitated Vertex Cover problems, do not admit a polynomial kernel. The two latter results are surprising because the closely related Vertex Cover problem admits a kernel of size 2k.

Alon and Gutner obtain a $k^{{\rm poly}(h)}$ kernel for Dominating Set in H-Minor Free Graphs parameterized by h = |H| and solution size k, and ask whether kernels of smaller size exist [2]. We partially resolve this question by showing that Dominating Set in H-Minor Free Graphs does not admit a kernel with size polynomial in k + h.

Harnik and Naor obtain a “compression algorithm” for the Sparse Subset Sum problem [13]. We show that their algorithm is essentially optimal since the instances cannot be compressed further.

Hitting Set and Set Cover admit a kernel of size $k^{O(d)}$ when parameterized by solution size k and maximum set size d. We show that neither of them, along with the Unique Coverage and Bounded Rank Disjoint Sets problems, admits a polynomial kernel.

All results are under the assumption that the polynomial hierarchy does not collapse to the third level. The existence of polynomial kernels for several of the problems mentioned above were open problems explicitly stated in the literature [2,3,11,12,14]. Many of our results also rule out the existence of compression algorithms, a notion similar to kernelization defined by Harnik and Naor [13], for the problems in question.

Michael Dom, Daniel Lokshtanov, Saket Saurabh
Partition Arguments in Multiparty Communication Complexity

Consider the “Number in Hand” multiparty communication complexity model, where k players $P_1,\ldots,P_k$ holding inputs $x_1,\ldots,x_k\in\{0,1\}^n$ (respectively) communicate in order to compute the value $f(x_1,\ldots,x_k)$. The main lower bound technique for the communication complexity of such problems is that of partition arguments: partition the k players into two disjoint sets of players and find a lower bound for the induced two-party communication complexity problem. In this paper, we study the power of the partition arguments method. Our two main results are very different in nature:

(i) For randomized communication complexity we show that partition arguments may be exponentially far from the true communication complexity. Specifically, we prove that there exists a 3-argument function f whose communication complexity is Ω(n) but partition arguments can only yield an Ω(log n) lower bound. The same holds for nondeterministic communication complexity.

(ii) For deterministic communication complexity, we prove that finding significant gaps between the true communication complexity and the best lower bound that can be obtained via partition arguments would imply progress on (a generalized version of) the “log rank conjecture” of communication complexity.

Jan Draisma, Eyal Kushilevitz, Enav Weinreb
High Complexity Tilings with Sparse Errors

Tile sets and tilings of the plane appear in many topics ranging from logic (the Entscheidungsproblem) to physics (quasicrystals). The idea is to enforce some global properties (of the entire tiling) by means of local rules (for neighbor tiles). A fundamental question: Can local rules enforce a complex (highly irregular) structure of a tiling?

The minimal (and weak) notion of irregularity is

aperiodicity

. R. Berger constructed a tile set such that every tiling is aperiodic. Though Berger’s tilings are not periodic, they are very regular in an intuitive sense.

In [3] a stronger result was proven: there exists a tile set such that all n × n squares in all tilings have Kolmogorov complexity Ω(n), i.e., contain Ω(n) bits of information. Such a tiling cannot be periodic or even computable.

In the present paper we apply the fixed-point argument from [5] to give a new construction of a tile set that enforces high Kolmogorov complexity tilings (thus providing an alternative proof of the results of [3]). The new construction is quite flexible, and we use it to prove a much stronger result: there exists a tile set such that all tilings have high Kolmogorov complexity even if (sparse enough) tiling errors are allowed.

In the present paper we apply the fixed-point argument from [5] to give a new construction of a tile set that enforces high Kolmogorov complexity tilings (thus providing an alternative proof of the results of [3]). The new construction is quite flexible, and we use it to prove a much stronger result: there exists a tile set such that all tilings have high Kolmogorov complexity even if (sparse enough) tiling errors are allowed.

Bruno Durand, Andrei Romashchenko, Alexander Shen
Tight Bounds for the Cover Time of Multiple Random Walks

We study the cover time of multiple random walks. Given a graph G of n vertices, assume that k independent random walks start from the same vertex. The parameter of interest is the speed-up defined as the ratio between the cover time of one and the cover time of k random walks. Recently Alon et al. developed several bounds that are based on the quotient between the cover time and maximum hitting times. Their technique gives a speed-up of Ω(k) on many graphs; however, for many graph classes, k has to be bounded by $\mathcal{O}(\log n)$. They also conjectured that, for any 1 ≤ k ≤ n, the speed-up is at most $\mathcal{O}(k)$ on any graph. As our main results, we prove the following:

We present a new lower bound on the speed-up that depends on the mixing-time. It gives a speed-up of Ω(k) on many graphs, even if k is as large as n.

We prove that the speed-up is $\mathcal{O}(k \log n)$ on any graph. Under rather mild conditions, we can also improve this bound to $\mathcal{O}(k)$, matching exactly the conjecture of Alon et al.

We find the correct order of the speed-up for any value of 1 ≤ k ≤ n on hypercubes, random graphs and expanders. For d-dimensional torus graphs (d > 2), our bounds are tight up to a factor of $\mathcal{O}(\log n)$.

Our findings also reveal a surprisingly sharp dichotomy on several graphs (including d-dimensional tori and hypercubes): up to a certain threshold the speed-up is k, while there is no additional speed-up above the threshold.

Robert Elsässer, Thomas Sauerwald
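
The speed-up is straightforward to estimate empirically; a simulation sketch (ours, for intuition only):

```python
import random

def cover_time(adj, start, k):
    """Steps until k independent random walks, all started at `start`,
    have jointly visited every vertex.  adj: node -> list of neighbours."""
    walkers = [start] * k
    visited = {start}
    steps = 0
    while len(visited) < len(adj):
        walkers = [random.choice(adj[v]) for v in walkers]
        visited.update(walkers)
        steps += 1
    return steps

def estimated_speedup(adj, start, k, trials=100):
    """Ratio between the average cover time of one walk and of k walks."""
    avg = lambda kk: sum(cover_time(adj, start, kk) for _ in range(trials)) / trials
    return avg(1) / avg(k)
```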
Online Computation with Advice

We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b = 0 corresponds to the classical online model, and $b = \lceil \log |\mathcal{A}| \rceil$, where $\mathcal{A}$ is the algorithm’s action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones.

In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1 ≤ b ≤ Θ(log n), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio Ω(log(n)/b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n)/b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio $k^{O(1/b)}$ for any choice of Θ(1) ≤ b ≤ log k.

Yuval Emek, Pierre Fraigniaud, Amos Korman, Adi Rosén
Dynamic Succinct Ordered Trees

We study the problem of maintaining a dynamic tree succinctly, in 2n + o(n) bits, under updates of the following form: insertion or deletion of a leaf, insertion of a node on an edge (edge subdivision) or deletion of a node with only one child (the child becomes a child of the grandparent). We allow satellite data of a fixed (but not necessarily constant) size to be associated to the nodes of the tree.

We support update operations in constant amortized time and support access to satellite data and basic navigation operations in worst-case constant time; the basic navigation operations include parent, first/last-child, and previous/next-child. We demonstrate that to allow fast support for more extended operations such as determining the i-th child of a node, the rank of a child among its siblings, or subtree size, we require a restrictive update strategy, for which we propose the finger-update model where updates are performed by a finger which is only allowed to crawl on the tree (between a child and a parent or between consecutive siblings). Under this model, we describe how the named operations can be performed in worst-case constant time.

Previous work on dynamic succinct ordered trees [1,2] is mainly restricted to binary trees and achieves poly-logarithmic [1] or “poly-log-log” [2] update time under a more restricted model. The best previous result on ordinal trees achieves only sublinear amortized update time and “poly-log-log” query time [3].

Arash Farzan, J. Ian Munro
Universal Succinct Representations of Trees?

We consider the succinct representation of ordinal and cardinal trees on the RAM with logarithmic word size. Given a tree T, our representations support the following operations in O(1) time: (i) BP-substring(i, b), which reports the substring of length b bits (b is at most the word size) beginning at position i of the balanced parenthesis representation of T; (ii) DFUDS-substring(i, b), which does the same for the depth first unary degree sequence representation; and (iii) a similar operation for tree-partition based representations of T. We give:

an asymptotically space-optimal 2n + o(n) bit representation of n-node ordinal trees that supports all the above operations with b = Θ(log n), answering an open question from [He et al., ICALP’07];

an asymptotically space-optimal C(n, k) + o(n)-bit representation of k-ary cardinal trees that supports (with $b = \Theta(\sqrt{\log n})$) the operations (ii) and (iii) above, on the ordinal tree obtained by removing labels from the cardinal tree, as well as the usual label-based operations. As a result, we obtain a fully-functional cardinal tree representation with the above space complexity. This answers an open question from [Raman et al., SODA’02].

Our new representations are able to simultaneously emulate the BP, DFUDS and partitioned representations using a single instance of the data structure, and thus aim towards universality. They not only support the union of all the ordinal tree operations supported by these representations, but will also automatically inherit any new operations supported by these representations in the future.

Arash Farzan, Rajeev Raman, S. Srinivasa Rao
Distortion Is Fixed Parameter Tractable

We study low-distortion embedding of metric spaces into the line, and more generally, into the shortest path metric of trees, from the parameterized complexity perspective. Let M = M(G) be the shortest path metric of an edge weighted graph G, with the vertex set V(G) and the edge set E(G), on n vertices. We give the first fixed parameter tractable algorithm that for an unweighted graph metric M and integer d either constructs an embedding of M into the line with distortion at most d, or concludes that no such embedding exists. Our algorithm requires $O(nd^4(2d+1)^{2d})$ time, which is a significant improvement over the best previous algorithm of Bădoiu et al. that runs in time $O(n^{4d+2}d^{O(1)})$. We find it surprising that this problem turns out to be fixed parameter tractable, because of its apparent similarity to the notoriously hard Bandwidth Minimization problem.

We extend our results on embedding unweighted graph metrics into the line in two ways. First, we give an algorithm to construct small distortion embeddings of weighted graph metrics. The running time of our algorithm is $O(n(dW)^4(2d+1)^{2dW})$ where W is the largest edge weight of the input graph. To complement this result, we show that the exponential dependence on the maximum edge weight is unavoidable. In particular, we show that deciding whether a weighted graph metric M(G) with maximum weight W < |V(G)| can be embedded into the line with distortion at most d is NP-complete for every fixed rational d ≥ 2. This rules out any possibility of an algorithm with running time $O((nW)^{h(d)})$ where h is a function of d alone. Secondly, we consider more general host metrics for which analogous results hold. In particular, we prove that for any tree T with maximum degree Δ, embedding M into a shortest path metric of T is fixed parameter tractable, parameterized by (Δ, d).

Michael R. Fellows, Fedor V. Fomin, Daniel Lokshtanov, Elena Losievskaja, Frances A. Rosamond, Saket Saurabh
Towards Optimal Range Medians

We consider the following problem: given an unsorted array of n elements, and a sequence of intervals in the array, compute the median in each of the subarrays defined by the intervals. We describe a simple algorithm which uses O(n) space and needs O(n log k + k log n) time to answer k such median queries. This improves previous algorithms by a logarithmic factor and matches a lower bound for k = O(n). Since, in contrast to previous approaches, the algorithm decomposes the range of element values rather than the array, it has natural generalizations to higher-dimensional problems – it reduces a range median query to a logarithmic number of range counting queries.

Beat Gfeller, Peter Sanders
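
The reduction mentioned in the last sentence is easy to sketch: binary-search the sorted value range, asking at each step how many elements of the query interval are at most the current value. Here the counting query is answered naively; the paper's point is that a genuine range-counting structure makes each such query cheap (a sketch of the idea only, not the paper's algorithm):

```python
def range_median(a, lo, hi):
    """Lower median of the subarray a[lo:hi], found by binary search over
    the distinct values, with one range-counting query per step."""
    vals = sorted(set(a))
    k = (hi - lo + 1) // 2          # 1-based rank of the lower median
    left, right = 0, len(vals) - 1
    while left < right:
        mid = (left + right) // 2
        # Range counting query: #{lo <= t < hi : a[t] <= vals[mid]}.
        count = sum(1 for x in a[lo:hi] if x <= vals[mid])
        if count >= k:
            right = mid
        else:
            left = mid + 1
    return vals[left]
```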
B-Treaps: A Uniquely Represented Alternative to B-Trees

We present the first uniquely represented data structure for an external memory model of computation, a B-tree analogue called a B-treap. Uniquely represented data structures represent each logical state with a unique machine state. Such data structures are strongly history-independent; they reveal no information about the historical sequence of operations that led to the current logical state. For example, a uniquely represented file-system would support the deletion of a file in a way that, in a strong information-theoretic sense, provably removes all evidence that the file ever existed. Like the B-tree, the B-treap has depth $O(\log_B n)$, where B is the block transfer size of the external memory; it uses linear space with high probability and supports efficient one-dimensional range queries.

Daniel Golovin
Testing Fourier Dimensionality and Sparsity

We present a range of new results for testing properties of Boolean functions that are defined in terms of the Fourier spectrum. Broadly speaking, our results show that the property of a Boolean function having a concise Fourier representation is locally testable.

We first give an efficient algorithm for testing whether the Fourier spectrum of a Boolean function is supported in a low-dimensional subspace of ${\mathbb F}_2^n$ (equivalently, for testing whether f is a junta over a small number of parities). We next give an efficient algorithm for testing whether a Boolean function has a sparse Fourier spectrum (small number of nonzero coefficients). In both cases we also prove lower bounds showing that any testing algorithm — even an adaptive one — must have query complexity within a polynomial factor of our algorithms, which are nonadaptive. Finally, we give an “implicit learning” algorithm that lets us test any sub-property of Fourier concision.

Our technical contributions include new structural results about sparse Boolean functions and new analysis of the pairwise independent hashing of Fourier coefficients from [12].

Parikshit Gopalan, Ryan O’Donnell, Rocco A. Servedio, Amir Shpilka, Karl Wimmer
Revisiting the Direct Sum Theorem and Space Lower Bounds in Random Order Streams

Estimating frequency moments and $L_p$ distances are well studied problems in the adversarial data stream model and tight space bounds are known for these two problems. There has been growing interest in revisiting these problems in the framework of random-order streams. The best space lower bound known for computing the k-th frequency moment in random-order streams is $\Omega(n^{1-2.5/k})$ by Andoni et al., and it is conjectured that the real lower bound should be $\Omega(n^{1-2/k})$. In this paper, we resolve this conjecture. In our approach, we revisit the direct sum theorem developed by Bar-Yossef et al. in a random-partition private messages model and provide a tight $\Omega(n^{1-2/k}/\ell)$ space lower bound for any ℓ-pass algorithm that approximates the frequency moment in the random-order stream model to a constant factor. Finally, we also introduce the notion of space-entropy tradeoffs in random order streams, as a means of studying intermediate models between adversarial and fully random order streams. We show an almost tight space-entropy tradeoff for $L_\infty$ distance and a non-trivial tradeoff for $L_p$ distances.

Sudipto Guha, Zhiyi Huang
Wireless Communication Is in APX

In this paper we address a common question in wireless communication: How long does it take to satisfy an arbitrary set of wireless communication requests? This problem is known as the wireless scheduling problem. Our main result proves that wireless scheduling is in APX. In addition we present a robustness result, showing that constant parameter and model changes will modify the result only by a constant.

Magnús M. Halldórsson, Roger Wattenhofer
The Ehrenfeucht-Silberger Problem

We consider repetitions in words and solve a longstanding open problem about the relation between the period of a word and the length of its longest unbordered factor. A word u is called bordered if there exists a proper prefix of u that is also a suffix of u; otherwise it is called unbordered. In 1979 Ehrenfeucht and Silberger raised the following problem: what is the maximum length of a word w, with respect to the length τ of its longest unbordered factor, that still allows τ to be shorter than the period π of w? We show that if w is longer than 7(τ − 1)/3 then τ = π, which gives the optimal asymptotic bound.

Štěpán Holub, Dirk Nowotka
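
The notions in this abstract are easy to make concrete with a brute-force sketch (ours, quadratic to cubic, for illustration only):

```python
def is_bordered(u):
    """u is bordered iff some proper non-empty prefix equals a suffix."""
    return any(u[:k] == u[-k:] for k in range(1, len(u)))

def period(w):
    """Smallest p >= 1 with w[i] == w[i + p] for all valid i."""
    n = len(w)
    return next(p for p in range(1, n + 1)
                if all(w[i] == w[i + p] for i in range(n - p)))

def longest_unbordered_factor(w):
    """tau(w): the maximum length of an unbordered factor of w."""
    n = len(w)
    return max(j - i for i in range(n) for j in range(i + 1, n + 1)
               if not is_bordered(w[i:j]))
```

The theorem says that whenever len(w) > 7 * (tau - 1) / 3, the two quantities period(w) and longest_unbordered_factor(w) coincide.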
Applications of Effective Probability Theory to Martin-Löf Randomness

We pursue the study of the framework of layerwise computability introduced in a preceding paper and give three applications. (i) We prove a general version of Birkhoff’s ergodic theorem for random points, where the transformation and the observable are supposed to be effectively measurable instead of computable. This result significantly improves those of V’yugin and Nandakumar. (ii) We provide a general framework for deriving sharper theorems for random points, sensitive to the speed of convergence. This offers a systematic approach to obtain results in the spirit of Davie’s. (iii) Proving an effective version of the Prokhorov theorem, we positively answer a question recently raised by Fouché: can random Brownian paths reach any random number? All this shows that layerwise computability is a powerful framework to study Martin-Löf randomness, with a wide range of applications.

Mathieu Hoyrup, Cristóbal Rojas
An EPTAS for Scheduling Jobs on Uniform Processors: Using an MILP Relaxation with a Constant Number of Integral Variables

In this paper, we present an efficient polynomial time approximation scheme (EPTAS) for scheduling on uniform processors, i.e. finding a minimum-length schedule for a set of $n$ independent jobs on $m$ processors with different speeds (a fundamental NP-hard scheduling problem). The previous best polynomial time approximation scheme (PTAS) by Hochbaum and Shmoys has a running time of $(n/\epsilon)^{O(1/\epsilon^2)}$. Our algorithm, based on a new mixed integer linear programming (MILP) formulation with a constant number of integral variables and an interesting rounding method, finds a schedule whose length is within a relative error $\epsilon$ of the optimum, and has running time $2^{O(1/\epsilon^2 \log(1/\epsilon)^3)}\,\mathrm{poly}(n)$.

Klaus Jansen
Popular Mixed Matchings

We study the problem of matching applicants to jobs under one-sided preferences; that is, each applicant ranks a non-empty subset of jobs in order of preference, possibly involving ties. A matching $M$ is said to be more popular than $T$ if the applicants that prefer $M$ to $T$ outnumber those that prefer $T$ to $M$. A matching is said to be popular if there is no matching more popular than it. Equivalently, a matching $M$ is popular if $\varphi(M,T) \geq \varphi(T,M)$ for all matchings $T$, where $\varphi(X,Y)$ is the number of applicants that prefer $X$ to $Y$.

Previously studied solution concepts based on the popularity criterion are either not guaranteed to exist for every instance (e.g., popular matchings) or are NP-hard to compute (e.g., least unpopular matchings). This paper addresses this issue by considering mixed matchings. A mixed matching is simply a probability distribution over matchings in the input graph. The function $\varphi$ that compares two matchings generalizes in a natural manner to mixed matchings by taking expectations. A mixed matching $P$ is popular if $\varphi(P,Q) \geq \varphi(Q,P)$ for all mixed matchings $Q$.

We show that popular mixed matchings always exist and we design polynomial-time algorithms for finding them. Then we study their efficiency and give tight bounds on the price of anarchy and price of stability of the popular matching problem.
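To make the comparison function concrete, here is a minimal Python sketch of $\varphi$ for (pure) matchings, under a hypothetical representation where pref[a] is applicant a's preference list without ties and a matching is a dict from applicants to jobs:

    def phi(pref, X, Y):
        # Number of applicants that strictly prefer matching X to matching Y.
        # Being unmatched is treated as worse than any ranked job.
        def rank(a, job):
            return pref[a].index(job) if job in pref[a] else len(pref[a])
        return sum(1 for a in pref if rank(a, X.get(a)) < rank(a, Y.get(a)))

    # M is popular iff phi(pref, M, T) >= phi(pref, T, M) for every matching T.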

Telikepalli Kavitha, Julián Mestre, Meghana Nasre
Factoring Groups Efficiently

We give a polynomial time algorithm that computes a decomposition of a finite group $G$ given in the form of its multiplication table. That is, given $G$, the algorithm outputs two subgroups $A$ and $B$ of $G$ such that $G$ is the direct product of $A$ and $B$, if such a decomposition exists.
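While finding the decomposition is the hard part, verifying one is straightforward. A Python sketch, assuming $A$ and $B$ are subgroups and the multiplication table is given as a dict mul mapping element pairs to elements (a hypothetical encoding; a verifier only, not the paper's algorithm):

    def is_internal_direct_product(G, mul, A, B, e):
        # G = A x B internally iff A and B meet only in the identity e,
        # elements of A commute with elements of B, and the products
        # a*b exhaust G.
        if set(A) & set(B) != {e}:
            return False
        if any(mul[(a, b)] != mul[(b, a)] for a in A for b in B):
            return False
        return {mul[(a, b)] for a in A for b in B} == set(G)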

Neeraj Kayal, Timur Nezhmetdinov
On Finding Dense Subgraphs

Given an undirected graph $G = (V, E)$, the density of a subgraph on vertex set $S$ is defined as $d(S)=\frac{|E(S)|}{|S|}$, where $E(S)$ is the set of edges in the subgraph induced by the nodes in $S$. Finding subgraphs of maximum density is a very well studied problem. One can also generalize this notion to directed graphs. For a directed graph, one notion of density given by Kannan and Vinay [12] is as follows: given subsets $S$ and $T$ of vertices, the density of the subgraph is $d(S,T)=\frac{|E(S,T)|}{\sqrt{|S||T|}}$, where $E(S,T)$ is the set of edges going from $S$ to $T$. Without any size constraints, a subgraph of maximum density can be found in polynomial time. When we require the subgraph to have a specified size, the problem of finding a maximum density subgraph becomes NP-hard. In this paper we focus on developing fast polynomial time algorithms for several variations of dense subgraph problems for both directed and undirected graphs. When there is no size bound, we extend the flow-based technique for obtaining a densest subgraph in directed graphs and also give a linear time 2-approximation algorithm for it. When a size lower bound is specified for both directed and undirected cases, we show that the problem is NP-complete and give fast algorithms to find subgraphs within a factor 2 of the optimum density. We also show that solving the densest subgraph problem with an upper bound on size is as hard as solving the problem with an exact size constraint, within a constant factor.
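The undirected density in this definition is a one-liner; a Python sketch (definitional only, not one of the paper's algorithms):

    def density(adj, S):
        # d(S) = |E(S)| / |S| for the subgraph induced by S,
        # where adj[u] is the neighbour set of u.
        S = set(S)
        edges = sum(1 for u in S for v in adj[u] if v in S) // 2
        return edges / len(S)

    # A triangle has d(S) = 3 / 3 = 1.0.
    assert density({0: {1, 2}, 1: {0, 2}, 2: {0, 1}}, [0, 1, 2]) == 1.0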

Samir Khuller, Barna Saha
Learning Halfspaces with Malicious Noise

We give new algorithms for learning halfspaces in the challenging malicious noise model, where an adversary may corrupt both the labels and the underlying distribution of examples. Our algorithms can tolerate malicious noise rates exponentially larger than previous work in terms of the dependence on the dimension $n$, and succeed for the fairly broad class of all isotropic log-concave distributions.

We give $\mathrm{poly}(n, 1/\epsilon)$-time algorithms for solving the following problems to accuracy $\epsilon$:

Learning origin-centered halfspaces in $\mathbb{R}^n$ with respect to the uniform distribution on the unit ball with malicious noise rate $\eta = \Omega(\epsilon^2/\log(n/\epsilon))$. (The best previous result was $\Omega(\epsilon/(n \log(n/\epsilon))^{1/4})$.)

Learning origin-centered halfspaces with respect to any isotropic log-concave distribution on $\mathbb{R}^n$ with malicious noise rate $\eta = \Omega(\epsilon^3/\log(n/\epsilon))$. This is the first efficient algorithm for learning under isotropic log-concave distributions in the presence of malicious noise.

We also give a $\mathrm{poly}(n, 1/\epsilon)$-time algorithm for learning origin-centered halfspaces under any isotropic log-concave distribution on $\mathbb{R}^n$ in the presence of adversarial label noise at rate $\eta = \Omega(\epsilon^3/\log(1/\epsilon))$. In the adversarial label noise setting (or agnostic model), labels can be noisy, but not the example points themselves. Previous results could handle $\eta = \Omega(\epsilon)$ but had running time exponential in an unspecified function of $1/\epsilon$.

Our analysis crucially exploits both concentration and anti-concentration properties of isotropic log-concave distributions. Our algorithms combine an iterative outlier removal procedure based on Principal Component Analysis with “smooth” boosting.
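As a rough illustration of one ingredient, the following Python sketch performs a single generic round of PCA-based outlier filtering, removing points that stick out along the direction of largest empirical variance; the paper's actual procedure, thresholds, and analysis differ:

    import numpy as np

    def pca_outlier_filter(X, threshold):
        # Remove points whose squared projection onto the top principal
        # direction exceeds `threshold`. X is an (m, n) sample matrix.
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        v = Vt[0]                      # top principal direction
        keep = (Xc @ v) ** 2 <= threshold
        return X[keep]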

Adam R. Klivans, Philip M. Long, Rocco A. Servedio
General Scheme for Perfect Quantum Network Coding with Free Classical Communication

This paper considers the problem of efficiently transmitting quantum states through a network. It has been known for some time that without additional assumptions it is impossible to achieve this task perfectly in general; indeed, it is impossible even for the simple butterfly network. As an additional resource we allow free classical communication between any pair of network nodes. It is shown that perfect quantum network coding is achievable in this model whenever classical network coding is possible over the same network when replacing all quantum capacities by classical capacities. More precisely, it is proved that perfect quantum network coding using free classical communication is possible over a network with $k$ source-target pairs if there exists a classical linear (or even vector-linear) coding scheme over a finite ring. Our proof is constructive in that we give explicit quantum coding operations for each network node. This paper also gives an upper bound on the amount of classical communication required, in terms of $k$, the maximal fan-in of any network node, and the size of the network.

Hirotada Kobayashi, François Le Gall, Harumichi Nishimura, Martin Rötteler
Greedy $\Delta$-Approximation Algorithm for Covering with Arbitrary Constraints and Submodular Cost

This paper describes a greedy $\Delta$-approximation algorithm for monotone covering, a generalization of many fundamental NP-hard covering problems. The approximation ratio $\Delta$ is the maximum number of variables on which any constraint depends. (For example, for vertex cover, $\Delta$ is 2.) The algorithm unifies, generalizes, and improves many previous algorithms for fundamental covering problems such as vertex cover, set cover, facility location, and integer and mixed-integer covering linear programs with upper bounds on the variables.

The algorithm is also the first $\Delta$-competitive algorithm for online monotone covering, which generalizes online versions of the above-mentioned covering problems as well as many fundamental online paging and caching problems. As such it also generalizes many classical online algorithms, including lru, fifo, fwf, balance, greedy-dual, and greedy-dual size (a.k.a. landlord), and algorithms for connection caching, where $\Delta$ is the cache size. It also gives new $\Delta$-competitive algorithms for upgradable variants of these problems, which model choosing the caching strategy and an appropriate hardware configuration (cache size, CPU, bus, network, etc.).
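The $\Delta = 2$ case is the textbook vertex cover heuristic; a Python sketch of that classical special case (folklore, not the paper's general monotone-covering algorithm):

    def vertex_cover_2approx(edges):
        # Repeatedly take both endpoints of an uncovered edge. Any optimal
        # cover contains at least one endpoint of each chosen edge, so the
        # result has size at most 2 * OPT.
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.update((u, v))
        return cover

    # On the path 1-2-3-4 this returns {1, 2, 3, 4} or {2, 3} depending on
    # edge order; both are within a factor 2 of the optimum {2, 3}.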

Christos Koufogiannakis, Neal E. Young
Limits and Applications of Group Algebras for Parameterized Problems

The algebraic framework introduced in [Koutis, Proc. of the 35th ICALP 2008] reduces several combinatorial problems in parameterized complexity to the problem of detecting multilinear degree-$k$ monomials in polynomials presented as circuits. The best known (randomized) algorithm for this problem requires only $O^*(2^k)$ time and oracle access to an arithmetic circuit, i.e. the ability to evaluate the circuit on elements from a suitable group algebra. This algorithm has been used to obtain the best known algorithms for several parameterized problems. In this paper we use communication complexity to show that the $O^*(2^k)$ algorithm is essentially optimal within this evaluation oracle framework. On the positive side, we give new applications of the method: finding a copy of a given tree on $k$ nodes, a spanning tree with at least $k$ leaves, a minimum set of nodes that dominate at least $t$ nodes, and an $m$-dimensional $k$-matching. In each case we achieve a faster algorithm than what was previously known. We also apply the algebraic method to problems in exact counting. Among other results, we show that a combination of dynamic programming and a variation of the algebraic method can break the trivial upper bounds for exact parameterized counting in fairly general settings.

Ioannis Koutis, Ryan Williams
Sleep with Guilt and Work Faster to Minimize Flow Plus Energy

In this paper we extend the study of flow-energy scheduling to a model that allows both sleep management and speed scaling. Our main result is a sleep management algorithm called IdleLonger, which works online for a processor with one or multiple levels of sleep states. The design of IdleLonger is interesting; among other things, it may force the processor to idle or even sleep even though new jobs have already arrived. IdleLonger works in both clairvoyant and non-clairvoyant settings. We show how to adapt two existing speed scaling algorithms, AJC [15] (clairvoyant) and LAPS [9] (non-clairvoyant), to the new model. The adapted algorithms, when coupled with IdleLonger, are shown to be $O(1)$-competitive clairvoyant and non-clairvoyant algorithms for minimizing flow plus energy on a processor that allows sleep management and speed scaling.

The above results are based on the traditional model with no limit on processor speed. If the processor has a maximum speed, the problem becomes more difficult as the processor, once overslept, cannot rely on unlimited extra speed to catch up with the delay. Nevertheless, we are able to enhance IdleLonger and AJC so that they remain $O(1)$-competitive for flow plus energy under the bounded speed model. Non-clairvoyant scheduling in the bounded speed model is left as an open problem.

Tak-Wah Lam, Lap-Kei Lee, Hing-Fung Ting, Isaac K. K. To, Prudence W. H. Wong
Improved Bounds for Flow Shop Scheduling

We resolve an open question raised by Feige & Scheideler by showing that the best known approximation algorithm for flow shops is essentially tight with respect to the lower bound it uses on the optimal makespan. We also obtain a nearly tight hardness result for the general version of flow shops, where jobs are not required to be processed on each machine.

Similar results hold true when the objective is to minimize the sum of completion times.

Monaldo Mastrolilli, Ola Svensson
A 3/2-Approximation Algorithm for General Stable Marriage

In an instance of the stable marriage problem with ties and incomplete preference lists, stable matchings can have different sizes. It is APX-hard to compute a maximum cardinality stable matching, but polynomial-time approximation algorithms with constant performance guarantees have recently been proposed, both for the general version of this problem and for several special cases. Our contribution is to describe a $\frac{3}{2}$-approximation algorithm for the general version of this problem, improving upon the recent $\frac{5}{3}$-approximation algorithm of Király. Interest in such algorithms arises from the problem's application to centralized matching schemes, the best known of which involve the assignment of graduating medical students to hospitals in various countries.

Eric McDermid
Limiting Negations in Formulas

Negation-limited circuits have been studied as a circuit model between general circuits and monotone circuits. In this paper, we consider limiting negations in formulas. The minimum number of NOT gates in a Boolean circuit computing a Boolean function $f$ is called the inversion complexity of $f$. In 1958, Markov determined the inversion complexity of every Boolean function and, in particular, proved that $\lceil \log_2(n+1) \rceil$ NOT gates are sufficient to compute any Boolean function on $n$ variables. We determine the inversion complexity of every Boolean function in formulas, i.e., the minimum number of NOT gates (NOT operators) in a Boolean formula computing (representing) a Boolean function, and, in particular, prove that $\lceil n/2 \rceil$ NOT gates are sufficient to compute any Boolean function on $n$ variables. Moreover, we show that if there is a polynomial-size formula computing a Boolean function $f$, then there is a polynomial-size formula computing $f$ with at most $\lceil n/2 \rceil$ NOT gates. We also consider the inversion complexity in formulas in negation normal form and prove that the inversion complexity is at most polynomial in $n$ for every Boolean function on $n$ variables.
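A quick numerical comparison of the two bounds (illustrative arithmetic only):

    import math

    # Markov's 1958 circuit bound vs. the formula bound of this paper,
    # for Boolean functions on n variables.
    for n in (3, 7, 15):
        print(n, math.ceil(math.log2(n + 1)), math.ceil(n / 2))
    # n=15: 4 NOT gates suffice in circuits, while the formula bound is 8,
    # so formulas may need exponentially more negations than circuits.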

Hiroki Morizumi
Fast Polynomial-Space Algorithms Using Möbius Inversion: Improving on Steiner Tree and Related Problems

Given a graph with $n$ vertices, $k$ terminals and bounded integer weights on the edges, we compute the minimum Steiner tree in $\mathcal{O}^*(2^k)$ time and polynomial space, where the $\mathcal{O}^*$ notation omits $\mathrm{poly}(n,k)$ factors. Among our results are also polynomial-space $\mathcal{O}^*(2^n)$ algorithms for several $\mathcal{NP}$-complete spanning tree and partition problems.

The previous fastest known algorithms for these problems use dynamic programming over subsets and require exponential space. We introduce the concept of branching walks and extend the inclusion-exclusion algorithm of Karp for counting Hamiltonian paths. Moreover, we show that our algorithms can also be obtained by applying Möbius inversion to the recurrences used in the dynamic programming algorithms.
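The inclusion-exclusion idea that the paper extends can be seen in Karp's Hamiltonian path count; a Python sketch using exponential time but only polynomial space:

    from itertools import combinations

    def count_hamiltonian_paths(n, adj):
        # #HamPaths = sum over vertex subsets S of (-1)^(n-|S|) times the
        # number of (n-1)-step walks staying inside S: walks that miss
        # some vertex are cancelled by the alternating signs.
        total = 0
        for r in range(1, n + 1):
            for S in combinations(range(n), r):
                inside = set(S)
                walks = {v: 1 for v in inside}        # walks of length 0
                for _ in range(n - 1):
                    walks = {v: sum(walks[u] for u in adj[v] if u in inside)
                             for v in inside}
                total += (-1) ** (n - r) * sum(walks.values())
        return total

    # The path 0-1-2 has exactly two (directed) Hamiltonian paths.
    assert count_hamiltonian_paths(3, {0: {1}, 1: {0, 2}, 2: {1}}) == 2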

Jesper Nederlof
Superhighness and Strong Jump Traceability

Let $A$ be a c.e. set. Then $A$ is strongly jump traceable if and only if $A$ is Turing below each superhigh Martin-Löf random set. The proof combines priority with measure-theoretic arguments.

André Nies
Amortized Communication Complexity of Distributions

Consider the following general communication problem: Alice and Bob have to simulate a probabilistic function $p$ that associates with every $(x,y) \in \mathcal{X} \times \mathcal{Y}$ a probability distribution on $\mathcal{A} \times \mathcal{B}$. The two parties, upon receiving inputs $x$ and $y$, need to output $a \in \mathcal{A}$ and $b \in \mathcal{B}$ in such a manner that the pair $(a,b)$ is distributed according to $p(x,y)$. They share randomness, and have access to a channel that allows two-way communication. Our main focus is an instance of the above problem coming from the well-known EPR experiment in quantum physics. In this paper, we are concerned with the amount of communication required to simulate the EPR experiment when it is repeated in parallel a large number of times, giving rise to a notion of amortized communication complexity.

In the 3-dimensional case, Toner and Bacon showed that this problem could be solved using on average 0.85 bits of communication per repetition [1]. We show that their approach cannot go below 0.414 bits, and we give a fundamentally different technique, relying on the reverse Shannon theorem, which allows us to reduce the amortized communication to 0.28 bits for dimension 3, and 0.410 bits for arbitrary dimension. We also give a lower bound of 0.13 bits for this problem (valid for one-way protocols), and conjecture that this could be improved to match the upper bounds. In our investigation we find interesting connections to a number of different problems in communication complexity, in particular to [2]. The results contained herein are entirely classical and no knowledge of the quantum phenomenon is assumed.

Jérémie Roland, Mario Szegedy
The Number of Symbol Comparisons in QuickSort and QuickSelect

We revisit the classical QuickSort and QuickSelect algorithms, under a complexity model that fully takes into account the elementary comparisons between symbols composing the records to be processed. Our probabilistic models belong to a broad category of information sources that encompasses memoryless (i.e., independent-symbols) and Markov sources, as well as many unbounded-correlation sources. We establish that, under our conditions, the average-case complexity of QuickSort is $O(n \log^2 n)$ [rather than $O(n \log n)$, classically], whereas that of QuickSelect remains $O(n)$. Explicit expressions for the implied constants are provided by our combinatorial–analytic methods.
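The cost model can be made concrete: comparing two strings costs one symbol comparison per shared-prefix position, plus one for the first mismatch. A Python sketch of this accounting (a simplified model for illustration; the paper's analysis is far more refined):

    def symbol_comparisons(u, v):
        # Symbols examined when comparing u and v lexicographically.
        i = 0
        while i < min(len(u), len(v)) and u[i] == v[i]:
            i += 1
        return i + 1 if i < min(len(u), len(v)) else i

    # 'banana' vs 'bandit': 'b', 'a', 'n' match, the 4th symbols differ.
    assert symbol_comparisons("banana", "bandit") == 4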

Brigitte Vallée, Julien Clément, James Allen Fill, Philippe Flajolet
Computing the Girth of a Planar Graph in O(n logn) Time

We give an $O(n \log n)$ algorithm for computing the girth (shortest cycle) of an undirected $n$-vertex planar graph. Our solution extends to any graph of bounded genus. This improves upon the best previously known algorithms for this problem.
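For contrast with the $O(n \log n)$ result, the standard baseline computes the girth with a BFS from every vertex in $O(nm)$ time; a Python sketch of that folklore algorithm (not the paper's):

    from collections import deque

    def girth(adj):
        # BFS from each vertex; every non-tree edge (u, v) yields a closed
        # walk of length dist[u] + dist[v] + 1 through the root, and the
        # minimum over all roots and edges equals the girth.
        best = float("inf")
        for s in adj:
            dist, parent = {s: 0}, {s: None}
            q = deque([s])
            while q:
                u = q.popleft()
                for v in adj[u]:
                    if v not in dist:
                        dist[v], parent[v] = dist[u] + 1, u
                        q.append(v)
                    elif parent[u] != v:
                        best = min(best, dist[u] + dist[v] + 1)
        return best

    # A 4-cycle: girth({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}) == 4.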

Oren Weimann, Raphael Yuster
Elimination Graphs

A graph is chordal if it does not contain any induced cycle of size greater than three. An alternative characterization of chordal graphs is via a perfect elimination ordering, which is an ordering of the vertices such that, for each vertex $v$, the neighbors of $v$ that occur later than $v$ in the ordering form a clique. Akcoglu et al. [1] define an extension of chordal graphs whereby the neighbors of $v$ that occur later than $v$ in the elimination order have at most $k$ independent vertices. We refer to such graphs as sequentially $k$-independent graphs. We study properties of such families of graphs, and we show that several natural classes of graphs are sequentially $k$-independent for small $k$. In particular, any intersection graph of translates of a convex object in the two-dimensional plane is a sequentially 3-independent graph; furthermore, any planar graph is a sequentially 3-independent graph. For any fixed constant $k$, we develop simple, polynomial-time approximation algorithms for sequentially $k$-independent graphs with respect to several well-studied NP-complete problems.
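The base case of the definition is checkable in a few lines; a Python sketch of the perfect elimination ordering test, i.e. the sequentially 1-independent case (definitional only):

    def is_perfect_elimination_order(adj, order):
        # For each vertex v, its neighbours occurring later in `order`
        # must form a clique (at most 1 independent vertex among them).
        pos = {v: i for i, v in enumerate(order)}
        for v in order:
            later = [u for u in adj[v] if pos[u] > pos[v]]
            if any(b not in adj[a] for a in later for b in later if a != b):
                return False
        return True

    # A triangle is chordal; any vertex order is a perfect elimination order.
    assert is_perfect_elimination_order(
        {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}, [0, 1, 2])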

Yuli Ye, Allan Borodin
Backmatter
Metadata
Title
Automata, Languages and Programming
Edited by
Susanne Albers
Alberto Marchetti-Spaccamela
Yossi Matias
Sotiris Nikoletseas
Wolfgang Thomas
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-02927-1
Print ISBN
978-3-642-02926-4
DOI
https://doi.org/10.1007/978-3-642-02927-1