
About this book

This book constitutes the refereed proceedings of the 12th International Colloquium on Theoretical Aspects of Computing, ICTAC 2015, held in Cali, Colombia, in October 2015. The 25 revised full papers, presented together with 7 invited talks, 3 tool papers, and 2 short papers, were carefully reviewed and selected from 93 submissions. The papers cover various topics such as algebra and category theory; automata and formal languages; concurrency; constraints, logic and semantics; software architecture and component-based design; and verification.

Table of contents

Frontmatter

Improved Approximation Algorithms for Stochastic Matching

In this paper we consider the Stochastic Matching problem, which is motivated by applications in kidney exchange and online dating. We are given an undirected graph in which every edge is assigned a probability of existence and a positive profit, and each node is assigned a positive integer called timeout. We know whether an edge exists or not only after probing it. On this random graph we execute a process that probes the edges one by one and gradually constructs a matching. The process is constrained in two ways: once an edge is taken it cannot be removed from the matching, and the timeout of node $v$ upper-bounds the number of edges incident to $v$ that can be probed. The goal is to maximize the expected profit of the constructed matching.

For this problem Bansal et al. [4] provided a 3-approximation algorithm for bipartite graphs, and a 4-approximation for general graphs. In this work we improve the approximation factors to 2.845 and 3.709, respectively.

We also consider an online version of the bipartite case, where one side of the partition arrives node by node, and each time a node $b$ arrives we have to decide which edges incident to $b$ we want to probe, and in which order. Here we present a 4.07-approximation, improving on the 7.92-approximation of Bansal et al. [4].

The main technical ingredient in our result is a novel way of probing edges according to a random but non-uniform permutation. Patching this method with an algorithm that works best for large probability edges (plus some additional ideas) leads to our improved approximation factors.
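As a concrete (and much simplified) illustration of the probing process described above, the sketch below simulates probing the edges of a random graph in a uniformly random order under timeout constraints; the paper's key ingredient is a carefully chosen non-uniform permutation, which is not reproduced here. All names are ours.

```python
import random

def probe_matching(nodes, edges, timeout, trials=10000):
    """Estimate the expected profit of a greedy probing process.

    edges: dict mapping (u, v) -> (probability, profit).
    timeout: dict mapping node -> max number of incident probes.
    Edges are probed in a uniformly random order (the paper's
    algorithm uses a non-uniform random permutation instead).
    """
    total = 0.0
    for _ in range(trials):
        probes = {v: 0 for v in nodes}   # probes used per node
        matched = set()                  # nodes already matched
        profit = 0.0
        order = list(edges)
        random.shuffle(order)
        for (u, v) in order:
            p, w = edges[(u, v)]
            if u in matched or v in matched:
                continue                 # the matching must stay valid
            if probes[u] >= timeout[u] or probes[v] >= timeout[v]:
                continue                 # timeouts bound incident probes
            probes[u] += 1
            probes[v] += 1
            if random.random() < p:      # the edge turns out to exist
                matched |= {u, v}        # taken edges are irrevocable
                profit += w
        total += profit
    return total / trials

# Tiny example: a triangle with mixed probabilities and profits.
nodes = [0, 1, 2]
edges = {(0, 1): (0.5, 2.0), (1, 2): (0.9, 1.0), (0, 2): (0.3, 3.0)}
print(probe_matching(nodes, edges, timeout={v: 1 for v in nodes}))
```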

Marek Adamczyk, Fabrizio Grandoni, Joydeep Mukherjee

Sorting and Permuting without Bank Conflicts on GPUs

In this paper, we look at the complexity of designing algorithms without any bank conflicts in the shared memory of Graphical Processing Units (GPUs). Given input of size $n$, $w$ processors and $w$ memory banks, we study three fundamental problems: sorting, permuting and $w$-way partitioning (defined as sorting an input containing exactly $n/w$ copies of every integer in $[w]$).

We solve sorting in optimal $O(\frac{n}{w} \log n)$ time. When $n \geq w^2$, we solve the partitioning problem optimally in $O(n/w)$ time. We also present a general solution for the partitioning problem which takes $O(\frac{n}{w} \log^3_{n/w} w)$ time. Finally, we solve the permutation problem using a randomized algorithm in $O(\frac{n}{w} \log\log\log_{n/w} n)$ time. Our results show evidence that when working with banked memory architectures, there is a separation between these problems, and the permutation and partitioning problems are not as easy as simple parallel scanning.

Peyman Afshani, Nodari Sitchinava

Approximating Minimum-Area Rectangular and Convex Containers for Packing Convex Polygons

We investigate the problem of finding a minimum-area container for the disjoint packing of a set of convex polygons by translations. In particular, we consider axis-parallel rectangles or arbitrary convex sets as containers. Both optimization problems are NP-hard; for both we develop efficient constant-factor approximation algorithms.

Helmut Alt, Mark de Berg, Christian Knauer

Primal-Dual and Dual-Fitting Analysis of Online Scheduling Algorithms for Generalized Flow Time Problems

We study online scheduling problems on a single processor that can be viewed as extensions of the well-studied problem of minimizing total weighted flow time. In particular, we provide a framework of analysis that is based on duality properties, does not rely on potential functions, and is applicable to a variety of scheduling problems. A key ingredient in our approach is bypassing the need for “black-box” rounding of fractional solutions, which yields improved competitive ratios.

We begin with an interpretation of Highest-Density-First (HDF) as a primal-dual algorithm, and a corresponding proof that HDF is optimal for total fractional weighted flow time (and thus scalable for the integral objective). Building upon the salient ideas of the proof, we show how to apply and extend this analysis to the more general problem of minimizing $\sum_j w_j g(F_j)$, where $w_j$ is the job weight, $F_j$ is the flow time and $g$ is a non-decreasing cost function. Among other results, we present improved competitive ratios for the setting in which $g$ is a concave function, and the setting of same-density jobs but general cost functions. We further apply our framework of analysis to online weighted completion time with general cost functions as well as scheduling under polyhedral constraints.
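For readers unfamiliar with the rule itself: HDF always runs the released, unfinished job of highest density (weight divided by processing time). A minimal discrete-time sketch, with all names ours and no attempt to reproduce the paper's primal-dual analysis:

```python
def hdf_schedule(jobs):
    """Highest-Density-First in unit time steps.

    jobs: list of (release, processing, weight) triples.
    Returns total weighted flow time, where a job's flow time is
    its completion time minus its release time, and a job's
    density is weight / processing time.
    """
    remaining = [p for (_, p, _) in jobs]
    t, done, total = 0, 0, 0
    while done < len(jobs):
        # Released, unfinished jobs; pick the one of highest density.
        live = [j for j, (r, _, _) in enumerate(jobs)
                if r <= t and remaining[j] > 0]
        if live:
            j = max(live, key=lambda j: jobs[j][2] / jobs[j][1])
            remaining[j] -= 1              # preemptively run one unit
            if remaining[j] == 0:
                r, _, w = jobs[j]
                total += w * (t + 1 - r)   # add weighted flow time
                done += 1
        t += 1
    return total

# (release, processing, weight): the dense short job preempts the long one.
print(hdf_schedule([(0, 10, 1), (2, 1, 5)]))
```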

Spyros Angelopoulos, Giorgio Lucarelli, Kim Thang Nguyen

Buffer Management for Packets with Processing Times

We discuss the well-known job scheduling problem with release times and deadlines, alongside an extended model: buffer management for packets with processing requirements. For job scheduling, an $\Omega(\sqrt{\frac{\log{\kappa}}{\log{\log{\kappa}}}})$ lower bound for any randomized preemptive algorithm was shown by Irani and Canetti (1995), where $\kappa$ is the maximum job duration or the maximum job value (the minimum is assumed to be 1). The proof of this well-known result is fairly elaborate and involved. In contrast, we show a significantly improved lower bound of $\Omega(\log \kappa)$ using a simple proof. Our result matches the easy upper bound and closes a gap that had been open for 20 years.

We also discuss an interesting extension of job scheduling (for tight jobs): the problem of handling a FIFO buffer of limited capacity, where packets arrive over time and may be preempted. Most of the work in buffer management considers the case where each packet has a unit processing requirement. We consider a model where packets require some number of processing cycles before they can be transmitted, and we aim to maximize the value of transmitted packets. We show an $\Omega(\frac{\log{\kappa}}{\log{\log{\kappa}}})$ lower bound on the competitive ratio of randomized algorithms in this setting. We also present bounds for several special cases. For packets with unit values we show a $\phi \approx 1.618$ lower bound on the competitive ratio of deterministic algorithms, and a 2-competitive algorithm for this problem. For the case of packets with constant densities we present a 4-competitive algorithm.

Yossi Azar, Oren Gilon

A Triplet-Based Exact Method for the Shift Minimisation Personnel Task Scheduling Problem

In this paper we describe a new approach for solving the shift minimisation personnel task scheduling problem. This variant of fixed job scheduling problems arises when tasks with fixed start and end times have to be assigned to personnel with shift time constraints. We present definitions and formulations, and briefly discuss complexity results for the variant that focuses on minimising the number of machines (or workers) required to schedule all jobs. We first develop some mathematical properties of the problem and, subsequently, necessary and sufficient conditions for feasibility. These properties are used to develop a new branch-and-bound scheme, which is used in conjunction with two column generation based approaches and a heuristic algorithm to create an efficient solution procedure. We present extensive computational results for large instances, thereby empirically demonstrating the effectiveness of our new approach.

Davaatseren Baatar, Mohan Krishnamoorthy, Andreas T. Ernst

Exact Minkowski Sums of Polygons With Holes

We present an efficient algorithm that computes the Minkowski sum of two polygons, which may have holes. The new algorithm is based on the convolution approach. Its efficiency stems in part from a property for Minkowski sums of polygons with holes, which in fact holds in any dimension: Given two polygons with holes, for each input polygon we can fill up the holes that are relatively small compared to the other polygon. Specifically, we can always fill up all the holes of at least one polygon, transforming it into a simple polygon, and still obtain exactly the same Minkowski sum. Obliterating holes in the input summands speeds up the computation of Minkowski sums.

We introduce a robust implementation of the new algorithm, which follows the Exact Geometric Computation paradigm and thus guarantees exact results. We also present an empirical comparison of the performance of Minkowski sum construction on various input examples, where we show that the implementation of the new algorithm exhibits better performance than several other implementations in many cases. The software is available as part of the 2D Minkowski Sums package of Cgal (Computational Geometry Algorithms Library), starting from Release 4.7. Additional information and supplementary material are available at our project page http://acg.cs.tau.ac.il/projects/rc.

Alon Baram, Efi Fogel, Dan Halperin, Michael Hemmer, Sebastian Morr

λ > 4

A polyomino (“lattice animal”) is an edge-connected set of squares on the two-dimensional square lattice. Counting polyominoes is an extremely hard problem in enumerative combinatorics, with important applications in statistical physics for modeling processes of percolation and collapse of branched polymers. We investigated a fundamental question related to polyominoes, namely, what is their growth constant, the asymptotic ratio between $A(n+1)$ and $A(n)$ as $n \to \infty$, where $A(n)$ is the number of polyominoes of size $n$. This value is also known as “Klarner's constant” and denoted by $\lambda$. So far, the best lower and upper bounds on $\lambda$ were roughly 3.98 and 4.65, respectively, so not even a single decimal digit of $\lambda$ was known. Using extremely high computing resources, we have shown (still rigorously) that $\lambda > 4.00253$, thereby settling a long-standing problem: proving that the leading digit of $\lambda$ is 4.
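To make $A(n)$ concrete, the brute-force sketch below counts fixed polyominoes (distinct up to translation only) for small $n$ and prints consecutive ratios $A(n+1)/A(n)$; the paper's bound $\lambda > 4.00253$ of course rests on far more sophisticated machinery. The enumeration style here is ours.

```python
def canonical(cells):
    """Translate a set of lattice cells so its min corner is (0, 0)."""
    mx = min(x for x, _ in cells)
    my = min(y for _, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def polyominoes(n):
    """All fixed polyominoes with n cells (distinct up to translation)."""
    shapes = {canonical({(0, 0)})}
    for _ in range(n - 1):
        grown = set()
        for shape in shapes:
            for (x, y) in shape:
                for nb in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if nb not in shape:
                        grown.add(canonical(shape | {nb}))
        shapes = grown        # every (k+1)-omino arises this way
    return shapes

# A(1..6) = 1, 2, 6, 19, 63, 216; the ratios creep toward Klarner's constant.
a = [len(polyominoes(n)) for n in range(1, 7)]
print(a)
print([round(a[i + 1] / a[i], 3) for i in range(len(a) - 1)])
```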

Gill Barequet, Günter Rote, Mira Shalah

Revenue Maximization for Selling Multiple Correlated Items

We study the problem of selling $n$ items to a single buyer with an additive valuation function. We consider the valuations of the items to be correlated, i.e., the desirabilities of the buyer for the items are not drawn independently. Ideally, the goal is to design a mechanism to maximize the revenue. However, it has been shown that a revenue-optimal mechanism might be very complicated and as a result inapplicable to real-world auctions. Therefore, our focus is on designing a simple mechanism that achieves a constant fraction of the optimal revenue. Babaioff et al. [3] propose a simple mechanism that achieves a constant fraction of the optimal revenue for the independent setting with a single additive buyer. However, they leave the following problem as an open question:

“Is there a simple, approximately optimal mechanism for a single additive buyer whose value for n items is sampled from a common base-value distribution?”

Babaioff et al. show that a constant approximation factor of the optimal revenue can be achieved by either selling the items separately or as a whole bundle in the independent setting. We show a similar result for the correlated setting when the desirabilities of the buyer are drawn from a common base-value distribution. It is worth mentioning that the core decomposition lemma, which is the heart of the proofs of efficiency of such mechanisms, does not hold for correlated settings. We therefore propose a modified version of this lemma which is applicable to correlated settings as well. Although we apply this technique to show that the proposed mechanism guarantees a constant fraction of the optimal revenue under a very weak correlation, this method alone cannot directly show the efficiency of the mechanism under stronger correlations. Therefore, via a combinatorial approach we reduce the problem to an auction with a weak correlation to which the core decomposition technique is applicable. In addition, we introduce a generalized model of correlation for items and show that the proposed mechanism achieves an $O(\log k)$ approximation factor of the optimal revenue in that setting.

MohammadHossein Bateni, Sina Dehghani, MohammadTaghi Hajiaghayi, Saeed Seddighin

Efficient Implementation of a Synchronous Parallel Push-Relabel Algorithm

Motivated by the observation that FIFO-based push-relabel algorithms are able to outperform highest label-based variants on modern, large maximum flow problem instances, we introduce an efficient implementation of the algorithm that uses coarse-grained parallelism to avoid the problems of existing parallel approaches. We demonstrate good relative and absolute speedups of our algorithm on a set of large graph instances taken from real-world applications. On a modern 40-core machine, our parallel implementation outperforms existing sequential implementations by up to a factor of 12 and other parallel implementations by factors of up to 3.

Niklas Baumstark, Guy Blelloch, Julian Shun

Towards Tight Lower Bounds for Scheduling Problems

We show a close connection between structural hardness for $k$-partite graphs and tight inapproximability results for scheduling problems with precedence constraints. Assuming a natural but nontrivial generalisation of the bipartite structural hardness result of [1], we obtain a hardness of $2 - \varepsilon$ for the problem of minimising the makespan when scheduling precedence-constrained jobs with preemption on identical parallel machines. This matches the best approximation guarantee for this problem [6,4]. Assuming the same hypothesis, we also obtain a super-constant inapproximability result for the problem of scheduling precedence-constrained jobs on related parallel machines, making progress towards settling an open question in both lists of ten open questions by Williamson and Shmoys [17] and by Schuurman and Woeginger [14].

The study of structural hardness of $k$-partite graphs is of independent interest, as it captures the intrinsic hardness of a large family of scheduling problems. Beyond the ones already mentioned, this generalisation also implies tight inapproximability for the problem of minimising the weighted completion time of precedence-constrained jobs on a single machine and for the problem of minimising the makespan of precedence-constrained jobs on identical parallel machines, hence unifying the results of Bansal and Khot [1] and Svensson [15], respectively.

Abbas Bazzi, Ashkan Norouzi-Fard

1-Planar Graphs have Constant Book Thickness

In a book embedding the vertices of a graph are placed on the “spine” of a book and the edges are assigned to “pages”, so that edges on the same page do not cross. In this paper, we prove that every 1-planar graph (that is, a graph that can be drawn on the plane such that no edge is crossed more than once) admits an embedding in a book with a constant number of pages. To the best of our knowledge, the best non-trivial previous upper bound was $O(\sqrt{n})$, where $n$ is the number of vertices of the graph.

Michael A. Bekos, Till Bruckdorfer, Michael Kaufmann, Chrysanthi Raftopoulou

Access, Rank, and Select in Grammar-compressed Strings

Given a string $S$ of length $N$ on a fixed alphabet of $\sigma$ symbols, a grammar compressor produces a context-free grammar $G$ of size $n$ that generates $S$ and only $S$. In this paper we describe data structures to support the following operations on a grammar-compressed string: $\mathit{access}(S, i, j)$ (return substring $S[i, j]$), $\mathit{rank}_c(S, i)$ (return the number of occurrences of symbol $c$ before position $i$ in $S$), and $\mathit{select}_c(S, i)$ (return the position of the $i$th occurrence of $c$ in $S$). Our main result for $\mathit{access}$ is a method that requires $O(n \log N)$ bits of space and $O(\log N + m/\log_\sigma N)$ time to extract $m = j - i + 1$ consecutive symbols from $S$. Alternatively, we can achieve $O(\log_\tau N + m/\log_\sigma N)$ query time using $O(n\tau\log_\tau (N/n)\log N)$ bits of space, matching a lower bound stated by Verbin and Yu for strings where $N$ is polynomially related to $n$ when $\tau = \log^\varepsilon N$. For $\mathit{rank}$ and $\mathit{select}$ we describe data structures of size $O(n\sigma\log N)$ bits that support the two operations in $O(\log N)$ time. We also extend our other structure to support both operations in $O(\log_\tau N)$ time using $O(n\tau\sigma\log_\tau (N/n)\log N)$ bits of space. When $\tau = \log^\varepsilon N$ the query time is $O(\log N/\log\log N)$, and we provide a hardness result showing that significantly improving this would imply a major breakthrough on a hard graph-theoretical problem.
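A toy illustration of the access primitive (not the paper's data structure): given a straight-line program, one can extract $S[i, j]$ by precomputing the expansion length of every nonterminal and descending only into the branches that overlap the queried interval. The grammar encoding and names below are ours.

```python
def expansion_lengths(grammar):
    """grammar: dict mapping a symbol to (left, right) or a terminal char.
    Returns the length of the string each symbol expands to."""
    length = {}
    def size(sym):
        if sym not in length:
            rule = grammar[sym]
            length[sym] = 1 if isinstance(rule, str) else size(rule[0]) + size(rule[1])
        return length[sym]
    for sym in grammar:
        size(sym)
    return length

def access(grammar, length, sym, i, j, out):
    """Append the substring of sym's expansion on positions [i, j) to out."""
    if i >= j:
        return
    rule = grammar[sym]
    if isinstance(rule, str):          # terminal: expansion is one character
        out.append(rule)
        return
    left, right = rule
    llen = length[left]
    access(grammar, length, left, i, min(j, llen), out)
    access(grammar, length, right, max(0, i - llen), j - llen, out)

# S = "abaababa": a small SLP, Fibonacci-string style.
g = {'a': 'a', 'b': 'b', 'X': ('a', 'b'), 'Y': ('X', 'a'),
     'Z': ('Y', 'X'), 'R': ('Z', 'Y')}
lens = expansion_lengths(g)
buf = []
access(g, lens, 'R', 2, 7, buf)        # extract S[2..6] without full expansion
print(''.join(buf))                    # -> "aabab"
```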

Djamal Belazzougui, Patrick Hagge Cording, Simon J. Puglisi, Yasuo Tabei

Fully-Dynamic Approximation of Betweenness Centrality

Betweenness is a well-known centrality measure that ranks the nodes of a network according to their participation in shortest paths. Since an exact computation is prohibitive in large networks, several approximation algorithms have been proposed. Besides that, recent years have seen the publication of dynamic algorithms for efficient recomputation of betweenness in evolving networks. In previous work we proposed the first semi-dynamic algorithms that recompute an approximation of betweenness in connected graphs after batches of edge insertions.

In this paper we propose the first fully-dynamic approximation algorithms (for weighted and unweighted undirected graphs that need not be connected) with a provable guarantee on the maximum approximation error. The transfer to fully-dynamic and disconnected graphs entails additional algorithmic problems that could be of independent interest. In particular, we propose a new upper bound on the vertex diameter for weighted undirected graphs. For both weighted and unweighted graphs, we also propose the first fully-dynamic algorithms that keep track of this upper bound. In addition, we extend our former algorithm for semi-dynamic BFS to batches of both edge insertions and deletions.

Using approximation, our algorithms are the first to make in-memory computation of betweenness in fully-dynamic networks with millions of edges feasible. Our experiments show that they can achieve substantial speedups compared to recomputation, up to several orders of magnitude.
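For reference, the exact computation that becomes prohibitive at scale is classically done with Brandes' algorithm, which accumulates pair dependencies along BFS shortest-path DAGs. A minimal unweighted version (ours, not the paper's approximation scheme):

```python
from collections import deque

def betweenness(adj):
    """Exact betweenness centrality of an unweighted graph via
    Brandes' algorithm. adj: dict mapping node -> list of neighbors."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        dist = {s: 0}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc   # for undirected graphs, halve the values

# Path graph 0-1-2-3: the middle nodes lie on the most shortest paths.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(betweenness(adj))
```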

Elisabetta Bergamini, Henning Meyerhenke

Improved Purely Additive Fault-Tolerant Spanners

Let $G$ be an unweighted $n$-node undirected graph. A $\beta$-additive spanner of $G$ is a spanning subgraph $H$ of $G$ such that distances in $H$ are stretched at most by an additive term $\beta$ w.r.t. the corresponding distances in $G$. A natural research goal related with spanners is that of designing sparse spanners with low stretch.

In this paper, we focus on fault-tolerant additive spanners, namely additive spanners which are able to preserve their additive stretch even when one edge fails. We are able to improve all known such spanners, in terms of either sparsity or stretch. In particular, we consider the sparsest known spanners with stretch 6, 28, and 38, and reduce the stretch to 4, 10, and 14, respectively (while keeping the same sparsity).

Our results are based on two different constructions. On one hand, we show how to augment (by adding a small number of edges) a fault-tolerant additive sourcewise spanner (that approximately preserves distances only from a given set of source nodes) into one such spanner that preserves all pairwise distances. On the other hand, we show how to augment some known fault-tolerant additive spanners, based on clustering techniques. This way we decrease the additive stretch without any asymptotic increase in their size. We also obtain improved fault-tolerant additive spanners for the case of one vertex failure, and for the case of $f$ edge failures.

Davide Bilò, Fabrizio Grandoni, Luciano Gualà, Stefano Leucci, Guido Proietti

Subexponential Time Algorithms for Finding Small Tree and Path Decompositions

The Minimum Size Tree Decomposition (MSTD) and Minimum Size Path Decomposition (MSPD) problems ask, for a given $n$-vertex graph $G$ and integer $k$, what is the minimum number of bags of a tree decomposition (respectively, path decomposition) of width at most $k$. The problems are known to be NP-complete for each fixed $k \geq 4$. In this paper we present algorithms that solve both problems for fixed $k$ in $2^{O(n/\log n)}$ time and show that they cannot be solved in $2^{o(n/\log n)}$ time, assuming the Exponential Time Hypothesis.

Hans L. Bodlaender, Jesper Nederlof

Enumeration of 2-Level Polytopes

We propose the first algorithm for enumerating all combinatorial types of 2-level polytopes of a given dimension $d$, and provide complete experimental results for $d \leq 6$. Our approach is based on the notion of a simplicial core, that allows us to reduce the problem to the enumeration of the closed sets of a discrete closure operator, along with some convex hull computations and isomorphism tests.

Adam Bohn, Yuri Faenza, Samuel Fiorini, Vissarion Fisikopoulos, Marco Macchia, Kanstantsin Pashkovich

Upper and Lower Bounds for Online Routing on Delaunay Triangulations

Consider a weighted graph $G$ whose vertices are points in the plane and whose edges are line segments between pairs of points, the weight of an edge being the Euclidean distance between its endpoints. A routing algorithm on $G$ sends a message from any vertex $s$ to any vertex $t$ in $G$. The algorithm has a competitive ratio of $c$ if the length of the path taken by the message is at most $c$ times the length of the shortest path from $s$ to $t$ in $G$. It has a routing ratio of $c$ if the length of the path is at most $c$ times the Euclidean distance from $s$ to $t$. The algorithm is online if it makes forwarding decisions based on (1) the $k$-neighborhood in $G$ of the message's current position (for constant $k > 0$) and (2) limited information stored in the message header.

We present an online routing algorithm on the Delaunay triangulation with routing ratio less than 5.90, improving the best known routing ratio of 15.48. Our algorithm makes forwarding decisions based on the 1-neighborhood of the current position of the message and the positions of the message source and destination only.

We present a lower bound of 5.7282 on the routing ratio of our algorithm, so the 5.90 upper bound is close to the best possible. We also show that the routing (resp., competitive) ratio of any deterministic $k$-local algorithm is at least 1.70 (resp., 1.23) for the Delaunay triangulation and 2.70 (resp., 1.23) for the $L_1$-Delaunay triangulation. In the case of the $L_1$-Delaunay triangulation, this implies that even though there always exists a path between $s$ and $t$ whose length is at most $2.61|st|$, it is not always possible to route a message along a path of length less than $2.70|st|$ using only local information.

Nicolas Bonichon, Prosenjit Bose, Jean-Lou De Carufel, Ljubomir Perković, André van Renssen

On Computing the Hyperbolicity of Real-World Graphs

The (Gromov) hyperbolicity is a topological property of a graph, which has been recently applied in several different contexts, such as the design of routing schemes, network security, computational biology, the analysis of graph algorithms, and the classification of complex networks. Computing the hyperbolicity of a graph can be very time consuming: indeed, the best available algorithm has running time $\mathcal{O}(n^{3.69})$, which is clearly prohibitive for big graphs. In this paper, we provide a new and more efficient algorithm: although its worst-case complexity is $\mathcal{O}(n^4)$, in practice it is much faster, allowing, for the first time, the computation of the hyperbolicity of graphs with up to 200,000 nodes. We experimentally show that our new algorithm drastically outperforms the best previously available algorithms, by analyzing a big dataset of real-world networks. Finally, we apply the new algorithm to compute the hyperbolicity of random graphs generated with the Erdős-Rényi model, the Chung-Lu model, and the Configuration Model.

Michele Borassi, David Coudert, Pierluigi Crescenzi, Andrea Marino

Towards Single Face Shortest Vertex-Disjoint Paths in Undirected Planar Graphs

Given $k$ pairs of terminals $\{(s_1, t_1), \ldots, (s_k, t_k)\}$ in a graph $G$, the min-sum $k$ vertex-disjoint paths problem is to find a collection $\{Q_1, Q_2, \ldots, Q_k\}$ of vertex-disjoint paths with minimum total length, where $Q_i$ is an $s_i$-to-$t_i$ path between $s_i$ and $t_i$. We consider the problem in planar graphs, where little is known about computational tractability, even in restricted cases. Kobayashi and Sommer propose a polynomial-time algorithm for $k \leq 3$ in undirected planar graphs assuming all terminals are adjacent to at most two faces. Colin de Verdière and Schrijver give a polynomial-time algorithm when all the sources are on the boundary of one face and all the sinks are on the boundary of another face, and ask about the existence of a polynomial-time algorithm provided all terminals are on a common face.

We make progress toward Colin de Verdière and Schrijver's open question by giving an $O(kn^5)$ time algorithm for undirected planar graphs when $\{(s_1, t_1), \ldots, (s_k, t_k)\}$ are in counter-clockwise order on a common face.

Glencora Borradaile, Amir Nayyeri, Farzad Zafarani

Consensus Patterns (Probably) Has no EPTAS

Given $n$ length-$L$ strings $S = \{s_1, \ldots, s_n\}$ over a constant size alphabet $\Sigma$ together with an integer $\ell$, where $\ell \leq L$, the objective of Consensus Patterns is to find a length-$\ell$ string $s$ and a substring $t_i$ of each $s_i$ in $S$ such that $\sum_{i} d(t_i, s)$ is minimized. Here $d(x, y)$ denotes the Hamming distance between the two strings $x$ and $y$.
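A brute-force reference implementation of this objective (exponential in $\ell$, so toy inputs only; the naming is ours) makes the definition concrete:

```python
from itertools import product

def consensus_patterns(strings, ell, alphabet):
    """Exhaustively find a length-ell string s minimizing the sum,
    over all input strings, of the Hamming distance from s to the
    closest length-ell substring. Exponential in ell: toy sizes only."""
    def hamming(x, y):
        return sum(a != b for a, b in zip(x, y))

    best = None
    for cand in product(alphabet, repeat=ell):
        s = ''.join(cand)
        cost = sum(
            min(hamming(s, t[i:i + ell]) for i in range(len(t) - ell + 1))
            for t in strings
        )
        if best is None or cost < best[0]:
            best = (cost, s)
    return best   # (total Hamming distance, consensus string)

print(consensus_patterns(["acgtac", "ccgtaa", "tcgtgc"], 4, "acgt"))
```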

Consensus Patterns admits a PTAS [Li et al., JCSS 2002] and is fixed-parameter tractable when parameterized by the objective function value [Marx, SICOMP 2008], and although it is a well-studied problem, improvement of the PTAS to an EPTAS seemed elusive. We prove that Consensus Patterns does not admit an EPTAS unless FPT=W[1], answering an open problem from [Fellows et al., STACS 2002, Combinatorica 2006]. To the best of our knowledge, Consensus Patterns is the first problem that admits a PTAS and is fixed-parameter tractable when parameterized by the value of the objective function, but does not admit an EPTAS under plausible complexity assumptions. The proof of our hardness of approximation result combines parameterized reductions and gap-preserving reductions in a novel manner.

Christina Boucher, Christine Lo, Daniel Lokshtanov

Fast Quasi-Threshold Editing

We introduce Quasi-Threshold Mover (QTM), an algorithm to solve the quasi-threshold (also called trivially perfect) graph editing problem with a minimum number of edge insertions and deletions. Given a graph, it computes a quasi-threshold graph which is close in terms of edit count, but not necessarily closest, as this edit problem is NP-hard. We present an extensive experimental study, in which we show that QTM performs well in practice and is the first heuristic that is able to scale to large real-world graphs. As a side result we further present a simple linear-time algorithm for the quasi-threshold recognition problem.

Ulrik Brandes, Michael Hamann, Ben Strasser, Dorothea Wagner

Sublinear Estimation of Weighted Matchings in Dynamic Data Streams

This paper presents an algorithm for estimating the weight of a maximum weighted matching by augmenting any estimation routine for the size of an unweighted matching. The algorithm is implementable in any streaming model, including dynamic graph streams. We also give the first constant estimation for the maximum matching size in a dynamic graph stream for planar graphs (or any graph with bounded arboricity) using $\tilde{O}(n^{4/5})$ space, which also extends to weighted matching. Using previous results by Kapralov, Khanna, and Sudan (2014) we obtain a $\mathrm{polylog}(n)$ approximation for general graphs using $\mathrm{polylog}(n)$ space in random order streams. In addition, we give a space lower bound of $\Omega(n^{1-\varepsilon})$ for any randomized algorithm estimating the size of a maximum matching up to a $1 + O(\varepsilon)$ factor for adversarial streams.

Marc Bury, Chris Schwiegelshohn

An Improved Approximation Algorithm for Knapsack Median Using Sparsification

Knapsack median is a generalization of the classic $k$-median problem in which we replace the cardinality constraint with a knapsack constraint. It is currently known to be 32-approximable. We improve on the best known algorithms in several ways, including adding randomization and applying sparsification as a preprocessing step. The latter improvement produces the first LP for this problem with bounded integrality gap. The new algorithm obtains an approximation factor of 17.46. We also give a 3.05-approximation with small budget violation.

Jaroslaw Byrka, Thomas Pensyl, Bartosz Rybicki, Joachim Spoerhase, Aravind Srinivasan, Khoa Trinh

Output-Sensitive Algorithms for Enumerating the Extreme Nondominated Points of Multiobjective Combinatorial Optimization Problems

This paper studies output-sensitive algorithms for enumeration problems in multiobjective combinatorial optimization (MOCO). We develop two methods for enumerating the extreme points of the Pareto-frontier of MOCO problems. The first method is based on a dual variant of Benson's algorithm, which has been originally proposed for multiobjective linear optimization problems. We prove that the algorithm runs in output polynomial time for every fixed number of objectives if the weighted-sum scalarization can be solved in polynomial time. Hence, we propose the first algorithm which solves this general problem in output polynomial time. We also propose a new lexicographic version of the dual Benson algorithm that runs in incremental polynomial time in the case that the lexicographic optimization variant can be solved in polynomial time. As a consequence, the extreme points of the Pareto-frontier of the multiobjective spanning tree problem as well as the multiobjective global min-cut problem can be computed in polynomial time for a fixed number of objectives. Our computational experiments show the practicability of our improved algorithm: We present the first computational study for computing the extreme points of the multiobjective version of the assignment problem with five or more objectives. We also empirically investigate the running time behavior of our new lexicographic version compared to the original algorithm.

Fritz Bökler, Petra Mutzel

Self-Adjusting Binary Search Trees: What Makes Them Tick?

Splay trees (Sleator and Tarjan [11]) satisfy the so-called access lemma. Many of the nice properties of splay trees follow from it. What makes self-adjusting binary search trees (BSTs) satisfy the access lemma?

After each access, self-adjusting BSTs replace the search path by a tree on the same set of nodes (the after-tree). We identify two simple combinatorial properties of the search path and the after-tree that imply the access lemma. Our main result (i) implies the access lemma for all minimally self-adjusting BST algorithms for which it was known to hold: splay trees and their generalization to the class of local algorithms (Subramanian [12], Georgakopoulos and McClurkin [7]), as well as Greedy BST, introduced by Demaine et al. [5] and shown to satisfy the access lemma by Fox [6]; (ii) implies that BST algorithms based on “strict” depth-halving satisfy the access lemma, addressing an open question that was raised several times since 1985; and (iii) yields an extremely short proof of the $O(\log n \log\log n)$ amortized access cost for the path-balance heuristic (proposed by Sleator), matching the best known bound (Balasubramanian and Raman [2]) up to a lower-order factor.

One of our combinatorial properties is locality. We show that any BST algorithm that satisfies the access lemma via the sum-of-log (SOL) potential is necessarily local. The other property states that the sum of the number of leaves of the after-tree plus the number of side alternations in the search path must be at least a constant fraction of the length of the search path. We show that a weak form of this property is necessary for sequential access to be linear.

Parinya Chalermsook, Mayank Goswami, László Kozma, Kurt Mehlhorn, Thatchaphol Saranurak

On Element-Connectivity Preserving Graph Simplification

The notion of element-connectivity has found several important applications in network design and routing problems. We focus on a reduction step that preserves the element-connectivity [18, 4, 3], which when applied repeatedly allows one to reduce the original graph to a simpler one. This pre-processing step is a crucial ingredient in several applications. In this paper we revisit this reduction step and provide a new proof via the use of setpairs. Our main contribution is algorithmic results for several basic problems on element-connectivity, including the problem of achieving the aforementioned graph simplification. We utilize the underlying submodularity properties of element-connectivity to derive faster algorithms.

Chandra Chekuri, Thapanapong Rukkanchanunt, Chao Xu

On Randomized Algorithms for Matching in the Online Preemptive Model

We investigate the power of randomized algorithms for the maximum cardinality matching (MCM) and the maximum weight matching (MWM) problems in the online preemptive model. In this model, the edges of a graph are revealed one by one and the algorithm is required to always maintain a valid matching. On seeing an edge, the algorithm has to either accept or reject the edge. If accepted, then the adjacent edges are discarded. The complexity of the problem is settled for deterministic algorithms [7, 9].

Almost nothing is known for randomized algorithms. A lower bound of 1.693 is known for MCM with a trivial upper bound of two. An upper bound of 5.356 is known for MWM. We initiate a systematic study of the same in this paper with an aim to isolate and understand the difficulty. We begin with a primal-dual analysis of the deterministic algorithm due to [7]. All deterministic lower bounds are on instances which are trees at every step. For this class of (unweighted) graphs we present a randomized algorithm which is $\frac{28}{15}$-competitive. The analysis is a considerable extension of the (simple) primal-dual analysis for the deterministic case. The key new technique is that the distribution of primal charge to dual variables depends on the “neighborhood” and needs to be done after having seen the entire input. The assignment is asymmetric, in that edges may assign different charges to the two end-points. Also, the proof depends on a non-trivial structural statement on the performance of the algorithm on the input tree.

The other main result of this paper is an extension of the deterministic lower bound of Varadaraja [9] to a natural class of randomized algorithms which decide whether to accept a new edge or not using independent random choices. This indicates that randomized algorithms will have to use dependent coin tosses to succeed. Indeed, the few known randomized algorithms, even in very restricted models, follow this.

We also present the best possible $\frac{4}{3}$-competitive randomized algorithm for MCM on paths.

Ashish Chiplunkar, Sumedh Tirodkar, Sundar Vishwanathan

A Characterization of Consistent Digital Line Segments in ℤ²

Our concern is the digitalization of line segments in $\mathbb{Z}^2$ as considered by Chun et al. [5] and Christ et al. [4]. The key property that differentiates the research of Chun et al. and Christ et al. from other research in digital line segment construction is that the intersection of any two segments must be connected. Such a system of segments is called a consistent digital line segments system (CDS). Chun et al. give a construction for all segments in $\mathbb{Z}^d$ that share a common endpoint (called consistent digital rays (CDR)) that has asymptotically optimal Hausdorff distance, and Christ et al. give a complete CDS in $\mathbb{Z}^2$ with optimal Hausdorff distance. Christ et al. also give a characterization of CDRs in $\mathbb{Z}^2$, and they leave open the question of how to characterize CDSes in $\mathbb{Z}^2$. In this paper, we answer one of the most important open questions regarding CDSes in $\mathbb{Z}^2$ by giving the characterization asked for by Christ et al. We obtain the characterization by giving a set of necessary and sufficient conditions that a CDS must satisfy.

Iffat Chowdhury, Matt Gibson

On the Efficiency of All-Pay Mechanisms

We study the inefficiency of mixed equilibria, expressed as the price of anarchy, of all-pay auctions in three different environments: combinatorial, multi-unit and single-item auctions. First, we consider item-bidding combinatorial auctions where $m$ all-pay auctions run in parallel, one for each good. For fractionally subadditive valuations, we strengthen the upper bound from 2 [22] to 1.82 by proving some structural properties that characterize the mixed Nash equilibria of the game. Next, we design an all-pay mechanism with a randomized allocation rule for the multi-unit auction. We show that, for bidders with submodular valuations, the mechanism admits a unique, 75% efficient, pure Nash equilibrium. The efficiency of this mechanism outperforms all the known bounds on the price of anarchy of mechanisms used for multi-unit auctions. Finally, we analyze single-item all-pay auctions motivated by their connection to contests and show tight bounds on the price of anarchy of social welfare, revenue and maximum bid.

George Christodoulou, Alkmini Sgouritsa, Bo Tang

Dictionary Matching in a Stream

We consider the problem of dictionary matching in a stream. Given a set of strings, known as a dictionary, and a stream of characters arriving one at a time, the task is to report each time some string in our dictionary occurs in the stream. We present a randomised algorithm which takes $O(\log\log(k + m))$ time per arriving character and uses $O(k \log m)$ words of space, where $k$ is the number of strings in the dictionary and $m$ is the length of the longest string in the dictionary.

Raphaël Clifford, Allyx Fontaine, Ely Porat, Benjamin Sach, Tatiana Starikovskaya

Multicuts in Planar and Bounded-Genus Graphs with Bounded Number of Terminals

Given an undirected, edge-weighted graph $G$ together with pairs of vertices, called pairs of terminals, the minimum multicut problem asks for a minimum-weight set of edges such that, after deleting these edges, the two terminals of each pair belong to different connected components of the graph. Relying on topological techniques, we provide a polynomial-time algorithm for this problem in the case where $G$ is embedded on a fixed surface of genus $g$ (e.g., when $G$ is planar) and has a fixed number $t$ of terminals. The running time is a polynomial of degree $O\big(\sqrt{g^2+gt}\big)$ in the input size.

In the planar case, our result corrects an error in an extended abstract by Bentz [Int. Workshop on Parameterized and Exact Computation, 109–119, 2012]. The minimum multicut problem is also a generalization of the multiway cut problem, also known as the multiterminal cut problem; even for this special case, no dedicated algorithm was known for graphs embedded on surfaces.

Éric Colin de Verdière

A Fixed Parameter Tractable Approximation Scheme for the Optimal Cut Graph of a Surface

Given a graph $G$ cellularly embedded on a surface $\Sigma$ of genus $g$, a cut graph is a subgraph of $G$ such that cutting $\Sigma$ along it yields a topological disk. We provide a fixed-parameter tractable approximation scheme for the problem of computing the shortest cut graph: for any $\varepsilon > 0$, we show how to compute a $(1 + \varepsilon)$-approximation of the shortest cut graph in time $f(\varepsilon, g) n^3$.

Our techniques first rely on the computation of a spanner for the problem using the technique of brick decompositions, to reduce the problem to the case of bounded tree-width. Then, to solve the bounded tree-width case, we introduce a variant of the surface-cut decomposition of Rué, Sau and Thilikos, which may be of independent interest.

Vincent Cohen-Addad, Arnaud de Mesmay

Explicit Expanding Expanders

Deterministic constructions of expander graphs have been an important topic of research in computer science and mathematics, with many well-studied constructions of infinite families of expanders. In some applications, though, an infinite family is not enough: we need expanders which are “close” to each other. We study the following question: construct an infinite sequence of expanders $G_0, G_1, \ldots$ such that for every two consecutive graphs $G_i$ and $G_{i+1}$, $G_{i+1}$ can be obtained from $G_i$ by adding a single vertex and inserting/removing a small number of edges, which we call the expansion cost of transitioning from $G_i$ to $G_{i+1}$. This question is very natural, e.g., in the context of datacenter networks, where the vertices represent racks of servers, and the expansion cost captures the amount of rewiring needed when adding another rack to the network. We present an explicit construction of $d$-regular expanders with expansion cost at most $\frac{5d}{2}$, for any $d \geq 6$. Our construction leverages the notion of a “2-lift” of a graph. This operation was first analyzed by Bilu and Linial [1], who repeatedly applied 2-lifts to construct an infinite family of expanders which double in size from one expander to the next. Our construction can be viewed as a way to “interpolate” between Bilu-Linial expanders with low expansion cost while preserving good edge expansion throughout.
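The 2-lift operation itself is easy to state: each vertex $v$ splits into $(v, 0)$ and $(v, 1)$, and each edge is replaced either by a "parallel" pair or by a "crossing" pair of edges, as chosen by a signing. A small sketch (our encoding) of the operation Bilu and Linial analyze:

```python
def two_lift(vertices, edges, sign):
    """2-lift of a graph. Each vertex v becomes (v, 0) and (v, 1).
    An edge (u, v) with sign +1 becomes the 'parallel' pair
    {(u,0)-(v,0), (u,1)-(v,1)}; with sign -1 it becomes the
    'crossing' pair {(u,0)-(v,1), (u,1)-(v,0)}.
    sign: dict mapping each edge to +1 or -1."""
    lifted_vertices = [(v, b) for v in vertices for b in (0, 1)]
    lifted_edges = []
    for (u, v) in edges:
        if sign[(u, v)] == +1:
            lifted_edges += [((u, 0), (v, 0)), ((u, 1), (v, 1))]
        else:
            lifted_edges += [((u, 0), (v, 1)), ((u, 1), (v, 0))]
    return lifted_vertices, lifted_edges

# Lifting a triangle with one crossing edge yields a 6-cycle.
V = [0, 1, 2]
E = [(0, 1), (1, 2), (0, 2)]
print(two_lift(V, E, {(0, 1): +1, (1, 2): +1, (0, 2): -1}))
```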

Michael Dinitz, Michael Schapira, Asaf Valadarsky

On the Threshold of Intractability

We study the computational complexity of the graph modification problems Threshold Editing and Chain Editing: adding and deleting as few edges as possible to transform the input into a threshold (or chain) graph. In this article, we show that both problems are NP-hard, resolving a conjecture by Natanzon, Shamir, and Sharan (2001). On the positive side, we show that these problems admit quadratic vertex kernels. Furthermore, we give a subexponential-time parameterized algorithm for Threshold Editing, making it one of relatively few natural problems in this complexity class on general graphs. These results are of broader interest to the field of social network analysis, where recent work of Brandes (2014) posits that the minimum edit distance to a threshold graph gives a good measure of consistency for node centralities. Finally, we show that all our positive results extend to Chain Editing, as well as the completion and deletion variants of both problems.

Pål Grønås Drange, Markus Sortland Dregi, Daniel Lokshtanov, Blair D. Sullivan

A Polynomial Kernel for Trivially Perfect Editing

We give a kernel with $O(k^7)$ vertices for Trivially Perfect Editing, the problem of adding or removing at most $k$ edges in order to make a given graph trivially perfect. This answers in the affirmative an open question posed by Nastos and Gao (Social Networks, 35(3):439–450, 2013) and by Liu, Wang, and Guo (Tsinghua Science and Technology, 19(4):346–357, 2014). Using our technique one can also obtain kernels of the same size for the related problems, Trivially Perfect Completion and Trivially Perfect Deletion.

We complement our study of Trivially Perfect Editing by proving that, contrary to Trivially Perfect Completion, it cannot be solved in time $2^{o(k)} \cdot n^{O(1)}$ unless the Exponential Time Hypothesis fails. In this manner we complete the picture of the parameterized and kernelization complexity of the classic edge modification problems for the class of trivially perfect graphs.

Pål Grønås Drange, Michał Pilipczuk

Polymatroid Prophet Inequalities

Prophet inequalities bound the reward of an online algorithm—or gambler—relative to the optimum offline algorithm—the prophet—in settings that involve making selections from a sequence of elements whose order is chosen adversarially but whose weights are random. The goal is to maximize total weight.

We consider the problem of choosing quantities of each element subject to polymatroid constraints when the weights are arbitrary concave functions. We present an online algorithm for this problem that does at least half as well as the optimum offline algorithm. This is best possible, as even in the case where a single number has to be picked no online algorithm can do better.

An important application of our result is in algorithmic mechanism design, where it leads to novel, truthful mechanisms that, under a monotone hazard rate (MHR) assumption on the conditional distributions of marginal weights, achieve a constant-factor approximation to the optimal revenue for this multi-parameter setting. Problems to which this result applies arise, for example, in the context of Video-on-Demand, Sponsored Search, or Bandwidth Markets.

Paul Dütting, Robert Kleinberg

Node-Balancing by Edge-Increments

Suppose you are given a graph $G = (V, E)$ with a weight assignment $w: V \to \mathbb{Z}$ and that your objective is to modify $w$ using legal steps such that all vertices will have the same weight, where in each legal step you are allowed to choose an edge and increment the weights of its endpoints by 1.
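A minimal sketch (naming ours) of the move and the target predicate; deciding whether some sequence of such moves balances a given assignment is exactly the question the paper characterizes:

```python
def apply_increments(weights, edges, moves):
    """Apply legal steps to a weight assignment.

    weights: dict mapping vertex -> integer weight (modified in place).
    moves: list of indices into edges; each move increments both
    endpoints of the chosen edge by 1."""
    for m in moves:
        u, v = edges[m]
        weights[u] += 1
        weights[v] += 1
    return weights

def balanced(weights):
    """All vertices carry the same weight."""
    return len(set(weights.values())) == 1

# A path a-b-c: weights (1, 0, 1) are balanced by one increment of each edge.
edges = [('a', 'b'), ('b', 'c')]
w = apply_increments({'a': 1, 'b': 0, 'c': 1}, edges, [0, 1])
print(w, balanced(w))   # {'a': 2, 'b': 2, 'c': 2} True
```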

In this paper we study several variants of this problem for graphs and hypergraphs. On the combinatorial side we show connections with fundamental results from matching theory such as Hall’s Theorem and Tutte’s Theorem. On the algorithmic side we study the computational complexity of associated decision problems.

Our main results are a characterization of the graphs for which any initial assignment can be balanced by edge-increments and a strongly polynomial-time algorithm that computes a balancing sequence of increments if one exists.

Friedrich Eisenbrand, Shay Moran, Rom Pinchasi, Martin Skutella

The Price of Matching with Metric Preferences

We consider a version of the Gale-Shapley stable matching setting, where each pair of nodes is associated with a (symmetric) matching cost and the preferences are determined with respect to these costs. This stable matching version is analyzed through the Price of Anarchy (PoA) and Price of Stability (PoS) lens under the objective of minimizing the total cost of matched nodes (for both the marriage and roommates variants). A simple example demonstrates that in the general case, the PoA and PoS are unbounded, hence we restrict our attention to metric costs. We use the notion of $\alpha$-stability, where a pair of unmatched nodes defect only if both improve their costs by a factor greater than $\alpha \geq 1$. Our main result is an asymptotically tight trade-off, showing that with respect to $\alpha$-stable matchings, the Price of Stability is $\Theta \big( n^{\log ( 1 + \frac{1}{2 \alpha} )} \big)$. The proof is constructive: we present a simple algorithm that outputs an $\alpha$-stable matching satisfying this bound.

Yuval Emek, Tobias Langner, Roger Wattenhofer

Selfish Vector Packing

We study the multidimensional vector packing problem with selfish items. An item is a $d$-dimensional non-zero vector whose rational components are in $[0,1]$, and a set of items can be packed into a bin if for every $1 \leq i \leq d$, the sum of the $i$th components of all items of this set does not exceed 1. Items share the costs of bins proportionally to their $\ell_1$-norms, and each item corresponds to a selfish player in the sense that it prefers to be packed into a bin minimizing its resulting cost. This defines a class of games called vector packing games. We show that any game in this class has a packing that is a strong equilibrium, and that the strong price of anarchy (and the strong price of stability) is logarithmic in $d$, and we provide an algorithm that constructs such a packing. We also show improved and nearly tight lower and upper bounds of $d + 0.657067$ and $d + 0.657143$, respectively, on the price of anarchy, exhibiting a difference between the multidimensional problem and the one-dimensional problem, for which the price of anarchy is at most 1.6428.
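The packing constraint and the cost-sharing rule are simple enough to state in a few lines of code (a sketch with our names, not the paper's equilibrium construction):

```python
def fits_in_bin(items):
    """A set of d-dimensional vectors fits in a bin iff every
    coordinate-wise sum is at most 1."""
    d = len(items[0])
    return all(sum(v[i] for v in items) <= 1 for i in range(d))

def cost_shares(items, bin_cost=1.0):
    """Each item pays a share of the bin's cost proportional
    to its l1-norm (components lie in [0, 1], so the norm is the sum)."""
    norms = [sum(v) for v in items]
    total = sum(norms)
    return [bin_cost * n / total for n in norms]

bin_contents = [(0.5, 0.2), (0.3, 0.7)]
print(fits_in_bin(bin_contents))   # True: coordinate sums are (0.8, 0.9)
print(cost_shares(bin_contents))   # shares 0.7/1.7 and 1.0/1.7
```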

Leah Epstein, Elena Kleiman

Approximate Deadline-Scheduling with Precedence Constraints

We consider the classic problem of scheduling a set of $n$ jobs non-preemptively on a single machine. Each job $j$ has a non-negative processing time, weight, and deadline, and a feasible schedule needs to be consistent with chain-like precedence constraints. The goal is to compute a feasible schedule that minimizes the sum of penalties of late jobs. Lenstra and Rinnooy Kan [Annals of Disc. Math., 1977] in their seminal work introduced this problem and showed that it is strongly NP-hard, even when all processing times and weights are 1. We study the approximability of the problem, and our main result is an $O(\log k)$-approximation algorithm for instances with $k$ distinct job deadlines.

Hossein Esfandiari, MohammadTaghi Hajiaghayi, Jochen Könemann, Hamid Mahini, David Malec, Laura Sanità

Prophet Secretary

Optimal stopping theory is a powerful tool for analyzing scenarios such as online auctions in which we generally require optimizing an objective function over the space of stopping rules for an allocation process under uncertainty. Perhaps the most classic problems of stopping theory are the prophet inequality problem and the secretary problem. The classical prophet inequality states that, by choosing the same threshold $OPT/2$ for every step, one can achieve the tight competitive ratio of 0.5. On the other hand, for the basic secretary problem, the optimal strategy achieves the tight competitive ratio of $1/e \approx 0.36$.
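As a quick sanity check of the classical single-threshold guarantee quoted above, the following simulation (entirely ours) accepts the first value exceeding half the expected maximum and compares the gambler's take with the prophet's:

```python
import random

def simulate(dists, trials=100_000):
    """Single-threshold prophet inequality, empirically.

    dists: list of zero-argument samplers, one per step, in the
    (adversarial) arrival order. The gambler stops at the first
    value exceeding OPT/2, where OPT = E[max of all values]."""
    opt = sum(max(d() for d in dists) for _ in range(trials)) / trials
    threshold = opt / 2
    gambler = 0.0
    for _ in range(trials):
        values = [d() for d in dists]
        chosen = next((v for v in values if v > threshold), 0.0)
        gambler += chosen
    return gambler / trials, opt

# Two steps: a safe uniform value, then a rare jackpot.
dists = [lambda: random.uniform(0, 1),
         lambda: 10.0 if random.random() < 0.1 else 0.0]
gain, opt = simulate(dists)
print(f"gambler {gain:.3f} vs prophet {opt:.3f} (ratio {gain/opt:.2f})")
```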

In this paper, we introduce prophet secretary, a natural combination of the prophet inequality and the secretary problems. We show that by using a single uniform threshold one cannot break the 0.5 barrier of the prophet inequality for the prophet secretary problem. However, we show that (i) using $n$ distinct non-adaptive thresholds one can obtain a competitive ratio that goes to $1 - 1/e \approx 0.63$ as $n$ grows, and (ii) no online algorithm can achieve a competitive ratio better than 0.73.

Our results improve the (asymptotic) approximation guarantee of single-item sequential posted pricing mechanisms from 0.5 to $1 - 1/e$ when the order of agents (customers) is chosen randomly.

We also consider the minimization variants of stopping theory problems and in particular the prophet secretary problem. Interestingly, we show that, even for the simple case in which the input elements are drawn from identical and independent distributions (i.i.d.), there is no constant competitive online algorithm for the minimization variant of the prophet secretary problems. We extend this hardness result to the minimization variants of both the prophet inequality and the secretary problem as well.

Hossein Esfandiari, MohammadTaghi Hajiaghayi, Vahid Liaghat, Morteza Monemizadeh

Smoothed Analysis of the Squared Euclidean Maximum-Cut Problem

It is well known that local search heuristics for the Maximum-Cut problem can take an exponential number of steps to find a local optimum, even though they usually stabilize quickly in experiments. To explain this discrepancy we have recently analyzed the simple local search algorithm FLIP in the framework of smoothed analysis, in which inputs are subject to a small amount of random noise. We have shown that in this framework the number of iterations is quasi-polynomial, i.e., it is polynomially bounded in $n^{\log n}$ and $\phi$, where $n$ denotes the number of nodes and $\phi$ is a parameter of the perturbation.

In this paper we consider the special case in which the nodes are points in a $d$-dimensional space and the edge weights are given by the squared Euclidean distances between these points. We prove that in this case, for any constant dimension $d$, the smoothed number of iterations of FLIP is polynomially bounded in $n$ and $1/\sigma$, where $\sigma$ denotes the standard deviation of the Gaussian noise. Squared Euclidean distances are often used in clustering problems, and our result can also be seen as an upper bound on the smoothed number of iterations of local search for min-sum 2-clustering.
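FLIP itself is a one-line local search: move any single node to the other side of the cut whenever that increases the cut weight. A sketch on squared-Euclidean instances (our implementation, counting the improving flips whose number the paper bounds):

```python
import random

def flip_local_search(points):
    """FLIP local search for Max-Cut where the weight of edge {i, j}
    is the squared Euclidean distance between points i and j.
    Returns (partition, number of improving flips performed)."""
    def w(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    n = len(points)
    side = [random.randint(0, 1) for _ in range(n)]
    flips = 0
    improved = True
    while improved:
        improved = False
        for i in range(n):
            # Gain of flipping i: same-side edges become cut (+w),
            # cut edges to the other side become uncut (-w).
            gain = sum(w(points[i], points[j]) * (1 if side[i] == side[j] else -1)
                       for j in range(n) if j != i)
            if gain > 0:
                side[i] = 1 - side[i]
                flips += 1
                improved = True
    return side, flips

pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(20)]
print(flip_local_search(pts))
```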

Michael Etscheid, Heiko Röglin

Maximizing Symmetric Submodular Functions

Symmetric submodular functions are an important family of submodular functions capturing many interesting cases including cut functions of graphs and hypergraphs. In this work, we identify submodular maximization problems for which one can get a better approximation for symmetric objectives compared to what is known for general submodular functions

For the problem of maximizing a non-negative symmetric submodular function

$f: 2^\mathcal{N} \to \mathbb{R}^{+}$

subject to a down-monotone solvable polytope

$\mathcal{P} \subseteq[0, 1]^\mathcal{N}$

we describe an algorithm producing a fractional solution of value at least 0.432 ·

f

(

OPT

), where

OPT

is the optimal

integral

solution. Our second result is a 0.432-approximation algorithm for the problem max {

f

(

S

) : |

S

| = 

k

} with a non-negative symmetric submodular function

$f: 2^\mathcal{N} \to \mathbb{R}^{+}$

. Our method also applies non-symmetric functions, in which case it produces

$\frac{1}{e} - o(1)$

approximation. Finally, we describe a

deterministic linear-time

$\frac{1}{2}$

-approximation algorithm for unconstrained maximization of a non-negative symmetric submodular function.

Moran Feldman

Approximating LZ77 via Small-Space Multiple-Pattern Matching

We generalize Karp-Rabin string matching to handle multiple patterns in $\mathcal{O}(n \log n + m)$ time and $\mathcal{O}(s)$ space, where $n$ is the length of the text and $m$ is the total length of the $s$ patterns, returning correct answers with high probability. As a prime application of our algorithm, we show how to approximate the LZ77 parse of a string of length $n$. If the optimal parse consists of $z$ phrases, using only $\mathcal{O}(z)$ working space we can return a parse consisting of at most $2z$ phrases in $\mathcal{O}(n\log n)$ time, and a parse of at most $(1 + \varepsilon)z$ phrases in $\mathcal{O}(n\log^{2}n)$ time for any constant $\varepsilon > 0$. As previous quasilinear-time algorithms for LZ77 use $\Omega(n/\mathrm{polylog}\,n)$ space, but $z$ can be exponentially small in $n$, these improvements in space consumption are substantial.
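To recall the primitive being generalized: Karp-Rabin compares rolling fingerprints instead of substrings. A sketch for several patterns of a common length (our simplification; the paper handles arbitrary pattern lengths within the stated space bound):

```python
def multi_karp_rabin(text, patterns, base=256, mod=(1 << 61) - 1):
    """Report (position, pattern) for every occurrence, assuming all
    patterns share the same length. Rolling-hash matches are verified
    to rule out the (rare) false positives."""
    ell = len(patterns[0])
    assert all(len(p) == ell for p in patterns)

    def fingerprint(s):
        h = 0
        for ch in s:
            h = (h * base + ord(ch)) % mod
        return h

    table = {}
    for p in patterns:
        table.setdefault(fingerprint(p), []).append(p)

    hits = []
    h = fingerprint(text[:ell])
    top = pow(base, ell - 1, mod)       # weight of the outgoing character
    for i in range(len(text) - ell + 1):
        for p in table.get(h, []):
            if text[i:i + ell] == p:    # verify to avoid hash collisions
                hits.append((i, p))
        if i + ell < len(text):         # roll the window one character
            h = ((h - ord(text[i]) * top) * base + ord(text[i + ell])) % mod
    return hits

print(multi_karp_rabin("abracadabra", ["abra", "cada"]))
# -> [(0, 'abra'), (4, 'cada'), (7, 'abra')]
```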

Johannes Fischer, Travis Gagie, Paweł Gawrychowski, Tomasz Kociumaka

Fast Algorithms for Parameterized Problems with Relaxed Disjointness Constraints

In this paper we consider generalized versions of four well-studied problems in parameterized complexity and exact exponential-time algorithms: $k$-Path, Set Packing, Multilinear Monomial Testing and Hamiltonian Path. The generalization is in every case obtained by introducing a relaxation parameter, which relaxes the constraints on feasible solutions. For example, the $k$-Path problem is generalized to $r$-Simple $k$-Path, where the task is to find a walk of length $k$ that never visits any vertex more than $r$ times. This problem was first considered by Abasi et al. [1]. Hamiltonian Path is generalized to Degree Bounded Spanning Tree, where the input is a graph $G$ and integer $d$, and the task is to find a spanning tree $T$ of $G$ such that every vertex has degree at most $d$ in $T$.
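A brute-force enumerator for $r$-simple $k$-paths (ours; exponential, purely to pin down the definition the fast algorithms relax):

```python
def r_simple_k_paths(adj, k, r):
    """All walks visiting k vertices in which no vertex appears more
    than r times. adj: dict mapping vertex -> list of neighbors.
    Exhaustive search; the point is the definition, not speed."""
    walks = []

    def extend(walk, counts):
        if len(walk) == k:
            walks.append(list(walk))
            return
        for nxt in adj[walk[-1]]:
            if counts.get(nxt, 0) < r:   # relaxation: up to r visits
                counts[nxt] = counts.get(nxt, 0) + 1
                walk.append(nxt)
                extend(walk, counts)
                walk.pop()
                counts[nxt] -= 1

    for start in adj:
        extend([start], {start: 1})
    return walks

# A triangle: with r = 2, a walk on 5 vertices may reuse vertices.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(len(r_simple_k_paths(adj, k=5, r=2)))
```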

The generalized problems can easily be shown to be NP-complete for every fixed value of the relaxation parameter. On the other hand, we give algorithms for the generalized problems whose worst-case running time (a) matches the running time of the best algorithms for the original problems up to constants in the exponent, and (b) improves significantly as the relaxation parameter increases. For example, we give a deterministic algorithm with running time $O^{*}(2^{O(k\frac{\log r}{r})})$ for $r$-Simple $k$-Path, matching up to a constant in the exponent the randomized algorithm of Abasi et al. [1], and a randomized algorithm with running time $O^{*}(2^{O(n\frac{\log d}{d})})$ for Degree Bounded Spanning Tree, improving upon an $O(2^{n + o(n)})$ algorithm of Fomin et al. [8].

On the way to obtaining our results we generalize the notion of representative sets to multisets, and give an efficient algorithm to compute such representative sets. Both the generalization of representative sets to multisets and the algorithm to compute them may be of independent interest.

Ariel Gabizon, Daniel Lokshtanov, Michał Pilipczuk
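
To make the relaxed disjointness constraint concrete, the following exhaustive search looks for a walk on k vertices visiting no vertex more than r times. It is exponential and entirely unrelated to the paper's subexponential-in-k technique, and interpreting "length k" as k vertices is an assumption of this sketch:

```python
def r_simple_k_path_exists(adj, k, r):
    """Exhaustive search for an r-simple k-path: a walk on k vertices
    (consecutive ones adjacent) in which no vertex occurs more than r
    times. Exponential time; it only illustrates the feasibility
    constraint, not the paper's algorithm."""
    def extend(walk, counts):
        if len(walk) == k:
            return list(walk)
        for w in adj[walk[-1]]:
            if counts.get(w, 0) < r:
                counts[w] = counts.get(w, 0) + 1
                found = extend(walk + [w], counts)
                counts[w] -= 1
                if found:
                    return found
        return None
    for v in adj:
        found = extend([v], {v: 1})
        if found:
            return found
    return None

# A triangle has only 3 vertices, so a walk on 5 vertices needs r >= 2.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(r_simple_k_path_exists(triangle, k=5, r=1))  # None
print(r_simple_k_path_exists(triangle, k=5, r=2))  # e.g. [0, 1, 0, 1, 2]
```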

Medial Axis Based Routing Has Constant Load Balancing Factor

Load balanced routing is a long-standing yet challenging problem. Despite many years of work there is still a gap between theoretical research and practical algorithms. The main contribution of this paper is to bridge this gap and provide a rigorous analysis of a practically interesting algorithm for routing in a large scale sensor network of complex shape: routing by using the medial axis of the network. In this algorithm, a skeleton of the network is extracted such that a virtual coordinate system can be developed for greedy routing, achieving good load balance in simulations. Via a highly technical analysis, we show for the first time a constant approximation factor for this algorithm. The analysis explains the performance observed in previous simulations and yields the first known constant approximation for load balanced routing in a sensor network with non-trivial geometry.

Jie Gao, Mayank Goswami

An Experimental Evaluation of the Best-of-Many Christofides’ Algorithm for the Traveling Salesman Problem

Recent papers on approximation algorithms for the traveling salesman problem (TSP) have given a new variant on the well-known Christofides' algorithm for the TSP, called the Best-of-Many Christofides' algorithm. The algorithm involves sampling a spanning tree from the solution to the standard LP relaxation of the TSP, and running Christofides' algorithm on the sampled tree. In this paper we perform an experimental evaluation of the Best-of-Many Christofides' algorithm to see if there are empirical reasons to believe its performance is better than that of Christofides' algorithm. In our experiments, all of the implemented variants of the Best-of-Many Christofides' algorithm perform significantly better than Christofides' algorithm; an algorithm that samples from a maximum entropy distribution over spanning trees seems to be particularly good.

Kyle Genova, David P. Williamson
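
The per-tree step is the classical Christofides completion: add a minimum-weight perfect matching on the odd-degree vertices of the sampled tree, then shortcut an Euler tour of the combined multigraph. Below is a sketch using networkx, assuming Euclidean (hence metric) weights; a minimum spanning tree stands in for a tree sampled from the LP solution, which is exactly the step this sketch omits:

```python
import networkx as nx

def christofides_on_tree(G, T):
    """Classical Christofides completion applied to a given spanning tree T
    of the complete metric graph G: match the odd-degree vertices of T by a
    minimum-weight perfect matching, take an Euler tour, and shortcut."""
    odd = [v for v in T if T.degree(v) % 2 == 1]
    # Min-weight perfect matching = max-weight matching on negated weights.
    H = nx.Graph()
    H.add_weighted_edges_from(
        (u, v, -G[u][v]["weight"]) for i, u in enumerate(odd) for v in odd[i+1:])
    M = nx.max_weight_matching(H, maxcardinality=True)
    # Tree plus matching is an Eulerian multigraph; shortcut its Euler tour.
    E = nx.MultiGraph(T)
    E.add_edges_from(M)
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(E):
        if u not in seen:            # skip vertices already visited
            seen.add(u)
            tour.append(u)
    tour.append(tour[0])
    return tour, sum(G[a][b]["weight"] for a, b in zip(tour, tour[1:]))

# Tiny Euclidean instance; the "sampled" tree here is just an MST stand-in
# for a tree drawn from the LP solution (e.g., by max-entropy sampling).
pts = [(0, 0), (0, 3), (4, 0), (4, 3), (2, 1)]
G = nx.Graph()
for i, p in enumerate(pts):
    for j, q in enumerate(pts[:i]):
        G.add_edge(i, j, weight=((p[0]-q[0])**2 + (p[1]-q[1])**2) ** 0.5)
T = nx.minimum_spanning_tree(G)
print(christofides_on_tree(G, T))
```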

Approximating the Smallest Spanning Subgraph for 2-Edge-Connectivity in Directed Graphs

Let G be a strongly connected directed graph. We consider the following three problems, where we wish to compute the smallest strongly connected spanning subgraph of G that maintains respectively: the 2-edge-connected blocks of G (2EC-B); the 2-edge-connected components of G (2EC-C); both the 2-edge-connected blocks and the 2-edge-connected components of G (2EC-B-C). All three problems are NP-hard, and thus we are interested in efficient approximation algorithms. For 2EC-C we can obtain a 3/2-approximation by combining previously known results. For 2EC-B and 2EC-B-C, we present new 4-approximation algorithms that run in linear time. We also propose various heuristics to improve the size of the computed subgraphs in practice, and conduct a thorough experimental study to assess their merits in practical scenarios.

Loukas Georgiadis, Giuseppe F. Italiano, Charis Papadopoulos, Nikos Parotsidis

A Probabilistic Approach to Reducing Algebraic Complexity of Delaunay Triangulations

We propose algorithms to compute the Delaunay triangulation of a point set L using only (squared) distance comparisons (i.e., predicates of degree 2). Our approach is based on the witness complex, a weak form of the Delaunay complex introduced by Carlsson and de Silva. We give conditions that ensure that the witness complex and the Delaunay triangulation coincide, and we introduce a new perturbation scheme to compute a perturbed set L′ close to L such that the Delaunay triangulation and the witness complex coincide. Our perturbation algorithm is a geometric application of the Moser-Tardos constructive proof of the Lovász local lemma.

Jean-Daniel Boissonnat, Ramsay Dyer, Arijit Ghosh

A Characterization of Visibility Graphs for Pseudo-polygons

In this paper, we give a characterization of the visibility graphs of pseudo-polygons. We first identify some key combinatorial properties of pseudo-polygons, and then give a set of five necessary conditions based on the properties we identify. We then prove that these necessary conditions are also sufficient via a reduction to a characterization of vertex-edge visibility graphs given by O'Rourke and Streinu.

Matt Gibson, Erik Krohn, Qing Wang

Faster and More Dynamic Maximum Flow by Incremental Breadth-First Search

We introduce the Excesses Incremental Breadth-First Search (Excesses IBFS) algorithm for maximum flow problems. We show that Excesses IBFS has the best overall practical performance on real-world instances, while maintaining the same polynomial running time guarantee of O(mn²) as IBFS, which it generalizes. Some applications, such as video object segmentation, require solving a series of maximum flow problems, each only slightly different from the previous one. Excesses IBFS naturally extends to this dynamic setting and is competitive in practice with other dynamic methods.

Andrew V. Goldberg, Sagi Hed, Haim Kaplan, Pushmeet Kohli, Robert E. Tarjan, Renato F. Werneck

The Temp Secretary Problem

We consider a generalization of the secretary problem where contracts are temporary, and for a fixed duration γ. This models online hiring of temporary employees, or online auctions for re-usable resources. The problem is related to the question of finding a large independent set in a random unit interval graph.

Amos Fiat, Ilia Gorelik, Haim Kaplan, Slava Novgorodov

How to Sort by Walking on a Tree

Consider a graph G with n vertices. On each vertex we place a box. These n vertices and n boxes are both numbered from 1 to n and initially shuffled according to a permutation $\pi \in S_n$. We introduce a sorting problem for a single robot: In every step, the robot can walk along an edge of G and can carry at most one box at a time. At a vertex, it may swap the box placed there with the box it is carrying. How many steps does the robot need to sort all the boxes?

We present an algorithm that produces a shortest possible sorting walk for such a robot if G is a tree. The algorithm runs in time $\mathcal{O}(n^2)$ and can be simplified further if G is a path. We show that for planar graphs the problem of finding a shortest possible sorting walk is $\mathcal{NP}$-complete.

Daniel Graf
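
A brute-force breadth-first search over configurations makes the move model concrete for tiny instances. It assumes that only edge traversals count as steps and that swaps are free (an assumption of this sketch), and it is exponential, unlike the paper's $\mathcal{O}(n^2)$ algorithm for trees:

```python
from collections import deque

def min_sorting_walk(adj, boxes, start=0):
    """Brute-force BFS over configurations of the box-sorting robot.
    A state is (robot position, carried box or None, boxes on vertices);
    an empty vertex is marked None. Only edge traversals count as steps
    (swaps are assumed free) and the robot must finish empty-handed.
    Exponential; this only pins down the move model."""
    goal = tuple(range(len(boxes)))

    def swap_options(pos, carry, conf):
        yield carry, conf                      # leave the vertex as is
        lst = list(conf)
        lst[pos], carry = carry, lst[pos]      # exchange with the vertex box
        yield carry, tuple(lst)

    init = [(start, c, cf) for c, cf in swap_options(start, None, tuple(boxes))]
    seen, frontier = set(init), deque((s, 0) for s in init)
    while frontier:
        (pos, carry, conf), dist = frontier.popleft()
        if carry is None and conf == goal:
            return dist
        for nxt in adj[pos]:                   # walking one edge costs one step
            for state in ((nxt, c, cf) for c, cf in swap_options(nxt, carry, conf)):
                if state not in seen:
                    seen.add(state)
                    frontier.append((state, dist + 1))
    return None

# Path on 3 vertices with boxes shuffled as (2, 1, 0): the answer is 4 steps.
print(min_sorting_walk({0: [1], 1: [0, 2], 2: [1]}, [2, 1, 0]))
```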

Improved Analysis of Complete-Linkage Clustering

Complete-linkage clustering is a very popular method for computing hierarchical clusterings in practice, which is not fully understood theoretically. Given a finite set $P \subseteq \mathbb{R}^d$ of points, the complete-linkage method starts with each point from P in a cluster of its own and then iteratively merges two clusters from the current clustering that have the smallest diameter when merged into a single cluster.

We study the problem of partitioning P into k clusters such that the largest diameter of the clusters is minimized, and we prove that the complete-linkage method computes an O(1)-approximation for this problem for any metric that is induced by a norm, assuming that the dimension d is a constant. This improves the best previously known bound of O(log k) due to Ackermann et al. (Algorithmica, 2014). Our improved bound also carries over to the k-center and the discrete k-center problem.

Anna Großwendt, Heiko Röglin
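
The merge rule itself is short enough to transcribe directly. The sketch below assumes Euclidean distances and is a naive implementation for illustration only, not the object of the paper's analysis:

```python
from itertools import combinations
from math import dist  # Euclidean distance, Python 3.8+

def complete_linkage(points, k):
    """Complete-linkage clustering as described above: start from
    singleton clusters and repeatedly merge the pair of clusters whose
    union has the smallest diameter, until k clusters remain."""
    clusters = [[p] for p in points]

    def diameter(c):
        return max((dist(p, q) for p, q in combinations(c, 2)), default=0.0)

    while len(clusters) > k:
        # Pick the pair whose merged cluster has the smallest diameter.
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: diameter(clusters[ij[0]] + clusters[ij[1]]))
        clusters[i] += clusters.pop(j)   # j > i, so popping j is safe
    return clusters

pts = [(0, 0), (0, 1), (5, 0), (5, 1), (10, 0)]
print(complete_linkage(pts, 2))   # two clusters from greedy diameter merges
```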

Structural Parameterizations of the Mixed Chinese Postman Problem

In the Mixed Chinese Postman Problem (MCPP), given a weighted mixed graph G (G may have both edges and arcs), our aim is to find a minimum weight closed walk traversing each edge and arc at least once. The MCPP parameterized by the number of edges in G or the number of arcs in G is fixed-parameter tractable, as proved by van Bevern et al. (2014) and Gutin, Jones and Sheng (2014), respectively. In this paper, we consider the unweighted version of MCPP. Solving an open question of van Bevern et al. (2014), we show that, somewhat unexpectedly, MCPP parameterized by the (undirected) treewidth of G is W[1]-hard. In fact, we prove that even MCPP parameterized by the pathwidth of G is W[1]-hard. On the positive side, we show that MCPP parameterized by treedepth is fixed-parameter tractable. We are unaware of any natural graph parameters between pathwidth and treedepth, and so our results provide a dichotomy of the complexity of MCPP.

Gregory Gutin, Mark Jones, Magnus Wahlström

Online Appointment Scheduling in the Random Order Model

We consider the following online appointment scheduling problem: jobs of different processing times and weights arrive online step-by-step. Upon arrival of a job, its (future) starting date has to be determined immediately and irrevocably before the next job arrives, with the objective of minimizing the average weighted completion time. In this type of scheduling problem it is impossible to achieve non-trivial competitive ratios in the classical, adversarial arrival model, even if jobs have unit processing times. We weaken the adversary and consider random order of arrival instead. In this model the adversary defines the weight and processing time of every job, but the order in which the jobs arrive online is a permutation drawn uniformly at random.

For the case of jobs with unit processing time we give a constant competitive algorithm. We use this algorithm as a building block for the general case of variable job processing times and achieve competitive ratio $\mathcal{O}(\log n)$. We complement these algorithms with a lower bound of Ω(n) for unit-processing-time jobs in the adversarial input model.

Oliver Göbel, Thomas Kesselheim, Andreas Tönnis

Approximation Algorithms for Connected Maximum Cut and Related Problems

An instance of the Connected Maximum Cut problem consists of an undirected graph G = (V, E), and the goal is to find a subset of vertices S ⊆ V that maximizes the number of edges in the cut δ(S) such that the induced graph G[S] is connected. We present the first non-trivial $\Omega(\frac{1}{\log n})$ approximation algorithm for the connected maximum cut problem in general graphs using novel techniques. We then extend our algorithm to the edge weighted case and obtain a poly-logarithmic approximation algorithm. Interestingly, in stark contrast to the classical max-cut problem, we show that the connected maximum cut problem remains NP-hard even on unweighted, planar graphs. On the positive side, we obtain a polynomial time approximation scheme for the connected maximum cut problem on planar graphs, and more generally on graphs with bounded genus.

Mohammad Taghi Hajiaghayi, Guy Kortsarz, Robert MacDavid, Manish Purohit, Kanthi Sarpatwar

The Offset Filtration of Convex Objects

We consider offsets of a union of convex objects. We aim for a filtration, a sequence of nested cell complexes, that captures the topological evolution of the offsets for increasing radii. We describe methods to compute a filtration based on the Voronoi diagram of the given convex objects. We prove that, in two and three dimensions, the size of the filtration is proportional to the size of the Voronoi diagram. Our algorithm runs in Θ(n log n) time in the 2-dimensional case and in expected time $O(n^{3+\varepsilon})$, for any ε > 0, in the 3-dimensional case. Our approach is inspired by alpha-complexes for point sets, but requires more involved machinery and analysis, primarily since Voronoi regions of general convex objects do not form a good cover. We show by experiments that our approach results in a similarly fast and topologically more stable method compared to approximating the input by point samples.

Dan Halperin, Michael Kerber, Doron Shaharabani

Approximation Algorithms for Polynomial-Expansion and Low-Density Graphs

We investigate the family of intersection graphs of low density objects in low dimensional Euclidean space. This family is quite general, and includes planar graphs. This family of graphs has some interesting properties, and in particular, it is a subset of the family of graphs that have polynomial expansion.

We present efficient (1 + ε)-approximation algorithms for polynomial expansion graphs, for Independent Set, Set Cover, and Dominating Set problems, among others, and these results seem to be new. Naturally, PTASs for these problems are known for subclasses of this graph family.

These results have immediate interesting applications in the geometric domain. For example, the new algorithms yield the only PTAS known for covering points by fat triangles (that are shallow).

We also prove corresponding hardness of approximation for some of these optimization problems, characterizing their intractability with respect to density. For example, we show that there is no PTAS for covering points by fat triangles if they are not shallow, thus matching our PTAS for this problem with respect to depth.

Sariel Har-Peled, Kent Quanrud

Monotone Drawings of 3-Connected Plane Graphs

A monotone drawing of a graph G is a straight-line drawing of G such that, for every pair of vertices u, w in G, there exists a path $P_{uw}$ in G that is monotone with respect to some line $l_{uw}$. (Namely, the order of the orthogonal projections of the vertices of $P_{uw}$ on $l_{uw}$ is the same as the order in which they appear in $P_{uw}$.) In this paper, we show that the classical Schnyder drawing of 3-connected plane graphs is a monotone drawing on a grid of size f × f (where f ≤ 2n − 5 is the number of internal faces of G), which can be constructed in O(n) time. It also has the advantage that, for any given vertices u, w, the monotone line $l_{uw}$ can be identified in O(1) time.

Xin He, Dayu He

Faster Fully-Dynamic Minimum Spanning Forest

We give a new data structure for the fully-dynamic minimum spanning forest problem in simple graphs. Edge updates are supported in $O(\log^4 n/\log\log n)$ expected amortized time per operation, improving the $O(\log^4 n)$ amortized bound of Holm et al. (STOC '98, JACM '01). We also provide a deterministic data structure with amortized update time $O(\log^4 n \cdot \log\log\log n/\log\log n)$. We assume the Word-RAM model with standard instructions.

Jacob Holm, Eva Rotenberg, Christian Wulff-Nilsen

On the Equivalence among Problems of Bounded Width

In this paper, we introduce a methodology, called decomposition-based reductions, for showing the equivalence among various problems of bounded width.

First, we show that the following are equivalent for any α > 0: SAT can be solved in $O^*(2^{\alpha \mathbf{tw}})$ time; 3-SAT can be solved in $O^*(2^{\alpha \mathbf{tw}})$ time; Max 2-SAT can be solved in $O^*(2^{\alpha \mathbf{tw}})$ time; Independent Set can be solved in $O^*(2^{\alpha \mathbf{tw}})$ time; and Independent Set can be solved in $O^*(2^{\alpha \mathbf{cw}})$ time, where $\mathbf{tw}$ and $\mathbf{cw}$ are the tree-width and clique-width of the instance, respectively. Then, we introduce a new parameterized complexity class EPNL, which includes Set Cover and TSP, and show that SAT, 3-SAT, Max 2-SAT, and Independent Set parameterized by path-width are EPNL-complete. This implies that if one of these EPNL-complete problems can be solved in $O^*(c^k)$ time, then any problem in EPNL can be solved in $O^*(c^k)$ time.

Yoichi Iwata, Yuichi Yoshida

Fast Output-Sensitive Matrix Multiplication

We consider the problem of multiplying two U × U matrices A and C of elements from a field $\mathcal{F}$. We present a new randomized algorithm that can use the known fast square matrix multiplication algorithms to perform fewer arithmetic operations than the current state of the art for output matrices that are sparse.

In particular, let ω be the best known constant such that two dense U × U matrices can be multiplied with $\mathcal{O}(U^\omega)$ arithmetic operations. Further denote by N the number of nonzero entries in the input matrices, while Z is the number of nonzero entries of the matrix product AC. We present a new Monte Carlo algorithm that uses $\tilde{\mathcal{O}}(U^2 (\frac{Z}{U})^{\omega-2} + N)$ arithmetic operations and outputs the nonzero entries of AC with high probability. For dense input, i.e., $N = U^2$, if Z is asymptotically larger than U, this improves over state-of-the-art methods, and it is always at most $\mathcal{O}(U^\omega)$. For general input density we improve upon the state of the art when N is asymptotically larger than $U^{4-\omega} Z^{\omega - 5/2}$.

The algorithm is based on dividing the input into "balanced" subproblems which are then compressed and computed. The new subroutine that computes a matrix product with balanced rows and columns in its output uses time $\tilde{\mathcal{O}}(U Z^{(\omega-1)/2} + N)$, which is better than the current state of the art for balanced matrices when N is asymptotically larger than $U Z^{\omega/2-1}$, which always holds when $N = U^2$.

In the I/O model (where M is the memory size and B is the block size), our algorithm is the first nontrivial result that exploits cancellations and sparsity of the output. The I/O complexity of our algorithm is $\tilde{\mathcal{O}}(U^2 (Z/U)^{\omega-2}/(M^{\omega/2-1} B) + Z/B + N/B)$, which is asymptotically faster than the state of the art unless M is large.

Riko Jacob, Morten Stöckel

A Structural Approach to Kernels for ILPs: Treewidth and Total Unimodularity

Kernelization is a theoretical formalization of efficient preprocessing for NP-hard problems. Empirically, preprocessing is highly successful in practice, for example in state-of-the-art ILP-solvers like CPLEX. Motivated by this, previous work studied the existence of kernelizations for ILP related problems, e.g., for testing feasibility of Ax ≤ b. In contrast to the observed success of CPLEX, however, the results were largely negative. Intuitively, practical instances have far more useful structure than the worst-case instances used to prove these lower bounds.

In the present paper, we study the effect that subsystems that have (a Gaifman graph of) bounded treewidth or that are totally unimodular have on the kernelizability of the ILP feasibility problem. We show that, on the positive side, if these subsystems have a small number of variables on which they interact with the remaining instance, then we can efficiently replace them by smaller subsystems of size polynomial in the domain without changing feasibility. Thus, if large parts of an instance consist of such subsystems, then this yields a substantial size reduction. Complementing this, we prove that relaxations to the considered structures, e.g., larger boundaries of the subsystems, allow worst-case lower bounds against kernelization. Thus, these relaxed structures give rise to instance families that cannot be efficiently reduced, by any approach.

Bart M. P. Jansen, Stefan Kratsch

On the Approximability of Digraph Ordering

Given an n-vertex digraph D = (V, A), the Max-k-Ordering problem is to compute a labeling ℓ: V → [k] maximizing the number of forward edges, i.e. edges (u, v) such that ℓ(u) < ℓ(v). For different values of k, this reduces to Maximum Acyclic Subgraph (k = n) and Max-DiCut (k = 2). This work studies the approximability of Max-k-Ordering and its generalizations, motivated by their applications to job scheduling with soft precedence constraints. We give an LP rounding based 2-approximation algorithm for Max-k-Ordering for any k ∈ {2,…,n}, improving on the known $2k/(k-1)$-approximation obtained via random assignment. The tightness of this rounding is shown by proving that for any k ∈ {2,…,n} and constant ε > 0, Max-k-Ordering has an LP integrality gap of 2 − ε for $n^{\Omega(1/\log\log k)}$ rounds of the Sherali-Adams hierarchy.

A further generalization of Max-k-Ordering is the restricted maximum acyclic subgraph problem or RMAS, where each vertex v has a finite set of allowable labels $S_v \subseteq \mathbb{Z}^{+}$. We prove an LP rounding based $4\sqrt{2}/(\sqrt{2}+1) \approx 2.344$ approximation for it, improving on the $2\sqrt{2} \approx 2.828$ approximation recently given by Grandoni et al. [5]. In fact, our approximation algorithm also works for a general version where the objective counts the edges which go forward by at least a positive offset specific to each edge.

The minimization formulation of digraph ordering is DAG edge deletion or DED(k), which requires deleting the minimum number of edges from an n-vertex directed acyclic graph (DAG) to remove all paths of length k. We show that both the LP relaxation and a local ratio approach for DED(k) yield a k-approximation for any k ∈ [n]. A vertex deletion version was studied earlier by Paik et al. [16] and Svensson [17].

Sreyash Kenkre, Vinayaka Pandit, Manish Purohit, Rishi Saket
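
A brute-force evaluation pins down the Max-k-Ordering objective on a toy digraph; it enumerates all $k^n$ labelings, in contrast to the paper's LP rounding, and is included only to make the definition concrete:

```python
from itertools import product

def max_k_ordering(n, arcs, k):
    """Brute force for Max-k-Ordering: over all labelings l: V -> [k],
    count forward edges (u, v) with l(u) < l(v). Exponential in n; it
    only illustrates the objective, not the paper's algorithm."""
    best, best_labels = -1, None
    for labels in product(range(1, k + 1), repeat=n):
        forward = sum(1 for u, v in arcs if labels[u] < labels[v])
        if forward > best:
            best, best_labels = forward, labels
    return best, best_labels

# Directed 4-cycle: with k = 2 (the Max-DiCut case) at most 2 arcs can go
# forward, with k = n = 4 (the Maximum Acyclic Subgraph case) 3 of 4 can.
arcs = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(max_k_ordering(4, arcs, k=2))  # (2, ...)
print(max_k_ordering(4, arcs, k=4))  # (3, ...)
```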

Welfare Maximization with Deferred Acceptance Auctions in Reallocation Problems

We design approximate weakly group strategy-proof mechanisms for resource reallocation problems using Milgrom and Segal’s deferred acceptance auction framework: the radio spectrum and network bandwidth reallocation problems in the procurement auction setting and the cost minimization problem with set cover constraints in the selling auction setting. Our deferred acceptance auctions are derived from simple greedy algorithms for the underlying optimization problems and guarantee approximately optimal social welfare (cost) of the agents retaining their rights (contracts). In the reallocation problems, we design procurement auctions to purchase agents’ broadcast/access rights to free up some of the resources such that the unpurchased rights can still be exercised with respect to the remaining resources. In the cost minimization problem, we design a selling auction to sell early termination rights to agents with existing contracts such that some minimal constraints are still satisfied with remaining contracts. In these problems, while the “allocated” agents transact, exchanging rights and payments, the objective and feasibility constraints are on the “rejected” agents.

Anthony Kim

On the Pathwidth of Almost Semicomplete Digraphs

We call a digraph h-semicomplete if each vertex of the digraph has at most h non-neighbors, where a non-neighbor of a vertex v is a vertex u ≠ v such that there is no edge between u and v in either direction. This notion generalizes that of semicomplete digraphs, which are 0-semicomplete, and tournaments, which are semicomplete and have no anti-parallel pairs of edges. Our results in this paper are as follows. (1) We give an algorithm which, given an h-semicomplete digraph G on n vertices and a positive integer k, in $(h + 2k + 1)^{2k} n^{O(1)}$ time either constructs a path-decomposition of G of width at most k or concludes correctly that the pathwidth of G is larger than k. (2) We show that there is a function f(k, h) such that every h-semicomplete digraph of pathwidth at least f(k, h) has a semicomplete subgraph of pathwidth at least k.

One consequence of these results is that the problem of deciding if a fixed digraph H is topologically contained in a given h-semicomplete digraph G admits a polynomial-time algorithm for fixed h.

Kenta Kitsunai, Yasuaki Kobayashi, Hisao Tamaki

Quicksort, Largest Bucket, and Min-Wise Hashing with Limited Independence

Randomized algorithms and data structures are often analyzed under the assumption of access to a perfect source of randomness. The most fundamental metric used to measure how "random" a hash function or a random number generator is, is its independence: a sequence of random variables is said to be k-independent if every variable is uniform and every size-k subset is independent.

In this paper we consider three classic algorithms under limited independence. Besides the theoretical interest in removing the unrealistic assumption of full independence, the work is motivated by lower independence being more practical. We provide new bounds for randomized quicksort, min-wise hashing and largest bucket size under limited independence. Our results can be summarized as follows.

Randomized Quicksort. When pivot elements are computed using a 5-independent hash function, Karloff and Raghavan (J. ACM '93) showed $\mathcal{O}(n \log n)$ expected worst-case running time for a special version of quicksort. We improve upon this, showing that the same running time is achieved with only 4-independence.

Min-Wise Hashing. For a set A, consider the probability of a particular element being mapped to the smallest hash value. It is known that 5-independence implies the optimal probability $\mathcal{O}(1/n)$. Broder et al. (STOC '98) showed that 2-independence implies it is $\mathcal{O}(1/\sqrt{|A|})$. We show a matching lower bound as well as new tight bounds for 3- and 4-independent hash functions.

Largest Bucket. We consider the case where n balls are distributed to n buckets using a k-independent hash function and analyze the largest bucket size. Alon et al. (STOC '97) showed that there exists a 2-independent hash function implying a bucket of size $\Omega(n^{1/2})$. We generalize the bound, providing a k-independent family of functions that imply size $\Omega(n^{1/k})$.

Mathias Bæk Tejs Knudsen, Morten Stöckel
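
The standard way to realize k-independence, and the kind of hash family such bounds speak about, is a uniformly random polynomial of degree below k over a prime field (the classical Wegman-Carter construction). A minimal sketch, with an illustrative prime and the caveat that reducing modulo the table size is only approximately uniform:

```python
import random

P = (1 << 31) - 1  # a Mersenne prime; we work over the field Z_P

def random_k_independent_hash(k, m):
    """Classical Wegman-Carter construction of a k-independent family:
    pick a uniformly random polynomial with k coefficients (degree < k)
    over Z_P and reduce the value modulo the table size m. Any k distinct
    keys in [P] receive (close to) uniform and independent values."""
    coeffs = [random.randrange(P) for _ in range(k)]
    def h(x):
        v = 0
        for c in reversed(coeffs):   # Horner evaluation of the polynomial
            v = (v * x + c) % P
        return v % m
    return h

# E.g., hashing n balls into n buckets with a 5-independent function,
# as in the largest-bucket setting discussed above.
n = 1000
h = random_k_independent_hash(5, n)
buckets = [0] * n
for ball in range(n):
    buckets[h(ball)] += 1
print("largest bucket:", max(buckets))
```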

Maximum Matching in Turnstile Streams

We consider the unweighted bipartite maximum matching problem in the one-pass turnstile streaming model, where the input stream consists of edge insertions and deletions. In the insertion-only model, a one-pass 2-approximation streaming algorithm can be easily obtained with space O(n log n), where n denotes the number of vertices of the input graph. We show that no such result is possible if edge deletions are allowed, even if space $O(n^{3/2-\delta})$ is granted, for every δ > 0. Specifically, for every 0 ≤ ε ≤ 1, we show that in the one-pass turnstile streaming model, in order to compute an $O(n^{\varepsilon})$-approximation, space $\Omega(n^{3/2-4\varepsilon})$ is required for constant error randomized algorithms, and, up to logarithmic factors, space $\tilde{O}(n^{2-2\varepsilon})$ is sufficient.

Our lower bound result is proved in the simultaneous message model of communication and may be of independent interest.

Christian Konrad

A Lasserre Lower Bound for the Min-Sum Single Machine Scheduling Problem

The Min-sum single machine scheduling problem (denoted $1||\sum f_j$) generalizes a large number of sequencing problems. The first constant approximation guarantees have been obtained only recently and are based on natural time-indexed LP relaxations strengthened with the so-called Knapsack-Cover inequalities (see Bansal and Pruhs, Cheung and Shmoys, and the recent (4 + ε)-approximation by Mestre and Verschae). These relaxations have an integrality gap of 2, since the Min-knapsack problem is a special case. No APX-hardness result is known and it is still conceivable that there exists a PTAS. Interestingly, the Lasserre hierarchy relaxation, when the objective function is incorporated as a constraint, reduces the integrality gap for the Min-knapsack problem to 1 + ε.

In this paper we study the complexity of the Min-sum single machine scheduling problem under algorithms from the Lasserre hierarchy. We prove the first lower bound for this model by showing that the integrality gap is unbounded at level $\Omega(\sqrt{n})$ even for a variant of the problem that is solvable in O(n log n) time, namely Min-number of tardy jobs. We consider a natural formulation that incorporates the objective function as a constraint and prove the result by partially diagonalizing the matrix associated with the relaxation and exploiting this characterization.

Adam Kurpisz, Samuli Leppänen, Monaldo Mastrolilli

Optimal Parameterized Algorithms for Planar Facility Location Problems Using Voronoi Diagrams

We study a general family of facility location problems defined on planar graphs and on the 2-dimensional plane. In these problems, a subset of k objects has to be selected, satisfying certain packing (disjointness) and covering constraints. Our main result is showing that, for each of these problems, the $n^{\mathcal{O}(k)}$ time brute force algorithm of selecting k objects can be improved to $n^{\mathcal{O}(\sqrt{k})}$ time. The algorithm is based on focusing on the Voronoi diagram of a hypothetical solution of k objects; this idea was introduced recently in the design of geometric QPTASs, but was not yet used for exact algorithms and for planar graphs. As concrete consequences of our main result, we obtain $n^{\mathcal{O}(\sqrt{k})}$ time algorithms for the following problems: d-Scattered Set in planar graphs (find k vertices at pairwise distance d); d-Dominating Set/(k, d)-Center in planar graphs (find k vertices such that every vertex is at distance at most d from these vertices); select k pairwise disjoint connected vertex sets from a given collection; select k pairwise disjoint disks in the plane (of possibly different radii) from a given collection; cover a set of points in the plane by selecting k disks/axis-parallel squares from a given collection. We complement these positive results with lower bounds suggesting that some similar, but slightly more general, problems (such as covering points with axis-parallel rectangles) do not admit $n^{\mathcal{O}(\sqrt{k})}$ time algorithms.

Dániel Marx, Michał Pilipczuk

Randomization Helps Computing a Minimum Spanning Tree under Uncertainty

We consider the problem of finding a minimum spanning tree (MST) in a graph with uncertain edge weights given by open intervals on the edges. The exact weight of an edge in the corresponding uncertainty interval can be queried at a given cost. The task is to determine a possibly adaptive query sequence of minimum total cost for finding an MST. For uniform query cost, a deterministic algorithm with best possible competitive ratio 2 is known [7].

We solve a long-standing open problem by showing that randomized query strategies can beat the best possible competitive ratio 2 of deterministic algorithms. Our randomized algorithm achieves expected competitive ratio $1+1/\sqrt{2}\approx 1.707$. This result is based on novel structural insights into the problem, enabling an interpretation as a generalized online bipartite vertex cover problem. We also consider arbitrary, edge-individual query costs and show how to obtain algorithms matching the best known competitive ratios for uniform query cost. Moreover, we give an optimal algorithm for the related problem of computing the exact weight of an MST at minimum query cost. This algorithm is based on an interesting relation between different algorithmic approaches using the cycle-property and the cut-property characterizing MSTs. Finally, we argue that all our results also hold for the more general setting of matroids. All our algorithms run in polynomial time.

Nicole Megow, Julie Meißner, Martin Skutella

Compressed Data Structures for Dynamic Sequences

We consider the problem of storing a dynamic string S over an alphabet Σ = {1,…,σ} in compressed form. Our representation supports insertions and deletions of symbols and answers three fundamental queries: access(i) returns the i-th symbol in S, rank(a, i) counts how many times a symbol a occurs among the first i positions in S, and select(a, i) finds the position where a symbol a occurs for the i-th time. We present the first fully-dynamic data structure for arbitrarily large alphabets that achieves optimal query times for all three operations and supports updates with worst-case time guarantees. Ours is also the first fully-dynamic data structure that needs only $nH_k + o(n\log\sigma)$ bits, where $H_k$ is the k-th order entropy and n is the string length. Moreover, our representation supports extraction of a substring S[i..i + ℓ] in optimal $O(\log n/\log\log n + \ell/\log_{\sigma} n)$ time.

J. Ian Munro, Yakov Nekrich
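
The query semantics can be pinned down by a naive reference implementation. The class below is a plain list with linear-time operations and no compression whatsoever, so it illustrates only the interface that the paper's structure supports in optimal time and compressed space:

```python
class NaiveDynamicSequence:
    """Reference implementation of the query/update semantics only:
    a plain Python list with linear-time operations and no compression,
    unlike the paper's structure."""
    def __init__(self, s=()):
        self.s = list(s)

    def insert(self, i, a):     # insert symbol a at position i
        self.s.insert(i, a)

    def delete(self, i):        # delete the symbol at position i
        del self.s[i]

    def access(self, i):        # the i-th symbol (0-based here)
        return self.s[i]

    def rank(self, a, i):       # occurrences of a among the first i positions
        return self.s[:i].count(a)

    def select(self, a, i):     # position of the i-th occurrence of a
        seen = 0
        for pos, c in enumerate(self.s):
            seen += c == a
            if seen == i:
                return pos
        raise ValueError("fewer than i occurrences")

S = NaiveDynamicSequence("mississippi")
S.insert(0, "m")                            # "mmississippi"
print(S.access(2), S.rank("s", 6), S.select("i", 2))   # -> i 2 5
```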

Geometric Hitting Sets for Disks: Theory and Practice

The geometric hitting set problem is one of the basic geometric combinatorial optimization problems: given a set P of points and a set $\mathcal{D}$ of geometric objects in the plane, the goal is to compute a small-sized subset of P that hits all objects in $\mathcal{D}$. In 1994, Brönnimann and Goodrich [6] made an important connection between this problem and the size of fundamental combinatorial structures called ε-nets, showing that small-sized ε-nets imply approximation algorithms with correspondingly small approximation ratios. Recently, Agarwal and Pan [5] showed that their scheme can be implemented in near-linear time for disks in the plane.

This current state of the art is lacking in three ways. First, the constants in current ε-net constructions are large, so the approximation factor ends up being more than 40. Second, the algorithm uses sophisticated geometric tools and data structures with large resulting constants. Third, these issues have resulted in a lack of available software for fast computation of small hitting sets. In this paper, we make progress on all three of these barriers: i) we prove improved bounds on the sizes of ε-nets, ii) we design hitting-set algorithms without the use of these data structures, and iii) we present dnet, a public source-code module that incorporates both of these improvements to compute small-sized hitting sets and ε-nets efficiently in practice.

Norbert Bus, Nabil H. Mustafa, Saurabh Ray

Efficient Computation of Middle Levels Gray Codes

For any integer n ≥ 1, a middle levels Gray code is a cyclic listing of all bitstrings of length 2n + 1 that have either n or n + 1 entries equal to 1, such that any two consecutive bitstrings in the list differ in exactly one bit. The question whether such a Gray code exists for every n ≥ 1 has been the subject of intensive research during the last 30 years, and has been answered affirmatively only recently [T. Mütze. Proof of the middle levels conjecture. arXiv:1404.4442, 2014]. In this work we provide the first efficient algorithm to compute a middle levels Gray code. For a given bitstring, our algorithm computes the next ℓ bitstrings in the Gray code in time $\mathcal{O}(n\ell(1+\frac{n}{\ell}))$, which is $\mathcal{O}(n)$ on average per bitstring provided that ℓ = Ω(n).

Torsten Mütze, Jerri Nummenpalo
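
For intuition (and only for very small n), a middle levels Gray code can be found by backtracking for a Hamiltonian cycle in the middle-levels graph. The sketch below does exactly that and is exponential, in contrast to the paper's efficient successor computation:

```python
from itertools import combinations

def middle_levels_gray_code(n):
    """Backtracking search for a middle levels Gray code: a Hamiltonian
    cycle on the bitstrings of length 2n+1 with n or n+1 ones, adjacent
    iff they differ in exactly one bit. Exponential; only for tiny n."""
    L = 2 * n + 1
    verts = [frozenset(c) for w in (n, n + 1) for c in combinations(range(L), w)]
    start = verts[0]

    def neighbors(v):
        if len(v) == n:                  # add one element -> weight n+1
            return (v | {i} for i in range(L) if i not in v)
        return (v - {i} for i in v)      # remove one element -> weight n

    def extend(path, used):
        if len(path) == len(verts):
            return path if start in neighbors(path[-1]) else None  # close cycle
        for w in neighbors(path[-1]):
            if w not in used:
                found = extend(path + [w], used | {w})
                if found:
                    return found
        return None

    cycle = extend([start], {start})
    if cycle is None:
        return None
    return ["".join("1" if i in v else "0" for i in range(L)) for v in cycle]

print(middle_levels_gray_code(1))
# -> ['100', '110', '010', '011', '001', '101']; consecutive strings
# (cyclically) differ in exactly one bit
```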

Computing the Similarity Between Moving Curves

In this paper we study similarity measures for moving curves which can, for example, model changing coastlines or glacier termini. Points on a moving curve have two parameters, namely the position along the curve as well as time. We therefore focus on similarity measures for surfaces, specifically the Fréchet distance between surfaces. While the Fréchet distance between surfaces is not even known to be computable, we show for variants arising in the context of moving curves that they are polynomial-time solvable or NP-complete depending on the restrictions imposed on how the moving curves are matched. We achieve the polynomial-time solutions by a novel approach for computing a surface in the so-called free-space diagram based on max-flow min-cut duality.

Kevin Buchin, Tim Ophelders, Bettina Speckmann

I/O-Efficient Similarity Join

We present an I/O-efficient algorithm for computing similarity joins based on locality-sensitive hashing (LSH). In contrast to the filtering methods commonly suggested, our method has provable sub-quadratic dependency on the data size. Further, in contrast to straightforward implementations of known LSH-based algorithms on external memory, our approach is able to take significant advantage of the available internal memory: whereas the time complexity of classical algorithms includes a factor of $N^{\rho}$, where ρ is a parameter of the LSH used, the I/O complexity of our algorithm merely includes a factor $(N/M)^{\rho}$, where N is the data size and M is the size of internal memory. Our algorithm is randomized and outputs the correct result with high probability. It is a simple, recursive, cache-oblivious procedure, and we believe that it will be useful also in other computational settings, such as parallel computation.

Rasmus Pagh, Ninh Pham, Francesco Silvestri, Morten Stöckel

Improved Approximation Algorithms for Weighted 2-Path Partitions

We investigate two NP-complete vertex partition problems on edge weighted complete graphs with 3k vertices. The first problem asks to partition the graph into k vertex-disjoint paths of length 2 (referred to as 2-paths) such that the total weight of the paths is maximized. We present a cubic time approximation algorithm that computes a 2-path partition whose total weight is at least .5833 of the weight of an optimal partition, improving upon the (.5265 − ε)-approximation algorithm of [26]. Restricting the input graph to have edge weights in {0, 1}, we present a .75-approximation algorithm, improving upon the .55-approximation algorithm of [16].

Combining this algorithm with a previously known approximation algorithm for the 3-Set Packing problem, we obtain a .6-approximation algorithm for the problem of partitioning a {0, 1}-edge-weighted graph into k vertex-disjoint triangles of maximum total weight. The best known approximation algorithm for general weights achieves an approximation ratio of .5257 [4].

Amotz Bar-Noy, David Peleg, George Rabanca, Ivo Vigan

A Multivariate Approach for Weighted FPT Algorithms

We introduce a multivariate approach for solving weighted parameterized problems. Building on the flexible use of certain parameters, our approach defines a new general framework for applying the classic bounded search trees technique. In our model, given an instance of size n of a minimization/maximization problem, and a parameter W ≥ 1, we seek a solution of weight at most/at least W. We demonstrate the wide applicability of our approach by solving the weighted variants of Vertex Cover, 3-Hitting Set, Edge Dominating Set and Max Internal Out-Branching. While the best known algorithms for these problems admit running times of the form $a^W n^{O(1)}$, for some constant a > 1, our approach yields running times of the form $b^s n^{O(1)}$, for some constant b ≤ a, where s ≤ W is the minimum size of a solution of weight at most (at least) W. If no such solution exists, s = min{W, m}, where m is the maximum size of a solution. Clearly, s can be substantially smaller than W. Moreover, we give an example of a problem whose polynomial-time solvability crucially relies on our flexible (in lieu of a strict) use of parameters.

We further show, among other results, that Weighted Vertex Cover and Weighted Edge Dominating Set are solvable in times $1.443^t n^{O(1)}$ and $3^t n^{O(1)}$, respectively, where t ≤ s is the minimum size of a solution.

Hadas Shachnai, Meirav Zehavi

Incidences with Curves in $\mathbb{R}^d$

We prove that the number of incidences between m points and n bounded-degree curves with k degrees of freedom in $\mathbb{R}^d$ is

$O\left(m^{\frac{k}{dk-d+1}+\varepsilon} n^{\frac{dk-d}{dk-d+1}} + \sum_{j=2}^{d-1} m^{\frac{k}{jk-j+1}+\varepsilon} n^{\frac{jk-j}{jk-j+1}} \left(\frac{q_j}{n}\right)^{\frac{(d-j)(k-1)}{jk-j+1}} + m + n\right),$

for any ε > 0, where the constant of proportionality depends on k, ε and d, provided that no j-dimensional surface of degree $c_j(k, d, \varepsilon)$, a constant parameter depending on k, d, j, and ε, contains more than $q_j$ input curves, and that the $q_j$'s satisfy certain mild conditions.

This bound generalizes a recent result of Sharir and Solomon [20] concerning point-line incidences in four dimensions (where d = 4 and k = 2), and partly generalizes a recent result of Guth [8] (as well as the earlier bound of Guth and Katz [10]) in three dimensions (Guth's three-dimensional bound has a better dependency on q). It also improves a recent d-dimensional general incidence bound by Fox, Pach, Sheffer, Suk, and Zahl [7], in the special case of incidences with algebraic curves. Our results are also related to recent works by Dvir and Gopi [4] and by Hablicsek and Scherr [11] concerning rich lines in high-dimensional spaces.

Micha Sharir, Adam Sheffer, Noam Solomon

D3-Tree: A Dynamic Deterministic Decentralized Structure

We present $D^3$-Tree, a dynamic deterministic structure for data management in decentralized networks, obtained by engineering and further extending an existing decentralized structure. $D^3$-Tree achieves O(log N) worst-case search cost (where N is the number of nodes in the network), O(log N) amortized load-balancing cost, and it is highly fault-tolerant. A particular strength of $D^3$-Tree is that it achieves O(log N) amortized search cost under massive node failures. We conduct an extensive experimental study verifying that $D^3$-Tree outperforms other well-known structures and that it achieves a significant success rate in element queries in case of massive node failures.

Spyros Sioutas, Efrosini Sourla, Kostas Tsichlas, Christos Zaroliagis

Ignorant vs. Anonymous Recommendations

We start with an unknown binary n × m matrix, where the entries correspond to the preferences of n users on m items. The goal is to find at least one item per user that the user likes, with as few queries as possible. Since there are matrices where any algorithm performs badly without any preliminary knowledge of the input matrix, we reveal an anonymized version of the input matrix to the algorithm at the beginning of the execution. The input matrix is anonymized by shuffling the rows according to a randomly chosen hidden permutation. We observe that this anonymous recommendation problem can be seen as an adaptive variant of the Min Sum Set Cover problem, and show that the greedy solution for the original version of the problem provides a constant approximation for the adaptive version.

Jara Uitto, Roger Wattenhofer
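
For reference, the greedy rule for classical Min Sum Set Cover, whose adaptive analogue the paper analyzes, repeatedly picks a set covering the most still-uncovered elements and charges every element the step at which it is first covered; the 4-approximation guarantee for this greedy is due to Feige, Lovász, and Tetali. A minimal sketch:

```python
def greedy_min_sum_set_cover(universe, sets):
    """Greedy for classical Min Sum Set Cover: repeatedly pick the set
    covering the most still-uncovered elements; the cost is the sum over
    elements of the step at which each is first covered. This greedy is
    a 4-approximation for MSSC (Feige, Lovasz, Tetali); the paper relates
    it to the adaptive, anonymous recommendation variant."""
    uncovered, order, cost, step = set(universe), [], 0, 0
    while uncovered:
        step += 1
        best = max(sets, key=lambda s: len(s & uncovered))
        gained = best & uncovered
        if not gained:
            raise ValueError("universe not coverable by the given sets")
        cost += step * len(gained)   # each newly covered element pays `step`
        uncovered -= gained
        order.append(best)
    return cost, order

universe = {1, 2, 3, 4, 5}
sets = [{1, 2, 3}, {3, 4}, {4, 5}, {5}]
print(greedy_min_sum_set_cover(universe, sets))
# -> (7, [{1, 2, 3}, {4, 5}])
```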

Lower Bounds in the Preprocessing and Query Phases of Routing Algorithms

In the last decade, there has been a substantial amount of research in finding routing algorithms designed specifically to run on real-world graphs. In 2010, Abraham et al. showed upper bounds on the query time in terms of a graph's highway dimension and diameter for the current fastest routing algorithms, including contraction hierarchies, transit node routing, and hub labeling. In this paper, we show corresponding lower bounds for the same three algorithms. We also show how to improve a result by Milosavljević which lower-bounds the number of shortcuts added in the preprocessing stage of contraction hierarchies. We relax the assumption of an optimal contraction order (which is NP-hard to compute), allowing the result to be applicable to real-world instances. Finally, we give a proof that optimal preprocessing for hub labeling is NP-hard. Hardness of optimal preprocessing is known for most routing algorithms, and was suspected to be true for hub labeling.

Colin White

Trip-Based Public Transit Routing

We study the problem of computing all Pareto-optimal journeys in a public transit network regarding the two criteria of arrival time and number of transfers taken. We take a novel approach, focusing on trips and transfers between them, allowing fine-grained modeling. Our experiments on the metropolitan network of London show that the algorithm computes full 24-hour profiles in 70ms after a preprocessing phase of 30s, allowing fast queries in dynamic scenarios.

Sascha Witt

Mixing Color Coding-Related Techniques

Narrow sieves, representative sets and divide-and-color are three breakthrough techniques related to color coding, which led to the design of extremely fast parameterized algorithms. We present a novel family of strategies for applying mixtures of them. This includes: (a) a mix of representative sets and narrow sieves; (b) a faster computation of representative sets under certain separateness conditions, mixed with divide-and-color and a new technique, called “balanced cutting”; (c) two mixtures of representative sets and a new technique, called “unbalanced cutting”. We demonstrate our strategies by obtaining, among other results, significantly faster algorithms for k-Internal Out-Branching and Weighted 3-Set k-Packing, and a general framework for speeding up the previous best deterministic algorithms for k-Path, k-Tree, r-Dimensional k-Matching, Graph Motif and Partial Cover.

Meirav Zehavi

Backmatter
