
Table of Contents

Frontmatter

Invited Lectures

Origami, Linkages, and Polyhedra: Folding with Algorithms

What forms of origami can be designed automatically by algorithms? What shapes can result by folding a piece of paper flat and making one complete straight cut? What polyhedra can be cut along their surface and unfolded into a flat piece of paper without overlap? When can a linkage of rigid bars be untangled or folded into a desired configuration? Folding and unfolding is a branch of discrete and computational geometry that addresses these and many other intriguing questions. I will give a taste of the many results that have been proved in the past few years, as well as the several exciting problems that remain open. Many folding problems have applications in areas including manufacturing, robotics, graphics, and protein folding.

Erik D. Demaine

Reliable and Efficient Geometric Computing

Computing with geometric objects (points, curves, and surfaces) is central to many engineering disciplines and lies at the heart of computer-aided design systems. Implementing geometric algorithms is notoriously difficult, and most actual implementations are incomplete: they are known to crash or deliver the wrong result on some instances.

In the introductory part of the talk, we illustrate the pitfalls of geometric computing [5] and explain for one algorithm in detail where the problem lies and what goes wrong.

Kurt Mehlhorn

Some Computational Challenges in Today’s Bio-medicine

Over the last decade, biology has been rapidly transformed into an information science. Novel high-throughput technologies provide information in a scope that was unimaginable not long ago, and with these data a plethora of computational riddles are emerging. The challenge of deep and integrated computational analysis of diverse biological data, providing meaningful understanding of life processes and principles, is still very far from being answered. If met, this could help improve the quality of life of this and future generations.

We shall discuss several such challenges arising in diverse areas of biology and medicine, including analysis and evolution of genetic regulatory networks, disease association, and genome rearrangements. The tension between elegant algorithmics and useful practical solutions will be a common theme in this story.

Ron Shamir

Contributed Papers: Design and Analysis Track

Kinetic Collision Detection for Convex Fat Objects

We design compact and responsive kinetic data structures for detecting collisions between n convex fat objects in 3-dimensional space that can have arbitrary sizes. Our main results are:

(i) If the objects are 3-dimensional balls that roll on a plane, then we can detect collisions with a KDS of size O(n log n) that can handle events in O(log n) time. This structure processes O(n²) events in the worst case, assuming that the objects follow constant-degree algebraic trajectories.

(ii) If the objects are convex fat 3-dimensional objects of constant complexity that are free-flying in ${\mathbb R}^3$, then we can detect collisions with a KDS of O(n log⁶ n) size that can handle events in O(log⁶ n) time. This structure processes O(n²) events in the worst case, assuming that the objects follow constant-degree algebraic trajectories. If the objects have similar sizes then the size of the KDS becomes O(n) and events can be handled in O(1) time.

M. A. Abam, M. de Berg, S. -H. Poon, B. Speckmann

Dynamic Connectivity for Axis-Parallel Rectangles

In this paper we give a fully dynamic data structure to maintain the connectivity of the intersection graph of n axis-parallel rectangles. The amortized update time (insertion and deletion of rectangles) is $O(n^{10/11}\mathop{\mathrm{polylog}} n)$ and the query time (deciding whether two given rectangles are connected) is O(1). It slightly improves the update time ($O(n^{0.94})$) of the previous method while drastically reducing the query time (near $O(n^{1/3})$). Our method does not use fast matrix multiplication results and supports a wider range of queries.

Peyman Afshani, Timothy M. Chan

Single Machine Precedence Constrained Scheduling Is a Vertex Cover Problem

In this paper we study the single machine precedence constrained scheduling problem of minimizing the sum of weighted completion time. Specifically, we settle an open problem first raised by Chudak & Hochbaum and whose answer was subsequently conjectured by Correa & Schulz.

The most significant implication of our result is that the addressed scheduling problem is a special case of the vertex cover problem. This will hopefully be an important step towards proving that the two problems behave identically in terms of approximability.

As a consequence of our result, previous results for the scheduling problem can be explained, and in some cases improved, by means of vertex cover theory. For example, our result implies the existence of a polynomial time algorithm for the special case of two-dimensional partial orders. This considerably extends Lawler’s result from 1978 for series-parallel orders.

Christoph Ambühl, Monaldo Mastrolilli

Cooperative TSP

In this paper we introduce and study cooperative variants of the Traveling Salesperson Problem. In these problems a salesperson has to make deliveries to customers who are willing to help in the process. Customer cooperativeness may be manifested in several modes: they may assist by approaching the salesperson, by reselling the goods they purchased to other customers, or by doing both.

Several objectives are of interest: minimizing the total distance traveled by all the participants, minimizing the maximal distance traveled by a participant, and minimizing the total time until all the deliveries are made.

All the combinations of cooperation-modes and objective functions are considered, both in weighted undirected graphs and in Euclidean space. We show that most of the problems have a constant approximation algorithm, many of the others admit a PTAS, and a few are solvable in polynomial time. On the intractability side we provide NP-hardness proofs and inapproximability factors, some of which are tight.

Amitai Armon, Adi Avidor, Oded Schwartz

Fréchet Distance for Curves, Revisited

We revisit the problem of computing the Fréchet distance between polygonal curves, focusing on the discrete Fréchet distance, where only distances between vertices are considered. We develop efficient approximation algorithms for two natural classes of curves: κ-bounded curves and backbone curves, the latter of which are widely used to model molecular structures. We also propose a pseudo–output-sensitive algorithm for computing the discrete Fréchet distance exactly. The complexity of the algorithm is a function of the complexity of the free-space boundary, which is quadratic in the worst case, but tends to be lower in practice.
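As an illustration of the measure being computed (not of the paper's approximation or output-sensitive algorithms), the discrete Fréchet distance admits a simple quadratic dynamic program over vertex pairs. A minimal Python sketch, where curves are given as lists of point tuples:

```python
from functools import lru_cache
from math import dist  # Euclidean distance between two points (Python 3.8+)

def discrete_frechet(P, Q):
    """Discrete Frechet distance between vertex sequences P and Q.

    Classic O(|P|*|Q|) dynamic program: d(i, j) is the distance when
    the two walkers stand on P[i] and Q[j], each allowed to advance
    one vertex at a time (or stay), never moving backwards."""
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == 0 and j == 0:
            return dist(P[0], Q[0])
        if i == 0:
            return max(d(0, j - 1), dist(P[0], Q[j]))
        if j == 0:
            return max(d(i - 1, 0), dist(P[i], Q[0]))
        return max(min(d(i - 1, j), d(i - 1, j - 1), d(i, j - 1)),
                   dist(P[i], Q[j]))
    return d(len(P) - 1, len(Q) - 1)
```

This quadratic baseline is exactly what the free-space-boundary analysis in the paper improves upon in practice.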

Boris Aronov, Sariel Har-Peled, Christian Knauer, Yusu Wang, Carola Wenk

Resource Allocation in Bounded Degree Trees

We study the bandwidth allocation problem (bap) in bounded degree trees. In this problem we are given a tree and a set of connection requests. Each request consists of a path in the tree, a bandwidth requirement, and a weight. Our goal is to find a maximum weight subset S of requests such that, for every edge e, the total bandwidth of requests in S whose path contains e is at most 1. We also consider the storage allocation problem (sap), in which it is also required that every request in the solution is given the same contiguous portion of the resource in every edge in its path. We present a deterministic approximation algorithm for bap in bounded degree trees with ratio $(2\sqrt{e}-1)/(\sqrt{e}-1)+\epsilon < 3.542$. Our algorithm is based on a novel application of the local ratio technique in which the available bandwidth is divided into narrow strips and requests with very small bandwidths are allocated in these strips. We also present a randomized (2+ε)-approximation algorithm for bap in bounded degree trees. The best previously known ratio for bap in general trees is 5. We present a reduction from sap to bap that works for instances where the tree is a line and the bandwidths are very small. It follows that there exists a (2+ε)-approximation algorithm for sap in the line. The best previously known ratio for this problem is 7.

Reuven Bar-Yehuda, Michael Beder, Yuval Cohen, Dror Rawitz

Dynamic Algorithms for Graph Spanners

Let G=(V, E) be an undirected weighted graph on |V|=n vertices and |E|=m edges. For the graph G, a spanner with stretch t∈ℕ is a subgraph (V, E_S), E_S ⊆ E, such that the distance between any pair of vertices in this subgraph is at most t times the distance between them in the graph G. We present simple and efficient dynamic algorithms for maintaining spanners with essentially optimal (expected) size versus stretch trade-off for any given unweighted graph. The main result is a decremental algorithm that takes expected $O(\mathop{\mathrm{polylog}} n)$ time per edge deletion for maintaining a spanner with arbitrary stretch. This algorithm easily leads to a fully dynamic algorithm with sublinear (in n) time per edge insertion or deletion. Quite interestingly, this paper also reports that for stretch at most 6, it is possible to maintain a spanner fully dynamically with expected constant time per update. All these algorithms use simple randomization techniques on top of an existing static algorithm [6] for computing spanners, and achieve drastic improvement over the previous best deterministic dynamic algorithms for spanners.

Surender Baswana

Latency Constrained Aggregation in Sensor Networks

A sensor network consists of sensing devices which may exchange data through wireless communication; sensor networks are highly energy constrained since they are usually battery operated. Data aggregation is a possible way to save energy consumption: nodes may delay data in order to aggregate them into a single packet before forwarding them towards some central node (sink). However, many applications impose constraints on data freshness; this translates into latency constraints for data arriving at the sink.

We study the problem of data aggregation to minimize maximum energy consumption under latency constraints on sensed data delivery and we assume unique transmission paths that form a tree rooted at the sink. We prove that the off-line problem is strongly NP-hard and we design a 2-approximation algorithm. The latter uses a novel rounding technique.

Almost all real life sensor networks are managed on-line by simple distributed algorithms in the nodes. In this context we consider both the case in which sensor nodes are synchronized and the case in which they are not. We consider distributed on-line algorithms and use competitive analysis to assess their performance.

Luca Becchetti, Peter Korteweg, Alberto Marchetti-Spaccamela, Martin Skutella, Leen Stougie, Andrea Vitaletti

Competitive Analysis of Flash-Memory Algorithms

The cells of flash memories can only endure a limited number of write cycles, usually between 10,000 and 1,000,000. Furthermore, cells containing data must be erased before they can store new data, and erasure operations erase large blocks of memory, not individual cells. To maximize the endurance of the device (the amount of useful data that can be written to it before one of its cells wears out), flash-based systems move data around in an attempt to reduce the total number of erasures and to level the wear of the different erase blocks. This data movement introduces interesting online problems called

wear-leveling problems

. We show that a simple randomized algorithm for one problem is essentially optimal. For a more difficult problem, we show that clever offline algorithms can improve upon naive approaches, but online algorithms essentially cannot.

Avraham Ben-Aroya, Sivan Toledo

Contention Resolution with Heterogeneous Job Sizes

We study the problem of contention resolution for different-sized jobs on a simple channel. When a job makes a run attempt, it learns only whether the attempt succeeded or failed. We first analyze binary exponential backoff, and show that it achieves a makespan of $V2^{\Theta(\sqrt{\log{n}})}$ with high probability, where V is the total work of all n contending jobs. This bound is significantly larger than when jobs are constant sized. A variant of exponential backoff, however, achieves makespan O(V log V) with high probability. Finally, we introduce a new protocol, size-hashed backoff, specifically designed for jobs of multiple sizes, that achieves makespan $O(V \log^3 \log V)$. The error probability of the first two bounds is polynomially small in n and the latter is polynomially small in log V.
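For intuition, binary exponential backoff for unit-size jobs can be sketched as a toy single-channel simulation. This is a simplified model (one probabilistic transmission per round, windows doubling on collision), not the variable-job-size protocol the paper analyzes:

```python
import random

def backoff_makespan(n, seed=0):
    """Toy simulation of binary exponential backoff: n unit-size jobs
    share one channel; in each round a pending job transmits with
    probability 1/window, a round succeeds for a job iff it is the
    only transmitter, and a collision doubles each colliding job's
    window. Returns the number of rounds until all jobs finish."""
    rng = random.Random(seed)
    windows = [1] * n              # current backoff window per job
    pending = set(range(n))
    t = 0
    while pending:
        t += 1
        transmitting = [j for j in pending if rng.random() < 1.0 / windows[j]]
        if len(transmitting) == 1:
            pending.discard(transmitting[0])   # lone transmitter succeeds
        else:
            for j in transmitting:
                windows[j] *= 2                # collision: back off further
    return t
```

Running this for growing n gives a feel for why makespan exceeds the total work V when jobs contend.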

Michael A. Bender, Jeremy T. Fineman, Seth Gilbert

Deciding Relaxed Two-Colorability—A Hardness Jump

A coloring is proper if each color class induces connected components of order one (where the order of a graph is its number of vertices). Here we study relaxations of proper two-colorings, such that the order of the induced monochromatic components in one (or both) of the color classes is bounded by a constant. In a (C₁, C₂)-relaxed coloring of a graph G every monochromatic component induced by vertices of the first (second) color is of order at most C₁ (C₂, resp.). We are mostly concerned with (1, C)-relaxed colorings, in other words when/how is it possible to break up a graph into small components with the removal of an independent set.

We prove that every graph of maximum degree at most three can be (1,22)-relaxed colored and we give a quasilinear algorithm which constructs such a coloring. We also show that a similar statement cannot be true for graphs of maximum degree at most 4 in a very strong sense: we construct 4-regular graphs such that the removal of any independent set leaves a connected component whose order is linear in the number of vertices.

Furthermore we investigate the complexity of the decision problem (Δ, C)-AsymRelCol: Given a graph G of maximum degree at most Δ, is there a (1, C)-relaxed coloring of G? We find a remarkable hardness jump in the behavior of this problem. We note that there is not even an obvious monotonicity in the hardness of the problem as C grows, i.e. the hardness for component order C+1 does not imply directly the hardness for C. In fact for C=1 the problem is obviously polynomial-time decidable, while it is shown that it is NP-hard for C=2 and Δ≥3.

For arbitrary Δ≥2 we still establish the monotonicity of hardness of (Δ, C)-AsymRelCol on the interval 2≤C≤∞ in the following strong sense. There exists a critical component order f(Δ)∈ℕ∪{∞} such that the problem of deciding (1, C)-relaxed colorability of graphs of maximum degree at most Δ is NP-complete for every 2≤C<f(Δ), while deciding (1, f(Δ))-colorability is trivial: every graph of maximum degree Δ is (1, f(Δ))-colorable. For Δ=3 the existence of this threshold is shown despite the fact that we do not know its precise value, only 6≤f(3)≤22. For any Δ≥4, (Δ, C)-AsymRelCol is NP-complete for arbitrary C≥2, so f(Δ)=∞.

We also study the symmetric version of the relaxed coloring problem, and make the first steps towards establishing a similar hardness jump.

Robert Berke, Tibor Szabó

Negative Examples for Sequential Importance Sampling of Binary Contingency Tables

The sequential importance sampling (SIS) algorithm has gained considerable popularity for its empirical success. One of its noted applications is to the binary contingency tables problem, an important problem in statistics, where the goal is to estimate the number of 0/1 matrices with prescribed row and column sums. We give a family of examples in which the SIS procedure, if run for any subexponential number of trials, will underestimate the number of tables by an exponential factor. This result holds for any of the usual design choices in the SIS algorithm, namely the ordering of the columns and rows. These are apparently the first theoretical results on the efficiency of the SIS algorithm for binary contingency tables. Finally, we present experimental evidence that the SIS algorithm is efficient for row and column sums that are regular. Our work is a first step in determining rigorously the class of inputs for which SIS is effective.
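To make the SIS setting concrete, here is a minimal column-by-column estimator for counting binary contingency tables, together with a brute-force reference. The proposal used here (choose each column's 1-entries uniformly among rows with remaining capacity) is one simple, standard design choice for illustration; it is not claimed to be the exact variant analyzed in the paper:

```python
import random
from itertools import product
from math import comb

def sis_estimate(row_sums, col_sums, trials=2000, seed=0):
    """Unbiased SIS estimate of the number of 0/1 matrices with the
    given row and column sums: fill columns left to right, weighting
    each trial by the inverse of its proposal probability."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        remaining = list(row_sums)
        weight = 1.0
        for c in col_sums:
            valid = [i for i, r in enumerate(remaining) if r > 0]
            if len(valid) < c:
                weight = 0.0               # dead end: cannot complete
                break
            weight *= comb(len(valid), c)  # 1 / probability of this choice
            for i in rng.sample(valid, c):
                remaining[i] -= 1
        if weight and any(remaining):
            weight = 0.0                   # row sums not met exactly
        total += weight
    return total / trials

def exact_count(row_sums, col_sums):
    """Brute-force reference count (tiny instances only)."""
    m, n = len(row_sums), len(col_sums)
    count = 0
    for bits in product((0, 1), repeat=m * n):
        M = [bits[i * n:(i + 1) * n] for i in range(m)]
        if ([sum(row) for row in M] == list(row_sums)
                and [sum(col) for col in zip(*M)] == list(col_sums)):
            count += 1
    return count
```

The paper's negative examples exhibit margins on which such an estimator needs exponentially many trials to avoid exponential underestimation.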

Ivona Bezáková, Alistair Sinclair, Daniel Štefankovič, Eric Vigoda

Estimating Entropy over Data Streams

We present an algorithm for estimating the entropy of data streams consisting of insertion and deletion operations using $\tilde{O}(1)$ space.

Lakshminath Bhuvanagiri, Sumit Ganguly

Necklaces, Convolutions, and X + Y

We give subquadratic algorithms that, given two necklaces each with n beads at arbitrary positions, compute the optimal rotation of the necklaces to best align the beads. Here alignment is measured according to the ℓ_p norm of the vector of distances between pairs of beads from opposite necklaces in the best perfect matching. We show surprisingly different results for p=1, p=2, and p=∞. For p=2, we reduce the problem to standard convolution, while for p=∞ and p=1, we reduce the problem to (min,+) convolution and (median,+) convolution. Then we solve the latter two convolution problems in subquadratic time, which are interesting results in their own right. These results shed some light on the classic sorting X+Y problem, because the convolutions can be viewed as computing order statistics on the antidiagonals of the X+Y matrix. All of our algorithms run in o(n²) time, whereas the obvious algorithms for these problems run in Θ(n²) time.
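For reference, the (min,+) convolution that the p=∞ case reduces to is defined as follows; this straightforward Θ(n²) sketch is exactly the "obvious" baseline the paper improves to subquadratic time:

```python
def minplus_convolution(a, b):
    """(min,+) convolution of sequences a and b:
    c[k] = min over all i + j = k of a[i] + b[j].
    Straightforward Theta(n^2) evaluation."""
    n, m = len(a), len(b)
    return [min(a[i] + b[k - i]
                for i in range(max(0, k - m + 1), min(k + 1, n)))
            for k in range(n + m - 1)]
```

Replacing min by + (over a suitable semiring) recovers standard convolution, which is why the p=2 case is solvable by FFT-style techniques.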

David Bremner, Timothy M. Chan, Erik D. Demaine, Jeff Erickson, Ferran Hurtado, John Iacono, Stefan Langerman, Perouz Taslakian

Purely Functional Worst Case Constant Time Catenable Sorted Lists

We present a purely functional implementation of search trees that requires O(log n) time for search and update operations and supports the join of two trees in worst case constant time. Hence, we solve an open problem posed by Kaplan and Tarjan as to whether it is possible to envisage a data structure supporting simultaneously the join operation in O(1) time and the search and update operations in O(log n) time.

Gerth Stølting Brodal, Christos Makris, Kostas Tsichlas

Taxes for Linear Atomic Congestion Games

We study congestion games where players aim to access a set of resources. Each player has a set of possible strategies and each resource has a function associating the latency it incurs to the players using it. Players are non–cooperative and each wishes to follow strategies that minimize her own latency with no regard to the global optimum. Previous work has studied the impact of this selfish behavior to system performance. In this paper, we study the question of how much the performance can be improved if players are forced to pay taxes for using resources. Our objective is to extend the original game so that selfish behavior does not deteriorate performance. We consider atomic congestion games with linear latency functions and present both negative and positive results. Our negative results show that optimal system performance cannot be achieved even in very simple games. On the positive side, we show that there are ways to assign taxes that can improve the performance of linear congestion games by forcing players to follow strategies where the total latency suffered is within a factor of 2 of the minimum possible; this result is shown to be tight. Furthermore, even in cases where in the absence of taxes the system behavior may be very poor, we show that the total disutility of players (latency plus taxes) is not much larger than the optimal total latency. Besides existential results, we show how to compute taxes in time polynomial in the size of the game by solving convex quadratic programs. Similar questions have been extensively studied in the model of non-atomic congestion games. To the best of our knowledge, this is the first study of the efficiency of taxes in atomic congestion games.

Ioannis Caragiannis, Christos Kaklamanis, Panagiotis Kanellopoulos

Spanners with Slack

Given a metric (V, d), a spanner is a sparse graph whose shortest-path metric approximates the distance d to within a small multiplicative distortion. In this paper, we study the problem of spanners with slack: e.g., can we find sparse spanners where we are allowed to incur an arbitrarily large distortion on a small constant fraction of the distances, but are then required to incur only a constant (independent of n) distortion on the remaining distances? We answer this question in the affirmative, thus complementing similar recent results on embeddings with slack into ℓ_p spaces. For instance, we show that if we ignore an ε fraction of the distances, we can get spanners with O(n) edges and $O(\log {\frac{1}{\epsilon}})$ distortion for the remaining distances.

We also show how to obtain sparse and low-weight spanners with slack from existing constructions of conventional spanners, and these techniques allow us to also obtain the best known results for distance oracles and distance labelings with slack. This paper complements similar results obtained in recent research on slack embeddings into normed metric spaces.

T. -H. Hubert Chan, Michael Dinitz, Anupam Gupta

Compressed Indexes for Approximate String Matching

We revisit the problem of indexing a string S[1..n] to support searching all substrings in S that match a given pattern P[1..m] with at most k errors. Previous solutions either require an index of size exponential in k or need Ω(m^k) time for searching. Motivated by the indexing of DNA sequences, we investigate space efficient indexes that occupy only O(n) space. For k=1, we give an index to support matching in O(m + occ + log n loglog n) time. The previously best solution achieving this time complexity requires an index of size O(n log n). This new index can be used to improve existing indexes for k≥2 errors. Among others, it can support matching with k=2 errors in O(m log n loglog n + occ) time.
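For contrast with the indexed bounds above, the index-free baseline is a naive O(nm) scan. The sketch below restricts errors to Hamming mismatches for simplicity (the paper's indexes handle errors more generally), and reports every starting position with at most k mismatches:

```python
def search_with_mismatches(S, P, k):
    """Naive baseline: return all positions i such that S[i:i+m]
    matches P with at most k character mismatches (Hamming errors).
    O(n*m) in the worst case -- the cost an index avoids per query."""
    n, m = len(S), len(P)
    hits = []
    for i in range(n - m + 1):
        errs = 0
        for a, b in zip(S[i:i + m], P):
            errs += a != b
            if errs > k:        # early exit once the budget is spent
                break
        if errs <= k:
            hits.append(i)
    return hits
```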

Ho-Leung Chan, Tak-Wah Lam, Wing-Kin Sung, Siu-Lung Tam, Swee-Seong Wong

Traversing the Machining Graph

Zigzag pocket machining (or 2D-milling) plays an important role in the manufacturing industry. The objective is to minimize the number of tool retractions in the zigzag machining path for a given pocket (i.e., a planar domain). We give an optimal linear time dynamic programming algorithm for simply connected pockets, and a linear plus $O(1)^{O(h)}$ time optimal algorithm for pockets with h holes. If the dual graph of the zigzag line segment partition of the given pocket is a partial k-tree of bounded degree or a k-outerplanar graph, for a fixed k, we solve the problem optimally in time O(n log n). Finally, we propose a polynomial time algorithm for finding a machining path for a general pocket with h holes using at most OPT + εh retractions, where OPT is the smallest possible number of retractions and ε>0 is any constant.

Danny Z. Chen, Rudolf Fleischer, Jian Li, Haitao Wang, Hong Zhu

Efficient Computation of Nash Equilibria for Very Sparse Win-Lose Bimatrix Games

It is known that finding a Nash equilibrium for win-lose bimatrix games, i.e., two-player games where the players’ payoffs are zero and one, is complete for the class PPAD.

We describe a linear time algorithm which computes a Nash equilibrium for win-lose bimatrix games where the number of winning positions per strategy of each of the players is at most two.

The algorithm acts on the directed graph that represents the zero-one pattern of the payoff matrices describing the game. It is based upon the efficient detection of certain subgraphs which enable us to determine the support of a Nash equilibrium.

Bruno Codenotti, Mauro Leoncini, Giovanni Resta

Distributed Almost Exact Approximations for Minor-Closed Families

We give efficient deterministic distributed algorithms which, given a graph G from a proper minor-closed family $\mathcal{C}$, find an approximation of a minimum dominating set in G and of a minimum connected dominating set in G. The algorithms run in a poly-logarithmic number of rounds, and the approximation differs from an optimal by a multiplicative factor of (1+o(1)).

Andrzej Czygrinow, Michał Hańćkowiak

Spectral Clustering by Recursive Partitioning

In this paper, we analyze the second eigenvector technique of spectral partitioning on the planted partition random graph model, by constructing a recursive algorithm using the second eigenvectors in order to learn the planted partitions. The correctness of our algorithm is not based on the ratio-cut interpretation of the second eigenvector, but exploits instead the stability of the eigenvector subspace. As a result, we get an improved cluster separation bound in terms of dependence on the maximum variance. We also extend our results for a clustering problem in the case of sparse graphs.

Anirban Dasgupta, John Hopcroft, Ravi Kannan, Pradipta Mitra

Finite Termination of “Augmenting Path” Algorithms in the Presence of Irrational Problem Data

This paper considers two similar graph algorithms that work by repeatedly increasing “flow” along “augmenting paths”: the Ford-Fulkerson algorithm for the maximum flow problem and the Gale-Shapley algorithm for the stable allocation problem (a many-to-many generalization of the stable matching problem). Both algorithms clearly terminate when given integral input data. For real-valued input data, it was previously known that the Ford-Fulkerson algorithm runs in polynomial time if augmenting paths are chosen via breadth-first search, but that the algorithm might fail to terminate if augmenting paths are chosen in an arbitrary fashion. However, the performance of the Gale-Shapley algorithm on real-valued data was unresolved. Our main result shows that, in contrast to the Ford-Fulkerson algorithm, the Gale-Shapley algorithm always terminates in finite time on real-valued data. Although the Gale-Shapley algorithm may take exponential time in the worst case, it is a popular algorithm in practice due to its simplicity and the fact that it often runs very quickly (even in sublinear time) for many inputs encountered in practice. We also study the Ford-Fulkerson algorithm when augmenting paths are chosen via depth-first search, a common implementation in practice. We prove that, like breadth-first search, depth-first search also leads to finite termination (although not necessarily in polynomial time).
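For readers unfamiliar with the proposal dynamics involved, here is a sketch of the classic one-to-one Gale-Shapley algorithm (the paper studies its many-to-many "stable allocation" generalization with real-valued quantities); preference lists are assumed complete:

```python
from collections import deque

def gale_shapley(proposer_prefs, acceptor_prefs):
    """Proposer-optimal Gale-Shapley for one-to-one stable matching.
    proposer_prefs / acceptor_prefs map each person to an ordered
    preference list. Returns a dict mapping acceptor -> proposer."""
    # rank[a][p]: position of proposer p in a's list (lower = better)
    rank = {a: {p: r for r, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    free = deque(proposer_prefs)           # proposers not yet matched
    next_idx = {p: 0 for p in proposer_prefs}
    match = {}                             # acceptor -> proposer
    while free:
        p = free.popleft()
        a = proposer_prefs[p][next_idx[p]] # best not-yet-tried acceptor
        next_idx[p] += 1
        if a not in match:
            match[a] = p
        elif rank[a][p] < rank[a][match[a]]:
            free.append(match[a])          # a trades up; old partner freed
            match[a] = p
        else:
            free.append(p)                 # a rejects p; p tries the next
    return match
```

Each proposer walks down its list at most once, which is why termination is immediate in the integral one-to-one setting; the paper's contribution concerns the far less obvious real-valued allocation case.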

Brian C. Dean, Michel X. Goemans, Nicole Immorlica

Dynamic Programming and Fast Matrix Multiplication

We give a novel general approach for solving NP-hard optimization problems that combines dynamic programming and fast matrix multiplication. The technique is based on reducing much of the computation involved to matrix multiplication. We show that our approach works faster than the usual dynamic programming solution for any vertex subset problem on graphs of bounded branchwidth. In particular, we obtain the fastest algorithms for Planar Independent Set with runtime $O(2^{2.52 \sqrt{n}})$, for Planar Dominating Set with exact runtime $O(2^{3.99 \sqrt{n}})$ and parameterized runtime $O(2^{11.98 \sqrt{k}}) \cdot n^{O(1)}$, and for Planar Hamiltonian Cycle with runtime $O(2^{5.58 \sqrt{n}})$. The exponent of the running time depends heavily on the running time of the fastest matrix multiplication algorithm, which is currently $O(n^{2.376})$.

Frederic Dorn

Near-Entropy Hotlink Assignments

Consider a rooted tree T of arbitrary maximum degree d representing a collection of n web pages connected via a set of links, all reachable from a source home page represented by the root of T. Each web page i carries a weight w_i representative of the frequency with which it is visited. By adding hotlinks (shortcuts from a node to one of its descendants) we wish to minimize the expected number of steps l needed to visit pages from the home page, expressed as a function of the entropy H(p) of the access probabilities p. This paper introduces several new strategies for effectively assigning hotlinks in a tree. For assigning exactly one hotlink per node, our method guarantees an upper bound on l of 1.141 H(p)+1 if d>2 and 1.08 H(p)+2/3 if d=2. We also present the first efficient general methods for assigning at most k hotlinks per node in trees of arbitrary maximum degree, achieving bounds on l of at most $\frac{2H(p)}{\log (k+1)}$ and $\frac{H(p)}{\log (k+d) - \log d}$, respectively. Finally, we present an algorithm implementing these methods in O(n log n) time, an improvement over the previous O(n²) time algorithms.

Karim Douïeb, Stefan Langerman

Subspace Sampling and Relative-Error Matrix Approximation: Column-Row-Based Methods

Much recent work in theoretical computer science, linear algebra, and machine learning has considered matrix decompositions of the following form: given an m×n matrix A, decompose it as a product of three matrices, C, U, and R, where C consists of a small number of columns of A, R consists of a small number of rows of A, and U is a small carefully constructed matrix that guarantees that the product CUR is “close” to A. Applications of such decompositions include the computation of matrix “sketches”, speeding up kernel-based statistical learning, preserving sparsity in low-rank matrix representation, and improved interpretability of data analysis methods. Our main result is a randomized, polynomial time algorithm which, given as input an m×n matrix A, returns as output matrices C, U, R such that

$$\|{A-CUR}\|_F \leq (1+\epsilon)\|{A-A_k}\|_F$$

with probability at least 1–δ. Here, A_k is the “best” rank-k approximation (provided by truncating the Singular Value Decomposition of A), and $\|X\|_F$ is the Frobenius norm of the matrix X. The number of columns in C and rows in R is a low-degree polynomial in k, 1/ε, and log(1/δ). Our main result is obtained by an extension of our recent relative error approximation algorithm for ℓ_2 regression from overconstrained problems to general ℓ_2 regression problems. Our algorithm is simple, and it takes time of the order of the time needed to compute the top k right singular vectors of A. In addition, it samples the columns and rows of A via the method of “subspace sampling,” so-named since the sampling probabilities depend on the lengths of the rows of the top singular vectors, and since they ensure that we capture entirely a certain subspace of interest.
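To make the shape of a CUR decomposition concrete, consider the easiest possible case: for an exactly rank-1 matrix, a single actual column C, a single actual row R, and a 1×1 linking matrix U already reconstruct A exactly. This toy demonstration only illustrates the C·U·R structure; the paper's subspace-sampling algorithm is what achieves the (1+ε)-relative-error guarantee for arbitrary A and k:

```python
import numpy as np

# A rank-1 matrix: every column is a multiple of every other column.
A = np.outer([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])   # 3 x 3, rank 1

C = A[:, [0]]                    # a few actual columns of A (here: one)
R = A[[0], :]                    # a few actual rows of A (here: one)
U = np.array([[1.0 / A[0, 0]]])  # small linking matrix

# C @ U @ R rebuilds A exactly because A has rank 1.
err = np.linalg.norm(A - C @ U @ R, 'fro')
```

For general matrices the columns and rows must be sampled carefully (the paper's "subspace sampling"), and the reconstruction is only (1+ε)-close to the best rank-k approximation rather than exact.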

Petros Drineas, Michael W. Mahoney, S. Muthukrishnan

Finding Total Unimodularity in Optimization Problems Solved by Linear Programs

A popular approach in combinatorial optimization is to model problems as integer linear programs. Ideally, the relaxed linear program would have only integer solutions, which happens for instance when the constraint matrix is totally unimodular. Still, sometimes it is possible to build an integer solution with the same cost from the fractional solution. Examples are two scheduling problems [4,5] and the single disk prefetching/caching problem [3]. We show that problems such as the three previously mentioned can be separated into two subproblems: (1) finding an optimal feasible set of slots, and (2) assigning the jobs or pages to the slots. It is straightforward to show that the latter can be solved greedily. We are able to solve the former with a totally unimodular linear program, from which we obtain simple combinatorial algorithms with improved worst case running time.

Christoph Dürr, Mathilde Hurand

Preemptive Online Scheduling: Optimal Algorithms for All Speeds

Our main result is an optimal online algorithm for preemptive scheduling on uniformly related machines with the objective to minimize makespan. The algorithm is deterministic, yet it is optimal even among all randomized algorithms. In addition, it is optimal for any fixed combination of speeds of the machines, and thus our results subsume all the previous work on various special cases. Together with a new lower bound it follows that the overall competitive ratio of this optimal algorithm is between 2.054 and e ≈ 2.718.

Tomáš Ebenlendr, Wojciech Jawor, Jiří Sgall

On the Complexity of the Multiplication Method for Monotone CNF/DNF Dualization

Given the irredundant CNF representation φ of a monotone Boolean function f : {0,1}^n ↦ {0,1}, the dualization problem calls for finding the corresponding unique irredundant DNF representation ψ of f. The (generalized) multiplication method works by repeatedly dividing the clauses of φ into (not necessarily disjoint) groups, multiplying out the clauses in each group, and then reducing the result by applying the absorption law. We present the first non-trivial upper bounds on the complexity of this multiplication method. Precisely, we show that if the grouping of the clauses is done in an output-independent way, then multiplication can be performed in sub-exponential time $(n|\psi|)^{(\sqrt{|\phi|})}|\phi|^{O(\log n)}$. On the other hand, multiplication can be carried out in quasi-polynomial time $\mathrm{poly}(n,|\psi|)\cdot|\psi|^{o(\log|\psi|)}$, provided that the grouping is done depending on the intermediate outputs produced during the multiplication process.

Khaled M. Elbassioni

Lower and Upper Bounds on FIFO Buffer Management in QoS Switches

We consider FIFO buffer management for switches providing differentiated services. In each time step, an arbitrary number of packets arrive, and only one packet can be sent. The buffer can store a limited number of packets, and, due to the FIFO property, the sequence of sent packets has to be a subsequence of the arriving packets. The differentiated service model is abstracted by attributing each packet with a value according to its service level. A buffer management strategy can drop packets. The goal is to maximize the sum of values of sent packets.

For only two different packet values, we introduce the account strategy and prove that this strategy achieves an optimal competitive ratio of ≈1.282 if the buffer size tends to infinity, and an optimal competitive ratio of $(\sqrt{13}-1)/2 \approx 1.303$ for arbitrary buffer sizes. For general packet values, the simple preemptive greedy strategy (PG) is studied. We show that PG achieves a competitive ratio of $\sqrt{3} \approx 1.732$, which is the best known upper bound on the competitive ratio of this problem. In addition, we give a lower bound of $1 + 1/\sqrt{2} \approx 1.707$ on the competitive ratio of PG, which improves the previously known lower bound. As a consequence, the competitive ratio of PG cannot be further improved significantly.

Matthias Englert, Matthias Westermann

Graph Coloring with Rejection

We consider the following vertex coloring problem. We are given an undirected graph G = (V, E), where each vertex v is associated with a penalty rejection cost r_v. We need to choose a subset of vertices, V′, and to find a proper coloring of the induced subgraph of G over V′. We are interested both in the number of colors used to color the vertices of V′ and in the total rejection cost of all other vertices. The goal is to minimize the sum of these two amounts. In this paper we consider both the online and the offline versions of this problem. In the online version, vertices arrive one at a time, revealing the rejection cost of the current vertex and the set of edges connecting it to previously revealed vertices. We also consider the classical online coloring problem on bounded-degree graphs and on (k+1)-claw free graphs.

Leah Epstein, Asaf Levin, Gerhard J. Woeginger

A Doubling Dimension Threshold Θ(log log n) for Augmented Graph Navigability

In his seminal work, Kleinberg showed how to augment meshes using random edges, so that they become navigable; that is, greedy routing computes paths of polylogarithmic expected length between any pair of nodes. This yields the crucial question of determining whether such an augmentation is possible for all graphs. In this paper, we answer this question negatively by exhibiting a threshold on the doubling dimension, above which an infinite family of graphs cannot be augmented to become navigable, whatever the distribution of random edges is. Precisely, it was known that graphs of doubling dimension at most O(log log n) are navigable. We show that for doubling dimension ≫ log log n, an infinite family of graphs cannot be augmented to become navigable. Finally, we complete our result by studying the special case of square meshes, which we prove can always be augmented to become navigable.

Pierre Fraigniaud, Emmanuelle Lebhar, Zvi Lotker

Violator Spaces: Structure and Algorithms

Sharir and Welzl introduced an abstract framework for optimization problems, called LP-type problems or also generalized linear programming problems, which proved useful in algorithm design. We define a new and, as we believe, simpler and more natural framework: violator spaces, which constitute a proper generalization of LP-type problems. We show that Clarkson’s randomized algorithms for low-dimensional linear programming work in the context of violator spaces. For example, in this way we obtain the fastest known algorithm for the P-matrix generalized linear complementarity problem with a constant number of blocks. We also give two new characterizations of LP-type problems: they are equivalent to acyclic violator spaces, as well as to concrete LP-type problems (informally, the constraints in a concrete LP-type problem are subsets of a linearly ordered ground set, and the value of a set of constraints is the minimum of its intersection).

Bernd Gärtner, Jiří Matoušek, Leo Rüst, Petr Škovroň

Region-Restricted Clustering for Geographic Data Mining

Cluster detection for a set P of n points in geographic situations is usually dependent on land cover or another thematic map layer. This occurs, for instance, if the points of P can only occur in one land cover type. We extend the definition of clusters to region-restricted clusters, and give efficient algorithms for exact computation and approximation. The algorithm determines all axis-parallel squares with exactly m out of n points inside, size at most some prespecified value, and area of a given land cover type at most another prespecified value. The exact algorithm runs in O(nm log² n + (nm + n·n_f) log² n_f) time, where n_f is the number of edges that bound the regions with the given land cover type. The approximation algorithm allows the square to be a factor 1+ε too large, and runs in O(n log n + n/ε² + n_f log² n_f + (n log² n_f)/ε²) time. We also show how to compute largest clusters and outliers.

Joachim Gudmundsson, Marc van Kreveld, Giri Narasimhan

An O(n³(log log n/log n)^{5/4}) Time Algorithm for All Pairs Shortest Paths

We present an O(n³(log log n/log n)^{5/4}) time algorithm for all pairs shortest paths. This algorithm improves on the best previous result of O(n³/log n) time.

Yijie Han

Cheating by Men in the Gale-Shapley Stable Matching Algorithm

This paper addresses strategies for the stable marriage problem. For the Gale-Shapley algorithm with men proposing, a classical theorem states that it is impossible for every cheating man to get a better partner than the one he gets if everyone is truthful. We study how to circumvent this theorem and incite men to cheat. First we devise coalitions in which a non-empty subset of the liars get better partners and no man is worse off than before. This strategy is limited in that not everyone in the coalition has the incentive to falsify his list. In an attempt to rectify this situation we introduce the element of randomness, but the theorem shows surprising robustness: it is impossible that every liar has a chance to improve the rank of his partner while no one gets hurt. To overcome the problem that some men lack the motivation to lie, we exhibit another randomized lying strategy in which every liar can expect to get a better partner on average, though with a chance of getting a worse one. Finally, we consider a variant scenario: instead of using the Gale-Shapley algorithm, suppose the stable matching is chosen at random. We present a modified form of the coalition strategy ensuring that every man in the coalition has a new probability distribution over partners which majorizes the original one.

Chien-Chung Huang
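For reference, the men-proposing Gale-Shapley algorithm that these cheating strategies target can be sketched as follows (a minimal textbook sketch, not code from the paper):

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Men-proposing Gale-Shapley; returns the man-optimal stable matching
    as a dict woman -> man.  Each list ranks all members of the other side."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # index of next woman to try
    engaged_to = {}                           # woman -> current fiance
    free = deque(men_prefs)
    while free:
        m = free.popleft()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m                 # w was free: accept
        elif rank[w][m] < rank[w][engaged_to[w]]:
            free.append(engaged_to[w])        # w trades up; old fiance is free
            engaged_to[w] = m
        else:
            free.append(m)                    # w rejects m
    return engaged_to
```

The cheating strategies studied in the paper work by having a coalition of men submit falsified preference lists to this procedure.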

Approximating Almost All Instances of Max-Cut Within a Ratio Above the Håstad Threshold

We give a deterministic polynomial-time algorithm which, for any given average degree d and asymptotically almost all random graphs G in $\mathcal G(n, m= \lfloor\frac{d}{2}n\rfloor)$, outputs a cut of G whose ratio (in cardinality) with the maximum cut is at least 0.952. We remind the reader that it is known that, unless P=NP, for no constant ε > 0 is there a Max-Cut approximation algorithm that for all inputs achieves an approximation ratio of (16/17) + ε (16/17 < 0.94118).

A. C. Kaporis, L. M. Kirousis, E. C. Stavropoulos

Enumerating Spanning and Connected Subsets in Graphs and Matroids

We show that enumerating all minimal spanning and connected subsets of a given matroid can be solved in incremental quasi-polynomial time. In the special case of graphical matroids, we improve this complexity bound by showing that all minimal 2-vertex connected edge subsets of a given graph can be generated in incremental polynomial time.

L. Khachiyan, E. Boros, K. Borys, K. Elbassioni, V. Gurvich, K. Makino

Less Hashing, Same Performance: Building a Better Bloom Filter

A standard technique from the hashing literature is to use two hash functions h_1(x) and h_2(x) to simulate additional hash functions of the form g_i(x) = h_1(x) + i·h_2(x). We demonstrate that this technique can be usefully applied to Bloom filters and related data structures. Specifically, only two hash functions are necessary to effectively implement a Bloom filter without any loss in the asymptotic false positive probability. This leads to less computation and potentially less need for randomness in practice.

Adam Kirsch, Michael Mitzenmacher
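The double-hashing trick g_i(x) = h_1(x) + i·h_2(x) is easy to sketch (a minimal illustration deriving the two base hashes from SHA-256; the class and parameter names are ours):

```python
import hashlib

class TwoHashBloom:
    """Bloom filter whose k probe positions are simulated from two base
    hashes via g_i(x) = h1(x) + i*h2(x) mod m."""
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)
    def _h(self, item):
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:8], 'big') % self.m
        h2 = int.from_bytes(d[8:16], 'big') % self.m or 1  # avoid h2 == 0
        return h1, h2
    def add(self, item):
        h1, h2 = self._h(item)
        for i in range(self.k):
            self.bits[(h1 + i * h2) % self.m] = 1
    def __contains__(self, item):
        h1, h2 = self._h(item)
        return all(self.bits[(h1 + i * h2) % self.m] for i in range(self.k))
```

Only two hash evaluations are performed per operation, regardless of k; the paper shows this costs nothing in asymptotic false positive probability.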

A Unified Approach to Approximating Partial Covering Problems

An instance of the generalized partial cover problem consists of a ground set U and a family of subsets ${\cal S} \subseteq 2^U$. Each element $e \in U$ is associated with a profit p(e), whereas each subset $S \in {\cal S}$ has a cost c(S). The objective is to find a minimum cost subcollection ${\cal S}' \subseteq {\cal S}$ such that the combined profit of the elements covered by ${\cal S}'$ is at least P, a specified profit bound. In the prize-collecting version of this problem, there is no strict requirement to cover any element; however, if the subsets we pick leave an element $e \in U$ uncovered, we incur a penalty of π(e). The goal is to identify a subcollection ${\cal S}' \subseteq {\cal S}$ that minimizes the cost of ${\cal S}'$ plus the penalties of uncovered elements.

Although problem-specific connections between the partial cover and the prize-collecting variants of a given covering problem have been explored and exploited, a more general connection remained open. The main contribution of this paper is to establish a formal relationship between these two variants. As a result, we present a unified framework for approximating problems that can be formulated or interpreted as special cases of generalized partial cover. We demonstrate the applicability of our method on a diverse collection of covering problems, for some of which we obtain the first non-trivial approximability results.

Jochen Könemann, Ojas Parekh, Danny Segev

Navigating Low-Dimensional and Hierarchical Population Networks

Social networks are navigable small worlds, in which two arbitrary people are likely connected by a short path of intermediate friends that can be found by a “decentralized” routing algorithm using only local information. We develop a model of social networks based on an arbitrary metric space of points, with population density varying across the points. We consider rank-based friendships, where the probability that person u befriends person v is inversely proportional to the number of people who are closer to u than v is. Our main result is that greedy routing can find a short path (of expected polylogarithmic length) from an arbitrary source to a randomly chosen target, independent of the population densities, as long as the doubling dimension of the metric space of locations is low. We also show that greedy routing finds short paths with good probability in tree-based metrics with varying population distributions.

Ravi Kumar, David Liben-Nowell, Andrew Tomkins

Popular Matchings in the Capacitated House Allocation Problem

We consider the problem of finding a popular matching in the Capacitated House Allocation problem (CHA). An instance of CHA involves a set of agents and a set of houses. Each agent has a preference list in which a subset of houses are ranked in strict order, and each house may be matched to a number of agents that must not exceed its capacity. A matching M is popular if there is no other matching M′ such that the number of agents who prefer their allocation in M′ to that in M exceeds the number of agents who prefer their allocation in M to that in M′. Here, we give an $O(\sqrt{C}n_1+m)$ algorithm to determine if an instance of CHA admits a popular matching, and if so, to find a largest such matching, where C is the total capacity of the houses, n_1 is the number of agents, and m is the total length of the agents’ preference lists. For the case where preference lists may contain ties, we give an $O((\sqrt{C}+n_1)m)$ algorithm for the analogous problem.

David F. Manlove, Colin T. S. Sng

Inner-Product Based Wavelet Synopses for Range-Sum Queries

In recent years wavelet based synopses were shown to be effective for approximate queries in database systems. The simplest wavelet synopses are constructed by computing the Haar transform over a vector consisting of either the raw-data or the prefix-sums of the data, and using a greedy-heuristic to select the wavelet coefficients that are kept in the synopsis. The greedy-heuristic is known to be optimal for point queries w.r.t. the mean-squared-error, but no similar efficient optimality result was known for range-sum queries, for which the effectiveness of such synopses was only shown experimentally.

We construct an operator that defines a norm that is equivalent to the mean-squared error over all possible range-sum queries, where the norm is measured on the prefix-sums vector. We show that the Haar basis (and in fact any wavelet basis) is orthogonal w.r.t. the inner product defined by this novel operator. This allows us to use Parseval-based thresholding, and thus obtain the first linear time construction of a provably optimal wavelet synopsis for range-sum queries. We show that the new thresholding is very similar to the greedy-heuristic that is based on point queries.

For the case of range-sum queries over the raw data, we define a similar operator, and show that Haar basis is not orthogonal w.r.t. the inner product defined by this operator.

Yossi Matias, Daniel Urieli
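The transform-then-threshold pipeline that such synopses use can be sketched as follows (our simplified illustration of the standard greedy heuristic for point queries, not the paper’s new range-sum thresholding; input length is assumed to be a power of two):

```python
def haar(values):
    """Un-normalized Haar decomposition of a length-2^j list:
    [overall average, then detail coefficients, coarse to fine]."""
    details, level = [], list(values)
    while len(level) > 1:
        avgs = [(level[i] + level[i + 1]) / 2 for i in range(0, len(level), 2)]
        diffs = [(level[i] - level[i + 1]) / 2 for i in range(0, len(level), 2)]
        details = diffs + details          # coarser details go in front
        level = avgs
    return level + details

def inverse_haar(coeffs):
    level, rest = coeffs[:1], coeffs[1:]
    while rest:
        diffs, rest = rest[:len(level)], rest[len(level):]
        level = [v for a, d in zip(level, diffs) for v in (a + d, a - d)]
    return level

def synopsis(values, b):
    """Greedy heuristic: keep only the b coefficients largest in
    normalized magnitude |c| * sqrt(support size); zero out the rest."""
    coeffs, n = haar(values), len(values)
    supports, size = [n], 1                # support of the average is n
    while len(supports) < n:
        supports += [n // size] * size     # each level halves the support
        size *= 2
    keep = set(sorted(range(n),
                      key=lambda j: -abs(coeffs[j]) * supports[j] ** 0.5)[:b])
    return [c if j in keep else 0.0 for j, c in enumerate(coeffs)]
```

Reconstructing from the truncated coefficients with `inverse_haar` yields the approximate answers used by point (and, experimentally, range-sum) queries.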

Approximation in Preemptive Stochastic Online Scheduling

We present the first constant performance guarantee for preemptive stochastic scheduling to minimize the sum of weighted completion times. For scheduling jobs with release dates on identical parallel machines we derive a policy with a guaranteed performance ratio of 2, which matches the currently best known result for the corresponding deterministic online problem.

Our policy applies to the recently introduced stochastic online scheduling model in which jobs arrive online over time. In contrast to the previously considered nonpreemptive setting, our preemptive policy extensively utilizes information on processing time distributions other than the first (and second) moments. In order to derive our result we introduce a new nontrivial lower bound on the expected value of an unknown optimal policy that we derive from an optimal policy for the basic problem on a single machine without release dates. This problem is known to be solved optimally by a Gittins index priority rule. This priority index also inspires the design of our policy.

Nicole Megow, Tjark Vredeveld

Greedy in Approximation Algorithms

The objective of this paper is to characterize classes of problems for which a greedy algorithm finds solutions provably close to optimum. To that end, we introduce the notion of k-extendible systems, a natural generalization of matroids, and show that a greedy algorithm is a $\frac{1}{k}$-factor approximation for these systems. Many seemingly unrelated problems fit in our framework, e.g.: b-matching, maximum profit scheduling, and maximum asymmetric TSP.

In the second half of the paper we focus on the maximum weight b-matching problem. The problem forms a 2-extendible system, so greedy gives us a $\frac{1}{2}$-factor solution which runs in O(m log n) time. We improve this by providing two linear time approximation algorithms for the problem: a $\frac{1}{2}$-factor algorithm that runs in O(bm) time, and a $\left(\frac{2}{3} - \epsilon\right)$-factor algorithm which runs in expected $O\left(b m \log \frac{1}{\epsilon}\right)$ time.

Julián Mestre
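As a concrete special case (b = 1, i.e. ordinary matching), the 1/2-factor greedy can be sketched as follows (our own illustration of the generic greedy on a 2-extendible system, not the paper’s linear-time variants):

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weight matching:
    scan edges in order of decreasing weight, keeping an edge
    whenever both endpoints are still unmatched."""
    matched = set()
    total, chosen = 0, []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matched |= {u, v}
            chosen.append((u, v))
            total += w
    return total, chosen
```

Each kept edge can block at most two optimal edges of no larger weight, which is exactly the 2-extendibility argument giving the 1/2 factor.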

I/O-Efficient Undirected Shortest Paths with Unbounded Edge Lengths

We show how to compute single-source shortest paths in undirected graphs with non-negative edge lengths in ${\mathcal{O}}(\sqrt{nm/B}\log n + {\mathit{MST}}(n,m))$ I/Os, where n is the number of vertices, m is the number of edges, B is the disk block size, and MST(n, m) is the I/O-cost of computing a minimum spanning tree. For sparse graphs, the new algorithm performs ${\mathcal{O}}((n/\sqrt{B})\log n)$ I/Os. This result removes our previous algorithm’s dependence on the edge lengths in the graph.

Ulrich Meyer, Norbert Zeh

Stochastic Shortest Paths Via Quasi-convex Maximization

We consider the problem of finding shortest paths in a graph with independent randomly distributed edge lengths. Our goal is to maximize the probability that the path length does not exceed a given threshold value (deadline). We give a surprising exact $n^{\Theta(\log n)}$ algorithm for the case of normally distributed edge lengths, which is based on quasi-convex maximization. We then prove average and smoothed polynomial bounds for this algorithm, which also translate to average and smoothed bounds for the parametric shortest path problem, and extend to a more general non-convex optimization setting. We also consider a number of other edge length distributions, giving a range of exact and approximation schemes.

Evdokia Nikolova, Jonathan A. Kelner, Matthew Brand, Michael Mitzenmacher

Path Hitting in Acyclic Graphs

An instance of the path hitting problem consists of two families of paths, ${\cal D}$ and ${\cal H}$, in a common undirected graph, where each path in ${\cal H}$ is associated with a non-negative cost. We refer to ${\cal D}$ and ${\cal H}$ as the sets of demand and hitting paths, respectively. When $p \in {\cal H}$ and $q \in {\cal D}$ share at least one mutual edge, we say that p hits q. The objective is to find a minimum cost subset of ${\cal H}$ whose members collectively hit those of ${\cal D}$.

In this paper we provide constant factor approximation algorithms for path hitting, confined to instances in which the underlying graph is a tree, a spider, or a star. Although such restricted settings may appear to be very simple, we demonstrate that they still capture some of the most basic covering problems in graphs.

Ojas Parekh, Danny Segev

Minimum Transversals in Posi-modular Systems

Given a system (V, f, d) on a finite set V consisting of two set functions $f:2^V\rightarrow{\mathbb R}$ and $d:2^V\rightarrow{\mathbb R}$, we consider the problem of finding a set R ⊆ V of minimum cardinality such that f(X) ≥ d(X) for all X ⊆ V − R, where the problem can be regarded as a natural generalization of the source location problems and the external network problems in (undirected) graphs and hypergraphs. We give a structural characterization of minimal deficient sets of (V, f, d) under certain conditions. We show that all such sets form a tree hypergraph if f is posi-modular and d is modulotone (i.e., each nonempty subset X of V has an element v ∈ X such that d(Y) ≥ d(X) for all subsets Y of X that contain v), and that conversely any tree hypergraph can be represented by minimal deficient sets of (V, f, d) for a posi-modular function f and a modulotone function d. By using this characterization, we present a polynomial-time algorithm if, in addition, f is submodular and d is given by either d(X) = max{p(v) | v ∈ X} for a function $p:V \rightarrow{\mathbb R}_+$ or d(X) = max{r(v,w) | v ∈ X, w ∈ V − X} for a function $r:V^2\rightarrow{\mathbb R}_+$. Our result provides the first polynomial-time algorithms for the source location problem in hypergraphs and the external network problems in graphs and hypergraphs. We also show that the problem is intractable, even if f is submodular and d ≡ 0.

Mariko Sakashita, Kazuhisa Makino, Hiroshi Nagamochi, Satoru Fujishige

An LP-Designed Algorithm for Constraint Satisfaction

The class Max (r,2)-CSP consists of constraint satisfaction problems with at most two r-valued variables per clause. For instances with n variables and m binary clauses, we present an $\widetilde{O}({r^{19m/100}})$-time algorithm. It is the fastest algorithm for most problems in the class (including Max Cut and Max 2-Sat), and, in combination with “Generalized CSPs” introduced in a companion paper, also allows counting, sampling, and the solution of problems like Max Bisection that escape the usual CSP framework. Linear programming is key to both the design and the analysis of the algorithm.

Alexander D. Scott, Gregory B. Sorkin

Approximate k-Steiner Forests Via the Lagrangian Relaxation Technique with Internal Preprocessing

An instance of the k-Steiner forest problem consists of an undirected graph G = (V, E), the edges of which are associated with non-negative costs, and a collection ${\cal D} = \{ (s_i, t_i) : 1 \leq i \leq d \}$ of distinct pairs of vertices, interchangeably referred to as demands. We say that a forest ${\cal F} \subseteq G$ connects a demand (s_i, t_i) when it contains an s_i-t_i path. Given a requirement parameter $k \leq |{\cal D}|$, the goal is to find a minimum cost forest that connects at least k demands in ${\cal D}$. This problem has recently been studied by Hajiaghayi and Jain [SODA ’06], whose main contribution in this context was to relate the inapproximability of k-Steiner forest to that of the dense k-subgraph problem. However, Hajiaghayi and Jain did not provide any algorithmic result for the respective settings, and posed this objective as an important direction for future research.

In this paper, we present the first non-trivial approximation algorithm for the k-Steiner forest problem, which is based on a novel extension of the Lagrangian relaxation technique. Specifically, our algorithm constructs a feasible forest whose cost is within a factor of $O( {\rm min} \{ n^{ 2/3 }, \sqrt{d} \} \cdot \log d )$ of optimal, where n is the number of vertices in the input graph and d is the number of demands.

Danny Segev, Gil Segev

Balancing Applied to Maximum Network Flow Problems

We explore balancing as a definitional and algorithmic tool for finding minimum cuts and maximum flows in ordinary and parametric networks. We show that a standard monotonic parametric maximum flow problem can be formulated as a problem of computing a particular maximum flow that is balanced in an appropriate sense. We present a divide-and-conquer algorithm to compute such a balanced flow in a logarithmic number of ordinary maximum-flow computations. For the special case of a bipartite network, we present two simple, local algorithms for computing a balanced flow. The local balancing idea becomes even simpler when applied to the ordinary maximum flow problem. For this problem, we present a round-robin arc-balancing algorithm that computes a maximum flow on an n-vertex, m-arc network with integer arc capacities of at most U in O(n²m log(nU)) time. Although this algorithm is slower by at least a factor of n than other known algorithms, it is extremely simple and well suited to parallel and distributed implementation.

Robert Tarjan, Julie Ward, Bin Zhang, Yunhong Zhou, Jia Mao

Contributed Papers: Engineering and Applications Track

Out-of-Order Event Processing in Kinetic Data Structures

We study the problem of designing kinetic data structures (KDS’s for short) when event times cannot be computed exactly and events may be processed in a wrong order. In traditional KDS’s this can lead to major inconsistencies from which the KDS cannot recover. We present more robust KDS’s for the maintenance of two fundamental structures, kinetic sorting and tournament trees, which overcome the difficulty by employing a refined event scheduling and processing technique. We prove that the new event scheduling mechanism leads to a KDS that is correct except for finitely many short time intervals. We analyze the maximum delay of events and the maximum error in the structure, and we experimentally compare our approach to the standard event scheduling mechanism.

Mohammad Ali Abam, Pankaj K. Agarwal, Mark de Berg, Hai Yu

Kinetic Algorithms Via Self-adjusting Computation

Define a static algorithm as an algorithm that computes some combinatorial property of its input consisting of static, i.e., non-moving, objects. In this paper, we describe a technique for syntactically transforming static algorithms into kinetic algorithms, which compute properties of moving objects. The technique offers capabilities for composing kinetic algorithms, for integrating dynamic and kinetic changes, and for ensuring robustness even with fixed-precision floating-point arithmetic. To evaluate the effectiveness of the approach, we implement a library for performing the transformation, transform a number of algorithms, and give an experimental evaluation. The results show that the technique performs well in practice.

Umut A. Acar, Guy E. Blelloch, Kanat Tangwongsan, Jorge L. Vittes

Parallel Machine Scheduling Through Column Generation: Minimax Objective Functions

In this paper we consider one of the basic problems in scheduling and project management: scheduling on parallel identical machines. We present a solution framework for a number of scheduling problems in which the goal is to find a feasible schedule that minimizes some objective function of the minimax type on a set of parallel, identical machines, subject to release dates, deadlines, and/or generalized precedence constraints. Our framework is based on column generation. Although column generation has been successfully applied to many parallel machine scheduling problems with objective functions of the minsum type, the number of applications for minimax objective functions is still small.

We determine a lower bound on the objective function in the following way. We first turn the minimization problem into a decision problem by bounding the outcome value. We then ask ourselves ‘Are m machines enough to feasibly accommodate all jobs?’ We formulate this as an integer linear programming problem and determine a high-quality lower bound by applying column generation to the LP-relaxation; if this lower bound is more than m, then we can conclude infeasibility. To speed up the process, we compute an intermediate lower bound based on the outcome of the pricing problem. As the pricing problem is intractable for many variants of the original scheduling problem, we mostly solve it approximately by applying local search, but once in every 50 iterations, or when local search fails, we use a time-indexed integer linear programming formulation to solve the pricing problem.

After having derived the lower bound on the objective function of the original scheduling problem, we try to find a matching upper bound by identifying a feasible schedule with objective function value equal to this lower bound. Our computational results show that our lower bound is so strong that this almost always succeeds. We are able to solve problems with up to 160 jobs and 10 machines in 10 minutes on average.

1980 Mathematics Subject Classification (Revision 1991): 90B35.

J. M. van den Akker, J. A. Hoogeveen, J. W. van Kempen

Reporting Flock Patterns

Data representing moving objects is rapidly getting more available, especially in the area of wildlife GPS tracking. It is a central belief that information is hidden in large data sets in the form of interesting patterns. One of the most common spatio-temporal patterns sought after is flocks. A flock is a large enough subset of objects moving along paths close to each other for a certain pre-defined time. We give a new definition that we argue is more realistic than the previous ones, and we present fast approximation algorithms to report flocks. The algorithms are analysed both theoretically and experimentally.

Marc Benkert, Joachim Gudmundsson, Florian Hübner, Thomas Wolle

On Exact Algorithms for Treewidth

We give experimental and theoretical results on the problem of computing the treewidth of a graph by exact exponential time algorithms using exponential space or using only polynomial space. We first report on an implementation of a dynamic programming algorithm for computing the treewidth of a graph with running time O*(2ⁿ). This algorithm is based on the old dynamic programming method introduced by Held and Karp for the Traveling Salesman problem. We use some optimizations that do not affect the worst case running time but improve the running time on actual instances, and the algorithm can be seen to be practical for small instances. However, our experiments show that the space used by the algorithm is an important factor limiting the input sizes for which the algorithm is effective. For this purpose, we settle the problem of computing treewidth under the restriction that the space used is only polynomial. In this direction we give a simple O*(4ⁿ) algorithm that requires polynomial space. We also prove that, using more refined techniques with balanced separators, Treewidth can be computed in O*(2.9512ⁿ) time and polynomial space.

Hans L. Bodlaender, Fedor V. Fomin, Arie M. C. A. Koster, Dieter Kratsch, Dimitrios M. Thilikos
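The Held-Karp-style dynamic program over vertex subsets can be sketched as follows (a compact illustration of the underlying recurrence, without the paper’s optimizations; it uses exponential space and is only practical for tiny graphs):

```python
def treewidth(n, edges):
    """Subset DP for treewidth via elimination orders:
    TW(S) = min over v in S of max(TW(S \ {v}), |Q(S \ {v}, v)|), where
    Q(S, v) = vertices outside S and {v} that v reaches via paths inside S."""
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u

    def q(S, v):
        seen, stack, reach = 1 << v, [v], 0
        while stack:
            u = stack.pop()
            new = adj[u] & ~seen
            seen |= new
            while new:
                bit = new & -new
                new ^= bit
                x = bit.bit_length() - 1
                if S >> x & 1:
                    stack.append(x)   # inside S: continue the walk
                else:
                    reach |= bit      # outside S and {v}: counts toward Q
        return bin(reach).count("1")

    tw = [0] * (1 << n)
    tw[0] = -1                        # convention for the empty vertex set
    for S in range(1, 1 << n):        # subsets in increasing numeric order
        best, rest = n, S             # (S \ {v} is always a smaller number)
        while rest:
            bit = rest & -rest
            rest ^= bit
            v = bit.bit_length() - 1
            best = min(best, max(tw[S ^ bit], q(S ^ bit, v)))
        tw[S] = best
    return tw[(1 << n) - 1]
```

The paper’s O*(2ⁿ) implementation follows this scheme; its polynomial-space variants recompute subproblems instead of storing the full `tw` table.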

An Improved Construction for Counting Bloom Filters

A counting Bloom filter (CBF) generalizes a Bloom filter data structure so as to allow membership queries on a set that can be changing dynamically via insertions and deletions. As with a Bloom filter, a CBF obtains space savings by allowing false positives. We provide a simple hashing-based alternative based on d-left hashing called a d-left CBF (dlCBF). The dlCBF offers the same functionality as a CBF, but uses less space, generally saving a factor of two or more. We describe the construction of dlCBFs, provide an analysis, and demonstrate their effectiveness experimentally.

Flavio Bonomi, Michael Mitzenmacher, Rina Panigrahy, Sushil Singh, George Varghese
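
For context, a textbook counting Bloom filter — the baseline the dlCBF improves on — can be sketched in a few lines. The table size `m`, counter count `k`, and SHA-256-based index derivation are illustrative choices here, not the paper's construction.

```python
import hashlib

class CountingBloomFilter:
    """Classic counting Bloom filter: each item touches k counters,
    so deletions are supported by decrementing. Allows false positives."""

    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.counters = [0] * m

    def _indexes(self, item):
        # derive k pseudo-independent indexes from one cryptographic hash
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def insert(self, item):
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def delete(self, item):
        for idx in self._indexes(item):
            self.counters[idx] -= 1

    def query(self, item):
        # "possibly present" iff all k counters are positive
        return all(self.counters[idx] > 0 for idx in self._indexes(item))
```

The space overhead of keeping a multi-bit counter per cell is exactly what the d-left fingerprint scheme of the paper reduces.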

An MINLP Solution Method for a Water Network Problem

We propose a solution method for a water-network optimization problem using a nonconvex continuous NLP relaxation and an MINLP search. We report successful computational experience using available MINLP software on problems from the literature and on difficult real-world instances.

Cristiana Bragalli, Claudia D’Ambrosio, Jon Lee, Andrea Lodi, Paolo Toth

Skewed Binary Search Trees

It is well-known that to minimize the number of comparisons a binary search tree should be perfectly balanced. Previous work has shown that a dominating factor in the running time of a search is the number of cache faults incurred, and that an appropriate memory layout of a binary search tree can reduce the number of cache faults by several hundred percent. Motivated by the fact that during a search branching to the left or right at a node does not necessarily have the same cost, e.g. because of branch prediction schemes, in this paper we study the class of skewed binary search trees. For all nodes in a skewed binary search tree the ratio between the size of the left subtree and the size of the tree is a fixed constant (a ratio of 1/2 gives perfectly balanced trees). We present an experimental study of various memory layouts of static skewed binary search trees, where each element in the tree is accessed with uniform probability. Our results show that for many of the memory layouts we consider, skewed binary search trees can perform better than perfectly balanced search trees. The improvements in running time are on the order of 15%.

Gerth Stølting Brodal, Gabriel Moruz
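
Building a tree with a fixed left-subtree ratio is straightforward; a small sketch (the tuple representation and comparison counting are illustrative, not the paper's memory layouts):

```python
def build_skewed(sorted_keys, alpha=0.5):
    """Build a BST where each left subtree holds roughly an alpha-fraction
    of the nodes. alpha=0.5 yields a perfectly balanced tree.
    Nodes are (key, left, right) tuples."""
    if not sorted_keys:
        return None
    left_size = int(alpha * (len(sorted_keys) - 1))
    return (sorted_keys[left_size],
            build_skewed(sorted_keys[:left_size], alpha),
            build_skewed(sorted_keys[left_size + 1:], alpha))

def search(node, key):
    """Standard BST search; returns (found, number_of_comparisons)."""
    comparisons = 0
    while node:
        comparisons += 1
        k, left, right = node
        if key == k:
            return True, comparisons
        node = left if key < k else right
    return False, comparisons
```

A skewed tree (alpha far from 1/2) performs more comparisons on average; the paper's point is that on real hardware the cheaper, better-predicted branches can more than compensate.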

Algorithmic Aspects of Proportional Symbol Maps

Proportional symbol maps visualize numerical data associated with point locations by placing a scaled symbol—typically opaque disks or squares—at the corresponding point on a map. Overlapping symbols need to be drawn in such a way that the user can still judge their relative sizes accurately.

We identify two types of suitable drawings: physically realizable drawings and stacking drawings. For these we study the following two problems: Max-Min—maximize the minimum visible boundary length of each symbol—and Max-Total—maximize the total visible boundary length over all symbols. We show that both problems are NP-hard for physically realizable drawings. Max-Min can be solved in O(n^2 log n) time for stacking drawings, which can be improved to O(n log n) or O(n log^2 n) time when the input has certain properties. We also experimented with four methods to compute stacking drawings: our solution to the Max-Min problem performs best on the data sets considered.

S. Cabello, H. Haverkort, M. van Kreveld, B. Speckmann

Does Path Cleaning Help in Dynamic All-Pairs Shortest Paths?

In the dynamic all-pairs shortest path problem we wish to maintain information about distances in a weighted graph subject to dynamic operations such as edge insertions, edge deletions, and edge weight updates. The most efficient algorithms for this problem maintain a suitable superset of shortest paths in the graph. This superset retains information about the history of previous graph updates so as to avoid pathological situations where algorithms are continuously forced to rebuild large portions of their data structures. On the other hand, the set of maintained paths may grow too large, resulting in both prohibitive space consumption and inefficient updates. To circumvent this problem, the algorithms perform suitable path cleaning operations. In this paper, we implement and experiment with a recent efficient algorithm by Thorup, which differs from the previous algorithms mainly in the way path cleaning is done, and we carry out a thorough experimental investigation on known implementations of dynamic shortest path algorithms. Our experimental study puts the new results into perspective with respect to previous work and gives evidence that path cleaning, although crucial for the theoretical bounds, appears to be of very limited impact in practice.

C. Demetrescu, P. Faruolo, G. F. Italiano, M. Thorup

Multiline Addressing by Network Flow

We consider an optimization problem arising in the design of controllers for OLED displays. Our objective is to minimize the amplitude of the electrical current through the diodes, which has a direct impact on the lifetime of such a display. Modeling the problem in mathematical terms yields a class of network flow problems in which the arcs are grouped and, in each group, we pay only for the arc carrying the maximum flow. We develop (fully) combinatorial approximation heuristics suitable for being implemented in the hardware of a control device that drives an OLED display.

Friedrich Eisenbrand, Andreas Karrenbauer, Martin Skutella, Chihao Xu

The Engineering of a Compression Boosting Library: Theory vs Practice in BWT Compression

Data compression is one of the most challenging arenas both for algorithm design and engineering. This is particularly true for Burrows–Wheeler compression, a technique that is important in itself and for the design of compressed indexes. There has been considerable debate on how to design and engineer compression algorithms based on the BWT paradigm. In particular, Move-to-Front encoding is generally believed to be an “inefficient” part of the Burrows–Wheeler compression process. However, only recently have two theoretically superior alternatives to Move-to-Front been proposed, namely Compression Boosting and Wavelet Trees. The main contribution of this paper is to provide the first experimental comparison of these three techniques, giving a much needed methodological contribution to the current debate. We do so by providing a carefully engineered compression boosting library that can be used, on the one hand, to investigate the myriad new compression algorithms that can be based on boosting, and, on the other hand, to make the first experimental assessment of how Move-to-Front behaves with respect to its recently proposed competitors. The main conclusion is that Boosting, Wavelet Trees and Move-to-Front yield quite close compression performance. Finally, our extensive experimental study of the boosting technique brings to light a new fact overlooked in ten years of experiments in the area: a fast-adapting order-zero compressor is enough to provide state-of-the-art BWT compression by simply compressing the run-length-encoded transform. In other words, Move-to-Front, Wavelet Trees, and Boosters can all be bypassed by a fast learner.

Paolo Ferragina, Raffaele Giancarlo, Giovanni Manzini
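
Move-to-Front itself is simple to state: emit each symbol's current position in a table, then move that symbol to the front, so recently seen symbols get small codes. A minimal Python sketch (initializing the table from an explicitly supplied alphabet is an illustrative choice):

```python
def mtf_encode(data, alphabet):
    """Move-to-Front encoding: emit each symbol's current table index,
    then move the symbol to the front of the table."""
    table = list(alphabet)
    out = []
    for c in data:
        i = table.index(c)
        out.append(i)
        table.insert(0, table.pop(i))
    return out

def mtf_decode(codes, alphabet):
    """Inverse transform: look up the symbol at each index, then move it
    to the front, mirroring the encoder's table updates."""
    table = list(alphabet)
    out = []
    for i in codes:
        c = table[i]
        out.append(c)
        table.insert(0, table.pop(i))
    return ''.join(out)
```

After a BWT, runs of equal symbols become runs of zeros under MTF, which is what makes a fast-adapting order-zero compressor so effective on the result.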

The Price of Resiliency: A Case Study on Sorting with Memory Faults

We address the problem of sorting in the presence of faults that may arbitrarily corrupt memory locations, and investigate the impact of memory faults both on the correctness and on the running times of mergesort-based algorithms. To achieve this goal, we develop a software testbed that simulates different fault injection strategies, and we perform a thorough experimental study using a combination of several fault parameters. Our experiments give evidence that simple-minded approaches to this problem are largely impractical, while the design of more sophisticated resilient algorithms seems really worth the effort. Another contribution of our computational study is a carefully engineered implementation of a resilient sorting algorithm, which appears robust to different memory fault patterns.

Umberto Ferraro-Petrillo, Irene Finocchi, Giuseppe F. Italiano

How Branch Mispredictions Affect Quicksort

We explain the counterintuitive observation that finding “good” pivots (close to the median of the array to be partitioned) may not improve the performance of quicksort. Indeed, an intentionally skewed pivot improves performance. The reason is that while the instruction count decreases with the quality of the pivot, the likelihood that the direction of a branch is mispredicted also goes up. We analyze the effect of simple branch prediction schemes and measure the effects on real hardware.

Kanela Kaligosi, Peter Sanders
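
The idea of a deliberately skewed pivot can be sketched as follows: instead of the sample median, pick an element near a chosen quantile of a small sample, so the comparison branch becomes highly predictable. The sample size and `skew` quantile below are illustrative parameters, and this list-based version ignores the in-place partitioning and hardware measurements that are the point of the paper.

```python
def quicksort_skewed(a, skew=0.1):
    """Quicksort choosing the pivot near the `skew` quantile of a small
    sample: the partition branch then almost always goes the same way,
    which a branch predictor handles well."""
    if len(a) <= 1:
        return a
    sample = sorted(a[:min(len(a), 9)])
    pivot = sample[int(skew * (len(sample) - 1))]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort_skewed(less, skew) + equal + quicksort_skewed(greater, skew)
```

With `skew=0.5` this degenerates to median-of-sample pivoting; the paper's measurements show why moving away from 0.5 can pay off despite the higher instruction count.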

Robust, Generic and Efficient Construction of Envelopes of Surfaces in Three-Dimensional Spaces

Lower envelopes are fundamental structures in computational geometry, which have many applications, such as computing general Voronoi diagrams and performing hidden surface removal in computer graphics. We present a generic, robust and efficient implementation of the divide-and-conquer algorithm for computing the envelopes of surfaces in ℝ^3. To the best of our knowledge, this is the first exact implementation that computes envelopes in three-dimensional space. Our implementation is based on Cgal (the Computational Geometry Algorithms Library) and is designated as a Cgal package. The separation of topology and geometry in our solution allows for the reuse of the algorithm with different families of surfaces, provided that a small set of geometric objects and operations on them is supplied. We used our algorithm to compute the lower and upper envelope for several types of surfaces. Exact arithmetic is typically slower than floating-point arithmetic, especially when higher order surfaces are involved. Since our implementation follows the exact geometric computation paradigm, we minimize the number of geometric operations, and by that significantly improve the performance of the algorithm in practice. Our experiments show interesting phenomena in the behavior of the divide-and-conquer algorithm and the combinatorics of lower envelopes of random surfaces.

Michal Meyerovitch

Engineering Highway Hierarchies

In [1], we presented a shortest path algorithm that allows fast point-to-point queries in graphs using preprocessed data. Here, we give an extensive revision of our method. It allows faster query and preprocessing times, it reduces the size of the data obtained during the preprocessing and it deals with directed graphs. Some important concepts like the neighbourhood radii and the contraction of a network have been generalised and are now more flexible. The query algorithm has been simplified: it differs only by a few lines from the bidirectional version of Dijkstra's algorithm. We can prove that our algorithm is correct even if the graph contains several paths of the same length.

Experiments with real-world road networks confirm the effectiveness of our approach. Preprocessing the network of Western Europe, which consists of about 18 million nodes, takes 15 minutes and yields 68 bytes of additional data per node. Then, random queries take 0.76 ms on average. If we are willing to accept slower query times (1.38 ms), the memory usage can be decreased to 17 bytes per node. For the European and the US road networks, we can guarantee that at most 0.05% of all nodes are visited during any query.

Peter Sanders, Dominik Schultes
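
For reference, here is the bidirectional Dijkstra baseline that the highway-hierarchy query extends by a few lines. This sketch runs both searches to exhaustion for simplicity (no early stopping, no hierarchy pruning); the graph encoding as adjacency dicts of `(neighbor, weight)` lists is an illustrative assumption.

```python
import heapq

def bidirectional_dijkstra(adj, radj, s, t):
    """Plain bidirectional Dijkstra: a forward search on adj from s and a
    backward search on the reversed graph radj from t. Returns the shortest
    s-t distance (inf if t is unreachable)."""
    dist = [{s: 0}, {t: 0}]            # tentative distances, per direction
    pq = [[(0, s)], [(0, t)]]          # one priority queue per direction
    graphs = [adj, radj]
    best = float('inf')
    while pq[0] or pq[1]:
        # pop from the side whose queue minimum is smaller
        side = 0 if pq[0] and (not pq[1] or pq[0][0][0] <= pq[1][0][0]) else 1
        d, u = heapq.heappop(pq[side])
        if d > dist[side].get(u, float('inf')):
            continue                   # stale queue entry
        other = dist[1 - side]
        if u in other:                 # searches meet at u
            best = min(best, d + other[u])
        for v, w in graphs[side].get(u, []):
            nd = d + w
            if nd < dist[side].get(v, float('inf')):
                dist[side][v] = nd
                heapq.heappush(pq[side], (nd, v))
    return best
```

Highway hierarchies keep this structure but restrict each search to higher and higher levels of the precomputed hierarchy as it moves away from source and target.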

Univariate Polynomial Real Root Isolation: Continued Fractions Revisited

We present algorithmic, complexity and implementation results concerning real root isolation of integer univariate polynomials using the continued fraction expansion of real numbers. We improve the previously known bound by a factor of $d\tau$, where d is the polynomial degree and τ bounds the coefficient bitsize, thus matching the current record complexity for real root isolation by exact methods. Namely, the complexity bound is ${{\widetilde{\mathcal{O}}_B}(d^4 \tau^2)}$ using a standard bound on the expected bitsize of the integers in the continued fraction expansion. We show how to compute the multiplicities within the same complexity and extend the algorithm to non-square-free polynomials. Finally, we present an efficient open-source C++ implementation in the algebraic library synaps, and illustrate its efficiency as compared to other available software. We use polynomials with coefficient bitsize up to 8000 and degree up to 1000.

Elias P. Tsigaridas, Ioannis Z. Emiris
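
The core test driving continued-fraction root isolation is Descartes' rule of signs applied to shifted polynomials: the number of positive real roots is bounded by (and has the same parity as) the number of sign variations in the coefficient sequence. A tiny sketch of the two basic operations — the full machinery of Möbius transformations and interval bookkeeping is beyond this snippet:

```python
def sign_variations(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zeros
    (Descartes' rule of signs bounds the positive real roots by this)."""
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

def taylor_shift(coeffs, a=1):
    """Coefficients of p(x + a), with coeffs from highest to lowest degree,
    computed by Horner evaluation in the polynomial (x + a)."""
    res = []
    for c in coeffs:
        # multiply the partial result by (x + a), then add c
        new = [0] * (len(res) + 1)
        for i, r in enumerate(res):
            new[i] += r
            new[i + 1] += r * a
        new[-1] += c
        res = new
    return res
```

A solver repeatedly applies such shifts (and inversions x → 1/x) and recurses until each subinterval shows 0 or 1 sign variations, at which point the root count in the interval is certain.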

Exact and Efficient Construction of Planar Minkowski Sums Using the Convolution Method

The Minkowski sum of two sets A, B ⊆ ℝ^d, denoted A ⊕ B, is defined as {a + b | a ∈ A, b ∈ B}. We describe an efficient and robust implementation for the construction of Minkowski sums of polygons in ℝ^2 using the convolution of the polygon boundaries. This method allows for faster computation of the sum of non-convex polygons in comparison with the widely used methods for Minkowski-sum computation that decompose the input polygons into convex sub-polygons and compute the union of the pairwise sums of these convex sub-polygons.

Ron Wein
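
In the convex special case, the convolution of two polygon boundaries reduces to merging their edge vectors in slope order. The sketch below assumes counter-clockwise vertex lists and handles only convex inputs — the non-convex general case that the paper addresses requires the full convolution and arrangement machinery.

```python
import math

def minkowski_sum_convex(P, Q):
    """Minkowski sum of two convex polygons (CCW vertex lists): start at the
    sum of the bottommost vertices and append the edge vectors of both
    polygons merged by polar angle."""
    def start_and_edges(poly):
        # rotate so the bottommost (then leftmost) vertex comes first
        s = min(range(len(poly)), key=lambda i: (poly[i][1], poly[i][0]))
        poly = poly[s:] + poly[:s]
        edges = [(poly[(i + 1) % len(poly)][0] - poly[i][0],
                  poly[(i + 1) % len(poly)][1] - poly[i][1])
                 for i in range(len(poly))]
        return poly[0], edges

    (px, py), ep = start_and_edges(P)
    (qx, qy), eq = start_and_edges(Q)
    # merge all edge vectors by polar angle in [0, 2*pi)
    merged = sorted(ep + eq,
                    key=lambda v: math.atan2(v[1], v[0]) % (2 * math.pi))
    result = [(px + qx, py + qy)]
    for dx, dy in merged[:-1]:        # the last edge closes the polygon
        x, y = result[-1]
        result.append((x + dx, y + dy))
    return result
```

Edges with equal directions appear consecutively in the output, so collinear vertices are retained; a production implementation would typically simplify them away.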

Backmatter
