2005 | Book

Algorithms – ESA 2005

13th Annual European Symposium, Palma de Mallorca, Spain, October 3-6, 2005. Proceedings

Editors: Gerth Stølting Brodal, Stefano Leonardi

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

Table of Contents

Frontmatter
Designing Reliable Algorithms in Unreliable Memories

Some of today’s applications run on computer platforms with large and inexpensive memories, which are also error-prone. Unfortunately, the appearance of even very few memory faults may jeopardize the correctness of the computational results. An algorithm is resilient to memory faults if, despite the corruption of some memory values before or during its execution, it is nevertheless able to produce a correct output, at least on the set of uncorrupted values. In this paper we survey some recent work on reliable computation in the presence of memory faults.

Irene Finocchi, Fabrizio Grandoni, Giuseppe F. Italiano
From Balanced Graph Partitioning to Balanced Metric Labeling

The Metric Labeling problem is an elegant and powerful mathematical model capturing a wide range of classification problems that arise in computer vision and related fields. In a typical classification problem, one wishes to assign labels to a set of objects to optimize some measure of the quality of the labeling. The metric labeling problem captures a broad range of classification problems where the quality of a labeling depends on the pairwise relations between the underlying set of objects, as described by a weighted graph. Additionally, a metric distance function on the labels is defined, and for each label and each vertex, an assignment cost is given. The goal is to find a minimum-cost assignment of the vertices to the labels. The cost of the solution consists of two parts: the assignment costs of the vertices and the separation costs of the edges (each edge pays its weight times the distance between the two labels to which its endpoints are assigned). Metric labeling has many applications as well as rich connections to some well known problems in combinatorial optimization.

The balanced metric labeling problem has an additional constraint requiring that at most ℓ vertices can be assigned to each label, i.e., labels have capacity. We focus on the case where the given metric is uniform and note that it already captures various well-known balanced graph partitioning problems. We discuss (pseudo) approximation algorithms for the balanced metric labeling problem, and focus on several important techniques used for obtaining the algorithms. Spreading metrics have proved to be very useful for balanced graph partitioning and our starting point for balanced metric labeling is a linear programming formulation that combines an embedding of the graph in a simplex together with spreading metrics and additional constraints that strengthen the formulation. The approximation algorithm is based on a novel randomized rounding that uses both a randomized metric decomposition technique and a randomized label assignment technique. At the heart of our approach is the fact that only limited dependency is created between the labels assigned to different vertices, allowing us to bound the expected cost of the solution and the number of vertices assigned to each label, simultaneously.
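The cost function just described is easy to state in code. As a hedged illustration (our own sketch, not from the paper), the following evaluates a labeling from made-up dictionaries for the edge weights, the label metric, and the assignment costs:

```python
def labeling_cost(edges, dist, assign_cost, labeling):
    """Cost of a labeling: per-vertex assignment costs, plus each edge's
    weight times the metric distance between its endpoints' labels."""
    cost = sum(assign_cost[(v, labeling[v])] for v in labeling)
    for u, v, w in edges:
        cost += w * dist[(labeling[u], labeling[v])]
    return cost
```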

Joseph Naor
Fearful Symmetries: Quantum Computing, Factoring, and Graph Isomorphism

Historically, the idea of symmetry has played a much greater role in physics than in computer science. While computer scientists typically work against an adversary, physicists are brought up to believe in the benevolence of nature, and to believe that the answers to natural questions are often as simple—and as symmetric—as they possibly could be. Indeed, symmetry is intimately linked to every branch of physics, from classical conservation laws to elementary particles to special and general relativity.

Cristopher Moore
Exploring an Unknown Graph Efficiently

We study the problem of exploring an unknown, strongly connected directed graph. Starting at some node of the graph, we must visit every edge and every node at least once. The goal is to minimize the number of edge traversals. It is known that the competitive ratio of online algorithms for this problem depends on the deficiency d of the graph, which is the minimum number of edges that must be added to make the graph Eulerian. We present the first deterministic online exploration algorithm whose competitive ratio is polynomial in d (it is $O(d^{8})$).
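As a small illustration (ours, not part of the paper), the deficiency of a strongly connected digraph can be read off the in/out-degree imbalances: each added edge supplies one unit of out-degree at its tail and one unit of in-degree at its head, so the total in-degree surplus is exactly the number of edges needed.

```python
from collections import defaultdict

def deficiency(edges):
    """Deficiency of a strongly connected digraph given as (u, v) pairs:
    the minimum number of edges to add so every vertex has
    indegree == outdegree (i.e. the graph becomes Eulerian)."""
    indeg, outdeg = defaultdict(int), defaultdict(int)
    vertices = set()
    for u, v in edges:
        outdeg[u] += 1
        indeg[v] += 1
        vertices.update((u, v))
    # total surplus of in-degree over out-degree, summed over vertices
    return sum(max(0, indeg[v] - outdeg[v]) for v in vertices)
```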

Rudolf Fleischer, Gerhard Trippen
Online Routing in Faulty Meshes with Sub-linear Comparative Time and Traffic Ratio

We consider the problem of routing a message in a mesh network with faulty nodes. The number and positions of faulty nodes are unknown. It is known that a flooding strategy like expanding ring search can route a message in the minimum number of steps h while it causes a traffic (i.e. the total number of messages) of ${\mathcal O}(h^{2})$. For optimizing traffic a single-path strategy is optimal, producing traffic ${\mathcal O}(p + h)$, where p is the perimeter length of the barriers formed by the faulty nodes. Therefore, we define the comparative traffic ratio as a quotient over p + h and the competitive time ratio as a quotient over h. Optimal algorithms with constant ratios are known for time and traffic, but not for both. We are interested in optimizing both parameters and define the combined comparative ratio as the maximum of the competitive time ratio and the comparative traffic ratio. Single-path strategies using the right-hand rule for traversing barriers as well as multi-path strategies like expanding ring search have a combined comparative ratio of Θ(h). It is an open question whether there exists an online routing strategy optimizing time and traffic for meshes with an unknown set of faulty nodes. We present an online strategy for routing with faulty nodes providing a sub-linear combined comparative ratio of $h^{{\mathcal O}(\sqrt{\frac{{\rm log log}h}{{\rm log}h}})}$.

Stefan Rührup, Christian Schindelhauer
Heuristic Improvements for Computing Maximum Multicommodity Flow and Minimum Multicut

We propose heuristics to reduce the number of shortest path computations required to compute a 1+ε approximation to the maximum multicommodity flow in a graph. Through a series of improvements we are able to reduce the number of shortest path computations significantly. One key idea is to use the value of the best multicut encountered in the course of the algorithm. For almost all instances this multicut is significantly better than that computed by rounding the linear program.

Garima Batra, Naveen Garg, Garima Gupta
Relax-and-Cut for Capacitated Network Design

We present an evaluation of a Lagrangean-based branch-and-bound algorithm with additional valid inequalities for the capacitated network design problem. The focus is on two types of valid inequalities, the cover inequalities and local cuts. We show how these inequalities can be considered in a Lagrangean relaxation without destroying the computationally simple structure of the subproblems. We present an extensive computational study on a large set of benchmark data. The results show that the presented algorithm outperforms many other exact and heuristical solvers in terms of running time and solution quality.

Georg Kliewer, Larissa Timajev
On the Price of Anarchy and Stability of Correlated Equilibria of Linear Congestion Games

We consider the price of stability for Nash and correlated equilibria of linear congestion games. The price of stability is the optimistic price of anarchy: the ratio of the cost of the best Nash or correlated equilibrium over the social optimum. We show that for the sum social cost, which corresponds to the average cost of the players, every linear congestion game has Nash and correlated price of stability at most 1.6. We also give an almost matching lower bound of $1+\sqrt{3}/3=1.577$.

We also consider the price of anarchy of correlated equilibria. We extend existing results about Nash equilibria to correlated equilibria and show that for the sum social cost, the price of anarchy is exactly 2.5, the same for pure and mixed Nash equilibria and for correlated equilibria. The same bound holds for symmetric games as well. We also extend the results about Nash equilibria to correlated equilibria for weighted congestion games and show that when the social cost is the total latency, the price of anarchy is $(3+\sqrt{5})/2=2.618$.

George Christodoulou, Elias Koutsoupias
The Complexity of Games on Highly Regular Graphs

We present algorithms and complexity results for the problem of finding equilibria (mixed Nash equilibria, pure Nash equilibria and correlated equilibria) in games with extremely succinct description that are defined on highly regular graphs such as the d-dimensional grid; we argue that such games are of interest in the modelling of large systems of interacting agents. We show that mixed Nash equilibria can be found in time exponential in the succinct representation by quantifier elimination, while correlated equilibria can be found in polynomial time by taking advantage of the game’s symmetries. Finally, the complexity of determining whether such a game on the d-dimensional grid has a pure Nash equilibrium depends on d and the dichotomy is remarkably sharp: it is solvable in polynomial time (in fact NL-complete) when d = 1, but it is NEXP-complete for d ≥ 2.

Konstantinos Daskalakis, Christos H. Papadimitriou
Computing Equilibrium Prices: Does Theory Meet Practice?

The best known algorithms for the computation of market equilibria, in a general setting, are not guaranteed to run in polynomial time. On the other hand, simple poly-time algorithms are available for various restricted – yet important – markets.

In this paper, we experimentally explore the gray zone between the general problem and the poly-time solvable special cases. More precisely, we analyze the performance of some simple algorithms, for inputs which are relevant in practice, and where the theory does not provide poly-time guarantees.

Bruno Codenotti, Benton McCune, Rajiv Raman, Kasturi Varadarajan
Efficient Exact Algorithms on Planar Graphs: Exploiting Sphere Cut Branch Decompositions

A divide-and-conquer strategy based on variations of the Lipton-Tarjan planar separator theorem has been one of the most common approaches to solving planar graph problems for more than 20 years. We present a new framework for designing fast subexponential exact and parameterized algorithms on planar graphs. Our approach is based on geometric properties of planar branch decompositions obtained by Seymour & Thomas, combined with new techniques of dynamic programming on planar graphs based on properties of non-crossing partitions. Compared to divide-and-conquer algorithms, the main advantages of our method are (a) it is a generic method which allows us to attack broad classes of problems, and (b) the obtained algorithms provide a better worst case analysis. To exemplify our approach we show how to obtain an $O(2^{6.903\sqrt{n}}n^{3/2}+n^{3})$ time algorithm solving weighted Hamiltonian Cycle. We observe how our technique can be used to solve Planar Graph TSP in time $O(2^{10.8224\sqrt{n}}n^{3/2}+n^{3})$. Our approach can be used to design parameterized algorithms as well. For example, we introduce the first $2^{O(\sqrt{k})}k^{O(1)}n^{O(1)}$ time algorithm for parameterized Planar k-Cycle by showing that for a given k we can decide if a planar graph on n vertices has a cycle of length ≥ k in time $O(2^{13.6\sqrt{k}}\sqrt{k}n+n^{3})$.

Frederic Dorn, Eelko Penninkx, Hans L. Bodlaender, Fedor V. Fomin
An Algorithm for the SAT Problem for Formulae of Linear Length

We present an algorithm that decides the satisfiability of a CNF formula where every variable occurs at most k times in time $O(2^{N(1-c/(k+1)+O(1/k^{2}))})$ for some c (that is, $O(\alpha^{N})$ with α < 2 for every fixed k). For k ≤ 4, the results coincide with an earlier paper where we achieved running times of $O(2^{0.1736N})$ for k = 3 and $O(2^{0.3472N})$ for k = 4 (for k ≤ 2, the problem is solvable in polynomial time). For k > 4, these results are the best yet, with running times of $O(2^{0.4629N})$ for k = 5 and $O(2^{0.5408N})$ for k = 6. As a consequence of this, the same algorithm is shown to run in time $O(2^{0.0926L})$ for a formula of length (i.e. total number of literals) L. The previously best bound in terms of L is $O(2^{0.1030L})$. Our bound is also the best upper bound for an exact algorithm for a 3-SAT formula with up to six occurrences per variable, and a 4-SAT formula with up to eight occurrences per variable.

Magnus Wahlström
Linear-Time Enumeration of Isolated Cliques

For a given graph G of n vertices and m edges, a clique S of size k is said to be c-isolated if there are at most ck outgoing edges from S. It is shown that this parameter c is an interesting measure which governs the complexity of finding cliques. In particular, if c is a constant, then we can enumerate all c-isolated maximal cliques in linear time, and if c = O(log n), then we can enumerate all c-isolated maximal cliques in polynomial time. Note that there is a graph which has a superlinear number of c-isolated cliques if c is not a constant, and there is a graph which has a superpolynomial number of c-isolated cliques if c = ω(log n). In this sense our algorithm is optimal for the linear-time and polynomial-time enumeration of c-isolated cliques.
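The c-isolation condition itself is simple to check. A minimal sketch (our own, assuming an adjacency-set representation of the graph):

```python
def is_c_isolated(adj, S, c):
    """A clique S (set of vertices) in graph `adj` (vertex -> set of
    neighbours) is c-isolated if at most c*|S| edges leave S."""
    outgoing = sum(len(adj[v] - S) for v in S)
    return outgoing <= c * len(S)
```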

Hiro Ito, Kazuo Iwama, Tsuyoshi Osumi
Finding Shortest Non-separating and Non-contractible Cycles for Topologically Embedded Graphs

We present an algorithm for finding shortest surface non-separating cycles in graphs with given edge-lengths that are embedded on surfaces. The time complexity is $O(g^{3/2}V^{3/2}\log V + g^{5/2}V^{1/2})$, where V is the number of vertices in the graph and g is the genus of the surface. If $g = o(V^{1/3-\epsilon})$, this represents a considerable improvement over previous results by Thomassen, and Erickson and Har-Peled. We also give algorithms to find a shortest non-contractible cycle in $O(g^{O(g)}V^{3/2})$ time, improving previous results for fixed genus.

This result can be applied to computing the (non-separating) face-width of embedded graphs. Using similar ideas we provide the first near-linear running time algorithm for computing the face-width of a graph embedded on the projective plane, and an algorithm to find the face-width of embedded toroidal graphs in $O(V^{5/4}\log V)$ time.

Sergio Cabello, Bojan Mohar
Delineating Boundaries for Imprecise Regions

In geographic information retrieval, queries often use names of geographic regions that do not have a well-defined boundary, such as “Southern France.” We provide two classes of algorithms for the problem of computing reasonable boundaries of such regions, based on evidence of given data points that are deemed likely to lie either inside or outside the region. Our problem formulation leads to a number of problems related to red-blue point separation and minimum-perimeter polygons, many of which we solve algorithmically. We give experimental results from our implementation and a comparison of the two approaches.

Iris Reinbacher, Marc Benkert, Marc van Kreveld, Joseph S. B. Mitchell, Alexander Wolff
Exacus: Efficient and Exact Algorithms for Curves and Surfaces

We present the first release of the Exacus C++ libraries. We aim for systematic support of non-linear geometry in software libraries. Our goals are efficiency, correctness, completeness, clarity of the design, modularity, flexibility, and ease of use. We present the generic design and structure of the libraries, which currently compute arrangements of curves and curve segments of low algebraic degree, and boolean operations on polygons bounded by such segments.

Eric Berberich, Arno Eigenwillig, Michael Hemmer, Susan Hert, Lutz Kettner, Kurt Mehlhorn, Joachim Reichel, Susanne Schmitt, Elmar Schömer, Nicola Wolpert
Min Sum Clustering with Penalties

Traditionally, clustering problems are investigated under the assumption that all objects must be clustered. A shortcoming of this formulation is that a few distant objects, called outliers, may exert a disproportionately strong influence over the solution. In this work we investigate the k-min-sum clustering problem while addressing outliers in a meaningful way.

Given a complete graph G = (V, E), a weight function w : E → ℕ₀ on its edges, and a penalty function p : V → ℕ₀ on its nodes, the penalized k-min-sum problem is the problem of finding a partition of V into k + 1 sets $\{S_{1},\ldots,S_{k+1}\}$ minimizing $\sum_{i=1}^{k} w(S_{i}) + p(S_{k+1})$, where for $S \subseteq V$, $w(S) = \sum_{e=\{i,j\} \subseteq S} w_{e}$ and $p(S) = \sum_{i \in S} p_{i}$.
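To make the objective concrete, here is a small sketch (ours, not the paper's) that evaluates the penalized k-min-sum cost for a given partition, with the last part playing the role of the penalty set:

```python
from itertools import combinations

def penalized_cost(w, p, parts):
    """Penalized k-min-sum objective: the last part in `parts` is the
    penalty set; every other part pays the total weight of edges inside
    it.  `w` maps frozenset({i, j}) to an edge weight, `p` maps a
    vertex to its penalty."""
    *clusters, penalty_set = parts
    cost = sum(p[v] for v in penalty_set)
    for S in clusters:
        cost += sum(w.get(frozenset(e), 0) for e in combinations(S, 2))
    return cost
```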

We offer an efficient 2-approximation to the penalized 1-min-sum problem using a primal-dual algorithm. We prove that the penalized 1-min-sum problem is NP-hard even if w is a metric, and present a randomized approximation scheme for it. For the metric penalized k-min-sum problem we offer a 2-approximation.

Refael Hassin, Einat Or
Improved Approximation Algorithms for Metric Max TSP

We present two polynomial-time approximation algorithms for the metric case of the maximum traveling salesman problem. One of them is for directed graphs and its approximation ratio is $\frac{27}{35}$. The other is for undirected graphs and its approximation ratio is $\frac{7}{8} - o(1)$. Both algorithms improve on the previous bests.

Zhi-Zhong Chen, Takayuki Nagoya
Unbalanced Graph Cuts

We introduce the Minimum-size bounded-capacity cut (MinSBCC) problem, in which we are given a graph with an identified source and seek to find a cut minimizing the number of nodes on the source side, subject to the constraint that its capacity not exceed a prescribed bound B. Besides being of interest in the study of graph cuts, this problem arises in many practical settings, such as in epidemiology, disaster control, military containment, as well as finding dense subgraphs and communities in graphs.

In general, the MinSBCC problem is NP-complete. We present an efficient $(\frac{1}{\lambda},\frac{1}{1-\lambda})$-bicriteria approximation algorithm for any 0 < λ < 1; that is, the algorithm finds a cut of capacity at most $\frac{1}{\lambda}B$, leaving at most $\frac{1}{1-\lambda}$ times more vertices on the source side than the optimal solution with capacity B. In fact, the algorithm’s solution either violates the budget constraint, or exceeds the optimal number of source-side nodes, but not both. For graphs of bounded treewidth, we show that the problem with unit weight nodes can be solved optimally in polynomial time, and when the nodes have weights, approximated arbitrarily well by a PTAS.
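A candidate solution to MinSBCC is just a choice of source side. A minimal helper (our own sketch, not the paper's algorithm) evaluates its two criteria, the cut capacity and the number of source-side nodes:

```python
def cut_stats(edges, source_side):
    """For a cut given by its source side (a set containing the source),
    return (capacity, size): the total weight of edges crossing the cut
    and the number of source-side nodes."""
    capacity = sum(w for u, v, w in edges
                   if (u in source_side) != (v in source_side))
    return capacity, len(source_side)
```

A solution is feasible when the returned capacity is at most the budget B; the objective to minimize is the returned size.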

Ara Hayrapetyan, David Kempe, Martin Pál, Zoya Svitkina
Low Degree Connectivity in Ad-Hoc Networks

The aim of the paper is to investigate the average case behavior of certain algorithms that are designed for connecting mobile agents in the two- or three-dimensional space. The general model is the following: let X be a set of points in the d-dimensional Euclidean space $E^{d}$, d ≥ 2, and let r be a function that associates each element x ∈ X with a positive real number r(x). The graph G(X, r) is an oriented graph with the vertex set X, in which (x, y) is an edge if and only if ρ(x, y) ≤ r(x), where ρ(x, y) denotes the Euclidean distance in the space $E^{d}$. Given a set X, the goal is to find a function r so that the graph G(X, r) is strongly connected (note that the graph G(X, r) need not be symmetric). Given a random set of points, the function r computed by the algorithm of the present paper is such that, for any constant δ, the average value of $r(x)^{\delta}$ (the average transmitter power) is almost surely constant.
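The model is easy to simulate. The sketch below (ours, intended only for small point sets) builds G(X, r) and tests strong connectivity by checking reachability from one node in both the graph and its reverse:

```python
import math

def strongly_connected(points, r):
    """Build the oriented graph G(X, r) -- edge (x, y) iff the Euclidean
    distance from x to y is at most r(x) -- and test strong connectivity
    via reachability of all nodes from node 0 in the graph and in its
    reverse (Kosaraju-style check)."""
    n = len(points)
    dist = lambda a, b: math.dist(points[a], points[b])
    succ = [[v for v in range(n) if v != u and dist(u, v) <= r(points[u])]
            for u in range(n)]
    pred = [[u for u in range(n) if v in succ[u]] for v in range(n)]

    def reach(adj):
        seen, stack = {0}, [0]
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return len(seen) == n

    return reach(succ) and reach(pred)
```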

Luděk Kučera
5-Regular Graphs are 3-Colorable with Positive Probability

We show that uniformly random 5-regular graphs of n vertices are 3-colorable with probability that is positive independently of n.

J. Díaz, G. Grammatikopoulos, A. C. Kaporis, L. M. Kirousis, X. Pérez, D. G. Sotiropoulos
Optimal Integer Alphabetic Trees in Linear Time

We show that optimal alphabetic binary trees can be constructed in O(n) time if the elements of the initial sequence are drawn from a domain that can be sorted in linear time. We describe a hybrid algorithm [6] that combines the bottom-up approach of the original Hu-Tucker algorithm with the top-down approach of Larmore and Przytycka’s Cartesian tree algorithms. The hybrid algorithm demonstrates the computational equivalence of sorting and level tree construction.

T. C. Hu, Lawrence L. Larmore, J. David Morgenthaler
Predecessor Queries in Constant Time?

In this paper we design a new static data structure for batched predecessor queries. In particular, our data structure supports $O(\sqrt{{\rm log}n})$ queries in O(1) time per query and requires $O(n^{\epsilon\sqrt{{\rm log}n}})$ space for any ε > 0. This is the first o(N) space and O(1) amortized time data structure for arbitrary $N = \Omega(n^{\epsilon\sqrt{{\rm log}n}})$, where N is the size of the universe. We also present a data structure that answers O(log log N) predecessor queries in O(1) time per query and requires $O(n^{\epsilon{\rm log log} N})$ space for any ε > 0. The method of solution relies on a certain way of searching for predecessors of all elements of the query in parallel.

In a general case, our approach leads to a data structure that supports p(n) queries in $O(\sqrt{{\rm log} n}/p(n))$ time per query and requires $O(n^{p(n)})$ space for any $p(n) =O(\sqrt{{\rm log}n})$, and a data structure that supports p(N) queries in O(log log N / p(N)) time per query and requires $O(n^{p(N)})$ space for any p(N) = O(log log N).

Marek Karpinski, Yakov Nekrich
An Algorithm for Node-Capacitated Ring Routing

A strongly polynomial time algorithm is described to solve the node-capacitated routing problem in an undirected ring network.

András Frank, Zoltán Király, Balázs Kotnyek
On Degree Constrained Shortest Paths

Traditional shortest path problems play a central role in both the design and use of communication networks and have been studied extensively. In this work, we consider a variant of the shortest path problem. The network has two kinds of edges, “actual” edges and “potential” edges. In addition, each vertex has a degree/interface constraint. We wish to compute a shortest path in the graph that maintains feasibility when we convert the potential edges on the shortest path to actual edges. The central difficulty is when a node has only one free interface, and the unconstrained shortest path chooses two potential edges incident on this node. We first show that this problem can be solved in polynomial time by reducing it to the minimum weighted perfect matching problem. The number of steps taken by this algorithm is $O(|E|^{2}\log|E|)$ for the single-source single-destination case. In other words, for each v we compute the shortest path $P_{v}$ such that converting the potential edges on $P_{v}$ to actual edges does not violate any degree constraint. We then develop more efficient algorithms by extending Dijkstra’s shortest path algorithm. The number of steps taken by the latter algorithm is O(|E||V|), even for the single-source all-destination case.
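The feasibility condition on a candidate path can be checked directly. A hedged sketch (our own representation, not the paper's): converting a potential edge consumes one free interface at each of its endpoints, so a path is infeasible exactly when some node's free interfaces are oversubscribed.

```python
def path_feasible(path_edges, free_interfaces):
    """Check the degree/interface constraint for a candidate path.
    `path_edges` is a list of (u, v, is_potential) triples along the
    path; converting each potential edge uses one free interface at
    both endpoints."""
    used = {}
    for u, v, is_potential in path_edges:
        if is_potential:
            used[u] = used.get(u, 0) + 1
            used[v] = used.get(v, 0) + 1
    return all(used[x] <= free_interfaces.get(x, 0) for x in used)
```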

Samir Khuller, Kwangil Lee, Mark Shayman
A New Template for Solving p-Median Problems for Trees in Sub-quadratic Time
(Extended Abstract)

We propose an $O(n\,{\rm log}^{p+2}\,n)$ algorithm for solving the well-known p-Median problem for trees. Our analysis relies on the fact that p is considered constant (in practice, very often p ≪ n). This is the first result in almost 25 years that proposes a new algorithm for solving this problem, opening up several new avenues for research.

Robert Benkoczi, Binay Bhattacharya
Roll Cutting in the Curtain Industry
(Extended Abstract)

We study the problem of cutting a number of pieces of the same length from n rolls of different lengths so that the remaining part of each utilized roll is either sufficiently short or sufficiently long. A piece is sufficiently short if it is shorter than a pre-specified threshold value $\delta_{\min}$, so that it can be thrown away as it cannot be used again for cutting future orders. And a piece is sufficiently long if it is longer than a pre-specified threshold value $\delta_{\max}$ (with $\delta_{\max} > \delta_{\min}$), so that it can reasonably be expected to be usable for cutting future orders of almost any length. We show that this problem, faced by a curtaining wholesaler, is solvable in O(n log n) time by analyzing a non-trivial class of allocation problems.

Arianna Alfieri, Steef L. van de Velde, Gerhard J. Woeginger
Space Efficient Algorithms for the Burrows-Wheeler Backtransformation

The Burrows-Wheeler transformation is used for effective data compression, e.g., in the well known program bzip2. Compression and decompression are done in a block-wise fashion; larger blocks usually result in better compression rates. With the currently used algorithms for decompression, 4n bytes of auxiliary memory for processing a block of n bytes are needed, 0 < n < $2^{32}$. This may pose a problem in embedded systems (e.g., mobile phones), where RAM is a scarce resource. In this paper we present algorithms that reduce the memory need without sacrificing speed too much.

The main results are: Assuming an input string of n characters, 0 < n < $2^{32}$, the reverse Burrows-Wheeler transformation can be done with 1.625n bytes of auxiliary memory and O(n) runtime, using just a few operations per input character. Alternatively, we can use n/t bytes and 256tn operations. The theoretical results are backed up by experimental data showing the space-time tradeoff.
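For reference, the standard array-based inverse transform that the paper improves on space-wise can be sketched as follows (our own sketch using O(n) auxiliary lists; the `idx` convention, the row number of the original string among the sorted rotations, follows common BWT expositions rather than this paper):

```python
def inverse_bwt(last, idx):
    """Reverse Burrows-Wheeler transform via the LF-mapping.
    `last` is the BWT output (last column of the sorted rotations),
    `idx` the row index of the original string."""
    n = len(last)
    # rank[i]: occurrences of last[i] among last[0..i-1]
    counts, rank = {}, [0] * n
    for i, ch in enumerate(last):
        rank[i] = counts.get(ch, 0)
        counts[ch] = rank[i] + 1
    # first[ch]: position of the first occurrence of ch in the sorted column
    first, total = {}, 0
    for ch in sorted(counts):
        first[ch] = total
        total += counts[ch]
    # walk the LF-mapping, collecting the string from last char to first
    out, i = [], idx
    for _ in range(n):
        out.append(last[i])
        i = first[last[i]] + rank[i]
    return "".join(reversed(out))
```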

Ulrich Lauther, Tamás Lukovszki
Cache-Oblivious Comparison-Based Algorithms on Multisets

We study three comparison-based problems related to multisets in the cache-oblivious model: duplicate elimination, multisorting and finding the most frequent element (the mode). We are interested in minimizing the cache complexity (or number of cache misses) of algorithms for these problems in the setting where cache size and block size are unknown. We give algorithms with cache complexities within a constant factor of the optimal for all the problems. In the case of determining the mode, the optimal algorithm is randomized, as the deterministic algorithm differs from the lower bound by a sublogarithmic factor. We can achieve optimality either with a randomized method or if we are given, along with the input, lg lg of the relative frequency of the mode up to a constant additive error.

Arash Farzan, Paolo Ferragina, Gianni Franceschini, J. Ian Munro
Oblivious vs. Distribution-Based Sorting: An Experimental Evaluation

We compare two algorithms for sorting out-of-core data on a distributed-memory cluster. One algorithm, Csort, is a 3-pass oblivious algorithm. The other, Dsort, makes two passes over the data and is based on the paradigm of distribution-based algorithms. In the context of out-of-core sorting, this study is the first comparison between the paradigms of distribution-based and oblivious algorithms. Dsort avoids two of the four steps of a typical distribution-based algorithm by making simplifying assumptions about the distribution of the input keys. Csort makes no assumptions about the keys. Despite the simplifying assumptions, the I/O and communication patterns of Dsort depend heavily on the exact sequence of input keys. Csort, on the other hand, takes advantage of predetermined I/O and communication patterns, governed entirely by the input size, in order to overlap computation, communication, and I/O. Experimental evidence shows that, even on inputs that followed Dsort’s simplifying assumptions, Csort fared well. The running time of Dsort showed great variation across five input cases, whereas Csort sorted all of them in approximately the same amount of time. In fact, Dsort ran significantly faster than Csort in just one out of the five input cases: the one that was the most unrealistically skewed in favor of Dsort. A more robust implementation of Dsort—one without the simplifying assumptions—would run even slower.

Geeta Chaudhry, Thomas H. Cormen
Allocating Memory in a Lock-Free Manner

The potential of multiprocessor systems is often not fully realized by their system services. Certain synchronization methods, such as lock-based ones, may limit the parallelism. It is important to see the impact of wait/lock-free synchronization design in key services for multiprocessor systems, such as the memory allocation service. Efficient, scalable memory allocation for multithreaded applications on multiprocessors is a significant goal of recent research projects.

We propose a lock-free memory allocator, to enhance the parallelism in the system. Its architecture is inspired by Hoard, a successful concurrent memory allocator, with a modular, scalable design that preserves scalability and helps avoid false-sharing and heap blowup. As part of our effort to design appropriate lock-free algorithms for this system, we propose a new non-blocking data structure called flat-sets, supporting conventional “internal” operations as well as “inter-object” operations for moving items between flat-sets.

We implemented the memory allocator on a set of multiprocessor systems (UMA Sun Enterprise 450 and ccNUMA Origin 3800) and studied its behaviour. The results show that the good properties of Hoard w.r.t. false-sharing and heap-blowup are preserved, while the scalability properties are enhanced even further with the help of lock-free synchronization.

Anders Gidenstam, Marina Papatriantafilou, Philippas Tsigas
Generating Realistic Terrains with Higher-Order Delaunay Triangulations

For hydrologic applications, terrain models should have few local minima, and drainage lines should coincide with edges. We show that triangulating a set of points with elevations such that the number of local minima of the resulting terrain is minimized is NP-hard for degenerate point sets. The same result applies when there are no degeneracies for higher-order Delaunay triangulations. Two heuristics are presented to reduce the number of local minima for higher-order Delaunay triangulations, which start out with the Delaunay triangulation. We give efficient algorithms for their implementation, and test on real-world data how well they perform. We also study another desirable drainage characteristic, namely few valley components.

Thierry de Kok, Marc van Kreveld, Maarten Löffler
I/O-Efficient Construction of Constrained Delaunay Triangulations

In this paper, we design and implement an I/O-efficient algorithm for constructing constrained Delaunay triangulations. If the number of constraining segments is smaller than the memory size, our algorithm runs in expected $O(\frac{N}{B}{\rm log}_{M/B}\frac{N}{B})$ I/Os for triangulating N points in the plane, where M is the memory size and B is the disk block size. If there are more constraining segments, the theoretical bound does not hold, but in practice the performance of our algorithm degrades gracefully. Through an extensive set of experiments with both synthetic and real data, we show that our algorithm is significantly faster than existing implementations.

Pankaj K. Agarwal, Lars Arge, Ke Yi
Convex Hull and Voronoi Diagram of Additively Weighted Points

We provide a complete description of dynamic algorithms for constructing convex hulls and Voronoi diagrams of additively weighted points of ${\mathbb R}^{d}$. We present simple algorithms and provide a description of the predicates. The algorithms have been implemented in ${\mathbb R}^{3}$ and experimental results are reported. Our implementation follows the CGAL design and, in particular, is made both robust and efficient through the use of filtered exact arithmetic.

Jean-Daniel Boissonnat, Christophe Delage
New Tools and Simpler Algorithms for Branchwidth

We provide new tools, such as k-troikas and good subtree-representations, that allow us to give fast and simple algorithms computing branchwidth. We show that a graph G has branchwidth at most k if and only if it is a subgraph of a chordal graph in which every maximal clique has a k-troika respecting its minimal separators. Moreover, if G itself is chordal with clique tree T then such a chordal supergraph exists having clique tree a minor of T. We use these tools to give a straightforward $O(m + n + q^2)$ algorithm computing branchwidth for an interval graph on m edges, n vertices and q maximal cliques. We also prove a conjecture of F. Mazoit [13] by showing that branchwidth is polynomial on a chordal graph given with a clique tree having a polynomial number of subtrees.

Christophe Paul, Jan Arne Telle
Treewidth Lower Bounds with Brambles

In this paper we present a new technique for computing lower bounds for graph treewidth. Our technique is based on the characterisation of the treewidth as the maximum order of a bramble of the graph. We give two algorithms: one for general graphs, and one for planar graphs. The algorithm for planar graphs is shown to give a lower bound for the treewidth that is at most a constant factor away from the exact treewidth. For both algorithms, we report on extensive computational experiments that show that the algorithms often give excellent lower bounds, in particular when applied to (close to) planar graphs.

Hans L. Bodlaender, Alexander Grigoriev, Arie M. C. A. Koster
Minimal Interval Completions

We study the problem of adding edges to an arbitrary graph so that the resulting graph is an interval graph. Our objective is to add an inclusion minimal set of edges, which means that no proper subset of the added edges can result in an interval graph when added to the original graph. We give a polynomial time algorithm to obtain a minimal interval completion of an arbitrary graph, thereby resolving the complexity of this problem.

Pinar Heggernes, Karol Suchan, Ioan Todinca, Yngve Villanger
A 2-Approximation Algorithm for Sorting by Prefix Reversals

Sorting by Prefix Reversals, also known as Pancake Flipping, is the problem of transforming a given permutation into the identity permutation, where the only allowed operations are reversals of a prefix of the permutation. The complexity of the problem is still unknown, and no algorithm with an approximation ratio better than 3 is known. We present the first polynomial-time 2-approximation algorithm to solve this problem. Empirical tests suggest that the average performance is in fact better than 2.
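As a concrete illustration of the prefix-reversal operation (this is the classical naive pancake-sort strategy, not the paper's 2-approximation algorithm), any permutation can be sorted with at most 2n – 3 flips:

```python
def prefix_reversal_sort(perm):
    """Sort a permutation using only prefix reversals (pancake flips).

    Naive strategy: repeatedly flip the largest unplaced element to the
    front, then flip it into its final position. Uses at most 2n - 3 flips.
    """
    perm = list(perm)
    flips = []

    def flip(k):  # reverse the prefix perm[0:k] and record the flip length
        perm[:k] = reversed(perm[:k])
        flips.append(k)

    for size in range(len(perm), 1, -1):
        pos = perm.index(max(perm[:size]))  # largest element not yet placed
        if pos == size - 1:
            continue  # already in its final position
        if pos != 0:
            flip(pos + 1)  # bring it to the front
        flip(size)  # flip it into position size - 1
    return perm, flips
```

For example, `prefix_reversal_sort([3, 1, 4, 2])` returns the sorted list together with the sequence of flip lengths used.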

Johannes Fischer, Simon W. Ginzinger
Approximating the 2-Interval Pattern Problem

We address the problem of approximating the 2-Interval Pattern problem over its various models and restrictions. This problem, which is motivated by RNA secondary structure prediction, asks to find a maximum cardinality subset of a 2-interval set with respect to some prespecified model. For each such model, we give varying approximation quality depending on the different possible restrictions imposed on the input 2-interval set.

Maxime Crochemore, Danny Hermelin, Gad M. Landau, Stéphane Vialette
A Loopless Gray Code for Minimal Signed-Binary Representations

A string $\ldots a_2 a_1 a_0$ over the alphabet {–1,0,1} is said to be a minimal signed-binary representation of an integer n if $n = \sum_{k \geq 0} a_k 2^k$ and the number of non-zero digits is minimal. We present a loopless (and hence a Gray code) algorithm for generating all minimal signed-binary representations of a given integer n.
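One canonical minimal signed-binary representation is the non-adjacent form (NAF), in which no two consecutive digits are non-zero. The sketch below computes it digit by digit; note that it produces a single minimal representation, whereas the paper's algorithm generates all of them:

```python
def naf(n):
    """Non-adjacent form of a positive integer n: digits a_0, a_1, ... in
    {-1, 0, 1}, least-significant first, with no two adjacent non-zeros.
    The NAF is a minimal signed-binary representation of n."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            d = 0
        else:
            # choose d = +1 when n = 1 (mod 4), d = -1 when n = 3 (mod 4),
            # so that (n - d) is divisible by 4 and the next digit is 0
            d = 2 - (n % 4)
        digits.append(d)
        n = (n - d) // 2
    return digits
```

For example, `naf(7)` gives the digits of 8 – 1, i.e. `[-1, 0, 0, 1]`.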

Gurmeet Singh Manku, Joe Sawada
Efficient Approximation Schemes for Geometric Problems?

An EPTAS (efficient PTAS) is an approximation scheme where ε does not appear in the exponent of n, i.e., the running time is $f(\epsilon) \cdot n^{c}$. We use parameterized complexity to investigate the possibility of improving the known approximation schemes for certain geometric problems to EPTAS. Answering an open question of Alber and Fiala [2], we show that Maximum Independent Set is W[1]-complete for the intersection graphs of unit disks and axis-parallel unit squares in the plane. A standard consequence of this result is that the $n^{O(1/{\it \epsilon})}$ time PTAS of Hunt et al. [11] for Maximum Independent Set on unit disk graphs cannot be improved to an EPTAS. Similar results are obtained for the problem of covering points with squares.

Dániel Marx
Geometric Clustering to Minimize the Sum of Cluster Sizes

We study geometric versions of the min-size k-clustering problem, a clustering problem which generalizes clustering to minimize the sum of cluster radii and has important applications. We prove that the problem can be solved in polynomial time when the points to be clustered are located on a line. For Euclidean spaces of higher dimensions, we show that the problem is NP-hard and present polynomial time approximation schemes. The latter result yields an improved approximation algorithm for the related problem of k-clustering to minimize the sum of cluster diameters.

Vittorio Bilò, Ioannis Caragiannis, Christos Kaklamanis, Panagiotis Kanellopoulos
Approximation Schemes for Minimum 2-Connected Spanning Subgraphs in Weighted Planar Graphs

We present new approximation schemes for various classical problems of finding the minimum-weight spanning subgraph in edge-weighted undirected planar graphs that are resistant to edge or vertex removal. We first give a PTAS for the problem of finding minimum-weight 2-edge-connected spanning subgraphs where duplicate edges are allowed. Then we present a new greedy spanner construction for edge-weighted planar graphs, which augments any connected subgraph A of a weighted planar graph G to a (1+ε)-spanner of G with total weight bounded by weight(A)/ε. From this we derive quasi-polynomial time approximation schemes for the problems of finding the minimum-weight 2-edge-connected or biconnected spanning subgraph in planar graphs. We also design approximation schemes for the minimum-weight 1-2-connectivity problem, which is the variant of the survivable network design problem where vertices have 1 or 2 connectivity constraints. Prior to our work, for all these problems no polynomial or quasi-polynomial time algorithms were known to achieve an approximation ratio better than 2.

André Berger, Artur Czumaj, Michelangelo Grigni, Hairong Zhao
Packet Routing and Information Gathering in Lines, Rings and Trees

We study the problem of online packet routing and information gathering in lines, rings and trees. A network consists of n nodes, each with a buffer of size B. Each buffer can transmit one packet to the next buffer at each time step. Packet injection is under adversarial control. Packets arriving at a full buffer must be discarded. In information gathering all packets have the same destination. If a packet reaches the destination it is absorbed. The goal is to maximize the number of absorbed packets. Previous studies have shown that even on the line topology this problem is difficult to handle by online algorithms. A lower bound of ${\it \Omega}(\sqrt{n})$ on the competitiveness of the Greedy algorithm was presented by Aiello et al. in [1]. All other known algorithms have a near linear competitive ratio. In this paper we give the first O(log n) competitive deterministic algorithm for the information gathering problem in lines, rings and trees. We also consider multi-destination routing where the destination of a packet may be any node. For lines and rings we show an O(log² n) competitive randomized algorithm. Both for information gathering and for multi-destination routing, our results improve exponentially on the previous results.

Yossi Azar, Rafi Zachut
Jitter Regulation for Multiple Streams
(Extended Abstract)

For widely-used interactive communication, it is essential that traffic is kept as smooth as possible; the smoothness of the traffic is typically captured by its delay jitter, i.e., the difference between the maximal and minimal end-to-end delays. The task of minimizing the jitter is done by jitter regulators that use a limited-size buffer in order to shape the traffic. In many real-life situations regulators must handle multiple streams simultaneously and provide low jitter on each of them separately. This paper investigates the problem of minimizing jitter in such an environment, using a fixed-size buffer.

We show that the offline version of the problem can be solved in polynomial time, by introducing an efficient offline algorithm that finds a release schedule with optimal jitter. When regulating M streams in the online setting, we take a competitive analysis point of view and note that previous results in [1] can be extended to an online algorithm that uses a buffer of size 2MB and obtains the optimal jitter possible with a buffer of size B. The question arises whether such resource augmentation is essential. We answer this question in the affirmative, by proving a lower bound that is tight up to a factor of 2, thus showing that jitter regulation does not scale well as the number of streams increases unless the buffer is sized up proportionally.

David Hay, Gabriel Scalosub
Efficient c-Oriented Range Searching with DOP-Trees

A c-dop is a c-oriented convex polytope, that is, a convex polytope whose edges have orientations that come from a fixed set of c orientations. In this paper we study dop-trees (bounding-volume hierarchies that use c-dops as bounding volumes) in the plane. We prove that for any set S of n disjoint c-dops in the plane, one can construct a dop-tree such that a range query with a c-dop as query range can be answered in $O(n^{1/2+\epsilon} + k)$ time, where k is the number of reported answers. This is optimal up to the factor $O(n^{\epsilon})$. If the c-dops in S may intersect, the query time becomes $O(n^{1-1/c} + k)$, which is optimal.

Mark de Berg, Herman Haverkort, Micha Streppel
Matching Point Sets with Respect to the Earth Mover’s Distance

The Earth Mover’s Distance (EMD) between two weighted point sets (point distributions) is a distance measure commonly used in computer vision for color-based image retrieval and shape matching. It measures the minimum amount of work needed to transform one set into the other one by weight transportation.
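On the real line the EMD admits a simple closed form: for two point distributions of equal total weight, it equals the integral of the absolute difference of their cumulative distribution functions. The sketch below uses this 1-D reduction for intuition only; it is not the paper's planar matching algorithm:

```python
def emd_1d(a, b):
    """Earth Mover's Distance between two weighted point sets on the line.

    a, b: lists of (position, weight) with equal total weight. In 1-D the
    EMD equals the integral of |CDF_a - CDF_b| over the real line.
    """
    events = sorted(set(x for x, _ in a) | set(x for x, _ in b))
    total = 0.0
    cdf_diff = 0.0  # CDF_a(x) - CDF_b(x) just left of the current event
    prev = events[0]
    for x in events:
        total += abs(cdf_diff) * (x - prev)  # area accumulated since prev
        cdf_diff += sum(w for p, w in a if p == x)
        cdf_diff -= sum(w for p, w in b if p == x)
        prev = x
    return total
```

For instance, moving a unit of weight from position 0 to position 3 costs 3.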

We study the following shape matching problem: Given two weighted point sets A and B in the plane, compute a rigid motion of A that minimizes its Earth Mover’s Distance to B. No algorithm is known that computes an exact solution to this problem. We present simple FPTAS and polynomial-time (2+ε)-approximation algorithms for the minimum Euclidean EMD between A and B under translations and rigid motions.

Sergio Cabello, Panos Giannopoulos, Christian Knauer, Günter Rote
Small Stretch Spanners on Dynamic Graphs

We present fully dynamic algorithms for maintaining 3- and 5-spanners of undirected graphs. For unweighted graphs we maintain a 3- or 5-spanner under insertions and deletions of edges in O(n) amortized time per operation over a sequence of Ω(n) updates. The maintained 3-spanner (resp., 5-spanner) has $O(n^{3/2})$ edges (resp., $O(n^{4/3})$ edges), which is known to be optimal. On weighted graphs with d different edge cost values, we maintain a 3- or 5-spanner in O(n) amortized time per operation over a sequence of Ω(dn) updates. The maintained 3-spanner (resp., 5-spanner) has $O(dn^{3/2})$ edges (resp., $O(dn^{4/3})$ edges). The same approach can be extended to graphs with real-valued edge costs in the range [1, C]. All our algorithms are deterministic and are substantially faster than recomputing a spanner from scratch after each update.

Giorgio Ausiello, Paolo G. Franciosa, Giuseppe F. Italiano
An Experimental Study of Algorithms for Fully Dynamic Transitive Closure

We have conducted an extensive experimental study on some recent, theoretically outstanding, algorithms for fully dynamic transitive closure along with several variants of them, and compared them to pseudo fully dynamic and simple-minded algorithms developed in a previous study. We tested and compared these implementations on random inputs, synthetic (worst-case) inputs, and on inputs motivated by real-world graphs. Our experiments reveal that some of the fully dynamic algorithms can really be of practical value in many situations.

Ioannis Krommidas, Christos Zaroliagis
Experimental Study of Geometric t-Spanners

The construction of t-spanners of a given point set has received a lot of attention, especially from a theoretical perspective. In this paper we perform the first extensive experimental study of the properties of t-spanners. The main aim is to examine the quality of the produced spanners in the plane. We implemented the most common t-spanner algorithms and tested them on a number of different point sets. The experiments are discussed and compared to the theoretical results, and in several cases we suggest modifications that are implemented and evaluated. The quality measurements that we consider are the number of edges, the weight, the maximum degree, the diameter and the number of crossings.

Mohammad Farshi, Joachim Gudmundsson
Highway Hierarchies Hasten Exact Shortest Path Queries

We present a new speedup technique for route planning that exploits the hierarchy inherent in real world road networks. Our algorithm preprocesses the eight-digit number of nodes needed for maps of the USA or Western Europe in a few hours using linear space. Shortest (i.e. fastest) path queries then take around eight milliseconds to produce exact shortest paths. This is about 2 000 times faster than using Dijkstra’s algorithm.

Peter Sanders, Dominik Schultes
Preemptive Scheduling of Independent Jobs on Identical Parallel Machines Subject to Migration Delays

We present hardness and approximation results for the problem of scheduling n independent jobs on m identical parallel machines subject to a migration delay d so as to minimize the makespan. We give a sharp threshold on the value of d for which the complexity of the problem changes from polynomial time solvable to NP-hard. We give initial results supporting a conjecture that there always exists an optimal schedule in which at most m – 1 jobs migrate. Further, we give an O(n) time O(1 + 1/log₂ n)-approximation algorithm for m = 2, and show that there is a polynomial time approximation scheme for arbitrary m.

Aleksei V. Fishkin, Klaus Jansen, Sergey V. Sevastyanov, René Sitters
Fairness-Free Periodic Scheduling with Vacations

We consider a problem of repeatedly scheduling n jobs on m parallel machines. Each job is associated with a profit, gained each time the job is completed, and the goal is to maximize the average profit per time unit. Once the processing of a job is completed, it goes on vacation and returns to the system, ready to be processed again, only after its vacation is over. This problem has many applications, in production planning, machine maintenance, media-on-demand and database query processing, among others.

We show that the problem is NP-hard already for jobs with unit processing times and unit profits, and develop approximation algorithms, as well as optimal algorithms for certain subclasses of instances. In particular, we show that a preemptive greedy algorithm achieves a ratio of 2 to the optimal for instances with arbitrary processing times and arbitrary profits. For the special case of unit processing times, we present a 1.67-approximation algorithm for instances with arbitrary profits, and a 1.39-approximation algorithm for instances where all jobs have the same (unit) profits. For the latter case, we also show that when the load generated by an instance is sufficiently large (in terms of n and m), any algorithm that uses no intended idle times yields an optimal schedule.

Jiří Sgall, Hadas Shachnai, Tami Tamir
Online Bin Packing with Cardinality Constraints

We consider a one-dimensional storage system where each container can store a bounded amount of capacity as well as a bounded number of items k ≥ 2. This defines the (standard) bin packing problem with cardinality constraints, which is an important version of bin packing, introduced by Krause, Shen and Schwetman as early as 1975. Following previous work on the unbounded space online problem, we establish the exact best competitive ratio for bounded space online algorithms for every value of k. This competitive ratio is a strictly increasing function of k which tends to ${\it \Pi}_{\infty}+1\approx 2.69103$ for large k. Lee and Lee showed in 1985 that the best possible competitive ratio for online bounded space algorithms for the classical bin packing problem is the sum of a series, and tends to ${\it \Pi}_{\rm \infty}$ as the allowed space (number of open bins) tends to infinity. We further design optimal online bounded space algorithms for variable sized bin packing, where each allowed bin size may have a distinct cardinality constraint, and for the resource augmentation model. All algorithms achieve the exact best possible competitive ratio for the given problem, and use constant numbers of open bins. Finally, we introduce unbounded space online algorithms with smaller competitive ratios than the previously known best algorithms for small values of k, for the standard cardinality constrained problem. These are the first algorithms with competitive ratio below 2 for k = 4, 5, 6.
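To make the cardinality constraint concrete, here is a simple first-fit heuristic that respects both the capacity bound and the item bound k. This is illustrative only; the optimal bounded-space algorithms of the paper are more involved:

```python
def first_fit_cardinality(items, k):
    """First-fit for bin packing with cardinality constraints: each bin
    holds total size at most 1 and at most k items."""
    bins = []  # each bin is a list of item sizes
    for size in items:
        for b in bins:
            # a bin accepts the item only if both constraints stay satisfied
            if sum(b) + size <= 1 and len(b) < k:
                b.append(size)
                break
        else:
            bins.append([size])  # no existing bin fits: open a new one
    return bins
```

Note that with k = 2, four items of size 0.3 need two bins even though a single bin has enough capacity for three of them.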

Leah Epstein
Fast Monotone 3-Approximation Algorithm for Scheduling Related Machines

We consider the problem of scheduling n jobs on m machines of different speeds such that the makespan is minimized (Q||C_max). We provide a fast and simple, deterministic monotone 3-approximation algorithm for Q||C_max. Monotonicity is relevant in the context of truthful mechanisms: when each machine speed is only known to the machine itself, we need to motivate the machines to declare their true speeds to the scheduling mechanism. As shown by Archer and Tardos, such motivation is possible only if the scheduling algorithm used by the mechanism is monotone. The best previous monotone algorithm that is polynomial in m was a 5-approximation by Andelman et al. A randomized 2-approximation method, satisfying a weaker definition of truthfulness, is given by Archer. As a core result, we prove the conjecture of Auletta et al. that the greedy algorithm (Lpt) is monotone if machine speeds are all integer powers of 2.
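The Lpt rule mentioned above is easy to state: process jobs in non-increasing order of processing time and assign each job to the machine on which it would finish earliest. A minimal sketch of the greedy rule itself (the paper's monotonicity analysis is a separate matter):

```python
def lpt_schedule(jobs, speeds):
    """Lpt greedy for Q||C_max: jobs in non-increasing order of processing
    time; each job goes to the machine on which it would complete earliest.

    Returns the completion time of each machine and the jobs assigned to it.
    """
    loads = [0.0] * len(speeds)            # current completion time per machine
    assignment = [[] for _ in speeds]
    for p in sorted(jobs, reverse=True):
        # earliest completion = current load plus p scaled by machine speed
        i = min(range(len(speeds)), key=lambda m: loads[m] + p / speeds[m])
        loads[i] += p / speeds[i]
        assignment[i].append(p)
    return loads, assignment
```

For example, jobs [4, 3, 2, 1] on machines of speeds [2, 1] yield a makespan of 3.5.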

Annamária Kovács
Engineering Planar Separator Algorithms

We consider classical linear-time planar separator algorithms, determining for a given planar graph a small subset of the nodes whose removal separates the graph into two components of similar size. These algorithms are based upon Planar Separator Theorems, which guarantee separators of size $O(\sqrt{n})$ and remaining components of size less than 2n/3. In this work, we present a comprehensive experimental study of the algorithms applied to a large variety of graphs, where the main goal is to find separators that not only satisfy upper bounds but also possess other desirable qualities with respect to separator size and component balance. We propose the usage of fundamental cycles, whose size is at most twice the diameter of the graph, as planar separators: for graphs of small diameter the guaranteed bound is better than the $O(\sqrt{n})$ bound, and it turns out that this simple strategy almost always outperforms the other algorithms, even for graphs with large diameter.

Martin Holzer, Grigorios Prasinos, Frank Schulz, Dorothea Wagner, Christos Zaroliagis
Stxxl: Standard Template Library for XXL Data Sets

We present the software library Stxxl, which enables practice-oriented experimentation with huge data sets. Stxxl is an implementation of the C++ standard template library STL for external memory computations. It supports parallel disks, overlapping of I/O and computation, and a pipelining technique that can save more than half of the I/Os. Stxxl has already been used for computing minimum spanning trees, connected components, breadth-first search decompositions, constructing suffix arrays, and computing social network analysis metrics.

Roman Dementiev, Lutz Kettner, Peter Sanders
Negative Cycle Detection Problem

In this paper, we describe heuristics that significantly improve the runtime of a wide range of commonly used algorithms for the negative cycle detection problem, such as the Bellman-Ford-Tarjan (BFCT) algorithm, the Goldberg-Radzik (GORC) algorithm and the Bellman-Ford-Moore algorithm with Predecessor Array (BFCF). The heuristics are very easy to implement and only require modifying several lines of code of the original algorithms. We observed that the modified algorithms outperformed the original ones, particularly on random graphs and graphs without cycles; 69% of our test cases improved. The improvements are sometimes dramatic, reaching a factor of 23 (excluding the unbounded case), while in the worst case performance decreased by only 85%, which is comparatively small next to the gains.
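The common baseline that such heuristics modify is Bellman-Ford-style relaxation: if any edge can still be relaxed after n – 1 rounds, the graph contains a negative cycle. A minimal sketch, using an implicit source connected to every vertex so that cycles in any component are found:

```python
def has_negative_cycle(n, edges):
    """Bellman-Ford negative cycle detection on a graph with n vertices and
    weighted edges (u, v, w).

    Starting all distances at 0 emulates a virtual source with a 0-weight
    edge to every vertex; if an edge still relaxes after n - 1 rounds, the
    graph contains a negative cycle.
    """
    dist = [0.0] * n
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return False  # early exit: distances have stabilised
    return any(dist[u] + w < dist[v] for u, v, w in edges)
```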

Chi-Him Wong, Yiu-Cheong Tam
An Optimal Algorithm for Querying Priced Information: Monotone Boolean Functions and Game Trees

We study competitive function evaluation in the context of computing with priced information. A function f has to be evaluated for a fixed but unknown choice of the values of the variables. Each variable x of f has an associated cost c(x), which has to be paid to read the value of x. The problem is to design algorithms that compute the function by querying the values of the variables sequentially while trying to minimize the total cost incurred. The performance of the algorithms is evaluated by means of competitive analysis. We determine the best possible extremal competitive ratio for the classes of threshold trees, game trees, and monotone boolean functions with constrained minterms, by providing a polynomial-time algorithm whose competitiveness matches the known lower bounds.

Ferdinando Cicalese, Eduardo Sany Laber
Online View Maintenance Under a Response-Time Constraint

A materialized view is a certain synopsis structure precomputed from one or more data sets (called base tables) in order to facilitate various queries on the data. When the underlying base tables change, the materialized view also needs to be updated accordingly to reflect those changes. We consider the problem of batch-incrementally maintaining a materialized view under a response-time constraint. We propose techniques for selectively processing updates to some base tables while keeping others batched, with the goal of minimizing the total maintenance cost while meeting the response-time constraint. We reduce this to a generalized paging problem, where the cost of evicting a page is a concave non-decreasing function of the number of continuous requests seen since the last time it was evicted. Our main result is an online algorithm that achieves a constant competitive ratio for all concave cost functions while relaxing the response-time constraint by a constant factor. For several special classes of cost functions, the competitive ratio can be improved with simpler, more intuitive algorithms. Our algorithms are based on emulating the behavior of an online paging algorithm on a page request sequence carefully designed from the cost function. The key novel technical ideas are twofold. The first involves discretizing the cost function, so that there is a collection of periodic paging sequences, with page sizes decreasing geometrically, which approximates the behavior of the original function. The second involves designing an online view maintenance algorithm based on the paging process, by emulating the behavior of the paging scheme in recursively defined phases.

Kamesh Munagala, Jun Yang, Hai Yu
Online Primal-Dual Algorithms for Covering and Packing Problems

We study a wide range of online covering and packing optimization problems. In an online covering problem a linear cost function is known in advance, but the linear constraints that define the feasible solution space are given one by one in an online fashion. In an online packing problem the profit function as well as the exact packing constraints are not fully known in advance. In each round additional information about the profit function and the constraints is revealed. We provide general deterministic schemes for online fractional covering and packing problems. We also provide deterministic algorithms for a couple of integral covering and packing problems.

Niv Buchbinder, Joseph Naor
Efficient Algorithms for Shared Backup Allocation in Networks with Partial Information

We study efficient algorithms for establishing reliable connections with bandwidth guarantees in communication networks. In the normal mode of operation, each connection uses a primary path to deliver packets from the source to the destination. To ensure continuous operation in the event of an edge failure, each connection uses a set of backup bridges, each bridge protecting a portion of the primary path. To meet the bandwidth requirement of the connection, a certain amount of bandwidth must be allocated on the edges of the primary path, as well as on the backup edges. In this paper, we focus on minimizing the amount of required backup allocation by sharing backup bandwidth among different connections. We consider efficient sharing schemes that require only partial information about the current state of the network. In particular, the only information available for each edge is the total amount of primary allocation and the cost of allocating backup bandwidth on this edge. We consider the problem of finding a minimum cost backup allocation together with a set of bridges for a given primary path. We prove that this problem is ${\mathcal NP}$-hard and present an approximation algorithm whose performance is within ${\mathcal O}({\rm log} n)$ of the optimum, where n is the number of edges in the primary path.

Yigal Bejerano, Joseph Naor, Alexander Sprintson
Using Fractional Primal-Dual to Schedule Split Intervals with Demands

We consider the problem of scheduling jobs that are given as groups of non-intersecting intervals on the real line. Each job j is associated with a t-interval, which consists of up to t segments, for some t ≥ 1, a demand, d_j ∈ [0,1], and a weight, w(j). A schedule is a collection of jobs, such that, for every $s \in {\mathbb R}$, the total demand of the jobs in the schedule whose t-interval contains s does not exceed 1. Our goal is to find a schedule that maximizes the total weight of scheduled jobs.

We present a 6t-approximation algorithm that uses a novel extension of the primal-dual schema called fractional primal-dual. The first step in a fractional primal-dual r-approximation algorithm is to compute an optimal solution, x*, of an LP relaxation of the problem. Next, the algorithm produces an integral primal solution x, and a new LP, denoted by P′, that has the same objective function as the original problem, but contains inequalities that may not be valid with respect to the original problem. Moreover, x* is a feasible solution of P′. The algorithm also computes a solution y to the dual of P′. The solution x is r-approximate, since its weight is bounded by the value of y divided by r.

We present a fractional local ratio interpretation of our 6t-approximation algorithm. We also discuss the connection between fractional primal-dual and the fractional local ratio technique. Specifically, we show that the former is the primal-dual manifestation of the latter.

Reuven Bar-Yehuda, Dror Rawitz
An Approximation Algorithm for the Minimum Latency Set Cover Problem

The input to the minimum latency set cover problem consists of a set of jobs and a set of tools. Each job j needs a specific subset S_j of the tools in order to be processed. It is possible to install a single tool in every time unit. Once the entire subset S_j has been installed, job j can be processed instantly. The problem is to determine an order of tool installations which minimizes the weighted sum of job completion times. We show that this problem is NP-hard in the strong sense and provide an e-approximation algorithm, where e is the base of the natural logarithm. Our approximation algorithm uses a framework of approximation algorithms developed for the minimum latency problem.

Refael Hassin, Asaf Levin
Workload-Optimal Histograms on Streams

A histogram is a piecewise-constant approximation of an observed data distribution. A histogram is used as a small-space, approximate synopsis of the underlying data distribution, which is often too large to be stored precisely. Histograms have found many applications in database management systems, perhaps most commonly for query selectivity estimation in query optimizers [1], but have also found applications in approximate query answering [2], load balancing in parallel join execution [3], mining time-series data [4], partition-based temporal join execution, query profiling for user feedback, etc. Ioannidis has a nice overview of the history of histograms, their applications, and their use in commercial DBMSs [5]. Also, Poosala’s thesis provides a systematic treatment of different types of histograms [3].
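For intuition, the offline counterpart of such constructions is the classic dynamic program for an optimal B-bucket piecewise-constant histogram under sum-of-squared-error. This is a sketch of the textbook offline algorithm, not the workload-aware streaming algorithms of the paper:

```python
def v_optimal_histogram(data, buckets):
    """Minimum sum-of-squared-error of approximating `data` by a
    piecewise-constant histogram with at most `buckets` buckets, via the
    classic O(n^2 * B) dynamic program."""
    n = len(data)
    # prefix sums of values and squares give O(1) SSE for any segment
    ps = [0.0] * (n + 1)
    ps2 = [0.0] * (n + 1)
    for i, x in enumerate(data):
        ps[i + 1] = ps[i] + x
        ps2[i + 1] = ps2[i] + x * x

    def sse(i, j):
        # squared error of data[i:j] approximated by its mean
        s, s2, m = ps[j] - ps[i], ps2[j] - ps2[i], j - i
        return s2 - s * s / m

    INF = float("inf")
    # best[b][j] = minimum SSE covering data[:j] with b buckets
    best = [[INF] * (n + 1) for _ in range(buckets + 1)]
    best[0][0] = 0.0
    for b in range(1, buckets + 1):
        for j in range(1, n + 1):
            best[b][j] = min(best[b - 1][i] + sse(i, j) for i in range(j))
    return best[buckets][n]
```

For example, two buckets represent [1, 1, 5, 5] with zero error, while one bucket over [0, 10] incurs error 50.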

S. Muthukrishnan, M. Strauss, X. Zheng
Finding Frequent Patterns in a String in Sublinear Time

We consider the problem of testing whether (a large part of) a given string X of length n over some finite alphabet is covered by multiple occurrences of some (unspecified) pattern Y of arbitrary length, in the combinatorial property testing model. Our algorithms randomly query a sublinear number of positions of X, and run in sublinear time in n. We first focus on finding patterns of a given length, and then discuss finding patterns of unspecified length.

Petra Berenbrink, Funda Ergun, Tom Friedetzky
Online Occlusion Culling

Modern computer graphics systems are able to render sophisticated 3D scenes consisting of millions of polygons. For most camera positions only a small collection of these polygons is visible. We address the problem of occlusion culling, i.e., determining hidden primitives. Aila, Miettinen, and Nordlund suggested implementing a FIFO buffer on graphics cards which is able to delay the polygons before drawing them [2]. When one of the polygons within the buffer is occluded or masked by another polygon arriving later from the application, the rendering engine can drop the occluded one without rendering, saving important rendering time.

We introduce a theoretical online model for analysing these problems using competitive analysis. For different cost measures we present the first competitive algorithms for online occlusion culling. Our implementation shows that these algorithms outperform the FIFO strategy for real 3D scenes as well.

Gereon Frahling, Jens Krokowski
Shortest Paths in Matrix Multiplication Time
[Extended Abstract]

In this paper we present an ${\tilde O}(W n^{\omega})$ time algorithm solving the single-source shortest path problem in graphs with integer weights from the set {–W, ..., 0, ..., W}, where ω < 2.376 is the matrix multiplication exponent. For dense graphs with small edge weights, this result improves upon the algorithm of Goldberg, which works in ${\tilde O}(m n^{0.5} \log W)$ time, and the Bellman-Ford algorithm, which works in O(nm) time.
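For context, the O(nm) Bellman-Ford baseline cited above admits a compact sketch. This is a generic textbook implementation, not the paper's algebraic algorithm; the edge-list representation is an assumption made for illustration:

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths with integer (possibly negative) edge
    weights in O(n*m) time; edges is a list of (u, v, w) triples."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    # Relax every edge n-1 times; a shortest path uses at most n-1 edges.
    for _ in range(n - 1):
        updated = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                updated = True
        if not updated:  # early exit once distances stabilize
            break
    # One extra pass detects a negative cycle reachable from the source.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist
```

Note how negative weights are handled without any shifting, which is exactly the regime where the matrix-multiplication approach targets its improvement for dense graphs.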

Piotr Sankowski
Computing Common Intervals of K Permutations, with Applications to Modular Decomposition of Graphs

We introduce a new way to compute common intervals of K permutations, based on a very simple and general notion of generators of common intervals. This formalism leads to simple and efficient algorithms to compute the set of all common intervals of K permutations, which can contain a quadratic number of intervals, as well as a linear-space basis of this set of common intervals. Finally, we show how our results on permutations can be used for computing the modular decomposition of graphs in linear time.

Anne Bergeron, Cedric Chauve, Fabien de Montgolfier, Mathieu Raffinot
Greedy Routing in Tree-Decomposed Graphs

We propose a new perspective on the small-world phenomenon by considering arbitrary graphs augmented according to probabilistic distributions guided by tree-decompositions of the graphs. We show that, for any n-node graph G of treewidth ≤ k, there exists a tree-decomposition-based distribution ${\mathcal D}$ such that greedy routing in the augmented graph $(G,{\mathcal D})$ performs in O(k log² n) expected number of steps. We also prove that if G has chordality ≤ k, then the tree-decomposition-based distribution ${\mathcal D}$ ensures that greedy routing in $(G,{\mathcal D})$ performs in O((k + log n) log n) expected number of steps. In particular, for any n-node graph G of chordality O(log n) (e.g., chordal graphs), greedy routing in the augmented graph $(G,{\mathcal D})$ performs in O(log² n) expected number of steps.

Pierre Fraigniaud
Making Chord Robust to Byzantine Attacks

Chord is a distributed hash table (DHT) that requires only O(log n) links per node and performs searches with latency and message cost O(log n), where n is the number of peers in the network. Chord assumes all nodes behave according to protocol. We give a variant of Chord which is robust with high probability for any time period during which: 1) there are always at least z total peers in the network, for some integer z; 2) there are never more than (1/4 – ε)z Byzantine peers in the network, for a fixed ε > 0; and 3) the number of peer insertion and deletion events is no more than z^k, for some tunable parameter k. We assume there is an adversary controlling the Byzantine peers and that the IP addresses of all the Byzantine peers and the locations where they join the network are carefully selected by this adversary. Our notion of robustness is rather strong in that we not only guarantee that searches can be performed, but also that we can enforce any set of “proper behavior”, such as contributing new material. In comparison to Chord, the resources required by this new variant are only a polylogarithmic factor greater in communication, messaging, and linking costs.

Amos Fiat, Jared Saia, Maxwell Young
Bucket Game with Applications to Set Multicover and Dynamic Page Migration

We present a simple two-person Bucket Game, based on throwing balls into buckets, and discuss possible players’ strategies. We use these strategies to create an approximation algorithm for a generalization of the well-known Set Cover problem, in which each element must be covered by at least k sets. Furthermore, we apply these strategies to construct a randomized algorithm for the Dynamic Page Migration problem that achieves the optimal competitive ratio against an oblivious adversary.
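The Set Multicover generalization described above (cover every element at least k times) can be made concrete with the classical greedy heuristic. This is a generic sketch of the standard greedy rule, not the paper's bucket-game-based algorithm; it assumes each set is usable at most once and every element appears in at least k sets:

```python
def greedy_set_multicover(universe, sets, k):
    """Greedy heuristic: repeatedly pick the unused set covering the most
    elements whose coverage requirement is still unmet."""
    need = {e: k for e in universe}       # remaining coverage demand per element
    remaining = set(range(len(sets)))     # indices of sets not yet chosen
    chosen = []

    def gain(i):
        return sum(1 for e in sets[i] if need.get(e, 0) > 0)

    while any(need.values()):
        best = max(remaining, key=gain)
        if gain(best) == 0:
            raise ValueError("instance is infeasible with the given sets")
        remaining.remove(best)
        chosen.append(best)
        for e in sets[best]:
            if need.get(e, 0) > 0:
                need[e] -= 1
    return chosen
```

The greedy rule gives the familiar logarithmic approximation guarantee for multicover; the paper's contribution lies in a different, bucket-game-based analysis.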

Marcin Bienkowski, Jarosław Byrka
Bootstrapping a Hop-Optimal Network in the Weak Sensor Model

Sensor nodes are very weak computers that are distributed at random over a surface. Once deployed, they must wake up and form a radio network. Sensor-network bootstrapping research thus has three parts: one must model the restrictions on sensor nodes; one must prove that the connectivity graph of the sensors has a subgraph that would make a good network; and one must give a distributed protocol for finding such a network subgraph that can be implemented on sensor nodes.

Although many particular restrictions on sensor nodes are implicit or explicit in many papers, inconsistencies and ambiguities remain from paper to paper. The lack of a clear model means that solutions to the network-bootstrapping problem in both the theory and systems literature all violate constraints on sensor nodes. For example, random geometric graph results on sensor networks predict the existence of subgraphs of the connectivity graph with good route-stretch, but these results do not address the degree of such a graph, and sensor networks must have constant degree. Furthermore, some proposed protocols for actually finding such graphs require that nodes have too much memory, while others assume the existence of a contention-resolution mechanism.

We present a formal Weak Sensor Model that summarizes the literature on sensor node restrictions, taking the most restrictive choices when possible. We show that sensor connectivity graphs have low-degree subgraphs with good hop-stretch, as required by the Weak Sensor Model. Finally, we give a Weak Sensor Model-compatible protocol for finding such graphs. Ours is the first network initialization algorithm that is implementable on sensor nodes.

Martín Farach-Colton, Rohan J. Fernandes, Miguel A. Mosteiro
Approximating Integer Quadratic Programs and MAXCUT in Subdense Graphs

Let A be a real symmetric n × n matrix with eigenvalues λ_1, ..., λ_n ordered by decreasing absolute value, and b an n × 1 vector. We present an algorithm finding approximate solutions to min x*(Ax + b) and max x*(Ax + b) over x ∈ {–1,1}^n, with an absolute error of at most $(c_{1}|\lambda_{1}|+|\lambda_{\lceil c_{2} \log n\rceil}|)2n+O((\alpha n+\beta)\sqrt{n \log n})$, where α and β are the largest absolute values of the entries in A and b, respectively, for any positive constants c_1 and c_2, in time polynomial in n.

We demonstrate that the algorithm yields a PTAS for MAXCUT in regular graphs on n vertices of degree $d \in \omega(\sqrt{n \log n})$, as long as they contain O(d^4 log n) 4-cycles. The strongest previous result showed that graphs of average degree Ω(n/log n) admit a PTAS.

We also show that smooth n-variate polynomial integer programs of constant degree k can always be approximated in polynomial time leaving an absolute error of o(n^k), answering in the affirmative a suspicion of Arora, Karger, and Karpinski in STOC 1995.

Andreas Björklund
A Cutting Planes Algorithm Based Upon a Semidefinite Relaxation for the Quadratic Assignment Problem

We present a cutting planes algorithm for the Quadratic Assignment Problem based upon a semidefinite relaxation, and we report experiments for classical instances. Our lower bound is compared with the ones obtained by linear and semidefinite approaches. Our tests show that the cuts we use (originally proposed for a linear approach) allow us to improve significantly on the bounds obtained by the other approaches. Moreover, this is achieved within a moderate additional computing effort, and sometimes even in a shorter total time. Indeed, thanks to the strong tailing-off effect of the SDP solver we used (SB), we obtain in reasonable time an approximate solution which is suitable for generating efficient cutting planes that speed up the convergence of SB.

Alain Faye, Frédéric Roupin
Approximation Complexity of min-max (Regret) Versions of Shortest Path, Spanning Tree, and Knapsack

This paper investigates, for the first time in the literature, the approximation of min-max (regret) versions of classical problems like shortest path, minimum spanning tree, and knapsack. For a bounded number of scenarios, we establish fully polynomial-time approximation schemes for the min-max versions of these problems, using relationships between multi-objective and min-max optimization. Using dynamic programming and classical trimming techniques, we construct a fully polynomial-time approximation scheme for min-max regret shortest path. We also establish a fully polynomial-time approximation scheme for min-max regret spanning tree and prove that min-max regret knapsack is not at all approximable. We also investigate the case of an unbounded number of scenarios, for which min-max and min-max regret versions of polynomial-time solvable problems usually become strongly NP-hard. In this setting, non-approximability results are provided for min-max (regret) versions of shortest path and spanning tree.

Hassene Aissi, Cristina Bazgan, Daniel Vanderpooten
Robust Approximate Zeros

Smale’s notion of an approximate zero of an analytic function f : ℂ → ℂ is extended to take into account the errors incurred in the evaluation of the Newton operator. Call this stronger notion a robust approximate zero. We develop a corresponding robust point estimate for such zeros: we prove that if z₀ ∈ ℂ satisfies α(f, z₀) < 0.02 then z₀ is a robust approximate zero, with the associated zero z* lying in the closed disc ${\overline B}(z_{0},\frac{0.07}{\gamma(f,z_{0})})$. Here α(f, z) and γ(f, z) are the standard functions used in point estimates.

Suppose f(z) is an L-bit integer square-free polynomial of degree d. Using our new algorithm, we can compute an n-bit absolute approximation of a zero z* ∈ ℝ starting from a bigfloat z₀, in time O[dM(n + d²(L + lg d) lg(n + L))], where M(n) is the complexity of multiplying n-bit integers.

Vikram Sharma, Zilin Du, Chee K. Yap
Optimizing a 2D Function Satisfying Unimodality Properties

The number of probes needed by the best possible algorithm for locally or globally optimizing a bivariate function varies substantially depending on the assumptions made about the function. We consider a wide variety of assumptions—in particular, global unimodality, unimodality of rows and/or columns, and total unimodality—and prove tight or nearly tight upper and lower bounds in all cases. Our results include both nontrivial optimization algorithms and nontrivial adversary arguments depending on the scenario.

Erik D. Demaine, Stefan Langerman
Backmatter
Metadata
Title
Algorithms – ESA 2005
Editors
Gerth Stølting Brodal
Stefano Leonardi
Copyright Year
2005
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-31951-1
Print ISBN
978-3-540-29118-3
DOI
https://doi.org/10.1007/11561071
