
2014 | Book

Algorithm Theory – SWAT 2014

14th Scandinavian Symposium and Workshops, Copenhagen, Denmark, July 2-4, 2014. Proceedings

Edited by: R. Ravi, Inge Li Gørtz

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 14th Scandinavian Symposium and Workshops on Algorithm Theory, SWAT 2014, held in Copenhagen, Denmark, in July 2014. The 33 papers were carefully reviewed and selected from a total of 134 submissions. The papers present original research and cover a wide range of topics in the design and analysis of algorithms and data structures, including but not limited to approximation algorithms, parameterized algorithms, computational biology, computational geometry and topology, distributed algorithms, external-memory algorithms, exponential algorithms, graph algorithms, online algorithms, optimization algorithms, randomized algorithms, streaming algorithms, string algorithms, sublinear algorithms, and algorithmic game theory.

Table of Contents

Frontmatter
I/O-Efficient Range Minima Queries

In this paper we study the offline (batched) range minima query (RMQ) problem in the external memory (EM) and cache-oblivious (CO) models. In the static RMQ problem, given an array A, a query rmq_A(i, j) returns the smallest element in the range A[i, j]. If B is the size of a block and m is the number of blocks that fit in the internal memory in the EM and CO models, we show that Q range minima queries on an array of size N can be answered in ${\rm O}({{{N}\over{B}} + {{Q}\over{B}}\log_{m}{{Q}\over{B}}}) = {\rm O}({{\rm scan}({N}) + {\rm sort}({Q})})$ I/Os in the CO model and slightly better ${\rm O}({{\rm scan}({N}) + {{Q}\over{B}} \log_m \min\{{{Q}\over{B}}, {{N}\over{B}}\}})$ I/Os in the EM model, using linear space in both models. Our cache-oblivious result is new, and our external-memory result improves the previously known bound. We also show that the EM bound is tight by proving a matching lower bound. Our lower bound holds even if the queries are presorted in any predefined order.

In the batched dynamic RMQ problem, the queries must be answered in the presence of updates (insertions/deletions) to the array. We show that in the EM model we can solve this problem in ${\rm O}({{\rm sort}({N}) + {\rm sort}({Q})\log_m {{N}\over{B}}})$ I/Os, again improving the best previously known bound.

Peyman Afshani, Nodari Sitchinava
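
As a plain in-memory reference point (not the I/O-efficient method of the paper), the static query rmq_A(i, j) and its batched version can be answered by a linear scan; all names here are illustrative:

```python
def rmq(A, i, j):
    """Naive static range-minimum query: smallest element in A[i..j] (inclusive)."""
    smallest = A[i]
    for x in A[i + 1 : j + 1]:
        if x < smallest:
            smallest = x
    return smallest

def batched_rmq(A, queries):
    """Offline (batched) RMQ: answer a whole list of (i, j) queries on A."""
    return [rmq(A, i, j) for (i, j) in queries]
```

The paper's contribution is answering such batches with far fewer block transfers than this quadratic-work scan would incur.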
Online Makespan Minimization with Parallel Schedules

Online makespan minimization is a classical problem in which a sequence of jobs σ = J_1, …, J_n has to be scheduled on m identical parallel machines so as to minimize the maximum completion time of any job. In this paper we investigate the problem in a model where extra power/resources are granted to an algorithm. More specifically, an online algorithm is allowed to build several schedules in parallel while processing σ. At the end of the scheduling process the best schedule is selected. This model can be viewed as providing an online algorithm with extra space, which is invested to maintain multiple solutions.

As a main result we develop a (4/3 + ε)-competitive algorithm, for any 0 < ε ≤ 1, that uses a constant number of schedules. The constant is equal to 1/ε^{O(log(1/ε))}. We also give a (1 + ε)-competitive algorithm, for any 0 < ε ≤ 1, that builds a polynomial number of (m/ε)^{O(log(1/ε)/ε)} schedules. This value depends on m but is independent of the input σ. The performance guarantees are nearly best possible. We show that any algorithm that achieves a competitiveness smaller than 4/3 must construct Ω(m) schedules. On the technical level, our algorithms make use of novel guessing schemes that (1) predict the optimum makespan of σ to within a factor of 1 + ε and (2) guess the job processing times and their frequencies in σ. In (2) we have to sparsify the universe of all guesses so as to reduce the number of schedules to a constant.

Susanne Albers, Matthias Hellwig
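
For context, the classical single-schedule baseline is greedy list scheduling, which places each arriving job on a currently least-loaded machine; the paper's algorithms instead maintain several schedules in parallel and keep the best. A minimal sketch of the baseline (illustrative only, not the paper's algorithm):

```python
def greedy_makespan(jobs, m):
    """Greedy list scheduling: assign each job, in arrival order, to a
    machine with the smallest current load; return the final makespan."""
    loads = [0] * m
    for p in jobs:
        i = loads.index(min(loads))  # a least-loaded machine
        loads[i] += p
    return max(loads)
```

On the instance [3, 3, 2, 2, 2] with m = 2 the greedy makespan is 7, while the optimum is 6, illustrating why a single online schedule cannot be optimal.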
Expected Linear Time Sorting for Word Size Ω(log^2 n log log n)

Sorting n integers in the word-RAM model is a fundamental problem, and a long-standing open problem is whether integer sorting is possible in linear time when the word size is ω(log n). In this paper we give an algorithm for sorting integers in expected linear time when the word size is Ω(log^2 n log log n). Previously, expected linear time sorting was only possible for word size Ω(log^{2+ε} n). Part of our construction is a new packed sorting algorithm that sorts n integers of w/b bits packed in ${\mathcal O}(n/b)$ words, where b is the number of integers packed in a word of size w bits. The packed sorting algorithm runs in expected ${\mathcal O}(\tfrac{n}{b}(\log n + \log^2 b))$ time.

Djamal Belazzougui, Gerth Stølting Brodal, Jesper Sindahl Nielsen
Amortized Analysis of Smooth Quadtrees in All Dimensions

Quadtrees are a well-known data structure for representing geometric data in the plane, and they naturally generalize to higher dimensions. A basic operation is to expand the tree by splitting a given leaf. A quadtree is smooth if adjacent leaf boxes differ by at most one in height.

In this paper, we analyze quadtrees that maintain smoothness with each split operation and also maintain neighbor pointers. Our main result shows that the smooth-split operation has an amortized cost of O(1) time for quadtrees of any fixed dimension D. This bound has an exponential dependence on D, which we show is unavoidable via a lower bound construction. We additionally give a lower bound construction showing an amortized cost of Ω(log n) for splits in a related quadtree model that does not maintain smoothness.

Huck Bennett, Chee Yap
New Approximability Results for the Robust k-Median Problem

We consider a variant of the classical k-median problem, introduced by Anthony et al. [1]. In the Robust k-Median problem, we are given an n-vertex metric space (V, d) and m client sets $\left\{ S_i \subseteq V \right\}_{i=1}^m$. We want to open a set F ⊆ V of k facilities such that the worst-case connection cost over all client sets is minimized; that is, we minimize $\max_{i}\sum_{v \in S_i} d(F,v)$. Anthony et al. gave an O(log m) approximation algorithm for any metric and showed APX-hardness even in the case of the uniform metric. In this paper, we show that their algorithm is nearly tight by proving Ω(log m / log log m) approximation hardness, unless ${\sf NP} \subseteq \bigcap_{\delta >0} {\sf DTIME}(2^{n^{\delta}})$. This result holds even for uniform and line metrics. To our knowledge, this is one of the rare cases in which a problem on a line metric is hard to approximate to within a logarithmic factor. We complement the hardness result with an experimental evaluation of different heuristics, which shows that very simple heuristics achieve good approximations for realistic classes of instances.

Sayan Bhattacharya, Parinya Chalermsook, Kurt Mehlhorn, Adrian Neumann
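
The objective $\max_{i}\sum_{v \in S_i} d(F,v)$ is easy to evaluate for a candidate facility set, which is all the heuristics compared in the paper need; a small sketch over an explicit distance matrix (names illustrative):

```python
def robust_kmedian_cost(dist, F, client_sets):
    """Worst-case connection cost of facility set F: each client v connects
    to its nearest open facility; take the max over all client sets S_i.
    dist is a symmetric matrix of pairwise distances on the vertices."""
    def d_to_F(v):
        return min(dist[v][f] for f in F)
    return max(sum(d_to_F(v) for v in S) for S in client_sets)
```

With this evaluator, any heuristic reduces to searching over k-subsets F of the vertex set.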
Trees and Co-trees with Bounded Degrees in Planar 3-connected Graphs

This paper considers the conjecture by Grünbaum that every planar 3-connected graph has a spanning tree T such that both T and its co-tree have maximum degree at most 3. Here, the co-tree of T is the spanning tree of the dual graph obtained by taking the duals of the non-tree edges. While Grünbaum's conjecture remains open, we show that every planar 3-connected graph has a spanning tree T such that both T and its co-tree have maximum degree at most 5, and such a tree can be found in linear time.

Therese Biedl
Approximating the Revenue Maximization Problem with Sharp Demands

We consider the revenue maximization problem with sharp multi-demand, in which m indivisible items have to be sold to n potential buyers. Each buyer i is interested in getting exactly d_i items, and each item j gives a benefit v_{ij} to buyer i. We distinguish between unrelated and related valuations. In the former case, the benefit v_{ij} is completely arbitrary, while, in the latter, each item j has a quality q_j, each buyer i has a value v_i, and the benefit v_{ij} is defined as the product v_i q_j. The problem asks to determine a price for each item and an allocation of bundles of items to buyers with the aim of maximizing the total revenue, that is, the sum of the prices of all the sold items. The allocation must be envy-free, that is, each buyer must be happy with her assigned bundle and cannot improve her utility. We first prove that, for related valuations, the problem cannot be approximated to a factor O(m^{1−ε}), for any ε > 0, unless P = NP, and that this result is asymptotically tight: in fact, we provide a simple m-approximation algorithm even for unrelated valuations. We then focus on an interesting subclass of "proper" instances, which do not contain buyers known a priori to be unable to receive any item. For such instances, we design a 2-approximation algorithm and show that no (2 − ε)-approximation is possible for any 0 < ε ≤ 1, unless P = NP. We observe that it is possible to efficiently check whether an instance is proper and that, if discarding useless buyers is allowed, an instance can be made proper in polynomial time without worsening the value of its optimal solution.

Vittorio Bilò, Michele Flammini, Gianpiero Monaco
Reconfiguring Independent Sets in Claw-Free Graphs

We present a polynomial-time algorithm that, given two independent sets in a claw-free graph G, decides whether one can be transformed into the other by a sequence of elementary steps. Each elementary step removes a vertex v from the current independent set S and adds a new vertex w (not in S) such that the result is again an independent set. We also consider the more restricted model where v and w have to be adjacent.

Paul Bonsma, Marcin Kamiński, Marcin Wrochna
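
An individual elementary step (remove v, add w) is straightforward to validate given an adjacency structure; it is the reachability question over sequences of such steps that the paper resolves. A minimal validity check (illustrative names):

```python
def is_independent(adj, S):
    """True if no two vertices of S are adjacent; adj maps vertex -> set of neighbors."""
    S = set(S)
    return all(not (adj[v] & (S - {v})) for v in S)

def valid_step(adj, S, v, w):
    """One elementary step: remove v from the independent set S and add a
    vertex w not in S, requiring the result to again be independent."""
    S = set(S)
    if v not in S or w in S:
        return False
    return is_independent(adj, (S - {v}) | {w})
```

In the restricted model mentioned above one would additionally require `w in adj[v]`.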
Competitive Online Routing on Delaunay Triangulations

The sequence of adjacent nodes (graph walk) visited by a routing algorithm on a graph G between given source and target nodes s and t is a c-competitive route if its length in G is at most c times the length of the shortest path from s to t in G. We present 21.766-, 17.982-, and 15.479-competitive online routing algorithms on the Delaunay triangulation of an arbitrary given set of points in the plane. This improves the competitive ratio on Delaunay triangulations from the previous best of 45.749. We also present a 7.621-competitive online routing algorithm for Delaunay triangulations of point sets in convex position.

Prosenjit Bose, Jean-Lou De Carufel, Stephane Durocher, Perouz Taslakian
Optimal Planar Orthogonal Skyline Counting Queries

The skyline of a set of points in the plane is the subset of maximal points, where a point (x, y) is maximal if no other point (x′, y′) satisfies x′ ≥ x and y′ ≥ y. We consider the problem of preprocessing a set P of n points into a space-efficient static data structure supporting orthogonal skyline counting queries, i.e., given a query rectangle R, report the size of the skyline of P ∩ R. We present a data structure for storing n points with integer coordinates having query time $O(\lg n/\lg\lg n)$ and space usage O(n) words. The model of computation is a unit-cost RAM with logarithmic word size. We prove that these bounds are the best possible by presenting a matching lower bound in the cell probe model with logarithmic word size: space usage $n\lg^{O(1)} n$ implies worst-case query time $\Omega(\lg n/\lg\lg n)$.

Gerth Stølting Brodal, Kasper Green Larsen
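
The skyline of a small point set can be computed directly from the definition of maximality above; a naive quadratic-time sketch (the paper's data structure answers counting queries exponentially faster after preprocessing):

```python
def skyline(points):
    """Maximal points of a list of (x, y) pairs: (x, y) is on the skyline
    if no other point (x2, y2) has x2 >= x and y2 >= y."""
    return [
        (x, y)
        for (x, y) in points
        if not any(
            (x2, y2) != (x, y) and x2 >= x and y2 >= y
            for (x2, y2) in points
        )
    ]
```

A skyline counting query for a rectangle R is then `len(skyline([p for p in P if p in R]))` in this naive formulation.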
B-slack Trees: Space Efficient B-Trees

B-slack trees, a subclass of B-trees with substantially better worst-case space complexity, are introduced. They store n keys in height O(log_b n), where b is the maximum node degree. Updates can be performed in $O(\log_{\frac b 2} n)$ amortized time. A relaxed-balance version, which is well suited for concurrent implementation, is also presented.

Trevor Brown
Approximately Minwise Independence with Twisted Tabulation

A random hash function h is ε-minwise if for any set S, |S| = n, and element x ∈ S, $\Pr[h(x)=\min h(S)]=(1\pm\varepsilon )/n$. Minwise hash functions with low bias ε have widespread applications in similarity estimation.

Hashing from a universe [u], the twisted tabulation hashing of Pǎtraşcu and Thorup [SODA'13] makes c = O(1) lookups in tables of size u^{1/c}. Twisted tabulation was invented to get good concentration for hashing-based sampling. Here we show that twisted tabulation yields $\tilde O(1/u^{1/c})$-minwise hashing.

In the classic independence paradigm of Wegman and Carter [FOCS'79], $\tilde O(1/u^{1/c})$-minwise hashing requires Ω(log u)-independence [Indyk SODA'99]. Pǎtraşcu and Thorup [STOC'11] showed that simple tabulation, using the same space and lookups, yields $\tilde O(1/n^{1/c})$-minwise independence, which is good for large sets but useless for small sets. Our analysis uses some of the same methods, but is much cleaner, bypassing a complicated induction argument.

Søren Dahlgaard, Mikkel Thorup
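
The standard application of ε-minwise hashing is Jaccard similarity estimation: for a near-minwise h, Pr[min h(A) = min h(B)] ≈ |A ∩ B| / |A ∪ B|. A toy sketch using a salted SHA-256 hash as a stand-in for twisted tabulation (illustrative only; `make_hash` is not the paper's construction):

```python
import hashlib

def make_hash(seed):
    """Toy 64-bit random-looking hash function; a stand-in for twisted tabulation."""
    def h(x):
        digest = hashlib.sha256(f"{seed}:{x}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return h

def jaccard_estimate(A, B, num_hashes=200):
    """Fraction of hash functions under which A and B share the minimizer;
    approximates the Jaccard similarity when the hashes are (near-)minwise."""
    hits = sum(
        1
        for s in range(num_hashes)
        if min(map(make_hash(s), A)) == min(map(make_hash(s), B))
    )
    return hits / num_hashes
```

The bias ε of the hash family translates directly into a multiplicative error on this estimate.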
Separability of Imprecise Points

An imprecise point is a point p with an associated imprecision region ${\mathcal{I}}_p$ indicating the set of possible locations of the point p. We study separability problems for a set R of red imprecise points and a set B of blue imprecise points in ${\Bbb R}^2$, where the imprecision regions are axis-aligned rectangles and each point p ∈ R ∪ B is drawn uniformly at random from ${\mathcal{I}}_p$. Our results include algorithms for finding certain separators (separating R from B with probability 1), possible separators (separating R from B with non-zero probability), most likely separators (separating R from B with maximal probability), and maximal separators (maximizing the expected number of correctly classified points).

Mark de Berg, Ali D. Mehrabi, Farnaz Sheikhi
Line-Distortion, Bandwidth and Path-Length of a Graph

We investigate the minimum line-distortion and the minimum bandwidth problems on unweighted graphs and their relations with the minimum length of a Robertson–Seymour path-decomposition. The length of a path-decomposition of a graph is the largest diameter of a bag in the decomposition. The path-length of a graph is the minimum length over all its path-decompositions. In particular, we show: (i) if a graph G can be embedded into the line with distortion k, then G admits a Robertson–Seymour path-decomposition with bags of diameter at most k in G; (ii) for every class of graphs with path-length bounded by a constant, there exist an efficient constant-factor approximation algorithm for the minimum line-distortion problem and an efficient constant-factor approximation algorithm for the minimum bandwidth problem; (iii) there is an efficient 2-approximation algorithm for computing the path-length of an arbitrary graph; (iv) AT-free graphs and some intersection families of graphs have path-length at most 2; (v) for AT-free graphs, there exist a linear-time 8-approximation algorithm for the minimum line-distortion problem and a linear-time 4-approximation algorithm for the minimum bandwidth problem.

Feodor F. Dragan, Ekkehard Köhler, Arne Leitert
Colorful Bin Packing

We study a variant of online bin packing, called colorful bin packing. In this problem, items presented one by one are to be packed into bins of size 1. Each item i has a size s_i ∈ [0,1] and a color $c_i \in {\cal{C}}$, where ${\cal{C}}$ is a set of colors (not necessarily known in advance). The total size of items packed into a bin cannot exceed its size; thus an item i can always be packed into a new bin, but an item cannot be packed into a non-empty bin if the previous item packed into that bin has the same color, or if the occupied space in it is larger than 1 − s_i. This problem generalizes standard online bin packing and online black and white bin packing (where $|{\cal{C}}|=2$). We prove that colorful bin packing is harder than black and white bin packing in the sense that an online algorithm for zero-size items that packs the input into the smallest possible number of bins cannot exist for $|{\cal{C}}|\geq 3$, while such an algorithm is known to exist for $|{\cal{C}}|=2$. We show that natural generalizations of classic bin packing algorithms fail to work for the case $|{\cal{C}}|\geq 3$, and moreover, algorithms that perform well for black and white bin packing do not perform well either, already for the case $|{\cal{C}}|=3$. Our main results are a new algorithm for colorful bin packing whose absolute competitive ratio is 4, and a new lower bound of 2 on the asymptotic competitive ratio of any algorithm, which is valid even for black and white bin packing.

György Dósa, Leah Epstein
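
The packing constraints above can be checked item by item. The following First-Fit-style sketch (illustrative, not the paper's 4-competitive algorithm) packs each item into the first bin whose last color differs and whose remaining capacity suffices:

```python
def colorful_first_fit(items):
    """items: list of (size, color) pairs, processed online. Pack into bins
    of capacity 1, never placing an item directly after a same-colored item.
    Returns the list of bins, each a list of (size, color)."""
    bins = []
    for size, color in items:
        for b in bins:
            used = sum(s for s, _ in b)
            last_color = b[-1][1]
            if last_color != color and used + size <= 1:
                b.append((size, color))
                break
        else:  # no existing bin admits the item: open a new bin
            bins.append([(size, color)])
    return bins
```

Note how a run of equal-colored zero-size items forces one new bin per item, reflecting the color constraint's bite even when sizes are no obstacle.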
Algorithms Parameterized by Vertex Cover and Modular Width, through Potential Maximal Cliques

In this paper we give upper bounds on the number of minimal separators and potential maximal cliques of graphs w.r.t. two graph parameters, namely vertex cover (vc) and modular width (mw). We prove that for any graph, the number of minimal separators is $\mathcal{O}^*(3^{\operatorname{vc}})$ and $\mathcal{O}^*(1.6181^{\operatorname{mw}})$, and the number of potential maximal cliques is $\mathcal{O}^*(4^{\operatorname{vc}})$ and $\mathcal{O}^*(1.7347^{\operatorname{mw}})$; these objects can be listed within the same running times. (The $\mathcal{O}^*$ notation suppresses polynomial factors in the size of the input.) Combined with known results [3,12], we deduce that a large family of problems, e.g., Treewidth, Minimum Fill-in, Longest Induced Path, Feedback Vertex Set, and many others, can be solved in time $\mathcal{O}^*(4^{\operatorname{vc}})$ or $\mathcal{O}^*(1.7347^{\operatorname{mw}})$.

Fedor V. Fomin, Mathieu Liedloff, Pedro Montealegre, Ioan Todinca
Win-Win Kernelization for Degree Sequence Completion Problems

We study the kernelizability of a class of NP-hard graph modification problems based on vertex degree properties. Our main positive results refer to NP-hard graph completion (that is, edge addition) cases, while we show that there is no hope of achieving analogous results for the corresponding vertex or edge deletion versions. Our algorithms are based on a method that transforms graph completion problems into efficiently solvable number problems and exploits f-factor computations for translating the results back into the graph setting. Indeed, our core observation is that we encounter a win-win situation in the sense that either the number of edge additions is small (and thus faster to find) or the problem is polynomial-time solvable. This approach helps in answering an open question by Mathieson and Szeider [JCSS 2012].

Vincent Froese, André Nichterlein, Rolf Niedermeier
On Matchings and b-Edge Dominating Sets: A 2-Approximation Algorithm for the 3-Edge Dominating Set Problem

We consider a multiple domination version of the edge dominating set problem, called the b-EDS problem, where an edge set D ⊆ E of minimum cardinality is sought in a given graph G = (V, E) with a demand vector b ∈ ℤ^E such that each edge e ∈ E is required to be dominated by b(e) edges of D. When a solution D is not allowed to be a multiset, the problem is called the simple b-EDS problem. We present 2-approximation algorithms for the simple b-EDS problem for the cases max_{e ∈ E} b(e) = 2 and max_{e ∈ E} b(e) = 3. The best approximation guarantee previously known for these problems is 8/3, due to Berger et al. [2], who showed the same guarantee to hold even for the minimum-cost case and for arbitrarily large b. Our algorithms are designed based on an LP relaxation of the b-EDS problem and locally optimal matchings, and the optimum of b-EDS is related either to the size of such a matching or to the optimal LP value.

Toshihiro Fujito
Covering Problems in Edge- and Node-Weighted Graphs

This paper discusses the graph covering problem, in which a set of edges in an edge- and node-weighted graph is chosen to satisfy some covering constraints while minimizing the sum of the weights. In this problem, because of the large integrality gap of a natural linear programming (LP) relaxation, LP rounding algorithms based on the relaxation yield poor performance. Here we propose a stronger LP relaxation for the graph covering problem. The proposed relaxation is applied to designing primal-dual algorithms for two fundamental graph covering problems: the prize-collecting edge dominating set problem and the multicut problem in trees. We give an exact polynomial-time algorithm for the former problem and a 2-approximation algorithm for the latter. These results match the currently known best results for purely edge-weighted graphs.

Takuro Fukunaga
Colored Range Searching in Linear Space

In colored range searching, we are given a set of n colored points in d ≥ 2 dimensions to store, and want to support orthogonal range queries taking colors into account. In the colored range counting problem, a query must report the number of distinct colors found in the query range, while an answer to the colored range reporting problem must report the distinct colors in the query range.

We give the first linear-space data structure for both problems in two dimensions (d = 2) with o(n) worst-case query time. We also give the first data structure obtaining almost-linear space usage and o(n) worst-case query time for points in d > 2 dimensions. Finally, we present the first dynamic solution to both counting and reporting with o(n) query time for d ≥ 2 and d ≥ 3 dimensions, respectively.

Roberto Grossi, Søren Vind
Fast Dynamic Graph Algorithms for Parameterized Problems

A fully dynamic graph is a data structure that (1) supports edge insertions and deletions and (2) answers problem-specific queries. The time complexities of (1) and (2) are referred to as the update time and the query time, respectively. There is a large body of research on dynamic graphs whose update time and query time are o(|G|), that is, sublinear in the graph size. However, almost all of it concerns problems in P. In this paper, we investigate dynamic graphs for NP-hard problems, exploiting the notion of fixed-parameter tractability (FPT).

We give dynamic graphs for Vertex Cover and Cluster Vertex Deletion parameterized by the solution size k. These dynamic graphs achieve almost the best possible update time O(poly(k) log n) and query time O(f(poly(k), k)), where f(n, k) is the time complexity of any static graph algorithm for the problems. We obtain these results by dynamically maintaining an approximate solution, which can be used to construct a small problem kernel. Exploiting the dynamic graph for Cluster Vertex Deletion, as a corollary, we obtain a quasilinear-time (polynomial) kernelization algorithm for Cluster Vertex Deletion. Until now, only quadratic-time kernelization algorithms were known for this problem.

Yoichi Iwata, Keigo Oka
Extending Partial Representations of Proper and Unit Interval Graphs

The recently introduced problem of extending partial interval representations asks, for an interval graph with some intervals pre-drawn by the input, whether the partial representation can be extended to a representation of the entire graph. In this paper, we give a linear-time algorithm for extending proper interval representations and an almost quadratic-time algorithm for extending unit interval representations.

We also introduce the more general problem of bounded representations of unit interval graphs, where the input constrains the positions of intervals by lower and upper bounds. We show that this problem is NP-complete for disconnected input graphs and give a polynomial-time algorithm for a special class of instances, where the ordering of the connected components of the input graph along the real line is fixed. This includes the case of partial representation extension.

The hardness result sharply contrasts with the recent polynomial-time algorithm for bounded representations of proper interval graphs [Balko et al. ISAAC'13]. So unless P = NP, proper and unit interval representations have very different structure. This explains why partial representation extension problems for these different types of representations require substantially different techniques.

Pavel Klavík, Jan Kratochvíl, Yota Otachi, Ignaz Rutter, Toshiki Saitoh, Maria Saumell, Tomáš Vyskočil
Minimum Tree Supports for Hypergraphs and Low-Concurrency Euler Diagrams

In this paper we present an O(n^2(m + log n))-time algorithm for computing a minimum-weight tree support (if one exists) of a hypergraph H = (V, S) with n vertices and m hyperedges. This improves the previously best known algorithm, with running time O(n^4 m^2). A support of H is a graph G on V such that each hyperedge in S induces a connected subgraph in G. If G is a tree, it is called a tree support, and it is a minimum tree support if its edge weight is minimum for a given edge weight function. Tree supports of hypergraphs have several applications, from social network analysis and network design problems to the visualization of hypergraphs and Euler diagrams. We show in particular how a minimum-weight tree support can be used to generate an area-proportional Euler diagram that satisfies typical well-formedness conditions and additionally minimizes the number of concurrent curves of the set boundaries in the Euler diagram.

Boris Klemz, Tamara Mchedlidze, Martin Nöllenburg
Additive Spanners: A Simple Construction

We consider additive spanners of unweighted undirected graphs. Let G be a graph and H a subgraph of G. The most naïve way to construct an additive k-spanner of G is the following: as long as H is not an additive k-spanner, repeat: find a pair (u, v) ∈ H that violates the spanner condition and a shortest path from u to v in G, and add the edges of this path to H.

We show that, with a very simple initial graph H, this naïve method gives additive 6- and 2-spanners of sizes matching the best known upper bounds. For additive 2-spanners we start with H = ∅ and end with O(n^{3/2}) edges in the spanner. For additive 6-spanners we start with H containing $\lfloor n^{1/3} \rfloor$ arbitrary edges incident to each node and end with a spanner of size O(n^{4/3}).

Mathias Bæk Tejs Knudsen
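
The naïve construction described in the abstract is short enough to state directly; BFS distances stand in for shortest paths in the unweighted graph (an illustrative sketch for a connected G, not tuned for the paper's size bounds):

```python
from collections import deque

def bfs_dist(adj, s):
    """Unweighted single-source shortest-path distances from s."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def naive_additive_spanner(adj, k):
    """Start from the empty subgraph H and, while some pair violates
    dist_H(u, v) <= dist_G(u, v) + k, add a shortest G-path between them."""
    nodes = list(adj)
    H = {u: set() for u in nodes}
    dG = {u: bfs_dist(adj, u) for u in nodes}
    changed = True
    while changed:
        changed = False
        for u in nodes:
            dH = bfs_dist(H, u)
            for v in nodes:
                if dH.get(v, float("inf")) > dG[u][v] + k:
                    # walk one shortest u-v path in G, adding its edges to H
                    cur = v
                    while cur != u:
                        nxt = next(w for w in adj[cur]
                                   if dG[u][w] == dG[u][cur] - 1)
                        if nxt not in H[cur]:
                            H[cur].add(nxt)
                            H[nxt].add(cur)
                            changed = True
                        cur = nxt
    return H
```

The paper's insight is that the size of the output of this very procedure can be bounded for the right initial H.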
Assigning Channels via the Meet-in-the-Middle Approach

We study the complexity of the Channel Assignment problem. By applying the meet-in-the-middle approach we obtain an algorithm for ℓ-bounded Channel Assignment (where the edge weights are bounded by ℓ) running in time $O^*((2\sqrt{\ell+1})^n)$. This is the first algorithm which breaks the (O(ℓ))^n barrier. We extend this algorithm to the counting variant, at the cost of a slightly higher polynomial factor.

A major open problem asks whether Channel Assignment admits an O(c^n)-time algorithm, for a constant c independent of ℓ. We consider a similar question for Generalized T-Coloring, a CSP problem that generalizes Channel Assignment. We show that Generalized T-Coloring does not admit a $2^{2^{o\left(\sqrt{n}\right)}} {\rm poly}(r)$-time algorithm, where r is the size of the instance.

Łukasz Kowalik, Arkadiusz Socała
Consistent Subset Sampling

Consistent sampling is a technique for specifying, in small space, a subset S of a potentially large universe U such that the elements in S satisfy a suitably chosen sampling condition. Given a subset $\mathcal{I}\subseteq U$, it should be possible to quickly compute $\mathcal{I}\cap S$, i.e., the elements in $\mathcal{I}$ satisfying the sampling condition. Consistent sampling has important applications in similarity estimation and in estimating the number of distinct items in a data stream.

In this paper we generalize consistent sampling to the setting where we are interested in sampling size-k subsets occurring in some set in a collection of sets of bounded size b, where k is a small integer. This can be done by applying standard consistent sampling to the k-subsets of each set, but that approach requires time Θ(b^k). Using a carefully designed hash function, for a given sampling probability p ∈ (0,1], we show how to improve the time complexity to Θ(b^{⌈k/2⌉} log log b + pb^k) in expectation, while maintaining strong concentration bounds for the sample. The space usage of our method is Θ(b^{⌈k/4⌉}).

We demonstrate the utility of our technique by applying it to several well-studied data mining problems. We show how to efficiently estimate the number of frequent k-itemsets in a stream of transactions and the number of bipartite cliques in a graph given as an incidence stream. Further, building upon recent work by Campagna et al., we show that our approach can be applied to frequent itemset mining in a parallel or distributed setting. We also present applications in graph stream mining.

Konstantin Kutzkov, Rasmus Pagh
Triangle Counting in Dynamic Graph Streams

Estimating the number of triangles in graph streams using a limited amount of memory has become a popular topic in the last decade. Different variations of the problem have been studied, depending on whether the graph edges are provided in an arbitrary order or as incidence lists. However, with a few exceptions, the algorithms have considered insert-only streams. We present a new algorithm estimating the number of triangles in dynamic graph streams where edges can be both inserted and deleted. We show that our algorithm achieves better time and space complexity than previous solutions for various graph classes, for example sparse graphs with a relatively small number of triangles. Also, for graphs with constant transitivity coefficient, a common situation in real graphs, this is the first algorithm achieving constant processing time per edge. The result is achieved by a novel approach combining sampling of vertex triples and sparsification of the input graph.

Konstantin Kutzkov, Rasmus Pagh
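
For intuition about what is being estimated, an exact (non-streaming, memory-unconstrained) triangle count of the graph defined by a dynamic edge stream can be computed by applying all insertions and deletions and then counting; names here are illustrative:

```python
def count_triangles(edge_stream):
    """edge_stream: sequence of (op, u, v) with op '+' (insert) or '-' (delete).
    Applies all updates, then counts triangles in the resulting graph."""
    adj = {}
    for op, u, v in edge_stream:
        adj.setdefault(u, set())
        adj.setdefault(v, set())
        if op == '+':
            adj[u].add(v)
            adj[v].add(u)
        else:
            adj[u].discard(v)
            adj[v].discard(u)
    # each triangle is counted once per edge, i.e. three times in total
    total = sum(len(adj[u] & adj[v]) for u in adj for v in adj[u] if u < v)
    return total // 3
```

The streaming challenge the paper addresses is approximating this quantity without ever materializing the full graph.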
Linear Time LexDFS on Cocomparability Graphs

Lexicographic depth first search (LexDFS) is a graph search protocol which has already proved to be a powerful tool on cocomparability graphs. Cocomparability graphs have been well studied by investigating their complements (comparability graphs) and their corresponding posets. Recently, however, LexDFS has led to a number of elegant polynomial and near-linear time algorithms on cocomparability graphs when used as a preprocessing step [2,3,11]. The nonlinear runtime of some of these results is a consequence of the complexity of this preprocessing step. We present the first linear time algorithm to compute a LexDFS cocomparability ordering, thereby answering a problem raised in [2] and helping achieve the first linear time algorithms for the minimum path cover problem, and thus the Hamilton path problem, the maximum independent set problem, and the minimum clique cover problem for this graph family.

Ekkehard Köhler, Lalla Mouatadid
Quantum Algorithms for Matrix Products over Semirings

In this paper we construct quantum algorithms for matrix products over several algebraic structures called semirings, including the (max, min)-matrix product, the distance matrix product and the Boolean matrix product. In particular, we obtain the following results.

We construct a quantum algorithm computing the product of two n × n matrices over the (max, min) semiring with time complexity $O(n^{2.473})$. In comparison, the best known classical algorithm for the same problem has complexity $O(n^{2.687})$. As an application, we obtain a $O(n^{2.473})$-time quantum algorithm for computing the all-pairs bottleneck paths of a graph with n vertices, while classically the best upper bound for this task is $O(n^{2.687})$.

We construct a quantum algorithm computing the ℓ most significant bits of each entry of the distance product of two n × n matrices in time $O(2^{0.64\ell} n^{2.46})$. In comparison, prior to the present work, the best known classical algorithm for the same problem had complexity $O(2^{\ell} n^{2.69})$. Our techniques lead to further improvements for classical algorithms as well, reducing the classical complexity to $O(2^{0.96\ell} n^{2.69})$, which gives a sublinear dependency on $2^{\ell}$.

The above two algorithms are the first quantum algorithms that perform better than the $\tilde O(n^{5/2})$-time straightforward quantum algorithm based on quantum search for matrix multiplication over these semirings. We also consider the Boolean semiring, and construct a quantum algorithm computing the product of two n × n Boolean matrices that outperforms the best known classical algorithms for sparse matrices.
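The semiring product being accelerated is itself elementary; a naive cubic-time (max, min) product makes the definition concrete (illustration only, bearing no relation to the quantum speedup):

```python
def maxmin_product(A, B):
    """Naive O(n^3) (max, min)-matrix product:
    C[i][j] = max over k of min(A[i][k], B[k][j]).

    This is the operation the quantum algorithm computes in O(n^2.473).
    """
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n))
             for j in range(n)]
            for i in range(n)]
```

In the all-pairs bottleneck paths application, A and B hold bottleneck capacities, and the (max, min) product relaxes paths by one intermediate hop.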

François Le Gall, Harumichi Nishimura
Ranked Document Selection

Let ${\cal D}$ be a collection of string documents of n characters in total. The top-k document retrieval problem is to preprocess ${\cal D}$ into a data structure that, given a query (P, k), can return the k documents of ${\cal D}$ most relevant to pattern P. The relevance of a document d for a pattern P is given by a predefined ranking function w(P, d). Linear space and optimal query time solutions already exist for this problem.

In this paper we consider a novel problem, document selection queries, which aim to report the k-th document most relevant to P (instead of reporting all top-k documents). We present a data structure using $O(n \log^{\varepsilon} n)$ space, for any constant ε > 0, answering selection queries in time $O(\log k / \log\log n)$, and a linear-space data structure answering queries in time $O(\log k)$, given the locus node of P in a (generalized) suffix tree of ${\cal D}$. We also prove that it is unlikely that a succinct-space solution for this problem exists with poly-logarithmic query time.

J. Ian Munro, Gonzalo Navarro, Rahul Shah, Sharma V. Thankachan
Approximation Algorithms for Hitting Triangle-Free Sets of Line Segments

We present polynomial time constant factor approximations for NP-complete special instances of the Guarding a Set of Segments (GSS) problem. The input to the GSS problem consists of a set of line segments, and the goal is to find a minimum size hitting set of the given set of line segments. We consider the underlying planar graph whose vertices are the intersection points and whose edges are the pairs of vertices that are adjacent on a line segment. Our results are for the subclass of instances of GSS for which the underlying planar graph has girth at least 4. On this class of instances, we show that an optimum solution to the natural hitting set LP can be rounded to yield a 3-factor approximation to the optimum hitting set. The GSS problem remains NP-complete on this subclass of instances. The main technique, which we believe could be quite general, is to round the hitting set LP optimum for special hypergraphs that we identify.
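To fix the objective being approximated, here is an exhaustive exact hitting set solver for tiny instances, with each segment modelled as the set of candidate points (e.g. intersection points) lying on it. This is only a statement of the problem, not the paper's LP-rounding technique:

```python
from itertools import combinations

def min_hitting_set(segments):
    """Exact minimum hitting set by brute force (exponential time).

    segments: list of frozensets of points; returns a smallest set of
    points intersecting every segment.
    """
    points = sorted(set().union(*segments))
    for size in range(len(points) + 1):
        for subset in combinations(points, size):
            chosen = set(subset)
            if all(chosen & seg for seg in segments):
                return chosen
    return set()
```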

Anup Joshi, N. S. Narayanaswamy
Reduction Techniques for Graph Isomorphism in the Context of Width Parameters

We study the parameterized complexity of the graph isomorphism problem when parameterized by width parameters related to tree decompositions. We apply the following technique to obtain fixed-parameter tractability for such parameters. We first compute an isomorphism invariant set of potential bags for a decomposition and then apply a restricted version of the Weisfeiler-Lehman algorithm to solve isomorphism. With this we show fixed-parameter tractability for several parameters and provide a unified explanation for various isomorphism results concerned with parameters related to tree decompositions.

As a possible first step towards intractability results for parameterized graph isomorphism we develop an fpt Turing-reduction from strong tree width to the a priori unrelated parameter maximum degree.
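The Weisfeiler-Lehman component of the approach builds on classic colour refinement, which can be sketched in a few lines (this is the standard 1-dimensional procedure for illustration; the paper applies a restricted variant to isomorphism-invariant sets of potential bags, which this does not capture):

```python
def wl_refinement(adj):
    """1-dimensional Weisfeiler-Lehman colour refinement.

    adj: dict vertex -> list of neighbours. Returns the stable colouring
    as a dict vertex -> colour id. Graphs whose colour multisets differ
    are non-isomorphic; identical multisets are inconclusive.
    """
    colour = {v: 0 for v in adj}
    for _ in range(len(adj)):
        # each vertex's new colour encodes its old colour plus the
        # multiset of its neighbours' old colours
        sig = {v: (colour[v], tuple(sorted(colour[w] for w in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == colour:
            break
        colour = new
    return colour
```

For example, refinement separates the endpoints of a path from its middle vertex, while it leaves a triangle monochromatic.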

Yota Otachi, Pascal Schweitzer
Approximate Counting of Matchings in (3,3)-Hypergraphs

We design a fully polynomial time approximation scheme (FPTAS) for counting the number of matchings (packings) in arbitrary 3-uniform hypergraphs of maximum degree three, referred to as (3,3)-hypergraphs. It is the first polynomial time approximation scheme for that problem, which also includes, as a special case, the 3D Matching counting problem for 3-partite (3,3)-hypergraphs. The proof technique of this paper uses the general correlation decay technique and a new combinatorial analysis of the underlying structures of the intersection graphs. The proof method could also be of independent interest.
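The counted objects are simply sets of pairwise disjoint hyperedges (including the empty matching); an exhaustive counter pins down the definition on small instances (the paper's FPTAS replaces this exponential enumeration with correlation decay, which this sketch does not implement):

```python
def count_matchings(edges):
    """Count matchings in a hypergraph given as a list of frozensets,
    i.e. subsets of pairwise disjoint hyperedges, by branching on each
    edge (skip it, or take it if disjoint from those taken so far).
    """
    def rec(i, used):
        if i == len(edges):
            return 1
        total = rec(i + 1, used)              # skip edge i
        if not (edges[i] & used):             # take edge i if disjoint
            total += rec(i + 1, used | edges[i])
        return total
    return rec(0, frozenset())
```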

Andrzej Dudek, Marek Karpinski, Andrzej Ruciński, Edyta Szymańska
Backmatter
Metadata
Title
Algorithm Theory – SWAT 2014
Edited by
R. Ravi
Inge Li Gørtz
Copyright Year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-08404-6
Print ISBN
978-3-319-08403-9
DOI
https://doi.org/10.1007/978-3-319-08404-6