
2012 | Book

LATIN 2012: Theoretical Informatics

10th Latin American Symposium, Arequipa, Peru, April 16-20, 2012. Proceedings


About this book

This book constitutes the proceedings of the 10th Latin American Symposium on Theoretical Informatics, LATIN 2012, held in Arequipa, Peru, in April 2012. The 55 papers presented in this volume were carefully reviewed and selected from 153 submissions. The papers address a variety of topics in theoretical computer science with a certain focus on algorithms, automata theory and formal languages, coding theory and data compression, algorithmic graph theory and combinatorics, complexity theory, computational algebra, computational biology, computational geometry, computational number theory, cryptography, theoretical aspects of databases and information retrieval, data structures, networks, logic in computer science, machine learning, mathematical programming, parallel and distributed computing, pattern matching, quantum computing and random structures.

Table of Contents

Frontmatter
A Generalization of the Convex Kakeya Problem

We consider the following geometric alignment problem: Given a set of line segments in the plane, find a convex region of smallest area that contains a translate of each input segment. This can be seen as a generalization of Kakeya’s problem of finding a convex region of smallest area such that a needle can be turned through 360 degrees within this region. Our main result is an optimal Θ(n log n)-time algorithm for our geometric alignment problem, when the input is a set of n line segments. We also show that, if the goal is to minimize the perimeter of the region instead of its area, then the optimum placement is when the midpoints of the segments coincide. Finally, we show that for any compact convex figure G, the smallest enclosing disk of G is a smallest-perimeter region containing a translate of any rotated copy of G.

Hee-Kap Ahn, Sang Won Bae, Otfried Cheong, Joachim Gudmundsson, Takeshi Tokuyama, Antoine Vigneron
Low Complexity Scheduling Algorithm Minimizing the Energy for Tasks with Agreeable Deadlines

Power management aims at reducing the energy consumed by computer systems while maintaining a good level of performance. One of the mechanisms used to save energy is the shut-down mechanism, which puts the system into a sleep state when it is idle. No energy is consumed in this state, but a fixed amount of energy is required for a transition from the sleep state to the active state, equal to L times the energy required for the execution of a unit-time task. In this paper, we focus on the off-line version of this problem, where a set of unit-time tasks with release dates and deadlines has to be scheduled in order to minimize the overall energy consumed during the idle periods of the schedule. Here we focus on the case where the tasks have agreeable deadlines. For the single-processor case, an O(n^3) algorithm has been proposed in [7] for unit-time tasks and arbitrary L. We improve this result by introducing a new O(n^2) polynomial-time algorithm for tasks with arbitrary processing times and arbitrary L. For the multiprocessor case we also improve the complexity from O(n^3 m^2) [7] to O(n^2 m) in the case of unit-time tasks and unit L.

Eric Angel, Evripidis Bampis, Vincent Chau
Bichromatic 2-Center of Pairs of Points

We study a class of geometric optimization problems closely related to the 2-center problem: Given a set S of n pairs of points, assign to each point a color (“red” or “blue”) so that each pair’s points are assigned different colors and a function of the radii of the minimum enclosing balls of the red points and the blue points, respectively, is optimized. In particular, we consider the problems of minimizing the maximum and minimizing the sum of the two radii. For each case, minmax and minsum, we consider distances measured in the L_2 and in the L_∞ metrics. Our problems are motivated by a facility location problem in transportation system design, in which we are given origin/destination pairs of points for desired travel, and our goal is to locate an optimal road/flight segment in order to minimize the travel to/from the endpoints of the segment.

Esther M. Arkin, José Miguel Díaz-Báñez, Ferran Hurtado, Piyush Kumar, Joseph S. B. Mitchell, Belén Palop, Pablo Pérez-Lantero, Maria Saumell, Rodrigo I. Silveira
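For intuition, the minmax variant under the L_∞ metric can be solved by brute force on small instances, since the L_∞ minimum enclosing ball of a point set is its smallest enclosing axis-parallel square. The sketch below is our own illustration of the problem statement, not the paper’s algorithm:

```python
# Brute-force minmax bichromatic 2-center under the L-infinity metric.
# (Illustrative baseline only; the paper gives far more efficient methods.)
from itertools import product

def linf_radius(pts):
    # L-infinity minimum enclosing ball = smallest enclosing axis-parallel
    # square, whose radius is half the larger bounding-box side.
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return max(max(xs) - min(xs), max(ys) - min(ys)) / 2.0

def minmax_2center(pairs):
    best = float("inf")
    # Enumerate every red/blue assignment that separates each pair.
    for choice in product((0, 1), repeat=len(pairs)):
        red = [p[c] for p, c in zip(pairs, choice)]
        blue = [p[1 - c] for p, c in zip(pairs, choice)]
        best = min(best, max(linf_radius(red), linf_radius(blue)))
    return best

pairs = [((0, 0), (10, 0)), ((0, 1), (10, 1)), ((1, 0), (9, 0))]
print(minmax_2center(pairs))  # 0.5: reds cluster near x=0, blues near x=10
```

The enumeration costs 2^n, which is exactly the exponential baseline the paper’s structural results improve upon.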
Erdős-Rényi Sequences and Deterministic Construction of Expanding Cayley Graphs

Given a finite group G by its multiplication table, we give a deterministic polynomial-time construction of a directed O(log|G|) degree Cayley expander for G. Our construction exploits the connection between rapid mixing random walks and spectral expansion. Our main group-theoretic tool is Erdős-Rényi sequences. We give a similar construction of O(log|G|) degree undirected Cayley expanders for G, which is an alternative proof of Wigderson and Xiao’s derandomization [WX08] of the Alon-Roichman randomized construction.

Vikraman Arvind, Partha Mukhopadhyay, Prajakta Nimbhorkar
A Better Approximation Ratio and an IP Formulation for a Sensor Cover Problem

We study a one-dimensional sensor cover problem, known as the Restricted Strip Cover (RSC) problem, defined as follows. We are given an interval U of the real line, and a set of n sensors, each of which covers some subinterval of U and is powered with a battery of limited duration. The RSC problem consists in assigning a starting time to each sensor so that the whole interval U is covered for as long as possible. We assume that when a sensor is turned on (at its starting time) it remains on through the duration of its battery.

Buchsbaum, Efrat, Jain, Venkatasubramanian and Yi showed that RSC is NP-hard and designed an O(log log n)-approximation algorithm. More recently, Gibson and Varadarajan presented a greedy-like algorithm which they proved to have approximation ratio at most 5. We prove that the approximation ratio of this algorithm is 4, and exhibit an instance showing that this ratio is tight. We also show an integer programming formulation for this problem and present some computational results obtained with the implementation of this approach. For the same set of instances, we compute the quality of the solution found by the approximation algorithm.

Rafael da Ponte Barbosa, Yoshiko Wakabayashi
On the Advice Complexity of the Knapsack Problem

We study the advice complexity and the random bit complexity of the online knapsack problem: Given a knapsack of unit capacity, and n items that arrive in successive time steps, an online algorithm has to decide for every item whether it gets packed into the knapsack or not. The goal is to maximize the value of the items in the knapsack without exceeding its capacity. In the model of advice complexity of online problems, one asks how many bits of advice about the unknown parts of the input are both necessary and sufficient to achieve a specific competitive ratio. It is well-known that even the unweighted online knapsack problem does not admit any competitive deterministic online algorithm. We show that a single bit of advice helps a deterministic algorithm to become 2-competitive, but that Ω(log n) advice bits are necessary to further improve the deterministic competitive ratio. This is the first time that such a phase transition for the number of advice bits has been observed for any problem. We also show that, surprisingly, instead of an advice bit, a single random bit allows for a competitive ratio of 2, and any further amount of randomness does not improve this. Moreover, we prove that, in a resource augmentation model, i.e., when allowing a little overpacking of the knapsack, a constant number of advice bits suffices to achieve a near-optimal competitive ratio. We also study the weighted version of the problem, proving that, with O(log n) bits of advice, we can get arbitrarily close to an optimal solution and, using asymptotically fewer bits, we are not competitive.

Hans-Joachim Böckenhauer, Dennis Komm, Richard Královič, Peter Rossmanith
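The single-advice-bit result can be made concrete. A plausible reconstruction (our own sketch, not necessarily the paper’s exact algorithm): the advice bit tells the algorithm whether an item of size greater than 1/2 will appear, and the algorithm either waits for that big item or packs greedily.

```python
# Sketch: one advice bit for unweighted online knapsack (unit capacity,
# item value = item size). This is our reconstruction of how a single bit
# can yield 2-competitiveness, not a quote of the paper's algorithm.

def pack(items, big_item_coming):
    load = 0.0
    for x in items:
        if big_item_coming:
            # Wait for the first item larger than 1/2 and take only it:
            # its value alone is > 1/2 >= OPT/2, since OPT <= capacity 1.
            if x > 0.5:
                return x
        else:
            # No big item exists: pack greedily. If greedy ever has to
            # reject an item (all items are <= 1/2), the load must already
            # exceed 1/2, so greedy also secures > 1/2 >= OPT/2.
            if load + x <= 1.0:
                load += x
    return load

items = [0.1, 0.95, 0.2]
print(pack(items, True))   # takes the 0.95 item
print(pack(items, False))  # greedy packs 0.1 and 0.2, missing the big item
```

Without the bit, a deterministic algorithm cannot tell which of the two strategies is safe, which is exactly why the unadvised problem admits no competitive deterministic algorithm.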
Reoptimization of Some Maximum Weight Induced Hereditary Subgraph Problems

The reoptimization issue studied in this paper can be described as follows: given an instance I of some problem Π, an optimal solution OPT for Π in I, and an instance I′ resulting from a local perturbation of I that consists of insertions or removals of a small amount of data, we wish to use OPT in order to solve Π in I′, either optimally or by guaranteeing an approximation ratio better than that guaranteed by an ex nihilo computation, and with running time better than that needed for such a computation. We use this setting in order to study weighted versions of several representatives of a broad class of problems known in the literature as maximum induced hereditary subgraph problems. The main problems studied are max independent set, max k-colorable subgraph and max split subgraph under vertex insertions and deletions.

Nicolas Boria, Jérôme Monnot, Vangelis Th. Paschos
On Plane Constrained Bounded-Degree Spanners

Let P be a set of points in the plane and S a set of non-crossing line segments with endpoints in P. The visibility graph of P with respect to S, denoted Vis(P,S), has vertex set P and an edge for each pair of vertices u, v in P for which no line segment of S properly intersects uv. We show that the constrained half-θ_6-graph (which is identical to the constrained Delaunay graph whose empty visible region is an equilateral triangle) is a plane 2-spanner of Vis(P,S). We then show how to construct a plane 6-spanner of Vis(P,S) with maximum degree 6 + c, where c is the maximum number of segments adjacent to a vertex.

Prosenjit Bose, Rolf Fagerberg, André van Renssen, Sander Verdonschot
Space-Efficient Approximation Scheme for Circular Earth Mover Distance

The Earth Mover Distance (EMD) between point sets A and B is the minimum cost of a bipartite matching between A and B. EMD is an important measure for estimating similarities between objects with quantifiable features and has important applications in several areas including computer vision. The streaming complexity of approximating EMD between point sets in a two-dimensional discretized grid is an important open problem proposed in [8,9].

We study the problem of approximating EMD in the streaming model, when points lie on a discretized circle. Computing the EMD in this setting has applications to computer vision [13] and can be seen as a special case of computing EMD on a discretized grid. We achieve a (1 ± ε) approximation for EMD in Õ(ε^{-3}) space, for every 0 < ε < 1. To our knowledge, this is the first streaming algorithm for a natural and widely applied EMD model that matches the space bound asked in [9].

Joshua Brody, Hongyu Liang, Xiaoming Sun
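As an offline point of comparison for the streaming setting, the exact EMD between two equal-mass multisets on a discretized circle can be computed with prefix sums and a median, a classical observation (our own illustrative code, not the paper’s streaming algorithm):

```python
# Exact EMD between two equal-size multisets on a discretized circle of m
# positions. Classical fact: with F_i the running difference of counts,
# the circular EMD equals min over c of sum_i |F_i - c|, attained when c
# is a median of the F_i. (Offline baseline; not the streaming algorithm.)

def circular_emd(a, b, m):
    # a[i], b[i] = number of points of each multiset at circle position i.
    F, run = [], 0
    for i in range(m):
        run += a[i] - b[i]
        F.append(run)
    F.sort()
    c = F[m // 2]  # a median of the prefix differences
    return sum(abs(f - c) for f in F)

# Two single points opposite each other on a 6-position circle:
print(circular_emd([1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0], 6))  # 3
```

This exact computation needs the whole input at once; the point of the paper is to approximate the same quantity in small space as the points stream by.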
Density Classification on Infinite Lattices and Trees

Consider an infinite graph with nodes initially labeled by independent Bernoulli random variables of parameter p. We want to find a (probabilistic or deterministic) cellular automaton or a finite-range interacting particle system that decides if p is smaller or larger than 1/2. Precisely, the trajectories should converge to the uniform configuration with only 0’s if p < 1/2, and only 1’s if p > 1/2. We present solutions to that problem on ℤ^d, for any d ≥ 2, and on the regular infinite trees. For ℤ, we propose some candidates that we back up with numerical simulations.

Ana Bušić, Nazim Fatès, Jean Mairesse, Irène Marcovici
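The setting is easy to experiment with on a finite grid. The sketch below simulates Toom’s north-east-center majority rule, a classical eroder relevant to density classification on ℤ^2; the finite toroidal grid and this specific rule are our illustrative choices, not a reproduction of the paper’s constructions:

```python
# Toy simulation of Toom's north-east-center majority rule on a finite
# toroidal grid (illustration of the density-classification setting only).

def step(grid):
    n = len(grid)
    # New state of each cell = majority of itself, its north neighbour and
    # its east neighbour (with wrap-around on the finite torus).
    return [[1 if grid[i][j] + grid[(i - 1) % n][j] + grid[i][(j + 1) % n] >= 2
             else 0
             for j in range(n)]
            for i in range(n)]

# A finite island of 1s in a sea of 0s is eroded away, corner by corner.
n = 30
grid = [[1 if 10 <= i < 15 and 10 <= j < 15 else 0 for j in range(n)]
        for i in range(n)]
for _ in range(20):
    grid = step(grid)
print(sum(map(sum, grid)))  # 0: the 5x5 island has been erased
```

The eroder property, that any finite island of the minority symbol vanishes, is the key mechanism behind convergence to the uniform configuration.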
Coloring Planar Homothets and Three-Dimensional Hypergraphs

We prove that every finite set of homothetic copies of a given compact and convex body in the plane can be colored with four colors so that any point covered by at least two copies is covered by two copies with distinct colors. This generalizes a previous result from Smorodinsky (SIAM J. Disc. Math. 2007). Then we show that for any k ≥ 2, every three-dimensional hypergraph can be colored with 6(k − 1) colors so that every hyperedge e contains min{|e|, k} vertices with mutually distinct colors. This refines a previous result from Aloupis et al. (Disc. & Comp. Geom. 2009). As corollaries, we obtain constant factor improvements for conflict-free coloring, k-strong conflict-free coloring, and choosability. Proofs of the upper bounds are constructive and yield simple, polynomial-time algorithms.

Jean Cardinal, Matias Korman
An Equivariance Theorem with Applications to Renaming

In the renaming problem, each process in a distributed system is issued a unique name from a large name space, and the processes must coordinate with one another to choose unique names from a much smaller name space.

We show that lower bounds on the solvability of renaming in an asynchronous distributed system can be formulated as a purely topological question about the existence of an equivariant chain map from a “topological disk” to a “topological annulus”. Proving the non-existence of such a map implies the non-existence of a distributed renaming algorithm in several related models of computation.

Armando Castañeda, Maurice Herlihy, Sergio Rajsbaum
Renaming Is Weaker Than Set Agreement But for Perfect Renaming: A Map of Sub-consensus Tasks

In the wait-free shared memory model, substantial attention has been devoted to understanding the relative power of sub-consensus tasks. Two important sub-consensus families of tasks have been identified: k-set agreement and M-renaming. When 2 ≤ k ≤ n − 1 and n ≤ M ≤ 2n − 2, these tasks are more powerful than read/write registers, but not strong enough to solve consensus for two processes.

This paper studies the power of renaming with respect to set agreement. It shows that, in a system of n processes, n-renaming is strictly stronger than (n − 1)-set agreement, but not stronger than (n − 2)-set agreement. Furthermore, (n + 1)-renaming cannot solve even (n − 1)-set agreement. As a consequence, there are cases where set agreement and renaming are incomparable when looking at their power to implement each other.

Armando Castañeda, Damien Imbs, Sergio Rajsbaum, Michel Raynal
Pseudorandomness of a Random Kronecker Sequence

We study two randomness measures for the celebrated Kronecker sequence S(α) formed by the fractional parts of the multiples of a real α. The first measure is the well-known discrepancy, whereas the other one, the Arnold measure, is less popular. Both describe the behaviour of the truncated sequence S_T(α) formed with the first T terms, for T → ∞. We perform a probabilistic study of the pseudorandomness of the sequence S(α) (discrepancy and Arnold measure), and we give estimates of their mean values in two probabilistic settings: the input α may be either a random real or a random rational. The results exhibit strong similarities between the real and rational cases; they also show the influence of the number T of truncated terms, via its relation to the continued fraction expansion of α.

Eda Cesaratto, Brigitte Vallée
Revisiting the Cache Miss Analysis of Multithreaded Algorithms

This paper revisits the cache miss analysis of algorithms when scheduled using randomized work stealing (RWS) in a parallel environment where processors have private caches. We focus on the effect of task migration on cache miss costs, and in particular, the costs of accessing “hidden” data typically stored on execution stacks (such as the return location for a recursive call).

Prior analyses, with the exception of [1], do not account for such costs, and it is not clear how to extend them to do so. By means of a new analysis, we show that for a variety of basic algorithms these task migration costs are no larger than the costs for the remainder of the computation, and thereby recover existing bounds. We also analyze a number of algorithms implicitly analyzed by [1], namely Scans (including Prefix Sums and Matrix Transposition), Matrix Multiply (the depth-n in-place algorithm, the standard 8-way divide-and-conquer algorithm, and Strassen’s algorithm), I-GEP, finding a longest common subsequence, FFT, the SPMS sorting algorithm, list ranking and graph connected components; we obtain sharper bounds in many cases.

While this paper focuses on the RWS scheduler, the bounds we obtain are a function of the number of steals, and thus would apply to any scheduler given bounds on the number of steals it induces.

Richard Cole, Vijaya Ramachandran
Parameterized Complexity of MaxSat above Average

In MaxSat, we are given a CNF formula F with n variables and m clauses and asked to find a truth assignment satisfying the maximum number of clauses. Let r_1, …, r_m be the number of literals in the clauses of F. Then asat(F) = Σ_{i=1}^{m} (1 − 2^{−r_i}) is the expected number of clauses satisfied by a random truth assignment (the truth values of the variables are distributed uniformly and independently). It is well-known that, in polynomial time, one can find a truth assignment satisfying at least asat(F) clauses. In the parameterized problem MaxSat-AA, we are to decide whether there is a truth assignment satisfying at least asat(F) + k clauses, where k is the (nonnegative) parameter. We prove that MaxSat-AA is para-NP-complete and thus MaxSat-AA is not fixed-parameter tractable unless P = NP. This is in sharp contrast to the similar problem MaxLin2-AA, which was recently proved to be fixed-parameter tractable by Crowston et al. (FSTTCS 2011).

In fact, we consider a more refined version of MaxSat-AA, Max-r(n)-Sat-AA, where r_j ≤ r(n) for each j. Alon et al. (SODA 2010) proved that if r = r(n) is a constant, then Max-r-Sat-AA is fixed-parameter tractable. We prove that Max-r(n)-Sat-AA is para-NP-complete for r(n) = ⌈log n⌉. We also prove that, assuming the exponential time hypothesis, Max-r(n)-Sat-AA is not fixed-parameter tractable already for any r(n) ≥ log log n + φ(n), where φ(n) is any unbounded strictly increasing function. This lower bound on r(n) cannot be decreased much further, as we prove that Max-r(n)-Sat-AA is fixed-parameter tractable for any r(n) ≤ log log n − log log log n − φ(n), where φ(n) is any unbounded strictly increasing function. The proof uses some results on MaxLin2-AA.

Robert Crowston, Gregory Gutin, Mark Jones, Venkatesh Raman, Saket Saurabh
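The quantity asat(F) above is immediate to compute from the clause lengths; a minimal sketch, in which the list-of-literal-lists clause encoding is an illustrative choice of ours:

```python
# Expected number of clauses satisfied by a uniform random assignment:
# asat(F) = sum over clauses of (1 - 2^(-r_i)), where r_i is the number of
# literals in clause i. A formula is a list of clauses; each clause is a
# list of nonzero integers (negative = negated variable) -- an encoding
# chosen here for illustration, as in the DIMACS CNF convention.

def asat(formula):
    return sum(1 - 2.0 ** (-len(clause)) for clause in formula)

# Example: (x1 or x2) and (not x1) and (x2 or x3 or not x4)
f = [[1, 2], [-1], [2, 3, -4]]
print(asat(f))  # 0.75 + 0.5 + 0.875 = 2.125
```

MaxSat-AA then asks whether some assignment beats this baseline by at least the parameter k.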
Solving the 2-Disjoint Connected Subgraphs Problem Faster Than 2^n

The 2-Disjoint Connected Subgraphs problem, given a graph along with two disjoint sets of terminals Z_1, Z_2, asks whether it is possible to find disjoint sets A_1, A_2, such that Z_1 ⊆ A_1, Z_2 ⊆ A_2 and A_1, A_2 induce connected subgraphs. While the naive algorithm runs in O(2^n n^{O(1)}) time, solutions with complexity of the form O((2 − ε)^n) have been found only for special graph classes [15, 19]. In this paper we present an O(1.933^n) algorithm for 2-Disjoint Connected Subgraphs in the general case, thus breaking the 2^n barrier. As a counterpoint to this result, we show that if we parameterize the problem by the number of non-terminal vertices, it is hard both to speed up the brute-force approach and to find a polynomial kernel.

Marek Cygan, Marcin Pilipczuk, Michał Pilipczuk, Jakub Onufry Wojtaszczyk
A O(1/ε^2)^n-Time Sieving Algorithm for Approximate Integer Programming

The Integer Programming Problem (IP) for a polytope P ⊆ ℝ^n is to find an integer point in P or decide that P is integer free. We give a randomized algorithm for an approximate version of this problem, which correctly decides whether P contains an integer point or whether a (1 + ε)-scaling of P about its center of gravity is integer free, in O(1/ε^2)^n time and O(1/ε)^n space, with overwhelming probability. We reduce this approximate IP question to an approximate Closest Vector Problem (CVP) in a “near-symmetric” semi-norm, which we solve via a randomized sieving technique first developed by Ajtai, Kumar, and Sivakumar (STOC 2001). Our main technical contribution is an extension of the AKS sieving technique which works for any near-symmetric semi-norm. Our results also extend to general convex bodies and lattices.

Daniel Dadush
Two-Dimensional Range Diameter Queries

Given a set of n points in the plane, range diameter queries ask for the furthest pair of points in a given axis-parallel rectangular range. We provide evidence for the hardness of designing space-efficient data structures that support range diameter queries by giving a reduction from the set intersection problem. The difficulty of the latter problem is widely acknowledged, and it is conjectured to require nearly quadratic space in order to obtain constant query time, which is matched by known data structures for both problems, up to polylogarithmic factors. We strengthen the evidence by giving a lower bound for an important subproblem arising in solutions to the range diameter problem: computing the diameter of two convex polygons that are separated by a vertical line and are preprocessed independently requires almost linear time in the number of vertices of the smaller polygon, no matter how much space is used. We also show that range diameter queries can be answered much more efficiently for the case of points in convex position, by describing a data structure of size O(n log n) that supports queries in O(log n) time.

Pooya Davoodi, Michiel Smid, Freek van Walderveen
An Improved Upper Bound on the Density of Universal Random Graphs

We give a polynomial-time randomized algorithm that, on receiving as input a pair (H, G) of n-vertex graphs, searches for an embedding of H into G. If H has bounded maximum degree and G is suitably dense and pseudorandom, then the algorithm succeeds with high probability. Our algorithm proves that, for every integer d ≥ 3 and suitable constant C = C_d, as n → ∞, asymptotically almost all graphs with n vertices and ⌊C n^{2−1/d} log^{1/d} n⌋ edges contain as subgraphs all graphs with n vertices and maximum degree at most d.

Domingos Dellamonica Jr., Yoshiharu Kohayakawa, Vojtěch Rödl, Andrzej Ruciński
Logspace Computations in Graph Groups and Coxeter Groups

Computing normal forms in groups (or monoids) is in general harder than solving the word problem (equality testing). However, normal form computation has a much wider range of applications. It is therefore interesting to investigate the complexity of computing normal forms for important classes of groups. We show that shortlex normal forms in graph groups and in right-angled Coxeter groups can be computed in logspace. Graph groups are also known as free partially commutative groups or as right-angled Artin groups in the literature. (Artin groups can be realized as subgroups of Coxeter groups.) Graph groups arise in many areas and have a close connection to concurrency theory. The connection is used here. Indeed, for our result we use a representation of group elements by Mazurkiewicz traces. These are directed acyclic node-labelled graphs (i.e. pomsets). They form an algebraic model to describe runs of concurrent systems. Concurrent systems which are deterministic and co-deterministic can be studied via inverse monoids. As an application of our results we show that the word problem for free partially commutative inverse monoids is in logspace. This result generalizes a result of Ondrusch and the third author on free inverse monoids.

All Coxeter groups are linear, so the word problem can be solved in logspace, but it is open (in the non-right-angled case) whether shortlex normal forms can be computed in logspace, or, less demanding, whether they can be computed efficiently in parallel. We show that for all Coxeter groups the set of letters occurring in the shortlex normal form of an element can be computed in logspace.

Volker Diekert, Jonathan Kausch, Markus Lohrey
Approximating the Edge Length of 2-Edge Connected Planar Geometric Graphs on a Set of Points

Given a set P of n points in the plane, we solve the problems of constructing a geometric planar graph spanning P 1) of minimum degree 2, and 2) which is 2-edge connected, respectively, and has max edge length bounded by a factor of 2 times the optimal; we also show that the factor 2 is best possible given appropriate connectivity conditions on the set P, respectively. First, we construct in O(n log n) time a geometric planar graph of minimum degree 2 and max edge length bounded by 2 times the optimal. This is then used to construct in O(n log n) time a 2-edge connected geometric planar graph spanning P with max edge length bounded by √5 times the optimal, assuming that the set P forms a connected Unit Disk Graph. Second, we prove that 2 times the optimal is always sufficient if the set of points forms a 2-edge connected Unit Disk Graph, and give an algorithm that runs in O(n^2) time. We also show that for k ∈ O(√n), there exists a set P of n points in the plane such that even though the Unit Disk Graph spanning P is k-vertex connected, there is no 2-edge connected geometric planar graph spanning P even if the length of its edges is allowed to be up to 17/16.

Stefan Dobrev, Evangelos Kranakis, Danny Krizanc, Oscar Morales-Ponce, Ladislav Stacho
On the Radon Number for P_3-Convexity

The generalization of classical results about convex sets in ℝ^n to abstract convexity spaces, defined by sets of paths in graphs, leads to many challenging structural and algorithmic problems. Here we study the Radon number for the P_3-convexity on graphs. P_3-convexity has been proposed in connection with rumour and disease spreading processes in networks, and the Radon number allows generalizations of Radon’s classical convexity result. We establish hardness results, describe efficient algorithms for trees, and prove a best-possible bound on the Radon number of connected graphs.

Mitre C. Dourado, Dieter Rautenbach, Vinícius Fernandes dos Santos, Philipp M. Schäfer, Jayme L. Szwarcfiter, Alexandre Toman
Computing Minimum Geodetic Sets of Proper Interval Graphs

We show that the geodetic number of proper interval graphs can be computed in polynomial time. This problem is NP-hard on chordal graphs and on bipartite weakly chordal graphs. Only an upper bound on the geodetic number of proper interval graphs has been known prior to our result.

Tınaz Ekim, Aysel Erey, Pinar Heggernes, Pim van ’t Hof, Daniel Meister
Hausdorff Rank of Scattered Context-Free Linear Orders

We consider context-free languages equipped with the lexicographic ordering. We show that when the lexicographic ordering of a context-free language is scattered, then its Hausdorff rank is less than ω^ω. As an application of this result, we obtain that an ordinal is the order type of the lexicographic ordering of a context-free language if and only if it is less than ω^{ω^ω}.

Zoltán Ésik, Szabolcs Iván
Opportunistic Information Dissemination in Mobile Ad-Hoc Networks: Adaptiveness vs. Obliviousness and Randomization vs. Determinism

In this paper the problem of information dissemination in Mobile Ad-hoc Networks (MANETs) is studied. The problem is to disseminate a piece of information, initially held by a distinguished source node, to all nodes in a target set. We assume a weak set of restrictions on the mobility of nodes, parameterized by α, the disconnection time, and β, the link stability time, such that the MANETs considered are connected enough for dissemination. Such a connectivity model generalizes previous models in that we assume much less connectivity, or make explicit the assumptions in previous papers.

In MANETs, nodes are embedded in the plane and can move with bounded speed. Communication between nodes occurs over a collision-prone single channel. We show upper and lower bounds for different types of randomized protocols, parameterized by α and β.

This problem has been extensively studied in static networks and for deterministic protocols. We show tight bounds on the randomized complexity of information dissemination in MANETs, for reasonable choices of α and β. We show that randomization reduces the time complexity of the problem by a logarithmic or linear factor, depending on the class of randomized protocol considered.

Martín Farach-Colton, Antonio Fernández Anta, Alessia Milani, Miguel A. Mosteiro, Shmuel Zaks
On the Non-progressive Spread of Influence through Social Networks

The spread of influence in social networks is studied in two main categories: the progressive model and the non-progressive model (see e.g. the seminal work of Kempe, Kleinberg, and Tardos in KDD 2003). While the progressive models are suitable for modeling the spread of influence in monopolistic settings, non-progressive models are more appropriate for modeling non-monopolistic settings, e.g., modeling the diffusion of two competing technologies over a social network. Despite the extensive work on the progressive model, non-progressive models have not been studied well. In this paper, we study the spread of influence in the non-progressive model under the strict majority threshold: given a graph G with a set of initially infected nodes, each node gets infected at time τ iff a majority of its neighbors are infected at time τ − 1. Our goal in the MinPTS problem is to find a minimum-cardinality initial set of infected nodes that would eventually converge to a steady state where all nodes of G are infected.

We prove that while MinPTS is NP-hard for a restricted family of graphs, it admits an improved constant-factor approximation algorithm for power-law graphs. We do so by proving lower and upper bounds in terms of the minimum and maximum degree of nodes in the graph. The upper bound is achieved in turn by applying a natural greedy algorithm. Our experimental evaluation of the greedy algorithm also shows its superior performance compared to other algorithms for a set of real-world graphs as well as random power-law graphs. Finally, we study the convergence properties of these algorithms and show that the non-progressive model converges in at most O(|E(G)|) steps.

MohammadAmin Fazli, Mohammad Ghodsi, Jafar Habibi, Pooya Jalaly Khalilabadi, Vahab Mirrokni, Sina Sadeghian Sadeghabad
Forbidden Patterns

We consider the problem of indexing a collection of documents (a.k.a. strings) of total length n such that the following kind of queries are supported: given two patterns P+ and P−, list all n_match documents containing P+ but not P−. This is a natural extension of the classic problem of document listing as considered by Muthukrishnan [SODA’02], where only the positive pattern P+ is given. Our main solution is an index of size O(n^{3/2}) bits that supports queries in O(|P+| + |P−| + n_match + √n) time.

Johannes Fischer, Travis Gagie, Tsvi Kopelowitz, Moshe Lewenstein, Veli Mäkinen, Leena Salmela, Niko Välimäki
Structural Complexity of Multiobjective NP Search Problems

An NP search problem is a multivalued function that maps instances to polynomially length-bounded solutions such that the validity of solutions is testable in polynomial time. NPMV_g denotes the class of these functions.

There are at least two computational tasks associated with an NP search problem:

(i) Find out whether a solution exists.

(ii) Compute an arbitrary solution.

Further computational tasks arise in settings with multiple objectives, for example:

(iii) Compute a solution that is minimal w.r.t. the first objective,

while the second objective does not exceed some budget. Each such computational task defines a class of multivalued functions. We systematically investigate these classes and their relation to traditional complexity classes and classes of multivalued functions, like NP or max·P.

For multiobjective problems, some classes of computational tasks turn out to be equivalent to the function class NPMV_g, some to the class of decision problems NP, and some to a seemingly new class that includes both NPMV_g and NP. Under the assumption that certain exponential time classes are different, we show that there are computational tasks of multiobjective problems (more precisely functions in NPMV_g) that are Turing-inequivalent to any set.

Krzysztof Fleszar, Christian Glaßer, Fabian Lipp, Christian Reitwießner, Maximilian Witek
k-Gap Interval Graphs

We initiate the study of a new parameterization of graph problems. In a multiple interval representation of a graph, each vertex is associated with at least one interval of the real line, with an edge between two vertices if and only if an interval associated with one vertex has a nonempty intersection with an interval associated with the other vertex. A graph on n vertices is a k-gap interval graph if it has a multiple interval representation with at most n + k intervals in total. In order to scale up the nice algorithmic properties of interval graphs (where k = 0), we parameterize graph problems by k, and find FPT algorithms for several problems, including Feedback Vertex Set, Dominating Set, Independent Set, Clique, Clique Cover, and Multiple Interval Transversal. The Coloring problem turns out to be W[1]-hard and we design an XP algorithm for the recognition problem.

Fedor V. Fomin, Serge Gaspers, Petr Golovach, Karol Suchan, Stefan Szeider, Erik Jan van Leeuwen, Martin Vatshelle, Yngve Villanger
Decidability Classes for Mobile Agents Computing

We establish a classification of decision problems that are to be solved by mobile agents operating in unlabeled graphs, using a deterministic protocol. The classification is with respect to the ability of a team of agents to solve the problem, possibly with the aid of additional information. In particular, our focus is on studying differences between the decidability of a decision problem by agents and its verifiability when a certificate for a positive answer is provided to the agents. Our main result shows that there exists a natural complete problem for mobile agent verification. We also show that, for a single agent, three natural oracles yield a strictly increasing chain of relative decidability classes.

Pierre Fraigniaud, Andrzej Pelc
NE Is Not NP Turing Reducible to Nonexponentially Dense NP Sets

A long-standing open problem in computational complexity theory is to separate NE from BPP, which is a subclass of NP_T(NP) ∩ P/Poly. In this paper, we show that NE ⊄ NP_T(NP ∩ Nonexponentially-Dense-Class), where Nonexponentially-Dense-Class is the class of languages A without exponential density (for each constant c > 0, |A^{≤ n}| ≤ 2^{n^c} for infinitely many integers n). Our result implies NE ⊄ NP_T(Padding(NP, g(n))) for every time-constructible super-polynomial function g(n) such as $g(n)=n^{\left\lceil\log\left\lceil\log n\right\rceil \right\rceil }$, where Padding(NP, g(n)) is the class of all languages L_B = {s10^{g(|s|) − |s| − 1} : s ∈ B} for B ∈ NP. We also show NE ⊄ NP_T(P_tt(NP) ∩ TALLY).
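The padding map defining Padding(NP, g(n)) is concrete enough to write down. A minimal sketch of the map s ↦ s10^{g(|s|) − |s| − 1}; the particular choice of g below is purely for illustration:

```python
def pad(s, g):
    """Map s to s 1 0^{g(|s|) - |s| - 1}, the padded form defining
    L_B = { pad(s, g) : s in B } for B in NP."""
    n = len(s)
    assert g(n) > n  # the padding length must be non-negative
    return s + "1" + "0" * (g(n) - n - 1)

# With g(n) = n^2, every padded string has length exactly g(|s|),
# and s is recoverable by stripping trailing zeros and one 1.
```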

Bin Fu
Efficient Arbitrary and Resolution Proofs of Unsatisfiability for Restricted Tree-Width

We consider unsatisfiable Boolean formulas in conjunctive normal form. It is known that unsatisfiability can be shown by a regular resolution proof in time polynomial in the number of variables n and exponential in the tree-width w. It is also known that satisfiability for bounded tree-width can actually be decided in time linear in the length of the formula and exponential in the tree-width w.

We investigate the complexities of resolution proofs and arbitrary proofs in more detail, depending on the number of variables n and the tree-width w. We present two very traditional algorithms, one based on resolution and the other based on truth tables. Maybe surprisingly, we point out that the two algorithms turn out to be basically the same algorithm with different interpretations.

Similar results hold for a bound w′ on the tree-width of the incidence graph for a somewhat extended notion of a resolution proof. The length of any proper resolution proof might be quadratic in n, but if we allow the introduction of abbreviations for frequently occurring subclauses, then the proof length and running time are again linear in n.

Martin Fürer
Indexed Multi-pattern Matching

If we want to search sequentially for occurrences of many patterns in a given text, then we can apply any of dozens of multi-pattern matching algorithms in the literature. As far as we know, however, no one has said what to do if we are given a compressed self-index for the text instead of the text itself. In this paper we show how to take advantage of similarities between the patterns to speed up searches in an index. For example, we show how to store a string S[1..n] in nH_k(S) + o(n(H_k(S) + 1)) bits such that, given the LZ77 parse of the concatenation of t patterns of total length ℓ and maximum individual length m, we can count the occurrences of each pattern in a total of $\mathcal{O}((z + t) \log \ell \log m \log^{1 + \epsilon} n)$ time, where z is the number of phrases in the parse.

Travis Gagie, Kalle Karhu, Juha Kärkkäinen, Veli Mäkinen, Leena Salmela, Jorma Tarhio
New Lower Bound on Max Cut of Hypergraphs with an Application to r -Set Splitting

A classical result by Edwards states that every connected graph G on n vertices and m edges has a cut of size at least $\frac{m}{2}+\frac{n-1}{4}$. We generalize this result to r-hypergraphs, with a suitable notion of connectivity that coincides with the notion of connectivity on graphs for r = 2. More precisely, we show that for every “partition connected” r-hypergraph (every hyperedge is of size at most r) H over a vertex set V(H) and edge set E(H) = {e_1, e_2, …, e_m}, there always exists a 2-coloring of V(H) with {1, − 1} such that the number of hyperedges that have a vertex assigned 1 as well as a vertex assigned − 1 (or get “split”) is at least $\mu_H+\frac{n-1}{r2^{r-1}}$. Here $\mu_H=\sum_{i=1}^{m}(1- 2/2^{|e_i|})=\sum_{i=1}^{m}(1- 2^{1-|e_i|})$. We use our result to show that a version of r-Set Splitting, namely, Above Average r-Set Splitting (AA-r-SS), is fixed-parameter tractable (FPT). Observe that a random 2-coloring that sets each vertex of the hypergraph H to 1 or − 1 with equal probability always splits at least μ_H hyperedges. In AA-r-SS, we are given an r-hypergraph H and a positive integer k, and the question is whether there exists a 2-coloring of V(H) that splits at least μ_H + k hyperedges. We give an algorithm for AA-r-SS that runs in time f(k)n^{O(1)}, showing that it is FPT, even when r = c_1 log n, for every fixed constant c_1 < 1. Prior to our work, AA-r-SS was known to be FPT only for constant r. We also complement our algorithmic result by showing that unless NP ⊆ DTIME(n^{log log n}), AA-⌈log n⌉-SS is not in XP.

Archontia C. Giannopoulou, Sudeshna Kolay, Saket Saurabh
Cache Me If You Can: Capacitated Selfish Replication Games

Motivated by peer-to-peer (P2P) networks and content delivery applications, we study Capacitated Selfish Replication (CSR) games, which involve nodes on a network making strategic choices regarding the content to replicate in their caches. Selfish replication games were introduced in [6], where the uncapacitated case was analyzed, leaving the capacitated version as an open direction.

In this work, we study pure Nash equilibria of CSR games with an emphasis on hierarchical networks, which have been extensively used to model communication costs of content delivery and P2P systems. The best result from previous work on CSR games for hierarchical networks [19,23] is the existence of a Nash equilibrium for a (slight generalization of a) 1-level hierarchy when the utility function is based on the sum of the costs of accessing the replicated objects in the network. Our main result is an exact polynomial-time algorithm for finding a Nash equilibrium in any hierarchical network using a new technique which we term “fictional players”. We show that this technique extends to a general framework of natural preference orders, orders that are entirely arbitrary except for two constraints — “Nearer is better” and “Independence of irrelevant alternatives”. This axiomatic treatment captures a vast class of utility functions and even allows for nodes to simultaneously have utility functions of completely different functional forms.

Using our axiomatic framework, we next study CSR games on arbitrary networks and delineate the boundary between intractability and effective computability in terms of the network structure, object preferences, and number of objects. In addition to hierarchical networks, we show the existence of equilibria for general undirected networks when either object preferences are binary or there are two objects. For general CSR games, however, we show that it is NP-hard to determine whether equilibria exist. We also show that the existence of equilibria in strongly connected networks with two objects and binary object preferences can be decided in polynomial time via a reduction to the well-studied even-cycle problem.

Ragavendran Gopalakrishnan, Dimitrios Kanoulas, Naga Naresh Karuturi, C. Pandu Rangan, Rajmohan Rajaraman, Ravi Sundaram
The Efficiency of MapReduce in Parallel External Memory

Since its introduction in 2004, the MapReduce framework has become one of the standard approaches in massive distributed and parallel computation. In contrast to its intensive use in practice, its theoretical footing is still limited, and little work has been done so far to put MapReduce on a par with the major computational models. Following pioneering work that relates the MapReduce framework with PRAM and BSP in their macroscopic structure, we focus on the functionality provided by the framework itself, considered in the parallel external memory (PEM) model. In this, we present upper and lower bounds on the parallel I/O-complexity that are matching up to constant factors for the shuffle step. The shuffle step is the single communication phase where all information of one MapReduce invocation gets transferred from map workers to reduce workers. Hence, we move the focus towards the internal communication step in contrast to previous work. The results we obtain further carry over to the BSP* model. On the one hand, this shows how much complexity can be “hidden” for an algorithm expressed in MapReduce compared to PEM. On the other hand, our results bound the worst-case performance loss of the MapReduce approach in terms of I/O-efficiency.

Gero Greiner, Riko Jacob
Algorithms for Some H-Join Decompositions

A homogeneous pair (also known as a 2-module) of a graph is a pair {M_1, M_2} of disjoint vertex subsets such that for every vertex x ∉ (M_1 ∪ M_2) and i ∈ {1,2}, x is either adjacent to all vertices in M_i or to none of them. First used in the context of perfect graphs [Chvátal and Sbihi 1987], it is a generalization of splits (a.k.a. 1-joins) and of modules. The algorithmics to compute them appears quite involved. In this paper, we describe an O(mn^2)-time algorithm computing a homogeneous pair (if any), which not only improves a previous bound of O(mn^3) [Everett, Klein and Reed 1997], but also uses a nice structural property of homogeneous pairs. Our result can be extended to compute the whole homogeneous pair decomposition tree, within the same complexity. Using similar ideas, we present an O(nm^2)-time algorithm to compute an N-join decomposition of a graph, improving a previous O(n^6) algorithm [Feder et al. 2005]. These two decompositions are special cases of H-joins [Bui-Xuan, Telle and Vatshelle 2010], to which our techniques apply.

Michel Habib, Antoine Mamcarz, Fabien de Montgolfier
On the Bend-Number of Planar and Outerplanar Graphs

The bend-number b(G) of a graph G is the minimum k such that G may be represented as the edge intersection graph of a set of grid paths with at most k bends. We confirm a conjecture of Biedl and Stern showing that the maximum bend-number of outerplanar graphs is 2. Moreover we improve the formerly known lower and upper bounds for the maximum bend-number of planar graphs from 2 and 5 to 3 and 4, respectively.

Daniel Heldt, Kolja Knauer, Torsten Ueckerdt
Hiring above the m-th Best Candidate: A Generalization of Records in Permutations

The hiring problem is a simple model of on-line decision-making under uncertainty. It is closely related to the well-known secretary problem, formally introduced in the early sixties. Broder et al. (2008) introduced the hiring problem as an extension of the secretary problem: instead of selecting only one candidate, we want to select (hire) many candidates to grow a small company. In this context, a hiring strategy should meet two demands: to hire candidates at some reasonable rate and to improve the average quality of the hired staff. Soon afterwards, Archibald and Martínez (2009) introduced a discrete model of the hiring problem where candidates seen so far can be ranked from best to worst without the need to know their absolute quality scores. Hence the sequence of candidates can be modeled as a random permutation. Two general families of hiring strategies were introduced: hiring above the m-th best candidate and hiring in the top P% quantile (for instance, P = 50 is hiring above the median). In this paper we consider only hiring above the m-th best candidate. We introduce new hiring parameters that describe the dynamics of the hiring process, like the distance between the last two hirings, and the quality of the hired staff, like the score of the best discarded candidate. While Archibald and Martínez made systematic use of analytic combinatorics techniques (Flajolet, Sedgewick, 2008) in their analysis, we use here a different approach to study the various hiring parameters associated with the hiring process. We are able to obtain explicit formulas for the probability distribution or the probability generating function of the random variables of interest in a rather direct way. The explicit nature of our results also allows a very detailed study of their asymptotic behaviour. Adding our new results to those of Archibald and Martínez leads to a very precise quantitative characterization of the hiring above the m-th best candidate strategy. This might prove very useful in applications of the hiring process, e.g., in data stream algorithms.
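A minimal sketch of the "hiring above the m-th best candidate" strategy on a sequence of relative quality scores. The convention that the first m candidates are hired unconditionally is our reading of the strategy, and numeric scores stand in for the ranks:

```python
def hire_above_mth_best(scores, m):
    """Hire a candidate iff fewer than m candidates are hired so far,
    or the candidate beats the m-th best hired so far
    (higher score = better)."""
    best = []    # scores of hired candidates, sorted descending
    hired = []   # hired scores in arrival order
    for s in scores:
        if len(best) < m or s > best[m - 1]:
            hired.append(s)
            best.append(s)
            best.sort(reverse=True)
    return hired
```

For m = 1 the strategy degenerates to hiring exactly the left-to-right maxima (records) of the permutation, which is the connection to records mentioned in the title.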

Ahmed Helmi, Conrado Martínez, Alois Panholzer
On the Performance of Smith’s Rule in Single-Machine Scheduling with Nonlinear Cost

We consider the problem of scheduling jobs on a single machine. Given some continuous cost function, we aim to compute a schedule minimizing the weighted total cost, where the cost of each individual job is determined by the cost function value at the job’s completion time. This problem is closely related to scheduling a single machine with nonuniform processing speed. We show that for piecewise linear cost functions it is strongly NP-hard.

The main contribution of this article is a tight analysis of the approximation factor of Smith’s rule under any particular convex or concave cost function. More specifically, for these wide classes of cost functions we reduce the task of determining a worst case problem instance to a continuous optimization problem, which can be solved by standard algebraic or numerical methods. For polynomial cost functions with positive coefficients it turns out that the tight approximation ratio can be calculated as the root of a univariate polynomial. To overcome unrealistic worst case instances, we also give tight bounds that are parameterized by the minimum, maximum, and total processing time.
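Smith's rule itself is simple: process jobs in nonincreasing order of weight over processing time. The sketch below applies it under an arbitrary cost function f of the completion time, mirroring the setting above; the (weight, processing_time) job encoding is an illustrative assumption:

```python
def smiths_rule_cost(jobs, f):
    """Schedule jobs on one machine by Smith's rule (nonincreasing w/p)
    and return the weighted total cost sum_j w_j * f(C_j), where C_j is
    the completion time of job j in that order."""
    order = sorted(jobs, key=lambda wp: wp[0] / wp[1], reverse=True)
    t = total = 0.0
    for w, p in order:
        t += p                # the job completes at time t
        total += w * f(t)
    return total

# For the linear cost f(t) = t, Smith's rule is the classical optimum;
# for convex or concave f it is only approximate, as the paper analyzes.
```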

Wiebke Höhn, Tobias Jacobs
Advantage of Overlapping Clusters for Minimizing Conductance

Graph clustering is an important problem with applications to bioinformatics, community discovery in social networks, distributed computing, etc. While most of the research in this area has focused on clustering using disjoint clusters, many real datasets have inherently overlapping clusters. We compare overlapping and non-overlapping clusterings in graphs in the context of minimizing their conductance. It is known that allowing clusters to overlap gives better results in practice. We prove that overlapping clustering may be significantly better than non-overlapping clustering with respect to conductance, even in a theoretical setting.

For minimizing the maximum conductance over the clusters, we give examples demonstrating that allowing overlaps can yield significantly better clusterings, namely, ones with a much smaller optimum. In addition, for the min-max variant, the overlapping version admits a simple approximation algorithm, while our algorithm for the non-overlapping version is complex and yields a worse approximation ratio due to the presence of the additional constraint. Somewhat surprisingly, for the problem of minimizing the sum of conductances, we find that allowing overlap does not really help. We show how to apply a general technique to transform any overlapping clustering into a non-overlapping one with only a modest increase in the sum of conductances. This uncrossing technique is of independent interest and may find further applications in the future.

Rohit Khandekar, Guy Kortsarz, Vahab Mirrokni
Independence of Tabulation-Based Hash Classes

A tabulation-based hash function maps a key into multiple derived characters which index random values in tables that are then combined with bitwise exclusive-or operations to give the hashed value. Thorup and Zhang [9] presented tabulation-based hash classes that use linear maps over finite fields to map keys of the form (a, b) (composed of two characters, a and b, of equal length) to d derived characters in order to achieve d-wise independence. We present a variant in which d derived characters a + b·i, for i = 0, …, d − 1 (where arithmetic is over the integers), are shown to yield (2d − 1)-wise independence. Thus, to achieve guaranteed k-wise independence for k ≥ 6, our method reduces by about half the number of probes needed into the tables compared to Thorup and Zhang (they presented a different specialized scheme to give 4-wise [9] and 5-wise [10] independence).

Our analysis is based on an algebraic property that characterizes k-wise independence of tabulation-based hashing schemes, and combines this characterization with a geometric argument. We also prove a non-trivial lower bound on the number of derived characters necessary for k-wise independence with our and related hash classes.
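A toy version of the derived-character scheme above: the key (a, b) yields d derived characters a + b·i over the integers, each indexing its own table of random words, combined by XOR. Table sizes and the particular parameter choices here are illustrative assumptions:

```python
import random

def make_tab_hash(char_bits=8, d=3, out_bits=32, seed=0):
    """Build a tabulation-based hash for keys (a, b) of char_bits each,
    using derived characters a + b*i for i = 0, ..., d - 1."""
    rng = random.Random(seed)
    # a + b*i is at most (2^char_bits - 1) * d, attained at i = d - 1.
    table_size = ((1 << char_bits) - 1) * d + 1
    tables = [[rng.getrandbits(out_bits) for _ in range(table_size)]
              for _ in range(d)]

    def h(a, b):
        out = 0
        for i in range(d):
            out ^= tables[i][a + b * i]   # probe for derived char a + b*i
        return out

    return h
```

Note that the tables must be sized for the integer (non-wrapping) sums a + b·i, since the arithmetic is over the integers rather than a finite field.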

Toryn Qwyllyn Klassen, Philipp Woelfel
Oblivious Two-Way Finite Automata: Decidability and Complexity

We investigate the descriptional complexity and decidability of obliviousness for two-way finite automata. In particular, we consider the simulation of two-way deterministic finite automata (2DFAs) by oblivious 2DFAs, the simulation of oblivious 2DFAs by sweeping 2DFAs and one-way nondeterministic finite automata (1NFAs), as well as the simulation of sweeping 2DFAs by 1NFAs. In all cases exponential upper and lower bounds on the number of states are obtained for languages over an alphabet with at most four letters. Moreover, it is shown that obliviousness is decidable for 2DFAs.

Martin Kutrib, Andreas Malcher, Giovanni Pighizzini
Clique-Colouring and Biclique-Colouring Unichord-Free Graphs

The class of unichord-free graphs was recently investigated in the context of vertex-colouring [J. Graph Theory 63 (2010) 31–67], edge-colouring [Theoret. Comput. Sci. 411 (2010) 1221–1234] and total-colouring [Discrete Appl. Math. 159 (2011) 1851–1864]. Unichord-free graphs proved to have a rich structure that can be used to obtain interesting results with respect to the study of the complexity of colouring problems. In particular, several surprising complexity dichotomies of colouring problems are found in subclasses of unichord-free graphs. In the present work, we investigate clique-colouring and biclique-colouring problems restricted to unichord-free graphs. We show that the clique-chromatic number of a unichord-free graph is at most 3, and that the 2-clique-colourable unichord-free graphs are precisely those that are perfect. We prove that the biclique-chromatic number of a unichord-free graph is at most its clique number. We describe an O(nm)-time algorithm that returns an optimal clique-colouring, but the complexity of optimally biclique-colouring a unichord-free graph is not yet classified. Nevertheless, we describe an O(n^2)-time algorithm that returns an optimal biclique-colouring in a subclass of unichord-free graphs called cacti.

Hélio B. Macêdo Filho, Raphael C. S. Machado, Celina M. H. Figueiredo
Random Walks and Bisections in Random Circulant Graphs

Using number-theoretical tools, we prove two main results for random r-regular circulant graphs with n vertices, when n is sufficiently large and r is fixed. First, for any fixed ε > 0, prime n and L ≥ n^{1/r}(log n)^{1 + 1/r + ε}, walks of length at most L terminate at every vertex with asymptotically the same probability. Second, for any n, there is a polynomial-time algorithm to find a vertex bisector and an edge bisector, both of size less than n^{1 − 1/r + o(1)}.

As circulant graphs are popular network topologies in distributed computing, we show that our results can be exploited for various information dissemination schemes. In particular, we provide lower bounds on the number of rounds required by any gossiping algorithm for any n. This settles an open question in an earlier work of the authors (2004) and shows that the generic gossiping algorithms of that work are nearly optimal.

Bernard Mans, Igor E. Shparlinski
The Feedback Arc Set Problem with Triangle Inequality Is a Vertex Cover Problem

We consider the (precedence constrained) Minimum Feedback Arc Set problem with triangle inequalities on the weights, which finds important applications in problems of ranking with inconsistent information. We present a surprising structural insight showing that the problem is a special case of the minimum vertex cover in hypergraphs with edges of size at most 3. This result leads to combinatorial approximation algorithms for the problem and opens the road to studying the problem as a vertex cover problem.

Monaldo Mastrolilli
Fully Analyzing an Algebraic Pólya Urn Model

This paper introduces and analyzes a particular class of Pólya urns: balls are of two colors, can only be added (the urns are said to be additive), and at every step the same constant number of balls is added, so that only the color composition varies (the urns are said to be balanced). These properties make this class of urns ideally suited for analysis from an “analytic combinatorics” point of view, following in the footsteps of Flajolet et al. [4]. Through an algebraic generating function to which we apply a multiple coalescing saddle-point method, we are able to give precise asymptotic results for the probability distribution of the composition of the urn, as well as local limit laws and large deviation bounds.

Basile Morcrette
Degree-Constrained Node-Connectivity

We give a general framework to handle node-connectivity degree-constrained problems. In particular, for the k-Outconnected Subgraph problem, for both directed and undirected graphs, our algorithm computes in polynomial time a solution J of cost O(log k) times the optimal, such that deg_J(v) = O(2^k)·b(v) for all v ∈ V. Similar results are obtained for the Element-Connectivity and the k-Connected Subgraph problems. The latter is a significant improvement on the particular case of only degree-approximation and undirected graphs considered in [5]. In addition, for the edge-connectivity directed Degree-Constrained k-Outconnected Subgraph problem, we slightly improve the best known approximation ratio, by a simple proof.

Zeev Nutov
Survivable Network Activation Problems

In the Survivable Network Activation problem we are given a graph G = (V, E), a set S ⊆ V, a family {f_uv(x_u, x_v) : uv ∈ E} of monotone non-decreasing activating functions from $\mathbb{R}^2_+$ to {0,1} each, and connectivity requirements {r(u,v) : u, v ∈ V}. The goal is to find a weight assignment w = {w_v : v ∈ V} of minimum total weight w(V) = ∑_{v ∈ V} w_v, such that for all u, v ∈ V, the activated graph G_w = (V, E_w), where E_w = {uv : f_uv(w_u, w_v) = 1}, contains r(u,v) pairwise edge-disjoint uv-paths such that no two of them have a node in S ∖ {u,v} in common. This problem was suggested recently by Panigrahi [12], generalizing the Node-Weighted Survivable Network and the Minimum-Power Survivable Network problems, as well as several other problems with motivation in wireless networks. We give new approximation algorithms for this problem.

For undirected/directed graphs, our ratios are O(k log n) for k-Out/In-connected Subgraph Activation and k-Connected Subgraph Activation. For directed graphs this solves a question from [12] for k = 1, while for the min-power case and arbitrary k this solves an open question from [9]. For other versions on undirected graphs, our ratios match the best known ones for the Node-Weighted Survivable Network problem [8].

Zeev Nutov
On the Integrality Gap of the Subtour LP for the 1,2-TSP

In this paper, we study the integrality gap of the subtour LP relaxation for the traveling salesman problem in the special case when all edge costs are either 1 or 2. For the general case of symmetric costs that obey the triangle inequality, a famous conjecture is that the integrality gap is 4/3. Little progress towards resolving this conjecture has been made in thirty years. We conjecture that when all edge costs c_ij ∈ {1,2}, the integrality gap is 10/9. We show that this conjecture is true when the optimal subtour LP solution has a certain structure. Under a weaker assumption, which is an analog of a recent conjecture by Schalekamp, Williamson and van Zuylen, we show that the integrality gap is at most 7/6. When we do not make any assumptions on the structure of the optimal subtour LP solution, we can show that the integrality gap is at most 19/15 ≈ 1.267 < 4/3; this is the first bound on the integrality gap of the subtour LP strictly less than 4/3 known for an interesting special case of the TSP.

Jiawei Qian, Frans Schalekamp, David P. Williamson, Anke van Zuylen
A Theory and Algorithms for Combinatorial Reoptimization

Many real-life applications involve systems that change dynamically over time. Thus, throughout the continuous operation of such a system, it is required to compute solutions for new problem instances, derived from previous instances. Since the transition from one solution to another incurs some cost, a natural goal is to have the solution for the new instance close to the original one (under a certain distance measure).

In this paper we develop a general model for combinatorial reoptimization, encompassing classical objective functions as well as the goal of minimizing the transition cost from one solution to the other. Formally, we say that ${\cal A}$ is an (r, ρ)-reapproximation algorithm if it achieves a ρ-approximation for the optimization problem, while paying a transition cost that is at most r times the minimum required for solving the problem optimally. When ρ = 1 we get an (r, 1)-reoptimization algorithm.

Using our model we derive reoptimization and reapproximation algorithms for several important classes of optimization problems. This includes fully polynomial-time reapproximation schemes for DP-benevolent problems, a class introduced by Woeginger (Proc. Tenth ACM-SIAM Symposium on Discrete Algorithms, 1999), reapproximation algorithms for metric Facility Location problems, and (1,1)-reoptimization algorithms for polynomially solvable subset-selection problems.

Thus, we distinguish here for the first time between classes of reoptimization problems, by their hardness status with respect to minimizing transition costs while guaranteeing a good approximation for the underlying optimization problem.

Hadas Shachnai, Gal Tamir, Tami Tamir
Capacity Achieving Two-Write WOM Codes

In this paper we give several new constructions of WOM codes. The novelty in our constructions is the use of the so-called Wozencraft ensemble of linear codes. Specifically, we obtain the following results.

We give an explicit construction of a two-write Write-Once-Memory (WOM for short) code that approaches capacity, over the binary alphabet. More formally, for every ε > 0, 0 < p < 1 and n = (1/ε)^{O(1/ε)}, we give a construction of a two-write WOM code of length n and capacity H(p) + 1 − p − ε. Since the capacity of a two-write WOM code is max_p (H(p) + 1 − p), we get a code that is ε-close to capacity. Furthermore, encoding and decoding can be done in time O(n^2·poly(log n)) and time O(n·poly(log n)), respectively, and in logarithmic space.

We highlight a connection to linear seeded extractors for bit-fixing sources. In particular we show that obtaining such an extractor with seed length O(log n) can lead to improved parameters for 2-write WOM codes.

Amir Shpilka
The Relationship between Inner Product and Counting Cycles

Cycle-Counting is the following communication complexity problem: Alice and Bob each hold a permutation of size n, with the promise that there will be either a cycles or b cycles in their product. They want to distinguish between these two cases by communicating a few bits. We show that the quantum/nondeterministic communication complexity is roughly $\tilde \Omega((n-b)/(b-a))$ when $a \equiv b \pmod 2$. It is proved by reduction from a variant of the inner product problem over ℤ_m. It constructs a bridge for various problems, including In-Same-Cycle [10], One-Cycle [14], and Bipartiteness on constant-degree graphs [9]. We also give space lower bounds in the streaming model for the Connectivity, Bipartiteness and Girth problems [7]. The inner product variant we used has a quantum lower bound of Ω(n log p(m)), where p(m) is the smallest prime factor of m. It implies that our lower bounds for Cycle-Counting and related problems still hold for quantum protocols, which was not known before this work.

Xiaoming Sun, Chengu Wang, Wei Yu
Approximating Minimum Label s-t Cut via Linear Programming

We consider the Minimum Label s-t Cut problem. Given an undirected graph G = (V, E) with a label set L, in which each edge has a label from L, and a source s ∈ V together with a sink t ∈ V, the goal of the Minimum Label s-t Cut problem is to pick a subset of labels of minimum cardinality, such that the removal of all edges with these labels from G disconnects s and t. We present a min{O((m/OPT)^{1/2}), O(n^{2/3}/OPT^{1/3})}-approximation algorithm for the Minimum Label s-t Cut problem using linear programming techniques, where n = |V|, m = |E|, and OPT is the optimal value of the input instance. This result improves the previously best known approximation ratio O(m^{1/2}) for this problem (Zhang et al., JOCO 21(2), 192–208 (2011)), and gives the first approximation ratio for this problem in terms of n. Moreover, we show that our linear program relaxation for the Minimum Label s-t Cut problem, even in a stronger form, has integrality gap Ω((m/OPT)^{1/2 − ε}).

Linqing Tang, Peng Zhang
Backmatter
Metadata
Title
LATIN 2012: Theoretical Informatics
Edited by
David Fernández-Baca
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-29344-3
Print ISBN
978-3-642-29343-6
DOI
https://doi.org/10.1007/978-3-642-29344-3