
About this Book

This volume constitutes the refereed proceedings of the 37th International Symposium on Mathematical Foundations of Computer Science, MFCS 2012, held in Bratislava, Slovakia, in August 2012. The 63 revised full papers presented together with 8 invited talks were carefully reviewed and selected from 162 submissions. Topics covered include algorithmic game theory, algorithmic learning theory, algorithms and data structures, automata, formal languages, bioinformatics, complexity, computational geometry, computer-assisted reasoning, concurrency theory, databases and knowledge-based systems, foundations of computing, logic in computer science, models of computation, semantics and verification of programs, and theoretical issues in artificial intelligence.

Table of Contents

Frontmatter

On the Complexity of Ontological Reasoning under Disjunctive Existential Rules

Ontology-based data access is an emerging yet powerful technology that allows one to enhance a classical relational database with an ontology in order to infer new intensional knowledge. Recently, Datalog+/- was introduced with the purpose of providing tractable reasoning algorithms for expressive ontology languages. In this framework, Datalog is extended by features such as existential quantification in rule heads, and at the same time the rule syntax is restricted to guarantee decidability, and also tractability, of relevant reasoning tasks. In this paper, we enrich Datalog even further by allowing not only existential quantification but also disjunction in rule heads, and we investigate the complexity of reasoning in the resulting formalism.

Georg Gottlob, Marco Manna, Michael Morak, Andreas Pieris

New Races in Parameterized Algorithmics

Once having classified an NP-hard problem fixed-parameter tractable with respect to a certain parameter, the race for the most efficient fixed-parameter algorithm starts. Herein, the attention usually focuses on improving the running time factor exponential in the considered parameter, and, in case of kernelization algorithms, to improve the bound on the kernel size. Both from a practical as well as a theoretical point of view, however, there are further aspects of efficiency that deserve attention. We discuss several of these aspects and particularly focus on the search for “stronger parameterizations” in developing fixed-parameter algorithms.

Christian Komusiewicz, Rolf Niedermeier

Scott Is Always Simple

In this paper we give an outline of recent algebraic results concerning theories and models of the untyped lambda calculus.

Antonino Salibra

A Toolkit for Proving Limitations of the Expressive Power of Logics

Zero-one laws, Ehrenfeucht-Fraïssé games, locality results, and logical reductions belong to the, by now, standard methods of Finite Model Theory, used for showing non-expressibility in certain logics (cf., e.g., the textbooks [1,2] or the entries in the Encyclopedia of Database Systems [3]).

More recently, the close connections between logic and circuits, along with strong lower bound results obtained in circuit complexity, have led to new lower bounds on the expressiveness of logics (cf., e.g., [4,5,6,7]). In particular, [4] solved a long-standing open question of Finite Model Theory, asking about the strictness of the bounded variable hierarchy of first-order logic on finite ordered graphs.

Nicole Schweikardt

How to Reconstruct a Genome

Since its early formulations (e.g., [1]), the genome assembly problem has attracted much interest from an algorithm-theoretic as well as an algorithm-engineering point of view. In this problem, which is an inversion problem by nature, one is asked to reconstruct the entire DNA sequence from the short, randomly picked sequence fragments that a DNA sequencing instrument is able to read [2]. With the advent of current high-throughput sequencers producing such fragment reads in massive amounts, there is in molecular biology research a pronounced call for an accurate and fast reconstruction procedure.

It is customary to structure a reconstruction procedure into the following major steps: (1) Error correction of the fragments; (2) Finding pairwise overlaps between the fragments and representing the overlaps as a graph; (3) Constructing approximate superstrings, called contigs, for the fragments; (4) Constructing a linear order, called a scaffold, of the contigs. All steps are algorithmically challenging. Noisy data and the intricate repetition structure of the target genome cause added difficulties. The talk attempts to give an overall picture of the genome assembly process and its algorithmic aspects, emphasizing some recent developments in error correction [3], contig assembly, and scaffolding [4]. We also try to convey experiences from a major undertaking of de novo sequencing of a higher organism, the Glanville fritillary butterfly Melitaea cinxia. (A collaboration with I. Hanski, www.helsinki.fi/science/metapop/index.htm).
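Step (2) of the pipeline above, finding pairwise overlaps and representing them as a graph, can be sketched as follows. This is a naive exact suffix-prefix comparison; the function names and the minimum-overlap threshold are illustrative assumptions, and real assemblers use index structures and tolerate sequencing errors:

```python
def overlap(a, b, min_len=3):
    # Length of the longest suffix of fragment a that equals a prefix
    # of fragment b, requiring at least min_len characters.
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def overlap_graph(frags, min_len=3):
    # Directed, weighted overlap graph on the fragments (step 2).
    g = {}
    for a in frags:
        for b in frags:
            if a != b:
                k = overlap(a, b, min_len)
                if k:
                    g[(a, b)] = k
    return g
```

Contigs (step 3) would then roughly correspond to paths of heavy edges in such a graph.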

Esko Ukkonen

Simple Models for Recursive Schemes

Higher-order recursive schemes are abstract forms of programs where the meaning of built-in constructs is not specified. The semantics of a scheme is an infinite tree labeled with built-in constructs. The research on recursive schemes spans more than forty years. Still, central problems like the equality problem and, more recently, the model checking problem for schemes remain very intriguing. Even though recursive schemes were originally thought of as a syntactic simplification of a fragment of the lambda calculus, we propose to go back to the lambda calculus to study schemes. In particular, for the model checking problem we propose to use standard finitary models for the simply-typed lambda calculus.

Igor Walukiewicz

Transportation under Nasty Side Constraints

The talk discusses planning problems where a set of items has to be transported from location A to location B subject to certain collision and/or resource constraints. We analyze the behavior of these problems, discuss their history, and derive some of their combinatorial and algorithmic properties.

Gerhard J. Woeginger

Computation of Least Fixed Points

Many problems from different areas can be formulated as problems of computing a fixed point of a suitable function. Classical examples include the computation of equilibria for games, price equilibria for markets, and many others. There has been significant progress in understanding better the computational nature of such problems and characterizing their complexity in terms of classes like FIXP, which captures the complexity of the computation of fixed points for general (nonlinear) algebraic functions, with the 3-player Nash equilibrium problem as a prototypical example, and PPAD for the computation of fixed points for piecewise linear functions, with the 2-player Nash equilibrium problem as a prototypical example.

Mihalis Yannakakis

Unordered Constraint Satisfaction Games

We consider two-player constraint satisfaction games on systems of Boolean constraints, in which the players take turns in selecting one of the available variables and setting it to true or false, with the goal of maximising (for Player I) or minimising (for Player II) the number of satisfied constraints. Unlike in standard QBF-type variable assignment games, we impose no order in which the variables are to be played. This makes the game setup more natural, but also more challenging to control. We provide polynomial-time, constant-factor approximation strategies for Player I when the constraints are parity functions or threshold functions with a threshold that is small compared to the arity of the constraints. Also, we prove that the problem of determining if Player I can satisfy all constraints is PSPACE-complete even in this unordered setting, and when the constraints are disjunctions of at most 6 literals (an unordered-game analogue of 6-QBF).

Lauri Ahlroth, Pekka Orponen

A Polynomial-Time Algorithm for Computing the Maximum Common Subgraph of Outerplanar Graphs of Bounded Degree

This paper considers the maximum common subgraph problem: find a connected graph with the maximum number of edges that is isomorphic to a subgraph of each of the two input graphs. We present a dynamic programming algorithm for computing the maximum common subgraph of two outerplanar graphs whose maximum vertex degree is bounded by a constant; the problem is known to be NP-hard even for outerplanar graphs of unbounded degree. Although the algorithm repeatedly modifies the input graphs, we show that the number of relevant subproblems is polynomially bounded and thus the algorithm works in polynomial time.

Tatsuya Akutsu, Takeyuki Tamura

Reductions to the Set of Random Strings: The Resource-Bounded Case

This paper is motivated by a conjecture [1,5] that BPP can be characterized in terms of polynomial-time nonadaptive reductions to the set of Kolmogorov-random strings. In this paper we show that an approach laid out in [5] to settle this conjecture cannot succeed without significant alteration, but that it does bear fruit if we consider time-bounded Kolmogorov complexity instead.

We show that if a set A is reducible in polynomial time to the set of time-t-bounded Kolmogorov-random strings (for all large enough time bounds t), then A is in P/poly, and that if in addition such a reduction exists for any universal Turing machine one uses in the definition of Kolmogorov complexity, then A is in PSPACE.

Eric Allender, Harry Buhrman, Luke Friedman, Bruno Loff

Approximate Graph Isomorphism

We study optimization versions of Graph Isomorphism. Given two graphs G_1, G_2, we are interested in finding a bijection π from V(G_1) to V(G_2) that maximizes the number of matches (edges mapped to edges or non-edges mapped to non-edges). We give an n^{O(log n)} time approximation scheme that, for any constant factor α < 1, computes an α-approximation. We prove this by combining the n^{O(log n)} time additive error approximation algorithm of Arora et al. [Math. Program., 92, 2002] with a simple averaging algorithm. We also consider the corresponding minimization problem (of mismatches) and prove that it is NP-hard to α-approximate for any constant factor α. Further, we show that it is also NP-hard to approximate the maximum number of edges mapped to edges beyond a factor of 0.94.

We also explore these optimization problems for bounded color class graphs, a well-studied tractable special case of Graph Isomorphism. Surprisingly, the bounded color class case turns out to be harder than the uncolored case in the approximate setting.
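For concreteness, the number of matches being maximized can be computed as follows. This is an illustrative checker, not the paper's approximation scheme; the graphs are given as a vertex count plus edge lists, and the bijection as a dict:

```python
from itertools import combinations

def count_matches(n, edges1, edges2, pi):
    # Count vertex pairs {u, v} that pi maps edge-to-edge or
    # non-edge-to-non-edge between the two graphs.
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    return sum(
        (frozenset((u, v)) in e1) == (frozenset((pi[u], pi[v])) in e2)
        for u, v in combinations(range(n), 2)
    )
```

The identity bijection on two identical graphs matches every pair, which is the trivial upper bound of n(n-1)/2.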

Vikraman Arvind, Johannes Köbler, Sebastian Kuhnert, Yadu Vasudev

Near-Optimal Expanding Generator Sets for Solvable Permutation Groups

Let G = ⟨S⟩ be a solvable subgroup of the symmetric group S_n given as input by the generator set S. We give a deterministic polynomial-time algorithm that computes an expanding generator set of size Õ(n^2) for G. As a byproduct of our proof, we obtain a new explicit construction of ε-bias spaces of size $\tilde{O}(n\,{\rm poly}(\log d))\cdot\left(\frac{1}{\varepsilon}\right)^{O(1)}$ for the groups $\mathbb{Z}_d^n$.

Vikraman Arvind, Partha Mukhopadhyay, Prajakta Nimbhorkar, Yadu Vasudev

Generating Functions of Timed Languages

In order to study precisely the growth of timed languages, we associate to such a language a generating function. These functions (tightly related to volume and entropy of timed languages) satisfy compositionality properties and, for deterministic timed regular languages, can be characterized by integral equations. We provide procedures for closed-form computation of generating functions for some classes of timed automata and regular expressions.

Eugene Asarin, Nicolas Basset, Aldric Degorre, Dominique Perrin

The Robust Set Problem: Parameterized Complexity and Approximation

In this paper, we introduce the Robust Set problem: given a graph G = (V, E), a threshold function t : V → N and an integer k, find a subset of vertices V′ ⊆ V of size at least k such that every vertex v in G has less than t(v) neighbors in V′. This problem occurs in the context of the spread of undesirable agents through a network (virus, ideas, fire, …). Informally speaking, the problem asks to find the largest subset of vertices with the property that if anything bad happens in it then this will have no consequences on the remaining graph. The threshold t(v) of a vertex v represents its reliability regarding its neighborhood; that is, how many neighbors can be infected before v itself gets infected.

We study in this paper the parameterized complexity of Robust Set and the approximation of the associated maximization problem. When the parameter is k, we show that this problem is W[2]-complete in general and W[1]-complete if all thresholds are bounded by a constant. Moreover, we prove that, if P ≠ NP, the maximization version is not n^{1 − ε}-approximable for any ε > 0 even when all thresholds are at most two. When each threshold is equal to the degree of the vertex, we show that k-Robust Set is fixed-parameter tractable for parameter k and the maximization version is APX-complete. We give a polynomial-time algorithm for graphs of bounded treewidth and a PTAS for planar graphs. Finally, we show that the parametric dual problem (n − k)-Robust Set is fixed-parameter tractable for a large family of threshold functions.
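The defining condition of a robust set can be checked directly; a minimal sketch (a hypothetical verifier, not from the paper; adj maps each vertex to its neighbour list and t to its threshold):

```python
def is_robust_set(adj, t, candidate):
    # Every vertex v of the graph must have strictly fewer than t[v]
    # neighbours inside the candidate set.
    vset = set(candidate)
    return all(sum(u in vset for u in adj[v]) < t[v] for v in adj)
```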

Cristina Bazgan, Morgan Chopin

Mortality for 2 × 2 Matrices Is NP-Hard

We study the computational complexity of determining whether the zero matrix belongs to a finitely generated semigroup of two-dimensional integer matrices (the mortality problem). We show that this problem is NP-hard to decide in the two-dimensional case by using a new encoding and properties of the projective special linear group. The decidability of the mortality problem in two dimensions remains a long-standing open problem, although in dimension three it is known to be undecidable, as shown by Paterson in 1970.

We also show a lower bound on the minimum-length solution to the Mortality Problem, which is exponential in the number of matrices of the generator set and the maximal element of the matrices.

Paul C. Bell, Mika Hirvensalo, Igor Potapov

Solving Counter Parity Games

We study a class of parity games equipped with counters that evolve according to arbitrary non-negative affine functions. These games capture several cost models for dynamic systems from the literature. We present an elementary algorithm for computing the exact value of a counter parity game, which both generalizes previous results and improves their complexity. To this end, we introduce a class of ω-regular games with imperfect information and imperfect recall, solve them using automata-based techniques, and prove a correspondence between finite-memory strategies in such games and strategies in counter parity games.

Dietmar Berwanger, Łukasz Kaiser, Simon Leßenich

Drawing Planar Graphs on Points Inside a Polygon

In this paper, we study the problem of drawing a given planar graph such that vertices are at pre-specified points and the entire drawing is inside a given polygon. We give a method that shows that for an n-vertex graph and a k-sided polygon, Θ(kn^2) bends are always sufficient. We also give an example of a graph where Θ(kn^2) bends are necessary for such a drawing.

Therese Biedl, Peter Floderus

New Advances in Reoptimizing the Minimum Steiner Tree Problem

In this paper we improve the results in the literature concerning the problem of computing the minimum Steiner tree given the minimum Steiner tree for a similar problem instance. Using a σ-approximation algorithm computing the minimum Steiner tree from scratch, we provide a $\left(\frac{3 \sigma-1}{2 \sigma-1}+\epsilon\right)$- and a $\left(\frac{2 \sigma-1}{\sigma}+\epsilon\right)$-approximation algorithm for altering the instance by removing a vertex from the terminal set and by increasing the cost of an edge, respectively. If we use the best ratio known to date, σ = ln 4 + ε, our ratios equal 1.218 and 1.279, respectively.

Davide Bilò, Anna Zych

Smoothed Complexity Theory

Smoothed analysis is a new way of analyzing algorithms, introduced by Spielman and Teng (J. ACM, 2004). Classical methods like worst-case or average-case analysis have accompanying complexity classes, like P and AvgP, respectively. While worst-case or average-case analysis gives us a means to talk about the running time of a particular algorithm, complexity classes allow us to talk about the inherent difficulty of problems.

Smoothed analysis is a hybrid of worst-case and average-case analysis and compensates for some of their drawbacks. Despite its success in the analysis of single algorithms and problems, there is no embedding of smoothed analysis into computational complexity theory, which is necessary to classify problems according to their intrinsic difficulty.

We propose a framework for smoothed complexity theory, define the relevant classes, and prove some first results.

Markus Bläser, Bodo Manthey

Abelian Pattern Avoidance in Partial Words

Pattern avoidance is an important topic in combinatorics on words which dates back to the beginning of the twentieth century, when Thue constructed an infinite word over a ternary alphabet that avoids squares, i.e., a word with no two adjacent identical factors. This result finds applications in various algebraic contexts where more general patterns than squares are considered. On the other hand, Erdős raised the question as to whether there exists an infinite word that avoids abelian squares, i.e., a word with no two adjacent factors being permutations of one another. Although this question was answered affirmatively years later, knowledge of abelian pattern avoidance is rather limited. Recently, the study of (abelian) pattern avoidance was initiated in the more general framework of partial words, which allow for undefined positions called holes. Here, we investigate conditions for a pattern to be abelian avoidable by a partial word with finitely or infinitely many holes.

Francine Blanchet-Sadri, Sean Simmons

The Complexity of Rerouting Shortest Paths

The Shortest Path Reconfiguration problem has as input a graph G (with unit edge lengths) with vertices s and t, and two shortest st-paths P and Q. The question is whether there exists a sequence of shortest st-paths that starts with P and ends with Q, such that subsequent paths differ in only one vertex. This is called a rerouting sequence.

This problem is shown to be PSPACE-complete. For claw-free graphs and chordal graphs, it is shown that the problem can be solved in polynomial time, and that shortest rerouting sequences have linear length. For these classes, it is also shown that deciding whether a rerouting sequence exists between all pairs of shortest st-paths can be done in polynomial time.
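The rerouting relation can be explored by brute force; the following sketch (exponential in general, for illustration only; all helper names are assumptions) enumerates the shortest st-paths and searches the reconfiguration graph in which two paths are adjacent when they differ in exactly one vertex:

```python
from collections import deque

def all_shortest_paths(adj, s, t):
    # BFS distances from s, then extend only along tight edges.
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    paths = []
    def extend(path):
        v = path[-1]
        if v == t:
            paths.append(tuple(path))
            return
        for u in adj[v]:
            if dist.get(u) == dist[v] + 1:
                extend(path + [u])
    if t in dist:
        extend([s])
    return paths

def rerouting_sequence_exists(adj, s, t, P, Q):
    # BFS over shortest st-paths; a step changes exactly one vertex.
    paths = set(all_shortest_paths(adj, s, t))
    if P not in paths or Q not in paths:
        return False
    frontier, seen = {P}, {P}
    while frontier:
        if Q in seen:
            return True
        nxt = {r for p in frontier for r in paths
               if r not in seen and sum(a != b for a, b in zip(p, r)) == 1}
        seen |= nxt
        frontier = nxt
    return Q in seen
```

On a 4-cycle the two shortest paths around the cycle differ in a single internal vertex, so a rerouting sequence exists; on a 6-cycle they differ in two vertices with no intermediate shortest path, so none exists.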

Paul Bonsma

Computing with Large Populations Using Interactions

We define a general model capturing the behavior of a population of anonymous agents that interact in pairs. This model captures some of the main features of opportunistic networks, in which nodes (such as those of a mobile ad hoc network) meet sporadically. Because it is reminiscent of Population Protocols, we call our model Large-Population Protocol, or LPP. We are interested in the design of LPPs enforcing, for every ν ∈ [0,1], a proportion ν of the agents to be in a specific subset of marked states when the size of the population grows to infinity, in which case we say that the protocol computes ν. We prove that, for every ν ∈ [0,1], ν is computable by an LPP if and only if ν is algebraic. Our positive result is constructive: we show how to construct, for every algebraic number ν ∈ [0,1], a protocol which computes ν.

Olivier Bournez, Pierre Fraigniaud, Xavier Koegler

Pancake Flipping Is Hard

Pancake Flipping is the problem of sorting a stack of pancakes of different sizes (that is, a permutation), when the only allowed operation is to insert a spatula anywhere in the stack and to flip the pancakes above it (that is, to perform a prefix reversal). In the burnt variant, one side of each pancake is marked as burnt, and it is required to finish with all pancakes having the burnt side down. Computing the optimal scenario for any stack of pancakes and determining the worst-case stack for any stack size have been challenges for more than three decades. Beyond being an intriguing combinatorial problem in itself, it also has applications, e.g., in parallel computing and computational biology.

In this paper, we show that the Pancake Flipping problem, in its original (unburnt) variant, is NP-hard, thus answering the long-standing question of its computational complexity.
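The prefix-reversal operation described above can be simulated in a few lines (a minimal sketch; the stack is listed top-first, and a flip at i reverses the top i pancakes):

```python
def flip(stack, i):
    # Insert the spatula below the i-th pancake from the top and flip
    # everything above it (a prefix reversal).
    return stack[:i][::-1] + stack[i:]

def apply_flips(stack, flips):
    # Apply a sequence of prefix reversals in order.
    for i in flips:
        stack = flip(stack, i)
    return stack
```

The classical greedy strategy (bring the largest unsorted pancake to the top, then flip it into place) sorts any stack, but finding the minimum number of flips is what the paper shows to be NP-hard.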

Laurent Bulteau, Guillaume Fertin, Irena Rusu

In-place Heap Construction with Optimized Comparisons, Moves, and Cache Misses

We show how to build a binary heap in-place in linear time by performing ~1.625n element comparisons, at most ~2.125n element moves, and ~n/B cache misses, where n is the size of the input array, B the capacity of the cache line, and ~f(n) approaches f(n) as n grows. The same bound for element comparisons was derived and conjectured to be optimal by Gonnet and Munro; however, their procedure requires Θ(n) pointers and does not have optimal cache behaviour. Our main idea is to mimic the Gonnet-Munro algorithm by converting a navigation pile into a binary heap. To construct a binary heap in-place, we use this algorithm to build bottom heaps of size $\Theta(\lg n)$ and adjust the heap order at the upper levels using Floyd's sift-down procedure. On another frontier, we compare different heap-construction alternatives in practice.
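Floyd's sift-down procedure mentioned above, in its classical textbook form (a min-heap sketch of the standard bottom-up construction, not the paper's comparison- and cache-optimized variant):

```python
def sift_down(a, i, n):
    # Push a[i] down until the subtree rooted at i satisfies min-heap
    # order, assuming both of its child subtrees are already heaps.
    while 2 * i + 1 < n:
        c = 2 * i + 1
        if c + 1 < n and a[c + 1] < a[c]:
            c += 1  # pick the smaller child
        if a[i] <= a[c]:
            break
        a[i], a[c] = a[c], a[i]
        i = c

def build_heap(a):
    # Bottom-up heap construction: sift down every internal node.
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):
        sift_down(a, i, n)
    return a
```

Each sift-down costs at most two comparisons per level, which is why the comparison count of this classical construction is higher than the ~1.625n bound achieved in the paper.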

Jingsen Chen, Stefan Edelkamp, Amr Elmasry, Jyrki Katajainen

Model Checking Stochastic Branching Processes

Stochastic branching processes are a classical model for describing random trees, with applications in numerous fields including biology, physics, and natural language processing. In particular, they have recently been proposed to describe parallel programs with stochastic process creation. In this paper, we consider the problem of model checking stochastic branching processes. Given a branching process and a deterministic parity tree automaton, we are interested in computing the probability that the generated random tree is accepted by the automaton. We show that this probability can be compared with any rational number in PSPACE, and with 0 and 1 in polynomial time. In a second part, we suggest a tree extension of the logic PCTL, and develop a PSPACE algorithm for model checking a branching process against a formula of this logic. We also show that the qualitative fragment of this logic can be model checked in polynomial time.

Taolue Chen, Klaus Dräger, Stefan Kiefer

Parameterized Study of the Test Cover Problem

In this paper we carry out a systematic study of a natural covering problem, used for identification across several areas, in the realm of parameterized complexity. In the Test Cover problem we are given a set [n] = {1,…,n} of items together with a collection, $\cal T$, of distinct subsets of these items called tests. We assume that $\cal T$ is a test cover, i.e., for each pair of items there is a test in $\cal T$ containing exactly one of these items. The objective is to find a minimum-size subcollection of $\cal T$ which is still a test cover. The generic parameterized version of Test Cover is denoted by $p(k,n,|{\cal T}|)$-Test Cover. Here, we are given $([n],{\cal T})$ and a positive integer parameter k as input, and the objective is to decide whether there is a test cover of size at most $p(k,n,|{\cal T}|)$. We study four parameterizations for Test Cover and obtain the following:

(a) k-Test Cover and (n − k)-Test Cover are fixed-parameter tractable (FPT), i.e., these problems can be solved by algorithms of runtime $f(k)\cdot {\rm poly}(n,|{\cal T}|)$, where f(k) is a function of k only.

(b) $(|{\cal T}|-k)$-Test Cover and (log n + k)-Test Cover are W[1]-hard. Thus, it is unlikely that these problems are FPT.
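The separation property defining a test cover can be verified by brute force (a sketch assuming tests are given as Python sets over the items 1..n):

```python
from itertools import combinations

def is_test_cover(n, tests):
    # Every pair of items must be separated: some test contains
    # exactly one of the two items.
    return all(
        any((i in t) != (j in t) for t in tests)
        for i, j in combinations(range(1, n + 1), 2)
    )
```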

Robert Crowston, Gregory Gutin, Mark Jones, Saket Saurabh, Anders Yeo

Sitting Closer to Friends Than Enemies, Revisited

Signed graphs, i.e., undirected graphs with edges labelled with a plus or minus sign, are commonly used to model relationships in social networks. Recently, Kermarrec and Thraves [13] initiated the study of the problem of appropriately visualising the network: they asked whether any signed graph can be embedded into the metric space ℝ^l in such a manner that every vertex is closer to all its friends (neighbours via positive edges) than to all its enemies (neighbours via negative edges). Interestingly, embeddability into ℝ^1 can be expressed as a purely combinatorial problem. In this paper we pursue a deeper study of this case, answering several questions posed by Kermarrec and Thraves.

First, we refine the approach of Kermarrec and Thraves for the case of complete signed graphs by showing that the problem is closely related to the recognition of proper interval graphs. Second, we prove that the general case, whose polynomial-time tractability remained open, is in fact NP-complete. Finally, we provide lower and upper bounds for the time complexity of the general case: we prove that the existence of a subexponential-time (in the number of vertices and edges of the input signed graph) algorithm would violate the Exponential Time Hypothesis, whereas a simple dynamic programming approach gives a running time single-exponential in the number of vertices.

Marek Cygan, Marcin Pilipczuk, Michał Pilipczuk, Jakub Onufry Wojtaszczyk

A Dichotomy Theorem for Homomorphism Polynomials

In the present paper we show a dichotomy theorem for the complexity of polynomial evaluation. We associate to each graph H a polynomial that encodes all graphs of a fixed size homomorphic to H. We show that this family is computable by arithmetic circuits in constant depth if H has a loop or no edges, and that it is hard otherwise (i.e., complete for VNP, the arithmetic class related to #P). We also demonstrate the hardness over ℚ of the cut eliminator, a polynomial defined by Bürgisser which is known to be neither in VP nor VNP-complete over $\mathbb F_2$, if VP ≠ VNP (VP is the class of polynomials computable by arithmetic circuits of polynomial size).

Nicolas de Rugy-Altherre

Finite State Transducers for Modular Möbius Number Systems

Modular Möbius number systems consist of Möbius transformations with integer coefficients and unit determinant. We show that in any modular Möbius number system, the computation of a Möbius transformation with integer coefficients can be performed by a finite state transducer and has linear time complexity. As a byproduct we show that every modular Möbius number system has an expansion subshift of finite type.

Martin Delacourt, Petr Kůrka

Zero-Knowledge Proofs via Polynomial Representations

Under the existence of commitment schemes with homomorphic properties, we construct a constant-round zero-knowledge proof system for an $\mathcal{NP}$-complete language that requires a number of commitments that is sublinear in the size of the (best known) witness verification predicate. The overall communication complexity improves upon the best known results for the specific $\mathcal{NP}$-complete language [1,2] and results that could be obtained using zero-knowledge proof systems for the entire $\mathcal{NP}$ class (most notably, [3,2,4]). Perhaps of independent interest, our techniques build a proof system after reducing the theorem to be proved to statements among low-degree polynomials over large fields and using the Schwartz-Zippel lemma to prove polynomial identities among committed values.
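The Schwartz-Zippel idea can be illustrated in its plain, non-committed form: two polynomials over a large prime field agree at random points with non-negligible probability only if they are identical. The function name, trial count, and the choice of the Mersenne prime 2^61 − 1 are illustrative assumptions:

```python
import random

def probably_equal(p, q, nvars, trials=20, field=2**61 - 1):
    # Schwartz-Zippel identity test: for polynomials of total degree d,
    # each trial errs with probability at most d / field.
    for _ in range(trials):
        xs = [random.randrange(field) for _ in range(nvars)]
        if p(*xs) % field != q(*xs) % field:
            return False
    return True
```

A single differing evaluation certifies inequality; repeated agreement makes the error probability of declaring equality vanishingly small.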

Giovanni Di Crescenzo, Vadym Fedyukovych

Cluster Vertex Deletion: A Parameterization between Vertex Cover and Clique-Width

The cluster vertex deletion number of a graph is the minimum number of its vertices whose deletion results in a disjoint union of complete graphs. This generalizes the vertex cover number, provides an upper bound on the clique-width, and is related to the previously studied notion of the twin cover of the graph under consideration. We study the fixed-parameter tractability of basic graph-theoretic problems related to coloring and Hamiltonicity, parameterized by the cluster vertex deletion number. Our results show that most of these problems remain fixed-parameter tractable as well, and thus we push the borderline between tractability and intractability towards the clique-width parameter.

Martin Doucha, Jan Kratochvíl

On the Impact of Fair Best Response Dynamics

In this work we completely characterize how the frequency with which each player participates in the game dynamics affects the possibility of reaching efficient states, i.e., states with an approximation ratio within a constant factor of the price of anarchy, within a polynomially bounded number of best responses. We focus on the well-known class of linear congestion games and we show that (i) if each player is allowed to play at least once and at most β times in T best responses, states with approximation ratio O(β) times the price of anarchy are reached after T⌈log log n⌉ best responses, and that (ii) such a bound is essentially tight also after exponentially many ones. One important consequence of our result is that fairness among players is a necessary and sufficient condition for guaranteeing fast convergence to efficient states. This answers the important question of the maximum order of β needed to obtain efficient states quickly, left open by [10,11] and [3], in which fast convergence for constant β and very slow convergence for β = O(n) have been shown, respectively. Finally, we show that the structure of the game implicitly affects its performance. In particular, we prove that in the symmetric setting, in which all players share the same set of strategies, the game always converges to an efficient state after a polynomial number of best responses, regardless of the frequency with which each player moves. All the results extend to weighted congestion games.

Angelo Fanelli, Luca Moscardelli, Alexander Skopalik

Fast Balanced Partitioning Is Hard Even on Grids and Trees

Two kinds of approximation algorithms exist for the k-BALANCED PARTITIONING problem: those that are fast but compute unsatisfactory approximation ratios, and those that guarantee high-quality ratios but are slow. In this paper we prove that this tradeoff between runtime and solution quality is unavoidable. In this problem, a minimum number of edges in a graph needs to be found that, when cut, partition the vertices into k equal-sized sets. We develop a general reduction which identifies some sufficient conditions on the considered graph class in order to prove the hardness of the problem. We focus on two combinatorially simple but very different classes, namely trees and solid grid graphs. The latter are finite connected subgraphs of the infinite two-dimensional grid without holes. We apply the reduction to show that for solid grid graphs it is NP-hard to approximate the optimum number of cut edges within any satisfactory ratio. We also consider solutions in which the sets may deviate from being equal-sized. Our reduction is applied to grids and trees to prove that no fully polynomial time algorithm exists that computes solutions in which the sets are arbitrarily close to equal-sized. This is true even if the number of edges cut is allowed to increase when the limit on the set sizes decreases. These are the first bicriteria inapproximability results for the k-BALANCED PARTITIONING problem.

Andreas Emil Feldmann

A Characterization of Bispecial Sturmian Words

A finite Sturmian word w over the alphabet {a, b} is left special (resp. right special) if aw and bw (resp. wa and wb) are both Sturmian words. A bispecial Sturmian word is a Sturmian word that is both left special and right special. As a main result, we show that bispecial Sturmian words are exactly the maximal internal factors of Christoffel words, which are the words coding digital approximations of segments in the Euclidean plane. This result extends the known relation between central words and primitive Christoffel words. Our characterization allows us to give an enumerative formula for bispecial Sturmian words. We also investigate the minimal forbidden words for the set of Sturmian words.

Gabriele Fici
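
Christoffel words admit a simple arithmetic construction, which makes the characterization above easy to experiment with. The sketch below uses a standard construction assumed here (it is not code from the paper): it builds the lower Christoffel word of slope p/q, and strips the first and last letter to obtain the maximal internal factor, which by the main result is a bispecial Sturmian word.

```python
def christoffel(p, q):
    """Lower Christoffel word of slope p/q over {a, b} (standard construction)."""
    n = p + q
    # letter i is 'a' exactly when i*p mod n increases over (i-1)*p mod n
    return ''.join('a' if (i * p) % n > ((i - 1) * p) % n else 'b'
                   for i in range(1, n + 1))

def maximal_internal_factor(w):
    """Strip the first and last letter; for a Christoffel word this is
    a bispecial Sturmian word by the characterization above."""
    return w[1:-1]
```

For example, christoffel(2, 3) yields "aabab", whose maximal internal factor is "aba".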

Online Sum-Radii Clustering

In Online Sum-Radii Clustering, n demand points arrive online and must be irrevocably assigned to a cluster upon arrival. The cost of each cluster is the sum of a fixed opening cost and its radius, and the objective is to minimize the total cost of the clusters opened by the algorithm. We show that the deterministic competitive ratio of Online Sum-Radii Clustering for general metric spaces is Θ(log n), where the upper bound follows from a primal-dual online algorithm, and the lower bound is valid for ternary Hierarchically Well-Separated Trees (HSTs) and for the Euclidean plane. Combined with the results of (Csirik et al., MFCS 2010), this result demonstrates that the deterministic competitive ratio of Online Sum-Radii Clustering changes abruptly, from constant to logarithmic, when we move from the line to the plane. We also show that Online Sum-Radii Clustering in HSTs is closely related to the Parking Permit problem introduced by (Meyerson, FOCS 2005). Exploiting the relation to Parking Permit, we obtain a lower bound of Ω(log log n) on the randomized competitive ratio of Online Sum-Radii Clustering in tree metrics. Moreover, we present a simple randomized O(log n)-competitive algorithm, and a deterministic O(log log n)-competitive algorithm for the fractional version of the problem.

Dimitris Fotakis, Paraschos Koutris
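
For intuition about the cost model above, here is a naive online rule for points on the line: open a new cluster whenever the arriving point is farther than the opening cost f from every existing center, and otherwise grow the nearest cluster's radius. This is a hypothetical illustration of the objective only, not the paper's primal-dual or randomized algorithm, and it carries no competitive guarantee.

```python
def online_sum_radii(points, f):
    """Naive online heuristic over 1-D points.
    Clusters are [center, radius] pairs; total cost = sum of (f + radius)."""
    clusters = []
    for p in points:
        if clusters:
            c = min(clusters, key=lambda cl: abs(p - cl[0]))
            if abs(p - c[0]) <= f:              # cheap to absorb: grow the radius
                c[1] = max(c[1], abs(p - c[0]))
                continue
        clusters.append([p, 0.0])               # irrevocably open a new cluster
    return sum(f + r for _, r in clusters)
```

On the stream [0, 1, 10] with opening cost 2, the rule opens two clusters (around 0 and 10) for a total cost of 5.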

Observe and Remain Silent (Communication-Less Agent Location Discovery)

We study a randomised distributed communication-less coordination mechanism for n uniform anonymous agents located on a circle with unit circumference. We assume the agents are located at arbitrary but distinct positions, unknown to other agents. The agents perform actions in synchronised rounds. At the start of each round an agent chooses the direction of its movement (clockwise or anticlockwise), and moves at unit speed during this round. Agents are not allowed to overpass, i.e., when an agent collides with another it instantly starts moving with the same speed in the opposite direction. Agents cannot leave marks on the ring, have zero vision and cannot exchange messages. However, at the conclusion of each round each agent has access to (some, not necessarily all) information regarding its trajectory during this round. This information can be processed and stored by the agent for further analysis.

The location discovery task to be performed by each agent is to determine the initial position of every other agent and eventually to stop at its initial position, or proceed to another task, in a fully synchronised manner. Our primary motivation is to study distributed systems where agents collect the minimum amount of information that is necessary to accomplish this location discovery task.

Our main result is a fully distributed randomised (Las Vegas type) algorithm solving the location discovery problem w.h.p. in O(n log² n) rounds (assuming the agents collect sufficient information). Note that our result also holds if initially the agents do not know the value of n and have no coherent sense of direction.

Tom Friedetzky, Leszek Gąsieniec, Thomas Gorry, Russell Martin

When Trees Grow Low: Shrubs and Fast MSO1

A recent characterization [9] of those graphs for which coloured MSO2 model checking is fast raised interest in the graph invariant called tree-depth. Looking for a similar characterization for (coloured) MSO1, we introduce the notion of shrub-depth of a graph class. To prove that MSO1 model checking is fast for classes of bounded shrub-depth, we show that shrub-depth exactly characterizes the graph classes having an interpretation in coloured trees of bounded height. We also introduce a common extension of cographs and of graphs with bounded shrub-depth, the m-partite cographs (still of bounded clique-width), which are well quasi-ordered by the relation “is an induced subgraph of” and therefore allow polynomial-time testing of hereditary properties.

Robert Ganian, Petr Hliněný, Jaroslav Nešetřil, Jan Obdržálek, Patrice Ossona de Mendez, Reshma Ramadurai

Strategy Machines and Their Complexity

We introduce a machine model for the execution of strategies in (regular) infinite games that refines the standard model of Mealy automata. This model of controllers is formalized in the terminological framework of Turing machines. We show how polynomially sized controllers can be found for Muller and Streett games. We are able to distinguish aspects of executing strategies (“size”, “latency”, “space consumption”) that are not visible in Mealy automata. Also, lower bound results are obtained.

Marcus Gelderie

Coloring Graphs Characterized by a Forbidden Subgraph

The Coloring problem is to test whether a given graph can be colored with at most k colors for some given k, such that no two adjacent vertices receive the same color. The complexity of this problem on graphs that do not contain some graph H as an induced subgraph is known for each fixed graph H. A natural variant is to forbid a graph H only as a subgraph. We call such graphs strongly H-free and initiate a complexity classification of Coloring for strongly H-free graphs. We show that Coloring is NP-complete for strongly H-free graphs, even for k = 3, when H contains a cycle, has maximum degree at least five, or contains a connected component with two vertices of degree four. We also give three conditions on a forest H of maximum degree at most four and with at most one vertex of degree four in each of its connected components, such that Coloring is NP-complete for strongly H-free graphs even for k = 3. Finally, we classify the computational complexity of Coloring on strongly H-free graphs for all fixed graphs H up to seven vertices. In particular, we show that Coloring is polynomial-time solvable when H is a forest that has at most seven vertices and maximum degree at most four.

Petr A. Golovach, Daniël Paulusma, Bernard Ries

Obtaining Planarity by Contracting Few Edges

The Planar Contraction problem is to test whether a given graph can be made planar by using at most k edge contractions. This problem is known to be NP-complete. We show that it is fixed-parameter tractable when parameterized by k.

Petr A. Golovach, Pim van ’t Hof, Daniël Paulusma

Light Spanners in Bounded Pathwidth Graphs

Given an edge-weighted graph G and ε > 0, a (1 + ε)-spanner is a spanning subgraph G′ whose shortest-path distances approximate those of G within a factor of 1 + ε. For G from certain graph families (such as bounded-genus graphs and apex graphs), we know that light spanners exist. That is, we can compute a (1 + ε)-spanner G′ with total edge weight at most a constant times the weight of a minimum spanning tree. This constant may depend on ε and the graph family, but not on the particular graph G nor on the edge weighting. The existence of light spanners is essential in the design of approximation schemes for the metric TSP (the traveling salesman problem) and similar graph-metric problems.

In this paper we make some progress towards the conjecture that light spanners exist for every minor-closed graph family: we show that light spanners exist for graphs with bounded pathwidth, and that they are computed by a greedy algorithm. We do this via the intermediate construction of light monotone spanning trees in such graphs.

Michelangelo Grigni, Hao-Hsiang Hung
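
The greedy spanner algorithm referenced above is the classical one: scan edges in nondecreasing weight order and keep an edge only if the partial spanner does not already connect its endpoints within (1 + ε) times the edge's weight. A minimal sketch assuming this standard greedy procedure (Dijkstra-based and quadratic, for illustration only, not the paper's monotone-tree machinery):

```python
import heapq

def greedy_spanner(n, edges, eps):
    """Classical greedy (1+eps)-spanner.
    edges: list of (weight, u, v) with vertices 0..n-1."""
    adj = [[] for _ in range(n)]

    def dist(s, t, cap):
        # Dijkstra from s; stop expanding once distances exceed cap
        d = {s: 0.0}
        pq = [(0.0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if u == t:
                return du
            if du > d.get(u, float('inf')) or du > cap:
                continue
            for v, w in adj[u]:
                nd = du + w
                if nd < d.get(v, float('inf')):
                    d[v] = nd
                    heapq.heappush(pq, (nd, v))
        return float('inf')

    spanner = []
    for w, u, v in sorted(edges):              # nondecreasing weight
        if dist(u, v, (1 + eps) * w) > (1 + eps) * w:
            adj[u].append((v, w))              # keep the edge
            adj[v].append((u, w))
            spanner.append((w, u, v))
    return spanner
```

On a triangle with weights 1, 1, 1.5 and ε = 0.4, the heaviest edge is dropped because the two unit edges already give a path of length 2 ≤ 1.4 · 1.5.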

Planarizing Gadgets for Perfect Matching Do Not Exist

To reduce a graph problem to its planar version, a standard technique is to replace crossings in a drawing of the input graph by planarizing gadgets. We show unconditionally that such a reduction is not possible for the perfect matching problem and also extend this to some other problems related to perfect matching. We further show that there is no planarizing gadget for the Hamiltonian cycle problem.

Rohit Gurjar, Arpita Korwar, Jochen Messner, Simon Straub, Thomas Thierauf

Kernels for Edge Dominating Set: Simpler or Smaller

A kernelization for a parameterized computational problem is a polynomial-time procedure that transforms every instance of the problem into an equivalent instance (the so-called kernel) whose size is bounded by a function of the value of the chosen parameter. We present new kernelizations for the NP-complete Edge Dominating Set problem, which asks, given an undirected graph G = (V, E) and an integer k, whether there exists a subset D ⊆ E with |D| ≤ k such that every edge in E shares at least one endpoint with some edge in D. The best previous kernelization for Edge Dominating Set, due to Xiao, Kloks and Poon, yields a kernel with at most 2k² + 2k vertices in linear time. We first describe a very simple linear-time kernelization whose output has at most 4k² + 4k vertices and is either a trivial “no” instance or a vertex-induced subgraph of the input graph in which every edge dominating set of size ≤ k is also an edge dominating set of the input graph. We then show that a refinement of the algorithm of Xiao, Kloks and Poon and a different analysis can lower the bound on the number of vertices in the kernel by a factor of about 4, namely to $\max\{\frac{1}{2}k^2+\frac{7}{2}k, 6k\}$.

Torben Hagerup
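
The defining condition of Edge Dominating Set translates directly into code; this small hypothetical checker (not part of the paper's kernelizations) verifies that a candidate edge set D dominates every edge of the graph:

```python
def is_edge_dominating_set(edges, D):
    """True if every edge of the graph shares at least one endpoint
    with some edge in D (edges given as (u, v) pairs)."""
    covered = {v for e in D for v in e}   # endpoints touched by D
    return all(u in covered or v in covered for u, v in edges)
```

On the path 0-1-2-3, the single middle edge (1, 2) dominates all three edges, while the end edge (0, 1) leaves (2, 3) undominated.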

Categories of Coalgebraic Games

We consider a general notion of coalgebraic game, whereby games are viewed as elements of a final coalgebra. This allows for a smooth definition of game operations (e.g. sum, negation, and linear implication) as final morphisms. The notion of coalgebraic game subsumes different notions of games, e.g. possibly non-wellfounded Conway games and games arising in Game Semantics à la [AJM00]. We define various categories of coalgebraic games and (total) strategies, where the above operations become functorial and induce a structure of monoidal closed or *-autonomous category. In particular, we define a category of coalgebraic games corresponding to AJM-games and winning strategies, and a generalization to non-wellfounded games of Joyal’s category of Conway games. This latter construction provides a categorical characterization of the equivalence by Berlekamp, Conway and Guy on loopy games.

Furio Honsell, Marina Lenisa, Rekha Redamalla

Quasi-recognizable vs MSO Definable Languages of One-Dimensional Overlapping Tiles

(Extended Abstract)

It has been shown [6] that, within the McAlister inverse monoid [10], whose elements can be seen as overlapping one-dimensional tiles, the class of languages recognizable by finite monoids collapses compared with the class of languages definable in Monadic Second Order Logic (MSO).

This paper aims at capturing the expressive power of MSO definability of languages of tiles by means of a weakening of the notion of algebraic recognizability, which we shall refer to as quasi-recognizability. For that purpose, since the collapse of algebraic recognizability is intrinsically linked with the notion of monoid morphism itself, we propose instead to use premorphisms: monotonic mappings on ordered monoids that are only required to be sub-multiplicative with respect to the monoid product, i.e. mappings φ such that for all x and y, φ(xy) ≤ φ(x)φ(y).

In doing so, we indeed obtain, with additional but relatively natural closure conditions, the expected quasi-algebraic characterization of MSO definable languages of positive tiles. This result is achieved via the axiomatic definition of an original class of well-behaved ordered monoids, for which quasi-recognizability implies MSO definability. An original embedding of any (finite) monoid S into a (finite) well-behaved ordered monoid ${\mathcal Q}(S)$ is then used to prove the converse.

David Janin

An Improved Approximation Scheme for Variable-Sized Bin Packing

The variable-sized bin packing problem (VBP) is a well-known generalization of the NP-hard bin packing problem (BP) in which the items can be packed in bins of M given sizes. The objective is to minimize the total capacity of the bins used. We present an AFPTAS for VBP and BP with performance guarantee $P(I) \leq (1+ \varepsilon )OPT(I) + O(\log^2(\frac{1}{\varepsilon }))$. The additive term is much smaller than the additive term of previously known AFPTAS. The running time of the algorithm is $O( \frac{1}{\varepsilon ^6} \log\left(\frac{1}{\varepsilon }\right) + \log\left(\frac{1}{\varepsilon }\right) n)$ for bin packing and $O(\frac{1}{\varepsilon ^{7}} \log^2\left(\frac{1}{\varepsilon }\right) + \log\left(\frac{1}{\varepsilon }\right)\left(M+n\right))$ for variable-sized bin packing, which improves on previously known algorithms.

Klaus Jansen, Stefan Kraft

Gathering an Even Number of Robots in an Odd Ring without Global Multiplicity Detection

We propose a gathering protocol for an even number of robots in a ring-shaped network that allows symmetric but not periodic configurations as initial configurations, yet uses only local weak multiplicity detection. Robots are assumed to be anonymous and oblivious, and the execution model is the non-atomic CORDA model with asynchronous fair scheduling. In our scheme, the number of robots k must be greater than 8, and the number of nodes n of the network must be odd and greater than k + 3. The running time of our protocol is O(n²) asynchronous rounds.

Sayaka Kamei, Anissa Lamani, Fukuhito Ooshita, Sébastien Tixeuil

Reversal Hierarchies for Small 2DFAs

A two-way deterministic finite automaton with r(n) reversals performs at most r(n) input head reversals on every n-long input. Let 2D[r(n)] be the class of all families of problems solvable by such automata of size polynomial in the index of the family. Then the reversal hierarchy 2D[0] ⊆ 2D[1] ⊆ 2D[2] ⊆ ⋯ is strict, but 2D[O(1)] = 2D[o(n)]. Moreover, the inner-reversal hierarchy 2D(0) ⊆ 2D(1) ⊆ 2D(2) ⊆ ⋯, where now the bound applies only to reversals strictly between the input end-markers, is also strict.

Christos A. Kapoutsis, Giovanni Pighizzini

Strictness of the Collapsible Pushdown Hierarchy

We present a pumping lemma for each level of the collapsible pushdown graph hierarchy in analogy to the second author’s pumping lemma for higher-order pushdown graphs (without collapse). Using this lemma, we give the first known examples that separate the levels of the collapsible pushdown graph hierarchy and of the collapsible pushdown tree hierarchy, i.e., the hierarchy of trees generated by higher-order recursion schemes. This confirms the open conjecture that higher orders allow one to generate more graphs and more trees.

Full proofs can be found in the arXiv version [10] of this paper.

Alexander Kartzow, Paweł Parys

Computational Complexity of Smooth Differential Equations

The computational complexity of the solution h to the ordinary differential equation h(0) = 0, h′(t) = g(t, h(t)) under various assumptions on the function g has been investigated in the hope of understanding the intrinsic hardness of solving the equation numerically. Kawamura showed in 2010 that the solution h can be PSPACE-hard even if g is assumed to be Lipschitz continuous and polynomial-time computable. We place further requirements on the smoothness of g and obtain the following results: the solution h can still be PSPACE-hard if g is assumed to be of class C¹; for each k ≥ 2, the solution h can be hard for the counting hierarchy if g is of class Cᵏ.

Akitoshi Kawamura, Hiroyuki Ota, Carsten Rösnick, Martin Ziegler

The Lower Reaches of Circuit Uniformity

The effect of severely tightening the uniformity of Boolean circuit families is investigated. The impact on NC¹ and its subclasses is shown to depend on the characterization chosen for the class, while classes such as P appear to be more robust. Tightly uniform subclasses of NC¹ whose separation may be within reach of current techniques emerge.

Christoph Behle, Andreas Krebs, Klaus-Jörn Lange, Pierre McKenzie

The Join Levels of the Trotter-Weil Hierarchy Are Decidable

The variety DA of finite monoids has a huge number of different characterizations, ranging from two-variable first-order logic FO² to unambiguous polynomials. In order to study the structure of the subvarieties of DA, Trotter and Weil considered the intersection of varieties of finite monoids with bands, i.e., with idempotent monoids. The varieties of idempotent monoids are very well understood and fully classified. Trotter and Weil showed that for every band variety V there exists a unique maximal variety W inside DA such that the intersection with bands yields the given band variety V. These maximal varieties W define the Trotter-Weil hierarchy. This hierarchy is infinite and it exhausts DA; induced by band varieties, it naturally has a zigzag shape. In their paper, Trotter and Weil showed that the corners and the intersection levels of this hierarchy are decidable.

In this paper, we give a single identity of omega-terms for every join level of the Trotter-Weil hierarchy; this yields decidability. Moreover, we show that the join levels and the subsequent intersection levels do not coincide. Almeida and Azevedo have shown that the join of $\mathcal R$-trivial and $\mathcal L$-trivial finite monoids is decidable; this is the first non-trivial join level of the Trotter-Weil hierarchy. We extend this result to the other join levels of the Trotter-Weil hierarchy. At the end of the paper, we give two applications. First, we show that the hierarchy of deterministic and codeterministic products is decidable. Second, we show that the direction alternation depth of unambiguous interval logic is decidable.

Manfred Kufleitner, Alexander Lauser

Equations X + A = B and (X + X) + C = (X − X) + D over Sets of Natural Numbers

It has recently been shown that hyper-arithmetical sets can be represented as the unique solutions of language equations over sets of natural numbers with the operations of addition, subtraction and union. It is shown that the same expressive power, under a certain encoding, can be achieved by systems of just two equations, X + A = B and (X + X) + C = (X − X) + D, without using union. It follows that the problems concerning the solutions of systems of the general form are as hard as the same problems restricted to these two-equation systems; in particular, it is known that the question of solution existence is $\Sigma^1_1$-complete.

Tommi Lehtinen

Weakly-Synchronized Ground Tree Rewriting

(with Applications to Verifying Multithreaded Programs)

Ground tree rewrite systems (GTRS) are a well-known tree extension of prefix-rewrite systems on words (a.k.a. pushdown systems), where subtrees (instead of word prefixes) are rewritten. GTRS can model programs with unbounded recursion depth and thread-spawning, wherein the threads have a tree-shaped dependency graph. We consider the extension of GTRS with a finite (global) control unit for synchronizing among the active threads, a.k.a. state-extended GTRS (sGTRS). Since sGTRS are Turing-complete, we restrict the finite control unit to dags, possibly with self-loops, a.k.a. weakly-synchronized GTRS (wGTRS). wGTRS can be regarded as a generalization of context-bounded analysis of multipushdown systems with dynamic thread spawning. We show that reachability, repeated reachability, and the complement of model checking deterministic LTL over wGTRS are NP-complete by a polynomial reduction to checking existential Presburger formulas, for which highly optimized solvers are available.

Anthony Widjaja Lin

Descriptional Complexity of Deterministic Regular Expressions

We study the descriptional complexity of regular languages that are definable by deterministic regular expressions. First, we examine possible blow-ups when translating between regular expressions, deterministic regular expressions, and deterministic automata. Then we give an overview of the closure properties of these languages under various language-theoretic operations and we study the descriptional complexity of applying these operations. Our main technical result is a general property that implies that the blow-up when translating a DFA to an equivalent deterministic expression can be exponential.

Katja Losemann, Wim Martens, Matthias Niewerth

Identity Testing, Multilinearity Testing, and Monomials in Read-Once/Twice Formulas and Branching Programs

We study the problem of testing if the polynomial computed by an arithmetic circuit is identically zero (ACIT). We give a deterministic polynomial-time algorithm for this problem when the inputs are read-twice formulas. This algorithm also computes the MLIN predicate, testing if the input circuit computes a multilinear polynomial.

We further study two related computational problems on arithmetic circuits. Given an arithmetic circuit C: 1) ZMC: test whether a given monomial in C has zero coefficient or not, and 2) MonCount: compute the number of monomials in C. These problems were introduced by Fournier, Malod and Mengel [STACS 2012], and shown to characterize various levels of the counting hierarchy (CH).

We address the above problems on read-restricted arithmetic circuits and branching programs. We prove several complexity characterizations for the above problems on these restricted classes of arithmetic circuits.

Meena Mahajan, B. V. Raghavendra Rao, Karteek Sreenivasaiah

Fine and Wilf’s Theorem and Pseudo-repetitions

The notion of repetition of factors is central to the study of words. One of the recent generalizations of this concept was introduced by Czeizler et al. (2010) and investigates a restricted version of that notion in the context of DNA computing and bioinformatics. It considers a word to be a pseudo-repetition if it is the iterated concatenation of one of its prefixes and the image of this prefix through an involution. We present here a series of results in the fashion of the Fine and Wilf Theorem in a more general setting, where we consider the periods of some word given by a prefix of it and images of that prefix through some arbitrary morphism or antimorphism.

Florin Manea, Robert Mercaş, Dirk Nowotka
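
A pseudo-repetition in the above sense factors a word into blocks that are each either a prefix t or its image θ(t) under an involution. The sketch below checks this property directly; the reversed Watson-Crick complement used as θ is an illustrative assumption (the paper allows arbitrary morphisms and antimorphisms):

```python
def is_pseudo_repetition(w, theta):
    """True if w = u1 u2 ... uk (k >= 2) where every block ui is either
    t or theta(t), for t the prefix of w of some length d dividing len(w)."""
    n = len(w)
    for d in range(1, n // 2 + 1):
        if n % d:
            continue
        t, s = w[:d], theta(w[:d])
        if all(w[i:i + d] in (t, s) for i in range(0, n, d)):
            return True
    return False

# Illustrative involution: DNA reversed Watson-Crick complement
comp = {'A': 'T', 'T': 'A', 'C': 'G', 'G': 'C'}
def dna_theta(t):
    return ''.join(comp[c] for c in reversed(t))
```

For instance, "ACGT" = t · θ(t) for t = "AC", so it is a pseudo-repetition under this involution, while "ACGA" is not.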

Taking It to the Limit: Approximate Reasoning for Markov Processes

We develop a fusion of logical and metrical principles for reasoning about Markov processes. More precisely, we lift metrics from processes to sets of processes satisfying a formula and explore how the satisfaction relation behaves as sequences of processes and sequences of formulas approach limits. A key new concept is that of a dynamically-continuous metric bisimulation, which is a property of (pseudo)metrics. We prove theorems about satisfaction in the limit and robustness theorems, as well as giving a topological characterization of various classes of formulas. This work is aimed at providing approximate reasoning principles for Markov processes.

Kim Guldstrand Larsen, Radu Mardare, Prakash Panangaden

Asymmetric Swap-Equilibrium: A Unifying Equilibrium Concept for Network Creation Games

We introduce and study the concept of an asymmetric swap-equilibrium for network creation games. A graph where every edge is owned by one of its endpoints is said to be in asymmetric swap-equilibrium if no vertex v can delete its own edge {v, w}, add a new edge {v, w′}, and thereby decrease the sum of distances from v to all other vertices. This equilibrium concept generalizes and unifies some of the previous equilibrium concepts for network creation games. While the structure and the quality of equilibrium networks are still not fully understood, we provide further (partial) insights into this open problem. As the two main results, we show that (1) every asymmetric swap-equilibrium has at most one (non-trivial) 2-edge-connected component, and (2) we show a logarithmic upper bound on the diameter of an asymmetric swap-equilibrium for the case that the minimum degree of the unique 2-edge-connected component is at least n^ε, for $\varepsilon>\frac{4\lg 3}{\lg n}$. Due to the generalizing property of asymmetric swap-equilibria, these results hold for several equilibrium concepts that were previously studied. Along the way, we introduce a node-weighted version of network creation games, which is of independent interest for further studies of network creation games.

Matúš Mihalák, Jan Christoph Schlegel

Between Tree Patterns and Conjunctive Queries: Is There Tractability beyond Acyclicity?

In static analysis of queries over trees in the presence of schemas, there is an exponential complexity gap between conjunctive queries (CQs, positive existential first-order formulae without disjunction) and tree patterns (tree-like acyclic CQs). Motivated by applications in XML data management, we consider various restrictions of CQs that bring their complexity down to that of tree patterns. Most importantly, we show that vertical tree patterns can be costlessly extended with full horizontal CQs over children. We also consider restricted classes of schemas and show that under disjunction-free schemas the complexity of static analysis sometimes drops dramatically.

Filip Murlak, Michał Ogiński, Marcin Przybyłko

Reducing a Target Interval to a Few Exact Queries

Many combinatorial problems involving weights can be formulated as a so-called ranged problem. That is, their input consists of a universe U, a (succinctly-represented) set family $\mathcal{F} \subseteq 2^{U}$, a weight function ω: U → {1,…,N}, and integers 0 ≤ l ≤ u ≤ ∞. The problem is then to decide whether there is an $X \in \mathcal{F}$ such that $l \leq \sum_{e \in X} \omega(e) \leq u$. Well-known examples of such problems include Knapsack, Subset Sum, Maximum Matching, and Traveling Salesman. In this paper, we develop a generic method to transform a ranged problem into an exact problem (i.e. a ranged problem for which l = u). We show that our method has several intriguing applications in exact exponential algorithms and parameterized complexity, namely:

In exact exponential algorithms, we present new insight into whether Subset Sum and Knapsack have algorithms that are efficient in both time and space. In particular, we show that the time and space complexity of Subset Sum and Knapsack are equivalent up to a small polynomial factor in the input size. We also give an algorithm that solves sparse instances of Knapsack efficiently in terms of space and time.

In parameterized complexity, we present the first kernelization results on weighted variants of several well-known problems. In particular, we show that weighted variants of Vertex Cover, Dominating Set, Traveling Salesman, and Knapsack all admit polynomial randomized Turing kernels when parameterized by |U|.

Curiously, our method relies on a technique more commonly found in approximation algorithms.

Jesper Nederlof, Erik Jan van Leeuwen, Ruben van der Zwaan
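
To make the ranged-problem definition concrete, here is a direct dynamic program for ranged Subset Sum, deciding whether some subset's total weight lands in [l, u]. The paper's contribution is reducing such ranged questions to a few exact (l = u) queries; this naive exponential-space sketch does not attempt that and is for illustration only.

```python
def ranged_subset_sum(weights, l, u):
    """Decide whether some subset X of `weights` satisfies l <= sum(X) <= u.
    Here the set family F is all subsets (including the empty one)."""
    reachable = {0}                       # achievable subset sums so far
    for w in weights:
        reachable |= {s + w for s in reachable}
    return any(l <= s <= u for s in reachable)
```

For example, with weights {3, 5, 9} the achievable sums are {0, 3, 5, 8, 9, 12, 14, 17}, so the target interval [7, 8] is hit while [6, 7] is not.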

Maximum Cliques in Graphs with Small Intersection Number and Random Intersection Graphs

In this paper, we relate the problem of finding a maximum clique to the intersection number of the input graph (i.e. the minimum number of cliques needed to edge-cover the graph). In particular, we consider the maximum clique problem for graphs with small intersection number and for random intersection graphs (a model in which each one of m labels is chosen independently with probability p by each one of n vertices, and there are edges between any vertices with overlaps in the labels chosen).

We first present a simple algorithm which, on input G, finds a maximum clique in $O(2^{2^m + O(m)} + n^2 \min\{2^m, n\})$ time steps, where m is an upper bound on the intersection number and n is the number of vertices. Consequently, when m ≤ ln ln n the running time of this algorithm is polynomial.

We then consider random instances of the random intersection graphs model as input graphs. As our main contribution, we prove that, when the number of labels is not too large (m = n^α, 0 < α < 1), we can use the label choices of the vertices to find a maximum clique in polynomial time whp. The proof of correctness for this algorithm relies on our Single Label Clique Theorem, which roughly states that whp a “large enough” clique cannot be formed by more than one label. This theorem generalizes and strengthens other related results in the state of the art, but also broadens the range of values considered (see e.g. [22] and [4]).

As an important consequence of our Single Label Clique Theorem, we prove that the problem of inferring the complete information of label choices for each vertex from the resulting random intersection graph (i.e. the label representation of the graph) is solvable whp; namely, the maximum likelihood estimation method will provide a unique solution (up to permutations of the labels). Finding efficient algorithms for constructing such a label representation is left as an interesting open problem for future research.

Sotiris Nikoletseas, Christoforos Raptopoulos, Paul G. Spirakis

A Finite Basis for ‘Almost Future’ Temporal Logic over the Reals

Kamp’s theorem established the expressive completeness of the temporal modalities Until and Since for the First-Order Monadic Logic of Order (FOMLO) over Real and Natural time flows. Over Natural time, a single future modality (Until) is sufficient to express all future FOMLO formulas. These are formulas whose truth value at any moment is determined by what happens from that moment on. Yet this fails to extend to Real time domains: here no finite basis of future modalities can express all future FOMLO formulas. In this paper we show that finiteness can be recovered if we slightly soften the requirement that future formulas be totally past-independent: we allow formulas to depend just on the very near past, and maintain the requirement that they be independent of the rest (in fact, of most) of the past. We call them ‘almost future’ formulas, and show that there is a finite basis of almost future modalities which is expressively complete over the Reals for the almost future fragment of FOMLO.

Dorit Pardo (Ordentlich), Alexander Rabinovich

Constructing Premaximal Ternary Square-Free Words of Any Level

We study extendability of ternary square-free words. Namely, we are interested in the square-free words that cannot be infinitely extended preserving square-freeness. We prove that any positive integer is the length of the longest extension of some ternary square-free word and thus solve an open problem by Allouche and Shallit. We also resolve the two-sided version of this problem.

Elena A. Petrova, Arseny M. Shur
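
Square-freeness and extendability can be explored by brute force. This hypothetical sketch (not the paper's construction) tests whether a ternary word is square-free and measures how far a word can be extended to the right while staying square-free, capped by a search limit:

```python
def is_square_free(w):
    """True if w contains no factor of the form xx with x nonempty."""
    n = len(w)
    return not any(w[i:i + l] == w[i + l:i + 2 * l]
                   for l in range(1, n // 2 + 1)
                   for i in range(n - 2 * l + 1))

def longest_extension(w, alphabet="abc", limit=20):
    """Length of the longest right-extension of w whose every step stays
    square-free, capped at `limit` (returns `limit` if the search reaches it)."""
    best = 0
    def dfs(u, depth):
        nonlocal best
        best = max(best, depth)
        if depth >= limit:
            return
        for c in alphabet:
            if is_square_free(u + c):
                dfs(u + c, depth + 1)
    dfs(w, 0)
    return best
```

For example, every one-letter extension of "aa" already contains the square "aa", so its longest extension has length 0, while "ab" extends indefinitely (the search stops at the cap).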

Regularity Problems for Weak Pushdown ω-Automata and Games

We show that the regularity and equivalence problems are decidable for deterministic weak pushdown ω-automata, giving a partial answer to a question raised by Cohen and Gold in 1978. We prove the decidability by a reduction to the corresponding problems for deterministic pushdown automata on finite words. Furthermore, we consider the problem of deciding for pushdown games whether a winning strategy exists that can be implemented by a finite automaton. We show that this problem is already undecidable for games defined by one-counter automata or visibly pushdown automata with a safety condition.

Christof Löding, Stefan Repke

Computational Aspects of Cellular Automata on Countable Sofic Shifts

We investigate the computational properties of cellular automata on countable (equivalently, zero entropy) sofic shifts with an emphasis on nilpotency, periodicity, and asymptotic behavior. As a tool for proving decidability results, we prove the Starfleet Lemma, which is of independent interest. We present computational results including the decidability of nilpotency and periodicity, the undecidability of stability of the limit set, and the existence of a $\mathrm{\Pi}^0_1$-complete limit set and a $\mathrm{\Sigma}^0_3$-complete asymptotic set.

Ville Salo, Ilkka Törmä

Computing Lempel-Ziv Factorization Online

We present an algorithm which computes the Lempel-Ziv factorization of a word W of length n on an alphabet Σ of size σ online in the following sense: it reads W starting from the left, and, after reading each r = O(log_σ n) characters of W, updates the Lempel-Ziv factorization. The algorithm requires O(n log σ) bits of space and O(n log² n) time. The basis of the algorithm is a sparse suffix tree combined with wavelet trees.
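For readers unfamiliar with the object being computed: in the Lempel-Ziv (s-)factorization, each factor is either a fresh letter or the longest prefix of the remaining suffix that also occurs starting at an earlier position (possibly overlapping the factor itself). A naive offline sketch of the definition — not the paper's online, space-efficient algorithm — might look like this:

```python
def lz_factorize(w: str) -> list:
    """Naive Lempel-Ziv (s-)factorization: each factor is the longest
    prefix of the unread suffix that occurs starting at some j < i,
    or a single fresh letter if no such prefix exists."""
    factors = []
    i, n = 0, len(w)
    while i < n:
        L = 1
        # extend while w[i:i+L] also occurs starting at some earlier j
        while i + L <= n and any(w[j:j + L] == w[i:i + L] for j in range(i)):
            L += 1
        # the loop leaves L one past the longest match; minimum factor length is 1
        factors.append(w[i:i + L - 1] if L > 1 else w[i])
        i += max(L - 1, 1)
    return factors
```

For example, `lz_factorize("abaabab")` yields the factors a, b, a, aba, b. This quadratic-time sketch only fixes the definition; the paper's contribution is maintaining the factorization online within O(n log σ) bits.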

Tatiana Starikovskaya

On Two Stronger Versions of Dejean’s Conjecture

Repetition threshold is the smallest number RT(n) such that infinitely many n-ary words contain no repetition of order greater than RT(n). These “extremal” repetition-free words are called threshold words. All values of RT(n) are now known, since the celebrated Dejean’s conjecture (1972) was finally settled in 2009. We study two questions about threshold words. First, does the number of n-ary threshold words grow exponentially with length? This is the case for 3 ≤ n ≤ 10, and a folklore conjecture suggests an affirmative answer for all n ≥ 3. Second, are there infinitely many n-ary threshold words containing only finitely many different repetitions of order RT(n)? The answer is “yes” for n = 3, but nothing was previously known about bigger alphabets.

For odd n = 7,9,…,101, we prove the strongest possible result in this direction. Namely, there are exponentially many n-ary threshold words containing no repetitions of order RT(n) except for the repeats of just one letter.
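The order (exponent) of a repetition is the length of a factor divided by its smallest period. As an illustration of this quantity (our own naive sketch, not from the paper):

```python
from fractions import Fraction

def max_exponent(word: str) -> Fraction:
    """Largest exponent |u|/p over all factors u of `word`, where p is
    the smallest period of u. Naive and slow; for illustration only."""
    best = Fraction(0)
    n = len(word)
    for i in range(n):
        for j in range(i + 1, n + 1):
            u = word[i:j]
            # smallest period p: u[k] == u[k + p] for all valid k
            for p in range(1, len(u) + 1):
                if all(u[k] == u[k + p] for k in range(len(u) - p)):
                    best = max(best, Fraction(len(u), p))
                    break
    return best
```

For instance, "abab" has exponent 2 (period 2, length 4) and "ababa" has exponent 5/2. A threshold word over n letters avoids all factors of exponent greater than RT(n).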

Igor N. Tunev, Arseny M. Shur

Probabilistic Automata and Probabilistic Logic

We present a monadic second-order logic which is extended by an expected value operator and show that this logic is expressively equivalent to probabilistic automata for both finite and infinite words. We give possible syntax extensions and an embedding of our probabilistic logic into weighted MSO logic. We further derive decidability results which are based on corresponding results for probabilistic automata.

Thomas Weidner

A Quadratic Vertex Kernel for Feedback Arc Set in Bipartite Tournaments

The k-feedback arc set problem is to determine whether there is a set F of at most k arcs in a directed graph G such that the removal of F makes G acyclic. The k-feedback arc set problems in tournaments and bipartite tournaments (k-FAST and k-FASBT) have applications in ranking aggregation and have been extensively studied from the viewpoint of parameterized complexity. Recently, Misra et al. provided a problem kernel with O(k³) vertices for k-FASBT. Answering an open question by Misra et al., we improve the kernel bound to O(k²) vertices by introducing a new concept called “bimodule.”

Mingyu Xiao, Jiong Guo

Backmatter
