
About this Book

This two-volume set, LNCS 9234 and 9235, constitutes the refereed proceedings of the 40th International Symposium on Mathematical Foundations of Computer Science, MFCS 2015, held in Milan, Italy, in August 2015. The 82 revised full papers presented together with 5 invited talks were carefully selected from 201 submissions. The papers feature high-quality research in all branches of theoretical computer science. They are organized in the following topical main sections: logic, semantics, automata, and theory of programming (volume 1), and algorithms, complexity, and games (volume 2).

Table of Contents

Frontmatter

Near-Optimal Asymmetric Binary Matrix Partitions

We study the asymmetric binary matrix partition problem that was recently introduced by Alon et al. (WINE 2013) to model the impact of asymmetric information on the revenue of the seller in take-it-or-leave-it sales. Instances of the problem consist of an $$n \times m$$ binary matrix A and a probability distribution over its columns. A partition scheme $$B=(B_1,...,B_n)$$ consists of a partition $$B_i$$ for each row i of A. The partition $$B_i$$ acts as a smoothing operator on row i that distributes the expected value of each partition subset proportionally to all its entries. Given a scheme B that induces a smooth matrix $$A^B$$, the partition value is the expected maximum column entry of $$A^B$$. The objective is to find a partition scheme such that the resulting partition value is maximized. We present a 9/10-approximation algorithm for the case where the probability distribution is uniform and a $$(1-1/e)$$-approximation algorithm for non-uniform distributions, significantly improving the results of Alon et al. Although our first algorithm is combinatorial (and very simple), the analysis is based on linear programming and duality arguments. In our second result we exploit a nice relation of the problem to submodular welfare maximization.
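The objects in this abstract are easy to make concrete. The following sketch (our reading of the definitions; the helper names are ours, not from the paper) smooths each row subset to its conditional expectation under the column distribution and then evaluates the partition value:

```python
def smooth(A, partition, p):
    """A: n x m 0/1 matrix; partition[i]: disjoint column-index subsets
    covering row i; p: probability of each column.
    Each entry is replaced by the conditional expectation of its subset."""
    n, m = len(A), len(A[0])
    AB = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for S in partition[i]:
            mass = sum(p[j] for j in S)
            val = sum(p[j] * A[i][j] for j in S) / mass
            for j in S:
                AB[i][j] = val
    return AB

def partition_value(A, partition, p):
    """Expected maximum column entry of the smoothed matrix A^B."""
    AB = smooth(A, partition, p)
    return sum(p[j] * max(AB[i][j] for i in range(len(A)))
               for j in range(len(A[0])))
```

On the 2x2 identity matrix with uniform columns, singleton subsets give partition value 1.0, while merging row 0's columns into one subset lowers it to 0.75, illustrating how the choice of scheme matters.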

Fidaa Abed, Ioannis Caragiannis, Alexandros A. Voudouris

Dual VP Classes

We consider the complexity class $${\mathsf{ACC}}^1$$ and related families of arithmetic circuits. We prove a variety of collapse results, showing several settings in which no loss of computational power results if the fan-in of gates is severely restricted, as well as presenting a natural class of arithmetic circuits in which no expressive power is lost by severely restricting the algebraic degree of the circuits. These results tend to support a conjecture regarding the computational power of the complexity class VP over finite algebras, and they also highlight the significance of a class of arithmetic circuits that is in some sense dual to VP.

Eric Allender, Anna Gál, Ian Mertz

On Tinhofer’s Linear Programming Approach to Isomorphism Testing

Exploring a linear programming approach to Graph Isomorphism, Tinhofer (1991) defined the notion of compact graphs: a graph is compact if the polytope of its fractional automorphisms is integral. Tinhofer noted that isomorphism testing for compact graphs can be done quite efficiently by linear programming. However, the problem of characterizing and recognizing compact graphs in polynomial time remains an open question. In this paper we make new progress in our understanding of compact graphs. Our results are summarized below:

We show that all graphs G which are distinguishable from any non-isomorphic graph by the classical color-refinement procedure are compact. In other words, the applicability range for Tinhofer's linear programming approach to isomorphism testing is at least as large as for the combinatorial approach based on color refinement.

Exploring the relationship between color refinement and compactness further, we study related combinatorial and algebraic graph properties introduced by Tinhofer and Godsil. We show that the corresponding classes of graphs form a hierarchy and we prove that recognizing each of these graph classes is P-hard. In particular, this gives a first complexity lower bound for recognizing compact graphs.
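The classical color-refinement procedure mentioned above is simple enough to state in a few lines. A minimal sketch (the standard algorithm, not the paper's compactness results): repeatedly recolor every vertex by the pair (own color, multiset of neighbor colors) until the partition into color classes stops growing.

```python
def color_refinement(adj):
    """adj: dict vertex -> iterable of neighbours (undirected graph).
    Refine colours by the multiset of neighbour colours until stable."""
    colors = {v: 0 for v in adj}
    while True:
        # signature = (own colour, sorted multiset of neighbour colours)
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        fresh = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: fresh[sig[v]] for v in adj}
        if len(set(new.values())) == len(set(colors.values())):
            return new  # partition stabilised
        colors = new
```

On a path 0-1-2 the endpoints end up in one class and the middle vertex in another; on any 2-regular graph (e.g. a 6-cycle) all vertices keep a single color, the classic case where color refinement alone cannot distinguish non-isomorphic graphs.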

V. Arvind, Johannes Köbler, Gaurav Rattan, Oleg Verbitsky

On the Complexity of Noncommutative Polynomial Factorization

In this paper we study the complexity of factorization of polynomials in the free noncommutative ring $$\mathbb {F}\langle x_1,x_2,\ldots ,x_n \rangle $$ of polynomials over the field $$\mathbb {F}$$ and noncommuting variables $$x_1,x_2,\ldots ,x_n$$. Our main results are the following:

Although $$\mathbb {F}\langle x_1,\ldots ,x_n \rangle $$ is not a unique factorization ring, we note that variable-disjoint factorization in $$\mathbb {F}\langle x_1,\ldots ,x_n \rangle $$ has the uniqueness property. Furthermore, we prove that computing the variable-disjoint factorization is polynomial-time equivalent to Polynomial Identity Testing (both when the input polynomial is given by an arithmetic circuit and when it is given by an algebraic branching program). We also show that variable-disjoint factorization in the black-box setting can be efficiently computed (where the factors computed will also be given by black-boxes, analogous to the work [12] in the commutative setting).

As a consequence of the previous result we show that homogeneous noncommutative polynomials and multilinear noncommutative polynomials have unique factorizations in the usual sense, which can be efficiently computed.

Finally, we discuss a polynomial decomposition problem in $$\mathbb {F}\langle x_1,\ldots ,x_n \rangle $$ which is a natural generalization of homogeneous polynomial factorization, and prove some complexity bounds for it.

V. Arvind, Gaurav Rattan, Pushkar Joglekar

An Algebraic Proof of the Real Number PCP Theorem

The PCP theorem has recently been shown to hold as well in the real number model of Blum, Shub, and Smale [3]. The proof given there closely follows the structure of the proof of the original PCP theorem by Dinur [7]. In this paper we show that the theorem can also be derived using algebraic techniques similar to those employed by Arora et al. [1, 2] in the first proof of the PCP theorem. This requires considerable additional effort. Due to severe problems when using low-degree algebraic polynomials over the reals as codewords for one of the verifiers to be constructed, we work with certain trigonometric polynomials. This entails the necessity to design new segmentation procedures in order to obtain well-structured real verifiers appropriate for applying the classical technique of verifier composition.

We believe that designing an algebraic proof of the real PCP theorem as well, on the one side, leads to interesting questions in real number complexity theory and, on the other, sheds light on which ingredients are necessary to prove an important result like the PCP theorem in different computational structures.

Martijn Baartse, Klaus Meer

On the Complexity of Hub Labeling (Extended Abstract)

Hub Labeling (HL) is a data structure for distance oracles. Hierarchical HL (HHL) is a special type of HL that has received a lot of attention from a practical point of view. However, theoretical questions such as NP-hardness and approximation guarantees for HHL algorithms have been left aside. We study the computational complexity of HL and HHL. We prove that both HL and HHL are NP-hard, and present upper and lower bounds on the approximation ratios of greedy HHL algorithms that are used in practice. We also introduce a new variant of the greedy HHL algorithm that produces small labels for graphs with small highway dimension.
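For readers unfamiliar with hub labels, the query side of the data structure fits in one function. A minimal sketch (standard textbook formulation, not this paper's constructions): each vertex stores distances to a set of hubs, and, provided every pair of vertices shares a hub on some shortest path between them (the cover property), the distance query is a minimum over common hubs.

```python
def hl_query(label_u, label_v):
    """label_*: dict hub -> distance from the vertex to that hub.
    Under the cover property, the minimum over common hubs is the
    exact shortest-path distance between the two vertices."""
    common = label_u.keys() & label_v.keys()
    return min(label_u[h] + label_v[h] for h in common)
```

For the path 0-1-2-3 a valid labeling is, e.g., label(0)={0:0, 1:1}, label(1)={1:0}, label(2)={1:1, 2:0}, label(3)={1:2, 2:1, 3:0}; queries then return exact path distances, and label size is exactly what HL/HHL algorithms try to minimize.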

Maxim Babenko, Andrew V. Goldberg, Haim Kaplan, Ruslan Savchenko, Mathias Weller

On the Complexity of Speed Scaling

The most commonly studied energy management technique is speed scaling, which involves operating the processor in a slow, energy-efficient mode at non-critical times, and in a fast, energy-inefficient mode at critical times. The natural resulting optimization problems involve scheduling jobs on a speed-scalable processor and have the conflicting dual objectives of minimizing energy usage and minimizing waiting times. One can formulate many different optimization problems depending on how one models the processor (e.g., whether allowed speeds are discrete or continuous, and the nature of the relationship between speed and power), the performance objective (e.g., whether jobs are of equal or unequal importance, and whether one is interested in minimizing waiting times of jobs or of work), and how one handles the dual objective (e.g., whether the two objectives are combined into a single one, or whether one objective is transformed into a constraint). There are a handful of papers in the algorithmic literature that each give an efficient algorithm for a particular formulation. In contrast, the goal of this paper is to look at a reasonably full landscape of all the possible formulations. We give several general reductions which, in some sense, reduce the number of problems that are distinct in a complexity-theoretic sense. We show that some of the problems, for which there are efficient algorithms for a fixed-speed processor, turn out to be NP-hard. We give efficient algorithms for some of the other problems. Finally, we identify those problems that appear not to be resolvable by standard techniques or by the techniques that we develop in this paper for the other problems.

Neal Barcelo, Peter Kling, Michael Nugent, Kirk Pruhs, Michele Scquizzato

Almost All Functions Require Exponential Energy

One potential method to attain more energy-efficient circuits with the current technology is Near-Threshold Computing, which means using less energy per gate by designing the supply voltages to be closer to the threshold voltage of transistors. However, this energy saving comes at the cost of a greater probability of gate failure, which necessitates that the circuits be more fault-tolerant, and thus contain more gates. Achieving energy savings with Near-Threshold Computing therefore involves properly balancing the energy used per gate with the number of gates used. The main result of this paper is that almost all Boolean functions require circuits that use exponential energy, even if the circuits are allowed to use heterogeneous supply voltages. This is not an immediate consequence of Shannon's classic result that almost all functions require exponential-sized circuits of faultless gates because, as we show, the same circuit layout can compute many different functions, depending on the value of the supply voltages. The key step in the proof is to upper bound the number of different functions that one circuit layout can compute. We also show that the Boolean functions that require exponential energy are exactly the Boolean functions that require exponentially many faulty gates.

Neal Barcelo, Michael Nugent, Kirk Pruhs, Michele Scquizzato

On Dynamic DFS Tree in Directed Graphs

Let $$G=(V,E)$$ be a directed graph on n vertices and m edges. We address the problem of maintaining a depth first search (DFS) tree efficiently under insertion/deletion of edges in G.

1. We present an efficient randomized decremental algorithm for maintaining a DFS tree for a directed acyclic graph. For processing any arbitrary online sequence of edge deletions, this algorithm takes expected $$O(mn \log n)$$ time.

2. We present the following lower bound results.

(a) Any decremental (or incremental) algorithm for maintaining the ordered DFS tree explicitly requires $${\varOmega }(mn)$$ total update time in the worst case.

(b) Any decremental (or incremental) algorithm for maintaining the ordered DFS tree is at least as hard as computing all-pairs reachability in a directed graph.

Surender Baswana, Keerti Choudhary

Metric Dimension of Bounded Width Graphs

The notion of resolving sets in a graph was introduced by Slater (1975) and Harary and Melter (1976) as a way of uniquely identifying every vertex in a graph. A set of vertices in a graph is a resolving set if for any pair of vertices x and y there is a vertex in the set which has distinct distances to x and y. A smallest resolving set in a graph is called a metric basis and its size, the metric dimension of the graph. The problem of computing the metric dimension of a graph is a well-known NP-hard problem, and while it was known to be polynomial-time solvable on trees, it is only recently that efforts have been made to understand its computational complexity on various restricted graph classes. In recent work, Foucaud et al. (2015) showed that this problem is NP-complete even on interval graphs. They complemented this result by also showing that it is fixed-parameter tractable (FPT) parameterized by the metric dimension of the graph. In this work, we show that this FPT result can in fact be extended to all graphs of bounded tree-length. This includes well-known classes like chordal graphs, AT-free graphs and permutation graphs. We also show that this problem is FPT parameterized by the modular-width of the input graph.
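The definition of a resolving set translates directly into code. A toy sketch (brute force, exponential time; only meant to make the definition concrete, nothing like the FPT algorithms of the paper): a set S resolves a graph iff the vectors of distances to S are pairwise distinct.

```python
from itertools import combinations

def bfs_dist(adj, src):
    """Single-source shortest-path distances by BFS."""
    dist = {src: 0}
    frontier = [src]
    while frontier:
        nxt = []
        for v in frontier:
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    nxt.append(u)
        frontier = nxt
    return dist

def is_resolving(adj, S):
    """S resolves the graph iff distance vectors to S are pairwise distinct."""
    dists = {s: bfs_dist(adj, s) for s in S}
    seen = set()
    for v in adj:
        vec = tuple(dists[s][v] for s in S)
        if vec in seen:
            return False
        seen.add(vec)
    return True

def metric_dimension(adj):
    """Smallest resolving set, by exhaustive search (tiny graphs only)."""
    verts = list(adj)
    for k in range(1, len(verts) + 1):
        for S in combinations(verts, k):
            if is_resolving(adj, S):
                return k
```

A path has metric dimension 1 (one endpoint distinguishes everything), while a cycle needs 2, matching the classical examples.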

Rémy Belmonte, Fedor V. Fomin, Petr A. Golovach, M. S. Ramanujan

Equality, Revisited

We develop a new lower bound method for analysing the complexity of the Equality function (EQ) in the Simultaneous Message Passing (SMP) model of communication complexity. The new technique gives tight lower bounds of $$\varOmega {\left( \sqrt{n}\right) }$$ for both EQ and its negation NE in the non-deterministic version of quantum-classical SMP, where Merlin is also quantum; this is the strongest known version of SMP where the complexity of both EQ and NE remains high (previously known techniques seem to be insufficient for this).

Besides, our analysis provides a unified view of the communication complexity of EQ and NE, allowing us to obtain tight characterisations in all previously studied and a few newly introduced versions of SMP, including all possible combinations of either quantum or randomised Alice, Bob and Merlin in the non-deterministic case.

Some of our results highlight that NE is easier than EQ in the presence of classical proofs, whereas the problems have (roughly) the same complexity when a quantum proof is present.

Ralph Bottesch, Dmitry Gavinsky, Hartmut Klauck

Bounding the Clique-Width of H-free Chordal Graphs

A graph is H-free if it has no induced subgraph isomorphic to H. Brandstädt, Engelfriet, Le and Lozin proved that the class of chordal graphs with independence number at most 3 has unbounded clique-width. Brandstädt, Le and Mosca erroneously claimed that the gem and the co-gem are the only two 1-vertex $$P_4$$-extensions H for which the class of H-free chordal graphs has bounded clique-width. In fact we prove that bull-free chordal and co-chair-free chordal graphs have clique-width at most 3 and 4, respectively. In particular, we prove that the clique-width is: (i) bounded for four classes of H-free chordal graphs; (ii) unbounded for three subclasses of split graphs. Our main result, obtained by combining new and known results, provides a classification of all but two stubborn cases; that is, with two potential exceptions we determine all graphs H for which the class of H-free chordal graphs has bounded clique-width. We illustrate the usefulness of this classification for classifying other types of graph classes by proving that the class of $$(2P_1+ P_3, K_4)$$-free graphs has bounded clique-width via a reduction to $$K_4$$-free chordal graphs. Finally, we give a complete classification of the (un)boundedness of the clique-width of H-free weakly chordal graphs.

Andreas Brandstädt, Konrad K. Dabrowski, Shenwei Huang, Daniël Paulusma

New Bounds for the CLIQUE-GAP Problem Using Graph Decomposition Theory

Halldórsson, Sun, Szegedy, and Wang (ICALP 2012) [16] investigated the space complexity of the following problem CLIQUE-GAP(r, s): given a graph stream G, distinguish whether $$\omega (G) \ge r$$ or $$\omega (G) \le s$$, where $$\omega (G)$$ is the clique number of G. In particular, they give matching upper and lower bounds for CLIQUE-GAP(r, s) for any r and $$s =c\log (n)$$, for some constant c. The space complexity of the CLIQUE-GAP problem for smaller values of s was left as an open question. In this paper, we answer this open question. Specifically, for $$s=\tilde{O}(\log (n))$$ and for any $$r>s$$, we prove that the space complexity of the CLIQUE-GAP problem is $$\tilde{\varTheta }(\frac{ms^2}{r^2})$$. Our lower bound is based on a new connection between graph decomposition theory (Chung, Erdös, and Spencer [11], and Chung [10]) and the multi-party set disjointness problem in communication complexity.

Vladimir Braverman, Zaoxing Liu, Tejasvam Singh, N. V. Vinodchandran, Lin F. Yang

QMA with Subset State Witnesses

The class QMA plays a fundamental role in quantum complexity theory and it has found surprising connections to condensed matter physics, in particular in the study of the minimum energy of quantum systems. In this paper, we further investigate the class QMA and its related class QCMA by asking what makes quantum witnesses potentially more powerful than classical ones. We provide a definition of a new class, SQMA, where we restrict the possible quantum witnesses to the "simpler" subset states, i.e. uniform superpositions over the elements of a subset of n-bit strings. Surprisingly, we prove that this class is equal to QMA, hence providing a new characterisation of the class QMA. We also describe a new complete problem for QMA and a stronger lower bound for the class $${\text {QMA}}_1$$.

Alex Bredariol Grilo, Iordanis Kerenidis, Jamie Sikora

Phase Transition for Local Search on Planted SAT

The Local Search algorithm (or Hill Climbing, or Iterative Improvement) is one of the simplest heuristics for the Satisfiability and Max-Satisfiability problems. Although it is not the best known Satisfiability algorithm even for the class of problems we study, Local Search is part of many satisfiability and max-satisfiability solvers, where it is used to find a good starting point for more sophisticated heuristics, and to improve a candidate solution. In this paper we give an analysis of Local Search on random planted 3-CNF formulas. We show that a sharp transition in the efficiency of Local Search occurs at density $$\varrho = \frac{7}{6} \ln n$$. Specifically, we show that if there is $$\kappa <\frac{7}{6}$$ such that the clause-to-variable ratio is less than $$\kappa \ln n$$ (n is the number of variables in the CNF) then Local Search whp does not find a satisfying assignment, and if there is $$\kappa >\frac{7}{6}$$ such that the clause-to-variable ratio is greater than $$\kappa \ln n$$ then Local Search whp finds a satisfying assignment. As a byproduct we also show that for any constant $$\varrho $$ there is $$\gamma $$ such that Local Search applied to a random (not necessarily planted) 3-CNF with clause-to-variable ratio $$\varrho $$ produces an assignment that satisfies at least $$\gamma n$$ fewer clauses than the maximum number of simultaneously satisfiable clauses.
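The Local Search heuristic analysed above is a few lines of code. A minimal sketch (plain hill climbing on the number of satisfied clauses, stopping at a local optimum; the interface is ours): literals are nonzero integers, with v meaning x_v is true and -v meaning x_v is false.

```python
import random

def local_search(clauses, n, seed=0):
    """Hill climbing on the number of satisfied clauses.
    clauses: list of tuples of nonzero ints over variables 1..n.
    Returns (assignment, number of unsatisfied clauses at a local optimum)."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in range(1, n + 1)}
    sat = lambda cl: any((lit > 0) == assign[abs(lit)] for lit in cl)
    unsat_count = lambda: sum(not sat(cl) for cl in clauses)
    current = unsat_count()
    improved = True
    while improved and current > 0:
        improved = False
        for v in range(1, n + 1):
            assign[v] = not assign[v]          # try flipping variable v
            new = unsat_count()
            if new < current:                  # accept strictly improving flips
                current, improved = new, True
            else:
                assign[v] = not assign[v]      # revert
    return assign, current
```

On tiny formulas such as {(x1 v x2 v x3), (-x1 v -x2 v -x3)} every run reaches a satisfying assignment; the paper's point is that on random planted formulas the density $$\varrho$$ decides whether such runs succeed whp.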

Andrei A. Bulatov, Evgeny S. Skvortsov

Optimal Bounds for Estimating Entropy with PMF Queries

Let p be an unknown probability distribution on $$[n] := \{1, 2, \dots , n\}$$ that we can access via two kinds of queries: a SAMP query takes no input and returns $$x \in [n]$$ with probability p[x]; a PMF query takes as input $$x \in [n]$$ and returns the value p[x]. We consider the task of estimating the entropy of p to within $$\pm \varDelta $$ (with high probability). For the usual Shannon entropy H(p), we show that $$\varOmega (\log ^2 n/\varDelta ^2)$$ queries are necessary, matching a recent upper bound of Canonne and Rubinfeld. For the Rényi entropy $$H_\alpha $$(p), where $$\alpha >1$$, we show that $$\varTheta (n^{1-1/\alpha })$$ queries are necessary and sufficient. This complements recent work of Acharya et al. in the $$\mathsf{SAMP}$$-only model that showed $$O(n^{1-1/\alpha })$$ queries suffice when $$\alpha $$ is an integer, but $$\widetilde{\varOmega }(n)$$ queries are necessary when $$\alpha $$ is a non-integer. All of our lower bounds also easily extend to the model where CDF queries (given x, return $$\sum _{y \le x} p[y]$$) are allowed.
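The natural upper-bound strategy in this query model is a plug-in estimator: draw x via SAMP, look up p[x] via PMF, and average -log2 p[x], whose expectation is exactly H(p). A hedged sketch (the oracle interface is ours; concentration bounds, not shown, give the $$\pm \varDelta$$ guarantee):

```python
import math

def estimate_shannon_entropy(samp, pmf, q):
    """Estimate H(p) = E_x[-log2 p[x]] using q SAMP queries, each
    followed by one PMF query on the sampled element. The empirical
    mean is an unbiased estimator of the Shannon entropy."""
    total = 0.0
    for _ in range(q):
        x = samp()                    # SAMP: draw x with probability p[x]
        total += -math.log2(pmf(x))   # PMF: exact value of p[x]
    return total / q
```

For the uniform distribution on 4 elements every sample contributes -log2(1/4) = 2, so the estimate is exactly H(p) = 2 bits; for non-uniform p the estimate fluctuates and the number of queries governs the accuracy, which is what the paper's lower bounds quantify.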

Cafer Caferov, Barış Kaya, Ryan O’Donnell, A. C. Cem Say

Mutual Dimension and Random Sequences

If S and T are infinite sequences over a finite alphabet, then the lower and upper mutual dimensions mdim(S : T) and Mdim(S : T) are the lower and upper densities of the algorithmic information that is shared by S and T. In this paper we investigate the relationships between mutual dimension and coupled randomness, which is the algorithmic randomness of two sequences $$R_1$$ and $$R_2$$ with respect to probability measures that may be dependent on one another. For a restricted but interesting class of coupled probability measures we prove an explicit formula for the mutual dimensions $$mdim(R_1:R_2)$$ and $$Mdim(R_1:R_2)$$, and we show that the condition $$Mdim(R_1:R_2) = 0$$ is necessary but not sufficient for $$R_1$$ and $$R_2$$ to be independently random.

We also identify conditions under which Billingsley generalizations of the mutual dimensions mdim(S : T) and Mdim(S : T) can be meaningfully defined; we show that under these conditions these generalized mutual dimensions have the "correct" relationships with the Billingsley generalizations of dim(S), Dim(S), dim(T), and Dim(T) that were developed and applied by Lutz and Mayordomo; and we prove a divergence formula for the values of these generalized mutual dimensions.

Adam Case, Jack H. Lutz

Optimal Algorithms and a PTAS for Cost-Aware Scheduling

We consider a natural generalization of classical scheduling problems in which using a time unit for processing a job causes some time-dependent cost which must be paid in addition to the standard scheduling cost. We study the scheduling objectives of minimizing the makespan and the sum of (weighted) completion times. It is not difficult to derive a polynomial-time algorithm for preemptive scheduling to minimize the makespan on unrelated machines. The problem of minimizing the total (weighted) completion time is considerably harder, even on a single machine. We present a polynomial-time algorithm that computes for any given sequence of jobs an optimal schedule, i.e., the optimal set of time slots to be used for scheduling jobs according to the given sequence. This result is based on dynamic programming using a subtle analysis of the structure of optimal solutions and a potential function argument. With this algorithm, we solve the unweighted problem optimally in polynomial time. Furthermore, we argue that there is a $$(4+{\varepsilon })$$-approximation algorithm for the strongly NP-hard problem with individual job weights. For this weighted version, we also give a PTAS based on a dual scheduling approach introduced for scheduling on a machine of varying speed.
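To build intuition for "choosing the optimal set of time slots for a given job sequence", here is a toy dynamic program (our simplification, not the paper's algorithm, which handles general processing times and weights): n unit-length jobs in a fixed order must be placed into distinct time slots 1..T, minimizing the sum of completion times (the chosen slot indices) plus the costs of the used slots.

```python
def cheapest_slots(costs, n):
    """costs[t-1]: cost of using time slot t; place n unit jobs (fixed
    order) into increasing slots, minimising
    (sum of completion times) + (sum of costs of used slots).
    Simple O(T*n) dynamic program over slots."""
    INF = float("inf")
    # best[j] = optimal value with j jobs placed among the slots seen so far
    best = [0.0] + [INF] * n
    for t, c in enumerate(costs, start=1):
        for j in range(min(t, n), 0, -1):  # j downwards: slot t used at most once
            cand = best[j - 1] + c + t     # j-th job completes at time t
            if cand < best[j]:
                best[j] = cand
    return best[n]
```

With costs [10, 0, 0] and two jobs, skipping the expensive first slot and using slots 2 and 3 gives value 0 + 2 + 0 + 3 = 5, cheaper than starting immediately; this trade-off between waiting and time-dependent cost is exactly the tension the paper studies.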

Lin Chen, Nicole Megow, Roman Rischke, Leen Stougie, José Verschae

Satisfiability Algorithms and Lower Bounds for Boolean Formulas over Finite Bases

We give a #SAT algorithm for Boolean formulas over arbitrary finite bases. Let $$B_k$$ be the basis composed of all Boolean functions on at most k inputs. For $$B_k$$-formulas on n inputs of size cn, our algorithm runs in time $$2^{n(1-\delta _{c,k})}$$ for $$\delta _{c,k} = c^{-O(c^2k2^k)}$$. We also show the average-case hardness of computing affine extractors using linear-size $$B_k$$-formulas.

We also give improved algorithms and lower bounds for formulas over finite unate bases, i.e., bases of functions which are monotone increasing or decreasing in each of the input variables.

Ruiwen Chen

Randomized Polynomial Time Protocol for Combinatorial Slepian-Wolf Problem

We consider the following combinatorial version of the Slepian–Wolf coding scheme. Two isolated Senders are given binary strings X and Y respectively; the length of each string is equal to n, and the Hamming distance between the strings is at most $$\alpha n$$. The Senders compress their strings and communicate the results to the Receiver. Then the Receiver must reconstruct both strings X and Y. The aim is to minimise the lengths of the transmitted messages.

The theoretical optimum of communication complexity for this scheme (with randomised parties) was found in [6], though effective protocols with optimal lengths of messages remained unknown. We close this gap and present a polynomial-time randomised protocol for this communication problem that achieves the optimal communication complexity.

Daniyar Chumbalov, Andrei Romashchenko

Network Creation Games: Think Global – Act Local

We investigate a non-cooperative game-theoretic model for the formation of communication networks by selfish agents. Each agent aims for a central position at minimum cost for creating edges. In particular, the general model (Fabrikant et al., PODC’03) became popular for studying the structure of the Internet or social networks. Despite its significance, locality in this game was first studied only recently (Bilò et al., SPAA’14), where a worst case locality model was presented, which came with a high efficiency loss in terms of quality of equilibria. Our main contribution is a new and more optimistic view on locality: agents are limited in their knowledge and actions to their local view ranges, but can probe different strategies and finally choose the best. We study the influence of our locality notion on the hardness of computing best responses, convergence to equilibria, and quality of equilibria. Moreover, we compare the strength of local versus non-local strategy changes. Our results address the gap between the original model and the worst case locality variant. On the bright side, our efficiency results are in line with observations from the original model, yet we have a non-constant lower bound on the Price of Anarchy.

Andreas Cord-Landwehr, Pascal Lenzner

Oblivious Transfer from Weakly Random Self-Reducible Public-Key Cryptosystem

We define a new notion of Weakly Random-Self-Reducible cryptosystems and show how it can be used to implement secure Oblivious Transfer. We also show that two recent post-quantum cryptosystems (based on Learning With Errors and Approximate Integer GCD) can be viewed as Weakly Random-Self-Reducible.

Claude Crépeau, Raza Ali Kazmi

Efficient Computations over Encrypted Data Blocks

Secure computation (i.e., performing computation while keeping inputs private) is a fundamental problem in cryptography. In this paper, we present an efficient and secure 2-party computation protocol for any function computable via a monotone formula over equality statements between data blocks, under standard cryptographic assumptions. Our result bypasses roadblocks in previous general solutions, like Yao's garbled circuits and Gentry's lattice-based fully homomorphic encryption, by performing secure computations over data blocks (instead of bits) and using typical-size (instead of impractically large) cryptographic keys. An important efficiency property achieved is that the number of cryptographic operations in the protocol is sublinear in the size of the circuit representing the computed function. Even though not as general as the two mentioned techniques, the class of formulae in our result contains a large number of well-known computational problems (while previously, only single specific problems were known to satisfy the mentioned sublinear efficiency property). Our main underlying technique is a new cryptographic primitive, perhaps of independent interest, that we call real-or-random conditional transfer, built as a variant of the well-known Rabin's oblivious transfer primitive.

Giovanni Di Crescenzo, Brian Coan, Jonathan Kirsch

Polynomial Kernels for Weighted Problems

Kernelization is a formalization of efficient preprocessing for $$\mathsf {NP}$$-hard problems using the framework of parameterized complexity. Among open problems in kernelization it has been asked many times whether there are deterministic polynomial kernelizations for Subset Sum and Knapsack when parameterized by the number n of items.

We answer both questions affirmatively by using an algorithm for compressing numbers due to Frank and Tardos (Combinatorica 1987). This result was first used by Marx and Végh (ICALP 2013) in the context of kernelization. We further illustrate its applicability by giving polynomial kernels also for weighted versions of several well-studied parameterized problems. Furthermore, when parameterized by the different item sizes we obtain a polynomial kernelization for Subset Sum and an exponential kernelization for Knapsack. Finally, we also obtain kernelization results for polynomial integer programs.

Michael Etscheid, Stefan Kratsch, Matthias Mnich, Heiko Röglin

A Shortcut to (Sun)Flowers: Kernels in Logarithmic Space or Linear Time

We investigate whether kernelization results can be obtained if we restrict kernelization algorithms to run in logarithmic space. This restriction is motivated by the question of what results are attainable for preprocessing via simple and/or local reduction rules. We find kernelizations for $$d$$-hitting set(k), $$d$$-set packing(k), edge dominating set(k), and a number of hitting and packing problems in graphs, each running in logspace. Additionally, we return to the question of linear-time kernelization. For $$d$$-hitting set(k) a linear-time kernel was given by van Bevern [Algorithmica (2014)]. We give a simpler procedure and save a large constant factor in the size bound. Furthermore, we show that we can obtain a linear-time kernel for $$d$$-set packing(k).

Stefan Fafianie, Stefan Kratsch

Metastability of Asymptotically Well-Behaved Potential Games

(Extended Abstract)

One of the main criticisms of game theory concerns the assumption of full rationality. Logit dynamics is a decentralized algorithm in which a level of irrationality (a.k.a. "noise") is introduced into players' behavior. In this context, the solution concept of interest becomes the logit equilibrium, as opposed to Nash equilibria. Logit equilibria are distributions over strategy profiles that possess several nice properties, including existence and uniqueness. However, there are games in which their computation may take exponential time. We therefore look at an approximate version of logit equilibria, called metastable distributions, introduced by Auletta et al. [4]. These are distributions which remain stable (i.e., players do not go too far from them) for a large number of steps (rather than forever, as for logit equilibria). The hope is that these distributions exist and can be reached quickly by logit dynamics.

We identify a class of potential games, which we name asymptotically well-behaved, for which the behavior of the logit dynamics is not chaotic as the number of players increases, so as to guarantee meaningful asymptotic results. We prove that any such game admits distributions which are metastable no matter the level of noise present in the system and the starting profile of the dynamics. These distributions can be quickly reached if the rationality level is not too big compared to the inverse of the maximum difference in potential. Our proofs build on results which may be of independent interest, including some spectral characterizations of the transition matrix defined by logit dynamics for generic games and the relationship among convergence measures for Markov chains.

Diodato Ferraioli, Carmine Ventre

The Shifted Partial Derivative Complexity of Elementary Symmetric Polynomials

We continue the study of the shifted partial derivative measure, introduced by Kayal (ECCC 2012), which has been used to prove many strong depth-4 circuit lower bounds starting from the work of Kayal, and that of Gupta et al. (CCC 2013).

We show a strong lower bound on the dimension of the shifted partial derivative space of the Elementary Symmetric Polynomials of degree d in N variables for $$d < \log N / \log \log N$$. This extends the work of Nisan and Wigderson (Computational Complexity 1997), who studied the partial derivative space of these polynomials. Prior to our work, there were no results on the shifted partial derivative measure of these polynomials. Our result implies a strong lower bound for Elementary Symmetric Polynomials in the homogeneous $${\Sigma \Pi \Sigma \Pi }$$ model with bounded bottom fan-in. This strengthens (under our degree assumptions) a lower bound of Nisan and Wigderson, who proved the analogous result for the homogeneous $${\Sigma \Pi \Sigma }$$ model (i.e., $${\Sigma \Pi \Sigma \Pi }$$ formulas with bottom fan-in 1).

Our main technical lemma gives a lower bound for the ranks of certain inclusion-like matrices, and may be of independent interest.

Hervé Fournier, Nutan Limaye, Meena Mahajan, Srikanth Srinivasan

Parameterized Algorithms for Parity Games

Determining the winner of a Parity Game is a major problem in computational complexity with a number of applications in verification. In a parameterized complexity setting, the problem has often been considered with parameters such as (directed versions of) treewidth or clique-width, by applying dynamic programming techniques.

In this paper we adopt a parameterized approach inspired more directly by well-known (non-parameterized) algorithms for this problem. We consider a number of natural parameterizations, such as by Directed Feedback Vertex Set, Distance to Tournament, and Modular Width. We show that, for these parameters, it is possible to obtain recursive parameterized algorithms which are simpler, faster and only require polynomial space. We complement these results with some algorithmic lower bounds which, among others, rule out a possible avenue for improving the best-known sub-exponential time algorithm for parity games.

Jakub Gajarský, Michael Lampis, Kazuhisa Makino, Valia Mitsou, Sebastian Ordyniak

Algorithmic Applications of Tree-Cut Width

The recently introduced graph parameter tree-cut width plays a similar role with respect to immersions as the graph parameter treewidth plays with respect to minors. In this paper we provide the first algorithmic applications of tree-cut width to hard combinatorial problems. Tree-cut width is known to be lower-bounded by a function of treewidth, but it can be much larger and hence has the potential to facilitate the efficient solution of problems which are not known to be fixed-parameter tractable (FPT) when parameterized by treewidth. We introduce the notion of nice tree-cut decompositions and provide FPT algorithms for the showcase problems Capacitated Vertex Cover, Capacitated Dominating Set and Imbalance parameterized by the tree-cut width of an input graph G. On the other hand, we show that List Coloring, Precoloring Extension and Boolean CSP (the latter parameterized by the tree-cut width of the incidence graph) are W[1]-hard and hence unlikely to be fixed-parameter tractable when parameterized by tree-cut width.

Robert Ganian, Eun Jung Kim, Stefan Szeider

Log-Concavity and Lower Bounds for Arithmetic Circuits

One question that we investigate in this paper is: how can we build log-concave polynomials using sparse polynomials as building blocks? More precisely, let $$f = \sum _{i = 0}^d a_i X^i \in \mathbb {R}^+[X]$$ be a polynomial satisfying the log-concavity condition $$a_i^2 > \tau a_{i-1}a_{i+1}$$ for every $$i \in \{1,\ldots ,d-1\}$$, where $$\tau > 0$$. Whenever f can be written in the form $$f = \sum _{i = 1}^k \prod _{j = 1}^m f_{i,j}$$, where the polynomials $$f_{i,j}$$ have at most t monomials, it is clear that $$d \leqslant k t^m$$. Assuming that the $$f_{i,j}$$ have only non-negative coefficients, we improve this degree bound to $$d = \mathcal O(k m^{2/3} t^{2m/3} \log ^{2/3}(kt))$$ if $$\tau > 1$$, and to $$d \leqslant kmt$$ if $$\tau = d^{2d}$$.

This investigation has a complexity-theoretic motivation: we show that a suitable strengthening of the above results would imply a separation of the algebraic complexity classes $$\mathsf {VP}$$ and $$\mathsf {VNP}$$. As they currently stand, these results are strong enough to provide a new example of a family of polynomials in $$\mathsf {VNP}$$ which cannot be computed by monotone arithmetic circuits of polynomial size.

Ignacio García-Marco, Pascal Koiran, Sébastien Tavenas
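The log-concavity condition in the abstract above is easy to make concrete. The following is a minimal illustrative sketch (not taken from the paper) that checks whether a coefficient sequence satisfies $$a_i^2 > \tau a_{i-1}a_{i+1}$$ for all interior indices; the function name is our own.

```python
def is_tau_log_concave(coeffs, tau):
    """Check the condition a_i^2 > tau * a_{i-1} * a_{i+1}
    for every i in {1, ..., d-1}, as in the abstract's definition."""
    return all(
        coeffs[i] ** 2 > tau * coeffs[i - 1] * coeffs[i + 1]
        for i in range(1, len(coeffs) - 1)
    )

# The binomial coefficients of (1 + X)^4, namely 1, 4, 6, 4, 1,
# satisfy the condition for tau = 2 (e.g. 6^2 = 36 > 2*4*4 = 32)
# but fail it for tau = 3 (36 < 3*16 = 48).
```
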

Easy Multiple-Precision Divisors and Word-RAM Constants

For integers $$b\ge 2$$ and $$w\ge 1$$, define the (b, w) cover size of an integer A as the smallest nonnegative integer k such that A can be written in the form $$A=\sum _{i=1}^k(-1)^{\sigma _i}b^{\ell _i}d_i$$, where $$\sigma _i$$, $$\ell _i$$ and $$d_i$$ are nonnegative integers and $$0\le d_i<b^w$$, for $$i=1,\ldots ,k$$. We study the efficient execution of arithmetic operations on (multiple-precision) integers of small (b, w) cover size on a word RAM with words of w b-ary digits and constant-time multiplication and division. In particular, it is shown that if A is an n-digit integer and B is a nonzero m-digit integer of (b, w) cover size k, then $$\lfloor A/B\rfloor $$ can be computed in $$O(1+{{(k n+m)}/w})$$ time. Our results facilitate a unified description of word-RAM algorithms operating on integers that may occupy a fraction of a word or several words.

As an application, we consider the fast generation of integers of a special form for use in word-RAM computation. Many published word-RAM algorithms divide a w-bit word conceptually into equal-sized fields and employ full-word constants whose field values depend in simple ways on the field positions. These constants are either simply postulated or computed with ad-hoc methods. We describe a procedure for obtaining constants of the following form in constant time: the i-th field, counted either from the right or from the left, contains g(i), where g is a constant-degree polynomial with integer coefficients that, disregarding mild restrictions, can be arbitrary. This general form covers almost all uses of word-RAM constants in published algorithms that are known to the author.

Torben Hagerup
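To make the notion of such field constants concrete, here is a naive sketch (our own, not the paper's constant-time procedure) that assembles a word whose i-th field, counted from the right, holds g(i); the function name and parameters are illustrative.

```python
def field_constant(g, num_fields, field_bits):
    """Build an integer whose i-th field (0-indexed, counted from
    the right) holds g(i) truncated to the field width.
    This is a naive O(num_fields) loop; the paper's point is that
    such constants can be produced in constant time, which this
    sketch does not attempt."""
    mask = (1 << field_bits) - 1
    word = 0
    for i in range(num_fields):
        word |= (g(i) & mask) << (i * field_bits)
    return word

# Four 8-bit fields holding g(i) = i yield the familiar
# byte-index constant 0x03020100.
```
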

Visibly Counter Languages and the Structure of  $$\mathrm {NC}^{1}$$

We extend the familiar program of understanding circuit complexity in terms of regular languages to visibly counter languages. Like the regular languages, the visibly counter languages are $$\mathrm {NC}^{1}$$-complete. We investigate which visibly counter languages lie in certain constant-depth circuit complexity classes; we initiated this study in a previous work for $$\mathrm {AC}^{0}$$. We present characterizations and decidability results for various logics and circuit classes. In particular, our approach yields a way to understand $$\mathrm {TC}^{0}$$, where the regular approach fails.

Michael Hahn, Andreas Krebs, Klaus-Jörn Lange, Michael Ludwig

The Price of Connectivity for Cycle Transversals

For a family of graphs $$\mathcal {F}$$, an $$\mathcal {F}$$-transversal of a graph G is a subset $$S \subseteq V(G)$$ that intersects every subset of V(G) that induces a subgraph isomorphic to a graph in $$\mathcal {F}$$. Let $$t_\mathcal {F}(G)$$ be the minimum size of an $$\mathcal {F}$$-transversal of G, and $$ ct _\mathcal {F}(G)$$ be the minimum size of an $$\mathcal {F}$$-transversal of G that induces a connected graph. For a class of connected graphs $$\mathcal{G}$$, the price of connectivity for $$\mathcal {F}$$-transversals is the supremum of the ratios $$ ct _\mathcal {F}(G)/t_\mathcal {F}(G)$$ over all $$G\in \mathcal{G}$$. We perform an in-depth study of the price of connectivity for various well-known graph families $$\mathcal{F}$$ that contain an infinite number of cycles and that, in addition, may contain one or more anticycles or short paths. For each of these families we study the price of connectivity for classes of graphs characterized by one forbidden induced subgraph H. We determine exactly those classes of H-free graphs for which this graph parameter is bounded by a multiplicative constant, bounded by an additive constant, or equal to 1. In particular, our tetrachotomies extend known results of Belmonte et al. (EuroComb 2012, MFCS 2013) for the case when $$\mathcal{F}$$ is the family of all cycles.

Tatiana R. Hartinger, Matthew Johnson, Martin Milanič, Daniël Paulusma

Upper and Lower Bounds on Long Dual Paths in Line Arrangements

Given a line arrangement $$\mathcal A$$ with n lines, we show that there exists a path of length $$n^2/3 - O(n)$$ in the dual graph of $$\mathcal A$$ formed by its faces. This bound is tight up to lower order terms. For the bicolored version, we describe an example of a line arrangement with 3k blue and 2k red lines with no alternating path longer than 14k. Further, we show that any line arrangement with n lines has a coloring such that it has an alternating path of length $$\varOmega (n^2/ \log n)$$. Our results also hold for pseudoline arrangements.

Udo Hoffmann, Linda Kleist, Tillmann Miltzow

A Numbers-on-Foreheads Game

Is there a joint distribution of n random variables over the natural numbers, such that they always form an increasing sequence and, whenever you take two subsets of the set of random variables of the same cardinality, their distributions are almost the same?

We show that the answer is yes, but that the random variables will have to take values as large as a tower of exponentials, where $$\epsilon \le \epsilon _n$$ measures how different the two distributions can be, the tower contains $$n-2$$ 2’s, and the constants in the $$\varTheta $$ notation are allowed to depend on n. This result has an important consequence in game theory: it shows that even though you can define extensive-form games that cannot be implemented on players who can tell the time, you can have implementations that approximate the game arbitrarily well.

Sune K. Jakobsen

Faster Lightweight Lempel-Ziv Parsing

We present an algorithm that computes the Lempel-Ziv decomposition in $$O(n(\log \sigma + \log \log n))$$ time and $$n\log \sigma + \epsilon n$$ bits of space, where $$\epsilon $$ is a constant rational parameter, n is the length of the input string, and $$\sigma $$ is the alphabet size. The $$n\log \sigma $$ bits in the space bound are for the input string itself, which is treated as read-only.

Dmitry Kosolobov
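For readers unfamiliar with the object being computed, the following is a textbook-style quadratic-time sketch of the Lempel-Ziv decomposition (our own illustration, nothing like the paper's time- and space-efficient algorithm): each factor is the longest prefix of the remaining suffix that also occurs starting at an earlier position, and a fresh letter forms a factor of length 1.

```python
def lz_factorize(s):
    """Naive O(n^2) Lempel-Ziv decomposition, allowing
    self-overlapping matches.  Illustrative only."""
    factors = []
    i, n = 0, len(s)
    while i < n:
        length = 0
        for j in range(i):                     # earlier start positions
            l = 0
            while i + l < n and s[j + l] == s[i + l]:
                l += 1
            length = max(length, l)
        if length == 0:
            factors.append(s[i])               # fresh letter
            i += 1
        else:
            factors.append(s[i:i + length])
            i += length
    return factors
```

For example, `lz_factorize("abababab")` produces the factors `a`, `b`, `ababab`, the last one being a self-referential match.
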

Parallel Identity Testing for Skew Circuits with Big Powers and Applications

Powerful skew arithmetic circuits are introduced. These are skew arithmetic circuits with variables, where input gates can be labelled with powers $$x^n$$ for binary encoded numbers n. It is shown that polynomial identity testing for powerful skew arithmetic circuits belongs to $$\mathsf {coRNC}^2$$, which generalizes a corresponding result for (standard) skew circuits. Two applications of this result are presented: (i) equivalence of higher-dimensional straight-line programs can be tested in $$\mathsf {coRNC}^2$$; this result is new even in the one-dimensional case, where the straight-line programs produce strings. (ii) The compressed word problem (or circuit evaluation problem) for certain wreath products belongs to $$\mathsf {coRNC}^2$$. Full proofs can be found in the long version [13].

Daniel König, Markus Lohrey

On Probabilistic Space-Bounded Machines with Multiple Access to Random Tape

We investigate probabilistic space-bounded Turing machines that are allowed to make multiple passes over the random tape. As our main contribution, we establish a connection between the derandomization of such probabilistic space-bounded classes and the derandomization of probabilistic time-bounded classes. Our main result is the following.

For some integer $$k>0$$, if all the languages accepted by bounded-error randomized log-space machines that use $$O(\log n \log ^{(k+3)} n)$$ random bits and make $$O(\log ^{(k)} n)$$ passes over the random tape are in deterministic polynomial time, then $$\mathrm{ BPTIME}(n) \subseteq \mathrm{ DTIME}(2^{o(n)})$$. Here $$\log ^{(k)} n$$ denotes the $$\log $$ function applied k times iteratively.

This result can be interpreted as follows: if we restrict the number of random bits to $$O(\log n)$$ for the above randomized machines, then the corresponding set of languages is trivially known to be in P. Further, it can be shown (a proof is given in the main body of the paper) that if we instead restrict the number of passes to only O(1) for the above randomized machines, then the set of languages accepted is in P. Thus our result implies that any non-trivial extension of these simulations will lead to a non-trivial and unknown derandomization of $$\mathrm{ BPTIME}(n)$$. Motivated by this result, we further investigate the power of multi-pass, probabilistic space-bounded machines and establish additional results.

Debasis Mandal, A. Pavan, N. V. Vinodchandran

Densest Subgraph in Dynamic Graph Streams

In this paper, we consider the problem of approximating the densest subgraph in the dynamic graph stream model. In this model of computation, the input graph is defined by an arbitrary sequence of edge insertions and deletions, and the goal is to analyze properties of the resulting graph given memory that is sub-linear in the size of the stream. We present a single-pass algorithm that returns a $$(1+\epsilon )$$ approximation of the maximum density with high probability; the algorithm uses $$O(\epsilon ^{-2} n {{\mathrm{polylog}}}~n)$$ space, processes each stream update in $${{\mathrm{polylog}}}(n)$$ time, and uses $${{\mathrm{poly}}}(n)$$ post-processing time, where n is the number of nodes. The space used by our algorithm matches the lower bound of Bahmani et al. (PVLDB 2012) up to a poly-logarithmic factor for constant $$\epsilon $$. The best existing results for this problem were established recently by Bhattacharya et al. (STOC 2015). They presented a $$(2+\epsilon )$$ approximation algorithm using similar space and another algorithm that both processed each update and maintained a $$(4+\epsilon )$$ approximation of the current maximum density in $${{\mathrm{polylog}}}(n)$$ time per update.

Andrew McGregor, David Tench, Sofya Vorotnikova, Hoa T. Vu

The Offline Carpool Problem Revisited

The carpool problem is to schedule, for every time $$t\in \mathbb {N}$$, l tasks taken from the set [n] ($$n\ge 2$$). Each task i has a weight $$w_{i}(t)\ge 0$$, where $$\sum _{i=1}^n w_{i}(t)=l$$. We let $$c_i(t)\in \{0,1\}$$ be 1 iff task i is scheduled at time t, where (carpool condition) $$w_i(t)=0\Rightarrow c_i(t)=0$$.

The carpool problem appears in the literature for $$l=1$$, with the goal of making the schedule fair by bounding the absolute value of $$E_i(t)=\sum _{s=1}^t[w_{i}(s)-c_{i}(s)]$$. In the typical online setting, $$w_i(t)$$ is unknown prior to time t; therefore, the only sensible approach is to bound $$|E_i(t)|$$ at all times. The optimal online algorithm for $$l=1$$ can guarantee $$|E_i(t)|=O(n)$$. We show that the same guarantee can be maintained for general l. However, this remains far from the ideal $$|E_i(T)|<1$$ when all tasks have reached completion at some future time $$t=T$$.

The main contribution of this paper is the offline version of the carpool problem, where $$w_i(t)$$ is known in advance for all times $$t\le T$$, and the fairness requirement is strengthened to the ideal $$|E_i(T)|<1$$ while keeping $$E_i(t)$$ bounded at all intermediate times $$t<T$$. This problem has been mistakenly considered solved for $$l=1$$ using Tijdeman’s algorithm, so it remains open for $$l\ge 1$$. We show that achieving the ideal fairness with an intermediate $$O(n^2)$$ bound is possible for general l.

Saad Mneimneh, Saman Farhat
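The fairness quantity $$E_i(t)$$ defined above is simple bookkeeping, sketched below for concreteness (function name ours; the papers' contribution is how to choose the schedule c so that these errors stay bounded, which this sketch does not address).

```python
def fairness_errors(w, c):
    """Given weights w[t][i] and schedule indicators c[t][i] in {0,1},
    return the running errors E_i(t) = sum_{s<=t} (w_i(s) - c_i(s))
    after each time step."""
    n = len(w[0])
    errors = [0.0] * n
    history = []
    for wt, ct in zip(w, c):
        for i in range(n):
            errors[i] += wt[i] - ct[i]
        history.append(list(errors))
    return history

# Two tasks sharing weight 0.5 each with l = 1: alternating the
# schedule brings every E_i back to 0 after two steps.
```
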

On Sampling Simple Paths in Planar Graphs According to Their Lengths

We consider the problem of sampling simple paths between two given vertices in a planar graph and propose a natural Markov chain exploring such paths by means of “local” modifications. This chain can be tuned so that the probability of sampling a path depends on its length (for instance, outputting shorter paths with higher probability than longer ones). We show that this chain is always ergodic and thus converges to the desired sampling distribution for any planar graph. While this chain is not rapidly mixing in general, we prove that a simple restricted variant is. The restricted chain samples paths on a 2D lattice which are monotone in the vertical direction. To the best of our knowledge, this is the first example of a rapidly mixing Markov chain for sampling simple paths with a probability that depends on their lengths.

Sandro Montanari, Paolo Penna

Degree-Constrained Subgraph Reconfiguration is in P

The degree-constrained subgraph problem asks for a subgraph of a given graph such that the degree of each vertex is within some specified bounds. We study the following reconfiguration variant of this problem: Given two solutions to a degree-constrained subgraph instance, can we transform one solution into the other by adding and removing individual edges, such that each intermediate subgraph satisfies the degree constraints and contains at least a certain minimum number of edges? This problem is a generalization of the matching reconfiguration problem, which is known to be in P. We show that even in the more general setting the reconfiguration problem is in P.

Moritz Mühlenthaler

Generalized Pseudoforest Deletion: Algorithms and Uniform Kernel

Feedback Vertex Set (FVS) is one of the most well-studied problems in the realm of parameterized complexity. In this problem we are given a graph G and a positive integer k, and the objective is to test whether there exists $$S\subseteq V(G)$$ of size at most k such that $$G-S$$ is a forest. Thus, FVS is about deleting as few vertices as possible to get a forest. The main goal of this paper is to study the following interesting problem: how can we generalize the family of forests such that the nice structural properties of forests and the interesting algorithmic properties of FVS can be extended to problems on this class? Towards this we define a graph class, $$\mathcal{F}_l$$, that contains all graphs where each connected component can be transformed into a forest by deleting at most l edges. The class $$\mathcal{F}_1$$ is known in the literature as the class of pseudoforests, and we call $$\mathcal{F}_l$$ the class of l-pseudoforests. We study the problem of deleting k vertices to get into $$\mathcal{F}_l$$, called l-pseudoforest Deletion, in the realm of parameterized complexity. We show that l-pseudoforest Deletion admits an algorithm with running time $$c_l^k n^{{\mathcal {O}}(1)}$$ and admits a kernel of size $$f(l)k^2$$. Thus, for every fixed l we have a kernel of size $${\mathcal {O}}(k^2)$$; that is, we get a uniform polynomial kernel for l-pseudoforest Deletion. For the special case of $$l=1$$, we design an algorithm with running time $$7.5618^k n^{{\mathcal {O}}(1)}$$. Our algorithms and uniform kernels combine iterative compression, the expansion lemma and protrusion machinery.

Geevarghese Philip, Ashutosh Rai, Saket Saurabh
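Membership in the class $$\mathcal{F}_l$$ is easy to test: a connected component with $$n_C$$ vertices and $$m_C$$ edges becomes a forest after deleting $$m_C - (n_C - 1)$$ edges. Here is a hedged union-find sketch of that check (our own illustration; the hard problem studied in the paper is vertex deletion into $$\mathcal{F}_l$$, not this membership test).

```python
def is_l_pseudoforest(n, edges, l):
    """Check whether a graph (vertices 0..n-1) lies in F_l: every
    connected component C satisfies |E(C)| - (|V(C)| - 1) <= l."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for u, v in edges:
        parent[find(u)] = find(v)

    verts, edge_count = {}, {}
    for v in range(n):
        r = find(v)
        verts[r] = verts.get(r, 0) + 1
    for u, v in edges:
        r = find(u)
        edge_count[r] = edge_count.get(r, 0) + 1
    return all(edge_count.get(r, 0) - (verts[r] - 1) <= l for r in verts)
```

A triangle has one excess edge, so it is a pseudoforest ($$l=1$$) but not a forest ($$l=0$$).
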

Efficient Equilibria in Polymatrix Coordination Games

We consider polymatrix coordination games with individual preferences, where every player corresponds to a node in a graph and plays with each neighbor a separate bimatrix game with non-negative symmetric payoffs. In this paper, we study $$\alpha $$-approximate k-equilibria of these games, i.e., outcomes where no group of at most k players can deviate such that each member increases his payoff by at least a factor $$\alpha $$. We prove that for $$\alpha \ge 2$$ these games have the finite coalitional improvement property (and thus $$\alpha $$-approximate k-equilibria exist), while for $$\alpha < 2$$ this property does not hold. Further, we derive an almost tight bound of $$2\alpha (n-1)/(k-1)$$ on the price of anarchy, where n is the number of players; in particular, it scales from unbounded for pure Nash equilibria ($$k = 1$$) to $$2\alpha $$ for strong equilibria ($$k = n$$). We also settle the complexity of several problems related to the verification and existence of these equilibria. Finally, we investigate natural means to reduce the inefficiency of Nash equilibria. Most promisingly, we show that by fixing the strategies of k players the price of anarchy can be reduced to n/k (and this bound is tight).

Mona Rahn, Guido Schäfer

Finding Consensus Strings with Small Length Difference Between Input and Solution Strings

The parameterised complexity of the Closest Substring Problem and the Consensus Patterns Problem with respect to the parameter $$(\ell - m)$$ is investigated, where $$\ell $$ is the maximum length of the input strings and m is the length of the solution string. We present an exact exponential-time algorithm for both problems, which is based on an alphabet reduction. Furthermore, it is shown that for most combinations of $$(\ell - m)$$ and one of the classical parameters (m, $$\ell $$, number of input strings k, distance d), we obtain fixed-parameter tractability; however, even for constant $$(\ell - m)$$ and constant alphabet size, both problems are $${{\mathrm{\mathsf {NP}}}}$$-hard.

Markus L. Schmid
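To fix ideas about the problem definition, here is a brute-force sketch of Closest Substring (our own; exponential in m, and in no way representative of the paper's parameterised algorithms): find a length-m string that every input string matches within Hamming distance d in some length-m window.

```python
from itertools import product

def closest_substring(strings, m, d, alphabet="ab"):
    """Return a length-m string s such that every input string has a
    length-m substring within Hamming distance d of s, or None."""
    def hamming(x, y):
        return sum(a != b for a, b in zip(x, y))

    for cand in product(alphabet, repeat=m):
        s = "".join(cand)
        if all(
            any(hamming(s, t[i:i + m]) <= d
                for i in range(len(t) - m + 1))
            for t in strings
        ):
            return s
    return None
```
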

Active Linking Attacks

We study linking attacks on communication protocols. We observe that an active attacker is strictly more powerful in this setting than the previously considered passive attackers. We introduce a formal model to reason about active linking attacks, formally define security against these attacks, and give conditions for both security and insecurity of protocols. In addition, we introduce a composition-like technique that allows one to obtain security proofs by studying only small components of a protocol.

Henning Schnoor, Oliver Woizekowski

On the Complexity of Master Problems

A master solution for an instance of a combinatorial problem is a solution with the property that it is optimal for any subinstance. For example, a master tour for an instance of the TSP has the property that restricting the solution to any subset S results in an optimal solution for S. The problem of deciding whether a TSP instance has a master tour is known to be polynomially solvable. Here, we show that the master tour problem is $$\varDelta _2^p$$-complete in the scenario setting, that is, when the subsets S are restricted to some given sets. We also show that the master versions of Steiner tree and maximum weighted satisfiability are $$\varDelta _2^p$$-complete, as is deciding whether the optimal solution for these problems is unique. As for the master tour problem, the special case of the master version of Steiner tree where every subset of vertices is a possible scenario turns out to be polynomially solvable. All the results also hold for metric spaces.

Martijn van Ee, René Sitters

Efficient Algorithm for Computing All Low s-t Edge Connectivities in Directed Graphs

Given a directed graph with n nodes and m edges, the (strong) edge connectivity $$\lambda (u,v)$$ between two nodes u and v is the minimum number of edges whose deletion leaves u and v not strongly connected. The edge connectivities between all pairs of nodes of a directed graph can be computed in $$O(m^\omega )$$ time by the algorithm of Cheung, Lau and Leung (FOCS 2011), where $$\omega $$ is the matrix multiplication exponent ($$\approx 2.373$$), or in $$\tilde{O}(mn^{1.5})$$ time using O(n) computations of max-flows by the method of Cheng and Hu (IPCO 1990).

We consider in this paper the “low edge connectivity” problem, which aims at computing the edge connectivities for the pairs of nodes (u, v) such that $$\lambda (u,v)\le k$$. While the undirected version of this problem was considered by Hariharan, Kavitha and Panigrahi (SODA 2007), who presented an algorithm with expected running time $$\tilde{O}(m+nk^3)$$, no algorithm better than computing all-pairs edge connectivities was proposed for directed graphs. We provide an algorithm that computes all low edge connectivities in O(kmn) time, improving the previous best result of $$O(\min (m^\omega , mn^{1.5}))$$ when $$k\le \sqrt{n}$$. Our algorithm also computes a minimum u-v cut for each pair of nodes (u, v) with $$\lambda (u,v)\le k$$.

Xiaowei Wu, Chenzi Zhang
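The quantity $$\lambda (u,v)$$ equals the value of a unit-capacity u-v max flow. As a baseline for comparison, here is a textbook Edmonds-Karp sketch for a single pair (our own; the paper's contribution is computing all low connectivities in O(kmn) total time, far faster than running this per pair).

```python
from collections import deque

def edge_connectivity(n, edges, s, t):
    """lambda(s, t) of a directed multigraph (vertices 0..n-1),
    computed as a unit-capacity s-t max flow via BFS augmenting
    paths in the residual graph."""
    cap = {}
    adj = [set() for _ in range(n)]
    for u, v in edges:
        cap[(u, v)] = cap.get((u, v), 0) + 1
        cap.setdefault((v, u), 0)          # residual arc
        adj[u].add(v)
        adj[v].add(u)
    flow = 0
    while True:
        prev = {s: None}                   # BFS for an augmenting path
        q = deque([s])
        while q and t not in prev:
            x = q.popleft()
            for y in adj[x]:
                if y not in prev and cap[(x, y)] > 0:
                    prev[y] = x
                    q.append(y)
        if t not in prev:
            return flow
        y = t                              # push one unit along the path
        while prev[y] is not None:
            x = prev[y]
            cap[(x, y)] -= 1
            cap[(y, x)] += 1
            y = x
        flow += 1
```
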

Maximum Minimal Vertex Cover Parameterized by Vertex Cover

The parameterized complexity of problems is often studied with respect to the size of their optimal solutions. However, for a maximization problem, the size of the optimal solution can be very large, rendering algorithms parameterized by it inefficient. Therefore, we suggest studying the parameterized complexity of maximization problems with respect to the size of the optimal solutions to their minimization versions. We examine this suggestion by considering the Maximum Minimal Vertex Cover (MMVC) problem, whose minimization version, Vertex Cover, is one of the most studied problems in the field of parameterized complexity. Our main contribution is a parameterized approximation algorithm for MMVC, including its weighted variant. We also give conditional lower bounds for the running times of algorithms for MMVC and its weighted variant.

Meirav Zehavi

Fast Dynamic Weight Matchings in Convex Bipartite Graphs

This paper considers the problem of maintaining a maximum weight matching in a dynamic vertex-weighted convex bipartite graph $${G=(X,Y,E)}$$, in which the neighbors of each $${x\in X}$$ form an interval of Y, where Y is linearly ordered, and each vertex has an associated weight. The graph is subject to insertions and deletions of vertices and edges. Our algorithm supports the update operations in $${O(\log ^2{|V|})}$$ amortized time, obtains the matching status of a vertex (whether it is matched) in constant worst-case time, and finds the mate of a matched vertex (the vertex with which it is matched) in polylogarithmic worst-case time. Our solution is more efficient than the best known solution for the unweighted version of the problem.

Quan Zu, Miaomiao Zhang, Bin Yu

Backmatter
