
2016 | Book

LATIN 2016: Theoretical Informatics

12th Latin American Symposium, Ensenada, Mexico, April 11-15, 2016, Proceedings

Edited by: Evangelos Kranakis, Gonzalo Navarro, Edgar Chávez

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 12th Latin American Symposium on Theoretical Informatics, LATIN 2016, held in Ensenada, Mexico, in April 2016.
The 52 papers presented together with 5 abstracts were carefully reviewed and selected from 131 submissions. The papers address a variety of topics in theoretical computer science with a certain focus on algorithms (approximation, online, randomized, algorithmic game theory, etc.), analytic combinatorics and analysis of algorithms, automata theory and formal languages, coding theory and data compression, combinatorial algorithms, combinatorial optimization, combinatorics and graph theory, complexity theory, computational algebra, computational biology, computational geometry, computational number theory, cryptology, databases and information retrieval, data structures, formal methods and security, Internet and the web, parallel and distributed computing, pattern matching, programming language theory, and random structures.

Table of Contents

Frontmatter
A Faster FPT Algorithm and a Smaller Kernel for Block Graph Vertex Deletion

A graph G is called a block graph if every maximal 2-connected component of G is a clique. In this paper we study Block Graph Vertex Deletion from the perspective of fixed-parameter tractable (FPT) and kernelization algorithms. An input to Block Graph Vertex Deletion consists of a graph G and a positive integer k, and the objective is to check whether there exists a subset $S \subseteq V(G)$ of size at most k such that the graph induced on $V(G) \setminus S$ is a block graph. We give an FPT algorithm with running time $4^k n^{\mathcal{O}(1)}$ and a polynomial kernel of size $\mathcal{O}(k^4)$ for Block Graph Vertex Deletion. The running time of our FPT algorithm improves over the previous best algorithm for the problem, which runs in time $10^k n^{\mathcal{O}(1)}$, and the size of our kernel improves over the previously known kernel of size $\mathcal{O}(k^6)$. Our results are based on a novel connection between Block Graph Vertex Deletion and the classical Feedback Vertex Set problem in graphs without induced $C_4$ and $K_4 - e$. To achieve our results we also obtain an algorithm for Weighted Feedback Vertex Set running in time $3.618^k n^{\mathcal{O}(1)}$, improving over the previously known algorithm with running time $5^k n^{\mathcal{O}(1)}$.

Akanksha Agrawal, Sudeshna Kolay, Daniel Lokshtanov, Saket Saurabh
A Middle Curve Based on Discrete Fréchet Distance

Given a set of polygonal curves we seek to find a middle curve that represents the set of curves. We require that the middle curve consists of points of the input curves and that it minimizes the discrete Fréchet distance to the input curves. We present algorithms for three different variants of this problem: computing an ordered middle curve, computing an ordered and restricted middle curve, and computing an unordered middle curve.

Hee-Kap Ahn, Helmut Alt, Maike Buchin, Eunjin Oh, Ludmila Scharf, Carola Wenk
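The middle-curve variants above all minimize the discrete Fréchet distance to the input curves. As background, the discrete Fréchet distance between two polygonal curves can be computed with the classic quadratic-time dynamic program; the function name and point representation below are illustrative, not taken from the paper:

```python
from math import dist

def discrete_frechet(P, Q):
    """Discrete Fréchet distance between polygonal curves P and Q,
    given as lists of points, via the standard O(|P||Q|) dynamic program."""
    n, m = len(P), len(Q)
    ca = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            d = dist(P[i], Q[j])
            if i == 0 and j == 0:
                ca[i][j] = d
            elif i == 0:
                ca[i][j] = max(ca[0][j - 1], d)
            elif j == 0:
                ca[i][j] = max(ca[i - 1][0], d)
            else:
                # advance on P, on Q, or on both, whichever is cheapest
                ca[i][j] = max(min(ca[i - 1][j], ca[i - 1][j - 1], ca[i][j - 1]), d)
    return ca[n - 1][m - 1]
```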
Comparison-Based FIFO Buffer Management in QoS Switches

The following online problem arises in network devices, e.g., switches, with quality of service (QoS) guarantees. In each time step, an arbitrary number of packets arrive at a single FIFO buffer and only one packet can be transmitted. Packets may be kept in the buffer of limited size and, due to the FIFO property, the sequence of transmitted packets has to be a subsequence of the arriving packets. The differentiated service concept is implemented by attributing to each packet a non-negative value corresponding to its service level. A buffer management algorithm can reject arriving packets and preempt buffered packets. The goal is to maximize the total value of transmitted packets.

We study comparison-based buffer management algorithms, i.e., algorithms that make their decisions based solely on the relative order between packet values, with no regard to the actual values. This kind of algorithm proves to be robust in the realm of QoS switches. Kesselman et al. [13] present a comparison-based algorithm that is 2-competitive. For a long time, it has been an open problem whether a comparison-based algorithm exists with a competitive ratio below 2. We present a lower bound of $1 + 1/\sqrt{2} \approx 1.707$ on the competitive ratio of any deterministic comparison-based algorithm and give an algorithm that matches this lower bound in the case of monotonic sequences, i.e., when packets arrive in non-decreasing order of value.

Kamal Al-Bawani, Matthias Englert, Matthias Westermann
Scheduling on Power-Heterogeneous Processors

We consider the problem of scheduling a set of jobs, each one specified by its release date, its deadline and its processing volume, on a set of heterogeneous speed-scalable processors, where the energy-consumption rate is processor-dependent. Our objective is to minimize the total energy consumption when both the preemption and the migration of jobs are allowed. We propose a new algorithm based on a compact linear programming formulation. Our method approaches the value of the optimal solution within any desired accuracy for a large set of continuous power functions. Furthermore, we develop a faster combinatorial algorithm based on flows for standard power functions and jobs whose density is lower bounded by a small constant. Finally, we extend and analyze the AVerage Rate (AVR) online algorithm in the heterogeneous setting.

Susanne Albers, Evripidis Bampis, Dimitrios Letsios, Giorgio Lucarelli, Richard Stotz
Period Recovery over the Hamming and Edit Distances

A string S of length n has period P of length p if $S[i] = S[i+p]$ for all $1 \le i \le n-p$ and $n \ge 2p$. The shortest such substring, P, is called the period of S, and the string S is called periodic in P. In this paper we investigate the period recovery problem. Given a string S of length n, find the primitive period(s) P such that the distance between S and the string that is periodic in P is below a threshold $\tau$. We consider the period recovery problem over both the Hamming distance and the edit distance. For the Hamming distance case, we present an $O(n \log n)$ time algorithm, where $\tau$ is given as $\frac{n}{(2+\epsilon)p}$, for $0 < \epsilon < 1$. For the edit distance case, $\tau = \frac{n}{(4+\epsilon)p}$, and we provide an $O(n^{4/3})$ time algorithm.

Amihood Amir, Mika Amit, Gad M. Landau, Dina Sokol
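The exact (distance-zero) version of the period definition above is easy to check directly. The following naive quadratic sketch only illustrates the definition; it is not the paper's recovery algorithm, which tolerates errors under Hamming and edit distance:

```python
def period(S):
    """Return the period P of S, i.e. the shortest prefix of length p
    with S[i] == S[i+p] for all valid i and n >= 2p, or None if S is
    not periodic under this definition."""
    n = len(S)
    for p in range(1, n // 2 + 1):      # the condition n >= 2p caps p
        if all(S[i] == S[i + p] for i in range(n - p)):
            return S[:p]
    return None
```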
Chasing Convex Bodies and Functions

We consider three related online problems: Online Convex Optimization, Convex Body Chasing, and Lazy Convex Body Chasing. In Online Convex Optimization the input is an online sequence of convex functions over some Euclidean space. In response to a function, the online algorithm can move to any destination point in the Euclidean space. The cost is the total distance moved plus the sum of the function costs at the destination points. Lazy Convex Body Chasing is a special case of Online Convex Optimization where the function is zero in some convex region and grows linearly with the distance from this region, and Convex Body Chasing is a special case of Lazy Convex Body Chasing where the destination point has to be in the convex region. We show that these problems are equivalent in the sense that if any one of them has an O(1)-competitive algorithm, then all of them do. By leveraging these results we obtain the first O(1)-competitive algorithm for Online Convex Optimization in two dimensions, and give the first O(1)-competitive algorithm for chasing linear subspaces. We also give a simple algorithm and O(1)-competitiveness analysis for chasing lines.

Antonios Antoniadis, Neal Barcelo, Michael Nugent, Kirk Pruhs, Kevin Schewior, Michele Scquizzato
Parameterized Lower Bounds and Dichotomy Results for the NP-completeness of H-free Edge Modification Problems

For a graph H, the H-free Edge Deletion problem asks whether there exist at most k edges whose deletion from the input graph G results in a graph without any induced copy of H. H-free Edge Completion and H-free Edge Editing are defined similarly, where only completion (addition) of edges is allowed in the former and both completion and deletion are allowed in the latter. We completely settle the classical complexities of these problems by proving that H-free Edge Deletion is NP-complete if and only if H is a graph with at least two edges, H-free Edge Completion is NP-complete if and only if H is a graph with at least two non-edges, and H-free Edge Editing is NP-complete if and only if H is a graph with at least three vertices. Our result on H-free Edge Editing resolves a conjecture by Alon and Stav (2009). Additionally, we prove that these NP-complete problems cannot be solved in parameterized subexponential time, i.e., in time $2^{o(k)} \cdot |G|^{O(1)}$, unless the Exponential Time Hypothesis fails. Furthermore, we obtain implications on the incompressibility of these problems.

N. R. Aravind, R. B. Sandeep, Naveen Sivadasan
Parameterized Complexity of Red Blue Set Cover for Lines

We investigate the parameterized complexity of Generalized Red Blue Set Cover (Gen-RBSC), a generalization of the classic Set Cover problem and the more recently studied Red Blue Set Cover problem. Given a universe U containing b blue elements and r red elements, positive integers $k_\ell$ and $k_r$, and a family $\mathcal{F}$ of $\ell$ sets over U, the Gen-RBSC problem is to decide whether there is a subfamily $\mathcal{F}' \subseteq \mathcal{F}$ of size at most $k_\ell$ that covers all blue elements, but at most $k_r$ of the red elements. This generalizes Set Cover, and thus in full generality it is intractable in the parameterized setting when parameterized by $k_\ell + k_r$. In this paper, we study Gen-RBSC-lines, where the elements are points in the plane and sets are defined by lines. We study this problem for the parameters $k_\ell$, $k_r$, and $k_\ell + k_r$. For all these cases, we either prove that the problem is W-hard or show that it is fixed-parameter tractable (FPT). Finally, for the parameter $k_\ell + k_r$, for which Gen-RBSC-lines admits FPT algorithms, we show that the problem does not have a polynomial kernel unless $\text{co-NP} \subseteq \text{NP}/\mathrm{poly}$. Further, we show that the FPT algorithm does not generalize to higher dimensions.

Pradeesha Ashok, Sudeshna Kolay, Saket Saurabh
Tight Bounds for Beacon-Based Coverage in Simple Rectilinear Polygons

We establish tight bounds for beacon-based coverage problems. In particular, we show that $\lfloor \frac{n}{6} \rfloor$ beacons are always sufficient and sometimes necessary to cover a simple rectilinear polygon P with n vertices. When P is monotone and rectilinear, we prove that this bound becomes $\lfloor \frac{n+4}{8} \rfloor$. We also present an optimal linear-time algorithm for computing the beacon kernel of P.

Sang Won Bae, Chan-Su Shin, Antoine Vigneron
On Mobile Agent Verifiable Problems

We consider decision problems that are solved in a distributed fashion by synchronous mobile agents operating in an unknown, anonymous network. Each agent has a unique identifier and an input string and they have to decide collectively a property which may involve their input strings, the graph on which they are operating, and their particular starting positions. Building on recent work by Fraigniaud and Pelc [LATIN 2012, LNCS 7256, pp. 362–374], we introduce several natural new computability classes allowing for a finer classification of problems below $\mathsf{co\text{-}MAV}$ or $\mathsf{MAV}$, the latter being the class of problems that are verifiable when the agents are provided with an appropriate certificate. We provide inclusion and separation results among all these classes. We also determine their closure properties with respect to set-theoretic operations. Our main technical tool, which is of independent interest, is a new meta-protocol that enables the execution of a possibly infinite number of mobile agent protocols essentially in parallel, similarly to the well-known dovetailing technique from classical computability theory.

Evangelos Bampas, David Ilcinkas
Computing Maximal Layers of Points in $E^{f(n)}$

In this paper we present a randomized algorithm for computing the collection of maximal layers for a point set in $E^k$ ($k = f(n)$). The input to our algorithm is a point set $P = \{p_1, \ldots, p_n\}$ with $p_i \in E^k$. The proposed algorithm achieves a runtime of $O\left(kn^{2 - \frac{1}{\log k} + \log_k\left(1 + \frac{2}{k+1}\right)} \log n\right)$ when P is in random order and a runtime of $O(k^2 n^{3/2 + (\log_k(k-1))/2} \log n)$ for an arbitrary P. Both bounds hold in expectation. Additionally, the runtime is bounded by $O(kn^2)$ in the worst case. This is the first non-trivial algorithm whose runtime remains polynomial whenever f(n) is bounded by some polynomial in n, while remaining sub-quadratic in n for constant k (in expectation). The algorithm is implemented using a new data structure for storing and answering dominance queries over the set of incomparable points.

Indranil Banerjee, Dana Richards
On the Total Number of Bends for Planar Octilinear Drawings

An octilinear drawing of a planar graph is one in which each edge is drawn as a sequence of horizontal, vertical and diagonal ($45^\circ$) line segments. For such drawings to be readable, special care is needed to keep the number of bends small. As the problem of finding planar octilinear drawings with the minimum number of bends is NP-hard, in this paper we focus on upper and lower bounds. From a recent result of Keszegh et al. on the slope number of planar graphs, we can derive an upper bound of $4n - 10$ bends for 8-planar graphs with n vertices. We considerably improve this general bound, and corresponding previous ones, for triconnected 4-, 5- and 6-planar graphs. We also derive non-trivial lower bounds for these three classes of graphs by a technique inspired by the network flow formulation of Tamassia.

Michael A. Bekos, Michael Kaufmann, Robert Krug
Bidirectional Variable-Order de Bruijn Graphs

Implementing de Bruijn graphs compactly is an important problem because of their role in genome assembly. There are currently two main approaches, one using Bloom filters and the other using a kind of Burrows-Wheeler Transform on the edge labels of the graph. The second representation is more elegant and can even handle many graph-orders at once, but it does not cleanly support traversing edges backwards or inserting new nodes or edges. In this paper we resolve the first of these issues and partially address the second.

Djamal Belazzougui, Travis Gagie, Veli Mäkinen, Marco Previtali, Simon J. Puglisi
The Read/Write Protocol Complex Is Collapsible

The celebrated asynchronous computability theorem provides a characterization of the class of decision tasks that can be solved in a wait-free manner by asynchronous processes that communicate by writing and taking atomic snapshots of a shared memory. Several variations of the model have been proposed (immediate snapshots and iterated immediate snapshots), all equivalent for wait-free solution of decision tasks, in spite of the fact that the protocol complexes that arise from the different models are structurally distinct. The topological and combinatorial properties of these snapshot protocol complexes have been studied in detail, providing explanations for why the asynchronous computability theorem holds in all the models.

In reality, concurrent systems do not provide processes with snapshot operations. Instead, snapshots are implemented (by a wait-free protocol) using operations that write and read individual shared memory locations. Thus, read/write protocols are also computationally equivalent to snapshot protocols. However, the structure of the read/write protocol complex has not been studied. In this paper we show that the read/write iterated protocol complex is collapsible (and hence contractible). Furthermore, we show that a distributed protocol that wait-free implements atomic snapshots is, in effect, performing the collapses.

Fernando Benavides, Sergio Rajsbaum
The I/O Complexity of Computing Prime Tables

We revisit classical sieves for computing primes and analyze their performance in the external-memory model. Most prior sieves are analyzed in the RAM model, where the focus is on minimizing both the total number of operations and the size of the working set. The hope is that if the working set fits in RAM, then the sieve will have good I/O performance, though such an outcome is by no means guaranteed by a small working-set size.

We analyze our algorithms directly in terms of I/Os and operations. In the external-memory model, permutation can be the most expensive aspect of sieving, in contrast to the RAM model, where permutations are trivial. We show how to implement classical sieves so that they have both good I/O performance and good RAM performance, even when the problem size N becomes huge (superpolynomially larger than RAM). Towards this goal, we give two I/O-efficient priority queues that are optimized for the operations incurred by these sieves.

Michael A. Bender, Rezaul Chowdhury, Alexander Conway, Martín Farach-Colton, Pramod Ganapathi, Rob Johnson, Samuel McCauley, Bertrand Simon, Shikha Singh
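For reference, the classical RAM-model sieve of Eratosthenes that the paper re-engineers for external memory can be sketched as follows; this version makes no attempt at I/O efficiency, and its poor locality on a huge bit array is exactly the issue the paper addresses:

```python
def primes_up_to(N):
    """Sieve of Eratosthenes: O(N log log N) operations over a byte array.
    Locality is poor once the array exceeds RAM, hence the I/O analysis."""
    is_prime = bytearray([1]) * (N + 1)
    is_prime[0:2] = b"\x00\x00"            # 0 and 1 are not prime
    for p in range(2, int(N ** 0.5) + 1):
        if is_prime[p]:
            # cross out multiples of p, starting at p*p
            is_prime[p * p :: p] = bytearray(len(range(p * p, N + 1, p)))
    return [i for i in range(2, N + 1) if is_prime[i]]
```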
Increasing Diamonds

A class of diamond-shaped combinatorial structures is studied whose enumerating generating functions satisfy differential equations of the form $f'' = G(f)$, for some function G. In addition to their own interest, and being natural extensions of increasing trees, the study of such DAG structures was motivated by modelling executions of series-parallel concurrent processes; they may also be used in other digraph contexts having simultaneously a source and a sink, and are closely connected to a few other known combinatorial structures such as trees, cacti and permutations. We explore in this extended abstract the analytic-combinatorial aspect of these structures, as well as the algorithmic issues for efficiently generating random instances.

Olivier Bodini, Matthieu Dien, Xavier Fontaine, Antoine Genitrini, Hsien-Kuei Hwang
Scheduling Transfers of Resources over Time: Towards Car-Sharing with Flexible Drop-Offs

We consider an offline car-sharing assignment problem with flexible drop-offs, in which n users (customers) present their driving demands, and the system aims to assign the cars, initially located at given locations, to maximize the number of satisfied users. Each driving demand specifies the pick-up location and the drop-off location, as well as the time interval in which the car will be used. If a user requests several driving demands, then she is satisfied only if all her demands are fulfilled. We show that minimizing the number of vehicles that are needed to fulfill all demands is solvable in polynomial time. If every user has exactly one demand, we show that for a given number of cars at the locations, maximizing the number of satisfied users is also solvable in polynomial time. We then study the problem with two locations A and B, where every user has two demands: one demand for transfer from A to B, and one demand for transfer from B to A, not necessarily in this order. We show that maximizing the number of satisfied users is NP-hard, and even APX-hard, even if all the transfers take exactly the same (non-zero) time. On the other hand, if all the transfers are instantaneous, the problem is again solvable in polynomial time.

Kateřina Böhmová, Yann Disser, Matúš Mihalák, Rastislav Šrámek
A 0.821-Ratio Purely Combinatorial Algorithm for Maximum k-vertex Cover in Bipartite Graphs

We study the polynomial time approximation of the max k-vertex cover problem in bipartite graphs and propose a purely combinatorial algorithm that beats the only such known algorithm, namely the greedy approach. We present a computer-assisted analysis of our algorithm, establishing that the worst case approximation guarantee is bounded below by 0.821.

Édouard Bonnet, Bruno Escoffier, Vangelis Th. Paschos, Georgios Stamoulis
Improved Spanning Ratio for Low Degree Plane Spanners

We describe an algorithm that builds a plane spanner with a maximum degree of 8 and a spanning ratio of approximately 4.414 with respect to the complete graph. This is the best currently known spanning ratio for a plane spanner with a maximum degree of less than 14.

Prosenjit Bose, Darryl Hill, Michiel Smid
Constructing Consistent Digital Line Segments

Our concern is the digitalization of line segments in the unit grid as considered by Chun et al. [Discrete Comput. Geom., 2009], Christ et al. [Discrete Comput. Geom., 2012], and Chowdhury and Gibson [ESA, 2015]. In this setting, digital segments are defined so that they satisfy a set of axioms also satisfied by Euclidean line segments. The key property that differentiates this research from other research in digital line segments is that the intersection of any two segments must be connected. A system of digital line segments that satisfies these desired axioms is called a consistent digital line segments system (CDS). The main contribution of this paper is to show that any collection of digital segments that satisfies the CDS properties in a finite $n \times n$ grid graph can be extended to a full CDS (with a segment for every pair of grid points). Moreover, we show that this extension can be computed with a polynomial-time algorithm. The algorithm is such that one can manually define the segments for some subset of the grid. For example, suppose one wants to precisely define the boundary of a digital polygon. Then we would only be interested in CDSes such that the digital line segments connecting the vertices of the polygon fit this desired boundary definition. Our algorithm allows one to manually specify the definitions of these desired segments. For any such definition that satisfies all CDS properties, our algorithm will return in polynomial time a CDS that “fits” with these manually chosen segments.

Iffat Chowdhury, Matt Gibson
Faster Information Gathering in Ad-Hoc Radio Tree Networks

We study information gathering in ad-hoc radio networks. Initially, each node of the network has a piece of information called a rumor, and the overall objective is to gather all these rumors in the designated target node. The ad-hoc property refers to the fact that the topology of the network is unknown when the computation starts. Aggregation of rumors is not allowed, which means that each node may transmit at most one rumor in one step.

We focus on networks with tree topologies; that is, we assume that the network is a tree with all edges directed towards the root, but, being ad-hoc, its actual topology is not known. We provide two deterministic algorithms for this problem. For the model that assumes neither collision detection nor acknowledgement mechanisms, we give an $O(n \log \log n)$-time algorithm, improving the previous upper bound of $O(n \log n)$. We also show that this running time can be further reduced to O(n) if the model allows for acknowledgements of successful transmissions.

Marek Chrobak, Kevin P. Costello
Stabbing Circles for Sets of Segments in the Plane

Stabbing a set S of n segments in the plane by a line is a well-known problem. In this paper we consider the variation where the stabbing object is a circle instead of a line. We show that the problem is tightly connected to cluster Voronoi diagrams, in particular the Hausdorff and the farthest-color Voronoi diagrams. Based on these diagrams, we provide a method to compute all the combinatorially different stabbing circles for S, and the stabbing circles with maximum and minimum radius. We give conditions under which our method is fast. These conditions are satisfied if the segments in S are parallel, resulting in an $O(n \log^2 n)$ time algorithm. We also observe that the stabbing circle problem for S can be solved in optimal $O(n^2)$ time and space by reducing the problem to computing the stabbing planes for a set of segments in 3D.

Mercè Claverol, Elena Khramtcova, Evanthia Papadopoulou, Maria Saumell, Carlos Seara
Faster Algorithms to Enumerate Hypergraph Transversals

A transversal of a hypergraph is a set of vertices intersecting each hyperedge. We design and analyze new exponential-time polynomial-space algorithms to enumerate all inclusion-minimal transversals of a hypergraph. For each fixed $k \ge 3$, our algorithms for hypergraphs of rank k, where the rank is the maximum size of a hyperedge, outperform the previous best. This also implies improved upper bounds on the maximum number of minimal transversals in n-vertex hypergraphs of rank $k \ge 3$. Our main algorithm is a branching algorithm whose running time is analyzed with Measure and Conquer. It enumerates all minimal transversals of hypergraphs of rank 3 in time $O(1.6755^n)$. Our enumeration algorithms improve upon the best known algorithms for counting minimum transversals in hypergraphs of rank k for $k \ge 3$ and for computing a minimum transversal in hypergraphs of rank k for $k \ge 6$.

Manfred Cochefert, Jean-François Couturier, Serge Gaspers, Dieter Kratsch
Listing Acyclic Orientations of Graphs with Single and Multiple Sources

We study enumeration problems for the acyclic orientations of an undirected graph with n nodes and m edges, where each edge must be assigned a direction so that the resulting directed graph is acyclic. When the acyclic orientations have single or multiple sources specified as input along with the graph, our algorithm is the first one to provide guaranteed bounds, giving new bounds with a delay of $O(m \cdot n)$ time per solution and $O(n^2)$ working space. When no sources are specified, our algorithm improves over previous work by reducing the delay to O(m), and is the first one with linear delay.

Alessio Conte, Roberto Grossi, Andrea Marino, Romeo Rizzi
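As a baseline for the objects being enumerated, the acyclic orientations of a small graph can be listed by brute force: try both directions for every edge and keep the orientations whose digraph is acyclic. This exponential-time sketch has none of the delay guarantees of the paper's algorithm and is only meant to illustrate the definition:

```python
from itertools import product

def acyclic_orientations(n, edges):
    """Yield every acyclic orientation of the undirected graph on
    vertices 0..n-1 with the given edge list. Brute force over 2^m
    direction choices; illustrative only."""
    def is_acyclic(arcs):
        # Kahn's algorithm: a digraph is acyclic iff all nodes can be
        # peeled off in topological order.
        indeg = [0] * n
        adj = [[] for _ in range(n)]
        for u, v in arcs:
            adj[u].append(v)
            indeg[v] += 1
        queue = [v for v in range(n) if indeg[v] == 0]
        seen = 0
        while queue:
            u = queue.pop()
            seen += 1
            for w in adj[u]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        return seen == n

    for flips in product((False, True), repeat=len(edges)):
        arcs = [(v, u) if f else (u, v) for (u, v), f in zip(edges, flips)]
        if is_acyclic(arcs):
            yield arcs
```

A triangle has 6 acyclic orientations (all 8 choices except the two directed cycles), a handy sanity check.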
Linear-Time Sequence Comparison Using Minimal Absent Words & Applications

Sequence comparison is a prerequisite to virtually all comparative genomic analyses. It is often realized by sequence alignment techniques, which are computationally expensive. This has led to increased research into alignment-free techniques, which are based on measures referring to the composition of sequences in terms of their constituent patterns. These measures, such as q-gram distance, are usually computed in time linear with respect to the length of the sequences. In this article, we focus on the complementary idea: how two sequences can be efficiently compared based on information that does not occur in the sequences. A word is an absent word of some sequence if it does not occur in the sequence. An absent word is minimal if all its proper factors occur in the sequence. Here we present the first linear-time and linear-space algorithm to compare two sequences by considering all their minimal absent words. In the process, we present results of combinatorial interest, and also extend the proposed techniques to compare circular sequences.

Maxime Crochemore, Gabriele Fici, Robert Mercaş, Solon P. Pissis
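The definition of a minimal absent word can be checked directly on short strings. The brute-force sketch below (exponential in the word length, in contrast to the paper's linear-time algorithm) relies on the fact that all proper factors of w occur iff its two maximal proper factors w[1:] and w[:-1] occur:

```python
from itertools import product

def minimal_absent_words(S, alphabet=None):
    """All minimal absent words of S: words absent from S whose proper
    factors all occur in S. Brute force, for illustration only."""
    if alphabet is None:
        alphabet = sorted(set(S))
    n = len(S)
    factors = {S[i:j] for i in range(n) for j in range(i + 1, n + 1)}
    occurs = factors | {""}               # the empty word always "occurs"
    maws = set()
    for length in range(1, n + 2):        # a MAW has length at most n + 1
        for w in map("".join, product(alphabet, repeat=length)):
            if w not in factors and w[1:] in occurs and w[:-1] in occurs:
                maws.add(w)
    return maws
```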
The Grandmama de Bruijn Sequence for Binary Strings

A de Bruijn sequence is a binary string of length $2^n$ which, when viewed cyclically, contains every binary string of length n exactly once as a substring. Knuth refers to the lexicographically least de Bruijn sequence for each n as the “granddaddy” and Fredricksen et al. showed that it can be constructed by concatenating the aperiodic prefixes of the binary necklaces of length n in lexicographic order. In this paper we prove that the granddaddy has a lexicographic partner. The “grandmama” sequence is constructed by instead concatenating the aperiodic prefixes in co-lexicographic order. We explain how our sequence differs from the previous sequence and why it had not previously been discovered.

Patrick Baxter Dragon, Oscar I. Hernandez, Aaron Williams
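The granddaddy construction referenced above, concatenating the aperiodic prefixes of binary necklaces in lexicographic order, is the classic FKM algorithm; a compact sketch:

```python
def granddaddy(n):
    """Lexicographically least binary de Bruijn sequence of order n,
    via the FKM construction: whenever the recursion completes a string
    a[1..n] whose smallest period p divides n (a necklace), append its
    aperiodic prefix a[1..p]."""
    seq, a = [], [0] * (n + 1)

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])   # aperiodic prefix of a necklace
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, 2):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(map(str, seq))
```

The paper's grandmama sequence replaces the lexicographic order of the necklace prefixes with co-lexicographic order.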
Compressing Bounded Degree Graphs

Recently, Aravind et al. (IPEC 2014) showed that for any finite set of connected graphs $\mathcal{H}$, the problem $\mathcal{H}$-Free Edge Deletion admits a polynomial kernelization on bounded degree input graphs. We generalize this theorem by no longer requiring the graphs in $\mathcal{H}$ to be connected. Furthermore, we complement this result by showing that $\mathcal{H}$-Free Edge Editing also admits a polynomial kernelization on bounded degree input graphs.

We show that there exists a finite set $\mathcal{H}$ of connected graphs such that $\mathcal{H}$-Free Edge Completion is incompressible even on input graphs of maximum degree 5, unless the polynomial hierarchy collapses to the third level. Under the same assumption, we show that $C_{11}$-Free Edge Deletion, as well as $\mathcal{H}$-Free Edge Editing, is incompressible on 2-degenerate graphs.

Pål Grønås Drange, Markus Dregi, R. B. Sandeep
Random Partial Match in Quad-K-d Trees

Quad-K-d trees were introduced by Bereczky et al. [3] as a generalization of several well-known hierarchical multidimensional data structures such as K-d trees and quad trees. One of the interesting features of quad-K-d trees is that they provide a unified framework for the analysis of associative queries in hierarchical multidimensional data structures. In this paper we consider partial match, one of the fundamental associative queries, and prove that the expected cost of a random partial match in a random quad-K-d tree of size n is of the form $\varTheta(n^\alpha)$, with $0 < \alpha < 1$, for several families of quad-K-d trees including, among others, K-d trees and quad trees. We actually give a general result that applies to any family of quad-K-d trees where each node has a type that is independent of the type of other nodes. We derive, exploiting Roura's Continuous Master Theorem, the general equation satisfied by $\alpha$, in terms of the dimension K, the number of specified coordinates s in the partial match query, and the additional parameters that characterize each of the families of quad-K-d trees considered in the paper. We also conduct an experimental study whose results match our theoretical findings; as a by-product, we propose an implementation of partial match search in quad-K-d trees in full generality.

A. Duch, G. Lau, C. Martínez
From Discrepancy to Majority

We show how to select an item with the majority color from n two-colored items, given access to the items only through an oracle that returns the discrepancy of subsets of k items. We use $n/\lfloor \tfrac{k}{2} \rfloor + O(k)$ queries, improving a previous method by De Marco and Kranakis that used $n - k + k^2/2$ queries. We also prove a lower bound of $n/(k-1) - O(n^{1/3})$ on the number of queries needed, improving a lower bound of $\lfloor n/k \rfloor$ by De Marco and Kranakis.

David Eppstein, Daniel S. Hirschberg
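For intuition about the discrepancy-query model, here is a toy strategy for the smallest case k = 2, where a query on a pair reveals only whether the two items share a color (discrepancy 0 means they differ). It uses n - 1 queries, nowhere near the paper's bounds for general k, and the names below are illustrative, assuming a strict majority exists:

```python
def majority_item(colors):
    """Return (index of a majority-color item, number of oracle queries),
    using only pair-discrepancy queries: the oracle never reveals colors,
    only whether two items have the same color. Assumes a strict majority."""
    queries = 0

    def same_color(i, j):
        # discrepancy of the pair {i, j} is 0 iff the colors differ
        nonlocal queries
        queries += 1
        return colors[i] == colors[j]

    n = len(colors)
    group, other = [0], []            # partition by agreement with item 0
    for i in range(1, n):             # n - 1 queries in total
        (group if same_color(0, i) else other).append(i)
    # the larger side of the partition carries the majority color
    winner = group[0] if len(group) > len(other) else other[0]
    return winner, queries
```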
On the Planar Split Thickness of Graphs

Motivated by applications in graph drawing and information visualization, we examine the planar split thickness of a graph, that is, the smallest k such that the graph is k-splittable into a planar graph. A k-split operation substitutes a vertex v by at most k new vertices such that each neighbor of v is connected to at least one of the new vertices. We first examine the planar split thickness of complete and complete bipartite graphs. We then prove that it is NP-hard to recognize graphs that are 2-splittable into a planar graph, and show that one can approximate the planar split thickness of a graph within a constant factor. If the treewidth is bounded, then we can even verify k-splittability in linear time, for a constant k.

David Eppstein, Philipp Kindermann, Stephen Kobourov, Giuseppe Liotta, Anna Lubiw, Aude Maignan, Debajyoti Mondal, Hamideh Vosoughpour, Sue Whitesides, Stephen Wismath
A Bounded-Risk Mechanism for the Kidney Exchange Game

In this paper we consider the pairwise kidney exchange game. This game naturally arises in situations where some service providers benefit from pairwise allocations on a network, such as kidney exchanges between hospitals. Ashlagi et al. [1] present a 2-approximation randomized truthful mechanism for this problem, which is the best known result in this setting with multiple players. However, we note that the variance of the utility of an agent in this mechanism may be as large as $$\varOmega(n^2)$$, which is not desirable in a real application. In this paper we resolve this issue by providing a 2-approximation randomized truthful mechanism in which the variance of the utility of each agent is at most $$2+\epsilon$$. As a side result, we apply our technique to design a deterministic mechanism such that, if an agent deviates from the mechanism, she does not gain more than $$2\lceil \log_2 m\rceil$$.

Hossein Esfandiari, Guy Kortsarz
Tight Approximations of Degeneracy in Large Graphs

Given an n-node m-edge graph G, the degeneracy of G and the associated node ordering can be computed in linear time in the RAM model by a greedy algorithm that iteratively removes a node of minimum degree [28]. In the semi-streaming model for large graphs, where memory is limited to $$\mathcal{O}(n\,\mathrm{polylog}\,n)$$ and edges can only be accessed in sequential passes, the greedy algorithm requires too many passes, so another approach is needed. In the semi-streaming model, there is a deterministic log-pass algorithm for generating an ordering whose degeneracy approximates the minimum possible to within a factor of $$(2+\varepsilon)$$ for any constant $$\varepsilon > 0$$ [12]. In this paper, we propose a randomized algorithm that improves the approximation factor to $$(1+\varepsilon)$$ with high probability and needs only a single pass. Our algorithm can be generalized to the model that allows edge deletions, but then it requires more computation and space usage. The generated node ordering not only yields a $$(1+\varepsilon)$$-approximation for the degeneracy but also gives constant-factor approximations for arboricity and thickness.
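The RAM-model greedy peeling algorithm referred to above can be sketched in a few lines. The version below uses a lazy heap, which costs $$O(m \log n)$$ rather than the linear time achievable with a bucket queue, but the degeneracy it computes is the same:

```python
import heapq

def degeneracy_ordering(adj):
    """Greedy peeling: repeatedly delete a minimum-degree vertex.
    adj maps each vertex to its set of neighbors (undirected graph).
    Returns (degeneracy, ordering)."""
    deg = {v: len(ns) for v, ns in adj.items()}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, order, k = set(), [], 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                      # stale heap entry; skip it
        removed.add(v)
        order.append(v)
        k = max(k, d)                     # degeneracy = max min-degree seen
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return k, order
```

For example, every cycle peels at minimum degree 2, every tree at 1, and the complete graph $$K_n$$ at $$n-1$$.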

Martín Farach-Colton, Meng-Tsung Tsai
Improved Approximation Algorithms for Capacitated Fault-Tolerant k-Center

In the k-center problem, given a metric space V and a positive integer k, one wants to select k elements (centers) of V and an assignment from V to centers, minimizing the maximum distance between an element of V and its assigned center. One of the most general variants is the capacitated $$\alpha$$-fault-tolerant k-center, where centers have a limit on the number of assigned elements and, if $$\alpha$$ centers fail, there is a reassignment from V to non-faulty centers. In this paper, we present a new approach to tackle fault tolerance, by selecting and pre-opening a set of backup centers, then solving the obtained residual instance. For the $$\{0,L\}$$-capacitated case, we give approximations with factor 6 for the basic problem, and 7 for the so-called conservative variant, in which only clients whose centers failed may be reassigned. Our algorithms improve on the previously best known factors of 9 and 17, respectively. Moreover, we consider the case with general capacities. Assuming $$\alpha$$ is constant, our method leads to the first approximations for this case.

Cristina G. Fernandes, Samuel P. de Paula, Lehilton L. C. Pedrosa
Bundled Crossings in Embedded Graphs

Edge crossings in a graph drawing are an important factor in the drawing’s quality. However, it is not just the presence of crossings that determines the drawing’s quality: any drawing of a nonplanar graph in the plane necessarily contains crossings, but the geometric structure of those crossings can have a significant impact on the drawing’s readability. In particular, the structure of two disjoint groups of locally parallel edges (bundles) intersecting in a complete crossbar (a bundled crossing) is visually simpler—even if it involves many individual crossings—than an equal number of random crossings scattered in the plane. In this paper, we investigate the complexity of partitioning the crossings of a given drawing of a graph into a minimum number of bundled crossings. We show that this problem is NP-hard, propose a constant-factor approximation algorithm for the case of circular embeddings, where all vertices lie on the outer face, and show that the bundled crossings problem in general graphs is related to a minimum dissection problem.

Martin Fink, John Hershberger, Subhash Suri, Kevin Verbeek
Probabilistic Analysis of the Dual Next-Fit Algorithm for Bin Covering

In the bin covering problem, the goal is to fill as many bins as possible up to a certain minimal level with a given set of items of different sizes. Online variants, in which the items arrive one after another and have to be packed immediately on arrival without knowledge of the future items, have been studied extensively in the literature. We study the simplest possible online algorithm, Dual Next-Fit, which packs all arriving items into the same bin until it is filled and then proceeds with the next bin in the same manner. The competitive ratio of this and any other reasonable online algorithm is 1/2. We study Dual Next-Fit in a probabilistic setting where the item sizes are chosen i.i.d. according to a discrete distribution, and we prove that, for every distribution, its expected competitive ratio is at least $$1/2 + \epsilon$$ for a constant $$\epsilon > 0$$ independent of the distribution. We also prove an upper bound of 2/3 and better lower bounds for certain restricted classes of distributions. Finally, we prove that the expected competitive ratio equals, for a large class of distributions, the random-order ratio, which is the expected competitive ratio when adversarially chosen items arrive in uniformly random order.
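Dual Next-Fit itself is only a few lines; a minimal sketch, with the minimal fill level normalized to 1:

```python
def dual_next_fit(items, level=1.0):
    """Pack arriving items into the current bin until it reaches `level`,
    then start a new bin. Returns the list of covered (filled) bins;
    the last, partially filled bin is discarded."""
    covered, current, total = [], [], 0.0
    for size in items:
        current.append(size)
        total += size
        if total >= level:          # bin is covered: close it and start anew
            covered.append(current)
            current, total = [], 0.0
    return covered
```

On the stream 0.6, 0.6, 0.6, 0.6, 0.6 it covers two bins with two items each and discards the leftover item, matching the greedy behavior described above.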

Carsten Fischer, Heiko Röglin
Deterministic Sparse Suffix Sorting on Rewritable Texts

Given a rewritable text T of length n on an alphabet of size $$\sigma$$, we propose an online algorithm computing the sparse suffix array and the sparse longest common prefix array of T in $$\mathcal{O}(c \sqrt{\lg n} + m \lg m \lg n \lg^* n)$$ time by using the text space and $$\mathcal{O}(m)$$ additional working space, where $$m \le n$$ is the number of suffixes to be sorted (provided online and arbitrarily), and $$c \ge m$$ is the number of characters that must be compared for distinguishing the designated suffixes.
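For reference, the two output arrays can be defined by a naive baseline that sorts the designated suffixes by direct comparison. This sketch copies suffix slices, so unlike the paper’s algorithm it is neither in-place nor within the stated time bound; it only pins down what a sparse suffix array and sparse LCP array are:

```python
def sparse_suffix_arrays(text, positions):
    """Naive baseline: sort the designated suffix positions by the suffixes
    they start, then compute the sparse LCP array (LCP of each suffix with
    its predecessor in sorted order). O(m n) time in the worst case."""
    ssa = sorted(positions, key=lambda p: text[p:])

    def lcp(p, q):
        # Length of the longest common prefix of text[p:] and text[q:].
        l = 0
        while p + l < len(text) and q + l < len(text) and text[p + l] == text[q + l]:
            l += 1
        return l

    slcp = [0] + [lcp(ssa[i - 1], ssa[i]) for i in range(1, len(ssa))]
    return ssa, slcp
```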

Johannes Fischer, Tomohiro I., Dominik Köppl
Minimizing the Number of Opinions for Fault-Tolerant Distributed Decision Using Well-Quasi Orderings

The notion of deciding a distributed language $$\mathcal{L}$$ is of growing interest in various distributed computing settings. Each process $$p_i$$ is given an input value $$x_i$$, and the processes should collectively decide whether their set of input values $$x=(x_i)_i$$ is a valid state of the system w.r.t. some specification, i.e., whether $$x\in \mathcal{L}$$. In non-deterministic distributed decision, each process $$p_i$$ gets a local certificate $$c_i$$ in addition to its input $$x_i$$. If the input $$x\in \mathcal{L}$$, then there exists a certificate $$c=(c_i)_i$$ such that the processes collectively accept x, and if $$x\not\in \mathcal{L}$$, then for every c, the processes should collectively reject x. The collective decision is expressed by the set of opinions emitted by the processes. In this paper we study non-deterministic distributed decision in systems where asynchronous processes may crash. It is known that the number of opinions needed to deterministically decide a language can grow with n, the number of processes in the system. We prove that every distributed language $$\mathcal{L}$$ can be non-deterministically decided using only three opinions, with certificates of size $$\lceil \log \alpha(n)\rceil +1$$ bits, where $$\alpha$$ grows at least as slowly as the inverse of the Ackermann function. The result is optimal, as we show that there are distributed languages that cannot be decided using just two opinions, even with arbitrarily large certificates. To prove our upper bound, we introduce the notion of distributed encoding of the integers, which provides an explicit construction of a long bad sequence in the well-quasi-ordering $$(\{0,1\}^*,\le_*)$$ controlled by the successor function. We thus provide a new class of applications for well-quasi-orderings that lies outside logic and complexity theory. For the lower bound we use combinatorial topology techniques.

Pierre Fraigniaud, Sergio Rajsbaum, Corentin Travers
Unshuffling Permutations

A permutation is said to be a square if it can be obtained by shuffling two order-isomorphic patterns. The definition is intended to be the natural counterpart to the ordinary shuffle of words and languages. In this paper, we tackle the problem of recognizing square permutations from the points of view of both algebra and algorithms. On the one hand, we present some algebraic and combinatorial properties of the shuffle product of permutations. We follow an unusual line consisting in defining the shuffle of permutations by means of an unshuffling operator, known as a coproduct. This strategy allows us to obtain easy proofs of algebraic and combinatorial properties of our shuffle product. We also exhibit a bijection between square (213, 231)-avoiding permutations and square binary words. On the other hand, by using a pattern avoidance criterion on oriented perfect matchings, we prove that recognizing square permutations is NP-complete.
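Since recognition is NP-complete, an exponential brute-force recognizer is a fair baseline for small instances: try every split of the positions into two halves and test whether the two extracted subsequences have the same relative-order pattern. A sketch (the helper `standardize`, which computes that pattern, is our own naming, not the paper’s):

```python
from itertools import combinations

def standardize(seq):
    """Relative-order pattern of a sequence of distinct values,
    e.g. (5, 2, 9) -> (2, 1, 3)."""
    ranks = {v: i + 1 for i, v in enumerate(sorted(seq))}
    return tuple(ranks[v] for v in seq)

def is_square(perm):
    """Brute force: does some split of the positions into two halves yield
    two order-isomorphic patterns? Exponential time, consistent with the
    NP-completeness result."""
    n = len(perm)
    if n % 2:
        return False
    idx = range(n)
    for left in combinations(idx, n // 2):
        chosen = set(left)
        rest = [i for i in idx if i not in chosen]
        if standardize([perm[i] for i in left]) == standardize([perm[i] for i in rest]):
            return True
    return False
```

For instance, 2143 is a square: positions {1, 2} give the pattern 21 and positions {3, 4} give 21 again.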

Samuele Giraudo, Stéphane Vialette
Generating Random Spanning Trees via Fast Matrix Multiplication

We consider the problem of sampling a uniformly random spanning tree of a graph. This is a classic algorithmic problem for which several exact and approximate algorithms are known. Random spanning trees have several connections to Laplacian matrices; this leads to algorithms based on fast matrix multiplication. The best algorithm for dense graphs can produce a uniformly random spanning tree of an n-vertex graph in time $$O(n^{2.38})$$. This algorithm is intricate and requires explicitly computing the LU-decomposition of the Laplacian. We present a new algorithm that also runs in time $$O(n^{2.38})$$ but has several conceptual advantages. First, whereas previous algorithms need to introduce directed graphs, our algorithm works only with undirected graphs. Second, our algorithm uses fast matrix inversion as a black box, thereby avoiding the intricate details of the LU-decomposition.
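A classic alternative sampler, included here only for illustration, is the Aldous–Broder random walk: it is exact and very simple, but its expected running time is the cover time of the graph rather than the matrix-multiplication bound discussed above. A minimal sketch:

```python
import random

def aldous_broder(adj, seed=None):
    """Sample a uniformly random spanning tree of a connected undirected
    graph: walk randomly, and add the first edge used to enter each new
    vertex to the tree (Aldous-Broder). adj maps vertex -> neighbor set."""
    rng = random.Random(seed)
    vertices = list(adj)
    current = rng.choice(vertices)
    visited = {current}
    tree = set()
    while len(visited) < len(vertices):
        nxt = rng.choice(list(adj[current]))
        if nxt not in visited:          # first entry into nxt: keep this edge
            visited.add(nxt)
            tree.add(frozenset((current, nxt)))
        current = nxt
    return tree
```

The output always has exactly $$n-1$$ edges and spans all vertices, so it is a spanning tree; uniformity is the content of the Aldous–Broder theorem.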

Nicholas J. A. Harvey, Keyulu Xu
Routing in Unit Disk Graphs

Let $$S \subset \mathbb{R}^2$$ be a set of n sites. The unit disk graph $$\mathrm{UD}(S)$$ on S has vertex set S and an edge between two distinct sites $$s,t \in S$$ if and only if s and t have Euclidean distance $$|st| \le 1$$. A routing scheme R for $$\mathrm{UD}(S)$$ assigns to each site $$s \in S$$ a label $$\ell(s)$$ and a routing table $$\rho(s)$$. For any two sites $$s, t \in S$$, the scheme R must be able to route a packet from s to t in the following way: given a current site r (initially, $$r = s$$), a header h (initially empty), and the target label $$\ell(t)$$, the scheme R may consult the current routing table $$\rho(r)$$ to compute a new site $$r'$$ and a new header $$h'$$, where $$r'$$ is a neighbor of r. The packet is then routed to $$r'$$, and the process is repeated until the packet reaches t. The resulting sequence of sites is called the routing path. The stretch of R is the maximum ratio of the (Euclidean) length of the routing path of R to that of the shortest path in $$\mathrm{UD}(S)$$, over all pairs of sites in S. For any given $$\varepsilon > 0$$, we show how to construct a routing scheme for $$\mathrm{UD}(S)$$ with stretch $$1+\varepsilon$$ using labels of $$O(\log n)$$ bits and routing tables of $$O(\varepsilon^{-5}\log^2 n \log^2 D)$$ bits, where D is the (Euclidean) diameter of $$\mathrm{UD}(S)$$. The header size is $$O(\log n \log D)$$ bits.

Haim Kaplan, Wolfgang Mulzer, Liam Roditty, Paul Seiferth
Graph Drawings with One Bend and Few Slopes

We consider drawings of graphs in the plane in which edges are represented by polygonal paths with at most one bend and the number of different slopes used by all segments of these paths is small. We prove that $$\lceil \frac{\varDelta}{2}\rceil$$ edge slopes suffice for outerplanar drawings of outerplanar graphs with maximum degree $$\varDelta \geqslant 3$$. This matches the obvious lower bound. We also show that $$\lceil \frac{\varDelta}{2}\rceil +1$$ edge slopes suffice for drawings of general graphs, improving on the previous bound of $$\varDelta +1$$. Furthermore, we improve previous upper bounds on the number of slopes needed for planar drawings of planar and bipartite planar graphs.

Kolja Knauer, Bartosz Walczak
Edge-Editing to a Dense and a Sparse Graph Class

We consider a graph edge-editing problem, where the goal is to transform a given graph G into a disjoint union of two graphs from a pair of given graph classes, investigating what properties of the classes make the problem fixed-parameter tractable. We focus on the case when the first class is dense, i.e. every such graph G has minimum degree at least $$|V(G)| - \delta$$ for a constant $$\delta$$, and assume that the cost of editing to this class is fixed-parameter tractable parameterized by the cost. Under the assumption that the second class either has bounded maximum degree, or is edge-monotone, can be defined in $$\text{MSO}_2$$, and has bounded treewidth, we prove that the problem is fixed-parameter tractable parameterized by the cost. We also show that the problem is fixed-parameter tractable parameterized by degeneracy if the second class consists of independent sets and Subgraph Isomorphism is fixed-parameter tractable for the input graphs. On the other hand, we prove that parameterization by degeneracy is in general $$\text{W[1]}$$-hard, even for editing to cliques and independent sets.

Michal Kotrbčík, Rastislav Královič, Sebastian Ordyniak
Containment and Evasion in Stochastic Point Data

Given two disjoint and finite point sets $$A$$ and $$B$$ in $$\mathbb{R}^d$$, we say that $$B$$ is contained in $$A$$ if all the points of $$B$$ lie within the convex hull of $$A$$, and that $$B$$ evades $$A$$ if no point of $$B$$ lies inside the convex hull of $$A$$. We investigate the containment and evasion problems of this type when the set $$A$$ is stochastic, meaning each of its points $$a_i$$ is present with an independent probability $$\pi(a_i)$$. Our model is motivated by situations in which there is uncertainty about the set $$A$$, for instance due to the randomized strategy of an adversarial agent or the scheduling of monitoring sensors. Our main results include the following: (1) we can compute the exact probability of containment or evasion in two dimensions in worst-case $$O(n^4 + m^2)$$ time and $$O(n^2 + m^2)$$ space, where $$n = |A|$$ and $$m = |B|$$, and (2) we prove that these problems are #P-hard in three or higher dimensions.
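In two dimensions the containment probability can also be computed by brute force over the $$2^n$$ presence patterns of $$A$$, which is useful for validating a polynomial-time implementation on small inputs. A sketch using Andrew’s monotone chain for hulls (all helper names are our own; this is the definition of the quantity, not the paper’s algorithm):

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means b is left of o->a
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def contains(hull, q):
    """Point-in-convex-polygon test (hull in CCW order, boundary counts)."""
    if len(hull) < 3:
        return False
    return all(cross(hull[i], hull[(i + 1) % len(hull)], q) >= 0
               for i in range(len(hull)))

def containment_probability(A, prob, B):
    """Exact Pr[all of B lies in conv(present points of A)] by enumerating
    all 2^n presence patterns of the stochastic set A."""
    n, total = len(A), 0.0
    for mask in range(1 << n):
        p = 1.0
        present = []
        for i in range(n):
            if mask >> i & 1:
                p *= prob[i]
                present.append(A[i])
            else:
                p *= 1 - prob[i]
        hull = convex_hull(present) if len(present) >= 3 else []
        if all(contains(hull, b) for b in B):
            total += p
    return total
```

For example, with $$A$$ the four corners of a square, each present with probability 1, the probability that its center is contained is exactly 1.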

Nirman Kumar, Subhash Suri
Tree Compression Using String Grammars

We study the compressed representation of a ranked tree by a straight-line program (SLP) for its preorder traversal string, and compare it with the previously studied representation by straight-line context-free tree grammars (also known as tree straight-line programs or TSLPs). Although SLPs may be exponentially more succinct than TSLPs, we show that many simple tree queries can still be performed efficiently on SLPs, such as computing the height of a tree, tree navigation, or evaluation of Boolean expressions. Other problems like pattern matching and evaluation of tree automata become intractable.

Moses Ganardi, Danny Hucke, Markus Lohrey, Eric Noeth
Trees and Languages with Periodic Signature

The signature of a labelled tree (and hence of its prefix-closed branch language) is the sequence of the degrees of the nodes of the tree in the breadth-first traversal. In a previous work, we have characterised the signatures of the regular languages. Here, the trees and languages that have the simplest possible signatures, namely the periodic ones, are characterised as the sets of representations of the integers in rational base numeration systems.

Victor Marsault, Jacques Sakarovitch
Rank Reduction of Directed Graphs by Vertex and Edge Deletions

In this paper we continue our study of graph modification problems defined by reducing the rank of the adjacency matrix of the given graph, and extend our results from undirected graphs to directed graphs. An instance of a graph modification problem takes as input a graph G and a positive integer k, and the objective is to delete k vertices (edges) so that the resulting graph belongs to a particular family $$\mathcal{F}$$ of graphs. Given a fixed positive integer r, we define $$\mathcal{F}_r$$ as the family of directed graphs where, for each $$G\in \mathcal{F}_r$$, the rank of the adjacency matrix of G is at most r. Using the family $$\mathcal{F}_r$$, we conduct an algorithmic study, in both classical and parameterized complexity, of the following graph modification problems: $$r$$-Rank Vertex Deletion and $$r$$-Rank Edge Deletion. We first show that both problems are NP-complete. We then show that they are fixed-parameter tractable (FPT) by designing an algorithm with running time $$2^{\mathcal{O}(k \log r)}n^{\mathcal{O}(1)}$$ for $$r$$-Rank Vertex Deletion, and an algorithm for $$r$$-Rank Edge Deletion running in time $$2^{\mathcal{O}(f(r) \sqrt{k} \log k)}n^{\mathcal{O}(1)}$$. We complement our FPT results by designing polynomial kernels for these problems. Our main structural result, which is the fulcrum of all our algorithmic results, is that for a fixed integer r the size of any “reduced graph” in $$\mathcal{F}_r$$ is upper bounded by $$3^r$$. This result is of independent interest and generalizes a similar result of Kotlov and Lovász regarding reduced undirected graphs of rank r.

Syed Mohammad Meesum, Saket Saurabh
New Deterministic Algorithms for Solving Parity Games

We study parity games in which one of the two players controls only a small number k of nodes and the other player controls the $$n-k$$ other nodes of the game. Our main result is a fixed-parameter algorithm that solves bipartite parity games in time $$k^{O(\sqrt{k})}\cdot O(n^3)$$ and general parity games in time $$(p+k)^{O(\sqrt{k})} \cdot O(pnm)$$, where p denotes the number of distinct priorities and m denotes the number of edges. For all games with $$k = o(n)$$, this improves the previously fastest algorithm by Jurdziński, Paterson, and Zwick (SICOMP 2008). We also obtain novel kernelization results and an improved deterministic algorithm for graphs with small average degree.

Matthias Mnich, Heiko Röglin, Clemens Rösner
Computing a Geodesic Two-Center of Points in a Simple Polygon

Given a simple polygon P and a set Q of points contained in P, we consider the geodesic k-center problem, in which we seek to find k points, called centers, in P to minimize the maximum geodesic distance of any point of Q to its closest center. In this paper, we focus on the case $$k=2$$ and present the first exact algorithm that efficiently computes an optimal 2-center of Q with respect to the geodesic distance in P.

Eunjin Oh, Sang Won Bae, Hee-Kap Ahn
Simple Approximation Algorithms for Balanced MAX 2SAT

We study simple algorithms for the balanced MAX 2SAT problem, where we are given weighted clauses of length one and two with the property that for each variable x, the total weight of clauses in which x appears equals the total weight of clauses for $$\overline{x}$$. We show that such instances have a simple structural property: any optimal solution can satisfy at most the total weight of the clauses minus half the total weight of the unit clauses. Using this property, we are able to show that a large class of greedy algorithms, including Johnson’s algorithm, gives a $$\frac{3}{4}$$-approximation algorithm for balanced MAX 2SAT; a similar statement is false for general MAX 2SAT instances. We further give a spectral 0.81-approximation algorithm for balanced MAX E2SAT instances (in which each clause has exactly two literals) by a reduction to a spectral algorithm of Trevisan for the maximum colored cut problem. We provide experimental results showing that this spectral algorithm performs well and is slightly better than Johnson’s algorithm and the Goemans-Williamson semidefinite programming algorithm on balanced MAX E2SAT instances.

Alice Paul, Matthias Poloczek, David P. Williamson
A Parameterized Algorithm for Mixed-Cut

The classical Menger’s theorem states that in any undirected (or directed) graph G, given a pair of vertices s and t, the maximum number of vertex (edge) disjoint paths is equal to the minimum number of vertices (edges) needed to disconnect s from t. This min-max result can be turned into a polynomial-time algorithm to find the maximum number of vertex (edge) disjoint paths as well as the minimum number of vertices (edges) needed to disconnect s from t. In this paper we study a mixed version of this problem, called Mixed-Cut, where we are given an undirected graph G, vertices s and t, and positive integers k and l, and the objective is to test whether there exist a vertex set $$S\subseteq V(G)$$ of size k and an edge set $$F\subseteq E(G)$$ of size l such that deleting S and F from G disconnects s from t. Apart from studying a generalization of the classical problem, one of our main motivations for studying this problem is that it naturally arises as a subproblem in the study of several graph editing (modification) problems. We start with the observation that this problem is NP-complete and then study it, in fact a much stronger generalization of it, in the realm of parameterized complexity. In particular, we study the Mixed Multiway Cut-Uncut problem where, along with a set of terminals T, we are also given an equivalence relation $$\mathcal{R}$$ on T, and the question is whether we can delete at most k vertices and at most l edges such that the connectivity of the terminals in the resulting graph respects $$\mathcal{R}$$. Our main result is a fixed-parameter algorithm for Mixed Multiway Cut-Uncut using the method of recursive understanding introduced by Chitnis et al. (FOCS 2012).

Ashutosh Rai, M. S. Ramanujan, Saket Saurabh
$$(k,n-k)$$-Max-Cut: An $${\mathcal O}^*(2^p)$$-Time Algorithm and a Polynomial Kernel

Max-Cut is a well-known classical NP-hard problem. This problem asks whether the vertex set of a given graph $$G=(V,E)$$ can be partitioned into two disjoint subsets, A and B, such that there exist at least p edges with one endpoint in A and the other endpoint in B. It is well known that if $$p\le |E|/2$$, the answer is necessarily positive. A widely studied variant of particular interest to parameterized complexity, called $$(k,n-k)$$-Max-Cut, restricts the size of the subset A to be exactly k. For the $$(k,n-k)$$-Max-Cut problem, we obtain an $${\mathcal O}^*(2^p)$$-time algorithm, improving upon the previous best $${\mathcal O}^*(4^{p+o(p)})$$-time algorithm, as well as the first polynomial kernel. Our algorithm relies on a delicate combination of methods and notions, including independent sets, depth-search trees, bounded search trees, dynamic programming and treewidth, while our kernel relies on examination of the closed neighborhood of the neighborhood of a certain independent set of the graph G.
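The decision problem itself is easy to state in code; a brute-force sketch over all k-subsets (exponential in k, unlike the $${\mathcal O}^*(2^p)$$ algorithm of the paper, but handy for checking small instances):

```python
from itertools import combinations

def k_maxcut_decision(n, edges, k, p):
    """(k, n-k)-Max-Cut decision version by brute force: is there a k-subset
    A of {0, ..., n-1} with at least p edges crossing to the complement B?"""
    for choice in combinations(range(n), k):
        A = set(choice)
        crossing = sum(1 for u, v in edges if (u in A) != (v in A))
        if crossing >= p:
            return True
    return False
```

On the 4-cycle, choosing A as the two opposite vertices cuts all 4 edges, so the answer is yes for $$k=2, p=4$$ and no for $$p=5$$.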

Saket Saurabh, Meirav Zehavi
Independent Set of Convex Polygons: From $$n^{\epsilon}$$ to $$1+\epsilon$$ via Shrinking

In the Independent Set of Convex Polygons problem, we are given a set of weighted convex polygons in the plane and we want to compute a maximum-weight subset of non-overlapping polygons. This is a very natural and well-studied problem with applications in many different areas. Unfortunately, there is a very large gap between the known upper and lower bounds for this problem. The best polynomial-time algorithm we know has an approximation ratio of $$n^{\epsilon}$$, and the best known lower bound shows only strong $$\mathsf{NP}$$-hardness. In this paper we close this gap completely, assuming that we are allowed to shrink the polygons a little bit, by a factor $$1-\delta$$ for an arbitrarily small constant $$\delta>0$$, while the compared optimal solution cannot do this (resource augmentation). In this setting, we improve the approximation ratio from $$n^{\epsilon}$$ to $$1+\epsilon$$, which matches the above lower bound that still holds if we can shrink the polygons.

Andreas Wiese
Backmatter
Metadaten
Titel
LATIN 2016: Theoretical Informatics
herausgegeben von
Evangelos Kranakis
Gonzalo Navarro
Edgar Chávez
Copyright-Jahr
2016
Verlag
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-49529-2
Print ISBN
978-3-662-49528-5
DOI
https://doi.org/10.1007/978-3-662-49529-2