2015 | Book

Algorithms and Computation

26th International Symposium, ISAAC 2015, Nagoya, Japan, December 9-11, 2015, Proceedings

Edited by: Khaled Elbassioni, Kazuhisa Makino

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

This book constitutes the refereed proceedings of the 26th International Symposium on Algorithms and Computation, ISAAC 2015, held in Nagoya, Japan, in December 2015.

The 65 revised full papers presented together with 3 invited talks were carefully reviewed and selected from 180 submissions for inclusion in the book. The focus of the volume is on the following topics: computational geometry; data structures; combinatorial optimization and approximation algorithms; randomized algorithms; graph algorithms and FPT; computational complexity; graph drawing and planar graphs; online and streaming algorithms; and string and DNA algorithms.

Table of Contents

Frontmatter

Computational Geometry I

Frontmatter
An Optimal Algorithm for Tiling the Plane with a Translated Polyomino

We give an O(n)-time algorithm for determining whether translations of a polyomino with n edges can tile the plane. The algorithm also enumerates all regular tilings in O(n) time, and we prove that at most $$\varTheta (n)$$ such tilings exist.

Andrew Winslow
Adaptive Point Location in Planar Convex Subdivisions

We present a planar point location structure for a convex subdivision S. Given a query sequence of length m, the total running time is $$O(\mathrm {OPT} + m\log \log n + n)$$, where n is the number of vertices in S and $$\mathrm {OPT}$$ is the minimum running time to process the same query sequence by any linear decision tree for answering planar point location queries in S. The running time includes the preprocessing time. Therefore, for $$m \ge n$$, our running time is only worse than the best possible bound by $$O(\log \log n)$$ per query, which is much smaller than the $$O(\log n)$$ query time offered by a worst-case optimal planar point location structure.

Siu-Wing Cheng, Man-Kit Lau
Competitive Local Routing with Constraints

Let P be a set of n vertices in the plane and S a set of non-crossing line segments between vertices in P, called constraints. Two vertices are visible if the straight line segment connecting them does not properly intersect any constraints. The constrained $$\theta _m$$-graph is constructed by partitioning the plane around each vertex into m disjoint cones with aperture $$\theta = 2\uppi /m$$, and adding an edge to the ‘closest’ visible vertex in each cone. We consider how to route on the constrained $$\theta _6$$-graph. We first show that no deterministic 1-local routing algorithm is $$o(\sqrt{n})$$-competitive on all pairs of vertices of the constrained $$\theta _6$$-graph. After that, we show how to route between any two visible vertices using only 1-local information, while guaranteeing that the returned path has length at most 2 times the Euclidean distance between the source and destination. To the best of our knowledge, this is the first local routing algorithm in the constrained setting with guarantees on the path length.

Prosenjit Bose, Rolf Fagerberg, André van Renssen, Sander Verdonschot
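
As an illustration of the graph family the abstract above defines (not of the paper's routing algorithm), the following sketch builds the unconstrained $$\theta _6$$-graph: the plane around each vertex is split into six cones of aperture $$2\uppi /6$$, and an edge is added to the nearest other vertex in each nonempty cone. Plain Euclidean distance stands in for the projection-based notion of 'closest', and the visibility constraints are omitted.

```python
import math

def theta6_graph(points):
    """Build the (unconstrained) theta_6-graph of a point set.

    The plane around each vertex is partitioned into 6 cones of
    aperture 2*pi/6; an edge goes to the closest other vertex inside
    each nonempty cone.  'Closest' means plain Euclidean distance
    here, a simplification of the projection-based definition used
    for theta-graphs, and visibility constraints are omitted.
    """
    edges = set()
    for i, (px, py) in enumerate(points):
        best = [None] * 6  # (distance, index) of nearest vertex per cone
        for j, (qx, qy) in enumerate(points):
            if i == j:
                continue
            angle = math.atan2(qy - py, qx - px) % (2 * math.pi)
            cone = int(angle // (math.pi / 3)) % 6
            dist = math.hypot(qx - px, qy - py)
            if best[cone] is None or dist < best[cone][0]:
                best[cone] = (dist, j)
        for entry in best:
            if entry is not None:
                edges.add((min(i, entry[1]), max(i, entry[1])))
    return edges
```

Each vertex contributes at most six edges, so the graph has O(n) edges overall, which is what makes 1-local routing on it meaningful.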
Navigating Weighted Regions with Scattered Skinny Tetrahedra

We propose an algorithm for finding a $$(1+\varepsilon )$$-approximate shortest path through a weighted 3D simplicial complex $$\mathcal T$$. The weights are integers from the range [1, W] and the vertices have integral coordinates. Let N be the largest vertex coordinate magnitude, and let n be the number of tetrahedra in $$\mathcal T$$. Let $$\rho $$ be some arbitrary constant. Let $$\kappa $$ be the size of the largest connected component of tetrahedra whose aspect ratios exceed $$\rho $$. There exists a constant C dependent on $$\rho $$ but independent of $$\mathcal T$$ such that if $$\kappa \le \frac{1}{C}\log \log n + O(1)$$, the running time of our algorithm is polynomial in n, $$1/\varepsilon $$ and $$\log (NW)$$. If $$\kappa = O(1)$$, the running time reduces to $$O(n \varepsilon ^{-O(1)}(\log (NW))^{O(1)})$$.

Siu-Wing Cheng, Man-Kwun Chiu, Jiongxin Jin, Antoine Vigneron

Data Structures

Frontmatter
On the Succinct Representation of Unlabeled Permutations

We investigate the problem of succinctly representing an arbitrary unlabeled permutation $$\pi $$, so that $$\pi ^k(i)$$ can be computed quickly for any i and any integer power k. We consider the problem in several scenarios:

Labeling schemes, where we assign labels to elements and the query is to be answered by just examining the labels of the queried elements: we show that a label space of $$\sum _{i=1}^n \lfloor {n \over i} \rfloor \cdot i$$ is necessary and sufficient. In other words, $$2\lg n$$ bits of space are necessary and sufficient for representing each of the labels.

Succinct data structures for the problem where we assign labels to the n elements from the label set $$\{1,\ldots , cn\}$$ where $$c\ge 1$$: we show that $$\varTheta (\sqrt{n})$$ bits are necessary and sufficient to represent the permutation. Moreover, we support queries in such a structure in O(1) time in the standard word-RAM model.

Succinct data structures for the problem where we assign labels to the n elements from the label set $$\{1,\ldots , cn^{1+\epsilon }\}$$ where c is a constant and $$0 < \epsilon < 1$$: we show that $$\varTheta (n^{{(1-\epsilon )}/{2}})$$ bits are necessary and sufficient to represent the permutation. We can also support queries in such a structure in O(1) time in the standard word-RAM model.

Hicham El-Zein, J. Ian Munro, Siwei Yang
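
The query in the abstract above, computing $$\pi ^k(i)$$ for an arbitrary power k, can be illustrated with the folklore cycle-decomposition idea: decompose $$\pi $$ into cycles once, then jump $$k \bmod \ell $$ positions along a cycle of length $$\ell $$. The paper's contribution is achieving this in succinct space; the sketch below uses plain O(n)-word arrays and is only an illustration of the query semantics.

```python
def power_query_structure(pi):
    """Precompute the cycle decomposition of permutation pi once,
    then answer pi^k(i) in O(1) time by jumping k mod (cycle length)
    positions along i's cycle.  (Illustrative: the paper does this
    in succinct space; this sketch spends O(n) words.)
    """
    n = len(pi)
    cycle_of = [0] * n       # index of the cycle containing each element
    pos_in_cycle = [0] * n   # element's position within its cycle
    cycles = []
    seen = [False] * n
    for start in range(n):
        if seen[start]:
            continue
        cyc, x = [], start
        while not seen[x]:
            seen[x] = True
            cycle_of[x] = len(cycles)
            pos_in_cycle[x] = len(cyc)
            cyc.append(x)
            x = pi[x]
        cycles.append(cyc)

    def query(i, k):
        cyc = cycles[cycle_of[i]]
        return cyc[(pos_in_cycle[i] + k) % len(cyc)]

    return query

# pi maps position -> image; here the 3-cycle (0 1 2) plus fixed point 3
q = power_query_structure([1, 2, 0, 3])
```

Negative powers also work, since Python's `%` returns a nonnegative residue.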
How to Select the Top k Elements from Evolving Data?

In this paper we investigate the top-k-selection problem, i.e., to determine and sort the top k elements, in the dynamic data model. Here dynamic means that the underlying total order evolves over time, and that the order can only be probed by pair-wise comparisons. It is assumed that at each time step, only one pair of elements can be compared. This assumption of restricted access is reasonable in the dynamic model, especially for massive data sets where it is impossible to access all the data before the next change occurs. Previously only two special cases were studied [1] in this model: selecting the element of a given rank, and sorting all elements. This paper systematically deals with $$k\in [n]$$. Specifically, we identify the critical point $$k^*$$ such that the top-k-selection problem can be solved error-free with probability $$1-o(1)$$ if and only if $$k=o(k^*)$$. A lower bound on the error when $$k=\varOmega (k^*)$$ is also determined, which actually is tight under some conditions. In contrast, we show that the top-k-set problem, which means finding the top k elements without sorting them, can be solved error-free with probability $$1-o(1)$$ for all $$1\le k\le n$$. Additionally, we consider some extensions of the dynamic data model and show that most of these results still hold.

Qin Huang, Xingwu Liu, Xiaoming Sun, Jialin Zhang
Optimal Search Trees with 2-Way Comparisons

In 1971, Knuth gave an $$O(n^2)$$-time algorithm for the classic problem of finding an optimal binary search tree. Knuth’s algorithm works only for search trees based on 3-way comparisons, but most modern computers support only 2-way comparisons ($$<$$, $$\le $$, $$=$$, $$\ge $$, and $$>$$). Until this paper, the problem of finding an optimal search tree using 2-way comparisons remained open — poly-time algorithms were known only for restricted variants. We solve the general case, giving (i) an $$O(n^4)$$-time algorithm and (ii) an $$O(n\log n)$$-time additive-3 approximation algorithm. For finding optimal binary split trees, we (iii) obtain a linear speedup and (iv) prove some previous work incorrect.

Marek Chrobak, Mordecai Golin, J. Ian Munro, Neal E. Young
Multidimensional Range Selection

We study the problem of supporting (orthogonal) range selection queries over a set of n points in constant-dimensional space. Under the standard word-RAM model with word size $$w = \varOmega (\lg n)$$, we present data structures that occupy $$O(n \cdot (\lg n / \lg \lg n)^{d - 1})$$ words of space and support d-dimensional range selection queries using $$O((\lg n / \lg \lg n)^d)$$ query time. This improves the best known data structure by a factor of $$\lg \lg n$$ in query time. To develop our data structures, we generalize the “parallel counting” technique of Brodal, Gfeller, Jørgensen, and Sanders (2011) for one-dimensional range selection to higher dimensions.

As a byproduct, we design data structures to support d-dimensional range counting queries within $$O(n \cdot (\lg n / \lg w + 1)^{d - 2})$$ words of space and $$O((\lg n / \lg w + 1)^{d - 1})$$ query time, for any word size $$w = \varOmega (\lg n)$$. This improves the best known result of JaJa, Mortensen, and Shi (2004) when $$\lg w\gg \lg \lg n$$.

Timothy M. Chan, Gelin Zhou

Combinatorial Optimization and Approximation Algorithms I

Frontmatter
On the Minimum Cost Range Assignment Problem

We study the problem of assigning transmission ranges to radio stations placed in a d-dimensional (d-D) Euclidean space in order to achieve a strongly connected communication network with minimum total cost, where the cost of transmitting in range r is proportional to $$r^\alpha $$. While this problem can be solved optimally in 1D, in higher dimensions it is known to be NP-hard for any $$\alpha \ge 1$$.

For the 1D version of the problem and $$\alpha \ge 1$$, we propose a new approach that achieves an exact $$O(n^2)$$-time algorithm. This improves the running time of the best known algorithm by a factor of n. Moreover, we show that this new technique can be utilized to achieve a polynomial-time algorithm for finding the minimum cost range assignment in 1D whose induced communication graph is a t-spanner, for any $$t \ge 1$$.

In higher dimensions, finding the optimal range assignment is NP-hard; however, it can be approximated within a constant factor. The best known approximation ratio, achieved for the case $$\alpha =1$$, is 1.5. We show a new approximation algorithm that breaks the 1.5 ratio.

Paz Carmi, Lilach Chaitman-Yerushalmi
On the Approximability of the Minimum Rainbow Subgraph Problem and Other Related Problems

In this paper, we study the approximability of the Minimum Rainbow Subgraph (MRS) problem and other related problems. The input to the problem is an n-vertex undirected graph, with each edge colored with one of p colors. The goal is to find a subgraph on a minimum number of vertices which has one induced edge of each color. The problem is known to be NP-hard, and has an upper bound of $$O(\sqrt{n})$$ and a lower bound of $$\varOmega (\log n)$$ on its approximation ratio.

We define a new problem called the Densest k Colored Subgraph problem, which has the same input as the MRS problem along with a parameter k. The goal is to output a subgraph on k vertices, which has the maximum number of edges of distinct colors. We give an $$O(n^{1/3})$$ approximation algorithm for it, and then, using that algorithm, give an $$O(n^{1/3}\log n)$$ approximation algorithm for the MRS problem. We observe that the MIN-REP problem is indeed a special case of the MRS problem. This also implies a combinatorial $$O(n^{1/3}\log n)$$ approximation algorithm for the MIN-REP problem. Previously, Charikar et al. [5] showed an ingenious LP-rounding based algorithm with an approximation ratio of $$O(n^{1/3}\log ^{2/3}n)$$ for MIN-REP. It is quasi-NP-hard to approximate the MIN-REP problem to within a factor of $$2^{\log ^{1-\epsilon }n}$$ [15]. The same hardness result now applies to the MRS problem. We also give approximation-preserving reductions between various problems related to the MRS problem for which the best known approximation ratio is $$O(n^c)$$, where n is the size of the input and c is a fixed constant less than one.

Sumedh Tirodkar, Sundar Vishwanathan
General Caching Is Hard: Even with Small Pages

Caching (also known as paging) is a classical problem concerning page replacement policies in two-level memory systems. General caching is the variant with pages of different sizes and fault costs. The strong NP-hardness of its two important cases, the fault model (each page has unit fault cost) and the bit model (each page has the same fault cost as size), has been established. We prove that this already holds when page sizes are bounded by a small constant: the bit and fault models are strongly NP-complete even when page sizes are limited to $$\{1, 2, 3\}$$.

Considering only the decision versions of the problems, general caching is equivalent to the unsplittable flow on a path problem, and therefore our results also improve the hardness results for this problem.

Lukáš Folwarczný, Jiří Sgall

Randomized Algorithms I

Frontmatter
The Secretary Problem with a Choice Function

In the classical secretary problem, a decision-maker is willing to hire the best secretary out of n applicants that arrive in a random order, and the goal is to maximize the probability of choosing the best applicant. In this paper, we introduce the secretary problem with a choice function. The choice function represents the preference of the decision-maker. In this problem, the decision-maker hires some applicants, and the goal is to maximize the probability of choosing the best set of applicants as defined by the choice function. We see that the secretary problem with a path-independent choice function generalizes the secretary versions of the stable matching problem, the maximum weight bipartite matching problem, and the maximum weight base problem in a matroid. When the choice function is path-independent, we provide an algorithm that succeeds with probability at least $$1/e^k$$, where k is the maximum size of the choice, and prove that this is the best possible. Moreover, for the non-path-independent case, we prove that the success probability becomes arbitrarily small for any algorithm, even if the maximum size of the choice is 2.

Yasushi Kawase
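
For context, the classical problem the abstract starts from admits the well-known 1/e stopping rule: reject the first $$\lfloor n/e \rfloor $$ applicants, then hire the first one who beats everyone seen so far, succeeding with probability about 1/e. The simulation below is a generic illustration of that classical rule, not of the paper's choice-function algorithm.

```python
import math
import random

def one_over_e_rule(values):
    """Classical 1/e stopping rule: observe the first n/e applicants
    without hiring, then hire the first applicant better than all
    previously seen.  Returns True iff the overall best is hired."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff]) if cutoff > 0 else float("-inf")
    for v in values[cutoff:]:
        if v > best_seen:
            return v == max(values)  # hired v; success iff it is the best
    return False  # never hired anyone after the sampling phase

random.seed(0)
trials = 20000
wins = 0
for _ in range(trials):
    applicants = random.sample(range(1000), 50)  # 50 distinct values, random order
    wins += one_over_e_rule(applicants)
success_rate = wins / trials  # empirically close to 1/e ~ 0.368
```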
The Benefit of Recombination in Noisy Evolutionary Search

Practical optimization problems frequently include uncertainty about the quality measure, for example due to noisy evaluations. Thus, they do not allow for a straightforward application of traditional optimization techniques. In these settings meta-heuristics are a popular choice for deriving good optimization algorithms, most notably evolutionary algorithms, which mimic evolution in nature. Empirical evidence suggests that genetic recombination is useful in uncertain environments because it can stabilize a noisy fitness signal. With this paper we want to support this claim with mathematical rigor.

The setting we consider is that of noisy optimization. We study a simple noisy fitness function that is derived by adding Gaussian noise to a monotone function. First, we show that a classical evolutionary algorithm that does not employ sexual recombination (the $$( \mu +1) $$-EA) cannot handle the noise efficiently, regardless of the population size. Then we show that an evolutionary algorithm which does employ sexual recombination (the Compact Genetic Algorithm, short: cGA) can handle the noise using a graceful scaling of the population.

Tobias Friedrich, Timo Kötzing, Martin S. Krejca, Andrew M. Sutton
Algorithmic Learning for Steganography: Proper Learning of k-term DNF Formulas from Positive Samples

Proper learning from positive samples is a basic ingredient for designing secure steganographic systems for unknown covertext channels. In addition, security requirements imply that the hypothesis should not contain false positives. We present such a learner for k-term DNF formulas for the uniform distribution and a generalization to q-bounded distributions. We also briefly describe how these results can be used to design a secure stegosystem.

Matthias Ernst, Maciej Liśkiewicz, Rüdiger Reischuk

Combinatorial Optimization and Approximation Algorithms II

Frontmatter
Obtaining a Triangular Matrix by Independent Row-Column Permutations

Given a square (0, 1)-matrix A, we consider the problem of deciding whether there exists a permutation of the rows and a permutation of the columns of A such that, after these have been carried out, the resulting matrix is triangular. The complexity of the problem was posed as an open question by Wilf [6] in 1997. In 1998, DasGupta et al. [3] seemingly answered the question, proving it is NP-complete. However, we show here that their result is flawed, which leaves the question open. We then give a definite answer to this question by proving that the problem is indeed NP-complete. Finally, we present an exponential-time algorithm for solving the problem.

Guillaume Fertin, Irena Rusu, Stéphane Vialette
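
To make the decision problem concrete, here is a brute-force checker that tries every row permutation together with every column permutation and tests whether the result is lower triangular. Its $$(n!)^2$$ running time is far worse than the paper's exponential-time algorithm; it only illustrates the problem statement.

```python
from itertools import permutations

def triangularizable(A):
    """Decide (by exhaustive search) whether independent row and
    column permutations can turn the 0/1 matrix A into a lower
    triangular matrix, i.e. one with zeros strictly above the
    diagonal.  Toy illustration only: O((n!)^2 * n^2) time."""
    n = len(A)
    for rows in permutations(range(n)):
        for cols in permutations(range(n)):
            # check that every entry strictly above the diagonal is 0
            if all(A[rows[i]][cols[j]] == 0
                   for i in range(n) for j in range(i + 1, n)):
                return True
    return False
```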

Open Access

Many-to-one Matchings with Lower Quotas: Algorithms and Complexity

We study a natural generalization of the maximum weight many-to-one matching problem. We are given an undirected bipartite graph $$G= (A \dot{\cup }P, E)$$ with weights on the edges in E, and with lower and upper quotas on the vertices in P. We seek a maximum weight many-to-one matching satisfying two sets of constraints: vertices in A are incident to at most one matching edge, while vertices in P are either unmatched or they are incident to a number of matching edges between their lower and upper quota. This problem, which we call maximum weight many-to-one matching with lower and upper quotas (wmlq), has applications to the assignment of students to projects within university courses, where there are constraints on the minimum and maximum numbers of students that must be assigned to each project.

In this paper, we provide a comprehensive analysis of the complexity of wmlq from the viewpoints of classic polynomial time algorithms, fixed-parameter tractability, as well as approximability. We draw the line between $$\mathsf{NP}$$-hard and polynomially tractable instances in terms of degree and quota constraints and provide efficient algorithms to solve the tractable ones. We further show that the problem can be solved in polynomial time for instances with bounded treewidth; however, the corresponding runtime is exponential in the treewidth with the maximum upper quota $$u_{\max }$$ as basis, and we prove that this dependence is necessary unless $$\mathsf{FPT}= \mathsf{W}[1]$$. Finally, we also present an approximation algorithm for the general case with performance guarantee $$u_{\max }+1$$, which is asymptotically best possible unless $$\mathsf{P}= \mathsf{NP}$$.

Ashwin Arulselvan, Ágnes Cseh, Martin Groß, David F. Manlove, Jannik Matuschke
Minimizing the Maximum Moving Cost of Interval Coverage

In this paper, we study an interval coverage problem. We are given n intervals of the same length on a line L and a line segment B on L. Each interval has a nonnegative weight. The goal is to move the intervals along L such that every point of B is covered by at least one interval and the maximum moving cost of all intervals is minimized, where the moving cost of each interval is its moving distance times its weight. Algorithms for the “unweighted” version of this problem have been given before. In this paper, we present the first known algorithm for this weighted version; it runs in $$O(n^2\log n\log \log n)$$ time. The problem has applications in mobile sensor barrier coverage.

Haitao Wang, Xiao Zhang

Randomized Algorithms II

Frontmatter
Heuristic Time Hierarchies via Hierarchies for Sampling Distributions

We introduce a new framework for proving time hierarchy theorems for heuristic classes. The main ingredient of our proof is a hierarchy theorem for sampling distributions recently proved by Watson [11]. The class $$\mathrm {Heur}_{\epsilon }{\mathbf {FBPP}}$$ consists of functions with distributions on their inputs that can be computed in randomized polynomial time with bounded error on all except an $$\epsilon $$ fraction of inputs. We prove that for every a, $$\delta $$ and integer k there exists a function $${F: \{0, 1\}^* \rightarrow \{0, 1, \dots , k - 1\}}$$ such that $$(F, U) \in \mathrm {Heur}_{\epsilon }{\mathbf {FBPP}}$$ for all $$\epsilon > 0$$, and for every ensemble of distributions $$D_n$$ samplable in $$n^a$$ steps, $$(F, D) \notin \mathrm {Heur}_{1 - \frac{1}{k} - \delta }{\mathbf {FBPTime}}[n^a]$$. This extends a previously known result for languages with uniform distributions proved by Pervyshev [9] by handling the case $$k > 2$$. We also prove that $${\mathbf {P}}\not \subseteq \mathrm {Heur}_{\frac{1}{2} - \epsilon }{\mathbf {BPTime}}[n^k]$$ if one-way functions exist.

We also show that our technique may be extended to time hierarchies in some other heuristic classes.

Dmitry Itsykson, Alexander Knop, Dmitry Sokolov
Unbounded Discrepancy of Deterministic Random Walks on Grids

Random walks are frequently used in randomized algorithms. We study a derandomized variant of a random walk on graphs, called the rotor-router model. In this model, instead of distributing tokens randomly, each vertex serves its neighbors in a fixed deterministic order. For most setups, both processes behave remarkably similarly: starting with the same initial configuration, the number of tokens in the rotor-router model deviates only slightly from the expected number of tokens on the corresponding vertex in the random walk model. The maximal difference over all vertices and all times is called the single vertex discrepancy. Cooper and Spencer (2006) showed that on $$\mathbb {Z}^{d}$$ the single vertex discrepancy is only a constant $$c_d$$. Other authors also determined the precise value of $$c_d$$ for $$d=1,2$$. All these results, however, assume that initially all tokens are placed on only one partition of the bipartite graph $$\mathbb {Z}^{d}$$. We show that this assumption is crucial by proving that otherwise the single vertex discrepancy can become arbitrarily large. For all dimensions $$d\ge 1$$ and arbitrary discrepancies $$\ell \ge 0$$, we construct configurations that reach a discrepancy of at least $$\ell $$.

Tobias Friedrich, Maximilian Katzmann, Anton Krohmer
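
A minimal sketch of the rotor-router model described above, on the integer line $$\mathbb {Z}$$: each vertex distributes its tokens deterministically, alternating between its two neighbours instead of choosing at random. The starting configuration below is purely illustrative and is not one of the paper's discrepancy-forcing constructions.

```python
from collections import defaultdict

def rotor_router_step(tokens, rotors):
    """One parallel round of the rotor-router ('deterministic random
    walk') model on Z: every vertex sends out its tokens one at a
    time, its rotor alternating between directions -1 and +1, rather
    than sampling directions at random."""
    out = defaultdict(int)
    for v, cnt in tokens.items():
        for _ in range(cnt):
            direction = -1 if rotors[v] == 0 else +1
            out[v + direction] += 1
            rotors[v] ^= 1  # advance the rotor to the other neighbour
    return dict(out)

tokens = {0: 4}            # four tokens at the origin
rotors = defaultdict(int)  # all rotors initially point to the left
tokens = rotor_router_step(tokens, rotors)
# the 4 tokens split exactly 2 left / 2 right, matching the random
# walk's expectation with zero deviation on this configuration
```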
Trading off Worst and Expected Cost in Decision Tree Problems

We characterize the best possible trade-off achievable when optimizing the construction of a decision tree with respect to both the worst and the expected cost. It is known that a decision tree achieving the minimum possible worst-case cost can behave very poorly in expectation (even exponentially worse than the optimal), and vice versa. Led by applications where deciding on the right optimization criterion might not be easy, several authors have recently focused on the bicriteria optimization of decision trees.

An unanswered fundamental question is about the best possible trade-off achievable. Here we are able to sharply define the limits for such a task. More precisely, we show that for every $$\rho >0$$ there is a decision tree D with worst testing cost at most $$(1 + \rho )OPT_W+1$$ and expected testing cost at most $$\frac{1}{1 - e^{-\rho }} OPT_E,$$ where $$OPT_W$$ and $$OPT_E$$ denote the minimum worst testing cost and the minimum expected testing cost of a decision tree for the given instance. We also show that this is the best possible trade-off in the sense that there are infinitely many instances for which we cannot obtain a decision tree with both worst testing cost smaller than $$(1 + \rho )OPT_W$$ and expected testing cost smaller than $$\frac{1}{1 - e^{-\rho }} OPT_E.$$

Aline Saettler, Eduardo Laber, Ferdinando Cicalese
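
To make the trade-off concrete, instantiating the bound at $$\rho = \ln 2$$ gives:

```latex
% Plugging \rho = \ln 2 into the trade-off of the abstract:
%   worst testing cost    \le (1+\rho)\,OPT_W + 1,
%   expected testing cost \le \frac{1}{1-e^{-\rho}}\,OPT_E.
\frac{1}{1 - e^{-\ln 2}} = \frac{1}{1 - \tfrac{1}{2}} = 2,
\qquad 1 + \ln 2 \approx 1.693.
```

So there is a tree with worst testing cost at most $$1.693\,OPT_W + 1$$ and expected testing cost at most $$2\,OPT_E$$, and by the matching lower bound there are infinitely many instances on which no tree beats both of these bounds simultaneously.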

Graph Algorithms and FPT I

Frontmatter
Sliding Token on Bipartite Permutation Graphs

Sliding Token is a natural reconfiguration problem in which vertices of independent sets are iteratively replaced by neighbors. We develop techniques that may be useful in answering the conjecture that Sliding Token is polynomial-time decidable on bipartite graphs. Along the way, we give efficient algorithms for Sliding Token on bipartite permutation and bipartite distance-hereditary graphs.

Eli Fox-Epstein, Duc A. Hoang, Yota Otachi, Ryuhei Uehara
Output-Polynomial Enumeration on Graphs of Bounded (Local) Linear MIM-Width

The linear maximum induced matching width (LMIM-width) of a graph is a width parameter based on the maximum induced matching in some of its subgraphs. In this paper we study output-polynomial enumeration algorithms on graphs of bounded LMIM-width and graphs of bounded local LMIM-width. In particular, we show that all 1-minimal $$(\sigma ,\rho )$$-dominating sets, and hence all minimal dominating sets, of graphs of bounded LMIM-width can be enumerated with polynomial (linear) delay using polynomial space. Furthermore, we show that all minimal dominating sets of a unit square graph can be enumerated in incremental polynomial time.

Petr A. Golovach, Pinar Heggernes, Mamadou Moustapha Kanté, Dieter Kratsch, Sigve H. Sæther, Yngve Villanger
Minimum Degree Up to Local Complementation: Bounds, Parameterized Complexity, and Exact Algorithms

The local minimum degree of a graph is the minimum degree that can be reached by means of local complementation. For any n, there exist graphs of order n whose local minimum degree is at least 0.189n, or at least 0.110n when restricted to bipartite graphs. Regarding the upper bound, we show that the local minimum degree is at most $$\frac{3}{8}n+o(n)$$ for general graphs and $$\frac{n}{4}+o(n)$$ for bipartite graphs, improving the known $$\frac{n}{2}$$ upper bound. We also prove that the local minimum degree is smaller than half of the vertex cover number (up to a logarithmic term). The local minimum degree problem is NP-complete and hard to approximate. We show that this problem, even when restricted to bipartite graphs, is in W[2] and FPT-equivalent to the EvenSet problem, whose W[1]-hardness is a long-standing open question. Finally, we show that the local minimum degree can be computed by an $$\mathcal O^*(1.938^n)$$-time algorithm, and by an $$\mathcal O^*(1.466^n)$$-time algorithm for bipartite graphs.

David Cattanéo, Simon Perdrix
Exact and FPT Algorithms for Max-Conflict Free Coloring in Hypergraphs

Conflict-free coloring of hypergraphs is a very well studied question of theoretical and practical interest. For a hypergraph $$H=(U, \mathcal F)$$, a conflict-free coloring of H refers to a vertex coloring where every hyperedge has a vertex with a unique color, distinct from all other vertices in the hyperedge. In this paper, we initiate a study of a natural maximization version of this problem, namely, Max-CFC: for a given hypergraph H and a fixed $$r\ge 2$$, color the vertices of U using r colors so that the number of hyperedges that are conflict-free colored is maximized. By previously known hardness results for conflict-free coloring, this maximization version is NP-hard.

We study this problem in the context of both exact and parameterized algorithms. In the parameterized setting, we study this problem with respect to the natural parameter, the solution size. In particular, we study the following question, p-CFC: for a given hypergraph, can we conflict-free color at least k hyperedges with at most r colors, the parameter being k. We show that this problem is FPT by designing an algorithm with running time $$2^{\mathcal {O}(k \log \log k + k \log r)}(n+m)^{\mathcal {O}(1)}$$, using a novel connection to the Unique Coverage problem and applying the method of color coding in a non-trivial manner. For the special case of hypergraphs induced by graph neighbourhoods, we give a polynomial kernel. Finally, we give an exact algorithm for Max-CFC running in $$\mathcal {O}(2^{n+m})$$ time. All our algorithms, with minor modifications, work for a stronger version of conflict-free coloring, Unique Maximum Coloring.

Pradeesha Ashok, Aditi Dudeja, Sudeshna Kolay

Computational Geometry II

Frontmatter
Geometric Matching Algorithms for Two Realistic Terrains

We consider a geometric matching of two realistic terrains, each of which is modeled as a piecewise-linear bivariate function. For two realistic terrains f and g where the domain of g is relatively larger than that of f, we seek to find a translated copy $$f'$$ of f such that the domain of $$f'$$ is a sub-domain of g and the $$L_\infty $$ or the $$L_1$$ distance of $$f'$$ and g restricted to the domain of $$f'$$ is minimized. In this paper, we show a tight bound on the number of different combinatorial structures that f and g can have under translation in their projections on the xy-plane. We give a deterministic algorithm and a randomized algorithm that compute an optimal translation of f with respect to g under $$L_\infty $$ metric. We also give a deterministic algorithm that computes an optimal translation of f with respect to g under $$L_1$$ metric.

Sang Duk Yoon, Min-Gyu Kim, Wanbin Son, Hee-Kap Ahn
Size-Dependent Tile Self-Assembly: Constant-Height Rectangles and Stability

We introduce a new model of algorithmic tile self-assembly called size-dependent assembly. In previous models, supertiles are stable when the total strength of the bonds between any two halves exceeds some constant temperature. In this model, this constant temperature requirement is replaced by a nondecreasing temperature function $$\tau : \mathbb {N} \rightarrow \mathbb {N}$$ that depends on the size of the smaller of the two halves. This generalization allows supertiles to become unstable and break apart, and captures the increased forces that large structures may place on the bonds holding them together.

We demonstrate the power of this model in two ways. First, we give fixed tile sets that assemble constant-height rectangles and squares of arbitrary input size given an appropriate temperature function. Second, we prove that deciding whether a supertile is stable is $$\mathsf {coNP}$$-complete. Both results contrast with known results for fixed temperature.

Sándor P. Fekete, Robert T. Schweller, Andrew Winslow
The 2-Center Problem in a Simple Polygon

The geodesic k-center problem in a simple polygon with n vertices is the following: find k points, called centers, in the polygon that minimize the maximum geodesic distance from any point of the polygon to its closest center. In this paper, we focus on the case where $$k=2$$ and present an exact algorithm that returns an optimal geodesic 2-center in $$O(n^2\log ^2 n)$$ time.

Eunjin Oh, Jean-Lou De Carufel, Hee-Kap Ahn
Choice Is Hard

Let $$P=\{C_1,C_2,\ldots , C_n\}$$ be a set of color classes, where each color class $$C_i$$ consists of a pair of objects. We focus on two problems in which the objects are points on the line. In the first problem (rainbow minmax gap), given P, one needs to select exactly one point from each color class, such that the maximum distance between a pair of consecutive selected points is minimized. This problem was studied by Consuegra and Narasimhan, who left the question of its complexity unresolved. We prove that it is NP-hard. For our proof we obtain the following auxiliary result. A 3-SAT formula is an LSAT formula if each clause (viewed as a set of literals) intersects at most one other clause, and, moreover, if two clauses intersect, then they have exactly one literal in common. We prove that the problem of deciding whether an LSAT formula is satisfiable or not is NP-complete. We present two additional applications of the LSAT result, namely, to rainbow piercing and rainbow covering.

In the second problem (covering color classes with intervals), given P, one needs to find a minimum-cardinality set $$\mathcal {I}$$ of intervals, such that exactly one point from each color class is covered by an interval in $$\mathcal {I}$$. Motivated by a problem in storage systems, this problem has received significant attention. Here, we settle the complexity question by proving that it is NP-hard.

Esther M. Arkin, Aritra Banik, Paz Carmi, Gui Citovsky, Matthew J. Katz, Joseph S. B. Mitchell, Marina Simakov

Graph Algorithms and FPT II

Frontmatter
Fully Dynamic Betweenness Centrality

We present fully dynamic algorithms for maintaining betweenness centrality (BC) of vertices in a directed graph $$G=(V,E)$$ with positive edge weights. BC is a widely used parameter in the analysis of large complex networks. We achieve an amortized $$O({\nu ^*}^2 \cdot \log ^3 n)$$ time per update with our basic algorithm, and $$O({\nu ^*}^2 \cdot \log ^2 n)$$ time with a more complex algorithm, where $$n = |V| $$, and $${\nu ^*}$$ bounds the number of distinct edges that lie on shortest paths through any single vertex. For graphs with $$\nu ^* = O(n)$$, our algorithms match the fully dynamic all pairs shortest paths (APSP) bounds of Demetrescu and Italiano [8] and Thorup [28] for unique shortest paths, where $$\nu ^*=n-1$$. Our first algorithm also contains within it a method and analysis for obtaining fully dynamic APSP from a decremental algorithm that differs from the one in [8].
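For readers unfamiliar with the quantity being maintained, here is a brute-force sketch of (static) betweenness centrality on a tiny unweighted directed graph; it enumerates simple paths and is purely illustrative, far from the paper's dynamic algorithms:

```python
def betweenness(adj):
    """Brute-force betweenness centrality: for every ordered pair (s, t),
    count shortest s-t paths and credit each intermediate vertex with the
    fraction of shortest paths passing through it."""
    n = len(adj)

    def shortest_paths(s, t):
        # Enumerate all simple s-t paths; keep those of minimum length.
        best, paths, stack = None, [], [(s, [s])]
        while stack:
            u, p = stack.pop()
            if u == t:
                if best is None or len(p) < best:
                    best, paths = len(p), [p]
                elif len(p) == best:
                    paths.append(p)
                continue
            for v in adj[u]:
                if v not in p:
                    stack.append((v, p + [v]))
        return paths

    bc = [0.0] * n
    for s in range(n):
        for t in range(n):
            if s == t:
                continue
            paths = shortest_paths(s, t)
            if not paths:
                continue
            for v in range(n):
                if v != s and v != t:
                    bc[v] += sum(v in p for p in paths) / len(paths)
    return bc

# Directed path 0 -> 1 -> 2: vertex 1 lies on the only 0-2 shortest path.
print(betweenness([[1], [2], []]))  # → [0.0, 1.0, 0.0]
```

The dynamic algorithms of the paper maintain this quantity under edge updates without recomputing it from scratch.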

Matteo Pontecorvi, Vijaya Ramachandran
When Patrolmen Become Corrupted: Monitoring a Graph Using Faulty Mobile Robots

A team of k mobile robots is deployed on a weighted graph whose edge weights represent distances. The robots perpetually move along the domain, represented by all points belonging to the graph edges, without exceeding their maximal speed. The robots need to patrol the graph by regularly visiting all points of the domain. In this paper, we consider a team of robots (patrolmen), at most f of which may be unreliable, i.e. they fail to comply with their patrolling duties. What algorithm should be followed so as to minimize the maximum time between successive visits of every edge point by a reliable patrolman? The corresponding measure of efficiency of patrolling, called idleness, has been widely accepted in the robotics literature. We extend it to the case of untrusted patrolmen; we denote by $${\mathfrak {I}}_k^f (G)$$ the maximum time that a point of the domain may remain unvisited by reliable patrolmen. The objective is to find patrolling strategies minimizing $${\mathfrak {I}}_k^f (G)$$. We investigate this problem for various classes of graphs. We design optimal algorithms for line segments, which turn out to be surprisingly different from strategies for related patrolling problems proposed in the literature. We then use these results to study the case of general graphs. For Eulerian graphs G, we give an optimal patrolling strategy with idleness $${\mathfrak {I}}^f_k(G) = (f+1) |E| / k$$, where |E| is the sum of the lengths of the edges of G. Further, we show the hardness of the problem of computing the idle time for three robots, at most one of which is faulty, by reduction from 3-edge-coloring of cubic graphs, a known NP-hard problem. A byproduct of our proof is the investigation of classes of graphs minimizing idle time (with respect to the total length of edges); an example of such a class is known in the literature under the name of Kotzig graphs.

Jurek Czyzowicz, Leszek Gasieniec, Adrian Kosowski, Evangelos Kranakis, Danny Krizanc, Najmeh Taleb
Cops and Robbers on String Graphs

The game of cops and robber, introduced by Nowakowski and Winkler in 1983, is played by two players on a graph. One controls k cops and the other a robber. The players alternate turns, moving their pieces to vertices at distance at most one. The cops win if they capture the robber; the robber wins by escaping indefinitely. The cop number of G is the smallest k such that k cops win the game. We extend the results of Gavenčiak et al. [ISAAC 2013], investigating the maximum cop number of geometric intersection graphs. Our main result shows that the maximum cop number of string graphs is at most 15, improving the previous bound of 30. We generalize this approach to string graphs on a surface of genus g to show that the maximum cop number is at most $$10g+15$$, which strengthens the result of Quilliot [J. Combin. Theory Ser. B 38, 89–92 (1985)]. For outer string graphs, we show that the maximum cop number is between 3 and 4. Our results also imply polynomial-time algorithms determining the cop number for all these graph classes.

Tomáš Gavenčiak, Przemysław Gordinowicz, Vít Jelínek, Pavel Klavík, Jan Kratochvíl
Min-Power Covering Problems

In the classical vertex cover problem, we are given a graph $$G=(V,E)$$ and we aim to find a minimum cardinality cover of the edges, i.e. a subset of the vertices $$C \subseteq V$$ such that for every edge $$e \in E$$, at least one of its extremities belongs to C. In the Min-Power-Cover version of the vertex cover problem, we consider an edge-weighted graph and we aim to find a cover of the edges and a valuation (power) of the vertices of the cover minimizing the total power of the vertices. We say that an edge e is covered if at least one of its extremities has a valuation (power) greater than or equal to the weight of e. In this paper, we consider Min-Power-Cover variants of various classical problems, including vertex cover, min cut, spanning tree and path problems.
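As an illustrative aside (not the paper's method), the min-power vertex cover objective can be rephrased as assigning each edge to one of its endpoints, since an optimal power valuation gives each chosen vertex exactly the maximum weight it must cover; a hypothetical brute-force sketch:

```python
from itertools import product

def min_power_cover(n, edges):
    """Brute force Min-Power Vertex Cover: assign each weighted edge
    (u, v, w) to one of its endpoints; a vertex's power is the maximum
    weight assigned to it; minimize the total power over all assignments."""
    best = float("inf")
    for assignment in product([0, 1], repeat=len(edges)):
        power = [0] * n
        for (u, v, w), side in zip(edges, assignment):
            x = u if side == 0 else v
            power[x] = max(power[x], w)
        best = min(best, sum(power))
    return best

# Path a-b-c with edge weights 2 and 3: covering both edges at the
# middle vertex costs max(2, 3) = 3, cheaper than covering separately.
print(min_power_cover(3, [(0, 1, 2), (1, 2, 3)]))  # → 3
```

This exponential enumeration only illustrates the objective; the paper studies the complexity and approximability of such min-power variants.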

Eric Angel, Evripidis Bampis, Vincent Chau, Alexander Kononov

Computational Geometry III

Frontmatter
Minimizing the Diameter of a Spanning Tree for Imprecise Points

We study the diameter of a spanning tree, i.e., the length of its longest simple path, under the imprecise points model, in which each point is assigned its own occurrence disk instead of an exact location. We prove that the minimum diameter of a spanning tree for n points, each of which is selected from its occurrence disk, can be computed in $$O(n^9)$$ time for arbitrary disks and in $$O(n^6)$$ time for unit ones. If the disks are disjoint, we improve the running times to $$O(n^8\log ^2n)$$ and $$O(n^5)$$, respectively. These results contrast with the fact that minimizing the sum of the edge lengths of a spanning tree for imprecise points is $$\mathsf {NP}$$-hard.

Chih-Hung Liu, Sandro Montanari
Model-Based Classification of Trajectories

We present algorithms for classifying trajectories based on a movement model parameterized by a single parameter, like the Brownian bridge movement model. Classification is the problem of assigning trajectories to classes of similar movement characteristics. For instance, the set of trajectories might be the subtrajectories resulting from segmenting a trajectory, thus identifying movement phases. We give an efficient algorithm to compute the optimal classification for a discrete set of parameter values. We also show that classification is NP-hard if the parameter values are allowed to vary continuously and present an algorithm that solves the problem in polynomial time under mild assumptions on the input.

Maike Buchin, Stef Sijben
Linear-Time Algorithms for the Farthest-Segment Voronoi Diagram and Related Tree Structures

We present linear-time algorithms to construct tree-like Voronoi diagrams with disconnected regions after the sequence of their faces along an enclosing boundary (or at infinity) is known. We focus on the farthest-segment Voronoi diagram; however, our techniques are also applicable to constructing the order-$$(k{+}1)$$ subdivision within an order-k Voronoi region of segments and to updating a nearest-neighbor Voronoi diagram of segments after deletion of one site. Although tree-structured, these diagrams exhibit properties surprisingly different from their counterparts for points. The sequence of their faces along the relevant boundary forms a Davenport-Schinzel sequence of order $$\ge 2$$. Once this sequence is known, we show how to compute the corresponding Voronoi diagram in linear time, expected or deterministic, augmenting the existing linear-time frameworks for points in convex position with the ability to handle non-point sites and multiple Voronoi faces.

Elena Khramtcova, Evanthia Papadopoulou
Unfolding Orthogonal Polyhedra with Linear Refinement

An unfolding of a polyhedron is a single connected planar piece without overlap resulting from cutting and flattening the surface of the polyhedron. Even for orthogonal polyhedra, it is known that edge-unfolding, i.e., cuts are performed only along the edges of a polyhedron, is not sufficient to guarantee a successful unfolding in general. However, if additional cuts parallel to polyhedron edges are allowed, it has been shown that every orthogonal polyhedron of genus zero admits a grid-unfolding with quadratic refinement. Using a new unfolding technique developed in this paper, we improve upon the previous result by showing that linear refinement suffices. Our approach not only requires fewer cuts but is also much simpler.

Yi-Jun Chang, Hsu-Chun Yen

Combinatorial Optimization and Approximation Algorithms III

Frontmatter
Colored Non-crossing Euclidean Steiner Forest

Given a set of k-colored points in the plane, we consider the problem of finding k trees such that each tree connects all points of one color class, no two trees cross, and the total edge length of the trees is minimized. For $$k = 1$$, this is the well-known Euclidean Steiner tree problem. For general k, a $$k\rho $$-approximation algorithm is known, where $$\rho \le 1.21$$ is the Steiner ratio. We present a PTAS for $$k=2$$, a $$(5/3+\varepsilon )$$-approximation for $$k=3$$, and two approximation algorithms for general k, with ratios $$O(\sqrt{n} \log k)$$ and $$k+\varepsilon $$.

Sergey Bereg, Krzysztof Fleszar, Philipp Kindermann, Sergey Pupyrev, Joachim Spoerhase, Alexander Wolff
On a Generalization of Nemhauser and Trotter’s Local Optimization Theorem

Nemhauser and Trotter's theorem applies to the famous Vertex Cover problem and yields both a 2-approximate solution and a problem kernel of 2k vertices. It is a famous theorem in combinatorial optimization and has been extensively studied. One way to generalize it is to extend the result to the Bounded-Degree Vertex Deletion problem. For a fixed integer $$d\ge 0$$, Bounded-Degree Vertex Deletion asks to delete at most k vertices of the input graph to make the maximum degree of the remaining graph at most d. Vertex Cover is the special case where $$d=0$$. Fellows, Guo, Moser and Niedermeier proved a generalized theorem that implies an O(k)-vertex kernel for Bounded-Degree Vertex Deletion for $$d=0$$ and 1, and, for any $$\varepsilon >0$$, an $$O(k^{1+\varepsilon })$$-vertex kernel for each $$d\ge 2$$. It has remained an open problem whether Bounded-Degree Vertex Deletion parameterized by k admits a linear-vertex kernel for each $$d\ge 3$$. In this paper, we refine the generalized Nemhauser and Trotter theorem. Our result implies a linear-vertex kernel for Bounded-Degree Vertex Deletion parameterized by k for each $$d\ge 0$$.

Mingyu Xiao
Approximation Algorithms in the Successive Hitting Set Model

We introduce the successive Hitting Set model, where the set system is not given in advance but a set generator produces the sets that contain a specific element from the universe on demand. Despite incomplete knowledge about the set system, we show that several approximation algorithms for the conventional Hitting Set problem can be adopted to perform well in this model. We describe, and experimentally investigate, several scenarios where the new model is beneficial compared to the conventional one.

Sabine Storandt

Randomized Algorithms III

Frontmatter
Generating Random Hyperbolic Graphs in Subquadratic Time

Complex networks have become increasingly popular for modeling various real-world phenomena. Realistic generative network models are important in this context as they simplify complex network research regarding data sharing, reproducibility, and scalability studies. Random hyperbolic graphs are a very promising family of geometric graphs with unit-disk neighborhoods in the hyperbolic plane. Previous work provided empirical and theoretical evidence that this generative graph model creates networks with many realistic features. In this work we provide the first generation algorithm for random hyperbolic graphs with subquadratic running time. We prove a time complexity of $$O((n^{3/2}+m) \log n)$$ with high probability for the generation process. This running time is confirmed by experimental data with our implementation. The acceleration stems primarily from the reduction of pairwise distance computations through a polar quadtree, which we adapt to hyperbolic space for this purpose and which can be of independent interest. In practice we improve the running time of a previous implementation (which allows more general neighborhoods than the unit disk) by at least two orders of magnitude this way. Networks with billions of edges can now be generated in a few minutes.
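To make the model concrete, here is a naive quadratic baseline generator of the kind the paper accelerates: sample points in a hyperbolic disk and connect pairs within hyperbolic distance R (computed via the hyperbolic law of cosines). The radial density parameter alpha and the specific sampling scheme are assumptions for illustration, not taken from the paper:

```python
import math
import random

def naive_hyperbolic_graph(n, R, alpha=1.0, seed=0):
    """Naive O(n^2) random hyperbolic graph generator: sample n points
    (r, theta) in a hyperbolic disk of radius R, then connect pairs whose
    hyperbolic distance is at most R. The paper replaces the quadratic
    all-pairs distance check with a polar quadtree."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        theta = rng.uniform(0, 2 * math.pi)
        # Inverse-transform sampling of the radial coordinate with
        # density proportional to sinh(alpha * r).
        r = math.acosh(1 + (math.cosh(alpha * R) - 1) * rng.random()) / alpha
        pts.append((r, theta))
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            (r1, t1), (r2, t2) = pts[i], pts[j]
            # Hyperbolic law of cosines; clamp for numerical safety.
            arg = (math.cosh(r1) * math.cosh(r2)
                   - math.sinh(r1) * math.sinh(r2) * math.cos(t1 - t2))
            if math.acosh(max(1.0, arg)) <= R:
                edges.append((i, j))
    return pts, edges

pts, edges = naive_hyperbolic_graph(200, R=5.0)
print(len(pts), len(edges))
```

The quadratic loop above is exactly the bottleneck that the polar quadtree of the paper avoids.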

Moritz von Looz, Henning Meyerhenke, Roman Prutkin
Provable Efficiency of Contraction Hierarchies with Randomized Preprocessing

We present a new way of analyzing Contraction Hierarchies (CH), a widely used speed-up technique for shortest path computations in road networks. In previous work, preprocessing and query times of deterministically constructed CH on road networks with n nodes were shown to be polynomial in n as well as the highway dimension h of the network and its diameter D. While h is conjectured to be polylogarithmic for road networks, a tight bound remains an open problem. We rely on the empirically justifiable assumption of the road network exhibiting small growth. We introduce a method to construct randomized Contraction Hierarchies on road networks as well as a probabilistic query routine. Our analysis reveals that randomized CH lead to sublinear search space sizes in the order of $$\sqrt{n} \log \sqrt{n}$$, auxiliary data in the order of $$n \log ^2 \sqrt{n}$$, and correct query results with high probability after a polynomial time preprocessing phase.

Stefan Funke, Sabine Storandt
Randomized Minmax Regret for Combinatorial Optimization Under Uncertainty

The minmax regret problem for combinatorial optimization under uncertainty can be viewed as a zero-sum game played between an optimizing player and an adversary, where the optimizing player selects a solution and the adversary selects costs with the intention of maximizing the regret of the player. The conventional minmax regret model considers only deterministic solutions/strategies, and minmax regret versions of most polynomially solvable problems are NP-hard. In this paper, we consider a randomized model where the optimizing player selects a probability distribution (corresponding to a mixed strategy) over solutions and the adversary selects costs with knowledge of the player’s distribution, but not its realization. We show that under this randomized model, the minmax regret version of any polynomially solvable combinatorial problem becomes polynomially solvable. This holds true for both interval and discrete scenario representations of uncertainty. Using the randomized model, we give new proofs of existing approximation algorithms for the deterministic model based on primal-dual approaches. We also determine integrality gaps of minmax regret formulations, giving tight bounds on the limits of performance gains from randomization. Finally, we prove that minmax regret problems are NP-hard under general convex uncertainty.

Andrew Mastin, Patrick Jaillet, Sang Chin

Computational Geometry IV

Frontmatter
An Optimal Algorithm for Reconstructing Point Set Order Types from Radial Orderings

Given a set P of n labeled points in the plane, the radial system of P describes, for each $$p\in P$$, the radial ordering of the other points around p. This notion is related to the order type of P, which describes the orientation (clockwise or counterclockwise) of every ordered triple of P. Given only the order type of P, it is easy to reconstruct the radial system of P, but the converse is not true. Aichholzer et al. (Reconstructing Point Set Order Types from Radial Orderings, in Proc. ISAAC 2014) defined T(R) to be the set of order types with radial system R and showed that sometimes $$|T(R)|=n-1$$. They give polynomial-time algorithms to compute T(R) when only given R. We describe an optimal $$O(n^2)$$ time algorithm for computing T(R). The algorithm constructs the convex hulls of all possible point sets with the given radial system, after which sidedness queries on point triples can be answered in constant time. This set of convex hulls can be found in O(n) time. Our results generalize to abstract order types.

Oswin Aichholzer, Vincent Kusters, Wolfgang Mulzer, Alexander Pilz, Manuel Wettstein
Improved Approximation for Fréchet Distance on c-packed Curves Matching Conditional Lower Bounds

The Fréchet distance is a well-studied and popular measure of similarity of two curves. The best known algorithms have quadratic time complexity, which has recently been shown to be optimal assuming the Strong Exponential Time Hypothesis (SETH) [Bringmann FOCS’14]. To overcome the worst-case quadratic time barrier, restricted classes of curves have been studied that attempt to capture realistic input curves. The most popular such class are c-packed curves, for which the Fréchet distance has a $$(1+{\varepsilon })$$-approximation in time $$\mathcal {O}(c n /{\varepsilon }+ c n \log n)$$ [Driemel et al. DCG’12]. In dimension $$d \geqslant 5$$ this cannot be improved to $$\mathcal {O}((cn)^{1-\delta })$$ for any $$\delta > 0$$ unless SETH fails [Bringmann FOCS’14]. In this paper, exploiting properties that prevent stronger lower bounds, we present an improved algorithm with time complexity $$\tilde{\mathcal {O}}(cn/\sqrt{{\varepsilon }})$$. This improves upon the algorithm by Driemel et al. for any $${\varepsilon }\ll 1/\log n$$, and matches the conditional lower bound (up to lower order factors of the form $$n^{o(1)}$$).

Karl Bringmann, Marvin Künnemann
Computing the Gromov-Hausdorff Distance for Metric Trees

The Gromov-Hausdorff distance is a natural way to measure the distance between two metric spaces. We give the first proof of hardness and the first non-trivial approximation algorithm for computing the Gromov-Hausdorff distance for geodesic metrics in trees. Specifically, we prove it is $$\mathrm {NP}$$-hard to approximate the Gromov-Hausdorff distance better than a factor of 3. We complement this result by providing a polynomial time $$O(\min \{n, \sqrt{rn}\})$$-approximation algorithm, where r is the ratio of the longest edge length in both trees to the shortest edge length. For metric trees with unit length edges, this yields an $$O(\sqrt{n})$$-approximation algorithm.

Pankaj K. Agarwal, Kyle Fox, Abhinandan Nath, Anastasios Sidiropoulos, Yusu Wang
The VC-Dimension of Visibility on the Boundary of a Simple Polygon

In this paper, we prove that the VC-Dimension of visibility on the boundary of a simple polygon is exactly 6. Our result is the first tight bound for any variant of the VC-Dimension problem regarding simple polygons. Our upper bound proof is based on several structural lemmas which may be of independent interest to researchers studying geometric visibility.

Matt Gibson, Erik Krohn, Qing Wang

Computational Complexity I

Frontmatter
Quantum Bit Commitment with Application in Quantum Zero-Knowledge Proof (Extended Abstract)

In this work, we study the formalization and construction of a non-interactive statistically binding quantum bit commitment scheme (QBC), as well as its application in quantum zero-knowledge (QZK) proofs. We explore the fully quantum model, where both computation and communication may be quantum. While most of the proofs here follow straightforwardly from previous works, we make two technical contributions. First, we show how to use the reversibility of quantum computation to construct non-interactive QBC. Second, we identify a new issue caused by quantum binding in the security analysis and give our idea for circumventing it, which may prove useful elsewhere.

Jun Yan, Jian Weng, Dongdai Lin, Yujuan Quan
Effectiveness of Structural Restrictions for Hybrid CSPs

Constraint Satisfaction Problem (CSP) is a fundamental algorithmic problem that appears in many areas of Computer Science. It can be equivalently stated as computing a homomorphism $${\mathbf {R}\rightarrow \varvec{\Gamma }}$$ between two relational structures, e.g. between two directed graphs. Analyzing its complexity has been a prominent research direction, especially for the fixed template CSPs where the right side $$\varvec{\Gamma }$$ is fixed and the left side $$\mathbf {R}$$ is unconstrained. Far fewer results are known for the hybrid setting that restricts both sides simultaneously. It assumes that $$\mathbf {R}$$ belongs to a certain class of relational structures (called a structural restriction in this paper). We study which structural restrictions are effective, i.e., those for which there exists a fixed template $$\varvec{\Gamma }$$ (from a certain class of languages) such that the problem is tractable when $$\mathbf {R}$$ is restricted, and NP-hard otherwise. We provide a characterization for structural restrictions that are closed under inverse homomorphisms. The criterion is based on the chromatic number of a relational structure defined in this paper; it generalizes the standard chromatic number of a graph. As our main tool, we use the algebraic machinery developed for fixed template CSPs. To apply it to our case, we introduce a new construction called a “lifted language”. We also give a characterization for structural restrictions corresponding to minor-closed families of graphs, extend results to certain Valued CSPs (namely conservative valued languages), and state implications for (valued) CSPs with ordered variables and for the maximum weight independent set problem on some restricted families of graphs.

Vladimir Kolmogorov, Michal Rolínek, Rustem Takhanov
Polynomial-Time Isomorphism Test of Groups that are Tame Extensions
(Extended Abstract)

We give new polynomial-time algorithms for testing isomorphism of a class of groups given by multiplication tables (GpI). Two results (Cannon & Holt, J. Symb. Comput. 2003; Babai, Codenotti & Qiao, ICALP 2012) imply that GpI reduces to the following: given groups G, H with characteristic subgroups of the same type and isomorphic to $$\mathbb {Z}_p^d$$, and given the coset of isomorphisms $$\mathrm {Iso}(G/\mathbb {Z}_p^d, H/\mathbb {Z}_p^d)$$, compute $$\mathrm {Iso}(G, H)$$ in time $$\mathrm {poly}(|G|)$$. Babai & Qiao (STACS 2012) solved this problem when a Sylow p-subgroup of $$G/\mathbb {Z}_p^d$$ is trivial. In this paper, we solve the preceding problem in the so-called “tame” case, i.e., when a Sylow p-subgroup of $$G/\mathbb {Z}_p^d$$ is cyclic, dihedral, semi-dihedral, or generalized quaternion. These cases correspond exactly to the group algebra $$\overline{\mathbb {F}}_p[G/\mathbb {Z}_p^d]$$ being of tame type, as in the celebrated tame-wild dichotomy in representation theory. We then solve new cases of GpI in polynomial time. Our result relies crucially on the divide-and-conquer strategy proposed earlier by the authors (CCC 2014), which splits GpI into two problems, one on group actions (representations), and one on group cohomology. Based on this strategy, we combine permutation group and representation algorithms with new mathematical results, including bounds on the number of indecomposable representations of groups in the tame case, and on the size of their cohomology groups. Finally, we note that when a group extension is not tame, the preceding bounds do not hold. This suggests a precise sense in which the tame-wild dichotomy from representation theory may also be a key barrier to cross to put GpI into $$\mathrm{P} $$.

Joshua A. Grochow, Youming Qiao
Quantum Algorithm for Triangle Finding in Sparse Graphs

This paper presents a quantum algorithm for triangle finding over sparse graphs that improves over the previous best quantum algorithm for this task by Buhrman et al. [SIAM Journal on Computing, 2005]. Our algorithm is based on the recent $$\tilde{O}(n^{5/4})$$-query algorithm given by Le Gall [FOCS 2014] for triangle finding over dense graphs (here n denotes the number of vertices in the graph). We show in particular that triangle finding can be solved with $$O(n^{5/4-\epsilon })$$ queries for some constant $$\epsilon >0$$ whenever the graph has at most $$O(n^{2-c})$$ edges for some constant $$c>0$$.

François Le Gall, Shogo Nakajima

Graph Drawing and Planar Graphs

Frontmatter
On Hardness of the Joint Crossing Number

The Joint Crossing Number problem asks for a simultaneous embedding of two disjoint graphs into one surface such that the number of edge crossings (between the two graphs) is minimized. It was introduced by Negami in 2001 in connection with diagonal flips in triangulations of surfaces, and subsequently investigated in a general form for small-genus surfaces. We prove that all of the commonly considered variants of this problem are NP-hard already in the orientable surface of genus 6, by a reduction from a special variant of the anchored crossing number problem of Cabello and Mohar.

Petr Hliněný, Gelasio Salazar
An $$O(n^{\epsilon })$$ Space and Polynomial Time Algorithm for Reachability in Directed Layered Planar Graphs

Given a graph G and two vertices s and t in it, graph reachability is the problem of checking whether there exists a path from s to t in G. We show that reachability in directed layered planar graphs can be decided in polynomial time and $$O(n^\epsilon )$$ space, for any $$\epsilon > 0$$. The previous best known space bound for this problem with polynomial time was approximately $$O(\sqrt{n})$$ space [1]. Deciding graph reachability in SC is an important open question in complexity theory, and in this paper we make progress towards resolving this question.

Diptarka Chakraborty, Raghunath Tewari
Constant Query Time $$(1+\epsilon )$$ -Approximate Distance Oracle for Planar Graphs

We give a $$(1+\epsilon )$$-approximate distance oracle with O(1) query time for an undirected planar graph G with n vertices and non-negative edge lengths. For $$\epsilon >0$$ and any two vertices u and v in G, our oracle gives a distance $$\tilde{d}(u,v)$$ with stretch $$(1+\epsilon )$$ in O(1) time. The oracle has size $$O(n\log n (\log n/\epsilon +f(\epsilon )))$$ and pre-processing time $$O(n\log n(\log ^3 n/\epsilon ^2+f(\epsilon )))$$, where $$f(\epsilon )=2^{O(1/\epsilon )}$$. This is the first $$(1+\epsilon )$$-approximate distance oracle with O(1) query time independent of $$\epsilon $$ and with size and pre-processing time nearly linear in n; it improves upon the $$O(1/\epsilon )$$ query time of previous $$(1+\epsilon )$$-approximate distance oracles with size nearly linear in n.

Qian-Ping Gu, Gengchun Xu
Partitioning Graph Drawings and Triangulated Simple Polygons into Greedily Routable Regions

A greedily routable region (GRR) is a closed subset of $$\mathbb R^2$$, in which each destination point can be reached from each starting point by choosing the direction with maximum reduction of the distance to the destination in each point of the path. Recently, Tan and Kermarrec proposed a geographic routing protocol for dense wireless sensor networks based on decomposing the network area into a small number of interior-disjoint GRRs. They showed that minimum decomposition is NP-hard for polygons with holes. We consider minimum GRR decomposition for plane straight-line drawings of graphs. Here, GRRs coincide with self-approaching drawings of trees, a drawing style which has become a popular research topic in graph drawing. We show that minimum decomposition is still NP-hard for graphs with cycles, but can be solved optimally for trees in polynomial time. Additionally, we give a 2-approximation for simple polygons, if a given triangulation has to be respected.

Martin Nöllenburg, Roman Prutkin, Ignaz Rutter

Computational Complexity II

Frontmatter
A New Approximate Min-Max Theorem with Applications in Cryptography

We propose a novel proof technique that can be applied to attack a broad class of problems in computational complexity, when switching the order of universal and existential quantifiers is helpful. Our approach combines the standard min-max theorem and convex approximation techniques, offering quantitative improvements over the standard way of using min-max theorems as well as more concise and elegant proofs.

Maciej Skórski
Give Me Another One!

We investigate the complexity of an optimization problem in Boolean propositional logic related to information theory: Given a conjunctive formula over a set of relations, find a satisfying assignment with minimal Hamming distance to a given assignment that satisfies the formula ($$\mathsf {NearestOtherSolution}$$, $$\mathsf {NOSol}$$). We present a complete classification with respect to the relations admitted in the formula. We give polynomial-time algorithms for several classes of constraint languages. For all other cases we prove hardness or completeness regarding $$\mathrm {poly{\text {-}}APX}$$, $$\mathrm {NPO}$$, or equivalence to a well-known hard optimization problem.
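The NOSol objective is easy to state by brute force; as a hypothetical illustration over plain 3-SAT-style clauses (the paper works with general constraint languages):

```python
from itertools import product

def nearest_other_solution(clauses, n, assignment):
    """Brute force NOSol: among all satisfying assignments different from
    the given one, return one with minimal Hamming distance to it.
    A clause is a tuple of literals; literal +i / -i means x_i true / false."""
    def satisfies(a):
        return all(any(a[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

    best = None
    for a in product([False, True], repeat=n):
        if a == assignment or not satisfies(a):
            continue
        d = sum(x != y for x, y in zip(a, assignment))
        if best is None or d < best[0]:
            best = (d, a)
    return best

# Formula (x1 or x2) and (not x1 or x3); given solution (True, False, True).
# Flipping x2 alone yields another solution at Hamming distance 1.
print(nearest_other_solution([(1, 2), (-1, 3)], 3, (True, False, True)))
```

The classification in the paper determines for which constraint languages this exponential search can be replaced by a polynomial-time algorithm.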

Mike Behrisch, Miki Hermann, Stefan Mengel, Gernot Salzer
On the Complexity of Computing Prime Tables

Many large arithmetic computations rely on tables of all primes less than n. For example, the fastest algorithm for computing n! takes time $$\mathcal {O}(\mathrm {M}(n\log n) + \mathrm {P}(n))$$, where $$\mathrm {M}(n)$$ is the time to multiply two n-bit numbers, and $$\mathrm {P}(n)$$ is the time to compute a prime table up to n. The fastest algorithm to compute $$\binom{n}{n/2}$$ also uses a prime table. We show that it takes time $$\mathcal {O}(\mathrm {M}(n) + \mathrm {P}(n))$$. In various models, the best bound on $$\mathrm {P}(n)$$ is greater than $$\mathrm {M}(n\log n)$$, given advances in the complexity of multiplication [8, 13]. In this paper, we give two algorithms for computing prime tables and analyze their complexity on a multitape Turing machine, one of the standard models for analyzing such algorithms. These two algorithms run in time $$\mathcal {O}(\mathrm {M}(n\log n))$$ and $$\mathcal {O}(n\log ^2 n/\log \log n)$$, respectively. We achieve our results by speeding up Atkin’s sieve. Given that the current best bound on $$\mathrm {M}(n)$$ is $$n\log n\, 2^{\mathcal {O}(\log ^*n)}$$, the second algorithm is faster and improves on the previous best algorithm by a factor of $$\log ^2\log n$$. Our fast prime-table algorithms speed up both the computation of n! and $$\binom{n}{n/2}$$. Finally, we show that computing the factorial takes $$\Omega (\mathrm {M}(n \log ^{4/7 - \varepsilon } n))$$ time for any constant $$\varepsilon > 0$$, assuming only multiplication is allowed.
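For concreteness, here is the classical sieve of Eratosthenes as a simple baseline for computing a prime table (the paper instead speeds up the more intricate sieve of Atkin; this sketch is only illustrative):

```python
def prime_table(n):
    """Sieve of Eratosthenes: return the table of all primes below n.
    Runs in O(n log log n) arithmetic operations on a RAM; the paper
    analyzes sieves in the multitape Turing machine model."""
    if n < 3:
        return []
    is_prime = [True] * n
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            # Cross out multiples of p, starting at p*p (smaller multiples
            # were already crossed out by smaller prime factors).
            for q in range(p * p, n, p):
                is_prime[q] = False
    return [i for i, b in enumerate(is_prime) if b]

print(prime_table(30))  # → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Such a table is exactly the ingredient $$\mathrm {P}(n)$$ consumed by the fast factorial and binomial algorithms mentioned above.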

Martín Farach-Colton, Meng-Tsung Tsai
Game Values and Computational Complexity: An Analysis via Black-White Combinatorial Games

A black-white combinatorial game is a two-person game in which the pieces are colored either black or white. The players alternate moving or taking elements of a specific color designated to them before the game begins. A player loses the game if there is no legal move available for his color on his turn. We first show that some black-white versions of combinatorial games can only assume combinatorial game values that are numbers, which indicates that the game has many nice properties making it easier to solve. Indeed, numeric games have only previously been shown to be hard for $$\mathsf{NP}$$. We exhibit a language of natural numeric games (specifically, black-white poset games) that is $$\mathsf{PSPACE}$$-complete, closing the gap in complexity for the first time between these numeric games and the large collection of combinatorial games that are known to be $$\mathsf{PSPACE}$$-complete. In this vein, we also show that the game of Col played on general graphs is also $$\mathsf{PSPACE}$$-complete despite the fact that it can only assume two very simple game values. This is interesting because its natural black-white variant is numeric but only complete for $$\mathsf{P}^{\mathsf{NP}[\log ]}$$. Finally, we show that the problem of determining the winner of black-white Graph Nim is in $$\mathsf{P}$$ using a flow-based technique.

Stephen A. Fenner, Daniel Grier, Jochen Messner, Luke Schaeffer, Thomas Thierauf

Online and Streaming Algorithms

Frontmatter
Run Generation Revisited: What Goes Up May or May Not Come Down

We revisit the classic problem of run generation. Run generation is the first phase of external-memory sorting, where the objective is to scan through the data, reorder elements using a small buffer of size M, and output runs (contiguously sorted chunks of elements) that are as long as possible. We develop algorithms for minimizing the total number of runs (or equivalently, maximizing the average run length) when the runs are allowed to be sorted or reverse sorted. We study the problem in the online setting, both with and without resource augmentation, and in the offline setting. First, we analyze alternating-up-down replacement selection (runs alternate between sorted and reverse sorted), which was studied by Knuth as far back as 1963. We show that this simple policy is asymptotically optimal. Next, we give online algorithms having smaller competitive ratios with resource augmentation. We demonstrate that performance can also be improved with a small amount of foresight. Lastly, we present algorithms tailored for “nearly sorted” inputs which are guaranteed to have sufficiently long optimal runs.
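Classic ascending-only replacement selection, which the alternating-up-down policy generalizes, keeps the buffer in a min-heap and defers any element smaller than the last output to the next run. A minimal in-memory simulation (an illustrative sketch, not the paper's algorithm):

```python
import heapq

def replacement_selection(stream, M):
    """Simulate ascending-only replacement selection with a buffer of size M.
    Returns the list of sorted runs produced."""
    stream = iter(stream)
    heap = []
    # Fill the buffer with up to M elements.
    for x in stream:
        heapq.heappush(heap, x)
        if len(heap) == M:
            break
    runs, run, deferred = [], [], []
    while heap:
        smallest = heapq.heappop(heap)
        run.append(smallest)
        nxt = next(stream, None)
        if nxt is not None:
            if nxt >= smallest:
                heapq.heappush(heap, nxt)  # still fits in the current run
            else:
                deferred.append(nxt)       # must wait for the next run
        if not heap:                       # current run is finished
            runs.append(run)
            run = []
            heap, deferred = deferred, []
            heapq.heapify(heap)
    if run:
        runs.append(run)
    return runs
```

With a buffer of size M, a run produced this way has expected length 2M on random input, which is why replacement selection beats naive load-sort-flush run generation.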

Michael A. Bender, Samuel McCauley, Andrew McGregor, Shikha Singh, Hoa T. Vu
Streaming Verification in Data Analysis

Streaming interactive proofs (SIPs) are a framework to reason about outsourced computation, where a data owner (the verifier) outsources a computation to the cloud (the prover), but wishes to verify the correctness of the solution provided by the cloud service. In this paper we present streaming interactive proofs for problems in data analysis. We present protocols for clustering and shape fitting problems, as well as an improved protocol for rectangular matrix multiplication. The latter can in turn be used to verify k eigenvectors of a (streamed) $$n \times n$$ matrix. In general our solutions use polylogarithmic rounds of communication and polylogarithmic total communication and verifier space. For special cases (when optimality certificates can be verified easily), we present constant-round protocols with similar costs. For rectangular matrix multiplication and eigenvector verification, our protocols work in the more restricted annotated data streaming model, and use sublinear (but not polylogarithmic) communication.

Samira Daruki, Justin Thaler, Suresh Venkatasubramanian
All-Around Near-Optimal Solutions for the Online Bin Packing Problem

In this paper we present algorithms with optimal average-case and close-to-best known worst-case performance for the classic online bin packing problem. It has long been observed that known bin packing algorithms with optimal average-case performance are not optimal in the worst case. In particular, First Fit and Best Fit have an optimal asymptotic average-case ratio of 1 but a worst-case competitive ratio of 1.7. The competitive ratio can be improved to 1.691 using the Harmonic algorithm, and further variations of this algorithm can push the competitive ratio down to 1.588. However, these algorithms perform poorly on average; in particular, the Harmonic algorithm has an average-case ratio of 1.27. In this paper, we first introduce a simple algorithm which we term Harmonic Match. This algorithm performs as well as Best Fit on average, i.e., it has an average-case ratio of 1. Moreover, the competitive ratio of the algorithm is as good as that of Harmonic, i.e., it converges to 1.691, which is an improvement over Best Fit and First Fit. We also introduce a different algorithm, termed Refined Harmonic Match, which achieves an improved competitive ratio of 1.636 while maintaining the good average-case performance of Harmonic Match and Best Fit. Our experimental evaluations show that our proposed algorithms have average-case performance comparable to Best Fit and First Fit, and this holds also for sequences that follow distributions other than the uniform distribution.
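For reference, the First Fit rule mentioned above places each arriving item into the first open bin that still has room, opening a new bin only when none fits; a minimal sketch with bins of unit capacity:

```python
def first_fit(items, capacity=1.0):
    """Pack items online: each item goes into the first bin with enough room.
    Returns the number of bins used."""
    bins = []  # remaining free space of each open bin, in opening order
    for item in items:
        for i, free in enumerate(bins):
            if item <= free + 1e-9:      # tolerate floating-point rounding
                bins[i] = free - item
                break
        else:
            bins.append(capacity - item)  # no bin fits: open a new one
    return len(bins)
```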

Shahin Kamali, Alejandro López-Ortiz
Serving Online Requests with Mobile Servers

We study an online problem in which mobile servers have to be moved in order to efficiently serve a set of online requests. More formally, there is a set of n nodes and a set of k mobile servers that are placed at some of the nodes. Each node can potentially host several servers and the servers can be moved between the nodes. There are requests $$1,2,\ldots $$ that are adversarially issued at nodes one at a time, where a request issued at time t needs to be served at all times $$t' \ge t$$. The cost for serving the requests is a function of the number of servers and requests at the different nodes. The requirements on how to serve the requests are governed by two parameters $$\alpha \ge 1$$ and $$\beta \ge 0$$. An algorithm needs to guarantee that at all times, the total service cost remains within a multiplicative factor $$\alpha $$ and an additive term $$\beta $$ of the current optimal service cost. We consider online algorithms for two different minimization objectives. We first consider the natural problem of minimizing the total number of server movements. We show that in this case, for every k, the competitive ratio of every deterministic online algorithm needs to be at least $$\varOmega (n)$$. Given this negative result, we then extend the minimization objective to also include the current service cost. We give almost tight bounds on the competitive ratio of the online problem where one needs to minimize the sum of the total number of movements and the current service cost. In particular, we show that at the cost of an additional additive term which is roughly linear in k, it is possible to achieve a multiplicative competitive ratio of $$1+\varepsilon $$ for every constant $$\varepsilon >0$$.

Abdolhamid Ghodselahi, Fabian Kuhn

String and DNA Algorithms

Frontmatter
An In-place Framework for Exact and Approximate Shortest Unique Substring Queries

We revisit the exact shortest unique substring (SUS) finding problem, and propose its approximate version where mismatches are allowed, motivated by its applications in subfields such as computational biology. We design a generic in-place framework that solves both the exact and the approximate k-mismatch SUS finding problems, using the minimum 2n memory words plus n bytes of space, where n is the size of the input string. Using this in-place framework, we can find the exact and the approximate k-mismatch SUS for every string position in a total of O(n) and $$O(n^2)$$ time, respectively, regardless of the value of k. Our framework does not involve any compressed or succinct data structures and is thus practical and easy to implement.
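To make the exact problem concrete (this brute force is a baseline, not the paper's in-place framework): for a query position i, an exact SUS is a shortest substring that covers position i and occurs exactly once in the string.

```python
def occurrences(s, sub):
    """Count occurrences of sub in s, allowing overlaps."""
    count, start = 0, 0
    while True:
        idx = s.find(sub, start)
        if idx == -1:
            return count
        count += 1
        start = idx + 1

def shortest_unique_substring_at(s, i):
    """Return a shortest substring of s covering position i that occurs
    exactly once in s (brute force, cubic time per query)."""
    n = len(s)
    best = None
    for start in range(i + 1):
        for end in range(i + 1, n + 1):
            sub = s[start:end]
            if best is not None and len(sub) >= len(best):
                continue  # cannot improve on the current best
            if occurrences(s, sub) == 1:
                best = sub
    return best
```

Note that occurrence counting must allow overlaps: in "aaa", the substring "aa" occurs twice, so it is not unique.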

Wing-Kai Hon, Sharma V. Thankachan, Bojian Xu
Inferring Strings from Full Abelian Periods

Strings u, v are said to be Abelian equivalent if u is a permutation of the characters appearing in v. A string w is said to have a full Abelian period p if $$w = w_1 \cdots w_k$$, where all $$w_i$$’s are of length p each and are all Abelian equivalent. This paper studies reverse-engineering problems on full Abelian periods. Given a positive integer n and a set D of divisors of n, we show how to compute in O(n) time the lexicographically smallest string of length n which has all elements of D as its full Abelian periods and has the minimum number of full Abelian periods not in D. Moreover, we give an algorithm to enumerate all such strings in amortized constant time per output after O(n)-time preprocessing. Also, we show how to enumerate the strings which have all elements of D as their full Abelian periods in amortized constant time per output after O(n)-time preprocessing.
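By the definition above, checking whether p is a full Abelian period of w reduces to comparing the character counts of consecutive length-p blocks; a direct sketch (illustrative only, not the paper's O(n) reverse-engineering algorithms):

```python
from collections import Counter

def is_full_abelian_period(w, p):
    """Return True iff p divides |w| and all length-p blocks of w are
    Abelian equivalent (permutations of each other)."""
    n = len(w)
    if p <= 0 or n % p != 0:
        return False
    first = Counter(w[:p])  # character multiset of the first block
    return all(Counter(w[i:i + p]) == first for i in range(p, n, p))
```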

Makoto Nishida, Tomohiro I., Shunsuke Inenaga, Hideo Bannai, Masayuki Takeda
Toehold DNA Languages are Regular (Extended Abstract)

We explore a method of designing algorithms using two types of DNA strands, namely rule strands (rules) and input strands. Rules are fixed in advance, and their task is to bind with the input strands in order to produce an output. We present algorithms for divisibility and primality testing as well as for square root computation. We measure the complexity of our algorithms in terms of the necessary rule strands. Our three algorithms utilize a super-constant number of complex rules. Can one solve interesting problems using only a few—or at least simple—rule strands? Our main result proves that restricting oneself to a constant number of rule strands is equivalent to deciding regular languages. More precisely, we show that an algorithm (possibly using infinitely many rule strands of arbitrary length) can decide only regular languages if the structure of the rules themselves is simple, i.e., if the rule strands constitute a regular language.

Sebastian Brandt, Nicolas Mattia, Jochen Seidel, Roger Wattenhofer
Backmatter
Metadata
Title: Algorithms and Computation
Edited by: Khaled Elbassioni, Kazuhisa Makino
Copyright Year: 2015
Publisher: Springer Berlin Heidelberg
Electronic ISBN: 978-3-662-48971-0
Print ISBN: 978-3-662-48970-3
DOI: https://doi.org/10.1007/978-3-662-48971-0