
2011 | Book

WALCOM: Algorithms and Computation

5th International Workshop, WALCOM 2011, New Delhi, India, February 18-20, 2011. Proceedings

Edited by: Naoki Katoh, Amit Kumar

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 5th International Workshop on Algorithms and Computation, WALCOM 2011, held in New Delhi, India, in February 2011. The 20 papers presented in this volume were carefully reviewed and selected from 57 submissions. The papers are grouped in topical sections on approximation algorithms, hardness, algorithm engineering, computational geometry, string algorithms, and graph algorithms.

Table of contents

Frontmatter

Invited Talks

Geometry and Topology from Point Cloud Data
Abstract
Many applications in science and engineering require modeling and analyzing a geometric shape or a function defined on it for scientific simulations, rapid prototyping, visualizations, and various other purposes. Often the geometry in question is only available through a set of sample points called point cloud data. These point clouds can either be generated by scanning a physical object, or be the output of a procedure that implicitly probes a hidden geometry, often a low-dimensional manifold embedded in a relatively high-dimensional Euclidean space.
Tamal Krishna Dey
The Disjoint Paths Problem: Algorithm and Structure
Abstract
We consider the following well-known problem, which is called the disjoint paths problem.
Input: A graph G with n vertices and m edges, and k pairs of vertices \((s_1,t_1),(s_2,t_2),\ldots,(s_k,t_k)\) in G.
Output: (Vertex- or edge-) disjoint paths \(P_1, P_2, \ldots, P_k\) in G such that \(P_i\) joins \(s_i\) and \(t_i\) for \(i = 1,2,\ldots,k\).
This is certainly a central problem in algorithmic graph theory and combinatorial optimization. See the surveys [9, 31]. It has attracted attention in the contexts of transportation networks, VLSI layout, and virtual circuit routing in high-speed networks or the Internet. In VLSI layout, a basic technical problem is to interconnect certain prescribed “channels” on the chip such that wires belonging to different pins do not touch each other. In its simplest form, the problem mathematically amounts to finding vertex-disjoint trees or vertex-disjoint paths in a graph, each connecting a given set of vertices.
Ken-ichi Kawarabayashi
The Physarum Computer
Abstract
Physarum is a slime mold. It was observed over the past 10 years that the mold is able to solve shortest path problems and to construct good Steiner networks [2,4]. In a nutshell, the shortest path experiment is as follows: A maze is built and the mold is made to cover the entire maze. Food is then provided at two positions s and t and the evolution of the slime is observed. Over time, the slime retracts to the shortest s-t-path.
Kurt Mehlhorn

Approximation Algorithms

Maximum Betweenness Centrality: Approximability and Tractable Cases
Abstract
The Maximum Betweenness Centrality problem (MBC) can be defined as follows. Given a graph, find a k-element node set C that maximizes the probability of detecting communication between a pair of nodes s and t chosen uniformly at random. It is assumed that the communication between s and t is realized along a shortest s-t path which is, again, selected uniformly at random. The communication is detected if the communication path contains a node of C.
Recently, Dolev et al. (2009) showed that MBC is NP-hard and gave a (1 − 1/e)-approximation using a greedy approach. We provide a reduction of MBC to Maximum Coverage that simplifies the analysis of the algorithm of Dolev et al. considerably. Our reduction allows us to obtain a new algorithm with the same approximation ratio for a (generalized) budgeted version of MBC. We provide tight examples showing that the analyses of both algorithms are best possible. Moreover, we prove that MBC is APX-complete and provide an exact polynomial-time algorithm for MBC on tree graphs.
Martin Fink, Joachim Spoerhase
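The abstract above relies on the classic greedy algorithm for Maximum Coverage, which is what yields the (1 − 1/e) guarantee. The following Python sketch shows only that generic greedy on a toy instance with assumed inputs; it is not the authors' reduction or their budgeted variant.

```python
def greedy_max_coverage(sets, k):
    """Classic greedy for Maximum Coverage: pick k sets, each time choosing the
    one that covers the most still-uncovered elements ((1 - 1/e) guarantee)."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))
        chosen.append(best)
        covered |= best
    return chosen, covered

# Toy instance: elements stand for s-t pairs, sets for the pairs a node would detect.
sets = [{1, 2, 3}, {3, 4}, {4, 5}, {2, 5}]
print(greedy_max_coverage(sets, k=2))  # ([{1, 2, 3}, {4, 5}], {1, 2, 3, 4, 5})
```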
Approximation Algorithms for Minimum Chain Vertex Deletion
Abstract
A bipartite graph G = (A ∪ B, E) is called a chain graph if there exists an ordering \(\rho = \langle v_1, v_2, \ldots, v_n \rangle\) of the vertices in \(A = \{v_1, \ldots, v_n\}\) such that \(N(v_1) \subseteq N(v_2) \subseteq \cdots \subseteq N(v_n)\). Here N(v) denotes the set of neighbors of the vertex v in G. We call the vertex-deletion problem corresponding to the class of chain graphs the Minimum Chain Vertex Deletion problem, and the induced subgraph problem corresponding to chain graphs the Maximum Induced Chain Subgraph problem. Weighted versions of these problems are obtained by assigning positive weights to the vertices and asking for a minimum-weight deletion set whose removal leaves a chain graph, or for a maximum-weight induced chain subgraph. Using a rounding technique, we show that the weighted version of Minimum Chain Vertex Deletion has a factor-2 approximation algorithm on bipartite graphs. We also give a factor-3/2 approximation algorithm for the weighted version of Maximum Induced Chain Subgraph on bipartite graphs, and we show that both problems are APX-complete.
Mrinal Kumar, Sounaka Mishra, N. Safina Devi, Saket Saurabh
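To make the nested-neighborhood definition of a chain graph concrete, here is a small illustrative Python check (not part of the paper's algorithms): sort the neighborhoods of the A-side vertices by size and verify that consecutive ones are nested.

```python
def is_chain_graph(neighborhoods):
    """neighborhoods: dict mapping each vertex of A to its set of neighbors in B.
    The graph is a chain graph iff the neighborhoods can be ordered so that
    N(v1) is a subset of N(v2) ... is a subset of N(vn)."""
    ordered = sorted(neighborhoods.values(), key=len)
    return all(a <= b for a, b in zip(ordered, ordered[1:]))

print(is_chain_graph({'u': {1}, 'v': {1, 2}, 'w': {1, 2, 3}}))  # True
print(is_chain_graph({'u': {1}, 'v': {2}}))                     # False
```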
Oblivious Buy-at-Bulk in Planar Graphs
Abstract
In the oblivious buy-at-bulk network design problem in a graph, the task is to compute a fixed set of paths for every source–destination pair in the graph, such that any set of demands can be routed along these paths. The demands may be aggregated at intermediate edges, where the fusion cost is specified by a canonical (non-negative concave) function f. We give a novel algorithm for planar graphs which is oblivious with respect to the demands and also oblivious with respect to the fusion function f. The algorithm is deterministic, computes the fixed set of paths in polynomial time, and guarantees an O(log n) approximation ratio for any set of demands and any canonical fusion function f, where n is the number of nodes. The algorithm is asymptotically optimal, since it is known that this problem cannot be approximated within a ratio better than Ω(log n). To our knowledge, this is the first tight analysis for planar graphs, and it improves the approximation ratio by a factor of log n with respect to previously known results.
Srivathsan Srinivasagopalan, Costas Busch, S. Sitharama Iyengar

Hardness

The Complexity of Acyclic Subhypergraph Problems
Abstract
We investigate the computational complexity of two decision problems concerning the existence of certain acyclic subhypergraphs of a given hypergraph, namely the Spanning Acyclic Subhypergraph problem and the Maximum Acyclic Subhypergraph problem. The former is about the existence of an acyclic subhypergraph such that each vertex of the input hypergraph is contained in at least one hyperedge of the subhypergraph. The latter is about the existence of an acyclic subhypergraph with k hyperedges where k is part of the input. For each of these problems, we consider different notions of acyclicity of hypergraphs: Berge-acyclicity, γ-acyclicity, β-acyclicity and α-acyclicity. We are also concerned with the size of the hyperedges of the input hypergraph. Depending on these two parameters (notion of acyclicity and size of the hyperedges), we try to determine which instances of the two problems are in PRNC and which are NP-complete.
David Duris, Yann Strozecki
Inapproximability of b-Matching in k-Uniform Hypergraphs
Abstract
In this paper we analyze the complexity of the maximum cardinality b-matching problem in k-uniform hypergraphs. b-matching is the following problem: for a given b ∈ ℕ and a hypergraph \(\mathcal{H}=(V,\mathcal{E})\), |V| = n, a subset \(M\subseteq \mathcal{E}\) of maximum cardinality is sought such that no vertex is contained in more than b hyperedges of M. The b-matching problem is a prototype of packing integer programs with right-hand side b ≥ 1 and has been studied in combinatorics and optimization. Positive approximation results are known for b ≥ 1 and k-uniform hypergraphs (Krysta 2005), and constant-factor approximations are known for general hypergraphs when b = Ω(log n) (Srinivasan 1999, Srivastav, Stangier 1996, Raghavan, Thompson 1987), but the inapproximability of the problem has been studied only for b = 1 and k-uniform and k-partite hypergraphs (Hazan, Safra, Schwartz 2006). Thus the range b ∈ [2, log n] is almost unexplored. In this paper we give the first inapproximability result for k-uniform hypergraphs: for every 2 ≤ b ≤ k/log k there is no polynomial-time approximation within any ratio smaller than \(\Omega(\frac{k}{b\log{k}})\), unless \(\mathcal{P}=\mathcal{NP}\). Our result generalizes the result of Hazan, Safra and Schwartz from b = 1 to any fixed 2 ≤ b ≤ k/log k and k-uniform hypergraphs. The crucial point is that for the first time we can see how b influences the non-approximability. We consider this result a necessary step toward further understanding the non-approximability of general packing problems. It is notable that the proof of Hazan, Safra and Schwartz cannot be lifted to b ≥ 2. In fact, some new techniques and ingredients are required: a probabilistic proof of the existence of a hypergraph with “almost” disjoint b-matchings, where dependent events have to be decoupled (in contrast to Hazan et al.), and the generation of a certain sparse hypergraph.
Mourad El Ouali, Antje Fretwurst, Anand Srivastav
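For intuition about the packing constraint in the abstract above, the following Python sketch greedily builds some feasible b-matching (no vertex in more than b chosen hyperedges). It is only an illustration of the constraint on assumed inputs and carries none of the (in)approximability analysis discussed in the paper.

```python
from collections import Counter

def greedy_b_matching(hyperedges, b):
    """Greedily add hyperedges while every vertex stays in at most b of them."""
    load = Counter()          # how many chosen hyperedges contain each vertex
    matching = []
    for e in hyperedges:
        if all(load[v] < b for v in e):
            matching.append(e)
            for v in e:
                load[v] += 1
    return matching

edges = [{1, 2, 3}, {2, 3, 4}, {1, 4, 5}, {3, 5, 6}]
print(greedy_b_matching(edges, b=1))  # a 1-matching: pairwise disjoint hyperedges
print(greedy_b_matching(edges, b=2))
```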
k-Level Crossing Minimization Is NP-Hard for Trees
Abstract
The k-level crossing minimization problem for graphs has received much interest in the graph drawing literature. In this paper we focus on the special case of trees. We show that the 2-level crossing minimization problem for trees, where the order of the vertices on one level is fixed, is solvable in quadratic time. We also show that the k-level crossing minimization problem for trees with an arbitrary number of levels is NP-hard. This result exposes a source of difficulty for algorithm designers that compounds earlier results relating to the 2-level crossing minimization problem for graphs.
Martin Harrigan, Patrick Healy
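As background for the crossing minimization problems above, the sketch below merely counts crossings in a fixed 2-level drawing: two edges cross exactly when their endpoints interleave on the two levels. This is the standard crossing count on an assumed input format, not the paper's quadratic-time algorithm or its NP-hardness reduction.

```python
def count_crossings(edges, pos_top, pos_bottom):
    """Count edge crossings in a 2-level drawing.
    edges: list of (u, v) with u on the top level and v on the bottom level.
    pos_top / pos_bottom: dicts mapping vertices to x-positions on their level.
    Two edges cross iff the order of their top endpoints is opposite to the
    order of their bottom endpoints."""
    crossings = 0
    for i in range(len(edges)):
        for j in range(i + 1, len(edges)):
            (u1, v1), (u2, v2) = edges[i], edges[j]
            du = pos_top[u1] - pos_top[u2]
            dv = pos_bottom[v1] - pos_bottom[v2]
            if du * dv < 0:
                crossings += 1
    return crossings

edges = [('a', 'x'), ('a', 'y'), ('b', 'x')]
print(count_crossings(edges, {'a': 0, 'b': 1}, {'x': 0, 'y': 1}))  # 1 crossing
```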

Algorithm Engineering

Efficient Computation of Time-Dependent Centralities in Air Transportation Networks
Abstract
We introduce indices of centrality for analyzing air transportation networks; they represent the importance of airports and individual flights depending on the time of day (time-dependent centrality indices). Our centrality indices are based on earliest arrival paths with a minimum number of transfers in a time- and event-dependent network model. This means that all paths correspond to real connections, in particular with transfers that obey minimum transfer times between flights. While the straightforward computation of these indices is quite expensive, we provide efficient algorithms for the centrality computation. To this end, we construct a certain sequence of pairwise disjoint profile graphs representing all relevant paths by exploiting a special kind of subpath optimality. This avoids unnecessary repeated computations of all optimal time-dependent multi-criteria paths and allows us to use single-criterion path queries for the earliest arrival time (without path construction). We have tested our method with original schedule data of 2010 provided by the Official Airlines Guide (OAG) on the complete world-wide airport network. Our approach yields a speed-up over the straightforward centrality computation by a factor of about 100 for the world-wide network.
Annabell Berger, Matthias Müller-Hannemann, Steffen Rechner, Alexander Zock
Analysis of Gauss-Sieve for Solving the Shortest Vector Problem in Lattices
Abstract
Lattice based cryptography is gaining more and more importance in the cryptographic community. The security of lattice based cryptosystems can be proven to be as hard as worst case lattice problems. The most important underlying hard problem is the shortest vector problem. There are two concurrent approaches for the search for shortest vectors in lattices: enumeration and probabilistic sieving algorithms.
Enumeration algorithms were the best choice until 2010, when Micciancio and Voulgaris presented a new heuristic sieving algorithm called Gauss Sieve, which was the first sieving algorithm considered to be competitive with exhaustive search algorithms. Later in 2010, Gama, Nguyen, and Regev published their extreme pruning variant of enumeration, which again ruled out sieving.
In this paper, we present the practical results for Gauss Sieve that we gained in our experiments throughout the last year. We analyze the behaviour of Gauss Sieve, which helps in understanding the strengths and weaknesses of the algorithm.
Michael Schneider
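The core operation behind Gauss Sieve is pairwise (Lagrange/Gauss) reduction of two lattice vectors, applied between newly sampled vectors and a growing list. The minimal sketch below shows only that pairwise step on assumed nonzero inputs; it is not the full Gauss Sieve of Micciancio and Voulgaris, nor the experimental setup of the paper.

```python
import numpy as np

def gauss_reduce(u, v):
    """Pairwise Lagrange/Gauss reduction of two nonzero lattice vectors:
    repeatedly subtract an integer multiple of the shorter vector from the
    longer one until neither can shorten the other."""
    u, v = np.array(u, dtype=float), np.array(v, dtype=float)
    while True:
        if u @ u > v @ v:
            u, v = v, u                      # keep u as the shorter vector
        m = int(round((u @ v) / (u @ u)))    # nearest-integer projection coefficient
        if m == 0:
            return u, v
        v = v - m * u

print(gauss_reduce([1, 0], [5, 1]))  # (array([1., 0.]), array([0., 1.]))
```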

Computational Geometry

Minimum Enclosing Circle of a Set of Fixed Points and a Mobile Point
Abstract
In this paper we study the variations in the center and the radius of the minimum enclosing circle (MEC) of a set of fixed points and one mobile point moving along a straight line ℓ. Given a set S of n points and a line ℓ in \(\mathbb R^2\), we completely characterize the locus of the center of the MEC of the set S ∪ {p}, for all p ∈ ℓ. We show that the locus is a continuous and piecewise differentiable linear function, and each of its differentiable pieces lies either on the edges of the farthest-point Voronoi diagram of S, or on a line segment parallel to the line ℓ. Moreover, we prove that the locus has at most O(n) differentiable pieces and can be computed in linear time, given the farthest-point Voronoi diagram of S.
Aritra Banik, Bhaswar B. Bhattacharya, Sandip Das
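For reference, the minimum enclosing circle of a static point set is determined by two or three of its points, which permits the brute-force baseline sketched below (hypothetical helper functions, roughly O(n^4) time, at least two input points assumed). The paper's actual contribution, the locus of the MEC center as one point moves along ℓ, is not implemented here.

```python
import itertools, math

def circle_two(p, q):
    """Smallest circle through two points: diameter circle."""
    center = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
    return center, math.dist(p, q) / 2

def circle_three(p, q, r):
    """Circumcircle of three non-collinear points (None if collinear)."""
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.dist((ux, uy), p)

def naive_mec(points):
    """Try every circle defined by two or three points; keep the smallest
    one that encloses the whole set."""
    candidates = [circle_two(p, q) for p, q in itertools.combinations(points, 2)]
    candidates += [c for t in itertools.combinations(points, 3)
                   if (c := circle_three(*t)) is not None]
    best = None
    for center, radius in candidates:
        if all(math.dist(center, p) <= radius + 1e-9 for p in points):
            if best is None or radius < best[1]:
                best = (center, radius)
    return best

print(naive_mec([(0, 0), (2, 0), (1, 2), (1, 1)]))  # ((1.0, 0.75), 1.25)
```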
Efficient Top-k Queries for Orthogonal Ranges
Abstract
Advances in sensing and data gathering technologies have resulted in an explosion in the volume of data that is being generated, processed, and archived. In particular, this information overload calls for new methods for querying large spatial datasets, since users are often not interested in merely retrieving a list of all data items satisfying a query, but would, instead, like a more informative “summary” of the retrieved items. An example is the so-called top-k problem, where the goal is to retrieve from a set of n weighted points in \(\mathbb{R}^d\) the k most significant points, ranked by their weights, that lie in an orthogonal query box in \(\mathbb{R}^d\) (rather than get a list of all points lying in the query box). In this paper, efficient and output-sensitive solutions are presented for this problem in two settings. In the first setting, the k points are reported in arbitrary order and the underlying set can be updated dynamically through insertions and deletions of points. In the second setting, the k points are reported in sorted order of their weights.
Saladi Rahul, Prosenjit Gupta, Ravi Janardan, K. S. Rajan
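A naive baseline for the top-k range query described above simply filters the points lying in the query box and keeps the k heaviest; the output-sensitive data structures of the paper exist precisely to avoid scanning all points. The Python sketch below assumes points given as (coordinates, weight) pairs.

```python
import heapq

def topk_in_box(points, box, k):
    """Brute-force top-k query: points is a list of (coords, weight) pairs,
    box is a list of (lo, hi) intervals, one per dimension; return the k
    heaviest points whose coordinates fall inside the orthogonal box."""
    inside = [(w, c) for c, w in points
              if all(lo <= x <= hi for x, (lo, hi) in zip(c, box))]
    return heapq.nlargest(k, inside)

pts = [((1, 1), 5.0), ((2, 3), 9.0), ((4, 4), 2.0), ((0, 5), 7.0)]
print(topk_in_box(pts, box=[(0, 3), (0, 4)], k=2))  # [(9.0, (2, 3)), (5.0, (1, 1))]
```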
Range-Aggregate Queries Involving Geometric Aggregation Operations
Abstract
In this paper we consider range-aggregate query problems wherein we wish to preprocess a set S of geometric objects such that, given a query orthogonal range q, a certain aggregation function on the objects \(S' = S \cap q\) can be answered efficiently. Range-aggregate versions of point enclosure queries, 1-d segment intersection, and 2-d orthogonal segment intersection (with/without a distance constraint) are revisited, and we improve the existing results for these problems. We also provide semi-dynamic (insertions-only) solutions to some of these problems. This paper is the first attempt to provide dynamic solutions to problems involving geometric aggregation operations.
Saladi Rahul, Ananda Swarup Das, K. S. Rajan, Kannan Srinathan

Approximation Algorithms

Multi Cover of a Polygon Minimizing the Sum of Areas
Abstract
We consider a geometric optimization problem that arises in sensor network design. We are given a polygon P (possibly with holes) with n vertices, a set Y of m points representing sensors, and an integer k, 1 ≤ k ≤ m. The goal is to assign a sensing range \(r_i\) to each sensor \(y_i \in Y\), such that each point p ∈ P is covered by at least k sensors and the cost \(\sum_i r_i^\alpha\) of the assignment is minimized, where α is a constant.
In this paper, we assume that α = 2; that is, we wish to find a set of disks centered at points of Y, such that (i) each point in P is covered by at least k disks, and (ii) the sum of the areas of the disks is minimized. We present, for any constant k ≥ 1, a polynomial-time \(c_1\)-approximation algorithm for this problem, where \(c_1 = c_1(k)\) is a constant. The discrete version, where one has to cover a given set X of n points by disks centered at points of Y, arises as a subproblem. We present a polynomial-time \(c_2\)-approximation algorithm for this problem, where \(c_2 = c_2(k)\) is a constant.
A. Karim Abu-Affash, Paz Carmi, Matthew J. Katz, Gila Morgenstern
On the Discrete Unit Disk Cover Problem
Abstract
Given a set \({\cal P}\) of n points and a set \({\cal D}\) of m unit disks on a 2-dimensional plane, the discrete unit disk cover (DUDC) problem is (i) to check whether each point in \({\cal P}\) is covered by at least one disk in \({\cal D}\), and (ii) if so, to find a minimum cardinality subset \({\cal D}^* \subseteq {\cal D}\) such that the unit disks in \({\cal D}^*\) cover all the points in \({\cal P}\). The discrete unit disk cover problem is a geometric version of the general set cover problem, which is NP-hard [14]. The general set cover problem is not approximable within \(c \log |{\cal P}|\), for some constant c, but the DUDC problem was shown to admit a constant factor approximation. In this paper, we provide an algorithm with constant approximation factor 18. The running time of the proposed algorithm is O(n log n + m log m + mn). The previous best known tractable solution for the same problem was a 22-factor approximation algorithm with running time \(O(m^2 n^4)\).
Gautam K. Das, Robert Fraser, Alejandro Lòpez-Ortiz, Bradford G. Nickerson
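For comparison with the constant-factor results above, the generic greedy set-cover heuristic applied to discrete unit disk cover looks as follows; it only guarantees the usual O(log n) set-cover ratio and is shown purely as an illustrative baseline on assumed inputs.

```python
def greedy_disk_cover(points, centers, radius=1.0):
    """Greedy set-cover baseline: repeatedly pick the disk center covering the
    most still-uncovered points. Returns None if some point is uncoverable."""
    def covers(c, p):
        return (c[0] - p[0]) ** 2 + (c[1] - p[1]) ** 2 <= radius ** 2
    uncovered = set(points)
    chosen = []
    while uncovered:
        best = max(centers, key=lambda c: sum(covers(c, p) for p in uncovered))
        gained = {p for p in uncovered if covers(best, p)}
        if not gained:
            return None
        chosen.append(best)
        uncovered -= gained
    return chosen

pts = [(0, 0), (0.5, 0.5), (2, 2), (2.5, 2)]
centers = [(0.2, 0.2), (2.2, 2.0), (5, 5)]
print(greedy_disk_cover(pts, centers))  # [(0.2, 0.2), (2.2, 2.0)]
```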
Clustering with Internal Connectedness
Abstract
In this paper we study the problem of clustering entities that are described by two types of data: attribute data and relationship data. While attribute data describe the inherent characteristics of the entities, relationship data represent associations among them. Attribute data can be mapped to Euclidean space, whereas that is not always possible for relationship data. The relationship data are described by a graph over the entities, with edges denoting a relationship between the pairs of entities they connect. We study clustering problems under the model where the relationship data are constrained by ‘internal connectedness,’ which requires that any two entities in a cluster are connected by an internal path, that is, a path via entities only from the same cluster. We study the k-median and k-means clustering problems under this model. We show that these problems are Ω(log n)-hard to approximate and give O(log n)-approximation algorithms for specific cases of these problems.
Neelima Gupta, Aditya Pancholi, Yogish Sabharwal
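The ‘internal connectedness’ constraint from the abstract above can be checked directly: every cluster must induce a connected subgraph of the relationship graph. The following Python sketch is such a feasibility check (assuming an adjacency-dict representation), not one of the paper's approximation algorithms.

```python
from collections import deque

def internally_connected(adj, clusters):
    """Return True iff every cluster induces a connected subgraph of the
    relationship graph given by the adjacency dict adj."""
    for cluster in clusters:
        cluster = set(cluster)
        if not cluster:
            continue
        start = next(iter(cluster))
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v in cluster and v not in seen:
                    seen.add(v)
                    queue.append(v)
        if seen != cluster:
            return False
    return True

adj = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}
print(internally_connected(adj, [{1, 2}, {4, 5}]))  # True
print(internally_connected(adj, [{1, 3}, {4, 5}]))  # False: 1-3 path leaves the cluster
```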

String Algorithms

Hashed Patricia Trie: Efficient Longest Prefix Matching in Peer-to-Peer Systems
Abstract
The design of efficient search structures for peer-to-peer systems has attracted a lot of attention in recent years. In this paper we address the problem of longest prefix matching and present an efficient data structure called the hashed Patricia trie. Our hashed Patricia trie supports Prefixsearch(x) and Insert(x) in \(\mathcal O(\log \left|x\right|)\) hash table accesses and Delete(x) in \(\mathcal O(1)\) hash table accesses, where |x| is the number of bits used to encode x. That is, the costs depend only on |x| and not on the size of the data structure. The hash table accesses may be realized by any distributed hash table (DHT).
Sebastian Kniesburges, Christian Scheideler
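The O(log |x|) bound for prefix search comes from binary searching over prefix lengths using hash table lookups. The sketch below illustrates that idea under a strong simplification, namely that every prefix of every stored key is kept in a hash table; the paper's hashed Patricia trie achieves the same query bound with far less storage.

```python
def longest_prefix_match(x, prefix_table):
    """Binary search over prefix lengths, assuming prefix_table contains every
    prefix of every stored key (a simplification of the Patricia structure).
    Uses O(log |x|) membership lookups."""
    lo, hi = 0, len(x)          # invariant: x[:lo] is a stored prefix
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if x[:mid] in prefix_table:
            lo = mid
        else:
            hi = mid - 1
    return x[:lo]

keys = ["0101", "0110"]
prefixes = {k[:i] for k in keys for i in range(len(k) + 1)}
print(longest_prefix_match("010011", prefixes))  # "010"
```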
De Bruijn Sequences for the Binary Strings with Maximum Density
Abstract
A de Bruijn sequence is a circular binary string of length \(2^n\) that contains each binary string of length n exactly once as a substring. A maximum-density de Bruijn sequence is a circular binary string of length \(\binom{n}{0} + \binom{n}{1} + \binom{n}{2} + \cdots + \binom{n}{m}\) that contains each binary string of length n with density (number of 1s) between 0 and m, inclusive, exactly once as a substring. In this paper we efficiently generate maximum-density de Bruijn sequences for all values of n and m. An interesting special case occurs when n = 2m + 1. In this case our result is a “complement-free de Bruijn sequence”, since it is a circular binary string of length \(2^{n-1}\) that contains each binary string of length n or its complement exactly once as a substring.
Joe Sawada, Brett Stevens, Aaron Williams
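A maximum-density de Bruijn sequence, as defined above, can be verified directly: slide a window of length n around the circular string and compare the windows with the set of binary strings of density at most m. The Python sketch below is such a checker on a tiny example; it is not the paper's generation algorithm.

```python
from itertools import product

def is_max_density_db(seq, n, m):
    """Check that the circular string seq contains every binary string of
    length n with at most m ones exactly once as a (circular) substring."""
    windows = [(seq + seq[:n - 1])[i:i + n] for i in range(len(seq))]
    targets = {''.join(t) for t in product('01', repeat=n) if t.count('1') <= m}
    return len(windows) == len(targets) and set(windows) == targets

# n = 2, m = 1: the strings 00, 01, 10 must each occur once; "001" works.
print(is_max_density_db("001", 2, 1))  # True
```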

Graph Algorithms

A Polynomial Time Algorithm for Longest Paths in Biconvex Graphs
Abstract
The longest path problem is the problem of finding a simple path of maximum length in a graph. Polynomial solutions for this problem are known only for special classes of graphs, while it is NP-hard on general graphs. In this paper we propose an \(O(n^6)\) time algorithm, based on dynamic programming, for finding a longest path in biconvex graphs, where n is the number of vertices of the input graph.
Esha Ghosh, N. S. Narayanaswamy, C. Pandu Rangan
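For very small graphs, a longest simple path can be found by exhaustive DFS, as sketched below; this exponential-time baseline only illustrates the problem and has nothing to do with the paper's O(n^6) dynamic program for biconvex graphs.

```python
def longest_path_length(adj):
    """Exhaustive search for the longest simple path (length in edges).
    Exponential time; suitable only as a tiny baseline."""
    best = 0
    def dfs(u, visited, length):
        nonlocal best
        best = max(best, length)
        for v in adj[u]:
            if v not in visited:
                dfs(v, visited | {v}, length + 1)
    for s in adj:
        dfs(s, {s}, 0)
    return best

adj = {1: [2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
print(longest_path_length(adj))  # 3, e.g. the path 3-2-4-5
```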
Counting Spanning Trees in Graphs Using Modular Decomposition
Abstract
In this paper we present an algorithm for determining the number of spanning trees of a graph G which takes advantage of the structure of the modular decomposition tree of G. Specifically, our algorithm works by contracting the modular decomposition tree of the input graph G in a bottom-up fashion until it becomes a single node; then, the number of spanning trees of G is computed as the product of a collection of values which are associated with the vertices of G and are updated during the contraction process. In particular, when applied to a (q, q − 4)-graph for fixed q, a \(P_4\)-tidy graph, or a tree-cograph, our algorithm computes the number of its spanning trees in time linear in the size of the graph, where the complexity of arithmetic operations is measured under the uniform-cost criterion. Therefore we give the first linear-time algorithm for the counting problem in the considered graph classes.
Stavros D. Nikolopoulos, Leonidas Palios, Charis Papadopoulos
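A standard reference point for counting spanning trees is Kirchhoff's matrix-tree theorem: the count equals any cofactor of the graph Laplacian. The sketch below uses that classical determinant-based method (floating-point arithmetic, so small graphs only); the paper's modular-decomposition algorithm is different and is not reproduced here.

```python
import numpy as np

def count_spanning_trees(adj_matrix):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees equals
    any cofactor of the Laplacian L = D - A."""
    A = np.array(adj_matrix, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    return int(round(np.linalg.det(L[1:, 1:])))  # delete first row and column

# K4 has 4^(4-2) = 16 spanning trees by Cayley's formula.
k4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
print(count_spanning_trees(k4))  # 16
```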
On Graceful Labelings of Trees
(Extended Abstract)
Abstract
A tree is a connected acyclic graph. A tree on n vertices is said to be graceful if its vertices can be assigned the labels {0, 1, 2, ..., n − 1} such that the absolute values of the differences between the labels of neighboring vertices form the set of distinct values {1, 2, 3, ..., n − 1}. Ringel and Kotzig conjectured that all trees are graceful. In this paper we give a partial solution to the conjecture by proving that two large subclasses of trees are graceful.
Sourabh Aryabhatta, Tonmoy Guha Roy, Md. Mohsin Uddin, Md. Saidur Rahman
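The definition of a graceful labeling can be verified mechanically: the vertex labels must be exactly {0, ..., n − 1} and the induced edge labels exactly {1, ..., n − 1}. The Python sketch below checks this for a labeled path; it is only an illustration of the definition, not the paper's constructions.

```python
def is_graceful(n, labels, edges):
    """labels: dict assigning each of the n vertices a label; edges: list of
    vertex pairs. Checks that the vertex labels are exactly {0,...,n-1} and
    the edge labels |label(u) - label(v)| are exactly {1,...,n-1}."""
    if sorted(labels.values()) != list(range(n)):
        return False
    diffs = sorted(abs(labels[u] - labels[v]) for u, v in edges)
    return diffs == list(range(1, n))

# Path on 4 vertices labeled 0 - 3 - 1 - 2: edge labels 3, 2, 1, so graceful.
labels = {'a': 0, 'b': 3, 'c': 1, 'd': 2}
edges = [('a', 'b'), ('b', 'c'), ('c', 'd')]
print(is_graceful(4, labels, edges))  # True
```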
Minimum-Layer Drawings of Trees
(Extended Abstract)
Abstract
A layered drawing of a tree T is a planar straight-line drawing of T, where the vertices of T are placed on some horizontal lines called layers. A minimum-layer drawing of T is a layered drawing of T on k layers, where k is the minimum number of layers required for any layered drawing of T. In this paper we give a linear-time algorithm for obtaining minimum-layer drawings of trees.
Debajyoti Mondal, Muhammad Jawaherul Alam, Md. Saidur Rahman
Backmatter
Metadata
Title
WALCOM: Algorithms and Computation
Edited by
Naoki Katoh
Amit Kumar
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-19094-0
Print ISBN
978-3-642-19093-3
DOI
https://doi.org/10.1007/978-3-642-19094-0