
2015 | Book

Experimental Algorithms

14th International Symposium, SEA 2015, Paris, France, June 29 – July 1, 2015, Proceedings


About this book

This book constitutes the refereed proceedings of the 14th International Symposium on Experimental Algorithms, SEA 2015, held in Paris, France, in June/July 2015.

The 30 revised full papers presented were carefully reviewed and selected from 76 submissions. The main theme of the symposium is the role of experimentation and of algorithm engineering techniques in the design and evaluation of algorithms and data structures. The papers are grouped in topical sections on data structures, graph problems, combinatorial optimization, scheduling and allocation, and transportation networks.

Table of Contents

Frontmatter

Data Structures

Frontmatter
Parallel Construction of Succinct Trees
Abstract
Succinct representations of trees are an elegant solution to make large trees fit in main memory while still supporting navigational operations in constant time. However, their construction time remains a bottleneck. We introduce a practical parallel algorithm that improves the state of the art in succinct tree construction. Given a tree on \(n\) nodes stored as a sequence of balanced parentheses, our algorithm builds a succinct tree representation in \(O(n/p+\lg p)\) time, where \(p\) is the number of available cores. The constructed representation uses \(2n + o(n)\) bits of space and supports a rich set of operations in \(O(\lg n)\) time. In experiments using up to 64 cores and on inputs of different sizes, our algorithm achieved good parallel speed-up. We also present an algorithm that takes \(O(n/p + \lg p)\) time to construct the balanced parenthesis representation of the input tree required by our succinct tree construction algorithm.
Leo Ferres, José Fuentes-Sepúlveda, Meng He, Norbert Zeh
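The balanced-parentheses encoding that this abstract builds on is easy to illustrate: write '(' when entering a node in a depth-first traversal and ')' when leaving it, so a tree on \(n\) nodes becomes a \(2n\)-bit sequence. The sketch below is a naive, non-succinct illustration of the idea (linear-time scans instead of the paper's constant-time succinct structures, and nothing parallel):

```python
def to_bp(tree):
    """Encode a tree (given as nested lists of children) as a
    balanced-parentheses string: '(' entering a node, ')' leaving it."""
    return "(" + "".join(to_bp(c) for c in tree) + ")"

def find_close(bp, i):
    """Index of the ')' matching the '(' at position i. A naive linear
    scan; a succinct representation answers this much faster."""
    depth = 0
    for j in range(i, len(bp)):
        depth += 1 if bp[j] == "(" else -1
        if depth == 0:
            return j

def subtree_size(bp, i):
    """Number of nodes in the subtree rooted at the '(' at position i."""
    return (find_close(bp, i) - i + 1) // 2

# A 5-node tree: root with two children, the first child having two leaves.
tree = [[[], []], []]
bp = to_bp(tree)
print(bp)                   # ((()())())
print(subtree_size(bp, 0))  # 5
print(subtree_size(bp, 1))  # 3
```

Navigation reduces to matching parentheses: the subtree of a node is exactly the substring between its opening and closing parenthesis, which is why \(2n + o(n)\) bits suffice.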
Tree Compression with Top Trees Revisited
Abstract
We revisit tree compression with top trees (Bille et al. [2]), and present several improvements to the compressor and its analysis. By significantly reducing the amount of information stored and guiding the compression step using a RePair-inspired heuristic, we obtain a fast compressor achieving good compression ratios, addressing an open problem posed by [2]. We show how, with relatively small overhead, the compressed file can be converted into an in-memory representation that supports basic navigation operations in worst-case logarithmic time without decompression. We also show a much improved worst-case bound on the size of the output of top-tree compression (answering an open question posed in a talk on this algorithm by Weimann in 2012).
Lorenz Hübschle-Schneider, Rajeev Raman
A Bulk-Parallel Priority Queue in External Memory with STXXL
Abstract
We propose the design and an implementation of a bulk-parallel external memory priority queue to take advantage of both shared-memory parallelism and high external memory transfer speeds to parallel disks. To achieve higher performance by decoupling item insertions and extractions, we offer two parallelization interfaces: one using “bulk” sequences, the other by defining “limit” items. In the design, we discuss how to parallelize insertions using multiple heaps, and how to calculate a dynamic prediction sequence to prefetch blocks and apply parallel multiway merge for extraction. Our experimental results show that in the selected benchmarks the priority queue reaches 64% of the full parallel I/O bandwidth of SSDs and 49% of rotational disks, or the speed of sorting in external memory when bounded by computation.
Timo Bingmann, Thomas Keh, Peter Sanders

Graph Problems I

Frontmatter
Greedily Improving Our Own Centrality in a Network
Abstract
The closeness and the betweenness centralities are two well-known measures of importance of a vertex within a given complex network. Having high closeness or betweenness centrality can have positive impact on the vertex itself: hence, in this paper we consider the problem of determining how much a vertex can increase its centrality by creating a limited amount of new edges incident to it. We first prove that this problem does not admit a polynomial-time approximation scheme (unless \(P=NP\)), and we then propose a simple greedy approximation algorithm (with an almost tight approximation ratio), whose performance is then tested on synthetic graphs and real-world networks.
Pierluigi Crescenzi, Gianlorenzo D’Angelo, Lorenzo Severini, Yllka Velaj
An Exact Algorithm for Diameters of Large Real Directed Graphs
Abstract
We propose a new algorithm to compute the diameters of large real directed graphs. In contrast to recent algorithms, the proposed algorithm is designed for general directed graphs, i.e., it does not assume that given graphs are undirected or strongly connected. Experimental results on large real graphs show that the proposed algorithm is several orders of magnitude faster than the naive approach, and it reveals the exact diameters of large real directed graphs, for which only lower bounds have been known.
Takuya Akiba, Yoichi Iwata, Yuki Kawata
Graph Partitioning for Independent Sets
Abstract
Computing maximum independent sets in graphs is an important problem in computer science. In this paper, we develop an evolutionary algorithm to tackle the problem. The core innovations of the algorithm are very natural combine operations based on graph partitioning and local search algorithms. More precisely, we employ a state-of-the-art graph partitioner to derive operations that enable us to quickly exchange whole blocks of given independent sets. To enhance newly computed offspring we combine our operators with a local search algorithm. Our experimental evaluation indicates that we are able to outperform state-of-the-art algorithms on a variety of instances.
Sebastian Lamm, Peter Sanders, Christian Schulz
Finding Connected Subgraphs of Fixed Minimum Density: Implementation and Experiments
Abstract
We consider the following problem. Given a graph and a rational number \(\mu \), \(0 < \mu \le 1\), find a connected subgraph of density at least \(\mu \) with the largest number of vertices. Here, the density of an \(n\)-vertex graph with \(m\) edges is \(m/\left( {\begin{array}{c}n\\ 2\end{array}}\right) \). This problem arises in many application contexts such as community detection in social networks. We implement a branch and bound algorithm and tune it for efficiency on sparse real-world graphs for the case \(\mu \ge 1/2\). Central issues for the implementation are the choice of branching candidates, two new upper bounding procedures, and several data reduction and early termination rules.
Christian Komusiewicz, Manuel Sorge, Kolja Stahl
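The density notion used in this abstract is concrete enough to state in code. The brute-force baseline below (which ignores the connectivity requirement that the paper's branch and bound enforces) shows what any practical solver must beat:

```python
from itertools import combinations

def density(vertices, edges):
    """Density m / C(n, 2) of the subgraph induced by `vertices`."""
    n = len(vertices)
    if n < 2:
        return 1.0
    vs = set(vertices)
    m = sum(1 for u, v in edges if u in vs and v in vs)
    return m / (n * (n - 1) / 2)

def largest_mu_dense(nodes, edges, mu):
    """Brute-force baseline: a largest vertex subset inducing density >= mu.
    (Connectivity, which the paper's algorithm enforces, is ignored here.)"""
    for k in range(len(nodes), 0, -1):
        for sub in combinations(nodes, k):
            if density(sub, edges) >= mu:
                return list(sub)
    return []

# Triangle {1, 2, 3} with a pendant vertex 4.
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(density([1, 2, 3], edges))                   # 1.0
print(largest_mu_dense([1, 2, 3, 4], edges, 0.5))  # [1, 2, 3, 4]
```

The exponential blow-up of this enumeration is exactly why branching rules, upper bounds, and data reduction matter on real graphs.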

Combinatorial Optimization I

Frontmatter
On the Generation of Cutting Planes which Maximize the Bound Improvement
Abstract
We propose a new cutting plane algorithm for Integer Linear Programming, which we refer to as the bound-optimal cutting plane method. The algorithm amounts to simultaneously generating k cuts which, when added to the linear programming relaxation, yield the (provably) largest bound improvement. We show that, in the general case, the corresponding cut generating problem can be cast as a Quadratically Constrained Quadratic Program. We also show that, for a large family of cuts, the latter can be reformulated as a Mixed-Integer Linear Program. We present computational experiments on the generation of bound-optimal stable set and cover inequalities for the max clique and knapsack problems. They show that, with respect to standard algorithms, the bound-optimal cutting plane method allows for a substantial reduction in the number of cuts and iterations needed to achieve either a given bound or an optimal solution.
Stefano Coniglio, Martin Tieves
Separation of Generic Cutting Planes in Branch-and-Price Using a Basis
Abstract
Dantzig-Wolfe reformulation of a mixed integer program partially convexifies a subset of the constraints, i.e., it implicitly adds all valid inequalities for the associated integer hull. Projecting an optimal basic solution of the reformulation’s LP relaxation to the original space does not, in general, yield a basic solution of the original LP relaxation. Cutting planes in the original problem that are separated using a basis, like Gomory mixed integer cuts, are therefore not directly applicable. As a remedy, Range [22] (and others) proposed heuristically computing a basic solution and separating this auxiliary solution with cutting planes that stem from a basis. This might cut off not only the auxiliary solution, but also the solution we originally wanted to separate.
We discuss and extend Range’s ideas to enhance the separation procedure. In particular, we present alternative heuristics and consider additional valid inequalities strengthening the original LP relaxation before separation. Our full implementation, which is the first of its kind, is done within the GCG framework. We evaluate the effects on several problem classes. Our experiments show that the separated cuts strengthen the formulation on instances where the integrality gap is not too small. This leads to a reduced number of nodes and reduced solution times.
Marco E. Lübbecke, Jonas T. Witt
On a Nonconvex MINLP Formulation of the Euclidean Steiner Tree Problem in n-Space
Abstract
The Euclidean Steiner Tree Problem in dimension greater than 2 is notoriously difficult. Successful methods for exact solution are not based on mathematical-optimization — rather, they involve very sophisticated enumeration. There are two types of mathematical-optimization formulations in the literature, and it is an understatement to say that neither scales well enough to be useful. We focus on a known nonconvex MINLP formulation. Our goal is to make some first steps in improving the formulation so that large instances may eventually be amenable to solution by a spatial branch-and-bound algorithm. Along the way, we developed a new feature which we incorporated into the global-optimization solver SCIP and made accessible via the modeling language AMPL, for handling piecewise-smooth univariate functions that are globally concave.
Claudia D’Ambrosio, Marcia Fampa, Jon Lee, Stefan Vigerske

Scheduling and Allocation

Frontmatter
Scheduling MapReduce Jobs and Data Shuffle on Unrelated Processors
Abstract
We propose a constant approximation algorithm for generalizations of the Flexible Flow Shop (FFS) problem which form a realistic model for non-preemptive scheduling in MapReduce systems. Our results concern the minimization of the total weighted completion time of a set of MapReduce jobs on unrelated processors and improve substantially on the model proposed by Moseley et al. (SPAA 2011) in two directions: (i) we consider jobs consisting of multiple Map and Reduce tasks, which is the key idea behind MapReduce computations, and (ii) we introduce into our model the crucial cost of the data shuffle phase, i.e., the cost for the transmission of intermediate data from Map to Reduce tasks. Moreover, we experimentally evaluate our algorithm compared with a lower bound on the optimal cost of our problem as well as with a fast algorithm, which combines a simple online assignment of tasks to processors with a standard scheduling policy. As we observe, for random instances that capture data locality issues, our algorithm achieves a better performance.
Dimitris Fotakis, Ioannis Milis, Orestis Papadigenopoulos, Emmanouil Zampetakis, Georgios Zois
Station Assignment with Reallocation
Abstract
We study a dynamic allocation problem that arises in various scenarios where mobile clients joining and leaving have to communicate with static stations via radio transmissions. Restrictions are a maximum delay, or laxity, between consecutive client transmissions and a maximum bandwidth that a station can share among its clients. Clients are assigned to stations so that every client transmits under those restrictions. We consider reallocation algorithms, where each client is revealed at its arrival time, its departure time is unknown until it leaves, and clients may be reallocated to another station, but at a cost determined by the laxity. We present negative results for related previous protocols that motivate the study; we introduce new protocols that exhibit trade-offs between station usage and reallocation cost; we prove theoretical bounds on our performance metrics; and we show through simulations that, for realistic scenarios, our protocols behave even better than our theoretical guarantees.
Miguel A. Mosteiro, Yulia Rossikova, Prudence W. H. Wong
Online Knapsack of Unknown Capacity: Energy Optimization for Smartphone Communications
Abstract
We propose a new variant of the standard online knapsack problem in which the only information missing from the provided instances is the capacity \(B\) of the knapsack. We refer to this problem as the online Knapsack of Unknown Capacity problem. Any algorithm must provide a strategy for ordering the items that are inserted into the knapsack in an online fashion, until the actual capacity of the knapsack is revealed and the last inserted item might not fit. Apart from the interest in a new version of the fundamental knapsack problem, the motivation for this new variant comes from energy consumption constraints in smartphone communications. We provide lower and upper bounds for the problem in various cases. In general, we design an optimal algorithm admitting a \(\frac{1}{2}\)-competitive ratio. When all items have a uniform ratio of profit over size, our algorithm provides a \(\frac{49}{86}=.569\ldots \) competitive ratio, which leaves a gap with the provided bound of \(\frac{1}{\varphi }=.618\ldots \), the inverse of the golden ratio. We then conduct an experimental analysis of the algorithms with guaranteed competitive ratios, comparing them to the optimum and to various heuristics.
Daniele Diodati, Alfredo Navarra, Cristina M. Pinotti
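The game described in this abstract is simple to simulate: commit to an insertion order, pack items one by one, and lose the first item that overflows the hidden capacity \(B\). The largest-first ordering below is deliberately naive, chosen only to show how badly a fixed strategy can fare against different capacities; it is not the paper's competitive algorithm:

```python
def simulate(order, sizes, profits, B):
    """Play a fixed insertion order against a capacity B revealed only on
    overflow: items are packed one by one, and the first item that does
    not fit is lost and ends the game (the model from the abstract)."""
    used = profit = 0
    for i in order:
        if used + sizes[i] > B:
            return profit          # overflowing item does not fit; stop
        used += sizes[i]
        profit += profits[i]
    return profit

sizes = [6, 5, 4, 1]
profits = [6, 5, 4, 1]   # uniform profit/size ratio, as in the paper's special case
order = [0, 1, 2, 3]     # naive largest-first strategy (illustrative only)
for B in (5, 7, 10):     # the offline optima are 5, 7 and 10 respectively
    print(B, simulate(order, sizes, profits, B))
```

For \(B = 5\) this strategy earns nothing while the optimum is 5, which illustrates why the choice of ordering, and hence the competitive analysis, is the heart of the problem.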

Combinatorial Optimization II

Frontmatter
Reoptimization Techniques for MIP Solvers
Abstract
Recently, there have been many successful applications of optimization algorithms that solve a sequence of quite similar mixed-integer programs (MIPs) as subproblems. Traditionally, each problem in the sequence is solved from scratch. In this paper we consider reoptimization techniques that try to benefit from information obtained by solving previous problems of the sequence. We focus on the case that subsequent MIPs differ only in the objective function or that the feasible region is reduced. We propose extensions of the very complex branch-and-bound algorithms employed by general MIP solvers based on the idea to “warmstart” using the final search frontier of the preceding solver run. We extend the academic MIP solver SCIP by these techniques to obtain a reoptimizing branch-and-bound solver and report computational results which show the effectiveness of the approach.
Gerald Gamrath, Benjamin Hiller, Jakob Witzig
Submodular Minimization in the Context of Modern LP and MILP Methods and Solvers
Abstract
We consider the application of mixed-integer linear programming (MILP) solvers to the minimization of submodular functions. We evaluate common large-scale linear-programming (LP) techniques (e.g., column generation, row generation, dual stabilization) for solving an LP reformulation of the submodular minimization (SM) problem. We present heuristics based on the LP framework and a MILP solver. We evaluated the performance of our methods on a test bed of min-cut and matroid-intersection problems formulated as SM problems.
Andrew Orso, Jon Lee, Siqian Shen
Is Nearly-linear the Same in Theory and Practice? A Case Study with a Combinatorial Laplacian Solver
Abstract
Linear system solving is one of the main workhorses in applied mathematics. Recently, theoretical computer scientists have contributed sophisticated algorithms for solving linear systems with symmetric diagonally dominant matrices (a class to which Laplacian matrices belong) in provably nearly-linear time. These algorithms are highly interesting from a theoretical perspective, but there are no published results on how they perform in practice.
With this paper we address this gap. We provide the first implementation of the combinatorial solver by [Kelner et al., STOC 2013], which is particularly appealing due to its conceptual simplicity. The algorithm exploits that a Laplacian matrix corresponds to a graph; solving Laplacian linear systems amounts to finding an electrical flow in this graph with the help of cycles induced by a spanning tree with the low-stretch property.
The results of our comprehensive experimental study are ambivalent. They confirm a nearly-linear running time, but for reasonable inputs the constant factors make the solver much slower than methods with higher asymptotic complexity. One other aspect predicted by theory is confirmed by our findings: Spanning trees with lower stretch indeed reduce the solver’s running time. Yet, simple spanning tree algorithms perform better in practice than those with a guaranteed low stretch.
Daniel Hoske, Dimitar Lukarski, Henning Meyerhenke, Michael Wegner
Efficient and Practical Tree Preconditioning for Solving Laplacian Systems
Abstract
We consider the problem of designing efficient iterative methods for solving linear systems. In its full generality, this is one of the oldest problems in numerical analysis with a tremendous number of practical applications. We focus on a particular type of linear systems, associated with Laplacian matrices of undirected graphs, and study a class of iterative methods for which it is possible to speed up the convergence through combinatorial preconditioning. We consider a class of preconditioners, known as tree preconditioners, introduced by Vaidya, that have been shown to lead to asymptotic speed-up in certain cases. Rather than trying to improve the structure of the trees used in preconditioning, we propose a very simple modification to the basic tree preconditioner, which can significantly improve the performance of the iterative linear solvers in practice. We show that our modification leads to better conditioning for some special graphs, and provide extensive experimental evidence for the decrease in the complexity of the preconditioned conjugate gradient method for several graphs, including 3D meshes and complex networks.
Luca Castelli Aleardi, Alexandre Nolin, Maks Ovsjanikov

Other Applications I

Frontmatter
Efficient Generation of Stable Planar Cages for Chemistry
Abstract
In this paper we describe an algorithm which generates all colored planar maps with a good minimum sparsity from simple motifs and rules to connect them. An implementation of this algorithm is available and is used by chemists who want to quickly generate all sound molecules they can obtain by mixing some basic components.
Dominique Barth, Olivier David, Franck Quessette, Vincent Reinhard, Yann Strozecki, Sandrine Vial
Accurate and Efficient Methods to Improve Multiple Circular Sequence Alignment
Abstract
Multiple sequence alignment is a core computational task in bioinformatics and has been extensively studied over the past decades. This computation requires an implicit assumption on the input data: the left- and right-most position for each sequence is relevant. However, this is not the case for circular structures; for instance, MtDNA. Efforts have been made to address this issue but it is far from being solved. We have very recently introduced a fast algorithm for approximate circular string matching (Barton et al., Algo Mol Biol, 2014). Here, we first show how to extend this algorithm for approximate circular dictionary matching; and, then, apply this solution with agglomerative hierarchical clustering to find a sufficiently good rotation for each sequence. Furthermore, we propose an alternative method that is suitable for more divergent sequences. We implemented these methods in BEAR, a programme for improving multiple circular sequence alignment. Experimental results, using real and synthetic data, show the high accuracy and efficiency of these new methods in terms of the inferred likelihood-based phylogenies.
Carl Barton, Costas S. Iliopoulos, Ritu Kundu, Solon P. Pissis, Ahmad Retha, Fatima Vayani
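The rotation-finding step at the core of this abstract can be illustrated with a naive quadratic baseline under Hamming distance; the paper's contribution is doing this approximately, and far faster, across many divergent sequences at once:

```python
def best_rotation(s, ref):
    """Naive quadratic baseline: the rotation of s that minimizes the
    Hamming distance to ref. Illustrative only; not the paper's method."""
    hamming = lambda a, b: sum(x != y for x, y in zip(a, b))
    return min((s[i:] + s[:i] for i in range(len(s))),
               key=lambda r: hamming(r, ref))

print(best_rotation("cdab", "abcd"))  # abcd
```

Aligning circular sequences such as MtDNA first requires exactly this kind of rotation, since a conventional aligner would otherwise penalize two identical molecules that were merely linearized at different positions.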
Solving k-means on High-Dimensional Big Data
Abstract
In recent years, there have been major efforts to develop data stream algorithms that process inputs in one pass over the data with little memory. For the k-means problem, this has led to the development of several \((1+\varepsilon )\)-approximations (under the assumption that k is a constant), but also to the design of algorithms that are extremely fast in practice and compute solutions of high accuracy. However, when not only the length of the stream but also the dimensionality of the input points is high, current methods reach their limits.
We propose two algorithms, piecy and piecy-mr, based on the recently developed data stream algorithm BICO, that process high-dimensional data in one pass and output a solution of high quality. While piecy is suited for high-dimensional data with a medium number of points, piecy-mr is meant for high-dimensional data that comes in a very long stream. We provide an extensive experimental study that evaluates piecy and piecy-mr and shows the strength of the new algorithms.
Jan-Philipp W. Kappmeier, Daniel R. Schmidt, Melanie Schmidt

Transportation Networks

Frontmatter
Public Transit Labeling
Abstract
We study the journey planning problem in public transit networks. Developing efficient preprocessing-based speedup techniques for this problem has been challenging: current approaches either require massive preprocessing effort or provide limited speedups. Leveraging recent advances in Hub Labeling, the fastest algorithm for road networks, we revisit the well-known time-expanded model for public transit. Exploiting domain-specific properties, we provide simple and efficient algorithms for the earliest arrival, profile, and multicriteria problems, with queries that are orders of magnitude faster than the state of the art.
Daniel Delling, Julian Dibbelt, Thomas Pajor, Renato F. Werneck
On Balanced Separators in Road Networks
Abstract
The following algorithm partitions road networks surprisingly well: (i) sort the vertices by longitude (or latitude, or some linear combination) and (ii) compute the maximum flow from the first \(k\) nodes (forming the source) to the last \(k\) nodes (forming the sink). Return the corresponding minimum cut as an edge separator (or recurse until the resulting subgraphs are sufficiently small).
Aaron Schild, Christian Sommer
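The two-step recipe in this abstract can be sketched end to end on a toy graph. Unit edge capacities, integer "longitudes", and a hand-rolled Edmonds-Karp max-flow stand in for a real road network and flow library; this is an illustration of the stated procedure, not the authors' code:

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp: augment along shortest residual paths until none remain.
    `cap` maps u -> v -> residual capacity and is modified in place."""
    flow = 0
    while True:
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow
        bottleneck, v = float("inf"), t
        while parent[v] is not None:        # find the bottleneck capacity
            bottleneck = min(bottleneck, cap[parent[v]][v])
            v = parent[v]
        v = t
        while parent[v] is not None:        # push flow along the path
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

def longitude_cut(coords, edges, k):
    """Steps (i)-(ii) from the abstract: sort vertices by coordinate,
    attach the first k to a super-source and the last k to a super-sink,
    and return the max-flow value, i.e. the size of the minimum edge cut."""
    order = sorted(coords, key=lambda v: coords[v])
    cap = defaultdict(lambda: defaultdict(int))
    for u, v in edges:           # unit-capacity undirected road segments
        cap[u][v] += 1
        cap[v][u] += 1
    for v in order[:k]:
        cap["S"][v] = 10 ** 9    # effectively infinite
    for v in order[-k:]:
        cap[v]["T"] = 10 ** 9
    return max_flow(cap, "S", "T")

# Toy "road network": two west vertices, two middle, two east.
coords = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 2, "f": 2}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d"),
         ("c", "e"), ("d", "f"), ("e", "f")]
print(longitude_cut(coords, edges, 2))   # 2 -- the cut {c-e, d-f}
```

By max-flow/min-cut duality, the flow value equals the number of edges in a smallest separator between the westernmost and easternmost vertex groups, which is precisely the edge separator the abstract returns.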
SALT. A Unified Framework for All Shortest-Path Query Variants on Road Networks
Abstract
Although recent scientific literature focuses on multiple shortest-path (SP) problem definitions for road networks, none of the existing solutions can efficiently answer all the different SP query variations. This work proposes SALT, a novel framework that not only efficiently answers most SP queries but also k-nearest neighbor queries not tackled by previous methods. Our solution offers excellent query performance and very short preprocessing times, thus making it also a viable option for dynamic, live-traffic road networks and all types of practical use-cases. The proposed SALT framework is a deployable software solution capturing a range of graph-related query problems under one “algorithmic hood”.
Alexandros Efentakis, Dieter Pfoser, Yannis Vassiliou

Other Applications II

Frontmatter
Huffman Codes versus Augmented Non-Prefix-Free Codes
Abstract
Non-prefix-free (NPF) codes are not uniquely decodable and have thus received very little attention due to the lack of this most essential feature required of any coding scheme. Augmenting NPF codes with compressed data structures was proposed in ISIT 2013 [8] to overcome this limitation. It was shown there that such an augmentation not only brings unique decodability to NPF codes, but also provides efficient random access. In this study, we extend this approach and compare augmented NPF codes with \(0\)th-order Huffman codes in terms of compression ratios and random access times. We benchmark four coding schemes: NPF codes augmented with wavelet trees (NPF-WT), NPF codes augmented with R/S dictionaries (NPF-RS), Huffman codes, and sampled Huffman codes. Since Huffman coding does not originally provide random access, sampling is a common way in practice to speed up access to arbitrary symbols in the encoded stream. We achieve sampling by simply maintaining an additional array that marks the beginnings of the codewords in steps of the sampling ratio, and keeping that sparse bit array compressed via an R/S dictionary data structure. The experiments revealed that augmented NPF codes achieve compression very close to Huffman coding, with the additional advantage of random access. Compared to sampled Huffman coding, both the compression ratios and the random access performance of the NPF schemes are superior.
Boran Adaş, Ersin Bayraktar, M. Oğuzhan Külekci
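The sampling idea described for Huffman codes, recording the bit offset of every step-th codeword so that decoding can restart near any symbol index, can be sketched in plain Python. This is an illustration, not the paper's implementation: the sample array is kept as an uncompressed list here rather than as a compressed R/S dictionary:

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Standard Huffman construction; returns {symbol: bitstring}.
    Assumes at least two distinct symbols."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    tick = len(heap)                  # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tick, merged))
        tick += 1
    return heap[0][2]

def encode_with_samples(text, code, step):
    """Concatenate codewords, recording the bit offset of every step-th
    codeword so decoding can later start near any symbol index."""
    bits, samples, pos = [], [], 0
    for i, ch in enumerate(text):
        if i % step == 0:
            samples.append(pos)
        bits.append(code[ch])
        pos += len(code[ch])
    return "".join(bits), samples

def access(bits, samples, code, step, i):
    """Random access to the i-th symbol: jump to the nearest preceding
    sample, then decode at most `step` codewords (prefix-freeness makes
    codeword boundaries unambiguous)."""
    decode = {w: s for s, w in code.items()}
    pos = samples[i // step]
    for _ in range(i % step + 1):
        w = ""
        while w not in decode:
            w += bits[pos]
            pos += 1
        sym = decode[w]
    return sym

text = "abracadabra"
code = huffman_code(Counter(text))
bits, samples = encode_with_samples(text, code, step=4)
print(all(access(bits, samples, code, 4, i) == text[i]
          for i in range(len(text))))  # True
```

The trade-off benchmarked in the paper is visible even here: a smaller step gives faster access but a larger sample array, which is exactly why the authors compress it with an R/S dictionary.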
Experimental Analysis of an Online Dictionary Matching Algorithm for Regular Expressions with Gaps
Abstract
Dictionary matching for regular expressions has gained recent interest because of a multitude of applications, including DNA sequence analysis, XML filtering, and network traffic analysis. In some applications, allowing wildcard and character class gaps in strings is enough, but usually the full expressive power of regular expressions is needed. In this paper we present and analyze a new algorithm for online dictionary matching for regular expressions. The unique feature of our algorithm is that it builds upon an algorithm for dictionary matching of string patterns with wildcard gaps, but is also capable of treating more complex regular expressions. In our experiments we used real data from expressions used for filtering spam e-mail. The size of the dictionary, that is, the number of different regular expressions to be matched, varied from one to 3080. To find out how our algorithm scales to much larger numbers of patterns, we made small random changes to these patterns to produce up to 100000 patterns that are similar in style. We found that the scalability of our algorithm is very good, being at its best for 10000–20000 patterns. Our algorithm outperforms the tested competitors for large dictionaries, GNU grep already for tens of patterns and Google’s RE2 for hundreds of patterns.
Riku Saikkonen, Seppo Sippu, Eljas Soisalon-Soininen
An Empirical Study of Finding Approximate Equilibria in Bimatrix Games
Abstract
While there have been a number of studies about the efficacy of methods to find exact Nash equilibria in bimatrix games, there has been little empirical work on finding approximate Nash equilibria. Here we provide such a study that compares a number of approximation methods and exact methods. In particular, we explore the trade-off between the quality of an approximate equilibrium and the running time required to find one. We found that the existing library GAMUT, which has been the de facto standard used to test exact methods, is insufficient as a test bed for approximation methods, since many of its games have pure equilibria or other easy-to-find good approximate equilibria. We extend the breadth and depth of our study by including new interesting families of bimatrix games, and studying bimatrix games up to size \(2000 \times 2000\). Finally, we provide new close-to-worst-case examples for the best-performing algorithms for finding approximate Nash equilibria.
John Fearnley, Tobenna Peter Igwe, Rahul Savani
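The quality measure behind this study can be pinned down concretely: a strategy profile is an \(\varepsilon\)-equilibrium when no player can gain more than \(\varepsilon\) by deviating. A minimal checker for bimatrix games, written in plain Python for illustration:

```python
def epsilon(A, B, x, y):
    """Approximation quality of mixed strategies (x, y) in the bimatrix
    game (A, B): the largest payoff gain either player could obtain by
    deviating to a best pure response. eps == 0 is an exact equilibrium."""
    rows, cols = range(len(A)), range(len(A[0]))
    pay_row = sum(x[i] * A[i][j] * y[j] for i in rows for j in cols)
    pay_col = sum(x[i] * B[i][j] * y[j] for i in rows for j in cols)
    best_row = max(sum(A[i][j] * y[j] for j in cols) for i in rows)
    best_col = max(sum(x[i] * B[i][j] for i in rows) for j in cols)
    return max(best_row - pay_row, best_col - pay_col)

# Matching pennies: the uniform profile is exact, a pure profile is not.
A = [[1, -1], [-1, 1]]
B = [[-1, 1], [1, -1]]
print(epsilon(A, B, [0.5, 0.5], [0.5, 0.5]))  # 0.0
print(epsilon(A, B, [1, 0], [1, 0]))          # 2
```

Benchmarking approximation methods amounts to measuring how quickly an algorithm drives this \(\varepsilon\) down, which is exactly the trade-off the study explores.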
The Effect of Almost-Empty Faces on Planar Kandinsky Drawings
Abstract
Inspired by the recently-introduced slanted orthogonal graph drawing model, we introduce and study planar Kandinsky drawings with almost-empty faces (i.e., faces that were forbidden in the classical Kandinsky model).
Based on a recent NP-completeness result for Kandinsky drawings by Bläsius et al., we present and experimentally evaluate (i) an ILP that computes bend-optimal Kandinsky drawings with almost-empty faces, and, (ii) a more efficient heuristic that results in drawings with relatively few bends. Our evaluation shows that the new model, in the presence of many triangular faces, not only improves the number of bends, but also the compactness of the resulting drawings.
Michael A. Bekos, Michael Kaufmann, Robert Krug, Martin Siebenhaller

Graph Problems II

Frontmatter
An Experimental Analysis of a Polynomial Compression for the Steiner Cycle Problem
Abstract
We implement and evaluate a polynomial compression algorithm for the Steiner Cycle problem that was recently developed by Wahlström (STACS 2013). Steiner Cycle is a generalization of Hamiltonian Cycle and asks, given a graph \(G=(V,E)\) and a set of \(k\) terminals \(T\subseteq V\), whether there is a simple cycle containing \(T\) as well as an arbitrary number of further vertices of \(G\). Wahlström’s compression algorithm takes any such instance and in polynomial time produces an equivalent instance of size polynomial in \(k\). The algorithm has several distinguishing features that make it interesting as a test subject for evaluating theoretical results on preprocessing: It uses Gaussian elimination on the Tutte matrix of (essentially) the input graph instead of explicit reduction rules. The output is an instance of an artificial matrix problem, which might not even be in NP, rather than Steiner Cycle.
We study to what extent this compression algorithm is useful for actually speeding up the computation for Steiner Cycle. At a high level, we find that there is a substantial improvement from using the compression compared to directly running a \({\mathcal {O}}(2^k\cdot |V|^c)\) algebraic algorithm, also due to Wahlström. This is despite the fact that, at face value, the creation of somewhat artificial output instances by means of nonstandard tools seems not all that practical. It does benefit, however, from being strongly tied into a careful reorganization of the algebraic algorithm.
Stefan Fafianie, Stefan Kratsch
On the Quadratic Shortest Path Problem
Abstract
Finding the shortest path in a directed graph is one of the most important combinatorial optimization problems, having applications in a wide range of fields. In its basic version, however, the problem fails to represent situations in which the value of the objective function is determined not only by the choice of each single arc, but also by the combined presence of pairs of arcs in the solution. In this paper we model these situations as a Quadratic Shortest Path Problem, which calls for the minimization of a quadratic objective function subject to shortest-path constraints. We prove strong NP-hardness of the problem and analyze polynomially solvable special cases, obtained by restricting the distance of arc pairs in the graph that appear jointly in a quadratic monomial of the objective function. Based on this special case and problem structure, we devise fast lower bounding procedures for the general problem and show computationally that they clearly outperform other approaches proposed in the literature in terms of their strength.
Borzou Rostami, Federico Malucelli, Davide Frey, Christoph Buchheim
A Solution Merging Heuristic for the Steiner Problem in Graphs Using Tree Decompositions
Abstract
Fixed parameter tractable algorithms for bounded treewidth are known to exist for a wide class of graph optimization problems. While most research in this area has focused on exact algorithms, it is hard to find decompositions of treewidth sufficiently small to make these algorithms fast enough for practical use. Consequently, tree decomposition based algorithms have limited applicability to large scale optimization. However, by first reducing the input graph so that a small width tree decomposition can be found, we can harness the power of tree decomposition based techniques in a heuristic algorithm, usable on graphs of much larger treewidth than would be tractable to solve exactly. We propose a solution merging heuristic for the Steiner Tree Problem that applies this idea. Standard local search heuristics provide a natural way to generate subgraphs with lower treewidth than the original instance, and subsequently we extract an improved solution by solving the instance induced by this subgraph. As such the fixed parameter tractable algorithm becomes an efficient tool for our solution merging heuristic. For a large class of sparse benchmark instances the algorithm is able to find small width tree decompositions on the union of generated solutions. Subsequently it can often quickly improve on the generated solutions.
Thomas Bosman
Backmatter
Metadata
Title
Experimental Algorithms
edited by
Evripidis Bampis
Copyright Year
2015
Electronic ISBN
978-3-319-20086-6
Print ISBN
978-3-319-20085-9
DOI
https://doi.org/10.1007/978-3-319-20086-6
