
2004 | Book

Experimental and Efficient Algorithms

Third International Workshop, WEA 2004, Angra dos Reis, Brazil, May 25-28, 2004. Proceedings

Edited by: Celso C. Ribeiro, Simone L. Martins

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter
A Hybrid Bin-Packing Heuristic to Multiprocessor Scheduling

The multiprocessor scheduling problem consists of scheduling a set of tasks with known processing times on a set of identical processors so as to minimize the makespan, i.e., the maximum processing time over all processors. We propose a new heuristic for solving the multiprocessor scheduling problem, based on a hybrid heuristic for the bin packing problem. Computational results illustrating the effectiveness of this approach are reported and compared with those obtained by other heuristics.
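As a point of reference for heuristics of this kind, the classical Longest-Processing-Time (LPT) list-scheduling rule for identical processors can be sketched as follows; this is a standard baseline for the problem, not the hybrid bin-packing heuristic of the paper:

```python
import heapq

def lpt_makespan(times, m):
    """Longest-Processing-Time rule: sort tasks by decreasing processing time
    and assign each to the currently least-loaded of m identical processors.
    Returns the resulting makespan (largest processor load)."""
    loads = [(0, i) for i in range(m)]   # (load, processor) min-heap
    heapq.heapify(loads)
    for t in sorted(times, reverse=True):
        load, proc = heapq.heappop(loads)
        heapq.heappush(loads, (load + t, proc))
    return max(load for load, _ in loads)
```

For the six tasks [7, 7, 6, 6, 5, 5] on two processors, LPT achieves the optimal makespan of 18.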

Adriana C. F. Alvim, Celso C. Ribeiro
Efficient Edge-Swapping Heuristics for Finding Minimum Fundamental Cycle Bases

The problem of finding a fundamental cycle basis with minimum total cost in a graph is NP-hard. Since fundamental cycle bases correspond to spanning trees, we propose new heuristics (local search and metaheuristics) in which edge swaps are iteratively applied to a current spanning tree. Structural properties that make the heuristics efficient are established. We also present a mixed integer programming formulation of the problem whose linear relaxation yields tighter lower bounds than known formulations. Computational results obtained with our algorithms are compared with those from existing constructive heuristics on several types of graphs.

Edoardo Amaldi, Leo Liberti, Nelson Maculan, Francesco Maffioli
Solving Chance-Constrained Programs Combining Tabu Search and Simulation

Real-world problems usually have to deal with some uncertainties. This is particularly true for the planning of services whose requests are unknown a priori. Several approaches for solving stochastic problems are reported in the literature. Metaheuristics seem to be a powerful tool for computing good and robust solutions. However, the efficiency of algorithms based on Local Search, such as Tabu Search, suffers from the complexity of evaluating the objective function after each move. In this paper, we propose alternative methods of dealing with uncertainties which are suitable for implementation within a Tabu Search framework.

Roberto Aringhieri
An Algorithm to Identify Clusters of Solutions in Multimodal Optimisation

Clustering can be used to identify groups of similar solutions in Multimodal Optimisation. However, poor clustering quality reduces the benefit of this application. The vast majority of clustering methods in the literature operate by resorting to a priori assumptions about the data, such as the number of clusters or the cluster radius. Clusters are forced to conform to these assumptions, which may not be valid for the considered population, and this can have a huge negative impact on the clustering quality. In this paper, we apply a clustering method that does not require a priori knowledge. We demonstrate the effectiveness and efficiency of the method on real and synthetic data sets emulating solutions of Multimodal Optimisation problems.

Pedro J. Ballester, Jonathan N. Carter
On an Experimental Algorithm for Revenue Management for Cargo Airlines

In this paper we present a set of algorithms which implement different strategies for revenue management for cargo airlines. The problem is to decide on the acceptance of a booking request, given a set of fixed accepted requests and expected further demand. The strategies are based on a planning model for air cargo routing which determines the maximal contribution to profit for the capacities of a given flight schedule. This planning model associates an optimal set of so-called itineraries/paths to the set of possible origin-destination pairs according to given yield values. In mathematical terms, the model is the path flow formulation of a special multi-commodity flow problem. This model is solved by intelligently applying column generation.

Paul Bartodziej, Ulrich Derigs
Cooperation between Branch and Bound and Evolutionary Approaches to Solve a Bi-objective Flow Shop Problem

Over the years, many techniques have been established to solve NP-hard optimization problems, and in particular multiobjective problems. Each of them is efficient on certain types of problems or instances. We can distinguish exact methods dedicated to solving small instances from heuristics – and particularly metaheuristics – that approximate the best solutions on large instances. In this article, we first present an efficient exact method, called the two-phase method. We apply it to a biobjective Flow Shop Problem to find the optimal set of solutions. Since exact methods are limited by the size of the instances, we propose an original cooperation between this exact method and a Genetic Algorithm to obtain good results on large instances. The results obtained are promising and show that cooperation between antagonistic optimization methods can be very efficient.

Matthieu Basseur, Julien Lemesre, Clarisse Dhaenens, El-Ghazali Talbi
Simple Max-Cut for Split-Indifference Graphs and Graphs with Few P4’s

The simple max-cut problem is as follows: given a graph, find a partition of its vertex set into two disjoint sets, such that the number of edges having one endpoint in each set is as large as possible. A split graph is a graph whose vertex set admits a partition into a stable set and a clique. The simple max-cut decision problem is known to be NP-complete for split graphs. An indifference graph is the intersection graph of a set of unit intervals of the real line. We show that the simple max-cut problem can be solved in linear time for a graph that is both split and indifference. Moreover, we also show that for each constant q, the simple max-cut problem can be solved in polynomial time for (q,q-4)-graphs. These are graphs for which no set of at most q vertices induces more than q-4 distinct P4’s.
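The objective defined above can be made concrete with a brute-force sketch that enumerates all bipartitions (exponential time, so only for tiny graphs; the paper's contribution is polynomial-time algorithms for the special graph classes named):

```python
from itertools import combinations

def simple_max_cut(n, edges):
    """Exhaustively find a bipartition of vertices {0..n-1} maximizing the
    number of edges with one endpoint on each side. By symmetry it suffices
    to enumerate sides of size at most n//2."""
    best = 0
    for r in range(n // 2 + 1):
        for side in combinations(range(n), r):
            s = set(side)
            cut = sum(1 for u, v in edges if (u in s) != (v in s))
            best = max(best, cut)
    return best
```

On a triangle the best cut has 2 edges; on a 4-cycle, which is bipartite, all 4 edges can cross the cut.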

Hans L. Bodlaender, Celina M. H. de Figueiredo, Marisa Gutierrez, Ton Kloks, Rolf Niedermeier
A Randomized Heuristic for Scene Recognition by Graph Matching

We propose a new strategy for solving the non-bijective graph matching problem in model-based pattern recognition. The search for the best correspondence between a model and an over-segmented image is formulated as a combinatorial optimization problem, defined by the relational attributed graphs representing the model and the image where recognition has to be performed, together with the node and edge similarities between them. A randomized construction algorithm is proposed to build feasible solutions to the problem. Two neighborhood structures and a local search procedure for solution improvement are also proposed. Computational results are presented and discussed, illustrating the effectiveness of the combined approach involving randomized construction and local search.

Maria C. Boeres, Celso C. Ribeiro, Isabelle Bloch
An Efficient Implementation of a Joint Generation Algorithm

Let $\mathcal{C}$ be an n-dimensional integral box, and π be a monotone property defined over the elements of $\mathcal{C}$. We consider the problems of incrementally generating jointly the families $\mathcal{F}_{\pi}$ and $\mathcal{G}_{\pi}$ of all minimal subsets satisfying property π and all maximal subsets not satisfying property π, when π is given by a polynomial-time satisfiability oracle. Problems of this type arise in many practical applications. It is known that the above joint generation problem can be solved in incremental quasi-polynomial time. In this paper, we present an efficient implementation of this procedure. We present experimental results to evaluate our implementation for a number of interesting monotone properties π.

Endre Boros, Khaled Elbassioni, Vladimir Gurvich, Leonid Khachiyan
Lempel, Even, and Cederbaum Planarity Method

We present a simple pedagogical graph-theoretical description of the Lempel, Even, and Cederbaum (LEC) planarity method based on concepts due to Thomas. A linear-time implementation of the LEC method using the PC-tree data structure of Shih and Hsu is provided and described in detail. We report on an experimental study involving this implementation and other available linear-time implementations of planarity algorithms.

John M. Boyer, Cristina G. Fernandes, Alexandre Noma, José C. de Pina
A Greedy Approximation Algorithm for the Uniform Labeling Problem Analyzed by a Primal-Dual Technique

In this paper we present a new fast approximation algorithm for the Uniform Metric Labeling Problem. This is an important classification problem that occurs in many applications which consider the assignment of objects to labels, in a way that is consistent with some observed data that includes the relationships between the objects. The known approximation algorithms are based on solutions of large linear programs and are impractical for moderate and large size instances. We present an 8 log n-approximation algorithm analyzed by a primal-dual technique which, although it has a factor greater than the previous algorithms, can be applied to large instances. We obtained experimental results on computationally generated and image processing instances with the new algorithm and two other LP-based approximation algorithms. For these instances our algorithm presents a considerable gain in computational time, and the error ratio, when it was possible to compare, was less than 2% from the optimum.

Evandro C. Bracht, Luis A. A. Meira, Flávio K. Miyazawa
Distributed Circle Formation for Anonymous Oblivious Robots

This paper deals with systems of multiple mobile robots, each of which observes the positions of the other robots and moves to a new position so that eventually the robots form a circle. In the model we study, the robots are anonymous and oblivious, in the sense that they cannot be distinguished by their appearance and do not have a common x-y coordinate system, and they are unable to remember past actions. We propose a new distributed algorithm for circle formation on the plane. We prove that our algorithm is correct and provide an upper bound for its performance. In addition, we conduct an extensive and detailed comparative simulation study with the DK algorithm described in [7]. The results show that our algorithm is very simple and takes considerably less time to execute than algorithm DK.

Ioannis Chatzigiannakis, Michael Markou, Sotiris Nikoletseas
Dynamic Programming and Column Generation Based Approaches for Two-Dimensional Guillotine Cutting Problems

We investigate two cutting problems and their variants in which orthogonal rotations are allowed. We present a dynamic programming based algorithm for the Two-dimensional Guillotine Cutting Problem with Value (GCV) that uses the recurrence formula proposed by Beasley and the discretization points defined by Herz. We show that if the items are not too small compared to the dimensions of the bin, this algorithm requires polynomial time. Using this algorithm we solved all instances of GCV found in the OR-Library, including one for which no optimal solution was known. We also investigate the Two-dimensional Guillotine Cutting Problem with Demands (GCD). We present a column generation based algorithm for GCD that uses the above-mentioned algorithm to generate the columns. We propose two strategies to tackle the residual instances. We report on some computational experiments with the various algorithms we propose in this paper. The results indicate that these algorithms seem to be suitable for solving real-world instances.

Glauber Cintra, Yoshiko Wakabayashi
Engineering Shortest Path Algorithms

In this paper, we report on our own experience in studying a fundamental problem on graphs: all pairs shortest paths. In particular, we discuss the interplay between theory and practice in engineering a simple variant of Dijkstra’s shortest path algorithm. In this context, we show that studying heuristics that are efficient in practice can yield interesting clues to the combinatorial properties of the problem, and eventually lead to new theoretically efficient algorithms.
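The point of departure, Dijkstra's algorithm with a binary heap, can be sketched in its textbook form (a generic baseline, not the engineered variant developed in the paper):

```python
import heapq

def dijkstra(graph, source):
    """Textbook Dijkstra with a binary heap.
    graph maps node -> list of (neighbor, weight) with non-negative weights;
    returns a dict of shortest-path distances from source."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, already improved
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

Lazy deletion (skipping stale heap entries) keeps the implementation simple at the cost of duplicate entries in the queue.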

Camil Demetrescu, Giuseppe F. Italiano
How to Tell a Good Neighborhood from a Bad One: Satisfiability of Boolean Formulas

One of the major problems algorithm designers usually face is to know in advance whether a proposed optimization algorithm is going to behave as planned, and if not, what changes need to be made to the way new solutions are examined so that the algorithm performs well. In this work we develop a methodology for differentiating good neighborhoods from bad ones. As a case study we consider the structure of the space of assignments for random 3-SAT formulas and we compare two neighborhoods: a simple one, and a more refined one for which we already know that the corresponding algorithm behaves extremely well. We give evidence that it is possible to tell in advance which neighborhood structure will give rise to a good search algorithm, and we show how our methodology could have been used to discover some recent results on the structure of the SAT space of solutions. We use as a tool “Go with the winners”, an optimization heuristic that uses many particles that independently search the space of all possible solutions. By gathering statistics, we compare the combinatorial characteristics of the different neighborhoods and we show that there are certain features that make one neighborhood better than another, thus giving rise to good search algorithms.

Tassos Dimitriou, Paul Spirakis
Implementing Approximation Algorithms for the Single-Source Unsplittable Flow Problem

In the single-source unsplittable flow problem, commodities must be routed simultaneously from a common source vertex to certain sinks in a given graph with edge capacities. The demand of each commodity must be routed along a single path so that the total flow through any edge is at most its capacity. This problem was introduced by Kleinberg [12] and generalizes several NP-complete problems. A cost value per unit of flow may also be defined for every edge. In this paper, we implement the 2-approximation algorithm of Dinitz, Garg, and Goemans [6] for congestion, which is the best known, and the (3,1)-approximation algorithm of Skutella [19] for congestion and cost, which is the best known bicriteria approximation. We study experimentally the quality of approximation achieved by the algorithms and the effect of heuristics on their performance. We also compare these algorithms against the previous best ones by Kolliopoulos and Stein [15].

Jingde Du, Stavros G. Kolliopoulos
Fingered Multidimensional Search Trees

In this work, we propose two variants of K-d trees in which fingers are used to improve the performance of orthogonal range searches and nearest neighbor queries when they exhibit locality of reference. The experiments show that the second alternative yields significant savings. Although its improvements are more modest, the first variant achieves them with much lower memory requirements and great simplicity, which makes it more attractive on practical grounds.

Amalia Duch, Conrado Martínez
Faster Deterministic and Randomized Algorithms on the Homogeneous Set Sandwich Problem

A homogeneous set is a non-trivial, proper subset of a graph’s vertices such that all its elements have exactly the same outer neighborhood. Given two graphs G1(V,E1) and G2(V,E2), we consider the problem of finding a sandwich graph GS(V,ES), with E1 ⊆ ES ⊆ E2, which contains a homogeneous set, in case such a graph exists. This is called the Homogeneous Set Sandwich Problem (HSSP). We give an O(n3.5) deterministic algorithm, which updates the known upper bounds for this problem, and an O(n3) Monte Carlo algorithm as well. Both algorithms, which share the same underlying idea, are quite easy to implement on a computer.
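The definition of a homogeneous set can be checked directly. A minimal sketch, with the adjacency given as a dict of neighbor sets (a hypothetical representation chosen here for brevity):

```python
def is_homogeneous_set(adj, H):
    """Check whether H is a homogeneous set of the graph given by adj
    (vertex -> set of neighbors): H must be non-trivial (1 < |H| < n) and
    every vertex of H must have the same set of neighbors outside H."""
    H = set(H)
    if not (1 < len(H) < len(adj)):
        return False
    outer = [adj[v] - H for v in H]
    return all(o == outer[0] for o in outer)
```

In the graph with edges 01, 02, 12, 13, 23, the set {1, 2} is homogeneous: both vertices see exactly {0, 3} outside.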

Celina M. H. de Figueiredo, Guilherme D. da Fonseca, Vinícius G. P. de Sá, Jeremy Spinrad
Efficient Implementation of the BSP/CGM Parallel Vertex Cover FPT Algorithm

In many applications NP-complete problems need to be solved exactly. One promising method for treating some intractable problems is the so-called Parameterized Complexity approach, which divides the problem input into a main part and a parameter. The main part of the input contributes polynomially to the total complexity of the problem, while the parameter is responsible for the combinatorial explosion. We consider the parallel FPT algorithm of Cheetham et al. to solve the k-Vertex Cover problem, using the CGM model. Our contribution is to present a refined and improved implementation. In our parallel experiments, we obtained better running times and smaller cover sizes for some input data. The key to these results was the choice of good data structures and the use of the backtracking technique. We used 5 graphs that represent conflict graphs of amino acids, the same graphs used by Cheetham et al. in their experiments. For two of these graphs, the times we obtained were approximately 115 times better, for one of them 16 times better, and, for the remaining graphs, the obtained times were slightly better. We must also emphasize that we used a computational environment inferior to that used in the experiments of Cheetham et al. Furthermore, for three graphs, we obtained smaller sizes for the cover.

Erik J. Hanashiro, Henrique Mongelli, Siang W. Song
Combining Speed-Up Techniques for Shortest-Path Computations

Computing a shortest path from one node to another in a directed graph is a very common task in practice. This problem is classically solved by Dijkstra’s algorithm. Many techniques are known to speed up this algorithm heuristically, while optimality of the solution can still be guaranteed. In most studies, such techniques are considered individually. The focus of our work is the combination of speed-up techniques for Dijkstra’s algorithm. We consider all possible combinations of four known techniques, namely goal-directed search, bi-directed search, multi-level approach, and shortest-path bounding boxes, and show how these can be implemented. In an extensive experimental study we compare the performance of different combinations and analyze how the techniques harmonize when applied jointly. Several real-world graphs from road maps and public transport and two types of generated random graphs are taken into account.
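One of the four techniques, goal-directed search, is commonly realized as A* guided by a feasible lower bound on the remaining distance. A sketch using the Euclidean distance to the target (an illustration of the idea, not the paper's implementation; it assumes every edge weight is at least the Euclidean length of the edge, so optimality is preserved):

```python
import heapq
import math

def astar(graph, coords, source, target):
    """Goal-directed shortest path: Dijkstra ordered by dist + h, where
    h(u) is the Euclidean distance from u to the target (a lower bound).
    graph maps node -> list of (neighbor, weight); coords maps node -> (x, y)."""
    def h(u):
        return math.dist(coords[u], coords[target])
    dist = {source: 0.0}
    pq = [(h(source), source)]
    while pq:
        f, u = heapq.heappop(pq)
        if u == target:
            return dist[u]
        if f > dist[u] + h(u) + 1e-12:
            continue  # stale entry
        for v, w in graph.get(u, []):
            nd = dist[u] + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return math.inf
```

The lower bound steers the search toward the target, typically settling far fewer nodes than plain Dijkstra while returning the same distance.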

Martin Holzer, Frank Schulz, Thomas Willhalm
Increased Bit-Parallelism for Approximate String Matching

Bit-parallelism permits executing several operations simultaneously over a set of bits or numbers stored in a single computer word. This technique permits searching for the approximate occurrences of a pattern of length m in a text of length n in time O(⌈m/w⌉n), where w is the number of bits in the computer word. Although this is asymptotically the optimal speedup over the basic O(mn) time algorithm, it wastes bit-parallelism’s power in the common case where m is much smaller than w, since w-m bits in the computer words go unused. In this paper we explore different ways to increase the bit-parallelism when the search pattern is short. First, we show how multiple patterns can be packed into a single computer word so as to search for them simultaneously. Instead of paying O(rn) time to search for r patterns of length m<w, we obtain $O(\lceil r/\lfloor w/m\rfloor\rceil n)$ time. Second, we show how the mechanism permits boosting the search for a single pattern of length m < w, which can be searched for in time $O(n/\lfloor w/m\rfloor)$ instead of O(n). Finally, we show how to extend these algorithms so that the time bounds essentially depend on k instead of m, where k is the maximum number of differences permitted. Our experimental results show that the algorithms work well in practice and are the fastest alternatives for a wide range of search parameters.
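The flavor of bit-parallel searching can be seen in the classical Shift-And algorithm for exact matching, which keeps the states of all pattern prefixes in the bits of one word (the paper treats the approximate case; this sketch handles the exact one, and Python integers stand in for a machine word):

```python
def shift_and(text, pattern):
    """Bit-parallel Shift-And exact matching: bit i of the state is set iff
    pattern[0..i] matches the text ending at the current position.
    Returns the starting positions of all occurrences."""
    m = len(pattern)
    masks = {}                      # per-character bit masks
    for i, c in enumerate(pattern):
        masks[c] = masks.get(c, 0) | (1 << i)
    accept = 1 << (m - 1)
    state = 0
    hits = []
    for j, c in enumerate(text):
        state = ((state << 1) | 1) & masks.get(c, 0)
        if state & accept:
            hits.append(j - m + 1)
    return hits
```

Each text character costs a shift, an OR, and an AND, independently of how many prefixes are simultaneously alive.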

Heikki Hyyrö, Kimmo Fredriksson, Gonzalo Navarro
The Role of Experimental Algorithms in Genomics

Biology has become a computational science. In their efforts to understand the functioning of cells at a molecular level, biologists make use of a growing array of databases that codify knowledge about genomes, the genes within them, the structure and function of the proteins encoded by the genes, and the interactions among genes, RNA molecules, proteins, molecular machines and other chemical components of the cell. Biologists have access to high-throughput measurement technologies such as DNA microarrays, which can measure the expression levels of tens of thousands of genes in a single experiment.

Richard M. Karp
A Fast Algorithm for Constructing Suffix Arrays for Fixed-Size Alphabets

The suffix array of a string T is basically a sorted list of all the suffixes of T. Suffix arrays have become fundamental index data structures in computational biology. If we are to search for a DNA sequence in a genome sequence, we construct the suffix array for the genome sequence and then search for the DNA sequence in the suffix array. In this paper, we consider the construction of the suffix array of T of length n where the size of the alphabet is fixed. It is well known that one can construct the suffix array of T in O(n) time by constructing the suffix tree of T and traversing it. Although this approach takes O(n) time, it is not appropriate for practical use because it uses a lot of space and is complicated to implement. Recently, almost at the same time, several algorithms were developed to construct suffix arrays directly in O(n) time. However, these algorithms were developed for integer alphabets and thus do not exploit the properties available when the size of the alphabet is fixed. We present a fast algorithm for constructing suffix arrays over a fixed-size alphabet. Our algorithm constructs suffix arrays faster than any other algorithm developed for integer or general alphabets when the size of the alphabet is fixed. For example, we reduced the time required for constructing suffix arrays for DNA sequences by 25%-38%. In addition, we do not sacrifice space to improve the running time: the space required by our algorithm is almost equal to or even less than that required by previous fast algorithms.
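The definition of a suffix array can be captured by the naive construction, which sorts the suffixes directly (O(n^2 log n) in the worst case, so this is a specification rather than one of the fast algorithms compared in the paper):

```python
def suffix_array(t):
    """Naive suffix-array construction: sort all suffixes of t
    lexicographically and return their starting indices."""
    return sorted(range(len(t)), key=lambda i: t[i:])
```

For "banana" the sorted suffixes are "a", "ana", "anana", "banana", "na", "nana", giving the array [5, 3, 1, 0, 4, 2].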

Dong K. Kim, Junha Jo, Heejin Park
Pre-processing and Linear-Decomposition Algorithm to Solve the k-Colorability Problem

We are interested in the graph coloring problem. We study the effectiveness of some pre-processings that are specific to the k-colorability problem and that promise to reduce the size or the difficulty of the instances. We propose to apply to the reduced graph an exact method based on a linear-decomposition of the graph. We present some experiments performed on instances from the literature, including DIMACS library instances.

Corinne Lucet, Florence Mendes, Aziz Moukrim
An Experimental Study of Unranking Algorithms

We describe in this extended abstract the experiments that we have conducted in order to fulfill the following goals: a) to obtain a working implementation of the unranking algorithms that we have presented in previous works; b) to assess the validity and range of application of our theoretical analysis of the performance of these algorithms; c) to provide preliminary figures on the practical performance of these algorithms under a reasonable environment; and finally, d) to compare these algorithms with the algorithms for random generation. Additionally, the experiments support our conjecture that the average complexity of boustrophedonic unranking is Θ(nlog n) for many combinatorial classes (namely, those whose specification requires recursion) and that it performs only slightly worse than lexicographic unranking for iterative classes (those which do not require recursion to be specified).

Conrado Martínez, Xavier Molinero
An Improved Derandomized Approximation Algorithm for the Max-Controlled Set Problem

A vertex i of a graph G=(V,E) is said to be controlled by M ⊆ V if the majority of the elements of the neighborhood of i (including itself) belong to M. The set M is a monopoly in G if every vertex i ∈ V is controlled by M. Given a set M ⊆ V and two graphs G1=(V,E1) and G2=(V,E2) where E1 ⊆ E2, the monopoly verification problem (mvp) consists of deciding whether there exists a sandwich graph G=(V,E) (i.e., a graph where E1 ⊆ E ⊆ E2) such that M is a monopoly in G=(V,E). If the answer to the mvp is No, we then consider the max-controlled set problem (mcsp), whose objective is to find a sandwich graph G=(V,E) such that the number of vertices of G controlled by M is maximized. The mvp can be solved in polynomial time; the mcsp, however, is NP-hard. In this work, we present a deterministic polynomial time approximation algorithm for the mcsp with ratio $\frac{1}{2}+ \frac{1+\sqrt{n}}{2n-2}$, where n=|V|>4. (The case n ≤ 4 is solved exactly by considering the parameterized version of the mcsp.) The algorithm is obtained through the use of randomized rounding and derandomization techniques, namely the method of conditional expectations. Additionally, we show how to improve this ratio if good estimates of expectation are obtained in advance.
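The notion of control used above can be checked directly. A minimal sketch, taking "majority" as strictly more than half of the closed neighborhood N(i) ∪ {i}:

```python
def controlled_vertices(adj, M):
    """Return the set of vertices controlled by M: vertex i is controlled
    when more than half of its closed neighborhood lies in M.
    adj maps vertex -> set of neighbors."""
    M = set(M)
    out = set()
    for i, nbrs in adj.items():
        closed = nbrs | {i}
        if 2 * len(closed & M) > len(closed):
            out.add(i)
    return out
```

On the path 0-1-2 with M = {0, 1}, vertices 0 and 1 are controlled but 2 is not (only half of its closed neighborhood {1, 2} is in M).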

Carlos A. Martinhon, Fábio Protti
GRASP with Path-Relinking for the Quadratic Assignment Problem

This paper describes a GRASP with path-relinking heuristic for the quadratic assignment problem. GRASP is a multi-start procedure, where different points in the search space are probed with local search for high-quality solutions. Each iteration of GRASP consists of the construction of a randomized greedy solution, followed by local search starting from the constructed solution. Path-relinking is an approach to integrate intensification and diversification in search. It consists of exploring trajectories that connect high-quality solutions: a trajectory is generated by introducing into the initial solution attributes of the guiding solution. Experimental results illustrate the effectiveness of GRASP with path-relinking over pure GRASP on the quadratic assignment problem.

Carlos A. S. Oliveira, Panos M. Pardalos, Mauricio G. C. Resende
Finding Minimum Transmission Radii for Preserving Connectivity and Constructing Minimal Spanning Trees in Ad Hoc and Sensor Networks

The minimum transmission radius R that preserves ad hoc network connectivity is equal to the longest edge in the minimum spanning tree (MST). This article proposes to use the longest LMST (local MST, a recently proposed message-free approximation of the MST) edge to approximate R using a wave propagation quasi-localized algorithm. Despite the small number of additional edges in the LMST with respect to the MST, they can extend R by about 33% on networks with up to 500 nodes. We then prove that the MST is a subset of the LMST and describe a quasi-localized scheme for constructing the MST from the LMST. The algorithm eliminates LMST edges which are not in the MST by a loop breakage procedure, which iteratively follows dangling edges from leaves to LMST loops, and breaks loops by eliminating their longest edges, until the procedure finishes at a single leader node, which then broadcasts R to the other nodes.
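The opening fact, that the minimum connectivity-preserving radius R equals the longest MST edge, suggests computing R centrally with Prim's algorithm on the complete Euclidean graph (a centralized sketch; the paper's point is to approximate R with localized, message-free computations instead):

```python
import math

def min_connectivity_radius(points):
    """Minimum transmission radius preserving connectivity of a set of 2D
    node positions: the longest edge of the Euclidean MST, found by Prim's
    algorithm on the complete graph (O(n^2))."""
    n = len(points)
    in_tree = [False] * n
    best = [math.inf] * n   # cheapest known connection to the tree
    best[0] = 0.0
    longest = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        longest = max(longest, best[u])
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], math.dist(points[u], points[v]))
    return longest
```

For the three nodes (0,0), (3,0), (3,4) the MST edges have lengths 3 and 4, so R = 4.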

Francisco Javier Ovalle-Martínez, Ivan Stojmenovic, Fabián García-Nocetti, Julio Solano-González
A Dynamic Algorithm for Topologically Sorting Directed Acyclic Graphs

We consider how to maintain the topological order of a directed acyclic graph (DAG) in the presence of edge insertions and deletions. We present a new algorithm and, although this has marginally inferior time complexity compared with the best previously known result, we find that its simplicity leads to better performance in practice. In addition, we provide an empirical comparison against three alternatives over a large number of random DAGs. The results show our algorithm is the best for sparse graphs and, surprisingly, that an alternative with poor theoretical complexity performs marginally better on dense graphs.
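As a baseline for the dynamic setting, a static topological order can be computed from scratch with Kahn's algorithm; the algorithms compared in the paper maintain such an order incrementally under edge updates instead of recomputing it:

```python
from collections import deque

def topological_order(n, edges):
    """Kahn's algorithm: topological order of a DAG on nodes 0..n-1,
    or None if the edge list contains a cycle."""
    succ = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:
        succ[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return order if len(order) == n else None
```

Recomputing after every edge insertion costs O(n + m) per update, which is the cost the dynamic algorithms aim to beat.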

David J. Pearce, Paul H. J. Kelly
Approximating Interval Coloring and Max-Coloring in Chordal Graphs

We consider two coloring problems for chordal graphs: interval coloring and max-coloring. Given a graph G = (V, E) and positive integral vertex weights w: V → N, the interval coloring problem seeks an assignment of a real interval I(u) to each vertex u ∈ V such that two constraints are satisfied: (i) for every vertex u ∈ V, |I(u)| = w(u), and (ii) for every pair of adjacent vertices u and v, I(u) ∩ I(v) = ∅. The goal is to minimize the span $|\bigcup_{u \in V} I(u)|$. The max-coloring problem seeks a proper vertex coloring of G whose color classes C1, C2, ..., Ck minimize the sum of the weights of the heaviest vertices in the color classes, that is, $\sum_{i=1}^{k}\max_{v \in C_{i}}w(v)$. Both problems arise in efficient memory allocation for programs. Both problems are NP-complete even for interval graphs, though both admit constant-factor approximation algorithms on interval graphs. In this paper we consider these problems for chordal graphs. There are no known constant-factor approximation algorithms for either interval coloring or max-coloring on chordal graphs. However, we point out that there are several simple O(log(n))-factor approximation algorithms for both problems. We experimentally evaluate and compare three simple heuristics: first-fit, best-fit, and a heuristic based on partitioning the graph into vertex sets of similar weight. Our experiments show that in general first-fit performs better than the other two heuristics and is typically very close to OPT, deviating from OPT by about 6% in the worst case for both problems. Best-fit provides some competition to first-fit, but the graph partitioning heuristic performs significantly worse than either. Our basic data come from about 10000 runs of each of the three heuristics for each of the two problems on randomly generated chordal graphs of various sizes and sparsity.
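The first-fit heuristic evaluated in the paper can be sketched for max-coloring: scan the vertices in some order, give each the smallest color absent from its neighbors, and charge each color class the weight of its heaviest vertex (a generic sketch; the vertex order and graph representation here are illustrative):

```python
def first_fit_max_coloring(order, adj, w):
    """First-fit heuristic for max-coloring: color vertices in the given
    order with the smallest color unused by their neighbors, then return
    the sum over color classes of the heaviest vertex weight.
    adj maps vertex -> set of neighbors; w maps vertex -> weight."""
    color = {}
    for v in order:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    heaviest = {}
    for v, c in color.items():
        heaviest[c] = max(heaviest.get(c, 0), w[v])
    return sum(heaviest.values())
```

On the path 0-1-2 with weights 5, 4, 3, first-fit reuses color 0 for vertices 0 and 2, giving cost 5 + 4 = 9.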

Sriram V. Pemmaraju, Sriram Penumatcha, Rajiv Raman
A Statistical Approach for Algorithm Selection

This paper deals with heuristic algorithm characterization, which is applied to the solution of an NP-hard problem, in order to select the best algorithm for solving a given problem instance. The traditional approach for selecting algorithms compares their performance using an instance set, and concludes that one outperforms the other. Another common approach consists of developing mathematical models to relate performance to problem size. Recent approaches try to incorporate more characteristics. However, they do not identify the characteristics that affect performance in a critical way, and do not incorporate them explicitly in their performance model. In contrast, we propose a systematic procedure to create models that incorporate critical characteristics, aiming at the selection of the best algorithm for solving a given instance. To validate our approach we carried out experiments using an extensive test set. In particular, for the classical bin packing problem, we developed models that incorporate the interrelation among five critical characteristics and the performance of seven heuristic algorithms. As a result of applying our procedure, we obtained a 76% accuracy in the selection of the best algorithm.

Joaquín Pérez, Rodolfo A. Pazos, Juan Frausto, Guillermo Rodríguez, David Romero, Laura Cruz
An Improved Time-Sensitive Metaheuristic Framework for Combinatorial Optimization

We introduce a metaheuristic framework for combinatorial optimization. Our framework is similar to many existing frameworks (e.g. [27]) in that it is modular enough that important components can be independently developed to create optimizers for a wide range of problems. Ours is different in many aspects, among them its combinatorial emphasis and the use of simulated annealing and incremental greedy heuristics. We describe several annealing schedules and a hybrid strategy combining incremental greedy and simulated annealing heuristics. Our experiments show that (1) a particular annealing schedule is best on average and (2) the hybrid strategy on average outperforms each individual search strategy. Additionally, our framework guarantees the feasibility of returned solutions for combinatorial problems that permit infeasible solutions. We further discuss a generic method for efficiently optimizing bottleneck problems under the local-search framework.

Vinhthuy Phan, Steven Skiena
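As a generic illustration of one of the search strategies the framework combines, the following is a minimal simulated annealing sketch with a geometric cooling schedule. The toy inversion-counting objective, the swap neighborhood, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, t0=10.0, alpha=0.95,
                        steps=3000, seed=1):
    """Generic simulated annealing with a geometric cooling schedule."""
    rng = random.Random(seed)
    current = initial
    best, best_cost = current, cost(current)
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with prob e^(-delta/t).
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
            c = cost(current)
            if c < best_cost:
                best, best_cost = current, c
        t *= alpha  # geometric schedule: t_{k+1} = alpha * t_k
    return best

# Toy combinatorial objective: number of inversions in a permutation.
def inversions(p):
    return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

def swap_two(p, rng):
    q = list(p)
    i, j = rng.randrange(len(q)), rng.randrange(len(q))
    q[i], q[j] = q[j], q[i]
    return q
```

With `simulated_annealing(list(range(8))[::-1], swap_two, inversions)` the reversed permutation is driven toward the sorted (zero-inversion) one; swapping in a different schedule only requires changing the single line that updates `t`.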
A Huffman-Based Error Detecting Code

Even codes are Huffman-based prefix codes with the additional property of being able to detect the occurrence of an odd number of 1-bit errors in the message. Their definition was motivated by a problem posed by Hamming in 1980. Even codes have been studied for the case in which the symbols have uniform probabilities. In the present work, we consider the general situation of arbitrary probabilities. We describe an exact algorithm for constructing an optimal even code. The algorithm has complexity O(n³ log n), where n is the number of symbols. Further, we describe a heuristic for constructing a nearly optimal even code, which requires O(n log n) time. The cost of an even code constructed by the heuristic is at most 50% higher than the cost of a Huffman code for the same probabilities, and hence less than 50% higher than the cost of the corresponding optimal even code. However, computer experiments have shown that, for practical purposes, this value seems to be much smaller: at most 5%, for n large enough. This corresponds to the overhead in the size of the encoded message incurred for the ability to detect an odd number of 1-bit errors.

Paulo E. D. Pinto, Fábio Protti, Jayme L. Szwarcfiter
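The detection property rests on a simple invariant: if every codeword contains an even number of 1s, every valid encoded message has even parity, so any odd number of 1-bit errors necessarily changes the parity and is detected. A minimal sketch with a hypothetical three-symbol even code (the codewords below are illustrative, not taken from the paper):

```python
# Hypothetical even code: prefix-free, and every codeword contains an
# even number of 1s ("0" has zero, "11" and "101" have two each).
CODE = {"a": "0", "b": "11", "c": "101"}

def encode(message):
    """Concatenate the codewords; the result always has even parity."""
    return "".join(CODE[symbol] for symbol in message)

def parity_ok(bits):
    """Detection check: an odd number of flipped bits breaks even parity."""
    return bits.count("1") % 2 == 0
```

For example, `encode("abc")` yields `"011101"` with four 1s; flipping any single bit leaves an odd count of 1s, which `parity_ok` rejects.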
Solving Diameter Constrained Minimum Spanning Tree Problems in Dense Graphs

In this study, a lifting procedure is applied to some existing formulations of the Diameter Constrained Minimum Spanning Tree Problem. This problem typically models network design applications where all vertices must communicate with each other at minimum cost, while meeting or surpassing a given quality requirement. An alternative formulation is also proposed for instances of the problem in which the diameter of feasible spanning trees cannot exceed a given odd number. This formulation dominated its counterparts in this study, in terms of the computation time required to obtain proven optimal solutions. The first computational results for complete graph instances of the problem are presented here. Sparse graph instances as large as those found in the literature were solved to proven optimality for the case where diameters cannot exceed given odd numbers. For these applications, the corresponding computation times are competitive with those found in the literature.

Andréa C. dos Santos, Abílio Lucena, Celso C. Ribeiro
An Efficient Tabu Search Heuristic for the School Timetabling Problem

The School Timetabling Problem (STP) concerns the weekly scheduling of meetings between teachers and classes. Since this scheduling must take organizational, pedagogical, and personal requirements into account, the problem is recognized as a very difficult combinatorial optimization problem. This work presents a new Tabu Search (TS) heuristic for the STP, together with two different memory-based diversification strategies. Computational experiments with real-world instances show that, in comparison with a previously proposed TS from the literature, the proposed method produces better solutions for all instances and reaches good-quality solutions faster.

Haroldo G. Santos, Luiz S. Ochi, Marcone J. F. Souza
Experimental Studies of Symbolic Shortest-Path Algorithms

Graphs can be represented symbolically by the Ordered Binary Decision Diagram (OBDD) of their characteristic function. To solve problems on such implicitly given graphs, specialized symbolic algorithms are needed which are restricted to the functional operations offered by the OBDD data structure. In this paper, two symbolic algorithms for the single-source shortest-path problem with nonnegative integer edge weights are presented, which represent symbolic versions of Dijkstra’s algorithm and the Bellman-Ford algorithm. They execute $\mathcal{O}(N\cdot{\rm log}(NB))$ and $\mathcal{O}(NM\cdot{\rm log}(NB))$ OBDD operations, respectively, to obtain the shortest paths in a graph with N nodes, M edges, and maximum edge weight B. Despite its larger worst-case bound, the symbolic Bellman-Ford approach is expected to behave much better on structured graphs because it is able to handle updates of node distances effectively in parallel. Hence, both algorithms have been studied in experiments on random, grid, and threshold graphs with different weight functions. These studies support the assumption that the Dijkstra approach is efficient with respect to space usage, while the Bellman-Ford approach is dominant with respect to runtime.

Daniel Sawitzki
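The "parallel updates" advantage attributed to the Bellman-Ford approach is already visible in the classical algorithm, where a single round relaxes every edge at once. A plain, non-symbolic sketch for intuition (no OBDDs involved; the explicit edge-list representation is an illustrative simplification):

```python
def bellman_ford(n, edges, source):
    """Single-source shortest paths on a graph with n nodes and a list of
    (u, v, weight) edges. Each round relaxes every edge, i.e. it updates
    many node distances in one sweep, which is the behavior the symbolic
    variant performs with OBDD operations."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:  # early exit once all distances have stabilized
            break
    return dist
```

On the small graph `bellman_ford(4, [(0, 1, 1), (1, 2, 2), (0, 2, 5), (2, 3, 1)], 0)` the distances stabilize after two rounds.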
Experimental Comparison of Greedy Randomized Adaptive Search Procedures for the Maximum Diversity Problem

The maximum diversity problem (MDP) consists of identifying optimally diverse subsets of elements from some larger collection. The selection of elements is based on the diversity of their characteristics, calculated by a function applied to their attributes. This problem belongs to the class of NP-hard problems. This paper presents new GRASP heuristics for the MDP, using different construction and local search procedures. Computational experiments and performance comparisons between GRASP heuristics from the literature and the proposed heuristics are provided, and the results are analyzed. The tests show that the new GRASP heuristics are quite robust and find good solutions to this problem.

Geiza C. Silva, Luiz S. Ochi, Simone L. Martins
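A minimal sketch of the two GRASP phases on a toy MDP instance: greedy randomized construction using a restricted candidate list (RCL), followed by a swap-based local search. The `alpha` parameter, the first-improvement swap neighborhood, and all other details are generic GRASP conventions, not the specific procedures proposed in the paper.

```python
import itertools
import random

def diversity(subset, d):
    """Sum of pairwise distances within the chosen subset."""
    return sum(d[i][j] for i, j in itertools.combinations(sorted(subset), 2))

def grasp_mdp(d, m, alpha=0.3, iters=20, seed=0):
    """Pick m of len(d) elements maximizing pairwise diversity (sketch)."""
    rng = random.Random(seed)
    n = len(d)
    best = None
    for _ in range(iters):
        # Phase 1: greedy randomized construction with an RCL.
        sol = {rng.randrange(n)}
        while len(sol) < m:
            gains = sorted(((sum(d[i][j] for j in sol), i)
                            for i in range(n) if i not in sol), reverse=True)
            gmax, gmin = gains[0][0], gains[-1][0]
            rcl = [i for g, i in gains if g >= gmax - alpha * (gmax - gmin)]
            sol.add(rng.choice(rcl))
        # Phase 2: first-improvement swap local search.
        improved = True
        while improved:
            improved = False
            for i in list(sol):
                for j in range(n):
                    if j not in sol:
                        cand = (sol - {i}) | {j}
                        if diversity(cand, d) > diversity(sol, d):
                            sol, improved = cand, True
                            break
                if improved:
                    break
        if best is None or diversity(sol, d) > diversity(best, d):
            best = set(sol)
    return best
```

On four points on a line at positions 0, 1, 2, 10 with m = 2, the local search always climbs to the pair {0, 10}, the most diverse subset.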
Using Compact Tries for Cache-Efficient Sorting of Integers

The increasing gap between memory and processor speeds has made it imperative for algorithms to reduce expensive accesses to main memory. In earlier work, we presented cache-conscious algorithms for sorting strings that have been shown to be almost twice as fast as the previous algorithms, mainly due to better usage of the cache. In this paper, we propose two new algorithms, Burstsort and MEBurstsort, for sorting large sets of integer keys. Our algorithms use a novel approach to sorting integers, dynamically constructing a compact trie which is used to allocate the keys to containers. These keys are then sorted within the cache. The new algorithms are simple, fast, and efficient. We compare them against the best existing algorithms using several collections and data sizes. Our results show that MEBurstsort is up to 3.5 times faster than memory-tuned quicksort for 64-bit keys and up to 2.5 times faster for 32-bit keys. For 32-bit keys, MEBurstsort was the fastest on 10 of the 11 collections used, whereas for 64-bit keys it was the fastest for all collections.

Ranjan Sinha
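The core idea, allocating keys to small containers via their most significant bits so that each container can be sorted entirely within the cache, can be sketched with a one-level bucket distribution. This flattens the dynamic trie the abstract describes into a single fixed level; all details are an illustrative simplification, not the paper's algorithm.

```python
def msb_bucket_sort(keys, bits=32):
    """Distribute unsigned `bits`-bit keys by their most significant byte
    (a one-level stand-in for the compact trie), sort each small bucket,
    then concatenate. Buckets share a common prefix, so bucket order
    equals global order."""
    shift = bits - 8
    buckets = [[] for _ in range(256)]
    for k in keys:
        buckets[k >> shift].append(k)
    out = []
    for bucket in buckets:
        out.extend(sorted(bucket))  # each bucket is small; sorts in cache
    return out
```

In the real algorithm, a container that overflows is "burst" into a deeper trie level, keeping every container cache-resident regardless of the key distribution.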
Using Random Sampling to Build Approximate Tries for Efficient String Sorting

Algorithms for sorting large datasets can be made more efficient with careful use of memory hierarchies and a reduction in the number of costly memory accesses. In earlier work, we introduced burstsort, a new string sorting algorithm that on large sets of strings is almost twice as fast as previous algorithms, primarily because it is more cache-efficient. The approach in burstsort is to dynamically build a small trie that is used to rapidly allocate each string to a bucket. In this paper, we introduce new variants of our algorithm: SR-burstsort, DR-burstsort, and DRL-burstsort. These algorithms use a random sample of the strings to construct an approximation to the trie prior to sorting. Our experimental results with sets of over 30 million strings show that the new variants reduce cache misses by up to 37% beyond the original burstsort, while simultaneously reducing instruction counts by up to 24%. In pathological cases, even further savings can be obtained.

Ranjan Sinha, Justin Zobel
The Datapath Merging Problem in Reconfigurable Systems: Lower Bounds and Heuristic Evaluation

In this paper we investigate the datapath merging problem (DPM) in reconfigurable systems. DPM is $\mathcal{NP}$-hard and is described here in terms of a graph optimization problem. We present an Integer Programming (IP) formulation of DPM and introduce some valid inequalities for the convex hull of integer solutions. These inequalities form the basis of a branch-and-cut algorithm that we implemented. This algorithm was used to compute lower bounds for a set of DPM instances, allowing us to assess the performance of the heuristic proposed by Moreano et al. [1], which is among the best available for the problem. Our computational experiments confirmed the efficiency of Moreano’s heuristic. Moreover, the branch-and-cut algorithm also proved to be a valuable tool for solving small-sized DPM instances to optimality.

Cid C. de Souza, André M. Lima, Nahri Moreano, Guido Araujo
An Analytical Model for Energy Minimization

Energy has emerged as a critical constraint in mobile computing because most of these systems are limited by the battery capacity of the device. In this paper, we focus on memory energy dissipation, motivated by the fact that, for data-intensive applications, a significant amount of energy is dissipated in the memory. Advanced memory architectures like Mobile SDRAM and RDRAM support multiple power states of memory banks, which can be exploited to reduce energy dissipation in the system. It is therefore important to design efficient controller policies that transition among power states. Since the addressed memory chip must be in the active state to perform a read/write operation, the key point is the tradeoff between the energy saved by using low-power modes and the energy overhead of the resulting reactivations. The lack of rigorous models for energy analysis is the main motivation of this work. Assuming regular transitions, we derive a formal model that captures the relation between energy complexity and memory activity. Given a predetermined number of activations, we approximate the optimal repartition among the available power modes. We evaluate our model on the RDRAM and analyze the behavior of each parameter together with the energy that can be saved or lost.

Claude Tadonki, Jose Rolim
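The active/low-power tradeoff described in the abstract has a simple break-even form: entering a low-power mode pays off only when the idle interval t satisfies p_idle·t > p_low·t + E_transition, i.e. t > E_transition / (p_idle − p_low). A sketch with made-up parameter values, not the RDRAM figures analyzed in the paper:

```python
def breakeven_time(p_idle, p_low, e_transition):
    """Smallest idle interval beyond which switching to the low-power mode
    saves energy:  p_idle * t > p_low * t + e_transition
                   =>  t > e_transition / (p_idle - p_low)."""
    assert p_idle > p_low, "the low-power mode must draw less power"
    return e_transition / (p_idle - p_low)

# Illustrative numbers: active-idle draw 10 mW, low-power draw 2 mW,
# reactivation overhead 4 mJ  ->  break-even at 0.5 s of idle time.
t_star = breakeven_time(10.0, 2.0, 4.0)
```

Idle periods shorter than `t_star` lose energy on the transition overhead, which is why the controller policy must predict idle-interval lengths rather than power down unconditionally.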
A Heuristic for Minimum-Width Graph Layering with Consideration of Dummy Nodes

We propose a new graph layering heuristic which can be used for hierarchical graph drawing with minimum width. Our heuristic takes into account the space occupied by both the nodes and the edges of a directed acyclic graph, and constructs layerings which are narrower than those constructed by the known layering algorithms. It can be used as a part of the Sugiyama method for hierarchical graph drawing. We present an extensive parameter study which we performed both for designing our heuristic and for comparing it to other layering algorithms.

Alexandre Tarassov, Nikola S. Nikolov, Jürgen Branke
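For contrast with width-minimizing heuristics, the classical longest-path layering baseline (which minimizes the number of layers, not the width, and ignores edge space entirely) can be sketched as follows; the implementation details are illustrative, and this is not the heuristic proposed in the paper.

```python
def longest_path_layering(nodes, edges):
    """Longest-path layering for a DAG given as a node list and (u, v)
    edge pairs: each node is placed on a layer equal to the length of the
    longest path reaching it, so every edge points from a lower layer to
    a strictly higher one (no dummy-node or width considerations)."""
    preds = {v: [] for v in nodes}
    for u, v in edges:
        preds[v].append(u)
    layer = {}
    def assign(v):
        if v not in layer:
            layer[v] = 1 + max(assign(u) for u in preds[v]) if preds[v] else 0
        return layer[v]
    for v in nodes:
        assign(v)
    return layer
```

Layerings like this tend to be wide at the bottom; width-aware heuristics such as the one in the paper trade extra layers for narrower ones, accounting for the dummy nodes that long edges introduce.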
Backmatter
Metadata
Title
Experimental and Efficient Algorithms
Edited by
Celso C. Ribeiro
Simone L. Martins
Copyright year
2004
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-24838-5
Print ISBN
978-3-540-22067-1
DOI
https://doi.org/10.1007/b97914