
About this book

In the past two decades, breakthroughs in computer technology have made a tremendous impact on optimization. In particular, the availability of parallel computers has created substantial interest in exploring the use of parallel processing for solving discrete and global optimization problems. The chapters in this volume cover a broad spectrum of recent research in parallel processing of discrete and related problems. The topics discussed include distributed branch-and-bound algorithms, parallel genetic algorithms for large-scale discrete problems, simulated annealing, parallel branch-and-bound search under limited-memory constraints, parallelization of greedy randomized adaptive search procedures, parallel optical models of computing, randomized parallel algorithms, general techniques for the design of parallel discrete algorithms, and parallel algorithms for the solution of quadratic assignment and satisfiability problems. The book will be a valuable source of information for faculty, students, and researchers in combinatorial optimization and related areas.

Table of Contents

Frontmatter

Distributed Branch and Bound Algorithms for Global Optimization

Abstract
This paper presents computational results of the parallelized version of the αBB global optimization algorithm. Important algorithmic and implementation issues are discussed, and their impact on the design of parallel branch-and-bound methods is analyzed. These issues include selection of the appropriate architecture, communication patterns, frequency of communication, and termination detection. The approach is demonstrated with a variety of computational studies aimed at revealing the various types of behavior that the distributed branch-and-bound global optimization algorithm can exhibit, including ideal behavior as well as speedup, detrimental, and deceleration anomalies.
Ioannis P. Androulakis, Christodoulos A. Floudas
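
As a concrete, much-simplified illustration of the branching and pruning that a distributed B&B partitions across processors, here is a serial best-first sketch in Python for one-dimensional Lipschitz functions. The Lipschitz lower bound stands in for the convex underestimators of αBB, and the test function and constant are assumptions for illustration, not material from the chapter.

```python
import heapq, math

def branch_and_bound(f, a, b, lipschitz, tol=1e-4):
    """Serial best-first interval branch-and-bound for min f on [a, b],
    using the Lipschitz lower bound
        lb([a, b]) = (f(a) + f(b)) / 2 - lipschitz * (b - a) / 2."""
    best_x, best_val = a, f(a)
    # heap of (lower_bound, left, right); best-first = pop smallest bound
    heap = [((f(a) + f(b)) / 2 - lipschitz * (b - a) / 2, a, b)]
    while heap:
        lb, lo, hi = heapq.heappop(heap)
        if lb >= best_val - tol:          # prune: cannot beat incumbent
            continue
        mid = (lo + hi) / 2
        if f(mid) < best_val:             # update incumbent
            best_x, best_val = mid, f(mid)
        for l, h in ((lo, mid), (mid, hi)):   # branch into two subintervals
            child_lb = (f(l) + f(h)) / 2 - lipschitz * (h - l) / 2
            if child_lb < best_val - tol:
                heapq.heappush(heap, (child_lb, l, h))
    return best_x, best_val

# classic multimodal test function, Lipschitz constant <= 4.34 on [2.7, 7.5];
# known global minimum is about -1.90 near x = 5.15
f = lambda x: math.sin(x) + math.sin(10 * x / 3)
print(branch_and_bound(f, 2.7, 7.5, lipschitz=4.5))
```

A distributed version would spread the heap of open subintervals across processors, which is exactly where the architecture, communication-frequency, and termination-detection questions raised in the abstract arise.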

Large-Scale Structured Discrete Optimization via Parallel Genetic Algorithms

Abstract
We consider the application of parallel “island” genetic algorithms (GAs) to the solution of large-scale discrete optimization problems with block structure. Optimization problems arising in a variety of important applications have block-structured objective functions and constraint sets, i.e., large subsets of variables and constraints that may be naturally clustered because of spatial or temporal contexts. For continuous optimization problems, many iterative decomposition techniques (such as the Dantzig-Wolfe method) have been developed that take advantage of this block structure by alternating between the solution of subproblems that deal with separate blocks in a decentralized manner (allowing the exploitation of parallel and distributed computation) and coordination phases in which subproblem solutions are combined so as to obtain good solutions of the original problem. However, these approaches are not readily extended to discrete optimization because traditional coordination mechanisms generally do not preserve the discrete nature of the variables. This paper describes the use of GAs as a parallel coordination technique for discrete optimization. To illustrate this approach, we consider a class of structured discrete geometric problems motivated by applications in databases and the solution of partial differential equations. The most regular problem of this class is the determination of a minimum-perimeter partition of an M×N grid into P equal-area subdomains, where P, M, and N are the input parameters and perimeter is defined as the total perimeter of the P subdomains. (Regarding the perimeter of a subdomain as energy, this can be thought of as a minimum-energy equipartition.) For problems of this type involving millions of variables, high-level genetic algorithms that take advantage of block structure for genotype-phenotype differentiation provide a successful approach for the parallel coordination of subproblem solutions. Extensions to more general classes of problems are also presented.
Ioannis T. Christou, W. W. Donaldson, R. R. Meyer
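
To make the island model concrete, the sketch below evolves subpopulations independently and lets them exchange their best individuals in a ring every few generations. All parameters and the OneMax stand-in objective are assumptions for illustration; the chapter's block-structured encodings and coordination are considerably more sophisticated.

```python
import random

def island_ga(fitness, n_bits=40, islands=4, pop=30,
              generations=200, migrate_every=20, seed=0):
    """Toy island-model GA on bitstrings: islands evolve independently
    (the parallelizable part) and migrate their best individual to the
    next island in a ring every `migrate_every` generations."""
    rng = random.Random(seed)
    demes = [[[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop)]
             for _ in range(islands)]
    for gen in range(generations):
        for deme in demes:
            deme.sort(key=fitness, reverse=True)
            elite = deme[: pop // 2]                 # truncation selection
            children = []
            while len(elite) + len(children) < pop:
                p1, p2 = rng.sample(elite, 2)
                cut = rng.randrange(1, n_bits)       # one-point crossover
                child = p1[:cut] + p2[cut:]
                child[rng.randrange(n_bits)] ^= 1    # point mutation
                children.append(child)
            deme[:] = elite + children
        if gen % migrate_every == 0:                 # ring migration
            bests = [max(d, key=fitness) for d in demes]
            for i, d in enumerate(demes):
                d[d.index(min(d, key=fitness))] = bests[i - 1][:]
    return max((max(d, key=fitness) for d in demes), key=fitness)

best = island_ga(fitness=sum)    # OneMax: fitness = number of ones
print(sum(best), "ones out of 40")
```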

Pushing the Limits of Solvable QAP Problems Using Parallel Processing - Is Nugent30 within Reach?

Abstract
In 1991, solving quadratic assignment problems (QAPs) of dimension exceeding 15 was considered a computational challenge. Nevertheless, the solution of the classical Nugent20 benchmark of size 20 was announced three years later, and recently instances of size 25 have been solved. A key feature in this development has been the use of parallelism in Branch-and-Bound algorithms.
This article discusses the role of parallel processing in problem solving in general and in the solution of QAP instances in particular. The recent development in the solution of large QAP instances is reviewed, and based on descriptions of available bounding methods and their properties, the choice of bound function for a parallel Branch-and-Bound algorithm for the QAP is discussed. Finally, we discuss the possibilities of pushing the limit of solvable QAP instances by combining new bound-calculation methods with high-performance parallel computing.
Jens Clausen
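
For orientation, the sketch below evaluates the QAP objective and computes the classical Gilmore-Lawler lower bound, one of the standard bound functions weighed in this literature. The random instance is an assumption, and the code is illustrative only, not the parallel solver discussed in the chapter.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def qap_cost(F, D, perm):
    """QAP objective: sum over i, j of F[i, j] * D[perm[i], perm[j]]."""
    return (F * D[np.ix_(perm, perm)]).sum()

def gilmore_lawler_bound(F, D):
    """Gilmore-Lawler lower bound: for each facility/location pair take
    the minimal scalar product of their off-diagonal rows (ascending
    times descending order), then solve the resulting linear assignment
    problem over those pair costs."""
    n = len(F)
    L = np.empty((n, n))
    for i in range(n):
        fi = np.sort(np.delete(F[i], i))             # ascending
        for j in range(n):
            dj = np.sort(np.delete(D[j], j))[::-1]   # descending
            L[i, j] = F[i, i] * D[j, j] + fi @ dj
    rows, cols = linear_sum_assignment(L)             # optimal pairing
    return L[rows, cols].sum()

rng = np.random.default_rng(0)
F = rng.integers(0, 10, (6, 6)); F = (F + F.T) // 2   # symmetric flows
D = rng.integers(0, 10, (6, 6)); D = (D + D.T) // 2   # symmetric distances
print("lower bound:", gilmore_lawler_bound(F, D),
      "| cost of a random layout:", qap_cost(F, D, rng.permutation(6)))
```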

On the Design of Parallel Discrete Algorithms for High Performance Computing Systems

Abstract
The use of high performance computing systems is steadily increasing. For instance, systems composed of PC clusters interconnected by high-performance local networks are one of the major trends in parallel/distributed computing nowadays, since these clusters yield parallel systems more powerful than some supercomputers at a fraction of the price. Although a great deal of effort has been devoted to system-level and programming-environment issues for such systems, little attention has been paid to computing issues. In this paper we discuss theoretical (BSP-like) models and their adaptation to solve important classes of problems on high performance computing systems. As an example, we show a coarse-grained parallel algorithm to solve the maximum weighted clique problem in interval graphs. It is theoretically optimal, since it requires only a constant number of communication rounds, and is also efficient in practice, as demonstrated by reported experimental results obtained on a Myrinet-connected PC cluster. Notably, even superlinear speedups can occur for large data sets because of swapping effects. We conclude that these models are well adapted to the task of algorithm design for high performance computing systems.
Afonso Ferreira
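
As background for the example problem, the sequential version of maximum weighted clique in an interval graph is a simple line sweep, sketched below: a clique corresponds to a set of intervals sharing a common point. The coarse-grained parallel algorithm of the chapter distributes this work so that only a constant number of communication rounds is needed; the toy instance here is an assumption.

```python
def max_weighted_clique(intervals):
    """Maximum-weight clique in an interval graph via a line sweep.
    intervals: list of (start, end, weight) with start <= end; a clique
    is a set of intervals covering a common point, so we track the
    running weight of intervals covering the sweep line."""
    events = []
    for s, e, w in intervals:
        events.append((s, 0, w))     # interval opens: add its weight
        events.append((e, 1, -w))    # interval closes: remove its weight
    # at equal coordinates, opens (0) sort before closes (1), so closed
    # intervals that merely touch still count as overlapping
    events.sort()
    best = cur = 0
    for _, _, dw in events:
        cur += dw
        best = max(best, cur)
    return best

# the first three intervals all overlap around x = 2..3: weight 9
print(max_weighted_clique([(0, 5, 2), (1, 6, 3), (2, 3, 4), (7, 9, 5)]))
```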

Parallel Algorithms for Satisfiability (SAT) Testing

Abstract
The satisfiability (SAT) problem is a central problem in mathematical logic, computing theory, and artificial intelligence. In practice, the SAT problem is fundamental to solving many practical application problems, and methods to solve it play an important role in the development of computing theory and intelligent computing systems. There has been great interest in the design of efficient algorithms to solve the SAT problem, and over the past decade a variety of parallel processing techniques have been developed for it. The goal of this paper is to survey this rapidly growing field, to discuss tradeoffs and limitations, and to describe state-of-the-art techniques, both algorithms and special-purpose architectures, for parallel processing of the satisfiability problem.
Jun Gu
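
For readers new to SAT solving, the minimal DPLL solver below shows the sequential backtracking search that parallel approaches typically split at the branching step, giving different processors different partial assignments. It is a generic textbook sketch, not one of the chapter's algorithms or architectures.

```python
def dpll(clauses):
    """Minimal DPLL SAT solver. `clauses` is a list of lists of nonzero
    ints (DIMACS-style literals: k means x_k, -k means not x_k).
    Returns a set of true literals, or None if unsatisfiable."""
    def simplify(cls, lit):
        """Assign `lit` true: drop satisfied clauses, shrink the rest;
        None signals a conflict (an empty clause)."""
        out = []
        for c in cls:
            if lit in c:
                continue
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None
            out.append(reduced)
        return out

    def solve(cls, model):
        while True:                          # unit propagation
            units = [c[0] for c in cls if len(c) == 1]
            if not units:
                break
            cls = simplify(cls, units[0])
            if cls is None:
                return None
            model = model | {units[0]}
        if not cls:                          # no clauses left: satisfied
            return model
        lit = cls[0][0]                      # branch on first literal
        for choice in (lit, -lit):
            nxt = simplify(cls, choice)
            if nxt is not None:
                result = solve(nxt, model | {choice})
                if result is not None:
                    return result
        return None

    return solve(clauses, set())

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
print(dpll([[1, 2], [-1, 3], [-2, -3]]))    # e.g. {1, 3, -2}
```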

Sequential and Parallel Branch-and-Bound Search under Limited-Memory Constraints

Abstract
Branch-and-bound (B&B) best-first search (BFS) is a widely applicable method that requires the least number of node expansions to obtain optimal solutions to combinatorial optimization problems (COPs). However, for many problems of interest, its memory requirements are enormous and can far exceed the available memory capacity on most systems. To circumvent this problem, a number of limited-memory search methods have been proposed that are either based purely on depth-first search (DFS) or combine BFS with DFS. We survey and compare previous sequential and parallel limited-memory search methods and discuss their suitability for solving different types of COPs. We also propose a new limited-memory search method, iterative extrapolated-cost bounded search (IES*), that performs a sequence of cost-bounded depth-first searches from the root node of the search space. In this method, cost bounds for successive iterations are set to an estimated optimal-solution cost obtained by extrapolating from search experience in previous iterations. We provide fast and accurate approximate methods suitable for extrapolating the optimal-solution cost for lower-bound cost functions with a range of growth rates. Finally, we propose an efficient approach to parallelizing IES* that is applicable, with minor modifications, to other iterative cost-bounded DFS methods like IDA* and DFS*. An important feature of this approach is the asynchronous execution of the different iterations of IES* by processors to minimize idling. We provide a method for determining cost bounds independently in different processors; these cost bounds vary from processor to processor and decrease from an initial larger value to the true cost bound for the iteration. Further, to minimize unnecessary node expansions that can occur because of the asynchronous operation and the initial loose upper bounds, we propose an efficient load-balancing technique that distributes work of earlier iterations with higher priority among processors. As a result, different processors are likely to execute IES* iterations that are as close to each other as possible, and the individual cost bounds for the same IES* iteration in different processors approach the true cost bound rapidly. This decreases the possibility of unnecessary work in parallel IES*, thus leading to an efficient parallelization.
Nihar R. Mahapatra, Shantanu Dutt
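
The cost-bounded DFS iterations underlying IES* are easiest to see in the classical IDA* scheme, sketched below. Here the next bound is simply the smallest f-value that exceeded the current one, whereas IES* instead extrapolates the next bound from the experience of earlier iterations; the toy graph and zero heuristic are assumptions for illustration.

```python
import math

def ida_star(start, is_goal, successors, h):
    """Iterative cost-bounded DFS (IDA*): each pass is a depth-first
    search pruned where f = g + h exceeds the bound, and each failed
    pass reports the smallest exceeded f-value as the next bound."""
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return None, f                  # pruned: report exceeded f
        if is_goal(node):
            return path, f
        next_bound = math.inf
        for child, cost in successors(node):
            if child in path:               # avoid cycles on this path
                continue
            found, t = dfs(child, g + cost, bound, path + [child])
            if found is not None:
                return found, t
            next_bound = min(next_bound, t)
        return None, next_bound

    bound = h(start)
    while True:
        found, t = dfs(start, 0.0, bound, [start])
        if found is not None:
            return found
        if t == math.inf:
            return None                     # search space exhausted
        bound = t                           # tighten bound, iterate

# toy weighted digraph; zero heuristic makes this uniform-cost search
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)],
         "C": [("D", 2)], "D": []}
print(ida_star("A", lambda n: n == "D", lambda n: graph[n], lambda n: 0.0))
```

The parallel IES* described above runs different iterations of this outer loop asynchronously on different processors, with load balancing that favors work from earlier iterations.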

A Parallel GRASP for the Data Association Multidimensional Assignment Problem

Abstract
Data association multidimensional assignment problems appear in many applications, such as multitarget multisensor tracking and particle tracking. The problem is characterized by very large input data and is difficult to solve exactly. A Greedy Randomized Adaptive Search Procedure (GRASP) has been developed, and computational results show that good-quality solutions can be obtained. Furthermore, the efficiency of the GRASP can be easily improved by parallelizing the code in an MPI environment.
R. A. Murphey, P. M. Pardalos, L. Pitsoulis
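
A GRASP iteration pairs a greedy-randomized construction (a restricted candidate list controlled by a parameter alpha) with a local search, and its independent iterations are what make the method embarrassingly parallel under MPI. The sketch below applies that generic skeleton to a toy Euclidean TSP; the problem, parameters, and 2-opt local search are stand-in assumptions, not the chapter's data association code.

```python
import random, math

def grasp_tsp(points, iters=50, alpha=0.3, seed=0):
    """Generic GRASP skeleton on a toy Euclidean TSP: greedy-randomized
    nearest-neighbor construction, then 2-opt local search; keep the
    best tour over all (independent, parallelizable) iterations."""
    rng = random.Random(seed)
    d = lambda a, b: math.dist(points[a], points[b])

    def tour_len(t):
        return sum(d(t[i], t[(i + 1) % len(t)]) for i in range(len(t)))

    def construct():
        unvisited = set(range(len(points)))
        tour = [unvisited.pop()]
        while unvisited:
            dists = {v: d(tour[-1], v) for v in unvisited}
            lo, hi = min(dists.values()), max(dists.values())
            # restricted candidate list: near-greedy choices only
            rcl = [v for v, dv in dists.items() if dv <= lo + alpha * (hi - lo)]
            v = rng.choice(rcl)
            unvisited.remove(v)
            tour.append(v)
        return tour

    def two_opt(t):
        improved = True
        while improved:                      # reverse segments while it helps
            improved = False
            for i in range(1, len(t) - 1):
                for j in range(i + 2, len(t)):
                    cand = t[:i] + t[i:j][::-1] + t[j:]
                    if tour_len(cand) < tour_len(t):
                        t, improved = cand, True
        return t

    best = min((two_opt(construct()) for _ in range(iters)), key=tour_len)
    return best, tour_len(best)

pts = [(random.random(), random.random()) for _ in range(15)]
print("best tour length:", grasp_tsp(pts)[1])
```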

Basic Algorithms on Parallel Optical Models of Computing

Abstract
In this paper we identify some of the optical models of computing that have been proposed and survey algorithms for fundamental problems. Problems considered include prefix computation, packet routing, selection, sorting, and matrix operations.
Sanguthevar Rajasekaran
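
As one example among the fundamental problems listed, the standard doubling scheme for prefix computation is sketched below with its parallel rounds simulated sequentially; it illustrates only the operation pattern (log n rounds, one processor per element), not any optical realization.

```python
def prefix_scan(xs, op=lambda a, b: a + b):
    """Inclusive prefix computation by recursive doubling
    (Hillis-Steele): ceil(log2(n)) rounds, each of which could execute
    with one processor per element on a parallel model."""
    n = len(xs)
    cur = list(xs)
    step = 1
    while step < n:
        # one "parallel round": every position i >= step combines with
        # the value `step` places to its left (read from the old array)
        cur = [cur[i] if i < step else op(cur[i - step], cur[i])
               for i in range(n)]
        step *= 2
    return cur

print(prefix_scan([3, 1, 4, 1, 5, 9, 2, 6]))   # [3, 4, 8, 9, 14, 23, 25, 31]
```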

Randomized Parallel Algorithms

Abstract
In this paper we present some important randomized techniques for the parallel processing of discrete problems. In particular, we describe a few parallel randomized algorithms frequently used for shortest-path, matching, depth-first search, and maximum independent set problems. We also discuss the connection between randomization and approximation, showing how randomization yields approximate solutions, and we illustrate this connection by means of network flow problems.
Jose D. P. Rolim
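
A classic instance of such techniques is Luby-style randomized maximal independent set, sketched below with its parallel rounds simulated sequentially: in each round every surviving vertex draws a random value, local minima join the independent set, and they and their neighbors drop out, finishing in O(log n) rounds in expectation. The test graph is an assumption.

```python
import random

def luby_mis(adj, seed=0):
    """Luby-style randomized maximal independent set (rounds simulated
    sequentially). adj: dict mapping each vertex to a set of neighbors."""
    rng = random.Random(seed)
    alive = set(adj)
    mis = set()
    while alive:
        # one "parallel round": all draws and tests are independent
        r = {v: rng.random() for v in alive}
        winners = {v for v in alive
                   if all(r[v] < r[u] for u in adj[v] if u in alive)}
        mis |= winners                       # local minima join the MIS
        alive -= winners | {u for v in winners for u in adj[v]}
    return mis

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # 4-cycle
print(luby_mis(square))                      # either {0, 2} or {1, 3}
```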

Finite Behavior of Simulated Annealing: A Probabilistic Study

Abstract
Simulated annealing (SA) is a simple global optimization method that has been used extensively in various applications, ranging from the study of molecular clusters to combinatorial optimization problems in VLSI CAD. Although it has been proved that SA will find the global optimal solution with high probability in the limit [15], very little is known about the finite behavior of this method [6, 8]. In this paper, we present a probabilistic study of the finite behavior of SA. In particular, we define the concept of visiting probability and present a simple algorithm for computing it. Computational results on randomly generated test problems are presented.
Guoliang Xue
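
For reference, a plain SA loop with Metropolis acceptance and geometric cooling is sketched below; the toy objective, neighborhood, and schedule are assumptions for illustration. The chapter's contribution is a probabilistic analysis of the finite behavior of such a loop (via visiting probabilities), not the loop itself.

```python
import math, random

def simulated_annealing(f, x0, neighbor, T0=1.0, cooling=0.99,
                        steps=5000, seed=0):
    """Minimize f by plain simulated annealing: accept downhill moves
    always, uphill moves with probability exp(-delta / T), and cool T
    geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy                    # move accepted
            if fx < fbest:
                best, fbest = x, fx          # track incumbent
        T *= cooling                         # geometric cooling schedule
    return best, fbest

# toy 1-D multimodal objective; global minimum is about -9.7 near x = -0.5
f = lambda x: x * x + 10 * math.sin(3 * x)
step = lambda x, rng: x + rng.gauss(0, 0.5)
print(simulated_annealing(f, x0=4.0, neighbor=step))
```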

Backmatter
