
2003 | Book

Handbook of Metaheuristics

Edited by: Fred Glover, Gary A. Kochenberger

Publisher: Springer US

Book series: International Series in Operations Research & Management Science


About this book

Metaheuristics, in their original definition, are solution methods that orchestrate an interaction between local improvement procedures and higher level strategies to create a process capable of escaping from local optima and performing a robust search of a solution space. Over time, these methods have also come to include any procedures that employ strategies for overcoming the trap of local optimality in complex solution spaces, especially those procedures that utilize one or more neighborhood structures as a means of defining admissible moves to transition from one solution to another, or to build or destroy solutions in constructive and destructive processes. The degree to which neighborhoods are exploited varies according to the type of procedure. In the case of certain population-based procedures, such as genetic algorithms, neighborhoods are implicitly (and somewhat restrictively) defined by reference to replacing components of one solution with those of another, by variously chosen rules of exchange popularly given the name of “crossover.” In other population-based methods, based on the notion of path relinking, neighborhood structures are used in their full generality, including constructive and destructive neighborhoods as well as those for transitioning between (complete) solutions. Certain hybrids of classical evolutionary approaches, which link them with local search, also use neighborhood structures more fully, though apart from the combination process itself.
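To make the neighborhood idea above concrete, here is a minimal Python sketch of one possible neighborhood structure, a swap neighborhood over permutation-encoded solutions; the encoding and move set are illustrative choices for this sketch, not something prescribed by the handbook.

import itertools

def swap_neighborhood(perm):
    """Generate every solution reachable from perm by swapping two positions.

    This is one concrete neighborhood structure: it defines the admissible
    moves for transitioning from one (complete) solution to another.
    """
    for i, j in itertools.combinations(range(len(perm)), 2):
        neighbor = list(perm)
        neighbor[i], neighbor[j] = neighbor[j], neighbor[i]
        yield tuple(neighbor)

# The permutation (0, 1, 2) has three swap neighbors.
print(list(swap_neighborhood((0, 1, 2))))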

Table of contents

Frontmatter
Chapter 1. Scatter Search and Path Relinking: Advances and Applications
Abstract
Scatter search (SS) is a population-based method that has recently been shown to yield promising outcomes for solving combinatorial and nonlinear optimization problems. Based on formulations originally proposed in the 1960s for combining decision rules and problem constraints, SS uses strategies for combining solution vectors that have proved effective in a variety of problem settings. Path relinking (PR) has been suggested as an approach to integrate intensification and diversification strategies in a search scheme. The approach may be viewed as an extreme (highly focused) instance of a strategy that seeks to incorporate attributes of high quality solutions, by creating inducements to favor these attributes in the moves selected. The goal of this paper is to examine SS and PR strategies that provide useful alternatives to more established search methods. We describe the features of SS and PR that set them apart from other evolutionary approaches, and that offer opportunities for creating increasingly more versatile and effective methods in the future. Specific applications are summarized to provide a clearer understanding of settings where the methods are being used.
Fred Glover, Manuel Laguna, Rafael Martí
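As a rough illustration of the path relinking idea sketched in the abstract, the following Python fragment walks from an initiating solution toward a guiding solution over binary vectors, greedily incorporating one attribute of the guiding solution at a time; the encoding, the greedy step rule, and the toy objective are assumptions made only for this sketch, not the authors' implementation.

def path_relink(initiating, guiding, objective):
    """Walk from `initiating` toward `guiding`, one attribute at a time.

    At every step, among the positions where the current solution still
    differs from the guiding solution, flip the one that yields the best
    objective value, so attributes of the high-quality guiding solution
    are progressively incorporated (minimization).
    """
    current = list(initiating)
    best, best_val = tuple(current), objective(current)
    while True:
        diff = [i for i, (a, b) in enumerate(zip(current, guiding)) if a != b]
        if not diff:
            break
        # Greedily choose the attribute whose incorporation gives the best value.
        i = min(diff, key=lambda k: objective(current[:k] + [guiding[k]] + current[k + 1:]))
        current[i] = guiding[i]
        val = objective(current)
        if val < best_val:
            best, best_val = tuple(current), val
    return best, best_val

# Toy usage: minimize the Hamming distance to a hidden target vector.
target = [1, 0, 1, 1, 0]
obj = lambda x: sum(a != b for a, b in zip(x, target))
print(path_relink([0, 0, 0, 0, 0], [1, 1, 1, 1, 1], obj))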
Chapter 2. An Introduction to Tabu Search
Abstract
This chapter presents the fundamental concepts of Tabu Search (TS) in a tutorial fashion. Special emphasis is put on showing the relationships with classical Local Search methods and on the basic elements of any TS heuristic, namely, the definition of the search space, the neighborhood structure, and the search memory. Further sections cover other important concepts such as search intensification and diversification, and provide references to significant work on TS. Recent advances in TS are also briefly discussed.
Michel Gendreau
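The basic elements named in the abstract (search space, neighborhood structure, and short-term memory) can be arranged into a skeleton like the following Python sketch; the move encoding, the fixed tabu tenure, and the simple aspiration rule are illustrative assumptions, not the chapter's formulation.

from collections import deque

def tabu_search(initial, neighbors, objective, tabu_tenure=7, max_iters=200):
    """Skeleton of a tabu search minimization loop.

    `neighbors(s)` yields (move, solution) pairs.  The short-term memory is a
    queue of recently applied moves that are temporarily forbidden, except
    when a move leads to a new overall best (a simple aspiration criterion).
    """
    current, best = initial, initial
    best_val = objective(initial)
    tabu = deque(maxlen=tabu_tenure)
    for _ in range(max_iters):
        candidates = [
            (move, sol) for move, sol in neighbors(current)
            if move not in tabu or objective(sol) < best_val
        ]
        if not candidates:
            break
        move, current = min(candidates, key=lambda ms: objective(ms[1]))
        tabu.append(move)
        if objective(current) < best_val:
            best, best_val = current, objective(current)
    return best, best_val

# Toy usage: minimize x^2 over integers, with unit steps as the neighborhood.
nbrs = lambda x: [(("step", d), x + d) for d in (-1, +1)]
print(tabu_search(10, nbrs, lambda x: x * x))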
Chapter 3. Genetic Algorithms
Colin Reeves
Chapter 4. Genetic Programming: Automatic Synthesis of Topologies and Numerical Parameters
7 Conclusions
This chapter has demonstrated that a biologically motivated algorithm (genetic programming) is capable of automatically synthesizing both the topology of complex graphical structures and optimal or near-optimal numerical values for all elements of the structure possessing parameters.
John R. Koza
Chapter 5. A Gentle Introduction to Memetic Algorithms
Pablo Moscato, Carlos Cotta
Chapter 6. Variable Neighborhood Search
Abstract
Variable neighborhood search (VNS) is a recent metaheuristic for solving combinatorial and global optimization problems whose basic idea is systematic change of neighborhood within a local search. In this survey paper we present basic rules of VNS and some of its extensions. Moreover, applications are briefly summarized. They comprise heuristic solution of a variety of optimization problems, ways to accelerate exact algorithms and to analyze heuristic solution processes, as well as computer-assisted discovery of conjectures in graph theory.
Pierre Hansen, Nenad Mladenović
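A minimal sketch of the basic VNS scheme described above, assuming user-supplied shaking and local search routines; the parameter names and the simple iteration budget are illustrative only.

def basic_vns(initial, shake, local_search, objective, k_max=3, max_iters=100):
    """Basic variable neighborhood search (minimization) skeleton.

    `shake(s, k)` draws a random solution from the k-th neighborhood of s;
    `local_search(s)` returns a local optimum reachable from s.  The
    neighborhood index k is reset to 1 after every improvement and
    increased otherwise, systematically changing the neighborhood.
    """
    best = local_search(initial)
    best_val = objective(best)
    for _ in range(max_iters):
        k = 1
        while k <= k_max:
            candidate = local_search(shake(best, k))
            val = objective(candidate)
            if val < best_val:
                best, best_val, k = candidate, val, 1   # move, restart neighborhoods
            else:
                k += 1                                   # try the next neighborhood
    return best, best_val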
Chapter 7. Guided Local Search
Abstract
The combinatorial explosion problem prevents complete algorithms from solving many real-life combinatorial optimization problems. In many situations, heuristic search methods are needed. This chapter describes the principles of Guided Local Search (GLS) and Fast Local Search (FLS) and surveys their applications. GLS is a penalty-based meta-heuristic algorithm that sits on top of other local search algorithms, with the aim of improving their efficiency and robustness. FLS is a way of reducing the size of the neighborhood so as to improve the efficiency of local search. The chapter also provides guidance for implementing and using GLS and FLS. Four problems, representative of general application categories, are examined with detailed information provided on how to build a GLS-based method in each case.
Christos Voudouris, Edward P. K. Tsang
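The penalty mechanism described in the abstract might be outlined roughly as follows in Python; the feature/utility interface, the single regularization parameter lam, and the fixed number of rounds are assumptions of this sketch, not the chapter's formulation.

def guided_local_search(initial, features, feature_cost, local_search, objective,
                        lam=0.5, rounds=20):
    """Rough outline of GLS: local search on a penalty-augmented objective.

    `features(s)` returns the set of solution features present in s and
    `feature_cost(f)` their costs.  After each local optimum, the features of
    maximum utility cost / (1 + penalty) are penalized, which deforms the
    augmented objective and lets the underlying local search escape.
    """
    penalties = {}

    def augmented(s):
        return objective(s) + lam * sum(penalties.get(f, 0) for f in features(s))

    current, best, best_val = initial, initial, objective(initial)
    for _ in range(rounds):
        current = local_search(current, augmented)
        if objective(current) < best_val:
            best, best_val = current, objective(current)
        utils = {f: feature_cost(f) / (1 + penalties.get(f, 0)) for f in features(current)}
        if not utils:
            continue
        top = max(utils.values())
        for f, u in utils.items():
            if u == top:
                penalties[f] = penalties.get(f, 0) + 1
    return best, best_val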
Chapter 8. Greedy Randomized Adaptive Search Procedures
Abstract
GRASP is a multi-start metaheuristic for combinatorial problems, in which each iteration consists basically of two phases: construction and local search. The construction phase builds a feasible solution, whose neighborhood is investigated until a local minimum is found during the local search phase. The best overall solution is kept as the result. In this chapter, we first describe the basic components of GRASP. Successful implementation techniques and parameter tuning strategies are discussed and illustrated by numerical results obtained for different applications. Enhanced or alternative solution construction mechanisms and techniques to speed up the search are also described: Reactive GRASP, cost perturbations, bias functions, memory and learning, local search on partially constructed solutions, hashing, and filtering. We also discuss in detail implementation strategies of memory-based intensification and post-optimization techniques using path-relinking. Hybridizations with other metaheuristics, parallelization strategies, and applications are also reviewed.
Mauricio G. C. Resende, Celso C. Ribeiro
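The two-phase iteration described above (greedy randomized construction followed by local search) can be sketched as below; the value-based restricted candidate list (RCL) rule with parameter alpha and the function interfaces are assumptions of this sketch rather than the chapter's code.

import random

def greedy_randomized_construction(candidates, cost, alpha, rng):
    """Phase 1: build a solution element by element from a restricted candidate list.

    At each step the RCL holds the candidates whose greedy cost is within
    alpha * (c_max - c_min) of the best one; the next element is drawn from
    the RCL uniformly at random.
    """
    solution, remaining = [], list(candidates)
    while remaining:
        costs = {c: cost(c, solution) for c in remaining}
        c_min, c_max = min(costs.values()), max(costs.values())
        rcl = [c for c in remaining if costs[c] <= c_min + alpha * (c_max - c_min)]
        chosen = rng.choice(rcl)
        solution.append(chosen)
        remaining.remove(chosen)
    return solution

def grasp(construct, local_search, objective, iterations=50, seed=None):
    """Phases 1 and 2, repeated: keep the best locally optimal solution found."""
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(iterations):
        solution = local_search(construct(rng))
        val = objective(solution)
        if val < best_val:
            best, best_val = solution, val
    return best, best_val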
Chapter 9. The Ant Colony Optimization Metaheuristic: Algorithms, Applications, and Advances
Conclusions
The field of ACO algorithms is very lively, as testified, for example, by the successful biannual workshop (ANTS—From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms; http://iridia.ulb.ac.be/~ants/) where researchers meet to discuss the properties of ACO and other ant algorithms, both theoretically and experimentally.
From the theory side, the most interesting ongoing work concerns the study of the relationship between ACO algorithms and other well-known stochastic optimization techniques. For example, it has been shown [68] that, when interpreting ACO algorithms as methods for searching in the space of pheromones (i.e., artificial ants are seen as procedures to update pheromone trails so as to increase the probability of generating very good solutions to the combinatorial optimization problem), AS can be interpreted as an approximate form of stochastic gradient descent [92] (a well-known algorithm which has been extensively used in machine learning) in the space of pheromones. Similarly, it has been shown [41,94] that there are strong connections between ACO algorithms and the Cross-Entropy method [77].
From the experimental side, most of the current research is in the direction of increasing the number of problems that are successfully solved by ACO algorithms, including real-world, industrial applications [39].
Currently, the great majority of problems attacked by ACO are static and well-defined combinatorial optimization problems, that is, problems for which all the necessary information is available and does not change during problem solution. For this kind of problem, ACO algorithms must compete with very well established algorithms, often specialized for the given problem. Also, the role played by local search is very often extremely important for obtaining good results (see for example [46]). Although rather successful on these problems, we believe that ACO algorithms will really show their greatest advantage when they are systematically applied to “ill-structured” problems for which it is not clear how to apply local search, or to highly dynamic domains with only local information available. A first step in this direction has already been made with the application to telecommunications network routing, but much further research will be necessary. The reader interested in learning more about ACO is referred to the book “Ant Colony Optimization” by the same authors [40].
Marco Dorigo, Thomas Stützle
Chapter 10. The Theory and Practice of Simulated Annealing
Abstract
Simulated annealing is a popular local search meta-heuristic used to address discrete and, to a lesser extent, continuous optimization problems. The key feature of simulated annealing is that it provides a means to escape local optima by allowing hill-climbing moves (i.e., moves which worsen the objective function value) in hopes of finding a global optimum. A brief history of simulated annealing is presented, including a review of its application to discrete and continuous optimization problems. Convergence theory for simulated annealing is reviewed, as well as recent advances in the analysis of finite time performance. Other local search algorithms are discussed in terms of their relationship to simulated annealing. The chapter also presents practical guidelines for the implementation of simulated annealing in terms of cooling schedules, neighborhood functions, and appropriate applications.
Darrall Henderson, Sheldon H. Jacobson, Alan W. Johnson
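A compact Python sketch of the acceptance rule and a geometric cooling schedule mentioned in the abstract; the particular schedule, temperatures, and step counts are illustrative defaults, not recommendations from the chapter.

import math
import random

def simulated_annealing(initial, neighbor, objective, t0=10.0, cooling=0.95,
                        steps_per_temp=100, t_min=1e-3, seed=None):
    """Simulated annealing (minimization) with a geometric cooling schedule.

    Improving moves are always accepted; worsening ("hill-climbing") moves
    are accepted with probability exp(-delta / T), which shrinks as the
    temperature T is lowered, allowing escapes from local optima early on.
    """
    rng = random.Random(seed)
    current, current_val = initial, objective(initial)
    best, best_val = current, current_val
    t = t0
    while t > t_min:
        for _ in range(steps_per_temp):
            candidate = neighbor(current, rng)
            delta = objective(candidate) - current_val
            if delta <= 0 or rng.random() < math.exp(-delta / t):
                current, current_val = candidate, current_val + delta
                if current_val < best_val:
                    best, best_val = current, current_val
        t *= cooling  # geometric cooling schedule
    return best, best_val

# Toy usage: minimize x^2 over integers with a +/-1 random-step neighborhood.
print(simulated_annealing(50, lambda x, rng: x + rng.choice((-1, 1)),
                          lambda x: x * x, seed=1))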
Chapter 11. Iterated Local Search
Helena R. Lourenço, Olivier C. Martin, Thomas Stützle
Chapter 12. Multi-Start Methods
Abstract
Heuristic search procedures that aspire to find globally optimal solutions to hard combinatorial optimization problems usually require some type of diversification to overcome local optimality. One way to achieve diversification is to re-start the procedure from a new solution once a region has been explored. In this chapter we describe the best-known multi-start methods for solving optimization problems. We propose classifying these methods in terms of their use of randomization, memory, and degree of rebuild. We also present a computational comparison of these methods on the linear ordering problem in terms of solution quality and diversification power.
Rafael Martí
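A bare-bones multi-start loop matching the description above; how generate(rng) uses randomization, memory, or partial rebuilding is deliberately left abstract here, since those are exactly the dimensions the chapter uses to classify the methods.

import random

def multi_start(generate, local_search, objective, starts=25, seed=None):
    """Multi-start skeleton: repeatedly restart local search from new solutions.

    `generate(rng)` produces a fresh starting solution; the best local
    optimum found over all starts is kept.
    """
    rng = random.Random(seed)
    best, best_val = None, float("inf")
    for _ in range(starts):
        candidate = local_search(generate(rng))
        val = objective(candidate)
        if val < best_val:
            best, best_val = candidate, val
    return best, best_val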
Chapter 13. Local Search and Constraint Programming
Filippo Focacci, François Laburthe, Andrea Lodi
Chapter 14. Constraint Satisfaction
Abstract
Many problems can be formulated in terms of satisfying a set of constraints. This chapter focuses on methods for modeling and solving such problems used in artificial intelligence and implemented in constraint programming languages.
Eugene C. Freuder, Mark Wallace
Chapter 15. Artificial Neural Networks for Combinatorial Optimization
Jean-Yves Potvin, Kate A. Smith
Chapter 16. Hyper-Heuristics: An Emerging Direction in Modern Search Technology
Abstract
This chapter introduces and overviews an emerging methodology in search and optimisation. One of the key aims of these new approaches, which have been termed hyper-heuristics, is to raise the level of generality at which optimisation systems can operate. An objective is that hyper-heuristics will lead to more general systems that are able to handle a wide range of problem domains, rather than being customised to a particular problem or a narrow class of problems, as current meta-heuristic technology tends to be. Hyper-heuristics are broadly concerned with intelligently choosing the right heuristic or algorithm in a given situation. Of course, a hyper-heuristic can be (and often is) a (meta-)heuristic, and it can operate on (meta-)heuristics. In a certain sense, a hyper-heuristic works at a higher level than the typical application of meta-heuristics to optimisation problems, i.e., a hyper-heuristic could be thought of as a (meta-)heuristic which operates on lower-level (meta-)heuristics. In this chapter we introduce the idea and give a brief history of this emerging area. In addition, we review some of the latest work published in the field.
Edmund Burke, Graham Kendall, Jim Newall, Emma Hart, Peter Ross, Sonia Schulenburg
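One simple way to realize the "choose the right heuristic in a given situation" idea is a score-based selection hyper-heuristic, sketched below in Python; the scoring and selection rules are illustrative assumptions, not a method taken from the chapter.

import random

def selection_hyper_heuristic(initial, low_level_heuristics, objective,
                              iterations=500, seed=None):
    """A simple selection hyper-heuristic (minimization).

    The hyper-heuristic only sees the low-level heuristics and the change in
    objective value they produce: it keeps a score per heuristic, picks the
    next one in proportion to the scores, and rewards those that improve the
    current solution.
    """
    rng = random.Random(seed)
    scores = [1.0] * len(low_level_heuristics)
    current, current_val = initial, objective(initial)
    best, best_val = current, current_val
    for _ in range(iterations):
        i = rng.choices(range(len(low_level_heuristics)), weights=scores)[0]
        candidate = low_level_heuristics[i](current, rng)
        val = objective(candidate)
        if val < current_val:
            scores[i] += 1.0                        # reward improving heuristics
            current, current_val = candidate, val
            if val < best_val:
                best, best_val = candidate, val
        else:
            scores[i] = max(0.1, scores[i] * 0.9)   # gently demote unhelpful ones
    return best, best_val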
Chapter 17. Parallel Strategies for Meta-Heuristics
Abstract
We present a state-of-the-art survey of parallel meta-heuristic developments and results, discuss general design and implementation principles that apply to most meta-heuristic classes, instantiate these principles for the three meta-heuristic classes currently most extensively used (genetic methods, simulated annealing, and tabu search), and identify a number of trends and promising research directions.
Teodor Gabriel Crainic, Michel Toulouse
Chapter 18. Metaheuristic Class Libraries
Abstract
Just as metaheuristics provide flexible and extensible methods of solving optimization problems, class libraries provide flexible and extensible building blocks for creating software to implement metaheuristics. In this chapter we provide a brief introduction to class libraries and object-oriented programming techniques. We also provide a short summary of some of the more general metaheuristic class libraries and the literature that describes them. We conclude with a detailed example of library design.
Andreas Fink, Stefan Voß, David L. Woodruff
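In the spirit of the chapter, a class library exposes reusable building blocks from which concrete metaheuristics are assembled; the tiny Python sketch below only shows the flavor of such a design (an abstract base class plus one trivial subclass) and is not the library discussed by the authors.

import random
from abc import ABC, abstractmethod

class Metaheuristic(ABC):
    """Abstract building block: concrete metaheuristics plug in their own step."""

    def __init__(self, objective):
        self.objective = objective
        self.best = None
        self.best_val = float("inf")

    @abstractmethod
    def step(self, current):
        """Return the next current solution (one iteration of the method)."""

    def run(self, initial, iterations=100):
        current = initial
        for _ in range(iterations):
            current = self.step(current)
            val = self.objective(current)
            if val < self.best_val:
                self.best, self.best_val = current, val
        return self.best, self.best_val

class RandomRestart(Metaheuristic):
    """Trivial concrete subclass: each step draws a fresh random solution."""

    def __init__(self, objective, sampler):
        super().__init__(objective)
        self.sampler = sampler

    def step(self, current):
        return self.sampler()

# Usage sketch: search for a low value of x^2 with random restarts in [-10, 10].
rng = random.Random(0)
print(RandomRestart(lambda x: x * x, lambda: rng.uniform(-10, 10)).run(0.0))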
Chapter 19. Asynchronous Teams
Sarosh Talukdar, Sesh Murthy, Rama Akkiraju
Backmatter
Metadata
Title
Handbook of Metaheuristics
Edited by
Fred Glover
Gary A. Kochenberger
Copyright year
2003
Publisher
Springer US
Electronic ISBN
978-0-306-48056-0
Print ISBN
978-1-4020-7263-5
DOI
https://doi.org/10.1007/b101874