
2010 | Book

Handbook of Metaheuristics


About this book

The first edition of the Handbook of Metaheuristics was published in 2003 under the editorship of Fred Glover and Gary A. Kochenberger. Given the numerous developments observed in the field of metaheuristics in recent years, it appeared that the time was ripe for a second edition of the Handbook. For different reasons, Fred and Gary were unable to accept Springer’s invitation to prepare this second edition and they suggested that we should take over the editorship responsibility of the Handbook. We are deeply honored and grateful for their trust.

As stated in the first edition, metaheuristics are “solution methods that orchestrate an interaction between local improvement procedures and higher level strategies to create a process capable of escaping from local optima and performing a robust search of a solution space.” Although this broad characterization still holds today, many new and exciting developments and extensions have been observed in the last few years. We think in particular of hybrids, which take advantage of the strengths of each of their individual metaheuristic components to better explore the solution space. Hybrids of metaheuristics with other optimization techniques, like branch-and-bound, mathematical programming or constraint programming, are also increasingly popular. On the applications front, metaheuristics are now used to find high-quality solutions to an ever-growing number of complex, ill-defined real-world problems, in particular combinatorial ones.

Table of Contents

Frontmatter
Chapter 1. Simulated Annealing
Abstract
Simulated annealing is a well-studied local search metaheuristic used to address discrete and, to a lesser extent, continuous optimization problems. The key feature of simulated annealing is that it provides a mechanism to escape local optima by allowing hill-climbing moves (i.e., moves which worsen the objective function value) in hopes of finding a global optimum. A brief history of simulated annealing is presented, including a review of its application to discrete, continuous, and multi-objective optimization problems. Asymptotic convergence and finite-time performance theory for simulated annealing are reviewed. Other local search algorithms are discussed in terms of their relationship to simulated annealing. The chapter also presents practical guidelines for the implementation of simulated annealing in terms of cooling schedules, neighborhood functions, and appropriate applications.
Alexander G. Nikolaev, Sheldon H. Jacobson
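To make the escape mechanism above concrete, here is a minimal illustrative Python sketch of a simulated annealing loop (not taken from the chapter); the geometric cooling schedule, the toy objective, and the `neighbor` interface are all assumptions made for illustration.

```python
import math
import random

def simulated_annealing(objective, neighbor, x0, t0=10.0, alpha=0.95, steps=1000):
    """Minimize `objective` starting from x0 under a geometric cooling schedule."""
    x, fx = x0, objective(x0)
    best, f_best = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = objective(y)
        # Always accept improving moves; accept hill-climbing (worsening)
        # moves with probability exp(-delta / t), which shrinks as t cools.
        if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < f_best:
                best, f_best = x, fx
        t *= alpha  # geometric cooling
    return best, f_best

# Toy usage: minimize (x - 3)^2 with a random-walk neighborhood.
print(simulated_annealing(lambda x: (x - 3) ** 2,
                          lambda x: x + random.uniform(-1, 1), x0=0.0))
```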
Chapter 2. Tabu Search
Abstract
This chapter presents the fundamental concepts of tabu search (TS) in a tutorial fashion. Special emphasis is put on showing the relationships with classical local search methods and on the basic elements of any TS heuristic, namely the definition of the search space, the neighborhood structure, and the search memory. Other sections cover other important concepts such as search intensification and diversification and provide references to significant work on TS. Recent advances in TS are also briefly discussed.
Michel Gendreau, Jean-Yves Potvin
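As a companion to the abstract, a hedged sketch of the core TS loop: a fixed-tenure tabu list over move attributes plus the standard aspiration criterion. The `neighbors(x)` interface, which yields `(move, solution)` pairs, is an assumption made for illustration.

```python
from collections import deque

def tabu_search(objective, neighbors, x0, tenure=7, iters=200):
    """Best-improvement tabu search; `neighbors(x)` yields (move, solution) pairs."""
    x, fx = x0, objective(x0)
    best, f_best = x, fx
    tabu = deque(maxlen=tenure)  # short-term memory of recent move attributes
    for _ in range(iters):
        candidates = [(objective(y), move, y) for move, y in neighbors(x)]
        # Forbid recently used moves unless they satisfy the aspiration
        # criterion, i.e., they would improve on the best solution so far.
        admissible = [c for c in candidates if c[1] not in tabu or c[0] < f_best]
        if not admissible:
            break
        fx, move, x = min(admissible, key=lambda c: c[0])
        tabu.append(move)
        if fx < f_best:
            best, f_best = x, fx
    return best, f_best
```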
Chapter 3. Variable Neighborhood Search
Abstract
Variable neighborhood search (VNS) is a metaheuristic for solving combinatorial and global optimization problems whose basic idea is a systematic change of neighborhood, both within a descent phase to find a local optimum and in a perturbation phase to get out of the corresponding valley. In this chapter we present the basic schemes of VNS and some of its extensions, and then describe a recent development, formulation space search. We then present five families of applications in which VNS has proven to be very successful: (i) exact solution of large-scale location problems by primal–dual VNS; (ii) generation of feasible solutions to large mixed integer linear programs by hybridization of VNS and local branching; (iii) generation of good feasible solutions to continuous nonlinear programs; (iv) generation of feasible solutions and/or improved local optima for mixed integer nonlinear programs by hybridization of sequential quadratic programming and branch and bound within a VNS framework; and (v) exploration of graph theory to find conjectures, refutations, and proofs or ideas of proofs.
Pierre Hansen, Nenad Mladenović, Jack Brimberg, José A. Moreno Pérez
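A minimal sketch of the basic VNS scheme described above, assuming user-supplied `shake(x, k)` (a random point in the k-th neighborhood) and `local_search` routines; the iteration budget is illustrative.

```python
def basic_vns(objective, shake, local_search, x0, k_max=5, iters=100):
    """Basic VNS: perturb in increasingly distant neighborhoods, then descend."""
    x, fx = x0, objective(x0)
    for _ in range(iters):
        k = 1
        while k <= k_max:
            y = local_search(shake(x, k))   # shake in N_k, then descend
            fy = objective(y)
            if fy < fx:                     # improvement: recenter, restart at k=1
                x, fx, k = y, fy, 1
            else:                           # no improvement: widen the neighborhood
                k += 1
    return x, fx
```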
Chapter 4. Scatter Search and Path-Relinking: Fundamentals, Advances, and Applications
Abstract
Scatter search is an evolutionary metaheuristic that explores solution spaces by evolving a set of reference points, operating on a small set of solutions while making only limited use of randomization. We give a comprehensive description of the elements and methods that make up its template, including the most recent elements incorporated in successful applications in both global and combinatorial optimization. Path-relinking is an intensification strategy to explore trajectories connecting elite solutions obtained by heuristic methods such as scatter search, tabu search, and GRASP. We describe its mechanics, implementation issues, randomization, the use of pools of high-quality solutions to hybridize path-relinking with other heuristic methods, and evolutionary path-relinking. We also describe the hybridization of path-relinking with genetic algorithms to implement a progressive crossover operator. Some successful applications of scatter search and of path-relinking are also reported.
Mauricio G.C. Resende, Celso C. Ribeiro, Fred Glover, Rafael Martí
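To illustrate the mechanics of path-relinking, here is a small Python sketch restricted to binary vectors (an assumption; the chapter treats the general case): it walks from an initiating solution toward a guiding solution, at each step flipping the differing component that yields the best intermediate, and returns the best solution seen on the path.

```python
def path_relinking(objective, start, guide):
    """Greedy path-relinking between two binary 0/1 vectors (illustrative only)."""
    x = list(start)
    best, f_best = list(x), objective(x)
    diff = [i for i in range(len(x)) if x[i] != guide[i]]
    while diff:
        # Evaluate flipping each remaining differing position toward the guide.
        scored = []
        for i in diff:
            x[i] = guide[i]
            scored.append((objective(x), i))
            x[i] = 1 - x[i]                 # undo the trial flip (binary values)
        f_new, i_best = min(scored)
        x[i_best] = guide[i_best]           # commit the best flip
        diff.remove(i_best)
        if f_new < f_best:
            best, f_best = list(x), f_new
    return best, f_best
```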
Chapter 5. Genetic Algorithms
Abstract
Genetic algorithms (GAs) have become popular as a means of solving hard combinatorial optimization problems. The first part of this chapter briefly traces their history, explains the basic concepts and discusses some of their theoretical aspects. It also references a number of sources for further research into their applications. The second part concentrates on the detailed implementation of a GA. It discusses the fundamentals of encoding a ‘genotype’ in different circumstances and describes the mechanics of population selection and management and the choice of genetic ‘operators’ for generating new populations. In closing, some specific guidelines for using GAs in practice are provided.
Colin R. Reeves
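The following is a deliberately minimal GA sketch with bitstring encoding, binary tournament selection, one-point crossover, and bit-flip mutation; all parameter values are illustrative defaults, not recommendations from the chapter.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=50, generations=100,
                      p_cross=0.9, p_mut=0.01):
    """Minimal generational GA on bitstrings, maximizing `fitness`."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = random.sample(pop, 2)    # binary tournament selection
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if random.random() < p_cross:   # one-point crossover
                cut = random.randrange(1, n_bits)
                p1 = p1[:cut] + p2[cut:]
            # Bit-flip mutation applied gene by gene.
            nxt.append([1 - g if random.random() < p_mut else g for g in p1])
        pop = nxt
    return max(pop, key=fitness)

# Toy usage: OneMax (maximize the number of ones).
best = genetic_algorithm(sum)
```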
Chapter 6. A Modern Introduction to Memetic Algorithms
Abstract
Memetic algorithms are optimization techniques based on the synergistic combination of ideas taken from different algorithmic solvers, such as population-based search (as in evolutionary techniques) and local search (as in gradient-ascent techniques). After providing some historical notes on the origins of memetic algorithms, this work shows the general structure of these techniques, including some guidelines for their design. Some advanced topics such as multiobjective optimization, self-adaptation, and hybridization with complete techniques (e.g., branch-and-bound) are subsequently addressed. This chapter finishes with an overview of the numerous applications of these techniques and a sketch of the current development trends in this area.
Pablo Moscato, Carlos Cotta
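A sketch of the synergy the abstract describes: one generation of a memetic algorithm in which every offspring is refined by local search before selection. The `crossover`, `mutate`, and `local_search` callables are assumed interfaces, not code from the chapter.

```python
import random

def memetic_generation(population, fitness, crossover, mutate, local_search):
    """One generation of a memetic algorithm: recombine and mutate as in an
    evolutionary algorithm, then refine each child by local search."""
    offspring = []
    for _ in range(len(population)):
        p1, p2 = random.sample(population, 2)
        child = mutate(crossover(p1, p2))
        offspring.append(local_search(child))  # the "memetic" ingredient
    # Plus-selection: keep the best of parents and refined offspring.
    merged = population + offspring
    merged.sort(key=fitness, reverse=True)
    return merged[:len(population)]
```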
Chapter 7. Genetic Programming
Abstract
Welcome to genetic programming, where the forces of nature are used to automatically evolve computer programs. We give a flavour of where GP has been successfully applied (the area is far too wide to cover everything) and of interesting current and future research, but we start with a tutorial on how to get started and finish with common pitfalls to avoid.
William B. Langdon, Robert I. McKay, Lee Spector
Chapter 8. Ant Colony Optimization: Overview and Recent Advances
Abstract
Ant Colony Optimization (ACO) is a metaheuristic that is inspired by the pheromone trail laying and following behavior of some ant species. Artificial ants in ACO are stochastic solution construction procedures that build candidate solutions for the problem instance under consideration by exploiting (artificial) pheromone information that is adapted based on the ants’ search experience and possibly available heuristic information. Since the proposal of the Ant System, the first ACO algorithm, many significant research results have been obtained. These contributions focused on the development of high-performing algorithmic variants, the development of a generic algorithmic framework for ACO algorithms, successful applications of ACO algorithms to a wide range of computationally hard problems, and the theoretical understanding of properties of ACO algorithms. This chapter reviews these developments and gives an overview of recent research trends in ACO.
Marco Dorigo, Thomas Stützle
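For concreteness, a toy Python sketch of the two ACO ingredients named above, phrased for a symmetric TSP: solution construction biased by pheromone and heuristic information, and the global pheromone update (evaporation plus deposit). The `alpha`, `beta`, and `rho` values are conventional defaults, not prescriptions from the chapter.

```python
import random

def construct_tour(pheromone, heuristic, alpha=1.0, beta=2.0):
    """One artificial ant builds a tour, choosing the next city with
    probability proportional to pheromone^alpha * heuristic^beta."""
    n = len(pheromone)
    tour, unvisited = [0], list(range(1, n))
    while unvisited:
        i = tour[-1]
        weights = [pheromone[i][j] ** alpha * heuristic[i][j] ** beta
                   for j in unvisited]
        j = random.choices(unvisited, weights=weights)[0]
        tour.append(j)
        unvisited.remove(j)
    return tour

def evaporate_and_deposit(pheromone, tours, lengths, rho=0.1):
    """Global pheromone update: evaporate everywhere, deposit on used edges."""
    n = len(pheromone)
    for i in range(n):
        for j in range(n):
            pheromone[i][j] *= (1 - rho)
    for tour, length in zip(tours, lengths):
        for a, b in zip(tour, tour[1:] + tour[:1]):   # close the cycle
            pheromone[a][b] += 1.0 / length
            pheromone[b][a] += 1.0 / length
```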
Chapter 9. Advanced Multi-start Methods
Abstract
Heuristic search procedures that aspire to find globally optimal solutions to hard combinatorial optimization problems usually require some type of diversification to overcome local optimality. One way to achieve diversification is to re-start the procedure from a new solution once a region has been explored. In this chapter we describe the best known multi-start methods for solving optimization problems. We propose classifying these methods in terms of their use of randomization, memory, and degree of rebuild. We also present a computational comparison of these methods on solving the maximum diversity problem in terms of solution quality and diversification power.
R. Martí, J. Marcos Moreno-Vega, A. Duarte
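In its least sophisticated form (no memory, full rebuild, pure randomization, in the classification proposed above), multi-start reduces to the following sketch; `random_solution` and `local_search` are assumed callables.

```python
def multi_start(objective, random_solution, local_search, starts=20):
    """Restart local search from fresh random solutions and keep the best
    local optimum found across all restarts."""
    best, f_best = None, float("inf")
    for _ in range(starts):
        x = local_search(random_solution())
        fx = objective(x)
        if fx < f_best:
            best, f_best = x, fx
    return best, f_best
```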
Chapter 10. Greedy Randomized Adaptive Search Procedures: Advances, Hybridizations, and Applications
Abstract
GRASP is a multi-start metaheuristic for combinatorial optimization problems, in which each iteration consists basically of two phases: construction and local search. The construction phase builds a feasible solution, whose neighborhood is investigated until a local minimum is found during the local search phase. The best overall solution is kept as the result. In this chapter, we first describe the basic components of GRASP. Successful implementation techniques are discussed and illustrated by numerical results obtained for different applications. Enhanced or alternative solution construction mechanisms and techniques to speed up the search are also described: alternative randomized greedy construction schemes, Reactive GRASP, cost perturbations, bias functions, memory and learning, local search on partially constructed solutions, hashing, and filtering. We also discuss implementation strategies of memory-based intensification and post-optimization techniques using path-relinking. Hybridizations with other metaheuristics, parallelization strategies, and applications are also reviewed.
Mauricio G.C. Resende, Celso C. Ribeiro
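A sketch of the GRASP construction phase using the classical value-based restricted candidate list (RCL); `cost(solution, c)` is an assumed incremental-cost interface, and `alpha=0.3` is an illustrative setting. A full GRASP iteration would apply local search to the constructed solution and keep the best result across iterations.

```python
import random

def grasp_construct(candidates, cost, alpha=0.3):
    """Greedy randomized construction: repeatedly pick a random element from
    the RCL of near-greedy choices (alpha=0 is pure greedy, 1 is pure random)."""
    solution, remaining = [], list(candidates)
    while remaining:
        costs = {c: cost(solution, c) for c in remaining}
        c_min, c_max = min(costs.values()), max(costs.values())
        threshold = c_min + alpha * (c_max - c_min)
        rcl = [c for c in remaining if costs[c] <= threshold]
        choice = random.choice(rcl)
        solution.append(choice)
        remaining.remove(choice)
    return solution
```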
Chapter 11. Guided Local Search
Abstract
Combinatorial explosion is a well-known phenomenon that prevents complete algorithms from solving many real-life combinatorial optimization problems. In many situations, heuristic search methods are needed. This chapter describes the principles of Guided Local Search (GLS) and Fast Local Search (FLS) and surveys their applications. GLS is a penalty-based metaheuristic algorithm that sits on top of other local search algorithms, with the aim of improving their efficiency and robustness. FLS is a way of reducing the size of the neighbourhood to improve the efficiency of local search. The chapter also provides guidance for implementing and using GLS and FLS. Four problems, representative of general application categories, are examined with detailed information provided on how to build a GLS-based method in each case.
Christos Voudouris, Edward P.K. Tsang, Abdullah Alsheddy
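The penalty mechanism can be sketched as follows (assumed interfaces: `features(x)` returns the set of features present in a solution, `feature_costs` maps features to costs). GLS runs local search on the augmented objective and, at each local optimum, increments the penalties of the features with maximum utility.

```python
def augmented_cost(objective, features, penalties, lam, x):
    """g(x) = f(x) + lambda * sum of penalties of the features present in x.
    GLS runs its underlying local search on g rather than on f."""
    return objective(x) + lam * sum(penalties.get(f, 0) for f in features(x))

def penalize(x, features, feature_costs, penalties):
    """At a local optimum of g, increment the penalty of the feature(s) with
    maximum utility cost / (1 + penalty), steering the search elsewhere."""
    util = {f: feature_costs[f] / (1 + penalties.get(f, 0)) for f in features(x)}
    top = max(util.values())
    for f, u in util.items():
        if u == top:
            penalties[f] = penalties.get(f, 0) + 1
```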
Chapter 12. Iterated Local Search: Framework and Applications
Abstract
The key idea underlying iterated local search is to focus the search not on the full space of all candidate solutions but on the solutions that are returned by some underlying algorithm, typically a local search heuristic. The resulting search behavior can be characterized as iteratively building a chain of solutions of this embedded algorithm. The result is also a conceptually simple metaheuristic that nevertheless has led to state-of-the-art algorithms for many computationally hard problems. In fact, very good performance is often already obtained by rather straightforward implementations of the metaheuristic. In addition, the modular architecture of iterated local search makes it very suitable for an algorithm engineering approach where, progressively, the algorithms’ performance can be further optimized. Our purpose here is to give an accessible description of the underlying principles of iterated local search and a discussion of the main aspects that need to be taken into account for a successful application of it. In addition, we review the most important applications of this method and discuss its relationship to other metaheuristics.
Helena R. Lourenço, Olivier C. Martin, Thomas Stützle
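A straightforward rendering of the chain-of-local-optima idea, with the simple "accept if better" criterion; `local_search` and `perturb` are assumed callables, and the iteration budget is illustrative.

```python
def iterated_local_search(objective, local_search, perturb, x0, iters=100):
    """ILS: walk through the space of local optima via perturbation + descent."""
    x = local_search(x0)
    fx = objective(x)
    for _ in range(iters):
        y = local_search(perturb(x))        # kick the incumbent, re-optimize
        fy = objective(y)
        if fy < fx:                         # "better" acceptance criterion
            x, fx = y, fy
    return x, fx
```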
Chapter 13. Large Neighborhood Search
Abstract
Heuristics based on large neighborhood search have recently shown outstanding results in solving various transportation and scheduling problems. Large neighborhood search methods explore a complex neighborhood using heuristics. Using large neighborhoods makes it possible to find better candidate solutions in each iteration and hence traverse a more promising search path. Starting from the large neighborhood search method, we give an overview of very large scale neighborhood search methods and discuss recent variants and extensions like variable depth search and adaptive large neighborhood search.
David Pisinger, Stefan Ropke
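A compact sketch in the destroy-and-repair style, with a naive adaptive weighting over several destroy operators in the spirit of adaptive large neighborhood search; the reward rule is an illustrative simplification, and `destroy_ops` and `repair` are assumed callables.

```python
import random

def large_neighborhood_search(objective, destroy_ops, repair, x0, iters=200):
    """LNS: repeatedly destroy part of the incumbent and rebuild it,
    adaptively favoring destroy operators that have produced improvements."""
    x, fx = x0, objective(x0)
    weights = [1.0] * len(destroy_ops)
    for _ in range(iters):
        k = random.choices(range(len(destroy_ops)), weights=weights)[0]
        y = repair(destroy_ops[k](x))       # destroy part of x, then rebuild
        fy = objective(y)
        if fy < fx:                         # reward the operator that improved
            x, fx = y, fy
            weights[k] += 1.0
    return x, fx
```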
Chapter 14. Artificial Immune Systems
Abstract
The human immune system has numerous properties that make it ripe for exploitation in the computational domain, such as robustness and fault tolerance, and many different algorithms, collectively termed Artificial Immune Systems (AIS), have been inspired by it. Two generations of AIS are currently in use, with the first generation relying on simplified immune models and the second generation utilising interdisciplinary collaboration to develop a deeper understanding of the immune system and hence produce more complex models. Both generations of algorithms have been successfully applied to a variety of problems, including anomaly detection, pattern recognition, optimisation and robotics. In this chapter an overview of AIS is presented, its evolution is discussed, and it is shown that the diversification of the field is linked to the diversity of the immune system itself, leading to a number of algorithms as opposed to one archetypal system. Two case studies are also presented to help provide insight into the mechanisms of AIS; these are the idiotypic network approach and the Dendritic Cell Algorithm.
Julie Greensmith, Amanda Whitbrook, Uwe Aickelin
Chapter 15. A Classification of Hyper-heuristic Approaches
Abstract
The current state of the art in hyper-heuristic research comprises a set of approaches that share the common goal of automating the design and adaptation of heuristic methods to solve hard computational search problems. The main goal is to produce more generally applicable search methodologies. In this chapter we present an overview of previous categorisations of hyper-heuristics and provide a unified classification and definition, which capture the work that is being undertaken in this field. We distinguish between two main hyper-heuristic categories: heuristic selection and heuristic generation. Some representative examples of each category are discussed in detail. Our goals are to clarify the main features of existing techniques and to suggest new directions for hyper-heuristic research.
Edmund K. Burke, Matthew Hyde, Graham Kendall, Gabriela Ochoa, Ender Özcan, John R. Woodward
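On the heuristic-selection side of the classification, the simplest possible instance looks like the sketch below: a random choice among low-level heuristics combined with a naive non-worsening acceptance rule. Real hyper-heuristics replace both components with learned strategies; this is only a baseline illustration.

```python
import random

def selection_hyper_heuristic(objective, low_level_heuristics, x0, iters=500):
    """Simple-random selection with naive acceptance: each low-level
    heuristic maps a solution to a modified solution."""
    x, fx = x0, objective(x0)
    for _ in range(iters):
        h = random.choice(low_level_heuristics)   # heuristic selection step
        y = h(x)
        fy = objective(y)
        if fy <= fx:                              # accept non-worsening moves
            x, fx = y, fy
    return x, fx
```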
Chapter 16. Metaheuristic Hybrids
Abstract
In recent years, so-called hybrid optimization approaches have become increasingly popular for addressing hard optimization problems. In fact, when looking at leading applications of metaheuristics for complex real-world scenarios, many if not most of them do not purely adhere to one specific classical metaheuristic model but rather combine different algorithmic techniques. Concepts from different metaheuristics are often hybridized with each other, but they are also often combined with other optimization techniques such as branch-and-bound and methods from the mathematical programming and constraint programming fields. Such combinations aim at exploiting the particular advantages of the individual components, and in fact well-designed hybrids often perform substantially better than their “pure” counterparts. Many very different ways of hybridizing metaheuristics are described in the literature, and unfortunately it is usually difficult to decide which approach(es) are most appropriate in a particular situation. This chapter gives an overview of this topic by starting with a classification of metaheuristic hybrids and then discussing several prominent design templates which are illustrated by concrete examples.
Günther R. Raidl, Jakob Puchinger, Christian Blum
Chapter 17. Parallel Meta-heuristics
Abstract
We present a state-of-the-art survey of parallel meta-heuristic strategies, developments, and results. We discuss general design and implementation principles that apply to most meta-heuristic classes and instantiate these principles for neighborhood and population-based meta-heuristics. We also identify a number of trends and promising research directions.
Teodor Gabriel Crainic, Michel Toulouse
Chapter 18. Reactive Search Optimization: Learning While Optimizing
Abstract
Reactive Search Optimization advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest for Reactive Search Optimization include machine learning and statistics, in particular reinforcement learning, active or query learning, neural networks, and meta-heuristics (although the boundary signalled by the “meta” prefix is not always clear).
Roberto Battiti, Mauro Brunato
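The internal feedback loop can be illustrated with a toy self-tuning rule in the spirit of reactive tabu search (the specific constants are arbitrary choices for illustration): a memory of visited solutions detects cycling, and a critical parameter, here a tabu tenure, is adapted in response.

```python
def reactive_step(x, visited, tenure):
    """One feedback step: grow the tenure when previously seen solutions
    recur (a sign of cycling), shrink it otherwise. `x` is assumed to be
    a sequence that is hashable once converted to a tuple."""
    key = tuple(x)
    if key in visited:
        tenure = int(tenure * 1.1) + 1   # repetitions => diversify harder
    else:
        tenure = max(1, tenure - 1)      # clean search => relax the restriction
    visited.add(key)
    return tenure
```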
Chapter 19. Stochastic Search in Metaheuristics
Abstract
Stochastic search is a key mechanism underlying many metaheuristics. The chapter starts with the presentation of a general framework algorithm in the form of a stochastic search process that contains a large variety of familiar metaheuristic techniques as special cases. Based on this unified view, questions concerning convergence and runtime are discussed on the level of a theoretical analysis. Concrete examples from diverse metaheuristic fields are given. In connection with runtime results, important topics such as instance difficulty, phase transitions, parameter choice, No-Free-Lunch theorems, and fitness landscape analysis are addressed. Furthermore, a short sketch of the theory of black-box optimization is given, and generalizations of results to stochastic search under noise are outlined.
Walter J. Gutjahr
Chapter 20. An Introduction to Fitness Landscape Analysis and Cost Models for Local Search
Abstract
Despite their empirical effectiveness, our theoretical understanding of metaheuristic algorithms based on local search (and all other paradigms) is very limited, leading to significant problems for both researchers and practitioners. Specifically, the lack of a theory of local search impedes the development of more effective metaheuristic algorithms, prevents practitioners from identifying the metaheuristic most appropriate for a given problem, and permits widespread conjecture and misinformation regarding the benefits and/or drawbacks of particular metaheuristics. Local search metaheuristic performance is closely linked to the structure of the fitness landscape, i.e., the nature of the underlying search space. Consequently, understanding such structure is a first step toward understanding local search behavior, which can ultimately lead to a more general theory of local search. In this chapter, we introduce and survey the literature on fitness landscape analysis for local search, placing the research in the context of a broader, critical classification scheme delineating methodologies by their potential to account for local search metaheuristic performance.
Jean-Paul Watson
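One widely used summary statistic in this literature is the random-walk autocorrelation of fitness, which quantifies landscape ruggedness; the sketch below estimates it under assumed `fitness` and `neighbor` callables, with an illustrative walk length and lag.

```python
def walk_autocorrelation(fitness, neighbor, x0, length=1000, lag=1):
    """Estimate the lag-k autocorrelation of fitness along a random walk:
    values near 1 indicate a smooth landscape, near 0 a rugged one."""
    xs = [x0]
    for _ in range(length - 1):
        xs.append(neighbor(xs[-1]))       # take one random-walk step
    f = [fitness(x) for x in xs]
    mean = sum(f) / len(f)
    var = sum((v - mean) ** 2 for v in f) / len(f)
    cov = sum((f[i] - mean) * (f[i + lag] - mean)
              for i in range(len(f) - lag)) / (len(f) - lag)
    return cov / var if var else 0.0
```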
Chapter 21. Comparison of Metaheuristics
Abstract
Metaheuristics are truly diverse in nature—under the overarching theme of performing operations to escape local optima, algorithms as different as ant colony optimization, tabu search, harmony search, and genetic algorithms have emerged. Due to the unique functionality of each type of metaheuristic, comparison of metaheuristics is in many ways more difficult than other algorithmic comparisons. In this chapter, we discuss techniques for meaningful comparison of metaheuristics. We discuss how to create and classify instances in a new testbed and how to make sure other researchers have access to the problems for future metaheuristic comparisons. Further, we discuss the disadvantages of large parameter sets and how to measure complicating parameter interactions in a metaheuristic’s parameter space. Last, we discuss how to compare metaheuristics in terms of both solution quality and runtime.
John Silberholz, Bruce Golden
Backmatter
Metadata
Title
Handbook of Metaheuristics
Edited by
Michel Gendreau
Jean-Yves Potvin
Copyright Year
2010
Publisher
Springer US
Electronic ISBN
978-1-4419-1665-5
Print ISBN
978-1-4419-1663-1
DOI
https://doi.org/10.1007/978-1-4419-1665-5