
2019 | Book

Handbook of Metaheuristics


About this Book

The third edition of this handbook is designed to provide broad coverage of the concepts, implementations, and applications of metaheuristics. The book's chapters serve as stand-alone presentations giving both the necessary underpinnings and practical guides for implementation. The nature of metaheuristics invites an analyst to modify basic methods in response to problem characteristics, past experiences, and personal preferences, and the chapters in this handbook are designed to facilitate this process as well. This new edition has been fully revised and features new chapters on swarm intelligence and on the automated design of metaheuristics from flexible algorithm frameworks. The authors who have contributed to this volume are leading figures in the metaheuristics community and are responsible for pioneering contributions to the fields they write about. Their collective work has significantly enriched the field of optimization in general and combinatorial optimization in particular.

Metaheuristics are solution methods that orchestrate an interaction between local improvement procedures and higher-level strategies to create a process capable of escaping from local optima and performing a robust search of a solution space. Many new and exciting developments and extensions have been observed in recent years. Hybrids of metaheuristics with other optimization techniques, such as branch-and-bound, mathematical programming, or constraint programming, are also increasingly popular. On the applications front, metaheuristics are now used to find high-quality solutions to an ever-growing number of complex, ill-defined real-world problems, in particular combinatorial ones. This handbook should continue to be a great reference for researchers, graduate students, and practitioners interested in metaheuristics.

Table of Contents

Frontmatter
Chapter 1. Simulated Annealing: From Basics to Applications
Abstract
Simulated Annealing (SA) is one of the simplest and best-known metaheuristic methods for addressing difficult black-box global optimization problems, whose objective function is not explicitly given and can only be evaluated via some costly computer simulation. It is widely used in real-life applications. The main advantage of SA is its simplicity. SA is based on an analogy with the physical annealing of materials that avoids the drawback of the Monte Carlo approach (which can be trapped in local minima), thanks to an efficient Metropolis acceptance criterion. When the evaluation of the objective function results from complex simulation processes that manipulate a large-dimension state space involving much memory, population-based algorithms are not applicable and SA is the right answer to address such issues. This chapter is an introduction to the subject. It presents the principles of local search optimization algorithms, of which simulated annealing is an extension, and the Metropolis algorithm, a basic component of SA. The basic SA algorithm for optimization is described together with two theoretical properties that are fundamental to SA: statistical equilibrium (inspired by elementary statistical physics) and asymptotic convergence (based on Markov chain theory). The chapter surveys the following practical issues of interest to the user who wishes to implement the SA algorithm for a particular application: finite-time approximation of the theoretical SA, polynomial-time cooling, Markov-chain length, stopping criteria, and simulation-based evaluations. To illustrate these concepts, this chapter presents the straightforward application of SA to two simple classical NP-hard combinatorial optimization problems: the knapsack problem and the traveling salesman problem. The overall SA methodology is then deployed in detail on a real-life application: a large-scale aircraft trajectory planning problem involving nearly 30,000 flights at the European continental scale. This exemplifies how to tackle today's complex problems using the simple scheme of SA by exploiting particular features of the problem, by integrating astute computer implementation within the algorithm, and by setting user-defined parameters empirically, guided by the basic SA theory presented in this chapter.
Daniel Delahaye, Supatcha Chaimatanan, Marcel Mongeau
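To make the template above concrete, here is a minimal, self-contained Python sketch of SA with Metropolis acceptance and geometric cooling, applied to a toy 0/1 knapsack instance. The instance data, penalty term, and cooling parameters are illustrative assumptions, not values from the chapter.

```python
import math, random

# Toy 0/1 knapsack instance (illustrative values, not from the chapter).
values  = [10, 13, 7, 11, 5, 8]
weights = [ 4,  6, 3,  5, 2, 3]
CAPACITY = 12

def cost(x):
    # Negative total value; infeasible solutions get a large penalty.
    w = sum(wi for wi, xi in zip(weights, x) if xi)
    v = sum(vi for vi, xi in zip(values, x) if xi)
    return -v + (1000 * (w - CAPACITY) if w > CAPACITY else 0)

def simulated_annealing(T=10.0, alpha=0.95, steps_per_T=50, T_min=1e-3):
    x = [0] * len(values)            # start from the empty knapsack
    best, best_cost = x[:], cost(x)
    while T > T_min:                 # geometric cooling schedule
        for _ in range(steps_per_T): # fixed-length Markov chain per temperature
            y = x[:]
            y[random.randrange(len(y))] ^= 1   # flip one item in/out
            delta = cost(y) - cost(x)
            # Metropolis acceptance: always accept improvements, accept
            # deteriorations with probability exp(-delta/T).
            if delta <= 0 or random.random() < math.exp(-delta / T):
                x = y
                if cost(x) < best_cost:
                    best, best_cost = x[:], cost(x)
        T *= alpha
    return best, -best_cost

random.seed(0)
print(simulated_annealing())
```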
Chapter 2. Tabu Search
Abstract
This chapter presents the fundamental concepts of Tabu Search (TS) in a tutorial fashion. Special emphasis is put on showing the relationships with classical local search methods and on the basic elements of any TS heuristic, namely, the definition of the search space, the neighborhood structure, and the search memory. Other sections cover other important concepts such as search intensification and diversification and provide references to significant work on TS. Recent advances in TS are also briefly discussed.
Michel Gendreau, Jean-Yves Potvin
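As a minimal illustration of the three basic elements named above (search space, neighborhood structure, and search memory), the following Python sketch runs tabu search with a bit-flip neighborhood, a recency-based tabu list, and an aspiration criterion on a toy objective; all problem data and parameter values are illustrative assumptions.

```python
import random

random.seed(1)
N = 20
target = [random.randint(0, 1) for _ in range(N)]

def fitness(x):
    # Toy objective (illustrative): number of bits matching a hidden target.
    return sum(int(a == b) for a, b in zip(x, target))

def tabu_search(iterations=100, tenure=5):
    x = [0] * N
    best, best_fit = x[:], fitness(x)
    tabu = {}                        # position -> iteration until which it is tabu
    for it in range(iterations):
        candidates = []
        for i in range(N):           # evaluate the full bit-flip neighborhood
            y = x[:]; y[i] ^= 1
            f = fitness(y)
            # Aspiration: a tabu move is allowed if it beats the best so far.
            if tabu.get(i, -1) < it or f > best_fit:
                candidates.append((f, i, y))
        f, i, y = max(candidates)    # best admissible (possibly non-improving) move
        x = y
        tabu[i] = it + tenure        # recency-based short-term memory
        if f > best_fit:
            best, best_fit = x[:], f
    return best, best_fit

print(tabu_search()[1])
```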
Chapter 3. Variable Neighborhood Search
Abstract
Variable neighborhood search (VNS) is a metaheuristic for solving combinatorial and global optimization problems whose basic idea is a systematic change of neighborhood, both within a descent phase to find a local optimum and in a perturbation phase to get out of the corresponding valley. In this chapter we present the basic schemes of VNS and some of its extensions. We then describe recent developments, namely formulation space search and variable formulation search. We then present some families of applications in which VNS has proven to be very successful: (1) exact solution of large-scale location problems by primal-dual VNS; (2) generation of solutions to large mixed-integer linear programs by hybridization of VNS and local branching; (3) generation of solutions to very large mixed-integer programs using VNS decomposition and exact solvers; (4) generation of good feasible solutions to continuous nonlinear programs; (5) adaptation of VNS for solving automatic programming problems from the artificial intelligence field; and (6) exploration of graph theory to find conjectures, refutations, and proofs or ideas of proofs.
Pierre Hansen, Nenad Mladenović, Jack Brimberg, José A. Moreno Pérez
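The basic VNS scheme described above (shaking in increasingly distant neighborhoods, a descent phase, and the move-or-not decision) can be sketched in a few lines of Python; the toy objective and neighborhood sizes below are illustrative assumptions.

```python
import random

random.seed(2)
N = 30
w = [random.uniform(-1, 1) for _ in range(N)]

def f(x):
    # Toy separable objective (illustrative): maximize sum of selected weights.
    return sum(wi for wi, xi in zip(w, x) if xi)

def local_descent(x):
    # Iterated first-improvement descent over the 1-flip neighborhood.
    improved = True
    while improved:
        improved = False
        for i in range(N):
            y = x[:]; y[i] ^= 1
            if f(y) > f(x):
                x, improved = y, True
    return x

def shake(x, k):
    # Draw a random point in the k-th neighborhood: flip k distinct bits.
    y = x[:]
    for i in random.sample(range(N), k):
        y[i] ^= 1
    return y

def basic_vns(k_max=4, iterations=20):
    x = local_descent([0] * N)
    for _ in range(iterations):
        k = 1
        while k <= k_max:            # systematic change of neighborhood
            y = local_descent(shake(x, k))
            if f(y) > f(x):
                x, k = y, 1          # recenter the search, restart at k = 1
            else:
                k += 1               # try a larger perturbation
    return x, f(x)

print(basic_vns()[1])
```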
Chapter 4. Large Neighborhood Search
Abstract
In the last 15 years, heuristics based on large neighborhood search (LNS) and the variant adaptive large neighborhood search (ALNS) have become some of the most successful paradigms for solving various transportation and scheduling problems. Large neighborhood search methods explore a complex neighborhood through the use of heuristics. Using large neighborhoods makes it possible to find better candidate solutions in each iteration and hence follow a more promising search path. Starting from the general framework of large neighborhood search, we study adaptive large neighborhood search in depth, discussing design ideas and properties of the framework. Applications of large neighborhood search methods to routing and scheduling are discussed. We end the chapter by presenting the related framework of very large-scale neighborhood search (VLSN) and discussing parallels to LNS, before drawing some conclusions about algorithms exploiting large neighborhoods.
David Pisinger, Stefan Ropke
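A minimal sketch of the destroy-and-repair iteration at the heart of LNS, here with random removal and greedy insertion on a toy traveling salesman instance. The operators and parameters are illustrative assumptions; ALNS would additionally maintain adaptive weights over several such operator pairs.

```python
import math, random

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(30)]  # toy TSP instance

def dist(a, b):
    return math.dist(pts[a], pts[b])

def length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

def destroy(tour, q):
    # Remove q randomly chosen cities from the tour (random removal operator).
    removed = random.sample(tour, q)
    return [c for c in tour if c not in removed], removed

def repair(partial, removed):
    # Reinsert each city at its cheapest position (greedy insertion operator).
    for c in removed:
        best_i, best_delta = 0, float("inf")
        for i in range(len(partial)):
            a, b = partial[i], partial[(i + 1) % len(partial)]
            delta = dist(a, c) + dist(c, b) - dist(a, b)
            if delta < best_delta:
                best_i, best_delta = i + 1, delta
        partial.insert(best_i, c)
    return partial

def lns(iterations=300, q=6):
    tour = list(range(len(pts)))
    best = tour[:]
    for _ in range(iterations):
        cand = repair(*destroy(tour[:], q))
        if length(cand) < length(tour):   # accept only improvements (simplest rule)
            tour = cand
        if length(tour) < length(best):
            best = tour[:]
    return best, length(best)

print(round(lns()[1], 3))
```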
Chapter 5. Iterated Local Search: Framework and Applications
Abstract
The key idea underlying iterated local search is to focus the search not on the full space of all candidate solutions but on the solutions that are returned by some underlying algorithm, typically a local search heuristic. The resulting search behavior can be characterized as iteratively building a chain of solutions of this embedded algorithm. The result is also a conceptually simple metaheuristic that nevertheless has led to state-of-the-art algorithms for many computationally hard problems. In fact, very good performance is often already obtained by rather straightforward implementations of the metaheuristic. In addition, the modular architecture of iterated local search makes it very suitable for an algorithm engineering approach where, progressively, the algorithm’s performance can be further optimized. Our purpose here is to give an accessible description of the underlying principles of iterated local search and a discussion of the main aspects that need to be taken into account for a successful application of it. In addition, we review the most important applications of this method and discuss its relationship with other metaheuristics.
Helena Ramalhinho Lourenço, Olivier C. Martin, Thomas Stützle
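The chain-building idea above reduces to four components: an embedded local search, a perturbation, an acceptance criterion, and a stopping rule. The following Python sketch wires them together on a toy bit-string objective; all specifics (objective, perturbation strength, acceptance rule) are illustrative assumptions.

```python
import random

random.seed(4)
N = 40

def f(x):
    # Toy rugged objective (illustrative): sum of parities of bit triples.
    return sum((x[i] + x[(i + 1) % N] + x[(i + 2) % N]) % 2 for i in range(N))

def local_search(x):
    # First-improvement 1-flip descent to a local optimum.
    improved = True
    while improved:
        improved = False
        for i in range(N):
            y = x[:]; y[i] ^= 1
            if f(y) > f(x):
                x, improved = y, True
                break
    return x

def perturb(x, strength=4):
    # Kick the incumbent: flip a few random bits to escape the current basin.
    y = x[:]
    for i in random.sample(range(N), strength):
        y[i] ^= 1
    return y

def ils(iterations=200):
    x = local_search([random.randint(0, 1) for _ in range(N)])
    best = x
    for _ in range(iterations):
        y = local_search(perturb(x))   # next link in the chain of local optima
        if f(y) >= f(x):               # "better or equal" acceptance criterion
            x = y
        if f(x) > f(best):
            best = x
    return best, f(best)

print(ils()[1])
```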
Chapter 6. Greedy Randomized Adaptive Search Procedures: Advances and Extensions
Abstract
A greedy randomized adaptive search procedure (GRASP) is a multi-start metaheuristic for combinatorial optimization problems in which each iteration consists basically of two phases: construction and local search. The construction phase builds a feasible solution, whose neighborhood is investigated until a local minimum is found during the local search phase. The best overall solution is kept as the result. In this chapter, we first describe the basic components of GRASP. Successful implementation techniques are discussed and illustrated by numerical results obtained for different applications. Enhanced or alternative solution construction mechanisms and techniques to speed up the search are also described: alternative randomized greedy construction schemes, Reactive GRASP, cost perturbations, bias functions, memory and learning, Lagrangean constructive heuristics and Lagrangean GRASP, local search on partially constructed solutions, hashing, and filtering. We also discuss implementation strategies for memory-based intensification and post-optimization techniques using path-relinking. Restart strategies to speed up the search, hybridizations with other metaheuristics, and applications are also reviewed.
Mauricio G. C. Resende, Celso C. Ribeiro
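A minimal sketch of the two-phase GRASP iteration described above: greedy randomized construction via a restricted candidate list (RCL), followed by local search, repeated from multiple starts. The toy knapsack instance and the value of the RCL parameter alpha are illustrative assumptions.

```python
import random

random.seed(5)
n = 25
values  = [random.randint(5, 30) for _ in range(n)]
weights = [random.randint(1, 10) for _ in range(n)]
CAPACITY = 40

def construct(alpha=0.3):
    # Greedy randomized construction: repeatedly pick a random element from the
    # RCL of items with near-best value/weight ratio.
    sol, cap = [], CAPACITY
    cand = [i for i in range(n) if weights[i] <= cap]
    while cand:
        ratios = {i: values[i] / weights[i] for i in cand}
        hi, lo = max(ratios.values()), min(ratios.values())
        rcl = [i for i in cand if ratios[i] >= hi - alpha * (hi - lo)]
        pick = random.choice(rcl)
        sol.append(pick)
        cap -= weights[pick]
        cand = [i for i in cand if i != pick and weights[i] <= cap]
    return sol

def local_search(sol):
    # Simple swap moves: replace a chosen item by a better one that still fits.
    improved = True
    while improved:
        improved = False
        cap = CAPACITY - sum(weights[i] for i in sol)
        for i in sol:
            for j in range(n):
                if j not in sol and weights[j] <= cap + weights[i] \
                        and values[j] > values[i]:
                    sol[sol.index(i)] = j
                    improved = True
                    break
            if improved:
                break
    return sol

def grasp(starts=30):
    best, best_val = [], 0
    for _ in range(starts):          # independent multi-start iterations
        sol = local_search(construct())
        val = sum(values[i] for i in sol)
        if val > best_val:
            best, best_val = sol, val
    return best, best_val

print(grasp()[1])
```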
Chapter 7. Intelligent Multi-Start Methods
Abstract
Heuristic search procedures aimed at finding globally optimal solutions to hard combinatorial optimization problems usually require some type of diversification to overcome local optimality. One way to achieve diversification is to re-start the procedure from a new solution once a region has been explored, which constitutes a multi-start procedure. In this chapter we describe the best-known multi-start methods for solving optimization problems. We also describe their connections with other metaheuristic methodologies. We propose classifying these methods in terms of their use of randomization, memory, and degree of rebuild. We also present a computational comparison of these methods on the Maximum Diversity Problem to illustrate the efficiency of the multi-start methodology in terms of solution quality and diversification power.
Rafael Martí, Ricardo Aceves, Maria Teresa León, Jose M. Moreno-Vega, Abraham Duarte
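As one concrete point in the classification sketched above (randomization combined with memory), the following Python sketch biases each restart away from variable settings that earlier local optima used frequently. The toy objective and the biasing rule are illustrative assumptions, not a method from the chapter.

```python
import random

random.seed(6)
N = 30
w = [random.uniform(-1.0, 2.0) for _ in range(N)]

def f(x):
    # Toy objective (illustrative): total weight of the selected variables.
    return sum(wi for wi, xi in zip(w, x) if xi)

def descent(x):
    # 1-flip descent to a local optimum.
    improved = True
    while improved:
        improved = False
        for i in range(N):
            y = x[:]; y[i] ^= 1
            if f(y) > f(x):
                x, improved = y, True
    return x

def multi_start(starts=20):
    freq = [0] * N      # memory: how often each bit was 1 at a local optimum
    best, best_f = None, float("-inf")
    for s in range(starts):
        # Memory-based restart: bias new starting points away from settings
        # that previous local optima used frequently (diversification).
        x = [1 if random.random() > freq[i] / (s + 1) else 0 for i in range(N)]
        x = descent(x)
        for i in range(N):
            freq[i] += x[i]
        if f(x) > best_f:
            best, best_f = x[:], f(x)
    return best, best_f

print(round(multi_start()[1], 3))
```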
Chapter 8. Next Generation Genetic Algorithms: A User’s Guide and Tutorial
Abstract
Genetic algorithms are different from most other metaheuristics because they exploit three key ideas: (1) the use of a population of solutions to guide search, (2) the use of crossover operators that recombine two or more solutions to generate new and potentially better solutions, and (3) the active management of diversity to sustain exploration. New ideas that are also introduced in this chapter include (1) the use of deterministic recombination operators that are capable of tunneling between local optima, and (2) the use of deterministic constant time move operators.
Darrell Whitley
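For orientation, here is a sketch of a classical simple genetic algorithm (tournament selection, uniform crossover, bit-flip mutation, elitism) on the standard OneMax toy benchmark. Note that the deterministic tunneling recombination and constant-time move operators introduced in the chapter go beyond this baseline; the sketch only illustrates the population-plus-crossover template.

```python
import random

random.seed(7)
N, POP = 40, 30

def fitness(x):
    return sum(x)          # OneMax: a standard toy benchmark (illustrative)

def tournament(pop):
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Uniform crossover: each gene comes from either parent with equal probability.
    return [random.choice(g) for g in zip(p1, p2)]

def mutate(x, rate=1.0 / N):
    return [g ^ 1 if random.random() < rate else g for g in x]

def genetic_algorithm(generations=100):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(POP)]
    for _ in range(generations):
        elite = max(pop, key=fitness)          # keep the best (elitism)
        pop = [elite] + [mutate(crossover(tournament(pop), tournament(pop)))
                         for _ in range(POP - 1)]
    return max(pop, key=fitness)

print(fitness(genetic_algorithm()))
```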
Chapter 9. An Accelerated Introduction to Memetic Algorithms
Abstract
Memetic algorithms (MAs) are optimization techniques based on the orchestrated interplay between global and local search components and have the exploitation of specific problem knowledge as one of their guiding principles. In its most classical form, a MA is typically composed of an underlying population-based engine onto which a local search component is integrated. These aspects are described in this chapter in some detail, paying particular attention to design and integration issues. After this description of the basic architecture of MAs, we move to different algorithmic extensions that give rise to more sophisticated memetic approaches. After providing a meta-review of the numerous practical applications of MAs, we close this chapter with an overview of current perspectives of memetic algorithms.
Pablo Moscato, Carlos Cotta
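A minimal sketch of the classical MA architecture described above: a population-based engine (here, steady-state with one-point crossover) onto which a local search component is integrated, applied to every offspring. The toy objective and replacement rule are illustrative assumptions.

```python
import random

random.seed(8)
N, POP = 30, 12

def fitness(x):
    # Toy objective (illustrative): pairs of equal adjacent bits plus ones count.
    return sum(1 for i in range(N - 1) if x[i] == x[i + 1]) + sum(x)

def local_search(x):
    # The "memetic" ingredient: refine each offspring by 1-flip hill climbing.
    improved = True
    while improved:
        improved = False
        for i in range(N):
            y = x[:]; y[i] ^= 1
            if fitness(y) > fitness(x):
                x, improved = y, True
    return x

def crossover(p1, p2):
    cut = random.randrange(1, N)
    return p1[:cut] + p2[cut:]       # one-point crossover

def memetic(generations=40):
    pop = [local_search([random.randint(0, 1) for _ in range(N)])
           for _ in range(POP)]
    for _ in range(generations):
        p1, p2 = random.sample(pop, 2)
        child = local_search(crossover(p1, p2))   # global engine + local refinement
        worst = min(pop, key=fitness)
        if fitness(child) > fitness(worst):
            pop[pop.index(worst)] = child         # steady-state replacement
    return max(pop, key=fitness)

print(fitness(memetic()))
```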
Chapter 10. Ant Colony Optimization: Overview and Recent Advances
Abstract
Ant Colony Optimization (ACO) is a metaheuristic that is inspired by the pheromone trail laying and following behavior of some ant species. Artificial ants in ACO are stochastic solution construction procedures that build candidate solutions for the problem instance under consideration by exploiting (artificial) pheromone information that is adapted based on the ants' search experience, as well as possibly available heuristic information. Since the proposal of Ant System, the first ACO algorithm, many significant research results have been obtained. These contributions have focused on the development of high-performing algorithmic variants, the development of a generic algorithmic framework for ACO algorithms, successful applications of ACO algorithms to a wide range of computationally hard problems, and the theoretical understanding of important properties of ACO algorithms. This chapter reviews these developments and gives an overview of recent research trends in ACO.
Marco Dorigo, Thomas Stützle
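A compact sketch of Ant System, the first ACO algorithm mentioned above, on a toy traveling salesman instance: ants construct tours probabilistically from pheromone and heuristic (inverse-distance) information, and pheromone is evaporated and then deposited in proportion to tour quality. Instance data and parameter values are illustrative assumptions.

```python
import math, random

random.seed(9)
n = 12
pts = [(random.random(), random.random()) for _ in range(n)]
d = [[math.dist(pts[i], pts[j]) or 1e-9 for j in range(n)] for i in range(n)]

def tour_length(t):
    return sum(d[t[i]][t[(i + 1) % n]] for i in range(n))

def ant_system(ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, Q=1.0):
    tau = [[1.0] * n for _ in range(n)]          # pheromone trails
    best, best_len = None, float("inf")
    for _ in range(iters):
        tours = []
        for _ in range(ants):
            tour = [random.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # Probabilistic construction: pheromone^alpha * heuristic^beta.
                wts = [tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta for j in cand]
                tour.append(random.choices(cand, weights=wts)[0])
            tours.append(tour)
        for i in range(n):                       # evaporation
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for t in tours:                          # quality-proportional deposit
            L = tour_length(t)
            if L < best_len:
                best, best_len = t, L
            for k in range(n):
                i, j = t[k], t[(k + 1) % n]
                tau[i][j] += Q / L
                tau[j][i] += Q / L
    return best, best_len

print(round(ant_system()[1], 3))
```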
Chapter 11. Swarm Intelligence
Abstract
Swarm Intelligence (SI) is an Artificial Intelligence (AI) discipline that studies the collective behaviours of artificial and natural systems, such as those of insects or animals. SI is seen as a new concept in AI and is becoming increasingly accepted in the literature. SI techniques are typically inspired by natural phenomena, and they have exhibited remarkable capabilities in solving problems that are often perceived as challenging for conventional computational techniques. Although an SI system lacks centralized control, at the swarm (or population) level it reveals remarkably complex and self-organizing behaviours, often as the result of local interactions among individuals in the swarm, as well as between individuals and the environment, based on very simple interaction rules.
Xiaodong Li, Maurice Clerc
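The abstract stays at the conceptual level; to show how very simple local interaction rules can produce coordinated swarm-level behaviour, here is a sketch of particle swarm optimization, one canonical SI technique, chosen as an illustrative assumption rather than as the chapter's specific subject. Each particle follows only its own memory and the swarm's best-known position, yet the swarm converges on good regions.

```python
import random

random.seed(10)
DIM, SWARM = 5, 20

def f(x):
    return sum(v * v for v in x)     # sphere function (illustrative), minimize

def pso(iters=200, w=0.72, c1=1.49, c2=1.49):
    pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
    vel = [[0.0] * DIM for _ in range(SWARM)]
    pbest = [p[:] for p in pos]      # each particle's personal best
    gbest = min(pbest, key=f)        # swarm-level best
    for _ in range(iters):
        for i in range(SWARM):
            for k in range(DIM):
                r1, r2 = random.random(), random.random()
                # Local rule: inertia + pull toward personal and global bests.
                vel[i][k] = (w * vel[i][k]
                             + c1 * r1 * (pbest[i][k] - pos[i][k])
                             + c2 * r2 * (gbest[k] - pos[i][k]))
                pos[i][k] += vel[i][k]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=f)    # global information emerges from local bests
    return gbest, f(gbest)

print(round(pso()[1], 6))
```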
Chapter 12. Metaheuristic Hybrids
Abstract
Over the last decades, so-called hybrid optimization approaches have become increasingly popular for addressing hard optimization problems. In fact, when looking at leading applications of metaheuristics to complex real-world scenarios, many if not most of them do not purely adhere to one specific classical metaheuristic model but rather combine different algorithmic techniques. Concepts from different metaheuristics are often hybridized with each other, but they are also frequently combined with other optimization techniques such as tree search, dynamic programming, and methods from the mathematical programming, constraint programming, and SAT-solving fields. Such combinations aim at exploiting the particular advantages of the individual components, and well-designed hybrids often perform substantially better than their "pure" counterparts. Many very different ways of hybridizing metaheuristics are described in the literature, and unfortunately it is usually difficult to decide which approach is most appropriate in a particular situation. This chapter gives an overview of this topic, starting with a classification of metaheuristic hybrids and then discussing several prominent design templates, which are illustrated by concrete examples.
Günther R. Raidl, Jakob Puchinger, Christian Blum
Chapter 13. Parallel Metaheuristics and Cooperative Search
Abstract
The chapter presents a general view of parallel metaheuristics for optimization. It recalls the main concepts and strategies in designing parallel metaheuristics and identifies trends and promising research directions. The focus is on cooperation-based strategies, which display remarkable performance, in particular strategies based on asynchronous exchanges and on the creation of new information out of exchanged data to enhance the global guidance of the search.
Teodor Crainic
Chapter 14. A Classification of Hyper-Heuristic Approaches: Revisited
Abstract
Hyper-heuristics comprise a set of approaches that aim to automate the development of computational search methodologies. This chapter overviews previous categorisations of hyper-heuristics and provides a unified classification and definition. We distinguish between two main hyper-heuristic categories: heuristic selection and heuristic generation. Some representative examples of each category are discussed in detail and recent research trends are highlighted.
Edmund K. Burke, Matthew R. Hyde, Graham Kendall, Gabriela Ochoa, Ender Özcan, John R. Woodward
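A minimal sketch of the heuristic-selection category described above: a hyper-heuristic that chooses among three low-level heuristics using learned scores and a simple move-acceptance rule. The low-level heuristics, the reward scheme, and the toy objective are all illustrative assumptions.

```python
import random

random.seed(11)
N = 40
w = [random.uniform(0, 1) for _ in range(N)]

def f(x):
    # Toy objective (illustrative).
    return sum(wi for wi, xi in zip(w, x) if xi) - 0.5 * abs(sum(x) - N // 2)

# Low-level heuristics: simple moves the hyper-heuristic chooses between.
def flip_one(x):
    y = x[:]; y[random.randrange(N)] ^= 1; return y

def flip_two(x):
    y = x[:]
    for i in random.sample(range(N), 2): y[i] ^= 1
    return y

def reverse_segment(x):
    y = x[:]; i = random.randrange(N - 5)
    y[i:i + 5] = reversed(y[i:i + 5]); return y

def selection_hyper_heuristic(iters=500):
    heuristics = [flip_one, flip_two, reverse_segment]
    scores = [1.0] * len(heuristics)             # learned utility of each heuristic
    x = [random.randint(0, 1) for _ in range(N)]
    for _ in range(iters):
        # Heuristic selection: choose a low-level heuristic by its score.
        h = random.choices(range(len(heuristics)), weights=scores)[0]
        y = heuristics[h](x)
        if f(y) >= f(x):                         # simple move-acceptance rule
            x = y
            scores[h] += 1.0                     # reward heuristics that help
        else:
            scores[h] = max(0.1, scores[h] * 0.95)
    return x, f(x)

print(round(selection_hyper_heuristic()[1], 3))
```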
Chapter 15. Reactive Search Optimization: Learning While Optimizing
Abstract
Reactive Search Optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. Methodologies of interest include prohibition-based methods, reactions on the neighborhood, the annealing schedule or the objective function, and reactions in population-based methods. This chapter describes different strategies that have been introduced in the literature as well as several applications to classic combinatorial tasks, continuous optimization and real-world problems.
Roberto Battiti, Mauro Brunato, Andrea Mariello
Chapter 16. Stochastic Search in Metaheuristics
Abstract
Stochastic search is a key mechanism underlying many metaheuristics. The chapter starts with the presentation of a general framework algorithm in the form of a stochastic search process that contains a large variety of familiar metaheuristic techniques as special cases. Based on this unified view, questions concerning convergence and runtime are discussed at the level of a theoretical analysis. Concrete examples from diverse metaheuristic fields are given. In connection with runtime results, important topics such as instance difficulty, phase transitions, parameter choice, No-Free-Lunch theorems or fitness landscape analysis are addressed. Furthermore, a short sketch of the theory of black-box optimization is given, and generalizations of results to stochastic search under noise and to robust optimization are outlined.
Walter J. Gutjahr, Roberto Montemanni
Chapter 17. Automated Design of Metaheuristic Algorithms
Abstract
The design and development of metaheuristic algorithms can be time-consuming and difficult for a number of reasons including the complexity of the problems being tackled, the large number of degrees of freedom when designing an algorithm and setting its numerical parameters, and the difficulties of algorithm analysis due to heuristic biases and stochasticity. Traditionally, this design and development has been done through a manual, labor-intensive approach guided mainly by the expertise and intuition of the algorithm designer. In recent years, a number of automatic algorithm configuration methods have been developed that are able to effectively search large and diverse parameter spaces. They have been shown to be very successful in identifying high-performing algorithm designs and parameter settings. In this chapter, we review the recent advances in addressing automatic metaheuristic algorithm design and configuration. We describe the main existing automatic algorithm configuration techniques and discuss some of the main uses of such techniques, ranging from the mere optimization of the performance of already developed metaheuristic algorithms to their pivotal role in modifying the way metaheuristic algorithms will be designed and developed in the future.
Thomas Stützle, Manuel López-Ibáñez
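To fix ideas, here is a deliberately naive sketch of automatic algorithm configuration: random search over a two-parameter configuration space of a hypothetical target algorithm, keeping the configuration with the best mean cost over a set of training instances. The target function and its parameters are invented stand-ins, and real configuration methods covered in the chapter (for example, racing-based and model-based approaches) allocate the evaluation budget far more intelligently.

```python
import random

random.seed(12)

def target_algorithm(instance_seed, temperature, cooling):
    # Stand-in for a full metaheuristic run (hypothetical): returns a noisy
    # cost that depends on two numerical parameters. In practice this would
    # execute the actual algorithm on a training instance.
    rng = random.Random(instance_seed)
    return (temperature - 4.0) ** 2 + 10 * (cooling - 0.9) ** 2 + rng.gauss(0, 0.3)

def configure(budget=200, instances=tuple(range(10))):
    # Naive random-search configurator: sample parameter settings uniformly
    # and keep the one with the best mean cost over the training instances.
    best_cfg, best_cost = None, float("inf")
    for _ in range(budget):
        cfg = {"temperature": random.uniform(0.1, 10.0),
               "cooling": random.uniform(0.5, 0.999)}
        cost = sum(target_algorithm(i, **cfg) for i in instances) / len(instances)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost

cfg, cost = configure()
print({k: round(v, 3) for k, v in cfg.items()}, round(cost, 3))
```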
Chapter 18. Computational Comparison of Metaheuristics
Abstract
Metaheuristics are truly diverse in nature—under the overarching theme of performing operations to escape local optima, algorithms as different as ant colony optimization, tabu search, harmony search, and genetic algorithms have emerged. Due to the unique functionality of each type of metaheuristic, the computational comparison of metaheuristics is in many ways more difficult than other algorithmic comparisons. In this chapter, we discuss techniques for the meaningful computational comparison of metaheuristics. We discuss how to create and classify instances in a new testbed and how to make sure other researchers have access to these test instances for future metaheuristic comparisons. In addition, we discuss the disadvantages of large parameter sets and how to measure complicated parameter interactions in a metaheuristic’s parameter space. Finally, we explain how to compare metaheuristics in terms of both solution quality and runtime and how to compare parallel metaheuristics.
John Silberholz, Bruce Golden, Swati Gupta, Xingyin Wang
Chapter 19. Correction to: Swarm Intelligence
Xiaodong Li, Maurice Clerc
Metadata
Title
Handbook of Metaheuristics
Edited by
Michel Gendreau
Jean-Yves Potvin
Copyright Year
2019
Electronic ISBN
978-3-319-91086-4
Print ISBN
978-3-319-91085-7
DOI
https://doi.org/10.1007/978-3-319-91086-4
