
About this Book

Metaheuristics, and evolutionary algorithms in particular, are known to provide efficient, adaptable solutions for many real-world problems, but the often informal way in which they are defined and applied has led to misconceptions, and even successful applications are sometimes the outcome of trial and error. Ideally, theoretical studies should explain when and why metaheuristics work, but the challenge is huge: mathematical analysis requires significant effort even for simple scenarios and real-life problems are usually quite complex.

In this book the editors establish a bridge between theory and practice, presenting principled methods that incorporate problem knowledge in evolutionary algorithms and other metaheuristics. The book consists of 11 chapters dealing with the following topics: theoretical results that show what is not possible; an assessment of unsuccessful lines of empirical research; methods for rigorously defining the appropriate scope of problems while acknowledging the compromise between the class of problems to which a search algorithm is applied and its overall expected performance; the top-down principled design of search algorithms, in particular showing that it is possible to design algorithms that are provably good for some rigorously defined classes; and, finally, principled practice, that is, reasoned and systematic approaches to setting up experiments, adapting metaheuristics to specific problems, and setting parameters.

With contributions by some of the leading researchers in this domain, this book will be of significant value to scientists, practitioners, and graduate students in the areas of evolutionary computing, metaheuristics, and computational intelligence.

Table of Contents

Frontmatter

Chapter 1. No Free Lunch Theorems: Limitations and Perspectives of Metaheuristics

The No Free Lunch (NFL) theorems for search and optimization are reviewed and their implications for the design of metaheuristics are discussed. The theorems state that any two search or optimization algorithms are equivalent when their performance is averaged across all possible problems, and even over subsets of problems fulfilling certain constraints. The NFL results show that if no assumption is made regarding the relation between visited and unseen search points, efficient search and optimization is impossible. There is no well-performing universal metaheuristic; rather, heuristics must be tailored to the problem class at hand using prior knowledge. In practice, it is not likely that the preconditions of the NFL theorems are fulfilled for a problem class, and thus differences between algorithms exist. Therefore, tailored algorithms can exploit the structure underlying the optimization problem. Given full knowledge about the problem class, it is, in theory, possible to construct an optimal algorithm.
Christian Igel
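
The core NFL statement can be checked by brute force on a tiny search space. The following minimal Python sketch (the domain, the value set, and the two fixed-order enumeration algorithms are our own illustrative choices, not from the chapter) averages the number of evaluations needed to find a global maximum over all possible functions and finds the two algorithms indistinguishable.

```python
# Toy check of the NFL statement: over ALL functions f: X -> Y on a tiny
# domain, two deterministic, non-repeating search strategies produce the same
# multiset of performance values (here: evaluations until a global maximum is
# found). Domain, codomain, and the two enumeration orders are illustrative.
from itertools import product

X = [0, 1, 2]          # search space
Y = [0, 1]             # possible fitness values

def run(order, f):
    """Evaluate points in a fixed order; return evaluations used to reach max(f)."""
    best = max(f[x] for x in X)
    for t, x in enumerate(order, start=1):
        if f[x] == best:
            return t
    return len(order)

# Two non-repeating "algorithms": enumerate X in different fixed orders.
alg_a = [0, 1, 2]
alg_b = [2, 0, 1]

perf_a, perf_b = [], []
for values in product(Y, repeat=len(X)):        # all |Y|**|X| functions
    f = dict(zip(X, values))
    perf_a.append(run(alg_a, f))
    perf_b.append(run(alg_b, f))

print(sorted(perf_a) == sorted(perf_b))                        # True: same histogram
print(sum(perf_a) / len(perf_a), sum(perf_b) / len(perf_b))    # equal averages
```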

Chapter 2. Convergence Rates of Evolutionary Algorithms and Parallel Evolutionary Algorithms

This chapter discusses the advantages (robustness) and drawbacks (slowness) of algorithms that search for the optimum using only comparisons between fitness values. The results are mathematical proofs, but practical implications in terms of speed-up for algorithms run on parallel machines are presented, as well as practical hints on tuning parallel optimization algorithms and on the feasibility of some specific forms of optimization. Throughout the chapter, \([[a,b]] = \{a, a+1, \ldots, b\}\).
Fabien Teytaud, Olivier Teytaud
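
As a rough illustration of the parallelism discussed above, the sketch below (a toy comparison-based (1+λ)-ES on the sphere function; all parameter values are assumptions, not recommendations from the chapter) marks the λ independent fitness evaluations of each generation, i.e., the part whose wall-clock cost shrinks on a parallel machine.

```python
# Minimal comparison-based (1+lambda)-ES on the sphere function. The lambda_
# evaluations inside one generation are mutually independent, so on a parallel
# machine one generation costs roughly one evaluation of wall-clock time.
# All parameter values below are illustrative, not tuned recommendations.
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def one_plus_lambda_es(dim=10, lambda_=8, sigma=0.3, generations=200, seed=1):
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    f_parent = sphere(parent)
    for _ in range(generations):
        # These lambda_ evaluations are the parallelizable part.
        offspring = [[xi + rng.gauss(0, sigma) for xi in parent] for _ in range(lambda_)]
        fitnesses = [sphere(o) for o in offspring]
        best = min(range(lambda_), key=lambda i: fitnesses[i])
        if fitnesses[best] <= f_parent:          # only comparisons are used
            parent, f_parent = offspring[best], fitnesses[best]
    return parent, f_parent

print(one_plus_lambda_es()[1])
```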

Chapter 3. Rugged and Elementary Landscapes

The landscape of an optimization problem combines the fitness (or cost) function f on the candidate set X with a notion of neighborhood on X, typically represented as a simple sparse graph. A landscape forms the substrate for local search heuristics, including evolutionary algorithms. Understanding such optimization techniques thus requires insight into the connection between the graph structure and properties of the fitness function. Local minima and their gradient basins form the basis for a decomposition of landscapes. The local minima are nodes of a labeled graph with edges providing information on the reachability between the minima and/or the adjacency of their basins. Barrier trees, inherent structure networks, and funnel digraphs are such decompositions producing “coarse-grained” pictures of a landscape. A particularly fruitful approach is a spectral decomposition of the fitness function into eigenvectors of the graph Laplacian, akin to a Fourier transformation of a real function into the elementary waves on its domain. Many landscapes of practical and theoretical interest, including the Traveling Salesman Problem with transpositions and reversals, are elementary: their spectral decomposition has a single non-zero coefficient. Other classes of landscapes, including k-satisfiability (k-SAT), are superpositions of the first few Laplacian eigenvectors. Furthermore, the ruggedness of a landscape, as measured by the correlation length of the fitness function, and its neutrality, the expected fraction of a candidate’s neighbors having the same fitness, can be expressed by the spectrum. Ruggedness and neutrality are found to be independently variable measures of a landscape. Beyond single instances of landscapes, models with random parameters, such as spin glasses, are amenable to this algebraic approach. This chapter provides an introduction to the structural features of discrete landscapes from both the geometric and the algebraic perspective.
Konstantin Klemm, Peter F. Stadler
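
For a concrete feel for elementarity, the following small check (assuming NumPy; the OneMax-on-hypercube instance is our own choice, not the chapter's TSP example) builds the graph Laplacian of the Hamming-distance-1 neighborhood and verifies that the mean-centered fitness is a single Laplacian eigenvector.

```python
# Check that OneMax on the n-bit hypercube (Hamming neighbors) is an elementary
# landscape: the mean-centered fitness is an eigenvector of the graph Laplacian
# L = D - A. The small instance (n = 4) is only illustrative.
import itertools
import numpy as np

n = 4
verts = list(itertools.product([0, 1], repeat=n))     # 2**n candidate solutions
index = {v: i for i, v in enumerate(verts)}

A = np.zeros((2**n, 2**n))
for v in verts:
    for j in range(n):                                # flip one bit -> neighbor
        w = list(v); w[j] ^= 1
        A[index[v], index[tuple(w)]] = 1.0
L = np.diag(A.sum(axis=1)) - A                        # graph Laplacian

f = np.array([sum(v) for v in verts], dtype=float)    # OneMax fitness
f0 = f - f.mean()                                     # remove the constant part

Lf = L @ f0
lam = (f0 @ Lf) / (f0 @ f0)                           # Rayleigh quotient
print(lam)                                            # 2.0 for this landscape
print(np.allclose(Lf, lam * f0))                      # True: single eigencomponent
```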

Chapter 4. Single-Funnel and Multi-funnel Landscapes and Subthreshold-Seeking Behavior

Algorithms for parameter optimization display subthreshold-seeking behavior when the majority of the points that the algorithm samples have an evaluation less than some target threshold. Subthreshold-seeking algorithms avoid the curse of the general and Sharpened No Free Lunch theorems in the sense that they are better than random enumeration on a specific (but general) family of functions. In order for subthreshold-seeking search to be possible, most of the solutions that are below threshold must be localized in one or more regions of the search space. Functions with search landscapes that can be characterized as single-funnel or multi-funnel landscapes have this localized property. We first analyze a simple “Subthreshold-Seeker” algorithm. Further theoretical analysis details conditions that would allow a Hamming neighborhood local search algorithm using a Gray or binary representation to display subthreshold-seeking behavior. A very simple modification to local search is proposed that improves its subthreshold-seeking behavior.
Darrell Whitley, Jonathan Rowe
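
A toy sketch of the behaviour described above (not the chapter's exact Subthreshold-Seeker; the objective, threshold, and sampling radius are assumptions): once one below-threshold point is found, sampling concentrates around it, so the fraction of below-threshold samples grows whenever such points are localized, as in a single-funnel landscape.

```python
# Toy illustration of subthreshold-seeking behaviour: sample globally until a
# point below the target threshold is found, then sample locally around the
# best subthreshold point seen so far. A simplified sketch with assumed
# parameters, not the chapter's algorithm.
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def subthreshold_seeker(f, dim=2, threshold=1.0, radius=0.2, budget=2000, seed=0):
    rng = random.Random(seed)
    below = 0
    anchor, best_val = None, float("inf")
    for _ in range(budget):
        if anchor is None:
            x = [rng.uniform(-5, 5) for _ in range(dim)]            # global sampling
        else:
            x = [a + rng.uniform(-radius, radius) for a in anchor]  # local sampling
        fx = f(x)
        if fx < threshold:
            below += 1
            if fx < best_val:
                anchor, best_val = x, fx
    return below / budget

print(subthreshold_seeker(sphere))   # fraction of evaluated points below the threshold
```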

Chapter 5. Black-Box Complexity for Bounding the Performance of Randomized Search Heuristics

In black-box optimization a search algorithm looks for a global optimum of an unknown objective function that can only be explored by sampling the function values of some points in the search space. Black-box complexity measures the number of such function evaluations that any search algorithm needs to make in the worst case to locate a global optimum of any objective function from some class of functions. The black-box complexity of a function class thus yields a lower bound on the performance of every algorithm. This chapter gives a precise and accessible introduction to the notion of black-box complexity, explains important properties, and discusses several concrete examples. Starting with simple examples and proceeding step by step to more complex ones, it explains how such results can be derived.
Thomas Jansen
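
The black-box model itself is easy to make concrete. In the sketch below (class and function names are illustrative), the algorithm sees the objective only through a query-counting oracle, and the worst case over a small "needle in a haystack" class is the quantity a black-box complexity bound would constrain.

```python
# The black-box model: the algorithm learns about the objective only through
# queries. The oracle counts evaluations; black-box complexity concerns the
# worst case over the whole function class. Names below are illustrative.
import itertools, random

class BlackBox:
    def __init__(self, f):
        self._f, self.queries = f, 0
    def __call__(self, x):
        self.queries += 1
        return self._f(x)

def needle(target):
    return lambda x: 1 if x == target else 0      # flat except at the needle

def random_search(oracle, n, rng):
    while True:
        x = tuple(rng.randint(0, 1) for _ in range(n))
        if oracle(x) == 1:
            return oracle.queries

n, rng = 8, random.Random(3)
worst = 0
for target in itertools.product([0, 1], repeat=n):  # worst case over the class
    worst = max(worst, random_search(BlackBox(needle(target)), n, rng))
print(worst)   # for needle functions, expected queries grow on the order of 2**n
```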

Chapter 6. Designing an Optimal Search Algorithm with Respect to Prior Information

There are many optimization algorithms, most of them with many parameters. When you know which family of problems you face, you would like to design the optimization algorithm that is best for this family (e.g., best on average with respect to a given probability distribution over this family of optimization problems). This chapter is devoted to this framework: we assume that we know a probability distribution from which the fitness function is drawn, and we look for the optimal optimization algorithm. This can be based (i) on experiments, i.e., tuning the parameters on a set of problems, (ii) on mathematical approaches that automatically build an optimization algorithm from a probability distribution on fitness functions (reinforcement learning approaches), or (iii) on simplified versions of the latter with more reasonable computational cost (Gaussian processes for optimization).
Olivier Teytaud, Emmanuel Vazquez
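
A minimal sketch of approach (iii), assuming NumPy/SciPy: a Gaussian process surrogate with an RBF kernel is fitted to the evaluated points, and the next query maximizes expected improvement over a grid. The kernel length scale, noise level, and 1-D test function are assumptions for illustration only.

```python
# Minimal Gaussian-process-based optimisation sketch: fit a GP to the points
# evaluated so far, then pick the next query by maximising expected improvement
# over a grid. Hyperparameters and the test function are illustrative choices.
import numpy as np
from scipy.stats import norm

def rbf(a, b, length=0.3):
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6):
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.maximum(np.diag(cov), 1e-12))

def expected_improvement(mu, sd, best):
    z = (best - mu) / sd                       # minimisation convention
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

f = lambda x: np.sin(3 * x) + 0.5 * x          # stand-in "unknown" objective
grid = np.linspace(0, 2, 200)
x_train = np.array([0.1, 1.0, 1.9])
y_train = f(x_train)

for _ in range(10):                            # sequential design loop
    mu, sd = gp_posterior(x_train, y_train, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y_train.min()))]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, f(x_next))

print(x_train[np.argmin(y_train)], y_train.min())
```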

Chapter 7. The Bayesian Search Game

The aim of this chapter is to draw links between (1) No Free Lunch (NFL) theorems which, interpreted inversely, lay the foundation of how to design search heuristics that exploit prior knowledge about the function, (2) partially observable Markov decision processes (POMDP) and their approach to the problem of sequentially and optimally choosing search points, and (3) the use of Gaussian processes as a representation of belief, i.e., knowledge about the problem. On the one hand, this joint discussion of NFL, POMDPs and Gaussian processes will give a broader view on the problem of search heuristics. On the other hand this will naturally introduce us to efficient global optimization algorithms that are well known in operations research and geology (Gutmann, J Glob Optim 19:201–227, 2001; Jones et al., J Glob Optim 13:455–492, 1998; Jones, J Glob Optim 21:345–383, 2001) and which, in our view, naturally arise from a discussion of NFL and POMDPs.
Marc Toussaint
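
The belief-state view can be illustrated without Gaussian processes at all. The toy sketch below (the finite hypothesis class and the scoring rule are our own, purely illustrative choices) keeps a posterior over which function from a small class is being optimized, updates it after every observation, and queries the point most likely to be a global optimum under the current belief.

```python
# Toy belief-state search: the "state" of the search is a posterior over which
# function from a small finite class we are facing. Each observation rules out
# inconsistent hypotheses; the next query maximises the posterior probability
# of hitting a global optimum. The hypothesis class is illustrative.
import random

X = list(range(6))
functions = [tuple((x * h) % 5 for x in X) for h in range(1, 5)]   # hypothesis class

def argmax_set(f):
    m = max(f)
    return {x for x in X if f[x] == m}

def belief_search(hidden, seed=0):
    rng = random.Random(seed)
    belief = {f: 1.0 / len(functions) for f in functions}          # uniform prior
    observed = {}
    for step in range(len(X)):
        # Score each unseen x by the posterior mass of hypotheses whose optimum is x.
        score = {x: sum(p for f, p in belief.items() if x in argmax_set(f))
                 for x in X if x not in observed}
        x = max(score, key=lambda q: (score[q], rng.random()))
        y = hidden[x]
        observed[x] = y
        if y == max(hidden):
            return step + 1
        # Posterior update: keep only hypotheses consistent with all observations.
        belief = {f: p for f, p in belief.items() if f[x] == y}
        total = sum(belief.values()) or 1.0
        belief = {f: p / total for f, p in belief.items()}
    return len(X)

print(belief_search(functions[2]))   # queries needed to find the optimum
```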

Chapter 8. Principled Design of Continuous Stochastic Search: From Theory to Practice

We derive a stochastic search procedure for parameter optimization from two first principles: (1) imposing the least prior assumptions, namely by maximum entropy sampling, unbiasedness, and invariance; (2) exploiting all available information under the constraints imposed by (1). We additionally require that two of the most basic functions can be solved reasonably fast. Given these principles, two principal heuristics are used: reinforcing good solutions and good steps (increasing their likelihood), and rendering successive steps orthogonal. The resulting search algorithm is the covariance matrix adaptation evolution strategy, CMA-ES, which coincides to a great extent with a natural gradient descent. The invariance properties of the CMA-ES are formalized, as are its maximum likelihood and stationarity properties. A small parameter study for a specific heuristic—deduced from the principles of reinforcing good steps and exploiting all information—is presented, namely for the cumulation of an evolution or search path. Experiments on two noisy functions are provided.
Nikolaus Hansen, Anne Auger
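
As a stripped-down illustration of the "reinforce good steps via a cumulated search path" principle, the sketch below implements an isotropic (μ/μ, λ)-ES with cumulative step-size adaptation; covariance matrix adaptation itself is omitted, and the learning-rate and damping settings are common defaults used here as assumptions, not the chapter's derivation.

```python
# Deliberately simplified sketch in the spirit of the chapter: an isotropic
# (mu/mu, lambda)-ES with cumulative step-size adaptation (a search path that
# reinforces consistently good steps). Covariance matrix adaptation is omitted;
# parameter settings are common defaults used as assumptions.
import numpy as np

def isotropic_es(f, x0, sigma=0.5, lam=12, iters=300, seed=2):
    rng = np.random.default_rng(seed)
    n = len(x0)
    mu = lam // 2
    cs = (mu + 2) / (n + mu + 3)                 # path learning rate (assumed)
    ds = 1 + cs                                  # damping for sigma (assumed)
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n * n))
    m, path = np.asarray(x0, float), np.zeros(n)
    for _ in range(iters):
        z = rng.standard_normal((lam, n))        # standardized mutation steps
        x = m + sigma * z
        order = np.argsort([f(xi) for xi in x])  # comparison-based ranking
        z_mean = z[order[:mu]].mean(axis=0)      # recombine the mu best steps
        m = m + sigma * z_mean
        # Cumulate the normalised selected step into the search path.
        path = (1 - cs) * path + np.sqrt(cs * (2 - cs) * mu) * z_mean
        sigma *= np.exp((cs / ds) * (np.linalg.norm(path) / chi_n - 1))
    return m, f(m)

sphere = lambda x: float(np.sum(x ** 2))
print(isotropic_es(sphere, np.full(10, 3.0))[1])
```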

Chapter 9. Parsimony Pressure Made Easy: Solving the Problem of Bloat in GP

The parsimony pressure method is perhaps the simplest and most frequently used method to control bloat in genetic programming (GP). In this chapter we first reconsider the size evolution equation for genetic programming developed in Poli and McPhee (Evol Comput 11(2):169–206, 2003) and rewrite it in a form that shows its direct relationship to Price’s theorem. We then use this new formulation to derive theoretical results that show how to practically and optimally set the parsimony coefficient dynamically during a run so as to achieve complete control over the growth of the programs in a population. Experimental results confirm the effectiveness of the method, as we are able to tightly control the average program size under a variety of conditions. These include such unusual cases as dynamically varying target sizes so that the mean program size is allowed to grow during some phases of a run, while being forced to shrink in others.
Riccardo Poli, Nicholas Freitag McPhee
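
A small numerical sketch of the dynamic coefficient described above, as we read it: at each generation c is recomputed as the covariance between program size and fitness divided by the variance of size, and selection then uses the adjusted fitness f − c·size. The toy population values below are invented for illustration.

```python
# Sketch of a dynamically computed parsimony coefficient: at each generation
# c = Cov(size, fitness) / Var(size) from the current population, and selection
# uses the adjusted fitness f - c * size, which under fitness-proportionate
# selection keeps the expected mean size constant. Toy data only.
import numpy as np

def parsimony_coefficient(sizes, fitnesses):
    sizes = np.asarray(sizes, float)
    fitnesses = np.asarray(fitnesses, float)
    var = sizes.var()
    if var == 0.0:                      # all programs have the same size
        return 0.0
    return np.cov(sizes, fitnesses, bias=True)[0, 1] / var

def adjusted_fitness(sizes, fitnesses):
    c = parsimony_coefficient(sizes, fitnesses)
    return np.asarray(fitnesses, float) - c * np.asarray(sizes, float), c

sizes = [12, 30, 7, 55, 21]             # program sizes (number of nodes)
fitnesses = [0.61, 0.72, 0.40, 0.74, 0.68]
adj, c = adjusted_fitness(sizes, fitnesses)
print(c, adj)
```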

Chapter 10. Experimental Analysis of Optimization Algorithms: Tuning and Beyond

This chapter comprises the essence of several years of tutorials the authors gave on experimental research in evolutionary computation. We highlight the renaissance of experimental techniques in other fields as well, in order to focus on the specific conditions of experimental research in computer science, or more concretely, metaheuristic optimization. The experimental setup is discussed together with the pitfalls awaiting the inexperienced (and sometimes even the experienced). We present a severity criterion as a meta-statistical concept for evaluating statistical inferences, which can be used to avoid fallacies, i.e., misconceptions resulting from incorrect reasoning in argumentation, caused for example by floor or ceiling effects. Sequential parameter optimization is discussed as a meta-statistical framework which integrates concepts such as severity. Parameter tuning is considered as a relatively new tool in method design and analysis, and it leads to the question of the adaptability of optimization algorithms. Another branch of experimentation aims at attaining more concrete problem knowledge; we may term it “exploratory landscape analysis”, comprising sampling and visualization techniques that are often applied but not seen as a methodological contribution. However, this chapter is not only a retelling of well-known facts. We also attempt to look into the future to estimate what the hot topics of methodological research will be in the coming years and what changes we may expect for the whole community.
Thomas Bartz-Beielstein, Mike Preuss
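
The following skeleton (a deliberately simple stand-in, not the sequential parameter optimization toolbox itself) illustrates the basic experimental hygiene discussed above: candidate parameter settings are compared over repeated runs with fixed seeds, and summary statistics are reported before a winner is declared.

```python
# Toy parameter-tuning loop: sample candidate parameter settings, repeat runs
# to average out randomness, and keep summary statistics before picking a
# winner. An illustrative skeleton only; the target algorithm is a stand-in.
import random, statistics

def algorithm_run(step_size, seed):
    """Stand-in target algorithm: (1+1)-style random search on a 1-D sphere."""
    rng = random.Random(seed)
    x = 5.0
    for _ in range(200):
        y = x + rng.gauss(0, step_size)
        if y * y < x * x:
            x = y
    return x * x                         # final objective value (lower is better)

def tune(candidates, repeats=20):
    results = {}
    for step_size in candidates:
        runs = [algorithm_run(step_size, seed) for seed in range(repeats)]
        results[step_size] = (statistics.mean(runs), statistics.stdev(runs))
    return results

results = tune([0.01, 0.1, 0.5, 1.0, 2.0])
for step_size, (mean, sd) in sorted(results.items(), key=lambda kv: kv[1][0]):
    print(f"step_size={step_size:<5} mean={mean:.4f}  sd={sd:.4f}")
```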

Chapter 11. Formal Search Algorithms + Problem Characterisations = Executable Search Strategies

Traditional evolutionary algorithms use a standard, predetermined representation space, often in conjunction with a similarly standard and predetermined set of genetic move operators, themselves defined in terms of the representation space. This approach, while simple, is awkward and—we contend—inappropriate for many classes of problem, especially those in which there are dependencies between problem variables (e.g., problems naturally defined over permutations). For these reasons, over time a much wider variety of representations has come into common use. This chapter presents a method for specifying algorithms with respect to abstract or formal representations, making them independent of both problem domain and representation. It also defines a procedure for generating an appropriate problem representation from an explicit characterisation of a problem domain that captures beliefs about its structure. We are then able to apply a formal search algorithm to a given problem domain by providing a suitable characterisation of it and using this to generate a formal representation of problems in the domain, resulting in a practical, executable search strategy specific to that domain. This process is illustrated by showing how identical formal algorithms can be applied to both the travelling sales-rep problem (TSP) and real parameter optimisation to yield familiar (but superficially very different) concrete search strategies.
Patrick D. Surry, Nicholas J. Radcliffe
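
The idea of writing one formal algorithm against an abstract representation and instantiating it per domain can be sketched as follows (class and method names are our own illustrative choices, not the chapter's forma-based formalism): the same hill-climber is applied to a TSP-like permutation representation and to real-parameter vectors.

```python
# Sketch of "formal algorithm + representation = executable strategy": one
# generic hill-climber written against an abstract representation, instantiated
# for permutations (TSP-like tours) and for real vectors. Illustrative names.
import random

class Representation:
    def random_solution(self, rng): raise NotImplementedError
    def neighbour(self, x, rng):    raise NotImplementedError

class PermutationRep(Representation):          # e.g., tours for the TSP
    def __init__(self, n): self.n = n
    def random_solution(self, rng):
        x = list(range(self.n)); rng.shuffle(x); return x
    def neighbour(self, x, rng):               # swap two cities
        i, j = rng.sample(range(self.n), 2)
        y = x[:]; y[i], y[j] = y[j], y[i]; return y

class RealVectorRep(Representation):           # real-parameter optimisation
    def __init__(self, dim, sigma=0.1): self.dim, self.sigma = dim, sigma
    def random_solution(self, rng):
        return [rng.uniform(-5, 5) for _ in range(self.dim)]
    def neighbour(self, x, rng):
        return [xi + rng.gauss(0, self.sigma) for xi in x]

def hill_climb(rep, f, steps=2000, seed=0):    # the representation-free algorithm
    rng = random.Random(seed)
    x = rep.random_solution(rng); fx = f(x)
    for _ in range(steps):
        y = rep.neighbour(x, rng); fy = f(y)
        if fy <= fx: x, fx = y, fy
    return x, fx

# The same formal algorithm, two concrete strategies:
cities = [(random.random(), random.random()) for _ in range(15)]
tour_len = lambda t: sum(((cities[t[i]][0] - cities[t[(i+1) % len(t)]][0]) ** 2 +
                          (cities[t[i]][1] - cities[t[(i+1) % len(t)]][1]) ** 2) ** 0.5
                         for i in range(len(t)))
print(hill_climb(PermutationRep(15), tour_len)[1])
print(hill_climb(RealVectorRep(5), lambda x: sum(xi * xi for xi in x))[1])
```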