
2001 | Book

Theoretical Aspects of Evolutionary Computing

Edited by: Dr. Leila Kallel, Dr. Bart Naudts, Alex Rogers

Publisher: Springer Berlin Heidelberg

Book series: Natural Computing Series


About this book

During the first week of September 1999, the Second EvoNet Summer School on Theoretical Aspects of Evolutionary Computing was held at the Middelheim campus of the University of Antwerp, Belgium. Originally intended as a small get-together of PhD students interested in the theory of evolutionary computing, the summer school grew to become a successful combination of a four-day workshop with over twenty researchers in the field and a two-day lecture series open to a wider audience. This book is based on the lectures and workshop contributions of this summer school. Its first part consists of tutorial papers which introduce the reader to a number of important directions in the theory of evolutionary computing. The tutorials are at graduate level and assume only a basic background in mathematics and computer science. No prior knowledge of evolutionary computing or its theory is necessary. The second part of the book consists of technical papers, selected from the workshop contributions. A number of them build on the material of the tutorials, exploring the theory to research level. Other technical papers may require a visit to the library.

Table of contents

Frontmatter

Tutorials

An Introduction to Evolutionary Computing in Design Search and Optimisation
Abstract
Evolutionary computing (EC) techniques are beginning to find a place in engineering design. In this tutorial we give a brief overview of EC and discuss the techniques and issues which are important in design search and optimisation.
A. Keane
Evolutionary Algorithms and Constraint Satisfaction: Definitions, Survey, Methodology, and Research Directions
Abstract
In this tutorial we consider the issue of constraint handling by evolutionary algorithms (EA). We start this study with a categorization of constrained problems and observe that constraint handling is not straightforward in an EA. Namely, the search operators mutation and recombination are ‘blind’ to constraints. Hence, there is no guarantee that if the parents satisfy some constraints the offspring will satisfy them as well. This suggests that the presence of constraints in a problem makes EAs intrinsically unsuited to solve this problem. This should especially hold if there are no objectives but only constraints in the original problem specification — the category of constraint satisfaction problems. A survey of related literature, however, discloses that there are quite a few successful attempts at evolutionary constraint satisfaction. Based on this survey we identify a number of common features in these approaches and arrive at the conclusion that the presence of constraints is not harmful, but rather helpful in that it provides extra information that EAs can utilize. The tutorial is concluded by considering a number of key questions on research methodology and some promising future research directions.
A. E. Eiben
The Dynamical Systems Model of the Simple Genetic Algorithm
Abstract
This tutorial describes the basic theory of the simple genetic algorithm, as developed by Michael Vose. The mathematical framework is established in which the actions of proportionate selection, mutation and crossover can be analysed. The results are illustrated through simple examples. The recently discovered connections between the mathematical form of the genetic operators and the Walsh transform are briefly described. Some current outstanding conjectures are also presented.
J. E. Rowe
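To make the kind of operator studied in this tutorial concrete, here is a small numerical sketch (not Vose's notation verbatim) of one step of the infinite-population map for proportionate selection followed by bit-flip mutation on 2-bit genotypes. The fitness values, mutation rate, and function names are illustrative choices, not taken from the tutorial.

```python
# One step of the infinite-population map: proportionate selection followed
# by bit-flip mutation on genotypes {00, 01, 10, 11}. p is a probability
# vector over genotypes; f is an arbitrary example fitness vector.
import numpy as np

f = np.array([1.0, 2.0, 2.0, 4.0])            # example fitness values
p = np.full(4, 0.25)                           # uniform initial population

def selection(p, f):
    return (f * p) / np.dot(f, p)              # proportionate selection

def mutation_matrix(mu, bits=2):
    # U[i, j] = probability that genotype j mutates into genotype i
    n = 2 ** bits
    U = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            d = bin(i ^ j).count("1")          # Hamming distance
            U[i, j] = (mu ** d) * ((1 - mu) ** (bits - d))
    return U

U = mutation_matrix(mu=0.05)
p_next = U @ selection(p, f)
print(p_next, p_next.sum())                    # stays a probability vector
```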
Modelling Genetic Algorithm Dynamics
Abstract
This tutorial is an introduction to the mathematical modelling of the dynamics of genetic algorithms (GAs). The distinguishing feature of this approach is that we consider macroscopic properties of the system. After some brief introductory remarks, we look at a generational GA, with tournament selection, tackling the ones-counting problem. Initially we ignore recombination. We start with a two-parameter model of the evolution. This is sufficient to explain the qualitative features of the dynamics, although it does not give a good quantitative agreement with simulations. We show how the agreement can be improved by using more parameters to describe the population and by introducing finite population corrections. Finally, we come back to recombination and show how this can be modelled.
A. Prügel-Bennett, A. Rogers
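For readers who want a concrete reference point for the system being modelled, the following is a minimal sketch of a generational GA with binary tournament selection and bit-flip mutation on the ones-counting problem, printing the population mean fitness each generation (the kind of macroscopic quantity the models track). All parameter values and function names are arbitrary choices, not the tutorial's.

```python
# Minimal generational GA on ones-counting: tournament selection plus
# bit-flip mutation, no recombination, tracking mean fitness per generation.
import random

def one_max(x):
    return sum(x)

def tournament(pop, fitness):
    a, b = random.sample(range(len(pop)), 2)
    return pop[a] if fitness[a] >= fitness[b] else pop[b]

def mutate(x, rate):
    return [bit ^ (random.random() < rate) for bit in x]

def run(length=100, pop_size=50, mutation_rate=0.01, generations=100):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for g in range(generations):
        fitness = [one_max(x) for x in pop]
        print(g, sum(fitness) / pop_size)      # mean-fitness trajectory
        pop = [mutate(tournament(pop, fitness), mutation_rate)
               for _ in range(pop_size)]

if __name__ == "__main__":
    run()
```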
Statistical Mechanics Theory of Genetic Algorithms
Abstract
This tutorial gives an introduction to the statistical mechanics method of analysing genetic algorithm (GA) dynamics. The goals are to study GAs acting on specific problems which include realistic features such as: finite population effects, crossover, large search spaces, and realistic cost functions. Statistical mechanics allows one to derive deterministic equations of motion which describe average quantities of the population after selection, mutation, and crossover in terms of those before. The general ideas of this approach are described here, and some details given via consideration of a specific problem. Finally, a description of the literature is given.
J. L. Shapiro
Theory of Evolution Strategies — A Tutorial
Abstract
Evolution strategies form a class of evolutionary optimization procedures the behavior of which is comparatively well understood theoretically, at least for simple cases. An approach commonly adopted is to view the optimization as a dynamical process. Local performance measures are introduced so as to arrive at quantitative results regarding the performance of the algorithms. By employing simple fitness models and approximations exact in certain limits, analytical results can be obtained for multi-parent strategies including recombination and even for some simple strategies employing self-adaptation of strategy parameters. This tutorial outlines the basic algorithms and the methods applied to their analysis. Local performance measures and the most commonly used fitness models are introduced and results from performance analyses are presented. Basic working principles of evolution strategies — namely the evolutionary progress principle, the genetic repair principle, and the phenomenon of speciation — are identified and discussed. Finally, open problems and areas of future research are outlined.
H.-G. Beyer, D. V. Arnold
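As a rough illustration of the class of algorithms analysed in this tutorial, here is a minimal (1, λ)-evolution strategy on the sphere model with log-normal self-adaptation of the mutation strength. The parameter settings and helper names are ad hoc assumptions, not taken from the tutorial.

```python
# Minimal (1, lambda)-ES on the sphere model with log-normal self-adaptation
# of the mutation strength sigma. Comma selection: the best of lambda
# offspring replaces the parent each generation.
import math
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def es(dim=10, lam=10, generations=200):
    parent = [random.uniform(-1.0, 1.0) for _ in range(dim)]
    sigma = 1.0
    tau = 1.0 / math.sqrt(dim)                 # learning rate for sigma
    for _ in range(generations):
        offspring = []
        for _ in range(lam):
            s = sigma * math.exp(tau * random.gauss(0.0, 1.0))
            child = [xi + s * random.gauss(0.0, 1.0) for xi in parent]
            offspring.append((sphere(child), s, child))
        _, sigma, parent = min(offspring)      # best of lambda offspring
    return sphere(parent)

print(es())
```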
Evolutionary Algorithms: From Recombination to Search Distributions
Abstract
First we show that all genetic algorithms can be approximated by an algorithm which keeps the population in linkage equilibrium. Here the genetic population is given as a product of univariate marginal distributions. We describe a simple algorithm which keeps the population in linkage equilibrium. It is called the univariate marginal distribution algorithm (UMDA). Our main result is that UMDA transforms the discrete optimization problem into a continuous one defined by the average fitness W(p_1, ..., p_n) as a function of the univariate marginal distributions p_i. For proportionate selection UMDA performs gradient ascent in the landscape defined by W(p). We derive a difference equation for p_i which has already been proposed by Wright in population genetics. We show that UMDA solves difficult multimodal optimization problems. For functions with highly correlated variables UMDA has to be extended. The factorized distribution algorithm (FDA) uses a factorization into marginal and conditional distributions. For decomposable functions the optimal factorization can be explicitly computed. In general it has to be computed from the data. This is done by LFDA. It uses a Bayesian network to represent the distribution. Computing the network structure from the data is called learning in Bayesian network theory. The problem of finding a minimal structure which explains the data is discussed in detail. It is shown that the Bayesian information criterion is a good score for this problem.
H. Mühlenbein, T. Mahnig
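The following is a compact, illustrative sketch of UMDA as described above: the population is summarised by univariate marginal frequencies, which are re-estimated from the selected individuals each generation. The test problem (ones-counting), truncation selection, and all parameter values are assumptions made for the example, not the paper's settings.

```python
# UMDA sketch: sample a population from the univariate marginals p_i,
# select the best individuals, and re-estimate the marginals from them.
import random

def one_max(x):
    return sum(x)

def umda(length=50, pop_size=100, selected=50, generations=60, f=one_max):
    p = [0.5] * length                          # univariate marginals
    for _ in range(generations):
        pop = [[int(random.random() < pi) for pi in p] for _ in range(pop_size)]
        pop.sort(key=f, reverse=True)           # truncation selection
        elite = pop[:selected]
        p = [sum(x[i] for x in elite) / selected for i in range(length)]
        # clamp marginals away from 0 and 1 to keep some diversity
        p = [min(max(pi, 1.0 / length), 1.0 - 1.0 / length) for pi in p]
    return max(f(x) for x in pop)

print(umda())
```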
Properties of Fitness Functions and Search Landscapes
Abstract
This tutorial is an introduction to the study of properties of fitness functions and search landscapes in the context of predicting the difficulty of search problems for genetic algorithms and, more generally, for stochastic iterative algorithms. Central to this topic is the Walsh transform, which presents a view on the fitness function in terms of the interactions between the variables of this function. The first part of this tutorial introduces the Walsh decomposition of fitness functions, and its relation to epistasis variance. Exchanging the fitness function for the fitness landscape, the second part discusses two important consequences of putting a topology on the search space: the modality and the ruggedness. Methods to estimate properties of landscapes are discussed. The last part moves away from property learning of general fitness functions and landscapes, and instead focuses on properties of classes of fitness functions originating from well-known NP-hard search problems. Important issues here are the factorization of the joint probability distribution of the fitness values and the engineering of interactions to achieve a highly multimodal, symmetric fitness landscape resembling a needle-in-a-haystack problem.
L. Kallel, B. Naudts, C. R. Reeves
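To make the Walsh decomposition concrete, the sketch below computes the Walsh coefficients of a fitness function on {0,1}^n by brute force. Sign and scaling conventions vary between authors; this is one common choice, feasible only for small n, and the example function is an arbitrary illustration.

```python
# Brute-force Walsh decomposition of a fitness function on {0,1}^n:
# w_j = 2^{-n} * sum_x f(x) * (-1)^{j.x}, where j.x is the bitwise parity.
def walsh_coefficients(f, n):
    size = 1 << n
    w = []
    for j in range(size):
        total = 0.0
        for x in range(size):
            parity = bin(j & x).count("1") & 1   # j . x mod 2
            total += f(x) * (1 if parity == 0 else -1)
        w.append(total / size)
    return w

# Example: ones-counting on 3 bits; only the order-0 and order-1 coefficients
# are non-zero, because the variables do not interact.
ones = lambda x: bin(x).count("1")
print(walsh_coefficients(ones, 3))
```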

Technical Papers

A Solvable Model of a Hard Optimisation Problem
Abstract
The dynamics of a genetic algorithm (GA) on a model of a hard optimisation problem are analysed using a formalism which describes the changing fitness distribution of the GA population under ranking selection, uniform crossover, and mutation. The time to solve the optimisation problem is calculated as a closed-form expression which enables the effect of the various GA parameters — population size, mutation rate, and selection scheme — to be understood.
A. Rogers, A. Prügel-Bennett
Bimodal Performance Profile of Evolutionary Search and the Effects of Crossover
Abstract
Tunable performance profiles for evolutionary search on instances of the adaptive distributed database management problem have previously been plotted and published by the authors. These demonstrated a bimodal feature of convergence time with respect to population size and mutation rate. Preliminary results on other problems (one-max, De Jong functions, etc.) led to the tentative conclusion that the features of the complex profile discovered could indeed be generic, and four key hypotheses were presented. These covered the effects of problem complexity and evaluation limit on optimal and non-optimal mutation rates. This paper expands significantly on these results, looking in more detail at the one-max and royal staircase problems, and demonstrates the effect of various rates of crossover on the performance profile of evolutionary search. Crucially, these results continue to demonstrate the bimodal feature and show that reduced levels of crossover extend the influence of the bimodal region to higher population sizes. A study of the coefficient of variation of convergence time shows, importantly, that it can be at a minimum at an optimal mutation rate which can also deliver consistent results in a minimum number of evaluations.
M. Oates, J. Smedley, D. Corne, R. Loader
Evolution Strategies in Noisy Environments — A Survey of Existing Work
Abstract
Noise is a factor present in almost all real-world optimization problems. While it can potentially improve convergence reliability in multimodal optimization by preventing convergence towards merely local optima, it is generally detrimental to the velocity with which an optimum is approached. Evolution strategies (ES) form a class of evolutionary optimization procedures that are believed to be able to cope quite well with noise. A number of theoretical results as well as empirical findings regarding the influence of noise on the performance of ES can be found in the literature. The purpose of this survey is to summarize what is known regarding the behavior of ES in noisy environments and to outline directions for future research.
D. V. Arnold
Cyclic Attractors and Quasispecies Adaptability
Abstract
Using the techniques presented in a previous paper, it is possible to identify cyclic attractors (in the infinite population limit) for the simple genetic algorithm (with zero crossover) when the fitness function is periodic with respect to time. These techniques are applied here to study the adaptability of a quasispecies as the fitness function changes to favour another genotype. The effects of different mutation rates are examined, illustrating the ‘error threshold’ effect, that too high a mutation rate leads to a general collapse of the quasispecies and an inability to track the optimum. Experiments with very low mutation rates illustrate the inability of the population to evolve away from the initial ‘master sequence’ (in which the population clusters around a single genotype) in the face of adverse selective pressure. This result is not reflected in theoretical calculations, indicating that there is a metastable cyclic attractor that dominates for low mutation rates.
J. E. Rowe
Genetic Algorithms in Time-Dependent Environments
Abstract
The influence of time-dependent fitnesses on the infinite population dynamics of simple genetic algorithms (GAs) without crossover is analyzed. Based on general arguments, a schematic phase diagram is constructed that allows one to characterize the asymptotic states as a function of the mutation rate and the time scale of changes. Furthermore, the notion of regular changes is raised, for which the population can be shown to converge towards a generalized quasispecies. Based on this, error thresholds and an optimal mutation rate are approximately calculated for a generational GA with a moving needle-in-the-haystack landscape. The phase diagram thus found is fully consistent with our general considerations.
C. Ronnewinkel, C. O. Wilke, T. Martinetz
Statistical Machine Learning and Combinatorial Optimization
Abstract
In this work we apply statistical learning methods in the context of combinatorial optimization, which is understood as finding a binary string minimizing a given cost function. We first consider probability densities over binary strings and we define two different statistical criteria. Then we recast the initial problem as the problem of finding a density minimizing one of the two criteria. We restrict ourselves to densities described by a small number of parameters and solve the new problem by means of gradient techniques. This results in stochastic algorithms which iteratively update density parameters. We apply these algorithms to two families of densities, the Bernoulli model and the Gaussian model. The algorithms have been implemented and some experiments are reported.
A. Berny
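As a hedged illustration of the general idea, a stochastic scheme over a Bernoulli model might look like the sketch below: sample from the current density and move the parameters towards the better samples. This resembles PBIL-style updates; the paper's actual criteria and gradient updates differ, and the cost function, learning rate, and other parameters here are arbitrary.

```python
# Stochastic search over densities on binary strings: a Bernoulli model whose
# parameters are nudged towards the best sampled string each iteration.
import random

def cost(x):                                    # example cost: minimise zeros
    return len(x) - sum(x)

def bernoulli_search(length=30, samples=20, rate=0.1, iterations=200):
    theta = [0.5] * length                      # Bernoulli parameters
    for _ in range(iterations):
        pop = [[int(random.random() < t) for t in theta] for _ in range(samples)]
        pop.sort(key=cost)
        best = pop[0]
        # move each parameter towards the best sample's bit value
        theta = [(1 - rate) * t + rate * b for t, b in zip(theta, best)]
    return min(cost(x) for x in pop)

print(bernoulli_search())
```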
Multi-Parent Scanning Crossover and Genetic Drift
Abstract
Genetic drift is a well-known phenomenon from biology. Only recently has it gained attention in the field of evolutionary computation. In this article we argue that occurrence-based scanning causes a stronger than usual genetic drift. This is done in the context of a genetic algorithm based on the steady state model. To prove our claim we define three kinds of events: drift off events, drift back events, and neutral events. Drift off events constitute those events that increase the number of dominant alleles. Drift back events constitute those events that decrease the number of dominant alleles. Neutral events leave the numbers intact. We prove that in our context, with occurrence-based scanning, the probability of drift off events is always bigger than the probability of drift back events. Furthermore, we prove that this tendency to drift off amplifies itself, i.e., drifting off increases the probability of drifting off even further. Finally, we prove that the tendency to drift off gets stronger when the number of parents increases. For comparison we show that uniform scanning, another multi-parent crossover operator, does not influence genetic drift at all in this context.
C. A. Schippers
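For concreteness, the two scanning operators compared above can be sketched as follows: occurrence-based scanning takes a per-position majority vote over the parents (biasing the child towards the already-dominant allele), while uniform scanning takes each position from a randomly chosen parent. Tie-breaking and all names here are illustrative guesses, not the paper's exact definitions.

```python
# Multi-parent scanning crossover, two variants: occurrence-based (majority
# vote per position) and uniform (random parent per position).
import random

def occurrence_based_scanning(parents):
    child = []
    for i in range(len(parents[0])):
        ones = sum(p[i] for p in parents)
        zeros = len(parents) - ones
        if ones > zeros:
            child.append(1)
        elif zeros > ones:
            child.append(0)
        else:
            child.append(random.randint(0, 1))  # break ties at random
    return child

def uniform_scanning(parents):
    return [random.choice(parents)[i] for i in range(len(parents[0]))]

parents = [[1, 0, 1, 1], [1, 1, 0, 1], [0, 0, 1, 1]]
print(occurrence_based_scanning(parents), uniform_scanning(parents))
```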
Harmonic Recombination for Evolutionary Computation
Abstract
When using an evolutionary algorithm to handle an optimization problem the user must immediately contend with issues related to the bit representation of a feasible solution and the choice of evolutionary operators. In this presentation we discuss research into techniques that strive to nullify the a priori decisions made when establishing the representation and bit order of a feasible solution. One objective in this research is to develop techniques that reformulate a representation so as to make it more amenable to various evolutionary operators. Aside from the obvious practical benefits that should accrue, we hope that this approach to evolutionary computation will benefit a theoretical analysis that achieves more generality by avoiding unsupported assumptions about adjacency of feasible points in the solution space. In particular, our strategy is to avoid implicit specifications of locality due to adjacency assumptions that arise from the use of particular operators. For example, the assumption of adjacency of solutions as measured via Hamming distance is a notion that is convenient for a string mutation operator but does not necessarily have anything to do with the intrinsic structure of the objective function. Naturally, some type of adjacency, explicit or implicit, has to be at work since we wish to efficiently explore a solution space of exponential size in polynomial time, but the idea is to guide this exploration by stochastic techniques that are influenced by intrinsic function properties instead of unjustified a priori assumptions.
F. J. Burkowski
How to Detect all Maxima of a Function
Abstract
The first contribution of this paper is a theoretical investigation of a family of landscapes characterized by the number of their local optima N and the distribution of the sizes (α_j) of their attraction basins. For each landscape case, we give precise estimates of the size of the random sample that ensures that at least one point lies in each basin of attraction. A practical methodology is then proposed for identifying these quantities (N and the (α_j) distribution) for an unknown landscape, given a random sample on that landscape and a local steepest ascent search. This methodology can be applied to any landscape specified with a modification operator and provides bounds on search complexity to detect all local optima. Experiments demonstrate the efficiency of this methodology for guiding the choice of modification operators, eventually leading to the design of problem-dependent optimization heuristics.
J. Garnier, L. Kallel
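A minimal sketch of the practical side of this methodology, under simplifying assumptions (bit strings, single bit-flip moves, and an invented example landscape), runs steepest ascent from every point of a random sample and uses the hit counts of each local optimum as empirical estimates of N and the relative basin sizes α_j.

```python
# Estimate the number of local optima and their relative basin sizes from a
# random sample and steepest ascent over the single bit-flip neighbourhood.
import random
from collections import Counter

def steepest_ascent(x, f):
    while True:
        neighbours = [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]
        best = max(neighbours, key=f)
        if f(best) <= f(x):
            return x                             # local optimum reached
        x = best

def estimate_basins(f, length=10, sample_size=200):
    hits = Counter()
    for _ in range(sample_size):
        start = tuple(random.randint(0, 1) for _ in range(length))
        hits[steepest_ascent(start, f)] += 1
    return {opt: count / sample_size for opt, count in hits.items()}

# Example landscape: a "two-max" function with two symmetric optima.
two_max = lambda x: max(sum(x), len(x) - sum(x))
print(estimate_basins(two_max))
```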
On Classifications of Fitness Functions
Abstract
It is well known that evolutionary algorithms succeed in optimizing some functions efficiently and fail for others. Therefore, one would like to classify fitness functions as more or less hard to optimize for evolutionary algorithms. The aim of this paper is to clarify limitations and possibilities for classifications of fitness functions from a theoretical point of view. We distinguish two different types of classifications, descriptive and analytical ones. We briefly discuss three widely known approaches, namely the NK-model, epistasis variance, and fitness distance correlation. Furthermore, we consider another recent measure, bit-wise epistasis. We discuss shortcomings and counterexamples for all four measures and use this to motivate a discussion about possibilities and limitations of classifications of fitness functions in a broader context.
T. Jansen
Genetic Search on Highly Symmetric Solution Spaces: Preliminary Results
Abstract
Several optimisation problems have a highly symmetric solution space. In this case, there exist multiple equivalent solutions with respect to their structure and cost. Genetic algorithms do not generally perform well in this domain due to the lack of an effective search. In fact, the algorithm is very likely to explore a relatively small part of the space without being able to detect that a solution has been previously evaluated. As a consequence, the algorithm either converges to poor solutions or does not converge at all. This paper introduces two ad hoc genetic operators to deal directly with highly symmetric solution spaces. Both operators replace the standard crossover procedure of genetic algorithms. We present results of an investigation on combinatorial optimisation problems and in particular the graph colouring problem.
A. Marino
Structure Optimization and Isomorphisms
Abstract
In this article we deal with a quite general topic in evolutionary structure optimization, namely, redundancy in the encoding due to isomorphic structures. This problem is well known in topology optimization of neural networks (NNs) and we study it in this framework. Choosing a good NN structure for a given problem is still a difficult task for which we do not know any successful analytical means. But problem specific structures can lead to significantly improved results; evidence for this is given in this contribution by an NN that was evolutionarily adapted for a benchmark problem. The degree to which isomorphic structures, i.e., classes of equivalent NN topologies, enlarge the search space depends on the restrictions of the allowed structures and on the representation of the search space. In the context of structure optimization of NNs we observe similar phenomena of rare and frequent structures as are known from molecular biology. To cope with isomorphisms we make use of the relation between NN topologies and graphs. Exploiting methods from graph theory we demonstrate a general way to deal with isomorphic structures. The presented approach applied to NN optimization can be regarded as a solution to the so-called competing conventions problem [18]. Further, it can be used to save fitness evaluations when used in combination with a graph-database.
P. Stagge, C. Igel
Detecting Spin-Flip Symmetry in Optimization Problems
Abstract
The presence of symmetry in the representation of an optimization problem can cause the search algorithm to get stuck before reaching the global optimum. The traveling salesman problem and the graph coloring problem are some well-known NP-complete optimization problems containing symmetry in their usual representation. This paper describes a class of symmetry, called spin-flip symmetry, which one finds in functions like the one-dimensional nearest neighbor interaction functions. Spin-flip symmetry indicates that bit-complementary strings have the same function value. This notion can be generalized to substrings, called spin-flip blocks, in a canonical way. We distinguish two specific cases of spin-flip symmetry and introduce a spin-flip detection algorithm. The performance of the algorithm strongly depends on the initial sample the algorithm uses to detect the spin-flip blocks. The algorithm is designed to detect spin-flip symmetry in the whole search space, as well as in its hyperplanes. The difficulty with detection in hyperplanes is related to the detection of the correct hyperplane, which can be time consuming. Once the desired hyperplane is found, the spin-flip block can be detected using the normal detection procedure. The one-max problem shows that spin-flip symmetry in other types of subspaces, like the subspace in which the number of 1s equals the number of 0s, is not detected. For these kinds of subspaces the method needs to be extended.
C. Van Hoyweghen
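The core detection idea can be sketched very naively: spin-flip symmetry over the whole search space means f(x) = f(complement(x)), so checking this on a random sample either produces a counterexample or gathers evidence for the symmetry. The block-wise and hyperplane variants discussed in the paper need more machinery than this; the function names and the test function below are illustrative assumptions.

```python
# Naive check for spin-flip symmetry of a function over the whole space:
# compare f on random strings and on their bit-complements.
import random

def looks_spin_flip_symmetric(f, length, sample_size=100):
    for _ in range(sample_size):
        x = [random.randint(0, 1) for _ in range(length)]
        if f(x) != f([1 - b for b in x]):
            return False                         # definite counterexample
    return True                                  # consistent with symmetry

# Example: a nearest-neighbour interaction function, which is spin-flip
# symmetric because complementing all bits preserves adjacent equalities.
def nni(x):
    return sum(1 for a, b in zip(x, x[1:]) if a == b)

print(looks_spin_flip_symmetric(nni, 20))
```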
Asymptotic Results for Genetic Algorithms with Applications to Nonlinear Estimation
Abstract
Genetic algorithms (GAs) are stochastic search methods based on natural evolution processes. They are defined as a system of particles (or individuals) evolving randomly and undergoing adaptation in an environment, not necessarily homogeneous in time, represented by a collection of fitness functions. The purpose of this work is to study the long-time behavior as well as the large-population asymptotics of GAs. Another side topic is to discuss the applications of GAs in numerical function analysis, Feynman-Kac formulae approximations, and in nonlinear filtering problems. Several variations and refinements will also be presented, including continuous-time and branching particle models with random population size.
P. Del Moral, L. Miclo
Backmatter
Metadata
Title
Theoretical Aspects of Evolutionary Computing
Edited by
Dr. Leila Kallel
Dr. Bart Naudts
Alex Rogers
Copyright year
2001
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-04448-3
Print ISBN
978-3-642-08676-2
DOI
https://doi.org/10.1007/978-3-662-04448-3