
About this Book

Genetic algorithms are founded upon the principle of evolution, i.e., survival of the fittest. Hence evolution programming techniques, based on genetic algorithms, are applicable to many hard optimization problems, such as optimization of functions with linear and nonlinear constraints, the traveling salesman problem, and problems of scheduling, partitioning, and control. The importance of these techniques is still growing, since evolution programs are parallel in nature, and parallelism is one of the most promising directions in computer science.
The book is self-contained and the only prerequisite is basic undergraduate mathematics. This third edition has been substantially revised and extended by three new chapters and by additional appendices containing working material to cover recent developments and a change in the perception of evolutionary computation.

Table of Contents

Introduction

Abstract
During the last thirty years there has been a growing interest in problem-solving systems based on principles of evolution and heredity: such systems maintain a population of potential solutions, they have some selection process based on fitness of individuals, and some “genetic” operators. One type of such system is the class of Evolution Strategies, i.e., algorithms which imitate the principles of natural evolution for parameter optimization problems [319, 348] (Rechenberg, Schwefel). Fogel’s Evolutionary Programming [126] is a technique for searching through a space of small finite-state machines. Glover’s Scatter Search techniques [142] maintain a population of reference points and generate offspring by weighted linear combinations. Another type of evolution-based system is Holland’s Genetic Algorithms (GAs) [188]. In 1990, Koza [231] proposed another evolution-based system, Genetic Programming, which searches for the most fit computer program to solve a particular problem.
Zbigniew Michalewicz

1. GAs: What Are They?

Abstract
There is a large class of interesting problems for which no reasonably fast algorithms have been developed. Many of these problems are optimization problems that arise frequently in applications. Given such a hard optimization problem it is often possible to find an efficient algorithm whose solution is approximately optimal. For some hard optimization problems we can use probabilistic algorithms as well — these algorithms do not guarantee the optimum value, but by randomly choosing sufficiently many “witnesses” the probability of error may be made as small as we like.
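The “witness” idea can be made concrete with a classic probabilistic algorithm. The sketch below uses Freivalds’ matrix-product check, chosen here only as an illustration (it is not an example from this chapter): each random 0/1 witness vector catches a wrong product with probability at least 1/2, so after k independent trials the error probability is at most (1/2)^k.

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistically verify that A @ B == C for square matrices.

    Each random 0/1 "witness" vector r catches a wrong product with
    probability at least 1/2, so after k trials the probability of a
    false "True" is at most (1/2)**k.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A(Br) with Cr: O(n^2) per trial instead of O(n^3).
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False          # definitely A @ B != C
    return True                   # correct with probability >= 1 - (1/2)**trials

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]          # the true product A @ B
```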
Zbigniew Michalewicz

2. GAs: How Do They Work?

Abstract
In this chapter we discuss the actions of a genetic algorithm for a simple parameter optimization problem. We start with a few general comments; a detailed example follows.
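The machinery such an example exercises can be sketched as a minimal binary-coded GA in Python. The objective f(x) = x·sin(10πx) + 1 on [−1, 2] is the kind of simple function this chapter works through; the parameter choices below (population size, rates, chromosome length, tournament selection) are illustrative assumptions, not the chapter’s exact setup.

```python
import math
import random

BITS, LO, HI = 22, -1.0, 2.0      # 22-bit chromosome mapped onto [-1, 2]

def decode(bits):
    """Map a list of 0/1 genes to a real x in [LO, HI]."""
    return LO + int("".join(map(str, bits)), 2) * (HI - LO) / (2**BITS - 1)

def fitness(bits):
    x = decode(bits)
    return x * math.sin(10 * math.pi * x) + 1.0

def tournament(pop, k=3):
    """Pick the fittest of k randomly chosen individuals."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, BITS)           # one-point crossover
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(bits, pm=0.01):
    """Flip each gene independently with probability pm."""
    return [1 - g if random.random() < pm else g for g in bits]

def ga(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            c1, c2 = crossover(tournament(pop), tournament(pop))
            nxt += [mutate(c1), mutate(c2)]
        pop = nxt[:pop_size]
    best = max(pop, key=fitness)
    return decode(best), fitness(best)

best_x, best_f = ga()
```

With these settings the run typically settles near the global maximum of roughly 2.85 around x ≈ 1.85.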
Zbigniew Michalewicz

3. GAs: Why Do They Work?

Abstract
The theoretical foundations of genetic algorithms rely on a binary string representation of solutions, and on the notion of a schema (see e.g., [188]) — a template allowing exploration of similarities among chromosomes. A schema is built by introducing a don’t care symbol (⋆) into the alphabet of genes. A schema represents all strings (a hyperplane, or subset of the search space), which match it on all positions other than ‘⋆’.
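A schema and its two standard attributes, the order and the defining length (both appear in the schema theorem), can be sketched in a few lines of Python; the string encoding with ‘*’ as the don’t-care symbol follows the chapter’s notation.

```python
def matches(schema, string):
    """True iff the binary string is an instance of the schema:
    it agrees on every position that is not the don't-care '*'."""
    return all(s == '*' or s == c for s, c in zip(schema, string))

def order(schema):
    """o(S): the number of fixed (non-'*') positions."""
    return sum(s != '*' for s in schema)

def defining_length(schema):
    """delta(S): the distance between the outermost fixed positions."""
    fixed = [i for i, s in enumerate(schema) if s != '*']
    return fixed[-1] - fixed[0] if fixed else 0
```

For example, the schema "1**0" matches "1010" and "1100" but not "0110"; it has order 2 and defining length 3.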
Zbigniew Michalewicz

4. GAs: Selected Topics

Abstract
GA theory provides some explanation why, for a given problem formulation, we may obtain convergence to the sought optimal point. Unfortunately, practical applications do not always follow the theory, with the main reasons being:
• the coding of the problem often moves the GA to operate in a different space than that of the problem itself,
• there is a limit on the hypothetically unlimited number of iterations, and
• there is a limit on the hypothetically unlimited population size.
One implication of these observations is the inability of GAs, under certain conditions, to find the optimal solution; such failures are caused by premature convergence to a local optimum. Premature convergence is a common problem of genetic algorithms and other optimization algorithms. If convergence occurs too rapidly, then the valuable information developed in part of the population is often lost. Implementations of genetic algorithms are prone to converge before the optimal solution has been found, as stated in [46]:
“...While the performance of most implementations is comparable to or better than the performance of many other search techniques, it [GA] still fails to live up to the high expectations engendered by the theory. The problem is that, while the theory points to sampling rates and search behavior in the limit, any implementation uses a finite population or set of sample points. Estimates based on finite samples inevitably have a sampling error and lead to search trajectories much different from those theoretically predicted. This problem is manifested in practice as a premature loss of diversity in the population with the search converging to a sub-optimal solution.”
Eshelman and Schaffer [105] discuss a few strategies for combating premature convergence; these include (1) a mating strategy, called incest prevention,1 (2) a use of uniform crossover (see section 4.6), and (3) detecting duplicate strings in the population (similar to the crowding model; see section 4.1).
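Of the three strategies, incest prevention is the easiest to sketch: candidate parents are paired up, but a pair is allowed to mate only if their Hamming distance exceeds a threshold. The pairing scheme and threshold below are illustrative assumptions, not Eshelman and Schaffer’s exact procedure.

```python
import random

def hamming(a, b):
    """Number of positions at which two equal-length chromosomes differ."""
    return sum(x != y for x, y in zip(a, b))

def incest_prevented_pairs(pop, threshold):
    """Pair up shuffled parents, keeping only pairs whose Hamming
    distance exceeds the threshold (too-similar pairs do not mate)."""
    shuffled = random.sample(pop, len(pop))
    pairs = []
    for a, b in zip(shuffled[::2], shuffled[1::2]):
        if hamming(a, b) > threshold:
            pairs.append((a, b))
    return pairs
```

Raising the threshold over time would tighten the restriction as the population converges; here it is a fixed parameter for simplicity.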
Zbigniew Michalewicz

5. Binary or Float?

Abstract
As discussed in the previous chapter, GA applications encounter some problems that sometimes delay, if not prohibit, finding optimal solutions with the desired precision. One of the implications of these problems was premature convergence of the entire population to a non-global optimum (Chapter 4); other consequences include an inability to perform fine local tuning and an inability to operate in the presence of nontrivial constraints (Chapters 6 and 7).
Zbigniew Michalewicz

6. Fine Local Tuning

Abstract
Genetic algorithms display inherent difficulties in performing local search for numerical applications. Holland suggested [188] that the genetic algorithm should be used as a preprocessor to perform the initial search, before turning the search process over to a system that can employ domain knowledge to guide the local search. As observed in [170]:
“Like natural genetic systems, GAs progress by virtue of changing the distribution of high performance substructures in the overall population; individual structures are not the focus of attention. Once the high performance regions of the search space are identified by a GA, it may be useful to invoke a local search routine to optimize the members of the final population.”
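A minimal sketch of such a post-GA local search routine is a greedy hill-climber that refines a single real-valued point; the step-halving schedule and the parameters are assumptions for illustration, not a method prescribed by the chapter.

```python
def hill_climb(f, x, step=0.1, iters=200):
    """Greedy local refinement of one point (maximizing f):
    take any improving step; when neither neighbour improves,
    halve the step size to tune ever more finely."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) > f(x):
                x = cand
                break
        else:                 # no improving neighbour found
            step /= 2.0
    return x
```

One might apply this routine to each member of a GA’s final population and keep the best refined point.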
Zbigniew Michalewicz

7. Handling Constraints

Abstract
The general nonlinear programming problem (NLP) is to find x so as to
$$\text{optimize } f(x), \quad x = (x_1, \ldots, x_q) \in \mathbb{R}^q,$$
subject to p ≥ 0 inequalities:
$$g_i(x) \le 0, \quad i = 1, \ldots, p,$$
and m − p ≥ 0 equations:
$$h_j(x) = 0, \quad j = p + 1, \ldots, m.$$
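One common way an evolution program evaluates candidate solutions to such an NLP is through a penalty function. The sketch below uses a static penalty with an arbitrary weight r, just one of the many constraint-handling choices discussed in the book: each violated inequality g_i(x) ≤ 0 and each equality h_j(x) = 0 adds a squared penalty to the objective (minimization form).

```python
def penalized(f, ineqs, eqs, x, r=1e3):
    """Static-penalty objective for the NLP above (minimization form).

    ineqs: functions g_i with the convention g_i(x) <= 0 when satisfied.
    eqs:   functions h_j that should equal 0.
    r:     penalty weight (an arbitrary illustrative choice).
    """
    p = sum(max(0.0, g(x)) ** 2 for g in ineqs)   # violated inequalities
    p += sum(h(x) ** 2 for h in eqs)              # equality deviations
    return f(x) + r * p

# Example: minimize x0^2 + x1^2 subject to x0 + x1 >= 1,
# written as the inequality 1 - x0 - x1 <= 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]
```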
Zbigniew Michalewicz

8. Evolution Strategies and Other Methods

Abstract
Evolution strategies (ESs) are algorithms which imitate the principles of natural evolution as a method to solve parameter optimization problems [18], [348]. They were developed in Germany during the 1960s. As stated in [348]:
“In 1963 two students at the Technical University of Berlin met and were soon collaborating on experiments which used the wind tunnel of the Institute of Flow Engineering. During the search for the optimal shapes of bodies in a flow, which was then a matter of laborious intuitive experimentation, the idea was conceived of proceeding strategically. However, attempts with the coordinate and simple gradient strategies were unsuccessful. Then one of the students, Ingo Rechenberg, now Professor of Bionics and Evolutionary Engineering, hit upon the idea of trying random changes in the parameters defining the shape, following the example of natural mutations. The evolution strategy was born.”
(The second student was Hans-Paul Schwefel, now Professor of Computer Science and Chair of System Analysis).
Zbigniew Michalewicz

9. The Transportation Problem

Abstract
In Chapter 7 we compared different GA approaches for handling constraints. It seems that for a particular class of problems (like the transportation problem) we can do better: we can use a more appropriate (natural) data structure (for a transportation problem, a matrix) and specialized genetic operators which operate on matrices. Such an evolution program would be a much stronger method than GENOCOP: GENOCOP optimizes any function with linear constraints, whereas the new evolution program optimizes only transportation problems (these problems have precisely n + k − 1 equalities, where n and k denote the number of sources and destinations, respectively; see the description of the transportation problem below). However, it would be very interesting to see what we can gain by introducing extra problem-specific knowledge into an evolution program.
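The matrix representation makes the constraints easy to state: a chromosome is feasible when its row sums equal the supplies and its column sums equal the demands. A minimal feasibility check might look as follows (the function name and interface are illustrative, not taken from the chapter):

```python
def is_feasible(matrix, supply, demand, eps=1e-9):
    """Check a transportation-problem chromosome: an n x k matrix
    whose i-th row sums to supply[i] (source i) and whose j-th
    column sums to demand[j] (destination j)."""
    rows_ok = all(abs(sum(row) - s) < eps
                  for row, s in zip(matrix, supply))
    cols_ok = all(abs(sum(col) - d) < eps
                  for col, d in zip(zip(*matrix), demand))
    return rows_ok and cols_ok
```

Specialized genetic operators for this representation are designed so that feasibility in this sense is preserved from parents to offspring.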
Zbigniew Michalewicz

10. The Traveling Salesman Problem

Abstract
In the next chapter, we present several examples of evolution programs tailored to specific applications (graph drawing, partitioning, scheduling). The traveling salesman problem (TSP) is just one such application; however, we treat it as a special problem — the mother of all problems — and discuss it in a separate chapter. What are the reasons?
Zbigniew Michalewicz

11. Evolution Programs for Various Discrete Problems

Abstract
As stated in the Introduction, it seems that most researchers modified their implementations of genetic algorithms by using non-standard chromosome representations and/or by designing problem-specific genetic operators (e.g., [141], [385], [65], [76], etc.) to accommodate the problem to be solved, thus building efficient evolution programs. Such modifications were discussed in detail in the previous two chapters (Chapters 9 and 10) for the transportation problem and the traveling salesman problem, respectively. In this chapter, we have made a somewhat arbitrary selection of a few other evolution programs developed by the author and other researchers, which are based on non-standard chromosome representations and/or problem-specific knowledge operators. We discuss some systems for scheduling problems (section 11.1), the timetable problem (section 11.2), partitioning problems (section 11.3), and the path planning problem in a mobile robot environment (section 11.4). The chapter concludes with an additional section 11.5, which provides some brief remarks on a few other interesting problems.
Zbigniew Michalewicz

12. Machine Learning

Abstract
Machine learning is primarily devoted to building computer programs able to construct new knowledge or to improve already possessed knowledge by using input information; much of this research employs heuristic approaches to learning rather than algorithmic ones. The most active research area in recent years [284] has continued to be symbolic empirical learning (SEL). This area is concerned with creating and/or modifying general symbolic descriptions, whose structure is unknown a priori. The most common topic in SEL is developing concept descriptions from concept examples [234], [284]. In particular, the problems in attribute-based spaces are of practical importance: in many such domains it is relatively easy to come up with a set of example events; on the other hand, it is quite difficult to formulate hypotheses. The goal of a system implementing this kind of supervised learning is:
Given the initial set of example events and their membership in concepts, produce classification rules for the concepts present in the input set.
Depending on the output language, we can divide all approaches to automatic knowledge acquisition into two categories: symbolic and non-symbolic. Non-symbolic systems do not represent knowledge explicitly. For example, in statistical models knowledge is represented as a set of examples together with some statistics on them; in a connectionist model, knowledge is distributed among network connections [335]. On the other hand, symbolic systems produce and maintain explicit knowledge in a high-level descriptive language. The best known examples of this category of system are the AQ and ID families [281], [314].
Zbigniew Michalewicz

13. Evolutionary Programming and Genetic Programming

Abstract
In this chapter we review briefly two powerful evolutionary techniques; these are evolutionary programming (section 13.1) and genetic programming (section 13.2). These two techniques were developed a quarter of a century apart from each other; they aimed at different problems; they use different chromosomal representations for individuals in the population, and they put emphasis on different operators. Yet, they are very similar from our perspective of “evolution programs”: for particular tasks they aim at, they use specialized data structures (finite state machines and tree-structured computer programs) and specialized “genetic” operators. Also, both methods must control the complexity of the structure (some measure of the complexity of a finite state machine or a tree might be incorporated in the evaluation function). We discuss them in turn.
Zbigniew Michalewicz

14. A Hierarchy of Evolution Programs

Abstract
In this book we discussed different strategies, called Evolution Programs, which might be applied to hard optimization problems and which were based on the principle of evolution. Evolution programs borrow heavily from genetic algorithms. However, they incorporate problem-specific knowledge by using “natural” data structures and problem-sensitive “genetic” operators. The basic difference between GAs and EPs is that the former are classified as weak, problem-independent methods, which is not the case for the latter.
Zbigniew Michalewicz

15. Evolution Programs and Heuristics

Abstract
As we already discussed in the previous chapters, the best known evolution programs include genetic algorithms, evolutionary programming, evolution strategies, and genetic programming. There are also many hybrid systems which incorporate various features of the above paradigms, and consequently are hard to classify; in any case, we refer to them simply as evolution programs (or evolutionary algorithms, or evolutionary computation techniques).
Zbigniew Michalewicz

16. Conclusions

Abstract
The field of evolutionary computation has been growing rapidly over the last few years. Yet, there are still many gaps to be filled, many experiments to be done, many questions to be answered. In the final chapter of this text we examine a few important directions in which we can expect a lot of activity and significant results; we discuss them in turn.
Zbigniew Michalewicz

Backmatter

Further Information