10. Population Management


Abstract

After having generated several solutions, we can seek to learn how to combine them. This chapter reviews techniques for generating new solutions from existing ones and for managing a population of solutions. The most popular method in this field is undoubtedly genetic algorithms. However, the latter are less advanced metaheuristics than memetic algorithms or scatter search. The path relinking technique is also part of this chapter. Finally, among the most recently invented metaheuristics, we find the particle swarm methods, which seem well adapted to continuous optimization.
By abuse of language, all the methods presented previously can be classified as single-solution metaheuristics. Although most of these methods build or modify many different solutions, they only consider one current solution at a given iteration and, possibly, the best solution found so far. This classification could be disputed, especially for the ant system, since several solutions are built at a given iteration. However, an ant constructs a solution without regard to the work done in parallel by the other ants, and all the solutions built in one iteration are forgotten once the trails are updated. Similarly, there are taboo searches storing several solutions, but these are only used to determine the taboo status of a neighbor of the current solution. This chapter considers methods where several solutions are explicitly stored and iteratively used for generating or modifying other ones.
With a proper modeling of an optimization problem, it is very easy to construct many different solutions, especially by means of a randomized method. Therefore, one can try to learn how to create new solutions from those previously constructed. This chapter studies how to exploit a population of solutions and how to combine the various basic metaheuristic components studied above.
Let us illustrate this by the tour merging technique for the TSP. Figure 10.1 shows five tours obtained with a randomized method in \(O (n \log n)\) presented in Section 6.3.2. None of these solutions looks really good. However, superimposing them on the optimal solution reveals that all the edges of the latter are part of these tours. Therefore, we believe that an intelligent exploitation of various solutions can help to discover better ones.

10.1 Evolutionary Algorithms Framework

The intuition at the source of evolutionary algorithms comes from the work of nineteenth-century biologists such as Darwin and Mendel, who founded the theory of the evolution of living species. Indeed, over the course of generations, living beings are able to adapt to constantly changing external conditions. They can optimize their survival probability, thus solving extremely complex problems. Therefore, why not attempt to artificially reproduce this evolution to solve hard combinatorial optimization problems?
In the 1960s and 1970s, various ways of exploiting these ideas emerged. The general framework of the evolutionary algorithms is provided by Algorithm 10.1. One begins by generating a set of μ solutions to the problem, usually in a purely random fashion. This set of solutions is called a population by analogy with a group of living beings. In the same way, a solution to the problem is an individual. Evolutionary algorithms repeat the next loop (called a generational loop) until a stopping criterion is met. This is either set in advance, for example, the number of times the generational loop is repeated, or decided on the basis of the diversity of individuals present in the population.
Algorithm 10.1: Framework of evolutionary algorithms
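A schematic Python sketch of this generational loop follows; the operator signatures are assumed here purely for illustration, not taken from the book's listings.

def evolutionary_algorithm(generate, fitness, select_parents, crossover,
                           mutate, select_survivors, mu, generations):
    """Schematic generational loop; all operators are passed as functions."""
    population = [generate() for _ in range(mu)]       # initial population
    for _ in range(generations):                       # generational loop
        parents = select_parents(population, fitness)  # selection for reproduction
        offspring = [mutate(crossover(a, b))           # crossover + mutation
                     for a, b in zip(parents[::2], parents[1::2])]
        population = select_survivors(population + offspring, fitness, mu)
    return max(population, key=fitness)                # fittest individual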
First, a number of solutions from the population are selected to be used for breeding. This is achieved by a selection operator for reproduction. The purpose of this operator is to favor the individuals that are well adapted to their environment (those with the best fitness function) at the expense of those that are weaker, sick, or ill-adapted, similar to what happens in nature.
The selected individuals are then mixed together (e.g., in pairs) using a crossover operator to form λ new solutions called offspring, which undergo random modifications by means of a mutation operator. These two operators simulate the sexual reproduction of living species, assuming that, with a little luck, the favorable characteristics (the desirable genes contained in the DNA) of the parent solutions will be transmitted to their children and that fortuitous mutations will result in the appearance of new favorable genes.
Finally, the new solutions are evaluated, and a selection operator for survival eliminates λ solutions from the μ + λ available to reduce to a new population of μ individuals. Figure 10.2 illustrates the process of a generational loop.
The framework of evolutionary algorithms leaves considerable freedom in the choices to be made for the implementation of the various operators and parameters. For instance, the “evolution strategy” of Rechenberg [9] does not use a crossover operator between two solutions. In this technique, the solutions of the population are modified with a mutation operator and compete with each other, much like parthenogenetic reproduction.

10.2 Genetic Algorithms

Among evolutionary algorithms, it is undoubtedly the genetic algorithms (GA) proposed by Holland [3] that have received the most attention. This is paradoxical, since the purpose of his study was to understand the convergence mechanisms of these algorithms, not their ability to optimize difficult problems. For a long time, the community in this field continued to work on genetic algorithm convergence theory, studying “orthodox” versions of the various operators mentioned above, in conjunction with a standard representation of solutions in the form of Boolean vectors of a specified size.
Unfortunately, not all optimization problems have solutions that can be naturally represented by binary vectors. Using only standard operators and knowing their theoretical properties, considerable efforts have been made to discover appropriate encodings of solutions in the form of binary vectors and to decode them into feasible solutions.
For problems whose solutions are naturally represented by a permutation, the random key encoding technique allows exploiting the standard crossover and mutation operators. A permutation of the elements 1, …, n is represented by an array t of n real numbers. The permutation p that sorts t corresponds to the solution coded by the array (see Fig. 10.12).
The next sections review the main genetic algorithm operators, discussing how they can be generalized so that they equally apply to a natural representation of solutions and not only to binary vectors.

10.2.1 Selection for Reproduction

The selection for reproduction aims to favor the most efficient solutions so that they can transmit their beneficial properties to their offspring. Each solution i must therefore be assigned a fitness measure \(f_i\); the higher the quality, the higher the selection probability must be. If the objective of the problem to be solved is to maximize a function admitting positive values, this function can be directly used as fitness function. Otherwise, a transformation of the objective function is required to assign a fitness to each individual.

10.2.1.1 Rank-Based Selection

A traditional transformation is to sort the individuals. This does not require the computation of an objective function but only the possibility to compare solution qualities. The fittest individual in a population has a rank of 1 and the worst a rank of μ.
The individual i of rank \(r_i\) has a quality measure \(f_i = (1 - \frac {r_i-1}{\mu })^p\), where \(p \geqslant 0\) is a parameter modulating the selection pressure. A pressure p = 0 implies a uniform draw among the population (no selective pressure), while p = 2 represents a fairly high pressure. Code 10.1 provides an implementation of this operator for a selection pressure of p = 1.
Code 10.1 rank_based_selection.py Implementation of a rank-based selection operator for reproduction, with selective pressure p = 1. The best of μ individuals has a probability of \(\frac {2\mu } {\mu \cdot (\mu + 1)}\) to be selected, while the worst has a probability of \(\frac {2} {\mu \cdot (\mu + 1)}\)
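A minimal sketch of this operator for p = 1, with illustrative names of our own, follows; it returns the rank of the selected individual.

from random import uniform

def rank_based_selection(mu):
    """Draw a rank in 1..mu with probability proportional to mu - rank + 1
    (linear rank-based selection, i.e., selective pressure p = 1)."""
    total = mu * (mu + 1) / 2            # sum of the weights mu, mu-1, ..., 1
    u = uniform(0, total)
    cumulative = 0.0
    for rank in range(1, mu + 1):
        cumulative += mu - rank + 1      # weight of the individual of this rank
        if u <= cumulative:
            return rank
    return mu                            # guard against rounding effects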

10.2.1.2 Proportional Selection

The simplest selection operator is to randomly draw an individual proportionally to its fitness. The individual i has thus a probability \(f_i / \sum_j f_j\) of being selected. In principle, we do not select just one individual at each generational loop but several. The selection is ordinarily performed with replacement, so that a (good) individual can be selected several times in one generation.
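A roulette-wheel draw implementing this selection might be sketched as follows (illustrative code, assuming positive fitness values); in Python 3.6+, random.choices(range(len(f)), weights=f) performs the same draw.

from random import uniform

def proportional_selection(fitnesses):
    """Return the index of an individual drawn with probability f_i / sum(f)."""
    total = sum(fitnesses)
    u = uniform(0, total)
    cumulative = 0.0
    for i, f in enumerate(fitnesses):
        cumulative += f
        if u <= cumulative:
            return i
    return len(fitnesses) - 1            # guard against rounding effects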
Genetic algorithms are inherently parallel: the generational loop can be applied both to the production of a single individual in each generation, as shown in Fig. 10.2, and to the generation of a multitude of offspring. A frequently used technique is to select an even number λ of parent solutions in a generation and pair them up, each pair producing two offspring per crossover.

10.2.1.3 Natural Selection

It is also possible to perform a purely random and uniform selection for reproduction, just like what happens to many living species. The convergence of the algorithm must then be guided by the selection operator for survival, which ensures a bias toward the fittest individuals. Table 10.1 compares the selection probabilities of the operators presented above for a small population.
Table 10.1 Selection probability for different operators for reproduction. The objective function is to be maximized and is directly used as the fitness function for the proportional selection. The sum of the values of the objective function is 1000

Objective function   Rank   Rank-based (p = 2)   Rank-based (p = 1)   Natural   Proportional
       220             1         0.260                0.182             0.1        0.220
       162             2         0.210                0.164             0.1        0.162
       157             3         0.166                0.146             0.1        0.157
        93             4         0.127                0.127             0.1        0.093
        85             5         0.094                0.109             0.1        0.085
        74             6         0.065                0.091             0.1        0.074
        61             7         0.042                0.073             0.1        0.061
        55             8         0.023                0.054             0.1        0.055
        49             9         0.010                0.036             0.1        0.049
        44            10         0.003                0.018             0.1        0.044

10.2.1.4 Complete Selection

If one does not choose too large a population size, it is also possible to involve all individuals in a systematic way for reproduction. As with natural selection, the evolution of the population toward good solutions then depends on the selection operator for survival, which should favor the best solutions.

10.2.2 Crossover Operator

A crossover operator aims to simulate the sexual reproduction of living species. Schematically, the process of meiosis in sexual reproduction separates the DNA of each parent into two genetic sequences. This produces gametes (egg cell, sperm, or pollen grains). During the fertilization of the egg cell, genetic shuffling occurs, during which the sequence of genes of the offspring is produced by sequentially adding the genes of either parent in an arbitrary fashion.
The purpose of this operator is to produce a new offspring, different from its parents, but having inherited some of their features. With a little luck, the offspring receives good features from its parents and is better adapted to its environment. With a little less luck, the offspring does not receive those good features. Nevertheless, it perpetuates valuable genes and provides a source of diversity within the population, which means potential for innovation.
Figure 10.3 metaphorically illustrates this with the mating of different ladybird beetles. The couples at the top are likely to produce children very similar to themselves, while the couples at the bottom of the figure might produce genetically richer children.
There are evolutionary strategies where the crossover operator is absent. These strategies mimic asexual reproduction, where an individual produces an offspring practically identical to itself, and only spontaneous mutations cause the population gene pool to evolve.

10.2.2.1 Uniform Crossover

Uniform crossover involves taking two parent solutions, represented as vectors of n items, and creating a third one by choosing its items from either parent with equal probability. Figure 10.4 illustrates the production of two “anti-twin” offspring from two parents. This crossover operator is appropriate if it is straightforward and logical to represent any solution of the problem by a vector of n components and if any vector of that size corresponds to a feasible solution.
This is not the case for a problem where a permutation of n items is sought. One technique for adapting the uniform crossover for this situation is to proceed in two phases: in the first phase, the items of the permutation are randomly selected from either parent, provided that the item has not yet been chosen. If both parents possess items already selected at the position to be filled, the latter remains temporarily empty in the offspring. The second phase consists in filling in at random the vacant positions with the items that were not selected during the first phase. This operator is illustrated in Fig. 10.5.
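A possible sketch of this two-phase adaptation (the names and tie-breaking details are our own assumptions):

import random

def uniform_permutation_crossover(p1, p2):
    """Two-phase uniform crossover for permutations.

    Phase 1: at each position, take the item of a randomly chosen parent if
    it is still unused; otherwise try the other parent; otherwise leave the
    position empty.  Phase 2: fill the empty positions randomly with the
    items not selected in phase 1."""
    n = len(p1)
    child, used = [None] * n, set()
    for i in range(n):
        a, b = (p1[i], p2[i]) if random.random() < 0.5 else (p2[i], p1[i])
        if a not in used:
            child[i] = a
            used.add(a)
        elif b not in used:
            child[i] = b
            used.add(b)
    remaining = [e for e in p1 if e not in used]
    random.shuffle(remaining)
    items = iter(remaining)
    for i in range(n):
        if child[i] is None:
            child[i] = next(items)
    return child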

10.2.2.2 Single-Point Crossover

The single-point crossover first randomly picks a point within the solution vector. Then it copies all the items of the first parent up to that point. Finally, it copies the items of the second parent from there. In practice, for a vector of n items, we randomly draw a number c between 1 and n − 1; we copy the items 1 to c from the first parent and the items c + 1 to n from the second parent. We can produce a second complementary offspring in parallel. Figure 10.6 illustrates this operator.

10.2.2.3 Two-Point Crossover

The two-point crossover consists in randomly selecting two different points. The offspring is created by copying the items before the first point and after the second point from one parent and copying the portion between the points from the other parent. This operator is illustrated in Fig. 10.7. The strategy can be generalized by choosing k crossover points.
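Both operators are special cases of a k-point crossover, which might be sketched as follows (illustrative code):

import random

def k_point_crossover(p1, p2, k):
    """k-point crossover on vectors of equal length.
    k = 1 gives the single-point and k = 2 the two-point operator."""
    n = len(p1)
    cuts = sorted(random.sample(range(1, n), k)) + [n]
    child, parents, current, start = [], (p1, p2), 0, 0
    for cut in cuts:
        child.extend(parents[current][start:cut])   # copy one segment
        current = 1 - current                       # switch parent at each cut
        start = cut
    return child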

10.2.2.4 OX Crossover

For each problem, we can invent a specific crossover operator. For instance, for the TSP, one can advance the argument that portions of the paths should be copied from the parents into the offspring. If a solution is a permutation of the cities, we realize that the uniform crossover seen previously (adapted to the case of permutations) does not really make sense: the starting city is not decisive. The cities that precede and succeed a given city are important, not the absolute position of the city in the tour. The two-point crossover operator can be adapted for the problems where the sequences are significant.
The OX crossover operator devised for the TSP begins by copying the intermediate portion of one parent, like the two-point crossover. The last city of this portion is located in the other parent and the offspring is completed by cyclically scanning the cities of this parent and inserting those not yet included. The OX crossover operator is illustrated in Fig. 10.8. An implementation of this operator is given in Code 10.2.
Code 10.2 OX_crossover.py Implementation of the OX crossover operator, preserving a sub-path
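A sketch in the spirit of this operator (not the exact listing of Code 10.2); it returns the offspring as a tour starting with the copied sub-path, which is equivalent up to a rotation:

import random

def ox_crossover(p1, p2):
    """OX crossover: copy a random sub-path of p1, then complete the child by
    scanning p2 cyclically from the city following the end of that sub-path."""
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = p1[i:j + 1]                      # intermediate portion of parent 1
    kept = set(child)
    start = (p2.index(p1[j]) + 1) % n        # locate the sub-path end in parent 2
    for k in range(n):
        city = p2[(start + k) % n]
        if city not in kept:                 # insert cities not yet included
            child.append(city)
    return child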

10.2.3 Mutation Operator

The mutation operator can be described in a simple way in the context of this book: it consists of randomly applying one or more local moves to the solution, as described in Chap. 5, devoted to local searches.
The mutation operator has two roles: firstly, the local modification can improve the solution, and, secondly, even if the solution is not improved, it slows down the global convergence of the algorithm by strengthening the genetic diversity of the population. Indeed, without this operator, the population can only lose diversity. For instance, the crossover operators presented above systematically copy the identical parts of the parents in the offspring. Thus, some genes take over compared to others that disappear with the elimination of solutions by the selection operator for survival.
Figure 10.9 illustrates the influence of the mutation rate for a problem where a permutation of n elements is sought. In this figure, a mutation rate of 5% means that this proportion of the elements is randomly swapped in the permutation. Code 10.3 gives an implementation of a mutation operator for problems on permutations.
Code 10.3 mutate.py Implementation of a mutation operator for problems on permutations
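A sketch of such an operator, where the mutation rate is interpreted as the proportion of elements involved in random swaps (each swap moving two elements):

import random

def mutate(p, rate):
    """Randomly swap about rate * len(p) elements of the permutation p."""
    p = p[:]                                 # work on a copy of the parent
    for _ in range(int(rate * len(p) / 2)):  # each swap involves two elements
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
    return p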

10.2.4 Selection for Survival

The last key operator in genetic algorithms is selection for survival, which aims to bring the population back to its initial size of μ individuals, after λ new solutions have been generated. Several selection policies have been devised, depending on the values chosen for the parameters μ and λ.

10.2.4.1 Generational Replacement

The simplest policy for selecting the individuals who will survive is to generate the same number of offspring as there are individuals in the population (λ = μ). The population at the beginning of the new generational loop is made up only of the offspring, the initial population disappearing. With such a choice, it is necessary to have a selection operator for reproduction that favors the best solutions. This means the best individuals are able to participate in the creation of several offspring, while some of the worst are excluded from the reproduction process.

10.2.4.2 Evolutionary Strategy

The evolutionary strategy (μ, λ) consists in generating numerous offspring (λ > μ) and keeping only the μ best of them for the next generation. The population is therefore completely renewed at each iteration of the generational loop. This strategy introduces a bias toward the fittest individuals from one generation to the next, so it is compatible with a uniform selection operator for reproduction.

10.2.4.3 Stationary Replacement

Another commonly used technique is to gradually evolve the population, with the generation of few offspring at each generational loop. A strategy is to generate λ = 2 children in each generation, which will replace their parents.

10.2.4.4 Elitist Replacement

Another more aggressive strategy is to consider all the μ + λ solutions available at the end of the generational loop and to keep only the best μ for the next generation. This strategy was adopted to produce Fig. 10.10 illustrating the evolution of the fittest solution of the populations for various values of μ.
Code 10.4 implements an elitist replacement when λ = 1, which means that only one offspring is produced at each generation. It replaces the worst solution in the population (provided the offspring is not itself even worse). In this code, we have included a basic population management requiring that the population contain only distinct individuals. To simplify the equality test between two solutions, they are discriminated only on the basis of their fitness: two solutions of the same length are considered identical.
Code 10.4 insert_child.py Implementation of elitist replacement where each generation produces only one child. This procedure implements basic population management where all individuals must have different fitness
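A sketch of this replacement policy, written here for a fitness to be maximized (for the TSP, one would compare tour lengths in the opposite direction):

def insert_child(population, fitnesses, child, child_fitness):
    """Elitist replacement with lambda = 1: the child replaces the worst
    individual, provided it is better and no individual already has the
    same fitness (basic population management without duplicates)."""
    worst = min(range(len(population)), key=lambda i: fitnesses[i])
    if child_fitness > fitnesses[worst] and child_fitness not in fitnesses:
        population[worst] = child
        fitnesses[worst] = child_fitness
        return True                          # the child was inserted
    return False                             # the child was discarded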

10.3 Memetic Algorithms

Genetic algorithms have two major drawbacks: first, nothing ensures that the best solution found cannot be improved by a simple local modification, as seen in Chap. 5. Second, the diversity of the population declines with each iteration of the generational loop, eventually consisting only of clones of the same individual.
To overcome these two drawbacks, Moscato [8] designed what he called memetic algorithms. The first of these shortcomings is solved by applying a local search after producing an offspring. The simplest way to avoid duplication of individuals in the population is to eliminate them immediately, as implemented in Code 10.4.
Code 10.5 illustrates a straightforward implementation of a memetic algorithm for the TSP. The offspring are improved using a local search based on ejection chains. They replace the worst solution in the population only if they are of better quality and their evaluation differs from all those in the population, thus ensuring that no duplicates are created. This algorithm implements only an elementary version of a memetic algorithm.
Code 10.5 tsp_GA.py Implementation of a memetic algorithm for the TSP. This algorithm uses a selection operator for reproduction based on rank. After its generation, the offspring is improved by a local search (ejection chain method) and immediately replaces the worst solution in the population. This algorithm has three parameters: the number μ of solutions in the population, the number of generational loops to be performed, and the mutation rate
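Combining the sketches above, the skeleton of such a memetic algorithm might look as follows (written, like the sketches above, for a fitness to be maximized; local_search stands for the ejection chain improvement, which is not reproduced here):

def memetic_algorithm(generate, fitness, local_search, crossover, mutate,
                      mu, generations, mutation_rate):
    """Skeleton of an elementary memetic algorithm: rank-based selection of
    two parents, crossover, mutation, local search, elitist replacement."""
    population = sorted((generate() for _ in range(mu)),
                        key=fitness, reverse=True)
    fitnesses = [fitness(s) for s in population]
    for _ in range(generations):
        a = rank_based_selection(mu) - 1       # indices of the two parents
        b = rank_based_selection(mu) - 1
        child = local_search(mutate(crossover(population[a], population[b]),
                                    mutation_rate))
        insert_child(population, fitnesses, child, fitness(child))
        # keep the population sorted by decreasing fitness for rank selection
        order = sorted(range(mu), key=lambda i: -fitnesses[i])
        population = [population[i] for i in order]
        fitnesses = [fitnesses[i] for i in order]
    return population[0]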
Sörensen and Sevaux [11] proposed a more advanced population management. These authors suggest evaluating, for each solution produced, a similarity measure with the solutions contained in the population. Solutions that are too similar are discarded to maintain sufficient diversity, so that the algorithm does not converge prematurely.

10.4 Scatter Search

Scatter search is almost as old as genetic algorithms. Glover [1] proposed this technique in the context of integer linear programming. At the time, it broke certain taboos, such as representing a solution in a natural form rather than coding it as a binary vector, or mixing more than two solutions together, as metaphorically illustrated in Fig. 10.11.
The chief ideas of scatter search comprise the following characteristics, presented in contrast to traditional genetic algorithms:
Dispersed initial population
Rather than randomly generating a large initial population, the latter is generated deterministically and as scattered as possible in the space of potential solutions. These solutions are not necessarily feasible but are rendered so by a repair/improvement operator.
Natural representation of solutions
Solutions are represented in a natural way and not necessarily with binary vectors of a given size.
Combination of several solutions
More than two solutions may contribute to the production of a new potential solution. Rather than relying on a large population and a selection operator for reproduction, scatter search tries all possible combinations of individuals in the population, which must therefore be limited to a few dozen solutions.
Repair/improvement operator
Because of the natural representation of solutions, the simultaneous combination of several individuals does not necessarily produce a feasible solution. A repair operator projecting a potential infeasible solution into the space of feasible solutions is therefore expected. This operator can also improve a feasible solution, especially by means of a local search.
Population management
A reference population, of small size, is decomposed into a subset of elite solutions (the best ones) and other solutions as different as possible from the elites. The goal is to increase the diversity of the population while keeping the best solutions.
The framework of scatter search is given by Algorithm 10.2. The advantage of this framework is its limited number of parameters: μ for the size of the reference population and E < μ for the number of elite solutions. Moreover, the value of μ must be limited to about twenty, since the number of potential solutions to combine increases exponentially with μ; this also means that the number E of elite solutions should range from a handful to about ten.
Algorithm 10.2: Scatter search framework

10.4.1 Illustration of Scatter Search for the Knapsack Problem

To illustrate how the various options in the scatter search framework can be adapted to a particular problem, let us consider a knapsack instance:
[Knapsack instance (10.1): ten objects with given values and volumes; maximize the total value packed subject to the knapsack volume capacity]

10.4.1.1 Initial Population

The solutions to this problem are, therefore, ten-component binary vectors. To generate a set of potential solutions as scattered as possible, one can choose to put either all the objects in the knapsack, or one out of two, or one out of three, etc. For each potential solution thus generated, the complementary solution can also be added to the population. Naturally, not all the solutions from the population are feasible. To be specific, the solution with all objects does not satisfy the knapsack volume constraint; its complementary solution, with an objective value of zero, is the worst possible.
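Such a deterministic generation might be sketched as follows; for the ten-object instance, it produces the ten potential solutions of Table 10.2 (in a different order):

def scattered_population(n):
    """Pack every object, then every second, third, ..., n//2-th object,
    adding the complement of each pattern to the population."""
    population = []
    for step in range(1, n // 2 + 1):
        x = [1 if i % step == 0 else 0 for i in range(n)]
        population.append(x)
        population.append([1 - xi for xi in x])   # complementary solution
    return population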
A repair/improvement operator must therefore be applied to these potential solutions. This can be performed as follows: as long as the solution is not feasible, remove the object with the worst value/volume ratio. A feasible solution can be improved greedily, by including the object with the best value/volume ratio as long as the capacity of the knapsack permits it. This produces the population of solutions given in Table 10.2.
Table 10.2 Initial scattered population P for the knapsack instance (10.1) and the result of applying the repair/improvement operator on the potential solutions. In the original, the infeasible potential solutions are highlighted in bold, as are the E = 3 elite solutions

      Potential solution       Value   Repaired/improved solution   Value
 1    (1,1,1,1,1,1,1,1,1,1)     81     (0,1,1,1,0,0,0,0,1,1)         42
 2    (1,0,1,0,1,0,1,0,1,0)     40     (1,0,1,1,1,0,0,0,0,0)         42
 3    (1,0,0,1,0,0,1,0,0,1)     38     (1,0,0,1,0,0,1,0,0,1)         38
 4    (1,0,0,0,1,0,0,0,1,0)     24     (1,0,0,1,1,0,0,0,1,0)         36
 5    (1,0,0,0,0,1,0,0,0,0)     17     (1,0,1,1,0,1,0,0,0,0)         38
 6    (0,0,0,0,0,0,0,0,0,0)      0     (0,1,1,1,0,0,0,0,0,1)         39
 7    (0,1,0,1,0,1,0,1,0,1)     41     (0,1,0,1,0,1,0,0,0,1)         36
 8    (0,1,1,0,1,1,0,1,1,0)     43     (0,1,1,1,1,0,0,0,1,0)         44
 9    (0,1,1,1,0,1,1,1,0,1)     57     (0,1,1,1,0,0,0,0,1,1)         42   = solution 1
10    (0,1,1,1,1,0,1,1,1,1)     64     (0,1,1,1,0,0,0,0,1,1)         42   = solution 1
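The repair/improvement operator just described might be sketched as follows; values, volumes, and capacity stand for the instance data of (10.1):

def repair_improve(x, values, volumes, capacity):
    """Drop packed objects with the worst value/volume ratio until feasible,
    then greedily add unpacked objects with the best ratio while they fit."""
    x = x[:]
    load = sum(v for v, xi in zip(volumes, x) if xi)
    while load > capacity:                   # repair phase
        worst = min((i for i, xi in enumerate(x) if xi),
                    key=lambda i: values[i] / volumes[i])
        x[worst] = 0
        load -= volumes[worst]
    order = sorted((i for i, xi in enumerate(x) if not xi),
                   key=lambda i: values[i] / volumes[i], reverse=True)
    for i in order:                          # greedy improvement phase
        if load + volumes[i] <= capacity:
            x[i] = 1
            load += volumes[i]
    return x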

10.4.1.2 Creation of the Reference Set

Solutions 9 and 10 are identical to the first solution and are therefore eliminated. If we choose a set of E = 3 elite solutions, these are solutions 1, 2, and 8. Assuming that one wishes a reference set of μ = 5 solutions, two solutions must be added to the three elites, among solutions 3 to 7. The two solutions completing the reference set are determined by evaluating a measure of dissimilarity with the elites. An approach is to consider the solutions maximizing the smallest Hamming distance to one of the elites, as illustrated in Table 10.3.
Table 10.3 Determining the solutions from the population that are as different as possible from the elites. The three middle columns give the Hamming distance to each elite solution. For a reference set of μ = 5 solutions, solutions 3 and 7 are retained in addition to the three elites, because they maximize the smallest distance to one of the elites

      Candidate solution       Elite 1   Elite 2   Elite 8   Minimal distance
 3    (1,0,0,1,0,0,1,0,0,1)       5         4         7            4
 4    (1,0,0,1,1,0,0,0,1,0)       5         2         3            2
 5    (1,0,1,1,0,1,0,0,0,0)       5         2         5            2
 6    (0,1,1,1,0,0,0,0,0,1)       1         4         3            1
 7    (0,1,0,1,0,1,0,0,0,1)       3         6         5            3
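The completion of the reference set can be sketched as follows (illustrative code):

def hamming(a, b):
    """Number of differing components between two vectors."""
    return sum(x != y for x, y in zip(a, b))

def most_diverse(candidates, elites, k):
    """Return the k candidates maximizing the smallest Hamming distance
    to one of the elite solutions (the criterion of Table 10.3)."""
    return sorted(candidates,
                  key=lambda c: min(hamming(c, e) for e in elites),
                  reverse=True)[:k]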

10.4.1.3 Combining Solutions

Finally, we need to implement an operator that allows us to create a potential solution by combining several of them from the reference set. Let us suppose we want to combine solutions 3, 7, and 8, of values 38, 36, and 44, respectively. One possibility is to consider the solutions as numerical vectors and make a linear combination of them. It is tempting to assign a weight according to the solution’s fitness. One idea is to give a weight of \(\frac {38}{38+36+44}\) to solution 3, of \(\frac {36}{38+36+44}\) to solution 7, and of \(\frac {44}{38+36+44}\) to solution 8. The vector thus obtained is rounded to project it to binary values:
$$\displaystyle \begin{aligned} 0.322 \cdot & (1,0,0,1,0,0,1,0,0,1) \\ {}+ 0.305 \cdot & (0,1,0,1,0,1,0,0,0,1) \\ {}+ 0.373 \cdot & (0,1,1,1,1,0,0,0,1,0) \\ = \; & (0.322, 0.678, 0.373, 1.000, 0.373, 0.305, 0.322, 0.000, 0.373, 0.627)\\ \text{rounded: } & (0, 1, 0, 1, 0, 0, 0, 0, 0, 1) \end{aligned} $$

10.5 Biased Random Key Genetic Algorithms

Biased random key genetic algorithms (BRKGA) also provide population management with a subset of E elite solutions that are copied to the next generation. The main ingredients of this technique are:
  • An array of real numbers (keys) encodes a solution. If a natural representation of a solution is a permutation, then the permutation is one that sorts the keys in increasing order.
  • The E best solutions from the population are kept for the next generation.
  • The selection operator for reproduction always chooses a solution among the E best.
  • An offspring is generated with a uniform crossover operator, but the components of the best parent solution are chosen with probability > 1/2.
  • At each generation, λ < μ − E children are generated. These offspring replace non-elite solutions for the next iteration.
  • The genetic diversity of the population is ensured by the introduction of μ − E − λ new randomly drawn arrays (mutants); this replaces the mutation operator.
Figure 10.12 illustrates how this method operates to generate a new solution.
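The two distinctive ingredients, decoding the keys and the biased uniform crossover, might be sketched as follows (the bias of 0.7 is an assumed, typical value):

import random

def decode(keys):
    """The permutation encoded by the keys is the one sorting them."""
    return sorted(range(len(keys)), key=lambda i: keys[i])

def biased_crossover(elite_keys, other_keys, bias=0.7):
    """Take each key from the elite parent with probability bias > 1/2."""
    return [e if random.random() < bias else o
            for e, o in zip(elite_keys, other_keys)]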

10.6 Path Relinking

Path relinking (PR) was proposed by Glover [2] in the context of taboo search. The idea is to memorize a number of good solutions found by a taboo search. We select two of these solutions, which have been linked by a path with the taboo search. We link these two solutions again by a new, shorter path, going from neighboring solution to neighboring solution.
This technique can be implemented independently of a taboo search since all that is needed to implement it is a population of solutions and a neighborhood structure. A starting solution and an ending (target) solution are chosen from the population. We evaluate all the neighbors of the starting solution that are closer to the target solution than the starting one. Among these neighbors, the one with the best evaluation is identified, and the process is repeated from there until we arrive at the target solution. With a bit of luck, one of the intermediate solutions improves the best solution discovered. The path relinking technique is illustrated in Fig. 10.13.
There are different versions of path relinking: the path can be traversed in both directions by reversing the roles of the starting and target solutions; an improvement method can be applied to each intermediate solution; finally, the starting and target solutions can be alternately modified, the process stopping when the two paths meet at an intermediate solution. Code 10.6 provides an implementation of path relinking for the TSP. It is based on 3-opt moves.
Code 10.6 tsp_path_relinking.py Path relinking implementation for the TSP. At each iteration, we identify a 3-opt move that incorporates at least one arc from the target solution to the current solution
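Code 10.6 works with 3-opt moves on tours; the following simpler sketch illustrates the principle on binary vectors, for a fitness to be maximized:

def path_relinking(start, target, fitness):
    """Walk from start to target, always flipping the differing component
    that yields the best intermediate solution; return the best one found."""
    current, best = start[:], start[:]
    diff = [i for i in range(len(start)) if start[i] != target[i]]
    while diff:
        i_best = max(diff, key=lambda i: fitness(
            current[:i] + [target[i]] + current[i + 1:]))
        current[i_best] = target[i_best]     # one step closer to the target
        diff.remove(i_best)
        if fitness(current) > fitness(best):
            best = current[:]
    return best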

10.6.1 GRASP with Path Relinking

A method using the core components of metaheuristics (construction, local search, and management of a population of solutions) while remaining relatively simple and with few parameters is the GRASP-PR method (greedy randomized adaptive search procedure with path relinking) of Laguna and Marti [6]. The idea is to generate a population P of different solutions by means of a GRASP with a parameter α (see Algorithm 7.8). These solutions are improved by means of a local search.
Then, we repeat \(I_{\max}\) times a loop in which we build a new solution, greedily and with a bias. This solution is also improved with a local search. We then randomly draw another solution of P and apply a path relinking procedure between both solutions.
The best solution of the path is added to P if it is both strictly better than at least one solution of P and not already present in P. The new solution replaces the solution of P that is the most different from it while being worse.
Algorithm 10.3 provides the GRASP-PR framework. Code 10.7 implements a GRASP-PR method for the TSP. The reader interested in recent GRASP-based optimization tools can find extensive information in the recent book of Resende and Ribeiro [10].
Algorithm 10.3: GRASP-PR framework
Code 10.7 tsp_GRASP_PR.py GRASP with path relinking implementation for the TSP
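A schematic sketch of this framework, reusing the hamming and path_relinking sketches above (grasp and local_search are assumed to be provided, and solutions are discriminated by their fitness, as in Code 10.4):

import random

def grasp_pr(grasp, local_search, fitness, pop_size, iterations):
    """Sketch of GRASP with path relinking (Algorithm 10.3)."""
    population = [local_search(grasp()) for _ in range(pop_size)]
    for _ in range(iterations):
        solution = local_search(grasp())
        other = random.choice(population)
        best_on_path = path_relinking(solution, other, fitness)
        worse = [i for i, s in enumerate(population)
                 if fitness(s) < fitness(best_on_path)]
        if worse and all(fitness(s) != fitness(best_on_path)
                         for s in population):
            # replace the worse solution most different from the new one
            i = max(worse, key=lambda i: hamming(population[i], best_on_path))
            population[i] = best_on_path
    return max(population, key=fitness)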

10.7 Fixed Set Search

The Fixed Set Search method [4] (FSS) also incorporates several mechanisms that are discussed in this book. First, a population of solutions is generated using a standard GRASP procedure. Then, this population is gradually improved by applying a GRASP procedure guided by a learning mechanism. The latter can be seen as vocabulary building: one randomly selects a few solutions from the population and calculates the frequency of occurrence of the elements constituting these solutions. Then, another solution is randomly selected from the population. Among the elements constituting this solution, a fixed number are retained, determined by those having the highest frequency of occurrence previously calculated. The randomized greedy construction is modified so that it produces a solution containing all the fixed elements.
In the case of the TSP, these elements form sub-paths. A step in the randomized construction adds either an edge connecting a city not in the selected sub-paths or all the edges of a fixed sub-path. The tour thus constructed is improved by a local search and enriches the population of solutions.
The FSS method has several parameters: a stopping criterion (e.g., a number of iterations without improvement of the best solution), the number of solutions selected to determine the fixed set, the number of elements of the fixed set (which can vary from one iteration to another), and the α parameter of the randomized construction.
Another way of looking at FSS is to see it as an LNS-type method (Section 6.4.1) with learning mechanisms: the acceptance method manages a population of solutions. The destruction method chooses a random solution from the population and relaxes the elements that do not appear frequently in a random sample of solutions from the population.

10.8 Particle Swarm

Particle swarms are a bit special because they were first designed for continuous optimization. The idea is to evolve a population of particles. Their position represents a solution to the problem expressed as a vector of real numbers. The particles interact with each other. Each has a velocity in addition to its position and is attracted or repelled by the other particles.
This type of method, proposed by Kennedy and Eberhart [5], simulates the behavior of animals living in swarms, such as birds, insects, or fish, which adopt a behavior that favors their survival, whether it be to feed, defend themselves against predators, or undertake a migration. Each individual in the swarm is influenced by those nearby and possibly by a leader.
Translated into optimization terms, each particle represents a solution to the problem whose quality is measured by a fitness function. A particle moves at a certain speed in a given direction, but it is deflected by its environment: if there is another particle-solution of better quality in the vicinity, it is attracted in its direction. In that manner, each solution enumerated by the algorithm can be associated with the vertex of a graph. The edges of this graph correspond to particles that influence each other.
There are various variants of particle swarm methods, differing in the influence graph and the formulae used to calculate the deviations in particle velocity \(\overrightarrow {v_p}\). In its most classic version, a particle p is influenced by only two solutions: the global best solution \(\overrightarrow {g}\) found by the set of particles and the best solution \(\overrightarrow {m_p}\) it has found itself. The new velocity of the particle is a vector. Each component is modified with weights randomly drawn between 0 and \(\phi_1\) in the direction of \(\overrightarrow {m_p}\) and between 0 and \(\phi_2\) in the direction of \(\overrightarrow {g}\), where \(\phi_1\) and \(\phi_2\) are parameters of the method. In addition, a particle is given an inertia \(\omega\) as a parameter. Figure 10.14 illustrates the update of the velocity and the position of a particle. Algorithm 10.4 provides a simple particle swarm framework.
Algorithm 10.4: Particle swarm framework. The symbol ⊗ denotes component-wise multiplication
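A sketch of this framework for a fitness to be maximized (the parameter values are assumed, typical ones):

import random

def particle_swarm(fitness, dim, particles, iterations,
                   omega=0.7, phi1=1.5, phi2=1.5):
    """Classic particle swarm: each particle is attracted by its own best
    position m_p and by the best position g found by the whole swarm."""
    pos = [[random.uniform(-1, 1) for _ in range(dim)]
           for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    m = [p[:] for p in pos]                     # best position of each particle
    g = max(pos, key=fitness)[:]                # best position of the swarm
    for _ in range(iterations):
        for p in range(particles):
            for d in range(dim):
                vel[p][d] = (omega * vel[p][d]
                             + random.uniform(0, phi1) * (m[p][d] - pos[p][d])
                             + random.uniform(0, phi2) * (g[d] - pos[p][d]))
                pos[p][d] += vel[p][d]          # move the particle
            if fitness(pos[p]) > fitness(m[p]):
                m[p] = pos[p][:]
                if fitness(m[p]) > fitness(g):
                    g = m[p][:]
    return g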
Various modifications have been proposed to this basic version. For instance, instead of being influenced by the best solution it has found itself, a particle is influenced by the best solution found by particles in its neighborhood. It is then necessary to define according to which topology the latter is chosen. A common variant is to constrain the velocity of the particles to remain between two bounds, \(v_{\min}\) and \(v_{\max}\). This adds two parameters to the framework. Another proposed modification is to apply a small random shift to some particles to simulate turbulence.

10.8.1 Electromagnetic Method

In the electromagnetic method, a particle induces a force of attraction or repulsion on all the others. This force depends on the inverse of the square of the distance between the particles, like electrical forces. The direction of the force depends on the quality of the solutions. A particle is attracted by a solution that is better than itself and repelled by a worse solution.

10.8.2 Bestiary

In previous sections, we have only mentioned the basic algorithms, inspired by the behavior of social animals, and a variant, inspired by a process of physics. Different authors have proposed many metaheuristics whose framework is similar to that of Algorithm 10.4.
What distinguishes them is essentially the way of initializing the velocity and the position of the particles (lines 3 and 4) as well as the “magic formulas” for their updates (lines 13 and 14).
These various magic formulas are inspired by the behavior of various animal species or by physical processes. To name just a few, there are amoeba, bacteria, bat, bee, butterfly, cockroach, cuckoo, electromagnetism, firefly, and mosquito. There are numerous variants of these frameworks, obtained by hybridizing them with the key components of the metaheuristics discussed in this book. There are hundreds of proposals in the literature suggesting “new” metaheuristics inspired by various metaphors, sometimes even referring to the behavior of mythic creatures!
Very schematically, it is a matter of applying the intensification and diversification principles: elimination of certain solutions from the population, concentration toward the best discovered solutions, random walk, etc.
A number of these frameworks have been proposed in the context of continuous optimization. To adapt these methods to discrete optimization, one can implement a coding scheme, for example, the random keys seen in Section 10.5. Another solution is to consider the notion of neighborhood and path relinking. The reader who is a friend of animals and other creatures may consult [7] for a bestiary overview.
Rather than trying to devise a new heuristic based on an exotic metaphor and obscure terminology, we encourage the reader to use a standardized description, following the basic principles presented in this book. Indeed, during the last quarter century, there have been few truly innovative new concepts. It is a matter of adopting a more scientific posture: justifying the choices of problem modeling, establishing test protocols, and so on, even if a theory of metaheuristics still seems very far away and the heuristic solution of real-world optimization problems remains the only option.
Problems
10.1
Genetic Algorithm for a One-Dimensional Function
We need to optimize a function f of an integer variable x, \(0 \leqslant x < 2^n\). In the context of a genetic algorithm with a standard crossover operator, how to encode x in the form of a binary vector?
10.2
Inversion Sequence
A permutation p of the elements from 1 to n can be represented by an inversion sequence s, where \(s_i\) counts the number of elements of \(p_1, \dots, p_k\) (with \(p_k = i\)) that are greater than i. For example, the permutation p = (2, 4, 6, 1, 5, 3) has the inversion sequence s = (3, 0, 3, 0, 1, 0): three elements greater than 1 precede 1, no element greater than 2 precedes 2, etc. To which permutations do the inversion sequences (4, 2, 3, 0, 1, 0) and (0, 0, 3, 1, 2, 0) correspond? Provide necessary and sufficient conditions for a vector s to be the inversion sequence of a permutation. Can the standard 1-point, 2-point, and uniform crossover operators be applied to inversion sequences? How can inversion sequences be used in the context of scatter search?
10.3
Rank-Based Selection
What is the probability that the function rank_based_selection(m), given in Code 10.1, returns a given value v?
10.4
Tuning a Genetic Algorithm
Adjust the population size and the mutation rate of the procedure tsp_GA given by Code 10.5, assuming it generates a total of 5n children.
10.5
Scatter Search for the Knapsack Problem
Consider the knapsack instance (10.1) of Section 10.4. Perform the first iteration of a scatter search for this instance: generate the new population, repair/improve the solutions, and update the reference set consisting of five solutions with three elites.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
References
2. Glover, F.: Tabu search and adaptive memory programming — advances, applications and challenges. In: Barr, R.S., Helgason, R.V., Kennington, J.L. (eds.) Interfaces in Computer Science and Operations Research: Advances in Metaheuristics, Optimization, and Stochastic Modeling Technologies, pp. 1–75. Springer, Boston (1997). https://doi.org/10.1007/978-1-4615-4102-8_1
3. Holland, J.: Adaptation in Natural and Artificial Systems. The University of Michigan Press, Ann Arbor (1975)
4. Jovanovic, R., Tuba, M., Voß, S.: Fixed set search applied to the Traveling Salesman Problem. In: Blum, C., Gambini Santos, H., Pinacho-Davidson, P., Godoy del Campo, J. (eds.) Hybrid Metaheuristics. Lecture Notes in Computer Science, vol. 11299, pp. 63–77. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-05983-5_5
8. Moscato, P.: Memetic algorithms: A short introduction. In: Corne, D., Glover, F., Dorigo, M. (eds.) New Ideas in Optimisation, pp. 219–235. McGraw-Hill, London (1999)
Metadata
Title: Population Management
Author: Éric D. Taillard
Copyright Year: 2023
DOI: https://doi.org/10.1007/978-3-031-13714-3_10
