
About This Book

This book constitutes selected best papers from the 10th International Conference on Artificial Evolution, EA 2011, held in Angers, France, in October 2011. Initially, 33 full papers and 10 poster papers were carefully reviewed and selected from 64 submissions. This book presents the 19 best papers selected from these contributions. The papers are organized in topical sections on ant colony optimization; multi-objective optimization; analysis; implementation and robotics; combinatorial optimization; learning and parameter tuning; new nature inspired models; probabilistic algorithms; theory and evolutionary search; and applications.



Ant Colony Optimization

An Immigrants Scheme Based on Environmental Information for Ant Colony Optimization for the Dynamic Travelling Salesman Problem

Ant colony optimization (ACO) algorithms have proved to be powerful methods for dynamic optimization problems. However, once the population converges to a solution and a dynamic change occurs, it is difficult for the population to adapt to the new environment, since high levels of pheromone accumulate on a single trail and force the ants to follow it even after the change. A good solution is to maintain diversity by transferring knowledge to the pheromone trails. Hence, we propose an immigrants scheme based on environmental information for ACO to address the dynamic travelling salesman problem (DTSP) with a traffic factor. The immigrants are generated using a probabilistic distribution based on the frequency of cities, constructed from a number of ants of the previous iteration, and replace the worst ants in the current population. Experimental results on different DTSP test cases show that the proposed immigrants scheme enhances the performance of ACO through the knowledge transferred from the previous environment and the generation of guided diversity.
Michalis Mavrovouniotis, Shengxiang Yang
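The frequency-based immigrant generation described in the abstract can be illustrated by a minimal Python sketch. The function names, the successor-frequency representation, and the +1 smoothing below are our assumptions for illustration, not the authors' exact scheme:

```python
import random

def frequency_matrix(tours, n_cities):
    """Count how often city b directly follows city a across a set of tours."""
    freq = [[0] * n_cities for _ in range(n_cities)]
    for tour in tours:
        for i, a in enumerate(tour):
            b = tour[(i + 1) % len(tour)]
            freq[a][b] += 1
    return freq

def generate_immigrant(freq, n_cities, rng):
    """Build a tour city by city, choosing each successor in proportion to its
    observed frequency (+1 smoothing keeps every unvisited city reachable)."""
    current = rng.randrange(n_cities)
    tour, visited = [current], {current}
    while len(tour) < n_cities:
        candidates = [c for c in range(n_cities) if c not in visited]
        weights = [freq[current][c] + 1 for c in candidates]
        current = rng.choices(candidates, weights=weights)[0]
        tour.append(current)
        visited.add(current)
    return tour
```

In the scheme the abstract outlines, tours generated this way would then replace the worst ants of the current population.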

Multi-Objective Optimization

A Surrogate-Based Intelligent Variation Operator for Multiobjective Optimization

Evolutionary algorithms are meta-heuristics that have shown flexibility, adaptability and good performance when solving Multiobjective Optimization Problems (MOPs). However, in order to achieve acceptable results, Multiobjective Evolutionary Algorithms (MOEAs) usually require a large number of evaluations of the optimization function. When each of these evaluations carries a high computational cost, such expensive problems remain intractable even for these meta-heuristics. To reduce the computational cost of expensive optimization problems, some researchers have replaced the real optimization function with a computationally inexpensive surrogate model. In this paper, we propose a new intelligent variation operator based on surrogate models. The operator is incorporated into a stand-alone search mechanism in order to validate it. Results indicate that the proposed algorithm can be used to optimize MOPs, but that it exhibits premature convergence on multifrontal MOPs. To overcome this drawback, the proposed operator was successfully hybridized with a MOEA. Results show that this latter approach outperformed both the stand-alone proposed algorithm and the evolutionary algorithm without the operator.
Alan Díaz-Manríquez, Gregorio Toscano-Pulido, Ricardo Landa-Becerra

The Relationship between the Covered Fraction, Completeness and Hypervolume Indicators

This paper investigates the relationship between the covered fraction, completeness, and (weighted) hypervolume indicators for assessing the quality of the Pareto-front approximations produced by multiobjective optimizers. It is shown that these unary quality indicators are all, by definition, weighted Hausdorff measures of the intersection of the region attained by such an optimizer outcome in objective space with some reference set. Moreover, when the optimizer is stochastic, the indicators considered lead to real-valued random variables following particular probability distributions. Expressions for the expected value of these distributions are derived, and shown to be directly related to the first-order attainment function.
Viviane Grunert da Fonseca, Carlos M. Fonseca
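The attainment-function theory above treats familiar unary indicators in a unified measure-theoretic way; as a concrete point of reference, the plain two-dimensional hypervolume indicator it covers can be computed with a simple sweep. This is our own sketch, assuming minimization and a fixed reference point, not code from the paper:

```python
def hypervolume_2d(front, ref):
    """Hypervolume (area dominated) of a 2-D minimisation front with respect
    to a reference point, via a sweep over points sorted on the first objective."""
    pts = sorted(front)  # ascending in f1
    # keep only the non-dominated points of the sweep
    nd, best_f2 = [], float("inf")
    for f1, f2 in pts:
        if f2 < best_f2:
            nd.append((f1, f2))
            best_f2 = f2
    # accumulate the rectangular slices between consecutive f2 levels
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in nd:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```

For the front {(1,3), (2,2), (3,1)} and reference point (4,4), the dominated area is 6.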


A Rigorous Runtime Analysis for Quasi-Random Restarts and Decreasing Stepsize

Multi-Modal Optimization (MMO) is ubiquitous in engineering, machine learning and artificial intelligence applications. Many algorithms have been proposed for multimodal optimization, and many of them are based on restart strategies. However, only a few works address the issue of initialization in restarts, and very few comparisons have been made between different MMO algorithms or against simple baseline methods. This paper proposes an analysis of restart strategies and provides a restart strategy, applicable to any local search algorithm, for which theoretical guarantees are derived. This restart strategy decreases some 'step-size' rather than increasing the population size, and it uses quasi-random initialization, which leads to a rigorous proof of improvement with respect to random restarts or restarts with constant initial step-size. Furthermore, when this strategy encapsulates a (1+1)-ES with the 1/5th adaptation rule, the resulting algorithm outperforms state-of-the-art MMO algorithms while being computationally faster.
Marc Schoenauer, Fabien Teytaud, Olivier Teytaud
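The building blocks named above, a (1+1)-ES with the 1/5th success rule inside a restart loop with decreasing initial step-size, can be sketched as follows. This is a simplified illustration: the adaptation constants, the halving schedule, and the plain random initialization are our choices, whereas the paper argues specifically for quasi-random initial points:

```python
import math
import random

def one_plus_one_es(f, x0, sigma0, iters, rng):
    """(1+1)-ES with a 1/5th success rule: on success multiply sigma by a > 1,
    on failure by a**-0.25, so sigma is stationary at a 1/5 success rate."""
    a = 1.5
    x, fx, sigma = list(x0), f(x0), sigma0
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:
            x, fx, sigma = y, fy, sigma * a
        else:
            sigma *= a ** -0.25
    return x, fx

def restarted_es(f, dim, sigma0, restarts, iters, rng):
    """Restart wrapper in the spirit of the abstract: halve the initial step
    size at each restart instead of growing the population."""
    best_x, best_f = None, float("inf")
    for r in range(restarts):
        x0 = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
        x, fx = one_plus_one_es(f, x0, sigma0 / 2 ** r, iters, rng)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

On a smooth unimodal function such as the sphere, each restart converges locally; on multimodal landscapes the decreasing step-size lets later restarts probe finer basins.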

Local Optima Networks with Escape Edges

This paper proposes an alternative definition of edges (escape edges) for the recently introduced network-based model of combinatorial landscapes: Local Optima Networks (LON). The model compresses the information given by the whole search space into a smaller mathematical object: the graph whose vertices are the local optima and whose edges are the possible weighted transitions between them. The original definition of edges captured the notion of transitions between the basins of attraction of local optima. This definition, although informative, produced densely connected networks and required exhaustive sampling of the basins of attraction. The alternative escape edges proposed here do not require a full computation of the basins. Instead, they account for the chances of escaping a local optimum after a controlled mutation (e.g. 1 or 2 bit-flips) followed by hill-climbing. A statistical analysis comparing the two LON models for a set of NK landscapes is presented and discussed. Moreover, a preliminary study is presented, which aims at validating the LON models as a tool for analyzing the dynamics of stochastic local search in combinatorial optimization.
Sébastien Vérel, Fabio Daolio, Gabriela Ochoa, Marco Tomassini

Visual Analysis of Population Scatterplots

We investigate how visual analytic tools can deal with the huge amount of data produced during the run of an evolutionary algorithm. We show, on toy examples and on two real-life problems, how a multidimensional data visualisation tool like ScatterDice/GraphDice can easily be used to analyse raw output data produced along the run of an evolutionary algorithm. Visual interpretation of population data is not often used by the EA community for experimental analysis. We show here that this approach may yield additional high-level information that is hardly accessible through conventional computation.
Evelyne Lutton, Julie Foucquier, Nathalie Perrot, Jean Louchet, Jean-Daniel Fekete

Implementation and Robotics

An On-Line On-Board Distributed Algorithm for Evolutionary Robotics

Imagine autonomous, self-sufficient robot collectives that can adapt their controllers autonomously and self-sufficiently to learn to cope with situations unforeseen by their designers. As one step towards the realisation of this vision, we investigate on-board evolutionary algorithms that allow robot controllers to adapt without any outside supervision and while the robots perform their actual tasks. We propose an EvAg-based on-board evolutionary algorithm, where controllers are exchanged among robots that evolve simultaneously. We compare it with the (μ + 1) on-line algorithm, which implements evolutionary adaptation inside a single robot. We perform simulation experiments to investigate algorithm performance and use parameter tuning to evaluate the algorithms at their best possible parameter settings. We find that distributed on-line on-board evolutionary algorithms that share genomes among robots, such as our EvAg implementation, effectively harness the pooled learning capabilities, with an increasing benefit over encapsulated approaches as the number of participating robots grows.
Robert-Jan Huijsman, Evert Haasdijk, A. E. Eiben

Improving Performance via Population Growth and Local Search: The Case of the Artificial Bee Colony Algorithm

We modify an artificial bee colony algorithm as follows: we make the population size grow over time and apply local search on strategically selected solutions. The modified algorithm obtains very good results on a set of large-scale continuous optimization benchmark problems. This is not the first time we have seen the two aforementioned modifications make an initially non-competitive algorithm obtain state-of-the-art results. In previous work, we showed that the same modifications substantially improve the performance of particle swarm optimization and ant colony optimization algorithms. Altogether, these results suggest that population growth coupled with local search helps obtain high-quality results.
Doğan Aydın, Tianjun Liao, Marco A. Montes de Oca, Thomas Stützle

Two Ports of a Full Evolutionary Algorithm onto GPGPU

This paper presents two parallelizations of a standard evolutionary algorithm on an NVIDIA GPGPU card, made possible by a parallel replacement operator.
These algorithms tackle new problems on which previously presented approaches do not obtain satisfactory speedups. Although programming is more complicated and fewer options are available, the whole algorithm is executed in parallel, thereby fully exploiting the intrinsic parallelism of EAs and the many available GPGPU cores.
Finally, the method is validated on two benchmarks.
Ogier Maitre, Nicolas Lachiche, Pierre Collet

Combinatorial Optimization

A Multilevel Tabu Search with Backtracking for Exploring Weak Schur Numbers

In the field of Ramsey theory, the weak Schur number WS(k) is the largest integer n for which there exists a partition of the integers [1,n] into k subsets such that there are no x < y < z in the same subset with x + y = z. Although studied since 1941, only the weak Schur numbers WS(1) through WS(4) are precisely known; for k ≥ 5 the WS(k) are only bracketed within rather loose bounds. We tackle this problem with a tabu search scheme, enhanced by a multilevel and backtracking mechanism. While heuristic approaches cannot definitively settle the value of weak Schur numbers, they can improve the lower bounds by finding suitable partitions, which in turn can provide ideas on the structure of the problem. In particular, we exhibit a suitable 6-partition of [1,574] obtained by tabu search, improving on the current best lower bound for WS(6).
Denis Robilliard, Cyril Fonlupt, Virginie Marion-Poty, Amine Boumaza
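While the tabu search itself is beyond a short snippet, the feasibility condition it optimizes is easy to state. A minimal checker for the weak Schur condition (our own sketch, not the authors' code):

```python
def is_weakly_sum_free(partition):
    """Return True if no subset contains x < y < z with x + y = z.
    Since x >= 1 implies x + y > y, checking every pair x < y with
    x + y in the same subset covers the full x < y < z condition."""
    for subset in partition:
        members = set(subset)
        elems = sorted(members)
        for i, x in enumerate(elems):
            for y in elems[i + 1:]:
                if x + y in members:
                    return False
    return True
```

A valid k-partition of [1,n] passing this check certifies WS(k) ≥ n, which is exactly how the 6-partition of [1,574] improves the lower bound for WS(6).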

An Improved Memetic Algorithm for the Antibandwidth Problem

This paper presents an Improved Memetic Algorithm (IMA) designed to compute near-optimal solutions for the antibandwidth problem. It incorporates two distinguishing features: an efficient heuristic to generate a good-quality initial population and a local search operator based on a Stochastic Hill Climbing algorithm. The most suitable combination of parameter values for IMA is determined by employing a tuning methodology based on Combinatorial Interaction Testing. The performance of the fine-tuned IMA is investigated through extensive experimentation over well-known benchmarks and compared with an existing state-of-the-art Memetic Algorithm, showing that IMA consistently improves on the previous best-known results.
Eduardo Rodriguez-Tello, Luis Carlos Betancourt

Learning and Parameter Tuning

Adaptive Play in a Pollution Bargaining Game

We apply adaptive play to a simplified pollution game with two players. We find that agents with longer memory paradoxically perform worse in the long run. We interpret this result as an indication that adaptive play may be too restrictive as a model of agent behaviour in this context, although it can serve as a starting point for further research on bounded rationality in pollution games.
Vincent van der Goes

Learn-and-Optimize: A Parameter Tuning Framework for Evolutionary AI Planning

Learn-and-Optimize (LaO) is a generic surrogate-based method for parameter tuning that combines learning and optimization. In this paper LaO is used to tune Divide-and-Evolve (DaE), an Evolutionary Algorithm for AI Planning. The LaO framework makes it possible to learn the relation between features describing a given instance and the optimal parameters for this instance, and thus to extrapolate this relation to unknown instances in the same domain. Moreover, the learned knowledge is used as a surrogate model to accelerate the search for the optimal parameters. The proposed implementation of LaO uses an Artificial Neural Network for learning the mapping between features and optimal parameters, and the Covariance Matrix Adaptation Evolution Strategy for optimization. Results demonstrate that LaO is capable of improving the quality of the DaE results even with only a few iterations. The main limitation of the DaE case study is the limited number of meaningful features available to describe the instances. Nevertheless, the learned model reaches almost the same performance on the test instances, which means that it is capable of generalization.
Mátyás Brendel, Marc Schoenauer

New Nature Inspired Models

A Model Based on Biological Invasions for Island Evolutionary Algorithms

Migration strategy plays an important role in designing effective distributed evolutionary algorithms. Here, a novel migration model inspired by the phenomenon known as biological invasion is adopted. The migration strategy is implemented as a multistage process involving large invading subpopulations and their competition with native individuals. In this work, this general approach is used within an island parallel model adopting Differential Evolution as the local algorithm. The resulting distributed algorithm is evaluated on a set of well-known test functions and its effectiveness is compared against that of a classical distributed Differential Evolution.
Ivanoe De Falco, Antonio Della Cioppa, Domenico Maisto, Umberto Scafuri, Ernesto Tarantino

A Multi-objective Particle Swarm Optimizer Enhanced with a Differential Evolution Scheme

Particle swarm optimization (PSO) and differential evolution (DE) are meta-heuristics that have been found to be very successful in a wide variety of optimization tasks. The high convergence rate of PSO and the exploratory capabilities of DE make them highly viable candidates for solving multi-objective optimization problems (MOPs). In previous studies that we have undertaken [2], we observed that PSO has the ability to launch particles in the direction of a leader (i.e., a non-dominated solution) with a high selection pressure. However, this high selection pressure tends to move the swarm rapidly towards local optima. DE, on the other hand, tends to move solutions in smaller steps, yielding solutions close to their parents while still exploring the search space. In this paper, we present a multi-objective particle swarm optimizer enhanced with a differential evolution scheme that aims to maintain diversity in the swarm while moving at a relatively fast rate. The goal is to avoid premature convergence without sacrificing much of the algorithm's convergence rate. In order to design our hybrid approach, we performed a series of experiments using the ZDT test suite. In the final part of the paper, our proposed approach is compared (using 2000, 3500, and 5000 objective function evaluations) with four state-of-the-art multi-objective evolutionary algorithms, obtaining very competitive results.
Jorge Sebastian Hernández-Domínguez, Gregorio Toscano-Pulido, Carlos A. Coello Coello
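To illustrate the kind of "smaller steps" DE contributes to such a hybrid, here is the classic DE/rand/1/bin variation step. This is the standard textbook scheme; the abstract does not specify which DE variant the paper actually uses:

```python
import random

def de_rand_1_bin(pop, i, F, CR, rng):
    """Classic DE/rand/1/bin variation: add a scaled difference of two random
    members to a third, then binomially cross the result with pop[i]."""
    r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
    a, b, c = pop[r1], pop[r2], pop[r3]
    dim = len(pop[i])
    j_rand = rng.randrange(dim)  # guarantees at least one mutated coordinate
    return [a[j] + F * (b[j] - c[j])
            if (rng.random() < CR or j == j_rand) else pop[i][j]
            for j in range(dim)]
```

Because the trial vector is built from differences inside the current population, its step size shrinks as the population contracts, which is the self-scaling behaviour the abstract contrasts with PSO's leader-driven moves.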

Probabilistic Algorithms

Evolution of Multisensory Integration in Large Neural Fields

We show that by evolving neural fields it is possible to study the evolution of neural networks that perform multisensory integration of high dimensional input data. In particular, four simple tasks for the integration of visual and tactile input are introduced. Neural networks evolve that can use these senses in a cost-optimal way, enhance the accuracy of classifying noisy input images, or enhance spatial accuracy of perception. An evolved neural network is shown to display a kind of McGurk effect.
Benjamin Inden, Yaochu Jin, Robert Haschke, Helge Ritter

Reducing the Learning Time of Tetris in Evolution Strategies

Designing artificial players for the game of Tetris is a challenging problem that many authors have addressed using different methods. High-performing implementations based on evolution strategies have also been proposed. However, one drawback of using evolution strategies for this problem is the cost of evaluations due to the stochastic nature of the fitness function. This paper describes the use of racing algorithms to reduce the number of evaluations of the fitness function and thereby the learning time. Different experiments illustrate the benefits and the limitations of racing in evolution strategies for this problem. Among the benefits is the design of artificial players at the level of the top-ranked players at a third of the cost.
Amine Boumaza
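A toy racing loop in the Hoeffding style conveys the idea: keep re-evaluating surviving candidates and discard any that are provably worse than the best, so the evaluation budget concentrates on promising candidates. This is our own sketch assuming minimization and bounded noise; the paper's exact racing variant is not given in the abstract:

```python
import math
import random

def hoeffding_race(candidates, evaluate, rounds, delta=0.05, spread=1.0):
    """Each round, evaluate every surviving candidate once, then drop any
    candidate whose lower confidence bound exceeds the best upper bound."""
    scores = {c: [] for c in range(len(candidates))}
    for n in range(1, rounds + 1):
        for c in scores:
            scores[c].append(evaluate(candidates[c]))
        # Hoeffding-style confidence radius for n samples of bounded spread
        bound = spread * math.sqrt(math.log(2 * len(candidates) / delta) / (2 * n))
        means = {c: sum(v) / n for c, v in scores.items()}
        best_upper = min(means[c] + bound for c in scores)
        scores = {c: v for c, v in scores.items()
                  if means[c] - bound <= best_upper}
    return sorted(scores)  # indices of the surviving candidates
```

Clearly bad candidates are eliminated after a handful of noisy evaluations, which is exactly the saving racing brings to an ES whose fitness function, like a Tetris game, is expensive and stochastic.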

Theory and Evolutionary Search

Black-Box Complexity: Breaking the O(n log n) Barrier of LeadingOnes

We show that the unrestricted black-box complexity of the n-dimensional XOR- and permutation-invariant LeadingOnes function class is O(n log n / log log n). This shows that the recent natural-looking O(n log n) bound is not tight.
The black-box optimization algorithm leading to this bound can be implemented in such a way that only 3-ary unbiased variation operators are used. Hence our bound is also valid for the unbiased black-box complexity recently introduced by Lehre and Witt. The bound also remains valid if we impose the additional restriction that the black-box algorithm does not have access to the objective values but only to their relative order (ranking-based black-box complexity).
Benjamin Doerr, Carola Winzen


Imperialist Competitive Algorithm for Dynamic Optimization of Economic Dispatch in Power Systems

As energy costs are expected to keep rising in the coming years, mostly due to growing worldwide demand, optimizing power generation is of crucial importance for utilities. Economic power dispatch is a tool commonly used by electric power plant operators to optimize the use of generation units. Optimization algorithms are at the center of such techniques, and several different types of algorithms, such as genetic or particle swarm algorithms, have been proposed in the literature. This paper proposes the use of a new metaheuristic called the imperialist competitive algorithm (ICA) for solving the economic dispatch problem. The algorithm's performance is compared with that of other common algorithms, with particular attention to accuracy and speed. Results are obtained through several simulations of power plants and microgrids in which variable numbers of generators, storage units, loads and grid import/export lines are connected.
Robin Roche, Lhassane Idoumghar, Benjamin Blunier, Abdellatif Miraoui

