
2016 | Book

Parallel Problem Solving from Nature – PPSN XIV

14th International Conference, Edinburgh, UK, September 17-21, 2016, Proceedings

Edited by: Julia Handl, Emma Hart, Peter R. Lewis, Manuel López-Ibáñez, Gabriela Ochoa, Ben Paechter

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 14th International Conference on Parallel Problem Solving from Nature, PPSN 2016, held in Edinburgh, UK, in September 2016.

A total of 93 revised full papers were carefully reviewed and selected from 224 submissions.
The meeting began with four workshops, which offered an ideal opportunity to explore specific topics: intelligent transportation, landscape-aware heuristic search, natural computing in scheduling and timetabling, and advances in multi-modal optimization.

PPSN XIV also included sixteen free tutorials to give us all the opportunity to learn about new aspects: gray box optimization in theory; theory of evolutionary computation; graph-based and Cartesian genetic programming; theory of parallel evolutionary algorithms; promoting diversity in evolutionary optimization: why and how; evolutionary multi-objective optimization; intelligent systems for smart cities; advances on multi-modal optimization; evolutionary computation in cryptography; evolutionary robotics - a practical guide to experiment with real hardware; evolutionary algorithms and hyper-heuristics; a bridge between optimization over manifolds and evolutionary computation; implementing evolutionary algorithms in the cloud; the attainment function approach to performance evaluation in EMO; runtime analysis of evolutionary algorithms: basic introduction; meta-model assisted (evolutionary) optimization.

The papers are organized in topical sections on adaptation, self-adaptation and parameter tuning; differential evolution and swarm intelligence; dynamic, uncertain and constrained environments; genetic programming; multi-objective, many-objective and multi-level optimization; parallel algorithms and hardware issues; real-world applications and modeling; theory; diversity and landscape analysis.

Table of Contents

Frontmatter

Adaptation, Self-adaptation and Parameter Tuning

Frontmatter
Online Model Selection for Restricted Covariance Matrix Adaptation

We focus on a variant of covariance matrix adaptation evolution strategy (CMA-ES) with a restricted covariance matrix model, namely VkD-CMA, which is aimed at reducing the internal time complexity and the adaptation time in terms of function evaluations. We address a shortcoming of the VkD-CMA: the model of the restricted covariance matrices needs to be selected beforehand. We propose a novel mechanism to adapt the model online in the VkD-CMA. It eliminates the need for advance model selection and leads to performance competitive with or even better than the algorithm with a nearly optimal but fixed model.

Youhei Akimoto, Nikolaus Hansen
Genotype Regulation by Self-modifying Instruction-Based Development on Cellular Automata

A novel method for regulation of gene expression for artificial cellular systems is introduced. It is based on an instruction-based representation that allows self-modification of genotype programs, so that the expression of different genes can be controlled at different stages of development, e.g., for environmental adaptation. Coding and non-coding genome analogies can be drawn in our cellular system, where coding genes take the form of developmental actions while non-coding genes are represented as modifying instructions that can change other genes. This technique was tested successfully on the morphogenesis of cellular structures from a seed, self-replication of structures, growth and replication combined, as well as reuse of an evolved genotype for development or replication of structures different from those initially targeted by evolution.

Stefano Nichele, Tom Eivind Glover, Gunnar Tufte
Evolution Under Strong Noise: A Self-Adaptive Evolution Strategy Can Reach the Lower Performance Bound - The pcCMSA-ES

According to a theorem by Astete-Morales, Cauwet, and Teytaud, “simple Evolution Strategies (ES)” that optimize quadratic functions disturbed by additive Gaussian noise of constant variance can only reach a simple regret log-log convergence slope $$\ge -1/2$$ (lower bound). In this paper a population size controlled ES is presented that is able to perform better than the $$-1/2$$ limit. It is shown experimentally that the pcCMSA-ES is able to reach a slope of $$-1$$ being the theoretical lower bound of all comparison-based direct search algorithms.

Michael Hellwig, Hans-Georg Beyer
An Evolutionary Hyper-heuristic for the Software Project Scheduling Problem

Software project scheduling plays an important role in reducing the cost and duration of software projects. It is an NP-hard combinatorial optimization problem that has been addressed based on single and multi-objective algorithms. However, such algorithms have always used fixed genetic operators, and it is unclear which operators would be more appropriate across the search process. In this paper, we propose an evolutionary hyper-heuristic to solve the software project scheduling problem. Our novelties include the following: (1) this is the first work to adopt an evolutionary hyper-heuristic for the software project scheduling problem; (2) this is the first work for adaptive selection of both crossover and mutation operators; (3) we design different credit assignment methods for mutation and crossover; and (4) we use a sliding multi-armed bandit strategy to adaptively choose both crossover and mutation operators. The experimental results show that the proposed algorithm can solve the software project scheduling problem effectively.

Xiuli Wu, Pietro Consoli, Leandro Minku, Gabriela Ochoa, Xin Yao
The Multiple Insertion Pyramid: A Fast Parameter-Less Population Scheme

The Parameter-less Population Pyramid (P3) uses a novel population scheme, called the population pyramid. This population scheme does not require a fixed population size; instead, it keeps adding new solutions to an ever-growing set of layered populations. P3 is very efficient in terms of the number of fitness function evaluations, but its run-time is significantly higher than that of the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA), which uses the same method of exploration. This higher run-time is caused by the need to rebuild the linkage tree every time a single new solution is added to the population pyramid. We propose a new population scheme, called the multiple insertion pyramid, that results in a faster variant of P3 by inserting multiple solutions at the same time and operating on populations instead of single solutions.

Willem den Besten, Dirk Thierens, Peter A. N. Bosman
Doubly Trained Evolution Control for the Surrogate CMA-ES

This paper presents a new variant of surrogate-model utilization in expensive continuous evolutionary black-box optimization. This algorithm is based on the surrogate version of the CMA-ES, the Surrogate Covariance Matrix Adaptation Evolution Strategy (S-CMA-ES). As in the original S-CMA-ES, expensive function evaluations are saved through a surrogate model. However, in each generation the model is retrained after the points where its prediction was most uncertain have been evaluated with the true fitness. We demonstrate that, within a small budget of evaluations, the new variant of S-CMA-ES improves on the original algorithm and outperforms two state-of-the-art surrogate optimizers, except for a few evaluations at the beginning of the optimization process.

Zbyněk Pitra, Lukáš Bajer, Martin Holeňa
Efficient Global Optimization with Indefinite Kernels

Kernel-based surrogate models like Kriging are a popular remedy for costly objective function evaluations in optimization. Often, kernels are required to be definite. Highly customized kernels, or kernels for combinatorial representations, may be indefinite. This study investigates this issue in the context of Kriging. It is shown that approaches from the field of Support Vector Machines are useful starting points, but require further modifications to work with Kriging. This study compares a broad selection of methods for dealing with indefinite kernels in Kriging and Kriging-based Efficient Global Optimization, including spectrum transformation, feature embedding and computation of the nearest definite matrix. Model quality and optimization performance are tested. The standard approach, without explicit correction of indefinite matrices, yields functional results, which are further improved by spectrum transformations.

Martin Zaefferer, Thomas Bartz-Beielstein
A Fitness Cloud Model for Adaptive Metaheuristic Selection Methods

Designing portfolio adaptive selection strategies is a promising approach to gain in generality when tackling a given optimization problem. However, we still lack much understanding of what makes a strategy effective, even if different benchmarks have already been designed for these issues. In this paper, we propose a new model based on fitness clouds that allows us to provide theoretical and empirical insights into when an on-line adaptive strategy can be beneficial to the search. In particular, we investigate the relative performance and behavior of two representative and commonly used selection strategies with respect to static (off-line) and purely random approaches, in a simple yet sound and realistic setting of the proposed model.

Christopher Jankee, Sébastien Verel, Bilel Derbel, Cyril Fonlupt
A Study of the Performance of Memetic Algorithms on Heterogeneous Ephemeral Environments

We consider the deployment of island-based memetic algorithms (MAs) endowed with $$\text {self-}{\star }$$ properties on unstable computational environments composed of a collection of computing nodes whose availability fluctuates. In this context, these properties refer to the ability of the MA to work autonomously in order to optimize its performance and to react to the instability of computational resources. The main focus of this work is analyzing the performance of such MAs when the underlying computational substrate is not only volatile but also heterogeneous in terms of the computational power of each of its constituent nodes. We use for this purpose a simulated environment subject to different volatility rates, whose topology is modeled as scale-free networks and whose computing power is distributed among nodes following different distributions. We observe that in general computational homogeneity is preferable in scenarios with low instability; in case of high instability, MAs without self-scaling and self-healing perform better when the computational power follows a power law, but performance seems to be less sensitive to the distribution when these $$\text {self-}{\star }$$ properties are used.

Rafael Nogueras, Carlos Cotta
Lyapunov Design of a Simple Step-Size Adaptation Strategy Based on Success

A simple success-based step-size adaptation rule for single-parent Evolution Strategies is formulated, and the setting of the corresponding parameters is considered. Theoretical convergence on the class of strictly unimodal functions of one variable that are symmetric around the optimum is investigated using a stochastic Lyapunov function method developed by Semenov and Terkel [5] in the context of martingale theory. General expressions for the conditional expectations of the next values of step size and distance to the optimum under $$(1\mathop {,}\limits ^{+}\lambda )$$-selection are analytically derived, and an appropriate Lyapunov function is constructed. Convergence rate upper bounds, as well as adaptation parameter values, are obtained through numerical optimization for increasing values of $$\lambda $$. By selecting the number of offspring that minimizes the bound on the convergence rate with respect to the number of function evaluations, all strategy parameter values result from the analysis.

Claudia R. Correa, Elizabeth F. Wanner, Carlos M. Fonseca
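The success-based step-size rule analysed above is easiest to illustrate in its simplest elitist form. The sketch below is a plain (1+1)-ES with hypothetical update factors, not the paper's (1,+λ)-selection or its Lyapunov analysis:

```python
import random

def sphere(v):
    """Simple unimodal test function, symmetric around the optimum at 0."""
    return sum(t * t for t in v)

def one_plus_one_es(f, x, sigma=1.0, up=1.5, down=1.5 ** -0.25, steps=500):
    """(1+1)-ES with a success-based step-size rule: enlarge sigma after a
    successful offspring, shrink it after a failure (1/5th-rule style).
    The factors `up` and `down` are illustrative, not derived values."""
    fx = f(x)
    for _ in range(steps):
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:          # success: accept offspring, increase step size
            x, fx = y, fy
            sigma *= up
        else:                 # failure: keep parent, decrease step size
            sigma *= down
    return x, fx, sigma

random.seed(3)
x, fx, sigma = one_plus_one_es(sphere, [3.0, 3.0])
```

The choice down = up^(-1/4) keeps the step size roughly stationary at a one-fifth success probability, which is the classical heuristic behind rules of this kind.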

Differential Evolution and Swarm Intelligence

Frontmatter
TADE: Tight Adaptive Differential Evolution

Differential Evolution (DE) is a simple and effective evolutionary algorithm for solving optimization problems. Existing DE variants always maintain or increase the randomness of the differential vector when considering the trade-off between randomness and certainty among the three components of the mutation operator. This paper considers the possibility of achieving a better trade-off and more accurate results by reducing the randomness of the differential vector, and designs a tight adaptive DE variant called TADE. In TADE, the population is divided into a major subpopulation adopting the general “current-to-pbest” strategy and a minor subpopulation utilizing our proposed strategy of sharing the same base vector but reducing the randomness in the differential vector. Based on success-history parameter adaptation, TADE designs a simple information exchange scheme to avoid the homogeneity of parameters. Extensive experiments on the CEC2014 suite show that TADE achieves better or equivalent performance on at least 76.7 % of the functions compared with five state-of-the-art DE variants. Additional experiments are conducted to verify the rationality of this tight design.

Weijie Zheng, Haohuan Fu, Guangwen Yang
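The “current-to-pbest” strategy used by TADE's major subpopulation is the standard DE/current-to-pbest/1 mutation. A minimal sketch in plain Python (the values of F and p are illustrative defaults, not TADE's adapted parameters):

```python
import random

def current_to_pbest_mutation(pop, fitness, i, F=0.5, p=0.1):
    """DE/current-to-pbest/1: v = x_i + F*(x_pbest - x_i) + F*(x_r1 - x_r2),
    where x_pbest is drawn from the best p*100% of the population.
    pop is a list of real-valued vectors; lower fitness is better."""
    n = len(pop)
    k = max(1, int(p * n))
    best_idx = sorted(range(n), key=lambda j: fitness[j])[:k]
    pbest = pop[random.choice(best_idx)]
    # two distinct random individuals, both different from i
    r1, r2 = random.sample([j for j in range(n) if j != i], 2)
    x_i, x_r1, x_r2 = pop[i], pop[r1], pop[r2]
    return [xi + F * (pb - xi) + F * (a - b)
            for xi, pb, a, b in zip(x_i, pbest, x_r1, x_r2)]

# usage: mutant vector for individual 0 on a toy population
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
fit = [sum(x * x for x in v) for v in pop]
v = current_to_pbest_mutation(pop, fit, 0)
```

The random difference x_r1 - x_r2 is exactly the source of randomness that TADE's minor subpopulation is designed to reduce.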
An Extension of Algebraic Differential Evolution for the Linear Ordering Problem with Cumulative Costs

In this paper we propose an extension to the algebraic differential evolution approach for permutation-based problems (DEP). Unlike classical differential evolution, DEP is fully combinatorial, and it is extended in two directions: new generating sets based on exchange and insertion moves are considered, and the case $$F>1$$ is now allowed for the differential mutation operator. Moreover, the crossover and selection operators of the original DEP have also been modified in order to address the linear ordering problem with cumulative costs (LOPCC). The new DEP schemes are compared with the state-of-the-art LOPCC algorithms using a widely adopted benchmark suite. The experimental results show that DEP reaches competitive performance and, most remarkably, finds 21 new best known solutions on the 50 largest LOPCC instances.

Marco Baioletti, Alfredo Milani, Valentino Santucci
Analysing the Performance of Migrating Birds Optimisation Approaches for Large Scale Continuous Problems

We present novel algorithmic schemes for dealing with large-scale continuous problems. They are based on the recently proposed population-based meta-heuristics Migrating Birds Optimisation (mbo) and Multi-leader Migrating Birds Optimisation (mmbo), which have been shown to be effective for solving combinatorial problems. The main objective of the current paper is twofold. First, we introduce a novel neighbour-generating operator based on Differential Evolution (de) that produces new individuals in the continuous decision space starting from those belonging to the current population. Second, we evaluate the performance of mbo and mmbo by incorporating our novel operator into them. Hence, mbo and mmbo are enabled to solve continuous problems. A set of well-known large-scale functions is used for comparison purposes.

Eduardo Lalla-Ruiz, Eduardo Segredo, Stefan Voß, Emma Hart, Ben Paechter
How Far Are We from an Optimal, Adaptive DE?

We consider how an (almost) optimal parameter adaptation process for an adaptive DE might behave, and compare the behavior and performance of this approximately optimal process to that of existing adaptive mechanisms for DE. An optimal parameter adaptation process is a useful notion for analyzing the parameter adaptation methods in adaptive DE, as well as other adaptive evolutionary algorithms, but it cannot be known in general. Thus, we propose a Greedy Approximate Oracle method (GAO) which approximates an optimal parameter adaptation process. We compare the behavior of GAODE, a DE algorithm with GAO, to typical adaptive DEs on six benchmark functions and the BBOB benchmarks, and show that GAO can be used to (1) explore how much room for improvement there is in the performance of the adaptive DEs, and (2) obtain hints for developing future, effective parameter adaptation methods for adaptive DEs.

Ryoji Tanabe, Alex Fukunaga
Feature Based Algorithm Configuration: A Case Study with Differential Evolution

Algorithm Configuration is still an intricate problem, especially in the continuous black-box optimization domain. This paper empirically investigates the relationship between continuous problem features (measuring different problem characteristics) and the best parameter configuration of a given stochastic algorithm over a bench of test functions — namely here, the original version of Differential Evolution over the BBOB test bench. This is achieved by learning an empirical performance model from the problem features and the algorithm parameters. This performance model can then be used to compute an empirically optimal parameter configuration from feature values. The results show that reasonable performance models can indeed be learned, resulting in a better parameter configuration than a static parameter setting optimized for robustness over the test bench.

Nacim Belkhir, Johann Dréo, Pierre Savéant, Marc Schoenauer
An Asynchronous and Steady State Update Strategy for the Particle Swarm Optimization Algorithm

This paper proposes an asynchronous and steady-state update strategy for the Particle Swarm Optimization (PSO) algorithm inspired by the Bak-Sneppen model of co-evolution. The model consists of a set of fitness values (representing species) arranged in a network. By iteratively replacing the least fit species and its neighbors with random values (simulating extinction), the average fitness of the population tends to grow while the system is driven to a critical state. Based on these rules, we implement a PSO in which only the worst particle and its neighbors are updated and evaluated in each time-step. The other particles remain steady during one or more iterations, until they eventually meet the update criterion. The steady-state PSO (SS-PSO) was tested on a set of benchmark functions, with three different population structures: the lbest ring, and lattices with von Neumann and Moore neighborhoods. The experiments demonstrate that the strategy significantly improves the quality of results and convergence speed with the Moore neighborhood. Further tests show that the major factor of enhancement is the selective pressure on the worst, since replacing the best or a random particle (and its neighbors) yields inferior results.

C. M. Fernandes, J. J. Merelo, A. C. Rosa
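The worst-particle-and-neighbors rule described above can be sketched roughly as follows. This is a toy lbest-ring version on the sphere function; the inertia and acceleration coefficients are illustrative assumptions, not the paper's settings:

```python
import random

def sphere(x):
    return sum(xi * xi for xi in x)

def step_worst_and_neighbours(pos, vel, pbest, pbest_f, w=0.7, c=1.5):
    """One steady-state step: update and evaluate only the worst particle
    and its two ring-topology neighbours; all other particles stay put."""
    n = len(pos)
    worst = max(range(n), key=lambda i: pbest_f[i])
    for i in {(worst - 1) % n, worst, (worst + 1) % n}:
        # lbest = best personal best in the ring neighbourhood of i
        nb = [(i - 1) % n, i, (i + 1) % n]
        lbest = pbest[min(nb, key=lambda j: pbest_f[j])]
        for d in range(len(pos[i])):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c * r1 * (pbest[i][d] - pos[i][d])
                         + c * r2 * (lbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        f = sphere(pos[i])
        if f < pbest_f[i]:
            pbest[i], pbest_f[i] = list(pos[i]), f

# usage: 12 particles in 5 dimensions, 200 steady-state steps
random.seed(0)
dim, n = 5, 12
pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [list(p) for p in pos]
pbest_f = [sphere(p) for p in pos]
f0 = min(pbest_f)
for _ in range(200):
    step_worst_and_neighbours(pos, vel, pbest, pbest_f)
```

Note that each step costs only three fitness evaluations, which is the point of the steady-state scheme: most of the swarm is frozen until it drifts into the worst neighbourhood.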

Dynamic, Uncertain and Constrained Environments

Frontmatter
Augmented Lagrangian Constraint Handling for CMA-ES — Case of a Single Linear Constraint

We consider the problem of minimizing a function f subject to a single inequality constraint $$g(\mathbf x ) \le 0$$, in a black-box scenario. We present a covariance matrix adaptation evolution strategy using an adaptive augmented Lagrangian method to handle the constraint. We show that our algorithm is an instance of a general framework that allows one to build an adaptive constraint-handling algorithm from a general randomized adaptive algorithm for unconstrained optimization. We assess the performance of our algorithm on a set of linearly constrained functions, including convex quadratic and ill-conditioned functions, and observe linear convergence to the optimum.

Asma Atamna, Anne Auger, Nikolaus Hansen
An Active-Set Evolution Strategy for Optimization with Known Constraints

We propose an evolutionary approach to constrained optimization where the objective function is considered a black box, but the constraint functions are assumed to be known. The approach can be considered a stochastic active-set method. It labels constraints as either active or inactive and projects candidate solutions onto the subspace of the feasible region that is implied by rendering active inequality constraints equalities. We implement the approach in a $$(1+1)$$-ES and evaluate its performance using a commonly used set of test problems.

Dirk V. Arnold
Speciated Evolutionary Algorithm for Dynamic Constrained Optimisation

Dynamic constrained optimisation problems (DCOPs) have specific characteristics that do not exist in dynamic optimisation problems with bounded constraints or without constraints. This poses difficulties for some existing dynamic optimisation strategies. Approaches that maintain or introduce diversity might become less effective due to the presence of infeasible areas, and thus might not handle well the switch of global optima between disconnected feasible regions. In this paper, a speciation-based approach is proposed to overcome this; it utilizes deterministic crowding to maintain diversity, assortative mating and local search to promote exploitation, and feasibility rules to deal with constraints. The experimental studies demonstrate that the newly proposed method generally outperforms state-of-the-art algorithms on a benchmark set of DCOPs.

Xiaofen Lu, Ke Tang, Xin Yao
On Constraint Handling in Surrogate-Assisted Evolutionary Many-Objective Optimization

Surrogate-assisted evolutionary multiobjective optimization algorithms are often used to solve computationally expensive problems. However, their efficacy in handling constrained optimization problems with more than three objectives has not been widely studied. In particular, the issue of how feasible and infeasible solutions are handled in generating a data set for training a surrogate has not received much attention. In this paper, we use a recently proposed Kriging-assisted evolutionary algorithm for many-objective optimization and investigate the effect of infeasible solutions on the performance of the surrogates. We assume that constraint functions are computationally inexpensive, consider different ways of handling feasible and infeasible solutions for training the surrogate, and examine them on different benchmark problems. Results of the comparison with a reference vector guided evolutionary algorithm show that it is vital for the success of the surrogate to properly deal with infeasible solutions.

Tinkle Chugh, Karthik Sindhya, Kaisa Miettinen, Jussi Hakanen, Yaochu Jin
Artificially Inducing Environmental Changes in Evolutionary Dynamic Optimization

Biological and artificial evolution can be sped up by environmental changes. From the evolutionary computation perspective, environmental changes during the optimization process generate dynamic optimization problems (DOPs). However, only DOPs caused by intrinsic changes have been investigated in the area of evolutionary dynamic optimization (EDO). This paper investigates artificially induced DOPs. A framework to generate artificially induced DOPs from any pseudo-Boolean problem is proposed. We use this framework to induce six different types of changes in a 0–1 knapsack problem and test which ones result in higher speed-up. Two strategies based on immigrants, which are used in EDO, are adapted to the artificially induced DOPs investigated here. Some types of changes did not result in better performance, while others led to higher speed-up. The algorithm with memory-based immigrants presented very good performance.

Renato Tinós, Shengxiang Yang
Efficient Sampling When Searching for Robust Solutions

In the presence of noise on the decision variables, it is often desirable to find robust solutions, i.e., solutions with a good expected fitness over the distribution of possible disturbances. Sampling is commonly used to estimate the expected fitness of a solution; however, this option can be computationally expensive. Researchers have therefore suggested taking into account information from previously evaluated solutions. In this paper, we assume that each solution is evaluated once, and that the information about all previously evaluated solutions is stored in a memory that can be used to estimate a solution’s expected fitness. We then propose a new approach that determines which solution should be evaluated to best complement the information from the memory, and assigns weights to estimate the expected fitness of a solution from the memory. The proposed method is based on the Wasserstein distance, a probability distance metric that measures the difference between a sample distribution and a desired target distribution. Finally, an empirical comparison of our proposed method with other sampling methods from the literature is presented to demonstrate the efficacy of our method.

Juergen Branke, Xin Fei

Genetic Programming

Frontmatter
Optimising Quantisation Noise in Energy Measurement

We give a model of parallel distributed genetic improvement. With modern low-cost power monitors and high-speed Ethernet, LAN latency and network jitter have little effect. The model calculates a minimum usable mutation effect based on the analogue-to-digital converter (ADC)’s resolution and shows the optimal test duration is inversely proportional to the smallest impact we wish to detect. Using the example of a 1 kHz 12-bit 0.4095 Amp ADC optimising software energy consumption, we find it will be difficult to detect mutations whose average effect is less than 58 µA, and that experiments should typically last well under a second.

William B. Langdon, Justyna Petke, Bobby R. Bruce
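The quantisation arithmetic behind these figures can be reproduced directly. One plausible reading (an assumption, not the paper's derivation) is that 58 µA is roughly two standard deviations of the uniform quantisation noise of a 100 µA least significant bit:

```python
import math

full_scale = 0.4095   # ADC full-scale current in amps, from the abstract
bits = 12             # ADC resolution in bits, from the abstract

# one code step of a 12-bit converter spanning 0..0.4095 A
lsb = full_scale / (2 ** bits - 1)     # 0.4095 / 4095 = 100 microamps

# uniform quantisation noise has RMS = LSB / sqrt(12)
rms_noise = lsb / math.sqrt(12)        # about 28.9 microamps

# two standard deviations of that noise
two_sigma = 2 * rms_noise              # about 57.7 microamps, near 58 uA
```

Mutations whose average effect on current is below this noise floor would need longer measurement windows to resolve, which is consistent with the abstract's claim that test duration scales inversely with the smallest effect of interest.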
Syntactical Similarity Learning by Means of Grammatical Evolution

Several research efforts have shown that a similarity function synthesized from examples may capture an application-specific similarity criterion in a way that fits the application needs more effectively than a generic distance definition. In this work, we propose a similarity learning algorithm tailored to problems of syntax-based entity extraction from unstructured text streams. The algorithm takes as input pairs of strings, along with an indication of whether or not they adhere to the same syntactic pattern. Our approach is based on Grammatical Evolution and systematically explores a similarity definition space including all functions that may be expressed with a specialized, simple language that we have defined for this purpose. We assessed our proposal on patterns representative of practical applications. The results suggest that the proposed approach is indeed feasible and that the learned similarity function is more effective than the Levenshtein distance and the Jaccard similarity index.

Alberto Bartoli, Andrea De Lorenzo, Eric Medvet, Fabiano Tarlao
Hierarchical Knowledge in Self-Improving Grammar-Based Genetic Programming

The structure of a grammar can influence how well a Grammar-Based Genetic Programming system solves a given problem, but it is not obvious how to design the structure of a grammar, especially when the problem is large. In this paper, our proposed Bayesian Grammar-Based Genetic Programming with Hierarchical Learning (BGBGP-HL) examines the grammar and builds new rules on the existing grammar structure during evolution. Once our system successfully finds good solution(s), the adapted grammar provides a grammar-based probabilistic model of the generation process of optimal solution(s). Moreover, our system can automatically discover new hierarchical knowledge (i.e. how the rules are structurally combined) which is composed of multiple production rules in the original grammar. In a case study using the deceptive royal tree problem, our evaluation shows that BGBGP-HL achieves the best performance among the competitors while it is capable of composing hierarchical knowledge. Compared to other algorithms, the search performance of BGBGP-HL is shown to be more robust against the deceptiveness and complexity of the problem.

Pak-Kan Wong, Man-Leung Wong, Kwong-Sak Leung
Parallel Hierarchical Evolution of String Library Functions

We introduce a parallel version of hierarchical evolutionary re-combination (herc) and use it to evolve programs for ten standard string processing tasks and a postfix calculator emulation task. Each processor maintains a separate evolutionary niche, with its own ladder of competing agents and codebank of potential mates. Further enhancements include evolution of multi-cell programs and incremental learning with reshuffling of data. We find the success rate is improved by transgenic evolution, where solutions to earlier tasks are recombined to solve later tasks. Sharing of genetic material between niches seems to improve performance for the postfix task, but for some of the string processing tasks it can increase the risk of premature convergence.

Jacob Soderlund, Darwin Vickers, Alan Blair
On the Non-uniform Redundancy in Grammatical Evolution

This paper investigates the redundancy of representation in grammatical evolution (GE) for binary trees. We analyze the entire GE solution space by creating all binary genotypes of predefined length and map them to phenotype trees, which are then characterized by their size, depth and shape. We find that the GE representation is strongly non-uniformly redundant. There are huge differences in the number of genotypes that encode one particular phenotype. Thus, it is difficult for GE to solve problems where the optimal tree solutions are underrepresented. In general, the GE mapping process is biased towards short tree structures, which implies high GE performance if the optimal solution requires small programs.

Ann Thorhauer
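The non-uniform redundancy of the GE genotype-phenotype mapping is easy to reproduce on a toy scale. With an illustrative two-rule binary-tree grammar (not the paper's exact setup) and six binary codons, exactly half of all 64 genotypes collapse to the single-leaf phenotype:

```python
from itertools import product
from collections import Counter

RULES = ["(e e)", "x"]   # toy grammar for binary trees: e -> (e e) | x

def ge_map(genotype, max_wraps=2):
    """Standard GE mapping: consume codons left to right; codon mod
    #choices selects the production for the leftmost non-terminal 'e'.
    Returns None if the mapping fails even after wrapping."""
    expr, i, wraps = "e", 0, 0
    while "e" in expr:
        if i == len(genotype):
            if wraps == max_wraps:
                return None              # ran out of codons: invalid individual
            i, wraps = 0, wraps + 1      # wrap: reread the genotype
        choice = genotype[i] % len(RULES)
        expr = expr.replace("e", RULES[choice], 1)
        i += 1
    return expr

# enumerate every genotype of 6 binary codons and count phenotypes
counts = Counter(ge_map(g) for g in product([0, 1], repeat=6))
```

Any genotype whose first codon is odd maps immediately to the leaf "x" regardless of its remaining codons, so 32 of the 64 genotypes encode that one phenotype, while larger trees are shared among far fewer genotypes. This is the bias towards short tree structures the paper describes.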
Tournament Selection Based on Statistical Test in Genetic Programming

Selection plays a critical role in the performance of evolutionary algorithms. Tournament selection is often considered the most popular of several selection methods. Standard tournament selection randomly selects several individuals from the population, and the individual with the best fitness value is chosen as the winner. In the context of Genetic Programming, this approach ignores the error values on the fitness cases of the problem, emphasising relative fitness quality rather than detailed quantitative comparison. Consequently, potentially useful information from the error vector may be lost. In this paper, we introduce the use of a statistical test in selection that utilizes information from the individual’s error vector. Two variants of tournament selection are proposed and tested in Genetic Programming on symbolic regression problems. On the benchmark problems examined, we observe a benefit of the proposed methods in reducing code growth and generalisation error.

Thi Huong Chu, Quang Uy Nguyen, Michael O’Neill
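One way such a statistical-test tournament could look (an illustrative sketch, not the authors' exact variants): the winner by aggregate error keeps the win only if a paired t-test on the two error vectors separates the individuals; otherwise the tie is broken at random. The threshold `t_crit` is a hypothetical parameter:

```python
import math
import random
import statistics

def paired_t(e1, e2):
    """Paired t statistic for two error vectors of equal length."""
    d = [a - b for a, b in zip(e1, e2)]
    sd = statistics.stdev(d)
    if sd == 0:
        return 0.0
    return statistics.mean(d) / (sd / math.sqrt(len(d)))

def stat_tournament(pop_errors, size=2, t_crit=2.0):
    """Tournament selection over error vectors: the lowest-total-error
    candidate wins only when the paired t-test deems the difference
    significant; otherwise winner and runner-up are equally likely."""
    cand = random.sample(range(len(pop_errors)), size)
    cand.sort(key=lambda i: sum(pop_errors[i]))
    best, second = cand[0], cand[1]
    if abs(paired_t(pop_errors[best], pop_errors[second])) >= t_crit:
        return best
    return random.choice([best, second])

# usage: individual 0 is uniformly and significantly better than individual 1
errs = [[0.1 * i for i in range(10)],
        [0.1 * i + 1.0 + 0.01 * i for i in range(10)]]
winner = stat_tournament(errs)
```

Compared with standard tournament selection, the per-fitness-case error vector, rather than a single scalar fitness, decides whether a win is meaningful, which is the idea the abstract describes.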
Kin Selection with Twin Genetic Programming

In steady-state Twin GP, both children created by sub-tree crossover and point mutation are used. They are born together and die together. Evolution is little changed. Indeed, fitness selection using the twin’s co-conceived doppelganger is possible.

William B. Langdon
Using Scaffolding with Partial Call-Trees to Improve Search

Recursive functions are an attractive target for genetic programming because they can express complex computation compactly. However, the need to simultaneously discover correct recursive and base cases in these functions is a major obstacle in the evolutionary search process. To overcome these obstacles, two recent remedies have been proposed. The first is Scaffolding, which permits the recursive case of a function to be evaluated independently of the base case. The second is Call-Tree-Guided Genetic Programming (CTGGP), which uses a partial call tree, supplied by the user, to separately evolve the parameter expressions for recursive calls. Used in isolation, both of these approaches have been shown to offer significant advantages in terms of search performance. In this work we investigate the impact of different combinations of these approaches. We find that, on our benchmarks, CTGGP significantly outperforms Scaffolding, and that a combination of CTGGP and Scaffolding appears to produce further improvements in worst-case performance.

Brad Alexander, Connie Pyromallis, George Lorenzetti, Brad Zacher
Feature Extraction for Surrogate Models in Genetic Programming

We discuss the use of surrogate models in the field of genetic programming. We describe a set of features extracted from each tree and use it to train a model of the fitness function. The results indicate that such a model can be used to predict the fitness of new individuals without the need to evaluate them. In a series of experiments, we show how surrogate modeling is able to reduce the number of fitness evaluations needed in genetic programming, and we discuss how the use of surrogate models affects the exploration and convergence of genetic programming algorithms.

Martin Pilát, Roman Neruda
A General-Purpose Framework for Genetic Improvement

Genetic Improvement is an evolutionary-based technique. Despite its relatively recent introduction, several successful applications have already been reported in the scientific literature: it has been demonstrated to be able to modify the code of complex programs without altering their intended behavior, and to increase performance with regard to speed, energy consumption or memory use. Some results suggest that it could also be used to correct bugs, restoring the software’s intended functionality. Given the novelty of the technique, however, instances of Genetic Improvement so far rely upon ad-hoc, language-specific implementations. In this paper, we propose a general framework, based on the software engineering idea of mutation testing coupled with Genetic Programming, that can be easily adapted to different programming languages and objectives. In a preliminary evaluation, the framework efficiently optimizes the code of the md5 hash function in C, Java, and Python.

Francesco Marino, Giovanni Squillero, Alberto Tonda
On the Use of Semantics in Multi-objective Genetic Programming

Research on semantics in Genetic Programming (GP) has increased dramatically over the last few years. Results in this area clearly indicate that the use of semantics can considerably increase GP performance. Motivated by these results, this paper investigates for the first time the use of semantics in Multi-objective GP (MOGP) within the well-known NSGA-II algorithm. To this end, we propose two forms of incorporating semantics into a MOGP system. Results on challenging (highly) unbalanced binary classification tasks indicate that the adoption of semantics in MOGP is beneficial, in particular when a semantic distance is incorporated into the core of NSGA-II.

Edgar Galván-López, Efrén Mezura-Montes, Ouassim Ait ElHara, Marc Schoenauer
Semantic Forward Propagation for Symbolic Regression

In recent years, a number of methods have been proposed that attempt to improve the performance of genetic programming by exploiting information about program semantics. One of the most important developments in this area is semantic backpropagation. The key idea of this method is to decompose a program into two parts—a subprogram and a context—and calculate the desired semantics of the subprogram that would make the entire program correct, assuming that the context remains unchanged. In this paper we introduce Forward Propagation Mutation, a novel operator that relies on the opposite assumption—instead of preserving the context, it retains the subprogram and attempts to place it in the semantically right context. We empirically compare the performance of semantic backpropagation and forward propagation operators on a set of symbolic regression benchmarks. The experimental results demonstrate that semantic forward propagation produces smaller programs that achieve significantly higher generalization performance.

Marcin Szubert, Anuradha Kodali, Sangram Ganguly, Kamalika Das, Josh C. Bongard
Reducing Dimensionality to Improve Search in Semantic Genetic Programming

Genetic programming approaches are moving from analysing the syntax of individual solutions to analysing their semantics. One of the common definitions of the semantic space in the context of symbolic regression is an n-dimensional space, where n corresponds to the number of training examples. In problems where this number is high, the search process can become harder as the number of dimensions increases. Geometric semantic genetic programming (GSGP) explores the semantic space by performing geometric semantic operations; the fitness landscape seen by GSGP is guaranteed to be conic by construction. Intuitively, a lower number of dimensions can make search more feasible in this scenario, decreasing the chances of overfitting the data and reducing the number of evaluations required to find a suitable solution. This paper proposes two approaches for dimensionality reduction in GSGP: (i) applying current instance selection methods as a preprocessing step before the training points are given to GSGP; (ii) incorporating instance selection into the evolution of GSGP. Experiments on 15 datasets show that GSGP performance is improved by using instance reduction during the evolution.
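The n-dimensional semantic space mentioned above can be made concrete with a small sketch (our own illustration, not the paper's code): the semantics of a program is its output vector on the n training cases, and fitness is a distance to the target vector in that space, so fewer training instances directly means fewer dimensions.

```python
def semantics(program, training_inputs):
    """Semantics of a program: the vector of its outputs on the n training cases."""
    return [program(x) for x in training_inputs]

def semantic_fitness(program, training_inputs, targets):
    """Fitness as Euclidean distance in the n-dimensional semantic space."""
    s = semantics(program, training_inputs)
    return sum((si - ti) ** 2 for si, ti in zip(s, targets)) ** 0.5

xs = [0.0, 1.0, 2.0, 3.0]        # n = 4 training examples -> 4-D semantic space
target = [x * x for x in xs]     # desired semantics
candidate = lambda x: x * x      # a program whose semantics match the target exactly
assert semantic_fitness(candidate, xs, target) == 0.0
```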

Luiz Otavio V. B. Oliveira, Luis F. Miranda, Gisele L. Pappa, Fernando E. B. Otero, Ricardo H. C. Takahashi

Multi-objective, Many-objective and Multi-level Optimisation

Frontmatter
iMOACOℝ: A New Indicator-Based Multi-objective Ant Colony Optimization Algorithm for Continuous Search Spaces

Ant colony optimization (ACO) is a metaheuristic which was originally designed to solve combinatorial optimization problems. In recent years, ACO has been extended to tackle continuous single-objective optimization problems, with ACOℝ being one of the most remarkable approaches of this sort. However, only a few ACO-based algorithms have been designed to solve continuous multi-objective optimization problems (MOPs), and none of them has been tested on many-objective problems (i.e., multi-objective problems having four or more objectives). In this paper, we propose a novel multi-objective ant colony optimizer (called iMOACOℝ) for continuous search spaces, which is based on ACOℝ and the R2 performance indicator. Our proposed approach is the first specifically designed to tackle many-objective optimization problems. Moreover, we present a comparative study of our proposal with respect to NSGA-III, MOEA/D, MOACOℝ and SMS-EMOA using standard test problems and performance indicators adopted in the specialized literature. Our preliminary results indicate that iMOACOℝ is very competitive with respect to state-of-the-art multi-objective evolutionary algorithms and is also able to outperform MOACOℝ.
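As a rough illustration of the indicator used here, the unary R2 indicator can be sketched as follows (a minimal sketch under common definitions, using a weighted-Chebyshev utility and a utopian point; not the authors' implementation):

```python
def r2_indicator(front, weights, z_star):
    """Unary R2 indicator (to be minimised): mean, over a set of weight
    vectors, of the best weighted-Chebyshev value achieved by the front."""
    total = 0.0
    for w in weights:
        total += min(
            max(wj * abs(aj - zj) for wj, aj, zj in zip(w, a, z_star))
            for a in front
        )
    return total / len(weights)

# Toy bi-objective example with utopian point (0, 0).
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
weights = [(1.0, 0.0), (0.5, 0.5), (0.0, 1.0)]
val = r2_indicator(front, weights, (0.0, 0.0))
```

A front that is closer to the utopian point under more weight vectors yields a smaller R2 value, which is why the indicator can rank contributions of individual solutions.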

Jesús Guillermo Falcón-Cardona, Carlos A. Coello Coello
Variable Interaction in Multi-objective Optimization Problems

Variable interaction is an important aspect of a problem, which reflects its structure, and has implications on the design of efficient optimization algorithms. Although variable interaction has been widely studied in the global optimization community, it has rarely been explored in the multi-objective optimization literature. In this paper, we empirically and analytically study the variable interaction structures of some popular multi-objective benchmark problems. Our study uncovers nontrivial variable interaction structures for the ZDT and DTLZ benchmark problems which were thought to be either separable or non-separable.

Ke Li, Mohammad Nabi Omidvar, Kalyanmoy Deb, Xin Yao
Improving Efficiency of Bi-level Worst Case Optimization

We consider the problem of identifying the trade-off between tolerance level and worst case performance, for a problem where the decision variables may be disturbed within a set tolerance level. This is a special case of a bi-level optimization problem. In general, bi-level optimization problems are computationally very expensive, because a lower level optimizer is called to evaluate each solution on the upper level. In this paper, we propose and compare several strategies to reduce the number of fitness evaluations without substantially compromising the final solution quality.
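The nested structure described above can be illustrated with a toy sketch (ours, not the paper's method), in which the lower-level worst-case search is approximated by a simple grid instead of a full optimizer; this is exactly the per-solution cost the paper tries to reduce:

```python
def worst_case_fitness(f, x, tolerance, samples=101):
    """Upper-level evaluation: the worst (largest, for minimisation) objective
    value when x may be disturbed anywhere within +/- tolerance.  The
    lower-level maximisation is approximated here by a grid search."""
    step = 2 * tolerance / (samples - 1)
    return max(f(x - tolerance + i * step) for i in range(samples))

f = lambda x: x * x                   # toy objective to be minimised
wc = worst_case_fitness(f, 0.0, 0.5)  # worst disturbance of x = 0 within +/-0.5
```

Every upper-level candidate triggers `samples` inner evaluations, so any strategy that skips or truncates the inner loop saves fitness evaluations.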

Ke Lu, Juergen Branke, Tapabrata Ray
Multi-objective Selection of Algorithm Portfolios: Experimental Validation

The selection of algorithms to build portfolios represents a multi-objective problem. From a possibly large pool of algorithm candidates, a portfolio of limited size but good quality over a wide range of problems is desired. Possible applications can be found in the context of machine learning, where the accuracy and runtime of different learning techniques must be weighed. Each algorithm is represented by its Pareto front, which has been approximated in an a priori parameter tuning. Our approach for multi-objective selection of algorithm portfolios (MOSAP) is capable of trading off the number of algorithm candidates against the respective quality of the portfolio. The quality of the portfolio is defined as the distance to the joint Pareto front of all algorithm candidates. By means of a decision tree, the selection of the right algorithm based on the characteristics of the problem is also possible. In this paper, we propose a validation framework to analyze the performance of our MOSAP approach. This framework is based on a parametrized generator of the algorithm candidates’ Pareto front shapes. We discuss how to sample a landscape of multiple Pareto fronts with predefined intersections. The validation is performed by calculating discrete approximations for different landscapes and assessing the effect of the landscape parameters on the MOSAP approach.

Daniel Horn, Karin Schork, Tobias Wagner
Multi-objective Local Search Based on Decomposition

It is generally believed that local search (LS) should be used as a basic tool in multi-objective evolutionary computation for combinatorial optimization. However, not much effort has been made to investigate how to use LS efficiently in multi-objective evolutionary algorithms. In this paper, we study some issues in the use of cooperative scalarizing local search approaches for decomposition-based multi-objective combinatorial optimization. We propose and study multiple move strategies in the MOEA/D framework. Through extensive experiments on a new set of bi-objective traveling salesman problems with tunable correlated objectives, we analyze these policies under different MOEA/D parameters. Our empirical study sheds some light on the impact of the LS move strategy on the anytime performance of the algorithm.

Bilel Derbel, Arnaud Liefooghe, Qingfu Zhang, Hernan Aguirre, Kiyoshi Tanaka
Analyzing Inter-objective Relationships: A Case Study of Software Upgradability

In the process of solving real-world multi-objective problems, many existing studies only consider aggregate formulations of the problem, leaving the relationships between different objectives largely unexplored. In this study, taking the software upgradability problem as a case study, we intend to gain insights into the inter-objective relationships of multi-objective problems. First, we obtain the Pareto schemes by uniformly sampling a set of solutions within the Pareto front. Second, we analyze the characteristics of the Pareto schemes, which reveal the relationships between different objectives. Third, to estimate the inter-objective relationships for new upgrade requests, we build a predictive model with a set of problem-specific features. Finally, we propose a reference-based indicator to assess the risk of applying single-objective algorithms to the multi-objective software upgradability problem. Extensive experimental results demonstrate that the predictive models built with problem-specific features are able to properly predict both the algorithm-independent inter-objective relationships and the algorithm-performance-specific indicator.

Zhilei Ren, He Jiang, Jifeng Xuan, Ke Tang, Yan Hu
Multicriteria Building Spatial Design with Mixed Integer Evolutionary Algorithms

This paper proposes a first step towards the multidisciplinary optimisation of building spatial designs. Two criteria, total surface area (i.e. energy performance) and compliance (i.e. structural performance), are combined in a multicriteria optimisation framework. A new way of representing building spatial designs in a mixed integer parameter space is used within this framework. Two state-of-the-art algorithms, namely NSGA-II and SMS-EMOA, are used and compared to compute Pareto front approximations for problems of different size. Moreover, the paper discusses domain-specific search operators, which are compared to generic operators, and techniques to handle constraints within the mutation. The results give first insights into the trade-off between energy and structural performance and the scalability of the approach.

Koen van der Blom, Sjonnie Boonstra, Hèrm Hofmeyer, Michael T. M. Emmerich
The Competing Travelling Salespersons Problem Under Multi-criteria

This paper introduces a novel type of problem in which two travelling salespersons compete, where each of them has two conflicting objectives. This problem is categorized as a Multi-Objective Game (MOG). It is solved by a recently introduced non-utility approach. According to this method, all rationalizable strategies are initially found to support an a posteriori decision on a strategy. An evolutionary algorithm is proposed to search for the set of rationalizable strategies. The applicability of the suggested algorithm is successfully demonstrated on the presented new type of problem.

Erella Matalon-Eisenstadt, Amiram Moshaiov, Gideon Avigad
A Parallel Multi-objective Memetic Algorithm Based on the IGD+ Indicator

The success of local search techniques in the solution of combinatorial optimization problems has motivated their incorporation into multi-objective evolutionary algorithms, giving rise to the so-called multi-objective memetic algorithms (MOMAs). The main advantage for adopting this sort of hybridization is to speed up convergence to the Pareto front. However, the use of MOMAs introduces new issues, such as how to select the solutions to which the local search will be applied and for how long to run the local search engine (the use of such a local search engine has an extra computational cost). Here, we propose a new MOMA which switches between a hypervolume-based global optimizer and an IGD+-based local search engine. Our proposed local search engine adopts a novel clustering technique based on the IGD+ indicator for splitting the objective space into sub-regions. Since both computing the hypervolume and applying a local search engine are very costly procedures, we propose a GPU-based parallelization of our algorithm. Our preliminary results indicate that our MOMA is able to converge faster than SMS-EMOA to the true Pareto front of multi-objective problems having different degrees of difficulty.

Edgar Manoatl Lopez, Carlos A. Coello Coello
Towards Automatic Testing of Reference Point Based Interactive Methods

In order to understand the strengths and weaknesses of optimization algorithms, it is important to have access to different types of test problems, well-defined performance indicators and analysis tools. Such tools are widely available for testing evolutionary multiobjective optimization algorithms. To our knowledge, no tools exist for analyzing the performance of interactive multiobjective optimization methods based on the reference point approach to communicating preference information. The main barrier to such tools is the involvement of human decision makers in interactive solution processes, which makes the performance of interactive methods dependent on the performance of the humans using them. In this research, we work towards a testing framework in which the human decision maker is replaced with an artificial one, allowing interactive methods to be tested repeatedly in a controlled environment.

Vesa Ojalehto, Dmitry Podkopaev, Kaisa Miettinen
Towards Many-Objective Optimisation with Hyper-heuristics: Identifying Good Heuristics with Indicators

The use of hyper-heuristics is increasing in the multi-objective optimisation domain, and the next logical advance in such methods is to use them in the solution of many-objective problems. Such problems comprise four or more objectives and are known to present a significant challenge to standard dominance-based evolutionary algorithms. We incorporate three comparison operators as alternatives to dominance and investigate their potential to optimise many-objective problems with a hyper-heuristic from the literature. We discover that the best results are obtained using either the favour relation or hypervolume, but conclude that changing the comparison operator alone will not allow for the generation of estimated Pareto fronts that are both close to and fully cover the true Pareto front.

David J. Walker, Ed Keedwell
Use of Piecewise Linear and Nonlinear Scalarizing Functions in MOEA/D

A number of weight vector-based algorithms have been proposed for many-objective optimization using the framework of MOEA/D (multi-objective evolutionary algorithm based on decomposition). Those algorithms are characterized by the use of uniformly distributed normalized weight vectors, which are also referred to as reference vectors, reference lines and search directions. Their common idea is to minimize the distance to the ideal point (i.e., convergence) and the distance to the reference line (i.e., uniformity). Each algorithm has its own mechanism for striking a convergence-uniformity balance. In the original MOEA/D with the PBI (penalty-based boundary intersection) function, this balance is handled by a penalty parameter. In this paper, we first discuss why an appropriate specification of the penalty parameter is difficult. Next, we suggest a desired shape for the contour lines of a scalarizing function in MOEA/D. Then, we propose two ideas for modifying the PBI function. The proposed ideas generate piecewise linear and nonlinear contour lines. Finally, we examine the effect of the proposed ideas on the performance of MOEA/D for many-objective test problems.
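For concreteness, the PBI function discussed here combines a distance d1 along the reference line (convergence) with a perpendicular distance d2 to that line (uniformity), weighted by the penalty parameter theta; a minimal sketch of the standard formulation (ours, not the paper's modified variants) follows:

```python
import math

def pbi(f, weight, z_star, theta=5.0):
    """Penalty-based boundary intersection value of objective vector f for a
    weight (search direction) vector, ideal point z_star and penalty
    parameter theta: g = d1 + theta * d2."""
    diff = [fi - zi for fi, zi in zip(f, z_star)]
    norm_w = math.sqrt(sum(wi * wi for wi in weight))
    d1 = abs(sum(di * wi for di, wi in zip(diff, weight))) / norm_w
    # d2: perpendicular distance from f to the reference line through z_star
    proj = [d1 * wi / norm_w for wi in weight]
    d2 = math.sqrt(sum((di - pi) ** 2 for di, pi in zip(diff, proj)))
    return d1 + theta * d2

# A point on the reference line has d2 = 0, so g reduces to d1.
g = pbi([1.0, 1.0], [1.0, 1.0], [0.0, 0.0], theta=5.0)
```

The contour lines of g in objective space are determined by theta, which is exactly why its specification matters: a larger theta penalizes points far from the reference line more strongly.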

Hisao Ishibuchi, Ken Doi, Yusuke Nojima
Pareto Inspired Multi-objective Rule Fitness for Noise-Adaptive Rule-Based Machine Learning

Learning classifier systems (LCSs) are rule-based evolutionary algorithms uniquely suited to classification and data mining in complex, multi-factorial, and heterogeneous problems. The fitness of individual LCS rules is commonly based on accuracy, but this metric alone is not ideal for assessing global rule ‘value’ in noisy problem domains and thus impedes effective knowledge extraction. Multi-objective fitness functions are promising but rely on prior knowledge of how to weigh objective importance (typically unavailable in real-world problems). The Pareto-front concept offers a multi-objective strategy that is agnostic to objective importance. We propose a Pareto-inspired multi-objective rule fitness (PIMORF) for LCS, and combine it with a complementary rule-compaction approach (SRC). We implemented these strategies in ExSTraCS, a successful supervised LCS, and evaluated performance over an array of complex simulated noisy and clean problems (i.e. genetic and multiplexer) that each concurrently model pure interaction effects and heterogeneity. While evaluation over multiple performance metrics yielded mixed results, this work represents an important first step towards efficiently learning complex problem spaces without the advantage of prior problem knowledge. Overall, the results suggest that PIMORF paired with SRC improved rule set interpretability, particularly with regard to heterogeneous patterns.

Ryan J. Urbanowicz, Randal S. Olson, Jason H. Moore
Decomposition-Based Approach for Solving Large Scale Multi-objective Problems

Decomposition is a well-established mathematical programming technique for dealing with multi-objective optimization problems (MOPs), which has been found to be efficient and effective when coupled with evolutionary algorithms, as evidenced by MOEA/D. MOEA/D decomposes a MOP into several single-objective subproblems by means of well-defined scalarizing functions. It has been shown that MOEA/D is able to generate a set of evenly distributed solutions for different multi-objective benchmark functions with a relatively low number of decision variables (usually no more than 30). In this work, we study the effect of scalability in MOEA/D and show how its efficacy decreases as the number of decision variables of the MOP increases. We also investigate the improvements that MOEA/D can achieve when combined with coevolutionary techniques, giving rise to a novel MOEA which decomposes the MOP both in objective space and in decision variable space. This new algorithm is capable of optimizing large-scale MOPs and outperforms MOEA/D and GDE3 when solving problems with a large number of decision variables (from 200 up to 1200).

Luis Miguel Antonio, Carlos A. Coello Coello

Parallel Algorithms and Hardware Issues

Frontmatter
An Evolutionary Framework for Replicating Neurophysiological Data with Spiking Neural Networks

Here we present a framework for the automatic tuning of spiking neural networks (SNNs) that utilizes an evolutionary algorithm featuring indirect encoding to achieve a drastic reduction in the dimensionality of the parameter space, combined with a GPU-accelerated SNN simulator that results in a considerable decrease in the time needed for fitness evaluation, despite the need for both a training and a testing phase. We tuned the parameters governing a learning rule called spike-timing-dependent plasticity (STDP), which was used to alter the synaptic weights of the network. We validated this framework by applying it to a case study in which synthetic neuronal firing rates were matched to electrophysiologically recorded neuronal firing rates in order to evolve network functionality. Our framework was not only able to match their firing rates, but also captured functional and behavioral aspects of the biological neuronal population, in roughly 50 generations.

Emily L. Rounds, Eric O. Scott, Andrew S. Alexander, Kenneth A. De Jong, Douglas A. Nitz, Jeffrey L. Krichmar
A Cross-Platform Assessment of Energy Consumption in Evolutionary Algorithms
Towards Energy-Aware Bioinspired Algorithms

Energy consumption is a matter of paramount importance in today’s environmentally conscious society. It is also bound to be a crucial issue in light of the emergent computational environments arising from the pervasive use of networked handheld devices and wearables. Evolutionary algorithms (EAs) are ideally suited to this kind of environment due to their intrinsic flexibility and adaptiveness, provided they operate within viable energy budgets. In this work we analyze the energy requirements of EAs, and particularly of one of their main flavours, genetic programming (GP), on several computational platforms, and study the impact that parametrisation has on these requirements, paving the way for a future generation of energy-aware EAs. As experimentally demonstrated, handheld devices and tiny computer models mainly used for educational purposes may be the most energy-efficient platforms when searching for solutions by means of EAs.

F. Fernández de Vega, F. Chávez, J. Díaz, J. A. García, P. A. Castillo, Juan J. Merelo, C. Cotta
Comparing Asynchronous and Synchronous Parallelization of the SMS-EMOA

We experimentally compare synchronous and asynchronous parallelization of the SMS-EMOA. We find that asynchronous parallelization usually obtains a better speed-up and is more robust to fluctuations in the evaluation time of the objective functions. At the same time, the solution quality of both methods degrades only slightly compared to the sequential variant. We even consider it possible for the parallelization to improve the quality of the solution set on some multimodal problems.

Simon Wessing, Günter Rudolph, Dino A. Menges
A Parallel Version of SMS-EMOA for Many-Objective Optimization Problems

In the last decade, there has been a growing interest in multi-objective evolutionary algorithms that use performance indicators to guide the search. A simple and effective one is the S-Metric Selection Evolutionary Multi-Objective Algorithm (SMS-EMOA), which is based on the hypervolume indicator. Even though the maximization of the hypervolume is equivalent to achieving Pareto optimality, its computational cost increases exponentially with the number of objectives, which severely limits its applicability to many-objective optimization problems. In this paper, we present a parallel version of SMS-EMOA, where the execution time is reduced through an asynchronous island model with micro-populations, and diversity is preserved by external archives that are pruned to a fixed size employing a recently created technique based on the Parallel-Coordinates graph. The proposed approach, called S-PAMICRO (PArallel MICRo Optimizer based on the S metric), is compared to the original SMS-EMOA and another state-of-the-art algorithm (HypE) on the WFG test problems using up to 10 objectives. Our experimental results show that S-PAMICRO is a promising alternative that can solve many-objective optimization problems at an affordable computational cost.
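To see why hypervolume is affordable only for few objectives, note that the two-objective case reduces to summing rectangles after sorting the front, while the general case grows exponentially with objective count; a sketch of the standard 2-D computation (ours, not the paper's code) follows:

```python
def hypervolume_2d(front, ref):
    """Hypervolume (to be maximised) of a 2-D minimisation front w.r.t. a
    reference point: sum of the non-overlapping rectangles obtained after
    sorting the non-dominated points by the first objective."""
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
hv = hypervolume_2d(front, ref=(4.0, 4.0))  # 3 + 2 + 1 = 6.0
```

This O(n log n) sweep has no analogue in high dimensions, which motivates the asynchronous, micro-population parallelization described in the abstract.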

Raquel Hernández Gómez, Carlos A. Coello Coello, Enrique Alba

Real-World Applications and Modelling

Frontmatter
Evolution of Active Categorical Image Classification via Saccadic Eye Movement

Pattern recognition and classification is a central concern for modern information processing systems. In particular, one key challenge to image and video classification has been that the computational cost of image processing scales linearly with the number of pixels in the image or video. Here we present an intelligent machine (the “active categorical classifier,” or ACC) that is inspired by the saccadic movements of the eye, and is capable of classifying images by selectively scanning only a portion of the image. We harness evolutionary computation to optimize the ACC on the MNIST hand-written digit classification task, and provide a proof-of-concept that the ACC works on noisy multi-class data. We further analyze the ACC and demonstrate its ability to classify images after viewing only a fraction of the pixels, and provide insight on future research paths to further improve upon the ACC presented here.

Randal S. Olson, Jason H. Moore, Christoph Adami
Cooperative Coevolution of Control for a Real Multirobot System

The potential of cooperative coevolutionary algorithms (CCEAs) as a tool for evolving control for heterogeneous multirobot teams has been shown in several previous works. The vast majority of these works have, however, been confined to simulation-based experiments. In this paper, we present one of the first demonstrations of a real multirobot system, operating outside laboratory conditions, with controllers synthesised by CCEAs. We evolve control for an aquatic multirobot system that has to perform a cooperative predator-prey pursuit task. The evolved controllers are transferred to real hardware, and their performance is assessed in a non-controlled outdoor environment. Two approaches are used to evolve control: a standard fitness-driven CCEA, and novelty-driven coevolution. We find that both approaches are able to evolve teams that transfer successfully to the real robots. Novelty-driven coevolution is able to evolve a broad range of successful team behaviours, which we test on the real multirobot system.

Jorge Gomes, Miguel Duarte, Pedro Mariano, Anders Lyhne Christensen
Replicating the Stroop Effect Using a Developmental Spatial Neuroevolution System

We present an approach to the study of cognitive phenomena by using evolutionary computation. To this end, we use a spatial, developmental, neuroevolution system. We use our system to evolve ANNs to perform simple abstractions of the cognitive tasks of color perception and color reading. We define these tasks to explore the nature of the Stroop effect. We show that we can evolve networks to perform a variety of cognitive tasks, and also that evolved networks exhibit complex interference behavior when dealing with multiple tasks and incongruent data. We also show that this interference behavior can be manipulated by changing the learning parameters, a method that we successfully use to create a Stroop-like interference pattern.

Amit Benbassat, Avishai Henik
Evolving Cryptographic Pseudorandom Number Generators

Random number generators (RNGs) play an important role in many real-world applications. Besides true hardware RNGs, one important class is that of deterministic random number generators. Such generators do not possess the unpredictability of true RNGs, but are still in widespread use. For a deterministic RNG to be used in cryptography, it needs to fulfill a number of conditions related to speed, security, and ease of implementation. In this paper, we investigate how to evolve deterministic RNGs with Cartesian Genetic Programming. Our results show that such evolved generators easily pass all randomness tests and are extremely fast and small in hardware.

Stjepan Picek, Dominik Sisejkovic, Vladimir Rozic, Bohan Yang, Domagoj Jakobovic, Nele Mentens
Exploring Uncertainty and Movement in Categorical Perception Using Robots

Cognitive agents are able to perform categorical perception through physical interaction (active categorical perception; ACP), or passively at a distance (distal categorical perception; DCP). It is possible that the former scaffolds the learning of the latter. However, it is unclear whether ACP indeed scaffolds DCP in humans and animals, or how a robot could likewise be trained to learn DCP from ACP. Here we demonstrate a method for doing so which involves uncertainty: robots are trained to perform ACP when uncertain and DCP when certain. We found evidence in these trials that suggests such scaffolding may be occurring: early during training, robots moved objects to reduce uncertainty as to their class (ACP), but later in training, robots exhibited less action and less class uncertainty (DCP). Furthermore, we demonstrate that robots trained in such a manner are more competent at categorizing novel objects than robots trained to categorize in other ways.

Nathaniel Powell, Josh Bongard
Community Structure Detection for the Functional Connectivity Networks of the Brain

The community structure detection problem in weighted networks is tackled with a new approach based on game theory and extremal optimization, called Weighted Nash Extremal Optimization. This method approximates the Nash equilibria of a game in which nodes, as players, choose their community by maximizing their payoffs. After performing numerical experiments on synthetic networks, the new method is used to analyze functional connectivity networks of the brain in order to reveal possible connections between different brain regions. Results show that the proposed approach may be used to find biomedically relevant knowledge about brain functionality.

Rodica Ioana Lung, Mihai Suciu, Regina Meszlényi, Krisztian Buza, Noémi Gaskó
Data Classification Using Carbon-Nanotubes and Evolutionary Algorithms

The potential of Evolution in Materio (EiM) for machine learning problems is explored here. This technique makes use of evolutionary algorithms (EAs) to influence the processing abilities of an un-configured physically rich medium, via exploitation of its physical properties. The EiM results reported are obtained using particle swarm optimisation (PSO) and differential evolution (DE) to exploit the complex voltage/current relationship of a mixture of single walled carbon nanotubes (SWCNTs) and liquid crystals (LCs). The computational problem considered is simple binary data classification. Results presented are consistent and reproducible. The evolutionary process based on EAs has the capacity to evolve the material to a state where data classification can be performed. Finally, it appears that through the use of smooth signal inputs, PSO produces classifiers out of the SWCNT/LC substrate which generalise better than those evolved with DE.

E. Vissol-Gaudin, A. Kotsialos, M. K. Massey, D. A. Zeze, C. Pearson, C. Groves, M. C. Petty
WS Network Design Problem with Nonlinear Pricing Solved by Hybrid Algorithm

The aim of the paper is to introduce a wait-and-see (WS) reformulation of the transportation network design problem with stochastic price-dependent demand. The demand is defined by hyperbolic dependency and its parameters are modeled by random variables. Then, a WS reformulation of the mixed integer nonlinear program (MINLP) is proposed. The obtained separable scenario-based model can be repeatedly solved as a finite set of MINLPs by means of integer programming techniques or some heuristics. However, the authors combine a traditional optimization algorithm and a suitable genetic algorithm to obtain a hybrid algorithm that is modified for the WS case. The implementation of this hybrid algorithm and test results, illustrated with figures, are also discussed in the paper.

Dušan Hrabec, Pavel Popela, Jan Roupec
A Novel Efficient Mutation for Evolutionary Design of Combinational Logic Circuits

In this paper we investigate evolutionary mechanisms and propose a new mutation operator for the evolutionary design of Combinational Logic Circuits (CLCs). Understanding the root causes of evolutionary success is critical to improving existing techniques. Our focus is two-fold: to analyze beneficial mutations in Cartesian Genetic Programming, and to create an efficient mutation operator for digital CLC design. In the experiments performed, the proposed mutation is better than or equivalent to the traditional mutation.

Francisco A. L. Manfrini, Heder S. Bernardino, Helio J. C. Barbosa
Fast and Effective Multi-objective Optimisation of Submerged Wave Energy Converters

Despite its considerable potential, wave energy has not yet reached full commercial development. Currently, dozens of wave energy projects are exploring a variety of techniques to produce wave energy efficiently. A common design for a wave energy converter is called a buoy. A buoy typically floats on the surface or just below the surface of the water, and captures energy from the movement of the waves. In this article, we tackle the multi-objective variant of this problem: we take into account the highly complex interactions of the buoys while optimising the energy yield, the necessary area, and the cable length needed to connect all buoys. We employ caching techniques and problem-specific variation operators to make this problem computationally feasible. This is the first time the interactions between wave energy resource and array configuration are studied in a multi-objective way.

Dídac Rodríguez Arbonès, Boyin Ding, Nataliia Y. Sergiienko, Markus Wagner
Evolution of Spiking Neural Networks Robust to Noise and Damage for Control of Simple Animats

One of the central questions of biology is how complex biological systems can continue functioning in the presence of perturbations, damage, and mutational insults. This paper investigates evolution of spiking neural networks, consisting of adaptive exponential neurons. The networks are encoded in linear genomes in a manner inspired by genetic networks. The networks control a simple animat, with two sensors and two actuators, searching for targets in a simple environment. The results show that the presence of noise on the membrane voltage during evolution allows for evolution of efficient control and robustness to perturbations to the value of the neural parameters of neurons.

Borys Wróbel
Anomaly Detection with the Voronoi Diagram Evolutionary Algorithm

This paper presents the Voronoi diagram-based evolutionary algorithm (VorEAl). VorEAl partitions the input space into abnormal/normal subsets using Voronoi diagrams. The diagrams are evolved using a multi-objective bio-inspired approach in order to conjointly optimize classification metrics while also being able to represent areas of the data space that are not present in the training dataset. As part of the paper, VorEAl is experimentally validated and contrasted with similar approaches.

Luis Martí, Arsene Fansi-Tchango, Laurent Navarro, Marc Schoenauer
Evolving Spatially Aggregated Features from Satellite Imagery for Regional Modeling

Satellite imagery and remote sensing provide explanatory variables at relatively high resolutions for modeling geospatial phenomena, yet regional summaries are often desirable for analysis and actionable insight. In this paper, we propose a novel method of inducing spatial aggregations as a component of the machine learning process, yielding regional model features whose construction is driven by model prediction performance rather than prior assumptions. Our results demonstrate that Genetic Programming is particularly well suited to this type of feature construction because it can automatically synthesize appropriate aggregations, as well as better incorporate them into predictive models compared to other regression methods we tested. In our experiments we consider a specific problem instance and real-world dataset relevant to predicting snow properties in high-mountain Asia.

Sam Kriegman, Marcin Szubert, Josh C. Bongard, Christian Skalka
A Hybrid Autoencoder and Density Estimation Model for Anomaly Detection

A novel one-class learning approach is proposed for network anomaly detection based on combining autoencoders and density estimation. An autoencoder attempts to reproduce the input data in the output layer. The smaller hidden layer becomes a bottleneck, forming a compressed representation of the data. It is now proposed to take low density in the hidden layer as indicating an anomaly. We study two possibilities for modelling density: a single Gaussian, and a full kernel density estimation. The methods are tested on the NSL-KDD dataset, and experiments show that the proposed methods out-perform best-known results on three out of four sub-datasets.
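The density-estimation half of this approach can be sketched compactly. Below, a single diagonal Gaussian (the simpler of the two density models studied) is fitted to bottleneck codes; all names and data are purely illustrative, and in the actual model the codes would come from a trained autoencoder's hidden layer:

```python
import math

def fit_diag_gaussian(codes):
    # Fit a diagonal Gaussian to the hidden-layer codes of normal traffic.
    d, n = len(codes[0]), len(codes)
    mu = [sum(c[j] for c in codes) / n for j in range(d)]
    var = [max(sum((c[j] - mu[j]) ** 2 for c in codes) / n, 1e-9)
           for j in range(d)]
    return mu, var

def log_density(x, mu, var):
    # Gaussian log density; a low value flags the point as anomalous.
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mu, var))
```

A point far from the bulk of the normal-class codes receives a much lower log density and is therefore reported as an anomaly.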

Van Loi Cao, Miguel Nicolau, James McDermott

Theory

Frontmatter
Parameterized Analysis of Multi-objective Evolutionary Algorithms and the Weighted Vertex Cover Problem

A rigorous runtime analysis of evolutionary multi-objective optimization for the classical vertex cover problem in the context of parameterized complexity analysis has been presented by Kratsch and Neumann [1]. In this paper, we extend the analysis to the weighted vertex cover problem and provide a fixed parameter evolutionary algorithm with respect to OPT, the cost of the optimal solution for the problem. Moreover, using a diversity mechanism, we present a multi-objective evolutionary algorithm that finds a $$2$$-approximation in expected polynomial time.

Mojgan Pourhassan, Feng Shi, Frank Neumann
Fixed-Parameter Single Objective Search Heuristics for Minimum Vertex Cover

We consider how well-known branching approaches for the classical minimum vertex cover problem can be turned into randomized initialization strategies with provable performance guarantees and investigate them experimentally. Furthermore, we show how these techniques can be built into local search components and analyze a basic local search variant that is similar to a state-of-the-art approach called NuMVC. Our experimental results for the two local search approaches show that making use of more complex branching strategies in the local search component can lead to better results on various benchmark graphs.

Wanru Gao, Tobias Friedrich, Frank Neumann
What Does the Evolution Path Learn in CMA-ES?

The covariance matrix adaptation evolution strategy (CMA-ES) evolves a multivariate Gaussian distribution for continuous optimization. The evolution path, which accumulates historical search directions in successive generations, plays a crucial role in the adaptation of the covariance matrix. In this paper, we investigate what the evolution path learns in the optimization procedure. We show that the evolution path accumulates the natural gradient with respect to the distribution mean, and acts as a momentum under stationary conditions. The experimental results suggest that the evolution path learns relative scales of the eigenvectors, expanded by singular values along corresponding eigenvectors of the inverse Hessian. Further, we show that the outer product of the evolution path serves as a rank-1 momentum term for the covariance matrix.
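For context, the evolution path referred to above is, in the standard CMA-ES formulation (notation may differ slightly from the paper's), accumulated as $$p_c^{(t+1)} = (1-c_c)\,p_c^{(t)} + \sqrt{c_c(2-c_c)\,\mu_{\mathrm{eff}}}\,\frac{m^{(t+1)}-m^{(t)}}{\sigma^{(t)}}$$, where $$m^{(t)}$$ is the distribution mean, $$\sigma^{(t)}$$ the step size, and $$c_c$$ the cumulation rate. Its outer product then enters the covariance matrix as the rank-1 update term $$C^{(t+1)} = (1-c_1)\,C^{(t)} + c_1\,p_c^{(t+1)}(p_c^{(t+1)})^{\top}$$ (omitting the rank-$$\mu$$ part), which is the term the abstract interprets as a momentum.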

Zhenhua Li, Qingfu Zhang
Graceful Scaling on Uniform Versus Steep-Tailed Noise

Recently, different evolutionary algorithms (EAs) have been analyzed in noisy environments. The most frequently used noise model for this was additive posterior noise (noise added after the fitness evaluation) taken from a Gaussian distribution. In particular, for this setting it was shown that the $$(\mu +1)$$-EA on OneMax does not scale gracefully (higher noise cannot efficiently be compensated by higher $$\mu $$). In this paper we want to understand whether there is anything special about the Gaussian distribution which makes the $$(\mu +1)$$-EA not scale gracefully. We keep the setting of posterior noise, but we look at other distributions. We see that for exponential tails the $$(\mu +1)$$-EA on OneMax does also not scale gracefully, for similar reasons as in the case of Gaussian noise. On the other hand, for uniform distributions (as well as other, similar distributions) we see that the $$(\mu +1)$$-EA on OneMax does scale gracefully, indicating the importance of the noise model.
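The posterior-noise model studied here is easy to state in code: the noise sample is added after the exact fitness evaluation. A minimal sketch (function names and parameters are ours, chosen for illustration):

```python
import random

def noisy_onemax(bits, noise, rng):
    # Additive posterior noise: a noise sample is added *after* the exact
    # fitness evaluation, matching the noise model in the abstract.
    return sum(bits) + noise(rng)

# Two illustrative noise distributions (parameters chosen arbitrarily):
def gaussian_noise(rng):
    return rng.gauss(0.0, 2.0)     # unbounded (Gaussian) tails

def uniform_noise(rng):
    return rng.uniform(-2.0, 2.0)  # bounded support
```

The key structural difference exploited by the analysis is visible here: uniform noise can perturb the observed fitness by at most a fixed amount, while Gaussian noise occasionally produces arbitrarily large deviations.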

Tobias Friedrich, Timo Kötzing, Martin S. Krejca, Andrew M. Sutton
On the Robustness of Evolving Populations

Most theoretical work that studies the benefit of recombination focuses on the ability of crossover to speed up optimization time on specific search problems. In this paper, we take a slightly different perspective and investigate recombination in the context of evolving solutions that exhibit mutational robustness, i.e., they display insensitivity to small perturbations. Various models in population genetics have demonstrated that increasing the effective recombination rate promotes the evolution of robustness. We show this result also holds in the context of evolutionary computation by rigorously proving crossover promotes the evolution of robust solutions in the standard ($$\mu $$ + 1) GA. Surprisingly, our results show that this effect is still present even when robust solutions are at a selective disadvantage due to lower fitness values.

Tobias Friedrich, Timo Kötzing, Andrew M. Sutton
Provably Optimal Self-adjusting Step Sizes for Multi-valued Decision Variables

We regard the problem of maximizing a OneMax-like function defined over an alphabet of size r. In previous work [GECCO 2016] we have investigated how three different mutation operators influence the performance of Randomized Local Search (RLS) and the (1+1) Evolutionary Algorithm. This work revealed that among these natural mutation operators none is superior to the other two for any choice of r. We have also given in [GECCO 2016] some indication that the best achievable run time for large r is $$\varTheta (n \log r (\log n + \log r))$$, regardless of how the mutation operator is chosen, as long as it is a static choice (i.e., the distribution used for variation of the current individual does not change over time). Here in this work we show that we can achieve a better performance if we allow for adaptive mutation operators. More precisely, we analyze the performance of RLS using a self-adjusting mutation strength. In this algorithm the size of the steps taken in each iteration depends on the success of previous iterations. That is, the mutation strength is increased after a successful iteration and it is decreased otherwise. We show that this idea yields an expected optimization time of $$\varTheta (n (\log n + \log r))$$, which is optimal among all comparison-based search heuristics. This is the first time that self-adjusting parameter choices are shown to outperform static choices on a discrete multi-valued optimization problem.
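The success-based scheme described above can be sketched in a few lines. The following is an illustrative reconstruction, not the paper's exact algorithm: RLS on an r-valued OneMax-like function, with a step size that doubles after a successful iteration and halves otherwise (all names and parameters are ours):

```python
import random

def self_adjusting_rls(target, r, max_iters=50000, seed=0):
    # RLS with self-adjusting step size on f(x) = -sum_i |x_i - target_i|,
    # maximised over {0, ..., r-1}^n (an illustrative sketch).
    rng = random.Random(seed)
    n = len(target)
    x = [rng.randrange(r) for _ in range(n)]
    def fitness(y):
        return -sum(abs(yi - ti) for yi, ti in zip(y, target))
    fx, step = fitness(x), 1
    for _ in range(max_iters):
        if fx == 0:           # optimum reached
            break
        i = rng.randrange(n)
        y = list(x)
        # move the chosen component by +/- step, clamped to the alphabet
        y[i] = min(r - 1, max(0, y[i] + rng.choice([-step, step])))
        fy = fitness(y)
        if fy > fx:           # success: accept and be more aggressive
            x, fx, step = y, fy, min(step * 2, r - 1)
        else:                 # failure: keep x and be more conservative
            step = max(step // 2, 1)
    return x, fx
```

Large steps cover long distances early on, while the halving rule guarantees that unit steps are eventually available for the final fine-tuning, which is the intuition behind the improved $$\varTheta (n (\log n + \log r))$$ bound.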

Benjamin Doerr, Carola Doerr, Timo Kötzing
Example Landscapes to Support Analysis of Multimodal Optimisation

Theoretical analysis of all kinds of randomised search heuristics has been and keeps being supported and facilitated by the use of simple example functions. Such functions help us understand the working principles of complicated heuristics. If the function represents some properties of practical problem landscapes, these results become practically relevant. While this has been very successful in the past for optimisation in unimodal landscapes, there is a need for generally accepted useful simple example functions for situations where unimodal objective functions are insufficient: multimodal optimisation and investigation of diversity preserving mechanisms are examples. A family of example landscapes is defined that comes with a limited number of parameters that allow control of important features of the landscape while still being simple in some sense. Different expressions of these landscapes are presented and fundamental properties are explored.

Thomas Jansen, Christine Zarges
Self-adaptation of Mutation Rates in Non-elitist Populations

The runtime of evolutionary algorithms (EAs) depends critically on their parameter settings, which are often problem-specific. Automated schemes for parameter tuning have been developed to alleviate the high costs of manual parameter tuning. Experimental results indicate that self-adaptation, where parameter settings are encoded in the genomes of individuals, can be effective in continuous optimisation. However, results in discrete optimisation have been less conclusive. Furthermore, a rigorous runtime analysis that explains how self-adaptation can lead to asymptotic speedups has been missing. This paper provides the first such analysis for discrete, population-based EAs. We apply level-based analysis to show how a self-adaptive EA is capable of fine-tuning its mutation rate, leading to exponential speedups over EAs using fixed mutation rates.

Duc-Cuong Dang, Per Kristian Lehre
Hypervolume Sharpe-Ratio Indicator: Formalization and First Theoretical Results

Set-quality indicators have been used in Evolutionary Multiobjective Optimization Algorithms (EMOAs) to guide the search process. A new class of set-quality indicators, the Sharpe-Ratio Indicator, combining the selection of solutions with fitness assignment, has recently been proposed. This class is based on a formulation of fitness assignment as a Portfolio Selection Problem which sees solutions as assets whose returns are random variables, and fitness as the investment in such assets/solutions. An instance of this class based on the Hypervolume Indicator has shown promising results when integrated in an EMOA called POSEA. The aim of this paper is to formalize the class of Sharpe-Ratio Indicators and to demonstrate some of the properties of that particular Sharpe-Ratio Indicator instance concerning monotonicity, sensitivity to scaling and parameter independence.
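For reference, the classical Sharpe ratio from portfolio theory on which this indicator class builds is, in standard portfolio-selection notation (which may differ in details from the paper's), $$h(z) = \frac{z^{\top}\xi - r_f}{\sqrt{z^{\top} Q z}}$$, where $$z$$ is the vector of investments in the assets (here, solutions), $$\xi$$ their expected returns, $$Q$$ the covariance matrix of the returns, and $$r_f$$ the riskless return; fitness is then assigned via the $$z$$ maximizing $$h(z)$$.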

Andreia P. Guerreiro, Carlos M. Fonseca
k-Bit Mutation with Self-Adjusting k Outperforms Standard Bit Mutation

When using the classic standard bit mutation operator, parent and offspring differ in a random number of bits, distributed according to a binomial law. This has the advantage that all Hamming distances occur with some positive probability, hence this operator can be used, in principle, for all fitness landscapes. The downside of this “one-size-fits-all” approach, naturally, is a performance loss caused by the fact that often the ideal number of bits is not flipped. Still, the fear of getting stuck in local optima has made standard bit mutation become the preferred mutation operator. In this work we show that a self-adjusting choice of the number of bits to be flipped can both avoid the performance loss of standard bit mutation and avoid the risk of getting stuck in local optima. We propose a simple mechanism to adaptively learn the currently optimal mutation strength from previous iterations. This aims both at exploiting that generally different problems may need different mutation strengths and that for a fixed problem different strengths may become optimal in different stages of the optimization process. We experimentally show that our simple hill climber with this adaptive mutation strength outperforms both the randomized local search heuristic and the (1+1) evolutionary algorithm on the LeadingOnes function and on the minimum spanning tree problem. We show via mathematical means that our algorithm is able to detect precisely (apart from lower order terms) the complicated optimal fitness-dependent mutation strength recently discovered for the OneMax function. With its self-adjusting mutation strength it thus attains the same runtime (apart from o(n) lower-order terms) and the same (asymptotic) 13 % fitness-distance improvement over RLS that was recently obtained by manually computing the optimal fitness-dependent mutation strength.
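The contrast drawn above can be made concrete: under standard bit mutation the Hamming distance between parent and offspring is binomially distributed, whereas a k-bit operator fixes it exactly. A minimal sketch (function names are ours):

```python
import random

def standard_bit_mutation(bits, rng, p=None):
    # Flip each bit independently with probability p (default 1/n).  The
    # number of flipped bits is then Binomial(n, p)-distributed, so every
    # Hamming distance occurs with positive probability.
    p = 1.0 / len(bits) if p is None else p
    return [b ^ 1 if rng.random() < p else b for b in bits]

def k_bit_mutation(bits, k, rng):
    # Flip exactly k distinct positions: parent and offspring are always
    # at Hamming distance exactly k.
    child = list(bits)
    for i in rng.sample(range(len(bits)), k):
        child[i] ^= 1
    return child
```

The self-adjusting operator of the paper keeps the second form but learns k online, aiming to combine the focused search of a fixed k with the flexibility of the binomial distribution.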

Benjamin Doerr, Carola Doerr, Jing Yang
Selection Hyper-heuristics Can Provably Be Helpful in Evolutionary Multi-objective Optimization

Selection hyper-heuristics are automated methodologies for selecting existing low-level heuristics to solve hard computational problems. They have been found very useful for evolutionary algorithms when solving both single and multi-objective real-world optimization problems. Previous work mainly focuses on empirical study, while theoretical study, particularly in multi-objective optimization, is largely insufficient. In this paper, we use three main components of multi-objective evolutionary algorithms (selection mechanisms, mutation operators, acceptance strategies) as low-level heuristics, respectively, and prove that using heuristic selection (i.e., mixing low-level heuristics) can be exponentially faster than using only one low-level heuristic. Our result provides theoretical support for multi-objective selection hyper-heuristics, and might be helpful for designing efficient heuristic selection methods in practice.

Chao Qian, Ke Tang, Zhi-Hua Zhou

Diversity and Landscape Analysis

Frontmatter
RK-EDA: A Novel Random Key Based Estimation of Distribution Algorithm

The challenges of solving problems naturally represented as permutations by Estimation of Distribution Algorithms (EDAs) have been a recent focus of interest in the evolutionary computation community. One of the most common alternative representations for permutation based problems is the Random Key (RK), which enables the use of continuous approaches for this problem domain. However, the use of RK in EDAs has not produced competitive results to date, and more recent research on permutation based EDAs has focused on creating superior algorithms with specially adapted representations. In this paper, we present RK-EDA; a novel RK based EDA that uses a cooling scheme to balance the exploration and exploitation of a search space by controlling the variance in its probabilistic model. Unlike the general performance of RK based EDAs, RK-EDA is actually competitive with the best EDAs on common permutation test problems: Flow Shop Scheduling, Linear Ordering, Quadratic Assignment, and Travelling Salesman Problems.

Mayowa Ayodele, John McCall, Olivier Regnier-Coudert
REMEDA: Random Embedding EDA for Optimising Functions with Intrinsic Dimension

It has been observed that in many real-world large scale problems only few variables have a major impact on the function value: while there are many inputs to the function, there are just a few degrees of freedom. We refer to such functions as having a low intrinsic dimension. In this paper we devise an Estimation of Distribution Algorithm (EDA) for continuous optimisation that exploits intrinsic dimension without knowing the influential subspace of the input space, or its dimension, by employing the idea of random embedding. While the idea is applicable to any optimiser, EDA is known to be remarkably successful in low dimensional problems but prone to the curse of dimensionality in larger problems because its model building step requires large population sizes. Our method, Random Embedding in Estimation of Distribution Algorithm (REMEDA), remedies this weakness and is able to optimise very large dimensional problems as long as their intrinsic dimension is low.

Momodou L. Sanyang, Ata Kabán
Feature-Based Diversity Optimization for Problem Instance Classification

Understanding the behaviour of heuristic search methods is a challenge. This even holds for simple local search methods such as 2-OPT for the Traveling Salesperson problem. In this paper, we present a general framework that is able to construct a diverse set of instances that are hard or easy for a given search heuristic. Such a diverse set is obtained by using an evolutionary algorithm for constructing hard or easy instances that are diverse with respect to different features of the underlying problem. Examining the constructed instance sets, we show that many combinations of two or three features give a good classification of the TSP instances in terms of whether they are hard for 2-OPT to solve.

Wanru Gao, Samadhi Nallaperuma, Frank Neumann
Searching for Quality Diversity When Diversity is Unaligned with Quality

Inspired by natural evolution’s affinity for discovering a wide variety of successful organisms, a new evolutionary search paradigm has emerged wherein the goal is not to find the single best solution but rather to collect a diversity of unique phenotypes where each variant is as good as it can be. These quality diversity (QD) algorithms therefore must explore multiple promising niches simultaneously. A QD algorithm’s diversity component, formalized by specifying a behavior characterization (BC), not only generates diversity but also promotes quality by helping to overcome deception in the fitness landscape. However, some BCs (particularly those that are unaligned with the notion of quality) do not adequately mitigate deception, rendering QD algorithms unable to discover the best-performing solutions on difficult problems. This paper introduces a solution that enables QD algorithms to pursue arbitrary notions of diversity without compromising their ability to solve hard problems: driving search with multiple BCs simultaneously.

Justin K. Pugh, L. B. Soros, Kenneth O. Stanley
Emergence of Diversity and Its Benefits for Crossover in Genetic Algorithms

Population diversity is essential for avoiding premature convergence in Genetic Algorithms (GAs) and for the effective use of crossover. Yet the dynamics of how diversity emerges in populations are not well understood. We use rigorous runtime analysis to gain insight into population dynamics and GA performance for a standard ($$\mu $$+1) GA and the $$\mathrm {Jump}_k$$ test function. By studying the stochastic process underlying the size of the largest collection of identical genotypes we show that the interplay of crossover followed by mutation may serve as a catalyst leading to a sudden burst of diversity. This leads to improvements of the expected optimisation time of order $$\varOmega (n/\log n)$$ compared to mutation-only algorithms like the (1+1) EA.
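The $$\mathrm {Jump}_k$$ test function used in this analysis is standard in the runtime-analysis literature and can be stated compactly (a sketch in our own notation):

```python
def jump_k(bits, k):
    # Jump_k: fitness rises with the number of ones up to n - k, then
    # drops, leaving a gap of k bits before the optimum (all ones) that
    # mutation alone must jump over in a single step.
    n, ones = len(bits), sum(bits)
    if ones <= n - k or ones == n:
        return k + ones
    return n - ones
```

The gap is what makes diversity valuable here: crossover between two distinct gap-edge genotypes can assemble the all-ones optimum directly, which mutation-only algorithms like the (1+1) EA can only reach by flipping k specific bits at once.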

Duc-Cuong Dang, Tobias Friedrich, Timo Kötzing, Martin S. Krejca, Per Kristian Lehre, Pietro S. Oliveto, Dirk Sudholt, Andrew M. Sutton
Coarse-Grained Barrier Trees of Fitness Landscapes

Recent literature suggests that local optima in fitness landscapes are clustered, which offers an explanation of why perturbation-based metaheuristics often fail to find the global optimum: they become trapped in a sub-optimal cluster. We introduce a method to extract and visualize the global organization of these clusters in form of a barrier tree. Barrier trees have been used to visualize the barriers between local optima basins in fitness landscapes. Our method computes a more coarsely grained tree to reveal the barriers between clusters of local optima. The core element is a new variant of the flooding algorithm, applicable to local optima networks, a compressed representation of fitness landscapes. To identify the clusters, we apply a community detection algorithm. A sample of 200 NK fitness landscapes suggests that the depth of their coarse-grained barrier tree is related to their search difficulty.

Sebastian Herrmann, Gabriela Ochoa, Franz Rothlauf
Rapid Phenotypic Landscape Exploration Through Hierarchical Spatial Partitioning

Exploration of the search space through the optimisation of phenotypic diversity is of increasing interest within the field of evolutionary robotics. Novelty search and the more recent MAP-Elites are two state of the art evolutionary algorithms which diversify low dimensional phenotypic traits for divergent exploration. In this paper we introduce a novel alternative for rapid divergent search of the feature space. Unlike previous phenotypic search procedures, our proposed Spatial, Hierarchical, Illuminated Neuro-Evolution (SHINE) algorithm utilises a tree structure for the maintenance and selection of potential candidates. SHINE penalises previous solutions in more crowded areas of the landscape. Our experimental results show that SHINE significantly outperforms novelty search and MAP-Elites in both performance and exploration. We conclude that the SHINE algorithm is a viable method for rapid divergent search of low dimensional, phenotypic landscapes.

Davy Smith, Laurissa Tokarchuk, Geraint Wiggins
Understanding Environmental Influence in an Open-Ended Evolutionary Algorithm

It is well known that in open-ended evolution, the nature of the environment plays a key role in directing evolution. However, in Evolutionary Robotics, it is often unclear exactly how parameterisation of a given environment might influence the emergence of particular behaviours. We consider environments in which the total amount of energy is parameterised by availability and value, and use surface plots to explore the relationship between those environment parameters and emergent behaviour using a variant of a well-known distributed evolutionary algorithm (mEDEA). Analysis of the resulting landscapes shows that it is crucial for a researcher to select appropriate parameterisations in order that the environment provides the right balance between facilitating survival and exerting sufficient pressure for new behaviours to emerge. To the best of our knowledge, this is the first time such an analysis has been undertaken.

Andreas Steyven, Emma Hart, Ben Paechter
Simple Random Sampling Estimation of the Number of Local Optima

We evaluate the performance of estimating the number of local optima by estimating their proportion in the search space using simple random sampling (SRS). The performance of this method is compared against that of the jackknife method. The methods are used to estimate the number of optima in two landscapes of random instances of some combinatorial optimisation problems. SRS provides a cheap, unbiased and accurate estimate when the proportion is not exceedingly small. We discuss choices of confidence interval in the case of extremely small proportion. In such cases, the method more likely provides an upper bound to the number of optima and can be combined with other methods to obtain a better lower bound. We suggest that SRS should be the first choice for estimating the number of optima when no prior information is available about the landscape under study.
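The SRS estimator described above amounts to estimating the proportion of local optima among uniformly drawn points and scaling by the search-space size. A minimal sketch for pseudo-Boolean landscapes (function names and the toy fitness used are ours):

```python
import random

def is_local_optimum(x, f):
    # x is a local optimum if no Hamming-1 neighbour is strictly better.
    fx = f(x)
    for i in range(len(x)):
        y = list(x)
        y[i] ^= 1
        if f(tuple(y)) > fx:
            return False
    return True

def srs_estimate(f, n, sample_size, seed=0):
    # Estimate the proportion of local optima among uniformly drawn points
    # and scale by the search-space size 2^n -- an unbiased estimate, as
    # the abstract notes, provided the proportion is not exceedingly small.
    rng = random.Random(seed)
    hits = sum(is_local_optimum(tuple(rng.randrange(2) for _ in range(n)), f)
               for _ in range(sample_size))
    return (hits / sample_size) * 2 ** n
```

When the true proportion is tiny, the sample will often contain no optima at all, which is exactly the regime where the abstract recommends treating the result as an upper bound and combining it with other methods.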

Khulood Alyahya, Jonathan E. Rowe
evoVision3D: A Multiscale Visualization of Evolutionary Histories

Evolutionary computation is a field defined by large data sets and complex relationships. Because of this complexity it can be difficult to identify trends and patterns that can help improve future projects and drive experimentation. To address this we present evoVision3D, a multiscale 3D system designed to take data sets from evolutionary design experiments and visualize them in order to assist in their inspection and analysis. Our system is implemented in the Unity 3D game development environment, which we show lends itself to immersive navigation through large data sets and to interactive data exploration beyond evolution-based search.

Justin J. Kelly, Christian Jacob
Landscape Features for Computationally Expensive Evaluation Functions: Revisiting the Problem of Noise

When combined with machine learning, the black-box analysis of fitness landscapes promises to provide us with easy-to-compute features that can be used to select and configure an algorithm that is well-suited to the task at hand. As applications that involve computationally expensive, stochastic simulations become increasingly relevant in practice, however, there is a need for landscape features that are both (A) possible to estimate with a very limited budget of fitness evaluations, and (B) accurate in the presence of small to moderate amounts of noise. We show via a small set of relatively inexpensive landscape features based on hill-climbing methods that these two goals are in tension with each other: cheap features are sometimes extremely sensitive to even very small amounts of noise. We propose that features whose values are calculated using population-based search methods may provide a path forward in developing landscape analysis tools that are both inexpensive and robust to noise.

Eric O. Scott, Kenneth A. De Jong
Towards Analyzing Multimodality of Continuous Multiobjective Landscapes

This paper formally defines multimodality in multiobjective optimization (MO). We introduce a test-bed in which multimodal MO problems with known properties can be constructed as well as numerical characteristics of the resulting landscape. Gradient- and local search based strategies are compared on exemplary problems together with specific performance indicators in the multimodal MO setting. By this means the foundation for Exploratory Landscape Analysis in MO is provided.

Pascal Kerschke, Hao Wang, Mike Preuss, Christian Grimme, André Deutz, Heike Trautmann, Michael Emmerich
Population Diversity Measures Based on Variable-Order Markov Models for the Traveling Salesman Problem

This paper presents entropy-based population diversity measures that take into account dependencies between the variables in order to maintain genetic diversity in a GA for the traveling salesman problem. The first one is formulated as the entropy rate of a variable-order Markov process, where the probability of occurrence of each vertex is assumed to be dependent on the preceding vertices of variable length in the population. Compared to the use of a fixed-order Markov model, the variable-order model has the advantage of avoiding the lack of sufficient statistics for the estimation of the exponentially increasing number of conditional probability components as the order of the Markov process increases. Moreover, we develop a more elaborate population diversity measure by further reducing the problem of the lack of statistics.

Yuichi Nagata
Convergence Versus Diversity in Multiobjective Optimization

Convergence and diversity are two main goals in multiobjective optimization. In the literature, most existing multiobjective optimization evolutionary algorithms (MOEAs) adopt a convergence-first-and-diversity-second environmental selection which prefers nondominated solutions to dominated ones, as is the case with the popular nondominated sorting based selection method. While convergence-first sorting has continuously shown effectiveness for handling a variety of problems, it faces challenges in maintaining population diversity due to the overemphasis of convergence. In this paper, we propose a general diversity-first sorting method for multiobjective optimization. Based on the method, a new MOEA, called DBEA, is then introduced. DBEA is compared with the recently-developed nondominated sorting genetic algorithm III (NSGA-III) on different problems. Experimental studies show that the diversity-first method has great potential for diversity maintenance and is very competitive for many-objective optimization.

Shouyong Jiang, Shengxiang Yang
Tunnelling Crossover Networks for the Asymmetric TSP

Local optima networks are a compact representation of fitness landscapes that can be used for analysis and visualisation. This paper provides the first analysis of the Asymmetric Travelling Salesman Problem using local optima networks. The networks are generated by sampling the search space, recording the progress of an existing evolutionary algorithm based on the Generalised Asymmetric Partition Crossover. They are compared to networks sampled through the Chained Lin-Kernighan heuristic across 25 instances. Structural differences and similarities are identified, as well as examples where crossover smooths the landscape.

Nadarajen Veerapen, Gabriela Ochoa, Renato Tinós, Darrell Whitley
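To make the sampling idea concrete, here is a minimal, hypothetical sketch of extracting a local optima network on a toy bitstring landscape: nodes are local optima found by hill climbing, and a directed edge links two optima when a perturbation ("kick") of one, followed by hill climbing, reaches the other. This mirrors the basin-hopping style of Chained Lin-Kernighan only in spirit; the landscape, operators, and function names are assumptions, not the paper's TSP setup.

```python
import random

def hill_climb(x, fitness):
    """First-improvement hill climbing over single bit flips."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            if fitness(y) > fitness(x):
                x, improved = y, True
                break
    return x

def sample_lon(fitness, n, steps=200, kick=2, seed=0):
    """Sample a local optima network on {0,1}^n.

    Nodes are local optima; a directed edge (a, b) is added when
    perturbing optimum a by flipping `kick` random bits and re-climbing
    leads to a different optimum b.
    """
    rng = random.Random(seed)
    x = hill_climb(tuple(rng.randint(0, 1) for _ in range(n)), fitness)
    nodes, edges = {x}, set()
    for _ in range(steps):
        y = list(x)
        for i in rng.sample(range(n), kick):
            y[i] = 1 - y[i]
        y = hill_climb(tuple(y), fitness)
        nodes.add(y)
        if y != x:
            edges.add((x, y))
        x = y
    return nodes, edges
```

On a unimodal landscape such as OneMax (`fitness=sum`) every climb reaches the single optimum and the network collapses to one node with no edges; on rugged landscapes the node and edge sets grow, and their structure is what the paper analyses for the asymmetric TSP.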

Workshops and Tutorials at PPSN 2016

Frontmatter
The Workshops at PPSN 2016

Workshops have a longstanding tradition at PPSN. This document provides a short description of the workshops that were held at the 2016 edition of the conference. For each workshop we provide a general description of the aim and scope, in addition to the list of accepted papers.

Christian Blum, Christine Zarges
Tutorials at PPSN 2016

PPSN 2016 hosts a total of 16 tutorials covering a broad range of current research in evolutionary computation. The tutorials range from introductory to advanced and specialized, but all can be attended without prior requirements. All PPSN attendees are cordially invited to take this opportunity to learn about ongoing research activities in our field!

Carola Doerr, Nicolas Bredeche, Enrique Alba, Thomas Bartz-Beielstein, Dimo Brockhoff, Benjamin Doerr, Gusz Eiben, Michael G. Epitropakis, Carlos M. Fonseca, Andreia Guerreiro, Evert Haasdijk, Jacqueline Heinerman, Julien Hubert, Per Kristian Lehre, Luigi Malagò, J. J. Merelo, Julian Miller, Boris Naujoks, Pietro Oliveto, Stjepan Picek, Nelishia Pillay, Mike Preuss, Patricia Ryser-Welch, Giovanni Squillero, Jörg Stork, Dirk Sudholt, Alberto Tonda, Darrell Whitley, Martin Zaefferer
Backmatter
Metadata
Title
Parallel Problem Solving from Nature – PPSN XIV
Edited by
Julia Handl
Emma Hart
Peter R. Lewis
Manuel López-Ibáñez
Gabriela Ochoa
Ben Paechter
Copyright Year
2016
Electronic ISBN
978-3-319-45823-6
Print ISBN
978-3-319-45822-9
DOI
https://doi.org/10.1007/978-3-319-45823-6
