
2021 | Book

Applications of Evolutionary Computation

24th International Conference, EvoApplications 2021, Held as Part of EvoStar 2021, Virtual Event, April 7–9, 2021, Proceedings


About this Book

This book constitutes the refereed proceedings of the 24th International Conference on Applications of Evolutionary Computation, EvoApplications 2021, held as part of Evo*2021 as a virtual event in April 2021, co-located with the Evo*2021 events EuroGP, EvoCOP, and EvoMUSART.

The 51 revised full papers presented in this book were carefully reviewed and selected from 78 submissions. The papers cover a wide spectrum of topics: applications of evolutionary computation; applications of deep bioinspired algorithms; soft computing applied to games; machine learning and AI in digital healthcare and personalized medicine; evolutionary computation in image analysis, signal processing and pattern recognition; evolutionary machine learning; parallel and distributed systems; and applications of nature-inspired computing for sustainability and development.

Table of Contents

Frontmatter

Applications of Evolutionary Computation

Frontmatter
On Restricting Real-Valued Genotypes in Evolutionary Algorithms

Real-valued genotypes, together with the variation operators mutation and crossover, constitute some of the fundamental building blocks of Evolutionary Algorithms. Real-valued genotypes are utilized in a broad range of contexts, from weights in Artificial Neural Networks to parameters in robot control systems. Shared between most uses of real-valued genomes is the need for limiting the range of individual parameters to allowable bounds. In this paper we illustrate the challenge of limiting the parameters of real-valued genomes and analyse the most promising method to properly limit these values. We utilize both empirical and benchmark examples to demonstrate the utility of the proposed method, and through a literature review we show how the insight of this paper could impact other research within the field. The proposed method requires minimal intervention from Evolutionary Algorithm practitioners and behaves well under repeated application of variation operators, leading to better theoretical properties as well as significant differences in well-known benchmarks.

Jørgen Nordmoen, Tønnes F. Nygaard, Eivind Samuelsen, Kyrre Glette
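The bound-handling problem this abstract raises can be made concrete with a small sketch. The following is a hedged illustration of two common repair strategies (truncation and reflection) under repeated Gaussian mutation; it is not the specific method analysed in the paper, and all function names here are hypothetical.

```python
import random

def clamp(x, lo, hi):
    # Truncate the value to the allowable bounds.
    return max(lo, min(hi, x))

def reflect(x, lo, hi):
    # Mirror the value back into the bounds; the loop handles
    # overshoots larger than the interval width.
    while x < lo or x > hi:
        x = lo + (lo - x) if x < lo else hi - (x - hi)
    return x

def mutate(genome, sigma, lo, hi, repair):
    # Gaussian mutation followed by a bound-handling repair step.
    return [repair(g + random.gauss(0.0, sigma), lo, hi) for g in genome]

random.seed(42)
genome = [0.5] * 5
for _ in range(100):  # repeated application of the variation operator
    genome = mutate(genome, 0.3, 0.0, 1.0, reflect)
assert all(0.0 <= g <= 1.0 for g in genome)
```

The choice of repair matters under repeated application: truncation concentrates values on the boundary, while reflection keeps them in the interior, which is one reason bound handling deserves the scrutiny the paper gives it.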
Towards Explainable Exploratory Landscape Analysis: Extreme Feature Selection for Classifying BBOB Functions

Facilitated by the recent advances of Machine Learning (ML), the automated design of optimization heuristics is currently shaking up evolutionary computation (EC). Where the design of hand-picked guidelines for choosing a most suitable heuristic has long dominated research activities in the field, automatically trained heuristics are now seen to outperform human-derived choices even for well-researched optimization tasks. ML-based EC is therefore no longer a futuristic vision, but has become an integral part of our community. A key criticism that ML-based heuristics are often faced with is their potential lack of explainability, which may hinder future developments. This applies in particular to supervised learning techniques which extrapolate algorithms’ performance based on exploratory landscape analysis (ELA). In such applications, it is not uncommon to use dozens of problem features to build the models underlying the specific algorithm selection or configuration task. Our goal in this work is to analyze whether this many features are indeed needed. Using the classification of the BBOB test functions as testbed, we show that a surprisingly small number of features – often less than four – can suffice to achieve 98% accuracy. Interestingly, the number of features required to meet this threshold is found to decrease with the problem dimension. We show that the classification accuracy transfers to settings in which several instances are involved in training and testing. In the leave-one-instance-out setting, however, classification accuracy drops significantly, and the transformation-invariance of the features becomes a decisive success factor.

Quentin Renau, Johann Dreo, Carola Doerr, Benjamin Doerr
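As a toy illustration of the kind of extreme feature selection this abstract refers to, the sketch below greedily adds the feature that most improves a leave-one-out 1-NN classification. The data, the scorer, and all names are hypothetical stand-ins, not the paper's ELA features or its classifier.

```python
import numpy as np

def knn_accuracy(X, y, idx):
    # Leave-one-out 1-NN accuracy using only the selected feature columns.
    Xs = X[:, idx]
    correct = 0
    for i in range(len(y)):
        d = np.linalg.norm(Xs - Xs[i], axis=1)
        d[i] = np.inf                       # exclude the point itself
        correct += y[np.argmin(d)] == y[i]
    return correct / len(y)

def forward_select(X, y, max_features):
    # Greedy forward selection: repeatedly add the best-scoring feature.
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        _, best_j = max((knn_accuracy(X, y, selected + [j]), j)
                        for j in remaining)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# Toy data: the class is determined entirely by feature 0; 1 and 2 are noise.
rng = np.random.default_rng(0)
y = np.repeat([0, 1], 20)
X = np.column_stack([y + rng.normal(0, 0.1, 40),
                     rng.normal(0, 1, 40),
                     rng.normal(0, 1, 40)])
assert forward_select(X, y, 2)[0] == 0      # the informative feature is found first
```

The same mechanism, with a stronger classifier and real landscape features, is what makes it possible to ask how few features are actually needed.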
Co-optimising Robot Morphology and Controller in a Simulated Open-Ended Environment

Designing robots by hand can be costly and time consuming, especially if the robots have to be created with novel materials, or be robust to internal or external changes. In order to create robots automatically, without the need for human intervention, it is necessary to optimise both the behaviour and the body design of the robot. However, when co-optimising the morphology and controller of a locomoting agent the morphology tends to converge prematurely, reaching a local optimum. Approaches such as explicit protection of morphological innovation have been used to reduce this problem, but it might also be possible to increase exploration of morphologies using a more indirect approach. We explore how changing the environment, where the agent locomotes, affects the convergence of morphologies. The agents’ morphologies and controllers are co-optimised, while the environments the agents locomote in are evolved open-endedly with the Paired Open-Ended Trailblazer (POET). We compare the diversity, fitness and robustness of agents evolving in environments generated by POET to agents evolved in handcrafted curricula of environments. Each of our agents contains a population of individuals evolved with a genetic algorithm; this population is called the agent-population. We show that agent-populations evolving in open-endedly evolving environments exhibit larger morphological diversity than agent-populations evolving in handcrafted curricula of environments. POET proved capable of creating a curriculum of environments which encouraged both diversity and quality in the populations. This suggests that POET may be capable of reducing premature convergence in co-optimisation of morphology and controllers.

Emma Hjellbrekke Stensby, Kai Olav Ellefsen, Kyrre Glette
Multi-objective Workforce Allocation in Construction Projects

Managing construction projects is a complex, resource-intensive and risky task that involves the organization and management of people skilled in the design and completion of construction projects. Embarking on a construction project means planning the allocation of resources and labour, while ensuring that the output (e.g. a new building) meets a certain quality, and is delivered in time and within budget without breaching contractual obligations. We formulate a simplified version of this task as a constrained multi-objective optimization problem, and then use a non-dominated sorting genetic algorithm to tackle the problem. In addition to providing a formal definition of the problem, further contributions of this work include the validation of the methodology using real data of construction projects varying in scale and resource-utilisation; the use of real data is scarce in the construction project management area. We also perform a scenario-based analysis to understand how the approach reacts to changing environmental parameters (such as availability of resources). Finally, we discuss practical implications. Our empirical analysis highlights that the proposed approach significantly improves on project budget, quality, and duration targets when compared with the industry standard.

Andrew Iskandar, Richard Allmendinger
Generating Duplex Routes for Robust Bus Transport Network by Improved Multi-objective Evolutionary Algorithm Based on Decomposition

This paper proposes a duplex route generation method to evolve a bus route network that is robust to environmental changes, and investigates its effectiveness through experiments. In this study, a “duplex route” is an alternative route; its advantage is that the route network need not be modified when the environment changes. To generate the duplex routes, this study employs MOEA/D as the base optimization method and introduces the following two operations into MOEA/D to increase the number of duplex routes while improving the fitness: (1) a crossover operation to generate duplex routes, improved from the crossover operation in SEAMO2 [9] that evolves unique routes, and (2) the priority solution update operation of the enhanced MOEA/D [4] to maintain a diversity of routes, which contributes to improving the fitness. The experiments on Mandl’s benchmark problem have revealed that: (1) the proposed crossover operation generates more duplex routes than the original crossover operation; (2) the priority solution update operation improves the fitness, i.e., minimizes the passenger transportation time and the number of buses; and (3) integrating the two operations improves both the number of duplex routes and the fitness, which is hard to achieve with either operation alone.

Sho Kajihara, Hiroyuki Sato, Keiki Takadama
Combining Multi-objective Evolutionary Algorithms with Deep Generative Models Towards Focused Molecular Design

Recent advances in applying deep generative learning to molecular design have led to a large number of novel approaches to the targeted generation of molecules towards specific features and applications. In this work, we expand on the latent space navigation approach, where molecules are optimized by operating in their latent representation inside a deep auto-encoder, by introducing multi-objective evolutionary algorithms (MOEAs), and benchmarking the proposed framework on several objectives from recent literature. Using several case studies from literature, we show that our proposed method is capable of controlling abstract chemical properties, is competitive with other state-of-the-art methods and can perform relevant tasks such as optimizing a predefined molecule while maintaining a similarity threshold. Also, MOEAs allow the generation of molecules with a good level of diversity, which is a desired feature.

Tiago Sousa, João Correia, Vitor Pereira, Miguel Rocha
A Multi-objective Evolutionary Algorithm Approach for Optimizing Part Quality Aware Assembly Job Shop Scheduling Problems

Motivated by a real-world application, we consider an Assembly Job Shop Scheduling Problem (AJSSP), with three objectives: product quality, product quantity, and first product lead time. Using real-world inspection data, we demonstrate the ability to model product quality transformations during assembly jobs via genetic programming by considering the quality attributes of subparts. We investigate integrating quality transformation models into an AJSSP. Through the use of the de facto standard multi-objective evolutionary algorithm, NSGA-II, and a novel genotype to handle the constraints, we describe an evolutionary approach to optimizing all stated objectives. This approach is empirically shown to outperform random search and hill climbing in both performance and usability metrics expected to be valuable to administrators involved in plant scheduling and operations.

Michael H. Prince, Kristian DeHaan, Daniel R. Tauritz
Evolutionary Grain-Mixing to Improve Profitability in Farming Winter Wheat

This paper focuses on adapting and applying a genetic algorithm (GA) and differential evolution (DE) to solve the grain (wheat) mixing problem. The proposed algorithms explore a search space that aims at finding a quality mixing of wheat from grain bins that produce the maximum profit at a grain elevator. The experimental results demonstrate that mixing bins provide more profit than not mixing, and that the evolutionary approaches lead to consistently higher profits than the non-evolutionary methods.

Md Asaduzzaman Noor, John W. Sheppard
Automatic Modular Design of Behavior Trees for Robot Swarms with Communication Capabilities

In this work, we develop a set of behavioral and conditional modules for use with behavior trees. We present AutoMoDe-Cedrata, an automatic modular design method that automatically assembles and fine-tunes these modules into behavior trees that control robot swarms. We test Cedrata on three missions and, to gain further insights into its effectiveness, we design control software for the same missions using AutoMoDe-Maple, another automatic design method, and via a group of human designers. Results show that the proposed modules allow for well-performing behavior trees. Yet, Cedrata had difficulties automatically generating control software that performs as well as that produced by human designers, especially when communication is involved.

Jonas Kuckling, Vincent van Pelt, Mauro Birattari
Salp Swarm Optimization Search Based Feature Selection for Enhanced Phishing Websites Detection

Internet-connected devices are increasing rapidly. This facilitates transferring most of the real-world transactions to the cyber world. It follows that eCrime is growing continuously. Phishing is a cyber-attack carried out by intruders. They aim to deceive the users of the Internet to achieve their malicious goals. Therefore, experts have developed different approaches to protect financial transactions and personal login information of the users. Their primary concern is to detect the security breaches for online use of the Internet channels (e.g. emails, SMS, webpages, and social platforms). In this paper, we propose a new phishing detection system based on the Salp Swarm Algorithm (SSA). The main objective is to maximize the classification performance and minimize the number of features of the phishing system. Different transfer function (TF) families: S-TFs, V-TFs, X-TFs, U-TFs, and Z-TFs are used to convert the continuous SSA into binary. Sixteen different binary versions of the SSA algorithm are produced based on different TFs. A comparison analysis is performed to select the best binarization method. The phishing system is validated by comparing it with three state-of-the-art algorithms. The results show that BSSA with X-TFs achieved the best results in terms of the used evaluation measures.

Ruba Abu Khurma, Khair Eddin Sabri, Pedro A. Castillo, Ibrahim Aljarah
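The binarization idea above can be sketched with two standard transfer functions: an S-shaped (sigmoid) TF and a V-shaped TF mapping a continuous position to a binary feature mask. These are textbook definitions for illustration only; the exact TFs used by the sixteen BSSA variants in the paper may differ.

```python
import math
import random

def s_tf(x):
    # S-shaped (sigmoid) transfer function: maps a real value to (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def v_tf(x):
    # V-shaped transfer function: probability grows with distance from zero.
    return abs(math.tanh(x))

def binarize(position, tf, rng):
    # Keep feature i with probability tf(position[i]).
    return [1 if rng.random() < tf(x) else 0 for x in position]

rng = random.Random(1)
position = [2.5, -2.5, 0.0, 4.0]   # a continuous salp position over 4 features
mask = binarize(position, s_tf, rng)
assert all(b in (0, 1) for b in mask)
```

A fitness function can then score each mask by classification accuracy minus a penalty on the number of selected features, which matches the two stated objectives.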
Real Time Optimisation of Traffic Signals to Prioritise Public Transport

This paper examines the optimisation of traffic signals to prioritise public transportation (buses) in real time. A novel representation for the traffic signal prioritisation problem is introduced. The novel representation is used within an evolutionary algorithm that supports safe solutions which comply with real-world traffic signal constraints. The proposed system finds near-optimal solutions in around 20 s, enabling real-time optimisation. The authors examine a specific junction in Hamburg, Germany; based on real-world traffic data, a variety of problem scenarios ranging from low to exceptional traffic saturation are generated. In collaboration with domain experts, a fitness function is defined to reduce the journey time of a bus while maintaining an overall stable traffic system. Candidate solutions are evaluated using the microscopic traffic simulator SUMO, allowing for precise optimisation and addressing of the flow prediction problem. The results show good scaling of the proposed system, with more significant improvements in more congested scenarios. Given the results, future research on bigger and multiple road junctions is motivated. This work contributes to the field in four ways. Firstly, by defining a real-world problem containing the actual intersection layout and traffic signal parameters. Secondly, by presenting a software design that integrates highly efficient SUMO simulations into an evolutionary algorithm. Thirdly, by introducing a novel representation that allows unconventional solutions while ensuring compliance with traffic signal regulations at all times. Lastly, by testing the suggested approach on various problem scenarios of the real-world problem.

Milan Wittpohl, Per-Arno Plötz, Neil Urquhart
Adaptive Covariance Pattern Search

Pattern search is a family of single solution deterministic optimisation algorithms for numerical optimisation. Pattern search algorithms generate a new candidate solution by means of an archive of potential moves, named pattern. This pattern is generated by a basis of vectors that span the domain where the function to optimise is defined. The present article proposes an adaptive implementation of pattern search that performs, at run-time, a fitness landscape analysis of the problem to determine the pattern and adapt it to the geometry of the problem. The proposed algorithm, called Adaptive Covariance Pattern Search (ACPS), uses at the beginning the fundamental orthonormal basis (directions of the variables) to build the pattern. Subsequently, ACPS saves the successful visited solutions, calculates the covariance matrix associated with these samples, and then uses the eigenvectors of this covariance matrix to build the pattern. ACPS is a restarting algorithm that at each restart recalculates the pattern that progressively adapts to the problem to optimise. Numerical results show that the proposed ACPS appears to be a promising approach on various problems and dimensions.

Ferrante Neri
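The covariance-based pattern construction can be sketched as follows. This is a schematic of the general idea only (eigenvectors of the covariance of successful samples used as search directions), not the ACPS implementation itself; names and step sizes are illustrative.

```python
import numpy as np

def pattern_from_archive(archive, step):
    # Eigenvectors of the covariance of successful solutions define the
    # search directions; the pattern is +/- step along each eigenvector.
    samples = np.asarray(archive)
    cov = np.cov(samples, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)       # columns are orthonormal directions
    moves = []
    for v in eigvecs.T:
        moves.append(step * v)
        moves.append(-step * v)
    return moves

# Archive of successful solutions lying roughly along the diagonal y = x,
# mimicking a problem whose promising region is not axis-aligned.
rng = np.random.default_rng(3)
t = rng.normal(0, 1, 50)
archive = np.column_stack([t, t + rng.normal(0, 0.05, 50)])
moves = pattern_from_archive(archive, step=0.1)
assert len(moves) == 4                     # 2 dimensions -> 4 pattern moves
```

On this toy archive, the dominant eigenvector aligns with the diagonal, so the pattern "rotates" towards the geometry of the samples instead of staying on the coordinate axes, which is exactly the adaptation the abstract describes.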
Evaluating the Success-History Based Adaptive Differential Evolution in the Protein Structure Prediction Problem

Proteins are vital macro-molecules for every living organism. As the experimental determination of protein structures is costly and time-consuming, computational methods became an interesting way to predict proteins’ shape based on their amino acid sequence. Metaheuristics have been employed in the protein structure prediction (PSP) problem through the years, with different characteristics and different knowledge sources. However, these methods are heavily dependent on parameter tuning, where wrong parameters might cause poor performance. Recently, adaptive strategies were proposed to deal with parameter tuning’s non-trivial task, leaving the algorithm to choose its parameters for each optimization step. Although adaptive metaheuristics are widely applied to benchmark problems, only a few were tested in the PSP problem. To contribute to the analysis of adaptive metaheuristics in the PSP problem, we explore in this work the capability of one of the CEC’14 winners, the Success-History based Adaptive Differential Evolution (SHADE) algorithm, on the tertiary protein structure prediction problem. We tested the SHADE algorithm on eight different proteins and compared it to two classical non-adaptive differential evolution variants and the well-known self-adaptive differential evolution. Moreover, we enhanced SHADE with domain knowledge from APL. Our results enlarge the research body in adaptive methods for the PSP problem, showing that SHADE is better than non-adaptive differential evolution approaches and competitive compared to self-adaptive differential evolution and related works.

Pedro Henrique Narloch, Márcio Dorn
Beyond Body Shape and Brain: Evolving the Sensory Apparatus of Voxel-Based Soft Robots

Biological and artificial embodied agents behave by acquiring information through sensors, processing that information, and acting on the environment. The sensory apparatus, i.e., the location on the body of the sensors and the kind of information the sensors are able to capture, has a great impact on the agent’s ability to exhibit complex behaviors. While in nature the sensory apparatus is the result of a long-lasting evolution, in artificial agents (robots) it is usually the result of a design choice. However, when the agents are complex and the design space is large, making that choice can be hard. In this paper, we explore the possibility of evolving the sensory apparatus of voxel-based soft robots (VSRs), a kind of simulated robots composed of multiple deformable components. VSRs, due to their intrinsic modularity, allow for great freedom in how to shape the robot body, brain, and sensory apparatus. We consider a set of sensors that allow the agent to sense itself and the environment (using vision and touch) and we show, experimentally, that the effectiveness of the sensory apparatus depends on the shape of the body and on the actuation capability, i.e., the VSR strength. Then we show that evolutionary optimization is able to evolve an effective sensory apparatus, even when constraints on the availability of the sensors are posed. By extending the adaptation to the sensory apparatus, beyond the body shape and the brain, we believe that our study takes a step forward on the ambitious path towards self-building robots.

Andrea Ferigo, Giovanni Iacca, Eric Medvet
Desirable Objective Ranges in Preference-Based Evolutionary Multiobjective Optimization

In this paper, we propose a preference-based Evolutionary Multiobjective Optimization algorithm, in which the preferences are given in the form of desirable ranges for the objective functions, i.e. by means of aspiration and reservation levels. The aspiration levels are values to be achieved by the objectives, while the reservation levels are objective values not to be worsened. In the proposed algorithm, the first generations are performed using a set of weight vectors to initially converge to the region of the Pareto optimal front associated with the point formed by the reservation levels. At a certain moment, these weights are updated using the nondominated solutions generated so far, to re-direct the search towards the region which contains the Pareto optimal solutions with objective values within the desirable ranges. To this aim, the remaining generations are run using the updated weight vectors and the point formed by the aspiration levels. The computational experiments show the potential of our proposal on 2, 3 and 5-objective problems, in comparison to other state-of-the-art algorithms.

Sandra González-Gallardo, Rubén Saborido, Ana B. Ruiz, Mariano Luque
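The semantics of aspiration and reservation levels can be illustrated with a minimal sketch (assuming minimization). This shows only how the two levels partition the objective space into desirable, acceptable, and undesirable solutions; it is not the weight-vector update mechanism of the proposed algorithm.

```python
def classify(objectives, aspiration, reservation):
    # Minimization is assumed, so aspiration[i] <= reservation[i].
    # "desirable"  : every objective achieves its aspiration level,
    # "acceptable" : every objective at least meets its reservation level,
    # "undesirable": some objective is worse than its reservation level.
    if all(f <= a for f, a in zip(objectives, aspiration)):
        return "desirable"
    if all(f <= r for f, r in zip(objectives, reservation)):
        return "acceptable"
    return "undesirable"

aspiration = [1.0, 2.0]    # values the decision maker hopes to achieve
reservation = [3.0, 4.0]   # values that must not be worsened

assert classify([0.5, 1.5], aspiration, reservation) == "desirable"
assert classify([2.0, 3.0], aspiration, reservation) == "acceptable"
assert classify([5.0, 1.0], aspiration, reservation) == "undesirable"
```

In the algorithm itself, the reservation point steers the first phase of the search and the aspiration point the second, refined phase, which this partition helps to visualize.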
Improving Search Efficiency and Diversity of Solutions in Multiobjective Binary Optimization by Using Metaheuristics Plus Integer Linear Programming

Metaheuristics for solving multiobjective problems can provide an approximation of the Pareto front in a short time, but can also have difficulties finding feasible solutions in constrained problems. Integer linear programming solvers, on the other hand, are good at finding feasible solutions, but they can require some time to find and guarantee the efficient solutions of the problem. In this work we combine these two ideas to propose a hybrid algorithm mixing an exploration heuristic for multiobjective optimization with integer linear programming to solve multiobjective problems with binary variables and linear constraints. The algorithm has been designed to provide an approximation of the Pareto front that is well-spread throughout the objective space. In order to check the performance, we compare it with three popular metaheuristics using two benchmarks of multiobjective binary constrained problems. The results show that the proposed approach provides better performance than the baseline algorithms in terms of the number of solutions, hypervolume, generational distance, inverted generational distance, and the additive epsilon indicator.

Miguel Ángel Domínguez-Ríos, Francisco Chicano, Enrique Alba
Automated, Explainable Rule Extraction from MAP-Elites Archives

Quality-diversity (QD) algorithms that return a large archive of elite solutions to a problem provide insights into how high-performing solutions are distributed throughout a feature-space defined by a user. They are often described as illuminating the feature-space, providing a qualitative illustration of relationships between features and objective quality. However, if there are 1000s of solutions in an archive, extracting a succinct set of rules that capture these relationships in a quantitative manner is challenging. We propose two methods for the automated generation of rules from data contained in an archive; the first uses Genetic Programming and the second, a rule-induction method known as CN2. Rules are generated from large archives of data produced by running MAP-Elites on an urban logistics problem. A quantitative and qualitative evaluation that includes the end-user demonstrates that the rules are capable of fitting the data, but also highlights some mismatches between the model used by the optimiser and that assumed by the user.

Neil Urquhart, Silke Höhl, Emma Hart

Applications of Deep Bioinspired Algorithms

Frontmatter
EDM-DRL: Toward Stable Reinforcement Learning Through Ensembled Directed Mutation

Deep reinforcement learning (DRL) has experienced tremendous growth in the past few years. However, training stability of agents continues to be an open research question. Here, the authors present Ensembled Directed Mutation of Deep Reinforcement Learning (EDM-DRL), a hybridization of evolutionary computing (EC), ensemble learning, and DRL methods, as a means of mitigating training instability in DRL agents. We show that our method trains more consistently than synchronous Advantage Actor Critic (A2C). We also show that by employing our novel mutation and ensemble methods, performance of DRL agents can be improved during test time without sacrificing training stability. Further, though a similar number of time steps are used, we show that the EDM-DRL algorithm uses a mere 1% or less of the network parameter updates used in A2C. Finally, we conduct an ablation study to identify the components within the EDM-DRL algorithm responsible for the highest contribution. Code and experimental logs are available at: https://github.com/Linked-Liszt/EDM-DRL .

Michael H. Prince, Andrew J. McGehee, Daniel R. Tauritz
Continuous Ant-Based Neural Topology Search

This work introduces a novel, nature-inspired neural architecture search (NAS) algorithm based on ant colony optimization, Continuous Ant-based Neural Topology Search (CANTS), which utilizes synthetic ants that move over a continuous search space based on the density and distribution of pheromones, strongly inspired by how ants move in the real world. The paths taken by the ant agents through the search space are utilized to construct artificial neural networks (ANNs). This continuous search space allows CANTS to automate the design of ANNs of any size, removing a key limitation inherent to many current NAS algorithms that must operate within structures of a size predetermined by the user. CANTS employs a distributed asynchronous strategy which allows it to scale to large-scale high performance computing resources, works with a variety of recurrent memory cell structures, and uses a communal weight-sharing strategy to reduce training time. The proposed procedure is evaluated on three real-world, time series prediction problems in the field of power systems and compared to two state-of-the-art algorithms. Results show that CANTS is able to provide improved or competitive results on all of these problems while also being easier to use, requiring half the number of user-specified hyper-parameters.

AbdElRahman ElSaid, Joshua Karns, Zimeng Lyu, Alexander G. Ororbia, Travis Desell
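The pheromone-driven movement in a continuous space can be caricatured with a small sketch: ants sample a move near an existing pheromone deposit with probability proportional to its strength, so denser regions attract more traffic. This is only the attraction idea in miniature, not the CANTS algorithm; the 2-D space, deposit format, and names are hypothetical.

```python
import math
import random

def sample_move(pheromones, rng, sigma=0.5):
    # pheromones: list of ((x, y), strength) deposits in a continuous space.
    # Pick a deposit with probability proportional to its strength,
    # then perturb around it with Gaussian noise.
    total = sum(s for _, s in pheromones)
    r = rng.random() * total
    for (x, y), s in pheromones:
        r -= s
        if r <= 0:
            return (x + rng.gauss(0.0, sigma), y + rng.gauss(0.0, sigma))
    return pheromones[-1][0]

rng = random.Random(7)
deposits = [((0.0, 0.0), 5.0), ((10.0, 10.0), 1.0)]   # strong vs. weak deposit
moves = [sample_move(deposits, rng) for _ in range(1000)]
near_origin = sum(1 for x, y in moves if math.hypot(x, y) < 5)
assert near_origin > 700   # the stronger deposit attracts most of the moves
```

In CANTS the analogous sampled paths are what get decoded into candidate network topologies; here the sketch stops at the sampling step.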

Soft Computing Applied to Games

Frontmatter
Playing with Dynamic Systems - Battling Swarms in Virtual Reality

In this paper, we present a serious game with the goal of providing an engaging and immersive experience that fosters the players’ understanding of dynamic networked systems. Confronted with attacking swarm networks, the player has to analyse their underlying network topologies and to systematically dismantle the swarms using a set of different weapons. We detail the game design, including the artificial intelligence of the swarm, the play mechanics and the level designs. Finally, we conducted an analysis of the play performances of a test group over the course of the game, which revealed a positive learning outcome.

Johannes Büttner, Christian Merz, Sebastian von Mammen
EvoCraft: A New Challenge for Open-Endedness

This paper introduces EvoCraft, a framework for Minecraft designed to study open-ended algorithms. We introduce an API that provides an open-source Python interface for communicating with Minecraft to place and track blocks. In contrast to previous work in Minecraft that focused on learning to play the game, the grand challenge we pose here is to automatically search for increasingly complex artifacts in an open-ended fashion. Compared to other environments used to study open-endedness, Minecraft allows the construction of almost any kind of structure, including actuated machines with circuits and mechanical components. We present initial baseline results in evolving simple Minecraft creations through both interactive and automated evolution. While evolution succeeds when tasked to grow a structure towards a specific target, it is unable to find a solution when rewarded for creating a simple machine that moves. Thus, EvoCraft offers a challenging new environment for automated search methods (such as evolution) to find complex artifacts that we hope will spur the development of more open-ended algorithms. A Python implementation of the EvoCraft framework is available at: github.com/real-itu/Evocraft-py .

Djordje Grbic, Rasmus Berg Palm, Elias Najarro, Claire Glanois, Sebastian Risi
A Profile-Based ‘GrEvolutionary’ Hearthstone Agent

In the last few years, the Hearthstone AI international Competition has been gaining fame among the scientific community. Several different entries have been presented using varied approaches. One of the best, EVA, was based on a Greedy approach combined with an Evolutionary Algorithm. However, almost all the proposals were designed to work in a general way, i.e. for any of the possible heroes. This generalisation presents a flaw, since the exclusive cards per hero are not really exploited, nor their potential different behaviour profiles. This paper follows a similar philosophy to EVA, also hybridizing Greedy + Evolutionary algorithms, but with three different archetypes or profiles in mind that are widespread in the community: Aggro, Control and Midrange. Thus, three different behaviours have been optimized, aiming to create a more specialized agent able to use an Artificial Intelligence engine depending on the hero to play with. To prove the value of the approach, several experiments have been conducted, comparing the evolved agents with EVA in many different matches using three different heroes. The results show an improvement over EVA for the three profile-based agents, as well as an excellent performance when combined with a MonteCarlo Tree Search approach.

Alejandro Romero García, Antonio M. Mora García

Machine Learning and AI in Digital Healthcare and Personalized Medicine

Frontmatter
Modelling Asthma Patients’ Responsiveness to Treatment Using Feature Selection and Evolutionary Computation

For several medical treatments, it is possible to observe transcriptional variations in gene expressions between responders and non-responders. Modelling the correlation between such variations and the patient’s response to drugs as a system of Ordinary Differential Equations could be invaluable to improve the efficacy of treatments and would represent an important step towards personalized medicine. Two main obstacles lie on this path: (i) the number of genes is too large to straightforwardly analyze their interactions; (ii) defining the correct parameters for the mathematical models of gene interaction is a complex optimization problem, even when a limited number of genes is involved. In this paper, we propose a novel approach to creating mathematical models able to explain patients’ response to treatment from transcriptional variations. The approach is based on: (i) a feature selection algorithm, set to identify a minimal set of gene expressions that are highly correlated with treatment outcome, (ii) a state-of-the-art evolutionary optimizer, Covariance Matrix Adaptation Evolution Strategy, applied to finding the parameters of the mathematical model characterizing the relationship between gene expressions and patient responsiveness. The proposed methodology is tested on real-world data describing responsiveness of asthma patients to Omalizumab, a humanized monoclonal antibody that binds to immunoglobulin E. In this case study, the presented approach is shown able to identify 5 genes (out of 28,402) that are transcriptionally relevant to predict treatment outcomes, and to deliver a compact mathematical model that is able to explain the interaction between the different genes involved.

Alejandro Lopez-Rincon, Daphne S. Roozendaal, Hilde M. Spierenburg, Asta L. Holm, Renee Metcalf, Paula Perez-Pardo, Aletta D. Kraneveld, Alberto Tonda
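The feature-selection step can be illustrated with a minimal sketch that ranks gene-expression columns by absolute Pearson correlation with the treatment outcome and keeps the k strongest; the paper's actual selection algorithm may differ, and the function names here are assumptions.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length, non-constant sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def top_k_by_correlation(expressions, outcome, k=5):
    """Rank feature columns by |correlation| with the outcome, keep top k."""
    cols = list(zip(*expressions))
    ranked = sorted(range(len(cols)),
                    key=lambda j: abs(pearson(cols[j], outcome)),
                    reverse=True)
    return ranked[:k]
```

The selected columns would then feed the CMA-ES parameter fit of the ODE model.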
Bayesian Networks for Mood Prediction Using Unobtrusive Ecological Momentary Assessments

Depression affects an estimated 300 million people around the globe. Early detection of depression and associated mental health problems constitutes one of the best prevention methods when trying to reduce the disease’s incidence. Information collected by tracking smartphone use behaviour and using ecological momentary assessments (EMA) can be used together with machine learning techniques to identify patterns indicative of depression and predict its appearance, contributing in this way to its early detection. However, many of these techniques fail to identify the importance of, and the relationships between, the factors used to reach their prediction outcome. In this paper, we propose the use of Bayesian networks (BN) as a tool to analyse and model data collected using EMA and smartphone-measured behaviours. We compare the performance of BN against results obtained using support vector regression and random forest. The comparison is done in terms of efficacy, efficiency, and insight. Results show that no significant difference in efficacy was found between the models. However, BN presented clear advantages in terms of efficiency and insight given its probability factorization, graphical representation and ability to infer under uncertainty.

Margarita Rebolledo, A. E. Eiben, Thomas Bartz-Beielstein
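The probability factorization and inference under uncertainty that give BN their interpretability can be sketched with a toy two-node network; the variables and probabilities below are hypothetical and far simpler than the models in the paper.

```python
# Toy two-node Bayesian network: Behaviour -> Mood (hypothetical numbers).
p_b = {"low_activity": 0.3, "normal": 0.7}
p_m_given_b = {
    "low_activity": {"depressed": 0.6, "ok": 0.4},
    "normal":       {"depressed": 0.1, "ok": 0.9},
}

def p_mood(m):
    """Marginal P(Mood = m), obtained by summing out Behaviour."""
    return sum(p_b[b] * p_m_given_b[b][m] for b in p_b)

def p_b_given_m(b, m):
    """Posterior P(Behaviour = b | Mood = m), by Bayes' rule."""
    return p_b[b] * p_m_given_b[b][m] / p_mood(m)
```

The same factorization pattern scales to the full network: each node stores only a conditional table given its parents, which is what makes the resulting model inspectable.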
A Multi-objective Multi-type Facility Location Problem for the Delivery of Personalised Medicine

Advances in personalised medicine targeting specific sub-populations and individuals pose a challenge to the traditional pharmaceutical industry. With a higher level of personalisation, an already critical supply chain is facing additional demands added by the very sensitive nature of its products. Nevertheless, studies concerned with the efficient development and delivery of these products are scarce. Thus, this paper presents the case of personalised medicine and the challenges imposed by its mass delivery. We propose a multi-objective mathematical model for the location-allocation problem with two interdependent facility types in the case of personalised medicine products. We show its practical application through a cell and gene therapy case study. A multi-objective genetic algorithm with a novel population initialisation procedure is used as solution method.

Andreea Avramescu, Richard Allmendinger, Manuel López-Ibáñez

Evolutionary Computation in Image Analysis, Signal Processing and Pattern Recognition

Frontmatter
RDE-OP: A Region-Based Differential Evolution Algorithm Incorporating Opposition-Based Learning for Optimising the Learning Process of Multi-layer Neural Networks

Learning in multi-layer neural networks (MLNNs) involves finding appropriate weights and biases and is a challenging and important task, since the performance of MLNNs is directly dependent on the weights. Conventional algorithms such as back-propagation suffer from difficulties including a tendency to get stuck in local optima. Population-based metaheuristic algorithms can be used to address these issues. In this paper, we propose a novel learning approach, RDE-OP, based on differential evolution (DE) boosted by a region-based scheme and an opposition-based learning strategy. DE is a population-based metaheuristic algorithm which has shown good performance in solving optimisation problems. Our approach integrates two effective concepts with DE. First, we use a clustering algorithm to find regions in the search space and select the cluster centres to represent them. Then, an updating scheme is proposed to include the clusters in the current population. In the next step, our proposed algorithm employs a quasi-opposition-based learning strategy for improved exploration of the search space. Experimental results on different datasets, and in comparison with both conventional and population-based approaches, convincingly indicate the excellent performance of RDE-OP.

Seyed Jalaleddin Mousavirad, Gerald Schaefer, Iakov Korovin, Diego Oliva
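The quasi-opposition idea the abstract mentions can be sketched in a few lines: for a candidate value x in [a, b], the quasi-opposite point is drawn uniformly between the interval centre and the opposite point a + b − x. This is a generic sketch of the mechanism, not the authors' code.

```python
import random

def quasi_opposite(x, a, b, rng=random):
    """Quasi-opposite of x in [a, b]: a uniform point between the interval
    centre (a + b) / 2 and the opposite point a + b - x."""
    centre = (a + b) / 2.0
    opposite = a + b - x
    lo, hi = min(centre, opposite), max(centre, opposite)
    return rng.uniform(lo, hi)
```

Generating quasi-opposites of current candidates gives the population a cheap second look at the unexplored half of the search range.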
Estimation of Grain-Level Residual Stresses in a Quenched Cylindrical Sample of Aluminum Alloy AA5083 Using Genetic Programming

Residual stresses originate during the manufacturing processes of metallic materials, so their study is important to avoid catastrophic accidents during component service. There are two main types of residual stresses according to the length scale: macroscopic and microscopic. While the determination of the macroscopic ones is almost a routine analysis, determining the microscopic stress of individual grains remains a pending task. In this paper, we present an approach using genetic programming to obtain the micro residual stresses in grains of a quenched cylindrical sample of aluminium alloy AA5083. The microstructure of this alloy is formed by grains with different orientations and stresses. To obtain the stress of each grain, we estimate the values of the micro residual stresses for each crystallographic orientation using information from neutron and electron back-scattered diffraction experiments. This information includes orientation maps of a section normal to the cylinder axis (individual orientations) and the dimensions of each grain. We assume that the micro residual stresses of each grain can be expressed as a function of these variables and use genetic programming to find this expression.

Laura Millán, Gabriel Kronberger, J. Ignacio Hidalgo, Ricardo Fernández, Oscar Garnica, Gaspar González-Doncel
EDA-Based Optimization of Blow-Off Valve Positions for Centrifugal Compressor Systems

Designing actuators is an important part of automation technology and indispensable for the operation of plants in the process industry. This also applies to valves, which are important actuators of compressor systems. Compressor systems have a high degree of complexity due to the interconnection of many different components. Often, simulation environments are used to test already designed actuators. In this study, we show how a digital twin of a compressor system, in combination with an Estimation of Distribution Algorithm (EDA), can be used to facilitate valve design. In addition, the installation position in the plant is determined in order to achieve a desired operating behaviour during an emergency shutdown.

Jacob Spindler, Rico Schulze, Kevin Schleifer, Hendrik Richter
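A minimal continuous EDA in the spirit of the one described can be sketched as a univariate-Gaussian UMDAc loop: fit an independent Gaussian per variable to the elite of each generation and sample the next population from it. This is an illustrative sketch, not the authors' implementation, and the objective here stands in for the digital-twin simulation.

```python
import random
import statistics

def umda_gaussian(f, dim, pop=50, elite=10, gens=30, seed=0):
    """Minimal continuous EDA (UMDAc): each generation, fit an independent
    Gaussian per variable to the elite individuals and resample from it."""
    rng = random.Random(seed)
    mu, sigma = [0.0] * dim, [1.0] * dim
    best = None
    for _ in range(gens):
        popn = [[rng.gauss(mu[i], sigma[i]) for i in range(dim)]
                for _ in range(pop)]
        popn.sort(key=f)                      # minimisation
        if best is None or f(popn[0]) < f(best):
            best = popn[0]
        elites = popn[:elite]
        for i in range(dim):
            col = [x[i] for x in elites]
            mu[i] = sum(col) / elite
            sigma[i] = max(statistics.pstdev(col), 1e-6)  # avoid collapse
    return best
```

In the valve-design setting, `f` would score a candidate valve parameterisation and installation position by running the shutdown scenario on the digital twin.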
3D-2D Registration Using X-Ray Simulation and CMA-ES

Radiographs of the hand are useful in diagnosing and staging diseases such as rheumatoid arthritis (RA) and other musculoskeletal diseases. Radiographs are projections of the 3D anatomy, with the useful information such as pose and pathology becoming lost in the process. We propose a 3D hand pose recovery method for radiographs of hands using a novel hybrid image registration method. Our pose recovery pipeline consists of aligning a simulated X-ray (digitally reconstructed radiograph) of an articulated phantom mesh model to a real hand radiograph using Covariance Matrix Adaptation Evolution Strategy. Early results demonstrate that our approach works well. Further inquiry is required to evaluate the applicability of our registration approach to other articulated musculoskeletal anatomy.

Tianci Wen, Radu P. Mihail, Franck P. Vidal
Lateralized Approach for Robustness Against Attacks in Emotion Categorization from Images

Deep learning has achieved high classification accuracy on image classification tasks, including emotion categorization. However, deep learning models are highly vulnerable to adversarial attacks. Even a small change, imperceptible to a human (e.g. a one-pixel attack), can decrease the classification accuracy of deep models. One reason could be that their homogeneous representation of knowledge, which considers all pixels in an image to be equally important, is easily fooled. Enabling multiple representations of the same object, e.g. at the constituent and holistic viewpoints, provides robustness against attacks on a single view. This heterogeneity is provided by lateralization in biological systems. The lateral asymmetry of biological intelligence suggests heterogeneous learning of objects. This heterogeneity allows information to be learned at different levels of abstraction, i.e. at the constituent and the holistic level, enabling multiple representations of the same object. This work aims to create a novel system that can consider heterogeneous features, e.g. mouth, eyes, nose, and jaw in a face image, for emotion categorization. The experimental results show that the lateralized system successfully considers constituent and holistic features to exhibit robustness to changes that are unimportant and irrelevant to the emotion in an image, demonstrating performance accuracy better than (or similar to) the deep learning system (VGG19). Overall, the novel lateralized method shows a stronger resistance to changes (10.86–47.72% decrease) than the deep model (25.15–83.43% decrease). The advances arise from allowing heterogeneous features, which enable constituent and holistic representations of image components.

Harisu Abdullahi Shehu, Abubakar Siddique, Will N. Browne, Hedwig Eisenbarth

Evolutionary Machine Learning

Frontmatter
Improved Crowding Distance in Multi-objective Optimization for Feature Selection in Classification

Feature selection is an essential preprocessing step in data mining and machine learning. A feature selection task can be treated as a multi-objective optimization problem which simultaneously minimizes the classification error and the number of selected features. Many existing feature selection approaches, including multi-objective methods, neglect that there exist multiple optimal solutions in feature selection: there can be multiple different optimal feature subsets which achieve the same or similar classification performance. Furthermore, when using evolutionary multi-objective optimization for feature selection, a crowding distance metric is typically used in environmental selection. However, some existing calculations of crowding metrics based on continuous/numeric values are inappropriate for feature selection, since the search space of feature selection is discrete. Therefore, this paper proposes a new environmental selection method that modifies the calculation of crowding metrics. The proposed approach is expected to help a multi-objective feature selection algorithm find multiple potential optimal feature subsets. Experiments on sixteen datasets of varying difficulty show that the proposed approach can find more diverse feature subsets achieving the same classification performance, without deteriorating performance regarding hypervolume and inverted generational distance.

Peng Wang, Bing Xue, Jing Liang, Mengjie Zhang
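For reference, the standard NSGA-II crowding distance that the paper's environmental selection modifies can be computed as follows; this is the textbook formulation over a list of objective vectors, not the modified metric proposed in the paper.

```python
def crowding_distance(front):
    """Standard NSGA-II crowding distance for a list of objective vectors;
    boundary solutions get infinite distance."""
    n = len(front)
    if n == 0:
        return []
    dist = [0.0] * n
    for obj in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][obj])
        dist[order[0]] = dist[order[-1]] = float("inf")
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        if hi == lo:
            continue                      # degenerate objective, skip
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][obj]
                               - front[order[k - 1]][obj]) / (hi - lo)
    return dist
```

The paper's observation is that such continuous-space distances can mislead selection when the genotypes are discrete feature subsets, motivating a discrete-aware replacement.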
Deep Optimisation: Multi-scale Evolution by Inducing and Searching in Deep Representations

The ability of evolutionary processes to innovate and scale up over long periods of time, observed in nature, remains a central mystery in evolutionary biology, and a challenge for algorithm designers to emulate and explain in evolutionary computation (EC). The Major Transitions in Evolution is a compelling theory that explains evolvability through a multi-scale process whereby individuality (and hence selection and variation) is continually revised by the formation of associations between formerly independent entities, a process still not fully explored in EC. Deep Optimisation (DO) is a new type of model-building optimization algorithm (MBOA) that exploits deep learning methods to enable multi-scale optimization. DO uses an autoencoder model to induce a multi-level representation of solutions, capturing the relationships between the lower-level units that contribute to the quality of a solution. Variation and selection are then performed within the induced representations, causing model-informed changes to multiple solution variables simultaneously. Here, we first show that DO has impressive performance compared with other leading MBOAs (and other rival methods) on multiple knapsack problems, a standard combinatorial optimization problem of general interest. Going deeper, we then carry out a detailed investigation to understand the differences between DO and other MBOAs, identifying key problem characteristics where other MBOAs are afflicted by exponential running times, and DO is not. This study serves to concretize our understanding of the Major Transitions theory, and why that leads to evolvability, and also provides a strong motivation for further investigation of deep learning methods in optimization.

Jamie Caldwell, Joshua Knowles, Christoph Thies, Filip Kubacki, Richard Watson
Evolutionary Planning in Latent Space

Planning is a powerful approach to reinforcement learning with several desirable properties such as sampling efficiency. However, it requires a world model, which is not readily available in many real-life problems. In this paper, we propose to learn a world model that enables Evolutionary Planning in Latent Space (EPLS). We use a Variational Auto Encoder (VAE) to learn a compressed latent representation of individual observations and extend a Mixture Density Recurrent Neural Network (MDRNN) to learn a stochastic, multi-modal forward model of the world used for planning. We use the Random Mutation Hill Climbing (RMHC) algorithm to find a sequence of actions that maximize expected reward in this learned model of the world. We demonstrate how to build a world model by bootstrapping it with rollouts from a random policy and iteratively refining it with rollouts from an increasingly accurate planning policy using the learned world model. After a few iterations, our planning agents exceed standard model-free reinforcement learning approaches, which demonstrates the viability of our approach. Code to reproduce the experiments is available at https://github.com/two2tee/WorldModelPlanning and videos at https://youtu.be/3M39QgeF27U .

Thor V. A. N. Olesen, Dennis T. T. Nguyen, Rasmus B. Palm, Sebastian Risi
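The planning step can be sketched as Random Mutation Hill Climbing over a fixed-length action sequence. In EPLS, `score` would be the expected reward of the sequence under the learned latent world model; the generic stand-in below is an assumption for illustration.

```python
import random

def rmhc(score, horizon, n_actions, iters=200, seed=0):
    """Random Mutation Hill Climbing over a fixed-length action sequence:
    mutate one action at random and keep the mutant if it scores as well."""
    rng = random.Random(seed)
    plan = [rng.randrange(n_actions) for _ in range(horizon)]
    best = score(plan)
    for _ in range(iters):
        cand = list(plan)
        cand[rng.randrange(horizon)] = rng.randrange(n_actions)
        s = score(cand)
        if s >= best:                 # accept equal or better mutants
            plan, best = cand, s
    return plan, best
```

Because each iteration needs only one model rollout, this kind of planner pairs naturally with a cheap learned forward model.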
Utilizing the Untapped Potential of Indirect Encoding for Neural Networks with Meta Learning

Indirect encoding is a promising area of research in machine learning and evolutionary computation; however, it is rarely able to achieve performance on par with state-of-the-art directly encoded methods. One of the most important properties of indirect encoding is the ability to control exploration during learning by transforming random genotypic variation into an arbitrary distribution of phenotypic variation. This gives indirect encoding a capacity to learn to be adaptable in a way which is not possible for direct encoding. However, during normal objective-based learning, there is no direct selection for adaptability, which results not only in a missed opportunity to improve the ability to learn, but often in degrading it too. The recent meta learning algorithm MAML makes it possible to directly and efficiently optimize for the ability to adapt. This paper demonstrates that even when indirect encoding can be detrimental to performance in the case of normal learning, when selecting for the ability to adapt, indirect encoding can outperform direct encoding in a fair comparison. The indirect encoding technique Hypernetwork was used on the task of few-shot image classification on the Omniglot dataset. The results show the importance of directly optimizing for adaptability in realizing the powerful potential of indirect encoding.

Adam Katona, Nuno Lourenço, Penousal Machado, Daniel W. Franks, James Alfred Walker
Effective Universal Unrestricted Adversarial Attacks Using a MOE Approach

Recent studies have shown that Deep Learning models are susceptible to adversarial examples, which are data, generally images, intentionally modified to fool a machine learning classifier. In this paper, we present a multi-objective nested evolutionary algorithm to generate universal unrestricted adversarial examples in a black-box scenario. The unrestricted attacks are performed through the application of well-known image filters that are available in several image processing libraries, modern cameras, and mobile applications. The multi-objective optimization takes into account not only the attack success rate but also the detection rate. Experimental results showed that this approach is able to create a sequence of filters capable of generating very effective and undetectable attacks.

Alina Elena Baia, Gabriele Di Bari, Valentina Poggioni
Improving Distributed Neuroevolution Using Island Extinction and Repopulation

Neuroevolution commonly uses speciation strategies to better explore the search space of neural network architectures. One such speciation strategy is the use of islands, which are also popular in improving the performance of distributed evolutionary algorithms. However, islands may experience stagnation, which prevents their convergence towards better solutions and can result in wasted computation. This work evaluates utilizing an island extinction and repopulation mechanism to avoid premature convergence using Evolutionary eXploration of Augmenting Memory Models (EXAMM), an asynchronous island-based neuroevolution algorithm that progressively evolves recurrent neural networks (RNNs). In island extinction and repopulation, all members of the worst performing island are erased periodically and repopulated with mutated versions of the global best RNN. This island-based strategy is additionally compared to NEAT’s (NeuroEvolution of Augmenting Topologies) speciation strategy. Experiments were performed using two different real-world time series datasets (coal-fired power plant and aviation flight data). With statistical significance, results show that in addition to being more scalable, this island extinction and repopulation strategy evolves better global best genomes than both EXAMM’s original island-based strategy and NEAT’s speciation strategy. The extinction and repopulation strategy is easy to implement, and can be generically applied to other neuroevolution algorithms.

Zimeng Lyu, Joshua Karns, AbdElRahman ElSaid, Mohamed Mkaouer, Travis Desell
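The extinction and repopulation mechanism itself is simple to sketch: periodically erase the worst-performing island and refill it with mutated copies of the global best individual. The interfaces below are hypothetical, not EXAMM's actual code.

```python
def extinction_repopulation(islands, fitness, mutate):
    """Erase all members of the worst-performing island and repopulate it
    with mutated copies of the global best individual."""
    best = max((ind for island in islands for ind in island), key=fitness)
    worst = min(range(len(islands)),
                key=lambda i: max(fitness(ind) for ind in islands[i]))
    islands[worst] = [mutate(best) for _ in islands[worst]]
    return islands
```

Run periodically, this injects the global best's genetic material into stagnating regions of the population while preserving the other islands' diversity.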
An Experimental Study of Weight Initialization and Lamarckian Inheritance on Neuroevolution

Weight initialization is critical in being able to successfully train artificial neural networks (ANNs), and even more so for recurrent neural networks (RNNs), which can easily suffer from vanishing and exploding gradients. In neuroevolution, where evolutionary algorithms are applied to neural architecture search, weights typically need to be initialized at three different times: when the initial genomes (ANN architectures) are created, when offspring genomes are generated by crossover, and when new nodes or edges are created during mutation. This work explores the difference between the state-of-the-art Xavier and Kaiming methods and novel Lamarckian weight inheritance for weight initialization during crossover and mutation operations. These are examined using the Evolutionary eXploration of Augmenting Memory Models (EXAMM) neuroevolution algorithm, which is capable of evolving RNNs with a variety of modern memory cells (e.g., LSTM, GRU, MGU, UGRNN and Delta-RNN cells) as well as recurrent connections with varying time skips through a high-performance island-based distributed evolutionary algorithm. Results show that, with statistical significance, the Lamarckian strategy outperforms both Kaiming and Xavier weight initialization, can speed up neuroevolution by requiring fewer backpropagation epochs to be evaluated per genome, and that the neuroevolutionary process provides further benefits to neural network weight optimization.

Zimeng Lyu, AbdElRahman ElSaid, Joshua Karns, Mohamed Mkaouer, Travis Desell
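The three initialization strategies compared can be sketched as follows. The Xavier and Kaiming formulas are the standard published ones; the Lamarckian variant simply inherits the parent's trained weight, and the small perturbation shown is an illustrative assumption, not necessarily the paper's exact scheme.

```python
import random

def xavier_uniform(fan_in, fan_out, rng=random):
    """Xavier/Glorot uniform: U(-a, a) with a = sqrt(6 / (fan_in + fan_out))."""
    a = (6.0 / (fan_in + fan_out)) ** 0.5
    return rng.uniform(-a, a)

def kaiming_normal(fan_in, rng=random):
    """Kaiming/He normal: N(0, sqrt(2 / fan_in)), suited to ReLU-like units."""
    return rng.gauss(0.0, (2.0 / fan_in) ** 0.5)

def lamarckian_inherit(parent_weight, sigma=0.01, rng=random):
    """Lamarckian inheritance: start from the parent's trained weight
    (the Gaussian perturbation here is an illustrative assumption)."""
    return parent_weight + rng.gauss(0.0, sigma)
```

The key difference is that the Lamarckian child begins training from weights already shaped by the parent's backpropagation, rather than from a fresh random draw.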
Towards Feature-Based Performance Regression Using Trajectory Data

Black-box optimization is a very active area of research, with many new algorithms being developed every year. This variety is needed, on the one hand, since different algorithms are most suitable for different types of optimization problems. But the variety also poses a meta-problem: which algorithm to choose for a given problem at hand? Past research has shown that per-instance algorithm selection based on exploratory landscape analysis (ELA) can be an efficient means to tackle this meta-problem. Existing approaches, however, require the approximation of problem features based on a significant number of samples, which are typically selected through uniform sampling or Latin Hypercube Designs. The evaluation of these points is costly, and the benefit of an ELA-based algorithm selection over a default algorithm must therefore be significant in order to pay off. One could hope to bypass the evaluations for the feature approximations by using the samples that a default algorithm would perform anyway, i.e., by using the points of the default algorithm’s trajectory. We analyze in this paper how well such an approach can work. Concretely, we test how accurately trajectory-based ELA approaches can predict the final solution quality of the CMA-ES after a fixed budget of function evaluations. We observe that the loss of trajectory-based predictions can be surprisingly small compared to the classical global sampling approach, if the remaining budget for which solution quality shall be predicted is not too large. Feature selection, in contrast, did not show any advantage in our experiments and rather led to worsened prediction accuracy. The inclusion of state variables of CMA-ES has only a moderate effect on the prediction accuracy.

Anja Jankovic, Tome Eftimov, Carola Doerr
Demonstrating the Evolution of GANs Through t-SNE

Generative Adversarial Networks (GANs) are powerful generative models that have achieved strong results, mainly in the image domain. However, the training of GANs is not trivial, presenting some challenges tackled by different strategies. Evolutionary algorithms, such as COEGAN, were recently proposed as a solution to improve GAN training, overcoming common problems that affect the model, such as vanishing gradient and mode collapse. In this work, we propose an evaluation method based on t-distributed Stochastic Neighbour Embedding (t-SNE) to assess the progress of GANs and visualize the distribution learned by generators in training. We propose the use of the feature space extracted from trained discriminators to evaluate samples produced by generators and from the input dataset. A metric based on the resulting t-SNE maps and the Jaccard index is proposed to represent the model quality. Experiments were conducted to assess the progress of GANs when trained using COEGAN. The results show, both by visual inspection and by the metric, that the Evolutionary Algorithm gradually improves discriminators and generators through generations, avoiding problems such as mode collapse.

Victor Costa, Nuno Lourenço, João Correia, Penousal Machado
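The Jaccard index underlying the proposed quality metric compares two sets, e.g. which t-SNE map regions are occupied by generated versus real samples; the region labels in the usage example are hypothetical.

```python
def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| of two collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0
```

A Jaccard index near 1 would indicate that generated samples cover the same map regions as the training data, i.e. no mode collapse.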
Optimising Diversity in Classifier Ensembles of Classification Trees

Ensembles of predictors have been generally found to have better performance than single predictors. Although diversity is widely thought to be an important factor in building successful ensembles, there have been contradictory results in the literature regarding the influence of diversity on the generalisation error. Fundamental to this may be the way diversity itself is defined. We present two new diversity measures, based on the idea of ambiguity, obtained from the bias-variance decomposition by using the cross-entropy error or the hinge-loss. If random sampling is used to select patterns on which ensemble members are trained, we find that generalisation error is negatively correlated with diversity at high sampling rates; conversely, generalisation error is positively correlated with diversity when the sampling rate is low and the diversity high. We use evolutionary optimisers to select the subsets of patterns for predictor training by maximising these diversity measures on training data. Evaluation of their generalisation performance on a range of classification datasets from the literature shows that the ensembles obtained by maximising the cross-entropy diversity measure generalise well, enhancing the performance of small ensembles. Contrary to expectation, we find that there is no correlation between whether a pattern is selected and its proximity to the decision boundary.

Carina Ivaşcu, Richard M. Everson, Jonathan E. Fieldsend
WILDA: Wide Learning of Diverse Architectures for Classification of Large Datasets

In order to address scalability issues, which can be a challenge for Deep Learning methods, we propose Wide Learning of Diverse Architectures—a model that scales horizontally rather than vertically, enabling distributed learning. We propose a distributed version of a quality-diversity evolutionary algorithm (MAP-Elites) to evolve an architecturally diverse ensemble of shallow networks, each of which extracts a feature vector from the data. These features then become the input to a single shallow network which is optimised using gradient descent to solve a classification task. The technique is shown to perform well on two benchmark classification problems (MNIST and CIFAR). Additional experiments provide insight into the role that diversity plays in contributing to the performance of the repertoire.

Rui P. Cardoso, Emma Hart, David Burth Kurka, Jeremy Pitt
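The core MAP-Elites loop used to evolve the architecturally diverse repertoire can be sketched generically; the fitness, behavioural descriptor, mutation and initialisation functions below are hypothetical placeholders, and the paper's version is distributed across workers.

```python
import random

def map_elites(f, descriptor, mutate, init, iters=1000, seed=0):
    """Minimal MAP-Elites: keep, for each behavioural-descriptor cell,
    the best (solution, fitness) pair found so far."""
    rng = random.Random(seed)
    archive = {}                               # cell -> (solution, fitness)
    for _ in range(iters):
        if archive:                            # mutate a random elite ...
            x = mutate(rng.choice(list(archive.values()))[0], rng)
        else:                                  # ... or bootstrap the archive
            x = init(rng)
        cell = descriptor(x)
        if cell not in archive or f(x) > archive[cell][1]:
            archive[cell] = (x, f(x))
    return archive
```

In WILDA, each archive cell would hold a shallow network of a distinct architecture, and the whole archive supplies the feature vectors for the downstream classifier.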
Evolving Character-Level DenseNet Architectures Using Genetic Programming

Densely Connected Convolutional Networks (DenseNet) have demonstrated impressive performance on image classification tasks, but limited research has been conducted on using character-level DenseNet (char-DenseNet) architectures for text classification tasks. It is not clear what DenseNet architectures are optimal for text classification tasks. The iterative task of designing, training and testing char-DenseNets is time-consuming and requires expert domain knowledge. Evolutionary deep learning (EDL) has been used to automatically design CNN architectures for the image classification domain, thereby mitigating the need for expert domain knowledge. This study demonstrates the first work on using EDL to evolve char-DenseNet architectures for text classification tasks. A novel genetic programming-based algorithm (GP-Dense), coupled with an indirect-encoding scheme, facilitates the evolution of performant char-DenseNet architectures. The algorithm is evaluated on two popular text datasets, and the best-evolved models are benchmarked against four current state-of-the-art character-level CNN and DenseNet models. Results indicate that the algorithm evolves performant models for both datasets that outperform two of the state-of-the-art models in terms of model accuracy and three of the state-of-the-art models in terms of parameter size.

Trevor Londt, Xiaoying Gao, Peter Andreae
Transfer Learning for Automated Test Case Prioritization Using XCSF

With the rise of test automation, companies start to rely on large amounts of test cases. However, there are situations where it is unfeasible to perform every test case as only a limited amount of time is available. Under such circumstances a set of crucial tests has to be compiled. Recent research has shown that reinforcement learning methods such as XCSF classifier systems are well-suited for this task. This work investigates whether reusing knowledge of XCSF-based agents is beneficial for prioritizing test cases and subsequently selecting test suites in terms of performance. We developed a simplistic population transformation and evaluate it in a series of experiments. Our evaluation shows that XCSF may indeed benefit from transfer learning for this use case.

Lukas Rosenbauer, David Pätzel, Anthony Stein, Jörg Hähner
On the Effects of Absumption for XCS with Continuous-Valued Inputs

The rule-based XCS Classifier System (XCS) aims at forming classifiers which are as general as possible to achieve an optimal performance level. Too high a generalization pressure may lead to over-general classifiers degrading the performance of XCS. To date, no method exists for XCS for real-valued input spaces (XCSR) to handle over-general classifiers ensuring an accurate population. The Absumption mechanism and the Specify operator, both developed for XCS with binary inputs, provide a promising basis for over-generality handling in XCSR. This paper introduces adapted versions of Absumption and Specify by proposing different identification and specialization strategies for the application in XCSR. To determine their potential, the adapted techniques are evaluated in different classification problems, i.e., common benchmarks and real-world data from the agricultural domain, and in a multi-step problem.

Alexander R. M. Wagner, Anthony Stein
A NEAT Visualisation of Neuroevolution Trajectories

NeuroEvolution of Augmenting Topologies (NEAT) is a system for evolving neural network topologies along with weights that has proven highly effective and adaptable for solving challenging reinforcement learning tasks. This paper analyses NEAT through the lens of Search Trajectory Networks (STNs), a recently proposed visual approach to study the dynamics of evolutionary algorithms. Our goal is to improve the understanding of neuroevolution systems. We present a visual and statistical analysis contrasting the behaviour of NEAT, with and without using the crossover operator, when solving the two benchmark problems outlined in the original NEAT article: XOR and double-pole balancing. Contrary to what is reported in the original NEAT article, our experiments without crossover perform significantly better in both domains.

Stefano Sarti, Gabriela Ochoa
Evaluating Models with Dynamic Sampling Holdout

Automated Machine Learning (Auto-ML) is a growing field where several techniques are being developed to address the question of how to automate the process of defining machine learning pipelines, using diverse types of approaches and with relative success, but still being far from solved. Among these still unsolved questions, the computational cost is one of the major issues. In this context, evaluating a model takes a lot of time and resources, and yet that is still a step that has not received much attention in the Auto-ML literature. In this sense, this work revisits the Auto-CVE (Automated Coevolutionary Voting Ensemble) and proposes a new method for model evaluation: the dynamic sampling holdout. When compared to the regular Auto-CVE with cross-validation and the popular TPOT (Tree-based Pipeline Optimization Tool) algorithm, Auto-CVE with dynamic holdout shows competitive results in both predictive performance and computing time.

Celio H. N. Larcher Jr, Helio J. C. Barbosa

Parallel and Distributed Systems

Frontmatter
Event-Driven Multi-algorithm Optimization: Mixing Swarm and Evolutionary Strategies

Researchers in nature-inspired optimization have recently proposed multi-population asynchronous algorithms that split the evolutionary process between different search paradigms working in collaboration. These algorithms execute the optimization strategy by reading streams of messages containing solution populations from message queues. After searching for a small number of iterations, new evolved populations are generated and sent back to a queue. Current research suggests that when many population-processing algorithms communicate in parallel, parameters intensifying exploration or exploitation in each population strike a dynamic that balances the two, exploring and exploiting simultaneously, maintaining an overall diversity, and improving the search. In this work, we propose a simple reactive migration, population-generation, and processing method for the asynchronous processing of multi-population, multi-strategy algorithms that achieves an improvement over homogeneous configurations. We evaluate this method by comparing a heterogeneous ensemble of multi-populations against a homogeneous solution, consisting of Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) populations, using five problems from the noiseless BBOB toolbox for the optimization of continuous functions. Results show that, compared with other asynchronous homogeneous population-based algorithms, this method offers better performance with respect to the maximum number of evaluations needed to find a solution and the number of runs in which one is found.

Mario García-Valdez, Juan J. Merelo
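The message-queue pattern described above can be sketched in a few lines: populations travel as messages, each worker pops one, evolves it for a short burst under its own strategy, and pushes it back. This toy single-threaded sketch (sphere function, simplified "GA" and "PSO" steps) only illustrates the reactive loop, not the authors' distributed implementation.

```python
import queue
import random

def fitness(x):
    return sum(v * v for v in x)           # sphere function, minimise

def evolve_step(population, strategy, rng):
    """One short burst of search on a population message."""
    if strategy == "GA":                   # mutate one parent, drop the worst
        child = [v + rng.gauss(0, 0.1) for v in rng.choice(population)]
        return sorted(population + [child], key=fitness)[:-1]
    # toy "PSO": every particle drifts halfway toward the population best
    best = min(population, key=fitness)
    return [[v + 0.5 * (b - v) for v, b in zip(p, best)] for p in population]

def run_async(n_messages=8, iterations=20, seed=1):
    """Reactive loop: read a population message, evolve it, send it back."""
    rng = random.Random(seed)
    q = queue.Queue()
    for i in range(n_messages):            # heterogeneous ensemble: half GA, half PSO
        pop = [[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
        q.put((pop, "GA" if i % 2 == 0 else "PSO"))
    for _ in range(iterations):
        pop, strategy = q.get()
        q.put((evolve_step(pop, strategy, rng), strategy))
    best = float("inf")
    while not q.empty():                   # drain and report the best solution seen
        pop, _ = q.get()
        best = min(best, min(fitness(x) for x in pop))
    return best
```

In a real deployment the `queue.Queue` would be replaced by an external message broker, and each `evolve_step` call would run in an independent asynchronous worker.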
TensorGP – Genetic Programming Engine in TensorFlow

In this paper, we resort to the TensorFlow framework to investigate the benefits of applying data vectorization and fitness caching methods to domain evaluation in Genetic Programming. For this purpose, an independent engine was developed, TensorGP, along with a testing suite to extract comparative timing results across different architectures and amongst both iterative and vectorized approaches. Our performance benchmarks demonstrate that by exploiting the TensorFlow eager execution model, performance gains of up to two orders of magnitude can be achieved on a parallel approach running on dedicated hardware when compared to a standard iterative approach.

Francisco Baeta, João Correia, Tiago Martins, Penousal Machado
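The contrast between iterative and vectorized domain evaluation can be sketched without the TensorFlow dependency: each tree node returns the full vector of values over the fitness domain instead of a scalar per sample point, and repeated subtrees are served from a cache. The tuple encoding and function below are illustrative assumptions, not the TensorGP API.

```python
def eval_tree(tree, xs, cache=None):
    """Evaluate a GP expression tree over the whole fitness domain at once.

    Each node returns a vector of values for every sample point, mirroring
    the tensor-based evaluation TensorGP delegates to TensorFlow; the
    optional cache reuses results for repeated subtrees (fitness caching).
    Trees are nested tuples, e.g. ("add", ("x",), ("mul", ("x",), ("x",))).
    """
    if cache is not None and tree in cache:
        return cache[tree]
    op = tree[0]
    if op == "x":
        out = list(xs)
    elif op == "const":
        out = [tree[1]] * len(xs)
    else:
        left = eval_tree(tree[1], xs, cache)
        right = eval_tree(tree[2], xs, cache)
        if op == "add":
            out = [a + b for a, b in zip(left, right)]
        elif op == "mul":
            out = [a * b for a, b in zip(left, right)]
        else:
            raise ValueError(f"unknown op {op!r}")
    if cache is not None:
        cache[tree] = out   # hashable tuples make subtree caching trivial
    return out
```

On dedicated hardware, the per-node list comprehensions become single tensor operations over the whole domain, which is where the reported speed-ups come from.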

Applications of Nature-Inspired Computing for Sustainability and Development

Frontmatter
A Novel Evolutionary Approach for IoT-Based Water Contaminant Detection

Water pollution is a serious problem, and it is important to be able to monitor it with non-invasive and low-cost solutions, such as those offered by smart sensor technologies. In this paper, we propose an improvement of our innovative classification system, based on geometrical cones, to detect and classify pollutants, belonging to a given set of substances, spilled into waste water. The solution is based on an ad-hoc classifier that can be implemented aboard the Smart Cable Water (SCW) sensor, based on the SENSIPLUS technology developed by Sensichips s.r.l. The SCW is a smart sensor endowed with six interdigitated electrodes, covered by specific sensing materials that allow discriminating between different water contaminants. In order to develop an algorithm suitable for the “edge computing” paradigm, we first compress the input data from a 10-dimensional space to a 3-D space using PCA decomposition. Then we use an ad-hoc classifier to discriminate between the different contaminants in the transformed space. The classifier’s parameters are learned with evolutionary algorithms. The obtained results are compared with those of the previous classification system and of other, more classical, machine learning approaches.

Claudio De Stefano, Luigi Ferrigno, Francesco Fontanella, Luca Gerevini, Mario Molinara
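The PCA compression step (10-D to 3-D) can be sketched in pure Python via power iteration with deflation. This is a dependency-free stand-in for what would normally be a library PCA call; the function name and iteration counts are assumptions, and the paper's cone-based classifier is not reproduced here.

```python
import random

def pca_project(data, k=3, iters=200, seed=0):
    """Project samples onto the top-k principal components.

    Uses power iteration on the covariance matrix with deflation against
    already-found components; a pure-Python sketch of the PCA compression
    step, not an edge-optimised implementation.
    """
    n, d = len(data), len(data[0])
    mean = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - mean[j] for j in range(d)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centred) / (n - 1) for j in range(d)]
           for i in range(d)]
    rng = random.Random(seed)
    components = []
    for _ in range(k):
        v = [rng.gauss(0, 1) for _ in range(d)]
        for _ in range(iters):
            w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
            for c in components:          # deflate against found components
                dot = sum(wi * ci for wi, ci in zip(w, c))
                w = [wi - dot * ci for wi, ci in zip(w, c)]
            norm = sum(wi * wi for wi in w) ** 0.5 or 1.0
            v = [wi / norm for wi in w]
        components.append(v)
    # project centred samples onto the components
    return [[sum(r[j] * c[j] for j in range(d)) for c in components]
            for r in centred]
```

Evolving the downstream classifier's parameters in the resulting 3-D space is what keeps the method cheap enough for on-sensor execution.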
Evolutionary Algorithms for Roughness Coefficient Estimation in River Flow Analyses

Management and analysis of water resources is of paramount importance in implementing water-related sustainable development goals. Hydraulic models are key in flood forecasting and simulation applied to river flood analysis and risk prediction, and an accurate estimation of roughness is one of the main factors in predicting the discharge in a stream. In practice, roughness can be represented by the well-known Manning’s coefficient, necessary for discharge calculation. In this paper we design an objective function that measures the quality of a given configuration of the Manning’s coefficient. This objective function is optimised through several evolutionary approaches, namely (1+1)-ES, CMA-ES, Differential Evolution, Particle Swarm Optimization and Bayesian Optimization. As a case study, a river in central Italy was considered. The results indicate that the model, consistent with classical techniques adopted in hydraulic engineering, is applicable to natural rivers and provides an estimation of the roughness coefficients with satisfactory accuracy. A comparison of the performance of the five evolutionary algorithms is also presented.

Antonio Agresta, Marco Baioletti, Chiara Biscarini, Alfredo Milani, Valentino Santucci
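The simplest of the listed optimisers, the (1+1)-ES, can be sketched on a toy version of the calibration task: fit Manning's n so that the discharge from Manning's formula, Q = (1/n) A R^(2/3) S^(1/2), matches an observed value. The channel geometry, bounds, and squared-error objective below are illustrative assumptions, not the paper's objective function.

```python
import random

def discharge(n, area=120.0, hydraulic_radius=2.5, slope=0.001):
    """Manning's formula: Q = (1/n) * A * R**(2/3) * S**0.5 (toy geometry)."""
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

def objective(n, observed_q):
    """Squared error between simulated and observed discharge
    (illustrative stand-in for the paper's objective function)."""
    return (discharge(n) - observed_q) ** 2

def one_plus_one_es(observed_q, sigma=0.01, generations=300, seed=0):
    """(1+1)-ES with the 1/5 success rule, searching n in [0.01, 0.1]."""
    rng = random.Random(seed)
    parent = 0.05
    f_parent = objective(parent, observed_q)
    for _ in range(generations):
        child = min(0.1, max(0.01, parent + rng.gauss(0, sigma)))
        f_child = objective(child, observed_q)
        if f_child <= f_parent:
            parent, f_parent = child, f_child
            sigma *= 1.5              # success: widen the mutation step
        else:
            sigma *= 1.5 ** -0.25     # failure: shrink the mutation step
    return parent
```

In the paper's setting the scalar `discharge` call would be replaced by a full hydraulic simulation over a vector of reach-wise Manning's coefficients.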
EA-Based ASV Trajectory Planner for Pollution Detection in Lentic Waters

This paper presents a new planner based on Evolutionary Algorithms (EAs) to optimize the trajectory of an Autonomous Surface Vehicle (ASV), equipped with a probe, that has to determine the location of a pollutant in lentic water bodies (e.g. reservoirs, dams). To achieve this, our planner 1) exploits the information provided by a simulator that determines the pollutant distribution based on the water currents, and 2) is supported by an EA that optimizes the mission duration, the length of the ASV trajectory, and the measurements taken by its probe in highly polluted areas. The current version of the planner also ensures that the trajectories are feasible from the perspective of both the ASV and the water body, and solves this constrained multi-objective problem as a mono-objective one that linearly combines the constraint and objective functions. Preliminary results over different scenarios show that the planner can already determine good overall solutions, but that it needs to be modified (e.g. by using an EA designed for multi-objective optimization) to improve them further.

Gonzalo Carazo-Barbero, Eva Besada-Portas, José M. Girón-Sierra, José A. López-Orozco
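The linear combination of objectives and constraints described above is a standard weighted-sum scalarization with a penalty term. The function below is a generic sketch of that pattern; the weights, penalty factor, and example objective names are assumptions, not the planner's actual coefficients.

```python
def scalarize(objectives, constraints, weights, penalty=1000.0):
    """Combine objectives and constraint violations into one fitness value.

    objectives  -- objective values to minimise (e.g. mission duration,
                   trajectory length, negated pollution measured)
    constraints -- violation magnitudes, 0.0 when a constraint is satisfied
    weights     -- one weight per objective
    penalty     -- factor pushing infeasible trajectories out of the search
    """
    assert len(objectives) == len(weights)
    return (sum(w * f for w, f in zip(weights, objectives))
            + penalty * sum(max(0.0, c) for c in constraints))
```

A multi-objective EA such as NSGA-II would instead keep the objectives separate and rank candidates by Pareto dominance, which is the modification the authors suggest.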
Backmatter
Metadata
Title
Applications of Evolutionary Computation
Edited by
Pedro A. Castillo
Juan Luis Jiménez Laredo
Copyright Year
2021
Electronic ISBN
978-3-030-72699-7
Print ISBN
978-3-030-72698-0
DOI
https://doi.org/10.1007/978-3-030-72699-7