2024 | Book

Applications of Evolutionary Computation

27th European Conference, EvoApplications 2024, Held as Part of EvoStar 2024, Aberystwyth, UK, April 3–5, 2024, Proceedings, Part II

About this book

The two-volume set LNCS 14634 and 14635 constitutes the refereed proceedings of the 27th European Conference on Applications of Evolutionary Computation, EvoApplications 2024, held as part of EvoStar 2024 in Aberystwyth, UK, April 3–5, 2024, and co-located with the other EvoStar events: EvoCOP, EvoMUSART, and EuroGP.

The 51 full papers presented in these proceedings were carefully reviewed and selected from 77 submissions. The papers have been organized in the following topical sections: applications of evolutionary computation; analysis of evolutionary computation methods: theory, empirics, and real-world applications; computational intelligence for sustainability; evolutionary computation in edge, fog, and cloud computing; evolutionary computation in image analysis, signal processing and pattern recognition; evolutionary machine learning; machine learning and AI in digital healthcare and personalized medicine; problem landscape analysis for efficient optimization; soft computing applied to games; and surrogate-assisted evolutionary optimisation.

Table of Contents

Frontmatter

Evolutionary Machine Learning

Frontmatter
Hindsight Experience Replay with Evolutionary Decision Trees for Curriculum Goal Generation
Abstract
Reinforcement learning (RL) algorithms often require a significant number of experiences to learn a policy capable of achieving desired goals in multi-goal robot manipulation tasks with sparse rewards. Hindsight Experience Replay (HER) is an existing method that improves learning efficiency by using failed trajectories and replacing the original goals with hindsight goals that are uniformly sampled from the visited states. However, HER has a limitation: the hindsight goals are mostly near the initial state, which hinders solving tasks efficiently if the desired goals are far from the initial state. To overcome this limitation, we introduce a curriculum learning method called HERDT (HER with Decision Trees). HERDT uses binary DTs to generate curriculum goals that guide a robotic agent progressively from an initial state toward a desired goal. During the warm-up stage, DTs are optimized using the Grammatical Evolution algorithm. In the training stage, curriculum goals are then sampled by DTs to help the agent navigate the environment. Since binary DTs generate discrete values, we fine-tune these curriculum points by incorporating a feedback value (i.e., the Q-value). This fine-tuning enables us to adjust the difficulty level of the generated curriculum points, ensuring that they are neither overly simplistic nor excessively challenging. In other words, these points are precisely tailored to match the robot’s ongoing learning policy. We evaluate our proposed approach on different sparse reward robotic manipulation tasks and compare it with the state-of-the-art HER approach. Our results demonstrate that our method consistently outperforms or matches the existing approach in all the tested tasks.
Erdi Sayar, Vladislav Vintaykin, Giovanni Iacca, Alois Knoll
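A minimal sketch (not the authors' HERDT code) of the standard hindsight relabeling step that HERDT builds on: a failed transition is stored again with an achieved goal from a later step substituted for the original goal and the sparse reward recomputed. The buffer layout, the "future" sampling strategy, and the distance tolerance are illustrative assumptions.

```python
import numpy as np

def her_relabel(episode, reward_fn, k_future=4, rng=np.random.default_rng(0)):
    """'Future' hindsight relabeling: for each transition, sample achieved goals
    from later steps of the same episode and store extra transitions with the
    goal replaced and the reward recomputed."""
    relabeled = []
    T = len(episode)
    for t, (obs, action, reward, next_obs, goal) in enumerate(episode):
        relabeled.append((obs, action, reward, next_obs, goal))       # original transition
        future_idx = rng.integers(t, T, size=min(k_future, T - t))    # later time steps
        for idx in future_idx:
            hindsight_goal = episode[idx][3]["achieved_goal"]         # achieved state becomes the goal
            new_reward = reward_fn(next_obs["achieved_goal"], hindsight_goal)
            relabeled.append((obs, action, new_reward, next_obs, hindsight_goal))
    return relabeled

def sparse_reward(achieved, goal, tol=0.05):
    """Sparse reward: 0 if the achieved goal is within tolerance, -1 otherwise."""
    return 0.0 if np.linalg.norm(np.asarray(achieved) - np.asarray(goal)) < tol else -1.0
```

HERDT's contribution is in how curriculum goals are produced (evolved decision trees plus Q-value fine-tuning); the relabeling above only illustrates the HER mechanism the paper starts from.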
Cultivating Diversity: A Comparison of Diversity Objectives in Neuroevolution
Abstract
Inspired by biological evolution’s ability to produce complex and intelligent beings, neuroevolution utilizes evolutionary algorithms for optimizing the connection weights and structure of artificial neural networks. With evolutionary algorithms often failing to produce the same level of diversity as biological evolution, explicitly encouraging diversity with additional optimization objectives has emerged as a successful approach. However, there is a lack of knowledge regarding the performance of different types of diversity objectives on problems with different characteristics. In this paper, we perform a systematic comparison between objectives related to structural diversity, behavioral diversity, and our newly proposed representational diversity. We explore these objectives’ effects on problems with different levels of modularity, regularity, deceptiveness and discreteness and find clear relationships between problem characteristics and the effect of different diversity objectives – suggesting that there is much to be gained from adapting diversity objectives to the specific problem being solved.
Didrik Spanne Reilstad, Kai Olav Ellefsen
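A minimal sketch of one common way to score behavioural diversity, following Lehman and Stanley's novelty search: novelty is the mean distance to the k nearest neighbours in behaviour-descriptor space. The descriptor dimensionality and k are placeholder choices, not the paper's exact objectives.

```python
import numpy as np

def novelty_scores(behaviours, k=15):
    """Novelty of each individual = mean Euclidean distance to its k nearest
    neighbours in behaviour-descriptor space (larger = more novel)."""
    b = np.asarray(behaviours, dtype=float)            # shape (pop_size, descriptor_dim)
    dists = np.linalg.norm(b[:, None, :] - b[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)                    # ignore self-distance
    k = min(k, len(b) - 1)
    nearest = np.sort(dists, axis=1)[:, :k]
    return nearest.mean(axis=1)

# Example: score a small population described by 2-D behaviour vectors.
pop_behaviours = np.random.default_rng(1).random((20, 2))
print(novelty_scores(pop_behaviours, k=5))
```

Structural or representational diversity objectives follow the same pattern but compute the distances over genotypes or internal representations instead of behaviours.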
Evolving Reservoirs for Meta Reinforcement Learning
Abstract
Animals often demonstrate a remarkable ability to adapt to their environments during their lifetime. They do so partly due to the evolution of morphological and neural structures. These structures capture features of environments shared between generations to bias and speed up lifetime learning. In this work, we propose a computational model for studying a mechanism that can enable such a process. We adopt a computational framework based on meta reinforcement learning as a model of the interplay between evolution and development. At the evolutionary scale, we evolve reservoirs, a family of recurrent neural networks that differ from conventional networks in that one optimizes not the synaptic weights, but hyperparameters controlling macro-level properties of the resulting network architecture. At the developmental scale, we employ these evolved reservoirs to facilitate the learning of a behavioral policy through Reinforcement Learning (RL). Within an RL agent, a reservoir encodes the environment state before providing it to an action policy. We evaluate our approach on several 2D and 3D simulated environments. Our results show that the evolution of reservoirs can improve the learning of diverse challenging tasks. We study in particular three hypotheses: the use of an architecture combining reservoirs and reinforcement learning could enable (1) solving tasks with partial observability, (2) generating oscillatory dynamics that facilitate the learning of locomotion tasks, and (3) facilitating the generalization of learned behaviors to new tasks unknown during the evolution phase.
Corentin Léger, Gautier Hamon, Eleni Nisioti, Xavier Hinaut, Clément Moulin-Frier
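A minimal sketch of the reservoir idea the paper builds on: the recurrent weights are drawn at random and only macro-level hyperparameters (size, spectral radius, input scaling, leak rate) would be exposed to evolution. The update rule and parameter names follow the standard echo-state formulation, not the authors' implementation.

```python
import numpy as np

class Reservoir:
    """Leaky echo-state reservoir: random fixed weights, evolvable hyperparameters."""
    def __init__(self, n_inputs, n_units=200, spectral_radius=0.9,
                 input_scaling=1.0, leak_rate=0.3, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.uniform(-input_scaling, input_scaling, (n_units, n_inputs))
        w = rng.uniform(-1.0, 1.0, (n_units, n_units))
        w *= spectral_radius / np.max(np.abs(np.linalg.eigvals(w)))  # rescale to target radius
        self.w, self.leak = w, leak_rate
        self.state = np.zeros(n_units)

    def step(self, u):
        pre = self.w_in @ np.asarray(u, dtype=float) + self.w @ self.state
        self.state = (1 - self.leak) * self.state + self.leak * np.tanh(pre)
        return self.state   # encoded environment state, fed to the action policy

res = Reservoir(n_inputs=4)
encoded = res.step([0.1, -0.2, 0.0, 0.5])
```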
Hybrid Surrogate Assisted Evolutionary Multiobjective Reinforcement Learning for Continuous Robot Control
Abstract
Many real world reinforcement learning (RL) problems consist of multiple conflicting objective functions that need to be optimized simultaneously. Finding these optimal policies (known as Pareto optimal policies) for different preferences of objectives requires extensive state space exploration. Thus, obtaining a dense set of Pareto optimal policies is challenging and often reduces the sample efficiency. In this paper, we propose a hybrid multiobjective policy optimization approach for solving multiobjective reinforcement learning (MORL) problems with continuous actions. Our approach combines the faster convergence of multiobjective policy gradient (MOPG) and a surrogate assisted multiobjective evolutionary algorithm (MOEA) to produce a dense set of Pareto optimal policies. The solutions found by the MOPG algorithm are utilized to build computationally inexpensive surrogate models in the parameter space of the policies that approximate the return of policies. An MOEA is executed that utilizes the surrogates’ mean prediction and uncertainty in the prediction to find approximate optimal policies. The final solution policies are later evaluated using the simulator and stored in an archive. Tests on multiobjective continuous action RL benchmarks show that a hybrid surrogate assisted multiobjective evolutionary optimizer with robust selection criterion produces a dense set of Pareto optimal policies without extensively exploring the state space. We also apply the proposed approach to train Pareto optimal agents for autonomous driving, where the hybrid approach produced superior results compared to a state-of-the-art MOPG algorithm.
Atanu Mazumdar, Ville Kyrki
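A minimal sketch of the surrogate pre-screening step described above: cheap models built from already-evaluated policies supply a mean return and an uncertainty estimate per objective, and candidates are ranked by a robust criterion before any are sent to the simulator. The k-nearest-neighbour surrogate and the mean-minus-uncertainty criterion are illustrative stand-ins for the paper's models.

```python
import numpy as np

def knn_surrogate(x, archive_params, archive_returns, k=5):
    """Predict per-objective mean and std of returns from the k closest
    already-evaluated policy parameter vectors."""
    d = np.linalg.norm(archive_params - x, axis=1)
    idx = np.argsort(d)[:k]
    neigh = archive_returns[idx]                  # shape (k, n_objectives)
    return neigh.mean(axis=0), neigh.std(axis=0)

def robust_score(mean, std, kappa=1.0):
    """Pessimistic (robust) scalar score for a maximisation problem."""
    return (mean - kappa * std).sum()

rng = np.random.default_rng(2)
params = rng.normal(size=(50, 8))                 # evaluated policy parameters
returns = rng.normal(size=(50, 2))                # their two-objective returns
candidate = rng.normal(size=8)
mu, sigma = knn_surrogate(candidate, params, returns)
print(robust_score(mu, sigma))
```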
Towards Physical Plausibility in Neuroevolution Systems
Abstract
The growing use of Artificial Intelligence (AI) models, especially Deep Neural Networks (DNNs), is driving up power consumption during training and inference, posing environmental concerns and driving the need for more energy-efficient algorithms and hardware solutions. This work addresses the growing energy consumption problem in Machine Learning (ML), particularly during the inference phase. Even a slight reduction in power usage can lead to significant energy savings, benefiting users, companies, and the environment. Our approach focuses on maximizing the accuracy of Artificial Neural Network (ANN) models using a neuroevolutionary framework whilst minimizing their power consumption. To do so, power consumption is considered in the fitness function. We introduce a new mutation strategy that stochastically reintroduces modules of layers, with power-efficient modules having a higher chance of being chosen. We also introduce a novel technique that allows training two separate models in a single training step, promoting one of them to be more power efficient than the other while maintaining similar accuracy. The results demonstrate a reduction in power consumption of ANN models by up to 29.2% without a significant decrease in predictive performance.
Gabriel Cortês, Nuno Lourenço, Penousal Machado
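A minimal sketch of the kind of fitness used when power is folded into neuroevolution: accuracy is rewarded and measured (or estimated) inference power is penalised. The weighting scheme and the power estimate are placeholder assumptions, not the paper's exact formulation.

```python
def power_aware_fitness(accuracy, power_watts, power_weight=0.3, power_budget=10.0):
    """Combine predictive performance and normalised power draw into one score.
    Higher is better; power draw is normalised against a fixed budget."""
    normalised_power = min(power_watts / power_budget, 1.0)
    return (1.0 - power_weight) * accuracy - power_weight * normalised_power

# Two candidate networks with similar accuracy but different power draw.
print(power_aware_fitness(accuracy=0.91, power_watts=7.0))
print(power_aware_fitness(accuracy=0.90, power_watts=4.5))
```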
Leveraging More of Biology in Evolutionary Reinforcement Learning
Abstract
In this paper, we survey the use of additional biologically inspired mechanisms, principles, and concepts in the area of evolutionary reinforcement learning (ERL). While recent years have witnessed the emergence of a swath of metaphor-laden approaches, many merely echo old algorithms through novel metaphors. Simultaneously, numerous promising ideas from evolutionary biology and related areas, ripe for exploitation within evolutionary machine learning, remain in relative obscurity. To address this gap, we provide a comprehensive analysis of innovative, often unorthodox approaches in ERL that leverage additional bio-inspired elements. Furthermore, we pinpoint research directions in the field with the largest potential to yield impactful outcomes and discuss classes of problems that could benefit the most from such research.
Bruno Gašperov, Marko Đurasević, Domagoj Jakobovic
A Hierarchical Dissimilarity Metric for Automated Machine Learning Pipelines, and Visualizing Search Behaviour
Abstract
In this study, the challenge of developing a dissimilarity metric for machine learning pipeline optimization is addressed. Traditional approaches, limited by simplified operator sets and pipeline structures, fail to address the full complexity of this task. Two novel metrics are proposed for measuring structural, and hyperparameter, dissimilarity in the decision space. A hierarchical approach is employed to integrate these metrics, prioritizing structural over hyperparameter differences. The Tree-based Pipeline Optimization Tool (TPOT) is utilized as the primary automated machine learning framework, applied on the abalone dataset. Novel visual representations of TPOT’s search dynamics are also proposed, providing some deeper insights into its behaviour and evolutionary trajectories, under different search conditions. The effects of altering the population selection mechanism and reducing population size are explored, highlighting the enhanced understanding these methods provide in automated machine learning pipeline optimization.
Angus Kenny, Tapabrata Ray, Steffen Limmer, Hemant Kumar Singh, Tobias Rodemann, Markus Olhofer
DeepEMO: A Multi-indicator Convolutional Neural Network-Based Evolutionary Multi-objective Algorithm
Abstract
Quality Indicators (QIs) have been used in numerous Evolutionary Multi-objective Optimization Algorithms (EMOAs) as selection mechanisms within the evolutionary process. Because each QI prefers specific point-distribution properties, an Indicator-based EMOA (IB-EMOA) that uses a single QI has an intrinsically limited scope of problems it can solve accurately. To overcome the issues that IB-EMOAs have, we present the first results of a new general multi-indicator-based multi-objective evolutionary algorithm, denoted as DeepEMO. It uses a Convolutional Neural Network (CNN) as a hyper-heuristic to choose, depending on the Pareto-front geometry, the appropriate indicator-based selection mechanism at each generation of the evolutionary process. We employ state-of-the-art benchmark problems with different Pareto front geometries to test our approach. Our experimental results show that DeepEMO obtains competitive performance across multiple QIs. This is because the CNN is employed to classify the geometry of the point cloud that approximates the Pareto front. Hence, DeepEMO compensates for the weaknesses of a single QI with the strengths of others, showing that its performance is invariant to the Pareto front geometry.
Emilio Bernal-Zubieta, Jesús Guillermo Falcón-Cardona, Jorge M. Cruz-Duarte
A Comparative Analysis of Evolutionary Adversarial One-Pixel Attacks
Abstract
Adversarial attacks pose significant challenges to the robustness of machine learning models. This paper explores one-pixel attacks in image classification, a black-box adversarial attack that introduces changes to pixels of the input images to make the classifier predict erroneously. We use a pragmatic approach by employing different evolutionary algorithms (Differential Evolution, Genetic Algorithms, and Covariance Matrix Adaptation Evolution Strategy) to find and optimise these one-pixel attacks. We focus on understanding how these algorithms generate effective one-pixel attacks. The experimentation was carried out on the CIFAR-10 dataset, a widespread benchmark in image classification. The experimental results cover an analysis of the following aspects: fitness optimisation, number of evaluations to generate an adversarial attack, success rate, number of adversarial attacks found per image, solution space coverage and level of distortion done to the original image to generate the attack. Overall, the experimentation provided insights into the nuances of the one-pixel attack and compared three standard evolutionary algorithms, showcasing each algorithm’s potential and evolutionary computation’s ability to find solutions in this strict case of the adversarial attack.
Luana Clare, Alexandra Marques, João Correia
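A minimal sketch of the one-pixel attack encoding and the differential-evolution mutation used to search it: a candidate is the tuple (x, y, r, g, b), applied to a copy of the image. The classifier is omitted (it would supply the fitness), and the DE settings are common defaults rather than the paper's configuration.

```python
import numpy as np

def apply_one_pixel(image, candidate):
    """candidate = (x, y, r, g, b); returns a copy with that single pixel replaced."""
    x, y, r, g, b = np.clip(np.round(candidate), 0, None).astype(int)
    h, w, _ = image.shape
    perturbed = image.copy()
    perturbed[min(y, h - 1), min(x, w - 1)] = [min(r, 255), min(g, 255), min(b, 255)]
    return perturbed

def de_mutation(population, i, f=0.5, rng=np.random.default_rng(3)):
    """Classic DE/rand/1 mutation: v = a + F * (b - c), with a, b, c distinct from i."""
    choices = [j for j in range(len(population)) if j != i]
    a, b, c = population[rng.choice(choices, size=3, replace=False)]
    return a + f * (b - c)

# Tiny usage example on a random 32x32 RGB image (CIFAR-10 sized).
rng = np.random.default_rng(3)
image = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
pop = rng.uniform([0, 0, 0, 0, 0], [31, 31, 255, 255, 255], size=(20, 5))
mutant = de_mutation(pop, i=0)
adversarial_candidate = apply_one_pixel(image, mutant)
```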
Robust Neural Architecture Search Using Differential Evolution for Medical Images
Abstract
Recent studies have demonstrated that Convolutional Neural Network (CNN) architectures are sensitive to adversarial attacks with imperceptible perturbations. Adversarial attacks on medical images may cause manipulated decisions and decrease the performance of the diagnosis system. The robustness of medical systems is crucial, as it assures an improved healthcare system and assists medical professionals in making decisions. Various studies have been proposed to secure medical systems against adversarial attacks, but they have used handcrafted architectures. This study proposes an evolutionary Neural Architecture Search (NAS) approach for searching robust architectures for medical image classification. The Differential Evolution (DE) algorithm is used as a search algorithm. Furthermore, we utilize an attention-based search space consisting of five different attention layers and sixteen convolution and pooling operations. Experiments on multiple MedMNIST datasets show that the proposed approach has achieved better results than deep learning architectures and a robust NAS approach.
Muhammad Junaid Ali, Laurent Moalic, Mokhtar Essaid, Lhassane Idoumghar
Progressive Self-supervised Multi-objective NAS for Image Classification
Abstract
We introduce a novel progressive self-supervised framework for neural architecture search. Our aim is to search for competitive, yet significantly less complex, generic CNN architectures that can be used for multiple tasks (i.e., as a pretrained model). This is achieved through cartesian genetic programming (CGP) for neural architecture search (NAS). Our approach integrates self-supervised learning with a progressive architecture search process. This synergy unfolds within the continuous domain which is tackled via multi-objective evolutionary algorithms (MOEAs). To empirically validate our proposal, we adopted a rigorous evaluation using the non-dominated sorting genetic algorithm II (NSGA-II) for the CIFAR-100, CIFAR-10, SVHN and CINIC-10 datasets. The experimental results showcase the competitiveness of our approach in relation to state-of-the-art proposals concerning both classification performance and model complexity. Additionally, the effectiveness of this method in achieving strong generalization can be inferred.
Cosijopii Garcia-Garcia, Alicia Morales-Reyes, Hugo Jair Escalante
Genetic Programming with Aggregate Channel Features for Flower Localization Using Limited Training Data
Abstract
Flower localization is a crucial image pre-processing step for subsequent classification/recognition that confronts challenges with diverse flower species, varying imaging conditions, and limited data. Existing flower localization methods face limitations, including reliance on color information, low model interpretability, and a large demand for training data. This paper proposes a new genetic programming (GP) approach called ACFGP with a novel representation to automated flower localization with limited training data. The novel GP representation enables ACFGP to evolve effective programs for generating aggregate channel features and achieving flower localization in diverse scenarios. Comparative evaluations against the baseline benchmark algorithm and YOLOv8 demonstrate ACFGP’s superior performance. Further analysis highlights the effectiveness of the aggregate channel features generated by ACFGP programs, demonstrating the superiority of ACFGP in addressing challenging flower localization tasks.
Qinyu Wang, Ying Bi, Bing Xue, Mengjie Zhang
Evolutionary Multi-objective Optimization of Large Language Model Prompts for Balancing Sentiments
Abstract
The advent of large language models (LLMs) such as ChatGPT has attracted considerable attention in various domains due to their remarkable performance and versatility. As the use of these models continues to grow, the importance of effective prompt engineering has come to the fore. Prompt optimization emerges as a crucial challenge, as it has a direct impact on model performance and the extraction of relevant information. Recently, evolutionary algorithms (EAs) have shown promise in addressing this issue, paving the way for novel optimization strategies. In this work, we propose an evolutionary multi-objective (EMO) approach specifically tailored for prompt optimization called EMO-Prompts, using sentiment analysis as a case study. We use sentiment analysis capabilities as our experimental targets. Our results demonstrate that EMO-Prompts effectively generates prompts capable of guiding the LLM to produce texts embodying two conflicting emotions simultaneously.
Jill Baumann, Oliver Kramer
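A minimal sketch of the multi-objective bookkeeping behind an approach like EMO-Prompts: each prompt is scored on two conflicting sentiment objectives (here by placeholder scores rather than a real LLM and sentiment model), and the non-dominated prompts form the current front carried into the next generation.

```python
def dominates(a, b):
    """True if score vector a Pareto-dominates b (both objectives maximised)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(scored_prompts):
    """Keep prompts whose (objective_1, objective_2) scores are not dominated."""
    front = []
    for prompt, score in scored_prompts:
        if not any(dominates(other, score) for _, other in scored_prompts if other != score):
            front.append((prompt, score))
    return front

# Placeholder scores, e.g. (joy, sadness) elicited by each candidate prompt.
scored = [("prompt A", (0.8, 0.2)), ("prompt B", (0.5, 0.6)), ("prompt C", (0.4, 0.5))]
print(non_dominated(scored))
```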
Evolutionary Feature-Binning with Adaptive Burden Thresholding for Biomedical Risk Stratification
Abstract
Multivariate associations including additivity, feature interactions, heterogeneous effects, and rare feature states can present significant obstacles in statistical and machine-learning analyses. These relationships can limit the detection capabilities of many analytical methodologies when predicting outcomes including risk stratification in biomedical survival analyses. Feature Inclusion Bin Evolver for Risk Stratification (FIBERS) was previously proposed using an evolutionary algorithm to discover groups (i.e. bins) of features wherein the burden of feature values automatically determined the risk strata of a given instance in right-censored survival analysis. A key limitation of FIBERS is that it assumes a fixed threshold for feature burden in stratifying high vs. low risk, which restricts the flexibility of bin discovery. In the present work, we extend FIBERS to include different strategies for adaptive burden thresholding such that feature bins are discovered alongside the threshold that best separates risk strata. Preliminary comparative performance evaluation was conducted across simulated datasets with different underlying ideal burden thresholds yielding performance improvements over the original FIBERS algorithm. This algorithmic feasibility study lays the groundwork for ongoing application to the real-world problem of kidney graft failure risk stratification in dealing with the expected population heterogeneity including differences in race, ethnicity, and sex.
Harsh Bandhey, Sphia Sadek, Malek Kamoun, Ryan Urbanowicz
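A minimal sketch of the adaptive thresholding idea: a bin's burden is the count of risk-associated feature values an individual carries, and rather than fixing the high/low-risk cut-off, candidate thresholds are scanned and the one giving the largest separation between strata is kept. The separation measure below is a crude stand-in for the right-censored survival statistics FIBERS actually optimizes.

```python
import numpy as np

def best_burden_threshold(feature_matrix, bin_features, events):
    """Scan candidate thresholds on the bin's burden and return the one that
    maximises the difference in event rates between the two strata."""
    burden = feature_matrix[:, bin_features].sum(axis=1)
    best_t, best_sep = None, -1.0
    for t in np.unique(burden)[:-1]:          # thresholds keeping both strata non-empty
        high, low = events[burden > t], events[burden <= t]
        sep = abs(high.mean() - low.mean())
        if sep > best_sep:
            best_t, best_sep = t, sep
    return best_t, best_sep

rng = np.random.default_rng(4)
X = rng.integers(0, 2, (100, 10))             # binary feature states
y = rng.integers(0, 2, 100)                   # event indicator (censoring ignored here)
print(best_burden_threshold(X, bin_features=[0, 3, 7], events=y))
```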
An Evolutionary Deep Learning Approach for Efficient Quantum Algorithms Transpilation
Abstract
Gate-based quantum computation describes algorithms as quantum circuits. These can be seen as a set of quantum gates acting on a set of qubits. To be executable, the circuit requires complex transformations to comply with the physical constraints of the machines. This process is known as transpilation, where qubits’ layout initialisation is one of its first and most challenging steps, usually done by considering the device error properties. As the size of the quantum algorithm increases, the transpilation becomes increasingly complex and time-consuming. This constitutes a bottleneck towards agile, fast, and error-robust quantum computation. This work proposes an evolutionary deep neural network that learns the qubits’ layout initialisation of the most advanced and complex IBM heuristic used in today’s quantum machines. The aim is to progressively replace weakly scalable transpilation heuristics with machine learning models. Previous work using machine learning models for qubits’ layout initialisation suffers from some shortcomings in the proposal’s correctness and generalisation as well as benchmark diversity, utility, and availability. The present work solves those flaws by (I) devising a complete Machine Learning pipeline including the ETL component and the evolutionary deep neural model using the linkage learning algorithm P3, (II) a modelling applicable to any quantum algorithm with special interest in both optimisation and machine learning algorithms, (III) diverse and fresh benchmarks using calibration data of four real IBM quantum computers collected over 10 months (Dec. 2022 to Oct. 2023) and a training dataset built using four types of quantum optimisation and machine learning algorithms, as well as random ones. The proposal has been proven to be more efficient and simpler than state-of-the-art deep neural models in the literature.
Zakaria Abdelmoiz Dahi, Francisco Chicano, Gabriel Luque
Measuring Similarities in Model Structure of Metaheuristic Rule Set Learners
Abstract
We present a way to measure similarity between sets of rules for regression tasks. This was identified to be an important but missing tool to investigate Metaheuristic Rule Set Learners (MRSLs), a class of algorithms that utilize metaheuristics such as Genetic Algorithms to solve learning tasks: The commonly used predictive performance-based metrics such as mean absolute error do not capture most users’ actual preferences when they choose these kinds of models since they typically aim for model interpretability (i.e., low number of rules, meaningful rule placement, etc.) and not low error alone. Our similarity measure is based on a form of metaheuristic-agnostic edit distance. It is meant to be used, in conjunction with a certain class of benchmark problems, for analysing and improving an as-of-yet underresearched part of MRSL algorithms: the metaheuristic that optimizes the model’s structure (i.e., the set of rule conditions). We discuss the measure’s most important properties and demonstrate its applicability by performing experiments on the best-known MRSL, XCSF, comparing it with two non-metaheuristic Rule Set Learners, Decision Trees and Random Forests.
David Pätzel, Richard Nordsieck, Jörg Hähner

Machine Learning and AI in Digital Healthcare and Personalized Medicine

Frontmatter
Incremental Growth on Compositional Pattern Producing Networks Based Optimization of Biohybrid Actuators
Abstract
One of the training methods for Artificial Neural Networks is Neuroevolution (NE), i.e., the application of Evolutionary Optimization to the architecture and weights of networks to fit the target behaviour. In order to provide competitive results, three key concepts of NE methods require more attention, i.e., the crossover operator, the niching capacity and the incremental growth of the solutions’ complexity. Here we study an appropriate implementation of incremental growth for an application of NE on Compositional Pattern Producing Networks (CPPNs) that encode the morphologies of biohybrid actuators. The target for these actuators is to enable the efficient angular movement of a drug-delivering catheter in order to reach difficult areas in the human body. As a result, the methods presented here can be a part of a modular software pipeline that will enable the automatic design of Biohybrid Machines (BHMs) for a variety of applications. The proposed initialization with minimal complexity of these networks resulted in faster computation for the predefined computational budget in terms of number of generations, notwithstanding that the emerged champions achieved fitness values similar to those that emerged from the baseline method. Here, fitness was defined as the maximum deflection of the biohybrid actuator from its initial position after 10 s of simulated time on an open-source physics simulator. Since the implementation of niching was already employed in the existing baseline version of the methodology, future work will focus on the application of crossover operators.
Michail-Antisthenis Tsompanas

Problem Landscape Analysis for Efficient Optimization

Frontmatter
Hilbert Curves for Efficient Exploratory Landscape Analysis Neighbourhood Sampling
Abstract
Landscape analysis aims to characterise optimisation problems based on their objective (or fitness) function landscape properties. The problem search space is typically sampled, and various landscape features are estimated based on the samples. One particularly salient set of features is information content, which requires the samples to be sequences of neighbouring solutions, such that the local relationships between consecutive sample points are preserved. Generating such spatially correlated samples that also provide good search space coverage is challenging. It is therefore common to first obtain an unordered sample with good search space coverage, and then apply an ordering algorithm such as the nearest neighbour to minimise the distance between consecutive points in the sample. However, the nearest neighbour algorithm becomes computationally prohibitive in higher dimensions, thus there is a need for more efficient alternatives. In this study, Hilbert space-filling curves are proposed as a method to efficiently obtain high-quality ordered samples. Hilbert curves are a special case of fractal curves, and guarantee uniform coverage of a bounded search space while providing a spatially correlated sample. We study the effectiveness of Hilbert curves as samplers, and discover that they are capable of extracting salient features at a fraction of the computational cost compared to Latin hypercube sampling with post-factum ordering. Further, we investigate the use of Hilbert curves as an ordering strategy, and find that they order the sample significantly faster than the nearest neighbour ordering, without sacrificing the saliency of the extracted features.
Johannes J. Pienaar, Anna S. Boman, Katherine M. Malan
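A minimal sketch of how a 2-D Hilbert curve yields an ordered, spatially correlated sample: the classic index-to-coordinate conversion walks the curve, so consecutive indices are guaranteed to be neighbouring cells, which is exactly the property information-content features need. This is the textbook 2-D construction; the paper's sampler generalises the idea to higher-dimensional search spaces.

```python
def hilbert_d2xy(order, d):
    """Map curve index d to (x, y) on a 2^order x 2^order Hilbert curve."""
    x = y = 0
    t = d
    s = 1
    n = 1 << order
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Ordered sample over [0, 1]^2: consecutive points are neighbouring grid cells.
order = 4                                 # 16 x 16 grid, 256 sample points
n = 1 << order
sample = [((xi + 0.5) / n, (yi + 0.5) / n)
          for xi, yi in (hilbert_d2xy(order, d) for d in range(n * n))]
```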
Predicting Algorithm Performance in Constrained Multiobjective Optimization: A Tough Nut to Crack
Abstract
Predicting algorithm performance is crucial for selecting the best performing algorithm for a given optimization problem. While some research on this topic has been done for single-objective optimization, it is still largely unexplored for constrained multiobjective optimization. In this work, we study two methodologies as candidates for predicting algorithm performance on 2D constrained multiobjective optimization problems. The first one consists of using state-of-the-art exploratory landscape analysis (ELA) features, designed specifically for constrained multiobjective optimization, as input to classical machine learning methods, and applying the resulting models to predict the performance classes. As an alternative methodology, we analyze an end-to-end deep neural network trained to predict algorithm performance from a suitable problem representation, without relying on ELA features. The experimental results obtained on benchmark problems with three multiobjective optimizers show that neither of the two methodologies is capable of substantially outperforming a dummy classifier. This suggests that, with the current benchmark problems and ELA features, predicting algorithm performance in constrained multiobjective optimization remains a challenge.
Andrejaana Andova, Jordan N. Cork, Aljoša Vodopija, Tea Tušar, Bogdan Filipič
On the Latent Structure of the bbob-biobj Test Suite
Abstract
Landscape analysis is a popular method for the characterization of black-box optimization problems. It consists of a sequence of operations that, from a limited sample of solutions, approximate and describe the hypersurfaces formed by characteristic problem properties. The hypersurfaces, called problem landscapes, are described by sets of carefully crafted features that ought to capture their characteristic properties. In this way, arbitrary optimization problems with potentially very different technical parameters, such as search space dimensionality, are projected into specific feature spaces where they can be further studied. The representation of a problem in a feature space can be used, for example, to find similar problems and identify metaheuristic optimization algorithms that have the best track record on the same type of tasks. Because of that, the quality and properties of problem representation in the feature spaces gain importance. In this work, we study the representation properties of the popular bbob-biobj test suite in the space of bi-objective features, analyze the structure naturally emerging in the feature space, and analyze the high-level properties of the projection. The obtained results clearly demonstrate the discrepancies between the latent structure of the test suite and its expert perception.
Pavel Krömer, Vojtěch Uher, Tea Tušar, Bogdan Filipič

Soft Computing Applied to Games

Frontmatter
Strategies for Evolving Diverse and Effective Behaviours in Pursuit Domains
Abstract
Gamer engagement with computer opponents is an important aspect of computer games. Players will be bored if computer opponents are predictable, and the game will be monotonous. Computer opponents that are both challenging and exhibit interesting and novel behaviours are ideal. This research explores different strategies that encourage diverse emergent behaviours for evolved intelligent agents, while maintaining good performance with the task at hand. We consider the pursuit domain, which consists of a single predator agent and twenty prey agents. The predator’s controller is evolved through genetic programming, while the preys’ controllers are hand-crafted. The fitness of a solution is calculated as the number of prey captured. Inspired by Lehman and Stanley’s novelty search strategy, the fitness is combined with a diversity score, determined by combining four rudimentary behaviour measurements. We combine these basic scores using the many objective optimization strategy known as “sum of ranks”, which is proven to effectively balance a high number of conflicting objectives in optimization problems. We also examine different population diversity strategies, as well as different weighting schemes for combining fitness and diversity scores. After producing sets of solutions for the above experiments, we manually tabulate higher-level emergent behaviour observed in the evolved predators. The use of K-nearest neighbours (K=32) with population archive, combined with a fitness:diversity weighting of 50:50, gave the best results, as it effectively balanced good fitness performance and diverse emergent behaviour.
Tyler Cowan, Brian J. Ross
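A minimal sketch of the "sum of ranks" combination mentioned above: each basic behaviour measurement is ranked across the population and the ranks are summed, so no single measurement's scale dominates; the fitness:diversity weighting would then be applied on top. The tie-breaking and orientation choices are illustrative assumptions.

```python
import numpy as np

def sum_of_ranks(scores, maximise=True):
    """Rank each column (one measurement/objective) across the population and
    sum the ranks per individual; a lower summed rank is better overall."""
    s = np.asarray(scores, dtype=float)
    if maximise:
        s = -s                              # rank 0 then goes to the best value
    ranks = np.argsort(np.argsort(s, axis=0), axis=0)
    return ranks.sum(axis=1)

# Four rudimentary behaviour measurements for a population of five predators.
measurements = np.random.default_rng(5).random((5, 4))
print(sum_of_ranks(measurements))
```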
Using Evolution and Deep Learning to Generate Diverse Intelligent Agents
Abstract
Emergent behaviour arises from the interactions between individual components of a system, rather than being explicitly programmed or designed. The evolution of interesting emergent behaviour in intelligent agents is important when evolving non-playable characters in video games. Here, we use genetic programming (GP) to evolve intelligent agents in a predator-prey simulation. A main goal is to evolve predator agents that exhibit interesting and diverse behaviours. First, we train a convolutional neural network (CNN) to recognize “generic” prey behaviour, as recorded by an image trace of a predator’s movement. A training set for 6 generic behaviours was used to train the CNN. A training accuracy of 98% was obtained, and a validation performance of 90%. Experiments were then performed that merge the CNN with GP fitness. In one experiment, the CNN’s classification values are used as a “diversity score” which, when weighted with the fitness score, allow both agent quality and diversity to be considered. In another experiment, we use the CNN classification score to encourage the evolution of one of the known classes of behaviours. Results were that this trained behaviour was indeed more frequently evolved, compared to GP runs using fitness alone. One conclusion is that machine learning techniques are a powerful tool for the automated generation of diverse, high-quality intelligent agents.
Marshall Joseph, Brian J. Ross
Vision Transformers for Computer Go
Abstract
Motivated by transformers’ success in diverse fields like language understanding and image analysis, our investigation explores their potential in the game of Go. Specifically, we focus on analyzing Transformers in Vision. Through a comprehensive examination of factors like prediction accuracy, win rates, memory, speed, size, and learning rate, we underscore the significant impact transformers can make in the game of Go. Notably, our findings reveal that transformers outperform the previous state-of-the-art models, demonstrating superior performance metrics. This comparative study was conducted against conventional Residual Networks.
Amani Sagri, Tristan Cazenave, Jérôme Arjonilla, Abdallah Saffidine

Surrogate-Assisted Evolutionary Optimisation

Frontmatter
Integrating Bayesian and Evolutionary Approaches for Multi-objective Optimisation
Abstract
Both Multi-Objective Evolutionary Algorithms (MOEAs) and Multi-Objective Bayesian Optimisation (MOBO) are designed to address challenges posed by multi-objective optimisation problems. MOBO offers the distinct advantage of managing computationally or financially expensive evaluations by constructing Bayesian models based on the dataset. MOBO employs an acquisition function to strike a balance between convergence and diversity, facilitating the selection of an appropriate decision vector. MOEAs, similarly focused on achieving convergence and diversity, employ a selection criterion. This paper contributes to the field of multi-objective optimisation by constructing Bayesian models on the selection criterion of decomposition-based MOEAs within the framework of MOBO. The modelling process incorporates both mono and multi-surrogate approaches. The findings underscore the efficacy of MOEA selection criteria in the MOBO context, particularly when adopting the multi-surrogate approach. Evaluation results on both real-world and benchmark problems demonstrate the superiority of the multi-surrogate approach over its mono-surrogate counterpart for a given selection criterion. This study emphasises the significance of bridging the gap between these two optimisation fields and leveraging their respective strengths.
Tinkle Chugh, Alex Evans
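A minimal sketch of the acquisition step that MOBO relies on: given a surrogate's predicted mean and standard deviation at a candidate decision vector, expected improvement trades off predicted quality against uncertainty. The single-objective, minimisation form is shown for clarity; the paper instead builds Bayesian models on the selection criteria of decomposition-based MOEAs.

```python
import math

def expected_improvement(mean, std, best_so_far, xi=0.01):
    """Expected improvement for a minimisation problem under a Gaussian prediction."""
    if std <= 0.0:
        return max(best_so_far - mean - xi, 0.0)
    z = (best_so_far - mean - xi) / std
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (best_so_far - mean - xi) * cdf + std * pdf

# High uncertainty can make a mediocre mean prediction worth evaluating.
print(expected_improvement(mean=1.2, std=0.5, best_so_far=1.0))
print(expected_improvement(mean=1.2, std=0.01, best_so_far=1.0))
```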
Backmatter
Metadata
Title
Applications of Evolutionary Computation
Edited by
Stephen Smith
João Correia
Christian Cintrano
Copyright Year
2024
Electronic ISBN
978-3-031-56855-8
Print ISBN
978-3-031-56854-1
DOI
https://doi.org/10.1007/978-3-031-56855-8