2024 | Book

Handbook of Evolutionary Machine Learning

Edited by: Wolfgang Banzhaf, Penousal Machado, Mengjie Zhang

Publisher: Springer Nature Singapore

Book series: Genetic and Evolutionary Computation

About this book

This book, written by leading international researchers of evolutionary approaches to machine learning, explores the various ways evolution can address machine learning problems and improve current machine learning methods. Topics in this book are organized into five parts. The first part introduces fundamental concepts and overviews of evolutionary approaches to the three classes of learning employed in machine learning. The second addresses the use of evolutionary computation as a machine learning technique, describing methodological improvements for evolutionary clustering, classification, regression, and ensemble learning. The third part explores the connection between evolution and neural networks, in particular the connection to deep learning, generative and adversarial models, as well as the exciting potential of evolution with large language models. The fourth part focuses on the use of evolutionary computation for supporting machine learning methods, including methodological developments for evolutionary data preparation, model parametrization, design, and validation. The final part covers several chapters on applications in medicine, robotics, science, finance, and other disciplines. Readers will find reviews of application areas and can discover large-scale, real-world applications of evolutionary machine learning across a variety of problem domains. This book will serve as an essential reference for researchers, postgraduate students, practitioners in industry, and all those interested in evolutionary approaches to machine learning.

Table of Contents

Frontmatter

Evolutionary Machine Learning Basics

Frontmatter
Chapter 1. Fundamentals of Evolutionary Machine Learning
Abstract
In this opening chapter, we give an overview of the rapidly developing field of evolutionary machine learning. We first motivate the field and define how we understand evolutionary machine learning. We then look at its roots, finding that it has quite a long history, going back to the 1950s. We introduce a taxonomy of the field, discuss the major branches of evolutionary machine learning, and conclude by outlining open problems.
Wolfgang Banzhaf, Penousal Machado
Chapter 2. Evolutionary Supervised Machine Learning
Abstract
This chapter provides an overview of evolutionary approaches to supervised learning. It starts with the definition and scope of the opportunity, and then reviews three main areas: evolving general neural network designs, evolving solutions that are explainable, and forming a synergy of evolutionary and gradient-based methods.
Risto Miikkulainen
Chapter 3. EML for Unsupervised Learning
Abstract
This chapter introduces the use of Evolutionary Machine Learning (EML) techniques for unsupervised machine learning tasks. First, a brief introduction to the main concepts related to unsupervised Machine Learning (ML) is presented. Then, an overview of the main EML approaches to these tasks is given, together with a discussion of the main achievements and current challenges in addressing them. We focus on four commonly encountered unsupervised learning tasks: data preparation, outlier detection, dimensionality reduction, and association rule mining. Finally, we present a number of findings from the review that can guide the reader when applying EML techniques to unsupervised ML tasks or when developing new EML approaches.
Roberto Santana
Chapter 4. Evolutionary Computation and the Reinforcement Learning Problem
Abstract
Evolution by natural selection has built a vast array of highly efficient lifelong learning organisms, as evidenced by the spectacular diversity of species that rapidly adapt to environmental change and acquire new problem-solving skills through experience. Reinforcement Learning (RL) is a machine learning problem in which an agent must learn how to map situations to actions in an unknown world in order to maximise the sum of future rewards. There are no labelled examples of situation → action mappings to learn from, and we assume that no model of environment dynamics is available. As such, learning requires active trial-and-error interaction with the world. Evolutionary Reinforcement Learning (EvoRL), the application of evolutionary computation in RL, models this search process at multiple time scales: individual learning during the lifetime of an agent (i.e., operant conditioning) and population-wide learning through natural selection. Both modes of adaptation are wildly creative and fundamental to natural systems. This chapter discusses how EvoRL addresses some critical challenges in RL, including the computational cost of extended interactions, the temporal credit assignment problem, partial observability of state, non-stationary and multi-task environments, transfer learning, and hierarchical problem decomposition. In each case, the unique potential of EvoRL is highlighted in parallel with open challenges and research opportunities.
Stephen Kelly, Jory Schossau
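As a concrete illustration of the population-level adaptation the abstract describes, the following is a minimal sketch (not taken from the chapter) of evolutionary reinforcement learning on a toy control task: candidate policies are judged solely by the episode return they collect through trial-and-error interaction, with no labels, gradients, or environment model.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_return(weights, steps=50):
    """Toy episodic task: push a 2-D state toward the origin.
    Reward is the negative squared distance, accumulated over the episode."""
    state, total = rng.normal(size=2), 0.0
    for _ in range(steps):
        action = np.tanh(weights @ state)          # linear policy with squashing
        state = state - 0.1 * action + 0.01 * rng.normal(size=2)
        total += -float(state @ state)             # reward: stay near the origin
    return total

# (mu + lambda) evolution strategy over the policy parameters: selection acts
# on whole lifetimes (episode returns), mirroring population-wide adaptation.
mu, lam, sigma = 5, 20, 0.1
population = [rng.normal(size=(2, 2)) for _ in range(mu)]
for generation in range(30):
    offspring = [parent + sigma * rng.normal(size=parent.shape)
                 for parent in population for _ in range(lam // mu)]
    ranked = sorted(population + offspring, key=episode_return, reverse=True)
    population = ranked[:mu]                       # keep the best-returning policies
print("best return:", round(episode_return(population[0]), 2))
```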

Evolutionary Computation as Machine Learning

Frontmatter
Chapter 5. Evolutionary Regression and Modelling
Abstract
Regression and modelling, which identify the relationship between the dependent and independent variables, play an important role in knowledge discovery from data. Symbolic regression goes a step further by learning explicitly symbolic models from data that are potentially interpretable. This chapter provides an overview of evolutionary computation techniques for regression and modelling including coefficient learning and symbolic regression. We introduce the ideas behind various evolutionary computation methods for regression and present a review of the efforts on enhancing learning capability, generalisation, interpretability and imputation of missing data in evolutionary computation for regression.
Qi Chen, Bing Xue, Will Browne, Mengjie Zhang
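To make the idea of learning explicit symbolic models concrete, here is a self-contained toy sketch of symbolic regression with genetic programming (mutation-only for brevity, and not code from the chapter): expression trees over {+, -, *}, the variable x, and two constants are evolved until the best tree reproduces data generated from y = x² + x.

```python
import random, operator
random.seed(1)

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMINALS = ['x', 1.0, 2.0]

def random_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def mutate(tree, depth=2):
    # Replace a random subtree with a freshly generated one.
    if not isinstance(tree, tuple) or random.random() < 0.2:
        return random_tree(depth)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left, depth), right)
    return (op, left, mutate(right, depth))

# Target relationship the GP must rediscover from data: y = x**2 + x
xs = [i / 10 for i in range(-20, 21)]
ys = [x * x + x for x in xs]
error = lambda t: sum((evaluate(t, x) - y) ** 2 for x, y in zip(xs, ys))

population = [random_tree() for _ in range(200)]
for _ in range(40):
    population.sort(key=error)                  # lower squared error is better
    parents = population[:50]                   # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(150)]

best = min(population, key=error)
print("best tree:", best, "error:", round(error(best), 4))
```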
Chapter 6. Evolutionary Clustering and Community Detection
Abstract
This chapter provides a formal definition of the problem of cluster analysis, and the related problem of community detection in graphs. Building on the mathematical definition of these problems, we motivate the use of evolutionary computation in this setting. We then review previous work on this topic, highlighting key approaches regarding the choice of representation and objective functions, as well as regarding the final process of model selection. Finally, we discuss successful applications of evolutionary clustering and the steps we consider necessary to encourage the uptake of these techniques in mainstream machine learning.
Julia Handl, Mario Garza-Fabre, Adán José-García
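The role of representation and objective functions mentioned above can be illustrated with a minimal sketch (not from the chapter) of evolutionary clustering using a centroid-based representation: each individual encodes k cluster centres, and fitness is the within-cluster sum of squared errors it induces. Published evolutionary clustering work explores much richer representations and (often multi-objective) criteria.

```python
import numpy as np
rng = np.random.default_rng(0)

# Three Gaussian blobs as toy data.
data = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ((0, 0), (3, 0), (0, 3))])
k = 3

def fitness(centroids):
    # Distance of every point to every centroid; assign to the nearest one.
    d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.sum(d.min(axis=1) ** 2))        # within-cluster SSE (minimise)

population = [rng.uniform(-1, 4, size=(k, 2)) for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness)
    parents = population[:5]                         # truncation selection
    population = parents + [p + rng.normal(0, 0.2, size=p.shape)
                            for p in parents for _ in range(3)]

best = min(population, key=fitness)
print("best centroids:\n", np.round(best, 2), "\nSSE:", round(fitness(best), 2))
```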
Chapter 7. Evolutionary Classification
Abstract
Classification is a supervised machine learning process that categorises an instance based on a number of features. The process of classification involves several stages, including data preprocessing (such as feature selection and feature construction), model training, and evaluation. Evolutionary computation has been widely applied to all these stages to improve the performance and explainability of the built classification models; the term for this research area is Evolutionary Classification. This chapter introduces the fundamental concepts of evolutionary classification, followed by the key ideas of using evolutionary computation techniques to address existing classification challenges such as multi-class classification, unbalanced data, explainable/interpretable classifiers, and transfer learning.
Bach Nguyen, Bing Xue, Will Browne, Mengjie Zhang
Chapter 8. Evolutionary Ensemble Learning
Abstract
Evolutionary Ensemble Learning (EEL) provides a general approach for scaling evolutionary learning algorithms to increasingly complex tasks. This is generally achieved by developing a diverse complement of models that provide solutions to different (yet overlapping) aspects of the task. This chapter reviews the topic of EEL by considering two basic application contexts that were initially developed independently: (1) ensembles as applied to classification and regression problems and (2) multi-agent systems as typically applied to reinforcement learning tasks. We show that common research themes have developed from the two communities, resulting in outcomes applicable to both application contexts. More recent developments reviewed include EEL frameworks that support variable-sized ensembles, scaling to high cardinality or dimensionality, and operation under dynamic environments. Looking to the future we point out that the versatility of EEL can lead to developments that support interpretable solutions and lifelong/continuous learning.
Malcolm I. Heywood

Evolution and Neural Networks

Frontmatter
Chapter 9. Evolutionary Neural Network Architecture Search
Abstract
Deep Neural Networks (DNNs) have been remarkably successful in numerous machine learning scenarios. However, DNN architectures are typically designed manually, which relies heavily on domain knowledge of, and experience with, neural networks. Neural architecture search (NAS) methods are often considered an effective way to achieve automated design of DNN architectures. There are three approaches to realizing NAS: reinforcement learning approaches, gradient-based approaches, and evolutionary computation approaches. Among them, evolutionary computation-based NAS (ENAS) has received much attention. This chapter will detail ENAS in terms of four aspects. First, we will present an overall introduction to NAS and the commonly used approaches to it. Following that, we will introduce the core components of ENAS and discuss how to design an ENAS algorithm, with a focus on the search space, the search strategy, and the performance evaluation. Moreover, detailed implementations of these components will be presented to help readers implement an ENAS algorithm step by step. We will then discuss state-of-the-art ENAS methods in terms of the three core components. Finally, we will outline five major challenges and identify corresponding future directions.
Zeqiong Lv, Xiaotian Song, Yuqi Feng, Yuwei Ou, Yanan Sun, Mengjie Zhang
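The three core components named above can be seen in miniature in the following sketch (an illustrative toy, not code from the chapter): the search space is an encoding of hidden-layer widths, the search strategy is a small mutation-based evolutionary loop, and performance evaluation is the validation accuracy of the decoded network, with scikit-learn's MLPClassifier standing in for a deep network trainer.

```python
import random
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

random.seed(0)
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

WIDTHS = [4, 8, 16, 32]                        # search space: 1-3 layers, 4 widths

def random_arch():
    return [random.choice(WIDTHS) for _ in range(random.randint(1, 3))]

def mutate(arch):
    arch = list(arch)
    if len(arch) < 3 and random.random() < 0.3:
        arch.append(random.choice(WIDTHS))                          # add a layer
    else:
        arch[random.randrange(len(arch))] = random.choice(WIDTHS)   # resize one layer
    return arch

def fitness(arch):
    net = MLPClassifier(hidden_layer_sizes=tuple(arch), max_iter=300, random_state=0)
    net.fit(X_tr, y_tr)
    return net.score(X_val, y_val)             # performance evaluation on held-out data

population = [random_arch() for _ in range(6)]
for gen in range(5):                           # search strategy: small (mu + lambda) loop
    ranked = sorted(population, key=fitness, reverse=True)
    population = ranked[:3] + [mutate(random.choice(ranked[:3])) for _ in range(3)]

best = max(population, key=fitness)
print("best architecture:", best, "validation accuracy:", round(fitness(best), 3))
```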
Chapter 10. Evolutionary Generative Models
Abstract
In the last decade, generative models have seen widespread use for their ability to generate diverse artefacts in an increasingly simple way. Historically, the use of evolutionary computation as a generative approach was dominant; more recently, as a consequence of the rise in popularity of, and the amount of research being conducted in, artificial intelligence, the application of evolutionary computation to generative models has broadened its scope to encompass more complex machine learning approaches. It is therefore opportune to propose a term capable of accommodating all these models under the same umbrella. To address this, we propose the term evolutionary generative models to refer to generative approaches that employ any type of evolutionary algorithm, whether applied on its own or in conjunction with other methods. In particular, we present a literature review on this topic, identifying the main properties of evolutionary generative models and grouping them into four categories: evolutionary computation without machine learning, evolutionary computation aided by machine learning, machine learning aided by evolutionary computation, and machine learning evolved by evolutionary computation. We then systematically analyse a selection of prominent works concerning evolutionary generative models. We conclude by addressing the most relevant challenges and open problems faced by current evolutionary generative models and discussing where the topic’s future is headed.
João Correia, Francisco Baeta, Tiago Martins
Chapter 11. Evolution Through Large Models
Abstract
This chapter pursues the insight that large language models (LLMs) trained to generate code can vastly improve the effectiveness of mutation operators applied to programs in genetic programming (GP). Because such LLMs benefit from training data that includes sequential changes and modifications, they can approximate likely changes that humans would make. To highlight the breadth of implications of such evolution through large models (ELM), in the main experiment ELM combined with MAP-Elites generates hundreds of thousands of functional examples of Python programs that output working ambulating robots in the Sodarace domain, which the original LLM had never seen in pretraining. These examples then help to bootstrap training a new conditional language model that can output the right walker for a particular terrain. The ability to bootstrap new models that can output appropriate artifacts for a given context in a domain where zero training data was previously available carries implications for open-endedness, deep learning, and reinforcement learning. These implications are explored here in depth in the hope of inspiring new directions of research now opened up by ELM.
Joel Lehman, Jonathan Gordon, Shawn Jain, Kamal Ndousse, Cathy Yeh, Kenneth O. Stanley
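The loop described above, an LLM acting as the mutation operator inside MAP-Elites, can be sketched as follows. This is not code from the chapter or from the ELM system; llm_mutate is a hypothetical placeholder for a call to a code-trained language model, and the evaluation function is a toy stand-in for running a generated Sodarace walker.

```python
import random
random.seed(0)

def llm_mutate(genome):
    """Placeholder for an LLM 'intelligent mutation' of a program/genome.
    In ELM this would prompt a code model with the parent program; here it
    just perturbs a numeric genome so the loop is runnable."""
    return [g + random.gauss(0, 0.3) for g in genome]

def evaluate(genome):
    """Return (fitness, behaviour descriptor). In ELM: run the generated
    walker and measure its performance and behavioural characteristics."""
    fitness = -sum(g * g for g in genome)                     # toy objective
    descriptor = (round(genome[0]), round(genome[1]))          # toy behaviour bins
    return fitness, descriptor

archive = {}                                   # MAP-Elites grid: cell -> (fitness, genome)
for _ in range(50):                            # seed the archive with random solutions
    g = [random.uniform(-3, 3) for _ in range(2)]
    f, d = evaluate(g)
    if d not in archive or f > archive[d][0]:
        archive[d] = (f, g)

for _ in range(2000):                          # main MAP-Elites loop
    parent = random.choice(list(archive.values()))[1]          # uniform parent selection
    child = llm_mutate(parent)
    f, d = evaluate(child)
    if d not in archive or f > archive[d][0]:                  # keep the elite per cell
        archive[d] = (f, child)

print(len(archive), "cells filled; best fitness:",
      round(max(f for f, _ in archive.values()), 3))
```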
Chapter 12. Hardware-Aware Evolutionary Approaches to Deep Neural Networks
Abstract
This chapter gives an overview of evolutionary algorithm (EA)-based methods applied to the design of efficient implementations of deep neural networks (DNNs). We introduce various hardware acceleration platforms for DNNs developed especially for energy-efficient computing in edge devices. In addition to evolutionary optimization of their particular components or settings, we will describe neural architecture search (NAS) methods adopted to directly design highly optimized DNN architectures for a given hardware platform. Techniques that co-optimize hardware platforms and neural network architectures to obtain the best accuracy-energy trade-offs will be emphasized. Case studies will primarily be devoted to NAS for image classification. Finally, the open challenges of this popular research area will be discussed.
Lukas Sekanina, Vojtech Mrazek, Michal Pinos
Chapter 13. Adversarial Evolutionary Learning with Distributed Spatial Coevolution
Abstract
Adversarial Evolutionary Learning (AEL) is concerned with competing adversaries that are adapting over time. This competition can be defined as a minimization–maximization problem. Different methods exist to model the search for solutions to this problem, such as the Competitive Coevolutionary Algorithm, Multi-agent Reinforcement Learning, Adversarial Machine Learning, and Evolutionary Game Theory. This chapter introduces an overview of AEL. We focus on spatially distributed competitive coevolution for adversarial evolutionary learning to deal with the Generative Adversarial Networks (GANs) training challenges. A population of multiple individual solutions, parameterized artificial neural networks (ANN), provides diversity to the gradient-based GAN learning and increases the robustness of the GAN training. The computational complexity is reduced by using a spatial topology that decreases the number of evaluations and facilitates scalability. In addition, the topology enables diverse hyper-parameters, objectives, search operators, and data. We present a design and an implementation of an AEL system with spatial competitive coevolution and gradient-based adversarial learning. We demonstrate how the increase in diversity improves the performance of generative learning tasks on image data. Moreover, the distributed population in AEL can help overcome some hardware limitations for ANN architectures.
Jamal Toutouh, Erik Hemberg, Una-May O’Reilly

Evolutionary Computation for Machine Learning

Frontmatter
Chapter 14. Genetic Programming as an Innovation Engine for Automated Machine Learning: The Tree-Based Pipeline Optimization Tool (TPOT)
Abstract
One of the central challenges of machine learning is the selection of methods for feature selection, feature engineering, and classification or regression algorithms for building an analytics pipeline. This is true for both novices and experts. Automated machine learning (AutoML) has emerged as a useful approach to generate machine learning pipelines without the need for manual construction and evaluation. We review here some challenges of building pipelines and present several of the first and most widely used AutoML methods and open-source software. We present in detail the Tree-based Pipeline Optimization Tool (TPOT) that represents pipelines as expression trees and uses genetic programming (GP) for discovery and optimization. We present some of the extensions of TPOT and its application to real-world big data. We end with some thoughts about the future of AutoML and evolutionary machine learning.
Jason H. Moore, Pedro H. Ribeiro, Nicholas Matsumoto, Anil K. Saini
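Since TPOT is available as open-source software, the pipeline-as-expression-tree idea can be tried directly. The following is a brief usage sketch assuming the classic TPOT Python API; parameter values are illustrative, and the current interface should be checked against the TPOT documentation.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=20,
                      verbosity=2, random_state=42)
tpot.fit(X_tr, y_tr)                    # GP search over preprocessing + model pipelines
print(tpot.score(X_te, y_te))           # hold-out accuracy of the best pipeline found
tpot.export('best_pipeline.py')         # write the evolved pipeline as scikit-learn code
```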
Chapter 15. Evolutionary Model Validation—An Adversarial Robustness Perspective
Abstract
When building Machine Learning models, either manually or automatically, we need to make sure that they are able to solve the task at hand and generalize, i.e., perform well on unseen data. By properly validating a model and estimating its generalization performance, not only do we get a clearer idea of how it behaves but we might also identify problems (e.g., overfitting) before they lead to significant losses in a production environment. Model validation is usually focused on predictive performance, but with models being applied in safety-critical areas, robustness should also be taken into consideration. In this context, a robust model produces correct outputs even when presented with data that somehow deviates from the one used for training, including adversarial examples. These are samples to which small perturbations are added in order to purposely fool the model. There are, however, limited studies on the robustness of models designed by evolution. In this chapter, we address this gap in the literature by performing adversarial attacks and evaluating the models created by two prominent NeuroEvolution methods (DENSER and NSGA-Net). The results confirm that, despite achieving competitive results in standard settings where only predictive accuracy is analyzed, the evolved models are vulnerable to adversarial examples. This highlights the need to also address model validation from an adversarial robustness perspective.
Inês Valentim, Nuno Lourenço, Nuno Antunes
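The abstract's notion of an adversarial example, a small perturbation added in order to purposely fool the model, is captured by the fast gradient sign method (FGSM). The sketch below is not the chapter's experimental setup, which attacks evolved deep networks such as those produced by DENSER and NSGA-Net; it applies FGSM to a hand-rolled logistic regression so the gradient with respect to the input is explicit.

```python
import numpy as np
rng = np.random.default_rng(0)

# Train a tiny logistic regression on two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, size=(100, 2)), rng.normal(1, 1, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)
w, b = np.zeros(2), 0.0
for _ in range(500):                                   # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * float(np.mean(p - y))

def predict(x):
    return int(x @ w + b > 0)

# Pick a correctly classified point close to the decision boundary.
z = X @ w + b
correct = (z > 0) == (y == 1)
idx = int(np.argmin(np.abs(z) + 1e9 * ~correct))
x, label = X[idx], y[idx]

# FGSM: step the input in the direction that increases the loss.
eps = 0.3
p = 1 / (1 + np.exp(-(x @ w + b)))
grad_x = (p - label) * w                               # d(cross-entropy)/d(input)
x_adv = x + eps * np.sign(grad_x)
print("clean prediction:", predict(x), "true label:", int(label),
      "adversarial prediction:", predict(x_adv))
```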
Chapter 16. Evolutionary Approaches to Explainable Machine Learning
Abstract
Machine learning models are increasingly being used in critical sectors, but their black-box nature has raised concerns about accountability and trust. The field of explainable artificial intelligence (XAI) or explainable machine learning (XML) has emerged in response to the need for human understanding of these models. Evolutionary computing, as a family of powerful optimization and learning tools, has significant potential to contribute to XAI/XML. In this chapter, we provide a brief introduction to XAI/XML and review various techniques in current use for explaining machine learning models. We then focus on how evolutionary computing can be used in XAI/XML, and review some approaches which incorporate EC techniques. We also discuss some open challenges in XAI/XML and opportunities for future research in this field using EC. Our aim is to demonstrate that evolutionary computing is well-suited for addressing current problems in explainability and to encourage further exploration of these methods to contribute to the development of more transparent, trustworthy, and accountable machine learning models.
Ryan Zhou, Ting Hu
Chapter 17. Evolutionary Algorithms for Fair Machine Learning
Abstract
At present, supervised machine learning algorithms are ubiquitously used to learn predictive models that have a major impact on people’s lives. However, the vast majority of such algorithms were developed to optimise predictive accuracy only, ignoring the issue of fairness in the predictions of the learned models. This often leads to unfair predictive models, since real-world data usually contains bias or prejudices against certain groups of individuals (e.g. some gender or race). Hence, an increasingly important research area involves fairness-aware machine learning algorithms, i.e. algorithms that optimise both the predictive accuracy and the fairness of their learned predictive models, from a multi-objective optimisation perspective. In this chapter, we review fairness-aware Evolutionary Algorithms (EAs) for supervised machine learning. We first briefly provide some background concepts on fairness measures and multi-objective optimisation approaches. Then, we review six EAs for fairness-aware machine learning, which are in general based on multi-objective optimisation principles. The reviewed EAs address a variety of supervised machine learning tasks, namely: three EAs address a data pre-processing task for classification (one addressing feature construction and two addressing feature selection); one EA optimises the hyper-parameters of a base classification algorithm; one EA evolves an ensemble of artificial neural network models; and one EA finds fair counterfactuals. We conclude with a summary of the main findings of this review and some suggested future research directions.
Alex Freitas, James Brookhouse
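The multi-objective perspective described above can be made concrete with a small sketch (not from the chapter) of how one candidate model might be scored on two objectives: predictive accuracy and a fairness measure, here the demographic-parity gap between two groups. The labels, group memberships, and predictions below are synthetic placeholders.

```python
import numpy as np
rng = np.random.default_rng(0)

y_true = rng.integers(0, 2, size=200)         # ground-truth labels
group = rng.integers(0, 2, size=200)          # protected attribute (two groups)
y_pred = rng.integers(0, 2, size=200)         # predictions of one candidate model

def objectives(y_true, y_pred, group):
    accuracy = float(np.mean(y_pred == y_true))
    rate_a = float(np.mean(y_pred[group == 0]))        # positive-prediction rate, group 0
    rate_b = float(np.mean(y_pred[group == 1]))        # positive-prediction rate, group 1
    unfairness = abs(rate_a - rate_b)                  # demographic-parity gap
    return accuracy, unfairness                        # maximise the 1st, minimise the 2nd

print(objectives(y_true, y_pred, group))
# A multi-objective EA (e.g. NSGA-II-style selection) would evolve candidate
# models and keep the Pareto front of (accuracy, unfairness) trade-offs.
```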

Applications of Evolutionary Machine Learning

Frontmatter
Chapter 18. Evolutionary Machine Learning in Science and Engineering
Abstract
Evolutionary machine learning (EML) has been increasingly applied to diverse science and engineering problems, owing to the global search, optimization, and multi-objective optimization capabilities of evolutionary algorithms and the strong capability of machine learning (ML), and especially deep neural network models, to model complex functions and processes. EML methods are widely used to solve modeling, prediction, control, and pattern detection problems. In particular, EML algorithms are used for solving inverse design problems, ranging from neural network architecture search and inverse materials design to control system design and the discovery of differential equations.
Jianjun Hu, Yuqi Song, Sadman Sadeed Omee, Lai Wei, Rongzhi Dong, Siddharth Gianey
Chapter 19. Evolutionary Machine Learning in Environmental Science
Abstract
This chapter reviews the use of Evolutionary Machine Learning (EML) in environmental science. We cover the various steps of the machine learning pipeline, also addressing topics like model robustness, interpretability, and human-competitiveness. Environmental science is an interdisciplinary field mainly dedicated to climate change, natural resource management, conservation biology, and sustainability. We review applications such as forest monitoring, optimization of photovoltaic installations, improvement of traffic flow, and reduction of waste in animal farms, among others.
João E. Batista, Sara Silva
Chapter 20. Evolutionary Machine Learning in Medicine
Abstract
This chapter reviews applications of evolutionary machine learning within the medical domain. It is divided into three parts. The first two parts give examples of recent work in two important and representative diseases, cancer and COVID-19, showing how evolutionary methods can be applied to diverse tasks in diagnosis, epidemiological modelling, and the design of drug interventions and treatment plans. The third part presents a case study of our own work within the area of Parkinson’s disease, demonstrating how an evolutionary machine learning approach has been successfully translated and applied within clinical settings.
Michael A. Lones, Stephen L. Smith
Chapter 21. Evolutionary Machine Learning for Space
Abstract
The Venn diagram of evolutionary computation, machine learning and space applications shows some intriguing overlaps. As evolutionary algorithms are often resource-intensive, they have not yet been applied in space. Nevertheless, it has been decisively demonstrated that evolutionary machine learning (EML) is a valuable tool for space, specifically in fields such as trajectory optimisation, optimal control and neuroevolution for robot control, where high-dimensional, discontinuous, sparse and/or non-linear problems abound. In the following chapter, we introduce common problems faced by the space research and application community, together with EML techniques used for generating robust, performant and, sometimes indeed, state-of-the-art solutions. The often complex mathematics behind some problems (especially in trajectory optimisation and optimal control) has been simplified to the minimum necessary to convey the essence of the challenge without encumbering the overview of the relevant EML algorithms. We hope that this chapter provides useful information to both the EML and the space communities in the form of algorithms, benchmarks and standing challenges.
Moritz von Looz, Alexander Hadjiivanov, Emmanuel Blazquez
Chapter 22. Evolutionary Machine Learning in Control
Abstract
This chapter aims to give an overview of recent applications of Evolutionary Machine Learning (EML) to control, including opportunities and challenges. Control is at the heart of engineering applications; examples include regulation, stabilization, reference tracking, synchronization, and coordination. Yet, control design for complex systems may be challenged by high dimensionality, nonlinearities, and delayed responses. A new path for control design is to reformulate the control problem as a regression problem in order to leverage powerful Machine Learning (ML) methods. In particular, bio-inspired ML methods are well suited to control tasks thanks to easy deployment, interpretability, and the need for little or no prior knowledge of the system to be controlled. Hence, since the 1950s, EML methods have been successful in optimizing intelligent controllers for many control tasks, including adaptive, multi-objective, and robust control for robotics, electrical engineering, and fluid mechanics, to cite a few examples.
Guy Y. Cornejo Maceda, Bernd R. Noack
Chapter 23. Evolutionary Machine Learning in Robotics
Abstract
In this chapter, we survey the most significant applications of EML to robotics. We first highlight the salient characteristics of the field in terms of what can be optimized and with what aims and constraints. We then survey the large literature concerning the optimization, by means of evolutionary computation, of artificial neural networks, traditionally considered a form of machine learning, used for controlling robots: to ease comprehension, we categorize the various approaches along different axes, such as the robotic task, the representation of the solutions, and the evolutionary algorithm employed. We then survey the many uses of evolutionary computation for optimizing the morphology of robots, including those that tackle the challenging task of optimizing the morphology and the controller at the same time. Finally, we discuss the reality gap problem, which consists of a potential mismatch between the quality of solutions found in simulation and their quality observed in reality.
Eric Medvet, Giorgia Nadizar, Federico Pigozzi, Erica Salvato
Chapter 24. Evolutionary Machine Learning in Finance
Abstract
One way to measure the impact of our field of research on finance is to analyse the adoption of evolutionary machine learning in the finance literature. In this study, we focus on articles appearing in the top-ranked journals in finance. A number of interesting observations are made, including that evolutionary machine learning appears to be increasingly adopted across a growing and diverse set of topics.
Michael O’Neill, Anthony Brabazon
Chapter 25. Evolutionary Machine Learning and Games
Abstract
Evolutionary machine learning (EML) has been applied to games in multiple ways, and for multiple different purposes. Importantly, AI research in games is not only about playing games; it is also about generating game content, modeling players, and many other applications. Many of these applications pose interesting problems for EML. We will structure this chapter on EML for games based on whether evolution is used to augment machine learning (ML) or ML is used to augment evolution. For completeness, we also briefly discuss the usage of ML and evolution separately in games.
Julian Togelius, Ahmed Khalifa, Sam Earle, Michael Cerny Green, Lisa Soros
Chapter 26. Evolutionary Machine Learning in the Arts
Abstract
This chapter looks at artistic and creative applications of evolutionary machine learning. While both evolutionary computing and machine learning techniques have been applied to all kinds of creative and artistic projects, it is rarer to see them used in combination. The chapter will examine the origins and uses of evolution in the arts, before presenting a case study of an evolutionary machine learning artwork. The discussion presents the technical, conceptual, and creative aspects of developing the artwork. The chapter concludes with a discussion of the rise of generative AI and how evolution might contribute to the next wave of artistic possibilities for evolutionary machine learning.
Jon McCormack
Backmatter
Metadata
Title
Handbook of Evolutionary Machine Learning
Edited by
Wolfgang Banzhaf
Penousal Machado
Mengjie Zhang
Copyright year
2024
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-99-3814-8
Print ISBN
978-981-99-3813-1
DOI
https://doi.org/10.1007/978-981-99-3814-8
