
2006 | Book

Towards a New Evolutionary Computation

Advances in the Estimation of Distribution Algorithms

Edited by: Jose A. Lozano, Pedro Larrañaga, Iñaki Inza, Endika Bengoetxea

Publisher: Springer Berlin Heidelberg

Book series: Studies in Fuzziness and Soft Computing


About this book

Estimation of Distribution Algorithms (EDAs) are a set of algorithms in the Evolutionary Computation (EC) field characterized by the use of explicit probability distributions in optimization. In contrast to other EC techniques, such as the broadly known Genetic Algorithms (GAs), in EDAs the crossover and mutation operators are replaced by sampling from a distribution previously learnt from the selected individuals. EDAs have undergone a rapid development that has transformed them into an established discipline within the EC field.

This book aims to attract the interest of new researchers in the EC field as well as in other optimization disciplines, and to become a reference for all those working on this topic. The twelve chapters of this book can be divided into those that endeavor to set a sound theoretical basis for EDAs, those that broaden the methodology of EDAs, and finally those that have an applied objective.
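The generic EDA loop described above (select good individuals, learn an explicit distribution from them, sample the next population) can be sketched in a few lines of Python. The UMDA-style univariate model and the OneMax toy problem below are illustrative choices for this sketch, not an algorithm taken from the book:

```python
import random

def umda_onemax(n=30, pop_size=100, sel=50, gens=50, seed=1):
    """Minimal univariate EDA (UMDA-style) on OneMax: learn a marginal
    probability per bit from the selected individuals, then sample the
    next population from those marginals."""
    rng = random.Random(seed)
    p = [0.5] * n                                   # uninformative initial marginals
    best = 0
    for _ in range(gens):
        # Sampling step: draw each bit independently from its marginal.
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)             # OneMax fitness = number of ones
        best = max(best, sum(pop[0]))
        selected = pop[:sel]
        # Learning step: estimate each bit's frequency in the selected set.
        p = [sum(ind[i] for ind in selected) / sel for i in range(n)]
        # Clamp probabilities away from 0/1 to delay premature convergence.
        p = [min(max(pi, 1 / n), 1 - 1 / n) for pi in p]
    return best

print(umda_onemax())
```

The clamping in the last step is one common safeguard; real EDAs replace the univariate product model with richer factorizations, which is precisely the methodological axis explored in the chapters below.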

Table of Contents

Frontmatter
Linking Entropy to Estimation of Distribution Algorithms
Summary
This chapter presents results on the application of the concept of entropy to estimation of distribution algorithms (EDAs). Firstly, the Boltzmann mutual information curves are introduced. They are shown to contain a great deal of information about the difficulty of the functions. Next, a method for designing discrete benchmark functions is presented. The newly developed approach allows the construction of both single and random classes of functions that obey a given collection of probabilistic constraints. This application and the next, the construction of low-cost search distributions, are based on the principle of maximum entropy. The last proposal is the linear entropic mutation (LEM), an approach that measures the amount of mutation applied to a variable as the increase of its entropy. We argue that LEM is a natural operator for EDAs because it mutates distributions instead of single individuals.
Alberto Ochoa, Marta Soto
Entropy-based Convergence Measurement in Discrete Estimation of Distribution Algorithms
Summary
This chapter presents an entropy-based convergence measurement applicable to Estimation of Distribution Algorithms. Based on the measured entropy, the point in time at which the generation of new solutions becomes ineffective can be detected. The proposed termination criterion is inherent to the complexity of the probabilistic models used and automatically postpones termination if inappropriate models are used.
Jiri Ocenasek
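As a simplified illustration of this idea (using the plain entropy of univariate marginals rather than the chapter's model-based measurement), the average per-variable entropy of a binary population can serve as the termination signal: once it drops near zero, sampling the model would mostly regenerate existing solutions.

```python
import math

def marginal_entropy(population):
    """Average Shannon entropy (in bits) of the per-variable marginals
    of a binary population. A univariate stand-in for the chapter's
    model-based entropy measurement."""
    n = len(population[0])
    m = len(population)
    total = 0.0
    for i in range(n):
        p = sum(ind[i] for ind in population) / m
        for q in (p, 1 - p):
            if q > 0:                     # 0 * log(0) is taken as 0
                total -= q * math.log2(q)
    return total / n

def should_terminate(population, threshold=0.05):
    """Stop once population entropy falls below a small threshold."""
    return marginal_entropy(population) < threshold

converged = [[1, 0, 1]] * 20              # all individuals identical
diverse = [[0, 0, 0], [1, 1, 1], [0, 1, 0], [1, 0, 1]]
print(should_terminate(converged))        # True
print(should_terminate(diverse))          # False
```

The threshold value here is a hypothetical knob; the chapter's criterion instead derives the measurement from the complexity of the probabilistic model actually used.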
Real-coded Bayesian Optimization Algorithm
Summary
This chapter describes a real-coded (i.e., continuous) Estimation of Distribution Algorithm (EDA) that solves real-valued (i.e., numerical) optimization problems of bounded difficulty quickly, accurately, and reliably. This is the real-coded Bayesian Optimization Algorithm (rBOA). The objective is to bring the power of (discrete) BOA to bear upon the area of real-valued optimization. That is, the rBOA must properly decompose a problem and effectively perform Probabilistic Building-Block Crossover (PBBC) for real-valued multivariate data. In other words, a unique feature of rBOA is to learn complex dependencies of variables and make use of mixture models at the level of substructures. To begin with, a Bayesian factorization is performed. The resulting factorization that contains linkage information is then utilized for finding implicit subproblems (i.e., substructures). Mixture models are employed for independently fitting each of these substructures. Subsequently, an independent substructure-wise sampling draws the offspring. Experimental studies show that the rBOA finds, with a sub-quadratic scale-up behavior for (additively) decomposable problems, a solution that is superior in quality to that found by advanced real-coded EDAs regardless of inherent problem characteristics. Moreover, comparable or better performance is achieved for nondecomposable problems.
Chang Wook Ahn, R. S. Ramakrishna, David E. Goldberg
The CMA Evolution Strategy: A Comparing Review
Summary
Derived from the concept of self-adaptation in evolution strategies, the CMA (Covariance Matrix Adaptation) adapts the covariance matrix of a multivariate normal search distribution. The CMA was originally designed to perform well with small populations. In this review, the argument starts out with large population sizes, reflecting recent extensions of the CMA algorithm. Commonalities and differences with continuous Estimation of Distribution Algorithms are analyzed. The aspects of reliability of the estimation, overall step size control, and independence from the coordinate system (invariance) become particularly important with small population sizes. Consequently, performing the adaptation task with small populations is more intricate.
Nikolaus Hansen
Estimation of Distribution Programming: EDA-based Approach to Program Generation
Summary
We describe a framework for program evolution with an EDA-based approach. In this framework, the probability distribution of programs is estimated using a Bayesian network, and individuals are generated based on the estimated distribution. Since the dependency relationships between nodes in a program tree are explicit, i.e. the dependency between a parent node and its child node in a program expressed as a tree structure is strong, we have chosen a Bayesian network as the distribution model for programs.
In order to demonstrate the effectiveness of our approach, this chapter shows the results of comparative experiments with Genetic Programming. Thereafter, we discuss how Estimation of Distribution Programming works and the transitions of the evolved programs, which are the forte of our method. We also analyze the performance of a hybrid system which combines Estimation of Distribution Programming and Genetic Programming.
Kohsuke Yanai, Hitoshi Iba
Multi-objective Optimization with the Naive $$\mathbb{M}$$ID$$\mathbb{E}$$A
Summary
EDAs have been shown to perform well on a wide variety of single-objective optimization problems, for binary and real-valued variables. In this chapter we look into the extension of the EDA paradigm to multi-objective optimization. To this end, we focus the chapter around the introduction of a simple, but effective, EDA for multi-objective optimization: the naive \(\mathbb{M}\)ID\(\mathbb{E}\)A (mixture-based multi-objective iterated density-estimation evolutionary algorithm). The probabilistic model in this specific algorithm is a mixture distribution. Each component in the mixture is a univariate factorization. As will be shown in this chapter, mixture distributions allow for wide-spread exploration of a multi-objective front, whereas most operators focus on a specific part of the multi-objective front. This wide-spread exploration aids the important preservation of diversity in multi-objective optimization. To further improve and maintain the diversity that is obtained by the mixture distribution, a specialized diversity-preserving selection operator is used in the naive \(\mathbb{M}\)ID\(\mathbb{E}\)A. We verify the effectiveness of the naive \(\mathbb{M}\)ID\(\mathbb{E}\)A in two different problem domains and compare it with two other well-known efficient multi-objective evolutionary algorithms (MOEAs).
Peter A.N. Bosman, Dirk Thierens
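The model family this chapter builds on, a mixture whose components are univariate factorizations, can be sketched as follows for binary variables. This is an illustrative sketch of the model class only, not the naive MIDEA algorithm itself; component marginals and weights here are made-up inputs:

```python
import random

def sample_mixture(components, weights, rng=random):
    """Draw one binary individual from a mixture of univariate
    factorizations: pick a component with probability proportional to
    its weight, then sample each variable independently from that
    component's marginal probabilities."""
    r = rng.random() * sum(weights)
    acc = 0.0
    for comp, w in zip(components, weights):
        acc += w
        if r <= acc:
            chosen = comp                 # component selected by roulette wheel
            break
    return [1 if rng.random() < p else 0 for p in chosen]

# Two hypothetical components, each covering a different region of the front.
components = [[1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 0.0]]
print(sample_mixture(components, [1.0, 0.0]))   # always samples component 0
```

Because each component can be fit to a different cluster of selected solutions, sampling the mixture spreads offspring across the whole front, which is the diversity mechanism the chapter exploits.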
A Parallel Island Model for Estimation of Distribution Algorithms
Summary
In this work we address the parallelization of the kind of Evolutionary Algorithms (EAs) known as Estimation of Distribution Algorithms (EDAs). After an initial discussion on the types of potentially parallel schemes for EDAs, we proceed to design a distributed island version (dEDA), aimed at improving the numerical efficiency of the sequential algorithm in terms of the number of evaluations. After evaluating such a dEDA on several well-known discrete and continuous test problems, we conclude that our model clearly outperforms existing centralized approaches from a numerical point of view, as well as speeding up the search considerably, thanks to its suitability for physical parallelism.
Julio Madera, Enrique Alba, Alberto Ochoa
GA-EDA: A New Hybrid Cooperative Search Evolutionary Algorithm
Summary
Hybrid metaheuristics have received considerable interest in recent years, and a wide variety of hybrid approaches have been proposed in the literature. In this paper a new hybrid approach, named GA-EDA, is presented. This new hybrid algorithm is based on genetic and estimation of distribution algorithms. The original objective is to benefit from both approaches and attempt to achieve improved results in exploring the search space. In order to evaluate this new approach, a selection of synthetic optimization problems has been proposed, together with some real-world cases. Experimental results show the competitiveness of our new approach.
Victor Robles, Jose M. Peña, Pedro Larrañaga, María S. Pérez, Vanessa Herves
Bayesian Classifiers in Optimization: An EDA-like Approach
Summary
This chapter introduces a new Evolutionary Computation method which applies Bayesian classifiers in the construction of a probabilistic graphical model that will be used to evolve a population of individuals. As a testbed, we use the satisfiability problem (SAT), a central problem in the theory of computation and a representative example of NP-complete problems. We verify the performance of this new method on the SAT problem, compare three different solution representations suggested in the literature, and finally apply local search methods to this problem.
Teresa Miquélez, Endika Bengoetxea, Pedro Larrañaga
Feature Ranking Using an EDA-based Wrapper Approach
Summary
Feature subset selection is an important pre-processing step for classification. A more general framework of feature selection is feature ranking. A feature ranking provides an ordered list of the features, sorted according to their relevance. Using such a ranking provides a better overview of the feature elimination process, and allows the human expert to gain more insight into the processes underlying the data. In this chapter, we describe a technique to derive a feature ranking directly from the estimated distribution of an EDA. As an example, we apply the method to the biological problem of acceptor splice site prediction, demonstrating the advantages for knowledge discovery in biological datasets with many features.
Yvan Saeys, Sven Degroeve, Yves Van de Peer
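The core idea of deriving a ranking from the estimated distribution can be illustrated with a univariate model over inclusion bits: a feature's final marginal is the probability that good subsets include it, so sorting by that marginal yields the ranking. This is a simplified stand-in for the chapter's wrapper approach, and the feature names and marginals below are hypothetical:

```python
def rank_features(marginals, names):
    """Rank features by their final marginal selection probability in
    an EDA's estimated distribution over feature-inclusion bits:
    features that good subsets almost always include come first."""
    scored = sorted(zip(names, marginals), key=lambda t: t[1], reverse=True)
    return [name for name, _ in scored]

# Hypothetical final marginals after running a univariate EDA wrapper.
marginals = [0.92, 0.15, 0.55, 0.88]
names = ["f1", "f2", "f3", "f4"]
print(rank_features(marginals, names))   # ['f1', 'f4', 'f3', 'f2']
```

Unlike a hard subset, this ordered list lets a human expert inspect the full elimination process, which is the knowledge-discovery advantage the chapter emphasizes.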
Learning Linguistic Fuzzy Rules by Using Estimation of Distribution Algorithms as the Search Engine in the COR Methodology
Summary
Learning models from data which have the double ability of being predictive and descriptive at the same time is currently one of the major goals of machine learning and data mining. Linguistic (or descriptive) fuzzy rule-based systems possess a good tradeoff between the aforementioned features and thus have received increasing attention in the last few years.
In this chapter we propose the use of estimation of distribution algorithms (EDAs) to guide the search of a good linguistic fuzzy rule system. To do this, we integrate EDAs in a recent methodology (COR) which tries to take advantage of the cooperation among rules.
Experiments are carried out with univariate and bivariate EDAs over four test functions, and the results show that the exploitation of (pairwise) dependencies by bivariate EDAs yields better performance than univariate EDAs or genetic algorithms.
M. Julia Flores, José A. Gámez, José M. Puerta
Estimation of Distribution Algorithm with 2-opt Local Search for the Quadratic Assignment Problem
Summary
This chapter proposes a combination of an estimation of distribution algorithm (EDA) and the 2-opt local search algorithm (EDA/LS) for the quadratic assignment problem (QAP). In EDA/LS, a new operator, called guided mutation, is employed for generating new solutions. This operator uses both global statistical information collected from the previous search and the location information of solutions found so far. The 2-opt local search algorithm is applied to each new solution generated by guided mutation. A restart strategy based on statistical information is used when the search is trapped in a local area. Experimental results on a set of QAP test instances show that EDA/LS is comparable with the memetic algorithm of Merz and Freisleben and outperforms the estimation of distribution algorithm with guided local search (EDA/GLS). The proximate optimality principle on the QAP is verified experimentally to justify the rationale behind heuristics (including EDA/GLS) for the QAP.
Qingfu Zhang, Jianyong Sun, Edward Tsang, John Ford
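One way to picture an operator that mixes global statistics with a concrete solution, in the spirit of guided mutation, is the permutation sketch below: some positions are copied from a template solution, and the rest are sampled from a global item-to-position frequency matrix while repairing to keep a valid permutation. This is an illustrative reconstruction of the general idea, not the authors' exact operator; `beta` and the frequency matrix are assumed inputs:

```python
import random

def guided_mutation(template, prob, beta, rng=random):
    """Guided-mutation-style operator for permutations: each position is
    either copied from the template solution (with probability beta) or
    filled by sampling an unused item from the global frequency matrix
    prob[item][pos], so the result is always a valid permutation."""
    n = len(template)
    child = [None] * n
    used = set()
    # Exploitation: copy a random subset of positions from the template.
    for pos in range(n):
        if rng.random() < beta:
            child[pos] = template[pos]
            used.add(template[pos])
    # Exploration: fill the rest from the search's global statistics.
    for pos in range(n):
        if child[pos] is None:
            candidates = [i for i in range(n) if i not in used]
            weights = [prob[i][pos] + 1e-9 for i in candidates]
            r = rng.random() * sum(weights)
            acc = 0.0
            for item, w in zip(candidates, weights):
                acc += w
                if r <= acc:
                    child[pos] = item
                    used.add(item)
                    break
    return child

template = [2, 0, 3, 1, 4]
uniform = [[1.0] * 5 for _ in range(5)]   # hypothetical frequency matrix
print(guided_mutation(template, uniform, 0.5))
```

In EDA/LS each child produced this way would then be improved by 2-opt local search; larger `beta` biases the offspring toward the template, smaller `beta` toward the global model.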
Backmatter
Metadata
Title
Towards a New Evolutionary Computation
Edited by
Jose A. Lozano
Pedro Larrañaga
Iñaki Inza
Endika Bengoetxea
Copyright year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-32494-2
Print ISBN
978-3-540-29006-3
DOI
https://doi.org/10.1007/3-540-32494-1
