
About this Book

"Parallel Evolutionary Computation" focuses on aspects of parallelizing evolutionary computation, such as parallel genetic operators, parallel fitness evaluation, distributed genetic algorithms, and parallel hardware implementations, as well as on their impact on several applications.
The book is divided into four parts. The first part offers a clear software-oriented, algorithmic view of parallel evolutionary optimization. The second part is about hardware implementations of genetic algorithms, a valuable topic that is hard to find in the current literature. The third part treats the problem of distributed evolutionary computation and presents three interesting applications in which new parallel EC ideas are featured. Finally, the last part covers the up-to-date field of parallel particle swarm optimization, illustrating the intrinsic similarities and potential extensions of these techniques to this domain. The book offers a wide spectrum of sample works from leading research groups around the world on parallel implementations of efficient techniques at the heart of computational intelligence. It will be useful both for beginners and for experienced researchers in the field of computational intelligence.

Table of Contents

Frontmatter

Parallel Evolutionary Optimization

Frontmatter

A Model for Parallel Operators in Genetic Algorithms

Abstract
In this chapter we analyze a model for applying parallel operators in Genetic Algorithms. We focus on crossover and varying mutation applied separately and in parallel, emphasizing the gains in performance that can be achieved from the concurrent application of operators with different and complementary roles. We analyze the model in single-population Genetic Algorithms using deterministic, adaptive, and self-adaptive mutation rate controls and test its performance on a broad range of classes of 0/1 multiple knapsack problems. We compare the proposed model with the conventional one, where varying mutation is applied after crossover, presenting evidence that varying mutation parallel to crossover provides an efficient framework for achieving higher performance. We also show that the model is superior for online adaptation of parameters and contend that it is a better option for co-adaptation of parameters. In addition, we study the performance of parallel operators within distributed Genetic Algorithms, showing that including varying mutation parallel to crossover can considerably increase the convergence reliability and robustness of the algorithm while substantially reducing the communication costs due to migration.
Hernán Aguirre, Kiyoshi Tanaka
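The core idea of the model above, crossover-bred and mutation-bred offspring created side by side rather than in sequence, can be sketched in a few lines. The sketch below is an illustrative toy (OneMax instead of the chapter's 0/1 multiple knapsack problems, and a deterministic decreasing mutation-rate schedule chosen for the example), not the authors' implementation:

```python
import random

L = 20          # chromosome length (toy OneMax problem, not the book's knapsack)
POP = 40
GENS = 60

def fitness(ind):
    return sum(ind)                      # OneMax: count of 1-bits

def crossover_offspring(pop):
    """Offspring bred by one-point crossover only (no mutation afterwards)."""
    a, b = random.sample(pop, 2)
    cut = random.randrange(1, L)
    return a[:cut] + b[cut:]

def mutation_offspring(pop, pm):
    """Offspring bred by varying mutation only, applied in parallel to crossover."""
    parent = random.choice(pop)
    return [bit ^ (random.random() < pm) for bit in parent]

def step(pop, t):
    pm = max(1.0 / L, 0.5 * 0.9 ** t)    # deterministic decreasing rate (illustrative)
    half = POP // 2
    kids = [crossover_offspring(pop) for _ in range(half)] \
         + [mutation_offspring(pop, pm) for _ in range(POP - half)]
    return sorted(pop + kids, key=fitness, reverse=True)[:POP]   # elitist survival

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for t in range(GENS):
    pop = step(pop, t)
print(fitness(pop[0]))   # best fitness found
```

The key difference from a conventional GA is that no offspring receives crossover followed by mutation; each operator produces its own share of the children.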

Parallel Evolutionary Multiobjective Optimization

Abstract
Research on multiobjective optimization is currently very active because most real-world engineering optimization problems are multiobjective in nature. Multiobjective optimization does not seek a single unique solution, but a set of solutions collectively known as the Pareto front. Evolutionary algorithms (EAs) are especially well suited for solving such problems because they are able to find multiple trade-off solutions in a single run. However, these algorithms may be computationally expensive because (1) real-world optimization typically involves tasks demanding high computational resources, and (2) they aim at finding the whole front of optimal solutions instead of searching for a single optimum. Parallelizing EAs is a natural way of addressing this drawback. The first goal of this chapter is to provide the reader with a wide overview of the literature on parallel EAs for multiobjective optimization. We then include an experimental study in which we develop and analyze pPAES, a parallel EA for multiobjective optimization based on the Pareto Archived Evolution Strategy (PAES). The results show that pPAES is a promising option for solving multiobjective optimization problems.
Francisco Luna, Antonio J. Nebro, Enrique Alba
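PAES, on which pPAES is based, maintains an archive of nondominated solutions. A minimal sketch of the underlying Pareto-dominance test and archive update (for minimisation, omitting PAES's grid-based crowding) might look like this:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Insert candidate if no archived point dominates it; drop points it dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for point in [(3, 5), (4, 4), (2, 6), (5, 3), (3, 3)]:
    archive = update_archive(archive, point)
print(sorted(archive))   # → [(2, 6), (3, 3)]
```

In a parallel variant such as pPAES, concurrent search processes feed candidates into such archives, which makes the archive-update step a natural synchronization point.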

Parallel Hardware for Genetic Algorithms

Frontmatter

A Reconfigurable Parallel Hardware for Genetic Algorithms

Abstract
In this chapter, we propose a massively parallel architecture for a hardware implementation of genetic algorithms. The design is quite innovative, as it provides a viable solution to the fitness computation problem, which normally depends heavily on problem-specific knowledge. The proposed architecture is completely independent of such specifics: it implements the fitness computation using a neural network. The hardware implementation of this neural network is stochastic and thus minimises the required hardware area without much increase in response time. Last but not least, we demonstrate the characteristics of the proposed hardware and compare it to existing designs.
Nadia Nedjah, Luiza de Macedo Mourelle

Reconfigurable Computing and Parallelism for Implementing and Accelerating Evolutionary Algorithms

Abstract
Reconfigurable Computing is a technique for executing algorithms directly in hardware in order to accelerate them and increase their performance. Reconfigurable hardware consists of FPGA chips programmed to work as special-purpose coprocessors. The algorithms to be executed are programmed by means of hardware description languages and implemented in hardware using synthesis tools. Reconfigurable Computing is very useful for processing algorithms with a high computational cost, because an algorithm implemented in specific hardware achieves greater performance than one processed by a general-purpose conventional processor. Here, Reconfigurable Computing and parallel techniques are applied to a genetic algorithm for solving the traveling salesman problem and to a parallel evolutionary algorithm for time series prediction. The hardware implementation of these two problems allows a wide set of tools and techniques to be shown. In both cases satisfactory experimental performance has been obtained.
Miguel A. Vega Rodríguez, Juan A. Gómez Pulido, Juan M. Sánchez Pérez, José M. Granado Criado, Manuel Rubio del Solar

Distributed Evolutionary Computation

Frontmatter

Performance of Distributed GAs on DNA Fragment Assembly

Abstract
In this work, we present results on the behavior of a parallel distributed genetic algorithm over different LAN technologies. Our goal is to offer a study of the potential impact on the search mechanics when shifting between LANs. We address three LANs: a Fast Ethernet network, a Gigabit Ethernet network, and a Myrinet network. We also study the importance of several parameters of the migration policy. The whole analysis uses the DNA fragment assembly problem to show the actual power and utility of the proposed distributed technique.
Enrique Alba, Gabriel Luque
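The migration policy studied in such distributed GAs can be sketched with a toy island model: a ring of subpopulations that evolve independently and periodically exchange their best individuals. The simulation below runs on OneMax rather than the chapter's DNA fragment assembly problem, and the frequency/rate values are arbitrary examples:

```python
import random

ISLANDS, SIZE, L = 4, 20, 16
MIG_FREQ, MIG_RATE = 5, 2            # migrate every 5 generations, 2 migrants

def fitness(ind):
    return sum(ind)                   # OneMax stand-in for the real objective

def evolve(pop):
    """One generation of a toy elitist GA on one island."""
    kids = []
    for _ in range(SIZE):
        a, b = random.sample(pop, 2)
        cut = random.randrange(1, L)
        child = a[:cut] + b[cut:]
        if random.random() < 0.2:     # occasional single-bit mutation
            i = random.randrange(L)
            child[i] ^= 1
        kids.append(child)
    return sorted(pop + kids, key=fitness, reverse=True)[:SIZE]

def migrate(islands):
    """Ring topology: each island sends its MIG_RATE best individuals to the
    next island, where they replace the receiver's worst individuals."""
    migrants = [sorted(p, key=fitness, reverse=True)[:MIG_RATE] for p in islands]
    for i, p in enumerate(islands):
        incoming = migrants[(i - 1) % ISLANDS]
        islands[i] = sorted(p, key=fitness, reverse=True)[:SIZE - MIG_RATE] + incoming
    return islands

random.seed(0)
islands = [[[random.randint(0, 1) for _ in range(L)] for _ in range(SIZE)]
           for _ in range(ISLANDS)]
for gen in range(40):
    islands = [evolve(p) for p in islands]
    if gen % MIG_FREQ == MIG_FREQ - 1:
        islands = migrate(islands)
best = max(fitness(ind) for p in islands for ind in p)
print(best)
```

In the networked setting the chapter studies, each `migrate` call becomes actual message traffic, which is why the LAN technology and the migration frequency/rate interact.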

On Parallel Evolutionary Algorithms on the Computational Grid

Abstract
In this chapter, we analyze the major traditional parallel models of evolutionary algorithms. The analysis is a contribution to parallel evolutionary computation because, unlike previously published studies, it is placed within the context of grid computing. The objective is to revisit the parallel models in order to adapt them to grids, taking into account the characteristics of such execution environments in terms of volatility, heterogeneity, large scale, and multi-administrative domains. The proposed study is part of a methodological approach for the development of frameworks dedicated to the reusable design of parallel EAs transparently deployable on computational grids. We give an overview of such frameworks and present a case study of ParadisEO-CMW, a port of ParadisEO onto Condor and MW that allows transparent deployment of parallel EAs on grids.
N. Melab, E-G. Talbi, S. Cahon

Parallel Evolutionary Algorithms on Consumer-Level Graphics Processing Unit

Abstract
Evolutionary Algorithms (EAs) are effective and robust methods for solving many practical problems such as feature selection, electrical circuit synthesis, and data mining. However, they may execute for a long time on some difficult problems, because many fitness evaluations must be performed. A promising approach to overcoming this limitation is to parallelize these algorithms. In this chapter, we propose to implement a parallel EA on a consumer-level Graphics Processing Unit (GPU). We perform experiments to compare our parallel EA with an ordinary EA and demonstrate that the former is much more effective than the latter. Since consumer-level graphics processing units are already widely available, installed on ordinary personal computers, and easy to use and manage, more people will be able to use our parallel algorithm to solve the problems they encounter in real-world applications.
Tien-Tsin Wong, Man Leung Wong
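The key pattern behind a GPU-based parallel EA is data-parallel fitness evaluation: the same costly evaluation is mapped over the whole population at once. As a CPU stand-in for that pattern (the chapter's actual GPU code is not reproduced here), a master-slave sketch using a thread pool and a Rastrigin-style cost function chosen for illustration:

```python
from concurrent.futures import ThreadPoolExecutor
import math
import random

def expensive_fitness(x):
    """Stand-in for a costly evaluation; on a GPU this is the per-individual
    kernel. Negated Rastrigin, so larger is better (maximum 0 at the origin)."""
    return -sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def evaluate_population(pop, workers=4):
    """Master-slave model: fitness of all individuals computed concurrently.
    On a GPU the same map runs as one data-parallel pass over the population."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(expensive_fitness, pop))

random.seed(0)
pop = [[random.uniform(-5.12, 5.12) for _ in range(5)] for _ in range(32)]
scores = evaluate_population(pop)
print(max(scores))
```

The rest of the EA (selection, variation) stays on the host; only the embarrassingly parallel evaluation step is farmed out, which is exactly the part that dominates runtime when evaluations are expensive.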

Parallel Particle Swarm Optimization

Frontmatter

Intelligent Parallel Particle Swarm Optimization Algorithms

Abstract
Some social systems of natural species, such as flocks of birds and schools of fish, possess interesting collective behavior. In these systems, globally sophisticated behavior emerges from local, indirect communication amongst simple agents with only limited capabilities. In an attempt to simulate this flocking behavior by computer, Kennedy and Eberhart (1995) realized that an optimization problem can be formulated as a flock of birds flying across an area seeking a location with abundant food. This observation, together with some abstraction and modification techniques, led to the development of a novel optimization technique: particle swarm optimization.
Shu-Chuan Chu, Jeng-Shyang Pan
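The canonical particle swarm update that grew out of this bird-flocking metaphor can be sketched directly; the parameter values below (inertia w, acceleration coefficients c1 and c2) are common textbook defaults, not values taken from this chapter:

```python
import random

def pso_minimise(f, dim=2, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Canonical PSO: v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x); x += v."""
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in X]                 # each particle's best position
    pval = [f(x) for x in X]
    g = min(range(swarm), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]        # swarm-wide best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

sphere = lambda x: sum(xi * xi for xi in x)   # simple test function
best, val = pso_minimise(sphere)
print(round(val, 6))
```

The inner per-particle loop is independent across particles except for the shared `gbest`, which is what makes PSO a natural candidate for the parallelization strategies this part of the book explores.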

Parallel Ant Colony Optimization for 3D Protein Structure Prediction using the HP Lattice Model

Abstract
Protein structure prediction (also known as the protein folding problem) studies the way in which a protein will ‘fold’ into its natural state. Due to the enormous complexities involved in accurately predicting protein structures, many simplifications have been proposed. The Hydrophobic-Hydrophilic (HP) model is one such simplification. In this chapter we introduce a novel method of solving the HP protein folding problem in both two and three dimensions using Ant Colony Optimization and a distributed programming paradigm. Tests across a small number of processors indicate that the multiple colony distributed ACO (MACO) approach outperforms single colony implementations. Experimental results also demonstrate that the proposed algorithms perform well in terms of network scalability.
Daniel Chu, Albert Zomaya
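The pheromone mechanics underlying ACO can be illustrated on a small example. The sketch below is a generic single-colony ACO on a toy traveling-salesman instance, not the chapter's HP-lattice construction or its multi-colony distribution; all parameter values are illustrative:

```python
import random

# tau = pheromone trails, eta = 1/distance heuristic; 4-city distance matrix.
D = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
N = 4
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.5, 10.0   # trail/heuristic weights, evaporation, deposit

def tour_length(tour):
    return sum(D[tour[i]][tour[(i + 1) % N]] for i in range(N))

def build_tour(tau, rng):
    """Each ant extends its tour city by city, choosing probabilistically by
    pheromone strength and heuristic desirability."""
    tour = [rng.randrange(N)]
    while len(tour) < N:
        i = tour[-1]
        choices = [j for j in range(N) if j not in tour]
        weights = [(tau[i][j] ** ALPHA) * ((1.0 / D[i][j]) ** BETA) for j in choices]
        tour.append(rng.choices(choices, weights)[0])
    return tour

rng = random.Random(7)
tau = [[1.0] * N for _ in range(N)]
best, best_len = None, float("inf")
for _ in range(30):                                    # iterations
    tours = [build_tour(tau, rng) for _ in range(8)]   # 8 ants
    for i in range(N):                                 # evaporation
        for j in range(N):
            tau[i][j] *= (1 - RHO)
    for t in tours:                                    # deposit by tour quality
        length = tour_length(t)
        for k in range(N):
            a, b = t[k], t[(k + 1) % N]
            tau[a][b] += Q / length
            tau[b][a] += Q / length
        if length < best_len:
            best, best_len = t, length
print(best_len)
```

In a multi-colony distributed ACO such as the chapter's MACO, several colonies run this loop independently and periodically exchange pheromone information or best solutions over the network.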

Backmatter
