Applied Soft Computing

Volume 8, Issue 2, March 2008, Pages 849-857

A hybrid genetic algorithm and particle swarm optimization for multimodal functions

https://doi.org/10.1016/j.asoc.2007.07.002

Abstract

Heuristic optimization provides a robust and efficient approach for solving complex real-world problems. The focus of this research is a hybrid method combining two heuristic optimization techniques, genetic algorithms (GA) and particle swarm optimization (PSO), for the global optimization of multimodal functions. Denoted GA-PSO, this hybrid technique incorporates concepts from GA and PSO and creates individuals in a new generation not only by the crossover and mutation operations found in GA but also by the mechanisms of PSO. Experimental studies using a suite of 17 multimodal test functions taken from the literature demonstrate the superiority of the hybrid GA-PSO approach over four other search techniques in terms of solution quality and convergence rates.

Introduction

In the last two decades there has been a growing interest in evolutionary computing, which has inspired new ways of solving optimization problems. In contrast to traditional optimization methods, which emphasize accurate and exact computation but may fail to reach the global optimum, evolutionary computation provides a more robust and efficient approach for solving complex real-world problems [1], [2]. Among existing evolutionary algorithms, the best-known branch is the genetic algorithm (GA). GA is a stochastic search procedure based on the mechanics of natural selection, genetics and evolution [3]. Since this type of algorithm simultaneously evaluates many points in the search space, it is more likely to find the global solution of a given problem. In addition, it uses only a simple scalar performance measure that requires no derivative information, so GA-based methods are easy to use and implement.

More recently, based upon the interaction of individual entities called "particles," Kennedy and Eberhart [4], [5] proposed a new heuristic algorithm called particle swarm optimization (PSO). The development of this algorithm was inspired by observations of the social behavior of animals, such as bird flocking and fish schooling. In PSO, each particle flies through the multi-dimensional search space while its velocity and position are continually updated according to the best previous performance of the particle and of its neighbors, as well as the best performance found by the entire population. Compared with GA, PSO has some attractive characteristics. It has memory, so knowledge of good solutions is retained by all the particles, whereas in GA previous knowledge of the problem is discarded once the population changes. It also features constructive cooperation between particles; that is, particles in the swarm share information among themselves. To date, PSO has been successfully applied to optimizing various continuous nonlinear functions in practice [6].
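
The canonical PSO update underlying this description can be sketched as follows (in Python for illustration; the study itself was coded in Matlab). Each particle's velocity is pulled toward its own personal best and the swarm's global best before its position is advanced. The inertia weight w and acceleration coefficients c1 and c2 are generic illustrative settings, not parameter values reported in this paper.

    import numpy as np

    def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
        """One canonical velocity/position update for a swarm (minimal sketch).

        x, v  : (n_particles, n_dims) current positions and velocities
        pbest : (n_particles, n_dims) best position found so far by each particle
        gbest : (n_dims,) best position found so far by the whole swarm
        """
        r1 = np.random.rand(*x.shape)   # independent uniform random weights
        r2 = np.random.rand(*x.shape)
        v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        return x + v_new, v_new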

Hybridization of evolutionary algorithms with local search has been investigated in many studies [7], [8], [9]. Such a hybrid is often referred to as a memetic algorithm. In the case at hand, we combine two global optimization algorithms, GA and PSO; since both work with an initial population of solutions, combining the search abilities of the two methods is a natural approach. PSO operates on the basis of social interaction, and all individuals are taken into account in each generation. By contrast, GA simulates evolution, in which some individuals are selected while others are eliminated from generation to generation. Taking advantage of the complementary properties of GA and PSO, we propose a new algorithm that combines the evolutionary natures of both, denoted as GA-PSO. The robustness of GA-PSO is tested on a set of benchmark multimodal test functions collected from Siarry and Berthiau [10], and the results are compared extensively with those obtained by the continuous genetic algorithm, the continuous hybrid algorithm, the hybrid Nelder–Mead simplex search and particle swarm optimization, and the hybrid continuous tabu search and Nelder–Mead simplex algorithm.

Section snippets

Genetic algorithms (GA)

Genetic algorithms (GA) date back to Holland's work in the 1960s and were further developed by Goldberg [3]. GA is a randomized global search technique that solves problems by imitating processes observed in natural evolution. Based on the survival and reproduction of the fittest, GA continually explores new and better solutions without any pre-assumptions such as continuity or unimodality. GA has been successfully adopted in many complex optimization problems and shows its merits over
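
To make these mechanics concrete, the minimal sketch below (Python, for illustration) produces one generation of a real-coded GA for a minimization problem. The tournament selection, arithmetic (blend) crossover, and Gaussian mutation used here are generic operator choices, not necessarily the operators adopted in this paper.

    import numpy as np

    def ga_generation(pop, fitness, bounds, pc=0.9, pm=0.1, sigma=0.1):
        """Create one new generation for a real-coded GA (minimization sketch).

        pop     : (n, d) array of chromosomes
        fitness : (n,) objective values (lower is better)
        bounds  : (d, 2) array of [low, high] limits per variable
        pc, pm  : generic crossover and mutation probabilities
        """
        n, d = pop.shape
        new_pop = np.empty_like(pop)
        for i in range(n):
            # Binary tournament selection of two parents
            a, b = np.random.randint(n, size=2)
            p1 = pop[a] if fitness[a] < fitness[b] else pop[b]
            a, b = np.random.randint(n, size=2)
            p2 = pop[a] if fitness[a] < fitness[b] else pop[b]
            # Arithmetic (blend) crossover with probability pc
            child = p1.copy()
            if np.random.rand() < pc:
                alpha = np.random.rand(d)
                child = alpha * p1 + (1 - alpha) * p2
            # Gaussian mutation, scaled to the variable range and clipped to the domain
            mask = np.random.rand(d) < pm
            child[mask] += sigma * (bounds[mask, 1] - bounds[mask, 0]) * np.random.randn(mask.sum())
            new_pop[i] = np.clip(child, bounds[:, 0], bounds[:, 1])
        return new_pop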

Hybrid genetic algorithm and particle swarm optimization

This section discusses the structure and rationale of the hybrid algorithm. Fig. 1 depicts a schematic representation of the proposed hybrid GA-PSO. As can be seen, GA and PSO both work with the same initial population. When solving an N-dimensional problem, the hybrid approach starts with 4N randomly generated individuals. These individuals may be regarded as chromosomes in the case of GA, or as particles in the case of PSO. The 4N individuals are sorted by fitness, and the top 2N
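
Since this snippet is truncated, the loop below is only a hedged reconstruction of the structure it describes, written in Python for illustration. It assumes, consistent with the sorted 4N population above, that the better half is evolved by the GA operators while the worse half is adjusted by the PSO update, and that the two halves are recombined into the next generation; it reuses the ga_generation and pso_step sketches given earlier and should not be read as the authors' exact procedure.

    import numpy as np

    def hybrid_ga_pso(f, bounds, n_dims, max_gen=1000):
        """Hedged sketch of a GA-PSO style loop for minimizing f.

        Assumes a 4N population sorted by fitness, with the better 2N
        individuals refined by GA operators and the worse 2N moved by PSO.
        Requires the ga_generation and pso_step sketches defined earlier.
        """
        N = n_dims
        low, high = bounds[:, 0], bounds[:, 1]
        pop = low + np.random.rand(4 * N, N) * (high - low)   # 4N random individuals
        vel = np.zeros_like(pop)
        pbest, pbest_val = pop.copy(), np.apply_along_axis(f, 1, pop)

        for _ in range(max_gen):
            fit = np.apply_along_axis(f, 1, pop)
            order = np.argsort(fit)                           # best first (minimization)
            pop, vel, fit = pop[order], vel[order], fit[order]
            pbest, pbest_val = pbest[order], pbest_val[order]

            # Assumed split: better half refined by GA crossover and mutation
            elite = ga_generation(pop[:2 * N], fit[:2 * N], bounds)

            # Assumed split: worse half moved by PSO toward personal and global bests
            moved, vel[2 * N:] = pso_step(pop[2 * N:], vel[2 * N:], pbest[2 * N:], pop[0])
            moved = np.clip(moved, low, high)

            pop = np.vstack([elite, moved])

            # Personal-best bookkeeping used by the PSO half
            new_val = np.apply_along_axis(f, 1, pop)
            better = new_val < pbest_val
            pbest[better], pbest_val[better] = pop[better], new_val[better]

        return pop[np.argmin(np.apply_along_axis(f, 1, pop))]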

Computational experiments

This section focuses on the efficiency of GA-PSO as tested against 17 benchmark functions with 2–10 variables, which are listed in the appendix. To avoid attributing the optimization results to the choice of a particular initial population and to conduct fair comparisons, we perform each test 100 times, starting from various randomly selected points in the hyper-rectangular search domain given in the literature. The algorithm is coded in Matlab 7.0 and the simulations are run on a Pentium IV 2.4
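
A minimal harness for this protocol might look like the sketch below (Python, for illustration). Each trial restarts the optimizer from a fresh random population inside the hyper-rectangular domain, and the fraction of trials that reach the known optimum is recorded. The Rosenbrock-style objective and the simple absolute tolerance are stand-ins for illustration only; they are not the 17-function suite or the success criterion used in the paper.

    import numpy as np

    def run_trials(optimizer, f, bounds, n_trials=100, f_star=0.0, tol=1e-4):
        """Repeat an optimizer from random starts and report the success rate."""
        hits, best_vals = 0, []
        for _ in range(n_trials):
            x_best = optimizer(f, bounds, n_dims=bounds.shape[0])
            val = f(x_best)
            best_vals.append(val)
            hits += abs(val - f_star) < tol   # count trials that reach the optimum
        return hits / n_trials, np.mean(best_vals)

    # Illustrative stand-in objective and search domain (not from the paper's suite)
    rosen = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
    domain = np.array([[-5.0, 5.0], [-5.0, 5.0]])
    # success_rate, mean_best = run_trials(hybrid_ga_pso, rosen, domain)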

Conclusions

In this paper, GA-PSO has been proposed as a new approach for optimizing multimodal functions. GA-PSO integrates the concept of evolving individuals, originally modeled by GA, with the self-improvement concept of PSO, in which individuals enhance themselves through social interaction and their private cognition. GA-PSO thus synthesizes the merits of both GA and PSO and is a simple yet effective model for handling different kinds of continuous optimization problems. Simulated experiments

References (20)

There are more references available in the full text version of this article.
