Grey Wolf Optimizer

https://doi.org/10.1016/j.advengsoft.2013.12.007

Highlights

  • A new meta-heuristic called Grey Wolf Optimizer inspired by grey wolves is proposed.

  • The GWO algorithm is benchmarked on 29 well-known test functions.

  • The results on the unimodal functions show the superior exploitation of GWO.

  • The exploration ability of GWO is confirmed by the results on multimodal functions.

  • The results on semi-real and real problems confirm the performance of GWO in practice.

Abstract

This work proposes a new meta-heuristic called Grey Wolf Optimizer (GWO) inspired by grey wolves (Canis lupus). The GWO algorithm mimics the leadership hierarchy and hunting mechanism of grey wolves in nature. Four types of grey wolves, namely alpha, beta, delta, and omega, are employed for simulating the leadership hierarchy. In addition, the three main steps of hunting, searching for prey, encircling prey, and attacking prey, are implemented. The algorithm is then benchmarked on 29 well-known test functions, and the results are verified by a comparative study with Particle Swarm Optimization (PSO), Gravitational Search Algorithm (GSA), Differential Evolution (DE), Evolutionary Programming (EP), and Evolution Strategy (ES). The results show that the GWO algorithm is able to provide very competitive results compared to these well-known meta-heuristics. The paper also considers solving three classical engineering design problems (tension/compression spring, welded beam, and pressure vessel designs) and presents a real application of the proposed method in the field of optical engineering. The results on the classical engineering design problems and the real application show that the proposed algorithm is applicable to challenging problems with unknown search spaces.

Introduction

Meta-heuristic optimization techniques have become very popular over the last two decades. Some of them, such as Genetic Algorithm (GA) [1], Ant Colony Optimization (ACO) [2], and Particle Swarm Optimization (PSO) [3], are fairly well-known not only among computer scientists but also among scientists from other fields. In addition to the huge number of theoretical works, such optimization techniques have been applied in various fields of study. A natural question is why meta-heuristics have become so remarkably common. The answer can be summarized into four main reasons: simplicity, flexibility, derivation-free mechanism, and local optima avoidance.

First, meta-heuristics are fairly simple. They have been mostly inspired by very simple concepts. The inspirations are typically related to physical phenomena, animals’ behaviors, or evolutionary concepts. The simplicity allows computer scientists to simulate different natural concepts, propose new meta-heuristics, hybridize two or more meta-heuristics, or improve the current meta-heuristics. Moreover, the simplicity assists other scientists to learn meta-heuristics quickly and apply them to their problems.

Second, flexibility refers to the applicability of meta-heuristics to different problems without any special changes in the structure of the algorithm. Meta-heuristics are readily applicable to different problems since they mostly treat problems as black boxes. In other words, only the input(s) and output(s) of a system matter to a meta-heuristic. So, all a designer needs to know is how to represent his/her problem for meta-heuristics.

Third, the majority of meta-heuristics have derivation-free mechanisms. In contrast to gradient-based optimization approaches, meta-heuristics optimize problems stochastically. The optimization process starts with random solution(s), and there is no need to calculate the derivative of search spaces to find the optimum. This makes meta-heuristics highly suitable for real problems with expensive or unknown derivative information.

Finally, meta-heuristics have superior abilities to avoid local optima compared to conventional optimization techniques. This is due to the stochastic nature of meta-heuristics, which allows them to avoid stagnation in local solutions and to search the entire search space extensively. The search space of real problems is usually unknown and very complex, with a massive number of local optima, so meta-heuristics are good options for optimizing these challenging real problems.

The No Free Lunch (NFL) theorem [4] is worth mentioning here. This theorem logically proves that there is no meta-heuristic best suited for solving all optimization problems. In other words, a particular meta-heuristic may show very promising results on one set of problems but poor performance on a different set. Naturally, NFL keeps this field of study highly active, resulting in enhancements of current approaches and new meta-heuristics being proposed every year. This also motivates our attempt to develop a new meta-heuristic inspired by grey wolves.

Generally speaking, meta-heuristics can be divided into two main classes: single-solution-based and population-based. In the former class (Simulated Annealing [5] for instance) the search process starts with one candidate solution. This single candidate solution is then improved over the course of iterations. Population-based meta-heuristics, however, perform the optimization using a set of solutions (population). In this case the search process starts with a random initial population (multiple solutions), and this population is enhanced over the course of iterations. Population-based meta-heuristics have some advantages compared to single solution-based algorithms:

  • Multiple candidate solutions share information about the search space, which results in sudden jumps toward the promising part of the search space.

  • Multiple candidate solutions assist each other to avoid locally optimal solutions.

  • Population-based meta-heuristics generally have greater exploration compared to single solution-based algorithms.
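The population-based template described in this paragraph can be sketched in a few lines. The following is a deliberately minimal illustration, not any specific published algorithm: a random initial population, information sharing (every agent steps toward the best solution found so far), and iterative enhancement of the whole set.

```python
import random

def population_search(objective, dim, bounds, pop_size=30, iterations=100):
    """Minimal population-based search: candidate solutions share
    information by stochastically moving toward the best solution
    found so far, and the population improves over iterations."""
    low, high = bounds
    population = [[random.uniform(low, high) for _ in range(dim)]
                  for _ in range(pop_size)]
    best = min(population, key=objective)[:]
    for _ in range(iterations):
        for agent in population:
            for d in range(dim):
                # Information sharing: step toward the best-known solution.
                agent[d] += random.random() * (best[d] - agent[d])
                agent[d] = min(max(agent[d], low), high)
        candidate = min(population, key=objective)
        if objective(candidate) < objective(best):
            best = candidate[:]  # best only ever improves
    return best
```

Even this toy version shows the two advantages listed above: the shared `best` lets agents jump toward a promising region, and having many agents makes it less likely that all of them stagnate in the same local optimum.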

One of the interesting branches of population-based meta-heuristics is Swarm Intelligence (SI). The concept of SI was first proposed in 1993 [6]. According to Bonabeau et al. [1], SI is “The emergent collective intelligence of groups of simple agents”. The inspirations of SI techniques originate mostly from natural colonies, flocks, herds, and schools. Some of the most popular SI techniques are ACO [2], PSO [3], and Artificial Bee Colony (ABC) [7]. A comprehensive literature review of SI algorithms is provided in the next section. Some of the advantages of SI algorithms are:

  • SI algorithms preserve information about the search space over the course of iterations, whereas Evolutionary Algorithms (EA) discard the information of previous generations.

  • SI algorithms often utilize memory to save the best solution obtained so far.

  • SI algorithms usually have fewer parameters to adjust.

  • SI algorithms have fewer operators than evolutionary approaches (crossover, mutation, elitism, and so on).

  • SI algorithms are easy to implement.

Regardless of the differences between meta-heuristics, a common feature is the division of the search process into two phases: exploration and exploitation [8], [9], [10], [11], [12]. The exploration phase refers to the process of investigating the promising area(s) of the search space as broadly as possible. An algorithm needs stochastic operators to search the search space randomly and globally in order to support this phase. Exploitation, in contrast, refers to the local search capability around the promising regions obtained in the exploration phase. Finding a proper balance between these two phases is considered a challenging task due to the stochastic nature of meta-heuristics. This work proposes a new SI technique inspired by the social hierarchy and hunting behavior of grey wolf packs. The rest of the paper is organized as follows:
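A common way to schedule the transition from exploration to exploitation is a control parameter that decays over iterations; GWO itself uses a coefficient that decreases linearly from 2 to 0. A minimal sketch of such a schedule (the start and end values here are illustrative defaults, not prescribed by any particular algorithm):

```python
def coefficient_schedule(max_iter, a_start=2.0, a_end=0.0):
    """Linearly decaying control parameter: large early values favor
    exploration (big, randomized steps), small late values favor
    exploitation (local refinement around promising regions)."""
    for t in range(max_iter):
        yield a_start - (a_start - a_end) * t / max_iter
```

Any step size or neighborhood radius scaled by this coefficient will start out exploratory and end up exploitative, giving one simple, deterministic answer to the balancing problem described above.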

Section 2 presents a literature review of SI techniques. Section 3 outlines the proposed GWO algorithm. The results and discussion on benchmark functions, classical engineering problems, and a real application are presented in Sections 4, 5, and 6, respectively. Finally, Section 7 concludes the work and suggests directions for future studies.

Section snippets

Literature review

Meta-heuristics may be classified into three main classes: evolutionary, physics-based, and SI algorithms. EAs are usually inspired by the concepts of evolution in nature. The most popular algorithm in this branch is GA. This algorithm was proposed by Holland in 1992 [13] and simulates Darwinian evolution concepts. The engineering applications of GA were extensively investigated by Goldberg [14]. Generally speaking, in EAs the optimization is done by evolving an initial random solution. Each new

Grey Wolf Optimizer (GWO)

In this section the inspiration of the proposed method is first discussed. Then, the mathematical model is provided.
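The section itself is truncated here. As a rough orientation, the core of the GWO position update described in the paper (the three best wolves — alpha, beta, delta — guide the rest of the pack, with coefficient vectors A and C controlling encircling and attacking) can be sketched as follows. This is a minimal, unconstrained Python sketch; the pack size, iteration count, and boundary clamping are illustrative choices, not the paper's exact settings.

```python
import random

def gwo(objective, dim, bounds, n_wolves=30, max_iter=200):
    """Sketch of the GWO update: `a` decays linearly from 2 to 0,
    shifting the pack from exploration (|A| > 1, diverge from prey)
    to exploitation (|A| < 1, converge on prey)."""
    low, high = bounds
    wolves = [[random.uniform(low, high) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(max_iter):
        a = 2.0 - 2.0 * t / max_iter          # linearly decreasing coefficient
        wolves.sort(key=objective)
        # Copy the three leaders so in-place updates don't disturb them.
        alpha, beta, delta = (w[:] for w in wolves[:3])
        for w in wolves:
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2.0 * a * r1 - a              # A = 2a*r1 - a
                    C = 2.0 * r2                      # C = 2*r2
                    D = abs(C * leader[d] - w[d])     # distance to leader
                    x += leader[d] - A * D            # step relative to leader
                w[d] = min(max(x / 3.0, low), high)   # average of the three
    return min(wolves, key=objective)
```

Averaging the three leader-guided positions is what encodes the social hierarchy: omega wolves follow the alpha, beta, and delta rather than any single best solution.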

Results and discussion

In this section the GWO algorithm is benchmarked on 29 benchmark functions. The first 23 benchmark functions are the classical functions utilized by many researchers [16], [48], [49], [50], [51], [82]. Despite the simplicity, we have chosen these test functions to be able to compare our results to those of the current meta-heuristics. These benchmark functions are listed in Table 1, Table 2, Table 3 where Dim indicates dimension of the function, Range is the boundary of the function’s search
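The exact function list is in Tables 1–3; as representatives of the two families discussed in the highlights, the classical benchmark set includes unimodal functions such as Sphere (which probe exploitation) and multimodal functions such as Rastrigin (which probe exploration). A sketch of these two (standard textbook definitions, reproduced here for illustration):

```python
import math

def sphere(x):
    """Classical unimodal benchmark: single global minimum 0 at the
    origin. Unimodal functions test exploitation and convergence."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Classical multimodal benchmark: many local minima, global
    minimum 0 at the origin. Multimodal functions test exploration
    and local optima avoidance."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

A meta-heuristic that exploits well drives `sphere` rapidly to zero; one that explores well escapes the grid of local minima in `rastrigin` instead of stagnating in the nearest one.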

GWO for classical engineering problems

In this section three constrained engineering design problems: tension/compression spring, welded beam, and pressure vessel designs, are employed. These problems have several equality and inequality constraints, so the GWO should be equipped with a constraint handling method to be able to optimize constrained problems as well. Generally speaking, constraint handling becomes very challenging when the fitness function directly affects the position updating of the search agents (GSA for instance).
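The snippet breaks off before naming the constraint handling method. One simple choice consistent with the discussion (illustrative only, not necessarily the paper's exact scheme) is a static penalty wrapper: infeasible solutions are charged for their constraint violations, while the search operator itself stays untouched.

```python
def penalized(objective, constraints, penalty=1e6):
    """Wrap an objective with a static penalty: each violated inequality
    constraint g(x) <= 0 adds a cost proportional to its violation.
    The position-updating mechanism is left unchanged; infeasible
    search agents simply receive poor fitness."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return objective(x) + penalty * violation
    return wrapped

# Example: minimize x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0.
f = penalized(lambda x: x[0] ** 2, [lambda x: 1.0 - x[0]])
# f([2.0]) == 4.0 (feasible); f([0.0]) == 1000000.0 (heavily penalized)
```

This kind of wrapper is easy to attach precisely because, unlike algorithms where fitness directly drives the position update (GSA for instance), the update only needs a ranking of solutions.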

Real application of GWO in optical engineering (optical buffer design)

The problem investigated in this section is called optical buffer design. An optical buffer is one of the main components of optical CPUs. The optical buffer slows the group velocity of light and allows the optical CPUs to process optical packets or adjust their timing. The most popular device for this purpose is a Photonic Crystal Waveguide (PCW). PCWs mostly have a lattice-shaped structure with a line defect in the middle. The radii of holes and shape of the line defect yield different slow

Conclusion

This work proposed a novel SI optimization algorithm inspired by grey wolves. The proposed method mimicked the social hierarchy and hunting behavior of grey wolves. Twenty nine test functions were employed in order to benchmark the performance of the proposed algorithm in terms of exploration, exploitation, local optima avoidance, and convergence. The results showed that GWO was able to provide highly competitive results compared to well-known heuristics such as PSO, GSA, DE, EP, and ES. First,

References (82)

  • Q. He et al.

    An effective co-evolutionary particle swarm optimization for constrained engineering design problems

    Eng Appl Artif Intell

    (2007)
  • C.A. Coello Coello

    Use of a self-adaptive penalty approach for engineering optimization problems

    Comput Ind

    (2000)
  • M. Mahdavi et al.

    An improved harmony search algorithm for solving optimization problems

    Appl Math Comput

    (2007)
  • F. Huang et al.

    An effective co-evolutionary differential evolution for constrained optimization

    Appl Math Comput

    (2007)
  • K. Deb

    An efficient constraint handling method for genetic algorithms

    Comput Methods Appl Mech Eng

    (2000)
  • K.S. Lee et al.

    A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice

    Comput Methods Appl Mech Eng

    (2005)
  • D. Wang et al.

    Slow light engineering in polyatomic photonic crystal waveguides based on square lattice

    Optics Commun

    (2011)
  • S.M. Mirjalili et al.

    Optical buffer performance enhancement using Particle Swarm Optimization in Ring-Shape-Hole Photonic Crystal Waveguide

    Optik – Int J Light Elect Optics

    (2013)
  • J. Wu et al.

    Wideband and low dispersion slow light in slotted photonic crystal waveguide

    Optics Commun

    (2010)
  • Bonabeau E, Dorigo M, Theraulaz G. Swarm intelligence: from natural to artificial systems: OUP USA;...
  • M. Dorigo et al.

    Ant colony optimization

    Comput Intell Magaz, IEEE

    (2006)
  • Kennedy J, Eberhart R. Particle swarm optimization, in Neural Networks, 1995. In: Proceedings, IEEE international...
  • D.H. Wolpert et al.

    No free lunch theorems for optimization

    Evolut Comput, IEEE Trans

    (1997)
  • Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by simulated annealing. Science, vol. 220; 1983. p....
  • Beni G, Wang J. Swarm intelligence in cellular robotic systems. In: Robots and biological systems: towards a new...
  • Basturk B, Karaboga D. An artificial bee colony (ABC) algorithm for numeric function optimization. In: IEEE swarm...
  • Olorunda O, Engelbrecht AP. Measuring exploration/exploitation in particle swarms using swarm diversity. In:...
  • E. Alba et al.

    The exploration/exploitation tradeoff in dynamic cellular genetic algorithms

    Evolut Comput, IEEE Trans

    (2005)
  • L. Lin et al.

    Auto-tuning strategy for evolutionary algorithms: balancing between exploration and exploitation

    Soft Comput

    (2009)
  • Mirjalili S, Hashim SZM. A new hybrid PSOGSA algorithm for function optimization. In: Computer and information...
  • J.H. Holland

    Genetic algorithms

    Sci Am

    (1992)
  • Goldberg D. Genetic algorithms in search, optimization and machine learning, Addison Wesley, New York. In: Eiben AE,...
  • R. Storn et al.

    Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces

    J Global Optim

    (1997)
  • X. Yao et al.

    Evolutionary programming made faster

    Evolut Comput, IEEE Trans

    (1999)
  • D. Fogel

    Artificial intelligence through simulated evolution

    (2009)
  • N. Hansen et al.

    Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES)

    Evolut Comput

    (2003)
  • I. Rechenberg

    Evolution strategy

    Comput Intel Imitat Life

    (1994)
  • Koza JR. Genetic programming;...
  • D. Simon

    Biogeography-based optimization

    Evolut Comput IEEE Trans

    (2008)
  • Webster B, Bernhard PJ. A local search optimization algorithm based on natural principles of gravitation. In:...
  • A. Kaveh et al.

    A novel heuristic optimization method: charged system search

    Acta Mech

    (2010)