Grey Wolf Optimizer
Introduction
Meta-heuristic optimization techniques have become very popular over the last two decades. Remarkably, some of them, such as Genetic Algorithm (GA) [1], Ant Colony Optimization (ACO) [2], and Particle Swarm Optimization (PSO) [3], are well-known not only among computer scientists but also among scientists from other fields. In addition to the huge number of theoretical works, such optimization techniques have been applied in various fields of study. A natural question is why meta-heuristics have become so remarkably common. The answer can be summarized into four main reasons: simplicity, flexibility, derivation-free mechanism, and local optima avoidance.
First, meta-heuristics are fairly simple. They have mostly been inspired by very simple concepts, typically related to physical phenomena, animals' behaviors, or evolutionary concepts. This simplicity allows computer scientists to simulate different natural concepts, propose new meta-heuristics, hybridize two or more meta-heuristics, or improve current meta-heuristics. Moreover, the simplicity helps scientists in other fields learn meta-heuristics quickly and apply them to their own problems.
Second, flexibility refers to the applicability of meta-heuristics to different problems without any special changes to the structure of the algorithm. Meta-heuristics are readily applicable to different problems since they mostly treat problems as black boxes: only the input(s) and output(s) of a system matter to a meta-heuristic. All a designer needs to know, therefore, is how to represent the problem for a meta-heuristic.
Third, the majority of meta-heuristics have derivation-free mechanisms. In contrast to gradient-based optimization approaches, meta-heuristics optimize problems stochastically. The optimization process starts with random solution(s), and there is no need to compute derivatives over the search space to find the optimum. This makes meta-heuristics highly suitable for real problems with expensive or unknown derivative information.
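As a concrete illustration of this derivation-free idea, pure random search (a deliberately minimal example of ours, not an algorithm from this paper) optimizes a black-box function using nothing but sampled function values:

```python
import random

def random_search(f, dim, lb, ub, n_samples=5000):
    """Pure random search: a minimal derivative-free optimizer (minimization)."""
    best, best_val = None, float("inf")
    for _ in range(n_samples):
        # Sample a candidate uniformly; only f's value is needed, never its gradient.
        x = [random.uniform(lb, ub) for _ in range(dim)]
        v = f(x)
        if v < best_val:
            best, best_val = x, v
    return best, best_val

# Black-box objective: the optimizer never inspects this expression, only its outputs.
sphere = lambda x: sum(v * v for v in x)
best, best_val = random_search(sphere, dim=2, lb=-5.0, ub=5.0)
```

Any simulation or measurement that returns a scalar could replace `sphere` here; nothing in the search loop would change.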
Finally, meta-heuristics have superior abilities to avoid local optima compared to conventional optimization techniques. This is due to the stochastic nature of meta-heuristics, which allows them to avoid stagnation in local solutions and search the entire search space extensively. The search space of real problems is usually unknown and very complex, with a massive number of local optima, so meta-heuristics are good options for optimizing these challenging real problems.
The No Free Lunch (NFL) theorem [4] is worth mentioning here. This theorem proves that there is no meta-heuristic best suited for solving all optimization problems. In other words, a particular meta-heuristic may show very promising results on one set of problems but poor performance on a different set. Consequently, NFL keeps this field of study highly active, resulting in enhancements to current approaches and new meta-heuristics every year. This also motivates our attempt to develop a new meta-heuristic inspired by grey wolves.
Generally speaking, meta-heuristics can be divided into two main classes: single-solution-based and population-based. In the former class (Simulated Annealing [5] for instance) the search process starts with one candidate solution, which is then improved over the course of iterations. Population-based meta-heuristics, however, perform the optimization using a set of solutions (population). In this case the search process starts with a random initial population (multiple solutions), and this population is enhanced over the course of iterations. Population-based meta-heuristics have some advantages compared to single-solution-based algorithms:
- Multiple candidate solutions share information about the search space, which results in sudden jumps toward the promising part of the search space.
- Multiple candidate solutions assist each other to avoid locally optimal solutions.
- Population-based meta-heuristics generally have greater exploration compared to single-solution-based algorithms.
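The flavor of these population-based advantages can be sketched as a generic loop. The following skeleton is an illustration of ours (names and step sizes are arbitrary), not any specific published algorithm:

```python
import random

def population_search(f, dim, lb, ub, pop_size=30, max_iter=100):
    """Skeleton of a population-based meta-heuristic (minimization)."""
    # Start from a random initial population of candidate solutions.
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)
    for _ in range(max_iter):
        for i, x in enumerate(pop):
            # Information sharing: each agent drifts toward the best solution
            # found by the whole population, plus a small random perturbation.
            cand = [min(max(v + 0.5 * (b - v) + random.gauss(0.0, 0.1), lb), ub)
                    for v, b in zip(x, best)]
            if f(cand) < f(x):  # greedy replacement keeps only improvements
                pop[i] = cand
        best = min(pop, key=f)
    return best

best = population_search(lambda x: sum(v * v for v in x), dim=3, lb=-5.0, ub=5.0)
```

The shared `best` is exactly the "information about the search space" the bullet points refer to: one agent's discovery immediately redirects every other agent.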
One of the interesting branches of population-based meta-heuristics is Swarm Intelligence (SI). The concept of SI was first proposed in 1993 [6]. According to Bonabeau et al. [1], SI is "The emergent collective intelligence of groups of simple agents". The inspiration for SI techniques originates mostly from natural colonies, flocks, herds, and schools. Some of the most popular SI techniques are ACO [2], PSO [3], and Artificial Bee Colony (ABC) [7]. A comprehensive literature review of SI algorithms is provided in the next section. Some of the advantages of SI algorithms are:
- SI algorithms preserve information about the search space over the course of iterations, whereas Evolutionary Algorithms (EAs) discard the information of previous generations.
- SI algorithms often utilize memory to save the best solution obtained so far.
- SI algorithms usually have fewer parameters to adjust.
- SI algorithms have fewer operators compared to evolutionary approaches (crossover, mutation, elitism, and so on).
- SI algorithms are easy to implement.
Regardless of the differences between meta-heuristics, a common feature is the division of the search process into two phases: exploration and exploitation [8], [9], [10], [11], [12]. The exploration phase refers to the process of investigating the promising area(s) of the search space as broadly as possible; an algorithm needs stochastic operators that search randomly and globally to support this phase. Exploitation, by contrast, refers to the local search capability around the promising regions obtained in the exploration phase. Finding a proper balance between these two phases is a challenging task due to the stochastic nature of meta-heuristics. This work proposes a new SI technique inspired by the social hierarchy and hunting behavior of grey wolf packs. The rest of the paper is organized as follows:
Section 2 presents a literature review of SI techniques. Section 3 outlines the proposed GWO algorithm. The results and discussion of benchmark functions, semi-real problems, and a real application are presented in Sections 4, 5, and 6, respectively. Finally, Section 7 concludes the work and suggests some directions for future studies.
Literature review
Meta-heuristics may be classified into three main classes: evolutionary, physics-based, and SI algorithms. EAs are usually inspired by the concepts of evolution in nature. The most popular algorithm in this branch is GA. This algorithm was proposed by Holland in 1992 [13] and simulates Darwinian evolution concepts. The engineering applications of GA were extensively investigated by Goldberg [14]. Generally speaking, the optimization is done by evolving an initial random solution in EAs. Each new
Grey Wolf Optimizer (GWO)
In this section the inspiration of the proposed method is first discussed. Then, the mathematical model is provided.
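As a preview of the kind of model this section develops, the commonly cited GWO update rules (coefficient vectors A = 2a·r1 − a and C = 2·r2, with a decreasing linearly from 2 to 0, and each wolf averaging its moves toward the three best wolves alpha, beta, and delta) can be sketched in a few lines. Population size and iteration count below are illustrative choices of ours:

```python
import random

def gwo(f, dim, lb, ub, n_wolves=20, max_iter=200):
    """Minimal sketch of the Grey Wolf Optimizer (minimization)."""
    # Random initial population of search agents (wolves).
    wolves = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(max_iter):
        # Social hierarchy: alpha, beta, delta are the three best solutions so far.
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 - 2 * t / max_iter  # decreases linearly from 2 to 0
        for i in range(n_wolves):
            new_pos = []
            for j in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = random.random(), random.random()
                    A = 2 * a * r1 - a  # |A| > 1 favors exploration, |A| < 1 exploitation
                    C = 2 * r2
                    D = abs(C * leader[j] - wolves[i][j])
                    x += leader[j] - A * D
                # Encircling: average the moves toward the three leaders, clamp to bounds.
                new_pos.append(min(max(x / 3, lb), ub))
            wolves[i] = new_pos
    wolves.sort(key=f)
    return wolves[0], f(wolves[0])

# Toy usage on the sphere function, a classical unimodal benchmark.
best, best_val = gwo(lambda x: sum(v * v for v in x), dim=5, lb=-10.0, ub=10.0)
```

The decreasing parameter a is what mediates the exploration/exploitation balance discussed in the Introduction: early iterations allow large, randomized steps, while later ones contract the pack around the leaders.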
Results and discussion
In this section the GWO algorithm is benchmarked on 29 benchmark functions. The first 23 benchmark functions are the classical functions utilized by many researchers [16], [48], [49], [50], [51], [82]. Despite their simplicity, we have chosen these test functions to be able to compare our results to those of current meta-heuristics. These benchmark functions are listed in Table 1, Table 2, Table 3, where Dim indicates the dimension of the function, Range is the boundary of the function’s search
GWO for classical engineering problems
In this section three constrained engineering design problems (tension/compression spring, welded beam, and pressure vessel design) are employed. These problems have several equality and inequality constraints, so GWO should be equipped with a constraint handling method to be able to optimize constrained problems as well. Generally speaking, constraint handling becomes very challenging when the fitness function directly affects the position updating of the search agents (GSA for instance).
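One common constraint-handling option is a static penalty that converts a constrained problem into an unconstrained one any meta-heuristic can optimize. The sketch below is only illustrative; the `penalized` wrapper and the 1e6 coefficient are our choices, not necessarily the scheme used in the paper:

```python
def penalized(f, inequality_constraints, penalty=1e6):
    """Wrap objective f with a static penalty for constraint violations.
    Each constraint g is written so that g(x) <= 0 means 'satisfied'."""
    def wrapped(x):
        violation = sum(max(0.0, g(x)) for g in inequality_constraints)
        return f(x) + penalty * violation
    return wrapped

# Toy example: minimize x^2 + y^2 subject to x + y >= 1 (i.e. 1 - x - y <= 0).
obj = penalized(lambda x: x[0] ** 2 + x[1] ** 2,
                [lambda x: 1.0 - x[0] - x[1]])
```

Because infeasible points receive a large added cost, any position-updating rule that prefers lower fitness is steered back into the feasible region without modifying the optimizer itself.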
Real application of GWO in optical engineering (optical buffer design)
The problem investigated in this section is called optical buffer design. An optical buffer is one of the main components of optical CPUs: it slows the group velocity of light, allowing optical CPUs to process optical packets or adjust their timing. The most popular device for this purpose is a Photonic Crystal Waveguide (PCW). PCWs mostly have a lattice-shaped structure with a line defect in the middle. The radii of holes and the shape of the line defect yield different slow
Conclusion
This work proposed a novel SI optimization algorithm inspired by grey wolves. The proposed method mimicked the social hierarchy and hunting behavior of grey wolves. Twenty-nine test functions were employed in order to benchmark the performance of the proposed algorithm in terms of exploration, exploitation, local optima avoidance, and convergence. The results showed that GWO was able to provide highly competitive results compared to well-known heuristics such as PSO, GSA, DE, EP, and ES. First,
References (82)
- Training feedforward neural networks using hybrid particle swarm optimization and gravitational search algorithm. Appl Math Comput (2012)
- A new optimization method: big bang–big crunch. Adv Eng Softw (2006)
- GSA: a gravitational search algorithm. Inf Sci (2009)
- ACROA: artificial chemical reaction optimization algorithm for global optimization. Expert Syst Appl (2011)
- A new meta-heuristic method: ray optimization. Comput Struct (2012)
- A new fruit fly optimization algorithm: taking the financial distress model as an example. Knowl-Based Syst (2012)
- Wolf-pack (Canis lupus) hunting strategies emerge from simple rules in computational simulations. Behav Process (2011)
- S-shaped versus V-shaped transfer functions for binary Particle Swarm Optimization. Swarm Evolut Comput (2013)
- A study of particle swarm optimization particle trajectories. Inf Sci (2006)
- Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inform (2002)
- An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng Appl Artif Intell
- Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind
- An improved harmony search algorithm for solving optimization problems. Appl Math Comput
- An effective co-evolutionary differential evolution for constrained optimization. Appl Math Comput
- An efficient constraint handling method for genetic algorithms. Comput Methods Appl Mech Eng
- A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice. Comput Methods Appl Mech Eng
- Slow light engineering in polyatomic photonic crystal waveguides based on square lattice. Optics Commun
- Optical buffer performance enhancement using Particle Swarm Optimization in Ring-Shape-Hole Photonic Crystal Waveguide. Optik – Int J Light Elect Optics
- Wideband and low dispersion slow light in slotted photonic crystal waveguide. Optics Commun
- Ant colony optimization. IEEE Comput Intell Mag
- No free lunch theorems for optimization. IEEE Trans Evolut Comput
- The exploration/exploitation tradeoff in dynamic cellular genetic algorithms. IEEE Trans Evolut Comput
- Auto-tuning strategy for evolutionary algorithms: balancing between exploration and exploitation. Soft Comput
- Genetic algorithms. Sci Am
- Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim
- Evolutionary programming made faster. IEEE Trans Evolut Comput
- Artificial intelligence through simulated evolution
- Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evolut Comput
- Evolution strategy. Comput Intel Imitat Life
- Biogeography-based optimization. IEEE Trans Evolut Comput
- A novel heuristic optimization method: charged system search. Acta Mech