Expert Systems with Applications

Volume 47, 1 April 2016, Pages 106-119

Multi-objective grey wolf optimizer: A novel algorithm for multi-criterion optimization

https://doi.org/10.1016/j.eswa.2015.10.039

Highlights

  • A novel multi-objective algorithm called Multi-objective Grey Wolf Optimizer is proposed.

  • MOGWO is benchmarked on 10 challenging multi-objective test problems.

  • The quantitative results show the superior convergence and coverage of MOGWO.

  • The coverage ability of MOGWO is confirmed by the qualitative results as well.

Abstract

Due to the novelty of the Grey Wolf Optimizer (GWO), no study in the literature has yet designed a multi-objective version of this algorithm. This paper proposes a Multi-Objective Grey Wolf Optimizer (MOGWO) to optimize problems with multiple objectives for the first time. A fixed-sized external archive is integrated into the GWO for saving and retrieving the Pareto optimal solutions. This archive is then employed to define the social hierarchy and simulate the hunting behavior of grey wolves in multi-objective search spaces. The proposed method is tested on 10 multi-objective benchmark problems and compared with two well-known meta-heuristics: the Multi-Objective Evolutionary Algorithm Based on Decomposition (MOEA/D) and Multi-Objective Particle Swarm Optimization (MOPSO). The qualitative and quantitative results show that the proposed algorithm provides very competitive results and outperforms the other algorithms. The source code of MOGWO is publicly available at http://www.alimirjalili.com/GWO.html.

Introduction

Solving real engineering problems involves different challenges, which require specific tools to handle. One of the most important characteristics of real problems, and one that makes them challenging, is multi-objectivity. A problem is called multi-objective if there is more than one objective to be optimized. Needless to say, a multi-objective optimizer should be employed to solve such problems. There are two approaches for handling multiple objectives: a priori versus a posteriori (Branke, Kaußler and Schmeck, 2001, Marler and Arora, 2004).

The former class of optimizers combines the objectives of a multi-objective problem into a single objective using a set of weights (provided by decision makers) that defines the importance of each objective, and then employs a single-objective optimizer to solve it. The single-objective nature of the combined search space allows a single solution to be found as the optimum. In contrast, a posteriori methods maintain the multi-objective formulation of the problem, allowing its behavior to be explored across a range of design parameters and operating conditions, unlike the a priori approach (Deb, 2012). In this case, decision makers eventually choose one of the obtained solutions based on their needs. There is also a third way of handling multiple objectives, the progressive method, in which decision makers' preferences about the objectives are considered during optimization (Branke & Deb, 2005).
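To make the a priori approach concrete, the following is a minimal sketch of weighted-sum scalarization; the objective functions, weights, and starting point are illustrative placeholders rather than anything from the paper.

```python
from scipy.optimize import minimize

# Two illustrative conflicting objectives (minimization).
def f1(x):
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    return (x[0] - 1) ** 2 + (x[1] - 1) ** 2

# Weights supplied by the decision maker, encoding objective importance.
weights = (0.7, 0.3)

def scalarized(x):
    # Collapse both objectives into a single one with the given weights.
    return weights[0] * f1(x) + weights[1] * f2(x)

# Any single-objective optimizer can now solve the combined problem;
# the result is one compromise solution, not a Pareto front.
result = minimize(scalarized, x0=[0.0, 0.0])
print(result.x)
```

Each choice of weights yields a different compromise, which is why the a priori approach requires the decision maker's preferences up front.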

In contrast to single-objective optimization, there is no single solution when multiple objectives are considered as the goal of the optimization process. In this case, the optimal solutions of a multi-objective problem form a set of solutions representing various trade-offs between the objectives (Coello, Lamont, & Van Veldhuisen, 2007). Before 1984, mathematical multi-objective optimization techniques were popular among researchers in different fields of study such as applied mathematics, operations research, and computer science. However, since the majority of conventional approaches (including deterministic methods) suffered from stagnation in local optima, such techniques had limited applicability, as they still do nowadays.
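The trade-offs mentioned above are formalized through Pareto dominance (defined in Section 2). As a quick illustration, a minimal dominance test and non-dominated filter for minimization problems might look as follows; the example objective vectors are made up.

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b` (minimization):
    `a` is no worse in every objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def non_dominated(points):
    """Keep only the objective vectors not dominated by any other."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (1, 4), (2, 2), and (4, 1) trade one objective against the other,
# while (3, 3) is dominated by (2, 2) and is filtered out.
print(non_dominated([(1, 4), (2, 2), (3, 3), (4, 1)]))
```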

In 1984, a revolutionary idea was proposed by David Schaffer (Coello Coello, 2006). He introduced the concept of multi-objective optimization using stochastic optimization techniques (including evolutionary and heuristic ones). Since then, a significant amount of research has been dedicated to developing and evaluating multi-objective evolutionary/heuristic algorithms. The advantages of stochastic optimization techniques, such as their gradient-free mechanism and local optima avoidance, made them readily applicable to real problems as well. Nowadays, applications of multi-objective optimization techniques can be found in different fields of study: mechanical engineering (Kipouros et al., 2008), civil engineering (Luh & Chueh, 2004), chemistry (Gaspar-Cunha and Covas, 2004, Rangaiah, 2008), and other fields (Coello & Lamont, 2004).

The early years of multi-objective stochastic optimization saw the conversion of various single-objective optimization techniques into multi-objective algorithms. Some of the most well-known stochastic multi-objective optimization techniques proposed so far are as follows:

  • Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler, 1999, Zitzler and Thiele, 1999)

  • Non-dominated Sorting Genetic Algorithm (NSGA) (Srinivas & Deb, 1994)

  • Non-dominated Sorting Genetic Algorithm version 2 (NSGA-II) (Deb, Pratap, Agarwal, & Meyarivan, 2002)

  • Multi-Objective Particle Swarm Optimization (MOPSO) (Coello, Pulido, & Lechuga, 2004)

  • Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) (Zhang & Li, 2007)

  • Pareto Archived Evolution Strategy (PAES) (Knowles & Corne, 2000)

  • Pareto-frontier Differential Evolution (PDE) (Abbass, Sarker, & Newton, 2001)

The literature shows that these algorithms are able to effectively approximate the true Pareto optimal solutions of multi-objective problems. However, the No Free Lunch (NFL) theorem (Wolpert & Macready, 1997) logically proves that no single optimization technique can solve all optimization problems. According to this theorem, the superior performance of an optimizer on one class of problems does not guarantee similar performance on another class. This theorem is the foundation of many works in the literature and motivates researchers in this field to adapt current techniques to new classes of problems or to propose new optimization algorithms. It is the foundation and motivation of this work as well, in which we propose a novel multi-objective optimization algorithm called the Multi-Objective Grey Wolf Optimizer (MOGWO), based on the recently proposed Grey Wolf Optimizer (GWO). The contributions of this research are as follows:

  • An archive has been integrated into the GWO algorithm to maintain non-dominated solutions.

  • A grid mechanism has been integrated into GWO in order to improve the distribution of the non-dominated solutions in the archive.

  • A leader selection mechanism has been proposed that chooses the alpha, beta, and delta wolves from the solutions in the archive (a sketch follows this list).

  • The multi-objective version of GWO has been proposed utilizing the above three operators.
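As a rough illustration of how these operators could work together, the following sketch pairs the archive with a grid-based leader selection in which less crowded hypercubes are favored. The helper names (`grid_index`, `select_leaders`) and the inverse-occupancy weighting are illustrative assumptions, not the paper's exact formulation.

```python
import random
from collections import defaultdict

def select_leaders(archive, grid_index, n_leaders=3):
    """Pick alpha, beta, and delta from the archive, favoring sparse cells.

    `archive` holds the non-dominated solutions found so far; `grid_index`
    (a hypothetical helper) maps a solution to its hypercube in objective
    space. Cells with fewer members get a higher selection probability,
    which pushes the search toward under-explored parts of the front.
    """
    cells = defaultdict(list)
    for solution in archive:
        cells[grid_index(solution)].append(solution)
    keys = list(cells)
    weights = [1.0 / len(cells[k]) for k in keys]  # ~inverse occupancy
    leaders = []
    for _ in range(n_leaders):
        k = random.choices(keys, weights=weights)[0]
        leaders.append(random.choice(cells[k]))
    return leaders  # used as alpha, beta, and delta in the position update
```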

The rest of the paper is organized as follows. Section 2 presents definitions and preliminaries of optimization in a multi-objective search space. Section 3 briefly reviews the concepts of GWO and then proposes the MOGWO algorithm. The qualitative and quantitative results, as well as the relevant discussion, are presented in Section 4. Finally, Section 5 concludes the work and outlines some directions for future work.

Section snippets

Literature review

This section reviews the concepts of multi-objective optimization and current techniques in the field of meta-heuristics.

Multi-Objective Grey Wolf Optimizer (MOGWO)

The GWO algorithm was proposed by Mirjalili, Mirjalili, and Lewis (2014). The social leadership and hunting technique of grey wolves were the main inspiration for this algorithm. In order to mathematically model the social hierarchy of wolves when designing GWO, the fittest solution is considered the alpha (α) wolf. Consequently, the second and third best solutions are named the beta (β) and delta (δ) wolves, respectively. The rest of the candidate solutions are assumed to be omega (ω) wolves.
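For reference, the single-objective GWO position update that MOGWO builds on can be sketched as follows; in MOGWO, the three leaders are selected from the external archive rather than from the current population. This is a minimal sketch of the standard GWO equations, where A = 2a·r1 − a, C = 2·r2, and a decreases linearly from 2 to 0 over the iterations.

```python
import numpy as np

def gwo_update(x, alpha, beta, delta, a, rng=None):
    """Move an omega wolf `x` toward the alpha, beta, and delta leaders."""
    rng = rng or np.random.default_rng()
    estimates = []
    for leader in (alpha, beta, delta):
        A = 2 * a * rng.random(x.shape) - a  # A = 2a*r1 - a: large |A| explores, small |A| exploits
        C = 2 * rng.random(x.shape)          # C = 2*r2: random emphasis on the leader
        D = np.abs(C * leader - x)           # distance to the leader
        estimates.append(leader - A * D)     # candidate position w.r.t. this leader
    return np.mean(estimates, axis=0)        # average of the three candidates
```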

Results and discussion

This section outlines experimental setup, presents results, and provides discussion.

Conclusion

This work proposed a novel multi-objective meta-heuristic called MOGWO. Two new components were integrated into the GWO algorithm to allow it to perform multi-objective optimization. The first component was an archive for storing and retrieving the best non-dominated solutions obtained so far during optimization. The second was a leader selection mechanism that allowed MOGWO to select the alpha, beta, and delta wolves from the archive for updating the positions of the omega wolves.

References (51)

  • Coello, C. A. C., et al. Evolutionary algorithms for solving multi-objective problems (2007)

  • Coello, C. A. C., et al. Handling multiple objectives with particle swarm optimization. IEEE Transactions on Evolutionary Computation (2004)

  • Coello Coello, C. A. Evolutionary multi-objective optimization: a historical view of the field. IEEE Computational Intelligence Magazine (2006)

  • Coello Coello, C. A., et al. MOPSO: a proposal for multiple objective particle swarm optimization

  • Das, I., et al. Normal-boundary intersection: a new method for generating the Pareto surface in nonlinear multicriteria optimization problems. SIAM Journal on Optimization (1998)

  • Deb, K. Advances in evolutionary multi-objective optimization. Search Based Software Engineering (2012)

  • Deb, K., et al. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation (2002)

  • Edgeworth, F. Y. Mathematical psychics (1881)

  • Fonseca, C. M., et al. A tutorial on the performance assessment of stochastic multiobjective optimizers

  • Gaspar-Cunha, A., et al. RPSGAe: a multiobjective genetic algorithm with elitism: application to polymer extrusion. Lecture Notes in Economics and Mathematical Systems (2004)

  • Goldberg, D. Genetic algorithms in search, optimization and machine learning (1989)

  • Goldberg, D. E., et al. Genetic algorithms and machine learning. Machine Learning (1988)

  • Hancer, E., et al. A multi-objective artificial bee colony approach to feature selection using fuzzy mutual information

  • Hemmatian, H., et al. Optimization of hybrid laminated composites using the multi-objective gravitational search algorithm (MOGSA). Engineering Optimization (2014)

  • Kim, I. Y., et al. Adaptive weighted-sum method for bi-objective optimization: Pareto front generation. Structural and Multidisciplinary Optimization (2005)