Published in: Soft Computing 22/2023

Open Access 27.07.2023 | Optimization

Enhancing the Harris’ Hawk optimiser for single- and multi-objective optimisation

Authors: Yit Hong Choo, Zheng Cai, Vu Le, Michael Johnstone, Douglas Creighton, Chee Peng Lim



Abstract

This paper proposes an enhancement to the Harris’ Hawks Optimisation (HHO) algorithm. Firstly, an enhanced HHO (EHHO) model is developed to solve single-objective optimisation problems (SOPs). EHHO is then further extended to a multi-objective EHHO (MO-EHHO) model to solve multi-objective optimisation problems (MOPs). In EHHO, a nonlinear exploration factor is formulated to replace the original linear exploration method, which improves the exploration capability and facilitates the transition from exploration to exploitation. In addition, the Differential Evolution (DE) scheme is incorporated into EHHO to generate diverse individuals. To replace the DE mutation factor, a chaos strategy that increases randomness to cover wider search areas is adopted. The non-dominated sorting method with the crowding distance is leveraged in MO-EHHO, while a mutation mechanism is employed to increase the diversity of individuals in the external archive for addressing MOPs. Benchmark SOPs and MOPs are used to evaluate the EHHO and MO-EHHO models, respectively. The sign test is employed to ascertain the performance of EHHO and MO-EHHO from the statistical perspective. Based on the average ranking method, EHHO and MO-EHHO indicate their efficacy in tackling SOPs and MOPs, as compared with the original HHO algorithm, its variants, and many other established evolutionary algorithms.
Notes
Zheng Cai, Vu Le, Michael Johnstone, Douglas Creighton, and Chee Peng Lim have contributed equally to this work.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Multi-objective optimisation problems (MOPs) need to consider more than one conflicting objective simultaneously, and they are ubiquitous in many real-world engineering applications. Unlike single-objective optimisation problems (SOPs), MOPs are more challenging and difficult to solve because the objectives in MOPs are often conflicting and/or incommensurable with each other, and improvement of one objective deteriorates other objectives. Hence, finding a set of equally-optimal solutions, i.e. the Pareto optimal set (PS), is necessary, since there is no single solution that can fulfill all objectives simultaneously (Akbari et al. 2012; Li et al. 2011).
Evolutionary algorithms (EAs) have become popular in recent years, mainly due to their population-based search capabilities that can approximate the PS. The applicability of EAs, however, is not always as good as that of other metaheuristics (Khan et al. 2016, 2019). Metaheuristic algorithms are flexible and easy to implement for solving SOPs and MOPs because they do not rely on gradient information of the objective landscape. While metaheuristic algorithms have been widely used to tackle various SOPs and MOPs, one key limitation is that the user-defined parameters of each metaheuristic algorithm require accurate tuning to realise its full potential. Another downside is the slow or premature convergence towards the global optimum solution (Talbi 2009; Heidari et al. 2019).
Two main types of metaheuristic algorithms are single solution-based and population-based models. Simulated Annealing (SA) (Van Laarhoven and Aarts 1987) and Variable Neighbourhood Search (VNS) (Mladenović and Hansen 1997) are examples of single solution-based metaheuristics, while the Genetic Algorithm (GA) (Holland 1992b) and Differential Evolution (DE) (Price 1996; Storn and Price 1997) are examples of population-based metaheuristics (p-metaheuristics) (Talbi 2009). Single solution-based methods operate on one solution at a time during the optimisation process, while p-metaheuristic methods process a set of solutions in each iteration. As expected, p-metaheuristic methods are more efficient than single solution-based metaheuristics and can often find an optimal solution, or suboptimal solutions located in the neighbourhood of the optimal solution space. By maintaining a population of solutions, these algorithms can avoid local optima and explore new regions of the search space.
Heidari et al. (2019) present four main types of p-metaheuristic methods, namely EAs, physics-based, human-based, and swarm intelligence (SI) models. The first type comprises EAs. GA and DE are the most popular EAs, and many enhancements of them have been introduced (Holland 1992b; Price 1996). These algorithms use a variety of operators, such as mutation, crossover, and selection in the GA, to generate new solutions and improve the quality of the population. The search process involves the evaluation of multiple solutions simultaneously and the selection of a pool of good solutions for the next generation. Genetic Programming (GP) and the Biogeography-Based Optimiser (BBO) are other examples of EAs (Kinnear et al. 1994; Simon 2008). The physics-based metaheuristic methods include Big-Bang Big-Crunch (BBBC) (Erol and Eksin 2006), Central Force Optimisation (CFO) (Formato 2007) and the Gravitational Search Algorithm (GSA) (Rashedi et al. 2009).
As the name suggests, these metaheuristic methods are based on the physics principles of a system. Human-based metaheuristic methods imitate human behaviours; examples include Teaching Learning-Based Optimisation (TLBO) (Rao et al. 2011), Tabu Search (TS) (Glover and Laguna 1998) and Harmony Search (HS) (Geem et al. 2001). SI algorithms have been the current trend for solving MOPs. SI algorithms are generally based on the social behaviours of organisms and social interactions of the population, e.g. the Particle Swarm Optimisation (PSO) algorithm that simulates bird flocking behaviours (Kennedy and Eberhart 1995).
Recently, many new nature-inspired SI models have been proposed, such as Grey Wolf Optimisation (GWO) algorithm (Mirjalili et al. 2014), Whale Optimisation Algorithm (WOA) (Mirjalili and Lewis 2016), Grasshopper Optimisation Algorithm (GOA) (Mirjalili et al. 2018), Moth-flame optimisation (MFO) (Mirjalili 2015), Artificial Bee Colony (ABC) (Karaboga and Basturk 2008), and Harris’ Hawk Optimisation (HHO) (Heidari et al. 2019). The general procedure of these SI models can be summarised as follows:
  • Generate a set of individuals (the population), where each individual represents a candidate solution
  • Update each individual in the population using the metaheuristic algorithm
  • Replace the current population with the newly generated population in an iterative manner
The above procedure repeats every iteration until a termination condition is satisfied, e.g. the maximum number of iterations is reached. In accordance with the ’no free lunch’ (NFL) theorem, there is no universal optimisation method for solving all possible problems (Wolpert and Macready 1997). As a result, many new optimisation methods and their enhanced variants have been introduced, since no single algorithm is effective for solving all classes of optimisation problems.
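The iterative procedure above can be sketched as a generic population-based loop. In the minimal sketch below, the Gaussian perturbation is only a placeholder for whichever update operator a particular metaheuristic defines; it is not any specific algorithm from this paper.

```python
import random

def optimise(fitness, dim, bounds, pop_size=30, max_iter=100):
    """Generic population-based metaheuristic skeleton (minimisation)."""
    lb, ub = bounds
    # Step 1: generate a population of random candidate solutions.
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(max_iter):  # termination: fixed iteration budget
        new_pop = []
        for x in pop:
            # Step 2: update each individual (placeholder Gaussian move,
            # clipped to the search bounds).
            cand = [min(ub, max(lb, xi + random.gauss(0.0, 0.1 * (ub - lb))))
                    for xi in x]
            # Greedy replacement keeps the better of parent and candidate.
            new_pop.append(cand if fitness(cand) < fitness(x) else x)
        # Step 3: the new population replaces the current one.
        pop = new_pop
        best = min(pop + [best], key=fitness)
    return best

# Usage: minimise the 5-dimensional sphere function.
random.seed(42)
sol = optimise(lambda x: sum(v * v for v in x), dim=5, bounds=(-10.0, 10.0))
```

The greedy replacement rule here mirrors the selection step common to DE-style algorithms; other metaheuristics substitute their own acceptance criteria.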
SI models have been used to solve various optimisation problems in different fields. Nonetheless, there is room for further research to improve their performance and extend their applications. Several enhancements of metaheuristic algorithms have been proposed to improve their efficiency. In general, the enhancements fall into four categories: (i) hybrid metaheuristic algorithms, (ii) modified metaheuristic algorithms, (iii) integration with chaos theory, and (iv) multi-objective metaheuristic algorithms. Note that hybrid metaheuristic algorithms integrate two or more algorithms for performance enhancement, while modified metaheuristic algorithms change the search process to improve the performance of an algorithm.
One major problem of all metaheuristic algorithms is premature convergence to local optima, particularly in large-scale optimisation problems. Hybrid metaheuristic algorithms are used to solve optimisation problems more efficiently and with higher flexibility (Bettemir and Sonmez 2015; Li and Gao 2016). The GA and SA models are useful stochastic search algorithms for solving combinatorial optimisation problems. GA is based on the survival-of-the-fittest and natural selection principles, while SA is inspired by the physical process of annealing (Kirkpatrick et al. 1983; Van Laarhoven and Aarts 1987; Holland 1992a). SA is often coupled with GA to avoid local optima by occasionally accepting non-improving solutions. The Hybrid Genetic Algorithm and Simulated Annealing (HGASA) model is good at tackling optimisation problems such as project scheduling with multiple constraints and railway crew scheduling (Chen and Shahandashti 2009; Hanafi and Kozan 2014; Bettemir and Sonmez 2015). Leveraging the capability of DE in searching for good and diverse solutions, many researchers have proposed integrating DE with other p-metaheuristic algorithms. Wang and Li (2019) enhanced the solution diversity by combining DE and GWO. An adaptive DE model was combined with GOA for multi-level satellite image segmentation (Jia et al. 2019). The resulting hybrid algorithm was evaluated in a series of experiments using satellite images, demonstrating improved search efficiency and population diversity.
Modification methods in metaheuristic algorithms refer to the process of changing the search process to improve the performance of an algorithm. One such method is the use of local search techniques to improve the solution quality. Local search techniques involve searching the local neighbourhood of a solution to find a better solution. Another modification method is the use of diversity mechanisms to prevent an algorithm from converging prematurely. Diversity mechanisms involve maintaining a diverse population of solutions to explore different regions of the search space. As an example, Gao et al. (2008) employed a logarithm decreasing inertia weight and chaos mutation with PSO to improve the convergence speed and diversity of solutions. In Gao and Zhao (2019), a variable weight was assigned to the leader of grey wolves, and a controlling parameter was introduced to decline exponentially and alter the transition pattern between exploration and exploitation. Moreover, Xie et al. (2020) enhanced GWO by integrating a nonlinear exploration factor to increase exploration in the early stage and improve exploitation in the second half of the search process.
This paper proposes enhancements to the HHO algorithm. Specifically, the nonlinear exploration factor, DE scheme, chaos strategy, and mutation mechanism from EAs are leveraged, leading to the proposed enhanced HHO (EHHO) model and its extension to multi-objective EHHO for solving SOPs and MOPs, respectively. A nonlinear exploration factor is introduced to replace the linear exploration factor in the original HHO algorithm. This nonlinear exploration factor aims to improve the exploration capability in the early search stage and facilitate the transition from exploration to exploitation. The DE scheme is then used to generate new, diverse individuals through mutation, crossover and selection. A chaos strategy is adopted to replace the DE mutation factor, in order to increase the randomness and find more valuable search areas. In MO-EHHO, a mutation mechanism is introduced to increase diversity of individuals in the external archive. The fast non-dominated sorting and crowding distance techniques from NSGA-II are also employed to select the optimal solution in each iteration and to maintain diversity of the non-dominated solutions.
Benchmark SOPs and MOPs are used to evaluate the effectiveness of EHHO and MO-EHHO (Yao et al. 1999; Digalakis and Margaritis 2001). The benchmark suite of SOPs includes unimodal (UM) functions F1–F7, multimodal (MM) functions F8–F23, and composition (CM) functions F24–F29. On the other hand, MO-EHHO is applied to high-dimensional bi-objective and scalable tri-objective problems, both without equality or inequality constraints, for performance evaluation. These include the ZDT benchmark functions (ZDT1–4, ZDT6) (Deb et al. 2002; Deb and Agrawal 1999) and the DTLZ benchmark functions (DTLZ1–DTLZ7) (Deb et al. 2005). The statistical sign test with a 95% confidence level is used for performance comparison. Four performance indicators are used, namely convergence, diversity, generational distance (GD), and inverted generational distance (IGD). The average ranking method (Yu et al. 2018) is leveraged to rank the performance of MO-EHHO against its competitors on each performance indicator.
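Two of these indicators can be illustrated concretely. The sketch below uses one common formulation (the paper's exact definitions may differ slightly, e.g. some variants use a root-mean-square form): GD is the mean distance from each obtained solution to its nearest point on the reference (true) Pareto front, and IGD is the same measure with the two sets swapped.

```python
import math

def gd(front, reference):
    """Generational distance: mean Euclidean distance from each obtained
    point to its nearest point on the reference (true) Pareto front."""
    dists = [min(math.dist(p, r) for r in reference) for p in front]
    return sum(dists) / len(dists)

def igd(front, reference):
    """Inverted generational distance: GD with the roles of the obtained
    front and the reference front swapped."""
    return gd(reference, front)

# Toy bi-objective example: two obtained points that both lie on the
# three-point reference front, so GD is zero while IGD is not.
ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
obtained = [(0.0, 1.0), (1.0, 0.0)]
```

A small GD with a large IGD, as in this toy example, signals a front that is accurate but poorly spread, which is why the two indicators are reported together.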
This study introduces enhanced HHO-based models for solving SOPs and MOPs. The organisation of this paper is as follows. In Sect. 2, a review of HHO, chaos theory, and DE is provided. In Sect. 3, the HHO algorithm and the proposed enhancements, namely EHHO and MO-EHHO, are explained in detail. The effectiveness of EHHO and MO-EHHO for solving SOPs and MOPs is evaluated, compared, analysed and discussed in Sect. 4. Concluding remarks and suggestions for future work are presented in Sect. 5.

2 Literature review

This section reviews the HHO algorithm, DE, and chaos theory, together with the concepts of multi-objective optimisation and current metaheuristic methods for solving MOPs.

2.1 Harris’ Hawks optimisation algorithm

Proposed by Heidari et al. (2019), the HHO algorithm is a swarm-based, nature-inspired metaheuristic model. It imitates the collaborative behaviours of Harris’ hawks working together to search for and chase prey in different directions. A variety of hunting strategies are performed by the hawks based on the prey’s escaping energy. The mathematical models of these behaviours are formulated for solving optimisation problems. The HHO algorithm has received extensive interest from researchers due to its effectiveness and fast convergence speed; however, it suffers from the following problems:
1. greatly affected by the tuning parameters;
2. weak transition between exploration and exploitation;
3. easily falls into local optima;
4. poor convergence for high-dimensional and multimodal problems; and
5. limited diversity of the solutions.
Enhancements of HHO have been proposed, e.g. hybridisation of HHO, modification of HHO and chaotic HHO (Alabool et al. 2021). Hybridisation is a popular method to improve the performance of an algorithm. HHO has been integrated with other optimisation methods to find the solution faster, avoid falling into local optima, provide better solution quality, and improve algorithm stability (Chen et al. 2020a; Gupta et al. 2020; Ewees and Abd Elaziz 2020). Chen et al. (2020a) integrated the DE algorithm and a multi-population strategy into HHO to overcome its convergence issue and prevent HHO from being trapped in local optima. In addition, opposition-based learning was applied to HHO by Gupta et al. (2020) to reduce the number of opposite solutions and increase the convergence speed. HHO has also been used to improve the performance of other algorithms, e.g. accelerating the search process of multi-verse optimisation (MVO) and maintaining the MVO population (Ewees and Abd Elaziz 2020).
Other HHO modifications include improving the convergence speed, increasing the population diversity, and enhancing the transition from exploration to exploitation (Alabool et al. 2021). HHO was integrated with a chaotic map to improve the balance between exploration and exploitation (Zheng-Ming et al. 2019; Qu et al. 2020). Specifically, Zheng-Ming et al. (2019) used a tent map in the exploration phase to identify the optimal solution rapidly, while Qu et al. (2020) employed a logistic map in the escaping energy factor to balance global exploration and local exploitation. The use of HHO for solving MOPs was investigated in Islam et al. (2020) and Jangir et al. (2021). Table 1 summarises the main enhancement methods of HHO.
Table 1
Recent popular enhancement methods of HHO for solving SOPs and MOPs

No | Algorithm | Ref | Year | Variant/method | Integrated method | Remarks
1 | HHODE | Birogul (2019) | 2019 | Hybridisation of HHO | DE | To enhance the transition between exploration and exploitation
2 | HHO-DE | Bao et al. (2019) | 2019 | Hybridisation of HHO | DE, Subpopulation | To improve the accuracy of HHO and avoid local optima
3 | CMDHHO | Chen et al. (2020a) | 2020 | Hybridisation of HHO | Chaotic, DE, Multi-population | To improve exploration and exploitation of HHO and the solution quality
4 | CMVHHO | Ewees and Abd Elaziz (2020) | 2020 | Hybridisation of HHO | Chaotic, MVO | HHO is used as a local search to improve the MVO performance
5 | DEAHHO | Wunnava et al. (2020) | 2020 | Hybridisation of HHO | DE | To enhance exploration of HHO
6 | MLHHDE | Abd Elaziz et al. (2021) | 2021 | Hybridisation of HHO | Multi-leader, DE | To avoid local optima and enhance exploration and exploitation of HHO
7 | IHAOHHO | Wang et al. (2021) | 2021 | Hybridisation of HHO | Aquila Optimiser (AO), Nonlinear escaping energy, Random opposition-based learning | To improve search capabilities
8 | H-HHO | Abualigah et al. (2021) | 2021 | Hybridisation of HHO | DE | To improve local (exploitation) search capabilities of HHO
9 | DMSDL-HHO | Liu et al. (2022) | 2022 | Hybridisation of HHO | DE, Multi-swarm, Quasi-Newton method, Information exchange | To enhance exploration and exploitation capabilities and avoid local optima
10 | Improved HHO | Zheng-Ming et al. (2019) | 2019 | Modification of HHO | Chaotic HHO (Tent map) | To improve the convergence speed
11 | CHHO | Menesy et al. (2019) | 2019 | Modification of HHO | Chaotic HHO (10 chaotic functions) | To improve search capabilities of HHO and avoid local optima
12 | m-HHO | Gupta et al. (2020) | 2020 | Modification of HHO | Non-linear energy parameter, Different rapid dives, Opposition-based learning (OBL), Greedy search method | To enhance search efficiency of HHO, prevent premature convergence and avoid local optima
13 | ADHHO | Zhang et al. (2020) | 2020 | Modification of HHO | Adaptive cooperative foraging, Dispersed foraging strategies | To improve the population diversity and avoid local optima
14 | IEHHO | Qu et al. (2020) | 2020 | Modification of HHO | Information exchange | To improve the balance between local and global search of HHO
15 | EHHO | Chen et al. (2020b) | 2020 | Modification of HHO | OBL, Chaotic local search (CLS) | To achieve a better tradeoff between diversification and intensification
16 | EHHO | Jiao et al. (2020) | 2020 | Modification of HHO | Orthogonal learning (OL), General opposition-based learning (GOBL) | To improve the convergence speed, accuracy and population diversity of HHO; to enhance the balance between exploration and exploitation
17 | NOL-HHO | Yin et al. (2020) | 2020 | Modification of HHO | Nonlinear control parameter | To avoid local optima
18 | CHHO | Dhawale et al. (2021) | 2021 | Modification of HHO | Chaotic HHO (Tent map) | To improve exploration and exploitation with a tent map
19 | RLHHO | Li et al. (2021) | 2021 | Modification of HHO | Logarithmic spiral, OBL, Rosenbrock Method (RM) | To improve exploration and local search capabilities of HHO; to enhance the convergence speed and accuracy
20 | IHHO | Hussien and Amin (2022) | 2022 | Modification of HHO | OBL, Chaotic, Self-adaptive technique | To improve the HHO performance and convergence speed

2.2 Differential evolution (DE)

Introduced by Price (1996) and Storn and Price (1997), DE is well known for its simplicity, few control parameters and superior optimisation performance. Storn and Price (1997) employed DE to solve global optimisation problems. However, one major problem of all metaheuristic algorithms is the risk of premature convergence and entrapment in local optima. Leveraging DE’s capability in finding better and more diverse solutions, many researchers have proposed integrating DE with other p-metaheuristic algorithms. As an example, Wang and Li (2019) enhanced the solution diversity by integrating DE and the GWO algorithm. An adaptive DE was combined with GOA by Jia et al. (2019) for multi-level satellite image segmentation; the hybrid algorithm, evaluated in a series of experiments on satellite images, increased search efficiency and population diversity over the iterations. In addition, hybrid models of DE and HHO can elevate the performance of the original HHO algorithm. Birogul (2019) introduced HHODE, which blends HHO and DE, using mutation operators from DE to harness its exploration strengths. The hybridisation strikes a balance between exploratory and exploitative tendencies; tested on a modified IEEE 30-bus test system for the optimal power flow problem, it proved more effective than other optimisation algorithms. Besides, DE was used by Chen et al. (2020a) in HHO to boost its local search capability, exploiting the DE operations of mutation, crossover and selection to discover better and more diverse solutions. A new HHO variant was introduced by Islam et al. (2020) to solve the power flow optimisation problem. Multiple studies have demonstrated hybridisation of DE with HHO for improving the HHO performance; Table 1 summarises the main publications on this topic (Birogul 2019; Bao et al. 2019; Wunnava et al. 2020; Abd Elaziz et al. 2021; Abualigah et al. 2021; Liu et al. 2022).
In short, DE is useful for maintaining diverse, high-quality individuals without sacrificing the convergence speed of HHO.

2.3 Chaos theory

Chaos theory helps enhance the population diversity and cover wider search areas that an algorithm may otherwise miss. As mentioned before, HHO has a poor changeover from exploration to exploitation, and hence is susceptible to local optima. Besides, HHO performs poorly on high-dimensional and multimodal problems. In this respect, chaos theory is often applied to an optimisation algorithm to enhance its global search capability (Ewees and Abd Elaziz 2020; Chen et al. 2020b; Liu et al. 2020; Dhawale et al. 2021; Hussien and Amin 2022). A variety of chaotic maps are available: Chebyshev, circle, intermittency, iterative, Leibovich, logistic, piecewise, sawtooth, sine, singer, sinusoidal and tent maps. Logistic and tent maps are the most commonly used to improve the HHO performance (Zheng-Ming et al. 2019; Menesy et al. 2019; Chen et al. 2020a). Chen et al. (2020a) introduced a chaotic sequence into the escaping energy of the rabbit in HHO to enhance the global search capability and cover broader search areas. A chaotic map was incorporated into the adaptive mutation strategy and integrated into the external archive by Liu et al. (2020) to attain a set of diverse non-dominated solutions. Moreover, Xie et al. (2020) employed a chaotic strategy to assign weights to the leader wolves in GWO to increase their diversity. Barshandeh and Haghzadeh (2021) integrated a chaos strategy to initialise the population and improve the solution diversity, allowing a larger search space to be covered. Table 1 shows the key publications on the use of chaos theory for improving the HHO algorithm.
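For illustration, the logistic and tent maps named above can be generated in a few lines. The logistic-map parameter mu = 4 (the fully chaotic regime) is a conventional choice, not one specified in this paper.

```python
def logistic_map(x0, n, mu=4.0):
    """n iterates of the logistic map x_{k+1} = mu * x_k * (1 - x_k)."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def tent_map(x0, n):
    """n iterates of the tent map: 2x for x < 0.5, else 2(1 - x)."""
    seq, x = [], x0
    for _ in range(n):
        x = 2.0 * x if x < 0.5 else 2.0 * (1.0 - x)
        seq.append(x)
    return seq

# Both maps stay within [0, 1] for seeds in (0, 1), which makes them
# convenient replacements for uniform random control parameters.
ls = logistic_map(0.7, 200)
ts = tent_map(0.6, 200)
```

In practice, a chaotic iterate is substituted wherever an algorithm would otherwise draw a uniform random number, e.g. in the escaping energy of HHO.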

2.4 Multi-objective optimisation

MOPs are similar to SOPs in terms of problem definition, but with multiple objective functions that cannot be efficiently solved by a single-objective algorithm. A set of equilibrium solutions exists for solving an MOP, i.e. a set of non-inferior solutions or Pareto optimal solutions (Fonseca and Fleming 1998). An MOP can be formulated as follows:
$$\begin{aligned} \begin{aligned} \text {Minimise} \, F(\textbf{x})&=f_{1}(\textbf{x}), f_{2}(\textbf{x}), \cdots , f_{k}(\textbf{x}) \\ \text {subject to:}&\\ g_{j}(\textbf{x})&\ge 0, \quad j=1,2, \ldots , r \\ h_{j}(\textbf{x})&=0, \quad j=r+1, r+2, \ldots , s \\ Lb_{i}&\le x_{i} \le Ub_{i}, i = 1,2,\ldots ,n \end{aligned} \end{aligned}$$
(1)
where n is the number of design variables, k is the number of objective functions, r is the number of inequality constraints, \(s-r\) is the number of equality constraints, \(g_{j}\) and \(h_{j}\) are the j-th inequality and equality constraints, respectively, and \(Lb_{i}\) and \(Ub_{i}\) are the lower and upper bounds of the i-th design variable.
In a single-objective problem, solution \(\mathbf {x_1}\) is better than \(\mathbf {x_2}\) if \(f(\mathbf {x_1}) < f(\mathbf {x_2})\); however, this definition cannot be used in an MOP that has two or more objective functions. Using the Pareto dominance concept, solution \(\mathbf {x_1}\) is better than (dominates) \(\mathbf {x_2}\) if it is no worse in every objective and strictly better in at least one objective (Fonseca and Fleming 1998). Two solutions are denoted as non-dominated with respect to each other if neither dominates the other.
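The dominance relation just described can be written in a few lines; this minimal sketch uses the standard weak-dominance form for a minimisation problem.

```python
def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimisation):
    no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def non_dominated(f1, f2):
    """Two objective vectors are mutually non-dominated if neither
    dominates the other."""
    return not dominates(f1, f2) and not dominates(f2, f1)
```

For example, `(1, 3)` and `(2, 2)` are mutually non-dominated: each is better in one objective and worse in the other, so both would belong to the same Pareto front.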
Recently, p-metaheuristic algorithms have been utilised to solve MOPs due to their population-based search capabilities in yielding multiple solutions in a single run. The Pareto dominance concept is used to determine the PS. Most p-metaheuristic algorithms employ non-dominated ranking and a Pareto strategy to ensure the diversity of the PS. Several useful p-metaheuristic algorithms that store the best PS when there are multiple optimum solutions are: NSGA-II (Deb et al. 2002), SPEA2 (Zitzler et al. 2001), and MOPSO with an archive (Coello et al. 2004). Mirjalili et al. (2016) discussed several multi-objective p-metaheuristic algorithms for solving MOPs, such as the multi-objective grey wolf optimiser (MOGWO) (Mirjalili et al. 2016), multi-objective ant lion optimiser (MOALO) (Mirjalili et al. 2017b), multi-objective multi-verse optimisation (MOMVO) (Mirjalili et al. 2017a) and multi-objective grasshopper optimisation algorithm (MOGOA) (Mirjalili et al. 2018). Jangir and Jangir (2018) embedded non-dominated sorting into GWO to solve several multi-objective engineering design problems and applied it to constrained multi-objective economic emission dispatch problems with wind power integration. Liu et al. (2020) modified MOGWO with multiple search strategies to solve MOPs. Hence, this paper extends EHHO to a multi-objective EHHO that incorporates the non-dominated sorting from NSGA-II to sort and rank the non-dominated solutions, and calculates the crowding distance between solution sets to ensure a diverse PS. An external archive with a mutation strategy is utilised to diversify the obtained PS.
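The crowding distance mentioned above can be sketched as follows, following the NSGA-II formulation: boundary solutions receive infinite distance, and interior solutions accumulate the normalised gap between their sorted neighbours in each objective.

```python
def crowding_distance(front):
    """Crowding distance of each point in a non-dominated front
    (NSGA-II style). Larger values indicate less crowded regions."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        # Sort indices by the k-th objective value.
        order = sorted(range(n), key=lambda i: front[i][k])
        fmin, fmax = front[order[0]][k], front[order[-1]][k]
        # Boundary points are always preserved.
        dist[order[0]] = dist[order[-1]] = float("inf")
        if fmax == fmin:
            continue  # degenerate objective: all values equal
        for j in range(1, n - 1):
            gap = front[order[j + 1]][k] - front[order[j - 1]][k]
            dist[order[j]] += gap / (fmax - fmin)
    return dist
```

When an archive overflows, solutions with the smallest crowding distance are the first candidates for removal, which biases the retained set towards an even spread along the front.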

3 The proposed enhanced Harris’ Hawk optimiser

3.1 Harris’ Hawk optimisation

HHO was originally proposed by Heidari et al. (2019) to solve SOPs. It mimics the hunting strategy of Harris’ hawks, focusing on their "surprise pounce" (also known as the "seven kills" strategy) to catch prey (e.g. a rabbit). The hawks collaborate in chasing and attacking the prey, and change their attack mode dynamically based on the prey’s escaping pattern. The hawks’ behaviour is modelled in three main stages: exploration, the transition from exploration to exploitation, and exploitation. Figure 1 presents the different stages of HHO, while the pseudocode of the HHO algorithm is presented in Algorithm 1.
The HHO algorithm begins by initialising a fixed number of randomised individuals (the population), in which every individual represents a candidate solution. The individuals are initialised randomly within the lower and upper bounds of the problem. The fitness value of each individual is calculated in every iteration until the stopping criteria are satisfied.
The key mathematical formulation of each HHO stage is as follows:
\({\textbf {Exploration stage:}}\)
$$\begin{aligned} X(t+1)= {\left\{ \begin{array}{ll} X_\textrm{rand}(t)-r_1 |X_\textrm{rand}(t)-2r_2X(t)| &{} q \ge 0.5\\ (X_\textrm{rabbit}(t)-X_m(t))-r_3(LB+r_4(UB-LB)) &{} q < 0.5 \end{array}\right. } \end{aligned}$$
(2)
$$\begin{aligned} X_m(t)=\frac{1}{N} \sum _{i=1}^{N} X_i(t) \end{aligned}$$
(3)
\({\textbf {Transition stage:}}\)
$$\begin{aligned}{} & {} e = 2 (1-\frac{t}{T}) \end{aligned}$$
(4)
$$\begin{aligned}{} & {} E_0 = 2rand()-1 \end{aligned}$$
(5)
$$\begin{aligned}{} & {} E = E_0 \times e \end{aligned}$$
(6)
Exploitation stage: Soft besiege (\(r \ge 0.5\) and \(|E| \ge 0.5\))
$$\begin{aligned}{} & {} X(t+1) = \Delta X(t) - E|JX_\textrm{rabbit}(t)-X(t) |\end{aligned}$$
(7)
$$\begin{aligned}{} & {} \Delta X(t) = X_\textrm{rabbit}(t)-X(t) \end{aligned}$$
(8)
$$\begin{aligned}{} & {} J= 2(1-r_5) \end{aligned}$$
(9)
Hard besiege (\(r \ge 0.5\) and \(|E| < 0.5\))
$$\begin{aligned} X(t+1) = X_\textrm{rabbit}(t) - E |\Delta X(t) |\end{aligned}$$
(10)
Soft besiege with progressive rapid dives (\(r < 0.5\) and \(|E| \ge 0.5\))
$$\begin{aligned}{} & {} Y = X_\textrm{rabbit}(t) - E|JX_\textrm{rabbit}(t)-X(t) |\end{aligned}$$
(11)
$$\begin{aligned}{} & {} Z = Y + S \times LF(D) \end{aligned}$$
(12)
$$\begin{aligned}{} & {} \begin{array}{l} L F(D)=0.01 \times \frac{\mu \times \sigma }{|v |^{\frac{1}{\beta }}}, \\ \sigma =\left( \frac{\Gamma (1+\beta ) \times \sin \left( \frac{\pi \beta }{2}\right) }{\Gamma \left( \frac{1+\beta }{2}\right) \times \beta \times 2\left( \frac{\beta -1}{2}\right) }\right) ^{\frac{1}{\beta }}, \beta =1.5 \end{array} \end{aligned}$$
(13)
$$\begin{aligned}{} & {} X(t+1) = {\left\{ \begin{array}{ll} Y &{} if \, F(Y)< F(X(t))\\ Z &{} if \, F(Z) < F(X(t)) \end{array}\right. } \end{aligned}$$
(14)
Hard besiege with progressive rapid dives (\(r < 0.5\) and \(|E| < 0.5\))
$$\begin{aligned}{} & {} Y' = X_\textrm{rabbit}(t) - E |JX_\textrm{rabbit}(t)-X_m(t) |\end{aligned}$$
(15)
$$\begin{aligned} Z' = Y' + S \times LF(D) \end{aligned}$$
(16)
$$\begin{aligned}{} & {} X(t+1) =\left\{ \begin{array}{ll} Y' &{} \quad {\text {if }} \, F(Y')< F(X(t))\\ Z' &{} \quad {\text {if }} \, F(Z') < F(X(t)) \end{array} \right. \end{aligned}$$
(17)
where \(X(t+1)\) is the position vector of an individual in the next iteration, \(X_\textrm{rabbit}(t)\) is the best position of the prey, X(t) is the current position of the individual, \(X_\textrm{rand}(t)\) is a randomly selected individual, \(X_m(t)\) is the average position of the population, and \(r_1\), \(r_2\), \(r_3\), \(r_4\), \(r_5\), q and v are random numbers within [0,1], while LB and UB are the lower and upper bounds of the search space, respectively. In addition, E denotes the escaping energy of the prey, \(E_0\in (-1,1)\) is the initial state of the prey’s energy, \(e\in (2,0)\) is the exploration factor, and T is the maximum number of iterations (e.g. 100 iterations). Besides, J is the jumping strength of the prey, and \(\Delta X(t)\) denotes the difference between the position of an individual and the best position of the prey, while LF is the Levy flight function, D is the dimension of the search space, S is a random vector of size \(1 \times D\), and \(\beta \) is a constant set to 1.5 (Heidari et al. 2019; Chen et al. 2020a). The detailed processes of the original HHO algorithm are shown in Algorithm 1 (Heidari et al. 2019).
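As an illustration, the branching among the exploration phase and the four besiege strategies, driven by the escaping energy E and the random number r as in Eqs. (2)–(17), can be sketched as a simple dispatcher (the position-update formulas themselves are omitted).

```python
def hho_phase(E, r):
    """Select the HHO search phase from the prey's escaping energy E and a
    random number r in [0, 1], following the thresholds of the original
    HHO paper (Heidari et al. 2019)."""
    if abs(E) >= 1.0:
        return "exploration"                                 # Eq. (2)
    if r >= 0.5 and abs(E) >= 0.5:
        return "soft besiege"                                # Eq. (7)
    if r >= 0.5:
        return "hard besiege"                                # Eq. (10)
    if abs(E) >= 0.5:
        return "soft besiege with progressive rapid dives"   # Eqs. (11)-(14)
    return "hard besiege with progressive rapid dives"       # Eqs. (15)-(17)
```

Because |E| shrinks with the exploration factor over the iterations, the dispatcher naturally drifts from exploration towards the hard-besiege branches late in the run.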

3.2 Nonlinear exploration factor

A nonlinear exploration factor is introduced to replace the linear version from the original HHO algorithm. In the HHO algorithm, the exploration factor e controls the transition pattern from exploration to exploitation, as defined in Eq. (4); it decreases linearly from 2 to 0, resulting in an acute shrinkage of the search space and insufficient search attention. Multiple studies in the literature have explored adaptive control of diverse search operations to improve the transition between exploration and exploitation. Trigonometric (Mirjalili 2016; Gao and Zhao 2019), exponential (Mittal et al. 2016; Long et al. 2018a), and logarithmic (Gao et al. 2008; Long et al. 2018b) functions are useful for controlling the exploration factor nonlinearly. In this research, we use a nonlinear exploration factor proposed by Xie et al. (2020), which combines trigonometric functions, i.e. cos and sin, with a hyperbolic function, i.e. tanh. This nonlinear exploration factor aims to amplify the diversification capabilities in the early exploration phase and smoothen the changeover from exploration to exploitation. The proposed nonlinear exploration factor and newly proposed escaping energy are defined as follows:
$$\begin{aligned}{} & {} e' = 2 \times \Big (\cos \Big (\frac{(\tanh \theta )^2+(\theta \sin \pi \theta )^k}{(\tanh 1)^2}\times \frac{\pi }{2} \Big )\Big )^2 \end{aligned}$$
(18)
$$\begin{aligned}{} & {} \theta = \frac{t}{Max\_iter} \end{aligned}$$
(19)
$$\begin{aligned}{} & {} E' = E_0 \times e' \end{aligned}$$
(20)
where \(e'\) is the newly proposed exploration factor that governs the switch from exploration to exploitation, while t and Max\(_\textrm{iter}\) are the current iteration and the maximum number of iterations, respectively. Coefficient k controls the descending slope of the exploration factor \(e'\); here \(k = 5\) is adopted, following Xie et al. (2020). The descending patterns of the new exploration factor and escaping energy are presented in Fig. 2a, b, respectively. As shown in Fig. 2a, the proposed nonlinear exploration factor decreases with a varying slope: the first half of the search retains a more effective exploration rate, while the second half has a lower one. The search space is explored extensively in the first half, which helps EHHO focus on exploration early and conclude the search with exploitation.
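Eqs. (18)–(20) translate into a few lines of Python; this is an illustrative sketch (the function names are ours, not from the authors' code):

```python
import math

def exploration_factor(t, max_iter, k=5):
    """Nonlinear exploration factor e' of Eq. (18); k = 5 as adopted from Xie et al. (2020)."""
    theta = t / max_iter  # Eq. (19)
    num = math.tanh(theta) ** 2 + (theta * math.sin(math.pi * theta)) ** k
    return 2.0 * math.cos(num / math.tanh(1.0) ** 2 * math.pi / 2.0) ** 2

def escaping_energy(e0, t, max_iter):
    """Escaping energy E' of Eq. (20); e0 is the initial energy drawn from (-1, 1)."""
    return e0 * exploration_factor(t, max_iter)
```

At t = 0 the factor equals 2 and at t = Max_iter it decays to (numerically) 0, matching the descending pattern in Fig. 2a.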

3.3 Differential evolution (DE)

Developed by Price (1996) and Storn and Price (1997), DE is a robust yet straightforward EA for optimising real-valued, multi-modal functions. DE is utilised in EHHO to enhance its search capability and solution quality. Three primary processes exist in DE, namely mutation, crossover and selection (Price 1996; Storn and Price 1997). They are described in the following subsections.

3.3.1 Mutation

Following the DE process, three random solutions are selected from the population for mutation. A mutation factor is applied to the difference between two of the individuals, and the third individual is combined with the scaled difference vector to produce a new individual, as follows:
$$\begin{aligned} V_i(t) = X_{r1}(t) + F \times (X_{r2}(t) - X_{r3}(t)) \end{aligned}$$
(21)
where \(X_{r1}\), \(X_{r2}\), \(X_{r3}\) are three random solutions selected from the population, \(V_i\) is the new individual, and F is the mutation factor \(\in [0.5,1]\), generated by the sinusoidal mapping defined in Eq. (22). The sinusoidal mapping produces a chaotic sequence, as shown in Fig. 3. In accordance with chaos theory, a chaotic sequence has three important features, i.e. sensitivity to its initialisation condition, ergodicity, and randomness. The chaotic mutation factor helps avoid local optima and premature convergence by exploiting the randomness of the chaotic sequence.
$$\begin{aligned} ch_{i+1}&= ach^2_i \sin (\pi ch_i), \quad a =2.3 \,, \, ch_0 =0.7 \nonumber \\&\quad {\text {and }} \, i = 1,2,\ldots ,N \end{aligned}$$
(22)
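As a sketch, the sinusoidal map of Eq. (22) and a simple scaling of its output into the range F ∈ [0.5, 1] can be written as follows; the scaling step is our assumption, since the paper only states that F is generated by the map:

```python
import math

def sinusoidal_map(n, a=2.3, ch0=0.7):
    """Generate n values of the sinusoidal chaotic map of Eq. (22), a = 2.3, ch_0 = 0.7."""
    seq, ch = [], ch0
    for _ in range(n):
        ch = a * ch ** 2 * math.sin(math.pi * ch)
        seq.append(ch)
    return seq

def chaotic_mutation_factor(ch):
    """Map a chaotic value in [0, 1) onto the mutation-factor range [0.5, 1] (our assumption)."""
    return 0.5 + 0.5 * ch
```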

3.3.2 Crossover

The crossover operation is modelled as follows:
$$\begin{aligned}{} & {} U_{i,j,g} = \left\{ \begin{array}{ll} V_{i,j,g} &{}\quad \hbox {if rand}_j(0,1)\le Cr \,\hbox { or }\, j = j_{\text {rand}} \\ X_{i,j,g} &{}\quad \text{ otherwise } \end{array} \right. \end{aligned}$$
(23)
$$\begin{aligned}{} & {} Cr=Cr_l+{\text {rand}}\times (Cr_u-Cr_l) \end{aligned}$$
(24)
where \({\text {rand}}_j\) is a random number in [0, 1] and Cr is the crossover ratio. In Eq. (23), a component of the solution from the mutation process replaces the corresponding component of the current solution if \({\text {rand}}_j\) does not exceed Cr. \(Cr_l\) and \(Cr_u\) are the lower and upper boundaries of Cr, respectively.

3.3.3 Selection

The selection operation is modelled as follows:
$$\begin{aligned} X_{i,g+1} = \left\{ \begin{array}{ll} U_{i,g} &{}\quad \text {if}\, f(U_{i,g})\le f(X_{i,g}) \\ X_{i,g} &{}\quad \text {otherwise} \end{array} \right. , \quad i = 1,2,\ldots ,N \end{aligned}$$
(25)
where \(X_{i,g}\) is the original individual, \(U_{i,g}\) is the individual produced by crossover, and \(X_{i,g+1}\) is the individual carried into the next iteration. Following crossover, DE selects the survivor using Eq. (25), where f is the fitness function evaluated before and after mutation and crossover. In a minimisation problem, the new solution replaces the current one if it has a lower fitness value.
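Putting Eqs. (21), (23) and (25) together, one DE generation can be sketched as follows; this is a minimal illustration for a minimisation problem, not the authors' implementation:

```python
import random

def de_step(pop, fitness, f_factor, cr):
    """One DE generation: mutation (Eq. 21), binomial crossover (Eq. 23), greedy selection (Eq. 25)."""
    n, dim = len(pop), len(pop[0])
    new_pop = []
    for i in range(n):
        # Mutation: three distinct random individuals, all different from i.
        r1, r2, r3 = random.sample([j for j in range(n) if j != i], 3)
        v = [pop[r1][d] + f_factor * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
        # Crossover: j_rand guarantees at least one component comes from the mutant.
        j_rand = random.randrange(dim)
        u = [v[d] if (random.random() <= cr or d == j_rand) else pop[i][d] for d in range(dim)]
        # Selection: keep the trial vector only if it is at least as fit (minimisation).
        new_pop.append(u if fitness(u) <= fitness(pop[i]) else pop[i])
    return new_pop
```

Because selection is greedy per individual, the best fitness in the population can never worsen from one generation to the next.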

3.4 EHHO for multi-objective optimisation

3.4.1 Mutation strategy

A mutation strategy is employed in the external archive to increase the chance of generating better non-dominated solutions; a solution is mutated when its mutation probability is smaller than the mutation factor. An adaptive mutation factor is applied and modelled as follows:
$$\begin{aligned} m=m_l+{\text {rand}}\times (m_u-m_l) \end{aligned}$$
(26)
where \(m_l\) and \(m_u\) are the lower and upper boundaries of the mutation factor m; generally, \(m_l=0.1\) and \(m_u=0.9\) are used. A set of new solutions is obtained after the mutation operation is completed (Fig. 4).
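A sketch of this step follows; the adaptive factor is Eq. (26), while the per-dimension Gaussian perturbation and scalar bounds are our illustrative assumptions, as the paper does not fix the mutation operator itself:

```python
import random

def adaptive_mutation(archive, lb, ub, m_l=0.1, m_u=0.9):
    """Mutate external-archive solutions with adaptive factor m (Eq. 26); lb/ub are scalar bounds."""
    m = m_l + random.random() * (m_u - m_l)  # Eq. (26)
    mutated = []
    for sol in archive:
        # Each component mutates with probability m; results are clamped to the search bounds.
        new = [min(ub, max(lb, x + random.gauss(0.0, 0.1 * (ub - lb))))
               if random.random() < m else x
               for x in sol]
        mutated.append(new)
    return mutated
```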

3.4.2 External archive

Zhang and Sanderson (2009) proposed an adaptive DE algorithm with an optional external archive; a similar archive is employed in MO-EHHO to solve MOPs. The external archive is a solution set, Arc, that acts as a storage unit to keep the historical Pareto solutions (PS) obtained in each iteration. When a new and better solution is found, the existing solution is replaced by the new non-dominated solution. Moreover, other non-dominated solutions are removed based on the crowding distance when the maximum capacity q is reached. Note that q denotes the maximum number of solutions that can be stored in the external archive, which is set to the same size as the population.

3.4.3 Fast non-dominated sorting

The fast non-dominated sorting method is integrated into the multi-objective p-metaheuristic algorithm for finding non-dominated solutions. Developed by Deb et al. (2002) for NSGA-II, it sorts the solutions according to their degree of dominance. Solutions in the first non-dominated front are assigned rank 1; after removing them, the non-dominated solutions among the remaining ones are assigned rank 2, and so on. The solutions are then chosen based on their ranks, in order to preserve the quality of the solution base.
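The procedure of Deb et al. (2002) can be sketched as follows for a minimisation problem (an illustrative implementation, not the authors' code):

```python
def fast_non_dominated_sort(objs):
    """Fast non-dominated sorting: objs is a list of objective vectors (minimisation).
    Returns a list of fronts, each a list of solution indices (front 0 = rank 1)."""
    n = len(objs)
    dominated_by = [[] for _ in range(n)]  # indices that solution i dominates
    dom_count = [0] * n                    # number of solutions dominating i

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    for i in range(n):
        for j in range(n):
            if i != j:
                if dominates(objs[i], objs[j]):
                    dominated_by[i].append(j)
                elif dominates(objs[j], objs[i]):
                    dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:  # all dominators already ranked
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]
```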

3.4.4 Crowding distance

The crowding distance is computed as follows (Deb et al. 2002):
$$\begin{aligned} CD_i = {\left\{ \begin{array}{ll} \infty &{} \hbox {if }\; i=1 \;\hbox {or}\; i=q \\ \sum _{j=1}^{n} \frac{{\text {obj}}_j (S_{i+1} )-{\text {obj}}_j (S_{i-1} )}{{\text {obj}}_j^\text {max}-{\text {obj}}_j^\text {min}} &{} \text {otherwise} \end{array}\right. } \end{aligned}$$
(27)
where q is the size of the solution set and \(i\in \{1,\ldots ,q\}\); n is the number of objectives; \({\text {obj}}_j (S_{i+1})\) is the value of the jth objective of the neighbouring solution \(S_{i+1}\); and \({\text {obj}}_j^\text {max}\) and \({\text {obj}}_j^\text {min}\) are the maximum and minimum values of the jth objective over the solution set. Specifically, the crowding distance measures the distance between the two solutions adjacent to a given solution on the same front; the larger the crowding distance, the farther apart the adjacent solutions, i.e. the less crowded the region.
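Eq. (27) can be computed per front as follows (an illustrative sketch; boundary solutions receive infinite distance so that they are always retained when the archive is pruned):

```python
def crowding_distance(objs):
    """Crowding distance of Eq. (27) for one front; objs is a list of objective vectors."""
    q, n = len(objs), len(objs[0])
    dist = [0.0] * q
    for j in range(n):
        # Sort solution indices by the j-th objective.
        order = sorted(range(q), key=lambda i: objs[i][j])
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary solutions
        span = objs[order[-1]][j] - objs[order[0]][j]    # obj_max - obj_min
        if span == 0:
            continue  # degenerate objective: all values identical
        for k in range(1, q - 1):
            dist[order[k]] += (objs[order[k + 1]][j] - objs[order[k - 1]][j]) / span
    return dist
```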

4 Experimental results and analysis

4.1 Evaluation with single-objective optimisation problems

4.1.1 Experimental setup and compared algorithms

A set of diverse benchmark problems (Yao et al. 1999; Digalakis and Margaritis 2001) is used to evaluate the proposed EHHO algorithm. The benchmark suite includes unimodal (UM) functions F1–F7, multimodal (MM) functions F8–F23, and composition (CM) functions F24–F29. The UM functions evaluate the exploitative capabilities of optimisers with respect to the global best. The MM functions are designed to assess the diversification or exploration capabilities of optimisers. The characteristics and mathematical formulations of the UM and MM functions are presented in Tables 22, 23 and 24 in the Appendix. The CM functions are selected from the IEEE CEC 2005 competition (García et al. 2009), and include rotated, shifted and multi-modal hybrid composition functions. Details of the CM functions are presented in Table 25 in the Appendix. These functions are beneficial for investigating the interchange from exploration to exploitation and the capability of an optimiser to escape from local optima.
EHHO is implemented in Python and executed on a computer with Windows 10 Professional (64-bit) and 16 GB RAM. The population size and maximum number of iterations of EHHO are set to 30 and 500, respectively, as proposed in the original HHO (Heidari et al. 2019) and HHODE (Birogul 2019). The EHHO results are documented and compared with those of other algorithms based on the average performance over 30 independent runs. The results of the other algorithms are extracted from the original HHO publication (Heidari et al. 2019) and the HHODE publication (Birogul 2019). The average (AVG) and standard deviation (STD) of EHHO are compared with those of the following algorithms: GA (Simon 2008), BBO (Simon 2008), DE (Simon 2008), PSO (Simon 2008), FPA (Yang et al. 2014), GWO (Mirjalili et al. 2016), BAT (Yang and Gandomi 2012), FA (Gandomi et al. 2011), CS (Gandomi et al. 2013), MFO (Mirjalili 2015) and TLBO (Rao et al. 2012). These include commonly utilised optimisers, such as GA, DE, PSO and BBO, as well as newly emerged optimisers, such as FPA, GWO, BAT, FA, CS, MFO and TLBO. Additionally, EHHO is compared with a modified version of HHO, i.e. HHODE, from Birogul (2019).
A statistical hypothesis test, i.e. the non-parametric sign test at the 95% confidence level, is used for performance assessment from the statistical perspective. The sign test results at the 95% confidence interval and the associated p-values are presented in Tables 6, 7, 8 and 9. Symbol "+/=/-" indicates a better/equal/worse performance between two compared algorithms. The \(\checkmark \) symbol indicates that the target algorithm performs significantly differently from the compared algorithm, \(\times \) indicates the opposite, and \(\approx \) depicts no significant difference in performance between the two algorithms.
To compare an algorithm with others in a more general form, lexicographic ordering and average ranking are used. Lexicographic ordering was adopted to obtain the final ranking of all algorithms in the CEC 2009 competition (Yu et al. 2018). The average ranking method is used to reveal the accuracy and stability of an algorithm against its competitors. All algorithms are ranked based on their average results over the total number of benchmark functions, N. To measure the accuracy of the target algorithm, the mean rank \(\mu _r\) is used, i.e.
$$\begin{aligned} \mu _{r}=\frac{\sum _{i=1}^{N} R_{i}}{N} \end{aligned}$$
(28)
where \(R = \{ R_1,R_2,\ldots ,R_N \}\) is the rank set of the target algorithm over the N benchmark functions. The lower the mean rank \(\mu _r\), the better the performance of the target algorithm.
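The average ranking method and Eq. (28) can be sketched as follows (an illustrative implementation assuming minimisation, with ties broken by input order):

```python
def average_ranking(results):
    """results[a][f] is the average result of algorithm a on function f (minimisation).
    Returns the mean rank of each algorithm per Eq. (28); lower is better."""
    n_alg, n_fun = len(results), len(results[0])
    ranks = [[0] * n_fun for _ in range(n_alg)]
    for f in range(n_fun):
        # Rank algorithms on function f: best (smallest) result gets rank 1.
        order = sorted(range(n_alg), key=lambda a: results[a][f])
        for pos, a in enumerate(order):
            ranks[a][f] = pos + 1
    return [sum(r) / n_fun for r in ranks]  # mean rank over N functions
```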

4.2 Quantitative results of EHHO and discussion

By observing the quantitative results from different classes of benchmark functions, the AVG and STD measures from 30 independent runs indicate the performance and stability of EHHO. The EHHO results versus those from other competitors in dealing with UM and MM functions (F1–F13) on 30, 100, 500 and 1000 dimensions are tabulated in Tables 2, 3, 4 and 5. Table 8 shows the EHHO results versus those from competitors in dealing with MM and CM functions. The best AVG and STD scores of EHHO are in bold. The statistical results of UM, MM, and CM benchmark functions are tabulated in Tables 6, 7, 8 and 9. The average ranking results are tabulated in Tables 7, 8, 9 and 10.
From Table 2, EHHO generates superior results on F2, F4–F7, and F9–F13. From the statistical evaluation viewpoint, EHHO outperforms its competitors on all 30-dimensional UM functions at the 95% confidence level, as indicated by \(p < 0.05\) in Table 6. Moreover, EHHO achieves better results than other models on F1, F2, F4–F6, and F9–F13 of the 100-dimensional functions, as shown in Table 3. As indicated in Table 6, EHHO performs significantly better than its competitors at the 95% confidence level. EHHO also yields the best results on 9 out of 13 500-dimensional benchmark problems, as presented in Table 4. In addition, Table 6 shows that EHHO performs statistically better than its competitors, achieving results comparable to those of HHODE/rand/1, HHODE/best/1, HHODE/current-to-best/2, and HHODE/best/2. From Table 5, EHHO depicts superior performance with 10 out of 13 best results on the 1000-dimensional benchmark problems. Statistically, EHHO performs significantly better than its competitors at the 95% confidence level, with no statistically significant difference from HHODE/current-to-best/2. From the results, integrating DE yields more diverse solutions, facilitating the attainment of the global optimum. In summary, EHHO proves to be highly effective in solving the F4–F6 and F9–F13 benchmark functions. Furthermore, EHHO outperforms other algorithms in almost all instances, and achieves the optimal global solution for F9 and F11 in search spaces ranging from 30 to 1000 dimensions.
According to the results presented in Table 8, EHHO outperforms other models on several functions, including F14–F17, F19, F21–F22, and F25–F27, and depicts similar performance to other models on F23 and F29. However, EHHO shows relatively lower performance on the hybrid composition functions F18, F20, F24, and F28. From Table 10, the average ranking results reinforce the findings from the statistical analysis, providing additional insights into the comparative performance of the algorithms on benchmark functions F14–F29. EHHO is inferior to HHODE/current-to-best/2, HHODE/rand/1, HHODE/best/1, HHODE/best/2, and HHO, but outperforms GA, PSO, BBO, FPA, GWO, BAT, FA, CS, MFO, TLBO, and DE. These findings provide valuable insights into the strengths and weaknesses of the EHHO algorithm on specific benchmark functions.
Based on the results, it can be seen that hybridising HHO with DE enhances the diversity of solutions and helps prevent entrapment in local optima. Incorporating a chaotic mutation rate in DE further improves population diversity while avoiding stagnation; the ergodicity and randomness of the chaotic mutation assist EHHO in avoiding premature convergence. The nonlinear escaping factor further supports the transition from exploration to exploitation, enabling EHHO to attain the global optimum solution. Overall, EHHO outperforms the original HHO algorithm in solving SOPs. However, on certain composition functions, such as F18, F20, F24, and F28, the HHODE variants proposed by Birogul (2019) perform better than EHHO.
Table 2
Comparative analysis of EHHO and other published algorithms in Heidari et al. (2019) and Birogul (2019) for benchmark functions (F1–F13) with 30 dimensions
| Benchmark | Metric | EHHO | HHODE/rand/1 | HHODE/best/1 | HHODE/current-to-best/2 | HHODE/best/2 | HHODE/rand/2 | HHO | GA | PSO |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | 1.95E-167 | 5.85E-143 | **1.04E-169** | 8.25E-139 | 2.40E-146 | 1.08E-126 | 3.95E-97 | 1.03E+03 | 1.83E+04 |
| F1 | STD | 0.00E+00 | 3.20E-142 | 0.00E+00 | 4.52E-138 | 1.04E-145 | 5.89E-126 | 1.72E-96 | 5.79E+02 | 3.01E+03 |
| F2 | AVG | **2.64E-93** | 1.18E-73 | 4.55E-90 | 3.53E-75 | 4.71E-75 | 1.93E-61 | 1.56E-51 | 2.47E+01 | 3.58E+02 |
| F2 | STD | 1.45E-92 | 6.44E-73 | 2.49E-89 | 1.93E-74 | 1.94E-74 | 1.06E-60 | 6.98E-51 | 5.68E+00 | 1.35E+03 |
| F3 | AVG | 3.40E-21 | 8.33E-110 | **3.13E-138** | 9.79E-128 | 1.35E-107 | 2.05E-104 | **1.92E-63** | 2.65E+04 | 4.05E+04 |
| F3 | STD | 1.86E-20 | 4.57E-109 | 1.72E-134 | 5.36E-127 | 7.38E-107 | 1.13E-103 | 1.05E-62 | 3.44E+03 | 8.21E+03 |
| F4 | AVG | **5.16E-113** | 1.87E-74 | 8.55E-89 | 1.12E-76 | 4.70E-72 | 1.34E-65 | 1.02E-47 | 5.17E+01 | 4.39E+01 |
| F4 | STD | 2.81E-112 | 6.69E-74 | 4.65E-88 | 5.43E-76 | 1.90E-71 | 4.40E-65 | 5.01E-47 | 1.05E+01 | 3.64E+00 |
| F5 | AVG | 0.00E+00 | 1.22E-02 | 1.38E-02 | 8.68E-03 | 2.18E+01 | 3.00E-02 | 1.32E-02 | 1.95E+04 | 1.96E+07 |
| F5 | STD | 0.00E+00 | 8.31E-03 | 1.25E-02 | 1.16E-02 | 6.33E-02 | 5.21E-02 | 1.87E-02 | 1.31E+04 | 6.25E+06 |
| F6 | AVG | 0.00E+00 | 1.25E-04 | 6.38E-04 | 6.14E-05 | 1.40E-04 | 3.25E-04 | 1.15E-04 | 9.01E+02 | 1.87E+04 |
| F6 | STD | 0.00E+00 | 2.03E-04 | 5.40E-04 | 8.84E-05 | 2.53E-04 | 7.55E-04 | 1.56E-04 | 2.84E+02 | 2.92E+03 |
| F7 | AVG | **7.54E-05** | 1.95E-04 | 1.29E-04 | 9.03E-05 | 1.36E-04 | 1.52E-04 | 1.40E-04 | 1.91E-01 | 1.07E+01 |
| F7 | STD | 7.91E-05 | 2.35E-04 | 1.40E-04 | 9.28E-05 | 1.40E-04 | 1.84E-04 | 1.07E-04 | 1.50E-01 | 3.05E+00 |
| F8 | AVG | -1.26E+04 | -1.24E+04 | -1.22E+04 | -1.25E+04 | -1.24E+04 | -1.26E+04 | -1.25E+04 | -1.26E+04 | -3.86E+03 |
| F8 | STD | 1.31E-12 | 4.22E+02 | 7.47E+02 | 1.37E+02 | 4.90E+02 | 6.22E+01 | 1.47E+02 | 4.51E+00 | 2.49E+02 |
| F9 | AVG | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 9.04E+00 | 2.87E+02 |
| F9 | STD | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.58E+00 | 1.95E+01 |
| F10 | AVG | **4.44E-16** | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 1.36E+01 | 1.75E+01 |
| F10 | STD | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.01E-31 | 1.51E+00 | 3.67E-01 |
| F11 | AVG | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.01E+01 | 1.70E+02 |
| F11 | STD | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 2.43E+00 | 3.17E+01 |
| F12 | AVG | **3.31E-32** | 1.13E-05 | 2.01E-04 | 2.09E-07 | 1.32E-05 | 2.36E-06 | 2.08E-06 | 4.77E+00 | 1.51E+07 |
| F12 | STD | 9.51E-32 | 1.41E-05 | 1.91E-04 | 3.32E-07 | 1.93E-05 | 4.25E-06 | 1.19E-05 | 1.56E+00 | 9.88E+06 |
| F13 | AVG | **2.43E-32** | 9.26E-05 | 2.26E-04 | 4.59E-06 | 1.08E-04 | 9.15E-05 | 1.57E-04 | 1.52E+01 | 5.73E+07 |
| F13 | STD | 5.90E-32 | 1.76E-04 | 2.54E-04 | 4.62E-06 | 1.96E-04 | 1.59E-04 | 2.15E-04 | 4.52E+00 | 2.68E+07 |

| Benchmark | Metric | BBO | FPA | GWO | BAT | FA | CS | MFO | TLBO | DE |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | 7.59E+01 | 2.01E+03 | 1.18E-27 | 6.59E+04 | 7.11E-03 | 9.06E-04 | 1.01E+03 | 2.17E-89 | 1.33E-03 |
| F1 | STD | 2.75E+01 | 5.60E+02 | 1.47E-27 | 7.51E+03 | 3.21E-03 | 4.55E-04 | 3.05E+03 | 3.14E-89 | 5.92E-04 |
| F2 | AVG | 1.36E-03 | 3.22E+01 | 9.71E-17 | 2.71E+08 | 4.34E-01 | 1.49E-01 | 3.19E+01 | 2.77E-45 | 6.83E-03 |
| F2 | STD | 7.45E-03 | 5.55E+00 | 5.60E-17 | 1.30E+09 | 1.84E-01 | 2.79E-02 | 2.06E+01 | 3.11E-45 | 2.06E-03 |
| F3 | AVG | 1.21E+04 | 1.41E+03 | 5.12E-05 | 1.38E+05 | 1.66E+03 | 2.10E-01 | 2.43E+04 | 3.91E-18 | 3.97E+04 |
| F3 | STD | 2.69E+03 | 5.59E+02 | 2.03E-04 | 4.72E+04 | 6.72E+02 | 5.69E-02 | 1.41E+04 | 8.04E-18 | 5.37E+03 |
| F4 | AVG | 3.02E+01 | 2.38E+01 | 1.24E-06 | 8.51E+01 | 1.11E-01 | 9.65E-02 | 7.00E+01 | 1.68E-36 | 1.15E+01 |
| F4 | STD | 4.39E+00 | 2.77E+00 | 1.94E-06 | 2.95E+00 | 4.75E-02 | 1.94E-02 | 7.06E+00 | 1.47E-36 | 2.37E+00 |
| F5 | AVG | 1.82E+03 | 3.17E+05 | 2.70E+01 | 2.10E+08 | 7.97E+01 | 2.76E+01 | 7.35E+03 | 2.54E+01 | 1.06E+02 |
| F5 | STD | 9.40E+02 | 1.75E+05 | 7.78E-01 | 4.17E+07 | 7.39E+01 | 4.51E-01 | 2.26E+04 | 4.26E-01 | 1.01E+02 |
| F6 | AVG | 6.71E+01 | 1.70E+03 | 8.44E-01 | 6.69E+04 | 6.94E-03 | 3.13E-03 | 2.68E+03 | 3.29E-05 | 1.44E-03 |
| F6 | STD | 2.20E+01 | 3.13E+02 | 3.18E-01 | 5.87E+03 | 3.61E-03 | 1.30E-03 | 5.84E+03 | 8.65E-05 | 5.38E-04 |
| F7 | AVG | 2.91E-03 | 3.41E-01 | 1.70E-03 | 4.57E+01 | 6.62E-02 | 7.29E-02 | 4.50E+00 | 1.16E-03 | 5.24E-02 |
| F7 | STD | 1.83E-03 | 1.10E-01 | 1.06E-03 | 7.82E+00 | 4.23E-02 | 2.21E-02 | 9.21E+00 | 3.63E-04 | 1.37E-02 |
| F8 | AVG | -1.24E+04 | -6.45E+03 | -5.97E+03 | -2.33E+03 | -5.85E+03 | **-5.19E+19** | -8.48E+03 | -7.76E+03 | -6.82E+03 |
| F8 | STD | 3.50E+01 | 3.03E+02 | 7.10E+02 | 2.96E+02 | 1.16E+03 | 1.76E+20 | 7.98E+02 | 1.04E+03 | 3.94E+02 |
| F9 | AVG | 0.00E+00 | 1.82E+02 | 2.19E+00 | 1.92E+02 | 3.82E+01 | 1.51E+01 | 1.59E+02 | 1.40E+01 | 1.58E+02 |
| F9 | STD | 0.00E+00 | 1.24E+01 | 3.69E+00 | 3.56E+01 | 1.12E+01 | 1.25E+00 | 3.21E+01 | 5.45E+00 | 1.17E+01 |
| F10 | AVG | 2.13E+00 | 7.14E+00 | 1.03E-13 | 1.92E+01 | 4.58E-02 | 3.29E-02 | 1.74E+01 | 6.45E-15 | 1.21E-02 |
| F10 | STD | 3.53E-01 | 1.08E+00 | 1.70E-14 | 2.43E-01 | 1.20E-02 | 7.93E-03 | 4.95E+00 | 1.79E-15 | 3.30E-03 |
| F11 | AVG | 1.46E+00 | 1.73E+01 | 4.76E-03 | 6.01E+02 | 4.23E-03 | 4.29E-05 | 3.10E+01 | 0.00E+00 | 3.52E-02 |
| F11 | STD | 1.69E-01 | 3.63E+00 | 8.57E-03 | 5.50E+01 | 1.29E-03 | 2.00E-05 | 5.94E+01 | 0.00E+00 | 7.20E-02 |
| F12 | AVG | 6.68E-01 | 3.05E+02 | 4.83E-02 | 4.71E+08 | 3.13E-04 | 5.57E-05 | 2.46E+02 | 7.35E-06 | 2.25E-03 |
| F12 | STD | 2.62E-01 | 1.04E+03 | 2.12E-02 | 1.54E+08 | 1.76E-04 | 4.96E-05 | 1.21E+03 | 7.45E-06 | 1.70E-03 |
| F13 | AVG | 1.82E+00 | 9.59E+04 | 5.96E-01 | 9.40E+08 | 2.08E-03 | 8.19E-03 | 2.73E+07 | 7.89E-02 | 9.12E-03 |
| F13 | STD | 3.41E-01 | 1.46E+05 | 2.23E-01 | 1.67E+08 | 9.62E-04 | 6.74E-03 | 1.04E+08 | 8.78E-02 | 1.16E-02 |
Table 3
Comparative analysis of EHHO and other published algorithms in Heidari et al. (2019) and Birogul (2019) for benchmark functions (F1–F13) with 100 dimensions
| Benchmark | Metric | EHHO | HHODE/rand/1 | HHODE/best/1 | HHODE/current-to-best/2 | HHODE/best/2 | HHODE/rand/2 | HHO | GA | PSO |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | **5.18E-190** | 1.55E-147 | 6.45E-177 | 7.49E-148 | 2.56E-140 | 3.08E-127 | 1.91E-94 | 5.41E+04 | 1.06E+05 |
| F1 | STD | 0.00E+00 | 8.09E-147 | 0.00E+00 | 2.94E-147 | 1.40E-139 | 1.25E-126 | 8.66E-94 | 1.42E+04 | 8.47E+03 |
| F2 | AVG | **2.93E-108** | 2.21E-75 | 3.28E-93 | 3.23E-76 | 5.46E-75 | 5.26E-64 | 9.98E-52 | 2.53E+02 | 6.06E+23 |
| F2 | STD | 1.61E-107 | 8.80E-75 | 1.29E-92 | 1.76E-75 | 2.90E-74 | 2.81E-63 | 2.66E-51 | 1.41E+01 | 2.18E+24 |
| F3 | AVG | 2.41E-22 | 1.39E-109 | 4.61E-119 | 1.14E-103 | 3.18E-96 | 9.44E-76 | **1.84E-59** | 2.53E+05 | 4.22E+05 |
| F3 | STD | 1.32E-21 | 6.78E-109 | 2.52E-118 | 6.26E-103 | 1.74E-95 | 4.78E-75 | 1.01E-58 | 5.03E+04 | 7.08E+04 |
| F4 | AVG | **3.84E-116** | 1.53E-72 | 1.82E-86 | 5.48E-79 | 4.38E-72 | 4.34E-63 | 8.76E-47 | 8.19E+01 | 6.07E+01 |
| F4 | STD | 2.10E-115 | 6.93E-72 | 9.98E-86 | 1.46E-78 | 1.48E-71 | 1.63E-62 | 4.79E-46 | 3.15E+00 | 3.05E+00 |
| F5 | AVG | 0.00E+00 | 9.84E-03 | 1.50E-02 | 4.61E-03 | 1.26E-02 | 8.82E-03 | 2.36E-02 | 2.37E+07 | 2.42E+08 |
| F5 | STD | 0.00E+00 | 5.81E-03 | 2.26E-03 | 2.08E-03 | 4.69E-03 | 3.12E-03 | 2.99E-02 | 8.43E+06 | 4.02E+07 |
| F6 | AVG | **4.11E-32** | 3.66E-04 | 3.25E-04 | 1.13E-04 | 2.40E-04 | 1.79E-04 | 5.12E-04 | 5.42E+04 | 1.07E+05 |
| F6 | STD | 2.25E-31 | 2.06E-04 | 3.16E-04 | 2.01E-04 | 2.47E-04 | 1.77E-04 | 6.77E-04 | 1.09E+04 | 9.70E+03 |
| F7 | AVG | 1.27E-04 | 1.56E-04 | 1.61E-04 | 2.30E-04 | **1.17E-04** | 1.37E-04 | 1.85E-04 | 2.73E+01 | 3.41E+02 |
| F7 | STD | 1.41E-04 | 1.78E-04 | 1.75E-04 | 3.33E-04 | 8.39E-05 | 1.35E-04 | 4.06E-04 | 4.45E+01 | 8.74E+01 |
| F8 | AVG | -4.19E+04 | -4.19E+04 | -4.18E+04 | -4.19E+04 | -4.16E+04 | -4.19E+04 | -4.19E+04 | -4.10E+04 | -7.33E+03 |
| F8 | STD | 7.40E-12 | 2.26E+02 | 2.51E+02 | 7.89E+00 | 1.35E+03 | 1.56E+02 | 2.82E+00 | 1.14E+02 | 4.75E+02 |
| F9 | AVG | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 3.39E+02 | 1.16E+03 |
| F9 | STD | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.17E+01 | 5.74E+01 |
| F10 | AVG | **4.44E-16** | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 1.82E+01 | 1.91E+01 |
| F10 | STD | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.01E-31 | 4.35E-01 | 2.04E-01 |
| F11 | AVG | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 5.14E+02 | 9.49E+02 |
| F11 | STD | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.05E+02 | 6.00E+01 |
| F12 | AVG | **4.71E-33** | 8.03E-05 | 1.67E-04 | 1.33E-06 | 7.00E-05 | 4.62E-05 | 4.23E-06 | 4.55E+06 | 3.54E+08 |
| F12 | STD | 1.39E-48 | 2.15E-04 | 2.31E-04 | 2.07E-06 | 1.95E-04 | 1.13E-04 | 5.25E-06 | 8.22E+06 | 8.75E+07 |
| F13 | AVG | **1.35E-32** | 9.18E-05 | 1.06E-04 | 6.43E-06 | 2.36E-04 | 6.28E-06 | 9.13E-05 | 5.26E+07 | 8.56E+08 |
| F13 | STD | 2.78E-48 | 2.61E-04 | 2.66E-04 | 9.70E-06 | 3.84E-04 | 9.42E-61 | 1.26E-04 | 3.76E+07 | 2.16E+08 |

| Benchmark | Metric | BBO | FPA | GWO | BAT | FA | CS | MFO | TLBO | DE |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | 2.85E+03 | 1.39E+04 | 1.59E-12 | 2.72E+05 | 3.05E-01 | 3.17E-01 | 6.20E+04 | 3.62E-81 | 8.26E+03 |
| F1 | STD | 4.49E+02 | 2.71E+03 | 1.63E-12 | 1.42E+04 | 5.60E-02 | 5.28E-02 | 1.25E+04 | 4.14E-81 | 1.32E+03 |
| F2 | AVG | 1.59E+01 | 1.01E+02 | 4.31E-08 | 6.00E+43 | 1.45E+01 | 4.05E+00 | 2.46E+02 | 3.27E-41 | 1.21E+02 |
| F2 | STD | 3.74E+00 | 9.36E+00 | 1.46E-08 | 1.18E+44 | 6.73E+00 | 3.16E-01 | 4.48E+01 | 2.75E-41 | 2.33E+01 |
| F3 | AVG | 1.70E+05 | 1.89E+04 | 4.09E+02 | 1.43E+06 | 4.65E+04 | 6.88E+00 | 2.15E+05 | 4.33E-07 | 5.01E+05 |
| F3 | STD | 2.02E+04 | 5.44E+03 | 2.77E+02 | 6.21E+05 | 6.92E+03 | 1.02E+00 | 4.43E+04 | 8.20E-07 | 5.87E+04 |
| F4 | AVG | 7.08E+01 | 3.51E+01 | 8.89E-01 | 9.41E+01 | 1.91E+01 | 2.58E-01 | 9.31E+01 | 6.36E-33 | 9.62E+01 |
| F4 | STD | 4.73E+00 | 3.37E+00 | 9.30E-01 | 1.49E+00 | 3.12E+00 | 2.80E-02 | 2.13E+00 | 6.66E-33 | 1.00E+00 |
| F5 | AVG | 4.47E+05 | 4.64E+06 | 9.79E+01 | 1.10E+09 | 8.46E+02 | 1.33E+02 | 1.44E+08 | 9.67E+01 | 1.99E+07 |
| F5 | STD | 2.05E+05 | 1.98E+06 | 6.75E-01 | 9.47E+07 | 8.13E+02 | 7.34E+00 | 7.50E+07 | 7.77E-01 | 5.80E+06 |
| F6 | AVG | 2.85E+03 | 1.26E+04 | 1.03E+01 | 2.69E+05 | 2.95E-01 | 2.65E+00 | 6.68E+04 | 3.27E+00 | 8.07E+03 |
| F6 | STD | 4.07E+02 | 2.06E+03 | 1.05E+00 | 1.25E+04 | 5.34E-02 | 3.94E-01 | 1.46E+04 | 6.98E-01 | 1.64E+03 |
| F7 | AVG | 1.25E+00 | 5.84E+00 | 7.60E-03 | 3.01E+02 | 5.65E-01 | 1.21E+00 | 2.56E+02 | 1.50E-03 | 1.96E+01 |
| F7 | STD | 5.18E+00 | 2.16E+00 | 2.66E-03 | 2.66E+01 | 1.64E-01 | 2.65E-01 | 8.91E+01 | 5.39E-04 | 5.66E+00 |
| F8 | AVG | -3.85E+04 | -1.28E+04 | -1.67E+04 | -4.07E+03 | -1.81E+04 | **-2.84E+18** | -2.30E+04 | -1.71E+04 | -1.19E+04 |
| F8 | STD | 2.80E+02 | 4.64E+02 | 2.62E+03 | 9.37E+02 | 3.23E+03 | 6.91E+18 | 1.98E+03 | 3.54E+03 | 5.80E+02 |
| F9 | AVG | 9.11E+00 | 8.47E+02 | 1.03E+01 | 7.97E+02 | 2.36E+02 | 1.72E+02 | 8.65E+02 | 1.02E+01 | 1.03E+03 |
| F9 | STD | 2.73E+00 | 4.01E+01 | 9.02E+00 | 6.33E+01 | 2.63E+01 | 9.24E+00 | 8.01E+01 | 5.57E+01 | 4.03E+01 |
| F10 | AVG | 5.57E+00 | 8.21E+00 | 1.20E-07 | 1.94E+01 | 9.81E-01 | 3.88E-01 | 1.99E+01 | 1.66E-02 | 1.22E+01 |
| F10 | STD | 4.72E-01 | 1.14E+00 | 5.07E-08 | 6.50E-02 | 2.55E-01 | 5.23E-02 | 8.58E-02 | 9.10E-02 | 8.31E-01 |
| F11 | AVG | 2.24E+01 | 1.19E+02 | 4.87E-03 | 2.47E+03 | 1.19E-01 | 4.56E-03 | 5.60E+02 | 0.00E+00 | 7.42E+01 |
| F11 | STD | 4.35E+00 | 2.00E+01 | 1.07E-02 | 1.03E+02 | 2.34E-02 | 9.73E-04 | 1.23E+02 | 0.00E+00 | 1.40E+01 |
| F12 | AVG | 3.03E+02 | 1.55E+05 | 2.87E-01 | 2.64E+09 | 4.45E+00 | 2.47E-02 | 2.82E+08 | 3.03E-02 | 3.90E+07 |
| F12 | STD | 1.48E+03 | 1.74E+05 | 6.41E-02 | 2.69E+08 | 1.32E+00 | 5.98E-03 | 1.45E+08 | 1.02E-02 | 1.88E+07 |
| F13 | AVG | 6.82E+04 | 2.76E+06 | 6.87E+00 | 5.01E+09 | 4.50E+01 | 5.84E+00 | 6.68E+08 | 5.47E+00 | 7.19E+07 |
| F13 | STD | 3.64E+04 | 1.80E+06 | 3.32E-01 | 3.93E+08 | 2.24E+01 | 1.21E+00 | 3.05E+08 | 8.34E-01 | 2.73E+07 |
Table 4
Comparative analysis of EHHO and other published algorithms in Heidari et al. (2019) and Birogul (2019) for benchmark functions (F1–F13) with 500 dimensions
| Benchmark | Metric | EHHO | HHODE/rand/1 | HHODE/best/1 | HHODE/current-to-best/2 | HHODE/best/2 | HHODE/rand/2 | HHO | GA | PSO |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | 5.31E-133 | 6.09E-142 | **9.89E-165** | 1.93E-150 | 6.31E-141 | 2.06E-125 | 1.46E-92 | 6.06E+05 | 6.42E+05 |
| F1 | STD | 2.91E-132 | 3.22E-141 | 1.01E-164 | 1.05E-149 | 2.68E-140 | 1.02E-124 | 8.01E-92 | 7.01E+04 | 2.96E+04 |
| F2 | AVG | **4.22E-109** | 1.67E-72 | 1.35E-91 | 1.25E-78 | 8.71E-75 | 9.81E-62 | 7.87E-49 | 1.94E+03 | 6.08E+09 |
| F2 | STD | 2.31E-108 | 9.02E-72 | 7.34E-91 | 4.41E-78 | 4.70E-74 | 5.35E-61 | 3.11E-48 | 7.03E+01 | 1.70E+10 |
| F3 | AVG | 1.80E-13 | 6.25E-95 | 4.12E-125 | **5.24E-126** | 2.57E-105 | 8.15E-101 | 6.54E-37 | 5.79E+06 | 1.13E+07 |
| F3 | STD | 9.82E-13 | 3.26E-94 | 2.73E-124 | 3.46E-125 | 1.01E-105 | 6.23E-101 | 3.58E-36 | 9.08E+05 | 1.43E+06 |
| F4 | AVG | **3.66E-111** | 3.39E-76 | 9.19E-90 | 5.68E-76 | 4.64E-71 | 1.12E-64 | 1.29E-47 | 9.59E+01 | 8.18E+01 |
| F4 | STD | 2.00E-110 | 9.94E-76 | 5.01E-89 | 3.09E-75 | 1.49E-70 | 5.69E-64 | 4.11E-47 | 1.20E+00 | 1.49E+00 |
| F5 | AVG | 0.00E+00 | 2.61E-01 | 3.44E-01 | 2.12E-01 | 8.14E-01 | 2.31E-01 | 3.10E-01 | 1.79E+09 | 1.84E+09 |
| F5 | STD | 0.00E+00 | 3.61E-01 | 7.37E-01 | 2.80E-01 | 1.63E+00 | 4.15E-01 | 3.73E-01 | 4.11E+08 | 1.11E+08 |
| F6 | AVG | 0.00E+00 | 2.48E-03 | 1.15E-03 | 4.55E-04 | 5.01E-01 | 6.49E-03 | 2.94E-03 | 6.27E+05 | 6.57E+05 |
| F6 | STD | 0.00E+00 | 2.65E-02 | 2.45E-03 | 1.48E-04 | 1.30E+00 | 2.66E-02 | 3.98E-03 | 7.43E+04 | 3.29E+04 |
| F7 | AVG | 8.85E-05 | 1.49E-04 | 9.14E-05 | **1.39E-05** | 1.23E-04 | 1.42E-04 | 2.51E-04 | 9.10E+03 | 1.43E+04 |
| F7 | STD | 9.98E-05 | 1.92E-04 | 8.46E-05 | 2.24E-05 | 1.07E-04 | 1.11E-04 | 2.43E-04 | 2.20E+03 | 1.51E+03 |
| F8 | AVG | -2.09E+05 | -2.10E+05 | -2.10E+05 | -2.10E+05 | -2.10E+05 | -2.09E+05 | -2.09E+05 | -1.31E+05 | -1.65E+04 |
| F8 | STD | 2.96E-11 | 9.11E+00 | 6.29E+00 | 3.95E+00 | 4.95E+00 | 7.80E+01 | 2.84E+01 | 2.31E+04 | 9.99E+02 |
| F9 | AVG | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 3.29E+03 | 6.63E+03 |
| F9 | STD | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 1.96E+02 | 1.07E+02 |
| F10 | AVG | **4.44E-16** | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 1.96E+01 | 1.97E+01 |
| F10 | STD | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 4.01E-31 | 2.04E-01 | 1.04E-01 |
| F11 | AVG | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 5.42E+03 | 5.94E+03 |
| F11 | STD | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 7.32E+02 | 3.19E+02 |
| F12 | AVG | **9.42E-34** | 2.71E-06 | 2.65E-06 | 7.33E-07 | 7.52E-06 | 7.35E-06 | 1.41E-06 | 2.79E+09 | 3.51E+09 |
| F12 | STD | 1.74E-49 | 9.31E-06 | 8.69E-06 | 2.22E-06 | 1.85E-05 | 2.35E-06 | 1.48E-06 | 1.11E+09 | 4.16E+08 |
| F13 | AVG | **3.09E-31** | 2.82E-04 | 3.11E-04 | 6.38E-05 | 1.31E-04 | 3.99E-04 | 3.44E-04 | 8.84E+09 | 6.82E+09 |
| F13 | STD | 1.62E-30 | 3.32E-04 | 3.38E-04 | 1.31E-04 | 1.20E-04 | 2.95E-04 | 4.75E-04 | 2.00E+09 | 8.45E+08 |

| Benchmark | Metric | BBO | FPA | GWO | BAT | FA | CS | MFO | TLBO | DE |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | 1.60E+05 | 8.26E+04 | 1.42E-03 | 1.52E+06 | 6.30E+04 | 6.80E+00 | 1.15E+06 | 2.14E-77 | 7.43E+05 |
| F1 | STD | 9.76E+03 | 1.32E+04 | 3.99E-04 | 3.58E+04 | 8.47E+03 | 4.93E-01 | 3.54E+04 | 1.94E-77 | 3.67E+04 |
| F2 | AVG | 5.95E+02 | 5.13E+02 | 1.10E-02 | 8.34E+09 | 7.13E+02 | 4.57E+01 | 3.00E+08 | 2.31E-39 | 3.57E+09 |
| F2 | STD | 1.70E+01 | 4.84E+01 | 1.93E-03 | 1.70E+10 | 3.76E+01 | 2.05E+00 | 1.58E+09 | 1.63E-39 | 1.70E+10 |
| F3 | AVG | 2.98E+06 | 5.34E+05 | 3.34E+05 | 3.37E+07 | 1.19E+06 | 2.03E+02 | 4.90E+06 | 1.06E+00 | 1.20E+07 |
| F3 | STD | 3.87E+05 | 1.34E+05 | 7.95E+04 | 1.41E+07 | 1.88E+05 | 2.72E+01 | 1.02E+06 | 3.70E+00 | 1.49E+06 |
| F4 | AVG | 9.35E+01 | 4.52E+01 | 6.51E+01 | 9.82E+01 | 5.00E+01 | 4.06E-01 | 9.88E+01 | 4.02E-31 | 9.92E+01 |
| F4 | STD | 9.05E-01 | 4.28E+00 | 5.72E+00 | 3.32E-01 | 1.73E+00 | 3.03E-02 | 4.15E-01 | 2.67E-31 | 2.33E-01 |
| F5 | AVG | 2.07E+08 | 3.30E+07 | 4.98E+02 | 6.94E+09 | 2.56E+07 | 1.21E+03 | 5.01E+09 | 4.97E+02 | 4.57E+09 |
| F5 | STD | 2.08E+07 | 8.76E+06 | 5.23E-01 | 2.23E+08 | 6.14E+06 | 7.04E+01 | 2.50E+08 | 3.07E-01 | 1.25E+09 |
| F6 | AVG | 1.68E+05 | 8.01E+04 | 9.22E+01 | 1.53E+06 | 6.30E+04 | 8.27E+01 | 1.16E+06 | 7.82E+01 | 7.23E+05 |
| F6 | STD | 8.23E+03 | 9.32E+03 | 2.15E+00 | 3.37E+04 | 8.91E+03 | 2.24E+00 | 3.48E+04 | 2.50E+00 | 3.28E+04 |
| F7 | AVG | 2.62E+03 | 2.53E+02 | 4.67E-02 | 2.23E+04 | 3.71E+02 | 8.05E+01 | 3.84E+04 | 1.71E-03 | 2.39E+04 |
| F7 | STD | 3.59E+02 | 6.28E+01 | 1.12E-02 | 1.15E+03 | 6.74E+01 | 1.37E+01 | 2.24E+03 | 4.80E-04 | 2.72E+03 |
| F8 | AVG | -1.42E+05 | -3.00E+04 | -5.70E+04 | -9.03E+03 | -7.27E+04 | **-2.10E+17** | -6.29E+04 | -5.02E+04 | -2.67E+04 |
| F8 | STD | 1.98E+03 | 1.14E+03 | 3.12E+03 | 2.12E+03 | 1.15E+04 | 1.14E+18 | 5.71E+03 | 1.00E+04 | 1.38E+03 |
| F9 | AVG | 7.86E+02 | 4.96E+03 | 7.84E+01 | 6.18E+03 | 2.80E+03 | 2.54E+03 | 6.96E+03 | 0.00E+00 | 7.14E+03 |
| F9 | STD | 3.42E+01 | 7.64E+01 | 3.13E+01 | 1.20E+02 | 1.42E+02 | 5.21E+01 | 1.48E+02 | 0.00E+00 | 1.05E+02 |
| F10 | AVG | 1.44E+01 | 8.55E+00 | 1.93E-03 | 2.04E+01 | 1.24E+01 | 1.07E+00 | 2.03E+01 | 7.62E-01 | 2.06E+01 |
| F10 | STD | 2.22E-01 | 8.66E-01 | 3.50E-04 | 3.25E-02 | 4.46E-01 | 6.01E-02 | 1.48E-01 | 2.33E+00 | 2.45E-01 |
| F11 | AVG | 1.47E+03 | 6.88E+02 | 1.55E-02 | 1.38E+04 | 5.83E+02 | 2.66E-02 | 1.03E+04 | 0.00E+00 | 6.75E+03 |
| F11 | STD | 8.10E+01 | 8.17E+01 | 3.50E-02 | 3.19E+02 | 7.33E+01 | 2.30E-03 | 4.43E+02 | 0.00E+00 | 2.97E+02 |
| F12 | AVG | 1.60E+08 | 4.50E+06 | 7.42E-01 | 1.70E+10 | 8.67E+05 | 3.87E-01 | 1.20E+10 | 4.61E-01 | 1.60E+10 |
| F12 | STD | 3.16E+07 | 3.37E+06 | 4.38E-02 | 6.29E+08 | 6.23E+05 | 2.47E-02 | 6.82E+08 | 2.40E-02 | 2.34E+09 |
| F13 | AVG | 5.13E+08 | 3.94E+07 | 5.06E+01 | 3.17E+10 | 2.29E+07 | 6.00E+01 | 2.23E+10 | 4.98E+01 | 2.42E+10 |
| F13 | STD | 6.59E+07 | 1.87E+07 | 1.30E+00 | 9.68E+08 | 9.46E+06 | 1.13E+00 | 1.13E+09 | 9.97E-03 | 6.39E+09 |
Table 5
Comparative analysis of EHHO and other published algorithms in Heidari et al. (2019) and Birogul (2019) for benchmark functions (F1–F13) with 1000 dimensions
| Benchmark | Metric | EHHO | HHODE/rand/1 | HHODE/best/1 | HHODE/current-to-best/2 | HHODE/best/2 | HHODE/rand/2 | HHO | GA | PSO |
|---|---|---|---|---|---|---|---|---|---|---|
| F1 | AVG | **8.30E-183** | 1.36E-144 | 2.39E-181 | 9.82E-146 | 1.42E-140 | 1.63E-128 | 1.06E-94 | 1.36E+06 | 1.36E+06 |
| F1 | STD | 0.00E+00 | 5.87E-144 | 5.21E-180 | 5.38E-145 | 7.00E-140 | 8.52E-128 | 4.97E-94 | 1.79E+05 | 6.33E+04 |
| F2 | AVG | **7.70E-103** | 4.28E-73 | 2.07E-89 | 4.34E-76 | 5.55E-73 | 4.59E-61 | 2.52E-50 | 4.29E+03 | 1.79E+10 |
| F2 | STD | 4.20E-102 | 2.32E-72 | 1.13E-88 | 2.37E-75 | 2.18E-72 | 2.32E-60 | 5.02E-50 | 8.86E+01 | 1.79E+10 |
| F3 | AVG | 4.03E-19 | 2.23E-91 | 3.37E-121 | **2.63E-121** | 7.13E-98 | 9.63E-95 | 1.79E-17 | 2.29E+07 | 3.72E+07 |
| F3 | STD | 2.20E-18 | 1.94E-89 | 5.93E-120 | 4.98E-120 | 8.12E-97 | 1.01E-93 | 9.81E-17 | 3.93E+06 | 1.16E+07 |
| F4 | AVG | **1.57E-105** | 5.30E-71 | 7.23E-90 | 3.51E-76 | 1.70E-72 | 2.33E-63 | 1.43E-46 | 9.79E+01 | 8.92E+01 |
| F4 | STD | 8.62E-105 | 2.90E-70 | 3.73E-89 | 1.89E-75 | 8.90E-72 | 1.27E-62 | 7.74E-46 | 7.16E-01 | 2.39E+00 |
| F5 | AVG | 0.00E+00 | 1.78E-01 | 8.45E-02 | 6.23E-02 | 1.57E-01 | 1.61E-01 | 5.73E-01 | 4.73E+09 | 3.72E+09 |
| F5 | STD | 0.00E+00 | 2.36E-01 | 1.59E-01 | 1.61E-01 | 2.46E-01 | 2.22E-01 | 1.40E+00 | 9.63E+08 | 2.76E+08 |
| F6 | AVG | 0.00E+00 | 5.85E-03 | 2.34E-03 | 4.65E-04 | 5.32E-03 | 2.12E-03 | 3.61E-03 | 1.52E+06 | 1.38E+06 |
| F6 | STD | 0.00E+00 | 6.57E-03 | 5.96E-03 | 4.13E-04 | 1.05E-02 | 2.84E-03 | 5.38E-03 | 1.88E+05 | 6.05E+04 |
| F7 | AVG | 1.00E-04 | 1.20E-04 | 2.37E-04 | **9.93E-05** | 1.27E-04 | 2.07E-04 | 1.41E-04 | 4.45E+04 | 6.26E+04 |
| F7 | STD | 1.00E-04 | 1.33E-04 | 3.61E-04 | 1.33E-04 | 1.07E-04 | 1.27E-04 | 1.63E-04 | 8.40E+03 | 4.16E+03 |
| F8 | AVG | -4.19E+05 | -4.19E+05 | -4.13E+05 | -4.19E+05 | -4.19E+05 | -4.19E+05 | -4.19E+05 | -1.94E+05 | -2.30E+04 |
| F8 | STD | 1.18E-10 | 1.10E+01 | 9.67E+03 | 6.51E-01 | 4.25E+02 | 1.16E+02 | 1.03E+02 | 9.74E+03 | 1.70E+03 |
| F9 | AVG | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 8.02E+03 | 1.35E+04 |
| F9 | STD | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | 0.00E+00 | | | | |
0.00E+00
0.00E+00
3.01E+02
1.83E+02
F10
AVG
4.44E \(-\)16
8.88E\(-\)16
8.88E\(-\)16
8.88E\(-\)16
8.88E\(-\)16
8.88E\(-\)16
8.88E\(-\)16
1.95E+01
1.98E+01
STD
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
4.01E\(-\)31
2.55E\(-\)01
1.24E\(-\)01
F11
AVG
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
1.26E+04
1.23E+04
STD
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
1.63E+03
5.18E+02
F12
AVG
4.71E \(-\)34
9.72E\(-\)06
3.60E\(-\)06
1.14E\(-\)06
3.18E\(-\)06
2.95E\(-\)06
1.02E\(-\)06
1.14E+10
7.73E+09
STD
8.70E\(-\)50
1.87E\(-\)05
3.36E\(-\)05
1.29E\(-\)06
4.57E\(-\)05
3.14E\(-\)06
1.16E\(-\)06
1.27E+09
6.72E+08
F13
AVG
1.35E \(-\)32
6.07E\(-\)04
6.55E\(-\)04
2.74E\(-\)04
8.92E\(-\)04
4.99E\(-\)04
8.41E\(-\)04
1.91E+10
1.58E+10
STD
2.78E\(-\)48
8.23E\(-\)04
7.26E\(-\)04
2.25E\(-\)04
1.97E\(-\)03
1.77E\(-\)03
1.18E\(-\)03
4.21E+09
1.56E+09
Benchmark
Metric
BBO
FPA
GWO
BAT
FA
CS
MFO
TLBO
DE
F1
AVG
6.51E+05
1.70E+05
2.42E\(-\)01
3.12E+06
3.20E+05
1.65E+01
2.73E+06
2.73E\(-\)76
2.16E+06
STD
2.37E+04
2.99E+04
4.72E\(-\)02
4.61E+04
2.11E+04
1.27E+00
4.70E+04
7.67E\(-\)76
3.39E+05
F2
AVG
1.96E+03
8.34E+02
7.11E\(-\)01
1.79E+10
1.79E+10
1.02E+02
1.79E+10
1.79E+10
1.79E+10
STD
2.18E+01
8.96E+01
4.96E\(-\)01
1.79E+10
1.79E+10
3.49E+00
1.79E+10
1.79E+10
1.79E+10
F3
AVG
9.92E+06
1.95E+06
1.49E+06
1.35E+08
4.95E+06
8.67E+02
1.94E+07
8.61E\(-\)01
5.03E+07
STD
1.48E+06
4.20E+05
2.43E+05
4.76E+07
7.19E+05
1.10E+02
3.69E+06
1.33E+00
4.14E+06
F4
AVG
9.73E+01
5.03E+01
7.94E+01
9.89E+01
6.06E+01
4.44E\(-\)01
9.96E+01
1.01E\(-\)30
9.95E+01
STD
7.62E\(-\)01
5.37E+00
2.77E+00
2.22E\(-\)01
2.69E+00
2.24E\(-\)02
1.49E\(-\)01
5.25E\(-\)31
1.43E\(-\)01
F5
AVG
1.29E+09
7.27E+07
1.06E+03
1.45E+10
2.47E+08
2.68E+03
1.25E+10
9.97E+02
1.49E+10
STD
6.36E+07
1.84E+07
3.07E+01
3.20E+08
3.24E+07
1.27E+02
3.15E+08
2.01E\(-\)01
3.06E+08
F6
AVG
6.31E+05
1.60E+05
2.03E+02
3.11E+06
3.18E+05
2.07E+02
2.73E+06
1.93E+02
2.04E+06
STD
1.82E+04
1.86E+04
2.45E+00
6.29E+04
2.47E+04
4.12E+00
4.56E+04
2.35E+00
2.46E+05
F7
AVG
3.84E+04
1.09E+03
1.47E\(-\)01
1.25E+05
4.44E+03
4.10E+02
1.96E+05
1.83E\(-\)03
2.27E+05
STD
2.91E+03
3.49E+02
3.28E\(-\)02
3.93E+03
4.00E+02
8.22E+01
6.19E+03
5.79E\(-\)04
3.52E+04
F8
AVG
\(-\)2.29E+05
\(-\)4.25E+04
\(-\)8.64E+04
\(-\)1.48E+04
\(-\)1.08E+05
\(-\) 9.34E+14
\(-\)9.00E+04
\(-\)6.44E+04
\(-\)3.72E+04
STD
3.76E+03
1.47E+03
1.91E+04
3.14E+03
1.69E+04
2.12E+15
7.20E+03
1.92E+04
1.23E+03
F9
AVG
2.86E+03
1.01E+04
2.06E+02
1.40E+04
7.17E+03
6.05E+03
1.56E+04
0.00E+00
1.50E+04
STD
9.03E+01
1.57E+02
4.81E+01
1.85E+02
1.88E+02
1.41E+02
1.94E+02
0.00E+00
1.79E+02
F10
AVG
1.67E+01
8.62E+00
1.88E\(-\)02
2.07E+01
1.55E+01
1.18E+00
2.04E+01
5.09E\(-\)01
2.07E+01
STD
8.63E\(-\)02
9.10E\(-\)01
2.74E\(-\)03
2.23E\(-\)02
2.42E\(-\)01
5.90E\(-\)02
2.16E\(-\)01
1.94E+00
1.06E\(-\)01
F11
AVG
5.75E+03
1.52E+03
6.58E\(-\)02
2.83E+04
2.87E+03
3.92E\(-\)02
2.47E+04
1.07E\(-\)16
1.85E+04
STD
1.78E+02
2.66E+02
8.82E\(-\)02
4.21E+02
1.78E+02
3.58E\(-\)03
4.51E+02
2.03E\(-\)17
2.22E+03
F12
AVG
1.56E+09
8.11E+06
1.15E+00
3.63E+10
6.76E+07
6.53E\(-\)01
3.04E+10
6.94E\(-\)01
3.72E+10
STD
1.46E+08
3.46E+06
1.82E\(-\)01
1.11E+09
1.80E+07
2.45E\(-\)02
9.72E+08
1.90E\(-\)02
7.67E+08
F13
AVG
4.17E+09
8.96E+07
1.21E+02
6.61E+10
4.42E+08
1.32E+02
5.62E+10
9.98E+01
6.66E+10
STD
2.54E+08
3.65E+07
1.11E+01
1.40E+09
7.91E+07
1.48E+00
1.76E+09
1.31E\(-\)02
2.26E+09
Table 6
Statistical test results with 95% confidence interval for EHHO on benchmark functions (F1–F13) with varying dimensions as compared with those published in Heidari et al. (2019) and Birogul (2019)
EHHO vs.                     30-Dimension                          100-Dimension
                             \(+/=/-\)  p value  \(\alpha =0.05\)  \(+/=/-\)  p value  \(\alpha =0.05\)
HHODE/rand/1                 10/2/1    0.0067   \(\checkmark \)    10/2/1    0.0067   \(\checkmark \)
HHODE/best/1                 9/2/2     0.0348   \(\checkmark \)    10/2/1    0.0067   \(\checkmark \)
HHODE/current-to-best/2      10/2/1    0.0067   \(\checkmark \)    9/2/2     0.0348   \(\checkmark \)
HHODE/best/2                 10/2/1    0.0067   \(\checkmark \)    9/2/2     0.0348   \(\checkmark \)
HHODE/rand/2                 10/2/1    0.0067   \(\checkmark \)    10/2/1    0.0067   \(\checkmark \)
HHO                          10/2/1    0.0067   \(\checkmark \)    9/2/2     0.0348   \(\checkmark \)
GA                           12/0/1    0.0023   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
PSO                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
BBO                          12/1/0    0.0005   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
FPA                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
GWO                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
BAT                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
FA                           13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
CS                           12/0/1    0.0023   \(\checkmark \)    12/0/1    0.0023   \(\checkmark \)
MFO                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
TLBO                         12/1/0    0.0005   \(\checkmark \)    12/1/0    0.0005   \(\checkmark \)
DE                           13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)

EHHO vs.                     500-Dimension                         1000-Dimension
                             \(+/=/-\)  p value  \(\alpha =0.05\)  \(+/=/-\)  p value  \(\alpha =0.05\)
HHODE/rand/1                 8/2/3     0.1317   \(\approx \)       9/2/2     0.0348   \(\checkmark \)
HHODE/best/1                 8/2/3     0.1317   \(\approx \)       10/2/1    0.0067   \(\checkmark \)
HHODE/current-to-best/2      7/2/4     0.3657   \(\approx \)       8/2/3     0.1317   \(\approx \)
HHODE/best/2                 8/2/3     0.1317   \(\approx \)       10/2/1    0.0067   \(\checkmark \)
HHODE/rand/2                 10/2/1    0.0067   \(\checkmark \)    10/2/1    0.0067   \(\checkmark \)
HHO                          10/2/1    0.0067   \(\checkmark \)    10/2/1    0.0067   \(\checkmark \)
GA                           13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
PSO                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
BBO                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
FPA                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
GWO                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
BAT                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
FA                           13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
CS                           12/0/1    0.0023   \(\checkmark \)    12/0/1    0.0023   \(\checkmark \)
MFO                          13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
TLBO                         11/2/0    0.0009   \(\checkmark \)    12/1/0    0.0005   \(\checkmark \)
DE                           13/0/0    0.0003   \(\checkmark \)    13/0/0    0.0003   \(\checkmark \)
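The exact test procedure behind these p values is not restated in this section, but they are consistent with a two-sided sign test under the normal approximation, with tied functions discarded: for a 10/2/1 row, \(z = (10 - 5.5)/\sqrt{11/4} \approx 2.71\), giving \(p \approx 0.0067\). A minimal sketch (the helper name is illustrative, not from the original study):

```python
import math

def sign_test_p(wins, losses):
    """Two-sided sign test (normal approximation), ties discarded:
    z = (wins - n/2) / sqrt(n/4), p = 2 * (1 - Phi(|z|))."""
    n = wins + losses
    z = (wins - n / 2) / math.sqrt(n / 4)
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)

# Reproducing the 10/2/1 and 13/0/0 rows of Table 6:
print(round(sign_test_p(10, 1), 4))  # -> 0.0067
print(round(sign_test_p(13, 0), 4))  # -> 0.0003
```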
Table 7
Final rankings of each algorithm in Heidari et al. (2019) and Birogul (2019) for benchmark function F1–F13 with dimension of 30–1000
Algorithms
30-D
100-D
500-D
1000-D
\(\mu _{r}\)
AR
\(\mu _{r}\)
AR
\(\mu _{r}\)
AR
\(\mu _{r}\)
AR
EHHO
1.6923
1
1.7692
1
2.2308
2
1.7692
1
HHODE/rand/1
4.0769
3
4.7692
7
3.5385
4
4.0000
4
HHODE/best/1
4.0769
3
3.6154
3
2.6923
3
3.5385
3
HHODE/current-to-best/2
2.6154
2
2.6923
2
2.0000
1
2.0769
2
HHODE/best/2
4.4615
5
4.0769
5
3.9231
5
4.2308
6
HHODE/rand/2
4.4615
5
3.6923
4
4.6923
6
4.1538
5
HHO
4.5385
7
4.6154
6
5.0000
7
4.5385
7
GA
13.2308
14
14.2308
14
13.8462
14
14.0000
14
PSO
16.9231
17
16.6154
17
15.2308
15
14.4615
15
BBO
11.3846
11
11.6923
12
12.4615
13
12.3846
13
FPA
14.7692
15
13.2308
13
11.9231
12
11.3846
11
GWO
10.3077
10
9.8462
10
9.6923
10
9.6154
10
BAT
17.9231
18
17.3846
18
17.3846
18
16.8462
18
FA
11.6154
12
10.7692
11
11.4615
11
11.9231
12
CS
9.6154
9
9.0000
9
9.0000
9
8.9231
9
MFO
15.0000
16
15.4615
16
16.0769
16
15.9231
16
TLBO
7.5385
8
8.1538
8
7.5385
8
8.4615
8
DE
11.6923
13
14.7692
15
16.7692
17
16.6923
17
Table 8
Results of benchmark functions (F14–F29) for EHHO vs its competitors in Heidari et al. (2019) and Birogul (2019)
Benchmark
Metric
EHHO
HHODE/rand/1
HHODE/best/1
HHODE/current-to-best/2
HHODE/best/2
HHODE/rand/2
HHO
GA
PSO
F14
AVG
9.98E \(-\)01
9.98E \(-\)01
9.98E \(-\)01
9.98E \(-\)01
9.98E \(-\)01
9.98E \(-\)01
9.98E \(-\)01
9.98E \(-\)01
1.39E+00
STD
1.09E\(-\)16
4.52E\(-\)16
3.39E\(-\)16
4.52E\(-\)16
3.39E\(-\)16
4.52E\(-\)16
9.23E\(-\)01
4.52E\(-\)16
4.60E\(-\)01
F15
AVG
3.08E \(-\)04
3.10E\(-\)04
3.10E\(-\)04
3.10E\(-\)04
3.17E\(-\)04
3.14E\(-\)04
3.10E\(-\)04
3.33E\(-\)02
1.61E\(-\)03
STD
5.35E\(-\)07
1.49E\(-\)06
1.48E\(-\)06
6.93E\(-\)06
1.92E\(-\)05
1.34E\(-\)05
1.97E\(-\)04
2.70E\(-\)02
4.60E\(-\)04
F16
AVG
\(-\) 1.03E+00
\(-\)1.03E+00
\(-\)1.03E+00
\(-\)1.03E+00
\(-\)1.03E+00
\(-\)1.03E+00
\(-\)1.03E+00
\(-\)3.78E\(-\)01
\(-\)1.03E+00
STD
2.86E\(-\)10
1.65E\(-\)09
2.28E\(-\)09
2.15E\(-\)09
3.87E\(-\)10
1.07E\(-\)09
6.78E\(-\)16
3.42E\(-\)01
2.95E\(-\)03
F17
AVG
3.98E \(-\)01
3.98E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
5.24E\(-\)01
4.00E\(-\)01
STD
1.14E\(-\)08
6.49E\(-\)06
7.01E\(-\)06
1.56E\(-\)05
2.41E\(-\)06
2.53E\(-\)05
2.54E\(-\)06
6.06E\(-\)02
1.39E\(-\)03
F18
AVG
1.11E+01
3.00E+00
3.00E+00
3.00E+00
3.00E+00
3.00E+00
3.00E+00
3.00E+00
3.10E+00
STD
1.26E+01
1.43E\(-\)06
1.18E\(-\)05
1.07E\(-\)05
6.33E\(-\)06
1.81E\(-\)05
0.00E+00
0.00E+00
7.60E\(-\)02
F19
AVG
\(-\) 3.86E+00
\(-\)3.86E+00
\(-\)3.86E+00
\(-\)3.86E+00
\(-\)3.86E+00
\(-\)3.86E+00
\(-\)3.86E+00
\(-\)3.42E+00
\(-\)3.86E+00
STD
4.16E\(-\)06
5.18E\(-\)03
6.17E\(-\)03
2.48E\(-\)03
3.21E\(-\)03
4.08E\(-\)03
2.44E\(-\)03
3.03E\(-\)01
1.24E\(-\)03
F20
AVG
\(-\)3.28E+00
\(-\) 3.32E+00
\(-\) 3.32E+00
\(-\) 3.32E+00
\(-\) 3.32E+00
\(-\) 3.32E+00
\(-\) 3.32E+00
\(-\)1.61E+00
\(-\)3.11E+00
STD
5.83E\(-\)02
1.36E\(-\)04
1.33E\(-\)04
7.44E\(-\)05
1.48E\(-\)04
1.50E\(-\)04
1.37E\(-\)01
4.60E\(-\)01
2.91E\(-\)02
F21
AVG
\(-\) 1.02E+01
\(-\)1.01E+01
\(-\)1.01E+01
\(-\)1.02E+01
\(-\)1.01E+01
\(-\)1.01E+01
\(-\)1.01E+01
\(-\)6.66E+00
\(-\)4.15E+00
STD
1.90E\(-\)10
1.39E\(-\)02
5.44E\(-\)03
4.86E\(-\)03
1.45E\(-\)02
6.05E\(-\)03
8.86E\(-\)01
3.73E+00
9.20E\(-\)01
F22
AVG
\(-\) 1.04E+01
\(-\)1.04E+01
\(-\)1.04E+01
\(-\)1.04E+01
\(-\)1.04E+01
\(-\)1.04E+01
\(-\)1.04E+01
\(-\)5.58E+00
\(-\)6.01E+00
STD
3.49E\(-\)05
7.13E\(-\)04
1.64E\(-\)03
7.32E\(-\)04
2.33E\(-\)03
3.14E\(-\)03
1.35E+00
2.61E+00
1.96E+00
F23
AVG
\(-\)1.05E+01
\(-\)1.05E+01
\(-\)1.04E+01
\(-\) 1.05E+01
\(-\)1.05E+01
\(-\)1.04E+01
\(-\) 1.05E+01
\(-\)4.70E+00
\(-\)4.72E+00
STD
3.76E\(-\)05
7.24E\(-\)02
8.99E\(-\)02
8.24E\(-\)04
1.46E\(-\)01
1.66E\(-\)01
9.28E\(-\)01
3.26E+00
1.74E+00
F24
AVG
5.23E+02
3.83E+02
3.86E+02
3.81E+02
3.82E+02
3.91E+02
3.97E+02
6.27E+02
7.68E+02
STD
1.64E+02
9.58E+01
9.50E+01
7.08E+01
8.38E+01
8.61E+01
7.96E+01
1.01E+02
7.61E+01
F25
AVG
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.99E+02
1.18E+03
STD
5.87E\(-\)13
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
2.94E+01
3.30E+01
F26
AVG
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.99E+02
1.18E+03
STD
6.34E\(-\)13
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
2.53E+01
3.52E+01
F27
AVG
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.10E+02
9.10E+02
1.00E+03
1.20E+03
STD
9.41E\(-\)13
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
2.67E+01
2.40E+01
F28
AVG
1.58E+03
8.61E+02
8.60E+02
8.60E+02
8.65E+02
8.65E+02
8.61E+02
1.51E+03
1.71E+03
STD
2.01E+02
1.46E+00
8.99E\(-\)01
2.13E\(-\)01
8.64E+00
8.58E+00
6.51E\(-\)01
9.46E+01
3.52E+01
F29
AVG
1.48E+03
5.58E+02
5.58E+02
5.58E+02
5.61E+02
5.62E+02
5.59E+02
1.94E+03
2.10E+03
STD
2.88E+02
2.22E+00
2.28E+00
1.63E+00
8.43E+00
8.49E+00
5.11E+00
1.13E+01
2.97E+01
Benchmark
Metric
BBO
FPA
GWO
BAT
FA
CS
MFO
TLBO
DE
F14
AVG
9.98E \(-\)01
9.98E \(-\)01
4.17E+00
1.27E+01
3.51E+00
1.27E+01
2.74E+00
9.98E \(-\)01
1.23E+00
STD
4.52E\(-\)16
2.00E\(-\)04
3.61E+00
6.96E+00
2.16E+00
1.81E\(-\)15
1.82E+00
4.52E\(-\)16
9.23E\(-\)01
F15
AVG
1.66E\(-\)02
6.88E\(-\)04
6.24E\(-\)03
3.00E\(-\)02
1.01E\(-\)03
3.13E\(-\)04
2.35E\(-\)03
1.03E\(-\)03
5.63E\(-\)04
STD
8.60E\(-\)03
1.55E\(-\)04
1.25E\(-\)02
3.33E\(-\)02
4.01E\(-\)04
2.99E\(-\)05
4.92E\(-\)03
3.66E\(-\)03
2.81E\(-\)04
F16
AVG
\(-\)8.30E\(-\)01
\(-\)1.03E+00
\(-\)1.03E+00
\(-\)6.87E\(-\)01
\(-\)1.03E+00
\(-\)1.03E+00
\(-\)1.03E+00
\(-\)1.03E+00
\(-\)1.03E+00
STD
3.16E\(-\)01
6.78E\(-\)16
6.78E\(-\)16
8.18E\(-\)01
6.75E\(-\)16
6.78E\(-\)16
6.78E\(-\)16
6.78E\(-\)16
6.78E\(-\)16
F17
AVG
5.49E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
3.98E\(-\)01
STD
6.05E\(-\)02
1.69E\(-\)16
1.69E\(-\)16
1.58E\(-\)03
1.69E\(-\)16
1.69E\(-\)16
1.69E\(-\)16
1.69E\(-\)16
1.69E\(-\)16
F18
AVG
3.00E+00
3.00E+00
3.00E+00
1.47E+01
3.00E+00
3.00E+00
3.00E+00
3.00E+00
3.00E+00
STD
0.00E+00
0.00E+00
4.07E\(-\)05
2.21E+01
0.00E+00
0.00E+00
0.00E+00
0.00E+00
0.00E+00
F19
AVG
\(-\)3.78E+00
\(-\)3.86E+00
\(-\)3.86E+00
\(-\)3.84E+00
\(-\)3.86E+00
\(-\)3.86E+00
\(-\)3.86E+00
\(-\)3.86E+00
\(-\)3.86E+00
STD
1.26E\(-\)01
3.16E\(-\)15
3.14E\(-\)03
1.41E\(-\)01
3.16E\(-\)15
3.16E\(-\)15
1.44E\(-\)03
3.16E\(-\)15
3.16E\(-\)15
F20
AVG
\(-\)2.71E+00
\(-\)3.30E+00
\(-\)3.26E+00
\(-\)3.25E+00
\(-\)3.28E+00
\(-\) 3.32E+00
\(-\)3.24E+00
\(-\)3.24E+00
\(-\)3.27E+00
STD
3.58E\(-\)01
1.95E\(-\)02
6.43E\(-\)02
5.89E\(-\)02
6.36E\(-\)02
1.78E\(-\)15
6.42E\(-\)02
1.51E\(-\)01
5.89E\(-\)02
F21
AVG
\(-\)8.32E+00
\(-\)5.22E+00
\(-\)8.64E+00
\(-\)4.27E+00
\(-\)7.67E+00
\(-\)5.06E+00
\(-\)6.89E+00
\(-\)8.65E+00
\(-\)9.65E+00
STD
2.88E+00
8.15E\(-\)03
2.56E+00
2.55E+00
3.51E+02
1.78E\(-\)15
3.18E+02
1.77E+02
1.52E+02
F22
AVG
\(-\)9.38E+00
\(-\)5.34E+00
\(-\)1.04E+01
\(-\)5.61E+00
\(-\)9.64E+00
\(-\)5.09E+00
\(-\)8.26E+00
\(-\)1.02E+01
\(-\)9.75E+00
STD
2.60E+00
5.37E\(-\)02
6.78E\(-\)04
3.02E+00
2.29E+00
8.88E\(-\)16
3.08E+00
7.27E\(-\)03
1.99E+00
F23
AVG
\(-\)6.24E+00
\(-\)5.29E+00
\(-\)1.01E+01
\(-\)3.97E+00
\(-\)9.75E+00
\(-\)5.13E+00
\(-\)7.66E+00
\(-\)1.01E+01
\(-\) 1.05E+01
STD
3.78E+02
3.56E\(-\)01
1.72E+00
3.01E+00
2.35E+00
1.78E\(-\)15
3.58E+00
1.70E+00
8.88E\(-\)15
F24
AVG
4.93E+02
5.19E+02
4.87E+02
1.29E+03
4.72E+02
4.69E+02
4.12E+02
6.13E+02
4.31E+02
STD
1.03E+02
4.78E+01
1.43E+02
1.50E+02
2.52E+02
6.06E+01
6.84E+01
1.23E+02
6.42E+01
F25
AVG
9.35E+02
1.02E+03
9.85E+02
1.46E+03
9.54E+02
9.10E+02
9.48E+02
9.67E+02
9.18E+02
STD
9.61E+00
3.19E+01
3.00E+01
6.84E+01
1.17E+01
3.67E\(-\)02
2.71E+01
2.74E+01
1.05E+00
F26
AVG
9.34E+02
1.02E+03
9.74E+02
1.48E+03
9.54E+02
9.10E+02
9.40E+02
9.84E+02
9.17E+02
STD
8.25E+00
3.49E+01
2.25E+01
4.56E+01
1.41E+01
4.72E\(-\)02
2.17E+01
4.53E+01
8.98E\(-\)01
F27
AVG
9.40E+02
1.01E+03
9.70E+02
1.48E+03
9.48E+02
9.10E+02
9.45E+02
9.79E+02
9.17E+02
STD
2.31E+01
3.15E+01
1.95E+01
6.06E+01
1.12E+01
4.97E\(-\)02
2.68E+01
3.82E+01
8.62E\(-\)01
F28
AVG
1.07E+03
1.54E+03
1.34E+03
1.96E+03
1.02E+03
1.34E+03
1.46E+03
1.47E+03
1.55E+03
STD
2.02E+02
4.29E+01
1.91E+02
5.85E+01
2.71E+02
1.34E+02
3.61E+01
2.69E+02
9.64E+01
F29
AVG
1.90E+03
2.03E+03
1.91E+03
2.22E+03
1.99E+03
1.90E+03
1.88E+03
1.88E+03
1.90E+03
STD
8.82E+00
3.03E+01
6.57E+00
3.55E+01
1.89E+01
1.86E+02
6.53E+00
3.49E+00
4.20E+00

4.2.1 Runtime analysis

Table 11 presents the runtime (in seconds) of EHHO for the benchmark functions across different dimensions (30D, 100D, 500D, and 1000D), providing insight into its computational efficiency and scalability. Most benchmark functions take more time to solve as the number of dimensions increases, indicating that the computational cost of EHHO grows with the problem dimension. From Table 11, functions F1, F4, F5, F6, F8, F9, and F10 have lower runtimes across all dimensions. On the other hand, functions F3, F7, F11, F12, and F13 exhibit significant runtime increases in higher dimensions. This suggests that these functions may pose scalability issues, as the algorithm takes longer to reach optimal solutions in higher-dimensional spaces. It is therefore essential to evaluate the scalability of the EHHO algorithm and find ways to improve its performance on higher-dimensional optimisation problems.
From Table 12, functions F15, F16, F17, F18, F19, and F20 have short runtimes, indicating that the algorithm finds satisfactory solutions for these functions rapidly. In contrast, functions F14, F21, F22, and F23 take longer to complete, indicating that EHHO requires more computational effort to find optimal solutions for these functions.
In addition, Table 13 presents the average runtime for functions F24–F29, which comprise rotated, shifted, and hybrid composition functions from the IEEE CEC 2005 competition (García et al. 2009). Function F29 has the shortest runtime, indicating that the algorithm finds satisfactory solutions faster for this function. The runtimes for F24 and F25–F27 are similar, at approximately 47 s and 49 s per run, respectively. Function F28 takes the longest to run, suggesting that it has more difficult optimisation properties that slow down the convergence of EHHO.
To obtain a comprehensive understanding of the computational efficiency of EHHO, it is valuable to compare its runtimes with other competing algorithms on the same benchmark functions and dimensions. Unfortunately, the authors of these algorithms did not disclose their computational times. Nevertheless, this study has employed the same maximum iterations as those published in Heidari et al. (2019) and Birogul (2019).
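The dimension scaling discussed above can be probed in outline with a simple timing harness. The sketch below is illustrative only: it times a generic sphere-type objective (standing in for F1) rather than the exact benchmark implementations and search loop used in the experiments, so absolute figures will differ from Table 11.

```python
import time
import numpy as np

def sphere(x):
    # F1-style sphere objective: sum of squared decision variables
    return float(np.sum(x * x))

def time_evaluations(objective, dim, n_evals=10_000, seed=0):
    """Total wall-clock seconds for n_evals objective evaluations at a given dimension."""
    rng = np.random.default_rng(seed)
    points = rng.uniform(-100.0, 100.0, size=(n_evals, dim))
    start = time.perf_counter()
    for p in points:
        objective(p)
    return time.perf_counter() - start

for dim in (30, 100, 500, 1000):
    print(f"{dim:>4}D: {time_evaluations(sphere, dim):.3f} s")
```

The same harness, wrapped around a full optimiser run, gives per-dimension runtimes directly comparable across algorithms on the same machine.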

4.3 Evaluation with multi-objective optimisation problems

4.3.1 Experimental setup and compared algorithms

For performance evaluation, MO-EHHO is applied to high-dimensional bi-objective problems and scalable tri-objective problems, all without equality or inequality constraints. A total of 12 test functions are used, as follows:
  • ZDT benchmark functions (ZDT1-4,6) (Deb et al. 2002; Deb and Agrawal 1999)
  • DTLZ benchmark functions (DTLZ1-DTLZ7) (Deb et al. 2005).
The MO-EHHO performance is compared with the results reported in Liu et al. (2020), Xiang et al. (2015), and Yang et al. (2022). The evaluation constitutes a comprehensive analysis of MO-EHHO to ascertain its effectiveness. A summary of the performance comparisons is as follows:
Table 9
Statistical test results with 95% confidence interval for EHHO on benchmark functions (F14–F29) as compared with those published in Heidari et al. (2019) and Birogul (2019)
EHHO vs.                     Benchmark functions (F14–F29)
                             \(+/=/-\)  p value  \(\alpha =0.05\)
HHODE/rand/1                 7/4/5     0.5637   \(\approx \)
HHODE/best/1                 7/4/5     0.5637   \(\approx \)
HHODE/current-to-best/2      6/4/6     1.0000   \(\approx \)
HHODE/best/2                 7/4/5     0.5637   \(\approx \)
HHODE/rand/2                 7/4/5     0.5637   \(\approx \)
HHO                          6/4/6     1.0000   \(\approx \)
GA                           13/1/2    0.0045   \(\checkmark \)
PSO                          15/0/1    0.0005   \(\checkmark \)
BBO                          12/1/3    0.0201   \(\checkmark \)
FPA                          11/1/4    0.0707   \(\approx \)
GWO                          13/0/3    0.0124   \(\checkmark \)
BAT                          16/0/0    0.0001   \(\checkmark \)
FA                           12/0/4    0.0455   \(\checkmark \)
CS                           12/0/4    0.0455   \(\checkmark \)
MFO                          13/0/3    0.0124   \(\checkmark \)
TLBO                         13/1/2    0.0045   \(\checkmark \)
DE                           12/0/4    0.0455   \(\checkmark \)
Table 10
Final rankings of each algorithm in Heidari et al. (2019) and Birogul (2019) for benchmark function F14–F29
Algorithms                   F14–F29
                             \(\mu _{r}\)   AR
EHHO                         4.8750    7
HHODE/rand/1                 2.2500    2
HHODE/best/1                 2.2500    2
HHODE/current-to-best/2      1.6875    1
HHODE/best/2                 2.9375    5
HHODE/rand/2                 3.4375    6
HHO                          2.4375    4
GA                           14.1250   16
PSO                          14.2500   17
BBO                          11.4375   15
FPA                          10.1875   14
GWO                          9.2500    13
BAT                          16.1250   18
FA                           9.0000    11
CS                           8.4375    9
MFO                          9.1875    12
TLBO                         8.6875    10
DE                           7.3750    8
Table 11
Average runtime of EHHO for benchmark function F1–F13 on 30, 100, 500, 1000-dimension
Function   Runtime (s)
           30D       100D      500D      1000D
F1         1.0134    1.2393    2.1984    3.4111
F2         1.2076    2.0342    6.3790    11.8459
F3         5.6454    16.2980   73.5222   145.7354
F4         0.9533    1.2637    2.9807    5.0923
F5         1.5590    1.6888    2.6040    3.8073
F6         1.2833    1.4334    2.3622    3.5427
F7         1.5895    2.1163    5.1570    8.8809
F8         1.2338    1.7000    4.3160    7.4538
F9         1.3254    1.4949    2.4804    3.7146
F10        1.7141    1.8171    2.8291    4.0465
F11        1.8434    2.5862    7.3000    12.9197
F12        3.1885    3.4381    5.4870    8.0133
F13        3.5722    3.7453    5.6946    8.1745
Table 12
Average runtime of EHHO for benchmark function F14–F23
Function   Runtime (s)
F14        15.7370
F15        1.8363
F16        0.9596
F17        1.0228
F18        1.0669
F19        2.4418
F20        2.5606
F21        6.7674
F22        8.6142
F23        11.4251
Table 13
Average runtime of EHHO for benchmark function F24–F29
Function   Runtime (s)
F24        47.2347
F25        49.7576
F26        49.7661
F27        49.9138
F28        57.9190
F29        39.9642
  • Comparison with a variety of MOEAs reported in Yang et al. (2022), i.e. CMPMOHHO (Yang et al. 2022), NSGA-II, SPEA2, NSGA-III (Deb and Jain 2013), PESA-II (Corne et al. 2001), CMOPSO (Zhang et al. 2018), CMOEA/D (Asafuddoula et al. 2012), MOEA/D-FRRMAB (Li et al. 2013), hpaEA (Chen et al. 2019), PREA (Yuan et al. 2020), ANSGA-III (Cheng et al. 2019), ar-MOEA (Yi et al. 2018), RPD-NSGA-II (Elarbi et al. 2017), and PICEA-g (Wang et al. 2012).
  • Comparison with a variety of MOEAs reported in Liu et al. (2020), i.e. MMOGWO (Liu et al. 2020), SPEA2 (Zitzler et al. 2001), NSGA-II (Deb et al. 2002), MOPSO (Coello et al. 2004), MOGWO (Mirjalili et al. 2016), MOALO (Mirjalili et al. 2017b), MOMVO algorithm (Mirjalili et al. 2017a), MOGOA (Mirjalili et al. 2018) and Multi-objective artificial bee colony (MOABC) algorithm (Akbari et al. 2012).
  • Comparison with a variety of MOEAs reported in Xiang et al. (2015) and Liu et al. (2020), i.e. PAES, SPEA2, NSGA-II, IBEA (Zitzler and Künzli 2004), OMOPSO (Lin et al. 2015), AbYSS (Nebro et al. 2008), CellDE (Durillo et al. 2008), MOEA/D, SMPSO (Lin et al. 2015), MOCell (Nebro et al. 2009), GDE3 (Kukkonen and Lampinen 2009) and eMOABC (Xiang et al. 2015).
Table 14
Mean values, standard deviation and ranking of MO-EHHO variants and other enhanced models in Yang et al. (2022) in terms of IGD for all test problems
Algorithm
ZDT1
ZDT2
ZDT3
ZDT4
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
MO-EHHO
7.46E \(-\) 04
1.01E \(-\) 04
1
9.80E \(-\) 04
3.28E \(-\) 04
1
1.50E \(-\) 03
3.18E \(-\) 04
1
1.00E\(-\)02
1.48E\(-\)02
9
CMPMO-EHHO
3.57E\(-\)03
3.79E\(-\)02
3
3.42E\(-\)03
2.34E\(-\)09
3
5.13E\(-\)03
1.01E\(-\)07
3
3.54E \(-\) 03
3.87E \(-\) 01
1
NSGA-III
3.92E\(-\)03
3.59E\(-\)05
6
3.87E\(-\)03
3.19E\(-\)05
6
6.12E\(-\)03
1.94E\(-\)04
8
7.90E\(-\)03
6.62E\(-\)03
7
C-MOEA/D
5.76E\(-\)03
8.46E\(-\)04
16
6.32E\(-\)03
6.92E\(-\)04
15
1.27E\(-\)02
5.15E\(-\)04
15
1.39E\(-\)02
7.89E\(-\)03
11
NSGA-II
4.81E\(-\)03
1.55E\(-\)04
13
4.84E\(-\)03
1.94E\(-\)04
12
7.40E\(-\)03
7.42E\(-\)03
10
5.43E\(-\)03
7.81E\(-\)04
6
CMOPSO
4.18E\(-\)03
1.01E\(-\)04
10
4.12E\(-\)03
9.49E\(-\)05
10
4.61E\(-\)03
6.00E\(-\)05
2
2.85E\(-\)01
2.64E\(-\)01
16
PESA-II
1.07E\(-\)02
1.30E\(-\)03
17
1.21E\(-\)02
2.84E\(-\)03
17
1.78E\(-\)02
2.08E\(-\)02
16
1.34E\(-\)02
5.24E\(-\)03
10
SPEA2
3.98E\(-\)03
6.75E\(-\)05
7
3.93E\(-\)03
5.79E\(-\)05
7
7.82E\(-\)03
8.96E\(-\)03
11
4.76E\(-\)03
8.77E\(-\)04
4
MOEA/D-FRRMAB
4.00E\(-\)03
1.29E\(-\)04
8
3.98E\(-\)03
1.52E\(-\)04
8
1.23E\(-\)02
1.30E\(-\)03
14
5.98E\(-\)01
8.81E\(-\)01
18
hpaEA
8.24E\(-\)02
5.77E\(-\)02
18
2.00E\(-\)01
2.30E\(-\)01
18
2.25E\(-\)02
1.88E\(-\)02
18
5.38E\(-\)01
1.92E\(-\)01
17
PREA
4.78E\(-\)03
1.40E\(-\)03
11
6.12E\(-\)03
4.31E\(-\)03
14
2.07E\(-\)02
3.31E\(-\)02
17
2.63E\(-\)02
2.72E\(-\)02
12
ANSGA-III
4.80E\(-\)03
3.32E\(-\)04
12
4.65E\(-\)03
3.18E\(-\)04
11
6.18E\(-\)03
5.27E\(-\)03
9
9.52E\(-\)03
8.68E\(-\)03
8
ar-MOEA
3.90E\(-\)03
8.96E\(-\)06
5
3.85E\(-\)03
2.42E\(-\)05
5
9.37E\(-\)03
8.71E\(-\)03
12
4.99E\(-\)03
7.83E\(-\)04
5
RPD-NSGA-II
4.98E\(-\)03
3.49E\(-\)04
14
5.85E\(-\)03
3.88E\(-\)04
13
1.02E\(-\)02
7.23E\(-\)03
13
1.60E\(-\)01
2.30E\(-\)02
14
PICEA-g
4.04E\(-\)03
7.58E\(-\)05
9
4.00E\(-\)03
5.70E\(-\)05
9
5.85E\(-\)03
5.35E\(-\)03
6
2.74E\(-\)01
5.21E\(-\)02
15
MOHHO/SP
4.25E\(-\)01
1.64E\(-\)02
19
3.49E\(-\)01
2.44E\(-\)02
19
2.76E\(-\)01
1.41E\(-\)02
19
8.01E\(-\)01
4.34E\(-\)04
19
CMPMO-HHO/NoP
3.52E\(-\)03
3.86E\(-\)10
2
3.42E\(-\)03
3.43E\(-\)10
2
5.34E\(-\)03
9.28E\(-\)08
5
3.56E\(-\)03
4.90E\(-\)07
3
CMPMO-HHO/SES
3.70E\(-\)03
2.15E\(-\)07
4
3.45E\(-\)03
2.55E\(-\)08
4
5.19E\(-\)03
1.42E\(-\)07
4
3.55E\(-\)03
1.09E\(-\)09
2
CMPMO-GA
5.28E\(-\)03
3.85E\(-\)07
15
7.05E\(-\)03
1.02E\(-\)06
16
6.01E\(-\)03
1.01E\(-\)06
7
3.78E\(-\)02
4.51E\(-\)04
13
Algorithm
ZDT6
DTLZ1
DTLZ2
DTLZ3
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
MO-EHHO
7.49E \(-\) 04
1.14E \(-\) 04
1
1.24E\(-\)01
1.71E\(-\)01
16
7.46E\(-\)03
2.97E\(-\)04
2
1.29E+00
1.24E+00
15
CMPMO-HHO
3.00E\(-\)03
5.42E\(-\)07
2
3.63E\(-\)02
3.14E\(-\)04
14
6.33E \(-\) 03
2.44E \(-\) 06
1
2.78E\(-\)01
4.58E\(-\)02
14
NSGA-III
3.29E\(-\)03
4.24E\(-\)04
8
2.06E\(-\)02
1.43E\(-\)06
4
5.45E\(-\)02
8.00E\(-\)07
7
5.45E\(-\)02
7.54E\(-\)06
5
C-MOEA/D
9.25E\(-\)03
1.97E\(-\)03
15
2.06E\(-\)02
1.04E\(-\)06
4
5.45E\(-\)02
5.30E\(-\)08
7
5.45E\(-\)02
9.24E\(-\)06
4
NSGA-II
3.72E\(-\)03
1.18E\(-\)04
10
2.72E\(-\)02
1.26E\(-\)03
11
6.91E\(-\)02
2.69E\(-\)03
15
6.81E\(-\)02
2.50E\(-\)03
9
CMOPSO
3.11E\(-\)03
4.24E\(-\)05
3
2.08E\(-\)02
4.03E\(-\)04
7
5.79E\(-\)02
1.00E\(-\)03
13
2.32E+00
2.33E+00
16
PESA-II
7.88E\(-\)03
7.57E\(-\)04
14
2.53E\(-\)02
2.08E\(-\)03
10
6.63E\(-\)02
2.94E\(-\)03
14
7.26E\(-\)02
9.95E\(-\)03
11
SPEA2
3.17E\(-\)03
1.05E\(-\)04
6
2.01E\(-\)02
1.25E\(-\)04
3
5.42E\(-\)02
6.01E\(-\)04
5
5.35E\(-\)02
6.52E\(-\)04
3
MOEA/D-FRRMAB
3.11E\(-\)03
6.41E\(-\)06
4
3.09E\(-\)02
3.67E\(-\)05
12
7.54E\(-\)02
2.51E\(-\)04
16
7.53E\(-\)02
3.20E\(-\)04
12
hpaEA
2.66E\(-\)01
2.00E\(-\)01
18
1.95E\(-\)02
1.21E\(-\)04
2
5.29E\(-\)02
2.88E\(-\)04
4
5.34E\(-\)02
3.20E\(-\)04
2
PREA
3.49E\(-\)03
1.27E\(-\)04
9
2.19E\(-\)02
4.44E\(-\)04
8
5.74E\(-\)02
6.23E\(-\)04
12
5.75E\(-\)02
8.47E\(-\)04
8
ANSGA-III
4.45E\(-\)03
1.27E\(-\)04
11
2.38E\(-\)02
4.53E\(-\)03
9
5.47E\(-\)02
5.70E\(-\)04
10
7.15E\(-\)02
1.88E\(-\)02
10
ar-MOEA
3.12E\(-\)03
1.23E\(-\)04
5
2.06E\(-\)02
3.71E\(-\)06
6
5.45E\(-\)02
2.47E\(-\)07
7
5.45E\(-\)02
1.22E\(-\)05
6
RPD-NSGA-II
7.71E\(-\)03
5.85E\(-\)04
13
3.18E\(-\)02
3.36E\(-\)03
13
5.69E\(-\)02
1.11E\(-\)03
11
5.69E\(-\)02
1.11E\(-\)03
7
PICEA-g
3.17E\(-\)03
5.85E\(-\)04
7
5.41E\(-\)02
6.75E\(-\)03
15
5.44E\(-\)02
5.56E\(-\)04
6
1.15E\(-\)01
1.63E\(-\)02
13
MOHHO/SP
1.83E\(-\)02
4.43E\(-\)04
17
1.98E+01
1.82E+02
19
1.99E\(-\)01
1.85E\(-\)03
19
1.18E+02
3.04E+03
19
CMPMO-HHO/NoP
1.23E\(-\)02
2.34E\(-\)05
16
9.26E+00
2.85E+01
17
1.39E\(-\)01
6.66E\(-\)05
18
4.62E+01
1.44E+03
18
CMPMO-HHO/SES
5.52E\(-\)03
3.06E\(-\)06
12
9.84E+00
2.85E+01
18
8.61E\(-\)02
3.63E\(-\)05
17
4.17E+01
2.05E+03
17
CMPMO-GA
4.43E\(-\)01
6.42E\(-\)03
19
2.05E \(-\) 03
8.94E \(-\) 06
1
1.25E\(-\)02
1.71E\(-\)04
3
2.95E \(-\) 02
1.20E \(-\) 03
1
MO-EHHO
8.94E \(-\) 03
9.09E \(-\) 04
1
1.05E \(-\) 04
1.50E \(-\) 05
1
1.02E \(-\) 04
1.52E \(-\) 05
1
9.80E \(-\) 04
8.58E \(-\) 05
1
CMPMO-HHO
1.08E\(-\)02
3.65E\(-\)05
2
9.95E\(-\)03
1.65E\(-\)06
10
9.52E\(-\)03
1.44E\(-\)06
10
9.06E\(-\)02
7.85E\(-\)05
8
NSGA-III
1.19E\(-\)01
1.68E\(-\)01
11
1.29E\(-\)02
1.58E\(-\)03
13
1.95E\(-\)02
2.98E\(-\)03
16
7.71E\(-\)02
3.56E\(-\)03
5
Algorithm
DTLZ4
DTLZ5
DTLZ6
DTLZ7
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
C-MOEA/D
5.45E\(-\)02
1.47E\(-\)04
6
3.39E\(-\)02
4.81E\(-\)07
17
3.39E\(-\)02
1.62E\(-\)07
17
1.53E\(-\)01
1.51E\(-\)03
15
NSGA-II
6.77E\(-\)02
2.61E\(-\)03
9
5.89E\(-\)03
4.81E\(-\)07
8
5.92E\(-\)03
3.69E\(-\)04
7
7.70E\(-\)02
3.77E\(-\)03
4
CMOPSO
1.79E\(-\)01
3.06E\(-\)01
13
5.17E\(-\)03
4.96E\(-\)04
6
4.20E\(-\)03
4.38E\(-\)05
3
1.10E\(-\)01
1.51E\(-\)01
12
PESA-II
6.47E\(-\)02
2.13E\(-\)03
8
1.12E\(-\)02
1.82E\(-\)03
12
1.47E\(-\)02
3.55E\(-\)03
13
2.31E\(-\)01
2.35E\(-\)01
17
SPEA2
1.84E\(-\)01
2.19E\(-\)01
14
4.37E\(-\)03
7.83E\(-\)05
2
4.10E\(-\)03
3.80E\(-\)05
2
6.98E\(-\)02
5.23E\(-\)02
2
MOEA/D-FRRMAB
7.78E\(-\)02
8.48E\(-\)03
10
1.46E\(-\)02
1.29E\(-\)05
15
1.46E\(-\)02
1.29E\(-\)05
12
1.93E\(-\)01
1.04E\(-\)02
16
hpaEA
3.49E\(-\)01
2.40E\(-\)01
16
4.94E\(-\)03
1.68E\(-\)04
5
4.59E\(-\)03
8.98E\(-\)05
4
1.31E\(-\)01
1.04E\(-\)02
13
PREA
3.80E\(-\)01
3.58E\(-\)01
17
4.66E\(-\)03
1.18E\(-\)04
4
4.68E\(-\)03
1.23E\(-\)04
5
1.05E\(-\)01
1.13E\(-\)01
11
ANSGA-III
1.66E\(-\)01
2.36E\(-\)01
12
1.09E\(-\)02
1.45E\(-\)03
11
1.48E\(-\)02
2.30E\(-\)03
14
8.51E\(-\)02
1.13E\(-\)01
6
ar-MOEA
3.95E\(-\)01
3.55E\(-\)01
18
5.37E\(-\)03
1.23E\(-\)04
7
5.02E\(-\)03
4.67E\(-\)05
6
9.56E\(-\)02
8.57E\(-\)02
9
RPD-NSGA-II
5.71E\(-\)02
1.27E\(-\)03
7
3.87E\(-\)02
4.08E\(-\)03
18
8.40E\(-\)02
3.37E\(-\)02
19
1.33E\(-\)01
1.44E\(-\)01
14
PICEA-g
2.47E\(-\)01
2.66E\(-\)01
15
4.45E\(-\)03
1.06E\(-\)04
3
7.16E\(-\)03
1.67E\(-\)02
8
3.90E\(-\)01
2.79E\(-\)01
18
MOHHO/SP
5.17E\(-\)01
5.72E\(-\)04
19
6.78E\(-\)02
3.01E\(-\)03
19
3.54E\(-\)02
5.12E\(-\)04
18
1.09E+00
4.25E\(-\)02
19
CMPMO-HHO/NoP
2.72E\(-\)02
3.73E\(-\)05
4
1.44E\(-\)02
3.75E\(-\)06
14
1.45E\(-\)02
2.84E\(-\)06
11
1.04E\(-\)01
1.36E\(-\)05
10
CMPMO-HHO/SES
2.68E\(-\)02
5.54E\(-\)05
3
9.44E\(-\)03
9.69E\(-\)07
9
9.24E\(-\)03
2.84E\(-\)06
9
8.94E\(-\)02
8.28E\(-\)05
7
CMPMO-GA
4.37E\(-\)02
3.02E\(-\)04
5
1.54E\(-\)02
7.76E\(-\)06
16
1.53E\(-\)02
7.71E\(-\)06
15
7.25E\(-\)02
3.19E\(-\)05
3

4.3.2 Performance metrics

Three common objectives for evaluating the performance of a multi-objective algorithm in solving MOPs are as follows:
  • minimise the distance between the true PF and the obtained PS
  • maximise the spread of the obtained PS along the true PF
  • maximise the extent of the obtained PS in covering the true PF
Based on these objectives, the following performance metrics are used to assess the MO-EHHO efficacy.
Convergence metric (\(\gamma \)): the distance between the PF and the obtained PS (Deb et al. 2002), i.e.
$$\begin{aligned} \gamma =\frac{\sum _{i=1}^{n}d_i}{n} \end{aligned}$$
(29)
Diversity metric (\(\Delta \)): the degree of spread achieved by the obtained PS (Deb et al. 2002), i.e.
$$\begin{aligned} \Delta =\frac{d_\textrm{min}+d_\textrm{max}+\sum _{i=1}^{n-1}|d_i-\bar{d}|}{d_\textrm{min}+d_\textrm{max}+(n-1)\bar{d}} \end{aligned}$$
(30)
Table 15
Statistical comparison of MO-EHHO and other enhanced models in Yang et al. (2022) with 95% confidence interval on each metric
MO-EHHO vs.                  IGD
                             \(+/=/-\)  p value  \(\alpha =0.05\)
CMPMO-HHO                    8/0/4     0.2482   \(\approx \)
NSGA-III                     9/0/3     0.0833   \(\approx \)
C-MOEA/D                     10/0/2    0.0209   \(\checkmark \)
NSGA-II                      9/0/3     0.0833   \(\approx \)
CMOPSO                       11/0/1    0.0039   \(\checkmark \)
PESA-II                      10/0/2    0.0209   \(\checkmark \)
SPEA2                        9/0/3     0.0833   \(\approx \)
MOEA/D-FRRMAB                10/0/2    0.0209   \(\checkmark \)
hpaEA                        10/0/2    0.0209   \(\checkmark \)
PREA                         10/0/2    0.0209   \(\checkmark \)
ANSGA-III                    9/0/3     0.0833   \(\approx \)
ar-MOEA                      9/0/3     0.0833   \(\approx \)
RPD-NSGA-II                  10/0/2    0.0209   \(\checkmark \)
PICEA-g                      10/0/2    0.0209   \(\checkmark \)
MOHHO/SP                     12/0/0    0.0005   \(\checkmark \)
CMPMO-HHO/NoP                11/0/1    0.0039   \(\checkmark \)
CMPMO-HHO/SES                11/0/1    0.0039   \(\checkmark \)
CMPMO-GA                     10/2/0    0.0209   \(\checkmark \)
Table 16
Final rankings of each algorithm on IGD metric with other enhanced models in Yang et al. (2022)

| Algorithm | IGD \(\mu _{r}\) | AR |
|---|---|---|
| MO-EHHO | 4.1667 | 1 |
| CMPMO-HHO | 5.9167 | 3 |
| NSGA-III | 8.0000 | 5 |
| C-MOEA/D | 11.8333 | 15 |
| NSGA-II | 9.5000 | 8 |
| CMOPSO | 9.2500 | 7 |
| PESA-II | 13.2500 | 18 |
| SPEA2 | 5.5000 | 2 |
| MOEA/D-FRRMAB | 12.0833 | 16 |
| hpaEA | 11.2500 | 14 |
| PREA | 10.6667 | 13 |
| ANSGA-III | 10.2500 | 11 |
| ar-MOEA | 7.5833 | 4 |
| RPD-NSGA-II | 13.0000 | 17 |
| PICEA-g | 10.3333 | 12 |
| MOHHO/SP | 18.7500 | 19 |
| CMPMO-HHO/NoP | 10.0000 | 10 |
| CMPMO-HHO/SES | 8.8333 | 6 |
| CMPMO-GA | 9.5000 | 8 |
Generational distance (GD): the distance between the true PF and the obtained PS (Van Veldhuizen and Lamont 1998), i.e.
$$\begin{aligned} GD=\frac{\sqrt{\sum _{i=1}^{n}d_i^2}}{n} \end{aligned}$$
(31)
Inverted generational distance (IGD): the distance between each solution composing the true PF and the obtained PS (Zitzler and Thiele 1999), i.e.
$$\begin{aligned} IGD=\frac{\sqrt{\sum _{i=1}^{m}(d_i)^2}}{m} \end{aligned}$$
(32)
where i is the index of a solution, \(d_i\) is the Euclidean distance between the ith solution in the non-dominated set and its nearest solution in the true PF, n is the number of solutions in the obtained PS, and m is the number of solutions in the true PF. In addition, \(d_\textrm{min}\) and \(d_\textrm{max}\) are the minimum and maximum Euclidean distances to the extreme non-dominated solutions, respectively, in the objective space, while \(\bar{d}\) is the mean of all \(d_i\).
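As an illustrative sketch (not the authors' code), Eqs. (29), (31) and (32) can be computed from nearest-neighbour Euclidean distances between the obtained PS and the true PF; the helper names below are our own:

```python
import numpy as np

def nearest_dists(A, B):
    """For each point in A, the Euclidean distance to its nearest point in B."""
    return np.array([np.min(np.linalg.norm(B - a, axis=1)) for a in A])

def gamma_metric(ps, pf):
    """Convergence metric, Eq. (29): mean distance from the obtained PS to the true PF."""
    return nearest_dists(ps, pf).mean()

def gd(ps, pf):
    """Generational distance, Eq. (31)."""
    d = nearest_dists(ps, pf)
    return np.sqrt(np.sum(d ** 2)) / len(ps)

def igd(ps, pf):
    """Inverted generational distance, Eq. (32): distances are measured from the true PF."""
    d = nearest_dists(pf, ps)
    return np.sqrt(np.sum(d ** 2)) / len(pf)
```

For a PS that coincides with the true PF, all three metrics are zero; lower values are better in every case.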
Table 17
Statistical test results with 95% confidence interval for MO-EHHO on each metric as compared with those published in Liu et al. (2020)

| MO-EHHO vs | \(\gamma \) \(+/=/-\) | p value | \(\alpha =0.05\) | \(\Delta \) \(+/=/-\) | p value | \(\alpha =0.05\) |
|---|---|---|---|---|---|---|
| MMOGWO | 8/0/4 | 0.2482 | \(\approx \) | 10/0/2 | 0.0209 | \(\checkmark \) |
| SPEA2 | 10/0/2 | 0.0209 | \(\checkmark \) | 7/0/5 | 0.5637 | \(\approx \) |
| NSGA-II | 10/0/2 | 0.0209 | \(\checkmark \) | 6/0/6 | 1.0000 | \(\approx \) |
| MOPSO | 8/0/4 | 0.2482 | \(\approx \) | 1/0/11 | 0.0039 | \(\times \) |
| MOGWO | 7/0/5 | 0.5637 | \(\approx \) | 8/0/4 | 0.2482 | \(\approx \) |
| MOALO | 8/0/4 | 0.2482 | \(\approx \) | 11/0/1 | 0.0039 | \(\checkmark \) |
| MOMVO | 8/0/4 | 0.2482 | \(\approx \) | 10/0/2 | 0.0209 | \(\checkmark \) |
| MOGOA | 9/0/3 | 0.0833 | \(\approx \) | 11/0/1 | 0.0039 | \(\checkmark \) |
| MOABC | 11/0/1 | 0.0039 | \(\checkmark \) | 8/0/4 | 0.2482 | \(\approx \) |

| MO-EHHO vs | GD \(+/=/-\) | p value | \(\alpha =0.05\) | IGD \(+/=/-\) | p value | \(\alpha =0.05\) |
|---|---|---|---|---|---|---|
| MMOGWO | 7/0/5 | 0.5637 | \(\approx \) | 11/0/1 | 0.0039 | \(\checkmark \) |
| SPEA2 | 12/0/0 | 0.0005 | \(\checkmark \) | 12/0/0 | 0.0005 | \(\checkmark \) |
| NSGA-II | 10/0/2 | 0.0209 | \(\checkmark \) | 12/0/0 | 0.0005 | \(\checkmark \) |
| MOPSO | 8/0/4 | 0.2482 | \(\approx \) | 12/0/0 | 0.0005 | \(\checkmark \) |
| MOGWO | 6/0/6 | 1.0000 | \(\approx \) | 10/0/2 | 0.0209 | \(\checkmark \) |
| MOALO | 8/0/4 | 0.2482 | \(\approx \) | 11/0/1 | 0.0039 | \(\checkmark \) |
| MOMVO | 9/0/3 | 0.0833 | \(\approx \) | 12/0/0 | 0.0005 | \(\checkmark \) |
| MOGOA | 10/0/2 | 0.0209 | \(\checkmark \) | 11/0/1 | 0.0039 | \(\checkmark \) |
| MOABC | 12/0/0 | 0.0005 | \(\checkmark \) | 11/0/1 | 0.0039 | \(\checkmark \) |
Table 18
Final rankings of each algorithm on each metric with algorithms in Liu et al. (2020)

| Algorithm | \(\gamma \) \(\mu _{r}\) | AR | \(\Delta \) \(\mu _{r}\) | AR | GD \(\mu _{r}\) | AR | IGD \(\mu _{r}\) | AR |
|---|---|---|---|---|---|---|---|---|
| MO-EHHO | 3.4167 | 1 | 4.0000 | 3 | 3.1667 | 1 | 1.5000 | 1 |
| MMOGWO | 4.6667 | 2 | 5.5000 | 6 | 4.3333 | 3 | 5.2500 | 3 |
| SPEA2 | 5.9167 | 7 | 5.2500 | 5 | 7.6667 | 9 | 6.4167 | 8 |
| NSGA-II | 6.1667 | 8 | 3.5000 | 2 | 5.4167 | 7 | 6.4167 | 8 |
| MOPSO | 5.5000 | 5 | 1.3333 | 1 | 5.1667 | 6 | 6.0833 | 6 |
| MOGWO | 4.7500 | 3 | 5.6667 | 7 | 4.0833 | 2 | 4.5000 | 2 |
| MOALO | 5.7500 | 6 | 9.0833 | 10 | 5.0000 | 5 | 5.8333 | 4 |
| MOMVO | 5.1667 | 4 | 7.7500 | 9 | 4.8333 | 4 | 6.0000 | 5 |
| MOGOA | 6.8333 | 9 | 7.5833 | 8 | 6.1667 | 8 | 6.9167 | 10 |
| MOABC | 6.8333 | 9 | 5.0833 | 4 | 9.1667 | 10 | 6.0833 | 6 |
Table 19
Mean values and ranking of MO-EHHO and several improved MOEAs in Liu et al. (2020) and Xiang et al. (2015) in IGD for all test problems

| Algorithms | ZDT1 AVG | Rank | ZDT2 AVG | Rank | ZDT3 AVG | Rank | ZDT4 AVG | Rank |
|---|---|---|---|---|---|---|---|---|
| MO-EHHO | 6.35E-04 | 1 | 6.96E-04 | 1 | 1.13E-03 | 1 | 9.77E-04 | 1 |
| MMOGWO | 8.01E-04 | 2 | 7.41E-04 | 2 | 2.65E-03 | 2 | 9.19E-01 | 12 |
| PAES | 1.18E-02 | 13 | 1.46E-02 | 14 | 5.61E-02 | 14 | 7.35E-03 | 8 |
| SPEA2 | 3.92E-03 | 9 | 3.90E-03 | 8 | 4.85E-03 | 6 | 4.08E-03 | 5 |
| NSGA-II | 4.84E-03 | 12 | 4.90E-03 | 11 | 5.39E-03 | 8 | 4.93E-03 | 7 |
| IBEA | 4.11E-03 | 10 | 9.41E-03 | 13 | 2.97E-02 | 13 | 6.26E-01 | 11 |
| OMOPSO | 3.71E-03 | 5 | 3.83E-03 | 6 | 4.36E-03 | 4 | 4.93E+00 | 14 |
| AbYSS | 3.73E-03 | 6 | 3.83E-03 | 5 | 1.50E-02 | 11 | 4.41E-03 | 6 |
| CellDE | 4.83E-03 | 11 | 4.37E-03 | 10 | 1.02E-02 | 10 | 4.24E+00 | 13 |
| MOEA/D | 1.26E-02 | 14 | 9.14E-03 | 12 | 1.72E-02 | 12 | 1.43E-01 | 9 |
| SMPSO | 3.67E-03 | 3 | 3.80E-03 | 3 | 4.29E-03 | 3 | 3.72E-03 | 3 |
| MOCell | 3.69E-03 | 4 | 3.80E-03 | 3 | 6.18E-03 | 9 | 3.84E-03 | 4 |
| GDE3 | 3.78E-03 | 8 | 3.91E-03 | 9 | 4.37E-03 | 5 | 4.72E-01 | 10 |
| eMOABC | 3.74E-03 | 7 | 3.85E-03 | 7 | 5.16E-03 | 7 | 3.71E-03 | 2 |

| Algorithms | ZDT6 AVG | Rank | DTLZ1 AVG | Rank | DTLZ2 AVG | Rank | DTLZ3 AVG | Rank |
|---|---|---|---|---|---|---|---|---|
| MO-EHHO | 8.32E-04 | 1 | 1.24E-01 | 10 | 7.46E-03 | 1 | 1.29E+00 | 10 |
| MMOGWO | 2.33E-01 | 14 | 6.29E+01 | 14 | 1.81E-02 | 2 | 2.58E+02 | 14 |
| PAES | 7.07E-03 | 13 | 5.86E-02 | 9 | 3.15E-01 | 14 | 1.92E-01 | 3 |
| SPEA2 | 3.17E-03 | 8 | 2.02E-02 | 1 | 5.42E-02 | 3 | 3.39E-01 | 5 |
| NSGA-II | 4.77E-03 | 11 | 2.61E-02 | 5 | 6.89E-02 | 10 | 2.94E-01 | 4 |
| IBEA | 5.17E-03 | 12 | 1.82E-01 | 12 | 1.22E-01 | 13 | 5.12E-01 | 7 |
| OMOPSO | 3.02E-03 | 4 | 1.18E+01 | 13 | 6.89E-02 | 9 | 1.15E+02 | 13 |
| AbYSS | 3.06E-03 | 6 | 2.73E-02 | 6 | 6.89E-02 | 11 | 3.94E-01 | 6 |
| CellDE | 3.44E-03 | 9 | 1.61E-01 | 11 | 6.61E-02 | 6 | 8.52E+00 | 12 |
| MOEA/D | 4.17E-03 | 10 | 2.54E-02 | 4 | 6.71E-02 | 8 | 1.18E+00 | 9 |
| SMPSO | 3.00E-03 | 3 | 2.83E-02 | 7 | 7.18E-02 | 12 | 1.16E-01 | 2 |
| MOCell | 3.00E-03 | 2 | 2.87E-02 | 8 | 6.68E-02 | 7 | 7.56E-01 | 8 |
| GDE3 | 3.12E-03 | 7 | 2.34E-02 | 3 | 6.29E-02 | 5 | 2.26E+00 | 11 |
| eMOABC | 3.03E-03 | 5 | 2.26E-02 | 2 | 6.05E-02 | 4 | 6.25E-02 | 1 |

| Algorithms | DTLZ4 AVG | Rank | DTLZ5 AVG | Rank | DTLZ6 AVG | Rank | DTLZ7 AVG | Rank |
|---|---|---|---|---|---|---|---|---|
| MO-EHHO | 8.94E-03 | 1 | 1.05E-04 | 1 | 1.02E-04 | 1 | 9.80E-04 | 1 |
| MMOGWO | 3.54E-02 | 2 | 1.80E+00 | 14 | 7.83E+00 | 14 | 1.58E-02 | 2 |
| PAES | 3.99E-01 | 14 | 6.84E-03 | 10 | 7.14E-03 | 7 | 8.88E-01 | 14 |
| SPEA2 | 1.38E-01 | 12 | 4.33E-03 | 8 | 1.25E-02 | 9 | 6.96E-02 | 3 |
| NSGA-II | 6.39E-02 | 6 | 5.42E-03 | 9 | 1.36E-02 | 10 | 7.65E-02 | 5 |
| IBEA | 2.11E-01 | 13 | 1.94E-02 | 13 | 5.75E-02 | 11 | 4.00E-01 | 13 |
| OMOPSO | 6.49E-02 | 7 | 4.13E-03 | 6 | 3.90E-03 | 2 | 8.69E-02 | 8 |
| AbYSS | 6.05E-02 | 4 | 4.09E-03 | 5 | 7.89E-02 | 12 | 3.95E-01 | 12 |
| CellDE | 7.71E-02 | 10 | 8.56E-03 | 11 | 4.54E-03 | 6 | 1.23E-01 | 9 |
| MOEA/D | 5.49E-02 | 3 | 1.04E-02 | 12 | 9.37E-03 | 8 | 1.90E-01 | 10 |
| SMPSO | 6.81E-02 | 9 | 4.01E-03 | 2 | 3.94E-03 | 4 | 8.52E-02 | 6 |
| MOCell | 1.36E-01 | 11 | 4.05E-03 | 4 | 7.55E-01 | 13 | 2.45E-01 | 11 |
| GDE3 | 6.57E-02 | 8 | 4.20E-03 | 7 | 4.15E-03 | 5 | 7.48E-02 | 4 |
| eMOABC | 6.26E-02 | 5 | 4.02E-03 | 3 | 3.93E-03 | 3 | 8.66E-02 | 7 |

4.4 Quantitative results of MO-EHHO and discussion

4.4.1 Comparison between MO-EHHO and other enhanced models in Yang et al. (2022)

MO-EHHO and other enhanced models presented in Yang et al. (2022) are compared using the ZDT and DTLZ functions. In accordance with the experimental procedure in Yang et al. (2022), the population size and external archive size are set to 100. The maximum number of function evaluations for the ZDT problems is 30,000, i.e. the same as that in Yang et al. (2022). However, the maximum number of function evaluations for the DTLZ problems is 300,000 in Yang et al. (2022), as compared with only 50,000 for MO-EHHO in this research. In this comparison, 18 algorithms are compared with MO-EHHO, i.e. CMPMO-HHO, NSGA-II, SPEA2, NSGA-III, PESA-II, CMOPSO, C-MOEA/D, MOEA/D-FRRMAB, hpaEA, PREA, ANSGA-III, ar-MOEA, RPD-NSGA-II, PICEA-g, MOHHO/SP (multi-objective HHO algorithm with a single population), CMPMO-HHO/NoP (CMPMO-HHO without LCSDP), CMPMO-HHO/SES (CMPMO-HHO with single elite selection), and CMPMO-GA, which is a CMPMO/des-based many-objective GA.
The average results and standard deviations of MO-EHHO and those presented in Yang et al. (2022) are depicted in Table 14. MO-EHHO generates the best or second-best scores on 9 out of 12 test problems in terms of IGD. Table 15 presents the sign test results of MO-EHHO in terms of IGD as compared with those in Yang et al. (2022). MO-EHHO achieves comparable IGD results with respect to CMPMO-HHO, NSGA-III, NSGA-II, SPEA2, ANSGA-III, and ar-MOEA, but outperforms the remaining algorithms at the 95% confidence interval with \(p<0.05\). Table 16 summarises the ranking outcomes of all algorithms. MO-EHHO is ranked first, i.e. the overall best performer in solving the ZDT and DTLZ problems. Note that the maximum number of function evaluations of MO-EHHO is lower than that in Yang et al. (2022), i.e. 50,000 versus 300,000, indicating the effectiveness of MO-EHHO in tackling MOPs.
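For reference, the sign test on win/loss counts can be sketched as an exact two-sided binomial test with ties dropped (an illustrative implementation in our own notation; the exact tie-handling convention behind the reported p values may differ):

```python
from math import comb

def sign_test_p(wins, losses):
    """Two-sided exact sign test on win/loss counts over a set of benchmarks.

    Under H0 (the two algorithms are equivalent), each non-tied problem is a
    fair coin flip, so the p value is the two-sided binomial tail probability.
    """
    n = wins + losses
    k = max(wins, losses)
    # probability of a result at least this extreme under P(win) = 0.5
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2.0 ** n
    return min(1.0, 2.0 * tail)
```

For example, a clean sweep over 12 problems (12/0/0) gives \(p \approx 0.0005\), matching the smallest p values reported in Tables 15 and 17.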

4.4.2 Comparison between MO-EHHO and other algorithms published in Liu et al. (2020)

The performance of MO-EHHO is evaluated using the ZDT and DTLZ benchmark functions. The obtained results are compared with those published in Liu et al. (2020). In accordance with the experimental procedure in Liu et al. (2020), the maximum numbers of function evaluations are set to 25,000 and 50,000 for the ZDT and DTLZ problems, respectively; the population size and external archive size are set to 100; as such, the maximum numbers of iterations of each run are \(25,000 / 100 = 250\) for the ZDT problems and \(50,000 / 100 = 500\) for the DTLZ problems. The experiment is conducted in a Python environment. The obtained results are tabulated in Tables 27, 28, 29 and 30 in the Appendix. In addition, Table 17 presents the sign test results of MO-EHHO as compared with those of MMOGWO, SPEA2, NSGA-II, MOPSO, MOGWO, MOALO, MOMVO, MOGOA, and MOABC.
From Table 17, MO-EHHO outperforms SPEA2, NSGA-II, and MOABC in terms of \(\gamma \), and SPEA2, NSGA-II, MOGOA, and MOABC in terms of GD. It should be noted that the \(\Delta \) metric evaluates the spread among the non-dominated solutions obtained by an algorithm, indicating the diversity property of the solutions. In terms of \(\Delta \), MO-EHHO achieves comparable results with respect to SPEA2, NSGA-II, MOGWO, and MOABC, and yields better results than MMOGWO, MOALO, MOMVO, and MOGOA at the 95% significance level (\(\alpha = 0.05\)), although it is outperformed by MOPSO. In terms of IGD, MO-EHHO shows a statistically significant improvement over all nine competitors, i.e. MMOGWO, SPEA2, NSGA-II, MOPSO, MOGWO, MOALO, MOMVO, MOGOA, and MOABC, at the 95% confidence level, indicating a better balance between diversity and convergence by MO-EHHO.
From the analysis above, MO-EHHO outperforms its competitors at the 95% confidence interval in terms of IGD. The IGD metric evaluates both the convergence and diversity properties of an algorithm; hence, the solutions produced by MO-EHHO have a good balance between convergence and diversity. Besides that, Table 18 depicts the ranking outcome of the algorithms on each metric. MO-EHHO ranks first in terms of \(\gamma \), GD, and IGD, while MOPSO occupies the top rank in terms of \(\Delta \), the diversity metric. In summary, MO-EHHO has a good overall performance in terms of convergence, but is inferior in diversity as compared with NSGA-II and MOPSO.
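The average-ranking summary used in Tables 16, 18 and 21 can be sketched as follows (a minimal illustration, assuming lower metric values are better; unlike proper fractional ranking, this simple double-argsort breaks ties by order of appearance):

```python
import numpy as np

def average_ranks(scores):
    """scores[i][j]: metric value of algorithm i on problem j (lower is better).

    Returns the mean rank of each algorithm across all problems
    (rank 1 = best on a given problem).
    """
    arr = np.asarray(scores, dtype=float)
    # double argsort converts values to per-problem ranks, then add 1
    ranks = arr.argsort(axis=0).argsort(axis=0) + 1
    return ranks.mean(axis=1)
```

The final ranking (the "AR" columns) is then obtained by ordering algorithms by their mean rank \(\mu _{r}\).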

4.4.3 Comparison between MO-EHHO and several improved MOEA models in Liu et al. (2020)

MO-EHHO is compared with a variety of MOEA models presented in Liu et al. (2020) and Xiang et al. (2015), namely MMOGWO, PAES, SPEA2, NSGA-II, IBEA, OMOPSO, AbYSS, CellDE, MOEA/D, SMPSO, MOCell, GDE3, and eMOABC. The ZDT and DTLZ benchmark functions are used for performance evaluation and comparison. In accordance with the experimental procedure in Liu et al. (2020) and Xiang et al. (2015), the maximum number of function evaluations is set to 50,000 for all problems, and the population size and external archive size are set to 100. As such, the maximum number of iterations is \(50,000/100 = 500\) for each of the 30 independent runs. Only the IGD metric is used for performance evaluation, as per the results presented in Liu et al. (2020) and Xiang et al. (2015). This experiment is performed in a Python environment.
The mean IGD results of MO-EHHO over 30 independent runs are shown in Table 19. MO-EHHO is the best-performing algorithm, producing the best scores on 10 out of 12 test problems. Table 20 presents the sign test outcomes of MO-EHHO against those of other algorithms. MO-EHHO demonstrates a statistically superior performance as compared with all of its competitors. From Table 21, MO-EHHO attains the best ranking against its competitors. In short, MO-EHHO is the best among the compared MOEA models.
Table 20
Statistical comparison of MO-EHHO and several improved MOEAs in Liu et al. (2020) and Xiang et al. (2015) with 95% confidence interval on each metric

| MO-EHHO vs | IGD \(+/=/-\) | p value | \(\alpha =0.05\) |
|---|---|---|---|
| MMOGWO | 12/0/0 | 0.0005 | \(\checkmark \) |
| PAES | 10/0/2 | 0.0209 | \(\checkmark \) |
| SPEA2 | 10/0/2 | 0.0209 | \(\checkmark \) |
| NSGA-II | 10/0/2 | 0.0209 | \(\checkmark \) |
| IBEA | 11/0/1 | 0.0039 | \(\checkmark \) |
| OMOPSO | 12/0/0 | 0.0005 | \(\checkmark \) |
| AbYSS | 10/0/2 | 0.0209 | \(\checkmark \) |
| CellDE | 12/0/0 | 0.0005 | \(\checkmark \) |
| MOEA/D | 10/0/2 | 0.0209 | \(\checkmark \) |
| SMPSO | 10/0/2 | 0.0209 | \(\checkmark \) |
| MOCell | 10/0/2 | 0.0209 | \(\checkmark \) |
| GDE3 | 11/0/1 | 0.0039 | \(\checkmark \) |
| eMOABC | 10/0/2 | 0.0209 | \(\checkmark \) |
Table 21
Final rankings of each algorithm on IGD metric with several improved MOEAs in Liu et al. (2020) and Xiang et al. (2015)

| Algorithm | IGD \(\mu _{r}\) | AR |
|---|---|---|
| MO-EHHO | 2.5000 | 1 |
| MMOGWO | 7.8333 | 9 |
| PAES | 11.0833 | 13 |
| SPEA2 | 6.4167 | 4 |
| NSGA-II | 8.1667 | 10 |
| IBEA | 11.7500 | 14 |
| OMOPSO | 7.5833 | 8 |
| AbYSS | 7.5000 | 7 |
| CellDE | 9.8333 | 12 |
| MOEA/D | 9.2500 | 11 |
| SMPSO | 4.7500 | 3 |
| MOCell | 7.0000 | 6 |
| GDE3 | 6.8333 | 5 |
| eMOABC | 4.4167 | 2 |

5 Conclusions

In this study, we have proposed enhancements to the HHO algorithm, namely EHHO and MO-EHHO, to solve SOPs and MOPs, respectively. Both EHHO and MO-EHHO have been evaluated with 29 SOPs and 12 MOPs. Specifically, EHHO has been evaluated using a variety of single-objective benchmark problems, including unimodal functions F1–F7, multimodal functions F8–F23, and composition functions F24–F29. Two sets of well-known multi-objective benchmark functions, i.e. ZDT and DTLZ, have been employed to evaluate MO-EHHO. Four performance metrics, i.e. \(\gamma \), \(\Delta \), GD, and IGD, have been utilised to measure the convergence and diversity properties of MO-EHHO. In addition, the sign test has been employed to deduce a statistical conclusion for the performance comparison between two algorithms at the significance level of \(\alpha =0.05\). The average ranking method has also been adopted to reveal the accuracy and stability of an algorithm against other competitors. All algorithms have been ranked based on their average results across the benchmark functions.
The DE scheme has been integrated into EHHO, while a chaos strategy has been utilised to formulate its mutation factor. EHHO leverages the DE/best/1 mutation scheme, where "best" indicates that the base vector is the best non-dominated solution stored in the external archive, while "1" indicates that one difference vector is used. The nonlinear exploration factor contributes towards the diversification capability in the early exploration phase, and facilitates a smooth changeover from exploration to exploitation. Additionally, the incorporation of DE into EHHO and the mutation strategy in the population are useful to prevent EHHO from falling into local optima. Despite its strengths, EHHO struggles in certain benchmark problems that involve higher dimensions and more complex search landscapes in comparison with the HHODE variants. For MOPs, the non-dominated sorting strategy from NSGA-II has been employed in MO-EHHO. MO-EHHO is able to exploit the Pareto optimal solutions while preserving diversity.
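A minimal sketch of the DE/best/1 step described above, assuming a logistic map as the chaos strategy (the specific map and the variable names here are our own illustrative choices, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def chaotic_factor(z):
    # one iterate of the logistic map: a commonly used chaotic sequence in (0, 1)
    # that replaces the fixed DE mutation factor F
    return 4.0 * z * (1.0 - z)

def de_best_1(pop, best, z, lo, hi):
    """One DE/best/1 mutation: the best vector plus one scaled difference vector."""
    z = chaotic_factor(z)
    r1, r2 = rng.choice(len(pop), size=2, replace=False)
    mutant = best + z * (pop[r1] - pop[r2])
    # clamp back into the search bounds
    return np.clip(mutant, lo, hi), z
```

Iterating `z` each generation yields a non-repeating sequence of mutation factors, which is the source of the increased randomness mentioned in the abstract.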
The results of EHHO and MO-EHHO are compared with those from popular optimisation algorithms published in the literature. The results indicate that in most SOPs and MOPs, EHHO and MO-EHHO perform better than their competitors, as presented in Sect. 4. For further work, several research directions can be pursued. Firstly, the proposed models still have room for improvement. In particular, MO-EHHO can be combined with other multi-objective algorithms to attain more diverse solutions. Secondly, the proposed models can be applied to various optimisation tasks, including job-shop scheduling and other real-world problems.

Declarations

Ethical approval

This article does not contain any studies with animals performed by any of the authors.

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Informed consent was obtained from all individual participants included in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices

Appendix A

Single-objective Benchmark Functions

See Tables 22, 23, 24 and 25.
Table 22
Description of unimodal benchmark functions (dimensions: 30, 100, 500, 1000 for all)

\(F_1(x)=\sum _{i=1}^{n}x_i^2\); range \([-100,100]\), \(f_\textrm{min}=0\)
\(F_2(x)=\sum _{i=1}^{n}|x_i |+ \prod _{i=1}^{n}|x_i |\); range \([-10,10]\), \(f_\textrm{min}=0\)
\(F_3(x)=\sum _{i=1}^{n}(\sum _{j=1}^{i}x_j)^2\); range \([-100,100]\), \(f_\textrm{min}=0\)
\(F_4(x)=\max _i[|x_i |, 1\le i \le n]\); range \([-100,100]\), \(f_\textrm{min}=0\)
\(F_5(x)=\sum _{i=1}^{n-1}[100(x_{i+1}-x_i^2)^2+(x_i-1)^2]\); range \([-30,30]\), \(f_\textrm{min}=0\)
\(F_6(x)=\sum _{i=1}^{n}([x_i+0.5])^2\); range \([-100,100]\), \(f_\textrm{min}=0\)
\(F_7(x)=\sum _{i=1}^{n}ix_i^4+{\text {random[0,1)}}\); range \([-1.28,1.28]\), \(f_\textrm{min}=0\)
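As a quick sanity check (illustrative code, not the benchmark suite used in the experiments), the first and fifth unimodal functions can be written directly from the table:

```python
import numpy as np

def f1(x):
    """F1, sphere function: sum of squares, minimum 0 at the origin."""
    return np.sum(x ** 2)

def f5(x):
    """F5, Rosenbrock function: minimum 0 at x = (1, ..., 1)."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)
```

Both functions are evaluated here for an n-dimensional NumPy vector, matching the dimensions 30, 100, 500 and 1000 listed in the table.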
Table 23
Description of multimodal benchmark functions (dimensions: 30, 100, 500, 1000 for all)

\(f_8(x)=\sum _{i=1}^n-x_i \sin \left( \sqrt{|x_i |}\right) \); range \([-500,500]\), \(f_\textrm{min}=-418.9829 \times n\)
\(f_{9}(x)=\sum _{i=1}^{n}\left[ x_{i}^{2}-10 \cos \left( 2 \pi x_{i}\right) +10\right] \); range \([-5.12,5.12]\), \(f_\textrm{min}=0\)
\(f_{10}(x)=-20 \exp \left( -0.2 \sqrt{\frac{1}{n} \sum _{i=1}^{n} x_{i}^{2}}\right) -\exp \left( \frac{1}{n} \sum _{i=1}^{n} \cos \left( 2 \pi x_{i}\right) \right) +20+e\); range \([-32,32]\), \(f_\textrm{min}=0\)
\(f_{11}(x)=\frac{1}{4000} \sum _{i=1}^{n} x_{i}^{2}-\prod _{i=1}^{n} \cos \left( \frac{x_{i}}{\sqrt{i}}\right) +1\); range \([-600,600]\), \(f_\textrm{min}=0\)
\(f_{12}(x)=\frac{\pi }{n}\left\{ 10 \sin \left( \pi y_{1}\right) +\sum _{i=1}^{n-1}\left( y_{i}-1\right) ^{2}\left[ 1+10 \sin ^{2}\left( \pi y_{i+1}\right) \right] +\left( y_{n}-1\right) ^{2}\right\} +\sum _{i=1}^{n} u\left( x_{i}, 10,100,4\right) \), where \(y_{i}=1+\frac{x_{i}+1}{4}\) and \(u\left( x_{i}, a, k, m\right) =\left\{ \begin{array}{ll}k\left( x_{i}-a\right) ^{m} &{} x_{i}>a \\ 0 &{} -a<x_{i}<a \\ k\left( -x_{i}-a\right) ^{m} &{} x_{i}<-a\end{array}\right. \); range \([-50,50]\), \(f_\textrm{min}=0\)
\(f_{13}(x)=0.1\left\{ \sin ^{2}\left( 3 \pi x_{1}\right) +\sum _{i=1}^{n}\left( x_{i}-1\right) ^{2}\left[ 1+\sin ^{2}\left( 3 \pi x_{i}+1\right) \right] +\left( x_{n}-1\right) ^{2}\left[ 1+\sin ^{2}\left( 2 \pi x_{n}\right) \right] \right\} +\sum _{i=1}^{n} u\left( x_{i}, 5,100,4\right) \); range \([-50,50]\), \(f_\textrm{min}=0\)
Table 24
Description of fixed-dimension multimodal benchmark functions

\(f_{14}(x)=\left( \frac{1}{500}+\sum _{j=1}^{25} \frac{1}{j+\sum _{i=1}^{2}\left( x_{i}-a_{i j}\right) ^{6}}\right) ^{-1}\); D = 2, range \([-65,65]\), \(f_\textrm{min}=1\)
\(f_{15}(x)=\sum _{i=1}^{11}\left[ a_{i}-\frac{x_{1}\left( b_{i}^{2}+b_{i} x_{2}\right) }{b_{i}^{2}+b_{i} x_{3}+x_{4}}\right] ^{2}\); D = 4, range \([-5,5]\), \(f_\textrm{min}=0.00030\)
\(f_{16}(x)=4 x_{1}^{2}-2.1 x_{1}^{4}+\frac{1}{3} x_{1}^{6}+x_{1} x_{2}-4 x_{2}^{2}+4 x_{2}^{4}\); D = 2, range \([-5,5]\), \(f_\textrm{min}=-1.0316\)
\(f_{17}(x)=\left( x_{2}-\frac{5.1}{4 \pi ^{2}} x_{1}^{2}+\frac{5}{\pi } x_{1}-6\right) ^{2}+10\left( 1-\frac{1}{8 \pi }\right) \cos x_{1}+10\); D = 2, range \([-5,5]\), \(f_\textrm{min}=0.398\)
\(f_{18}(x)=\left[ 1+\left( x_{1}+x_{2}+1\right) ^{2}\left( 19-14 x_{1}+3 x_{1}^{2}-14 x_{2}+6 x_{1} x_{2}+3 x_{2}^{2}\right) \right] \times \left[ 30+\left( 2 x_{1}-3 x_{2}\right) ^{2}\left( 18-32 x_{1}+12 x_{1}^{2}+48 x_{2}-36 x_{1} x_{2}+27 x_{2}^{2}\right) \right] \); D = 2, range \([-2,2]\), \(f_\textrm{min}=3\)
\(f_{19}(x)=-\sum _{i=1}^{4} c_{i} \exp \left( -\sum _{j=1}^{3} a_{i j}\left( x_{j}-p_{i j}\right) ^{2}\right) \); D = 3, range [0, 1], \(f_\textrm{min}=-3.86\)
\(f_{20}(x)=-\sum _{i=1}^{4} c_{i} \exp \left( -\sum _{j=1}^{6} a_{i j}\left( x_{j}-p_{i j}\right) ^{2}\right) \); D = 6, range [0, 1], \(f_\textrm{min}=-3.32\)
\(f_{21}(x)=-\sum _{i=1}^{5}\left[ \left( X-a_{i}\right) \left( X-a_{i}\right) ^{T}+c_{i}\right] ^{-1}\); D = 4, range [0, 10], \(f_\textrm{min}=-10.1532\)
\(f_{22}(x)=-\sum _{i=1}^{7}\left[ \left( X-a_{i}\right) \left( X-a_{i}\right) ^{T}+c_{i}\right] ^{-1}\); D = 4, range [0, 10], \(f_\textrm{min}=-10.4028\)
\(f_{23}(x)=-\sum _{i=1}^{10}\left[ \left( X-a_{i}\right) \left( X-a_{i}\right) ^{T}+c_{i}\right] ^{-1}\); D = 4, range [0, 10], \(f_\textrm{min}=-10.5363\)
Table 25
Details of hybrid composition functions (MM: multi-modal, R: rotated, NS: non-separable, S: scalable, D: dimension)

| ID | Description | Properties | D | Range |
|---|---|---|---|---|
| F24 (C16) | Rotated hybrid composition function | MM,R,NS,S | 30 | \([-5,5]^D\) |
| F25 (C18) | Rotated hybrid composition function | MM,R,NS,S | 30 | \([-5,5]^D\) |
| F26 (C19) | Rotated hybrid composition function with narrow basin global optimum | MM,NS,S | 30 | \([-5,5]^D\) |
| F27 (C20) | Rotated hybrid composition function with global optimum on the bounds | MM,NS,S | 30 | \([-5,5]^D\) |
| F28 (C21) | Rotated hybrid composition function | MM,R,NS,S | 30 | \([-5,5]^D\) |
| F29 (C25) | Rotated hybrid composition function without bounds | MM,NS,S | 30 | \([-5,5]^D\) |

Appendix B

Quantitative results for multi-objective functions

See Tables 26, 27, 28, 29, and 30.
Table 26
Mean values, standard deviation and ranking of MO-EHHO variants and those in Liu et al. (2020) in \(\gamma \) for all test problems
Algorithms
ZDT1
ZDT2
ZDT3
ZDT4
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
MO-EHHO
5.46E\(-\)03
5.02E\(-\)04
6
4.59E\(-\)03
9.17E\(-\)04
6
6.09E\(-\)03
4.01E\(-\)04
6
1.47E \(-\)01
2.00E \(-\)01
1
MMOGWO
1.07E\(-\)03
4.21E\(-\)04
4
7.77E\(-\)04
7.76E\(-\)05
3
3.55E\(-\)03
7.74E\(-\)04
2
4.21E+00
3.46E+00
3
MOGWO
9.54E\(-\)04
5.14E\(-\)04
3
8.05E\(-\)04
8.70E\(-\)05
4
4.51E\(-\)03
6.33E\(-\)04
3
7.59E+00
9.05E+00
6
acMOGWO
1.29E\(-\)03
4.38E\(-\)04
5
7.61E \(-\)04
8.32E \(-\)05
1
4.79E\(-\)03
5.52E\(-\)04
4
3.37E+00
2.39E+00
2
bMOGWO
8.65E \(-\)04
1.45E \(-\)04
1
7.63E\(-\)04
8.01E\(-\)05
2
4.80E\(-\)03
7.21E\(-\)04
5
6.33E+00
7.36E+00
5
eMOGWO
9.51E\(-\)04
3.06E\(-\)04
2
8.37E\(-\)04
1.18E\(-\)04
5
2.88E \(-\)03
4.21E \(-\)04
1
6.14E+00
6.16E+00
4
Algorithms
ZDT6
DTLZ1
DTLZ2
DTLZ3
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
MO-EHHO
3.91E \(-\)02
4.42E \(-\)02
1
1.91E+00
2.51E+00
1
6.51E \(-\)02
5.69E\(-\)03
6
1.78E+01
1.53E+01
1
MMOGWO
1.90E\(-\)01
1.14E\(-\)02
2
9.52E+01
2.54E+01
5
2.84E\(-\)02
1.07E\(-\)02
4
4.59E+02
1.00E+02
5
MOGWO
2.37E\(-\)01
6.00E\(-\)02
5
3.39E+01
4.97E+00
4
2.11E\(-\)02
9.98E\(-\)03
3
4.23E+02
3.33E+01
4
acMOGWO
1.95E\(-\)01
3.23E\(-\)02
3
2.47E+01
9.08E+00
2
2.98E\(-\)02
1.26E\(-\)02
5
2.96E+02
8.06E+01
2
bMOGWO
2.31E\(-\)01
5.70E\(-\)02
4
1.11E+02
1.36E+01
6
1.99E \(-\)02
1.17E \(-\)02
1
5.94E+02
4.03E+01
6
eMOGWO
2.72E\(-\)01
1.34E\(-\)01
6
3.01E+01
5.53E+00
3
2.06E\(-\)02
1.29E\(-\)02
2
4.12E+02
2.32E+01
3
Algorithms
DTLZ4
DTLZ5
DTLZ6
DTLZ7
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
MO-EHHO
4.54E\(-\)02
4.85E\(-\)03
5
5.89E \(-\)03
9.02E \(-\)04
1
3.72E \(-\)03
2.18E \(-\)04
1
1.91E\(-\)02
4.51E\(-\)03
2
MMOGWO
4.78E\(-\)02
3.38E\(-\)02
6
1.55E+00
1.61E\(-\)01
3
6.44E+00
4.69E\(-\)01
4
3.05E\(-\)02
2.30E\(-\)02
5
MOGWO
1.73E \(-\)02
9.37E \(-\)03
1
1.94E+00
9.84E\(-\)02
5
5.78E+00
4.35E\(-\)01
2
1.95E\(-\)02
1.45E\(-\)02
3
acMOGWO
2.53E\(-\)02
1.76E\(-\)02
3
1.53E+00
1.62E\(-\)01
2
6.45E+00
5.24E\(-\)01
5
3.26E\(-\)02
2.07E\(-\)02
6
bMOGWO
4.28E\(-\)02
1.01E\(-\)02
4
1.87E+00
9.82E\(-\)02
4
6.01E+00
5.48E\(-\)01
3
2.15E\(-\)02
1.37E\(-\)02
4
eMOGWO
2.05E\(-\)02
1.88E\(-\)02
2
1.94E+00
8.53E\(-\)02
6
6.45E+00
4.39E\(-\)01
6
1.31E \(-\)02
1.02E \(-\)02
1
Table 27
Mean values, standard deviation and ranking of MO-EHHO variants and other well-known algorithms in Liu et al. (2020) in terms of \(\gamma \) for all test problems
Algorithms
ZDT1
ZDT2
ZDT3
ZDT4
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
MO-EHHO
5.93E\(-\)03
8.44E\(-\)04
5
5.00E\(-\)03
9.96E\(-\)04
7
6.37E\(-\)03
6.08E\(-\)04
4
1.47E\(-\)01
2.45E\(-\)01
2
MMOGWO
1.23E\(-\)03
4.01E\(-\)04
2
8.52E\(-\)04
1.06E\(-\)04
3
4.69E \(-\)04
6.19E \(-\)04
1
4.25E+00
4.15E+00
4
SPEA2
1.04E\(-\)01
2.50E\(-\)02
9
1.62E\(-\)01
4.02E\(-\)02
9
7.44E\(-\)02
2.31E\(-\)02
9
9.73E \(-\)02
1.49E \(-\)01
1
NSGA-II
4.61E\(-\)02
4.33E\(-\)02
7
7.52E\(-\)02
4.28E\(-\)02
8
5.31E\(-\)02
5.42E\(-\)02
8
7.08E+00
2.85E+00
6
MOPSO
2.87E\(-\)03
1.77E\(-\)03
3
2.02E\(-\)03
1.54E\(-\)03
4
5.23E\(-\)03
1.11E\(-\)03
2
1.65E+01
1.36E+01
9
MOGWO
8.27E \(-\)04
2.10E \(-\)04
1
7.99E\(-\)04
9.56E\(-\)05
2
5.41E\(-\)03
2.70E\(-\)03
3
5.67E+00
5.84E+00
5
MOALO
5.04E\(-\)03
9.67E\(-\)03
4
5.40E \(-\)04
7.52E \(-\)05
1
7.67E\(-\)03
3.27E\(-\)03
5
2.01E+01
5.24E+00
10
MOMVO
3.92E\(-\)02
1.65E\(-\)02
6
4.50E\(-\)03
3.70E\(-\)03
6
2.77E\(-\)02
1.41E\(-\)02
6
1.26E+01
4.90E\(-\)02
7
MOGOA
7.79E\(-\)02
2.33E\(-\)01
8
4.02E\(-\)03
6.95E\(-\)03
5
3.83E\(-\)02
6.39E\(-\)02
7
1.53E+01
3.37E\(-\)01
8
MOABC
2.94E\(-\)01
5.59E\(-\)02
10
3.05E\(-\)01
7.19E\(-\)02
10
1.87E\(-\)01
5.94E\(-\)02
10
2.25E+00
8.90E\(-\)01
3
Algorithms
ZDT6
DTLZ1
DTLZ2
DTLZ3
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
MO-EHHO
6.11E\(-\)02
6.69E\(-\)02
2
1.91E+00
2.51E+00
1
6.51E\(-\)02
5.69E\(-\)03
8
1.78E+01
1.53E+01
2
MMOGWO
1.92E\(-\)01
1.02E\(-\)02
4
8.55E+01
2.47E+01
10
3.08E\(-\)02
1.49E\(-\)02
4
5.15E+02
9.91E+01
10
SPEA2
1.31E\(-\)01
2.92E\(-\)01
3
8.03E+00
5.88E+00
4
5.79E\(-\)02
1.49E\(-\)02
7
4.30E+01
2.19E+01
3
NSGA-II
2.30E\(-\)01
3.18E\(-\)01
6
1.28E+01
3.45E+00
6
5.10E\(-\)02
6.15E\(-\)03
6
7.24E+01
1.87E+01
5
MOPSO
3.06E \(-\)02
6.16E \(-\)02
1
2.69E+01
4.63E+00
7
7.85E\(-\)02
1.06E\(-\)02
9
4.10E+02
2.77E+01
8
MOGWO
2.75E\(-\)01
1.17E\(-\)01
8
3.43E+01
5.64E+00
9
2.23E\(-\)02
9.60E\(-\)03
3
4.24E+02
2.45E+01
9
MOALO
3.14E\(-\)01
1.98E\(-\)01
9
1.20E+01
2.98E+00
5
5.03E\(-\)02
1.99E\(-\)02
5
1.05E+02
1.21E+02
7
MOMVO
2.24E\(-\)01
4.52E\(-\)02
5
4.91E+00
7.27E\(-\)01
3
1.26E \(-\)02
2.62E \(-\)03
1
1.24E+01
6.14E+00
1
MOGOA
2.55E\(-\)01
6.86E\(-\)02
7
3.09E+01
7.90E\(-\)01
8
1.52E\(-\)02
8.63E\(-\)03
2
8.25E+01
2.97E+00
6
MOABC
3.95E\(-\)01
1.16E\(-\)01
10
3.64E+00
2.73E+00
2
9.05E\(-\)02
1.85E\(-\)01
10
6.60E+01
9.50E+01
4
Algorithms
DTLZ4
DTLZ5
DTLZ6
DTLZ7
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
MO-EHHO
4.54E\(-\)02
4.85E\(-\)03
6
5.89E \(-\)03
9.02E \(-\)04
1
3.72E \(-\)03
2.18E \(-\)04
1
1.91E\(-\)02
4.51E\(-\)03
2
MMOGWO
6.16E\(-\)02
3.34E\(-\)02
7
1.54E+00
1.95E\(-\)01
3
6.40E+00
4.20E\(-\)01
4
2.54E\(-\)02
2.00E\(-\)02
4
SPEA2
6.63E\(-\)02
1.62E\(-\)02
8
1.66E+00
1.02E\(-\)01
5
6.59E+00
3.00E\(-\)01
5
2.39E\(-\)01
6.33E\(-\)02
8
NSGA-II
2.60E\(-\)02
7.98E\(-\)03
3
1.79E+00
7.62E\(-\)02
7
7.07E+00
2.73E\(-\)01
6
3.33E\(-\)02
1.24E\(-\)02
6
MOPSO
8.78E\(-\)02
1.17E\(-\)02
9
1.68E+00
4.36E\(-\)02
6
6.07E+00
1.89E\(-\)01
3
3.08E\(-\)02
6.77E\(-\)03
5
MOGWO
3.12E\(-\)02
5.04E\(-\)02
4
1.91E+00
1.25E\(-\)01
8
5.93E+00
5.08E\(-\)01
2
2.00E\(-\)02
1.60E\(-\)02
3
MOALO
2.11E\(-\)01
1.83E\(-\)01
10
1.62E+00
2.85E\(-\)01
4
7.93E+00
8.69E\(-\)01
8
3.42E \(-\)03
2.25E \(-\)03
1
MOMVO
2.35E\(-\)02
6.50E\(-\)03
2
1.97E+00
2.33E\(-\)01
9
8.10E+00
4.30E\(-\)01
9
1.20E\(-\)01
5.27E\(-\)02
7
MOGOA
3.93E\(-\)02
2.88E\(-\)02
5
2.06E+00
2.34E\(-\)01
10
7.75E+00
5.49E\(-\)01
7
2.99E\(-\)01
7.34E\(-\)01
9
MOABC
2.34E \(-\)02
1.35E \(-\)02
1
1.40E+00
1.42E\(-\)01
2
9.59E+00
5.55E\(-\)02
10
5.23E\(-\)01
9.28E\(-\)01
10
Table 28
Mean values, standard deviation and ranking of MO-EHHO variants and other well-known algorithms in Liu et al. (2020) in terms of \(\Delta \) for all test problems
Algorithms
ZDT1
ZDT2
ZDT3
ZDT4
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
MO-EHHO
6.33E\(-\)01
7.57E\(-\)02
3
8.43E\(-\)01
1.17E\(-\)01
3
9.42E\(-\)01
7.59E\(-\)02
5
8.66E \(-\)01
2.26E \(-\)01
1
MMOGWO
1.09E+00
1.24E\(-\)01
7
9.89E\(-\)01
1.44E\(-\)01
6
9.78E\(-\)01
1.06E\(-\)01
6
1.04E+00
6.61E\(-\)02
7
SPEA2
9.11E\(-\)01
1.29E\(-\)01
5
9.57E\(-\)01
1.16E\(-\)01
5
9.18E\(-\)01
9.22E\(-\)02
4
1.18E+00
2.55E\(-\)01
10
NSGA-II
4.56E\(-\)01
5.10E\(-\)02
2
5.01E\(-\)01
6.90E\(-\)02
2
5.28E\(-\)01
1.02E\(-\)01
2
9.36E\(-\)01
3.25E\(-\)02
2
MOPSO
3.20E \(-\)01
4.79E \(-\)02
1
3.42E \(-\)01
5.46E \(-\)02
1
3.51E \(-\)01
4.56E \(-\)02
1
9.91E\(-\)01
3.72E\(-\)02
4
MOGWO
1.12E+00
1.43E\(-\)01
9
1.07E+00
1.41E\(-\)01
10
9.78E\(-\)01
1.04E\(-\)01
6
1.08E+00
1.18E\(-\)01
9
MOALO
1.11E+00
4.71E\(-\)02
8
1.02E+00
7.40E\(-\)03
9
1.30E+00
1.09E\(-\)01
10
1.04E+00
4.24E\(-\)02
7
MOMVO
9.77E\(-\)01
1.62E\(-\)01
6
1.01E+00
1.08E\(-\)02
8
1.09E+00
1.21E\(-\)01
8
9.98E\(-\)01
8.83E\(-\)04
5
MOGOA
1.20E+00
7.41E\(-\)02
10
1.00E+00
2.30E\(-\)04
7
1.28E+00
1.19E\(-\)01
9
9.81E\(-\)01
0.00E+00
3
MOABC
8.08E\(-\)01
8.04E\(-\)02
4
8.50E\(-\)01
9.17E\(-\)02
4
8.16E\(-\)01
9.78E\(-\)02
3
1.01E+00
1.51E\(-\)01
6
Algorithms
ZDT6
DTLZ1
DTLZ2
DTLZ3
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
AVG
STD
Rank
MO-EHHO
1.24E+00
2.65E\(-\)01
10
7.80E\(-\)01
2.50E\(-\)01
3
5.92E\(-\)01
4.26E\(-\)02
3
8.92E\(-\)01
2.37E\(-\)01
4
MMOGWO
9.61E\(-\)01
7.41E\(-\)02
4
1.04E+00
1.80E\(-\)01
6
8.18E\(-\)01
3.79E\(-\)02
5
9.54E\(-\)01
1.59E\(-\)01
5
SPEA2
9.57E\(-\)01
1.89E\(-\)01
3
1.39E+00
3.23E\(-\)01
10
5.82E\(-\)01
5.61E\(-\)02
2
1.32E+00
3.61E\(-\)01
10
NSGA-II
9.96E\(-\)01
1.59E\(-\)01
5
8.52E\(-\)01
4.85E\(-\)02
4
6.81E\(-\)01
8.76E\(-\)02
4
9.62E\(-\)01
1.13E\(-\)01
6
MOPSO
6.91E \(-\)01
4.27E \(-\)01
1
6.98E \(-\)01
8.31E \(-\)02
1
4.71E \(-\)01
6.82E \(-\)02
1
7.63E\(-\)01
8.93E\(-\)02
2
MOGWO
1.12E+00
1.43E\(-\)01
7
7.18E\(-\)01
6.72E\(-\)02
2
8.60E\(-\)01
3.02E\(-\)02
6
7.01E \(-\)01
5.35E \(-\)02
1
MOALO
1.15E+00
1.03E\(-\)01
9
1.30E+00
2.19E\(-\)01
9
1.17E+00
5.33E\(-\)02
10
1.16E+00
1.74E\(-\)01
9
MOMVO
1.14E+00
7.91E\(-\)02
8
1.06E+00
6.99E\(-\)02
8
1.16E+00
6.73E\(-\)02
9
1.06E+00
9.92E\(-\)02
8
MOGOA
1.03E+00
9.94E\(-\)02
6
1.04E+00
2.62E\(-\)02
6
1.12E+00
8.32E\(-\)02
8
1.00E+00
6.16E\(-\)03
7
MOABC
9.31E\(-\)01
1.53E\(-\)01
2
9.20E\(-\)01
1.70E\(-\)01
5
1.02E+00
9.08E\(-\)02
7
8.47E\(-\)01
1.77E\(-\)01
3
| Algorithms | DTLZ4 (AVG / STD / Rank) | DTLZ5 (AVG / STD / Rank) | DTLZ6 (AVG / STD / Rank) | DTLZ7 (AVG / STD / Rank) |
|---|---|---|---|---|
| MO-EHHO | 8.14E-01 / 8.14E-02 / 8 | 6.92E-01 / 8.18E-02 / 2 | 7.51E-01 / 1.10E-01 / 4 | 7.30E-01 / 6.51E-02 / 2 |
| MMOGWO | 5.68E-01 / 6.19E-02 / 2 | 7.80E-01 / 5.62E-02 / 5 | 8.77E-01 / 8.98E-02 / 6 | 9.41E-01 / 7.91E-02 / 7 |
| SPEA2 | 6.93E-01 / 1.38E-01 / 5 | 6.99E-01 / 4.55E-02 / 3 | 7.34E-01 / 4.87E-02 / 3 | 7.43E-01 / 9.01E-02 / 3 |
| NSGA-II | 6.85E-01 / 6.26E-02 / 4 | 7.41E-01 / 1.88E-02 / 4 | 7.17E-01 / 3.52E-02 / 2 | 8.49E-01 / 4.88E-02 / 5 |
| MOPSO | 4.97E-01 / 4.65E-02 / 1 | 6.58E-01 / 3.77E-02 / 1 | 6.09E-01 / 3.42E-02 / 1 | 4.61E-01 / 5.66E-02 / 1 |
| MOGWO | 6.07E-01 / 1.29E-01 / 3 | 8.00E-01 / 2.05E-02 / 6 | 7.64E-01 / 6.86E-02 / 5 | 8.27E-01 / 1.48E-01 / 4 |
| MOALO | 1.30E+00 / 8.33E-02 / 10 | 1.15E+00 / 5.02E-02 / 10 | 1.17E+00 / 6.73E-02 / 10 | 1.01E+00 / 7.80E-03 / 8 |
| MOMVO | 7.50E-01 / 4.87E-02 / 6 | 1.08E+00 / 3.77E-02 / 9 | 1.10E+00 / 3.80E-02 / 9 | 1.07E+00 / 1.34E-01 / 9 |
| MOGOA | 1.06E+00 / 3.89E-02 / 9 | 1.05E+00 / 6.56E-02 / 8 | 1.06E+00 / 2.38E-02 / 8 | 1.15E+00 / 9.69E-02 / 10 |
| MOABC | 7.59E-01 / 1.62E-01 / 7 | 9.95E-01 / 8.99E-02 / 7 | 9.48E-01 / 7.53E-02 / 7 | 9.36E-01 / 6.91E-02 / 6 |
Table 29 Mean values, standard deviations, and rankings of the MO-EHHO variants and other well-known algorithms from Liu et al. (2020) in terms of the generational distance (GD) for all test problems
| Algorithms | ZDT1 (AVG / STD / Rank) | ZDT2 (AVG / STD / Rank) | ZDT3 (AVG / STD / Rank) | ZDT4 (AVG / STD / Rank) |
|---|---|---|---|---|
| MO-EHHO | 8.97E-04 / 1.42E-04 / 5 | 8.56E-04 / 6.25E-04 / 5 | 8.07E-04 / 8.77E-05 / 4 | 1.96E-02 / 3.38E-02 / 1 |
| MMOGWO | 2.34E-04 / 1.37E-04 / 2 | 9.79E-05 / 1.09E-05 / 3 | 6.16E-04 / 9.65E-05 / 1 | 6.10E-01 / 6.53E-01 / 3 |
| SPEA2 | 1.23E-02 / 3.69E-03 / 9 | 2.60E-02 / 2.25E-02 / 9 | 1.65E-02 / 5.59E-03 / 9 | 8.91E-02 / 1.30E-01 / 2 |
| NSGA-II | 4.78E-03 / 4.47E-03 / 6 | 7.58E-03 / 4.26E-03 / 8 | 6.98E-03 / 5.77E-03 / 8 | 7.13E-01 / 2.84E-01 / 4 |
| MOPSO | 3.62E-04 / 1.83E-04 / 3 | 2.26E-04 / 1.73E-04 / 4 | 6.69E-04 / 1.06E-04 / 2 | 8.71E+00 / 6.98E+00 / 8 |
| MOGWO | 1.24E-04 / 5.79E-05 / 1 | 9.32E-05 / 1.03E-05 / 2 | 7.08E-04 / 4.24E-04 / 3 | 1.71E+00 / 3.45E+00 / 5 |
| MOALO | 6.70E-04 / 1.32E-03 / 4 | 6.12E-05 / 6.83E-06 / 1 | 1.22E-03 / 6.85E-04 / 5 | 2.06E+00 / 6.65E-01 / 7 |
| MOMVO | 9.10E-03 / 1.29E-02 / 8 | 1.57E-03 / 1.46E-03 / 6 | 6.86E-03 / 1.08E-02 / 7 | 1.95E+00 / 5.87E-01 / 6 |
| MOGOA | 8.55E-03 / 2.65E-02 / 7 | 2.25E-03 / 5.75E-03 / 7 | 4.70E-03 / 6.78E-03 / 6 | 1.48E+01 / 2.11E+00 / 10 |
| MOABC | 9.73E-02 / 2.37E-02 / 10 | 1.21E-01 / 3.52E-02 / 10 | 6.56E-02 / 2.21E-02 / 10 | 1.19E+01 / 5.59E-01 / 9 |
| Algorithms | ZDT6 (AVG / STD / Rank) | DTLZ1 (AVG / STD / Rank) | DTLZ2 (AVG / STD / Rank) | DTLZ3 (AVG / STD / Rank) |
|---|---|---|---|---|
| MO-EHHO | 3.76E-02 / 3.14E-02 / 5 | 2.79E-01 / 3.98E-01 / 1 | 7.58E-03 / 1.11E-03 / 7 | 2.28E+00 / 1.62E+00 / 1 |
| MMOGWO | 2.13E-02 / 8.35E-04 / 2 | 1.35E+01 / 3.49E+00 / 10 | 3.23E-03 / 1.52E-03 / 4 | 6.73E+01 / 1.20E+01 / 9 |
| SPEA2 | 3.90E-02 / 5.58E-02 / 6 | 3.81E+00 / 2.75E+00 / 7 | 1.17E-02 / 3.71E-03 / 9 | 1.77E+01 / 9.52E+00 / 6 |
| NSGA-II | 6.21E-02 / 3.95E-02 / 9 | 1.33E+00 / 3.56E-01 / 3 | 5.93E-03 / 7.42E-04 / 6 | 7.36E+00 / 1.89E+00 / 3 |
| MOPSO | 1.78E-02 / 3.01E-02 / 1 | 6.10E+00 / 7.79E-01 / 8 | 1.15E-02 / 3.60E-03 / 8 | 7.94E+01 / 8.15E+00 / 10 |
| MOGWO | 4.53E-02 / 2.90E-02 / 7 | 7.58E+00 / 1.42E+00 / 9 | 2.41E-03 / 1.07E-03 / 2 | 6.67E+01 / 5.98E+00 / 8 |
| MOALO | 5.39E-02 / 4.99E-02 / 8 | 1.82E+00 / 8.15E-01 / 4 | 5.19E-03 / 2.00E-03 / 5 | 1.23E+01 / 1.47E+01 / 5 |
| MOMVO | 3.60E-02 / 2.93E-02 / 3 | 6.57E-01 / 3.65E-01 / 2 | 1.37E-03 / 2.96E-04 / 1 | 3.02E+00 / 1.70E+00 / 2 |
| MOGOA | 3.66E-02 / 1.94E-02 / 4 | 3.10E+00 / 7.76E-02 / 6 | 2.87E-03 / 1.99E-03 / 3 | 1.14E+01 / 1.01E+00 / 4 |
| MOABC | 1.70E-01 / 6.62E-03 / 10 | 2.41E+00 / 2.20E+00 / 5 | 7.68E-02 / 1.74E-01 / 10 | 5.59E+01 / 9.68E+01 / 7 |
| Algorithms | DTLZ4 (AVG / STD / Rank) | DTLZ5 (AVG / STD / Rank) | DTLZ6 (AVG / STD / Rank) | DTLZ7 (AVG / STD / Rank) |
|---|---|---|---|---|
| MO-EHHO | 5.40E-03 / 7.53E-04 / 4 | 7.29E-04 / 1.44E-04 / 1 | 4.42E-04 / 2.17E-05 / 1 | 3.77E-03 / 2.76E-03 / 3 |
| MMOGWO | 8.72E-03 / 5.77E-03 / 6 | 1.58E-01 / 1.98E-02 / 2 | 6.70E-01 / 3.77E-02 / 4 | 4.96E-03 / 3.98E-03 / 6 |
| SPEA2 | 1.43E-02 / 3.54E-03 / 8 | 2.28E-01 / 1.60E-02 / 9 | 9.56E-01 / 5.93E-02 / 9 | 5.21E-02 / 2.48E-02 / 9 |
| NSGA-II | 3.12E-03 / 1.30E-03 / 2 | 1.96E-01 / 8.08E-03 / 6 | 7.48E-01 / 2.26E-02 / 5 | 4.48E-03 / 1.74E-03 / 5 |
| MOPSO | 1.27E-02 / 3.70E-03 / 7 | 1.84E-01 / 3.73E-03 / 4 | 6.62E-01 / 1.77E-02 / 3 | 4.33E-03 / 9.74E-04 / 4 |
| MOGWO | 3.98E-03 / 7.13E-03 / 3 | 1.92E-01 / 1.22E-02 / 5 | 6.05E-01 / 5.16E-02 / 2 | 3.29E-03 / 2.86E-03 / 2 |
| MOALO | 2.85E-02 / 2.33E-02 / 10 | 1.66E-01 / 2.68E-02 / 3 | 8.06E-01 / 7.41E-02 / 7 | 3.60E-04 / 2.28E-04 / 1 |
| MOMVO | 2.42E-03 / 6.34E-04 / 1 | 2.03E-01 / 2.16E-02 / 7 | 8.18E-01 / 4.11E-02 / 8 | 2.30E-02 / 3.25E-02 / 7 |
| MOGOA | 6.83E-03 / 7.29E-03 / 5 | 2.08E-01 / 2.28E-02 / 8 | 7.80E-01 / 5.33E-02 / 6 | 4.00E-02 / 8.08E-02 / 8 |
| MOABC | 1.79E-02 / 1.03E-02 / 9 | 7.49E-01 / 2.31E-01 / 10 | 2.23E+00 / 1.63E-01 / 10 | 3.24E-01 / 6.42E-01 / 10 |
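For reference, the GD indicator reported in Table 29 measures how close an obtained approximation front lies to the true Pareto front. The following is a minimal NumPy sketch of one common formulation (the mean nearest-neighbour distance); note that some works instead use the root of the summed squared distances divided by the number of points, and this sketch is not the authors' implementation:

```python
import numpy as np

def generational_distance(approx, ref):
    """GD: average Euclidean distance from each point of the obtained
    approximation front `approx` to its nearest point on the reference
    (true) Pareto front `ref`. Both have shape (n_points, n_objectives)."""
    # pairwise distance matrix, shape (len(approx), len(ref))
    d = np.linalg.norm(approx[:, None, :] - ref[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# toy check: a front lying exactly on the reference front has GD = 0,
# while a uniform shift of 0.1 along one objective gives GD ≈ 0.1
ref = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(generational_distance(ref, ref))                    # 0.0
print(generational_distance(ref + np.array([0.1, 0.0]), ref))
```

A lower GD therefore indicates better convergence towards the true Pareto front, which is how the rankings in Table 29 are ordered.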
Table 30 Mean values, standard deviations, and rankings of the MO-EHHO variants and other well-known algorithms from Liu et al. (2020) in terms of the inverted generational distance (IGD) for all test problems
| Algorithms | ZDT1 (AVG / STD / Rank) | ZDT2 (AVG / STD / Rank) | ZDT3 (AVG / STD / Rank) | ZDT4 (AVG / STD / Rank) |
|---|---|---|---|---|
| MO-EHHO | 1.04E-03 / 2.30E-04 / 2 | 1.56E-03 / 4.51E-04 / 5 | 2.24E-03 / 8.51E-04 / 1 | 1.26E-02 / 2.06E-02 / 1 |
| MMOGWO | 1.18E-03 / 4.01E-04 / 3 | 8.52E-04 / 1.06E-04 / 3 | 2.74E-03 / 3.71E-04 / 2 | 4.04E+00 / 4.16E+00 / 5 |
| SPEA2 | 9.98E-02 / 2.50E-02 / 9 | 1.59E-01 / 4.02E-02 / 9 | 5.17E-02 / 1.46E-02 / 9 | 1.81E-02 / 1.46E-01 / 2 |
| NSGA-II | 3.72E-02 / 4.33E-02 / 8 | 8.31E-02 / 4.28E-02 / 8 | 2.91E-02 / 3.86E-02 / 8 | 6.22E+00 / 2.85E+00 / 6 |
| MOPSO | 2.57E-03 / 1.77E-03 / 5 | 1.64E-03 / 1.54E-03 / 6 | 3.18E-03 / 1.15E-03 / 4 | 1.04E+01 / 1.36E+01 / 7 |
| MOGWO | 8.17E-04 / 2.10E-04 / 1 | 7.99E-04 / 9.56E-05 / 2 | 2.93E-03 / 1.52E-03 / 3 | 3.89E+00 / 5.86E+00 / 4 |
| MOALO | 1.57E-03 / 9.67E-03 / 4 | 5.40E-04 / 7.52E-05 / 1 | 4.40E-03 / 2.80E-03 / 5 | 1.85E+01 / 5.25E+00 / 10 |
| MOMVO | 3.14E-02 / 1.65E-02 / 7 | 2.81E-03 / 3.70E-03 / 7 | 1.74E-02 / 8.02E-03 / 7 | 1.26E+01 / 4.91E-02 / 8 |
| MOGOA | 2.58E-02 / 2.33E-01 / 6 | 8.79E-04 / 6.95E-03 / 4 | 1.46E-02 / 4.68E-02 / 6 | 1.53E+01 / 3.38E-01 / 9 |
| MOABC | 3.02E-01 / 5.59E-02 / 10 | 3.02E-01 / 5.17E-03 / 10 | 1.21E-01 / 3.43E-02 / 10 | 2.17E+00 / 8.92E-01 / 3 |
| Algorithms | ZDT6 (AVG / STD / Rank) | DTLZ1 (AVG / STD / Rank) | DTLZ2 (AVG / STD / Rank) | DTLZ3 (AVG / STD / Rank) |
|---|---|---|---|---|
| MO-EHHO | 1.20E-03 / 3.62E-04 / 1 | 1.24E-01 / 1.71E-01 / 1 | 7.46E-03 / 2.97E-04 / 2 | 1.29E+00 / 1.24E+00 / 1 |
| MMOGWO | 2.34E-01 / 1.28E-02 / 5 | 1.90E+02 / 5.11E+01 / 10 | 2.78E-02 / 1.49E-02 / 6 | 5.10E+02 / 9.93E+01 / 10 |
| SPEA2 | 4.57E-02 / 3.25E-01 / 3 | 1.40E+01 / 1.21E+01 / 4 | 5.94E-02 / 1.49E-02 / 9 | 3.98E+01 / 2.20E+01 / 4 |
| NSGA-II | 1.63E-01 / 3.53E-01 / 4 | 2.64E+01 / 7.06E+00 / 6 | 5.00E-02 / 6.15E-03 / 7 | 6.80E+01 / 1.88E+01 / 6 |
| MOPSO | 4.74E-03 / 6.68E-02 / 2 | 5.39E+01 / 9.49E+00 / 7 | 7.68E-02 / 1.06E-02 / 10 | 4.12E+02 / 2.77E+01 / 8 |
| MOGWO | 2.74E-01 / 1.27E-01 / 7 | 6.90E+01 / 1.16E+01 / 9 | 1.99E-02 / 9.60E-03 / 5 | 4.28E+02 / 2.45E+01 / 9 |
| MOALO | 3.18E-01 / 2.17E-01 / 9 | 2.39E+01 / 6.11E+00 / 5 | 5.01E-02 / 1.99E-02 / 8 | 4.68E+01 / 1.22E+02 / 5 |
| MOMVO | 2.59E-01 / 5.01E-02 / 6 | 9.71E+00 / 1.48E+00 / 3 | 1.24E-02 / 2.62E-03 / 3 | 1.09E+01 / 6.15E+00 / 2 |
| MOGOA | 2.88E-01 / 7.38E-02 / 8 | 6.39E+01 / 1.66E+00 / 8 | 1.51E-02 / 8.63E-03 / 4 | 8.23E+01 / 2.99E+00 / 7 |
| MOABC | 4.27E-01 / 1.25E-01 / 10 | 5.22E+00 / 5.57E+00 / 2 | 2.88E-03 / 1.85E-01 / 1 | 3.20E+01 / 9.52E+01 / 3 |
| Algorithms | DTLZ4 (AVG / STD / Rank) | DTLZ5 (AVG / STD / Rank) | DTLZ6 (AVG / STD / Rank) | DTLZ7 (AVG / STD / Rank) |
|---|---|---|---|---|
| MO-EHHO | 8.94E-03 / 9.09E-04 / 1 | 1.05E-04 / 1.50E-05 / 1 | 1.02E-04 / 1.52E-05 / 1 | 9.80E-04 / 8.58E-05 / 1 |
| MMOGWO | 4.73E-02 / 3.35E-02 / 7 | 1.88E+00 / 2.66E-01 / 4 | 8.47E+00 / 5.11E-01 / 4 | 1.74E-02 / 1.58E-02 / 4 |
| SPEA2 | 6.77E-02 / 1.63E-02 / 8 | 2.07E+00 / 1.55E-01 / 5 | 8.63E+00 / 3.80E-01 / 5 | 1.32E-01 / 2.22E-02 / 10 |
| NSGA-II | 2.27E-02 / 8.00E-03 / 5 | 2.29E+00 / 1.09E-01 / 8 | 9.20E+00 / 3.56E-01 / 6 | 2.17E-02 / 8.82E-03 / 5 |
| MOPSO | 8.84E-02 / 1.17E-02 / 9 | 2.11E+00 / 5.77E-02 / 6 | 7.83E+00 / 2.53E-01 / 3 | 2.71E-02 / 5.24E-03 / 6 |
| MOGWO | 1.48E-02 / 5.05E-02 / 2 | 2.18E+00 / 2.02E-01 / 7 | 7.64E+00 / 6.72E-01 / 2 | 1.47E-02 / 1.54E-02 / 3 |
| MOALO | 1.78E-01 / 1.84E-01 / 10 | 1.84E+00 / 3.90E-01 / 3 | 1.03E+01 / 1.22E+00 / 8 | 2.41E-03 / 1.52E-03 / 2 |
| MOMVO | 2.18E-02 / 6.52E-03 / 4 | 2.51E+00 / 3.48E+02 / 9 | 1.06E+01 / 6.46E-01 / 9 | 7.30E-02 / 1.68E-02 / 7 |
| MOGOA | 2.62E-02 / 2.89E-02 / 6 | 2.71E+00 / 3.40E-01 / 10 | 1.02E+01 / 1.01E+00 / 7 | 8.05E-02 / 2.23E-01 / 8 |
| MOABC | 1.94E-02 / 1.35E-02 / 3 | 1.69E+00 / 2.19E-01 / 2 | 1.20E+01 / 1.58E-01 / 10 | 1.09E-01 / 2.81E-01 / 9 |
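The IGD values in Table 30 reverse the direction of the comparison: distances are measured from each reference (true Pareto front) point to the nearest obtained point, so IGD also penalises fronts that fail to cover the whole reference set. A minimal NumPy sketch, again using the common mean-distance formulation rather than the authors' implementation:

```python
import numpy as np

def inverted_generational_distance(approx, ref):
    """IGD: average Euclidean distance from each reference Pareto-front point
    to its nearest point in the obtained approximation front. Low values
    require both convergence to and coverage of the reference front."""
    # pairwise distance matrix, shape (len(ref), len(approx))
    d = np.linalg.norm(ref[:, None, :] - approx[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

ref = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(inverted_generational_distance(ref, ref))  # 0.0 (full coverage)
# covering only one end of the front leaves a large IGD,
# even though that single point lies exactly on the reference front
print(inverted_generational_distance(np.array([[0.0, 1.0]]), ref))
```

In contrast, the GD of that single on-front point would be zero, which is why IGD-based rankings can differ noticeably from the GD-based rankings in Table 29.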
References
Abd Elaziz M, Yang H, Lu S (2021) A multi-leader Harris hawk optimization based on differential evolution for feature selection and prediction influenza viruses H1N1. Artif Intell Rev 1–58
Abualigah L, Abd Elaziz M, Shehab M, Ahmad Alomari O, Alshinwan M, Alabool H, Al-Arabiat DA (2021) Hybrid Harris hawks optimization with differential evolution for data clustering. In: Metaheuristics in machine learning: theory and applications. Springer, pp 267–299
Akbari R, Hedayatzadeh R, Ziarati K, Hassanizadeh B (2012) A multi-objective artificial bee colony algorithm. Swarm Evol Comput 2:39–52
Alabool HM, Alarabiat D, Abualigah L, Heidari AA (2021) Harris hawks optimization: a comprehensive review of recent variants and applications. Neural Comput Appl 33(15):8939–8980
Asafuddoula M, Ray T, Sarker R, Alam K (2012) An adaptive constraint handling approach embedded MOEA/D. In: 2012 IEEE congress on evolutionary computation, pp 1–8
Bao X, Jia H, Lang C (2019) A novel hybrid Harris Hawks optimization for color image multilevel thresholding segmentation. IEEE Access 7:76529–76546
Barshandeh S, Haghzadeh M (2021) A new hybrid chaotic atom search optimization based on tree-seed algorithm and Levy flight for solving optimization problems. Eng Comput 37(4):3079–3122
Bettemir ÖH, Sonmez R (2015) Hybrid genetic algorithm with simulated annealing for resource-constrained project scheduling. J Manag Eng 31(5):04014082
Birogul S (2019) Hybrid Harris Hawk optimization based on differential evolution (HHODE) algorithm for optimal power flow problem. IEEE Access 7:184468–184488
Chen P-H, Shahandashti SM (2009) Hybrid of genetic algorithm and simulated annealing for multiple project scheduling with multiple resource constraints. Autom Constr 18(4):434–443
Chen H, Tian Y, Pedrycz W, Wu G, Wang R, Wang L (2019) Hyperplane assisted evolutionary algorithm for many-objective optimization problems. IEEE Trans Cybern 50(7):3367–3380
Chen H, Heidari AA, Chen H, Wang M, Pan Z, Gandomi AH (2020a) Multi-population differential evolution-assisted Harris Hawks optimization: framework and case studies. Future Gener Comput Syst 111:175–198
Chen H, Jiao S, Wang M, Heidari AA, Zhao X (2020b) Parameters identification of photovoltaic cells and modules using diversification-enriched Harris Hawks optimization with chaotic drifts. J Clean Prod 244:118778
Cheng Q, Du B, Zhang L, Liu R (2019) ANSGA-III: a multiobjective endmember extraction algorithm for hyperspectral images. IEEE J Sel Top Appl Earth Obs Remote Sens 12(2):700–721
Coello CAC, Pulido GT, Lechuga MS (2004) Handling multiple objectives with particle swarm optimization. IEEE Trans Evol Comput 8(3):256–279
Corne DW, Jerram NR, Knowles JD, Oates MJ (2001) PESA-II: region-based selection in evolutionary multiobjective optimization. In: Proceedings of the 3rd annual conference on genetic and evolutionary computation, pp 283–290
Deb K, Agrawal S (1999) A niched-penalty approach for constraint handling in genetic algorithms. In: Artificial neural nets and genetic algorithms, pp 235–243
Deb K, Jain H (2013) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans Evol Comput 18(4):577–601
Deb K, Pratap A, Agarwal S, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evol Comput 6(2):182–197
Deb K, Thiele L, Laumanns M, Zitzler E (2005) Scalable test problems for evolutionary multiobjective optimization. In: Evolutionary multiobjective optimization. Springer, pp 105–145
Dhawale D, Kamboj VK, Anand P (2021) An improved chaotic Harris Hawks optimizer for solving numerical and engineering optimization problems. Eng Comput 1–46
Digalakis JG, Margaritis KG (2001) On benchmarking functions for genetic algorithms. Int J Comput Math 77(4):481–506
Durillo JJ, Nebro AJ, Luna F, Alba E (2008) Solving three-objective optimization problems using a new hybrid cellular genetic algorithm. In: International conference on parallel problem solving from nature, pp 661–670
Elarbi M, Bechikh S, Gupta A, Said LB, Ong Y-S (2017) A new decomposition-based NSGA-II for many-objective optimization. IEEE Trans Syst Man Cybern Syst 48(7):1191–1210
Erol OK, Eksin I (2006) A new optimization method: big bang-big crunch. Adv Eng Softw 37(2):106–111
Ewees AA, Abd Elaziz M (2020) Performance analysis of chaotic multi-verse Harris Hawks optimization: a case study on solving engineering problems. Eng Appl Artif Intell 88:103370
Fonseca CM, Fleming PJ (1998) Multiobjective optimization and multiple constraint handling with evolutionary algorithms. I. A unified formulation. IEEE Trans Syst Man Cybern Part A Syst Hum 28(1):26–37
Formato RA (2007) Central force optimization. Prog Electromagn Res 77(1):425–491
Gandomi AH, Yang X-S, Alavi AH (2011) Mixed variable structural optimization using firefly algorithm. Comput Struct 89(23–24):2325–2336
Gandomi AH, Yang X-S, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29(1):17–35
Gao Z-M, Zhao J (2019) An improved grey wolf optimization algorithm with variable weights. Comput Intell Neurosci 2019
Gao Y, An X, Liu J (2008) A particle swarm optimization algorithm with logarithm decreasing inertia weight and chaos mutation. In: 2008 international conference on computational intelligence and security, vol 1, pp 61–65
García S, Molina D, Lozano M, Herrera F (2009) A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization. J Heuristics 15(6):617–644
Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Simulation 76(2):60–68
Glover F, Laguna M (1998) Tabu search. In: Handbook of combinatorial optimization. Springer, pp 2093–2229
Gupta S, Deep K, Heidari AA, Moayedi H, Wang M (2020) Opposition-based learning Harris Hawks optimization with advanced transition rules: principles and analysis. Expert Syst Appl 158:113510
Hanafi R, Kozan E (2014) A hybrid constructive heuristic and simulated annealing for railway crew scheduling. Comput Ind Eng 70:11–19
Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization: algorithm and applications. Future Gener Comput Syst 97:849–872
Holland JH (1992a) Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control, and artificial intelligence. MIT Press, Cambridge
Holland JH (1992b) Genetic algorithms. Sci Am 267(1):66–73
Hussien AG, Amin M (2022) A self-adaptive Harris Hawks optimization algorithm with opposition-based learning and chaotic local search strategy for global optimization and feature selection. Int J Mach Learn Cybern 13(2):309–336
Islam MZ, Wahab NIA, Veerasamy V, Hizam H, Mailah NF, Guerrero JM, Mohd Nasir MN (2020) A Harris Hawks optimization based single- and multi-objective optimal power flow considering environmental emission. Sustainability 12(13):5248
Jangir P, Jangir N (2018) A new non-dominated sorting grey wolf optimizer (NS-GWO) algorithm: development and application to solve engineering designs and economic constrained emission dispatch problem with integration of wind power. Eng Appl Artif Intell 72:449–467
Jangir P, Heidari AA, Chen H (2021) Elitist non-dominated sorting Harris Hawks optimization: framework and developments for multi-objective problems. Expert Syst Appl 186:115747
Jia H, Lang C, Oliva D, Song W, Peng X (2019) Hybrid grasshopper optimization algorithm and differential evolution for multilevel satellite image segmentation. Remote Sens 11(9):1134
Jiao S, Chong G, Huang C, Hu H, Wang M, Heidari AA, Zhao X (2020) Orthogonally adapted Harris Hawks optimization for parameter estimation of photovoltaic models. Energy 203:117804
Karaboga D, Basturk B (2008) On the performance of artificial bee colony (ABC) algorithm. Appl Soft Comput 8(1):687–697
Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN'95-international conference on neural networks, vol 4, pp 1942–1948
Khan B, Johnstone M, Hanoun S, Lim CP, Creighton D, Nahavandi S (2016) Improved NSGA-III using neighborhood information and scalarization. In: 2016 IEEE international conference on systems, man, and cybernetics (SMC), pp 003033–003038
Khan B, Hanoun S, Johnstone M, Lim CP, Creighton D, Nahavandi S (2019) A scalarization-based dominance evolutionary algorithm for many-objective optimization. Inf Sci 474:236–252
Kinnear KE, Langdon WB, Spector L, Angeline PJ, O'Reilly U-M (1994) Advances in genetic programming, vol 3. MIT Press, Cambridge
Kirkpatrick S, Gelatt CD Jr, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680
Kukkonen S, Lampinen J (2009) Performance assessment of generalized differential evolution 3 with a given set of constrained multi-objective test problems. In: 2009 IEEE congress on evolutionary computation, pp 1943–1950
Li X, Gao L (2016) An effective hybrid genetic algorithm and Tabu search for flexible job shop scheduling problem. Int J Prod Econ 174:93–110
Li J-Q, Pan Q-K, Gao K-Z (2011) Pareto-based discrete artificial bee colony algorithm for multi-objective flexible job shop scheduling problems. Int J Adv Manuf Technol 55(9):1159–1169
Li K, Fialho A, Kwong S, Zhang Q (2013) Adaptive operator selection with bandits for a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evol Comput 18(1):114–130
Li C, Li J, Chen H, Jin M, Ren H (2021) Enhanced Harris Hawks optimization with multi-strategy for global optimization tasks. Expert Syst Appl 185:115499
Lin Q, Li J, Du Z, Chen J, Ming Z (2015) A novel multi-objective particle swarm optimization with multiple search strategies. Eur J Oper Res 247(3):732–744
Liu J, Yang Z, Li D (2020) A multiple search strategies based grey wolf optimizer for solving multi-objective optimization problems. Expert Syst Appl 145:113134
Liu J, Liu X, Wu Y, Yang Z, Xu J (2022) Dynamic multi-swarm differential learning Harris Hawks optimizer and its application to optimal dispatch problem of cascade hydropower stations. Knowl Based Syst 242:108281
Long W, Jiao J, Liang X, Tang M (2018a) An exploration-enhanced grey wolf optimizer to solve high-dimensional numerical optimization. Eng Appl Artif Intell 68:63–80
Long W, Jiao J, Liang X, Tang M (2018b) Inspired grey wolf optimizer for solving large-scale function optimization problems. Appl Math Model 60:112–126
Menesy AS, Sultan HM, Selim A, Ashmawy MG, Kamel S (2019) Developing and applying chaotic Harris Hawks optimization technique for extracting parameters of several proton exchange membrane fuel cell stacks. IEEE Access 8:1146–1159
Mirjalili S (2015) Moth-flame optimization algorithm: a novel nature-inspired heuristic paradigm. Knowl Based Syst 89:228–249
Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl Based Syst 96:120–133
Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
Mirjalili S, Saremi S, Mirjalili SM, Coelho LDS (2016) Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization. Expert Syst Appl 47:106–119
Mirjalili S, Jangir P, Mirjalili SZ, Saremi S, Trivedi IN (2017a) Optimization of problems with multiple objectives using the multi-verse optimization algorithm. Knowl Based Syst 134:50–71
Mirjalili S, Jangir P, Saremi S (2017b) Multi-objective ant lion optimizer: a multi-objective optimization algorithm for solving engineering problems. Appl Intell 46(1):79–95
Mirjalili SZ, Mirjalili S, Saremi S, Faris H, Aljarah I (2018) Grasshopper optimization algorithm for multi-objective optimization problems. Appl Intell 48(4):805–820
Mittal N, Singh U, Sohi BS (2016) Modified grey wolf optimizer for global engineering optimization. Appl Comput Intell Soft Comput 2016
Mladenović N, Hansen P (1997) Variable neighborhood search. Comput Oper Res 24(11):1097–1100
Nebro AJ, Luna F, Alba E, Dorronsoro B, Durillo JJ, Beham A (2008) AbYSS: adapting scatter search to multiobjective optimization. IEEE Trans Evol Comput 12(4):439–457
Zurück zum Zitat Nebro AJ, Durillo JJ, Luna F, Dorronsoro B, Alba E (2009) Mocell: a cellular genetic algorithm for multiobjective optimization. Int J Intell Syst 24(7):726–746MATH Nebro AJ, Durillo JJ, Luna F, Dorronsoro B, Alba E (2009) Mocell: a cellular genetic algorithm for multiobjective optimization. Int J Intell Syst 24(7):726–746MATH
Zurück zum Zitat Price K.V. (1996). Differential evolution: a fast and simple numerical optimizer. In: Proceedings of North American fuzzy information processing, pp 524–527 Price K.V. (1996). Differential evolution: a fast and simple numerical optimizer. In: Proceedings of North American fuzzy information processing, pp 524–527
Zurück zum Zitat Qu C, He W, Peng X, Peng X (2020) Harris hawks optimization with information exchange. Appl Math Model 84:52–75MathSciNetMATH Qu C, He W, Peng X, Peng X (2020) Harris hawks optimization with information exchange. Appl Math Model 84:52–75MathSciNetMATH
Zurück zum Zitat Rao RV, Savsani VJ, Vakharia D (2011) Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 43(3):303–315 Rao RV, Savsani VJ, Vakharia D (2011) Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 43(3):303–315
Zurück zum Zitat Rao RV, Savsani VJ, Vakharia D (2012) Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems. Inf Sci 183:11–15MathSciNet Rao RV, Savsani VJ, Vakharia D (2012) Teaching-learning-based optimization: an optimization method for continuous non-linear large scale problems. Inf Sci 183:11–15MathSciNet
Zurück zum Zitat Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) Gsa: a gravitational search algorithm. Inf Sci 179(13):2232–2248MATH Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) Gsa: a gravitational search algorithm. Inf Sci 179(13):2232–2248MATH
Zurück zum Zitat Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713 Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713
Zurück zum Zitat Storn R, Price K (1997) Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359MathSciNetMATH Storn R, Price K (1997) Differential evolution-a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11(4):341–359MathSciNetMATH
Zurück zum Zitat Talbi E-G (2009) Metaheuristics: from design to implementation, vol 74. Wiley, New YorkMATH Talbi E-G (2009) Metaheuristics: from design to implementation, vol 74. Wiley, New YorkMATH
Zurück zum Zitat Van Laarhoven PJ, Aarts EH (1987) Simulated annealing. In: Simulated annealing: theory and applications. Springer, pp 7–15 Van Laarhoven PJ, Aarts EH (1987) Simulated annealing. In: Simulated annealing: theory and applications. Springer, pp 7–15
Zurück zum Zitat Van Veldhuizen DA, Lamont GB (1998) Multiobjective evolutionary algorithm research: a history and analysis. Tech Rep. Citeseer Van Veldhuizen DA, Lamont GB (1998) Multiobjective evolutionary algorithm research: a history and analysis. Tech Rep. Citeseer
Zurück zum Zitat Wang, J.- S., & Li, S.- X. (2019) An improved grey wolf optimizer based on differential evolution and elimination mechanism. Sci Rep 9:11–21 Wang, J.- S., & Li, S.- X. (2019) An improved grey wolf optimizer based on differential evolution and elimination mechanism. Sci Rep 9:11–21
Zurück zum Zitat Wang R, Purshouse RC, Fleming PJ (2012) Preference-inspired coevolutionary algorithms for many-objective optimization. IEEE Trans Evol Comput 17(4):474–494 Wang R, Purshouse RC, Fleming PJ (2012) Preference-inspired coevolutionary algorithms for many-objective optimization. IEEE Trans Evol Comput 17(4):474–494
Zurück zum Zitat Wang S, Jia H, Abualigah L, Liu Q, Zheng R (2021) An improved hybrid Aquila optimizer and Harris hawks algorithm for solving industrial engineering optimization problems. Processes 2021(9):1551 Wang S, Jia H, Abualigah L, Liu Q, Zheng R (2021) An improved hybrid Aquila optimizer and Harris hawks algorithm for solving industrial engineering optimization problems. Processes 2021(9):1551
Zurück zum Zitat Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82 Wolpert DH, Macready WG (1997) No free lunch theorems for optimization. IEEE Trans Evol Comput 1(1):67–82
Zurück zum Zitat Wunnava A, Naik MK, Panda R, Jena B, Abraham A (2020) A differential evolutionary adaptive Harris Hawks optimization for two dimensional practical masi entropy-based multilevel image thresholding. J King Saud Univ Comput Inf Sci 34:3011–3024 Wunnava A, Naik MK, Panda R, Jena B, Abraham A (2020) A differential evolutionary adaptive Harris Hawks optimization for two dimensional practical masi entropy-based multilevel image thresholding. J King Saud Univ Comput Inf Sci 34:3011–3024
Zurück zum Zitat Xiang Y, Zhou Y, Liu H (2015) An elitism based multi-objective artificial bee colony algorithm. Eur J Oper Res 245(1):168–193 Xiang Y, Zhou Y, Liu H (2015) An elitism based multi-objective artificial bee colony algorithm. Eur J Oper Res 245(1):168–193
Zurück zum Zitat Xie H, Zhang L, Lim CP (2020) Evolving cnn-lstm models for time series prediction using enhanced grey wolf optimizer. IEEE Access 8:161519–161541 Xie H, Zhang L, Lim CP (2020) Evolving cnn-lstm models for time series prediction using enhanced grey wolf optimizer. IEEE Access 8:161519–161541
Zurück zum Zitat Yang X-S, Gandomi AH (2012) Bat algorithm: a novel approach for global engineering optimization. Eng Comput 29:464–483 Yang X-S, Gandomi AH (2012) Bat algorithm: a novel approach for global engineering optimization. Eng Comput 29:464–483
Zurück zum Zitat Yang X-S, Karamanoglu M, He X (2014) Flower pollination algorithm: a novel approach for multiobjective optimization. Eng Optim 46(9):1222–1237MathSciNet Yang X-S, Karamanoglu M, He X (2014) Flower pollination algorithm: a novel approach for multiobjective optimization. Eng Optim 46(9):1222–1237MathSciNet
Zurück zum Zitat Yang N, Tang Z, Cai X, Chen L, Hu Q (2022) Cooperative multi-population Harris Hawks optimization for many-objective optimization. Complex Intell Syst 8:3299–3332 Yang N, Tang Z, Cai X, Chen L, Hu Q (2022) Cooperative multi-population Harris Hawks optimization for many-objective optimization. Complex Intell Syst 8:3299–3332
Zurück zum Zitat Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102 Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102
Zurück zum Zitat Yi J, Bai J, He H, Peng J, Tang D (2018) ar-moea: a novel preference-based dominance relation for evolutionary multiobjective optimization. IEEE Trans Evol Comput 23(5):788–802 Yi J, Bai J, He H, Peng J, Tang D (2018) ar-moea: a novel preference-based dominance relation for evolutionary multiobjective optimization. IEEE Trans Evol Comput 23(5):788–802
Zurück zum Zitat Yin Q, Cao B, Li X, Wang B, Zhang Q, Wei X (2020) An intelligent optimization algorithm for constructing a dna storage code: Nol-hho. Int J Mol Sci 21:62191 Yin Q, Cao B, Li X, Wang B, Zhang Q, Wei X (2020) An intelligent optimization algorithm for constructing a dna storage code: Nol-hho. Int J Mol Sci 21:62191
Zurück zum Zitat Yu D, Hong J, Zhang J, Niu Q (2018) Multi-objective individualized-instruction teaching-learning-based optimization algorithm. Appl Soft Comput 62:288–314 Yu D, Hong J, Zhang J, Niu Q (2018) Multi-objective individualized-instruction teaching-learning-based optimization algorithm. Appl Soft Comput 62:288–314
Zurück zum Zitat Yuan J, Liu H-L, Gu F, Zhang Q, He Z (2020) Investigating the properties of indicators and an evolutionary many-objective algorithm using promising regions. IEEE Trans Evol Comput 25(1):75–86 Yuan J, Liu H-L, Gu F, Zhang Q, He Z (2020) Investigating the properties of indicators and an evolutionary many-objective algorithm using promising regions. IEEE Trans Evol Comput 25(1):75–86
Zurück zum Zitat Zhang J, Sanderson AC (2009) Jade: adaptive differential evolution with optional external archive. IEEE Trans Evol Comput 13(5):945–958 Zhang J, Sanderson AC (2009) Jade: adaptive differential evolution with optional external archive. IEEE Trans Evol Comput 13(5):945–958
Zurück zum Zitat Zhang X, Zheng X, Cheng R, Qiu J, Jin Y (2018) A competitive mechanism based multi-objective particle swarm optimizer with fast convergence. Inf Sci 427:63–76MathSciNet Zhang X, Zheng X, Cheng R, Qiu J, Jin Y (2018) A competitive mechanism based multi-objective particle swarm optimizer with fast convergence. Inf Sci 427:63–76MathSciNet
Zurück zum Zitat Zhang X, Zhao K, Niu Y (2020) Improved Harris Hawks optimization based on adaptive cooperative foraging and dispersed foraging strategies. IEEE Access 8:160297–160314 Zhang X, Zhao K, Niu Y (2020) Improved Harris Hawks optimization based on adaptive cooperative foraging and dispersed foraging strategies. IEEE Access 8:160297–160314
Zurück zum Zitat Zheng-Ming G , Juan Z, Yu-Rong H, Chen H-F (2019) The improved harris hawk optimization algorithm with the tent map. In: 2019 3rd International conference on electronic information technology and computer engineering (EITCE), pp 336–339 Zheng-Ming G , Juan Z, Yu-Rong H, Chen H-F (2019) The improved harris hawk optimization algorithm with the tent map. In: 2019 3rd International conference on electronic information technology and computer engineering (EITCE), pp 336–339
Zurück zum Zitat Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength pareto approach. IEEE Trans Evol Comput 3(4):257–271 Zitzler E, Thiele L (1999) Multiobjective evolutionary algorithms: a comparative case study and the strength pareto approach. IEEE Trans Evol Comput 3(4):257–271
Zurück zum Zitat Zitzler E, Künzli S (2004) Indicator-based selection in multiobjective search. In: International conference on parallel problem solving from nature, pp 832–842 Zitzler E, Künzli S (2004) Indicator-based selection in multiobjective search. In: International conference on parallel problem solving from nature, pp 832–842
Zurück zum Zitat Zitzler E, Laumanns M, Thiele L (2001) Spea2: improving the strength pareto evolutionary algorithm. TIK Report 103 Zitzler E, Laumanns M, Thiele L (2001) Spea2: improving the strength pareto evolutionary algorithm. TIK Report 103
Metadata
Title
Enhancing the Harris’ Hawk optimiser for single- and multi-objective optimisation
Authors
Yit Hong Choo
Zheng Cai
Vu Le
Michael Johnstone
Douglas Creighton
Chee Peng Lim
Publication date
27.07.2023
Publisher
Springer Berlin Heidelberg
Published in
Soft Computing / Issue 22/2023
Print ISSN: 1432-7643
Electronic ISSN: 1433-7479
DOI
https://doi.org/10.1007/s00500-023-08952-w
