
MMKE: Multi-trial vector-based monkey king evolution algorithm and its applications for engineering optimization problems

  • Mohammad H. Nadimi-Shahraki ,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Project administration, Resources, Software, Supervision, Validation, Visualization, Writing – original draft, Writing – review & editing

    nadimi@iaun.ac.ir, nadimi@ieee.org

    Affiliations Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran, Big Data Research Center, Najafabad Branch, Islamic Azad University, Najafabad, Iran, Centre for Artificial Intelligence Research and Optimisation, Torrens University Australia, Adelaide, Australia

  • Shokooh Taghian,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Validation, Visualization, Writing – original draft, Writing – review & editing

    Affiliations Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran, Big Data Research Center, Najafabad Branch, Islamic Azad University, Najafabad, Iran

  • Hoda Zamani,

    Roles Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Resources, Software, Writing – original draft, Writing – review & editing

    Affiliations Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran, Big Data Research Center, Najafabad Branch, Islamic Azad University, Najafabad, Iran

  • Seyedali Mirjalili,

    Roles Formal analysis, Methodology, Project administration, Resources, Supervision, Writing – review & editing

    Affiliations Centre for Artificial Intelligence Research and Optimisation, Torrens University Australia, Adelaide, Australia, Yonsei Frontier Lab, Yonsei University, Seoul, South Korea

  • Mohamed Abd Elaziz

    Roles Writing – review & editing

    Affiliation Department of Mathematics, Faculty of Science, Zagazig University, Zagazig, Egypt

Abstract

Monkey king evolution (MKE) is a population-based differential evolutionary algorithm in which the single evolution strategy and the control parameter affect the convergence and the balance between exploration and exploitation. Since evolution strategies have a considerable impact on the performance of algorithms, combining multiple strategies can significantly enhance their abilities. This is our motivation to propose a multi-trial vector-based monkey king evolution algorithm named MMKE. It introduces a novel best-history trial vector producer (BTVP) and random trial vector producer (RTVP) that can effectively collaborate with canonical MKE (MKE-TVP) using a multi-trial vector approach to tackle various real-world optimization problems with diverse challenges. It is expected that the proposed MMKE can improve the global search capability, strike a balance between exploration and exploitation, and prevent the original MKE algorithm from converging prematurely during the optimization process. The performance of MMKE was assessed using the CEC 2018 test functions, and the results were compared with eight metaheuristic algorithms. The experiments demonstrate that the MMKE algorithm produces competitive and, in many cases, superior results in terms of accuracy and convergence rate compared with the competing algorithms. Additionally, the Friedman test was used to analyze the experimental results statistically, showing that MMKE is significantly superior to the comparative algorithms. Furthermore, four real-world engineering design problems and the optimal power flow (OPF) problem for the IEEE 30-bus system are optimized to demonstrate MMKE’s real applicability. The results showed that MMKE can effectively handle the difficulties associated with engineering problems and is able to solve single and multi-objective OPF problems with better solutions than comparative algorithms.

1. Introduction

Metaheuristic algorithms are shown to be advantageous for addressing challenging optimization problems with diverse properties, including but not limited to high dimensionality, multimodality, and non-differentiability, in a reasonable time [1]. This has led to the widespread use of these algorithms and an increasing trend of developing new and improved algorithms. Such techniques can be considered approximate algorithms that offer a successful alternative for solving problems in polynomial time. The stochastic nature of these algorithms distinguishes them from deterministic methods and conventional optimization algorithms. Nature-inspired algorithms are a class of problem solvers that draw on natural phenomena to design new, robust, and competitive algorithms. Taking into account the No-Free-Lunch (NFL) theorem [2], which asserts that no single optimization algorithm can handle all problems of any complexity, numerous new optimization algorithms have been designed that are capable of handling the vast majority of optimization issues. Furthermore, the same algorithm can produce variable outcomes when applied to the same problem depending on the values of its parameters.

Metaheuristic algorithms can be categorized as either population-based or single solution-based. Single solution-based algorithms focus on exploiting and improving one solution in order to cross local optimal points. In contrast, population-based metaheuristics begin with a collection of solutions known as a population or swarm. They then construct a new population of solutions via an iterative process, which allows for the transmission of information between the solutions. In real-world problems, there is a strong likelihood that a single solution-based algorithm may become stuck in a local optimum. On the other hand, population-based algorithms are known to explore the search space extensively, with a lower incidence of becoming stuck locally. If a solution becomes trapped in a local optimum, further iterations help to escape it through the contribution of alternative solutions. As a downside, these algorithms are computationally costly and need more objective function evaluations.

Nature-inspired metaheuristic algorithms typically draw their inspiration from natural phenomena such as biological, physical, chemical, and geological principles [3]. Nature-inspired algorithms can be classified into three categories based on the source of inspiration [4]: evolution-based, swarm intelligence-based, and physics-based algorithms. Evolution-based algorithms tend to mimic the evolutionary behavior of creatures in nature. Prominent algorithms in this category include genetic programming (GP) [5], differential evolution (DE) [6], evolution strategy (ES) [7], genetic algorithm (GA) [8], and evolutionary programming (EP) [9]. Swarm-intelligence-based algorithms are inspired by the social and collective behavior of swarms in nature, such as colonies of bees and ants, animal herds, and flocks of birds. Some of the prevailing and recently introduced algorithms in this category are particle swarm optimization (PSO) [10], ant colony optimization (ACO) [11], artificial bee colony (ABC) [12], symbiotic organism search (SOS) [13], salp swarm algorithm (SSA) [14], squirrel search algorithm (SSA) [15], crow search algorithm (CSA) [16], grey wolf optimizer (GWO) [17], capuchin search algorithm (CapSA) [18], and Snake Optimizer (SO) [19].

The third category is physics-based algorithms that are derived from the fundamental physical laws existing in nature. Some well-known algorithms in this category are simulated annealing (SA) [20], big bang-big crunch (BB-BC) algorithm [21], gravitational search algorithm (GSA) [22], water evaporation optimization (WEO) [23], and Archimedes optimization algorithm (AOA) [24].

The monkey king evolution (MKE) [25] is one of the algorithms in the evolution-based category, inspired by a Chinese mythological novel. In this population-based algorithm, the population is guided by the best monkey king of the whole population. The MKE algorithm is prone to premature convergence and an insufficient balance between exploitation and exploration. These shortcomings originate from MKE’s evolution scheme (search strategy), which updates all monkey kings’ positions by considering only the best one and using a fixed control parameter. Moreover, the MKE algorithm has a single evolution strategy (search strategy) to deal with various kinds of problems, which results in inefficient performance when confronted with diverse issues. Thus, by incorporating multi-evolution strategies into the MKE algorithm, we aim to make it more effective in solving a wide variety of real-world optimization problems, which is the primary goal of this work.

As part of our prior work, we introduced a multi-trial vector (MTV) approach [26] to leverage a mix of evolution strategies and handle various issues. This approach comprised four components: winner-based distributing, multi-trial vector producing, evaluating and population updating, and life-time archiving. The MTV approach has the merits of introducing and combining multiple search strategies incorporated by defining distribution policies over the population, which increases the algorithms’ performance. Using the MTV approach can fulfill the need to define different strategies that can be adapted to various characteristics of problems in every stage of the search process to avoid local optima entrapment, prevent premature convergence, and strike an appropriate balance between exploitation and exploration. It is our motivation to use the merits of the MTV approach and a combination of three trial vectors to significantly improve the MKE algorithm’s performance and handle various complex real-world optimization issues.

In this paper, we propose an effective multi-trial vector-based monkey king evolution (MMKE) algorithm using the multi-trial vector (MTV) approach [26]. In the design of the multi-trial vector producing step of the MTV approach, a novel best-history trial vector producer (BTVP) and random trial vector producer (RTVP) are introduced to cooperate with canonical MKE (MKE-TVP). Each trial vector producer (TVP) is adjusted to maintain a particular search behavior during the process of solving different problems with diverse characteristics. Also, each TVP is applied to the section of the population dedicated to it. Through the winner-based MTV distribution strategy, the portion of the population devoted to each TVP is adjusted at consistent intervals depending on the number of individuals that TVP improved. The integration of different evolution strategies in the MTV approach can improve the balance between exploration and exploitation, prevent premature convergence, and avoid local and deceptive optima. As part of the validation of the proposed MMKE algorithm, experiments on 29 test functions taken from the CEC 2018 special session on real-parameter optimization [27] were conducted. The results were compared with state-of-the-art evolutionary and swarm intelligence algorithms, and a statistical analysis was performed. Additionally, the applicability of MMKE was demonstrated by solving engineering problems. Based on the comparisons and statistical analyses, the MMKE algorithm has proven to be superior to the comparative algorithms.

The general methodology used in this research includes problem modeling and mathematical formulation, algorithmic design and development, performance assessment and comparison, and deployment for real-world applications. Regarding problem modeling and mathematical formulation, optimization is to find the best solution(s) from a collection of solutions that minimize (or maximize) an objective function and adhere to a number of constraints. The task of optimization can be mathematically formulated as a search to find X*, which minimizes the objective function, i.e., f(X*) ≤ f(X) for all X ∈ Ω, where Ω is a non-empty large finite set as the domain of the search. Next, in the algorithmic design and development, the proposed MMKE algorithm’s methodology consists of initializing, winner-based distributing, controlling the stopping criterion, multi-trial vector producing, evaluating and population updating, and archiving.
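For reference, the minimization task above can be restated compactly as a math block (no assumptions beyond the in-text description):

```latex
\min_{X \in \Omega} f(X)
\quad\Longleftrightarrow\quad
\text{find } X^{*} \in \Omega \ \text{such that}\ f(X^{*}) \le f(X) \quad \forall\, X \in \Omega .
```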

The performance assessment in this research follows common practice in studies on metaheuristics and is both qualitative and quantitative. In the qualitative assessment, a visual analysis of MMKE is provided along two directions: trial vector impact analysis and exploration and exploitation behavior analysis. Then, MMKE’s performance was evaluated using a quantitative analysis including exploration and exploitation, local optima avoidance, and convergence evaluation compared to eight state-of-the-art metaheuristic algorithms. Finally, deployment for real-world applications includes five engineering problems that were used to further study the MMKE algorithm’s potential to address real-world engineering difficulties.

The contributions of this study can be summarized as follows:

  • Amending the MKE algorithm’s evolution scheme with the multi-trial vector (MTV) approach to enhance the original MKE’s performance.
  • Dividing the MKE’s population into the number of sub-populations based on the winner-based distribution policy and using an exclusive TVP for each sub-population to guide the individuals.
  • Proposing new evolution strategies by introducing two new trial vector producers.
  • Using a combination of best-history trial vector producer (BTVP) and random trial vector producer (RTVP) in conjunction with canonical MKE (MKE-TVP) to improve global search capability, reducing the risk of trapping in the local optima and preventing premature convergence of the original MKE.

Based on the mentioned deficiencies of the MKE algorithm and the merits of the MTV approach, the research hypothesis is stated as follows: the performance of the MKE algorithm, in terms of the accuracy of the gained results, avoidance of local optima trapping, balanced exploitation and exploration, and prevention of premature convergence, can be increased and enhanced using the MTV approach and effective trial vector producers.

The remainder of this article is structured in such a way that Section 2 summarizes relevant works. Section 3 contains the MKE algorithm’s mathematical model and flowchart. The proposed MMKE algorithm is detailed in Section 4. Section 5 discusses the MMKE’s qualitative and quantitative analysis, whereas Section 6 discusses the MMKE’s statistical analysis. MMKE’s applicability for solving real engineering design problems is assessed in Section 7. Finally, Section 8 concludes research findings and recommends further studies.

2. Related work

Nature-inspired algorithms have become popular choices to solve a wide variety of optimization issues in diverse areas such as engineering [28–34], image processing and segmentation [35–37], global optimization [38–45], software fault prediction [46], scheduling [47–50], photovoltaic modeling [51–54], structural design problems [55–59], power and energy management [60–62], planning and routing problems [63–65], power take off and placements of wave energy converters [66, 67], power consumption [68, 69], and wind speed prediction [70, 71]. Although the majority of nature-inspired algorithms are proposed to solve continuous problems, there have been various methods to adapt these algorithms to solve problems with discrete nature [72]. Numerous real-world issues have been solved using the adapted methods, including feature selection [73–79], clustering and community detection [80–84], and medical diagnosis [85–87].

In nature-inspired algorithms, a population of individuals searches different regions of the solution space cooperatively by applying a search mechanism derived from natural phenomena. These algorithms possess two substantial aspects: exploration and exploitation. Exploration is related to an optimization algorithm’s capability to explore diverse areas of the search space on a global scale. In contrast, exploitation refers to an algorithm’s capability to identify solutions that are close to the optimal solution in promising regions. An excessive level of exploration can lead to a decrease in the probability of finding the optimal solution. At the same time, too much exploitation can cause the algorithm to be trapped in local optima [88]. Thus, a proper balance between exploration and exploitation is essential to enhance the search ability, avoid falling into local optima, and achieve a reasonable solution.

Although these aspects are considered in the design of metaheuristic algorithms, different behaviors can occur when they confront various optimization problems. Thus, different alterations are also performed on the canonical versions of nature-inspired algorithms to solve optimization problems with different characteristics and complexities. For instance, the imbalance between exploration and exploitation in the GWO algorithm is improved in the gaze cues learning-based grey wolf optimizer (GGWO) [89]. The same issue is resolved by defining a hybrid phase in the MFO algorithm in the improved moth-flame optimization (IMFO) variant [90]. Likewise, the search ability of the basic EPO algorithm is improved for color image segmentation in improved emperor penguin optimization (IEPO) [91] by using the levy flight, Gaussian mutation, and opposition-based learning. Higher performance and quick convergence are achieved by a new parameter-adaptive DE (PaDE) algorithm [92] while solving numerical optimization problems.

In stochastic search algorithms, the quality of obtained solutions depends on various aspects, such as the search strategy, the adjustment of the parameters, and the constraint handling of the problem. As shown in Fig 1, metaheuristic algorithms can be classified, based on the number of strategies used during the search process, into single search strategy algorithms and multi-search strategy algorithms. Single search strategy algorithms may not be able to find an appropriate solution for problems with a complex search space because the search must adapt to changes in the landscape during the optimization process. In multi-search strategy algorithms, the constituent strategies have different characteristics and capabilities, such as exploration, exploitation, and maintaining diversity. Thus, an effective algorithm that combines different strategies has the potential to deal with diverse kinds of optimization problems. Moreover, it is beneficial to use different search techniques to increase the probability of locating the optimal solution for a sophisticated optimization issue with a complex search space.

Fig 1. The classification of single and multi-search strategies.

https://doi.org/10.1371/journal.pone.0280006.g001

As shown in Fig 1, the algorithms containing multi-search strategies can apply them to the whole population or to individual sub-populations. In the first category, all the strategies are applied to the whole population to discover the survivors for the following iteration. To be more specific, algorithms such as SaDE [93], CoDE [94], SL-PSO [95], I-GWO [96], and MSCA [97] apply multi-search strategies to the whole population of individuals. This is an effective way to deal with different problems but computationally expensive, as it needs to evaluate the fitness value of multiple produced candidates. Qin et al. developed a self-adaptive DE (SaDE) that effectively uses two mutation strategies concurrently by adjusting the control parameters F and CR in accordance with prior knowledge. Wang et al. suggested a composite DE (CoDE) that includes three trial vector techniques and three control parameter settings. In order to create new vectors, each individual is assigned a new search strategy and parameter selection at random. Cheng and Jin proposed a social learning PSO (SL-PSO) in which all the particles except the best one use a social learning mechanism to learn from particles with better objective values in the current fitness-sorted population. In [97], a multi-strategy enhanced sine cosine algorithm named MSCA is proposed, in which four search mechanisms are applied to the search agents, and in each iteration the best candidate solution replaces the previous position. In [96], I-GWO was proposed by introducing a new search strategy named dimension learning-based hunting (DLH). The I-GWO enhances diversity, strikes a balance between exploration and exploitation, and copes with the premature convergence of GWO. The wolves’ new position is selected between the GWO and the DLH search strategies according to the quality of the obtained solutions.

In the second category, the main population is split into several small sub-populations, each of which is updated according to its assigned search strategy and control parameters to generate candidate solutions. This is a potential way to improve optimization performance, since each sub-population can be responsible for exploring, exploiting, maintaining diversity, or reducing the probability of becoming trapped in local optima. In the literature, some algorithms were developed with multiple sub-populations, each of which uses a different search strategy. In [98], a multi-swarm cooperative particle swarm optimizer named MCPSO was proposed, in which the population contains a master swarm and several slave swarms. The particles of the master swarm move according to the knowledge of both the master swarm and the slave swarms, while the particles of the slave swarms move based on the independent execution of PSO variants. In this algorithm, the slave swarms can preserve diversity, and the interaction of these two kinds of swarms affects the balance between exploration and exploitation. In [99], multi-strategy ensemble particle swarm optimization (MEPSO) was proposed, in which the particles are divided into two parts, I and II. Each part has a distinctive role in the search process; the Gaussian local search and differential mutation strategies are utilized in combination with the traditional PSO search algorithm for each part. The investigation revealed that the method utilized in part I can improve the algorithm’s convergence, while the other part can improve the algorithm’s exploration capability and avoidance of local optima.

EPSDE was suggested in [100]; it comprises a pool of mutation and crossover strategies and a pool of values for each associated parameter that strive to develop better candidate solutions. EPSDE has proven to be competitive on a variety of optimization problems due to its proper moves in the search space. In [101], MPEDE, a multi-population based DE, was proposed in which the population is partitioned into three sub-populations, each using a different mutation and update strategy. The constituent mutation strategies, selected from the literature, are competitive in solving unimodal and multimodal optimization problems and perform better exploration. The experimental results on the benchmark functions demonstrated that the proposed algorithm outperformed the other DE variants. An adaptive multi-population differential evolution (AMPDE) algorithm was proposed in [102], in which the size of the sub-populations was adaptively altered considering information gathered from prior search knowledge. Individuals from each sub-population were modified in accordance with the crossover operator assigned to them from GAs in order to create perturbed vectors. In [103], an adaptive DE with dynamic population reduction named sTDE-dR was proposed. The entire population was clustered into multiple tribes with different sizes, and each tribe has different mutation and crossover strategies. The experimental results showed the robustness of the proposed algorithm in comparison to the other comparative algorithms. Building on the developed MTV approach, [26] proposed a multi-trial vector-based differential evolution algorithm named MTDE. Unlike previous algorithms in which the population was divided into multiple smaller sub-populations, in the MTDE algorithm the whole population is divided into three sub-populations with different sizes based on the defined distribution policy. The aim of using the combination of TVPs in the proposed algorithm is to maintain population diversity, balance exploration and exploitation, and enhance the local search ability.

3. Monkey King Evolution (MKE) algorithm

The monkey king evolution algorithm (MKE) is a simple evolutionary algorithm inspired by the monkey king, a character of a Chinese mythological novel named “Journey to the West”. In a tough situation, the monkey king can transform into a number of small monkeys, each of which tries to find and report a solution. Then, the monkey king selects the most suitable solution for the trouble. In the MKE algorithm, N monkey kings are randomly distributed in the search space with Dim dimensions. The monkey kings’ positions and the ith monkey king are denoted by matrix X and its ith row Xi = {xi,1, xi,2, …, xi,Dim}. Then, in each generation, the monkey kings’ fitness is calculated to determine gbest, the individual with the best fitness value, and the matrix Xgbest is built in which every row is a replica of the gbest position.

The MKE uses an evolution scheme to update the monkey kings’ positions in which, first, two different matrices Xr1 and Xr2 are constructed by permuting the row vectors of the matrix X. Next, the matrix of mutated monkeys denoted by B is built using Eq (1), where FC is the fluctuation coefficient parameter with a constant value of 0.7.

B = Xgbest + FC × (Xr1 − Xr2)    (1)

Then, the evolved monkeys are calculated by Eq (2), where M is a transformation matrix and M̄ is the binary inverse of M. The matrix M is generated from a lower triangular matrix with its elements set to one. After that, the elements of each row of the matrix are randomly permuted, and then the sequence of row vectors of the matrix is separately permuted.

Xevolved = M ⊙ X + M̄ ⊙ B    (2)

Finally, as shown in Fig 2, in the 7th line of the pseudo-code of MKE, each evolved monkey’s fitness value is calculated, based on which either the individual at its current position or its evolved trial vector is selected to survive to the next generation.

Although the benchmarked results show that the MKE algorithm’s performance is sufficient in comparison to some PSO-based variants, it has shortcomings, including premature convergence and inadequate exploration/exploitation balance. These defects originate from updating all monkey kings’ positions using the MKE’s evolution scheme based on the gbest.
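To make this update concrete, the following is a minimal Python sketch of one MKE generation under the reconstructed Eqs (1) and (2); the paper's own implementation is in MATLAB, and function and variable names here are illustrative assumptions:

```python
import numpy as np

def mke_generation(X, fitness, fc=0.7, rng=None):
    """One generation of the canonical MKE evolution scheme (Eqs 1-2):
    mutate around gbest and recombine through a random binary matrix M."""
    rng = rng or np.random.default_rng()
    n, dim = X.shape
    f = np.apply_along_axis(fitness, 1, X)
    gbest = X[np.argmin(f)]                       # best monkey king
    Xr1 = X[rng.permutation(n)]                   # row-permuted copies of X
    Xr2 = X[rng.permutation(n)]
    B = gbest + fc * (Xr1 - Xr2)                  # Eq (1): mutated monkeys
    # Transformation matrix M: lower-triangular ones, elements of each row
    # permuted, then the rows themselves shuffled (as described above).
    M = np.tril(np.ones((n, dim)))
    M = np.apply_along_axis(rng.permutation, 1, M)
    M = M[rng.permutation(n)]
    U = M * X + (1 - M) * B                       # Eq (2): evolved monkeys
    fu = np.apply_along_axis(fitness, 1, U)       # greedy survivor selection
    return np.where((fu < f)[:, None], U, X)

# Example: one step on a 5-monkey population of a 10-D sphere function.
pop = np.random.uniform(-100, 100, (5, 10))
pop = mke_generation(pop, lambda x: float(np.sum(x**2)))
```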

4. Multi-trial vector-based monkey king evolution (MMKE) algorithm

Solving different optimization problems with various characteristics such as uni/multi-modality, (non)separability, (a)symmetry [27, 104] requires suitable search strategies. Furthermore, maintaining an equilibrium between exploration and exploitation prevents premature convergence and stagnation and provides a higher level of population diversity. Then, the effectiveness of a metaheuristic algorithm in solving optimization problems depends on selecting an appropriate search strategy and setting its parameters. On the other hand, based on the no-free-lunch (NFL) theorem, there is no general-purpose search strategy to cope with optimization problems and different strategies are required for solving diverse problems [2]. These considerations led us to propose an improved variant of the MKE algorithm named multi-trial vector-based monkey king evolution (MMKE) algorithm to tackle its insufficiencies. MKE’s shortcomings include a premature convergence to local optima, an improper balance between exploration and exploitation, and a fixed control parameter. These issues suppress the MKE algorithm’s ability to handle different complicated problems.

As shown in Fig 3, the proposed MMKE algorithm’s framework consists of five main phases: initializing, winner-based distributing, multi-trial vector producing of MMKE, evaluating and population updating, and archiving. Each of these phases is explained in the following paragraphs.

Table 1 provides a nomenclature to show the parameters’ description used in the following sections.

The simple evolution scheme of the MKE algorithm is substituted by the MTV approach to boost its performance using different evolution strategies for solving various optimization problems. Adapting the MTV approach enables the MKE algorithm to use a varying number of trial vector producers (TVPs) as required to achieve a particular behavioral outcome. Moreover, another advantage of using the MTV approach is to dedicate a portion of the whole population based on the winner-based distribution policy to each TVP. Furthermore, the population exchange between TVPs can enhance information sharing between monkeys and maintain diversity. The proposed MMKE algorithm uses three different evolution strategies integrated with a random fluctuation coefficient for enhancing the global search ability, reducing the probability of trapping in local optima, and preventing the original MKE’s premature convergence.

The proposed MMKE algorithm’s flowchart is depicted in Fig 4, consisting of initializing, winner-based distributing, multi-trial vector producing, evaluating and population updating, and archiving. In the initializing step, N monkeys are randomly distributed and the fitness of the initial population is evaluated. Then, every ngen generations, in the winner-based distributing step, the sub-population size of each TVP is determined by considering the reward rule distribution policy. Next, three trial vector producers, canonical MKE (MKE-TVP), best-history trial vector producer (BTVP), and random trial vector producer (RTVP), cooperate in the multi-trial vector producing step to guide the monkeys over the search space. MMKE’s ability to detect promising regions when solving different problems is significantly facilitated when these TVPs are combined. Then, in the evaluating and population updating step, the monkeys’ current positions are updated after calculating the fitness of the evolved monkeys. In the archiving step, the inferior monkeys are archived to use their knowledge in the TVPs. The step-wise procedure of the proposed MMKE algorithm is explained as follows.

Initializing step: N monkeys are randomly spread within the predefined range [l, u] using Eq (3).

xi,j = lj + rand × (uj − lj),  i = 1, …, N,  j = 1, …, Dim    (3)

Where xi,j is the position of the ith monkey king in the jth dimension, lj and uj are the jth dimension’s lower and upper bounds, and rand represents a random value between 0 and 1. The positions of the N monkeys are stored in matrix X, which is an N×Dim matrix. The fitness value of monkey Xi in the tth generation is calculated by f(Xi(t)).

The winner-based distributing step: The whole run is divided into k portions, each comprising ngen generations. The first step of each portion is to select the best TVP, i.e., the producer with the highest rate of improvement over the previous ngen generations. Therefore, the improved rate of each TVP, IRZ-TVP (Z represents one of the TVPs), is calculated by dividing the number of improved monkeys by the number of function evaluations in the previous portion using Eq (4).

IRZ-TVP = (number of monkeys improved by Z-TVP) / (number of function evaluations used by Z-TVP in the previous portion)    (4)

After determining the improved rate of each TVP, the size of each TVP’s sub-population is calculated for the next ngen generations using the reward rule distribution policy defined in Eq (5),

(5)

where N is the number of monkeys, NZ-TVP is the size of the sub-population considering the TVPs’ improved rates, and λ = 0.25.
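As an illustration, the Python sketch below implements Eq (4) and a plausible winner-based allocation. Because Eq (5) is not reproduced above, the exact reward rule used here (each losing TVP receives λ·N monkeys and the winner receives the remainder) is an assumption, and all names are illustrative:

```python
def subpopulation_sizes(improved, evaluations, n, lam=0.25):
    """Winner-based distributing sketch. `improved[z]` and `evaluations[z]`
    count, for TVP z, its improved monkeys and the function evaluations it
    used in the previous portion. Eq (4): IR_z = improved_z / evaluations_z.
    Assumed reward rule for Eq (5): losers get lam*n monkeys, winner the rest."""
    ir = {z: improved[z] / max(evaluations[z], 1) for z in improved}   # Eq (4)
    winner = max(ir, key=ir.get)
    sizes = {z: int(round(lam * n)) for z in improved}
    sizes[winner] = n - sum(s for z, s in sizes.items() if z != winner)
    return sizes

# Example: BTVP improved the largest share of its monkeys, so it is rewarded
# with the largest sub-population for the next ngen generations.
print(subpopulation_sizes({"MKE-TVP": 5, "BTVP": 30, "RTVP": 12},
                          {"MKE-TVP": 100, "BTVP": 100, "RTVP": 100}, n=100))
```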

Multi-trial vector producing step: In each generation, monkey Xi is moved by one of the three different TVPs: MKE-TVP, BTVP, or RTVP. The MKE-TVP facilitates exploitation capability by enabling individuals to search for new solutions in their locality or immediate vicinity. Exploitation and escape from local optima are handled using BTVP, whereas RTVP is designed to balance exploration and exploitation. Once each TVP mutates its dedicated sub-population, the evolved vector of the monkeys is produced using the M and M̄ matrices by Eq (6),

Ui = M ⊙ Xi + M̄ ⊙ Vi    (6)

where UiMpop, UiBpop, and UiRpop indicate the produced candidate solutions for XiMpop, XiBpop, and XiRpop, the ith monkeys of the MKE-TVP, BTVP, and RTVP sub-populations, and ViMpop, ViBpop, and ViRpop indicate the mutated vectors generated for the ith monkey of the MKE-TVP, BTVP, and RTVP sub-populations, respectively.

Monkey king evolution trial vector producer (MKE-TVP): As mentioned in the preceding section, in each generation the best monkey from X is considered as gbest and preserved in gbestpop. Next, to move the monkey Xi from the XMpop sub-population, the constant fluctuation coefficient (FC = 0.7) is multiplied by the difference of the two randomly selected monkeys Xr1Mpop and Xr2Mpop. Finally, the MKE-TVP generates the mutated vector ViMpop by Eq (7).

ViMpop = gbestpop + FC × (Xr1Mpop − Xr2Mpop)    (7)

Where ViMpop is the mutated vector for ith monkey of XMpop, gbestpop indicates the best monkey from X, and Xr1Mpop and Xr2Mpop are two randomly selected monkeys from XMpop sub-population.

As shown in Fig 5, this strategy is more explorative at the earlier stages of the evolution, becomes more exploitative at the later stages of the optimization, and produces well-distributed solutions around the best monkey. Since the strategy used is the same as the MKE evolution scheme, it can produce nearly identical monkeys throughout the generations, resulting in undesirable premature convergence.

Best-history trial vector producer (BTVP): Despite the widespread use of the best monkey in MKE-TVP to help fast convergence, it might suffer from the premature convergence problem. Therefore, in BTVP, we aim to use the top best monkeys to direct the population’s evolution rather than the single global best. For this purpose, the best-history archive is designed to keep the M most recent best monkeys. At first, the best-history archive is initialized with the gbest vector, and then the best vector of each subsequent generation is added to the best-history archive if it is not already present. If the best-history archive has no empty entry, the entry with the worst fitness value is replaced by the current best. In each generation, a matrix XBHpop, which has NBTVP rows and Dim columns, is created by repeating the best-history archive (NBTVP/M) times. Then, the mutated vector ViBpop is produced by Eq (8),

ViBpop = XiBHpop + C × (Xr1Bpop − Xr2Bpop)    (8)

where ViBpop is the mutated vector for the ith monkey of XBpop, XiBHpop is the ith row of the best-history population, and Xr1Bpop and Xr2Bpop are two randomly selected individuals from XBpop. Parameter C is a decreasing coefficient [105] computed by Eq (9),

(9)

where α and β are the initial and final values of parameter C, MaxGen and gen indicate the maximum number of generations and the current generation, and μ is a dimension-dependent value. This strategy is less greedy than the MKE-TVP and prevents local optima trapping. The pseudo-code of the BTVP is shown in Fig 6.
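The snippet below sketches the best-history bookkeeping and the BTVP mutation in Python; the replacement rule for a full archive and the exact form of Eq (8) follow the description above but are reconstructions, and all names are illustrative:

```python
import numpy as np

class BestHistory:
    """Best-history archive for BTVP: keeps the M most recent distinct best
    monkeys; when full, the worst entry is replaced (sketch of the rule above)."""
    def __init__(self, m):
        self.m, self.entries = m, []               # list of (fitness, vector)

    def add(self, fit, vec):
        if any(np.array_equal(vec, v) for _, v in self.entries):
            return                                 # already stored
        if len(self.entries) < self.m:
            self.entries.append((fit, vec.copy()))
        else:
            worst = max(range(self.m), key=lambda i: self.entries[i][0])
            if fit < self.entries[worst][0]:
                self.entries[worst] = (fit, vec.copy())

def btvp_mutate(X_bpop, history, C, rng=None):
    """Assumed BTVP mutation (Eq 8): a best-history monkey perturbed by a
    scaled difference of two random monkeys from the BTVP sub-population."""
    rng = rng or np.random.default_rng()
    n = len(X_bpop)
    bh = np.array([v for _, v in history.entries])   # assumes gbest was added
    X_bh = bh[np.arange(n) % len(bh)]                # repeat to N_BTVP rows
    r1, r2 = rng.integers(0, n, n), rng.integers(0, n, n)
    return X_bh + C * (X_bpop[r1] - X_bpop[r2])
```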

Random trial vector producer (RTVP): RTVP is proposed to prevent premature convergence and keep exploration and exploitation in balance. In fact, these problems are caused by the inability of the evolution strategies to generate newly evolved vectors and the failure to produce new promising monkeys. Therefore, the last TVP uses the difference between the current monkey and a random one from its sub-population (XRpop), together with the difference between a random monkey from XRpop and one from the combination of the archive and the whole population (XAllpop). Then, the mutated vector ViRpop is produced by Eq (10).

ViRpop = XiRpop + Fi × (Xr1Rpop − XiRpop) + Fi × (Xr2Rpop − XjAllpop)    (10)

Where ViRpop is the mutated vector for the ith monkey of XRpop, Xr1Rpop and Xr2Rpop are two randomly selected individuals from XRpop, XiRpop is the ith monkey of XRpop, and XjAllpop is a randomly selected solution from the union of X and the archive. As explained in the following, Fi is a scale factor generated from a Cauchy distribution.

Instead of the constant FC, for each monkey a random number generated from a Cauchy distribution [106] is used as the scale factor, Fi = randci(μf, σ), where μf is the mean value of improved scale factors, initialized to 0.5, and σ = 0.2. The value of Fi must lie within the range (0, 1]: if Fi is greater than 1, it is truncated to 1, and if it is not positive, it is regenerated. The mean μf remains unchanged if there is no monkey with improved fitness in the population; however, if monkeys with enhanced fitness exist, μf is updated using the weighted Lehmer mean by Eq (11).

μf = ΣF∈Sf (wf × F²) / ΣF∈Sf (wf × F)    (11)

Where Sf is the set of all scale factors of monkeys Xi for which f(Xi(t+1)) < f(Xi(t)), and the weight wfi is calculated by Eq (12), where Δfi = f(Xi(t)) − f(Xi(t+1)).

wfi = Δfi / Σk=1..|Sf| Δfk    (12)

This TVP has a top priority for exploring and leading the search to the global optimum. Differences between random solutions are utilized to balance exploration and exploitation and preserve diversity throughout the optimization. The RTVP’s pseudo-code is shown in Fig 7.
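A compact Python sketch of the RTVP machinery follows: Cauchy sampling of Fi, the weighted Lehmer mean of Eqs (11)-(12), and the mutation of Eq (10) as reconstructed above. The equation form and all names are illustrative assumptions:

```python
import numpy as np

def sample_scale_factor(mu_f, sigma=0.2, rng=None):
    """Cauchy-distributed scale factor Fi = randc(mu_f, sigma): truncated to 1
    from above, regenerated when non-positive (as described above)."""
    rng = rng or np.random.default_rng()
    while True:
        f = mu_f + sigma * np.tan(np.pi * (rng.random() - 0.5))  # Cauchy sample
        if f > 1:
            return 1.0
        if f > 0:
            return f

def update_mu_f(successful_f, delta_f):
    """Weighted Lehmer mean of successful scale factors (Eqs 11-12)."""
    w = np.asarray(delta_f) / np.sum(delta_f)           # Eq (12)
    f = np.asarray(successful_f)
    return np.sum(w * f**2) / np.sum(w * f)             # Eq (11)

def rtvp_mutate(X_rpop, X_allpop, F, rng=None):
    """Assumed RTVP mutation (Eq 10): the current monkey moved by its
    difference to a random sub-population monkey plus the difference between
    another random sub-population monkey and a member of X union archive."""
    rng = rng or np.random.default_rng()
    n = len(X_rpop)
    r1, r2 = rng.integers(0, n, n), rng.integers(0, n, n)
    j = rng.integers(0, len(X_allpop), n)
    return X_rpop + F[:, None] * (X_rpop[r1] - X_rpop) \
                  + F[:, None] * (X_rpop[r2] - X_allpop[j])
```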

Evaluating and population updating: After the evolution of one generation of monkeys, the evolved monkeys’ fitness values are assessed and compared with those of the previous generation, and the better monkeys survive and are allowed to participate in the next generation.

Archiving: During each generation, the inferior monkeys, i.e., the solutions that were replaced by their candidates, possess important information about the search space’s potential areas. Thus, it is useful to save and distribute their knowledge in order to advise future generations of monkeys. To prevent earlier inferior monkeys from residing in the archive indefinitely, each of them has a lifetime variable that records how long it has been in the archive. At the end of each generation, the inferior monkeys are added to the archive, and the lifetime of the existing entries is increased by one. The number of monkeys in the archive must not exceed N; if it does, inferior monkeys with longer lifetimes are removed in random order. Using the archive throughout the evolution of the monkeys makes it possible to retain a high level of diversity in both simple and complex problems. The proposed MMKE algorithm’s pseudo-code is demonstrated in Fig 8.
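The life-time archiving rule can be sketched as follows in Python; this is a minimal illustration of the aging and random-removal policy described above, with illustrative names:

```python
import numpy as np

def update_archive(archive, lifetimes, inferior, max_size, rng=None):
    """Life-time archiving sketch: age existing entries, append this
    generation's replaced (inferior) monkeys, and, if the archive exceeds
    max_size, randomly drop entries among the oldest ones."""
    rng = rng or np.random.default_rng()
    lifetimes = [t + 1 for t in lifetimes]            # age existing entries
    archive = archive + [m.copy() for m in inferior]  # add new inferior monkeys
    lifetimes = lifetimes + [0] * len(inferior)
    while len(archive) > max_size:
        oldest = max(lifetimes)
        candidates = [i for i, t in enumerate(lifetimes) if t == oldest]
        drop = int(rng.choice(candidates))            # random among the oldest
        archive.pop(drop)
        lifetimes.pop(drop)
    return archive, lifetimes
```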

4.1 The computational complexity of MMKE

As shown in Fig 8, the main steps of the MMKE algorithm are initializing, winner-based distributing, multi-trial vector producing, and evaluating and population updating. All N monkeys are distributed in the D-dimensional search space in the first step with computational complexity O(ND). Then, the complexity of one iteration of the while-loop (lines 7–26), including the winner-based distributing step (line 11) and the multi-trial vector producing and evaluating steps, is O(2N + NMKE-TVP·D + NBTVP·D + NRTVP·D). Because N = NMKE-TVP + NBTVP + NRTVP, the complexity of one iteration of the while-loop is O(2N + ND). The cost of creating XBHpop (line 24) is O(N), so the evolution’s complexity over all G generations is O(ND + G(2N + ND)). The overall computational complexity of the MMKE algorithm is therefore O(ND + 2GN + GND), i.e., O(GND).

5. Experimental evaluation and results

Various experiments were designed to evaluate our proposed algorithm’s performance by using different problems of the CEC 2018 test suite [27]. First, a visual analysis of MMKE is provided along two directions: trial vector impact analysis and exploration and exploitation behavior analysis. These experiments aimed to demonstrate the impact, convergence behavior, and explorative and exploitative tendencies of the proposed trial vectors during the search process. Second, MMKE’s performance was evaluated using a quantitative analysis including exploration and exploitation, local optima avoidance, and convergence evaluation in comparison to eight state-of-the-art metaheuristic algorithms.

5.1 Benchmark test functions and experimental environment

As part of this study, 29 benchmark functions from the CEC 2018 test suite [27] are used in order to evaluate the proposed MMKE algorithm. There are four kinds of functions in this test suite: unimodal, simple multimodal, hybrid, and composition functions. It is imperative to analyze the algorithm’s exploitative capacity as well as its convergence behavior on problems where there is only one optimal solution. Unimodal functions (Func 1, Func 3) particularly serve this purpose. In addition, it is possible to test the algorithm’s exploratory and local optima avoidance abilities using multimodal functions (Func 4–Func 10), which have more than one local optimum. Considering the importance of striking a balance between exploration and exploitation when solving real-world issues, hybrid (Func 11–Func 20) and composition (Func 21–Func 30) functions are suitable for benchmarking this capability. MMKE was developed in the Matlab R2018a programming environment, and all experiments were run on an Intel i7 CPU at 3.4 GHz with 8.00 GB of memory.

5.2 Visual analysis

In this experiment set, the MMKE’s visual analysis was performed on a number of selected functions of the CEC 2018 suite to analyse the impact of the proposed trial vectors and the exploration and exploitation behavior. First, to analyse the impact of the introduced TVPs in the MMKE algorithm, the convergence of each TVP and the improved rate of each TVP are examined. Then, the exploration and exploitation tendency of MMKE is shown. All analyses were performed over (Dim×10000)/N generations, where N is the population size, which is set to 100, and Dim is the dimension of the problem, which is varied over 10, 30, and 50.

5.2.1 Trial vector impact analysis.

In the following subsection, the impact of the BTVP and RTVP evolution strategies on the performance of MMKE is examined using two separate tests. The curves of these tests are shown in Figs 9 and 10 for six functions: Func 3, Func 8, Func 10, Func 16, Func 21, and Func 30. In the first test, each of MKE-TVP, BTVP, and RTVP was considered as a distinct algorithm, and the best obtained result in each generation was compared to MMKE. As the curves in Fig 9 show, in comparison to MKE-TVP, better solutions are obtained by BTVP on unimodal, simple multimodal, hybrid, and composition functions. Thus, the findings of this analysis reveal that by using the gbest-history, the BTVP is able to avoid premature convergence and entrapment in local optima, as well as perform superior exploitation. The RTVP is also better able to find optimal solutions for the hybrid and composition functions, as it can adjust the balance between exploration and exploitation and avoid premature convergence. This TVP has a top priority for exploring and leading the search to the global optimum. Differences between random solutions are utilized to balance exploration and exploitation and preserve diversity throughout the optimization.

In the second test, the improved rate of each TVP in the MMKE algorithm is calculated, and the percentage of their improvement is indicated on the aforementioned CEC 2018 functions. As the curves in Fig 10 show, it is evident that BTVP and RTVP are the dominant evolution strategies with unique effects on the search process, while MKE-TVP has the least impact on the optimization process. Also, the improved rate of each TVP differs in functions with various features and even in different phases of the search process. The curves shown in this figure reveal that the combination of MKE-TVP and BTVP has a superior effect on dealing with unimodal problems, while the cooperation of BTVP and RTVP is most significant in solving complex problems with many local optima.

5.2.2 Exploration and exploitation analysis.

In this section, the explorative and exploitative capabilities of the MMKE algorithm are examined on selected functions of CEC 2018. The exploration ability of an algorithm is to globally investigate more regions of the search space, while exploitation refers to locally searching potential solutions in the promising regions to increase the efficiency of the found solution. Since metaheuristic algorithms use a population of solutions, the effect of exploration is more apparent when the distance between the solutions increases. On the other hand, the effect of exploitation increases when the distance among the solutions decreases. Depending on the search strategy of an algorithm, a tradeoff between exploration and exploitation is needed to achieve a reasonable solution. To analyze this behavior, a dimension-wise diversity measurement [107], shown in Eq (13), is used to measure the distance among the solutions.

Div = (1/Dim) Σj=1..Dim (1/N) Σi=1..N |median(xj(t)) − xi,j(t)|    (13)

Where Div is the diversity measurement of the whole population in each generation, Dim and N are the dimension and the size of the population, and median(xj(t)) refers to the median of dimension j over the whole population. The exploration and exploitation invested by an algorithm can be calculated using Eq (14) and Eq (15),

XPL% = (Div / Divmax) × 100    (14)

XPT% = (|Div − Divmax| / Divmax) × 100    (15)

where Divmax is the maximum diversity value during the optimization process. As can be observed from the plotted curves shown in Fig 11, XPL% and XPT% mutually complement each other, and MMKE achieves an adequate balance between exploration and exploitation during the search process. Also, the figures demonstrate that MMKE starts by exploring the search space through multiple candidate solutions. Then, the exploration behavior transitions smoothly into exploitation, depending on the problem being solved.
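For reference, a small Python sketch of the diversity measurement and the XPL%/XPT% curves of Eqs (13)-(15); names are illustrative and it assumes the diversity of every generation has been recorded:

```python
import numpy as np

def diversity(X):
    """Dimension-wise diversity (Eq 13): mean absolute distance of the
    population from the per-dimension median, averaged over dimensions."""
    med = np.median(X, axis=0)
    return float(np.mean(np.abs(X - med)))     # averages over both N and Dim

def exploration_exploitation(div_history):
    """XPL% and XPT% (Eqs 14-15) from the diversity recorded each generation."""
    div = np.asarray(div_history)
    div_max = div.max()
    xpl = 100.0 * div / div_max
    xpt = 100.0 * np.abs(div - div_max) / div_max
    return xpl, xpt
```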

Fig 11. The balance of exploration and exploitation in MMKE.

https://doi.org/10.1371/journal.pone.0280006.g011

The plotted curves for the unimodal test function Func 1 are shown in Fig 11 for dimensions 10, 30, and 50; they indicate that individuals investigate the search space with a high proportion of exploration during the first one-fourth of the evolutionary process. Then, individuals gradually modify their behavior to accelerate convergence to the global optimum with a large proportion of exploitation. The exploration capability of the proposed algorithm was evaluated using the multimodal test functions Func 8 and Func 10, which include several local optima. The plotted curves for these test functions indicate that individuals alternate between exploration and exploitation to decrease the probability of being trapped in local optima. In addition, the large proportion of exploration signifies that individuals tend to enhance the possibility of discovering new areas inside the search space, although exploitation of the optimal solution continues in the last iterations. The plotted curves depicted for Func 16, Func 21, and Func 30 show that individuals explore the search space with a large proportion of exploration during the initial iterations, and this ratio subsequently decreases as the behavior transitions into exploitation, which grows to accelerate convergence speed.

5.3 Quantitative evaluation

Analyzing the performance of the MMKE algorithm is the objective of this subsection, which includes an extensive experimental study and a statistical evaluation. On the basis of the gained results, a comparison is conducted with state-of-the-art algorithms, including grey wolf optimizer (GWO) [17], whale optimization algorithm (WOA) [4], salp swarm algorithm (SSA) [14], butterfly optimization algorithm (BOA) [108], and Aquila optimizer (AO) [109] from swarm intelligence algorithms and well-known composite DE (CoDE) [94], the ensemble of mutation strategies and parameters in DE (EPSDE) [100], quasi-affine transformation evolutionary (QUATRE) [110], and monkey king evolution (MKE) [25] from the same category of evolutionary algorithms.

The values of any parameters related to the comparative algorithms were set in accordance with the recommendations from the original article, as shown in Table 2. A total of 20 separate runs with varying dimensions of 10, 30, and 50 were used to assess all benchmark functions. Every time a run was performed, the maximum number of generations (MaxGen) was determined by (Dim×10000)/N, where Dim and N were set to the problem’s dimensions and a constant of 100. Reporting the obtained results is done by using the fitness error value f(gbestpop)–f(X*), where f(gbestpop) signifies the minimum fitness value gained and f(X*) denotes the actual global optimum solution of the test function. The mean and standard deviation of the error values were used to assess the algorithms’ performance. The experimental results are shown in Tables 3–6, in which the best-obtained error values are remarked in boldface. Furthermore, the bottom three consecutive rows of each table designated "l/t/w" represent the number of algorithm losses (l), ties (t), and wins (w).

Table 3. The obtained results for unimodal test functions.

https://doi.org/10.1371/journal.pone.0280006.t003

Table 4. The obtained results for simple multimodal test functions.

https://doi.org/10.1371/journal.pone.0280006.t004

Table 6. The obtained results for composition test functions.

https://doi.org/10.1371/journal.pone.0280006.t006

5.3.1 Exploration and exploitation evaluation.

The exploitative ability of algorithms is evaluated using unimodal functions, while the exploratory capability of algorithms is evaluated using multimodal functions. The exploitative and explorative capabilities of MMKE were evaluated and compared with the comparative algorithms using these two kinds of test functions, as detailed in the following:

As shown in Table 3, the MMKE algorithm has a significantly improved performance over MKE in gaining more accurate results for unimodal functions in dimensions 10, 30, and 50. This is mostly because the MKE-TVP and BTVP use the evolution strategy that is mostly exploitative since the best monkey or best-history archive of monkeys is selected to guide the search. In dimension 50, this is because of the usage of BTVP and RTVP, which assist in escaping from local optima while still preserving diversity. As a result, the MMKE algorithm exploits the optimum solution more efficiently than the MKE algorithm and other comparative algorithms.

As per the results stated in Table 4, MMKE is capable of producing competitive results for simple multimodal functions, particularly those with dimensions 10 and 30. This experiment is carried out on Func 4-Func 10, where the number of local optima rises exponentially while increasing the function dimension. These results prove that the proposed MMKE algorithm is competitive in exploration. The RTVP’s preservation of diversity and extensive exploration of the search space is the primary reason for the adequate exploration of MMKE.

5.3.2 Evaluation of local optima avoidance.

Hybrid and composition functions are usually composed of a variety of unimodal and multimodal functions, making them more complex and challenging in the optimization process. Consequently, these functions are appropriate for evaluating the MMKE’s capability to maintain the balance of exploration and exploitation, resulting in the avoidance of local optima.

For hybrid functions, the results in Table 5 show that MMKE surpasses all other algorithms in all three dimensions and produces better results. Furthermore, Table 6 shows the results of MMKE compared to comparative algorithms used in solving composition functions. Results demonstrated that the MMKE algorithm achieves a good balance between exploration and exploitation, which increases the ability to avoid local optima in a given situation. In addition, since each TVP’s improved rate is considered when determining the portion size of sub-populations, a suitable balance between exploration and exploitation can be achieved.

5.3.3 Convergence evaluation.

This experiment aims to assess and compare the convergence behavior and speed of MMKE with the comparative algorithms. Fig 12 shows the convergence curves for the unimodal function Func 1, multimodal functions Func 5 and Func 10, and composition functions Func 21 and Func 26 on dimensions 10, 30, and 50. Each of these curves represents the mean of the best values in every generation over twenty runs for each algorithm.

Fig 12. CEC 2018 convergence curves for selected functions with various dimensions.

https://doi.org/10.1371/journal.pone.0280006.g012

As indicated by the curves in Fig 12, the MMKE demonstrates three convergence behaviors during the optimization process for test functions with diverse properties. First, there is a decreasing convergence in the early generations, in which an approximate optimal solution is found and maintained. The second behavior, seen during the first half of the generations, is faster convergence, and the estimate of the global optimum becomes increasingly accurate as the number of generations increases. Finally, the last behavior involves steady improvement of the solution until the last generations are reached. Based on the curves, it can be stated that the suggested MMKE algorithm is better capable of striking a balance between exploration and exploitation throughout generations than the comparative algorithms. The curves in Fig 12 indicate the competitive behavior of the MMKE algorithm while solving unimodal functions, consistent with the performance assessment results presented in Tables 3–6.

The MMKE algorithm is also superior to the other comparative algorithms in that it achieves faster convergence on multimodal and composition functions. Since the proposed algorithm uses a combination of the best-obtained solutions in the BTVP and the differences between random solutions, it achieves sufficient convergence, exploitation, and a proper balance between exploration and exploitation. Additionally, the proposed algorithm maintains diversity throughout the optimization process by utilizing the differences between random solutions. The convergence curves presented in Fig 12 demonstrate that MMKE outperforms the comparative algorithms on both the hybrid and composition functions. The MMKE’s gained results prove that the exploration and exploitation processes are appropriately balanced in both hybrid and composition functions. Furthermore, the proposed MMKE algorithm maintains the diversity essential for dealing with issues in complicated functions.

5.4 Discussion and limitations

This subsection discusses the primary advancements and reasons that make the proposed MMKE algorithm suited for tackling complex benchmark functions and global optimization problems with superiority over comparative algorithms. The qualitative results represented in Figs 9 and 10 show that, by utilizing the gbest-history, the BTVP can prevent premature convergence and entrapment in local optima and perform greater exploitation. Additionally, the RTVP is able to locate optimal solutions for hybrid and composition functions, as it can strike a balance between exploration and exploitation and prevent premature convergence. This TVP is vital for exploring and directing the search toward the global optimum. The exploration and exploitation analysis presented in Fig 11 demonstrates that MMKE allocates a high proportion of its effort to exploitation on unimodal test functions and a high proportion to exploration on multimodal test functions. Using the winner-based distributing and the reward rule distribution policy, the proposed MMKE algorithm successfully switches between exploration and exploitation in hybrid and composition test functions.

According to the results and curves shown in Table 3 and Fig 12, the MMKE algorithm is very competitive for unimodal and multimodal test functions by accurately converging to the promising area in terms of its ability to exploit and explore. It is attributable to the BTVP and RTVP, which promote the flow of information through the use of informative monkeys from the best-history archive as well as random monkeys. The experimental evaluations presented in Tables 4 and 5 and the convergence curves in Fig 12 demonstrate that utilizing TVP’s improved rate for determining the portion size of sub-populations enhances the probability of assigning the appropriate TVP. The experimental results demonstrate that the proposed MMKE algorithm competes with state-of-the-art evolutionary and swarm intelligence algorithms and has superior performance for solving unimodal, multimodal, hybrid, and composition test functions.

As with any research, ours has its limitations. Three TVPs were utilized in this investigation to inform the design of the winner-based distribution strategy, which will need to be revised for additional trial vectors. The proposed MMKE’s performance may also suffer when the problem’s dimensions are exceedingly high. This restriction can be overcome by conducting preliminary experiments to establish the optimal archive size and policy for high-dimensional issues.

6. Statistical analysis

While the experimental assessment results compare the proposed MMKE algorithm's overall performance to that of the comparative algorithms, they do not reveal the statistical significance of the differences. Thus, the non-parametric Friedman test [111] is used to establish MMKE's statistical superiority by ranking all algorithms according to their obtained fitness, using Eq (16):

$$F_f = \frac{12n}{k(k+1)}\left[\sum_{j=1}^{k} R_j^{2} - \frac{k(k+1)^{2}}{4}\right] \qquad (16)$$

where k, n, and Rj are the number of algorithms, the number of test cases, and the mean rank of the jth algorithm, respectively. The test scores each algorithm/problem pair from 1 (best result) to k (worst result) and then averages the ranks across all problems to obtain each algorithm's final rating.

In the Friedman test, the null hypothesis H0 states that there is no significant difference between the compared algorithms (p-value > 0.05), while the alternative hypothesis H1 assumes a significant difference between the results of the algorithms over the 20 runs. Better algorithms are identified by smaller mean ranks. The results of the Friedman rank test at a 95% confidence level are given in Table 7. According to Table 7, the p-values obtained from the non-parametric test indicate the significance of the results and demonstrate the MMKE algorithm's superiority over the state-of-the-art algorithms on dimensions 10, 30, and 50.
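For reproducibility, a minimal sketch of the rank computation and the Ff statistic of Eq (16) is given below. The small result matrix is synthetic and only illustrates the calculation; it is not the actual CEC 2018 data.

```python
import numpy as np
from scipy.stats import rankdata, chi2

def friedman_test(results):
    """results: (n problems) x (k algorithms) matrix of mean fitness values (lower is better)."""
    n, k = results.shape
    ranks = np.apply_along_axis(rankdata, 1, results)   # rank 1 = best per problem, ties averaged
    mean_ranks = ranks.mean(axis=0)                      # R_j in Eq (16)
    ff = (12.0 * n / (k * (k + 1))) * (np.sum(mean_ranks ** 2) - k * (k + 1) ** 2 / 4.0)
    p_value = chi2.sf(ff, df=k - 1)                      # Ff is approximately chi-square with k-1 dof
    return mean_ranks, ff, p_value

# Synthetic example: 5 problems, 3 algorithms
results = np.array([[1.2, 1.5, 2.0],
                    [0.8, 1.1, 0.9],
                    [3.0, 3.4, 3.1],
                    [0.5, 0.9, 0.7],
                    [2.2, 2.5, 2.4]])
mean_ranks, ff, p = friedman_test(results)
print(mean_ranks, ff, p)   # smaller mean rank indicates a better algorithm
```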

7. Applicability of MMKE for solving engineering design problems

As discussed in previous sections, metaheuristic algorithms are extremely useful for solving real-world engineering problems [112–116]. This section includes five engineering problems used to further study the MMKE algorithm's potential to address real-world engineering difficulties. MMKE and the comparative algorithms were applied to the pressure vessel design [117], the welded beam design [118], the tension/compression spring design [119], the three-bar truss design [120], and the optimal power flow (OPF) problem for the IEEE 30-bus system [121]. A detailed description of these problems can be found in S1 Appendix. The death penalty function is used to handle the constraints of these problems, so that solutions violating any constraint are discarded; in practice, a large number is assigned to the fitness value of solutions that violate one or more constraints [1], as illustrated in the sketch below. In this experiment, each algorithm is run 30 times, with the maximum number of generations (MaxGen) and the population size (N) set to (Dim×10^4)/N and 20, respectively; in the last experiment, they were set to 20, 50, and 200. For each problem, the obtained design variables (DV) and the optimum value of the design objective are tabulated in Tables 8–13.
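A minimal sketch of the death-penalty constraint handling described above follows. The function names, the penalty magnitude, and the example constraint are assumptions for illustration; the paper only states that infeasible solutions receive a large fitness value.

```python
import numpy as np

PENALTY = 1e20   # large value assigned to infeasible solutions (assumed magnitude)

def penalized_fitness(objective, constraints, x):
    """Death penalty: if any constraint g_i(x) <= 0 is violated, return a huge fitness."""
    if any(g(x) > 0 for g in constraints):   # constraints expressed as g_i(x) <= 0
        return PENALTY
    return objective(x)

# Illustrative usage with a three-bar-truss-style objective and a made-up constraint:
objective = lambda x: (2.0 * np.sqrt(2.0) * x[0] + x[1]) * 100.0
constraints = [lambda x: x[0] + x[1] - 2.0]              # illustrative constraint only
print(penalized_fitness(objective, constraints, np.array([0.8, 0.4])))
```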

Table 8. Comparison of variables and objective values for the problem of pressure vessel.

https://doi.org/10.1371/journal.pone.0280006.t008

Table 9. Comparison of variables and objective values for the problem of welded beam.

https://doi.org/10.1371/journal.pone.0280006.t009

Table 10. Comparison of variables and objective values for the problem of tension/compression spring design.

https://doi.org/10.1371/journal.pone.0280006.t010

Table 11. Comparison of variables and objective values for the problem of three-bar truss.

https://doi.org/10.1371/journal.pone.0280006.t011

Table 12. Comparison of variables and objective values for the OPF using IEEE 30-bus system for Case1.

https://doi.org/10.1371/journal.pone.0280006.t012

Table 13. Comparison of variables and objective values for the OPF using IEEE 30-bus system for Case 2.

https://doi.org/10.1371/journal.pone.0280006.t013

This experiment investigates the stated hypothesis in real-life scenarios: whether using the MTV approach in the MKE algorithm enhances its performance in terms of local optima avoidance, resistance to premature convergence, and a balanced trade-off between exploitation and exploration. To this end, four constrained engineering design problems and a medium-scale OPF problem with two cases are considered. Based on the obtained results, the proposed MMKE algorithm not only performs better on the constrained small-scale problems but also achieves excellent results on the medium-scale OPF problem for both the single- and multi-objective cases. On the engineering design problems, MMKE obtains similar or even better results because the cooperation of the defined TVPs prevents premature convergence and trapping in local optima. In addition, the sufficient exploration and exploitation provided by the RTVP and the preservation of diversity lead to superior results on the medium-scale OPF problem. Consequently, the proposed MMKE algorithm is competitive and can discover objective values that are equivalent to or better than the solutions found by the comparative algorithms.

8. Conclusion and future works

Many stochastic algorithms, including evolution-based metaheuristic algorithms, are well known and powerful in solving optimization problems. There are, however, certain deficiencies in these algorithms when they are applied to complex problems. In this paper, we employed the multi-trial vector (MTV) approach to present an effective multi-trial vector-based monkey king evolution (MMKE) algorithm, which improves on the evolutionary algorithm known as monkey king evolution (MKE). The single evolution strategy used by MKE results in premature convergence and an inadequate balance between exploration and exploitation. Thus, the MTV approach, which employs a combination of various TVPs, replaces the simple MKE evolution scheme. With the MTV approach, the population is divided into several sub-populations using a winner-based distribution policy, and each sub-population is assigned its own TVP. In MMKE, two strategies, BTVP and RTVP, cooperate with the canonical MKE-TVP so that various problems with distinct characteristics can be tackled. Furthermore, as an additional advantage, MMKE uses prior knowledge of the best individuals to avoid local optima and premature convergence.

To evaluate the performance of the proposed algorithm, a variety of experiments were carried out using the CEC 2018 test suite. First, MMKE was qualitatively evaluated in the visual analysis subsection, followed by two further analyses: the trial vector impact analysis and the exploration and exploitation analysis. The main purpose of this subsection was to illustrate the significant effect and the convergence behavior of the suggested trial vector producers throughout the search process, as well as their explorative and exploitative potential. Next, in Subsection 5.3, the performance of MMKE was quantitatively assessed in terms of exploration and exploitation, local optima avoidance, and convergence in comparison with eight state-of-the-art algorithms. The results revealed the effectiveness of MMKE in achieving globally optimal solutions with more stable convergence than other well-known published optimization algorithms.

Then, the experimental results were statistically analyzed using the Friedman test. In addition to demonstrating MMKE's statistical superiority over the comparative algorithms, the statistical results also showed that the MMKE algorithm guarantees effective exploration and maintains a balance between exploration and exploitation. Finally, we evaluated the applicability of MMKE by solving four engineering design problems and the optimal power flow problem for the IEEE 30-bus system. MMKE provided superior solutions for these problems, both in terms of the optimal objective function values and the number of function evaluations. MMKE showed significant performance advantages over other well-known optimization algorithms on the engineering design problems and demonstrated its ability to deal with various constrained problems.

To provide a concise summary of the results gained from the performance assessment, Table 14 reports the overall effectiveness (OE) of MMKE and the comparative algorithms based on their total performance results presented in Tables 3–6. The algorithms' overall effectiveness (OE) is calculated by Eq (17), where N and L are the number of functions and the total number of loser functions for each algorithm, respectively; a minimal computational sketch is given after Table 14.

$$OE = \frac{N - L}{N} \times 100\% \qquad (17)$$
Table 14. Comparison of MMKE and comparative algorithms in terms of the overall effectiveness (OE).

https://doi.org/10.1371/journal.pone.0280006.t014
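The OE metric of Eq (17) reduces to a one-line computation; the sketch below uses illustrative win/lose counts rather than the values reported in Table 14.

```python
def overall_effectiveness(n_functions, n_losses):
    """OE of Eq (17): percentage of functions on which the algorithm was not the loser."""
    return 100.0 * (n_functions - n_losses) / n_functions

# Example: an algorithm that loses on 7 of 29 test functions (counts are hypothetical)
print(f"{overall_effectiveness(29, 7):.1f}%")   # 75.9%
```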

The following remarks can be concluded from the overall effectiveness of MMKE and the comparative algorithms. First, across all tested dimensions (10, 30, and 50), the proposed MMKE algorithm is superior to the other comparative algorithms. Second, MMKE is not only more efficient than the swarm intelligence algorithms GWO, WOA, SSA, BOA, and AO, but it is also a serious competitor to the evolutionary algorithms CoDE, EPSDE, QUATRE, and MKE.

The following conclusions can be taken from the results of experimental performance assessment, statistical analysis, and solutions gained for engineering design problems:

  • The proposed best-history trial vector producer (BTVP) and random trial vector producer (RTVP) enhance exploitation and exploration.
  • In cooperation with the canonical MKE-TVP, the proposed BTVP and RTVP help improve the overall balance between exploration and exploitation. Thus, it becomes possible for MMKE to escape local optima.
  • The results derived from different qualitative and quantitative experiments conducted on diverse test functions with various characteristics along with statistical tests testify to the superior performance of the MMKE algorithm over the comparative algorithms.
  • It has been shown that the MMKE algorithm effectively solves engineering problems.

The MMKE algorithm was proposed for optimizing continuous single-objective optimization problems. Several directions can be considered for future research. MMKE can be adapted to handle binary, discrete, multi-objective, and many-objective real-world optimization problems, depending on the problem to be solved. Moreover, applying it to problems in various fields, such as scheduling, image processing, feature selection, clustering, and community detection, would be beneficial. Eventually, proposing an aggregate version of MMKE in which the TVPs benefit from the search strategies of other algorithms could be a valuable and advantageous contribution.

References

  1. 1. Talbi E.-G., Metaheuristics: from design to implementation, John Wiley & Sons, 2009.
  2. 2. Wolpert D. H. and Macready W. G., No free lunch theorems for optimization, IEEE transactions on evolutionary computation, vol. 1, (1997), pp. 67–82.
  3. 3. I. Fister Jr., X.-S. Yang, I. Fister, J. Brest, and D. Fister, A brief review of nature-inspired algorithms for optimization, arXiv preprint arXiv:1307.4186, (2013).
  4. 4. Mirjalili S. and Lewis A., The whale optimization algorithm, Advances in Engineering Software, vol. 95, (2016), pp. 51–67.
  5. 5. Koza J. R., Genetic programming, (1997).
  6. 6. Storn R. and Price K., Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces, Journal of global optimization, vol. 11, (1997), pp. 341–359.
  7. 7. Rechenberg I., Evolution Strategy: Optimization of Technical systems by means of biological evolution, Fromman-Holzboog, Stuttgart, vol. 104, (1973), pp. 15–16.
  8. 8. Goldberg D. E. and Holland J. H., Genetic algorithms and machine learning, (1988), pp. 95–99.
  9. 9. Yao X., Liu Y., and Lin G., Evolutionary programming made faster, IEEE Transactions on Evolutionary computation, vol. 3, (1999), pp. 82–102.
  10. 10. R. Eberhart and J. Kennedy, A new optimizer using particle swarm theory, in MHS’95. Proceedings of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan, 1995, pp. 39–43.
  11. 11. M. Dorigo and G. Di Caro, Ant colony optimization: a new meta-heuristic, in Proceedings of the 1999 congress on evolutionary computation-CEC99 (Cat. No. 99TH8406), 1999, pp. 1470–1477.
  12. 12. Karaboga D. and Basturk B., A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, Journal of global optimization, vol. 39, (2007), pp. 459–471.
  13. 13. Cheng M.-Y. and Prayogo D., Symbiotic organisms search: a new metaheuristic optimization algorithm, Computers & Structures, vol. 139, (2014), pp. 98–112.
  14. 14. Mirjalili S., Gandomi A. H., Mirjalili S. Z., Saremi S., Faris H., and Mirjalili S. M., Salp Swarm Algorithm: A bio-inspired optimizer for engineering design problems, Advances in Engineering Software, vol. 114, (2017), pp. 163–191.
  15. 15. Jain M., Singh V., and Rani A., A novel nature-inspired algorithm for optimization: Squirrel search algorithm, Swarm and evolutionary computation, vol. 44, (2019), pp. 148–175.
  16. 16. Askarzadeh A., A novel metaheuristic method for solving constrained engineering optimization problems: crow search algorithm, Computers & Structures, vol. 169, (2016), pp. 1–12.
  17. 17. Mirjalili S., Mirjalili S. M., and Lewis A., Grey wolf optimizer, Advances in Engineering Software, vol. 69, (2014), pp. 46–61.
  18. 18. Braik M., Sheta A., and Al-Hiary H., A novel meta-heuristic search algorithm for solving optimization problems: capuchin search algorithm, Neural Computing and Applications, vol. 33, (2021), pp. 2515–2547.
  19. 19. Hashim F. A. and Hussien A. G., Snake Optimizer: A novel meta-heuristic optimization algorithm, Knowledge-Based Systems, vol. 242, (2022), p. 108320.
  20. 20. Kirkpatrick S., Gelatt C. D., and Vecchi M. P., Optimization by simulated annealing, Science, vol. 220, (1983), pp. 671–680. pmid:17813860
  21. 21. Erol O. K. and Eksin I., A new optimization method: big bang–big crunch, Advances in Engineering Software, vol. 37, (2006), pp. 106–111.
  22. 22. Rashedi E., Nezamabadi-Pour H., and Saryazdi S., GSA: a gravitational search algorithm, Information sciences, vol. 179, (2009), pp. 2232–2248.
  23. 23. Kaveh A. and Bakhshpoori T., Water evaporation optimization: a novel physically inspired optimization algorithm, Computers & Structures, vol. 167, (2016), pp. 69–85.
  24. 24. Hashim F. A., Hussain K., Houssein E. H., Mabrouk M. S., and Al-Atabany W., Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems, Applied Intelligence, vol. 51, (2021), pp. 1531–1551.
  25. 25. Meng Z. and Pan J.-S., Monkey King Evolution: A new memetic evolutionary algorithm and its application in vehicle fuel consumption optimization, Knowledge-Based Systems, vol. 97, (2016), pp. 144–157.
  26. 26. Nadimi-Shahraki M. H., Taghian S., Mirjalili S., and Faris H., MTDE: An effective multi-trial vector-based differential evolution algorithm and its applications for engineering design problems, Applied Soft Computing, vol. 97, (2020), p. 106761.
  27. 27. N. Awad, M. Ali, J. Liang, B. Qu, and P. Suganthan, Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization, Nanyang Technological University, Jordan University of Science and Technology and Zhengzhou University, Singapore and Zhengzhou, China, Tech. Rep., vol. 201611, (2016).
  28. 28. Ghasemi M. R. and Varaee H., A fast multi-objective optimization using an efficient ideal gas molecular movement algorithm, Engineering with Computers, vol. 33, (2017), pp. 477–496.
  29. 29. Fard E. S., Monfaredi K., and Nadimi-Shahraki M. H., An Area-Optimized Chip of Ant Colony Algorithm Design in Hardware Platform Using the Address-Based Method, International Journal of Electrical and Computer Engineering, vol. 4, (2014), pp. 989–998.
  30. 30. Hosseini M., Afrouzi H. H., Yarmohammadi S., Arasteh H., Toghraie D., Amiri A. J., et al., Optimization of FX-70 refrigerant evaporative heat transfer and fluid flow characteristics inside the corrugated tubes using multi-objective genetic algorithm, Chinese Journal of Chemical Engineering, vol. 28, (2020), pp. 2142–2151.
  31. 31. Agushaka J. O. and Ezugwu A. E., Advanced arithmetic optimization algorithm for solving mechanical engineering design problems, PLoS One, vol. 16, (2021), p. e0255703. pmid:34428219
  32. 32. Gharehchopogh F. S., Nadimi-Shahraki M. H., Barshandeh S., Abdollahzadeh B., and Zamani H., CQFFA: A Chaotic Quasi-oppositional Farmland Fertility Algorithm for Solving Engineering Optimization Problems, Journal of Bionic Engineering, (2022), pp. 1–26.
  33. 33. Kaveh A., Talatahari S., and Khodadadi N., Stochastic paint optimizer: theory and application in civil engineering, Engineering with Computers, (2020), pp. 1–32.
  34. 34. Khodadadi N., Talatahari S., and Dadras Eslamlou A., MOTEO: a novel multi-objective thermal exchange optimization algorithm for engineering problems, Soft Computing, (2022), pp. 1–26.
  35. 35. Chakraborty S., Saha A. K., Nama S., and Debnath S., COVID-19 X-ray image segmentation by modified whale optimization algorithm with population reduction, Computers in Biology and Medicine, vol. 139, (2021), p. 104984. pmid:34739972
  36. 36. Mittal H. and Saraswat M., An optimum multi-level image thresholding segmentation using non-local means 2D histogram and exponential Kbest gravitational search algorithm, Engineering Applications of Artificial Intelligence, vol. 71, (2018), pp. 226–235.
  37. 37. Houssein E. H., Helmy B. E.-d., Oliva D., Jangir P., Premkumar M., Elngar A. A., et al., An efficient multi-thresholding based COVID-19 CT images segmentation approach using an improved equilibrium optimizer, Biomedical Signal Processing and Control, vol. 73, (2022), p. 103401.
  38. 38. Chakraborty S., Sharma S., Saha A. K., and Chakraborty S., SHADE–WOA: A metaheuristic algorithm for global optimization, Applied Soft Computing, vol. 113, (2021), p. 107866.
  39. 39. Singh H., Singh B., and Kaur M., An improved elephant herding optimization for global optimization problems, Engineering with Computers, (2021), pp. 1–33.
  40. 40. Nadimi-Shahraki M. H., Taghian S., Mirjalili S., Ewees A. A., Abualigah L., and Abd Elaziz M., MTV-MFO: Multi-Trial Vector-Based Moth-Flame Optimization Algorithm, Symmetry, vol. 13, (2021), p. 2388.
  41. 41. Chakraborty S., Saha A. K., Chakraborty R., Saha M., and Nama S., HSWOA: An ensemble of hunger games search and whale optimization algorithm for global optimization, International Journal of Intelligent Systems, vol. 37, (2022), pp. 52–104.
  42. 42. Lu B., Zheng Z., Zhang Z., Yu Y., and Liu T., New global optimization algorithms based on multi-loop distributed control systems with serial structure and ring structure for solving global optimization problems, Engineering Applications of Artificial Intelligence, vol. 101, (2021), p. 104115.
  43. 43. Abualigah L., Diabat A., Mirjalili S., Abd Elaziz M., and Gandomi A. H., The arithmetic optimization algorithm, Computer methods in applied mechanics and engineering, vol. 376, (2021), p. 113609.
  44. 44. Houssein E. H., Oliva D., Çelik E., Emam M. M., and Ghoniem R. M., Boosted sooty tern optimization algorithm for global optimization and feature selection, Expert Systems with Applications, (2022), p. 119015.
  45. 45. Azizi M., Talatahari S., Khodadadi N., and Sareh P., Multiobjective atomic orbital search (MOAOS) for global and engineering design optimization, IEEE Access, vol. 10, (2022), pp. 67727–67746.
  46. 46. Kassaymeh S., Abdullah S., Al-Betar M. A., and Alweshah M., Salp swarm optimizer for modeling the software fault prediction problem, Journal of King Saud University-Computer and Information Sciences, (2021).
  47. 47. Kumar N. and Vidyarthi D. P., A novel hybrid PSO–GA meta-heuristic for scheduling of DAG with communication on multiprocessor systems, Engineering with Computers, vol. 32, (2016), pp. 35–47.
  48. 48. Makhadmeh S. N., Khader A. T., Al-Betar M. A., Naim S., Abasi A. K., and Alyasseri Z. A. A., A novel hybrid grey wolf optimizer with min-conflict algorithm for power scheduling problem in a smart home, Swarm and Evolutionary Computation, vol. 60, (2021), p. 100793.
  49. 49. Mohammadzadeh A., Masdari M., Gharehchopogh F. S., and Jafarian A., A hybrid multi-objective metaheuristic optimization algorithm for scientific workflow scheduling, Cluster Computing, vol. 24, (2021), pp. 1479–1503.
  50. 50. Li M. and Lei D., An imperialist competitive algorithm with feedback for energy-efficient flexible job shop scheduling with transportation and sequence-dependent setup times, Engineering Applications of Artificial Intelligence, vol. 103, (2021), p. 104307.
  51. 51. Yousri D., Allam D., Eteiba M., and Suganthan P. N., Static and dynamic photovoltaic models’ parameters identification using chaotic heterogeneous comprehensive learning particle swarm optimizer variants, Energy conversion and management, vol. 182, (2019), pp. 546–563.
  52. 52. Al-Majidi S. D., Abbod M. F., and Al-Raweshidy H. S., A particle swarm optimisation-trained feedforward neural network for predicting the maximum power point of a photovoltaic array, Engineering Applications of Artificial Intelligence, vol. 92, (2020), p. 103688.
  53. 53. Houssein E. H., Mahdy M. A., Fathy A., and Rezk H., A modified Marine Predator Algorithm based on opposition based learning for tracking the global MPP of shaded PV system, Expert Systems with Applications, vol. 183, (2021), p. 115253.
  54. 54. Ramadan A., Kamel S., Hassan M. H., Khurshaid T., and Rahmann C., An Improved Bald Eagle Search Algorithm for Parameter Estimation of Different Photovoltaic Models, Processes, vol. 9, (2021), p. 1127.
  55. 55. Zavala G., Nebro A. J., Luna F., and Coello C. A. C., Structural design using multi-objective metaheuristics. Comparative study and application to a real-world problem, Structural and Multidisciplinary Optimization, vol. 53, (2016), pp. 545–566.
  56. 56. Ahrari A., Atai A.-A., and Deb K., A customized bilevel optimization approach for solving large-scale truss design problems, Engineering Optimization, vol. 52, (2020), pp. 2062–2079.
  57. 57. Gupta S., Deep K., and Engelbrecht A. P., A memory guided sine cosine algorithm for global optimization, Engineering Applications of Artificial Intelligence, vol. 93, (2020), p. 103718.
  58. 58. Sharma S., Saha A. K., and Lohar G., Optimization of weight and cost of cantilever retaining wall by a hybrid metaheuristic algorithm, Engineering with Computers, (2021), pp. 1–27.
  59. 59. Kaveh A., Eslamlou A. D., and Khodadadi N., Dynamic water strider algorithm for optimal design of skeletal structures, Periodica Polytechnica Civil Engineering, vol. 64, (2020), pp. 904–916.
  60. 60. Nadimi-Shahraki M. H., Taghian S., Mirjalili S., Abualigah L., Abd Elaziz M., and Oliva D., EWOA-OPF: Effective Whale Optimization Algorithm to Solve Optimal Power Flow Problem, Electronics, vol. 10, (2021), p. 2975.
  61. 61. Eslami M., Neshat M., and Khalid S. A., A Novel Hybrid Sine Cosine Algorithm and Pattern Search for Optimal Coordination of Power System Damping Controllers, Sustainability, vol. 14, (2022), p. 541.
  62. 62. Ali M. H., Kamel S., Hassan M. H., Tostado-Véliz M., and Zawbaa H. M., An improved wild horse optimization algorithm for reliability based optimal DG planning of radial distribution networks, Energy Reports, vol. 8, (2022), pp. 582–604.
  63. 63. Sayarshad H. R., Using bees algorithm for material handling equipment planning in manufacturing systems, The International Journal of Advanced Manufacturing Technology, vol. 48, (2010), pp. 1009–1018.
  64. 64. M. Banaie-Dezfouli, M. H. Nadimi-Shahraki, and H. Zamani, A Novel Tour Planning Model using Big Data, in 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), Malatya, Turkey, 2018, pp. 1–6.
  65. 65. Sun L., Pan Q.-k., Jing X.-L., and Huang J.-P., A light-robust-optimization model and an effective memetic algorithm for an open vehicle routing problem under uncertain travel times, Memetic Computing, vol. 13, (2021), pp. 149–167.
  66. 66. Neshat M., Alexander B., and Wagner M., A hybrid cooperative co-evolution algorithm framework for optimising power take off and placements of wave energy converters, Information Sciences, vol. 534, (2020), pp. 218–244.
  67. 67. Neshat M., Alexander B., Sergiienko N. Y., and Wagner M., New insights into position optimisation of wave energy converters using hybrid local search, Swarm and Evolutionary Computation, vol. 59, (2020), p. 100744.
  68. 68. Gharehpasha S., Masdari M., and Jafarian A., Power efficient virtual machine placement in cloud data centers with a discrete and chaotic hybrid optimization algorithm, Cluster Computing, vol. 24, (2021), pp. 1293–1315.
  69. 69. Xing H., Zhu J., Qu R., Dai P., Luo S., and Iqbal M. A., An ACO for energy-efficient and traffic-aware virtual machine placement in cloud computing, Swarm and Evolutionary Computation, vol. 68, (2022), p. 101012.
  70. 70. Tian Z., Short-term wind speed prediction based on LMD and improved FA optimized combined kernel function LSSVM, Engineering Applications of Artificial Intelligence, vol. 91, (2020), p. 103573.
  71. 71. Neshat M., Nezhad M. M., Abbasnejad E., Mirjalili S., Tjernberg L. B., Garcia D. A., et al., A deep learning-based evolutionary model for short-term wind speed forecasting: A case study of the Lillgrund offshore wind farm, Energy Conversion and Management, vol. 236, (2021), p. 114002.
  72. 72. S. Taghian, M. H. Nadimi-Shahraki, and H. Zamani, Comparative analysis of transfer function-based binary Metaheuristic algorithms for feature selection, in 2018 International Conference on Artificial Intelligence and Data Processing (IDAP), Malatya, Turkey, 2018, pp. 1–6.
  73. 73. Engelbrecht A. P., Grobler J., and Langeveld J., Set based particle swarm optimization for the feature selection problem, Engineering Applications of Artificial Intelligence, vol. 85, (2019), pp. 324–336.
  74. 74. Liao X., Khandelwal M., Yang H., Koopialipoor M., and Murlidhar B. R., Effects of a proper feature selection on prediction and optimization of drilling rate using intelligent techniques, Engineering with Computers, vol. 36, (2020), pp. 499–510.
  75. 75. Mohmmadzadeh H. and Gharehchopogh F. S., An efficient binary chaotic symbiotic organisms search algorithm approaches for feature selection problems, The Journal of Supercomputing, (2021), pp. 1–43.
  76. 76. Nadimi-Shahraki M. H., Banaie-Dezfouli M., Zamani H., Taghian S., and Mirjalili S., B-MFO: A Binary Moth-Flame Optimization for Feature Selection from Medical Datasets, Computers, vol. 10, (2021), p. 136.
  77. 77. Mohammadzadeh H. and Gharehchopogh F. S., Feature Selection with Binary Symbiotic Organisms Search Algorithm for Email Spam Detection, International Journal of Information Technology & Decision Making, vol. 20, (2021), pp. 469–515.
  78. 78. Nadimi-Shahraki M. H., Taghian S., Mirjalili S., and Abualigah L., Binary Aquila Optimizer for Selecting Effective Features from Medical Data: A COVID-19 Case Study, Mathematics, vol. 10, (2022), p. 1929.
  79. 79. Wang J., Lin D., Zhang Y., and Huang S., An adaptively balanced grey wolf optimization algorithm for feature selection on high-dimensional classification, Engineering Applications of Artificial Intelligence, vol. 114, (2022), p. 105088.
  80. 80. Moscato V., Picariello A., and Sperli G., Community detection based on game theory, Engineering Applications of Artificial Intelligence, vol. 85, (2019), pp. 773–782.
  81. 81. Masdari M. and Barshandeh S., Discrete teaching–learning-based optimization algorithm for clustering in wireless sensor networks, Journal of Ambient Intelligence and Humanized Computing, vol. 11, (2020), pp. 5459–5476.
  82. 82. Shaik T., Ravi V., and Deb K., Evolutionary Multi-Objective Optimization Algorithm for Community Detection in Complex Social Networks, SN Computer Science, vol. 2, (2021), pp. 1–25.
  83. 83. Kumar Y. and Kaur A., Variants of bat algorithm for solving partitional clustering problems, Engineering with Computers, (2021), pp. 1–27.
  84. 84. Nadimi-Shahraki M. H., Moeini E., Taghian S., and Mirjalili S., DMFO-CD: A Discrete Moth-Flame Optimization Algorithm for Community Detection, Algorithms, vol. 14, (2021), p. 314.
  85. 85. Yousri D., Abd Elaziz M., Abualigah L., Oliva D., Al-Qaness M. A., and Ewees A. A., COVID-19 X-ray images classification based on enhanced fractional-order cuckoo search optimizer using heavy-tailed distributions, Applied Soft Computing, vol. 101, (2021), p. 107052. pmid:33519325
  86. 86. Geetha K., Anitha V., Elhoseny M., Kathiresan S., Shamsolmoali P., and Selim M. M., An evolutionary lion optimization algorithm‐based image compression technique for biomedical applications, Expert Systems, vol. 38, (2021), p. e12508.
  87. 87. Gharehchopogh F. S., Maleki I., and Dizaji Z. A., Chaotic vortex search algorithm: metaheuristic algorithm for feature selection, Evolutionary Intelligence, vol. In press, (2021), pp. 1–32.
  88. 88. Lynn N. and Suganthan P. N., Heterogeneous comprehensive learning particle swarm optimization with enhanced exploration and exploitation, Swarm and Evolutionary Computation, vol. 24, (2015), pp. 11–24.
  89. 89. Nadimi-Shahraki M. H., Taghian S., Mirjalili S., Zamani H., and Bahreininejad A., GGWO: Gaze Cues Learning-based Grey Wolf Optimizer and its Applications for Solving Engineering Problems, Journal of Computational Science, (2022), p. 101636.
  90. 90. Pelusi D., Mascella R., Tallini L., Nayak J., Naik B., and Deng Y., An Improved Moth-Flame Optimization algorithm with hybrid search phase, Knowledge-Based Systems, vol. 191, (2020), p. 105277.
  91. 91. Xing Z., An improved emperor penguin optimization based multilevel thresholding for color image segmentation, Knowledge-Based Systems, vol. 194, (2020), p. 105570.
  92. 92. Meng Z., Pan J.-S., and Tseng K.-K., PaDE: An enhanced Differential Evolution algorithm with novel control parameter adaptation schemes for numerical optimization, Knowledge-Based Systems, vol. 168, (2019), pp. 80–99.
  93. 93. Qin A. K., Huang V. L., and Suganthan P. N., Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE transactions on Evolutionary Computation, vol. 13, (2008), pp. 398–417.
  94. 94. Wang Y., Cai Z., and Zhang Q., Differential evolution with composite trial vector generation strategies and control parameters, IEEE Transactions on Evolutionary Computation, vol. 15, (2011), pp. 55–66.
  95. 95. Cheng R. and Jin Y., A social learning particle swarm optimization algorithm for scalable optimization, Information Sciences, vol. 291, (2015), pp. 43–60.
  96. 96. Nadimi-Shahraki M. H., Taghian S., and Mirjalili S., An improved grey wolf optimizer for solving engineering problems, Expert Systems with Applications, vol. 166, (2021), p. 113917.
  97. 97. Chen H., Wang M., and Zhao X., A multi-strategy enhanced sine cosine algorithm for global optimization and constrained practical engineering problems, Applied Mathematics and Computation, vol. 369, (2020), p. 124872.
  98. 98. Niu B., Zhu Y., He X., and Wu H., MCPSO: A multi-swarm cooperative particle swarm optimizer, Applied Mathematics and computation, vol. 185, (2007), pp. 1050–1062.
  99. 99. Du W. and Li B., Multi-strategy ensemble particle swarm optimization for dynamic optimization, Information sciences, vol. 178, (2008), pp. 3096–3109.
  100. 100. Mallipeddi R., Suganthan P. N., Pan Q.-K., and Tasgetiren M. F., Differential evolution algorithm with ensemble of parameters and mutation strategies, Applied soft computing, vol. 11, (2011), pp. 1679–1696.
  101. 101. Wu G., Mallipeddi R., Suganthan P. N., Wang R., and Chen H., Differential evolution with multi-population based ensemble of mutation strategies, Information Sciences, vol. 329, (2016), pp. 329–345.
  102. 102. Wang X. and Tang L., An adaptive multi-population differential evolution algorithm for continuous multi-objective optimization, Information Sciences, vol. 348, (2016), pp. 124–141.
  103. 103. Ali M. Z., Awad N. H., Suganthan P. N., and Reynolds R. G., An adaptive multipopulation differential evolution with dynamic population reduction, IEEE transactions on cybernetics, vol. 47, (2016), pp. 2768–2779. pmid:28113798
  104. 104. Li X., Tang K., Omidvar M. N., Yang Z., Qin K., and China H., Benchmark functions for the CEC 2013 special session and competition on large-scale global optimization, gene, vol. 7, (2013), p. 8.
  105. 105. Long W., Jiao J., Liang X., and Tang M., An exploration-enhanced grey wolf optimizer to solve high-dimensional numerical optimization, Engineering Applications of Artificial Intelligence, vol. 68, (2018), pp. 63–80.
  106. 106. R. Tanabe and A. Fukunaga, Success-history based parameter adaptation for differential evolution, in 2013 IEEE congress on evolutionary computation, 2013, pp. 71–78.
  107. 107. Morales-Castañeda B., Zaldivar D., Cuevas E., Fausto F., and Rodríguez A., A better balance in metaheuristic algorithms: Does it exist?, Swarm and Evolutionary Computation, vol. 54, (2020), p. 100671.
  108. 108. Arora S. and Singh S., Butterfly optimization algorithm: a novel approach for global optimization, Soft Computing, vol. 23, (2019), pp. 715–734.
  109. 109. Abualigah L., Yousri D., Abd Elaziz M., Ewees A. A., Al-qaness M. A., and Gandomi A. H., Aquila Optimizer: A novel meta-heuristic optimization Algorithm, Computers & Industrial Engineering, vol. 157, (2021), p. 107250.
  110. 110. Meng Z., Pan J.-S., and Xu H., QUasi-Affine TRansformation Evolutionary (QUATRE) algorithm: a cooperative swarm based algorithm for global optimization, Knowledge-Based Systems, vol. 109, (2016), pp. 104–121.
  111. 111. Derrac J., García S., Molina D., and Herrera F., A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm and Evolutionary Computation, vol. 1, (2011), pp. 3–18.
  112. 112. Ghasemi M. R. and Varaee H., Enhanced IGMM optimization algorithm based on vibration for numerical and engineering problems, Engineering with Computers, vol. 34, (2018), pp. 91–116.
  113. 113. Pan J.-S., Hu P., and Chu S.-C., Binary fish migration optimization for solving unit commitment, Energy, vol. 226, (2021), p. 120329.
  114. 114. Sattar D. and Salim R., A smart metaheuristic algorithm for solving engineering problems, Engineering with Computers, vol. 37, (2021), pp. 2389–2417.
  115. 115. Yıldız B. S., Pholdee N., Panagant N., Bureerat S., Yildiz A. R., and Sait S. M., A novel chaotic Henry gas solubility optimization algorithm for solving real-world engineering problems, Engineering with Computers, (2021), pp. 1–13.
  116. 116. Gharehchopogh F. S., Farnad B., and Alizadeh A., A farmland fertility algorithm for solving constrained engineering problems, Concurrency and Computation: Practice and Experience, (2021), p. e6310.
  117. 117. Kannan B. and Kramer S. N., An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design, Journal of mechanical design, vol. 116, (1994), pp. 405–411.
  118. 118. C. A. C. Coello, Use of a self-adaptive penalty approach for engineering optimization problems, Computers in Industry, vol. 41, (2000), pp. 113–127.
  119. 119. Arora J. S., Introduction to optimum design, Elsevier, 2004.
  120. 120. Nowacki H., Optimization in pre-contract ship design, In: Fujita Y, Lind K, Williams TJ (eds) Computer applications in the automation of shipyard operation and ship design, vol. 2, (1974), pp. 327–338.
  121. 121. Radosavljević J., Klimenta D., Jevtić M., and Arsić N., Optimal power flow using a hybrid optimization algorithm of particle swarm optimization and gravitational search algorithm, Electric Power Components and Systems, vol. 43, (2015), pp. 1958–1970.