Published in: Artificial Intelligence Review 5/2022

Open Access 17-11-2021

SFSADE: an improved self-adaptive differential evolution algorithm with a shuffled frog-leaping strategy

Authors: Qingtao Pan, Jun Tang, Haoran Wang, Hao Li, Xi Chen, Songyang Lao



Abstract

The differential evolution (DE) algorithm is an efficient random search algorithm based on swarm intelligence for solving optimization problems. It has the advantages of easy implementation, fast convergence, strong optimization ability and good robustness. However, the performance of DE is very sensitive to the design of its operators and the setting of its control parameters. To address these key problems, this paper proposes an improved self-adaptive differential evolution algorithm with a shuffled frog-leaping strategy (SFSADE). It incorporates the idea of the shuffled frog-leaping algorithm into DE, introduces a new classification mutation strategy, and designs a new adaptive adjustment mechanism for the control parameters. In addition, we carried out extensive simulation experiments on the 25 benchmark functions of CEC 2005 and performed two nonparametric statistical tests to comprehensively evaluate the performance of SFSADE. The results of the simulation experiments and nonparametric statistical tests show that SFSADE is very effective in improving DE and significantly improves the overall diversity of the population during dynamic evolution. Compared with other advanced DE variants, its global search speed and optimization performance are also strongly competitive.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

Differential evolution (DE) was jointly proposed by Storn and Price (1997), originally to solve Chebyshev polynomial fitting problems. It is a simple and efficient evolutionary algorithm for solving global optimization problems in a continuous search space (Li and Wang 2020). Essentially, DE is a population-based random search algorithm that simulates the biological evolution process through the three operators of mutation, crossover and selection to evolve the initial solutions toward the global optimal solution (Wang et al. 2011). Over the past 20 years, DE has attracted increasing attention from researchers and has been successfully applied to various scientific and engineering fields, such as pattern recognition (Maulik and Saha 2009; Koloseni et al. 2012; Cai and Wang 2015), image processing (Das and Konar 2009; Su et al. 2012), collision avoidance (Tang et al. 2016; Sun et al. 2017; Tang 2019), and engineering design (Guo et al. 2001; Zhao et al. 2014; Sakr et al. 2017), with good results.
When DE is used to solve specific optimization problems, the design of the operators (i.e., mutation, crossover and selection) and the adjustment of the control parameters (i.e., population size NP, mutation factor F and crossover probability CR) are very important (Parouha and Verma 2021). They largely determine the quality of the solution DE produces. For different optimization problems, the appropriate operator design and parameter settings often differ (Wu et al. 2016). Improper settings may lead to premature or slow convergence, entrapment in local optima, and many other problems. Therefore, researchers usually need to try such settings repeatedly to apply DE well to numerical optimization problems (Zhu et al. 2020).
To address this problem, we summarize a large number of enhanced DE variants proposed in recent years, such as FADE (with a fuzzy logic self-adaptive strategy) (Liu and Lampinen 2005), TDE (with a trigonometric mutation operation) (Fan and Lampinen 2003), EDA (with a distributed estimation strategy) (Sun et al. 2005), jDE (with self-adaptive parameters) (Brest et al. 2006), AnDE (with new mutation and selection operations, combining the simulated annealing idea) (Das et al. 2007), JADE (with the "current-to-pbest/1" mutation operation and self-adaptive parameters) (Zhang and Sanderson 2009), SaDE (with self-adaptive mutation strategies and parameters) (Qin et al. 2009b), CoDE (with composite generation strategies and control parameters) (Wang et al. 2011), SspDE (with self-adaptive strategies and control parameters) (Pan et al. 2011), ESADE (with the "current-to-pbest/1" mutation operation and self-adaptive control parameters) (Guo et al. 2014), DMPSADE (with self-adaptive discrete mutation control parameters) (Fan and Yan 2015), MPEDE (with multiple mutation strategies and self-adaptive parameters) (Wu et al. 2016), DEMPSO (combining the particle swarm optimization idea) (Mao et al. 2017), SpDE (with multiple subpopulations and a phase-mutation strategy) (Pan et al. 2018), HyGADE (combining the genetic algorithm idea) (Chaudhary et al. 2019), and SAMDE (with a self-adaptive multipopulation strategy) (Zhu et al. 2020). Through extensive experimental research and theoretical analysis, the efficiency and performance of DE variants in solving problems have been greatly improved. In summary, the above improvements to DE can be divided into three types: redesigning the operations, adaptively adjusting the control parameters, and incorporating other intelligent optimization algorithms (Li et al. 2020a). As a very successful algorithm, SAMDE (Zhu et al. 2020) integrates all three ways of improvement. Decomposing the entire population into multiple subpopulations is a very effective way to increase the diversity of the population and improve the optimization performance and speed of DE: the subpopulations are continuously decomposed and merged during evolution, and different mutation strategies and control parameter adaptive adjustment mechanisms are used (Ma et al. 2009). Although some effective rules for these settings have been summarized from a large number of studies (Das and Suganthan 2011), their universality and adaptability need to be further improved, and it is still very difficult to design a DE algorithm with better comprehensive performance.
In this paper, to further improve the solution performance of the DE algorithm, accelerate its convergence, prevent it from falling into local optima and improve its stability, we propose an improved self-adaptive differential evolution algorithm with a shuffled frog-leaping strategy, called SFSADE, built on the basis of SAMDE (Zhu et al. 2020) and p-ADE (Bi and Xiao 2012). We introduce a mutation strategy based on the classification idea and design a new parameter adaptive adjustment mechanism. Most importantly, the idea of shuffled frog-leaping is used to form multiple subpopulations that evolve independently. The specific improvement measures include the following four aspects:
(1)
Introducing the shuffled frog-leaping strategy, in which meme groups are divided based on individual fitness values, thereby forming multiple subgroups that evolve independently and improving the diversity and convergence speed of the population;
 
(2)
Designing a new mutation strategy. At the same time, the global optimal solution in the evolution process and the local optimal solution of each meme group are used to avoid search blindness caused by the random selection of individuals in the difference vector;
 
(3)
Using a classification mutation strategy. Different evolution directions are chosen for individuals with different fitness values to further balance the “exploration” and “exploitation” capabilities of the algorithm;
 
(4)
Incorporating a new parameter self-adaptive strategy. According to the fitness value of the individual and the evolutionary number, an adaptive parameter scheme is provided for each individual. In addition, the proposed SFSADE algorithm was tested in 10, 30 and 50 dimensions on the 25 benchmark functions of CEC 2005 (Suganthan et al. 2005), and two nonparametric statistical tests, the Wilcoxon signed-rank test and the Friedman test, were performed. Compared with the results of 7 other advanced algorithms reported in the literature (Zhu et al. 2020), SFSADE shows great advantages and competitiveness.
 
The remaining sections of this paper are organized as follows: Sect. 2 introduces the basic DE and Shuffled Frog-Leaping algorithm. A review of the related work of differential evolution algorithms is presented in Sect. 3. In Sect. 4, the proposed algorithm SFSADE is introduced in detail. Section 5 shows the relevant experimental results and analysis. Finally, the conclusions are drawn in Sect. 6.

2 Differential evolution and shuffled frog leaping

2.1 Differential evolution

DE is a heuristic random parallel search algorithm based on swarm intelligence, which is mainly used to solve global optimization problems with continuous variables. Similar to other evolutionary algorithms, it mainly includes mutation, crossover and selection operations and performs the evolutionary search process through continuous iterative calculation. The five key operations of DE are introduced as follows.

2.1.1 Initialization operation

Assume that constrained numerical optimization problems (CNOPs) are defined as follows (Parouha and Verma 2021).
$$\min f\left( X \right)\quad X \in \left[ {X_{\min } ,X_{\max } } \right],\quad X = \left( {X_{1} ,X_{2} , \ldots ,X_{n} } \right)$$
(1)
At the initial evolutionary number \(t = 0\), the individuals \({\varvec{X}}_{{{\varvec{i}},{\varvec{t}}}} = \left( {x_{i,t}^{1} ,\;x_{i,t}^{2} , \ldots ,\;x_{i,t}^{j} ,\; \ldots ,x_{i,t}^{D} } \right)\) of the initial population of DE are generated by Eq. (2): each is randomly generated according to the lower bound \({\varvec{X}}_{{{\varvec{min}}}} = \left\{ {x_{min}^{1} ,x_{min}^{2} , \ldots ,x_{min}^{n} } \right\}\) and upper bound \({\varvec{X}}_{{{\varvec{max}}}} = \left\{ {x_{max}^{1} ,x_{max}^{2} , \ldots ,x_{max}^{n} } \right\}\) of the optimization problem and is evenly distributed in the feasible solution space (Das and Suganthan 2011). The dimensionality \(D\) of the individual variables equals the dimension \(n\) of the decision variable \({\varvec{X}}\) in the objective function, and the number of individuals in the population is \(NP\) (Fan et al. 2019).
$$x_{i,t}^{j} = x_{min}^{j} + {\text{rand}}\left( {0,1} \right)\left( {x_{max}^{j} - x_{min}^{j} } \right)\quad j = 1,2, \ldots ,D;\;\;i = 1,2, \ldots ,NP$$
(2)
where \(x_{i,t}^{j}\) is the value of the \(j^{th}\) dimension of the \(i^{{{\text{th}}}}\) individual in the \(t^{th}\) generation, and \(\mathrm{rand}\left(\mathrm{0,1}\right)\) represents a random variable uniformly distributed in the range \(\left[\mathrm{0,1}\right]\).
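As a concrete illustration, the initialization of Eq. (2) takes only a few lines. The sketch below uses Python with NumPy (the paper's experiments were run in MATLAB); the function name and argument layout are illustrative assumptions, not the authors' code.

```python
import numpy as np

def initialize_population(NP, x_min, x_max, rng=None):
    """Minimal sketch of Eq. (2): NP individuals drawn uniformly in [x_min, x_max]."""
    rng = np.random.default_rng() if rng is None else rng
    x_min = np.asarray(x_min, dtype=float)
    x_max = np.asarray(x_max, dtype=float)
    # rand(0,1) is drawn independently for every dimension of every individual
    return x_min + rng.random((NP, x_min.size)) * (x_max - x_min)
```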

2.1.2 Mutation operation

In the current \({t}^{th}\) evolutionary number, the \({i}^{\text{th}}\) individual \({{\varvec{x}}}_{{\varvec{i}},{\varvec{t}}}\) in the parent population is also called the target individual vector, and its corresponding mutant individual vector \({{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}+1}=\left({v}_{i,t+1}^{1},\hspace{0.33em}{v}_{i,t+1}^{2},\dots ,{v}_{i,t+1}^{D}\right)\) is generated by the mutation operator. The most basic mutation strategy “DE/rand/1” is shown in Eq. (3), which is widely used, simple and robust (Zhang and Duan 2015).
$${\varvec{V}}_{{{\varvec{i}},{\varvec{t}} + 1}} = {\varvec{X}}_{{{\varvec{r}}1,{\varvec{t}}}} + F \cdot \left( {{\varvec{X}}_{{{\varvec{r}}2,{\varvec{t}}}} - {\varvec{X}}_{{{\varvec{r}}3,{\varvec{t}}}} } \right)$$
(3)
where \(r_{1} ,\;r_{2} , r_{3} \in \left\{ {1,2, \ldots ,NP} \right\}\) are randomly selected positive integers that are different from each other, representing the indexes of three randomly selected different individuals in the parent population, and \({r}_{1}, {r}_{2}, {r}_{3}\) are also different from the current target individual vector index \(i\). \({{\varvec{X}}}_{{\varvec{r}}1,{\varvec{t}}}\) is called the basis vector, and \({{\varvec{X}}}_{{\varvec{r}}2,{\varvec{t}}}-{{\varvec{X}}}_{{\varvec{r}}3,{\varvec{t}}}\) is called the difference vector. \(F\) is the mutation factor or scaling factor, one of the main control parameters of the DE algorithm. It usually takes a value in the range \(\left[0,2\right]\), controls the scaling of the difference vector, and determines the degree of influence on the basis vector.
Some well-known mutation strategies in the literature are summarized as follows (Zhu et al. 2020):
“DE/rand/2”
$${\varvec{V}}_{{{\varvec{i}},{\varvec{t}} + 1}} = {\varvec{X}}_{{{\varvec{r}}1,{\varvec{t}}}} + F \cdot \left( {{\varvec{X}}_{{{\varvec{r}}2,{\varvec{t}}}} - {\varvec{X}}_{{{\varvec{r}}3,{\varvec{t}}}} } \right) + F \cdot \left( {{\varvec{X}}_{{{\varvec{r}}4,{\varvec{t}}}} - {\varvec{X}}_{{{\varvec{r}}5,{\varvec{t}}}} } \right)$$
(4)
“DE/best/1”
$${\varvec{V}}_{{{\varvec{i}},{\varvec{t}} + 1}} = {\varvec{X}}_{{{\varvec{best}},{\varvec{t}}}} + F \cdot \left( {{\varvec{X}}_{{{\varvec{r}}1,{\varvec{t}}}} - {\varvec{X}}_{{{\varvec{r}}2,{\varvec{t}}}} } \right)$$
(5)
“DE/best/2”
$${\varvec{V}}_{{{\varvec{i}},{\varvec{t}} + 1}} = {\varvec{X}}_{{{\varvec{best}},{\varvec{t}}}} + F \cdot \left( {{\varvec{X}}_{{{\varvec{r}}1,{\varvec{t}}}} - {\varvec{X}}_{{{\varvec{r}}2,{\varvec{t}}}} } \right) + F \cdot \left( {{\varvec{X}}_{{{\varvec{r}}3,{\varvec{t}}}} - {\varvec{X}}_{{{\varvec{r}}4,{\varvec{t}}}} } \right)$$
(6)
“DE/current-to-best/1”
$${\varvec{V}}_{{{\varvec{i}},{\varvec{t}} + 1}} = {\varvec{X}}_{{{\varvec{i}},{\varvec{t}}}} + F \cdot \left( {{\varvec{X}}_{{{\varvec{best}},{\varvec{t}}}} - {\varvec{X}}_{{{\varvec{i}},{\varvec{t}}}} } \right) + F \cdot \left( {{\varvec{X}}_{{{\varvec{r}}1,{\varvec{t}}}} - {\varvec{X}}_{{{\varvec{r}}2,{\varvec{t}}}} } \right)$$
(7)
“DE/rand-to-best/1”
$${\varvec{V}}_{{{\varvec{i}},{\varvec{t}} + 1}} = {\varvec{X}}_{{{\varvec{r}}1,{\varvec{t}}}} + F \cdot \left( {{\varvec{X}}_{{{\varvec{best}},{\varvec{t}}}} - {\varvec{X}}_{{{\varvec{r}}1,{\varvec{t}}}} } \right) + F \cdot \left( {{\varvec{X}}_{{{\varvec{r}}2,{\varvec{t}}}} - {\varvec{X}}_{{{\varvec{r}}3,{\varvec{t}}}} } \right)$$
(8)
where the parameters \(r_{1} ,\;r_{2} ,\;r_{3} ,\;r_{4} ,\;r_{5}\) are positive integers in range \(\left[ {1,NP} \right]\), which are different from the index \(i\) and are not equal to each other. \({{\varvec{X}}}_{{\varvec{b}}{\varvec{e}}{\varvec{s}}{\varvec{t}},{\varvec{t}}}\) is the individual with the optimal fitness in the \({t}^{th}\) generation.
Generally, a mutant individual in DE is generated by adding a weighted difference vector between individuals of the parent population to a base vector, which is equivalent to adding a random disturbance to the base vector. In the early stage of evolution, the differences between individuals are large, so the difference vector disturbs the base vector strongly, ensuring strong exploration ability and global search ability. As the evolutionary number increases, the differences between individuals shrink, which strengthens the exploitation ability and local search ability of DE.
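Continuing the NumPy sketch above, the basic "DE/rand/1" mutation of Eq. (3) might look as follows; the strategies in Eqs. (4)-(8) differ only in how the basis and difference vectors are chosen.

```python
def mutate_rand_1(pop, i, F, rng):
    """'DE/rand/1' (Eq. 3): V_i = X_r1 + F * (X_r2 - X_r3),
    with r1, r2, r3 mutually distinct and different from i."""
    candidates = [r for r in range(len(pop)) if r != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])
```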

2.1.3 Detection operation

The out-of-bound detection operator follows the mutation operator and ensures that the value of each dimension of every mutant individual \({{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}+1}\) lies within the boundary constraints \(\left[{{\varvec{X}}}_{{\varvec{m}}{\varvec{i}}{\varvec{n}}},\hspace{0.33em}{{\varvec{X}}}_{{\varvec{m}}{\varvec{a}}{\varvec{x}}}\right]\) of the CNOPs [see Eq. (1)]. If the value of the \({j}^{th}\) dimension of \({{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}+1}\) exceeds the range, it is corrected by the simple and effective Eq. (9), that is, that dimension is randomly regenerated as in the initialization operation (see Sect. 2.1.1) (Fan et al. 2019).
$$v_{i,t+1}^{j} = x_{\min }^{j} + {\text{rand}}\left( {0,1} \right) \cdot \left( {x_{\max }^{j} - x_{\min }^{j} } \right)\quad {\text{if}}\;\;v_{i,t+1}^{j} < x_{\min }^{j} \;\;{\text{or}}\;\;v_{i,t+1}^{j} > x_{\max }^{j}$$
(9)
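The detection operation of Eq. (9) then reduces to re-drawing the violating dimensions, as in this sketch (reusing the conventions above):

```python
def repair_bounds(v, x_min, x_max, rng):
    """Eq. (9): every out-of-range dimension is re-drawn uniformly,
    exactly as in the initialization step."""
    x_min, x_max = np.asarray(x_min, float), np.asarray(x_max, float)
    out = (v < x_min) | (v > x_max)      # boolean mask of violating dimensions
    v = v.copy()
    v[out] = x_min[out] + rng.random(int(out.sum())) * (x_max[out] - x_min[out])
    return v
```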

2.1.4 Crossover operation

After the detection operation, to further increase the diversity of the population, the target vector individual \({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\) and its corresponding mutant individual \({{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}+1}\) need to be crossed to generate trial individual \({{\varvec{U}}}_{{\varvec{i}},{\varvec{t}}+1}=\left({u}_{i,t+1}^{1},\hspace{0.33em}{u}_{i,t+1}^{2},\dots ,{u}_{i,t+1}^{D}\right)\). The DE algorithm generally employs the following two crossover methods (Zhu et al. 2020).
2.1.4.1 Binomial crossover
In binomial crossover, to make the target individual \({X}_{i,t}\) evolve, at least one dimension of the trial individual \({{\varvec{U}}}_{{\varvec{i}},{\varvec{t}}+1}\) must be taken from the mutant individual \({{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}+1}\), while the components of the other dimensions are determined randomly through the parameter \(CR\).
$$u_{i,t+1}^{j}=\left\{\begin{array}{ll} v_{i,t+1}^{j}, & \mathrm{if}\; r_{j}\le CR\;\mathrm{or}\;j=j_{rand}\\ x_{i,t}^{j}, & \mathrm{otherwise}\end{array}\right.\quad j=1,2,\ldots,D$$
(10)
where \({r}_{j}\) is a random number uniformly generated by \(\mathrm{rand}()\) in the range \(\left[0,1\right]\), \({j}_{rand}\) is an integer randomly selected from the range \(\left[1,D\right]\), and \(CR\) is the crossover probability factor.
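A minimal sketch of the binomial crossover of Eq. (10), continuing the conventions above:

```python
def binomial_crossover(x, v, CR, rng):
    """Eq. (10): the j_rand dimension is forced to come from the mutant,
    so the trial vector always differs from the target in at least one gene."""
    D = x.size
    mask = rng.random(D) <= CR       # r_j <= CR, drawn per dimension
    mask[rng.integers(D)] = True     # j = j_rand always taken from v
    return np.where(mask, v, x)
```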
2.1.4.2 Exponential crossover
$$u_{i,t+1}^{j}=\left\{\begin{array}{ll} v_{i,t+1}^{j}, & j=\langle J\rangle_{D},\langle J+1\rangle_{D},\ldots,\langle J+L-1\rangle_{D}\\ x_{i,t}^{j}, & \mathrm{otherwise}\end{array}\right.$$
where the block length \(L\) is drawn as: \(L=1\), \(\beta=\mathrm{rand}\left(0,1\right)\); while \(\beta\le CR\) and \(L<D\): \(L=L+1\), \(\beta=\mathrm{rand}\left(0,1\right)\).
(11)
Equation (11) above describes exponential crossover. \(J=\mathrm{rand}\left(1,D\right)\) is the randomly chosen starting dimension: the components from the \({J}^{th}\) to the \({(J+L-1)}^{th}\) dimension of the trial individual \({{\varvec{U}}}_{{\varvec{i}},{\varvec{t}}+1}\) are taken from the mutant individual \({{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}+1}\), while the components of the other dimensions come from the target vector individual \({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\). The angular brackets \({\langle \cdot \rangle}_{D}\) denote the modulo function with modulus \(D\), and the block length \(L\) is determined by \(CR\).
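Likewise, the exponential crossover of Eq. (11) can be sketched as follows; the wrap-around indexing implements the \(\langle\cdot\rangle_{D}\) modulo operator:

```python
def exponential_crossover(x, v, CR, rng):
    """Eq. (11): copy a contiguous block of L dimensions from the mutant,
    starting at a random dimension J and wrapping modulo D."""
    D = x.size
    J = rng.integers(D)                  # random starting dimension
    L = 1
    while rng.random() <= CR and L < D:  # block length grows with probability CR
        L += 1
    u = x.copy()
    idx = (J + np.arange(L)) % D         # <J>_D, ..., <J+L-1>_D
    u[idx] = v[idx]
    return u
```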

2.1.5 Selection operation

After the crossover operation, DE uses a "greedy" strategy to select between the trial individual \({{\varvec{U}}}_{{\varvec{i}},{\varvec{t}}+1}\) and the target vector individual \({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\). Their objective function values under the constraint [see Eq. (1)], also called fitness values, are compared: the individual with the smaller value is selected as a parent of the next generation, while the other is eliminated, as shown in Eq. (12) (Brest et al. 2006).
$${{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}+1}=\left\{\begin{array}{l}{{\varvec{U}}}_{{\varvec{i}},{\varvec{t}}+1}, \mathrm{f}\left({{\varvec{U}}}_{{\varvec{i}},{\varvec{t}}+1}\right)<\mathrm{f}\left({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)\\ {{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}, \mathrm{f}\left({{\varvec{U}}}_{{\varvec{i}},{\varvec{t}}+1}\right)\ge \mathrm{f}\left({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)\end{array}\right.$$
(12)
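For a minimization problem, the greedy selection of Eq. (12) is a one-line comparison, sketched here in the same style as the helpers above:

```python
def select(x, u, f):
    """Eq. (12): keep the trial vector u only if it improves the fitness f."""
    return u if f(u) < f(x) else x
```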

2.2 Shuffled frog leaping

The shuffled frog-leaping algorithm (SFLA) is a heuristic random search algorithm based on swarm intelligence that simulates the information sharing and communication mechanism of a frog swarm during foraging. The algorithm was first proposed by Eusuff and Lansey (2003); it combines the advantages of the memetic algorithm (MA) (Moscato 1989) and the particle swarm optimization (PSO) algorithm (Kennedy and Eberhart 1995) and has the characteristics of few parameter settings, fast solving speed, and strong global optimization ability (Rahimi-Vahed and Mirzaei 2007). SFLA involves the following three key operations (Santander-Jimenez et al. 2018).

2.2.1 Initialization operation

Initialize the frog population as \({\varvec{F}}=\left({{\varvec{X}}}_{1},\hspace{0.33em}{{\varvec{X}}}_{2},\dots ,{{\varvec{X}}}_{{\varvec{N}}{\varvec{P}}}\right)\) with \(NP\) individuals in total [see Eq. (2)], and each individual is a randomly generated candidate solution. The \({i}^{th}\) individual is \({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}=\left({x}_{i,t}^{1},\hspace{0.33em}{x}_{i,t}^{2},\dots ,\hspace{0.33em}{x}_{i,t}^{j},\hspace{0.33em}\dots ,{x}_{i,t}^{D}\right)\), \(1\le i\le NP\), where \(t\) represents the evolutionary number of each meme group, and \(D\) represents the dimension of the solution.

2.2.2 Grouping operation

The grouping operation divides the population into \(M\) equally sized subpopulations. First, the \(NP\) candidate solutions are sorted according to the fitness value calculated by Eq. (1); then the first candidate solution is assigned to the first subpopulation, the second candidate solution to the second subpopulation, and so on. The assignment is computed from the remainder according to Eq. (13), so that the \(NP\) candidate solutions are allocated to the \(M\) subpopulations, where \(s\) denotes the subpopulation to which the \({i}^{\text{th}}\) candidate solution is assigned (Hsu and Yang 2020).
$$s = i\,\%\,M,\quad i = 1,2,\ldots,NP$$
(13)
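Continuing the Python sketch from Sect. 2.1, the grouping of Eq. (13) can be written as a round-robin slice over the sorted population; returning index arrays (an implementation choice, not prescribed by the paper) makes the later merge step trivial:

```python
def group_population(fitness, M):
    """Eq. (13): deal the fitness-sorted solutions to M memeplexes
    round-robin (s = i % M), so each memeplex spans the fitness spectrum."""
    order = np.argsort(fitness)              # best first (smallest f, minimization)
    return [order[s::M] for s in range(M)]   # M index arrays, one per memeplex
```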

2.2.3 Local update operation

\({\varvec{F}}{\varvec{g}}\) represents the best candidate solution (smallest fitness value) found by the population over all previous generations, \({\varvec{F}}{{\varvec{b}}}_{{\varvec{t}}}^{{\varvec{s}}}\) represents the best candidate solution in the \({s}^{th}\) subpopulation in the \({t}^{th}\) evolutionary generation, and \({\varvec{F}}{{\varvec{w}}}_{{\varvec{t}}}^{{\varvec{s}}}\) represents the worst candidate solution (largest fitness value) in the \({s}^{th}\) subpopulation in the \({t}^{th}\) evolutionary generation. In each local iterative calculation of the \({s}^{th}\) subpopulation, \({\varvec{F}}{{\varvec{w}}}_{{\varvec{t}}}^{{\varvec{s}}}\) is updated according to Eq. (14) (Wang et al. 2019). If \({\varvec{F}}{{\varvec{w}}}^{^{\prime}}\) is better than \({\varvec{F}}{{\varvec{w}}}_{{\varvec{t}}}^{{\varvec{s}}}\), that is, the fitness value of \({\varvec{F}}{{\varvec{w}}}^{^{\prime}}\) is smaller, then \({\varvec{F}}{{\varvec{w}}}^{^{\prime}}\) replaces the current \({\varvec{F}}{{\varvec{w}}}_{{\varvec{t}}}^{{\varvec{s}}}\); otherwise, \({\varvec{F}}{{\varvec{b}}}_{{\varvec{t}}}^{{\varvec{s}}}\) in Eq. (14) is replaced with \({\varvec{F}}{\varvec{g}}\), and \({\varvec{F}}{{\varvec{w}}}^{^{\prime}}\) is recalculated and compared again. If \({\varvec{F}}{{\varvec{w}}}^{^{\prime}}\) is still not better than \({\varvec{F}}{{\varvec{w}}}_{{\varvec{t}}}^{{\varvec{s}}}\), a random solution generated by Eq. (2) replaces \({\varvec{F}}{{\varvec{w}}}_{{\varvec{t}}}^{{\varvec{s}}}\).
$${\varvec{F}}{{\varvec{w}}}^{^{\prime}}={\varvec{F}}{{\varvec{w}}}_{{\varvec{t}}}^{{\varvec{s}}}+\mathrm{rand}\left(\mathrm{0,1}\right)\cdot \left({\varvec{F}}{{\varvec{b}}}_{{\varvec{t}}}^{{\varvec{s}}}-{\varvec{F}}{{\varvec{w}}}_{{\varvec{t}}}^{{\varvec{s}}}\right)$$
(14)
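A sketch of the resulting three-stage local update built around Eq. (14), in the same style as the earlier helpers:

```python
def local_update(Fw, Fb, Fg, f, x_min, x_max, rng):
    """SFLA worst-frog update: jump toward the memeplex best Fb (Eq. 14),
    then toward the global best Fg, else restart with a random solution."""
    for leader in (Fb, Fg):
        Fw_new = Fw + rng.random() * (leader - Fw)   # Eq. (14)
        if f(Fw_new) < f(Fw):
            return Fw_new
    # neither move improved Fw: replace it via the initialization of Eq. (2)
    x_min, x_max = np.asarray(x_min, float), np.asarray(x_max, float)
    return x_min + rng.random(Fw.size) * (x_max - x_min)
```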
3 Related work

Different operator designs and control parameter schemes have a great impact on the performance of DE in solving problems. Current improvements to DE can be summarized into the following three types (Li et al. 2020b).

3.1 Operation design

The operations are the core of DE, and many algorithms focus on improving them. In TDE, (Fan and Lampinen 2003) proposed a trigonometric mutation strategy. It limits the trial individual to a triangular area to improve the efficiency of the mutation operation and has achieved better results in neural network learning. In JADE, (Zhang and Sanderson 2009) adopted a new difference method, “DE/current-to-pbest/1”, which randomly selects one of the top p% excellent individuals to participate in the mutation, increasing the diversity of the population. In SspDE, (Pan et al. 2011) used a strategy pool containing four mutation strategies and assigned a mutation strategy and control parameter values to each individual; after several generations of evolution, the mutation strategies and control parameter values corresponding to better individuals are redistributed to each individual with a larger probability to improve the speed of convergence. In DMPSADE, (Fan and Yan 2015) used a strategy pool containing five mutation strategies, which determines the mutation strategy of each individual according to the cumulative probability of each strategy and the roulette algorithm. In MPEDE, (Wu et al. 2016) proposed an ensemble DE that uses three different mutation strategies to form multiple subpopulations that evolve simultaneously. In SpDE, (Pan et al. 2018) divided the entire population into three subpopulations and proposed a new two-stage mutation strategy in which the differences between individuals are considered during evolution, achieving a better balance between exploration and exploitation. In HMCFQDE, (Deng et al. 2021) designed a new hybrid mutation strategy based on the advantages of local neighborhood mutation, which greatly alleviates the low efficiency of the original DE algorithm, its insufficient diversity in the later stage, and its slow convergence speed.

3.2 Control parameter setting

Before solving a specific problem, the relevant control parameters of DE must be determined. (Storn and Price 1997) suggested that a population size \(NP\in \left[3,10\right]\) and a mutation factor \(F\in \left[0.5,1.0\right]\) usually constitute an appropriate parameter scheme. (Gämperle et al. 2002) showed that \(NP\in \left[3,8\right]\), \(F=0.6\), and \(CR\in \left[0.3,0.9\right]\) were better parameter schemes for initializing the population. In fact, there is no fixed parameter setting that works well for all optimization problems; different parameter schemes should be adopted for different problems (Elsayed et al. 2013). Therefore, finding a suitable parameter scheme has attracted the attention of many researchers. In FADE, (Liu and Lampinen 2005) proposed an adaptive adjustment scheme that uses a fuzzy logic controller to continuously adjust the mutation factor \(F\) and crossover probability \(CR\) through multiple iteration cycles over the population individuals and their fitness values. In jDE, (Brest et al. 2006) encoded the control parameters \(F\) and \(CR\) into the population individuals and adjusted them adaptively in the continuous iterative calculation. In SaDE, (Qin et al. 2009b) showed that new individuals and parameter values of the next generation can be adaptively generated by learning from outstanding individuals of the past and their corresponding control parameter values. In CoDE, (Wang et al. 2011) created a parameter pool containing three control parameter schemes; by comparing the results of randomly combining three independent trial individuals with the parameter schemes, the optimal scheme and its parameters are determined for the next generation of evolution. In MDE, (Zou et al. 2013) used a Gaussian distribution and a uniform distribution to adjust the values of the mutation factor \(F\) and crossover probability \(CR\), respectively, whereas in DMPSADE, (Fan and Yan 2015) used two normal distributions to adaptively adjust the control parameter values and achieved better results. In SAMDE, (Zhu et al. 2020) proposed a new mutation strategy, "DE/rand assembly/1", in which the basis vector is a proportional combination of three randomly selected individuals. During evolution, the entire population is randomly divided into three equal-sized subpopulations, each of which adopts its own mutation strategy and evolves independently; adaptive control parameters are generated according to the evolutionary number. After each generation, the three subpopulations are mixed and regrouped. Compared with other enhanced DE algorithms, SAMDE is strongly competitive.

3.3 Hybrid strategy

Currently, hybridizing DE with other swarm intelligence optimization algorithms has become one of the most effective ways to enhance its optimization performance. In EDA, (Sun et al. 2005) combined the DE algorithm with the estimation of distribution algorithm to better solve global optimization problems in continuous space. In AnDE, (Das et al. 2007) merged the DE algorithm with the simulated annealing algorithm and proposed new individual selection conditions and mutation strategies; the experimental results show that the algorithm outperforms the traditional DE algorithm. In ESADE, (Guo et al. 2014) used the parameter setting method and mutation strategy of JADE (Zhang and Sanderson 2009) and integrated the simulated annealing algorithm into the selection operator, greatly enhancing the global search capability. In DEMPSO, (Mao et al. 2017) first executed DE to obtain the local optimal solutions and the individual optimal solutions and then used this information to find the global optimal solution through the iterative loop of the particle swarm optimization algorithm. In HyGADE, (Chaudhary et al. 2019) merged DE with an improved genetic algorithm based on a three-parent crossover; compared with the basic genetic algorithm, the DE algorithm combined with the improved crossover operation obtains a better global optimal solution. In SFDE, (Wang et al. 2019) combined DE with the basic shuffled frog-leaping algorithm, which improved the diversity of the population and the speed of global search throughout the dynamic evolution process. (Parouha and Verma 2021) designed an advanced hybrid algorithm (haDEPSO) that integrates advanced DE and PSO variants; their different convergence characteristics provide complementary approximations of the solution space.
Table 1 summarizes in detail the algorithms involved in the three types of improvement, together with the SFSADE proposed in this paper, across a total of 11 aspects: the name of the algorithm, the time of creation, the mutation strategy (number of mutation schemes and number of individuals required), whether the control parameters are adaptive, whether the population is divided into subpopulations, whether the algorithm is integrated with other algorithms, the core idea, the test functions (number and highest dimension), the performance evaluation indicators (iteration time, function value, mean, standard deviation, median, optimal value, worst value) and the statistical test type. In summary, many enhanced DE algorithms have studied improved mutation strategies and self-adaptive adjustment mechanisms for the control parameters. Consequently, determining how to develop a better hybrid strategy to improve the performance of DE is a pivotal concern of current research. The SFSADE proposed in this paper adopts all three of the above improvement methods: it combines an enhanced DE based on a self-adaptive control parameter adjustment mechanism and a classification mutation strategy with the meme group idea of shuffled frog leaping, and achieves better results. Compared with the latest advanced DE variants, such as SAMDE (Zhu et al. 2020) and SFDE (Wang et al. 2019), SFSADE differs substantially and improves performance.
Table 1
Comparison of improved DE algorithms

Improvement type | Algorithm name | Creation time | Mutation strategy | Control parameter | Population division | Algorithm hybrid | Core idea | Test functions | Performance metrics | Statistical test
Operation design | TDE | 2003 | [One, Three] | Fixed | Single | None | Trigonometric mutation | [2, 30D] | [Time, Value] | None
Operation design | JADE | 2009 | [One, Four] | Dynamic | Single | None | DE/current-to-pbest | [20, 100D] | [Mean, Std] | None
Operation design | SspDE | 2011 | [Four, Five] | Dynamic | Single | None | Trial vector generation | [19, 100D] | [Median] | None
Operation design | DMPSADE | 2015 | [Five, Five] | Dynamic | Single | None | Discrete mutation | [25, 50D] | [Mean, Std] | Kruskal–Wallis
Operation design | MPEDE | 2016 | [Three, Three] | Dynamic | Grouped | None | Multi-population ensemble | [25, 50D] | [Mean, Std] | Wilcoxon rank sum
Operation design | SpDE | 2018 | [Two, Three] | Fixed | Grouped | None | Two-phase mutation | [2, 200D] | [Value] | None
Operation design | HMCFQDE | 2021 | [Two, Two] | Fixed | Grouped | QEA / CCEA | Hybrid mutation | [6, 1000D] | [Mean, Std, Best, Worst] | None
Control parameter setting | FADE | 2005 | [Nine, Four] | Dynamic | Single | Mamdani's fuzzy inference | Fuzzy logic controller | [6, 20D] | [Value] | None
Control parameter setting | jDE | 2006 | [One, Three] | Dynamic | Single | None | Self-adapting parameters | [21, 30D] | [Mean, Std] | None
Control parameter setting | SaDE | 2009 | [Two, Four] | Dynamic | Single | None | Self-adaptation by learning | [26, 30D] | [Mean, Std, Rate] | None
Control parameter setting | CoDE | 2011 | [Three, Four] | Dynamic | Single | None | Trial vector generation | [25, 30D] | [Mean, Std] | None
Control parameter setting | MDE | 2013 | [Two, Four] | Dynamic | Single | None | Adjustment by two distributions | [25, 50D] | [Mean, Std] | Wilcoxon rank sum
Control parameter setting | SAMDE | 2020 | [Three, Four] | Dynamic | Grouped | None | Parameter adaptation | [25, 50D] | [Mean, Std] | Multiple sign
Hybrid strategy | EDE | 2005 | [One, Three] | Fixed | Single | EDA | Probability model | [5, 10D] | [Mean, Best] | None
Hybrid strategy | AnDE | 2007 | [One, Three] | Dynamic | Single | SA | Conditional acceptance function | [6, 100D] | [Mean, Std] | Unpaired t
Hybrid strategy | ESADE | 2014 | [Two, Three] | Dynamic | Single | SA | Parameter generation | [17, 30D] | [Mean, Std, Median] | Wilcoxon signed rank and Friedman
Hybrid strategy | DEMPSO | 2017 | [One, Three] | Fixed | Single | PSO | Parameter identification | [3, 10D] | [Value] | None
Hybrid strategy | HyGADE | 2019 | [One, Three] | Fixed | Single | EA | Hybrid mutation | [46, 30D] | [Value] | None
Hybrid strategy | SFDE | 2019 | [One, Three] | Dynamic | Grouped | SFLA | Subgroup sharing | [20, 30D] | [Mean, Std] | None
Hybrid strategy | haDEPSO | 2021 | [Two, Two] | Fixed | Grouped | PSO | Advanced hybrid | [30, 30D] | [Mean, Std, Rate] | One-tailed t and Wilcoxon signed rank
Ours | SFSADE | 2021 | [Three, Three] | Dynamic | Grouped | SFLA | Adaptive mutation and parameters | [25, 50D] | [Mean, Std] | Wilcoxon signed rank and Friedman

4 SFSADE

For the DE algorithm, the most critical problem is how to balance the relationship between "exploration" and "exploitation" (Deng et al. 2020a). Exploration means that the population is more inclined to increase diversity, so that individuals expand the search range in the solution space, improving the search performance and reliability of the algorithm. Exploitation means that the population is more inclined to make effective use of better individuals and to search for better individuals within a local range. To achieve a good balance between them, our proposed SFSADE adopts a variety of strategies to improve DE and innovatively incorporates the idea of the shuffled frog-leaping algorithm. The flow chart of SFSADE is shown in Fig. 1.
As shown in the figure, the execution process of SFSADE is based on the two algorithms DE and SFLA. After initializing the parameters and population, the algorithm first groups the population according to the fitness values of the individuals, following SFLA; it then uses DE as the core to perform the improved mutation, crossover, and selection operations on each individual within each group; finally, the groups are merged, and the loop continues until the termination condition is met. Through these improvements, SFSADE improves the search efficiency of the population while maintaining its diversity to avoid premature convergence. The specific improvement strategies are described in the following four subsections.

4.1 Shuffled frog-leaping strategy

Most familiar swarm intelligence optimization algorithms, such as genetic algorithms (Sampson 1976), ant colony algorithms (Colorni et al. 1991), and particle swarm algorithms (Kennedy and Eberhart 1995), are, like the basic DE algorithm, not based on grouping operations: they search and update over the entire population, and their execution efficiency is low. In contrast, the SFSADE proposed in this paper introduces the shuffled frog-leaping algorithm (Eusuff and Lansey 2003). It divides the population into several small meme groups through a grouping operation, and each meme group evolves independently in each iteration, which improves the execution speed of the algorithm. After each iteration, the meme groups are fused again into a new population to exchange information, which is conducive to searching for the global optimal solution.
In this work, we first sort the population according to the fitness values of all individuals and divide it into a specified number of equally sized subpopulations, which expands the diversity of the population and improves the overall search performance. Then, instead of updating only the worst individual, as in the traditional shuffled frog-leaping algorithm and other enhanced DE algorithms, every individual in each subpopulation is updated through the improved mutation, crossover and other operations, which helps the population find the global optimal solution. It is worth mentioning that after each iteration, the subpopulations are merged and then regrouped for the next generation according to the same process. Through this operation, the information of different subpopulations is exchanged and shared, which further enriches the diversity of the population, makes it difficult to fall into local extremes, and allows the global optimal solution to be found better and faster.

4.2 Classification mutation strategy

The mutation operation in DE is actually a process of local search, where the base vector is the center of the local search, and the difference vector represents the size of the local search range, including the search step and direction. Therefore, the diversity of the population often depends on the diversity of each local search center, namely, the basis vector (Bi and Xiao 2012). For the usual mutation form, such as "DE/rand/*", the basis vector is randomly selected from the population. Although it has good diversity and high search performance, the efficiency is relatively low. Another example is "DE/best/*", which uses the best individuals of the population as the basis vector. It can make full use of excellent individuals and has better search efficiency. However, the basis vectors of the entire population lack diversity and tend to converge prematurely (Cai et al. 2021).
In fact, for different basis vectors, the degree of adaptation of the corresponding individuals may vary greatly. For the optimization problem [see Eq. (1)], the smaller the fitness value is, the better the fitness, and vice versa. Therefore, different individuals should adopt different schemes for the direction and speed of their mutations to further improve the diversity and overall adaptability of the population and to search for a better global optimal solution. Based on the above analysis and related research, this paper adopts the following three types of mutation strategies for individuals in different fitness ranges, as shown in Eqs. (15)–(17) (Deng et al. 2021).
Strong individuals:\(\mathrm{f}\left({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)<{\mu }_{t,s}-2\cdot {\sigma }_{t,s}\)
$${{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}}={R}_{i,t}\cdot {{\varvec{X}}}_{{\varvec{r}},{\varvec{t}}}+{G}_{i,t}\cdot \left({\varvec{X}}{\varvec{g}}{\varvec{b}}{\varvec{e}}{\varvec{s}}{{\varvec{t}}}_{{\varvec{t}}}-{{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)$$
(15)
Ordinary individuals: \({\mu }_{t,s}-2\cdot {\sigma }_{t,s}<\mathrm{f}\left({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)<{\mu }_{t,s}+2\cdot {\sigma }_{t,s}\)
$${{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}}={R}_{i,t}\cdot {{\varvec{X}}}_{{\varvec{r}},{\varvec{t}}}+{S}_{i,t}\cdot \left({\varvec{X}}{\varvec{s}}{\varvec{b}}{\varvec{e}}{\varvec{s}}{{\varvec{t}}}_{{\varvec{t}}}-{{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)+{G}_{i,t}\cdot \left({\varvec{X}}{\varvec{g}}{\varvec{b}}{\varvec{e}}{\varvec{s}}{{\varvec{t}}}_{{\varvec{t}}}-{{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)$$
(16)
Poor individuals: \(\mathrm{f}\left({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)>{\mu }_{t,s}+2\cdot {\sigma }_{t,s}\)
$${{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}}={R}_{i,t}\cdot {{\varvec{X}}}_{{\varvec{r}},{\varvec{t}}}+{S}_{i,t}\cdot \left({\varvec{X}}{\varvec{s}}{\varvec{b}}{\varvec{e}}{\varvec{s}}{{\varvec{t}}}_{{\varvec{t}}}-{{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)$$
(17)
where \(\mathrm{f}\left({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)\) represents the fitness value of the \({t}^{th}\) generation individual \({{\varvec{X}}}_{{\varvec{i}}}\) under the constraint [see Eq. (1)], and \({\mu }_{t,s}\) and \({\sigma }_{t,s}\), respectively, represent the average and standard deviation of the fitness values of the \({s}^{th}\) meme group in the \({t}^{th}\) generation. \({{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}}\) represents the mutant individual corresponding to the current individual \({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\), \({{\varvec{X}}}_{{\varvec{r}},{\varvec{t}}}\) is an individual randomly selected from the population (different from \({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\)) that serves as the basis vector, \({\varvec{X}}{\varvec{s}}{\varvec{b}}{\varvec{e}}{\varvec{s}}{{\varvec{t}}}_{{\varvec{t}}}\) is the best individual in the current meme group of the \({t}^{th}\) generation, and \({\varvec{X}}{\varvec{g}}{\varvec{b}}{\varvec{e}}{\varvec{s}}{{\varvec{t}}}_{{\varvec{t}}}\) represents the global optimal individual of the \({t}^{th}\) generation. \({R}_{i,t}\), \({S}_{i,t}\), and \({G}_{i,t}\) are defined as the random mutation coefficient, grouping mutation coefficient, and global mutation coefficient, respectively.
According to Chebyshev's theorem (Chebyshev 1867), regardless of the data distribution, at least 75% of the data values lie within two standard deviations of the average. As shown in Fig. 2, approximately 12.5% of individuals \({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\) have a fitness value \(\mathrm{f}\left({{\varvec{X}}}_{{\varvec{i}},{\varvec{t}}}\right)\) less than \({\mu }_{t,s}-2\cdot {\sigma }_{t,s}\). We define them as strong individuals, indicating that they have strong adaptability. These individuals implement the mutation strategy of Eq. (15), which tries to retain the genes of the best individuals and improve local search capabilities. Similarly, approximately 12.5% of individuals have fitness values greater than \({\mu }_{t,s}+2\cdot {\sigma }_{t,s}\), and they are defined as poor individuals, indicating weak adaptability. These individuals implement the mutation strategy of Eq. (17), which promotes their learning from excellent individuals, enhances their exploration ability and global search ability, and accelerates convergence to the optimal solution. Approximately 75% of individuals have fitness values within the interval \(\left[{\mu }_{t,s}-2\cdot {\sigma }_{t,s},\hspace{0.33em}{\mu }_{t,s}+2\cdot {\sigma }_{t,s}\right]\), and we define them as ordinary individuals. They implement the mutation strategy of Eq. (16), which retains the best genes to learn from while also improving exploration ability and global search ability.
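Putting Eqs. (15)–(17) together, the classification mutation can be sketched as follows, continuing the NumPy conventions of Sect. 2; the argument layout is an illustrative assumption:

```python
def classify_and_mutate(pop, i, fit_i, mu, sigma, s_best, g_best, R, S, G, rng):
    """Eqs. (15)-(17): split individuals into strong / ordinary / poor using
    mu +/- 2*sigma of the current memeplex's fitness values."""
    x_i = pop[i]
    candidates = [r for r in range(len(pop)) if r != i]
    x_r = pop[rng.choice(candidates)]          # random basis vector, distinct from i
    if fit_i < mu - 2 * sigma:                 # strong individual, Eq. (15)
        return R * x_r + G * (g_best - x_i)
    if fit_i > mu + 2 * sigma:                 # poor individual, Eq. (17)
        return R * x_r + S * (s_best - x_i)
    # ordinary individual, Eq. (16): learn from both leaders
    return R * x_r + S * (s_best - x_i) + G * (g_best - x_i)
```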

4.3 Self-adaptive control parameters strategy

In the DE, the setting of control parameters and the adjustment schemes have an important influence on the performance of the algorithm. Inappropriate parameter settings may cause the algorithm to converge too early or too slowly, and it is impossible to search for a better global optimal solution (Deng et al. 2020b). In the earlier enhanced DE, fixed control parameter values were often used. Although this method is simple, it is usually difficult to determine the appropriate parameter values, and it is also not conducive to the performance of the algorithm. The current trend is to use a self-adaptive control parameter strategy, that is, to determine different parameter schemes according to the actual situation of each individual.
Based on the new classification mutation strategy proposed above, in our improved DE algorithm SFSADE, there are mainly four control parameters \({R}_{i,t}\), \({S}_{i,t}\), \({G}_{i,t}\) and \(C{R}_{i,t}\). The first three parameters control the mutation operation, and the last parameter controls the proportion of the mutation individual \({{\varvec{V}}}_{{\varvec{i}},{\varvec{t}}}\) in the trial individual \({{\varvec{U}}}_{{\varvec{i}},{\varvec{t}}}\). These parameters all affect the convergence speed and population diversity of the algorithm. The increase in \({R}_{i,t}\) or \(C{R}_{i,t}\) helps to increase the degree of random mutation of the individual, enhances the global exploration ability of the population, and maintains the diversity of the population. Increasing the parameters \({S}_{i,t}\) or \({G}_{i,t}\) helps to enhance the local search ability of the individual and the exploitation ability of the population.
In current enhanced DE variants, most adaptive parameter adjustment strategies are driven only by the evolutionary number of the algorithm. The strategy proposed in this paper additionally takes the degree of adaptation of different individuals into account, that is, it adaptively adjusts the control parameters of each individual according to both the evolutionary number and the individual's fitness value. See Eqs. (18)–(21) for details (Bi and Xiao 2012).
$${R}_{i,t}={R}_{min}+\left({R}_{max}-{R}_{min}\right)\cdot \left(\frac{1}{2}\cdot \left(2-{2}^\frac{t}{T}\right)+\frac{1}{2}\cdot \left(\frac{{f}_{i,t}-{f}_{min}}{{f}_{max}-{f}_{min}}\right)\right)$$
(18)
$${S}_{i,t}={S}_{min}+\left({S}_{max}-{S}_{min}\right)\cdot \left(\frac{1}{3}\cdot \left({2}^\frac{t}{T}-2\right)+\frac{2}{3}\cdot \left(\frac{{f}_{max}-{f}_{i,t}}{{f}_{max}-{f}_{min}}\right)\right)$$
(19)
$${G}_{i,t}={G}_{min}+\left({G}_{max}-{G}_{min}\right)\cdot \left(\frac{1}{3}\cdot \left({2}^\frac{t}{T}-2\right)+\frac{2}{3}\cdot \left(\frac{{f}_{max}-{f}_{i,t}}{{f}_{max}-{f}_{min}}\right)\right)$$
(20)
$$C{R}_{i,t}=C{R}_{min}+\left(C{R}_{max}-C{R}_{min}\right)\cdot \left(\frac{1}{2}\cdot \left(2-{2}^\frac{t}{T}\right)+\frac{1}{2}\cdot \left(\frac{{f}_{i,t}-{f}_{min}}{{f}_{max}-{f}_{min}}\right)\right)$$
(21)
where \({R}_{i,t}\in \left[{R}_{min},\hspace{0.33em}{R}_{max}\right]\), \({S}_{i,t}\in \left[{S}_{min},\hspace{0.33em}{S}_{max}\right]\), \({G}_{i,t}\in \left[{G}_{min},\hspace{0.33em}{G}_{max}\right]\), \(C{R}_{i,t}\in \left[C{R}_{min},\hspace{0.33em}C{R}_{max}\right]\), \(t\) is the current evolutionary number, \(T\) is the total evolutionary number, and \({f}_{min}\) and \({f}_{max}\) are the fitness values of the best and the worst individual of the group at evolutionary number \(t\), respectively.
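A direct transcription of Eqs. (18)–(21), as printed, might look as follows; the `bounds` dictionary is an assumed data layout, not the paper's notation:

```python
def adapt_parameters(f_i, f_min, f_max, t, T, bounds):
    """Eqs. (18)-(21): per-individual control parameters driven by both the
    evolutionary number t and the individual's relative fitness."""
    rel = (f_i - f_min) / (f_max - f_min + 1e-30)  # 0 for the best individual
    up = 2.0 - 2.0 ** (t / T)                      # decays from 1 toward 0
    down = 2.0 ** (t / T) - 2.0                    # the (2^(t/T) - 2) term, as printed
    lo, hi = bounds["R"];  R  = lo + (hi - lo) * (up / 2 + rel / 2)        # Eq. (18)
    lo, hi = bounds["S"];  S  = lo + (hi - lo) * (down / 3 + 2 * (1 - rel) / 3)  # Eq. (19)
    lo, hi = bounds["G"];  G  = lo + (hi - lo) * (down / 3 + 2 * (1 - rel) / 3)  # Eq. (20)
    lo, hi = bounds["CR"]; CR = lo + (hi - lo) * (up / 2 + rel / 2)        # Eq. (21)
    return R, S, G, CR
```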
Figure 3 shows how the four control parameters change during a test on Shifted Rosenbrock's function. The advantages of the proposed parameter self-adaptive adjustment mechanism can be analyzed from three aspects. The first is the influence of the evolutionary number on the control parameters. As the evolutionary number increases, enhancing the individual's global search ability and the population's exploration ability should gradually give way to increasing the individual's local search ability and the population's exploitation ability, so that the algorithm reaches the global optimal solution faster in the later stage. Therefore, the values of these four control parameters decrease as the number of iterations increases. The second is the influence of each individual's degree of adaptation on the control parameters. For individuals with poor fitness, the degree of mutation and crossover should be increased to push them in a better search direction. Finally, there is the weight relationship between the evolutionary number and the individual fitness value. Because \({R}_{i,t}\) and \(C{R}_{i,t}\) mainly determine the degree of random evolution of the individual, equal weights for the two terms are appropriate. \({S}_{i,t}\) and \({G}_{i,t}\) mainly determine the directional convergence of the individual, so it is more reasonable to set the influence of the individual fitness value to twice that of the evolutionary number.

4.4 The basic process of SFSADE

Through the three improvement strategies introduced above, we arrive at the detailed framework of SFSADE given in Algorithm 1. Generally speaking, the complexity of an optimization algorithm is divided into time complexity and space complexity (Guanghui et al. 2020). However, the time consumed by direct execution depends strongly on the test environment and is also affected by the scale of the data. In view of this, we analyze the time complexity from the code itself. It can be estimated by counting the number of basic operations that require constant time. Its usual functional form is \(Time_{n}=f\left(n\right)\), where \(n\) is the dimension of the problem, \(Time_{n}\) is the execution time of the \(n\)-dimensional algorithm, and \(\mathrm{f}(*)\) is a function that returns the execution time (Mirsadeghi and Khodayifar 2021).
[Algorithm 1: the pseudocode of SFSADE]
There are many types of time complexity: constant time (independent of the problem size), linear time (grows linearly with the problem size), polynomial time (grows with the problem dimension as a polynomial function), exponential time (grows exponentially with the problem dimension), factorial time (grows factorially with the problem size) and so on, corresponding respectively to \(O(1)\), \(O(n)\), \(O({n}^{a})\), \(O({b}^{n})\), \(O(n!)\) and so on. Most optimization algorithms, such as differential evolution, particle swarm optimization, and simulated annealing, have a time complexity of \(O(p(n+\mathrm{CF}))\), where \(p\) is the size of the population, \(n\) is the dimension of the problem, and \(\mathrm{CF}\) is the time complexity of the cost function (Mirsadeghi and Khodayifar 2021). Therefore, as shown in Algorithm 1, due to the additional nesting of outer loops, the time complexity of SFSADE is \(O(p({n}^{2}+\mathrm{CF}))\).
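Under the same assumptions as before, the overall flow of Algorithm 1 can be approximated by composing the helper sketches from the previous sections; this is an illustrative reconstruction, not the authors' reference implementation, and the default parameter ranges below are placeholders:

```python
def sfsade(f, x_min, x_max, NP=60, M=3, T=1000, bounds=None, seed=0):
    """High-level SFSADE sketch: group -> per-individual classification
    mutation / crossover / selection inside each memeplex -> merge -> repeat."""
    rng = np.random.default_rng(seed)
    x_min, x_max = np.asarray(x_min, float), np.asarray(x_max, float)
    bounds = bounds or {"R": (0.1, 1.0), "S": (0.5, 1.5),   # placeholder ranges
                        "G": (0.5, 1.5), "CR": (0.1, 0.9)}
    pop = initialize_population(NP, x_min, x_max, rng)
    fit = np.array([f(x) for x in pop])
    for t in range(T):
        for memeplex in group_population(fit, M):        # shuffled frog-leaping split
            mu, sigma = fit[memeplex].mean(), fit[memeplex].std()
            s_best = pop[memeplex[np.argmin(fit[memeplex])]]
            g_best = pop[np.argmin(fit)]
            for i in memeplex:
                R, S, G, CR = adapt_parameters(fit[i], fit.min(), fit.max(),
                                               t, T, bounds)
                v = classify_and_mutate(pop, i, fit[i], mu, sigma,
                                        s_best, g_best, R, S, G, rng)
                v = repair_bounds(v, x_min, x_max, rng)
                u = binomial_crossover(pop[i], v, CR, rng)
                fu = f(u)
                if fu < fit[i]:                          # greedy selection, Eq. (12)
                    pop[i], fit[i] = u, fu
        # memeplexes are index views into pop, so re-merging is implicit here
    best = np.argmin(fit)
    return pop[best], fit[best]
```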

5 Experimental study

To verify the effect of the above improvement measures and the performance of the improved SFSADE algorithm, we conducted a large number of simulation experiments. The experimental program was written and run in MATLAB R2016a on a 64-bit computer with an Intel(R) Core(TM) i7-10875H @ 2.3 GHz CPU, 16 GB RAM, and the Windows 10 operating system. The test functions selected for our experiments, the settings of the related parameters, and the improved DE variants used for comparison follow the literature proposing the SAMDE algorithm (Zhu et al. 2020).

5.1 Selection of test functions

To comprehensively evaluate the performance of our algorithm, we implemented and tested it on the 25 benchmark functions of CEC 2005 (Suganthan et al. 2005). As shown in Table 2 below, they include unimodal functions F1–F5, basic multimodal functions F6–F12, expanded multimodal functions F13–F14, and hybrid composition functions F15–F25. In fact, the number of local optima of most multimodal functions increases exponentially with the dimension of the problem variables, which makes multimodal functions among the most difficult optimization problems to solve (Bi and Xiao 2012).
Table 2
Benchmark functions
Function
Initialization range
Global optimum
F1: Shifted Sphere Function
\({f}_{1}(X)={\sum }_{i=1}^{D}{x}_{i}^{2}\)
\(-100\le {x}_{i}\le 100\)
\({f}_{1}(0)=0\)
F2: Shifted Schwefel’s Problem 1.2
\({f}_{2}(X)={{\sum }_{i=1}^{D}\left({\sum }_{j=1}^{i}{x}_{j}\right)}^{2}\)
\(-100\le {x}_{i}\le 100\)
\({f}_{2}(0)=0\)
F3: Shifted Rotated High Conditioned Elliptic Function
\({f}_{3}(X)={\sum }_{i=1}^{D}{\left(1{0}^{6}\right)}^{\frac{i-1}{D-1}}{x}_{i}^{2},\quad x={x}_{\text{initial}}\times M,\; M:\) orthogonal matrix.
\(-100\le {x}_{i}\le 100\)
\({f}_{3}(0)=0\)
F4: Shifted Schwefel’s Problem 1.2 with Noise in Fitness
\({f}_{4}(X)={{\sum }_{i=1}^{D}\left({\sum }_{j=1}^{i}{x}_{j}\right)}^{2}\times \left(1+0.4\left|N\left(\mathrm{0,1}\right)\right|\right)\)
\(-100\le {x}_{i}\le 100\)
\({f}_{4}(0)=0\)
F5: Schwefel’s Problem 2.6 with Global Optimum on Bounds
\({f}_{5}(X)={\text{max}}\left\{\left|{A}_{i}x-{B}_{i}\right|\right\}\)
  
\(A\) is a \(D\times D\) matrix, \({a}_{ij}\) are integer random numbers in the range \(\left[-500,500\right]\), \(\det\left(A\right)\ne 0\), \({A}_{i}\) is the \({i}^{th}\) row of \(A\), \({B}_{i}={A}_{i}\times o\), \(o\) is a \(D\times 1\) vector, \({o}_{i}\) are random numbers in the range \(\left[-100,100\right]\).
\(-100\le {x}_{i}\le 100\)
\({f}_{5}(0)=0\)
F6: Shifted Rosenbrock’s Function
\({f}_{6}(X)={\sum }_{i=1}^{D-1}\left(100{\left({x}_{i}^{2}-{x}_{i+1}\right)}^{2}+{\left({x}_{i}-1\right)}^{2}\right)\)
\(-100\le {x}_{i}\le 100\)
\({f}_{6}\left(1\right)=0\)
F7: Shifted Rotated Griewank’s Function without Bounds
\({f}_{7}(X)={\sum }_{i=1}^{D}\frac{{x}_{i}^{2}}{4000}-{\prod }_{i=1}^{D}\frac{{x}_{i}}{\sqrt{i}}+1\)
  
\(x={x}_{\text{initial}}\times M,\; M={M}^{\prime}\left(1+0.3\left|N\left(0,1\right)\right|\right),\; {M}^{\prime}:\) linear transformation matrix, condition number = 3.
\(-600\le {x}_{i}\le 600\)
\({f}_{7}(0)=0\)
F8: Shifted Rotated Ackley’s Function with Global Optimum on Bounds
\({f}_{8}(X)=-20\mathit{exp}\left(-0.2\sqrt{\frac{1}{D}{\sum }_{i=1}^{D}{x}_{i}^{2}}\right)-\mathit{exp}\left(\frac{1}{D}{\sum }_{i=1}^{D}\mathit{cos}\left(2\pi {x}_{i}\right)\right)+20+e\)
  
\(x={x}_{\text{initial}}\times M,\; M:\) linear transformation matrix, condition number = 100.
\(-32\le {x}_{i}\le 32\)
\({f}_{8}(0)=0\)
F9: Shifted Rastrigin’s Function
\({f}_{9}(X)={\sum }_{i=1}^{D}\left({x}_{i}^{2}-10\mathit{cos}\left(2\pi {x}_{i}\right)+10\right)\)
\(-5\le {x}_{i}\le 5\)
\({f}_{9}(0)=0\)
F10: Shifted Rotated Rastrigin’s Function
\({f}_{10}(X)={\sum }_{i=1}^{D}\left({x}_{i}^{2}-10\mathit{cos}\left(2\pi {x}_{i}\right)+10\right)\)
  
\(x={x}_{\text{initial}}\times M,\; M:\) linear transformation matrix, condition number = 2.
\(-5\le {x}_{i}\le 5\)
\({f}_{10}(0)=0\)
F11: Shifted Rotated Weierstrass Function
\({f}_{11}(X)={\sum }_{i=1}^{D}\left({\sum }_{k=0}^{{k}_{max}}[{a}^{k}\mathit{cos}(2\pi {b}^{k}({x}_{i}+0.5))]\right)-D{\sum }_{k=0}^{{k}_{max}}[{a}^{k}\mathit{cos}(2\pi {b}^{k}\times 0.5)]\)
  
\(a=0.5,\hspace{0.33em}b=3,\hspace{0.33em}{k}_{max}=20,\hspace{0.33em}x={x}_{\text{initial}}\times M,\)
  
\(M:linear\text{ transformation matrix, condition number=5.}\)
\(-0.5\le {x}_{i}\le 0.5\)
\({f}_{11}(0)=0\)
F12: Schwefel’s Problem 2.13
\(f_{12} \left( X \right) = \mathop \sum \limits_{i = 1}^{D} \left( {A_{i} - B_{i} \left( x \right)} \right)^{2}\)
  
\(A_{i} = \mathop \sum \limits_{j = 1}^{D} \left( {a_{ij} \sin \alpha_{j} + b_{ij} \cos \alpha_{j} } \right),\;B_{i} \left( x \right) = \mathop \sum \limits_{j = 1}^{D} \left( {a_{ij} \sin x_{j} + b_{ij} \cos x_{j} } \right),\;for\;i = 1,2, \ldots ,D\)
  
\(A,\;B\) are two \(D\times D\) matrices whose entries \(a_{ij},\;b_{ij}\) are integer random numbers in the range \(\left[-100,100\right]\);
 
\(\alpha =\left[{\alpha }_{1},{\alpha }_{2},\ldots,{\alpha }_{D}\right]\), where the \({\alpha }_{j}\) are random numbers in the range \(\left[-\pi ,\pi \right]\).
\(- \pi \le x_{i} \le \pi\)
\(f_{12} \left( \alpha \right) = 0\)
F13: Shifted Extended Griewank’s plus Rosenbrock’s Function (F8F2)
\(f_{13} \left( X \right) = F8\left( {F2\left( {x_{1} ,x_{2} } \right)} \right) + F8\left( {F2\left( {x_{2} ,x_{3} } \right)} \right) + \ldots + F8\left( {F2\left( {x_{D - 1} ,x_{D} } \right)} \right) + F8\left( {F2\left( {x_{D} ,x_{1} } \right)} \right)\)
\(- 3 \le x_{i} \le 1\)
\(f_{13} \left( 0 \right) = 0\)
F14: Shifted Rotated Expanded Scaffer’s F6
\(f_{14} \left( X \right) = F\left( {x_{1} ,x_{2} } \right) + F\left( {x_{2} ,x_{3} } \right) + \ldots + F\left( {x_{D - 1} ,x_{D} } \right) + F\left( {x_{D} ,x_{1} } \right)\)
  
\(F\left( {x,y} \right) = 0.5 + \frac{{\left( {\sin^{2} \left( {\sqrt {x^{2} + y^{2} } } \right) - 0.5} \right)}}{{\left( {1 + 0.001\left( {x^{2} + y^{2} } \right)} \right)^{2} }}\)
  
\(x = x_{{{\text{initial}}}} \times M,\;M:\;linear\;transformation\;matrix,\;condition\;number = 3.\)
\(- 100 \le x_{i} \le 100\)
\(f_{14} \left( 0 \right) = 0\)
F15: Hybrid Composition Function
\(f_{1 - 2} \left( x \right) = Rastrigin^{\prime}s\;Function\)
  
\(f_{3 - 4} \left( x \right) = Weierstrass\;Function\)
  
\(f_{5 - 6} \left( x \right) = Griewank^{\prime}s\;Function\)
  
\(f_{7 - 8} \left( x \right) = Ackley^{\prime}s\;Function\)
  
\(f_{9 - 10} \left( x \right) = Sphere\;Function\)
  
\(\sigma_{i} = 1\;for\;i = 1,2, \ldots ,D,\)
  
\(\lambda = \left[ {1, 1, 10, 10, 5/60, 5/60, 5/32, 5/32, 5/100, 5/100} \right],\)
  
\(M_{i} \;are\;all\;identity\;matrices.\)
\(- 5 \le x_{i} \le 5\)
\(f_{15} \left( 0 \right) = 0\)
F16: Rotated Hybrid Composition Function
Except that the \(M_{i}\) are different linear transformation matrices with condition number 2, all other settings are the same as \(F_{15}\).
\(- 5 \le x_{i} \le 5\)
\(f_{16} \left( 0 \right) = 0\)
F17: Rotated Hybrid Composition Function with Noise in Fitness
\(f_{17} \left( x \right) = F_{16} \times \left( {1 + 0.2\left| {N\left( {0,1} \right)} \right|} \right)\)
  
\(All\;other\;settings\;are\;the\;same\;as\;F_{16} .\)
\(- 5 \le x_{i} \le 5\)
\(f_{17} \left( 0 \right) = 0\)
F18: Rotated Hybrid Composition Function
\(f_{1 - 2} \left( x \right) = Ackley^{\prime}s\;Function\)
  
\(f_{3 - 4} \left( x \right) = Rastrigin^{\prime}s\;Function\)
  
\(f_{5 - 6} \left( x \right) = Sphere\;Function\)
  
\(f_{7 - 8} \left( x \right) = Weierstrass\;Function\)
  
\(f_{9 - 10} \left( x \right) = Griewank^{\prime}s\;Function\)
  
\(\sigma =\left[1,\,2,\,1.5,\,1.5,\,1,\,1,\,1.5,\,1.5,\,2,\,2\right];\)
 
\(\lambda =\left[2\times 5/32,\;5/32,\;2\times 1,\;1,\;2\times 5/100,\;5/100,\;2\times 10,\;10,\;2\times 5/60,\;5/60\right];\)
 
\(M_{i}\) are all rotation matrices, with condition numbers \(\left[2,\,3,\,2,\,3,\,2,\,3,\,20,\,30,\,200,\,300\right]\).
\(- 5 \le x_{i} \le 5\)
\(f_{{{18}}} \left( 0 \right) = 0\)
F19: Rotated Hybrid Composition Function with a Narrow Basin for the Global Optimum
All settings are the same as \(F_{18}\) except \(\sigma =\left[0.1,\,2,\,1.5,\,1.5,\,1,\,1,\,1.5,\,1.5,\,2,\,2\right]\) and \(\lambda =\left[0.1\times 5/32,\;5/32,\;2\times 1,\;1,\;2\times 5/100,\;5/100,\;2\times 10,\;10,\;2\times 5/60,\;5/60\right].\)
\(- 5 \le x_{i} \le 5\)
\(f_{19} \left( 0 \right) = 0\)
F20: Rotated Hybrid Composition Function with the Global Optimum on the Bounds
All settings are the same as \(F_{18}\) except that, after loading the data file, \(o_{1\left(2j\right)}=5\) is set for \(j=1,2,\ldots,D/2\).
\(- 5 \le x_{i} \le 5\)
\(f_{20} \left( 0 \right) = 0\)
F21: Rotated Hybrid Composition Function
\(f_{1 - 2} \left( x \right) = Rotated\;Expanded\;Scaffer^{\prime}s\;F6\;Function\)
  
\(f_{3 - 4} \left( x \right) = Rastrigin^{\prime}s\;Function\)
  
\(f_{5 - 6} \left( x \right) = F8F2\;\;Function\)
  
\(f_{7 - 8} \left( x \right) = Weierstrass\;Function\)
  
\(f_{9 - 10} \left( x \right) = Griewank^{\prime}s\;Function\)
  
\(\sigma =\left[1,\,1,\,1,\,1,\,1,\,2,\,2,\,2,\,2,\,2\right];\)
 
\(\lambda =\left[5\times 5/100,\;5/100,\;5\times 1,\;1,\;5\times 1,\;1,\;5\times 10,\;10,\;5\times 5/200,\;5/200\right];\)
 
\(M_{i}\) are all orthogonal matrices.
\(- 5 \le x_{i} \le 5\)
\(f_{21} \left( 0 \right) = 0\)
F22: Rotated Hybrid Composition Function with High Condition Number Matrix
All settings are the same as \(F_{21}\) except that the condition numbers of the \(M_{i}\) are \(\left[10,\,20,\,50,\,100,\,200,\,1000,\,2000,\,3000,\,4000,\,5000\right]\).
\(- 5 \le x_{i} \le 5\)
\(f_{22} \left( 0 \right) = 0\)
F23: Non-Continuous Rotated Hybrid Composition Function
All settings are the same as \(F_{21}\), except that
 
\({x}_{j}=\left\{\begin{array}{ll}{x}_{j} & \left|{x}_{j}-{o}_{1j}\right|<1/2\\ round\left(2{x}_{j}\right)/2 & \left|{x}_{j}-{o}_{1j}\right|\ge 1/2\end{array}\right.\hspace{0.33em}for\hspace{0.33em}j=1,2,\ldots,D,\)
 
\(round\left(x\right)=\left\{\begin{array}{ll}a-1 & if\hspace{0.33em}x\le 0\hspace{0.33em}and\hspace{0.33em}b\ge 0.5\\ a & if\hspace{0.33em}b<0.5\\ a+1 & if\hspace{0.33em}x>0\hspace{0.33em}and\hspace{0.33em}b\ge 0.5\end{array}\right.\)
 
where \(a\) is \(x\)'s integral part and \(b\) is \(x\)'s decimal part. All "round" operators in this document use the same schedule.
\(-5\le {x}_{i}\le 5\)
\({f}_{23}(0)=0\)
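The non-continuous transform above is easy to get wrong for negative coordinates; the following is a small sketch of our reading of the rounding schedule (not official benchmark code):

```python
import math

def cec_round(x):
    """Rounding schedule of F23: a is x's integral part, b its decimal part."""
    a = math.trunc(x)      # integral part (truncates toward zero)
    b = abs(x - a)         # decimal part
    if b < 0.5:
        return a
    return a - 1 if x <= 0 else a + 1  # round away from zero when b >= 0.5

def discretize(xj, o1j):
    """Non-continuous transform of one coordinate x_j around the optimum o_1j."""
    return xj if abs(xj - o1j) < 0.5 else cec_round(2.0 * xj) / 2.0

assert cec_round(-2.7) == -3 and cec_round(2.3) == 2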
F24: Rotated Hybrid Composition Function
\({f}_{1}\left(x\right)=Weierstrass\hspace{0.33em}Function\)
  
\({f}_{2}\left(x\right)=Rotated\hspace{0.33em}Expanded\hspace{0.33em}Scaffe{r}^{^{\prime}}s\hspace{0.33em}F6\hspace{0.33em}Function\)
  
\({f}_{3}\left(x\right)=F8F2\hspace{0.33em}\hspace{0.33em}Function\)
  
\({f}_{4}\left(x\right)=Ackle{y}^{^{\prime}}s\hspace{0.33em}Function\)
  
\({f}_{5}\left(x\right)=Rastrigi{n}^{^{\prime}}s\hspace{0.33em}Function\)
  
\({f}_{6}\left(x\right)=Griewan{k}^{^{\prime}}s\hspace{0.33em}Function\)
  
\({f}_{7}\left(x\right)=Non-Continuous\hspace{0.33em}Expanded\hspace{0.33em}Scaffe{r}^{^{\prime}}s\hspace{0.33em}F6\hspace{0.33em}Function\)
  
\({f}_{8}\left(x\right)=Non-Continuous\hspace{0.33em}Rastrigi{n}^{^{\prime}}s\hspace{0.33em}Funtion\)
  
\({f}_{9}\left(x\right)=High\hspace{0.33em}Conditioned\hspace{0.33em}Elliptic\hspace{0.33em}Function\)
  
\({f}_{10}\left(x\right)=Sphere\hspace{0.33em}Funtion\hspace{0.33em}with\hspace{0.33em}Noise\hspace{0.33em}in\hspace{0.33em}Fitness\)
  
\({\sigma }_{i}=2,\hspace{0.33em}{\text{for}}\hspace{0.33em}i=1,2,\ldots,D;\)
  
\(\lambda = [\text{10; 5/20; 1; 5/32; 1; 5/100; 5/50; 1; 5/100; 5/100}];\)
  
\({M}_{i}\) are all rotation matrices; the condition numbers are \(\left[100,\,50,\,30,\,10,\,5,\,5,\,4,\,3,\,2,\,2\right]\).
\(-5\le {x}_{i}\le 5\)
\({f}_{24}(0)=0\)
F25: Rotated Hybrid Composition Function without Bounds
All settings are the same as \({F}_{24}\) except that no exact search range is set for this test function.
\(-2\le {x}_{i}\le 5\)
\({f}_{25}(0)=0\)

5.2 Comparison with state-of-the-art DE algorithms

We compare SFSADE with 7 advanced DE algorithms: JADE (Zhang and Sanderson 2009), SaDE (Qin et al. 2009a), SOUPDE (Weber et al. 2011), EPSDE (Mallipeddi et al. 2011), ESADE (Guo et al. 2014), MPEDE (Wu et al. 2016), and SAMDE (Zhu et al. 2020). Among them, SOUPDE and MPEDE use multiple-subpopulation coevolution strategies, as SFSADE does; SaDE and EPSDE employ multiple mutation strategies, as SFSADE does; and JADE and ESADE adaptively adjust their control parameters, as SFSADE does. As the most recent improved DE algorithm, SAMDE integrates all of the above improvement strategies, as SFSADE does. Choosing these enhanced DE variants for comparison is therefore both meaningful and targeted.
To make the comparative experiments fairer, we adopt the relevant parameter settings in (Zhu et al. 2020) and compare against the calculation results reported there. The maximum number of function evaluations is set to \(2000\times D\), which equals the population evolution number \(T\) multiplied by the evolution number of the meme groups \(N\) in our SFSADE algorithm. The population size is likewise set to 60, and the boundary range of the control parameters \(R\), \(S\), \(G\) and \(CR\) is \(\left[0,1\right]\). The remaining parameter settings of the comparison algorithms are: JADE with \(NP=100\); SaDE with \(NP=50\), \(lp=30\); SOUPDE with \(NP=30\), \(ps=0.5\), \(pu=0.5\), \(CR=0.9\); EPSDE with \(NP=50\); ESADE with \(NP=50\), \({t}_{0}=1000\); and MPEDE with \(NP=250\), \(ng=20\), \({\lambda }_{1}={\lambda }_{2}={\lambda }_{3}=0.2\).
We solved the 25 benchmark functions as 10-, 30-, and 50-dimensional optimization problems; for each dimension, every test function was run in 20 independent experiments, and the average error and corresponding standard deviation were computed as the final result and recorded in Tables 3, 4 and 5. The symbols "\(+\)", "\(\approx\)", and "\(-\)" indicate that the corresponding algorithm performs better than, comparably to, or worse than SFSADE on that function. For each test function, the best value found among all enhanced DE variants is highlighted in bold.
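For reference, the per-cell statistics can be produced as below. This is a minimal sketch; whether the paper uses the population or the sample standard deviation is not stated, so the population form is assumed:

```python
import numpy as np

def summarize_runs(final_errors):
    """Mean error and standard deviation over independent runs,
    the two numbers reported per cell in Tables 3-5 (20 runs each)."""
    e = np.asarray(final_errors, dtype=float)
    return e.mean(), e.std()  # np.std defaults to the population form

# placeholder data standing in for 20 final errors |f(x_best) - f(x*)|
errors = np.abs(np.random.default_rng(0).normal(1e-3, 1e-4, size=20))
mean_err, std_err = summarize_runs(errors)
print(f"{mean_err:.2E} ({std_err:.2E})")
```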
Table 3
Comparison of the average error (standard deviation) on 10 dimensions (Zhu et al. 2020)

| Function | JADE | SaDE | SOUPDE | EPSDE | ESADE | MPEDE | SAMDE | SFSADE |
|---|---|---|---|---|---|---|---|---|
| F1 | 8.00E-10 (3.67E-10) | 3.45E-18 (1.73E-18) | 2.44E-06 (5.66E-06) | 1.86E-17 (9.26E-18) | 8.96E-15 (3.59E-15) | 1.24E+00 (2.71E-02) | 1.56E-22 (7.31E-23) | **7.81E-73** (2.97E-72) |
| F2 | 4.86E-04 (2.54E-04) + | 3.77E-03 (2.05E-03) + | 3.39E+00 (2.16E+00) | 1.23E-03 (6.65E-04) + | 2.94E-02 (2.57E-02) + | 3.95E+01 (1.14E+00) | **3.37E-09** (1.82E-09) + | 5.75E-01 (8.51E-02) |
| F3 | **7.00E+01** (3.65E+01) + | 8.64E+05 (5.29E+05) | 3.51E+06 (2.49E+06) | 6.84E+04 (3.24E+04) + | 2.74E+03 (5.94E+01) + | 2.38E+05 (1.03E+04) | 6.23E+02 (1.33E+02) + | 1.83E+05 (5.22E+04) |
| F4 | 7.97E-03 (4.15E-03) + | 6.97E-02 (3.74E-02) + | 1.24E+01 (8.69E+00) | 1.20E-02 (6.05E-03) + | 5.82E-02 (4.74E-02) + | 9.77E+01 (1.94E+00) | **7.77E-08** (3.71E-08) + | 1.67E+00 (3.49E+00) |
| F5 | 8.28E-01 (1.75E-01) + | 3.48E-02 (6.54E-03) + | 1.21E+02 (3.79E+01) + | 3.06E-03 (7.80E-04) + | 8.71E-04 (1.86E-04) + | 1.95E+02 (2.28E+00) + | **1.46E-04** (2.43E-05) + | 8.76E+02 (9.26E+02) |
| F6 | 2.06E+01 (6.38E+01) | 4.86E+00 (1.74E-01) + | 1.73E+02 (1.22E+02) | 2.15E+00 (2.31E-01) + | 7.82E+00 (1.57E+01) + | 2.83E+03 (9.62E+01) | **1.21E+00** (6.58E-02) + | 1.07E+01 (3.78E+00) |
| F7 | 8.86E-01 (1.07E-01) | 7.53E-01 (1.49E-01) | 9.75E-01 (1.03E-01) | 7.98E-01 (1.28E-01) | 1.92E-01 (5.92E-03) | 1.33E+00 (7.26E-03) | 6.30E-01 (1.29E-01) | **1.51E-01** (2.87E-02) |
| F8 | 2.11E+01 (1.61E-01) | 2.08E+01 (1.25E-01) | 2.09E+01 (1.32E-01) | 2.08E+01 (1.31E-01) | 2.16E+01 (2.09E-01) | 2.10E+01 (6.23E-03) | 2.08E+01 (1.31E-01) | **2.02E+01** (5.76E-02) |
| F9 | 6.67E+00 (2.05E+00) | 9.44E+00 (2.71E+00) | 4.58E+00 (2.66E+00) | 9.98E+00 (3.10E+00) | 6.12E+01 (1.44E+01) | 4.46E+01 (6.16E-01) | 2.32E+01 (6.71E+00) | **1.31E-13** (1.75E-13) |
| F10 | 7.11E+01 (1.17E+01) | 4.85E+01 (9.24E+00) | 5.67E+01 (1.13E+01) | 5.05E+01 (9.62E+00) | 1.11E+02 (2.11E+01) | 7.13E+01 (4.75E-01) | 5.15E+01 (8.19E+00) | **1.30E+01** (3.69E+00) |
| F11 | 1.40E+01 (1.26E+00) | 1.12E+01 (1.22E+00) | 1.12E+01 (1.26E+00) | 1.12E+01 (1.20E+00) | **6.63E-01** (1.17E-01) + | 1.35E+01 (6.86E-02) | 1.14E+01 (1.29E+00) | 4.00E+00 (6.02E-01) |
| F12 | 2.09E+03 (2.80E+03) | 5.97E+02 (5.25E+02) | 3.04E+03 (1.51E+03) | 3.25E+02 (1.18E+02) | 3.54E+02 (1.26E+03) | 9.37E+03 (2.25E+02) | 1.69E+02 (1.20E-03) | **3.03E-02** (1.92E-02) |
| F13 | 3.77E+00 (8.69E-01) | 3.09E+00 (6.98E-01) | 2.53E+00 (7.38E-01) | 2.86E+00 (7.41E-01) | 1.14E+01 (4.37E+00) | 5.38E+00 (5.53E-02) | 4.11E+00 (8.03E-01) | **1.39E-01** (2.53E-03) |
| F14 | 4.58E+00 (1.63E-01) | 4.22E+00 (1.84E-01) | 4.29E+00 (1.99E-01) | 4.22E+00 (1.98E-01) | 4.92E+00 (1.31E-01) | 4.50E+00 (8.62E-03) | 4.18E+00 (1.86E-01) | **2.84E+00** (3.92E-02) |
| F15 | 3.94E+02 (1.48E+02) | 2.20E+02 (8.52E+01) | 2.97E+02 (1.33E+02) | 2.20E+02 (5.93E+01) | 3.18E+02 (2.81E+01) | 5.86E+02 (2.50E+00) | 2.95E+02 (2.63E+01) | **4.64E+00** (4.87E+00) |
| F16 | 2.60E+02 (3.94E+01) | 2.09E+02 (2.79E+01) | 2.34E+02 (3.15E+01) | 2.09E+02 (2.27E+01) | 3.03E+02 (4.42E+01) | 2.60E+02 (1.64E+00) | 2.04E+02 (2.24E+01) | **9.27E+01** (2.76E+01) |
| F17 | 2.99E+02 (4.19E+01) | 2.33E+02 (2.93E+01) | 3.01E+02 (6.34E+01) | 2.39E+02 (2.60E+01) | 3.68E+02 (5.94E+01) | 2.96E+02 (1.72E+00) | 2.32E+02 (2.42E+01) | **1.59E+02** (8.22E+01) |
| F18 | 7.75E+02 (5.78E+01) | 7.96E+02 (2.23E+01) | 6.59E+02 (4.36E+01) | 6.35E+02 (2.93E+00) | 8.32E+02 (1.19E-01) | 8.20E+02 (1.81E+00) | 6.85E+02 (6.83E-04) | **3.15E+02** (1.28E+02) |
| F19 | 7.53E+02 (5.50E+01) | 7.15E+02 (2.12E+01) | 6.67E+02 (5.73E+01) | 7.78E+02 (5.79E+00) | 8.07E+02 (1.04E-01) | 8.14E+02 (1.56E+00) | 6.65E+02 (4.81E-04) | **3.94E+02** (7.07E+01) |
| F20 | 7.84E+02 (2.65E+01) | 6.51E+02 (2.66E+01) | 6.70E+02 (4.15E+01) | 6.96E+02 (2.03E+00) | 8.23E+02 (1.24E-01) | 8.29E+02 (3.11E+00) | 7.47E+02 (2.90E-03) | **3.82E+02** (3.58E+01) |
| F21 | 5.01E+02 (1.25E+02) | 7.22E+02 (2.51E+01) | 5.76E+02 (5.81E+00) | 4.85E+02 (7.63E-08) | 6.90E+02 (1.58E-03) | 8.17E+02 (1.27E+00) | 4.83E+02 (1.37E-06) | **4.09E+02** (1.10E+02) |
| F22 | 8.03E+02 (1.07E+01) | 7.91E+02 (6.54E+00) | 7.93E+02 (1.94E+01) | 7.73E+02 (5.82E+00) | 8.05E+02 (1.20E+01) | 8.12E+02 (4.31E-01) | 7.70E+02 (5.39E+00) | **4.52E+02** (7.13E+01) |
| F23 | 8.06E+02 (7.85E+01) | 7.73E+02 (2.93E+01) | 6.52E+02 (1.42E+01) | 7.54E+02 (4.19E+00) | 9.58E+02 (3.45E+01) | 8.43E+02 (2.21E+00) | 7.18E+02 (6.65E-08) | **5.02E+02** (2.64E+01) |
| F24 | 2.15E+02 (2.81E+01) | 2.13E+02 (2.63E+00) | 2.01E+02 (2.74E+00) | 2.36E+02 (3.65E-10) | 2.48E+02 (1.27E-09) | 2.05E+02 (1.11E-01) | 2.12E+02 (2.94E-16) | **2.00E+02** (5.99E-02) |
| F25 | 4.25E+02 (1.17E+01) | 3.99E+02 (5.01E+00) | 4.45E+02 (1.74E+01) | 4.17E+02 (4.60E+00) | 4.40E+02 (3.17E+01) | 4.11E+02 (4.44E-01) | 3.94E+02 (4.57E+00) | **2.61E+02** (1.21E+02) |
| +/≈/− | 4/0/21 | 4/0/21 | 1/0/24 | 5/0/20 | 6/0/19 | 1/0/24 | 5/0/20 |  |

Table 4
Comparison of the average error (standard deviation) on 30 dimensions (Zhu et al. 2020)

| Function | JADE | SaDE | SOUPDE | EPSDE | ESADE | MPEDE | SAMDE | SFSADE |
|---|---|---|---|---|---|---|---|---|
| F1 | 1.16E-23 (3.20E-24) | 1.57E-26 (4.24E-27) | 9.27E-06 (1.95E-05) | 4.92E-23 (1.36E-23) | 5.78E+00 (1.54E+00) | 4.27E-05 (7.22E-07) | 5.97E-24 (8.68E-25) | **1.31E-92** (2.41E-92) |
| F2 | **9.40E-03** (7.21E-04) + | 2.02E+02 (1.96E+01) + | 6.28E+02 (2.46E+02) + | 1.36E+02 (2.21E+01) + | 5.77E+00 (1.52E+00) + | 1.51E+01 (1.30E-01) + | 6.71E-01 (6.66E-02) + | 1.90E+03 (5.41E+02) |
| F3 | 3.06E+05 (4.95E+03) | 9.77E+06 (1.91E+06) | 1.45E+08 (6.06E+07) | 6.50E+06 (1.20E+06) | 8.76E+05 (7.86E+03) + | **1.04E+05** (4.54E+02) + | 1.93E+06 (1.30E+05) + | 2.65E+06 (4.04E+05) |
| F4 | **1.94E-01** (1.86E+00) + | 3.32E+03 (4.23E+02) | 8.91E+03 (4.09E+03) | 1.44E+03 (2.54E+02) + | 9.05E+02 (3.52E+01) + | 2.41E+02 (2.32E+00) + | 1.54E+02 (1.53E+01) + | 2.73E+03 (4.11E+02) |
| F5 | 1.61E+03 (1.18E+01) + | 2.71E+03 (4.16E+01) + | 8.26E+03 (1.59E+03) + | 1.88E+03 (6.84E+01) + | 2.65E+03 (8.68E+00) + | 1.58E+03 (3.54E+00) + | **1.52E+03** (6.87E+00) + | 9.84E+03 (1.21E+03) |
| F6 | 1.16E+02 (2.11E+02) | 7.72E+01 (5.18E-01) + | 3.50E+02 (7.02E+02) | **3.23E+01** (1.94E-01) + | 6.76E+01 (3.54E+00) + | 1.37E+02 (7.08E-01) | 3.75E+01 (2.27E-02) + | 8.87E+01 (1.32E+01) |
| F7 | 1.28E-02 (4.14E-06) | 2.49E-02 (3.02E-04) | 1.28E+00 (1.13E-01) | 1.39E-02 (2.65E-07) | 5.16E+00 (1.19E+00) | 5.04E-01 (3.95E-03) | 1.73E-02 (6.09E-06) | **1.16E-02** (1.00E-02) |
| F8 | 2.14E+01 (7.97E-02) | 2.12E+01 (6.41E-02) | 2.12E+01 (6.26E-02) | 2.12E+01 (6.25E-02) | 2.16E+01 (1.20E-01) | 2.13E+01 (4.26E-03) | 2.12E+01 (6.54E-02) | **2.08E+01** (4.47E-02) |
| F9 | 2.88E+01 (4.65E+00) | 3.47E+01 (5.16E+00) | 2.35E+01 (5.59E+00) | 1.01E+02 (1.38E+01) | 8.82E+01 (1.39E+01) | 1.09E+02 (1.12E+00) | 2.91E+01 (8.09E-01) | **1.76E-10** (3.52E-10) |
| F10 | 2.94E+02 (2.40E+01) | 2.20E+02 (1.90E+01) | 2.64E+02 (3.43E+01) | 2.34E+02 (1.90E+01) | 2.14E+02 (2.51E+01) | 2.10E+02 (8.81E-01) | 2.25E+02 (1.64E+01) | **2.01E+02** (1.12E+01) |
| F11 | 5.02E+01 (2.13E+00) | 4.35E+01 (2.04E+00) | 4.04E+01 (2.67E+00) | 4.26E+01 (2.20E+00) | 5.17E+01 (3.67E+00) | 4.44E+01 (1.21E-01) | 4.52E+01 (1.79E+00) | **2.46E+01** (1.60E+00) |
| F12 | 6.53E+04 (3.34E+04) | 1.53E+04 (1.13E+04) | 2.18E+05 (6.12E+04) | 6.57E+04 (1.55E+04) | 1.46E+04 (5.95E+03) | 9.96E+03 (3.44E+01) | 6.17E+03 (2.50E+00) | **2.54E+00** (6.65E-01) |
| F13 | 1.54E+01 (1.86E+00) | 1.50E+01 (1.46E+00) | 8.02E+00 (1.25E+00) | 1.40E+01 (1.76E+01) | 2.66E+01 (3.64E+00) | 1.28E+01 (1.34E-01) | 1.78E+01 (1.62E+00) | **8.38E-01** (6.66E-02) |
| F14 | 1.45E+01 (1.70E-01) | 1.40E+01 (1.86E-01) | 1.40E+01 (2.04E-01) | 1.40E+01 (1.86E-01) | 1.49E+01 (1.48E-01) | 1.43E+01 (1.12E-02) | 1.40E+01 (1.76E-01) | **1.22E+01** (3.59E-01) |
| F15 | 3.50E+02 (1.21E+01) | 3.59E+02 (2.08E+01) | 4.17E+02 (3.46E+01) | 3.88E+02 (3.23E-08) | 3.83E+02 (1.83E+00) | 3.97E+02 (5.47E-03) | 3.49E+02 (5.36E-13) | **1.00E+00** (1.74E-01) |
| F16 | 3.48E+02 (3.56E+01) | 2.86E+02 (2.92E+01) | 3.59E+02 (6.89E+01) | 2.94E+02 (1.89E+01) | 2.82E+02 (2.23E+01) | 2.50E+02 (1.63E+00) | 2.84E+02 (1.45E+01) | **1.63E+02** (2.02E+01) |
| F17 | 4.00E+02 (3.84E+01) | 3.02E+02 (3.98E+01) | 4.26E+02 (7.31E+01) | 3.38E+02 (1.88E+01) | 3.37E+02 (2.32E+01) | 2.87E+02 (1.15E+00) | 3.00E+02 (1.70E+01) | **1.86E+02** (2.91E+01) |
| F18 | 9.04E+02 (4.41E-01) | 9.04E+02 (5.30E-01) | 9.18E+02 (2.34E+00) | 9.04E+02 (2.76E-01) | 9.03E+02 (1.61E+00) | 9.07E+02 (1.32E-02) | 9.03E+02 (7.53E-04) | **6.54E+02** (2.18E+02) |
| F19 | 9.01E+02 (4.14E-01) | 9.03E+02 (2.45E+00) | 9.09E+02 (2.96E-01) | 9.19E+02 (2.30E+00) | 9.12E+02 (1.74E+00) | 9.07E+02 (1.17E-02) | 8.99E+02 (6.79E-04) | **6.80E+02** (2.44E+02) |
| F20 | 9.08E+02 (6.58E-01) | 9.07E+02 (5.35E-01) | 9.18E+02 (2.28E+00) | 9.10E+02 (3.34E-01) | 8.90E+02 (1.58E+00) | 9.07E+02 (1.17E-02) | 8.99E+02 (6.87E-04) | **7.82E+02** (1.77E+02) |
| F21 | 5.00E+02 (6.08E-13) | 5.00E+02 (3.07E-13) | 5.00E+02 (1.41E-05) | 5.16E+02 (2.67E-13) | 5.30E+02 (1.51E+00) | 5.00E+02 (2.81E-07) | 5.24E+02 (5.22E-13) | **4.80E+02** (3.42E+00) |
| F22 | 9.47E+02 (8.35E+00) | 9.49E+02 (7.99E+00) | 1.11E+03 (5.68E+01) | 9.43E+02 (8.08E+00) | 9.14E+02 (7.56E+00) | 9.19E+02 (3.37E-01) | 9.29E+02 (5.49E+00) | **5.01E+02** (4.27E+00) |
| F23 | 5.50E+02 (1.20E-03) | 5.50E+02 (1.89E+00) | 5.34E+02 (2.15E-03) | 5.66E+02 (1.28E-04) | 5.34E+02 (1.02E+00) | 5.38E+02 (3.14E-05) | 5.37E+02 (7.08E-13) | **5.17E+02** (1.13E+00) |
| F24 | **2.00E+02** (7.61E-13) | **2.00E+02** (1.34E-13) | **2.00E+02** (1.22E-04) | **2.00E+02** (8.61E-13) | 2.05E+02 (1.40E+00) | **2.00E+02** (7.27E-07) | **2.00E+02** (2.21E-13) | **2.00E+02** (6.07E-02) |
| F25 | 2.18E+02 (3.95E-01) | 2.16E+02 (3.93E-01) | 2.36E+02 (4.81E+00) | 2.13E+02 (2.43E-01) + | 2.13E+02 (1.44E+00) + | 2.13E+02 (6.04E-03) + | **2.12E+02** (1.40E-01) + | 2.16E+02 (3.26E-01) |
| +/≈/− | 3/1/21 | 3/2/20 | 2/1/22 | 5/1/19 | 6/0/19 | 5/1/19 | 6/1/18 |  |

Tables 3, 4 and 5 show the overall performance of SFSADE and the other 7 advanced DE variants on the 25 test functions in 10, 30 and 50 dimensions, respectively. First, for the unimodal functions \(\mathrm{F}1-\mathrm{F}5\): on \(\mathrm{F}1\), SFSADE performs best in all three dimensions, and its accuracy is far superior to that of all other DE variants; on \(\mathrm{F}2-\mathrm{F}5\), SFSADE is only at an average level, beating some DE variants but not others, and on \(\mathrm{F}5\) in particular its optimization results are poor. Second, for the basic multimodal functions \(\mathrm{F}6-\mathrm{F}12\), SFSADE is at a disadvantage only on \(\mathrm{F}6\); on the remaining functions it is superior to almost all other DE variants in all three dimensions. Finally, on both the extended multimodal functions \(\mathrm{F}13-\mathrm{F}14\) and the hybrid multimodal functions \(\mathrm{F}15-\mathrm{F}25\), SFSADE remains far ahead of the other DE variants; only on \(\mathrm{F}24\) and \(\mathrm{F}25\) does it achieve results similar to the other variants or rank at an intermediate level.
Table 5
Comparison of the average error (standard deviation) on 50 dimensions (Zhu et al. 2020)

| Function | JADE | SaDE | SOUPDE | EPSDE | ESADE | MPEDE | SAMDE | SFSADE |
|---|---|---|---|---|---|---|---|---|
| F1 | 9.79E-28 (1.78E-28) | 3.35E-28 (7.07E-29) | 8.06E-06 (1.52E-05) | 5.89E-26 (1.09E-26) | 8.30E-02 (1.86E-02) | 5.27E-08 (5.79E-10) | 2.60E-18 (1.47E-19) | **4.24E-126** (7.76E-126) |
| F2 | **1.89E+00** (7.39E-02) + | 2.34E+03 (8.22E+01) + | 3.83E+03 (1.24E+03) + | 2.47E+03 (2.03E+02) + | 1.88E+02 (2.98E+00) + | 5.09E+01 (1.63E-01) + | 1.26E+02 (4.61E+00) + | 7.31E+03 (3.03E+02) |
| F3 | **8.28E+05** (5.39E+03) + | 4.74E+06 (4.40E+04) + | 3.20E+08 (1.04E+08) | 1.12E+07 (8.54E+05) | 2.57E+06 (7.23E+03) + | 9.69E+05 (6.64E+02) + | 2.80E+06 (5.25E+04) + | 7.13E+06 (1.04E+06) |
| F4 | 6.95E+03 (2.75E+02) + | 2.14E+04 (1.73E+03) | 9.34E+04 (3.49E+04) | 1.87E+04 (2.32E+03) | 1.38E+04 (3.14E+02) | **5.89E+03** (1.73E+01) + | 6.55E+03 (4.20E+02) + | 9.35E+03 (1.25E+01) |
| F5 | 4.61E+03 (6.57E+00) + | 6.08E+03 (2.81E+01) + | 2.33E+04 (3.77E+03) | 4.45E+03 (4.76E+01) + | 6.90E+03 (1.48E+00) + | **3.43E+03** (1.44E+00) + | 4.35E+03 (6.45E+00) + | 1.86E+04 (2.07E+03) |
| F6 | 8.45E+01 (4.13E-01) + | 1.15E+02 (1.13E-01) + | 3.80E+02 (7.28E+02) | **6.54E+01** (9.62E-02) + | 1.27E+02 (3.14E+00) + | 1.07E+02 (1.40E-01) + | 1.60E+02 (1.20E-01) + | 1.99E+02 (1.17E+01) |
| F7 | **6.06E-03** (5.83E-05) + | 2.99E-02 (4.42E-04) + | 1.05E+00 (1.87E-02) | 7.00E-03 (1.12E-05) + | 1.09E+00 (2.13E-02) | 5.63E-02 (2.72E-04) | 2.50E-02 (2.38E-04) + | 5.57E-02 (3.63E-02) |
| F8 | 2.15E+01 (5.83E-02) | 2.13E+01 (4.45E-02) | 2.13E+01 (4.70E-02) | 2.13E+01 (4.38E-02) | 2.15E+01 (8.91E-02) | 2.14E+01 (1.85E-03) | 2.13E+01 (4.56E-02) | **2.09E+01** (4.86E-02) |
| F9 | 1.03E+02 (1.11E+01) | 5.90E+01 (4.14E+00) | 4.36E+01 (7.42E+00) | 2.24E+02 (2.07E+01) | 2.08E+01 (3.60E+00) | 1.68E+02 (1.28E+00) | 5.14E+01 (1.80E-08) | **1.93E-12** (3.68E-12) |
| F10 | 5.02E+02 (3.33E+01) | 4.08E+02 (2.72E+01) | 5.12E+02 (6.17E+01) | 4.35E+02 (2.63E+01) | **2.54E+02** (3.29E+01) | 3.09E+02 (2.09E+00) | 4.14E+02 (2.54E+01) | 2.70E+02 (4.61E+01) |
| F11 | 8.71E+01 (2.77E+00) | 7.74E+01 (2.69E+00) | 7.17E+01 (3.85E+00) | 7.69E+01 (2.77E+00) | 7.29E+01 (3.96E+00) | 7.63E+01 (1.60E-01) | 8.02E+01 (2.34E+00) | **5.27E+01** (3.14E-01) |
| F12 | 2.36E+04 (9.63E+02) | 3.53E+04 (3.58E+01) | 6.44E+05 (1.44E+05) | 2.32E+04 (5.43E+01) | 4.50E+04 (3.97E+01) | 3.69E+04 (1.89E+01) | 3.48E+04 (7.99E+00) | **1.09E+01** (3.76E+00) |
| F13 | 2.65E+01 (2.52E+00) | 2.90E+01 (2.11E+00) | 1.34E+01 (1.65E+00) | 2.76E+01 (2.39E+00) | 1.26E+01 (1.59E+00) | 2.01E+01 (1.22E-01) | 2.00E+01 (1.45E+00) | **1.44E+00** (6.03E-02) |
| F14 | 2.44E+01 (1.91E-01) | 2.38E+01 (2.02E-01) | 2.38E+01 (2.47E-01) | 2.38E+01 (2.09E-01) | 2.41E+01 (2.26E-01) | 2.40E+01 (8.48E-03) | 2.38E+01 (1.83E-01) | **2.19E+01** (2.39E-01) |
| F15 | 3.29E+02 (2.62E+00) | 3.73E+02 (1.38E+01) | 3.55E+02 (6.83E+01) | 3.42E+02 (2.45E-13) | 3.42E+02 (9.09E-02) | 3.52E+02 (1.52E-04) | 3.21E+02 (1.14E-11) | **1.84E+00** (6.14E-01) |
| F16 | 3.62E+02 (2.19E+01) | 2.31E+02 (2.35E+01) | 4.45E+02 (4.95E+01) | 3.36E+02 (1.50E+01) | 2.76E+02 (1.49E+01) | 2.16E+02 (7.57E-01) | 2.89E+02 (1.59E+01) | **1.96E+02** (3.17E+00) |
| F17 | 4.21E+02 (2.83E+01) | 3.43E+02 (1.97E+01) | 5.71E+02 (8.49E+01) | 3.70E+02 (1.54E+01) | 3.62E+02 (2.19E+01) | 2.93E+02 (1.02E+00) | 3.67E+02 (1.12E+01) | **2.72E+02** (1.92E+00) |
| F18 | 9.32E+02 (2.24E+00) | 9.42E+02 (1.37E+00) | 9.76E+02 (1.67E+01) | 9.35E+02 (8.39E-01) | 7.87E+02 (2.15E-01) | 9.18E+02 (5.90E-03) | 9.30E+02 (3.12E-05) | **7.70E+02** (4.62E-01) |
| F19 | 9.36E+02 (1.51E+00) | 9.45E+02 (2.22E+00) | 9.62E+02 (6.94E+00) | 9.29E+02 (3.21E-01) | 7.44E+02 (2.24E-01) | 9.30E+02 (4.11E-03) | 9.27E+02 (8.08E-05) | **7.36E+02** (2.12E-01) |
| F20 | 9.28E+02 (1.12E+00) | 9.43E+02 (1.10E+00) | 9.77E+02 (1.72E+01) | 9.26E+02 (5.52E-01) | **8.09E+02** (2.51E-01) | 9.17E+02 (7.15E-03) | 9.32E+02 (1.89E-05) | **8.09E+02** (3.64E-01) |
| F21 | 5.40E+02 (1.47E-12) | 5.00E+02 (4.93E-13) | 5.00E+02 (5.35E-06) | 5.00E+02 (4.03E-13) | 5.40E+02 (1.89E-02) | 5.61E+02 (8.27E-04) | 5.39E+02 (1.07E-10) | **4.99E+02** (3.72E-02) |
| F22 | 1.00E+03 (7.80E+00) | 9.88E+02 (5.50E+00) | 1.09E+03 (3.00E+01) | 9.76E+02 (6.04E+00) | 9.49E+02 (6.64E+00) | 9.34E+02 (1.65E-01) | 9.64E+02 (4.29E+00) | **7.11E+02** (2.57E+00) |
| F23 | 5.95E+02 (2.28E-01) | 5.67E+02 (4.23E-02) | 5.39E+02 (1.75E-02) | 5.42E+02 (6.14E-08) | 5.67E+02 (1.21E-02) | 5.58E+02 (1.61E-04) | 5.83E+02 (2.02E-09) | **5.35E+02** (2.13E+00) |
| F24 | 8.80E+02 (8.63E+00) | **2.00E+02** (2.62E-11) | **2.00E+02** (9.40E-03) | **2.00E+02** (1.05E-13) | **2.00E+02** (2.03E-02) | **2.00E+02** (2.64E-09) | **2.00E+02** (2.86E-13) | **2.00E+02** (6.01E-03) |
| F25 | 4.92E+02 (3.82E+00) | 2.43E+02 (9.72E-01) | 3.31E+02 (1.33E+01) | 2.24E+02 (2.46E-01) | 2.23E+02 (1.35E-01) | 2.28E+02 (6.83E-03) | 2.22E+02 (1.46E-01) | **2.07E+02** (7.14E-02) |
| +/≈/− | 6/0/19 | 5/1/19 | 1/1/23 | 4/1/20 | 4/1/20 | 5/1/19 | 6/1/18 |  |

In addition, for each dimension we counted the number of test functions (out of 25) on which SFSADE outperforms JADE, SaDE, SOUPDE, EPSDE, ESADE, MPEDE, and SAMDE. On the 10-dimensional functions, the counts are 21, 21, 24, 20, 19, 24, and 20, i.e., SFSADE defeats its opponents on 21.29 functions on average, which suffices to show that its overall performance is better than that of the other 7 advanced DE variants and is highly competitive. In 30 dimensions, the counts are 21, 20, 22, 19, 19, 19, and 18; although the average number of wins drops to 19.71, the overall performance of SFSADE is still better than that of the other DE variants. In 50 dimensions, the counts are 19, 19, 23, 20, 20, 19, and 18, and the average number of wins is again 19.71, the same as in 30 dimensions. The overall performance of SFSADE thus remains clearly better than that of the other 7 advanced DE variants.
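For transparency, the averages quoted here are simple means of the seven per-opponent win counts: \(\left(21+21+24+20+19+24+20\right)/7=149/7\approx 21.29\) in 10 dimensions, and \(\left(21+20+22+19+19+19+18\right)/7=\left(19+19+23+20+20+19+18\right)/7=138/7\approx 19.71\) in both 30 and 50 dimensions.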
All of the above optimization experiments show clearly that SFSADE obtains competitive results across different dimensions and types of test functions, with a large advantage over the other algorithms. For further visual comparison, Fig. 4 quantitatively summarizes these results by the four function types and three dimensions. Among the 5 unimodal functions, SFSADE performs best on one function; among the 7 basic multimodal functions, it performs best on 5, 6, and 6 functions in the three dimensions, respectively; on the 2 extended multimodal functions, it is always the best; and on the 11 hybrid multimodal functions, it defeats almost all opponents in all three dimensions. Our improved algorithm SFSADE is thus very successful and has strong comprehensive performance. Especially on the high-dimensional hybrid multimodal functions, which are extremely difficult to solve, SFSADE still obtains the most accurate results among the 8 compared algorithms, which further reflects its powerful optimization capability.

5.3 Statistical test

To further analyze the experimental results, we use two nonparametric statistical tests, the Wilcoxon signed ranks test and the Friedman test, to evaluate the performance of the above algorithms (Mirsadeghi and Khodayifar 2021). We formulate two hypotheses and compare the proposed algorithm SFSADE with the other DE variants, namely the JADE, SaDE, SOUPDE, EPSDE, ESADE, MPEDE, and SAMDE algorithms:
(a)
H0: There is no significant difference between the proposed algorithm and the other variant algorithms;
 
(b)
H1: There is a significant difference between the proposed algorithm and the other variant algorithms.
 
The Wilcoxon signed ranks test involves two samples and detects significant differences between them (Derrac et al. 2011); it is therefore used for pairwise comparisons of the algorithms. Combining the average-error results in Tables 3, 4 and 5, we perform the Wilcoxon signed ranks test and report it in Table 6. In this table, \({R}^{+}\) is the sum of ranks on which our proposed algorithm is better than the compared algorithm, \({R}^{-}\) is the sum of ranks for the opposite case, and the two satisfy \(R^{ + } + R^{ - } = n(n + 1)/2\) with \(n=25\) test functions. From the p-values calculated for each dimension, SFSADE shows a significant improvement over the other 7 DE variants at the significance level \(\alpha =0.05\) in most pairwise comparisons, and in all of them in 10 dimensions. For these comparisons we reject the null hypothesis H0, which confirms a significant difference in performance between our proposed algorithm and the other DE variants.
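As an illustration, such a pairwise test can be run as below; the arrays are placeholders, not the data of Tables 3, 4 and 5:

```python
import numpy as np
from scipy.stats import wilcoxon

# Placeholder paired mean errors on the 25 functions; in the paper each
# pair is (SFSADE, competitor) for one test function.
rng = np.random.default_rng(1)
other_err = rng.lognormal(mean=2.0, sigma=1.0, size=25)
sfsade_err = other_err * rng.uniform(0.1, 1.1, size=25)  # usually smaller

stat, p_value = wilcoxon(sfsade_err, other_err)
print(f"W = {stat:.1f}, p = {p_value:.6f}")
# Reject H0 at significance level alpha = 0.05 when p < 0.05.
```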
Table 6
The Wilcoxon signed ranks test results

| Dimension | Comparison | \({R}^{+}\) | \({R}^{-}\) | p-value |
|---|---|---|---|---|
| 10 | SFSADE versus JADE | 270.0 | 55.0 | 0.003821 |
| 10 | SFSADE versus SaDE | 285.0 | 40.0 | 0.000980 |
| 10 | SFSADE versus SOUPDE | 302.0 | 23.0 | 0.000174 |
| 10 | SFSADE versus EPSDE | 259.0 | 66.0 | 0.009417 |
| 10 | SFSADE versus ESADE | 253.0 | 72.0 | 0.014889 |
| 10 | SFSADE versus MPEDE | 303.0 | 22.0 | 0.000157 |
| 10 | SFSADE versus SAMDE | 258.0 | 67.0 | 0.010181 |
| 30 | SFSADE versus JADE | 235.5 | 91.5 | 0.071861 |
| 30 | SFSADE versus SaDE | 271.5 | 53.5 | 0.005139 |
| 30 | SFSADE versus SOUPDE | 281.5 | 43.5 | 0.001843 |
| 30 | SFSADE versus EPSDE | 240.5 | 84.5 | 0.042502 |
| 30 | SFSADE versus ESADE | 222.0 | 103.0 | 0.109385 |
| 30 | SFSADE versus MPEDE | 227.5 | 97.5 | 0.097490 |
| 30 | SFSADE versus SAMDE | 213.5 | 111.5 | 0.198543 |
| 50 | SFSADE versus JADE | 222.0 | 103.0 | 0.109386 |
| 50 | SFSADE versus SaDE | 238.5 | 86.5 | 0.048675 |
| 50 | SFSADE versus SOUPDE | 303.5 | 21.5 | 0.000203 |
| 50 | SFSADE versus EPSDE | 264.5 | 60.5 | 0.007235 |
| 50 | SFSADE versus ESADE | 224.5 | 100.5 | 0.124528 |
| 50 | SFSADE versus MPEDE | 219.5 | 105.5 | 0.153106 |
| 50 | SFSADE versus SAMDE | 221.5 | 103.5 | 0.129953 |
The Friedman test is a nonparametric counterpart of two-way analysis of variance; through multiple comparisons, it detects significant differences between algorithms (Derrac et al. 2011). We first rank the 8 compared algorithms on the results of the 25 test functions and then examine the differences across columns to compare their overall performance. The results of the Friedman test are shown in Table 7, where the general rank derived from the Friedman rank represents the overall standing of each algorithm on these functions. The ranking results show that in each of the 10-, 30-, and 50-dimensional cases, our proposed algorithm SFSADE ranks first, far surpassing the other algorithms, which once again shows its strong comprehensive performance.
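A corresponding sketch for the Friedman test and the average Friedman ranks, again on placeholder data rather than the results of Tables 3, 4 and 5:

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Placeholder error matrix: 25 functions (rows) x 8 algorithms (columns).
rng = np.random.default_rng(2)
errors = rng.lognormal(mean=2.0, sigma=1.0, size=(25, 8))

# Global test for differences among the 8 algorithms:
stat, p_value = friedmanchisquare(*errors.T)
print(f"chi2 = {stat:.2f}, p = {p_value:.6f}")

# Average Friedman rank per algorithm (rank 1 = smallest error per row):
avg_rank = rankdata(errors, axis=1).mean(axis=0)
print(avg_rank)
```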
Table 7
The Friedman test results

| Dimension | Algorithm | Friedman rank | General rank |
|---|---|---|---|
| 10 | JADE | 5.34 | 6 |
| 10 | SaDE | 3.98 | 4 |
| 10 | SOUPDE | 5.08 | 5 |
| 10 | EPSDE | 3.70 | 3 |
| 10 | ESADE | 6.00 | 7 |
| 10 | MPEDE | 6.94 | 8 |
| 10 | SAMDE | 2.92 | 2 |
| 10 | SFSADE | 2.04 | 1 |
| 30 | JADE | 4.84 | 5 |
| 30 | SaDE | 4.60 | 4 |
| 30 | SOUPDE | 5.84 | 8 |
| 30 | EPSDE | 5.04 | 6 |
| 30 | ESADE | 5.12 | 7 |
| 30 | MPEDE | 4.48 | 3 |
| 30 | SAMDE | 3.62 | 2 |
| 30 | SFSADE | 2.46 | 1 |
| 50 | JADE | 5.24 | 7 |
| 50 | SaDE | 5.06 | 6 |
| 50 | SOUPDE | 6.20 | 8 |
| 50 | EPSDE | 4.54 | 5 |
| 50 | ESADE | 4.30 | 4 |
| 50 | MPEDE | 4.08 | 2 |
| 50 | SAMDE | 4.12 | 3 |
| 50 | SFSADE | 2.46 | 1 |

5.4 Parameter analysis

Taken together, the performance of SFSADE is far superior to that of the other advanced DE variants under the same experimental conditions. This strongly supports the effectiveness of the three key improvement strategies we proposed, namely the grouping evolution strategy, the classification mutation strategy, and the self-adaptive control parameter adjustment strategy. However, the poor results on the unimodal functions \(\mathrm{F}2-\mathrm{F}5\) stem from inappropriate initial parameter settings (Zhu et al. 2020). In the above experiments, the range of the control parameters, the ratio of the population evolution number to the group evolution number, and the number of meme groups all affect the final result. We therefore tested the sensitivity of SFSADE to these important parameters experimentally; as shown below, SFSADE can still perform well on unimodal functions when they are set appropriately.
We take the experiment on the 10-dimensional unimodal function \(\mathrm{F}2\) as an example; the range of the self-adaptive control parameters of the mutation and crossover operations remains \(\left[0.1,0.9\right]\).
First, we examined the sensitivity to the number of function evaluations, which is \(2000\times D\) in the original experiment and equals the population evolution number multiplied by the group evolution number. In this experiment, the number of meme groups is fixed at 4, the population size at 60, and the population evolution iterations at 200, while the number of grouping iterations is set to 100, 800, 1500, and 2200 to increase the number of function evaluations. Figure 5 shows the results: the global optima found under the four settings are \(5.82\mathrm{E}-01\), \(2.31\mathrm{E}-04\), \(3.58\mathrm{E}-08\), and \(7.14\mathrm{E}-10\). Within a certain range, therefore, as the number of grouping iterations increases, the convergence speed of the algorithm gradually increases, and the global optimum found becomes increasingly better than the best result in Table 2.
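A sketch of the sweep harness behind this experiment follows, under the loud assumption that `random_search` is merely a stand-in for one SFSADE run (the real optimizer is not reproduced here); only the budget accounting mirrors the description above:

```python
import numpy as np

def schwefel_1_2(x):
    # F2 without the CEC 2005 shift (see its definition above)
    return float(np.sum(np.cumsum(x) ** 2))

def random_search(obj, dim, n_evals, rng):
    # Deliberately crude stand-in for one SFSADE run; it only makes the
    # sweep harness executable.
    best = np.inf
    for _ in range(n_evals):
        best = min(best, obj(rng.uniform(-100.0, 100.0, size=dim)))
    return best

rng = np.random.default_rng(0)
for group_iters in (100, 800, 1500, 2200):  # the four settings behind Fig. 5
    # budget = population iterations x grouping iterations, mirroring the
    # paper's accounting (200 x 100 = 2000 x D for D = 10)
    n_evals = 200 * group_iters
    err = random_search(schwefel_1_2, dim=10, n_evals=n_evals, rng=rng)
    print(group_iters, f"{err:.2E}")
```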
Then, we examined the sensitivity to the number of meme groups. In this experiment, the population size is fixed at 60, the population evolution iterations at 150, and the grouping iterations at 300, while the number of meme groups is set to 3, 4, 5, and 6. Figure 6 shows the results: the global optima found under the four settings are \(1.37\mathrm{E}-01\), \(1.10\mathrm{E}-04\), \(1.66\mathrm{E}-08\), and \(1.53\mathrm{E}-10\). Both the convergence speed of the algorithm and the global optimum found thus vary with the number of meme groups, so choosing a suitable number of meme groups is particularly important in practice.
Finally, we examined the sensitivity to the population size. In this experiment, the number of meme groups is fixed at 3 and the number of function evaluations at \(2000 \times D\), with 200 population evolution iterations and 100 group iterations, while the population size is set to 60, 120, 180, and 240. Figure 7 shows the results: the global optima found under the four settings are \(4.91\mathrm{E}-01\), \(3.03\mathrm{E}-03\), \(6.28\mathrm{E}-07\), and \(1.63\mathrm{E}-09\). Within a certain range, therefore, as the population size increases, the convergence speed of the algorithm gradually increases and the global optimum found improves, again beyond the best result in Table 2.
The above experiments show that the experimental parameter settings strongly influence the results. When optimizing a unimodal function, we can actively adjust the relevant parameters to find a better global optimum. In particular, the two manually adjustable parameters, the maximum number of function evaluations and the population size, significantly improve the solution quality of SFSADE. This also shows that our comprehensive improvement strategy for DE is very effective.

6 Conclusion

To improve the performance of the DE algorithm in solving numerical optimization problems, this paper proposes SFSADE, an improved self-adaptive differential evolution algorithm based on a shuffled frog-leaping strategy. We have verified, in theory and in experiments, its innovations and improvements from three aspects. First, we successfully integrate the shuffled frog-leaping algorithm and divide the population into multiple sub-populations according to the fitness values of the individuals. Each sub-population evolves independently without affecting the others; after each generation, the sub-populations are reintegrated into one population for information transmission and exchange, which substantially improves the diversity and convergence speed of the population. Second, during evolution we adopt a newly designed classification mutation strategy: individuals with different fitness values are subject to the dual influence of the global optimal individual and the group optimal individual, which further enhances the adaptability and diversity of the population. Third, we introduce a newly designed adaptive adjustment mechanism for the control parameters: in each generation, the control parameters of the mutation and crossover operations are automatically adjusted according to the evolution number and the individual fitness values, which further improves the performance of the algorithm.
In addition, this paper carries out a large number of experiments on the CEC 2005 benchmark functions and conducts two nonparametric statistical tests, the Wilcoxon signed ranks test and the Friedman test, to comprehensively compare SFSADE with 7 state-of-the-art DE variants (JADE, SaDE, SOUPDE, EPSDE, ESADE, MPEDE, and SAMDE). The results prove that SFSADE has strong comprehensive performance, far superior to that of the other DE variants. We also analyzed the sensitivity of SFSADE to the relevant experimental parameters, further showing that its three innovative improvement strategies give it clear advantages and competitiveness over the other DE variants.
Given its good performance, SFSADE can be applied to optimization problems in many fields. For example, for the optimization model in multi-threshold image segmentation, the algorithm may improve the segmentation quality and speed; in UAV trajectory planning, it may help overcome complex constraints and improve the quality of the planned trajectories. In the future, we will further improve SFSADE, address numerical optimization problems of higher dimensions or on different benchmark functions under time constraints, and seek to solve other specific real-world optimization problems.

Acknowledgements

The authors are very grateful to the reviewers for their beneficial suggestions. This work is supported by the National Natural Science Foundation Project of China (Grant No. 62073330), Young Talents Lifting Project of China (Grant No. 17JCJQQT048), Natural Science Foundation Project of Hunan Province of China (Grant No. 2019JJ20021) and NUDT Scientific Research Project of China (Grant No. ZK18-02-12).

Declarations

Conflict of interest

The authors declare that they have no conflict of interest regarding this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
