
Open Access 30.05.2023 | Original Article

Bi-preference linkage-driven artificial bee colony algorithm with multi-operator fusion

Authors: Haibo Yu, Yaxin Kang, Li Kang, Jianchao Zeng

Published in: Complex & Intelligent Systems | Issue 6/2023


Abstract

The artificial bee colony algorithm (ABC) struggles to handle complex, high-dimensional optimization problems owing to the strong exploration and weak exploitation properties of its search operators. To tackle this situation, in this study we propose a bi-preference linkage-driven ABC algorithm with multi-operator fusion, named BPLABC. BPLABC couples a preference-free stochastic search operator with a global best-guided search operator in the employed bee phase to maintain population diversity while enhancing population quality. During the onlooker bee phase, a tailored bi-type elite-guided exploitation mechanism regulates the exploitation intensity of the promising elite nectar sources selected via a new roulette selection probability calculation paradigm. To discourage the onlooker bees from slipping into local traps, an auxiliary adversarial search operator is appended after the scout bee phase to tug certain promising elite solutions away from the present pseudo-global best solution. To illustrate the effectiveness and efficiency of BPLABC, two test suites consisting of 23 benchmark problems and 30 complex CEC2014 functions, together with two real-world problems, are selected for testing. Experimental results show that BPLABC achieves superior or equivalent performance compared with several representative ABC variants on the majority of the tested problems.

Introduction

The artificial bee colony (ABC) algorithm, a nature-inspired swarm intelligence optimization method, was first proposed by Karaboga in 2005 [11]. Its concept derives from the nectar collection behavior of honey bees, which carry out different activities according to their division of labor and exchange information in the waggle dance area. ABC prevails for its simple schema and exceptional efficiency in tackling continuous optimization problems, and in recent decades it has received a great deal of recognition for handling a wide range of real-world optimization tasks, such as process and motion control [1], image coding and signal processing [14], job scheduling problems [16], traveling salesman problems [6], path planning [4], and knapsack problems [2]. However, due to its imbalanced treatment of exploration and exploitation of the solution space, ABC usually suffers from premature convergence, sluggish convergence speed, and low convergence accuracy as the complexity of the real-world problem rises. In essence, the behavioral learning of ABC demonstrates a significant preference for exploration, yet its capacity for exploitation is insufficient to compete [15, 20]. To address this issue, over the past several decades a slew of enhanced ABC algorithms have been put forth, with particular attention paid to distinguishing the employed bee phase from the onlooker bee phase as the exploration and exploitation phases, respectively, while synchronously imposing convergence properties on the onlooker bee phase. To name just a few, Zhu et al. [25], drawing inspiration from particle swarm optimization (PSO), introduced a gbest-guided ABC (GABC), which, by virtue of the global best solution’s guidance, distributes a fraction of the higher-quality information to each solution in the onlooker bee phase. Likewise, Guo et al. [10] suggested a global ABC search algorithm (GABCS) that directs the search by pairing the individual historical best with the global best.
It is conceivable that ABC, which prioritizes exploration before exploitation, can achieve rapid early convergence. However, ABC’s propensity to slip into local traps persists and has not been well addressed. Meanwhile, since the scout bee phase keeps injecting diversity and the roulette wheel selection loses selectivity in the late search phase, the weights of exploration and exploitation remain significantly imbalanced, with only the onlooker bees possessing exploitation capability. To further address the aforementioned issues, blending search equations with varied search preferences at different search phases offers a promising and crucial route to ABC’s performance enhancement, since different search equations may synergistically complement each other in attaining the exploration–exploitation trade-off. Following this line, in this paper we propose a bi-preference linkage-driven ABC with multi-operator fusion (BPLABC). BPLABC derives from four complementary improvements: the first two sequentially integrate two sets of bi-preference search operators with different exploration and exploitation tendencies into the employed bee phase and the onlooker bee phase, respectively; the third defines a new roulette probability calculation paradigm based on the ranking of objective function values to ensure the constant delivery of elite solutions during the search process; and the fourth develops an auxiliary adversarial search equation to avoid the over-strong guidance of the pseudo-best solution. The usefulness and efficiency of BPLABC are demonstrated by comprehensive experiments on 53 widely used benchmark problems and two real-world optimization instances. Experimental results reveal that the proposed BPLABC outperforms or is comparable to six representative ABC variants in the majority of the tested cases.
Specifically, the primary contributions of BPLABC are highlighted below.
(1)
A bi-preference linkage-driven search operator fusion mechanism is tailored to improve the convergence performance of ABC. Two search equations coupled with different search preferences are alternately triggered to diversify the exploration and enhance the solution quality in the employed bee phase, while a dual search equation separately guided by elite exemplars and the global best solution is designed to strengthen the in-depth exploitation of promising regions.
 
(2)
A novel calculation paradigm for roulette selection probability is formulated based on the ranking of objective function values to ensure the constancy of the selection pressure throughout the search process.
 
(3)
An auxiliary adversarial search equation is developed to drag certain promising elite solutions away from the basin of the pseudo-global best for strengthening the convergence accuracy.
 
The remainder of the paper is structured as follows. Sect. “Related works” provides a brief overview of the basic ABC algorithm and related works. Sect. “Bi-preference linkage-driven ABC with multi-operator fusion” goes into detail about BPLABC, covering the motivation and characteristics of components involved in the method, and Sect. “Empirical study” presents the experimental results and accompanying analysis. Finally, Sect. “Conclusions” concludes the paper and discusses potential future works.

Related works

The basic ABC

ABC utilizes information exchange within the colony to attain the goal of optimization by simulating bee behavior in nature. In the basic ABC framework, the iterative colony is made up of three sorts of bees: employed bees, onlooker bees, and scout bees, each serving a distinct role in a different search phase. More specifically, the employed bees are in charge of scouring the entire space for promising nectar sources. Following the completion of the employed bee search phase, the onlooker bees selectively inherit nectar source information from the employed bees, undertaking exploitation around the elite nectar sources. The numbers of employed bees and onlooker bees are both equal to the number of nectar sources. If a nectar source consecutively fails to improve and its failure count surpasses a specified threshold, the nectar source is abandoned, and the employed bee attached to that source turns into a scout bee, randomly exploring the whole space for a new potential nectar source. A compact code sketch of these four phases is given after their descriptions below.
In ABC, a nectar source denotes a solution to the optimization problem. Without loss of generality, let \(X_{i} = (x_{i,1} ,x_{i,2} ,...,x_{i,D} )\) be the ith nectar source, where D denotes the number of decision variables. The entire ABC optimization consists of the following phases.
(1)
Initialization phase
ABC first randomly generates N nectar sources using Eq. (1) to initialize the swarm, where \(i \in \{ 1,2, \cdots ,N\}\) denotes the index of the ith nectar source, \(j \in \{ 1,2, \cdots ,D\}\) is the jth decision variable, and r is a random number within [0, 1]. \(x_{j}^{\min }\) and \(x_{j}^{\max }\) indicate the lower and upper bounds of the jth dimension, respectively. Each nectar source is then associated with an employed bee in a one-to-one manner.
$$ x_{i,j} = x_{j}^{\min } + r \cdot (x_{j}^{\max } - x_{j}^{\min } ) $$
(1)
 
(2)
Employed bee phase
During this phase, the employed bees explore new nectar sources throughout the search space based on Eq. (2).
$$ v_{i,j} = x_{i,j} + \phi_{i,j} \cdot (x_{k,j} - x_{i,j} ) $$
(2)
where \(V_{i} = (v_{i,1} ,v_{i,2} , \cdots ,v_{i,D} )\) denotes an offspring nectar source. \(X_{k}\) is a randomly selected nectar source that is mutually exclusive with \(X_{i}\) among all nectar sources. \(\phi_{i,j}\) is a random number with a uniform distribution within [−1, 1]. \(j \in \{ 1,2, \cdots ,D\}\) refers to a dimension index picked at random. An offspring nectar source that exceeds the problem domain is reinitialized by Eq. (1). A counter is configured for each nectar source to track the number of times it has failed to improve in a row. If the offspring nectar source is superior to the corresponding parent nectar source, the parent nectar source is replaced and the associated counter is reset to 0; otherwise, the counter is incremented by one without altering the parent nectar source.
 
(3)
Onlooker bee phase
Following the exploration of all employed bees, onlooker bees carry out exploitation in the vicinity of the nectar sources informed by the employed bees according to Eq. (2). The nectar sources to be exploited are chosen according to their fitness values, which are calculated by Eq. (3), wherein \(f(X_{i} )\) represents the objective function value of nectar source \(X_{i}\).
$$ fit_{i} = \left\{ {\begin{array}{*{20}c} {\frac{1}{{1 + f(X_{i} )}}} & {if\;f(X_{i} ) \ge 0} \\ {1 + abs(f(X_{i} ))} & {otherwise} \\ \end{array} } \right. $$
(3)
$$ p_{i} = \frac{{fit_{i} }}{{\sum\nolimits_{i = 1}^{N} {fit_{i} } }} $$
(4)
The final nectar sources that will be chosen for exploitation are identified by the roulette wheel following the selection probability calculated by Eq. (4), where \(p_{i}\) and \(fit_{i}\) indicate the selection probability and the fitness value of \(X_{i}\), respectively.
 
(4)
Scout bee phase
Following the exploitation of all onlooker bees, all counters linked to the nectar sources are compared to the specified threshold, i.e., the maximum number of trials MaxTrial. If the counter value exceeds MaxTrial, the employed bee converts into a scout bee, and Eq. (1) is invoked again to generate a new potential nectar source, with the associated counter reset to 0. If not, no scout bee is generated.
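Since the four phases above fully specify the basic ABC, a minimal NumPy sketch of one iteration is given below. The objective function, parameter values, and simple clipping-based boundary handling are illustrative assumptions rather than choices taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                   # illustrative objective (sphere), not from the paper
    return float(np.sum(x ** 2))

N, D, lb, ub = 10, 5, -100.0, 100.0         # colony size, dimension, bounds
MaxTrial = 0.6 * D * N

X = lb + rng.random((N, D)) * (ub - lb)     # Eq. (1): initialization
fv = np.array([f(x) for x in X])
trial = np.zeros(N, dtype=int)              # consecutive-failure counters

def greedy_update(i, v):
    """Replace source i by offspring v if v is better; otherwise bump its counter."""
    val = f(v)
    if val < fv[i]:
        X[i], fv[i], trial[i] = v, val, 0
    else:
        trial[i] += 1

def basic_search(i):                        # Eq. (2): perturb one random dimension
    k = rng.choice([m for m in range(N) if m != i])
    j = rng.integers(D)
    v = X[i].copy()
    v[j] = np.clip(X[i, j] + rng.uniform(-1, 1) * (X[k, j] - X[i, j]), lb, ub)
    return v

# Employed bee phase
for i in range(N):
    greedy_update(i, basic_search(i))

# Onlooker bee phase: Eqs. (3)-(4) give the roulette probabilities
fit = np.where(fv >= 0, 1.0 / (1.0 + fv), 1.0 + np.abs(fv))
prob = fit / fit.sum()
for _ in range(N):
    i = int(rng.choice(N, p=prob))          # roulette wheel selection
    greedy_update(i, basic_search(i))

# Scout bee phase: abandon exhausted sources
for i in range(N):
    if trial[i] > MaxTrial:
        X[i] = lb + rng.random(D) * (ub - lb)
        fv[i] = f(X[i])
        trial[i] = 0
```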
 

Improved ABC variants

Because of ABC’s uneven allocation to exploration and exploitation, it usually suffers from premature and inaccurate convergence on complex fitness landscapes. As a result, over the past few decades, a majority of the existing ABC variants have concentrated on striking a good exploration–exploitation trade-off, particularly by enhancing the exploitation capabilities of the search operators. Inspired by the differential mutation operators of differential evolution (DE), Gao et al. [8] introduced an upgraded ABC (ABC/best), in which ABC/best/1 or ABC/best/2 was developed to boost the convergence capability of ABC. Analogously, Zhu et al. [25] presented a gbest-guided ABC (GABC) by referring to the learning operator of PSO, which incorporates the global best solution into the search equations to facilitate the propagation of partial quality information to each solution. Guo et al. [10] suggested a global ABC (GABCS) that incorporates both the individual personal best solution and the global best solution into the search equation for communicating the knowledge about the individual best while exchanging the information about the global best. Yu et al. [23] presented an adaptive ABC (AABC) by selecting multiple elites in the onlooker bee phase and tailoring an adaptive control mechanism to regulate the size of elites in each iteration to accommodate different search phases, significantly enhancing ABC’s convergence and stability. Furthermore, to handle the oscillation phenomena of the GABC search equation, Gao et al. [9] customized a preference-free stochastic search equation that generates the offspring with two randomly selected parents and developed an improved ABC (CABC). However, random differential terms weaken the exploitation capability of ABC. Inspired by BBPSO [12], a Gaussian bare-bone ABC (GBABC) with a Gaussian bare-bone search equation was devised by Zhou et al. [24], effectively alleviating the learning oscillation of onlooker bees.
Given the poor adaptivity of a single convergent search equation to problems with varying characteristics, collocating multiple search strategies appears to be a promising option for boosting ABC’s generality and robustness. By packaging multiple search equations with various properties, Kiran et al. [13] developed an ABC variant with a variable search strategy (ABCVSS), in which suitable search equations were identified to direct the search depending on the problem’s characteristics during each iteration. The expense of trial and error in determining the right search equations, however, raises the algorithm’s complexity. In order to leverage the complementing benefits of search equations in the searching process, Wang et al. [21] devised a multi-strategy ensemble ABC (MEABC), which recruited multiple search equations with various qualities competing to produce offspring. Additionally, Wang et al. [20] suggested an ABC with knowledge fusion (KFABC), which merged different search equations with various qualities and created a learning mechanism for stage identification to choose the most suitable search equation.
Given the scenario in which the roulette probabilities estimated based on fitness value occur equally at the latter stages of searching, resulting in increasing diversity and randomness, the discrepancy between the total number of individuals and the rank of each individual’s fitness value was recommended by Gao et al. [7] as a criterion for computing the roulette probability. To increase the selection pressure in the later search phase, Wang et al. [18] proposed using Bayesian estimation to determine the roulette probability of elite solutions conveyed from the employed bees. Furthermore, Wang et al. [17] used the chaotic system and the opposition-based learning technique to initialize the population, which considerably enhanced the initial population quality.

Bi-preference linkage-driven ABC with multi-operator fusion

In this section, we go into details of the proposed BPLABC, covering the motivation for BPLABC, improvements to the employed bee and onlooker bee phases, the new paradigm for calculating roulette probabilities, and the development of the adversarial search equation.

Motivation

Due to the strong randomness of the search operators driven by Eqs. (1)–(2), which produce offspring solutions with an equal probability of superiority and inferiority and lack convergence drivers to balance exploration and exploitation, ABC exhibits weak convergence performance when handling complex optimization problems, especially multimodal and high-dimensional ones. Although the roulette wheel selection in the onlooker bee phase is capable of emphasizing a certain degree of exploitation around some elite solutions, the convergence of individual fitness values in the late search stage may reduce the preferential roulette wheel selection to an effectively random selection. Furthermore, ABC updates only a single decision variable of the parent solution in its search operator, resulting in a rather monotonous inheritance of information from the parent solutions to the generated offspring solutions. Thus, it is critical to improve ABC’s exploitative capacity.
Following the aforementioned overview of ABC variants in Sect. “Improved ABC variants”, the focus of the existing ABC improvements remains on devising or assembling various search equations to enhance the exploitative capability of ABC. Conceivably, employing strongly convergent search equations benefits the convergence efficiency, while the packaged fusion of multiple search equations can leverage the good attributes of different search equations. However, choosing a suitable search equation from multiple candidates involves a lot of trial and error and is often problem dependent. Therefore, in this paper, we propose coupling different search equations with mutually exclusive preferences in the employed bee phase and the onlooker bee phase to achieve a good balance between exploration and exploitation, as well as developing an auxiliary adversarial search equation based on the behavioral feedback of the two phases to further improve the convergence performance of ABC.

Bi-preference linkage-driven search equation fusion

In this work, two search equations equipped with various exemplars are alternatively used to collaborate during the employed bee search phase, as shown in Eq. (5).
$$ v_{i,j} = \left\{ {\begin{array}{*{20}c} {x_{{{\text{r1}},j}} + \phi_{i,j} \cdot (x_{{{\text{r1}},j}} - x_{r2,j} )} & {rand(0,1) > 0.5} \\ {x_{i,j} + \phi_{i,j} \cdot (x_{k,j} - x_{i,j} ) + \psi_{i,j} \cdot (x_{best,j} - x_{i,j} )} & {Otherwise} \\ \end{array} } \right. $$
(5)
where \(X_{r1}\) and \(X_{r2}\) (\(r1 \ne r2 \ne i\)) are two randomly selected nectar sources from the population. \(X_{best}\) denotes the current global best solution, and \(\psi_{i,j}\) is a number drawn at random from the uniform distribution \({\text{U}}(0,1)\). In Eq. (5), the first formula is the preference-free stochastic search equation dedicated to inducing the employed bees to make breadth exploration of the solution space, while the second formula assigns a slight preference to the vicinities of a randomly selected nectar source and the global best. Here, the second formula in Eq. (5) plays the role of improving the overall quality of the population. These two equations are alternately picked with a probability of 0.5 to derive high-quality nectar sources with good diversity.
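As a concrete illustration, the following sketch generates one employed bee offspring according to Eq. (5), assuming minimization and the basic ABC convention of updating a single randomly chosen dimension; the function name and arguments are illustrative rather than taken from Algorithm 1.

```python
import numpy as np

def employed_bee_candidate(X, i, best_idx, rng):
    """Generate one offspring of source i according to Eq. (5)."""
    N, D = X.shape
    j = rng.integers(D)                          # randomly picked dimension
    v = X[i].copy()
    if rng.random() > 0.5:
        # preference-free stochastic search: breadth exploration
        r1, r2 = rng.choice([k for k in range(N) if k != i], size=2, replace=False)
        v[j] = X[r1, j] + rng.uniform(-1, 1) * (X[r1, j] - X[r2, j])
    else:
        # gbest-guided search: mild pull toward a random source and the global best
        k = rng.choice([m for m in range(N) if m != i])
        phi, psi = rng.uniform(-1, 1), rng.uniform(0, 1)
        v[j] = X[i, j] + phi * (X[k, j] - X[i, j]) + psi * (X[best_idx, j] - X[i, j])
    return v

# Example usage on a random population (illustrative only)
rng = np.random.default_rng(0)
X = rng.uniform(-100, 100, size=(10, 5))
v = employed_bee_candidate(X, i=0, best_idx=int(np.argmin((X ** 2).sum(axis=1))), rng=rng)
```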
To supervise the exploitation of onlooker bees, two search equations drawn by certain promising elites and the global best solution are synergistically integrated, as shown in Eq. (6).
$$ v_{i,j} = \left\{ {\begin{array}{*{20}c} {x_{i,j} + \phi_{i,j} \cdot (x_{r1,j} - x_{r2,j} )} & {i \le q \cdot N} \\ {x_{i,j} + \psi_{i,j} \cdot (x_{best,j} - x_{i,j} )} & {i > q \cdot N} \\ \end{array} } \right. $$
(6)
where \(X_{i}\) here denotes an elite solution selected via the roulette wheel, which is different from its meaning in the employed bee phase. The first formula in Eq. (6) engages the onlooker bees to conduct in-depth exploitation around the vicinities of certain promising elite nectar sources. In fact, since Eq. (5) diversifies the exploration of nectar sources in the employed bee phase, the elite nectar sources selected for in-depth exploitation in the subsequent onlooker bee phase are capable of delivering positive traction that drives the onlooker bees toward the region where the true global optimum is located. Meanwhile, utilizing the second formula in Eq. (6) to encourage elite solutions to learn from the present global best solution can expedite the convergence of these elite solutions to the global best. Since both search equations have a strong propensity for convergence, an additional parameter q is introduced to prevent the search from being stuck in a local trap at this stage, where a large q value (\(q = 0.8\)) is allocated to the first search operator in Eq. (6) to trigger more exploitation of the elite solutions. Sect. “Adjustment for scale parameter q” provides a sensitivity analysis of parameter q in terms of its impact on the convergence performance of BPLABC.
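A corresponding sketch of the onlooker bee update of Eq. (6) is given below. The index in the condition \(i \le q \cdot N\) is interpreted here as the running index of the onlooker bee, so that roughly a fraction q of the onlooker bees apply the elite-centred differential term while the rest learn from the global best; this interpretation and the names used are illustrative assumptions.

```python
import numpy as np

def onlooker_bee_candidate(X, elite_idx, bee_rank, q, best_idx, rng):
    """Generate one offspring around the roulette-selected elite source (Eq. (6)).

    bee_rank runs from 1 to N; the first q*N onlooker bees use the elite-centred
    differential term, the remaining ones learn from the global best.
    """
    N, D = X.shape
    j = rng.integers(D)
    v = X[elite_idx].copy()
    if bee_rank <= q * N:
        r1, r2 = rng.choice([k for k in range(N) if k != elite_idx], size=2, replace=False)
        v[j] = X[elite_idx, j] + rng.uniform(-1, 1) * (X[r1, j] - X[r2, j])
    else:
        v[j] = X[elite_idx, j] + rng.uniform(0, 1) * (X[best_idx, j] - X[elite_idx, j])
    return v
```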

Ranking-based calculation paradigm for roulette probability

The roulette wheel selection in ABC may become inefficient in the latter search stage of the onlooker bee phase, where the fitness values of the nectar sources have converged. To sustain the selection pressure of the roulette wheel selection technique, we determine the selection probability of each nectar source by calculating the ranking of the objective function value rather than the fitness value. Firstly, the solutions are sorted according to their objective function values, then the probability of a solution being selected is formulated by Eq. (7).
$$ p_{i} = \frac{{r_{i} }}{{\sum\nolimits_{i = 1}^{N} {r_{i} } }} $$
(7)
where \(r_{i}\) denotes the rank of solution \(i\) after sorting the objective function values in descending order. As described by Eq. (7), the selection probability of each solution depends only on the ranking of its objective function value rather than on the fitness value. Therefore, the ratio between the selection probabilities of the best and worst solutions remains \(p_{X_{\text{best}}}/p_{X_{\text{worst}}} = r_{N}/r_{1} = N\) throughout the search process, enabling the selection pressure towards elite solutions to stay iteratively consistent.
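In code, the ranking-based probabilities of Eq. (7) can be computed as in the minimal NumPy sketch below (assuming minimization; the example data are made up for illustration).

```python
import numpy as np

def rank_based_probabilities(fvals):
    """Eq. (7): selection probability from the rank of the objective value.

    For minimization, the worst solution gets rank 1 and the best gets rank N,
    so p_best / p_worst = N throughout the search.
    """
    fvals = np.asarray(fvals, dtype=float)
    order = np.argsort(-fvals)                # indices sorted by descending objective value
    ranks = np.empty(len(fvals), dtype=float)
    ranks[order] = np.arange(1, len(fvals) + 1)
    return ranks / ranks.sum()

# Example: the best solution (-1.7) gets probability 4/10, the worst (10.0) gets 1/10
p = rank_based_probabilities([3.2, 0.5, 10.0, -1.7])
```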

Adversarial search equation

To counteract the impact of deception induced by the present global best solution on exploiting the regions of elite solutions, an adversarial search equation is constructed and applied after the scout bee phase, as shown in Eq. (8). It is worth noting that the present global best solution has a high selection probability based on the roulette wheel selection in the onlooker bee phase, which increases the risk of the algorithm’s premature convergence.
$$ v_{i,j} = x_{i,j} + \psi_{i,j} \cdot (x_{i,j} - x_{best,j} ) $$
(8)
The core idea of the adversarial search equation is to drag the exploitation of elite solutions away from the present global best solution. In fact, numerous local best solutions are distributed across the fitness landscape of a complex multimodal problem, and the true global best solution hides among them. Because there is no assurance that the current global best solution is the genuine global best, elite solutions that promise to uncover the true global best are vulnerable to interference from the present global best solution. Consequently, drawing elite solutions away from the allure of the present global best solution is conducive to breaking free from the local trap of the pseudo-global best solution, thereby enhancing convergence accuracy. In this case, the solutions that undergo the adversarial search are determined by the roulette wheel selection, and a parameter p is introduced to regulate the invoking frequency of the roulette operations. In this work, p is fixed at 0.5 to strike a good balance between exploration and exploitation. A sensitivity analysis of the configuration of p is presented in Sect. “Adjustment for control parameter p”.
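A minimal sketch of the adversarial update of Eq. (8) is shown below; which elite solutions undergo it is decided by the roulette wheel and how often it is invoked by the parameter p, as described above, so only the update itself is sketched here.

```python
import numpy as np

def adversarial_candidate(X, i, best_idx, rng):
    """Eq. (8): push solution i away from the current (possibly pseudo) global best."""
    D = X.shape[1]
    j = rng.integers(D)                                  # single-dimension update, as elsewhere
    v = X[i].copy()
    v[j] = X[i, j] + rng.uniform(0, 1) * (X[i, j] - X[best_idx, j])
    return v
```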

Outline of the proposed method

Algorithm 1 summarizes the pseudo-code for BPLABC. In Algorithm 1, N denotes the colony size, Fes and MaxFes indicate the number of objective function evaluations currently consumed and the allowed maximum number of objective function evaluations, respectively. A flowchart for BPLABC is also given in Fig. 1.
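Algorithm 1 and Fig. 1 are not reproduced here. As one plausible rendering of the overall procedure, the following self-contained NumPy sketch arranges the four modules in a single loop on an illustrative sphere function; the boundary handling, the exact placement and frequency of the adversarial phase, and all parameter values are simplifying assumptions rather than the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):                                    # illustrative objective (sphere)
    return float(np.sum(x ** 2))

N, D, lb, ub = 20, 10, -100.0, 100.0
q, p = 0.8, 0.5                              # scale and control parameters
MaxTrial, MaxFes = 0.6 * D * N, 20000

X = lb + rng.random((N, D)) * (ub - lb)      # Eq. (1)
fv = np.array([f(x) for x in X])
trial = np.zeros(N, dtype=int)
Fes = N

def rank_prob():                             # Eq. (7): rank-based roulette probabilities
    r = np.empty(N)
    r[np.argsort(-fv)] = np.arange(1, N + 1) # worst objective -> rank 1, best -> rank N
    return r / r.sum()

def evaluate_and_select(i, v):
    global Fes
    v = np.clip(v, lb, ub)
    val = f(v)
    Fes += 1
    if val < fv[i]:
        X[i], fv[i], trial[i] = v, val, 0
    else:
        trial[i] += 1

def rand_others(i, n):
    return rng.choice([k for k in range(N) if k != i], size=n, replace=False)

while Fes < MaxFes:
    best = int(np.argmin(fv))
    for i in range(N):                       # employed bee phase, Eq. (5)
        j = rng.integers(D)
        v = X[i].copy()
        if rng.random() > 0.5:
            r1, r2 = rand_others(i, 2)
            v[j] = X[r1, j] + rng.uniform(-1, 1) * (X[r1, j] - X[r2, j])
        else:
            (k,) = rand_others(i, 1)
            v[j] = (X[i, j] + rng.uniform(-1, 1) * (X[k, j] - X[i, j])
                    + rng.uniform(0, 1) * (X[best, j] - X[i, j]))
        evaluate_and_select(i, v)

    best = int(np.argmin(fv))
    prob = rank_prob()
    for rank in range(1, N + 1):             # onlooker bee phase, Eq. (6)
        i = int(rng.choice(N, p=prob))       # roulette-selected elite source
        j = rng.integers(D)
        v = X[i].copy()
        if rank <= q * N:
            r1, r2 = rand_others(i, 2)
            v[j] = X[i, j] + rng.uniform(-1, 1) * (X[r1, j] - X[r2, j])
        else:
            v[j] = X[i, j] + rng.uniform(0, 1) * (X[best, j] - X[i, j])
        evaluate_and_select(i, v)

    for i in range(N):                       # scout bee phase
        if trial[i] > MaxTrial:
            X[i] = lb + rng.random(D) * (ub - lb)
            fv[i], trial[i] = f(X[i]), 0
            Fes += 1

    if rng.random() < p:                     # auxiliary adversarial phase, Eq. (8)
        best = int(np.argmin(fv))
        i = int(rng.choice(N, p=rank_prob()))
        j = rng.integers(D)
        v = X[i].copy()
        v[j] = X[i, j] + rng.uniform(0, 1) * (X[i, j] - X[best, j])
        evaluate_and_select(i, v)

print("best objective found:", fv.min())
```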

Empirical study

To assess the effectiveness of BPLABC, in this section, we first carry out the validity verification concerning each of the four key modules embedded in the framework of BPLABC, followed by the sensitivity analysis on the scale parameter q and the control parameter p to acquire some insights into the effects of these two parameters on the convergence of BPLABC. Finally, the efficacy of BPLABC is evaluated by comparing it to six representative ABC variants.

Experimental configuration

Two suites of benchmark problems are used for the performance tests in the succeeding experiments. The first suite contains 23 benchmark functions widely used in the literature [22], including seven unimodal problems with variable dimensionality, six multimodal problems with variable dimensionality, and 10 multimodal problems with fixed dimensionality. Table 1 lists the characteristics of the problems in the first suite. The second test suite contains the 30 CEC2014 benchmark problems introduced in [15].
Table 1
Characteristics of benchmark problems in the first test suite

| No | Function name | Optimum | Search domain | Dimension |
|---|---|---|---|---|
| F01 | Sphere function | 0 | [−100, 100] | n |
| F02 | Schwefel’s problem 2.22 | 0 | [−10, 10] | n |
| F03 | Schwefel’s problem 1.2 | 0 | [−100, 100] | n |
| F04 | Schwefel’s problem 2.21 | 0 | [−100, 100] | n |
| F05 | Generalized Rosenbrock’s function | 0 | [−30, 30] | n |
| F06 | Step function | 0 | [−100, 100] | n |
| F07 | Quartic function, i.e., noise | 0 | [−1.28, 1.27] | n |
| F08 | Generalized Schwefel’s problem 2.26 | −12,569.5 | [−500, 500] | n |
| F09 | Generalized Rastrigin’s function | 0 | [−5.12, 5.12] | n |
| F10 | Ackley’s function | 0 | [−32, 32] | n |
| F11 | Generalized Griewank’s function | 0 | [−600, 600] | n |
| F12 | Generalized penalized function 1 | 0 | [−50, 50] | n |
| F13 | Generalized penalized function 2 | 0 | [−50, 50] | n |
| F14 | Shekel’s Foxholes function | 0.998 | [−65.53, 65.53] | 2 |
| F15 | Kowalik’s function | 0.0003075 | [−5, 5] | 4 |
| F16 | Six-hump Camel-Back function | −1.031629 | [−5, 5] | 2 |
| F17 | Branin function | 0.398 | [−5, 10] × [0, 15] | 2 |
| F18 | Goldstein-Price function | 3 | [−5, 5] | 2 |
| F19 | Hartman’s family 1 | −3.86 | [0, 1] | 4 |
| F20 | Hartman’s family 2 | −3.32 | [0, 1] | 6 |
| F21 | Shekel’s family 1 | −10 | [0, 10] | 4 |
| F22 | Shekel’s family 2 | −10 | [0, 10] | 4 |
| F23 | Shekel’s family 3 | −10 | [0, 10] | 4 |
For all compared algorithms involved in the following experiments, the number of nectar sources is set to \(N = 100\) and MaxTrial is set to \(0.6 \times D \times N\), where D is the problem dimension. The other algorithm parameters of these contestants follow the settings in the corresponding literature. In particular, for the proposed BPLABC, the scale parameter \(q = 0.8\) and the control parameter \(p = 0.5\) are used according to the sensitivity analysis in Sect. “Sensitivity analysis”. In this work, all compared algorithms terminate at \(MaxFes = 5 \times 10^{4}\), and each algorithm performs 30 independent runs for each test instance.

Comparison results regarding the four main modules in BPLABC

In this work, four improvements are made to the basic ABC to form BPLABC. The first two introduce the bi-preference linkage-driven search equations, one for the employed bee phase and the other for the onlooker bee phase. The third improvement comprises a roulette probability calculation paradigm based on the ranking of objective function values, and the fourth concerns the adversarial search equation. To verify the effectiveness of these four modules, we design four ABC variants, each embedding a single module into the basic ABC framework, as shown in Table 2.
Table 2
The characteristics of the four ABC variants, each incorporating a single module

| ABC variants | Characteristics |
|---|---|
| ABC-E | Equation (5) is used instead of Eq. (2) in the employed bee phase to generate offspring nectar sources in ABC |
| ABC-O | Equation (6) is used instead of Eq. (2) in the onlooker bee phase to direct the onlooker bee exploitation in ABC |
| ABC-R | Equation (7) is used instead of Eqs. (3)–(4) to compute the roulette wheel selection probability in ABC |
| ABC-A | Equation (8) is integrated into ABC to perform an auxiliary search after the scout bee phase |
BPLABC and the four ABC variants mentioned above are tested on the first suite of benchmark problems, with the dimensionality of the first 13 problems fixed at 30. Additionally, the basic ABC is included as a baseline for comparison. Thus, a total of six algorithms, i.e., ABC, ABC-E, ABC-O, ABC-R, ABC-A, and BPLABC, are involved in this experiment. For fairness, the parameter settings are the same for all six algorithms. Each algorithm performs 30 independent runs for each problem, and the mean of the best solutions obtained by each approach is listed in Table 3, with the best one on each test instance marked in bold. The pairwise Wilcoxon rank sum test is performed on the final results at a 5% significance level, with BPLABC as the baseline method. In Table 3, ‘+’, ‘=’ and ‘−’ indicate that the compared algorithm is significantly better than, approximately equal to, and significantly worse than BPLABC, respectively.
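For reference, a pairwise Wilcoxon rank sum test of this kind can be carried out with SciPy as sketched below; the per-run values are synthetic placeholders, and deciding the ‘+’/‘−’ direction from the sample means is an illustrative choice rather than the paper's exact procedure.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
# 30 independent best values per algorithm on one problem (illustrative data only)
bplabc_runs = rng.normal(loc=1e-3, scale=1e-4, size=30)
abc_runs = rng.normal(loc=5e-1, scale=1e-1, size=30)

stat, p_value = ranksums(abc_runs, bplabc_runs)
if p_value < 0.05:
    mark = '+' if np.mean(abc_runs) < np.mean(bplabc_runs) else '-'   # minimization
else:
    mark = '='
print(mark, p_value)
```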
Table 3
Comparison results of the ABC, ABC-E, ABC-O, ABC-R, ABC-A and BPLABC algorithms

| No | ABC | ABC-E | ABC-O | ABC-R | ABC-A | BPLABC |
|---|---|---|---|---|---|---|
| F01 | 2.34E+02− | 7.62E−08− | 2.00E−07− | 2.30E+02− | 7.59E−112− | 1.76E−147 |
| F02 | 5.69E+00− | 5.78E−05− | 7.06E−05− | 4.38E+00− | 1.30E−59− | 4.06E−79 |
| F03 | 3.36E+04− | 4.12E+03− | 2.78E+02− | 3.51E+04− | 7.12E−45− | 3.26E−76 |
| F04 | 6.23E+01− | 1.55E+00− | 1.62E+00− | 6.11E+01− | 1.50E−49− | 2.66E−69 |
| F05 | 4.88E+05− | 5.19E+01− | 3.58E+02− | 5.18E+05− | 2.72E+01− | 2.40E+01 |
| F06 | 2.48E+02− | 7.35E−08− | 1.28E−07− | 2.27E+02− | 1.75E−01− | 1.68E−15 |
| F07 | 5.35E−01− | 1.92E−02− | 2.97E−02− | 4.91E−01− | 6.02E−04− | 2.99E−04 |
| F08 | −4.81E+03− | −5.60E+03− | −5.80E+03− | −4.78E+03− | −4.73E+03− | −7.44E+03 |
| F09 | 2.37E+02− | 1.33E+02− | 2.61E+01− | 2.34E+02− | 0.00E+00= | 0.00E+00 |
| F10 | 6.48E+00− | 8.32E−05− | 7.98E−01− | 6.48E+00− | 3.55E−16+ | 1.30E−15 |
| F11 | 3.21E+00− | 5.91E−03− | 1.43E−02− | 3.06E+00− | 0.00E+00= | 0.00E+00 |
| F12 | 1.42E+05− | 6.91E−03− | 2.59E+01− | 2.95E+05− | 1.17E−02− | 2.80E−17 |
| F13 | 1.05E+06− | 1.07E−03+ | 1.01E+02− | 1.70E+06− | 3.89E−01− | 2.11E−02 |
| F14 | 9.98E−01= | 9.98E−01= | 9.98E−01= | 9.98E−01= | 9.98E−01− | 9.98E−01 |
| F15 | 6.89E−04− | 1.39E−03− | 5.52E−04− | 6.58E−04− | 6.41E−04− | 3.81E−04 |
| F16 | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00 |
| F17 | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01 |
| F18 | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00 |
| F19 | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00 |
| F20 | −3.32E+00+ | −3.29E+00+ | −3.32E+00+ | −3.32E+00+ | −3.32E+00+ | −3.28E+00 |
| F21 | −1.01E+01+ | −9.12E+00+ | −1.01E+01+ | −1.01E+01+ | −1.01E+01+ | −8.50E+00 |
| F22 | −1.04E+01− | −1.04E+01= | −1.04E+01− | −1.04E+01− | −1.04E+01− | −1.04E+01 |
| F23 | −1.05E+01− | −1.05E+01= | −1.05E+01− | −1.05E+01− | −1.05E+01− | −1.05E+01 |
| +/=/− | 2/5/16 | 3/7/13 | 2/5/16 | 2/5/16 | 3/6/14 | |
On most of the selected benchmark problems, ABC-E and ABC-O achieve significantly better results than ABC, demonstrating the efficiency of the bi-preference linkage-driven search equation fusion mechanism in improving ABC’s convergence. We can also notice from Table 3 that there is not much disparity between the best solutions obtained by ABC-R and ABC on the selected problems. Noting that the difference between these two approaches lies only in the calculation paradigm of the selection probability, while all other search operators are configured identically, both possess the same exploration capacity, so there is no significant difference between the selection probabilities calculated from either the objective function value or the ranking of the objective function value. However, in terms of the obtained best solutions, BPLABC, which consists of the four enhanced modules, significantly surpasses the four ABC variants embedded with a single module as well as the basic ABC on the selected benchmark problems. In general, among the contestants involved in this experiment, BPLABC wins on at least 13 out of 23 problems against each of the other five competitors. For the first 13 unimodal and multimodal problems with variable dimensionality, BPLABC obtains the best solutions in most of the instances, while showing strong competitiveness in the remaining ones. Overall, the results in Table 3 demonstrate that the recommended four improvements work synergistically to fully exert their linkage effectiveness in striking a good balance between exploration and exploitation.

Sensitivity analysis

Adjustment for scale parameter q

BPLABC introduces a scale parameter q to regulate the ratio of the two search equations triggered in the onlooker bee phase. A large value of q tends to strengthen the exploitation around the elite solutions and loosen the convergence toward the best solution. Conversely, a small value of q tends to guide the search to converge toward the best solution while diminishing the exploitation surrounding the elite solutions. Thus, the parameter q plays a key role in balancing the search preference between elite solution exploitation and convergence to the best solution. To determine a suitable q value for BPLABC, different q values ranging from 0.1 to 1, i.e., \(q = \{ 0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1\}\), are chosen for testing, and the other parameters adopt the same configuration as in Sect. “Experimental configuration”. Here, the first suite of benchmark problems is employed to validate the effect of different q values on the performance of BPLABC, where the dimensions of problems F01–F13 are set to 30. Table 4 presents the mean results obtained with the various q values over 30 independent runs, where the best ones are bolded.
Table 4
The average results of BPLABC with different q-values over 30 independent runs

| No | q = 0.1 | q = 0.2 | q = 0.3 | q = 0.4 | q = 0.5 | q = 0.6 | q = 0.7 | q = 0.8 | q = 0.9 | q = 1 |
|---|---|---|---|---|---|---|---|---|---|---|
| F01 | 1.02E−248+ | 5.11E−234+ | 9.23E−222+ | 1.07E−206+ | 2.68E−197+ | 9.28E−178+ | 8.44E−166+ | 1.76E−147 | 2.50E−132− | 1.54E−115− |
| F02 | 1.22E−127+ | 3.35E−122+ | 4.47E−116+ | 1.01E−108+ | 1.74E−102+ | 1.12E−95+ | 3.98E−87+ | 4.06E−79 | 1.44E−68− | 1.54E−61− |
| F03 | 1.01E−134+ | 4.97E−131+ | 1.75E−124+ | 3.56E−115+ | 7.84E−107+ | 6.07E−94+ | 7.36E−87+ | 3.26E−76 | 2.88E−64− | 9.13E−52− |
| F04 | 2.38E−114+ | 4.53E−108+ | 5.46E−103+ | 1.30E−97+ | 1.71E−91+ | 2.49E−84+ | 1.51E−78+ | 2.66E−69 | 5.71E−61− | 3.13E−52− |
| F05 | 2.80E+01− | 2.76E+01− | 2.73E+01− | 2.63E+01− | 2.55E+01− | 2.48E+01− | 2.43E+01− | 2.40E+01 | 2.43E+01− | 2.47E+01− |
| F06 | 9.61E−01− | 6.41E−01− | 1.55E−01− | 2.63E−02− | 4.79E−06− | 2.75E−11− | 5.72E−15− | 1.68E−15 | 4.37E−14− | 1.73E−08− |
| F07 | 1.01E−04+ | 1.13E−04+ | 1.34E−04+ | 2.04E−04+ | 2.01E−04+ | 1.91E−04+ | 2.70E−04+ | 2.99E−04 | 3.45E−04− | 4.65E−04− |
| F08 | −7.31E+03− | −7.31E+03− | −7.65E+03+ | −7.82E+03+ | −8.28E+03+ | −7.83E+03+ | −8.00E+03+ | −7.44E+03 | −6.17E+03− | −5.32E+03− |
| F09 | 0.00E+00= | 0.00E+00= | 0.00E+00= | 1.31E+00− | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00 | 0.00E+00= | 0.00E+00= |
| F10 | 1.89E−15− | 1.54E−15− | 2.13E−15− | 2.01E−15− | 1.30E−15= | 7.11E−16+ | 1.42E−15− | 1.30E−15 | 9.47E−16+ | 9.47E−16+ |
| F11 | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00 | 0.00E+00= | 0.00E+00= |
| F12 | 3.34E−02− | 1.30E−02− | 4.06E−03− | 3.95E−04− | 4.80E−07− | 4.34E−13− | 9.59E−17− | 2.80E−17 | 7.98E−16− | 7.02E−10− |
| F13 | 1.27E+00− | 1.12E+00− | 7.61E−01− | 3.78E−01− | 2.74E−01− | 1.28E−01− | 3.69E−00− | 2.11E−02 | 1.72E−02+ | 7.93E−04+ |
| F14 | 1.03E+00− | 9.98E−01= | 1.03E+00− | 9.98E−01= | 9.98E−01= | 9.98E−01= | 1.03E+00− | 9.98E−01 | 9.98E−01= | 9.98E−01= |
| F15 | 6.01E−04− | 5.29E−04− | 4.03E−04− | 4.02E−04− | 3.82E−04− | 3.43E−04+ | 3.76E−04+ | 3.81E−04 | 3.40E−04+ | 4.64E−04− |
| F16 | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00 | −1.03E+00= | −1.03E+00= |
| F17 | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01 | 3.98E−01= | 3.98E−01= |
| F18 | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00 | 3.00E+00= | 3.00E+00= |
| F19 | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00 | −3.86E+00= | −3.86E+00= |
| F20 | −3.29E+00+ | −3.26E+00− | −3.28E+00= | −3.29E+00+ | −3.27E+00− | −3.26E+00− | −3.26E+00− | −3.28E+00 | −3.28E+00= | −3.28E+00= |
| F21 | −7.67E+00− | −7.34E+00− | −8.38E+00− | −8.43E+00− | −8.17E+00− | −7.66E+00− | −6.86E+00− | −8.50E+00 | −7.89E+00− | −7.82E+00− |
| F22 | −6.63E+00− | −7.48E+00− | −9.14E+00− | −9.19E+00− | −9.78E+00− | −9.76E+00− | −1.03E+01− | −1.04E+01 | −1.04E+01− | −1.02E+01− |
| F23 | −9.12E+00− | −1.01E+01− | −9.80E+00− | −9.79E+00− | −1.01E+01− | −1.02E+01− | −1.04E+01− | −1.05E+01 | −1.05E+01= | −1.05E+01= |
| +/=/− | 6/6/11 | 5/7/11 | 6/7/10 | 7/6/10 | 6/8/9 | 8/7/8 | 7/6/10 | – | 3/9/11 | 2/9/12 |
According to Table 4, BPLABC performs best on the first four unimodal functions when \(q = 0.1\); however, when \(q = 0.8\), it performs best on the majority of the remaining unimodal and multimodal problems. Therefore, in this work, we fix \(q = 0.8\) in the subsequent experiments to ensure that BPLABC operates at its best overall.

Adjustment for control parameter p

BPLABC introduces a new parameter p to regulate the frequency of activating the auxiliary search equation. The role of the auxiliary search equation is to drag some promising elite solutions away from the current global best, so as to control the greedy guidance of the pseudo-global best solution and maintain a certain exploitation diversity for locating a new promising optimal region. To configure a suitable p for the proposed method, in this subsection we test BPLABC with different p values ranging from 0.1 to 1, i.e., \(p = \{ 0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1\}\), on the first suite of problems, where the dimensionality of the first 13 test functions is set to 30 and the other parameters follow the same settings as in Sect. “Experimental configuration”. Table 5 shows the mean values of the best solutions obtained by each instantiation over 30 independent runs. From Table 5, we can find that the performance of BPLABC on the unimodal functions (F01, F02, F03 and F04) improves significantly as p grows from 0.1 to 1. However, once p exceeds 0.5, the performance of BPLABC on the 16 multimodal functions shows no appreciable improvement. Therefore, in BPLABC, we choose \(p = 0.5\) to allow the adversarial search operator to better accommodate different types of fitness landscapes.
Table 5
The average results from 30 independent runs of BPLABC using different p values

| No | p = 0.1 | p = 0.2 | p = 0.3 | p = 0.4 | p = 0.5 | p = 0.6 | p = 0.7 | p = 0.8 | p = 0.9 | p = 1 |
|---|---|---|---|---|---|---|---|---|---|---|
| F01 | 2.76E−75− | 1.16E−96− | 7.48E−112− | 6.13E−134− | 1.76E−147 | 4.90E−165+ | 3.42E−175+ | 1.19E−191+ | 1.24E−205+ | 2.93E−217+ |
| F02 | 7.19E−39− | 2.51E−51− | 1.02E−61− | 8.34E−71− | 4.06E−79 | 8.13E−88+ | 2.23E−94+ | 1.16E−100+ | 3.87E−106+ | 7.49E−115+ |
| F03 | 7.41E−30− | 3.22E−46− | 3.11E−58− | 9.19E−70− | 3.26E−76 | 3.72E−82+ | 1.34E−91+ | 3.65E−92+ | 1.62E−102+ | 2.88E−109+ |
| F04 | 1.47E−32− | 7.23E−44− | 2.78E−52− | 1.50E−61− | 2.66E−69 | 7.11E−77+ | 2.71E−82+ | 4.80E−91+ | 9.35E−96+ | 3.12E−102+ |
| F05 | 2.39E+01+ | 2.38E+01+ | 2.39E+01+ | 2.41E+01− | 2.40E+01 | 2.40E+01= | 2.41E+01− | 2.43E+01− | 2.43E+01− | 2.45E+01− |
| F06 | 2.32E−18+ | 5.39E−18+ | 6.46E−17+ | 1.72E−16+ | 1.68E−15 | 3.36E−15− | 1.72E−14− | 3.69E−14− | 1.53E−13− | 5.38E−13− |
| F07 | 8.07E−04− | 5.85E−04− | 4.13E−04− | 3.43E−04− | 2.99E−04 | 2.97E−04+ | 2.51E−04+ | 1.94E−04+ | 1.97E−04+ | 2.09E−04+ |
| F08 | −7.52E+03+ | −8.03E+03+ | −7.06E+03− | −7.30E+03− | −7.44E+03 | −6.74E+03− | −6.81E+03− | −6.74E+03− | −6.48E+03− | −6.38E+03− |
| F09 | 7.30E−01− | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00 | 1.66E+00− | 9.29E−01− | 0.00E+00= | 0.00E+00= | 0.00E+00= |
| F10 | 3.32E−15− | 2.37E−15− | 1.66E−15− | 1.78E−15− | 1.30E−15 | 9.47E−16+ | 8.29E−16+ | 5.92E−16+ | 5.92E−16+ | 2.37E−16+ |
| F11 | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00 | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00= |
| F12 | 2.43E−20+ | 2.33E−19+ | 1.52E−18+ | 2.46E−18+ | 2.80E−17 | 7.96E−17− | 2.58E−16− | 9.00E−16− | 2.86E−15− | 8.35E−15− |
| F13 | 1.39E−02+ | 1.86E−02+ | 2.01E−02+ | 2.74E−02− | 2.11E−02 | 4.72E−02− | 5.67E−02− | 6.31E−02− | 1.08E−01− | 2.93E−01− |
| F14 | 9.98E−01= | 9.98E−01= | 9.98E−01= | 9.98E−01= | 9.98E−01 | 9.98E−01= | 9.98E−01= | 9.98E−01= | 9.98E−01= | 9.98E−01= |
| F15 | 3.70E−04+ | 3.23E−04+ | 4.59E−04− | 4.16E−04− | 3.81E−04 | 4.12E−04− | 3.83E−04− | 4.48E−04− | 4.03E−04− | 3.86E−04− |
| F16 | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00 | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00= |
| F17 | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01 | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= |
| F18 | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00 | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= |
| F19 | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00 | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= |
| F20 | −3.26E+00− | −3.27E+00− | −3.29E+00+ | −3.27E+00− | −3.28E+00 | −3.27E+00− | −3.27E+00− | −3.28E+00= | −3.28E+00= | −3.28E+00= |
| F21 | −9.44E+00+ | −9.51E+00+ | −9.36E+00+ | −9.02E+00+ | −8.50E+00 | −8.53E+00+ | −7.79E+00− | −7.56E+00− | −8.22E+00− | −7.38E+00− |
| F22 | −1.02E+01− | −1.04E+01= | −1.04E+01= | −1.03E+01− | −1.04E+01 | −9.83E+00− | −1.02E+01− | −1.04E+01− | −1.04E+01− | −9.58E+00− |
| F23 | −1.05E+01− | −1.05E+01= | −1.05E+01= | −1.05E+01= | −1.05E+01 | −1.03E+01− | −1.05E+01− | −1.03E+01− | −1.05E+01− | −1.04E+01− |
| +/=/− | 7/6/10 | 6/9/7 | 6/9/8 | 3/8/12 | – | 7/7/9 | 6/6/11 | 6/8/9 | 6/8/9 | 6/8/9 |

Bold values denote the best mean results on each test instance

Comparison results of BPLABC versus six popular improved ABCs

To further validate the efficacy of the proposed BPLABC, in this section we compare BPLABC with six representative ABC variants, including GABC [25], GABCS [10], CABC [9], IABC [17], TLABC [3], and BEABC [18], on the two selected suites of benchmark problems to obtain deeper insights into the optimization performance of BPLABC. Table 6 recapitulates the characteristics of these ABC variants, which are isomorphic to BPLABC.
Table 6
The characteristics of the six representative ABC variants

| ABC contestants | Characteristics |
|---|---|
| GABC | Appended the present global best to ABC’s search operator for upgrading its exploitation capacity |
| GABCS | Incorporated the individual historical best solutions and the global best solution into the ABC search equation for enhancing the convergence accuracy of ABC |
| CABC | Used a search equation with no search preference for any direction to address the oscillation issue caused by the GABC search equation |
| IABC | Employed the chaotic system and the opposition-based learning strategy to generate the initial population; incorporated the current best and second-best solutions into the onlooker bee search equation of the basic ABC; used a chaotic search equation around the current best solution to improve global convergence |
| TLABC | Developed a teach-and-learn optimized search equation for the employed bee stage, coupled with DE/rand/1, to ensure diversity and improve the population quality; used a search equation learning towards a stochastically selected better solution in the onlooker bee phase; combined a discarded-solution replacement with the opposition-based learning strategy for the scout bee stage |
| BEABC | Employed Bayesian estimation to calculate the selection probability of the roulette wheel; used either some random solutions better than the parent solutions or the best solution to guide the offspring generation in the onlooker bee phase; used the best solution to guide the offspring generation in the scout bee phase |
In the experiment, the dimensions of the first 13 problems in the first test suite are set to 30 and 50, respectively, and the last 10 problems keep their fixed dimensions. For the second test suite, 30, 50 and 100 dimensions are considered, where the maximum number of objective function evaluations for the compared algorithms on the 100-dimensional instances is constrained to \(1 \times 10^{5}\), and the rest of the settings remain the same as aforementioned. In addition, Wilcoxon’s rank sum test is conducted at a 5% significance level on the final statistical results, where ‘+’, ‘=’ and ‘−’ indicate that the compared algorithm is significantly better than, approximately equal to, or significantly worse than the proposed BPLABC algorithm.

Comparison results on the first suite of benchmark problems

Tables 7 and 8 summarize the statistical results of the eight algorithms on the 30-D and 50-D instances, respectively; the results on the fixed-dimensional functions are included in Table 7, while Table 8 contains the results on the first 13 functions with D = 50.
Table 7
Comparison results of BPLABC against the other ABC variants on the first suite of problems

| No | ABC | GABC | CABC | GABCS | IABC | TLABC | BEABC | BPLABC |
|---|---|---|---|---|---|---|---|---|
| F01 | 2.34E+02− | 3.05E−20− | 7.63E−03− | 5.80E−09− | 1.86E−22− | 1.53E−29− | 6.98E−140− | 1.76E−147 |
| F02 | 5.69E+00− | 3.07E−11− | 1.51E−02− | 2.51E−04− | 6.54E−03− | 2.38E−13− | 2.34E−77− | 4.06E−79 |
| F03 | 3.36E+04− | 6.53E+02− | 9.83E+03− | 1.04E+04− | 2.66E−22− | 2.01E−01− | 1.14E−123+ | 3.26E−76 |
| F04 | 6.23E+01− | 3.06E−01− | 1.06E+01− | 4.66E+00− | 1.81E−19− | 6.68E−11− | 1.18E−64− | 2.66E−69 |
| F05 | 4.88E+05− | 3.91E+01− | 9.42E+01− | 9.53E+01− | 2.87E+01− | 2.48E+01− | 2.71E+01− | 2.40E+01 |
| F06 | 2.48E+02− | 5.75E−20+ | 8.68E−03− | 5.53E−09− | 6.26E−01− | 8.11E−11− | 4.34E−02− | 1.68E−15 |
| F07 | 5.35E−01− | 1.35E−02− | 4.15E−02− | 2.67E−02− | 9.31E−05+ | 2.60E−03− | 1.11E−04+ | 2.99E−04 |
| F08 | −4.81E+03− | −7.07E+03− | −9.08E+03+ | −6.85E+03− | −7.39E+03− | −7.38E+03− | −8.52E+03+ | −7.44E+03 |
| F09 | 2.37E+02− | 5.47E+01− | 3.82E+01− | 1.59E+02− | 0.00E+00= | 4.64E+00− | 0.00E+00= | 0.00E+00 |
| F10 | 6.48E+00− | 1.14E−01− | 1.91E−02− | 2.30E−05− | 0.00E+00+ | 5.68E−15− | 0.00E+00+ | 1.30E−15 |
| F11 | 3.21E+00− | 9.93E−03− | 2.34E−02− | 9.28E−03− | 0.00E+00= | 0.00E+00= | 0.00E+00= | 0.00E+00 |
| F12 | 1.42E+05− | 7.96E−02− | 5.21E−02− | 1.86E+01− | 5.25E−02− | 1.33E−10− | 1.35E−03− | 2.80E−17 |
| F13 | 1.05E+06− | 3.69E−02− | 9.23E−02− | 2.21E+01− | 3.94E−01− | 3.30E−03+ | 9.94E−02− | 2.11E−02 |
| F14 | 9.98E−01= | 9.98E−01= | 1.13E+00− | 9.98E−01= | 9.98E−01= | 9.98E−01= | 9.98E−01= | 9.98E−01 |
| F15 | 6.89E−04− | 4.63E−03− | 7.34E−04− | 1.50E−03− | 3.54E−04+ | 3.08E−04+ | 3.13E−04+ | 3.81E−04 |
| F16 | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.03E+00= | −1.00E+03− | −1.03E+00= | −1.03E+00= | −1.03E+00 |
| F17 | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01= | 3.98E−01 |
| F18 | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00= | 3.00E+00 |
| F19 | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00= | −3.86E+00 |
| F20 | −3.32E+00+ | −3.25E+00− | −3.25E+00− | −3.28E+00+ | −3.32E+00+ | −3.32E+00+ | −3.32E+00+ | −3.28E+00 |
| F21 | −1.01E+01+ | −8.97E+00+ | −6.48E+00− | −9.07E+00+ | −5.73E+00− | −1.01E+01+ | −8.75E+00+ | −8.50E+00 |
| F22 | −1.04E+01− | −1.02E+01− | −9.70E+00− | −9.72E+00− | −5.32E+00− | −1.04E+01= | −9.87E+00− | −1.04E+01 |
| F23 | −1.05E+01− | −1.04E+01− | −1.03E+01− | −1.05E+01= | −5.97E+00− | −1.05E+01= | −1.02E+01− | −1.05E+01 |
| +/=/− | 2/5/16 | 2/5/16 | 1/4/18 | 2/6/15 | 4/6/13 | 4/8/11 | 7/7/9 | |

Bold values denote the best mean results on each test instance
Table 8
Comparison results on the first 13 benchmark problems of the first test suite with D = 50

| No | ABC | GABC | CABC | GABCS | IABC | TLABC | BEABC | BPLABC |
|---|---|---|---|---|---|---|---|---|
| F01 | 1.20E+04− | 5.05E−09− | 1.17E+02− | 1.57E−02− | 2.81E−22− | 4.71E−25− | 2.38E−142− | 1.03E−146 |
| F02 | 8.81E+01− | 2.33E+00− | 5.91E+00− | 1.12E+01− | 1.08E−02− | 3.81E−08− | 1.28E−74− | 1.50E−76 |
| F03 | 1.09E+05− | 2.79E+04− | 5.30E+04− | 6.43E+04− | 1.99E−21− | 3.85E+01− | 3.58E−127+ | 7.48E−64 |
| F04 | 8.73E+01− | 3.21E+01− | 7.71E+01− | 5.97E+01− | 1.48E−19− | 3.53E−09− | 1.03E−64− | 9.21E−67 |
| F05 | 4.02E+07− | 1.95E+02− | 4.65E+04− | 2.26E+02− | 4.86E+01− | 4.52E+01− | 4.70E+01− | 4.48E+01 |
| F06 | 1.19E+04− | 4.93E−09+ | 1.23E+02− | 1.44E−02− | 1.49E+00− | 2.11E−05− | 5.71E−01− | 6.05E−08 |
| F07 | 2.53E+01− | 5.06E−02− | 2.97E−01− | 1.20E−01− | 1.58E−04+ | 3.96E−03− | 1.49E−04+ | 3.12E−04 |
| F08 | −6.21E+03− | −1.01E+04− | −1.25E+04+ | −8.66E+03− | −1.12E+04+ | −9.40E+03− | −1.12E+04+ | −1.02E+04 |
| F09 | 5.17E+02− | 1.71E+02− | 1.48E+02− | 3.99E+02− | 0.00E+00= | 7.82E+00− | 0.00E+00= | 0.00E+00 |
| F10 | 1.64E+01− | 4.86E−01− | 3.71E+00− | 4.92E−02− | 0.00E+00+ | 8.63E−14− | 0.00E+00+ | 1.30E−15 |
| F11 | 1.12E+02− | 1.97E−03− | 1.96E+00− | 2.30E−02− | 0.00E+00= | 3.41E−13− | 0.00E+00= | 0.00E+00 |
| F12 | 8.68E+07− | 2.39E−01− | 2.31E+03− | 2.84E+01− | 6.65E−02− | 3.64E−06− | 1.06E−02− | 6.03E−10 |
| F13 | 1.71E+08− | 1.33E−01+ | 1.64E+04− | 6.12E+01− | 8.65E−01+ | 1.55E−02+ | 2.20E+00− | 1.15E+00 |
| +/=/− | 0/0/13 | 2/0/11 | 1/0/12 | 0/0/13 | 4/2/7 | 1/0/12 | 4/2/7 | |

Bold values denote the best mean results on each test instance
In general, from Table 7, it can be observed that BPLABC obtains the best results on four out of seven unimodal functions and 10 out of 16 multimodal functions against the other contestants. Specifically, in contrast to IABC, GABCS, CABC, GABC and the basic ABC, BPLABC achieves significantly better results on at least 13 instances and loses on at most four instances. Compared to TLABC, BPLABC wins on 11 instances, whereas it is defeated by TLABC on three of the 10 multimodal problems with fixed low dimensionality (F15, F20 and F21) and only one of the 13 high-dimensional problems (F13). Compared to BEABC, BPLABC wins and ties on seven instances each among the first test suite, yet BEABC beats BPLABC on four of the 13 high-dimensional problems (F03, F07, F08 and F10) and three of the 10 fixed-dimensional multimodal problems (F15, F20 and F21). In addition, we can also notice from Table 7 that BPLABC obtains superior results on F12 against the other contestants, where BEABC, IABC, GABCS, CABC and GABC may fall into the local trap. The result of GABC is better than that of BPLABC on the unimodal step function F06; we speculate that the additional weighting exerted on the global best solution guidance in GABC accelerates its convergence on this kind of fitness landscape. Moreover, BPLABC and TLABC acquire comparable results on F22 and F23, and the optimal solutions obtained by the two are significantly better than those of the other competitors, with the exception of GABCS, which also finds a similar optimum on F23.
Table 8 shows the results of the contestants on the 50-dimensional instances. From Table 8, we can conclude that, even though each algorithm’s overall performance degrades as the problem dimension rises, BPLABC consistently achieves better results than the other seven competitors, indicating its good scalability and robustness. More specifically, BPLABC defeats ABC and GABCS on all 13 problems with 50 dimensions and fails only on one problem against CABC (F08) and TLABC (F13), respectively. Compared to GABC, BPLABC wins on all the instances with the exception of a unimodal function (F06) and a multimodal function (F13). In terms of the optimal solutions obtained by IABC and BEABC, both are highly competitive with BPLABC on the multimodal instances. However, BPLABC locates significantly better solutions for most of the unimodal problems. To further assess the performance discrepancies among the eight contestants, we apply the Friedman test to compute the average rank of each contestant. The results for the 30-D and 50-D cases are summarized in Fig. 2. It can be seen that BPLABC obtains the best rank among the eight algorithms on the test problems with 30 and 50 dimensions. As a whole, from the results in Tables 7, 8 and Fig. 2, we can conclude that the proposed BPLABC provides a superior method for tackling most of the high-dimensional benchmark problems.
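The Friedman average ranks summarized in Fig. 2 can be computed in the spirit of the SciPy sketch below; the small result matrix is an illustrative placeholder, not the paper's data.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# rows = problems, columns = algorithms (illustrative mean errors, smaller is better)
results = np.array([
    [2.3e+2, 3.0e-20, 7.6e-3, 1.8e-147],
    [5.7e+0, 3.1e-11, 1.5e-2, 4.1e-79],
    [3.4e+4, 6.5e+2, 9.8e+3, 3.3e-76],
])

stat, p_value = friedmanchisquare(*results.T)      # one sample of values per algorithm
avg_rank = rankdata(results, axis=1).mean(axis=0)  # average rank of each algorithm across problems
print(p_value, avg_rank)
```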

Comparison results on the second suite of benchmark problems

Table 9 lists the comparison results of the contestants on the CEC2014 test problems characterized by complex fitness landscapes. From Table 9, it can be seen that BPLABC achieves the best solutions for most of the problems among the compared algorithms. To be specific, BPLABC significantly outperforms IABC on 23 out of 30 instances, followed by 22, 18, 17, 17, 14, and 13 out of 30 instances over the basic ABC, GABCS, GABC, BEABC, CABC, and TLABC, respectively, indicating the superior convergence performance of BPLABC. We can also observe from Table 9 that BPLABC obtains comparable results against the remaining contestants on F05, F12, F13, F14, and F26, where IABC performs the worst on F13, F14, and F26. Similarly, the results of BPLABC on F07 and F19 are competitive in contrast to those of GABC, CABC and TLABC, while there is no significant difference between BPLABC versus IABC, TLABC and BEABC on F24 and F25. However, apart from these instances, BPLABC defeats other contestants in a majority of the remaining cases, showing the remarkable generalization property of BPLABC on 30-dimensional problems.
Table 9
Comparison results on the second suite of 30-D benchmark problems

| No | ABC | GABC | CABC | GABCS | IABC | TLABC | BEABC | BPLABC |
|---|---|---|---|---|---|---|---|---|
| F01 | 2.61E+08− | 1.27E+07− | 1.54E+07− | 1.66E+07− | 6.82E+08− | 5.64E+06− | 8.04E+07− | 2.90E+06 |
| F02 | 3.84E+08− | 5.83E+03+ | 4.08E+04− | 2.50E+03+ | 5.53E+10− | 1.11E+04− | 3.27E+08− | 9.22E+03 |
| F03 | 1.19E+05− | 5.15E+04− | 3.24E+04− | 4.43E+04− | 5.67E+04− | 9.95E+03− | 1.73E+04− | 6.04E+03 |
| F04 | 7.23E+02− | 4.96E+02+ | 5.26E+02− | 5.11E+02− | 8.75E+03− | 5.12E+02− | 6.14E+02− | 5.05E+02 |
| F05 | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02 |
| F06 | 6.36E+02− | 6.09E+02+ | 6.08E+02+ | 6.09E+02+ | 6.34E+02− | 6.10E+02+ | 6.20E+02− | 6.16E+02 |
| F07 | 7.03E+02− | 7.00E+02= | 7.00E+02= | 7.00E+02= | 1.12E+03− | 7.00E+02= | 7.04E+02− | 7.00E+02 |
| F08 | 1.01E+03− | 9.31E+02− | 8.40E+02+ | 9.24E+02− | 1.05E+03− | 8.33E+02+ | 8.83E+02− | 8.60E+02 |
| F09 | 1.15E+03− | 1.11E+03− | 9.73E+02+ | 1.11E+03− | 1.20E+03− | 9.78E+02− | 1.10E+03− | 9.74E+02 |
| F10 | 7.47E+03− | 5.55E+03− | 2.66E+03+ | 5.69E+03− | 6.16E+03− | 6.02E+03− | 2.27E+03+ | 3.09E+03 |
| F11 | 8.51E+03− | 8.32E+03− | 6.86E+03+ | 8.34E+03− | 6.68E+03+ | 8.27E+03− | 6.73E+03+ | 7.36E+03 |
| F12 | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03 |
| F13 | 1.30E+03= | 1.30E+03= | 1.30E+03= | 1.30E+03= | 1.31E+03− | 1.30E+03= | 1.30E+03= | 1.30E+03 |
| F14 | 1.40E+03= | 1.40E+03= | 1.40E+03= | 1.40E+03= | 1.57E+03− | 1.40E+03= | 1.40E+03= | 1.40E+03 |
| F15 | 1.71E+03− | 1.52E+03− | 1.52E+03− | 1.52E+03− | 6.05E+04− | 1.52E+03− | 1.53E+03− | 1.51E+03 |
| F16 | 1.61E+03= | 1.61E+03= | 1.61E+03= | 1.61E+03= | 1.61E+03= | 1.61E+03= | 1.61E+03= | 1.61E+03 |
| F17 | 8.01E+06− | 1.25E+06− | 1.39E+06− | 1.43E+06− | 1.02E+07− | 6.90E+05− | 2.04E+06− | 4.22E+05 |
| F18 | 5.54E+04− | 8.45E+03− | 5.77E+03− | 9.58E+03− | 5.31E+08− | 3.08E+03+ | 1.21E+05− | 4.31E+03 |
| F19 | 1.92E+03− | 1.91E+03= | 1.91E+03= | 1.92E+03− | 2.18E+03− | 1.91E+03= | 1.92E+03− | 1.91E+03 |
| F20 | 4.37E+04− | 2.01E+04− | 3.94E+04− | 2.07E+04− | 1.44E+04− | 1.46E+04− | 1.81E+04− | 1.34E+04 |
| F21 | 1.66E+06− | 4.46E+05− | 6.01E+05− | 4.92E+05− | 1.95E+05− | 2.34E+05− | 4.74E+05− | 1.57E+05 |
| F22 | 2.83E+03− | 2.57E+03− | 2.57E+03− | 2.53E+03− | 2.84E+03− | 2.54E+03− | 2.60E+03− | 2.51E+03 |
| F23 | 2.62E+03= | 2.62E+03= | 2.62E+03= | 2.62E+03= | 2.50E+03+ | 2.62E+03= | 2.50E+03+ | 2.62E+03 |
| F24 | 2.65E+03− | 2.63E+03− | 2.63E+03− | 2.63E+03− | 2.60E+03= | 2.60E+03= | 2.60E+03= | 2.60E+03 |
| F25 | 2.74E+03− | 2.71E+03− | 2.71E+03− | 2.71E+03− | 2.70E+03= | 2.70E+03= | 2.70E+03= | 2.70E+03 |
| F26 | 2.70E+03= | 2.70E+03= | 2.70E+03= | 2.70E+03= | 2.71E+03− | 2.70E+03= | 2.71E+03− | 2.70E+03 |
| F27 | 3.56E+03− | 3.26E+03− | 3.21E+03− | 3.25E+03− | 3.46E+03− | 3.22E+03− | 3.18E+03+ | 3.20E+03 |
| F28 | 3.92E+03− | 3.72E+03+ | 3.68E+03+ | 3.71E+03+ | 4.44E+03− | 3.72E+03+ | 3.27E+03+ | 3.78E+03 |
| F29 | 2.99E+04+ | 5.91E+05− | 5.00E+03+ | 5.88E+05− | 1.16E+07− | 4.39E+03+ | 2.96E+04+ | 3.87E+05 |
| F30 | 3.70E+04− | 6.01E+03− | 9.64E+03− | 6.10E+03− | 1.00E+05− | 5.50E+03+ | 2.47E+04− | 5.88E+03 |
| +/=/− | 1/7/22 | 4/9/17 | 7/9/14 | 3/9/18 | 2/5/23 | 6/11/13 | 6/7/17 | |

Bold values denote the best mean results on each test instance
From Table 9, we can also notice that BPLABC dominates ABC, GABC, GABCS, IABC, TLABC and BEABC on most of the simple multimodal functions (F04–F16), but it is outperformed by TLABC, CABC, BEABC, and IABC on the simple multimodal functions F08, F09, F10, and F11, respectively. In particular, CABC defeats BPLABC on these four test instances and is only defeated by BPLABC on two (F04 and F15) of the 13 simple multimodal functions. This can be attributed to the preference-free search equation in CABC, which gives rise to a more diverse iterative population during the optimization. However, among the hybrid functions of the CEC2014 problems, BPLABC dominates CABC and achieves significantly better solutions than the other contestants on all the problems with the exception of F18, on which it loses to TLABC. In addition, in terms of the composition functions, featured by asymmetrical landscapes and different properties around the different local optima, BPLABC is outperformed by IABC and BEABC on F23 and is comparable with the remaining contestants. The result of BPLABC is slightly worse than that of BEABC on F27 and is less competitive with those of GABC, CABC, GABCS, TLABC and BEABC on F28. BPLABC surpasses GABC, GABCS, and IABC on F29 but fails to improve the convergence performance of ABC in tackling this problem. However, for F30, BPLABC wins against six of the seven contestants. We speculate that the degradation of BPLABC on these problems may result from a decline in population diversity. Since the bi-preference search equations in BPLABC put a certain emphasis on the exploitation of promising elite solutions, the population may be assimilated by these elites, so that the iterative population fails to jump out of local traps via the adversarial search with its small escape steps.
To be more intuitive, Fig. 3 depicts the corresponding convergence profiles of the compared algorithms on eight representative 30-D problems from the second suite of benchmark problems, including two rotated unimodal functions (F01 and F03), two shifted multimodal functions (F09 and F15), three hybrid functions (F17, F18, and F21), and one composition function (F29). For the unimodal function F01 and the hybrid multimodal function F17, the convergence curves of BPLABC decline steadily and outperform those of the other contestants, indicating the superior performance of BPLABC on these two problems over 50,000 FEs. For F03, BPLABC converges more slowly than BEABC until 2000 FEs but pulls away from the other algorithms afterward, especially after 30,000 FEs; we speculate that the bi-preference search operator in the employed bee phase of BPLABC steers the search toward high-quality solutions, while BEABC still carries out random exploration. For F09, which features a huge number of evenly distributed local optima, BPLABC does not converge as well as TLABC until 6000 FEs; however, its convergence performance improves remarkably in the subsequent search. We can also note that the convergence profile of BPLABC fails to decline consistently after about 20,000 FEs, and BPLABC, CABC, and TLABC converge to similar results over 50,000 FEs. This may be attributed to the strong exploitation in the basins of some elite solutions, which makes BPLABC consume more trials to locate a near-optimal solution, while the preference-free search equation of CABC and the teaching-and-learning-based search equations of TLABC allocate more exploration to locating the near-optimal region. Nevertheless, BPLABC converges the fastest on F15 and F18, achieving results comparable to the other algorithms on F15 and second only to TLABC on F18. For F21, BPLABC converges the fastest and, although it is overtaken by IABC in the middle period of the search, it still achieves the best solution after 50,000 FEs.
For F29, BPLABC only outperforms IABC, GABC, and GABCS and performs worse than ABC, CABC, TLABC, and BEABC. We suspect that the rapid decline in population diversity caused by the elite-based preference search in BPLABC hinders further explorative search for the promising optimal region. To substantiate this suspicion, a BPLABC variant without the auxiliary adversarial search phase, denoted BPLABC-NoDe, is designed and compared with the basic ABC on the second suite of 30-D benchmarks to gain insight into the impact of the bi-preference search equations on population diversity. Table 10 lists their final results. From Table 10, we can find that BPLABC-NoDe is not inferior to ABC on 29 of the 30 instances (winning on 25 and tying on four), yet it still fails to beat ABC on F29, revealing the limited exploration capacity of the bi-preference search equations of BPLABC on this instance. More intuitively, the evolution of the population diversity of BPLABC-NoDe during the search on F29 is also recorded in Fig. 4, wherein the population diversity (PD) metric in [19] is applied to measure the diversity of the iterative population; a higher PD value indicates better population diversity. As shown in Fig. 4, the population diversity of BPLABC-NoDe decreases rapidly and fails to remain as high as that of the basic ABC. Moreover, the population diversity of BPLABC-NoDe exhibits a minor fluctuation after 36,000 FEs, which can be attributed to the reinitialization of exhausted nectar sources during the scout bee phase. Afterward, it shows no significant fluctuations and becomes relatively smooth, indicating that a large part of the population might be caught in a local trap and fail to escape from it. Nevertheless, the results in Table 10 demonstrate the good convergence accuracy and promising performance of BPLABC-NoDe on problems characterized by different fitness landscapes, once again revealing the remarkable superiority of the proposed bi-preference search equations.
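To make the diversity measurement concrete, the following is a minimal sketch of one common way to quantify population diversity, namely the mean distance of the individuals to the population centroid. The exact PD metric of [19] may differ, so this is only an illustrative stand-in; the function name, normalization, and use of NumPy are our own choices.

```python
import numpy as np

def population_diversity(pop: np.ndarray) -> float:
    """Illustrative population-diversity measure (a stand-in for the PD metric of [19]).

    pop: (N, D) array holding the N food sources of the colony.
    Returns the average Euclidean distance of the individuals to the
    population centroid; larger values indicate a more diverse population.
    """
    centroid = pop.mean(axis=0)                        # (D,) population centre
    return float(np.linalg.norm(pop - centroid, axis=1).mean())

# Example: diversity of a random 30-D colony of 50 food sources
rng = np.random.default_rng(0)
colony = rng.uniform(-100.0, 100.0, size=(50, 30))
print(population_diversity(colony))
```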
Table 10
Comparison results of ABC and BPLABC-NoDe on the second suite of 30-D benchmarks
No | ABC | BPLABC-NoDe
F01 | 2.61E+08− | 1.56E+06
F02 | 3.84E+08− | 9.55E+03
F03 | 1.19E+05− | 6.32E+03
F04 | 7.23E+02− | 4.94E+02
F05 | 5.21E+02= | 5.21E+02
F06 | 6.36E+02− | 6.15E+02
F07 | 7.03E+02− | 7.00E+02
F08 | 1.01E+03− | 8.62E+02
F09 | 1.15E+03− | 9.75E+02
F10 | 7.47E+03− | 2.85E+03
F11 | 8.51E+03− | 5.79E+03
F12 | 1.20E+03= | 1.20E+03
F13 | 1.30E+03= | 1.30E+03
F14 | 1.40E+03= | 1.40E+03
F15 | 1.71E+03− | 1.51E+03
F16 | 1.61E+03− | 1.61E+03
F17 | 8.01E+06− | 2.79E+05
F18 | 5.54E+04− | 4.60E+03
F19 | 1.92E+03− | 1.91E+03
F20 | 4.37E+04− | 1.25E+04
F21 | 1.66E+06− | 1.23E+05
F22 | 2.83E+03− | 2.51E+03
F23 | 2.62E+03− | 2.62E+03
F24 | 2.65E+03− | 2.64E+03
F25 | 2.74E+03− | 2.71E+03
F26 | 2.70E+03− | 2.70E+03
F27 | 3.56E+03− | 3.30E+03
F28 | 3.92E+03− | 3.76E+03
F29 | 2.99E+04+ | 5.98E+05
F30 | 3.70E+04− | 5.99E+03
+/=/− | 1/4/25 | –
Bold values denote the best mean result for each test instance
Tables 11 and 12 summarize the final results of BPLABC versus the other ABC variants on the 50- and 100-dimensional instances, respectively. The performance of BPLABC relative to its competitors is similar to the 30-dimensional case, although some differences emerge as the problem dimension increases. More specifically, for the unimodal problems, when the dimension reaches 50 and 100, BPLABC beats GABCS on F02 and obtains significantly better results on F03. For the multimodal problems, BPLABC loses to GABC and GABCS on F04 in the 30- and 50-dimensional cases, respectively; however, it obtains the best results among all competitors when the problem scale reaches 100 dimensions, revealing its superior convergence accuracy in high-dimensional solution spaces. For F06, CABC outperforms BPLABC on the 50- and 100-dimensional instances; since CABC employs an operator with no search preference toward any direction, it possesses a greater capacity for exploration than BPLABC. For the 50-dimensional F10 and F11, BPLABC obtains the best results, while it lags behind TLABC in the 100-dimensional case. This is mainly due to the stronger exploitation property of BPLABC over TLABC, while TLABC concentrates on exploration via random learning and opposition-based learning. In terms of the hybrid problems, BPLABC dominates the compared algorithms on four of the six instances (F17, F20–F22) but struggles on the 50- and 100-dimensional F18 and F19 compared with GABC, CABC, GABCS, and TLABC. We suspect that the pseudo-random-number-based assignment scheme of the bi-preference search operators in BPLABC limits its convergence accuracy on these two instances. For the composition problems F23–F30, BPLABC generally loses to GABC, GABCS, and TLABC on the 50-dimensional F27–F30 according to the pairwise Wilcoxon rank-sum test. The performance deterioration of BPLABC on these composition problems is mainly due to a lack of population diversity, since high weights are associated with the elite-based search in BPLABC to enhance its convergence performance. However, the performance of BPLABC improves on the 100-dimensional F29 and F30, where it follows closely behind TLABC. Although the bi-preference linkage-driven operators of BPLABC act similarly to the teaching and learning operators of TLABC, BPLABC imposes an extra roulette wheel selection module compared with TLABC, which accelerates the convergence of the population and thus weakens the exploration of the solution space.
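For reference, the +/=/− entries reported in these tables can be reproduced with a pairwise Wilcoxon rank-sum test on the per-run results of each competitor against BPLABC. The sketch below illustrates such a tally; the use of SciPy's ranksums, a 0.05 significance level, and the synthetic data are our own assumptions rather than details stated in the paper.

```python
import numpy as np
from scipy.stats import ranksums

def tally(results_a: np.ndarray, results_b: np.ndarray, alpha: float = 0.05) -> str:
    """Compare a competitor (a) with BPLABC (b) on one minimization problem.

    results_a, results_b: final objective values over the independent runs.
    Returns '+' if a is significantly better (smaller), '-' if significantly
    worse, and '=' if the difference is not significant at level alpha.
    """
    _, p = ranksums(results_a, results_b)
    if p >= alpha:
        return "="
    return "+" if np.mean(results_a) < np.mean(results_b) else "-"

# Hypothetical per-run results of two algorithms on one function (30 runs each)
rng = np.random.default_rng(1)
a = 1400 + rng.normal(0.0, 5.0, 30)    # competitor
b = 1400 + rng.normal(-3.0, 5.0, 30)   # BPLABC
print(tally(a, b))                     # likely '-' since b tends to be smaller
```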
Table 11
Comparison results on the second suite of 50-D benchmark problems
No | ABC | GABC | CABC | GABCS | IABC | TLABC | BEABC | BPLABC
F01 | 8.75E+08− | 3.37E+07− | 5.43E+07− | 3.51E+07− | 2.22E+09− | 8.06E+06− | 2.05E+08− | 7.74E+06
F02 | 1.24E+10− | 5.06E+04− | 1.10E+08− | 5.22E+04− | 1.35E+11− | 2.31E+05− | 4.71E+09− | 1.04E+04
F03 | 2.75E+05− | 1.86E+05− | 2.10E+05− | 1.96E+05− | 1.06E+05− | 1.10E+05− | 8.78E+04− | 9.01E+04
F04 | 2.10E+03− | 5.25E+02+ | 5.46E+02+ | 5.12E+02+ | 3.39E+04− | 6.09E+02− | 1.27E+03− | 5.73E+02
F05 | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02
F06 | 6.70E+02− | 6.25E+02+ | 6.24E+02+ | 6.24E+02+ | 6.64E+02− | 6.24E+02+ | 6.45E+02− | 6.34E+02
F07 | 8.02E+02− | 7.00E+02= | 7.02E+02− | 7.00E+02= | 1.90E+03− | 7.00E+02= | 7.37E+02− | 7.00E+02
F08 | 1.27E+03− | 1.10E+03− | 9.25E+02+ | 1.10E+03− | 1.32E+03− | 8.99E+02+ | 1.05E+03− | 9.78E+02
F09 | 1.43E+03− | 1.36E+03− | 1.09E+03− | 1.36E+03− | 1.52E+03− | 1.04E+03+ | 1.36E+03− | 1.09E+03
F10 | 1.42E+04− | 1.25E+04− | 6.21E+03− | 1.22E+04− | 1.22E+04− | 1.25E+04− | 5.84E+03− | 5.60E+03
F11 | 1.52E+04− | 1.50E+04− | 1.44E+04− | 1.51E+04− | 1.39E+04− | 1.49E+04− | 1.37E+04− | 1.28E+04
F12 | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03
F13 | 1.30E+03= | 1.30E+03= | 1.30E+03= | 1.30E+03= | 1.31E+03− | 1.30E+03= | 1.30E+03= | 1.30E+03
F14 | 1.43E+03− | 1.40E+03= | 1.40E+03= | 1.40E+03= | 1.67E+03− | 1.40E+03= | 1.40E+03= | 1.40E+03
F15 | 1.17E+05− | 1.54E+03= | 1.55E+03− | 1.55E+03− | 1.83E+06− | 1.55E+03− | 2.14E+03− | 1.54E+03
F16 | 1.62E+03= | 1.62E+03= | 1.62E+03= | 1.62E+03= | 1.62E+03= | 1.62E+03= | 1.62E+03= | 1.62E+03
F17 | 5.62E+07− | 5.74E+06− | 5.78E+06− | 5.78E+06− | 1.41E+08− | 1.20E+06− | 1.61E+07− | 1.11E+06
F18 | 1.98E+05− | 5.19E+03− | 3.72E+03+ | 4.86E+03− | 7.68E+09− | 2.67E+03+ | 2.02E+06− | 4.05E+03
F19 | 1.99E+03− | 1.94E+03+ | 1.95E+03− | 1.94E+03+ | 2.93E+03− | 1.97E+03− | 1.98E+03− | 1.95E+03
F20 | 1.91E+05− | 7.59E+04− | 1.08E+05− | 8.01E+04− | 3.53E+04− | 3.93E+04− | 3.51E+04− | 2.82E+04
F21 | 2.10E+07− | 4.15E+06− | 4.55E+06− | 5.96E+06− | 6.18E+06− | 1.66E+06− | 4.50E+06− | 6.06E+05
F22 | 4.05E+03− | 3.79E+03− | 3.64E+03− | 3.78E+03− | 4.55E+03− | 3.66E+03− | 3.87E+03− | 3.25E+03
F23 | 2.67E+03− | 2.64E+03= | 2.64E+03= | 2.64E+03= | 2.50E+03+ | 2.64E+03= | 2.50E+03+ | 2.64E+03
F24 | 2.76E+03− | 2.68E+03− | 2.68E+03− | 2.68E+03− | 2.60E+03+ | 2.64E+03= | 2.60E+03+ | 2.64E+03
F25 | 2.83E+03− | 2.72E+03− | 2.72E+03− | 2.72E+03− | 2.70E+03= | 2.70E+03= | 2.70E+03= | 2.70E+03
F26 | 2.70E+03+ | 2.75E+03− | 2.78E+03− | 2.75E+03− | 2.71E+03+ | 2.76E+03− | 2.78E+03− | 2.74E+03
F27 | 4.67E+03− | 3.67E+03+ | 3.61E+03+ | 3.66E+03+ | 4.72E+03− | 3.65E+03+ | 4.13E+03− | 3.92E+03
F28 | 4.88E+03− | 4.38E+03+ | 4.07E+03+ | 4.30E+03+ | 7.00E+03− | 4.21E+03+ | 3.91E+03+ | 4.70E+03
F29 | 4.50E+05+ | 7.57E+06+ | 5.49E+06+ | 8.44E+06+ | 1.67E+08− | 4.82E+03+ | 5.85E+05+ | 8.75E+06
F30 | 2.00E+05− | 1.43E+04+ | 1.84E+04− | 1.47E+04+ | 1.87E+06− | 1.60E+04+ | 1.13E+05− | 1.67E+04
+/=/− | 2/4/24 | 7/8/15 | 7/6/17 | 7/7/16 | 3/4/23 | 8/9/13 | 4/6/20 | –
Bold values denote the best mean result for each test instance
Table 12
Comparison results on the second suite of 100-D benchmark problems
No | ABC | GABC | CABC | GABCS | IABC | TLABC | BEABC | BPLABC
F01 | 4.57E+09− | 3.04E+08− | 3.35E+08− | 2.96E+08− | 7.08E+09− | 5.42E+07− | 2.30E+08− | 4.37E+07
F02 | 8.53E+10− | 2.15E+09− | 4.36E+09− | 2.50E+09− | 2.85E+11− | 2.51E+07− | 6.36E+09− | 6.24E+05
F03 | 6.54E+05− | 4.39E+05− | 4.49E+05− | 4.37E+05− | 2.86E+05− | 1.99E+05− | 1.28E+05− | 8.80E+04
F04 | 1.20E+04− | 8.34E+02− | 1.02E+03− | 8.38E+02− | 8.39E+04− | 8.91E+02− | 1.47E+03− | 8.00E+02
F05 | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02= | 5.21E+02
F06 | 7.55E+02− | 6.81E+02+ | 6.73E+02+ | 6.83E+02+ | 7.47E+02− | 6.87E+02+ | 7.00E+02= | 7.00E+02
F07 | 1.38E+03− | 7.13E+02− | 7.42E+02− | 7.19E+02− | 3.56E+03− | 7.02E+02− | 7.60E+02− | 7.00E+02
F08 | 1.95E+03− | 1.73E+03− | 1.13E+03+ | 1.73E+03− | 2.01E+03− | 1.12E+03+ | 1.28E+03− | 1.27E+03
F09 | 2.20E+03− | 2.05E+03− | 1.35E+03+ | 2.03E+03− | 2.30E+03− | 1.28E+03+ | 1.72E+03− | 1.44E+03
F10 | 3.15E+04− | 2.92E+04− | 1.24E+04+ | 2.97E+04− | 2.73E+04− | 2.54E+04− | 1.07E+04+ | 1.30E+04
F11 | 3.26E+04− | 3.21E+04− | 2.97E+04− | 3.24E+04− | 2.83E+04− | 3.16E+04− | 2.23E+04+ | 2.48E+04
F12 | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03= | 1.20E+03
F13 | 1.30E+03= | 1.30E+03= | 1.30E+03= | 1.30E+03= | 1.31E+03= | 1.30E+03= | 1.30E+03= | 1.30E+03
F14 | 1.60E+03− | 1.40E+03= | 1.40E+03− | 1.40E+03= | 2.23E+03− | 1.40E+03− | 1.41E+03− | 1.40E+03
F15 | 5.25E+06− | 2.57E+03− | 7.42E+03− | 2.07E+03− | 1.31E+07− | 1.65E+03− | 2.50E+03− | 1.62E+03
F16 | 1.65E+03= | 1.65E+03= | 1.65E+03= | 1.65E+03= | 1.65E+03= | 1.65E+03= | 1.65E+03= | 1.65E+03
F17 | 3.90E+08− | 3.98E+07− | 2.36E+07− | 3.11E+07− | 8.67E+08− | 3.94E+06− | 4.44E+07− | 2.98E+06
F18 | 4.04E+06− | 8.28E+03+ | 1.61E+04− | 9.52E+03+ | 3.28E+10− | 2.70E+03+ | 5.64E+05− | 1.15E+04
F19 | 2.15E+03− | 2.02E+03+ | 2.02E+03+ | 2.01E+03+ | 7.42E+03− | 2.04E+03= | 2.09E+03− | 2.04E+03
F20 | 6.10E+05− | 2.65E+05− | 2.54E+05− | 2.68E+05− | 2.81E+05− | 9.20E+04− | 9.12E+04− | 6.82E+04
F21 | 1.80E+08− | 1.37E+07− | 1.05E+07− | 1.35E+07− | 2.49E+08− | 2.61E+06− | 1.74E+07− | 2.06E+06
F22 | 7.18E+03− | 6.85E+03− | 6.05E+03− | 6.92E+03− | 1.62E+04− | 5.32E+03− | 5.84E+03− | 4.62E+03
F23 | 2.91E+03− | 2.66E+03− | 2.66E+03− | 2.66E+03− | 2.50E+03+ | 2.65E+03− | 2.50E+03+ | 2.58E+03
F24 | 3.18E+03− | 2.84E+03− | 2.85E+03− | 2.84E+03− | 2.60E+03+ | 2.60E+03+ | 2.60E+03+ | 2.76E+03
F25 | 3.28E+03− | 2.76E+03− | 2.77E+03− | 2.77E+03− | 2.70E+03= | 2.70E+03= | 2.70E+03= | 2.70E+03
F26 | 2.90E+03− | 2.83E+03− | 2.83E+03− | 2.81E+03− | 2.79E+03+ | 2.80E+03= | 2.80E+03= | 2.80E+03
F27 | 6.82E+03− | 4.97E+03+ | 4.83E+03+ | 4.96E+03+ | 6.91E+03− | 4.90E+03+ | 5.31E+03+ | 5.53E+03
F28 | 8.52E+03− | 5.86E+03+ | 5.54E+03+ | 6.02E+03+ | 1.56E+04− | 6.29E+03+ | 4.09E+03+ | 7.44E+03
F29 | 2.81E+06− | 1.02E+07− | 8.64E+06− | 3.74E+06− | 1.06E+09− | 7.12E+03+ | 1.45E+05− | 7.14E+03
F30 | 3.25E+06− | 3.36E+04− | 6.69E+04− | 3.53E+04− | 3.79E+07− | 3.31E+04+ | 1.34E+05− | 3.33E+04
+/=/− | 0/4/26 | 5/5/20 | 7/4/19 | 5/5/20 | 3/5/22 | 9/7/14 | 6/7/17 | –
Bold values denote the best mean result for each test instance
In addition, Fig. 5 gives the average Friedman ranks for 30-, 50- and 100-dimensional instances and the best average rank is highlighted in bold. From Fig. 5, it can be seen that BPLABC ranks first, followed by TLABC, CABC, GABC, BEABC, GABCS, IABC and ABC regarding the 30-dimensional instances. For 50- and 100-dimensional instances, BPLABC also holds the first rank, indicating the excellent scalability of the proposed BPLABC.
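As a side note, the average Friedman ranks reported in Fig. 5 are typically obtained by ranking the mean results of the eight algorithms on each function (smaller is better) and averaging the ranks over all functions. The short sketch below illustrates this computation on a made-up result matrix; the data are placeholders, not values from the paper.

```python
import numpy as np
from scipy.stats import rankdata

def average_friedman_ranks(results: np.ndarray) -> np.ndarray:
    """results[i, j]: mean objective value of algorithm j on function i
    (smaller is better).  Returns the average rank of each algorithm,
    with ties receiving the average of the tied ranks."""
    ranks = np.apply_along_axis(rankdata, 1, results)  # rank within each function
    return ranks.mean(axis=0)

# Placeholder results: 30 functions x 8 algorithms
rng = np.random.default_rng(2)
demo = rng.random((30, 8))
print(average_friedman_ranks(demo))  # lower average rank = better algorithm
```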
In general, the results shown in Tables 9, 10, 11, 12 and Figs. 3, 4, 5 demonstrate the superior convergence performance and robustness of BPLABC in solving problems characterized by different fitness landscapes and complexities.

Results on real-world optimization problems

To better illustrate the performance of BPLABC on real-world problems, we further test it on the parameter estimation for frequency-modulated sound waves (PEFMSW) problem and the spread spectrum radar polyphase code design (SSRPCD) problem. The key to solving the former lies in optimizing the parameters of a frequency-modulated (FM) synthesizer so that the estimated sound wave matches the target sound wave, while the latter lies in selecting a suitable waveform for a pulse.
PEFMSW is modeled as a highly complex multimodal optimization problem with strong epistasis [5], which can be formulated through the following equations.
$$X = \{a_{1}, \omega_{1}, a_{2}, \omega_{2}, a_{3}, \omega_{3}\}$$
(9)
$$y(t) = a_{1} \cdot \sin(\omega_{1} \cdot t \cdot \theta + a_{2} \cdot \sin(\omega_{2} \cdot t \cdot \theta + a_{3} \cdot \sin(\omega_{3} \cdot t \cdot \theta)))$$
(10)
$$y_{0}(t) = 1.0 \cdot \sin(5.0 \cdot t \cdot \theta - 1.5 \cdot \sin(4.8 \cdot t \cdot \theta + 2.0 \cdot \sin(4.9 \cdot t \cdot \theta)))$$
(11)
$$\min f(\vec{X}) = \sum_{t=0}^{100} \left(y(t) - y_{0}(t)\right)^{2}$$
(12)
where \(a_{1}\), \(\omega_{1}\), \(a_{2}\), \(\omega_{2}\), \(a_{3}\), and \(\omega_{3}\) denote the six decision variables to be optimized, and \(\theta = 2\pi /100\). Equations (10) and (11) are the expressions for the estimated and target sound waves, respectively. Equation (12) is the fitness function, which is minimized within the specified hypercube \([-6.4, 6.35]^{6}\).
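The following minimal sketch shows how the PEFMSW fitness of Eqs. (9)–(12) can be evaluated for a candidate parameter vector; the function name and the use of NumPy are our own choices, not part of the original study.

```python
import numpy as np

THETA = 2.0 * np.pi / 100.0
T = np.arange(0, 101)  # t = 0, 1, ..., 100 as in Eq. (12)

def pefmsw_fitness(x: np.ndarray) -> float:
    """Sum of squared errors between the estimated wave y(t), Eq. (10),
    and the target wave y0(t), Eq. (11), for x = (a1, w1, a2, w2, a3, w3)."""
    a1, w1, a2, w2, a3, w3 = x
    y = a1 * np.sin(w1 * T * THETA + a2 * np.sin(w2 * T * THETA
                                                 + a3 * np.sin(w3 * T * THETA)))
    y0 = 1.0 * np.sin(5.0 * T * THETA - 1.5 * np.sin(4.8 * T * THETA
                                                     + 2.0 * np.sin(4.9 * T * THETA)))
    return float(np.sum((y - y0) ** 2))

# The parameter vector (1, 5, -1.5, 4.8, 2, 4.9) reproduces the target wave exactly,
# since the -1.5 absorbs the minus sign in Eq. (11), so the fitness is 0 there.
print(pefmsw_fitness(np.array([1.0, 5.0, -1.5, 4.8, 2.0, 4.9])))  # -> 0.0
```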
SSRPCD is modeled as a min–max nonlinear non-convex continuous optimization problem with a large number of local optima [5]. It is an NP-hard problem with a piecewise-smooth objective function. In general, the SSRPCD problem can be described mathematically by the following equations.
$$\text{global} \quad \min_{x \in X} f(x) = \max\{\phi_{1}(x), \ldots, \phi_{2m}(x)\}$$
(13)
$$\text{where } X = \{(x_{1}, \ldots, x_{n}) \in R^{n} \mid 0 \le x_{j} \le 2\pi,\ j = 1, \ldots, n\}, \quad m = 2n - 1,$$
$$\phi_{2i-1}(x) = \sum_{j=i}^{n} \cos\!\left(\sum_{k=|2i-j-1|+1}^{j} x_{k}\right), \quad i = 1, \ldots, n$$
(14)
$$\phi_{2i}(x) = 0.5 + \sum_{j=i+1}^{n} \cos\!\left(\sum_{k=|2i-j|+1}^{j} x_{k}\right), \quad i = 1, \ldots, n-1$$
(15)
$$\phi_{m+i}(x) = -\phi_{i}(x), \quad i = 1, \ldots, m$$
(16)
Here, three SSRPCD instances [5] with different complexities, i.e., the 19-D, 20-D, and 30-D SSRPCD problems, are considered.
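For concreteness, a minimal sketch of the SSRPCD objective of Eqs. (13)–(16) is given below; the function name, the NumPy usage, and the loop structure are our own choices rather than part of the original study.

```python
import numpy as np

def ssrpcd_objective(x: np.ndarray) -> float:
    """Spread spectrum radar polyphase code design objective, Eqs. (13)-(16).

    x: phase vector with entries in [0, 2*pi]; n = len(x), m = 2n - 1.
    Returns the max over phi_1..phi_2m, where the second m functions are
    the negations of the first m, as in Eq. (16)."""
    n = len(x)
    phi = []
    for i in range(1, n + 1):
        # phi_{2i-1}, Eq. (14)
        s = 0.0
        for j in range(i, n + 1):
            lo = abs(2 * i - j - 1) + 1
            s += np.cos(np.sum(x[lo - 1:j]))       # sum_{k=lo}^{j} x_k
        phi.append(s)
        # phi_{2i}, Eq. (15), defined for i = 1..n-1
        if i <= n - 1:
            s = 0.5
            for j in range(i + 1, n + 1):
                lo = abs(2 * i - j) + 1
                s += np.cos(np.sum(x[lo - 1:j]))
            phi.append(s)
    phi = np.array(phi)                             # phi_1 .. phi_m, m = 2n - 1
    return float(np.max(np.concatenate([phi, -phi])))  # Eq. (13) with Eq. (16)

# Example: evaluate a random 19-D instance as used in the experiments
rng = np.random.default_rng(3)
print(ssrpcd_objective(rng.uniform(0.0, 2.0 * np.pi, 19)))
```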
In this group of experiments, MaxFEs is also set to 50,000 and BPLABC uses the same parameter settings as in the previous subsection. The comparison again includes the six state-of-the-art ABC variants listed in Sect. "Comparison results of BPLABC versus six popular improved ABCs", with the basic ABC serving as the baseline. Table 13 displays the average outcomes of all contestants on these two problems over 30 independent runs, with the best results highlighted in bold. From Table 13, for the PEFMSW problem, BPLABC is slightly worse than TLABC and ranks second. For the SSRPCD problem, BPLABC yields the best results on all three instances and retains this advantage as the problem complexity increases. On the whole, the proposed BPLABC outperforms the other ABC variants, validating its efficiency and remarkable performance on real-world problems.
Table 13
The average results of the eight contestants on PEFMSW and SSRPCD with different dimensions over 30 independent runs
Algorithm | PEFMSW (6-D) | SSRPCD (19-D) | SSRPCD (20-D) | SSRPCD (30-D)
ABC | 1.91E+01 | 1.72E+00 | 1.86E+00 | 2.80E+00
GABC | 1.23E+01 | 1.62E+00 | 1.76E+00 | 2.80E+00
CABC | 1.37E+01 | 1.35E+00 | 1.51E+00 | 2.68E+00
GABCS | 1.03E+01 | 1.61E+00 | 1.80E+00 | 2.82E+00
IABC | 2.30E+01 | 1.46E+00 | 1.56E+00 | 2.60E+00
TLABC | 6.16E+00 | 1.72E+00 | 1.80E+00 | 2.81E+00
BEABC | 1.34E+01 | 1.53E+00 | 1.66E+00 | 2.67E+00
BPLABC | 8.74E+00 | 1.25E+00 | 1.30E+00 | 2.52E+00
Bold values denote the best mean result for each test instance

Conclusions

In this work, we propose a bi-preference linkage-driven artificial bee colony algorithm with multi-operator fusion, named BPLABC, to address the low convergence accuracy and weak exploitation of ABC in dealing with high-dimensional complex optimization problems. In BPLABC, two bi-preference linkage-driven search equations are tailored, one for the employed bee phase and the other for the onlooker bee phase, to strike an exploration–exploitation tradeoff. To keep the selection pressure of the roulette wheel selection consistent throughout the search, a new roulette selection probability calculation paradigm based on the ranking of objective function values is proposed to determine the elite nectar sources for the onlooker bees. Moreover, an auxiliary adversarial search equation is designed to drag some promising elite solutions away from the current pseudo-global best solution and thereby enhance convergence accuracy. To validate the effectiveness and efficiency of BPLABC, comprehensive experiments are conducted on a variety of benchmark problems featuring different categories of fitness landscapes and on two real-world problems, namely the parameter estimation for frequency-modulated sound waves and the spread spectrum radar polyphase code design. The experimental results reveal that BPLABC performs remarkably well on the majority of the selected problems when compared with six representative ABC variants and the basic ABC.
Nonetheless, the newly introduced parameters p and q exert a significant impact on the performance of BPLABC. Coupling the parameter configuration to the iterative distribution of the colony in the different phases appears to be a promising way to further enhance the performance of BPLABC, and this is one direction of our future work. In addition, applying BPLABC to more real-world single- and multi-objective optimization problems is also part of our future work.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Grant No. 62106237), the Joint Funds of the National Natural Science Foundation of China (Grant No. U21A20524), the Shanxi Province Science Foundation for Youths (Grant No. 201901D211237), and the Natural Science Research Project of Shanxi Province (Grant No. 202103021224218).

Declarations

Conflict of interest

The authors declare that they have no conflict of interest that could have appeared to influence the work reported in this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Akay B, Karaboga D (2012) A modified artificial bee colony algorithm for real-parameter optimization. Inf Sci 192:120–142
5. Das S, Suganthan PN (2010) Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems. Technical report, Jadavpur University and Nanyang Technological University, pp 341–359
10. Guo P, Cheng W, Liang J (2011) Global artificial bee colony search algorithm for numerical function optimization. In: 2011 seventh international conference on natural computation, vol 3. IEEE, pp 1280–1283
11. Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department, vol 200, pp 1–10
12. Kennedy J (2003) Bare bones particle swarms. In: Proceedings of the 2003 IEEE swarm intelligence symposium. SIS'03 (Cat. No. 03EX706). IEEE, pp 80–87
13. Kiran MS, Hakli H, Gunduz M, Uguz H (2015) Artificial bee colony algorithm with variable search strategy for continuous optimization. Inf Sci 300:140–157
15. Liang JJ, Qu BY, Suganthan PN (2013) Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Technical report, Zhengzhou University, Zhengzhou, China and Nanyang Technological University, Singapore, vol 635, p 490
17. Wang C-F, Zhang Y-H (2016) An improved artificial bee colony algorithm for solving optimization problems. IAENG Int J Comput Sci 43(3):336–343
18. Wang C, Shang P, Shen P (2022) An improved artificial bee colony algorithm based on Bayesian estimation. Complex Intell Syst 8(6):4971–4991
19. Wang H, Jin Y, Yao X (2016) Diversity assessment in many-objective optimization. IEEE Trans Cybern 47(6):1510–1522
22. Yao X, Liu Y, Lin GM (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102
25. Zhu G, Kwong S (2010) Gbest-guided artificial bee colony algorithm for numerical function optimization. Appl Math Comput 217(7):3166–3173
Metadata
Title
Bi-preference linkage-driven artificial bee colony algorithm with multi-operator fusion
Authors
Haibo Yu
Yaxin Kang
Li Kang
Jianchao Zeng
Publication date
30.05.2023
Publisher
Springer International Publishing
Published in
Complex & Intelligent Systems / Issue 6/2023
Print ISSN: 2199-4536
Electronic ISSN: 2198-6053
DOI
https://doi.org/10.1007/s40747-023-01085-5
