Published in: Complex & Intelligent Systems 1/2023

Open Access 04.08.2022 | Original Article

A human learning optimization algorithm with competitive and cooperative learning

Authors: JiaoJie Du, Ling Wang, Minrui Fei, Muhammad Ilyas Menhas


Abstract

Human learning optimization (HLO) is a simple yet powerful metaheuristic developed from a simplified human learning model. Competition and cooperation, as two basic modes of social cognition, can motivate individuals to learn more efficiently and solve problems more effectively by stimulating their competitive instincts and increasing their interaction with each other. Inspired by this fact, this paper presents a novel human learning optimization algorithm with competitive and cooperative learning (HLOCC), in which a competitive and cooperative learning operator (CCLO) is developed to mimic competition and cooperation in social interaction and thereby enhance learning efficiency. HLOCC efficiently maintains population diversity while converging to optimal values, demonstrating that the proposed CCLO can effectively improve algorithm performance. HLOCC has been compared with other heuristic algorithms on the CEC2017 functions. In a second study, uncapacitated facility location problems (UFLPs), which are pure binary optimization problems, are solved with HLOCC. The experimental results show that the developed HLOCC is superior to previous HLO variants and other metaheuristics owing to its improved exploitation and exploration abilities.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Introduction

Optimization problems exist widely in the real world, and therefore methods for solving them have been a hot topic. Traditional gradient-based methods (such as the steepest descent method and Newton's method) have been used to solve various optimization problems successfully. However, as optimization problems become increasingly complicated, gradient-based methods are inefficient and inconvenient because they require substantial gradient information, are sensitive to initial values, and need a large amount of memory for enumeration. In the past few decades, evolutionary computation has become an attractive and effective alternative for the rapidly growing set of complex modern optimization problems. Optimization approaches inspired by biological systems have attracted considerable interest in recent years and have been quite successful in solving problems in the optimal allocation of science and technology resources, industrial automation, the economy and other fields.
As is well known, human beings, as the smartest creatures on earth, are capable of solving a large number of complicated problems that other living beings, such as birds, ants, and fireflies, cannot tackle. Humans have a powerful learning ability, and the process of human learning is extremely complicated; its study is part of neuropsychology, educational psychology, learning theory and pedagogy [1]. Actually, most human learning activities are similar to the search process of metaheuristics. Motivated by this thought, Wang et al. [2] proposed human learning optimization (HLO) based on a simplified human learning model, in which three learning operators, i.e. the random learning operator (RLO), the individual learning operator (ILO), and the social learning operator (SLO), are developed to search for the optimal solution by mimicking the random, individual, and social learning strategies in human learning activities, respectively. The performance of HLO, like that of other metaheuristic algorithms, is sensitive to its parameter values. Therefore, to further improve the global search ability and robustness of HLO, an adaptive simplified human learning optimization (ASHLO) [3] was proposed. Later, based on the fact that Intelligence Quotient (IQ) scores follow a Gaussian distribution [4], a diverse human learning optimization algorithm [5] was proposed to improve the performance of HLO, in which the Gaussian distribution and a dynamic adjustment strategy were introduced to strengthen the robustness of the algorithm. In addition, a new adaptive HLO based on sine-cosine functions [6] was developed, which periodically enhances the exploration and exploitation abilities of the algorithm by tuning the control parameters with the sine and cosine functions. Recently, a novel adaptive human learning algorithm (IAHLO) [7] was developed to dynamically tune the control parameter of the random learning operator, which can efficiently maintain diversity at the beginning of the iterations and perform an accurate local search at the end of the search.
Since both the individual learning and the social learning operations in standard HLO simply copy bits from the stored optima, and only random learning explores new solutions with a small probability, the algorithm is prone to falling into local optima. Therefore, it is necessary to design a new operator to further improve the performance of HLO. For this reason, a relearning operation is designed in [8] for the case in which an individual cannot continuously improve its fitness. This operator erases the stored knowledge of the individual and makes the individual start searching again, with a good chance of escaping from the local optimum. Besides, the hybrid algorithms ASHLO-GA in [9] and HLO-PSO in [10] were proposed to tackle the supply chain network design problem and the flexible job-shop scheduling problem (FJSP), respectively. Since binary algorithms are ineffective in solving high-dimensional continuous problems, a continuous HLO was first presented and integrated with the binary HLO to solve mixed-variable optimization problems [11]. Besides, a discrete HLO was proposed in [12] and successfully used to solve a production scheduling problem. By now, HLO has been successfully applied to knapsack problems [2, 3, 5, 8], text extraction [13], optimal power flow calculation [14, 15], financial market forecasting [6], image segmentation [16] and intelligent control [17-19].
Nowadays, human learning has been widely researched in multiple disciplines including computer science [20], economics [21], sociology [22], etc. Human beings have social attributes and always interact with each other in everyday life. Competition and cooperation are the two basic forms of interaction [23]. Actually, many human social activities contain both competition and cooperation [24]. Furthermore, competition can lead to higher levels of cooperation [25], and cooperation can further build competitive advantage [26]. Therefore, competition and cooperation can motivate learners to learn more efficiently and improve their efficiency in solving problems. In real life, people find and learn from individuals who are better than themselves by comparing themselves with each other, which is a process of competition and cooperation: comparing with each other can be seen as competition, while learning from each other is cooperation. Through competition and cooperation, individuals have more opportunities to learn new knowledge, which diversifies the group's overall structure [27]. The social learning operation of HLO is designed according to the "copy-the-best" strategy, which can easily lead to the algorithm falling into local optima. The introduction of a cooperation mechanism can help the algorithm avoid this, because the learning mechanism of competition and cooperation effectively increases the diversity of the population. Inspired by this learning mechanism, a human learning optimization algorithm with competitive and cooperative learning (HLOCC) is proposed in this paper, in which a novel competitive and cooperative learning strategy is developed to improve the balance between exploration and exploitation in HLO.
The rest of the paper is organized as follows. Section "Human learning optimization with competitive and cooperative learning" introduces the idea, operators and implementation of the proposed HLOCC in detail. The parameter study of HLOCC is given in section "Parameter study of HLOCC" to insightfully analyze and explain why the developed competitive and cooperative learning operator can enhance the search ability of the algorithm. Then the performance of HLOCC is evaluated and compared with recent state-of-the-art metaheuristics in section "Experimental results and discussions". Finally, conclusions are drawn in section "Conclusions and future works".

Human learning optimization with competitive and cooperative learning

As a binary metaheuristic, HLOCC adopts the binary-coding framework, and therefore each individual, i.e. a solution, is composed of a binary string as in Eq. (1), in which each bit denotes a basic component of the knowledge of the problem,
$$ \begin{aligned} & x_{i} = \left[ {x_{i1} \, x_{i2} \cdots x_{ij} \cdots x_{iM} } \right],\\ & x_{ij} \in \left\{ {0,1} \right\},{1} \le i \le N,{1} \le j \le M \end{aligned} $$
(1)
where \({x}_{ij}\) is the j-th bit of the i-th individual, and N and M denote the size of population and the length of solutions, respectively. At the beginning of learning, humans usually have no prior knowledge of problems, thus each individual of HLOCC is initialized with “0” or “1” randomly.
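For illustration, the binary encoding of Eq. (1) can be initialized as in the following Python sketch (hypothetical helper names, not code from the paper): each of the N individuals is a string of M bits drawn uniformly from {0, 1}.

```python
import numpy as np

def init_population(N, M, seed=None):
    """Return an N x M array of bits; every bit is set to 0 or 1 at random,
    reflecting the absence of prior knowledge at the start of learning."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=(N, M), dtype=np.int8)

# e.g. a population of 50 individuals, 10 decision variables encoded with 30 bits each
X = init_population(N=50, M=10 * 30)
```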

Random learning operator

Random learning always exists in human learning, as usually there is no prior knowledge of new problems [28]. Besides, it is a simple but valid strategy for humans to explore new possibilities and improve performance in the process of learning. To imitate the random learning strategy, the random learning operator (RLO) is used in HLOCC as Eq. (2)
$$ x_{ij} = {\text{RLO}} = \left\{ {\begin{array}{*{20}l} {0,{ 0} \le r_{1} \le 0.5} \hfill \\ {1,{\text{ else}}} \hfill \\ \end{array} } \right. $$
(2)
where \({r}_{1}\) is a stochastic number between 0 and 1.
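As a small illustration (a sketch, not the authors' implementation), the RLO of Eq. (2) simply draws a uniform random number r1 and returns 0 or 1 accordingly:

```python
import random

def random_learning_bit() -> int:
    """Random learning operator (Eq. 2): 0 if r1 <= 0.5, otherwise 1."""
    r1 = random.random()          # stochastic number between 0 and 1
    return 0 if r1 <= 0.5 else 1
```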

Individual learning operator

Individual learning [29, 30] is the ability of humans to build up knowledge through individual reflection. By following previous experience, people can avoid mistakes and improve the efficiency and effectiveness of learning. To mimic this learning behavior, L best individual solutions are memorized and stored in the individual knowledge database (IKD) of HLOCC for individual learning, which is defined as Eqs. (3) and (4):
$$ I{\text{KD}} = \left[ {\begin{array}{*{20}c} {{\text{ikd}}_{1} } \\ {{\text{ikd}}_{2} } \\ \vdots \\ {{\text{ikd}}_{i} } \\ \vdots \\ {{\text{ikd}}_{N} } \\ \end{array} } \right]{, 1} \le i \le N $$
(3)
$$ {\text{ikd}}_{i} = \left[ {\begin{array}{*{20}c} {{\text{ikd}}_{i1} } \\ {{\text{ikd}}_{i2} } \\ \vdots \\ {{\text{ikd}}_{ip} } \\ \vdots \\ {{\text{ikd}}_{iL} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\text{ik}}_{i1,1} } & {{\text{ik}}_{i1,2} } & \cdots & {{\text{ik}}_{i1,j} } & \cdots & {{\text{ik}}_{i1,M} } \\ {{\text{ik}}_{i2,1} } & {{\text{ik}}_{i2,2} } & \cdots & {{\text{ik}}_{i2,j} } & \cdots & {{\text{ik}}_{i2,M} } \\ \vdots & \vdots & {} & \vdots & {} & \vdots \\ {{\text{ik}}_{ip,1} } & {{\text{ik}}_{ip,2} } & \cdots & {{\text{ik}}_{ip,j} } & \cdots & {{\text{ik}}_{ip,M} } \\ \vdots & \vdots & {} & \vdots & {} & \vdots \\ {{\text{ik}}_{iL,1} } & {{\text{ik}}_{iL,2} } & \cdots & {{\text{ik}}_{iL,j} } & \cdots & {{\text{ik}}_{iL,M} } \\ \end{array} } \right],\;1 \le p \le L $$
(4)
where \({\mathrm{ikd}}_{i}\) denotes the individual knowledge database of person i, and \({\mathrm{ikd}}_{ip}\) stands for the p-th best solution of person i.
When HLOCC performs the individual learning operator (ILO), the candidate solution learns from a random solution in its IKD as Eq. (5):
$$ x_{ij} = {\text{ik}}_{ip,j} $$
(5)
After a new population is generated, the fitness of all individuals is calculated according to the pre-defined fitness function to update the IKDs. Since HLOCC is designed for solving single-objective problems, the size of IKDs is set to 1 as suggested in previous works on HLO. Therefore, the new candidate replaces the original solution in the IKDs if and only if its fitness value is superior.
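To make the operator concrete, a minimal Python sketch (illustrative names, not the authors' code) of the individual learning copy in Eq. (5) is given below, assuming an IKD of size L = 1 stored as a NumPy bit vector:

```python
import numpy as np

def individual_learning_bit(ikd_i: np.ndarray, j: int) -> int:
    """Individual learning operator (Eq. 5) with an IKD of size L = 1:
    copy the j-th bit of individual i's stored best solution."""
    return int(ikd_i[j])

# example: an individual's stored best solution and a copied bit
ikd_i = np.array([1, 0, 1, 1, 0], dtype=np.int8)
bit = individual_learning_bit(ikd_i, j=2)   # -> 1
```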

Competitive and cooperative learning operator

Competition and cooperation, as two basic modes of social cognition [31], occupy a critically important place in research on individual, group and societal behavior [32]. The social interdependence theory [33, 34] states that competitive and cooperative learning can significantly improve learning efficiency and make the overall structure of the group more diversified. Because the social learning of standard HLO relies heavily on the global optimum, the algorithm easily falls into local optima. A competition and cooperation mechanism can increase the diversity of the population so that the algorithm can discover new solution areas. Inspired by these findings, a novel competitive and cooperative learning operator (CCLO) is developed to implement competitive and cooperative learning in HLOCC and increase the diversity of the population.
When performing the CCLO, the current individual is compared, according to the fitness value, with an individual randomly selected from the population (excluding itself) to determine whether the current individual is a winner or a loser. The winner's individual optimal solution is recognized, while the loser needs to learn from the winner's experience. During the learning process, if the current individual is the winner, the standard HLO operators continue to be used; otherwise, the competitive and cooperative learning operator is added so that the loser learns from the winner's individual optimal solution. The learning process can be represented by Eq. (6)
$$ x_{ij}^{{{\text{loser}}}} = ik_{kj}^{{{\text{winner}}}} ,k \ne i $$
(6)
where \(ik_{k}^{{{\text{winner}}}}\) denotes the IKD of the winner individual k. The mechanism of competitive and cooperative learning is illustrated in Fig. 1.
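A minimal sketch of the competition step and the cooperative copy of Eq. (6) is given below (illustrative helper names; `ikd` is assumed to be an N x M array of individual best solutions, `fitness` their fitness values, and a smaller fitness is assumed to be better):

```python
import random
import numpy as np

def compete(i, fitness):
    """Competition step of the CCLO: compare individual i with a randomly chosen
    opponent k != i and return (winner, loser) indices by fitness."""
    k = random.choice([idx for idx in range(len(fitness)) if idx != i])
    return (i, k) if fitness[i] <= fitness[k] else (k, i)

def cclo_bit(ikd, winner, j):
    """Cooperation step (Eq. 6): the loser copies the j-th bit of the winner's
    individual best solution."""
    return int(ikd[winner, j])
```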

Social learning operator

Social learning [35] plays an important role in a social environment because it allows human beings to copy the best solutions in the population and accelerate the learning process. Although the CCLO is also a social learning operation, the probability of learning from the best individual is relatively small and its learning efficiency is therefore low. When problems become extremely complicated and time-consuming, people prefer to learn from the individual with the highest social evaluation value in the population, that is, the "copy the best" strategy [36], which can quickly drive the whole population towards the best solutions found so far. Correspondingly, HLOCC conducts the social learning operator to emulate the "copy the best" behavior of humans; the best knowledge of the population is stored in the social knowledge database (SKD) as Eq. (7)
$$ {\text{SKD}} = \left[ {\begin{array}{*{20}c} {{\text{skd}}_{1} } \\ {{\text{skd}}_{2} } \\ \vdots \\ {{\text{skd}}_{q} } \\ \vdots \\ {{\text{skd}}_{H} } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\text{sk}}_{11} } & {{\text{sk}}_{12} } & \cdots & {{\text{sk}}_{1j} } & \cdots & {{\text{sk}}_{1M} } \\ {{\text{sk}}_{21} } & {{\text{sk}}_{22} } & \cdots & {{\text{sk}}_{2j} } & \cdots & {{\text{sk}}_{2M} } \\ \vdots & \vdots & {} & \vdots & {} & \vdots \\ {{\text{sk}}_{q1} } & {{\text{sk}}_{q2} } & \cdots & {{\text{sk}}_{qj} } & \cdots & {{\text{sk}}_{qM} } \\ \vdots & \vdots & {} & \vdots & {} & \vdots \\ {{\text{sk}}_{H1} } & {{\text{sk}}_{H2} } & \cdots & {{\text{sk}}_{Hj} } & \cdots & {{\text{sk}}_{HM} } \\ \end{array} } \right],\;1 \le q \le H $$
(7)
where \({\mathrm{skd}}_{q}\) denotes the q-th solution in the SKD and H is the size of SKD.
With the knowledge in the SKD, the HLOCC performs the social learning operator (SLO) to generate new candidate solutions as Eq. (8)
$$ x_{ij} = {\text{skd}}_{qj} $$
(8)
The size of the SKD in HLOCC is also set to 1, and the new candidate replaces the current solution in the SKD only if it has a better fitness value.
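The "copy-the-best" rule of Eq. (8) and the greedy updates of the IKD and SKD described above can be sketched as follows (an illustrative fragment assuming minimization and knowledge bases of size 1):

```python
import numpy as np

def social_learning_bit(skd, j):
    """Social learning operator (Eq. 8) with an SKD of size H = 1:
    copy the j-th bit of the best solution found by the whole population."""
    return int(skd[j])

def greedy_update(stored, stored_fit, candidate, cand_fit):
    """Greedy rule used for both the IKD and the SKD: the stored solution is
    replaced only if the new candidate has a strictly better (smaller) fitness."""
    if cand_fit < stored_fit:
        return candidate.copy(), cand_fit
    return stored, stored_fit
```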

Implementation of HLOCC

In summary, when the current individual is the winner, it uses the three learning operators of standard HLO to generate new candidate solutions, as described by Eq. (9); if it is the loser, it performs the random learning operator, the individual learning operator, the competitive and cooperative learning operator and the social learning operator, as presented in Eq. (10)
$$ x_{ij}^{{{\text{winner}}}} = \left\{ {\begin{array}{*{20}l} {{\text{RLO}},} &\quad {0 \le r_{2} \le {\text{pr}}} \\ {{\text{ik}}_{ipj}^{{{\text{winner}}}} ,} &\quad {{\text{pr}} < r_{2} \le {\text{pi}}} \\ {{\text{sk}}_{qj} ,} &\quad {{\text{else}}} \\ \end{array} } \right. $$
(9)
$$ x_{{ij}}^{{{\text{loser}}}} = \left\{ {\begin{array}{*{20}l} {{\text{RLO}},} \hfill &\quad {0 \le r_{3} \le {\text{pr}}} \hfill \\ {{\text{ik}}_{{ipj}}^{{{\text{loser}}}} ,} \hfill &\quad {{\text{pr}} < r_{3} \le {\text{pil}}} \hfill \\ {{\text{ik}}_{{kj}}^{{{\text{winner}}}} ,} \hfill &\quad {{\text{pil}} < r_{3} \le {\text{pcc}}} \hfill \\ {{\text{sk}}_{{qj}} ,} \hfill &\quad {{\text{else}}} \hfill \\ \end{array} } \right. $$
(10)
where \(r_{2}\) and \(r_{3}\) are stochastic numbers between 0 and 1; pr, (pi − pr) and (1 − pi) are the probabilities of performing random learning, individual learning and social learning for the winner individual, respectively; and pr, (pil − pr), (pcc − pil) and (1 − pcc) are the probabilities of random learning, individual learning, competitive and cooperative learning, and social learning for the loser individual, respectively.
The implementation of HLOCC is described in Fig. 2.
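Putting the operators together, a loose Python sketch of the candidate-generation rules of Eqs. (9) and (10) is shown below; the function and argument names are illustrative, `ikd` is the N x M array of individual best solutions, `skd` the global best bit string, and `winner` the index returned by the pairwise competition (winner == i means that individual i won and follows Eq. (9)):

```python
import numpy as np

def new_candidate(i, winner, ikd, skd, pr, pi, pil, pcc, rng=None):
    """Generate a new candidate solution for individual i per Eqs. (9) and (10)."""
    if rng is None:
        rng = np.random.default_rng()
    M = skd.shape[0]
    x = np.empty(M, dtype=np.int8)
    for j in range(M):
        r = rng.random()
        if r <= pr:                      # random learning (RLO)
            x[j] = rng.integers(0, 2)
        elif winner == i:                # winner: standard HLO rule, Eq. (9)
            x[j] = ikd[i, j] if r <= pi else skd[j]
        elif r <= pil:                   # loser: individual learning (ILO)
            x[j] = ikd[i, j]
        elif r <= pcc:                   # loser: learn from the winner (CCLO)
            x[j] = ikd[winner, j]
        else:                            # loser: social learning (SLO)
            x[j] = skd[j]
    return x
```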

Parameter study of HLOCC

Analysis of the control parameters

A parameter study was performed in this section to analyze and choose fair control parameter values for HLOCC. For simplicity, the control parameters pr and pi adopt the default values of HLO [2], i.e. 5/M and 0.85 + 2/M, since the learning strategy of the winner in HLOCC is the same as that of standard HLO. Then pil and pcc are investigated, as together they determine the probability of performing the CCLO. The cross-combination method was used, and the candidate values of the control parameters pil and pcc are listed in Table 1. Two functions, i.e. F2 and F8 chosen from the CEC17 benchmark functions [37], were adopted to investigate the influence of these two control parameters; the characteristics of the CEC17 benchmark are given in Table 2. The size of the population and the maximum number of iterations were set to 50 and 3000 on the 10-dimensional functions, and increased to 100 and 6000 on the 30-dimensional functions. Each decision variable was encoded by 30 bits, and each function was run 100 times independently. To choose the optimal parameter combination, the mean value (Mean) is calculated as the performance indicator and shown in Table 3. Note that pcc should be larger than pil in HLOCC, and therefore only the 71 cases that meet this requirement are listed in Table 3. The trials that rank in the top 10 percent on 10-D F2, 10-D F8, 30-D F2 and 30-D F8 are marked with an asterisk (*) in Table 3. The parameter combination of trial 58 is adopted as the default parameter setting in this paper, as it is the only one that ranks in the top 10 percent in all four cases.
Table 1
Parameter values of pil and pcc
Parameter | Level 1 | Level 2 | Level 3 | Level 4 | Level 5 | Level 6 | Level 7 | Level 8 | Level 9
pil | 0.78 | 0.80 | 0.82 | 0.84 | 0.85 | 0.86 | 0.88 | 0.90 | 0.92
pcc | 0.86 | 0.88 | 0.90 | 0.92 | 0.94 | 0.95 | 0.96 | 0.97 | 0.98
Table 2
CEC17 benchmark functions
No | Function name | Dimension | Type
F1 | Shifted and Rotated Bent Cigar Function | 10/30 | Unimodal
F2 | Shifted and Rotated Sum of Different Power Function* | 10/30 | Unimodal
F3 | Shifted and Rotated Zakharov Function | 10/30 | Unimodal
F4 | Shifted and Rotated Rosenbrock's Function | 10/30 | Multimodal
F5 | Shifted and Rotated Rastrigin's Function | 10/30 | Multimodal
F6 | Shifted and Rotated Expanded Scaffer's F6 Function | 10/30 | Multimodal
F7 | Shifted and Rotated Lunacek Bi_Rastrigin Function | 10/30 | Multimodal
F8 | Shifted and Rotated Non-Continuous Rastrigin's Function | 10/30 | Multimodal
F9 | Shifted and Rotated Levy Function | 10/30 | Multimodal
F10 | Shifted and Rotated Schwefel's Function | 10/30 | Hybrid
F11 | Hybrid Function 1 (N = 3) | 10/30 | Hybrid
F12 | Hybrid Function 2 (N = 3) | 10/30 | Hybrid
F13 | Hybrid Function 3 (N = 3) | 10/30 | Hybrid
F14 | Hybrid Function 4 (N = 4) | 10/30 | Hybrid
F15 | Hybrid Function 5 (N = 4) | 10/30 | Hybrid
F16 | Hybrid Function 6 (N = 4) | 10/30 | Hybrid
F17 | Hybrid Function 6 (N = 5) | 10/30 | Hybrid
F18 | Hybrid Function 6 (N = 5) | 10/30 | Hybrid
F19 | Hybrid Function 6 (N = 5) | 10/30 | Hybrid
F20 | Hybrid Function 6 (N = 6) | 10/30 | Hybrid
F21 | Composition Function 1 (N = 3) | 10/30 | Composition
F22 | Composition Function 2 (N = 3) | 10/30 | Composition
F23 | Composition Function 3 (N = 4) | 10/30 | Composition
F24 | Composition Function 4 (N = 4) | 10/30 | Composition
F25 | Composition Function 5 (N = 5) | 10/30 | Composition
F26 | Composition Function 6 (N = 5) | 10/30 | Composition
F27 | Composition Function 7 (N = 6) | 10/30 | Composition
F28 | Composition Function 8 (N = 6) | 10/30 | Composition
F29 | Composition Function 9 (N = 3) | 10/30 | Composition
F30 | Composition Function 10 (N = 3) | 10/30 | Composition
Table 3
Results of parameter study (Mean values; * marks results in the top 10 percent of the corresponding column)
Trial | pil | pcc | 10D F2 | 10D F8 | 30D F2 | 30D F8
1 | 0.78 | 0.86 | 2.3777E+04 | 6.1295E+00 | 5.4092E+23 | 5.6937E+01
2 | 0.78 | 0.88 | 1.5327E+04 | 5.9708E+00 | 2.0624E+23 | 5.7251E+01
3 | 0.78 | 0.90 | 1.2920E+04 | 5.5128E+00 | 1.3943E+24 | 5.6559E+01
4 | 0.78 | 0.92 | 6.2632E+03 | 5.4607E+00 | 3.8775E+23 | 5.3871E+01
5 | 0.78 | 0.94 | 7.6339E+03 | 5.5638E+00 | 1.1249E+24 | 5.3309E+01
6 | 0.78 | 0.95 | 1.5081E+04 | 5.4543E+00 | 7.9663E+23 | 5.3156E+01
7 | 0.78 | 0.96 | 1.1818E+04 | 5.9929E+00 | 3.9145E+23 | 5.2325E+01
8 | 0.78 | 0.97 | 1.0121E+04 | 5.0377E+00 | 4.0286E+25 | 5.2760E+01
9 | 0.78 | 0.98 | 1.4714E+04 | 5.0197E+00 | 3.9749E+23 | 5.0588E+01
10 | 0.80 | 0.86 | 1.6917E+04 | 5.1196E+00 | 2.3586E+21 | 5.6605E+01
11 | 0.80 | 0.88 | 7.4589E+03 | 5.4198E+00 | 5.7039E+23 | 5.5579E+01
12 | 0.80 | 0.90 | 2.2926E+04 | 5.2951E+00 | 1.1777E+25 | 5.4237E+01
13 | 0.80 | 0.92 | 5.7283E+03 | 4.8739E+00 | 3.2412E+21 | 5.4316E+01
14 | 0.80 | 0.94 | 5.7956E+03 | 5.0407E+00 | 1.1563E+26 | 5.1871E+01
15 | 0.80 | 0.95 | 7.1984E+03 | 5.1121E+00 | 8.8180E+22 | 5.0184E+01
16 | 0.80 | 0.96 | 6.2101E+03 | 4.8831E+00 | 1.0897E+23 | 5.2040E+01
17 | 0.80 | 0.97 | 5.9829E+03 | 4.5425E+00* | 2.1415E+20 | 4.9660E+01
18 | 0.80 | 0.98 | 5.7072E+03 | 4.8278E+00 | 1.0994E+21 | 5.2708E+01
19 | 0.82 | 0.86 | 1.1540E+04 | 5.2510E+00 | 7.7605E+23 | 5.3871E+01
20 | 0.82 | 0.88 | 6.6795E+03 | 5.2882E+00 | 4.3144E+23 | 5.3256E+01
21 | 0.82 | 0.90 | 4.1799E+03 | 5.2323E+00 | 1.2241E+20 | 5.3572E+01
22 | 0.82 | 0.92 | 1.1346E+04 | 4.9715E+00 | 2.4923E+20 | 5.1154E+01
23 | 0.82 | 0.94 | 3.9503E+03 | 4.7433E+00 | 4.7108E+23 | 4.9959E+01
24 | 0.82 | 0.95 | 1.0959E+04 | 4.6530E+00 | 7.1150E+23 | 5.0243E+01
25 | 0.82 | 0.96 | 5.3340E+03 | 4.5045E+00* | 1.8348E+23 | 4.9711E+01
26 | 0.82 | 0.97 | 1.4104E+04 | 4.8135E+00 | 7.1646E+21 | 4.8401E+01
27 | 0.82 | 0.98 | 4.6305E+03 | 4.8627E+00 | 5.9964E+24 | 4.8403E+01
28 | 0.84 | 0.86 | 6.1571E+03 | 5.1232E+00 | 2.4385E+23 | 5.2521E+01
29 | 0.84 | 0.88 | 5.1682E+03 | 4.9936E+00 | 2.7350E+20 | 4.9597E+01
30 | 0.84 | 0.90 | 3.4345E+03 | 4.9685E+00 | 3.6889E+20 | 4.9380E+01
31 | 0.84 | 0.92 | 2.5275E+03* | 4.8094E+00 | 3.0449E+22 | 4.5369E+01
32 | 0.84 | 0.94 | 1.0497E+04 | 4.5740E+00 | 2.9380E+21 | 4.6986E+01
33 | 0.84 | 0.95 | 9.5765E+03 | 4.4749E+00* | 2.5752E+21 | 4.6832E+01
34 | 0.84 | 0.96 | 3.0574E+03 | 5.0274E+00 | 9.4090E+20 | 4.5887E+01
35 | 0.84 | 0.97 | 4.5485E+03 | 4.5677E+00 | 1.2878E+20 | 4.5740E+01
36 | 0.84 | 0.98 | 4.0280E+03 | 4.7527E+00 | 2.1422E+22 | 4.6419E+01
37 | 0.85 | 0.86 | 3.4864E+03 | 4.9497E+00 | 9.2322E+19 | 4.8590E+01
38 | 0.85 | 0.88 | 4.2045E+03 | 5.1278E+00 | 1.2128E+20 | 4.7708E+01
39 | 0.85 | 0.90 | 4.6619E+03 | 4.8573E+00 | 3.7606E+20 | 4.9075E+01
40 | 0.85 | 0.92 | 3.7967E+03 | 4.8304E+00 | 2.5547E+23 | 4.6266E+01
41 | 0.85 | 0.94 | 5.0926E+03 | 4.5409E+00* | 1.1237E+21 | 4.5090E+01
42 | 0.85 | 0.95 | 5.0857E+03 | 4.8582E+00 | 6.5797E+19 | 4.4351E+01
43 | 0.85 | 0.96 | 4.3945E+03 | 4.6035E+00 | 9.5452E+19 | 4.5458E+01
44 | 0.85 | 0.97 | 7.1224E+03 | 4.4434E+00* | 6.5851E+23 | 4.3924E+01
45 | 0.85 | 0.98 | 3.2221E+03 | 4.9075E+00 | 1.2662E+21 | 4.5174E+01
46 | 0.86 | 0.88 | 2.0689E+03* | 4.9389E+00 | 3.3284E+23 | 4.8394E+01
47 | 0.86 | 0.90 | 5.0448E+03 | 4.8795E+00 | 2.9308E+23 | 4.8738E+01
48 | 0.86 | 0.92 | 4.5630E+03 | 4.5478E+00* | 8.8180E+20 | 4.3033E+01
49 | 0.86 | 0.94 | 2.4602E+03* | 4.7066E+00 | 4.1938E+19 | 4.2546E+01
50 | 0.86 | 0.95 | 2.6397E+03* | 4.7360E+00 | 1.9167E+19 | 4.3187E+01
51 | 0.86 | 0.96 | 3.5358E+03 | 4.9523E+00 | 4.7748E+19 | 4.2360E+01
52 | 0.86 | 0.97 | 3.5645E+03 | 4.5747E+00 | 6.5590E+20 | 4.3215E+01
53 | 0.86 | 0.98 | 4.4802E+03 | 4.7760E+00 | 2.9377E+20 | 4.2125E+01
54 | 0.88 | 0.90 | 3.4619E+03 | 5.1647E+00 | 8.8911E+18* | 4.4937E+01
55 | 0.88 | 0.92 | 3.2439E+03 | 5.2205E+00 | 1.4678E+20 | 4.1916E+01
56 | 0.88 | 0.94 | 2.2024E+03* | 5.2353E+00 | 1.6300E+21 | 4.2110E+01
57 | 0.88 | 0.95 | 3.4081E+03 | 4.9434E+00 | 3.1894E+18* | 4.0213E+01*
58 | 0.88 | 0.96 | 2.4769E+03* | 4.5204E+00* | 2.7517E+18* | 3.9091E+01*
59 | 0.88 | 0.97 | 4.5287E+03 | 5.2933E+00 | 1.8283E+19 | 4.0690E+01*
60 | 0.88 | 0.98 | 3.5749E+03 | 5.4362E+00 | 2.6632E+23 | 4.2525E+01
61 | 0.90 | 0.92 | 3.1359E+03 | 4.9173E+00 | 6.1387E+18* | 4.1575E+01
62 | 0.90 | 0.94 | 3.2561E+03 | 5.2229E+00 | 7.4474E+18* | 4.2368E+01
63 | 0.90 | 0.95 | 4.1085E+03 | 5.4391E+00 | 1.6853E+18* | 4.1440E+01*
64 | 0.90 | 0.96 | 3.1952E+03 | 5.6270E+00 | 5.7524E+18* | 4.1489E+01*
65 | 0.90 | 0.97 | 4.1790E+03 | 6.0439E+00 | 1.1613E+22 | 4.1383E+01*
66 | 0.90 | 0.98 | 2.9117E+03* | 5.7819E+00 | 2.3379E+21 | 4.0964E+01*
67 | 0.92 | 0.94 | 3.8204E+03 | 5.6979E+00 | 3.7517E+20 | 4.2315E+01
68 | 0.92 | 0.95 | 3.8431E+03 | 5.7624E+00 | 1.2446E+19 | 4.3853E+01
69 | 0.92 | 0.96 | 4.2201E+03 | 6.5464E+00 | 2.6013E+19 | 4.4555E+01
70 | 0.92 | 0.97 | 7.5465E+03 | 6.5095E+00 | 2.0971E+21 | 4.4816E+01
71 | 0.92 | 0.98 | 3.9470E+03 | 7.0481E+00 | 1.8605E+21 | 4.2958E+01
Table 3 shows that HLOCC obtains the best comprehensive results when pil and pcc are set to 0.88 and 0.96, which are chosen as the default values in this work as mentioned above. Obviously, the results in Table 3 show that the setting of pil and pcc plays an important role, because together they determine the probabilities of individual learning, competitive and cooperative learning, and social learning. It is not difficult to find that the performance of HLOCC drops significantly when pil < 0.80, as a too small pil would spoil the principal learning mechanisms of HLO. The probability of competitive and cooperative learning in HLOCC is determined by the value of (pcc − pil), and it can be noticed that the performance of HLOCC is best when (pcc − pil) = 0.08. In the whole population of HLOCC, some individuals perform the standard HLO operations while others perform the CCLO operation. Compared with the way new solutions are generated in standard HLO, the added CCLO gives some individuals the opportunity to learn from other individuals besides their own optima and the global optimum, thus increasing the diversity of the algorithm, while the individuals that still perform the standard HLO operations ensure the convergence of the algorithm. Therefore, HLOCC can achieve a better trade-off between exploration and exploitation with the CCLO, which will be further discussed and demonstrated in the next sub-section.

Role of competitive and cooperative learning

To clearly understand the role of the developed competitive and cooperative learning operator, the proposed HLOCC is compared with a variant named HLOCC1, in which only the CCLO is performed, and with the standard HLO, which has no CCLO. For a fair comparison, the parameters of standard HLO are set to the values recommended in [2], and the CCLO-related parameters are set as in HLOCC, i.e. pil and pcc are 0.88 and 0.96, respectively. HLOCC, as well as HLO and HLOCC1, was adopted to solve F2 and F8 from the CEC17 benchmark functions. Since these two functions are representative unimodal and multimodal functions, respectively, the performance change caused by the CCLO in the search process of HLOCC can be conveniently revealed by recording the change in population diversity and comparing it with the other HLO variants.
The average fitness value (AFV) curves and the average distance (AD) curves of the three algorithms on the 10-dimensional and 30-dimensional F2 and F8 are drawn in Figs. 3, 4, 5 and 6. The average distance is defined as the average Hamming distance between the global best solution and the individual best solutions, as in Eq. (11), and can be used to examine the variation in the exploration and exploitation abilities of the algorithms under different strategies. Therefore, by comparing the AFV and AD curves of HLOCC, HLOCC1 and HLO, the impact of the CCLO can be clearly probed.
$$ {\text{AD}} = \frac{{\sum\limits_{i = 1}^{N - 1} {\sum\limits_{j = 1}^{M} {\left| {ik_{ij} - sk_{j} } \right|} } }}{M \times (N - 1)},\;\;1 \le i \le N - 1,\;\;1 \le j \le M $$
(11)
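For reference, Eq. (11) can be computed as in the short sketch below (illustrative code, not the authors'); summing over all N individual bests gives the same numerator as the sum over N − 1, because the individual whose best equals the SKD contributes zero differing bits.

```python
import numpy as np

def average_distance(ikd, skd):
    """Average distance of Eq. (11): total Hamming distance between the global best
    solution (SKD) and the individual best solutions, normalized by M * (N - 1)."""
    N, M = ikd.shape
    hamming = int(np.abs(ikd - skd).sum())   # number of differing bits over all IKDs
    return hamming / (M * (N - 1))
```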
It can be clearly seen from the fitness curves in Figs. 3, 4, 5, 6 that HLOCC achieves the best solutions on all the cases. The influences of CCLO can be concluded as follows:
1.
By comparing HLOCC with HLO, it can be seen that the AD curve of HLO converges faster than that of HLOCC, and therefore HLO may not sufficiently explore the solution space and is likely to be stuck in local optima. With the help of the CCLO, HLOCC can effectively maintain diversity and prevent the algorithm from falling into local optima.
 
2.
By comparing HLOCC and HLOCC1, it can be found that HLOCC1 has the opportunity to learn from other individuals, rather than simply copying the SKD like HLO, and thus its diversity is maintained well. However, as HLOCC1 only performs CCLO learning, all individuals have only a small probability of learning from the SKD, so its convergence rate is slow.
 
In summary, the developed CCLO can efficiently retain the optimal bit values by learning the useful information in the winners' individual best solutions, and it simultaneously maintains the exploration ability by choosing two different random individuals for comparison each time. Therefore, with the introduction of the CCLO, HLOCC achieves a much better trade-off between exploration and exploitation. Specifically, at the beginning of the search, the efficiency and reliability of the ILO are low due to the random initialization of the population. At this stage, the CCLO can efficiently find the optimal bit values by learning from the winners' individual best solutions, which boosts the effectiveness and confidence of the social learning operator and consequently enhances the exploitation ability of HLOCC. As the greedy strategy is adopted for updating the IKDs and SKD, the risk of premature convergence and being trapped in local optima increases quickly as the search progresses. With the introduction of the random individual competition, some individuals in the population retain diverse information, so useful information lost during the learning process has the opportunity to be recovered by the competitive and cooperative learning. The CCLO thus makes it less likely that information will be lost during the learning process, and therefore the global search ability is significantly enhanced.

Experimental results and discussions

To verify its performance, the proposed HLOCC, as well as eight recent algorithms, i.e. Simple Human Learning Optimization (SHLO) [2], Diverse Human Learning Optimization (DHLO) [8], Time-Varying Mirrored S-shaped Binary Particle Swarm Optimization (TVMSBPSO) [38], the Binary Whale Optimization Algorithm (BWOA) [39], the Binary Crow Search Algorithm (BinCSA) [40], Quadratic Binary Particle Swarm Optimization (QBPSO) [41], Improved Binary Differential Evolution (IBDE) [42] and Binary Gaining-Sharing Knowledge-based Optimization (pBGSK) [43], were applied to solve the CEC17 benchmark functions. For a fair comparison, the recommended parameter values were adopted for all the algorithms, as listed in Table 4. All the cases were run 100 times independently. Then HLOCC was tested on fifteen low-, middle- and large-scale UFL problems, and its performance is compared with other state-of-the-art algorithms. In the third study, a different dataset named M* was tackled by HLOCC to further analyze and discuss its performance.
Table 4
Parameter settings of the algorithms
Algorithm | Parameters
HLOCC | \({\text{pr}} = 5/M,\;{\text{pi}} = 0.85 + 2/M,\;{\text{pil}} = 0.83,\;{\text{pcc}} = 0.89\)
SHLO | \({\text{pr}} = 5/M,\;{\text{pi}} = 0.85 + 2/M\)
DHLO | \({\text{pr}} = 5/M,\;{\text{pi}} = 0.85,\;3\sigma = 0.02,\;{\text{DG}} = 1000\)
TVMSBPSO | \(c_{1} = c_{2} = 2,\;\omega = 1,\;\sigma_{\min } = 0.1,\;\sigma_{\max } = 1,\;v_{\max } = 10\)
BWOA | \(a_{\max } = 2,\;a_{\min } = 0,\;b = 1\)
BinCSA | \({\text{AP}} = 0.1\)
QBPSO | \(c_{1} = c_{2} = 2,\;\omega_{\min } = 0.4,\;\omega_{\max } = 0.9,\;v_{\max } = 6\)
IBDE | \(\delta = 0.05,\;\alpha = 1.0,\;p_{{{\text{sm}}}} = 0.008,\;b = 0.5\)
pBGSK | \({\text{NP}}_{{{\text{min}}}} = 12,\;p = 0.1,\;k_{{\text{R}}} = 0.95\)

CEC17 benchmark functions

Low-dimensional functions

HLOCC and the other eight state-of-the-art algorithms were used to solve the 10-dimensional CEC17 functions. The numerical results, including the mean (Mean), the best value (Best) and the standard deviation (Std), are listed in Table 5, where the best results are marked in bold. Besides, Student's t test (t-test) and the Wilcoxon signed-rank test (W-test) were performed, and the corresponding results are also shown in Table 5, in which "+/§/#" indicate that the optimization result of HLOCC is significantly better than, similar to, or worse than that of the compared algorithm at the 95% confidence level, respectively. In Table 5, the upper entry attached to the Mean represents the t-test result and the lower entry represents the W-test result. Note that the t-test, a parametric test, requires normality and homogeneity of variance, while the W-test, a nonparametric test, does not. Therefore, the t-test is more reliable when the Gaussian assumption is met, while the W-test is more powerful when this assumption is violated. For convenience, the results of the t-test and W-test are summarized in Table 6, in which the total score is calculated by subtracting Worse from Better.
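The pairwise significance testing described above can be reproduced with standard SciPy routines, as in the hedged sketch below; the two arrays are placeholders for the 100 independent run results of HLOCC and of a compared algorithm:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
hlocc_runs = rng.random(100)        # placeholder: 100 run results of HLOCC
other_runs = rng.random(100) + 0.5  # placeholder: 100 run results of a compared algorithm

# Student's t-test (assumes normality and homogeneity of variance) and the
# Wilcoxon signed-rank test (nonparametric), both evaluated at the 95% confidence level.
t_stat, t_p = stats.ttest_ind(hlocc_runs, other_runs)
w_stat, w_p = stats.wilcoxon(hlocc_runs, other_runs)

# p < 0.05 marks a significant difference ("+" or "#" depending on which mean
# is better); otherwise the two algorithms are treated as similar ("§").
print(f"t-test p = {t_p:.3g}, Wilcoxon p = {w_p:.3g}")
```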
Table 5
Results of all algorithms on the 10-dimensional benchmark functions
Function
Metric
HLOCC
HLO
DHLO
TVMSBPSO
BWOA
BinCSA
QBPSO
IBDE
pBGSK
F1
Best
3.9672E+02
1.1185E+03
1.4039E+03
1.5800E+04
1.8163E+08
1.1529E+06
8.1335E+06
9.4438E+06
3.8421E+05
Mean
1.5573E+06
1.8122E+07\(\tfrac{+}{+}\)
3.1500E+06\(\tfrac{\S}{+}\)
1.5756E+07\(\tfrac{+}{+}\)
6.7280E+08\(\tfrac{+}{+}\)
5.2362E+07\(\tfrac{+}{+}\)
3.7589E+08\(\tfrac{+}{+}\)
2.6254E+08\(\tfrac{+}{+}\)
1.4519E+08\(\tfrac{+}{+}\)
Std
4.5033E+06
3.9666E+07
7.3689E+06
2.0633E+07
2.1573E+08
6.1189E+07
4.8365E+08
2.8098E+08
2.4326E+08
F2
Best
2.4059E-02
5.1082E-02
3.0183E-02
4.3138E-01
4.8817E+05
1.6542E+02
6.7318E+03
2.8540E+03
6.5480E+05
Mean
3.8860E+03
4.1086E+04\(\tfrac{+}{+}\)
5.4544E+03\(\tfrac{\S }{\S }\)
7.5725E+04\(\tfrac{+}{+}\)
2.7875E+07 \(\tfrac{+}{+}\)
2.5003E+06\(\tfrac{+}{+}\)
7.1572E+07\(\tfrac{+}{+}\)
2.8677E+07\(\tfrac{+}{+}\)
1.6593E+08\(\tfrac{+}{+}\)
Std
8.8480E+03
1.1975E+05
1.5175E+04
1.4165E+05
4.2059E+07
8.9638E+06
3.0218E+08
7.4637E+07
2.5977E+08
F3
Best
3.7843E+01
1.7707E+01
2.0331E+01
1.4793E+00
1.9321E+03
3.3214E+01
4.8992E+01
7.6638E+03
1.7525E+03
Mean
4.3453E+02
3.9598E+02\(\tfrac{\S }{\S }\)
4.0077E+02\(\tfrac{\S }{\S }\)
2.4622E+02\(\tfrac{\# }{\# }\)
4.8438E+03\(\tfrac{+}{+}\)
4.4934E+02\(\tfrac{\S }{\S }\)
1.1063E+03\(\tfrac{+}{+}\)
2.0271E+04\(\tfrac{+}{+}\)
8.2131E+03\(\tfrac{+}{+}\)
Std
2.8067E+02
3.3721E+02
3.8592E+02
1.8275E+02
1.4821E+03
3.2854E+02
9.4519E+02
6.1687E+03
2.4776E+03
F4
Best
1.1364E+00
1.8248E+00
3.8540E-01
2.2196E+00
2.2373E+01
4.5345E+00
3.2782E+00
3.1736E+00
6.5921E+00
Mean
6.0044E+00
8.5594E+00\(\tfrac{+}{+}\)
6.0946E+00\(\tfrac{\S }{\S }\)
1.1942E+01\(\tfrac{+}{+}\)
5.8261E+01\(\tfrac{+}{+}\)
1.6275E+01 \(\tfrac{+}{+}\)
4.5131E+01\(\tfrac{+}{+}\)
2.8878E+01 \(\tfrac{+}{+}\)
2.5264E+01 \(\tfrac{+}{+}\)
Std
1.3541E+00
6.0344E+00
2.4017E+00
1.3989E+01
1.8600E+01
1.3155E+01
3.3269E+01
1.9121E+01
1.9294E+01
F5
Best
1.1086E+00
2.0942E+00
1.0333E+00
2.2481E+00
3.3241E+01
1.9749E+00
9.0330E+00
9.6989E+00
1.3437E+01
Mean
4.3206E+00
6.5649E+00 \(\tfrac{+}{+}\)
4.2780E+00\(\tfrac{\S }{\S }\)
9.9085E+00\(\tfrac{+}{+}\)
4.7355E+01\(\tfrac{+}{+}\)
9.1578E+00 \(\tfrac{+}{+}\)
2.2470E+01\(\tfrac{+}{+}\)
2.4559E+01\(\tfrac{+}{+}\)
3.1230E+01 \(\tfrac{+}{+}\)
Std
1.6065E+00
2.5640E+00
1.5876E+00
2.9921E+00
6.4174E+00
3.4449E+00
7.6702E+00
7.3521E+00
5.2156E+00
F6
Best
2.5986E-03
1.3556E-02
3.2883E-03
6.8202E-02
1.0372E+01
2.3316E-01
1.5826E+00
4.0999E-01
1.4843E+00
Mean
1.4104E-01
4.6589E-01 \(\tfrac{+}{+}\)
1.8209E-01\(\tfrac{+}{+}\)
7.9102E-01 \(\tfrac{+}{+}\)
1.9111E+01\(\tfrac{+}{+}\)
1.5868E+00 \(\tfrac{+}{+}\)
7.8847E+00 \(\tfrac{+}{+}\)
5.7292E+00\(\tfrac{+}{+}\)
6.9777E+00\(\tfrac{+}{+}\)
Std
1.3033E-01
3.6381E-01
1.6867E-01
5.3025E-01
3.0105E+00
1.0423E+00
4.2416E+00
3.0752E+00
2.9682E+00
F7
Best
3.3915E+00
9.0763E+00
6.7547E+00
7.4357E+00
7.1759E+01
1.6810E+01
2.5079E+01
3.0055E+01
2.8581E+01
Mean
1.5239E+01
2.3346E+01\(\tfrac{+}{+}\)
1.6272E+01\(\tfrac{ + }{\S }\)
2.3782E+01\(\tfrac{+}{+}\)
1.0873E+02\(\tfrac{+}{+}\)
2.6107E+01 \(\tfrac{+}{+}\)
5.9188E+01\(\tfrac{+}{+}\)
5.7853E+01\(\tfrac{+}{+}\)
5.9213E+01\(\tfrac{+}{+}\)
Std
3.5235E+00
5.9976E+00
3.2863E+00
5.9632E+00
1.4127E+01
6.3494E+00
1.8577E+01
1.6654E+01
1.6261E+01
F8
Best
1.2950E+00
3.0934E+00
1.1054E+00
2.9883E+00
3.7297E+01
3.4560E+00
1.0433E+01
1.1007E+01
1.7887E+01
Mean
5.0282E+00
7.7604E+00\(\tfrac{+}{+}\)
4.9432E+00\(\tfrac{\S }{\S }\)
1.1233E+01\(\tfrac{+}{+}\)
4.8939E+01\(\tfrac{+}{+}\)
1.0502E+01 \(\tfrac{+}{+}\)
2.4267E+01\(\tfrac{+}{+}\)
2.8570E+01\(\tfrac{+}{+}\)
3.4650E+01 \(\tfrac{+}{+}\)
Std
1.6951E+00
2.8702E+00
1.7576E+00
4.4689E+00
5.3384E+00
3.7110E+00
8.0971E+00
8.1827E+00
5.3982E+00
F9
Best
3.4677E-04
4.3841E-04
6.1442E-04
9.6201E-02
7.2478E+01
5.4455E-01
1.4499E+00
7.3752E+00
5.8870E+01
Mean
5.5023E-01
2.9059E+00\(\tfrac{+}{+}\)
9.6975E-01 \(\tfrac{+}{+}\)
8.7495E+00\(\tfrac{+}{+}\)
3.2367E+02\(\tfrac{+}{+}\)
1.8115E+01 \(\tfrac{+}{+}\)
9.6740E+01\(\tfrac{+}{+}\)
2.5156E+02\(\tfrac{+}{+}\)
2.6705E+02\(\tfrac{+}{+}\)
Std
6.4304E-01
3.9576E+00
1.4049E+00
1.3574E+01
1.1316E+02
1.8961E+01
7.2242E+01
1.4452E+02
1.3878E+02
F10
Best
3.9253E+00
4.7438E-01
1.1331E+00
2.7613E+00
4.7239E+02
3.0145E+01
1.6894E+02
3.0987E+02
9.0619E+02
Mean
9.6219E+01
1.7299E+02\(\tfrac{+}{+}\)
8.0245E+01\(\tfrac{\S }{\S }\)
3.1137E+02 \(\tfrac{+}{+}\)
1.3093E+03 \(\tfrac{+}{+}\)
2.7157E+02\(\tfrac{+}{+}\)
5.4479E+02 \(\tfrac{+}{+}\)
6.9207E+02\(\tfrac{+}{+}\)
1.3212E+03\(\tfrac{+}{+}\)
Std
7.9156E+01
1.1171E+02
7.3578E+01
1.7372E+02
1.4543E+02
1.4128E+02
1.9005E+02
1.7971E+02
1.4971E+02
F11
Best
1.3967E+00
1.8705E+00
1.1047E+00
2.0775E+00
3.8772E+01
4.5333E+00
9.2682E+00
1.1494E+01
1.3880E+01
Mean
7.1194E+00
1.7793E+01 \(\tfrac{+}{+}\)
7.5389E+00\(\tfrac{\S }{\S }\)
1.4798E+01\(\tfrac{+}{+}\)
9.1689E+01 \(\tfrac{+}{+}\)
3.6530E+01\(\tfrac{+}{+}\)
7.1723E+01 \(\tfrac{+}{+}\)
2.1952E+02\(\tfrac{+}{+}\)
1.5434E+02 \(\tfrac{+}{+}\)
Std
2.9651E+00
3.6872E+01
3.5693E+00
3.1365E+01
2.6215E+01
6.0198E+01
7.9185E+01
2.6772E+02
1.0153E+02
F12
Best
3.8624E+02
6.7622E+02
3.0865E+02
4.8523E+02
6.6433E+05
7.9812E+03
2.6087E+03
1.9283E+04
1.2633E+05
Mean
2.8514E+04
7.4392E+04\(\tfrac{+}{+}\)
3.3046E+04\(\tfrac{\S }{\S }\)
3.1180E+05 \(\tfrac{+}{+}\)
1.5716E+07\(\tfrac{+}{+}\)
9.3921E+05 \(\tfrac{+}{+}\)
4.8333E+06 \(\tfrac{+}{+}\)
2.7632E+06 \(\tfrac{+}{+}\)
3.8309E+06\(\tfrac{+}{+}\)
Std
4.5836E+04
1.6101E+05
4.9869E+04
1.1233E+06
7.7613E+06
1.1863E+06
6.7244E+06
3.4858E+06
4.0695E+06
F13
Best
8.8593E+00
9.4645E+00
2.2900E+00
1.3599E+01
1.7078E+03
2.9461E+01
3.2823E+01
2.8191E+01
8.8231E+01
Mean
1.0049E+02
2.0402E+03 \(\tfrac{+}{+}\)
9.0860E+02\(\tfrac{+}{+}\)
5.7054E+03\(\tfrac{+}{+}\)
1.5657E+04 \(\tfrac{+}{+}\)
4.2973E+03\(\tfrac{+}{+}\)
4.2586E+04 \(\tfrac{+}{+}\)
2.5735E+04\(\tfrac{+}{+}\)
7.2305E+03\(\tfrac{+}{+}\)
Std
1.5595E+02
3.8404E+03
2.4284E+03
7.0552E+03
1.0504E+04
6.6548E+03
2.0332E+05
5.5947E+04
1.6116E+04
F14
Best
4.8058E+00
1.3242E+01
8.9170E-01
1.1033E+00
4.1441E+01
1.3980E+01
2.7933E+01
2.8740E+01
4.4885E+01
Mean
2.1494E+01
1.7098E+02\(\tfrac{+}{+}\)
2.0171E+01\(\tfrac{\S }{\S }\)
1.8425E+02\(\tfrac{+}{+}\)
1.0803E+02 \(\tfrac{+}{+}\)
7.9184E+01\(\tfrac{+}{+}\)
1.2155E+03 \(\tfrac{+}{+}\)
6.1956E+02 \(\tfrac{+}{+}\)
3.0280E+02 \(\tfrac{+}{+}\)
Std
8.7574E+00
2.6508E+02
1.9297E+01
6.2897E+02
4.9533E+01
8.2413E+01
2.5317E+03
1.0353E+03
5.1104E+02
F15
Best
1.7876E+00
3.0430E+00
1.6262E+00
2.7160E+00
7.5936E+01
5.5479E+00
1.8659E+01
1.2513E+01
3.7228E+01
Mean
1.3209E+01
1.1074E+02 \(\tfrac{+}{+}\)
1.9821E+01\(\tfrac{\S }{\S }\)
3.0789E+02 \(\tfrac{+}{+}\)
5.9918E+02 \(\tfrac{+}{+}\)
2.5196E+02\(\tfrac{+}{+}\)
1.5560E+03\(\tfrac{+}{+}\)
7.6058E+02\(\tfrac{+}{+}\)
3.7483E+03\(\tfrac{+}{+}\)
Std
9.2175E+00
1.5296E+02
4.7642E+01
6.7616E+02
4.0108E+02
3.4646E+02
2.5840E+03
1.2681E+03
4.0834E+03
F16
Best
3.9561E-01
4.7213E-01
1.8149E-01
2.3722E-01
2.0241E+01
1.3759E+00
1.1292E+00
2.5668E+00
1.9183E+01
Mean
1.8418E+00
1.2286E+01\(\tfrac{+}{+}\)
5.4264E+00\(\tfrac{\S }{ + }\)
1.8257E+01\(\tfrac{+}{+}\)
8.8546E+01 \(\tfrac{+}{+}\)
1.7495E+01 \(\tfrac{+}{+}\)
1.2375E+02\(\tfrac{+}{+}\)
1.7899E+02\(\tfrac{+}{+}\)
7.9488E+01 \(\tfrac{+}{+}\)
Std
1.4627E+00
2.9792E+01
2.0853E+01
3.6553E+01
4.4749E+01
2.8471E+01
1.0616E+02
1.2358E+02
3.5497E+01
F17
Best
2.0615E-01
1.4708E+00
9.0907E-01
1.5281E+00
3.4750E+01
3.8481E+00
8.4574E+00
1.2099E+01
3.0167E+01
Mean
8.0934E+00
1.2891E+01\(\tfrac{+}{+}\)
8.1532E+00\(\tfrac{\S }{\S }\)
1.3922E+01\(\tfrac{+}{+}\)
7.2522E+01 \(\tfrac{+}{+}\)
2.4194E+01\(\tfrac{+}{+}\)
4.6896E+01 \(\tfrac{+}{+}\)
5.7974E+01\(\tfrac{+}{+}\)
6.9330E+01\(\tfrac{+}{+}\)
Std
5.6423E+00
1.0617E+01
5.4668E+00
8.9647E+00
1.2400E+01
9.5210E+00
3.3648E+01
3.5025E+01
1.6976E+01
F18
Best
1.5283E+01
3.8892E+01
7.7127E+00
8.9297E+01
3.7904E+03
5.0291E+01
6.3505E+01
1.2237E+02
4.4828E+02
Mean
2.8786E+02
1.6458E+03\(\tfrac{+}{+}\)
3.0463E+02\(\tfrac{\S }{\S }\)
6.5649E+03\(\tfrac{+}{+}\)
5.4070E+04 \(\tfrac{+}{+}\)
3.6889E+03\(\tfrac{+}{+}\)
1.9893E+04\(\tfrac{+}{+}\)
4.9571E+03\(\tfrac{+}{+}\)
7.5212E+03\(\tfrac{+}{+}\)
Std
4.7994E+02
1.7976E+03
5.1051E+02
7.3978E+03
4.0769E+04
3.8477E+03
3.7976E+04
3.6972E+03
6.8813E+03
F19
Best
3.8896E-01
5.2188E-01
2.9051E-01
6.1929E-01
3.4847E+01
3.4139E+00
3.2588E+00
2.6405E+00
1.9409E+01
Mean
6.3856E+00
6.3483E+01\(\tfrac{+}{+}\)
1.1758E+01\(\tfrac{ +}{\S }\)
1.3947E+02\(\tfrac{+}{+}\)
4.3287E+02\(\tfrac{+}{+}\)
3.1608E+02 \(\tfrac{+}{+}\)
2.6616E+03\(\tfrac{+}{+}\)
1.5841E+03\(\tfrac{+}{+}\)
2.5690E+03 \(\tfrac{+}{+}\)
Std
7.4916E+00
1.2116E+02
1.6030E+01
4.1396E+02
4.9290E+02
9.3970E+02
4.4240E+03
3.7376E+03
3.1472E+03
F20
Best
3.8474E-02
4.6363E-01
4.4860E-04
2.4924E-01
4.0520E+01
3.0624E+00
7.8997E+00
2.2002E+00
1.7193E+01
Mean
3.9022E+00
1.0095E+01\(\tfrac{+}{+}\)
5.6611E+00\(\tfrac{ + }{\S }\)
9.5106E+00\(\tfrac{+}{+}\)
6.4204E+01 \(\tfrac{+}{+}\)
2.0324E+01\(\tfrac{+}{+}\)
4.1553E+01 \(\tfrac{+}{+}\)
6.5428E+01 \(\tfrac{+}{+}\)
6.9609E+01\(\tfrac{+}{+}\)
Std
4.4303E+00
8.8476E+00
6.2668E+00
7.4159E+00
1.0619E+01
1.0067E+01
1.7066E+01
4.1746E+01
3.1930E+01
F21
Best
1.1317E+01
1.5278E+01
1.0092E+02
1.0070E+02
1.0456E+02
1.0277E+02
8.8260E+00
1.0926E+02
1.4986E+02
Mean
1.0250E+02
1.4155E+02 \(\tfrac{+}{+}\)
1.0753E+02 \(\tfrac{+}{+}\)
1.0563E+02\(\tfrac{\S }{ + }\)
1.1456E+02\(\tfrac{+}{+}\)
1.2263E+02\(\tfrac{+}{+}\)
1.2797E+02\(\tfrac{+}{+}\)
1.7937E+02\(\tfrac{+}{+}\)
2.2720E+02 \(\tfrac{+}{+}\)
Std
1.3464E+01
4.5471E+01
2.1097E+01
1.5927E+01
5.2758E+00
2.4464E+01
4.4086E+01
4.0283E+01
1.8117E+01
F22
Best
1.5073E+01
1.5641E+00
1.7102E+00
1.1758E+01
8.1252E+01
6.7678E+01
2.7880E+01
3.9682E+01
1.0609E+02
Mean
6.6984E+01
7.8702E+01 \(\tfrac{+}{+}\)
7.1385E+01\(\tfrac{\S }{\S }\)
8.7183E+01\(\tfrac{+}{+}\)
1.5472E+02\(\tfrac{+}{+}\)
1.2072E+02\(\tfrac{+}{+}\)
1.1584E+02\(\tfrac{+}{+}\)
1.3503E+02 \(\tfrac{+}{+}\)
1.3740E+02\(\tfrac{+}{+}\)
Std
3.3590E+01
3.2944E+01
3.8361E+01
3.4667E+01
3.0311E+01
1.5193E+01
5.0254E+01
4.8123E+01
2.9781E+01
F23
Best
4.3594E+01
8.0335E+01
4.8796E+01
3.0056E+02
2.6155E+02
1.2812E+02
3.0683E+02
3.1428E+02
3.1651E+02
Mean
3.0388E+02
3.0514E+02\(\tfrac{\S }{ + }\)
3.0515E+02\(\tfrac{\S }{+ }\)
3.1254E+02\(\tfrac{+}{+}\)
3.4781E+02\(\tfrac{+}{+}\)
3.1009E+02\(\tfrac{\S }{ + }\)
3.2274E+02 \(\tfrac{+}{+}\)
3.2943E+02\(\tfrac{+}{+}\)
3.2861E+02 \(\tfrac{+}{+}\)
Std
2.6210E+01
2.9815E+01
2.5871E+01
4.0387E+00
1.1004E+01
1.8552E+01
7.2646E+00
7.2486E+00
3.7398E+00
F24
Best
1.0106E+02
1.1045E+02
1.0045E+02
1.0061E+02
1.6872E+02
1.9217E+02
1.1526E+02
1.4400E+02
2.8265E+02
Mean
1.5592E+02
2.6020E+02\(\tfrac{+}{+}\)
1.7835E+02\(\tfrac{ + }{\S }\)
2.5410E+02\(\tfrac{+}{+}\)
2.7730E+02\(\tfrac{+}{+}\)
3.2142E+02 \(\tfrac{+}{+}\)
3.2263E+02 \(\tfrac{+}{+}\)
2.7939E+02\(\tfrac{+}{+}\)
3.5377E+02 \(\tfrac{+}{+}\)
Std
3.7098E+01
7.6009E+01
9.0026E+01
1.0835E+02
7.5269E+01
3.9470E+01
7.6187E+01
7.5630E+01
1.2333E+01
F25
Best
3.9804E+02
2.7389E+02
3.9796E+02
3.9849E+02
4.5347E+02
4.0881E+02
4.1389E+02
3.2369E+02
4.3631E+02
Mean
4.1494E+02
4.2564E+02\(\tfrac{+}{+}\)
4.2319E+02\(\tfrac{+}{+}\)
4.3811E+02\(\tfrac{+}{+}\)
4.8281E+02\(\tfrac{+}{+}\)
4.4773E+02\(\tfrac{+}{+}\)
4.5521E+02\(\tfrac{+}{+}\)
4.4650E+02\(\tfrac{+}{+}\)
4.5544E+02\(\tfrac{+}{+}\)
Std
1.9309E+01
2.5051E+01
2.2450E+01
1.5941E+01
1.1842E+01
9.9256E+00
1.9218E+01
2.2972E+01
8.6099E+00
F26
Best
1.2719E+01
6.1468E+01
6.8802E+00
3.0080E+02
3.8931E+02
3.3708E+02
3.2605E+02
1.4658E+02
4.5291E+02
Mean
3.0813E+02
3.3816E+02\(\tfrac{+}{+}\)
3.1322E+02\(\tfrac{\S }{\S }\)
3.4240E+02\(\tfrac{+}{+}\)
4.6557E+02 \(\tfrac{+}{+}\)
4.1690E+02 \(\tfrac{+}{+}\)
4.4807E+02\(\tfrac{+}{+}\)
4.6217E+02\(\tfrac{+}{+}\)
7.7248E+02\(\tfrac{+}{+}\)
Std
6.3578E+01
5.3888E+01
5.9333E+01
2.4390E+01
3.1815E+01
4.8925E+01
6.8393E+01
1.2016E+02
2.1150E+02
F27
Best
3.8691E+02
3.8764E+02
3.8726E+02
3.8781E+02
4.0258E+02
3.9135E+02
3.8798E+02
3.9218E+02
3.9391E+02
Mean
3.9007E+02
3.9237E+02\(\tfrac{+}{+}\)
3.9194E+02 \(\tfrac{+}{+}\)
3.9335E+02 \(\tfrac{+}{+}\)
4.0998E+02 \(\tfrac{+}{+}\)
3.9662E+02 \(\tfrac{+}{+}\)
4.0148E+02 \(\tfrac{+}{+}\)
4.0314E+02\(\tfrac{+}{+}\)
3.9903E+02\(\tfrac{+}{+}\)
Std
1.6175E+00
2.6294E+00
2.4460E+00
5.8942E+00
2.8437E+00
2.7908E+00
7.8177E+00
7.7684E+00
3.1772E+00
F28
Best
5.4993E+01
1.1718E+02
1.3715E+01
3.0875E+02
3.9158E+02
3.4492E+02
3.0135E+02
3.3521E+02
4.2416E+02
Mean
3.3521E+02
3.8100E+02 \(\tfrac{+}{+}\)
3.2816E+02\(\tfrac{\S }{\S }\)
3.7059E+02 \(\tfrac{+}{+}\)
4.3655E+02 \(\tfrac{+}{+}\)
5.3160E+02\(\tfrac{+}{+}\)
5.5550E+02 \(\tfrac{+}{+}\)
4.5902E+02\(\tfrac{+}{+}\)
6.2158E+02 \(\tfrac{+}{+}\)
Std
6.5780E+01
6.6759E+01
9.0076E+01
1.7253E+01
1.8208E+01
8.5471E+01
1.0335E+02
6.8004E+01
4.0702E+01
F29
Best
2.3068E+02
2.3325E+02
2.2950E+02
2.3194E+02
2.7739E+02
2.3582E+02
2.3950E+02
2.5175E+02
2.7742E+02
Mean
2.4208E+02
2.5173E+02\(\tfrac{+}{+}\)
2.4123E+02\(\tfrac{\S}{\S}\)
2.5171E+02\(\tfrac{+}{+}\)
3.3244E+02 \(\tfrac{+}{+}\)
2.5554E+02\(\tfrac{+}{+}\)
2.8272E+02\(\tfrac{+}{+}\)
3.2474E+02\(\tfrac{+}{+}\)
3.2824E+02 \(\tfrac{+}{+}\)
Std
6.2308E+00
1.3620E+01
7.6601E+00
1.3387E+01
2.2217E+01
1.2233E+01
2.8161E+01
4.1160E+01
2.9207E+01
F30
Best
5.2765E+02
6.4853E+02
6.4724E+02
8.2180E+02
5.0282E+04
9.5645E+02
1.3255E+03
1.0263E+03
9.8602E+04
Mean
1.8998E+03
6.1226E+03\(\tfrac{+}{+}\)
2.6054E+03 \(\tfrac{+}{+}\)
3.8883E+04\(\tfrac{+}{+}\)
1.0051E+06\(\tfrac{+}{+}\)
1.8318E+05 \(\tfrac{+}{+}\)
2.6612E+05 \(\tfrac{+}{+}\)
2.5913E+05\(\tfrac{+}{+}\)
1.7677E+06\(\tfrac{+}{+}\)
Std
1.8121E+03
1.0045E+04
2.3608E+03
1.9326E+05
5.5025E+05
3.1818E+05
4.7097E+05
3.1681E+05
1.0708E+06
Table 6
Summary results of the t-test and W-test on the 10-dimensional benchmark functions
Result | HLO | DHLO | TVMSBPSO | BWOA | BinCSA | QBPSO | IBDE | pBGSK
t-test Better | 28 | 11 | 27 | 30 | 28 | 30 | 30 | 30
t-test Same | 2 | 19 | 2 | 0 | 2 | 0 | 0 | 0
t-test Worse | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0
t-test Total | 28 | 11 | 26 | 30 | 28 | 30 | 30 | 30
W-test Better | 29 | 10 | 29 | 30 | 29 | 30 | 30 | 30
W-test Same | 1 | 20 | 0 | 0 | 1 | 0 | 0 | 0
W-test Worse | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0
W-test Total | 29 | 10 | 29 | 30 | 29 | 30 | 30 | 30
Table 5 shows that the proposed HLOCC has the best mean numerical results on 25 out of 30 functions and is only inferior to TVMSBPSO on F3. Besides, the summary of the t-test results in Table 6 indicates that the proposed HLOCC surpasses HLO, DHLO, TVMSBPSO, BWOA, BinCSA, QBPSO, IBDE and pBGSK on 28, 11, 27, 30, 28, 30, 30 and 30 out of 30 functions, respectively. The W-test results support that HLOCC significantly outperforms these compared algorithms on 29, 10, 29, 30, 29, 30, 30 and 30 out of 30 functions, respectively. Thus, it is fair to claim that HLOCC achieves the best performance on the low-dimensional functions.

High-dimensional benchmark functions

For the high-dimensional functions, the total number of candidate solutions is increased to 2900, which is a challenge for all the algorithms. The results of all algorithms on the 30-dimensional benchmark functions are listed in Table 7, and the summarized t-test and W-test results are given in Table 8. Table 7 clearly shows that HLOCC has the best optimization performance, obtaining the best mean numerical results on 27 out of 30 functions. Besides, Table 8 reveals that the optimization ability of HLOCC is better than that of the other algorithms. Specifically, the t-test results show that HLOCC significantly outperforms HLO, DHLO, TVMSBPSO, BWOA, BinCSA, QBPSO, IBDE and pBGSK on 29, 24, 28, 30, 26, 28, 29 and 30 functions, while it is only worse than them on 0, 1, 1, 0, 0, 0, 0 and 0 functions, respectively. The W-test results show that HLOCC significantly outperforms HLO, DHLO, TVMSBPSO, BWOA, BinCSA, QBPSO, IBDE and pBGSK on 30, 25, 27, 30, 26, 28, 29 and 30 out of 30 functions, respectively, while it is only defeated by them on 0, 1, 1, 0, 0, 0, 0 and 0 functions, respectively. HLOCC maintains positive total scores that are higher than those of most compared algorithms, which shows the evident advantage of HLOCC.
Table 7
Results of all algorithms on the 30-dimensional benchmark functions
Function
Metric
HLOCC
HLO
DHLO
TVMSBPSO
BWOA
BinCSA
QBPSO
IBDE
pBGSK
F1
Best
3.6871E+06
4.7787E+07
1.6726E+07
6.8880E+07
1.8508E+10
8.2896E+07
6.9096E+08
1.0625E+09
2.7346E+10
Mean
9.3101E+07
4.7736E+08\(\tfrac{+}{+}\)
1.8444E+08\(\tfrac{+}{+}\)
4.6805E+08\(\tfrac{+}{+}\)
2.8133E+10\(\tfrac{+}{+}\)
5.9392E+08\(\tfrac{+}{+}\)
4.1045E+09\(\tfrac{+}{+}\)
2.9770E+09\(\tfrac{+}{+}\)
3.8623E+10\(\tfrac{+}{+}\)
Std
6.5238E+07
3.6496E+08
1.2523E+08
3.6600E+08
3.2926E+09
3.3653E+08
1.6904E+09
1.0166E+09
3.8550E+09
F2
Best
5.1743E+12
3.4749E+14
2.8614E+12
2.3279E+14
2.5387E+33
2.4511E+18
5.4753E+20
2.5367E+17
3.8411E+31
Mean
1.8335E+19
1.1682E+25\(\tfrac{\S }{ + }\)
9.3576E+20\(\tfrac{\S }{\S }\)
1.7375E+26\(\tfrac{+}{+}\)
1.6308E+36\(\tfrac{+}{+}\)
5.6652E+26\(\tfrac{\S }{ + }\)
1.2356E+34\(\tfrac{\S }{ + }\)
3.4654E+30\(\tfrac{\S }{ +}\)
2.5249E+37 \(\tfrac{+}{+}\)
Std
1.3639E+20
7.9107E+25
8.3713E+21
7.7154E+26
3.5271E+36
4.7800E+27
8.2280E+34
3.4200E+31
4.3485E+37
F3
Best
8.7640E+03
9.9651E+03
9.8150E+03
1.4408E+03
4.7942E+04
1.4480E+04
6.2473E+03
8.6755E+04
7.4904E+04
Mean
1.8527E+04
2.2167E+04\(\tfrac{+}{+}\)
2.0763E+04\(\tfrac{+}{+}\)
4.9398E+03 \(\tfrac{\# }{\# }\)
7.8834E+04 \(\tfrac{+}{+}\)
3.2738E+04\(\tfrac{+}{+}\)
1.7247E+04\(\tfrac{\S }{\S }\)
1.3653E+05\(\tfrac{+}{+}\)
1.0241E+05\(\tfrac{+}{+}\)
Std
4.5122E+03
5.8475E+03
4.6522E+03
2.6394E+03
1.0230E+04
6.5164E+03
5.4273E+03
2.3693E+04
1.0996E+04
F4
Best
8.2551E+01
8.7685E+01
7.3610E+01
9.9986E+01
1.9989E+03
1.1447E+02
1.4991E+02
1.5335E+02
3.2203E+03
Mean
1.1759E+02
1.4647E+02\(\tfrac{+}{+}\)
1.3357E+02 \(\tfrac{+}{+}\)
1.4893E+02\(\tfrac{+}{+}\)
3.6193E+03\(\tfrac{+}{+}\)
1.6051E+02\(\tfrac{+}{+}\)
3.2489E+02 \(\tfrac{+}{+}\)
2.5750E+02\(\tfrac{+}{+}\)
6.1106E+03\(\tfrac{+}{+}\)
Std
1.8385E+01
2.8842E+01
2.8502E+01
3.0707E+01
6.9148E+02
2.5854E+01
1.0223E+02
5.8713E+01
1.0436E+03
F5
Best
1.9946E+01
2.9017E+01
2.3950E+01
3.2281E+01
2.7681E+02
2.8944E+01
7.4688E+01
7.3437E+01
3.1325E+02
Mean
4.0948E+01
5.6760E+01 \(\tfrac{+}{+}\)
4.6732E+01 \(\tfrac{+}{+}\)
6.4779E+01\(\tfrac{+}{+}\)
3.3448E+02\(\tfrac{+}{+}\)
6.9092E+01\(\tfrac{+}{+}\)
1.3024E+02\(\tfrac{+}{+}\)
1.3679E+02 \(\tfrac{+}{+}\)
3.7439E+02\(\tfrac{+}{+}\)
Std
8.3189E+00
1.1899E+01
9.9144E+00
1.5493E+01
2.0195E+01
1.6375E+01
2.7808E+01
2.1617E+01
2.0748E+01
F6
Best
8.6777E-02
4.2521E-01
2.4390E-01
4.6356E-01
5.2028E+01
7.5630E-01
7.6617E+00
5.5367E+00
6.4191E+01
Mean
9.8080E-01
2.0863E+00\(\tfrac{+}{+}\)
1.6241E+00\(\tfrac{+}{+}\)
1.8090E+00 \(\tfrac{+}{+}\)
6.2027E+01\(\tfrac{+}{+}\)
1.9469E+00\(\tfrac{+}{+}\)
1.5914E+01\(\tfrac{+}{+}\)
1.0351E+01\(\tfrac{+}{+}\)
7.5400E+01\(\tfrac{+}{+}\)
Std
5.4937E-01
8.7680E-01
7.3009E-01
8.9278E-01
3.6560E+00
6.8835E-01
4.7212E+00
2.2917E+00
4.4828E+00
F7
Best
6.6467E+01
8.5959E+01
5.9036E+01
7.3100E+01
9.0341E+02
7.6663E+01
2.0662E+02
1.8908E+02
1.1168E+03
Mean
9.9754E+01
1.3889E+02 \(\tfrac{+}{+}\)
1.1821E+02 \(\tfrac{+}{+}\)
1.2777E+02\(\tfrac{+}{+}\)
1.1194E+03\(\tfrac{+}{+}\)
1.1134E+02\(\tfrac{+}{+}\)
3.4956E+02 \(\tfrac{+}{+}\)
2.7249E+02\(\tfrac{+}{+}\)
1.3120E+03\(\tfrac{+}{+}\)
Std
1.6315E+01
2.5257E+01
1.9830E+01
2.4853E+01
8.0829E+01
1.8268E+01
6.3589E+01
3.5254E+01
7.7783E+01
F8
Best
2.0835E+01
2.9650E+01
2.6407E+01
4.1291E+01
2.6735E+02
3.1439E+01
6.7940E+01
7.6700E+01
3.1144E+02
Mean
4.1128E+01
5.7494E+01\(\tfrac{+}{+}\)
5.0373E+01\(\tfrac{+}{+}\)
6.6210E+01\(\tfrac{+}{+}\)
3.1961E+02\(\tfrac{+}{+}\)
6.6183E+01\(\tfrac{+}{+}\)
1.2751E+02\(\tfrac{+}{+}\)
1.2642E+02 \(\tfrac{+}{+}\)
3.5369E+02\(\tfrac{+}{+}\)
Std
1.0276E+01
1.1832E+01
1.0946E+01
1.2934E+01
1.4863E+01
1.7513E+01
2.2689E+01
2.2811E+01
1.8388E+01
F9
Best
8.8563E+00
4.4603E+01
1.4612E+01
4.0045E+01
6.2464E+03
2.2824E+01
4.9933E+02
1.3333E+03
8.4864E+03
Mean
7.3645E+01
2.9781E+02\(\tfrac{+}{+}\)
1.2213E+02 \(\tfrac{+}{+}\)
2.5080E+02\(\tfrac{+}{+}\)
8.9235E+03\(\tfrac{+}{+}\)
9.5823E+01 \(\tfrac{+}{+}\)
2.0583E+03 \(\tfrac{+}{+}\)
3.9392E+03\(\tfrac{+}{+}\)
1.1769E+04 \(\tfrac{+}{+}\)
Std
4.9404E+01
1.6836E+02
8.7201E+01
2.3082E+02
1.0269E+03
4.1926E+01
7.8424E+02
1.1276E+03
1.1651E+03
F10
Best
8.3928E+02
1.0858E+03
8.8175E+02
1.2333E+03
6.1592E+03
2.6519E+03
1.9577E+03
2.1409E+03
6.3200E+03
Mean
1.9887E+03
2.2633E+03\(\tfrac{+}{+}\)
1.5873E+03\(\tfrac{\# }{\# }\)
2.5584E+03 \(\tfrac{+}{+}\)
7.0772E+03\(\tfrac{+}{+}\)
5.5717E+03 \(\tfrac{+}{+}\)
3.7193E+03\(\tfrac{+}{+}\)
3.1967E+03 \(\tfrac{+}{+}\)
7.1025E+03\(\tfrac{+}{+}\)
Std
5.0901E+02
4.6180E+02
3.9045E+02
5.0801E+02
2.5163E+02
8.4561E+02
6.2206E+02
3.3857E+02
2.5148E+02
F11
Best
5.8836E+01
6.7158E+01
5.7297E+01
3.1887E+01
1.0632E+03
1.1714E+02
1.8550E+02
4.8897E+02
2.2244E+03
Mean
1.4670E+02
1.8891E+02\(\tfrac{+}{+}\)
1.4924E+02\(\tfrac{\S }{\S }\)
1.8500E+02\(\tfrac{+}{+}\)
2.3720E+03\(\tfrac{+}{+}\)
2.5075E+02 \(\tfrac{+}{+}\)
7.9824E+02\(\tfrac{+}{+}\)
2.7909E+03 \(\tfrac{+}{+}\)
5.1540E+03 \(\tfrac{+}{+}\)
Std
3.4409E+01
8.9748E+01
4.0167E+01
8.3270E+01
5.4290E+02
1.1150E+02
5.5542E+02
1.3591E+03
1.0852E+03
F12
Best
2.0641E+05
6.1651E+05
2.6012E+05
1.9567E+05
1.0821E+09
2.1255E+06
2.0222E+07
3.0036E+07
2.0132E+09
Mean
4.0687E+06
2.6395E+07\(\tfrac{+}{+}\)
7.5688E+06\(\tfrac{+}{+}\)
2.6914E+07\(\tfrac{+}{+}\)
2.2315E+09 \(\tfrac{+}{+}\)
3.2698E+07 \(\tfrac{+}{+}\)
2.7338E+08 \(\tfrac{+}{+}\)
1.9044E+08\(\tfrac{+}{+}\)
3.6959E+09\(\tfrac{+}{+}\)
Std
4.9612E+06
3.0887E+07
1.0678E+07
3.5838E+07
4.5423E+08
3.0217E+07
2.4304E+08
1.0276E+08
6.9125E+08
F13
Best
2.4117E+03
1.3316E+03
1.6362E+03
1.4083E+04
2.2532E+08
3.1279E+04
1.7004E+06
6.0895E+05
1.6751E+08
Mean
1.3250E+06
9.4807E+06\(\tfrac{+}{+}\)
3.7253E+06\(\tfrac{+}{+}\)
4.6499E+06\(\tfrac{+}{+}\)
5.3699E+08 \(\tfrac{+}{+}\)
5.7265E+06 \(\tfrac{+}{+}\)
1.7437E+08\(\tfrac{+}{+}\)
1.4088E+08\(\tfrac{+}{+}\)
6.8983E+08 \(\tfrac{+}{+}\)
Std
3.5146E+06
1.6960E+07
6.4271E+06
8.0576E+06
2.1269E+08
9.2846E+06
1.8763E+08
1.4979E+08
3.0453E+08
F14
Best
2.6500E+03
1.7392E+03
1.7349E+03
6.8272E+02
2.8804E+04
1.2214E+03
5.9541E+03
3.5585E+04
7.2633E+04
Mean
2.2462E+04
5.5131E+04 \(\tfrac{+}{+}\)
4.2111E+04\(\tfrac{+}{+}\)
4.0319E+04\(\tfrac{+}{+}\)
1.8935E+05 \(\tfrac{+}{+}\)
2.7578E+04\(\tfrac{\S }{\S }\)
1.9893E+05 \(\tfrac{+}{+}\)
9.1855E+05 \(\tfrac{+}{+}\)
3.2242E+05\(\tfrac{+}{+}\)
Std
1.8627E+04
4.2336E+04
3.5961E+04
3.7687E+04
8.1524E+04
2.3188E+04
2.2037E+05
8.1354E+05
1.4066E+05
F15
Best
6.3477E+02
5.3316E+02
4.6003E+02
1.9558E+02
4.9960E+06
1.0235E+03
1.8508E+04
1.9398E+03
2.2369E+07
Mean
8.4238E+04
6.1594E+05 \(\tfrac{+}{+}\)
2.1123E+05 \(\tfrac{+}{+}\)
1.6030E+06\(\tfrac{ + }{\S }\)
2.8267E+07\(\tfrac{+}{+}\)
1.0010E+06 \(\tfrac{+}{+}\)
2.0840E+07 \(\tfrac{+}{+}\)
1.1530E+07 \(\tfrac{+}{+}\)
1.0097E+08\(\tfrac{+}{+}\)
Std
1.6280E+05
9.7027E+05
3.4080E+05
5.5302E+06
1.8658E+07
1.6675E+06
3.3943E+07
1.9244E+07
3.7337E+07
F16
Best
4.2745E+01
1.6382E+02
1.3176E+01
4.0847E+01
1.7204E+03
5.1463E+01
2.9398E+02
5.7106E+02
1.3368E+03
Mean
3.6123E+02
5.4049E+02\(\tfrac{+}{+}\)
5.4397E+02\(\tfrac{+}{+}\)
6.8998E+02 \(\tfrac{+}{+}\)
2.1915E+03\(\tfrac{+}{+}\)
3.8924E+02\(\tfrac{\S }{\S }\)
1.1284E+03\(\tfrac{+}{+}\)
1.1003E+03\(\tfrac{+}{+}\)
2.0323E+03\(\tfrac{+}{+}\)
Std
1.7184E+02
1.5566E+02
2.1708E+02
2.4069E+02
1.7856E+02
1.9397E+02
3.2065E+02
2.1889E+02
1.9515E+02
F17
Best
3.9024E+01
4.1422E+01
3.4677E+01
3.7189E+01
6.3208E+02
6.3653E+01
9.6633E+01
2.3643E+02
4.9613E+02
Mean
9.5354E+01
1.2975E+02\(\tfrac{+}{+}\)
1.3504E+02 \(\tfrac{+}{+}\)
1.8558E+02 \(\tfrac{+}{+}\)
9.2547E+02\(\tfrac{+}{+}\)
1.4690E+02 \(\tfrac{+}{+}\)
5.3061E+02\(\tfrac{+}{+}\)
5.2456E+02\(\tfrac{+}{+}\)
8.7153E+02\(\tfrac{+}{+}\)
Std
4.6083E+01
6.1397E+01
8.4625E+01
1.2161E+02
1.0906E+02
5.4895E+01
1.8364E+02
1.2449E+02
1.1727E+02
F18
Best
1.2199E+04
1.9341E+04
7.9256E+03
3.3853E+04
8.6392E+05
7.0709E+04
1.2045E+05
6.3784E+04
2.5234E+05
Mean
1.7056E+05
3.0544E+05\(\tfrac{+}{+}\)
2.6860E+05 \(\tfrac{+}{+}\)
5.4365E+05\(\tfrac{+}{+}\)
4.7940E+06\(\tfrac{+}{+}\)
8.1750E+05\(\tfrac{+}{+}\)
2.2419E+06 \(\tfrac{+}{+}\)
1.2516E+06 \(\tfrac{+}{+}\)
5.8560E+06\(\tfrac{+}{+}\)
Std
1.5646E+05
2.1130E+05
2.3943E+05
4.9569E+05
1.7993E+06
5.3361E+05
2.1110E+06
1.2086E+06
2.5640E+06
F19
Best
4.1829E+01
4.8985E+02
1.8526E+02
1.9604E+02
6.6989E+06
1.2846E+03
4.1816E+04
4.9769E+03
3.9702E+07
Mean
1.2660E+05
1.1104E+06\(\tfrac{+}{+}\)
2.3075E+05\(\tfrac{\S }{ + }\)
9.7091E+05\(\tfrac{+}{+}\)
4.8679E+07\(\tfrac{+}{+}\)
6.3751E+05\(\tfrac{+}{+}\)
5.9836E+07\(\tfrac{+}{+}\)
2.0300E+07\(\tfrac{+}{+}\)
1.6784E+08\(\tfrac{+}{+}\)
Std
4.7483E+05
2.9214E+06
5.3118E+05
3.9998E+06
1.9523E+07
1.0315E+06
7.6828E+07
2.9696E+07
5.9612E+07
F20
Best
4.4213E+01
4.2481E+01
3.5724E+01
2.9332E+01
4.1208E+02
5.3143E+01
7.6958E+01
1.3342E+02
3.9606E+02
Mean
1.2042E+02
1.6662E+02\(\tfrac{+}{+}\)
1.2354E+02\(\tfrac{\S }{\S }\)
1.7436E+02 \(\tfrac{+}{+}\)
6.7409E+02\(\tfrac{+}{+}\)
1.3183E+02\(\tfrac{\S }{\S }\)
3.5016E+02 \(\tfrac{+}{+}\)
4.9582E+02\(\tfrac{+}{+}\)
7.3110E+02\(\tfrac{+}{+}\)
Std
5.7361E+01
7.0179E+01
7.4958E+01
1.0005E+02
8.6508E+01
6.3292E+01
1.3590E+02
1.3210E+02
8.8685E+01
F21
Best
2.2361E+02
2.3086E+02
2.2640E+02
2.3190E+02
4.5984E+02
2.3299E+02
1.3518E+02
1.3237E+02
4.7364E+02
Mean
2.4025E+02
2.5764E+02\(\tfrac{+}{+}\)
2.5071E+02\(\tfrac{+}{+}\)
2.6551E+02\(\tfrac{+}{+}\)
5.0702E+02\(\tfrac{+}{+}\)
2.7129E+02\(\tfrac{+}{+}\)
3.3491E+02\(\tfrac{+}{+}\)
3.2926E+02\(\tfrac{+}{+}\)
5.3864E+02\(\tfrac{+}{+}\)
Std
6.5944E+00
1.2400E+01
1.0263E+01
1.4173E+01
1.7926E+01
1.7781E+01
3.4176E+01
3.6307E+01
1.8340E+01
F22
Best
1.2105E+02
1.3889E+02
1.2810E+02
1.3196E+02
2.4159E+03
1.7647E+02
2.1136E+02
3.6082E+02
4.0061E+03
Mean
1.5342E+02
4.6500E+02 \(\tfrac{+}{+}\)
3.5714E+02 \(\tfrac{+}{+}\)
2.0587E+02 \(\tfrac{+}{+}\)
3.2191E+03\(\tfrac{+}{+}\)
3.4112E+02\(\tfrac{+}{+}\)
1.3551E+03\(\tfrac{+}{+}\)
1.6107E+03 \(\tfrac{+}{+}\)
5.2840E+03\(\tfrac{+}{+}\)
Std
1.7933E+01
6.6980E+02
5.3818E+02
4.4377E+01
2.9743E+02
9.4786E+01
1.4528E+03
1.3800E+03
4.5550E+02
F23
Best
3.7489E+02
3.7657E+02
3.7853E+02
3.8636E+02
6.6060E+02
3.8214E+02
4.3614E+02
4.0855E+02
7.3428E+02
Mean
3.9012E+02
4.0608E+02\(\tfrac{+}{+}\)
4.0316E+02 \(\tfrac{+}{+}\)
4.1382E+02\(\tfrac{+}{+}\)
7.2754E+02 \(\tfrac{+}{+}\)
4.1907E+02\(\tfrac{+}{+}\)
5.0737E+02\(\tfrac{+}{+}\)
4.7924E+02 \(\tfrac{+}{+}\)
8.3899E+02\(\tfrac{+}{+}\)
Std
8.4331E+00
1.3125E+01
1.2156E+01
1.3384E+01
1.9092E+01
1.6723E+01
3.2665E+01
2.8783E+01
3.5232E+01
F24
Best
4.5025E+02
4.6803E+02
4.6474E+02
4.6239E+02
7.3360E+02
4.8478E+02
5.0848E+02
5.3903E+02
8.3250E+02
Mean
5.0037E+02
5.0823E+02 \(\tfrac{+}{+}\)
5.0165E+02\(\tfrac{\S }{\S }\)
4.9816E+02 \(\tfrac{\S }{\S }\)
7.8517E+02 \(\tfrac{+}{+}\)
5.6692E+02\(\tfrac{+}{+}\)
5.7911E+02 \(\tfrac{+}{+}\)
6.4521E+02 \(\tfrac{+}{+}\)
9.2994E+02 \(\tfrac{+}{+}\)
Std
1.8079E+01
1.7867E+01
1.6317E+01
2.1058E+01
2.0228E+01
2.1539E+01
3.6523E+01
4.2400E+01
3.8592E+01
F25
Best
3.8742E+02
3.9212E+02
3.9167E+02
3.9336E+02
1.4756E+03
4.0395E+02
4.5399E+02
4.2652E+02
1.9903E+03
Mean
4.1465E+02
4.4672E+02\(\tfrac{+}{+}\)
4.2642E+02 \(\tfrac{+}{+}\)
4.4106E+02 \(\tfrac{+}{+}\)
2.2801E+03 \(\tfrac{+}{+}\)
4.4792E+02\(\tfrac{+}{+}\)
5.8451E+02\(\tfrac{+}{+}\)
5.1503E+02\(\tfrac{+}{+}\)
3.0936E+03\(\tfrac{+}{+}\)
Std
1.9271E+01
2.5353E+01
2.0447E+01
2.2770E+01
3.1833E+02
2.0415E+01
6.9532E+01
3.7933E+01
3.6479E+02
F26
Best
4.7766E+02
7.5017E+02
4.3395E+02
7.3562E+02
4.5240E+03
1.2715E+03
1.2276E+03
8.6794E+02
4.9188E+03
Mean
1.4581E+03
1.6816E+03\(\tfrac{+}{+}\)
1.6181E+03\(\tfrac{+}{+}\)
1.6451E+03 \(\tfrac{+}{+}\)
5.1603E+03 \(\tfrac{+}{+}\)
1.6991E+03 \(\tfrac{+}{+}\)
2.6884E+03\(\tfrac{+}{+}\)
2.1653E+03\(\tfrac{+}{+}\)
6.3070E+03 \(\tfrac{+}{+}\)
Std
1.8958E+02
1.9591E+02
2.1764E+02
2.4466E+02
2.3141E+02
1.7719E+02
5.0020E+02
6.3625E+02
3.5870E+02
F27
Best
4.8442E+02
4.8343E+02
4.9278E+02
4.8813E+02
7.1233E+02
5.0281E+02
5.1725E+02
5.0735E+02
7.9861E+02
Mean
5.0696E+02
5.1213E+02 \(\tfrac{+}{+}\)
5.1249E+02\(\tfrac{+}{+}\)
5.1435E+02 \(\tfrac{+}{+}\)
7.9839E+02 \(\tfrac{+}{+}\)
5.1825E+02\(\tfrac{+}{+}\)
5.6510E+02\(\tfrac{+}{+}\)
5.4166E+02\(\tfrac{+}{+}\)
9.4844E+02 \(\tfrac{+}{+}\)
Std
7.6150E+00
9.2274E+00
8.2597E+00
9.0233E+00
3.7896E+01
5.5093E+00
2.1663E+01
1.4002E+01
5.8633E+01
F28
Best
4.2152E+02
4.4318E+02
4.2635E+02
4.3362E+02
1.3883E+03
4.9800E+02
5.7203E+02
5.0679E+02
1.9644E+03
Mean
4.8270E+02
5.2942E+02\(\tfrac{+}{+}\)
4.9681E+02 \(\tfrac{+}{+}\)
5.0499E+02 \(\tfrac{+}{+}\)
1.8726E+03\(\tfrac{+}{+}\)
5.8603E+02\(\tfrac{+}{+}\)
8.0383E+02\(\tfrac{+}{+}\)
6.4335E+02\(\tfrac{+}{+}\)
2.8195E+03\(\tfrac{+}{+}\)
Std
2.8929E+01
3.9435E+01
3.5040E+01
3.8577E+01
2.2905E+02
5.1760E+01
1.9375E+02
5.8528E+01
3.9500E+02
F29
Best
3.5377E+02
3.7593E+02
3.7051E+02
3.9813E+02
1.5705E+03
4.5978E+02
6.4115E+02
5.8491E+02
1.6860E+03
Mean
4.7874E+02
5.6111E+02 \(\tfrac{+}{+}\)
5.1867E+02 \(\tfrac{+}{+}\)
6.3255E+02 \(\tfrac{+}{+}\)
2.0106E+03\(\tfrac{+}{+}\)
6.0962E+02 \(\tfrac{+}{+}\)
1.0748E+03\(\tfrac{+}{+}\)
8.8459E+02\(\tfrac{+}{+}\)
2.3132E+03\(\tfrac{+}{+}\)
Std
6.6550E+01
7.7404E+01
7.2004E+01
1.4154E+02
1.6583E+02
8.2137E+01
1.9744E+02
1.2861E+02
1.8491E+02
F30
Best
2.7480E+03
5.0663E+03
3.4320E+03
3.9543E+03
1.5976E+07
1.1100E+04
4.7524E+04
2.1283E+04
1.9348E+07
Mean
6.1185E+04
5.3789E+05 \(\tfrac{+}{+}\)
1.6110E+05 \(\tfrac{+}{+}\)
4.1174E+05 \(\tfrac{+}{+}\)
5.7515E+07\(\tfrac{+}{+}\)
4.5570E+05 \(\tfrac{+}{+}\)
1.6324E+07 \(\tfrac{+}{+}\)
6.6847E+06\(\tfrac{+}{+}\)
9.5431E+07\(\tfrac{+}{+}\)
Std
1.6011E+05
1.1304E+06
4.1826E+05
8.6956E+05
1.9642E+07
9.2004E+05
2.7399E+07
1.0959E+07
2.8290E+07
Table 8
Summary results of the t-test and W-test on the 30-dimensional benchmark functions
 
          HLO  DHLO  TVMSBPSO  BWOA  BinCSA  QBPSO  IBDE  pBGSK
t-test
  Better   29    24        28    30      26     28    29     30
  Same      1     5         1     0       4      2     1      0
  Worse     0     1         1     0       0      0     0      0
  Total    29    23        27    30      26     28    29     30
W-test
  Better   30    25        27    30      27     29    30     30
  Same      0     4         2     0       3      1     0      0
  Worse     0     1         1     0       0      0     0      0
  Total    30    24        26    30      27     29    30     30
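For readers who wish to reproduce tallies of this kind, the following is a minimal sketch (not the authors' code) of how a single better/same/worse verdict in Table 8 could be derived from the 30-run error samples of HLOCC and one compared algorithm on one function. It assumes SciPy's Welch t-test and Wilcoxon rank-sum test as stand-ins for the t-test and W-test; the exact test variants, data and significance level used in the paper may differ, and the run samples below are synthetic placeholders.

```python
import numpy as np
from scipy import stats

def verdict(hlocc_runs, other_runs, pvalue, alpha=0.05):
    """'better'/'same'/'worse' from HLOCC's point of view (minimization)."""
    if pvalue >= alpha:
        return "same"
    return "better" if np.mean(hlocc_runs) < np.mean(other_runs) else "worse"

rng = np.random.default_rng(0)
hlocc = rng.normal(41.0, 10.0, 30)   # stand-in for HLOCC's 30-run errors on one function
other = rng.normal(57.0, 12.0, 30)   # stand-in for a compared algorithm on the same function

_, p_t = stats.ttest_ind(hlocc, other, equal_var=False)   # t-test column
_, p_w = stats.ranksums(hlocc, other)                     # rank-sum test as the W-test column
print(verdict(hlocc, other, p_t), verdict(hlocc, other, p_w))
```

Repeating this for every benchmark function and every compared algorithm, and counting the verdicts, yields one column of Table 8 per algorithm and test.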

The uncapacitated facility location problem

The uncapacitated facility location problem (UFLP), also known as the simple plant location problem [44] or the uncapacitated warehouse location problem [45], is a well-known binary optimization problem in operations research. Because it has only binary decision variables, it is well suited to analyzing and comparing the performance of binary optimization algorithms. In its basic form, the UFL problem consists of m customers and n candidate facilities [46]. Two types of cost are involved: a fixed cost for opening a facility at a potential location and the transportation cost between each customer location and each facility. The aim is to determine the facility locations that minimize the total opening and transportation costs while meeting the demands of all customers [47]; each binary decision variable indicates whether the corresponding facility is open or closed. The general model of the UFL problem can be stated mathematically as follows [48]:
$$ \min f = \sum\limits_{i = 1}^{n} {\sum\limits_{j = 1}^{m} {c_{ij} x_{ij} } } + \sum\limits_{i = 1}^{n} {f_{i} y_{i} } $$
(12)
subject to:
$$ \begin{gathered} \sum\limits_{i = 1}^{n} {x_{ij} = 1,\;j = 1,2, \ldots ,m} \hfill \\ x_{ij} - y_{i} \le 0,\;i = 1,2, \ldots ,n,j = 1,2, \ldots ,m \hfill \\ x_{ij} \in \left\{ {0,1} \right\},\;i = 1,2, \ldots ,n,j = 1,2, \ldots ,m \hfill \\ y_{i} \in \left\{ {0,1} \right\},\;i = 1,2, \ldots ,n \hfill \\ \end{gathered} $$
(13)
where \(c_{ij}\) is the transportation cost between facility location i and customer location j; \(f_{i}\) is the cost of opening a facility at location i; and \(x_{ij}\) and \(y_{i}\) are binary decision variables. If customer j is served by the facility at location i, \(x_{ij}\) is set to 1; otherwise it is 0. Likewise, \(y_{i}\) is 1 if a facility is opened at location i and 0 otherwise.
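To make the encoding concrete, the following is a minimal sketch (with a made-up toy instance; it is not the authors' implementation) of evaluating a candidate solution. Once the facility-opening vector y is fixed, the optimal assignment \(x_{ij}\) simply serves each customer from its cheapest open facility, which gives the objective value of Eq. (12).

```python
import numpy as np

def uflp_cost(y, fixed_cost, trans_cost):
    """Objective of Eq. (12) for a fixed opening vector y.
    y: (n,) array of 0/1 facility-opening decisions (y_i)
    fixed_cost: (n,) opening costs f_i
    trans_cost: (n, m) transportation costs c_ij"""
    open_idx = np.flatnonzero(y)
    if open_idx.size == 0:              # no open facility -> infeasible solution
        return np.inf
    # With y fixed, each customer is served by its cheapest open facility.
    service_cost = trans_cost[open_idx, :].min(axis=0).sum()
    return service_cost + fixed_cost[open_idx].sum()

# hypothetical toy instance: 3 candidate facilities, 4 customers
f = np.array([10.0, 12.0, 8.0])
c = np.array([[2.0, 5.0, 4.0, 1.0],
              [3.0, 1.0, 2.0, 6.0],
              [4.0, 4.0, 5.0, 3.0]])
print(uflp_cost(np.array([1, 0, 1]), f, c))   # 18 (opening) + 11 (service) = 29.0
```

A binary metaheuristic such as HLOCC only needs to search over the opening vector y; the assignment step inside the evaluation handles the \(x_{ij}\) variables implicitly.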
To investigate the effectiveness of the proposed algorithm, two benchmark data sets are used to evaluate HLOCC: the ORLIB library with 15 benchmark instances [49] and the M* data set taken from [50]. The optimal costs of the two benchmark sets are listed in Tables 9 and 10, respectively. Six performance measures are used to validate the effectiveness of HLOCC: best, mean, worst, standard deviation, gap, and hit. Hit is the number of runs in which the optimal result was achieved. Gap measures the optimization error and is calculated as in Eq. (14).
$$ {\text{gap}} = \frac{{f^{{{\text{mean}}}} - f^{{{\text{opt}}}} }}{{f^{{{\text{opt}}}} }} \times 100 $$
(14)
where \(f^{{{\text{mean}}}}\) is the average cost over all experimental runs and \(f^{{{\text{opt}}}}\) is the cost of the optimal solution to the problem.
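As a quick illustration of Eq. (14), the snippet below uses the CapB optimum from Table 9 together with a hypothetical mean cost (chosen only for the example):

```python
def gap(f_mean, f_opt):
    """Percentage error of the average result relative to the optimum, Eq. (14)."""
    return (f_mean - f_opt) / f_opt * 100.0

# f_opt for CapB from Table 9; the mean value here is hypothetical
print(gap(12_980_000.00, 12_979_071.58))   # ~0.0072, i.e. about 0.0072% above the optimum
```

A gap of 0 therefore means that the average result over all runs equals the known optimum.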
Table 9
Optimum results of ORLIB problems
Instance
Size of the problem
Optimal cost
Size
Cap71
16 × 50
932,615.75
Small
Cap72
16 × 50
977,799.40
Small
Cap73
16 × 50
1,010,641.45
Small
Cap74
16 × 50
1,034,976.98
Small
Cap101
25 × 50
796,648.44
Medium
Cap102
25 × 50
854,704.20
Medium
Cap103
25 × 50
893,782.11
Medium
Cap104
25 × 50
928,941.75
Medium
Cap131
50 × 50
793,439.56
Large
Cap132
50 × 50
851,495.33
Large
Cap133
50 × 50
893,076.71
Large
Cap134
50 × 50
928,941.75
Large
CapA
100 × 100
17,156,454.48
Huge
CapB
100 × 100
12,979,071.58
Huge
CapC
100 × 100
11,505,594.33
Huge
Table 10
Optimal results of the M* instances
Instance
Size
Optimal cost
Size
MO1
100 × 100
1305.95
Low-scaled
MO2
100 × 100
1432.36
Low-scaled
MO3
100 × 100
1516.77
Low-scaled
MO4
100 × 100
1442.24
Low-scaled
MO5
100 × 100
1408.77
Low-scaled
MP1
200 × 200
2686.48
Middle-scaled
MP2
200 × 200
2904.86
Middle-scaled
MP3
200 × 200
2623.71
Middle-scaled
MP4
200 × 200
2938.75
Middle-scaled
MP5
200 × 200
2932.33
Middle-scaled
MQ1
300 × 300
4091.01
Middle-scaled
MQ2
300 × 300
4028.33
Middle-scaled
MQ3
300 × 300
4275.43
Middle-scaled
MQ4
300 × 300
4235.15
Middle-scaled
MQ5
300 × 300
4080.74
Middle-scaled
MR1
500 × 500
2608.15
Large-scaled
MR2
500 × 500
2654.73
Large-scaled
MR3
500 × 500
2788.25
Large-scaled
MR4
500 × 500
2756.04
Large-scaled
MR5
500 × 500
2505.05
Large-scaled
To check the robustness and efficiency of HLOCC, it is compared with state-of-the-art algorithms. For a fair comparison, the population size, the number of runs, and the maximum number of iterations are set identically for all compared methods. Following [40], the population size is set to 400 and the maximum number of iterations to 80,000. The results on the 15 ORLIB UFL problems and the M* instances are obtained from 30 independent runs and are shown in Tables 11 and 12, respectively.
Table 11
Results of the HLOCC on ORLIB
Problem
Optimal
Best
Worst
Mean
Std
Gap
Hit
Cap71
932,615.75
932,615.75
932,615.75
932,615.75
0.00000
0.00000
30
Cap72
977,799.40
977,799.40
977,799.40
977,799.40
0.00000
0.00000
30
Cap73
1,010,641.45
1,010,641.45
1,010,641.45
1,010,641.45
0.00000
0.00000
30
Cap74
1,034,976.98
1,034,976.98
1,034,976.98
1,034,976.98
0.00000
0.00000
30
Cap101
796,648.44
796,648.44
796,648.44
796,648.44
0.00000
0.00000
30
Cap102
854,704.20
854,704.20
854,704.20
854,704.20
0.00000
0.00000
30
Cap103
893,782.11
893,782.11
893,782.11
893,782.11
0.00000
0.00000
30
Cap104
928,941.75
928,941.75
928,941.75
928,941.75
0.00000
0.00000
30
Cap131
793,439.56
793,439.56
793,439.56
793,439.56
0.00000
0.00000
30
Cap132
851,495.33
851,495.33
851,495.33
851,495.32
0.00000
0.00000
30
Cap133
893,076.71
893,076.71
893,076.71
893,076.71
0.00000
0.00000
30
Cap134
928,941.75
928,941.75
928,941.75
928,941.75
0.00000
0.00000
30
CapA
17,156,454.48
17,156,454.48
17,156,454.48
17,156,454.48
0.00000
0.00000
30
CapB
12,979,071.58
12,979,071.58
12,979,071.58
12,979,071.58
0.00000
0.00000
30
CapC
11,505,594.33
11,505,594.33
11,505,594.33
11,505,594.33
0.00000
0.00000
30
Table 12
Results of the HLOCC on M*
Problem
Optimal
Best
Worst
Mean
Std
Gap
Hit
MO1
1305.95
1305.95
1305.95
1305.95
0.0000
0.00000
30
MO2
1432.36
1432.36
1432.36
1432.36
0.0000
0.00000
30
MO3
1516.77
1516.77
1516.77
1516.77
0.0000
0.00000
30
MO4
1442.24
1442.24
1442.24
1442.24
0.0000
0.00000
30
MO5
1408.77
1408.77
1408.77
1408.77
0.0000
0.00000
30
MP1
2686.48
2686.48
2686.48
2686.48
0.0000
0.00000
30
MP2
2904.86
2904.86
2904.86
2904.86
0.0000
0.00000
30
MP3
2623.71
2623.71
2623.71
2623.71
0.0000
0.00000
30
MP4
2938.75
2938.75
2942.63
2939.01
0.9844
0.00009
28
MP5
2932.33
2932.33
2932.33
2932.33
0.0000
0.00000
30
MQ1
4091.01
4091.01
4091.01
4091.01
0.0000
0.00000
30
MQ2
4028.33
4028.33
4028.33
4028.33
0.0000
0.00000
30
MQ3
4275.43
4275.43
4275.43
4275.43
0.0000
0.00000
30
MQ4
4235.15
4235.15
4235.15
4235.15
0.0000
0.00000
30
MQ5
4080.74
4080.74
4104.57
4085.68
9.6534
0.00121
23
MR1
2608.15
2608.15
2609.08
2608.43
0.4335
0.00011
21
MR2
2654.73
2654.73
2654.73
2654.73
0.0000
0.00000
30
MR3
2788.25
2788.25
2792.83
2788.40
0.8362
0.00005
29
MR4
2756.04
2756.04
2756.04
2756.04
0.0000
0.00000
30
MR5
2505.05
2505.05
2505.05
2505.05
0.0000
0.00000
30

A comparison of HLOCC with HLO and PSO variants

To evaluate the effect of the proposed algorithm, performance comparisons with binary variants of HLO and PSO are performed. To provide a fair comparison, the population size and the maximum number of iterations of HLO, DHLO, TVMS-BPSO and QBPSO are set to the same values as for the proposed algorithm, and the results of BPSO and IBPSO are taken from the study by Kiran [51]. The standard deviations and gap values of the algorithms are presented in Table 13, together with the win/draw/lose counts obtained by comparing HLOCC with each algorithm according to the gap scores. It can be seen from Table 13 that HLOCC outperforms the binary HLO and PSO variants and shows superior performance on the 15 UFL problems: it reaches the optimal value in every run for all problems. In contrast, IBPSO cannot find the optimal value in all runs for any problem, while BPSO and QBPSO reach the optimum in all runs for 2 and 6 problems, respectively. DHLO and TVMSBPSO obtain the optimal solution in all 30 runs on every instance except CapB and CapC. According to the gap values, HLO is the most competitive of the compared algorithms, as it attains the optimum on 14 of the 15 UFL problems.
Table 13
Comparison of HLOCC with HLO and PSO variants
Problem
 
HLO
DHLO
TVMSBPSO
QBPSO
BPSO
IBPSO
HLOCC
Cap71
Std
0.000
0.000
0.000
0.000
0.000
587.49
0.000
Gap
0.000
0.000
0.000
0.000
0.000
0.037
0.000
Cap72
Std
0.000
0.000
0.000
0.000
0.000
1844.640
0.000
Gap
0.000
0.000
0.000
0.000
0.000
0.275
0.000
Cap73
Std
0.000
0.000
0.000
0.000
634.62
1513.780
0.000
Gap
0.000
0.000
0.000
0.000
0.024
0.198
0.000
Cap74
Std
0.000
0.000
0.000
0.000
500.27
4426.670
0.000
Gap
0.000
0.000
0.000
0.000
0.009
0.403
0.000
Cap101
Std
0.000
0.000
0.000
157.066
566.44
3799.520
0.000
Gap
0.000
0.000
0.000
3.599E−5
0.046
0.597
0.000
Cap102
Std
0.000
0.000
0.000
0.000
386.76
3249.380
0.000
Gap
0.000
0.000
0.000
0.000
0.015
0.732
0.000
Cap103
Std
0.000
0.000
0.000
260.794
485.26
4978.980
0.000
Gap
0.000
0.000
0.000
9.286E−5
0.042
0.641
0.000
Cap104
Std
0.000
0.000
0.000
0.000
1951.810
10,845.260
0.000
Gap
0.000
0.000
0.000
0.000
0.081
0.996
0.000
Cap131
Std
0.000
0.000
0.000
386.936
1207.630
4244.290
0.000
Gap
0.000
0.000
0.000
2.891E−4
0.132
2.424
0.000
Cap132
Std
0.000
0.000
0.000
151.730
1196.190
11,569.020
0.000
Gap
0.000
0.000
0.000
3.253E−5
0.091
3.601
0.000
Cap133
Std
0.000
0.000
0.000
532.855
821.28
14,905.270
0.000
Gap
0.000
0.000
0.000
6.875E−4
0.112
5.263
0.000
Cap134
Std
0.000
0.000
0.000
1031.941
2285.420
15,788.860
0.000
Gap
0.000
0.000
0.000
2.217E−4
0.135
7.634
0.000
CapA
Std
0.000
0.000
0.000
249,690.834
374,302.810
3,357,138.190
0.000
Gap
0.000
0.000
0.000
0.011
2.179
137.886
0.000
CapB
Std
0.000
23,883.097
40,589.745
133,805.811
176,206.070
1,406,575.700
0.000
Gap
0.000
6.031E-4
0.0026
0.015
1.949
55.270
0.000
CapC
Std
1877.159
1898.761
24,853.356
95,822.302
92,977.850
1,245,252.200
0.000
Gap
1.965E−4
1.855E−4
0.0011
0.012
1.487
45.566
0.000
W/D/L
 
1/14/0
2/13/0
2/13/0
9/6/0
13/2/0
15/0/0
 

A comparison of HLOCC with the DE and GA variants

Table 14 shows the gap and hit values of HLOCC and the binary DE and GA variants on the 15 UFL problems. The results of the binary DE variants DisDE/rand [52] and BinDE [53] and of the GA variants GA-SP, GA-TP, GA-UP and GA-EC are taken directly from [46]. For a fair comparison, the population size and the maximum number of iterations of IBDE [42] are set to the same values as for the proposed algorithm. As Table 14 shows, HLOCC remains the best of all the algorithms, since it finds the optimal value for every instance in all 30 runs. It is worth noting that although IBDE was not designed for the UFLP, it performs better than the other binary DE variants.
Table 14
Comparison of HLOCC with the GA and DE variants
Problem
DisDE/rand
BinDE
IBDE
GA-SP
GA-TP
GA-UP
GA-EC
HLOCC
Gap
Hit
Gap
Hit
Gap
Hit
Gap
Hit
Gap
Hit
Gap
Hit
Gap
Hit
Gap
Hit
Cap71
0
30
0
30
0
30
0
30
0
30
0
30
0
30
0
30
Cap72
0
30
0
30
0
30
0
30
0
30
0
30
0
30
0
30
Cap73
0
30
0
30
0
30
0.0666
19
0.0484
22
0.0424
23
0
30
0
30
Cap74
0
30
0
30
0
30
0
30
0
30
0
30
0
30
0
30
Cap101
0.036
29
0
30
0
30
0.0684
11
0.0648
12
0.0576
14
0.0072
28
0
30
Cap102
0.0049
29
0
30
0
30
0
30
0
30
0
30
0
30
0
30
Cap103
0.0055
27
0
30
0
30
0.0637
6
0.0612
10
0.0722
9
0.0067
22
0
30
Cap104
0
30
0
30
0
30
0
30
0
30
0
30
0
30
0
30
Cap131
0.0036
29
0.0036
29
0.0002
24
0.0681
16
0.0723
14
0.0536
15
0.0608
15
0
30
Cap132
0
30
0.0050
29
0
30
0
30
0
30
0.0026
29
0.0006
29
0
30
Cap133
0.0138
25
0.0138
24
0.00005
25
0.0913
10
0.0744
12
0.0820
9
0.0406
15
0
30
Cap134
0
30
0
30
0
30
0
30
0
30
0
30
0
30
0
30
CapA
0.0370
29
1.3000
8
0.0203
3
0.0461
24
0.2835
24
0.0604
24
0
30
0
30
CapB
0.1890
18
1.5200
0
0.0123
1
0.5839
9
0.6507
11
0.9905
3
0.4092
11
0
30
CapC
0.0909
8
1.5500
0
0.0137
0
0.7049
2
0.6276
0
0.6345
0
0.1563
5
0
30
W/D/L
7/8/0
9/6/0
10/5/0
8/7/0
7/8/0
10/5/0
7/8/0
 

A comparison of HLOCC with other state-of-the-art methods

ISS [46], JayaX [54], BinEHO [55], BinSSA [56] and BinCSA [40] have recently been proposed in the literature for solving the UFLP. Reference [54] also presents a variant named JayaX-LSM, which combines the binary operations with a local search mechanism and yields better-quality solutions than JayaX; therefore, the results of JayaX-LSM are used for comparison in this paper. The experimental results of all algorithms except HLOCC are taken directly from [40]. The standard deviations and gap values are presented in Table 15, with the best gap value for each instance shown in bold. All algorithms except BinEHO achieve the optimal values on the small, medium and large instances in all runs. Table 15 also reports the win/draw/lose counts obtained by one-to-one comparisons of HLOCC with the other methods.
Table 15
Comparison of HLOCC with state-of-the-art methods
Problem
ISS
JayaX-LSM
BinEHO
BinSSA(Sim&Logic)
BinCSA
HLOCC
Std
Gap
Std
Gap
Std
Gap
Std
Gap
Std
Gap
Std
Gap
Cap71
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap72
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap73
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap74
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap101
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap102
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap103
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap104
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap131
0.0000
0.000
0.0000
0.000
262.50
0.011
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap132
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap133
0.0000
0.000
0.0000
0.000
426.68
0.030
0.0000
0.000
0.0000
0.000
0.00000
0.0000
Cap134
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.0000
0.000
0.00000
0.0000
CapA
0.0000
0.000
0.0000
0.000
46,897.94
0.050
0.0000
0.000
0.0000
0.000
0.00000
0.0000
CapB
56,962.14
0.255
27,033.02
0.079
66,093.24
0.436
36,156.14
0.255
3107.37
0.006
0.00000
0.0000
CapC
41,843.24
0.199
5455.9
0.022
31,053.20
0.239
27,598.80
0.434
1815.46
0.021
0.00000
0.0000
W/D/L
2/13/0
2/13/0
5/10/0
2/13/0
2/13/0
 
It can be seen that HLOCC is the best method on the UFLP: it outperforms ISS, JayaX-LSM, BinEHO, BinSSA(Sim&Logic) and BinCSA, achieving the optimal value in every run on all 15 UFL problems (see Table 11).
To evaluate the overall performance of all algorithms on the UFL problems, the Friedman rank test is performed on the gap results reported in Tables 13, 14 and 15. The Friedman ranks and the final ranks of the nineteen algorithms are presented in Table 16; an illustrative sketch of this average-rank computation is given after the table. HLOCC ranks first among the 19 algorithms, so it can be concluded that HLOCC is an efficient method for solving the UFL problem.
Table 16
Results of Friedman rank test for the UFL problem
Algorithm            Friedman rank  Final rank
HLO                          6.533           2
DHLO                         6.700           3
TVMSBPSO                     6.833           4
QBPSO                       10.000          11
BPSO                        16.200          18
IBPSO                       19.000          19
DisDE/rand                  10.467          12
BinDE                       11.033          14
IBDE                         8.100           9
GA-SP                       12.667          16
GA-TP                       12.533          15
GA-UP                       13.067          17
GA-EC                       10.633          13
ISS                          7.733           7
JayaX-LSM                    7.367           6
BinEHO                       9.700          10
BinSSA(Sim&Logic)            7.867           8
BinCSA                       7.100           5
HLOCC                        6.467           1
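The sketch below illustrates how average ranks of this kind can be computed from a matrix of gap values. The three algorithms and the gap entries are a small hypothetical excerpt in the spirit of Tables 13-15, not the full data; SciPy's rankdata (which assigns tied entries their average rank) is assumed.

```python
import numpy as np
from scipy.stats import rankdata

algorithms = ["HLO", "QBPSO", "HLOCC"]
# rows = problems, columns = algorithms; a smaller gap is better
gaps = np.array([[0.000,   0.000, 0.000],    # e.g. Cap71
                 [0.000, 3.60e-5, 0.000],    # e.g. Cap101
                 [1.97e-4, 0.012, 0.000]])   # e.g. CapC

ranks = np.vstack([rankdata(row) for row in gaps])   # per-problem ranks, ties averaged
avg_rank = ranks.mean(axis=0)                        # quantity reported as the Friedman rank
for name, r in zip(algorithms, avg_rank):
    print(f"{name}: {r:.3f}")
```

The algorithm with the smallest average rank over all problems receives the best final rank, which is how HLOCC ends up first in Table 16.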

Results on M*

This section presents the results of HLOCC on the M* benchmark instances and compares it with well-known heuristic algorithms that have been applied to these instances. For a fair comparison, all experiments are performed under the same parameters and conditions, and the number of runs is set to 100 for all algorithms. The results of HLOCC on M* are shown in Table 12. The experimental results of the compared algorithms on the M* instances are taken directly from [40, 46, 56] and listed in Table 17, while Table 18 compares TS1, TS3 and HLOCC on the M* problems. HLOCC is a very efficient method on the M* instances: although the MR* instances are the most challenging in this study, HLOCC still reaches the optimal value in every run on MR2, MR4 and MR5. Over all runs, ISS and PLS obtain the optimal solution on 8 of the 20 M* instances, LS on only 7 of 20, and BinSSA(Sim&Logic) and BinCSA on 10 and 12 of 20, respectively. HLOCC and TS3 both succeed on 16 of the 20 instances, and TS1 on 18 of the 20 M* data sets. Figure 7 shows the Friedman average rankings based on the average gap scores; according to these rankings, HLOCC is the best of the eight methods. Although TS1 attains the most optimal values among the 20 instances, it ranks only second in the Friedman ranking of Fig. 7, mainly because the gap values of HLOCC are smaller, indicating the better robustness of the proposed algorithm.
Table 17
Comparison of HLOCC with ISS, PLS, LS BinSSA(Sim&Logic) and BinCSA
Problem
ISS
PLS
LS
BinSSA(Sim&Logic)
BinCSA
HLOCC
Mean
Gap
Mean
Gap
Mean
Gap
Mean
Gap
Mean
Gap
Mean
Gap
MO1
1305.95
0.000
1305.95
0.000
1305.95
0.000
1305.95
0.000
1305.95
0.000
1305.95
0.00000
MO2
1432.36
0.000
1432.49
0.009
1432.49
0.009
1432.36
0.000
1432.36
0.000
1432.36
0.00000
MO3
1516.77
0.000
1520.27
0.230
1520.27
0.230
1516.77
0.000
1516.77
0.000
1516.77
0.00000
MO4
1442.24
0.000
1442.24
0.000
1442.24
0.000
1442.24
0.000
1442.24
0.000
1442.24
0.00000
MO5
1408.77
0.000
1409.17
0.028
1409.17
0.029
1408.77
0.000
1408.77
0.000
1408.77
0.00000
MP1
2686.66
0.007
2688.50
0.075
2688.50
0.075
2687.66
0.044
2686.66
0.007
2686.48
0.00000
MP2
2904.86
0.000
2904.86
0.000
2904.86
0.000
2904.86
0.000
2904.86
0.000
2904.86
0.00000
MP3
2624.34
0.024
2624.77
0.040
2624.77
0.040
2624.00
0.011
2623.71
0.000
2623.71
0.00000
MP4
2940.80
0.069
2939.53
0.026
2939.53
0.026
2940.50
0.060
2938.83
0.003
2939.01
0.00009
MP5
2932.60
0.009
2933.46
0.038
2933.46
0.038
2932.50
0.006
2932.33
0.000
2932.33
0.00000
MQ1
4091.01
0.000
4,091.01
0.000
4091.01
0.000
4091.01
0.000
4091.01
0.000
4091.01
0.00000
MQ2
4030.08
0.043
4,028.33
0.000
4028.33
0.000
4028.33
0.000
4028.33
0.000
4028.33
0.00000
MQ3
4275.43
0.000
4,275.43
0.000
4275.43
0.000
4275.43
0.000
4275.43
0.000
4275.43
0.00000
MQ4
4236.46
0.031
4,235.47
0.008
4235.47
0.007
4236.46
0.031
4235.15
0.000
4235.15
0.00000
MQ5
4095.46
0.360
4,086.53
0.142
4086.53
0.141
4086.45
0.140
4087.95
0.177
4085.68
0.00121
MR1
2647.03
1.490
2,608.24
0.004
2608.24
0.003
2610.24
0.080
2619.04
0.418
2608.43
0.00011
MR2
2691.54
1.386
2,654.73
0.000
2654.73
0.003
2655.73
0.038
2718.65
2.408
2654.73
0.00000
MR3
2832.33
1.581
2,789.04
0.028
2789.04
0.028
2790.14
0.068
2791.34
0.111
2788.40
0.00005
MR4
2807.90
1.881
2,756.04
0.000
2756.04
0.000
2756.04
0.000
2785.53
1.070
2756.04
0.00000
MR5
2549.97
1.793
2,505.48
0.017
2505.48
0.017
2505.40
0.014
2510.50
0.218
2505.05
0.00000
Table 18
The comparison results of TS1, TS3 and HLOCC
Problem    TS1 Gap    TS3 Gap    HLOCC Gap
MO1         0.0030      0.020      0.00000
MO2         0.0000     0.0000      0.00000
MO3         0.0018     0.0000      0.00000
MO4         0.0000     0.0040      0.00000
MO5         0.0000     0.0000      0.00000
MP1         0.0000     0.0000      0.00000
MP2         0.0000     0.0000      0.00000
MP3         0.0000     0.0000      0.00000
MP4         0.0000     0.0000      0.00009
MP5         0.0000     0.0000      0.00000
MQ1         0.0000     0.0000      0.00000
MQ2         0.0000     0.0020      0.00000
MQ3         0.0000     0.0000      0.00000
MQ4         0.0000     0.0000      0.00000
MQ5         0.0000     0.0000      0.00121
MR1         0.0000     0.0000      0.00011
MR2         0.0000     0.0000      0.00000
MR3         0.0000     0.0000      0.00005
MR4         0.0000     0.0000      0.00000
MR5         0.0000     0.0040      0.00000

Conclusions and future works

Humans interact and share information with each other through competition and cooperation, which effectively enhances learning performance. Motivated by this, a novel human learning optimization algorithm with competitive and cooperative learning (HLOCC) is proposed, in which a simple yet powerful competitive and cooperative learning operator (CCLO) is designed to improve the performance of HLO. The role and function of the developed operator are then analyzed and discussed based on the variation of its control parameter. The analysis shows that the CCLO helps the algorithm explore the search space and maintain diversity in the early stage of the search, and then find the optimal values more efficiently and accelerate convergence in the later iterations. Finally, the proposed HLOCC is applied to the CEC2017 benchmark functions and the UFL problems to evaluate its performance. The experimental results demonstrate that HLOCC achieves superior performance thanks to its improved exploration and exploitation abilities.
The social nature of human beings gives rise to a wide range of interactive behaviors, many of which are beneficial to human learning. Competition and cooperation exist not only among individuals but also among different social groups. Therefore, future research will focus on the relationship between competition and cooperation among different groups, and on designing corresponding operators for HLO to further improve its performance.

Acknowledgements

This work is supported by the National Key Research and Development Program of China No. 2019YFB1405500, and 111 Project under Grant No. D18003.

Declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Roberts-Mahoney H (2016) Netflixing human capital development: personalized learning technology and the corporatization of K-12 education. J Educ Policy 31(4):1–16
2. Ling W, Ni H, Yang R, Fei M, Wei Y (2014) A simple human learning optimization algorithm: computational intelligence. Netw Syst Appl 2014:56–67
3. Yang R, Fei M, Du X, Wang L, Pardalos P (2015) An adaptive simplified human learning optimization algorithm. Inf Sci Int J 320:126–139
4. Holden W (2010) The bell curve: intelligence and class structure in American life. Transform Anthropol 6(5):87–89
5. Wang L, An L, Pi J, Fei M, Pardalos PM (2017) A diverse human learning optimization algorithm. J Glob Optim 67(1–2):1–41
6. Yang R, Xu M, He J, Ranshous S, Samatova NF (2017) An intelligent weighted fuzzy time series model based on a sine-cosine adaptive human learning optimization algorithm and its application to financial markets forecasting, pp 595–607
7. Ling W, Ji P, Wen Y, Pi J, Fei M, Pardalos PM (2018) An improved adaptive human learning algorithm for engineering optimization. Appl Soft Comput 71:393
8. Ye W, Yang R, Fei M, Pardalos P, Ni M (2015) A human learning optimization algorithm and its application to multi-dimensional knapsack problems. Appl Soft Comput 34:736–746
9. Shoja A, Molla-Alizadeh-Zavardehi S, Niroomand S (2020) Hybrid adaptive simplified human learning optimization algorithms for supply chain network design problem with possibility of direct shipment. Appl Soft Comput 96:106594
10. Ding H, Gu X (2020) Hybrid of human learning optimization algorithm and particle swarm optimization algorithm with scheduling strategies for the flexible job-shop scheduling problem. Neurocomputing 414:313–332
11. Wang L, Pei J, Menhas MI, Pi J, Fei M, Pardalos PM (2017) A Hybrid-coded Human Learning Optimization for mixed-variable optimization problems. Knowl-Based Syst 127:114–125
12. Li X, Yao J, Wang L, Menhas MI (2017) Application of human learning optimization algorithm for production scheduling optimization. In: Advanced computational methods in life system modeling and simulation. Springer, Singapore, pp 242–252
13. Alguliyev R, Aliguliyev R, Isazade N (2016) A sentence selection model and HLO algorithm for extractive text summarization. In: 2016 IEEE 10th international conference on application of information and communication technologies (AICT), pp 1–4
14. Cao J, Yan Z, Xu X, He G, Huang S (2016) Optimal power flow calculation in AC/DC hybrid power system based on adaptive simplified human learning optimization algorithm. J Modern Power Syst Clean Energy 4(4):690–701
15. Cao J, Yan Z, He G (2016) Application of multi-objective human learning optimization method to solve AC/DC multi-objective optimal power flow problem. Int J Emerg Electr Power Syst 17:327–337
16. Bhandari AK, Kumar IV (2019) A context sensitive energy thresholding based 3D Otsu function for image segmentation using human learning optimization. Appl Soft Comput 82:105570
17. Zhi H, Hu Q, Ling W, Menhas MI, Fei M (2018) Water level control of nuclear power plant steam generator based on intelligent virtual reference feedback tuning. Springer, Singapore
18. Wen Y, Wang L, Peng W, Menhas MI, Qian L (2018) Application of intelligent virtual reference feedback tuning to temperature control in a heat exchanger. Intell Comput Internet Things 2018:311–320
19. Menhas MI, Wang L, Ayesha N, Qadeer N, Waris M, Manzoor S, Fei M (2020) Continuous human learning optimizer based PID controller design of an automatic voltage regulator system, pp 148–153
20. Jarecki JB, Meder B, Nelson JD (2017) Naïve and robust: class-conditional independence in human classification learning. Cogn Sci 42:3
21. Ning N, Wang J, Lin Z, Zheng Z (2017) The direct and moderating effect of learning orientation on individual performance in the banking industry in China: contextualization of high-performance work systems. Asia Pac J Hum Resourc 56:3
22. Boyd R, Richerson PJ, Henrich J (2011) The cultural niche: why social learning is essential for human adaptation. Proc Natl Acad Sci USA 108(25):10918–10925
23. Yang X (2010) A new metaheuristic bat-inspired algorithm. In: Nature inspired cooperative strategies for optimization (NICSO)
24. Johnson D, Johnson R (1999) Making cooperative learning work. Theory Pract 38:67–73
25. Burton-Chellew MN, Ross-Gillespie A, West SA (2010) Cooperation in humans: competition between groups and proximate emotions. Evol Hum Behav 31(2):104–108
26. Adamik AI (2008) Creating of competitive advantage based on cooperation. Wydawnictwo Politechniki Łódzkiej
27. Johnson DW (1991) Cooperative learning: increasing college faculty instructional productivity. In: ASHE-ERIC Higher Education Report No. 4, 1991. ASHE-ERIC Higher Education Reports, George Washington University, One Dupont Circle, Suite 630, Washington, DC
28. Cziko G, Gary H (1996) Without miracles: universal selection theory and the second Darwinian revolution
29. Mesoudi A, Chang L, Dall SR, Thornton A (2016) The evolution of individual and cultural variation in social learning. Trends Ecol Evol 31(3):215–225
30. Heled K et al (2016) Psychological capital as a team phenomenon: mediating the relationship between learning climate and outcomes at the individual and team levels. J Positive Psychol 11(3):303–314
31. Decety J, Jackson PL, Sommerville JA, Chaminade T, Meltzoff AN (2004) The neural bases of cooperation and competition: an fMRI investigation. Neuroimage 23(2):744–751
32. Elliott E, Kiel LD (2002) Exploring cooperation and competition using agent-based modeling. Proc Natl Acad Sci 99(suppl 3):7193–7194
33. Johnson DW, Johnson RT (1989) Cooperation and competition: theory and research. Interaction Book Company, London
34. Anderson JR (2006) On cooperative and competitive learning in the management classroom
35. Over H, Mccall C (2018) Becoming us and them: social learning and intergroup bias. Soc Person Psychol Compass 12(4):e12384
36. Felipe NS, Csaszar A (2009) How much to copy? Determinants of effective imitation breadth. Org Sci 21(3):661–667
37. Wu G, Mallipeddi R, Suganthan P (2016) Problem definitions and evaluation criteria for the CEC 2017 competition and special session on constrained single objective real-parameter optimization
38. Beheshti Z (2020) A time-varying mirrored S-shaped transfer function for binary particle swarm optimization. Inf Sci 512:1503–1542
39. Hussien AG, Hassanien AE, Houssein EH, Bhattacharyya S, Amin M (2019) S-shaped Binary Whale optimization algorithm for feature selection
40. Sonuç E (2021) Binary crow search algorithm for the uncapacitated facility location problem. Neural Comput Appl 33(21):14669–14685
41. Jordehi AR (2019) Binary particle swarm optimisation with quadratic transfer function: a new binary optimisation algorithm for optimal scheduling of appliances in smart homes. Appl Soft Comput 78:465–480
42. Qian S, Ye Y, Liu Y, Xu G (2017) An improved binary differential evolution algorithm for optimizing PWM control laws of power inverters. Optim Eng 19(2):1–26
43. Agrawal P, Ganesh T, Mohamed AW (2021) A novel binary gaining–sharing knowledge-based optimization algorithm for feature selection. Neural Comput Appl 33(11):5989–6008
44. Goldengorin B, Ghosh D, Sierksma G (2001) Branch and peg algorithms for the simple plant location problem. In: Research report
45. Cura T (2010) A parallel local search approach to solving the uncapacitated warehouse location problem. Comput Ind Eng 59(4):1000–1009
46. Hh A, Zo B (2019) An improved scatter search algorithm for the uncapacitated facility location problem. Comput Ind Eng 135:855–867
47. Kran MS, Gündüz M (2012) XOR-based artificial bee colony algorithm for binary optimization. Turk J Electr Eng Comput Sci 21(10):2307–2328
48. Kashan MH, Nahavandi N, Kashan AH (2012) DisABC: a new artificial bee colony algorithm for binary optimization. Appl Soft Comput 12(1):342–352
49. Beasley JE (1990) OR-Library: distributing test problems by electronic mail. J Oper Res Soc 41(11):1069–1072
50. Kratica J, Tošic D, Filipović V, Ljubić I (2002) Solving the simple plant location problem by genetic algorithm. RAIRO Oper Res 35(1):127–142
51. Kiran MS, Gündüz M (2013) XOR-based artificial bee colony algorithm for binary optimization. Turk J Electr Eng Comput Sci 21(2):2307–2328
52. Kashan MH, Kashan AH, Nahavandi N (2013) A novel differential evolution algorithm for binary optimization. Comput Optim Appl 55(2):481–513
53. Engelbrecht AP, Pampara G (2007) Binary differential evolution strategies
54. Aslan M, Gunduz M, Kiran MS (2019) JayaX: Jaya algorithm with xor operator for binary optimization. Appl Soft Comput 82:105576
55. Hakli H (2020) BinEHO: a new binary variant based on elephant herding optimization algorithm. Neural Comput Appl 32(22):16971–16991
56. Baş E, Ülker E (2020) A binary social spider algorithm for uncapacitated facility location problem. Expert Syst Appl 161:113618
Metadata
Title
A human learning optimization algorithm with competitive and cooperative learning
Authors
JiaoJie Du
Ling Wang
Minrui Fei
Muhammad Ilyas Menhas
Publication date
04.08.2022
Publisher
Springer International Publishing
Published in
Complex & Intelligent Systems / Issue 1/2023
Print ISSN: 2199-4536
Electronic ISSN: 2198-6053
DOI
https://doi.org/10.1007/s40747-022-00808-4
