Abstract

A mobile sensor network can sense and collect data about monitored objects in real time within its monitoring area. However, the collected information is meaningful only if the locations of the nodes are known. This paper optimizes Monte Carlo Localization (MCL), a positioning technique for mobile sensors. In recent years, the rapid development of heuristic algorithms has provided solutions to many complex problems. This paper combines the compact strategy with the adaptive particle swarm optimization algorithm and proposes a compact adaptive particle swarm optimization (cAPSO) algorithm. The compact strategy replaces the specific position of each particle with the distribution probability of the particle swarm, which greatly reduces memory usage. The performance of cAPSO is tested on the 28 benchmark functions of CEC2013 and compared with existing heuristic algorithms, showing that cAPSO performs better. Finally, cAPSO is applied to MCL to improve the accuracy of node localization; compared with other heuristic algorithms, cAPSO achieves higher localization accuracy.

1. Introduction

Metaheuristic algorithms are inspired by the habits of various creatures in nature. They can effectively solve many real-world problems [1] and are widely used in finance, transportation, physics, chemistry, the military, and other fields [2–8]. The No Free Lunch theorem [9, 10] proves that no single optimization algorithm suits all situations. Therefore, various metaheuristic algorithms and their improved variants are constantly being proposed to solve increasingly complicated problems [11–13].

Particle swarm optimization (PSO), proposed by Kennedy and Eberhart [14, 15], is one of the most important metaheuristic algorithms. They observed and analyzed the foraging behavior of birds and distilled it into this algorithm. PSO maintains a global optimal position and an individual optimal position for each particle; both are updated according to the fitness values in each iteration, so the swarm moves ever closer to the optimal solution of the problem. Characteristics such as few parameters, a simple structure, and fast search speed have led PSO to be applied in many fields. Many improved PSO variants exist, such as constricted particle swarm optimization (CPSO) [16], fully informed particle swarm optimization (FIPSO) [17], comprehensive learning particle swarm optimization (CLPSO) [18], intelligent single particle optimization (ISPO) [19], and adaptive particle swarm optimization (APSO) [20]. PSO can solve many problems, such as optimizing neural networks [21], solving vehicle routing problems [22, 23], scheduling workflows [24], and locating wireless sensor nodes [25]. On top of the original PSO, APSO introduces an evolutionary state estimation strategy, an elite learning strategy, and an adaptive parameter strategy. APSO thus mitigates the slow convergence of the original PSO and its tendency to fall into local optima. This paper combines the compact strategy with APSO to improve the accuracy of mobile sensor localization.

A mobile sensor network provides services to people when the location information of its nodes is known [26]. Data measured by a node without location information is meaningless in many situations [27], such as forest fire detection [28]. Therefore, to make full use of the monitored data, the location of each node must be known [29, 30]. Installing GPS on every node would solve this problem directly, but it is expensive and energy-consuming. Instead, a small number of nodes are randomly selected to carry GPS, and the positions of the remaining nodes are then obtained by positioning techniques that use the locations of the GPS-equipped nodes [31]. The most significant difference between a mobile sensor and a fixed wireless sensor is its mobility [32, 33]. Mobility enables a sensor to collect information in a specified area more effectively and solves the problem that information in an area cannot be collected when the node at a specific location is damaged. The deployment of mobile sensors is also more convenient and does not require the detailed planning that fixed wireless sensor deployment does [34].

The compact idea is to represent the particle swarm by the probability distribution of its behavior instead of storing the position and velocity of each individual. Compact algorithms can effectively save memory [20] and have applications in small robots [35], remote offices [36], and space shuttle control [37]. Algorithms using the compact idea have been continuously proposed, such as the compact artificial bee colony (cABC) [38], compact sine cosine algorithm (cSCA) [39], compact bat algorithm (cBA) [40], and compact particle swarm optimization (cPSO) [41]. However, the combination of the compact strategy with APSO has not yet been studied. This paper combines the compact idea with APSO, proposes a cAPSO algorithm with low memory usage and fast convergence, and applies it to mobile sensor localization.

The rest of the paper is organized as follows. The second section reviews the related work, briefly introducing the APSO algorithm and mobile sensor localization. The third section presents the improved algorithm and its steps. The fourth section tests the performance of the algorithm and compares it with similar algorithms. The fifth section applies the improved algorithm to mobile sensor localization. The sixth section concludes the paper.

2. Related Work

This section briefly introduces APSO and the mobile sensor localization technique MCL.

2.1. Particle Swarm Optimization

The PSO imitates the foraging behavior of birds. The food location is unknown, and each bird is affected by the surrounding birds and keeps approaching the bird with the best position during the foraging process [14, 15]. As the problem is solved, the position of the optimal individual is updated iteratively. Suppose the problem dimension is $D$, the current position of the $i$-th individual is $X_i = (x_i^1, x_i^2, \ldots, x_i^D)$, and the current velocity of the $i$-th individual is $V_i = (v_i^1, v_i^2, \ldots, v_i^D)$. In each iteration, the particle swarm retains the global optimal position $gBest$ and each particle's individual optimal position $pBest_i$, and each particle is attracted toward these two positions. The iterative formulas for updating the velocity and position of the next generation of particles are shown in Equations (1) and (2):

$$V_i(t+1) = \omega V_i(t) + c_1 r_1 \left( pBest_i - X_i(t) \right) + c_2 r_2 \left( gBest - X_i(t) \right), \tag{1}$$

$$X_i(t+1) = X_i(t) + V_i(t+1), \tag{2}$$

where $c_1$ and $c_2$ are two learning factors, $\omega$ is the inertia weight, and $r_1$ and $r_2$ are random numbers between (0,1). The iterative updates of $pBest_i$ and $gBest$ are performed through fitness value comparison. The pseudo-code of the PSO is shown in Algorithm 1.

while i < particles do
 Initialize the position Xi and velocity Vi of each particle
 Calculate the fitness value of each particle fitness(i)
i = i +1
end
Initialize the pBesti = Xi
Initialize the gBest = min(pBesti)
for g =1 to iterMax do
  for i =1 to particles do
   Update the Vi and Xi of the particles by Equations (1) and (2)
   Calculate the fitness value of the new particle fitness(i)
   if fitness(i) < fitness(pBesti) then
    pBesti = Xi
   end
   if fitness(i) < fitness(gBest) then
    gBest = Xi
   end
  end
end
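For concreteness, the following minimal Python sketch implements Algorithm 1 using the updates of Equations (1) and (2). It is an illustration, not the authors' code: the sphere fitness function, the bounds, and the fixed parameter values (ω = 0.9, c1 = c2 = 2.0) are assumptions for the example.

import numpy as np

def sphere(x):
    # Illustrative fitness function (minimization); not from the paper.
    return float(np.sum(x ** 2))

def pso(fitness, dim=30, particles=30, iter_max=1000,
        w=0.9, c1=2.0, c2=2.0, x_min=-100.0, x_max=100.0):
    rng = np.random.default_rng()
    X = rng.uniform(x_min, x_max, (particles, dim))   # positions
    V = np.zeros((particles, dim))                    # velocities
    pbest = X.copy()
    pbest_fit = np.array([fitness(x) for x in X])
    gbest = pbest[np.argmin(pbest_fit)].copy()
    for _ in range(iter_max):
        r1 = rng.random((particles, dim))
        r2 = rng.random((particles, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # Eq. (1)
        X = np.clip(X + V, x_min, x_max)                           # Eq. (2)
        fit = np.array([fitness(x) for x in X])
        improved = fit < pbest_fit        # update pBest by fitness comparison
        pbest[improved] = X[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[np.argmin(pbest_fit)].copy()   # update gBest
    return gbest, float(pbest_fit.min())

best_x, best_f = pso(sphere)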
2.2. Adaptive Particle Swarm Optimization

The APSO improves the original PSO by introducing a state estimation strategy, an elite learning strategy, and a parameter adaptation strategy [20]. The improved algorithm finds solutions faster and more stably. The three strategies are briefly introduced below.

2.2.1. State Estimation Strategy

The APSO divides the entire search process into four states, namely, exploration, exploitation, convergence, and jump-out, represented by S1, S2, S3, and S4. The division is based on the value of the evolution factor $f$. To calculate $f$, the mean Euclidean distance from each particle to all other particles must first be calculated, as shown in Equation (3):

$$d_i = \frac{1}{N-1} \sum_{j=1, j \neq i}^{N} \sqrt{\sum_{k=1}^{D} \left( x_i^k - x_j^k \right)^2}, \tag{3}$$

where $d_i$ represents the mean Euclidean distance of the $i$-th particle, $N$ represents the total number of particles, and $D$ represents the dimension of the problem.

After obtaining the mean distance $d_i$ of each particle, we find the minimum value $d_{min}$, the maximum value $d_{max}$, and the distance $d_g$ of the globally best particle. Then, the evolution factor $f$ is calculated by Equation (4):

$$f = \frac{d_g - d_{min}}{d_{max} - d_{min}} \in [0, 1]. \tag{4}$$
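The following short sketch computes the evolution factor exactly as Equations (3) and (4) describe, given the particle positions as an N × D array; the small epsilon guarding division by zero is an implementation assumption.

import numpy as np

def evolution_factor(X, g_idx):
    # X: (N, D) positions; g_idx: index of the globally best particle.
    diff = X[:, None, :] - X[None, :, :]          # pairwise differences
    dists = np.sqrt((diff ** 2).sum(axis=2))      # Euclidean distances
    N = X.shape[0]
    d = dists.sum(axis=1) / (N - 1)               # Equation (3)
    d_min, d_max, d_g = d.min(), d.max(), d[g_idx]
    return (d_g - d_min) / (d_max - d_min + 1e-12)  # Equation (4)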

According to the value of the evolution factor $f$, the state is divided as shown in Figure 1. In addition, the division also depends on the state of the previous iteration. For example, suppose the evolution factor of this iteration is 0.55. If the previous state were ignored, the current state would be set to S1. However, the previous state does affect the division: if the previous state is S1 or S4, the current state is set to S1, and if the previous state is S2 or S3, the current state is set to S2.
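A sketch of this state division is given below. The piecewise-linear membership functions are taken from the original APSO paper [20] and are assumed to match Figure 1; the tie-breaking rule that consults the previous state is a plausible reading of the behavior described above (it reproduces the f = 0.55 example), not necessarily the authors' exact rule.

import numpy as np

def memberships(f):
    # Fuzzy membership of each state for evolution factor f in [0, 1],
    # piecewise-linear as in [20] (assumed to match Figure 1).
    return {
        "S1": np.interp(f, [0, 0.4, 0.6, 0.7, 0.8, 1], [0, 0, 1, 1, 0, 0]),
        "S2": np.interp(f, [0, 0.2, 0.3, 0.4, 0.6, 1], [0, 0, 1, 1, 0, 0]),
        "S3": np.interp(f, [0, 0.1, 0.3, 1], [1, 1, 0, 0]),
        "S4": np.interp(f, [0, 0.7, 0.9, 1], [0, 0, 1, 1]),
    }

def classify_state(f, prev_state):
    m = memberships(f)
    candidates = [s for s, v in m.items() if v > 0]
    if len(candidates) == 1:
        return candidates[0]
    # When memberships overlap, prefer the previous state, then its
    # neighbors in the cycle S1 -> S2 -> S3 -> S4 -> S1 (assumption).
    preference = {"S1": ["S1", "S2", "S4"], "S2": ["S2", "S3", "S1"],
                  "S3": ["S3", "S4", "S2"], "S4": ["S4", "S1", "S3"]}
    for s in preference[prev_state]:
        if s in candidates:
            return s
    return max(candidates, key=lambda s: m[s])

print(classify_state(0.55, "S4"))  # S1, as in the example above
print(classify_state(0.55, "S3"))  # S2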

2.2.2. Parameter Adaptation Strategy

Three dynamic parameters are involved in APSO: the inertia weight $\omega$, the individual learning factor $c_1$, and the global learning factor $c_2$. The inertia weight changes with the evolutionary state. Its relationship with the evolution factor $f$ is shown in Equation (5):

$$\omega(f) = \frac{1}{1 + 1.5 e^{-2.6 f}} \in [0.4, 0.9]. \tag{5}$$

APSO initializes $\omega$ to 0.9. In the exploration and jump-out states, $f$ is larger, which leads to a larger $\omega$ and favors the global search. In the convergence and exploitation states, $f$ is smaller, which leads to a smaller $\omega$ and favors local convergence.

The $c_1$ and $c_2$ are initialized to 2.0, and the two learning factors are adjusted according to the evolutionary state. In the exploration state, increase $c_1$ and decrease $c_2$. This ensures that the individual learning factor plays the leading role, helping the particles explore around their own best positions and avoid falling into a local optimum. In the exploitation state, slightly increase $c_1$ and slightly decrease $c_2$. Particles in this phase gradually approach a local optimum, and increasing $c_1$ lets them search more effectively around their individual optima; since the local optimum found may not be the global optimum, $c_2$ should be slightly reduced to prevent premature convergence, which would easily trap the population in the local optimum. In the convergence state, slightly increase both $c_1$ and $c_2$. Increasing $c_2$ reflects that the swarm has found the global optimum at this stage and can converge toward it; the increases are kept slight to prevent the learning factors from reaching their upper bounds prematurely, since that would make the particles treat a local optimum as the global optimum and converge too quickly. In the jump-out state, decrease $c_1$ and increase $c_2$. This helps the globally best particle jump out of the convergence zone and find a better position, with the other particles following it and converging there. Table 1 shows the dynamic changes of the two learning factors in the different states.
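The following sketch combines Equation (5) with the Table 1 adjustments. The acceleration step δ drawn uniformly from [0.05, 0.1], the half-steps for "slight" changes, and the clamping of c1 and c2 to [1.5, 2.5] with their sum capped at 4.0 follow the original APSO paper [20]; they are assumptions insofar as this paper may use different constants.

import math
import random

def adapt_parameters(f, state, c1, c2):
    # Equation (5): inertia weight from the evolution factor.
    w = 1.0 / (1.0 + 1.5 * math.exp(-2.6 * f))
    delta = random.uniform(0.05, 0.1)
    # Table 1: signed step for (c1, c2); 0.5 marks a "slight" change.
    signs = {"S1": (1.0, -1.0),    # exploration
             "S2": (0.5, -0.5),    # exploitation
             "S3": (0.5, 0.5),     # convergence
             "S4": (-1.0, 1.0)}    # jump-out
    s1, s2 = signs[state]
    c1 = min(max(c1 + s1 * delta, 1.5), 2.5)
    c2 = min(max(c2 + s2 * delta, 1.5), 2.5)
    if c1 + c2 > 4.0:              # keep the combined pull bounded
        c1, c2 = 4.0 * c1 / (c1 + c2), 4.0 * c2 / (c1 + c2)
    return w, c1, c2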

2.2.3. Elite Learning Strategy

The elite learning strategy makes the globally optimal particle jump out of the convergence zone and find a more superior position when the swarm is in the convergence state. The strategy is controlled by the elite learning rate $\sigma$, calculated by Equation (6):

$$\sigma = \sigma_{max} - \left( \sigma_{max} - \sigma_{min} \right) \frac{g}{iterMax}, \tag{6}$$

where $g$ is the current iteration number, $\sigma_{max}$ and $\sigma_{min}$ are the maximum and minimum values of the elite learning rate, taking 1.0 and 0.1, respectively, and $iterMax$ is the maximum number of iterations. After obtaining the elite learning rate, we randomly select one dimension $d$ of the globally best particle and apply a Gaussian perturbation to it so that it can leave the convergence region, producing a trial particle $T$ as shown in Equation (7):

$$T^d = gBest^d + \left( X_{max}^d - X_{min}^d \right) \cdot Gaussian(0, \sigma^2). \tag{7}$$
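A direct transcription of Equations (6) and (7) in Python, with bound clipping added as an implementation assumption:

import random

def elite_learning(gbest, g, iter_max, x_min, x_max,
                   sigma_max=1.0, sigma_min=0.1):
    # Equation (6): elite learning rate decreases linearly with g.
    sigma = sigma_max - (sigma_max - sigma_min) * g / iter_max
    T = list(gbest)
    d = random.randrange(len(T))    # randomly chosen dimension
    # Equation (7): Gaussian perturbation with variance sigma^2.
    T[d] += (x_max - x_min) * random.gauss(0.0, sigma)
    T[d] = min(max(T[d], x_min), x_max)
    return T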

2.2.4. The Pseudo-Code of APSO

The APSO introduces the above three strategies on the basis of PSO, and the optimized APSO can find the optimal solution better and faster when solving problems.

The pseudo-code of APSO that combines the three strategies is shown in Algorithm 2.

while i < particles do
 Initialize the position Xi and velocity Vi of each particle
 Calculate the fitness value of each particle fitness(i)
i = i +1
end
Initialize the pBesti = Xi
Initialize the gBest = min(pBesti)
for g =1 to iterMax do
  for i =1 to particles do
    Update the Vi and Xi of the particles by Equations (1) and (2)
    Calculate the fitness value of the new particle fitness(i)
    if fitness(i) < fitness(pBesti) then
     pBesti= Xi
     end
    if fitness(i) < fitness(gBest) then
     gBest= Xi
    end
end
Calculate the di by Equation (3)
Calculate the f by Equation (4)
Determine the evolutionary state S by Figure 1
w is adjusted according to Equation (5)
c1 and c2 are adjusted according to Table 1
if evolutionary state S == convergence state S3 then
   Calculate the elite learning rate σ by Equation (6)
   Generate a trial particle T by perturbing one random dimension of gBest by Equation (7)
   if fitness(T) < fitness(gBest) then
     gBest= T
   end
  end
end
2.3. Mobile Sensor Localization

This section introduces the MCL mobile node localization method [42, 43]. The mobile nodes in a mobile sensor network move with random speed and random direction. The biggest advantage of mobile nodes over fixed nodes is their mobility: a mobile node solves the problem that information in an area cannot be collected when the node at a specific location is damaged. The MCL method is divided into three stages: initialization, prediction, and filtering [44]. In the initialization stage, the moving area and the maximum moving speed are specified for each node. In the prediction stage, a preliminary estimate of the location of the mobile node is made. Since the speed and direction of node movement are uncertain, the position after movement lies within a circle whose center is the node's previous position and whose radius is the product of the maximum speed and the positioning interval, as shown in Figure 2.

The filtering stage is the most critical stage of MCL. First, according to the distance between each anchor node and the unknown node, MCL determines which anchor nodes are one-hop beacon nodes of the unknown node and which are two-hop beacon nodes, obtaining the one-hop beacon node set $S$ and the two-hop beacon node set $T$. Second, points are randomly selected in the possible area, and nonconforming points are filtered out according to whether each point is consistent with the ranges of the one-hop and two-hop beacon nodes. The filter condition is shown in Equation (8):

$$filter(p):\ \forall s \in S,\ d(p, s) \le r \ \wedge\ \forall s \in T,\ r < d(p, s) \le 2r, \tag{8}$$

where $r$ is the communication radius and $d(p, s)$ is the distance between a candidate point $p$ and a beacon node $s$.

The gray areas in Figure 3 are the sets of points that meet the filter condition. The MCL confines the position of the unknown node to a small region according to the number of hops from the unknown node to the anchor nodes, and Equation (8) keeps exactly the points that fall in this region. To prevent contingency, the coordinates of all qualified points are averaged as the initial MCL prediction of the unknown node's position, as shown in Equation (9):

$$pos = \frac{1}{N} \sum_{i=1}^{N} node_i, \tag{9}$$

where $N$ represents the total number of points that meet the filter condition, and $node_i$ represents the position of the $i$-th qualified point.
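The sketch below runs the prediction and filtering stages end to end under the notation above: candidates are sampled in the Figure 2 circle, filtered with Equation (8), and averaged with Equation (9). The sample count and the fallback when no candidate survives are assumptions of this illustration.

import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mcl_estimate(prev_pos, v_max, t_interval, S, T, r, samples=500):
    kept = []
    for _ in range(samples):
        # Prediction stage: uniform sample in the circle of Figure 2.
        ang = random.uniform(0.0, 2.0 * math.pi)
        rad = (v_max * t_interval) * math.sqrt(random.random())
        p = (prev_pos[0] + rad * math.cos(ang),
             prev_pos[1] + rad * math.sin(ang))
        # Filtering stage, Equation (8): within r of every one-hop
        # beacon, and between r and 2r of every two-hop beacon.
        if all(dist(p, s) <= r for s in S) and \
           all(r < dist(p, s) <= 2 * r for s in T):
            kept.append(p)
    if not kept:
        return prev_pos                 # no qualified point this round
    # Equation (9): average the qualified points.
    return (sum(x for x, _ in kept) / len(kept),
            sum(y for _, y in kept) / len(kept))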

3. Compact Adaptive Particle Swarm Optimization

This section introduces the idea of the compact strategy and how to apply it to the adaptive particle swarm optimization algorithm.

3.1. Compact Strategy

The primary purpose of the compact strategy is to reduce memory usage without degrading the performance of the original algorithm, and ideally while improving it. If memory usage is reduced, the running speed naturally improves. Instead of representing the population by the position and velocity of each individual, the compact strategy uses a perturbation vector PV to represent the overall state of the population. The perturbation vector is defined as $PV^t = [\mu^t, \sigma^t]$, where $\mu^t$ represents the mean of the perturbation vector, $\sigma^t$ represents its standard deviation, and $t$ represents the current iteration.

The compact strategy ultimately returns a value between (0,1). The PV vector is composed of $\mu$ and $\sigma$, from which the probability distribution function (PDF) can be calculated, and from the PDF the cumulative distribution function (CDF). Following the compact-algorithm literature, the PDF is a Gaussian truncated to $[-1, 1]$; the PDF and CDF are given in Equations (10) and (11):

$$PDF(x) = \frac{\sqrt{\frac{2}{\pi}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}}{\sigma \left[ erf\!\left( \frac{\mu+1}{\sqrt{2}\,\sigma} \right) - erf\!\left( \frac{\mu-1}{\sqrt{2}\,\sigma} \right) \right]}, \quad x \in [-1, 1], \tag{10}$$

$$CDF(x) = \int_{-1}^{x} PDF(t)\, dt. \tag{11}$$

The value range of the CDF is (0,1), which is also the range of the value returned by the compact strategy. Taking the standard normal distribution as an example, its PDF and CDF are shown in Figure 4.

Another important part of the compact strategy is the iterative update of the PV disturbance vector. The compact strategy is based on comparison: a winner and a loser are obtained through a pairwise competition, and the PV vector is then updated as shown in Equations (12) and (13):

$$\mu^{t+1} = \mu^t + \frac{1}{N_p} \left( winner - loser \right), \tag{12}$$

$$\sigma^{t+1} = \sqrt{\left( \sigma^t \right)^2 + \left( \mu^t \right)^2 - \left( \mu^{t+1} \right)^2 + \frac{1}{N_p} \left( winner^2 - loser^2 \right)}, \tag{13}$$

where $t$ represents the current iteration and $N_p$ represents the size of the virtual population. The mean $\mu$ in the PV vector is generally initialized to 0, and the standard deviation $\sigma$ is generally initialized to 10 to avoid accidentally biasing the initialization toward a local optimum. A large number of experiments have shown that a virtual population of 300 works best, so $N_p$ is generally set to 300.
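A minimal sketch of the PV machinery: sampling from the truncated Gaussian (rejection sampling stands in for inverting the CDF of Equation (11)), the update of Equations (12) and (13), and the range mapping anticipating Equation (14) below. The (0,1) normalization step and the numerical guard on the variance are assumptions of this illustration; winner and loser are positions in the same normalized space as μ.

import math
import random

def sample_compact(mu, sigma):
    # Draw from the truncated Gaussian of Equation (10); x lies in (-1, 1).
    while True:
        x = random.gauss(mu, sigma)
        if -1.0 < x < 1.0:
            return x

def update_pv(mu, sigma, winner, loser, n_p=300):
    # Equations (12) and (13): pull the mean toward the winner and
    # adjust the spread; n_p is the virtual population size.
    mu_new = mu + (winner - loser) / n_p
    var_new = sigma**2 + mu**2 - mu_new**2 + (winner**2 - loser**2) / n_p
    return mu_new, math.sqrt(max(var_new, 1e-12))

def to_actual(x, x_min, x_max):
    # Normalize x to (0, 1), then map to the search range; the second
    # step is Equation (14) as described in the text.
    t = (x + 1.0) / 2.0
    return x_min + t * (x_max - x_min)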

3.2. Implementation of the Compact Strategy in APSO

The compact strategy is based on comparison. In other words, there must be a pairwise competition mechanism that produces a winner and a loser so that $\mu$ and $\sigma$ in the PV disturbance vector can be updated. APSO meets this condition through its fitness-value comparisons, so the compact strategy can be combined with APSO to reduce memory and improve running speed. For example, if the particle swarm contains 30 particles and the problem has 30 dimensions, APSO needs $30 \times 30 = 900$ storage units to store the position of every particle in every dimension. In contrast, cAPSO stores only the PV perturbation vector, that is, the two values $\mu$ and $\sigma$ per dimension, for $2 \times 30 = 60$ storage units, which significantly saves storage space. Saving storage reduces the number of memory reads and writes, so the speed of the algorithm improves accordingly.

The cAPSO expresses the position of the particle swarm through the overall probability distribution, and the value returned by the compact strategy lies in (0,1). Assuming $t$ is the return value, it is mapped to the actual position range by Equation (14):

$$X = X_{min} + t \cdot \left( X_{max} - X_{min} \right). \tag{14}$$

The evolution factor $f$ in the original APSO is calculated by Equations (3) and (4). However, in cAPSO the position of the particle swarm is represented by a distribution probability, so the calculation of the evolution factor is replaced by Equation (15).

The cAPSO compares the fitness value of the position generated in this iteration from the PV disturbance vector with the fitness value of the current individual optimal position and produces a winner and a loser. The winner and loser are used in Equations (12) and (13) to update the disturbance vector PV, which then generates the position probability distribution of the particle swarm for the next iteration.

The pseudo-code of cAPSO is shown in Algorithm 3.

Initialize the PV(μ, σ) disturbance vector parameter
Initialize a particle swarm position
Initialize a particle swarm velocity
Initialize pBest = X
Initialize gBest = X
for g =1 to iterMax do
    Generate the position X by sampling the PV disturbance vector and mapping it to the search range by Equation (14)
    Update the V and X of the particles by Equations (1) and (2)
    Calculate the fitness value of the new particle fitness(X)
    (winner, loser) = compare(fitness(pBest), fitness(X))
    Update the PV disturbance vector by Equations (12) and (13)
    Calculate the f by Equation (15)
    Determine the evolutionary state S by Figure 1
    w is adjusted according to Equation (5)
    c1 and c2 are adjusted according to Table 1
    if evolutionary state S == convergence state S3 then
     Calculate the elite learning rate σ by Equation (6)
     Generate a trial position X by perturbing one random dimension of gBest by Equation (7)
     (winner, loser) = compare(fitness(gBest), fitness(X))
     gBest= winner
    end
end

4. The Performance Test of cAPSO

In this section, the cAPSO algorithm is tested on the 28 benchmark functions of CEC2013 [45] and compared with common heuristic algorithms and common compact algorithms. The 28 functions comprise 5 unimodal functions, 15 multimodal functions, and 8 composition functions and are highly representative; they are denoted f1 to f28. In every experiment, the common parameters are kept consistent to ensure a fair comparison.

4.1. Performance Comparison of cAPSO and Common Heuristic Algorithms

In this section, cAPSO is compared with the genetic algorithm (GA) [46], differential evolution (DE) [47], the whale optimization algorithm (WOA) [48], the bat algorithm (BA) [49], and the sine cosine algorithm (SCA) [50] on the 28 test functions of CEC2013. The overall performance of each algorithm relative to cAPSO is measured with the Wilcoxon signed-rank test. Each test function is run twenty times and the average is taken to reduce the effect of chance. To ensure fairness, the problem dimension is set to 50, the population size to 60, and the number of iterations to 3000; the search range required by CEC2013 is [-100,100]. Table 2 shows the performance comparison of cAPSO and the common heuristic algorithms, together with the Wilcoxon signed-rank test results at a fixed significance level. The symbol “>” indicates that cAPSO performs better than the compared algorithm, “=” that it performs the same, and “<” that it performs worse. The last row of Table 2 summarizes the comparison results over all test functions.

Table 2 shows that, compared with DE, cAPSO performs better on 20 functions, the same on 2 functions, and worse on 6 functions. Compared with BA, cAPSO performs better on 21 functions, the same on 1 function, and worse on 6 functions. Compared with GA, WOA, and SCA, cAPSO performs the same on function f8 and better on all the others. It can be seen that the cAPSO algorithm combined with the compact strategy is greatly improved relative to the common heuristic algorithms.

To further illustrate the behavior of the algorithms, this paper uses convergence curves. Since the convergence of many algorithms is very similar and differences are not obvious in a single plot, several representative curves are selected for display. Figure 5 shows the convergence process on some test functions; the horizontal axis is the iteration number and the vertical axis is the fitness value of each algorithm. The smaller the fitness value, the better the performance on that function. Figure 5 shows that the proposed cAPSO performs better than the other heuristic algorithms on f4, f9, f11, and f27; on f17 it is better than all the others except BA, while on f13 and f22 it is worse than DE.

4.2. Performance Comparison of cAPSO and Common Compact Algorithms

In this section, cAPSO is compared with cPSO [41], cABC [38], cSCA [39], and cBA [40] on the 28 test functions of CEC2013. The overall performance of each algorithm relative to cAPSO is again measured with the Wilcoxon signed-rank test. Each test function is run twenty times and the average is taken to reduce the effect of chance. To ensure fairness, the problem dimension is set to 50, the population size to 60, the number of iterations to 3000, and the virtual population size to 300; the search range required by CEC2013 is [-100,100]. Table 3 shows the performance comparison of cAPSO and the common compact algorithms, together with the Wilcoxon signed-rank test results at a fixed significance level. The symbols “>,” “=,” and “<” have the same meaning as in Section 4.1. The last row of Table 3 summarizes the comparison results over all test functions.

Table 3 shows that, compared with cPSO, cAPSO performs better on 24 functions, the same on 2 functions, and worse on 2 functions. Compared with cBA, cAPSO performs better on 20 functions, the same on 3 functions, and worse on 5 functions. Compared with cABC and cSCA, cAPSO performs the same on function f8 and better on all the others. It can be seen that cAPSO is strongly competitive among compact algorithms and outperforms the other compact algorithms.

As in Section 4.1, Figure 6 shows representative convergence curves of the proposed cAPSO and the other compact algorithms. Figure 6 shows that cAPSO performs better than the other compact algorithms on f12, f15, f20, and f25, but its performance on f16 is not as good as cBA's, and on f23 it is worse than cPSO.

5. Application of cAPSO in Mobile Sensor Localization

This section applies cAPSO to the mobile sensor localization technique MCL and compares it with the original MCL, WOA-based MCL, and BA-based MCL under different numbers of anchor nodes and different communication radii. Directly searching for a low-error position through MCL alone takes a lot of time and computing power. Instead, a coarse position is first obtained through MCL, and cAPSO is then applied around that position for further optimization: candidate points are scattered around the initial estimate and moved according to cAPSO, and each new position is compared with the historical optimum to update the best estimate. After a certain number of iterations, the best position found is taken as the final MCL result. Since mobile sensor localization estimates the locations of unknown nodes from the information of anchor nodes, the position error is the key metric: the smaller the error, the more accurate the information. Applying heuristic algorithms to MCL can locate unknown nodes better and reduce the localization error. More accurate coordinates are obtained by iteratively minimizing the error function defined in Equation (16):

$$error = \frac{1}{M N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( \left\| \hat{p}_i - a_j \right\| - d_{ij} \right)^2, \tag{16}$$

where $\hat{p}_i$ represents the estimated location of the unknown node $i$, $a_j$ represents the location of the anchor node $j$, $M$ represents the total number of unknown nodes, and $N$ represents the total number of anchor nodes. $d_{ij}$ represents the distance between each unknown node $i$ and each anchor node $j$; this paper assumes that an anchor node can obtain this distance from the strength of the signal received from the unknown node. The smaller the error value, the higher the localization accuracy.
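A plausible transcription of Equation (16), assuming 2D coordinates and a matrix of measured distances obtained from signal strength; the function name and data layout are illustrative.

import math

def localization_error(est_positions, anchors, measured_d):
    # est_positions: M estimated unknown-node positions (x, y).
    # anchors: N anchor positions (x, y).
    # measured_d[i][j]: measured distance between unknown i and anchor j.
    M, N = len(est_positions), len(anchors)
    total = 0.0
    for i, (xi, yi) in enumerate(est_positions):
        for j, (xj, yj) in enumerate(anchors):
            est = math.hypot(xi - xj, yi - yj)
            total += (est - measured_d[i][j]) ** 2   # squared residual
    return total / (M * N)   # mean squared error, Equation (16)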

5.1. Influence Analysis of Different Anchor Nodes

In the simulation experiment, 200 nodes are randomly scattered in the monitoring area. The number of anchor nodes is 5, 10, 15, 20, 25, 30, 35, and 40, so the number of unknown nodes is 195, 190, 185, 180, 175, 170, 165, and 160, respectively. The communication radius is set to 50 m. Each configuration is tested 20 times and the average is taken as the error result. The cAPSO, WOA, and BA algorithms are each applied to the simulation. The experimental results are shown in Table 4.

Table 4 shows that when the total number of nodes is 200 and the communication radius is 50 m, the larger the number of anchor nodes, the more accurate the localization of unknown nodes and the smaller the error. Table 4 also clearly shows that the positioning error after optimization by a heuristic algorithm is much smaller than that of the original MCL. Among the combined heuristic algorithms, although cAPSO does not always have the best standard deviation, it achieves a better mean error than the other heuristic algorithms.

5.2. Influence Analysis of Different Communication Radius

The simulation experiment randomly scatters 200 nodes in the monitoring area, the number of anchor nodes is set to 30, and the communication radius is set to 10 m, 20 m, 30 m, 40 m, and 50 m. Each configuration is tested 20 times and the average is taken as the error result. The cAPSO, WOA, and BA algorithms are each applied to the simulation. The experimental results are shown in Table 5.

Table 5 shows that when the total number of nodes is 200 and the number of anchor nodes is 30, the greater the communication radius, the more accurate the localization of unknown nodes and the smaller the error. Table 5 also clearly shows that the positioning error after optimization by a heuristic algorithm is much smaller than that of the original MCL. Among the combined heuristic algorithms, cAPSO is better than the other heuristic algorithms in mean error.

6. Conclusion

In this paper, an improved APSO algorithm combined with the compact strategy is proposed and applied to mobile sensor localization. The compact strategy no longer stores the position of each particle in each dimension but describes the distribution of the particles in each dimension through a probability model, which greatly reduces memory usage. This paper tests the performance of cAPSO on the 28 test functions of CEC2013 and compares it with the common heuristic algorithms GA, DE, BA, WOA, and SCA and the compact heuristic algorithms cPSO, cABC, cSCA, and cBA. The comparison results show that cAPSO performs better. Finally, cAPSO is applied to the mobile sensor localization technique MCL and compared with WOA-based MCL and BA-based MCL. The results show that cAPSO-based MCL is more effective at solving this problem.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments

This project was funded by the National Key Research and Development Program of China under Grant No. 11974373.

Supplementary Materials

Reference URL of compact strategy thought: https://blog.csdn.net/Liu_Ning_666/article/details/118308477. Reference URL of many improved PSO algorithms: https://blog.csdn.net/Liu_Ning_666/article/details/120174723 (Supplementary Materials)