Particle swarm optimization with fitness adjustment parameters
Introduction
Eberhart and Kennedy first proposed the standard particle swarm optimization (PSO) algorithm in 1995. PSO is a widely applied meta-heuristic for solving optimization problems such as resource planning, production scheduling and planning, and image enhancement (Omran et al., 2006; Shieh et al., 2011; Sun and Wu, 2011; Zhang et al., 2007). PSO is a population-based algorithm that maintains a swarm of particles, where the position of each particle in the search space directly represents the decision variables of the problem. Each particle updates its velocity and position on the basis of its original movement velocity, its personal best experience, and the global best experience. To balance the global and local search strategies in PSO, three parameters adjust the influence of these terms on the movement velocity of the next iteration: the inertia weight (w), the self-cognition acceleration coefficient (c1), and the social cognition acceleration coefficient (c2).
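The three-term velocity update described above can be sketched as follows; this is a minimal illustration of the standard rule, and the parameter values shown are common defaults, not settings taken from this paper.

```python
import random

def update_particle(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One standard PSO update for a single particle (illustrative sketch).
    x, v: current position and velocity; pbest, gbest: personal/global bests."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        vd = (w * v[d]                         # inertia: retain part of the old velocity
              + c1 * r1 * (pbest[d] - x[d])    # self-cognition: pull toward personal best
              + c2 * r2 * (gbest[d] - x[d]))   # social cognition: pull toward global best
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

When a particle sits exactly on both its personal best and the global best with zero velocity, all three terms vanish and it stays put, which is the intended equilibrium of the rule.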
Numerous studies have demonstrated that the inertia weight and the two acceleration coefficients are critical in determining the solution quality and convergence speed of PSO (Clerc, 2005; Clerc and Kennedy, 2002; Holland, 1975; Jiang et al., 2007; Olorunda and Engelbrecht, 2008; Omran et al., 2006). Some studies have therefore focused on methods for selecting appropriate parameter values. Chatterjee and Siarry (2006), Eiben et al. (1999), and Shi and Eberhart (1998) distinguished two ways of adjusting parameters: parameter tuning and parameter control. Parameter tuning, the more commonly practiced approach, finds appropriate parameter values before the run of the algorithm, and these values remain constant during evolution. Parameter control, by contrast, starts with initial parameter values that are changed during the iterations; the values can be controlled adaptively using feedback on solution performance, or by a simple linear (or nonlinear) function of time. Yasuda and Iwasaki (2004) and Xu (2013) proposed parameter control based on the actual and ideal velocity of each particle, and Xu (2013) verified that a nonlinear ideal velocity equation yields superior solutions to a linear one. Both forms of parameter setting rest on the assumption that a particle should take larger steps at the beginning of the search and progressively smaller steps toward the end.
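The time-based parameter-control idea can be illustrated with the linearly decreasing inertia weight popularized by Shi and Eberhart, where w shrinks from 0.9 to 0.4 over the run so that exploration dominates early and exploitation late:

```python
def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Time-based parameter control: inertia weight decreases linearly
    from w_start to w_end over t_max iterations. The 0.9 -> 0.4 range
    follows Shi and Eberhart's widely used setting."""
    return w_start - (w_start - w_end) * t / t_max
```

A nonlinear schedule (e.g., cosine- or exponential-shaped) replaces the linear ramp with a curved one while keeping the same large-to-small trajectory, which is the distinction Xu (2013) examined.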
Most studies on PSO with self-adaptive parameters adjust the parameter values through a tuning mechanism that retains the concept of time-varying values. For example, the inertia weight unit of APSOVI (Xu, 2013) adjusts the velocity to approximate an ideal value given by a time-based cosine function. All particles thus apply the same set of parameter values in each iteration of the search, regardless of whether their individual performance is good or bad. However, particles with strong performance can overshoot and end up far from the nearby approximate optimum, which considerably lengthens the search. To improve search efficiency, this study develops a method that replaces the uniform parameter values applied to all particles. The proposed PSOFAP algorithm is built on a fitness-varying ideal velocity function (a transformed cosine function): each particle obtains self-adaptive parameter values according to its own fitness, increasing the efficiency of the search for approximate optimums. The performance of PSOFAP was verified on twelve benchmark problems, and the results were compared with those of four other PSO variants; Wilcoxon's test was applied to compare the proposed PSOFAP algorithm with the APSOVI algorithm. The remainder of this paper is organized as follows. Section 2 reviews SPSO and advanced PSO variants with parameter selection. Section 3 describes the proposed PSOFAP algorithm in detail. Section 4 reports and analyzes numerical results on twelve benchmark problems and compares the proposed algorithm with other state-of-the-art PSO variants through non-parametric statistical tests. Section 5 concludes the paper.
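A fitness-varying ideal velocity can be sketched as a cosine-shaped mapping from a particle's fitness rank to a target speed. The function below is a hypothetical illustration of that shape only; PSOFAP's exact transformed cosine equation is given in Section 3 and may differ.

```python
import math

def ideal_velocity(rank, swarm_size, v_max):
    """Hypothetical fitness-varying ideal velocity: rank 0 is the best
    particle in the swarm. Better particles get a small ideal velocity
    (fine local search); worse particles get a large one (bigger moves).
    Illustrative only -- not PSOFAP's published equation."""
    ratio = rank / (swarm_size - 1)                 # normalize rank to [0, 1]
    return 0.5 * v_max * (1.0 - math.cos(math.pi * ratio))
```

The key contrast with APSOVI is the argument of the cosine: a per-particle fitness rank rather than the iteration counter, so two particles in the same iteration can receive different ideal velocities.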
Section snippets
Variants of particle swarm optimization
Eberhart and Kennedy (1995) were inspired by the foraging behavior of flocks of birds and schools of fish when developing the search strategy of PSO. Each particle has its own position and velocity, which represent the values of the decision variables in the current iteration and the movement vector for the next iteration, respectively. The position and velocity of each particle change according to the information shared among all particles in the current iteration. Each particle can record the best position it has found so far (its personal best).
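Putting the pieces above together, a minimal SPSO loop looks like the sketch below: each particle tracks its personal best, the swarm shares a global best, and positions follow the three-term velocity rule. Function names and defaults are illustrative, not the paper's.

```python
import random

def spso_minimize(f, dim, n_particles=20, iters=200, bounds=(-5.0, 5.0),
                  w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal standard PSO minimizing f over [bounds]^dim (sketch)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal best positions
    pf = [f(x) for x in X]                      # personal best fitness values
    g = min(range(n_particles), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                      # global best position and fitness
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:                      # improve personal best
                pf[i], P[i] = fx, X[i][:]
                if fx < gf:                     # improve global best
                    gf, G = fx, X[i][:]
    return G, gf
```

On a smooth unimodal function such as the 2-D sphere, this loop typically converges close to the optimum within a few hundred iterations under the stability condition c1 + c2 < 2(1 + w).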
Particle swarm optimization algorithm with parameter evolution
The PSOFAP algorithm adjusts the absolute velocity of each particle, measured as a Euclidean distance, on the basis of its individual fitness performance. Each particle has its own inertia weight, self-cognition acceleration coefficient, and social cognition acceleration coefficient. PSOFAP expects the absolute movement of a strongly performing particle to be small; by contrast, a poorly performing particle must move farther to its next position. Hence, PSOFAP treats these three parameter values as individual attributes of each particle rather than as constants shared across the swarm.
Benchmark test
To evaluate the performance of PSOFAP, a series of experiments was conducted for each test function with three dimension sizes: 10, 30, and 100. In addition, the standard PSO and three other PSO variants, namely NWAPSO, DPATPSO, and APSOVI, were used to assess the contribution of PSOFAP. The parameter values of each algorithm are shown in Table 1.
To compare the performance of each algorithm with that of PSOFAP, twelve benchmark functions were adopted.
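Two functions commonly found in such benchmark suites are shown below for concreteness; whether these two are among the paper's twelve is an assumption, but they illustrate the unimodal/multimodal contrast such suites test.

```python
import math

def sphere(x):
    """Sphere function: unimodal baseline, global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def rastrigin(x):
    """Rastrigin function: highly multimodal with many local minima,
    global minimum 0 at the origin."""
    return sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) + 10.0 for xi in x)
```

Running each algorithm at dimensions 10, 30, and 100 probes how solution quality degrades as the search space grows, which is where parameter-adaptation schemes tend to separate.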
Conclusions
This study proposes the PSOFAP algorithm, which focuses on avoiding premature convergence and enhancing efficiency to obtain more accurate approximate solutions. The proposed PSOFAP algorithm uses fitness performance to adjust three parameters of individual particles according to their individual velocities. In PSOFAP, each particle is assigned an ideal velocity based on its fitness performance within the swarm, and the deviation between its real velocity and ideal velocity is determined. This deviation is then used to adjust the particle's three parameter values.
References (47)
- An exact hybrid method for the vehicle routing problem with time windows and multiple deliverymen, Computers & Operations Research (2017)
- Analyzing evolutionary optimization and community detection algorithms using regression line dominance, Information Sciences (2017)
- Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization, Computers & Operations Research (2006)
- Analyzing convergence performance of evolutionary algorithms: a statistical approach, Information Sciences (2014)
- A quality and distance guided hybrid algorithm for the vertex separator problem, Computers & Operations Research (2017)
- A priori optimization with recourse for the vehicle routing problem with hard time windows and stochastic service times, European Journal of Operational Research (2016)
- Modified quantum-behaved particle swarm optimization for parameters estimation of generalized nonlinear multi-regressions model based on Choquet integral with outliers, Applied Mathematics and Computation (2013)
- Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm, Information Processing Letters (2007)
- Dynamic router node placement in wireless mesh networks: A PSO approach with constriction coefficient and its convergence analysis, Information Sciences (2013)
- A method for multi-class sentiment classification based on an improved one-vs-one (OVO) strategy and the support vector machine (SVM) algorithm, Information Sciences (2017)
- Improved performance of PSO with self-adaptive parameters for computing the optimal design of water supply systems, Engineering Applications of Artificial Intelligence
- The vehicle-routing problem with time windows and driver-specific times, European Journal of Operational Research
- Modified particle swarm optimization algorithm with simulated annealing behavior and its numerical verification, Applied Mathematics and Computation
- An adaptive parameter tuning of particle swarm optimization algorithm, Applied Mathematics and Computation
- Novel bare-bones particle swarm optimization and its performance for modeling vapor–liquid equilibrium data, Fluid Phase Equilibria
- A conflict–congestion model for pedestrian–vehicle mixed evacuation based on discrete particle swarm optimization algorithm, Computers & Operations Research
- A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems, Journal of Global Optimization
- L'optimisation par essaims particulaires [Particle swarm optimization]
- The particle swarm-explosion, stability, and convergence in a multidimensional complex space, IEEE Transactions on Evolutionary Computation
- Parameter control in evolutionary algorithms, IEEE Transactions on Evolutionary Computation