Low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight
Graphical abstract
Performance of PSO, LPSO, LPSO-TVAC and the proposed LHNPSO in solving the Ackley function.
Introduction
Inspired by research on the modeling and simulation of the behavior of bird flocks, Kennedy and Eberhart proposed the particle swarm optimization (PSO) algorithm [1]. The algorithm is a stochastic population-based method and is regarded as a global search strategy. In PSO, particles move through the problem space with a specified velocity in search of the optimal solution. Each particle maintains a memory that keeps track of its previous best position and the global best position. The most important advantages of PSO are that it is easy to implement and has few parameters to adjust [2]. Commonly, the adjustable parameters are the inertia weight and the cognitive and social parameters.
The PSO algorithm has attracted considerable attention and has been applied in many research areas over the past decades [3], [4], [5], [6], [7], [8], [9], [10], [11]. Generally, convergence speed and the ability to find the global optimum are the basic criteria for assessing the performance of an optimization method. A large number of improved PSO algorithms [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22] have been proposed to achieve these two goals, that is, faster convergence and avoidance of premature convergence to local optima.
The convergence speed of a PSO algorithm depends on the inertia weight and the two acceleration coefficients. Random [23], linear time-varying [24], nonlinear time-varying [25], [26] and adaptive strategies [27], [28], [29] have been used to adjust the inertia weight and improve the convergence speed. A review and summary of the various inertia weight modification mechanisms reported in the literature can be found in reference [2]. Nonlinear time-varying and adaptive inertia weights normally perform better than the others, and the principle and implementation of nonlinearly decreasing inertia weights are simpler than those of adaptive ones. To date, nonlinear functions of third order and below have been investigated extensively for adjusting the inertia weight; the performance of higher-order nonlinear time-varying weights, however, requires further study.
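The linearly decreasing inertia weight of [24], for instance, reduces the weight in proportion to the iteration count: w(t) = w_max − (w_max − w_min)·t/T. A minimal sketch (the default parameter values here are illustrative, not taken from the paper):

```python
def linear_tv_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing time-varying inertia weight.
    Falls from w_max at t = 0 to w_min at t = t_max."""
    return w_max - (w_max - w_min) * t / t_max
```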
The concept of initializing the particles in most PSO algorithms is the same as that of generating random numbers in the traditional Monte Carlo simulation method, which can be regarded as the original stochastic optimization method. In contrast to traditional Monte Carlo methods using pseudo-random numbers, the quasi-Monte Carlo method produces deterministic sequences of well-chosen points that provide the best possible spread over the ranges of the variables [30]. These deterministic sequences, often referred to as low-discrepancy sequences, fill the sample space efficiently and uniformly [31] and have been successfully used to solve global optimization problems [30], [31], [32], [33].
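The Halton sequence, used later in this paper for initialization, is one such low-discrepancy sequence: coordinate d of the ith point is the radical inverse of i in the dth prime base. A self-contained sketch:

```python
def radical_inverse(n, base):
    """Van der Corput radical inverse of the integer n in the given base:
    reflect the base-b digits of n about the radix point."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def halton(n_points, dim, bases=(2, 3, 5, 7, 11)):
    """First n_points of the dim-dimensional Halton sequence in [0, 1)^dim,
    using the first dim primes as bases."""
    return [[radical_inverse(i, bases[d]) for d in range(dim)]
            for i in range(1, n_points + 1)]
```

Unlike pseudo-random draws, these points are deterministic and avoid the clusters and gaps that random sampling produces in small populations.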
The major objective of this paper is to propose an improved PSO algorithm that benefits from the advantages of a high-order nonlinear time-varying inertia weight and a low-discrepancy sequence. The paper is organized as follows. Section 2 reviews the classical PSO and some commonly accepted improved versions of PSO. In Section 3, nonlinear functions of different orders, varying from 1/18 to 9, are tested for adjusting the inertia weight and the two acceleration coefficients, and the corresponding parametric studies on the orders are carried out. The improved PSO with adjusted coefficients is further tested with different population sizes. Section 4 presents an improved PSO method with low-discrepancy sequence initialized particles and a high-order (1/π²) nonlinear time-varying inertia weight (LHNPSO), together with experimental results on well-known benchmark test functions. Section 5 gives a brief conclusion.
Section snippets
PSO
The PSO algorithm starts with a population of particles randomly initialized in the search space. Each particle represents a potential solution. The algorithm searches for the optimal solution by moving the positions of the particles in the search space. The position and velocity of the ith particle are represented by the n-dimensional vectors xi = (xi1, xi2, …, xin) and vi = (vi1, vi2, …, vin), respectively. The particle moves its current position toward the global optimum based on two items, that is, the best
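The classical update moves each particle with its previous velocity plus random attractions toward its own best position and the swarm's global best. A minimal single-particle sketch of the standard update rule (parameter values are illustrative):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One classical PSO update for a single particle (all vectors are lists).
    w is the inertia weight; c1 and c2 are the cognitive and social
    acceleration coefficients."""
    new_v = [w * vi
             + c1 * random.random() * (pb - xi)   # pull toward own best
             + c2 * random.random() * (gb - xi)   # pull toward swarm best
             for xi, vi, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```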
Sensitivity analysis of time-varying inertia weight and acceleration coefficients to the order of nonlinear functions
In contrast to linear time-varying inertia weight and acceleration coefficients, nonlinear functions are proposed in this study to adjust them. High-order nonlinear functions have advantages for updating the PSO parameters. For instance, a faster reduction of the inertia weight can be achieved in the early stage, which increases the rate of convergence; in the neighborhood of the optimum, the reduction of the inertia weight becomes slower, which helps capture the solution.
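One common parameterization consistent with this behavior raises the normalized iteration count to a power p: w(t) = w_max − (w_max − w_min)·(t/T)^p. For p < 1 (such as the order 1/π² adopted later), (t/T)^p rises steeply at first, so the weight drops quickly early on and flattens near the end of the run. A hedged sketch; the exact formula and parameter values used in the paper may differ:

```python
import math

def nonlinear_tv_weight(t, t_max, p, w_max=0.9, w_min=0.4):
    """Nonlinear time-varying inertia weight of order p (assumed form).
    p < 1 gives a fast early decrease that flattens near t_max;
    p > 1 gives a slow early decrease that steepens near t_max."""
    return w_max - (w_max - w_min) * (t / t_max) ** p

p = 1 / math.pi ** 2  # the high order (~0.101) studied in this paper
```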
LHNPSO and computational results
Here, we propose a low-discrepancy sequence initialized particle swarm optimization algorithm with high-order nonlinear time-varying inertia weight (LHNPSO). The initial population of particles is generated using the Halton sequence to cover the search space efficiently. It should be noted that, in our experience, low-discrepancy sequence initialization does not greatly improve the performance of the proposed method on these benchmark test functions, but low-discrepancy sampling based
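The two ingredients can be combined into a compact sketch: a Halton-initialized swarm driven by an inertia weight of order 1/π². This is an illustrative reconstruction under assumed details (the weight formula, coefficient values c1 = c2 = 1.5 and w from 0.9 to 0.4 are assumptions, not taken from the paper):

```python
import math
import random

def radical_inverse(n, base):
    """Van der Corput radical inverse of n in the given base (Halton coordinate)."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, base)
        denom *= base
        inv += digit / denom
    return inv

def lhnpso(f, bounds, n_particles=20, t_max=100, seed=0):
    """Sketch of an LHNPSO-style optimizer: Halton-initialized swarm with an
    inertia weight of order 1/pi**2.  f maps a coordinate list to a scalar to
    minimize; bounds is a list of (lo, hi) pairs, one per dimension."""
    random.seed(seed)
    dim = len(bounds)
    bases = [2, 3, 5, 7, 11, 13][:dim]          # first primes as Halton bases
    p = 1 / math.pi ** 2                        # weight order from the paper
    w_max, w_min, c1, c2 = 0.9, 0.4, 1.5, 1.5   # illustrative values

    # Halton-sequence initialization, scaled into the search box.
    x = [[lo + radical_inverse(i, b) * (hi - lo)
          for b, (lo, hi) in zip(bases, bounds)]
         for i in range(1, n_particles + 1)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pval = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]

    for t in range(t_max):
        # Assumed high-order nonlinear time-varying weight.
        w = w_max - (w_max - w_min) * (t / t_max) ** p
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * random.random() * (pbest[i][d] - x[i][d])
                           + c2 * random.random() * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            fx = f(x[i])
            if fx < pval[i]:
                pbest[i], pval[i] = x[i][:], fx
                if fx < gval:
                    gbest, gval = x[i][:], fx
    return gbest, gval
```

For example, minimizing the 2-D sphere function over [−5, 5]² with this sketch drives the best value close to zero within a hundred iterations.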
Conclusions
A new variant of PSO is proposed which employs a low-discrepancy sequence and a decreasing inertia weight adjusted by a high-order (1/π²) nonlinear function. The performance of the LHNPSO algorithm is tested on a set of well-known benchmark functions, and the results show that the method converges faster and gives a much more accurate final solution than the PSO, LPSO and LPSO-TVAC algorithms. Moreover, the proposed LHNPSO is very easy to implement. From this study, it can
Acknowledgements
The authors would like to thank the reviewers for constructive comments on an earlier version of the paper. This research was supported by the Australian Research Council through ARC Discovery Project 130102934.
References (39)
- et al., A novel particle swarm optimization algorithm with adaptive inertia weight, Appl. Soft Comput. J. (2011)
- A study on particle swarm optimization and artificial bee colony algorithms for multilevel thresholding, Appl. Soft Comput. (2013)
- et al., A particle swarm optimization algorithm for optimal car-call allocation in elevator group control systems, Appl. Soft Comput. (2013)
- et al., Particle swarm optimization for the vehicle routing problem with stochastic demands, Appl. Soft Comput. (2013)
- et al., Damage detection based on improved particle swarm optimization using vibration data, Appl. Soft Comput. (2012)
- Product demand forecasts using wavelet kernel support vector machine and particle swarm optimization in manufacture system, J. Comput. Appl. Math. (2010)
- et al., A discrete version of particle swarm optimization for flowshop scheduling problems, Comput. Oper. Res. (2007)
- et al., Shadow detecting using particle swarm optimization and the Kolmogorov test, Comput. Math. Appl. (2011)
- et al., Short-term scheduling of cascade reservoirs using an immune algorithm-based particle swarm optimization, Comput. Math. Appl. (2011)
- et al., Particle swarm optimization applied to the design of water supply systems, Comput. Math. Appl. (2008)
- A new gradient based particle swarm optimization algorithm for accurate computation of global minimum, Appl. Soft Comput.
- A novel particle swarm optimizer hybridized with extremal optimization, Appl. Soft Comput.
- A novel hysteretic model for magnetorheological fluid dampers and parameter identification using particle swarm optimization, Sens. Actuators A: Phys.
- A rank based particle swarm optimization algorithm with dynamic adaptation, J. Comput. Appl. Math.
- Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems, J. Comput. Appl. Math.
- Feedback learning particle swarm optimization, Appl. Soft Comput.
- A hybrid of genetic algorithm and particle swarm optimization for solving bi-level linear programming problem – a case study on supply chain model, Appl. Math. Model.
- Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization, Comput. Oper. Res.
- Multi-Objective Particle Swarm Optimization with time variant inertia and acceleration coefficients, Inf. Sci.