An adaptive parameter tuning of particle swarm optimization algorithm

https://doi.org/10.1016/j.amc.2012.10.067

Abstract

An adaptive parameter tuning of particle swarm optimization based on velocity information (APSO-VI) algorithm is proposed. The velocity convergence of particles is first analyzed and the relationship between particle velocity and search failures is identified, revealing why PSO has relatively poor global search ability. The algorithm then introduces a velocity index, defined as the average absolute value of the velocities of all particles. A new strategy is presented in which the inertia weight is dynamically adjusted so that this average absolute velocity tracks a given nonlinear ideal velocity by feedback control, which prevents the velocity from approaching zero in the early stage. Guided by the nonlinear ideal velocity, APSO-VI maintains appropriate swarm diversity and effectively alleviates premature convergence. Numerical experiments compare the proposed algorithm with several variants of PSO on benchmark functions. The results show that the proposed algorithm remarkably improves the ability of PSO to escape local optima and significantly enhances convergence speed and precision.

Introduction

Particle swarm optimization (PSO) is a population-based heuristic global optimization technique introduced by Kennedy and Eberhart [1] in 1995, often referred to as a swarm-intelligence method. Its basic idea is the simulation of simplified animal social behaviors, such as fish schooling and bird flocking.

Over the past decade, the PSO algorithm has gained widespread popularity in the research community, mainly because of its simple implementation, reported success on benchmark test problems, and acceptable performance on various application problems. PSO has already been applied with success to many areas, such as function optimization [2], artificial neural network training [3], and fuzzy system control [4].

Unfortunately, when solving complex multimodal tasks, the conventional PSO algorithm is easily trapped in local optima, and its convergence rate decreases considerably in the later period of evolution; upon reaching a near-optimal solution, the algorithm stops optimizing, so the accuracy it can achieve is limited. These shortcomings have restricted wider application of PSO to real-world optimization problems. Therefore, fast convergence and the avoidance of local optima are two important and appealing goals in the study of PSO. A number of variants of PSO have been developed to achieve these two goals; see for example [5], [6], [7], [8], [9], [10], [11], [12], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26], [27], [28], [29], [30].

Firstly, parametric studies on the coefficients (the inertia weight and acceleration coefficients) were conducted in [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17]. The various inertia weighting strategies can be categorized into three classes. The first class contains strategies in which the inertia weight is held constant during the search or determined randomly [5], [6]. In the second class, the inertia weight is defined as a function of time or iteration number, and such strategies are referred to as time-varying inertia weight strategies [7], [8], [9], [35]. Despite claims made in some papers, these methods are not considered adaptive, since they do not monitor the situation of the particles in the search space. The third class contains methods that use a feedback parameter to monitor the state of the algorithm and adjust the value of the inertia weight accordingly [10], [11], [12], [14]. The acceleration coefficients are usually used to balance the global and local search abilities of PSO, and various coefficient strategies have been proposed to improve its efficiency [14], [15], [16], [17].
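As a concrete illustration of the second (time-varying) class, here is a minimal sketch of the widely used linearly decreasing inertia weight; the 0.9 → 0.4 range is a common default in the literature, not necessarily the setting used in the cited works:

```python
def linear_inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Time-varying inertia weight: decreases linearly from w_max at
    iteration 0 to w_min at iteration t_max, favoring exploration early
    and exploitation late. The 0.9 -> 0.4 range is a common default."""
    return w_max - (w_max - w_min) * t / t_max
```

Such schedules depend only on the iteration counter, which is exactly why they are not adaptive in the sense used above: they ignore the actual state of the swarm.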

Secondly, some improved PSOs enhance performance by modifying the updating equation [18], [19], [20]. In Ref. [18], a modified velocity strategy (MVPSO) was presented, in which each particle is attracted by the global best particle and a random particle chosen from a set of good particles. Furthermore, in Ref. [19], a perturbed particle swarm algorithm (pPSA) was presented, based on a new particle-updating strategy built upon the concept of a perturbed global best, to deal with premature convergence and maintain diversity within the swarm.

Thirdly, hybridization, in which operators of other optimization algorithms are adopted within PSO, has drawn increasing attention as a way to improve performance [21], [22], [23], [24], [25], [26]. Evolutionary operators and other methods such as selection [21], crossover and mutation [22], and chaotic sequences [23] have been introduced into PSO to keep the best particles, increase population diversity, and improve the capability to escape local minima. In [24], a novel hybrid particle swarm optimizer (PSO-EO), which combines the exploration ability of PSO with the exploitation ability of extremal optimization (EO), was proposed, improving on the original PSO for optimization problems. In [26], a novel particle swarm optimization algorithm based on particle migration (MPSO) was proposed.

Fourthly, other methods attempt to improve the performance of PSO by modifying the population topology [27], [28], [29], [30]. Kennedy and Mendes [27], [28], [29] evaluated a number of topologies, including the case of random neighbors. Their research [28], [30] suggests that the gbest version converges fast but is often trapped in a local optimum, while the lbest network has a better chance of finding an optimal solution, although with slower convergence.

The techniques mentioned above use various mechanisms to improve the global search ability of PSO to some extent, but they struggle to achieve a tradeoff between exploration and exploitation that prevents stagnation and premature convergence. This is because, under the velocity and position update rules of PSO, the change in a particle's position depends only on the velocity term, and these mechanisms cannot effectively control the velocity so as to maintain swarm diversity. There thus remains room to improve the performance of PSO.

Recently, some researchers have tended to use systematic guidelines for selecting or fine-tuning the parameters so as to provide an appropriate balance between diversification and intensification in PSO searches [31], [32]. In Ref. [31], an adaptive PSO algorithm (DPATPSO) was proposed using the average absolute value of the velocities of all particles, which can serve as an index of the briskness of the swarm. An adaptive strategy tunes one of its parameters so that this quantity follows a given linear ideal velocity by feedback control. Clearly, DPATPSO provides a good idea and a good framework for evolutionary algorithms. However, although a PSO algorithm with a linear ideal velocity under feedback control can locate a satisfactory solution at markedly fast speed, its ability to fine-tune the optimum solution is limited, mainly due to the lack of a mechanism for adapting the ideal velocity during the evolution process.

In order to improve the global optimization ability of PSO, this paper analyzes the impact of velocity on PSO search ability and develops an adaptive parameter tuning of particle swarm optimization based on velocity information (APSO-VI) algorithm, which uses a nonlinear ideal velocity to control the search process and aims to give PSO both good convergence speed and good search performance. The inertia weight is dynamically adjusted so that the average absolute velocity of all particles follows a given nonlinear ideal velocity by feedback control. Guided by this nonlinear ideal velocity, APSO-VI maintains appropriate diversity and effectively balances local and global search.

The rest of this paper is organized as follows: Section 2 presents a review of PSO. Section 3 analyzes the behavior of standard PSO. Section 4 describes the proposed APSO-VI algorithm. Section 5 presents numerical experiments illustrating the efficiency of the proposed algorithm. Finally, Section 6 concludes the paper.

Section snippets

Standard particle swarm optimization

Since the original version of PSO was proposed in 1995 [13], much work has been done to develop it. In order to control global exploration and local exploitation effectively, Shi and Eberhart [1] introduced the concept of an inertia weight into the original PSO, producing a modified PSO referred to as the standard PSO version (SPSO). In SPSO, each particle represents a possible solution to the task at hand, and the particle swarm starts with the random initialization of a population of
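For reference, the SPSO velocity and position updates described here can be sketched as follows; the defaults w = 0.729, c1 = c2 = 1.494 are common illustrative values from the literature, not necessarily the settings used in this paper:

```python
import numpy as np

def spso_step(x, v, pbest, gbest, w=0.729, c1=1.494, c2=1.494, rng=None):
    """One SPSO update for a swarm of shape (n_particles, dim).

    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    x <- x + v

    r1, r2 are uniform random numbers drawn per particle and dimension.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

Each particle is thus pulled toward its personal best and the global best, with the inertia weight w scaling how much of the previous velocity is retained.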

Behavior analysis of SPSO

Early published theoretical analyses of SPSO have concentrated on the behavior of particles by studying particle trajectories [33], [34]. On this basis, let us analyze the influence of particle velocity on the convergence of SPSO. For convenience, denote φ1 = c1Rand1(), φ2 = c2Rand2(), φ = φ1 + φ2. Considering the velocity and position of a particle at discrete time steps, substitution of Eq. (2) into Eq. (1) yields the following non-homogeneous recurrence relation: vi(t+1) = (1 + w − φ1 − φ2)vi(t) − wvi(t−1) + φ1[pi(t) − pi(t−1)] + φ2[pg(t) − pg(t−1)], where pi and pg denote the personal and global best positions.
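As a rough numerical illustration (with arbitrarily chosen, fixed parameters rather than the paper's settings), iterating the homogeneous part of this recurrence, i.e. with the personal and global bests held fixed, shows the particle velocity decaying toward zero, which is the behavior linked above to search failures:

```python
def velocity_trace(w=0.7, phi=1.5, v0=1.0, v1=1.0, steps=50):
    """Iterate the homogeneous part of the velocity recurrence
        v(t+1) = (1 + w - phi) * v(t) - w * v(t-1)
    with fixed w and phi (illustrative values). For these parameters the
    characteristic roots have modulus sqrt(w) < 1, so |v| shrinks to 0."""
    vs = [v0, v1]
    for _ in range(steps):
        vs.append((1 + w - phi) * vs[-1] - w * vs[-2])
    return vs
```

With w = 0.7 and phi = 1.5 the velocity oscillates with geometrically shrinking amplitude, so after a few dozen iterations the particle barely moves, regardless of whether it has found a good region.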

Adaptive parameter tuning of APSO-VI

In [31], DPATPSO introduced an adaptive strategy for tuning one of its parameters so as to follow a given linear ideal average velocity by feedback control, providing a systematic guideline for selecting or fine-tuning the parameters and thus an appropriate balance between exploration and exploitation in the search. However, while a larger particle velocity favors coarse global exploration during the search, it is not easy to obtain
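A hedged sketch of the general velocity-feedback idea is given below. This is an illustrative bang-bang rule, not the exact DPATPSO or APSO-VI update (which is not reproduced in this excerpt); the step size delta and the bounds on w are assumptions:

```python
import numpy as np

def mean_abs_velocity(v):
    """Average absolute velocity over all particles and dimensions,
    the swarm-activity index used as feedback."""
    return float(np.mean(np.abs(v)))

def adjust_inertia(w, v_avg, v_ideal, delta=0.1, w_min=0.2, w_max=1.1):
    """Illustrative feedback rule: raise the inertia weight when the
    swarm's mean |velocity| falls below the ideal curve (to re-energize
    the particles), lower it when the swarm moves faster than desired.
    delta, w_min, w_max are assumed values, not the paper's."""
    if v_avg < v_ideal:
        w = min(w + delta, w_max)
    elif v_avg > v_ideal:
        w = max(w - delta, w_min)
    return w
```

The key design point is that w is driven by the measured swarm state rather than by the iteration counter, which is what distinguishes this class of methods from the time-varying schedules discussed in the introduction.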

Benchmark test functions

To investigate the performance of APSO-VI, nine benchmark functions [31], [36] are adopted in this paper. Table 1 provides a detailed description of these problems. All test functions are minimization problems. The first three functions (f1–f3) are unimodal, while the rest (f4–f9) are multimodal optimization problems. The dimensions of the search space (D) are 10, 30 and 100, respectively. For each test problem, x is the best solution to the test problem and f(x)
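Two classic benchmark functions typically found in such test suites can be written as follows, one unimodal (Sphere) and one multimodal (Rastrigin); whether these match the paper's exact f1–f9 set is not confirmed by this excerpt:

```python
import numpy as np

def sphere(x):
    """Unimodal Sphere function: f(x) = sum(x_i^2),
    global minimum 0 at the origin."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Multimodal Rastrigin function:
    f(x) = 10*D + sum(x_i^2 - 10*cos(2*pi*x_i)),
    global minimum 0 at the origin, with many regularly spaced local minima."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

Unimodal functions mainly test convergence speed and precision, while multimodal functions such as Rastrigin test the ability to escape local optima, which is the central claim made for APSO-VI.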

Conclusion

Aiming at the search failures of the SPSO algorithm, this paper has analyzed the relation between SPSO performance and particle search velocity, and an adaptive parameter tuning of particle swarm optimization based on velocity information (APSO-VI) algorithm has been proposed. APSO-VI dynamically adjusts the inertia weight so that the average absolute velocity follows a given nonlinear ideal average velocity by feedback control. Under the guidance of the given nonlinear ideal

Acknowledgements

The authors wish to acknowledge the National Natural Science Foundation of China (Grant 61175127) and the Science and Technology Project of the Department of Education of Jiangxi Province, China (Grant GJJ12093) for financial support.

References (36)

  • M.M. El-Sherbiny, Particle swarm inspired optimization algorithm without velocity equation, Egyptian Informatics J. (2011)
  • H. Gao et al., Particle swarm algorithm with hybrid mutation strategy, Appl. Soft Comput. (2011)
  • W. Hong, Chaotic particle swarm optimization algorithm in a support vector regression electric load forecasting model, Energy Convers. Manage. (2009)
  • M.R. Chen et al., A novel particle swarm optimizer hybridized with extremal optimization, Appl. Soft Comput. (2010)
  • W.F. Abd-El-Wahed et al., Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems, J. Comput. Appl. Math. (2011)
  • G. Ma et al., A novel particle swarm optimization algorithm based on particle migration, Appl. Math. Comput. (2012)
  • M. Jiang et al., Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm, Inform. Process. Lett. (2007)
  • J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of IEEE International Conference on Neural...