Information Sciences

Volumes 346–347, 10 June 2016, Pages 389–411

Convergence proof of an enhanced Particle Swarm Optimisation method integrated with Evolutionary Game Theory

https://doi.org/10.1016/j.ins.2016.01.011

Abstract

This paper proposes an enhanced Particle Swarm Optimisation (PSO) algorithm and examines its performance. In the proposed approach, PSO is combined with Evolutionary Game Theory to improve convergence. One of the main challenges of such stochastic optimisation algorithms is the difficulty of theoretically analysing their convergence and performance. Therefore, this paper analytically investigates the convergence and performance of the proposed PSO algorithm. The analysis shows that the convergence speed of the proposed PSO is superior to that of the Standard PSO approach. This paper also develops another algorithm, combining the proposed PSO with the Standard PSO algorithm, to mitigate the potential premature convergence issue of the proposed PSO. The combined approach consists of two types of particles: one follows the Standard PSO and the other follows the proposed PSO. This enables the algorithm to exploit both the diversification provided by the particles’ exploration and the adaptation of the search direction.

Introduction

Over the last few decades, many scientists have been inspired by the modelling of social interactions among animals to solve NP-hard optimisation problems. Although the communication among the different agents in such social interactions is limited to an exchange of basic information, it results in very effective team work. Particle Swarm Optimisation (PSO) is one of the best known and most established approaches based on this concept. The aim of the original PSO method, which was proposed by Kennedy and Eberhart, was to reproduce this social interaction among agents in order to solve non-linear continuous optimisation problems [10], [11]. It is known that PSO not only provides efficient and satisfactory solutions like other meta-heuristic methods [8], [9], [21], but also achieves accurate results comparable to other search algorithms, such as Genetic Algorithms (GA) [1], [5], [16], for problems involving unconstrained continuous functions as well as for more complex and highly constrained problems [7].

Since the performance of the PSO method heavily depends on the three weighting parameters determining the interactions between the particles, how to set these parameters has been widely investigated. Shi and Eberhart studied the roles played by the inertia weight and the maximum velocity in the conventional PSO algorithm [22]. Later, Meissner et al. devised another way to select the coefficients [18]. The concept presented in [18] is based on optimising the free parameters of PSO by having swarms within a swarm; the optimised coefficients were determined empirically using Neural Networks. Note that classical and state-of-the-art PSO algorithms are comprehensively discussed in recent survey papers such as [6], [29].

When designing a new optimisation method, as with most stochastic algorithms, convergence remains one of the key questions. The PSO initially proposed by Kennedy and Eberhart in [10] was proved to be a stagnant algorithm, i.e., it does not guarantee convergence to a locally optimal solution [20]. Despite many studies investigating the convergence of PSO to a stable point [4], [20], [25], [26], [27], the optimality of this point was not clearly established until the recent investigation by van den Bergh and Engelbrecht in [28]. In their study, an adapted PSO-based method, called Guaranteed Convergence PSO (GCPSO), was proposed and its local convergence was analytically proved.

Although the key modification in GCPSO guarantees its convergence, its convergence speed relative to other methods was not investigated. This paper was initiated from the idea that it might be possible to design a method that guarantees convergence to a local optimal solution while improving the convergence speed of the original algorithm. Based on this idea, we developed an enhanced PSO approach which integrates Evolutionary Game Theory (EGT) with PSO [12], [13], [14], [15]. Although the performance of this approach has been investigated via numerical simulations on different sets of benchmark problems, no theoretical analysis was performed.

This paper proposes a new PSO algorithm in which EGT is integrated with Standard PSO (SPSO 2011), which was developed by Clerc in [3]. As the proposed algorithm combines EGT with PSO, it is named Evolutionary Game based Particle Swarm Optimiser (EGPSO). The proposed algorithm maps an Evolutionary Stable Strategy (ESS) obtained from EGT to the three coefficients determining the motion of the particles in PSO. Note that integrating EGT with PSO was also proposed in our previous studies [12], [13], [14], [15]. However, the mapping in this paper is newly proposed in order to provide theoretical guarantees on local convergence and superior convergence characteristics.
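The precise ESS-to-coefficient mapping is defined later in the paper; as a rough, hedged illustration of the idea only, the Python sketch below feeds three coefficients produced by a hypothetical mapping function into a classical PSO-style velocity and position update. The function ess_to_coefficients, its scaling constants, and the use of the classical update rule are assumptions made for this sketch and are not taken from the paper.

```python
import numpy as np

def ess_to_coefficients(ess):
    """Hypothetical mapping from an ESS mixed strategy (probabilities over
    'inertia', 'cognitive' and 'social' behaviours) to the three PSO
    coefficients.  The scaling constants are illustrative only."""
    w_max, phi_max = 0.9, 2.0
    omega = w_max * ess[0]    # weight on the previous velocity
    phi1 = phi_max * ess[1]   # attraction towards the personal best
    phi2 = phi_max * ess[2]   # attraction towards the neighbourhood best
    return omega, phi1, phi2

def pso_step(x, v, p_best, g_best, ess, rng):
    """One classical PSO velocity/position update with coefficients taken
    from the (assumed) ESS mapping above."""
    omega, phi1, phi2 = ess_to_coefficients(ess)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = omega * v + phi1 * r1 * (p_best - x) + phi2 * r2 * (g_best - x)
    return x + v_new, v_new

# Example: rng = np.random.default_rng(0); ess = (0.5, 0.25, 0.25)
```

Note that SPSO 2011 itself does not use this classical rule: it samples the new position inside a hypersphere around a centre of gravity of the current position and the two attractors, so the sketch above only conveys the role played by the coefficients ω, ϕ1 and ϕ2.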

The performance and convergence properties of the proposed EGPSO algorithm are theoretically examined. The analysis shows that the proposed EGPSO algorithm guarantees convergent particle trajectories and that the convergence point is indeed a local optimum. Moreover, it is theoretically proved that the proposed EGPSO converges faster than SPSO 2011. Note that a performance analysis of the algorithm based on numerical simulations is not the subject of this study, but of future investigation. The analysis of the local convergence of the proposed algorithm is based on the investigation by Solis and Wets [23] into the convergence of stochastic methods such as random search techniques.

One potential issue with the EGPSO algorithm is that its superior convergence speed might result in premature convergence to a local optimal solution. To alleviate this issue, this paper also proposes a Combined-EGPSO (C-EGPSO) algorithm in which there are two types of particles: one following the EGPSO algorithm developed in this paper and the other following SPSO 2011. Since EGPSO provides good convergence characteristics and SPSO 2011 enables efficient exploration of the solution space, the proposed C-EGPSO can leverage both intensification and diversification of the search.
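As a minimal sketch of this two-population idea, assuming placeholder update functions spso2011_step and egpso_step (which stand in for the respective update equations and are not the paper's implementation) and an arbitrary 50/50 split between the two particle types:

```python
def make_swarm(n_particles, dim, bounds, rng, frac_egpso=0.5):
    """Create a two-population swarm: a fraction of the particles follows the
    EGPSO rule, the rest follow SPSO 2011.  The 50/50 default split is an
    arbitrary illustrative choice."""
    lo, hi = bounds
    return [{
        "id": i,
        "x": rng.uniform(lo, hi, size=dim),
        "v": rng.uniform(-(hi - lo), hi - lo, size=dim),
        "rule": "egpso" if i < frac_egpso * n_particles else "spso",
    } for i in range(n_particles)]

def combined_step(swarm, p_best, g_best, rng, spso2011_step, egpso_step):
    """One C-EGPSO-style iteration: dispatch each particle to the update rule
    associated with its label.  Both step functions are placeholders for the
    respective update equations."""
    for particle in swarm:
        step = egpso_step if particle["rule"] == "egpso" else spso2011_step
        particle["x"], particle["v"] = step(
            particle["x"], particle["v"], p_best[particle["id"]], g_best, rng)
    return swarm

# Usage sketch: import numpy as np; rng = np.random.default_rng(0)
# swarm = make_swarm(20, 2, (-5.0, 5.0), rng)
```

Keeping a fixed label per particle, as above, is only one possible design choice; particles could equally switch rules during the run.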

The rest of the paper is organised as follows. The next section introduces the mathematical conditions for obtaining a convergent algorithm, followed by the background required to understand the basis of the proposed approach. Section 4 investigates deterministic aspects of the particles’ trajectory, before the local optimality of EGPSO is proved. Then, in Section 6, a novel algorithm based on the investigations of the previous sections is proposed.

Section snippets

Mathematical theory of convergence

It is often difficult to prove or disprove properties of stochastic optimisers such as PSO because of their stochastic nature. The convergence of stochastic search algorithms has been thoroughly investigated by Solis and Wets [23]. In their study, conditions for the global and local convergence of search algorithms are obtained. Recently, using these conditions, van den Bergh and Engelbrecht examined the convergence of PSO and proved the local convergence of their proposed PSO, called GCPSO [28]. As this paper …
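For orientation, the Solis–Wets conditions can be paraphrased as follows; this is a hedged sketch from the general literature, not a restatement of the paper's exact assumptions, and [23] should be consulted for the precise measure-theoretic formulation.

```latex
% (H1) The algorithm map D never degrades the best objective value found:
\[
  f\bigl(D(z,\xi)\bigr) \le f(z),
  \qquad\text{and if } \xi \in S,\quad f\bigl(D(z,\xi)\bigr) \le f(\xi).
\]
% (H2, global search) Every subset A of S with positive measure is sampled
% with non-vanishing probability:
\[
  \nu(A) > 0 \;\Longrightarrow\; \prod_{k=0}^{\infty}\bigl(1-\mu_k(A)\bigr) = 0,
\]
% which together with (H1) yields
\[
  \lim_{k\to\infty}\Pr\bigl[x_k \in R_{\varepsilon}\bigr] = 1,
\]
% where R_\varepsilon denotes the optimality region.  For local convergence,
% (H2) is replaced by a condition guaranteeing, at every step, a move of at
% least some \gamma > 0 towards R_\varepsilon with probability at least
% \eta > 0.
```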

EGPSO algorithm

This section will introduce the EGPSO algorithm developed in this paper. Since EGPSO is based on the SPSO 2011 [3] and EGT, we will first briefly introduce the SPSO 2011 and EGT. Then, this section will describe how the proposed EGPSO combines the two approaches.

Analysis of the trajectory of the proposed EGPSO

In this section, the trajectory followed by the particles is investigated in order to understand the roles played by the coefficients ω, ϕ1 and ϕ2 described in Sections 3.1.1 and 3.2.3. Since SPSO 2011 uses two different moving rules, depending on whether or not the particle is the best of its neighbourhood, and EGPSO uses the same motion equations as SPSO 2011, the two trajectories will be investigated. Note that, in this framework, we will only investigate the deterministic trajectory …
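To make the deterministic setting concrete, a commonly used simplification in PSO trajectory analyses (not the exact SPSO 2011 rule, which samples within a hypersphere, and therefore only an assumed illustration here) considers a single particle with fixed attractors p and g:

```latex
\[
  v_{t+1} = \omega\, v_t + \phi_1 (p - x_t) + \phi_2 (g - x_t),
  \qquad
  x_{t+1} = x_t + v_{t+1}.
\]
% Eliminating v gives the linear recurrence
%   x_{t+1} = (1+\omega-\phi_1-\phi_2)\,x_t - \omega\,x_{t-1}
%             + \phi_1 p + \phi_2 g,
% whose characteristic equation is
\[
  \lambda^2 - (1 + \omega - \phi_1 - \phi_2)\,\lambda + \omega = 0 .
\]
% The deterministic trajectory converges exactly when both roots lie inside
% the unit circle, i.e.
\[
  |\omega| < 1, \qquad 0 < \phi_1 + \phi_2 < 2\,(1 + \omega).
\]
```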

Local convergence proof of EGPSO

In the previous section, the trajectory of the particles was investigated in order to prove that, whatever the position of a particle X in the solution space S, it will always converge to the barycentre of Xg and Xp. Since Xg and Xp are located in the convex convergent basin, it will always be possible to enter this basin.
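In the simplified deterministic model sketched above, this limit point is simply the fixed point of the recurrence, i.e. the coefficient-weighted barycentre of the two attractors (a paraphrase of the standard analysis, not of the paper's proof):

```latex
\[
  x^{\ast} = \frac{\phi_1\, X_p + \phi_2\, X_g}{\phi_1 + \phi_2},
\]
% obtained by setting x_{t+1} = x_t = x^{\ast} and v_{t+1} = v_t = 0
% in the recurrence of the previous section.
```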

In most stochastic algorithms, it is hard to prove whether or not the algorithm converges to the optimum solution, or at least to a satisfying solution, in the …

The combination of SPSO 2011 with EGPSO: C-EGPSO

In Section 4, the trajectories of the particles in the proposed EGPSO and in SPSO 2011 were investigated. The analysis confirmed that the three coefficients ω, ϕ1 and ϕ2 derived from the EGT concepts guarantee a convergent trajectory and provide superior convergence speed over SPSO 2011. However, superior convergence speed does not guarantee efficient exploration, i.e. global search, of the solution space. When an optimisation algorithm has fast convergence, it often …

Conclusion

In this paper, an enhanced SPSO 2011 algorithm was proposed. The presented method, named EGPSO, is based on the combination of EGT with the SPSO 2011 algorithm designed by Clerc [3]. Integrating the two approaches was first proposed in our previous studies [12], [13], [14], [15], and its performance was empirically investigated; however, no theoretical guarantees on local optimality and convergence properties had been established. This paper proposed a new mapping from the ESS to the three PSO …

References (29)

  • P. Taylor et al., Evolutionary stable strategies and game dynamics, Math. Biosci. (1978)
  • I.C. Trelea, The particle swarm optimisation algorithm: convergence analysis and parameter selection, Inf. Process. Lett. (2003)
  • F. van den Bergh et al., A study of particle swarm optimization particle trajectories, Inf. Sci. (2006)
  • M. Badamchizadeh et al., Applying modified discrete particle swarm optimization algorithm and genetic algorithm for system identification, Computer and Automation Engineering (ICCAE), Singapore, Republic of Singapore (2010)
  • M. Clerc, Back to random topology, Technical report (2007)
  • M. Clerc, Standard Particle Swarm Optimisation, Technical report (2012)
  • M. Clerc et al., The particle swarm-explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. (2002)
  • R. Eberhart et al., Comparison between genetic algorithms and particle swarm optimization, Proceedings of the 7th International Conference on Evolutionary Programming, San Diego, USA (1998)
  • M. Eslami et al., A survey of the state of the art in particle swarm optimization, Res. J. Appl. Sci. Eng. Technol. (2012)
  • R. Hassan et al., A comparison of particle swarm optimization and the genetic algorithm, Technical report (2004)
  • W. Hu et al., A new PSO scheduling simulation algorithm based on an intelligent compensation particle position rounding off, ICNC ’08 Proceedings of the 2008 Fourth International Conference on Natural Computation, Jinan, China (2008)
  • U. Junker, Air traffic flow management with heuristic repair, Knowl. Eng. Rev. (2012)
  • J. Kennedy et al., Particle swarm optimization, Proceedings of IEEE International Conference on Neural Networks, Piscataway, USA (1995)
  • J. Kennedy et al., A discrete binary version of the particle swarm algorithm, The 1997 IEEE International Conference on Systems, Man, and Cybernetics, Orlando, USA (1997)