As a natural extension of PSO, MD PSO may also suffer from a serious drawback: premature convergence to a local optimum. The direct link of information flow between the particles and gbest, which "guides" the rest of the swarm, can result in a loss of diversity. This phenomenon increases the probability of being trapped in local optima [1] and is the main cause of the premature convergence problem, especially when the search space is high dimensional [2] and the problem to be optimized is multimodal [1]. Another reason for premature convergence is that particles are flown through a single point, which is (randomly) determined by the gbest and pbest positions, and this point is not even guaranteed to be a local optimum [3]. Various modifications and PSO variants have been proposed to address this problem, such as [1, 3–28]. As briefly discussed in
Sect. 3.3, such methods usually try to improve diversity among the particles and the search mechanism, either by changing the update equations toward a more diversified version, by adding more randomization to the system (e.g., to particle velocities or positions), or simply by resetting some or all particles randomly when certain conditions are met. On the one hand, most of these variants require additional parameters to accomplish the task, making the algorithms even more parameter dependent. On the other hand, the main problem is in fact the inability of the algorithm to exploit the diversity already available in one or more positional components of a particle. Note that one or more components of any particle may already lie in the close vicinity of the global optimum. This potential is then wasted by the (velocity) update in the next iteration, which changes all components at once. In this chapter, we shall address this drawback of global convergence by developing two efficient techniques. The first one, the so-called Fractional Global Best Formation (FGBF), collects all such promising (or simply the best) components from each particle and fractionally creates an artificial global best candidate, the aGB, which becomes the swarm's global best (GB) particle if it is better than both the previous GB and the just-computed gbest. Note that whenever a better gbest particle or aGB particle emerges, it replaces the current GB particle. Without any additional change, we shall show that FGBF can avoid local optima and thus yield the optimum (or a near-optimum) solution efficiently, even in high-dimensional search spaces. Unfortunately, FGBF is not an entirely generic technique; it must be specifically adapted to the problem at hand (we shall return to this issue later). To address this drawback efficiently, we shall further present two generic approaches, one of which moves gbest efficiently or, simply put, "guides" it with respect to the function (or error surface) on which it resides. The idea behind this is quite simple: since the velocity update equation of gbest is quite poor, we shall replace it with a simple yet powerful stochastic search technique that guides gbest instead. We shall henceforth show that, due to the stochastic nature of this search technique, the likelihood of getting trapped in a local optimum can be significantly decreased.
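The core idea of FGBF described above can be sketched as follows. This is only an illustrative sketch, not the chapter's actual implementation: it assumes an (at least approximately) separable objective, so that a hypothetical helper `dim_fitness(c, j)` can score the j-th positional component on its own (lower is better); both helper names are assumptions for illustration.

```python
def fgbf(swarm, dim_fitness, gbest, fitness):
    """Fractional Global Best Formation (illustrative sketch).

    swarm: list of particle positions (each a list of components).
    dim_fitness(c, j): hypothetical per-dimension score of component c
    in dimension j (lower is better); assumes a separable objective.
    """
    n_dims = len(swarm[0])
    # For every dimension, take the component that scores best across the swarm,
    # fractionally assembling an artificial global best (aGB) candidate.
    aGB = [min((p[j] for p in swarm), key=lambda c: dim_fitness(c, j))
           for j in range(n_dims)]
    # The aGB candidate replaces the current best only if it is actually better.
    return aGB if fitness(aGB) < fitness(gbest) else gbest

# Toy usage on the separable sphere function f(x) = sum(x_j^2):
f = lambda x: sum(c * c for c in x)
df = lambda c, j: c * c
swarm = [[3.0, 0.1], [0.2, 4.0], [5.0, 5.0]]
best = fgbf(swarm, df, gbest=[3.0, 0.1], fitness=f)
# best == [0.2, 0.1]: the best component per dimension, fractionally combined
```

In this toy run, no single particle is near the optimum, yet the first component of the second particle and the second component of the first particle together form a near-optimal aGB, illustrating how FGBF recovers diversity that a full-vector velocity update would waste.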