Information Sciences

Volume 376, 10 January 2017, Pages 71-94
Learning backtracking search optimisation algorithm and its application

https://doi.org/10.1016/j.ins.2016.10.002

Abstract

The backtracking search algorithm (BSA) is a recently proposed evolutionary algorithm (EA) for solving optimisation problems. Its structure is simple, and only a single control parameter must be determined. To improve its convergence performance and extend its application domain, a new algorithm called the learning BSA (LBSA) is proposed in this paper. In this method, the globally best information of the current generation and the historical information in BSA are combined to renew individuals according to a random probability, and the remaining individuals renew their positions by learning from the best individual, the worst individual, and a randomly selected individual of the current generation. The algorithm has two main advantages. First, some individuals update their positions under the guidance of the best individual (the teacher), which speeds up convergence; second, learning from different individuals, and in particular moving away from the worst individual, increases the diversity of the population. To test the performance of LBSA, the benchmark functions of CEC2005 and CEC2014 were evaluated, and the algorithm was also used to train artificial neural networks for chaotic time series prediction and nonlinear system modelling problems. To compare LBSA with other EAs, several comparisons between LBSA and classical algorithms were conducted. The results indicate that LBSA performs well relative to the other algorithms and improves on the performance of BSA.

Introduction

For science and engineering problems that can be converted into optimisation problems, the search for and design of better optimisation algorithms has never ceased. To solve complex problems, such as those with nonlinear, non-differentiable, or non-convex objective functions, researchers have focused on designing novel evolutionary algorithms (EAs) or improving the performance of existing ones. Among these, swarm intelligence algorithms and genetic evolution algorithms play very important roles in the solution of complex optimisation problems. The particle swarm optimisation (PSO) algorithm [12], which mimics the foraging behaviour of bird flocks, has been successfully applied to function optimisation [37], sequence classification problems [36], and power output systems [25]. Variants of PSO, such as the fully informed particle swarm (PSOFIPS) [20], fitness-distance-ratio-based PSO (PSOFDR) [23], and the comprehensive learning particle swarm (CLPSO) [15], have been proposed to improve the performance of PSO and solve optimisation problems. Moreover, hybrid algorithms have been introduced to improve the performance of PSO [13], [21], and discrete methods have been proposed to extend its application domain [2], [8], [10]. Simulating the teaching-learning process in a class, the teaching-learning-based optimisation (TLBO) algorithm [31] has also been proposed. Few parameters must be determined in its updating equations, and the algorithm is easily implemented. To make full use of the advantages of TLBO, two approaches have been taken: one is to improve the efficiency of the algorithm by modifying the updating process or combining it with other EAs, and the other is to extend its application domains. The concept of elitism has been utilised in elitist TLBO (ETLBO), which improves performance by replacing the worst learners in the class with elite individuals [29].
A self-learning TLBO with a newly designed self-learning method has been used to train radial basis function (RBF) neural network models of batteries [41]. Multiple teachers, an adaptive teaching factor, tutorial training, and self-motivated learning have all been used to improve the diversity of the population [30]. These modified TLBO algorithms perform well on some optimisation problems. With respect to application fields, the Lévy mutation operator [7] has been used to help TLBO avoid local optima, and the resulting algorithm has been successfully applied to the IEEE 30-bus test system. An improved TLBO [6] with a direction angle has been introduced to increase the searching ability of its learners, and it has been combined with a double differential evolution algorithm to handle the optimal reactive power dispatch (ORPD) problem. Local and self-learning methods [3] have been used to improve the performance of the original TLBO and show good performance on global optimisation problems. The multi-objective teaching-learning optimiser [19] has been used to handle reactive power systems. The optimisation of plate-fin heat exchangers [22], modern machining processes [28], short-term hydrothermal scheduling problems [32], two-stage thermoelectric coolers [27], and flow-shop rescheduling problems [11], [14] have all been solved by TLBO. A detailed review of TLBO applications can be found in [26]. Simplifying the training process and reducing the number of undetermined parameters of intelligent optimisation algorithms are very important issues. To address them, a novel algorithm called the backtracking search algorithm (BSA) [5] was proposed in 2013 and applied to several engineering optimisation problems. BSA has been used for surface wave analysis [34] and to solve concentric circular antenna array synthesis optimisation problems [9].
BSA and the differential evolution (DE) algorithm have also been combined to solve unconstrained optimisation problems [40]. The oppositional BSA [17] was introduced to solve hyper-chaotic system parameter identification problems. A BSA with three constraint-handling methods [43] was used to solve constrained optimisation problems. Compared to classical EAs, BSA is a very new optimisation algorithm, and improved variants of BSA are relatively few.

As mentioned above, BSA and TLBO have both been shown to perform well on some optimisation problems, in particular because the algorithms are simple and there are few undetermined parameters in their individual updating equations. As young algorithms, however, their performance on complex problems needs to be improved. Their disadvantages are detailed in Sections 2 and 3.

To improve the global performance of BSA, a new method that combines the ideas of TLBO and BSA is presented in this paper. Its main contributions are as follows. The first is the addition of learning guidance to the BSA updating equations to improve the convergence speed of BSA. The second is the integration of three learning methods into one equation, which updates a portion of the individuals according to a random probability. Another contribution is moving away from the worst individual in order to improve the diversity of the population. In contrast to the two function evaluations (FEs) per individual per generation in the TLBO algorithm, the new method requires only one FE per individual per generation. Hence, the computational cost of a generation of the proposed algorithm is less than that of TLBO.
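The two update routes just described can be illustrated with a rough sketch. This is not the paper's exact LBSA updating equation (that appears in Section 4); the function name, the F = 3·randn scale factor, and the uniform weights below are illustrative assumptions only.

```python
import numpy as np

def lbsa_mutation_sketch(P, oldP, best, worst, rng=np.random.default_rng()):
    """Illustrative sketch only, not the authors' exact equations.
    Route (a): BSA-style historical search combined with global-best guidance.
    Route (b): learn from the best and a random peer, move away from the worst."""
    n, d = P.shape
    mutant = np.empty_like(P)
    for i in range(n):
        if rng.random() < rng.random():      # a random probability, as the text describes
            # (a) historical information plus guidance from the global best
            F = 3.0 * rng.standard_normal()  # BSA-style amplitude (assumed here)
            mutant[i] = P[i] + rng.random(d) * (best - P[i]) + F * (oldP[i] - P[i])
        else:
            # (b) learn from the best and a random individual, avoid the worst
            j = rng.integers(n)
            mutant[i] = (P[i]
                         + rng.random(d) * (best - P[i])
                         - rng.random(d) * (worst - P[i])
                         + rng.random(d) * (P[j] - P[i]))
    return mutant
```

Each individual thus needs a single objective evaluation per generation (for the trial vector), which is the cost advantage over TLBO noted above.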

The rest of the paper is organised as follows: the basic BSA and the original TLBO are briefly introduced in Sections 2 and 3, respectively; LBSA is presented in Section 4; the benchmark functions of CEC2005 and CEC2014 are tested to show the effectiveness of the different algorithms in Section 5; applications to two typical nonlinear modelling problems and chaotic time series prediction problems are presented in Section 6; and conclusions are given in Section 7.


Basic BSA

The BSA is a population-based EA that is developed from a DE algorithm, but it is not the same as DE. It contains five processes: initialisation, selection-I, mutation, crossover, and selection-II. There are two populations in BSA: one is the evolution population and the other is the trial population. The trial population is composed of some historical information regarding the evolution population, and a search-direction matrix is built by the two populations to update the positions of the
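The five BSA processes listed above can be sketched in compact form. This follows the common description of the original algorithm (historical population, F = 3·randn amplitude, mix-rate crossover); details such as the boundary-control rule are simplified here, so treat it as a sketch rather than a reference implementation.

```python
import numpy as np

def bsa_generation(P, oldP, fitness, obj, lb, ub, mix_rate=1.0,
                   rng=np.random.default_rng()):
    """One hedged sketch of a BSA generation: selection-I, mutation,
    crossover, and greedy selection-II. Modifies P and fitness in place."""
    n, d = P.shape
    # Selection-I: with a random probability, refresh the historical population
    if rng.random() < rng.random():
        oldP = P.copy()
    oldP = oldP[rng.permutation(n)]          # shuffle historical individuals
    # Mutation: the search-direction matrix (oldP - P), scaled by F = 3*randn
    F = 3.0 * rng.standard_normal()
    mutant = P + F * (oldP - P)
    # Crossover: a binary map decides which dimensions take mutant values
    map_ = np.zeros((n, d), dtype=bool)
    for i in range(n):
        if rng.random() < rng.random():
            k = rng.choice(d, size=max(1, int(np.ceil(mix_rate * rng.random() * d))),
                           replace=False)
            map_[i, k] = True
        else:
            map_[i, rng.integers(d)] = True
    T = np.where(map_, mutant, P)
    T = np.clip(T, lb, ub)                   # boundary control (simplified here)
    # Selection-II: greedy replacement of worse individuals
    fT = np.apply_along_axis(obj, 1, T)
    improved = fT < fitness
    P[improved] = T[improved]
    fitness[improved] = fT[improved]
    return P, oldP, fitness
```

Because selection-II is greedy, no individual's fitness can worsen from one generation to the next.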

Main steps of TLBO

TLBO is also a population-based EA that mimics the philosophy of teaching and learning in a class. There are two main phases in TLBO: the teacher phase and the learner phase. Because only the learning ideas of TLBO are used in the proposed learning BSA (LBSA), only the two main phases of the TLBO algorithm are described in this section.
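The two phases can be sketched as follows. The sketch uses the commonly published TLBO equations (teacher phase with teaching factor TF ∈ {1, 2}, learner phase with pairwise learning and greedy acceptance), not code from this paper.

```python
import numpy as np

def tlbo_generation(X, fitness, obj, rng=np.random.default_rng()):
    """Hedged sketch of one TLBO generation (minimisation).
    Modifies X and fitness in place."""
    n, d = X.shape
    # --- Teacher phase: move toward the best learner, away from the class mean
    teacher = X[np.argmin(fitness)]
    TF = rng.integers(1, 3)                  # teaching factor, 1 or 2
    X_new = X + rng.random((n, d)) * (teacher - TF * X.mean(axis=0))
    f_new = np.apply_along_axis(obj, 1, X_new)
    better = f_new < fitness
    X[better], fitness[better] = X_new[better], f_new[better]
    # --- Learner phase: each learner interacts with a random classmate
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        if fitness[i] < fitness[j]:
            cand = X[i] + rng.random(d) * (X[i] - X[j])
        else:
            cand = X[i] + rng.random(d) * (X[j] - X[i])
        fc = obj(cand)
        if fc < fitness[i]:                  # greedy acceptance
            X[i], fitness[i] = cand, fc
    return X, fitness
```

Note that each individual is evaluated twice per generation (once per phase), which is the two-FE cost that LBSA halves.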

Motivations

The main motivation behind LBSA is to fuse the learning ideas of TLBO into basic BSA to improve its global performance. In the basic BSA algorithm, when the diversity of the population becomes poor in the anaphase of evolution, the evolution ability of the individuals becomes weak. This phenomenon restricts the individual from finding the global optimum. Moreover, practice has shown that learning operators can improve the adaptability of the population, and learning from the best individual can

Simulation experiments

To test the performance of LBSA, three benchmark tests were conducted. To illustrate the effectiveness of the proposed algorithm, three PSO variants (PSOFIPS [20], PSOFDR [23], and CLPSO [15]), two DE variants (jDE [1] and SaDE [24]), three recently proposed TLBO variants (TLBO [31], ETLBO [29], and DGSTLBO [45]), and the basic BSA [5] were used for comparison. The compared algorithms are all related to LBSA in some way, such as the learning ideas in PSO and TLBO, historical information

ANN training

System identification is a method of determining the mathematical model of a system from measurements of its inputs and outputs. An accurate mathematical model of a nonlinear system is sometimes difficult to obtain, and hence building an intelligent model is very important. The L-step-ahead prediction approach uses the earlier states x(t − d + 1), x(t − d + 2), …, x(t) to predict the state x̂(t + L). It can be expressed as follows: x̂(t + L) = f(x(t − d + 1), x(t − d + 2), …, x(t − d + k), …, x(t))
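Constructing training pairs for this L-step-ahead scheme amounts to a sliding-window embedding of the series; a small sketch (the function name embed_series is ours, not the paper's):

```python
import numpy as np

def embed_series(x, d, L):
    """Build (input, target) pairs for L-step-ahead prediction:
    inputs are [x(t-d+1), ..., x(t)], targets are x(t+L)."""
    X, y = [], []
    for t in range(d - 1, len(x) - L):
        X.append(x[t - d + 1 : t + 1])       # window of d past states
        y.append(x[t + L])                    # state L steps ahead
    return np.asarray(X), np.asarray(y)
```

An ANN trained on such pairs then plays the role of the unknown map f, with its weights optimised by the EA under comparison.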

Box-Jenkins chaotic time series

The second chaotic system is the Box-Jenkins gas furnace model. The Box-Jenkins gas furnace dataset was recorded from a combustion process of a methane-air mixture. Here, 296 pairs of data (x(t), u(t)) from t = 1 to 296 were chosen as the training and testing samples, and all the data were normalised. In this dataset, x(t) is the output CO2 concentration and u(t) is the input gas flow rate. The lagged values u(t − 4) and x(t − 1) are commonly used to predict the next state x(t) as follows: x̂(t) = f(u(t − 4), x(t − 1))
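Building the samples for this model is a two-lag variant of the embedding above; a hedged sketch (min-max normalisation is assumed here, since the text does not specify the normalisation method):

```python
import numpy as np

def box_jenkins_pairs(u, x):
    """Build samples (u(t-4), x(t-1)) -> x(t) after min-max normalisation.
    Samples start at the first t where both lags are available."""
    u_n = (u - u.min()) / (u.max() - u.min())
    x_n = (x - x.min()) / (x.max() - x.min())
    inputs, targets = [], []
    for t in range(4, len(x)):
        inputs.append([u_n[t - 4], x_n[t - 1]])
        targets.append(x_n[t])
    return np.asarray(inputs), np.asarray(targets)
```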

Conclusions

In this paper, the basic BSA is extended to LBSA by a learning guidance method that modifies the BSA mutation process. Learning from the best individual is used in the mutation equation to improve the convergence speed of the algorithm, and the diversity of the population is improved by moving away from the worst individual of the current generation. The three learning methods are integrated into one equation. Compared to TLBO, only one FE per individual is needed per iteration. The basic framework of

Acknowledgement

This work is partially supported by the National Natural Science Foundation of China (Grant Nos. 61572224, 61304082 and 41475017) and the National Science Fund for Distinguished Young Scholars (Grant No. 61425009). This work is also partially supported by the Major Project of Natural Science Research in Anhui Province (Grant No. KJ2015ZD36), the Natural Science Foundation in Colleges and Universities of Anhui Province (Grant No. KJ2016A639) and International Science and Technology Cooperation

References (45)

  • H.B. Ouyang et al.

    Hybrid harmony search particle swarm optimization with global dimension selection

Inf. Sci.

    (2016)
  • V. Patel et al.

Optimization of a plate-fin heat exchanger design through an improved multi-objective teaching–learning based optimization (MO-ITLBO) algorithm

    Chem. Eng. Res. Des.

    (2014)
  • R.V. Rao et al.

    Parameters optimization of selected casting processes using teaching–learning-based optimization algorithm

    Appl. Math. Model.

    (2014)
  • R.V. Rao et al.

    An improved teaching–learning-based optimization algorithm for solving unconstrained optimization problems

    Sci. Iran. Trans. D

    (2013)
  • R.V. Rao et al.

    Teaching–learning-based optimization: an optimization method for continuous non-linear large scale problems

Inf. Sci.

    (2012)
  • B. Samanta

    Prediction of chaotic time series using computational intelligence

    Expert Syst. Appl.

    (2011)
  • X. Song et al.

    Backtracking search algorithm for effective and efficient surface wave analysis

    J. Appl. Geophys.

    (2015)
  • C.Y. Tsai et al.

    A PSO-AB classifier for solving sequence classification problems

    Appl. Soft. Comput.

    (2015)
  • H. Wang et al.

    Diversity enhanced particle swarm optimization with neighborhood search

    Inf. Sci.

    (2013)
  • L. Wang et al.

    An improved teaching–learning-based optimization with neighborhood search for applications of ANN

    Neurocomputing

    (2014)
  • C.J. Zhang et al.

    Backtracking Search Algorithm with three constraint handling methods for constrained optimization problems

    Expert Syst. Appl.

    (2015)
  • L. Zhao et al.

    PSO-based single multiplicative neuron model for time series prediction

    Expert Syst. Appl.

    (2009)