
Teaching–learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters

  • Case Study
  • Published:
Journal of The Institution of Engineers (India): Series B

Abstract

This paper presents a teaching–learning-based optimization (TLBO) algorithm for solving parameter identification problems in the design of digital infinite impulse response (IIR) filters. TLBO-based filter modelling is applied in simulations to calculate the parameters of an unknown plant. Unlike other heuristic search algorithms, TLBO requires no algorithm-specific parameters. For comparison, the big bang–big crunch (BB–BC) optimization and particle swarm optimization (PSO) algorithms are also applied to the filter design problem. The unknown filter parameters are treated as a vector to be optimized by these algorithms, which are implemented in MATLAB. Experimental results show that TLBO estimates the filter parameters more accurately than the BB–BC optimization algorithm and converges faster than PSO. TLBO is thus suited to cases where accuracy is more essential than convergence speed.
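The TLBO scheme summarised above can be sketched as follows. This is a minimal illustration of the standard teacher/learner phases described by Rao et al. [14, 15], not the authors' MATLAB implementation; the sphere test function, bounds, population size and iteration budget are illustrative assumptions standing in for the paper's IIR error criterion.

```python
import numpy as np

def tlbo(fitness, dim, n_learners=30, max_iter=100,
         lower=-2.0, upper=2.0, seed=0):
    """Minimise `fitness` with the parameter-less TLBO teacher/learner phases."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, (n_learners, dim))   # the "class" of learners
    f = np.array([fitness(p) for p in x])
    for _ in range(max_iter):
        teacher = x[np.argmin(f)]                      # best learner teaches
        mean = x.mean(axis=0)
        for i in range(n_learners):
            # Teacher phase: shift toward the teacher, away from the class mean.
            tf = rng.integers(1, 3)                    # teaching factor, 1 or 2
            new = np.clip(x[i] + rng.random(dim) * (teacher - tf * mean),
                          lower, upper)
            fn = fitness(new)
            if fn < f[i]:                              # greedy acceptance
                x[i], f[i] = new, fn
            # Learner phase: interact with a random distinct classmate.
            j = rng.integers(n_learners - 1)
            j += j >= i                                # ensure j != i
            step = (x[i] - x[j]) if f[i] < f[j] else (x[j] - x[i])
            new = np.clip(x[i] + rng.random(dim) * step, lower, upper)
            fn = fitness(new)
            if fn < f[i]:
                x[i], f[i] = new, fn
    return x[np.argmin(f)], float(f.min())

# Usage: minimise a simple sphere function in place of the IIR error criterion.
best, best_f = tlbo(lambda p: float(np.sum(p**2)), dim=4)
```

Note that, beyond population size and iteration count (common to all population-based methods), the loop has no tunable coefficients, which is the "algorithm-specific parameter-less" property the abstract refers to.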



References

  1. J. Van de Vegte, Fundamentals of Digital Signal Processing (Prentice Hall, NJ, 2001)

  2. J.T. Tsai, J.H. Chou, T.K. Liu, Optimal design of digital IIR filters by using hybrid Taguchi genetic algorithm. IEEE Trans. Ind. Electron. 53(3), 867–879 (2006)

  3. M. Eftekhari, S.D. Katebi, Extracting compact fuzzy rules for nonlinear system modeling using subtractive clustering, GA and unscented filter. Appl. Math. Model. 32, 2634–2651 (2008)

  4. W.D. Chang, Nonlinear system identification and control using a real-coded genetic algorithm. Appl. Math. Model. 31, 541–550 (2007)

  5. K. Gurleen, R. Kaur, Design of recursive digital filters using multiobjective genetic algorithm. Int. J. Eng. Sci. Technol. (IJEST) 3(7), 5614–5621 (2011)

  6. J. Kennedy, R. Eberhart, Particle swarm optimization, in Proceedings of the IEEE International Conference on Neural Networks (1995), pp. 1942–1948

  7. M.A. Abido, Optimal design of power-system stabilizers using particle swarm optimization. IEEE Trans. Energy Convers. 17, 406–413 (2002)

  8. Y.-L. Lin, W.-D. Chang, J.-G. Hsieh, A particle swarm optimization approach to nonlinear rational filter modeling. Expert Syst. Appl. 34, 1194–1199 (2008)

  9. B. Durmuş, A. Gün, Parameter identification using particle swarm optimization, in International Advanced Technologies Symposium (IATS '11), Elazığ, Turkey, 16–18 May 2011, pp. 188–192

  10. O.K. Erol, I. Eksin, New optimization method: big bang–big crunch. Adv. Eng. Softw. 37, 106–111 (2006)

  11. R. Singh, H.K. Verma, Big bang–big crunch optimization algorithm for linear phase FIR digital filter design. Int. J. Electron. Commun. Comput. Eng. (IJECCE) 3(1), 74–78 (2012)

  12. R. Singh, H.K. Verma, Multi-objective big bang–big crunch optimization algorithm for recursive digital filter design. Int. J. Eng. Innov. Res. (IJEIR) 1(2), 194–200 (2012)

  13. R.V. Rao, V. Patel, An elitist teaching–learning-based optimization algorithm for solving complex constrained optimization problems. Int. J. Ind. Eng. Comput. 3(4), 535–560 (2012)

  14. R.V. Rao, V.J. Savsani, D.P. Vakharia, Teaching–learning-based optimization: a novel optimization method for continuous non-linear large scale problems. Inf. Sci. 183(1), 1–15 (2012)

  15. R.V. Rao, V.J. Savsani, D.P. Vakharia, Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput. Aided Des. 43(3), 303–315 (2011)

  16. R.V. Rao, V.D. Kalyankar, Parameters optimization of advanced machining processes using TLBO algorithm, in Enterprise Project Portfolio Management (EPPM), Singapore, 20–21 September 2011

  17. R.V. Rao, V.D. Kalyankar, Parameter optimization of modern machining processes using teaching–learning-based optimization algorithm. Eng. Appl. Artif. Intell. (2012). doi:10.1016/j.engappai.2012.06.007

  18. T. Niknam, F. Golestaneh, M.S. Sadeghi, θ-multiobjective teaching–learning-based optimization for dynamic economic emission dispatch. IEEE Syst. J. 6(2), 341–352 (2012). doi:10.1109/JSYST.2012.2183276

  19. R.V. Rao, V. Patel, Multi-objective optimization of two stage thermoelectric cooler using a modified teaching–learning-based optimization algorithm. Eng. Appl. Artif. Intell. 26(1), 430–445 (2012)


Author information

Corresponding author

Correspondence to H. K. Verma.

Appendix

Some Other Optimization Methods

PSO Optimization Method

PSO is a robust stochastic optimization technique based on the movement and intelligence of swarms. It was developed in 1995 by Kennedy and Eberhart [6] and was inspired by the social behaviour of organisms such as fish schooling and bird flocking. In PSO, each candidate solution in the search space is called a 'particle'. All particles have fitness values, evaluated by the fitness function to be optimized, and velocities, which direct their movement. Each particle is treated as a point in an N-dimensional space that adjusts its movement according to its own experience as well as the experience of the other particles. Each particle keeps track of the coordinates in the solution space associated with the best solution (fitness) it has achieved so far, called its 'personal best (pbest)'. Another best value tracked by PSO is the best value obtained so far by any particle in the neighbourhood of that particle, called the 'global best (gbest)'.

The basic concept of PSO lies in accelerating each particle toward its pbest and gbest locations, with a random weighted acceleration at each time step. After finding the two best values, each particle modifies its velocity and position using Eqs. (24) and (26), respectively.

$$ V_{i}^{k + 1} = wV_{i}^{k} + c_{1}\,{\text{rand}}_{1}() \times \left( p_{{\text{best}}_{i}} - s_{i}^{k} \right) + c_{2}\,{\text{rand}}_{2}() \times \left( g_{\text{best}} - s_{i}^{k} \right) $$
(24)

where \( V_{i}^{k} \) is the velocity of agent i at iteration k; w, the weighting (inertia) function; \( c_{1} \) and \( c_{2} \), the weighting factors; \( {\text{rand}}_{1}() \) and \( {\text{rand}}_{2}() \), uniformly distributed random numbers between 0 and 1; \( s_{i}^{k} \), the current position of agent i at iteration k; \( p_{{\text{best}}_{i}} \), the \( p_{\text{best}} \) of agent i; and \( g_{\text{best}} \), the \( g_{\text{best}} \) of the group.

Here the weighting function w is expressed as:

$$ w = W_{\text{max}} - \frac{{\left[ {(W_{\text{max}} - W_{\text{min}} ) \times {\text{iter}}} \right]}}{\text{maxIter}} $$
(25)

where \( W_{\text{max}} \) is the initial weight; \( W_{\text{min}} \), the final weight; maxIter, the maximum iteration number; and iter, the current iteration number.

$$ s_{i}^{k + 1} = s_{i}^{k} + V_{i}^{k + 1} . $$
(26)

This optimization method begins with several randomly placed particles at points of the form \( \bar{x} = \left\langle x_{1}, x_{2}, x_{3}, \ldots, x_{n} \right\rangle \), each with a randomly selected velocity \( \bar{v} = \left\langle v_{1}, v_{2}, v_{3}, \ldots, v_{n} \right\rangle \). Three competing forces act on each particle at iteration k: inertia, its own memory, and the group memory, as expressed by Eq. (24). The algorithm relies on the fact that, at first, the particles tend to move around erratically, exploring large areas; they then tend to swarm around the best option found so far.
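The loop defined by Eqs. (24)–(26) can be sketched in Python as follows; the acceleration coefficients, bounds, swarm size and the sphere test function are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, max_iter=100,
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0,
        lower=-2.0, upper=2.0, seed=0):
    """Minimise `fitness` with the PSO updates of Eqs. (24)-(26)."""
    rng = np.random.default_rng(seed)
    s = rng.uniform(lower, upper, (n_particles, dim))   # positions
    v = rng.uniform(-1.0, 1.0, (n_particles, dim))      # velocities
    pbest = s.copy()                                    # personal bests
    pbest_f = np.array([fitness(p) for p in s])
    g = pbest[np.argmin(pbest_f)].copy()                # global best
    for it in range(max_iter):
        # Eq. (25): linearly decreasing inertia weight
        w = w_max - (w_max - w_min) * it / max_iter
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Eq. (24): inertia + own memory + group memory
        v = w * v + c1 * r1 * (pbest - s) + c2 * r2 * (g - s)
        # Eq. (26): position update, kept inside the search space
        s = np.clip(s + v, lower, upper)
        f = np.array([fitness(p) for p in s])
        better = f < pbest_f
        pbest[better], pbest_f[better] = s[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Usage: minimise the sphere function as a stand-in objective.
best, best_f = pso(lambda x: float(np.sum(x**2)), dim=4)
```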

BB–BC Optimization Method

The BB–BC method was developed by Erol and Eksin [10]. It is a novel optimization method that draws on one of the theories of the evolution of the universe: a population-based heuristic search consisting of two phases, the big bang (BB) phase and the big crunch (BC) phase.

Big Bang Phase

Energy dissipation producing disorder and randomness is the main feature of this phase. Like other population-based search algorithms such as GA and PSO, this method generates random points uniformly over the search space in the BB phase.

Big Crunch Phase

In the BC phase, the randomly distributed particles are drawn into order: the uniformly distributed points shrink to a single representative point called the 'centre of mass', represented by \( X_{C} \) and calculated according to:

$$ X_{C} = \frac{\sum\nolimits_{i = 1}^{N} (1/f_{i})\, x_{i}}{\sum\nolimits_{i = 1}^{N} (1/f_{i})} $$
(27)

where \( x_{i} \) is the ith point (design variable vector) generated within a p-dimensional search space in the BB phase; \( f_{i} \), the objective function value of this point; and N, the population size of the BB phase.

After the BC phase, the algorithm generates new candidates, randomly distributed around the 'centre of mass', to serve as the updated particles in the BB phase of the next iteration step; a normal distribution operation is used in every direction. New candidates are created according to:

$$ X_{\text{new}} = X_{C} + \frac{l \times r}{k} $$
(28)

where \( X_{\text{new}} \) stands for a new particle around the 'centre of mass'; l, the upper limit of the search space of the design variables \( x_{i} \); r, a normally distributed random number; and k, the iteration step. The new point \( X_{\text{new}} \) must remain within the limits of the search space.
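A minimal sketch of the BB–BC loop of Eqs. (27)–(28) follows; the sphere test function, population size, iteration budget and bounds are illustrative assumptions, and a small constant guards the \( 1/f_{i} \) weights against division by zero.

```python
import numpy as np

def bbbc(fitness, dim, n_pop=50, max_iter=100,
         lower=-2.0, upper=2.0, seed=0):
    """Minimise `fitness` with the big bang-big crunch cycle of Eqs. (27)-(28)."""
    rng = np.random.default_rng(seed)
    # Initial big bang: points spread uniformly over the search space.
    x = rng.uniform(lower, upper, (n_pop, dim))
    for k in range(1, max_iter + 1):
        f = np.array([fitness(p) for p in x])
        # Eq. (27): fitness-weighted centre of mass (small f -> large weight).
        w = 1.0 / (f + 1e-12)
        xc = (w[:, None] * x).sum(axis=0) / w.sum()
        # Eq. (28): scatter new candidates around the centre of mass with a
        # normally distributed step whose spread shrinks as 1/k.
        r = rng.standard_normal((n_pop, dim))
        x = np.clip(xc + upper * r / k, lower, upper)
    f = np.array([fitness(p) for p in x])
    return x[np.argmin(f)], float(f.min())

# Usage: minimise the sphere function as a stand-in objective.
best, best_f = bbbc(lambda p: float(np.sum(p**2)), dim=4)
```

The `np.clip` call implements the requirement that new points stay within the limits of the search space.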


Cite this article

Singh, R., Verma, H.K. Teaching–learning-based Optimization Algorithm for Parameter Identification in the Design of IIR Filters. J. Inst. Eng. India Ser. B 94, 285–294 (2013). https://doi.org/10.1007/s40031-013-0063-y

