Adapting Self-Adaptive Parameters in Evolutionary Algorithms

Abstract

Lognormal self-adaptation has been used extensively in evolutionary programming (EP) and evolution strategies (ES) to adjust the search step size of each objective variable. However, our previous study (K.-H. Liang, X. Yao, Y. Liu, C. Newton, and D. Hoffman, in Evolutionary Programming VII: Proc. of the Seventh Annual Conference on Evolutionary Programming, edited by V. Porto, N. Saravanan, D. Waagen, and A. Eiben, vol. 1447 of Lecture Notes in Computer Science, Springer: Berlin, pp. 291–300, 1998) showed that such self-adaptation may rapidly drive the search step size to a value far too small to explore the search space any further, causing the search to stagnate. We call this the loss of step size control. A lower bound on the search step size is needed to avoid this problem, but unfortunately its optimal setting is highly problem-dependent. This paper first analyzes, both theoretically and empirically, how step size control is lost. Two schemes of dynamic lower bound are then proposed, enabling the EP algorithm to adjust the lower bound during evolution. Experimental results on a set of benchmark functions demonstrate the effectiveness and efficiency of the dynamic lower bound.
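To make the mechanism concrete, here is a minimal sketch of the standard lognormal self-adaptation step the abstract refers to, written in Python with NumPy. Each objective variable carries its own step size; both are mutated together, and a lower bound is clamped onto the step sizes afterwards. The learning rates follow the usual settings from the ES/EP literature, and the fixed sigma_min default is a placeholder assumption: the paper's contribution is to adjust this bound dynamically during evolution, and those update rules are not given in the abstract.

```python
import numpy as np

def ep_mutate(x, sigma, sigma_min=1e-4, rng=None):
    """One EP mutation with lognormal self-adaptation.

    x, sigma  : objective variables and their per-variable step sizes.
    sigma_min : lower bound on the step sizes; a fixed placeholder here,
                standing in for the paper's dynamic lower-bound schemes.
    """
    rng = rng if rng is not None else np.random.default_rng()
    n = len(x)
    tau = 1.0 / np.sqrt(2.0 * np.sqrt(n))   # per-variable learning rate
    tau_prime = 1.0 / np.sqrt(2.0 * n)      # global learning rate

    # Lognormal update: one global Gaussian draw shared by all variables,
    # plus an independent Gaussian draw per variable.
    new_sigma = sigma * np.exp(tau_prime * rng.standard_normal()
                               + tau * rng.standard_normal(n))

    # Without this clamp, selection can drive some step sizes so close to
    # zero that those coordinates effectively stop moving -- the "loss of
    # step size control" analyzed in the paper.
    new_sigma = np.maximum(new_sigma, sigma_min)

    new_x = x + new_sigma * rng.standard_normal(n)
    return new_x, new_sigma
```

Setting sigma_min to zero and running a full EP loop with selection makes the failure mode easy to observe on the usual benchmark functions: some step-size components collapse after a number of generations and progress along those coordinates stalls, which is why a well-chosen (and, in this paper, dynamically adapted) lower bound matters.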


References

  1. X. Yao, “An overview of evolutionary computation,” Chinese Journal of Advanced Software Research, vol. 3, no. 1, pp. 12-29, 1996.

  2. T. Bäck and H.-P. Schwefel, “An overview of evolutionary algorithms for parameter optimization,” Evolutionary Computation, vol. 1, no. 1, pp. 1-23, 1993.

  3. T. Bäck, Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms, Oxford University Press: New York, 1996.

  4. H. Mühlenbein and D. Schlierkamp-Voosen, “Predictive models for the breeder genetic algorithm I. Continuous parameter optimization,” Evolutionary Computation, vol. 1, no. 1, pp. 25-49, 1993.

  5. X. Yao and Y. Liu, “Fast evolutionary programming,” in Evolutionary Programming V: Proc. of the Fifth Annual Conference on Evolutionary Programming, edited by L. Fogel, P. Angeline, and T. Bäck, MIT Press: Cambridge, MA, pp. 451-460, 1996.

  6. X. Yao, Y. Liu, and G. Lin, “Evolutionary programming made faster,” IEEE Transactions on Evolutionary Computation, vol. 3, no. 2, pp. 82-102, July 1999.

  7. K.-H. Liang, X. Yao, Y. Liu, C. Newton, and D. Hoffman, “An experimental investigation of self-adaptation in evolutionary programming,” in Evolutionary Programming VII: Proc. of the Seventh Annual Conference on Evolutionary Programming, edited by V. Porto, N. Saravanan, D. Waagen, and A. Eiben, vol. 1447 of Lecture Notes in Computer Science, Springer: Berlin, pp. 291-300, 1998.

  8. D. Fogel, “A comparison of evolutionary programming and genetic algorithms on selected constrained optimization problems,” Simulation, vol. 64, no. 6, pp. 397-404, 1995.

  9. H.-P. Schwefel, Evolution and Optimum Seeking, Wiley: New York, 1995.

  10. D. Fogel, L. Fogel, and J. Atmar, “Meta-evolutionary programming,” in Proc. of the 25th Asilomar Conference on Signals, Systems and Computers, edited by R. Chen, Maple Press: San Jose, CA, pp. 540-545, 1991.

  11. K. Chellapilla, “Combining mutation operators in evolutionary programming,” IEEE Transactions on Evolutionary Computation, vol. 2, no. 3, pp. 91-96, 1998.

  12. H.-G. Beyer, “Toward a theory of evolution strategies: Self-adaptation,” Evolutionary Computation, vol. 3, no. 3, pp. 311-348, 1995.

  13. H. Larson, Introduction to Probability Theory and Statistical Inference, 3rd ed., Wiley: New York, 1982.

  14. W. Feller, An Introduction to Probability Theory and Its Applications, vol. 1, 3rd ed., Wiley: New York, 1968.

  15. P. Angeline, “Adaptive and self-adaptive evolutionary computation,” in Computational Intelligence: A Dynamic System Perspective, edited by M. Palaniswami, Y. Attikiouzel, R. Marks, D. Fogel, and T. Fukuda, IEEE Press: Piscataway, NJ, pp. 152-163, 1995.

  16. A. Törn and A. Žilinskas, Global Optimization, vol. 350 of Lecture Notes in Computer Science, Springer-Verlag: New York, 1989.

  17. D. Ackley, A Connectionist Machine for Genetic Hillclimbing, Kluwer: Boston, MA, 1987.

  18. A. Griewank, “Generalized descent for global optimization,” Journal of Optimization Theory and Applications, vol. 34, no. 1, pp. 11-39, 1981.

  19. X. Yao, G. Lin, and Y. Liu, “An analysis of evolutionary algorithms based on neighbourhood and step sizes,” in Evolutionary Programming VI: Proc. of the Sixth Annual Conference on Evolutionary Programming, edited by P. Angeline, R. Reynolds, J. McDonnell, and R. Eberhart, vol. 1213 of Lecture Notes in Computer Science, Springer: Berlin, pp. 297-307, 1997.

Cite this article

Liang, KH., Yao, X. & Newton, C.S. Adapting Self-Adaptive Parameters in Evolutionary Algorithms. Applied Intelligence 15, 171–180 (2001). https://doi.org/10.1023/A:1011286929823
