ABSTRACT
Evolutionary algorithms frequently get stuck on local optima, failing to find the global optimum, when local gradients do not point the search process toward the global optimum. A recent breakthrough called Novelty Search ameliorates this problem by enabling the search process to explore in every direction, encouraging the production of novel, or not-yet-seen, phenotypes (e.g. new robot behaviors). However, a problem with Novelty Search is that it can get lost on "novelty plateaus," wherein mutation and crossover do not immediately produce offspring with novel behaviors (e.g. when a sequence of specific mutations is required to produce a new behavior, but the intermediate mutations are not rewarded because they do not themselves produce novel behaviors). In such cases, Novelty Search and related approaches that reward behavioral diversity can get stuck. Here we introduce a new approach, borrowed from human psychology, that mitigates this problem: encouraging creative thinking. In addition to rewarding novel behavior, we encourage evolving neural networks to "think differently" by rewarding not-yet-seen firing patterns in hidden neurons, an approach we call the "Creative Thinking Approach." We hypothesize that encouraging novel thinking can reward stepping stones toward new behaviors. On a variety of challenging robotic control problems from previous publications we demonstrate that, as problem difficulty increases, adding the Creative Thinking Approach increasingly improves performance over simply encouraging novel behaviors. Our results suggest that the Creative Thinking Approach could help improve the scale and complexity of problems that can be solved by evolutionary algorithms.
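The mechanism described above can be illustrated with a minimal sketch. Novelty Search typically scores an individual by its mean distance to the k nearest neighbors in an archive of previously seen descriptors; the Creative Thinking Approach applies the same kind of measure to recorded hidden-neuron firing patterns as well. The function below is a generic k-nearest-neighbor novelty score, not the paper's exact implementation; the descriptor vectors and the 0.5 weights are illustrative assumptions.

```python
import math

def novelty_score(candidate, archive, k=3):
    """Mean Euclidean distance from `candidate` to its k nearest
    neighbors in `archive` (a list of previously seen vectors).
    Larger scores indicate greater novelty."""
    if not archive:
        return float("inf")  # nothing seen yet: maximally novel
    dists = sorted(math.dist(candidate, a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

# Behavioral novelty: distance between robot behavior descriptors
# (e.g., final (x, y) positions in a navigation task).
behavior_archive = [(0.0, 0.0), (1.0, 0.0)]
b_novel = novelty_score((5.0, 5.0), behavior_archive)

# "Creative thinking" term: the same measure applied to recorded
# hidden-neuron firing patterns (these vectors are hypothetical).
firing_archive = [(0.1, 0.9, 0.2)]
f_novel = novelty_score((0.8, 0.1, 0.7), firing_archive)

# A combined selection reward could weight the two terms; the 0.5
# weights are an assumption, not taken from the paper.
reward = 0.5 * b_novel + 0.5 * f_novel
```

Rewarding the firing-pattern term even when the behavior is unchanged is what lets intermediate mutations on a novelty plateau earn credit as stepping stones.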