Abstract
In standard neuro-evolution, a population of networks is evolved on a task, the best-performing network is selected, and that fixed network is then used to solve future instances of the problem. Networks evolved in this way handle real-time interaction poorly: it is hard to evolve a solution ahead of time that copes effectively with every environment that might arise and with every way someone might interact with it. This paper proposes evolving feedforward neural networks online to create agents that improve their performance through real-time interaction. The approach is demonstrated in a game world where neural-network-controlled individuals play against humans. Through evolution, these individuals learn to react to varying opponents while appropriately balancing conflicting goals. After an initial offline evaluation, the population is allowed to continue evolving online, and its performance improves considerably. The population not only adapts to novel situations brought about by changes in the opponent's strategy and in the game layout, but also improves its performance in situations it has already encountered during offline training. This paper describes an implementation of online evolution and shows that it is a practical method that exceeds the performance of offline evolution alone.
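The online-evolution loop described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the network sizes, mutation rate, and the toy `play_round` scoring function are all assumptions standing in for the actual game-world evaluation against human opponents. The essential point it demonstrates is that evaluation and reproduction are interleaved with play, so the population keeps adapting while the agent is in use.

```python
import math
import random

# Toy dimensions and parameters (assumptions, not from the paper).
N_IN, N_HID, N_OUT = 3, 4, 2
POP_SIZE, MUT_STD = 20, 0.3


def new_genome(rng):
    """A genome is a flat weight list for a one-hidden-layer feedforward net."""
    n = N_IN * N_HID + N_HID * N_OUT
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]


def forward(genome, inputs):
    """Plain feedforward pass with tanh hidden units."""
    w1 = genome[:N_IN * N_HID]
    w2 = genome[N_IN * N_HID:]
    hidden = [math.tanh(sum(inputs[i] * w1[i * N_HID + h] for i in range(N_IN)))
              for h in range(N_HID)]
    return [sum(hidden[h] * w2[h * N_OUT + o] for h in range(N_HID))
            for o in range(N_OUT)]


def play_round(genome, rng):
    """Hypothetical stand-in for one real-time interaction: reward closeness
    to a moving target. In the paper this would be one bout of game play
    against a human opponent."""
    target = [rng.uniform(-1.0, 1.0) for _ in range(N_OUT)]
    out = forward(genome, [rng.uniform(-1.0, 1.0) for _ in range(N_IN)])
    return -sum((o - t) ** 2 for o, t in zip(out, target))


def online_step(population, scores, rng):
    """One steady-state step: score everyone on a fresh round of play, then
    replace the worst individual with a mutated copy of the best."""
    for i, g in enumerate(population):
        scores[i] = play_round(g, rng)
    best = max(range(len(population)), key=scores.__getitem__)
    worst = min(range(len(population)), key=scores.__getitem__)
    population[worst] = [w + rng.gauss(0.0, MUT_STD) for w in population[best]]


rng = random.Random(0)
population = [new_genome(rng) for _ in range(POP_SIZE)]
scores = [0.0] * POP_SIZE
for _ in range(50):  # evolution continues for as long as play goes on
    online_step(population, scores, rng)
```

A steady-state scheme (replace one individual per round rather than a whole generation) is used here because it fits the real-time setting: the population is never taken offline for a full generational sweep.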