ABSTRACT
The idea of using evolutionary computation to train artificial neural networks, or neuroevolution (NE), for reinforcement learning (RL) tasks has now been around for over 20 years. However, as RL tasks become more challenging, the networks required become larger, as do their genomes. Scaling NE to large networks (i.e. tens of thousands of weights) is infeasible using direct encodings that map genes one-to-one to network components. In this paper, we scale up our compressed network encoding, in which network weight matrices are represented indirectly as a set of Fourier-type coefficients, to tasks that require very large networks due to the high dimensionality of their input space. The approach is demonstrated successfully on two reinforcement learning tasks in which the control networks receive visual input: (1) a vision-based version of the octopus-arm control task, requiring networks with over 3,000 weights, and (2) a version of the TORCS driving game, where networks with over 1 million weights are evolved to drive a car around a track using video images from the driver's perspective.
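The core of the compressed encoding can be illustrated with a minimal sketch: a short genome of frequency-domain coefficients is placed on a small grid (low frequencies first, along anti-diagonals) and expanded into a full weight matrix by a 2-D inverse DCT. This is an illustrative reconstruction of the general idea, not the authors' exact implementation; the function names, the coefficient ordering, and the choice of DCT normalization here are assumptions.

```python
import numpy as np

def idct_basis(n_out, n_freq):
    """Orthonormal inverse-DCT basis: rows index output positions, columns index frequencies."""
    n = np.arange(n_out)[:, None]
    k = np.arange(n_freq)[None, :]
    B = np.sqrt(2.0 / n_out) * np.cos(np.pi * (2 * n + 1) * k / (2 * n_out))
    B[:, 0] /= np.sqrt(2.0)  # DC term has half weight in the orthonormal DCT
    return B  # shape (n_out, n_freq)

def decode_weights(genome, shape, n_freq):
    """Map a short coefficient genome to a full weight matrix via a 2-D inverse DCT.

    Coefficients fill an n_freq x n_freq grid along anti-diagonals, so the
    first genes control the lowest-frequency (smoothest) structure of the
    weight matrix; unused grid cells stay zero.
    """
    A = np.zeros((n_freq, n_freq))
    order = [(r, d - r) for d in range(2 * n_freq - 1)
             for r in range(d + 1) if r < n_freq and d - r < n_freq]
    for g, (r, c) in zip(genome, order):
        A[r, c] = g
    rows, cols = shape
    return idct_basis(rows, n_freq) @ A @ idct_basis(cols, n_freq).T

# 8 genes decode into a 50 x 200 weight matrix (10,000 weights):
W = decode_weights([0.5, -1.2, 0.3, 0.8, -0.1, 0.4, 0.2, -0.6], (50, 200), n_freq=4)
```

Evolution then searches the low-dimensional genome instead of the weight space directly: the genome length fixes the search dimensionality, while `shape` can grow with the task's input resolution.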