
2018 | Original Paper | Book Chapter

Co-evolution of Fitness Predictors and Deep Neural Networks

Written by: Włodzimierz Funika, Paweł Koperek

Published in: Parallel Processing and Applied Mathematics

Publisher: Springer International Publishing

Abstract

Deep neural networks have proved to be a very useful and powerful tool with many applications. To achieve good learning results, however, the network architecture has to be carefully designed, which requires considerable experience and knowledge. Using an evolutionary process to develop new network topologies can facilitate this task. The limiting factor is the speed of evaluating a single specimen (a single network architecture), which involves training on a large dataset. In this paper we propose a new approach which uses subsets of the original training set to approximate fitness. We describe a co-evolutionary algorithm and discuss its key elements. Finally, we draw conclusions from experiments and outline plans for future work.
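The co-evolutionary idea summarized in the abstract — cheap fitness predictors (selections of training-set indices) evolving alongside the solution population, so that most specimens are scored on a small subset rather than the full dataset — can be sketched on a toy problem. This is an illustrative sketch only: the quadratic regression stand-in, the population sizes, and all function names are assumptions for the example, not the authors' actual networks or implementation.

```python
import random

random.seed(0)

# Toy stand-in for the expensive full-dataset evaluation: instead of training
# a deep network, a "specimen" is a coefficient pair (a, b) fitting y = x**2,
# and its exact fitness is the mean squared error on the full training set.
FULL_SET = [(x / 10.0, (x / 10.0) ** 2) for x in range(-50, 51)]

def error_on(specimen, subset):
    a, b = specimen
    return sum((a * x * x + b - y) ** 2 for x, y in subset) / len(subset)

def true_fitness(specimen):
    return error_on(specimen, FULL_SET)      # expensive in the real setting

def predicted_fitness(specimen, predictor):
    # A predictor is just a list of indices into the full training set.
    return error_on(specimen, [FULL_SET[i] for i in predictor])

def mutate_specimen(s):
    return tuple(v + random.gauss(0, 0.1) for v in s)

def mutate_predictor(p, size):
    q = list(p)
    q[random.randrange(size)] = random.randrange(len(FULL_SET))
    return q

def coevolve(generations=100, pop=20, pred_pop=10, subset_size=8):
    specimens = [(random.uniform(-2, 2), random.uniform(-2, 2))
                 for _ in range(pop)]
    predictors = [random.sample(range(len(FULL_SET)), subset_size)
                  for _ in range(pred_pop)]
    trainers = specimens[:5]  # few specimens evaluated exactly each round
    for _ in range(generations):
        # Predictor fitness: how closely the subset error tracks the exact
        # error on the trainer specimens.
        def pred_quality(p):
            return sum(abs(predicted_fitness(t, p) - true_fitness(t))
                       for t in trainers)
        predictors.sort(key=pred_quality)
        predictors = predictors[:pred_pop // 2] + [
            mutate_predictor(p, subset_size)
            for p in predictors[:pred_pop // 2]]
        best_pred = predictors[0]
        # Specimen fitness is approximated via the best predictor only.
        specimens.sort(key=lambda s: predicted_fitness(s, best_pred))
        specimens = specimens[:pop // 2] + [
            mutate_specimen(s) for s in specimens[:pop // 2]]
        trainers = specimens[:5]
    return specimens[0]

best = coevolve()
```

In the paper's setting the specimen would be a network architecture and `true_fitness` a full training run, which is exactly the cost the subset-based predictors are meant to avoid.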


Metadata
Title: Co-evolution of Fitness Predictors and Deep Neural Networks
Written by: Włodzimierz Funika, Paweł Koperek
Copyright year: 2018
DOI: https://doi.org/10.1007/978-3-319-78024-5_48