
2021 | Original Paper | Book Chapter

Deep Optimisation: Multi-scale Evolution by Inducing and Searching in Deep Representations

Authors: Jamie Caldwell, Joshua Knowles, Christoph Thies, Filip Kubacki, Richard Watson

Published in: Applications of Evolutionary Computation

Publisher: Springer International Publishing


Abstract

The ability of evolutionary processes to innovate and scale up over long periods of time, observed in nature, remains a central mystery in evolutionary biology, and a challenge for algorithm designers to emulate and explain in evolutionary computation (EC). The Major Transitions in Evolution is a compelling theory that explains evolvability through a multi-scale process whereby individuality (and hence selection and variation) is continually revised by the formation of associations between formerly independent entities, a process still not fully explored in EC. Deep Optimisation (DO) is a new type of model-building optimization algorithm (MBOA) that exploits deep learning methods to enable multi-scale optimization. DO uses an autoencoder model to induce a multi-level representation of solutions, capturing the relationships between the lower-level units that contribute to the quality of a solution. Variation and selection are then performed within the induced representations, causing model-informed changes to multiple solution variables simultaneously. Here, we first show that DO has impressive performance compared with other leading MBOAs (and other rival methods) on multiple knapsack problems, a standard combinatorial optimization problem of general interest. Going deeper, we then carry out a detailed investigation to understand the differences between DO and other MBOAs, identifying key problem characteristics where other MBOAs are afflicted by exponential running times, and DO is not. This study serves to concretize our understanding of the Major Transitions theory, and why that leads to evolvability, and also provides a strong motivation for further investigation of deep learning methods in optimization.
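The abstract's core mechanism — inducing a compressed representation of solutions and then varying solutions in that representation, so that one model-level change rewrites many solution variables at once — can be illustrated with a minimal sketch. Everything below is hypothetical: the bit-counting fitness stands in for a real combinatorial objective, and the fixed random linear encoder/decoder stands in for the autoencoder that DO would actually train on good solutions. Only the search loop's shape (mutate in latent space, decode, accept if no worse) reflects the idea described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fitness on binary strings (a stand-in for a real combinatorial objective).
def fitness(x):
    return int(x.sum())

n, k = 16, 4  # n solution bits, k latent units

# Hypothetical "learned" linear autoencoder. In DO proper these weights would
# be trained on a population of good solutions; here they are fixed at random
# purely to show the structure of the search loop.
W = rng.normal(size=(n, k))

def encode(x):
    return x @ W                          # n bits -> k latent values

def decode(h):
    return (h @ W.T > 0.5).astype(int)    # k latent values -> n bits

x = rng.integers(0, 2, size=n)            # random initial solution
f0 = fitness(x)

# Model-informed variation: perturbing ONE latent unit changes MANY solution
# bits simultaneously after decoding; keep the move if fitness does not drop.
for _ in range(200):
    h = encode(x).astype(float)
    h[rng.integers(k)] += rng.normal()    # single mutation at the higher scale
    x_new = decode(h)
    if fitness(x_new) >= fitness(x):
        x = x_new
```

The contrast with bit-flip mutation is the point: the latent move is a coordinated multi-variable change, which is the "multi-scale" variation the abstract attributes to DO.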


Metadata
DOI: https://doi.org/10.1007/978-3-030-72699-7_32
