
2019 | Original Paper | Book Chapter

Reinforcement Learning Experiments Running Efficiently over Wildly Heterogeneous Computer Farms

Authors: Borja Fernandez-Gauna, Xabier Larrucea, Manuel Graña

Published in: Hybrid Artificial Intelligent Systems

Publisher: Springer International Publishing

Abstract

Researchers working with Reinforcement Learning typically face issues that severely hinder the efficiency of their research workflow. These include high computational requirements, numerous hyper-parameters that must be set manually, and a high probability of repeated failures before success. In this paper, we present some of the challenges our research has faced and how we have successfully tackled them in an innovative software platform. We provide benchmarking results that show the improvements introduced by the new platform.


Footnotes
3
We use the terms experiment and experimental unit to distinguish two different concepts. The former refers to a configuration containing multi-valued hyper-parameters that requires several executions to finish, whereas the latter refers to each of the single-valued configuration instances produced by combining the values of an experiment.
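The distinction can be illustrated as a Cartesian-product expansion of the hyper-parameter grid. The following is a minimal sketch; the function and parameter names are ours for illustration and do not appear in the paper:

```python
from itertools import product

def expand_experiment(experiment):
    """Expand a multi-valued experiment configuration into its
    single-valued experimental units (Cartesian product of the values)."""
    names = sorted(experiment)
    combos = product(*(experiment[name] for name in names))
    return [dict(zip(names, combo)) for combo in combos]

# Hypothetical experiment: two hyper-parameters, each with two candidate values.
experiment = {"learning_rate": [0.1, 0.01], "discount": [0.9, 0.99]}
units = expand_experiment(experiment)
# 2 x 2 values -> 4 experimental units, each a single-valued configuration
# that can be dispatched to a worker machine independently.
```

Each experimental unit is then a self-contained job, which is what makes distributing them over a heterogeneous farm straightforward.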
 
4
Exp-B requires Microsoft’s Cognitive Toolkit, which only runs on x64 platforms; this is why Exp-B cannot run on 32-bit Windows machines.
 
Metadata
Title
Reinforcement Learning Experiments Running Efficiently over Wildly Heterogeneous Computer Farms
Authors
Borja Fernandez-Gauna
Xabier Larrucea
Manuel Graña
Copyright Year
2019
DOI
https://doi.org/10.1007/978-3-030-29859-3_64
