
2015 | Original Paper | Book Chapter

Adaptive Playouts in Monte-Carlo Tree Search with Policy-Gradient Reinforcement Learning

Authors: Tobias Graf, Marco Platzner

Published in: Advances in Computer Games

Publisher: Springer International Publishing

Abstract

Monte-Carlo Tree Search evaluates positions with the help of a playout policy. If the playout policy misevaluates a position, the tree search can have difficulty finding the correct move because of the large search space. This paper explores adaptive playout policies, which improve the playout policy during the tree search. Using policy-gradient reinforcement learning techniques, we optimize the playout policy to give better evaluations. We tested the algorithm in computer Go and measured an increase in playing strength of more than 100 Elo. The resulting program was able to handle difficult test cases that are known to pose problems for Monte-Carlo Tree Search.
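
The following sketch illustrates the general idea of adapting a playout policy with a REINFORCE-style policy-gradient step during search. It is a minimal illustration under assumed details, not the authors' implementation: the class name AdaptivePlayoutPolicy, the binary feature representation of moves, and the learning rate alpha are hypothetical, and feature extraction as well as the game rules are assumed to be provided elsewhere.

import math
import random

class AdaptivePlayoutPolicy:
    """Softmax playout policy over binary move features, adapted online."""

    def __init__(self, num_features, alpha=0.01):
        self.theta = [0.0] * num_features   # feature weights, adapted per search
        self.alpha = alpha                  # learning rate for the gradient step

    def _score(self, features):
        # Linear score of a move given its active (binary) feature indices.
        return sum(self.theta[f] for f in features)

    def sample(self, moves_features):
        # Softmax selection over candidate moves; returns the index of the
        # chosen move and the probability vector (needed for the gradient).
        scores = [self._score(f) for f in moves_features]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        probs = [e / z for e in exps]
        r, acc = random.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i, probs
        return len(probs) - 1, probs

    def update(self, trajectory, reward):
        # REINFORCE-style update: for each decision in the playout, move the
        # weights toward the chosen move's features and away from the expected
        # features under the policy, scaled by the playout outcome.
        for moves_features, chosen, probs in trajectory:
            grad = [0.0] * len(self.theta)
            for f in moves_features[chosen]:
                grad[f] += 1.0
            for feats, p in zip(moves_features, probs):
                for f in feats:
                    grad[f] -= p
            for k, g in enumerate(grad):
                self.theta[k] += self.alpha * reward * g

In such a scheme, each playout records the candidate moves' features, the chosen move, and the selection probabilities; when the playout terminates, update() is called with the outcome (for example +1 for a win and -1 for a loss from the learner's perspective), so that later playouts within the same search prefer moves that correlated with wins.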

Metadata
Title
Adaptive Playouts in Monte-Carlo Tree Search with Policy-Gradient Reinforcement Learning
Authors
Tobias Graf
Marco Platzner
Copyright Year
2015
DOI
https://doi.org/10.1007/978-3-319-27992-3_1