
2018 | OriginalPaper | Chapter

Advice-Based Exploration in Model-Based Reinforcement Learning

Authors: Rodrigo Toro Icarte, Toryn Q. Klassen, Richard Anthony Valenzano, Sheila A. McIlraith

Published in: Advances in Artificial Intelligence

Publisher: Springer International Publishing


Abstract

Convergence to an optimal policy using model-based reinforcement learning can require significant exploration of the environment. In some settings such exploration is costly or even impossible, for instance when no simulator is available or the state space is prohibitively large. In this paper we examine the use of advice to guide the search for an optimal policy. To this end we propose a rich language for providing advice to a reinforcement learning agent. Unlike constraints, which can eliminate optimal policies, advice offers guidance for exploration while preserving the guarantee of convergence to an optimal policy. Experimental results on deterministic grid worlds demonstrate the potential for good advice to reduce the amount of exploration required to learn a satisficing or optimal policy, while maintaining robustness in the face of incomplete or misleading advice.
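To make the abstract's claim concrete, below is a minimal sketch of advice-biased exploration in an R-MAX-style optimistic model-based learner on a deterministic grid world. It is an illustration, not the authors' implementation: the paper expresses advice in a rich temporal-logic-like language, which is collapsed here into a hypothetical advice_score(state, action) heuristic. The essential property shown is that advice only reorders which unknown actions are tried first; it never removes any action from consideration, which is why convergence survives incomplete or misleading advice.

ACTIONS = ["up", "down", "left", "right"]

class AdviceGuidedRMax:
    """Sketch of an R-MAX-style agent whose exploration of unknown
    (state, action) pairs is ordered by an advice heuristic. The
    advice_score interface is a hypothetical simplification of the
    paper's temporal-logic advice language."""

    def __init__(self, advice_score, gamma=0.95, r_max=1.0):
        self.advice_score = advice_score  # hypothetical: (state, action) -> float
        self.gamma = gamma
        self.r_max = r_max
        self.model = {}    # (s, a) -> (s', r); exact, since the world is deterministic
        self.values = {}   # s -> current value estimate

    def optimistic(self):
        # Optimistic value for anything not yet tried: as if every
        # unknown action earned maximal reward forever.
        return self.r_max / (1.0 - self.gamma)

    def q(self, state, action):
        if (state, action) not in self.model:
            return self.optimistic()
        nxt, r = self.model[(state, action)]
        return r + self.gamma * self.values.get(nxt, self.optimistic())

    def choose_action(self, state):
        unknown = [a for a in ACTIONS if (state, a) not in self.model]
        if unknown:
            # Advice merely breaks ties among unknown actions. All of
            # them keep their optimistic value, so each is eventually
            # tried even when the advice is misleading.
            return max(unknown, key=lambda a: self.advice_score(state, a))
        return max(ACTIONS, key=lambda a: self.q(state, a))

    def observe(self, state, action, reward, next_state, sweeps=50):
        self.model[(state, action)] = (next_state, reward)
        # Re-plan: value iteration over the states seen so far.
        states = {s for (s, _) in self.model}
        for _ in range(sweeps):
            for s in states:
                self.values[s] = max(self.q(s, a) for a in ACTIONS)

# Toy usage with hypothetical advice "prefer moving right", a stand-in
# for the kind of guidance the paper's advice language can express.
agent = AdviceGuidedRMax(lambda s, a: 1.0 if a == "right" else 0.0)
act = agent.choose_action((0, 0))          # "right" on the first visit
agent.observe((0, 0), act, 0.0, (1, 0))    # record the deterministic outcome

Under this scheme, good advice front-loads exploration toward promising regions, while bad advice only permutes the order in which unknown pairs are visited, leaving the optimism-driven convergence guarantee intact.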


DOI: https://doi.org/10.1007/978-3-319-89656-4_6
