1992 | Original Paper | Book Chapter

Technical Note

Q-Learning

Authors: Christopher J. C. H. Watkins, Peter Dayan

Published in: Reinforcement Learning

Publisher: Springer US


Q-learning (Watkins, 1989) is a simple way for agents to learn how to act optimally in controlled Markovian domains. It amounts to an incremental method for dynamic programming which imposes limited computational demands. It works by successively improving its evaluations of the quality of particular actions at particular states. This paper presents and proves in detail a convergence theorem for Q-learning based on that outlined in Watkins (1989). We show that Q-learning converges to the optimum action-values with probability 1 so long as all actions are repeatedly sampled in all states and the action-values are represented discretely. We also sketch extensions to the cases of non-discounted, but absorbing, Markov environments, and where many Q values can be changed on each iteration, rather than just one.
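For readers who want to see the incremental update in concrete form, the following is a minimal sketch of tabular Q-learning consistent with the abstract's description. The ChainMDP environment, the epsilon-greedy exploration policy, and all parameter values are illustrative assumptions, not part of the original note; the note's convergence conditions (discrete action-value representation and repeated sampling of all actions in all states) are reflected in the comments.

import random

def q_learning(env, n_states, n_actions,
               episodes=5000, alpha=0.1, gamma=0.95, epsilon=0.1):
    # Discrete (tabular) action-value representation, as the
    # convergence theorem requires.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            # Epsilon-greedy exploration keeps every action repeatedly
            # sampled in every state -- the other convergence condition.
            if random.random() < epsilon:
                a = random.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = env.step(s, a)
            # One-step incremental update toward the Bellman target:
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            target = r + (0.0 if done else gamma * max(Q[s2]))
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

class ChainMDP:
    """Hypothetical 5-state chain: action 1 moves right, action 0 moves
    left; reaching the last state yields reward 1 and ends the episode."""
    def __init__(self, n=5):
        self.n = n
    def reset(self):
        return 0
    def step(self, s, a):
        s2 = min(s + 1, self.n - 1) if a == 1 else max(s - 1, 0)
        done = s2 == self.n - 1
        return s2, (1.0 if done else 0.0), done

env = ChainMDP()
Q = q_learning(env, n_states=5, n_actions=2)
print([max(row) for row in Q])  # approximates the optimal state values

In this toy run the learned values approach gamma raised to the number of steps remaining to the goal, which is what the optimum action-values predict for this chain.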

Metadata
Title
Technical Note
Authors
Christopher J. C. H. Watkins
Peter Dayan
Copyright Year
1992
Publisher
Springer US
DOI
https://doi.org/10.1007/978-1-4615-3618-5_4
