
1992 | Original Paper | Book Chapter

Practical Issues in Temporal Difference Learning

Author: Gerald Tesauro

Published in: Reinforcement Learning

Publisher: Springer US


This paper examines whether temporal difference methods for training connectionist networks, such as Sutton’s TD(λ) algorithm, can be successfully applied to complex real-world problems. A number of important practical issues are identified and discussed from a general theoretical perspective. These practical issues are then examined in the context of a case study in which TD(λ) is applied to learning the game of backgammon from the outcome of self-play. This is apparently the first application of this algorithm to a complex non-trivial task. It is found that, with zero knowledge built in, the network is able to learn from scratch to play the entire game at a fairly strong intermediate level of performance, which is clearly better than conventional commercial programs, and which in fact surpasses comparable networks trained on a massive human expert data set. This indicates that TD learning may work better in practice than one would expect based on current theory, and it suggests that further analysis of TD methods, as well as applications in other complex domains, may be worth investigating.
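To make the method concrete, the following is a minimal sketch of Sutton's TD(λ) update applied to a small connectionist evaluator trained along one episode, in the spirit of the self-play setup described above. The toy network, the random "game" states, and all hyperparameter values (ALPHA, LAM, layer sizes) are illustrative assumptions, not the paper's actual backgammon configuration.

```python
# Sketch of TD(lambda) for a two-layer evaluator network (assumed setup).
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 8, 4            # assumed toy dimensions
W1 = rng.normal(0, 0.1, (N_HID, N_IN))
W2 = rng.normal(0, 0.1, N_HID)
ALPHA, LAM = 0.1, 0.7         # learning rate and trace decay (assumed)

def forward(x):
    """Network output Y(x) in (0, 1): the estimated chance of winning."""
    h = np.tanh(W1 @ x)
    y = 1.0 / (1.0 + np.exp(-(W2 @ h)))
    return y, h

def gradients(x, y, h):
    """Gradients of the scalar output Y with respect to W1 and W2."""
    dy = y * (1.0 - y)                      # sigmoid derivative
    g2 = dy * h
    g1 = np.outer(dy * W2 * (1.0 - h**2), x)
    return g1, g2

def td_lambda_episode(states, outcome):
    """Apply the TD(lambda) weight change along one self-play episode:

        delta_w_t = ALPHA * (Y_{t+1} - Y_t) * sum_k LAM**(t-k) * grad_w Y_k

    implemented incrementally with eligibility traces e1, e2.
    """
    e1, e2 = np.zeros_like(W1), np.zeros_like(W2)
    x_prev = states[0]
    y_prev, h_prev = forward(x_prev)
    for t in range(1, len(states) + 1):
        g1, g2 = gradients(x_prev, y_prev, h_prev)
        e1 = LAM * e1 + g1                  # accumulate eligibility traces
        e2 = LAM * e2 + g2
        if t < len(states):
            y_next, h_next = forward(states[t])
        else:
            y_next = outcome                # terminal reward: 1 = win, 0 = loss
        delta = y_next - y_prev             # TD error
        W1 += ALPHA * delta * e1
        W2 += ALPHA * delta * e2
        if t < len(states):
            x_prev, y_prev, h_prev = states[t], y_next, h_next

# One fake "game": a random sequence of states ending in a win.
episode = [rng.normal(size=N_IN) for _ in range(10)]
td_lambda_episode(episode, outcome=1.0)
```

Note that the only training signal is the final outcome propagated backward through the chain of TD errors; no expert labels or built-in knowledge are used, which is the zero-knowledge self-play setting the abstract describes.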

Metadata
Title: Practical Issues in Temporal Difference Learning
Author: Gerald Tesauro
Copyright Year: 1992
Publisher: Springer US
DOI: https://doi.org/10.1007/978-1-4615-3618-5_3
