2012 | Original Paper | Book Chapter
Actor-Critic Algorithm Based on Incremental Least-Squares Temporal Difference with Eligibility Trace
Authors: Yuhu Cheng, Huanting Feng, Xuesong Wang
Published in: Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence
Publisher: Springer Berlin Heidelberg
Compared with value-function-based reinforcement learning (RL) methods, policy gradient RL methods have better convergence properties, but the large variance of their policy gradient estimates degrades learning performance. To improve both the convergence speed of policy gradient RL methods and the precision of the gradient estimation, an Actor-Critic (AC) learning algorithm based on incremental least-squares temporal difference with eligibility trace (iLSTD(λ)) is proposed, exploiting the characteristics of the AC framework, a function approximator, and the iLSTD(λ) algorithm. The Critic estimates the value function with the iLSTD(λ) algorithm, while the Actor updates the policy parameter with a regular gradient. Simulation results on a 10×10 grid world show that the AC algorithm based on iLSTD(λ) achieves both fast convergence and accurate gradient estimation.
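The structure the abstract describes can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the Critic accumulates the LSTD statistics A and b incrementally and, at each step, updates only the value-weight dimensions with the largest residual μ = b − Aθ (the core iLSTD idea), while the Actor takes a regular policy-gradient step on a softmax policy using the TD error as the critique. All class names, feature encodings, step sizes, and the per-coordinate normalization are assumptions for the sketch.

```python
import numpy as np

class ILSTDLambdaActorCritic:
    """Illustrative Actor-Critic with an iLSTD(lambda) Critic (not the paper's code)."""

    def __init__(self, n_features, n_actions, gamma=0.95, lam=0.8,
                 alpha=0.1, beta=0.01, m_dims=1, seed=0):
        rng = np.random.default_rng(seed)
        self.gamma, self.lam = gamma, lam
        self.alpha, self.beta = alpha, beta           # Critic / Actor step sizes
        self.m = m_dims                               # dims updated per step (iLSTD)
        self.A = np.zeros((n_features, n_features))   # incremental LSTD matrix
        self.b = np.zeros(n_features)                 # incremental LSTD vector
        self.theta = np.zeros(n_features)             # value-function weights
        self.z = np.zeros(n_features)                 # eligibility trace
        self.w = 0.01 * rng.standard_normal((n_actions, n_features))  # policy params

    def policy(self, phi):
        prefs = self.w @ phi
        p = np.exp(prefs - prefs.max())               # numerically stable softmax
        return p / p.sum()

    def act(self, phi, rng):
        return rng.choice(len(self.w), p=self.policy(phi))

    def step(self, phi, a, r, phi_next, done):
        # --- Critic: iLSTD(lambda) update ---
        self.z = self.gamma * self.lam * self.z + phi
        d = phi - (0.0 if done else self.gamma) * phi_next
        self.A += np.outer(self.z, d)
        self.b += self.z * r
        mu = self.b - self.A @ self.theta             # summed TD-update vector
        for i in np.argsort(-np.abs(mu))[:self.m]:    # descend along largest residuals
            self.theta[i] += self.alpha * mu[i] / max(1.0, abs(self.A[i, i]))
            mu = self.b - self.A @ self.theta         # refresh residual after each dim
        # --- Actor: regular policy-gradient update driven by the TD error ---
        v = self.theta @ phi
        v_next = 0.0 if done else self.theta @ phi_next
        delta = r + self.gamma * v_next - v
        grad = -np.outer(self.policy(phi), phi)       # d log pi(a|s) / d w, softmax
        grad[a] += phi
        self.w += self.beta * delta * grad
        if done:
            self.z[:] = 0.0                           # reset trace between episodes
```

A toy usage, cycling over four one-hot states with a reward on return to the start, runs the interleaved Critic/Actor updates end to end; the division by the diagonal of A is one simple way to keep the per-coordinate iLSTD step well scaled.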