2012 | OriginalPaper | Chapter
Batch, Off-Policy and Model-Free Apprenticeship Learning
Authors: Edouard Klein, Matthieu Geist, Olivier Pietquin
Published in: Recent Advances in Reinforcement Learning
Publisher: Springer Berlin Heidelberg
This paper addresses the problem of apprenticeship learning, that is, learning control policies from demonstrations by an expert. An efficient framework for this is inverse reinforcement learning (IRL). Based on the assumption that the expert maximizes a utility function, IRL aims at learning the underlying reward from example trajectories. Many IRL algorithms assume that the reward function is linearly parameterized and rely on the computation of some associated feature expectations, which is usually done through Monte Carlo simulation. However, this requires full trajectories for the expert policy as well as at least a generative model for intermediate policies. In this paper, we introduce a temporal difference method, namely LSTD-μ, to compute these feature expectations. This extends apprenticeship learning to a batch and off-policy setting.
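To make the role of feature expectations concrete, the sketch below contrasts the usual Monte Carlo estimator with a batch, LSTD-style estimator of the kind the abstract describes. It is a minimal illustration under stated assumptions, not the paper's implementation: the basis ψ, the regularizer, and all names are hypothetical, and the estimate is obtained by applying the standard LSTD fixed-point solution component-wise, treating each reward feature as a pseudo-reward.

```python
import numpy as np

def mc_feature_expectations(trajectories, phi, gamma=0.95):
    """Monte Carlo estimate of mu^pi = E[sum_t gamma^t phi(s_t) | pi].

    Requires full trajectories sampled under the policy of interest,
    which is exactly the assumption the temporal-difference approach
    removes. (Sketch; names are illustrative.)
    """
    returns = [
        sum(gamma ** t * phi(s) for t, s in enumerate(traj))
        for traj in trajectories
    ]
    return np.mean(returns, axis=0)

def lstd_mu(psi, psi_next, phi, gamma=0.95, reg=1e-6):
    """Batch estimate of feature expectations, LSTD applied per feature.

    Each component of the reward feature vector phi is treated as a
    pseudo-reward, so mu^pi(s) is approximated by Theta.T @ psi(s).

    psi      : (N, p) basis features of sampled states s_i
    psi_next : (N, p) basis features of successors s'_i under pi
    phi      : (N, k) reward features phi(s_i), the vector "rewards"
    """
    A = psi.T @ (psi - gamma * psi_next)   # standard LSTD matrix
    B = psi.T @ phi                        # one LSTD target per reward feature
    Theta = np.linalg.solve(A + reg * np.eye(A.shape[0]), B)
    return Theta                           # (p, k); mu_hat(s) = Theta.T @ psi(s)
```

Because each column of B corresponds to one reward feature, a single linear solve yields all k components of the feature expectation at once, from a fixed batch of transitions rather than fresh rollouts. In a fully off-policy setting one would, as in LSTD-Q, parameterize over state-action pairs so that successor features use the evaluated policy's action; the sketch keeps the state-only form for brevity.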