2003 | Original Paper | Book Chapter

Graph Kernels and Gaussian Processes for Relational Reinforcement Learning

Authors: Thomas Gärtner, Kurt Driessens, Jan Ramon

Published in: Inductive Logic Programming

Publisher: Springer Berlin Heidelberg


Relational reinforcement learning is a Q-learning technique for relational state-action spaces. It aims to enable agents to learn how to act in an environment that has no natural representation as a tuple of constants. In this case, the learning algorithm used to approximate the mapping between state-action pairs and their so-called Q(uality)-values not only has to be very reliable, it also has to be able to handle the relational representation of state-action pairs.

In this paper we investigate the use of Gaussian processes to approximate the quality of state-action pairs. In order to employ Gaussian processes in a relational setting we use graph kernels as the covariance function between state-action pairs. Experiments conducted in the blocks world show that Gaussian processes with graph kernels can compete with, and often improve on, regression trees and instance-based regression as a generalisation algorithm for relational reinforcement learning.
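The core idea of the abstract can be illustrated with a small sketch: a Gaussian-process regressor whose covariance between two state-action pairs is computed by a graph kernel on their graph representations. This is a minimal sketch, not the authors' implementation: walk_kernel below is a simplified, illustrative stand-in for the walk-based graph kernels the chapter discusses (it counts walks up to a fixed depth in the direct product graph), gp_q_values applies the standard Gaussian-process posterior-mean formula, and the function names, parameters, and toy graphs are assumptions chosen only for illustration.

```python
import numpy as np

def walk_kernel(A1, A2, depth=3):
    # Illustrative stand-in for a walk-based graph kernel: counts all walks of
    # length 1..depth in the direct product of the two graphs, whose adjacency
    # matrix is the Kronecker product of the input adjacency matrices.
    Ax = np.kron(A1, A2)
    power = np.eye(Ax.shape[0])
    value = 0.0
    for _ in range(depth):
        power = power @ Ax
        value += power.sum()
    return value

def gp_q_values(train_graphs, q_targets, test_graphs, noise=0.1, depth=3):
    # Gaussian-process regression of Q-values: the graph kernel plays the role
    # of the covariance function between state-action pairs.  Returns the
    # posterior mean Q-value for each test state-action graph.
    K = np.array([[walk_kernel(a, b, depth) for b in train_graphs]
                  for a in train_graphs])
    k_star = np.array([[walk_kernel(a, b, depth) for b in train_graphs]
                       for a in test_graphs])
    alpha = np.linalg.solve(K + noise * np.eye(len(train_graphs)),
                            np.asarray(q_targets, dtype=float))
    return k_star @ alpha

# Toy usage: two hypothetical state-action pairs encoded as adjacency matrices,
# with made-up Q-value targets.
g1 = np.array([[0, 1], [1, 0]])
g2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(gp_q_values([g1, g2], [1.0, 0.5], [g2]))
```

In this sketch the kernel matrix over training state-action pairs takes the place of a feature-based covariance, which is what allows the Gaussian process to operate on relational (graph-structured) inputs without flattening them into a tuple of constants.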

Metadata
Title
Graph Kernels and Gaussian Processes for Relational Reinforcement Learning
Authors
Thomas Gärtner
Kurt Driessens
Jan Ramon
Copyright Year
2003
Publisher
Springer Berlin Heidelberg
DOI
https://doi.org/10.1007/978-3-540-39917-9_11
