2013 | OriginalPaper | Chapter
Reducing Examples in Relational Learning with Bounded-Treewidth Hypotheses
Authors : Ondřej Kuželka, Andrea Szabóová, Filip Železný
Published in: New Frontiers in Mining Complex Patterns
Publisher: Springer Berlin Heidelberg
Feature selection methods often improve the performance of attribute-value learning. We explore whether, in relational learning as well, examples in the form of clauses can be reduced in size to speed up learning without affecting the learned hypothesis. To this end, we introduce the notion of safe reduction: a safely reduced example cannot be distinguished from the original example under the given hypothesis language bias. Next, we consider the particular, rather permissive bias of bounded-treewidth clauses. We show that under this hypothesis bias, examples of arbitrary treewidth can be reduced efficiently. The bounded-treewidth bias can be replaced by other assumptions, such as acyclicity, with similar benefits. We evaluate our approach on four data sets with the popular system Aleph and the state-of-the-art relational learner nFOIL. On all four data sets, learning becomes faster for nFOIL, with an order-of-magnitude speed-up on one data set, and more accurate for Aleph.
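The kind of example reduction the abstract refers to generalizes classical θ-reduction of clauses, where literals are removed as long as the smaller clause remains θ-equivalent to the original. As a rough illustration only (this is Plotkin-style reduction via θ-subsumption, not the paper's bounded-treewidth algorithm), a minimal Python sketch might look like this; the literal representation as `(predicate, args)` tuples and the upper-case variable convention are assumptions of the sketch:

```python
def is_var(term):
    # Convention assumed here: variables start with an upper-case letter.
    return term[0].isupper()

def subsumes(c, d):
    """Does clause c theta-subsume clause d, i.e. is there a substitution
    theta with c.theta a subset of d? Naive backtracking search."""
    def match(lit, target, theta):
        (p1, args1), (p2, args2) = lit, target
        if p1 != p2 or len(args1) != len(args2):
            return None
        theta = dict(theta)
        for a, b in zip(args1, args2):
            if is_var(a):
                if theta.get(a, b) != b:   # already bound to something else
                    return None
                theta[a] = b
            elif a != b:                   # constant mismatch
                return None
        return theta

    def search(lits, theta):
        if not lits:
            return True
        for target in d:
            t2 = match(lits[0], target, theta)
            if t2 is not None and search(lits[1:], t2):
                return True
        return False

    return search(list(c), {})

def reduce_clause(c):
    """Greedy Plotkin-style reduction: drop a literal whenever the smaller
    clause is still theta-equivalent to the original (d subsumes c trivially
    since d is a subset of c, so checking c subsumes d suffices)."""
    c = list(c)
    changed = True
    while changed:
        changed = False
        for lit in list(c):
            d = [l for l in c if l != lit]
            if subsumes(c, d):
                c = d
                changed = True
                break
    return c
```

For instance, the clause body `edge(X,Y), edge(U,V)` reduces to a single literal, while `edge(X,Y), edge(Y,Z)` (a path of length two) is already reduced. The paper's contribution is precisely that, under a bounded-treewidth hypothesis bias, an analogous safe reduction can be computed efficiently even though general θ-reduction is NP-hard.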