
About this book

This book constitutes the thoroughly refereed post-proceedings of the 23rd International Conference on Inductive Logic Programming, ILP 2013, held in Rio de Janeiro, Brazil, in August 2013.

The 9 revised extended papers were carefully reviewed and selected from 42 submissions. The conference now focuses on all aspects of learning in logic, multi-relational learning and data mining, statistical relational learning, graph and tree mining, relational reinforcement learning, and other forms of learning from structured data.



MetaBayes: Bayesian Meta-Interpretative Learning Using Higher-Order Stochastic Refinement

Recent papers have demonstrated that both predicate invention and the learning of recursion can be efficiently implemented by way of abduction with respect to a meta-interpreter. This paper shows how Meta-Interpretive Learning (MIL) can be extended to implement a Bayesian posterior distribution over the hypothesis space by treating the meta-interpreter as a Stochastic Logic Program. The resulting \(MetaBayes\) system uses stochastic refinement to randomly sample consistent hypotheses, which are used to approximate Bayes’ prediction. Most approaches to Statistical Relational Learning involve separate phases of model estimation and parameter estimation. We show how a variant of the MetaBayes approach can be used to carry out simultaneous model and parameter estimation for a new representation we refer to as Super-imposed Logic Programs (SiLPs). The implementation of this approach is referred to as \(MetaBayes_{SiLP}\). SiLPs are a particular form of ProbLog program, and so their parameters can also be estimated using the more traditional EM approach employed by ProbLog. This second approach is implemented in a new system called \(MilProbLog\). Experiments are conducted on learning grammars, family relations and a natural language domain. These demonstrate that \(MetaBayes\) outperforms \(MetaBayes_{MAP}\) in terms of predictive accuracy and also outperforms both \(MilProbLog\) and \(MetaBayes_{SiLP}\) on log likelihood measures. However, \(MetaBayes\) incurs substantially higher running times than \(MetaBayes_{MAP}\). On the other hand, \(MetaBayes\) and \(MetaBayes_{SiLP}\) have similar running times, while both have much shorter running times than \(MilProbLog\).
Stephen H. Muggleton, Dianhuan Lin, Jianzhong Chen, Alireza Tamaddoni-Nezhad
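
The core idea, approximating the Bayesian predictive distribution by averaging over sampled consistent hypotheses, can be sketched very loosely as follows. Everything below (the threshold hypothesis space, the uniform prior, the toy examples) is invented for illustration; real MIL hypotheses are logic programs sampled via stochastic refinement, not enumerated functions.

```python
def bayes_prediction(hypotheses, priors, examples, x):
    """Approximate P(x is positive | examples) by averaging the votes of
    hypotheses consistent with the labelled examples, weighted by prior."""
    weights, votes = [], []
    for h, p in zip(hypotheses, priors):
        if all(h(e) == label for e, label in examples):  # consistency check
            weights.append(p)
            votes.append(1.0 if h(x) else 0.0)
    total = sum(weights)
    if total == 0:
        return 0.5  # no consistent hypothesis: uninformative answer
    return sum(w * v for w, v in zip(weights, votes)) / total

# Toy hypothesis space over integers: threshold concepts "n >= t".
hyps = [lambda n, t=t: n >= t for t in range(5)]
prior = [0.2] * 5  # uniform prior over the five thresholds
train = [(3, True), (1, False)]  # only t in {2, 3} remain consistent
print(bayes_prediction(hyps, prior, train, 4))  # both vote positive: 1.0
```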

On Differentially Private Inductive Logic Programming

We consider differentially private inductive logic programming. We begin by formulating the problem of guaranteeing differential privacy in inductive logic programming, and then prove the theoretical difficulty of simultaneously providing good utility and good privacy in this task. While our analysis proves that in general this is very difficult, it leaves a glimmer of hope: when the size of the training data is large, or the search tree for hypotheses is “short” and “narrow,” we might be able to get meaningful results. To test our intuition, we implement a differentially private version of Aleph, and our experimental results show that our algorithm is able to produce accurate results in these two cases.
Chen Zeng, Eric Lantz, Jeffrey F. Naughton, David Page
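
As an illustration of the kind of mechanism such an algorithm can build on (this is the generic Laplace mechanism, not the paper's actual modification of Aleph): a clause's positive-example coverage is a counting query with sensitivity 1, so adding Laplace noise with scale 1/epsilon to the count makes the released score epsilon-differentially private.

```python
import random

def laplace_noise(scale):
    # The difference of two i.i.d. exponential variates is
    # Laplace-distributed with the given scale.
    lam = 1.0 / scale
    return random.expovariate(lam) - random.expovariate(lam)

def private_coverage(clause_covers, examples, epsilon):
    """Release a clause's coverage count with epsilon-differential
    privacy; coverage has sensitivity 1 (adding or removing one
    example changes the count by at most one)."""
    true_count = sum(1 for e in examples if clause_covers(e))
    return true_count + laplace_noise(1.0 / epsilon)

covers = lambda e: e > 0
print(private_coverage(covers, [1, 2, 3, -1], epsilon=0.5))  # noisy count near 3
```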

Learning Through Hypothesis Refinement Using Answer Set Programming

Recent work has shown how a meta-level approach to inductive logic programming, which uses a semantic-preserving transformation of a learning task into an abductive reasoning problem, can address a large class of multi-predicate, nonmonotonic learning problems in a sound and complete manner. An Answer Set Programming (ASP) implementation, called ASPAL, has been proposed that uses ASP fixed point computation to solve a learning task, thus delegating the search to the ASP solver. Although this meta-level approach has been shown to be very general and flexible, the scalability of its ASP implementation is constrained by the grounding of the meta-theory. In this paper we build upon these results and propose a new meta-level learning approach that overcomes the scalability problem of ASPAL by breaking the learning process up into small manageable steps and using theory revision over the meta-level representation of the hypothesis space to improve the hypothesis computed at each step. We empirically evaluate the computational gain with respect to ASPAL using two different answer set solvers.
Duangtida Athakravi, Domenico Corapi, Krysia Broda, Alessandra Russo
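
The step-wise refinement loop can be caricatured in a few lines (a propositional toy, nothing like the actual ASP meta-encoding): examples arrive in small batches, and the current hypothesis is revised only as far as the new batch demands.

```python
def refine(hypothesis, batch):
    """Revise a hypothesis (here just a set of ground atoms held true)
    against a batch of labelled atoms: add uncovered positives,
    retract atoms contradicted by negatives."""
    h = set(hypothesis)
    for atom, label in batch:
        if label:
            h.add(atom)       # generalise to cover the positive
        else:
            h.discard(atom)   # specialise to reject the negative
    return h

def learn_in_steps(batches):
    h = set()
    for batch in batches:     # small, manageable steps
        h = refine(h, batch)
    return h

batches = [[("p(a)", True), ("p(b)", False)],
           [("p(c)", True)]]
print(sorted(learn_in_steps(batches)))  # ['p(a)', 'p(c)']
```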

A BDD-Based Algorithm for Learning from Interpretation Transition

In recent years, there has been extensive interest in learning the dynamics of systems. For this purpose, a new learning method called learning from interpretation transition has been proposed recently [1]. However, both the run time and the memory usage of this algorithm are exponential, so a better data structure and a more efficient algorithm are needed. In this paper, we propose a new learning algorithm for this method that utilizes an efficient data structure inspired by Ordered Binary Decision Diagrams. We show empirically that with this representation we can perform the same learning task faster and with less memory.
Tony Ribeiro, Katsumi Inoue, Chiaki Sakama
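
A naive version of learning from interpretation transition, without the BDD-inspired data structure that is the paper's contribution, can be sketched as follows; the two-variable Boolean network below is made up for illustration.

```python
def learn_transition_rules(transitions, variables):
    """transitions: list of (state, next_state) pairs, each state a
    frozenset of the variables true in it. Returns, for each variable,
    the set of states after which that variable becomes true."""
    rules = {v: set() for v in variables}
    for state, nxt in transitions:
        for v in variables:
            if v in nxt:
                rules[v].add(state)  # state is a (ground) body for v
    return rules

# Toy Boolean network: p(t+1) = q(t) and q(t+1) = p(t).
V = {"p", "q"}
T = [
    (frozenset(), frozenset()),
    (frozenset({"p"}), frozenset({"q"})),
    (frozenset({"q"}), frozenset({"p"})),
    (frozenset({"p", "q"}), frozenset({"p", "q"})),
]
R = learn_transition_rules(T, V)
print(sorted(sorted(s) for s in R["p"]))  # [['p', 'q'], ['q']]
```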

Accelerating Imitation Learning in Relational Domains via Transfer by Initialization

The problem of learning to mimic a human expert/teacher from training trajectories is called imitation learning. To make the process of teaching easier in this setting, we propose to employ transfer learning (where one learns on a source problem and transfers the knowledge to potentially more complex target problems). We consider multi-relational environments such as real-time strategy games and use functional-gradient boosting to capture and transfer the models learned in these environments. Our experiments demonstrate that our learner learns a very good initial model from the simple scenario and effectively transfers the knowledge to the more complex scenario, thus achieving a jump start, a steeper learning curve and higher converged performance.
Sriraam Natarajan, Phillip Odom, Saket Joshi, Tushar Khot, Kristian Kersting, Prasad Tadepalli
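
Transfer by initialization, in its simplest possible form, is shown below for plain numeric boosting: the source model becomes stage 0 of the target ensemble, and later stages fit target residuals. The constant weak learner and the data are invented; the paper boosts relational models, not constants.

```python
def boost_with_init(init_model, data, n_stages, fit_stage, lr=1.0):
    """data: list of (x, y) pairs; fit_stage fits a weak model to the
    current residuals. The source model provides the jump start."""
    stages = [init_model]
    for _ in range(n_stages):
        preds = [sum(s(x) for s in stages) for x, _ in data]
        residuals = [(x, y - p) for (x, y), p in zip(data, preds)]
        stages.append(fit_stage(residuals, lr))
    return lambda x: sum(s(x) for s in stages)

# Weak learner: a constant correction equal to the mean residual.
def mean_stage(residuals, lr):
    m = lr * sum(r for _, r in residuals) / len(residuals)
    return lambda x: m

source = lambda x: 2.0 * x  # model transferred from the source task
target_data = [(1.0, 2.5), (2.0, 4.5), (3.0, 6.5)]  # target = source + 0.5
model = boost_with_init(source, target_data, n_stages=3, fit_stage=mean_stage)
print(round(model(4.0), 3))  # 8.5: one boosting stage corrects the shift
```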

A Direct Policy-Search Algorithm for Relational Reinforcement Learning

In the field of relational reinforcement learning — a representational generalisation of reinforcement learning — the first-order representation of environments results in a potentially infinite number of possible states, requiring learning agents to use some form of abstraction to learn effectively. Instead of forming an abstraction over the state-action space, an alternative technique is to create behaviour directly through policy search. The algorithm named Cerrla presented in this paper uses the cross-entropy method to learn behaviour directly, in the form of decision lists of relational rules, for solving problems in a range of different environments, without the need for expert guidance in the learning process. The behaviour produced by the algorithm is easy to comprehend and is biased towards compactness. The results obtained show that Cerrla is competitive in both the standard testing environment and in Ms. Pac-Man and Carcassonne, two large and complex game environments.
Samuel Sarjant, Bernhard Pfahringer, Kurt Driessens, Tony Smith
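
The cross-entropy machinery can be sketched generically (this is not Cerrla itself; the candidate "rules" and the scoring function are placeholders): keep a selection probability per rule, sample rule sets, and move the probabilities toward the elite samples.

```python
import random

def cross_entropy_rules(rules, score, iters=30, samples=40,
                        elite_frac=0.2, alpha=0.7, seed=0):
    """Cross-entropy search over rule subsets: probs[i] is the
    probability of including rules[i] in a sampled behaviour."""
    rng = random.Random(seed)
    probs = [0.5] * len(rules)
    for _ in range(iters):
        batch = []
        for _ in range(samples):
            mask = [rng.random() < p for p in probs]
            batch.append((score(mask), mask))
        batch.sort(key=lambda t: t[0], reverse=True)
        elite = [m for _, m in batch[:max(1, int(elite_frac * samples))]]
        for i in range(len(rules)):
            freq = sum(m[i] for m in elite) / len(elite)
            probs[i] = (1 - alpha) * probs[i] + alpha * freq
    return probs

# Toy scoring: rules 0 and 2 are useful, rule 1 is harmful.
score = lambda m: m[0] + m[2] - 2 * m[1]
p = cross_entropy_rules(["r0", "r1", "r2"], score)
print([round(x, 2) for x in p])  # probabilities concentrate on r0 and r2
```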

AND Parallelism for ILP: The APIS System

Inductive Logic Programming (ILP) is a well-known approach to Multi-Relational Data Mining. ILP systems may take a long time to analyze the data, mainly because the search (hypothesis) spaces are often very large and the evaluation of each hypothesis, which involves theorem proving, may be quite time consuming in some domains. To address these efficiency issues of ILP systems we propose the APIS (And ParallelISm for ILP) system, which uses results from Logic Programming AND-parallelism. The approach enables the partition of the search space into sub-spaces of two kinds: sub-spaces where clause evaluation requires theorem proving, and sub-spaces where clause evaluation is performed quite efficiently without resorting to a theorem prover. We have also defined a new type of redundancy (coverage-equivalent redundancy) that enables the pruning of significant parts of the search space. The new type of pruning, together with the partition of the hypothesis space, considerably improves the performance of the APIS system. An empirical evaluation of the APIS system on standard ILP data sets shows considerable speedups without loss of accuracy in the models constructed.
Rui Camacho, Ruy Ramos, Nuno A. Fonseca
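
The independence-based idea can be illustrated with a toy version of AND-parallel clause evaluation (the partition into independent subgoals is given by hand here; deriving it automatically is the system's job).

```python
from concurrent.futures import ThreadPoolExecutor

def and_parallel_covers(subgoals, example):
    """When a clause body splits into independent subgoals (no shared
    unbound variables), each subgoal can be evaluated concurrently and
    the results conjoined: the clause covers the example iff every
    subgoal succeeds."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda g: g(example), subgoals)
    return all(results)

# Toy clause: covers(X) :- even(X), positive(X).
goals = [lambda x: x % 2 == 0, lambda x: x > 0]
print(and_parallel_covers(goals, 4), and_parallel_covers(goals, -2))
# True False
```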

Generalized Counting for Lifted Variable Elimination

Lifted probabilistic inference methods exploit symmetries in the structure of probabilistic models to perform inference more efficiently. In lifted variable elimination, the symmetry among a group of interchangeable random variables is captured by counting formulas, and exploited by operations that handle such formulas. In this paper, we generalize the structure of counting formulas and present a set of inference operators that introduce and eliminate these formulas from the model. This generalization expands the range of problems that can be solved in a lifted way. Our work is closely related to the recently introduced method of joint conversion. Due to its more fine-grained formulation, however, our approach can provide more efficient solutions than joint conversion.
Nima Taghipour, Jesse Davis, Hendrik Blockeel
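
The gain from counting can be seen in a tiny propositional case: n interchangeable Boolean variables with identical factors phi can be summed out using n + 1 count-indexed terms instead of 2**n joint assignments. The example is ours, not the paper's, and sidesteps the paper's generalized counting formulas entirely.

```python
from math import comb

def eliminate_grounded(phi, n):
    """Sum out n interchangeable Boolean variables the slow way,
    enumerating all 2**n joint assignments."""
    total = 0.0
    for assignment in range(2 ** n):
        k = bin(assignment).count("1")   # number of true variables
        total += phi[True] ** k * phi[False] ** (n - k)
    return total

def eliminate_lifted(phi, n):
    """Same sum in n + 1 terms: one per count of true variables,
    weighted by the number of assignments with that count."""
    return sum(comb(n, k) * phi[True] ** k * phi[False] ** (n - k)
               for k in range(n + 1))

phi = {True: 0.3, False: 0.7}
print(eliminate_grounded(phi, 10), eliminate_lifted(phi, 10))  # the two agree
```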

A FOIL-Like Method for Learning under Incompleteness and Vagueness

Incompleteness and vagueness are inherent properties of knowledge in several real world domains and are particularly pervading in those domains where entities could be better described in natural language. In order to deal with incomplete and vague structured knowledge, several fuzzy extensions of Description Logics (DLs) have been proposed in the literature. In this paper, we present a novel Foil-like method for inducing fuzzy DL inclusion axioms from crisp DL knowledge bases and discuss the results obtained on a real-world case study in the tourism application domain also in comparison with related works.
Francesca A. Lisi, Umberto Straccia
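
Under Goedel fuzzy semantics, the confidence degree of a fuzzy inclusion axiom might be computed roughly as below: the body degree of an individual is the t-norm (minimum) of its atom membership degrees, and the axiom's confidence averages the implication degrees over the examples. The membership degrees and the averaging choice are invented for illustration.

```python
def goedel_tnorm(degrees):
    # Goedel t-norm: the minimum of the conjunct degrees.
    return min(degrees) if degrees else 1.0

def r_implication(body, head):
    # Goedel residuum: 1 if body <= head, else the head degree.
    return 1.0 if body <= head else head

def axiom_confidence(examples):
    """examples: list of (body_atom_degrees, head_degree) pairs,
    one per individual in the knowledge base."""
    degs = [r_implication(goedel_tnorm(b), h) for b, h in examples]
    return sum(degs) / len(degs)

# Two individuals with fuzzy membership degrees (values are made up).
ex = [([0.8, 0.6], 0.9),   # body 0.6 <= head 0.9 -> 1.0
      ([0.9, 0.7], 0.4)]   # body 0.7 >  head 0.4 -> 0.4
print(axiom_confidence(ex))  # (1.0 + 0.4) / 2 = 0.7
```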

