
2017 | Book

Inductive Logic Programming

26th International Conference, ILP 2016, London, UK, September 4-6, 2016, Revised Selected Papers


About this book

This book constitutes the thoroughly refereed post-conference proceedings of the 26th International Conference on Inductive Logic Programming, ILP 2016, held in London, UK, in September 2016.

The 10 full papers presented were carefully reviewed and selected from 29 submissions. The papers represent well the current breadth of ILP research topics, such as predicate invention; graph-based learning; spatial learning; logical foundations; statistical relational learning; probabilistic ILP; implementation and scalability; and applications in robotics, cyber security, and games.

Table of Contents

Frontmatter
Estimation-Based Search Space Traversal in PILP Environments
Abstract
Probabilistic Inductive Logic Programming (PILP) systems extend ILP by allowing the world to be represented using probabilistic facts and rules, and by learning probabilistic theories that can be used to make predictions. However, such systems can be inefficient both due to the large search space inherited from the ILP algorithm and due to the probabilistic evaluation needed whenever a new candidate theory is generated. To address the latter issue, this work introduces probability estimators aimed at improving the efficiency of PILP systems. An estimator can avoid the computational cost of probabilistic theory evaluation by providing an estimate of the value of the combination of two subtheories. Experiments are performed on three real-world datasets from different areas (biological, medical, and web-based) and show that, by reducing the number of theories to be evaluated, the estimators can significantly shorten the execution time without losing probabilistic accuracy.
Joana Côrte-Real, Inês Dutra, Ricardo Rocha
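The pruning idea in the abstract above can be sketched in a few lines. This is a hypothetical illustration, not the authors' actual system: `combine_estimate` stands in for one of their probability estimators (here a noisy-OR combination under an independence assumption), and `evaluate` stands in for the costly exact probabilistic scorer that the estimate lets the search skip.

```python
# Hypothetical sketch of estimation-based pruning in a PILP search.
# An estimator guesses the probability of a combined theory from its
# subtheories' scores, so the expensive probabilistic evaluation runs
# only on promising candidates.

def combine_estimate(p1, p2):
    # Assumption: treat the two subtheories as independent probabilistic
    # rules; their disjunction is then estimated by noisy-OR.
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def search_step(subtheories, evaluate, threshold=0.6):
    """subtheories: list of (clauses, score); evaluate: costly exact scorer."""
    candidates = []
    for i, (t1, p1) in enumerate(subtheories):
        for t2, p2 in subtheories[i + 1:]:
            est = combine_estimate(p1, p2)
            if est >= threshold:          # cheap estimate gates the real evaluation
                combined = t1 + t2
                candidates.append((combined, evaluate(combined)))
    return candidates
```

Only pairs whose estimated score clears the threshold ever reach `evaluate`, which is where the reported execution-time savings would come from in such a scheme.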
Inductive Logic Programming Meets Relational Databases: Efficient Learning of Markov Logic Networks
Abstract
Statistical Relational Learning (SRL) approaches have been developed to learn in the presence of noisy relational data by combining probability theory with first-order logic. While powerful, most learning approaches for these models do not scale well to large datasets. While advances have been made on using relational databases with SRL models [14], they have not been extended to handle the more complex task of model learning (structure learning). We present a scalable structure learning approach that combines the benefits of relational databases with search strategies that employ rich inductive bias from Inductive Logic Programming. We empirically show the benefits of our approach on boosted structure learning for Markov Logic Networks.
Marcin Malec, Tushar Khot, James Nagy, Erik Blask, Sriraam Natarajan
Online Structure Learning for Traffic Management
Abstract
Most event recognition approaches in sensor environments are based on manually constructed patterns for detecting events, and lack the ability to learn relational structures in the presence of uncertainty. We describe the application of OSLα, an online structure learner for Markov Logic Networks that exploits Event Calculus axiomatizations, to event recognition for traffic management. Our empirical evaluation is based on large volumes of real sensor data, as well as synthetic data generated by a professional traffic micro-simulator. The experimental results demonstrate that OSLα can effectively learn traffic congestion definitions and, in some cases, outperform rules constructed by human experts.
Evangelos Michelioudakis, Alexander Artikis, Georgios Paliouras
Learning Through Advice-Seeking via Transfer
Abstract
Experts possess vast knowledge that is typically ignored by standard machine learning methods. This rich, relational knowledge can be utilized to learn more robust models, especially in the presence of noisy and incomplete training data. Such experts are often domain experts but not machine learning experts, so deciding what knowledge to provide is a difficult problem. Our goal is to improve the human-machine interaction by providing the expert with a machine-generated bias that can be refined by the expert as necessary. To this end, we propose using transfer learning, leveraging knowledge from alternative domains, to guide the expert to give useful advice. This knowledge is captured in the form of first-order Horn clauses. We demonstrate empirically the value of the transferred knowledge, as well as the contribution of the expert in providing initial knowledge, plus revising and directing the use of the transferred knowledge.
Phillip Odom, Raksha Kumaraswamy, Kristian Kersting, Sriraam Natarajan
How Does Predicate Invention Affect Human Comprehensibility?
Abstract
During the 1980s Michie defined Machine Learning in terms of two orthogonal axes of performance: predictive accuracy and comprehensibility of generated hypotheses. Since predictive accuracy was readily measurable and comprehensibility not so, later definitions in the 1990s, such as that of Mitchell, tended to use a one-dimensional approach to Machine Learning based solely on predictive accuracy, ultimately favouring statistical over symbolic Machine Learning approaches. In this paper we provide a definition of comprehensibility of hypotheses which can be estimated using human participant trials. We present the results of experiments testing human comprehensibility of logic programs learned with and without predicate invention. Results indicate that comprehensibility is affected not only by the complexity of the presented program but also by the existence of anonymous predicate symbols.
Ute Schmid, Christina Zeller, Tarek Besold, Alireza Tamaddoni-Nezhad, Stephen Muggleton
Distributional Learning of Regular Formal Graph System of Bounded Degree
Abstract
In this paper, we describe how distributional learning techniques can be applied to formal graph system (FGS) languages. An FGS is a logic program that deals with term graphs instead of the terms of first-order predicate logic. We show that the regular FGS languages of bounded degree with the 1-finite context property (1-FCP) and bounded treewidth property can be learned from positive data and membership queries.
Takayoshi Shoudai, Satoshi Matsumoto, Yusuke Suzuki
Learning Relational Dependency Networks for Relation Extraction
Abstract
We consider the task of KBP slot filling – extracting relation information from newswire documents for knowledge base construction. We present our pipeline, which employs Relational Dependency Networks (RDNs) to learn linguistic patterns for relation extraction. Additionally, we demonstrate how several components such as weak supervision, word2vec features, joint learning and the use of human advice, can be incorporated in this relational framework. We evaluate the different components in the benchmark KBP 2015 task and show that RDNs effectively model a diverse set of features and perform competitively with current state-of-the-art relation extraction methods.
Ameet Soni, Dileep Viswanathan, Jude Shavlik, Sriraam Natarajan
Towards Nonmonotonic Relational Learning from Knowledge Graphs
Abstract
Recent advances in information extraction have led to the so-called knowledge graphs (KGs), i.e., huge collections of relational factual knowledge. Since KGs are automatically constructed, they are inherently incomplete, and thus naturally treated under the Open World Assumption (OWA). Rule mining techniques have been exploited to support the crucial task of KG completion. However, these techniques can mine only Horn rules, which are insufficiently expressive to capture exceptions, and might thus make incorrect predictions on missing links. Recently, a rule-based method for filling in this gap was proposed which, however, applies to a flattened representation of a KG with only unary facts. In this work we make the first steps towards extending this approach to KGs in their original relational form, and provide preliminary evaluation results on real-world KGs, which demonstrate the effectiveness of our method.
Hai Dang Tran, Daria Stepanova, Mohamed H. Gad-Elrab, Francesca A. Lisi, Gerhard Weikum
Learning Predictive Categories Using Lifted Relational Neural Networks
Abstract
Lifted relational neural networks (LRNNs) are a flexible neural-symbolic framework based on the idea of lifted modelling. In this paper we show how LRNNs can be easily used to specify declaratively and solve learning problems in which latent categories of entities, properties and relations need to be jointly induced.
Gustav Šourek, Suresh Manandhar, Filip Železný, Steven Schockaert, Ondřej Kuželka
Generation of Near-Optimal Solutions Using ILP-Guided Sampling
Abstract
Our interest in this paper is in optimisation problems that are intractable to solve by direct numerical optimisation, but nevertheless have significant amounts of relevant domain-specific knowledge. The category of heuristic search techniques known as estimation of distribution algorithms (EDAs) seeks to incrementally sample from probability distributions in which optimal (or near-optimal) solutions have increasingly higher probabilities. Can we use domain knowledge to assist the estimation of these distributions? To answer this in the affirmative, we need: (a) a general-purpose technique for the incorporation of domain knowledge when constructing models for optimal values; and (b) a way of using these models to generate new data samples. Here we investigate a combination of the use of Inductive Logic Programming (ILP) for (a), and standard logic-programming machinery to generate new samples for (b). Specifically, on each iteration of distribution estimation, an ILP engine is used to construct a model for good solutions. The resulting theory is then used to guide the generation of new data instances, which are now restricted to those derivable using the ILP model in conjunction with the background knowledge. We demonstrate the approach on two optimisation problems (predicting optimal depth-of-win for the KRK endgame, and job-shop scheduling). Our results are promising: (a) on each iteration of distribution estimation, samples obtained with an ILP theory have a substantially greater proportion of good solutions than samples without a theory; and (b) on termination of distribution estimation, samples obtained with an ILP theory contain more near-optimal samples than samples without a theory. Taken together, these results suggest that the use of ILP-constructed theories could be a useful technique for incorporating complex domain knowledge into estimation of distribution procedures.
Ashwin Srinivasan, Gautam Shroff, Lovekesh Vig, Sarmimala Saikia
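The sample-learn-restrict loop described in the abstract above can be sketched as follows. This is a toy illustration under stated assumptions, not the authors' method: `ilp_guided_eda`, `sampler`, and the cost bound that stands in for a learned ILP theory are all hypothetical names, and the "theory" here is just a numeric threshold that admits or rejects instances, mimicking the derivability check that a real ILP model plus background knowledge would perform.

```python
# Hypothetical sketch of an ILP-guided EDA loop: each iteration samples a
# population, keeps the best instances, "learns" a model of them (here, a
# cost bound standing in for an ILP theory), and restricts the next round
# of sampling to instances the model admits.
import random

def ilp_guided_eda(cost, sample, iterations=5, pop=100, top_frac=0.2):
    model = None                             # no theory on the first round
    elites = []
    for _ in range(iterations):
        xs = [sample(model) for _ in range(pop)]
        xs.sort(key=cost)
        elites = xs[: int(pop * top_frac)]
        model = max(cost(x) for x in elites)  # stand-in "theory": a cost bound
    return elites[0]

# Toy problem: minimise |x - 42| over integers 0..99. The sampler rejects
# instances the current "theory" does not admit, analogous to restricting
# generation to instances derivable from the ILP model.
def sampler(model):
    while True:
        x = random.randint(0, 99)
        if model is None or abs(x - 42) <= model:
            return x

best = ilp_guided_eda(lambda x: abs(x - 42), sampler)
```

As in the paper's findings, the restriction makes each successive population contain a higher proportion of good solutions, since every sampled instance already satisfies the model learned from the previous elites.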
Backmatter
Metadata
Title
Inductive Logic Programming
Editors
James Cussens
Alessandra Russo
Copyright Year
2017
Electronic ISBN
978-3-319-63342-8
Print ISBN
978-3-319-63341-1
DOI
https://doi.org/10.1007/978-3-319-63342-8
