
About this Book

This book constitutes the proceedings of the International Joint Conference on Rules and Reasoning, RuleML+RR 2019, held in Bolzano, Italy, in September 2019. It is the third conference in a new series that joins the efforts of two existing conference series, namely “RuleML” (International Web Rule Symposium) and “RR” (Web Reasoning and Rule Systems).

The 10 full research papers presented together with 5 short technical communications papers were carefully reviewed and selected from 26 submissions.

Table of Contents

Frontmatter

Full Papers

Frontmatter

Finding New Diamonds: Temporal Minimal-World Query Answering over Sparse ABoxes

Abstract
Lightweight temporal ontology languages have become a very active field of research in recent years. Many real-world applications, like processing electronic health records (EHRs), inherently contain a temporal dimension and require efficient reasoning algorithms. Moreover, since medical data is not recorded on a regular basis, reasoners must deal with sparse data with potentially large temporal gaps. In this paper, we introduce a temporal extension of the tractable language \(\mathcal{ELH}_\bot\), which features a new class of convex diamond operators that can be used to bridge temporal gaps. We develop a completion algorithm for our logic, which shows that entailment remains tractable. Based on this, we develop a minimal-world semantics for answering metric temporal conjunctive queries with negation. We show that query answering is combined first-order rewritable, and hence in polynomial time in data complexity.
Stefan Borgwardt, Walter Forkel, Alisa Kovtunova
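
To give a feel for the kind of axiom such convex diamond operators enable, the following is a purely illustrative, schematic example in a metric temporal EL-style notation; the concrete syntax and semantics of the paper's operators may differ, and the 90-day threshold and concept names are invented.

    % Schematic illustration (not from the paper): a time point lying in a
    % gap of at most 90 days between two hypertension findings still counts
    % as a hypertensive state, "bridging" the gap in sparse EHR data.
    \[
      \Diamond^{c}_{[0,90]}\,\exists \mathsf{hasFinding}.\mathsf{Hypertension}
      \;\sqsubseteq\; \mathsf{HypertensivePatient}
    \]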

Reasoning on \(\textit{DL-Lite}_\mathcal{R}\) with Defeasibility in ASP

Abstract
Reasoning on defeasible knowledge is a topic of interest in the area of description logics, as it is related to the need to represent exceptional instances in knowledge bases. In this direction, in our previous works we presented a framework for representing (contextualized) OWL RL knowledge bases with a notion of justified exceptions on defeasible axioms: reasoning in this framework is realized by a translation into ASP programs. The resulting reasoning process for OWL RL, however, introduces a complex encoding in order to capture reasoning on the negative information needed for reasoning on exceptions. In this paper, we apply the justified exception approach to knowledge bases in \(\textit{DL-Lite}_\mathcal{R}\), i.e. the language underlying OWL QL. We provide a definition for \(\textit{DL-Lite}_\mathcal{R}\) knowledge bases with defeasible axioms and study their semantic and computational properties. The limited form of \(\textit{DL-Lite}_\mathcal{R}\) axioms allows us to formulate a simpler encoding into ASP programs, where reasoning on negative information is managed by direct rules. The resulting materialization method gives rise to a complete reasoning procedure for instance checking in \(\textit{DL-Lite}_\mathcal{R}\) with defeasible axioms.
Loris Bozzato, Thomas Eiter, Luciano Serafini
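
The paper's translation is specific to its justified-exception semantics; the following is only a generic sketch of the underlying ASP idiom (a defeasible inclusion that fires unless overridden), run through clingo's Python API, assuming the clingo package is installed. It is not the authors' encoding, and the predicate names are invented.

    # Generic sketch of a defeasible inclusion in ASP via clingo's Python API
    # (pip install clingo). NOT the paper's DL-Lite_R encoding with justified
    # exceptions; it only illustrates "apply the axiom unless overridden".
    import clingo

    PROGRAM = """
    % ABox facts
    employee(alice).  employee(bob).
    manager(bob).                     % bob is an exceptional instance

    % Defeasible inclusion: employees normally work for someone,
    % unless the axiom is overridden for that individual.
    works_for_someone(X) :- employee(X), not overridden(X).

    % Justified exception: the inclusion is overridden for managers.
    overridden(X) :- manager(X).
    """

    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    # The answer set contains works_for_someone(alice) but not for bob.
    ctl.solve(on_model=lambda m: print("Answer set:", m))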

ODRL Policy Modelling and Compliance Checking

Abstract
This paper addresses the problem of constructing a policy pipeline that enables compliance checking of business processes against regulatory obligations. Towards this end, we propose an Open Digital Rights Language (ODRL) profile that can be used to capture the semantics of both business policies, in the form of sets of required permissions, and regulatory requirements, in the form of deontic concepts, and present their translation into Answer Set Programming (via the Institutional Action Language (InstAL)) for compliance checking purposes. The result of the compliance checking is either a positive compliance result or an explanation pertaining to the aspects of the policy that are causing the non-compliance. The pipeline is illustrated using two key fragments of the General Data Protection Regulation, namely Article 6 (Lawfulness of processing) and Article 46 (Transfers subject to appropriate safeguards), and industrially relevant use cases that involve the specification of sets of permissions needed to execute business processes. The core contributions of this paper are the ODRL profile, which is capable of modelling regulatory obligations and business policies; the exercise of modelling elements of the GDPR in this semantic formalism; and the operationalisation of the model to demonstrate its capability to support personal data processing compliance checking and to provide a basis for explaining why a request is deemed compliant or not.
Marina De Vos, Sabrina Kirrane, Julian Padget, Ken Satoh
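
For orientation, an ODRL 2.2-style permission with a purpose constraint can be written in JSON-LD roughly as below (shown here as a Python dict). The IRIs and the specific constraint are hypothetical placeholders and do not reproduce the GDPR profile defined in the paper.

    # Illustrative ODRL 2.2-style permission as a Python dict (JSON-LD).
    # The uid, target and purpose IRIs are hypothetical; the paper defines
    # its own ODRL profile for business policies and GDPR concepts.
    business_policy = {
        "@context": "http://www.w3.org/ns/odrl.jsonld",
        "@type": "Set",
        "uid": "http://example.com/policy:shipping",               # hypothetical
        "permission": [{
            "target": "http://example.com/data/customer-records",  # hypothetical
            "action": "use",
            "constraint": [{
                "leftOperand": "purpose",
                "operator": "eq",
                "rightOperand": "http://example.com/purpose/order-fulfilment"
            }]
        }]
    }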

Aligning, Interoperating, and Co-executing Air Traffic Control Rules Across PSOA RuleML and IDP

Abstract
This paper studies Knowledge Bases (KBs) in PSOA RuleML and IDP, aligning, interoperating, and co-executing them for a use case of Air Traffic Control (ATC) regulations. We focus on the common core of facts and rules in both languages, explaining basic language features. The knowledge sources used are regulations specified in (legal) English and an aircraft data schema. In the modeling process, inconsistencies in both sources were discovered. We present the discovery process utilizing both specification languages, and highlight their unique features. We introduce three extensions to this ATC KB core: (1) While the current PSOA RuleML does not distinguish the ontology level from the instance level, IDP does. Hence, we specify a vocabulary-enriched version of the ATC KB in IDP for knowledge validation. (2) While the current IDP uses relational modeling, PSOA additionally supports graph modeling. Hence, we specify a relationally interoperable graph version of the ATC KB in PSOA. (3) The KB is extended to include optimization criteria to allow the determination of an optimal sequence of more than two aircraft.
Marjolein Deryck, Theodoros Mitsikas, Sofia Almpani, Petros Stefaneas, Panayiotis Frangos, Iakovos Ouranos, Harold Boley, Joost Vennekens

An ASP-based Solution for Operating Room Scheduling with Beds Management

Abstract
The Operating Room Scheduling (ORS) problem is the task of assigning patients to operating rooms, taking into account different specialties, lengths and priority scores of each planned surgery, operating room session durations, and the availability of beds for the entire length of stay both in the Intensive Care Unit and in the wards. A proper solution to the ORS problem is of utmost importance for the quality of health care and the satisfaction of patients in hospital environments. In this paper we present an improved solution to the problem based on Answer Set Programming (ASP) that, differently from a recent one, explicitly takes beds management into account. Results of an experimental analysis, conducted on benchmarks with realistic sizes and parameters, show that ASP is a suitable solving methodology also for this improved version of the problem.
Carmine Dodaro, Giuseppe Galatà, Muhammad Kamran Khan, Marco Maratea, Ivan Porro
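
The flavour of such an ASP encoding can be conveyed with a toy sketch: an assignment choice plus aggregate constraints for session length and bed capacity, run via clingo's Python API (assuming clingo is installed). All sizes, names and constraints are invented and far simpler than the encoding evaluated in the paper.

    # Toy operating-room-scheduling sketch in ASP, run via clingo's Python
    # API. Illustrative only; not the paper's encoding.
    import clingo

    PROGRAM = """
    patient(p1,90). patient(p2,120). patient(p3,60).  % patient, surgery length (min)
    room(or1). room(or2).
    session_length(180).                              % minutes available per room
    bed_capacity(3).                                  % post-operative beds

    % Each patient is assigned to exactly one operating room.
    1 { assign(P,R) : room(R) } 1 :- patient(P,_).

    % The surgeries assigned to a room must fit into its session.
    :- room(R), session_length(L),
       #sum { D,P : assign(P,R), patient(P,D) } > L.

    % Every scheduled patient needs a bed afterwards: respect capacity.
    :- bed_capacity(B), #count { P : assign(P,_) } > B.

    #show assign/2.
    """

    def on_model(model: clingo.Model) -> None:
        print("Schedule:", [str(atom) for atom in model.symbols(shown=True)])

    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    ctl.solve(on_model=on_model)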

EASE: Enabling Hardware Assertion Synthesis from English

Abstract
In this paper, we present EASE (Enabling hardware Assertion Synthesis from English), which translates hardware design specifications written in English into a formal assertion language. Existing natural language processing (NLP) tools for hardware verification utilize only the vocabulary and grammar of a few specification documents. Hence, they lack the ability to support linguistic variations in parsing and writing natural language assertions. The grammar used in EASE does not follow a strict English syntax for writing design specifications. Our grammar incorporates dependency rules for syntactic categories which are coupled with semantic category dependencies that allow users to specify the same design specification using different word sequences in a sentence. Our NLP engine consists of interleaving operations of semantic and syntactic analyses to understand the input sentences and map differently worded sentences with the same meaning to the same logical form. Moreover, our approach also provides semantically driven suggestions for sentences that are not understood by the system. EASE has been tested on natural language requirements extracted from memory controller, UART and AMBA AXI protocol specification documents. The system has been tested for imperative, declarative and conditional types of specifications. The results show that the proposed approach can handle a more diverse set of linguistic variations than existing methods.
Rahul Krishnamurthy, Michael S. Hsiao
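
The core idea of mapping differently worded sentences with the same meaning to one logical form can be illustrated with a deliberately tiny lookup-based sketch; the categories, synonyms and target template below are invented and unrelated to EASE's actual grammar and NLP engine.

    # Toy illustration only: two differently worded requirements map to the
    # same (invented) logical form via small semantic-category lookups.
    SIGNAL_SYNONYMS = {"ready": "ready", "rdy": "ready"}
    MODAL_VERBS = {"must", "should", "shall"}
    RISE_PHRASES = {"go high", "be asserted", "rise"}

    def to_logical_form(sentence: str) -> str:
        words = sentence.lower().rstrip(".").split()
        text = " ".join(words)
        signal = next((SIGNAL_SYNONYMS[w] for w in words if w in SIGNAL_SYNONYMS), None)
        modal = any(w in MODAL_VERBS for w in words)
        rises = any(p in text for p in RISE_PHRASES)
        if signal and modal and rises:
            return f"always (eventually {signal} == 1)"   # invented template
        raise ValueError("sentence not understood")

    print(to_logical_form("The ready signal must go high."))
    print(to_logical_form("rdy shall be asserted."))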

Formalizing Object-Ontological Mapping Using F-logic

Abstract
Ontologies can represent a significant asset of domain-specific information systems, written predominantly using the object-oriented paradigm. However, to be able to work with ontological data in this paradigm, a mapping must ensure transformation between the ontology and the object world. While many software libraries provide such a mapping, they lack standardization or formal guarantees of its semantics. In this paper, we provide a formalism for mapping ontologies between description logics and F-logic, a formal language for representing structural aspects of object-oriented programming languages. This formalism allows us to precisely specify the semantics of the object-ontological mapping and thus ensure a predictable shape and behavior of the object model.
Martin Ledvinka, Petr Křemen
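
As a rough illustration of the direction of such a mapping, a DL-style class with a data property can be paired with classic F-logic signature and data molecules as below; this is only a sketch of the general idea, and the paper's formal mapping rules may take a different shape.

    % Illustrative only: a DL axiom and assertion next to F-logic molecules.
    \[
      \mathsf{Person} \sqsubseteq \exists \mathsf{hasName}.\mathsf{string}
      \qquad\leadsto\qquad
      \mathit{Person}[\,\mathit{hasName} \Rightarrow \mathit{string}\,]
    \]
    \[
      \mathsf{Person}(\mathsf{john}) \wedge \mathsf{hasName}(\mathsf{john},\text{``John''})
      \qquad\leadsto\qquad
      \mathit{john} : \mathit{Person}[\,\mathit{hasName} \rightarrow \text{``John''}\,]
    \]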

Alternating Fixpoint Operator for Hybrid MKNF Knowledge Bases as an Approximator of AFT

Abstract
Approximation fixpoint theory (AFT) provides an algebraic framework for the study of fixpoints of operators on bilattices and has found its applications in characterizing semantics for various types of logic programs and nonmonotonic languages. In this paper, we show one more application of this kind: the alternating fixpoint operator by Knorr et al. [8] for the study of well-founded semantics for hybrid MKNF knowledge bases is in fact an approximator of AFT in disguise, which, thanks to the power of abstraction of AFT, characterizes not only the well-founded semantics but also two-valued as well as three-valued semantics for hybrid MKNF knowledge bases. Furthermore, we show an improved approximator for these knowledge bases, whose least stable fixpoint is richer in information than the one formulated from Knorr et al.'s construction. This leads to an improved computation of the well-founded semantics.
Fangfang Liu, Jia-Huai You
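
For readers unfamiliar with AFT, the standard notions the abstract relies on (due to Denecker, Marek and Truszczyński) can be recalled roughly as follows; this is textbook background stated in outline, not the paper's new approximator.

    % Standard AFT background, roughly stated: an approximator A on the
    % bilattice L^2 approximates an operator O on L, and its stable revision
    % St_A determines the stable and well-founded fixpoints.
    \[
      A : L^2 \to L^2, \qquad A(x,x) = (O(x),O(x)), \qquad
      A \text{ is } \preceq_p\text{-monotone},
    \]
    \[
      \mathit{St}_A(x,y) \;=\; \bigl(\,\mathrm{lfp}\,A_1(\cdot,y),\;
      \mathrm{lfp}\,A_2(x,\cdot)\,\bigr),
    \]
    % Fixpoints of St_A give the three-valued (partial) stable semantics,
    % its least fixpoint the well-founded semantics, and exact fixpoints
    % (x,x) the two-valued stable semantics.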

Efficient TBox Reasoning with Value Restrictions—Introducing the \(\mathcal{F\!L}_{o}\textit{wer}\) Reasoner

Abstract
The Description Logic (DL) \({\mathcal {F\!L}_0}\) uses universal quantification, whereas its well-known counterpart \(\mathcal {E\!L}\) uses the existential one. While for \(\mathcal {E\!L}\) deciding subsumption in the presence of general TBoxes is tractable, this is not the case for \({\mathcal {F\!L}_0}\). We present a novel algorithm for solving the ExpTime-hard subsumption problem in \({\mathcal {F\!L}_0}\) w.r.t. general TBoxes, which is based on the computation of so-called least functional models. To build such a model, our algorithm treats TBox axioms as rules that are applied to objects of the interpretation domain. This algorithm is implemented in the \(\mathcal{F\!L}_{o}\textit{wer}\) reasoner, which uses a variant of the Rete pattern matching algorithm to find applicable rules. We present an evaluation of \(\mathcal{F\!L}_{o}\textit{wer}\) on a large set of TBoxes generated from real-world ontologies. The experimental results indicate that our prototype implementation of the specialised technique for \({\mathcal {F\!L}_0}\) leads in most cases to a huge performance gain compared to highly optimised tableau reasoners.
Friedrich Michel, Anni-Yasmin Turhan, Benjamin Zarrieß
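
A toy example (not taken from the paper) of how value restrictions propagate along the single role successor of a functional model, so that subsumers can be read off the least functional model:

    % Toy example: for T = { A ⊑ ∀r.B,  B ⊑ C }, applying the axioms as
    % rules labels the (unique) r-successor of the A-object with {B, C},
    % from which the entailed subsumption A ⊑ ∀r.C can be read off.
    \[
      \mathcal{T} = \{\, A \sqsubseteq \forall r.B,\;\; B \sqsubseteq C \,\}
      \quad\Longrightarrow\quad
      \mathcal{T} \models A \sqsubseteq \forall r.C
    \]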

Query Rewriting for DL Ontologies Under the ICAR Semantics

Abstract
In the current paper we propose a general framework for answering queries over inconsistent DL knowledge bases. The proposed framework considers the ICAR semantics and is based on a rewriting algorithm that can be applied over arbitrary DLs. Since the problem of ICAR-answering is known to be intractable for DLs other than DL-Lite, our algorithm may not terminate. However, we were able to describe sufficient termination conditions and to show that they are always satisfied for instance queries and TBoxes expressed in the semi-acyclic-\(\mathcal {EL}_{\bot }\) as well as in DL-Lite\(_{bool}\). Interestingly, recent results on UCQ-rewritability and existing techniques can be used within the proposed framework, to check if the conditions are satisfied for a given query and ontology expressed in a DL for which the problem is in general intractable.
Despoina Trivela, Giorgos Stoilos, Vasilis Vassalos

Technical Communication Papers

Frontmatter

Complementing Logical Reasoning with Sub-symbolic Commonsense

Abstract
Neuro-symbolic integration is a current field of investigation in which symbolic approaches are combined with deep learning ones. In this work we start from simple non-relational knowledge that can be extracted from text by considering the co-occurrence of entities inside textual corpora; we show that we can easily integrate this knowledge with Logic Tensor Networks (LTNs), a neuro-symbolic model. Using LTNs it is possible to integrate axioms and facts with commonsense knowledge represented in a sub-symbolic form in a single model that performs well in reasoning tasks. In spite of some current limitations, we show that the results are promising.
Federico Bianchi, Matteo Palmonari, Pascal Hitzler, Luciano Serafini
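
LTNs ground logical connectives as differentiable real-valued operations over truth degrees in [0,1]. The following numpy sketch shows one common choice of such fuzzy semantics (product t-norm and its dual) evaluated on invented co-occurrence-style scores; it is generic background, not the authors' model and not the API of any LTN library.

    # Minimal sketch of the real-valued (fuzzy) semantics LTNs build on.
    import numpy as np

    def and_prod(a, b):
        return a * b                  # product t-norm (conjunction)

    def or_prod(a, b):
        return a + b - a * b          # dual t-conorm (disjunction)

    def implies(a, b):
        return or_prod(1.0 - a, b)    # material implication on fuzzy values

    # Axiom "City(x) -> LocatedInCountry(x)" evaluated on sub-symbolic scores,
    # e.g. co-occurrence-based degrees from a corpus (values invented).
    city_score = np.array([0.9, 0.2, 0.8])
    located_score = np.array([0.85, 0.6, 0.3])
    sat = implies(city_score, located_score)
    print("satisfaction per instance:", sat)
    print("aggregated (mean) satisfaction:", sat.mean())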

Adding Constraint Tables to the DMN Standard: Preliminary Results

Abstract
The DMN standard allows users to build declarative models of their decision knowledge. The standard aims at being simple enough to allow business users to construct these models themselves, without help from IT staff. To this end, it combines simple decision tables with a clear visual notation. However, for real-life applications, DMN sometimes proves too restrictive. In this paper, we develop an extension to DMN’s decision table notation, which allows more knowledge to be expressed, while retaining the simplicity of DMN. We demonstrate our new notation on a real-life case study on product design.
Marjolein Deryck, Bram Aerts, Joost Vennekens
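
For context, a plain DMN-style decision table with a "unique" hit policy can be evaluated with a few lines of Python as sketched below; the rules and discount values are invented, and the constraint-table extension proposed in the paper is not modelled here.

    # Minimal sketch of a plain DMN-style decision table ("unique" hit policy).
    from typing import Callable, Optional

    # Each rule: (one test per input column, or None for "any"; output value).
    RULES: list[tuple[tuple[Optional[Callable], ...], float]] = [
        ((lambda c: c == "gold",   lambda n: n >= 10), 0.15),  # gold, large order
        ((lambda c: c == "gold",   lambda n: n < 10),  0.10),  # gold, small order
        ((lambda c: c == "silver", None),              0.05),  # any silver order
    ]

    def decide(customer: str, items: int) -> float:
        inputs = (customer, items)
        hits = [out for tests, out in RULES
                if all(t is None or t(v) for t, v in zip(tests, inputs))]
        if len(hits) != 1:            # unique hit policy: exactly one rule fires
            raise ValueError(f"hit policy violated: {len(hits)} matching rules")
        return hits[0]

    print(decide("gold", 12))    # -> 0.15
    print(decide("silver", 3))   # -> 0.05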

Detecting “Slippery Slope” and Other Argumentative Stances of Opposition Using Tree Kernels in Monologic Discourse

Abstract
The aim of this study is to propose an innovative methodology to classify argumentative stances in a monologic argumentative context. In particular, the proposed approach shows that Tree Kernels can be used in combination with traditional textual vectorization to discriminate between different stances of opposition without the need to extract highly engineered features. This can be useful in many Argument Mining sub-tasks. In particular, this work explores the possibility of classifying opposition stances by training multiple classifiers to reach different degrees of granularity. Noticeably, discriminating between support and opposition stances can be particularly useful when trying to detect Argument Schemes, one of the most challenging sub-tasks in the Argument Mining pipeline. In this sense, the approach can also be considered an attempt to classify stances of opposition that are related to specific Argument Schemes.
Davide Liga, Monica Palmirani
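
The "traditional textual vectorization" side of such a classifier can be sketched with scikit-learn as below, on invented toy sentences and labels; the tree-kernel component (which operates on parse trees and could be supplied to an SVM as a callable kernel) and the authors' actual data and features are not reproduced here.

    # Sketch of a TF-IDF + linear SVM baseline for stance classification.
    # Sentences and labels are invented toy data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    sentences = [
        "This policy will inevitably lead to far worse outcomes down the road.",
        "The proposal is flawed because the cited study was retracted.",
        "Adopting this measure is a first step towards losing all our freedoms.",
        "The argument ignores the evidence presented in the report.",
    ]
    labels = ["slippery_slope", "other_opposition",
              "slippery_slope", "other_opposition"]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(sentences, labels)
    print(clf.predict(["This small change will end up destroying the whole system."]))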

Fuzzy Logic Programming for Tuning Neural Networks

Abstract
Wide datasets are usually used for training and validating neural networks, which can later be tuned in order to correct their final behaviors according to a small number of test cases proposed by users. In this paper we show how the FLOPER system developed in our research group is able to perform this last task after coding a neural network with a fuzzy logic language in which program rules extend the classical notion of clause by including in their bodies both fuzzy connectives (useful for modeling activation functions of neurons) and truth degrees (associated with weights and biases in neural networks). We present an online tool which helps to select such operators and values automatically, following our recent technique for tuning this kind of fuzzy program. Moreover, our experimental results reveal that our tool generates the choices that best fit the user's preferences in a very efficient way, producing relevant improvements in the tuned neural networks.
Ginés Moreno, Jesús Pérez, José A. Riaza
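
The tuning idea itself (choose the fuzzy connective and truth degree that minimize the error on a handful of user test cases) can be sketched generically as below; this does not use FLOPER or its syntax, and the connectives, rule shape and test cases are invented.

    # Generic sketch of connective/truth-degree tuning against user test cases.
    from itertools import product

    CONNECTIVES = {
        "godel":   min,                                   # Goedel t-norm
        "product": lambda a, b: a * b,                    # product t-norm
        "luka":    lambda a, b: max(0.0, a + b - 1.0),    # Lukasiewicz t-norm
    }
    CANDIDATE_DEGREES = [0.1 * i for i in range(11)]      # candidate rule weights

    # Rule sketch: out(X) <- weight * conn(p(X), q(X)); p, q are given facts.
    facts = {"a": (0.9, 0.8), "b": (0.4, 0.7)}
    expected = {"a": 0.7, "b": 0.3}                       # user test cases

    def error(conn, weight):
        return sum(abs(weight * conn(p, q) - expected[x])
                   for x, (p, q) in facts.items())

    best = min(product(CONNECTIVES.items(), CANDIDATE_DEGREES),
               key=lambda cw: error(cw[0][1], cw[1]))
    (name, _), weight = best
    print(f"best connective: {name}, best truth degree: {weight:.1f}")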

Querying Key-Value Stores Under Single-Key Constraints: Rewriting and Parallelization

Abstract
We consider the problem of querying key-value stores in the presence of semantic constraints, expressed as rules on keys, whose purpose is to establish a high-level view over a collection of legacy databases. We focus on the rewriting-based approach for data access, which is the most suitable for the key-value store setting because of the limited expressivity of the data model employed by such systems. Our main contribution is a parallel technique for rewriting and evaluating tree-shaped queries under constraints which is able to speed up query answering. We implemented and evaluated our parallel technique. Results show significant performance gains compared to the baseline sequential approach.
Olivier Rodriguez, Reza Akbarinia, Federico Ulliana
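
A toy sketch of the two ingredients mentioned in the abstract (rewriting a key path under single-key rules and evaluating the rewritings in parallel) is given below; the store contents, rules and query are invented, and the paper's data model and parallel algorithm are considerably richer.

    # Toy rewriting + parallel evaluation over a tiny key-value store.
    from concurrent.futures import ThreadPoolExecutor
    from itertools import product

    # Single-key rules: alternative keys under which the same data may be stored.
    RULES = {"name": ["fullName"], "addr": ["address", "location"]}

    STORE = [
        {"fullName": "Ada Lovelace", "location": {"city": "London"}},
        {"name": "Alan Turing", "address": {"city": "Wilmslow"}},
    ]

    def rewritings(path):
        """All ways to replace each key of the path by one of its alternatives."""
        options = [[k] + RULES.get(k, []) for k in path]
        return [list(p) for p in product(*options)]

    def evaluate(path):
        """Follow one concrete key path through every record of the store."""
        results = []
        for record in STORE:
            value = record
            for key in path:
                if not isinstance(value, dict) or key not in value:
                    value = None
                    break
                value = value[key]
            if value is not None:
                results.append(value)
        return results

    query = ["addr", "city"]
    with ThreadPoolExecutor() as pool:        # evaluate each rewriting in parallel
        answers = [a for res in pool.map(evaluate, rewritings(query)) for a in res]
    print(answers)                            # ['Wilmslow', 'London']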

Backmatter
