
2013 | Book

Web Reasoning and Rule Systems

7th International Conference, RR 2013, Mannheim, Germany, July 27-29, 2013. Proceedings

Edited by: Wolfgang Faber, Domenico Lembo

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 7th International Conference on Web Reasoning and Rule Systems, RR 2013, held in Mannheim, Germany, in July 2013. The 19 revised research papers and 4 technical communications presented together with 2 invited talks and 1 tutorial talk were carefully reviewed and selected from 34 submissions. The scope of the conference covers decision making, planning, and intelligent agents; reasoning, machine learning, knowledge extraction, and IR technologies; large-scale data management and reasoning on the Web of Data; data integration, dataspaces, and ontology-based data access; non-standard reasoning; algorithms for distributed, parallelized, and scalable reasoning; and system descriptions and experimentation.

Table of Contents

Frontmatter

Invited Talks

Inconsistency Management for Description Logic Programs and Beyond
Abstract
Description logic programs are a declarative approach to access ontological knowledge bases through a query interface, and to combine the query results using rules that can be nonmonotonic. Noticeably, a bidirectional information flow between the rules and the ontology is supported, which opens the possibility of sophisticated data exchange and advanced reasoning tasks on top of ontologies. As it happens, inconsistency may arise from the interplay of the rules and the ontology. We consider this issue and discuss different origins of inconsistency, as well as approaches to deal with it. While recent progress has been made, several issues remain to be explored; among them is inconsistency management for generalizations of description logic programs, and in particular for HEX programs, where this issue is largely unexplored and challenging, all the more so if distributed or web-based evaluation scenarios are considered.
Thomas Eiter, Michael Fink, Daria Stepanova
Reasoning About Pattern-Based XML Queries
Abstract
We survey results about static analysis of pattern-based queries over XML documents. These queries are analogs of conjunctive queries, their unions and Boolean combinations, in which tree patterns play the role of atomic formulae. As in the relational case, they can be viewed as both queries and incomplete documents, and thus static analysis problems can also be viewed as finding certain answers of queries over such documents. We look at satisfiability of patterns under schemas, containment of queries for various features of XML used in queries, finding certain answers, and applications of pattern-based queries in reasoning about schema mappings for data exchange.
Amélie Gheerbrant, Leonid Libkin, Cristina Sirangelo
Answer Set Programming: Language, Applications and Development Tools
Abstract
Answer Set Programming (ASP) is a powerful language for knowledge representation and reasoning that has been developed in the field of nonmonotonic reasoning and logic programming. The high knowledge-modeling power of ASP, together with the availability of efficient ASP systems, has led to renewed interest in this formalism in recent years. ASP has been applied in many scientific applications, ranging from Artificial Intelligence to Knowledge Management and Information Integration. The big challenge now is to show that ASP can be profitably used for real-world applications and can also attract significant interest in industry.
In this paper, we report on our hands-on experience in developing real-world applications in ASP. We have employed the DLV system, the first ASP system undergoing industrial exploitation by a spin-off company; it is well suited for application development, thanks also to powerful development tools that support the activities of researchers and implementors. In particular, we describe two real-world ASP applications for work-force management and e-tourism, and we focus on two advanced development tools for DLV: ASPIDE and JDLV. ASPIDE is an extensible integrated development environment for ASP, while JDLV is a plug-in for Eclipse that integrates ASP into a well-established development platform and enables bilateral interaction between ASP and Java.
Giovanni Grasso, Nicola Leone, Francesco Ricca
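
The tools named above (DLV, ASPIDE, JDLV) are not reproduced here. As a rough, hedged illustration of what embedding ASP in a host language looks like in practice, the following sketch uses the Python API of the clingo solver as a stand-in for the DLV/JDLV tool chain; the work-force program is a toy, hypothetical example and none of its predicates or constraints come from the paper.

import clingo

# Toy ASP program (hypothetical): assign each person exactly one shift,
# subject to a simple constraint. The syntax is gringo/clingo, not DLV.
program = """
person(ann). person(bob).
shift(morning). shift(evening).
% each person gets exactly one shift
1 { assign(P, S) : shift(S) } 1 :- person(P).
% hypothetical requirement: bob cannot take the morning shift
:- assign(bob, morning).
"""

ctl = clingo.Control(["0"])           # "0" asks for all answer sets
ctl.add("base", [], program)          # load the program into the 'base' part
ctl.ground([("base", [])])            # ground it
ctl.solve(on_model=lambda m: print("Answer set:", m))

The paper's JDLV plays an analogous role on the Java side, enabling the bilateral ASP-Java interaction described in the abstract; its specifics are not sketched here.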

Full Papers

A Variant of Earley Deduction with Partial Evaluation
Abstract
We present an algorithm for query evaluation given a logic program consisting of function-free Datalog rules. The algorithm is based on Earley Deduction [7,10], but uses explicit states to eliminate rules which are no longer needed, and partial evaluation to minimize the work at runtime. At least in certain cases, the new method is more efficient than our SLDMagic method [2], and also beats the standard Magic Set method. It is also theoretically interesting, because it consumes one EDB fact in each step. Because of its origin, it is especially well suited for parsing applications, e.g., for extracting data from web pages. However, it has the potential to speed up basic Datalog reasoning for many semantic web applications.
Stefan Brass, Heike Stephan
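
The Earley-style, top-down procedure described above (like SLDMagic and Magic Sets) is not reproduced here. As a concrete reference point only, the following minimal, self-contained Python sketch implements plain naive bottom-up Datalog evaluation, the textbook baseline that such goal-directed methods improve on; all predicates, facts, and rules are hypothetical.

def naive_eval(facts, rules):
    """Naive bottom-up evaluation. facts: tuples like ('edge', 'a', 'b');
    rules: (head, body) pairs whose atoms use uppercase strings as variables."""
    db = set(facts)
    while True:
        derived = set()
        for head, body in rules:
            for subst in match(list(body), db, {}):
                derived.add(tuple(subst.get(t, t) for t in head))
        if derived <= db:                     # fixpoint reached
            return db
        db |= derived

def match(body, db, subst):
    """Enumerate substitutions making every body atom a fact in db."""
    if not body:
        yield dict(subst)
        return
    atom, rest = body[0], body[1:]
    for fact in db:
        if fact[0] != atom[0] or len(fact) != len(atom):
            continue
        s, ok = dict(subst), True
        for term, value in zip(atom[1:], fact[1:]):
            if term[0].isupper():             # variable: bind or check
                if s.setdefault(term, value) != value:
                    ok = False
                    break
            elif term != value:               # constant mismatch
                ok = False
                break
        if ok:
            yield from match(rest, db, s)

# Transitive closure: path(X,Y) :- edge(X,Y).  path(X,Z) :- edge(X,Y), path(Y,Z).
facts = {('edge', 'a', 'b'), ('edge', 'b', 'c')}
rules = [(('path', 'X', 'Y'), [('edge', 'X', 'Y')]),
         (('path', 'X', 'Z'), [('edge', 'X', 'Y'), ('path', 'Y', 'Z')])]
print(sorted(f for f in naive_eval(facts, rules) if f[0] == 'path'))

Magic Sets, SLDMagic, and the Earley-style variant above all aim to avoid the work this bottom-up loop wastes on facts that are irrelevant to the query at hand.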
Verification and Synthesis in Description Logic Based Dynamic Systems
Abstract
In this paper, we devise a general framework for formalizing Description Logic Based Dynamic Systems that is parametric w.r.t. the description logic knowledge base and the progression mechanism of interest. Using this framework we study verification and adversarial synthesis for specifications expressed in a variant of first-order μ-calculus, with a controlled form of quantification across successive states. We provide key decidability results for both verification and synthesis, under a “bounded-state” assumption. We then study two instantiations of this general framework, where the description logic knowledge base is expressed in DL-Lite and \(\mathcal{ALCQI}\), respectively.
Diego Calvanese, Giuseppe De Giacomo, Marco Montali, Fabio Patrizi
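
The μ-calculus variant used above is not spelled out here. As a rough, hypothetical illustration of the kind of specification involved (not taken from the paper), a safety property such as “no reachable state entails \(\mathsf{Bad}\)” can be written as the greatest-fixpoint formula \(\nu Z.\, (\neg \exists x.\, \mathsf{Bad}(x)) \wedge [-]Z\), where \([-]\) ranges over all successor states. The first-order quantifier \(\exists x\) ranges over individuals of the current knowledge base, and controlling how such quantified individuals may be referred to across successive states is the restriction the abstract alludes to.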
Towards an Efficient Algorithm to Reason over Description Logics Extended with Nominal Schemas
Abstract
Extending description logics with so-called nominal schemas has been shown to be a major step towards integrating description logics with rules paradigms. However, establishing efficient algorithms for reasoning with nominal schemas has so far been a challenge. In this paper, we present an algorithm to reason with the description logic fragment \(\mathcal{ELROV}_{n}\), a fragment that extends \(\mathcal{EL}^{++}\) with nominal schemas. We also report on an implementation and experimental evaluation of the algorithm, which shows that our approach is indeed rather efficient.
David Carral, Cong Wang, Pascal Hitzler
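
For readers unfamiliar with the constructor, a nominal schema \(\{x\}\) is a nominal whose individual is a variable ranging over the named individuals of the knowledge base. As a hedged, hypothetical illustration (not an axiom from the paper), \(\exists \mathit{reviews}.\{x\} \sqcap \exists \mathit{authorOf}.\{x\} \sqsubseteq \mathit{Conflict}\) grounds to one ordinary nominal axiom per named individual and thus expresses the rule-like join \(\mathit{reviews}(u,x) \wedge \mathit{authorOf}(u,x) \rightarrow \mathit{Conflict}(u)\), which plain nominals cannot state without enumerating all individuals by hand. This grounding is the main source of the efficiency challenge the abstract mentions.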
Computing Incoherence Explanations for Learned Ontologies
Abstract
Recent developments in ontology learning research have made it possible to generate significantly more expressive ontologies. Novel approaches can support human ontology engineers in rapidly creating logically complex and richly axiomatized schemas. Although the higher complexity increases the likelihood of modeling flaws, there is currently little tool support for diagnosing and repairing ontologies produced by automated approaches. Off-the-shelf debuggers based on logical reasoning struggle with the particular characteristics of learned ontologies. They are mostly inefficient when it comes to detecting modeling flaws, or highlighting all of the logical reasons for the discovered problems. In this paper, we propose a reasoning approach for discovering unsatisfiable classes and properties that is optimized for handling automatically generated, expressive ontologies. We describe our implementation of this approach, which we evaluated by comparing it with state-of-the-art reasoners.
Daniel Fleischhacker, Christian Meilicke, Johanna Völker, Mathias Niepert
An Ontology-Based Reasoning Approach for Electric Power Utilities
Abstract
Because of semantic conflicts, the exchange of information between heterogeneous applications remains a complex task. One way to address this problem is to use ontologies for the identification and association of semantically corresponding information concepts. In the electric power industry, the IEC/CIM represents the most complete and widely accepted ontology. We attempt to show through three concrete examples how the CIM can reap advantages from a formal representation of knowledge in order to support complex processes. We present a semantic approach for finding ringlets in the distribution network, for checking specific data inconsistencies and finally for identifying CIM topological nodes. We conclude by stating that the combination of CIM and RDF has the main advantage of offering valuable flexibility in processing complex tasks.
Mohamed Gaha, Arnaud Zinflou, Christian Langheit, Alexandre Bouffard, Mathieu Viau, Luc Vouligny
Conjunctive Queries with Negation over DL-Lite: A Closer Look
Abstract
While conjunctive query (CQ) answering over DL-Lite has been studied extensively, there have been few attempts to analyse CQs with negated atoms. This paper deepens the study of the problem. Answering CQs with safe negation and CQs with a single inequality over DL-Lite with role inclusions is shown to be undecidable, even for a fixed TBox and query. Without role inclusions, answering CQs with one inequality is P-hard and with two inequalities coNP-hard in data complexity.
Víctor Gutiérrez-Basulto, Yazmín Ibañez-García, Roman Kontchakov, Egor V. Kostylev
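
As a small, hypothetical illustration of the two query classes studied (not an example from the paper): \(q(x) \leftarrow \mathit{Employee}(x) \wedge \neg \mathit{Manager}(x)\) is a CQ with safe negation, since the negated variable also occurs in a positive atom, while \(q(x) \leftarrow \mathit{worksFor}(x,y) \wedge \mathit{worksFor}(x,z) \wedge y \neq z\) is a CQ with a single inequality, asking for individuals that work for two provably distinct objects. The results above concern computing certain answers to such queries over DL-Lite knowledge bases.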
On the Exploration of the Query Rewriting Space with Existential Rules
Abstract
We address the issue of Ontology-Based Data Access, with ontologies represented in the framework of existential rules, also known as Datalog+/-. A well-known approach involves rewriting the query using ontological knowledge. We focus here on the basic rewriting technique which consists of rewriting a conjunctive query (CQ) into a union of CQs. We assume that the set of rules is a finite unification set, i.e., for any CQ, there exists a finite sound and complete rewriting of this CQ with the rules. First, we study a generic breadth-first rewriting algorithm, which takes as input any rewriting operator. We define properties of the rewriting operator that ensure the correctness and the termination of this algorithm. Second, we study some operators with respect to the exhibited properties. All these operators are based on so-called piece-unifiers, but they lead to different explorations of the rewriting space. Finally, an experimental comparison of these operators within an implementation of the generic breadth-first rewriting algorithm is presented.
Mélanie König, Michel Leclère, Marie-Laure Mugnier, Michaël Thomazo
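
As a hedged, hypothetical illustration of piece-unification (not an example from the paper): given the existential rule \(R: \mathit{Prof}(x) \rightarrow \exists y\, (\mathit{teaches}(x,y) \wedge \mathit{Course}(y))\) and the CQ \(q(u) \leftarrow \mathit{teaches}(u,v) \wedge \mathit{Course}(v)\), the two query atoms form a single piece: since \(v\) is unified with the existential variable \(y\), every atom mentioning \(v\) must be unified with the head of \(R\) together. The resulting rewriting is \(q'(u) \leftarrow \mathit{Prof}(u)\), and the union \(q \cup q'\) is a sound rewriting (and a complete one if no other rule head can produce \(\mathit{teaches}\) or \(\mathit{Course}\) atoms); unifying \(\mathit{teaches}(u,v)\) alone would not be a piece-unifier.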
Incomplete Information in RDF
Abstract
We extend RDF with the ability to represent property values that exist, but are unknown or partially known, using constraints. Following ideas from the incomplete information literature, we develop a semantics for this extension of RDF, called RDF\(^{i}\), and study SPARQL query evaluation in this framework.
Charalampos Nikolaou, Manolis Koubarakis
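
A small, hypothetical example of the kind of incompleteness targeted here (not taken from the paper): a graph may state that \(\mathit{sensor}_1\) has some temperature value \(t\) that is unknown but constrained by \(20 \le t \le 25\). A query for sensors with temperature above \(15\) then returns \(\mathit{sensor}_1\) as a certain answer, since the condition holds under every admissible value, whereas a query for temperature above \(22\) does not, since only some admissible values satisfy it.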
RIO: Minimizing User Interaction in Ontology Debugging
Abstract
Efficient ontology debugging is a cornerstone for many activities in the context of the Semantic Web, especially when automatic tools produce (parts of) ontologies, as in the field of ontology matching. The best currently known interactive debugging systems rely upon some meta information in terms of fault probabilities, which can speed up the debugging procedure in good cases but can also have a negative impact on performance in bad cases. The problem is that assessment of the meta information is only possible a posteriori. Consequently, as long as the actual fault is unknown, there is always some risk of suboptimal interactive diagnosis discrimination. As an alternative, one might prefer to rely on a tool which pursues a no-risk strategy. In this case, however, possibly well-chosen meta information cannot be exploited, resulting again in inefficient debugging actions. In this work we present a reinforcement learning strategy that continuously adapts its behavior depending on the performance achieved and minimizes the risk of using low-quality meta information. This method is therefore suitable for application scenarios where reliable a priori fault estimates are difficult to obtain. Using a corpus of incoherent real-world ontologies from the field of ontology matching, we show that the proposed risk-aware query strategy outperforms both meta-information-based approaches and no-risk strategies on average in terms of the required amount of user interaction.
Patrick Rodler, Kostyantyn Shchekotykhin, Philipp Fleiss, Gerhard Friedrich
Eliminating Nonmonotonic DL-Atoms in Description Logic Programs
Abstract
Nonmonotonic description logic programs (dl-programs) are a well-known formalism for combining rules and ontologies, where rules interact with an underlying ontology via dl-atoms that allow queries to the ontology under a possible update of its assertional part. It is known that dl-atoms may be nonmonotonic and that dl-programs without nonmonotonic dl-atoms have many desirable properties. In this paper, we show that it is possible to remove nonmonotonic dl-atoms from a dl-program while preserving its strong/weak answer set semantics. Though the translation is faithful, it relies on knowledge about the monotonicity of dl-atoms. We then thoroughly investigate the complexity of deciding whether a dl-atom is monotonic under the description logics DL-Lite\(_{\mathcal R}\), \({\mathcal{EL}}^{++}\), \({\mathcal{SHIF}}\) and \({\mathcal{SHOIN}}\), which is of independent interest for computing strong answer sets. We show that the problem is intractable in general, but tractable for dl-atoms with bounded queries in DL-Lite\(_{\mathcal R}\).
Yisong Wang, Thomas Eiter, Jia-Huai You, LiYan Yuan, Yi-Dong Shen
BUNDLE: A Reasoner for Probabilistic Ontologies
Abstract
Representing uncertain information is very important for modeling real-world domains. Recently, the DISPONTE semantics has been proposed for probabilistic description logics. In DISPONTE, the axioms of a knowledge base can be annotated with a set of variables and a real number between 0 and 1. This real number represents the probability of each version of the axiom in which the specified variables are instantiated. In this paper we present the algorithm BUNDLE for computing the probability of queries from DISPONTE knowledge bases that follow the \(\mathcal{ALC}\) semantics. BUNDLE exploits an underlying DL reasoner, such as Pellet, that is able to return explanations for queries. The explanations are encoded in a Binary Decision Diagram, from which the probability of the query is computed. The experiments performed by applying BUNDLE to probabilistic knowledge bases show that it can handle ontologies of realistic size and is competitive with the system PRONTO for the probabilistic description logic P-\(\mathcal{SHIQ}\)(D).
Fabrizio Riguzzi, Elena Bellodi, Evelina Lamma, Riccardo Zese
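
A small worked example of the computation BUNDLE automates, with hypothetical numbers not taken from the paper: suppose a query \(Q\) has two explanations, each consisting of a single probabilistic axiom annotated with \(0.4\) and \(0.3\), respectively. Under DISPONTE the axioms are selected independently, so \(P(Q) = 1 - (1 - 0.4)(1 - 0.3) = 0.58\). With larger, overlapping explanations this computation can no longer be done term by term, which is where the Binary Decision Diagram representation pays off.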

Technical Communications

Detection of Inconsistencies in Rules Due to Changes in Ontologies: Let’s Get Formal
Abstract
In this paper, we focus on the impact of ontology changes on production rules, in the context of rule programs written over the entities of OWL ontologies. Ontology evolution may then make rules inconsistent with respect to the knowledge modeled by the ontology. To address this problem, we propose to combine two approaches: the syntactic approach of the Model-Detect-Repair (\(\mathcal{MDR}\)) method, and a semantic approach based on a formal description of production rules and rule program inconsistencies. The present paper shows, on simple use cases, the expected benefits of such a combination, which relies on existing implementations.
Bruno Berstel-Da Silva, Amina Chniti
Rule Revision in Normal DL Logic Programs
Abstract
Although several proposals to combine description logics with logic programming rules have been brought forward, hardly any of these approaches capture the dynamic nature of the Semantic Web. In this paper, we look at an expressive combination formalism, normal DL logic programs, and address changes to the rule component from the viewpoint of belief revision. To deal with inconsistency caused in revising normal DL logic programs, we first introduce a three-valued answer set semantics and present some useful properties. We then develop a revision operator for normal DL logic programs and show that our revision satisfies major postulates for logic program revision.
Sebastian Binnewies, Yisong Wang, Bela Stantic, Kewen Wang
OBDA and Intermodal Logistics: Active Projects and Applications
Abstract
In this paper we present the current state of our funded research projects investigating Ontology-Based Data Access in the context of intermodal logistics. This application domain is particularly challenging for two main reasons. On the one hand, it is characterized by very large amounts of data; on the other hand, the design of a conceptual layer must strike a balance between efficient but potentially simplistic models and sophisticated but potentially highly inefficient ones.
Jean-Rémi Bourguet, Giuseppe Cicala, Luca Pulina, Armando Tacchella
Semantic Pervasive Advertising
Abstract
Pervasive advertising targets consumers on the move with ads displayed on their mobile devices. As with web advertising, ads are distributed by embedding them into websites and apps, easily flooding consumers with a large number of uninteresting offers. As the pervasive setting amplifies traditional issues such as targeting, cost, and privacy, we argue the need for a new perspective on the problem. We introduce PervADs, a privacy-preserving, user-centric, and pervasive ads-distribution platform which uses semantic technologies to reason about the consumer’s context and the intended target of the ads.
Lorenzo Carrara, Giorgio Orsi, Letizia Tanca
Semantics for Mapping Relations in SKOS
Abstract
SKOS (Simple Knowledge Organisation System) is a W3C Recommendation for sharing and linking structured vocabularies (taxonomies, thesauri, classification schemes, etc.) via the Web [1]. Since the SKOS specification leaves the semantics of SKOS rather open, large modeling projects may specialise SKOS with informal modeling guidelines that narrow down the class of acceptable models (cf. [5]). Unfortunately, verifying that a (large) model complies with informal guidelines is a time-consuming, error-prone manual process.
Mika Cohen
From OWL to DL-Lite through Efficient Ontology Approximation
Abstract
Ontologies provide a conceptualization of a domain of interest which can be used for different objectives, such as providing a formal description of the domain for documentation purposes, or providing a mechanism for reasoning upon the domain. For instance, they are the core element of the Ontology-Based Data Access (OBDA) [3,8] paradigm, in which the ontology is used as a conceptual view that allows user access to the underlying data sources. When the aim is to use an ontology as a formal description of the domain of interest, expressive languages prove useful. If instead the goal is to use the ontology for reasoning tasks which require low computational complexity, the high expressivity of the modeling language may be a hindrance. In this scenario, approximating ontologies expressed in very expressive languages by ontologies expressed in languages that keep the computational complexity of reasoning low is pivotal.
Marco Console, Valerio Santarelli, Domenico Fabio Savo
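
As a hedged, hypothetical illustration of such approximation (not an example from the paper): the axiom \(A \sqsubseteq B \sqcup C\) has no DL-Lite counterpart, but if the ontology also contains \(B \sqsubseteq D\) and \(C \sqsubseteq D\), the entailed DL-Lite axiom \(A \sqsubseteq D\) can be kept. A sound approximation of this kind retains only DL-Lite consequences of the original ontology, trading some completeness for the low reasoning complexity that OBDA requires.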
PQMPMS: A Preference-enabled Querying Mechanism for Personalized Mobile Search
Abstract
A key challenge for personalized mobile search is to tailor the answers to the specific user by considering her contextual situation. To adapt the retrieved items to the user’s context, this paper presents a preference-enabled querying mechanism for personalized mobile search. By exploiting the user’s dialogue history, we infer weighted user preferences and interests. To further compute personalized answers, we aim to continuously collect the ratings given by the user’s friends on relevant topics from stream-based data sources such as Twitter. An experiment shows that our approach computes the most relevant answers, increasing the quality of the user’s search experience.
Beibei Hu, Yves Vanrompay, Marie-Aude Aufaure
Kiabora: An Analyzer of Existential Rule Bases
Abstract
Kiabora is a software tool dedicated to the analysis of a base of existential rules (also called Datalog± rules). It is able to check if a set of rules belongs to a known class for which entailment is decidable, either directly, by checking some syntactic properties, or by means of its Graph of Rule Dependencies, which allows decidable cases to be combined. Kiabora is available online via a simple web form. It is written in Java and is easily extensible to take new decidability results into account. It comes with a textual format, called DLGP (for Datalog Plus), which can be seen as an extension of the usual plain Datalog format. In this paper, we briefly introduce the existential rule framework as well as decidability results, and present the analysis performed by Kiabora. More details are available on the Kiabora website.
Michel Leclère, Marie-Laure Mugnier, Swan Rocher
StreamRule: A Nonmonotonic Stream Reasoning System for the Semantic Web
Abstract
Stream reasoning is an emerging research field focused on dynamic processing and continuous reasoning over huge volumes of streaming data. Finding the right trade-off between scalability and expressivity is a key challenge in this area. In this paper, we want to provide a baseline for exploring the applicability of complex reasoning to the Web of Data based on a solution that combines results and approaches from database research, stream processing, and nonmonotonic logic programming.
Alessandra Mileo, Ahmed Abdelrahman, Sean Policarpio, Manfred Hauswirth
An Integrated Environment for Reasoning over Ontologies via Logic Programming
Abstract
Ontology-based reasoning is considered a crucial task in the area of knowledge management. In this context, interest in approaches that resort to Datalog (and its extensions) for implementing various reasoning tasks over ontologies is growing. Nonetheless, from the developer’s point of view, the editing environments for ontologies on the one hand and Datalog-like logic programs on the other are often developed independently and lack a common perspective. In this paper we address this issue by proposing the integration of two major development environments for Datalog programs and ontologies, respectively: ASPIDE and Protégé. We extended both systems with specific plugins that enable a synergic interaction between the two development environments. The developer can then handle both ontologies and logic-based reasoning over them by exploiting tools integrated to work together.
Barbara Nardi, Kristian Reale, Francesco Ricca, Giorgio Terracina
HornDL: An Expressive Horn Description Logic with PTime Data Complexity
Abstract
We introduce a Horn description logic called Horn-DL, which is strictly and essentially richer than Horn-\(\mathcal{SROIQ}\), while still having PTime data complexity. In comparison with Horn-\(\mathcal{SROIQ}\), HornDL additionally allows the universal role and assertions of the form irreflexive\((s)\), \(\lnot s(a,b)\), \(a \not\doteq b\). More importantly, in contrast to all the well-known Horn fragments \(\mathcal{EL}\), DL-Lite, DLP, Horn-\(\mathcal{SHIQ}\), Horn-\(\mathcal{SROIQ}\) of description logics, HornDL allows a form of the concept constructor “universal restriction” to appear on the left-hand side of terminological inclusion axioms. Namely, a universal restriction can be used in such places in conjunction with the corresponding existential restriction. In the long version of this paper, we present the first algorithm with PTime data complexity for checking satisfiability of HornDL knowledge bases.
Linh Anh Nguyen, Thi-Bich-Loc Nguyen, Andrzej Szałas
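
As a hedged, hypothetical illustration of the distinguishing feature (not an axiom from the paper): HornDL permits a left-hand-side universal restriction in conjunction with the corresponding existential restriction, as in \(\exists r.\top \sqcap \forall r.A \sqsubseteq B\), read as “everything that has some \(r\)-successor and all of whose \(r\)-successors belong to \(A\) is a \(B\)”; the Horn fragments listed above do not admit \(\forall r.A\) on the left-hand side of inclusion axioms.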
Parameter Learning for Probabilistic Ontologies
Abstract
Recently, the problem of representing uncertainty in Description Logics (DLs) has received increasing attention. In probabilistic DLs, axioms contain numeric parameters that are often difficult for a human to specify or tune. In this paper we present an approach for learning and tuning the parameters of probabilistic ontologies from data. The resulting algorithm, called EDGE, is targeted at DLs following the DISPONTE approach, which applies the distribution semantics to DLs.
Fabrizio Riguzzi, Elena Bellodi, Evelina Lamma, Riccardo Zese
Backmatter
Metadata
Title
Web Reasoning and Rule Systems
Edited by
Wolfgang Faber
Domenico Lembo
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-39666-3
Print ISBN
978-3-642-39665-6
DOI
https://doi.org/10.1007/978-3-642-39666-3