
2017 | Book

Transactions on Large-Scale Data- and Knowledge-Centered Systems XXXIV

Special Issue on Consistency and Inconsistency in Data-Centric Applications

Edited by: Abdelkader Hameurlain, Josef Küng, Prof. Dr. Roland Wagner, Prof. Hendrik Decker

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This volume, the 34th issue of Transactions on Large-Scale Data- and Knowledge-Centered Systems, constitutes a special issue consisting of seven papers on the subject of Consistency and Inconsistency in Data-Centric Applications. The volume opens with an invited article on basic postulates for inconsistency measures. Three of the remaining six papers are revised, extended versions of papers presented at the First International Workshop on Consistency and Inconsistency, COIN 2016, held in conjunction with DEXA 2016 in Porto, Portugal, in September 2016. The other three papers were selected from submissions to a call for contributions to this edition. Each of the papers highlights a particular subtopic. However, all are concerned with logical inconsistencies that are either to be systematically avoided, or reasoned with consistently, i.e., without running the danger of an explosion of inferences.

Table of Contents

Frontmatter
Basic Postulates for Inconsistency Measures
Abstract
Postulates for inconsistency measures are examined, the set of postulates proposed by Hunter and Konieczny being the starting point. The focus is on two postulates that have been questioned by various authors. Studying the first suggests a systematic transformation to guard postulates against a certain kind of counter-example. The second postulate under investigation is devoted to independence, for which a general version is proposed that avoids the pitfalls mentioned in the literature. Combining these two additions with some postulates previously introduced by the same author yields a set of basic postulates alternative to the core set given by Hunter and Konieczny.
Philippe Besnard
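For orientation, the basic postulates of Hunter and Konieczny for an inconsistency measure I over a knowledge base K, which this paper takes as its starting point, are commonly stated roughly as follows (a paraphrase for readers new to the topic, not the formulation used in the paper):

Consistency: I(K) = 0 if and only if K is consistent.
Monotony: I(K) ≤ I(K ∪ K′).
Free formula independence: if α is a free formula of K ∪ {α}, i.e. α belongs to no minimal inconsistent subset, then I(K ∪ {α}) = I(K).
Dominance: if α is consistent and α entails β, then I(K ∪ {α}) ≥ I(K ∪ {β}).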
Batch Composite Transactions in Stream Processing
Abstract
Stream processing is about processing continuous streams of data by programs in a workflow. Continuous execution is discretized by grouping input stream tuples into batches and using one batch at a time for the execution of programs. As source input batches arrive continuously, several batches may be processed in the workflow simultaneously. Ensuring the correctness of these concurrent executions is important. As in databases and several advanced applications, the transaction concept can be applied to regulate concurrent executions and ensure their correctness in stream processing. The first step is defining transactions corresponding to the executions in a meaningful way. A general requirement in stream processing is that each batch be processed completely in the workflow. That is, all the programs triggered by the batch, directly or transitively, in the workflow must be executed successfully. Then, considering each program execution as a transaction, all the transactions involved in processing a batch can be grouped into a single batch composite transaction, abbreviated as BCT, and transactional properties can be applied to these BCTs. This works well when a batch is processed individually and completely in isolation. However, when batches are split, merged or overlapped along the workflow computation, the resulting BCTs will have some transactions in common, and applying transactional properties to them becomes complicated. We overcome these problems by defining nonblocking BCTs that have disjoint collections of transactions. They satisfy properties analogous to those of database transactions and facilitate (i) defining correctness of concurrent executions in terms of equivalent serial executions of composite transactions and (ii) processing each batch either completely or not at all, and rolling back partially processed batches without affecting the processing of other batches. We also suggest an appropriate rollback mechanism.
K. Vidyasankar
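As a rough illustration of the composite-transaction idea (a hypothetical sketch, not the paper's formal model; all names below are invented), a BCT can be viewed as an all-or-nothing wrapper around the program executions triggered by one batch:

# Hypothetical sketch of a batch composite transaction (BCT): the program
# executions triggered by one input batch are grouped and either all commit
# or all roll back, so each batch is processed completely or not at all.

class BatchCompositeTransaction:
    def __init__(self, batch_id):
        self.batch_id = batch_id
        self.executed = []                       # completed sub-transactions

    def run(self, programs, batch):
        try:
            for program in programs:             # each execution = one transaction
                result = program(batch)
                self.executed.append((program.__name__, result))
            self.commit()
        except Exception:
            self.rollback()
            raise

    def commit(self):
        print(f"batch {self.batch_id}: committed {len(self.executed)} steps")

    def rollback(self):
        # Undo completed sub-transactions in reverse order; other batches are
        # separate BCT instances, so they remain unaffected.
        for name, _ in reversed(self.executed):
            print(f"batch {self.batch_id}: undoing {name}")
        self.executed.clear()

def aggregate(batch):
    return sum(batch)

BatchCompositeTransaction(batch_id=1).run([aggregate], batch=[1, 2, 3])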
Enhancing User Rating Database Consistency Through Pruning
Abstract
Recommender systems rely on information about users’ past behavior to formulate recommendations about their future actions. However, as time goes by, people’s interests and preferences may change: people listen to different singers or even different types of music, watch different types of movies, read different types of books and so on. Due to such changes, an amount of inconsistency is introduced into the database, since a portion of it no longer reflects the current preferences of the users, which is its intended purpose.
In this paper, we present a pruning technique that removes aged user behavior data from the ratings database, which are bound to correspond to invalidated preferences of the user. Through pruning, (1) inconsistencies are removed and data quality is upgraded, (2) better rating prediction generation times are achieved and (3) the ratings database size is reduced. We also propose an algorithm for determining the amount of pruning that should be performed, allowing the tuning and operation of the pruning algorithm in an unsupervised fashion.
The proposed technique is evaluated and compared against seven aging algorithms, which reduce the importance of aged ratings, and a state-of-the-art pruning algorithm, using datasets with varying characteristics. It is also validated using two distinct rating prediction computation strategies, namely collaborative filtering and matrix factorization. The proposed technique needs no extra information concerning the items’ characteristics (e.g. the categories they belong to or their attribute values), can be used in any rating database that includes a timestamp, and has been shown to be effective for user–item databases of any size under both rating prediction computation strategies.
Dionisis Margaris, Costas Vassilakis
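The core pruning step can be pictured with a short sketch (hypothetical code; the paper additionally determines the amount of pruning automatically, whereas the age threshold below is simply assumed):

# Hypothetical sketch of timestamp-based pruning of a ratings database.
# Each rating is a (user, item, rating, timestamp) tuple; ratings older
# than a cutoff are removed, since they likely reflect outdated preferences.

def prune_ratings(ratings, now, max_age_days=3 * 365):
    """Return only the ratings whose age, relative to 'now', is within the limit."""
    cutoff = now - max_age_days * 24 * 3600      # timestamps in Unix seconds
    return [r for r in ratings if r[3] >= cutoff]

ratings = [
    ("alice", "item_1", 5.0, 1_262_304_000),     # rated in 2010
    ("alice", "item_2", 4.0, 1_672_531_200),     # rated in 2023
]
print(prune_ratings(ratings, now=1_700_000_000))  # keeps only the recent rating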
A Second Generation of Peer-to-Peer Semantic Wikis
Abstract
P2P Semantic Wikis (P2PSW) constitute a collaborative editing tool for the creation, sharing and management of knowledge and ontologies. They enable massive collaboration in a distributed manner on replicated data composed of semantic wiki pages and semantic annotations. P2PSW are an instantiation of the optimistic replication model for semantic wikis. They ensure eventual syntactic consistency, i.e. the wiki pages and semantic annotation stores of the peers will eventually become identical. In spite of their advantages, these wikis do not provide a mechanism to maintain the quality of their semantic annotations. Thus, the content of the semantic wiki pages can become inconsistent for several reasons: changes are merged automatically by the wiki rather than by the users, information may be missing, or inconsistent information may be added by users on different peers. In this paper, I present a semantic inconsistency detection mechanism (SIDM) developed for P2PSW. SIDM detects semantic inconsistency among the annotations in the semantic pages and improves the quality of the knowledge and the functionality of P2PSW. It indicates not only the existence of semantic inconsistency in the wiki pages but also the reason for the inconsistency. SIDM also facilitates the removal of semantic inconsistency by determining the exact position of the inconsistent annotations in the wiki pages and highlighting them via a semantic inconsistency visualization mechanism we developed.
Charbel Rahhal
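Purely as an illustration of what annotation-level inconsistency detection can look like (a simplified, hypothetical check, not the SIDM mechanism itself), one typical case is two peers contributing conflicting values for a property that should have a single value:

# Simplified illustration, not the SIDM mechanism from the paper: detect
# annotations that assign different values to a single-valued property of the
# same subject, and report which pages the conflicting values come from.

from collections import defaultdict

def find_conflicts(annotations, functional_properties):
    """annotations: list of (page, subject, property, value) tuples."""
    seen = defaultdict(list)    # (subject, property) -> [(page, value), ...]
    for page, subject, prop, value in annotations:
        if prop in functional_properties:
            seen[(subject, prop)].append((page, value))

    conflicts = []
    for (subject, prop), occurrences in seen.items():
        values = {value for _, value in occurrences}
        if len(values) > 1:     # conflicting values -> semantic inconsistency
            conflicts.append((subject, prop, occurrences))
    return conflicts

annotations = [
    ("PageA", "Berlin", "capitalOf", "Germany"),
    ("PageB", "Berlin", "capitalOf", "France"),   # conflicting annotation
]
print(find_conflicts(annotations, {"capitalOf"}))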
Formalizing a Paraconsistent Logic in the Isabelle Proof Assistant
Abstract
We present a formalization of a so-called paraconsistent logic that avoids the catastrophic explosiveness of inconsistency in classical logic. The paraconsistent logic has a countably infinite number of non-classical truth values. We show how to use the proof assistant Isabelle to formally prove theorems in the logic as well as meta-theorems about the logic. In particular, we formalize a meta-theorem that allows us to reduce the infinite number of truth values to a finite number of truth values, for a given formula, and we use this result in a formalization of a small case study.
Jørgen Villadsen, Anders Schlichtkrull
A Proximity-Based Understanding of Conditionals
Abstract
The aim of the present paper is to introduce a new logic, PUC-Logic, which will be used to give a systematic account of well-known counterfactual conditionals on the basis of a concept of proximity. We will formulate a natural deduction system for PUC-Logic, the system PUC-ND, which will be shown to be sound and complete with respect to the semantics of PUC-Logic. We shall also prove that PUC-Logic is decidable and that the system PUC-ND satisfies the normalization theorem.
Ricardo Queiroz de Araujo Fernandes, Edward Hermann Haeusler, Luiz Carlos Pinheiro Dias Pereira
Inconsistency-Tolerant Database Repairs and Simplified Repair Checking by Measure-Based Integrity Checking
Abstract
Database states may be inconsistent, i.e., their integrity may be violated. Database repairs are updates such that all integrity constraints become satisfied, while keeping the necessary changes to a minimum. Updates intended to repair inconsistency may go wrong. Repair checking is to find out whether a given update is a repair, i.e., whether the updated state is free of integrity violations and whether the changes are minimal. However, integrity violations may be numerous, complex or opaque, so that attaining a complete absence of inconsistency is not realistic. We discuss inconsistency-tolerant concepts of repair and repair checking. Repairs are no longer required to be total, i.e., only some, but not necessarily all, inconsistency is supposed to disappear through a repair. For checking whether an update reduces the amount of inconsistency, integrity violations need to be comparable. For that, we use measure-based integrity checking. Both the inconsistency reduction and the minimality of inconsistency-tolerant repair candidates can be verified or falsified by measure-based integrity checkers that simplify the evaluation of constraints. As opposed to total repair checking, which evaluates integrity constraints by brute force, simplified repair checking exploits the incrementality of updates.
Hendrik Decker
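To make the measure-based idea concrete, here is a deliberately simple, hypothetical sketch in which inconsistency is measured by counting violated constraints, and an update is accepted as an inconsistency-tolerant repair candidate if it lowers that count (the measures and the simplified, incremental checking discussed in the paper are more refined than this):

# Toy illustration of measure-based repair checking, not the paper's method.
# The inconsistency measure counts violated constraints; an update is accepted
# as a partial, inconsistency-tolerant repair if the measure decreases.

def inconsistency_measure(state, constraints):
    """Number of constraints violated by the database state."""
    return sum(0 if check(state) else 1 for check in constraints)

def is_tolerant_repair(old_state, new_state, constraints):
    before = inconsistency_measure(old_state, constraints)
    after = inconsistency_measure(new_state, constraints)
    return after < before

# Example constraints: employees must have a positive salary and a known department.
constraints = [
    lambda s: all(row["salary"] > 0 for row in s),
    lambda s: all(row["dept"] is not None for row in s),
]
old = [{"salary": -1, "dept": None}]            # violates both constraints
new = [{"salary": 1000, "dept": None}]          # still violates one
print(is_tolerant_repair(old, new, constraints))  # True: inconsistency reduced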
Backmatter
Metadata
Title
Transactions on Large-Scale Data- and Knowledge-Centered Systems XXXIV
Edited by
Abdelkader Hameurlain
Josef Küng
Prof. Dr. Roland Wagner
Prof. Hendrik Decker
Copyright year
2017
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-55947-5
Print ISBN
978-3-662-55946-8
DOI
https://doi.org/10.1007/978-3-662-55947-5