
2016 | Book

Foundations of Information and Knowledge Systems

9th International Symposium, FoIKS 2016, Linz, Austria, March 7-11, 2016. Proceedings


About this book

This book constitutes the proceedings of the 9th International Symposium on Foundations of Information and Knowledge Systems, FoIKS 2016, held in Linz, Austria, in March 2016. The 14 revised full papers presented were carefully reviewed and selected from 23 submissions. The papers address topics such as reasoning about beliefs, uncertainty, incompleteness, and inconsistency; inference and problem solving; querying and pattern mining; dealing with knowledge; and logics and complexity.

Table of Contents

Frontmatter

Reasoning about Beliefs, Uncertainty, Incompleteness, and Inconsistency

Frontmatter
A Study of Argument Acceptability Dynamics Through Core and Remainder Sets

We analyze the acceptability dynamics of arguments through the proposal of two different kinds of minimal sets of arguments, core and remainder sets, which are in some sense responsible for the acceptance or rejection of a given argument. We study the consequences of breaking the construction of such sets for the acceptance and/or rejection of an analyzed argument. This leads to the proposal of novel change operations, first for abstract argumentation and afterwards for logic-based argumentation. The analysis of logic-based argumentation reveals some problems regarding the applicability of the standard semantics. In consequence, a reformulation of the notion of admissibility arises to accommodate the standard semantics in logic-based argumentation. Finally, the proposed model is formalized in the light of the theory of belief revision by characterizing the corresponding operations through rationality postulates and representation theorems.

Martín O. Moguillansky
Anytime Algorithms for Solving Possibilistic MDPs and Hybrid MDPs

The ability of an agent to make quick, rational decisions in an uncertain environment is paramount for its applicability in realistic settings. Markov Decision Processes (MDPs) provide such a framework, but can only model uncertainty that can be expressed as probabilities. Possibilistic counterparts of MDPs make it possible to model imprecise beliefs, yet they cannot accurately represent probabilistic sources of uncertainty, and they lack the efficient online solvers found in the probabilistic MDP community. In this paper we advance the state of the art in three important ways. Firstly, we propose the first online planner for possibilistic MDPs by adapting the Monte-Carlo Tree Search (MCTS) algorithm. A key component is the development of efficient search structures to sample possibility distributions based on the DPY transformation introduced by Dubois, Prade, and Yager. Secondly, we introduce a hybrid MDP model that allows us to express both possibilistic and probabilistic uncertainty, where the hybrid model is a proper extension of both probabilistic and possibilistic MDPs. Thirdly, we demonstrate that MCTS algorithms can readily be applied to solve such hybrid models.

Kim Bauters, Weiru Liu, Lluís Godo
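
To make the sampling idea above more tangible, the sketch below converts a normalized possibility distribution into a probability distribution using a classical Dubois/Prade-style transformation and samples from it. This is a minimal illustration, not the authors' implementation; the function names and the exact formula variant are assumptions.

```python
import random

def dpy_probability(possibilities):
    """Convert a normalized possibility distribution (max degree 1.0) into a
    probability distribution.  With degrees sorted decreasingly
    pi_1 >= ... >= pi_n and pi_{n+1} = 0, assign
    p_i = sum_{j >= i} (pi_j - pi_{j+1}) / j."""
    items = sorted(possibilities.items(), key=lambda kv: kv[1], reverse=True)
    degrees = [d for _, d in items] + [0.0]
    probs = {}
    for i, (outcome, _) in enumerate(items):
        probs[outcome] = sum((degrees[j] - degrees[j + 1]) / (j + 1)
                             for j in range(i, len(items)))
    return probs

def sample(probabilities):
    """Draw one outcome according to the induced probability distribution."""
    outcomes, weights = zip(*probabilities.items())
    return random.choices(outcomes, weights=weights, k=1)[0]

# Example: a qualitative belief about the successor state of the environment.
pi = {"sunny": 1.0, "cloudy": 0.7, "rainy": 0.3}
p = dpy_probability(pi)      # {'sunny': 0.6, 'cloudy': 0.3, 'rainy': 0.1}
print(p, sample(p))
```
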
Possibilistic Conditional Tables

On the one hand, possibility theory and possibilistic logic offer a powerful representation setting in artificial intelligence for handling uncertainty in a qualitative manner. On the other hand, conditional tables (c-tables for short) and their probabilistic extension provide a well-known setting for representing incomplete and uncertain information, respectively, in relational databases. Although both settings rely on the idea of possible worlds, they have been developed and used independently. This paper investigates the links between possibility theory, possibilistic logic, and c-tables, before introducing possibilistic c-tables and discussing their relation to a recent certainty-based approach to uncertain databases and their differences from probabilistic c-tables.

Olivier Pivert, Henri Prade

Inference and Problem Solving

Frontmatter
Skeptical Inference Based on C-Representations and Its Characterization as a Constraint Satisfaction Problem

The axiomatic system P is an important standard for plausible, nonmonotonic inference that is, however, known to be too weak to solve benchmark problems like irrelevance or subclass inheritance (the so-called Drowning Problem). Spohn's ranking functions, which provide a semantic base for system P, have often been used to design stronger inference relations, like Pearl's system Z or c-representations. While each c-representation shows excellent inference properties and, in particular, handles irrelevance and subclass inheritance properly, it is still an open problem which c-representation is the best. In this paper, we focus on the generic properties of c-representations and consider the skeptical inference relation (c-inference) that is obtained by taking all c-representations of a given knowledge base into account. In particular, we show that c-inference preserves the properties of solving irrelevance and subclass inheritance that are met by every single c-representation. Moreover, we characterize skeptical c-inference as a constraint satisfaction problem so that constraint solvers can be used for its implementation.

Christoph Beierle, Christian Eichhorn, Gabriele Kern-Isberner
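
To give a concrete feel for the constraint-based view, here is a small brute-force sketch (a hypothetical illustration, not the paper's CSP encoding) that enumerates non-negative integer impact vectors for the classic penguin knowledge base and keeps those inducing a c-representation; acceptance of each conditional (B|A) is checked directly from the definition kappa(AB) < kappa(A not-B).

```python
from itertools import product

# Atoms: p (penguin), b (bird), f (flies); a world is a truth assignment.
ATOMS = ["p", "b", "f"]
WORLDS = [dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=3)]

# Knowledge base: (f|b), (b|p), (!f|p), as (antecedent, consequent) predicates.
KB = [
    (lambda w: w["b"], lambda w: w["f"]),        # birds normally fly
    (lambda w: w["p"], lambda w: w["b"]),        # penguins are normally birds
    (lambda w: w["p"], lambda w: not w["f"]),    # penguins normally do not fly
]

def kappa(world, impacts):
    """Rank of a world: sum of impacts of all conditionals it falsifies."""
    return sum(eta for (ant, cons), eta in zip(KB, impacts)
               if ant(world) and not cons(world))

def is_c_representation(impacts):
    """Every conditional (B|A) must be accepted: kappa(AB) < kappa(A not-B)."""
    for ant, cons in KB:
        verifying = [kappa(w, impacts) for w in WORLDS if ant(w) and cons(w)]
        falsifying = [kappa(w, impacts) for w in WORLDS if ant(w) and not cons(w)]
        if min(verifying) >= min(falsifying):
            return False
    return True

# Brute-force "constraint solving": search small non-negative impact vectors.
solutions = [etas for etas in product(range(5), repeat=len(KB))
             if is_c_representation(etas)]
print(solutions[0])   # smallest admissible impact vector, here (1, 2, 2)
```
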
Systems and Implementations for Solving Reasoning Problems in Conditional Logics

Default rules like “If A, then normally B” or probabilistic rules like “If A, then B with probability x” are powerful constructs for knowledge representation. Such rules can be formalized as conditionals, denoted by $$(B|A)$$ or $$(B|A)[x]$$, and a conditional knowledge base consists of a set of conditionals. Different semantical models have been proposed for conditional knowledge bases, and the most important reasoning problems are to determine whether a knowledge base is consistent and to determine what a knowledge base entails. We present an overview of systems and implementations our group has been working on for solving reasoning problems in the various semantics that have been developed for conditional knowledge bases. These semantics include quantitative, semi-quantitative, and qualitative conditional logics, based on both propositional and first-order logic.

Christoph Beierle
Equivalence Between Answer-Set Programs Under (Partially) Fixed Input

Answer Set Programming (ASP) has become an increasingly popular formalism for declarative problem solving. Among the huge body of theoretical results, investigations of different equivalence notions between logic programs play a fundamental role for understanding modularity and optimization. While strong equivalence between two programs holds if they can be faithfully replaced by each other in any context (facts and rules), uniform equivalence amounts to equivalent behavior of programs under any set of facts. Both notions (as well as several variants thereof) have been extensively studied. However, the somewhat reverse notion of equivalence, which holds if two programs are equivalent under the addition of any set of proper rules (i.e., all rules except facts), has not been investigated yet. In this paper, we close this gap and give a thorough study of this notion, which we call rule equivalence (RE), and its parameterized version, where we allow facts over a given restricted alphabet to appear in the context. RE is thus a relationship between two programs whose input is (partially) fixed but to which additional proper rules might still be added. Such a notion might be helpful in debugging programs. We give full characterization results and a complexity analysis for the propositional case of RE. Moreover, we show that RE is decidable in the non-ground case.

Bernhard Bliem, Stefan Woltran

Querying and Pattern Mining

Frontmatter
A k-Means-Like Algorithm for Clustering Categorical Data Using an Information Theoretic-Based Dissimilarity Measure

Clustering large datasets is an important research problem for many machine learning applications. The k-means algorithm is very popular and widely used due to its ease of implementation, linear time complexity in the size of the data, and almost sure convergence to a local optimum. However, it works only on numerical data, which prevents it from being used to cluster categorical data. In this paper, we introduce an extension of the k-means algorithm for clustering categorical data. Basically, we propose a new dissimilarity measure based on an information-theoretic definition of similarity that considers the amount of information carried by two values in the domain set. The definition of cluster centers is generalized using a kernel density estimation approach. The new algorithm is then obtained by incorporating a feature weighting scheme that automatically measures the contribution of individual attributes to the clusters. To demonstrate the performance of the new algorithm, we conduct a series of experiments on real datasets from the UCI Machine Learning Repository and compare the results with several previously developed algorithms for clustering categorical data.

Thu-Hien Thi Nguyen, Van-Nam Huynh
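
As a rough illustration of the idea behind the paper above (a simplified sketch, not the authors' algorithm or their exact measure), the following k-modes-style loop clusters categorical records under an information-theoretic dissimilarity in which rarer disagreements weigh more; cluster centers are updated with per-attribute modes rather than the paper's kernel density estimates.

```python
import math
import random
from collections import Counter

def attribute_frequencies(data):
    """Frequency counts of categorical values, per attribute."""
    m = len(data[0])
    return [Counter(row[a] for row in data) for a in range(m)], len(data)

def dissimilarity(x, y, freqs, n):
    """0 for matching values; for mismatches, the surprisal of both observed
    values, so disagreements on rare values count more.  A simplified
    stand-in for the paper's information-theoretic measure."""
    d = 0.0
    for a, (vx, vy) in enumerate(zip(x, y)):
        if vx != vy:
            d += -math.log(freqs[a][vx] / n) - math.log(freqs[a][vy] / n)
    return d

def k_modes_like(data, k, iterations=20, seed=0):
    random.seed(seed)
    freqs, n = attribute_frequencies(data)
    centers = random.sample(data, k)
    for _ in range(iterations):
        # Assignment step: nearest center under the dissimilarity measure.
        clusters = [[] for _ in range(k)]
        for row in data:
            j = min(range(k), key=lambda c: dissimilarity(row, centers[c], freqs, n))
            clusters[j].append(row)
        # Update step: per-attribute mode of each non-empty cluster.
        for j, members in enumerate(clusters):
            if members:
                centers[j] = tuple(Counter(r[a] for r in members).most_common(1)[0][0]
                                   for a in range(len(data[0])))
    return clusters, centers

data = [("red", "small"), ("red", "medium"), ("blue", "large"),
        ("blue", "large"), ("green", "small"), ("red", "small")]
clusters, centers = k_modes_like(data, k=2)
print(centers)
```
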
Discovering Overlapping Quantitative Associations by Density-Based Mining of Relevant Attributes

Association rule mining is an often used method to find relationships in the data and has been extensively studied in the literature. Unfortunately, most of these methods do not work well for numerical attributes. State-of-the-art quantitative association rule mining algorithms follow a common routine: (1) discretize the data and (2) mine for association rules. This two-step approach can be rather inaccurate, however, as discretization partitions the data space and thereby misses rules that are present in overlapping intervals. In this paper, we explore the data for quantitative association rules hidden in overlapping regions of numeric data. Our method works without the need for a discretization step and thus prevents the information loss incurred by partitioning numeric attributes prior to the mining step. It exploits a statistical test for selecting relevant attributes, detects relationships of dense intervals in these attributes, and finally combines them into quantitative association rules. We evaluate our method on synthetic and real data to show its efficiency and quality improvement compared to state-of-the-art methods.

Thomas Van Brussel, Emmanuel Müller, Bart Goethals
Semantic Matching Strategies for Job Recruitment: A Comparison of New and Known Approaches

A profile describes a set of skills a person may have or a set of skills required for a particular job. Profile matching aims to determine how well a given profile fits a requested profile. The research reported in this paper starts from the exact matching measure of [21]. It is then extended by matching filters in ontology hierarchies, since profiles naturally determine filters in the subsumption relation. Next, we take into consideration similarities between different skills that are not related by the subsumption relation. Finally, a totally different approach, probabilistic matching based on the maximum entropy model, is analyzed.

Gábor Rácz, Attila Sali, Klaus-Dieter Schewe
The Challenge of Optional Matching in SPARQL

Conjunctive queries are arguably the most widely used querying mechanism in practice and the most intensively studied one in database theory. Answering a conjunctive query (CQ) comes down to matching all atoms of the CQ simultaneously into the database. As a consequence, a CQ fails to provide any answer if the pattern described by the query does not exactly match the data. CQs might thus be too restrictive as a querying mechanism for data on the web, which is considered inherently incomplete. The semantic web query language SPARQL therefore contains the OPTIONAL operator as a crucial feature. It allows the user to formulate queries which try to match parts of the query over the data if available, but do not destroy answers of the remaining query otherwise. In this article, we take a closer look at this optional matching feature of SPARQL. More specifically, we survey several results which have recently been obtained for an interesting fragment of SPARQL – the so-called well-designed SPARQL graph patterns.

Shqiponja Ahmetaj, Wolfgang Fischl, Markus Kröll, Reinhard Pichler, Mantas Šimkus, Sebastian Skritek
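
To illustrate the optional matching behaviour discussed above, here is a minimal, self-contained sketch using rdflib (assumed to be installed; the data and query are made up): the person without an email address is still returned, with the optional variable left unbound.

```python
from rdflib import Graph

ttl = """
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .

ex:alice a foaf:Person ; foaf:name "Alice" ; foaf:mbox <mailto:alice@example.org> .
ex:bob   a foaf:Person ; foaf:name "Bob" .
"""

query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?email WHERE {
  ?person a foaf:Person ;
          foaf:name ?name .
  OPTIONAL { ?person foaf:mbox ?email }   # matched only where available
}
"""

g = Graph()
g.parse(data=ttl, format="turtle")
for name, email in g.query(query):
    # Bob is kept in the answer even though ?email stays unbound (None);
    # a plain conjunctive query would drop him entirely.
    print(name, email)
```
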
Maintenance of Queries Under Database Changes: A Unified Logic Based Approach

This contribution deals with one single theme, the exploitation of logical reduction techniques in database theory. Two kinds of changes may be applied to databases: structural changes, also known as restructuring or schema evolution, and data changes. We present both of them in terms of syntactically defined translation schemes. At the same time, we have application programs, computing different queries on the database, which are oriented towards some specific generation of the database. Systematically using the technique of translation schemes, we introduce the notion of $$\Phi $$-sums and show how queries expressible in extensions of First Order Logic (FOL) may be handled over different generations of the $$\Phi $$-sums. Moreover, using the technique of translation schemes, we introduce the notion of incremental view recomputation. We prove under which conditions queries expressible in extensions of FOL allow incremental view recomputation. Our approach uniformly covers the cases we have encountered in the literature and can be applied to all existing query languages.

Elena V. Ravve

Dealing with Knowledge

Frontmatter
Selected Results and Related Issues of Confidentiality-Preserving Controlled Interaction Execution

Controlled Interaction Execution has been developed as a security server for inference control, shielding an isolated, logic-oriented information system when interacting over time with a client by means of messages, in particular for query and transaction processing. The control aims at preserving confidentiality in a formalized sense, rephrased intuitively and in simplified form as follows: even when having (assumed) a priori knowledge, recording the interaction history, being aware of the details of the control mechanism, and reasoning rationally without restriction, the client should never be able to infer the validity of any sentence declared as a potential secret in the security server’s confidentiality policy. To enforce this goal, a dedicated censor has been designed for each of a rich variety of specific situations. As far as needed, a censor distorts a functionally expected reaction message such that suitably weakened or even believably incorrect information is communicated to the client. In this article, we consider selected results of recent and ongoing work and discuss several issues for further research and development. The topics covered range from the impact of the underlying logic, whether propositional, first-order, for non-monotonic beliefs, or an abstraction from any specific one, to the kind of interactions, whether only queries or also view publishing, updates, revisions, or even procedural programs.

Joachim Biskup
Integrity Constraints for General-Purpose Knowledge Bases

Integrity constraints in databases have been studied extensively since the 1980s, and they are considered essential to guarantee database integrity. In recent years, several authors have studied how the same notion can be adapted to reasoning frameworks, in such a way that they achieve the purpose of guaranteeing a system’s consistency, but are kept separate from the reasoning mechanisms. In this paper we focus on multi-context systems, a general-purpose framework for combining heterogeneous reasoning systems, enhancing them with a notion of integrity constraints that generalizes the corresponding concept in the database world.

Luís Cruz-Filipe, Isabel Nunes, Peter Schneider-Kamp
A Knowledge Based Framework for Link Prediction in Social Networks

Social networks have a dynamic nature, so their structures change over time. In this paper, we propose a new evolutionary method to predict the state of a network in the near future by extracting knowledge from its current structure. The method is based on the fact that social networks consist of communities. Observing the current state of a given network, it calculates the probability of a relationship between each pair of individuals who are not directly connected and estimates the chance of their being connected in the next time slot. We have tested and compared the method on one synthetic dataset and one large real dataset with 117 185 083 edges. Results show that our method can predict the next state of a network with a high rate of accuracy.

Pooya Moradian Zadeh, Ziad Kobti
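
As a loose illustration of community-aware link prediction (not the authors' evolutionary method), the sketch below scores currently unconnected node pairs with networkx (assumed to be available): the number of common neighbours, boosted when both endpoints fall into the same detected community.

```python
from itertools import combinations

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Toy network; the paper's experiments use far larger real data.
G = nx.karate_club_graph()

# Detect communities once from the current structure.
communities = list(greedy_modularity_communities(G))
community_of = {node: i for i, c in enumerate(communities) for node in c}

def link_score(u, v):
    """Score an unconnected pair: common neighbours, boosted when both
    nodes belong to the same community."""
    common = len(list(nx.common_neighbors(G, u, v)))
    same_community = community_of[u] == community_of[v]
    return common * (2.0 if same_community else 1.0)

# Rank the most likely future edges among currently unconnected pairs.
candidates = [(u, v) for u, v in combinations(G.nodes, 2) if not G.has_edge(u, v)]
ranked = sorted(candidates, key=lambda pair: link_score(*pair), reverse=True)
print(ranked[:5])
```
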

Logics and Complexity

Frontmatter
Approximation and Dependence via Multiteam Semantics

We define a variant of team semantics called multiteam semantics based on multisets and study the properties of various logics in this framework. In particular, we define natural probabilistic versions of inclusion and independence atoms and certain approximation operators motivated by approximate dependence atoms of Väänänen.

Arnaud Durand, Miika Hannula, Juha Kontinen, Arne Meier, Jonni Virtema
The Complexity of Non-Iterated Probabilistic Justification Logic

The logic $$\mathsf {PJ}$$ is a probabilistic logic defined by adding (non-iterated) probability operators to the basic justification logic $$\mathsf {J}$$. In this paper we establish upper and lower bounds for the complexity of the derivability problem in the logic $$\mathsf {PJ}$$. The main result of the paper is that the complexity of the derivability problem in $$\mathsf {PJ}$$ remains the same as the complexity of the derivability problem in the underlying logic $$\mathsf {J}$$, which is $$\varPi _2^p$$-complete. This implies that the probability operators do not increase the complexity of the logic, although they arguably enrich the expressiveness of the language.

Ioannis Kokkinis
Relational Complexity and Higher Order Logics

Relational machines (RMs) were introduced as abstract machines that compute generic queries (i.e., queries that preserve isomorphisms) on relational database instances (dbi’s). As RMs cannot distinguish between tuples that are equivalent in first-order logic with k variables, Relational Complexity was introduced as a complexity theory in which the input dbi to a query is measured by its $$\textit{size}_k$$, i.e., the number of classes in the equivalence relation of equality of $$\mathrm {FO}^k$$ types of k-tuples in the dbi. We describe the basic notions of Relational Complexity and survey known characterizations of some of its main classes through different fixed point logics and through fragments of second- and third-order logic.

José Maria Turull-Torres
A Logic for Non-deterministic Parallel Abstract State Machines

We develop a logic which enables reasoning about single steps of non-deterministic parallel Abstract State Machines (ASMs). Our logic builds upon the unifying logic introduced by Nanchen and Stärk for reasoning about hierarchical (parallel) ASMs. Our main contribution in this regard is the handling of non-determinism (both bounded and unbounded) within the logical formalism. Moreover, we do this without sacrificing the completeness of the logic for statements about single steps of non-deterministic parallel ASMs, such as invariants of rules, consistency conditions for rules, or step-by-step equivalence of rules.

Flavio Ferrarotti, Klaus-Dieter Schewe, Loredana Tec, Qing Wang
Backmatter
Metadata
Title
Foundations of Information and Knowledge Systems
Copyright Year
2016
Electronic ISBN
978-3-319-30024-5
Print ISBN
978-3-319-30023-8
DOI
https://doi.org/10.1007/978-3-319-30024-5
