
2014 | Book

Logics in Artificial Intelligence

14th European Conference, JELIA 2014, Funchal, Madeira, Portugal, September 24-26, 2014. Proceedings

Edited by: Eduardo Fermé, João Leite

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 14th European Conference on Logics in Artificial Intelligence, JELIA 2014, held in Funchal, Madeira, Portugal, in September 2014. The 35 full papers and 14 short papers included in this volume were carefully reviewed and selected from 121 submissions. They are organized in topical sections named: description logics; automated reasoning; logics for uncertain reasoning; non-classical logics; answer-set programming; belief revision; dealing with inconsistency in ASP and DL; reason about actions and causality; system descriptions; short system descriptions; and short papers. The book also contains four invited talks as full papers.

Table of Contents

Frontmatter

Invited Talks

Query Answering over Description Logic Ontologies

Description Logics (DLs) provide the formal foundation for ontology languages, and they have been advocated as formalisms for modeling the domain of interest in various settings, including the Semantic Web, data and information integration, and ontology-based data access. An important requirement there is the ability to answer complex database-like queries, while taking into account both extensional and intensional domain knowledge. The task of answering queries has been investigated intensively in recent years for a variety of DLs, considering both data complexity, i.e., the complexity measured in the size of the extensional information only, and combined complexity. On the one hand, it has been shown to be in general (exponentially) more difficult than the standard reasoning tasks of concept satisfiability and subsumption; on the other hand, a broad range of techniques has been developed. We overview here some of the key techniques developed in recent years for query answering over DL ontologies, ranging from rewriting-based approaches for lightweight DLs, to tableaux algorithms, to techniques based on automata on infinite trees for very expressive DLs. The associated results, accompanied by matching lower bounds, have contributed to shaping the computational complexity picture for ontology-based query answering.

Diego Calvanese
Tools for the Investigation of Substructural and Paraconsistent Logics

We present an overview of the methods in [10,7,13] and their implementation in the system TINC. This system introduces analytic calculi for large classes of substructural and paraconsistent logics, which it then uses to prove various results about the formalized logics.

Agata Ciabattoni, Lara Spendier
Non-classical Planning with a Classical Planner: The Power of Transformations

Planning is the model-based approach to autonomous behavior, where a predictive model of actions and sensors is used to generate the behavior for achieving given goals. The main challenges in planning are computational, as all models, whether featuring uncertainty and feedback or not, are intractable in the worst case when represented in compact form. Classical planning refers to the simplest form of planning, where goals are to be achieved by applying deterministic actions to a fully known initial situation. In this invited paper, I review the inferences performed by classical planners that enable them to deal with large problems, and the transformations that have been developed for using these planners to deal with non-classical features such as soft goals, hidden goals to be recognized, planning with incomplete information and sensing, and multiagent nested beliefs.

Hector Geffner
Opportunities for Argument-Centric Persuasion in Behaviour Change

The aim of behaviour change is to help people overcome specific behavioural problems in their everyday life (e.g. helping people to decrease their calorie intake). In current persuasion technology for behaviour change, the emphasis is on helping people to explore their issues (e.g. through questionnaires or game playing) or to remember to follow a behaviour change plan (e.g. diaries and email reminders). Explicit argumentation, with consideration of arguments and counterarguments, is not supported by existing persuasion technologies. With recent developments in computational models of argument, there is the opportunity for argument-centric persuasion in behaviour change. In this paper, key requirements for this will be presented, together with some discussion of how computational models of argumentation can be harnessed.

Anthony Hunter

Description Logics

The Fuzzy Description Logic $\mathsf{G}\text{-}\mathcal{F\!L}_0$ with Greatest Fixed-Point Semantics

We study the fuzzy extension of the Description Logic $\mathcal{F\!L}_0$ with semantics based on the Gödel t-norm. We show that subsumption w.r.t. a finite set of primitive definitions, using greatest fixed-point semantics, can be characterized by a relation on weighted automata. We use this result to provide tight complexity bounds for reasoning in this logic, showing that it is PSpace-complete. If the definitions do not contain cycles, subsumption becomes co-NP-complete.

Stefan Borgwardt, José A. Leyva Galano, Rafael Peñaloza
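The Gödel t-norm underlying the semantics above interprets fuzzy conjunction as the minimum of truth degrees, and fuzzy implication as the residuum of that t-norm. A minimal sketch of these standard operations (illustrative only, not code from the paper):

```python
def goedel_and(a, b):
    """Gödel t-norm: fuzzy conjunction as the minimum of the truth degrees."""
    return min(a, b)

def goedel_implies(a, b):
    """Residuum of the Gödel t-norm: fully true when a <= b, otherwise b."""
    return 1.0 if a <= b else b
```

Under this semantics an implication only "loses" truth when the antecedent is strictly truer than the consequent, in which case the result drops to the consequent's degree.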
Tight Complexity Bounds for Reasoning in the Description Logic $\mathcal{BE{\kern-.1em}L}$

Recently, Bayesian extensions of Description Logics, and in particular the logic $\mathcal{BE{\kern-.1em}L}$, were introduced as a means of representing certain knowledge that depends on an uncertain context. In this paper we introduce a novel structure, called proof structure, that encodes the contextual information required to deduce subsumption relations from a $\mathcal{BE{\kern-.1em}L}$ knowledge base. Using this structure, we show that probabilistic reasoning in $\mathcal{BE{\kern-.1em}L}$ can be reduced in polynomial time to standard Bayesian network inferences, thus obtaining tight complexity bounds for reasoning in $\mathcal{BE{\kern-.1em}L}$.

İsmail İlkan Ceylan, Rafael Peñaloza
Relevant Closure: A New Form of Defeasible Reasoning for Description Logics

Among the various proposals for defeasible reasoning for description logics, Rational Closure, a procedure originally defined for propositional logic, turns out to have a number of desirable properties. Not only is it computationally feasible, but it can also be implemented using existing classical reasoners. One of its drawbacks is that it can be seen as too weak from the inferential point of view. To overcome this limitation we introduce in this paper two extensions of Rational Closure: Basic Relevant Closure and Minimal Relevant Closure. As the names suggest, both rely on defining a version of relevance. Our formalisation of relevance in this context is based on the notion of a justification (a minimal subset of sentences implying a given sentence). This is, to our knowledge, the first proposal for defining defeasibility in terms of justifications, a notion that is well-established in the area of ontology debugging. Both Basic and Minimal Relevant Closure increase the inferential power of Rational Closure, giving back intuitive conclusions that cannot be obtained from Rational Closure. We analyse the properties of, and present algorithms for, both Basic and Minimal Relevant Closure, and provide experimental results for both, comparing them with Rational Closure.

Giovanni Casini, Thomas Meyer, Kodylan Moodley, Riku Nortjé
Error-Tolerant Reasoning in the Description Logic $\mathcal{E{\kern-.1em}L}$

Developing and maintaining ontologies is an expensive and error-prone task. After an error is detected, users may have to wait for a long time before a corrected version of the ontology is available. In the meantime, one might still want to derive meaningful knowledge from the ontology, while avoiding the known errors. We study error-tolerant reasoning tasks in the description logic $\mathcal{E{\kern-.1em}L}$. While these problems are intractable, we propose methods for improving the reasoning times by precompiling information about the known errors and using proof-theoretic techniques for computing justifications. A prototypical implementation shows that our approach is feasible for large ontologies used in practice.

Michel Ludwig, Rafael Peñaloza

Automated Reasoning

Sub-propositional Fragments of the Interval Temporal Logic of Allen’s Relations

Interval temporal logics provide a natural framework for temporal reasoning about interval structures over linearly ordered domains, where intervals are taken as the primitive ontological entities. The most influential propositional interval-based logic is probably Halpern and Shoham's Modal Logic of Time Intervals, a.k.a. HS. While most studies have focused on the computational properties of the syntactic fragments that arise by considering only a subset of the set of modalities, the fragments that are obtained by weakening the propositional side have received very scarce attention. Here, we approach this problem by considering various sub-propositional fragments of HS, such as the so-called Horn, Krom, and core fragments. We prove that the Horn fragment of HS is undecidable on every interesting class of linearly ordered sets, and we briefly discuss the difficulties that arise when considering the other fragments.

Davide Bresolin, Emilio Muñoz-Velasco, Guido Sciavicco
SAT Modulo Graphs: Acyclicity

Acyclicity is a recurring property of solutions to many important combinatorial problems. In this work we study embeddings of specialized acyclicity constraints in the satisfiability problem of classical propositional logic (SAT). We propose an embedding of directed graphs in SAT, with arcs labelled with propositional variables, and an extended SAT problem in which all clauses have to be satisfied and the subgraph consisting of arcs labelled true is acyclic. We devise a constraint propagator for the acyclicity constraint and show how it can be incorporated in off-the-shelf SAT solvers. We show that all existing encodings of acyclicity constraints in SAT are either prohibitively large or do not sanction all inferences made by the constraint propagator. Our experiments demonstrate the advantages of our solver over other approaches for handling acyclicity.

Martin Gebser, Tomi Janhunen, Jussi Rintanen
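The extended satisfiability condition described above requires that, given an assignment to the arc-labelling variables, the arcs labelled true form an acyclic graph. A minimal sketch of that side condition as a plain cycle check (a hypothetical helper for illustration, not the paper's propagator):

```python
from collections import defaultdict

def true_subgraph_is_acyclic(arcs, assignment):
    """Check whether the arcs whose labelling variable is assigned True
    form an acyclic directed graph (iterative DFS with three colours).

    arcs: iterable of (source, target, variable) triples
    assignment: dict mapping label variables to booleans
    """
    graph = defaultdict(list)
    for u, v, var in arcs:
        if assignment.get(var, False):
            graph[u].append(v)

    WHITE, GREY, BLACK = 0, 1, 2
    colour = defaultdict(int)          # every node starts WHITE
    for start in list(graph):
        if colour[start] != WHITE:
            continue
        stack = [(start, iter(graph[start]))]
        colour[start] = GREY
        while stack:
            node, successors = stack[-1]
            for nxt in successors:
                if colour[nxt] == GREY:    # back edge: cycle found
                    return False
                if colour[nxt] == WHITE:
                    colour[nxt] = GREY
                    stack.append((nxt, iter(graph[nxt])))
                    break
            else:
                colour[node] = BLACK       # fully explored
                stack.pop()
    return True
```

A dedicated propagator, as in the paper, detects such cycles incrementally during search instead of re-running a full traversal per assignment.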
Enumerating Prime Implicants of Propositional Formulae in Conjunctive Normal Form

In this paper, a new approach for enumerating the set of prime implicants (PIs) of a Boolean formula in conjunctive normal form (CNF) is proposed. It is based on an encoding of the input formula as a new formula whose models correspond to the set of prime implicants of the original theory. This first PI enumeration approach is then enhanced by an original use of the Boolean functions or gates usually involved in many CNF instances encoding real-world problems. An experimental evaluation on several classes of CNF instances shows the feasibility of the proposed framework.

Said Jabbour, Joao Marques-Silva, Lakhdar Sais, Yakoub Salhi
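To make precise what is being enumerated: a prime implicant is a consistent set of literals that entails the formula and is minimal with that property. The paper's model-based encoding is not reproduced here; as a baseline, a brute-force sketch (exponential, illustration only) using the common integer-literal convention (v and -v):

```python
from itertools import combinations, product

def satisfies(cnf, assignment):
    """A total assignment (set of integer literals) satisfies a CNF
    (list of clauses, each a set of literals) iff it hits every clause."""
    return all(any(lit in assignment for lit in clause) for clause in cnf)

def is_implicant(cnf, term, variables):
    """A consistent term implies the CNF iff every total assignment
    extending the term satisfies it."""
    free = [v for v in variables if v not in term and -v not in term]
    for bits in product([False, True], repeat=len(free)):
        total = set(term) | {v if b else -v for v, b in zip(free, bits)}
        if not satisfies(cnf, total):
            return False
    return True

def prime_implicants(cnf):
    """Enumerate all prime implicants of a small CNF by brute force."""
    variables = sorted({abs(lit) for clause in cnf for lit in clause})
    literals = variables + [-v for v in variables]
    implicants = []
    for size in range(1, len(variables) + 1):
        for term in combinations(literals, size):
            if any(-lit in term for lit in term):
                continue                  # contradictory term, skip
            if is_implicant(cnf, term, variables):
                implicants.append(frozenset(term))
    # prime = no proper sub-term is itself an implicant
    return {t for t in implicants if not any(s < t for s in implicants)}
```

For example, the clause (x1 or x2) has exactly the prime implicants {x1} and {x2}; the approach in the paper obtains such sets as the models of a transformed formula rather than by enumeration.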
Improving the Normalization of Weight Rules in Answer Set Programs

Cardinality and weight rules are important primitives in answer set programming. In this context, normalization means the translation of such rules back into normal rules, e.g., for the sake of boosting the search for answer sets. For instance, the normalization of cardinality rules can be based on Boolean circuits that effectively sort or select greatest elements amongst Boolean values. In this paper, we develop further constructions for the normalization of weight rules and adapt techniques that have been previously used to translate pseudo-Boolean constraints into the propositional satisfiability (SAT) problem. In particular, we consider mixed-radix numbers as an efficient way to represent and encode integer weights involved in a weight rule and propose a heuristic for selecting a suitable base. Moreover, we incorporate a scheme for structure sharing in the normalization procedure. In the experimental part, we study the effect of normalizing weight rules on compactness and search performance measured in terms of program size, search time, and number of conflicts.

Jori Bomanson, Martin Gebser, Tomi Janhunen
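In a mixed-radix number, each digit position has its own radix, so integer weights can be decomposed into small digits relative to a chosen base. The paper's base-selection heuristic and encoding are not shown; a minimal sketch of the representation itself, with hypothetical helper names:

```python
def to_mixed_radix(weight, base):
    """Decompose a non-negative integer into digits under a mixed-radix
    base, least significant digit first.

    base: list of radices, e.g. [3, 5] means digit 0 ranges over 0..2
    with place value 1, and digit 1 ranges over 0..4 with place value 3.
    """
    digits = []
    for radix in base:
        digits.append(weight % radix)
        weight //= radix
    digits.append(weight)  # final digit carries the residual value
    return digits

def from_mixed_radix(digits, base):
    """Recompose the integer from its mixed-radix digits."""
    value, place = 0, 1
    for digit, radix in zip(digits, base + [None]):
        value += digit * place
        if radix is not None:
            place *= radix
    return value
```

For instance, under base [3, 5] the weight 23 becomes the digits [2, 2, 1], i.e. 2·1 + 2·3 + 1·15; a good base keeps all digits small, which is what the selection heuristic aims for.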

Logics for Uncertain Reasoning

Logical Foundations of Possibilistic Keys

Possibility theory is applied to introduce and reason about the fundamental notion of a key for uncertain data. Uncertainty is modeled qualitatively by assigning to tuples of data a degree of possibility with which they occur in a relation, and assigning to keys a degree of certainty which says to which tuples the key applies. The associated implication problem is characterized axiomatically and algorithmically. It is shown how sets of possibilistic keys can be visualized as possibilistic Armstrong relations, and how they can be discovered from given possibilistic relations. It is also shown how possibilistic keys can be used to clean dirty data by revising the belief in possibility degrees of tuples.

Henning Koehler, Uwe Leck, Sebastian Link, Henri Prade
Possibilistic Boolean Games: Strategic Reasoning under Incomplete Information

Boolean games offer a compact alternative to normal-form games, by encoding the goal of each agent as a propositional formula. In this paper, we show how this framework can be naturally extended to model situations in which agents are uncertain about other agents’ goals. We first use uncertainty measures from possibility theory to semantically define (solution concepts to) Boolean games with incomplete information. Then we present a syntactic characterization of these semantics, which can readily be implemented, and we characterize the computational complexity.

Sofie De Clercq, Steven Schockaert, Martine De Cock, Ann Nowé
LEG Networks for Ranking Functions

When using representations of plausibility for semantical frameworks, the storage needed is usually exponential in the number of variables. Therefore, network-based approaches that decompose the semantical space have proven fruitful in environments with probabilistic information. For applications where qualitative information is preferable to quantitative information, ordinal conditional functions (OCFs) offer a convenient methodology. Here, Bayesian-like networks have been proposed for ranking functions, so-called OCF-networks. Not only do these networks suffer from problems similar to those of Bayesian networks, in particular allowing only restricted classes of conditional relationships; it has also been found recently that problems with admissibility may arise. In this paper we propose LEG networks for ranking functions, again carrying over an idea from the probabilistic setting. OCF-LEG networks can be built for any conditional knowledge base and filled with local OCFs that can be found by inductive reasoning. A global OCF is set up from the local ones, and it is shown that the global OCF is admissible with respect to the underlying knowledge base.

Christian Eichhorn, Gabriele Kern-Isberner
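Ranking functions (OCFs, due to Spohn) assign each possible world a non-negative degree of implausibility, with rank 0 for the most plausible worlds; a conditional (B|A) is accepted when the most plausible A-worlds all satisfy B. A minimal sketch of this standard acceptance condition, with hypothetical helpers (worlds as tuples, the OCF as a dict):

```python
def rank(kappa, event):
    """Rank of a proposition: the minimum rank over the worlds where it
    holds; an impossible proposition gets infinite rank."""
    ranks = [r for world, r in kappa.items() if event(world)]
    return min(ranks) if ranks else float("inf")

def accepts(kappa, antecedent, consequent):
    """kappa accepts the conditional (consequent | antecedent) iff
    rank(A and B) < rank(A and not B)."""
    return (rank(kappa, lambda w: antecedent(w) and consequent(w))
            < rank(kappa, lambda w: antecedent(w) and not consequent(w)))
```

With worlds as (bird, flies) pairs and kappa ranking the bird-that-does-not-fly world as 1 and all others as 0, the conditional "birds fly" is accepted. Network-based approaches such as the OCF-LEG networks of the paper aim to avoid storing such a kappa explicitly over all worlds.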
Logics for Approximating Implication Problems of Saturated Conditional Independence

Random variables are declared complete whenever they must not admit missing data. Intuitively, the larger the set of complete random variables the closer the implication of saturated conditional independence statements is approximated. Two different notions of implication are studied. In the classical notion, a statement is implied jointly by a set of statements, the fixed set of random variables and its subset of complete random variables. For the notion of pure implication the set of random variables is left undetermined. A first axiomatization for the classical notion is established that distinguishes purely implied from classically implied statements. Axiomatic, algorithmic and logical characterizations of pure implication are established. The latter appeal to applications in which the existence of random variables is uncertain, for example, when statements are integrated from different sources, when random variables are unknown or when they shall remain hidden.

Henning Koehler, Sebastian Link

Non-Classical Logics

Finitary S5-Theories

The objective of this paper is to identify a class of epistemic logic theories with group knowledge operators which have the fundamental property of being characterized by a finite number of finite models (up to equivalence). We specifically focus on S5-theories. We call this class of epistemic logic theories finitary S5-theories. Models of finitary S5-theories can be shown to be canonical in that they do not contain two worlds with the same interpretation. When the theory is pure, these models are minimal and differ from each other only in the actual world. The paper presents an algorithm for computing all models of a finitary S5-theory. Finitary S5-theories find applications in several contexts; in particular, the paper discusses their use in epistemic multi-agent planning.

Tran Cao Son, Enrico Pontelli, Chitta Baral, Gregory Gelfond
Efficient Program Transformers for Translating LCC to PDL

This work proposes an alternative definition of the so-called program transformers, used to obtain reduction axioms in the Logic of Communication and Change. Our proposal uses an elegant matrix treatment of Brzozowski’s equational method instead of Kleene’s translation from finite automata to regular expressions. The two alternatives are shown to be equivalent, with Brzozowski’s method having the advantage of being computationally more efficient.

Pere Pardo, Enrique Sarrión-Morillo, Fernando Soler-Toscano, Fernando R. Velázquez-Quesada
On the Expressiveness of the Interval Logic of Allen’s Relations Over Finite and Discrete Linear Orders

Interval temporal logics take time intervals, instead of time instants, as their primitive temporal entities. One of the most studied interval temporal logics is Halpern and Shoham's modal logic of time intervals HS, which associates a modal operator with each binary relation between intervals over a linear order (the so-called Allen's interval relations). A complete classification of all HS fragments with respect to their relative expressive power has recently been given for the classes of all linear orders and of all dense linear orders. The cases of discrete and finite linear orders turn out to be much more involved. In this paper, we make a significant step towards solving the classification problem over those classes of linear orders. First, we illustrate various non-trivial temporal properties that can be expressed by HS fragments when interpreted over finite and discrete linear orders; then, we provide a complete set of definabilities for the HS modalities corresponding to Allen's relations meets, later, begins, finishes, and during, as well as the ones corresponding to their inverse relations. The only missing cases are those of the relations overlaps and overlapped by.

Luca Aceto, Dario Della Monica, Anna Ingólfsdóttir, Angelo Montanari, Guido Sciavicco
Only-Knowing à la Halpern-Moses for Non-omniscient Rational Agents: A Preliminary Report

We investigate the minimal knowledge approach of Halpern-Moses 'only knowing' in the context of two syntactic variants of stable belief sets that aim at avoiding the unreasonably perfect, omniscient agent modelled in R. Stalnaker's original definition of a stable epistemic state. The 'only knowing' approach of J. Halpern and Y. Moses provides equivalent characterizations of 'honest' formulas and characterizes the epistemic state of an agent that has been told only a finite number of facts. The formal account of what it means for an agent to 'only know a' is actually based on 'minimal' epistemic states and is closely related to ground modal nonmonotonic logics. We examine here the behaviour of the HM-'only knowing' approach in the realm of the weak variants of stable epistemic states introduced recently by relaxing the positive or negative introspection context rules of Stalnaker's definition, in a way reminiscent of the work done in modal epistemic logic in response to the 'logical omniscience' problem. We define the 'honest' formulas, i.e. formulas which can be meaningfully 'only known', and characterize them in several ways, including model-theoretic characterizations using impossible worlds. As expected, the generalized 'only knowing' approach lacks the simplicity and elegance shared by the approaches based on Stalnaker's stable sets (actually based on S5), but it is more realistic and can be handily fine-tuned.

Dimitris Askounis, Costas D. Koutras, Christos Moyzes, Yorgos Zikos

Answer-Set Programming

A Complexity Assessment for Queries Involving Sufficient and Necessary Causes

In this work, we revisit a recently proposed multi-valued semantics for logic programs where each true atom in a stable model is associated with a set of expressions (or causal justifications) involving rule labels. For positive programs, these causal justifications correspond to the possible alternative proofs of the atom that further satisfy some kind of minimality or lack of redundancy. This information can be queried for different purposes such as debugging, program design, diagnosis or causal explanation. Unfortunately, in the worst case, the number of causal justifications for an atom can be exponential with respect to the program size, so that computing the complete causal model may become intractable in the general case. However, we may instead just be interested in querying whether some particular set of rules is involved in the derivation of an atom, either as a sufficient cause (the rules provide one of the alternative proofs) or as a necessary cause (they are mandatorily used in all proofs). In this paper, we formally define sufficient and necessary causation for this setting and provide precise complexity characterizations of the associated decision problems, showing that they remain within the first two levels of the polynomial hierarchy.

Pedro Cabalar, Jorge Fandiño, Michael Fink
Inductive Learning of Answer Set Programs

Existing work on Inductive Logic Programming (ILP) has focused mainly on the learning of definite programs or normal logic programs. In this paper, we aim to push the computational boundary to a wider class of programs: Answer Set Programs. We propose a new paradigm for ILP that integrates existing notions of brave and cautious semantics within a unifying learning framework whose inductive solutions are Answer Set Programs and whose examples are partial interpretations. We present an algorithm that is sound and complete with respect to our new notion of inductive solutions. We demonstrate its applicability by discussing a prototype implementation, called ILASP (Inductive Learning of Answer Set Programs), and evaluate its use in the context of planning. In particular, we show how ILASP can be used to learn an agent's knowledge about the environment. Solutions of the learned ASP program provide plans for the agent to travel through the given environment.

Mark Law, Alessandra Russo, Krysia Broda
Stable Models of Fuzzy Propositional Formulas

We introduce the stable model semantics for fuzzy propositional formulas, which generalizes both fuzzy propositional logic and the stable model semantics of Boolean propositional formulas. Combining the advantages of both formalisms, the introduced language allows highly configurable default reasoning involving fuzzy truth values. We show that several properties of Boolean stable models are naturally extended to this formalism, and discuss how it is related to other approaches to combining fuzzy logic and the stable model semantics.

Joohyung Lee, Yi Wang
A Free Logic for Stable Models with Partial Intensional Functions

In this paper we provide a new logical characterisation of stable models with partial functions that consists in a free-logic extension of Quantified Equilibrium Logic (QEL). In so-called “free” logics, terms may denote objects that are outside the domain of quantification, something that can be immediately used to capture partial functions. We show that this feature can be naturally accommodated in the monotonic basis of QEL (the logic of Quantified Here-and-There, QHT) by allowing variable quantification domains that depend on the world where the formula is being interpreted. The paper provides two main contributions: (i) a correspondence with Cabalar’s semantics for stable models with partial functions; and (ii) a Gentzen system for free QHT, the monotonic basis of free QEL.

Pedro Cabalar, Luis Fariñas del Cerro, David Pearce, Agustin Valverde

Belief Revision

Constructive Models for Contraction with Intransitive Plausibility Indifference

Plausibility rankings play a central role in modeling Belief Change, and they take different forms depending on the type of belief change under consideration: preorders on possible worlds, epistemic entrenchments, etc. A common feature of all these structures is that plausibility indifference is assumed to be transitive. In a previous article, [7], we argued that this is not always the case, and we introduced new sets of postulates for revision and contraction (weaker variants of the classical AGM postulates), that are liberated from the indifference transitivity assumption. Herein we complete the task by making the necessary adjustments to the epistemic entrenchment and the partial meet models. In particular we lift the indifference transitivity assumption from both these two models, and we establish representation results connecting the weaker models with the weaker postulates for contraction introduced in [7].

Pavlos Peppas, Mary-Anne Williams
Four Floors for the Theory of Theory Change: The Case of Imperfect Discrimination

The theory of theory change due to Alchourrón, Gärdenfors and Makinson (“AGM”) has been widely known as being characterised by two packages of postulates. The basic package consists of six postulates and is very weak, the full package adds two further postulates and is very strong. Revisiting two classical constructions of theory contraction, viz., relational possible models contraction and entrenchment-based contraction on the one hand and tracing the idea of imperfect discrimination of plausibilities on the other, I argue that four intermediate levels can be distinguished that play important roles within the AGM theory.

Hans Rott
Revisiting Postulates for Inconsistency Measures

Postulates for inconsistency measures are examined, the set of postulates due to Hunter and Konieczny being the starting point. Objections are raised against a few individual postulates. More general shortcomings are discussed and a new series of postulates is introduced.

Philippe Besnard
A Translation-Based Approach for Revision of Argumentation Frameworks

In this paper, we investigate the revision issue for Dung argumentation frameworks. The main idea is that such frameworks can be translated into propositional formulae, allowing the use of propositional revision operators to perform a rational minimal change. Our translation-based approach to revising argumentation frameworks can take advantage of any propositional revision operator ∘. Via a translation, each propositional operator ∘ can be associated with some revision operators ⋆ suited to argumentation frameworks. Some rationality postulates for the ⋆ operators are presented. If the revision formulae are restricted to formulae about acceptance statuses, some ⋆ operators satisfy these postulates provided that the corresponding ∘ operator is AGM.

Sylvie Coste-Marquis, Sébastien Konieczny, Jean-Guy Mailly, Pierre Marquis

Dealing with Inconsistency in ASP and DL

Preserving Strong Equivalence while Forgetting

A variety of proposals for forgetting in logic programs under different semantics have emerged, satisfying differing sets of properties considered desirable. Despite the progress achieved in devising approaches that capture an increasing number of these properties, the idea that the result of forgetting should preserve the meaning of the initial program for the remaining, non-forgotten, atoms has not yet been captured. In particular, the existing proposals may not preserve dependency relations between such atoms that are given by the structure of the program. In logic programs, these relations are captured by strong equivalence, but preserving strong equivalence of two different programs while forgetting does not suffice. Rather, strong equivalence relativized to the remaining atoms should be preserved between the original program and the one that results from forgetting. In this paper, we overcome this deficiency by formalizing the property that captures this maintenance of relations while forgetting, together with a general semantic definition of such forgetting for arbitrary logic programs. Then, we study forgetting for normal programs under the well-founded semantics, and for programs with double negation under the answer set semantics. In both cases, we focus on efficient syntax-based algorithms that only manipulate the rules in which changes are effectively necessary.

Matthias Knorr, José Julio Alferes
Computing Repairs for Inconsistent DL-programs over $\mathcal{EL}$ Ontologies

DL-programs couple nonmonotonic logic programs with DL ontologies through queries in a loose way, which may lead to inconsistency, i.e., the lack of an answer set. Recently defined repair answer sets remedy this. In particular, for $DL-Lite_{\mathcal{A}}$ ontologies, the computation of deletion repair answer sets can effectively be reduced to constraint matching based on so-called support sets. Here we consider the problem for DL-programs over $\mathcal{EL}$ ontologies. This is more challenging than adopting a suitable notion of support sets and their computation. Compared to $DL-Lite_{\mathcal{A}}$, support sets may be neither small nor few, and completeness may need to be given up in favor of sound repair computation on incomplete support information. We provide such an algorithm and discuss partial support set computation, as well as a declarative implementation. Preliminary experiments show a very promising potential of the partial support set approach.

Thomas Eiter, Michael Fink, Daria Stepanova
A Prioritized Assertional-Based Revision for DL-Lite Knowledge Bases

DL-Lite is a powerful and tractable family of description logics. One of the fundamental issues in this area is the evolution, or revision, of knowledge bases. To this end, many approaches have recently been developed for revising flat DL-Lite knowledge bases. This paper investigates "Prioritized Removed Sets Revision" (PRSR) in the DL-Lite framework, where the assertions or data are prioritized (for instance, when the data are provided by multiple sources having different reliability levels). The PRSR approach is based on inconsistency minimization in order to restore consistency, where minimality refers to the lexicographic criterion and not to set inclusion. We study different forms of incorporated information: an assertion, a positive inclusion axiom, or a negative inclusion axiom. We show that under some conditions PRSR can be achieved in polynomial time. We give logical properties of the proposed operators in terms of the satisfaction of Hansson's postulates rephrased in the DL-Lite framework. We finally show how to use the notion of hitting sets for computing prioritized removed sets.

Salem Benferhat, Zied Bouraoui, Odile Papini, Eric Würbel
Modular Paracoherent Answer Sets

The answer set semantics may assign a logic program no model, due to classic contradiction or cyclic negation. The latter can be remedied by resorting to a paracoherent semantics given by semi-equilibrium (SEQ) models, which are 3-valued interpretations that generalize the logical reconstruction of answer sets given by equilibrium models. While SEQ-models have interesting properties, they lack modularity in the rules, such that a natural modular (bottom-up) evaluation of programs is hindered. We thus refine SEQ-models using splitting sets, the major tool for modularity in modeling and evaluating answer set programs. We consider canonical models that are independent of any particular splitting sequence from a class of splitting sequences, and present two such classes whose members are efficiently recognizable. Splitting SEQ-models does not make reasoning harder, except for deciding model existence in the presence of constraints (without constraints, split SEQ-models always exist).

Giovanni Amendola, Thomas Eiter, Nicola Leone

Reason about Actions and Causality

Action Theories over Generalized Databases with Equality Constraints

In this work we focus on situation calculus action theories over generalized databases with equality constraints, here called GFDBs, which are able to finitely represent complete information over a possibly infinite number of objects. We contribute the following: i) we show that GFDBs characterize the class of definitional KBs and that they are closed under progression; ii) we show that temporal projection queries are decidable for theories whose initial KB is expressed as a GFDB, which we call GFDB-BATs; iii) we extend the notion of boundedness to allow for infinite objects in the extensions of fluents and prove that a wide class of generalized projection queries is decidable for GFDB-BATs under a restriction we call C-boundedness; iv) we show that checking whether C-boundedness holds for a given bound is decidable. The proposed action theories are to date the most expressive ones for which there are decidable methods for computing both progression and generalized projection.

Fabio Patrizi, Stavros Vassos
A Dynamic View of Active Integrity Constraints

Active integrity constraints have been introduced in the database community as a way to restore integrity. We view active integrity constraints as programs of Dynamic Logic of Propositional Assignments (DL-PA) and show how several semantics of database repair that were proposed in the literature can be characterised by DL-PA formulas. We moreover propose a new definition of repair. For all these definitions we provide DL-PA counterparts of decision problems such as the existence of a repair or the existence of a unique repair.

Guillaume Feuillade, Andreas Herzig
Similarity Orders from Causal Equations

The purpose of this paper is to demonstrate that, contrary to the received wisdom, causal reasoning can be formalized wholly within the framework of Lewis’ conditional logic. To this aim we simulate causal reasoning based on structural equations in Lewis’ order semantics. This reduction is based on a formalization of an intuitive idea for computing relative similarity between worlds. Worlds are the more similar the more they satisfy the same relevant propositions, where relevance is a comparative notion represented by a preorder. In the context of causal reasoning this relevance order on propositions depends on the causal structure of the problem domain.

Johannes Marti, Riccardo Pinosio
Verification of Context-Sensitive Knowledge and Action Bases

Knowledge and Action Bases (KABs) have recently been proposed as a formal framework to capture the dynamics of systems that manipulate Description Logic (DL) Knowledge Bases (KBs) through action execution. In this work, we enrich the KAB setting with contextual information, making use of different context dimensions. On the one hand, the context is determined by the environment using context-changing actions that make use of the current state of the KB and the current context. On the other hand, it affects the set of TBox assertions that are relevant at each time point and that have to be considered when processing queries posed over the KAB. Here we extend to our enriched setting the results on verification of rich temporal properties expressed in the μ-calculus, which had been established for standard KABs. Specifically, we show that under a run-boundedness condition, verification stays decidable and does not incur any additional cost in terms of worst-case complexity. We also show how to adapt syntactic conditions ensuring run-boundedness so as to account for contextual information, taking into account context-dependent activation of TBox assertions.

Diego Calvanese, İsmail İlkan Ceylan, Marco Montali, Ario Santoso

System Descriptions

System aspmt2smt: Computing ASPMT Theories by SMT Solvers

Answer Set Programming Modulo Theories (ASPMT) is an approach to combining answer set programming and satisfiability modulo theories, based on the functional stable model semantics. It has been shown that the tight fragment of ASPMT programs can be turned into SMT instances, thereby allowing SMT solvers to compute stable models of ASPMT programs. In this paper we present a compiler called aspmt2smt, which implements this translation. The system uses the ASP grounder gringo and the SMT solver z3. gringo partially grounds input programs while leaving some variables to be processed by z3. We demonstrate that the system can effectively handle real number computations for reasoning about continuous changes.

Michael Bartholomew, Joohyung Lee
A Library of Anti-unification Algorithms

Generalization problems arise in many branches of artificial intelligence: machine learning, analogical and case-based reasoning, cognitive modeling, knowledge discovery, etc. Anti-unification is a technique used often to solve generalization problems. In this paper we describe an open-source library of some newly developed anti-unification algorithms in various theories: for first- and second-order unranked terms, higher-order patterns, and nominal terms.

Alexander Baumgartner, Temur Kutsia
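The core operation behind such libraries, syntactic first-order anti-unification, can be sketched in a few lines: two terms are generalized top-down, and each disagreement pair is replaced by a fresh variable, reusing the same variable for repeated disagreements. This is only a toy sketch of the classical least-general-generalization algorithm, not the library's code; the term representation and names are our own.

```python
# Terms are tuples ("f", arg1, arg2, ...) for function applications,
# or plain strings for constants. Generalization variables are "X0", "X1", ...

def lgg(t1, t2, store=None):
    """Least general generalization of two first-order terms."""
    if store is None:
        store = {}
    # Same functor and arity: keep the functor and recurse on arguments.
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        return (t1[0],) + tuple(lgg(a, b, store) for a, b in zip(t1[1:], t2[1:]))
    # Identical constants generalize to themselves.
    if t1 == t2:
        return t1
    # Disagreement: map the pair (t1, t2) to one fresh variable.
    if (t1, t2) not in store:
        store[(t1, t2)] = "X%d" % len(store)
    return store[(t1, t2)]

# f(a, g(a)) and f(b, g(b)) generalize to f(X0, g(X0)).
print(lgg(("f", "a", ("g", "a")), ("f", "b", ("g", "b"))))
```

Reusing one variable for the repeated disagreement (a, b) is what makes the result least general: a naive generalization f(X0, g(X1)) would lose the link between the two positions.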
The D-FLAT System for Dynamic Programming on Tree Decompositions

Complex reasoning problems over large amounts of data pose a great challenge for computer science. To overcome the obstacle of high computational complexity, exploiting structure by means of tree decompositions has proved to be effective in many cases. However, the implementation of suitable efficient algorithms is often tedious. D-FLAT is a software system that combines the logic programming language Answer Set Programming with problem solving on tree decompositions and can serve as a rapid prototyping tool for such algorithms. Since we initially proposed D-FLAT, we have made major changes to the system, improving its range of applicability and its usability. In this paper, we present the system resulting from these efforts.

Michael Abseher, Bernhard Bliem, Günther Charwat, Frederico Dusberger, Markus Hecher, Stefan Woltran

Short System Descriptions

ACUOS: A System for Modular ACU Generalization with Subtyping and Inheritance

Computing generalizers is relevant in a wide spectrum of automated reasoning areas where analogical reasoning and inductive inference are needed. The ACUOS system computes a complete and minimal set of semantic generalizers (also called “anti-unifiers”) of two structures in a typed language modulo a set of equational axioms. By supporting types and any (modular) combination of the associativity (A), commutativity (C), and unity (U) algebraic axioms for function symbols, ACUOS allows reasoning about typed data structures, e.g. lists, trees, and (multi-)sets, and typical hierarchical/structural relations such as is_a and part_of. This paper discusses the modular ACU generalization tool ACUOS and illustrates its use in a classical artificial intelligence problem.

María Alpuente, Santiago Escobar, Javier Espert, José Meseguer
Drawing Euler Diagrams from Region Connection Calculus Specifications with Local Search

This paper describes a local-search-based approach and a software tool for approximating the problem of drawing Euler diagrams. Specifications are written using RCC-8 constraints and radius constraints. Euler diagrams are described as sets of circles.

François Schwarzentruber, Jin-Kao Hao
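The flavor of such a local search can be conveyed with a toy sketch: circles with fixed radii, pairwise constraints requiring overlap or disjointness (a crude stand-in for RCC-8 relations), and hill-climbing on the circle centres that minimizes total constraint violation. This is our own illustrative analogy, not the tool's actual algorithm or constraint language.

```python
import math
import random

def violation(centres, radii, constraints):
    """Total violation: (i, j, "O") means circles i, j must overlap,
    (i, j, "D") means they must be disjoint; 0.1 is a small margin."""
    total = 0.0
    for i, j, rel in constraints:
        d = math.dist(centres[i], centres[j])
        if rel == "O":   # should overlap: distance below sum of radii
            total += max(0.0, d - (radii[i] + radii[j]) + 0.1)
        else:            # should be disjoint: distance above sum of radii
            total += max(0.0, (radii[i] + radii[j]) - d + 0.1)
    return total

def local_search(radii, constraints, steps=5000, seed=0):
    """Hill-climb on circle centres; keep only improving random moves."""
    rng = random.Random(seed)
    centres = [[rng.uniform(0, 10), rng.uniform(0, 10)] for _ in radii]
    best = violation(centres, radii, constraints)
    for _ in range(steps):
        if best == 0.0:
            break
        i = rng.randrange(len(centres))
        old = list(centres[i])
        centres[i][0] += rng.uniform(-1, 1)
        centres[i][1] += rng.uniform(-1, 1)
        v = violation(centres, radii, constraints)
        if v < best:
            best = v
        else:
            centres[i] = old  # undo a non-improving move
    return centres, best

# A overlaps B, B overlaps C, A and C disjoint (a chain of three circles).
cons = [(0, 1, "O"), (1, 2, "O"), (0, 2, "D")]
centres, score = local_search([1.0, 1.0, 1.0], cons)
print(score)
```

A score of 0.0 means every constraint is satisfied by the final layout; plain hill-climbing can of course get stuck, which is why dedicated tools use more robust local-search strategies.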

Short Papers

Probabilistic Abstract Dialectical Frameworks

Although Dung’s frameworks are widely approved tools for abstract argumentation, their abstractness makes expressing notions such as support or uncertainty very difficult. Thus, many generalizations of them have been created, including probabilistic argumentation frameworks (PrAFs) and abstract dialectical frameworks (ADFs). While the former allow modeling uncertain arguments and attacks, the latter can handle various dependencies between arguments. Although the actual probability layer in PrAFs is independent of the chosen semantics, new relations pose new challenges and new interpretations of what the probability of a relation is. Thus, the methodology for handling uncertainties cannot be shifted to more general structures without further thought. In this paper we show how ADFs can be extended with probabilities.

Sylwia Polberg, Dragan Doder
Argumentative Aggregation of Individual Opinions

Over a new abstract model of aggregating individual issues – abstract debates – we introduce an entire class of aggregation operators by bringing ideas from Abstract Argumentation to Social Choice Theory. The main goal is to introduce rational aggregation methods that do not satisfy the independence condition commonly used in Social Choice Theory. This type of context-dependent aggregation is very natural, could be useful in many real-world decision-making scenarios, and the present paper provides the first theoretical investigation of it.

Cosmina Croitoru
Measuring Dissimilarity between Judgment Sets

Distances and scores are widely used to measure (dis)similarity between objects of information such as preferences, belief sets, judgment sets, etc. Typically, measures are directly imported from information theory or topology, with little consideration for their adequacy in the context of comparing logically related information. We propose a set of desirable properties for measures used to aggregate (logically related) judgments, and show which of the measures used for this purpose satisfy them.

Marija Slavkovik, Thomas Ågotnes
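The simplest measure of the kind studied in the abstract above is the Hamming distance between two complete judgment sets over the same agenda: the number of issues on which they disagree. The sketch below is only an illustration of that standard measure, using our own representation of judgment sets as dictionaries from issues to truth values.

```python
def hamming(j1, j2):
    """Number of agenda issues on which two complete judgment sets disagree."""
    assert j1.keys() == j2.keys(), "judgment sets must cover the same agenda"
    return sum(1 for issue in j1 if j1[issue] != j2[issue])

# Agenda {p, q, p∧q}: the two judges disagree on q and on p∧q.
a = {"p": True, "q": True, "p_and_q": True}
b = {"p": True, "q": False, "p_and_q": False}
print(hamming(a, b))  # → 2
```

Note that Hamming distance treats the issues as independent, which is exactly the kind of logical blindness the paper's desirable properties are meant to expose: b's disagreement on p∧q is forced by its disagreement on q, yet both are counted.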
Exploiting Answer Set Programming for Handling Information Diffusion in a Multi-Social-Network Scenario

In this paper we apply Answer Set Programming to analyzing properties of social networks, and we consider Information Diffusion in Social Network Analysis. This problem has been deeply investigated for single social networks, but we focus on a new setting where many social networks coexist and are strictly connected to each other, thanks to those users who join more than one social network. We present experiments allowing us to conclude that the way information spreads in a multi-social-network scenario is completely different from that of a single-social-network context.

Giuseppe Marra, Francesco Ricca, Giorgio Terracina, Domenico Ursino
Reasoning about Dynamic Normative Systems

The use of normative systems is widely accepted as an effective approach to controlling and regulating the behaviour of agents in multi-agent systems. When norms are added to a normative system, the behaviour of the system changes. As yet, there is no clear formal methodology for modeling the dynamics of a normative system under the addition of various types of norms. In this paper we view the addition of a norm as an update of a normative system, and we provide update semantics to model this process.

Max Knobbout, Mehdi Dastani, John-Jules Ch. Meyer
A Modal Logic of Knowledge, Belief, and Estimation

We introduce KBE, a modal epistemic logic for reasoning about Knowledge, Belief and Estimation, three attitudes involved in an agent’s decision-making process. In our logic, Knowledge and Belief are captured by S4.2, a modal logic holding a distinguished position among the epistemic logics investigated in AI and Philosophy. The Estimation operator of KBE is a kind of generalized ‘many’ or ‘most’ quantifier, whose origins go back to the work of J. Burgess and A. Herzig, but whose model-theoretic incarnation (‘weak filters’) was introduced by K. Schlechta and V. Jauregui. We work with complete weak filters (‘weak ultrafilters’), as we are interested in situations where an estimation can always be reached. The axiomatization of KBE comprises ‘bridge’ axioms that reflect the intuitive relationship of ‘estimation’ to ‘knowledge’ and ‘belief’; several introspective properties are shown to hold, and it turns out that believing ϕ can be equivalently defined in KBE as ‘estimating that ϕ is known’, an interesting fact and an indication of the intuitive correctness of the introduced estimation operator. The model theory of KBE comprises a class of frames combining relational Kripke frames with Scott-Montague semantics, in which neighborhoods are collections of ‘large’ sets of possible worlds. Soundness and completeness are mentioned and a tableaux proof procedure is sketched.

Costas D. Koutras, Christos Moyzes, Yorgos Zikos
A Logic for Belief Contraction

We introduce a logical system in which we can define a belief contraction modal connective. The logic is based on a combination of Moss-Parikh subset space logic and temporal logic. The semantics is the system of spheres used in conditional logic and the belief contraction connective encodes minimization of the distance between two spheres. We prove completeness and decidability and show that the belief contraction connective satisfies most of the AGM postulates when those are translated into statements of the logic.

Konstantinos Georgatos
Logic Foundations of the OCL Modelling Language

In this paper we define the first-order fragment of the Object Constraint Language (OCL), the declarative language for describing rules that apply to conceptual schemas in the Unified Modelling Language (UML). This fragment covers the whole of OCL without arithmetic operators, aggregation functions, iterators, and recursion. We give its set-theoretical formal syntax and semantics in an elegant, concise, and clear way. This fragment has the same expressivity as domain-independent first-order logic (aka relational algebra), in the sense that any relational algebra expression can be reformulated as a logically equivalent OCL expression, and vice versa.

Enrico Franconi, Alessandro Mosca, Xavier Oriol, Guillem Rull, Ernest Teniente
Constraint-Based Algorithm for Computing Temporal Invariants

Automatically identified invariants are an important part of reductions of state-space reachability problems to SAT and related formalisms, as a method of pruning the search space. No general algorithms for computing temporal invariants have been proposed before; earlier algorithms are restricted to unconditional actions and at-most-one invariants. We propose a powerful inductive algorithm for computing invariants for timed systems, showing that a wide range of timed modeling languages can be handled uniformly. The algorithm reduces the computation of timed invariants to a sequence of temporal logic consistency tests.

Jussi Rintanen
Answer Set Solver Backdoors

Backdoor variables offer a generic notion for providing insights to the surprising success of constraint satisfaction solvers in solving remarkably complex real-world instances of combinatorial problems. We study backdoors in the context of answer set programming (ASP), and focus on studying the relative size of backdoors in terms of different state-of-the-art answer set solving algorithms. We show separations of ASP solver families in terms of the smallest existing backdoor sets for the solvers.

Emilia Oikarinen, Matti Järvisalo
Incremental SAT-Based Method with Native Boolean Cardinality Handling for the Hamiltonian Cycle Problem

The Hamiltonian cycle problem (HCP) is the problem of finding a spanning cycle in a given graph. HCP is NP-complete and has long been known as an important problem due to its close relationship to the travelling salesman problem (TSP), which can be seen as an optimization variant that asks for a minimum-cost cycle; from a different viewpoint, HCP is a special case of TSP. In this paper, we propose an incremental SAT-based method for solving HCP. The number of clauses needed for a CNF encoding of HCP often prevents SAT-based methods from being scalable. Our method reduces that number of clauses by relaxing some constraints and by handling cardinality constraints natively. Our approach has been implemented on top of the SAT solver Sat4j using Scarab. An experimental evaluation is carried out on several benchmark sets, comparing our incremental SAT-based method against an existing eager SAT-based method and specialized methods for HCP.

Takehide Soh, Daniel Le Berre, Stéphanie Roussel, Mutsunori Banbara, Naoyuki Tamura
Revisiting Reductants in the Multi-adjoint Logic Programming Framework

In this work, after revisiting the different notions of reductant that have arisen in the framework of multi-adjoint logic programming and akin frameworks, we introduce a new, more adequate notion of reductant in the context of multi-adjoint logic programs. We study some of its properties and its relationships with other notions of reductant.

Pascual Julián-Iranzo, Jesús Medina, Manuel Ojeda-Aciego
Backmatter
Metadata
Title
Logics in Artificial Intelligence
Edited by
Eduardo Fermé
João Leite
Copyright year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-11558-0
Print ISBN
978-3-319-11557-3
DOI
https://doi.org/10.1007/978-3-319-11558-0