2016 | Book

Logics in Artificial Intelligence

15th European Conference, JELIA 2016, Larnaca, Cyprus, November 9-11, 2016, Proceedings

About this book

This book constitutes the proceedings of the 15th European Conference on Logics in Artificial Intelligence, JELIA 2016, held in Larnaca, Cyprus, in November 2016. The 32 full papers and 10 short papers included in this volume were carefully reviewed and selected from 88 submissions. The accepted papers span a number of areas within Logics in AI, including: belief revision, answer set programming, argumentation, probabilistic reasoning, handling inconsistencies, temporal logics and planning, description logics, and decidability and complexity results.

Table of Contents

Frontmatter

Full Papers

Frontmatter
Metabolic Pathways as Temporal Logic Programs

Metabolic networks, formed by a series of metabolic pathways, are made of intracellular and extracellular reactions that determine the biochemical properties of a cell and of a set of interactions that guide and regulate the activity of these reactions. Cancers, for example, can sometimes appear in a cell as a result of some pathology in a metabolic pathway. Most of these pathways are formed by an intricate and complex network of chain reactions, and they can be represented in a human-readable form using graphs which describe the cell signaling pathways. In this paper we present a logic, called Molecular Equilibrium Logic, a nonmonotonic logic that allows the representation of metabolic pathways. We also show how this logic can be presented in terms of a syntactical subset of Temporal Equilibrium Logic, the temporal extension of Equilibrium Logic, called Splittable Temporal Logic Programs.

Jean-Marc Alliot, Martín Diéguez, Luis Fariñas del Cerro
On Decidability of a Logic of Gossips

Gossip protocols aim at arriving, by means of point-to-point or group communications, at a situation in which all the agents know each other's secrets, see, e.g., [11]. In [1], building upon [3], we studied distributed epistemic gossip protocols, which are examples of knowledge-based programs introduced in [6]. These protocols use as guards formulas from a simple epistemic logic. We show here that these protocols are implementable by proving that it is decidable to determine whether a formula with no nested modalities is true after a sequence of calls. Building upon this result we further show that the problems of partial correctness and of termination of such protocols are decidable as well.

Krzysztof R. Apt, Dominik Wojtczak
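As a worked illustration of the call semantics assumed above (in every call the two participants pool all secrets they currently know; the agents and secrets are hypothetical): let agents $$a, b, c$$ initially hold the secrets $$A, B, C$$. After the call $$ab$$, both $$a$$ and $$b$$ know $$\{A, B\}$$ while $$c$$ still only knows $$C$$; after the subsequent call $$bc$$, agents $$b$$ and $$c$$ know $$\{A, B, C\}$$, but $$a$$ does not. An epistemic guard such as "$$a$$ knows the secret of $$c$$" is thus false after the sequence $$ab, bc$$, and the decidability result above concerns checking exactly such guards (formulas without nested modalities) after a given call sequence.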
Hilbert-Style Axiomatization for Hybrid XPath with Data

In this paper we introduce a sound and complete axiomatization for XPath with data constraints extended with hybrid operators. First, we define $$\text {HXPath}_{=}(\uparrow \downarrow )$$, an extension of vertical XPath with nominals and the hybrid operator @. Then, we introduce an axiomatic system for $$\text {HXPath}_{=}(\uparrow \downarrow )$$, and we prove it is complete with respect to the class of abstract data trees, i.e., data trees in which data values are abstracted as equivalence relations. As a corollary, we also obtain completeness with respect to the class of concrete data trees.

Carlos Areces, Raul Fervari
Approximate Unification in the Description Logic $$\mathcal {FL}_0$$

Unification in description logics (DLs) has been introduced as a novel inference service that can be used to detect redundancies in ontologies, by finding different concepts that may potentially stand for the same intuitive notion. It was first investigated in detail for the DL $$\mathcal {FL}_0$$, where unification can be reduced to solving certain language equations. In order to increase the recall of this method for finding redundancies, we introduce and investigate the notion of approximate unification, which basically finds pairs of concepts that “almost” unify. The meaning of “almost” is formalized using distance measures between concepts. We show that approximate unification in $$\mathcal {FL}_0$$ can be reduced to approximately solving language equations, and devise algorithms for solving the latter problem for two particular distance measures.

Franz Baader, Pavlos Marantidis, Alexander Okhotin
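For readers unfamiliar with unification in $$\mathcal {FL}_0$$ (concepts built from $$\top $$, conjunction, and value restrictions $$\forall r.C$$), a minimal example, not taken from the paper: the concept patterns $$C \equiv \forall r.A \sqcap \forall s.X$$ and $$D \equiv \forall r.Y \sqcap \forall s.B$$, with concept variables $$X$$ and $$Y$$, unify under the substitution $$\sigma = \{X \mapsto B, Y \mapsto A\}$$, since $$\sigma (C) \equiv \sigma (D)$$. Approximate unification instead asks for a substitution that minimizes a distance between $$\sigma (C)$$ and $$\sigma (D)$$, so that patterns which differ only slightly, say in a single nested value restriction, are still reported as "almost" unifiable.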
Inconsistency-Tolerant Query Answering: Rationality Properties and Computational Complexity Analysis

Generalising the state of the art, an inconsistency-tolerant semantics can be seen as a pair composed of a modifier operator and an inference strategy. In this paper we deepen the analysis of this general setting and focus on two aspects. First, we investigate the rationality properties of such semantics for existential rule knowledge bases. Second, we unfold the broad landscape of complexity results of inconsistency-tolerant semantics under a specific (yet expressive) subclass of existential rules.

Jean François Baget, Salem Benferhat, Zied Bouraoui, Madalina Croitoru, Marie-Laure Mugnier, Odile Papini, Swan Rocher, Karim Tabia
Temporal Here and There

Temporal Here and There (THT) constitutes the logical foundations of Temporal Equilibrium Logic. Nevertheless, it has never been studied in detail since results about axiomatisation and interdefinability of modal operators remained unknown. In this paper we provide a sound and complete axiomatic system for THT together with several results on interdefinability of modal operators.

Philippe Balbiani, Martín Diéguez
On Logics of Group Belief in Structured Coalitions

In the study of group belief formation, groups of agents are often assumed to possess a topological structure. Here we investigate some ways in which this topological structure may provide the semantical basis for logics of group belief. We impose a partial order on a set of agents, first, to express preferences of agents by their doxastic abilities; second, to express the idea of a coalition (a well-formed group); and third, to give a natural semantics for the group belief operator. We define the group belief of a set of agents in two different ways and study their corresponding logics. We also study a logic in which doxastic preference is expressed by a binary operator. We prove completeness and discuss correspondences between the logics.

Philippe Balbiani, David Pearce, Levan Uridia
A Three-Value Abstraction Technique for the Verification of Epistemic Properties in Multi-agent Systems

We put forward an abstraction technique, based on a three-value semantics, for the verification of epistemic properties of agents participating in a multi-agent system. First we introduce a three-value interpretation of epistemic logic, based on a notion of order defined on the information content of the local states of each agent. Then, we use the three-value semantics to introduce an abstraction technique to verify epistemic properties of agents in infinite-state multi-agent systems.

Francesco Belardinelli, Alessio Lomuscio
A Relaxation of Internal Conflict and Defence in Weighted Argumentation Frameworks

In Weighted Abstract Argumentation Frameworks (WAAFs), weights on attacks bring more information. One advantage is the possibility of defining a different notion of defence, which also compares the weight associated with a defence against the weight of the attacks it answers. We study and merge together two different relaxations of classically crisp concepts in WAAFs: the first is related to a new notion of weighted defence (defence can be stronger or weaker at will), while the second is related to how much inconsistency one is willing to tolerate inside an extension (which need no longer be totally conflict-free). These two relaxations are strictly related and influence each other: allowing a small conflict may lead to more arguments in an extension, and consequently to a stronger or weaker defence. We model weights with a semiring structure, which can be instantiated to different metrics used in the literature (e.g., fuzzy WAAFs).

Stefano Bistarelli, Fabio Rossi, Francesco Santini
Decidability and Expressivity of Ockhamist Propositional Dynamic Logics

Ockhamist Propositional Dynamic Logic ($$\mathsf {OPDL}$$) is a logic unifying the family of dynamic logics and the family of branching-time temporal logics, two families of logic widely used in AI to model reactive systems and multi-agent systems (MAS). In this paper, we present two variants of this logic. These two logics share the same language and differ only in one semantic condition. The first logic embeds Bundled $$\textsf {CTL}^* $$ while the second embeds $$\textsf {CTL}^* $$. We provide a 2EXPTIME decision procedure for the satisfiability problem of each variant. The decision procedure for the first variant of $$\mathsf {OPDL}$$ is based on the elimination of Hintikka sets while the decision procedure for the second variant relies on automata.

Joseph Boudou, Emiliano Lorini
On the Expressiveness of Temporal Equilibrium Logic

We investigate expressiveness issues of Temporal Equilibrium Logic (TEL), a promising nonmonotonic logical framework for temporal reasoning. TEL shares the syntax of standard linear temporal logic LTL, but its semantics is an orthogonal combination of the LTL semantics with the nonmonotonic semantics of Equilibrium Logic. We establish that TEL is more expressive than LTL, and captures a strict subclass of $$\omega $$-regular languages. We illustrate the expressive power of $$\textsf {TEL} $$ by showing that $$\textsf {LTL} $$-conformant planning, which is not expressible in $$\textsf {LTL} $$, can be instead expressed in $$\textsf {TEL} $$. Additionally, we provide a systematic study of the expressiveness comparison between the LTL semantics and the TEL semantics for various natural syntactical fragments.

Laura Bozzelli, David Pearce
Introducing Role Defeasibility in Description Logics

Accounts of preferential reasoning in Description Logics often take as point of departure the semantic notion of a preference order on objects in a domain of interpretation, which allows for the development of notions of defeasible subsumption and entailment. However, such an approach does not account for defeasible roles, interpreted as partially ordered sets of tuples. We state the case for role defeasibility and introduce a corresponding preferential semantics for a number of defeasible constructs on roles. We show that this does not negatively affect decidability or complexity of reasoning for an important class of DLs, and that existing notions of preferential reasoning can be expressed in terms of defeasible roles.

Katarina Britz, Ivan Varzinczak
Opposition Frameworks

In this paper we introduce opposition frameworks, a generalization of Dung’s argumentation frameworks. While keeping the attack relation as the sole type of interaction between nodes and retaining the abstract level of argumentation frameworks, opposition frameworks add more flexibility, reducing the gap between structured and abstract argumentation. A guarded attack calculus is developed in order to obtain proper generalizations of Dung’s admissibility-based semantics. The high modeling capabilities of our new setting offer an alternative instantiation solution (for other existing argumentation frameworks) for argument evaluation.

Cosmina Croitoru, Kurt Mehlhorn
Prompt Interval Temporal Logic

Interval temporal logics are expressive formalisms for temporal representation and reasoning, which use time intervals as primitive temporal entities. They have been extensively studied for the past two decades and successfully applied in AI and computer science. Unfortunately, they lack the ability to express promptness conditions, as is the case with commonly-used temporal logics, e.g., LTL: whenever we deal with a liveness request, such as “something good eventually happens”, there is no way to impose a bound on the delay with which it is fulfilled. In recent years, this issue has been addressed in automata theory, game theory, and temporal logic. In this paper, we approach it in the interval temporal logic setting. First, we introduce PROMPT-PNL, a prompt extension of the well-studied interval temporal logic PNL, and we prove the undecidability of its satisfiability problem; then, we show how to recover decidability (NEXPTIME-completeness) by imposing a natural syntactic restriction on it.

Dario Della Monica, Angelo Montanari, Aniello Murano, Pietro Sala
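To illustrate the promptness issue in the more familiar point-based setting (this is the standard prompt-LTL intuition, not the interval syntax of PROMPT-PNL): the LTL formula $$\mathbf {G}(req \rightarrow \mathbf {F}\,grant)$$ only guarantees that every request is eventually granted, and it remains satisfied even if the waiting times grow without bound along the computation. A prompt eventuality instead demands a single bound $$k$$ such that every request is granted within at most $$k$$ steps. PROMPT-PNL transfers this idea to interval temporal logic.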
Exploiting Contextual Knowledge for Hybrid Classification of Visual Objects

We consider the problem of classifying visual objects in a scene by exploiting the semantic context. For this task, we define hybrid classifiers (HC) that combine local classifiers with context constraints, and can be applied to collective classification problems (CCPs) in general. Context constraints are represented by weighted ASP constraints using object relations. To integrate probabilistic information provided by the classifier and the context, we embed our encoding in the formalism $$LP^{MLN}$$, and show that an optimal labeling can be efficiently obtained from the corresponding $$LP^{MLN}$$ program by employing an ordinary ASP solver. Moreover, we describe a methodology for constructing an HC for a CCP, and present experimental results of applying an HC for object classification in indoor and outdoor scenes, which exhibit significant improvements in terms of accuracy compared to using only a local classifier.

Thomas Eiter, Tobias Kaminski
Reasoning About Justified Belief Based on the Fusion of Evidence

In this paper, we propose logics for reasoning about belief and evidence. Starting from justification logic (JL), in which the reasons why a fact is believed are explicitly represented as justification terms, we explore the relationship between justified belief and fused information from different evidential sources. We argue that the expressive power of JL is inadequate for our purpose because, while a justification formula can represent that a piece of evidence is admissible for a belief, it cannot express whether the evidence has actually been observed. To address this issue, we propose more fine-grained JLs that can express the informational content of evidence and in which the actual observation of evidence is definable. As a byproduct, we also show that the proposed logics are easily extended to accommodate dynamic evidential reasoning. Consequently, we can integrate the JL and dynamic epistemic logic (DEL) paradigms in a natural way.

Tuan-Fang Fan, Churn-Jung Liau
Writing Declarative Specifications for Clauses

Modern satisfiability (SAT) solvers provide an efficient implementation of classical propositional logic. Their input language, however, is based on the conjunctive normal form (CNF) of propositional formulas. To use SAT solver technology in practice, a user must create the input clauses in one way or another. A typical approach is to write a procedural program that generates formulas on the basis of some input data relevant for the problem domain and translates them into CNF. In this paper, we propose a declarative approach where the intended clauses are specified in terms of rules in analogy to answer set programming (ASP). This allows the user to write first-order specifications for intended clauses in a schematic way by exploiting term variables. We develop a formal framework required to define the semantics of such specifications. Moreover, we provide an implementation harnessing state-of-the-art ASP grounders to accomplish the grounding step of clauses. As a result, we obtain a general-purpose clause-level grounding approach for SAT solvers. Finally, we illustrate the capabilities of our specification methodology in terms of combinatorial and application problems.

Martin Gebser, Tomi Janhunen, Roland Kaminski, Torsten Schaub, Shahab Tasharrofi
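As a rough sketch of the clause-level grounding idea described above, the following minimal Python fragment instantiates two schematic clause specifications (graph 3-colouring, with term variables ranging over finite domains) into DIMACS CNF. It is only an illustration under our own invented rule format, not the authors' specification language, which is rule-based in the style of ASP and grounded by state-of-the-art ASP grounders.

    from itertools import product

    # Toy "specification": graph 3-colouring as two clause schemas whose
    # term variables (X, Y, C) range over finite domains.
    nodes = [1, 2, 3, 4]
    edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
    colours = [1, 2, 3]

    # Map every ground atom colour(X, C) to a DIMACS variable number.
    atoms = {}
    def var(x, c):
        return atoms.setdefault((x, c), len(atoms) + 1)

    clauses = []
    # Schema 1: colour(X,1) v colour(X,2) v colour(X,3), for every node X.
    for x in nodes:
        clauses.append([var(x, c) for c in colours])
    # Schema 2: -colour(X,C) v -colour(Y,C), for every edge (X,Y) and colour C.
    for (x, y), c in product(edges, colours):
        clauses.append([-var(x, c), -var(y, c)])

    # Emit the grounding in DIMACS CNF, ready for any SAT solver.
    print("p cnf {} {}".format(len(atoms), len(clauses)))
    for clause in clauses:
        print(" ".join(map(str, clause)) + " 0")
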
Standard Sequent Calculi for Lewis’ Logics of Counterfactuals

We present new sequent calculi for Lewis’ logics of counterfactuals. The calculi are based on Lewis’ connective of comparative plausibility and modularly capture almost all logics of Lewis’ family. Our calculi are standard, in the sense that each connective is handled by a finite number of rules with a fixed and finite number of premises; internal, meaning that a sequent denotes a formula in the language; and analytic. We present two equivalent versions of the calculi: in the first one, the calculi comprise simple rules; we show that for the basic case of logic $$\mathbb {V}$$, the calculus allows for syntactic cut-elimination, a fundamental proof-theoretical property. In the second version, the calculi comprise invertible rules; they allow for terminating proof search and a proof of semantic completeness. We finally show that our calculi can simulate the only internal (non-standard) sequent calculi previously known for these logics.

Marianna Girlando, Björn Lellmann, Nicola Olivetti, Gian Luca Pozzato
Incremental Computation of Deterministic Extensions for Dynamic Argumentation Frameworks

We address the problem of efficiently recomputing the extensions of abstract argumentation frameworks (AFs) which are updated by adding/deleting arguments or attacks. In particular, after identifying some properties that hold for updates of AFs under several well-known semantics, we focus on the two most popular ‘deterministic’ semantics (namely, grounded and ideal) and present two algorithms for their incremental computation, well-suited to dynamic applications where updates to an initial AF are frequently performed to take into account new available knowledge. We experimentally validated the proposed approach.

Sergio Greco, Francesco Parisi
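A small example, not from the paper, of why updates may change the grounded extension non-locally: in the AF with arguments $$a, b, c$$ and attacks $$a \rightarrow b$$ and $$b \rightarrow c$$, the grounded extension is $$\{a, c\}$$. Adding a single argument $$d$$ with the attack $$d \rightarrow a$$ flips the status of every argument, and the grounded extension becomes $$\{d, b\}$$. The incremental algorithms above aim to recompute only the part of the framework actually affected by such an update, rather than recomputing the extension from scratch.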
Revising Possibilistic Knowledge Bases via Compatibility Degrees

Possibilistic logic is a weighted logic for dealing with incomplete and uncertain information by assigning weights to propositional formulas. A possibilistic knowledge base (KB) is a finite set of such formulas. The problem of revising a possibilistic KB by a possibilistic formula is not new. However, existing approaches are limited in two ways. Firstly, they suffer from the so-called drowning effect. Secondly, they handle certain and uncertain formulas separately, and most handle only certain inputs. In this paper, we propose a unified approach that caters for revision by both certain and uncertain inputs and relieves the drowning effect. The approach is based on a refined inconsistency degree function called compatibility degree, which provides a unifying framework (called cd-revision) for defining specific revision operators for possibilistic KBs. Our definition leads to an algorithm for computing the result of the proposed revision. The revision operators defined in cd-revision possess some desirable properties, including those from classic belief revision and others that are specific to possibilistic revision. We also show that several major revision operators for possibilistic, stratified and prioritised KBs can be embedded in cd-revision.

Yifan Jin, Kewen Wang, Zhe Wang, Zhiqiang Zhuang
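A standard illustration of the drowning effect the paper sets out to relieve (the formulas and weights are illustrative): let $$\Sigma = \{(p, 0.8), (p \rightarrow q, 0.7), (r, 0.4)\}$$ and revise by the fully certain input $$(\lnot q, 1)$$. The inconsistency degree of $$\Sigma \cup \{(\lnot q, 1)\}$$ is $$0.7$$, since the formulas with weight at least $$0.7$$ are already jointly inconsistent while those with weight at least $$0.8$$ are not. Classical possibilistic revision then discards every formula whose weight does not exceed $$0.7$$, so the unrelated formula $$(r, 0.4)$$ is drowned together with $$(p \rightarrow q, 0.7)$$. Revision based on compatibility degrees is meant to keep such innocent formulas.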
Proving Craig and Lyndon Interpolation Using Labelled Sequent Calculi

Interpolation is a fundamental logical property with applications in mathematics, computer science, and artificial intelligence. In this paper, we develop a general method of translating a semantic description of modal logics via Kripke models into a constructive proof of the Lyndon interpolation property (LIP) via labelled sequents. Using this method we demonstrate that all frame conditions representable as Horn formulas imply the LIP and that all 15 logics of the modal cube, as well as the infinite family of transitive Geach logics, enjoy the LIP.

Roman Kuznets
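For reference, the two properties discussed above in their propositional form: Craig interpolation states that if $$\vdash A \rightarrow B$$, then there is an interpolant $$I$$, built only from variables common to $$A$$ and $$B$$, with $$\vdash A \rightarrow I$$ and $$\vdash I \rightarrow B$$. Lyndon interpolation strengthens this by also requiring that every variable occurring positively (respectively negatively) in $$I$$ occurs positively (respectively negatively) in both $$A$$ and $$B$$. The labelled-sequent method above extracts such an interpolant constructively from a derivation.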
Efficient Reasoning for Inconsistent Horn Formulae

Horn formulae are widely used in different settings that include logic programming, answer set programming, description logics, deductive databases, and system verification, among many others. One concrete example is concept subsumption in lightweight description logics, which can be reduced to inference in propositional Horn formulae. Some problems require one to reason with inconsistent Horn formulae. This is the case when providing minimal explanations of inconsistency. This paper proposes efficient algorithms for a number of decision, function and enumeration problems related to inconsistent Horn formulae. Concretely, the paper develops efficient algorithms for finding and enumerating minimal unsatisfiable subsets (MUSes) and minimal correction subsets (MCSes), but also for computing the lean kernel. The paper also shows the practical importance of some of the proposed algorithms.

Joao Marques-Silva, Alexey Ignatiev, Carlos Mencía, Rafael Peñaloza
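A small worked example, not from the paper, of the objects being computed: for the Horn formula $$\mathcal {F} = \{a,\ \lnot a \vee b,\ \lnot b,\ c\}$$, the only MUS is $$\{a, \lnot a \vee b, \lnot b\}$$; the MCSes are $$\{a\}$$, $$\{\lnot a \vee b\}$$ and $$\{\lnot b\}$$, since removing any one of these clauses restores consistency; and the lean kernel is $$\{a, \lnot a \vee b, \lnot b\}$$, because the clause $$c$$ belongs to an autarky and can never participate in a resolution refutation.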
Information Flow Under Budget Constraints

Although first proposed in database theory as properties of functional dependencies between attributes, Armstrong’s axioms capture general principles of information flow by describing properties of dependencies between sets of pieces of information. This paper generalizes Armstrong’s axioms to a setting in which there is a cost associated with information. The proposed logical system captures general principles of dependencies between pieces of information constrained by a given budget.

Pavel Naumov, Jia Tao
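For context, Armstrong's axioms in their classical form, writing $$X \rightarrow Y$$ for "the pieces of information in $$X$$ determine those in $$Y$$": reflexivity, if $$Y \subseteq X$$ then $$X \rightarrow Y$$; augmentation, if $$X \rightarrow Y$$ then $$X \cup Z \rightarrow Y \cup Z$$; and transitivity, if $$X \rightarrow Y$$ and $$Y \rightarrow Z$$ then $$X \rightarrow Z$$. In the budgeted setting of the paper these dependencies are additionally constrained by the cost of the information involved; informally, a dependency is only available if it can be afforded within the given budget (this last reading is our gloss on the abstract, not the paper's formal definition).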
A Tool for Probabilistic Reasoning Based on Logic Programming and First-Order Theories Under Stable Model Semantics

This system description paper describes the software framework PrASP (“Probabilistic Answer Set Programming”). PrASP is both an uncertainty reasoning and machine learning framework and a probabilistic logic programming language based on Answer Set Programming (ASP). Besides serving as a research platform for non-monotonic (inductive) probabilistic logic programming, our framework mainly targets applications in the area of uncertainty stream reasoning. PrASP programs can consist of ASP (AnsProlog) as well as first-order logic formulas (with stable model semantics), annotated with conditional or unconditional probabilities or probability intervals. A number of alternative inference algorithms allow the system to be attuned to different task characteristics (e.g., whether or not independence assumptions can be made).

Matthias Nickles
Pakota: A System for Enforcement in Abstract Argumentation

In this paper we describe Pakota, a system implementation that allows for solving enforcement problems over argumentation frameworks. Via harnessing Boolean satisfiability (SAT) and maximum satisfiability (MaxSAT) solvers, Pakota implements algorithms for extension and status enforcement under various central AF semantics, covering a range of NP-complete—via direct MaxSAT encodings—and $$\mathrm{\Sigma }_{2}^{P}$$-complete—via MaxSAT-based counterexample-guided abstraction refinement—enforcement problems. We overview the algorithmic approaches implemented in Pakota, and describe in detail the system architecture, features, interfaces, and usage of the system. Furthermore, we present an empirical evaluation on the impact of the choice of MaxSAT solvers on the scalability of the system, and also provide benchmark generators for extension and status enforcement.

Andreas Niskanen, Johannes P. Wallner, Matti Järvisalo
Kinetic Consistency and Relevance in Belief Revision

A critical aspect of rational belief revision that has been neglected by the classical AGM framework is what we call the principle of kinetic consistency. Loosely speaking, this principle dictates that the revision policies employed by a rational agent at different belief sets are not independent, but ought to be related in a certain way. We formalise kinetic consistency axiomatically and semantically, and we establish a representation result explicitly connecting the two. We then combine the postulates for kinetic consistency with Parikh’s postulate for relevant change, and add them to the classical AGM postulates for revision; we call this augmented set the extended AGM postulates. We prove the consistency and demonstrate the scope of the extended AGM postulates by showing that a whole new class of concrete revision operators introduced herein, called PD operators, satisfies all extended AGM postulates. PD operators are of interest in their own right as they are natural generalisations of Dalal’s revision operator. We conclude the paper with some examples illustrating the strength of the extended AGM postulates, even for iterated revision scenarios.

Pavlos Peppas, Mary-Anne Williams
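As a reminder of the base operator being generalised, a worked instance of Dalal's revision, which selects the models of the input at minimal Hamming distance from the models of the belief set: if $$K$$ has the single model in which $$p$$ and $$q$$ are both true and we revise by $$\varphi = \lnot (p \wedge q)$$, the models of $$\varphi $$ at minimal distance $$1$$ are those making exactly one of $$p, q$$ true, so $$K * \varphi $$ is equivalent to $$p \leftrightarrow \lnot q$$. The PD operators of the paper are presented as natural generalisations of this operator, additionally coordinated across belief sets by kinetic consistency.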
DRAT Proofs for XOR Reasoning

Unsatisfiability proofs in the DRAT format became the de facto standard to increase the reliability of contemporary SAT solvers. We consider the problem of generating proofs for the XOR reasoning component in SAT solvers and propose two methods: direct translation transforms every XOR constraint addition inference into a DRAT proof, whereas T-translation avoids the exponential blow-up in direct translations by using fresh variables. T-translation produces DRAT proofs from Gaussian elimination records that are polynomial in the size of the input CNF formula. Experiments show that a combination of both approaches with a simple prediction method outperforms the BDD-based method.

Tobias Philipp, Adrián Rebola-Pardo
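The exponential blow-up mentioned above stems from the standard CNF encoding of XOR constraints. For example, $$x_1 \oplus x_2 \oplus x_3 = 1$$ is equivalent to the four clauses $$(x_1 \vee x_2 \vee x_3)$$, $$(x_1 \vee \lnot x_2 \vee \lnot x_3)$$, $$(\lnot x_1 \vee x_2 \vee \lnot x_3)$$ and $$(\lnot x_1 \vee \lnot x_2 \vee x_3)$$, and in general an XOR over $$n$$ variables requires $$2^{n-1}$$ clauses. Chaining the XOR through fresh variables, e.g., introducing $$t$$ with $$x_1 \oplus x_2 = t$$ and $$t \oplus x_3 = 1$$, keeps every piece of constant size, which is the kind of trick the T-translation exploits to produce polynomial-size DRAT proofs.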
Understanding the Abstract Dialectical Framework

Among the most general structures extending the framework by Dung are the abstract dialectical frameworks (ADFs). They come equipped with various types of semantics, with the most prominent one, the labelling-based semantics, having been analyzed in the context of computational complexity, instantiations and software support. This makes abstract dialectical frameworks valuable tools for argumentation. However, fewer results are available concerning the relation between ADFs and other argumentation frameworks. In this paper we address this issue by introducing a number of translations from various formalisms into ADFs. The results of our study show the similarities and differences between them, thus promoting the use and understanding of ADFs. Moreover, our analysis also proves their capability to model many of the existing frameworks, including those that go beyond the attack relation. Finally, the translations allow other structures to benefit from the research on ADFs in general and from the existing software in particular.

Sylwia Polberg
Extensional Semantics for Higher-Order Logic Programs with Negation

We develop an extensional semantics for higher-order logic programs with negation, generalizing the technique that was introduced in [2, 3] for positive higher-order programs. In this way we provide an alternative extensional semantics for higher-order logic programs with negation to the one proposed in [6]. As an immediate useful consequence of our developments, we define for the language we consider the notions of stratification and local stratification, which generalize the familiar notions from classical logic programming. We demonstrate that for stratified and locally stratified higher-order logic programs, the proposed semantics never assigns the unknown truth value.

Panos Rondogiannis, Ioanna Symeonidou
Reactive Policies with Planning for Action Languages

Action languages are an important family of formalisms for representing action domains in a declarative manner and reasoning about them. For this reason, the behavior of an agent in an environment may be governed by policies that take such action domain descriptions into account. In this paper, we describe a formal semantics for policies that express a reactive behavior for an agent, and we connect our framework with the representation power of action languages. In this framework, we mitigate the problem of large state spaces by employing the notion of indistinguishability, and we combine components that are efficient for describing reactivity, such as target establishment and (online) planning. Our representation allows one to analyze the flow of executing a given reactive policy, and lays the foundations for verifying properties of policies. Additionally, the flexibility of the representation opens a range of possibilities for designing behaviors.

Zeynep G. Saribatur, Thomas Eiter
Correct Grounded Reasoning with Presumptive Arguments

We address the semantics and normative questions for reasoning with presumptive arguments: How are presumptive arguments grounded in interpretations; and when are they evaluated as correct? For deductive and uncertain reasoning, classical logic and probability theory provide canonical answers to these questions. Staying formally close to these, we propose case models and their preferences as formal semantics for the interpretation of presumptive arguments. Arguments are evaluated as presumptively valid when they make a case that is maximally preferred. By qualitative and quantitative representation results, we show formal relations between deductive, uncertain and presumptive reasoning. In this way, the work is a step to the connection of logical and probabilistic approaches in AI.

Bart Verheij
Characterizability in Horn Belief Revision

Delgrande and Peppas characterized Horn belief revision operators obtained from Horn compliant faithful rankings by minimization, showing that a Horn belief revision operator belongs to this class if and only if it satisfies the Horn AGM postulates and the acyclicity postulate scheme. The acyclicity scheme has a postulate for every $$n\ge 3$$ expressing the non-existence of a certain cyclic substructure. We show that this class of Horn belief revision operators cannot be characterized by finitely many postulates. Thus the use of infinitely many postulates in the result of Delgrande and Peppas is unavoidable. The proof uses our finite model theoretic approach to characterizability, considering universal monadic second-order logic with quantifiers over closed sets, and using predicates expressing minimality. We also give another non-characterizability result and add some remarks on strict Horn compliance.

Jon Yaggie, György Turán

Short Papers

Frontmatter
Formalizing Goal Serializability for Evaluation of Planning Features

Evaluation of the properties of various planning techniques, such as completeness and termination, plays an important role in choosing an appropriate planning technique for a particular planning problem. In this paper, we use the existing formal specification, in Transaction Logic ($$\mathcal {TR}$$), of two well-known and classic state-space planning techniques, forward state-space planning and goal-stack planning, to study their completeness. Our study shows that using $$\mathcal {TR}$$ we can formally specify the serializability of planning problems and prove the completeness of $$\textit{STRIPS}$$ planning for planning problems with serializable goals.

Reza Basseda, Michael Kifer
Rule-based Stream Reasoning for Intelligent Administration of Content-Centric Networks

Content-Centric Networking (CCN) research addresses the mismatch between the modern usage of the Internet and its outdated architecture. Importantly, CCN routers use various caching strategies to locally cache content frequently requested by end users. However, it is unclear which content should be stored and when it should be replaced. In this work, we employ novel techniques towards the intelligent administration of CCN routers. Our approach allows for autonomous switching between existing strategies in response to changing content request patterns, using the rule-based stream reasoning framework LARS, which extends Answer Set Programming to streams. The resulting possibility of flexible router configuration at runtime allows for faster experimentation and may result in significant performance gains, as shown in our evaluation.

Harald Beck, Bruno Bierbaumer, Minh Dao-Tran, Thomas Eiter, Hermann Hellwagner, Konstantin Schekotihin
Inconsistency Management in Reactive Multi-context Systems

We address the problem of global inconsistency in reactive multi-context systems (rMCSs), a framework for reactive reasoning in the presence of heterogeneous knowledge sources that can deal with continuous input streams. Their semantics is given in terms of equilibria streams. The occurrence of inconsistencies, where rMCSs fail to have an equilibria stream, can render the entire system useless. We discuss various methods for handling this problem, following different strategies such as repairing the rMCS, or even relaxing the notion of equilibria stream so that it can go through inconsistent states.

Gerhard Brewka, Stefan Ellmauthaler, Ricardo Gonçalves, Matthias Knorr, João Leite, Jörg Pührer
Iteratively-Supported Formulas and Strongly Supported Models for Kleene Answer Set Programs
(Extended Abstract)

In this extended abstract, we discuss the use of iteratively-supported formulas (ISFs) as a basis for computing strongly-supported models for Kleene Answer Set Programs (ASP$$^{K}$$). ASP$$^{K}$$ programs have a syntax identical to classical ASP programs. The semantics of ASP$$^{K}$$ programs is based on the use of Kleene three-valued logic and strongly-supported models. For normal ASP$$^{K}$$ programs, their strongly supported models are identical to classical answer sets using stable model semantics. For disjunctive ASP$$^{K}$$ programs, the semantics weakens the minimality assumption resulting in a classical interpretation for disjunction. We use ISFs to characterize strongly-supported models and show that they are polynomially bounded.

Patrick Doherty, Jonas Kvarnström, Andrzej Szałas
Forgetting in ASP: The Forgotten Properties

Many approaches for forgetting in Answer Set Programming (ASP) have been proposed in recent years, in the form of specific operators, or classes of operators, following different principles and obeying different properties. A recently published comprehensive overview of existing operators and properties provided a uniform picture of the landscape, including many novel (even surprising) results on relations between properties and operators. Yet, this overview largely missed an additional set of properties for forgetting proposed by Wong, and this paper aims to close that gap. It turns out that, while some of these properties are closely related to the properties previously studied, four of them are distinct, providing novel results and insights and further strengthening the established relations between existing operators.

Ricardo Gonçalves, Matthias Knorr, João Leite
On Hierarchical Task Networks

In planning based on hierarchical task networks (HTN), plans are generated by refining high-level actions (‘compound tasks’) into lower-level actions, until primitive actions are obtained that can be sent to execution. While a primitive action is defined by its precondition and effects, a high-level action is defined by zero, one or several methods: sets of (high-level or primitive) actions decomposing it together with a constraint. We give a semantics of HTNs in terms of dynamic logic with program inclusion. We propose postulates guaranteeing soundness and completeness of action refinement. We also show that hybrid planning can be analysed in the same dynamic logic framework.

Andreas Herzig, Laurent Perrussel, Zhanhao Xiao
Refinement of Intentions

According to Bratman, future-directed intentions are high-level plans. We view such plans as high-level actions that can typically not be executed directly: they have to be progressively refined until executable basic actions are obtained. Higher- and lower-level actions are linked by the means-end relation, alias instrumentality relation. In this paper we extend Shoham’s database perspective of Bratman’s theory by the notions of refinement and instrumentality.

Andreas Herzig, Laurent Perrussel, Zhanhao Xiao, Dongmo Zhang
GenB: A General Solver for AGM Revision

We describe a general tool for solving belief revision problems with a range of different operators. Our tool allows a user to flexibly specify a total pre-order over states, using simple selection boxes in a graphical user interface. In this manner, we are able to calculate the result of any AGM revision operator. The user is also able to specify so-called trust partitions to calculate the result of trust-sensitive revision. The overall goal is to provide users with a simple tool that can be used in applications involving AGM-style revision. While the tool can be demonstrated and tested as a standalone application with a fixed user interface, what we have actually developed is a set of libraries and functions that can flexibly be incorporated into other systems. It is anticipated that this tool will be useful for experimentation, education, and prototyping to solve problems in formal reasoning.

Aaron Hunter, Eric Tsang
A Two-Phase Dialogue Game for Skeptical Preferred Semantics

In this paper we propose a labelling based dialogue game for determining whether a single argument within a Dung argumentation framework is skeptically preferred. Our game consists of two phases, and determines the membership of a single argument within the extension, assuming optimal play by dialogue participants. In the first phase, one player attempts to advance arguments to construct an extension not containing the argument under consideration, while the second phase verifies that the extension is indeed a preferred one. Correctness within this basic game requires perfect play by both players, and we therefore also introduce an overarching game to overcome this limitation.

Zohreh Shams, Nir Oren
Measuring Inconsistency in Answer Set Programs

We address the issue of quantitatively assessing the severity of inconsistencies in logic programs under the answer set semantics. While measuring inconsistency in classical logics has been investigated for some time now, taking the non-monotonicity of answer set semantics into account brings new challenges that have to be addressed by reasonable accounts of inconsistency measures. We investigate the behavior of inconsistency in logic programs by revisiting existing rationality postulates for inconsistency measurement and developing novel ones taking non-monotonicity into account. Further, we develop new measures for this setting and investigate their properties.

Markus Ulbricht, Matthias Thimm, Gerhard Brewka
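A tiny example, not from the paper, of why the non-monotonic setting needs its own postulates: the program consisting of the single rule $$p \leftarrow \mathit{not}\ p$$ has no answer set and is therefore inconsistent, yet adding the fact $$p$$ restores consistency, whereas under classical semantics adding formulas can never repair an inconsistent set. Reasonable inconsistency measures for ASP must therefore behave sensibly under such non-monotonic repairs, which is the kind of behaviour the revisited and newly proposed postulates are designed to capture.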
Backmatter
Metadata
Title
Logics in Artificial Intelligence
edited by
Loizos Michael
Antonis Kakas
Copyright Year
2016
Electronic ISBN
978-3-319-48758-8
Print ISBN
978-3-319-48757-1
DOI
https://doi.org/10.1007/978-3-319-48758-8