
About this book

This book constitutes the refereed proceedings of the 14th International Conference on Logic Programming and Nonmonotonic Reasoning, LPNMR 2017, held in Espoo, Finland, in July 2017.
The 16 full papers and 11 short papers presented in this volume were carefully reviewed and selected from 47 submissions. The book also contains 4 invited talks.
The papers were organized in topical sections named: nonmonotonic reasoning; answer set programming; LPNMR systems; and LPNMR applications.

Table of contents

Frontmatter

Invited Talks

Frontmatter

The Design of the Seventh Answer Set Programming Competition

Answer Set Programming (ASP) is a prominent knowledge representation language with roots in logic programming and non-monotonic reasoning. Biennial competitions are organized in order to furnish challenging benchmark collections and assess the advancement of the state of the art in ASP solving. In this paper, we report on the design of the Seventh ASP Competition, jointly organized by the University of Calabria (Italy), the University of Genova (Italy), and the University of Potsdam (Germany), in affiliation with the 14th International Conference on Logic Programming and Non-Monotonic Reasoning (LPNMR 2017). A novel feature of this edition is the re-introduction of a Model&Solve track, complementing the usual System track with problem domains where participants need to provide dedicated encodings and solving means.

Martin Gebser, Marco Maratea, Francesco Ricca

A Bird’s-Eye View of Forgetting in Answer-Set Programming

Forgetting is an operation that allows the removal, from a knowledge base, of middle variables no longer deemed relevant, while preserving all relationships (direct and indirect) between the remaining variables. In the context of Answer-Set Programming, many different approaches to forgetting have been proposed, following different intuitions and obeying different sets of properties. This talk will present a bird’s-eye view of the complex landscape composed of the properties and operators of forgetting defined over the years in the context of Answer-Set Programming, zooming in on recent findings triggered by the formulation of so-called strong persistence, a property based on the strong equivalence between an answer-set program and the result of forgetting modulo the forgotten atoms, which seems to best encode the requirements of the forgetting operation.

João Leite

Answer Set Programming and Its Applications in Planning and Multi-agent Systems

The paper presents some applications of answer set programming in planning and multi-agent systems, and highlights the benefits of answer-set-programming-based techniques in these applications. It also describes a class of multi-agent planning problems that is challenging for answer set programming.

Tran Cao Son

From Logic Programming and Non-monotonic Reasoning to Computational Argumentation and Beyond

Argumentation has gained popularity in AI in recent years as a means to support several activities and forms of reasoning. This talk will trace back the logic programming and non-monotonic reasoning origins of two well-known argumentation formalisms in AI, namely abstract argumentation and assumption-based argumentation. Finally, the talk will discuss recent developments in AI that make use of computational argumentation, in particular to support collaborative decision making.

Francesca Toni

Nonmonotonic Reasoning

Frontmatter

Modular Construction of Minimal Models

We show that minimal models of positive propositional theories can be decomposed based on the structure of the dependency graph of the theories. This observation can be useful for many applications involving computation with minimal models. As an example of such benefits, we introduce new algorithms for minimal model finding and checking that are based on model decomposition. The algorithms’ worst-case time complexity is exponential in the size s of the largest connected component of the dependency graph, but their actual cost depends on the size of the largest source actually encountered, which can be far smaller than s, and on the class of theories to which sources belong. Indeed, if all sources reduce to an HCF or HEF theory, the algorithms are polynomial in the size of the theory.

Rachel Ben-Eliyahu-Zohary, Fabrizio Angiulli, Fabio Fassetti, Luigi Palopoli
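The paper's decomposition algorithm is not reproduced here, but the underlying notion can be illustrated with a brute-force sketch: represent a positive theory as a set of positive clauses (an assumed encoding, chosen for illustration), enumerate the interpretations satisfying every clause, and keep the subset-minimal ones.

```python
from itertools import combinations

def models(atoms, clauses):
    # all interpretations (subsets of atoms) satisfying every positive clause
    found = []
    for r in range(len(atoms) + 1):
        for subset in combinations(sorted(atoms), r):
            candidate = set(subset)
            if all(candidate & clause for clause in clauses):
                found.append(candidate)
    return found

def minimal_models(atoms, clauses):
    # keep only the subset-minimal models
    ms = models(atoms, clauses)
    return [m for m in ms if not any(other < m for other in ms)]

# theory {a or b, b or c}: minimal models are {b} and {a, c}
print(minimal_models({"a", "b", "c"}, [{"a", "b"}, {"b", "c"}]))
```

This enumeration is exponential in the total number of atoms; the point of the paper is that decomposing along the dependency graph bounds the exponent by the largest component instead.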

A Hasse Diagram for Weighted Sceptical Semantics with a Unique-Status Grounded Semantics

We provide an initial study of the Hasse diagram that represents the partial order (w.r.t. set inclusion) among weighted sceptical semantics in Argumentation: grounded, ideal, and eager. Since our framework is based on a parametric structure of weights, we can directly compare weighted and classical approaches. We define a unique-status weighted grounded semantics, and we prove that the lattice of strongly-admissible extensions becomes a semi-lattice.

Stefano Bistarelli, Francesco Santini

Foundations for a Probabilistic Event Calculus

We present PEC, an Event Calculus (EC) style action language for reasoning about probabilistic causal and narrative information. It has an action language style syntax similar to that of the EC variant Modular-E. Its semantics is given in terms of possible worlds which constitute possible evolutions of the domain, and builds on that of Epistemic Functional EC (EFEC). We also describe an ASP implementation of PEC and show the sense in which it is sound and complete.

Fabio Aurelio D’Asaro, Antonis Bikakis, Luke Dickens, Rob Miller

Contextual Reasoning: Usually Birds Can Abductively Fly

We present a new logic programming approach to contextual reasoning, based on the Weak Completion Semantics (WCS), which has been successfully applied in the past to adequately model various human reasoning tasks. One of the properties of WCS is the open world assumption with respect to undefined atoms, a characteristic that distinguishes it from other common Logic Programming semantics and that seems suitable when modeling human reasoning. Notwithstanding, we have noticed that the famous Tweety default reasoning example, originally introduced by Reiter, cannot be modeled straightforwardly under WCS. Hence, to address this issue, and taking Pereira and Pinto’s inspection points as inspiration, we develop a notion of contextual reasoning for which we introduce contextual logic programs. We reconsider the formal properties of WCS with respect to these programs and verify whether they still hold. Finally, we set forth contextual abduction and show that not only the original Tweety example but also more sophisticated examples, where context plays an important role, can be nicely modeled within the new approach.

Emmanuelle-Anna Dietz Saldanha, Steffen Hölldobler, Luís Moniz Pereira

Including Quantification in Defeasible Reasoning for the Description Logic $$\mathcal {EL}_{\bot }$$

Defeasible Description Logics (DDLs) can state defeasible concept inclusions and often use rational closure according to the KLM postulates for reasoning. If, in DDLs with quantification, a defeasible subsumption relationship holds between concepts, it can also hold if these concepts appear nested in existential restrictions. Earlier reasoning algorithms did not detect this kind of relationship. We devise a new form of canonical models that extend classical ones for $$\mathcal {EL}_{\bot }$$ by elements that satisfy increasing amounts of defeasible knowledge, and show that reasoning w.r.t. these models yields the missing rational entailments.

Maximilian Pensel, Anni-Yasmin Turhan

A Monotonic View on Reflexive Autoepistemic Reasoning

This paper introduces a novel monotonic modal logic, able to characterise reflexive autoepistemic reasoning of the nonmonotonic variant of modal logic $$\mathbf {SW5}$$: we add a second, new modal operator to the original language of $$\mathbf {SW5}$$, and show that the resulting formalism is strong enough to capture the minimal model notion underlying some major forms of nonmonotonic logic, among which are autoepistemic logic, default logic, and nonmonotonic logic programming. The paper ends with a discussion of a general strategy that naturally embeds several nonmonotonic logics of similar kinds.

Ezgi Iraz Su

Minimal Inference Problem Over Finite Domains: The Landscape of Complexity

The complexity of the general inference problem for propositional circumscription in Boolean logic (or equivalently over the two-element domain) has recently been classified. This paper generalizes the problem to arbitrary finite domains. The problem we study here is parameterized by a set of relations (a constraint language), from which we are allowed to build a knowledge base, and a linear order on the domain, which is used to compare models. We use the algebraic approach, originally developed to understand the complexity of the constraint satisfaction problem, to give the first non-trivial dichotomies and tractability results for the minimal inference problem over finite domains.

Michał Wrona

Answer Set Programming

Frontmatter

Gelfond-Zhang Aggregates as Propositional Formulas

We show that any ASP aggregate interpreted under Gelfond and Zhang’s (GZ) semantics can be replaced (under strong equivalence) by a propositional formula. Restricted to the original GZ syntax, the resulting formula is reducible to a disjunction of conjunctions of literals but the formulation is still applicable even when the syntax is extended to allow for arbitrary formulas (including nested aggregates) in the condition. Once GZ-aggregates are represented as formulas, we establish a formal comparison (in terms of the logic of Here-and-There) to Ferraris’ (F) aggregates, which are defined by a different formula translation involving nested implications. In particular, we prove that if we replace an F-aggregate by a GZ-aggregate in a rule head, we do not lose answer sets (although more can be gained). This extends the previously known result that the opposite happens in rule bodies, i.e., replacing a GZ-aggregate by an F-aggregate in the body may yield more answer sets. Finally, we characterise a class of aggregates for which GZ- and F-semantics coincide.

Pedro Cabalar, Jorge Fandinno, Torsten Schaub, Sebastian Schellhorn

Answer Set Solving with Bounded Treewidth Revisited

Parameterized algorithms are a way to solve hard problems more efficiently, given that a specific parameter of the input is small. In this paper, we apply this idea to the field of answer set programming (ASP). To this end, we propose two kinds of graph representations of programs to exploit their treewidth as a parameter. Treewidth roughly measures the extent to which the internal structure of a program resembles a tree. Our main contribution is the design of parameterized dynamic programming algorithms, which run in linear time if the treewidth and weights of the given program are bounded. Compared to previous work, our algorithms handle the full syntax of ASP. Finally, we report on an empirical evaluation that shows good runtime behaviour for benchmark instances of low treewidth, especially for counting answer sets.

Johannes K. Fichte, Markus Hecher, Michael Morak, Stefan Woltran

Vicious Circle Principle and Formation of Sets in ASP Based Languages

The paper continues the investigation of Poincaré and Russell’s Vicious Circle Principle (VCP) in the context of the design of logic programming languages with sets. We expand the previously introduced language $$\mathcal {A}log$$ with aggregates by allowing infinite sets and several additional set-related constructs useful for knowledge representation and teaching. In addition, we propose an alternative formalization of the original VCP and incorporate it into the semantics of the new language, $$\mathcal {S}log^+$$, which allows more liberal construction of sets and their use in programming rules. We show that, for programs without disjunction and infinite sets, the formal semantics of aggregates in $$\mathcal {S}log^+$$ coincides with that of several other known languages. Their intuitive and formal semantics, however, are based on quite different ideas and seem to be more involved than those of $$\mathcal {S}log^+$$.

Michael Gelfond, Yuanlin Zhang

Answer Set Programs with Queries over Subprograms

Answer-Set Programming (ASP) is a declarative programming paradigm. In this paper we discuss two related restrictions and present a novel modeling technique to overcome them: (1) Meta-reasoning about the collection of answer sets of a program is in general only possible by external postprocessing, but not within the program. This prohibits the direct continuation of reasoning based on the answer to a query over a (sub)program’s answer sets. (2) The saturation programming technique exploits the minimality criterion for answer sets of a disjunctive ASP program to solve co-$$\mathsf {NP}$$-hard problems, which typically involve checking whether a property holds for all objects in a certain domain. However, the technique is advanced and not easily applicable by average ASP users; moreover, the use of default-negation within saturation encodings is limited. In this paper, we present an approach that allows for brave and cautious query answering over normal subprograms within a disjunctive program, in order to address restriction (1). The query answer is represented by a dedicated atom within each answer set of the overall program, which also paves the way for a more intuitive alternative to saturation encodings and allows using default-negation within such encodings, addressing restriction (2).

Christoph Redl

Explaining Inconsistency in Answer Set Programs and Extensions

Answer Set Programming (ASP) is a well-known problem solving approach based on nonmonotonic logic programs. hex-programs extend ASP with external atoms for accessing arbitrary external information. In this paper we study inconsistent ASP- and hex-programs, i.e., programs which do not possess answer sets, and introduce a novel notion of inconsistency reasons for characterizing their inconsistency depending on the input facts. This problem is mainly motivated by upcoming applications for optimizing the evaluation algorithms for hex-programs. Further applications can be found in ASP debugging. We then analyze the complexity of reasoning problems related to the computation of such inconsistency reasons. Finally, we present a meta-programming encoding in disjunctive ASP which computes inconsistency reasons for given normal logic programs, and a basic procedural algorithm for computing inconsistency reasons for general hex-programs.

Christoph Redl

Blending Lazy-Grounding and CDNL Search for Answer-Set Solving

Efficient state-of-the-art answer-set solvers are two-phased: they first ground the input program and then apply search based on conflict-driven nogood learning (CDNL). The latter provides superior search performance, but the former causes exponential memory requirements for many ASP programs. Lazy-grounding avoids this grounding bottleneck but exhibits poor search performance. The approach here aims for the best of both worlds: grounding and solving are interleaved, but there is a solving component distinct from the grounding component. The solving component works on (ground) nogoods, employs conflict-driven first-UIP learning, and enables heuristics. Guessing is on atoms that represent applicable rules, atoms may take one of the truth values true, false, or must-be-true, and nogoods have a distinguished head literal. The lazy-grounding component is loosely coupled to the solver and may yield more ground instances than necessary, which avoids re-grounding whenever the solver moves from one search branch to another. The approach is implemented in the new ASP solver Alpha.

Antonius Weinzierl

Answer Set Programming with Graded Modality

Answer set programming with graded modality (ASP$$^{\text {GM}}$$), introduced in [6], provides an intuitive way for expressing the modal concepts “at least as many as …” and “at most as many as …”, as well as defaults. This paper studies the semantics of ASP$$^{\text {GM}}$$ and investigates its connection to the most recent language of epistemic specifications (ASP$$^{\text {KM}}$$) proposed in [3] and the answer set programming with epistemic negation (ASP$$^{\text {NOT}}$$) presented in [5]. In particular, we define a new approach to evaluating graded modalities in ASP$$^{\text {GM}}$$ such that ASP$$^{\text {GM}}$$ is compatible with ASP$$^{\text {KM}}$$ as well as ASP$$^{\text {NOT}}$$ at the semantic level.

Zhizheng Zhang

LPNMR Systems

Frontmatter

The ASP System DLV2

We introduce DLV2, a new Answer Set Programming (ASP) system. DLV2 combines I-DLV, a fully-compliant ASP-Core-2 grounder, with the well-assessed solver WASP. Input programs may be enriched by annotations and directives that customize heuristics of the system and extend its solving capabilities. An empirical analysis conducted on benchmarks from past ASP competitions shows that DLV2 outperforms the old DLV system and is close to the state-of-the-art ASP system clingo.

Mario Alviano, Francesco Calimeri, Carmine Dodaro, Davide Fuscà, Nicola Leone, Simona Perri, Francesco Ricca, Pierfrancesco Veltri, Jessica Zangari

lp2normal — A Normalization Tool for Extended Logic Programs

Answer set programming (ASP) features a rich rule-based modeling language for encoding search problems. While normal rules form the simplest rule type in the language, various forms of extended rules have been introduced in order to ease modeling of complex conditions and constraints. Normalization means replacing such extended rules with identically functioning sets of normal rules. In this system description, we present lp2normal, a state-of-the-art normalizer that acts as a filter on ground logic programs produced by grounders such as gringo. It provides options to translate away choice rules, cardinality rules, and weight rules, and to rewrite optimization statements using comparable techniques. The produced logic programs are, in particular, suitable inputs to tools that lack support for extended rules. We give an overview of the normalization techniques currently supported by the tool and summarize its features. Moreover, we discuss typical application scenarios of normalization, such as implementing the search for answer sets using a back-end solver without direct support for cardinality or pseudo-Boolean constraints.

Jori Bomanson
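To illustrate what normalization must preserve (a hypothetical semantic sketch in Python, not lp2normal's actual translation), a cardinality condition such as "at least 2 of {a, b, c}" can be unfolded into a disjunction over all 2-element subsets and checked for equivalence by enumerating all truth assignments:

```python
from itertools import combinations, product

atoms = ["a", "b", "c"]
bound = 2  # cardinality lower bound, as in a rule body "2 <= #count{a; b; c}"

def cardinality(vals):
    # the extended-rule condition: at least `bound` of the atoms are true
    return sum(vals[x] for x in atoms) >= bound

def normalized(vals):
    # normal-rule style unfolding: one conjunctive case per bound-sized subset
    return any(all(vals[x] for x in subset)
               for subset in combinations(atoms, bound))

# exhaustive check over all 2^3 truth assignments
assert all(cardinality(v) == normalized(v)
           for bits in product([False, True], repeat=len(atoms))
           for v in [dict(zip(atoms, bits))])
print("cardinality condition and its unfolding agree on all assignments")
```

Actual normalization techniques avoid the exponential blow-up of this naive subset unfolding by using compact, structure-sharing translations.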

Harvey: A System for Random Testing in ASP

We present Harvey, a tool for random testing in answer-set programming (ASP) that allows constraints to be incorporated to guide the generation of test inputs. Due to the declarative nature of ASP, it can be argued that there is less need for testing than in conventional software development. However, practice shows that testing is still needed when more sophisticated methods are not viable. Random testing is recognised as a simple yet effective method in this regard. The approach described in this paper allows for random testing of answer-set programs in which both test-input generation and determining test verdicts are facilitated using ASP itself: the test-input space is defined using ASP rules, and uniformity of test-input selection is achieved by using XOR sampling. This makes it possible to go beyond simple random testing by adding further ASP constraints in the process.

Alexander Greßler, Johannes Oetsch, Hans Tompits

NoHR: Integrating XSB Prolog with the OWL 2 Profiles and Beyond

We present the latest, substantially improved version of NoHR, a reasoner designed to answer queries over hybrid theories composed of an OWL ontology in Description Logics and a set of non-monotonic rules in Logic Programming. Whereas the need to combine the distinctive features of these two knowledge representation and reasoning approaches stems from real-world applications, their integration is nevertheless theoretically challenging due to their substantial semantic differences. NoHR has been developed as a plug-in for the widely used ontology editor Protégé - in fact, the first hybrid reasoner of its kind for Protégé, building on a combination of reasoners dedicated to OWL and rules - but it is also available as a library, allowing for its integration within other environments and applications. Compared to previous versions of NoHR, this is the first that supports all polynomial OWL profiles, and even beyond, allowing for its usage with real-world ontologies that do not fit within a single profile. In addition, NoHR now has enhanced integration with its rule engine, which provides support for a vast number of standard built-in Prolog predicates that considerably extend its usability.

Carlos Lopes, Matthias Knorr, João Leite

ArgueApply: A Mobile App for Argumentation

Formal models developed in the field of argumentation allow for analysing and evaluating problems that have previously been studied by philosophers on an informal level only. Importantly, they also give rise to the development of computational tools for argumentation. In this paper we report on ArgueApply, a mobile app for argumentation that is based on the Grappa framework, an extension of, e.g., abstract argumentation in the sense of Dung. With ArgueApply, users can engage in online discussions and evaluate their semantics. Each of the resulting interpretations can be seen as a coherent view on a discussion in which some of the discussion’s statements are accepted and others rejected. Being a mobile tool, ArgueApply is intended to be more accessible than existing systems for computing argumentation semantics, allowing, e.g., for spontaneous analysis of an ongoing discussion or collective preparation for an important debate. While having a practical system for these applications is the final goal of our work, an immediate objective is using the system to explore which types of Grappa frameworks, under which semantics, are best suited for such applications.

Jörg Pührer

LPNMR Applications

Frontmatter

catnap: Generating Test Suites of Constrained Combinatorial Testing with Answer Set Programming

We develop an approach to test suite generation for Constrained Combinatorial Testing (CCT), one of the most widely studied combinatorial testing techniques, based on Answer Set Programming (ASP). The resulting catnap system accepts a CCT instance in fact format and combines it with a first-order encoding for generating test suites, which can subsequently be solved by any off-the-shelf ASP system. We evaluate the effectiveness of our approach by empirically contrasting it to the best known bounds obtained via dedicated implementations.

Mutsunori Banbara, Katsumi Inoue, Hiromasa Kaneyuki, Tenda Okimoto, Torsten Schaub, Takehide Soh, Naoyuki Tamura

Automatic Synthesis of Optimal-Size Concentrators by Answer Set Programming

A concentrator is a circuit with N inputs and $$M\le N$$ outputs that can route any given subset of $$K\le M$$ valid inputs to K of its M outputs. Concentrator circuits are important building blocks of many parallel algorithms. The design of optimal concentrator circuits is however a challenging task that has already been considered in many research papers. In this paper, we show how answer set programming can be used to automatically generate concentrator circuits of provably optimal size.

Marc Dahlem, Tripti Jain, Klaus Schneider, Michael Gillmann
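The concentrator property itself can be checked by brute force for tiny instances. The sketch below is a naive Python checker (an illustration of the definition in the abstract, unrelated to the paper's ASP-based synthesis): it tests whether every K-subset of inputs can be routed to distinct outputs through a given set of edges.

```python
from itertools import combinations

def can_route(outputs, edges, subset):
    # try to match every input in `subset` to a distinct output (backtracking)
    def match(remaining, used):
        if not remaining:
            return True
        first, rest = remaining[0], remaining[1:]
        return any(match(rest, used | {out})
                   for out in outputs
                   if (first, out) in edges and out not in used)
    return match(list(subset), set())

def is_concentrator(inputs, outputs, edges, k):
    # the defining property: every k-subset of inputs is routable
    return all(can_route(outputs, edges, s)
               for s in combinations(inputs, k))

inputs, outputs = [0, 1, 2, 3], ["x", "y"]
full = {(i, o) for i in inputs for o in outputs}    # complete bipartite graph
sparse = {(0, "x"), (1, "y")}                       # inputs 2 and 3 cut off
print(is_concentrator(inputs, outputs, full, 2))    # True
print(is_concentrator(inputs, outputs, sparse, 2))  # False
```

A checker like this only validates a candidate circuit; finding a size-optimal one, as the paper does, is the hard search problem delegated to ASP.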

plasp 3: Towards Effective ASP Planning

We describe the new version of the PDDL-to-ASP translator plasp. First, it widens the range of accepted PDDL features. Second, it contains novel planning encodings, some inspired by SAT planning and others exploiting ASP features such as well-foundedness. All of them are designed for handling multi-valued fluents in order to capture both PDDL as well as SAS planning formats. Third, enabled by multi-shot ASP solving, it offers advanced planning algorithms also borrowed from SAT planning. As a result, plasp provides us with an ASP-based framework for studying a variety of planning techniques in a uniform setting. Finally, we demonstrate in an empirical analysis that these techniques have a significant impact on the performance of ASP planning.

Yannis Dimopoulos, Martin Gebser, Patrick Lühne, Javier Romero, Torsten Schaub

Nurse Scheduling via Answer Set Programming

The Nurse Scheduling Problem (NSP) is a combinatorial problem that consists of assigning nurses to shifts according to given practical constraints. Over the years, several approaches have been proposed to solve different variants of the NSP. In this paper, an ASP encoding for one of these variants is presented, whose requirements have been provided by an Italian hospital. We also design a second encoding for the computation of “optimal” schedules. Finally, an experimental analysis has been conducted on real data provided by the Italian hospital using both encodings. Results are very positive: the state-of-the-art ASP system clingo is able to compute one-year schedules in a few minutes, and it scales well even when more than one hundred nurses are considered.

Carmine Dodaro, Marco Maratea

Hybrid Metabolic Network Completion

Metabolic networks play a crucial role in biology since they capture all chemical reactions in an organism. While there are networks of high quality for many model organisms, networks for less studied organisms are often of poor quality and suffer from incompleteness. To this end, we introduced in previous work an ASP-based approach to metabolic network completion. Although this qualitative approach allows for restoring moderately degraded networks, it fails to restore highly degraded ones, because it ignores quantitative constraints capturing reaction rates. To address this problem, we propose a hybrid approach to metabolic network completion that integrates our qualitative ASP approach with quantitative means for capturing reaction rates. We begin by formally reconciling existing stoichiometric and topological approaches to network completion in a unified formalism. With it, we develop a hybrid ASP encoding and rely upon the theory reasoning capacities of the ASP system clingo for solving the resulting logic program with linear constraints over the reals. We empirically evaluate our approach by means of the metabolic network of Escherichia coli. Our analysis shows that our novel approach yields results greatly superior to those obtainable from purely qualitative or quantitative approaches.

Clémence Frioux, Torsten Schaub, Sebastian Schellhorn, Anne Siegel, Philipp Wanko

Action Language Hybrid AL

This paper introduces an extension of the action language $$\mathcal {AL}$$ to Hybrid $$\mathcal {AL}$$. A program in Hybrid $$\mathcal {AL}$$ specifies both a transition diagram and associated computations for observing fluents and executing actions. The semantics of $$\mathcal {AL}$$ is defined in terms of Answer Set Programming (ASP). Similarly, the semantics of Hybrid $$\mathcal {AL}$$ is defined using Hybrid ASP which is an extension of ASP that allows rules to control sequential execution of arbitrary algorithms.

Alex Brik, Jeffrey Remmel

moviola: Interpreting Dynamic Logic Programs via Multi-shot Answer Set Programming

The causal rejection-based update semantics assign meaning to a Dynamic Logic Program (DLP), a sequence of logic programs each updating the preceding ones. Although there are translations of DLPs under these update semantics to logic programs of Answer Set Programming (ASP), they have not led to efficient and easy-to-use implementations. This is mainly because such translations aim at offline solving, in the sense that the resulting logic program is given to an answer set solver to compute models of the current DLP, and for any future updates the whole process has to be repeated from scratch. We aim to remedy this situation by utilizing multi-shot ASP, composed of iterative answer set computations of a changing program without restarting from scratch at every step. To this end, we developed a system called moviola, utilizing the multi-shot answer set solver clingo. Using the system, a user can interactively write a DLP, update it, and compute its models according to various semantics on the fly.

Orkunt Sabuncu, João Leite

Adjudication of Coreference Annotations via Answer Set Optimization

We describe the first automatic approach for merging coreference annotations obtained from multiple annotators into a single gold standard. Merging is subject to hard constraints (consistency) and optimization criteria (minimal divergence from annotators) and involves an equivalence relation over a large number of elements. We describe two representations of the problem in Answer Set Programming and four objective functions suitable for the task. We provide two structurally different real-world benchmark datasets based on the METU-Sabanci Turkish Treebank, and we report our experiences in using the Gringo, Clasp, and Wasp tools for computing optimal adjudication results on these datasets.

Peter Schüller
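The equivalence relation at the heart of such adjudication can be maintained with a union-find structure. The sketch below is a hypothetical, much simpler majority-vote baseline (not the paper's ASP optimization encodings, and assuming coreference links arrive as normalized mention pairs): keep a link when enough annotators propose it, then close the kept links under transitivity.

```python
from collections import Counter

class UnionFind:
    # maintains the equivalence relation (coreference chains) over mentions
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def adjudicate(annotations, threshold):
    # keep a link iff at least `threshold` annotators propose it,
    # then merge the kept links into equivalence classes
    votes = Counter(link for ann in annotations for link in ann)
    uf = UnionFind()
    for (m1, m2), n in votes.items():
        if n >= threshold:
            uf.union(m1, m2)
    return uf

anns = [{("A", "B"), ("B", "C")}, {("A", "B")}, {("A", "B"), ("C", "D")}]
merged = adjudicate(anns, threshold=2)
print(merged.find("A") == merged.find("B"))  # True: (A, B) has three votes
```

Unlike this voting baseline, the adjudication in the paper enforces hard consistency constraints and minimizes divergence from the annotators globally, which is what motivates the Answer Set Optimization formulation.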

Backmatter
