
2008 | Book

Logic Programming

24th International Conference, ICLP 2008, Udine, Italy, December 9-13, 2008, Proceedings

Edited by: Maria Garcia de la Banda, Enrico Pontelli

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 24th International Conference on Logic Programming, ICLP 2008, held in Udine, Italy, in December 2008. The 35 revised full papers together with 2 invited talks, 2 invited tutorials, 11 papers of the co-located first Workshop on Answer Set Programming and Other Computing Paradigms (ASPOCP 2008), as well as 26 poster presentations and the abstracts of 11 doctoral consortium articles were carefully reviewed and selected from 177 initial submissions. The papers cover all issues of current research in logic programming; they are organized in topical sections on applications, algorithms, systems, and implementations, semantics and foundations, analysis and transformations, CHRs and extensions, implementations and systems, answer set programming and extensions, as well as constraints and optimizations.

Table of Contents

Frontmatter

Invited Talk

The Life of a Logic Programming System

Logic Programming and the Prolog language have a major role in Computing. Prolog, and its derived languages, have been widely used in an impressive variety of application domains. Thus, a bit of the history of Logic Programming is reflected in the history of systems such as Dec-10 Prolog [32], M-Prolog [15], C-Prolog [19], Quintus Prolog [20], SICStus Prolog [6], BIM-Prolog [17], ECLiPSe [1], BinProlog [30], SWI-Prolog [34], CIAO [14], and B-Prolog [35], to mention but a few. I briefly trace the evolution of one such system, YAP, and present a personal perspective on the challenges ahead for YAP (and for Logic Programming).

Vítor Santos Costa

Special Session

Uniting the Prolog Community

In his article "A Wake-Up Call for the Logic Programming Community", published in the December 2007 issue of the ALP Newsletter, the first author raised concerns about the viability of the Prolog programming language in the near future. The article had a pessimistic undertone, but expressed the hope that the current evolution can be reversed by uniting the Prolog community. Now, almost a year later, it is time for more optimistic news, and we are happy that the ALP, through the ICLP 2008 program chairs, has given us the opportunity to reiterate the arguments, present the recent positive developments, and involve the whole community.

Tom Schrijvers, Bart Demoen

Invited Tutorials

Constraint Handling Rules
A Tutorial for (Prolog) Programmers

Constraint Handling Rules (CHR) [2,5] is a high-level programming language based on multi-headed, committed-choice, guarded multiset rewrite rules. Originally designed in 1991 by Frühwirth for the particular purpose of adding user-defined constraint solvers to a host language, CHR has matured over the last decade into a powerful and elegant general-purpose language with a wide spectrum of application domains.

Different semantics have been proposed for the language, based on various logics (first-order logic, linear logic, ...). These logics, in combination with rewriting techniques, have been used to study program properties such as soundness and completeness, confluence, and termination. While that line of work treats CHR as a calculus, this tutorial teaches CHR as a proper programming language.

As a programming language, CHR seems simple enough: the programmer specifies a number of rewrite rules, and the CHR engine applies these rules exhaustively to an initial (multi-)set of constraints. Yet, this simplicity hides great power: e.g., the power to quickly prototype new constraint solvers, the power to implement Prolog’s co-routining predicates freeze/2 and when/2 in a single CHR rule each, and the power to subsume Guarded Horn Clauses while still not exploiting CHR’s full potential. Moreover, CHR is the only declarative language known in which every algorithm can be implemented with optimal space and time complexity [4].
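
As a taste of that claim, here is a minimal sketch, in the Prolog-embedded syntax of the K.U.Leuven CHR system, of freeze/2 as a single simplification rule (an illustration distilled from the sentence above, not code taken from the tutorial):

    :- use_module(library(chr)).
    :- chr_constraint freeze/2.

    % freeze(X, Goal) stays suspended in the constraint store while X is
    % unbound; when X becomes instantiated, the constraint is reactivated,
    % the guard nonvar(X) now succeeds, and the rule replaces the
    % constraint by a call to Goal.
    freeze(X, Goal) <=> nonvar(X) | call(Goal).

For example, the query ?- freeze(X, write(done)), X = 1. prints done only once X is bound.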

Unfortunately, few Prolog programmers are aware of the CHR language or that it is available in their Prolog system. These programmers are unable to tap into CHR’s power, so they have to go to great lengths to accomplish even simple tasks. Or they simply give up. This tutorial shows how to use CHR for solving their problems quickly and elegantly. Simple examples teach interactively how to write and reason about CHR programs, and what problems one can solve effectively with CHR.

This tutorial starts with ground CHR, the three types of rules, and the refined semantics [1], which is based on the notion of the active constraint and its occurrences. Other topics covered are triggering of rules, the propagation history, the use of data structures and the host language, declarations and impure features, and the common pitfalls of CHR.

This tutorial intends to make attendees aware of CHR’s strengths as a programming language, and teaches them when and how to apply CHR to small and medium-sized problems. The full set of tutorial slides is available at http://www.cs.kuleuven.be/~dtai/projects/CHR/.

About the Speaker.

Tom Schrijvers is a post-doctoral researcher at the K.U.Leuven in Belgium, who defended his Ph.D. thesis on Analyses, Optimizations and Extensions of Constraint Handling Rules in 2005 [3]. His CHR implementation, the K.U.Leuven CHR system, is the most advanced of its kind and is in widespread use in many Prolog systems. Tom uses CHR on a daily basis: for implementing his compiler, for supporting his type checking and test generation research, or simply for gaining an edge in the Prolog Programming Contest.

Tom Schrijvers
Back to Interval Temporal Logics
(Extended Abstract)

Interval-based temporal reasoning naturally arises in a variety of fields, including artificial intelligence (temporal knowledge representation, systems for temporal planning and maintenance, qualitative reasoning, theories of events), theoretical computer science (specification and design of hardware components, concurrent real-time processes), temporal databases (event modeling, temporal aggregation), and computational linguistics (analysis of progressive tenses, semantics and processing of natural languages) [10].

Angelo Montanari
Systems Biology: Models and Logics

The field of systems biology focuses on creating a finely detailed picture of biological mechanisms. Recently, the need has arisen for more and more sophisticated and mathematically well-founded computational tools, capable of analyzing those models that are and will be at the core of Systems Biology. The challenge consists in faithfully implementing such computational models in software packages exploiting the potential trade-offs among usability, accuracy, and scalability when dealing with large amounts of data. The aim of this presentation is to introduce some emerging problems and proposed solutions in this context.

In particular, after an introductory first part, in the second part we will focus on the use of Hybrid Systems in Systems Biology. Hybrid systems are dynamical systems presenting both discrete and continuous evolution, originally proposed to study embedded systems, where a discrete control acts on a continuously changing environment. The presence of both discrete and continuous dynamics makes this formalism appealing also for modeling biological systems. However, the situation in this case is subtler, basically because there is no “natural” separation of discrete and continuous components. It comes as no surprise, therefore, that Hybrid Systems have been used in Systems Biology in rather different ways. Some approaches, like the description of biological switches, concentrate on the use of model-checking routines. Other applications, like the switching between continuous and discrete/stochastic simulation, focus on exploiting the interplay between discreteness and continuity in order to reduce the computational burden of numerical simulation while maintaining acceptable precision. We will survey some of the main uses of Hybrid Automata in Systems Biology through a series of case studies that we deem interesting and paradigmatic, discussing both current and foreseeable logical and implementation issues.

Carla Piazza, Alberto Policriti
Separation Logic Tutorial

Separation logic is an extension of Hoare’s logic for reasoning about programs that manipulate pointers. It is based on the separating conjunction $P * Q$, which asserts that $P$ and $Q$ hold for separate portions of computer memory.
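
To make the connective concrete (an illustrative example of ours, not taken from the tutorial): writing $x \mapsto 3$ for a heap consisting of exactly one cell at address $x$ holding 3, the assertion $x \mapsto 3 * y \mapsto 3$ describes a two-cell heap and thus forces $x \neq y$, whereas the ordinary conjunction $x \mapsto 3 \wedge y \mapsto 3$ says that both formulas describe the same one-cell heap and thus forces $x = y$.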

This tutorial on separation logic has three parts.

1. Basics. Concentrating on highlights from the early work [1,2,3,4].

2. Model Theory. The model theory of separation logic evolved from the general resource models of bunched logic [5,6,7] and includes an account of program dynamics in terms of their interaction with resource [8,9].

3. Proof Theory. I will describe those aspects of the proof theory, particularly new entailment questions (frame and anti-frame inference [10,11]), which are important for applications in mechanized program verification.

Peter O’Hearn

Twenty Years of Stable Models Semantics Celebration

Invited Presentations

Authorization and Obligation Policies in Dynamic Systems

The paper defines a language for specifying authorization and obligation policies of an intelligent agent acting in a changing environment, and presents several ASP-based algorithms for checking compliance of an event with a policy specified in this language. The language allows representation of defeasible policies and is based on a theory of action and change.

Michael Gelfond, Jorge Lobo
Twelve Definitions of a Stable Model

This is a review of some of the definitions of the concept of a stable model that have been proposed in the literature. These definitions are equivalent to each other, at least when applied to traditional Prolog-style programs, but there are reasons why each of them is valuable and interesting. A new characterization of stable models can suggest an alternative picture of the intuitive meaning of logic programs; or it can lead to new algorithms for generating stable models; or it can work better than others when we turn to generalizations of the traditional syntax that are important from the perspective of answer set programming; or it can be more convenient for use in proofs; or it can be interesting simply because it demonstrates a relationship between seemingly unrelated ideas.
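
As background, the original 1988 definition can be recalled in a few lines (standard material, not specific to this paper): given a program $\Pi$ and a set $X$ of atoms, the reduct $\Pi^X$ is obtained by deleting every rule whose body contains some $\mathit{not}\ a$ with $a \in X$, and deleting all negative literals from the bodies of the remaining rules; $X$ is a stable model of $\Pi$ iff $X$ is the minimal model of the (negation-free) reduct $\Pi^X$. For instance, for $\Pi = \{ p \leftarrow \mathit{not}\ q \}$ the candidate $X = \{p\}$ yields the reduct $\{ p \leftarrow \}$ with minimal model $\{p\} = X$, so $\{p\}$ is stable, while $X = \{q\}$ yields the empty reduct, whose minimal model $\emptyset$ differs from $X$.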

Vladimir Lifschitz
Sixty Years of Stable Models

Twenty years ago Michael Gelfond and Vladimir Lifschitz published their celebrated paper on the stable model semantics of logic programs. Today, having built on and enlarged those key ideas of twenty years ago, answer set programming (ASP) has emerged as a flourishing paradigm of declarative programming, rich in theoretical advances and maturing applications. This is one aspect of the legacy of stable models, and a very important one. Another aspect, equally important, but somewhat farther from the limelight today, resides in the ability of stable models to provide us with a valuable method of reasoning; to give it a name, let us call it stable reasoning. In the full version of this essay I examine some of the foundational concepts underlying the approach of stable models. I try to answer the question: “What is a stable model?” by searching for a purely logical grasp of the stability concept. In so doing, I shall discuss some concepts and results in logic from around 60 years ago. In particular, I look at questions such as:

How does a notion of stability presented in a work on intuitionistic mathematics in 1947 relate to the Gelfond-Lifschitz concept of 1988?

How does the notion of constructible falsity published in 1949 help to explain certain properties of negation arising in the language of ASP?

Why is a seminal paper by McKinsey and Tarski, published in 1948, important for understanding the relations between answer sets and epistemic logic?

David Pearce
The DLV Project: A Tour from Theory and Research to Applications and Market

DLV is one of the most successful and widely used ASP systems. It is based on the stable model semantics, and supports a powerful language extending Disjunctive Logic Programming with many expressive constructs, including aggregates, strong and weak constraints, functions, lists, and sets. In this paper, we describe the long tour from basic research on languages and semantics, studies on algorithms and complexity, and the design and implementation of prototypes, up to the realization of a powerful and efficient system, which won the last ASP competition, is employed in industrial applications, and is even ready for marketing and commercial distribution. We report on the experience we have gained in more than twelve years of work on the DLV project, focusing on the most recent developments, industrial applications, trends, and market perspectives.

Nicola Leone, Wolfgang Faber

Invited Position Presentations

Using Answer Set Programming for Knowledge Representation and Reasoning: Future Directions

Since the proposal of the stable model semantics [1] of logic programs there have been many developments that make answer set programs a suitable language for various kinds of knowledge representation. The building blocks that make answer set programming a suitable knowledge representation language include theoretical results, implementations, and applications. The book [2] compiles most of the results that were available until 2002. Since then many additional results have been developed. However, many challenges and issues need to be further addressed before knowledge-based intelligent systems become more prevalent.

Chitta Baral
Building a Knowledge Base System for an Integration of Logic Programming and Classical Logic

This paper presents a Knowledge Base project for FO(ID), an extension of classical logic with inductive definitions. This logic is a natural integration of classical logic and logic programming based on the view of a logic program as a definition. We discuss the relationship between inductive definitions and common sense reasoning and the strong similarities and striking differences with ASP and Abductive LP. We report on inference systems that combine state-of-the-art techniques of SAT and ASP. Experiments show that FO(ID) model expansion systems are competitive with the best ASP-solvers.

Marc Denecker, Joost Vennekens
SMS and ASP: Hype or TST?

Twenty years of stable model semantics (SMS) and almost ten years of Answer Set Programming (ASP) are a good reason for a moment of reflection on these important concepts. This position paper gives a personal account of their history, aspects of ASP, and emphasizes the role of theory and practice in this area.

Thomas Eiter
Quo Vadis Answer Set Programming?
Past, Present, and Future

We discuss the development, current state and the future of Answer Set Programming, making predictions that are not necessarily accurate.

V. W. Marek
Answer Set Programming without Unstratified Negation

The paper argues that for answer set programming purposes it is not necessary to use unstratified negation but it is more appropriate to employ a basic language based on simple choice constructs, integrity constraints, and stratified negation. This offers a framework that enables natural problem encodings and smooth extensions, for instance, with constraints and aggregates.
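
To convey the flavour of such a basic language, here is a hypothetical sketch of graph colouring in lparse/smodels-style syntax (our example, not the paper's): a choice construct picks a colour, an integrity constraint rules out bad choices, and no unstratified negation is needed.

    % choice construct: exactly one colour per node
    1 { colour(X, C) : col(C) } 1 :- node(X).

    % integrity constraint: adjacent nodes get different colours
    :- edge(X, Y), colour(X, C), colour(Y, C).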

Ilkka Niemelä
Here’s the Beef: Answer Set Programming!

At the occasion of the Third International Conference on Principles of Knowledge Representation and Reasoning [1] in 1992, Ray Reiter delivered an invited talk entitled “Twelve Years of Nonmonotonic Reasoning Research: Where (and What) Is the beef?”, reflecting the state and future of the research area of Nonmonotonic Reasoning (NMR; [2]). Ray Reiter describes it in [3] as a “flourishing subculture” making many outside researchers “wonder what on earth this stuff is good for.” Although he seemed to be rather optimistic about the future of NMR, he nonetheless saw its major contribution on the theoretical side, providing “important insights about, and solutions to, many outstanding problems, not only in AI but in computer science in general.” Among them, he lists “Logic Programming implementations of nonmonotonic reasoning”

Torsten Schaub

Best Paper Awardees

Equivalences in Answer-Set Programming by Countermodels in the Logic of Here-and-There

In Answer-Set Programming different notions of equivalence, such as the prominent notions of strong and uniform equivalence, have been studied and characterized by various selections of models in the logic of Here-and-There (HT). For uniform equivalence however, correct characterizations in terms of HT-models can only be obtained for finite theories, respectively programs. In this paper, we show that a selection of countermodels in HT captures uniform equivalence also for infinite theories. This result is turned into coherent characterizations of the different notions of equivalence by countermodels, as well as by a mixture of HT-models and countermodels (so-called equivalence interpretations), which are lifted to first-order theories under a very general semantics given in terms of a quantified version of HT. We show that countermodels exhibit expedient properties like a simplified treatment of extended signatures, and provide further results for non-ground logic programs. In particular, uniform equivalence coincides under open and ordinary answer-set semantics, and for finite non-ground programs under these semantics, also the usual characterization of uniform equivalence in terms of maximal and total HT-models of the grounding is correct, even for infinite domains, when corresponding ground programs are infinite.
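
For readers outside the area, the underlying notions are standard and can be stated briefly: an HT-interpretation is a pair $(H,T)$ of sets of atoms with $H \subseteq T$; an atom $p$ is satisfied iff $p \in H$, and an implication $\varphi \rightarrow \psi$ is satisfied iff $(H,T) \models \varphi$ implies $(H,T) \models \psi$ and, in addition, $T$ classically satisfies $\varphi \rightarrow \psi$. A set $T$ is an equilibrium model of a theory iff $(T,T)$ satisfies it and no proper subset $H \subset T$ does so relative to $T$; equilibrium models coincide with answer sets, and two theories are strongly equivalent iff they have the same HT-models.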

Michael Fink
Dynamic Programming Algorithms as Products of Weighted Logic Programs

Weighted logic programming, a generalization of bottom-up logic programming, is a successful framework for specifying dynamic programming algorithms. In this setting, proofs correspond to the algorithm’s output space, such as a path through a graph or a grammatical derivation, and are given a weighted score, often interpreted as a probability, that depends on the score of the base axioms used in the proof. The desired output is a function over all possible proofs, such as a sum of scores or an optimal score. We describe the PRODUCT transformation, which can merge two weighted logic programs into a new one. The resulting program optimizes a product of proof scores from the original programs, constituting a scoring function known in machine learning as a “product of experts.” Through the addition of intuitive constraining side conditions, we show that several important dynamic programming algorithms can be derived by applying PRODUCT to weighted logic programs corresponding to simpler weighted logic programs.

Shay B. Cohen, Robert J. Simmons, Noah A. Smith

Regular Papers

Applications I

Detecting Inconsistencies in Large Biological Networks with Answer Set Programming

We introduce an approach to detecting inconsistencies in large biological networks by using Answer Set Programming. To this end, we build upon a recently proposed notion of consistency between biochemical/genetic reactions and high-throughput profiles of cell activity. We then present an approach based on Answer Set Programming to check the consistency of large-scale data sets. Moreover, we extend this methodology to provide explanations for inconsistencies in the data by determining minimal representations of conflicts. In practice, this can be used to identify unreliable data or to indicate missing reactions.

Martin Gebser, Torsten Schaub, Sven Thiele, Björn Usadel, Philippe Veber
A Logic Programming Approach to Home Monitoring for Risk Prevention in Assisted Living

Monitoring a patient in his home environment is necessary to ensure continuity of care in home settings, but this activity must not be too invasive or a burden for clinicians. For this reason we prototyped a system called SINDI (Secure and INDependent lIving), focused on i) collecting a limited amount of data about the person and the environment through Wireless Sensor Networks (WSN), and ii) reasoning about these data both to contextualize them and to support clinicians in understanding patients’ well-being as well as in predicting possible evolutions of their health. Our hierarchical logic-based model of health combines data from different sources, sensor data, test results, commonsense knowledge and the patient’s clinical profile at the lower level, and correlation rules between aspects of health (items) across upper levels. The logical formalization and the reasoning process are based on Answer Set Programming. The expressive power of this logic programming paradigm allows efficient reasoning to support prevention, while declarativity simplifies rule specification by clinicians and allows automatic encoding of knowledge. This paper describes how these issues have been targeted in the application scenario of the SINDI system.

Alessandra Mileo, Davide Merico, Roberto Bisiani
Automatic Composition of Melodic and Harmonic Music by Answer Set Programming

The composition of most styles of music is governed by rules. The natural statement of these rules is declarative (“The highest and lowest notes in a piece must be separated by a consonant interval”) and non-deterministic (“The base note of a key can be followed by any note in the key”). We show that by approaching the automation and analysis of composition as a knowledge representation task and formalising these rules in a suitable logical language, powerful and expressive intelligent composition tools can easily be built. This paper describes the use of answer set programming to construct an automated system that can compose both melodic and harmonic music, diagnose errors in human compositions and serve as a computer-aided composition tool. The use of a fully declarative language and an “off-the-shelf” reasoning engine allows the creation of tools which are significantly simpler, smaller and more flexible than those produced by existing approaches. It also combines harmonic and melodic composition in a single framework, which is a new feature in the growing area of algorithmic composition.
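
As a hint of what such encodings look like, a hypothetical fragment in lparse-style ASP syntax (all predicate names are ours, invented for illustration):

    % non-deterministic rule: each position carries some note of the key
    1 { melody(P, N) : in_key(N) } 1 :- position(P).

    % declarative rule: consecutive notes must not form a dissonant interval
    :- next(P1, P2), melody(P1, N1), melody(P2, N2), dissonant(N1, N2).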

Georg Boenn, Martin Brain, Marina De Vos, John ffitch

Algorithms, Systems, and Implementations I

On the Efficient Execution of ProbLog Programs

The past few years have seen a surge of interest in the field of probabilistic logic learning or statistical relational learning. In this endeavor, many probabilistic logics have been developed. ProbLog is a recent probabilistic extension of Prolog motivated by the mining of large biological networks. In ProbLog, facts can be labeled with mutually independent probabilities that they belong to a randomly sampled program. Different kinds of queries can be posed to ProbLog programs. We introduce algorithms that allow the efficient execution of these queries, discuss their implementation on top of the YAP-Prolog system, and evaluate their performance in the context of large networks of biological entities.
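
For illustration, a tiny program in the notation used by current ProbLog implementations (probabilities invented for the example): each probabilistic fact holds independently with its stated probability, and the success probability of a query is the probability that a randomly sampled subprogram entails it.

    0.9::edge(a, b).    % this edge is present with probability 0.9
    0.8::edge(b, c).

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).

    % P(path(a, c)) = 0.9 * 0.8 = 0.72, since both edges are needed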

Angelika Kimmig, Vítor Santos Costa, Ricardo Rocha, Bart Demoen, Luc De Raedt
Engineering an Incremental ASP Solver

Many real-world applications, like planning or model checking, comprise a parameter reflecting the size of a solution. In a propositional formalism like Answer Set Programming (ASP), such problems can only be dealt with in a bounded way, considering one problem instance after another by gradually increasing the bound on the solution size. We thus propose an incremental approach to both grounding and solving in ASP. Our goal is to avoid redundancy by gradually processing the extensions to a problem rather than repeatedly re-processing the entire (extended) problem. We start by furnishing a formal framework capturing our incremental approach in terms of module theory. In turn, we take advantage of this framework for guiding the successive treatment of program slices during grounding and solving. Finally, we describe the first integrated incremental ASP system, iclingo, and provide an experimental evaluation.

Martin Gebser, Roland Kaminski, Benjamin Kaufmann, Max Ostrowski, Torsten Schaub, Sven Thiele
Concurrent and Local Evaluation of Normal Programs

Tabled evaluations can incorporate a number of features, including tabled negation, reduction with respect to the well-founded model, tabled constraints and answer subsumption. Many of these features are most efficiently evaluated using the Local evaluation strategy, which fully evaluates each mutually dependent set of tabled subgoals before returning answers to other subgoals outside of that set. In this paper, we introduce a formalism, Concurrent Local SLG by which multiple threads of computation concurrently perform Local evaluation of the well-founded semantics, and which is a framework for multi-threaded tabling in the XSB system. We prove several properties of Local evaluation within single-threaded tabled computation. We then extend SLG to a model of concurrency and show that the completeness and complexity of SLG are retained when computed by multiple threads. Finally, we extend Local evaluation to concurrent SLG, and show that the properties of Local evaluation continue to hold under concurrency.
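
As a reminder of what tabling provides (a standard textbook example, not taken from the paper): under SLG evaluation the left-recursive program below terminates, whereas plain SLD resolution would loop; under the Local strategy, the table for path/2 is completed before answers flow to subgoals outside its dependency set.

    :- table path/2.

    path(X, Y) :- path(X, Z), edge(Z, Y).
    path(X, Y) :- edge(X, Y).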

Rui Marques, Terrance Swift

Semantics and Foundations I

On the Continuity of Gelfond-Lifschitz Operator and Other Applications of Proof-Theory in ASP

Using a characterization of stable models of a logic program $P$ as satisfying valuations of a suitably chosen propositional theory, called the set of reduced defining equations for $P$, we show that the finitary character of that theory is equivalent to a certain continuity property of the Gelfond-Lifschitz operator ${\mathit{GL}}_P$ associated with the program $P$. The introduction of this formula leads to a double-backtracking algorithm for computation of stable models by reduction to satisfiability of suitably chosen propositional theories. This algorithm does not use the reduction via loop-formulas as proposed in [1] or its extension proposed in [2]. Finally, we discuss possible extensions of the techniques proposed in this paper to the context of cardinality constraints.

V. W. Marek, J. B. Remmel
αleanTAP: A Declarative Theorem Prover for First-Order Classical Logic

We present αleanTAP, a declarative tableau-based theorem prover written as a pure relation. Like leanTAP, on which it is based, αleanTAP can prove ground theorems in first-order classical logic. Since it is declarative, αleanTAP generates theorems and accepts non-ground theorems and proofs. The lack of mode restrictions also allows the user to provide guidance in proving complex theorems and to ask the prover to instantiate non-ground parts of theorems. We present a complete implementation of αleanTAP, beginning with a translation of leanTAP into αKanren, an embedding of nominal logic programming in Scheme. We then show how to use a combination of tagging and nominal unification to eliminate the impure operators inherited from leanTAP, resulting in a purely declarative theorem prover.

Joseph P. Near, William E. Byrd, Daniel P. Friedman
Towards Ludics Programming: Interactive Proof Search

Girard [1] introduced Ludics as an interactive theory aiming at overcoming the distinction between syntax and semantics in logic.

In this paper, we investigate how ludics could serve as a foundation for logic programming, providing a mechanism for interactive proof search, that is, proof search by interaction (or proof search by cut-elimination).

Alexis Saurin
Declarative Semantics for Active Integrity Constraints

We study active integrity constraints, a formalism designed to describe integrity constraints on databases and to specify preferred ways to enforce them. The original semantics proposed for active integrity constraints is based on the concept of a founded repair. We point out that the groundedness underlying founded repairs does not prevent cyclic justifications and so may be inappropriate in some applications. Thus, using a different notion of grounding, with roots in logic programming and revision programming, we introduce two new semantics: justified weak repairs and justified repairs. We study properties of these semantics, relate them to earlier semantics of active integrity constraints, and establish the complexity of basic decision problems.

Luciano Caroprese, Mirosław Truszczyński

Analysis and Transformations

A Folding Algorithm for Eliminating Existential Variables from Constraint Logic Programs

The existential variables of a clause in a constraint logic program are the variables which occur in the body of the clause and not in its head. The elimination of these variables is a transformation technique which is often used for improving program efficiency and verifying program properties. We consider a folding transformation rule which ensures the elimination of existential variables and we propose an algorithm for applying this rule in the case where the constraints are linear inequations over rational or real numbers. The algorithm combines techniques for matching terms modulo equational theories and techniques for solving systems of linear inequations. We show that an implementation of our folding algorithm performs well in practice.
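
A one-clause example fixes the terminology (ours, not the paper's):

    % Y occurs in the body but not in the head, so Y is an existential
    % variable of this clause; folding seeks to eliminate such variables.
    p(X) :- X =< Y, q(Y).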

Valerio Senni, Alberto Pettorossi, Maurizio Proietti
Negative Ternary Set-Sharing

The Set-Sharing domain has been widely used to infer at compile-time interesting properties of logic programs such as occurs-check reduction, automatic parallelization, and finite-tree analysis. However, performing abstract unification in this domain requires a closure operation that increases the number of sharing groups exponentially. Much attention has been given to mitigating this key inefficiency in this otherwise very useful domain. In this paper we present a novel approach to Set-Sharing: we define a new representation that leverages the complement (or negative) sharing relationships of the original sharing set, without loss of accuracy. Intuitively, given an abstract state $sh_{\mathcal{V}}$ over the finite set of variables of interest $\mathcal{V}$, its negative representation is $\wp(\mathcal{V}) \setminus sh_{\mathcal{V}}$. Using this encoding during analysis dramatically reduces the number of elements that need to be represented in the abstract states and during abstract unification as the cardinality of the original set grows toward $2^{|\mathcal{V}|}$. To further compress the number of elements, we express the set-sharing relationships through a set of ternary strings that compacts the representation by eliminating redundancies among the sharing sets. Our experiments show that our approach can compress the number of relationships, significantly reducing the memory usage and running time of all abstract operations, including abstract unification.
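
A small worked illustration of the idea (our numbers, not the paper's): with $\mathcal{V} = \{x,y,z\}$, $\wp(\mathcal{V})$ has $2^3 = 8$ elements; if an abstract state $sh_{\mathcal{V}}$ contains 6 of them, the negative representation $\wp(\mathcal{V}) \setminus sh_{\mathcal{V}}$ stores only the remaining 2, and the saving grows as $sh_{\mathcal{V}}$ approaches $2^{|\mathcal{V}|}$. Encoding sharing groups as bit strings over $x$, $y$, $z$, a single ternary string such as $1{*}0$ then stands for both $110$ (the group $\{x,y\}$) and $100$ (the group $\{x\}$).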

Eric Trias, Jorge Navas, Elena S. Ackley, Stephanie Forrest, M. Hermenegildo
Termination of Narrowing Using Dependency Pairs

In this work, we extend the dependency pair approach for automated proofs of termination in order to prove the termination of narrowing. Our extension of the dependency pair approach generalizes the standard notion of dependency pairs by taking specifically into account the dependencies between the left-hand side of a rewrite rule and its own argument subterms. We demonstrate that the new narrowing dependency pairs exactly capture the narrowing termination behavior and provide an effective termination criterion, which we prove to be sound and complete. Finally, we discuss how the problem of analyzing narrowing chains can be recast as a standard analysis problem for traditional (rewriting) chains, so that the proposed technique can be effectively mechanized by reusing the standard DP infrastructure.

María Alpuente, Santiago Escobar, José Iborra
Dynamic Analysis of Bounds Versus Domain Propagation

Constraint propagation solvers interleave propagation (removing impossible values from variable domains) with search. Previously, Schulte and Stuckey introduced the use of static analysis to determine where in a constraint program domain propagators can be replaced by more efficient bounds propagators and still ensure that the same search space is traversed.

This paper introduces a dynamic yet considerably simpler approach to uncover the same information. The information is obtained by a linear time traversal of an analysis graph that straightforwardly reflects the properties of propagators implementing constraints. Experiments confirm that the simple dynamic method is efficient and that it can be used interleaved with search, taking advantage of the simplification of the constraint graph that arises from search.

Christian Schulte, Peter J. Stuckey

Semantics and Foundations II

Lparse Programs Revisited: Semantics and Representation of Aggregates

Lparse programs are logic programs with weight constraints as implemented in the smodels system, which constitute an important class of logic programs with constraint atoms. To effectively apply lparse programs to problem solving, a clear understanding of their semantics and representation power is indispensable. In this paper, we study the semantics of lparse programs, called the lparse semantics. We show that for a large class of programs, called strongly satisfiable programs, the lparse semantics agrees with the semantics based on conditional satisfaction. However, when the two semantics disagree, a stable model admitted by the lparse semantics may be circularly justified. We then present a transformation by which an lparse program can be transformed into a strongly satisfiable one, so that no circular models may be generated under the current implementation of smodels. This leads to an investigation of a methodological issue, namely the possibility of compact representation of aggregate programs by lparse programs. We present some experimental results to compare this approach with ones where aggregates are handled more explicitly.

Guohua Liu, Jia-Huai You
Compiling Fuzzy Answer Set Programs to Fuzzy Propositional Theories

We show how a fuzzy answer set program can be compiled to an equivalent fuzzy propositional theory whose models correspond to the answer sets of the program. This creates a basis for constructing fuzzy answer set solvers, such as solvers based on fuzzy SAT-solvers or on linear programming.

Jeroen Janssen, Stijn Heymans, Dirk Vermeir, Martine De Cock
Abstract Answer Set Solvers

Nieuwenhuis, Oliveras, and Tinelli showed how to describe enhancements of the Davis-Putnam-Logemann-Loveland algorithm using transition systems, instead of pseudocode. We design a similar framework for three algorithms that generate answer sets for logic programs: smodels, asp-sat with Backtracking, and a newly designed and implemented algorithm sup. This approach to describing answer set solvers makes it easier to prove their correctness, to compare them, and to design new systems.
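
Two rules of the Nieuwenhuis-Oliveras-Tinelli transition system for DPLL convey the style (recalled from that line of work in its usual presentation; $M$ is a sequence of decided and implied literals, $F$ a set of clauses):

    UnitPropagate:  M || F, C ∨ l  ⟹  M l || F, C ∨ l    if M ⊨ ¬C and l is undefined in M
    Decide:         M || F         ⟹  M l^d || F         if l is undefined in M and occurs in F

A solver is then characterized by which transitions it may apply, so correctness arguments become invariants over transitions rather than reasoning about pseudocode bookkeeping.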

Yuliya Lierler

Semantics and Foundations III

Partial Functions and Equality in Answer Set Programming

In this paper we propose an extension of Answer Set Programming (ASP) [1], and in particular, of its most general logical counterpart, Quantified Equilibrium Logic (QEL) [2], to deal with partial functions. Although the treatment of equality in QEL can be established in different ways, we first analyse the choice of decidable equality with complete functions and Herbrand models, recently proposed in the literature [3]. We argue that this choice yields some counterintuitive effects from a logic programming and knowledge representation point of view. We then propose a variant in which the set of functions is partitioned into partial functions and Herbrand functions (which we also call constructors). In the rest of the paper, we show a direct connection to Scott’s Logic of Existence [4] and present a practical application, proposing an extension of normal logic programs to deal with partial functions and equality, so that they can be translated into function-free normal programs, making it possible to compute their answer sets with any standard ASP solver.

Pedro Cabalar
Computable Functions in ASP: Theory and Implementation

Disjunctive Logic Programming (DLP) under the answer set semantics, often referred to as Answer Set Programming (ASP), is a powerful formalism for knowledge representation and reasoning (KRR). Recent years have witnessed an increasing effort to embed functions in the context of ASP. Nevertheless, at present no ASP system allows for a reasonably unrestricted use of function terms. Functions are either required not to be recursive or subject to severe syntactic limitations, if allowed at all in ASP systems.

In this work we formally define the new class of finitely-ground programs, allowing for a powerful (possibly recursive) use of function terms in the full ASP language with disjunction and negation. We demonstrate that finitely-ground programs have nice computational properties: (i) both brave and cautious reasoning are decidable, and (ii) answer sets of finitely-ground programs are computable. Moreover, the language is highly expressive, as any computable function can be encoded by a finitely-ground program. Due to this high expressiveness, membership in the class of finitely-ground programs is clearly not decidable (we prove that it is semi-decidable). We also single out a subset of finitely-ground programs, called finite-domain programs, which are effectively recognizable, while keeping computability of both reasoning and answer set computation.

We implement all results in DLV, further extending the language in order to support list and set terms, along with a rich library of built-in functions for their manipulation. The resulting ASP system is very powerful: any computable function can be encoded in a rich and fully declarative KRR language, ensuring termination on every finitely-ground program. In addition, termination is “a priori” guaranteed if the user asks for the finite-domain check.
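
The boundary is easy to illustrate (a sketch in the spirit of the definition, not the paper's own example): the first program below keeps deriving new ground atoms bottom-up, so its intelligent instantiation never stops and it is not finitely-ground; in the second, the recursive variable ranges over a finite derivable relation, so only finitely many ground instances are produced.

    % not finitely-ground: nat(0), nat(s(0)), nat(s(s(0))), ... forever
    nat(0).
    nat(s(X)) :- nat(X).

    % finitely-ground: X is bound to the finitely many atoms of small/1
    small(0).
    small(s(0)).
    p(s(X)) :- small(X).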

Francesco Calimeri, Susanna Cozza, Giovambattista Ianni, Nicola Leone
Composing Normal Programs with Function Symbols

Several expressive, decidable fragments of Answer Set Programming with function symbols have been identified over the past years. Undecidability results suggest that there are no maximal decidable program classes encompassing all these fragments; this raises a sort of interoperability question: Given two programs belonging to different fragments, does their union preserve the nice computational properties of each fragment? In this paper we give a positive answer to this question and outline two of its possible applications. First, membership to a “good” fragment can be checked once and independently for each program module; this allows modular answer set programming with function symbols. As a second application, we extend known decidability results, by showing how different forms of recursion can be simultaneously supported.

Sabrina Baselice, Piero A. Bonatti

Applications II

Verification from Declarative Specifications Using Logic Programming

In recent years, the declarative programming philosophy has had a visible impact on new emerging disciplines, such as heterogeneous multi-agent systems and flexible business processes. We address the problem of formal verification for systems specified using declarative languages, focusing in particular on the Business Process Management field. We propose a verification method based on the g-SCIFF abductive logic programming proof procedure and evaluate our method empirically, by comparing its performance with that of other verification frameworks.

Marco Montali, Paolo Torroni, Marco Alberti, Federico Chesani, Marco Gavanelli, Evelina Lamma, Paola Mello
Prolog Based Description Logic Reasoning

In this paper we present the recent developments of the DLog system, an ABox reasoning engine for the $\mathcal{SHIQ}$ description logic language. DLog differs from traditional description logic reasoners in that it transforms description logic axioms into a Prolog program. The transformation is done independently of the ABox, i.e. the set of assertions about the individuals. This makes it possible to store the ABox assertions in a database, rather than in memory. This approach results in better scalability and helps in using description logic ontologies directly on top of existing information sources.

The transformation involves several optimisation steps, which aim at producing more efficient Prolog programs. In this paper we focus on the partial evaluation technique we apply to obtain programs that do not use logic variables. This simplifies the implementation, improves performance and opens up the possibility of compiling into Mercury code.

In the paper we also present the recent architectural changes in the DLog system, summarise the most important features of the implementation and evaluate the performance of DLog by comparing it to the best available description logic reasoners.

Gergely Lukácsy, Péter Szeredi, Balázs Kádár
Resource Management Policy Handling Multiple Use-Cases in MPSoC Platforms Using Constraint Programming

Multi-processor system-on-chip (MPSoC) technology is finding widespread application in the embedded system domain, like in cell phones, automotive control units or avionics. Once deployed in the field, these devices always run the same set of applications, in a well-characterized context. It is therefore possible to spend a large amount of time on off-line software optimization and then deploy the results in the field. Each possible set of applications that can be active simultaneously in an MPSoC platform leads to a different use-case that the system has to be verified and tested for. Above all, smooth switching between use-cases falls within the scope of the resource manager, since users should not experience artifacts or delays when a transition between any two consecutive use-cases takes place. In this paper, we propose a semi-static approach to the resource management problem, where the allocation and scheduling solutions for the tasks in each use-case are computed off-line via a Logic Based Benders Decomposition approach using Constraint Programming and stored for use in run-time mapping decisions. The solutions are logically organized in a lattice, so that the transition costs between any two consecutive use-cases can be bounded. The resulting framework exhibits both a high level of flexibility and orders of magnitude speed-ups w.r.t. monolithic approaches that do not exploit decomposition.

Luca Benini, Davide Bertozzi, Michela Milano

CHRs and Extensions

Optimization of CHR Propagation Rules

Constraint Handling Rules (CHR) is an elegant, high-level programming language based on multi-headed, forward chaining rules. To ensure CHR propagation rules are applied at most once with the same combination of constraints, CHR implementations maintain a so-called propagation history. The performance impact of this history can be significant. We introduce several optimizations that, for the majority of CHR rules, eliminate this overhead. We formally prove their correctness, and evaluate their implementation in two state-of-the-art CHR systems.
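
The issue is visible on the textbook transitivity rule (a standard CHR example, not drawn from the paper): since a propagation rule (==>) keeps its head constraints in the store, the rule below would fire again and again on the same pair of leq/2 constraints if the history did not record which combinations have already been used.

    transitivity @ leq(X, Y), leq(Y, Z) ==> leq(X, Z).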

Peter Van Weert
Termination Analysis of CHR Revisited

Today, two distinct direct approaches to proving termination of CHR programs exist. The first approach, by T. Frühwirth, proves termination of CHR programs without propagation. The second, by Voets et al., deals with programs that contain propagation. It is, however, less powerful on programs without propagation. In this paper, we present new termination conditions that are strictly more powerful than those of previous approaches and that are also applicable to a new class of programs. Furthermore, we present a new representation for CHR states for which size-decreases between consecutive states correspond to termination. Both contributions are linked: our termination conditions correspond to the existence of a well-founded order on the new state representation, which decreases for consecutive computation states.

Paolo Pilozzi, Danny De Schreye
Transactions in Constraint Handling Rules

CHR is a highly concurrent language, and yet it is by no means a trivial task to write correct concurrent CHR programs. We propose a new semantics for CHR, which allows specifying and reasoning about transactions. Transactions alleviate the complexity of writing concurrent programs by allowing entire derivations to run atomically and in isolation.

We derive several program transformations based on our semantics that transform particular classes of transactional CHR programs to non-transactional ones. These transformations are useful because they obviate a general-purpose transaction manager, and may lift unnecessary sequentialization present in the transactional semantics.

Tom Schrijvers, Martin Sulzmann
Cadmium: An Implementation of ACD Term Rewriting

Cadmium is a rule based programming language for compiling solver independent constraint models to various solver dependent back-ends. Cadmium is based on a hybrid between Constraint Handling Rules (CHR) and term rewriting modulo Associativity, Commutativity and a restricted form of Distributivity (ACD) called Conjunctive Context (CC). Experience with using Cadmium in the G12 project shows that CC is a powerful language feature, as local model mapping can depend on some non-local context, such as variable declarations or other constraints. However, CC significantly complicates the Cadmium normalisation algorithm, since the normal form of a term may depend on what context it appears in. In this paper we present an implementation of Cadmium based on classic bottom-up evaluation, but modified to handle CC matching. We evaluate the performance of the new implementation compared to earlier prototype normalisation algorithms. We show that the resulting system is fast enough to run “real-world” Cadmium applications.

Gregory J. Duck, Leslie De Koninck, Peter J. Stuckey

Semantics and Foundations IV

Quantified Equilibrium Logic and Foundations for Answer Set Programs

QHT is a first-order super-intuitionistic logic that provides a foundation for answer set programming (ASP) and a useful tool for analysing and transforming non-ground programs. We recall some properties of QHT and its nonmonotonic extension, quantified equilibrium logic (QEL). We show how the proof theory of QHT can be used to extend to non-ground programs previous results on the completeness of θ-subsumption. We also establish a reduction of QHT to classical logic and show how this can be used to obtain and extend classical encodings for concepts such as the strong equivalence of programs and theories. We pay special attention to a class of general (disjunctive) logic programs that capture all universal theories in QEL.

David Pearce, Agustín Valverde
Elimination of Disjunction and Negation in Answer-Set Programs under Hyperequivalence

The study of different notions of equivalence is one of the cornerstones of current research in answer-set programming. This is mainly motivated by the needs of program simplification and modular programming, for which ordinary equivalence is insufficient. A recently introduced equivalence notion in this context is hyperequivalence, which includes as special cases strong, uniform, and ordinary equivalence. We study in this paper the question of replacing programs by syntactically simpler ones preserving hyperequivalence (we refer to such a replacement as a casting). In particular, we provide necessary and sufficient semantic conditions under which the elimination of disjunction, negation, or both, in programs is possible, preserving hyperequivalence. In other words, we characterise in model-theoretic terms when a disjunctive logic program can be replaced by a hyperequivalent normal, positive, or Horn program, respectively. Furthermore, we study the computational complexity of the considered tasks and, based on similar results for strong equivalence developed in previous work, we provide methods for constructing the respective hyperequivalent programs. Our results contribute to the understanding of problem settings in logic programming in the sense that they show in which scenarios the usage of certain constructs are superfluous or not.
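
A classic boundary case of this question (standard folklore, recalled for illustration): the disjunctive program $P = \{p \vee q\}$ and its shifted normal variant $Q = \{p \leftarrow \mathit{not}\ q,\ q \leftarrow \mathit{not}\ p\}$ have the same answer sets, $\{p\}$ and $\{q\}$, yet they are not strongly equivalent: adding the context $R = \{p \leftarrow q,\ q \leftarrow p\}$ gives $P \cup R$ the answer set $\{p,q\}$, while $Q \cup R$ has no answer set at all.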

Jörg Pührer, Hans Tompits, Stefan Woltran
Relativized Hyperequivalence of Logic Programs for Modular Programming

A recent framework of relativized hyperequivalence of programs offers a unifying generalization of strong and uniform equivalence. It seems to be especially well suited for applications in program optimization and modular programming due to its flexibility that allows us to restrict, independently of each other, the head and body alphabets in context programs. We study relativized hyperequivalence for the three semantics of logic programs given by stable, supported and supported minimal models. For each semantics, we identify four types of contexts, depending on whether the head and body alphabets are given directly or as the complement of a given set. Hyperequivalence relative to contexts where the head and body alphabets are specified directly has been studied before. In this paper, we establish the complexity of deciding relativized hyperequivalence wrt the three other types of context programs.

Mirosław Truszczyński, Stefan Woltran
Program Correspondence under the Answer-Set Semantics: The Non-ground Case

The study of various notions of equivalence between logic programs in the area of answer-set programming (ASP) gained increasing interest in recent years. The main reason for this undertaking is that ordinary equivalence between answer-set programs fails to yield a replacement property similar to the one of classical logic. Although many refined program correspondence notions have been introduced in the ASP literature so far, most of these notions were studied for propositional programs only, which limits their practical usability as concrete programming applications require the use of variables. In this paper, we address this issue and introduce a general framework for specifying parameterised notions of program equivalence for non-ground disjunctive logic programs under the answer-set semantics. Our framework is a generalisation of a similar one defined previously for the propositional case and allows the specification of several equivalence notions extending well-known ones studied for propositional programs. We provide semantic characterisations for instances of our framework generalising uniform equivalence, and we study decidability and complexity aspects. Furthermore, we consider axiomatisations of such correspondence problems by means of polynomial translations into second-order logic.

Johannes Oetsch, Hans Tompits

Algorithms, Systems, and Implementations II

Efficient Algorithms for Functional Constraints

Functional constraints

are an important constraint class in Constraint Programming (CP) systems, in particular for Constraint Logic Programming (CLP) systems. CP systems with finite domain constraints usually employ CSP-based solvers which use local consistency, e.g. arc consistency. We introduce a new approach which is based instead on

variable substitution

. We obtain efficient algorithms for reducing systems involving functional and

bi-functional constraints

together with other non-functional constraints. It also solves globally any CSP where there exists a variable such that any other variable is reachable from it through a sequence of functional constraints. Our experiments show that variable elimination can significantly improve the efficiency of solving problems with functional constraints.

Yuanlin Zhang, Roland H. C. Yap, Chendong Li, Satyanarayana Marisetti
Two WAM Implementations of Action Rules

Two implementations of Action Rules are presented in the context of a WAM-like Prolog system: one follows a meta-call based approach, the other uses suspended WAM environments on the WAM heap. Both are based on a program transformation that clarifies the semantics of Action Rules. Their implementation is compared experimentally to the TOAM-based implementation of Action Rules in B-Prolog. The suspension-based approach is faster at reactivating agents on the instantiation event. The meta-call approach is easier to implement, performs very well overall and much better for synchronous events, and is more flexible than the suspension-based approach.

Bart Demoen, Phuong-Lan Nguyen
Constraint-Level Advice for Shaving

This work concentrates on improving the robustness of constraint solvers by increasing the propagation strength of constraint models in a declarative and automatic manner. Our objective is to efficiently identify and remove shavable values during search. A value is shavable if, as soon as it is assigned to its associated variable, an inconsistency can be detected, making it possible to refute it. We extend previous work on shaving by using different techniques to decide if a given value is an interesting candidate for the shaving process. More precisely, we exploit the semantics of (global) constraints to suggest values, and reuse both the successes and failures of shaving later in search to tune shaving further. We illustrate our approach with two important global constraints, namely alldifferent and sum, and present the results of experiments on three problem classes. The experimental results are quite encouraging: we are able to significantly reduce the number of search nodes (even by more than two orders of magnitude), and improve the average execution time by one order of magnitude.

Radoslaw Szymanek, Christophe Lecoutre
A High-Level Implementation of Non-deterministic, Unrestricted, Independent And-Parallelism

The growing popularity of multicore architectures has renewed interest in language-based approaches to the exploitation of parallelism. Logic programming has proved an interesting framework to this end, and there are parallel implementations which have achieved significant speedups, but at the cost of a quite sophisticated low-level machinery. This machinery has been found challenging to code and, specially, to maintain and expand. In this paper, we follow a different approach which adopts a higher level view by raising some of the core components of the implementation to the level of the source language. We briefly present an implementation model for independent and-parallelism which fully supports non-determinism through backtracking and provides flexible solutions for some of the main problems found in previous and-parallel implementations. Our proposal is able to optimize the execution for the case of deterministic programs and to exploit unrestricted and-parallelism, which allows exposing more parallelism among clause literals than fork-join-based proposals. We present performance results for an implementation, including data for benchmarks where and-parallelism is exploited in non-deterministic programs.
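
In Ciao-style source notation, independent and-parallelism is typically exposed through the &/2 parallel conjunction operator; a schematic example (ours, using the usual quicksort skeleton with partition/4 and append/3 as standard helpers):

    % the two recursive calls share no variables after partitioning,
    % so they may run in parallel; backtracking through & is supported
    qsort([], []).
    qsort([X|L], Sorted) :-
        partition(L, X, Small, Large),
        qsort(Small, S1) & qsort(Large, S2),
        append(S1, [X|S2], Sorted).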

Amadeo Casas, Manuel Carro, Manuel V. Hermenegildo

Short Papers

Semantics and Foundations

Inference with Logic Programs with Annotated Disjunctions under the Well Founded Semantics

Logic Programs with Annotated Disjunctions (LPADs) allow one to express probabilistic information in logic programming. The semantics of an LPAD is given in terms of the well-founded models of the normal logic programs obtained by selecting one disjunct from each ground LPAD clause. This paper presents SLGAD resolution, which computes the (conditional) probability of a ground query from an LPAD and is based on SLG resolution for normal logic programs. SLGAD is evaluated on classical benchmarks for well-founded semantics inference algorithms, namely the stalemate game and the ancestor relation. SLGAD is compared with Cilog2 and SLDNFAD, an algorithm based on SLDNF, on the programs that are modularly acyclic. The results show that SLGAD deals correctly with cyclic programs and, even if it is more expensive than SLDNFAD on problems where SLDNFAD succeeds, it is faster than Cilog2 when the query is true in an exponential number of instances.
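
An LPAD clause annotates each head disjunct with a probability; a minimal example in the usual LPAD notation (toy numbers ours):

    % each ground instance independently chooses one head disjunct
    (heads(C) : 0.5) ; (tails(C) : 0.5) :- toss(C).
    toss(coin1).

    % choosing a disjunct for every ground clause yields a normal program;
    % the probability of heads(coin1) sums over the selections whose
    % well-founded model makes it true: here 0.5.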

Fabrizio Riguzzi
Safe Formulas in the General Theory of Stable Models (Preliminary Report)

Safe first-order formulas generalize the concept of a safe rule, which plays an important role in the design of answer set solvers. We show that any safe sentence is equivalent, in a certain sense, to the result of its grounding—to the variable-free sentence obtained from it by replacing all quantifiers with multiple conjunctions and disjunctions. It follows that a safe sentence and the result of its grounding have the same stable models, and that stable models of a safe sentence can be characterized by a formula of a simple syntactic form.
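
For instance (our example): over the Herbrand universe $\{a, b\}$, grounding replaces $\forall x\, (p(x) \rightarrow q(x))$ by $(p(a) \rightarrow q(a)) \wedge (p(b) \rightarrow q(b))$ and $\exists x\, p(x)$ by $p(a) \vee p(b)$; the result above says that for safe sentences this syntactic transformation changes nothing as far as stable models are concerned.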

Joohyung Lee, Vladimir Lifschitz, Ravi Palla
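
A small illustration of the construction (our example): for a safe rule and the constants a and b, grounding replaces the quantified rule by a finite set of its instances.

    % Safe rule: the variable X occurs in the positive body atom q(X).
    p(X) :- q(X).

    % Result of grounding over the constants a and b:
    p(a) :- q(a).
    p(b) :- q(b).
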
Non-determinism and Probabilities in Timed Concurrent Constraint Programming

A timed concurrent constraint process calculus with probabilistic and non-deterministic choices is proposed. We outline the rationale of an operational semantics for the calculus. The semantics ensures consistent interactions between both kinds of choices and is indispensable for the definition of logic-based verification capabilities over system specifications.

Jorge A. Pérez, Camilo Rueda
Stochastic Behavior and Explicit Discrete Time in Concurrent Constraint Programming

We address the inclusion of stochastic information into an explicitly timed concurrent constraint process language. An operational semantics is proposed as a preliminary result. Our approach finds applications in biology, among other areas.

Jesús Aranda, Jorge A. Pérez, Camilo Rueda, Frank D. Valencia
TopLog: ILP Using a Logic Program Declarative Bias

This paper introduces a new Inductive Logic Programming (ILP) framework called Top Directed Hypothesis Derivation (TDHD). In this framework each hypothesised clause must be derivable from a given logic program called the top theory (⊤). The top theory can be viewed as a declarative bias which defines the hypothesis space, replacing the metalogical mode statements used in many ILP systems. Firstly, we present a theoretical framework for TDHD and show that standard SLD derivation can be used to efficiently derive hypotheses from ⊤. Secondly, we present a prototype implementation of TDHD within a new ILP system called TopLog. Thirdly, we show that the accuracy and efficiency of TopLog on several benchmark datasets is competitive with a state-of-the-art ILP system such as Aleph.

Stephen H. Muggleton, José Carlos Almeida Santos, Alireza Tamaddoni-Nezhad
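
Schematically, a top theory is an ordinary logic program whose SLD derivations spell out candidate clauses. The fragment below is only our illustrative sketch; the '$body' nonterminal and the predicate names are hypothetical, not the paper's actual syntax:

    % Hypotheses for h/1 are clauses SLD-derivable from this top theory.
    h(X) :- '$body'(X).                  % every hypothesis has head h(X)
    '$body'(_).                          % empty body: most general clause
    '$body'(X) :- b1(X), '$body'(X).     % optionally add literal b1(X)
    '$body'(X) :- b2(X), '$body'(X).     % optionally add literal b2(X)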

Implementations and Systems

Towards Typed Prolog

Prolog is traditionally not statically typed. Since the benefits of static typing are huge, it was decided to grow a portable type system inside two widely used open source Prolog systems: SWI-Prolog and Yap. This requires close cooperation and agreement between the two systems. The type system is Hindley-Milner. The main characteristics of the introduction of types in SWI and Yap are that typing is not mandatory, that typed and untyped code can be mixed, and that the type checker can insert dynamic type checks at the boundaries between typed and untyped code. The basic decisions and the current status of the Typed Prolog project are described, as well as the remaining tasks and problems to be solved.

Tom Schrijvers, Vítor Santos Costa, Jan Wielemaker, Bart Demoen
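
The flavour of the declarations is Hindley-Milner with algebraic data types; a sketch along these lines (the exact directive names are our assumption):

    :- type list(T) ---> [] ; [T | list(T)].

    :- pred append(list(T), list(T), list(T)).
    append([], Ys, Ys).
    append([X|Xs], Ys, [X|Zs]) :-
        append(Xs, Ys, Zs).
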
Environment Reuse in the WAM

The TOAM reuses eagerly allocated stack frames, while the WAM avoids allocating environments. We investigate this by using the tak/4 benchmark as an initial case study for better understanding what one can expect from environment reuse for deterministic predicates in the WAM. Additionally, artificial programs are used to amplify the findings. The experiment compares the impact of reusing an environment versus avoiding its creation: the latter appears to be the superior technique.

Bart Demoen, Phuong-Lan Nguyen
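
For reference, the tak/4 benchmark mentioned above is the classic Takeuchi function; its standard Prolog rendering is:

    tak(X, Y, Z, A) :-
        X =< Y, !,
        Z = A.
    tak(X, Y, Z, A) :-
        X1 is X - 1, tak(X1, Y, Z, A1),
        Y1 is Y - 1, tak(Y1, Z, X, A2),
        Z1 is Z - 1, tak(Z1, X, Y, A3),
        tak(A1, A2, A3, A).
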
Logic Engines as Interactors

We introduce a new programming language construct, Interactors, supporting the agent-oriented view that programming is a dialog between simple, self-contained, autonomous building blocks. We define Interactors as an abstraction of answer generation and refinement in Logic Engines, resulting in expressive language extensions and metaprogramming patterns. Interactors extend language constructs such as the multiple coroutining block returns provided through yield statements in Ruby, Python, and C#, and they can emulate the action of monadic constructs and catamorphisms in functional languages. The full version of this paper [1] describes source-level emulation of Prolog's dynamic database and various built-ins in terms of an Interactor API, together with design patterns for algorithms involving combinatorial generation and infinite answer streams.

Paul Tarau
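
A small usage sketch, assuming the engine API described in [1], where new_engine/3 creates an engine computing instances of an answer pattern and get/2 returns the(Answer) or no:

    % Retrieve the first two answers of Goal, one at a time, without
    % backtracking in the caller.
    first_two(AnswerPattern, Goal, A1, A2) :-
        new_engine(AnswerPattern, Goal, Engine),
        get(Engine, the(A1)),
        get(Engine, the(A2)).

    % ?- first_two(X, member(X, [a,b,c]), A1, A2).
    % A1 = a, A2 = b.
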
Global Storing Mechanisms for Tabled Evaluation

Arguably, the most successful data structure for tabling is the trie. However, while tries are very efficient for variant-based tabled evaluation, they are limited in their ability to recognize and represent repeated terms across different tabled calls and/or answers. In this paper, we propose a new design for the table space where tabled terms are stored in a common global trie instead of being spread over several different tries.

Jorge Costa, Ricardo Rocha
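
A canonical tabling example (ours), whose calls and answers would populate the table space discussed above:

    :- table path/2.

    path(X, Y) :- edge(X, Y).
    path(X, Y) :- edge(X, Z), path(Z, Y).
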
Thread-Based Competitive Or-Parallelism

This paper presents the logic programming concept of thread-based competitive or-parallelism, which combines the original idea of competitive or-parallelism with committed-choice nondeterminism and speculative threading. In thread-based competitive or-parallelism, an explicit disjunction of subgoals is interpreted as a set of concurrent alternatives, each running in its own thread. The subgoals compete to provide an answer, and the first successful subgoal leads to the termination of the remaining ones. We discuss the implementation of competitive or-parallelism in the context of Logtalk, an object-oriented logic programming language, and present experimental results.

Paulo Moura, Ricardo Rocha, Sara C. Madeira
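
In Logtalk, this is expressed with the threaded/1 built-in predicate applied to a disjunction; the subgoal names below are hypothetical:

    % Each alternative runs in its own thread; the first one to succeed
    % provides the answer and the remaining threads are terminated.
    solve(Puzzle) :-
        threaded((
            heuristic_a(Puzzle)
        ;   heuristic_b(Puzzle)
        ;   heuristic_c(Puzzle)
        )).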

Answer Set Programming and Extensions

A Logic Language with Stable Model Semantics for Social Reasoning

In this paper we present a new language based on logic programming that allows us to represent some forms of social reasoning. A notable feature of its semantics is that interdependent individuals' requirements may result in a kind of guessing of agreed conclusions which no individual could derive autonomously, thus capturing the common case in which a community mutually influences the reasoning of its individuals.

Francesco Buccafurri, Gianluca Caminiti, Rosario Laurendi
ASPVIZ: Declarative Visualisation and Animation Using Answer Set Programming

Answer set programming provides a powerful platform for model-based reasoning problems. The answer sets are solutions, but for many non-trivial problems post-processing is often necessary for human readability. In this paper we describe a method and a tool for visualising answer sets in which we exploit answer set programming itself to define how visualisations are constructed. An exciting potential application of our method is to assist in the debugging of answer set programs that, as a consequence of their declarative nature, are not amenable to traditional approaches: visual rendering of answer sets offers a way to help programmers spot false and missing solutions.

Owen Cliffe, Marina De Vos, Martin Brain, Julian Padget
Removing Redundancy from Answer Set Programs

In answer set programming, ground programs are used as intermediate representations of logic programs for which answer sets are computed. It is typical that such programs contain many redundant rules, which increase the length of the program unnecessarily. In this article, we address the redundancy of rules in answer set programs and, in particular, in program modules that are used as components of programs. To this end, we provide an exact semantical characterization of redundancy and present a translation-based method for detecting redundant rules. A prototype implementation, the modular optimizer (modopt), has been developed in the context of the smodels system. In the experimental part, we study the effects of modular optimization on the lengths and run times of programs.

Tomi Janhunen
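
A trivial instance of the kind of redundancy addressed (our example): the second rule can be removed without changing the answer sets of any program containing both rules.

    a :- b.
    a :- b, c.    % redundant: subsumed by the rule above
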
ASPARTIX: Implementing Argumentation Frameworks Using Answer-Set Programming

The system ASPARTIX is a tool for computing acceptable extensions for a broad range of formalizations of Dung’s argumentation framework and generalizations thereof. ASPARTIX relies on a fixed disjunctive datalog program which takes an instance of an argumentation framework as input, and uses the answer-set solver DLV for computing the type of extension specified by the user.

Uwe Egly, Sarah Alice Gaggl, Stefan Woltran
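
An argumentation framework instance is given to ASPARTIX as a set of facts; to the best of our knowledge, the input format uses arg/1 and att/2 predicates:

    % Arguments a, b, c, with a attacking b and b attacking c.
    arg(a). arg(b). arg(c).
    att(a, b). att(b, c).
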
An Implementation of Extended P-Log Using XASP

We propose a new approach to implementing P-log using XASP, the interface of XSB with Smodels. By using the tabling mechanism of XSB, our system is faster than P-log most of the time. In addition, our implementation has query features not supported by P-log, as well as new set operations for domain definition.

Han The Anh, Carroline D. P. Kencana Ramli, Carlos Viegas Damásio

Constraints, Optimizations, and Applications

Compiling and Executing Declarative Modeling Languages to Gecode

We developed a compiler from SICStus Prolog CLP(FD) to Gecode and a compiler from MiniZinc to Gecode. We compared the running times of (standard) code executed directly in the three languages with those of the compiled code, for some classical problems. The performance of the compiled code in Gecode improves on that of the original languages and is comparable with the running time of native Gecode code. This is a first step towards the definition of a unified declarative modeling tool for combinatorial problems.

Raffaele Cipriano, Agostino Dovier, Jacopo Mauro
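
As an example of the kind of standard source model involved, an N-queens model in SICStus Prolog CLP(FD) (our formulation):

    :- use_module(library(clpfd)).

    queens(N, Qs) :-
        length(Qs, N),
        domain(Qs, 1, N),
        safe(Qs),
        labeling([ff], Qs).

    safe([]).
    safe([Q|Qs]) :-
        no_attack(Q, Qs, 1),
        safe(Qs).

    % No two queens share a column or a diagonal.
    no_attack(_, [], _).
    no_attack(Q, [Q1|Qs], D) :-
        Q #\= Q1,
        Q + D #\= Q1,
        Q1 + D #\= Q,
        D1 is D + 1,
        no_attack(Q, Qs, D1).
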
Telecommunications Feature Subscription as a Partial Order Constraint Problem

This paper describes the application of a partial order constraint solver to a telecommunications feature subscription configuration problem. Partial order constraints are encoded into propositional logic and solved using a state-of-the-art Boolean satisfiability solver. The encoding is based on a symbol-based approach: symbols are viewed as variables which take integer values and are interpreted as indices in the order. Experimental evaluation indicates that partial order constraints are a viable alternative to previous solutions which apply constraint programming techniques and integer linear programming.

Michael Codish, Vitaly Lagoon, Peter J. Stuckey
A Constraint Logic Programming Approach to Automated Testing

In this paper we present a new constraint solver for the automated generation of test cases from specifications. The specification language is inspired by contract-oriented programming extended with finite state machines. Beyond the generation of correct argument values for method calls, we generate full test scenarios thanks to the symbolic animation of the specifications. We propose a flexible CSP architecture that can operate not only on integer or bounded domains but also on arbitrary types. An original notion of type builder is used to establish the link between the type semantics and the CSP framework. We illustrate this with a string builder that can automatically generate string instances depending on combinations of constraints.

Hakim Belhaouari, Frédéric Peschanski
Turing-Complete Subclasses of CHR

Interesting subclasses of CHR are still Turing-complete: CHR with only one kind of rule, with only one rule, and propositional refined CHR. This is shown by programming a simulator of Turing machines (or Minsky machines) within those subclasses. Single-headed CHR without host language and propositional abstract CHR are not Turing-complete.

Jon Sneyers
A Soft Approach to Multi-objective Optimization

Many combinatorial optimization problems require the assignment of a set of variables in such a way that an objective function is optimized. Often, the objective function involves different criteria, and it may happen that the requirements are in conflict: assignments that are good wrt. one objective may behave badly wrt. another. An optimal solution wrt. all criteria may not exist, and either the efficient frontier (the set of best incomparable solutions, all equally relevant in the absence of further information) or an approximation of it must be sought. The paper shows how the soft constraints formalism based on semirings, so far exploited for finding approximations, can also embed the computation of the efficient frontier in multi-objective optimization problems. The main result is the proof that the efficient frontier of a multi-objective problem can be obtained as the best level of consistency distilled from a suitable soft constraint problem.

Stefano Bistarelli, Fabio Gadducci, Javier Larrosa, Emma Rollon

Applications

A Multi-theory Logic Language for the World Wide Web

Despite the recent formalization of the Web in terms of the Representational State Transfer (REST) architectural style and the Resource-Oriented Architecture (ROA), current tools for Web programming generally misunderstand its design. Based on REST/ROA insights, we claim that logic languages are well suited for promoting the Web architecture and its principles. The mapping of REST/ROA abstractions onto elements of Contextual Logic Programming also permits runtime modification of resource behavior. In this paper we present Web Logic Programming, a Prolog-based language for the Web embedding REST/ROA principles, meant to be the basis of an application framework for rapid prototyping.

Giulio Piancastelli, Andrea Omicini
A Case Study in Engineering SQL Constraint Database Systems (Extended Abstract)

Practical contexts for constraint satisfaction problems (CSPs) often involve large relational databases. A recent proposal by Cadoli and Mancini, CONSQL, shows that a simple extension to SQL provides a viable basis for modeling CSPs [1]. This enables the transparent integration of CSP solvers with databases using SQL, the most widely known and popular database language, and opens the possibility of making the power of CSP technology accessible to SQL-knowledgeable users.

Sebastien Siva, James J. Lu, Hantao Zhang
Policy-Driven Negotiations and Explanations: Exploiting Logic-Programming for Trust Management, Privacy & Security

Traditional protection mechanisms rely on characterizing requesters by identity. This is adequate in a closed system with a known set of users, but it is not feasible in open environments such as the Web, where parties may get in touch without being previously known to each other. In such cases, policy-driven negotiation protocols have emerged as a possible solution for enforcing security in future web applications. In this setting, we illustrate Protune, a system for specifying and cooperatively enforcing security and privacy policies (as well as other kinds of policies). Protune relies on logic programming for representing policies and for reasoning with and about them.

Piero A. Bonatti, Juri L. De Coi, Daniel Olmedilla, Luigi Sauro

Analysis, Transformations, and Implementations

An Algorithm for Sophisticated Code Matching in Logic Programs

In this work in progress, we develop a program analysis capable of efficiently detecting duplicated functionality within a logic program or a set of logic programs. Our main motivation is to provide an analysis that makes it possible to automatically spot those locations in a program where a transformation could be applied to remove the duplication from the source code.

Wim Vanhoof, François Degrave
Trace Analysis for Predicting the Effectiveness of Partial Evaluation

The main goal of partial evaluation [1] is program specialization. Essentially, given a program and part of its input data (the so-called static data), a partial evaluator returns a new, residual program which is specialized for the given data. An appropriate residual program for executing the remaining computations (those that depend on the so-called dynamic data) is thus the output of the partial evaluator. Despite the fact that the main goal of partial evaluation is improving program efficiency (i.e., producing faster programs), there are very few approaches devoted to formally analyzing the effects of partial evaluation, either a priori (prediction) or a posteriori. Recent approaches (e.g., [2,3]) have considered experimental frameworks for estimating the best division (roughly speaking, a classification of program parameters into static or dynamic), so that the optimal choice is followed when specializing the source program.

Germán Vidal
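
The classic illustration (our example): specializing the exponentiation predicate below with respect to a static exponent N = 3 removes the recursion entirely.

    % power(X, N, R): R is X raised to the exponent N.
    power(_, 0, 1).
    power(X, N, R) :-
        N > 0,
        N1 is N - 1,
        power(X, N1, R1),
        R is X * R1.

    % A residual program for static N = 3 (after simplification):
    power3(X, R) :- R is X * X * X.
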
A Sketch of a Complete Scheme for Tabled Execution Based on Program Transformation

Tabled evaluation has proved to be an effective method to improve several aspects of goal-oriented query evaluation, including termination and complexity. "Native" implementations of tabled evaluation offer good performance, but also require significant implementation effort, affecting both the compiler and the abstract machine. Alternatively, program transformation-based implementations, such as the original continuation call (CCall) technique, offer a lower implementation burden at some efficiency cost. A limitation of the original CCall proposal is that it restricts the interleaving of tabled and non-tabled predicates and thus cannot be used for arbitrary programs. In this work we present an extension of the CCall technique that allows the execution of arbitrary tabled programs, as well as some performance results. Our approach offers a useful tradeoff that can be competitive with state-of-the-art implementations, while keeping the implementation effort relatively low.

Pablo Chico de Guzmán, Manuel Carro, Manuel V. Hermenegildo

Doctoral Consortium Presentations

Probabilistic and Concurrent Models for Security

Recent research in security and protocol verification has shown an important need for probabilistic formal concurrent models. Indeed, the use of probabilities in formal models allows one to define and check quantitative properties which are useful for many applications, such as probabilistic anonymity, failures, or information leakage. Several recent studies have shown very interesting situations for these properties [1]. Research on probabilistic models is also a topic of interest from the mathematical point of view. Topics include relations between probability and non-determinism, expressivity, process equivalence, and denotational semantics. Research domains for these topics include probabilistic process algebras, concurrent constraint programming, and domain theory. Interesting references can be found in [2,3].

Romain Beauxis
On the Hybridization of Constraint Programming and Local Search Techniques: Models and Software Tools

Resource management problems are generally modeled as Constraint Satisfaction / Optimization Problems (CSPs or COPs). The solution methods for CSPs and COPs can be split into: complete methods, which systematically explore the whole solution space; and incomplete methods, which rely on heuristics and focus on interesting areas of the solution space. Our focus is mainly on Constraint Programming (CP) and Local Search (LS) techniques. CP languages are usually based on complete methods that analyze the search space by alternating constraint propagation phases and variable assignment phases. Their main advantage is flexibility. LS methods, instead, rely on a definition of "proximity" and explore only specific areas of the search space. Their main advantage is efficiency.

Raffaele Cipriano
Development of an Automatic Testing Environment for Mercury

Testing refers to the activity of running a software component on a well-chosen set of inputs and comparing the outputs produced with the expected results in order to find errors. To make testing less repetitive and quicker, a so-called test automation framework can be used to automatically execute a (previously written) test suite without user intervention. An automatic tool runs the software component being tested once for each test input, compares the actual result with the expected result, and reports those test cases that failed during the test; a well-known example of such a tool is JUnit for Java [1]. However, the construction of test suites remains a mostly manual and thus time-consuming activity [2]. The need for adequacy criteria [3,4] renders the construction of (large) test suites complex and error-prone [5]. The objective of this work is to develop an analysis that automatically creates a set of test inputs satisfying a particular coverage criterion for a given program written in Mercury.

François Degrave
Resolving CSP with Naming Games

In constraint satisfaction problems (CSPs) we consider N variables x1, x2, ..., xN, their definition domains D1, D2, ..., DN, and a set of constraints on the values of these variables; solving the CSP means finding a particular value for the variables that satisfies the constraints. In distributed CSPs (DCSPs), as defined by Makoto Yokoo [1], the variables of the CSP are distributed among agents. These agents are able to communicate among themselves and know all the constraint predicates relevant to their variable. Through interaction, the agents find appropriate values to solve the CSP.

The naming game describes a set of problems in which a number N of agents bootstrap a commonly agreed name for an object. Each naming game is defined by an interaction protocol/algorithm. An important aspect of the naming game is the hierarchy-free agent architecture. For further references on the naming game see the work of Steels [3] and Baronchelli et al. [2].

Giorgio Gosti
Biosequence Analysis in PRISM

In this work, we consider probabilistic models that can infer biological information solely from biological sequences such as DNA. Traditionally, computational models for biological sequence analysis have been implemented in a wide variety of procedural and object-oriented programming languages [1]. Models implemented using stochastic logic programming (SLP [2,3,4]), instead, may draw upon the benefits of increased expressive power, conciseness, and compositionality. It remains, however, a big challenge to design efficient SLP models.

Ole Torp Lassen
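
A schematic PRISM fragment (our sketch, using PRISM's values/2 declarations and msw/2 probabilistic choices) for a two-state HMM emitting DNA symbols:

    values(init, [s1, s2]).          % initial state distribution
    values(tr(_), [s1, s2]).         % state transitions
    values(out(_), [a, c, g, t]).    % per-state emissions

    hmm(N, Seq) :- msw(init, S), hmm(N, S, Seq).

    hmm(0, _, []).
    hmm(N, S, [C|Cs]) :-
        N > 0,
        msw(out(S), C),              % emit a symbol in state S
        msw(tr(S), S1),              % move to the next state
        N1 is N - 1,
        hmm(N1, S1, Cs).
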
Bi-dimensional Domains for the Non-overlapping Rectangles Constraint

Given a set of rectangles and a bi-dimensional container, the non-overlapping rectangles constraint aims to achieve consistency such that all the rectangles can be placed without intersection inside the box. Thus, nonOverlapping([R1,...,Rn], Box) holds iff all rectangles are placed inside the Box and no two rectangles Ri and Rj overlap.

Fabio Parisini
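
The pairwise condition underlying the constraint can be written in clpfd as a reified disjunction (our sketch, with rectangles represented as rect(X, Y, W, H) terms):

    :- use_module(library(clpfd)).

    % Two rectangles do not overlap iff one lies entirely to the left of,
    % right of, below, or above the other.
    no_overlap(rect(X1,Y1,W1,H1), rect(X2,Y2,W2,H2)) :-
        (X1 + W1 #=< X2) #\/ (X2 + W2 #=< X1) #\/
        (Y1 + H1 #=< Y2) #\/ (Y2 + H2 #=< Y1).
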
Extracting and Reasoning about Web Data

The Web contains an enormous quantity of information, virtually supporting all types of reasoning and decision-making. Unfortunately, automated access to Web data and the tracking of updates most often turn out to be hard. Several technological, standardization, and research efforts are now under way to make, among other things, Web data easily accessible and properly manipulable by machines. Indeed, for several years now, the W3C consortium has released useful recommendations on RDF(S) [1] and OWL [2], which are used to assign semantics to Web data. In particular, making existing Web data available for machine consumption can be a challenging task. Obviously, it is impossible to re-write the whole Web using advanced semantic Web languages; hence, various new technologies, such as GRDDL [3], are currently being proposed for translating existing Web data into RDF documents. In this manner, it is now possible to transform plain HTML Web pages into RDF documents, and then apply inference engines to deduce new knowledge.

Giovanni Pirrotta
Managing Quality of Service with Soft Constraints

The term Quality of Service (QoS) denotes "something" by which a user of the service (in a very broad sense) will judge how good the service is. In this research project we mainly focus our attention on three areas related to QoS: i) Networks, ii) Web Services, and iii) Trust Management (TM).
Francesco Santini
TopLog: ILP Using a Logic Program Declarative Bias

Although Inductive Logic Programming can have other usages, such as program synthesis, it is normally used as a logic-based supervised machine learning algorithm. The usual setting for its application is: given 1) a set of background knowledge facts B and 2) a set of examples E, find a set of hypotheses H such that B, H ⊧ E. H, the induced model, is a set of Horn rules, and is thus easily comprehensible by a human.

José Carlos Almeida Santos
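
A textbook instance of this setting (our example):

    % B: background knowledge
    parent(tom, bob).
    parent(bob, ann).

    % E: a positive example
    grandparent(tom, ann).

    % H: an induced hypothesis, with B and H together entailing E
    grandparent(X, Y) :- parent(X, Z), parent(Z, Y).
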
Generalising Constraint Solving over Finite Domains

Finite domain constraint solvers are typically applied to problems with only quite small values. This is the case in many tasks for which constraint-based approaches are well suited. A well-known benchmark library for constraints, CSPLib ([1]), consists almost exclusively of such examples.

On the other hand, the need for arbitrary precision integer arithmetic is widely recognised, and many common Prolog systems provide transparent built-in support for arbitrarily large integers.

Markus Triska
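
For instance, in Prolog systems with transparent bignum support, integer arithmetic is exact at any magnitude:

    ?- X is 2^100.
    X = 1267650600228229401496703205376.
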
Detection of Security Vulnerabilities Using Guided Model Checking

Software security problems are good candidates for the application of verification techniques. Usually it is not a complex task to represent a given security-related property in a particular verification framework. For instance, in any software model checking (MC) environment [1] it is possible to state buffer overflow detection as a reachability problem. The approach works in theory and in practice, but has a major scalability drawback: the state space, which represents all possible behaviors of the system, may grow exponentially in the size of the product of the model and the property. On the other hand, MC has an important advantage: a counter-example is produced automatically when a bug is found.

Aliaksei Tsitovich
Backmatter
Metadata
Title
Logic Programming
edited by
Maria Garcia de la Banda
Enrico Pontelli
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-89982-2
Print ISBN
978-3-540-89981-5
DOI
https://doi.org/10.1007/978-3-540-89982-2
