2007 | Book

Logic Programming

23rd International Conference, ICLP 2007, Porto, Portugal, September 8-13, 2007. Proceedings

Editors: Véronica Dahl, Ilkka Niemelä

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

Table of Contents

Frontmatter

Invited Talks

Towards Overcoming the Knowledge Acquisition Bottleneck in Answer Set Prolog Applications: Embracing Natural Language Inputs

Answer Set Prolog, or AnsProlog for short, is one of the leading knowledge representation (KR) languages, with a large body of theoretical and building-block results, several implementations, and reasoning and declarative problem-solving applications. But it shares the knowledge acquisition problem of all other KR languages: most knowledge is entered manually by people, and that is a bottleneck. Recent advances in natural language processing have led to some systems that convert natural language sentences to a logical form. Although these systems are in their infancy, they suggest a direction for overcoming the above-mentioned knowledge acquisition bottleneck. In this paper we discuss some of our recent work on developing applications that process logical forms of natural language text and use the processed result, together with AnsProlog rules, to do reasoning and problem solving.

Chitta Baral, Juraj Dzifcak, Luis Tari
Preferences, Contexts and Answer Sets

Answer set programming (ASP) is a declarative programming paradigm based on logic programs under the stable model semantics and its generalization, the answer set semantics. Besides the availability of rather efficient answer set solvers, one of the major reasons for the success of ASP in recent years was the shift from a theorem-proving to a constraint-programming view: problems are represented such that stable models (respectively, answer sets), rather than theorems, correspond to solutions.
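The stable-model condition underlying this view can be made concrete with a small, self-contained sketch (illustrative only, not from the talk; the brute-force strategy and all names are assumptions): it enumerates the stable models of a ground normal program via the Gelfond-Lifschitz reduct.

```python
from itertools import chain, combinations

# A ground normal rule: head :- pos_1,...,pos_m, not neg_1,...,not neg_n.
# Represented as (head, frozenset(pos), frozenset(neg)).

def least_model(definite_rules):
    """Least Herbrand model of a definite program, by fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(rules, candidate):
    """Gelfond-Lifschitz condition: candidate equals the least model
    of the reduct of the program with respect to candidate."""
    reduct = [(h, pos) for h, pos, neg in rules if not (neg & candidate)]
    return least_model(reduct) == candidate

def stable_models(rules, atoms):
    """Enumerate all stable models by brute force over subsets of atoms."""
    subsets = chain.from_iterable(combinations(sorted(atoms), r)
                                  for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_stable(rules, set(s))]

# Example: p :- not q.   q :- not p.   (two stable models: {p} and {q})
program = [('p', frozenset(), frozenset({'q'})),
           ('q', frozenset(), frozenset({'p'}))]
print(stable_models(program, {'p', 'q'}))  # -> [{'p'}, {'q'}]
```

In the constraint-programming reading mentioned above, each of the two stable models is one solution of the encoded problem.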

It is obvious that preferences play an important role in everyday decision making, and in many AI applications. For this reason, a number of approaches combining answer set programming with explicit representations of preferences have been developed over the last years. The approaches can be roughly categorized according to the preference representation (quantitative vs. qualitative), the type of preferences they allow (static vs. dynamic), and the objects of prioritization (rules vs. atoms/formulas).

We will focus on qualitative dynamic formula preferences, give an account of existing approaches and show that by adding adequate optimization constructs one obtains interesting solutions to problems in belief merging, consistency handling, game theory and social choice.

Explicit representations of contexts also have quite a tradition in AI, going back to foundational work of John McCarthy. A context, intuitively, is a particular view of a state of affairs. Contexts can also be used as representations of beliefs of multiple agents.

We show how multi-context systems based on bridge rules, as developed by Fausto Giunchiglia and colleagues in Trento, can be extended to nonmonotonic context systems. We first discuss multi-context logic programming systems, and then generalize the ideas underlying these systems to a general framework for integrating arbitrary logics, monotonic or nonmonotonic. Techniques from answer set programming are at the heart of the framework.

We finally give a brief outlook on how the two main topics of the talk, preferences and contexts, can be combined fruitfully.

Several of the presented results were obtained in cooperation with Thomas Eiter, Ilkka Niemelä and Mirek Truszczyński.

Gerhard Brewka

Invited Tutorials

Answer Set Programming for the Semantic Web
(Tutorial)

The Semantic Web [1,2,3] aims at extending the current Web by standards and technologies that help machines understand the information on the Web, so that they can support richer discovery, data integration, navigation, and automation of tasks. Its development proceeds in layers, and the Ontology layer is the highest one that has currently reached sufficient maturity, in the form of the OWL Web Ontology Language (OWL) [4,5], which is based on Description Logics. Current efforts are focused on realizing the Rules layer, which should complement the Ontology layer and offer sophisticated representation and reasoning capabilities. This raises, in particular, the issue of interlinking rules and ontologies. Excellent surveys that classify many proposals for combining rules and ontologies are [6,7]; general issues that arise in this context are discussed, e.g., in [8,9,10]. Notably, the World Wide Web Consortium (W3C) has installed the Rule Interchange Format (RIF) Working Group in order to produce a core rule language plus extensions which together allow rules to be translated between rule languages and thus transferred between rule systems; a first working draft has been released recently.

Answer Set Programming (ASP) [11,12,13,14], also called A-Prolog [15,16,17], is a well-known declarative programming paradigm which has its roots in Logic Programming and Non-monotonic Reasoning [18]. Thanks to its many extensions [19], ASP is well suited for modeling and solving problems which involve common-sense reasoning, and has been fruitfully applied to a range of applications including data integration, configuration, diagnosis, text mining, reasoning about actions and change, etc.; see [16,17,20].

Within the context of the Semantic Web, the usage of ASP and related formalisms has been explored in different directions:

On the one hand, they have been exploited as a tool to encode reasoning tasks in Description Logics; see, e.g., [16,22,23,24,25,26,27].

On the other hand, they have been used as a basis for giving a semantics to a combination of rules and ontologies. Here, increasing levels of integration have been considered:

loose couplings, where rule and ontology predicates are separated and the interaction is via a safe semantic interface such as an inference relation, e.g. [28,29,30,31,32];

tight couplings, where rule and ontology predicates are separated and the interaction is at the level of models, e.g. [33,34,35,36,37,38,39,10,40]; and

full integration, where no distinction between rule and ontology predicates is made, e.g. [41,42,43,44].

In this tutorial, we will first briefly review ASP and ontology formalisms. We will then recall some of the issues that come up with the integration of rules and ontologies. After that, we will consider approaches to combining rules and ontologies under ASP, where particular attention will be devoted to non-monotonic description logic programs [45] and their derivatives [28,46] as a representative of loose couplings; other approaches will also be discussed. We further discuss the potential of such combinations, some applications, and finally some open issues.

Thomas Eiter
Coinductive Logic Programming and Its Applications

Coinduction has recently been introduced as a powerful technique for reasoning about unfounded sets, unbounded structures, and interactive computations. Where induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. In this paper we discuss the introduction of coinduction into logic programming. We discuss applications of coinductive logic programming to verification and model checking, lazy evaluation, concurrent logic programming and non-monotonic reasoning.
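The least-versus-greatest fixed point contrast can be illustrated in miniature (an illustrative sketch, not from the paper; the transition system and all names are assumptions): on a finite transition system, the least fixed point yields the inductively reachable states, while the greatest fixed point yields the states admitting an infinite run, the kind of unbounded behaviour coinduction targets.

```python
def fixpoint(f, start):
    """Iterate a monotone operator on finite sets until it stabilizes."""
    current = start
    while True:
        nxt = f(current)
        if nxt == current:
            return current
        current = nxt

succ = {1: {2}, 2: {3}, 3: {2}, 4: set()}   # state 4 is a dead end
states = set(succ)

# Least fixed point (inductive): states reachable from 1 in finitely many steps.
reachable = fixpoint(lambda x: {1} | {t for s in x for t in succ[s]}, set())

# Greatest fixed point (coinductive): states from which an infinite run exists,
# i.e. the largest set X such that every state in X has a successor in X.
infinite = fixpoint(lambda x: {s for s in x if succ[s] & x}, states)

print(reachable)  # -> {1, 2, 3}
print(infinite)   # -> {1, 2, 3}  (state 4 has no successor, so it drops out)
```

Note the dual starting points: induction grows from the empty set, coinduction shrinks from the full set.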

Gopal Gupta, Ajay Bansal, Richard Min, Luke Simon, Ajay Mallya
Multi-paradigm Declarative Languages

Declarative programming languages advocate a programming style expressing the properties of problems and their solutions rather than how to compute individual solutions. Depending on the underlying formalism to express such properties, one can distinguish different classes of declarative languages, like functional, logic, or constraint programming languages. This paper surveys approaches to combine these different classes into a single programming language.

Michael Hanus
Logic Programming for Knowledge Representation

This note provides background information and references for the tutorial on recent research developments in logic programming inspired by the needs of knowledge representation.

Mirosław Truszczyński

Regular Talks

Answer Set Programming

On Finitely Recursive Programs

Finitary programs are a class of logic programs admitting function symbols and hence infinite domains. In this paper we prove that a larger class of programs, called finitely recursive programs, preserves most of the good properties of finitary programs under the stable model semantics, namely: (i) finitely recursive programs enjoy a compactness property; (ii) inconsistency checking and skeptical reasoning are semidecidable; (iii) skeptical resolution is complete. Moreover, we show how to check inconsistency and answer skeptical queries using finite subsets of the ground program instantiation.

S. Baselice, P. A. Bonatti, G. Criscuolo
Minimal Logic Programs

We consider the problem of obtaining a minimal logic program strongly equivalent (under the stable model semantics) to a given arbitrary propositional theory. We propose a method consisting in the generation of the set of prime implicates of the original theory, starting from its set of countermodels (in the logic of Here-and-There), in a similar vein to the Quine-McCluskey method for minimisation of Boolean functions. As a side result, we also provide several results about fundamental rules (those that are not tautologies and do not contain redundant literals), which are combined to build the minimal programs. In particular, we characterise their form, their corresponding sets of countermodels, as well as necessary and sufficient conditions for entailment and equivalence among them.

Pedro Cabalar, David Pearce, Agustín Valverde
Generic Tableaux for Answer Set Programming

We provide a general and modular framework for describing inferences in Answer Set Programming (ASP) that aims at an easy incorporation of additional language constructs. To this end, we generalize previous work characterizing computations in ASP by means of tableau methods. We start with a very basic core fragment in which rule heads and bodies consist of atomic literals. We then gradually extend this setting by focusing on the concept of an aggregate, understood as an operation on a collection of entities. We exemplify our framework by applying it to conjunctions in rule bodies, cardinality constraints as used in smodels, and finally to disjunctions in rule heads.

Martin Gebser, Torsten Schaub
Extended ASP Tableaux and Rule Redundancy in Normal Logic Programs

We introduce an extended tableau calculus for answer set programming (ASP). The proof system is based on the ASP tableaux defined in [Gebser & Schaub, ICLP 2006], with an added extension rule. We investigate the power of Extended ASP Tableaux both theoretically and empirically. We study the relationship of Extended ASP Tableaux with the Extended Resolution proof system defined by Tseitin for clause sets, and separate Extended ASP Tableaux from ASP Tableaux by giving a polynomial-length proof of a family $\{\Pi_n\}$ of normal logic programs for which ASP Tableaux has exponential-length minimal proofs with respect to $n$. Additionally, Extended ASP Tableaux yields interesting insight into the effect of program simplification on the length of proofs in ASP. Closely related to Extended ASP Tableaux, we empirically investigate the effect of redundant rules on the efficiency of ASP solving.

Matti Järvisalo, Emilia Oikarinen

Applications

Querying and Repairing Inconsistent Databases Under Three-Valued Semantics

The problem of managing and querying inconsistent databases has been deeply investigated in the last few years. As, in the general case, the problem of consistent query answering is hard, most of the techniques proposed so far have exponential complexity. Moreover, polynomial techniques have been proposed only for restricted forms of constraints (e.g. functional dependencies) and queries.

In this paper we present a technique which allows us to compute “approximated” consistent answers in polynomial time, for general constraints and queries. The paper presents a three-valued semantics for constraint satisfaction, called deterministic, and considers three-valued databases. Thus, a repaired database is a database where atoms may be either true or undefined (whereas missing atoms are false) and which satisfies the constraints under the deterministic three-valued semantics. We show that in querying possibly inconsistent databases the answer is safe (true and false atoms in the answers are, respectively, true and false under the classical two-valued semantics) and can be computed in polynomial time. We also show that deterministic answers can be computed by rewriting constraints into logic programs.
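The paper's deterministic semantics is not reproduced here, but the underlying mechanics of a third truth value can be illustrated with Kleene's strong three-valued connectives (a generic sketch; the encoding of "undefined" as None and all function names are assumptions):

```python
# Truth values: True, False, and None (undefined), combined with Kleene's
# strong three-valued connectives.

def k_not(a):
    """Negation: undefined stays undefined."""
    return None if a is None else (not a)

def k_and(a, b):
    """Conjunction: False dominates; otherwise undefinedness propagates."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    """Disjunction: True dominates; otherwise undefinedness propagates."""
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

# An undefined atom leaves a formula undefined unless a dominant value decides it.
print(k_and(True, None))   # -> None
print(k_or(True, None))    # -> True
print(k_not(None))         # -> None
```

This is why an atom left undefined in a repair can still yield a safe, classically correct answer whenever a dominant value (a definite False in a conjunction, a definite True in a disjunction) settles the query.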

Sergio Greco, Cristian Molinaro
Logic Programming Approach to Automata-Based Decision Procedures

We propose a novel technique that maps decision problems in WS1S (weak monadic second-order logic with $n$ successors) to the problem of query evaluation of Complex-value Datalog queries. We then show how the use of advanced implementation techniques for Logic Programs, in particular the use of tabling in the XSB system, yields a considerable improvement in performance over more traditional approaches. We also explore various optimizations of the proposed technique based on variants of tabling and goal reordering. Although our primary focus is on WS1S, it is straightforward to adapt our approach to other logics with existing automata-theoretic decision procedures, for example WS2S.

Gulay Unel, David Toman
A Logic Programming Framework for Combinational Circuit Synthesis

Logic Programming languages and combinational circuit synthesis tools share a common “combinatorial search over logic formulae” background. This paper attempts to reconnect the two fields with a fresh look at Prolog encodings for the combinatorial objects involved in circuit synthesis. While benefiting from Prolog’s fast unification algorithm and built-in backtracking mechanism, the efficiency of our search algorithm is ensured by using parallel bitstring operations together with logic-variable equality propagation as a mapping mechanism from primary inputs to the leaves of candidate Leaf-DAGs implementing a combinational circuit specification. After an exhaustive expressiveness comparison of various minimal libraries, a surprising front-runner, Strict Boolean Inequality “<” together with the constant function “1”, also turns out to have small transistor-count implementations, competitive with NAND-only or NOR-only libraries. As a practical outcome, a more realistic circuit synthesizer is implemented that combines rewriting-based simplification of (<,1) circuits with exhaustive Leaf-DAG circuit search.
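The expressiveness of “<” with the constant “1” can be checked by brute force (an illustrative sketch, not the paper's synthesizer; names are assumptions): closing the truth tables of x, y, and 1 under pointwise strict inequality reaches all 16 two-variable Boolean functions, since ¬x = (x < 1) and x ∧ y = ((x < 1) < y).

```python
from itertools import product

# Truth tables over two variables, as 4-bit tuples for the inputs
# (x, y) in (0,0), (0,1), (1,0), (1,1).
X   = (0, 0, 1, 1)
Y   = (0, 1, 0, 1)
ONE = (1, 1, 1, 1)

def lt(f, g):
    """Pointwise strict Boolean inequality f < g, i.e. (not f) and g."""
    return tuple((1 - a) & b for a, b in zip(f, g))

# Close {x, y, 1} under the "<" operation until no new function appears.
funcs = {X, Y, ONE}
while True:
    new = {lt(f, g) for f, g in product(funcs, repeat=2)} - funcs
    if not new:
        break
    funcs |= new

print(len(funcs))  # -> 16: every two-variable Boolean function is expressible
```

Since negation and conjunction are derivable, the pair (<,1) is functionally complete, which is what makes it a viable minimal library.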

Paul Tarau, Brenda Luderman
Spatial-Yap: A Logic-Based Geographic Information System

Coupled deductive database systems join together logic programming systems and relational database management systems in order to combine the best of both worlds. The current state of the art of these interfaces is restricted to accessing extensional data in databases in Datalog form, disallowing access to compound terms. However, recent years have seen the evolution of relational database management systems to enable them to store and manage more complex information. Of this complex data, one of the most interesting and fastest growing kinds is spatial data. In this paper we describe the application of the MYDDAS deductive database system to the handling of spatial data, and the needed extensions, namely the ability to handle vectorial geometric attributes from database relations, the definition of spatial operators, and a visualization framework, in order to obtain a spatial deductive database system that can be used as a geographic information system. We argue that such a system can improve the state of the art of spatial data handling in all of its aspects, namely spatial data modeling, spatial querying and spatial data mining. We describe, in particular, the application of such a logic-powered geographic information system to two real-world problems.

David Vaz, Michel Ferreira, Ricardo Lopes

Constraint Logic Programming

The Correspondence Between the Logical Algorithms Language and CHR

This paper investigates the relationship between the Logical Algorithms language (LA) of Ganzinger and McAllester and Constraint Handling Rules (CHR). We present a translation scheme from LA to CHR$^{\textrm{rp}}$, CHR with rule priorities, and show that the meta-complexity theorem for LA can be applied to a subset of CHR$^{\textrm{rp}}$ via inverse translation. This result is compared with previous work. Inspired by the high-level implementation proposal of Ganzinger and McAllester, we demonstrate how LA programs can be compiled into CHR rules that interact with a scheduler written in CHR. This forms the first actual implementation of LA. Our implementation achieves the complexity required for the meta-complexity theorem to hold and can execute a subset of CHR$^{\textrm{rp}}$ with strong complexity bounds.

Leslie De Koninck, Tom Schrijvers, Bart Demoen
Observable Confluence for Constraint Handling Rules

Constraint Handling Rules (CHR) are a powerful rule-based language for specifying constraint solvers. Critical for any rule-based language is the notion of confluence, and for terminating CHR programs there is a decidable test for confluence. But many CHR programs that are confluent in practice fail this confluence test. The problem is that the states that illustrate non-confluence are not observable from the initial goals of interest. In this paper we introduce observable confluence, a more general notion of confluence which takes into account whether states are observable. We devise a test which allows us to verify observable confluence for a range of CHR programs dealing with agents, type systems, and the union-find algorithm.

Gregory J. Duck, Peter J. Stuckey, Martin Sulzmann
Graph Transformation Systems in CHR

In this paper we show it is possible to embed graph transformation systems (GTS) soundly and completely in constraint handling rules (CHR). We suggest an encoding for the graph production rules and we investigate its soundness and completeness by ensuring equivalence of rule applicability and results. We furthermore compare the notion of confluence in both systems and show how to adjust a standard CHR confluence check to work for an embedded GTS.

Frank Raiser
Multivalued Action Languages with Constraints in CLP(FD)

Action description languages, such as $\mathcal{A}$ and $\mathcal{B}$ [6], are expressive instruments introduced for formalizing planning domains and problems. The paper starts by proposing a methodology to encode an action language (with conditional effects and static causal laws), a slight variation of $\mathcal{B}$, using Constraint Logic Programming over Finite Domains. The approach is then generalized to lift the use of constraints to the level of the action language itself. A prototype implementation has been developed, and the preliminary results are presented and discussed.

Agostino Dovier, Andrea Formisano, Enrico Pontelli

Semantics

Declarative Diagnosis of Temporal Concurrent Constraint Programs

We present a framework for the declarative diagnosis of non-deterministic timed concurrent constraint programs. We present a denotational semantics based on a (continuous) immediate consequence operator, $T_{{\mathcal{D}}}$, which models the process behaviour associated with a program ${\mathcal{D}}$ given in terms of sequences of constraints. Then, we show that, given the intended specification of ${\mathcal{D}}$, it is possible to check the correctness of ${\mathcal{D}}$ by a single step of $T_{{\mathcal{D}}}$. In order to develop an effective debugging method, we approximate the denotational semantics of ${\mathcal{D}}$. We formalize this method by abstract interpretation techniques, and we derive a finitely terminating abstract diagnosis method which can be used statically. We define an abstract domain which allows us to approximate the infinite sequences by a finite ‘cut’. As a further development, we show how to use a specific linear temporal logic for automatically deriving the debugging sequences. Our debugging framework does not require the user to either provide error symptoms in advance or answer questions concerning program correctness. Our method is compositional, which may help to master the complexity of the debugging methodology.

M. Falaschi, C. Olarte, C. Palamidessi, F. Valencia
Logic Programs with Abstract Constraint Atoms: The Role of Computations

We provide new perspectives on the semantics of logic programs with constraints. To this end we introduce several notions of computation and propose to use the results of computations as answer sets of programs with constraints. We discuss the rationale behind different classes of computations and study the relationships among them and among the corresponding concepts of answer sets. The proposed semantics generalize the answer set semantics for programs with monotone, convex and/or arbitrary constraints described in the literature.

Lengning Liu, Enrico Pontelli, Tran Cao Son, Mirosław Truszczyński

Program Analysis

Resource-Oriented Deadlock Analysis

We present a method of detecting whether deadlocks may occur in concurrent logic programs. Typical deadlock analysis is “process-oriented”, being based on possible interleavings of processes. Our method is oriented towards the shared resources (communication channels, locks, et cetera) and is based on the orders in which individual resources are used by different processes. In cases where some resources are used by only a subset of all processes, the search space can be dramatically reduced. The method arises very naturally out of the concurrent logic programming paradigm. Analysis of concurrent programs has previously used “coarsification” and “partial order” methods to reduce the search space. Our approach rediscovers and also extends these techniques. Our presentation is based around a logic programming pearl which finds deadlocked computations in a program which solves the dining philosophers problem.

Lee Naish
Static Region Analysis for Mercury

Region-based memory management is a form of compile-time memory management, well-known from the functional programming world. This paper describes a static region analysis for the logic programming language Mercury. We use region points-to graphs to model the partitioning of the memory used by a program into separate regions. The algorithm starts with a region points-to analysis that determines the different regions in the program. We then compute the liveness of the regions by using an extended live variable analysis. Finally, a program transformation adds region annotations to the program for region support. These annotations generate data for a region simulator that generates reports on the memory behaviour of region-annotated programs. Our approach obtains good memory consumption for several benchmark programs; for some of them it achieves optimal memory management.

Quan Phan, Gerda Janssens
Automatic Binding-Related Error Diagnosis in Logic Programs

This paper proposes a diagnosis algorithm for locating a certain kind of error in logic programs: variable binding errors that result in abstract symptoms during compile-time checking of assertions based on abstract interpretation. The diagnoser analyzes the graph generated by the abstract interpreter, which is a provably safe approximation of the program semantics. The proposed algorithm traverses this graph to find the point where the actual error originates (a reason for the symptom), leading to the point where the error was reported (the symptom). The procedure is fully automatic, requiring no interaction with the user. A prototype diagnoser has been implemented and preliminary results are encouraging.

Paweł Pietrzak, Manuel V. Hermenegildo
User-Definable Resource Bounds Analysis for Logic Programs

We present a static analysis that infers both upper and lower bounds on the usage that a logic program makes of a set of user-definable resources. The inferred bounds will in general be functions of input data sizes. A resource in our approach is a quite general, user-defined notion which associates a basic cost function with elementary operations. The analysis then derives the related (upper- and lower-bound) resource usage functions for all predicates in the program. We also present an assertion language which is used to define both such resources and resource-related properties that the system can then check based on the results of the analysis. We have performed some preliminary experiments with some concrete resources such as execution steps, bytes sent or received by an application, number of files left open, number of accesses to a database, number of calls to a procedure, number of asserts/retracts, etc. Applications of our analysis include resource consumption verification and debugging (including for mobile code), resource control in parallel/distributed computing, and resource-oriented specialization.

Jorge Navas, Edison Mera, Pedro López-García, Manuel V. Hermenegildo
Automatic Correctness Proofs for Logic Program Transformations

The many approaches which have been proposed in the literature for proving the correctness of unfold/fold program transformations consist in associating suitable well-founded orderings with the proof trees of the atoms belonging to the least Herbrand models of the programs. In practice, these orderings are given by ‘clause measures’, that is, measures associated with the clauses of the programs to be transformed. In the unfold/fold transformation systems proposed so far, clause measures are fixed in advance, independently of the transformations to be proved correct. In this paper we propose a method for the automatic generation of clause measures which, instead, takes into account the particular program transformation at hand. During the transformation process we construct a system of linear equations and inequations whose unknowns are the clause measures to be found; the correctness of the transformation is guaranteed by the satisfiability of that system. Through some examples we show that our method is able to establish in a fully automatic way the correctness of program transformations which, by using other methods, are proved correct at the expense of fixing sophisticated clause measures.

Alberto Pettorossi, Maurizio Proietti, Valerio Senni

Special Interest Paper

Core TuLiP Logic Programming for Trust Management

We propose CoreTuLiP, the core of a trust management language based on Logic Programming. CoreTuLiP is based on a subset of moded logic programming, but enjoys the features of trust management (TM) languages such as RT; in particular, clauses are issued by different authorities and stored in a distributed manner. We present a lookup and inference algorithm which we prove to be correct and complete w.r.t. the declarative semantics. CoreTuLiP enjoys a uniform syntax and a well-established semantics, and is expressive enough to model scenarios which are hard to deal with in RT.

Marcin Czenko, Sandro Etalle

Implementation

Demand-Driven Indexing of Prolog Clauses

As logic programming applications grow in size, Prolog systems need to efficiently access larger and larger data sets and the need for any- and multi-argument indexing becomes more and more profound. Static generation of multi-argument indexing is one alternative, but applications often rely on features that are inherently dynamic which makes static techniques inapplicable or inaccurate. Another alternative is to employ dynamic schemes for flexible demand-driven indexing of Prolog clauses. We propose such schemes and discuss issues that need to be addressed for their efficient implementation in the context of WAM-based Prolog systems. We have implemented demand-driven indexing in two different Prolog systems and have been able to obtain non-negligible performance speedups: from a few percent up to orders of magnitude. Given these results, we see very little reason for Prolog systems not to incorporate some form of dynamic indexing based on actual demand. In fact, we see demand-driven indexing as only the first step towards effective runtime optimization of Prolog programs.
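The idea of building indexes on demand can be sketched in miniature (illustrative only; the paper's WAM-level schemes are far more involved, and all names here are assumptions): an index from the first-argument key of each clause to the matching clauses is built lazily, on the first call that could profit from it, so subsequent calls scan only candidate clauses instead of the whole predicate.

```python
# Toy demand-driven first-argument indexing: clauses are (key, data) pairs,
# where key stands for the principal functor of the clause's first argument.

class Predicate:
    def __init__(self, clauses):
        self.clauses = clauses      # list of (first_arg_key, clause_data)
        self.index = None           # no index until a call demands one

    def _build_index(self):
        """Group clauses by first-argument key (the demand-driven step)."""
        self.index = {}
        for clause in self.clauses:
            self.index.setdefault(clause[0], []).append(clause)

    def candidates(self, key):
        """Return only the clauses whose first argument can match `key`."""
        if self.index is None:      # first indexed call pays the build cost
            self._build_index()
        return self.index.get(key, [])

p = Predicate([('a', 'clause1'), ('b', 'clause2'), ('a', 'clause3')])
print([c[1] for c in p.candidates('a')])  # -> ['clause1', 'clause3']
print(p.candidates('c'))                  # -> [] (no scan of unrelated clauses)
```

Real systems must additionally handle unbound call arguments, multi-argument and deep indexing, and clause assertion/retraction invalidating the index, which is where the engineering discussed in the paper lies.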

Vítor Santos Costa, Konstantinos Sagonas, Ricardo Lopes
Design, Implementation, and Evaluation of a Dynamic Compilation Framework for the YAP System

We propose dynamic compilation for Prolog, in the style of Just-In-Time compilers. Our approach adapts to the actual characteristics of the target program by (i) compiling only the parts of the program that are executed frequently, and (ii) adapting to actual call patterns. This allows aggressive optimization of the parts of the program that are really executed, and better-informed heuristics to drive these optimizations. Our compiler does not need to support all features of the language, only what is deemed important to performance. Complex execution patterns, such as the ones caused by error handling, may be left to the interpreter. On the other hand, compilation is now part of the run-time, and thus incurs run-time overheads. We have implemented dynamic compilation for the YAP system. Our initial results suggest that dynamic compilation achieves very substantial performance improvements over the original interpreter, and that it can approach and even outperform state-of-the-art native-code systems. We believe that we have shown that dynamic compilation is worthwhile and fits naturally with Prolog execution.

Anderson Faustino da Silva, Vítor Santos Costa

Poster Presentations

Declarative Debugging of Missing Answers in Constraint Functional-Logic Programming

It is well known that constraint logic and functional-logic programming languages have many advantages, and there is a growing trend to develop and incorporate effective tools into this class of declarative languages. In particular, debugging tools are a practical need for diagnosing the causes of erroneous computations. Recently [1], we have presented a prototype tool for the declarative diagnosis of wrong computed answers in CFLP(${{\mathcal{D}}}$), a new generic scheme for lazy Constraint Functional-Logic Programming which can be instantiated by any constraint domain ${{\mathcal{D}}}$ given as a parameter [2]. The declarative diagnosis of missing answers is another well-known debugging problem in constraint logic programming [4]. This poster summarizes an approach to this problem in CFLP(${{\mathcal{D}}}$). From a programmer’s viewpoint, a tool for diagnosing missing answers can be used to test whether the program rules for certain functions are sufficient for computing certain expected answers.

Rafael Caballero, Mario Rodríguez Artalejo, Rafael del Vado Vírseda
Tightly Integrated Probabilistic Description Logic Programs for the Semantic Web

We present a novel approach to probabilistic description logic programs for the Semantic Web, where a tight integration of disjunctive logic programs under the answer set semantics with description logics is generalized by probabilistic uncertainty. The approach has a number of nice features. In particular, it allows for a natural probabilistic data integration and for a natural representation of ontology mappings under probabilistic uncertainty and inconsistency. It also provides a natural integration of a situation-calculus based language for reasoning about actions with both description logics and probabilistic uncertainty.

Andrea Calì, Thomas Lukasiewicz
View Updating Through Active Integrity Constraints

Current database systems are often large and complex, and it is rare for a user or an application to have full access to the entire database. More commonly, access is granted via windows onto the entire system, called views. A view, usually virtual, is defined by a query on the whole database, and at any point the content of the view is just the outcome of this query. Applications query a base relation and a view in the same way, so querying a view does not pose a serious conceptual problem. In contrast, the issue of view updating is problematic and of paramount importance: it refers to the problem of translating an update request against a view into an update request involving the base data. Over the years, a substantial amount of research has been devoted to the issues surrounding view updating, and not surprisingly a wide selection of approaches to the view update problem has evolved. See [2,3] for surveys of methods for view updating.

Luciano Caroprese, Irina Trubitsyna, Ester Zumpano
Prosper: A Framework for Extending Prolog Applications with a Web Interface

Clear separation of presentation and code-behind, declarative use of visual control elements and a supportive background framework to automate recurring tasks are fundamental to rapid web application development. This poster presents a framework that facilitates extending Prolog applications with a web front-end. The framework relies on Prolog to the greatest possible extent, supports code re-use, and integrates easily into existing web server solutions.

Levente Hunyadi
Web Sites Verification: An Abductive Logic Programming Tool

We present the CIFFWEB system, an innovative tool for the verification of web sites, relying upon abductive logic programming. The system allows the user to define rules that a web site should fulfill by using (a fragment of) the query language Xcerpt. The rules are translated into abductive logic programs with constraints and their fulfillment is checked through the CIFF abductive proof procedure.

P. Mancarella, G. Terreni, F. Toni
Visual Logic Programming Method Based on Structural Analysis and Design Technique

The convergence of logic and visual languages is recognized as a promising approach both for the automation of logic programming and for enhancing visual modeling/programming methods. The distinguishing feature of our approach is that we unite the Actor Prolog concurrent object-oriented logic language [1,2,3] with Structural Analysis and Design Technique (SADT) diagrams to address new issues in the following areas:

Functional modeling of complex systems;

Rapid prototyping of artificial intelligence applications;

Graphic user interface management.

Alexei A. Morozov
Approximating Horn Knowledge Bases in Regular Description Logics to Have PTIME Data Complexity

This work is a continuation of our previous works [4,5]. We assume that the reader is familiar with description logics (DLs). A knowledge base in a description logic is a tuple $({\mathcal{R}},{\mathcal{T}},{\mathcal{A}})$ consisting of an RBox ${\mathcal{R}}$ of assertions about roles, a TBox ${\mathcal{T}}$ of global assumptions about concepts, and an ABox ${\mathcal{A}}$ of facts about individuals (objects) and roles. The instance checking problem in a DL is to check whether a given individual a is an instance of a concept C w.r.t. a knowledge base $({\mathcal{R}},{\mathcal{T}},{\mathcal{A}})$, written as $({\mathcal{R}},{\mathcal{T}},{\mathcal{A}}) \models C(a)$. This problem is EXPTIME-hard in DLs that include the basic description logic $\mathcal{ALC}$ (with ${\mathcal{R}} = \emptyset$).

From the point of view of deductive databases, ${\mathcal{A}}$ is assumed to be much larger than ${\mathcal{R}}$ and ${\mathcal{T}}$, and it makes sense to consider the data complexity, which is measured with the query consisting of ${\mathcal{R}}$, ${\mathcal{T}}$, C, a held fixed while ${\mathcal{A}}$ varies as input data.

It is desirable to find and study fragments of DLs with PTIME data complexity. Several authors have recently introduced a number of Horn fragments of DLs with PTIME data complexity [2,1,3]. The most expressive of these fragments is Horn-$\mathcal{SHIQ}$, introduced by Hustadt et al. [3]. It assumes, however, that the constructor ∀R.C does not occur in bodies of program clauses and goals. The data complexity of the "general Horn fragment of $\mathcal{ALC}$" is coNP-hard [6], so to obtain PTIME data complexity one has to adopt some restrictions on the "general Horn fragments of DLs". The goal is to find conditions that are as unrestrictive as possible.

Linh Anh Nguyen
A Linear Transformation from Prioritized Circumscription to Disjunctive Logic Programming

The stable model semantics of disjunctive logic programs (DLPs) is based on minimal models, which assign atoms false by default. While this feature is highly useful, leading to concise problem encodings, it occasionally renders knowledge representation with disjunctive rules difficult. Reiter-style minimal diagnoses [1] provide a good example in this respect. This problem can be alleviated by the more refined control of minimization provided by parallel circumscription [2], which allows certain atoms to vary or to have fixed truth values. The scheme of prioritized circumscription [3,2] generalizes this setting with priority classes for the atoms being minimized. Our aim is to bring these enhancements of minimality to the realm of disjunctive logic programming.

We strive for a translation-based approach in which varying and fixed atoms, as well as priority classes, are effectively removed from representations by transformations. We have already addressed parallel circumscription and provided a linear and faithful but non-modular translation [4]. Here we present a similar transformation for prioritized circumscription, extend our implementation [5], and report preliminary experiments.
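The distinction between minimized, varying, and fixed atoms can be made concrete with a small brute-force sketch (illustrative only: the clause encoding and atom names are ours, not from the paper, and the translation it describes avoids enumerating interpretations altogether):

```python
from itertools import product

def models(clauses, atoms):
    """Enumerate assignments (as sets of true atoms) satisfying the clauses.
    A clause is a pair (pos, neg) and is satisfied when some atom in pos
    is true or some atom in neg is false."""
    for bits in product([False, True], repeat=len(atoms)):
        m = {a for a, b in zip(atoms, bits) if b}
        if all(pos & m or neg - m for pos, neg in clauses):
            yield m

def circumscribed(clauses, atoms, minimized, fixed):
    """Models of parallel circumscription: minimize the `minimized` atoms,
    keep `fixed` atoms fixed, and let all remaining atoms vary."""
    all_models = list(models(clauses, atoms))
    return [m for m in all_models
            if not any(m2 & fixed == m & fixed and
                       m2 & minimized < m & minimized
                       for m2 in all_models)]
```

For the theory {a ∨ b} with both atoms minimized, this yields the two minimal models {a} and {b}; minimizing only a (with b varying) leaves just {b}, illustrating how varying atoms change the outcome.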

Emilia Oikarinen, Tomi Janhunen
Representation and Execution of a Graph Grammar in Prolog

Structured diagrams (e.g., ER diagrams) consist of finitely many types of symbols interconnected according to well-defined rules. Graph grammars [CEEe95, Hab92] are a well-known formalism used to specify the syntax of structured diagrams. Many graph grammar formalisms have been defined, which differ mainly in their graph rewrite mechanism. Graph grammars are well understood in the algebraic framework, but there is little work linking them to logic programming [CMR+91, CR93, Sch93].

Girish Keshav Palshikar
On Applying Program Transformation to Implement Suspension-Based Tabling in Prolog
(Extended Abstract)

Tabling is a technique of resolution that overcomes some limitations of traditional Prolog systems in dealing with redundant sub-computations and recursion. We can distinguish two main categories of tabling mechanisms:

suspension-based tabling mechanisms and linear tabling mechanisms

. Suspension-based tabling mechanisms need to preserve the state of suspended tabled subgoals in order to ensure that all answers are correctly computed. A tabled evaluation can be seen as a sequence of sub-computations that suspend and later resume. On the other hand, linear tabling mechanisms use iterative computations of tabled subgoals to compute fix-points. The main idea of linear tabling is to maintain a single execution tree where tabled subgoals always extend the current computation without requiring suspension and resumption of sub-computations.
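The iterative fixpoint idea behind linear tabling can be sketched for the classic transitive-closure predicate path/2 (a hypothetical example rendered in Python rather than Prolog; real linear tabling engines interleave this iteration with ordinary SLD resolution):

```python
def linear_tabling_path(edges):
    """Linear-tabling-style evaluation of
         path(X,Y) :- edge(X,Y).
         path(X,Y) :- path(X,Z), edge(Z,Y).
    Answers for the tabled subgoal are recomputed iteratively until no
    new answers appear; no suspension/resumption machinery is needed."""
    table = set(edges)                       # answers from the base clause
    while True:
        new = {(x, y) for (x, z) in table
                      for (z2, y) in edges if z == z2}
        if new <= table:                     # fixpoint reached
            return table
        table |= new
```

The while loop plays the role of the repeated subgoal computation: each pass may discover new answers derived from the current table, and evaluation stops once a pass adds nothing.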

Ricardo Rocha, Cláudio Silva, Ricardo Lopes
Aggregates in Constraint Handling Rules
(Extended Abstract)

Constraint Handling Rules (CHR) [2,3,4] is a general-purpose programming language based on committed-choice, multi-headed, guarded multiset rewrite rules. As the head of each CHR rule only considers a fixed number of constraints, any form of aggregation over unbounded parts of the constraint store necessarily requires explicit encoding, using auxiliary constraints and rules.

Jon Sneyers, Peter Van Weert, Tom Schrijvers, Bart Demoen
Computing Fuzzy Answer Sets Using dlvhex

In this poster, we show how the fuzzy answer set semantics, i.e. a combination of answer set programming and fuzzy logic, can be mapped onto the semantics for hex-programs.

Davy Van Nieuwenborgh, Martine De Cock, Dirk Vermeir
The Use of a Logic Programming Language in the Animation of Z Specifications

Animation of a formal specification involves its execution, and this paper is concerned with Z specifications and their correct animation. Since Z is based on typed set theory, the logic programming language Gödel [2] was chosen as the execution language. Abstract Approximation was suggested in [1] to provide a formal framework and some proof rules for the correct animation of Z. We describe here how the correctness criteria are applied to our method of structure simulation [3].

Margaret M. West
A Stronger Notion of Equivalence for Logic Programs

Several different notions of equivalence have been proposed for logic programs with answer set semantics, most notably strong equivalence. However, strong equivalence is not preserved by certain logic program operators, such as the strong and weak forgetting operators of Zhang and Foo. We propose a stronger notion of equivalence, called T-equivalence, which is preserved by these operators. We give a syntactic definition and provide a model-theoretic characterisation of T-equivalence. We show that strong and weak forgetting do preserve T-equivalence and, using this, arrive at a model-theoretic definition of the strong and weak forgetting operators.

Ka-Shu Wong
A Register-Free Abstract Prolog Machine with Jumbo Instructions

The majority of current Prolog systems are based on the WAM, in which registers are used to pass procedure arguments and store temporary data. In this paper, we present a stack machine for Prolog, named TOAM Jr., which departs from the TOAM adopted in early versions of B-Prolog in that it employs no registers for temporary data and offers variable-size instructions for encoding unification and procedure calls. TOAM Jr. is suitable for fast bytecode interpretation: the omission of registers facilitates instruction merging, and the use of jumbo instructions results in more compact code and the execution of fewer instructions than fine-grained instructions would require. TOAM Jr. is employed in B-Prolog version 7.0. Benchmarking shows that TOAM Jr. significantly improves performance: execution speed is increased by 48% on a Windows PC and 77% on a Linux machine. Despite the overhead on standard Prolog programs caused by the adoption of a spaghetti stack to support event handling and constraint solving, B-Prolog version 7.0 compares favorably with state-of-the-art WAM-based Prolog systems.

Neng-Fa Zhou

Doctoral Consortium Presentations

Advanced Techniques for Answer Set Programming

Theoretical foundations and practical realizations of answer set solving techniques are vital research issues. The challenge lies in combining the elevated modeling capacities of answer set programming with advanced solving strategies for combinatorial search problems. My research shall contribute to bridging the gap between high-level knowledge representation and efficient low-level reasoning, in order to bring out the best of both worlds.

Martin Gebser
A Games Semantics of ASP

We present a games semantics of ${\ensuremath{\mathit AnsProlog}}$ with single-headed clauses. Using the semantics, we derive tools to assist in the verification of programs and of tools based on those programs, as well as proposing a solver that grounds on the fly.

Jonty Needham, Marina De Vos
Modular Answer Set Programming

In answer set programming (ASP) a problem is solved declaratively by writing down a logic program whose answer sets correspond to the solutions of the problem, and computing the answer sets of the program using a special-purpose search engine. The growing interest in ASP is mostly due to efficient search engines such as smodels and dlv. Consequently, a variety of interesting applications of ASP has emerged, e.g., in planning, product configuration, and computer-aided verification.

Despite the declarative nature of ASP, the development of programs resembles that of programs in conventional programming: a programmer often develops a series of programs for a particular problem, e.g., when optimizing execution time and space. This gives rise to the meta-level problem of verifying whether subsequent programs are equivalent. To solve the equivalence verification problem, a translation-based approach has been proposed and further extended [1,2,3]. The idea is to combine logic programs P and Q into logic programs EQT(P,Q) and EQT(Q,P) which have no answer sets iff P and Q are equivalent, which allows the same ASP solver to be used for the equivalence verification task as for the search for answer sets.

The translation-based method treats programs as integral entities, which limits its usefulness, e.g., if a small local change is made in a large program. The same line of thinking applies to current ASP methodology in general: ASP programs are typically seen as integral entities, and there is a lack of the mechanisms available in modern programming languages that ease program development by allowing re-use of code or breaking programs into smaller pieces.
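The property that the EQT translation tests, namely that P and Q have the same answer sets, can be checked directly by brute force for small normal programs (an illustrative Python sketch; the rule encoding is ours, and the actual method compiles the test into ASP programs rather than enumerating candidate sets):

```python
from itertools import combinations

def stable_models(program, atoms):
    """Answer sets of a normal program whose rules are triples
    (head, positive_body, negative_body).  A candidate set M is stable
    iff it equals the least model of the Gelfond-Lifschitz reduct w.r.t. M."""
    def least_model(reduct):
        m, changed = set(), True
        while changed:
            changed = False
            for head, pos in reduct:
                if pos <= m and head not in m:
                    m.add(head)
                    changed = True
        return m

    result = []
    for r in range(len(atoms) + 1):
        for cand in map(set, combinations(atoms, r)):
            # Reduct: drop rules blocked by the candidate, drop negative bodies.
            reduct = [(h, pos) for h, pos, neg in program if not (neg & cand)]
            if least_model(reduct) == cand:
                result.append(cand)
    return result

def equivalent(p, q, atoms):
    """Weak equivalence: P and Q have exactly the same answer sets."""
    key = lambda ms: sorted(map(sorted, ms))
    return key(stable_models(p, atoms)) == key(stable_models(q, atoms))
```

For instance, the programs {a :- not b.} and {a.} both have the single answer set {a} over atoms {a, b}, so the check succeeds; against the empty program it fails.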

Emilia Oikarinen
Universal Timed Concurrent Constraint Programming

In this doctoral work we aim at developing a rich timed concurrent constraint (tcc) based language with strong ties to logic. The new calculus, called Universal Timed Concurrent Constraint (utcc), increases the expressiveness of tcc languages, allowing infinite behaviour and mobility. We introduce a constructor of the form (abs x, c) P (abstraction in P) that can be viewed as the dual of the hiding operator local x P: the latter can be viewed as an existential quantification over the variable x, and the former as a universal quantification over x, executing P[t/x] for all t such that the current store entails c[t/x]. As a compelling application, we apply this calculus to verify security protocols.

Carlos Olarte, Catuscia Palamidessi, Frank Valencia
Extension and Implementation of CHR
(Research Summary)

Constraint Handling Rules (CHR) [1,4] is a powerful yet elegant committed-choice CLP language, consisting of multi-headed, guarded multiset rewrite rules. Originally designed for the implementation of constraint solvers, CHR has matured into a general-purpose language used in a wide range of application domains, including natural language processing, multi-agent systems, and type system design. Several high-performance compilers for CHR exist.

Peter Van Weert
Backmatter
Metadata
Title
Logic Programming
Editors
Véronica Dahl
Ilkka Niemelä
Copyright Year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-74610-2
Print ISBN
978-3-540-74608-9
DOI
https://doi.org/10.1007/978-3-540-74610-2
