
1994 | Book

Logic Programming and Automated Reasoning

5th International Conference, LPAR '94, Kiev, Ukraine, July 16–22, 1994, Proceedings

Edited by: Frank Pfenning

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This volume presents the proceedings of the 5th International Conference on Logic Programming and Automated Reasoning, held aboard the ship "Marshal Koshevoi" on the Dnieper near Kiev, Ukraine, in July 1994. The LPAR conferences are held annually in the former Soviet Union and aim to bring together researchers interested in LP and AR.
These proceedings contain the full versions of the 24 accepted papers, each evaluated by at least three referees to ensure a program of the highest quality. The papers cover all relevant aspects of LP and AR, ranging from theory to implementation and application.

Table of contents

Frontmatter
Generalization and reuse of tactic proofs
Abstract
A tactic proof is a tree-structured sequent proof where steps may be justified by tactic programs. We describe a prototype of a generic interactive theorem-proving system that supports the construction and manipulation of tactic proofs containing metavariables. The emphasis is on proof reuse. Examples of proof reuse are proof by analogy and reconstruction of partial proofs as part of recovering from errors in definitions or in proof strategies. Our reuse operations involve solving higher-order unification problems, and their effectiveness relies on a proof-generalization step that is done after a tactic is applied. The prototype is implemented in λProlog.
Amy Felty, Douglas Howe
Program tactics and logic tactics
Abstract
In the past, tactics have mostly been implemented as programs written in some programming language, e.g. ML. We call tactics of this kind Program Tactics. In this paper we present a first-order classical metatheory, called MT, with the following properties: (1) tactics are terms of the language of MT; we call these tactics Logic Tactics; (2) there exists a mapping between Logic Tactics and the Program Tactics implemented within the GETFOL theorem prover. Property (1) allows us to use GETFOL to prove properties of Logic Tactics and to build new ones. Property (2) can be exploited to perform a bidirectional translation between Logic Tactics and Program Tactics.
Fausto Giunchiglia, Paolo Traverso, Mechanized Reasoning Group
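To make the tactics-as-terms idea concrete, here is a minimal Prolog sketch over a toy goal-list representation; the names apply_tac/3, then/2, orelse/2 and close_trivial are our own illustrative inventions, not the syntax of MT or GETFOL.

```prolog
% Hypothetical sketch: tactics as first-class terms, interpreted over a
% list of pending goals. All names here are illustrative, not MT/GETFOL.

% apply_tac(+Tactic, +Goals, -RemainingGoals)
apply_tac(id, Gs, Gs).                        % do nothing
apply_tac(then(T1, T2), Gs0, Gs) :-           % sequential composition
    apply_tac(T1, Gs0, Gs1),
    apply_tac(T2, Gs1, Gs).
apply_tac(orelse(T1, T2), Gs0, Gs) :-         % try T1, fall back to T2
    (   apply_tac(T1, Gs0, Gs) -> true
    ;   apply_tac(T2, Gs0, Gs)
    ).
apply_tac(close_trivial, [(X = X)|Gs], Gs).   % close a goal of the form T = T

% ?- apply_tac(then(close_trivial, id), [(a = a), (b = c)], Rest).
% Rest = [(b = c)].
```

Because tactics are ordinary terms, new combinators can be built and reasoned about by the same machinery that applies them, which is the point of property (1).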
On the relation between the λμ-calculus and the syntactic theory of sequential control
Abstract
We construct a translation of the first-order λμ-calculus [15] into a subtheory of Felleisen's λc-calculus [5, 6]. This translation preserves typing and reduction. Then, by constructing the inverse translation, we show that the two calculi are actually isomorphic.
Philippe de Groote
On a proof-theoretical analysis of \(\Sigma^1_1\)-AC, \(\Sigma^1_1\)-DC and \(\Delta^1_1\)-CA
Abstract
We present a simple method of embedding \(\Delta^1_1\)-CA, \(\Sigma^1_1\)-AC and \(\Sigma^1_1\)-DC into \(RA_{\varepsilon_0}\), \((RA + PAC)_{\varepsilon_0}\) and \((RA + PDC)_{\varepsilon_0}\) respectively, where PAC is the Predicative Axiom of Choice and PDC is the Predicative Axiom of Dependent Choice. These ramified systems are known to have the proof-theoretic ordinal \(\varphi_{\varepsilon_0}(0)\), which in the case of PAC and PDC can be proved by normalization of Ramified Analysis with Hilbert's epsilon-symbol. In all cases (particularly that of \(\Delta^1_1\)-CA) our embedding is straightforward and avoids any intermediate steps (such as \(\Sigma^1_1\)-Reflection), which were always needed before.
Sergei Tupailo
Proof plans for the correction of false conjectures
Abstract
Theorem proving is the systematic derivation of a mathematical proof from a set of axioms by the use of rules of inference. We are interested in a related but far less explored problem: the analysis and correction of false conjectures, especially where that correction involves finding a collection of antecedents that, together with a set of axioms, transform non-theorems into theorems. Most failed search trees are huge, and special care must be taken to tackle the combinatorial explosion. Fortunately, the planning search space generated by proof plans, see [1], is moderately small. We have explored the possibility of using this technique in the implementation of an abduction mechanism to correct non-theorems.
Raul Monroy, Alan Bundy, Andrew Ireland
On the value of antiprenexing
Abstract
In this paper, we examine the effect of antiprenexing on proof length when resolution deduction concepts are applied. Roughly speaking, our version of antiprenexing moves ∀-quantifiers downward in the formula tree whereas ∃-quantifiers are moved upward. We show that two different Skolemization techniques result in two clause sets with rather different resolution refutations. The lower bounds on the length of the two refutations differ exponentially. Furthermore, we demonstrate that both techniques can be improved if antiprenexing is applied before Skolemization. Finally, we examine the influence of antiprenexing when extended resolution deduction concepts are used.
Uwe Egly
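As a worked illustration of the interaction between antiprenexing and Skolemization (our own example, not one from the paper): moving an ∃-quantifier upward out of the scope of a ∀-quantifier before Skolemizing replaces a Skolem function by a Skolem constant.

```latex
% Illustration (our own example): Skolemization with and without antiprenexing.
% Direct Skolemization, with y inside the scope of x:
\forall x\,\bigl(P(x) \lor \exists y\, Q(y)\bigr)
  \;\leadsto\; \forall x\,\bigl(P(x) \lor Q(f(x))\bigr)
% After moving the existential quantifier upward (antiprenexing),
% valid here because y does not occur in P(x):
\exists y\,\forall x\,\bigl(P(x) \lor Q(y)\bigr)
  \;\leadsto\; \forall x\,\bigl(P(x) \lor Q(c)\bigr)
% The Skolem term shrinks from f(x) to a constant c; smaller Skolem
% terms are what can make the resulting refutations exponentially shorter.
```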
Implementing a finite-domain CLP-language on top of Prolog: a transformational approach
Henk Vandecasteele, Danny de Schreye
RISC-CLP(CF) constraint logic programming over complex functions
Abstract
A constraint logic programming system for the domain of complex functions is described. The intended users of the language are scientists and engineers who often reason and compute with constraints over complex functions, such as functional equalities, differential equations, etc. Constraints are solved by iterating several solving methods such as Laplace transformation, non-linear equation solving, etc. A prototype has been built and is illustrated in the paper.
Hoon Hong
Logical closures
Abstract
Uniform proof procedures for hereditary Harrop formulae have been proposed as a foundation for logic programming. A non-standard approach to defining hereditary Harrop formulae is given, allowing quantification over predicate variables but distinguishing the forms of predicate quantification. The benefits of this approach include a treatment of higher-order procedures which avoids some scoping problems with languages such as λ-Prolog, and the possibility of extending the language straightforwardly with a module system such as that developed for Standard ML. To enable a style of programming found in existing logic programming languages, a form of implementation inheritance is introduced into the language. Combining this with explicit type quantification provides a form of dynamic dispatching similar to CLOS generic procedures in a statically typed language.
Dominic Duggan
Higher-order rigid E-unification
Abstract
Higher-order E-unification, i.e. the problem of finding substitutions that make two simply typed λ-terms equal modulo β- or βη-equivalence and a given equational theory, is undecidable. We propose to rigidify it, to get a resource-bounded decidable unification problem (with arbitrarily high bounds), providing a complete higher-order E-unification procedure. The techniques are inspired by Gallier's rigid E-unification and by Dougherty and Johann's use of combinatory logic to solve higher-order E-unification problems. We improve on their results by using general equational theories, and by defining optimizations such as higher-order rigid E-preunification, where flexible terms are used, gaining much efficiency, as in the non-equational case due to Huet.
Jean Goubault
Program extraction in a Logical Framework setting
Abstract
This paper demonstrates a method of extracting programs from formal deductions represented in the Edinburgh Logical Framework, using the Elf programming language. Deductive systems are given for the extraction of simple types from formulas of first-order arithmetic and of λ-calculus terms from natural deduction proofs. These systems are easily encoded in Elf, yielding an implementation of extraction that corresponds to modified realizability. Because extraction is itself implemented as a set of formal deductive systems, some of its correctness properties can be partially represented and mechanically checked in the Elf language.
Penny Anderson
Higher-Order Abstract Syntax with induction in Coq
Abstract
Three important properties of Higher-Order Abstract Syntax are the (higher-order) induction principle, which allows proofs by induction; the (higher-order) injection principle, which asserts that equal terms have equal heads and equal sons; and the extensionality principle, which asserts that functional terms which are pointwise equal are equal. Higher-order abstract syntax is implemented, for instance, in the Edinburgh Logical Framework, and the above principles are satisfied by this implementation. Although they can be proved at the meta level, they cannot be proved at the object level; furthermore, it is not so easy to know how to formulate them in a simple way at the object level. We explain here how Second-Order Abstract Syntax can be implemented in a more powerful type system (Coq) in such a way as to make available or provable (at the object level) the corresponding induction, injection and extensionality principles.
Joëlle Despeyroux, André Hirschowitz
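For concreteness, the injection and extensionality principles can be rendered informally as follows (our own notation, not the paper's Coq statements).

```latex
% Informal rendering (not the paper's Coq formulation) of two of the
% principles, for heads c, c' and functional terms M, N:
% injection: equal terms have equal heads and equal sons
c\,M_1 \cdots M_n = c'\,N_1 \cdots N_n
  \;\Longrightarrow\; c = c' \;\wedge\; M_1 = N_1 \wedge \cdots \wedge M_n = N_n
% extensionality: pointwise equal functional terms are equal
(\forall x.\; M\,x = N\,x) \;\Longrightarrow\; M = N
```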
Towards efficient calculi for resource-oriented deductive planning
Abstract
An important advantage of deductive approaches to solving planning problems is the possibility of exploiting powerful proof methods and techniques developed in the field of automated deduction to reduce the search space. The aim of this paper is to adapt such techniques to build efficient resource-oriented planning systems.
Stefan Brüning
A logic programming framework for the abductive inference of intentions in cooperative dialogues
Abstract
In this paper we propose a general logic programming framework allowing the recognition of plans and intentions behind speech acts through abductive reasoning. These inferences enable each agent to participate actively in dialogues, namely in cooperative information-seeking dialogues. In our framework the possible actions, events, states and world knowledge are represented by extended logic programs (LP with explicit negation), and the abductive inference process is modeled by a framework which is based on the Well-Founded Semantics augmented with explicit negation (WFSX) and contradiction removal semantics (CRSX) ([PAA92]). It will be shown how this framework supports abductive reasoning with Event Calculus ([Esh88]), and some classical examples in the domain of information-seeking dialogues will be shown ([Lit85, Pol86]). Finally, some open problems will be pointed out.
Paulo Quaresma, José Gabriel Lopes
Constraint logic programming in the sequent calculus
Abstract
In this paper, we develop a new logical semantics of CLP. It is shown that CLP is based on an amalgamated logic embedding the entailment relation of constraints into a fragment of intuitionistic logic. Constrained SLD resolution corresponds to a complete proof search in the amalgamated logic. The framework provides not only a logical account of the definitional semantics of CLP but also a general way to integrate constraints into various logic programming systems.
John Darlington, Yike Guo
On conditional rewrite systems with extra variables and deterministic logic programs
Abstract
We study deterministic conditional rewrite systems, i.e. conditional rewrite systems where the extra variables are not totally free but 'input bounded'. If such a system R is quasi-reductive then the rewrite relation \(\to_R\) is decidable and terminating. We develop a critical pair criterion to prove confluence if R is quasi-reductive and strongly deterministic. We apply our results to prove Horn clause programs to be uniquely terminating.
Jürgen Avenhaus, Carlos Loría-Sáenz
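To illustrate what a deterministic conditional rule with input-bounded extra variables looks like, here is the standard Fibonacci example rendered as a Horn clause program in Prolog (our own illustration, not an example taken from the paper): the extra variables Y and Z of the rule are bound by evaluating its conditions from left to right.

```prolog
% The conditional rule
%   fib(s(s(X))) -> plus(Y, Z)  <=  fib(s(X)) -> Y,  fib(X) -> Z
% has extra variables Y, Z on the right-hand side, but each is
% "input bounded": it is produced by a condition whose left-hand side
% only uses variables that are already bound. As a Horn clause program:

plus(0, N, N).
plus(s(M), N, s(K)) :- plus(M, N, K).

fib(0, 0).
fib(s(0), s(0)).
fib(s(s(X)), F) :-
    fib(s(X), Y),      % binds the extra variable Y
    fib(X, Z),         % binds the extra variable Z
    plus(Y, Z, F).

% ?- fib(s(s(s(0))), F).
% F = s(s(0)).            % i.e. fib(3) = 2
```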
A bottom-up reconstruction of the well-founded semantics for disjunctive logic programs
Abstract
In his paper [12], Ross extends the well-founded semantics for normal logic programs [16] to disjunctive logic programs. His definition is top-down and is closer to a procedural semantics than to the elegant fixpoint definition of the well-founded semantics for normal logic programs. In the present paper, we propose a declarative, bottom-up fixpoint definition of the well-founded semantics for disjunctive logic programs. Our construction of the greatest unfounded set of extended literals is similar to the construction of the greatest unfounded set for normal programs. As a consequence, the connection between the well-founded semantics for normal programs and the well-founded semantics for disjunctive programs is made clearer.
Cristian Papp
An efficient computation of the extended generalized closed world assumption by support-for-negation sets
Abstract
Closed world assumptions are one of the major approaches for non-monotonic reasoning in artificial intelligence. In [16] this formalism is applied to disjunctive logic programs, i.e. logic programs with positive disjunctive rule heads and positive atoms in the rule bodies. The disjunctive closure operator \(\mathcal{T}_P^S\) allows for the derivation of the set \(\mathcal{M}\mathcal{S}_P\) of all positive disjunctive clauses logically implied by the program P, the minimal model state. On the other hand, disjunctive clauses over negated atoms are derived in the extended generalized closed world assumption \(\mathcal{E}\mathcal{G}\mathcal{C}\mathcal{W}\mathcal{A}_P\). Such a clause is the negation of a conjunction of positive atoms. \(\mathcal{E}\mathcal{G}\mathcal{C}\mathcal{W}\mathcal{A}_P\) contains all conjunctions which are false in all minimal Herbrand models of the program.
We present efficient δ-iteration techniques for computing the closed world assumption \(\mathcal{E}\mathcal{G}\mathcal{C}\mathcal{W}\mathcal{A}_P\) based on an iterative computation of the set of all support-for-negation sets SN(C) for conjunctions C, i.e. certain sets of clauses which characterize \(\mathcal{E}\mathcal{G}\mathcal{C}\mathcal{W}\mathcal{A}_P\): \(C \in \mathcal{E}\mathcal{G}\mathcal{C}\mathcal{W}\mathcal{A}_P\) iff \(\mathcal{M}\mathcal{S}_P \vDash SN(C)\). The support-for-negation sets SN(A) for atoms A are easily derived from the minimal model state \(\mathcal{M}\mathcal{S}_P\). We will propose a bottom-up computation deriving the support-for-negation sets of longer conjunctions from shorter ones, based on an algebraic formula given by [16]: \(SN(C_1 \wedge C_2) = SN(C_1) \vee SN(C_2)\). We will present techniques for the efficient computation of these disjunctions of two clause sets and a δ-iteration approach for computing the support-for-negation sets of a sequence of growing minimal model states.
For disjunctive normal logic programs, i.e. logic programs with positive disjunctive rule heads and — possibly — negated atoms in the rule bodies, these operators form a basis for computing the generalized disjunctive well-founded semantics.
Dietmar Seipel
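A hedged sketch of the disjunction of two clause sets underlying \(SN(C_1 \wedge C_2) = SN(C_1) \vee SN(C_2)\), with clauses rendered as sorted lists of atoms; the predicate name clauseset_disjunction/3 and the list representation are our own assumptions, not the paper's notation.

```prolog
% Assumed rendering: a clause is a list of atoms, a clause set a list of
% clauses. The disjunction of two clause sets takes one clause from each
% set and forms the union of their atoms, for every pair.

% clauseset_disjunction(+Set1, +Set2, -Result)
clauseset_disjunction(S1, S2, R) :-
    findall(C, ( member(C1, S1),
                 member(C2, S2),
                 append(C1, C2, C12),
                 sort(C12, C)          % remove duplicate atoms
               ), R).

% ?- clauseset_disjunction([[a], [b, c]], [[d]], R).
% R = [[a, d], [b, c, d]].
```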
Multi-SLD resolution
Abstract
Multi-SLD resolution is a variant of SLD resolution based on a simple idea: let the allowed constraints be closed under disjunction, and provide a mechanism for collecting solutions to a goal and turning the solutions into a disjunctive constraint. This idea leads to an operational model of logic programming, called data or-parallelism, in which multiple constraint environments partially replace backtracking as the operational embodiment of disjunction. The model has a natural implementation on data-parallel computers since each disjunct of a disjunctive constraint can be handled by a single (virtual) processor. In this paper, we
  • formalize the notions of multi-SLD resolution, multi-derivation, multi-SLD tree, and environment tree;
  • prove the soundness and completeness of multi-SLD resolution; and
  • describe and justify several useful optimization techniques based on the form of constraints in a multi-derivation: the distinction between engine and multi variables, templates, and sharing of bindings in the environment tree.
Together these results provide the foundations for a new operational semantics of disjunction in logic programming.
Donald A. Smith, Timothy J. Hickey
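A minimal Prolog sketch of the collect-and-disjoin step described above, assuming a goal with at least one solution; the predicate names are our own illustrative inventions, not the authors' implementation.

```prolog
% Collect all answers to Goal and package them as one disjunctive
% constraint over the template variable X. Names are hypothetical.

% solutions_to_disjunction(+Template, +Goal, -Disjunction)
solutions_to_disjunction(X, Goal, Disj) :-
    findall(X, Goal, Values),
    values_to_disjunction(Values, X, Disj).

values_to_disjunction([V], X, (X = V)).
values_to_disjunction([V|Vs], X, (X = V ; D)) :-
    Vs = [_|_],
    values_to_disjunction(Vs, X, D).

p(1). p(2). p(3).

% ?- solutions_to_disjunction(X, p(X), D).
% D = (X = 1 ; X = 2 ; X = 3).
```

Calling D re-enumerates the answers by backtracking; in the data-parallel model each disjunct would instead live in its own (virtual-processor) constraint environment.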
On anti-links
Abstract
The concept of an anti-link is defined, and useful equivalence-preserving operations on propositional formulas based on anti-links are introduced. These operations eliminate a potentially large number of subsumed paths in a negation normal form formula. The operations have linear time complexity in the size of that part of the formula containing the anti-link.
These operations are useful for prime implicant/implicate algorithms because most of the computational effort in such algorithms is spent on subsumption checks.
Bernhard Beckert, Reiner Hähnle, Anavai Ramesh, Neil V. Murray
A generic declarative diagnoser for normal logic programs
Abstract
In this paper we develop a generic declarative diagnoser for normal logic programs that is based on tree search. The soundness and the completeness of the diagnoser are proved. The diagnoser is generic in that it can be used with different search strategies such as the bottom-up, top-down, top-down zooming and divide-and-query strategies in the literature. The user can specialise the diagnoser by choosing their own search strategy. The diagnoser also has a smaller search space than diagnosers reported in the literature. This is achieved by using the acquired part of the intended interpretation of the program to prune the search space before it is searched.
Lunjin Lu
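The tree-search flavour of declarative diagnosis can be illustrated with a minimal Shapiro-style wrong-answer diagnoser (a generic textbook sketch, not the paper's generic diagnoser or its pruning): an oracle intended/1 decides membership in the intended interpretation.

```prolog
% Generic sketch of top-down wrong-answer diagnosis: given an atom A
% that succeeded but is not intended, walk one proof of A, asking the
% oracle intended/1 about each subgoal, and return a buggy clause
% instance. Clauses must be accessible via clause/2 (dynamic here).

diagnose(A, Bug) :-
    clause(A, Body),
    call(Body),                      % re-derive one proof of A
    (   wrong_body_atom(Body, W)     % some subgoal is itself wrong:
    ->  diagnose(W, Bug)             %   descend into it
    ;   Bug = (A :- Body)            % all subgoals intended: this
    ).                               %   clause instance is buggy

wrong_body_atom((G, Gs), W) :- !,
    (   \+ intended(G) -> W = G
    ;   wrong_body_atom(Gs, W)
    ).
wrong_body_atom(true, _) :- !, fail.
wrong_body_atom(G, G) :-
    \+ intended(G).

% A buggy program and its intended interpretation:
:- dynamic even/1.
even(0).
even(s(N)) :- even(N).               % buggy: should recurse on s(s(N))

intended(even(0)).
intended(even(s(s(N)))) :- intended(even(N)).

% ?- diagnose(even(s(0)), Bug).
% Bug = (even(s(0)) :- even(0)).
```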
Goal dependent vs. goal independent analysis of logic programs
Abstract
Goal independent analysis of logic programs is commonly discussed in the context of the bottom-up approach. However, while the literature is rich in descriptions of top-down analysers and their application, practical experience with bottom-up analysis is still in a preliminary stage. Moreover, the practical use of existing top-down frameworks for goal independent analysis has not been addressed in a practical system. We illustrate the efficient use of existing goal dependent, top-down frameworks for abstract interpretation in performing goal independent analyses of logic programs much the same as those usually derived from bottom-up frameworks. We present several optimizations for this flavour of top-down analysis. The approach is fully implemented within an existing top-down framework. Several implementation tradeoffs are discussed as well as the influence of domain characteristics. An experimental evaluation including a comparison with a bottom-up analysis for the domain Prop is presented. We conclude that the technique can offer advantages with respect to standard goal dependent analyses.
M. Codish, M. García de la Banda, M. Bruynooghe, M. Hermenegildo
A kind of achievement by parts method
Abstract
How can we add new rules to a knowledge base Kb1 to obtain a new knowledge base Kb2 such that forward chaining on Kb2 with any extensional knowledge base Ekb gives all the two-valued consequence literals of Kb1 ∪ Ekb? We have shown in a previous paper that there exists such a method, which we call Achievement: if (Kb1 ∪ Ekb) ⊨ L then L ∈ FwCh(Kb2 ∪ Ekb), with Kb2 = Ach(Kb1).
Unfortunately these achievement methods have a great complexity in time and space, which depends on the size of the initial knowledge base. Thus we try to achieve knowledge bases by parts to obtain a weaker complexity: Kb = Kb1 ∪ ... ∪ Kbn, Ach(Kb) = Ach(Kb1) ∪ ... ∪ Ach(Kbn).
The aim of this paper is to give several methods to split knowledge bases in order to apply achievement by parts methods.
Ph. Mathieu, J. P. Delahaye
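A minimal sketch of propositional forward chaining in the role of FwCh (our own rendering; the achievement operator Ach itself is not shown): saturate a fact set under ground rules until a fixpoint is reached.

```prolog
% Hypothetical rendering of FwCh for ground (propositional) rules of the
% form rule(Head, BodyList): repeatedly add the head of any rule whose
% body atoms are all present, until nothing new can be derived.

% fwch(+Rules, +Facts0, -Facts)
fwch(Rules, Facts0, Facts) :-
    (   member(rule(H, B), Rules),
        forall(member(A, B), member(A, Facts0)),   % body satisfied
        \+ member(H, Facts0)                       % head is new
    ->  fwch(Rules, [H|Facts0], Facts)             % add it and iterate
    ;   Facts = Facts0                             % fixpoint reached
    ).

% ?- fwch([rule(q, [p]), rule(r, [p, q])], [p], Fs).
% Fs = [r, q, p].
```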
Projection in temporal logic programming
Abstract
We define a projection operator in the framework of temporal logic programming. Its syntax and semantics are presented and illustrated with examples. We also discuss the implementation details of the projection construct.
Zhenhua Duan, Maciej Koutny, Chris Holt
Backmatter
Metadata
Title
Logic Programming and Automated Reasoning
Edited by
Frank Pfenning
Copyright year
1994
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48573-5
Print ISBN
978-3-540-58216-8
DOI
https://doi.org/10.1007/3-540-58216-9