
1980 | Book

5th Conference on Automated Deduction, Les Arcs, France, July 8–11, 1980

Edited by: Wolfgang Bibel, Robert Kowalski

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter
Using meta-theoretic reasoning to do algebra
Abstract
We report on an experiment in interactive reasoning with FOL. The subject of the reasoning is elementary algebra. The main point of the paper is to show how the use of meta-theoretic knowledge improves the quality of the resulting proofs: in this environment, they are both easier to find and easier to understand.
Luigia Aiello, Richard W. Weyhrauch
Generating contours of integration: An application of PROLOG in symbolic computing
Abstract
One standard technique of evaluation of real definite integrals is transformation to a complex variable, and integration over a closed contour in the complex plane by means of Cauchy's Theorem. Textbook presentation of the technique tends to rely on examples, and to state no general principles governing generation of appropriate contours or applicability of the technique. This note states some general principles and a computation strategy, and outlines their implementation in a PROLOG program for automatic deduction of appropriate contours. The program complements the functions of existing systems of symbolic computing programs for integration and for manipulations in complex analysis.
Gábor Belovári, J. A. Campbell
Using meta-level inference for selective application of multiple rewrite rules in algebraic manipulation
Abstract
In this paper we describe a technique for controlling inference, meta-level inference, and a program for algebraic manipulation, PRESS, which embodies this technique. In PRESS, algebraic expressions are manipulated by a series of methods. The appropriate method is chosen by meta-level inference and itself uses meta-level reasoning to select and apply rewrite rules to the current expression.
The use of meta-level inference is shown to drastically cut down on search, lead to clear and modular programs, aid the proving of properties of the program and enable the automatic learning of both new algebraic facts and new control information.
Alan Bundy, Bob Welham
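A rough illustration of the meta-level idea, sketched in Python (not taken from PRESS itself): rewrite rules are grouped under methods, and a meta-level check on the form of the current expression decides which method, and hence which rules, may be tried. The tuple encoding and the names solve, move_addend and occurrences are purely illustrative.

    # Minimal sketch of meta-level rule selection (illustrative only, not the PRESS code).
    # Expressions are nested tuples: ('+', a, b), ('-', a, b), ('=', lhs, rhs), or atoms.

    def occurrences(x, expr):
        """Count occurrences of the unknown x in an expression tree."""
        if expr == x:
            return 1
        if isinstance(expr, tuple):
            return sum(occurrences(x, sub) for sub in expr[1:])
        return 0

    # Object-level rewrite rule: maps an equation to an equivalent one, or returns None.
    def move_addend(eq, x):
        # a + b = c  ==>  a = c - b   (applicable when x does not occur in b)
        lhs, rhs = eq[1], eq[2]
        if isinstance(lhs, tuple) and lhs[0] == '+' and occurrences(x, lhs[2]) == 0:
            return ('=', lhs[1], ('-', rhs, lhs[2]))
        return None

    RULES = {'isolate': [move_addend]}   # each method owns its own small set of rules

    def solve(eq, x):
        """Meta-level loop: choose a method from the shape of the equation, then a rule."""
        while eq[1] != x:
            method = 'isolate' if occurrences(x, eq[1]) == 1 else 'collect'  # meta-level choice
            for rule in RULES.get(method, []):
                new_eq = rule(eq, x)
                if new_eq is not None:
                    eq = new_eq
                    break
            else:
                return None  # no applicable rule for the chosen method
        return eq

    # x + 3 = 10  is rewritten to  x = 10 - 3
    print(solve(('=', ('+', 'x', 3), 10), 'x'))   # ('=', 'x', ('-', 10, 3))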
Proofs as descriptions of computation
Abstract
A proof of ∀x∃yϕ(x,y) can serve as a description of an algorithm which satisfies the specification given by ϕ. A variety of techniques from proof theory may be used to execute such a proof — that is, to take the proof and a value for x, and compute a value for y such that ϕ(x,y) holds. Proofs differ from ordinary programs in that they formalize information about algorithms beyond what is needed for the simple execution of the algorithm. This paper concerns (1) the uses of this additional information in the automatic transformation of algorithms, and in particular, in the adaptation of algorithms to special situations, and (2) efficient methods for executing and transforming proofs. A system for manipulating proofs has been implemented. Results of experiments on the specialization of a bin-packing algorithm are described.
C. A. Goad
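A toy illustration of the proofs-as-programs reading (not an example from the paper): a constructive proof of ∀x∃y. y > x over the natural numbers exhibits the witness y = x + 1, and executing the proof on a given x simply computes that witnessing term.

    def witness(x: int) -> int:
        # Realizer of the constructive proof of "for all x there exists y with y > x":
        # the proof's witnessing term is y = x + 1, so execution computes it directly.
        return x + 1

    assert witness(7) > 7   # the specification phi(x, y), here "y > x", holds for the computed y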
Program synthesis from incomplete specifications
Abstract
Starting from a recently published paper of W. Bibel, we show that it is sometimes unnecessary to specify a problem completely at first. A new method of generalization, useful for output variables, is also exhibited and discussed.
Gérard Guiho, Christian Gresse
A system for proving equivalences of recursive programs
Abstract
We present a system for proving equivalences of recursive programs based on program transformations, namely the fold/unfold method and a generalisation of this method.
Laurent Kott
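For orientation, a minimal Python sketch of the unfold step on which fold/unfold transformations rest (illustrative only, not the system described above): a definition is treated as a rewrite rule, and a call is replaced by the definition's body with the arguments substituted in.

    # Minimal sketch of the "unfold" step of fold/unfold transformations (illustrative only).
    # Terms are nested tuples ('f', arg1, ..., argn) or atoms/variables.

    def substitute(term, binding):
        """Replace variables in a term according to a binding dictionary."""
        if isinstance(term, tuple):
            return (term[0],) + tuple(substitute(t, binding) for t in term[1:])
        return binding.get(term, term)

    def unfold(term, defs):
        """Replace a call to a defined function by its body, instantiated with the call's arguments."""
        if isinstance(term, tuple) and term[0] in defs:
            params, body = defs[term[0]]
            return substitute(body, dict(zip(params, term[1:])))
        return term

    # Definition double(x) = x + x, stored as a parameter list and a body term.
    DEFS = {'double': (('x',), ('+', 'x', 'x'))}

    print(unfold(('double', ('s', 'n')), DEFS))   # ('+', ('s', 'n'), ('s', 'n'))

The fold step is the inverse: an instance of a definition's body occurring in a derived equation is replaced by the corresponding call.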
Variable elimination and chaining in a resolution-based prover for inequalities
Abstract
A modified resolution procedure is described which has been designed to prove theorems about general linear inequalities. This prover uses "variable elimination", a modified form of inequality chaining (in which chaining is allowed only on so-called "shielding terms"), and a decision procedure for proving ground inequality theorems. These techniques and others help to avoid the explicit use of certain axioms, such as the transitivity and interpolation axioms for inequalities, in order to reduce the size of the search space and to speed up proofs. Several examples are given along with results from a computer implementation. In particular, this program has proved some difficult theorems such as: the sum of two continuous functions is continuous.
W. W. Bledsoe, Larry M. Hines
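The flavour of variable elimination over linear inequalities can be conveyed by the classical Fourier-Motzkin step (a standard technique, given here only as an illustration and not necessarily the exact procedure of the paper): a variable that is bounded above and below is removed by combining every upper bound with every lower bound.

    from fractions import Fraction

    # Fourier-Motzkin elimination sketch (standard technique, illustrative only).
    # An inequality ({'x': 1, 'y': -2}, 3) means  x - 2*y <= 3.

    def eliminate(var, inequalities):
        """Eliminate `var` from a list of (coeffs, bound) pairs, each meaning sum(c*v) <= bound."""
        uppers, lowers, rest = [], [], []
        for coeffs, bound in inequalities:
            c = Fraction(coeffs.get(var, 0))
            if c > 0:
                uppers.append((coeffs, bound, c))   # yields an upper bound on var
            elif c < 0:
                lowers.append((coeffs, bound, c))   # yields a lower bound on var
            else:
                rest.append((coeffs, bound))
        for ucoeffs, ubound, uc in uppers:
            for lcoeffs, lbound, lc in lowers:
                # Adding (1/uc)*upper and (-1/lc)*lower cancels var.
                combined = {}
                for v in set(ucoeffs) | set(lcoeffs):
                    if v == var:
                        continue
                    c = Fraction(ucoeffs.get(v, 0)) / uc - Fraction(lcoeffs.get(v, 0)) / lc
                    if c != 0:
                        combined[v] = c
                rest.append((combined, Fraction(ubound) / uc - Fraction(lbound) / lc))
        return rest

    # From  x <= 4  and  y - x <= -1  (i.e. y <= x - 1), eliminating x yields  y <= 3.
    print(eliminate('x', [({'x': 1}, 4), ({'x': -1, 'y': 1}, -1)]))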
Decision procedures for some fragments of set theory
A. Ferro, E. G. Omodeo, J. T. Schwartz
Simplifying interpreted formulas
Abstract
A method is presented for converting a decision procedure for unquantified formulas in an arbitrary first-order theory to a simplifier for such formulas. Given a quantifier-free d.n.f. formula, the method produces a simplest (according to a given criterion) d.n.f. equivalent from among all formulas with atoms in the original formula. The method is predicated on techniques for minimizing purely boolean expressions in the presence of "don't-care" conditions. The don't-cares are used to capture the semantics of the interpreted literals in the formula to be simplified.
Two procedures are described: a primitive version of the method that advances the fundamental idea, and a more refined version intended for practical use. Complexity issues are discussed, as is a nontrivial example illustrating the utility of the method.
D. W. Loveland, R. E. Shostak
Specification and verification of real-time, distributed systems using the theory of constraints
Abstract
A technique for the specification and verification of real-time, distributed systems is proposed. It provides a unified representation for both internal design and externally observable behavior and an automated procedure for deriving the external behavior associated with a design. The approach is applicable to both hardware and software, and can handle systems in which timing, concurrency, indeterminacy, and ongoing behavior are important considerations.
Although the approach contains elements of switching theory and automata theory, it diverges from other models with the definition of a constraint as an incompatibility among successive states. The proposed verification technique centers around an algorithm (incorporating the resolution principle and coded in LISP) that takes as input an acceptor for a set of design constraints and generates as output an acceptor for the set of prime constraints. A description of external behavior is obtained from this acceptor by pruning out all constraints involving hidden variables. It is anticipated that from this description it will be possible to verify properties relating to consistency, equivalence, deadlock, computer security, and fault tolerance.
Frederick C. Furtek
Reasoning by plausible inference
Abstract
The PI (Plausible Inference) system has been implemented with the ability to reason in both directions, unlike previous AI systems for inexact reasoning. This is combined with truth-maintenance, dependency-directed backtracking, and time-varying contexts to permit modelling dynamic situations. Four rules of inference are employed in PI, Modus Ponens, Modus Tollens, Confirmation, and Denial. Each of these causes credibility as well as truth value to be propagated in a semantic network of assertions. Applications are foreseen in ‘guessing’ the answers to problems in formal deduction so that search trees may be pruned, and in the flexible selection of strategies for planning, as well as diagnostic or trouble-shooting models involving less-than-certain assertions.
Leonard Friedman
Logical support in a time-varying model
Abstract
Techniques using logical dependencies as a means of monitoring the justifications of beliefs in a database have proved to be a valuable tool for automatic deduction and planning. A new technique for determining logical support dynamically within existing structure eliminates the need for declaring dependencies in clause form. Procedural specialists define the behavior of logical connectives and allow domain axioms to be expressed in a declarative form. The system uses dependency relationships in the world model to determine the consequences of actions and assumptions. The use of context layers in addition to dependencies remedies certain difficulties encountered in previous modelling schemes, resulting in improved model accuracy following a state change.
Alan M. Thompson
An experiment with the Boyer-Moore theorem prover: A proof of the correctness of a simple parser of expressions
Abstract
The objective of this report is to convey the essential idea of a proof by the Boyer-Moore theorem prover of the correctness of a parser. The proof required a total of 147 functions and lemmas — all of which have been listed in the appendix of [4].
Included in the following text are a description of the original problem submitted to the theorem prover and a sketch of the resultant proof, together with a discussion of the reasons that induced us to introduce some auxiliary functions. The report also contains the computer-generated proof of one of the main lemmas: INIT.SEG. The complete proof is available from the author.
We conclude with some remarks on our experiment and comments on the use of the theorem prover.
Paul Y Gloess
An experiment with "Edinburgh LCF"
Jacek Leszczylowski
An approach to theorem proving on the basis of a typed lambda-calculus
R. P. Nederpelt
Adding dynamic paramodulation to rewrite algorithms
Abstract
A practical and effective solution to a problem resulting from the existence of critical pairs in a set of rewrite rules is presented.
We show how to modify rewrite algorithms by introducing a dynamic paramodulation of rules. This can be done in different ways. The related issues are discussed.
A concrete example is given in the case of the "depth first" rewrite algorithm.
Paul Y Gloess, Jean-Pierre H Laurent
Hyperparamodulation: A refinement of paramodulation
Abstract
A refinement of paramodulation, called hyperparamodulation, is the focus of attention in this paper. Clauses obtained by the use of this inference rule are, in effect, the result of a sequence of paramodulations into one common nucleus. Among the interesting properties of hyperparamodulation are: first, clauses are chosen from among the input and designated as nuclei or "into" clauses for paramodulation; second, terms in the nucleus are starred to restrict the domain of generalized equality substitution; third, total control is thus iteratively established over all possible targets for paramodulation during the entire run of the theorem-proving program; and fourth, application of demodulation is suspended until the hyperparamodulant is completed. In contrast to these four properties which are reminiscent of the spirit of hyper-resolution, the following differences exist: first, the nucleus and the starred terms therein, which are analogous to negative literals, are determined by the user rather than by syntax; second, nuclei are not restricted to being mixed clauses; and third, while hyper-resolution requires inferred clauses to be positive, no corresponding requirement exists for clauses inferred by hyperparamodulation.
To illustrate the value of this refinement of paramodulation, we have chosen certain conjectures which arose during the study of Robbins algebra. A Robbins algebra is a set on which the functions o and n are defined such that o is both commutative and associative and such that for all x and y the following identity
$$n(o(n(o(x,y)),n(o(x,n(y))))) = x$$
holds. One may think of o as union and n as complement. The main interest in such algebras arises from the following open question: If S is a Robbins algebra, is S necessarily a Boolean algebra? The study of this open question entailed heavy use of an automated theorem-proving program to examine various conjectures. Certain computer proofs therein were obtained only after recourse to hyperparamodulation. (These lemmas were actually obtained prior to the work reported on here by Winker and Wos using a non-standard theorem-proving approach developed by Winker.)
L. Wos, R. Overbeek, L. Henschen
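Reading o as disjunction (union) and n as negation (complement), the Robbins identity can be checked directly on the two-element Boolean algebra; the small Python script below (illustrative only) does exactly that.

    from itertools import product

    # Check the Robbins identity  n(o(n(o(x,y)), n(o(x,n(y))))) = x  on {False, True},
    # reading o as "or" and n as "not" (illustrative only).
    def o(a, b): return a or b
    def n(a):    return not a

    assert all(n(o(n(o(x, y)), n(o(x, n(y))))) == x
               for x, y in product([False, True], repeat=2))
    print("The Robbins identity holds in the two-element Boolean algebra.")

The converse direction, whether every Robbins algebra is Boolean, is the open question referred to in the abstract.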
The AFFIRM theorem prover: Proof forests and management of large proofs
Abstract
The AFFIRM theorem prover is an interactive, natural-deduction system centered around abstract data types. Since long proofs are often required to verify algorithms, we describe a model (called the "proof forest") which helps the user to visualize and manage the potentially large number of theorems and subgoals that can arise.
Roddy W. Erickson, David R. Musser
Data structures and control architecture for implementation of theorem-proving programs
Abstract
This paper presents the major design features of a new theorem-proving system currently being implemented. In it the authors describe the data structures of an existing program with which much experience has been obtained and discuss their significance for major theorem-proving algorithms such as subsumption, demodulation, resolution, and paramodulation. A new architecture for the large-scale design of theorem proving programs, which provides flexible tools for experimentation, is also presented.
Ross A. Overbeek, Ewing L. Lusk
A note on resolution: How to get rid of factoring without loosing completeness
Abstract
It is often useful to simplify the resolution inference system by eliminating factoring. Factoring, however, cannot be ignored entirely without losing completeness. This paper studies to what extent factoring is necessary to preserve it. We show that it is sufficient to factorize only one of the two parent clauses of a resolvent. We apply this basic result to a class of well-known refinements and describe, for each rule, which clause (the so-called "selected parent") needs at most to be factored.
We try to achieve the results in a transparent manner using only elementary notions of resolution theory, so that this note should be readable without detailed knowledge of resolution strategies.
Helga Noll
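A standard textbook example (not taken from the paper) shows why factoring cannot be dropped entirely: the clause set
$$\{\, P(x) \lor P(y), \;\; \neg P(u) \lor \neg P(v) \,\}$$
is unsatisfiable, yet every binary resolvent of two two-literal clauses again contains two literals, so the empty clause only becomes derivable once one of the parents has been factored to a unit clause, in line with the result above that factoring a single parent suffices.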
Abstraction mappings in mechanical theorem proving
D. A. Plaisted
Transforming matings into natural deduction proofs
Abstract
A procedure is given for transforming refutation matings into natural deduction proofs. Thus a theorem proving system which establishes the validity of a theorem by the general matings approach can apply this procedure to obtain a comprehensible proof of the theorem without further search. This illuminates the close relationship between matings and proofs, and serves as a step toward a synthesis between apparently quite different approaches to automated theorem proving.
From a refutation mating the system constructs a plan for a theorem, describing appropriate replications of quantifiers, substitutions, and matchings of atoms. Skolem functions play a useful role in refutation matings, but terms involving such functions are replaced by appropriate variables when plans are constructed. Once a plan has been constructed, the system constructs a proof outline, or fragmentary proof, on the basis of the structure of the theorem and general principles for constructing natural deduction proofs. In a proof outline certain lines (planned lines) are justified not by rules of inference, but by plans. The outline is filled in by applying transformation rules, which add additional lines to the proof, justify certain planned lines, and sometimes create new planned lines. The linkage between plans and the proof is maintained by keeping track of the ancestries of wffs and quantifiers in the proof. Using the plans and other information about the proof, the system controls the application of transformation rules. For example, ∀xA(x) is instantiated to A(t) in the proof if the plan requires that t be substituted for a variable corresponding to x. Special problems arise in constructing natural proofs of existentially complex theorems, so they are proved in alternative forms.
Peter B. Andrews
Analysis of dependencies to improve the behaviour of logic programs
Abstract
Traditionally, backtracking uses a total order over the derivation steps. On failure, it returns to the most recent state. We consider states as a set of derivation steps. For each step, we save an 'inputset' validating the derivation. This results in a partial order over the derivation steps and allows a more accurate, 'intelligent' backtracking.
Maurice Bruynooghe
Selective backtracking for logic programs
Abstract
We present a selective backtracking method for Horn clause programs, as applied to Prolog (2)(6)(11)(12), a programming language based on first-order predicate calculus (3)(4), developed at the University of Marseille (10).
This method is based on the general method expounded in (7) for backtracking intelligently in AND/OR trees. It consists, essentially, in avoiding backtracking to any goal whose alternative solutions cannot possibly prevent the repetition of the failures which caused backtracking.
This is a renewed version of an earlier report (8), which was spurred by the work of Bruynooghe (1).
In (9) we present an implementation of a selective backtracking interpreter using the methods discussed in this paper.
Luís Moniz Pereira, António Porto
Canonical forms and unification
Abstract
Fay has described in [2,3] a complete T-unification algorithm for equational theories T which possess a complete set of reductions as defined by Knuth & Bendix [12]. This algorithm relies essentially on using the narrowing process defined by Lankford [13]. In this paper, we first study the relations between narrowing and unification and we give a new version of Fay's algorithm. We then show how to eliminate many redundancies in this algorithm and give a sufficient condition for the termination of the algorithm. In the last part, we show how to extend the previous results to various kinds of canonical term rewriting systems.
Jean-Marie Hullot
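Since narrowing-based T-unification builds on ordinary syntactic unification, a minimal Robinson-style unification sketch in Python is given below (the standard ingredient only, not the narrowing algorithm itself); the term encoding and the function names are illustrative.

    # Minimal syntactic unification (Robinson-style); illustrative only.
    # Variables are capitalised strings; compound terms and constants are tuples ('f', t1, ..., tn).

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def walk(t, subst):
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    def occurs(v, t, subst):
        t = walk(t, subst)
        return t == v or (isinstance(t, tuple) and any(occurs(v, u, subst) for u in t[1:]))

    def unify(s, t):
        """Return a most general unifier of s and t as a dictionary, or None if none exists."""
        subst, stack = {}, [(s, t)]
        while stack:
            a, b = stack.pop()
            a, b = walk(a, subst), walk(b, subst)
            if a == b:
                continue
            if is_var(a):
                if occurs(a, b, subst):
                    return None          # occur check
                subst[a] = b
            elif is_var(b):
                stack.append((b, a))
            elif isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
                stack.extend(zip(a[1:], b[1:]))
            else:
                return None              # clash of function symbols
        return subst

    # f(X, g(a)) unifies with f(g(Y), g(Y)); prints a unifier binding Y to a and X to g(Y),
    # i.e. X becomes g(a) once the binding for Y is applied.
    print(unify(('f', 'X', ('g', ('a',))), ('f', ('g', 'Y'), ('g', 'Y'))))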
Deciding unique termination of permutative rewriting systems: Choose your term algebra carefully
Abstract
Some problems are considered related to unique termination of rewriting systems for classes of terms equal under some equational theory. It is shown that the approach of Peterson and Stickel [2] to such problems fails to cope with a rather simple equational theory which is very natural in the context of axiomatic specifications of abstract data types.
One can circumvent the problem by choosing a different axiomatic specification (with a different underlying term algebra) using only associative and commutative equations for which the techniques in [2] work nicely.
It is argued that we ought to try finding systematic ways of choosing the "right" term algebra for axiomatisations in order to be able to cope with the equational theory needed.
Some tools are presented to deal with the particular equational theory mentioned above, and some of the difficulties encountered in this approach are highlighted.
Hans-Josef Jeanrond
How to prove algebraic inductive hypotheses without induction
With applications to the correctness of data type implementation
Abstract
This paper proves the correctness of algebraic methods for deciding the equivalence of expressions by applying rewrite rules, and for proving inductive equational hypotheses without using induction; it also shows that the equations true in the initial algebra are just those provable by structural induction. The major results generalize, simplify and rigorize Musser's method for proving inductive hypotheses with the Knuth-Bendix algorithm; our approach uses a very general result, that (under certain conditions) an equation is true iff it is consistent. Finally, we show how these results can be extended to proving the correctness of an implementation of one data abstraction by another.
J. A. Goguen
A complete, nonredundant algorithm for reversed skolemization
Abstract
An algorithm is presented which, for an arbitrary literal containing Skolem functions, outputs a set of closed quantified literals with the following properties. For formulae a and b, define a ⊃ b iff {sk(a), dsk(b)} is unifiable, where sk denotes Skolemization and dsk the dual operation in which the roles of ∀ and ∃ are reversed. If d is an arbitrary literal and X is the output, then:
(i) Soundness: if x ∈ X then x ⊃ d;
(ii) Completeness: if a ⊃ d then ∃x ∈ X such that a ⊃ x;
(iii) Nonredundancy: if x, y ∈ X then x ⊅ y and y ⊅ x.
P. T. Cox, T. Pietrzykowski
Metadata
Title
5th Conference on Automated Deduction, Les Arcs, France, July 8–11, 1980
Edited by
Wolfgang Bibel
Robert Kowalski
Copyright year
1980
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-38140-2
Print ISBN
978-3-540-10009-6
DOI
https://doi.org/10.1007/3-540-10009-1