
About this book

This book constitutes the refereed proceedings of the 23rd International Conference on Automated Deduction, CADE-23, held in Wrocław, Poland, in July/August 2011.

The 28 revised full papers and 7 system descriptions presented were carefully reviewed and selected from 80 submissions. Furthermore, four invited lectures by distinguished experts in the area were included. Among the topics addressed are systems and tools for automated reasoning, rewriting logics, security protocol verification, unification, theorem proving, clause elimination, SAT, satisfiability, interactive theorem proving, theory reasoning, static analysis, decision procedures, etc.

Table of contents

Frontmatter

The Anatomy of Equinox – An Extensible Automated Reasoning Tool for First-Order Logic and Beyond

(Talk Abstract)

Equinox is an automated reasoning tool for first-order logic. It is also a framework for building highly targeted automated reasoning tools for specific domains.

The aim behind Equinox is to obtain an automated reasoning tool with a modular and extensible architecture. SAT modulo theories (SMT) solvers have the same aim. However, the way in which this aim is realized in Equinox is quite different from the way this is done in traditional SMT solvers.

Koen Claessen

Advances in Proving Program Termination and Liveness

Because the halting problem is undecidable, many have considered the dream of automatically proving termination (and other liveness properties) to be impossible. While not refuting Turing’s original result of undecidability, recent research now makes this dream a practical reality. I will describe this recent work and its application to industrial software and models of biological systems. I will also describe recent adaptations of these new technologies to the problem of proving temporal properties in logics such as CTL and LTL.

Byron Cook

Translating between Language and Logic: What Is Easy and What Is Difficult

Natural language interfaces make formal systems accessible in informal language. They have the potential to make systems like theorem provers more widely used by students, mathematicians, and engineers who are not experts in logic. This paper shows that simple but still useful interfaces are easy to build with available technology. They are moreover easy to adapt to different formalisms and natural languages. The language can be made reasonably nice and stylistically varied. However, a fully general translation between logic and natural language also poses difficult, even unsolvable problems. This paper investigates what can be realistically expected and what problems are hard.

Aarne Ranta

ASASP: Automated Symbolic Analysis of Security Policies

We describe asasp, a symbolic reachability procedure for the analysis of administrative access control policies. The tool represents access policies and their administrative actions as formulae of the Bernays-Schönfinkel-Ramsey class and then uses a symbolic reachability procedure to solve security analysis problems. Checks for fix-point, reduced to satisfiability problems, are mechanized by Satisfiability Modulo Theories solving and Automated Theorem Proving. asasp has been successfully applied to the analysis of benchmark problems arising in (extensions of) the Role-Based Access Control model. Our tool shows better scalability than a state-of-the-art tool on a significant set of instances of these problems.

Francesco Alberti, Alessandro Armando, Silvio Ranise
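
As a rough illustration of the encoding style (a hedged sketch with made-up predicate and role names, not taken from the paper): an administrative rule such as "a user in role Admin may add role Employee to any user already in role Staff" can be written as a transition formula over a user-assignment predicate $\mathit{ua}$ with an $\exists^*\forall^*$ prefix and no function symbols, the defining shape of the Bernays-Schönfinkel-Ramsey class: $\exists a, u\; \big(\mathit{ua}(a,\mathit{Admin}) \wedge \mathit{ua}(u,\mathit{Staff}) \wedge \forall x, r\; (\mathit{ua}'(x,r) \leftrightarrow \mathit{ua}(x,r) \vee (x \approx u \wedge r \approx \mathit{Employee}))\big)$. Reachability questions about such transitions are what the symbolic procedure iterates on.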

Backward Trace Slicing for Rewriting Logic Theories

Trace slicing is a widely used technique for execution trace analysis that is effectively used in program debugging, analysis and comprehension. In this paper, we present a backward trace slicing technique that can be used for the analysis of Rewriting Logic theories. Our trace slicing technique allows us to systematically trace back rewrite sequences modulo equational axioms (such as associativity and commutativity) by means of an algorithm that dynamically simplifies the traces by detecting control and data dependencies, and dropping useless data that do not influence the final result. Our methodology is particularly suitable for analyzing complex, textually-large system computations such as those delivered as counter-example traces by Maude model-checkers.

María Alpuente, Demis Ballis, Javier Espert, Daniel Romero

Deciding Security for Protocols with Recursive Tests

Security protocols aim at securing communications over public networks. Their design is notoriously difficult and error-prone. Formal methods have shown their usefulness for providing a careful security analysis in the case of standard authentication and confidentiality protocols. However, most current techniques do not apply to protocols that perform recursive computation, e.g., on a list of messages received from the network.

While considering general recursive input/output actions very quickly yields undecidability, we focus on protocols that perform recursive tests on received messages but output messages that depend on the inputs in a standard way. This is in particular the case of secured routing protocols, distributed right delegation or PKI certification paths. We provide NPTIME decision procedures for protocols with recursive tests and for a bounded number of sessions. We also revisit constraint system solving, providing a complete symbolic representation of the attacker knowledge.

Mathilde Arnaud, Véronique Cortier, Stéphanie Delaune

The Matita Interactive Theorem Prover

Matita is an interactive theorem prover being developed by the Helm team at the University of Bologna. Its stable version 0.5.x may be downloaded at http://matita.cs.unibo.it. The tool originated in the European project MoWGLI as a set of XML-based tools aimed to provide a mathematician-friendly web-interface to repositories of formal mathematical knowledge, supporting advanced content-based functionalities for querying, searching and browsing the library. It has since then evolved into a fully fledged ITP, specifically designed as a light-weight, but competitive system, particularly suited for the assessment of innovative ideas, both at foundational and logical level. In this paper, we give an account of the whole system, its peculiarities and its main applications.

Andrea Asperti, Wilmer Ricciotti, Claudio Sacerdoti Coen, Enrico Tassi

Unification in the Description Logic $\mathcal{EL}$ without the Top Concept

Unification in Description Logics has been proposed as a novel inference service that can, for example, be used to detect redundancies in ontologies. The inexpressive Description Logic $\mathcal{EL}$ is of particular interest in this context since, on the one hand, several large biomedical ontologies are defined using $\mathcal{EL}$. On the other hand, unification in $\mathcal{EL}$ has recently been shown to be NP-complete, and thus of considerably lower complexity than unification in other DLs of similarly restricted expressive power. However, $\mathcal{EL}$ allows the use of the top concept (⊤), which represents the whole interpretation domain, whereas the large medical ontology SNOMED CT makes no use of this feature. Surprisingly, removing the top concept from $\mathcal{EL}$ makes the unification problem considerably harder. More precisely, we will show in this paper that unification in $\mathcal{EL}$ without the top concept is PSpace-complete.

Franz Baader, Nguyen Thanh Binh, Stefan Borgwardt, Barbara Morawska
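
For orientation, a small $\mathcal{EL}$ unification problem (an illustrative example, not taken from the paper) looks as follows. With concept names $A$, $B$, a role $r$, and a concept variable $X$, the concepts $C = A \sqcap X$ and $D = A \sqcap \exists r.B$ are unifiable: the substitution $\sigma = \{X \mapsto A \sqcap \exists r.B\}$ gives $\sigma(C) = A \sqcap A \sqcap \exists r.B \equiv A \sqcap \exists r.B = \sigma(D)$, since conjunction is idempotent. The paper's question is the complexity of deciding whether such a $\sigma$ exists when neither the concepts nor the unifier may use ⊤.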

Model Evolution with Equality Modulo Built-in Theories

Many applications of automated deduction require reasoning modulo background theories, in particular some form of integer arithmetic. Developing corresponding automated reasoning systems that are also able to deal with quantified formulas has recently been an active area of research. We contribute to this line of research and propose a novel instantiation-based method for a large fragment of first-order logic with equality modulo a given complete background theory, such as linear integer arithmetic. The new calculus is an extension of the Model Evolution Calculus with Equality, a first-order logic version of the propositional DPLL procedure, including its ordering-based redundancy criteria. We present a basic version of the calculus and prove it sound and (refutationally) complete under certain conditions.

Peter Baumgartner, Cesare Tinelli

Blocked Clause Elimination for QBF

Quantified Boolean formulas (QBF) provide a powerful framework for encoding problems from various application domains, not least because efficient QBF solvers are available. Despite sophisticated evaluation techniques, the performance of such a solver usually depends on the way a problem is represented. However, the translation to processable QBF encodings is in general not unique and may either introduce variables and clauses not relevant for the solving process or blur information which could be beneficial for the solving process. To deal with both of these issues, preprocessors have been introduced which rewrite a given QBF before it is passed to a solver.

In this paper, we present novel preprocessing methods for QBF based on blocked clause elimination (BCE), a technique successfully applied in SAT. Quantified blocked clause elimination (QBCE) makes it possible to simulate various structural preprocessing techniques, as BCE does in SAT. We have implemented QBCE and extensions of QBCE in the preprocessor bloqqer. In our experiments we show that preprocessing with QBCE reduces formulas substantially and allows us to solve considerably more instances than the previous state of the art.

Armin Biere, Florian Lonsing, Martina Seidl
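
To recall the SAT-level notion that QBCE lifts to QBF, the following minimal Python sketch removes blocked clauses from a propositional CNF. It only illustrates the definition (a clause is blocked if it contains a literal all of whose resolvents are tautologies) and is not the algorithm implemented in bloqqer, which additionally has to respect the quantifier prefix.

    # Propositional blocked-clause elimination (BCE), the SAT-level technique
    # that QBCE generalises to QBF; an illustrative sketch only.
    # Clauses are frozensets of integer literals (negative int = negated variable).

    def is_tautology(clause):
        return any(-lit in clause for lit in clause)

    def is_blocked(clause, formula):
        """A clause is blocked if some literal l in it makes every resolvent
        with clauses containing -l a tautology."""
        for lit in clause:
            partners = [d for d in formula if -lit in d]
            if all(is_tautology((clause - {lit}) | (d - {-lit})) for d in partners):
                return True
        return False

    def eliminate_blocked_clauses(formula):
        """Repeatedly remove blocked clauses; this preserves satisfiability."""
        formula = set(formula)
        changed = True
        while changed:
            changed = False
            for c in list(formula):
                if is_blocked(c, formula - {c}):
                    formula.remove(c)
                    changed = True
        return formula

    # Example: (a or b) and (not a or c) and (b or not c) reduces to the empty set.
    cnf = {frozenset({1, 2}), frozenset({-1, 3}), frozenset({2, -3})}
    print(eliminate_blocked_clauses(cnf))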

Extending Sledgehammer with SMT Solvers

Sledgehammer is a component of Isabelle/HOL that employs first-order automatic theorem provers (ATPs) to discharge goals arising in interactive proofs. It heuristically selects relevant facts and, if an ATP is successful, produces a snippet that replays the proof in Isabelle. We extended Sledgehammer to invoke satisfiability modulo theories (SMT) solvers as well, exploiting its relevance filter and parallel architecture. Isabelle users are now pleasantly surprised by SMT proofs for problems beyond the ATPs’ reach. Remarkably, the best SMT solver performs better than the best ATP on most of our benchmarks.

Jasmin Christian Blanchette, Sascha Böhme, Lawrence C. Paulson

Automated Cyclic Entailment Proofs in Separation Logic

We present a general automated proof procedure, based upon cyclic proof, for inductive entailments in separation logic. Our procedure has been implemented via a deep embedding of cyclic proofs in the HOL Light theorem prover. Experiments show that our mechanism is able to prove a number of non-trivial entailments involving inductive predicates.

James Brotherston, Dino Distefano, Rasmus Lerchedahl Petersen

Reducing Higher-Order Theorem Proving to a Sequence of SAT Problems

We describe a complete theorem proving procedure for higher-order logic that uses SAT-solving to do much of the heavy lifting. The theoretical basis for the procedure is a complete, cut-free, ground refutation calculus that incorporates a restriction on instantiations. The refined nature of the calculus makes it conceivable that one can search in the ground calculus itself, obtaining a complete procedure without resorting to meta-variables and a higher-order lifting lemma. Once one commits to searching in a ground calculus, a natural next step is to consider ground formulas as propositional literals and the rules of the calculus as propositional clauses relating the literals. With this view in mind, we describe a theorem proving procedure that primarily generates relevant formulas along with their corresponding propositional clauses. The procedure terminates when the set of propositional clauses is unsatisfiable. We prove soundness and completeness of the procedure. The procedure has been implemented in a new higher-order theorem prover, Satallax, which makes use of the SAT-solver MiniSat. We also describe the implementation and give some experimental results.

Chad E. Brown

Experimenting with Deduction Modulo

Deduction modulo is a generic framework to describe proofs in a theory better than using raw axioms. This is done by presenting the theory through rules rewriting terms and propositions. In CSL 2010, LNCS 6247, p.155–169, we gave theoretical justifications why it is possible to embed a proof search method based on deduction modulo, namely Ordered Polarized Resolution Modulo, into an existing prover. Here, we describe the implementation of these ideas, starting from iProver. We test it by confronting Ordered Polarized Resolution Modulo and other proof-search calculi, using benchmarks extracted from the TPTP Library. For the integration of rewriting, we also compare several implementation techniques, based for instance on discrimination trees or on compilation. These results reveal that deduction modulo is a promising approach to handle proof search in theories in a generic but efficient way.

Guillaume Burel

Heaps and Data Structures: A Challenge for Automated Provers

Software verification is one of the most prominent application areas for automatic reasoning systems, but their potential improvement is limited by shortage of good benchmarks. Current benchmarks are usually large but shallow, require decision procedures, or have soundness problems. In contrast, we propose a family of benchmarks in first-order logic with equality which is scalable, relatively simple to understand, yet closely resembles difficult verification conditions stemming from real-world C code. Based on this benchmark, we present a detailed comparison of different heap encodings using a number of SMT solvers and ATPs. Our results led to a performance gain of an order of magnitude for the C code verifier VCC.

Sascha Böhme, Michał Moskal
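
As background on what a heap encoding amounts to (a generic sketch, not the specific encodings compared in the paper), a mutable heap can be modelled by an uninterpreted function with McCarthy-style read-over-write axioms: $\forall h, p, v.\; \mathit{read}(\mathit{write}(h,p,v),p) \approx v$ and $\forall h, p, q, v.\; p \not\approx q \rightarrow \mathit{read}(\mathit{write}(h,p,v),q) \approx \mathit{read}(h,q)$. Encodings differ, for instance, in whether there is one such heap per field or a single global heap, and the benchmark family measures how provers cope with verification conditions built from axioms of this kind.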

Optimized Query Rewriting for OWL 2 QL

The OWL 2 QL profile has been designed to facilitate query answering via query rewriting. This paper presents an optimized query rewriting algorithm which takes advantage of the special characteristics of the query rewriting problem via first-order resolution in OWL 2 QL and computes efficiently the rewriting set of a user query, by avoiding blind and unnecessary inferences, as well as by reducing the need for extended subsumption checks. The evaluation shows that in several cases the algorithm achieves a significant improvement and better practical scalability if compared to other similar approaches.

Alexandros Chortaras, Despoina Trivela, Giorgos Stamou
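
To make the rewriting task concrete (a textbook-style example, not drawn from the paper): given the OWL 2 QL axiom $\mathit{Professor} \sqsubseteq \mathit{Teacher}$ and the user query $q(x) \leftarrow \mathit{Teacher}(x)$, rewriting adds $q(x) \leftarrow \mathit{Professor}(x)$, and evaluating the union of the two queries over the data alone yields the certain answers. The blow-up comes from applying many such axioms in all combinations across the query atoms; the optimizations described above aim at avoiding the useless resolution inferences and subsumption checks that a naive expansion performs.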

Sort It Out with Monotonicity

Translating between Many-Sorted and Unsorted First-Order Logic

We present a novel analysis for sorted logic, which determines if a given sort is monotone. The domain of a monotone sort can always be extended with an extra element. We use this analysis to significantly improve well-known translations between unsorted and many-sorted logic, making use of the fact that it is cheaper to translate monotone sorts than non-monotone sorts. Many interesting problems are more naturally expressed in many-sorted first-order logic than in unsorted logic, but most existing highly-efficient automated theorem provers solve problems only in unsorted logic. Conversely, some reasoning tools, for example model finders, can make good use of sort information in a problem, but most problems today are formulated in unsorted logic. This situation motivates translations in both directions between many-sorted and unsorted problems. We present the monotonicity analysis and its implementation in our tool Monotonox, and also show experimental results on the TPTP benchmark library.

Koen Claessen, Ann Lillieström, Nicholas Smallbone
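
For background, the classical translation that the monotonicity analysis improves (shown here only in its simplest form; the precise variants are in the paper) relativizes quantifiers with sort predicates: a many-sorted axiom $\forall x{:}\mathit{list}.\; P(x)$ becomes the unsorted axiom $\forall x.\; \mathit{isList}(x) \rightarrow P(x)$, together with axioms stating that $\mathit{isList}$ is non-empty and closed under the function symbols of that sort. If the analysis proves a sort monotone, so that its domain can always be extended with an extra element, the guard for that sort can be dropped, which is what makes translating monotone sorts cheaper.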

Exploiting Symmetry in SMT Problems

Methods exploiting problem symmetries have been very successful in several areas including constraint programming and SAT solving. We here recast a technique to enhance the performance of SMT solvers by detecting symmetries in the input formulas and using them to prune the search space of the SMT algorithm. This technique is based on the concept of (syntactic) invariance by permutation of constants. An algorithm for solving SMT by taking advantage of such symmetries is presented. The implementation of this algorithm in the SMT solver veriT is used to illustrate the practical benefits of this approach. It results in a significant improvement of veriT's performance on the SMT-LIB benchmarks that places it ahead of the winners of the last editions of the SMT-COMP contest in the QF_UF category.

David Déharbe, Pascal Fontaine, Stephan Merz, Bruno Woltzenlogel Paleo

Compression of Propositional Resolution Proofs via Partial Regularization

This paper describes two algorithms for the compression of propositional resolution proofs. The first algorithm, RecyclePivotsWithIntersection, performs partial regularization, removing an inference η when it is redundant in the sense that its pivot literal already occurs as the pivot of another inference located below in the path from η to the root of the proof. The second algorithm, LowerUnits, delays the resolution of (both input and derived) unit clauses, thus removing (some) inferences having the same pivot but possibly occurring also in different branches of the proof.

Pascal Fontaine, Stephan Merz, Bruno Woltzenlogel Paleo

Dynamic Behavior Matching: A Complexity Analysis and New Approximation Algorithms

A number of advances in software security over the past decade have foundations in the behavior matching problem: given a specification of software behavior and a concrete execution trace, determine whether the behavior is exhibited by the execution trace. Despite the importance of this problem, precise descriptions of algorithms for its solution, and rigorous analyses of their complexity, are missing in the literature. In this paper, we formalize the notion of behavior matching used by the software security community, study the complexity of the problem, and give several algorithms for its solution, both exact and approximate. We find that the problem is in general not efficiently solvable, i.e. behavior matching is NP-complete. We demonstrate empirically that our approximation algorithms can be used to efficiently find accurate solutions to real instances.

Matthew Fredrikson, Mihai Christodorescu, Somesh Jha

A Connection-Based Characterization of Bi-intuitionistic Validity

We give a connection-based characterization of validity in propositional bi-intuitionistic logic in terms of specific directed graphs called R-graphs. Such a characterization is well-suited for deriving labelled proof-systems with counter-model construction facilities. We first define the notion of bi-intuitionistic R-graph from which we then obtain a connection-based characterization of propositional bi-intuitionistic validity and derive a sound and complete free-variable labelled sequent calculus that admits cut-elimination and also variable splitting.

Didier Galmiche, Daniel Méry

Automated Reasoning in $\mathcal{ALCQ}$ via SMT

Reasoning techniques for qualified number restrictions (QNRs) in Description Logics (DLs) have been investigated in the past but they mostly do not make use of the arithmetic knowledge implied by QNRs. In this paper we propose and investigate a novel approach for concept satisfiability in acyclic $\mathcal{ALCQ}$ ontologies. It is based on the idea of encoding an $\mathcal{ALCQ}$ ontology into a formula in Satisfiability Modulo the Theory of Costs (SMT$(\mathcal{C})$), which is a specific and computationally much cheaper subcase of Linear Arithmetic over the Integers, and of exploiting the power of modern SMT solvers to compute every concept-satisfiability query on a given ontology. We implemented and tested our approach, which includes a very effective individuals-partitioning technique, on a wide set of synthesized benchmark formulas, comparing the approach with the main state-of-the-art DL reasoners available. Our empirical evaluation confirms the potential of the approach.

Volker Haarslev, Roberto Sebastiani, Michele Vescovi

Sine Qua Non for Large Theory Reasoning

One possible way to deal with large theories is to have a good selection method for relevant axioms. This is confirmed by the fact that the largest available first-order knowledge base (the Open CYC) contains over 3 million axioms, while answering queries to it usually requires not more than a few dozen axioms. A method for axiom selection has been proposed by the first author in the Sumo INference Engine (SInE) system. SInE won the large theory division of CASC in 2008. The method turned out to be so successful that for the next two years it was used by the winner as well as by several other competing systems. This paper presents the method and describes experiments with it in the theorem prover Vampire.

Kryštof Hoder, Andrei Voronkov
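
The core idea can be sketched in a few lines of Python (a simplified illustration of trigger-based selection; the parameter names and the representation of axioms as symbol sets are ours, not Vampire's implementation):

    # A simplified sketch of SInE-style axiom selection: symbols "trigger" the
    # axioms in which they are (close to) the rarest symbol, and selection
    # starts from the symbols of the goal.
    from collections import Counter

    def select_axioms(axioms, goal_symbols, tolerance=1.5, depth=3):
        """axioms: dict mapping axiom name -> set of symbols occurring in it."""
        occ = Counter(s for syms in axioms.values() for s in syms)

        def triggered_by(symbol):
            # An axiom is triggered by a symbol if the symbol occurs in it and is
            # rare enough compared with the rarest symbol of that axiom.
            return {name for name, syms in axioms.items()
                    if symbol in syms
                    and occ[symbol] <= tolerance * min(occ[t] for t in syms)}

        selected, frontier = set(), set(goal_symbols)
        for _ in range(depth):
            new_axioms = set().union(*(triggered_by(s) for s in frontier)) - selected
            if not new_axioms:
                break
            selected |= new_axioms
            frontier = set().union(*(axioms[a] for a in new_axioms))
        return selected

    # Tiny example: the goal mentions 'subclass'; axioms about unrelated
    # symbols are never triggered and stay unselected.
    axioms = {
        "ax1": {"subclass", "instance"},
        "ax2": {"instance", "member"},
        "ax3": {"capital", "country"},
    }
    print(select_axioms(axioms, goal_symbols={"subclass"}))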

Predicate Completion for non-Horn Clause Sets

The standard semantics of a logical program described by a set of predicative Horn clauses is minimal model semantics. To reason about negation in this context, Clark proposed to enrich the description in such a way that all Herbrand models but the minimal one are excluded. This predicate completion is used in explicit negation as failure, and also for example by Comon and Nieuwenhuis in inductive theorem proving.

In this article, I extend predicate completion to a class of non-Horn clause sets. These may have several minimal models and I show how predicate completion with respect to a ground total reduction ordering singles out the same model as the model construction procedure by Bachmair and Ganzinger.

Matthias Horbach
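
For the Horn case, Clark's construction can be illustrated in one line (a standard textbook example, not the paper's non-Horn extension): the program consisting of $p(a)$ and $p(x) \leftarrow q(x)$ is completed to $\forall x\, (p(x) \leftrightarrow x \approx a \vee q(x))$, together with a completion for $q$ and the usual equality axioms, which rules out every Herbrand model of $p$ except the minimal one. The article generalises this idea to certain non-Horn clause sets, where a ground total reduction ordering takes over the role of singling out one minimal model.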

System Description: SPASS-FD

Using a constrained superposition calculus and a disunification procedure, it is possible to employ superposition-based first-order reasoners for reasoning not only about all models of a first-order theory, but also about all models over a specific finite domain and often as well about the perfect models of the theory (or the unique minimal model in case of a Horn theory). Both of these problems are second-order problems.

In this paper, I describe the tool Spass-FD, an extension of the first-order prover Spass that equips Spass with disunification as well as with fixed domain and minimal model theorem proving capabilities.

Matthias Horbach

Cutting to the Chase: Solving Linear Integer Arithmetic

We describe a new algorithm for solving linear integer programming problems. The algorithm performs a DPLL style search for a feasible assignment, while using a novel cut procedure to guide the search away from the conflicting states.

Dejan Jovanović, Leonardo de Moura
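
A flavour of the cuts involved (a generic rounding example, not the paper's specific cut rule): over the rationals the constraint $3x + 3y \ge 2$ only yields $x + y \ge 2/3$, but if $x$ and $y$ range over the integers one may round up and add the cut $x + y \ge 1$, which removes the fractional region that a purely rational search would keep exploring.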

A Hybrid Method for Probabilistic Satisfiability

Determining satisfiability of sets of formulae formulated in a Nilsson-style probabilistic logic (PSAT) by reduction to a system of linear (in)equations has been extensively studied, especially in the propositional setting. The basic technique for coping with the potentially exponentially large (in terms of the formulae) linear system is column generation, which has been successful (in various forms) in solving problems of around 1000 formulae. Common to existing techniques is that the column generation model explicitly encodes all classical, i.e., non-probabilistic, knowledge. In this paper we introduce a straightforward but new hybrid method for PSAT that makes use of a classical solver in the column generation process. The benefits of this technique are twofold: first, we can, in practice, accommodate inputs with significantly larger classical parts, and thus inputs that are larger overall, and second, we can accommodate inputs with supra-propositional classical parts, such as propositionally complete description logics. We validate our approach with an extensive series of experiments which show that our technique is competitive with traditional non-hybrid approaches in spite of scaling the expressivity of the classical part to the description logic $\mathcal{SROIQ}$.

Pavel Klinov, Bijan Parsia

Solving Systems of Linear Inequalities by Bound Propagation

In this paper we introduce a new method for solving systems of linear inequalities. The algorithm incorporates many state-of-the-art techniques from DPLL-style reasoning. We prove soundness, completeness and termination of the method.

Konstantin Korovin, Andrei Voronkov
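
A toy version of the basic propagation step can be written in a few lines of Python (an illustrative sketch of bound tightening only; the paper's algorithm adds DPLL-style reasoning, and its soundness, completeness and termination arguments, on top of such a step):

    # Toy bound propagation for systems of linear inequalities of the form
    # sum_i a_i * x_i <= b: repeatedly tighten variable bounds and report a
    # conflict when a lower bound exceeds an upper bound.
    import math

    def propagate(constraints, lower, upper, rounds=20):
        """constraints: list of (coeffs, b) with coeffs a dict var -> coefficient,
        meaning sum coeffs[v]*v <= b.  lower/upper: current bounds (may be +/-inf)."""
        for _ in range(rounds):
            changed = False
            for coeffs, b in constraints:
                for var, a in coeffs.items():
                    # Lower-bound the contribution of all other variables.
                    rest = 0.0
                    for v, c in coeffs.items():
                        if v == var:
                            continue
                        rest += c * (lower[v] if c > 0 else upper[v])
                    if math.isinf(rest):
                        continue  # not enough information yet
                    limit = (b - rest) / a
                    if a > 0 and limit < upper[var]:
                        upper[var], changed = limit, True
                    elif a < 0 and limit > lower[var]:
                        lower[var], changed = limit, True
                    if lower[var] > upper[var]:
                        return "conflict"
            if not changed:
                break
        return lower, upper

    # Example: x <= 3, y - x <= 2, -y <= -10 (i.e. y >= 10) is unsatisfiable.
    cons = [({"x": 1}, 3), ({"y": 1, "x": -1}, 2), ({"y": -1}, -10)]
    inf = math.inf
    print(propagate(cons, {"x": -inf, "y": -inf}, {"x": inf, "y": inf}))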

On Transfinite Knuth-Bendix Orders

In this paper we discuss the recently introduced transfinite Knuth-Bendix orders. We prove that any such order with finite subterm coefficients and for a finite signature is equivalent to an order using ordinals below $\omega^{\omega}$, that is, finite sequences of natural numbers of a fixed length. We show that this result does not hold when subterm coefficients are infinite. However, we prove that in this general case ordinals below $\omega^{\omega^{\omega}}$ suffice. We also prove that both upper bounds are tight. We briefly discuss the significance of our results for the implementation of first-order theorem provers and describe relationships between the transfinite Knuth-Bendix orders and existing implementations of extensions of the Knuth-Bendix orders.

Laura Kovács, Georg Moser, Andrei Voronkov

Scala to the Power of Z3: Integrating SMT and Programming

We describe a system that integrates the SMT solver Z3 with the Scala programming language. The system supports the use of the SMT solver for checking satisfiability, unsatisfiability, as well as solution enumeration. The embedding of formula trees into Scala uses the host type system of Scala to prevent the construction of certain ill-typed constraints. The solution enumeration feature integrates into the iteration constructions of Scala and supports writing non-deterministic programs. Using Z3’s mechanism of theory extensions, our system also helps users construct custom constraint solvers where the interpretation of predicates and functions is given as Scala code. The resulting system preserves the productivity advantages of Scala while simplifying tasks such as combinatorial search.

Ali Sinan Köksal, Viktor Kuncak, Philippe Suter
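
A similar workflow can be tried from Python using Z3's standard Python bindings; this is not the Scala system described in the paper (whose API and typing discipline differ), but it shows the flavour of mixing host-language code with satisfiability checking and solution enumeration:

    # Solving and enumerating solutions with Z3 from a host language, here via
    # the z3 Python bindings (pip install z3-solver).
    from z3 import Ints, Solver, Or, sat

    x, y = Ints("x y")
    s = Solver()
    s.add(0 < x, x < y, x + y == 10)

    # Enumerate all models by blocking each one after it is found.
    while s.check() == sat:
        m = s.model()
        print(m[x], m[y])
        s.add(Or(x != m[x], y != m[y]))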

Efficient General Unification for XOR with Homomorphism

General E-unification is an important tool in cryptographic protocol analysis, where the equational theory E represents properties of the cryptographic algorithm, and uninterpreted function symbols represent other functions. Some important properties are XOR, Abelian groups, and homomorphisms over them. Polynomial time algorithms exist for unification in those theories. However, the general E-unification problem in these theories is NP-complete, and existing algorithms are highly nondeterministic. We give a mostly deterministic set of inference rules for solving general E-unification modulo XOR with (or without) a homomorphism, and prove that it is sound, complete and terminating. These inference rules have been implemented in Maude, and are being incorporated into the Maude NPA. They are designed in such a way that they can be extended to an Abelian group with a homomorphism.

Zhiqiang Liu, Christopher Lynch

A Dependency Pair Framework for Innermost Complexity Analysis of Term Rewrite Systems

We present a modular framework to analyze the innermost runtime complexity of term rewrite systems automatically. Our method is based on the dependency pair framework for termination analysis. In contrast to previous work, we developed a direct adaptation of successful termination techniques from the dependency pair framework in order to use them for complexity analysis. By extensive experimental results, we demonstrate the power of our method compared to existing techniques.

Lars Noschinski, Fabian Emmes, Jürgen Giesl

Static Analysis of Android Programs

Android is a programming language based on Java and an operating system for mobile or embedded devices. It features an extended event-based library and dynamic inflation of graphical views from declarative XML layout files. A static analyzer for Android programs must consider such features, for correctness and precision. This article is a description of how we extended the Julia system, based on abstract interpretation, to run formally correct analyses of Android programs. We have analyzed with Julia the Android sample applications by Google and a few larger open-source programs. Julia has found, automatically, bugs and flaws both in the Google samples and in the open-source programs.

Étienne Payet, Fausto Spoto

Stochastic Differential Dynamic Logic for Stochastic Hybrid Programs

Logic is a powerful tool for analyzing and verifying systems, including programs, discrete systems, real-time systems, hybrid systems, and distributed systems. Some applications also have a stochastic behavior, however, either because of fundamental properties of nature, uncertain environments, or simplifications to overcome complexity. Discrete probabilistic systems have been studied using logic. But logic has been chronically underdeveloped in the context of stochastic hybrid systems, i.e., systems with interacting discrete, continuous, and stochastic dynamics. We aim at overcoming this deficiency and introduce a dynamic logic for stochastic hybrid systems. Our results indicate that logic is a promising tool for understanding stochastic hybrid systems and can help taming some of their complexity. We introduce a compositional model for stochastic hybrid systems. We prove adaptivity, càdlàg, and Markov time properties, and prove that the semantics of our logic is measurable. We present compositional proof rules, including rules for stochastic differential equations, and prove soundness.

André Platzer

Reasoning in the OWL 2 Full Ontology Language Using First-Order Automated Theorem Proving

OWL 2 has been standardized by the World Wide Web Consortium (W3C) as a family of ontology languages for the Semantic Web. The most expressive of these languages is OWL 2 Full, but to date no reasoner has been implemented for this language. Consistency and entailment checking are known to be undecidable for OWL 2 Full. We have translated a large fragment of the OWL 2 Full semantics into first-order logic, and used automated theorem proving systems to do reasoning based on this theory. The results are promising, and indicate that this approach can be applied in practice for effective OWL reasoning, beyond the capabilities of current Semantic Web reasoners.

Michael Schneider, Geoff Sutcliffe

An Efficient Decision Procedure for Imperative Tree Data Structures

We present a new decidable logic called TREX for expressing constraints about imperative tree data structures. In particular, TREX supports a transitive closure operator that can express reachability constraints, which often appear in data structure invariants. We show that our logic is closed under weakest precondition computation, which enables its use for automated software verification. We further show that satisfiability of formulas in TREX is decidable in NP. The low complexity makes it an attractive alternative to more expensive logics such as monadic second-order logic (MSOL) over trees, which have been traditionally used for reasoning about tree data structures.

Thomas Wies, Marco Muñiz, Viktor Kuncak

AC Completion with Termination Tools

We present mascott, a tool for Knuth-Bendix completion modulo the theory of associative and commutative operators. In contrast to classical completion tools, mascott does not rely on a fixed AC-compatible reduction order. Instead, a suitable order is implicitly constructed during a deduction by collecting all oriented rules in a similar fashion as done in the tool Slothrop. This allows for convergent systems which cannot be completed using standard orders. We outline the underlying inference system and comment on implementation details such as the use of multi-completion, term indexing techniques, and critical pair criteria.

Sarah Winkler, Aart Middeldorp

CSI – A Confluence Tool

This paper describes a new confluence tool for term rewrite systems. Due to its modular design, the few techniques implemented so far can be combined flexibly. Methods developed for termination analysis are adapted to prove and disprove confluence. Preliminary experimental results show the potential of our tool.

Harald Zankl, Bertram Felgenhauer, Aart Middeldorp

Backmatter
