2006 | Book

Logic for Programming, Artificial Intelligence, and Reasoning

13th International Conference, LPAR 2006, Phnom Penh, Cambodia, November 13-17, 2006. Proceedings

Editors: Miki Hermann, Andrei Voronkov

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

Table of Contents

Frontmatter
Higher-Order Termination: From Kruskal to Computability

Termination is a major question in both logic and computer science. In logic, termination is at the heart of proof theory, where it is usually called strong normalization (of cut elimination). In computer science, termination has always been an important issue for showing programs correct. In the early days of logic, strong normalization was usually shown by assigning ordinals to expressions in such a way that eliminating a cut would yield an expression with a smaller ordinal. In the early days of verification, computer scientists used similar ideas, interpreting the arguments of a program call by a natural number, such as their size. Showing that the size of the arguments decreases at each recursive call gives a termination proof of the program, which is however rather weak, since it can only yield quite small ordinals. In the sixties, Tait invented a new method for showing cut elimination of natural deduction, based on a predicate over the set of terms such that membership of an expression in the predicate implied the strong normalization property for that expression. The predicate being defined by induction on types, or even as a fixpoint, this method could yield much larger ordinals. Later generalized by Girard under the name of reducibility or computability candidates, it proved very effective in proving the strong normalization property of typed lambda-calculi with polymorphic types, dependent types, inductive types, and finally a cumulative hierarchy of universes. On the programming side, research on termination shifted from programming to executable specification languages based on rewriting, and concentrated on automatable methods based on the construction of well-founded orderings on the set of terms. The milestone here is Dershowitz’s recursive path ordering (RPO), from the late seventies, whose well-foundedness proof is based on a powerful combinatorial argument, Kruskal’s tree theorem, which also yields rather large ordinals. While the computability predicates must be defined for each particular case, and their properties proved by hand, the recursive path ordering can be effectively automated.
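To make the size-measure argument above concrete, here is a minimal sketch (my own illustration, not material from the paper): every recursive call of merge is on arguments whose combined length is strictly smaller, so mapping each call to that natural number gives a decreasing measure and hence a termination proof.

```haskell
-- A minimal illustration (not from the paper) of termination by a
-- decreasing size measure: map each call `merge xs ys` to the natural
-- number `length xs + length ys`; every recursive call strictly
-- decreases it, so the recursion terminates.
merge :: Ord a => [a] -> [a] -> [a]
merge [] ys = ys
merge xs [] = xs
merge (x:xs) (y:ys)
  | x <= y    = x : merge xs (y:ys)   -- measure drops from m to m-1
  | otherwise = y : merge (x:xs) ys   -- measure drops from m to m-1

main :: IO ()
main = print (merge [1,3,5] [2,4 :: Int])   -- [1,2,3,4,5]
```

For functions whose recursive calls are not on smaller arguments in this simple sense, no such measure exists, which is one motivation for the stronger computability-based methods discussed above.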

Frédéric Blanqui, Jean-Pierre Jouannaud, Albert Rubio
Deciding Satisfiability of Positive Second Order Joinability Formulae

This paper deals with a class of second order formulae where the only predicate is joinability modulo a conditional term rewrite system, first order variables range over ground terms and second order variables are interpreted as relations on ground terms (i.e. sets of tuples of ground terms). We define a generic algorithm that decides the satisfiability of positive second order joinability formulae when an algorithm is known to finitely represent solutions of first order formulae. When the answer is positive, the algorithm computes one particular instance for the second order variables. We apply this technique to the class of positive second order pseudo-regular formulae. The result is then a logic program that represents the instance of the second order variables. We define a transformation to translate this instance into a CTRS. This result can be used to automatically synthesize a program that defines a relation from its specification.

Sébastien Limet, Pierre Pillot
SAT Solving for Argument Filterings

This paper introduces a propositional encoding for lexicographic path orders in connection with dependency pairs. This facilitates the application of SAT solvers for termination analysis of term rewrite systems based on the dependency pair method. We address two main inter-related issues and encode them as satisfiability problems of propositional formulas that can be efficiently handled by SAT solving: (1) the combined search for a lexicographic path order together with an argument filtering to orient a set of inequalities; and (2) how the choice of the argument filtering influences the set of inequalities that have to be oriented. We have implemented our contributions in the termination prover AProVE. Extensive experiments show that by our encoding and the application of SAT solvers one obtains speedups in orders of magnitude as well as increased termination proving power.
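To illustrate what an argument filtering does (a sketch with my own simplified types, not the paper's propositional encoding): the filtering either collapses a function symbol to one of its argument positions or keeps a fixed sublist of positions, and every term is filtered accordingly before the lexicographic path order compares it.

```haskell
-- A small sketch (my own simplified types, not the paper's encoding) of
-- applying an argument filtering to terms before an LPO comparison.
data Term = Var String | App String [Term] deriving (Eq, Show)

data Filter = Collapse Int   -- replace f(t1,...,tn) by the filtered ti (1-based)
            | Keep [Int]     -- keep only the listed argument positions
            | KeepAll        -- leave the symbol's arguments untouched

type AF = String -> Filter

filterTerm :: AF -> Term -> Term
filterTerm _  (Var x)    = Var x
filterTerm af (App f ts) = case af f of
  Collapse i -> filterTerm af (ts !! (i - 1))
  Keep is    -> App f [ filterTerm af (ts !! (i - 1)) | i <- is ]
  KeepAll    -> App f (map (filterTerm af) ts)

-- Example filtering: keep only the first argument of cons, collapse mark.
exampleAF :: AF
exampleAF "cons" = Keep [1]
exampleAF "mark" = Collapse 1
exampleAF _      = KeepAll

main :: IO ()
main = print (filterTerm exampleAF
                (App "cons" [App "mark" [Var "x"], App "cons" [Var "y", Var "ys"]]))
-- App "cons" [Var "x"]  -- mark is collapsed, the second argument of cons is dropped
```

The search problem addressed in the paper is then to choose such a filtering and a precedence simultaneously so that all filtered inequalities are oriented, which is what the SAT encoding expresses.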

Michael Codish, Peter Schneider-Kamp, Vitaly Lagoon, René Thiemann, Jürgen Giesl
Inductive Decidability Using Implicit Induction

Decision procedures are widely used in automated reasoning tools in order to reason about data structures. In applications, many conjectures fall outside the theory handled by a decision procedure. Often, reasoning about user-defined functions on those data structures is needed. For this, inductive reasoning has to be employed. In this work, classes of function definitions and conjectures are identified for which inductive validity can be automatically decided using implicit induction methods and decision procedures for an underlying theory. The class of equational conjectures considered in this paper significantly extends the results of Kapur & Subramaniam (CADE, 2000) [15], which were obtained using explicit induction schemes. Firstly, nonlinear conjectures can be decided automatically. Secondly, function definitions can use other defined functions in their definitions, thus allowing mutually recursive functions and decidable conjectures about them. Thirdly, conjectures can have general terms from the decidable theory on inductive positions. These contributions are crucial for successfully integrating inductive reasoning into decision procedures, thus enabling their use in push-button mode in applications including verification and program analysis.

Stephan Falke, Deepak Kapur
Matching Modulo Superdevelopments: Application to Second-Order Matching

To perform higher-order matching, we need to decide the βη-equivalence on λ-terms. The first way to do it is to use the simply typed λ-calculus, and this is the usual framework where higher-order matching is performed. Another approach consists in deciding a restricted equivalence based on finite superdevelopments. We consider higher-order matching modulo this equivalence over untyped λ-terms, for which we propose a terminating, sound and complete matching algorithm.

This is in particular of interest since all second-order β-matches are matches modulo superdevelopments. We further propose a restriction to second-order matching that gives exactly all second-order matches.

Germain Faure
Derivational Complexity of Knuth-Bendix Orders Revisited

We study the derivational complexity of rewrite systems $\mathcal{R}$ compatible with KBO, if the signature of $\mathcal{R}$ is infinite. We show that the known bounds on the derivation height are preserved if $\mathcal{R}$ fulfils some mild conditions. This allows us to obtain bounds on the derivational height of non-simply terminating TRSs. Furthermore, we re-establish the 2-recursive upper bound on the derivational complexity of finite rewrite systems $\mathcal{R}$ compatible with KBO.

Georg Moser
A Characterization of Alternating Log Time by First Order Functional Programs

We give an intrinsic characterization of the class of functions which are computable in NC^1, that is, by a uniform family of circuits of logarithmic depth and polynomial size. Recall that the class of functions in ALogTime, that is, computable in logarithmic time on an alternating Turing machine, is NC^1. Our characterization is in terms of first-order functional programming languages. We define measure tools called Sup-interpretations, which allow us to give space and time bounds and also to capture many program schemas. This study is part of research on static analysis aimed at predicting program resources. It is related to the notion of Quasi-interpretations and belongs to the implicit computational complexity line of research.

Guillaume Bonfante, Jean-Yves Marion, Romain Péchoux
Combining Typing and Size Constraints for Checking the Termination of Higher-Order Conditional Rewrite Systems

In previous work, the first author extended to higher-order rewriting and dependent types the use of size annotations in types, a termination proof technique called type-based or size-based termination and initially developed for ML-like programs. Here, we go one step further by considering conditional rewriting and explicit quantifications and constraints on size annotations. This allows us to describe more precisely how the size of the output of a function depends on the size of its inputs. Hence, we can check the termination of more functions. We first give a general type-checking algorithm based on constraint solving. Then, we give a termination criterion with constraints in Presburger arithmetic. To our knowledge, this is the first termination criterion for higher-order conditional rewriting that takes the conditions into account.

Frédéric Blanqui (INRIA), Colin Riba (INPL)
On a Local-Step Cut-Elimination Procedure for the Intuitionistic Sequent Calculus

In this paper we investigate, for intuitionistic implicational logic, the relationship between normalization in natural deduction and cut-elimination in a standard sequent calculus. First we identify a subset of proofs in the sequent calculus that correspond to proofs in natural deduction. Then we define a reduction relation on those proofs that exactly corresponds to normalization in natural deduction. The reduction relation is simulated soundly and completely by a cut-elimination procedure which consists of local proof transformations. It follows that the sequent calculus with our cut-elimination procedure is a proper extension that is conservative over natural deduction with normalization.

Kentaro Kikuchi
Modular Cut-Elimination: Finding Proofs or Counterexamples

Modular cut-elimination is a particular notion of “cut-elimination in the presence of non-logical axioms” that is preserved under the addition of suitable rules. We introduce syntactic necessary and sufficient conditions for modular cut-elimination for standard calculi, a wide class of (possibly) multiple-conclusion sequent calculi with generalized quantifiers. We provide a “universal” modular cut-elimination procedure that works uniformly for any standard calculus satisfying our conditions. The failure of these conditions generates counterexamples for modular cut-elimination and, in certain cases, for cut-elimination.

Agata Ciabattoni, Kazushige Terui
An Executable Formalization of the HOL/Nuprl Connection in the Metalogical Framework Twelf

Howe’s HOL/Nuprl connection is an interesting example of a translation between two fundamentally different logics, namely a typed higher-order logic and a polymorphic extensional type theory. In earlier work we have established a proof-theoretic correctness result of the translation in a way that complements Howe’s semantics-based justification and furthermore goes beyond the original HOL/Nuprl connection by providing the foundation for a proof translator. Using the Twelf logical framework, the present paper goes one step further. It presents the first rigorous formalization of this treatment in a logical framework, and hence provides a safe alternative to the translation of proofs.

Carsten Schürmann, Mark-Oliver Stehr
A Semantic Completeness Proof for TaMeD

Deduction modulo is a theoretical framework designed to introduce computational steps in deductive systems. This approach is well suited to automated theorem proving and a tableau method for first-order classical deduction modulo has been developed. We reformulate this method and give an (almost constructive) semantic completeness proof. This new proof allows us to extend the completeness theorem to several classes of rewrite systems used for computations in deduction modulo. We are then able to build a counter-model when a proof fails for these systems.

Richard Bonichon, Olivier Hermant
Saturation Up to Redundancy for Tableau and Sequent Calculi

We discuss an adaptation of the technique of saturation up to redundancy, as introduced by Bachmair and Ganzinger [1], to tableau and sequent calculi for classical first-order logic. This technique can be used to easily show the completeness of optimized calculi that contain destructive rules e.g. for simplification, rewriting with equalities, etc., which is not easily done with a standard Hintikka-style completeness proof. The notions are first introduced for Smullyan-style ground tableaux, and then extended to constrained formula free-variable tableaux.

Martin Giese
Branching-Time Temporal Logic Extended with Qualitative Presburger Constraints

Recently, LTL extended with atomic formulas built over a constraint language interpreting variables in ℤ has been shown to have decidable satisfiability and model-checking problems. This language allows comparison of variables at different states of the model and includes periodicity constraints, comparison constraints, and a restricted form of quantification. On the other hand, the CTL counterpart of this logic (and hence also its CTL* counterpart, which subsumes both LTL and CTL) has an undecidable model-checking problem. In this paper, we substantially extend the decidability border by considering a meaningful fragment of CTL* extended with such constraints (which subsumes both the universal and existential fragments, as well as the EF-like fragment) and show that satisfiability and model checking over relational automata that are abstractions of counter machines are decidable. The correctness and termination of our algorithm rely on a suitable well quasi-ordering defined over the set of variable valuations.

Laura Bozzelli, Régis Gascon
Combining Supervaluation and Degree Based Reasoning Under Vagueness

Two popular approaches to formalizing adequate reasoning with vague propositions are usually deemed incompatible. On the one hand, there is supervaluation with respect to precisification spaces, which consist in collections of classical interpretations that represent admissible ways of making vague atomic statements precise. On the other hand, t-norm based fuzzy logics model truth-functional reasoning, where reals in the unit interval [0,1] are interpreted as degrees of truth. We show that both types of reasoning can be combined within a single logic SŁ that extends both Łukasiewicz logic Ł and (classical) S5, where the modality corresponds to ‘...is true in all complete precisifications’. Our main result consists in a game-theoretic interpretation of SŁ, building on ideas already introduced by Robin Giles in the 1970s to obtain a characterization of Ł in terms of a Lorenzen-style dialogue game combined with bets on the results of binary experiments that may show dispersion. In our case the experiments are replaced by random evaluations with respect to a given probability distribution over permissible precisifications.

Christian G. Fermüller, Robert Kosik
A Comparison of Reasoning Techniques for Querying Large Description Logic ABoxes

Many modern applications of description logics (DLs) require answering queries over large data quantities, structured according to relatively simple ontologies. For such applications, we conjectured that reusing ideas of deductive databases might improve scalability of DL systems. Hence, in our previous work, we developed an algorithm for reducing a DL knowledge base to a disjunctive datalog program. To test our conjecture, we implemented our algorithm in a new DL reasoner KAON2, which we describe in this paper. Furthermore, we created a comprehensive test suite and used it to conduct a performance evaluation. Our results show that, on knowledge bases with large ABoxes but with simple TBoxes, our technique indeed shows good performance; in contrast, on knowledge bases with large and complex TBoxes, existing techniques still perform better. This allowed us to gain important insights into strengths and weaknesses of both approaches.

Boris Motik, Ulrike Sattler
A Local System for Intuitionistic Logic

This paper presents systems for first-order intuitionistic logic and several of its extensions in which all the propositional rules are local, in the sense that, in applying the rules of the system, one needs only a fixed amount of information about the logical expressions involved. The main source of non-locality is the contraction rules. We show that the contraction rules can be restricted to atomic ones, provided we employ deep inference, i.e., allow rules to apply anywhere inside logical expressions. We further show that the use of deep inference allows for modular extensions of intuitionistic logic to Dummett’s intermediate logic LC, Gödel logic and classical logic. We present the systems in the calculus of structures, a proof-theoretic formalism which supports deep inference. Cut elimination for these systems is proved indirectly by simulating the cut-free sequent systems, or the hypersequent systems in the cases of Dummett’s LC and Gödel logic, in the cut-free systems in the calculus of structures.

Alwen Tiu
CIC $\widehat{~}$ : Type-Based Termination of Recursive Definitions in the Calculus of Inductive Constructions

Sized types provide a type-based mechanism to enforce termination of recursive definitions in typed λ-calculi. Previous work has provided strong indications that type-based termination provides an appropriate foundation for proof assistants based on type theory; however, most work to date has been confined to non-dependent type systems. In this article, we introduce a variant of the Calculus of Inductive Constructions with sized types and study its metatheoretical properties: subject reduction, normalization, and thus consistency and decidability of type-checking and of size inference. A prototype implementation has been developed alongside case studies.

Gilles Barthe, Benjamin Grégoire, Fernando Pastawski
Reducing Nondeterminism in the Calculus of Structures

The calculus of structures is a proof-theoretical formalism which generalizes the sequent calculus with the feature of deep inference: in contrast to the sequent calculus, inference rules can be applied at any depth inside a formula, bringing shorter proofs than in any other formalism supporting analytic proofs. However, deep applicability of the inference rules causes greater nondeterminism than in the sequent calculus with respect to proof search. In this paper, we introduce a new technique which reduces nondeterminism without breaking proof-theoretical properties and provides more immediate access to shorter proofs. We present this technique on system BV, the smallest technically non-trivial system in the calculus of structures, extending multiplicative linear logic with the rules mix, nullary mix, and a self-dual non-commutative logical operator. Because our technique exploits a scheme common to all the systems in the calculus of structures, we argue that it generalizes to the systems for classical logic, linear logic, and modal logics.

Ozan Kahramanoğulları
A Relaxed Approach to Integrity and Inconsistency in Databases

We demonstrate that many, though not all, integrity checking methods are able to tolerate inconsistency, without this having been recognized before. We show that it is possible to use them to beneficial effect and without further ado, not only for preserving integrity in consistent databases, but also in databases that violate their constraints. This apparently relaxed attitude toward integrity and inconsistency stands in contrast to approaches that are much more cautious with respect to the prevention, identification, removal, repair and tolerance of inconsistent data that violate integrity. We assess several well-known methods in terms of inconsistency tolerance and give examples and counter-examples thereof.

Hendrik Decker, Davide Martinenghi
On Locally Checkable Properties

The large computational price of formal verification of general ω-regular properties has led to the study of restricted classes of properties, and to the development of verification methodologies for them. Examples that have been widely accepted by the industry include the verification of safety properties, and bounded model checking. We introduce and study another restricted class of properties – the class of locally checkable properties. For an integer k ≥ 1, a language L ⊆ Σ^ω is k-checkable if there is a language R ⊆ Σ^k (of “allowed subwords”) such that a word w belongs to L iff all the subwords of w of length k belong to R. A property is locally checkable if its language is k-checkable for some k. Locally checkable properties, which are a special case of safety properties, are common in the specification of systems. In particular, one can often bound an eventuality constraint in a property by a fixed time frame.

The practical importance of locally checkable properties lies in the low memory demand for their run-time verification. A monitor for a k-checkable property needs only a record of the last k computation cycles. Furthermore, even if a large number of k-checkable properties are monitored, the monitors can share their memory, resulting in a memory demand that does not depend on the number of properties monitored. This advantage of locally checkable properties makes them particularly suitable for run-time verification. In the paper, we define locally checkable languages, study their relation to other restricted classes of properties, study the question of deciding whether a property is locally checkable, and study the relation between the size of the property (specified by an LTL formula or an automaton) and the smallest k for which the property is k-checkable.
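The low memory demand can be made concrete with a small sketch (the names and types below are my own, not the paper's construction): a monitor for a k-checkable property stores only the last k letters and rejects as soon as a full window of length k falls outside R.

```haskell
import Control.Monad (foldM)
import qualified Data.Set as Set

-- A toy run-time monitor for a k-checkable property (my own sketch, not
-- the paper's construction): remember only the last k letters; reject as
-- soon as a full window of length k is not an allowed subword.
data Monitor a = Monitor
  { winSize :: Int
  , allowed :: Set.Set [a]   -- the set R of allowed length-k subwords
  , window  :: [a]           -- the last (at most k) letters seen
  } deriving Show

newMonitor :: Int -> Set.Set [a] -> Monitor a
newMonitor k r = Monitor k r []

-- Feed one letter: Nothing signals a violation, Just is the updated monitor.
step :: Ord a => Monitor a -> a -> Maybe (Monitor a)
step m x
  | length w == winSize m && not (w `Set.member` allowed m) = Nothing
  | otherwise = Just m { window = w }
  where
    w = lastN (winSize m) (window m ++ [x])
    lastN n xs = drop (length xs - n) xs

main :: IO ()
main = do
  let r = Set.fromList [[0,0],[0,1],[1,0]] :: Set.Set [Int]  -- forbid subword 1,1
  print (fmap window (foldM step (newMonitor 2 r) [0,1,0,0,1]))  -- Just [0,1]
  print (fmap window (foldM step (newMonitor 2 r) [0,1,1]))      -- Nothing
```

Several such monitors over the same alphabet can share the window, which is the memory-sharing point made in the abstract.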

Orna Kupferman, Yoad Lustig, Moshe Y. Vardi
Deciding Key Cycles for Security Protocols

Many recent results are concerned with interpreting proofs of security done in symbolic models in the more detailed models of computational cryptography. In the case of symmetric encryption, these results stringently demand that no key cycle (e.g. {k}_k, a key encrypted with itself) can be produced during the execution of protocols. While security properties like secrecy or authentication have been proved decidable for many interesting classes of protocols, the automatic detection of key cycles has not been studied so far.

In this paper, we prove that deciding the existence of key-cycles is NP-complete for a bounded number of sessions. Next, we observe that the techniques that we use are of more general interest and apply them to reprove the decidability of a significant existing fragment of protocols with timestamps.
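For illustration only (this is not the paper's decision procedure), a key cycle as in the {k}_k example above is simply a cycle in the "key k encrypts key k'" relation extracted from the messages of an execution, which a plain graph search can detect:

```haskell
import qualified Data.Map as Map
import qualified Data.Set as Set

-- Illustration of the notion only (not the paper's NP-completeness
-- procedure): build the relation "key k encrypts key k'" from the
-- messages of an execution and look for a cycle in it; {k}_k is the
-- length-one case.
type Key = String
type Encrypts = Map.Map Key [Key]   -- k maps to the keys occurring under {...}_k

hasKeyCycle :: Encrypts -> Bool
hasKeyCycle g = any (\k -> reaches k k) (Map.keys g)
  where
    succs k = Map.findWithDefault [] k g
    reaches from target = go (Set.singleton from) (succs from)
      where
        go _ [] = False
        go seen (x:xs)
          | x == target         = True
          | x `Set.member` seen = go seen xs
          | otherwise           = go (Set.insert x seen) (succs x ++ xs)

main :: IO ()
main = do
  print (hasKeyCycle (Map.fromList [("k", ["k"])]))                  -- True: {k}_k
  print (hasKeyCycle (Map.fromList [("k1",["k2"]), ("k2",["k1"])]))  -- True
  print (hasKeyCycle (Map.fromList [("k1",["k2"]), ("k2",[])]))      -- False
```

The hard part addressed in the paper is, of course, deciding whether such a relation can arise at all in some execution of the protocol, not checking a given one.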

Véronique Cortier, Eugen Zălinescu
Automating Verification of Loops by Parallelization

Loops are a major bottleneck in formal software verification, because they generally require user interaction: typically, induction hypotheses or invariants must be found or modified by hand. This involves expert knowledge of the underlying calculus and proof engine. We show that one can replace interactive proof techniques, such as induction, with automated first-order reasoning in order to deal with parallelizable loops, where a loop can be parallelized whenever its iterations do not depend on each other. We develop a dependence analysis that ensures parallelizability. It guarantees soundness of a proof rule that transforms a loop into a universally quantified update of the state change information represented by the loop body. This makes it possible to use automatic first-order reasoning techniques to deal with loops. The method has been implemented in the KeY verification tool. We evaluated it with representative case studies from the Java Card domain.

Tobias Gedell, Reiner Hähnle
On Computing Fixpoints in Well-Structured Regular Model Checking, with Applications to Lossy Channel Systems

We prove a general finite convergence theorem for “upward-guarded” fixpoint expressions over a well-quasi-ordered set. This has immediate applications in regular model checking of well-structured systems, where a main issue is the eventual convergence of fixpoint computations. In particular, we are able to directly obtain several new decidability results on lossy channel systems.

Christel Baier, Nathalie Bertrand, Philippe Schnoebelen
Verification Condition Generation Via Theorem Proving

We present a method to convert (i) an operational semantics for a given machine language, and (ii) an off-the-shelf theorem prover, into a high assurance verification condition generator (VCG). Given a program annotated with assertions at cutpoints, we show how to use the theorem prover directly on the operational semantics to generate verification conditions analogous to those produced by a custom-built VCG. Thus no separate VCG is necessary, and the theorem prover can be employed both to generate and to discharge the verification conditions. The method handles both partial and total correctness. It is also compositional in that the correctness of a subroutine needs to be proved once, rather than at each call site. The method has been used to verify several machine-level programs using the ACL2 theorem prover.

John Matthews, J. Strother Moore, Sandip Ray, Daron Vroon
An Incremental Approach to Abstraction-Carrying Code

Abstraction-Carrying Code (ACC) has recently been proposed as a framework for Proof-Carrying Code (PCC) in which the code supplier provides a program together with an abstraction (or abstract model of the program) whose validity entails compliance with a predefined safety policy. Existing approaches for PCC are developed under the assumption that the consumer reads and validates the entire program w.r.t. the full certificate at once, in a non-incremental way. In the context of ACC, we propose an incremental approach to PCC for the generation of certificates and the checking of untrusted updates of a (trusted) program, i.e., when a producer provides a modified version of a previously validated program. Our proposal is that, if the consumer keeps the original (fixed-point) abstraction, it is possible to provide only the program updates and the incremental certificate (i.e., the difference of abstractions). Furthermore, it is now possible to define an incremental checking algorithm which, given the new updates and their incremental certificate, only re-checks the fixpoint for each procedure affected by the updates and the propagation of the effect of these fixpoint changes. As a consequence, both certificate transmission time and checking time can be reduced significantly.

Elvira Albert, Puri Arenas, Germán Puebla
Context-Sensitive Multivariant Assertion Checking in Modular Programs

We propose a modular, assertion-based system for verification and debugging of large logic programs, together with several interesting models for checking assertions statically in modular programs, each with different characteristics and representing different trade-offs. Our proposal is a modular and multivariant extension of our previously proposed abstract assertion checking model, and we also report on its implementation in the CiaoPP system. In our approach, the specification of the program, given by a set of assertions, may be partial, instead of the complete specification required by traditional verification systems. Also, the system can deal with properties which cannot always be determined at compile-time. As a result, the proposed system needs to work with safe approximations: all assertions proved correct are guaranteed to be valid, and all errors detected are actual errors. The use of modular, context-sensitive static analyzers also allows us to introduce a new distinction between assertions checked in a particular context or checked in general.

Paweł Pietrzak, Jesús Correas, Germán Puebla, Manuel V. Hermenegildo
Representation of Partial Knowledge and Query Answering in Locally Complete Databases

The Local Closed-World Assumption (LCWA) is a generalization of Reiter’s Closed-World Assumption (CWA) for relational databases that may be incomplete. Two basic questions that are related to this assumption are: (1) how to represent the fact that only part of the information is known to be complete, and (2) how to properly reason with this information, that is, how to determine whether an answer to a database query is complete even though the database information is incomplete. In this paper we concentrate on the second issue, based on a treatment of the first issue developed in earlier work of the authors. For this we consider a fixpoint semantics for declarative theories that represent locally complete databases. This semantics is based on 3-valued interpretations that allow us to distinguish between the certain and possible consequences of the database’s theory.

Álvaro Cortés-Calabuig, Marc Denecker, Ofer Arieli, Maurice Bruynooghe
Sequential, Parallel, and Quantified Updates of First-Order Structures

We present a data structure for storing memory contents of imperative programs during symbolic execution, a technique frequently used for program verification and testing. The concept, called updates, can be integrated in dynamic logic as runtime infrastructure and models both stack and heap. Here, updates are systematically developed as an imperative programming language that provides the following constructs: assignments, guards, sequential composition and bounded as well as unbounded parallel composition. The language is equipped both with a denotational semantics and a correct rewriting system for execution, whereby the latter is a generalisation of the syntactic application of substitutions. The normalisation of updates is discussed. The complete theory of updates has been formalised using Isabelle/HOL.
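To make the constructs listed above a bit more tangible, here is a much-simplified denotational sketch (my own names and semantic choices, far from the paper's full data structure): assignments, guards, sequential composition, and a bounded parallel composition in which both sides read the initial state and later writes win.

```haskell
import qualified Data.Map as Map

-- A much-simplified denotational sketch of updates (my own, not the
-- paper's data structure): integer program variables, assignments,
-- guards, sequential composition, and parallel composition where both
-- sides read the same initial state and later (rightmost) writes win.
type State = Map.Map String Int

data Expr   = Lit Int | V String | Add Expr Expr
data Update = Assign String Expr
            | Guard Expr Update        -- apply the update only if the guard is non-zero
            | Seq Update Update
            | Par Update Update

evalE :: State -> Expr -> Int
evalE _ (Lit n)   = n
evalE s (V x)     = Map.findWithDefault 0 x s
evalE s (Add a b) = evalE s a + evalE s b

-- The writes an update performs, evaluated against a given state.
writes :: State -> Update -> [(String, Int)]
writes s (Assign x e) = [(x, evalE s e)]
writes s (Guard g u)  = if evalE s g /= 0 then writes s u else []
writes s (Seq u1 u2)  = let w1 = writes s u1
                        in w1 ++ writes (applyWrites s w1) u2
writes s (Par u1 u2)  = writes s u1 ++ writes s u2   -- both read s; right wins

applyWrites :: State -> [(String, Int)] -> State
applyWrites = foldl (\st (x, v) -> Map.insert x v st)

apply :: Update -> State -> State
apply u s = applyWrites s (writes s u)

main :: IO ()
main = do
  let s0   = Map.fromList [("x", 1), ("y", 2)]
      swap = Par (Assign "x" (V "y")) (Assign "y" (V "x"))
  print (Map.toList (apply swap s0))   -- [("x",2),("y",1)]: a genuine parallel swap
```

The swap example shows why parallel composition is evaluated against the initial state: sequential composition of the two assignments would lose one of the values.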

Philipp Rümmer
Representing Defaults and Negative Information Without Negation-as-Failure

In logic programs, negation-as-failure has been used both for representing negative information and for providing default nonmonotonic inference. In this paper we argue that this twofold role is not only unnecessary for the expressiveness of the language, but also plays against declarative programming, especially if further negation symbols such as strong negation are also available. We therefore propose a new logic programming approach in which negation and default inference are independent, orthogonal concepts. A semantic characterization of this approach is given in the style of answer sets, but other approaches are also possible. Finally, we compare it with the semantics for logic programs with two kinds of negation.

Pablo R. Fillottrani, Guillermo R. Simari
Constructing Camin-Sokal Phylogenies Via Answer Set Programming

Constructing parsimonious phylogenetic trees from species data is a central problem in phylogenetics, and has diverse applications, even outside biology. Many variations of the problem, including the cladistic Camin-Sokal (CCS) version, are NP-complete. We present Answer Set Programming (ASP) models for the binary CCS problem, as well as a simpler perfect phylogeny version, along with experimental results of applying the models to biological data. Our contribution is three-fold. First, we solve phylogeny problems which have not previously been tackled by ASP. Second, we report on variants of our CCS model which significantly affect run time, including the interesting case of making the program “slightly tighter”. This version exhibits some of the best performance, in contrast with a tight version of the model which exhibited poor performance. Third, we are able to find proven-optimal solutions for larger instances of the CCS problem than the widely used branch-and-bound-based PHYLIP package.

Jonathan Kavanagh, David Mitchell, Eugenia Ternovska, Ján Maňuch, Xiaohong Zhao, Arvind Gupta
Automata for Positive Core XPath Queries on Compressed Documents

Given any dag t representing a fully or partially compressed XML document, we present a method for evaluating any positive unary query expressed in terms of Core XPath axes on t, without unfolding t into a tree. To each Core XPath query of a certain basic type, we associate a word automaton; these automata run on the graph of dependency between the non-terminals of the straightline regular tree grammar associated to the given dag, or along complete sibling chains in this grammar. Any given Core XPath query can be decomposed into queries of the basic type, and the answer to the query, on the dag t, can then be expressed as a sub-dag of t suitably labeled under the runs of such automata.

Barbara Fila, Siva Anantharaman
Boolean Rings for Intersection-Based Satisfiability

A potential advantage of using a Boolean-ring formalism for propositional formulæ is the large measure of simplification it facilitates. We propose a combined linear and binomial representation for Boolean-ring polynomials with which one can easily apply Gaussian elimination and Horn-clause methods to advantage. We demonstrate that this framework, with its enhanced simplification, is especially amenable to intersection-based learning, as in recursive learning and the method of Stålmarck. Experiments support the idea that problem variables can be eliminated and search trees can be shrunk by incorporating learning in the form of Boolean-ring saturation.
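As an illustration of the Boolean-ring view (my own toy encoding, not the combined linear/binomial representation proposed in the paper): a formula becomes a polynomial over GF(2) with idempotent variables, where conjunction is multiplication, exclusive-or is addition, x ∨ y = x + y + xy and ¬x = 1 + x; much of the simplification then falls out of the identities x·x = x and x + x = 0.

```haskell
import qualified Data.Set as Set

-- Toy Boolean-ring polynomials (my own encoding, not the paper's): a
-- polynomial is a set of monomials, a monomial a set of variables.
-- Addition is exclusive-or (x + x = 0), multiplication is idempotent
-- conjunction (x * x = x), so simplification is built into the data type.
type Monomial = Set.Set String        -- product of variables; {} is the constant 1
type Poly     = Set.Set Monomial      -- sum of monomials; {} is the constant 0

zero, one :: Poly
zero = Set.empty
one  = Set.singleton Set.empty

var :: String -> Poly
var x = Set.singleton (Set.singleton x)

add :: Poly -> Poly -> Poly           -- exclusive-or: duplicate monomials cancel
add p q = (p `Set.union` q) `Set.difference` (p `Set.intersection` q)

mul :: Poly -> Poly -> Poly           -- conjunction: union of monomials, idempotent
mul p q = foldr add zero
  [ Set.singleton (m1 `Set.union` m2) | m1 <- Set.toList p, m2 <- Set.toList q ]

neg :: Poly -> Poly                   -- not p  =  1 + p
neg = add one

disj :: Poly -> Poly -> Poly          -- p or q  =  p + q + p*q
disj p q = add (add p q) (mul p q)

main :: IO ()
main = do
  let x = var "x"
  print (disj x x == x)      -- True:  x or x simplifies to x
  print (neg (neg x) == x)   -- True:  double negation cancels
  print (mul x x == x)       -- True:  idempotence x * x = x
```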

Nachum Dershowitz, Jieh Hsiang, Guan-Shieng Huang, Daher Kaiss
Theory Instantiation

In this paper we present a method of integrating theory reasoning into the instantiation framework. This integration is done in the black-box style, which allows us to integrate different theories in a uniform way. We prove completeness of the resulting calculus, provided that the theory reasoner is answer-complete and complete for reasoning with ground clauses. One of the distinctive features of our approach is that it allows us to employ off-the-shelf satisfiability solvers for ground clauses modulo theories, as a part of general first-order reasoning. As an application of this approach, we show how it is possible to combine the instantiation calculus with other calculi, such as ordered resolution and paramodulation.

Harald Ganzinger, Konstantin Korovin
Splitting on Demand in SAT Modulo Theories

Lazy algorithms for Satisfiability Modulo Theories (SMT) combine a generic DPLL-based SAT engine with a theory solver for the given theory T that can decide the T-consistency of conjunctions of ground literals. For many theories of interest, theory solvers need to reason by performing internal case splits. Here we argue that it is more convenient to delegate these case splits to the DPLL engine instead. The delegation can be done on demand for solvers that can encode their internal case splits into one or more clauses, possibly including new constants and literals. This results in drastically simpler theory solvers. We present this idea in an improved version of DPLL(T), a general SMT architecture for the lazy approach, and formalize and prove it correct in an extension of Abstract DPLL Modulo Theories, a framework for modeling and reasoning about lazy algorithms for SMT. A remarkable additional feature of the architecture, also discussed in the paper, is that it naturally includes an efficient Nelson-Oppen-like combination of multiple theories and their solvers.
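A toy rendering of the interface change described above (entirely my own sketch, not the paper's DPLL(T) rules): instead of answering only sat or unsat, the theory solver may return a lemma clause, possibly over new literals, asking the DPLL engine to perform the case split.

```haskell
-- Toy sketch of "splitting on demand" (my own types, not the paper's
-- calculus): a theory-solver answer is either Sat, Unsat with a conflict
-- clause, or a Demand lemma that delegates an internal case split to the
-- DPLL engine.
data Lit = Pos String | Neg String deriving (Eq, Show)

data Answer = Sat
            | Unsat  [Lit]   -- conflict clause explaining the inconsistency
            | Demand [Lit]   -- lemma clause to be split on by the SAT engine
  deriving Show

-- Example: asked about read(write(a,i,v),j), an array solver cannot
-- proceed without knowing whether i = j; it demands the split
-- (i = j) \/ not (i = j) instead of branching internally.
arrayStep :: [Lit] -> Answer
arrayStep asserted
  | Pos "i=j" `elem` asserted || Neg "i=j" `elem` asserted = Sat  -- (toy) decidable now
  | otherwise = Demand [Pos "i=j", Neg "i=j"]

main :: IO ()
main = do
  print (arrayStep [])            -- Demand [Pos "i=j",Neg "i=j"]
  print (arrayStep [Pos "i=j"])   -- Sat
```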

Clark Barrett, Robert Nieuwenhuis, Albert Oliveras, Cesare Tinelli
Delayed Theory Combination vs. Nelson-Oppen for Satisfiability Modulo Theories: A Comparative Analysis

Many approaches for Satisfiability Modulo Theories (SMT($\mathcal{T}$)) rely on the integration between a SAT solver and a decision procedure for sets of literals in the background theory $\mathcal{T}$ (the $\mathcal{T}$-solver). When $\mathcal{T}$ is the combination ${\mathcal{T}}_1\cup{\mathcal{T}}_2$ of two simpler theories, the approach is typically handled by means of Nelson-Oppen’s (NO) theory combination schema, in which the two $\mathcal{T}_i$-solvers deduce and exchange (disjunctions of) interface equalities.

In recent papers we have proposed a new approach to SMT(${\mathcal{T}}_1\cup{\mathcal{T}}_2$), called Delayed Theory Combination (Dtc). Here part or all of the (possibly very expensive) task of deducing interface equalities is played by the SAT solver itself, at the potential cost of an enlargement of the Boolean search space. In principle this enlargement could be up to exponential in the number of interface equalities generated.

In this paper we show that this estimate was too pessimistic. We present a comparative analysis of Dtc vs. NO for SMT(${\mathcal{T}}_1\cup{\mathcal{T}}_2$), which shows that, using state-of-the-art SAT-solving techniques, the number of Boolean branches performed by Dtc can be upper bounded by the number of deductions and Boolean branches performed by NO on the same problem. We prove the result for different deduction capabilities of the $\mathcal{T}$-solvers and for both convex and non-convex theories.

Roberto Bruttomesso, Alessandro Cimatti, Anders Franzén, Alberto Griggio, Roberto Sebastiani
Automatic Combinability of Rewriting-Based Satisfiability Procedures

We address the problems of combining satisfiability procedures and consider two combination scenarios: (i) the combination within the class of rewriting-based satisfiability procedures and (ii) the Nelson-Oppen combination of rewriting-based satisfiability procedures and arbitrary satisfiability procedures. In each scenario, we use meta-saturation, which schematizes saturation of the set containing the axioms of a given theory and an arbitrary set of ground literals, to syntactically decide sufficient conditions for the combinability of rewriting-based satisfiability procedures. For (i), we give a sufficient condition for the modular termination of meta-saturation. When meta-saturation for the union of theories halts, it yields a rewriting-based satisfiability procedure for the union. For (ii), we use meta-saturation to prove the stable infiniteness of the component theories and deduction completeness of their rewriting-based satisfiability procedures. These properties are important to establish the correctness of the Nelson-Oppen combination method and to obtain an efficient implementation.

Hélène Kirchner, Silvio Ranise, Christophe Ringeissen, Duc-Khanh Tran
To Ackermann-ize or Not to Ackermann-ize? On Efficiently Handling Uninterpreted Function Symbols in $\mathit{SMT}(\mathcal{EUF} \cup \mathcal{T})$

Satisfiability Modulo Theories ($\mathit{SMT}(\mathcal{T})$) is the problem of deciding the satisfiability of a formula with respect to a given background theory $\mathcal{T}$. When $\mathcal{T}$ is the combination of two simpler theories ${\mathcal{T}}_1$ and ${\mathcal{T}}_2$ ($\mathit{SMT}({\mathcal{T}}_1\cup{\mathcal{T}}_2)$), a standard and general approach is to handle the integration of ${\mathcal{T}}_1$ and ${\mathcal{T}}_2$ by performing some form of search on the equalities between the shared variables.

A frequent and very relevant sub-case of $\mathit{SMT}({\mathcal{T}}_1\cup{\mathcal{T}}_2)$ is when ${\mathcal{T}}_1$ is the theory of Equality and Uninterpreted Functions ($\mathcal{EUF}$). For this case, an alternative approach is to first eliminate all uninterpreted function symbols by means of Ackermann’s expansion, and then to solve the resulting $\mathit{SMT}({\mathcal{T}}_2)$ problem.

In this paper we build on the empirical observation that there is no absolute winner between these two alternative approaches, and that the performance gaps between them are often dramatic, in either direction.

We propose a simple technique for estimating a priori the costs and benefits, in terms of the size of the search space of an $\mathit{SMT}$ tool, of applying Ackermann’s expansion to all or part of the function symbols.

A thorough experimental analysis, including the benchmarks of the SMT’05 competition, shows that the proposed technique is extremely effective in improving the overall performance of the $\mathit{SMT}$ tool.
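For readers unfamiliar with Ackermann's expansion, here is a deliberately tiny rendering of the idea (my own simplification: a single unary uninterpreted symbol applied to variables only, with made-up type names): each application f(x) becomes a fresh variable f_x, and every pair of applications contributes the functional-consistency constraint x = y → f_x = f_y.

```haskell
import Data.List (nub, tails)

-- A tiny rendering of Ackermann's expansion (my own simplification: one
-- unary uninterpreted symbol f, applied to variables only).
data Term = Var String | F String          -- F "a" stands for f(a)
  deriving (Eq, Show)
data Lit = Eq Term Term deriving Show      -- input equalities
data Out = OEq Term Term                   -- flattened equality
         | OImp (Term, Term) (Term, Term)  -- (x = y) -> (f_x = f_y)
  deriving Show

freshFor :: String -> Term
freshFor x = Var ("f_" ++ x)

flatten :: Term -> Term
flatten (Var x) = Var x
flatten (F x)   = freshFor x

ackermannize :: [Lit] -> [Out]
ackermannize lits =
  [ OEq (flatten l) (flatten r) | Eq l r <- lits ] ++
  [ OImp (Var x, Var y) (freshFor x, freshFor y)   -- functional consistency
  | (x:rest) <- tails args, y <- rest ]
  where
    args = nub [ x | Eq l r <- lits, F x <- [l, r] ]

main :: IO ()
main = mapM_ print (ackermannize [Eq (F "a") (F "b"), Eq (Var "a") (Var "b")])
-- f_a = f_b,  a = b,  and the constraint (a = b) -> (f_a = f_b)
```

After the expansion the formula mentions no uninterpreted symbols, so it can be handed to a solver for ${\mathcal{T}}_2$ alone; the cost weighed in the abstract is the quadratic number of such constraints per function symbol.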

Roberto Bruttomesso, Alessandro Cimatti, Anders Franzén, Alberto Griggio, Alessandro Santuari, Roberto Sebastiani
Lemma Learning in the Model Evolution Calculus

The Model Evolution ($\mathcal{ME}$) Calculus is a proper lifting to first-order logic of the DPLL procedure, a backtracking search procedure for propositional satisfiability. Like DPLL, the $\mathcal{ME}$ calculus is based on the idea of incrementally building a model of the input formula by alternating constraint propagation steps with non-deterministic decision steps. One of the major conceptual improvements over basic DPLL is lemma learning, a mechanism for generating new formulae that prevent, later in the search, combinations of decision steps guaranteed to lead to failure. We introduce two lemma generation methods for $\mathcal{ME}$ proof procedures, with various degrees of power, effectiveness in reducing search, and computational overhead. Even if formally correct, each of these methods presents complications that do not exist at the propositional level but need to be addressed for learning to be effective in practice for $\mathcal{ME}$. We discuss some of these issues and present initial experimental results on the performance of an implementation of the two learning procedures within our $\mathcal{ME}$ prover Darwin.

Peter Baumgartner, Alexander Fuchs, Cesare Tinelli
Backmatter
Metadata
Title
Logic for Programming, Artificial Intelligence, and Reasoning
Editors
Miki Hermann
Andrei Voronkov
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48282-6
Print ISBN
978-3-540-48281-9
DOI
https://doi.org/10.1007/11916277
