
2017 | Book

Automated Deduction – CADE 26

26th International Conference on Automated Deduction, Gothenburg, Sweden, August 6–11, 2017, Proceedings


About this book

This book constitutes the proceedings of the 26th International Conference on Automated Deduction, CADE-26, held in Gothenburg, Sweden, in August 2017.
The 26 full papers and 5 system descriptions presented were carefully reviewed and selected from 69 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations and practical experience.

The chapter 'Certifying Confluence of Quasi-Decreasing Strongly Deterministic Conditional Term Rewrite Systems' is published open access under a CC BY 4.0 license.

Table of Contents

Frontmatter
Reasoning About Concurrency in High-Assurance, High-Performance Software Systems

We describe our work in the Trustworthy Systems group at Data61 (formerly NICTA) in reasoning about concurrency in high-assurance, high-performance software systems, in which concurrency may come from three different sources: multiple cores, interrupts and application-level interleaving.

June Andronick
Towards Logic-Based Verification of JavaScript Programs

In this position paper, we argue for what we believe is a correct pathway to achieving scalable symbolic verification of JavaScript based on separation logic. We highlight the difficulties imposed by the language, the current state-of-the-art in the literature, and the sequence of steps that needs to be taken. We briefly describe our semi-automatic toolchain for JavaScript verification.

José Fragoso Santos, Philippa Gardner, Petar Maksimović, Daiva Naudžiūnienė
Formal Verification of Financial Algorithms

Many deep issues plaguing today’s financial markets are symptoms of a fundamental problem: The complexity of algorithms underlying modern finance has significantly outpaced the power of traditional tools used to design and regulate them. At Aesthetic Integration, we have pioneered the use of formal verification for analysing the safety and fairness of financial algorithms. With a focus on financial infrastructure (e.g., the matching logics of exchanges and dark pools and FIX connectivity between trading systems), we describe the landscape, and illustrate our Imandra formal verification system on a number of real-world examples. We sketch many open problems and future directions along the way.

Grant Olney Passmore, Denis Ignatovich
Satisfiability Modulo Theories and Assignments

The CDCL procedure for SAT is the archetype of conflict-driven procedures for satisfiability of quantifier-free problems in a single theory. In this paper we lift CDCL to CDSAT (Conflict-Driven Satisfiability), a system for conflict-driven reasoning in combinations of disjoint theories. CDSAT combines theory modules that interact through a global trail representing a candidate model by Boolean and first-order assignments. CDSAT generalizes to generic theory combinations the model-constructing satisfiability calculus (MCSAT) introduced by de Moura and Jovanović. Furthermore, CDSAT generalizes the equality sharing (Nelson-Oppen) approach to theory combination, by allowing theories to share equality information both explicitly through equalities and disequalities, and implicitly through assignments. We identify sufficient conditions for the soundness, completeness, and termination of CDSAT.

Maria Paola Bonacina, Stéphane Graham-Lengrand, Natarajan Shankar
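
A toy illustration of the kind of mixed trail the abstract describes, for the combination of linear rational arithmetic with uninterpreted functions (this example is ours, not taken from the paper): given the input constraints $$x \simeq y$$ and $$f(x) \not\simeq f(y)$$, the trail may contain the first-order decision $$x \leftarrow 0$$ followed by the arithmetic propagation $$y \leftarrow 0$$ (justified by $$x \simeq y$$), at which point the module for uninterpreted functions detects that the two assignments force $$f(x) \simeq f(y)$$, contradicting the input disequality and triggering conflict analysis.
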
Notions of Knowledge in Combinations of Theories Sharing Constructors

One of the most effective methods developed for the analysis of security protocols is an approach based on equational reasoning and unification. In this approach, it is important to have the capability to reason about the knowledge of an intruder. Two important measures of this knowledge, defined modulo equational theories, are deducibility and static equivalence. We present new combination techniques for the study of deducibility and static equivalence in unions of equational theories sharing constructors. Thanks to these techniques, we obtain new modularity results for the decidability of deducibility and static equivalence. In turn, this should allow for the analysis of protocols involving combined equational theories which previous disjoint combination methods could not address due to their non-disjoint axiomatization.

Serdar Erbatur, Andrew M. Marshall, Christophe Ringeissen
On the Combination of the Bernays–Schönfinkel–Ramsey Fragment with Simple Linear Integer Arithmetic

In general, first-order predicate logic extended with linear integer arithmetic is undecidable. We show that the Bernays-Schönfinkel-Ramsey fragment ($$\exists ^* \forall ^*$$-sentences) extended with a restricted form of linear integer arithmetic is decidable via finite ground instantiation. The identified ground instances can be employed to restrict the search space of existing automated reasoning procedures considerably, e.g., when reasoning about quantified properties of array data structures formalized in Bradley, Manna, and Sipma’s array property fragment. Typically, decision procedures for the array property fragment are based on an exhaustive instantiation of universally quantified array indices with all the ground index terms that occur in the formula at hand. Our results reveal that one can get along with significantly fewer instances.

Matthias Horbach, Marco Voigt, Christoph Weidenbach
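
As a simple illustration of exhaustive instantiation in the spirit of the array property fragment (our own toy example, not taken from the paper): the universally quantified formula $$\forall i.\,(0 \le i \wedge i \le k) \rightarrow a(i) \simeq 0$$, where $$k$$ is a free (Skolemized existential) constant, is handled by instantiating the index $$i$$ with the ground index terms occurring in the problem, here $$0$$ and $$k$$, yielding the two ground instances $$(0 \le 0 \wedge 0 \le k) \rightarrow a(0) \simeq 0$$ and $$(0 \le k \wedge k \le k) \rightarrow a(k) \simeq 0$$; the paper's results identify conditions under which even fewer instances suffice.
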
Satisfiability Modulo Transcendental Functions via Incremental Linearization

In this paper we present an abstraction-refinement approach to Satisfiability Modulo the theory of transcendental functions, such as exponentiation and trigonometric functions. The transcendental functions are represented as uninterpreted in the abstract space, which is described in terms of the combined theory of linear arithmetic on the rationals with uninterpreted functions, and are incrementally axiomatized by means of upper- and lower-bounding piecewise-linear functions. Suitable numerical techniques are used to ensure that the abstractions of the transcendental functions are sound even in the presence of irrationals. Our experimental evaluation on benchmarks from verification and mathematics demonstrates the potential of our approach, showing that it compares favorably with delta-satisfiability/interval propagation and methods based on theorem proving.

Alessandro Cimatti, Alberto Griggio, Ahmed Irfan, Marco Roveri, Roberto Sebastiani
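
The piecewise-linear bounding idea can be sketched for a convex transcendental function such as $$\exp$$ (a simplified illustration; the paper's actual lemmas additionally take care of rounding so that only rational coefficients occur): for any point $$c$$ and any interval $$[a, b]$$,

$$\exp(c) + \exp(c)\,(x - c) \;\le\; \exp(x) \;\le\; \exp(a) + \frac{\exp(b) - \exp(a)}{b - a}\,(x - a) \quad \text{for } x \in [a, b],$$

where the tangent on the left is a global lower bound and the secant on the right is an upper bound on $$[a, b]$$; refinement adds such constraints around the points where the current candidate model violates the real semantics.
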
Satisfiability Modulo Bounded Checking

We describe a new approach to find models for a computational higher-order logic with datatypes. The goal is to find counter-examples for conjectures stated in proof assistants. The technique builds on narrowing [14] but relies on a tight integration with a SAT solver to analyze conflicts precisely, eliminate sets of choices that lead to failures, and sometimes prove unsatisfiability. The architecture is reminiscent of that of an SMT solver. We present the rules of the calculus, an implementation, and some promising experimental results.

Simon Cruanes
Short Proofs Without New Variables

Adding and removing redundant clauses is at the core of state-of-the-art SAT solving. Crucial is the ability to add short clauses whose redundancy can be determined in polynomial time. We present a characterization of the strongest notion of clause redundancy (i.e., addition of the clause preserves satisfiability) in terms of an implication relationship. By using a polynomial-time decidable implication relation based on unit propagation, we thus obtain an efficiently checkable redundancy notion. A proof system based on this notion is surprisingly strong, even without the introduction of new variables—the key component of short proofs presented in the proof complexity literature. We demonstrate this strength on the famous pigeon hole formulas by providing short clausal proofs without new variables.

Marijn J. H. Heule, Benjamin Kiesl, Armin Biere
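
A minimal sketch of the unit-propagation-based implication check that such redundancy notions build on (often called RUP); this is a simplification for illustration, not the paper's propagation-redundancy check or its implementation:

```python
# Minimal sketch (assumed simplification, not the paper's implementation):
# checking whether a clause C is implied by a CNF formula F via unit
# propagation.  Literals are non-zero integers in the usual DIMACS style.

def unit_propagate(clauses, assignment):
    """Extend `assignment` by unit propagation; return False on conflict."""
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                          # clause already satisfied
            unassigned = [lit for lit in clause if -lit not in assignment]
            if not unassigned:
                return False                      # clause falsified: conflict
            if len(unassigned) == 1:
                assignment.add(unassigned[0])     # unit clause: force literal
                changed = True
    return True

def implied_via_unit_propagation(formula, clause):
    """True iff F plus the negation of C propagates to a conflict."""
    assignment = {-lit for lit in clause}
    return not unit_propagate(formula, assignment)

# (x1 or x2), (not x1 or x3), (not x2 or x3) implies (x3):
formula = [[1, 2], [-1, 3], [-2, 3]]
print(implied_via_unit_propagation(formula, [3]))   # True
```
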
Relational Constraint Solving in SMT

Relational logic is useful for reasoning about computational problems with relational structures, including high-level system design, architectural configurations of network systems, ontologies, and verification of programs with linked data structures. We present a modular extension of an earlier calculus for the theory of finite sets to a theory of finite relations with such operations as transpose, product, join, and transitive closure. We implement this extension as a theory solver of the SMT solver CVC4. Combining this new solver with the finite model finding features of CVC4 enables several compelling use cases. For instance, native support for relations enables a natural mapping from Alloy, a declarative modeling language based on first-order relational logic, to SMT constraints. It also enables a natural encoding of several description logics with concrete domains, allowing the use of an SMT solver to analyze, for instance, Web Ontology Language (OWL) models. We provide an initial evaluation of our solver on a number of Alloy and OWL models which shows promising results.

Baoluo Meng, Andrew Reynolds, Cesare Tinelli, Clark Barrett
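
To make the relational operations concrete, here is a tiny self-contained illustration of transpose, join, and transitive closure on finite relations represented as sets of pairs (our own toy code, unrelated to the CVC4 implementation):

```python
# Toy finite relations as sets of pairs; illustrative only, not CVC4 code.
def transpose(r):
    return {(b, a) for (a, b) in r}

def join(r, s):
    """Relational composition: (a, c) whenever (a, b) in r and (b, c) in s."""
    return {(a, c) for (a, b) in r for (b2, c) in s if b == b2}

def transitive_closure(r):
    tc = set(r)
    while True:
        extended = tc | join(tc, r)
        if extended == tc:
            return tc
        tc = extended

edges = {(1, 2), (2, 3)}
print(transitive_closure(edges))   # {(1, 2), (2, 3), (1, 3)}
print(transpose(edges))            # {(2, 1), (3, 2)}
```
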
Decision Procedures for Theories of Sets with Measures

In this paper we introduce a decision procedure for checking satisfiability of quantifier-free formulae in the combined theory of sets, measures and arithmetic. Such theories are important in mathematics (e.g. probability theory and measure theory) and in applications. We indicate how these ideas can be used for obtaining a decision procedure for a fragment of the duration calculus.

Markus Bender, Viorica Sofronie-Stokkermans
A Decision Procedure for Restricted Intensional Sets

In this paper we present a decision procedure for Restricted Intensional Sets (RIS), i.e. sets given by a property rather than by enumerating their elements, similar to set comprehensions available in specification languages such as B and Z. The proposed procedure is parametric with respect to a first-order language and theory $$\mathcal{X}$$, providing at least equality and a decision procedure to check for satisfiability of $$\mathcal{X}$$-formulas. We show how this framework can be applied when $$\mathcal{X}$$ is the theory of hereditarily finite sets as is supported by the language CLP($$\mathcal{SET}$$). We also present a working implementation of RIS as part of the $$\{log\}$$ tool and we show how it compares with a mainstream solver and how it helps in the automatic verification of code fragments.

Maximiliano Cristiá, Gianfranco Rossi
Decidability of the Monadic Shallow Linear First-Order Fragment with Straight Dismatching Constraints

The monadic shallow linear Horn fragment is well known to be decidable and has many applications, e.g., in security protocol analysis, tree automata, or abstraction refinement. It was a long-standing open problem how to extend the fragment to the non-Horn case while preserving decidability, which would, e.g., make it possible to express non-determinism in protocols. We prove decidability of the non-Horn monadic shallow linear fragment via ordered resolution further extended with dismatching constraints, and we discuss some applications of the new decidable fragment.

Andreas Teucke, Christoph Weidenbach
Efficient Certified RAT Verification

Clausal proofs have become a popular approach to validate the results of SAT solvers. However, validating clausal proofs in the most widely supported format (DRAT) is expensive even in highly optimized implementations. We present a new format, called LRAT, which extends the DRAT format with hints that facilitate a simple and fast validation algorithm. Checking validity of LRAT proofs can be implemented using trusted systems such as the languages supported by theorem provers. We demonstrate this by implementing two certified LRAT checkers, one in Coq and one in ACL2.

Luís Cruz-Filipe, Marijn J. H. Heule, Warren A. Hunt Jr., Matt Kaufmann, Peter Schneider-Kamp
Efficient Verified (UN)SAT Certificate Checking

We present an efficient formally verified checker for satisfiability and unsatisfiability certificates for Boolean formulas in conjunctive normal form. It utilizes a two-phase approach: starting from a DRAT certificate, the unverified generator computes an enriched certificate, which is checked against the original formula by the verified checker. Using the Isabelle/HOL Refinement Framework, we verify the actual implementation of the checker, specifying the semantics of the formula down to the integer sequence that represents it. On a realistic benchmark suite drawn from the 2016 SAT competition, our approach is more than two times faster than the unverified standard tool drat-trim. Additionally, we implemented a multi-threaded version of the generator, which further reduces the runtime.

Peter Lammich
Translating Between Implicit and Explicit Versions of Proof

The Foundational Proof Certificate (FPC) framework can be used to define the semantics of a wide range of proof evidence. For example, such definitions exist for a number of textbook proof systems as well as for the proof evidence output from some existing theorem proving systems. An important decision in designing a proof certificate format is the choice of how many details are to be placed within certificates. Formats with fewer details are smaller and easier for theorem provers to output but they require more sophistication from checkers since checking will involve some proof reconstruction. Conversely, certificate formats containing many details are larger but are checkable by less sophisticated checkers. Since the FPC framework is based on well-established proof theory principles, proof certificates can be manipulated in meaningful ways. In this paper, we illustrate how it is possible to automate moving from implicit to explicit (elaboration) and from explicit to implicit (distillation) proof evidence via the proof checking of a pair of proof certificates. Performing elaboration makes it possible to transform a proof certificate with details missing into a certificate packed with enough details so that a simple kernel (without support for proof reconstruction) can check the elaborated certificate. We illustrate how trust in only a single, simple checker of explicitly described proofs can be used to provide trust in a range of theorem provers employing a range of proof structures.

Roberto Blanco, Zakaria Chihani, Dale Miller
A Unifying Principle for Clause Elimination in First-Order Logic

Preprocessing techniques for formulas in conjunctive normal form play an important role in first-order theorem proving. To speed up the proving process, these techniques simplify a formula without affecting its satisfiability or unsatisfiability. In this paper, we introduce the principle of implication modulo resolution, which allows us to lift several preprocessing techniques—in particular, several clause-elimination techniques—from the SAT-solving world to first-order logic. We analyze confluence properties of these new techniques and show how implication modulo resolution yields short soundness proofs for the existing first-order techniques of predicate elimination and blocked-clause elimination.

Benjamin Kiesl, Martin Suda
Splitting Proofs for Interpolation

We study interpolant extraction from local first-order refutations. We present a new theoretical perspective on interpolation based on clearly separating the condition on the logical strength of the formula from the requirement on the common signature. This allows us to highlight the space of all interpolants that can be extracted from a refutation as a space of simple choices on how to split the refutation into two parts. We use this new insight to develop an algorithm for extracting interpolants which are linear in the size of the input refutation and can be further optimized using metrics such as the number of non-logical symbols or quantifiers. We implemented the new algorithm in the first-order theorem prover Vampire and evaluated it on a large number of examples coming from the first-order proving community. Our experiments give practical evidence that our work improves the state of the art in first-order interpolation.

Bernhard Gleiss, Laura Kovács, Martin Suda
Detecting Inconsistencies in Large First-Order Knowledge Bases

Large formalizations carry the risk of inconsistency, and hence may lead to instances of spurious reasoning. This paper describes a new approach and tool that automatically probes large first-order axiomatizations for inconsistency, by selecting subsets of the axioms centered on certain function and predicate symbols and handing the subsets to a first-order theorem prover to test for unsatisfiability. The tool has been applied to several large axiomatizations, inconsistencies have been found, inconsistent cores extracted, and semi-automatic analysis of the inconsistent cores has helped to pinpoint the axioms that appear to be the underlying cause of inconsistency.

Stephan Schulz, Geoff Sutcliffe, Josef Urban, Adam Pease
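
A hedged sketch of the symbol-centred probing loop described above (the names, the widening parameter, and the prover callback are our own illustrative assumptions, not the tool's interface):

```python
# Illustrative sketch only: select axiom subsets centred on a symbol and hand
# each subset to an external first-order prover via a user-supplied callback.
def probe_for_inconsistency(axioms, symbols_of, prove_unsat, hops=1):
    """axioms: list of axioms; symbols_of(ax): set of symbols occurring in ax;
    prove_unsat(subset): True if the prover shows the subset unsatisfiable."""
    findings = []
    all_symbols = set().union(*(symbols_of(ax) for ax in axioms))
    for sym in sorted(all_symbols):
        subset = [ax for ax in axioms if sym in symbols_of(ax)]
        for _ in range(hops):                    # widen via shared symbols
            syms = set().union(*(symbols_of(ax) for ax in subset))
            subset = [ax for ax in axioms if symbols_of(ax) & syms]
        if prove_unsat(subset):
            findings.append((sym, subset))       # candidate inconsistent core
    return findings
```
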
Theorem Proving for Metric Temporal Logic over the Naturals

We study translations from Metric Temporal Logic (MTL) over the natural numbers to Linear Temporal Logic (LTL). In particular, we present two approaches for translating from MTL to LTL which preserve the ExpSpace complexity of the satisfiability problem for MTL. In each of these approaches we consider the case where the mapping between states and time points is given by (1) a strict monotonic function and by (2) a non-strict monotonic function (which allows multiple states to be mapped to the same time point). Our translations allow us to utilise LTL solvers to solve satisfiability and we empirically compare the translations, showing in which cases one performs better than the other.

Ullrich Hustadt, Ana Ozaki, Clare Dixon
Scavenger 0.1: A Theorem Prover Based on Conflict Resolution

This paper introduces Scavenger, the first theorem prover for pure first-order logic without equality based on the new conflict resolution calculus. Conflict resolution has a restricted resolution inference rule that resembles (a first-order generalization of) unit propagation as well as a rule for assuming decision literals and a rule for deriving new clauses by (a first-order generalization of) conflict-driven clause learning.

Daniyar Itegulov, John Slaney, Bruno Woltzenlogel Paleo
WorkflowFM: A Logic-Based Framework for Formal Process Specification and Composition

We present a logic-based system for process specification and composition named WorkflowFM. It relies on an embedding of Classical Linear Logic and the so-called proofs-as-processes paradigm within the proof assistant HOL Light. This enables the specification of abstract processes as logical sequents and their composition via formal proof. The result is systematically translated to an executable workflow with formally verified consistency, rigorous resource accounting, and deadlock freedom. The 3-tiered server/client architecture of WorkflowFM allows multiple concurrent users to interact with the system through a purely diagrammatic interface, while the proof is performed automatically on the server.

Petros Papapanagiotou, Jacques Fleuriot
DepQBF 6.0: A Search-Based QBF Solver Beyond Traditional QCDCL

We present the latest major release version 6.0 of the quantified Boolean formula (QBF) solver DepQBF, which is based on QCDCL. QCDCL is an extension of the conflict-driven clause learning (CDCL) paradigm implemented in state-of-the-art propositional satisfiability (SAT) solvers. The Q-resolution calculus (QRES) is a QBF proof system which underlies QCDCL. QCDCL solvers can produce QRES proofs of QBFs in prenex conjunctive normal form (PCNF) as a byproduct of the solving process. In contrast to traditional QCDCL based on QRES, DepQBF 6.0 implements a variant of QCDCL which is based on a generalization of QRES. This generalization is due to a set of additional axioms and leaves the original Q-resolution rules unchanged. The generalization of QRES enables QCDCL to potentially produce exponentially shorter proofs than the traditional variant. We present an overview of the features implemented in DepQBF and report on experimental results which demonstrate the effectiveness of generalized QRES in QCDCL.

Florian Lonsing, Uwe Egly
CSI: New Evidence – A Progress Report

CSI is a strong automated confluence prover for rewrite systems which has been in development since 2010. In this paper we report on recent extensions that make CSI more powerful, secure, and useful. These extensions include improved confluence criteria but also support for uniqueness of normal forms. Most of the implemented techniques produce machine-readable proof output that can be independently verified by an external tool, thus increasing the trust in CSI. We also report on CSIˆho, a tool built on the same framework and similar ideas as CSI that automatically checks confluence of higher-order rewrite systems.

Julian Nagele, Bertram Felgenhauer, Aart Middeldorp
Scalable Fine-Grained Proofs for Formula Processing

We present a framework for processing formulas in automatic theorem provers, with generation of detailed proofs. The main components are a generic contextual recursion algorithm and an extensible set of inference rules. Clausification, skolemization, theory-specific simplifications, and expansion of ‘let’ expressions are instances of this framework. With suitable data structures, proof generation adds only a linear-time overhead, and proofs can be checked in linear time. We implemented the approach in the SMT solver veriT. This allowed us to dramatically simplify the code base while increasing the number of problems for which detailed proofs can be produced, which is important for independent checking and reconstruction in proof assistants.

Haniel Barbosa, Jasmin Christian Blanchette, Pascal Fontaine
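
As a small illustration of the kind of contextual recursion the framework instantiates, here is a sketch of 'let' expansion over a toy term representation (our own simplification; the actual framework also records a detailed proof for every transformation step):

```python
# Toy 'let' expansion by recursion with a substitution context.
# Terms: a string (variable/constant) or a list [head, arg1, ...];
# a binding is written ["let", var, value, body].  Illustrative only.
def expand_lets(term, ctx=None):
    ctx = ctx or {}
    if isinstance(term, str):
        return ctx.get(term, term)               # substitute bound variables
    head, *args = term
    if head == "let":
        var, value, body = args
        new_ctx = dict(ctx)
        new_ctx[var] = expand_lets(value, ctx)   # expand the bound value first
        return expand_lets(body, new_ctx)
    return [head] + [expand_lets(a, ctx) for a in args]

print(expand_lets(["let", "x", ["f", "a"], ["g", "x", "x"]]))
# -> ['g', ['f', 'a'], ['f', 'a']]
```
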

Open Access

Certifying Confluence of Quasi-Decreasing Strongly Deterministic Conditional Term Rewrite Systems

We formalize a confluence criterion for the class of quasi-decreasing strongly deterministic conditional term rewrite systems in Isabelle/HOL: confluence follows if all conditional critical pairs are joinable. However, quasi-decreasingness, strong determinism, and joinability of conditional critical pairs are all undecidable in general. Therefore, we also formalize sufficient criteria for those properties, which we incorporate into the general purpose certifier CeTA as well as the confluence checker ConCon for conditional term rewrite systems.

Christian Sternagel, Thomas Sternagel
A Transfinite Knuth–Bendix Order for Lambda-Free Higher-Order Terms

We generalize the Knuth–Bendix order (KBO) to higher-order terms without $$\lambda $$-abstraction. The restriction of this new order to first-order terms coincides with the traditional KBO. The order has many useful properties, including transitivity, the subterm property, compatibility with contexts (monotonicity), stability under substitution, and well-foundedness. Transfinite weights and argument coefficients can also be supported. The order appears promising as the basis of a higher-order superposition calculus.

Heiko Becker, Jasmin Christian Blanchette, Uwe Waldmann, Daniel Wand
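
For readers unfamiliar with KBO, the following sketch implements the classical first-order order on ground terms only (weights and precedence here are illustrative assumptions; the paper's order additionally handles variables, higher-order heads, and transfinite weights):

```python
# Ground-term KBO sketch; terms are (symbol, [subterms]).  Illustrative only.
WEIGHT = {"f": 1, "g": 1, "a": 1, "b": 1}        # assumed symbol weights
PREC   = {"a": 0, "b": 1, "g": 2, "f": 3}        # assumed precedence

def weight(term):
    head, args = term
    return WEIGHT[head] + sum(weight(t) for t in args)

def kbo_greater(s, t):
    """s >_KBO t for ground terms: compare weights, then heads, then args."""
    ws, wt = weight(s), weight(t)
    if ws != wt:
        return ws > wt
    (f, ss), (g, ts) = s, t
    if f != g:
        return PREC[f] > PREC[g]
    for si, ti in zip(ss, ts):                   # lexicographic on arguments
        if si != ti:
            return kbo_greater(si, ti)
    return False

a, b = ("a", []), ("b", [])
print(kbo_greater(("f", [a, b]), ("g", [b, a])))   # True: equal weight, f > g
```
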
Certifying Safety and Termination Proofs for Integer Transition Systems

Modern program analyzers translate imperative programs to an intermediate formal language like integer transition systems (ITSs), and then analyze properties of ITSs. Because of the high complexity of the task, a number of incorrect proofs are revealed annually in the Software Verification Competitions. In this paper, we establish the trustworthiness of termination and safety proofs for ITSs. To this end we extend our Isabelle/HOL formalization IsaFoR by formalizing several verification techniques for ITSs, such as invariant checking, ranking functions, etc. Consequently, the extracted certifier CeTA can now (in)validate safety and termination proofs for ITSs. We also adapted the program analyzers T2 and AProVE to produce machine-readable proof certificates, and as a result, most termination proofs generated by these tools on a standard benchmark set are now certified.

Marc Brockschmidt, Sebastiaan J. C. Joosten, René Thiemann, Akihisa Yamada
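
As a minimal example of the kind of certificate involved (our own toy instance, not the paper's format): for the one-variable ITS with the single transition $$\tau: x > 0 \wedge x' = x - 1$$, the function $$f(x) = x$$ is a ranking function, since it is bounded from below by $$0$$ whenever the guard holds and strictly decreases along $$\tau$$; a termination certificate essentially records $$f$$ together with these two checks, which the certifier can then revalidate.
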
Biabduction (and Related Problems) in Array Separation Logic

We investigate array separation logic ($$\mathsf{ASL}$$), a variant of symbolic-heap separation logic in which the data structures are either pointers or arrays, i.e., contiguous blocks of memory. This logic provides a language for compositional memory safety proofs of array programs. We focus on the biabduction problem for this logic, which has been established as the key to automatic specification inference at the industrial scale. We present an $$\mathsf{NP}$$ decision procedure for biabduction in $$\mathsf{ASL}$$, and we also show that the problem of finding a consistent solution is $$\mathsf{NP}$$-hard. Along the way, we study satisfiability and entailment in $$\mathsf{ASL}$$, giving decision procedures and complexity bounds for both problems. We show satisfiability to be $$\mathsf{NP}$$-complete, and entailment to be decidable with high complexity. Surprisingly, biabduction turns out to be simpler than entailment because, as we show, the freedom to choose among biabduction solutions lets us dramatically reduce the search space.

James Brotherston, Nikos Gorogiannis, Max Kanovich
Automatically Verifying Temporal Properties of Pointer Programs with Cyclic Proof

We propose a deductive reasoning approach to the automatic verification of temporal properties of pointer programs, based on cyclic proof. We present a proof system whose judgements express that a program has a certain temporal property over memory state assertions in separation logic, and whose rules operate directly on the temporal modalities as well as symbolically executing programs. Cyclic proofs in our system are, as usual, finite proof graphs subject to a natural, decidable soundness condition, encoding a form of proof by infinite descent. We present a proof system tailored to proving CTL properties of nondeterministic pointer programs, and then adapt this system to handle fair execution conditions. We show both systems to be sound, and provide an implementation of each in the Cyclist theorem prover, yielding an automated tool that is capable of automatically discovering proofs of (fair) temporal properties of heap-aware programs. Experimental evaluation of our tool indicates that our approach is viable, and offers an interesting alternative to traditional model checking techniques.

Gadi Tellez, James Brotherston
Satisfiability of Compositional Separation Logic with Tree Predicates and Data Constraints

In this paper, we propose compositional separation logic with tree predicates (CSLTP), where properties such as sortedness and height-balancedness of complex data structures (for instance, AVL trees and red-black trees) can be fully specified. We show that the satisfiability problem of CSLTP is decidable. The main technical ingredient of the decision procedure is to compute the least fixed point of a class of inductively defined predicates that are non-linear and involve dense-order and difference-bound constraints, which are of independent interest.

Zhaowei Xu, Taolue Chen, Zhilin Wu
A Proof Strategy Language and Proof Script Generation for Isabelle/HOL

We introduce a language, PSL, designed to capture high level proof strategies in Isabelle/HOL. Given a strategy and a proof obligation, PSL’s runtime system generates and combines various tactics to explore a large search space with low memory usage. Upon success, PSL generates an efficient proof script, which bypasses a large part of the proof search. We also present PSL’s monadic interpreter to show that the underlying idea of PSL is transferable to other ITPs.

Yutaka Nagashima, Ramana Kumar
The Binomial Pricing Model in Finance: A Formalization in Isabelle

The binomial pricing model is an option valuation method based on a discrete-time model of the evolution of an equity market. It allows one to determine the fair price of derivatives from the payoff they generate at their expiration date. A formalization of this model in the proof assistant Isabelle is provided. We formalize essential notions in finance such as the no-arbitrage principle and prove that, under the hypotheses of the model, the market is complete, meaning that any European derivative can be replicated by creating a portfolio that generates the same payoff regardless of the evolution of the market.

Mnacho Echenim, Nicolas Peltier
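
Since the binomial model itself is standard, an executable sketch may help fix intuitions; the following prices a European call in the Cox-Ross-Rubinstein parameterisation (parameters are illustrative, and this is of course plain Python, not the Isabelle formalization):

```python
import math

def binomial_call_price(S0, K, r, sigma, T, steps):
    """Fair price of a European call in the discrete binomial (CRR) model."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))          # up factor per step
    d = 1.0 / u                                  # down factor per step
    p = (math.exp(r * dt) - d) / (u - d)         # risk-neutral up probability
    price = 0.0
    for k in range(steps + 1):                   # k = number of up moves
        prob = math.comb(steps, k) * p**k * (1 - p)**(steps - k)
        payoff = max(S0 * u**k * d**(steps - k) - K, 0.0)
        price += prob * payoff
    return math.exp(-r * T) * price              # discount expected payoff

print(round(binomial_call_price(S0=100, K=100, r=0.05, sigma=0.2, T=1.0, steps=200), 2))
```
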
Monte Carlo Tableau Proof Search

We study Monte Carlo Tree Search to guide proof search in tableau calculi. This includes proposing a number of proof-state evaluation heuristics, some of which are learnt from previous proofs. We present an implementation based on the leanCoP prover. The system is trained and evaluated on a large suite of related problems coming from the Mizar proof assistant, showing that it is capable of finding new and different proofs.

Michael Färber, Cezary Kaliszyk, Josef Urban
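
The selection rule typically used in Monte Carlo Tree Search is the standard UCT formula (the paper's contribution lies in the proof-state evaluation heuristics that feed into such a scheme, so this formula is given only as background):

$$\mathrm{UCT}(i) \;=\; \frac{w_i}{n_i} \;+\; c\,\sqrt{\frac{\ln N}{n_i}},$$

where $$w_i$$ is the accumulated reward of child $$i$$, $$n_i$$ its visit count, $$N$$ the visit count of its parent, and $$c$$ an exploration constant; the child maximising this value is explored next.
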
Backmatter
Metadata
Title: Automated Deduction – CADE 26
Editor: Leonardo de Moura
Copyright Year: 2017
Electronic ISBN: 978-3-319-63046-5
Print ISBN: 978-3-319-63045-8
DOI: https://doi.org/10.1007/978-3-319-63046-5
