
2018 | Book

Interactive Theorem Proving

9th International Conference, ITP 2018, Held as Part of the Federated Logic Conference, FloC 2018, Oxford, UK, July 9-12, 2018, Proceedings


About this book

This book constitutes the refereed proceedings of the 9th International Conference on Interactive Theorem Proving, ITP 2018, held in Oxford, UK, in July 2018.

The 32 full papers and 5 short papers presented were carefully reviewed and selected from 65 submissions. The papers feature research in the area of logical frameworks and interactive proof assistants. The topics include theoretical foundations and implementation aspects of the technology, as well as applications to verifying hardware and software systems to ensure their safety and security, and to the formal verification of mathematical results.

Table of contents

Frontmatter
Physical Addressing on Real Hardware in Isabelle/HOL

Modern computing platforms are inherently complex and diverse: a heterogeneous collection of cores, interconnects, programmable memory translation units, and devices means that there is no single physical address space, and each core or DMA device may see other devices at different physical addresses. This is a problem because correct operation of system software relies on correct configuration of these interconnects, and current operating systems (and associated formal specifications) make assumptions about global physical addresses which do not hold. We present a formal model in Isabelle/HOL to express this complex addressing hardware that captures the intricacies of different real platforms or Systems-on-Chip (SoCs), and demonstrate its expressivity by showing, as an example, the impossibility of correctly configuring a MIPS R4600 TLB as specified in its documentation. Such a model not only facilitates proofs about hardware, but is used to generate correct code at compile time and device configuration at runtime in the Barrelfish research OS.

Reto Achermann, Lukas Humbel, David Cock, Timothy Roscoe

Open Access

Towards Certified Meta-Programming with Typed Template-Coq

Template-Coq (https://template-coq.github.io/template-coq) is a plugin for Coq, originally implemented by Malecha [18], which provides a reifier for Coq terms and global declarations, as represented in the Coq kernel, as well as a denotation command. Initially, it was developed for the purpose of writing functions on Coq’s AST in Gallina. Recently, it was used in the CertiCoq certified compiler project [4], as its front-end language, to derive parametricity properties [3], and to extract Coq terms to a CBV λ-calculus [13]. However, the syntax lacked semantics, be it typing semantics or operational semantics, which should reflect, as formal specifications in Coq, the semantics of Coq’s type theory itself. The tool was also rather bare bones, providing only rudimentary quoting and unquoting commands. We generalize it to handle the entire Calculus of Inductive Constructions (CIC), as implemented by Coq, including the kernel’s declaration structures for definitions and inductives, and implement a monad for general manipulation of Coq’s logical environment. We demonstrate how this setup allows Coq users to define many kinds of general purpose plugins, whose correctness can be readily proved in the system itself, and that can be run efficiently after extraction. We give a few examples of implemented plugins, including a parametricity translation. We also advocate the use of Template-Coq as a foundation for higher-level tools.

Abhishek Anand, Simon Boulier, Cyril Cohen, Matthieu Sozeau, Nicolas Tabareau
Formalizing Ring Theory in PVS

This work describes the ongoing specification and formalization in the PVS proof assistant of some definitions and theorems of ring theory in abstract algebra, and briefly presents some of the results intended to be formalized. So far, some important theorems from ring theory have been specified and formally proved, such as the First Isomorphism Theorem, the Binomial Theorem and the lemma establishing that every finite integral domain with cardinality greater than one is a field. The goal of the project in progress is to specify and formalize in PVS the main theorems from ring theory presented in undergraduate textbooks of abstract algebra; in the short term the authors intend to formalize: (i) the Second and the Third Isomorphism Theorems for rings; (ii) the primality of the characteristic of a ring without zero divisors; (iii) definitions of prime and maximal ideals and theorems related to those concepts. The developed formalization applies mainly a part of the NASA PVS library for abstract algebra specified in the theory algebra.
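
For orientation, the First Isomorphism Theorem mentioned above is the classical statement, paraphrased here rather than quoted from the PVS development: for any ring homomorphism $\varphi : R \to S$, the kernel $\ker\varphi$ is an ideal of $R$ and

  $R / \ker\varphi \;\cong\; \varphi(R)$.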

Andréia B. Avelar da Silva, Thaynara Arielly de Lima, André Luiz Galdino
Software Tool Support for Modular Reasoning in Modal Logics of Actions

We present a software tool for reasoning in and about propositional sequent calculi for modal logics of actions. As an example, we implement the display calculus D.EAK of dynamic epistemic logic. The tool generates embeddings of the calculus in the theorem prover Isabelle/HOL for formalising proofs about D.EAK. Integrating propositional reasoning in D.EAK with inductive reasoning in Isabelle/HOL, we verify the solution of the muddy children puzzle for any number of muddy children. There also is a set of meta-tools that allows us to adapt the software for a wide variety of user defined calculi.

Samuel Balco, Sabine Frittella, Giuseppe Greco, Alexander Kurz, Alessandra Palmigiano
Backwards and Forwards with Separation Logic

The use of Hoare logic in combination with weakest preconditions and strongest postconditions is a standard tool for program verification, known as backward and forward reasoning. In this paper we extend these techniques to allow backward and forward reasoning for separation logic. While the former is derived directly from the standard operators of separation logic, the latter uses a new one. We implement our framework in the interactive proof assistant Isabelle/HOL, and enable automation with several interactive proof tactics.

Callum Bannister, Peter Höfner, Gerwin Klein
A Coq Formalisation of SQL’s Execution Engines

In this article, we use the Coq proof assistant to specify and verify the low-level layer of SQL’s execution engines. To reach our goals, we first design a high-level Coq specification for data-centric operators intended to capture their essence. We then provide two Coq implementations of our specification. The first one, the physical algebra, consists of the low-level operators found in systems such as PostgreSQL or Oracle. The second, SQL algebra, is an extended relational algebra that provides a semantics for SQL. Last, we formally relate the physical algebra and SQL algebra. By proving that the physical algebra implements SQL algebra, we give high-level assurances that physical-algebra and SQL-algebra expressions enjoy the same semantics. All this yields, to the best of our knowledge, the first formalisation and verification of the low-level layer of an RDBMS as well as of the physical optimisation step of SQL compilation: fundamental steps towards mechanising SQL’s compilation chain.

V. Benzaken, É. Contejean, Ch. Keller, E. Martins
A Coq Tactic for Equality Learning in Linear Arithmetic

Coq provides linear arithmetic tactics such as omega or lia. Currently, these tactics either fully prove the current goal in progress, or fail. We propose to improve this behavior: when the goal is not provable in linear arithmetic, we strengthen the hypotheses with new equalities discovered from the linear inequalities. These equalities may help other Coq tactics to discharge the goal. In other words, we apply – in interactive proofs – a seminal idea of SMT-solving: combining tactics by exchanging equalities. The paper describes how we have implemented equality learning in a new Coq tactic, dealing with linear arithmetic over rationals. It also illustrates how this tactic interacts with other Coq tactics.
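
As a schematic illustration of the idea (our example, not one taken from the paper): if the context contains the inequalities $x + y \le 1$ and $1 \le x + y$ while the goal itself is not linear, the tactic learns the equality

  $x + y = 1$

and adds it to the hypotheses, where a subsequent rewriting or congruence tactic may use it to discharge a goal that linear arithmetic alone could not.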

Sylvain Boulmé, Alexandre Maréchal
The Coinductive Formulation of Common Knowledge

We study the coinductive formulation of common knowledge in type theory. We formalise both the traditional relational semantics and an operator semantics, similar in form to the epistemic system S5, but at the level of events on possible worlds rather than as a logical derivation system. We have two major new results. Firstly, the operator semantics is equivalent to the relational semantics: we discovered that this requires a new hypothesis of semantic entailment on operators, not known in previous literature. Secondly, the coinductive version of common knowledge is equivalent to the traditional transitive closure on the relational interpretation. All results are formalised in the proof assistants Agda and Coq.

Colm Baston, Venanzio Capretta
Tactics and Certificates in Meta Dedukti

Tactics are often featured in proof assistants to simplify the interactive development of proofs by allowing domain-specific automation. Moreover, tactics are also helpful to check the output of automatic theorem provers because they can rebuild details that the provers omit. We use meta-programming to define a tactic language for the Dedukti logical framework which can be used both for checking certificates produced by automatic provers and for developing proofs interactively. More precisely, we propose a dependently-typed tactic language for first-order logic in Meta Dedukti and an untyped tactic language built on top of the typed one. We show the expressivity of these languages on two applications: a transfer tactic and a resolution certificate checker.

Raphaël Cauderlier

Open Access

A Formalization of the LLL Basis Reduction Algorithm

The LLL basis reduction algorithm was the first polynomial-time algorithm to compute a reduced basis of a given lattice, and hence also a short vector in the lattice. It thereby approximates an NP-hard problem where the approximation quality solely depends on the dimension of the lattice, but not the lattice itself. The algorithm has several applications in number theory, computer algebra and cryptography. In this paper, we develop the first mechanized soundness proof of the LLL algorithm using Isabelle/HOL. We additionally integrate one application of LLL, namely a verified factorization algorithm for univariate integer polynomials which runs in polynomial time.
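
For concreteness, the dimension-only approximation quality referred to above is the classical LLL guarantee (stated here for the standard parameter $\delta = 3/4$, and not quoted from the formalization): the first vector $b_1$ of an LLL-reduced basis of an $n$-dimensional lattice $L$ satisfies

  $\lVert b_1 \rVert \;\le\; 2^{(n-1)/2} \, \lambda_1(L)$,

where $\lambda_1(L)$ is the length of a shortest non-zero vector of $L$.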

Jose Divasón, Sebastiaan Joosten, René Thiemann, Akihisa Yamada
A Formal Proof of the Minor-Exclusion Property for Treewidth-Two Graphs

We give a formal and constructive proof in Coq/Ssreflect of the known result that the graphs of treewidth two are exactly those that do not admit K4 as a minor. This result is a milestone towards a formal proof of the recent result that isomorphism of treewidth-two graphs can be finitely axiomatized. The proof is based on a function extracting terms from K4-free graphs in such a way that the interpretation of an extracted term yields a treewidth-two graph isomorphic to the original graph.

Christian Doczkal, Guillaume Combette, Damien Pous
Verified Analysis of Random Binary Tree Structures

This work is a case study of the formal verification and complexity analysis of some famous probabilistic algorithms and data structures in the proof assistant Isabelle/HOL. In particular, we consider the expected number of comparisons in randomised quicksort, the relationship between randomised quicksort and average-case deterministic quicksort, the expected shape of an unbalanced random Binary Search Tree, and the expected shape of a Treap. The last two have, to our knowledge, not been analysed using a theorem prover before and the last one is of particular interest because it involves continuous distributions.
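
For example, the expected number of comparisons in the first case study is the textbook quantity (recalled here for orientation, not in the authors’ notation)

  $\mathbb{E}[C_n] \;=\; 2(n+1)H_n - 4n \;\sim\; 2 n \ln n$,

where $H_n$ denotes the $n$-th harmonic number.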

Manuel Eberl, Max W. Haslbeck, Tobias Nipkow
HOL Light QE

We are interested in algorithms that manipulate mathematical expressions in mathematically meaningful ways. Expressions are syntactic, but most logics do not allow one to discuss syntax. CTTqe is a version of Church’s type theory that includes quotation and evaluation operators, akin to quote and eval in the Lisp programming language. Since the HOL logic is also a version of Church’s type theory, we decided to add quotation and evaluation to HOL Light to demonstrate the implementability of CTTqe and the benefits of having quotation and evaluation in a proof assistant. The resulting system is called HOL Light QE. Here we document the design of HOL Light QE and the challenges that needed to be overcome. The resulting implementation is freely available.

Jacques Carette, William M. Farmer, Patrick Laskowski
Efficient Mendler-Style Lambda-Encodings in Cedille

It is common to model inductive datatypes as least fixed points of functors. We show that within the Cedille type theory we can relax functoriality constraints and generically derive an induction principle for Mendler-style lambda-encoded inductive datatypes, which arise as least fixed points of covariant schemes where the morphism lifting is defined only on identities. Additionally, we implement a destructor for these lambda-encodings that runs in constant time. As a result, we can define lambda-encoded natural numbers with an induction principle and a constant-time predecessor function so that the normal form of a numeral requires only linear space. The paper also includes several more advanced examples.

Denis Firsov, Richard Blair, Aaron Stump
Verification of PCP-Related Computational Reductions in Coq

We formally verify several computational reductions concerning the Post correspondence problem (PCP) using the proof assistant Coq. Our verification includes a reduction of the halting problem for Turing machines to string rewriting, a reduction of string rewriting to PCP, and reductions of PCP to the intersection problem and the palindrome problem for context-free grammars.
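
For readers unfamiliar with the problem, PCP asks, in its standard formulation (paraphrased here): given a finite list of string pairs $(x_1, y_1), \dots, (x_k, y_k)$ over some alphabet, is there a non-empty index sequence $i_1, \dots, i_n$ such that

  $x_{i_1} x_{i_2} \cdots x_{i_n} \;=\; y_{i_1} y_{i_2} \cdots y_{i_n}$?

The problem is undecidable, which is what gives the chain of reductions above its force.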

Yannick Forster, Edith Heiter, Gert Smolka
ProofWatch: Watchlist Guidance for Large Theories in E

A watchlist (also hint list) is a mechanism that allows related proofs to guide a proof search for a new conjecture. This mechanism has been used with the Otter and Prover9 theorem provers, both for interactive formalizations and for human-assisted proving of open conjectures in small theories. In this work we explore the use of watchlists in large theories coming from first-order translations of large ITP libraries, aiming at improving hammer-style automation by smarter internal guidance of the ATP systems. In particular, we (i) design watchlist-based clause evaluation heuristics inside the E ATP system, and (ii) develop new proof guiding algorithms that load many previous proofs inside the ATP and focus the proof search using a dynamically updated notion of proof matching. The methods are evaluated on a large set of problems coming from the Mizar library, showing significant improvement of E’s standard portfolio of strategies, and also of the previous best set of strategies invented for Mizar by evolutionary methods.

Zarathustra Goertzel, Jan Jakubův, Stephan Schulz, Josef Urban
Reification by Parametricity
Fast Setup for Proof by Reflection, in Two Lines of Ltac

We present a new strategy for performing reification in Coq. That is, we show how to generate first-class abstract syntax trees from “native” terms of Coq’s logic, suitable as inputs to verified compilers or procedures in the proof-by-reflection style. Our new strategy, based on simple generalization of subterms as variables, is straightforward, short, and fast. In its pure form, it is only complete for constants and function applications, but “let” binders, eliminators, lambdas, and quantifiers can be accommodated through lightweight coding conventions or preprocessing. We survey the existing methods of reification across multiple Coq metaprogramming facilities, describing various design choices and tricks that can be used to speed them up, as well as various limitations. We report benchmarking results for 18 variants, in addition to our own, finding that our own reification outperforms 16 of these methods in all cases, and one additional method in some cases; writing an OCaml plugin is the only method tested to be faster. Our method is the most concise of the strategies we considered, reifying terms using only two to four lines of Ltac—beyond lists of the identifiers to reify and their reified variants. Additionally, our strategy automatically provides error messages that are no less helpful than Coq’s own error messages.

Jason Gross, Andres Erbsen, Adam Chlipala
Verifying the LTL to Büchi Automata Translation via Very Weak Alternating Automata

We present a formalization of a translation from LTL formulae to generalized Büchi automata in the HOL4 theorem prover. Translations from temporal logics to automata are at the core of model checking algorithms based on automata-theoretic techniques. The translation we verify proceeds in two steps: it produces very weak alternating automata at an intermediate stage, and then ultimately produces a generalized Büchi automaton. After verifying both transformations, we also encode both of these automata models using a generic, functional graph type, and use the CakeML compiler to generate fully verified machine code implementing the translation.

Simon Jantsch, Michael Norrish
CalcCheck: A Proof Checker for Teaching the “Logical Approach to Discrete Math”

For calculational proofs as they are propagated by Gries and Schneider’s textbook classic “A Logical Approach to Discrete Math” (LADM), automated proof checking is feasible and can provide useful feedback to students acquiring and practicing basic proof skills. We report on CalcCheck, a system which implements a proof checker for a mathematical language that resembles the rigorous but informal mathematical style of LADM so closely that students very quickly recognise the system, which gives them immediate feedback, not as an obstacle but as an aid, and realise that the problem is finding proofs. Students interact with this proof checker through the “web application” front-end, which provides some assistance for proof entry but intentionally no assistance for proof finding. Upon request, the system displays, side by side with the student input, a version of that input annotated with the results of checking each step for correctness. CalcCheck has now been used twice for teaching an LADM-based second-year discrete mathematics course, and students have been solving exercises and submitting assignments, midterms, and final exams on the system — for examinations, there is the option to disable proof checking and leave only syntax checking enabled. CalcCheck also performed the grading, with very limited human overriding necessary.
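
As a flavour of the calculational style involved (a generic LADM-style step, not an excerpt from the paper), a proof is a chain of formulae connected by hinted steps, for instance:

    (p ∧ q) ∨ p
  ≡    ⟨ Absorption ⟩
    p

Each step is checked individually, so the feedback can point at the precise hint that does not justify its step.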

Wolfram Kahl
Understanding Parameters of Deductive Verification: An Empirical Investigation of KeY

As formal verification of software systems is a complex task comprising many algorithms and heuristics, modern theorem provers offer numerous parameters that are to be selected by a user to control how a piece of software is verified. Evidently, the number of parameters even increases with each new release. One challenge is that default parameters are often insufficient to close proofs automatically and are not optimal in terms of verification effort. The verification phase becomes hardly accessible for non-experts, who typically must follow a time-consuming trial-and-error strategy to choose the right parameters for even trivial pieces of software. To aid users of deductive verification, we apply machine learning techniques to empirically investigate which parameters and combinations thereof impair or improve provability and verification effort. We exemplify our procedure on the deductive verification system KeY 2.6.1 and specified extracts of OpenJDK, and formulate 53 hypotheses of which only three have been rejected. We identified parameters that represent a trade-off between high provability and low verification effort, enabling the possibility to prioritize the selection of a parameter for either direction. Our insights give tool builders a better understanding of their control parameters and constitute a stepping stone towards automated deductive verification and better applicability of verification tools for non-experts.

Alexander Knüppel, Thomas Thüm, Carsten Immanuel Pardylla, Ina Schaefer
Software Verification with ITPs Should Use Binary Code Extraction to Reduce the TCB
(Short Paper)

LCF-style provers emphasise that all results are secured by logical inference, and yet their current facilities for code extraction or code generation fall short of this high standard. This paper argues that extraction mechanisms with a small trusted computing base (TCB) ought to be used instead, pointing out that the recent CakeML and Œuf projects show that this is possible in HOL and within reach in Coq.

Ramana Kumar, Eric Mullen, Zachary Tatlock, Magnus O. Myreen
Proof Pearl: Constructive Extraction of Cycle Finding Algorithms

We present a short implementation of the well-known Tortoise and Hare cycle finding algorithm in the constructive setting of Coq. This algorithm is interesting from a constructive perspective because it is both very simple and potentially non-terminating (depending on the input). To overcome potential non-termination, we encode the given termination argument (there exists a cycle) into a bar inductive predicate that we use as a termination certificate. From this development, we extract the standard OCaml implementation of this algorithm. We generalize the method to the full Floyd’s algorithm that computes the entry point and the period of the cycle in the iterated sequence, and to the more efficient Brent’s algorithm for computing the period only, again with accurate extractions of their respective standard OCaml implementations.
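
For comparison, a standard unverified OCaml rendering of Floyd’s algorithm looks roughly as follows; this is the classical textbook version with names of our choosing, not the code extracted from the Coq development.

  (* Classical Floyd cycle finding: for f and a starting value x0 whose iterated
     sequence x0, f x0, f (f x0), ... is eventually cyclic, return (mu, lam),
     where mu is the index of the cycle's entry point and lam its period.
     Textbook sketch only, not the paper's extracted code. *)
  let floyd (f : 'a -> 'a) (x0 : 'a) : int * int =
    (* Phase 1: tortoise advances by one step, hare by two, until they meet
       somewhere inside the cycle. *)
    let rec race t h = if t = h then t else race (f t) (f (f h)) in
    let meeting = race (f x0) (f (f x0)) in
    (* Phase 2: restart one pointer at x0; moving both one step at a time,
       they meet exactly at the entry point of the cycle. *)
    let rec find_mu t h mu = if t = h then (mu, t) else find_mu (f t) (f h) (mu + 1) in
    let (mu, entry) = find_mu x0 meeting 0 in
    (* Phase 3: the period is the number of steps needed to return to the entry. *)
    let rec find_lam x lam = if x = entry then lam else find_lam (f x) (lam + 1) in
    let lam = find_lam (f entry) 1 in
    (mu, lam)

  (* Example: iterating f n = (n * n + 1) mod 255 from 3 is eventually periodic;
     floyd f 3 returns the entry index and the period of that sequence. *)
  let () =
    let f n = (n * n + 1) mod 255 in
    let (mu, lam) = floyd f 3 in
    Printf.printf "entry = %d, period = %d\n" mu lam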

Dominique Larchey-Wendling
Fast Machine Words in Isabelle/HOL

Code generated from a verified formalisation typically runs faster when it uses machine words instead of a syntactic representation of integers. This paper presents a library for Isabelle/HOL that links the existing formalisation of words to the machine words that the four target languages of Isabelle/HOL’s code generator provide. Our design ensures that (i) Isabelle/HOL machine words can be mapped soundly and efficiently to all target languages despite the differences in the APIs; (ii) they can be used uniformly with the three evaluation engines in Isabelle/HOL, namely code generation, normalisation by evaluation, and term rewriting; and (iii) they blend in with the existing formalisations of machine words. Several large-scale formalisation projects use our library to speed up their generated code. To validate the unverified link between machine words in the logic and those in the target languages, we extended Isabelle/HOL with a general-purpose testing facility that compiles test cases expressed within Isabelle/HOL to the four target languages and runs them with the most common implementations of each language. When we applied this to our library of machine words, we discovered miscomputations in the 64-bit word library of one of the target-language implementations.

Andreas Lochbihler
Relational Parametricity and Quotient Preservation for Modular (Co)datatypes

Bounded natural functors (BNFs) provide a modular framework for the construction of (co)datatypes in higher-order logic. Their functorial operations, the mapper and relator, are restricted to a subset of the parameters, namely those where recursion can take place. For certain applications, such as free theorems, data refinement, quotients, and generalised rewriting, it is desirable that these operations do not ignore the other parameters. In this paper, we generalise BNFs such that the mapper and relator act on both covariant and contravariant parameters. Our generalisation, BNFcc, is closed under functor composition and least and greatest fixpoints. In particular, every (co)datatype is a BNFcc. We prove that subtypes inherit the BNFcc structure under conditions that generalise those for the BNF case. We also identify sufficient conditions under which a BNFcc preserves quotients. Our development is formalised abstractly in Isabelle/HOL in such a way that it integrates seamlessly with the existing parametricity infrastructure.

Andreas Lochbihler, Joshua Schneider
Towards Verified Handwritten Calculational Proofs
(Short Paper)

Despite great advances in computer-assisted proof systems, writing formal proofs using a traditional computer is still challenging due to mouse-and-keyboard interaction. This leads to scientists often resorting to pen and paper to write their proofs. However, when handwriting a proof, there is no formal guarantee that the proof is correct. In this paper we address this issue and present the initial steps towards a system that allows users to handwrite proofs using a pen-based device and that communicates with an external theorem prover to support the users throughout the proof writing process. We focus on calculational proofs, whereby a theorem is proved by a chain of formulae, each transformed in some way into the next. We present the implementation of a proof-of-concept prototype that can formally verify handwritten calculational proofs without the need to learn the specific syntax of theorem provers.

Alexandra Mendes, João F. Ferreira

Open Access

A Formally Verified Solver for Homogeneous Linear Diophantine Equations

In this work we are interested in minimal complete sets of solutions for homogeneous linear diophantine equations. Such equations naturally arise during AC-unification—that is, unification in the presence of associative and commutative symbols. Minimal complete sets of solutions are for example required to compute AC-critical pairs. We present a verified solver for homogeneous linear diophantine equations that we formalized in Isabelle/HOL. Our work provides the basis for formalizing AC-unification and will eventually enable the certification of automated AC-confluence and AC-completion tools.
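
As a small worked instance of the notion (ours, not taken from the paper): over the natural numbers, the homogeneous equation $x_1 + 2 x_2 = 3 y_1$ has the minimal complete set of solutions

  $\{(3,0,1),\ (1,1,1),\ (0,3,2)\}$

for $(x_1, x_2, y_1)$: none of these solutions dominates another componentwise, and every solution is a sum of copies of them.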

Florian Meßner, Julian Parsert, Jonas Schöpf, Christian Sternagel
Formalizing Implicative Algebras in Coq

We present a Coq formalization of Alexandre Miquel’s implicative algebras [18], which aim at providing a general algebraic framework for the study of classical realizability models. We first give a self-contained presentation of the underlying implicative structures, which roughly consist of a complete lattice equipped with a binary law representing the implication. We then explain how these structures can be turned into models by adding separators, giving rise to the so-called implicative algebras. Additionally, we show how they generalize Boolean and Heyting algebras as well as the usual algebraic structures used in the analysis of classical realizability.

Étienne Miquey
Boosting the Reuse of Formal Specifications

Advances in theorem proving have enabled the emergence of a variety of formal developments that, over the years, have resulted in large corpuses of formalizations. For example, the NASA PVS Library is a collection of 55 formal developments written in the Prototype Verification System (PVS) over a period of almost 30 years and containing more than 28000 proofs. Unfortunately, the simple accumulation of formal developments does not guarantee their reusability. In fact, in formal systems with very expressive specification languages, it is often the case that a particular conceptual object is defined in different ways. This paper presents a technique to establish sound connections between formal definitions. Such connections support the possibility of (partial) borrowing of proved results from one formal description into another, improving the reusability of formal developments. The technique is described using concepts from the field of universal algebra and algebraic specification. The technique is illustrated with concrete examples taken from formalizations available in the NASA PVS Library.

Mariano M. Moscato, Carlos G. Lopez Pombo, César A. Muñoz, Marco A. Feliú

Open Access

Towards Formal Foundations for Game Theory

Utility functions form an essential part of game theory and economics. In order to guarantee the existence of these utility functions, sufficient properties are assumed in an axiomatic manner. In this paper we discuss these axioms and the von Neumann-Morgenstern Utility Theorem, which names precise assumptions under which expected utility functions exist. We formalize these results in Isabelle/HOL. The formalization includes formal definitions of the underlying concepts, including continuity and independence of preferences. We make the dependencies more precise and highlight some consequences for a formalization of game theory.
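
For context, the theorem in question states, in its usual textbook form rather than the authors’ Isabelle notation: a preference relation $\succeq$ on lotteries that is complete, transitive, continuous and independent can be represented by a utility function $u$ on outcomes such that

  $p \succeq q \iff \mathbb{E}_p[u] \ge \mathbb{E}_q[u]$,

and $u$ is unique up to positive affine transformation.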

Julian Parsert, Cezary Kaliszyk

Open Access

Verified Timing Transformations in Synchronous Circuits with λπ-Ware

We define a DSL for hardware description, called λπ-Ware, embedded in the dependently-typed language Agda, which makes the DSL well-scoped and well-typed by construction. Other advantages of dependent types are that circuit models can be simulated and verified in the same language, and properties can be proven not only of specific circuits, but of circuit generators describing (infinite) families of circuits. This paper focuses on the relations between circuits computing the same values, but with different levels of statefulness. We define common recursion schemes, in combinational and sequential versions, and express known circuits using these recursion patterns. Finally, we define a notion of convertibility between circuits with different levels of statefulness, and prove the core convertibility property between the combinational and sequential versions of our vector iteration primitive. Circuits defined using the recursion schemes can thus have different architectures with a guarantee of functional equivalence up to timing.

João Paulo Pizani Flor, Wouter Swierstra
A Formal Equational Theory for Call-By-Push-Value

Establishing that two programs are contextually equivalent is hard, yet essential for reasoning about semantics preserving program transformations such as compiler optimizations. We adapt Lassen’s normal form bisimulations technique to establish the soundness of equational theories for both an untyped call-by-value λ-calculus and a variant of Levy’s call-by-push-value language. We demonstrate that our equational theory significantly simplifies the verification of optimizations.

Christine Rizkallah, Dmitri Garbuzov, Steve Zdancewic
Program Verification in the Presence of Cached Address Translation

Operating system (OS) kernels achieve isolation between user-level processes using multi-level page tables and translation lookaside buffers (TLBs). Controlling the TLB correctly is a fundamental security property—yet all large-scale formal OS verification projects leave correct functionality of the TLB as an assumption. We present a logic for reasoning about low-level programs in the presence of TLB address translation. We extract invariants and necessary conditions for correct TLB operation that mirror the informal reasoning of OS engineers. Our program logic reduces to a standard logic for user-level reasoning, reduces to side-condition checks for kernel-level reasoning, and can handle typical OS kernel tasks such as context switching and page table manipulations.

Hira Taqdees Syeda, Gerwin Klein
Verified Tail Bounds for Randomized Programs

We mechanize a theorem by Karp, along with several extensions, that provides an easy-to-use “cookbook” method for verifying tail bounds of randomized algorithms, much like the traditional “Master Theorem” gives bounds for deterministic algorithms. We apply these results to several examples: the number of comparisons performed by QuickSort, the span of parallel QuickSort, the height of randomly generated binary search trees, and the number of rounds needed for a distributed leader election protocol. Because the constants involved in our symbolic bounds are concrete, we are able to use them to derive numerical probability bounds for various input sizes for these examples.

Joseph Tassarotti, Robert Harper
Verified Memoization and Dynamic Programming

We present a lightweight framework in Isabelle/HOL for the automatic verified (functional or imperative) memoization of recursive functions. Our tool constructs a memoized version of the recursive function and proves a correspondence theorem between the two functions. A number of simple techniques allow us to achieve bottom-up computation and space-efficient memoization. The framework’s utility is demonstrated on a number of representative dynamic programming problems.
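
To illustrate the kind of transformation the framework automates (a hand-written OCaml sketch of top-down memoization, not the Isabelle/HOL tool itself; all names are ours):

  (* Hand-written memoization of a naive recursion, as a point of comparison:
     the table turns the exponential tree of recursive calls into linear work.
     This merely illustrates the idea; the paper's framework derives such a
     version inside Isabelle/HOL and proves it equal to the original function. *)
  let fib_memo : int -> int =
    let table : (int, int) Hashtbl.t = Hashtbl.create 64 in
    let rec fib n =
      match Hashtbl.find_opt table n with
      | Some v -> v                          (* answer already computed *)
      | None ->
          let v = if n < 2 then n else fib (n - 1) + fib (n - 2) in
          Hashtbl.add table n v;             (* remember it for later calls *)
          v
    in
    fib

  let () = Printf.printf "fib 40 = %d\n" (fib_memo 40)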

Simon Wimmer, Shuwei Hu, Tobias Nipkow
MDP + TA = PTA: Probabilistic Timed Automata, Formalized (Short Paper)

We present a formalization of probabilistic timed automata (PTA) in which we try to follow the formula “MDP + TA = PTA” as far as possible: our work starts from existing formalizations of Markov decision processes (MDP) and timed automata (TA) and combines them modularly. We prove the fundamental result for probabilistic timed automata: the region construction that is known from timed automata carries over to the probabilistic setting. In particular, this allows us to prove that minimum and maximum reachability probabilities can be computed via a reduction to MDP model checking, including the case where one wants to disregard unrealizable behavior.

Simon Wimmer, Johannes Hölzl
Formalization of a Polymorphic Subtyping Algorithm

Modern functional programming languages such as Haskell support sophisticated forms of type-inference, even in the presence of higher-order polymorphism. Central to such advanced forms of type-inference is an algorithm for polymorphic subtyping. This paper formalizes an algorithmic specification for polymorphic subtyping in the Abella theorem prover. The algorithmic specification is shown to be decidable, and sound and complete with respect to Odersky and Läufer’s well-known declarative formulation of polymorphic subtyping. While the meta-theoretical results are not new, as far as we know our work is the first to mechanically formalize them. Moreover, our algorithm differs from those currently in the literature by using a novel approach based on worklist judgements. Worklist judgements simplify the propagation of information required by the unification process during subtyping. Furthermore they enable a simple formulation of the meta-theoretical properties, which can be easily encoded in theorem provers.

Jinxu Zhao, Bruno C. d. S. Oliveira, Tom Schrijvers

Open Access

An Agda Formalization of Üresin and Dubois’ Asynchronous Fixed-Point Theory

In this paper we describe an Agda-based formalization of results from Üresin & Dubois’ “Parallel Asynchronous Algorithms for Discrete Data.” That paper investigates a large class of iterative algorithms that can be transformed into asynchronous processes. In their model each node asynchronously performs partial computations and communicates results to other nodes using unreliable channels. Üresin & Dubois provide sufficient conditions on iterative algorithms that guarantee convergence to unique fixed points for the associated asynchronous iterations. Proving such sufficient conditions for an iterative algorithm is often dramatically simpler than reasoning directly about an asynchronous implementation. These results are used extensively in the literature of distributed computation, making formal verification worthwhile. Our Agda library provides users with a collection of sufficient conditions, some of which mildly relax assumptions made in the original paper. Our primary application has been in reasoning about the correctness of network routing protocols. To do so we have derived a new sufficient condition based on the ultrametric theory of Alexander Gurney. This was needed to model the complex policy-rich routing protocol that maintains global connectivity in the internet.

Ran Zmigrod, Matthew L. Daggitt, Timothy G. Griffin
Erratum to: Interactive Theorem Proving
Jeremy Avigad, Assia Mahboubi
Backmatter
Metadata
Title
Interactive Theorem Proving
Edited by
Dr. Jeremy Avigad
Assia Mahboubi
Copyright year
2018
Electronic ISBN
978-3-319-94821-8
Print ISBN
978-3-319-94820-1
DOI
https://doi.org/10.1007/978-3-319-94821-8