Published in: Journal of Automated Reasoning 1/2023

Open Access 01-03-2023

A Formalization and Proof Checker for Isabelle’s Metalogic

Authors: Simon Roßkopf, Tobias Nipkow


Abstract

Isabelle is a generic theorem prover with a fragment of higher-order logic as a metalogic for defining object logics. Isabelle also provides proof terms. We formalize this metalogic and the language of proof terms in Isabelle/HOL, define an executable (but inefficient) proof term checker and prove its correctness w.r.t. the metalogic. We integrate the proof checker with Isabelle and run it on a range of logics and theories to check the correctness of all the proofs in those theories.
Notes

Supplementary Information

The online version contains supplementary material available at https://doi.org/10.1007/s10817-022-09648-w.


1 Introduction

One of the selling points of proof assistants is their trustworthiness. Yet in practice, soundness problems do come up in most proof assistants. Harrison [15] distinguishes errors in the logic and errors in the implementation (and cites examples). Our work contributes to the solution of both problems for the proof assistant Isabelle [35]. Isabelle is a generic theorem prover: it implements \(\mathcal {M}\), a fragment of intuitionistic higher-order logic, as a metalogic for defining object logics. Its most developed object logic is HOL, and the resulting proof assistant is called Isabelle/HOL [27, 28]. The latter is the basis for our formalizations.
Our first contribution is the first complete formalization of Isabelle’s metalogic. Thus our work applies to all Isabelle object logics, e.g., not only HOL but also ZF. Of course Paulson [36] describes \(\mathcal {M}\) precisely, but only on paper. More importantly, his description does not yet cover polymorphism and type classes, which were introduced later [29]. The published account of Isabelle’s proof terms [7] is also silent about type classes, yet type classes are a significant complication. We do not, however, formalize the theory extension mechanisms (e.g., for constant definitions) on top of the logic.
Our second contribution is a verified (against \(\mathcal {M}\)) and executable checker for Isabelle’s proof terms. We have integrated the proof checker with Isabelle. Thus, we can guarantee that every theorem whose proof our proof checker accepts is provable in our definition of \(\mathcal {M}\). So far we are able to check the correctness of moderately sized theories across the full range of logics implemented in Isabelle.
Although Isabelle follows the LCF architecture (theorems can only be manufactured by inference rules), it is based on an infrastructure optimized for performance. In particular, this includes multithreading, which is used in the kernel and has once led to a soundness issue.¹ Therefore we opt for the “certificate checking” approach (via proof terms) instead of verifying the implementation.
This is the first work that deals directly with what is implemented in Isabelle as opposed to a study of the metalogic that Isabelle is meant to implement. Instead of reading the implementation, you can now read and build on the more abstract formalization in this paper. The correspondence of the two can be established for each proof by running the proof checker.
Our formalization reflects the ML implementation of Isabelle’s terms and types and some other data structures. Thus, a few implementation choices shine through, e.g., De Bruijn indices. This is necessary because we want to integrate our proof checker as directly as possible with Isabelle, with as little unverified glue code as possible, for example, no translation between De Bruijn indices and named variables. We refer to this as our intentional implementation bias. In principle, however, one could extend our formalization with different representations (e.g., named terms) and prove suitable isomorphisms.
Our work is purely proof theoretic; semantics is out of scope.
This paper is an extended version of a conference paper [30] presented at CADE 28. In addition to the material covered in the conference paper, it includes
  • A section describing some useful derived rules in our inference system (Sect. 7)
  • A more detailed description of the executable proof checker and its verification (Sect. 8)
  • An updated formalization, including an updated, more natural formalization of the order-sorted signatures (Sect. 4), generic variable types, explicitly finite data structures, and updated proof terms.
Harrison [15] was the first to verify some of HOL’s metatheory and an implementation of a HOL kernel in HOL itself. Kumar et al. [20] formalized HOL including definition principles, proved its soundness and synthesized a verified kernel of a HOL prover down to the machine language level. Abrahamsson [1] verified a proof checker for the OpenTheory [17] proof exchange format for HOL.
Wenzel [43] showed how to interpret type classes as predicates on types. We follow his approach of reflecting type classes in the logic but cannot remove them completely because of our intentional implementation bias (see above). Kunčar and Popescu [21–24] focus on the subtleties of definition principles for HOL with overloading and prove that under certain conditions, type and constant definitions preserve consistency. Åman Pohjola et al. [3] formalize some of this work by Kunčar and Popescu [21, 24].
Adams [2] presents HOL Zero, a basic theorem prover for HOL that addresses the problem of how to ensure that parser and pretty-printer do not misrepresent formulas.
Let us now move away from Isabelle and HOL. Barras verified fragments of Coq in Coq [4, 5]. Sozeau et al. [41] present the first implementation of a type checker for the kernel of Coq that is proved correct in Coq with respect to a formal specification. Carneiro [8] has implemented a highly performant proof checker for a multi-sorted first-order logic and is in the process of verifying it in its own logic. Davis developed the bootstrapping theorem prover Milawa [9] and, together with Myreen, showed its soundness down to machine code [10].
We formalize a logic with bound variables, and there is a large body of related work that deals with this issue (e.g., [11, 18, 42]) and a range of logics and systems with special support for handling bound variables (e.g., [3840]). We found that De Bruijn indices worked reasonably well for us.

2 Preliminaries

Isabelle types are built from type variables, e.g., \({^{\prime }}a\) and (postfix) type constructors, e.g., \({^{\prime }}{}a\ list\); the function type arrow is \({\Rightarrow }\). Isabelle also has a type class system explained later. The notation \(t {:}{:} \tau \) means that term t has type \(\tau \). Isabelle/HOL provides types \({^{\prime }}{}a\ set\) and \({^{\prime }}{}a\ list\) of sets and lists of elements of type \({^{\prime }}{}a\). They come with the following vocabulary: function set (conversion from lists to sets), \({(}{}{\#}{}{)}{}\) (list constructor), \({(}{}{@}{}{)}{}\) (append), \({|}{}xs{|}{}\) (length of list \(xs\)), \(xs\ {!}{}\ i\) (the \(i\)th element of \(xs\) starting at 0), list-all2 \(p\ {[}{}x{{}_{1}}{,}{}\ {\dots }{,}{}\ x{{}_{m}}{]}{}\ {[}{}y{{}_{1}}{,}{}\ {\dots }{,}{}\ y{{}_{n}}{]}{}\) \({=}{}\) \({(}{}m\ {=}{}\ n\ {\wedge }\ p\ x{{}_{1}}\ y{{}_{1}}\ {\wedge }\ {\dots }\ {\wedge }\ p\ x{{}_{n}}\ y{{}_{n}}{)}{}\), \({(}{}{{{>\!\!\!>\!\!\!=}}}{)}{}\) (monadic bind) and other self-explanatory notation.
There is also the predefined data type
  • \({{\textbf {{\textsf {datatype}}}}}\ {^{\prime }}{}a\ option\ {=}{}\ {{\textsf {\textit{None}}}}\ {|}{}\ {{\textsf {\textit{Some}}}}\ {^{\prime }}{}a\)
The type \({\tau }{{}_{1}}\ {\rightharpoonup }\ {\tau }{{}_{2}}\) abbreviates \({\tau }{{}_{1}}\ {\Rightarrow }\ {\tau }{{}_{2}}\ option\), i.e., partial functions, which we call maps. Maps have a domain and a range:
  • \({{\textsf {\textit{dom}}}}\ m\ {=}{}\ {\{}{}a\ {|}{}\ m\ a\ {\not =}\ {{\textsf {\textit{None}}}}{\}}{} \qquad {{\textsf {\textit{ran}}}}\ m\ {=}{}\ {\{}{}b\ {|}{}\ {\exists }a{.}{}\ m\ a\ {=}{}\ {{\textsf {\textit{Some}}}}\ b{\}}{}\)
Note that our formalization does not use sets/maps directly but subtypes for finite sets/maps. This simplifies some proofs and code generation; however, there is less readily available material about these subtypes. Luckily, we can easily reuse material for general sets/maps via Isabelle’s Lifting and Transfer packages [16].
Logical equivalence is written \({=}{}\) instead of \({\longleftrightarrow }\).

3 Types and Terms

A \(name\) is simply a string. Variables have (Isabelle/HOL level) type \({^{\prime }}{}v\); their inner structure is immaterial for the presentation of the logic. We only require \({^{\prime }}{}v\) to be infinite, to always guarantee a supply of fresh variables. We encode this using a type class for infinite types.
The logic has three layers: terms are classified by types as usual, but in addition, types are classified by sorts. A \(sort\) is simply a set of classes and classes are just strings. We discuss sorts in detail later.
Types (typically denoted by \(T\), \(U\), ...) are defined like this:
  • \({{\textbf {{\textsf {datatype}}}}}\ {^{\prime }}{}v\ typ\ {=}{}\ {{\textsf {\textit{Ty}}}}\ name\ {(}{}{^{\prime }}{}v\ typ\ list{)}{}\ {|}{}\ {{\textsf {\textit{Tv}}}}\ {^{\prime }}{}v\ sort\)
where \({{\textsf {\textit{Ty}}}}\) \({\kappa }\ {[}{}T{{}_{1}}{,}{}{.}{}{.}{}{.}{}{,}{}T{{}_{n}}{]}{}\) represents the Isabelle type \({(}{}T{{}_{1}}{,}{}{\dots }{,}{}T{{}_{n}}{)}{}\ {\kappa }\) and \({{\textsf {\textit{Tv}}}}\ a\ S\) represents a type variable \(a\) of sort \(S\)—sorts are directly attached to type variables and contribute to their identity. The notation \(T\ {\rightarrow }\ U\) is short for \({{\textsf {\textit{Ty}}}}\) \({"}{}{} \textit{fun}{"}{}\ {[}{}T{,}{}U{]}{}\), where \({"}{}{} \textit{fun}{"}{}\) is the name of the function type constructor.
Isabelle’s terms are simply typed lambda terms in De Bruijn notation:
  • \({{\textbf {{\textsf {datatype}}}}}\ {^{\prime }}{}v\ term\ {=}{}\ {{\textsf {\textit{Ct}}}}\ name\ {(}{}{^{\prime }}{}v\ typ{)}{}\ {|}{}\ {{\textsf {\textit{Fv}}}}\ {^{\prime }}{}v\ {(}{}{^{\prime }}{}v\ typ{)}{}\ {|}{}\ {{\textsf {\textit{Bv}}}}\ nat\)\({|}{}\ {{\textsf {\textit{Abs}}}}\ {(}{}{^{\prime }}{}v\ typ{)}{}\ {(}{}{^{\prime }}{}v\ term{)}{}\ {|}{}\ {(}{}{\cdot }{)}{}\ {(}{}{^{\prime }}{}v\ term{)}{}\ {(}{}{^{\prime }}{}v\ term{)}{}\)
A term (typically \(r\), \(s\), \(t\), \(u\) ...) can be a typed constant \({{\textsf {\textit{Ct}}}}\ c\ T\) or free variable \({{\textsf {\textit{Fv}}}}\ v\ T\), a bound variable \({{\textsf {\textit{Bv}}}}\ n\) (a De Bruijn index), a typed abstraction \({{\textsf {\textit{Abs}}}}\ T\ t\) or an application \(t\ {\cdot }\ u\). We call an occurrence of a bound variable \({{\textsf {\textit{Bv}}}}\ i\) in some term \(t\) loose if the occurrence is not enclosed in at least \(i\ {+}{}\ {1}\) abstractions.
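To make the representation concrete, the following minimal Haskell sketch mirrors the two datatypes and the loose-bound-variable check. It is an illustration only, not the Isabelle formalization: it instantiates both \(name\) and the variable type \({^{\prime }}{}v\) with String and represents sorts as lists of class names.

```haskell
-- A minimal Haskell sketch of the paper's types and terms (not the
-- Isabelle formalization): name and 'v are both String, sorts are
-- lists of class names.
type Name = String
type Sort = [String]

data Typ = Ty Name [Typ]   -- type constructor applied to arguments
         | Tv Name Sort    -- type variable with attached sort
         deriving (Eq, Show)

data Term = Ct Name Typ    -- typed constant
          | Fv Name Typ    -- free variable (type is part of identity)
          | Bv Int         -- bound variable as De Bruijn index
          | Abs Typ Term   -- abstraction annotated with binder type
          | App Term Term  -- application, written t · u in the paper
          deriving (Eq, Show)

-- T → U is sugar for Ty "fun" [T, U]
fun :: Typ -> Typ -> Typ
fun t u = Ty "fun" [t, u]

-- Bv i is loose iff it is not enclosed in at least i+1 abstractions.
hasLooseBv :: Term -> Bool
hasLooseBv = go 0
  where
    go depth (Bv i)    = i >= depth
    go depth (Abs _ t) = go (depth + 1) t
    go depth (App t u) = go depth t || go depth u
    go _     _         = False
```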
The term-has-type proposition has the syntax \(Ts\ {\vdash }{{}_{\tau }}\ t\ {:}{}\ T\) where \(Ts\) is a list of types, the context for the type of the bound variables.
\[
\frac{}{Ts \vdash_\tau \textit{Ct}\ c\ T : T}
\qquad
\frac{}{Ts \vdash_\tau \textit{Fv}\ v\ T : T}
\qquad
\frac{i < |Ts|}{Ts \vdash_\tau \textit{Bv}\ i : Ts\ !\ i}
\]
\[
\frac{T \mathbin{\#} Ts \vdash_\tau t : T'}{Ts \vdash_\tau \textit{Abs}\ T\ t : T \rightarrow T'}
\qquad
\frac{Ts \vdash_\tau t : U \rightarrow T \qquad Ts \vdash_\tau u : U}{Ts \vdash_\tau t \cdot u : T}
\]
We define \({\vdash }{{}_{\tau }}\ t\ {:}{}\ T\ {=}{}\ {[}{}{]}{}\ {\vdash }{{}_{\tau }}\ t\ {:}{}\ T\).
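The judgment is syntax-directed, so the type of a term is computable. The following hypothetical typeOf function, reusing the Haskell types from the sketch above, returns the unique \(T\) with \(Ts\ {\vdash }{{}_{\tau }}\ t\ {:}{}\ T\), if any:

```haskell
-- typeOf ts t computes the T with ts ⊢τ t : T, or Nothing if t is
-- ill-typed in context ts (reusing Typ/Term/fun from the sketch above).
typeOf :: [Typ] -> Term -> Maybe Typ
typeOf _  (Ct _ ty)  = Just ty
typeOf _  (Fv _ ty)  = Just ty
typeOf ts (Bv i)     = if i < length ts then Just (ts !! i) else Nothing
typeOf ts (Abs ty t) = fun ty <$> typeOf (ty : ts) t
typeOf ts (App t u)  = do
  Ty "fun" [a, b] <- typeOf ts t   -- t must have a function type
  a' <- typeOf ts u
  if a == a' then Just b else Nothing
```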
Function \({{\textsf {\textit{fv}}}}\) \({:}{}{:}{}\ {^{\prime }}{}v\ term\ {\Rightarrow }\ {(}{}{^{\prime }}{}v\ {\times }\ {^{\prime }}{}v\ typ{)}{}\ set\) collects the free variables in a term. Because bound variables are indices, \({{\textsf {\textit{fv}}}}\ t\) is simply the set of all \({(}{}v{,}{}\ T{)}{}\) such that \({{\textsf {\textit{Fv}}}}\ v\ T\) occurs in \(t\). The type is an integral part of a variable.
A type substitution is a function \({\varrho }\) of type \({^{\prime }}{}v\ {\Rightarrow }\ sort\ {\Rightarrow }\ {^{\prime }}{}v\ typ\). It assigns a type to each type variable and sort pair. We write \({\varrho }\ {\$}{}{\$}{}\ T\) or \({\varrho }\ {\$}{}{\$}{}\ t\) for the overloaded function which applies a type substitution to all type variables (and their sort) occurring in a type or term. The type instance relation is defined like this:
  • \(T{{}_{1}}\ {\lesssim }\ T{{}_{2}}\ {=}{}\ {(}{}{\exists }{\varrho }{.}{}\ {\varrho }\ {\$}{}{\$}{}\ T{{}_{2}}\ {=}{}\ T{{}_{1}}{)}{}\)
We also need to \(\beta \)-contract a term \({{\textsf {\textit{Abs}}}}\ T\ t\ {\cdot }\ u\) to something like “\(t\) with \({{\textsf {\textit{Bv}}}}\ {0}\) replaced by \(u\).” We define a function \({{\textsf {\textit{subst-bv}}}}\) such that \({{\textsf {\textit{subst-bv}}}}\ u\ t\) is that \(\beta \)-contractum. The definition of \({{\textsf {\textit{subst-bv}}}}\) is shown in the Appendix and can also be found in the literature (e.g., [33]).
In order to abstract over a free (term) variable, there is a function \({{\textsf {\textit{bind-fv}}}}\ {(}{}v{,}{}\ T{)}{}\ t\) that (roughly speaking) replaces all occurrences of \({{\textsf {\textit{Fv}}}}\ v\ T\) in \(t\) by \({{\textsf {\textit{Bv}}}}\ {0}\). Again, see the Appendix for the definition. This produces (if \({{\textsf {\textit{Fv}}}}\ v\ T\) occurs in \(t\)) a term with a loose \({{\textsf {\textit{Bv}}}}\ {0}\). Function \({{\textsf {\textit{Abs-fv}}}}\) binds it with an abstraction:
  • \({{\textsf {\textit{Abs-fv}}}}\ v\ T\ t\ {=}\ {{\textsf {\textit{Abs}}}}\ T\ {(}{}{{\textsf {\textit{bind-fv}}}}\ {(}{}v{,}{}\ T{)}{}\ t{)}{}\)
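For illustration, here is a hedged Haskell sketch of \({{\textsf {\textit{subst-bv}}}}\), \({{\textsf {\textit{bind-fv}}}}\) and \({{\textsf {\textit{Abs-fv}}}}\) in the style of the earlier sketch. It follows the standard De Bruijn treatment (lifting the substituted term when passing a binder) and may differ in inessential details from the definitions in the Appendix.

```haskell
-- lift k t: increment every bound variable of t that is loose at depth k.
lift :: Int -> Term -> Term
lift k (Bv i)     = Bv (if i >= k then i + 1 else i)
lift k (Abs ty t) = Abs ty (lift (k + 1) t)
lift k (App t u)  = App (lift k t) (lift k u)
lift _ t          = t

-- substBv u t: the β-contractum of App (Abs ty t) u.
substBv :: Term -> Term -> Term
substBv u = go 0 u
  where
    go k v (Bv i) | i < k     = Bv i        -- bound inside t, untouched
                  | i == k    = v           -- the substituted variable
                  | otherwise = Bv (i - 1)  -- one binder has disappeared
    go k v (Abs ty t) = Abs ty (go (k + 1) (lift 0 v) t)
    go k v (App t s)  = App (go k v t) (go k v s)
    go _ _ t          = t

-- bindFv (x, ty) t: replace free occurrences of Fv x ty by a loose Bv.
bindFv :: (Name, Typ) -> Term -> Term
bindFv (x, ty) = go 0
  where
    go k t@(Fv y ty') = if y == x && ty' == ty then Bv k else t
    go k (Abs ty' t)  = Abs ty' (go (k + 1) t)
    go k (App t u)    = App (go k t) (go k u)
    go _ t            = t

-- absFv binds the freshly produced loose Bv with an abstraction.
absFv :: Name -> Typ -> Term -> Term
absFv x ty t = Abs ty (bindFv (x, ty) t)
```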
While this section described the syntax of types and terms, they are not necessarily wellformed and should be considered pretypes/preterms. The wellformedness checks are described later.

4 Classes and Sorts

Isabelle has a built-in system of type classes [32] as in Haskell 98 except that class constraints are directly attached to variable names: our \(\textit{Tv}\ a\ \{C, D, \dots \}\) corresponds to Haskell’s (C a, D a, ...) => ... a .... A \(sort\) is Isabelle’s terminology for a set of (class) names, e.g., \(\{C, D, \dots \}\), which represents a conjunction of class constraints. In our work, variables \(S\), \(S{^{\prime }}{}\) etc. stand for sorts.
Apart from the usual application in object logics, type classes also serve an important metalogical purpose: they allow us to restrict, for example, quantification in object logics to object-level types and rule out meta-level propositions.
Isabelle’s type class system was first presented in a programming language context [31, 34]. We give the first machine-checked formalization. The central data structure is a so-called order-sorted signature. Intuitively, it is composed of a set of classes, a partial subclass ordering on them and a set of type constructor signatures. A type constructor signature \({\kappa }\ {:}{}{:}{}\ {(}{}S{{}_{1}}{,}{}\ {\dots }{,}{}\ S{{}_{k}}{)}{}\ c\) for a type constructor \({\kappa }\) states that applying \({\kappa }\) to types \(T{{}_{1}}{,}{}\ {\dots }{,}{}\ T{{}_{k}}\) such that \(T{{}_{i}}\) has sort \(S{{}_{i}}\) (defined below) produces a type of class \(c\). Formally:
\({{\textbf {{\textsf {type\_synonym}}}}}\,\,\,\textit{osig}\) =
   \({{}{}{(}{}name\ set\ {\times }\ {(}{}name\ {\times }\ name{)}{}\ set\ {\times }\ {(}{}name\ {\times }\ sort\ list\ {\times }\ class{)}{}\ set{)}{}{}{}}\)
The projection functions are called \({{\textsf {\textit{classes}}}}\), \({{\textsf {\textit{subclass}}}}\) and \({{\textsf {\textit{tcsigs}}}}\).
The subclass ordering \(sub\) can be extended to a subsort ordering as follows:
  • \(S{{}_{1}}\ {\le }_{sub}\ S{{}_{2}}\ {=}{}\ {(}{}{\forall }c{{}_{2}}{\in }S{{}_{2}}{.}{}\ {\exists }c{{}_{1}}{\in }S{{}_{1}}{.}{}\ c{{}_{1}}\ {\le }_{sub} \ c{{}_{2}}{)}{}\)
The smaller sort needs to subsume all the classes in the larger sort. In particular \({\{}{}c{{}_{1}}{\}}{}\ {\le }_{sub}\ {\{}{}c{{}_{2}}{\}}{}\) iff \({(}{}c{{}_{1}}{,}{}\ c{{}_{2}}{)}{}\ {\in }\ sub\).
Now we can define a predicate \({{\textsf {\textit{has-sort}}}}\) that checks whether, in the context of some order-sorted signature \({(}{}cl{,}{}sub{,}{}tcs{)}{}\), a type fulfills a given sort constraint:
\[
\frac{S' \le_{sub} S}{\textit{has-sort}\ (cl,\ sub,\ tcs)\ (\textit{Tv}\ a\ S')\ S}
\]
\[
\frac{\forall c \in S.\ \exists Ss.\ (\kappa,\ Ss,\ c) \in tcs \ \wedge\ \textit{list-all2}\ (\textit{has-sort}\ (cl,\ sub,\ tcs))\ Ts\ Ss}{\textit{has-sort}\ (cl,\ sub,\ tcs)\ (\textit{Ty}\ \kappa\ Ts)\ S}
\]
The rule for type variables uses the subsort relation and is obvious. A type \({(}{}T{{}_{1}}{,}{}\ {\dots }{,}{}\ T{{}_{n}}{)}{}\ {\kappa }\) has sort \({\{}{}c{{}_{1}}{,}{}\ {\dots }{\}}{}\) if for every \(c{{}_{i}}\) there is a signature \({\kappa }\ {:}{}{:}{}\ {(}{}S{{}_{1}}{,}{}\ {\dots }{,}{}\ S{{}_{n}}{)}{}\ c{{}_{i}}\) and \({{\textsf {\textit{has-sort}}}} {(}{}cl{,}{}\ sub{,}{} tcs{)}{}\ T{{}_{j}}\ S{{}_{j}}\) for \(j\ {=}{}\ {1}{,}{}\ {\dots }{,}{}\ n\).
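Both rules translate directly into a recursive check. The following Haskell sketch, continuing the earlier ones, naively represents the subclass relation and the type constructor signatures as lists of pairs; it assumes \(sub\) is already reflexive and transitive, as the wellformedness conditions below guarantee.

```haskell
-- A naive sketch of the two has-sort rules, reusing Name/Sort/Typ above.
type Class    = String
type Subclass = [(Class, Class)]          -- pairs (c1, c2) with c1 ≤ c2
type TCSigs   = [(Name, ([Sort], Class))] -- entries κ :: (S1,...,Sk) c

classLeq :: Subclass -> Class -> Class -> Bool
classLeq sub c1 c2 = (c1, c2) `elem` sub

-- S1 ≤sub S2: every class in S2 is subsumed by some class in S1.
sortLeq :: Subclass -> Sort -> Sort -> Bool
sortLeq sub s1 s2 = all (\c2 -> any (\c1 -> classLeq sub c1 c2) s1) s2

hasSort :: Subclass -> TCSigs -> Typ -> Sort -> Bool
hasSort sub _   (Tv _ s') s = sortLeq sub s' s    -- rule for variables
hasSort sub tcs (Ty k ts) s = all ok s            -- rule for Ty κ Ts
  where
    -- one signature κ :: (Ss) c per required class c, checked recursively
    ok c = or [ length ss == length ts
                && and (zipWith (hasSort sub tcs) ts ss)
              | (k', (ss, c')) <- tcs, k' == k, c' == c ]
```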
We normalize a sort by removing “superfluous” class constraints, i.e., retaining only those classes that are not subsumed by other classes. This gives us unique representatives for sorts which we call normalized:
  • \({{\textsf {\textit{normalize-sort}}}}\ sub\ S\ {=}{}\ {\{}{}c\ {\in }\ S\ {|}{}\ {\lnot }\ {(}{}{\exists }c{^{\prime }}{}{\in }S{.}{}\ c{^{\prime }}{}\ {\not =}\ c\ {\wedge }\ {(}{}c{^{\prime }}{}{,}{}\ c{)}{}\ {\in }\ sub{)}{}{\}}{}\)
  • \({{\textsf {\textit{normalized-sort}}}}\ sub\ S\ {=}{}\ {(}{}{{\textsf {\textit{normalize-sort}}}}\ sub\ S\ {=}{}\ S{)}{}\)
We work with normalized sorts because it simplifies the derivation of efficient executable code later on.
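As a small sketch in the same style, normalization simply drops every class that is strictly subsumed by another class of the sort (classLeq as in the previous sketch):

```haskell
-- Sketch of sort normalization: keep only classes not strictly
-- subsumed by another class of the sort.
normalizeSort :: Subclass -> Sort -> Sort
normalizeSort sub s =
  [ c | c <- s, not (any (\c' -> c' /= c && classLeq sub c' c) s) ]

normalizedSort :: Subclass -> Sort -> Bool
normalizedSort sub s = normalizeSort sub s == s
```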
Now we can define wellformedness of an \(osig\):
  • \({{\textsf {\textit{wf-osig}}}}\) \({(}{}cl{,}{}\ sub{,}{}\ tcs{)}{}\ {=}{}\ {(}{}{{\textsf {\textit{wf-subclass}}}}\ cl\ sub\ {\wedge }\ {{\textsf {\textit{wf-tcsigs}}}}\ cl\ sub\ tcs{)}{}\)
A subclass relation is wellformed if it is a partial order where reflexivity is restricted to the set of classes \(cl\). Wellformedness of type constructor signatures (\({{\textsf {\textit{wf-tcsigs}}}}\)) is more complex. The conditions are the following:
  • For any \({\kappa }\ {:}{}{:}{}\ {(}{}{.}{}{.}{}{.}{}{)}{}\ c{{}_{1}}\) there must be a \({\kappa }\ {:}{}{:}{}\ {(}{}{.}{}{.}{}{.}{}{)}{}\ c{{}_{2}}\) for every superclass \(c{{}_{2}}\) of \(c{{}_{1}}\), and coregularity must hold, which guarantees the existence of principal types [14, 31]: \(\forall (\kappa,\ Ss_1,\ c_1) \in tcs.\ \forall c_2.\ (c_1,\ c_2) \in sub \longrightarrow (\exists Ss_2.\ (\kappa,\ Ss_2,\ c_2) \in tcs \wedge \textit{list-all2}\ ({\le}_{sub})\ Ss_1\ Ss_2)\)
  • A type constructor must always take the same number of argument types: \({\forall }{\kappa }\ Ss{{}_{1}}\ c{{}_{1}}\ Ss{{}_{2}}\ c{{}_{2}}{.}{}\)    \({(}{}{\kappa }{,}{}\ Ss{{}_{1}}{,}{}\ c{{}_{1}}{)}{}\ {\in }\ tcs\ {\wedge }\ {(}{}{\kappa }{,}{}\ Ss{{}_{2}}{,}{}\ c{{}_{2}}{)}{}\ {\in }\ tcs\ {\longrightarrow }\ {|}{}Ss{{}_{1}}{|}{}\ {=}{}\ {|}{}Ss{{}_{2}}{|}{}\)
  • Sorts must be normalized and must exist in \(cl\): \({\forall }{(}{}{\kappa }{,}{}\ Ss{,}{}\ c{)}{}{\in }tcs{.}{}\ {\forall }S{\in }Ss{.}{}\ {{\textsf {\textit{wf-sort}}}}\ cl\ sub\ S\) where \({{\textsf {\textit{wf-sort}}}}\ cl\ sub\ S\ {=}{}\ {(}{}{{\textsf {\textit{normalized-sort}}}}\ sub\ S\ {\wedge }\ S\ {\subseteq }\ cl{)}{}\)
  • The argument sorts uniquely determine the class of the constructed type: \({\forall }{(}{}{\kappa }{,}{}\ Ss{{}_{1}}{,}{}\ c{)}{}{\in }tcs{.}{}\ {\forall }Ss{{}_{2}}{.}{}\ {(}{}{\kappa }{,}{}\ Ss{{}_{2}}{,}{}\ c{)}{}\ {\in }\ tcs\ {\longrightarrow }\ Ss{{}_{2}}\ {=}{}\ Ss{{}_{1}}\)
These conditions are used in a number of places to show that the type system is well behaved. For example, \({{\textsf {\textit{has-sort}}}}\) is upward closed:
\({{{\textsf {\textit{wf-osig}}}}\ {(}{}cl{,}{}\ sub{,}{}\ tcs{)}{}} \,\,{{\wedge }}\,\, {{\textsf {\textit{has-sort}}}}\ {(}{}cl{,}{}\ sub{,}{}\ tcs{)}{}\ T\ S {{\wedge }} {S\ {\le }_{sub} \ S{^{\prime }}{}}\)
   \({{\longrightarrow }}\ \,\, {{{\textsf {\textit{has-sort}}}}\ {(}{}cl{,}{}\ sub{,}{}\ tcs{)}{}\ T\ S{^{\prime }}{}}\)

5 Signatures

A signature consists of a map from constant names to their (most general) types, a map from type constructor names to their arities, and an order-sorted signature:
\({{\textbf {{\textsf {type\_synonym}}}}}\,\, {^{\prime }}{}v signature = {{}{}{(}{}name\ {\rightharpoonup }\ {^{\prime }}{}v\ typ{)}{}\ {\times }\ {(}{}name\ {\rightharpoonup }\ nat{)}{}\ {\times }\ osig{}{}}\)
The three projection functions are called \({{\textsf {\textit{const-type}}}}\), \({{\textsf {\textit{type-arity}}}}\) and \({{\textsf {\textit{osig}}}}\). We now define a number of wellformedness checks w.r.t. a signature \({\Sigma }\). We start with wellformedness of types, which essentially requires that all type constructors have correct arity and all type variables have wellformed sort constraints:
\[
\frac{\textit{type-arity}\ \Sigma\ \kappa = \textit{Some}\ |Ts| \qquad \forall T \in \textit{set}\ Ts.\ \textit{wf-type}\ \Sigma\ T}{\textit{wf-type}\ \Sigma\ (\textit{Ty}\ \kappa\ Ts)}
\]
\[
\frac{\textit{osig}\ \Sigma = (cl,\ sub,\ tcs) \qquad \textit{wf-sort}\ cl\ sub\ S}{\textit{wf-type}\ \Sigma\ (\textit{Tv}\ a\ S)}
\]
Wellformedness of a term essentially just says that all types in the term are wellformed and that the type \(T{^{\prime }}{}\) of a constant in the term must be an instance of the type \(T\) of that constant in the signature: \(T{^{\prime }}{}\ {\lesssim }\ T\).
\[
\frac{\textit{const-type}\ \Sigma\ c = \textit{Some}\ T \qquad T' \lesssim T \qquad \textit{wf-type}\ \Sigma\ T'}{\textit{wf-term}\ \Sigma\ (\textit{Ct}\ c\ T')}
\qquad
\frac{\textit{wf-type}\ \Sigma\ T}{\textit{wf-term}\ \Sigma\ (\textit{Fv}\ v\ T)}
\]
\[
\frac{}{\textit{wf-term}\ \Sigma\ (\textit{Bv}\ n)}
\qquad
\frac{\textit{wf-type}\ \Sigma\ T \qquad \textit{wf-term}\ \Sigma\ t}{\textit{wf-term}\ \Sigma\ (\textit{Abs}\ T\ t)}
\qquad
\frac{\textit{wf-term}\ \Sigma\ t \qquad \textit{wf-term}\ \Sigma\ u}{\textit{wf-term}\ \Sigma\ (t \cdot u)}
\]
These rules only check whether a term conforms to a signature, not that the contained types are consistent. Combining wellformedness and \({\vdash }{{}_{\tau }}\) yields welltypedness of a term:
  • \({{\textsf {\textit{wt-term}}}}\ {\Sigma }\ t\ {=}{}\ {(}{}{{\textsf {\textit{wf-term}}}}\ {\Sigma }\ t\ {\wedge }\ {(}{}{\exists }T{.}{}\ {\vdash }{{}_{\tau }}\ t\ {:}{}\ T{)}{}{)}{}\)
Wellformedness of a signature \({\Sigma }\ {=}{}\ {(}{}ctf{,}{}\ arf{,}{}\ oss{)}{}\) where \(oss\ {=}{}\ {(}{}cl{,}{}\ sub{,}{}\ tcs{)}{}\) is defined as follows:
\({{\textsf {\textit{wf-sig}}}}\ {\Sigma }\ {=}{}\) \({(}{}{(}{}{\forall }T{\in }{{\textsf {\textit{ran}}}}\ ctf{.}{}\ {{\textsf {\textit{wf-type}}}}\ {\Sigma }\ T{)}{}\ {\wedge }\)
         \({{(}{}}{{\textsf {\textit{wf-osig}}}}\ oss\ {\wedge }\ {(}{}{\forall }{(}{}{\kappa }{,}{}\ Ss{,}{}\ c{)}{}{\in }tcs{.}{}\ arf\ \ {\kappa }\ {=}{}\ {{\textsf {\textit{Some}}}}\ {|}{}Ss{|}{}{)}{}{)}{}\)
In words: all types in \(ctf\) are wellformed, \(oss\) is wellformed, and every type constructor signature in \(tcs\) has a matching arity in \(arf\).

6 Logic

Isabelle’s metalogic \(\mathcal {M}\) is an extension of the logic described by Paulson [36]. It is a fragment of intuitionistic higher-order logic. The basic types and connectives of \(\mathcal {M}\) are the following:
\(\begin{array}{lll} \hline \hbox {Concept} & \hbox {Representation} & \hbox {Abbreviation}\\ \hline \hbox {Type\,of\,propositions} & {{\textsf {\textit{Ty}}}}\ {"}{{\textsf {\textit{prop}}}}{"}\ {[}{]} & {{\textsf {\textit{prop}}}}\\ \hbox {Implication} & {{\textsf {\textit{Ct}}}}\ {"}{{\textsf {\textit{imp}}}}{"}\ {(}{{\textsf {\textit{prop}}}}\ {\rightarrow }\ {{\textsf {\textit{prop}}}}\ {\rightarrow }\ {{\textsf {\textit{prop}}}}{)} & {\Longrightarrow }\\ \hbox {Universal\,quantifier} & {{\textsf {\textit{Ct}}}}\ {"}{{\textsf {\textit{all}}}}{"}\ {(}{(}T\ {\rightarrow }\ {{\textsf {\textit{prop}}}}{)}\ {\rightarrow }\ {{\textsf {\textit{prop}}}}{)} & {\bigwedge }{{}_{T}}\\ \hbox {Equality} & {{\textsf {\textit{Ct}}}}\ {"}{{\textsf {\textit{eq}}}}{"}\ {(}T\ {\rightarrow }\ T\ {\rightarrow }\ {{\textsf {\textit{prop}}}}{)} & {\equiv }{{}_{T}}\\ \hline \end{array}\)
The type subscripts of \({\bigwedge }\) and \({\equiv }\) are dropped in the text if they can be inferred.
Readers familiar with Isabelle syntax must keep in mind that for readability we use the symbols \({\bigwedge }\), \({\Longrightarrow }\) and \({\equiv }\) for the encodings of the respective symbols in Isabelle’s metalogic. We avoid the corresponding metalogical constants completely in favor of HOL’s \({\forall }\), \({\longrightarrow }\), \({=}{}\), and inference rule notation.
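To illustrate the encoding, here are hypothetical smart constructors for the four table entries in our running Haskell sketch; the constant and type names follow the table, and reflAx builds the term \(x\ {\equiv }\ x\) as an example.

```haskell
-- Hedged smart constructors for the basic M connectives (table above),
-- reusing Typ/Term/fun from the Sect. 3 sketch.
propT :: Typ
propT = Ty "prop" []

impC :: Term -> Term -> Term            -- A ⟹ B
impC a b = App (App (Ct "imp" (propT `fun` (propT `fun` propT))) a) b

allC :: Typ -> Term -> Term             -- ⋀_T · P with P : T → prop
allC ty p = App (Ct "all" ((ty `fun` propT) `fun` propT)) p

eqC :: Typ -> Term -> Term -> Term      -- t ≡_T u
eqC ty t u = App (App (Ct "eq" (ty `fun` (ty `fun` propT))) t) u

-- e.g. the proposition x ≡ x for a free variable x of some type 'a:
reflAx :: Term
reflAx = let a = Tv "'a" [] in eqC a (Fv "x" a) (Fv "x" a)
```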
The provability judgment of \(\mathcal {M}\) is of the form \({\Theta }{,}{}{\Gamma }\ {\vdash }\ t\) where \({\Theta }\) is a theory, \({\Gamma }\) (the hypotheses) is a set of terms of type \({{\textsf {\textit{prop}}}}\), and \(t\) a term of type \({{\textsf {\textit{prop}}}}\).
A theory is a pair of a signature and a set of axioms:
  • \({{\textbf {{\textsf {type\_synonym}}}}}\,\, {^{\prime }}{}v\ theory = {{}{}{^{\prime }}{}v\ signature\ {\times }\ {^{\prime }}{}v\ term\ set{}{}}\)
The projection functions are \({{\textsf {\textit{sig}}}}\) and \({{\textsf {\textit{axioms}}}}\). We extend the notion of wellformedness from signatures to theories:
  • \({{\textsf {\textit{wf-theory}}}}\ {(}{}{\Sigma }{,}{}\ axs{)}{}\ {=}{}\)    \({(}{}{{\textsf {\textit{wf-sig}}}}\ {\Sigma }\ {\wedge }\ {(}{}{\forall }p{\in }axs{.}{}\ {{\textsf {\textit{wt-term}}}}\ {\Sigma }\ p\ {\wedge }\ {\vdash }{{}_{\tau }}\ p\ {:}{}\ {{\textsf {\textit{prop}}}}{)}{}\ {\wedge }\ {{\textsf {\textit{is-std-sig}}}}\ {\Sigma }\ {\wedge }\ {{\textsf {\textit{eq-axs}}}}\ {\subseteq }\ axs{)}{}\)
The first two conjuncts need no explanation. Predicate \({{\textsf {\textit{is-std-sig}}}}\) (not shown) requires the signature to have certain minimal content: the basic types (\({\rightarrow }\), \({{\textsf {\textit{prop}}}}\)) and constants (\({\equiv }\), \({\bigwedge }\), \({\Longrightarrow }\)) of \(\mathcal {M}\) and the additional types and constants for type class reasoning from Sect. 6.3. Our theories also need to contain a minimal set of axioms. The set \({{\textsf {\textit{eq-axs}}}}\) is an axiomatic basis for equality reasoning and will be explained in Sect. 6.2.
We will now discuss the inference system in three steps: the basic inference rules, equality, and type class reasoning.

6.1 Basic Inference Rules

The axiom rule states that wellformed type-instances of axioms are provable:
\[
\frac{\textit{wf-theory}\ \Theta \qquad A \in \textit{axioms}\ \Theta \qquad \textit{wf-inst}\ (\textit{sig}\ \Theta)\ \varrho}{\Theta,\Gamma \vdash \varrho\ \$\$\ A}
\]
where \({\varrho }\ {:}{}{:}{}\) \({^{\prime }}{}v\ {\Rightarrow }\ sort\ {\Rightarrow }\ {^{\prime }}{}v\ typ\) is a type substitution and \({\$}{}{\$}{}\) denotes its application (see Sect. 3). The types substituted into the type variables need to be wellformed and conform to the sort constraint of the type variable:
  • \({{\textsf {\textit{wf-inst}}}}\ {\Sigma }\ {\varrho }\ {=}{}\)    \({(}{}{\forall }v\ S{.}{}\ {\varrho }\ v\ S\ {\not =}\ {{\textsf {\textit{Tv}}}}\ v\ S\ {\longrightarrow }\ {{\textsf {\textit{has-sort}}}}\ {(}{}{{\textsf {\textit{osig}}}}\ {\Sigma }{)}{}\ {(}{}{\varrho }\ v\ S{)}{}\ S\ {\wedge }\ {{\textsf {\textit{wf-type}}}}\ {\Sigma }\ {(}{}{\varrho }\ v\ S{)}{}{)}{}\)
The conjunction only needs to hold if \({\varrho }\) actually changes something, i.e., if \({\varrho }\ v\ S\ {\not =}\ {{\textsf {\textit{Tv}}}}\ v\ S\). This condition is not superfluous: otherwise \({{\textsf {\textit{has-sort}}}}\ oss\ {(}{}{{\textsf {\textit{Tv}}}}\ v\ S{)}{}\ S\) and \({{\textsf {\textit{wf-type}}}}\ {\Sigma }\ {(}{}{{\textsf {\textit{Tv}}}}\ v\ S{)}{}\) would only hold if \(S\) is wellformed w.r.t. \({\Sigma }\).
Note that there are no extra rules for general instantiation of type or term variables. Type variables can only be instantiated in the axioms. Term instantiation can be performed using the \({\bigwedge }\) introduction and elimination rules.
The assumption rule allows us to prove terms already in the hypotheses:
\[
\frac{A \in \Gamma \qquad \textit{wf-term}\ (\textit{sig}\ \Theta)\ A \qquad \vdash_\tau A : \textit{prop}}{\Theta,\Gamma \vdash A}
\]
Both \({\bigwedge }\) and \({\Longrightarrow }\) are characterized by introduction and elimination rules:
\[
\frac{\Theta,\Gamma \vdash t \qquad (x,\ T) \notin \textit{FV}\ \Gamma \qquad \textit{wf-type}\ (\textit{sig}\ \Theta)\ T}{\Theta,\Gamma \vdash \bigwedge_T \cdot\ \textit{Abs-fv}\ x\ T\ t}\ (\textstyle\bigwedge\text{-I})
\qquad
\frac{\Theta,\Gamma \vdash \bigwedge_T \cdot\ \textit{Abs}\ T\ t \qquad \textit{wf-term}\ (\textit{sig}\ \Theta)\ u \qquad \vdash_\tau u : T}{\Theta,\Gamma \vdash \textit{subst-bv}\ u\ t}\ (\textstyle\bigwedge\text{-E})
\]
\[
\frac{\Theta,\Gamma \vdash B \qquad \textit{wf-term}\ (\textit{sig}\ \Theta)\ A \qquad \vdash_\tau A : \textit{prop}}{\Theta,\Gamma - \{A\} \vdash A \Longrightarrow B}\ ({\Longrightarrow}\text{-I})
\qquad
\frac{\Theta,\Gamma_1 \vdash A \Longrightarrow B \qquad \Theta,\Gamma_2 \vdash A}{\Theta,\Gamma_1 \cup \Gamma_2 \vdash B}\ ({\Longrightarrow}\text{-E})
\]
where \({{\textsf {\textit{FV}}}}\ {\Gamma }\ {=}{}\ {(}{}{\bigcup }_{{t}{\in }{\Gamma }} \ {{\textsf {\textit{fv}}}}\ t{)}{}\).

6.2 Equality

Most rules about equality are not part of the inference system but are axioms (the set \({{\textsf {\textit{eq-axs}}}}\) mentioned above). Consequences are obtained via the axiom rule.
The first three axioms express that \({=}\) is reflexive, symmetric, and transitive:
\(x\ {\equiv }\ x\)       \(x\ {\equiv }\ y\ {\Longrightarrow }\ y\ {\equiv }\ x\)       \(x\ {\equiv }\ y\ {\Longrightarrow }\ y\ {\equiv }\ z\ {\Longrightarrow }\ x\ {\equiv }\ z\)       
The next two axioms express that terms \(A\) and \(B\) of type \({{\textsf {\textit{prop}}}}\) are equal iff they imply each other:
\(A\ {\equiv }\ B\ {\Longrightarrow }\ A\ {\Longrightarrow }\ B\)       \({(}{}A\ {\Longrightarrow }\ B{)}{}\ {\Longrightarrow }\ {(}{}B\ {\Longrightarrow }\ A{)}{}\ {\Longrightarrow }\ A\ {\equiv }\ B\)
The last equality axioms are congruence rules for application and abstraction:
\(f\ {\equiv }\ g\ {\Longrightarrow }\ x\ {\equiv }\ y\ {\Longrightarrow }\ {(}{}f\ {\cdot }\ x{)}{}\ {\equiv }\ {(}{}g\ {\cdot }\ y{)}{}\)
\({\bigwedge }\) \({(}{}{{\textsf {\textit{Abs}}}}\ T\ {(}{}{(}{}f\ {\cdot }\ {{\textsf {\textit{Bv}}}}\ {0}{)}{}\ {\equiv }\ {(}{}g\ {\cdot }\ {{\textsf {\textit{Bv}}}}\ {0}{)}{}{)}{}{)}{}\)
\({\Longrightarrow }\) \({{\textsf {\textit{Abs}}}}\ T\ {(}{}f\ {\cdot }\ {{\textsf {\textit{Bv}}}}\ {0}{)}{}\ {\equiv }\ {{\textsf {\textit{Abs}}}}\ T\ {(}{}g\ {\cdot }\ {{\textsf {\textit{Bv}}}}\ {0}{)}{}\)
Paulson [36] gives a slightly different congruence rule for abstraction, which allows abstracting over an arbitrary free variable \(x\) in \(f\) and \(g\). We are able to derive this rule in our inference system.
Finally, there are the lambda calculus rules. There is no need for \({\alpha }\) conversion because \({\alpha }\)-equivalent terms are already identical thanks to the De Bruijn indices for bound variables. For \({\beta }\) and \({\eta }\) conversion the following rules are added. In contrast to the rest of this subsection, these are not expressed as axioms.
\[
\frac{\textit{wf-theory}\ \Theta \qquad \textit{wt-term}\ (\textit{sig}\ \Theta)\ (\textit{Abs}\ T\ t \cdot u)}{\Theta,\Gamma \vdash \textit{Abs}\ T\ t \cdot u\ \equiv\ \textit{subst-bv}\ u\ t}\ (\beta)
\]
\[
\frac{\textit{wf-theory}\ \Theta \qquad \textit{wf-term}\ (\textit{sig}\ \Theta)\ t \qquad \vdash_\tau t : T \rightarrow T'}{\Theta,\Gamma \vdash \textit{Abs}\ T\ (t \cdot \textit{Bv}\ 0)\ \equiv\ t}\ (\eta)
\]
Rule (\(\beta \)) uses the substitution function \({{\textsf {\textit{subst-bv}}}}\) as explained in Sect. 3 (and defined in the Appendix).
Rule (\(\eta \)) requires a few words of explanation. We do not explicitly require that \(t\) does not contain \({{\textsf {\textit{Bv}}}}\ {{{\textsf {\textit{0}}}}}\). This is already a consequence of the precondition \({\vdash }{{}_{\tau }}\ t\ {:}{}\ T\ {\rightarrow }\ T{^{\prime }}{}\): it implies that \(t\) is closed, i.e., contains no loose bound variables. For that reason, it is unproblematic to remove the abstraction above \(t\).

6.3 Type Class Reasoning

Wenzel [43] encoded class constraints of the form “type \(T\) has class \(C\)” in the term language as follows. There is a unary type constructor named \({"}{}itself{"}{}\) and \(T\ {{\textsf {\textit{itself}}}}\) abbreviates \({{\textsf {\textit{Ty}}}}\) \({"}{}itself{"}{}\ {[}{}T{]}{}\). The notation \(\textit{TYPE}_{{T}\ {{\textsf {\textit{itself}}}}}\) is short for \({{\textsf {\textit{Ct}}}}\) \({"}{}type{"}{}\ {(}{}\) \(T\ {{\textsf {\textit{itself}}}}\) \({)}{}\) where \({"}{}type{"}{}\) is the name of a constant. You should view \(\textit{TYPE}_{T\,{{\textsf {\textit{itself}}}}}\) as the term-level representation of type \(T\).
Next we represent the predicate “is of class \(C\)” on the term level. For this, we define some fixed injective mapping \({{\textsf {\textit{const-of-class}}}}\) from class to constant names. For each new class \(C\), a new constant \({{\textsf {\textit{const-of-class}}}}\ C\) of type \(T\ {{\textsf {\textit{itself}}}}\ {\rightarrow }\ {{\textsf {\textit{prop}}}}\) is added. The term \({{\textsf {\textit{Ct}}}}\ {(}{}{{\textsf {\textit{const-of-class}}}}\ C{)}{}\ {(}{}T\ {{\textsf {\textit{itself}}}}\ {\rightarrow }\ {{\textsf {\textit{prop}}}}{)}{}\ {\cdot }\ \textit{TYPE}_{{T}\ {{\textsf {\textit{itself}}}}}\) represents the statement “type \(T\) has class C”. This is the inference rule deriving such propositions:
\[
\frac{\textit{wf-theory}\ \Theta \qquad \textit{wf-type}\ (\textit{sig}\ \Theta)\ T \qquad \textit{has-sort}\ (\textit{osig}\ (\textit{sig}\ \Theta))\ T\ \{C\}}{\Theta,\Gamma \vdash \textit{Ct}\ (\textit{const-of-class}\ C)\ (T\ \textit{itself} \rightarrow \textit{prop}) \cdot \textit{TYPE}_{T\ \textit{itself}}}
\]
This is how the \({{\textsf {\textit{has-sort}}}}\) inference system is integrated into the logic.
This concludes the presentation of \(\mathcal {M}\). We have shown some minimal sanity properties, incl. that all provable terms are of type \({{\textsf {\textit{prop}}}}\) and wellformed:
\( {\Theta }{,}{}{\Gamma }\ {\vdash }\ t\ {\longrightarrow }\ {\vdash }{{}_{\tau }}\ t\ {:}{}\ {{\textsf {\textit{prop}}}}\ {\wedge }\ {{\textsf {\textit{wf-term}}}}\ {(}{}{{\textsf {\textit{sig}}}}\ {\Theta }{)}{}\ t\)
The attentive reader will have noticed that we do not require unused hypotheses in \({\Gamma }\) to be wellformed and of type \({{\textsf {\textit{prop}}}}\). Similarly, we only require \({{\textsf {\textit{wf-theory}}}}\ {\Theta }\) in rules that need it to preserve wellformedness of the terms and types involved. To restrict to wellformed theories and hypotheses, we define a top-level provability judgment that requires wellformedness:
  • \({\Theta }{,}{}{\Gamma }\ {\vdash \!\!\!\vdash }\ t\ {=}{}\ {(}{}{{\textsf {\textit{wf-theory}}}}\ {\Theta }\ {\wedge }\ {(}{}{\forall }h{\in }{\Gamma }{.}{}\ {{\textsf {\textit{wf-term}}}}\ {(}{}{{\textsf {\textit{sig}}}}\ {\Theta }{)}{}\ h\ {\wedge }\ {\vdash }{{}_{\tau }}\ h\ {:}{}\ {{\textsf {\textit{prop}}}}{)}{}\ {\wedge }\ {\Theta }{,}{}{\Gamma }\ {\vdash }\ t{)}{}\)

7 Admissible Rules

Reasoning directly with these basic rules can be very tedious. In the following, we discuss some useful admissible rules that we frequently encountered during our formalization work and sketch their formal proofs.
As already mentioned, our inference system has no inbuilt way of performing term substitutions and one has to simulate them using \({\bigwedge }\)-introductions and eliminations. This can be particularly annoying when performing simultaneous substitutions, as one needs to ensure that no interferences occur. We define a function \({{\textsf {\textit{subst-term}}}}\) which takes an association list of (variable, term) pairs and substitutes them simultaneously into a term, and we prove the following corresponding rule:
\[
\frac{\Theta,\Gamma \vdash t \qquad \textit{tinst-ok}\ \Theta\ \textit{insts} \qquad \textit{fst} \mathbin{`} \textit{set}\ \textit{insts} \cap \textit{FV}\ \Gamma = \emptyset}{\Theta,\Gamma \vdash \textit{subst-term}\ \textit{insts}\ t}
\]
where \({{\textsf {\textit{tinst-ok}}}}\) requires there to be only one instantiation per variable and for the terms to be wellformed and of the same type as their corresponding variable. The last condition ensures that we do not substitute into variables occurring in an assumption. To prove this rule, we start with the special case of a single instantiation, which is performed by a single \({\bigwedge }\)-introduction followed by a \({\bigwedge }\)-elimination. To use this result to prove our desired rule, we need to perform the simultaneous instantiations sequentially. This is not possible in general, as they might interfere with one another. To remedy this, we decompose the substitution process into two phases: First we replace all variables we want to substitute into with distinct, fresh variables. Then we modify the original instantiations, so they substitute into their corresponding new variables instead. As these are fresh, they do not occur in the substituted terms. Therefore, no interference occurs and we can perform both phases sequentially.
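The two-phase process can be made concrete in a few lines. The Haskell sketch below illustrates the idea; it is not the formalization’s \({{\textsf {\textit{subst-term}}}}\). The stand-in names "#fresh0", "#fresh1", ... are assumed not to occur anywhere else, whereas the formalization obtains fresh variables from the infinite variable type \({^{\prime }}{}v\).

```haskell
-- substFree replaces one free variable by a term. Capture is not an
-- issue because bound variables are De Bruijn indices, never Fv.
substFree :: (Name, Typ) -> Term -> Term -> Term
substFree (x, ty) u t@(Fv y ty')
  | y == x && ty' == ty = u
  | otherwise           = t
substFree v u (Abs ty t) = Abs ty (substFree v u t)
substFree v u (App t s)  = App (substFree v u t) (substFree v u s)
substFree _ _ t          = t

-- Two phases: rename every target variable to a distinct fresh
-- stand-in, then substitute into the stand-ins one after another.
-- Since the stand-ins occur nowhere else, the phases cannot interfere,
-- so the sequential folds realize a simultaneous substitution.
substTerm :: [((Name, Typ), Term)] -> Term -> Term
substTerm insts t0 = foldl step renamed (zip freshVars (map snd insts))
  where
    freshVars = [ ("#fresh" ++ show i, ty)
                | (i, ((_, ty), _)) <- zip [(0 :: Int) ..] insts ]
    renamed   = foldl (\acc (v, v') -> substFree v (uncurry Fv v') acc)
                      t0 (zip (map fst insts) freshVars)
    step acc (v', u) = substFree v' u acc
```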
Another useful derived result is the weakening rule, which tells us that correct inferences remain correct when adding additional (superfluous) assumptions:
\[
\frac{\Theta,\Gamma \vdash B \qquad \textit{wf-term}\ (\textit{sig}\ \Theta)\ A \qquad \vdash_\tau A : \textit{prop}}{\Theta,\Gamma \cup \{A\} \vdash B}
\]
We prove this rule by rule induction on \({\Theta }{,}{}{\Gamma }\ {\vdash }\ B\). Most cases do not interact with \({\Gamma }\) at all and are, therefore, trivial. The only interesting case is the one for \({\bigwedge }\)-introduction, where, once again, variable capture causes trouble. To prove this case, we would like to use the corresponding \({\bigwedge }\)-introduction rule, but the added assumption \(A\) might contain the variable we want to bind. To fix this problem, we use the substitution rule proved above to rename the problematic variable in \(A\). As we cannot use this rule to substitute into the hypotheses, we use implication rules to move \(A\) into the proposition on the right side of the turnstile.
Another major source of complications is equality reasoning. We start by providing corresponding proof rules for each equality axiom, making them easier to use. For example, the resulting rule for reflexivity looks like this:
\[
\frac{\textit{wf-theory}\ \Theta \qquad \textit{wt-term}\ (\textit{sig}\ \Theta)\ t \qquad \forall A \in \Gamma.\ \textit{wt-term}\ (\textit{sig}\ \Theta)\ A \wedge\ \vdash_\tau A : \textit{prop}}{\Theta,\Gamma \vdash t \equiv t}
\]
The proofs for all these rules are very similar. We first prove them for an empty set of assumptions, to, once again, avoid accidental variable capturing. For this, we use the axiom rule and the derived simultaneous term instantiation rule to substitute the correct types and terms into the axiom. Then we allow for arbitrary (well typed) assumptions using the weakening rule.
By combining these equality rules with rules of the inference system, one can derive new rules, for example Paulson’s original congruence rule for abstraction. Such derivations are, however, complicated by having to propagate the wellformedness conditions of all involved objects, which makes them lengthy. As they also clutter the presentation, we will omit them for the rest of this section and assume that all objects involved in the rules are wellformed.
A last derived rule, which will be useful later, is the fact that \({\beta }\)-reduction preserves provability. Our inference system already contains a rule concerning \({\beta }\)-reduction but it can only be applied at the top of a term.
We first define an inductive notion of a \({\beta }\)-step, based on a formalization by Nipkow [33].
\[
\frac{}{\textit{Abs}\ T\ t \cdot u \rightarrow_\beta \textit{subst-bv}\ u\ t}
\qquad
\frac{t \rightarrow_\beta t'}{t \cdot u \rightarrow_\beta t' \cdot u}
\]
\[
\frac{u \rightarrow_\beta u'}{t \cdot u \rightarrow_\beta t \cdot u'}
\qquad
\frac{t \rightarrow_\beta t'}{\textit{Abs}\ T\ t \rightarrow_\beta \textit{Abs}\ T\ t'}
\]
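As a sanity check, the relation can be computed by enumerating all one-step reducts. The following Haskell sketch reuses substBv from the Sect. 3 sketch; \(t\ {\rightarrow }{{}_{\beta }}\ t{^{\prime }}{}\) then holds iff t' occurs in betaSteps t.

```haskell
-- Enumerate all one-step β-reducts of a term, one per redex,
-- mirroring the four rules of (→β).
betaSteps :: Term -> [Term]
betaSteps (App (Abs ty t) u) =
     substBv u t                                  -- contract the top redex
   : [ App (Abs ty t') u | t' <- betaSteps t ]    -- or reduce inside t
  ++ [ App (Abs ty t) u' | u' <- betaSteps u ]    -- or inside u
betaSteps (App t u) =
     [ App t' u | t' <- betaSteps t ]
  ++ [ App t u' | u' <- betaSteps u ]
betaSteps (Abs ty t) = [ Abs ty t' | t' <- betaSteps t ]
betaSteps _          = []
```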
This allows us to state a more useful \({\beta }\) rule, which proves equality of two terms that differ by a single \(\beta \)-step, regardless of where in the term the step occurs.
\[
\frac{t \rightarrow_\beta t'}{\Theta,\Gamma \vdash t \equiv t'}
\]
We naturally want to prove this statement by induction over \({(}{}{\rightarrow }{{}_{\beta }}{)}{}\). Because \({(}{}{\rightarrow }{{}_{\beta }}{)}{}\) is defined by four rules, there are four cases. The first case allows application of the \({\beta }\) rule of the inference system, the next two use the congruence rule for applications and the respective induction hypotheses. However, the last case is a problem, as descending under an abstraction can expose previously bound variables, which means we cannot apply our proof rules. To remedy this, we replace the now loose variable with a fresh free variable and perform our reasoning with the again wellformed term. The following rule justifies this transformation and is readily proved by combining basic and derived inference rules:
\[
\frac{\Theta,\Gamma \vdash \textit{subst-bv}\ (\textit{Fv}\ x\ T)\ t\ \equiv\ \textit{subst-bv}\ (\textit{Fv}\ x\ T)\ t' \qquad (x,\ T) \notin \textit{fv}\ t \cup \textit{fv}\ t' \cup \textit{FV}\ \Gamma}{\Theta,\Gamma \vdash \textit{Abs}\ T\ t\ \equiv\ \textit{Abs}\ T\ t'}
\]
However, applying this rule makes it impossible to use the induction hypothesis as adding the substitution changes the shape of the goal. The solution is to generalize the \({\beta }\) rule to reflect that one might have passed abstractions and substituted the respective loose variables by new fresh variables:
\[
\frac{t \rightarrow_\beta t' \qquad \textit{us}\ \text{a list of distinct free variables fresh for}\ t,\ t'\ \text{and}\ \Gamma}{\Theta,\Gamma \vdash \textit{subst-bvs}\ \textit{us}\ t\ \equiv\ \textit{subst-bvs}\ \textit{us}\ t'}
\]
This rule uses the \({{\textsf {\textit{subst-bvs}}}}\) function, which behaves like the previously seen \({{\textsf {\textit{subst-bv}}}}\) function, only instantiating multiple loose variables simultaneously. We can prove that it is possible to merge the call to \({{\textsf {\textit{subst-bv}}}}\), arising in the problematic case with the other substitutions. Therefore, we are now able to apply the induction hypothesis. The original rule is the special case for an empty list of variables.
A similar approach was taken to prove that \({\eta }\) reduction preserves provability. Because of symmetry, we have also proved that \({\beta }\)/\({\eta }\) expansion preserves provability, or combined that \({\beta }\)/\({\eta }\) convertibility does not affect provability.

8 Proof Terms and Checker

Berghofer and Nipkow [7] added proof terms to Isabelle. We present an executable checker for these proof terms that is proved sound w.r.t. the above formalization of the metalogic. Berghofer and Nipkow also developed a proof checker but it is unverified and checks the generated proof terms by feeding them back through Isabelle’s unverified inference kernel.
It is crucial to realize that all we need to know about the proof term checker is the soundness theorem below. The internals are, from a soundness perspective, irrelevant, which is why we can get away with sketching them informally. For this reason, we will not give definitions for all involved functions in the following presentation, preferring informal descriptions (all definitions are of course part of the formalization). This is in contrast to the logic itself, which acts like a specification, which is why we presented it in detail.
This is our data type of proof terms:
  • \({{\textbf {{\textsf {datatype}}}}}\ {^{\prime }}{}v\ proofterm\ {=}{}\)
       \({{\textsf {\textit{PAxm}}}}\ name\ {(}{}{(}{}{(}{}{^{\prime }}{}v\ {\times }\ sort{)}{}\ {\times }\ {^{\prime }}{}v\ typ{)}{}\ list{)}{}\)
    \({|}{}\ {{\textsf {\textit{PThm}}}}\ name\ {(}{}{(}{}{(}{}{^{\prime }}{}v\ {\times }\ sort{)}{}\ {\times }\ {^{\prime }}{}v\ typ{)}{}\ list{)}{}\)
    \({|}{}\ {{\textsf {\textit{PBound}}}}\ nat\)
    \({|}{}\ {{\textsf {\textit{Abst}}}}\ {(}{}{^{\prime }}{}v\ typ{)}{}\ {(}{}{^{\prime }}{}v\ proofterm{)}{}\)
    \({|}{}\ {{\textsf {\textit{AbsP}}}}\ {(}{}{^{\prime }}{}v\ term{)}{}\ {(}{}{^{\prime }}{}v\ proofterm{)}{}\)
    \({|}{}\ {{\textsf {\textit{Appt}}}}\ {(}{}{^{\prime }}{}v\ proofterm{)}{}\ {(}{}{^{\prime }}{}v\ term{)}{}\)
    \({|}{}\ {{\textsf {\textit{AppP}}}}\ {(}{}{^{\prime }}{}v\ proofterm{)}{}\ {(}{}{^{\prime }}{}v\ proofterm{)}{}\)
    \({|}{}\ {{\textsf {\textit{OfClass}}}}\ {(}{}{^{\prime }}{}v\ typ{)}{}\ name\)
    \({|}{}\ {{\textsf {\textit{Hyp}}}}\ {(}{}{^{\prime }}{}v\ term{)}{}\)
These proof terms are not designed to record proofs in our inference system, but to mirror the proof terms generated by Isabelle. Nevertheless, the constructors of our proof terms correspond roughly to the rules of the inference system. As the axiom rule of our inference system allows for type instantiations, \({{\textsf {\textit{PAxm}}}}\) contains an axiom and a type substitution. This substitution is encoded as an association list instead of a function. The axiom is referenced by \(name\). During proof checking, a mapping from names to terms is provided. \({{\textsf {\textit{PThm}}}}\) represents a (previously proved) theorem, containing the same information as a \({{\textsf {\textit{PAxm}}}}\) constructor. While we treat theorems as axioms, they use a different constructor, as axioms and theorems make use of different namespaces in the implementation. \({{\textsf {\textit{AbsP}}}}\) and \({{\textsf {\textit{Abst}}}}\) correspond to introduction of \({\Longrightarrow }\) and \({\bigwedge }\), \({{\textsf {\textit{AppP}}}}\) and \({{\textsf {\textit{Appt}}}}\) correspond to the respective eliminations. \({{\textsf {\textit{Hyp}}}}\) and \({{\textsf {\textit{PBound}}}}\) relate to the assumption rule, where \({{\textsf {\textit{Hyp}}}}\) refers to a free assumption while \({{\textsf {\textit{PBound}}}}\) contains a De Bruijn index referring to an assumption added during the proof by an \({{\textsf {\textit{AbsP}}}}\) constructor. \({{\textsf {\textit{OfClass}}}}\) denotes a proof that a type belongs to a given type class.
Isabelle considers terms modulo \({\alpha }{\beta }{\eta }\)-equivalence and therefore does not record \({\beta }\) or \({\eta }\) steps, while they are explicit steps in our inference system. Therefore we have no constructors corresponding to the (\(\beta \)) and (\(\eta \)) rules. The remaining equality axioms are naturally handled by the \({{\textsf {\textit{PAxm}}}}\) constructor.
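In our running Haskell sketch, a mirror of this datatype looks as follows (Int standing in for \(nat\), with the association-list type substitutions spelled out):

```haskell
-- Haskell mirror of the proof term datatype, reusing Name/Sort/Typ/Term
-- from the earlier sketches.
data ProofTerm
  = PAxm Name [((Name, Sort), Typ)]   -- named axiom + type substitution
  | PThm Name [((Name, Sort), Typ)]   -- named, previously proved theorem
  | PBound Int                        -- assumption bound by some AbsP
  | Abst Typ ProofTerm                -- ⋀-introduction
  | AbsP Term ProofTerm               -- ⟹-introduction
  | Appt ProofTerm Term               -- ⋀-elimination
  | AppP ProofTerm ProofTerm          -- ⟹-elimination
  | OfClass Typ Name                  -- type class membership
  | Hyp Term                          -- free assumption
  deriving Show
```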
In the rest of this section, we discuss how to derive an executable proof checker. Executability means that the checker is defined as a set of recursive functions that Isabelle’s code generator can translate into one of a number of target languages, in particular its implementation language SML [6, 12, 13].
Because of the approximate correspondence between proof term constructors and inference rules, implementing the proof checker largely amounts to providing executable versions of each inference rule, as in LCF: each rule becomes a function that checks the side conditions, and if they are true, computes the conclusion from the premises given as arguments. However, as the checked proof terms are not for our exact inference system but the implementation, there is some additional work to perform. The heart of our checker is a recursive function, where \(var\) is a concrete type of variables discussed further down:
\({{\textsf {\textit{replay}}}} {{:}{}{:}{}} var\ theory\)
            \({\Rightarrow }\ var\ typ\ list\ {\Rightarrow }\ var\ term\ list\ {\Rightarrow }\ var\ proofterm\ {\Rightarrow }\ var\ term\ option\)
It takes a theory, a context (see below), a list of current assumptions and a proof term and returns the certified proposition for a valid proof term or \({{\textsf {\textit{None}}}}\) for an invalid one. We now discuss some of the more involved implementation steps and illustrate them with some cases of the \({{\textsf {\textit{replay}}}}\) function.
First, as we save only the names of axioms and theorems in the proof terms, not the propositions themselves, we need a way to look these up. Therefore, our proof checker actually uses a slightly different theory type than the one shown before:
  • \({{\textbf {{\textsf {type\_synonym}}}}}\,\, {{^{\prime }}{}v\ \textit{theory}{^{\prime }}{}}=\)\({{{}{}{^{\prime }}{}v\ signature\ {\times }\ {(}{}{(}{}name\ {\rightharpoonup }\ {^{\prime }}{}v\ term{)}{}\ {\times }\ {(}{}name\ {\rightharpoonup }\ {^{\prime }}{}v\ term{)}{}{)}{}{}{}}}\)
It replaces the set of axioms with two (finite) maps. These allow efficient, name-based lookup of the actual axiom/theorem. We use two maps, as the Isabelle implementation uses distinct namespaces for axioms and theorems. The set of axioms in the original type can be recovered as the union of the ranges of the two maps. In the following, we will hide this implementation detail and use the original \(theory\) type.
The type \(var\) of variables is defined as follows:
\({{\textbf {{\textsf {type\_synonym}}}}}\, {indexname = {}{}{(}{}name\ {\times }\ int{)}{}{}{}}\)
\({{\textbf {{\textsf {datatype}}}}}\ var\ {=}{}\ {{\textsf {\textit{Free}}}}\ name\ {|}{}\ {{\textsf {\textit{Var}}}}\ indexname\ {|}{}\ {{\textsf {\textit{Internal}}}}\ nat\)
The constructors \({{\textsf {\textit{Free}}}}\) and \({{\textsf {\textit{Var}}}}\) are “inherited” from the Isabelle implementation of type \(term\). Both represent free variables but the ones represented by \({{\textsf {\textit{Var}}}}\) can be instantiated by unification. We added a third constructor, not present in Isabelle, which we use for easy generation of fresh variables by simply counting up.
Such internal variables are generated only when visiting an \({{\textsf {\textit{Abst}}}}\) constructor (\({\bigwedge }\)-introduction) during the traversal. \({{\textsf {\textit{Abst}}}}\) constructors introduce \({\bigwedge }\)-quantifiers, which eventually bind variables in the contained proof term. The proof terms generated by Isabelle already use De Bruijn notation for these variables, so descending under an \({{\textsf {\textit{Abst}}}}\) constructor can produce loose bound variables. For each of them, we add an internal variable to the context, which contains exactly the types of the \({{\textsf {\textit{Abst}}}}\)-bound variables passed while descending into the proof term. To obtain a fresh variable, we use the size of the current context. Conceptually, when passing an \({{\textsf {\textit{Abst}}}}\) constructor, our proof checker substitutes the newly generated variable for this index everywhere in the proof term, then replays the proof, and binds the variable again in the result, thereby always working with closed terms. However, performing this substitution first would require an extra traversal of the proof term at each \({{\textsf {\textit{Abst}}}}\) constructor. To avoid this, we remember all passed \({{\textsf {\textit{Abst}}}}\) constructors and substitute the corresponding variables simultaneously.
  • \({{\textsf {\textit{replay}}}}\ {\Theta }\ vs\ Hs\ {(}{}{{\textsf {\textit{Abst}}}}\ T\ p{)}{}\ {=}{}\)
       \({(}{}{{\textsf {\textit{if}}}}\ {{\textsf {\textit{wf-type}}}}\ {(}{}{{\textsf {\textit{sig}}}}\ {\Theta }{)}{}\ T\)
       \({{\textsf {\textit{then}}}}\ {{\textsf {\textit{map-option}}}}\ {(}{}{\lambda }t{.}{}\ {\bigwedge }{{}_{T}}\ {(}{}{{\textsf {\textit{Abs-fv}}}}\ {(}{}{{\textsf {\textit{Internal}}}}\ {|}{}vs{|}{}{)}{}\ T\ t{)}{}{)}{}\ {(}{}{{\textsf {\textit{replay}}}}\ {\Theta }\ {(}{}T\ {\#}{}\ vs{)}{}\ Hs\ p{)}{}\)
       \({{\textsf {\textit{else}}}}\ {{\textsf {\textit{None}}}}{)}{}\)
Such substitutions happen in the \({{\textsf {\textit{AbsP}}}}\) (\({\Longrightarrow }\)-introduction) and \({{\textsf {\textit{Appt}}}}\) (\({\bigwedge }\)-elimination) cases, by means of the \({{\textsf {\textit{subst-bvs}}}}\) function. Note that, somewhat counterintuitively, the innermost abstraction/lowest De Bruijn index corresponds to the highest internal name. We only show the \({{\textsf {\textit{AbsP}}}}\) case here:
  • \({{\textsf {\textit{replay}}}}\ {\Theta }\ vs\ Hs\ {(}{}{{\textsf {\textit{AbsP}}}}\ t\ p{)}{}\ {=}{}\)
       \({(}{}{{\textsf {\textit{let}}}}\ t{^{\prime }}{}\ {=}{}\ {{\textsf {\textit{subst-bvs}}}}\ {(}{}{{\textsf {\textit{map-index}}}}\ {(}{}{\lambda }i\ T{.}{}\ {{\textsf {\textit{Fv}}}}\ {(}{}{{\textsf {\textit{Internal}}}}\ {(}{}{|}{}vs{|}{}\ {-}{}\ {{\textsf {\textit{Suc}}}}\ i{)}{}{)}{}\ T{)}{}\ vs{)}{}\ t{;}{}\)
       \(\quad \ rep\ {=}{}\ {{\textsf {\textit{replay}}}}\ {\Theta }\ vs\ {(}{}t{^{\prime }}{}\ {\#}{}\ Hs{)}{}\ p\)
       \({{\textsf {\textit{in}}}}\ {{\textsf {\textit{if}}}}\ {\vdash }{{}_{\tau }}\ t{^{\prime }}{}\ {:}{}\ {{\textsf {\textit{prop}}}}\ {\wedge }\ {{\textsf {\textit{wf-term}}}}\ {(}{}{{\textsf {\textit{sig}}}}\ {\Theta }{)}{}\ t{^{\prime }}{}\ {{\textsf {\textit{then}}}}\ {{\textsf {\textit{map-option}}}}\ {(}{}{(}{}{\Longrightarrow }{)}{}\ t{^{\prime }}{}{)}{}\ rep\ {{\textsf {\textit{else}}}}\ {{\textsf {\textit{None}}}}{)}{}\)
We have shown that the result of running \({{\textsf {\textit{replay}}}}\) does not contain any internal variables (as long as the inputs do not contain any).
As already mentioned, term instantiations can be performed by means of the \({\bigwedge }\) rules. However, the proof terms generated by Isabelle do not use these rules when instantiating term variables in axioms. Instead, all variables in an axiom are assumed to already be universally quantified, so that only the elimination step remains. For our checker, this means we need to \({\bigwedge }\)-quantify all free variables when handling a \({{\textsf {\textit{PAxm}}}}\) constructor. A pitfall here is the order in which we quantify the free variables: the structure of the proof terms expects them to occur in the order given by a reverse inorder traversal of the axiom. As theorems are treated as just another kind of axiom, \({{\textsf {\textit{PThm}}}}\) is handled analogously.
  • \({{\textsf {\textit{replay}}}}\ {\Theta }\ \_\ \_\ {(}{}{{\textsf {\textit{PAxm}}}}\ n\ Tis{)}{}\ {=}{}\)
       \({(}{}{{\textsf {\textit{if}}}}\ {{\textsf {\textit{inst-ok}}}}\ {(}{}{{\textsf {\textit{sig}}}}\ {\Theta }{)}{}\ Tis\)
       \({{\textsf {\textit{then}}}}\ {{\textsf {\textit{map-option}}}}\ {(}{}{\lambda }t{.}{}\ {{\textsf {\textit{all-close}}}}\ {(}{}{{\textsf {\textit{subst-typ}}}}\ Tis\ t{)}{}{)}{}\ {(}{}{{\textsf {\textit{axioms}}}}\ {\Theta }\ n{)}{}\)
       \({{\textsf {\textit{else}}}}\ {{\textsf {\textit{None}}}}{)}{}\)
To model Isabelle’s view of terms modulo \({\alpha }{\beta }{\eta }\)-equivalence, we sometimes \({\beta }{\eta }\) normalize our terms (\({\alpha }\)-equivalence is for free thanks to De Bruijn notation) during the reconstruction of the proof. This is necessary when replaying an \({{\textsf {\textit{AppP}}}}\) constructor because checking the conditions of the corresponding implication elimination rule requires checking equality of two terms. For all other constructors, no equality checks are necessary. To avoid repeatedly traversing the terms we only normalize in the \({{\textsf {\textit{AppP}}}}\) case and work with possibly non-\({\beta }{\eta }\) normalized terms in all other cases.
\({{\textsf {\textit{replay}}}}\ {\Theta }\ vs\ Hs\ {(}{}{{\textsf {\textit{AppP}}}}\ p{{}_{1}}\ p{{}_{2}}{)}{}\ {=}{}\)
   \({(}{}{{\textsf {\textit{let}}}}\ \textit{rep}{{}_{1}}\ {=}{}\ {{\textsf {\textit{replay}}}}\ {\Theta }\ vs\ Hs\ p{{}_{1}}\ {{{>\!\!\!>\!\!\!=}}}\ {{\textsf {\textit{beta-eta-norm}}}}{;}{}\)
   \(\quad \ \textit{rep}{{}_{2}}\ {=}{}\ {{\textsf {\textit{replay}}}}\ {\Theta }\ vs\ Hs\ p{{}_{2}}\ {{{>\!\!\!>\!\!\!=}}}\ {{\textsf {\textit{beta-eta-norm}}}}\)
   \({{\textsf {\textit{in}}}}\ {{\textsf {\textit{case}}}}\ {(}{}\textit{rep}{{}_{1}}{,}{}\ \textit{rep}{{}_{2}}{)}{}\ {{\textsf {\textit{of}}}}\)
   \(\quad \ {(}{}{{\textsf {\textit{Some}}}}\ {(}{}A\ {\Longrightarrow }\ B{)}{}{,}{}\ {{\textsf {\textit{Some}}}}\ A{^{\prime }}{}{)}{}\ {\Rightarrow }\ {{\textsf {\textit{if}}}}\ A\ {=}{}\ A{^{\prime }}{}\ {{\textsf {\textit{then}}}}\ {{\textsf {\textit{Some}}}}\ B\ {{\textsf {\textit{else}}}}\ {{\textsf {\textit{None}}}}\)
   \(\quad \ {|}{}\ \_\ {\Rightarrow }\ {{\textsf {\textit{None}}}}{)}{}\)
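A hedged sketch of such a normalization can be built on the betaSteps enumeration from the Sect. 7 sketch: the head of betaSteps is the leftmost-outermost reduct, so iterating it is normal-order \({\beta }\)-reduction, and \({\eta }\)-contraction is applied bottom-up afterwards. Unlike the formalization’s \({{\textsf {\textit{beta-eta-norm}}}}\), which returns an option value, this illustration terminates only on \({\beta }\)-normalizing terms, which all welltyped terms are.

```haskell
-- Iterate the leftmost-outermost β-step until no redex remains.
betaNorm :: Term -> Term
betaNorm t = case betaSteps t of
               []       -> t
               (t' : _) -> betaNorm t'

usesBv :: Int -> Term -> Bool          -- does Bv k (at this depth) occur?
usesBv k (Bv i)    = i == k
usesBv k (Abs _ t) = usesBv (k + 1) t
usesBv k (App t u) = usesBv k t || usesBv k u
usesBv _ _         = False

shiftDown :: Int -> Term -> Term       -- remove one vanished binder level
shiftDown k (Bv i)     = Bv (if i > k then i - 1 else i)
shiftDown k (Abs ty t) = Abs ty (shiftDown (k + 1) t)
shiftDown k (App t u)  = App (shiftDown k t) (shiftDown k u)
shiftDown _ t          = t

-- η-contract Abs ty (t · Bv 0) to t whenever t does not use the binder.
etaNorm :: Term -> Term
etaNorm (Abs ty t) =
  case etaNorm t of
    App f (Bv 0) | not (usesBv 0 f) -> shiftDown 0 f
    t'                              -> Abs ty t'
etaNorm (App t u) = App (etaNorm t) (etaNorm u)
etaNorm t         = t

betaEtaNorm :: Term -> Term
betaEtaNorm = etaNorm . betaNorm
```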
For our soundness proof of the checker, we need to verify that these normalizations preserve provability. For this, we show that they can be expressed as a finite number of \({\beta }\)-reduction steps, followed by a finite number of \({\eta }\)-reduction steps. These steps can then be justified using the rules presented in Sect. 7, yielding the desired result.
  • \({{\textsf {\textit{wf-theory}}}}\ {\Theta }\ {\wedge }\ {(}{}{\forall }A{\in }{\Gamma }{.}{}\ {{\textsf {\textit{wt-term}}}}\ {(}{}{{\textsf {\textit{sig}}}}\ {\Theta }{)}{}\ A\ {\wedge }\ {\vdash }{{}_{\tau }}\ A\ {:}{}\ {{\textsf {\textit{prop}}}}{)}{}\ {\wedge }\)    \({\Theta }{,}{}{\Gamma }\ {\vdash }\ t\ {\wedge }\ {{\textsf {\textit{beta-eta-norm}}}}\ t\ {=}{}\ {{\textsf {\textit{Some}}}}\ u\ {\longrightarrow }\)    \({\Theta }{,}{}{\Gamma }\ {\vdash }\ u\)
For our \({{\textsf {\textit{replay}}}}\) function to be executable, all constructs it uses must be executable as well. Since we consistently use explicitly finite data structures in the definition of (order-sorted) signatures, Isabelle’s code generator needs no further help handling them. Still, the representation of type constructor signatures in Sect. 4 as an unstructured set \(tcs\) does not allow efficient access to the relevant information. In particular, to compute \({{\textsf {\textit{has-sort }}}}\ oss\ {(}{}{{\textsf {\textit{Ty}}}}\ {\kappa }\ \textit{Ts}{)}{}\ S\), one needs to find the signatures for \({\kappa }\) required to fulfill all class constraints in \(S\), which means searching all of \(tcs\) for each constraint. To speed this up, we define an alternative representation \(\textit{TCS}\), inspired by the Isabelle implementation, which the code generator can transparently use as a replacement. This \(\textit{TCS}\) component has type \(name\ {\rightharpoonup }\ {(}{}class\ {\rightharpoonup }\ sort\ list{)}{}\): it first groups all signatures by type constructor and then allows looking up the necessary argument sort constraints given an expected return class. More formally, \(\textit{TCS}\) represents the set of all type constructor signatures \({\kappa }\ {:}{}{:}{}\ {(}{}Ss{)}{}\ c\) such that \(\textit{TCS}\ {\kappa }\ {=}{}\ {{\textsf {\textit{Some}}}}\ dm\) and \(dm\ c\ {=}{}\ {{\textsf {\textit{Some}}}}\ Ss\). We can therefore recreate the equivalent, but more intuitive, original version \(tcs\) as follows (the sketch after the equation illustrates the resulting lookup pattern):
\(tcs\ {=}{} {\{}{}{(}{}{\kappa }{,}{}\ Ss{,}{}\ c{)}{}\ {|}{}\ {\exists }dm{.}{}\ TCS\ {\kappa }\ {=}{}\ {{\textsf {\textit{Some}}}}\ dm\ {\wedge }\ dm\ c\ {=}{}\ {{\textsf {\textit{Some}}}}\ Ss{\}}{}\)
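The following Haskell sketch illustrates the access pattern that \(\textit{TCS}\) enables. It is our own illustration: the names constraintsFor and hasSort are hypothetical, the type syntax is simplified to constructor applications, and type variables as well as subsort reasoning are omitted.

```haskell
import qualified Data.Map as M

type Name  = String
type Class = String
type Sort  = [Class]      -- a sort, represented as a list of classes

data Typ = Ty Name [Typ]  -- constructor applications only, for this sketch

-- The grouped representation from the text: name ⇀ (class ⇀ sort list).
type TCS = M.Map Name (M.Map Class [Sort])

-- Argument sort constraints for a signature kappa :: (Ss) c, found by two
-- map lookups instead of a linear search through the unstructured set tcs.
constraintsFor :: TCS -> Name -> Class -> Maybe [Sort]
constraintsFor tcs kappa c = M.lookup kappa tcs >>= M.lookup c

-- has-sort for constructor applications: Ty kappa Ts has sort S iff for
-- every class c in S there is a signature kappa :: (Ss) c and each
-- argument has the corresponding sort.
hasSort :: TCS -> Typ -> Sort -> Bool
hasSort tcs (Ty kappa ts) s = all ok s
  where
    ok c = case constraintsFor tcs kappa c of
             Just ss -> length ss == length ts
                        && and (zipWith (hasSort tcs) ts ss)
             Nothing -> False
```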
We also need to make the inductive wellformedness checks for sorts, types, terms, signatures and theories executable. Mostly, this amounts to providing recursive versions of the inductive definitions and proving them equivalent. A problematic point is the definition of the type instance relation \({(}{}{\lesssim }{)}{}\), which contains an (unbounded) existential quantifier. To make it executable, we provide an implementation that computes a suitable type substitution by matching the types, as sketched below.
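A minimal Haskell sketch of this matching approach follows. It is our own illustration with a simplified type syntax; in particular, the sort constraints that a full implementation must check for the matched variables are omitted.

```haskell
import qualified Data.Map as M
import Control.Monad (foldM)
import Data.Maybe (isJust)

type Name = String
type Sort = [String]

-- Types: variables carrying their sort, and constructor applications.
data Typ = Tv Name Sort | Ty Name [Typ]
  deriving (Eq, Show)

-- Extend a partial substitution rho so that applying it to the pattern
-- yields the target; Nothing signals a clash. This replaces the unbounded
-- existential quantifier in the instance relation by a terminating search.
match :: Typ -> Typ -> M.Map (Name, Sort) Typ -> Maybe (M.Map (Name, Sort) Typ)
match (Tv v s) t rho =
  case M.lookup (v, s) rho of
    Nothing           -> Just (M.insert (v, s) t rho)
    Just t' | t == t' -> Just rho
    _                 -> Nothing
match (Ty k ts) (Ty k' ts') rho
  | k == k' && length ts == length ts'
  = foldM (\r (p, u) -> match p u r) rho (zip ts ts')
match _ _ _ = Nothing

-- t1 ≲ t2 iff t1 is obtained from t2 by some type substitution.
instanceOf :: Typ -> Typ -> Bool
instanceOf t1 t2 = isJust (match t2 t1 M.empty)
```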
In the end, we obtain an executable proof checker:
  • \({{\textsf {\textit{check-proof}}}}\ {\Theta }\ P\ p\ {=}{}\)    \({(}{}{{\textsf {\textit{wf-theory}}}}\ {\Theta }\ {\wedge }\)    \({(}{}{\forall }h{\in }{{\textsf {\textit{hyps}}}}\ P{.}{}\ {{\textsf {\textit{wf-term}}}}\ {(}{}{{\textsf {\textit{sig}}}}\ {\Theta }{)}{}\ h\ {\wedge }\ {\vdash }{{}_{\tau }}\ h\ {:}{}\ {{\textsf {\textit{prop}}}}{)}{}\ {\wedge }\ {{\textsf {\textit{replay-norm}}}}\ {\Theta }\ P\ {=}{}\ {{\textsf {\textit{Some}}}}\ p{)}{}\)
where \({{\textsf {\textit{replay-norm}}}}\ {\Theta }\ P\ {=}{}\ {(}{}{{\textsf {\textit{replay}}}}\ {\Theta }\ {[}{}{]}{}\ {(}{}{{\textsf {\textit{hyps}}}}\ P{)}{}\ P\ {{{>\!\!\!>\!\!\!=}}}\ {{\textsf {\textit{beta-eta-norm}}}}{)}{}\). This final \({\beta }{\eta }\)-normalization step is once again necessary to account for results that differ syntactically but are \({\alpha }{\beta }{\eta }\)-equivalent.
\({{\textsf {\textit{check-proof}}}}\) checks wellformedness of the theory \({\Theta }\) and of the hypotheses, and then checks whether proof \(P\) proves the given proposition \(p\). The latter check is important because the Isabelle theorems we check contain both a proof and the proposition that the theorem claims to prove. As one of our main results, we prove the correctness of our checker:
\({{\textsf {\textit{check-proof}}}}\ {\Theta }\ P\ p\ {\longrightarrow }\ \ {\Theta }{,}{}{{\textsf {\textit{hyps}}}}\ P\ {\vdash \!\!\!\vdash }\ p\)
The proof itself is conceptually simple and proceeds by induction over the structure of the computation of \({{\textsf {\textit{replay}}}}\). For each proof constructor, we show that every step taken by the functional version \({{\textsf {\textit{replay}}}}\) is matched by corresponding inference rules in our system. Most of the proof effort goes into a large library of results about terms, types, signatures, substitutions, wellformedness etc. required for this proof. In particular, we need to prove derived rules characterizing all the technical operations we use, similar to Sect. 7.

9 Size and Structure of the Formalization

All material presented so far has been formalized in Isabelle/HOL. The definition of the inference system (incl. types, terms etc.) resides in a separate theory \(Core\) that depends only on the basic library of Isabelle/HOL. It takes about 300 LOC and is fairly high level and readable – we presented most of it. This is at least an order of magnitude smaller than Isabelle’s inference kernel (which is not clearly delineated) – of course, the latter is optimized for performance. Its abstract type of theorems alone takes about 2,500 LOC, not counting any infrastructure for terms, types, unification etc.
The whole formalization consists of 12,000 LOC. The main components are:
  • Almost half the formalization (5,500 LOC) is devoted to providing a library of operations on types and terms and their properties. This includes, among others, executable functions for type checking, different types of substitutions, abstractions, the wellformedness checks, and \({\beta }\) and \({\eta }\) reductions.
  • Proving admissible rules of our inference system takes up 3,000 LOC. A large part of this is deriving rules for equality and the \({\beta }\) and \({\eta }\) reductions. Weakening rules are also derived.
  • Making the wellformedness checks for (order-sorted) signatures and theories as well as the type instance checks executable takes 1,800 LOC.
  • The definition and correctness proof of the checker build on the above material and take only about 500 additional LOC.
  • Around 1,000 LOC are spent on preliminary material, most importantly results about finite sets and maps, transferred from existing material for general sets and maps.

10 Integration with Isabelle

As explained above, Isabelle generates SML code for the proof checker. This code has its own definitions of types, terms etc. and needs to be interfaced with the corresponding data structures in Isabelle. This step requires 150 lines of handwritten SML code (glue code) that translates Isabelle’s data structures into the corresponding data structures of the generated proof checker so that we can feed them into \({{\textsf {\textit{check-proof}}}}\). We cannot verify this code and therefore aim to keep it as small and simple as possible. This is the reason for the previously mentioned intentional implementation bias in our formalization. We now describe how the various data types are translated. We call a translation trivial if it merely replaces one constructor by another, possibly forgetting some information.
The translation of types and terms is trivial as their structure is almost identical in the two settings.
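For illustration, a trivial translation in this sense might look as follows. This is our own Haskell sketch, not the actual SML glue code: the source-side constructors are simplified stand-ins for Isabelle’s term datatype, while the target side uses the constructors of the formalization. Note how the abstraction case forgets the bound variable’s name, since the formalization uses De Bruijn indices.

```haskell
type Typ = String  -- the type translation is elided in this sketch

-- Simplified source-side terms (named bound variables, as in Isabelle) ...
data SrcTerm = SConst String Typ | SFree String Typ | SBound Int
             | SAbs String Typ SrcTerm | SApp SrcTerm SrcTerm

-- ... and target-side terms with the formalization's constructors.
data GenTerm = Ct String Typ | Fv String Typ | Bv Int
             | Abs Typ GenTerm | App GenTerm GenTerm

-- Trivial translation: each constructor is replaced by its counterpart;
-- the SAbs case forgets the bound variable's name.
translate :: SrcTerm -> GenTerm
translate (SConst c tp)    = Ct c tp
translate (SFree v tp)     = Fv v tp
translate (SBound i)       = Bv i
translate (SAbs _ tp body) = Abs tp (translate body)
translate (SApp f a)       = App (translate f) (translate a)
```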
Proof term translation is trivial except for two special cases. So-called “oracles” (typically the result of unfinished proofs, i.e., “sorry” on the user level) are rejected (but none of the theories we checked contain oracles). Furthermore, translating previously proved lemmas requires some additional name handling. Also remember that the translation of proofs is not safety critical, because all that matters is that in the end we obtain a correct proof of the claimed proposition.
We also provide functions to translate the relevant content of the background theory: axioms (including previously proved theorems) and (order-sorted) signatures. This mostly amounts to extracting association lists from efficient internal data structures. Translating the axioms also involves converting an alternative internal representation of type class constraints into the standard form presented in Sect. 6.3.
The checker is integrated into Isabelle by calling it every time a new named theorem has been proved. The set of theorems proved so far is added to the axiomatic basis for this check. This ordering rules out cyclic dependencies between lemmas, because every theorem is checked before being added to the axiomatic basis. However, an explicit cyclicity check is not part of the formalization (yet), which speaks only about checking single proofs.

11 Running the Proof Checker

We run this modified Isabelle with our proof checker on multiple theories in various object logics contained in the Isabelle distribution. A rough overview of the scope of the covered material for some logics and the required running times can be found in the following table. The running times are the total times for running Isabelle, not just the proof checking, but the latter accounts for over 90% of the time. All tests were performed on a machine with an Intel Core i7-9750H CPU (2.60 GHz) and 32 GB of RAM.
\(\begin{array}{|l|r|l|} \hline \hbox {Logic} & \hbox {LOC} & \hbox {Time} \\ \hline \hbox {FOL} & 4{,}500 & 40\,\hbox {s} \\ \hbox {ZF} & 55{,}000 & 12\,\hbox {min} \\ \hbox {HOL} & 29{,}000 & 110\,\hbox {min} \\ \hline \end{array}\)
We can check the material in several smaller object logics in their entirety. One of the larger such logics is first-order logic (FOL). These logics do not develop any applications, but FOL comes with proof automation and theories testing that automation, in particular Pelletier’s collection of problems, which were considered challenges in their day [37]. Because the proofs are found automatically, the resulting proof terms are typically quite complex, making them good test material for a proof checker.
The logic ZF (Zermelo-Fraenkel set theory) builds on FOL but contains real applications and is an order of magnitude larger than FOL. We are able to check all material formalized in ZF in the Isabelle distribution.
Isabelle’s most frequently used and largest object logic is HOL. We managed to check some of the initial theories of the main library. These theories contain the basic logic and, among others, the libraries of sets, functions, orderings, lattices, groups, rings, fields and natural numbers. The formalizations are non-trivial and make heavy use of Isabelle’s type classes.
Why is checking material in ZF easier than in HOL? Profiling revealed that the proof checker spends a lot of time in functions that access the signature, especially the wellformedness checks. One reason for this is inefficient data structures (e.g., association lists): the running time depends heavily on the size of the signature and increases with every new constant, type and class. This is aggravated by our current approach, which exports the current state of the background theory and has to ensure its wellformedness before each check. Furthermore, there is no sharing of any kind in terms/types and their wellformedness checks. Because ZF is free of polymorphism and type classes, these checks are much simpler there. Lastly, the presence of type classes also increases the size of the involved proof terms. These effects can be seen within the HOL material itself: for example, despite having a similar size and containing roughly the same number of theorems, the material on rings takes about 10 times as long to check as that on natural numbers.

12 Trust Assumptions

We need to trust the following components outside of the formalization:
  • The verification (and code generation) of our proof checker in Isabelle/HOL. This is inevitable: one has to trust some theorem prover to start with. We could improve the trustworthiness of this step by porting our proofs to the verified HOL prover by Kumar et al. [20], but its code generator produces CakeML [19], not SML.
  • The unverified glue code in the integration of our proof checker into Isabelle (Sect. 10).
Because users currently cannot examine Isabelle’s internal data structures that we start from, they have to trust Isabelle’s front end that parses and transforms some textual input file into internal data structures. One could add a (possibly verified) presentation layer that outputs those internal representations into a readable format that can be inspected, while avoiding the traps Adams [2] is concerned with.

13 Future Work

Our primary focus will be on scaling up the proof checker to deal not just with all of HOL but with real applications (including itself!). There is a host of avenues for exploration. To name a few promising directions: more efficient data structures than association lists (e.g., via existing frameworks [25, 26]); caching of wellformedness checks for types and terms; exploiting sharing within terms and types (tricky because our intentionally simple glue code creates copies); and working with the compressed proof terms [6] that Isabelle creates by default, instead of uncompressing them as we do now.
We will also upgrade the formalization of our checker from individual theorems to sets of theorems, explicitly checking cyclic dependencies (which are currently prevented by the glue code, see Sect. 10).
A presentation layer as discussed in Sect. 12 would not just allow the inspection of the internal representation of theories but could also be extended to the proofs themselves, thus permitting checkers to be interfaced with Isabelle on a textual level instead of on internal data structures.
Another possible next step would be to formalize the theory extension mechanisms, including verified cyclicity checks. It would also be nice to have a model-theoretic semantics for \(\mathcal {M}\). We believe that the work by Kunčar and Popescu [21–24] could be adapted from HOL to \(\mathcal {M}\) for these purposes.

Acknowledgements

We thank Kevin Kappelmann, Magnus Myreen, Larry Paulson, Andrei Popescu, Makarius Wenzel, and the anonymous reviewers for their comments. Supported by Wirtschaftsministerium Bayern under DIK-2002-0027//DIK0185/03 and DFG GRK 2428 ConVeY.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

A Appendix

  • \({{\textsf {\textit{subst-bv}}}}\ u\ t\ {=}{}\ {{\textsf {\textit{subst-bv2}}}}\ t\ {0}\ u\)
  • \({{\textsf {\textit{subst-bv2}}}}\ {(}{}{{\textsf {\textit{Bv}}}}\ i{)}{}\ n\ u\ {=}{}\ {(}{}{{\textsf {\textit{if}}}}\ i\ {<}{}\ n\ {{\textsf {\textit{then}}}}\ {{\textsf {\textit{Bv}}}}\ i\ {{\textsf {\textit{else}}}}\ {{\textsf {\textit{if}}}}\ i\ {=}{}\ n\ {{\textsf {\textit{then}}}}\ u\ {{\textsf {\textit{else}}}}\ {{\textsf {\textit{Bv}}}}\ {(}{}i\ {-}{}\ {1}{)}{}{)}{}\)
    \({{\textsf {\textit{subst-bv2}}}}\ {(}{}{{\textsf {\textit{Abs}}}}\ T\ t{)}{}\ n\ u\ {=}{}\ {{\textsf {\textit{Abs}}}}\ T\ {(}{}{{\textsf {\textit{subst-bv2}}}}\ t\ {(}{}n\ {+}{}\ {1}{)}{}\ {(}{}{{\textsf {\textit{lift}}}}\ u\ {0}{)}{}{)}{}\)
    \({{\textsf {\textit{subst-bv2}}}}\ {(}{}f\ {\cdot }\ t{)}{}\ n\ u\ {=}{}\ {{\textsf {\textit{subst-bv2}}}}\ f\ n\ u\ {\cdot }\ {{\textsf {\textit{subst-bv2}}}}\ t\ n\ u\)
    \({{\textsf {\textit{subst-bv2}}}}\ t\ \_\ \_\ {=}{}\ t\)
  • \({{\textsf {\textit{lift}}}}\ {(}{}{{\textsf {\textit{Bv}}}}\ i{)}{}\ n\ {=}{}\ {(}{}{{\textsf {\textit{if}}}}\ n\ {\le }\ i\ {{\textsf {\textit{then}}}}\ {{\textsf {\textit{Bv}}}}\ {(}{}i\ {+}{}\ {1}{)}{}\ {{\textsf {\textit{else}}}}\ {{\textsf {\textit{Bv}}}}\ i{)}{}\)
    \({{\textsf {\textit{lift}}}}\ {(}{}{{\textsf {\textit{Abs}}}}\ T\ t{)}{}\ n\ {=}{}\ {{\textsf {\textit{Abs}}}}\ T\ {(}{}{{\textsf {\textit{lift}}}}\ t\ {(}{}n\ {+}{}\ {1}{)}{}{)}{}\)
    \({{\textsf {\textit{lift}}}}\ {(}{}f\ {\cdot }\ t{)}{}\ n\ {=}{}\ {{\textsf {\textit{lift}}}}\ f\ n\ {\cdot }\ {{\textsf {\textit{lift}}}}\ t\ n\)
    \({{\textsf {\textit{lift}}}}\ t\ \_\ {=}{}\ t\)
  • \({{\textsf {\textit{bind-fv}}}}\ T\ t\ {=}{}\ {{\textsf {\textit{bind-fv2}}}}\ T\ {0}\ t\)
  • \({{\textsf {\textit{bind-fv2}}}}\ var\ n\ {(}{}{{\textsf {\textit{Fv}}}}\ v\ T{)}{}\ {=}{}\ {(}{}{{\textsf {\textit{if}}}}\ var\ {=}{}\ {(}{}v{,}{}\ T{)}{}\ {{\textsf {\textit{then}}}}\ {{\textsf {\textit{Bv}}}}\ n\ {{\textsf {\textit{else}}}}\ {{\textsf {\textit{Fv}}}}\ v\ T{)}{}\)
    \({{\textsf {\textit{bind-fv2}}}}\ var\ n\ {(}{}{{\textsf {\textit{Abs}}}}\ T\ t{)}{}\ {=}{}\ {{\textsf {\textit{Abs}}}}\ T\ {(}{}{{\textsf {\textit{bind-fv2}}}}\ var\ {(}{}n\ {+}{}\ {1}{)}{}\ t{)}{}\)
    \({{\textsf {\textit{bind-fv2}}}}\ var\ n\ {(}{}f\ {\cdot }\ u{)}{}\ {=}{}\ {{\textsf {\textit{bind-fv2}}}}\ var\ n\ f\ {\cdot }\ {{\textsf {\textit{bind-fv2}}}}\ var\ n\ u\)
    \({{\textsf {\textit{bind-fv2}}}}\ \_\ \_\ t\ {=}{}\ t\)
  • \({{\textsf {\textit{tinst-ok}}}}\ {\Sigma }\ insts\ {\equiv }\) \({{\textsf {\textit{distinct}}}}\ {(}{}{{\textsf {\textit{map}}}}\ {{\textsf {\textit{fst}}}}\ insts{)}{}\ {\wedge }\ {{\textsf {\textit{list-all}}}}\ {(}{}{\lambda }{(}{}{(}{}v{,}{}\ T{)}{}{,}{}\ t{)}{}{.}{}\ {{\textsf {\textit{wf-term}}}}\ {\Sigma }\ t\ {\wedge }\ {\vdash }{{}_{\tau }}\ t\ {:}{}\ T{)}{}\ insts\)
  • \({{\textsf {\textit{subst-term}}}}\ \_\ {(}{}{{\textsf {\textit{Ct}}}}\ c\ T{)}{}\ {=}{}\ {{\textsf {\textit{Ct}}}}\ c\ T\)
    \({{\textsf {\textit{subst-term}}}}\ insts\ {(}{}{{\textsf {\textit{Fv}}}}\ idn\ T{)}{}\ {=}{}\ {{\textsf {\textit{subst-fv}}}}\ idn\ T\ insts\)
    \({{\textsf {\textit{subst-term}}}}\ \_\ {(}{}{{\textsf {\textit{Bv}}}}\ n{)}{}\ {=}{}\ {{\textsf {\textit{Bv}}}}\ n\)
    \({{\textsf {\textit{subst-term}}}}\ insts\ {(}{}{{\textsf {\textit{Abs}}}}\ T\ t{)}{}\ {=}{}\ {{\textsf {\textit{Abs}}}}\ T\ {(}{}{{\textsf {\textit{subst-term}}}}\ insts\ t{)}{}\)
    \({{\textsf {\textit{subst-term}}}}\ insts\ {(}{}t\ {\cdot }\ u{)}{}\ {=}{}\ {{\textsf {\textit{subst-term}}}}\ insts\ t\ {\cdot }\ {{\textsf {\textit{subst-term}}}}\ insts\ u\)
  • \({{\textsf {\textit{subst-bvs}}}}\ s\ t\ {=}{}\ {{\textsf {\textit{subst-bvs2}}}}\ t\ {0}\ s\)
  • \({{\textsf {\textit{subst-bvs2}}}}\ {(}{}{{\textsf {\textit{Bv}}}}\ i{)}{}\ n\ us\ {=}{}\)
    \(\ \ {(}{}{{\textsf {\textit{if}}}}\ i\ {<}{}\ n\ {{\textsf {\textit{then}}}}\ {{\textsf {\textit{Bv}}}}\ i\ {{\textsf {\textit{else}}}}\ {{\textsf {\textit{if}}}}\ i\ {-}{}\ n\ {<}{}\ {|}{}us{|}{}\ {{\textsf {\textit{then}}}}\ us\ {!}{}\ {(}{}i\ {-}{}\ n{)}{}\ {{\textsf {\textit{else}}}}\ {{\textsf {\textit{Bv}}}}\ {(}{}i\ {-}{}\ {|}{}us{|}{}{)}{}{)}{}\)
    \({{\textsf {\textit{subst-bvs2}}}}\ {(}{}{{\textsf {\textit{Abs}}}}\ T\ t{)}{}\ n\ us\ {=}{}\ {{\textsf {\textit{Abs}}}}\ T\ {(}{}{{\textsf {\textit{subst-bvs2}}}}\ t\ {(}{}n\ {+}{}\ {1}{)}{}\ {(}{}{{\textsf {\textit{map}}}}\ {(}{}{\lambda }t{.}{}\ {{\textsf {\textit{lift}}}}\ t\ {0}{)}{}\ us{)}{}{)}{}\)
    \({{\textsf {\textit{subst-bvs2}}}}\ {(}{}f\ {\cdot }\ t{)}{}\ n\ us\ {=}{}\ {{\textsf {\textit{subst-bvs2}}}}\ f\ n\ us\ {\cdot }\ {{\textsf {\textit{subst-bvs2}}}}\ t\ n\ us\)
    \({{\textsf {\textit{subst-bvs2}}}}\ t\ \_\ \_\ {=}{}\ t\)
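The \({{\textsf {\textit{bind-fv}}}}\) operation above is the building block for the \({\bigwedge }\)-closure (\({{\textsf {\textit{all-close}}}}\)) used when replaying axioms. The following Haskell sketch is our own transcription and makes an assumption: the metalogic’s \({\bigwedge }\) is represented, following Isabelle conventions, as a quantifier constant applied to an abstraction, with the constant name "Pure.all" chosen for illustration.

```haskell
-- Terms and types in the style of the formalization (names simplified).
data Typ  = Prop | Fun Typ Typ | TOther deriving (Eq, Show)
data Term = Ct String Typ | Fv String Typ | Bv Int
          | Abs Typ Term | App Term Term

-- bind-fv2, transcribed from the definition above.
bindFv2 :: (String, Typ) -> Int -> Term -> Term
bindFv2 var n (Fv v t)  = if var == (v, t) then Bv n else Fv v t
bindFv2 var n (Abs t b) = Abs t (bindFv2 var (n + 1) b)
bindFv2 var n (App f u) = App (bindFv2 var n f) (bindFv2 var n u)
bindFv2 _   _ t         = t

-- Universally close t over one free variable: abstract it via bindFv2 and
-- apply the quantifier constant (name and type layout assumed).
mkAll :: (String, Typ) -> Term -> Term
mkAll var@(_, tp) t =
  App (Ct "Pure.all" (Fun (Fun tp Prop) Prop))
      (Abs tp (bindFv2 var 0 t))
```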

Literature
1. Harrison, J.: Towards self-verification of HOL Light. In: Furbach, U., Shankar, N. (eds.) Proceedings of the Third International Joint Conference, IJCAR 2006. Lect. Notes in Comp. Sci., vol. 4130, pp. 177–191. Springer, Seattle, WA (2006). https://doi.org/10.1007/11814771_17
7. Berghofer, S., Nipkow, T.: Proof terms for simply typed higher order logic. In: Harrison, J., Aagaard, M. (eds.) Theorem Proving in Higher Order Logics. Lect. Notes in Comp. Sci., vol. 1869, pp. 38–52. Springer, Berlin, Heidelberg (2000). https://doi.org/10.1007/3-540-44659-1_3
11. Hurd, J.: OpenTheory: Package management for higher order logic theories. In: Reis, G.D., Théry, L. (eds.) Workshop on Programming Languages for Mechanized Mathematics Systems (ACM SIGSAM PLMMS 2009), pp. 31–37 (2009)
12. Wenzel, M.: Type classes and overloading in higher-order logic. In: Gunter, E.L., Felty, A.P. (eds.) Theorem Proving in Higher Order Logics, TPHOLs’97. Lect. Notes in Comp. Sci., vol. 1275, pp. 307–322. Springer, Berlin, Heidelberg (1997). https://doi.org/10.1007/BFb0028402
17. Åman Pohjola, J., Gengelbach, A.: A mechanised semantics for HOL with ad-hoc overloading. In: Albert, E., Kovács, L. (eds.) LPAR 2020: 23rd International Conference on Logic for Programming, Artificial Intelligence and Reasoning. EPiC Series in Computing, vol. 73, pp. 498–515. EasyChair (2020). https://doi.org/10.29007/413d
19. Barras, B.: Coq en Coq. Technical Report 3026, INRIA (1996)
20. Barras, B.: Verification of the interface of a small proof system in Coq. In: Giménez, E., Paulin-Mohring, C. (eds.) Types for Proofs and Programs, pp. 28–45. Springer, Berlin, Heidelberg (1998)
21. Sozeau, M., Boulier, S., Forster, Y., Tabareau, N., Winterhalter, T.: Coq Coq correct! Verification of type checking and erasure for Coq, in Coq. Proc. ACM Program. Lang. 4(POPL), 8:1–8:28 (2020). https://doi.org/10.1145/3371076
23. Davis, J.: A self-verifying theorem prover. PhD thesis, The University of Texas at Austin (2009)
28. Pfenning, F.: Elf: A language for logic definition and verified metaprogramming. In: Logic in Computer Science (LICS 1989), pp. 313–322. IEEE Computer Society Press, Pacific Grove (1989)
29. Pfenning, F., Schürmann, C.: System description: Twelf – a meta-logical framework for deductive systems. In: Ganzinger, H. (ed.) Automated Deduction, CADE-16. Lect. Notes in Comp. Sci., vol. 1632, pp. 202–206. Springer, Berlin, Heidelberg (1999). https://doi.org/10.1007/3-540-48660-7_14
30. Pientka, B.: Beluga: Programming with dependent types, contextual data, and contexts. In: Blume, M., Kobayashi, N., Vidal, G. (eds.) Functional and Logic Programming, FLOPS 2010. Lect. Notes in Comp. Sci., vol. 6009, pp. 1–12. Springer, Berlin, Heidelberg (2010). https://doi.org/10.1007/978-3-642-12251-4_1
33. Nipkow, T.: Order-sorted polymorphism in Isabelle. In: Huet, G., Plotkin, G. (eds.) Logical Environments, pp. 164–188. Cambridge University Press, Cambridge (1993)
34. Nipkow, T., Snelting, G.: Type classes and overloading resolution via order-sorted unification. In: Hughes, J. (ed.) Proc. 5th ACM Conf. Functional Programming Languages and Computer Architecture. Lect. Notes in Comp. Sci., vol. 523, pp. 1–14. Springer, Berlin, Heidelberg (1991). https://doi.org/10.1007/3540543961_1
37. Berghofer, S., Nipkow, T.: Executing higher order logic. In: Callaghan, P., Luo, Z., McKinna, J., Pollack, R. (eds.) Types for Proofs and Programs (TYPES 2000). Lect. Notes in Comp. Sci., vol. 2277, pp. 24–40. Springer, Berlin, Heidelberg (2002). https://doi.org/10.1007/3-540-45842-5_2
38. Haftmann, F., Nipkow, T.: Code generation via higher-order rewrite systems. In: Blume, M., Kobayashi, N., Vidal, G. (eds.) Functional and Logic Programming (FLOPS 2010). Lect. Notes in Comp. Sci., vol. 6009, pp. 103–117. Springer, Berlin, Heidelberg (2010). https://doi.org/10.1007/978-3-642-12251-4_9
39. Haftmann, F., Krauss, A., Kunčar, O., Nipkow, T.: Data refinement in Isabelle/HOL. In: Blazy, S., Paulin-Mohring, C., Pichardie, D. (eds.) Interactive Theorem Proving (ITP 2013). Lect. Notes in Comp. Sci., vol. 7998, pp. 100–115. Springer, Berlin, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39634-2_10
43. Lochbihler, A.: Light-weight containers for Isabelle: Efficient, extensible, nestable. In: Blazy, S., Paulin-Mohring, C., Pichardie, D. (eds.) Interactive Theorem Proving, ITP 2013. Lect. Notes in Comp. Sci., vol. 7998, pp. 116–132. Springer, Berlin, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39634-2_11