
Open Access 2025 | Original Paper | Book Chapter

Coma, an Intermediate Verification Language with Explicit Abstraction Barriers

Authors: Andrei Paskevich, Paul Patault, Jean-Christophe Filliâtre

Published in: Programming Languages and Systems

Publisher: Springer Nature Switzerland


Abstract

This chapter delves into the details of an intermediate verification language designed to streamline the specification and verification of programs. The key innovation is the use of explicit abstraction barriers, which enable more flexible and concise specifications. These barriers make it possible to separate the interface part of a function from its implementation part, supporting a more modular and comprehensible verification process. The chapter opens with an illustrative example of a simple function that exposes the limitations of traditional contract-based approaches. It then introduces the Coma language, which uses continuation-passing style to capture the standard control structures with a handful of constructs. The syntax and semantics of the language are explored in depth, with particular attention to how assertions and barriers are integrated into the code. The chapter also covers the generation of verification conditions, presenting a two-stage process that transforms Coma programs into logical propositions suitable for automated proving. The practical applications of Coma are demonstrated through case studies, including a regular expression engine and a sorting algorithm in x86-64 assembly code. These examples show the language's ability to handle complex verification tasks, making it a powerful tool for formal verification in real-world applications.
Notes
This research was supported, in part, by the ANR project ANR-22-CE48-0013 “GOSPEL” and, in part, by the Décysif project funded by the Île-de-France region and by the French government in the context of “Plan France 2030”.

1 Introduction

Consider a simple function, written in some ML dialect, which eliminates the root node from a binary tree, using an existing library function that merges two trees into one:
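In OCaml, such a function might look as follows; the tree type and the naive mergeTree below are illustrative placeholders of our own, not the actual library code:

```ocaml
type 'a tree = Empty | Node of 'a * 'a tree * 'a tree

(* mergeTree is assumed to come from a library; this naive placeholder
   grafts the second tree under the rightmost leaf of the first one *)
let rec mergeTree l r =
  match l with
  | Empty -> r
  | Node (x, a, b) -> Node (x, a, mergeTree b r)

(* eliminate the root node, merging the two subtrees *)
let removeRoot t =
  match t with
  | Node (_, l, r) -> mergeTree l r
  | Empty -> invalid_arg "removeRoot"
```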
If we want to use removeRoot in a formally verified program, we need to provide this code with a specification. In a traditional contract-based approach, this means writing a precondition and a postcondition, and here is how they would usually look:
While this contract does its job, it is rather unpleasant. Not only does it take more space than the code it describes, it also largely repeats what is already written in the code. What is more, if we compute a verification condition (VC, for brevity) for the definition of removeRoot, it will take the form of one match-with formula implying another (or perhaps two nested match-with formulas), neither of which is easy to read or to prove.
Some programming languages, like Haskell and Agda, admit multiclause function definitions, and it is tempting to write our specification in this way, too:
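In a hypothetical multi-clause syntax with per-clause annotations, this could be sketched as follows; the first clause's postcondition is left abstract as post, since we do not reproduce the original annotation:

```
removeRoot (Node x l r) = mergeTree l r   { post x l r result }
removeRoot Empty        = absurd          (* unreachable *)
```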
This definition is much nicer. The postcondition in the first clause can refer to the results of the top-level pattern matching and does not need to do one itself. Furthermore, the second clause is self-explanatory, so we can omit the specification altogether.
However, from the verification point of view, something unusual is happening here. As we push the postcondition down the first branch of the pattern matching, we expose a part of the implementation (namely, the pattern matching itself) to the client code. Whenever removeRoot is called in our program, the VC for that call needs to perform the case analysis on the tree parameter in order to access the postcondition. Even more drastically, the second branch contains no specification at all, and so the caller’s VC has to “inline” the entire second clause at the call site and prove that it is never reached.
What we did in this definition of removeRoot is move the abstraction barrier inwards from the entry-exit boundary of the function, even omitting it entirely on some of the execution paths. The question is: what are the rules of VC generation for programs with freely moving abstraction barriers? And what if we do more in the exposed part of the code than just pattern matching or failure?
In this paper, we propose a formalism for computation and specification that intends to answer this question. We present a programming language called Coma that is small enough to comfortably study its properties, yet expressive enough to serve as a practical intermediate verification language for real-life applications. Coma programs are written in the continuation-passing style—the name Coma is short for Continuation Machine—which allows us to capture with just a few constructions the standard control structures: sequence, conditionals, loops, function calls, exception handling.
Specification annotations in Coma take the form of assertions mixed with executable code. Abstraction barriers are made explicit, as special tags that separate the “interface” part of a subroutine, which is verified at every call site, from the “implementation” part, which is verified only once, at the definition site.
Let us acquaint ourselves with this approach through a few examples. Here, we do not yet use the minimalistic syntax of Coma and stay with familiar ML-like constructs.
First, consider a simple wrapper for the Unix exit function, which expects an 8-bit unsigned integer and does not return to the caller. On the left, we show the traditional contract-based specification, and on the right, its rendition in the style of Coma:
In Coma, the precondition moves into the function body as an assertion which precedes the actual implementation, and we hide the latter under the explicit abstraction barrier denoted \({\mathtt {\uparrow }}\). Since the assertion is placed above the barrier, it must be verified at every call of wrapExit, whereas the rest of the body is invisible to the caller and produces no proof obligations at call sites. On the definition side, the instructions above the barrier are admitted without verification (which amounts here to simply assuming the asserted property), and the code under the barrier is verified under this assumption. Thus the VC for the definition of wrapExit is \(\forall \texttt {r}.\ \texttt {0} \le \texttt {r} < \texttt {256} \rightarrow \varphi \), where \(\varphi \) is the VC of the call Unix.exit r, whatever it might be.
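Schematically, the Coma-style wrapExit just described might be rendered as follows (concrete syntax approximated):

```
wrapExit (r: int) =
  { 0 ≤ r < 256 }   (* assertion above the barrier: verified at every call *)
  ↑
  Unix.exit r       (* hidden implementation: no proof obligations at call sites *)
```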
The representation of postconditions in Coma is more involved, since we need to place them above the abstraction barrier (otherwise, the postcondition would be hidden from the caller), but the actual exit points are in the code below the barrier. We get around this complication by treating the postcondition of a function as the precondition of its continuation. Consider the following function, which, once again, is shown with a traditional specification on the left and rewritten in the Coma style on the right:
On the Coma side, the continuation of triple is explicit, as a continuation parameter named ret. A locally defined function out provides a precondition for ret, similarly to wrapExit above. The definition of out is above the barrier in triple, and is verified whenever triple is called. This produces a VC of the form \(\forall \texttt {y}.\texttt {y} \texttt {=} \texttt {3} \cdot \texttt {x} \rightarrow \varphi \), where \(\varphi \) is the VC of ret y, that is, the verification condition of the rest of the computation, which follows the call of triple and is parametrized by the value returned. Note that this is exactly how a classical weakest-precondition calculus would handle calls of the triple function on the left-hand side. Conversely, when we verify the implementation of triple, the definition of out is admitted without proof, and we only need to verify the call of out under the barrier. This amounts to proving that x + x + x is equal to 3 \(\cdot \) x.
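The same call protocol can be mimicked in direct-style OCaml, with run-time assertions standing in for the proof obligations; this is a sketch of ours, not the paper's listing:

```ocaml
(* CPS rendition of triple: the local wrapper out asserts the
   postcondition y = 3 * x before invoking the continuation ret,
   just as the handler out does in the Coma version *)
let triple (x : int) (ret : int -> unit) : unit =
  let out y = assert (y = 3 * x); ret y in
  out (x + x + x)
```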
The flexibility and expressive power of explicit abstraction barriers become apparent when we implement the removeRoot function in the same fashion:
Here, the postcondition and the abstraction barrier are put inside the first branch of the match operation, and the second branch contains no assertions or barriers. This means that the pattern matching and the inadmissibility of Empty are part of the interface of removeRoot, and will appear in the VC for every individual call of the function.
For non-recursive functions, the barriers can be omitted entirely. In this case, there are no proof obligations generated at the definition site, and the code of the function is effectively inlined during the computation of verification conditions at the call sites.
Coma is meant to serve as an internal language in program verification tools, rather than as a source language for human programmers. The conversion into continuation-passing style, the introduction of wrapper functions for postconditions, and the placement of barriers should all be performed automatically during translation from the front-end language. For example, when a source function is supplied with a contract, the barriers would protect the entire function body, as for wrapExit and triple above. A function without a contract would be translated as is, without barriers, and included transparently in the callers' verification conditions. For more complex specifications, like that of removeRoot, the front-end language should allow the programmer to express their intent through other means, such as multi-clause definitions, local contracts, etc.
Fig. 1.
Verification condition generation in Coma.
Generation of verification conditions for Coma is a two-stage process, summarized in Fig. 1. A Coma program is first translated into a logical proposition, called a recipe. Recipes are formulas in a special higher-order logic, with an additional neutralization operator which is used for the parts of the code that are admitted without verification. An evaluation procedure, defined as an abstract machine à la Krivine, converts recipes into first-order formulas, suitable for automated proving. For Coma functions specified in a traditional manner, with the barrier at the entry-exit boundary, the resulting proof obligations are similar to those produced by a classical weakest-precondition calculus. However, we can also benefit from the intermediate higher-order form of our verification conditions, and factorize selected subformulas during evaluation. In this way, we curb the well-known exponential growth of classical weakest preconditions, and obtain proof obligations similar to the compact verification conditions of Flanagan and Saxe [7].
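The blow-up being curbed here can be seen on a toy model: in a naive weakest-precondition calculus, each conditional duplicates the remaining verification condition in both branches, so the formula grows exponentially in the number of sequential conditionals. A minimal sketch, with our own formula representation:

```ocaml
(* toy formulas: atoms and if-then-else splits on named conditions *)
type fml = Atom of string | Ite of string * fml * fml

let rec size = function
  | Atom _ -> 1
  | Ite (_, a, b) -> 1 + size a + size b

(* naive WP of a straight-line program of n conditionals over
   postcondition post: the rest of the VC is duplicated per branch *)
let rec wp n post =
  if n = 0 then post
  else Ite (Printf.sprintf "c%d" n, wp (n - 1) post, wp (n - 1) post)
```

Compact VCs in the style of Flanagan and Saxe share the duplicated subformula instead, keeping the result linear in the program size.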
We introduce the logic of recipes and the evaluation operator \(\circ \) in Sec. 3, together with a number of properties of this logic. The rules of verification condition generation, in the form of a VC operator, are given afterwards in Sec. 4. The soundness of our VC generation method is stated in Theorem 3.
We implemented a VC generator for Coma programs and performed several case studies, two of which are presented here. While in this paper we focus on the pure fragment, our implementation also supports first-class alias-free mutable variables with effect inference and monadic translation into the pure core language. This implementation currently serves as a back-end for Creusot, a tool for deductive verification of Rust programs [5]. Of particular interest for Creusot is Coma’s ability to automatically infer the contracts of simple Rust closures (anonymous functions). This and other features are presented in more detail in Section 5.
To summarize, here are the main contributions of our work:
  • a new intermediate verification language with higher-order functions and explicit abstraction barriers (Section 2);
  • a rigorously defined and proved verification condition generator (Sections 3 and 4);
  • a working implementation with numerous added features, including alias-free mutable variables, compact VC formulas via subgoal factorization, pre- and postcondition inference, etc. (Section 5);
  • two non-trivial case studies: a second-order regular expression engine (Section 6) and a sorting algorithm written in x86-64 assembly code (Section 7).
An extended version of this paper, containing additional technical definitions and detailed proofs, is available online [15].

2 Syntax and Semantics of Coma

The building blocks of Coma are expressions, which perform computations, and terms, which represent data. Expressions and terms are distinct syntactic entities: a term can be passed as an argument to an expression, but an expression cannot reduce to a term. An expression can be encapsulated in a named or anonymous handler (which is what we call subroutines), and either invoked directly or passed as a continuation argument to another expression.
Terms are composed of variables, constants, and pure total operations, provided that they have the same meaning in executable code and in specification. In theory, it would be possible to restrict the syntax of terms to variables and literal values, and delegate all computation to handlers, either predefined or introduced by the user. Still, for the sake of convenience, we admit in terms a handful of basic operations on unbounded integers, Booleans, and polymorphic finite sequences and binary trees. To handle type polymorphism, we treat types as a special kind of data: type expressions are considered to be terms of type Type, and we do not make a formal distinction between term and type variables. We denote variables with letters \(x, y, \upalpha , \upbeta \) (the latter two being reserved for types), and terms with \(s, t, \tau , \theta \) (again, the latter two being reserved for types). For specification, we use first-order formulas, denoted \(\varphi \) and \(\psi \), which may contain variables and terms, but not handlers. By a slight abuse of notation, Boolean terms are accepted as atomic formulas.
Fig. 2.
Handlers and expressions.
Handlers accept term parameters (which includes type parameters) and handler parameters, also called continuation parameters or outcomes. The list of formal parameters of a handler is called its type signature. Since we adopt the continuation-passing style, handlers do not have return values. Handlers that have no parameters are said to have a void type signature, written with the symbol \(\Box \). We use letters \(\pi \) and \(\varrho \) to denote type signatures, and letters h, g, f for handler names.
We assume access to a number of predefined primitive handlers, which form the "standard library" of Coma. Here are the type signatures of five primitive handlers that we use throughout this paper:
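Reconstructed from the descriptions that follow (parameter names other than those of get are our guesses), the signatures read roughly:

```
if     (c: bool) (then: □) (else: □)
unTree (α: Type) (t: tree α) (onNode: (d: α) (l: tree α) (r: tree α)) (onEmpty: □)
get    (α: Type) (s: seq α) (i: int) (ret: (v: α))
halt   □
fail   □
```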
Handler if makes a choice between two continuations, represented by nullary outcomes then and else, depending on the condition c. Handler unTree inspects a binary tree t: if it is a node, then its datum and two subtrees are passed to the onNode continuation, otherwise onEmpty is called. Handler get retrieves the i-th element of sequence s and passes it to the continuation. This operation is allowed only when i is a valid index of s. Handler halt stops the computation. Finally, fail is an equivalent of assert false, it represents code that should never be reached in execution.
By allowing Coma computations to have multiple outcomes, we can represent as first-class entities what usually has to be hardwired into the core syntax of programming languages: conditionals and pattern matching. Handlers halt and fail are also noteworthy in this regard: as they do not accept continuation parameters, we know simply by looking at their signature that they cannot ever return control to the caller.
Type signatures are identified modulo parameter renaming. For example, the signature of get can be equivalently written as (\(\upbeta \): Type) (l: seq \(\upbeta \)) (k: int) (ret: (e: \(\upbeta \))). Each parameter binds the corresponding symbol in the types of subsequent parameters. This only matters for variables, as handler symbols cannot occur in type annotations.
The order of a type signature \(\pi \) (and, by extension, of any handler with that signature) is defined recursively: if \(\pi \) has no continuation parameters, it is of order zero; otherwise, the order of \(\pi \) is one plus the highest order of its outcomes. The length and order of a type signature are invariant with respect to type instantiation: handlers can be polymorphic only in the data types.
Figure 2 presents the syntax of Coma expressions. An expression is an application of a named or anonymous handler to a list of arguments, on top of which we can put recursive handler definitions, logical assertions, and two barriers, denoted \({\mathtt {\uparrow }}\) and \({\mathtt {\downarrow }}\), and called black-box and white-box, respectively. Handler definitions are placed to the right of the underlying expression; the slash symbol can be read as “where”. The barriers guide the generation of verification conditions, and do not affect execution. The black-box barrier is the abstraction barrier, which separates the exposed “interface” part of a handler definition from the hidden “implementation” part. The white-box barrier is an auxiliary construction that exposes the whole underlying expression. We use letters e and d to denote expressions, and letters k and o to denote handlers.
Type signatures serve as types for expressions, enumerating the expected arguments; in particular, a fully applied handler has type \(\Box \). Typing contexts, denoted \(\Gamma \) and \(\Delta \), are sequences of type bindings of the form \(x \mathtt {:}\tau \) and \(g \mathtt {:}\varrho \). A typing context is well-formed if no symbol is bound twice, and every variable type either is Type or has type Type with respect to the preceding bindings.
Fig. 3.
Typing rules for expressions.
The typing rules for expressions are given in Fig. 3. In a judgement \(\Gamma \vdash e : \pi \), the typing context is implicitly required to be well-formed. We consider as given the typing relations for terms and formulas, respectively denoted \(\Gamma \vdash s : \tau \) and \(\Gamma \vdash \varphi : \textrm{Prop}\); refer to the extended version [15, App. A] for the fragment used in this paper. Notice that bodies of handler definitions and anonymous handlers have to be fully applied. Thus, an anonymous handler \(\pi \,{\mathtt {\rightarrow }}\,d\) always has type \(\pi \), modulo parameter renaming. As with type signatures, we identify expressions modulo renaming of bound symbols.
The initial typing context \(\Gamma _{\textrm{prim}}\) binds the primitive handlers to their respective type signatures. An expression e is called a program when \(\Gamma _{\textrm{prim}}\vdash e : \Box \).
We define a small-step operational semantics for Coma as a reduction relation \(\longrightarrow \). The reduction rules are shown in Fig. 4. We write \(e \,\mathtt {/} \mathtt {/}\,\Lambda \) to identify an expression under a series of nested handler definitions, where \(\Lambda \) is a list of definitions, possibly empty. In other words, \(e \,\mathtt {/} \mathtt {/}\,h_1 \, \pi _1 \,\mathtt {=}\,d_1, \dots , h_n \, \pi _n \,\mathtt {=}\,d_n\) stands for \(e \,\mathtt {/}\,h_1 \, \pi _1 \,\mathtt {=}\,d_1 \,\mathtt {/}\,\dots \,\mathtt {/}\,h_n \, \pi _n \,\mathtt {=}\,d_n\).
Fig. 4.
Operational semantics.
The rule E-Sym expands handler definitions. We assume that no handler is defined in \(\Lambda \) twice, as we can always rename bound handlers. The rules E-AppT and E-AppH perform \(\upbeta \)-reduction. The rule E-AppC turns an anonymous handler argument into a local handler definition. This is done in a capture-safe manner: we expect that the handler symbol g does not occur freely in d or in \(\bar{o}\). The white-box barrier over d is needed to preserve the verification condition of the program, as we show later.
The rule E-Prop requires the asserted formula \(\varphi \) to be valid before proceeding with the execution. The validity judgement \(\vDash \varphi \) is made within the standard model for our data types: integers, Booleans, sequences and binary trees. Of course, the validity of an arbitrary proposition cannot be effectively verified in a practical implementation. However, our purpose here is different: we define the operational semantics of Coma in order to state and prove the correctness of our verification procedure—in particular, that a program with a valid verification condition cannot get stuck during its execution because of a failed assertion.
The rule E-Void replaces a nullary anonymous handler by its body. The barriers are ignored during execution (rules E-Bbox and E-Wbox). Finally, the rule E-Gc prunes the context by removing unreachable handler definitions. This rule commutes with the rest of the rules, making Coma non-deterministic, yet still strongly confluent. We can define the operational semantics of Coma without the E-Gc rule; however, it helps keep our definition context clean by removing local handler definitions once the parent handler finishes its computation.
We also postulate the evaluation rules for the primitive handlers. As for E-Prop, the application of these rules depends on validity of logical properties that express the pre- and postcondition of the primitives. For example, the Boolean condition of an if must be valid for the evaluation to progress along the first outcome (as mentioned in the beginning of the section, we admit Boolean terms as atomic formulas). In the rules for unTree, the function symbols Node and Empty are the constructors of the tree type. In the rule for get, we use the (total) slice operator on sequence \(s_1\) to isolate the element at the position \(s_2\), which must be a valid index in \(s_1\). The answer terms in the rules for unTree and get can be any ground terms that validate the rule premises: for example, the expression get int [42] 0 return\(\,\mathtt {/} \mathtt {/}\,\Lambda \) can reduce both to return 42\(\,\mathtt {/} \mathtt {/}\,\Lambda \) and return 6*7\(\,\mathtt {/} \mathtt {/}\,\Lambda \). While we could introduce some form of normalization to avoid this syntactical divergence, there is no need for that, since all conditions in our evaluation rules are expressed in terms of semantic validity. Finally, halt and fail represent the final states of a computation and cannot be evaluated.
Coma is a type-safe language. Type preservation is easy to establish, either through a direct proof or by embedding in a more expressive framework like System \(\textrm{F}_\omega \) or CoC. As for the progress property, the blocking semantics of assertions limits it to programs with a valid verification condition; and so we defer this subject until Section 4.
We conclude with two examples. First, we rewrite removeRoot in proper Coma:
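With the syntax approximated from the constructs introduced above, and the branch postcondition abbreviated as post, the definition might be sketched as:

```
removeRoot (α: Type) (t: tree α) (ret: (r: tree α)) =
  unTree α t
    ((x: α) (l: tree α) (r: tree α) →
       (↑ mergeTree α l r out)
       / out (m: tree α) = { post x l r m } ret m)
    fail
```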
Just like in Sec. 1, the implementation starts with a case analysis on the tree parameter t, using the primitive unTree handler. The two branches of the case analysis are represented, respectively, by an anonymous handler, which is called when t is a binary node, and the fail primitive, invoked when t is Empty. The anonymous handler makes a call of mergeTree, which we assume to be available and polymorphic akin to unTree. The result of mergeTree is passed to a wrapper handler out which asserts the postcondition of the first branch, before calling the continuation parameter of removeRoot.
Fig. 5.
Russian Peasant Multiplication in Coma.
In Fig. 5, we show the Russian Peasant Multiplication algorithm written in Coma. This code is specified in a more traditional manner: the entire implementation of the product handler is put behind the abstraction barrier. Left in the interface part are the starting assertion \(\mathtt {\{ b} \ge \mathtt {0 \}}\), which naturally becomes the precondition of product, and the wrapper handler break, which plays the same role as out in removeRoot, and whose precondition \(\{ \texttt {c} \,\mathtt {=}\,\texttt {a} \cdot \texttt {b} \}\) is the postcondition of product.
The implementation defines and calls a recursive handler named loop. This handler does not return to the caller: to do that, it would need to receive a continuation parameter and call it, like removeRoot and product do. Instead, loop escapes by calling break at the end of the computation. In this respect, loop indeed behaves more like a loop than like a recursive function: its continuation is determined statically, by its lexical context, rather than dynamically, by its caller. Consequently, there is no distinct postcondition associated with loop: in Coma, postconditions are preconditions of continuation parameters (attached via wrapper handlers like out and break), and loop has no such parameters. The precondition of loop, placed above the barrier, is just the loop invariant.
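A direct-style OCaml rendition of the algorithm, with assertions mirroring the annotations just described; the loop invariant c + a * b = a0 * b0 is our own reconstruction, not taken from Fig. 5:

```ocaml
(* Russian Peasant Multiplication: doubles a and halves b, accumulating
   partial products in c whenever b is odd *)
let product a0 b0 =
  assert (b0 >= 0);                 (* precondition of product *)
  let rec loop a b c =
    assert (c + a * b = a0 * b0);   (* reconstructed loop invariant *)
    if b = 0 then c
    else if b mod 2 = 1 then loop (a * 2) (b / 2) (c + a)
    else loop (a * 2) (b / 2) c
  in
  let r = loop a0 b0 0 in
  assert (r = a0 * b0);             (* postcondition: precondition of break *)
  r
```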
In practice, the majority of Coma programs would be generated by mechanical translation from existing languages like OCaml or Rust. Part of this translation would be a CPS transformation, required by our language. While in most cases, this transformation is not problematic, and allows us to reduce a large number of control structures to just two—definitions and calls—there are limits to what can be easily translated into Coma. Consider, for example, an OCaml exception that carries a closure:
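A plausible shape for such a declaration, with an illustrative use (the closure type int -> int and the helper apply_default are our own; only the exception name E is taken from the discussion):

```ocaml
(* an exception that carries a closure *)
exception E of (int -> int)

(* run f on x; if it escapes with E k, resume by applying the carried closure *)
let apply_default f x =
  try f x with E k -> k x

let f x = if x > 0 then raise (E (fun y -> y * 2)) else x
```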
In Coma, exception-raising functions are written as handlers that have multiple continuation parameters: one for the normal outcome, and one for each exception that might be raised in the handler code. However, if the closures passed with the exception E were themselves liable to raise E, we would not be able to give them a finite type in Coma. Incidentally, it is not a coincidence that higher-order exceptions can be used to realize fixed point computations without explicit recursion.

3 The Logic of Recipes

In their final form, verification conditions for Coma programs are first-order logical formulas, which we can handle with the usual methods of automated and interactive theorem proving. Their generation, however, goes through an intermediate stage, where a preliminary higher-order verification condition, called recipe, is constructed and then transformed, deterministically and in a finite number of steps, into the first-order form.
Recipes are formulas in a particular variety of higher-order logic, where bound predicate variables represent verification conditions of individual handlers and can only appear in a positive position. We denote recipes with letters \(\Phi ,\Psi ,\Upsilon \). The syntax of recipes is given in Fig. 6. In recipes, handler symbols become predicate variables of the same name and arity as the original handler. The symbol \(\textbf{0}\) is the verification condition of fail, a logical contradiction. The neutralization operator, denoted \({\natural }\), suppresses proof obligations in the underlying recipe. Finally, notice that the antecedent in an implication is not a recipe, but a first-order formula, which cannot have occurrences of handler symbols. We write \(\uplambda \pi . \Phi \) and \(\forall \pi . \Phi \) to denote a series of nested \(\uplambda \)-abstractions or quantifications. By convention, \(\uplambda \Box . \Phi \) and \(\forall \Box . \Phi \) are the same as \(\Phi \).
Fig. 6.
Preliminary verification conditions (recipes).
Universal quantification over a predicate variable is defined recursively, as an instantiation with a joker recipe \(\textbf{0}_{\pi }\). A joker recipe is the verification condition of a handler of which nothing is known: on any input, the handler may fail or it may call any of its outcomes with arbitrary arguments. On a void type signature, the joker \(\textbf{0}_{\Box }\) is simply \(\textbf{0}\).
A fully applied recipe is a logical proposition, which is why we disregard the result type (that is, \(\textrm{Prop}\)) and use type signatures once again as types for predicate variables and recipes. The typing rules are given in the extended version [15, Fig. 7]. Unlike implication and universal quantification, the conjunction connective applies to any two recipes of the same type, and not only to fully applied recipes. The neutralization operator applies to recipes of any type. We identify recipes modulo renaming of bound symbols.
The semantics of recipes is given by means of a Krivine-style abstract machine [10] that converts a fully applied recipe into a first-order formula, where all bound predicate variables are eliminated. We have chosen this approach both for theoretical and practical reasons. First, the properties of recipes are naturally proved using logical relations [16], which are straightforward to express in this setting. Second, our implementation of a verification condition generator for Coma is based on the same abstract machine. In the rest of the section, we introduce this evaluator and establish some of its properties.
A cell is a triplet \(\langle b,\Sigma ,\Phi \rangle \), where \(\Phi \) is a recipe, \(\Sigma \) a cell context, binding every free handler symbol in \(\Phi \) to a cell, and b a Boolean value. Such a cell can be converted into a recipe by replacing the free handler symbols in \(\Phi \) with the corresponding converted cells from \(\Sigma \). We assign the type signature of the resulting recipe to the initial cell. In what follows, we denote cells with letters C and D, and assume that all cells and recipes under consideration are well-typed.
We associate a specification recipe to each primitive handler:
The initial cell context \(\Sigma _{\textrm{prim}}\) binds primitive handlers to their respective specifications.
The depth of a cell \(C = \langle b,\Sigma ,\Phi \rangle \) is zero if its cell context \(\Sigma \) is empty; otherwise, it is one plus the maximum depth of the cells in \(\Sigma \). The neutralization of C, denoted \({\natural }C\), is the cell \(\langle \top , {\natural }\Sigma , \Phi \rangle \), where \({\natural }\Sigma \) is obtained by neutralizing every cell in \(\Sigma \). Obviously, neutralization does not affect the type or depth of a cell.
A stack is a mixed sequence of terms and cells. An empty stack is denoted \(\varepsilon \). The neutralization of a stack \(\ell \), denoted \({\natural }\ell \), is obtained by neutralizing every cell in \(\ell \). We say that a stack is aligned with a cell, when the length of the stack and the types of its elements coincide with the cell’s signature. In other words, an aligned stack contains appropriate arguments for the cell.
An n-ary relation R on same-typed cells holds on stacks \(\ell _1,\dots ,\ell _n\) when they all have the same length and type signature, and for each position i, if \(\ell _{1i},\dots ,\ell _{ni}\) are terms, then they are all identical, and if they are cells, then both \(R(\ell _{1i},\dots ,\ell _{ni})\) and \(R({\natural }\ell _{1i},\dots ,{\natural }\ell _{ni})\) are true. Similarly, R holds on cell contexts \(\Sigma _1,\dots ,\Sigma _n\) when they bind the same handler names, and for every bound h, both \(R(\Sigma _{1}(h),\dots ,\Sigma _{n}(h))\) and \(R({\natural }\Sigma _{1}(h),\dots ,{\natural }\Sigma _{n}(h))\) are true. It is easy to see that \(R(\Sigma _1,\dots ,\Sigma _n)\) implies \(R({\natural }\Sigma _1,\dots , {\natural }\Sigma _n)\), as \({\natural }{\natural }C = {\natural }C\).
As a special case of the above, any property of cells is said to hold for a stack \(\ell \) or a cell context \(\Sigma \) whenever for every cell C in \(\ell \) or \(\Sigma \), both C and \({\natural }C\) have this property. Furthermore, if the property holds for a cell context \(\Sigma \), then it also holds for \({\natural }\Sigma \).
Fig. 7.
Recipe evaluation.
The evaluation operator \(\circ \), defined in Fig. 7, applies a cell to an aligned stack and produces a first-order logical formula. In the rule for \(\forall x \mathtt {:}\tau .\Phi \), we assume that x does not occur in the cell context \(\Sigma \), to avoid collisions.
Theorem 1
The evaluation operator \(\circ \) is defined on all aligned cells and stacks.
Proof
We say that a cell C is normalizing if for any aligned normalizing stack \(\ell \), the evaluation \(C \circ \ell \) is defined. This definition is well-founded, because every cell in \(\ell \) is of lower order than C. We need to show that all cells (ergo, all stacks) are normalizing. In fact, it suffices to prove that every cell \(\langle b, \Sigma , \Upsilon \rangle \) is normalizing, if its cell context \(\Sigma \) is normalizing. Afterward, a simple induction over cell depth allows us to conclude.
We proceed by induction over the size of \(\Upsilon \), counting only the subrecipes, so that term substitutions do not affect the size. Take an arbitrary aligned normalizing stack \(\ell \).
Case \(\Upsilon \) is \(\textbf{0}\). As \(\textbf{0}\) is \(\Box \)-typed, \(\ell \) has to be empty, and \(\langle b, \Sigma , \textbf{0} \rangle \circ \varepsilon \) is defined.
Case \(\Upsilon \) is h. Since every cell in \(\Sigma \) is normalizing, the evaluation \(\Sigma (h) \circ \ell \) is defined.
Case \(\Upsilon \) is \({\natural }\Phi \). The context \({\natural }\Sigma \) is normalizing, and the induction hypothesis applies.
Case \(\Upsilon \) is \(\Phi \, \Psi \). Let D be \(\langle b, \Sigma , \Psi \rangle \). Since \({\natural }\Sigma \) is normalizing, both cells, D and \({\natural }D\), are normalizing by the induction hypothesis. As the cell \(\langle b, \Sigma , \Phi \rangle \) is also normalizing by the induction hypothesis, the evaluation \(\langle b, \Sigma , \Phi \rangle \circ D,\ell \) is defined.
Case \(\Upsilon \) is \(\uplambda h \mathtt {:}\pi \mathtt {.}\Phi \). Then the stack \(\ell \) is of the form \(D, \ell '\), where both D and \({\natural }D\) are normalizing. Thus, \(\Sigma \uplus [h \mapsto D]\) is normalizing and the induction hypothesis applies.
In every other case, we pick a rule for \(\circ \) and apply the induction hypothesis.   \(\Box \)
A cell C is said to be neutral, if for any aligned neutral stack \(\ell \), the formula \(C \circ \ell \) is valid. Just as above, this recursive definition is well-founded, because every cell in \(\ell \) is of lower order than C.
Lemma 1
Any neutralized cell \({\natural }C\) is neutral.
The proof of this and following lemmas can be found in the extended version [15, Sec. 3].
A cell \(C_1\) entails \(C_2\), denoted \(C_1 \Rrightarrow C_2\), when they have the same type and for any aligned stacks \(\ell _1\) and \(\ell _2\) such that \(\ell _1 \Rrightarrow \ell _2\), we have \(C_1 \circ \ell _1 \Rightarrow C_2 \circ \ell _2\). Here and below, the symbol \(\Rightarrow \) stands for logical consequence, and \(\Leftrightarrow \) for logical equivalence, under the same standard model used for assertions. Cell \(C_1\) is equivalent to \(C_2\), denoted \(C_1 \equiv C_2\), when \(C_1 \Rrightarrow C_2\) and \(C_2 \Rrightarrow C_1\).
Lemma 2
Cell entailment is reflexive and transitive.
Lemma 3
Consider two cells of the same type, \(C_1\) and \(C_2\), such that for any aligned stack \(\ell \), we have \(C_1 \circ \ell \Rightarrow C_2 \circ \ell \). Then \(C_1 \Rrightarrow C_2\).
Note that \(C_1 \Rrightarrow C_2\) does not imply \({\natural }C_1 \Rrightarrow {\natural }C_2\). For example, the cell \(\langle \bot , \varnothing , \uplambda g \mathtt {:}\Box .\textbf{0} \rangle \) entails \(\langle \bot , \varnothing , \uplambda g \mathtt {:}\Box .g \rangle \), yet, when we apply their neutralizations to \(\langle \bot , \varnothing , \textbf{0} \rangle \), we obtain \(\top \) and \(\bot \), respectively. Thus, to establish \(\ell _1 \Rrightarrow \ell _2\), we must show pairwise entailment not only for the cells in the two stacks, but also for their neutralizations.
Given three cells \(C_1 = \langle b_1, \Sigma _1, \Phi \rangle \), \(C_2 = \langle b_2, \Sigma _2, \Phi \rangle \), and \(C_3 = \langle b_3, \Sigma _3, \Phi \rangle \) that have the same type and the same recipe \(\Phi \), we say that \(C_1\) is the fusion of \(C_2\) and \(C_3\) when \(b_1 = b_2 \wedge b_3\) and \(\Sigma _1\) is the fusion of \(\Sigma _2\) and \(\Sigma _3\). Quite obviously, if \(C_1\) is the fusion of \(C_2\) and \(C_3\), then \({\natural }C_1\) is the fusion of \({\natural }C_2\) and \({\natural }C_3\) (all three are actually the same). Furthermore, any cell C is the fusion of itself and \({\natural }C\).
A cell \(C_1\) is a meet of \(C_2\) and \(C_3\) if they all have the same type, the neutralized cells \({\natural }C_2\) and \({\natural }C_3\) are equivalent, and for any aligned stacks \(\ell _1,\ell _2,\ell _3\) such that \(\ell _1\) is a meet of \(\ell _2\) and \(\ell _3\), we have \(C_1 \circ \ell _1 \Leftrightarrow C_2 \circ \ell _2 \wedge C_3 \circ \ell _3\).
Lemma 4
If \(C_1\) is the fusion of \(C_2\) and \(C_3\), then \(C_1\) is a meet of \(C_2\) and \(C_3\).
Corollary 1
For any cell C and aligned stack \(\ell \), we have \(C \circ \ell \Leftrightarrow {\natural }C \circ \ell \wedge C \circ {\natural }\ell \).
Corollary 2
Any cell C entails \({\natural }C\).
Lemma 5
Consider cells \(C_1, C_2, C_3\) of the same type, such that \({\natural }C_2 \equiv {\natural }C_3\) and for any aligned stack \(\ell \), we have \(C_1 \circ \ell \Leftrightarrow C_2 \circ \ell \wedge C_3 \circ \ell \). Then \(C_1\) is a meet of \(C_2\) and \(C_3\).
This leads to a surprising distributivity property. Consider a cell \(D = \langle b, \Sigma , \Phi \wedge \Psi \rangle \) and its conjuncts \(D_1 = \langle b, \Sigma , \Phi \rangle \) and \(D_2 = \langle b, \Sigma , \Psi \rangle \). If \({\natural }D_1\) is equivalent to \({\natural }D_2\), then, by Lemma 5, D is a meet of \(D_1\) and \(D_2\), and \({\natural }D\) is a meet of \({\natural }D_1\) and \({\natural }D_2\). Then, for any appropriate cell C and stack \(\ell \), the formula \(C \circ D, \ell \) is logically equivalent to the conjunction of \(C \circ D_1, \ell \) and \(C \circ D_2, \ell \). Informally speaking, we can split a recipe over any cell conjunction, no matter where it occurs inside the recipe, as long as the conjuncts have equivalent neutralizations.
Theorem 2
Consider a type signature \(\pi \) and a cell \(J = \langle \bot , \Sigma , \textbf{0}_\pi \rangle \). For every cell C of type \(\pi \), we have \(J \Rrightarrow C\) and \({\natural }J \Rrightarrow {\natural }C\).
The proof of Theorem 2 is given in the extended version [15, App. B]. This result justifies our use of joker recipes to represent universal quantification over predicate variables. Indeed, for any \(\Phi : \Box \) and \(\Psi : \varrho \), and a type-compatible \(\Sigma \), the cell \(\langle \bot , \Sigma , \forall g \mathtt {:}\varrho .\Phi \rangle \) entails \(\langle \bot , \Sigma , (\uplambda g \mathtt {:}\varrho .\Phi ) \Psi \rangle \), and the same holds for their neutralizations.
In the process of evaluation, we can factorize selected first-order monomorphic cells, that is, cells whose parameters are all terms and none of them has type Type.
Lemma 6
Consider a cell \(C = \langle b, \Sigma , \uplambda g \mathtt {:}(\overline{x\mathtt {:}\tau }) .\Phi \rangle \) and an aligned stack \(D,\ell \), such that none of the types \(\tau _i\) is Type. Let \(D'\) be the cell \(\langle \bot , \varnothing , \uplambda \overline{x\mathtt {:}\tau } .z_1 \,\mathtt {=}\,x_1 \wedge \dots \wedge z_n \,\mathtt {=}\,x_n \rightarrow \textbf{0} \rangle \) for some fresh variables \(\bar{z}\). Then \(C \circ D,\ell \Leftrightarrow C \circ {\natural }D, \ell \wedge \forall \overline{z\mathtt {:}\tau } .({\natural }C \circ D', {\natural }\ell ) \vee (D \circ \bar{z})\).
The proof of Lemma 6 is given in the extended version [15, App. C]. This lemma provides us with an alternative evaluation rule for cell arguments which are eligible and useful to factorize. The latter is a matter of heuristic choice: in our current implementation, we select non-neutral cells that are used multiple times in the final VC and are derived from executable code instead of just a sequence of assertions.
The new rule splits the formula \(C \circ D, \ell \) into two parts. The first part, \(C \circ {\natural }D, \ell \), erases all subgoals stemming from D. In the second part, the formula \({\natural }C \circ D', {\natural }\ell \) erases all subgoals that do not stem from D, and replaces every occurrence of D with a “unification subgoal” \(D'\), which captures a term substitution in the answer variables \(\bar{z}\). These substitutions are transferred to the single instance of D in the formula \(D \circ \bar{z}\).
By rewriting the second part as an implication \(\forall \overline{z\mathtt {:}\tau } .\lnot ({\natural }C \circ D', {\natural }\ell ) \rightarrow (D \circ \bar{z})\), we can see the antecedent as the cumulated logical premise (or the strongest postcondition) of the context \(C \circ [\,],\ell \) for the continuation in the hole. In the next section, we show how this rule allows us to produce more compact verification conditions.

4 Verification Condition Generation

Verification conditions for Coma expressions are computed by the operator https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Fign_HTML.gif , where Boolean flags \(\mathfrak {p}\) and \(\mathfrak {d}\) establish the mode:
  • https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figo_HTML.gif   caller verification condition, to verify individual calls of a defined handler.
  • https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figp_HTML.gif   callee verification condition, to prove the correctness of a handler definition.
  • https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figq_HTML.gif   full verification condition, which merges the proof goals of the first two modes.
  • https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figr_HTML.gif   null verification condition, which is always true on fully applied expressions.
The caller mode extracts the specification (or the contract) of a defined handler from its definition. It treats every assertion as a precondition to verify at call sites, and it stops at the black-box barrier which separates the “interface” part of the definition from the hidden “implementation” part. The callee mode, on the contrary, treats every assertion as a precondition to assume, and verifies the correctness of the implementation part, after the black-box barrier, under those assumptions. In the full mode, which is the starting verification mode for Coma expressions, we prove assertions both before and after a barrier. In the null mode, which is equivalent to stopping verification, no proof obligations are generated at all. A Coma program e is said to be correct, when its fully evaluated verification condition https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figs_HTML.gif is valid.
Fig. 8.
Verification condition generation.
Figure 8 shows the rules of VC generation. The notation \(\textbf{let}\,\, h \,\,\pi = \Psi \,\, \textbf{in}\,\, \Phi \) in the rule for handler definitions stands for \((\uplambda h \mathtt {:}\pi \mathtt {.}\Phi ) (\uplambda \pi \mathtt {.}\forall h \mathtt {:}\pi \mathtt {.}\Psi )\) — notice the universal quantifier that covers the occurrences of h in \(\Psi \) and ensures that this symbol is bound in the resulting recipe, just as it is bound in the original Coma expression. In this rule, we assign the handler’s specification https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figt_HTML.gif to a predicate variable with the same name h. This recipe is verified every time h is called from the underlying expression e or recursively from the definition body d.
Informally, flag \(\mathfrak {p}\) determines whether we should generate proof obligations (prove assertions, verify handler definitions, ensure the safety of handler calls) at the current position in the expression. For example, in the rule for \(\{\varphi \} e\), we only generate a subgoal for \(\varphi \) (expressed as a double negation \(\lnot \varphi \rightarrow \textbf{0}\)) when \(\mathfrak {p}\) is true. Similarly, in the rule for handler definitions, we verify the correctness of the implementation only when \(\mathfrak {p}\) is true; otherwise, the formula https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figu_HTML.gif always reduces to \(\top \). Finally, on handler invocation, when \(\mathfrak {p}\) is false, the corresponding predicate variable is neutralized, which effectively cancels all proof obligations in the handler’s specification, as \(\textbf{0}\) is then evaluated as \(\top \).
When we pass through a black-box barrier \({\mathtt {\uparrow }}\), the second flag \(\mathfrak {d}\) takes the place of \(\mathfrak {p}\). Thus, when we compute the specification of a handler by applying https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figv_HTML.gif to the handler’s body, we stop at the black-box barrier, where we switch to https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figw_HTML.gif , which evaluates to \(\top \). On the other hand, when we verify the correctness of a handler definition using https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figx_HTML.gif , we do not generate proof obligations for assertions and handler calls until we arrive at the black-box barrier, where we pass into the full mode https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figy_HTML.gif for the rest of the definition.
The white-box barrier \({\mathtt {\downarrow }}\) replaces the second flag with \(\mathfrak {p}\). This preserves the current value of \(\mathfrak {p}\) for the rest of the expression, regardless of the subsequent barriers.
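The effect of the two barriers on the mode flags can be summarized in a small executable model. This is only a sketch: the encoding of the four modes as pairs of Booleans \((\mathfrak {p}, \mathfrak {d})\) is inferred from the description above, not taken from the paper's notation.

```ocaml
(* Mode flags (p, d), inferred from the text:
   full = (true, true), caller = (true, false),
   callee = (false, true), null = (false, false). *)
type mode = { p : bool; d : bool }

(* black-box barrier ↑: the second flag d takes the place of p *)
let black_box m = { m with p = m.d }

(* white-box barrier ↓: the second flag is replaced with p *)
let white_box m = { m with d = m.p }

let full   = { p = true;  d = true  }
let caller = { p = true;  d = false }
let callee = { p = false; d = true  }
let null   = { p = false; d = false }

(* The caller mode stops at the black-box barrier (it becomes null),
   the callee mode switches to full, and full stays full; the white-box
   barrier pins d to the current p, so later black-box barriers cannot
   change the mode. *)
let () =
  assert (black_box caller = null);
  assert (black_box callee = full);
  assert (black_box full   = full);
  assert (black_box (white_box caller) = caller)
```

The last assertion illustrates the remark above: after a white-box barrier, the current value of \(\mathfrak {p}\) is preserved regardless of subsequent barriers.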
The rule for handler calls simply propagates the VC operator down to the individual handlers without changing the mode. Similarly, the rule for anonymous handlers pushes the VC operator in its current mode under the \(\uplambda \)-prefix—however, we must, in addition, verify the handler in the complementary mode, with both \(\mathfrak {p}\) and \(\mathfrak {d}\) negated. This secondary verification condition is only concerned with the continuation parameters of the handler, and not with its proper proof obligations, which is why we neutralize the corresponding recipe. To see why both conditions are necessary, consider the following Coma code:
$$ \texttt {crash} \,\mathtt {/}\,\texttt {crash} \,\mathtt {=}\,((\texttt {f} \mathtt {:}\Box ) \,{\mathtt {\rightarrow }}\,{\mathtt {\uparrow }}\texttt {f}) ~ \texttt {fail} $$
This program reduces to fail: we unfold crash (E-Sym), purge the now-unreachable definition (E-Gc), substitute fail for f (E-AppH), and drop the barrier (E-Bbox). Thus, we should not be able to prove it correct. Let us look at the full verification condition:
For the sake of readability, we perform several reductions directly on the recipe in the last step; it is easy to show that the resulting recipe leads to the same final VC formula. Out of the four conjuncts, only the second one evaluates to \(\bot \), and the other three to \(\top \). If the rule for anonymous handlers did not include the second condition https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figaa_HTML.gif , we would end up with only the first and the third conjunct, which both evaluate to \(\top \).
Let us further illustrate the rules of VC generation on the example from Sec. 1:
The full verification condition of this expression has the form
where e is the body of triple. We can safely drop the secondary verification condition for the anonymous handler in the continuation of triple. Indeed, it is produced in the null mode and will evaluate to a tautologically true proof obligation for any argument a.
The caller and callee verification conditions of triple are as follows:
where d is the body of out. And the three VCs of out are as follows:
On the right we show the first-order formulas to which these recipes evaluate. Notice that for any predicate ret, the recipe \({\natural }\texttt {ret} ~ \texttt {y}\) evaluates to a tautology.
Now we can evaluate the verification conditions of triple. Since \({\natural }\texttt {out}\) evaluates to a tautology for any argument, the caller VC, https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figaf_HTML.gif , evaluates to \(\forall \texttt {y} \mathtt {.}\texttt {y} = \texttt {3} \cdot \texttt {x} \rightarrow \texttt {ret} \; \texttt {y}\). And the callee VC, https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figag_HTML.gif , evaluates to \(\texttt {x} \, \texttt {+} \, \texttt {x} \, \texttt {+} \, \texttt {x} = \texttt {3} \cdot \texttt {x}\). After simplification, the full VC of the initial expression is \(\, (\forall \texttt {y} \mathtt {.}\texttt {y} = \texttt {3} \cdot \texttt {14} \rightarrow \texttt {y} = \texttt {42}) \wedge (\forall \texttt {x} \mathtt {.}\texttt {x} \, \texttt {+} \, \texttt {x} \, \texttt {+} \, \texttt {x} = \texttt {3} \cdot \texttt {x}) \).
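The simplified full VC above is a plain arithmetic statement. As a quick sanity check (a throwaway sketch, not part of the Coma toolchain; the universal quantifier over x is only sampled):

```ocaml
(* the two conjuncts of the simplified full VC of the triple example *)
let caller_part = (3 * 14 = 42)          (* ∀y. y = 3·14 → y = 42 *)
let callee_part =                        (* ∀x. x + x + x = 3·x, sampled *)
  List.for_all (fun x -> x + x + x = 3 * x) [ -5; 0; 1; 14 ]

let () = assert (caller_part && callee_part)
```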
Verification conditions produced by the https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figah_HTML.gif operator are for partial correctness: they do not ensure the termination of Coma programs. Here is one possible way to verify total correctness. Let us say that a handler definition \(f \overline{x \mathtt {:}\tau } \; \overline{g \mathtt {:}\varrho } \,\mathtt {=}\,d\) is equipped with a variant, if there exists an int-typed term \(t[\bar{x}]\) such that every occurrence of f in d is in an expression of the form \(\{t[\bar{s}]< t[\bar{x}] \wedge \texttt {0}\le t[\bar{x}] \} \, f\, \bar{s}\, \bar{o}\). Here we assume that variables are not bound twice, so that the variables \(\bar{x}\) in the assertion refer to the formal parameters of f. The ordering relation in the assertion is well-founded, therefore, an infinite tower of recursive calls of f is impossible. A practical implementation would, of course, accept other well-founded relations, such as structural decrease on binary trees, lexicographic orderings on tuples, etc. While the definition above does not allow us to use f as a handler argument inside d, this is not a limitation, as we can always move such occurrences into a local wrapper handler definition.
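As an illustration of the variant discipline, here is a hedged sketch in direct OCaml, with the variant assertion written out as a runtime assert; the function fact and its variant \(t[\texttt {x}] = \texttt {x}\) are our own example, not from the paper.

```ocaml
(* A CPS factorial equipped with the variant t[x] = x: every recursive
   call first asserts t[s] < t[x] /\ 0 <= t[x], which rules out an
   infinite tower of recursive calls (and fails fast on negative input). *)
let rec fact (x : int) (k : int -> 'a) : 'a =
  if x = 0 then k 1
  else begin
    assert (x - 1 < x && 0 <= x);  (* { t[s] < t[x] /\ 0 <= t[x] } *)
    fact (x - 1) (fun r -> k (x * r))
  end

let () = assert (fact 5 (fun r -> r) = 120)
```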
Below we list the main results about our VC generation procedure. The proofs are given in the extended version [15, Appendix D].
Lemma 7
For any Coma expression e, any cell of the form https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figai_HTML.gif is neutral.
Consequently, any VC of the form https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figaj_HTML.gif , where e is a fully applied expression, can be safely replaced with a tautological recipe such as \({\natural }\textbf{0}\).
Lemma 8
For any Coma expression e, any cell of the form https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figak_HTML.gif is a meet of https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figal_HTML.gif and https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figam_HTML.gif for all \(\mathfrak {p}\) and \(\mathfrak {d}\).
The fact that https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figan_HTML.gif can be split into https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figao_HTML.gif and https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figap_HTML.gif is the basis of the correctness preservation theorem:
Theorem 3
(Preservation of Correctness). For any Coma programs e and \(e'\), if e is correct and \(e \longrightarrow e'\), then \(e'\) is correct.
Theorem 4
(Progress). For any correct Coma program e, either e is halt, or \(e \longrightarrow e'\) for some program \(e'\).
In conclusion, let us show some examples of verification conditions. For clarity, we omit type annotations, inline the specifications of primitive handlers, treat \(\textbf{0}\) as \(\bot \) in the callee mode, and remove trivial subgoals coming from https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figaq_HTML.gif or \(\bot \rightarrow \Phi \). The caller VC of removeRoot on page 9, for a given tree t and continuation return, is the recipe
$$\begin{aligned} (\forall \texttt {lvr} \mathtt {.}\, &\texttt {t} = \texttt {Node} \texttt {l} \texttt {v} \texttt {r} \rightarrow \forall \texttt {s} \mathtt {.}(\forall \texttt {e} \mathtt {.}\, \texttt {e} \in \texttt {s} \leftrightarrow \texttt {e} \in \texttt {l} \vee \texttt {e} \in \texttt {r}) \rightarrow \texttt {return} \texttt {s} ) \wedge \\ ( &\texttt {t} = \texttt {Empty}\phantom {\,\,\,\texttt {lv}} \rightarrow \textbf{0}) \end{aligned}$$
This recipe is the specification of removeRoot, instantiated and proved at each call site. The subrecipe \((\forall \texttt {e} \mathtt {.}\, \texttt {e} \in \texttt {s} \leftrightarrow \texttt {e} \in \texttt {l} \vee \texttt {e} \in \texttt {r}) \rightarrow \texttt {return} \texttt {s}\) is the callee VC for the out handler. Notice that the assertion \(\{\,\forall \texttt {e}\mathtt {:}\texttt {int}\mathtt {.}\,\texttt {e} \in \texttt {s} \leftrightarrow \texttt {e} \in \texttt {l} \vee \texttt {e} \in \texttt {r}\,\}\) does not generate a subgoal here: as it occurs before the black-box barrier, it is treated as an assumption in the callee mode.
Here is the callee VC for removeRoot, to be proved for all values of t:
$$\begin{aligned} \forall \texttt {lvr} \mathtt {.}\, \texttt {t} = \texttt {Node} \texttt {l} \texttt {v} \texttt {r} \rightarrow \texttt {mergeTree} \, \texttt {elt} \, \texttt {l} \, \texttt {r} \,\, ( \uplambda \texttt {s} \mathtt {.}\forall \texttt {e} \mathtt {.}\, \texttt {e} \in \texttt {s} \leftrightarrow \texttt {e} \in \texttt {l} \vee \texttt {e} \in \texttt {r}) \end{aligned}$$
There is no subgoal generated for the second outcome of unTree, as it does not contain an abstraction barrier. The predicate argument of mergeTree is the caller VC of out; this time the assertion does generate a subgoal.
Here is the specification of the product handler in Fig. 5, for given integers \(\texttt {a},\texttt {b}\) and a continuation return:
$$ (\texttt {b} \not \ge \texttt {0} \rightarrow \textbf{0}) \wedge (\forall \texttt {c} \mathtt {.}\, \texttt {c} = \texttt {a} \cdot \texttt {b} \rightarrow \texttt {return} \,\, \texttt {c}) $$
The callee VC for product, to be proved for all values of a and b, is as follows:
$$\begin{aligned} &\texttt {b} \ge \texttt {0} \rightarrow \\ & \quad \texttt {a} \cdot \texttt {b} + \texttt {0} = \texttt {a} \cdot \texttt {b} \wedge \texttt {b} \ge \texttt {0} \wedge \\ & \quad \forall \texttt {pqr} \mathtt {.}\, \texttt {p} \cdot \texttt {q} + \texttt {r} = \texttt {a} \cdot \texttt {b} \wedge \texttt {q} \ge \texttt {0} \rightarrow \\ & \quad \quad (\texttt {q} > \texttt {0} \rightarrow \\ & \quad \quad \quad (\texttt {q} \,\, \texttt {mod} \,\, \texttt {2} =\texttt {1} \rightarrow (\texttt {p} + \texttt {p}) \cdot (\texttt {q} \,\, \texttt {div} \,\, \texttt {2}) + \texttt {r} + \texttt {p} = \texttt {a} \cdot \texttt {b} \wedge \texttt {q} \,\, \texttt {div} \,\, \texttt {2} \ge \texttt {0}) \wedge \\ & \quad \quad \quad (\texttt {q} \,\, \texttt {mod} \,\, \texttt {2} \ne \texttt {1} \rightarrow (\texttt {p} + \texttt {p}) \cdot (\texttt {q} \,\, \texttt {div} \,\, \texttt {2}) + \texttt {r} \phantom { + \texttt {p}} = \texttt {a} \cdot \texttt {b} \wedge \texttt {q} \,\, \texttt {div} \,\, \texttt {2} \ge \texttt {0})) \wedge \\ & \quad \quad (\texttt {q} \not > \texttt {0} \rightarrow \texttt {r} = \texttt {a} \cdot \texttt {b}) \end{aligned}$$
Notice how this formula coincides with the verification condition for the definition of product obtained by the traditional weakest-precondition calculus. Consider now the same VC, when we select the next handler for factorization, as described in Section 3:
$$\begin{aligned} &\texttt {b} \ge \texttt {0} \rightarrow \\ & \quad \texttt {a} \cdot \texttt {b} + \texttt {0} = \texttt {a} \cdot \texttt {b} \wedge \texttt {b} \ge \texttt {0} \wedge \\ & \quad \forall \texttt {pqr} \mathtt {.}\, \texttt {p} \cdot \texttt {q} + \texttt {r} = \texttt {a} \cdot \texttt {b} \wedge \texttt {q} \ge \texttt {0} \rightarrow \\ & \quad \quad (\texttt {q} > \texttt {0} \rightarrow \\ & \quad \quad \quad \forall \texttt {s} \mathtt {.}\, ((\texttt {q} \,\, \texttt {mod} \,\, \texttt {2} = \texttt {1} \wedge \texttt {s} = \texttt {r} + \texttt {p}) \vee (\texttt {q} \,\, \texttt {mod} \,\, \texttt {2} \ne \texttt {1} \wedge \texttt {s} = \texttt {r})) \rightarrow \\ & \quad \quad \quad \quad (\texttt {p} + \texttt {p}) \cdot (\texttt {q} \,\, \texttt {div} \,\, \texttt {2}) + \texttt {s} = \texttt {a} \cdot \texttt {b} \wedge \texttt {q} \,\, \texttt {div} \,\, \texttt {2} \ge \texttt {0}) \wedge \\ & \quad \quad (\texttt {q} \not > \texttt {0} \rightarrow \texttt {r} = \texttt {a} \cdot \texttt {b}) \end{aligned}$$
The formula \(\uplambda \texttt {s} \mathtt {.}(\texttt {q} \,\, \texttt {mod} \,\, \texttt {2} = \texttt {1} \wedge \texttt {s} = \texttt {r} + \texttt {p}) \vee (\texttt {q} \,\, \texttt {mod} \,\, \texttt {2} \ne \texttt {1} \wedge \texttt {s} = \texttt {r})\) is the strongest postcondition of the expression https://static-content.springer.com/image/chp%3A10.1007%2F978-3-031-91121-7_8/MediaObjects/652625_1_En_8_Figar_HTML.gif with respect to the continuation next. The method of compact verification conditions proposed by Flanagan and Saxe [7] aggregates in a similar way the strongest postconditions across alternative execution paths. The connection between the compact verification conditions and the classical weakest-precondition calculus was studied by Leino [12]. Our approach makes this connection even more prominent, as it allows us to derive both forms from the common precursor verification condition.
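The VCs above can be read back into code. The following is a hedged reconstruction of the algorithm behind the product handler (fast multiplication by doubling) in direct-style OCaml, with the precondition and the loop invariant from the VC turned into runtime asserts; the concrete Coma definition in Fig. 5 is not reproduced here.

```ocaml
let product (a : int) (b : int) (return : int -> 'a) : 'a =
  assert (b >= 0);                            (* precondition *)
  let rec loop p q r =
    assert (p * q + r = a * b && q >= 0);     (* invariant from the VC *)
    if q > 0 then
      let r' = if q mod 2 = 1 then r + p else r in
      loop (p + p) (q / 2) r'
    else return r                             (* here r = a·b *)
  in
  loop a b 0

let () = assert (product 6 7 (fun c -> c) = 42)
```

Each recursive call preserves the invariant: when q is odd, \((p{+}p) \cdot (q \,\texttt {div}\, 2) + r + p = p \cdot q + r\), and likewise without the \(+\,p\) term when q is even.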

5 Implementation

We have implemented the Coma language and its VC generator on top of the Why3 platform [6]. The terms and formulas of Coma are written in the logical language of Why3. This way, we can make use of logical theories from the Why3 standard library, and we readily benefit from Why3’s interface with many automated theorem provers. In addition to what is presented in the previous sections, our implementation offers a few extensions, described below.
Fig. 9.
Extended handler prototypes.
Extended handler prototypes. To facilitate writing and understanding of Coma programs, we provide a suitable syntax for writing pre- and postconditions directly in the handler prototype, as shown in Fig. 9. This notation is desugared into assertions, barriers, and wrapper handlers of the core Coma language; in particular, the code in Fig. 9 is translated into what is shown in Fig. 5. The precondition \(\{\,\texttt {b}\ge \texttt {0}\,\}\) in the prototype of product becomes an assertion put on top of the definition body, now hidden under a black-box barrier. The same transformation is applied to the precondition of the inner loop handler. The postcondition \(\{\, \texttt {c}=\texttt {a}\cdot \texttt {b} \,\}\), attached to return, forces creation of a wrapper handler above the main black-box barrier and becomes the precondition in the body of this wrapper handler.
Let-binding for variables. We added a proper syntax for binding a variable to a term, to avoid writing anonymous handler applications \(\texttt {(}\texttt {(}x\mathtt {:}\tau \texttt {)} \,{\mathtt {\rightarrow }}\,e \texttt {)} \, s\). The new construction is written \(e \,\mathtt {/}\,x\mathtt {:}\tau \,\mathtt {=}\,s\). We show its use in the example in Fig. 12 in the next section.
Mutable state. Our implementation supports mutable variables (references) that can be allocated, modified, and passed as arguments to handlers. References are alias-free, which means that it is forbidden to pass a statically accessible reference as an argument or to pass the same reference argument twice. Each handler is annotated with a pre-write clause, which lists the references in its lexical scope that might be modified before the handler is executed. For example, here is the prototype of a handler that increments an integer reference received as argument and returns its previous value:
The pre-write annotation [r] for the return outcome signifies that the code that calls return—namely, the incr handler—may change the value of r before the call. Pre-write annotations are automatically inferred for defined handlers and their continuation parameters. However, we do not infer them for the higher-order outcomes (i.e., continuation parameters of a continuation parameter).
The code with references is translated into pure Coma via a fine-grained monadic transformation, during which the references in the pre-write annotations become additional term parameters. In the example above, after translation, incr would return to the caller the updated value of r along with its previous value in p.
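For the incr example, the outcome of the monadic translation can be pictured as follows; this is a sketch in OCaml, and the concrete translated form produced by the implementation may differ.

```ocaml
(* After translation, the reference r is an ordinary term parameter and
   the return outcome receives the updated value of r together with the
   previous value p. *)
let incr (r : int) (return : int -> int -> 'a) : 'a =
  return (r + 1) r

let () = incr 5 (fun r p -> assert (r = 6 && p = 5))
```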
To capture the pre-state of references, we admit let-bindings in handler prototypes. The full prototype of incr, together with its specification, is as follows:
Specification extraction. Given a first-order handler, Coma can produce, on request, the logical predicates that represent its pre- and postconditions. Consider the following handler which returns the value stored in the root node of a non-empty tree:
The caller VC of getRoot, for a given tree t and continuation return, is the recipe
$$\begin{aligned} (\forall \texttt {lvr} \mathtt {.}\, \texttt {t} = \texttt {Node} \, \texttt {l} \, \texttt {v} \, \texttt {r} \rightarrow \texttt {return} \, \texttt {v}) \wedge (\texttt {t} = \texttt {Empty} \rightarrow \textbf{0})\qquad \qquad (\star ) \end{aligned}$$
To compute the precondition of getRoot, we merely need to instantiate return with a neutral recipe \(\uplambda \texttt {x} \mathtt {.}{\natural }\textbf{0}\). After evaluation and simplification, we obtain \(\texttt {t} \ne \texttt {Empty}\).
To compute the postcondition, we neutralize the recipe (\(\star \)), eliminating the subgoals that belong to the precondition, and instantiate return with the recipe \(\uplambda \texttt {x} \mathtt {.}\texttt {z} = \texttt {x} \rightarrow \textbf{0}\), where \(\texttt {z}\) is fresh. After evaluation, we obtain \( \forall \texttt {lvr} \mathtt {.}\, \texttt {t} = \texttt {Node} \, \texttt {l} \, \texttt {v} \, \texttt {r} \rightarrow \texttt {z} = \texttt {v} \rightarrow \bot \), which is equivalent to \( \lnot (\exists \texttt {lr} \mathtt {.}\, \texttt {t} = \texttt {Node} \, \texttt {l} \, \texttt {z} \, \texttt {r}) \). This last formula is exactly the negated postcondition of getRoot, where z denotes the returned value.
This procedure works in a similar way to subgoal factorization discussed in Sec. 3. It is easily extended to handlers with multiple outcomes, producing a separate postcondition for each of them.
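For reference, here is a direct-style OCaml counterpart of getRoot, in which the extracted precondition \(\texttt {t} \ne \texttt {Empty}\) is the only way the function can fail; the tree type is the obvious one, and this code is our sketch, not the paper's.

```ocaml
type tree = Empty | Node of tree * int * tree

let get_root (t : tree) : int =
  match t with
  | Node (_, v, _) -> v             (* postcondition: the root value *)
  | Empty -> invalid_arg "get_root" (* precondition t <> Empty violated *)

let () = assert (get_root (Node (Empty, 7, Empty)) = 7)
```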
Fig. 10.
VC generation time: Coma vs Why3 (lower values favor Coma).
Benchmarks. We evaluated the performance of our VC generator using the Creusot test suite.2 Creusot switched its VC generation back-end from Why3 to Coma at release 0.2. We used the Rust files from the test suite that did not change during the switch (235 files). For each of those files, we measured the VC generation time with both back-ends and excluded those where the absolute difference was smaller than the sum of two standard deviations. For the 33 remaining files, Fig. 10 shows the relative and the absolute time difference (black and white bars, respectively). The negative values in the chart are where the VCgen of Coma is faster than that of Why3, which is the case for all but one test file. For a third of the tests, Coma was more than 50% faster.

6 Case Study: Regular Expression Processing

In this section, we demonstrate the use of Coma by verifying a small yet non-trivial OCaml program that uses higher-order functions and exceptions, and requires giving specifications to closures in order to be verified. Figure 11 shows a function accept that checks whether a string s is recognized by a regular expression r. The code traverses the string with a recursive function a, which takes three parameters: a current regexp r, an integer index i, and a continuation k. This function tries to match a substring \(\texttt {s[i}..\texttt {j)}\) with r, and then applies the continuation k to index j to proceed with the matching of \(\texttt {s[j}..\texttt {)}\). If no such j exists, function a returns the unit value (). The initial continuation passed to function a signals success by raising the predefined exception Exit.
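The control structure of accept can be sketched in Python as follows; the regexp encoding, the progress guard in the star case, and all names are our assumptions, not the paper's code:

```python
# Hypothetical Python sketch of the CPS regexp matcher of Fig. 11.
# Regexps: ('eps',) | ('char', c) | ('alt', r1, r2) | ('seq', r1, r2) | ('star', r)

class Exit(Exception):
    """Plays the role of OCaml's predefined Exit exception."""

def accept(r, s):
    def a(r, i, k):
        # try to match s[i..j) against r, then call k j;
        # return normally (backtrack) if no such j exists
        tag = r[0]
        if tag == 'eps':
            k(i)
        elif tag == 'char':
            if i < len(s) and s[i] == r[1]:
                k(i + 1)
        elif tag == 'alt':
            a(r[1], i, k)
            a(r[2], i, k)
        elif tag == 'seq':
            a(r[1], i, lambda j: a(r[2], j, k))
        else:  # 'star'
            def k_star(j):
                if j > i:      # require progress to ensure termination
                    a(r, j, k)
            k(i)               # match the empty repetition
            a(r[1], i, k_star)

    def k0(j):                 # initial continuation: succeed iff s is fully consumed
        if j == len(s):
            raise Exit

    try:
        a(r, 0, k0)
        return False
    except Exit:
        return True
```

For instance, `accept(('star', ('seq', ('char', 'a'), ('char', 'b'))), "abab")` succeeds, while the same pattern rejects "aba".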
Fig. 11.
Regular expression engine in OCaml.
Fig. 12.
Regular expression engine in Coma.
Figure 12 contains a Coma version of the accept function, which can be obtained by a mechanical CPS-translation of the OCaml code. The accept handler has a return outcome that receives the Boolean computation result. The local handler a has a continuation o, which corresponds to the normal outcome of the original OCaml function. Finally, the continuation k is itself transformed into CPS style, and thus has its own outcome, named h. Another way to look at this code is to interpret k and o as success and error/backtrack continuations, respectively, as in a double-barreled CPS [18]. The pattern matching on the regular expression r is performed using a library handler unRe similar to the unTree handler.
To verify this program, we need to add specifications. Figure 13 contains a version of accept with added preconditions and ghost parameters (both highlighted in the figure). As explained in Section 5, specification annotations in handler prototypes are automatically desugared into assertions, black-box barriers, and wrapper handlers. The postcondition of accept (i.e., the precondition of return) uses a built-in logical predicate mem, where \(\texttt {mem}~\texttt {s}~\texttt {r}\) holds if and only if the string s belongs to the language of r.
We add a ghost parameter ck to handler a for the purpose of its specification. Note that the current implementation of Coma does not provide any special treatment for ghost code and data. In the future, we plan to introduce the necessary checks to ensure that ghost computations do not interfere with the observable part of the program. As in the OCaml code, handler a tries to match a substring \(\texttt {s[i}..\texttt {j)}\) with r, and then applies the continuation k to index j to proceed with the matching of \(\texttt {s[j}..\texttt {)}\). The ck parameter is a first-class predicate that characterizes the index j passed to k. The cons predicate, declared at the top of the figure, is a shortcut to simplify annotations.
Fig. 13.
Regular expression engine in Coma, with specification.
In addition to the annotations given in Fig. 13, we have also instrumented the Coma code to verify the termination of handler a, as described in Section 4. In this case, the variant is a pair \((|\texttt {s}|-\texttt {i},\,\texttt {r})\), ordered lexicographically: namely, we either progress in string s or we move to a smaller regular expression. When the VC for handler accept is sent to Why3, it is split into 44 individual proof tasks which are easily discharged by the SMT solvers Z3 [4] and Alt-Ergo [3].
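The lexicographic ordering on this variant can be illustrated with a small Python sketch; the structural size measure on regexps is our assumption:

```python
# Hypothetical sketch of the termination variant (|s| - i, r), with regexps
# compared by structural size. Python tuples compare lexicographically,
# which matches the ordering used for the variant.

def size(r):
    tag = r[0]
    if tag in ('eps', 'char'):
        return 1
    if tag == 'star':
        return 1 + size(r[1])
    return 1 + size(r[1]) + size(r[2])   # 'alt' and 'seq'

def variant(s, i, r):
    return (len(s) - i, size(r))
```

Each recursive call either advances the index i (the first component strictly decreases) or keeps i and recurses into a strict subexpression (the second component decreases).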

7 Case Study: Verified Assembly Code

Fig. 14.
Example of verified x86-64 code.
We believe that Coma is a suitable intermediate language for the verification of unstructured programs. As a proof of concept, we have built a prototype tool for the deductive verification of x86-64 assembly programs. The input of the tool is assembly code annotated with assumptions, assertions, and loop invariants. Figure 14 shows the x86-64 assembly code for a function sortbits that sorts the bits of a 64-bit integer using the “I can’t believe it can sort” algorithm by Fung [8]. We use the AT&T syntax, with the destination operand on the right. For instance, \(\texttt {mov}\,A,B\) copies the register A into B, and \(\texttt {andn}\,A,B,C\) computes \(A\wedge \lnot B\) and stores it in C. Integer literals start with a $ sign. The code contains unnecessary labels (e.g., test2), which are only introduced to simplify the forthcoming explanations.
Function sortbits receives an integer in the rdi register and returns an integer in the rax register, with the same number of 1 bits, which are moved to the least significant positions. The code iterates over all pairs of bits \(0 \le i,j < 64\), using registers rdi and rsi, with two nested loops. Whenever the bit i is clear and the bit j is set, the two bits are swapped. (It is not obvious why this sorting procedure is correct; see Fung’s paper for an explanation.) The code contains logical annotations as special comments: namely, two loop invariants and one assertion before the function end. Here, we only show that the population count remains constant (using a logical function pop), but we do not show that bits are indeed sorted. We use the notation rdi@sortbits to refer to the value of the rdi register at function entry.
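At the level of 64-bit integers, the double loop can be rendered as the following Python sketch. The swap condition below is chosen so that the set bits land at the least significant positions, as stated above; depending on bit numbering and loop direction, the register-level test in Fig. 14 may be the mirror condition:

```python
def sortbits(x):
    # Double loop over all pairs of bit positions 0 <= i, j < 64, in the
    # spirit of Fung's "I can't believe it can sort": swapping a set bit i
    # with a clear bit j eventually gathers all 1 bits at the low end.
    for i in range(64):
        for j in range(64):
            bi = (x >> i) & 1
            bj = (x >> j) & 1
            if bi == 1 and bj == 0:
                x ^= (1 << i) | (1 << j)   # swap the two (unequal) bits
    return x
```

Every swap preserves the population count, which is the property stated by the assertion in Fig. 14; for example, `sortbits(0b1010)` yields `0b0011`.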
Fig. 15.
Compilation passes for sortbits.
Our tool parses the code and its annotations, and starts by building its control-flow graph (depicted in Fig. 15a). Then, it computes the dominator tree, the entry point being the function entry. A basic block A dominates a block B whenever every path from the entry to B traverses A. For instance, block loop2 dominates block test2, which itself dominates swap2. Finally, our tool builds Coma code that follows the structure of the dominator tree. In this way, we do not need to repeat the outer invariant in the inner loop for variables that are not modified. For instance, the handler swap2 is a local definition in handler test2, which is itself a local definition in handler loop2. Each invariant is translated into an assertion followed by a barrier. Figure 15b shows the structure of the Coma code for sortbits. For easier reading, we omit type annotations and keep only what relates to control flow and specification. Conditional jumps are translated using the primitive if (and suitable Boolean conditions \(\texttt {\text { b}}_i\)), and unconditional jumps are handler calls. Parts where references are modified are written "\({\texttt {\small [...]}}\)" for clarity. The translated code relies on our Why3 model of a fragment of the x86-64 instruction set. For instance, the instruction andn is translated into a Coma reference assignment &rax \({\mathtt {\leftarrow }}\) andn rax rsi, where the logical function andn is defined in the accompanying Why3 library.
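The dominator sets themselves can be computed with the classic iterative data-flow algorithm; the miniature CFG below merely echoes the block names of Fig. 15a and is not its exact shape:

```python
# Iterative dominator computation on a hypothetical mini-CFG.
def dominators(cfg, entry):
    nodes = set(cfg)
    preds = {n: [] for n in nodes}
    for n, succs in cfg.items():
        for s in succs:
            preds[s].append(n)
    # start from the full set and shrink to the greatest fixpoint
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = {n} | set.intersection(*(dom[p] for p in preds[n]))
            if new != dom[n]:
                dom[n], changed = new, True
    return dom

cfg = {
    'entry': ['loop2'],
    'loop2': ['test2', 'done'],
    'test2': ['swap2', 'next2'],
    'swap2': ['next2'],
    'next2': ['loop2'],
    'done':  [],
}
dom = dominators(cfg, 'entry')
```

On this graph, loop2 dominates test2, which dominates swap2, so the generated handlers nest accordingly: swap2 inside test2 inside loop2.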
The Coma back-end computes the VC for the code in Fig. 15b, and sends it to Why3. There it is split into 5 proof tasks—two instances of invariant initialization, two instances of invariant preservation, and the final postcondition—which are automatically proved by Z3 and Alt-Ergo.

8 Related Work

To our knowledge, no deductive verification system features explicit abstraction barriers in the style of Coma. Compared to widely used intermediate verification languages like Boogie [13], Viper [14], or WhyML [6], Coma is a smaller language, with fewer constructs and a simpler VC generator. Still, we are not aware of an auto-active program verifier that would admit the higher-order regexp example from Section 6. To handle stateful computations, Coma implements first-class alias-free mutable variables, which is close to what is provided by Boogie and WhyML. Viper, in comparison, offers a significantly more powerful ownership-based reasoning framework.
There is a natural connection between the weakest-precondition calculus and the CPS transformation: the former is a predicate transformer and the latter a structurally similar code transformer. This connection was first studied on a minimal imperative language by Jensen [9]. This work was later extended with exception handling and the goto statement by Audebaud and Zucca [1], and further with recursion, higher-order functions, and side effects by Kura [11]. A predicate transformer called the Dijkstra monad, introduced by Swamy et al. [17] and used to verify higher-order and effectful \(\textrm{F}^\star \) programs, also highlights this connection. Coma exploits the relation between the continuation-based style and the WP computation in a similar manner. Explicit abstraction barriers allow us to verify recursive code without computing a fixed point of its verification condition.
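This connection can be made concrete in a few lines: a weakest-precondition generator for a toy imperative language is literally a CPS-style composition of predicate transformers. The sketch below is ours and is not Coma's VC engine:

```python
# Commands are predicate transformers: c(Q) is wp(c, Q), with predicates
# represented as Boolean functions on states (dicts mapping names to values).

def assign(x, f):
    # wp(x := f, Q) = Q[x <- f]
    return lambda Q: lambda st: Q({**st, x: f(st)})

def seq(c1, c2):
    # wp(c1; c2, Q) = wp(c1, wp(c2, Q)) -- continuation-style composition
    return lambda Q: c1(c2(Q))

def ite(b, c1, c2):
    # wp(if b then c1 else c2, Q) evaluated pointwise on states
    return lambda Q: lambda st: c1(Q)(st) if b(st) else c2(Q)(st)

# wp(x := x + 1; if x > 0 then y := x else y := 0,  {y = 5})
prog = seq(assign('x', lambda st: st['x'] + 1),
           ite(lambda st: st['x'] > 0,
               assign('y', lambda st: st['x']),
               assign('y', lambda st: 0)))
pre = prog(lambda st: st['y'] == 5)
```

Running `pre` on a concrete state decides whether the postcondition holds after execution, e.g. it holds for a state with x = 4 and fails for x = 3.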
The CFML tool developed by Charguéraud [2] supports the interactive verification of higher-order stateful programs written in a subset of OCaml. Programs are translated into so-called characteristic formulas, which essentially capture the weakest preconditions of the programs with respect to a shallow embedding of separation logic in Coq. Specifications are proved in the form of lemmas derived from characteristic formulas. Unlike in Coma or other VC-based program verifiers, the program logic rules of CFML have to be applied manually in the course of an interactive Coq proof. Assertions and invariants are provided as the proof progresses, which limits the possibilities for automation. On the other hand, CFML offers greater flexibility in stating properties of program code, such as verifying a given function against two different contracts.

9 Conclusion

We presented Coma, a higher-order IVL with explicit abstraction barriers that can be placed inside function definitions to make the exposed part of the computation appear in the specification. This allows us to write specifications in a more concise and flexible way, without having to manually translate executable code into logical specifications.
In the future, we consider adding mechanisms of ownership and borrowing to handle mutable state, and using prophecy variables in verification conditions. We are also interested in supporting more advanced control structures, such as iterators, coroutines, and algebraic effects. We also consider an alternative VC rule, where an above-barrier subhandler definition is verified at the callee site rather than the caller site. In certain programs with nested loops, like the sortbits example from Sec. 7, this allows us to move all invariants into inner loops, avoiding repetition and reducing the specification burden. Finally, we continue to improve the efficiency of our implementation, in particular the recipe evaluation engine and the subgoal factorization heuristics.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1. Modulo semantic equality of answer terms during evaluation of primitive handlers; see below.
2. This evaluation can be reproduced using the instructions in the Zenodo archive available online at https://doi.org/10.5281/zenodo.14766822.
References
1. Audebaud, P., Zucca, E.: Deriving proof rules from continuation semantics. Formal Aspects of Computing 11, 426–447 (1999)
2. Charguéraud, A.: Characteristic formulae for the verification of imperative programs. In: Chakravarty, M.M.T., Hu, Z., Danvy, O. (eds.) Proceedings of the 16th ACM SIGPLAN International Conference on Functional Programming (ICFP), vol. 46(9), pp. 418–430. ACM, Tokyo, Japan (September 2011)
3. Conchon, S., Coquereau, A., Iguernelala, M., Mebsout, A.: Alt-Ergo 2.2. In: SMT Workshop: International Workshop on Satisfiability Modulo Theories (2018)
4. De Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: International Conference on Tools and Algorithms for the Construction and Analysis of Systems, vol. 4963, pp. 337–340. Springer (2008)
5. Denis, X., Jourdan, J.-H., Marché, C.: Creusot: a foundry for the deductive verification of Rust programs. In: International Conference on Formal Engineering Methods, vol. 13478, pp. 90–105. Springer (2022)
6. Filliâtre, J.-C., Paskevich, A.: Why3 — where programs meet provers. In: Felleisen, M., Gardner, P. (eds.) Proceedings of the 22nd European Symposium on Programming. Lecture Notes in Computer Science, vol. 7792, pp. 125–128. Springer (March 2013)
7. Flanagan, C., Saxe, J.B.: Avoiding exponential explosion: generating compact verification conditions. In: Principles of Programming Languages, vol. 36(3), pp. 193–205. ACM (2001)
9. Jensen, K.: Connection between Dijkstra's predicate-transformers and denotational continuation-semantics. DAIMI Report Series 86 (1978)
10. Krivine, J.-L.: Un interprèteur du \(\lambda \)-calcul. Unpublished report (1985)
11.
13. Leino, K.R.M.: This is Boogie 2. KRML Manuscript 178 (2008)
14. Müller, P., Schwerhoff, M., Summers, A.J.: Viper: a verification infrastructure for permission-based reasoning. In: Verification, Model Checking, and Abstract Interpretation: 17th International Conference, vol. 9583, pp. 41–62. Springer (2016)
16.
17. Swamy, N., Weinberger, J., Schlesinger, C., Chen, J., Livshits, B.: Verifying higher-order programs with the Dijkstra monad. ACM SIGPLAN Notices 48(6), 387–398 (2013)
18. Thielecke, H.: Comparing control constructs by double-barrelled CPS. Higher-Order and Symbolic Computation 15, 141–160 (2002)
Metadata
Title: coma, an Intermediate Verification Language with Explicit Abstraction Barriers
Authors: Andrei Paskevich, Paul Patault, Jean-Christophe Filliâtre
Copyright year: 2025
DOI: https://doi.org/10.1007/978-3-031-91121-7_8