
About this book

This book constitutes the proceedings of the 26th European Symposium on Programming, ESOP 2017, which took place in Uppsala, Sweden, in April 2017, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017.

The 36 papers presented in this volume were carefully reviewed and selected from 112 submissions. They cover traditional as well as emerging topics in programming languages. In detail, they deal with semantic foundations and type systems for probabilistic programming; techniques for verifying concurrent or higher-order programs; programming languages for arrays or web data; program analysis and verification of non-standard program properties; foundations and applications of interactive theorem proving; graph rewriting; separation logic; session types; type theory; and implicit computational complexity.

Table of Contents

Frontmatter

Disjoint Polymorphism

The combination of intersection types, a merge operator and parametric polymorphism enables important applications for programming. However, such a combination makes it hard to achieve the desirable property of a coherent semantics: all valid reductions for the same expression should have the same value. Recent work proposed disjoint intersection types as a means to ensure coherence in a simply typed setting. However, the addition of parametric polymorphism was not studied. This paper presents $$\mathsf {F}_{\mathsf {i}}$$: a calculus with disjoint intersection types, a variant of parametric polymorphism and a merge operator. $$\mathsf {F}_{\mathsf {i}}$$ is both type-safe and coherent. The key difficulty is that, when a type variable occurs in an intersection type, it is not statically known whether the instantiated type will be disjoint to other components of the intersection. To address this problem we propose disjoint polymorphism: a constrained form of parametric polymorphism, which allows disjointness constraints for type variables. With disjoint polymorphism the calculus remains very flexible in terms of programs that can be written, while retaining coherence.
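
To illustrate the problem and the proposed constraint (an informal sketch; the notation only approximates the paper's): $$1 ,\!, \mathsf {true} : \mathsf {Int} \,\&\, \mathsf {Bool}$$ is accepted, since the two component types are disjoint and extraction is unambiguous, whereas $$1 ,\!, 2 : \mathsf {Int} \,\&\, \mathsf {Int}$$ is rejected, since extracting an $$\mathsf {Int}$$ could yield either value and coherence would be lost. Disjoint polymorphism then lets one write $$\Lambda (\alpha * \mathsf {Int}).\, \lambda (x{:}\alpha ).\; x ,\!, 1$$, where the constraint $$\alpha * \mathsf {Int}$$ guarantees that every instantiation of $$\alpha$$ remains disjoint from $$\mathsf {Int}$$.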

João Alpuim, Bruno C. d. S. Oliveira, Zhiyuan Shi

Generalizing Inference Systems by Coaxioms

We introduce a generalized notion of inference system to support structural recursion on non-well-founded datatypes. Besides axioms and inference rules with the usual meaning, a generalized inference system allows coaxioms, which are, intuitively, axioms which can only be applied “at infinite depth” in a proof tree. This notion nicely subsumes standard inference systems and their inductive and coinductive interpretations, while providing more flexibility. Indeed, the classical results on the existence and constructive characterization of least and greatest fixed points can be extended to our generalized framework, interpreting recursive definitions as fixed points which are not necessarily the least, nor the greatest one. This allows formal reasoning in cases where the inductive and coinductive interpretations do not provide the intended meaning, or are mixed together.
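
A small example in the spirit of the paper (notation approximate): let $$\mathit{elems}(l,X)$$ mean that $$X$$ is the set of elements of the possibly infinite list $$l$$, defined by the axiom $$\mathit{elems}(\Lambda ,\emptyset )$$, the rule deriving $$\mathit{elems}(x{:}l,\{x\}\cup X)$$ from $$\mathit{elems}(l,X)$$, and the coaxiom $$\mathit{elems}(l,\emptyset )$$. The inductive interpretation defines the judgment only on finite lists; the coinductive one relates an infinite list to too many sets (for the infinite list of 1s, any $$X$$ with $$1 \in X$$ qualifies, since $$X = \{1\} \cup X$$). The coaxiom, applicable only at infinite depth, prunes the coinductive interpretation down to the intended one, in which $$X$$ is exactly the set of elements.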

Davide Ancona, Francesco Dagnino, Elena Zucca

Observed Communication Semantics for Classical Processes

Classical Linear Logic (CLL) has long inspired readings of its proofs as communicating processes. Wadler’s CP calculus is one of these readings. Wadler gave CP an operational semantics by selecting a subset of the cut-elimination rules of CLL to use as reduction rules. This semantics has an appealing close connection to the logic, but does not resolve the status of the other cut-elimination rules, and does not admit an obvious notion of observational equivalence. We propose a new operational semantics for CP based on the idea of observing communication. We use this semantics to define an intuitively reasonable notion of observational equivalence. To reason about observational equivalence, we use the standard relational denotational semantics of CLL. We show that this denotational semantics is adequate for our operational semantics. This allows us to deduce that, for instance, all the cut-elimination rules of CLL are observational equivalences.

Robert Atkey

Is Your Software on Dope?

Formal Analysis of Surreptitiously “enhanced” Programs

Usually, it is the software manufacturer who employs verification or testing to ensure that the software embedded in a device meets its main objectives. However, these days we are confronted with the situation that economical or technological reasons might make a manufacturer become interested in the software slightly deviating from its main objective for dubious reasons. Examples include lock-in strategies and the $$\mathrm {NO}_x$$ emission scandals in the automotive industry. This phenomenon is what we call software doping. It is becoming more widespread as software is embedded in ever more devices of daily use. The primary contribution of this article is to provide a hierarchy of simple but solid formal definitions that make it possible to distinguish whether a program is clean or doped. Moreover, we show that these characterisations provide an immediate framework for analysis by using already existing verification techniques. We exemplify this by applying self-composition on sequential programs and model checking of HyperLTL formulas on reactive models.
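
As a rough illustration of the self-composition idea (a minimal sketch with hypothetical names, not the paper's formal definitions): cleanness of a parameterized program is a 2-safety property, so it can be checked by running two copies of the program on the same standard input but different parameters and requiring equal outputs. In Haskell:

    import Test.QuickCheck

    -- Hypothetical emission-control model: the output depends on a
    -- calibration parameter as well as on the standard input.
    emission :: Int -> Int -> Int
    emission param input = 2 * input + param   -- doped: the parameter leaks

    -- Self-composition: run two copies side by side on the same input.
    -- A clean program must produce the same output for any two parameters.
    cleanOn :: (Int -> Int -> Int) -> Int -> Int -> Int -> Bool
    cleanOn prog p1 p2 i = prog p1 i == prog p2 i

    main :: IO ()
    main = quickCheck (cleanOn emission)   -- fails, exposing the doping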

Pedro R. D’Argenio, Gilles Barthe, Sebastian Biewer, Bernd Finkbeiner, Holger Hermanns

Friends with Benefits

Implementing Corecursion in Foundational Proof Assistants

We introduce AmiCo, a tool that extends a proof assistant, Isabelle/HOL, with flexible function definitions well beyond primitive corecursion. All definitions are certified by the assistant’s inference kernel to guard against inconsistencies. A central notion is that of friends: functions that preserve the productivity of their arguments and that are allowed in corecursive call contexts. As new friends are registered, corecursion benefits by becoming more expressive. We describe this process and its implementation, from the user’s specification to the synthesis of a higher-order definition to the registration of a friend. We show some substantial case studies where our approach makes a difference.
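
The flavor of the idea, transposed to Haskell's lazy streams (AmiCo itself lives in Isabelle/HOL; this sketch is only an analogy):

    data Stream a = Cons a (Stream a)

    -- zipWithS preserves productivity: it consumes one element of each
    -- input per element of output, so it is a "friend" and may surround
    -- corecursive calls.
    zipWithS :: (a -> b -> c) -> Stream a -> Stream b -> Stream c
    zipWithS f (Cons x xs) (Cons y ys) = Cons (f x y) (zipWithS f xs ys)

    tailS :: Stream a -> Stream a
    tailS (Cons _ xs) = xs

    -- The corecursive call to fibs sits under the friend zipWithS;
    -- primitive corecursion alone would reject this definition.
    fibs :: Stream Integer
    fibs = Cons 0 (Cons 1 (zipWithS (+) fibs (tailS fibs)))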

Jasmin Christian Blanchette, Aymeric Bouzy, Andreas Lochbihler, Andrei Popescu, Dmitriy Traytel

Confluence of Graph Rewriting with Interfaces

For terminating double-pushout (DPO) graph rewriting systems, confluence is, in general, undecidable. We show that confluence is decidable for an extension of DPO rewriting to graphs with interfaces. This variant is important because it is closely related to rewriting of string diagrams. We show that our result extends, under mild conditions, to decidability of confluence for terminating rewriting systems of string diagrams in symmetric monoidal categories.

Filippo Bonchi, Fabio Gadducci, Aleks Kissinger, Paweł Sobociński, Fabio Zanasi

Verifying Robustness of Event-Driven Asynchronous Programs Against Concurrency

We define a correctness criterion, called robustness against concurrency, for a class of event-driven asynchronous programs that are at the basis of modern UI frameworks in Android, iOS, and Javascript. A program is robust when all possible behaviors admitted by the program under arbitrary procedure and event interleavings are admitted even if asynchronous procedures (respectively, events) are assumed to execute serially, one after the other, accessing shared memory in isolation. We characterize robustness as a conjunction of two correctness criteria: event-serializability (i.e., events can be seen as atomic) and event-determinism (executions within each event are insensitive to the interleavings between concurrent tasks dynamically spawned by the event). Then, we provide efficient algorithms for checking these two criteria based on polynomial reductions to reachability problems in sequential programs. This result is surprising because it makes it possible to avoid explicit handling of all concurrent executions in the analysis, which leads to an important gain in complexity. We demonstrate via case studies on Android apps that the typical mistakes programmers make are captured as robustness violations, and that violations can be detected efficiently using our approach.

Ahmed Bouajjani, Michael Emmi, Constantin Enea, Burcu Kulahcioglu Ozkan, Serdar Tasiran

Incremental Update for Graph Rewriting

Graph rewriting formalisms are well-established models for the representation of biological systems such as protein-protein interaction networks. The combinatorial complexity of these models usually prevents any explicit representation of the variables of the system, and one has to rely on stochastic simulations in order to sample the possible trajectories of the underlying Markov chain. The bottleneck of stochastic simulation algorithms is the update of the propensity function that describes the probability that a given rule is to be applied next. In this paper we present an algorithm based on a data structure, called extension basis, that can be used to update the counts of predefined graph observables after a rule of the model has been applied. Extension bases are obtained by static analysis of the graph rewriting rule set. The data structure is derived from the construction of a qualitative domain for graphs, and the correctness of the procedure is proven using a purely domain-theoretic argument.

Pierre Boutillier, Thomas Ehrhard, Jean Krivine

Linearity, Control Effects, and Behavioral Types

Mainstream programming idioms intensively rely on state mutation, sharing, and concurrency. Designing type systems for handling and disciplining such idioms is challenging, due to long known conflicts between internal non-determinism, linearity, and control effects such as exceptions. In this paper, we present the first type system that accommodates non-deterministic and abortable behaviors in the setting of session-based concurrent programs. Remarkably, our type system builds on a Curry-Howard correspondence with (classical) linear logic conservatively extended with two dual modalities capturing an additive (co)monad, and provides a first example of a Curry-Howard interpretation of a realistic programming language with built-in internal non-determinism. Thanks to its deep logical foundations, our system elegantly addresses several well-known tensions between control, linearity, and non-determinism: globally, it enforces progress and fidelity; locally, it allows the specification of non-deterministic and abortable computations. The expressivity of our system is illustrated by several examples, including a typed encoding of a higher-order functional language with threads, session channels, non-determinism, and exceptions.

Luís Caires, Jorge A. Pérez

Temporary Read-Only Permissions for Separation Logic

We present an extension of Separation Logic with a general mechanism for temporarily converting any assertion (or “permission”) to a read-only form. No accounting is required: our read-only permissions can be freely duplicated and discarded. We argue that, in circumstances where mutable data structures are temporarily accessed only for reading, our read-only permissions enable more concise specifications and proofs. The metatheory of our proposal is verified in Coq.
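
For instance (a hypothetical specification in the style of the paper), copying a value from cell a into cell b needs only read access to a: $$\{\, \mathrm{RO}(a \mapsto v) \ast b \mapsto w \,\}\ b := {!}a\ \{\, \mathrm{RO}(a \mapsto v) \ast b \mapsto v \,\}$$. Because read-only permissions are duplicable, a caller holding $$a \mapsto v$$ can view it as $$\mathrm{RO}(a \mapsto v)$$ for the duration of the call, hand copies to several readers, and recover the full permission afterwards.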

Arthur Charguéraud, François Pottier

Faster Algorithms for Weighted Recursive State Machines

Pushdown systems (PDSs) and recursive state machines (RSMs), which are linearly equivalent, are standard models for interprocedural analysis. Yet RSMs are more convenient as they (a) explicitly model function calls and returns, and (b) specify many natural parameters for algorithmic analysis, e.g., the number of entries and exits. We consider a general framework where RSM transitions are labeled from a semiring and path properties are algebraic with semiring operations, which can model, e.g., interprocedural reachability and dataflow analysis problems. Our main contributions are new algorithms for several fundamental problems. As compared to a direct translation of RSMs to PDSs and the best-known existing bounds of PDSs, our analysis algorithm improves the complexity for finite-height semirings (a class that subsumes reachability and standard dataflow properties). We further consider the problem of extracting distance values from the representation structures computed by our algorithm, and give efficient algorithms that distinguish the complexity of a one-time preprocessing from the complexity of each individual query. Another advantage of our algorithm is that our improvements carry over to the concurrent setting, where we improve the best-known complexity for the context-bounded analysis of concurrent RSMs. Finally, we provide a prototype implementation that gives a significant speed-up on several benchmarks from the SLAM/SDV project.
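
The algebraic setting the paper assumes can be made concrete with a minimal Haskell sketch (class and instance names are ours): the semiring interface, with plus combining alternative paths and times composing labels along a single path, and two standard instances.

    class Semiring s where
      zero, one   :: s
      plus, times :: s -> s -> s

    -- Boolean semiring: algebraic path problems become plain reachability.
    instance Semiring Bool where
      zero = False; one = True
      plus = (||);  times = (&&)

    -- Tropical semiring: algebraic path problems become shortest distances.
    newtype Trop = Trop Double deriving Show
    instance Semiring Trop where
      zero = Trop (1 / 0)                      -- infinity: no path
      one  = Trop 0                            -- empty path
      plus  (Trop a) (Trop b) = Trop (min a b)
      times (Trop a) (Trop b) = Trop (a + b)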

Krishnendu Chatterjee, Bernhard Kragl, Samarth Mishra, Andreas Pavlogiannis

ML and Extended Branching VASS

We prove that the observational equivalence problem for a finitary fragment of ML is recursively equivalent to the reachability problem for extended branching vector addition systems with states (EBVASS). Our proof uses the fully abstract game semantics of the language. We introduce a new class of automata, VPCMA, as a representation of the game semantics. VPCMA are a version of class memory automata equipped with a visibly pushdown stack; they serve as a bridge enabling interreducibility of decision problems between the game semantics and EBVASS. The results of this paper complete our programme to give an automata classification of the ML types with respect to the observational equivalence problem for closed terms.

Conrad Cotton-Barratt, Andrzej S. Murawski, C. -H. Luke Ong

Metric Reasoning About $$\lambda $$-Terms: The General Case

In any setting in which observable properties have a quantitative flavor, it is natural to compare computational objects by way of metrics rather than equivalences or partial orders. This holds, in particular, for probabilistic higher-order programs. A natural notion of comparison, then, becomes context distance, the metric analogue of Morris’ context equivalence. In this paper, we analyze the main properties of the context distance in fully-fledged probabilistic $$\lambda $$-calculi, thereby going beyond the state of the art, in which only affine calculi were considered. We first of all study to which extent the context distance trivializes, giving a sufficient condition for trivialization. We then characterize context distance by way of a coinductively-defined, tuple-based notion of distance in one of those calculi, called $$\varLambda ^\oplus _!$$. We finally derive pseudometrics for call-by-name and call-by-value probabilistic $$\lambda $$-calculi, and prove them fully abstract.

Raphaëlle Crubillé, Ugo Dal Lago

Contextual Equivalence for Probabilistic Programs with Continuous Random Variables and Scoring

We present a logical relation for proving contextual equivalence in a probabilistic programming language (PPL) with continuous random variables and with a scoring operation for expressing observations and soft constraints. Our PPL model is based on a big-step operational semantics that represents an idealized sampler with likelihood weighting. The semantics treats probabilistic non-determinism as a deterministic process guided by a source of entropy. We derive a measure on result values by aggregating (that is, integrating) the behavior of the operational semantics over the entropy space. Contextual equivalence is defined in terms of these measures, taking real events as observable behavior. We define a logical relation and prove it sound with respect to contextual equivalence. We demonstrate the utility of the logical relation by using it to prove several useful examples of equivalences, including the equivalence of a $$\beta _v$$-redex and its contractum and a general form of expression re-ordering. The latter equivalence is sound for the sampling and scoring effects of probabilistic programming but not for effects like mutation or control.
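
For instance, the $$\beta _v$$ law established as a contextual equivalence is $$(\lambda x.\, e)\, v \cong _{\mathrm {ctx}} e[v/x]$$, and the re-ordering law permutes two independent effectful bindings, which is sound for sampling and scoring but would fail in the presence of mutation or control effects.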

Ryan Culpepper, Andrew Cobb

Probabilistic Termination by Monadic Affine Sized Typing

We introduce a system of monadic affine sized types, which substantially generalises usual sized types and in this way captures probabilistic higher-order programs that terminate almost surely. Going beyond plain strong normalisation without losing soundness turns out to be a hard task, which cannot be accomplished without a richer, quantitative notion of types and without imposing some affinity constraints. The proposed type system is powerful enough to type classic examples of probabilistically terminating programs such as random walks. The way typable programs are proved to be almost surely terminating is based on reducibility, but requires a substantial adaptation of the technique.
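
For a concrete member of the target class of programs, here is a random walk sketched in Haskell (illustration only; the paper works in a typed $$\lambda $$-calculus): it terminates with probability 1, yet some probabilistic branches run forever, so plain strong normalisation is out of reach.

    import System.Random (randomRIO)

    -- Symmetric random walk on the naturals, absorbed at 0. Termination
    -- holds almost surely, but not on every probabilistic branch.
    walk :: Int -> IO Int
    walk 0 = pure 0
    walk n = do
      coin <- randomRIO (0, 1 :: Double)
      rest <- if coin < 0.5 then walk (n - 1) else walk (n + 1)
      pure (1 + rest)                 -- count steps until absorption

    main :: IO ()
    main = walk 10 >>= print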

Ugo Dal Lago, Charles Grellois

Caper

Automatic Verification for Fine-Grained Concurrency

Recent program logics based on separation logic emphasise a modular approach to proving functional correctness for fine-grained concurrent programs. However, these logics have no automation support. In this paper, we present Caper, a prototype tool for automated reasoning in such a logic. Caper is based on symbolic execution, integrating reasoning about interference on shared data and about ghost resources that are used to mediate this interference. This enables Caper to verify the functional correctness of fine-grained concurrent algorithms.

Thomas Dinsdale-Young, Pedro da Rocha Pinto, Kristoffer Just Andersen, Lars Birkedal

Tackling Real-Life Relaxed Concurrency with FSL++

We extend fenced separation logic (FSL), a program logic for reasoning about C11 relaxed accesses and memory fences. Our extensions to FSL allow us to handle concurrent algorithms appearing in practice. New features added to FSL allow for reasoning about concurrent non-atomic reads, atomic updates, ownership transfer via release sequences, and ghost state. As a demonstration of the power of the extended FSL, we verify the correctness of the atomic reference counter (ARC) from the standard library of the Rust programming language, whose implementation relies heavily on advanced features of the C11 memory model. Soundness of FSL and its extensions, as well as the correctness proof of ARC, have been established in Coq.

Marko Doko, Viktor Vafeiadis

Extensible Datasort Refinements

Refinement types turn typechecking into lightweight verification. The classic form of refinement type is the datasort refinement, in which datasorts identify subclasses of inductive datatypes. Existing type systems for datasort refinements require that all the refinements of a type be specified when the type is declared; multiple refinements of the same type can be obtained only by duplicating type definitions, and consequently, duplicating code. We enrich the traditional notion of a signature, which describes the inhabitants of datasorts, to allow re-refinement via signature extension, without duplicating definitions. Since arbitrary updates to a signature can invalidate the inversion principles used to check case expressions, we develop a definition of signature well-formedness that ensures that extensions maintain existing inversion principles. This definition allows different parts of a program to extend the same signature in different ways, without conflicting with each other. Each part can be type-checked independently, allowing separate compilation.
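
An illustrative signature (ours, not the paper's syntax): refine a list type by datasorts $$\mathsf {emptyList}$$ and $$\mathsf {consList}$$ with $$\mathsf {nil} : \mathsf {emptyList}$$ and $$\mathsf {cons} : \alpha \times \mathsf {list} \rightarrow \mathsf {consList}$$; then $$\mathsf {head} : \mathsf {consList} \rightarrow \alpha$$ is total, with no inexhaustive match. With extensible signatures, a later part of the program can further refine the same list type, say by even/odd length, without redefining the type or duplicating the functions over it.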

Joshua Dunfield

Programs Using Syntax with First-Class Binders

We present a general methodology for adding support for higher-order abstract syntax definitions and first-class contexts to an existing ML-like language. As a consequence, low-level infrastructure that deals with representing variables and contexts can be factored out. This avoids errors in manipulating low-level operations, eases the task of prototyping program transformations and can have a major impact on the effort and cost of implementing such systems. We allow programmers to define syntax in a variant of the logical framework LF and to write programs that analyze these syntax trees via pattern matching as part of their favorite ML-like language. The syntax definitions and patterns on syntax trees are then eliminated via a translation using a deep embedding of LF that is defined in ML. We take advantage of GADTs which are frequently supported in ML-like languages to ensure our translation preserves types. The resulting programs can be type checked reusing the ML type checker, and compiled reusing its first-order pattern matching compilation. We have implemented this idea in a prototype written for and in OCaml and demonstrated its effectiveness by implementing a wide range of examples such as type checkers, evaluators, and compilation phases such as CPS translation and closure conversion.
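
The payoff of higher-order abstract syntax is easiest to see in miniature (a Haskell analogy; the paper's source language is richer, and its translation targets GADTs):

    -- Object-language binders are meta-language functions, so
    -- capture-avoiding substitution comes for free.
    data Exp = Lam (Exp -> Exp) | App Exp Exp

    -- Weak head normalisation: beta reduction is just application.
    whnf :: Exp -> Exp
    whnf (App f a) = case whnf f of
      Lam body -> whnf (body a)
      f'       -> App f' a
    whnf e = e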

Francisco Ferreira, Brigitte Pientka

LINCX: A Linear Logical Framework with First-Class Contexts

Linear logic provides an elegant framework for modelling stateful, imperative and concurrent systems by viewing a context of assumptions as a set of resources. However, mechanizing the meta-theory of such systems remains a challenge, as we need to manage and reason about mixed contexts of linear and intuitionistic assumptions. We present Lincx, a contextual linear logical framework with first-class mixed contexts. Lincx allows us to model (linear) abstract syntax trees as syntactic structures that may depend on intuitionistic and linear assumptions. It can also serve as a foundation for reasoning about such structures. Lincx extends the linear logical framework LLF with first-class (linear) contexts and an equational theory of context joins that can otherwise be very tedious and intricate to develop. This work may also be viewed as a generalization of contextual LF that supports both intuitionistic and linear variables, functions, and assumptions. We describe a decidable type-theoretic foundation for Lincx that only characterizes canonical forms and show that our equational theory of context joins is associative and commutative. Finally, we outline how Lincx may serve as a practical foundation for mechanizing the meta-theory of stateful systems.

Aina Linn Georges, Agata Murawska, Shawn Otis, Brigitte Pientka

APLicative Programming with Naperian Functors

Much of the expressive power of array-oriented languages such as Iverson’s APL and J comes from their implicit lifting of scalar operations to act on higher-ranked data, for example to add a value to each element of a vector, or to add two compatible matrices pointwise. It is considered a shape error to attempt to combine arguments of incompatible shape, such as a 3-vector with a 4-vector. APL and J are dynamically typed, so such shape errors are caught only at run-time. Recent work by Slepak et al. develops a custom type system for an array-oriented language, statically ruling out such errors. We show here that such a custom language design is unnecessary: the requisite compatibility checks can already be captured in modern expressive type systems, as found for example in Haskell; moreover, generative type-driven programming can exploit that static type information constructively to automatically induce the appropriate liftings. We show also that the structure of multi-dimensional data is inherently a matter of Naperian applicative functors—lax monoidal functors, with strength, commutative up to isomorphism under composition—that also support traversal.
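
The key type class (modulo naming details), with its simplest instance; pointwise lifting falls out of the tabulate/lookup isomorphism:

    {-# LANGUAGE TypeFamilies #-}

    -- A Naperian functor is a container of fixed shape, isomorphic to
    -- functions from a type of positions (its "logarithm").
    class Functor f => Naperian f where
      type Log f
      lookupN  :: f a -> Log f -> a
      tabulate :: (Log f -> a) -> f a

    data Pair a = Pair a a deriving Show

    instance Functor Pair where
      fmap g (Pair x y) = Pair (g x) (g y)

    instance Naperian Pair where
      type Log Pair = Bool
      lookupN (Pair x y) b = if b then y else x
      tabulate h = Pair (h False) (h True)

    -- Shape-safe pointwise zip: the types force equal shapes.
    nzipWith :: Naperian f => (a -> b -> c) -> f a -> f b -> f c
    nzipWith g xs ys = tabulate (\i -> g (lookupN xs i) (lookupN ys i))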

Jeremy Gibbons

Verified Characteristic Formulae for CakeML

Characteristic Formulae (CF) offer a productive, principled approach to generating verification conditions for higher-order imperative programs, but so far the soundness of CF has only been considered with respect to an informal specification of a programming language (OCaml). This leaves a gap between what is established by the verification framework and the program that actually runs. We present a fully-fledged CF framework for the formally specified CakeML programming language. Our framework extends the existing CF approach to support exceptions and I/O, thereby covering the full feature set of CakeML, and comes with a formally verified soundness theorem. Furthermore, it integrates with existing proof techniques for verifying CakeML programs. This validates the CF approach, and allows users to prove end-to-end theorems for higher-order imperative programs, from specification to language semantics, within a single theorem prover.

Armaël Guéneau, Magnus O. Myreen, Ramana Kumar, Michael Norrish

Unified Reasoning About Robustness Properties of Symbolic-Heap Separation Logic

We introduce heap automata, a formalism for automatic reasoning about robustness properties of the symbolic heap fragment of separation logic with user-defined inductive predicates. Robustness properties, such as satisfiability, reachability, and acyclicity, are important for a wide range of reasoning tasks in automated program analysis and verification based on separation logic. Previously, such properties have appeared in many places in the separation logic literature, but have not been studied in a systematic manner. In this paper, we develop an algorithmic framework based on heap automata that allows us to derive asymptotically optimal decision procedures for a wide range of robustness properties in a uniform way. We implemented a prototype of our framework and obtained promising results for all of the aforementioned robustness properties. Further, we demonstrate the applicability of heap automata beyond robustness properties. We apply our algorithmic framework to the model checking and the entailment problem for symbolic-heap separation logic.
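
For example, with the usual user-defined list-segment predicate (notation approximate) $$\mathsf {ls}(x,y) \Leftarrow (x = y : \mathsf {emp}) \mid \exists z.\; x \mapsto z \ast \mathsf {ls}(z,y)$$, a satisfiability query asks, e.g., whether $$x \mapsto y \ast x \mapsto z$$ has a model (it does not, since $$\ast$$ demands disjoint heap cells); heap automata answer such questions uniformly for whole classes of robustness properties.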

Christina Jansen, Jens Katelaan, Christoph Matheja, Thomas Noll, Florian Zuleger

Proving Linearizability Using Partial Orders

Linearizability is the commonly accepted notion of correctness for concurrent data structures. It requires that any execution of the data structure is justified by a linearization—a linear order on operations satisfying the data structure’s sequential specification. Proving linearizability is often challenging because an operation’s position in the linearization order may depend on future operations. This makes it very difficult to incrementally construct the linearization in a proof. We propose a new proof method that can handle data structures with such future-dependent linearizations. Our key idea is to incrementally construct not a single linear order of operations, but a partial order that describes multiple linearizations satisfying the sequential specification. This allows decisions about the ordering of operations to be delayed, mirroring the behaviour of data structure implementations. We formalise our method as a program logic based on rely-guarantee reasoning, and demonstrate its effectiveness by verifying several challenging data structures: the Herlihy-Wing queue, the TS queue and the Optimistic set.

Artem Khyzha, Mike Dodds, Alexey Gotsman, Matthew Parkinson

The Power of Non-determinism in Higher-Order Implicit Complexity

Characterising Complexity Classes Using Non-deterministic Cons-Free Programming

We investigate the power of non-determinism in purely functional programming languages with higher-order types. Specifically, we consider cons-free programs of varying data orders, equipped with explicit non-deterministic choice. Cons-freeness roughly means that data constructors cannot occur in function bodies and all manipulation of storage space thus has to happen indirectly using the call stack. While cons-free programs have previously been used by several authors to characterise complexity classes, the work on non-deterministic programs has almost exclusively considered programs of data order 0. Previous work has shown that adding explicit non-determinism to cons-free programs taking data of order 0 does not increase expressivity; we prove that this—dramatically—is not the case for higher data orders: adding non-determinism to programs with data order at least 1 allows for a characterisation of the entire class of elementary-time decidable sets. Finally we show how, even with non-deterministic choice, the original hierarchy of characterisations is restored by imposing different restrictions.
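
A deterministic, data-order-0 taste of cons-freeness, sketched in Haskell (illustration only): the body applies no data constructor, so all intermediate storage lives on the call stack.

    -- Decides whether the first list is a subsequence of the second,
    -- using only pattern matching (destructors), never constructors.
    subseq :: Eq a => [a] -> [a] -> Bool
    subseq []       _  = True
    subseq _        [] = False
    subseq l@(x:xs) (y:ys)
      | x == y    = subseq xs ys
      | otherwise = subseq l ys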

Cynthia Kop, Jakob Grue Simonsen

The Essence of Higher-Order Concurrent Separation Logic

Concurrent separation logics (CSLs) have come of age, and with age they have accumulated a great deal of complexity. Previous work on the Iris logic attempted to reduce the complex logical mechanisms of modern CSLs to two orthogonal concepts: partial commutative monoids (PCMs) and invariants. However, the realization of these concepts in Iris still bakes in several complex mechanisms—such as weakest preconditions and mask-changing view shifts—as primitive notions. In this paper, we take the Iris story to its (so to speak) logical conclusion, applying the reductionist methodology of Iris to Iris itself. Specifically, we define a small, resourceful base logic, which distills the essence of Iris: it comprises only the assertion layer of vanilla separation logic, plus a handful of simple modalities. We then show how the much fancier logical mechanisms of Iris—in particular, its entire program specification layer—can be understood as merely derived forms in our base logic. This approach helps to explain the meaning of Iris’s program specifications at a much higher level of abstraction than was previously possible. We also show that the step-indexed “later” modality of Iris is an essential source of complexity, in that removing it leads to a logical inconsistency. All our results are fully formalized in the Coq proof assistant.

Robbert Krebbers, Ralf Jung, Aleš Bizjak, Jacques-Henri Jourdan, Derek Dreyer, Lars Birkedal

Comprehending Isabelle/HOL’s Consistency

The proof assistant Isabelle/HOL is based on an extension of Higher-Order Logic (HOL) with ad hoc overloading of constants. It turns out that the interaction between the standard HOL type definitions and the Isabelle-specific ad hoc overloading is problematic for the logical consistency. In previous work, we have argued that standard HOL semantics is no longer appropriate for capturing this interaction, and have proved consistency using a nonstandard semantics. The use of an exotic semantics makes that proof hard to digest by the community. In this paper, we prove consistency by proof-theoretic means—following the healthy intuition of definitions as abbreviations, realized in HOLC, a logic that augments HOL with comprehension types. We hope that our new proof settles the Isabelle/HOL consistency problem once and for all. In addition, HOLC offers a framework for justifying the consistency of new deduction schemas that address practical user needs.

Ondřej Kunčar, Andrei Popescu

The Essence of Functional Programming on Semantic Data

Semantic data fuels many different applications, but is still lacking proper integration into programming languages. Untyped access is error-prone. Mapping approaches cannot fully capture the conceptualization of semantic data. In this paper, we present $$\lambda _{DL}$$, a typed $$\lambda $$-calculus with constructs for operating on semantic data. This is achieved by the integration of description logics into the $$\lambda $$-calculus for both typing and data access or querying. The language is centered around several key design principles, in particular: (1) the usage of semantic conceptualizations as types, (2) subtype inference for these types, and (3) type-checked query access to the data by both ensuring the satisfiability of queries as well as typing query results precisely. The paper motivates the use of a designated type system for semantic data and provides the theoretical foundation for the integration of description logics as well as the core formal definition of $$\lambda _{DL}$$, including a proof of type safety.

Martin Leinberger, Ralf Lämmel, Steffen Staab

A Classical Sequent Calculus with Dependent Types

Dependent types are a key feature of type systems, typically used in the context of both richly-typed programming languages and proof assistants. Control operators, which are connected with classical logic along the proof-as-program correspondence, are known to misbehave in the presence of dependent types, unless dependencies are restricted to values. We place ourselves in the context of the sequent calculus which has the ability to smoothly provide control under the form of the $$\mu $$ operator dual to the common $$\mathtt {let}$$ operator, as well as to smoothly support abstract machine and continuation-passing style interpretations. We start from the call-by-value version of the $$\lambda \mu \tilde{\mu }$$ language and design a minimal language with a value restriction and a type system that includes a list of explicit dependencies and maintains type safety. We then show how to relax the value restriction and introduce delimited continuations to directly prove the consistency by means of a continuation-passing-style translation. Finally, we relate our calculus to a similar system by Lepigre [19], and present a methodology to transfer properties from this system to our own.

Étienne Miquey

Context-Free Session Type Inference

Some interesting communication protocols can be precisely described only by context-free session types, an extension of conventional session types with a general form of sequential composition. The complex metatheory of context-free session types, however, hinders the definition of corresponding checking and inference algorithms. In this work we address and solve these problems introducing a new type system for context-free session types of which we provide two OCaml embeddings.
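
The standard motivating example (notation approximate) is serializing a binary tree on a channel, which needs general sequential composition $$;$$ and escapes conventional tail-recursive session types: $$\mathit{TreeC} = \mu x.\, \oplus \{\, \mathit{leaf} : \mathsf {skip},\ \mathit{node} : x \,;\, {!}\mathsf {int} \,;\, x \,\}$$, where the $$\mathit{node}$$ branch sends both subtrees and the node label before the protocol continues.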

Luca Padovani

Modular Verification of Higher-Order Functional Programs

Fully automated verification methods for higher-order functional programs have recently been proposed based on higher-order model checking and/or refinement type inference. Most of those methods are, however, whole-program analyses, suffering from the scalability problem. To address the problem, we propose a modular method for fully automated verification of higher-order programs. Our method takes a program consisting of multiple top-level functions as an input, and repeatedly applies procedures for (i) guessing refinement intersection types of each function in a counterexample-guided manner, and (ii) checking that each function indeed has the guessed refinement intersection types, until the whole program is proved/disproved to be safe. To avoid whole-program analysis, we introduce the notion of modular counterexamples, and utilize them in (i), and employ Sato et al.’s technique of reducing refinement type checking to assertion checking in (ii). We have implemented the proposed method as an extension to MoCHi, and confirmed its effectiveness through experiments.

Ryosuke Sato, Naoki Kobayashi

Commutative Semantics for Probabilistic Programming

We show that a measure-based denotational semantics for probabilistic programming is commutative. The idea underlying probabilistic programming languages (Anglican, Church, Hakaru, etc.) is that programs express statistical models as a combination of prior distributions and likelihood of observations. The product of prior and likelihood is an unnormalized posterior distribution, and the inference problem is to find the normalizing constant. One common semantic perspective is thus that a probabilistic program is understood as an unnormalized posterior measure, in the sense of measure theory, and the normalizing constant is the measure of the entire semantic domain. A programming language is said to be commutative if only data flow is meaningful; control flow is irrelevant, and expressions can be re-ordered. It has been unclear whether probabilistic programs are commutative because it is well-known that Fubini-Tonelli theorems for reordering integration fail in general. We show that probabilistic programs are in fact commutative, by characterizing the measures/kernels that arise from programs as ‘s-finite’, i.e. sums of finite measures/kernels. The result is of theoretical interest, but also of practical interest, because program transformations based on commutativity help with symbolic inference and can improve the efficiency of simulation.
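
Concretely, commutativity is the program equation $$\mathtt {let}\;x = t\;\mathtt {in}\;\mathtt {let}\;y = u\;\mathtt {in}\;v \;=\; \mathtt {let}\;y = u\;\mathtt {in}\;\mathtt {let}\;x = t\;\mathtt {in}\;v$$ (for $$x \notin \mathrm {fv}(u)$$, $$y \notin \mathrm {fv}(t)$$), whose semantic content is a Tonelli-style interchange of integrals, $$\int _X\!\int _Y f(x,y)\, \nu (\mathrm {d}y)\, \mu (\mathrm {d}x) = \int _Y\!\int _X f(x,y)\, \mu (\mathrm {d}x)\, \nu (\mathrm {d}y)$$, which may fail for arbitrary measures but holds whenever $$\mu $$ and $$\nu $$ are s-finite.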

Sam Staton

Conditional Dyck-CFL Reachability Analysis for Complete and Efficient Library Summarization

Library summarization is an effective way to accelerate the analysis of client code. However, information about the client is unknown when the library is summarized, preventing complete summarization of the library. An existing approach utilizes tree-adjoining languages (TALs) to provide conditional summaries, enabling the summarization of a library under certain premises. However, the use of TAL imposes several problems, preventing a complete summarization of a library and reducing the efficiency of the analysis. In this paper we propose a new conditional summarization technique based on context-free language (CFL) reachability analysis. Our technique overcomes the above two limitations of TAL, and is more accessible since CFL reachability is much more efficient and widely used than TAL reachability. Furthermore, to overcome the high cost of premise combination, we also provide a technique to confine the number of premises while maintaining full summarization of the library. We empirically compared our approach with the state-of-the-art TAL conditional summarization technique on 12 Java benchmark subjects from the SPECjvm2008 benchmark suite. The results demonstrate that our approach is able to significantly outperform TAL in both efficiency and precision.
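
Recall the Dyck-CFL-reachability setting: interprocedural edges carry call and return parentheses indexed by call sites, and a path witnesses reachability only if its label derives as a matched string, $$M \rightarrow M\, M \mid (_i\, M\, )_i \mid \varepsilon$$. A conditional summary then records, for pairs of library nodes, the reachability facts that hold under stated premises about edges the (unknown) client may contribute.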

Hao Tang, Di Wang, Yingfei Xiong, Lingming Zhang, Xiaoyin Wang, Lu Zhang

A Higher-Order Logic for Concurrent Termination-Preserving Refinement

Compiler correctness proofs for higher-order concurrent languages are difficult: they involve establishing a termination-preserving refinement between a concurrent high-level source language and an implementation that uses low-level shared memory primitives. However, existing logics for proving concurrent refinement either neglect properties such as termination, or only handle first-order state. In this paper, we address these limitations by extending Iris, a recent higher-order concurrent separation logic, with support for reasoning about termination-preserving refinements. To demonstrate the power of these extensions, we prove the correctness of an efficient implementation of a higher-order, session-typed language. To our knowledge, this is the first program logic capable of giving a compiler correctness proof for such a language. The soundness of our extensions and our compiler correctness proof have been mechanized in Coq.

Joseph Tassarotti, Ralf Jung, Robert Harper

Modular Verification of Procedure Equivalence in the Presence of Memory Allocation

For most high-level languages, two procedures are equivalent if they transform a pair of isomorphic stores to isomorphic stores. However, tools for modular checking of such equivalence impose a stronger check where isomorphism is strengthened to equality of stores. This results in the inability to prove many interesting program pairs with recursion and dynamic memory allocation. In this work, we present RIE, a methodology to modularly establish equivalence of procedures in the presence of memory allocation, cyclic data structures and recursion. Our technique addresses the need for finding witnesses to isomorphism with angelic allocation, supports reasoning about equivalent procedure calls when the stores are only locally isomorphic, and reasoning about changes in the order of procedure calls. We have implemented RIE by encoding it in the Boogie program verifier. We describe the encoding and prove its soundness.

Tim Wood, Sophia Drossopoulou, Shuvendu K. Lahiri, Susan Eisenbach

Abstract Specifications for Concurrent Maps

Despite recent advances in reasoning about concurrent data structure libraries, the largest implementations in java.util.concurrent have yet to be verified. The key issue lies in the development of modular specifications, which provide clear logical boundaries between clients and implementations. A solution is to use recent advances in fine-grained concurrency reasoning, in particular the introduction of abstract atomicity to concurrent separation logic reasoning. We present two specifications of concurrent maps, both providing the clear boundaries we seek. We show that these specifications are equivalent, in that they can be built from each other. We show how we can verify client programs, such as a concurrent set and a producer-consumer client. We also give a substantial first proof that the main operations of ConcurrentSkipListMap in java.util.concurrent satisfy the map specification. This work demonstrates that we now have the technology to verify the largest implementations in java.util.concurrent.

Shale Xiong, Pedro da Rocha Pinto, Gian Ntzik, Philippa Gardner

Backmatter
