
About this Book

This open access book constitutes the proceedings of the 28th European Symposium on Programming, ESOP 2019, which took place in Prague, Czech Republic, in April 2019, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2019.


Table of Contents

Frontmatter

Program Verification

Frontmatter

Open Access

Time Credits and Time Receipts in Iris

Abstract
We present a machine-checked extension of the program logic Iris with time credits and time receipts, two dual means of reasoning about time. Whereas time credits are used to establish an upper bound on a program’s execution time, time receipts can be used to establish a lower bound. More strikingly, time receipts can be used to prove that certain undesirable events—such as integer overflows—cannot occur until a very long time has elapsed. We present several machine-checked applications of time credits and time receipts, including an application where both concepts are exploited.
Glen Mével, Jacques-Henri Jourdan, François Pottier
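
The overflow claim above can be made concrete with a back-of-the-envelope computation (our own arithmetic, not taken from the paper):

```c
/* Rough arithmetic (ours, not from the paper) behind the claim that some
 * integer overflows cannot occur until a very long time has elapsed:
 * if each program step takes at least ns_per_step nanoseconds, how many
 * years until a `bits`-bit step counter overflows? */
double years_to_overflow(double ns_per_step, int bits) {
    double steps = 1.0;
    for (int i = 0; i < bits; i++)
        steps *= 2.0;                      /* 2^bits steps until overflow */
    return steps * ns_per_step / (365.25 * 24.0 * 3600.0 * 1e9);
}
```

At one step per nanosecond, a 63-bit counter takes roughly 292 years to overflow, which is the sense in which such an overflow "cannot occur until a very long time has elapsed".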

Open Access

Meta-F*: Proof Automation with SMT, Tactics, and Metaprograms

Abstract
We introduce Meta-F*, a tactics and metaprogramming framework for the F* program verifier. The main novelty of Meta-F* is allowing the use of tactics and metaprogramming to discharge assertions not solvable by SMT, or to just simplify them into well-behaved SMT fragments. Plus, Meta-F* can be used to generate verified code automatically.
Meta-F* is implemented as an F* effect, which, given the powerful effect system of F*, heavily increases code reuse and even enables the lightweight verification of metaprograms. Metaprograms can be either interpreted, or compiled to efficient native code that can be dynamically loaded into the F* type-checker and can interoperate with interpreted code. Evaluation on realistic case studies shows that Meta-F* provides substantial gains in proof development, efficiency, and robustness.
Guido Martínez, Danel Ahman, Victor Dumitrescu, Nick Giannarakis, Chris Hawblitzel, Cătălin Hriţcu, Monal Narasimhamurthy, Zoe Paraskevopoulou, Clément Pit-Claudel, Jonathan Protzenko, Tahina Ramananandro, Aseem Rastogi, Nikhil Swamy

Open Access

Semi-automated Reasoning About Non-determinism in C Expressions

Abstract
Research into C verification often ignores that the C standard leaves the evaluation order of expressions unspecified, and assigns undefined behavior to write-write or read-write conflicts in subexpressions—so-called "sequence point violations". These aspects should be accounted for in verification because C compilers exploit them.
We present a verification condition generator (vcgen) that enables one to semi-automatically prove the absence of undefined behavior in a given C program for any evaluation order. The key novelty of our approach is a symbolic execution algorithm that computes a frame at the same time as a postcondition. The frame is used to automatically determine how resources should be distributed among subexpressions.
We prove correctness of our vcgen with respect to a new monadic definitional semantics of a subset of C. This semantics is modular and gives a concise account of non-determinism in C.
We have implemented our vcgen as a tactic in the Coq interactive theorem prover, and have proved correctness of it using a separation logic for the new monadic definitional semantics of a subset of C.
Dan Frumin, Léon Gondelman, Robbert Krebbers
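
For readers unfamiliar with these C subtleties, here is a minimal self-contained illustration (our own, not taken from the paper) of unspecified evaluation order, together with a sequence-point violation shown in a comment:

```c
/* Unspecified evaluation order: the C standard does not fix the order in
 * which the arguments of pair() are evaluated, so `order` may end up as
 * {1,2} or {2,1} depending on the compiler. */
static int order[2];
static int n = 0;

static int side(int v) {
    order[n++] = v;            /* records the order of argument evaluation */
    return v;
}

static int pair(int a, int b) { return 10 * a + b; }

int demo(void) {
    n = 0;
    return pair(side(1), side(2));   /* always 12, however the calls are ordered */
}

/* A sequence-point violation, by contrast, is undefined behavior that a
 * verifier must prove absent for every evaluation order:
 *
 *   int i = 0;
 *   i = i++ + 1;   // two unsequenced writes to i: undefined behavior
 */
```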

Open Access

Safe Deferred Memory Reclamation with Types

Abstract
Memory management in lock-free data structures remains a major challenge in concurrent programming. Design techniques including read-copy-update (RCU) and hazard pointers provide workable solutions, and are widely used to great effect. These techniques rely on the concept of a grace period: nodes that should be freed are not deallocated immediately, and all threads obey a protocol to ensure that the deallocating thread can detect when all possible readers have completed their use of the object. This provides an approach to safe deallocation, but only when these subtle protocols are implemented correctly.
We present a static type system to ensure correct use of RCU memory management: that nodes removed from a data structure are always scheduled for subsequent deallocation, and that nodes are scheduled for deallocation at most once. As part of our soundness proof, we give an abstract semantics for RCU memory management primitives which captures the fundamental properties of RCU. Our type system allows us to give the first proofs of memory safety for RCU linked list and binary search tree implementations without requiring full verification.
Ismail Kuru, Colin S. Gordon
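
The grace-period idea can be sketched in a few lines of C. This is a deliberately naive, single-threaded model (all names and simplifications are ours): removed nodes are parked on a limbo list and freed only once no reader is inside a read-side critical section.

```c
#include <stdlib.h>
#include <stdbool.h>

/* Toy model of RCU-style deferred reclamation (illustrative only). */
#define MAX_READERS 4
#define MAX_LIMBO   16

static bool  in_cs[MAX_READERS];   /* reader r is inside a critical section */
static void *limbo[MAX_LIMBO];     /* nodes awaiting deallocation */
static int   n_limbo = 0;
static int   freed = 0;

void rcu_read_lock(int r)   { in_cs[r] = true;  }
void rcu_read_unlock(int r) { in_cs[r] = false; }

/* Schedule a removed node for deferred deallocation (at most once per node). */
void rcu_defer_free(void *node) { limbo[n_limbo++] = node; }

/* Grace period: reclamation is safe only when no reader may still hold a
 * reference, i.e. no reader is inside a read-side critical section. */
bool rcu_try_reclaim(void) {
    for (int r = 0; r < MAX_READERS; r++)
        if (in_cs[r]) return false;      /* some reader may still see the nodes */
    for (int i = 0; i < n_limbo; i++) { free(limbo[i]); freed++; }
    n_limbo = 0;
    return true;
}
```

The type system in the paper statically enforces the two properties this model makes visible: every removed node reaches the limbo list, and it reaches it at most once.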

Language Design

Frontmatter

Open Access

Codata in Action

Abstract
Computer scientists are well-versed in dealing with data structures. The same cannot be said about their dual: codata. Even though codata is pervasive in category theory, universal algebra, and logic, the use of codata for programming has been mainly relegated to representing infinite objects and processes. Our goal is to demonstrate the benefits of codata as a general-purpose programming abstraction independent of any specific language: eager or lazy, statically or dynamically typed, and functional or object-oriented. While codata is not featured in many programming languages today, we show how codata can be easily adopted and implemented by offering simple inter-compilation techniques between data and codata. We believe codata is a common ground between the functional and object-oriented paradigms; ultimately, we hope to utilize the Curry-Howard isomorphism to further bridge the gap.
Paul Downen, Zachary Sullivan, Zena M. Ariola, Simon Peyton Jones
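
A small illustration of the codata/object connection the abstract alludes to (our own sketch, not code from the paper): an infinite stream defined purely by its observations, encoded in C as a struct of function pointers plus state—essentially an object.

```c
/* Codata is defined by its observations (head, tail) rather than by
 * constructors.  A struct of "method" pointers is one way to encode it. */
typedef struct Stream Stream;
struct Stream {
    int    (*head)(const Stream *s);
    Stream (*tail)(const Stream *s);
    int state;
};

static int    nats_head(const Stream *s) { return s->state; }
static Stream nats_tail(const Stream *s);

/* The stream of natural numbers starting at n. */
static Stream nats_from(int n) {
    Stream s = { nats_head, nats_tail, n };
    return s;
}
static Stream nats_tail(const Stream *s) { return nats_from(s->state + 1); }

/* Observe the nth element by repeated tail observations. */
int stream_nth(Stream s, int n) {
    while (n-- > 0) s = s.tail(&s);
    return s.head(&s);
}
```

Note that the "infinite" stream is never materialized: only the finitely many observations a client actually makes are ever computed.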

Open Access

Composing Bidirectional Programs Monadically

Abstract
Software frequently converts data from one representation to another and vice versa. Naïvely specifying both conversion directions separately is error prone and introduces conceptual duplication. Instead, bidirectional programming techniques allow programs to be written which can be interpreted in both directions. However, these techniques often employ unfamiliar programming idioms via restricted, specialised combinator libraries. Instead, we introduce a framework for composing bidirectional programs monadically, enabling bidirectional programming with familiar abstractions in functional languages such as Haskell. We demonstrate the generality of our approach applied to parsers/printers, lenses, and generators/predicates. We show how to leverage compositionality and equational reasoning for the verification of round-tripping properties for such monadic bidirectional programs.
Li-yao Xia, Dominic Orchard, Meng Wang

Open Access

Counters in Kappa: Semantics, Simulation, and Static Analysis

Abstract
Site-graph rewriting languages, such as Kappa or BNGL, offer parsimonious ways to describe highly combinatorial systems of mechanistic interactions among proteins. These systems may then be simulated efficiently. Yet, modeling mechanisms that involve counting (for instance, the number of phosphorylated sites) require an exponential number of rules in Kappa. In BNGL, updating the set of potential rule applications in the current state of the system comes down to the sub-graph isomorphism problem (which is NP-complete).
In this paper, we extend Kappa to deal both parsimoniously and efficiently with counters. We propose a single push-out semantics for Kappa with counters. We show how to compile Kappa with counters into Kappa without counters (without requiring an exponential number of rules). We design a static analysis, based on affine relationships, to identify the meaning of counters and bound their ranges accordingly.
Pierre Boutillier, Ioana Cristescu, Jérôme Feret

Open Access

One Step at a Time

A Functional Derivation of Small-Step Evaluators from Big-Step Counterparts
Abstract
Big-step and small-step are two popular flavors of operational semantics. Big-step is often seen as a more natural transcription of informal descriptions, as well as being more convenient for some applications such as interpreter generation or optimization verification. Small-step allows reasoning about non-terminating computations, concurrency and interactions. It is also generally preferred for reasoning about type systems. Instead of having to manually specify equivalent semantics in both styles for different applications, it would be useful to choose one and derive the other in a systematic or, preferably, automatic way.
Transformations of small-step semantics into big-step have been investigated in various forms by Danvy and others. However, it appears that a corresponding transformation from big-step to small-step semantics has not had the same attention. We present a fully automated transformation that maps big-step evaluators written in direct style to their small-step counterparts. Many of the steps in the transformation, which include CPS-conversion, defunctionalisation, and various continuation manipulations, mirror those used by Danvy and his co-authors. For many standard languages, including those with either call-by-value or call-by-need and those with state, the transformation produces small-step semantics that are close in style to handwritten ones. We evaluate the applicability and correctness of the approach on 20 languages with a range of features.
Ferdinand Vesely, Kathleen Fisher

Program Semantics

Frontmatter

Open Access

Extended Call-by-Push-Value: Reasoning About Effectful Programs and Evaluation Order

Abstract
Traditionally, reasoning about programs under varying evaluation regimes (call-by-value, call-by-name etc.) was done at the meta-level, treating them as term rewriting systems. Levy’s call-by-push-value (CBPV) calculus provides a more powerful approach for reasoning, by treating CBPV terms as a common intermediate language which captures both call-by-value and call-by-name, and by allowing equational reasoning about changes to evaluation order between or within programs.
We extend CBPV to additionally deal with call-by-need, which is non-trivial because of shared reductions. This allows the equational reasoning to also support call-by-need. As an example, we then prove that call-by-need and call-by-name are equivalent if nontermination is the only side-effect in the source language.
We then show how to incorporate an effect system. This enables us to exploit static knowledge of the potential effects of a given expression to augment equational reasoning; thus a program fragment might be invariant under change of evaluation regime only because of knowledge of its effects.
Dylan McDermott, Alan Mycroft

Open Access

Effectful Normal Form Bisimulation

Abstract
Normal form bisimulation, also known as open bisimulation, is a coinductive technique for higher-order program equivalence in which programs are compared by looking at their essentially infinitary tree-like normal forms, i.e. at their Böhm or Lévy-Longo trees. The technique has been shown to be useful not only when proving metatheorems about λ-calculi and their semantics, but also when looking at concrete examples of terms. In this paper, we show that there is a way to generalise normal form bisimulation to calculi with algebraic effects, à la Plotkin and Power. We show that some mild conditions on monads and relators, which have already been shown to guarantee effectful applicative bisimilarity to be a congruence relation, are enough to prove that the obtained notion of bisimilarity, which we call effectful normal form bisimilarity, is a congruence relation, and thus sound for contextual equivalence. Additionally, contrary to applicative bisimilarity, normal form bisimilarity allows for enhancements of the bisimulation proof method, thus providing a powerful reasoning principle for effectful programming languages.
Ugo Dal Lago, Francesco Gavazzo

Open Access

On the Multi-Language Construction

Abstract
Modern software is no longer developed in a single programming language. Instead, programmers tend to exploit cross-language interoperability mechanisms to combine code stemming from different languages, yielding fully-fledged multi-language programs. Whilst this approach enables developers to benefit from the strengths of each language, it also complicates the semantics of such programs: the resulting multi-language does not match the semantics of any of the combined languages. In this paper, we broaden the boundary-functions approach à la Matthews and Findler to propose an algebraic framework that provides a constructive mathematical notion of a multi-language able to determine its semantics. The aim of this work is to overcome the lack of a formal method (resp., model) to design (resp., represent) a multi-language, regardless of the inherent nature of the underlying languages. We show that our construction ensures the uniqueness of the semantic function (i.e., the multi-language semantics induced by the combined languages) by proving the initiality of the term model (i.e., the abstract syntax of the multi-language) in its category.
Samuele Buro, Isabella Mastroeni

Open Access

Probabilistic Programming Inference via Intensional Semantics

Abstract
We define a new denotational semantics for a first-order probabilistic programming language in terms of probabilistic event structures. This semantics is intensional, meaning that the interpretation of a program contains information about its behaviour throughout execution, rather than a simple distribution on return values. In particular, occurrences of sampling and conditioning are recorded as explicit events, partially ordered according to the data dependencies between the corresponding statements in the program.
This interpretation is adequate: we show that the usual measure-theoretic semantics of a program can be recovered from its event structure representation. Moreover it can be leveraged for MCMC inference: we prove correct a version of single-site Metropolis-Hastings with incremental recomputation, in which the proposal kernel takes into account the semantic information in order to avoid performing some of the redundant sampling.
Simon Castellan, Hugo Paquet

Types

Frontmatter

Open Access

Handling Polymorphic Algebraic Effects

Abstract
Algebraic effects and handlers are a powerful abstraction mechanism to represent and implement control effects. In this work, we study their extension with parametric polymorphism that allows abstracting not only expressions but also effects and handlers. Although polymorphism makes it possible to reuse and reason about effect implementations more effectively, it has long been known that a naive combination of polymorphic effects and let-polymorphism breaks type safety. Although type safety can often be regained by restricting let-bound expressions—e.g., by adopting the value restriction or weak polymorphism—we propose a complementary approach that restricts handlers instead of let-bound expressions. Our key observation is that, informally speaking, a handler is safe if resumptions from the handler do not interfere with each other. To formalize our idea, we define a call-by-value lambda calculus λ_eff^let that supports let-polymorphism and polymorphic algebraic effects and handlers, design a type system that rejects interfering handlers, and prove type safety of our calculus.
Taro Sekiyama, Atsushi Igarashi

Open Access

Distributive Disjoint Polymorphism for Compositional Programming

Abstract
Popular programming techniques such as shallow embeddings of Domain Specific Languages (DSLs), finally tagless or object algebras are built on the principle of compositionality. However, existing programming languages only support simple compositional designs well, and have limited support for more sophisticated ones.
This paper presents the F_i^+ calculus, which supports highly modular and compositional designs that improve on existing techniques. These improvements are due to the combination of three features: disjoint intersection types with a merge operator; parametric (disjoint) polymorphism; and BCD-style distributive subtyping. The main technical challenge is F_i^+’s proof of coherence. A naive adaptation of ideas used in System F’s parametricity to canonicity (the logical relation used by F_i^+ to prove coherence) results in an ill-founded logical relation. To solve the problem our canonicity relation employs a different technique based on immediate substitutions and a restriction to predicative instantiations. Besides coherence, we show several other important meta-theoretical results, such as type-safety, sound and complete algorithmic subtyping, and decidability of the type system. Remarkably, unlike F<:’s bounded polymorphism, disjoint polymorphism in F_i^+ supports decidable type-checking.
Xuan Bi, Ningning Xie, Bruno C. d. S. Oliveira, Tom Schrijvers

Open Access

Types by Need

Abstract
A cornerstone of the theory of the λ-calculus is that intersection types characterise termination properties. They are a flexible tool that can be adapted to various notions of termination, and that also induces adequate denotational models.
Since the seminal work of de Carvalho in 2007, it is known that multi types (i.e. non-idempotent intersection types) refine intersection types with quantitative information and a strong connection to linear logic. Typically, type derivations provide bounds for evaluation lengths, and minimal type derivations provide exact bounds.
De Carvalho studied call-by-name evaluation, and Kesner used his system to show the termination equivalence of call-by-need and call-by-name. De Carvalho’s system, however, cannot provide exact bounds on call-by-need evaluation lengths.
In this paper we develop a new multi type system for call-by-need. Our system produces exact bounds and induces a denotational model of call-by-need, providing the first tight quantitative semantics of call-by-need.
Beniamino Accattoli, Giulio Guerrieri, Maico Leberle

Open Access

Verifiable Certificates for Predicate Subtyping

Abstract
Adding predicate subtyping to higher-order logic yields a very expressive language in which type-checking is undecidable, making the definition of a system of verifiable certificates challenging. This work presents a solution to this issue with a minimal formalization of predicate subtyping, named PVS-Core, together with a system of verifiable certificates for PVS-Core, named PVS-Cert. PVS-Cert is based on the introduction of proof terms and explicit coercions. Its design is similar to that of PTSs with dependent pairs, with the exception of the definition of conversion, which is based on a specific notion of reduction →β*, corresponding to β-reduction combined with the erasure of coercions. The use of this reduction instead of the more standard reduction →βσ makes it possible to establish a simple correspondence between PVS-Core and PVS-Cert. On the other hand, a type-checking algorithm is designed for PVS-Cert, built on proofs of type preservation of →βσ and strong normalization of both →βσ and →β*. Combining these results, PVS-Cert judgements are used as verifiable certificates for predicate subtyping. In addition, the reduction →βσ is used to define a cut elimination procedure for predicate subtyping. This definition provides a new tool to study the properties of predicate subtyping, as illustrated with a proof of consistency.
Frederic Gilbert

Security and Incremental Computation

Frontmatter

Open Access

Robustly Safe Compilation

Abstract
Secure compilers generate compiled code that withstands many target-level attacks such as alteration of control flow, data leaks or memory corruption. Many existing secure compilers are proven to be fully abstract, meaning that they reflect and preserve observational equivalence. Fully abstract compilation is strong and useful but, in certain cases, comes at the cost of requiring expensive runtime constructs in compiled code. These constructs may have no relevance for security, but are needed to accommodate differences between the source and target languages that fully abstract compilation necessarily needs.
As an alternative to fully abstract compilation, this paper explores a different criterion for secure compilation called robustly safe compilation or RSC. Briefly, this criterion means that the compiled code preserves relevant safety properties of the source program against all adversarial contexts interacting with the compiled program. We show that RSC can be proved more easily than fully abstract compilation and also often results in more efficient code. We also develop two illustrative robustly-safe compilers and, through them, illustrate two different proof techniques for establishing that a compiler attains RSC. Based on these, we argue that proving RSC can be simpler than proving full abstraction.
Marco Patrignani, Deepak Garg

Open Access

Compiling Sandboxes: Formally Verified Software Fault Isolation

Abstract
Software Fault Isolation (SFI) is a security-enhancing program transformation for instrumenting an untrusted binary module so that it runs inside a dedicated isolated address space, called a sandbox. To ensure that the untrusted module cannot escape its sandbox, existing approaches such as Google’s Native Client rely on a binary verifier to check that all memory accesses are within the sandbox. Instead of relying on a posteriori verification, we design, implement and prove correct a program instrumentation phase as part of the formally verified compiler CompCert that enforces a sandboxing security property a priori. This eliminates the need for a binary verifier and, instead, leverages the soundness proof of the compiler to prove the security of the sandboxing transformation. The technical contributions are a novel sandboxing transformation that has a well-defined C semantics and which supports arbitrary function pointers, and a formally verified C compiler that implements SFI. Experiments show that our formally verified technique is a competitive way of implementing SFI.
Frédéric Besson, Sandrine Blazy, Alexandre Dang, Thomas Jensen, Pierre Wilke
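
The core sandboxing idea can be conveyed by the classic SFI address mask (a generic sketch of the technique; CompCert's verified transformation is of course more involved):

```c
#include <stdint.h>

/* Every memory access of the untrusted module is rewritten so that its
 * address is forced into the sandbox range [SB_BASE, SB_BASE + SB_SIZE).
 * With a power-of-two size this is a single and/or pair per access. */
#define SB_BASE 0x10000000u
#define SB_SIZE 0x00100000u   /* 1 MiB sandbox */

static inline uint32_t sfi_mask(uint32_t addr) {
    return SB_BASE | (addr & (SB_SIZE - 1));   /* clamp into the sandbox */
}
```

Addresses already inside the sandbox pass through unchanged, while any address outside it is wrapped back in, so the untrusted module cannot escape regardless of what pointers it computes.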

Open Access

Fixing Incremental Computation

Derivatives of Fixpoints, and the Recursive Semantics of Datalog
Abstract
Incremental computation has recently been studied using the concepts of change structures and derivatives of programs, where the derivative of a function allows updating the output of the function based on a change to its input. We generalise change structures to change actions, and study their algebraic properties. We develop change actions for common structures in computer science, including directed-complete partial orders and Boolean algebras. We then show how to compute derivatives of fixpoints. This allows us to perform incremental evaluation and maintenance of recursively defined functions, with particular application to generalised Datalog programs. Moreover, unlike previous results, our techniques are modular in that they are easy to apply both to variants of Datalog and to other programming languages.
Mario Alvarez-Picallo, Alex Eyers-Taylor, Michael Peyton Jones, C.-H. Luke Ong

Open Access

Incremental λ-Calculus in Cache-Transfer Style

Static Memoization by Program Transformation
Abstract
Incremental computation requires propagating changes and reusing intermediate results of base computations. Derivatives, as produced by static differentiation [7], propagate changes but do not reuse intermediate results, leading to wasteful recomputation. As a solution, we introduce conversion to Cache-Transfer Style (CTS), an additional program transformation producing purely incremental functional programs that create and maintain nested tuples of intermediate results. To prove CTS conversion correct, we extend the correctness proof of static differentiation from STLC to the untyped λ-calculus via step-indexed logical relations, and prove the additional transformation sound via simulation theorems.
To show ILC-based languages can improve performance relative to from-scratch recomputation, and that CTS conversion can extend its applicability, we perform an initial performance case study. We provide derivatives of primitives for operations on collections and incrementalize selected example programs using those primitives, confirming expected asymptotic speedups.
Paolo G. Giarrusso, Yann Régis-Gianas, Philipp Schuster
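
A toy example of cache-transfer style (our own illustration, with invented names): the base function returns its result together with a cache of intermediate state, and the derivative updates the result from a change without recomputing from scratch.

```c
/* Base function in cache-transfer style: total() returns its result and a
 * cache (here, the running sum itself). */
typedef struct { int result; int cache; } TotalOut;

TotalOut total(const int *xs, int n) {
    int s = 0;
    for (int i = 0; i < n; i++) s += xs[i];
    TotalOut o = { s, s };        /* the cache carries the intermediate sum */
    return o;
}

/* Derivative: given the cache and a change dx to one element, produce the
 * new result and new cache in O(1) instead of O(n). */
TotalOut dtotal(int cache, int dx) {
    TotalOut o = { cache + dx, cache + dx };
    return o;
}
```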

Concurrency and Distribution

Frontmatter

Open Access

Asynchronous Timed Session Types

From Duality to Time-Sensitive Processes
Abstract
We present a behavioural typing system for a higher-order timed calculus using session types to model timed protocols. Behavioural typing ensures that processes in the calculus perform actions in the time-windows prescribed by their protocols. We introduce duality and subtyping for timed asynchronous session types. Our notion of duality allows typing a larger class of processes with respect to previous proposals. Subtyping is critical for the precision of our typing system, especially in the presence of session delegation. The composition of dual (timed asynchronous) types enjoys progress when using an urgent receive semantics, in which receive actions are executed as soon as the expected message is available. Our calculus increases the modelling power of extant calculi on timed sessions, adding a blocking receive primitive with timeout and a primitive that consumes an arbitrary amount of time in a given range.
Laura Bocchi, Maurizio Murgia, Vasco Thudichum Vasconcelos, Nobuko Yoshida

Open Access

Manifest Deadlock-Freedom for Shared Session Types

Abstract
Shared session types generalize the Curry-Howard correspondence between intuitionistic linear logic and the session-typed π-calculus with adjoint modalities that mediate between linear and shared session types, giving rise to a programming model where shared channels must be used according to a locking discipline of acquire-release. While this generalization greatly increases the range of programs that can be written, the gain in expressiveness comes at the cost of deadlock-freedom, a property which holds for many linear session type systems. In this paper, we develop a type system for logically-shared sessions in which types capture not only the interactive behavior of processes but also constrain the order of resources (i.e., shared processes) they may acquire. This type-level information is then used to rule out cyclic dependencies among acquires and synchronization points, resulting in a system that ensures deadlock-free communication for well-typed processes in the presence of shared sessions, higher-order channel passing, and recursive processes. We illustrate our approach on a series of examples, showing that it rules out deadlocks in circular networks of both shared and linear recursive processes, while still being permissive enough to type concurrent implementations of shared imperative data structures as processes.
Stephanie Balzer, Bernardo Toninho, Frank Pfenning

Open Access

A Categorical Model of an i/o-Typed π-Calculus

Abstract
This paper introduces a new categorical structure that is a model of a variant of the i/o-typed π-calculus, in the same way that a cartesian closed category is a model of the λ-calculus. To the best of our knowledge, no categorical model has been given for the i/o-typed π-calculus, in contrast to session-typed calculi, to which corresponding logic and categorical structure were given. The categorical structure introduced in this paper has a simple definition, combining two well-known structures, namely, closed Freyd category and compact closed category. The former is a model of effectful computation in a general setting, and the latter describes connections via channels, which cause the effect we focus on in this paper. To demonstrate the relevance of the categorical model, we show by a semantic consideration that the π-calculus is equivalent to a core calculus of Concurrent ML.
Ken Sakayori, Takeshi Tsukada

Open Access

A Process Algebra for Link Layer Protocols

Abstract
We propose a process algebra for link layer protocols, featuring a unique mechanism for modelling frame collisions. We also formalise suitable liveness properties for link layer protocols specified in this framework. To show applicability we model and analyse two versions of the Carrier-Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol. Our analysis confirms the hidden station problem for the version without virtual carrier sensing. However, we show that the version with virtual carrier sensing not only overcomes this problem, but also the exposed station problem with probability 1. Yet the protocol cannot guarantee packet delivery, not even with probability 1.
Rob van Glabbeek, Peter Höfner, Michael Markl

Program Analysis and Automated Verification

Frontmatter

Open Access

Data Races and Static Analysis for Interrupt-Driven Kernels

Abstract
We consider a class of interrupt-driven programs that model the kernel API libraries of some popular real-time embedded operating systems and the synchronization mechanisms they use. We define a natural notion of data races and a happens-before ordering for such programs. The key insight is the notion of disjoint blocks to define the synchronizes-with relation. This notion also suggests an efficient and effective lockset based analysis for race detection. It also enables us to define efficient “sync-CFG” based static analyses for such programs, which exploit data race freedom. We use this theory to carry out static analysis on the FreeRTOS kernel library to detect races and to infer simple relational invariants on key kernel variables and data-structures.
Nikita Chopra, Rekha Pai, Deepak D’Souza
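
The lockset idea that the analysis builds on can be sketched generically (an Eraser-style refinement in C with names of our choosing; the paper's analysis for interrupt-driven programs is more refined than this):

```c
#include <stdint.h>
#include <stdbool.h>

/* Locks are bits in a word.  A variable's candidate set is the intersection
 * of the lock sets held at each access; an empty set signals a potential
 * race, since no single lock consistently protects the variable. */
typedef struct { uint32_t candidates; bool accessed; } VarState;

/* Record an access to the variable under the given set of held locks;
 * returns true if a potential race is detected. */
bool lockset_access(VarState *v, uint32_t held) {
    v->candidates = v->accessed ? (v->candidates & held) : held;
    v->accessed = true;
    return v->candidates == 0;
}
```

For example, accessing a variable first under lock A, then under locks A and B, keeps A as a candidate; a later access under B alone empties the set and flags a potential race.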

Open Access

An Abstract Domain for Trees with Numeric Relations

Abstract
We present an abstract domain able to infer invariants on programs manipulating trees. Trees considered in the article are defined over a finite alphabet and can contain unbounded numeric values at their leaves. Our domain can infer the possible shapes of the tree values of each variable and find numeric relations between the values at the leaves as well as the size and depth of the tree values of different variables. The abstract domain is described as a product of (1) a symbolic domain based on a tree automata representation and (2) a numerical domain lifted, for the occasion, to describe numerical maps with potentially infinite and heterogeneous definition sets. In addition to abstract set operations and widening we define concrete and abstract transformers on these environments. We present possible applications, such as the ability to describe memory zones, or track symbolic equalities between program variables. We implemented our domain in a static analysis platform and present preliminary results analyzing a tree-manipulating toy language.
Matthieu Journault, Antoine Miné, Abdelraouf Ouadjaout

Open Access

A Static Higher-Order Dependency Pair Framework

Abstract
We revisit the static dependency pair method for proving termination of higher-order term rewriting and extend it in a number of ways: (1) We introduce a new rewrite formalism designed for general applicability in termination proving of higher-order rewriting, Algebraic Functional Systems with Meta-variables. (2) We provide a syntactically checkable soundness criterion to make the method applicable to a large class of rewrite systems. (3) We propose a modular dependency pair framework for this higher-order setting. (4) We introduce a fine-grained notion of formative and computable chains to render the framework more powerful. (5) We formulate several existing and new termination proving techniques in the form of processors within our framework.
The framework has been implemented in the (fully automatic) higher-order termination tool WANDA.
Carsten Fuhs, Cynthia Kop

Open Access

Coinduction in Uniform: Foundations for Corecursive Proof Search with Horn Clauses

Abstract
We establish proof-theoretic, constructive and coalgebraic foundations for proof search in coinductive Horn clause theories. Operational semantics of coinductive Horn clause resolution is cast in terms of coinductive uniform proofs; its constructive content is exposed via soundness relative to an intuitionistic first-order logic with recursion controlled by the later modality; and soundness of both proof systems is proven relative to a novel coalgebraic description of complete Herbrand models.
Henning Basold, Ekaterina Komendantskaya, Yue Li

Backmatter
