
2013 | Book

Programming Languages and Systems

22nd European Symposium on Programming, ESOP 2013, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2013, Rome, Italy, March 16-24, 2013. Proceedings

Editors: Matthias Felleisen, Philippa Gardner

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 22nd European Symposium on Programming, ESOP 2013, held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2013, which took place in Rome, Italy, in March 2013. The 31 papers, presented together with a full-length invited talk, were carefully reviewed and selected from 120 full submissions. The contributions have been organized according to ten topical sections on programming techniques; programming tools; separation logic; gradual typing; shared-memory concurrency and verification; process calculi; taming concurrency; model checking and verification; weak-memory concurrency and verification; and types, inference, and analysis.

Table of Contents

Frontmatter

Invited Talk

Distributed Electronic Rights in JavaScript

Contracts enable mutually suspicious parties to cooperate safely through the exchange of rights. Smart contracts are programs whose behavior enforces the terms of the contract. This paper shows how such contracts can be specified elegantly and executed safely, given an appropriate distributed, secure, persistent, and ubiquitous computational fabric. JavaScript provides the ubiquity but must be significantly extended to deal with the other aspects. The first part of this paper is a progress report on our efforts to turn JavaScript into this fabric. To demonstrate the suitability of this design, we describe an escrow exchange contract implemented in 42 lines of JavaScript code.

Mark S. Miller, Tom Van Cutsem, Bill Tulloh

Session I: Programming Techniques

The Compiler Forest

We address the problem of writing compilers targeting complex execution environments, such as computer clusters composed of machines with multi-core CPUs. To that end we introduce partial compilers. These compilers can pass sub-programs to several child (partial) compilers, combining the code generated by their children to generate the final target code. We define a set of high-level polymorphic operations manipulating both compilers and partial compilers as first-class values. These mechanisms provide a software architecture for modular compiler construction. This allows the building of a forest of compilers, providing a structured treatment of multistage compilers.

Mihai Budiu, Joel Galenson, Gordon D. Plotkin
Pretty-Big-Step Semantics

In spite of the popularity of small-step semantics, big-step semantics remain widely used by researchers. However, big-step semantics suffer from a serious duplication problem, which appears as soon as the semantics account for exceptions and/or divergence. In particular, many premises need to be copy-pasted across several evaluation rules. This duplication problem, which is particularly visible when scaling up to full-blown languages, results in formal definitions growing far bigger than necessary. Moreover, it leads to unsatisfactory redundancy in proofs. In this paper, we address the problem by introducing pretty-big-step semantics. Pretty-big-step semantics preserve the spirit of big-step semantics, in the sense that terms are directly related to their results, but they eliminate the duplication associated with big-step semantics.

Arthur Charguéraud
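
The duplication problem can be seen in miniature in a toy Python evaluator (a sketch for illustration only; the paper works with inference rules, not code). Expressions are tuples `('num', n)`, `('add', e1, e2)`, `('div', e1, e2)`; results are `('val', n)` or `('err',)`. Every binary rule repeats the same error-propagation premises: the duplication that pretty-big-step semantics factor out by evaluating one intermediate result at a time.

```python
def eval_big(e):
    """Big-step evaluator for a toy arithmetic language with division errors."""
    tag = e[0]
    if tag == 'num':
        return ('val', e[1])
    if tag == 'add':
        r1 = eval_big(e[1])
        if r1[0] == 'err':      # error-propagation premise...
            return r1
        r2 = eval_big(e[2])
        if r2[0] == 'err':      # ...copy-pasted within the rule...
            return r2
        return ('val', r1[1] + r2[1])
    if tag == 'div':
        r1 = eval_big(e[1])
        if r1[0] == 'err':      # ...and duplicated again in every
            return r1           # other binary rule of the language
        r2 = eval_big(e[2])
        if r2[0] == 'err':
            return r2
        if r2[1] == 0:
            return ('err',)
        return ('val', r1[1] // r2[1])

print(eval_big(('add', ('num', 1), ('div', ('num', 6), ('num', 2)))))  # ('val', 4)
print(eval_big(('div', ('num', 1), ('num', 0))))                       # ('err',)
```

Adding a new binary operator, or a new kind of outcome such as divergence, multiplies these copy-pasted checks across all rules.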
Language Constructs for Non-Well-Founded Computation

Recursive functions defined on a coalgebraic datatype C may not converge if there are cycles in the input, that is, if the input object is not well-founded. Even so, there is often a useful solution. Unfortunately, current functional programming languages provide no support for specifying alternative solution methods. In this paper we give numerous examples in which it would be useful to do so: free variables, α-conversion, and substitution in infinitary λ-terms; halting probabilities and expected running times of probabilistic protocols; abstract interpretation; and constructions involving finite automata. In each case the function would diverge under the standard semantics of recursion. We propose programming language constructs that would allow the specification of alternative solutions and methods to compute them.

Jean-Baptiste Jeannin, Dexter Kozen, Alexandra Silva
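
A hypothetical Python sketch of the phenomenon: the halting probability h of "flip a fair coin, halt on heads, retry on tails" satisfies h = 1/2 + 1/2·h. Structural recursion on the cyclic protocol diverges, while iterating the defining equation from 0 (one possible alternative solution method, Kleene iteration toward the least fixpoint) converges.

```python
def halt_prob(step, iters=100):
    """Approximate the least fixpoint of 'step' by iteration from 0.
    A naive recursive definition h() = step(h()) would never terminate
    on this cyclic (non-well-founded) protocol."""
    h = 0.0
    for _ in range(iters):
        h = step(h)
    return h

# h = 1/2 + 1/2 * h: halt on heads, otherwise retry.
p = halt_prob(lambda h: 0.5 + 0.5 * h)
print(round(p, 6))  # 1.0
```

The paper's point is that the choice of solution method (least fixpoint, greatest fixpoint, unique solution of the equations, etc.) should be expressible in the language rather than hand-coded like this.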

Session II: Programming Tools

Laziness by Need

Lazy functional programming has many benefits that strict functional languages can simulate via lazy data constructors. In recognition, ML, Scheme, and other strict functional languages have supported lazy stream programming with delay and force for several decades. Unfortunately, the manual insertion of delay and force can be tedious and error-prone.

We present a semantics-based refactoring that helps strict programmers manage manual lazy programming. The refactoring uses a static analysis to identify where additional delays and forces might be needed to achieve the desired simplification and performance benefits, once the programmer has added the initial lazy data constructors. The paper presents a correctness argument for the underlying transformations and some preliminary experiences with a prototype tool implementation.

Stephen Chang
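
The delay/force discipline can be sketched in Python with memoized thunks (an illustration only; ML and Scheme provide these as primitives):

```python
def delay(compute):
    """Suspend a computation; 'compute' is a zero-argument callable."""
    cell = {'forced': False, 'value': None}
    def promise():
        if not cell['forced']:
            cell['value'] = compute()   # run at most once (memoized)
            cell['forced'] = True
        return cell['value']
    return promise

def force(promise):
    return promise()

# A lazy infinite stream of naturals: (head, delayed tail).
def naturals(n=0):
    return (n, delay(lambda: naturals(n + 1)))

def take(stream, k):
    out = []
    while k > 0:
        head, tail = stream
        out.append(head)
        stream = force(tail)
        k -= 1
    return out

print(take(naturals(), 5))  # [0, 1, 2, 3, 4]
```

Forgetting a `delay` (eagerly building the tail) or a `force` (treating a promise as a value) is exactly the kind of error the paper's refactoring is designed to catch.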
FliPpr: A Prettier Invertible Printing System

When implementing a programming language, we often write a parser and a pretty-printer. However, manually writing both programs is not only tedious but also error-prone; it may happen that a pretty-printed result is not correctly parsed. In this paper, we propose FliPpr, which is a program transformation system that uses program inversion to produce a CFG parser from a pretty-printer. This novel approach has the advantages of fine-grained control over pretty-printing, and easy reuse of existing efficient pretty-printer and parser implementations.

Kazutaka Matsuda, Meng Wang
Slicing-Based Trace Analysis of Rewriting Logic Specifications with iJulienne

We present iJulienne, a trace analyzer for conditional rewriting logic theories that can be used to compute abstract views of Maude executions that help users understand and debug programs. Given a Maude execution trace and a slicing criterion which consists of a set of target symbols occurring in a selected state of the trace, iJulienne is able to track back reverse dependences and causality along the trace in order to incrementally generate highly reduced program and trace slices that reconstruct all and only those pieces of information that are needed to deliver the symbols of interest. iJulienne is also endowed with a trace querying mechanism that increases flexibility and reduction power and allows program runs to be examined at the appropriate level of abstraction.

María Alpuente, Demis Ballis, Francisco Frechina, Julia Sapiña
Why3 — Where Programs Meet Provers

We present Why3, a tool for deductive program verification, and WhyML, its programming and specification language. WhyML is a first-order language with polymorphic types, pattern matching, and inductive predicates. Programs can make use of record types with mutable fields, type invariants, and ghost code. Verification conditions are discharged by Why3 with the help of various existing automated and interactive theorem provers. To keep verification conditions tractable and comprehensible, WhyML imposes a static control of aliases that obviates the use of a memory model. A user can write WhyML programs directly and get correct-by-construction OCaml programs via an automated extraction mechanism. WhyML is also used as an intermediate language for the verification of C, Java, or Ada programs. We demonstrate the benefits of Why3 and WhyML on non-trivial examples of program verification.

Jean-Christophe Filliâtre, Andrei Paskevich

Session III: Separation Logic

Compositional Invariant Checking for Overlaid and Nested Linked Lists

We introduce a fragment of separation logic, called NOLL, for automated reasoning about programs manipulating overlaid and nested linked lists, where overlaid means that the lists share the same set of objects. The distinguishing features of NOLL are: (1) it is parametrized by a set of user-defined predicates specifying nested linked list segments, (2) it includes a “per-field” version of the separating conjunction that allows sharing of object locations but not of record field locations, and (3) it can express sharing constraints between list segments. We prove that checking entailment between two NOLL formulas is co-NP-complete, using a small model property. We also provide an effective procedure for checking entailment in NOLL, which first constructs a Boolean abstraction of the two formulas in order to infer all the implicit constraints, and then checks the existence of a homomorphism between the two formulas, viewed as graphs. We have implemented this procedure and applied it to verification conditions generated from several interesting case studies that manipulate overlaid and nested data structures.

Constantin Enea, Vlad Saveluc, Mihaela Sighireanu
A Discipline for Program Verification Based on Backpointers and Its Use in Observational Disjointness

In the verification of programs that manipulate the heap, logics that emphasize localized reasoning, such as separation logic, are being used extensively. In such logics, state conditions may only refer to parts of the heap that are reachable from the stack. However, the correct implementation of some data structures is based on state conditions that depend on unreachable locations. For example, reference counting depends on the invariant that “the number of nodes pointing to a certain node is equal to its reference counter”. Such conditions are cumbersome or even impossible to formalize in existing variants of separation logic.

In the first part of this paper, we develop a minimal programming discipline that enables the programmer to soundly express backpointer conditions, i.e., state conditions that involve heap objects that point to the reachable part of the heap, such as the above-mentioned reference counting invariant.

In the second part, we demonstrate the expressiveness of our methodology by verifying the implementation of concurrent copy-on-write lists (CCoWL). CCoWL is a data structure with observational disjointness, i.e., its specification pretends that different lists depend on disjoint parts of the heap, so that separation logic reasoning is made easy, while its implementation uses sharing to maximize performance. The CCoWL case study is a very challenging problem, to which we are not aware of any other solution.

Ioannis T. Kassios, Eleftherios Kritikos
Modular Reasoning about Separation of Concurrent Data Structures

In a concurrent setting, the usage protocols of standard separation logic specifications are not refinable by clients, because standard specifications abstract away all information about potential interleavings. This breaks modularity: libraries cannot be verified in isolation, since the appropriate specification depends on how clients intend to use the library.

In this paper we propose a new logic and a new style of specification for thread-safe concurrent data structures. Our specifications allow clients to refine usage protocols and associate ownership of additional resources with instances of these data structures.

Kasper Svendsen, Lars Birkedal, Matthew Parkinson
Ribbon Proofs for Separation Logic

We present ribbon proofs, a diagrammatic system for proving program correctness based on separation logic. Ribbon proofs emphasise the structure of a proof, so are intelligible and pedagogical. Because they contain less redundancy than proof outlines, and allow each proof step to be checked locally, they may be more scalable. Where proof outlines are cumbersome to modify, ribbon proofs can be visually manoeuvred to yield proofs of variant programs. This paper introduces the ribbon proof system, proves its soundness and completeness, and outlines a prototype tool for validating the diagrams in Isabelle.

John Wickerson, Mike Dodds, Matthew Parkinson

Session IV: Gradual Typing

Abstract Refinement Types

We present abstract refinement types, which enable quantification over the refinements of data- and function-types. Our key insight is that we can avail of quantification while preserving SMT-based decidability, simply by encoding refinement parameters as uninterpreted propositions within the refinement logic. We illustrate how this mechanism yields a variety of sophisticated means for reasoning about programs, including: parametric refinements for reasoning with type classes, index-dependent refinements for reasoning about key-value maps, recursive refinements for reasoning about recursive data types, and inductive refinements for reasoning about higher-order traversal routines. We have implemented our approach in a refinement type checker for Haskell and present experiments using our tool to verify correctness invariants of various programs.

Niki Vazou, Patrick M. Rondon, Ranjit Jhala
Constraining Delimited Control with Contracts

Most programming languages provide abstractions for non-local control flow and access to the stack by using continuations, coroutines, or generators. However, their unrestricted use breaks the local reasoning capability of a programmer. Gradual typing exacerbates this problem because typed and untyped code co-exist. We present a contract system capable of protecting code from control flow and stack manipulations by unknown components. We use these contracts to support a gradual type system, and we prove that the resulting system cannot blame typed components for errors.

Asumu Takikawa, T. Stephen Strickland, Sam Tobin-Hochstadt

Session V: Shared-Memory Concurrency and Verification

Verifying Concurrent Memory Reclamation Algorithms with Grace

Memory management is one of the most complex aspects of modern concurrent algorithms, and various techniques proposed for it—such as hazard pointers, read-copy-update and epoch-based reclamation—have proved very challenging for formal reasoning. In this paper, we show that different memory reclamation techniques actually rely on the same implicit synchronisation pattern, not clearly reflected in the code, but only in the form of assertions used to argue its correctness. The pattern is based on the key concept of a grace period, during which a thread can access certain shared memory cells without fear that they get deallocated. We propose a modular reasoning method, motivated by the pattern, that handles all three of the above memory reclamation techniques in a uniform way. By explicating their fundamental core, our method achieves clean and simple proofs, scaling even to realistic implementations of the algorithms without a significant increase in proof complexity. We formalise the method using a combination of separation logic and temporal logic and use it to verify example instantiations of the three approaches to memory reclamation.

Alexey Gotsman, Noam Rinetzky, Hongseok Yang
Interleaving and Lock-Step Semantics for Analysis and Verification of GPU Kernels

We study semantics of GPU kernels — the parallel programs that run on Graphics Processing Units (GPUs). We provide a novel lock-step execution semantics for GPU kernels represented by arbitrary reducible control flow graphs and compare this semantics with a traditional interleaving semantics. We show for terminating kernels that either both semantics compute identical results or both behave erroneously.

The result induces a method that allows GPU kernels with arbitrary reducible control flow graphs to be verified via transformation to a sequential program that employs predicated execution. We implemented this method in the GPUVerify tool and experimentally evaluated it by comparing the tool with the previous version of the tool based on a similar method for structured programs, i.e., where control is organised using if and while statements. The evaluation was based on a set of 163 open source and commercial GPU kernels. Among these kernels, 42 exhibit unstructured control flow which our novel method can handle fully automatically, but the previous method could not. Overall the generality of the new method comes at a modest price: verification across our benchmark set was 2.25 times slower overall; however, the median slow down across all kernels was 0.77, indicating that our novel technique yielded faster analysis in many cases.

Peter Collingbourne, Alastair F. Donaldson, Jeroen Ketema, Shaz Qadeer
Verifying Concurrent Programs against Sequential Specifications

We investigate the algorithmic feasibility of checking whether concurrent implementations of shared-memory objects adhere to their given sequential specifications; sequential consistency, linearizability, and conflict serializability are the canonical variations of this problem. While verifying sequential consistency of systems with unbounded concurrency is known to be undecidable, we demonstrate that conflict serializability and linearizability with fixed linearization points are EXPSPACE-complete, whereas the general linearizability problem is undecidable.

Our (un)decidability proofs, besides bestowing novel theoretical results, also reveal novel program exploration strategies. For instance, we show that every violation of conflict serializability is captured by a conflict cycle whose length is bounded independently of the number of concurrent operations. This suggests an incomplete detection algorithm that remembers only a small subset of conflict edges, which can be made complete by increasing the number of remembered edges to the cycle-length bound. Similarly, our undecidability proof for linearizability suggests an incomplete detection algorithm which limits the number of “barriers” bisecting non-overlapping operations. Our decidability proof of bounded-barrier linearizability is interesting on its own, as it reduces the consideration of all possible operation serializations to numerical constraint solving. The literature seems to confirm that most violations are detectable by considering very few conflict edges or barriers.

Ahmed Bouajjani, Michael Emmi, Constantin Enea, Jad Hamza
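
The bounded-cycle observation can be illustrated with a toy conflict-graph checker in Python (a sketch under simplified assumptions, not the paper's algorithm: operations are triples (transaction, 'r'/'w', item) given in schedule order, and a serializability violation appears as a cycle in the conflict graph):

```python
def conflict_edges(schedule):
    """Edge (t1, t2) when t1 accesses an item before t2 does, on the same
    item, with at least one of the two accesses a write."""
    edges = set()
    for i, (t1, op1, x1) in enumerate(schedule):
        for t2, op2, x2 in schedule[i + 1:]:
            if t1 != t2 and x1 == x2 and 'w' in (op1, op2):
                edges.add((t1, t2))
    return edges

def has_cycle(edges):
    """Standard white/gray/black DFS cycle detection."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def dfs(v):
        color[v] = GRAY
        for w in graph.get(v, ()):
            c = color.get(w, WHITE)
            if c == GRAY or (c == WHITE and dfs(w)):
                return True
        color[v] = BLACK
        return False
    return any(dfs(v) for v in graph if color.get(v, WHITE) == WHITE)

# T1 reads x before T2 writes it, but T2 writes y before T1 reads it: cycle.
bad = [('T1', 'r', 'x'), ('T2', 'w', 'y'), ('T2', 'w', 'x'), ('T1', 'r', 'y')]
good = [('T1', 'r', 'x'), ('T1', 'w', 'x'), ('T2', 'r', 'x')]
print(has_cycle(conflict_edges(bad)), has_cycle(conflict_edges(good)))  # True False
```

The paper's insight is that an incomplete monitor need only remember enough edges to close a cycle of the bounded length, rather than the full graph.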

Session VI: Process Calculi

On Distributability in Process Calculi

We present a novel approach to compare process calculi and their synchronisation mechanisms by using synchronisation patterns and explicitly considering the degree of distributability. For this, we propose a new quality criterion that (1) measures the preservation of distributability and (2) allows us to derive two synchronisation patterns that separate several variants of pi-like calculi. Precisely, we prove that there is no good and distributability-preserving encoding from the synchronous pi-calculus with mixed choice into its fragment with only separate choice, nor from the asynchronous pi-calculus (without choice) into the join-calculus.

Kirstin Peters, Uwe Nestmann, Ursula Goltz
Behavioral Polymorphism and Parametricity in Session-Based Communication

We investigate a notion of behavioral genericity in the context of session type disciplines. To this end, we develop a logically motivated theory of parametric polymorphism, reminiscent of the Girard-Reynolds polymorphic λ-calculus, but cast in the setting of concurrent processes. In our theory, polymorphism accounts for the exchange of abstract communication protocols and dynamic instantiation of heterogeneous interfaces, as opposed to the exchange of data types and dynamic instantiation of individual message types. Our polymorphic session-typed process language satisfies strong forms of type preservation and global progress, is strongly normalizing, and enjoys a relational parametricity principle. Combined, our results confer strong correctness guarantees for communicating systems. In particular, parametricity is key to deriving non-trivial results about internal protocol independence, a concurrent analogue of representation independence, and non-interference properties of modular, distributed systems.

Luís Caires, Jorge A. Pérez, Frank Pfenning, Bernardo Toninho
Higher-Order Processes, Functions, and Sessions: A Monadic Integration

In prior research we have developed a Curry-Howard interpretation of linear sequent calculus as session-typed processes. In this paper we uniformly integrate this computational interpretation in a functional language via a linear contextual monad that isolates session-based concurrency. Monadic values are open process expressions and are first-class objects in the language, thus providing a logical foundation for higher-order session-typed processes. We illustrate how the combined use of the monad and recursive types allows us to cleanly write a rich variety of concurrent programs, including higher-order programs that communicate processes. We show the standard metatheoretic result of type preservation, as well as a global progress theorem, which, to the best of our knowledge, is new in the higher-order session-typed setting.

Bernardo Toninho, Luis Caires, Frank Pfenning
Concurrent Flexible Reversibility

Concurrent reversibility has been studied in different areas, such as biological or dependable distributed systems. However, only “rigid” reversibility has been considered, which allows a system to go back to a past state and restart the exact same computation, possibly leading to divergence. In this paper, we present croll-π, a concurrent calculus featuring flexible reversibility, allowing the specification of alternatives to a computation to be used upon rollback. Alternatives in croll-π are attached to messages. We show the robustness of this mechanism by encoding more complex idioms for specifying flexible reversibility, and we illustrate the benefits of our approach by encoding a calculus of communicating transactions.

Ivan Lanese, Michael Lienhardt, Claudio Antares Mezzina, Alan Schmitt, Jean-Bernard Stefani

Session VII: Taming Concurrency

Structural Lock Correlation with Ownership Types

Concurrent object-oriented programming languages coordinate conflicting memory accesses through locking, which relies on programmer discipline and suffers from a lack of modularity and compile-time support. Programmers typically work with large libraries of code whose locking behaviours are not formally and precisely specified; thus understanding and writing concurrent programs is notoriously difficult and error-prone. This paper proposes structural lock correlation, a new model for establishing structural connections between locks and the memory locations they protect, in an ownership-based type and effect system. Structural lock correlation enables modular specification of locking. It offers a compiler-checkable lock abstraction with an enforceable contract at interface boundaries, leading to improved safety, understandability and composability of concurrent program components.

Yi Lu, John Potter, Jingling Xue
Taming Confusion for Modeling and Implementing Probabilistic Concurrent Systems

In concurrent systems, the choice of the next transition to execute depends both on the timing between the agents that make independent or collaborative interactions available, and on the conflicts (nondeterministic choices) with other transitions. This creates a challenging modeling and implementation problem. When the system also needs to make probabilistic choices, the situation becomes even more complicated. We use the model of Petri nets to demonstrate the modeling and implementation problem. The proposed solution involves adding sequential observers called agents to the Petri net structure. Distributed probabilistic choices are facilitated in the presence of concurrency and nondeterminism by selecting agents that make the choices, while guaranteeing that their view is temporarily stable. We provide a distributed scheduling algorithm for implementing a system that allows distributed probabilistic choice.

Joost-Peter Katoen, Doron Peled

Session VIII: Model Checking and Verification

Model-Checking Higher-Order Programs with Recursive Types

Model checking of higher-order recursion schemes (HORS, for short) has recently been studied as a promising new technique for automated verification of higher-order programs. Previous HORS model checking could, however, deal only with simply-typed programs, so its application was limited to functional programs. To deal with a broader range of programs such as object-oriented programs and multi-threaded programs, we extend HORS model checking to check properties of programs with recursive types. Although the extended model checking problem is undecidable, we develop a sound model-checking algorithm that is relatively complete with respect to a recursive intersection type system and prove its correctness. Preliminary results on the implementation and applications to verification of object-oriented programs and multi-threaded programs are also reported.

Naoki Kobayashi, Atsushi Igarashi
Counterexample-Guided Precondition Inference

The precondition for an assertion inside a procedure is useful for understanding, verifying and debugging programs. As the procedure might be used in multiple calling-contexts within a program, the precondition should be sufficiently general to enable re-use. We present an extension of counterexample-guided abstraction refinement (CEGAR) for automated precondition inference. Starting with over-approximations of both the set of safe states and the set of unsafe states, we iteratively refine them until they become disjoint. The resulting precondition is then necessary and sufficient for the validity of the assertion, which prevents false alarms. We have implemented our approach in a tool called P-Gen. We present experimental results on string- and array-manipulating programs.

Mohamed Nassim Seghir, Daniel Kroening
Information Reuse for Multi-goal Reachability Analyses

It is known that model checkers can generate test inputs as witnesses for reachability specifications (or, equivalently, as counterexamples for safety properties). While this use of model checkers for testing yields a theoretically sound test-generation procedure, it scales poorly for computing complex test suites for large sets of test goals, because each test goal requires an expensive run of the model checker. We represent test goals as automata and exploit relations between automata in order to reuse existing reachability information for the analysis of subsequent test goals. Exploiting the sharing of sub-automata in a series of reachability queries, we achieve considerable performance improvements over the standard approach. We show the practical use of our multi-goal reachability analysis in a predicate-abstraction-based test-input generator for the test-specification language FQL.

Dirk Beyer, Andreas Holzer, Michael Tautschnig, Helmut Veith

Session IX: Weak-Memory Concurrency and Verification

Quarantining Weakness
Compositional Reasoning under Relaxed Memory Models (Extended Abstract)

In sequential computing, every method of an object can be described in isolation via preconditions and postconditions. However, reasoning in a concurrent setting requires a characterization of all possible interactions across method invocations. Herlihy and Wing [1990]’s notion of linearizability simplifies such reasoning by intuitively ensuring that each method invocation “takes effect” between its invocation and response events.

Radha Jagadeesan, Gustavo Petri, Corin Pitcher, James Riely
Software Verification for Weak Memory via Program Transformation

Multiprocessors implement weak memory models, but program verifiers often assume Sequential Consistency (SC), and thus may miss bugs due to weak memory. We propose a sound transformation of the program to verify, enabling SC tools to perform verification w.r.t. weak memory. We present experiments for a broad variety of models (from x86-TSO to Power) and a vast range of verification tools, quantify the additional cost of the transformation and highlight the cases when we can drastically reduce it. Our benchmarks include work-queue management code from PostgreSQL.

Jade Alglave, Daniel Kroening, Vincent Nimal, Michael Tautschnig
Checking and Enforcing Robustness against TSO

We present algorithms for checking and enforcing robustness of concurrent programs against the Total Store Ordering (TSO) memory model. A program is robust if all its TSO computations correspond to computations under the Sequential Consistency (SC) semantics.

We provide a complete characterization of non-robustness in terms of so-called attacks: a restricted form of (harmful) out-of-program-order executions. Then, we show that detecting attacks can be parallelized, and reduces to state reachability queries under the SC semantics in a suitably instrumented program obtained by a linear-size source-to-source translation. Importantly, the construction is valid for an unbounded number of memory addresses and an arbitrary number of parallel threads. It is independent of the data domain and of the size of store buffers in the TSO semantics. In particular, when the data domain is finite and the number of addresses is fixed, we obtain decidability and complexity results for robustness, even for a parametric number of threads.

As a second contribution, we provide an algorithm for computing an optimal set of fences that enforce robustness. We consider two criteria of optimality: minimization of program size and maximization of its performance. The algorithms we define are implemented, and we successfully applied them to analyzing and correcting several concurrent algorithms.

Ahmed Bouajjani, Egor Derevenetc, Roland Meyer
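
The non-SC behaviour that robustness rules out can be sketched with a toy Python simulation of TSO store buffers (an illustration, not the paper's construction): in the classic store-buffering litmus test, buffered writes let both threads read stale values.

```python
# Under TSO, stores sit in a per-thread FIFO buffer before reaching memory,
# so the store-buffering litmus test can yield r1 == r2 == 0, an outcome
# no SC interleaving allows (i.e. a robustness violation).

memory = {'x': 0, 'y': 0}
buffers = {1: [], 2: []}

def store(tid, var, val):
    buffers[tid].append((var, val))        # buffered, not yet globally visible

def load(tid, var):
    for v, val in reversed(buffers[tid]):  # a thread forwards from its own buffer
        if v == var:
            return val
    return memory[var]

def flush(tid):
    for var, val in buffers[tid]:
        memory[var] = val
    buffers[tid].clear()

# Interleaving: both stores stay buffered until after both loads.
store(1, 'x', 1)
store(2, 'y', 1)
r1 = load(1, 'y')
r2 = load(2, 'x')
flush(1); flush(2)
print(r1, r2)  # 0 0  (forbidden under SC)
```

Inserting a fence (a flush) between each thread's store and load eliminates the outcome, which is exactly the repair the paper's fence-insertion algorithm computes optimally.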

Session X: Types, Inference, and Analysis

GADTs Meet Subtyping

While generalized algebraic datatypes (GADTs) are now considered well-understood, adding them to a language with a notion of subtyping comes with a few surprises. What does it mean for a GADT parameter to be covariant? The answer turns out to be quite subtle. It involves fine-grained properties of the subtyping relation that raise interesting design questions. We allow variance annotations in GADT definitions, study their soundness, and present a sound and complete algorithm to check them. Our work may be applied to real-world ML-like languages with explicit subtyping such as OCaml, or to languages with general subtyping constraints.

Gabriel Scherer, Didier Rémy
A Data Driven Approach for Algebraic Loop Invariants

We describe a Guess-and-Check algorithm for computing algebraic equation invariants of the form ∧_i f_i(x_1, …, x_n) = 0, where each f_i is a polynomial over the variables x_1, …, x_n of the program. The “guess” phase is data driven and derives a candidate invariant from data generated from concrete executions of the program. This candidate invariant is subsequently validated in a “check” phase by an off-the-shelf SMT solver. Iterating between the two phases leads to a sound algorithm. Moreover, we are able to prove a bound on the number of decision procedure queries which Guess-and-Check requires to obtain a sound invariant. We show how Guess-and-Check can be extended to generate arbitrary boolean combinations of linear equalities as invariants, which enables us to generate expressive invariants to be consumed by tools that cannot handle non-linear arithmetic. We have evaluated our technique on a number of benchmark programs from recent papers on invariant generation. Our results are encouraging: we are able to efficiently compute algebraic invariants in all cases, with only a few tests.

Rahul Sharma, Saurabh Gupta, Bharath Hariharan, Alex Aiken, Percy Liang, Aditya V. Nori
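
As a rough illustration of the guess phase (not the authors' implementation), one can fit polynomial relations to execution data by computing the nullspace of a monomial data matrix; the check phase below merely retests on fresh runs, standing in for the paper's SMT query.

```python
from fractions import Fraction

def run_program(n):
    """Toy loop summing odd numbers, so y == x*x at every loop head."""
    states, x, y = [], 0, 0
    while x < n:
        states.append((x, y))
        y += 2 * x + 1
        x += 1
    states.append((x, y))
    return states

# Monomials up to degree 2 over the program variables x, y.
MONOMIALS = [
    ('1',   lambda x, y: 1),
    ('x',   lambda x, y: x),
    ('y',   lambda x, y: y),
    ('x*x', lambda x, y: x * x),
    ('x*y', lambda x, y: x * y),
    ('y*y', lambda x, y: y * y),
]

def nullspace(rows):
    """Basis of the nullspace of 'rows', by exact rational Gaussian elimination."""
    m = [[Fraction(v) for v in row] for row in rows]
    ncols = len(m[0])
    pivots, r = [], 0
    for c in range(ncols):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue                       # c is a free column
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][c] for v in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    basis = []
    for free in (c for c in range(ncols) if c not in pivots):
        v = [Fraction(0)] * ncols
        v[free] = Fraction(1)
        for row_i, c in enumerate(pivots):
            v[c] = -m[row_i][free]
        basis.append(v)
    return basis

data = run_program(5)
rows = [[f(x, y) for _, f in MONOMIALS] for x, y in data]
for coeffs in nullspace(rows):                               # guess
    if all(sum(c * f(x, y) for c, (_, f) in zip(coeffs, MONOMIALS)) == 0
           for x, y in run_program(9)):                      # check (fresh data)
        terms = [f'({c})*{name}' for c, (name, _) in zip(coeffs, MONOMIALS) if c != 0]
        print(' + '.join(terms), '= 0')   # (-1)*y + (1)*x*x = 0
```

On this toy loop the nullspace is one-dimensional and recovers the invariant y = x², from execution data alone.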
Automatic Type Inference for Amortised Heap-Space Analysis

We present a fully automatic, sound and modular heap-space analysis for object-oriented programs. In particular, we provide type inference for the system of refinement types RAJA, which checks upper bounds of heap-space usage based on amortised analysis. Until now, the refined RAJA types had to be manually specified. Our type inference increases the usability of the system, as no user-defined annotations are required.

The type inference consists of constraint generation and solving. First, we present a system for generating subtyping and arithmetic constraints based on the RAJA typing rules. Second, we reduce the subtyping constraints to inequalities over infinite trees, which can be solved using an algorithm that we have described in previous work. This paper also enriches the original type system by introducing polymorphic method types, enabling a modular analysis.

Martin Hofmann, Dulma Rodriguez
Backmatter
Metadata
Title
Programming Languages and Systems
Editors
Matthias Felleisen
Philippa Gardner
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-37036-6
Print ISBN
978-3-642-37035-9
DOI
https://doi.org/10.1007/978-3-642-37036-6
