2007 | Book

Programming Languages and Systems

16th European Symposium on Programming, ESOP 2007, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2007, Braga, Portugal, March 24 - April 1, 2007. Proceedings

Table of Contents

Frontmatter

Invited Talk

Techniques for Contextual Equivalence in Higher-Order, Typed Languages

Two phrases in a programming language are said to be contextually equivalent if, roughly speaking, they are interchangeable in any complete program without affecting the observable behaviour of the program. I will discuss precise formalisations of this fundamental notion of semantic equivalence for the case of higher-order, typed (HOT) languages, such as ML and Haskell. How does the structure of a type affect properties of contextual equivalence of expressions of that type? It can be very difficult to answer this question when working directly from the definition of contextual equivalence—mainly because HOT programs can make use of their constituent sub-expressions in dynamically complicated ways. This talk will survey some of the semantic techniques (both denotational and operational) that have been devised for proving properties of HOT contextual equivalence.

Andrew Pitts
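
To make the notion concrete, here is a minimal OCaml sketch of our own (not taken from the talk): the first two phrases are contextually equivalent, while the third shows how effects can break such equivalences.

    (* Contextually equivalent at type int -> int: no complete program can
       distinguish these two phrases by its observable behaviour. *)
    let double1 = fun x -> x + x
    let double2 = fun x -> 2 * x

    (* By contrast, this phrase is NOT contextually equivalent to the ones
       above: a context can observe the shared counter being incremented
       on every call. *)
    let count = ref 0
    let double3 = fun x -> incr count; x + x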

Models and Languages for Web Services

Structured Communication-Centred Programming for Web Services

This paper relates two different paradigms of descriptions of communication behaviour, one focussing on global message flows and another on end-point behaviours, using formal calculi based on session types. The global calculus, which originates from a web service description language (W3C WS-CDL), describes an interaction scenario from a vantage viewpoint; the end-point calculus, an applied typed π-calculus, precisely identifies a local behaviour of each participant. We explore a theory of end-point projection, by which we can map a global description to its end-point counterpart preserving types and dynamics. Three principles of well-structured description and the type structures play a fundamental role in the theory.

Marco Carbone, Kohei Honda, Nobuko Yoshida
CC-Pi: A Constraint-Based Language for Specifying Service Level Agreements

Service Level Agreements are a key issue in Service Oriented Computing. SLA contracts specify client requirements and service guarantees, with emphasis on Quality of Service (cost, performance, availability, etc.). In this work we propose a simple model of contracts for QoS and SLAs that also allows us to study mechanisms for resource allocation and for joining different SLA requirements. Our language combines two basic programming paradigms: name-passing calculi and concurrent constraint programming (cc programming). Specifically, we extend cc programming by adding synchronous communication and by providing a treatment of names in terms of restriction and structural axioms closer to nominal calculi than to variables with existential quantification. In the resulting framework, SLA requirements are constraints that can be generated either by a single party or by the synchronisation of two agents. Moreover, restricting the scope of names allows for local stores of constraints, which may become global as a consequence of synchronisations. Our approach relies on a system of named constraints that equip classical constraints with a suitable algebraic structure providing a richer mechanism of constraint combination. We give reduction-preserving translations of both cc programming and the calculus of explicit fusions.

Maria Grazia Buscemi, Ugo Montanari
A Calculus for Orchestration of Web Services

We introduce COWS (Calculus for Orchestration of Web Services), a new foundational language for SOC whose design has been influenced by WS-BPEL, the de facto standard language for orchestration of web services. COWS combines in an original way a number of ingredients borrowed from well-known process calculi, e.g. asynchronous communication, polyadic synchronization, pattern matching, protection, delimited receiving and killing activities, while remaining different from any of them. Several examples illustrate COWS peculiarities and show its expressiveness both for modelling imperative and orchestration constructs, e.g. web services, flow graphs, fault and compensation handlers, and for encoding other process and orchestration languages.

Alessandro Lapadula, Rosario Pugliese, Francesco Tiezzi
A Concurrent Calculus with Atomic Transactions

The Software Transactional Memory (STM) model is an original approach for controlling concurrent accesses to resources without the need for explicit lock-based synchronization mechanisms. A key feature of STM is to provide a way to group sequences of read and write actions inside atomic blocks, similar to database transactions, whose whole effect should occur atomically.

In this paper, we investigate STM from a process algebra perspective and define an extension of asynchronous CCS with atomic blocks of actions. We show that the addition of atomic transactions results in a very expressive calculus, enough to easily encode other concurrent primitives such as guarded choice and multiset-synchronization (à la join-calculus). The correctness of our encodings is proved using a suitable notion of bisimulation equivalence. The equivalence is then applied to prove interesting “laws of transactions” and to obtain a simple normal form for transactions.

Lucia Acciai, Michele Boreale, Silvano Dal Zilio
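
As a rough, sequential illustration of the STM idea described above (our own sketch, not the paper's calculus), an atomic block can be modelled as a function run against a private copy of the store that is installed only on success:

    (* Toy all-or-nothing execution: run f against a private copy of the
       store; install the copy only if f returns normally. *)
    type store = (string, int) Hashtbl.t

    let atomically (s : store) (f : store -> 'a) : 'a option =
      let copy = Hashtbl.copy s in
      match f copy with
      | v ->
          Hashtbl.reset s;
          Hashtbl.iter (Hashtbl.replace s) copy;  (* commit all writes *)
          Some v
      | exception _ -> None                       (* abort: s untouched *)

    let _ =
      let s : store = Hashtbl.create 8 in
      Hashtbl.replace s "x" 1;
      (* both writes take effect, or neither does *)
      atomically s (fun st ->
        Hashtbl.replace st "x" (Hashtbl.find st "x" - 1);
        Hashtbl.replace st "y" 1)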

Verification

Modal I/O Automata for Interface and Product Line Theories

Alfaro and Henzinger use alternating simulation in a two player game as a refinement for interface automata [1]. We show that interface automata correspond to a subset of the modal transition systems of Larsen and Thomsen [2], on which alternating simulation coincides with modal refinement. As a consequence, a more expressive interface theory may be built by a simple generalization from interface automata to modal automata. We define modal I/O automata, an extension of interface automata with modality. The interface theory that follows can express liveness properties, disallowing trivial implementations of interfaces, a problem that exists for theories built around simulation preorders. To further exemplify the usefulness of modal I/O automata, we construct a behavioral variability theory for product line development.

Kim G. Larsen, Ulrik Nyman, Andrzej Wąsowski
Using History Invariants to Verify Observers

This paper contributes a technique that expands the set of object invariants that one can reason about in modular verification. The technique uses history invariants, two-state invariants that describe the evolution of data values. The technique enables a flexible new way to specify and verify variations of the observer pattern, including iterators. The paper details history invariants and the new kind of object invariants, and proves a soundness theorem.

K. Rustan M. Leino, Wolfram Schulte
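
A hypothetical example of a two-state invariant (ours, not from the paper): a monotone counter whose history invariant lets an observer rely on a cached lower bound forever.

    (* History invariant: across any two successive states,
       old value <= new value. *)
    type counter = { mutable value : int }

    let increment (c : counter) =
      let old_value = c.value in
      c.value <- c.value + 1;
      assert (old_value <= c.value)  (* the two-state invariant *)

    (* An observer that once established lower <= c.value can keep relying
       on that fact, because the history invariant makes value monotone. *)
    let observe (c : counter) = c.value  (* a valid lower bound from now on *)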

Term Rewriting

On the Implementation of Construction Functions for Non-free Concrete Data Types

Many algorithms use concrete data types with some additional invariants. The set of values satisfying the invariants is often a set of representatives for the equivalence classes of some equational theory. For instance, a sorted list is a particular representative wrt commutativity. Theories like associativity, neutral element, idempotence, etc. are also very common. Now, when one wants to combine various invariants, it may be difficult to find the suitable representatives and to efficiently implement the invariants. The preservation of invariants throughout the whole program is even more difficult and error prone. Classically, the programmer solves this problem using a combination of two techniques: the definition of appropriate construction functions for the representatives and the consistent usage of these functions ensured via compiler verifications. The common way of ensuring consistency is to use an abstract data type for the representatives; unfortunately, pattern matching on representatives is lost. A more appealing alternative is to define a concrete data type with private constructors so that both compiler verification and pattern matching on representatives are granted. In this paper, we detail the notion of private data type and study the existence of construction functions. We also describe a prototype, called Moca, that addresses the entire problem of defining concrete data types with invariants: it generates efficient construction functions for the combination of common invariants and builds representatives that belong to a concrete data type with private constructors.

Frédéric Blanqui, Thérèse Hardin, Pierre Weis
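
The idea translates directly to OCaml's private data types; below is a small hand-written sketch (Moca generates such construction functions automatically, though not necessarily in this form): the construction function maintains sortedness, the representative for commutativity, while clients keep pattern matching.

    module SortedList : sig
      type t = private Nil | Cons of int * t
      val nil : t
      val cons : int -> t -> t  (* construction function: insert in order *)
    end = struct
      type t = Nil | Cons of int * t
      let nil = Nil
      let rec cons x l =
        match l with
        | Nil -> Cons (x, Nil)
        | Cons (y, _) when x <= y -> Cons (x, l)
        | Cons (y, rest) -> Cons (y, cons x rest)
    end

    (* Pattern matching on representatives still works... *)
    let rec sum = function
      | SortedList.Nil -> 0
      | SortedList.Cons (x, rest) -> x + sum rest

    (* ...but values can only be built through the construction function,
       so the sortedness invariant holds everywhere. *)
    let l = SortedList.(cons 3 (cons 1 (cons 2 nil)))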
Anti-pattern Matching

It is quite appealing to base the description of pattern-based searches on positive as well as negative conditions. We would like, for example, to specify that we search for white cars that are not station wagons.

To this end, we define the notion of anti-patterns and their semantics along with some of their properties. We then extend the classical notion of matching between patterns and ground terms to matching between anti-patterns and ground terms. We provide a rule-based algorithm that finds the solutions to such problems and prove its correctness and completeness. Anti-pattern matching is by nature different from disunification and, quite interestingly, the anti-pattern matching problem is unitary. Therefore the concept is appropriate to ground a powerful extension to pattern-based programming languages, and we show how this is used to extend the expressiveness and usability of the Tom language.

Claude Kirchner, Radu Kopetz, Pierre-Etienne Moreau
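
A minimal OCaml sketch of the semantics for ground terms (our own illustration, ignoring the variables and bindings that make the general problem subtle): a term matches the anti-pattern !p exactly when it does not match p.

    type term = F of string * term list
    type pat =
      | Any                     (* wildcard *)
      | P of string * pat list  (* constructor pattern *)
      | Not of pat              (* anti-pattern *)

    let rec matches t p =
      match t, p with
      | _, Any -> true
      | _, Not q -> not (matches t q)
      | F (f, ts), P (g, ps) ->
          f = g && List.length ts = List.length ps
          && List.for_all2 matches ts ps

    (* "white cars that are not station wagons" *)
    let white_not_wagon =
      P ("car", [P ("white", []); Not (P ("stationWagon", []))])

    let _ = matches (F ("car", [F ("white", []); F ("sedan", [])]))
              white_not_wagon  (* = true *)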

Language Based Security

A Certified Lightweight Non-interference Java Bytecode Verifier

Non-interference is a semantic condition on programs that guarantees the absence of illicit information flow throughout their execution, and that can be enforced by appropriate information flow type systems. Much previous work on type systems for non-interference has focused on calculi or high-level programming languages, and existing type systems for low-level languages typically omit objects, exceptions, and method calls, and/or do not formally prove the soundness of the type system. We define an information flow type system for a sequential JVM-like language that includes classes, objects, arrays, exceptions and method calls, and prove that it guarantees non-interference. For increased confidence, we have formalized the proof in the proof assistant Coq; an additional benefit of the formalization is that we have extracted from our proof a certified lightweight bytecode verifier for information flow. Our work provides, to the best of our knowledge, the first sound and implemented information flow type system for such an expressive fragment of the JVM.

Gilles Barthe, David Pichardie, Tamara Rezk
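
For readers unfamiliar with the property, here is a classic illustration (our example, not from the paper) of the implicit information flow such type systems reject:

    (* The public result depends on the secret via control flow: running
       the program with different secrets yields observably different
       public outputs, violating non-interference. *)
    let leak (secret : bool) : int =
      let public = ref 0 in
      if secret then public := 1;  (* implicit flow from secret to public *)
      !public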
Controlling the What and Where of Declassification in Language-Based Security

While a rigorous information flow analysis is a key step in obtaining meaningful end-to-end confidentiality guarantees, one must also permit possibilities for declassification. Sabelfeld and Sands categorized the existing approaches to controlling declassification in their overview along four dimensions and according to four prudent principles [16].

In this article, we propose three novel security conditions for controlling the dimensions where and what, and we explain why these conditions constitute improvements over prior approaches. Moreover, we present a type-based security analysis and, as another novelty, prove a soundness result that considers more than one dimension of declassification.

Heiko Mantel, Alexander Reinhard
Cost Analysis of Java Bytecode

Cost analysis of Java bytecode is complicated by its unstructured control flow, the use of an operand stack and its object-oriented programming features (like dynamic dispatching). This paper addresses these problems and develops a generic framework for the automatic cost analysis of sequential Java bytecode. Our method generates cost relations which define at compile-time the cost of programs as a function of their input data size. To the best of our knowledge, this is the first approach to the automatic cost analysis of Java bytecode.

E. Albert, P. Arenas, S. Genaim, G. Puebla, D. Zanardini
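
As a schematic illustration of the notion of cost relation (our example, not one from the paper), a method that traverses a list of length $n$ would give rise to relations of the form $C(0) = k_0$ and $C(n) = k_1 + C(n-1)$ for $n > 0$, whose closed form $C(n) = k_0 + k_1 n$ expresses the cost as a linear function of the input data size.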

Logics and Correctness Proofs

On the Relationship Between Concurrent Separation Logic and Assume-Guarantee Reasoning

We study the relationship between Concurrent Separation Logic (CSL) and the assume-guarantee (A-G) method (a.k.a. the rely-guarantee method). We show in three steps that CSL can be treated as a specialization of the A-G method for well-synchronized concurrent programs. First, we present an A-G based program logic for a low-level language with built-in locking primitives. Then we extend the program logic with explicit separation of “private data” and “shared data”, which provides better memory modularity. Finally, we show that CSL (adapted for the low-level language) can be viewed as a specialization of the extended A-G logic by enforcing the invariant that “shared resources are well-formed outside of critical regions”. This work can also be viewed as a different approach (from Brookes’) to proving the soundness of CSL: our CSL inference rules are proved as lemmas in the A-G based logic, whose soundness is established following the syntactic approach to proving soundness of type systems.

Xinyu Feng, Rodrigo Ferreira, Zhong Shao
Abstract Predicates and Mutable ADTs in Hoare Type Theory

Hoare Type Theory (HTT) combines a dependently typed, higher-order language with monadically-encapsulated, stateful computations. The type system incorporates pre- and post-conditions, in a fashion similar to Hoare and Separation Logic, so that programmers can modularly specify the requirements and effects of computations within types.

This paper extends HTT with quantification over abstract predicates (i.e., higher-order logic), thus embedding into HTT the Extended Calculus of Constructions. When combined with the Hoare-like specifications, abstract predicates provide a powerful way to define and encapsulate the invariants of private state that may be shared by several functions, but is not accessible to their clients. We demonstrate this power by sketching a number of abstract data types that demand ownership of mutable memory, including an idealized custom memory manager.

Aleksandar Nanevski, Amal Ahmed, Greg Morrisett, Lars Birkedal
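
Schematically, and in our own notation rather than the paper's exact syntax, such a type wraps a precondition and a postcondition around a result type: a computation that increments a heap location $l$ might be specified as $\mathsf{inc}\;l : \{l \mapsto n\}\; x{:}\mathsf{unit}\; \{l \mapsto n+1\}$, read: if $l$ holds $n$ beforehand, the computation returns a unit value $x$ and leaves $l$ holding $n+1$.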
Structure of a Proof-Producing Compiler for a Subset of Higher Order Logic

We give an overview of a proof-producing compiler which translates recursion equations, defined in higher order logic, to assembly language. The compiler is implemented and validated with a mix of translation validation and compiler verification techniques. Both the design of the compiler and its mechanical verification are implemented in the same logic framework.

Guodong Li, Scott Owens, Konrad Slind

Static Analysis and Abstract Interpretation I

Modular Shape Analysis for Dynamically Encapsulated Programs

We present a modular static analysis which identifies structural (shape) invariants for a subset of heap-manipulating programs. The subset is defined by means of a non-standard operational semantics which places certain restrictions on aliasing and sharing across modules. More specifically, we assume that live references (i.e., used before set) between subheaps manipulated by different modules form a tree. We develop a conservative static analysis algorithm by abstract interpretation of our non-standard semantics. Our modular algorithm also ensures that the program obeys the above-mentioned restrictions.

N. Rinetzky, A. Poetzsch-Heffter, G. Ramalingam, M. Sagiv, E. Yahav
Static Analysis by Policy Iteration on Relational Domains

We give a new practical algorithm to compute, in finite time, a fixpoint (and often the least fixpoint) of a system of equations in the abstract numerical domains of zones and templates used for static analysis of programs by abstract interpretation. This paper extends previous work on the non-relational domain of intervals to relational domains. The algorithm is based on policy iteration techniques, rather than the Kleene iterations classically used in static analysis, and generates from the system of equations a finite set of simpler systems that we call policies. This set of policies satisfies a selection property which ensures that the minimal fixpoint of the original system of equations is the minimum of the fixpoints of the policies. Computing a fixpoint of a policy is done by linear programming. Through experiments made on a prototype analyzer, compared in particular to analyzers such as LPInv or the Octagon Analyzer, the algorithm is shown to be in general more precise and faster than the usual Kleene iteration combined with widening and narrowing techniques.

Stephane Gaubert, Eric Goubault, Ankur Taly, Sarah Zennou
Computing Procedure Summaries for Interprocedural Analysis

We describe a new technique for computing procedure summaries for performing an interprocedural analysis on programs. Procedure summaries are computed by performing a backward analysis of procedures, but there are two key new features: (i) information is propagated using “generic” assertions (rather than the regular assertions used in intraprocedural analysis); and (ii) unification is used to simplify these generic assertions. We illustrate this general technique by applying it to two abstractions: unary uninterpreted functions and linear arithmetic. In the first case, we get a PTIME algorithm for a special case of the long-standing open problem of interprocedural global value numbering (the special case being that we consider unary uninterpreted functions instead of binary). This also requires developing efficient algorithms for manipulating singleton context-free grammars, and builds on earlier work by Plandowski [13]. In the linear arithmetic case, we get new algorithms for precise interprocedural analysis of linear arithmetic programs with complexity matching that of the best known deterministic algorithm [11].

Sumit Gulwani, Ashish Tiwari
Small Witnesses for Abstract Interpretation-Based Proofs

Abstract interpretation-based proof carrying code uses post-fixpoints of abstract interpretations to witness that a program respects a safety policy. Some witnesses carry more information than needed and are therefore unnecessarily large. We introduce a notion of size of a witness and propose techniques for reducing the size of such certificates. For distributive analyses, we show that a smallest witness exists and we give an iterative algorithm for computing it. For non-distributive analyses we propose a technique for pruning a witness and illustrate this pruning on a relational, polyhedra-based analysis. Finally, only the existence of a witness is needed to assure the code consumer of the safety of a given program. This enables a compression technique for witnesses in which only part of a witness is sent, together with an encoding of the iterative steps necessary to prove that it is part of a post-fixpoint.

Frédéric Besson, Thomas Jensen, Tiphaine Turpin

Static Analysis and Abstract Interpretation II

Interprocedurally Analysing Linear Inequality Relations

In this paper we present an alternative approach to interprocedurally inferring linear inequality relations. We propose an abstraction of the effects of procedures through convex sets of transition matrices. In the absence of conditional branching, this abstraction can be characterised precisely by means of the least solution of a constraint system. In order to handle conditionals, we introduce auxiliary variables and postpone checking them until after the procedure calls. In order to obtain an effective analysis, we approximate convex sets by means of polyhedra. Since our implementation of function composition uses the frame representation of polyhedra, we rely on the subclass of simplices to obtain an efficient implementation. We show that for this abstraction the basic operations can be implemented in polynomial time. First practical experiments indicate that the resulting analysis is quite efficient and provides reasonably precise results.

Helmut Seidl, Andrea Flexeder, Michael Petter
Precise Fixpoint Computation Through Strategy Iteration

We present a practical algorithm for computing least solutions of systems of equations over the integers with addition, multiplication with positive constants, maximum and minimum. The algorithm is based on strategy iteration. Its run-time (w.r.t. the uniform cost measure) is independent of the sizes of occurring numbers. We apply our technique to solve systems of interval equations. In particular, we show how arbitrary intersections as well as full interval multiplication in interval equations can be dealt with precisely.

Thomas Gawlitza, Helmut Seidl

Semantic Theories for Object Oriented Languages

A Complete Guide to the Future

We present the semantics and proof system for an object-oriented language with active objects, asynchronous method calls, and futures. The language, based on Creol, distinguishes itself in that, unlike active object models, it permits more than one thread of control within an object, though, unlike Java, only one thread can be active within an object at a given time and rescheduling occurs only at specific release points. Consequently, reestablishing an object’s monitor invariant is possible at specific well-defined points in the code. The resulting proof system shows that this approach to concurrency is simpler to reason about than, say, Java’s multithreaded concurrency model. From a methodological perspective, we identify constructs which admit a simple proof system and those which require, for example, interference freedom tests.

Frank S. de Boer, Dave Clarke, Einar Broch Johnsen
The Java Memory Model: Operationally, Denotationally, Axiomatically

A semantics for a small fragment of Java capturing the new memory model (JMM) described in the Language Specification is given by combining operational, denotational and axiomatic techniques in a novel semantic framework. The operational steps (specified in the form of SOS) construct denotational models (configuration structures) and are constrained by the axioms of a configuration theory. The semantics is proven correct with respect to the Language Specification and shown to capture many common examples in the JMM literature.

Pietro Cenciarelli, Alexander Knapp, Eleonora Sibilio
Immutable Objects for a Java-Like Language

We extend a Java-like language with immutability specifications and a static type system for verifying immutability. A class modifier immutable specifies that all class instances are immutable objects. Ownership types specify the depth of object states and enforce encapsulation of representation objects. The type system guarantees that the state of immutable objects does not visibly mutate during a program run. Provided immutability-annotated classes and methods are final, this is true even if immutable classes are composed with untrusted classes that follow Java’s type system, but not our immutability type system.

C. Haack, E. Poll, J. Schäfer, A. Schubert

Process Algebraic Techniques

Scalar Outcomes Suffice for Finitary Probabilistic Testing

The question of equivalence has long vexed research in concurrency, leading to many different denotational- and bisimulation-based approaches; a breakthrough occurred with the insight that tests expressed within the concurrent framework itself, based on a special “success action”, yield equivalences that make only inarguable distinctions.

When probability was added, however, it seemed necessary to extend the testing framework beyond a direct probabilistic generalisation in order to remain useful. An attractive possibility was the extension to multiple success actions that yielded vectors of real-valued outcomes.

Here we prove that such vectors are unnecessary when processes are finitary, that is finitely branching and finite-state: single scalar outcomes are just as powerful. Thus for finitary processes we can retain the original, simpler testing approach and its direct connections to other naturally scalar-valued phenomena.

Yuxin Deng, Rob van Glabbeek, Carroll Morgan, Chenyi Zhang
Probabilistic Anonymity Via Coalgebraic Simulations

There is growing concern about anonymity and privacy on the Internet, resulting in much work on the formalization and verification of anonymity. In particular, the importance of the probabilistic aspect of anonymity has recently been stressed by many authors. Among them are Bhargava and Palamidessi, who present a definition of probabilistic anonymity for which, however, proof methods are not yet elaborated. In this paper we introduce a simulation-based proof method for probabilistic anonymity. It is a probabilistic adaptation of the method by Kawabe et al. for non-deterministic anonymity: anonymity of a protocol is proved by finding a forward/backward simulation between certain automata. For the jump from non-determinism to probability we fully exploit a generic, coalgebraic theory of traces and simulations developed by Hasuo and others. In particular, an appropriate notion of probabilistic simulation is obtained by instantiating a generic definition with suitable parameters.

Ichiro Hasuo, Yoshinobu Kawabe
A Fault Tolerance Bisimulation Proof for Consensus (Extended Abstract)

The possibility of partial failure occurring at any stage of computation complicates rigorous formal treatment of distributed algorithms. We propose a methodology for formalising and proving the correctness of distributed algorithms which alleviates this complexity. The methodology uses fault-tolerance bisimulation proof techniques to split the analysis into two phases, namely a failure-free phase and a failure phase, permitting separation of concerns. We design a minimal partial-failure calculus, develop a corresponding bisimulation theory for it, and express a consensus algorithm in the calculus. We then use the consensus example and the calculus theory to demonstrate the benefits of our methodology.

Adrian Francalanza, Matthew Hennessy
A Core Calculus for a Comparative Analysis of Bio-inspired Calculi

The application of process calculi theory to the modeling and analysis of biological phenomena has recently attracted the interest of the scientific community. To this aim several specialized, bio-inspired process calculi have been proposed, but a formal comparison of their expressivity is still lacking. In this paper we present π@, an extension of the π-Calculus with priorities and polyadic synchronisation that turns out to be suitable to act as a core platform for the comparison of other calculi. Here we show π@ at work by providing “reasonable” encodings of the two most popular calculi for modeling membrane interactions, namely BioAmbients and Brane Calculi.

Keywords: pi-calculus, priority, polyadic synchronisation, BioAmbients, Brane Calculi.

Cristian Versari

Applicative Programming

A Rewriting Semantics for Type Inference

When students first learn programming, they often rely on a simple operational model of a program’s behavior to explain how particular features work. Because such models build on their earlier training in algebra, students find them intuitive, even obvious. Students learning type systems, however, have to confront an entirely different notation with a different semantics that many find difficult to understand.

In this work, we begin to build the theoretical underpinnings for treating type checking in a manner like the operational semantics of execution. Intuitively, each term is incrementally rewritten to its type. For example, each basic constant rewrites directly to its type, and each lambda expression rewrites to an arrow type whose domain is the type of the lambda’s formal parameter and whose range is the body of the lambda expression, which, in turn, rewrites to the range type.

George Kuan, David MacQueen, Robert Bruce Findler
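
A toy OCaml sketch of the idea (ours, not the paper's calculus): terms and types share one syntax, and rewriting turns a term into its type, e.g. (fun (x : int) -> x) 3 rewrites to int.

    type exp =
      | Const of int              (* basic constant *)
      | Var of string
      | Lam of string * ty * exp  (* fun (x : ty) -> e *)
      | App of exp * exp
      | Arrow of ty * exp         (* partially rewritten arrow type *)
      | Ty of ty                  (* fully rewritten: a type *)
    and ty = TInt | TArrow of ty * ty

    (* The bound variable is replaced by its type, as the abstract describes. *)
    let rec subst x t = function
      | Var y when y = x -> Ty t
      | Lam (y, a, e) when y <> x -> Lam (y, a, subst x t e)
      | App (e1, e2) -> App (subst x t e1, subst x t e2)
      | Arrow (a, e) -> Arrow (a, subst x t e)
      | e -> e

    (* Rewrites a term all the way to its type (the paper proceeds in
       single small steps instead). *)
    let rec rw = function
      | Const _ -> Ty TInt
      | Lam (x, t, e) -> rw (Arrow (t, subst x t e))
      | Arrow (t, e) ->
          (match rw e with
           | Ty r -> Ty (TArrow (t, r))
           | _ -> failwith "stuck")
      | App (e1, e2) ->
          (match rw e1, rw e2 with
           | Ty (TArrow (d, r)), Ty a when a = d -> Ty r
           | _ -> failwith "type error")
      | Ty t -> Ty t
      | Var _ -> failwith "unbound variable"

    let _ = rw (App (Lam ("x", TInt, Var "x"), Const 3))  (* Ty TInt *)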
Principal Type Schemes for Modular Programs

Two of the most prominent features of ML are its expressive module system and its support for Damas-Milner type inference. However, while the foundations of both these features have been studied extensively, their interaction has never received a proper type-theoretic treatment. One consequence is that both the official Definition and the alternative Harper-Stone semantics of Standard ML are difficult to implement correctly. To bolster this claim, we offer a series of short example programs on which no existing SML typechecker follows the behavior prescribed by either formal definition. It is unclear how to amend the implementations to match the definitions or vice versa. Instead, we propose a way of defining how type inference interacts with modules that is more liberal than any existing definition or implementation of SML and, moreover, admits a provably sound and complete typechecking algorithm via a straightforward generalization of Algorithm $\mathcal{W}$. In addition to being conceptually simple, our solution exhibits a novel hybrid of the Definition and Harper-Stone semantics of SML, and demonstrates the broader relevance of some type-theoretic techniques developed recently in the study of recursive modules.

Derek Dreyer, Matthias Blume
A Consistent Semantics of Self-adjusting Computation

This paper presents a semantics of self-adjusting computation and proves that the semantics is correct and consistent. The semantics integrates change propagation with the classic idea of memoization to enable reuse of computations under mutation to memory. During evaluation, reuse of a computation via memoization triggers a change propagation that adjusts the reused computation to reflect the mutated memory. Since the semantics combines memoization and change-propagation, it involves both non-determinism and mutation. Our consistency theorem states that the non-determinism is not harmful: any two evaluations of the same program starting at the same state yield the same result. Our correctness theorem states that mutation is not harmful: self-adjusting programs are consistent with purely functional programming. We formalized the semantics and its meta-theory in the LF logical framework and machine-checked the proofs in Twelf.

Umut A. Acar, Matthias Blume, Jacob Donham
Multi-language Synchronization

We propose multi-language synchronization, a novel approach to the problem of migrating code from a legacy language (such as C) to a new language. We maintain two parallel versions of every source file, one in the legacy language and one in the new language. Both of these files are fully editable, and the two files are kept automatically in sync so that they have the same semantic meaning and, where possible, have the same comments and layout.

We propose non-deterministic language translation as a means to implement multi-language synchronization. If a file is modified in language A, we produce a new version in language B by translating the file into a non-deterministic description of the many ways it could be encoded in language B and then choosing the version that is closest to the old file in language B.

To demonstrate the feasibility of this approach, we have implemented a translator that can synchronize files written in a straw-man language, Jekyll, with files written in C. Jekyll is a high-level functional programming language that has many of the features found in modern programming languages.

Robert Ennals, David Gay

Types for Systems Properties

Type-Based Analysis of Deadlock for a Concurrent Calculus with Interrupts

The goal of our research project is to establish a type-based method for the verification of certain critical properties (such as deadlock- and race-freedom) of operating system kernels. As operating system kernels make heavy use of threads and interrupts, it is important that the method can properly deal with both features. As a first step towards this goal, we formalize a concurrent calculus equipped with primitives for threads and interrupt handling. We also propose a type system that guarantees deadlock-freedom in the presence of interrupts. To our knowledge, ours is the first type system for deadlock-freedom that can deal with both thread and interrupt primitives.

Kohei Suenaga, Naoki Kobayashi
Type Reconstruction for General Refinement Types

General refinement types

allow types to be refined by predicates written in a general-purpose programming language, and can express function pre- and postconditions and data structure invariants. In this setting, with expressive and possibly verbose types, type reconstruction is particularly valuable, yet typeability is undecidable because it subsumes type checking. Using a generalized notion of type reconstruction, we present the first type reconstruction algorithm for a type system with base types refined by abitrary program terms. Our algorithm is a typeability-

preserving

transformation and defers type checking to a subsequent phase. The algorithm generates and solves a collection of implication constraints over refinement predicates, inferring maximally precise refinement predicates in a largely syntactic manner that is reminiscent of strongest postcondition calculation. Perhaps surprisingly, our notion of type reconstruction is decidable even though type checking is not.

Kenneth Knowles, Cormac Flanagan
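
For illustration (our notation, which need not match the paper's concrete syntax), a refinement type for an absolute-value function refines the base type $\mathsf{Int}$ with an ordinary boolean program term: $\mathit{abs} : (x{:}\mathsf{Int}) \to \{y{:}\mathsf{Int} \mid y \ge 0 \wedge (y = x \vee y = -x)\}$. Type reconstruction infers such refinement predicates when the programmer omits them.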
Dependent Types for Low-Level Programming

In this paper, we describe the key principles of a dependent type system for low-level imperative languages. The major contributions of this work are (1) a sound type system that combines dependent types and mutation for variables and for heap-allocated structures in a more flexible way than before and (2) a technique for automatically inferring dependent types for local variables. We have applied these general principles to design Deputy, a dependent type system for C that allows the user to describe bounded pointers and tagged unions. Deputy has been used to annotate and check a number of real-world C programs.

Jeremy Condit, Matthew Harren, Zachary Anderson, David Gay, George C. Necula
Backmatter
Metadata
Title
Programming Languages and Systems
Edited by
Rocco De Nicola
Copyright Year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-71316-6
Print ISBN
978-3-540-71314-2
DOI
https://doi.org/10.1007/978-3-540-71316-6
