
2006 | Book

Programming Languages and Systems

15th European Symposium on Programming, ESOP 2006, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2006, Vienna, Austria, March 27-28, 2006. Proceedings


About this book

ETAPS 2006 was the ninth instance of the European Joint Conferences on Theory and Practice of Software. ETAPS is an annual federated conference that was established in 1998 by combining a number of existing and new conferences. This year it comprised five conferences (CC, ESOP, FASE, FOSSACS, TACAS), 18 satellite workshops (ACCAT, AVIS, CMCS, COCV, DCC, EAAI, FESCA, FRCSS, GT-VMT, LDTA, MBT, QAPL, SC, SLAP, SPIN, TERMGRAPH, WITS and WRLA), two tutorials, and seven invited lectures (not including those that were specific to the satellite events). We received over 550 submissions to the five conferences this year, giving an overall acceptance rate of 23%, with acceptance rates below 30% for each conference. Congratulations to all the authors who made it to the final programme! I hope that most of the other authors still found a way of participating in this exciting event and I hope you will continue submitting.

The events that comprise ETAPS address various aspects of the system development process, including specification, design, implementation, analysis and improvement. The languages, methodologies and tools which support these activities are all well within its scope. Different blends of theory and practice are represented, with an inclination towards theory with a practical motivation on the one hand and soundly based practice on the other. Many of the issues involved in software design apply to systems in general, including hardware systems, and the emphasis on software is not intended to be exclusive.

Table of Contents

Frontmatter
Types for Hierarchic Shapes
(Summary)
Abstract
Heap entities tend to contain complex references to each other. To manage this complexity, types which express shapes and hierarchies have been suggested. We survey type systems which describe such hierarchic shapes, how these types are used for reasoning about programs, and applications in concurrent programming.
Sophia Drossopoulou, Dave Clarke, James Noble
Linear Regions Are All You Need
Abstract
The type-and-effects system of the Tofte-Talpin region calculus makes it possible to safely reclaim objects without a garbage collector. However, it requires that regions have last-in-first-out (LIFO) lifetimes following the block structure of the language. We introduce λrgnUL, a core calculus that is powerful enough to encode Tofte-Talpin-like languages, and that eliminates the LIFO restriction. The target language has an extremely simple, substructural type system. To prove the power of the language, we sketch how Tofte-Talpin-style regions, as well as the first-class dynamic regions and unique pointers of the Cyclone programming language can be encoded in λrgnUL.
Matthew Fluet, Greg Morrisett, Amal Ahmed
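Below is a hedged OCaml sketch of the programming model discussed in the abstract above, not the λrgnUL calculus itself: a region owns its allocations and frees them all at once, a scoped combinator recovers the LIFO discipline of Tofte-Talpin, and explicit create/destroy stand in for first-class, non-LIFO regions. All names (with_region, alloc, destroy) are illustrative.

type 'a handle = { mutable payload : 'a option }       (* None once its region is freed *)
type region = { mutable cells : (unit -> unit) list }  (* finalizers to run on free *)

let create () : region = { cells = [] }

let alloc (r : region) (v : 'a) : 'a handle =
  let h = { payload = Some v } in
  r.cells <- (fun () -> h.payload <- None) :: r.cells;
  h

let read (h : 'a handle) : 'a =
  match h.payload with
  | Some v -> v
  | None -> failwith "dangling access into a freed region"

let destroy (r : region) : unit =
  List.iter (fun kill -> kill ()) r.cells;
  r.cells <- []

(* letregion-style, LIFO use: the region cannot outlive the scope of [body]. *)
let with_region (body : region -> 'a) : 'a =
  let r = create () in
  let result = body r in
  destroy r;
  result

let () =
  let sum = with_region (fun r ->
    let x = alloc r 1 and y = alloc r 41 in
    read x + read y)
  in
  Printf.printf "sum = %d\n" sum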
Type-Based Amortised Heap-Space Analysis
Abstract
We present a type system for a compile-time analysis of heap-space requirements of Java style object-oriented programs with explicit deallocation.
Our system is based on an amortised complexity analysis: the data is arbitrarily assigned a potential related to its size and layout; allocations must be “paid for” from this potential. The potential of each input then furnishes an upper bound on the heap space usage for the computation on this input.
We successfully treat inheritance, downcast, update and aliasing. Example applications for the analysis include destination-passing style and doubly-linked lists.
Type inference is explicitly not included; the contribution lies in the system itself and the nontrivial soundness theorem. This extended abstract elides most technical lemmas and proofs, even nontrivial ones, due to space limitations. A full version is available at the authors’ web pages.
Martin Hofmann, Steffen Jost
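As a hedged illustration of the amortised idea in the abstract above (not the paper's type system, which establishes such bounds statically), the OCaml sketch below makes the potential explicit at run time: each allocation must be paid for from a credit store, and giving a list one credit per cell is enough to pay for copying it. The names credits and charge_allocation are illustrative.

(* potential currently available and allocations performed so far *)
let credits = ref 0
let allocated = ref 0

let charge_allocation () =
  if !credits <= 0 then failwith "allocation not paid for by the potential";
  decr credits;
  incr allocated

(* Copying a list allocates one cell per input element. *)
let rec copy = function
  | [] -> []
  | x :: xs -> charge_allocation (); x :: copy xs

let () =
  let input = [ 1; 2; 3; 4 ] in
  credits := List.length input;   (* potential: one credit per input cell *)
  let _ = copy input in
  Printf.printf "allocated %d cells, %d credits left\n" !allocated !credits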
Haskell Is Not Not ML
Abstract
We present a typed calculus IL (“intermediate language”) which supports the embedding of ML-like (strict, eager) and Haskell-like (non-strict, lazy) languages, without favoring either. IL’s type system includes negation (continuations), but not implication (function arrow). Within IL we find that lifted sums and products can be represented as the double negation of their unlifted counterparts. We exhibit a compilation function from IL to AM—an abstract von Neumann machine—which maps values of ordinary and doubly negated types to heap structures resembling those found in practical implementations of languages in the ML and Haskell families. Finally, we show that a small variation in the design of AM allows us to treat any ML value as a Haskell value at runtime without cost, and project a Haskell value onto an ML type with only the cost of a Haskell deepSeq. This suggests that IL and AM may be useful as a compilation and execution model for a new language which combines the best features of strict and non-strict functional programming.
Ben Rudiak-Gould, Alan Mycroft, Simon Peyton Jones
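The following OCaml fragment is a hedged sketch of the observation summarised above, not the paper's IL or its abstract machine: fixing an answer type, a double-negation (continuation-passing) type behaves like a lifted, lazy value; an ML-style strict value embeds into it for free, and running it back down to an answer plays the role of forcing (compare deepSeq). The names ans, lifted, inject and force are illustrative.

type ans = Done of int            (* a fixed, concrete answer type for the demo *)
type 'a nott = 'a -> ans          (* negation: a continuation expecting an 'a *)
type 'a lifted = 'a nott -> ans   (* double negation: a suspended computation *)

(* A strict value is already a lifted value: just apply the continuation. *)
let inject (v : 'a) : 'a lifted = fun k -> k v

(* A genuinely lazy value: the work happens only when the continuation runs. *)
let lazy_sum (xs : int list) : int lifted =
  fun k -> k (List.fold_left ( + ) 0 xs)

(* Running a lifted int down to an answer forces it. *)
let force (m : int lifted) : ans = m (fun n -> Done n)

let () =
  let (Done a) = force (inject 7) in
  let (Done b) = force (lazy_sum [ 1; 2; 3 ]) in
  Printf.printf "%d %d\n" a b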
Coinductive Big-Step Operational Semantics
Abstract
This paper illustrates the use of coinductive definitions and proofs in big-step operational semantics, enabling the latter to describe diverging evaluations in addition to terminating evaluations. We show applications to proofs of type soundness and to proofs of semantic preservation for compilers.
Xavier Leroy
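For concreteness, here is a hedged reconstruction (not copied from the paper) of what a coinductive divergence judgment looks like for call-by-value λ-calculus: alongside the usual inductive evaluation relation t ⇓ v, divergence t ⇑ is the greatest fixed point of rules such as these, which is what lets a big-step semantics describe non-terminating runs.

\[
\frac{t_1 \Uparrow}{t_1\, t_2 \Uparrow}
\qquad
\frac{t_1 \Downarrow v_1 \quad t_2 \Uparrow}{t_1\, t_2 \Uparrow}
\qquad
\frac{t_1 \Downarrow \lambda x.\, t \quad t_2 \Downarrow v_2 \quad t[x \mapsto v_2] \Uparrow}{t_1\, t_2 \Uparrow}
\]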
Step-Indexed Syntactic Logical Relations for Recursive and Quantified Types
Abstract
We present a sound and complete proof technique, based on syntactic logical relations, for showing contextual equivalence of expressions in a λ-calculus with recursive types and impredicative universal and existential types. Our development builds on the step-indexed PER model of recursive types presented by Appel and McAllester. We have discovered that a direct proof of transitivity of that model does not go through, leaving the “PER” status of the model in question. We show how to extend the Appel-McAllester model to obtain a logical relation that we can prove is transitive, as well as sound and complete with respect to contextual equivalence. We then augment this model to support relational reasoning in the presence of quantified types.
Step-indexed relations are indexed not just by types, but also by the number of steps available for future evaluation. This stratification is essential for handling various circularities, from recursive functions, to recursive types, to impredicative polymorphism. The resulting construction is more elementary than existing logical relations which require complex machinery such as domain theory, admissibility, syntactic minimal invariance, and ⊤⊤-closure.
Amal Ahmed
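As a hedged sketch of the stratification mentioned above (the unary case only; the paper develops a binary, relational version), a step-indexed interpretation defines the values that are good for k more steps by recursion on k, which is what breaks the circularity of recursive types:

\[
\begin{aligned}
V_k[\![\mu\alpha.\,\tau]\!] &= \{\, \mathsf{fold}\; v \mid \forall j<k.\; v \in V_j[\![\tau[\mu\alpha.\,\tau/\alpha]]\!] \,\}\\
V_k[\![\tau_1 \to \tau_2]\!] &= \{\, \lambda x.\, e \mid \forall j<k.\; \forall v \in V_j[\![\tau_1]\!].\; e[v/x] \in E_j[\![\tau_2]\!] \,\}\\
E_k[\![\tau]\!] &= \{\, e \mid \forall j<k.\; e \longmapsto^{j} v \;\Longrightarrow\; v \in V_{k-j}[\![\tau]\!] \,\}
\end{aligned}
\]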
Approaches to Polymorphism in Classical Sequent Calculus
Abstract
𝒳 is a relatively new calculus, invented to give a Curry-Howard correspondence with Classical Implicative Sequent Calculus. It is already known to provide a very expressive language; embeddings have been defined of the λ-calculus, Bloo and Rose’s λx, Parigot’s λμ and Curien and Herbelin’s λ̄μμ̃. We investigate various notions of polymorphism in the context of the 𝒳-calculus. In particular, we examine the first-class polymorphism of System F, and the shallow polymorphism of ML. We define analogous systems based on the 𝒳-calculus, and show that these are suitable for embedding the original calculi.
In the case of shallow polymorphism we obtain a more general calculus than ML, while retaining its useful properties. A type-assignment algorithm is defined for this system, which generalises Milner’s 𝒲.
Alexander J. Summers, Steffen van Bakel
Pure Pattern Calculus
Abstract
The pure pattern calculus generalises the pure lambda-calculus by basing computation on pattern-matching instead of beta-reduction. The simplicity and power of the calculus derive from allowing any term to be a pattern. As well as supporting a uniform approach to functions, it supports a uniform approach to data structures which underpins two new forms of polymorphism. Path polymorphism supports searches or queries along all paths through an arbitrary data structure. Pattern polymorphism supports the dynamic creation and evaluation of patterns, so that queries can be customised in reaction to new information about the structures to be encountered. In combination, these features provide a natural account of tasks such as programming with XML paths.
As the variables used in matching can now be eliminated by reduction it is necessary to separate them from the binding variables used to control scope. Then standard techniques suffice to ensure that reduction progresses and to establish confluence of reduction.
Barry Jay, Delia Kesner
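The OCaml sketch below only hints at path polymorphism, the first of the two features described above: data is built uniformly from atoms and application, so one generic traversal queries every node of any structure. It does not capture the calculus's stronger feature of letting arbitrary terms act as patterns, and the term datatype and collect_ints are purely illustrative.

(* Uniform data: constructors are atoms, structures are built by application. *)
type term =
  | Atom of string          (* e.g. "Cons", "Nil", "Node", "Leaf" *)
  | Int of int
  | App of term * term      (* constructor application *)

(* A path-polymorphic query: collect every Int anywhere in a structure,
   whatever its shape, by matching only on the leaf/compound distinction. *)
let rec collect_ints (t : term) : int list =
  match t with
  | Int n -> [ n ]
  | Atom _ -> []
  | App (f, a) -> collect_ints f @ collect_ints a

(* The same traversal works for lists, trees, or any other encoding. *)
let cons x xs = App (App (Atom "Cons", x), xs)
let nil = Atom "Nil"
let node l n r = App (App (App (Atom "Node", l), n), r)
let leaf = Atom "Leaf"

let () =
  let xs = cons (Int 1) (cons (Int 2) nil) in
  let t = node leaf (Int 3) (node leaf (Int 4) leaf) in
  Printf.printf "%d ints in the list, %d in the tree\n"
    (List.length (collect_ints xs))
    (List.length (collect_ints t))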
A Verification Methodology for Model Fields
Abstract
Model fields are specification-only fields that encode abstractions of the concrete state of a data structure. They allow specifications to describe the behavior of object-oriented programs without exposing implementation details.
This paper presents a sound verification methodology for model fields that handles object-oriented features, supports data abstraction, and can be applied to a variety of realistic programs. The key innovation of the methodology is a novel encoding of model fields, where updates of the concrete state do not automatically change the values of model fields. Model fields are updated only by a special pack statement. The methodology guarantees that the specified relation between a model field and the concrete state of an object holds whenever the object is valid, that is, is known to satisfy its invariant.
The methodology also improves on previous work in three significant ways: First, the formalization of model fields prevents unsoundness, even if an interface specification is inconsistent. Second, the methodology fully supports inheritance. Third, the methodology enables modular reasoning about frame properties without using explicit dependencies, which are not handled well by automatic theorem provers.
K. Rustan M. Leino, Peter Müller
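A hedged OCaml rendering of the encoding described above (the methodology targets object-oriented programs and static verification; this sketch only mimics the dynamics): the model field size abstracts the concrete list elems, mutating the concrete state does not update it, and only an explicit pack re-establishes the relation and marks the object valid. All names are illustrative.

type bag = {
  mutable elems : int list;   (* concrete state *)
  mutable size  : int;        (* model field: an abstraction of [elems] *)
  mutable valid : bool;       (* does [size] currently agree with [elems]? *)
}

let make () = { elems = []; size = 0; valid = true }

(* Mutating the concrete state does NOT automatically update the model field. *)
let add (b : bag) (x : int) =
  b.valid <- false;
  b.elems <- x :: b.elems

(* Only [pack] recomputes the model field and re-establishes validity. *)
let pack (b : bag) =
  b.size <- List.length b.elems;
  b.valid <- true

(* Clients may rely on the model field only while the object is valid. *)
let size (b : bag) =
  assert b.valid;
  b.size

let () =
  let b = make () in
  add b 1; add b 2; pack b;
  Printf.printf "size = %d\n" (size b)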
ILC: A Foundation for Automated Reasoning About Pointer Programs
Abstract
This paper shows how to use Girard’s intuitionistic linear logic extended with a classical sublogic to reason about pointer programs. More specifically, first, the paper defines the proof theory for ILC (Intuitionistic Linear logic with Constraints) and shows it is well-defined via a proof of cut elimination. Second, inspired by prior work of O’Hearn, Reynolds, and Yang, the paper explains how to interpret linear logical formulas as descriptions of a program store. Third, this paper defines a simple imperative programming language with mutable references and arrays and gives verification condition generation rules that produce assertions in ILC. Finally, we identify a fragment of ILC, ILC−−, that is both decidable and closed under generation of verification conditions. Since verification condition generation is syntax-directed, we obtain a decidable procedure for checking properties of pointer programs.
Limin Jia, David Walker
Bisimulations for Untyped Imperative Objects
Abstract
We present a sound and complete method for reasoning about contextual equivalence in the untyped, imperative object calculus of Abadi and Cardelli [1]. Our method is based on bisimulations, following the work of Sumii and Pierce [25, 26] and our own [14]. Using our method we were able to prove equivalence in more complex examples than the ones of Gordon, Hankin and Lassen [7] and Gordon and Rees [8]. We can also write bisimulations in closed form in cases where similar bisimulation methods [26] require an inductive specification. To derive our bisimulations we follow the same technique as we did in [14], thus indicating the extensibility of this method.
Vasileios Koutavas, Mitchell Wand
A Typed Assembly Language for Confidentiality
Abstract
Language-based information-flow analysis is promising in protecting data confidentiality. Although much work has been carried out in this area, relatively little has been done for assembly code. Source-level techniques do not generalize straightforwardly to assembly code, because assembly code does not readily present the abstractions about program structure that are crucial to information-flow analysis. Nonetheless, low-level information-flow analysis is desirable, because it yields a small trusted computing base. Furthermore, many (untrusted) applications are distributed in native code; their verification should not be overlooked.
We present a simple yet effective solution for this problem. Our observation is that the missing abstraction in assembly code can be restored using annotations. Following the philosophy of certifying compilation, these annotations are generated by a compiler, used for static validation, and erased before execution. In particular, we propose a type system for low-level information-flow analysis. Our system is compatible with Typed Assembly Language, and models key features including a call stack, memory tuples and first-class code pointers. A noninterference theorem articulates that well-typed programs respect confidentiality. We also present a security-type preserving translation that targets our system, together with its soundness theorem. This illustrates the application of certifying compilation for noninterference.
Dachuan Yu, Nayeem Islam
Flow Locks: Towards a Core Calculus for Dynamic Flow Policies
Abstract
Security is rarely a static notion. What is considered to be confidential or untrusted data varies over time according to changing events and states. The static verification of secure information flow has been a popular theme in recent programming language research, but information flow policies considered are based on multilevel security which presents a static view of security levels. In this paper we introduce a very simple mechanism for specifying dynamic information flow policies, flow locks, which specify conditions under which data may be read by a certain actor. The interface between the policy and the code is via instructions which open and close flow locks. We present a type and effect system for an ML-like language with references which permits the completely static verification of flow lock policies, and prove that the system satisfies a semantic security property generalising noninterference. We show that this simple mechanism can represent a number of recently proposed information flow paradigms for declassification.
Niklas Broberg, David Sands
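As a hedged illustration of the flow-lock mechanism described above (the paper's enforcement is entirely static; this OCaml sketch checks the policy dynamically, and all identifiers are illustrative), a policy lists which actors may read a value and which locks must be open for them to do so; instructions open and close locks.

module S = Set.Make (String)

type policy = (string * string list) list
(* each clause: (actor, locks that must be open for that actor to read) *)

type 'a labelled = { data : 'a; policy : policy }

let open_locks : S.t ref = ref S.empty
let open_lock l = open_locks := S.add l !open_locks
let close_lock l = open_locks := S.remove l !open_locks

(* An actor may read iff some clause for it has all of its locks open. *)
let read (actor : string) (v : 'a labelled) : 'a =
  let permitted =
    List.exists
      (fun (a, locks) ->
        a = actor && List.for_all (fun l -> S.mem l !open_locks) locks)
      v.policy
  in
  if permitted then v.data
  else failwith ("flow to " ^ actor ^ " not permitted by the policy")

let () =
  (* the "paid" lock must be open before the customer may read the value *)
  let secret = { data = 42; policy = [ ("vendor", []); ("customer", [ "paid" ]) ] } in
  ignore (read "vendor" secret);
  open_lock "paid";
  Printf.printf "customer reads %d after the lock is opened\n" (read "customer" secret);
  close_lock "paid"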
A Basic Contract Language for Web Services
Abstract
We design a schema language that includes channel schemas with capabilities of input, output, and input-output. These schemas may describe documents containing references to operations of remote services on the web. In this language, the subschema relation turns out to have an exponential cost. We therefore discuss a language restriction that admits a subschema relation with a polynomial cost.
Samuele Carpineti, Cosimo Laneve
Types for Dynamic Reconfiguration
Abstract
We define a core language combining computational and architectural primitives, and study how static typing may be used to ensure safety properties of component composition and dynamic reconfiguration in object-based systems. We show how our language can model typed entities analogous of configuration scripts, makefiles, components, and component instances, where static typing combined with a dynamic type-directed test on the structure of objects can enforce consistency of compositions and atomicity of reconfiguration.
João Costa Seco, Luís Caires
Size-Change Termination Analysis in k-Bits
Abstract
Size-change termination analysis is a simple and powerful technique successfully applied for a variety of programming paradigms. A main advantage is that termination for size-change graphs is decidable and based on simple linear ranking functions. A main disadvantage is that the size-change termination problem is PSPACE-complete: proving size-change termination may have to consider exponentially many size-change graphs. This paper is concerned with the representation of large sets of size-change graphs. The approach is constraint-based and the novelty is that sets of size-change graphs are represented as disjunctions of size-change constraints. A constraint solver to facilitate size-change termination analysis is obtained by interpreting size-change constraints over a sufficiently large but finite non-negative integer domain. A Boolean k-bit modeling of size-change graphs using binary decision diagrams leads to a concise representation. Experimental evaluation indicates that the 2-bit representation facilitates an efficient implementation which is guaranteed complete for our entire benchmark suite.
Michael Codish, Vitaly Lagoon, Peter Schachte, Peter J. Stuckey
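The paper above is about representing large sets of size-change graphs as constraints; as background, here is a hedged OCaml sketch of what a size-change graph is and how the classical closure-based termination test works (the constraint and BDD machinery of the paper is not shown, and all names are illustrative).

(* An arc records that a source parameter bounds a destination parameter,
   strictly (>) or non-strictly (>=), across one call. *)
type arc = { src : int; dst : int; strict : bool }

module G = Set.Make (struct type t = arc let compare = compare end)

(* Compose two graphs: arcs x->y and y->z yield x->z, strict if either is. *)
let compose (g1 : G.t) (g2 : G.t) : G.t =
  G.fold
    (fun a acc ->
      G.fold
        (fun b acc ->
          if a.dst = b.src
          then G.add { src = a.src; dst = b.dst; strict = a.strict || b.strict } acc
          else acc)
        g2 acc)
    g1 G.empty

(* Close a set of graphs under composition. *)
let rec closure (gs : G.t list) : G.t list =
  let all = List.concat_map (fun g1 -> List.map (compose g1) gs) gs in
  let fresh =
    List.sort_uniq G.compare
      (List.filter (fun g -> not (List.exists (G.equal g) gs)) all)
  in
  if fresh = [] then gs else closure (gs @ fresh)

(* Size-change termination: every idempotent graph in the closure must have
   a strictly decreasing self-arc. *)
let size_change_terminates (gs : G.t list) : bool =
  List.for_all
    (fun g ->
      if G.equal (compose g g) g
      then G.exists (fun a -> a.src = a.dst && a.strict) g
      else true)
    (closure gs)

let () =
  (* one recursive call whose first argument strictly decreases *)
  let g = G.of_list [ { src = 0; dst = 0; strict = true } ] in
  Printf.printf "terminates: %b\n" (size_change_terminates [ g ])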
Path Optimization in Programs and Its Application to Debugging
Abstract
We present and solve a path optimization problem on programs. Given a set of program nodes, called critical nodes, we find a shortest path through the program’s control flow graph that touches the maximum number of these nodes. Control flow graphs over-approximate real program behavior; by adding dataflow analysis to the control flow graph, we narrow down on the program’s actual behavior and discard paths deemed infeasible by the dataflow analysis. We derive an efficient algorithm for path optimization based on weighted pushdown systems. We present an application for path optimization by integrating it with the Cooperative Bug Isolation Project (CBI), a dynamic debugging system. CBI mines instrumentation feedback data to find suspect program behaviors, called bug predictors, that are strongly associated with program failure. Instantiating critical nodes as the nodes containing bug predictors, we solve for a shortest program path that touches these predictors. This path can be used by a programmer to debug his software. We present some early experience on using this hybrid static/dynamic system for debugging.
Akash Lal, Junghee Lim, Marina Polishchuk, Ben Liblit
Inference of User-Defined Type Qualifiers and Qualifier Rules
Abstract
In previous work, we described a new approach to supporting user-defined type qualifiers, which augment existing types to specify and check additional properties of interest. For each qualifier, users define a set of rules that are enforced during static typechecking of programs. Separately, these rules are automatically validated with respect to a user-defined predicate that formalizes the qualifier’s intended run-time invariant. We instantiated this approach as a framework for user-defined type qualifiers in C programs, called Clarity.
In this paper, we extend our earlier approach by resolving two usability issues. First, we show how to perform qualifier inference in the presence of user-defined rules by generating and solving a system of conditional set constraints, thereby relieving users of the burden of explicitly annotating programs. Second, we show how to automatically infer rules that respect a given user-defined invariant, thereby relieving qualifier designers of the burden of manually producing such rules. We have formalized both qualifier and rule inference and proven their correctness. We have also extended Clarity to support qualifier and rule inference, and we illustrate their utility in practice through experiments with several type qualifiers and open-source C programs.
Brian Chin, Shane Markstrum, Todd Millstein, Jens Palsberg
Assertion Checking over Combined Abstraction of Linear Arithmetic and Uninterpreted Functions
Abstract
This paper presents results on the problem of checking equality assertions in programs whose expressions have been abstracted using a combination of linear arithmetic and uninterpreted functions, and whose conditionals are treated as non-deterministic.
We first show that the problem of assertion checking for this combined abstraction is coNP-hard, even for loop-free programs. This result is quite surprising since assertion checking for the individual abstractions of linear arithmetic and uninterpreted functions can be performed efficiently in polynomial time.
Next, we give an assertion checking algorithm for this combined abstraction, thereby proving decidability of this problem despite the underlying lattice having infinite height. Our algorithm is based on an important connection between unification theory and program analysis. Specifically, we show that weakest preconditions can be strengthened by replacing equalities by their unifiers, without losing any precision, during backward analysis of programs.
Sumit Gulwani, Ashish Tiwari
Embedding Dynamic Dataflow in a Call-by-Value Language
Abstract
This paper describes FrTime, an extension of Scheme designed for writing interactive applications. Inspired by functional reactive programming, the language embeds dynamic dataflow within a call-by-value functional language. The essence of the embedding is to make program expressions evaluate to nodes in a dataflow graph. This strategy eases importation of legacy code and permits incremental program construction. We have integrated FrTime with the DrScheme programming environment and have used it to develop several novel applications. We describe FrTime’s design and implementation in detail and present a formal semantics of its evaluation model.
Gregory H. Cooper, Shriram Krishnamurthi
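A hedged OCaml sketch of the embedding strategy described above (FrTime itself extends Scheme and handles far more, including glitch avoidance and dynamic graph reconstruction): expressions evaluate to nodes in a dataflow graph, and updating a source pushes recomputation through its dependents. The names make_source, lift2 and set are illustrative.

type node = {
  mutable value : int;
  mutable compute : unit -> int;    (* how to recompute this node *)
  mutable dependents : node list;   (* nodes to update when this one changes *)
}

let make_source v = { value = v; compute = (fun () -> v); dependents = [] }

let rec propagate (n : node) =
  List.iter
    (fun d ->
      d.value <- d.compute ();
      propagate d)
    n.dependents

(* Lift a binary operation into the graph: the result node re-evaluates
   whenever either input changes. *)
let lift2 (f : int -> int -> int) (a : node) (b : node) : node =
  let n =
    { value = f a.value b.value;
      compute = (fun () -> f a.value b.value);
      dependents = [] }
  in
  a.dependents <- n :: a.dependents;
  b.dependents <- n :: b.dependents;
  n

let set (src : node) (v : int) =
  src.value <- v;
  src.compute <- (fun () -> v);
  propagate src

let () =
  let x = make_source 1 and y = make_source 2 in
  let sum = lift2 ( + ) x y in
  Printf.printf "sum = %d\n" sum.value;   (* 3 *)
  set x 10;
  Printf.printf "sum = %d\n" sum.value    (* 12 *)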
Polymorphic Type Inference for the JNI
Abstract
We present a multi-lingual type inference system for checking type safety of programs that use the Java Native Interface (JNI). The JNI uses specially-formatted strings to represent class and field names as well as method signatures, and so our type system tracks the flow of string constants through the program. Our system embeds string variables in types, and as those variables are resolved to string constants during inference they are replaced with the structured types the constants represent. This restricted form of dependent types allows us to directly assign type signatures to each of the more than 200 functions in the JNI. Moreover, it allows us to infer types for user-defined functions that are parameterized by Java type strings, which we have found to be common practice. Our inference system allows such functions to be treated polymorphically by using instantiation constraints, solved with semi-unification, at function calls. Finally, we have implemented our system and applied it to a small set of benchmarks. Although semi-unification is undecidable, we found our system to be scalable and effective in practice. We discovered 155 errors and 36 cases of suspicious programming practices in our benchmarks.
Michael Furr, Jeffrey S. Foster
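To make the string/type connection above concrete, here is a hedged OCaml sketch (not the paper's inference system, which embeds string variables in types and solves instantiation constraints): JNI method descriptors such as "(ILjava/lang/String;)V" are plain strings, yet they denote structured signatures that a checker can parse and then reason about. The jtype and jmethod representations are illustrative.

type jtype =
  | JInt
  | JBool
  | JVoid
  | JClass of string
  | JArray of jtype

type jmethod = { args : jtype list; ret : jtype }

(* Parse a JNI method descriptor, e.g. "(ILjava/lang/String;)V". *)
let parse_descriptor (s : string) : jmethod =
  let rec jty i =
    match s.[i] with
    | 'I' -> (JInt, i + 1)
    | 'Z' -> (JBool, i + 1)
    | 'V' -> (JVoid, i + 1)
    | '[' -> let (t, j) = jty (i + 1) in (JArray t, j)
    | 'L' ->
        let j = String.index_from s i ';' in
        (JClass (String.sub s (i + 1) (j - i - 1)), j + 1)
    | c -> failwith (Printf.sprintf "unsupported descriptor character %c" c)
  in
  let rec parse_args i acc =
    if s.[i] = ')' then (List.rev acc, i + 1)
    else let (t, j) = jty i in parse_args j (t :: acc)
  in
  if String.length s = 0 || s.[0] <> '(' then failwith "not a method descriptor";
  let (args, i) = parse_args 1 [] in
  let (ret, _) = jty i in
  { args; ret }

let () =
  let m = parse_descriptor "(ILjava/lang/String;)V" in
  Printf.printf "%d argument(s), returns void: %b\n"
    (List.length m.args) (m.ret = JVoid)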
Type Safety of Generics for the .NET Common Language Runtime
Abstract
The Microsoft .NET Common Language Runtime (CLR) offers support for generic types and methods. We develop a mathematical specification for the generics design through a type system and a model for the semantics of a subset of bytecode instructions with generics. We formalize the type-consistency checks performed for the subset by the CLR bytecode verifier. We then prove that adding support for generics maintains the type safety of the CLR.
Nicu G. Fruja
The Weird World of Bi-directional Programming
Abstract
Programs generally work in just one direction, from input to answer. But sometimes we need a program to work in two directions: after calculating an answer, we want to update the answer and then somehow calculate backwards to find a correspondingly updated input. Of course, in general, a given update to the answer may not correspond to a unique update on the input, or to any at all; we need an “update translation policy” that tells us which updates can be translated and how to choose among translations when there are many possibilities. The question of how to determine such a policy has been called the view update problem in the database literature.
Many approaches to this problem have been devised over the years; most have taken existing database query languages (such as SQL) as their starting points and then proposed ways of describing or inferring update policies. More recently, several groups have begun working to design entirely new languages in which programs are inherently bi-directional – i.e., in which every program can be read from left to right as a map from inputs to answers and from right to left as (roughly) a map from updated answers to updated inputs. Moreover, bi-directionality in these languages is treated compositionally: each primitive works in both directions, and the two directions of compound programs can be derived from the two directions of their subcomponents.
This talk charts some interesting regions of the world of bidirectional programming and bi-directional language design, using as a touchstone our experiences at the University of Pennsylvania in the context of the Harmony project, where bi-directional languages – one for transforming trees and another for relational data – play a crucial role in the architecture of a universal data synchronizer.
Benjamin C. Pierce
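A hedged OCaml sketch of the compositional reading sketched in the talk abstract above (not Harmony's actual combinators, whose backward direction also deals with missing or reordered data): a lens packages a forwards get with a backwards put, and composing lenses derives both directions of a compound program from those of its parts. The names lens, compose and fst_lens are illustrative.

type ('s, 'v) lens = {
  get : 's -> 'v;          (* input -> answer *)
  put : 'v -> 's -> 's;    (* updated answer and old input -> updated input *)
}

let compose (l1 : ('a, 'b) lens) (l2 : ('b, 'c) lens) : ('a, 'c) lens = {
  get = (fun a -> l2.get (l1.get a));
  put = (fun c a -> l1.put (l2.put c (l1.get a)) a);
}

(* A lens focusing on the first component of a pair. *)
let fst_lens = {
  get = (fun (a, _) -> a);
  put = (fun a (_, b) -> (a, b));
}

let () =
  let l = compose fst_lens fst_lens in          (* focus on ((x, _), _) *)
  let src = ((1, "keep"), true) in
  Printf.printf "get = %d\n" (l.get src);
  let src' = l.put 99 src in
  Printf.printf "after put: %d\n" (fst (fst src'))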
Backmatter
Metadata
Title
Programming Languages and Systems
edited by
Peter Sestoft
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-33096-7
Print ISBN
978-3-540-33095-0
DOI
https://doi.org/10.1007/11693024