
2006 | Book

Programming Languages and Systems

4th Asian Symposium, APLAS 2006, Sydney, Australia, November 8-10, 2006. Proceedings


About this book

This volume contains the proceedings of the 4th Asian Symposium on Programming Languages and Systems (APLAS 2006), which took place in Sydney, Australia, November 8-10, 2006. The symposium was sponsored by the Asian Association for Foundation of Software.

In response to the call for papers, 70 full submissions were received. Each submission was reviewed by at least three Program Committee members with the help of external reviewers. The Program Committee meeting was conducted electronically over a 2-week period. After careful discussion, the Program Committee selected 22 papers. I would like to sincerely thank all the members of the APLAS 2006 Program Committee for their excellent job, and all the external reviewers for their invaluable contributions. The submission and review process was managed using the CyberChair system.

In addition to the 22 contributed papers, the symposium also included two invited talks, by Jens Palsberg (UCLA, Los Angeles, USA) and Peter Stuckey (University of Melbourne, Melbourne, Australia), and one tutorial by Matthew Flatt (University of Utah, USA).

Many people helped to promote APLAS as a high-quality forum in Asia serving programming language researchers worldwide. Following a series of well-attended workshops held in Singapore (2000), Daejeon (2001), and Shanghai (2002), the first three formal symposiums were held in Beijing (2003), Taipei (2004), and Tsukuba (2005).

Table of Contents

Frontmatter

Invited Talk 1

Type Processing by Constraint Reasoning
Abstract
Herbrand constraint solving, or unification, has long been understood as an efficient mechanism for type checking and inference for programs using Hindley/Milner types. If we step back from the particular solving mechanisms used for Hindley/Milner types and understand type operations in terms of constraints, we not only give a basis for handling Hindley/Milner extensions, but also gain insight into type reasoning even on pure Hindley/Milner types, particularly for type errors. In this paper we consider typing problems as constraint problems and show which constraint algorithms are required to support various typing questions. We use a lightweight constraint reasoning formalism, Constraint Handling Rules, to generate suitable algorithms for many popular extensions to Hindley/Milner types. The algorithms we discuss are all implemented as part of the freely available Chameleon system.
Peter J. Stuckey, Martin Sulzmann, Jeremy Wazny
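The talk's starting point, Herbrand unification, can be sketched in a few lines. The following Python fragment is an illustrative toy, not part of the Chameleon system; the type representation and all names in it are our own assumptions.

```python
# Toy Herbrand unification over simple type terms (illustrative only).
# Types are tuples: ('var', name) or ('con', name, [arg_types]).

def walk(t, subst):
    """Follow substitution bindings until t is no longer a bound variable."""
    while t[0] == 'var' and t[1] in subst:
        t = subst[t[1]]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on a type error."""
    subst = dict(subst or {})
    stack = [(a, b)]
    while stack:
        s, t = stack.pop()
        s, t = walk(s, subst), walk(t, subst)
        if s == t:
            continue
        if s[0] == 'var':
            subst[s[1]] = t
        elif t[0] == 'var':
            subst[t[1]] = s
        elif s[1] == t[1] and len(s[2]) == len(t[2]):
            stack.extend(zip(s[2], t[2]))
        else:
            return None  # constructor clash: a type error
    return subst

# Unifying  a -> Int  with  Bool -> b  binds a := Bool and b := Int.
arrow = lambda d, r: ('con', '->', [d, r])
INT, BOOL = ('con', 'Int', []), ('con', 'Bool', [])
s = unify(arrow(('var', 'a'), INT), arrow(BOOL, ('var', 'b')))
```

Viewing each equation on the stack as a constraint, rather than as a step of a fixed algorithm, is exactly the shift in perspective the talk advocates.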

Session 1

Principal Type Inference for GHC-Style Multi-parameter Type Classes
Abstract
We observe that the combination of multi-parameter type classes with existential types and type annotations leads to a loss of principal types and undecidability of type inference. This may be a surprising fact for users of these popular features. We conduct a concise investigation of the problem and are able to give a type inference procedure which, if successful, computes principal types under the conditions imposed by the Glasgow Haskell Compiler (GHC). Our results provide new insights on how to perform type inference for advanced type extensions.
Martin Sulzmann, Tom Schrijvers, Peter J. Stuckey
Private Row Types: Abstracting the Unnamed
Abstract
In addition to traditional record and variant types, Objective Caml has structurally polymorphic types, for objects and polymorphic variants. These types allow new forms of polymorphic programming, but they have a limitation when used in combination with modules: there is no way to abstract their polymorphism in a signature. Private row types remedy this situation: they are manifest types whose “row-variable” is left abstract, so that an implementation may instantiate it freely. They have useful applications even in the absence of functors. Combined with recursive modules, they provide an original solution to the expression problem.
Jacques Garrigue
Type and Effect System for Multi-staged Exceptions
Abstract
We present a type and effect system for a multi-staged language with exceptions. The proposed type and effect system checks whether we safely synthesize complex controls with exceptions in multi-staged programming. The proposed exception constructs for multi-staged programming have no artificial restrictions. Exception-raise and -handle expressions can appear in expressions of any stage, though they are executed only at stage 0. Exceptions can be raised during code composition and may escape before they are handled. Our effect type system supports such features. We prove our type and effect system sound: an empty effect means that the input program has no uncaught exceptions during its execution.
Hyunjun Eo, Ik-Soon Kim, Kwangkeun Yi

Session 2

Relational Reasoning for Recursive Types and References
Abstract
We present a local relational reasoning method for reasoning about contextual equivalence of expressions in a λ-calculus with recursive types and general references. Our development builds on the work of Benton and Leperchey, who devised a nominal semantics and a local relational reasoning method for a language with simple types and simple references. Their method uses a parameterized logical relation. Here we extend their approach to recursive types and general references. For the extension, we build upon Pitts’ and Shinwell’s work on relational reasoning about recursive types (but no references) in nominal semantics. The extension is non-trivial because of general references (higher-order store) and makes use of some new ideas for proving the existence of the parameterized logical relation and for the choice of parameters.
Nina Bohr, Lars Birkedal
Proof Abstraction for Imperative Languages
Abstract
Modularity in programming language semantics derives from abstracting over the structure of underlying denotations, yielding semantic descriptions that are more abstract and reusable. One such semantic framework is Liang’s modular monadic semantics in which the underlying semantic structure is encapsulated with a monad. Such abstraction can be at odds with program verification, however, because program specifications require access to the (deliberately) hidden semantic representation. The techniques for reasoning about modular monadic definitions of imperative programs introduced here overcome this barrier. And, just like program definitions in modular monadic semantics, our program specifications and proofs are representation-independent and hold for whole classes of monads, thereby yielding proofs of great generality.
William L. Harrison
Reading, Writing and Relations
Towards Extensional Semantics for Effect Analyses
Abstract
We give an elementary semantics to an effect system, tracking read and write effects by using relations over a standard extensional semantics for the original language. The semantics establishes the soundness of both the analysis and its use in effect-based program transformations.
Nick Benton, Andrew Kennedy, Martin Hofmann, Lennart Beringer

Session 3

A Fine-Grained Join Point Model for More Reusable Aspects
Abstract
We propose a new join point model for aspect-oriented programming (AOP) languages. In most AOP languages including AspectJ, a join point is a time interval of an action in execution. While those languages are widely accepted, they have problems in aspects reusability, and awkwardness when designing advanced features such as tracematches. Our proposed join point model, namely the point-in-time join point model redefines join points as the moments both at the beginning and end of actions. Those finer-grained join points enable us to design AOP languages with better reusability and flexibility of aspects. In this paper, we designed an AspectJ-like language based on the point-in-time model. We also give a denotational semantics of a simplified language in a continuation passing style, and demonstrate that we can straightforwardly model advanced language features such as exception handling and cflow pointcuts.
Hidehiko Masuhara, Yusuke Endoh, Akinori Yonezawa
Automatic Testing of Higher Order Functions
Abstract
This paper tackles a problem often overlooked in the functional programming community: that of testing. Fully automatic test tools like QuickCheck and G∀ST can test first order functions successfully. Higher order functions, HOFs, are an essential and distinguishing part of functional languages. Testing HOFs automatically is still troublesome, since it requires the generation of functions as test arguments for the HOF to be tested. The functions that are the result of the higher order function also need to be identified. If a counterexample is found, the generated and resulting functions should be printed, but that is impossible in most functional programming languages. Yet, bugs in HOFs do occur and are usually more subtle due to the high abstraction level.
In this paper we present an effective and efficient technique to test higher order functions by using intermediate data types. Such a data type mimics and controls the structure of the function to be generated. A simple additional function transforms this data structure into the function needed. We use a continuation based parser library as the main example of the tests. Our automatic testing method for HOFs reveals errors in a library that had been used without problems for a couple of years.
Pieter Koopman, Rinus Plasmeijer
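The core idea, generating functions indirectly via an intermediate data type, can be illustrated outside a functional language. The sketch below is a Python analogue built on our own assumptions: the description type, the generated cases, and the property under test are illustrative, not taken from G∀ST or the paper.

```python
# Generate *descriptions* of Int -> Int functions rather than opaque
# closures; interpret a description with to_fun, and report the printable
# description when a property fails (illustrative sketch).

# Function descriptions: ('const', n) | ('add', n) | ('mul', n)
CASES = [(tag, n) for tag in ('const', 'add', 'mul') for n in range(3)]

def to_fun(desc):
    """Turn a description into the Int -> Int function it denotes."""
    tag, n = desc
    if tag == 'const':
        return lambda x: n
    if tag == 'add':
        return lambda x: x + n
    return lambda x: x * n

def check(prop, args=range(-3, 4)):
    """Test a property over all generated functions; on failure, return
    the description and argument -- both printable -- as a counterexample."""
    for desc in CASES:
        f = to_fun(desc)
        for x in args:
            if not prop(f, x):
                return desc, x
    return None

# A deliberately false property: every generated function is idempotent.
# check finds, and can *print*, the offending function as data.
result = check(lambda f, x: f(f(x)) == f(x))
```

Because the test tool manipulates data, not closures, counterexamples stay printable, which is precisely the obstacle the paper's intermediate data types remove.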

Invited Talk 2

Event Driven Software Quality
Abstract
Event-driven programming has found pervasive acceptance, from high-performance servers to embedded systems, as an efficient method for interacting with a complex world. The fastest research Web servers are event-driven, as is the most common operating system for sensor nodes.
Jens Palsberg

Session 4

Widening Polyhedra with Landmarks
Abstract
The abstract domain of polyhedra is sufficiently expressive to be deployed in verification. One consequence of the richness of this domain is that long, possibly infinite, sequences of polyhedra can arise in the analysis of loops. Widening and narrowing have been proposed to infer a single polyhedron that summarises such a sequence of polyhedra. Motivated by precision losses encountered in verification, we explain how the classic widening/narrowing approach can be refined by an improved extrapolation strategy. The insight is to record inequalities that are thus far found to be unsatisfiable in the analysis of a loop. These so-called landmarks hint at the amount of widening necessary to reach stability. This extrapolation strategy, which refines widening with thresholds, can infer post-fixpoints that are precise enough not to require narrowing. Unlike previous techniques, our approach interacts well with other domains, is fully automatic, conceptually simple and precise on complex loops.
Axel Simon, Andy King
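As a simplified analogue of the paper's idea, the sketch below applies widening with thresholds to plain intervals rather than polyhedra. The loop being analysed, the landmark value, and all helper names are our own illustrative assumptions; in the paper, landmarks are inequalities found unsatisfiable so far, which here degenerate to the single bound 10.

```python
# Interval widening refined by thresholds ("landmarks"), illustrative only.
INF = float('inf')

def widen(old, new, thresholds):
    """Classic interval widening, but an unstable upper bound jumps only to
    the nearest landmark above it, not straight to +infinity."""
    lo = old[0] if new[0] >= old[0] else -INF
    if new[1] <= old[1]:
        hi = old[1]
    else:
        above = [t for t in sorted(thresholds) if t >= new[1]]
        hi = above[0] if above else INF
    return (lo, hi)

# Abstract analysis of:  x = 0; while x < 10: x = x + 1
# The loop guard's negation (x >= 10, unsatisfiable so far) is the landmark.
state = (0, 0)                                # x = 0
for _ in range(3):                            # a few abstract iterations
    entry = (state[0], min(state[1], 9))      # intersect with guard x < 10
    body = (entry[0] + 1, entry[1] + 1)       # effect of x = x + 1
    new = (min(state[0], body[0]), max(state[1], body[1]))  # join
    state = widen(state, new, {10})
```

The analysis stabilises at the interval (0, 10) without any narrowing pass, mirroring the abstract's claim that landmark-guided extrapolation can make narrowing unnecessary; without the landmark, the same widening would jump to infinity.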
Comparing Completeness Properties of Static Analyses and Their Logics
Abstract
Static analyses calculate abstract states, and their logics validate properties of the abstract states. We place into perspective the variety of forwards, backwards, functional, and logical completeness used in abstract-interpretation-based static analysis by giving examples and by proving equivalences, implications, and independences. We expose two fundamental Galois connections that underlie the logics for static analyses and reveal a new completeness variant, O-completeness. We also show that the key concept underlying logical completeness is covering, which we use to relate the various forms of completeness.
David A. Schmidt
Polymorphism, Subtyping, Whole Program Analysis and Accurate Data Types in Usage Analysis
Abstract
There are a number of choices to be made in the design of a type based usage analysis. Some of these are: Should the analysis be monomorphic or have some degree of polymorphism? What about subtyping? How should the analysis deal with user defined algebraic data types? Should it be a whole program analysis?
Several researchers have speculated that these features are important but there has been a lack of empirical evidence. In this paper we present a systematic evaluation of each of these features in the context of a full scale implementation of a usage analysis for Haskell.
Our measurements show that all features increase the precision. It is, however, not necessary to have them all to obtain an acceptable precision.
Tobias Gedell, Jörgen Gustavsson, Josef Svenningsson

Session 5

A Modal Language for the Safety of Mobile Values
Abstract
In the context of distributed computations, local resources give rise to an issue not found in stand-alone computations: the safety of mobile code. One approach to the safety of mobile code is to build a modal type system with the modality □ that corresponds to necessity of modal logic. We argue that the modality □ is not expressive enough for safe communications in distributed computations, in particular for the safety of mobile values. We present a modal language which focuses on the safety of mobile values rather than the safety of mobile code. The safety of mobile values is achieved with a new modality \(\boxdot\) which expresses that given code evaluates to a mobile value. We demonstrate the use of the modality \(\boxdot\) with a communication construct for remote procedure calls.
Sungwoo Park
An Analysis for Proving Temporal Properties of Biological Systems
Abstract
This paper concerns the application of formal methods to biological systems, modeled specifically in BioAmbients [34], a variant of the Mobile Ambients [4] calculus. Following the semantic-based approach of abstract interpretation, we define a new static analysis that computes an abstract transition system. Our analysis has two main advantages with respect to the analyses appearing in the literature: (i) it is able to address temporal properties which are more general than invariant properties; (ii) it supports, by means of a particular labeling discipline, the validation of systems where several copies of an ambient may appear.
Roberta Gori, Francesca Levi
Computational Secrecy by Typing for the Pi Calculus
Abstract
We define and study a distributed cryptographic implementation for an asynchronous pi calculus. At the source level, we adapt simple type systems designed for establishing formal secrecy properties. We show that those secrecy properties have counterparts in the implementation, not formally but at the level of bitstrings, and with respect to probabilistic polynomial-time active adversaries. We rely on compilation to a typed intermediate language with a fixed scheduling strategy. While we exploit interesting, previous theorems for that intermediate language, our result appears to be the first computational soundness theorem for a standard process calculus with mobile channels.
Martín Abadi, Ricardo Corin, Cédric Fournet

Invited Tutorial

Scheme with Classes, Mixins, and Traits
Abstract
The Scheme language report advocates language design as the composition of a small set of orthogonal constructs, instead of a large accumulation of features. In this paper, we demonstrate how such a design scales with the addition of a class system to Scheme. Specifically, the PLT Scheme class system is a collection of orthogonal linguistic constructs for creating classes in arbitrary lexical scopes and for manipulating them as first-class values. Due to the smooth integration of classes and the core language, programmers can express mixins and traits, two major recent innovations in the object-oriented world. The class system is implemented as a macro in terms of procedures and a record-type generator; the mixin and trait patterns, in turn, are naturally codified as macros over the class system.
Matthew Flatt, Robert Bruce Findler, Matthias Felleisen

Session 6

Using Metadata Transformations to Integrate Class Extensions in an Existing Class Hierarchy
Abstract
Class extensions provide a fine-grained mechanism to define incremental modifications to class-based systems when standard subclassing mechanisms are inappropriate. To control the impact of class extensions, the concept of classboxes has emerged that defines a new module system to restrict the visibility of class extensions to selected clients. However, the existing implementations of the classbox concept rely either on a “classbox-aware” virtual machine, an expensive runtime introspection of the method call stack to build the structure of a classbox, or both. In this paper we present an implementation technique that allows for the structure of a classbox to be constructed at compile-time by means of metadata transformations to rewire the inheritance graph of refined classes. These metadata transformations are language-neutral and more importantly preserve both the semantics of the classbox concept and the integrity of the underlying deployment units. As a result, metadata transformation provides a feasible approach to incorporate the classbox concept into programming environments that use a virtual execution system.
Markus Lumpe
Combining Offline and Online Optimizations: Register Allocation and Method Inlining
Abstract
Fast dynamic compilers trade code quality for short compilation time in order to balance application performance and startup time. This paper investigates the interplay of two of the most effective optimizations, register allocation and method inlining for such compilers. We present a bytecode representation which supports offline global register allocation, is suitable for fast code generation and verification, and yet is backward compatible with standard Java bytecode.
Hiroshi Yamauchi, Jan Vitek
A Localized Tracing Scheme Applied to Garbage Collection
Abstract
We present a method to visit all nodes in a forest of data structures while taking into account object placement. We call the technique a Localized Tracing Scheme as it improves locality of reference during object tracing activity. The method organizes the heap into regions and uses trace queues to defer and group tracing of remote objects. The principle of localized tracing reduces memory traffic and can be used as an optimization to improve performance at several levels of the memory hierarchy. The method is applicable to a wide range of uniprocessor garbage collection algorithms as well as to shared memory multiprocessor collectors. Experiments with a mark-and-sweep collector show performance improvements up to 75% at the virtual memory level.
Yannis Chicha, Stephen M. Watt

Session 7

A Pushdown Machine for Recursive XML Processing
Abstract
XML transformations are most naturally defined as recursive functions on trees. A naive implementation, however, would load the entire input XML tree into memory before processing. In contrast, programs in stream processing style minimise memory usage, since they may release the memory occupied by the already-processed prefix of the input, but they are harder to write because the programmer bears the burden of maintaining state. In this paper, we propose a model for XML stream processing and show that all programs written in a particular style of recursive functions on XML trees, the macro forest transducer, can be automatically translated to our stream processors. The stream processor is declarative in style, but can be implemented efficiently by a pushdown machine. We thus get the best of both worlds — program clarity, and efficiency in execution.
Keisuke Nakano, Shin-Cheng Mu
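The flavour of the resulting pushdown machines can be suggested with a hand-written analogue: an event-stream processor that keeps only a stack of open tags instead of the whole tree. The event encoding and the query are our own illustrative choices, not the paper's transducer-to-machine translation.

```python
# Streaming query over XML-like events, with an explicit stack as the only
# state (illustrative sketch). Events are ('open', tag), ('text', s), or
# ('close', tag); the query collects all text directly under <title>.
def titles(events):
    stack, out = [], []
    for ev in events:
        if ev[0] == 'open':
            stack.append(ev[1])
        elif ev[0] == 'close':
            stack.pop()
        elif stack and stack[-1] == 'title':
            out.append(ev[1])
    return out

stream = [('open', 'book'), ('open', 'title'), ('text', 'APLAS'),
          ('close', 'title'), ('open', 'body'), ('text', 'x'),
          ('close', 'body'), ('close', 'book')]
```

Writing such state machines by hand is exactly the burden the abstract mentions; the paper derives them automatically from the recursive tree-function style.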
XML Validation for Context-Free Grammars
Abstract
String expression analysis conservatively approximates the possible string values generated by a program. We consider the validation of a context-free grammar obtained by the analysis against XML schemas and develop two algorithms for deciding inclusion L(G1) ⊆ L(G2), where G1 is a context-free grammar and G2 is either an XML-grammar or a regular hedge grammar. The algorithms for XML-grammars and regular hedge grammars have exponential and doubly exponential time complexity, respectively. We have incorporated the algorithms into the PHP string analyzer and validated several publicly available PHP programs against the XHTML DTD. The experiments show that both of the algorithms are efficient in practice although they have exponential complexity.
Yasuhiko Minamide, Akihiko Tozawa
A Practical String Analyzer by the Widening Approach
Abstract
The static determination of approximated values of string expressions has many potential applications. For instance, approximated string values may be used to check the validity and security of generated strings, as well as to collect useful string properties. Previous string analysis efforts have focused primarily on maximizing the precision of regular approximations of strings. These methods have not been completely satisfactory due to the difficulties in dealing with heap variables and context sensitivity. In this paper, we present an abstract-interpretation-based solution that employs a heuristic widening method. The presented solution is implemented and compared to JSA. In most cases, our solution gives results as precise as those produced by previous methods, and it makes the additional contribution of easily dealing with heap variables and context sensitivity in a very natural way. We anticipate the employment of our method in practical applications.
Tae-Hyoung Choi, Oukseh Lee, Hyunha Kim, Kyung-Goo Doh

Session 8

A Bytecode Logic for JML and Types
Abstract
We present a program logic for virtual machine code that may serve as a suitable target for different proof-transforming compilers. Compilation from JML-specified source code is supported by the inclusion of annotations whose interpretation extends to non-terminating computations. Compilation from functional languages, and the communication of results from intermediate level program analysis phases are facilitated by a new judgement format that admits the compositionality of type systems to be reflected in derivations. This makes the logic well suited to serve as a language in which proofs of a PCC architecture are expressed. We substantiate this claim by presenting the compositional encoding of a type system for bounded heap consumption. Both the soundness proof of the logic and the derivation of the type system have been formally verified by an implementation in Isabelle/HOL.
Lennart Beringer, Martin Hofmann
On Jones-Optimal Specializers: A Case Study Using Unmix
Abstract
Jones optimality is a criterion for assessing the strength of a program specializer. Here, the elements required in a proof of Jones optimality are investigated and the first formal proof for a non-trivial polyvariant specializer (Unmix) is presented. A simplifying element is the use of self-application. Variations of the original criterion are discussed.
Johan Gade, Robert Glück
Backmatter
Metadata
Title
Programming Languages and Systems
Editor
Naoki Kobayashi
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48938-2
Print ISBN
978-3-540-48937-5
DOI
https://doi.org/10.1007/11924661