
2007 | Book

Programming Languages and Systems

5th Asian Symposium, APLAS 2007, Singapore, November 29-December 1, 2007. Proceedings


Table of Contents

Frontmatter

Invited Talk 1

Session 1

The Nuggetizer: Abstracting Away Higher-Orderness for Program Verification
Abstract
We develop a static analysis which distills first-order computational structure from higher-order functional programs. The analysis condenses higher-order programs into a first-order rule-based system, a nugget, that characterizes all value bindings that may arise from program execution. Theorem provers are limited in their ability to automatically reason about higher-order programs; nuggets address this problem, being inductively defined structures that can be simply and directly encoded in a theorem prover. The theorem prover can then prove non-trivial program properties, such as the range of values assignable to particular variables at runtime. Our analysis is flow- and path-sensitive, and incorporates a novel prune-rerun analysis technique to approximate higher-order recursive computations.
Paritosh Shroff, Christian Skalka, Scott F. Smith
Local Reasoning for Storable Locks and Threads
Abstract
We present a resource oriented program logic that is able to reason about concurrent heap-manipulating programs with unbounded numbers of dynamically-allocated locks and threads. The logic is inspired by concurrent separation logic, but handles these more realistic concurrency primitives. We demonstrate that the proposed logic allows local reasoning about programs for which there exists a notion of dynamic ownership of heap parts by locks and threads.
Alexey Gotsman, Josh Berdine, Byron Cook, Noam Rinetzky, Mooly Sagiv
Monadic, Prompt Lazy Assertions in Haskell
Abstract
Assertions test expected properties of run-time values without disrupting the normal computation of a program. We present a library for enriching Haskell programs with assertions. Expected properties can be specified in a parser-combinator like language. The assertions are lazy: they do not force evaluation but only examine what is evaluated by the program. They are also prompt: assertion failure is reported as early as possible. The implementation is based on lazy observations and continuation-based coroutines.
Olaf Chitil, Frank Huch
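The contrast with conventional assertions can be made concrete. The following is a deliberately naive, strict combinator (a hypothetical sketch with illustrative names, not the paper's library): it forces its whole argument up front, which is exactly what the paper's lazy, prompt assertions avoid.

```haskell
-- A naive, strict assertion combinator: checking the predicate
-- forces the argument, unlike the paper's lazy assertions, which
-- only examine what the program itself evaluates.
assertThat :: String -> (a -> Bool) -> a -> a
assertThat msg p x
  | p x       = x
  | otherwise = error ("assertion failed: " ++ msg)

-- Usage: demand that a list is sorted before consuming it.
-- Note that isSorted traverses the entire list eagerly.
sortedHead :: [Int] -> Int
sortedHead xs = head (assertThat "sorted" isSorted xs)
  where isSorted ys = and (zipWith (<=) ys (drop 1 ys))
```

Applied to an infinite list, `sortedHead` would diverge; a lazy, prompt assertion in the paper's sense would report only on the evaluated prefix.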

Session 2

Translation Correctness for First-Order Object-Oriented Pattern Matching
Abstract
Pattern matching makes ML programs more concise and readable, and these qualities are also sought in object-oriented settings. However, objects and classes come with open class hierarchies, extensibility requirements and the need for data abstraction, which all conflict with matching on concrete data types. Extractor-based pattern matching has been proposed to address this conflict. Extractors are user-defined methods that perform the task of value discrimination and deconstruction during pattern matching. In this paper, we give the first formalization of extractor-based matching, using a first-order object-oriented calculus. We give a direct operational semantics and prove it sound. We then present an optimizing translation to a target language without matching, and prove a correctness result stating that an expression is equivalent to its translation.
Burak Emir, Qin Ma, Martin Odersky
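Extractor-based matching can be sketched outside Scala as well. In the hypothetical Haskell fragment below, an ordinary function plays the role of a user-defined extractor: it discriminates and deconstructs values of a type whose representation would be hidden in practice, and matching is done on its result (names are illustrative, not the paper's calculus).

```haskell
-- An abstract type; in a real module its constructor stays hidden.
newtype Email = Email String

-- The "extractor": user-defined discrimination and deconstruction,
-- analogous to an unapply method in extractor-based matching.
unEmail :: Email -> Maybe (String, String)
unEmail (Email s) = case break (== '@') s of
  (user, '@' : domain) -> Just (user, domain)
  _                    -> Nothing

-- Clients match on the extractor's result, not on the representation.
describe :: Email -> String
describe e = case unEmail e of
  Just (u, d) -> u ++ " at " ++ d
  Nothing     -> "malformed"
```

The point of the design is that `describe` keeps working if the representation of `Email` changes, since only the extractor touches it.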
Persistent Oberon: A Programming Language with Integrated Persistence
Abstract
This paper presents the programming language Persistent Oberon, which offers persistence as a naturally built-in concept. Program data is automatically kept durable and stored in non-volatile memory, without the programmer having to write explicit code for interactions with an external database system. In the case of a system interruption or failure, the program can continue directly from its latest consistent state. In contrast to other existing persistent programming languages, this language needs no artificial programming interfaces or commands to use persistence. The language is fully implemented and offers high scalability and performance.
Luc Bläser
More Typed Assembly Languages for Confidentiality
Abstract
We propose a series of type systems for the information-flow security of assembly code. These systems extend previous work on TAL\(_{C}\) with timing annotations and associated judgments and rules. By using different timing rules, the systems are applicable to different practical settings. In particular, they can be used to prevent illicit information flow through the termination and timing channels in sequential programs, as well as the possibilistic and probabilistic channels in multi-threaded programs. We present the formal details as a generic type system TAL\(^+_{C}\) and prove its noninterference. TAL\(^+_{C}\) is designed as a core target language for certifying compilation. We illustrate its use with a formal scheme of type-preserving translation.
Dachuan Yu

Session 3

A Novel Test Case Generation Method for Prolog Programs Based on Call Patterns Semantics
Abstract
A natural way to generate test cases for a Prolog program is to view the call patterns of the procedures in the program as an implicit representation of its control flow graph (CFG). This paper explores that idea by proposing a call-pattern-based test case generation method, in which a set of call patterns or computed answers describes the paths in a CFG. The method is formalized with a constraint-based call-pattern semantics. Using a suitable constraint solver, test cases can then be generated automatically from the sets of constraints. The method can be based on any approximation of the call-pattern semantics; compared with traditional CFG-based test case generation, it is therefore more flexible and can easily be adapted to a tester's requirements, as expressed by the chosen approximation of the call-pattern semantics.
Lingzhong Zhao, Tianlong Gu, Junyan Qian, Guoyong Cai
On a Tighter Integration of Functional and Logic Programming
Abstract
The integration of functional and logic programming is a well-developed field of research. We argue that the integration could be improved significantly in two separate respects: sharing computations across non-deterministic branching, and the declarative encapsulation of logic search. We then present a formal approach that shows how these improvements can be realized, and prove the soundness of our approach.
Bernd Braßel, Frank Huch
X10: Concurrent Programming for Modern Architectures
Abstract
Two major trends are converging to reshape the landscape of concurrent object-oriented programming languages. First, trends in modern architectures (multi-core, accelerators, high performance clusters such as Blue Gene) are making concurrency and distribution inescapable for large classes of OO programmers. Second, experience with first-generation concurrent OO languages (e.g. Java threads and synchronization) has revealed several drawbacks of unstructured threads with lock-based synchronization.
X10 is a second generation OO language designed to address both programmer productivity and parallel performance for modern architectures. It extends the sequential Java programming language with a handful of constructs for concurrency and distribution: a clustered address space (with global data-structures) to deal with distribution; lightweight asynchrony to deal with massive parallelism; recursive fork-join parallelism for structured concurrency; termination detection for sequencing, and atomic blocks for mutual exclusion.
Additionally, it introduces a rich framework for constraint-based dependent types (and annotations) in OO languages. Specifically, the framework is useful for statically specifying the shape of various data-structures (e.g. multidimensional arrays) and the location of objects and distribution of arrays in the clustered address space.
X10 is being developed jointly with several colleagues (at IBM and elsewhere) as part of the DARPA-IBM high performance computing project, PERCS. X10 is being developed under the Eclipse open-source licence. A first implementation of the language is available at http://x10.sf.net. It compiles X10 source programs to Java class files and calls to a runtime. A distributed implementation suitable for clusters is currently under development.
This material is based upon work supported by the Defense Advanced Research Projects Agency under its Agreement No. HR0011-07-9-0002.
Vijay Saraswat
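X10's finish/async idiom (spawn lightweight activities, then block until all of them terminate) can be approximated in other languages. The following is a minimal sketch in Haskell using plain forkIO and MVars; the helper name `finish` is ours, not X10 API, and real X10 activities are far lighter-weight than OS-scheduled green threads with one MVar each.

```haskell
import Control.Concurrent

-- A rough analogue of X10's "finish { async S1; async S2; ... }":
-- run each action in its own thread and wait for all of them,
-- so termination of the group is detected before continuing.
finish :: [IO ()] -> IO ()
finish asyncs = do
  dones <- mapM (\act -> do d <- newEmptyMVar
                            _ <- forkIO (act >> putMVar d ())
                            return d)
                asyncs
  mapM_ takeMVar dones  -- block until every activity signals done
```

Termination detection here is by explicit per-activity signalling; X10's runtime tracks it for arbitrarily nested asyncs.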

Invited Talk 2

Session 4

Timed, Distributed, Probabilistic, Typed Processes
Abstract
This paper studies types and probabilistic bisimulations for a timed π-calculus as an effective tool for a compositional analysis of probabilistic distributed behaviour. The types clarify the role of timers as interface between non-terminating and terminating communication for guaranteeing distributed liveness. We add message-loss probabilities to the calculus, and introduce a notion of approximate bisimulation that discards transitions below a certain specified probability threshold. We prove this bisimulation to be a congruence, and use it for deriving quantitative bounds for practical protocols in distributed systems, including timer-driven message-loss recovery and the Two-Phase Commit protocol.
Martin Berger, Nobuko Yoshida
A Probabilistic Applied Pi–Calculus
Abstract
We propose an extension of the Applied Pi–calculus by introducing nondeterministic and probabilistic choice operators. The semantics of the resulting model, in which probability and nondeterminism are combined, is given by Segala’s Probabilistic Automata driven by schedulers which resolve the nondeterministic choice among the probability distributions over target states. Notions of static and observational equivalence are given for the enriched calculus. In order to model the possible interaction of a process with its surrounding environment a labeled semantics is given together with a notion of weak bisimulation which is shown to coincide with the observational equivalence. Finally, we prove that results in the probabilistic framework are preserved in a purely nondeterministic setting.
Jean Goubault-Larrecq, Catuscia Palamidessi, Angelo Troina
Type-Based Verification of Correspondence Assertions for Communication Protocols
Abstract
Gordon and Jeffrey developed a type system for checking correspondence assertions. Correspondence assertions, proposed by Woo and Lam, state that when a certain event (an “end” event) happens, the corresponding “begin” event must have occurred before; they can be used for checking authenticity in communication protocols. In this paper, we refine Gordon and Jeffrey’s type system and develop a polynomial-time type inference algorithm, so that correspondence assertions can be verified fully automatically, without any type annotations. The key idea that enables polynomial-time type inference is the introduction of fractional effects; without them, the type inference problem is NP-hard.
Daisuke Kikuchi, Naoki Kobayashi

Session 5

Deriving Compilers and Virtual Machines for a Multi-level Language
Abstract
We develop virtual machines and compilers for a multi-level language, which supports multi-stage specialization by composing program fragments with quotation mechanisms. We consider two styles of virtual machines—ones equipped with special instructions for code generation and ones without—and show that the latter kind more easily supports low-level code generation, which avoids the overhead of (run-time) compilation by manipulating instruction sequences, rather than source-level terms, as data. The virtual machines and accompanying compilers are derived by program transformation, extending Ager et al.’s derivation of virtual machines from evaluators.
Atsushi Igarashi, Masashi Iwaki
Finally Tagless, Partially Evaluated
Tagless Staged Interpreters for Simpler Typed Languages
Abstract
We have built the first family of tagless interpretations for a higher-order typed object language in a typed metalanguage (Haskell or ML) that require no dependent types, generalized algebraic data types, or postprocessing to eliminate tags. The statically type-preserving interpretations include an evaluator, a compiler (or staged evaluator), a partial evaluator, and call-by-name and call-by-value CPS transformers.
Our main idea is to encode de Bruijn or higher-order abstract syntax (HOAS) using cogen functions rather than data constructors. In other words, we represent object terms not in an initial algebra but using the coalgebraic structure of the λ-calculus. Our representation also simulates inductive maps from types to types, which are required for typed partial evaluation and CPS transformations.
Our encoding of an object term abstracts over the various ways to interpret it, yet statically assures that the interpreters never get stuck. To achieve self-interpretation and show Jones-optimality, we relate this exemplar of higher-rank and higher-kind polymorphism (provided by ML functors and Haskell 98 constructor classes) to plugging a term into a context of let-polymorphic bindings.
Jacques Carette, Oleg Kiselyov, Chung-chieh Shan
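The approach can be illustrated with a small Haskell sketch in the spirit of the paper (names and details simplified): object terms are built from type-class methods rather than data constructors, so every instance of the class is a separate, tag-free interpretation of the same term.

```haskell
-- "Finally tagless" in miniature: the object language as a class.
-- No GADTs, no tags, and ill-typed object terms do not type-check.
class Symantics repr where
  int :: Int -> repr Int
  add :: repr Int -> repr Int -> repr Int
  lam :: (repr a -> repr b) -> repr (a -> b)
  app :: repr (a -> b) -> repr a -> repr b

-- Interpretation 1: the standard evaluator.
newtype R a = R { unR :: a }
instance Symantics R where
  int     = R
  add x y = R (unR x + unR y)
  lam f   = R (unR . f . R)
  app f x = R (unR f (unR x))

-- Interpretation 2: term size, reusing the very same term.
newtype Size a = Size { unSize :: Int }
instance Symantics Size where
  int _   = Size 1
  add x y = Size (unSize x + unSize y + 1)
  lam f   = Size (unSize (f (Size 0)) + 1)
  app f x = Size (unSize f + unSize x + 1)

-- One object term, interpretable many ways.
ex :: Symantics repr => repr Int
ex = app (lam (\x -> add x (int 1))) (int 41)
```

Because `ex` is polymorphic in `repr`, the same term is evaluated by `unR` and measured by `unSize`, with no tagging or postprocessing.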
Polymorphic Delimited Continuations
Abstract
This paper presents a polymorphic type system for a language with delimited control operators, shift and reset. Based on the monomorphic type system by Danvy and Filinski, the proposed type system allows pure expressions to be polymorphic. Thanks to the explicit presence of answer types, our type system satisfies various important properties, including strong type soundness, existence of principal types and an inference algorithm, and strong normalization. Relationship to CPS translation as well as extensions to impredicative polymorphism are also discussed. These technical results establish the foundation of polymorphic delimited continuations.
Kenichi Asai, Yukiyoshi Kameyama
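Haskell's continuation monad provides monomorphic shift and reset, which is enough to illustrate the operators the type system governs, though this sketch cannot express the answer-type polymorphism that is the paper's contribution.

```haskell
import Control.Monad.Trans.Cont (Cont, evalCont, reset, shift)

-- shift captures the delimited continuation up to the nearest reset;
-- here the captured k is \x -> x + 1, and we apply it twice.
ex :: Int
ex = evalCont $ reset $ do
  x <- shift (\k -> return (k (k 10)))
  return (x + 1)
```

Here `k 10` is 11 and `k 11` is 12, so `ex` is 12. In the paper's system, pure expressions such as the captured continuation may additionally be given polymorphic types.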

Session 6

Adjunct Elimination in Context Logic for Trees
Abstract
We study adjunct-elimination results for Context Logic applied to trees, following previous results by Lozes for Separation Logic and Ambient Logic. In fact, it is not possible to prove such elimination results for the original single-holed formulation of Context Logic. Instead, we prove our results for multi-holed Context Logic.
Cristiano Calcagno, Thomas Dinsdale-Young, Philippa Gardner
Positive Arithmetic Without Exchange Is a Subclassical Logic
Abstract
This paper shows the equivalence for provability between two infinitary systems with the ω-rule. One system is the positive one-sided fragment of Peano arithmetic without Exchange rules. The other system is two-sided Heyting Arithmetic plus the law of Excluded Middle for \(\Sigma^0_1\)-formulas, and it includes Exchange. Thus, the logic underlying positive Arithmetic without Exchange, a substructural logic, is shown to be a logic intermediate between Intuitionism and Classical Logic, hence a subclassical logic. As a corollary, the authors derive the equivalence for positive formulas among provability in those two systems and validity in two apparently unrelated semantics: Limit Computable Mathematics, and Game Semantics with 1-backtracking.
Stefano Berardi, Makoto Tatsuta
Mixed Inductive/Coinductive Types and Strong Normalization
Abstract
We introduce the concept of guarded saturated sets, saturated sets of strongly normalizing terms closed under folding of corecursive functions. Using this tool, we can model equi-inductive and equi-coinductive types with terminating recursion and corecursion principles. Two type systems are presented: Mendler (co)iteration and sized types. As an application we show that we can directly represent the mixed inductive/coinductive type of stream processors with associated recursive operations.
Andreas Abel
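The stream-processor type mentioned as the application can be written down directly. The sketch below shows only its shape: `Get` layers are the inductive part (finitely many reads before each output) and `Put` the coinductive part (a productive output stream). Haskell's types do not enforce this mixed discipline, which is exactly what the paper's systems add.

```haskell
-- Stream processors: read inputs of type a, emit outputs of type b.
data SP a b = Get (a -> SP a b)  -- inductive: consume one input
            | Put b (SP a b)     -- coinductive: produce one output

-- Run a stream processor against a (possibly finite) input list.
run :: SP a b -> [a] -> [b]
run (Put b sp) xs       = b : run sp xs
run (Get f)    (x : xs) = run (f x) xs
run (Get _)    []       = []

-- A total processor: one Get before each Put, forever.
double :: SP Int Int
double = Get (\x -> Put (2 * x) double)
```

In the paper's setting, `double` is accepted because each output is preceded by only finitely many reads; a processor that performed infinitely many `Get`s without a `Put` would be rejected.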
Scalable Simulation of Cellular Signaling Networks
Abstract
Given the combinatorial nature of cellular signalling pathways, where biological agents can bind and modify each other in a large number of ways, concurrent or agent-based languages seem particularly suitable for their representation and simulation [1,2,3,4]. Graphical modelling languages such as κ [5,6,7,8], or the closely related BNG language [9,10,11,12,13,14], seem to afford particular ease of expression. It is unclear, however, how such models can be implemented. Even a simple model of the EGF receptor signalling network can generate more than \(10^{23}\) non-isomorphic species [5], so no approach to simulation based on enumerating species (beforehand, or even on the fly) can handle such models without sampling down the number of potentially generated species.
We present in this paper a radically different method which does not attempt to count species. The proposed algorithm uses a representation of the system together with a super-approximation of its ‘event horizon’ (all events that may happen next), and a specific correction scheme to obtain exact timings. Being completely local and not based on any kind of enumeration, this algorithm has a per-event time cost which is independent of (i) the size of the set of generable species (which can even be infinite) and (ii) the size of the system (i.e., the number of agent instances). We show how to refine this algorithm, using concepts derived from the classical notion of causality, so that in addition the per-event cost depends (iii) only logarithmically on the size of the model (i.e., the number of rules). These complexity properties are reflected in our implementation which, on a current computer, generates about \(10^6\) events per minute for the simple EGF receptor model mentioned above, using a system with \(10^5\) agents.
Vincent Danos, Jérôme Feret, Walter Fontana, Jean Krivine

Invited Talk 3

Session 7

The Semantics of “Semantic Patches” in Coccinelle: Program Transformation for the Working Programmer
Abstract
We rationally reconstruct the core of the Coccinelle system, used for automating and documenting collateral evolutions in Linux device drivers. A denotational semantics of the system’s underlying semantic patch language (SmPL) is developed and extended to include variables. The semantics is in essence a higher-order functional program and is thus executable, but it is inefficient and limited to straight-line source programs. A richer and more efficient SmPL version is defined, implemented by compiling to the temporal logic CTL-V (CTL with existentially quantified variables ranging over source-code parameters and program points, defined using the staging concept from partial evaluation). The compilation is formally proven correct and a model-checking algorithm is outlined.
Neil D. Jones, René Rydhof Hansen
An Efficient SSA-Based Algorithm for Complete Global Value Numbering
Abstract
Global value numbering (GVN) is an important static analysis technique for both optimizing compilers and program verification tools. Existing complete GVN algorithms, which discover all Herbrand equivalences, are inefficient. One reason for this is the intrinsic exponential complexity of the problem; but since the exponential case is rare in practice, the more important reason is the huge data structures annotated at every program point and the slow abstract evaluations performed on them site by site. In this paper, we present an SSA-based algorithm for complete GVN, which uses just one global graph to represent all equivalences at different program points and performs fast abstract evaluations on it. This is possible because in SSA form, interferences among equivalence relations at different program points can be entirely resolved with dominance information. We have implemented the new algorithm in GCC; on average, it accounts for only 0.36% of total compilation time. To the best of our knowledge, this is the first practical complete GVN algorithm.
Jiu-Tao Nie, Xu Cheng
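For readers unfamiliar with value numbering, here is a minimal sketch of classical value numbering over expression trees (not the paper's complete SSA-based algorithm; names are illustrative): two expressions that receive the same number are Herbrand-equivalent, i.e., equal as uninterpreted terms.

```haskell
import qualified Data.Map.Strict as M

-- A tiny expression language and the hash keys used for numbering:
-- a key is either a variable or an operator applied to value numbers.
data Expr = Var String | Add Expr Expr deriving (Eq, Ord, Show)
data Key  = KVar String | KAdd Int Int deriving (Eq, Ord, Show)

-- Number an expression bottom-up, threading the key table.
vn :: Expr -> M.Map Key Int -> (Int, M.Map Key Int)
vn (Var x)   tbl = intern (KVar x) tbl
vn (Add a b) tbl =
  let (na, tbl')  = vn a tbl
      (nb, tbl'') = vn b tbl'
  in intern (KAdd na nb) tbl''

-- Reuse an existing number for a known key, or allocate a fresh one.
intern :: Key -> M.Map Key Int -> (Int, M.Map Key Int)
intern k tbl = case M.lookup k tbl of
  Just n  -> (n, tbl)
  Nothing -> let n = M.size tbl in (n, M.insert k n tbl)

-- Two occurrences of a+b share one value number.
sameNumber :: Bool
sameNumber =
  let e         = Add (Var "a") (Var "b")
      (n1, tbl) = vn e M.empty
      (n2, _)   = vn e tbl
  in n1 == n2
```

The hard part the paper addresses is doing this globally across all program points and control flow, completely, without per-point tables.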
A Systematic Approach to Probabilistic Pointer Analysis
Abstract
We present a formal framework for syntax directed probabilistic program analysis. Our focus is on probabilistic pointer analysis. We show how to obtain probabilistic points-to matrices and their relational counterparts in a systematic way via Probabilistic Abstract Interpretation (PAI). The analysis is based on a non-standard semantics for a simple imperative language which corresponds to a Discrete-Time Markov Chain (DTMC). The generator of this DTMC is constructed by composing (via tensor product) the probabilistic control flow of the program and the data updates of the different variables at individual program points. The dimensionality of the concrete semantics is in general prohibitively large but abstraction (via PAI) allows for a drastic (exponential) reduction of size.
Alessandra Di Pierro, Chris Hankin, Herbert Wiklicky

Session 8

Complete Lattices and Up-To Techniques
Abstract
We propose a theory of up-to techniques for proofs by coinduction, in the setting of complete lattices. This theory improves over existing results by providing a way to compose arbitrarily complex techniques with standard techniques, expressed using a very simple and modular semi-commutation property.
Complete lattices are enriched with monoid operations, so that we can recover standard results about labelled transitions systems and their associated behavioural equivalences at an abstract, “point-free” level.
Our theory gives, for free, a powerful method for validating up-to techniques. We use it to revisit up-to-context techniques, which are known to be difficult in the weak case: we show that it suffices to check basic conditions on each operator of the language, and then rely on an iteration technique to deduce general results for all contexts.
Damien Pous
A Trace Based Bisimulation for the Spi Calculus: An Extended Abstract
Abstract
A notion of open bisimulation is formulated for the spi calculus, an extension of the π-calculus with cryptographic primitives. In this formulation, open bisimulation is indexed by pairs of symbolic traces, which represent the history of interactions between the environment and the pair of processes being checked for bisimilarity. The use of symbolic traces allows a symbolic treatment of bound input in bisimulation checking, which avoids quantification over input values. Open bisimilarity is shown to be sound with respect to testing equivalence; further, it is shown to be an equivalence relation on processes and a congruence on finite processes.
Alwen Tiu
CCS with Replication in the Chomsky Hierarchy: The Expressive Power of Divergence
Abstract
A remarkable result in [4] shows that despite being less expressive than CCS w.r.t. weak bisimilarity, CCS! (a CCS variant where infinite behavior is specified using replication rather than recursion) is Turing powerful. This is shown by encoding Random Access Machines (RAMs) in CCS!. The encoding is said to be non-faithful because it may move from a state which can lead to termination into a divergent one which does not correspond to any configuration of the encoded RAM; that is, the encoding is not termination preserving.
In this paper we study the existence of faithful encodings into CCS! of models of computability strictly less expressive than Turing machines, namely grammars of Types 1 (Context Sensitive Languages), 2 (Context Free Languages) and 3 (Regular Languages) in the Chomsky Hierarchy. We provide faithful encodings of Type 3 grammars. We show that it is impossible to provide a faithful encoding of Type 2 grammars, and that termination-preserving CCS! processes can generate languages which are not Type 2. We finally show that the languages generated by termination-preserving CCS! processes are Type 1.
Jesús Aranda, Cinzia Di Giusto, Mogens Nielsen, Frank D. Valencia

Session 9

Call-by-Name and Call-by-Value in Normal Modal Logic
Abstract
This paper provides a call-by-name and a call-by-value calculus, both of which have a Curry-Howard correspondence to the minimal normal logic K. The calculi are extensions of the λμ-calculi, and their semantics are given by CPS transformations into a calculus corresponding to the intuitionistic fragment of K. The duality between call-by-name and call-by-value with modalities is investigated in our calculi.
Yoshihiko Kakutani
Call-by-Value Is Dual to Call-by-Name, Extended
Abstract
We extend Wadler’s work, which showed a duality between call-by-value and call-by-name, by giving mutual translations between the λμ-calculus and the dual calculus. We extend the λμ-calculus and the dual calculus through two stages. We first add a fixed-point operator and an iteration operator to the call-by-name and call-by-value systems, respectively. Secondly, we add recursive types and the \(\top\) and \(\bot\) types to these systems. The extended duality between call-by-name with recursion and call-by-value with iteration was suggested by Kakutani, who followed Selinger’s category-theoretic approach; we completely follow Wadler’s syntactic approach. We give mutual translations between our extended λμ-calculus and dual calculus by extending Wadler’s translations, and show that our translations form an equational correspondence in the sense of Sabry and Felleisen. By composing our translations with the duality on the dual calculus, we obtain a duality on our extended λμ-calculus. Wadler’s duality on the λμ-calculus was an involution, and our duality on the extended λμ-calculus is also an involution.
Daisuke Kimura
Static and Dynamic Analysis: Better Together
Abstract
Static analysis and dynamic analysis have dual properties: static analysis has high coverage but is imprecise and produces false alarms, while dynamic analysis has low coverage but high precision. We present lessons learned from three projects in which we have combined the complementary strengths of static and dynamic analysis to solve interesting problems:
1. The Yogi project [2], which combines static abstraction-refinement and directed testing to validate whether software components obey safety properties specified as state machines.
2. The Netra project [3], which combines static access-control configuration analysis and runtime checking to find information-flow violations.
3. The Clarity programming language [1], where a static analysis, together with the Clarity compiler and runtime, helps guarantee properties that involve reasoning about asynchrony.
Sriram K. Rajamani
Backmatter
Metadata
Title
Programming Languages and Systems
Edited by
Zhong Shao
Copyright Year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-76637-7
Print ISBN
978-3-540-76636-0
DOI
https://doi.org/10.1007/978-3-540-76637-7
