
2011 | Book

Programming Languages and Systems

9th Asian Symposium, APLAS 2011, Kenting, Taiwan, December 5-7, 2011. Proceedings


About this book

This book constitutes the refereed proceedings of the 9th Asian Symposium on Programming Languages and Systems, APLAS 2011, held in Kenting, Taiwan, in December 2011. The 22 revised full papers, presented together with 4 invited talks and one system and tool presentation, were carefully reviewed and selected from 64 submissions. The papers are organized in topical sections on program analysis, functional programming, compilers, concurrency, semantics, and certification and logic.

Table of Contents

Frontmatter

Invited Talks

Program Analysis and Machine Learning: A Win-Win Deal
Abstract
We give an account of our experiences working at the intersection of two fields: program analysis and machine learning. In particular, we show that machine learning can be used to infer annotations for program analysis tools, and that program analysis techniques can be used to improve the efficiency of machine learning tools.
Aditya V. Nori, Sriram K. Rajamani
Software Verification with Liquid Types
Abstract
Traditional software verification algorithms work by using a combination of Floyd-Hoare logics, model checking, and abstract interpretation to check and infer suitable program invariants. However, these techniques are problematic in the presence of complex but ubiquitous constructs like generic data structures and first-class functions. We observe that modern type systems are capable of the kind of analysis needed to handle these constructs, and we use this observation to develop Liquid Types, a new static verification technique that combines the complementary strengths of Floyd-Hoare logics, model checking, and types. As a result, we demonstrate how liquid types can be used to statically verify properties ranging from memory safety to data structure correctness in higher-order languages like ML. This presentation is based on joint work with Patrick Rondon and Ming Kawaguchi.
Ranjit Jhala
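To make the refinement-type idea concrete, here is a minimal sketch in the style of LiquidHaskell, a later Haskell-based system in the liquid types line of work (the talk itself targets ML, so treat this as an illustrative assumption rather than the system described):

```haskell
-- A refinement type alias: the Ints satisfying the logical predicate.
{-@ type Nat = {v:Int | 0 <= v} @-}

-- The checker verifies that every result satisfies 0 <= v,
-- discharging the proof obligations with an SMT solver.
{-@ abs' :: Int -> Nat @-}
abs' :: Int -> Int
abs' x = if x < 0 then 0 - x else x
```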
Engineering Theories with Z3
Abstract
Modern Satisfiability Modulo Theories (SMT) solvers are fundamental to many program analysis, verification, design, and testing tools. They are a good fit for the domain of software and hardware engineering because they support many of the domains commonly used by these tools. The meanings of the domains are captured by theories that can be axiomatized or supported by efficient theory solvers. Nevertheless, not all domains are handled by all solvers, and many domains and theories will never be native to any solver. Here we explore different theories that extend the basic support of Microsoft Research's SMT solver Z3. Some can be directly encoded or axiomatized; others make use of user theory plug-ins. Plug-ins are a powerful way for tools to supply their custom domains.
Nikolaj Bjørner
Algebra, Logic, Locality, Concurrency
Abstract
This talk reports on ongoing work – with Tony Hoare, Akbar Hussain, Bernhard Möller, Rasmus Petersen, Georg Struth, Ian Wehrman, and others – on models and logics for concurrent processes [10,6,5]. The approach we are taking abstracts from syntax and particular models. Message passing and shared memory process interaction, and strong (interleaving) and weak (partial order) approaches to sequencing, are accommodated as different models of the same core axioms. Rules of program logic, related to Hoare and Separation logics, flow at once from the algebraic axioms. So, one gets a generic program logic from the algebra, which holds for a range of concrete models.
Peter W. O’Hearn

Session 1: Program Analysis

Modular Abstractions of Reactive Nodes Using Disjunctive Invariants
Abstract
We wish to abstract nodes in a reactive programming language, such as Lustre, into nodes with a simpler control structure, with a bound on the number of control states. In order to do so, we compute disjunctive invariants in predicate abstraction, with a bounded number of disjuncts, then we abstract the node, each disjunct representing an abstract state. The computation of the disjunctive invariant is performed by a form of quantifier elimination expressed using SMT-solving.
The same method can also be used to obtain disjunctive loop invariants.
David Monniaux, Martin Bodin
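As a toy illustration of the idea (ours, not the paper's): for a saturating counter x := 0; loop { if x < 10 then x := x + 1 }, a disjunctive invariant with two disjuncts splits the loop into two abstract control states:

\[
I \;\equiv\; \underbrace{(0 \le x \wedge x < 10)}_{\delta_1:\ \text{still counting}} \;\vee\; \underbrace{(x = 10)}_{\delta_2:\ \text{saturated}}
\]

Each disjunct becomes one control state of the abstracted node: the initialization establishes \(\delta_1\), the increment either stays in \(\delta_1\) or moves to \(\delta_2\), and \(\delta_2\) is stable.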
Template-Based Unbounded Time Verification of Affine Hybrid Automata
Abstract
Computing over-approximations of all possible time trajectories is an important task in the analysis of hybrid systems. Sankaranarayanan et al. [20] suggested approximating the set of reachable states using template polyhedra. In the present paper, we use a max-strategy improvement algorithm for computing an abstract semantics for affine hybrid automata that is based on template polyhedra and safely over-approximates the concrete semantics. Based on our formulation, we show that the corresponding abstract reachability problem is in co-NP. Moreover, we obtain a polynomial-time algorithm for the time elapse operation over template polyhedra.
Thao Dang, Thomas Martin Gawlitza
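For readers unfamiliar with the notion: a template polyhedron fixes the constraint directions (the template matrix \(T\)) in advance, so the analysis only searches for the bound vector \(d\). A small example with the interval template in two dimensions:

\[
P(d) = \{\, x \in \mathbb{R}^2 \mid T x \le d \,\}, \qquad
T = \begin{pmatrix} 1 & 0 \\ -1 & 0 \\ 0 & 1 \\ 0 & -1 \end{pmatrix}
\]

The max-strategy improvement algorithm of the paper then computes the least \(d \in (\mathbb{R} \cup \{\infty\})^4\) whose polyhedron safely over-approximates the reachable states and time trajectories.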
Access-Based Localization with Bypassing
Abstract
We present an extension of the access-based localization technique that mitigates a substantial inefficiency in handling procedure calls. Recently, access-based localization was proposed as an effective way of tightly localizing abstract memories. However, it has a limitation in handling procedure calls: the localized input memory for a procedure contains not only the memory locations accessed by the procedure itself but also those accessed by transitively called procedures. The weakness is especially exacerbated in the presence of recursive call cycles, which are common in realistic programs. In this paper, we present a technique, called bypassing, that mitigates the problem. Our technique localizes input memory states only with the memory locations that the procedure directly accesses. The parts not involved in the analysis of the procedure are bypassed to transitively called procedures. In experiments with an industrial-strength global C static analyzer, the technique reduces the average analysis time by 42%. It is especially effective for programs that use recursion extensively, saving 77% of analysis time on average.
Hakjoo Oh, Kwangkeun Yi
A Deductive Database with Datalog and SQL Query Languages
Abstract
This paper introduces the Datalog Educational System (DES), a deductive database that supports both Datalog and SQL as query languages. Since its inception, this system has been targeted at educational purposes rather than at competing with existing systems on efficiency. As distinguishing features, it is free, open-source, multiplatform, interactive, portable, GUI-enabled, implemented following ISO-Prolog, and it supports extensions to pure Datalog in the form of stratified negation, strong constraints, types, metapredicates, and duplicates. Also, test case generation for SQL views and declarative debugging for Datalog programs and SQL views are supported. SQL statements, following the ISO standard, are compiled to Datalog programs and solved by its inference engine. Nonetheless, ODBC connections are also supported, which enables access to external DBMSs and lets users benefit from their solving performance, persistency, and scalability.
Fernando Sáenz-Pérez, Rafael Caballero, Yolanda García-Ruiz

Session 2: Functional Programming

Constructing List Homomorphisms from Proofs
Abstract
The well-known third list homomorphism theorem states that if a function h is both an instance of foldr and foldl, it is a list homomorphism. Much previous work devoted to constructing list homomorphisms, however, overlooks the fact that proving that h is both a foldr and a foldl is often the hardest part; once done, the proof already provides a useful hint about what the resulting list homomorphism could be. In this paper we propose a new approach: to construct a possible candidate of the associative operator and, at the same time, to transform a proof that h is both a foldr and a foldl into a proof that h is a list homomorphism. The effort spent constructing the proof is thus not wasted, and the resulting program is guaranteed to be correct.
Yun-Yan Chi, Shin-Cheng Mu
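As a concrete reading of the theorem's statement, here is a small Haskell sketch (ours, not the paper's construction). A list homomorphism splits its input anywhere and combines the halves with an associative operator; sum qualifies because it is both a foldr and a foldl:

```haskell
-- sum is both a foldr and a foldl ...
sumR, sumL :: [Int] -> Int
sumR = foldr (+) 0
sumL = foldl (+) 0

-- ... and hence a list homomorphism: h (xs ++ ys) = h xs `op` h ys.
-- Discovering op (and proving it associative) is the hard part that
-- the paper extracts from the foldr/foldl proofs.
hom :: (b -> b -> b) -> (a -> b) -> b -> [a] -> b
hom _  _ e []  = e
hom _  f _ [x] = f x
hom op f e xs  = hom op f e ls `op` hom op f e rs
  where (ls, rs) = splitAt (length xs `div` 2) xs

sumH :: [Int] -> Int
sumH = hom (+) id 0   -- agrees with sumR and sumL on every list
```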
Extending Hindley-Milner Type Inference with Coercive Structural Subtyping
Abstract
We investigate how to add coercive structural subtyping to a type system for simply-typed lambda calculus with Hindley-Milner polymorphism. Coercions allow conversion between different types, and their automatic insertion can greatly increase the readability of terms. We present a type inference algorithm that, given a term without type information, computes a type assignment and determines at which positions in the term coercions have to be inserted to make it type-correct according to the standard Hindley-Milner system (without any subtypes). The algorithm is sound and, if the subtype relation on base types is a disjoint union of lattices, also complete. The algorithm has been implemented in the proof assistant Isabelle.
Dmitriy Traytel, Stefan Berghofer, Tobias Nipkow
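A hypothetical illustration of what coercion insertion produces (our sketch, not the paper's algorithm). Assume a base subtyping Int <= Double witnessed by a coercion function; the inference algorithm decides where that function must be wrapped around subterms:

```haskell
-- Witness for the assumed base subtyping Int <= Double.
intToReal :: Int -> Double
intToReal = fromIntegral

half :: Double -> Double
half x = x / 2

n :: Int
n = 3

-- The surface term  half n  is ill-typed under plain Hindley-Milner;
-- coercion insertion elaborates it into a standard HM-typable term:
elaborated :: Double
elaborated = half (intToReal n)
```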
Polymorphic Multi-stage Language with Control Effects
Abstract
Multi-stage programming (MSP) is a means for run-time code generation, and has been found promising in various fields including numerical computation and domain-specific languages. An important problem in designing MSP languages is the dilemma of safety and expressivity; many foundational calculi have been proposed and proven to be type safe, yet they are not expressive enough. Taha's MetaOCaml provides a very expressive tool for MSP, yet the corresponding theory covers only its purely functional subset.
In this paper, we propose a polymorphic multi-stage calculus with delimited-control operators. Kameyama, Kiselyov, and Shan proposed a multi-stage calculus with computation effects, but their calculus lacks polymorphism. In the presence of control effects, polymorphism in types is indispensable, as all pure functions are polymorphic over answer types; and in MSP languages, polymorphism in stages is indispensable to write custom generators as library functions. We show that the proposed calculus satisfies type soundness and type inference. The former is the key to guaranteeing the absence of scope extrusion: open code is never generated or executed. The latter is important in ML-like programming languages. Following Calcagno, Moggi, and Taha's work, we propose a Hindley-Milner style type inference algorithm to obtain principal types for given expressions (if they exist).
Yuichiro Kokaji, Yukiyoshi Kameyama
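For flavour, here is the classic staged power function, the standard first example of MSP, transcribed into Typed Template Haskell (an assumption of this note; the paper's setting is MetaOCaml-style brackets and escapes plus control operators, which this sketch does not model). Requires a recent GHC:

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH.Syntax (Code, Q)

-- Stage the exponent away: power n x builds code for x * x * ... * 1.
power :: Int -> Code Q Int -> Code Q Int
power 0 _ = [|| 1 ||]
power n x = [|| $$x * $$(power (n - 1) x) ||]

-- Splicing  $$(power 3 [|| y ||])  yields code equivalent to  y * (y * (y * 1)).
```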

Session 3: Compiler

Compiler Backend Generation for Application Specific Instruction Set Processors
Abstract
Application Specific Instruction Set Processors (ASIPs) have become popular in the development of embedded systems. For these processors, easily-retargetable, high-performance compilers play a key role in the development process, improving productivity and reducing time-to-market. We propose OpenDL, a novel object-based architecture description language (ADL), together with a well-structured Rule Library, to automatically retarget compiler backends. OpenDL is a succinct, high-quality ADL with object-based inheritance features, while the Rule Library applies instruction templating to allow detailed instruction specifications that handle complex rule patterns. We use these tools to automatically retarget the open-source industrial-strength compiler Open64 to the high-performance embedded processor PowerPC. The generated auto-retargetable compiler is reliable and achieves performance comparable to gcc 4.5 on both the EEMBC and SPEC CPU 2000 benchmarks.
Zhen Cao, Yuan Dong, Shengyuan Wang
A Non-iterative Data-Flow Algorithm for Computing Liveness Sets in Strict SSA Programs
Abstract
We revisit the problem of computing liveness sets (the sets of variables live-in and live-out of basic blocks) for programs in strict static single assignment (SSA) form. In strict SSA, aka SSA with the dominance property, the definition of a variable always dominates all its uses. We exploit this property and the concept of a loop-nesting forest to design a fast two-phase data-flow algorithm: a first pass traverses the control-flow graph (CFG), propagating liveness information backwards; a second pass traverses the loop-nesting forest, updating liveness sets within loops. The algorithm is proved correct even for irreducible CFGs. We analyze its algorithmic complexity and evaluate its efficiency on SPECINT 2000. Compared to traditional iterative data-flow approaches, which perform updates until a fixed point is reached, our algorithm is twice as fast on average. Other approaches are possible that propagate from uses to definitions, one variable at a time, instead of unioning sets as in data-flow analysis. Our algorithm is on average 1.43 times faster than the fastest such alternative, when sets are represented as bitsets and for optimized programs, which have non-trivial live-ranges and a larger number of variables.
Benoit Boissinot, Florian Brandner, Alain Darte, Benoît Dupont de Dinechin, Fabrice Rastello
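For reference, the classical iterative approach the paper compares against solves the standard backward data-flow equations to a fixed point; the two-pass algorithm computes the same sets without iteration by exploiting dominance and the loop-nesting forest:

\[
\mathrm{LiveOut}(B) = \bigcup_{S \in \mathrm{succ}(B)} \mathrm{LiveIn}(S),
\qquad
\mathrm{LiveIn}(B) = \mathrm{UpExposed}(B) \cup \bigl(\mathrm{LiveOut}(B) \setminus \mathrm{Defs}(B)\bigr)
\]

where \(\mathrm{UpExposed}(B)\) is the set of variables used in \(B\) before any definition in \(B\).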
SPAS: Scalable Path-Sensitive Pointer Analysis on Full-Sparse SSA
Abstract
We present SPAS (Scalable PAth-Sensitive), a new framework for resolving points-to sets in C programs that exploits recent advances in pointer analysis. SPAS scalably adds intraprocedural path-sensitivity to flow-sensitive and context-sensitive (FSCS) techniques, by using BDDs to manipulate program paths and by performing pointer analysis level-by-level on a full-sparse SSA representation, similarly to the state-of-the-art LevPA (the FSCS version of SPAS). Compared with LevPA on all 27 C benchmarks in SPEC CPU2000 and CPU2006, SPAS incurs an 18.42% increase in analysis time and a 10.97% increase in memory usage on average, while guaranteeing that all points-to sets are obtained with non-decreasing precision.
Yulei Sui, Sen Ye, Jingling Xue, Pen-Chung Yew

Session 4: Concurrency 1

On the Strength of Owicki-Gries for Resources
Abstract
In multithreaded programs, data are often separated into lock-protected resources. Properties of those resources are typically verified by modular, Owicki-Gries-like methods. The modularity of the Owicki-Gries method has its price: proving some properties may require the manual introduction of auxiliary variables. Which properties can be proven without the burden of introducing auxiliary variables? We answer this question in the abstract interpretation framework. On one hand, we reveal a lattice structure of the method and supply a syntax-based abstract transformer that describes the method exactly. On the other hand, we bound the loss of precision from above and below by transition-relation-independent weakly relational closures. On infinitely many programs the closures coincide and describe the precision loss exactly; in general, the bounds are strict. We prove the absence of a general exact closure-based fixpoint characterization of the accuracy of the Owicki-Gries method, both in the collecting semantics and in certain trace semantics.
Alexander Malkis, Laurent Mauborgne
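The classic example of the auxiliary-variable burden (folklore going back to Owicki and Gries, not taken from this paper): two threads each atomically execute x := x + 1 from x = 0. The final state satisfies x = 2, but no thread-local assertions over x alone are strong enough to prove it; one introduces auxiliary variables \(a_1, a_2\) recording each thread's contribution:

\[
\{x = a_1 + a_2 \wedge a_1 = 0\}\;\; \langle x := x + 1;\; a_1 := 1 \rangle\;\; \{x = a_1 + a_2 \wedge a_1 = 1\}
\]

and symmetrically for thread 2; at the end \(a_1 = a_2 = 1\) yields \(x = 2\).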
Solving Recursion-Free Horn Clauses over LI+UIF
Abstract
Verification of programs with procedures, multi-threaded programs, and higher-order functional programs can be effectively automated using abstraction and refinement schemes that rely on spurious counterexamples for abstraction discovery. The analysis of counterexamples can be automated by a series of interpolation queries, or, alternatively, as a constraint-solving query expressed by a set of recursion-free Horn clauses. (A set of interpolation queries can be formulated as a single constraint over Horn clauses with a linear dependency structure between the unknown relations.) In this paper we present an algorithm for solving recursion-free Horn clauses over a combined theory of linear real/rational arithmetic and uninterpreted functions. Our algorithm performs resolution to deal with the clausal structure and relies on partial solutions to deal with (non-local) instances of functionality axioms.
Ashutosh Gupta, Corneliu Popeea, Andrey Rybalchenko
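A minimal example (ours) of a recursion-free system over linear arithmetic, with unknown relations P and Q whose dependency structure is acyclic:

\[
x = 0 \rightarrow P(x), \qquad
P(x) \wedge x' = x + 1 \rightarrow Q(x'), \qquad
Q(x) \wedge x < 0 \rightarrow \mathit{false}
\]

One solution is \(P(x) \equiv x \ge 0\) and \(Q(x) \equiv x \ge 1\); such solutions play the role that interpolants play in counterexample analysis.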
Macro Tree Transformations of Linear Size Increase Achieve Cost-Optimal Parallelism
Abstract
This paper studies parallel evaluation of tree transformations, in particular accumulative ones. Accumulation is a ubiquitous programming pattern. However, since accumulation usually imposes restrictions on evaluation orders, accumulative tree transformations appear to be unsuitable for parallel evaluation. We propose a parallel evaluation method for a large class of tree-to-tree recursive functions, which may contain accumulations, higher-order terms, and function compositions. Our parallel evaluation method achieves optimal parallel speedup if the transformation is of linear size increase, namely, if the size of each output is linearly bounded by the size of the corresponding input. Our result is based on the theory of macro tree transducers and that of parallel tree contractions; the main contribution is to show how well the two fit together.
Akimasa Morihata
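A tiny Haskell example (ours) of an accumulative tree-to-tree function in the targeted class: replacing each leaf by its depth threads an accumulator downward, and the output size is linear in the input size, so the paper's cost-optimality result applies to transformations of this shape:

```haskell
data Tree = Leaf Int | Node Tree Tree

-- The accumulator d seems to force a top-down evaluation order, yet
-- parallel tree contraction can still evaluate such functions with
-- optimal parallel speedup.
depths :: Tree -> Int -> Tree
depths (Leaf _)   d = Leaf d
depths (Node l r) d = Node (depths l (d + 1)) (depths r (d + 1))
```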
Decentralized Delimited Release
Abstract
Decentralization is a major challenge for secure computing. In a decentralized setting, principals are free to distrust each other. The key challenge is to provide support for expressing and enforcing expressive decentralized policies. This paper focuses on declassification policies, i.e., policies for intended information release. We propose a decentralized, language-independent framework for expressing what information can be released. The framework enables the combination of data owned by different principals without compromising their respective security policies. A key feature is that information release is permitted only when the owners of the data agree on releasing it. We instantiate the framework for a simple imperative language to show how the decentralized declassification policies can be enforced by a runtime monitor, and we discuss a prototype that secures programs by inlining the monitor in the code.
Jonas Magazinius, Aslan Askarov, Andrei Sabelfeld

Session 5: Concurrency 2

Cost Analysis of Concurrent OO Programs
Abstract
Cost analysis aims at automatically approximating the resource consumption (e.g., memory) of executing a program in terms of its input parameters. While cost analysis for sequential programming languages has received considerable attention, concurrency and distribution have been notably less studied. The main challenges (and our contributions) of cost analysis in a concurrent setting are: (1) Inferring precise size relations for data in the program in the presence of shared memory. This information is essential for bounding the number of iterations of loops. (2) Distribution suggests that analysis must keep the cost of the diverse distributed components separate. We handle this by means of a novel form of recurrence equations which are parametric on the notion of cost center, which represents a corresponding component. To the best of our knowledge, our work is the first one to present a general cost analysis framework and an implementation for concurrent OO programs.
Elvira Albert, Puri Arenas, Samir Genaim, Miguel Gómez-Zamalloa, German Puebla
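An illustrative (hypothetical) instance of cost-center-parametric recurrence equations: if a method m loops n times, performing one unit of work at component \(o_1\) and delegating one step per iteration to component \(o_2\), the recurrence keeps the two components' costs separate:

\[
C_m(n) = c(o_1) + c(o_2) + C_m(n-1) \ \ (n > 0), \qquad C_m(0) = 0
\]

with closed form \(C_m(n) = n \cdot c(o_1) + n \cdot c(o_2)\), from which the cost attributed to each distributed component can be read off by zeroing the other cost centers.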
Static Object Race Detection
Abstract
We present a novel static object race detection analysis. Our analysis is data-centric in the sense that dominance and ownership, as well as object-based reasoning about control, play a crucial role. Our empirical results show that the analysis scales well and has a relatively low false-positive rate. In some cases, our analysis outperforms the leading static race detector Chord.
Ana Milanova, Wei Huang
Soundness of Data Flow Analyses for Weak Memory Models
Abstract
Modern multi-core microprocessors implement weak memory consistency models; programming for these architectures is a challenge. This paper solves a problem that has been open for ten years and was originally posed by Rinard: we identify sufficient conditions for a data flow analysis to be sound w.r.t. weak memory models. We first identify a class of analyses that are sound, and provide a formal proof of soundness at the level of trace semantics. Then we discuss how analyses unsound with respect to weak memory models can be repaired via a fixed-point iteration, and provide experimental data on the runtime overhead of this method.
Jade Alglave, Daniel Kroening, John Lugton, Vincent Nimal, Michael Tautschnig
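The standard store-buffering litmus test illustrates the problem. With shared variables initially \(x = y = 0\):

\[
\begin{array}{l|l}
\textbf{Thread 1} & \textbf{Thread 2}\\ \hline
x := 1 & y := 1\\
r_1 := y & r_2 := x
\end{array}
\]

Under sequential consistency the outcome \(r_1 = r_2 = 0\) is impossible, so an analysis assuming SC may conclude \(r_1 + r_2 \ge 1\); under TSO-like weak models that outcome is observable, invalidating the conclusion. The paper's conditions characterize analyses that avoid such unsound inferences.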

Session 6: Semantics

Towards a General Theory of Barbs, Contexts and Labels
Abstract
Barbed bisimilarity is a widely-used behavioural equivalence for interactive systems: given a set of predicates (called "barbs", representing basic observations on states) and a set of contexts (representing the possible execution environments), two systems are deemed equivalent if they verify the same barbs whenever inserted into any of the chosen contexts. Despite its flexibility, this definition of equivalence is unsatisfactory, since the quantification often ranges over an infinite set of contexts, making barbed bisimilarity very hard to verify.
Should a labelled operational semantics be available for our system, more efficient observational equivalences might be adopted. To this end, a series of techniques have been proposed to derive labelled transition systems (LTSs) from unlabelled ones, the main example being Leifer and Milner’s reactive systems. The underlying intuition is that labels are the “minimal” contexts that allow for a reduction to be performed.
We introduce a framework that characterizes (weak) barbed bisimilarity via transition systems whose labels are (possibly minimal) contexts. Unlike other proposals, our theory does not depend on the way LTSs are built, and it relies on a simple set-theoretical presentation. To provide a test-bed for our formalism, we instantiate it by addressing the semantics of mobile ambients and HoCore, recasting the (weak) barbed bisimilarities of these calculi via label-based behavioural equivalences.
Filippo Bonchi, Fabio Gadducci, Giacoma Valentina Monreale
Computation-by-Interaction with Effects
Abstract
A successful approach in the semantics of programming languages is to model programs by interaction dialogues. While dialogues are most often considered abstract mathematical objects, it has also been argued that they are useful for actual computation. A manual implementation of interaction dialogues can be complicated, however. To address this issue, we consider a general method for extending a given language with a metalanguage that supports the implementation of dialogues. This method is based on the construction, by Dal Lago and the author, of the programming language intml, which applies interaction dialogues to sublinear space computation. We show that only a few assumptions on the programming language are needed to implement a useful intml-like metalanguage. We identify a weak variant of the Enriched Effect Calculus (EEC) of Egger, Møgelberg & Simpson as a convenient setting for capturing the structure needed for the construction of the metalanguage. In particular, function types are not needed for the construction, and iteration by means of a Conway operator is sufficient. Using EEC, we show how computational effects can be accounted for in the implementation of interaction dialogues.
Ulrich Schöpp

Session 7: Certification and Logic

Towards a Certified Petri Net Model-Checker
Abstract
Petri nets are widely used in the domain of automated verification through model-checking. In this approach, a Petri net model of the system of interest is produced and its reachable states are computed, searching for erroneous executions. Model compilation can accelerate this analysis by generating code to explore the reachable states. This avoids the use of a fixed exploration tool involving an "interpretation" of the Petri net structure. In this paper, we show how to compile Petri nets targeting the LLVM language (a high-level assembly language) and formally prove the correctness of the produced code. To this end, we define a structural operational semantics for the fragment of LLVM we use.
Lukasz Fronc, Franck Pommereau
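For readers outside the Petri net community, the state-space exploration being compiled is driven by the standard firing rule: a transition \(t\) is enabled at marking \(M\) and fires to marking \(M'\) when

\[
\forall p\colon\; M(p) \ge W(p,t), \qquad M'(p) = M(p) - W(p,t) + W(t,p)
\]

where \(W\) gives the arc weights. The generated LLVM code enumerates enabled transitions and successor markings directly, instead of interpreting the net structure.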
Elementary Linear Logic Revisited for Polynomial Time and an Exponential Time Hierarchy
Abstract
Elementary linear logic is a simple variant of linear logic, introduced by Girard, which characterizes, in the proofs-as-programs approach, the class of elementary functions, that is, functions computable in time bounded by a tower of exponentials of fixed height.
Our goal here is to show that despite its simplicity, elementary linear logic can nevertheless be used as a common framework to characterize the different levels of a hierarchy of deterministic time complexity classes, within elementary time. We consider a variant of this logic with type fixpoints and weakening. By selecting specific types we then characterize the class P of polynomial time predicates and, more generally, the hierarchy of classes k-EXP for \(k \ge 0\), where \(k\text{-EXP} = \bigcup_{i \ge 1} \mathrm{DTIME}\bigl(2_k^{n^i}\bigr)\).
Patrick Baillot
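Here \(2_k^x\) denotes the tower-of-exponentials function implicit in the abstract:

\[
2_0^x = x, \qquad 2_{k+1}^x = 2^{\,2_k^x}
\]

so that \(0\text{-EXP} = \mathrm{P}\) and \(1\text{-EXP} = \mathrm{EXP}\).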
A Proof Pearl with the Fan Theorem and Bar Induction
Walking through Infinite Trees with Mixed Induction and Coinduction
Abstract
We study temporal properties over infinite binary red-blue trees in the setting of constructive type theory. We consider several familiar path-based properties, typical to linear-time and branching-time temporal logics like LTL and CTL*, and the corresponding tree-based properties, in the spirit of the modal μ-calculus. We conduct a systematic study of the relationships of the path-based and tree-based versions of “eventually always blueness” and mixed inductive-coinductive “almost always blueness” and arrive at a diagram relating these properties to each other in terms of implications that hold either unconditionally or under specific assumptions (Weak Continuity for Numbers, the Fan Theorem, Lesser Principle of Omniscience, Bar Induction).
We have fully formalized our development with the Coq proof assistant.
Keiko Nakata, Tarmo Uustalu, Marc Bezem
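As an informal reading of the central property: a path \(\pi\) (an infinite sequence of tree nodes) is "eventually always blue" when

\[
\exists n.\; \forall m \ge n.\; \pi(m) \text{ is blue}
\]

whereas the tree-based counterpart expresses the same idea directly on the tree by a mixed inductive-coinductive predicate, without quantifying over paths; relating the two versions constructively is where assumptions such as the Fan Theorem and Bar Induction enter.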
A Semantics for Context-Sensitive Reduction Semantics
Abstract
This paper explores the semantics of the meta-notation used in the style of operational semantics introduced by Felleisen and Hieb. Specifically, it defines a formal system that gives precise meanings to the notions of contexts, decomposition, and plugging (recomposition) left implicit in most expositions. This semantics is not naturally algorithmic, so the paper also provides an algorithm and proves a correspondence with the declarative definition.
The motivation for this investigation is PLT Redex, a domain-specific programming language designed to support Felleisen-Hieb-style semantics. This style of semantics is the de-facto standard in operational semantics and, as such, is widely used. Accordingly, our goal is that Redex programs should, as much as possible, look and behave like those semantics. Since Redex's first public release more than seven years ago, its precise interpretation of contexts has changed several times, as we repeatedly encountered reduction systems that did not behave according to their authors' intent. This paper describes the culmination of that experience. To the best of our knowledge, the semantics given here accommodates even the most complex uses of contexts available.
Casey Klein, Jay McCarthy, Steven Jaconette, Robert Bruce Findler
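To fix intuitions about the three notions the paper formalizes, here is a minimal Haskell sketch (ours, far simpler than Redex's general setting) of contexts, plugging, and decomposition for left-to-right evaluation of addition:

```haskell
-- Syntax: numbers and addition; values are numbers.
data Exp = Num Int | Add Exp Exp

-- Evaluation contexts enforcing left-to-right order.
data Ctx = Hole | AddL Ctx Exp | AddR Int Ctx

-- Plugging (recomposition): fill the hole of a context with a term.
plug :: Ctx -> Exp -> Exp
plug Hole        e = e
plug (AddL c e2) e = Add (plug c e) e2
plug (AddR n c)  e = Add (Num n) (plug c e)

-- Decomposition: split a non-value uniquely into context and redex,
-- so that plug c r == e whenever decompose e == Just (c, r).
decompose :: Exp -> Maybe (Ctx, Exp)
decompose (Num _)               = Nothing
decompose (Add (Num m) (Num n)) = Just (Hole, Add (Num m) (Num n))
decompose (Add (Num m) e2)      = fmap (\(c, r) -> (AddR m c, r)) (decompose e2)
decompose (Add e1 e2)           = fmap (\(c, r) -> (AddL c e2, r)) (decompose e1)
```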
Backmatter
Metadata
Title
Programming Languages and Systems
Edited by
Hongseok Yang
Copyright Year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-25318-8
Print ISBN
978-3-642-25317-1
DOI
https://doi.org/10.1007/978-3-642-25318-8