
2008 | Book

Mathematics of Program Construction

9th International Conference, MPC 2008, Marseille, France, July 15-18, 2008. Proceedings

Editors: Philippe Audebaud, Christine Paulin-Mohring

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 9th International Conference on Mathematics of Program Construction, MPC 2008, held in Marseille, France, in July 2008. The 18 revised full papers presented together with 1 invited talk were carefully reviewed and selected from 41 submissions. Issues addressed range from algorithmics to support for program construction in programming languages and systems. Topics of special interest are type systems, program analysis and transformation, programming language semantics, and program logics.

Table of Contents

Frontmatter
Exploiting Unique Fixed Points
Abstract
Functional programmers happily use equational reasoning and induction to prove properties of recursive programs. To show properties of corecursive programs they employ coinduction, perhaps less enthusiastically. Coinduction is often considered a rather low-level proof method, especially as it seems to depart rather radically from equational reasoning. In this talk we introduce an alternative proof technique based on unique fixed points. To make the idea concrete, consider the simplest example of a coinductive type: the type of streams, where a stream is an infinite sequence of elements. In a lazy functional language, such as Haskell, streams are easy to define and many textbooks on Haskell reproduce the folklore examples of Fibonacci or Hamming numbers defined by recursion equations over streams. One has to be a bit careful in formulating a recursion equation, essentially avoiding that the sequence being defined swallows its own tail. However, if this care is exercised, the equation even possesses a unique solution, a fact that is not very widely appreciated. Uniqueness can be exploited to prove that two streams are equal: if they satisfy the same recursion equation, then they are! We will use this proof technique to infer some intriguing facts about particular streams and to develop the basics of finite calculus. Quite attractively, the resulting proofs have a strong equational flavour. In a nutshell, the proof method brings equational reasoning to the coworld. Of course, it is by no means restricted to streams and can be used equally well to prove properties of infinite trees or the observational equivalence of instances of an abstract datatype.
Ralf Hinze
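A minimal Haskell sketch, written for this summary rather than taken from the talk, of the folklore stream definitions the abstract alludes to; the names fibs and nats are illustrative. Productivity of each equation (the head is produced before the tail is consumed) is what makes its solution unique.

```haskell
-- Illustrative only: Fibonacci numbers and the naturals, defined by guarded
-- recursion equations over lazy lists standing in for streams.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

nats :: [Integer]
nats = 0 : map (+ 1) nats

-- To prove two streams equal, one shows that both satisfy the same
-- productive recursion equation, and appeals to uniqueness of its solution.
main :: IO ()
main = print (take 10 fibs, take 10 nats)
```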
Scrap Your Type Applications
Abstract
System F is ubiquitous in logic, theorem proving, language meta-theory, compiler intermediate languages, and elsewhere. Along with its type abstractions come type applications, but these often appear redundant. This redundancy is both distracting and costly for type-directed compilers.
We introduce System IF, for implicit System F, in which many type applications can be made implicit. It supports decidable type checking and strong normalisation. Experiments with Haskell suggest that it could be used to reduce the amount of intermediate code in compilers that employ System F.
System IF constitutes a first foray into a new area in the design space of typed lambda calculi, that is interesting in its own right and may prove useful in practice.
Barry Jay, Simon Peyton Jones
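A surface-level illustration, not the paper's System IF, of why type applications are often redundant: the type argument is usually determined by the term arguments. The snippet uses GHC's TypeApplications extension purely as a way to make the type argument visible.

```haskell
{-# LANGUAGE TypeApplications #-}

-- In System F, and in GHC's Core language, polymorphic functions receive
-- explicit type arguments. Many of them are recoverable from the value
-- arguments, which is the redundancy that an implicit System F can exploit.
withTypeArg :: Int
withTypeArg = id @Int 3     -- the type argument @Int is already forced by 3

withoutTypeArg :: Int
withoutTypeArg = id 3       -- inference supplies the same type argument

main :: IO ()
main = print (withTypeArg, withoutTypeArg)
```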
Programming with Effects in Coq
Abstract
Next-generation programming languages will move beyond simple type systems to include support for formal specifications and mechanically-checked proofs of adherence to those requirements. Already, in the imperative world, systems such as ESC/Java and Spec# integrate Hoare-style pre- and post-conditions into the underlying type system. However, we argue that neither the program logics used in these systems, nor the decision procedures used to discharge verification conditions, are sufficient for establishing deep properties of modular software.
In contrast, the Coq proof development environment provides a powerful program logic (CiC) coupled with an extensible, interactive environment that can combine deep insights from humans with automation to discharge deep proof obligations. Unfortunately, the language at the core of Coq is limited to purely functional programming.
In the Ynot project, we are attempting to address this problem by extending Coq with a new type constructor (the Hoare-triple type), and a few carefully chosen axioms that can be used to build imperative programs in a style quite close to Haskell. I will report on our progress thus far, both in using Ynot to construct modular, extensible libraries for imperative programs, as well as our new compiler infrastructure for generating efficient code from Ynot programs.
Greg Morrisett
Verifying a Semantic βη-Conversion Test for Martin-Löf Type Theory
Abstract
Type-checking algorithms for dependent type theories often rely on the interpretation of terms in some semantic domain of values when checking equalities. Here we analyze a version of Coquand’s algorithm for checking the βη-equality of such semantic values in a theory with a predicative universe hierarchy and large elimination rules. Although this algorithm does not rely on normalization by evaluation explicitly, we show that similar ideas can be employed for its verification. In particular, our proof uses the new notions of contextual reification and strong semantic equality.
The algorithm is part of a bi-directional type checking algorithm which checks whether a normal term has a certain semantic type, a technique used in the proof assistants Agda and Epigram. We work with an abstract notion of semantic domain in order to accommodate a variety of possible implementation techniques, such as normal forms, weak head normal forms, closures, and compiled code. Our aim is to get closer than previous work to verifying the type-checking algorithms which are actually used in practice.
Andreas Abel, Thierry Coquand, Peter Dybjer
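The following is a small, self-contained Haskell sketch written for this summary, not the paper's algorithm (which is typed, bidirectional, and verified in a much richer setting): terms are evaluated into a semantic domain of closures and neutral values, and βη-equality is decided on those values by applying function values to fresh variables.

```haskell
-- Illustrative sketch only (untyped, no universes or large eliminations):
-- terms in de Bruijn notation, values built from closures and neutral spines.
data Tm  = Var Int | Lam Tm | App Tm Tm
data Val = Clo [Val] Tm        -- function value: a closure over its body
         | Ne Int [Val]        -- neutral value: a free variable applied to a spine

eval :: [Val] -> Tm -> Val
eval env (Var i)   = env !! i
eval env (Lam t)   = Clo env t
eval env (App t u) = apply (eval env t) (eval env u)

apply :: Val -> Val -> Val
apply (Clo env t) v = eval (v : env) t
apply (Ne x sp)   v = Ne x (sp ++ [v])

-- βη-equality of values at freshness level k: whenever either side is a
-- function value, both sides are applied to a fresh variable (this is where
-- η enters); two neutrals are equal if heads and spines agree.
eq :: Int -> Val -> Val -> Bool
eq k (Ne x sp) (Ne y sq) =
  x == y && length sp == length sq && and (zipWith (eq k) sp sq)
eq k u v =
  eq (k + 1) (apply u (Ne k [])) (apply v (Ne k []))

main :: IO ()
main = print (eq 0 (eval [] (Lam (Var 0)))                       -- λx. x
                   (eval [] (Lam (App (Lam (Var 0)) (Var 0)))))  -- λx. (λy. y) x
```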
The Capacity-C Torch Problem
Abstract
The torch problem (also known as the bridge problem or the flashlight problem) is about getting a number of people across a bridge as quickly as possible under certain constraints. Although the problem is very simply stated, the solution is surprisingly non-trivial. The case in which there are just four people and the capacity of the bridge is two is a well-known puzzle, widely publicised on the internet. We consider the general problem where the number of people, their individual crossing times and the capacity of the bridge are all input parameters. We present an algorithm that determines the shortest total crossing time; the number of primitive computations executed by the algorithm (i.e. the worst-case time complexity of the algorithm) is proportional to the square of the number of people.
Roland Backhouse
Recounting the Rationals: Twice!
Abstract
We derive an algorithm that enables the rationals to be efficiently enumerated in two different ways. One way is known and is credited to Moshe Newman; it corresponds to a deforestation of the so-called Calkin-Wilf tree of rationals. The second is new and corresponds to a deforestation of the Stern-Brocot tree of rationals. We show that both enumerations stem from the same simple algorithm. In this way, we construct a Stern-Brocot enumeration algorithm with the same time and space complexity as Newman’s algorithm.
Roland Backhouse, João F. Ferreira
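A short Haskell rendering of Newman's successor formula for the Calkin–Wilf enumeration mentioned above; the deforested Stern–Brocot counterpart derived in the paper is not reproduced here, and the names are mine.

```haskell
-- Newman's formula: each positive rational determines the next one in the
-- Calkin–Wilf enumeration directly, with no tree in sight.
next :: Rational -> Rational
next x = 1 / (2 * fromInteger (floor x) + 1 - x)

calkinWilf :: [Rational]
calkinWilf = iterate next 1

main :: IO ()
main = print (take 10 calkinWilf)   -- 1, 1/2, 2, 1/3, 3/2, 2/3, 3, ... (shown as n % d)
```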
Zippy Tabulations of Recursive Functions
Abstract
This paper is devoted to the statement and proof of a theorem showing how recursive definitions whose associated call graphs satisfy certain shape conditions can be converted systematically into efficient bottom-up tabulation schemes. The increase in efficiency can be dramatic, typically transforming an exponential time algorithm into one that takes only quadratic time. The proof of the theorem relies heavily on the theory of zips developed by Roland Backhouse and Paul Hoogendijk.
Richard S. Bird
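The theorem itself cannot be reproduced in a few lines, but the flavour of the speed-up can be shown on a familiar toy instance (my example, not the paper's): a naive recursion with overlapping subproblems versus a bottom-up tabulation.

```haskell
-- Binomial coefficients: exponential by naive recursion, quadratic by
-- tabulating Pascal's triangle one row at a time.
chooseNaive :: Int -> Int -> Integer
chooseNaive n k
  | k == 0 || k == n = 1
  | otherwise        = chooseNaive (n - 1) (k - 1) + chooseNaive (n - 1) k

chooseTab :: Int -> Int -> Integer
chooseTab n k = rows !! n !! k
  where
    rows      = iterate nextRow [1]                -- row n of Pascal's triangle
    nextRow r = zipWith (+) (0 : r) (r ++ [0])     -- built from row n-1 only

main :: IO ()
main = print (chooseNaive 20 10, chooseTab 20 10)  -- both 184756
```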
Unfolding Abstract Datatypes
Abstract
We argue that abstract datatypes — with public interfaces hiding private implementations — represent a form of codata rather than ordinary data, and hence that proof methods for corecursive programs are the appropriate techniques to use for reasoning with them. In particular, we show that the universal properties of unfold operators are perfectly suited for the task. We illustrate with solutions to two problems from the recent literature.
Jeremy Gibbons
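A small illustration, mine rather than the paper's development, of the point of view the abstract takes: an abstract datatype is known only through its observations, so two implementations can be compared by unfolding each into the same stream of observations.

```haskell
import Data.List (unfoldr)

-- An abstract counter, presented only through its interface; the state type s
-- is the hidden implementation.
data Counter s = Counter { initial :: s, tick :: s -> s, value :: s -> Int }

-- Unfold a counter into the stream of values it can ever exhibit.
observe :: Counter s -> [Int]
observe c = unfoldr (\s -> Just (value c s, tick c s)) (initial c)

-- Two different private representations of "the same" counter.
byInt :: Counter Int
byInt = Counter 0 (+ 1) id

byList :: Counter [()]
byList = Counter [] (() :) length

main :: IO ()
main = print (take 5 (observe byInt) == take 5 (observe byList))  -- True
```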
Circulations, Fuzzy Relations and Semirings
Abstract
Circulations are similar to flows in capacity-constrained networks, with the difference that they also observe lower bounds and, unlike flows, are not directed from a source to a sink. We give a new description of circulations in networks using a technique introduced by Kawahara; he applied the same methods to network flows. We show the power and flexibility of his approach in a new application, refining it at the same time by introducing the concept of test relations. Furthermore, we give algebraic formulations of a generic algorithm for computing a flow in a network with lower bounds and a necessary and sufficient criterion for the existence of a circulation.
Roland Glück, Bernhard Möller
Asynchronous Exceptions as an Effect
Abstract
Asynchronous interrupts abound in computing systems, yet they remain a thorny concept for both programming and verification practice. The ubiquity of interrupts underscores the importance of developing programming models to aid the development and verification of interrupt-driven programs. The research reported here recognizes asynchronous interrupts as a computational effect and encapsulates them as a building block in modular monadic semantics. The resulting modular semantic model can serve both as a guide for functional programming with interrupts and as a formal basis for reasoning about interrupt-driven computation.
William L. Harrison, Gerard Allwein, Andy Gill, Adam Procter
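A deliberately small sketch, not the paper's monadic semantics: one way to see interrupts as an effect is a resumption-style monad whose computations are sequences of atomic steps, with an interrupt deliverable between any two steps. All names here are illustrative.

```haskell
-- Resumption-style computations: either finished, or one atomic step followed
-- by the rest of the computation.
data Res a = Done a | Step (Res a)

instance Functor Res where
  fmap f (Done a) = Done (f a)
  fmap f (Step r) = Step (fmap f r)

instance Applicative Res where
  pure         = Done
  Done f <*> r = fmap f r
  Step f <*> r = Step (f <*> r)

instance Monad Res where
  Done a >>= k = k a
  Step r >>= k = Step (r >>= k)

atom :: Res ()
atom = Step (Done ())

-- Deliver an asynchronous interrupt after n steps: the computation is
-- abandoned and a default value is returned instead.
runWithInterrupt :: Int -> a -> Res a -> a
runWithInterrupt _ _   (Done a) = a
runWithInterrupt 0 def _        = def
runWithInterrupt n def (Step r) = runWithInterrupt (n - 1) def r

main :: IO ()
main = print ( runWithInterrupt 5 (-1) (atom >> atom >> Done 42)   -- 42
             , runWithInterrupt 1 (-1) (atom >> atom >> Done 42) ) -- -1
```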
The Böhm–Jacopini Theorem Is False, Propositionally
Abstract
The Böhm–Jacopini theorem (Böhm and Jacopini, 1966) is a classical result of program schematology. It states that any deterministic flowchart program is equivalent to a while program. The theorem is usually formulated at the first-order interpreted or first-order uninterpreted (schematic) level, because the construction requires the introduction of auxiliary variables. Ashcroft and Manna (1972) and Kosaraju (1973) showed that this is unavoidable. As observed by a number of authors, a slightly more powerful structured programming construct, namely loop programs with multi-level breaks, is sufficient to represent all deterministic flowcharts without introducing auxiliary variables. Kosaraju (1973) established a strict hierarchy determined by the maximum depth of nesting allowed. In this paper we give a purely propositional account of these results. We reformulate the problems at the propositional level in terms of automata on guarded strings, the automata-theoretic counterpart to Kleene algebra with tests. Whereas the classical approaches do not distinguish between first-order and propositional levels of abstraction, we find that the purely propositional formulation allows a more streamlined mathematical treatment, using algebraic and topological concepts such as bisimulation and coinduction. Using these tools, we can give more mathematically rigorous formulations and simpler and more revealing proofs.
Dexter Kozen, Wei-Lung Dustin Tseng
The Expression Lemma
Abstract
Algebraic data types and catamorphisms (folds) play a central role in functional programming as they allow programmers to define recursive data structures and operations on them uniformly by structural recursion. Likewise, in object-oriented (OO) programming, recursive hierarchies of object types with virtual methods play a central role for the same reason. There is a semantical correspondence between these two situations which we reveal and formalize categorically. To this end, we assume a coalgebraic model of OO programming with functional objects. The development may be helpful in deriving refactorings that turn sufficiently disciplined functional programs into OO programs of a designated shape and vice versa.
Ralf Lämmel, Ondrej Rypacek
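A toy Haskell rendering of the correspondence described above (assumptions mine; the paper works categorically and with richer object models): one expression language given as an algebraic datatype consumed by a fold, and again as "functional objects" whose whole interface is an evaluation method.

```haskell
-- The data-plus-fold side.
data Expr = Lit Int | Add Expr Expr

foldExpr :: (Int -> a) -> (a -> a -> a) -> Expr -> a
foldExpr lit _   (Lit n)   = lit n
foldExpr lit add (Add l r) = add (foldExpr lit add l) (foldExpr lit add r)

evalData :: Expr -> Int
evalData = foldExpr id (+)

-- The object side: an "object" is identified with its observable behaviour.
newtype ExprObj = ExprObj { eval :: Int }

litObj :: Int -> ExprObj
litObj = ExprObj

addObj :: ExprObj -> ExprObj -> ExprObj
addObj l r = ExprObj (eval l + eval r)

main :: IO ()
main = print ( evalData (Add (Lit 1) (Add (Lit 2) (Lit 3)))
             , eval (addObj (litObj 1) (addObj (litObj 2) (litObj 3))) )
```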
Nested Datatypes with Generalized Mendler Iteration: Map Fusion and the Example of the Representation of Untyped Lambda Calculus with Explicit Flattening
Abstract
Nested datatypes are families of datatypes that are indexed over all types such that the constructors may relate different family members. Moreover, the argument types of the constructors refer to indices given by expressions where the family name may occur. Especially in this case of true nesting, there is no direct support by theorem provers to guarantee termination of functions that traverse these data structures.
A joint article with A. Abel and T. Uustalu (TCS 333(1–2), pp. 3–66, 2005) proposes iteration schemes that guarantee termination not by structural requirements but just by polymorphic typing. They are generic in the sense that no specific syntactic form of the underlying datatype “functor” is required. In subsequent work (accepted for the Journal of Functional Programming), the author introduced an induction principle for the verification of programs obtained from Mendler-style iteration of rank 2, which is one of those schemes, and justified it in the Calculus of Inductive Constructions through an implementation in the theorem prover Coq.
The new contribution is an extension of this work to generalized Mendler iteration (introduced in Abel et al, cited above), leading to a map fusion theorem for the obtained iterative functions. The results and their implementation in Coq are used for a case study on a representation of untyped lambda calculus with explicit flattening. Substitution is proven to fulfill two of the three monad laws, the third only for “hereditarily canonical” terms, but this is rectified by a relativisation of the whole construction to those terms.
Ralph Matthes
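For readers unfamiliar with "true nesting", here is an illustrative Haskell version, reconstructed for this summary rather than taken from the paper's Coq development, of a lambda-term family with explicit flattening. The recursive occurrences of the family appear at changed indices, which is why ordinary structural recursion does not directly apply and why even mapping needs polymorphic recursion.

```haskell
-- Untyped lambda terms over a variable type a; under a binder the variable
-- type grows by one (Maybe a), and Flat takes a term whose variables are
-- themselves terms: a truly nested datatype.
data Lam a
  = Var a
  | App (Lam a) (Lam a)
  | Abs (Lam (Maybe a))
  | Flat (Lam (Lam a))
  deriving Show

-- The recursive calls are at instances Maybe a and Lam a rather than at a.
instance Functor Lam where
  fmap f (Var a)   = Var (f a)
  fmap f (App t u) = App (fmap f t) (fmap f u)
  fmap f (Abs t)   = Abs (fmap (fmap f) t)
  fmap f (Flat t)  = Flat (fmap (fmap f) t)

main :: IO ()
main = print (fmap not (Abs (Var Nothing)) :: Lam Bool)
```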
Probabilistic Choice in Refinement Algebra
Abstract
The term refinement algebra refers to a set of abstract algebras, similar to Kleene algebra with tests, that are suitable for reasoning about programs in a total-correctness framework. Abstract algebraic reasoning also works well when probabilistic programs are concerned, and a general refinement algebra that is suitable for such programs has been defined previously. That refinement algebra does not contain features that are specific to probabilistic programs. For instance, it does not include a probabilistic choice operator, or probabilistic assertions and guards (tests), which may be used to represent correctness properties for probabilistic programs. In this paper we investigate how these features may be included in a refinement algebra. That is, we propose a new refinement algebra in which probabilistic choice, and probabilistic guards and assertions may be expressed. Two operators for modelling probabilistic enabledness and termination are also introduced.
Larissa Meinicke, Ian J. Hayes
Algebra of Programming Using Dependent Types
Abstract
Dependent type theory is rich enough to express that a program satisfies an input/output relational specification, but it could be hard to construct the proof term. On the other hand, squiggolists know very well how to show that one relation is included in another by algebraic reasoning. We demonstrate how to encode functional and relational derivations in a dependently typed programming language. A program is coupled with an algebraic derivation from a specification, whose correctness is guaranteed by the type system.
Shin-Cheng Mu, Hsiang-Shang Ko, Patrik Jansson
Safe Modification of Pointer Programs in Refinement Calculus
Abstract
This paper discusses stepwise refinement of pointer programs in the framework of refinement calculus. We augment the underlying logic with formulas of separation logic and then introduce a pair of new predicate transformers, called separating assertion and separating assumption. The new predicate transformers are derived from separating conjunction and separating implication, which are fundamental logical connectives in separation logic. They represent primitive forms of heap allocation/deallocation operators and the basic pointer statements can be specified by means of them. We derive several refinement laws that are useful for stepwise refinement and demonstrate the use of the laws in the context of correctness preserving transformations that are intended for improved memory usage.
The formal development is carried out in the framework of higher-order logic and is based on Back and Preoteasa’s axiomatization of state space and its extension to the heap storage [BP05, Pre06]. All the results have been implemented and verified in the theorem prover PVS.
Susumu Nishimura
A Hoare Logic for Call-by-Value Functional Programs
Abstract
We present a Hoare logic for a call-by-value programming language equipped with recursive, higher-order functions, algebraic data types, and a polymorphic type system in the style of Hindley and Milner. It is the theoretical basis for a tool that extracts proof obligations out of programs annotated with logical assertions. These proof obligations, expressed in a typed, higher-order logic, are discharged using off-the-shelf automated or interactive theorem provers. Although the technical apparatus that we exploit is by now standard, its application to call-by-value functional programming languages appears to be new, and (we claim) deserves attention. As a sample application, we check the partial correctness of a balanced binary search tree implementation.
Yann Régis-Gianas, François Pottier
Synthesis of Optimal Control Policies for Some Infinite-State Transition Systems
Abstract
We develop a symbolic, logic-based technique for constructing optimal control policies in some transition systems where state spaces are large or infinite. These systems are presented as iterations of finite sets of guarded assignments which have costs. The optimality objective is to minimize the total costs of system executions reaching the set characterized by a given target predicate. Guards are predicates and control policies are expressed by tuples of guards. The optimal control policy refines the control policy of the given system. It is generated from the target predicate by an iteration based on backwards induction. This iterative procedure amounts to a variant of the symbolic algorithm generating the reachability precondition; the latter characterizes the states from which some system execution reaches the target set. The main difference is the introduction of greedy and cost-dependent iteration steps.
Michel Sintzoff
Modal Semirings Revisited
Abstract
A new axiomatisation for domain and codomain on semirings and Kleene algebras is proposed. It is simpler, more general and more flexible than a predecessor, and it is particularly suitable for program analysis and construction via automated deduction. Different algebras of domain elements for distributive lattices, (co-)Heyting algebras and Boolean algebras arise by adapting this axiomatisation. Modal operators over all these domain algebras can then easily be defined. The calculus of the previous axiomatisation arises as a special case. An application in terms of a fully automated proof of a modal correspondence result for Löb’s formula is also presented.
Jules Desharnais, Georg Struth
Asymptotic Improvement of Computations over Free Monads
Abstract
We present a low-effort program transformation to improve the efficiency of computations over free monads in Haskell. The development is calculational and carried out in a generic setting, thus applying to a variety of datatypes. An important aspect of our approach is the utilisation of type class mechanisms to make the transformation as transparent as possible, requiring no restructuring of code at all. There is also no extra support necessary from the compiler (apart from an up-to-date type checker). Despite this simplicity of use, our technique is able to achieve true asymptotic runtime improvements. We demonstrate this by examples for which the complexity is reduced from quadratic to linear.
Janis Voigtländer
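A hedged Haskell sketch in the spirit of the paper's theme, reconstructed here rather than copied from it: computations over leaf-labelled trees, where left-nested substitutions repeatedly retraverse already-built structure, and where a continuation-passing ("codensity-style") representation behind the same interface avoids that retraversal. The paper shows how such a change of representation, hidden behind a type class, yields genuine asymptotic improvements.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Leaf-labelled trees; substitution at the leaves plays the free-monad bind.
data Tree a = Leaf a | Node (Tree a) (Tree a)

subst :: Tree a -> (a -> Tree b) -> Tree b
subst (Leaf a)   k = k a
subst (Node l r) k = Node (subst l k) (subst r k)

-- Left-nested substitutions: each step retraverses everything built so far.
fullTree :: Int -> Tree Int
fullTree 1 = Leaf 1
fullTree n = fullTree (n - 1) `subst` \i -> Node (Leaf (n - i)) (Leaf (i + 1))

-- A consumer that inspects only one path of the tree.
zigzag :: Tree Int -> Int
zigzag = zig
  where zig (Leaf n)   = n
        zig (Node l _) = zag l
        zag (Leaf n)   = n
        zag (Node _ r) = zig r

-- Codensity-style representation: a tree given by how it consumes a leaf
-- continuation; substitution becomes mere function composition.
newtype TreeC a = TreeC { runTreeC :: forall b. (a -> Tree b) -> Tree b }

rep :: Tree a -> TreeC a
rep t = TreeC (subst t)

absT :: TreeC a -> Tree a
absT c = runTreeC c Leaf

substC :: TreeC a -> (a -> TreeC b) -> TreeC b
substC c k = TreeC (\h -> runTreeC c (\a -> runTreeC (k a) h))

fullTreeC :: Int -> TreeC Int
fullTreeC 1 = rep (Leaf 1)
fullTreeC n = fullTreeC (n - 1) `substC` \i -> rep (Node (Leaf (n - i)) (Leaf (i + 1)))

main :: IO ()
main = print (zigzag (fullTree 20), zigzag (absT (fullTreeC 20)))  -- identical results
```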
Symmetric and Synchronous Communication in Peer-to-Peer Networks
Abstract
Motivated by distributed implementations of game-theoretical algorithms, we study symmetric process systems and the problem of attaining common knowledge between processes. We formalize our setting by defining a notion of peer-to-peer networks and appropriate symmetry concepts in the context of Communicating Sequential Processes (CSP) [1]. We then prove that CSP with input and output guards makes common knowledge in symmetric peer-to-peer networks possible, but that the restricted version, which disallows output statements in guards and is the one commonly implemented, does not. Our results extend [2].
An extended version is available at http://arxiv.org/abs/0710.2284.
Andreas Witzel
Backmatter
Metadata
Title
Mathematics of Program Construction
Editors
Philippe Audebaud
Christine Paulin-Mohring
Copyright Year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-70594-9
Print ISBN
978-3-540-70593-2
DOI
https://doi.org/10.1007/978-3-540-70594-9
