
About this book

This book constitutes the proceedings of the 13th International Symposium on Functional and Logic Programming, FLOPS 2016, held in Kochi, Japan, in March 2016.
The 14 papers presented in this volume were carefully reviewed and selected from 36 submissions. They cover the following topics: functional and logic programming; program transformation and rewriting; and extracting programs from proofs of their correctness.

Table of Contents

Frontmatter

Logic/Constraint Programming and Concurrency: The Hard-Won Lessons of the Fifth Generation Computer Project

Abstract
The technical goal of the Fifth Generation Computer Systems (FGCS) project (1982–1993) was to develop Parallel Inference technologies, namely systematized technologies for realizing knowledge information processing on top of parallel computer architecture [8].
Kazunori Ueda

From Sets to Bits in Coq

Abstract
Computer Science abounds in folktales about how — in the early days of computer programming — bit vectors were ingeniously used to encode and manipulate finite sets. Algorithms have thus been developed to minimize memory footprint and maximize efficiency by taking advantage of microarchitectural features. With the development of automated and interactive theorem provers, finite sets have also made their way into the libraries of formalized mathematics. Tailored to ease proving, these representations are designed for symbolic manipulation rather than computational efficiency. This paper aims to bridge this gap. In the Coq proof assistant, we implement a bitset library and prove its correctness with respect to a formalization of finite sets. Our library enables a seamless interaction of sets for computing — bitsets — and sets for proving — finite sets.
Arthur Blot, Pierre-Évariste Dagand, Julia Lawall
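
As a reminder of the encoding this abstract alludes to, here is a minimal Haskell sketch of finite sets over a universe of at most 64 elements, represented as machine words. The paper formalizes such a representation in Coq and proves it correct against finite sets; this sketch only conveys the informal picture, and the names (BitSet, insert, member) are illustrative rather than taken from the library.

import Data.Bits (popCount, setBit, testBit, (.&.), (.|.))
import Data.Word (Word64)

-- A finite set over {0..63} as a single machine word:
-- the usual set operations become O(1) bitwise instructions.
type BitSet = Word64

empty :: BitSet
empty = 0

insert :: Int -> BitSet -> BitSet
insert i s = setBit s i

member :: Int -> BitSet -> Bool
member i s = testBit s i

union, intersection :: BitSet -> BitSet -> BitSet
union        = (.|.)
intersection = (.&.)

size :: BitSet -> Int
size = popCount

For example, size (insert 3 (insert 5 empty)) is 2, and member 4 (insert 4 empty) is True.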

From Proposition to Program

Embedding the Refinement Calculus in Coq
Abstract
The refinement calculus and type theory are both frameworks that support the specification and verification of programs. This paper presents an embedding of the refinement calculus in the interactive theorem prover Coq, clarifying the relation between the two. As a result, refinement calculations can be performed in Coq, enabling the semi-automatic calculation of formally verified programs from their specification.
Wouter Swierstra, Joao Alpuim
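
Two standard refinement-calculus laws, written in Morgan-style specification-statement notation, illustrate the kind of rules such an embedding must capture; the paper's Coq formulation may differ in detail.

\[
  \frac{pre \Rightarrow pre'}
       {w{:}[\,pre,\ post\,] \;\sqsubseteq\; w{:}[\,pre',\ post\,]}
  \qquad
  \frac{pre \Rightarrow post[w := E]}
       {w{:}[\,pre,\ post\,] \;\sqsubseteq\; (w := E)}
\]

The first law weakens the precondition of a specification statement; the second introduces an assignment whose effect is already guaranteed by the precondition.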

The Boolean Constraint Solver of SWI-Prolog (System Description)

Abstract
We present a new constraint solver over Boolean variables, available as library(clpb) (documentation: http://eu.swi-prolog.org/man/clpb.html) in SWI-Prolog. Our solver distinguishes itself from other available CLP(\(\mathcal{B}\)) solvers by several unique features: First, it is written entirely in Prolog and is hence portable to different Prolog implementations. Second, it is the first freely available BDD-based CLP(\(\mathcal{B}\)) solver. Third, we show that new interface predicates allow us to solve new types of problems with CLP(\(\mathcal{B}\)) constraints. We also use our implementation experience to contrast features and state necessary requirements of attributed variable interfaces to optimally support CLP(\(\mathcal{B}\)) constraints in different Prolog systems. Finally, we present some performance results and comparisons with SICStus Prolog.
Markus Triska
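
The heart of BDD-based Boolean reasoning is Shannon expansion, f = (x AND f[x:=1]) OR (NOT x AND f[x:=0]). The Haskell sketch below decides satisfiability and tautology this way; it deliberately omits the node sharing and variable ordering that make real BDDs, and hence library(clpb), efficient, and it is in no way the library's implementation.

-- Boolean formulas over named variables.
data Form = Var String | Lit Bool | Not Form | And Form Form | Or Form Form

vars :: Form -> [String]
vars (Var x)   = [x]
vars (Lit _)   = []
vars (Not f)   = vars f
vars (And f g) = vars f ++ vars g
vars (Or  f g) = vars f ++ vars g

-- Substitute a truth value for a variable.
assign :: String -> Bool -> Form -> Form
assign x b (Var y) | x == y    = Lit b
                   | otherwise = Var y
assign _ _ (Lit c)   = Lit c
assign x b (Not f)   = Not (assign x b f)
assign x b (And f g) = And (assign x b f) (assign x b g)
assign x b (Or  f g) = Or  (assign x b f) (assign x b g)

-- Evaluate a closed formula.
eval :: Form -> Bool
eval (Lit c)   = c
eval (Not f)   = not (eval f)
eval (And f g) = eval f && eval g
eval (Or  f g) = eval f || eval g
eval (Var _)   = error "eval: free variable"

-- Shannon expansion: branch on the first free variable.
satisfiable, tautology :: Form -> Bool
satisfiable f = case vars f of
  []      -> eval f
  (x : _) -> satisfiable (assign x True f) || satisfiable (assign x False f)
tautology = not . satisfiable . Not

For instance, tautology (Or (Var "x") (Not (Var "x"))) evaluates to True.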

Probabilistic Inference by Program Transformation in Hakaru (System Description)

Abstract
We present Hakaru, a new probabilistic programming system that allows composable reuse of distributions, queries, and inference algorithms, all expressed in a single language of measures. The system implements two automatic and semantics-preserving program transformations—disintegration, which calculates conditional distributions, and simplification, which subsumes exact inference by computer algebra. We show how these features work together by describing the ideal workflow of a Hakaru user on two small problems. We highlight our composition of transformations and types in design and implementation.
Praveen Narayanan, Jacques Carette, Wren Romano, Chung-chieh Shan, Robert Zinkov
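
To convey the flavour of composing distributions and queries in a single language, here is a toy discrete-distribution monad in Haskell, with exact inference by summing weights. Hakaru's language of measures and its disintegration and simplification transformations are far richer and are not represented here; all names below are illustrative.

-- A distribution as a list of weighted outcomes.
newtype Dist a = Dist { runDist :: [(a, Rational)] }

instance Functor Dist where
  fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

instance Applicative Dist where
  pure x = Dist [(x, 1)]
  Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Dist where
  Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

bernoulli :: Rational -> Dist Bool
bernoulli p = Dist [(True, p), (False, 1 - p)]

-- Exact probability of an event, by summing the matching weights.
prob :: (a -> Bool) -> Dist a -> Rational
prob ev (Dist xs) = sum [ p | (x, p) <- xs, ev x ]

-- Probability that at least one of two independent biased coins is heads.
atLeastOneHead :: Rational
atLeastOneHead = prob id $ do
  a <- bernoulli 0.3
  b <- bernoulli 0.5
  pure (a || b)

Here atLeastOneHead evaluates to 13 % 20, i.e. 0.65.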

An Interaction Net Encoding of Gödel’s System \(\mathcal{T}\)

Declarative Pearl
Abstract
The graph rewriting system of interaction nets has been very successful for the implementation of the lambda calculus. In this paper we show how the ideas can be extended and simplified to encode Gödel’s System \(\mathcal {T}\)—the simply typed \(\lambda \)-calculus extended with numbers. Surprisingly, using some results about System \(\mathcal {T}\), we obtain a very simple system of interaction nets that is significantly more efficient than a direct encoding for the evaluation of programs.
Ian Mackie, Shinya Sato
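
For readers unfamiliar with System \(\mathcal{T}\), a direct functional reading (the simply typed lambda calculus plus natural numbers and a primitive recursor) can be sketched in Haskell as follows. The paper's contribution is an interaction-net encoding of such terms, which is not shown here; the names recNat, add and mul are illustrative.

-- Natural numbers and the primitive recursor of System T.
data Nat = Z | S Nat

recNat :: Nat -> a -> (Nat -> a -> a) -> a
recNat Z     base _    = base
recNat (S n) base step = step n (recNat n base step)

-- Addition and multiplication, programmed with the recursor alone.
add, mul :: Nat -> Nat -> Nat
add m n = recNat m n (\_ acc -> S acc)
mul m n = recNat m Z (\_ acc -> add n acc)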

Space-Efficient Planar Acyclicity Constraints

A Declarative Pearl
Abstract
Many constraints on graphs, e.g. the existence of a simple path between two vertices, or the connectedness of the subgraph induced by some selection of vertices, can be straightforwardly represented by means of a suitable acyclicity constraint. One method for encoding such a constraint in terms of simple, local constraints uses a 3-valued variable for each edge, and an \((N+1)\)-valued variable for each vertex, where N is the number of vertices in the entire graph. For graphs with many vertices, this can be somewhat inefficient in terms of space usage.
In this paper, we show how to refine this encoding into one that uses only a single bit of information, i.e. a 2-valued variable, per vertex, assuming the graph in question is planar. We furthermore show how this same constraint can be used to encode connectedness constraints, and a variety of other graph-related constraints.
Taus Brock-Nannestad
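
As a point of reference for the encoding described above: one classic local formulation assigns every vertex a rank in {0..N} and requires each selected, directed edge to strictly increase the rank, and a selection of edges is acyclic exactly when such a ranking exists. The Haskell check below (illustrative names, assuming the ranking is total) makes the space cost visible, namely one (N+1)-valued value per vertex, which is precisely what the paper's one-bit-per-vertex planar encoding avoids; that refinement is not shown.

import qualified Data.Map as M

type Vertex = Int

-- Does the given ranking certify that the selected directed edges are acyclic?
-- (The ranking is assumed to be defined on every vertex that occurs.)
acyclicBy :: M.Map Vertex Int -> [(Vertex, Vertex)] -> Bool
acyclicBy rank = all (\(u, v) -> rank M.! u < rank M.! v)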

Executable Relational Specifications of Polymorphic Type Systems Using Prolog

Abstract
A concise, declarative, and machine-executable specification of the Hindley–Milner type system (HM) can be formulated using logic programming languages such as Prolog. Modern functional language implementations such as the Glasgow Haskell Compiler support more extensive flavors of polymorphism than Milner’s theory of type polymorphism from the late 1970s. We progressively extend the HM specification to include more advanced type system features. An interesting outcome is that extending polymorphism beyond HM led to a multi-staged solution: resolve the typing relations first, delaying the kinding relations, and then resolve the delayed kinding relations. Our work demonstrates that logic programming is effective for prototyping polymorphic type systems with rich features of polymorphism, and that it could have been even more effective for specifying type inference if it were equipped with better theories and tools for staged resolution of different relations at different levels.
Ki Yung Ahn, Andrea Vezzosi
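
The two rules that give HM its let-polymorphism (instantiation at variable uses, generalisation at let bindings) are shown below in their standard form; the paper renders rules of this kind as executable Horn clauses.

\[
  \frac{x : \sigma \in \Gamma \qquad \sigma \sqsubseteq \tau}
       {\Gamma \vdash x : \tau}
  \qquad
  \frac{\Gamma \vdash e_0 : \tau_0 \qquad
        \Gamma,\, x : \mathrm{gen}(\Gamma, \tau_0) \vdash e_1 : \tau}
       {\Gamma \vdash \mathbf{let}\ x = e_0\ \mathbf{in}\ e_1 : \tau}
\]

Here \(\sigma \sqsubseteq \tau\) means \(\tau\) is an instance of the type scheme \(\sigma\), and \(\mathrm{gen}(\Gamma, \tau_0)\) generalises \(\tau_0\) over the type variables not free in \(\Gamma\).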

Proof Relevant Corecursive Resolution

Abstract
Resolution lies at the foundation of both logic programming and type class context reduction in functional languages. Terminating derivations by resolution have well-defined inductive meaning, whereas some non-terminating derivations can be understood coinductively. Cycle detection is a popular method to capture a small subset of such derivations. We show that in fact cycle detection is a restricted form of coinductive proof, in which the atomic formula forming the cycle plays the rôle of the coinductive hypothesis.
This paper introduces a heuristic method for obtaining richer coinductive hypotheses in the form of Horn formulas. Our approach subsumes cycle detection and gives coinductive meaning to a larger class of derivations. For this purpose we extend resolution with Horn formula resolvents and corecursive evidence generation. We illustrate our method on non-terminating type class resolution problems.
Peng Fu, Ekaterina Komendantskaya, Tom Schrijvers, Andrew Pond
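
A classic instance of non-terminating type class resolution, given here purely as illustration of the kind of problem the paper addresses (it is not necessarily an example taken from the paper): resolving Eq (Mu HTree) below cycles back to itself, which is exactly the situation that cycle detection, and more generally coinductive resolution, can close.

{-# LANGUAGE FlexibleContexts, UndecidableInstances #-}

-- Resolving Eq (Mu HTree) needs Eq (HTree (Mu HTree)), which in turn needs
-- Eq (Mu HTree) again. GHC accepts this by detecting the cycle and building
-- a recursive dictionary; the paper generalises such cycle detection to
-- richer coinductive hypotheses in the form of Horn formulas.
newtype Mu f = In (f (Mu f))

instance Eq (f (Mu f)) => Eq (Mu f) where
  In x == In y = x == y

data HTree a = Leaf | Node a (HTree a)
  deriving Eq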

A Coq Library for Internal Verification of Running-Times

Abstract
This paper presents a Coq library that lifts an abstract yet precise notion of running time into the type of a function. Our library is based on a monad that counts abstract steps, controlled by one of the monadic operations. The monad’s computational content, however, is simply that of the identity monad, so programs written in our monad (ones that recur on the natural structure of their arguments) extract into idiomatic OCaml code. We evaluated the expressiveness of the library by proving that red-black tree insertion and search, merge sort, insertion sort, Fibonacci, iterated list insertion, BigNum addition, and Okasaki’s Braun Tree algorithms all have their expected running times.
Jay McCarthy, Burke Fetscher, Max New, Daniel Feltey, Robert Bruce Findler
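
The idea of counting abstract steps with a monadic tick operation can be sketched in Haskell as follows. The Coq library additionally proves the stated bounds and extracts to idiomatic OCaml, neither of which this sketch attempts; the names Cost, tick and insert are illustrative.

-- A step-counting monad: a value paired with an abstract step count.
newtype Cost a = Cost { runCost :: (a, Int) }

instance Functor Cost where
  fmap f (Cost (a, n)) = Cost (f a, n)

instance Applicative Cost where
  pure a = Cost (a, 0)
  Cost (f, m) <*> Cost (a, n) = Cost (f a, m + n)

instance Monad Cost where
  Cost (a, m) >>= k = let Cost (b, n) = k a in Cost (b, m + n)

-- Record one abstract step.
tick :: Cost ()
tick = Cost ((), 1)

-- Insertion into a sorted list, charging one step per comparison.
insert :: Ord a => a -> [a] -> Cost [a]
insert x []     = pure [x]
insert x (y:ys)
  | x <= y    = tick >> pure (x : y : ys)
  | otherwise = tick >> ((y :) <$> insert x ys)

For example, runCost (insert 3 [1,2,4,5]) yields ([1,2,3,4,5], 3): three comparisons were charged.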

A Transformational Approach to Parametric Accumulated-Cost Static Profiling

Abstract
Traditional static resource analyses estimate the total resource usage of a program, without executing it. In this paper we present a novel resource analysis whose aim is instead the static profiling of accumulated cost, i.e., to discover, for selected parts of the program, an estimate or bound of the resource usage accumulated in each of those parts. Traditional resource analyses are parametric in the sense that the results can be functions on input data sizes. Our static profiling is also parametric, i.e., our accumulated cost estimates are also parameterized by input data sizes. Our proposal is based on the concept of cost centers and a program transformation that allows the static inference of functions that return bounds on these accumulated costs depending on input data sizes, for each cost center of interest. Such information is much more useful to the software developer than the traditional resource usage functions, as it allows identifying the parts of a program that should be optimized, because of their greater impact on the total cost of program executions. We also report on our implementation of the proposed technique using the CiaoPP program analysis framework, and provide some experimental results.
R. Haemmerlé, P. López-García, U. Liqat, M. Klemen, J. P. Gallagher, M. V. Hermenegildo
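
The cost-centre idea itself is easy to state operationally: each part of the program charges its steps to a named centre. The Haskell sketch below (illustrative names, not the CiaoPP machinery) merely accumulates such charges at run time; the point of the paper is to infer them statically, as functions of input data sizes, which this sketch does not do.

import qualified Data.Map as M

-- Accumulated steps per named cost centre.
type Centers = M.Map String Int

charge :: String -> Int -> Centers -> Centers
charge = M.insertWith (+)

-- Two toy cost centres, threaded explicitly through the computation:
-- "rev" is charged once per element reversed, "sum" once per addition.
revCount :: [a] -> Centers -> ([a], Centers)
revCount = go []
  where go acc []     cs = (acc, cs)
        go acc (y:ys) cs = go (y:acc) ys (charge "rev" 1 cs)

sumCount :: [Int] -> Centers -> (Int, Centers)
sumCount []     cs = (0, cs)
sumCount (x:xs) cs = let (s, cs') = sumCount xs (charge "sum" 1 cs)
                     in (x + s, cs')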

Polymorphic Types in Erlang Function Specifications

Abstract
Erlang is a concurrent functional programming language developed by Ericsson, well suited for implementing distributed systems. Although Erlang is dynamically typed, the Dialyzer static analysis tool can be used to extract implicit type information from programs, both for documentation purposes and for finding errors that will definitely arise at program execution. Dialyzer is based on the notion of success types, which are safe over-approximations of the semantics of expressions. Erlang also supports user-provided function specifications (or just specs), contracts that provide more information about the semantics of functions. Function specs are useful not only as documentation, but can also be employed by Dialyzer to improve the precision of the analysis. Even though specs can have a polymorphic shape, in practice Dialyzer is not able to exploit their full potential. One reason is that extending the notion of success types to a polymorphic setting is not trivial, and several interpretations are possible. In this work we propose a precise formulation for a novel interpretation of function specs as polymorphic success types, and a program transformation that allows us to apply this new interpretation at the call sites of functions with a declared spec. This results in a significant improvement in the number of definite errors that Dialyzer is able to detect.
Francisco J. López-Fraguas, Manuel Montenegro, Juan Rodríguez-Hortalá

Declarative Foreign Function Binding Through Generic Programming

Abstract
Foreign function interfaces are typically organised monolithically, tying together the specification of each foreign function with the mechanism used to make the function available in the host language. This leads to inflexible systems, where switching from one binding mechanism to another (say from dynamic binding to static code generation) often requires changing tools and rewriting large portions of code.
In contrast, approaching the design of a foreign function interface as a generic programming problem allows foreign function specifications to be written declaratively, with easy switching between a wide variety of binding mechanisms — static and dynamic, synchronous and asynchronous, etc. — with no changes to the specifications.
Jeremy Yallop, David Sheets, Anil Madhavapeddy
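
The generic-programming view can be conveyed with a small Haskell analogue (the paper itself works in OCaml): a foreign function's type is described once as data, and different interpreters of that description supply different binding mechanisms. The single interpreter below merely renders a C prototype; the names and types are illustrative and not part of any real binding library.

{-# LANGUAGE GADTs #-}

-- A declarative description of C types and function signatures.
data CType a where
  CInt    :: CType Int
  CDouble :: CType Double

data Sig a where
  Ret   :: CType a -> Sig a
  (:->) :: CType a -> Sig b -> Sig (a -> b)
infixr 5 :->

cname :: CType a -> String
cname CInt    = "int"
cname CDouble = "double"

-- One possible interpreter: render a C prototype from the description.
-- Other interpreters could generate static stubs or bind dynamically,
-- without changing the description itself.
prototype :: String -> Sig a -> String
prototype f sig = retTy sig ++ " " ++ f ++ "(" ++ argTys sig ++ ")"
  where
    retTy :: Sig b -> String
    retTy (Ret t)   = cname t
    retTy (_ :-> s) = retTy s
    argTys :: Sig b -> String
    argTys (Ret _)       = "void"
    argTys (t :-> Ret _) = cname t
    argTys (t :-> s)     = cname t ++ ", " ++ argTys s

For example, prototype "pow" (CDouble :-> CDouble :-> Ret CDouble) yields "double pow(double, double)".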

Incremental Computing with Abstract Data Structures

Abstract
Incremental computing is a method of keeping consistency between an input and an output. If only a small portion of the input is modified, it is natural to expect that the corresponding output can be obtained more efficiently than by full re-computation. However, for abstract data structures such as self-balancing binary search trees, even the most primitive modifications may lead to drastic changes in the underlying structure. In this paper, we develop an incremental computing method that can deal with complex modifications and is therefore suitable for abstract data structures. The key idea is to use shortcut fusion to decompose a complex modification into a series of simple ones. Based on this idea, we extend Jeuring’s incremental computing method, which deals with algebraic data structures, to abstract data structures. Our method is purely functional and does not rely on any run-time support. Its correctness follows directly from parametricity. Moreover, its cost is often proportional to that of the corresponding modification.
Akimasa Morihata
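
The interface of incremental computing can be pictured in a few lines of Haskell: a cached result is kept alongside the input, and each modification updates both rather than recomputing from scratch. This toy multiset-sum instance only fixes the vocabulary; the paper's contribution is making this work for abstract data structures such as self-balancing search trees, via shortcut fusion, which the sketch does not attempt. All names are illustrative.

-- Modifications to the input.
data Update a = Insert a | Delete a

-- The cached result: here simply the sum of a multiset of integers.
newtype Cached = Cached { cachedSum :: Int }
  deriving Show

-- O(1) per modification, versus O(n) recomputation from scratch.
applyUpdate :: Update Int -> Cached -> Cached
applyUpdate (Insert x) (Cached s) = Cached (s + x)
applyUpdate (Delete x) (Cached s) = Cached (s - x)

fromScratch :: [Int] -> Cached
fromScratch = Cached . sum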

Declarative Programming with Algebra

Abstract
The Algebra of Communicating Processes (ACP) is a theory that views sequences and choices as mathematical operations: multiplication and addition. From these basic constructs, others are defined, such as parallel merge, interruption and disruption.
Conventional programming languages may be enriched with ACP features to gain declarative expressiveness. We have done this in SubScript, an extension of the Scala language. SubScript has high-level support for sequences, choices and iterations in a style similar to parser generator languages. It also offers parallel composition operations, such as and- and or-parallelism, and dataflow.
The declarative style is also present in the way various execution modes are supported. Conventional programming languages often require some boilerplate code to run things in the background, in the GUI thread, or as event handlers. SubScript supports the same execution modes, but with minimal boilerplate. It is also easy to compose programs from blocks having different execution modes.
This paper introduces ACP and SubScript; it briefly describes the current implementation, and gives several examples.
Andre van Delft, Anatoliy Kmetyuk
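
For reference, these are the basic ACP axioms for alternative (+) and sequential (·) composition, the algebraic view on which the SubScript operators mentioned above are built:

\[
\begin{aligned}
  x + y &= y + x  &\qquad  (x + y) + z &= x + (y + z)\\
  x + x &= x      &\qquad  (x + y)\cdot z &= x\cdot z + y\cdot z\\
  (x\cdot y)\cdot z &= x\cdot(y\cdot z) & &
\end{aligned}
\]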

Backmatter
