
2024 | Book | 1st edition

Functional and Logic Programming

17th International Symposium, FLOPS 2024, Kumamoto, Japan, May 15–17, 2024, Proceedings

Editors: Jeremy Gibbons, Dale Miller

Publisher: Springer Nature Singapore

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 17th International Symposium on Functional and Logic Programming, FLOPS 2024, held in Kumamoto, Japan, in May 2024.

The 15 papers presented in this volume were carefully reviewed and selected from 28 submissions. The scope includes all aspects of the design, semantics, theory, applications, implementations, and teaching of
declarative programming. FLOPS specifically aims to promote cross-fertilization between theory and practice and among different styles of declarative programming.

Table of Contents

Frontmatter

Extended Abstract

Frontmatter
Algebraic Connection Between Logic Programming and Machine Learning (Extended Abstract)
Abstract
There have been attempts to connect machine learning and symbolic reasoning, providing interfaces between them. This work focuses on our original approach to integrating machine learning and symbolic reasoning, in the context of algebraic approaches to logic programming. Here we realize logical reasoning using algebraic methods, in which algebraic data structures such as matrices and tensors are used to represent logical formulas. These reasoning methods are robust against noise, while allowing for high parallelism and scalable computation. Algebraic logic programming has been applied to fixpoint computation, abduction, answer set programming, and inductive logic programming.
Katsumi Inoue
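As a flavour of the algebraic approach (a minimal sketch, not the systems surveyed in the talk): for a propositional program whose rule bodies are single atoms, the least model can be computed by iterating a Boolean matrix-vector product to a fixpoint.

    -- Sketch: least-model computation via Boolean linear algebra.
    type Vec = [Bool]
    type Mat = [[Bool]]              -- entry (i,j): a rule  atom_i <- atom_j

    mv :: Mat -> Vec -> Vec          -- Boolean matrix-vector product
    mv m v = [ or (zipWith (&&) row v) | row <- m ]

    step :: Mat -> Vec -> Vec -> Vec -- one application of the T_P operator
    step m facts v = zipWith (||) facts (mv m v)

    lfp :: Mat -> Vec -> Vec         -- iterate until the valuation is stable
    lfp m facts = go (map (const False) facts)
      where go v = let v' = step m facts v in if v' == v then v else go v'

    -- Atoms [p,q,r], fact r, rules p <- q and q <- r: the least model is all three.
    -- lfp [[False,True,False],[False,False,True],[False,False,False]]
    --     [False,False,True]  ==  [True,True,True]

Haskell is used here only for illustration; the surveyed systems work with numerical matrices and tensors, which is what yields noise-robust, parallel evaluation.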

Rewriting

Frontmatter
ACGtk: A Toolkit for Developing and Running Abstract Categorial Grammars
Abstract
Abstract categorial grammars (ACGs) form an expressive grammatical framework whose formal properties have been extensively studied. While the framework can provide its own account, as a grammar, of linguistic phenomena, it is known to encode several grammatical formalisms, including context-free grammars, but also mildly context-sensitive formalisms such as tree-adjoining grammars or m-linear context-free rewriting systems, for which parsing is polynomial. The ACG toolkit we present provides a compiler, acgc, that checks ACGs and turns them into representations suitable for testing and parsing, which are used in the acg interpreter. We illustrate these functionalities and discuss implementation features, in particular the Datalog reduction on which parsing is based and the magic set rewriting techniques that can further be applied.
Maxime Guillaume, Sylvain Pogodalla, Vincent Tourneur
Term Evaluation Systems with Refinements: First-Order, Second-Order, and Contextual Improvement
Abstract
For a programming language, there are two kinds of term rewriting: run-time rewriting (“evaluation”) and compile-time rewriting (“refinement”). Whereas refinement resembles general term rewriting, evaluation is commonly constrained by Felleisen’s evaluation contexts. While evaluation specifies a programming language, refinement models optimisation and should be validated with respect to evaluation. Such validation can be given by Sands’ notion of contextual improvement. We formulate evaluation in a term-rewriting-theoretic manner for the first time, and introduce Term Evaluation and Refinement Systems (TERS). We then identify sufficient conditions for contextual improvement, and provide critical pair analysis for local coherence, which is the key sufficient condition. As case studies, we prove contextual improvement for a computational lambda-calculus and its extension with effect handlers.
Koko Muroya, Makoto Hamana
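A minimal sketch of the distinction, with an invented toy language (not the paper's TERS definitions): evaluation fires only inside evaluation contexts, refinement may fire anywhere, and contextual improvement asks that refinement never increases the number of evaluation steps.

    data Term = Lit Int | Add Term Term deriving Show

    -- Run-time rewriting: contract the leftmost Add whose arguments are values,
    -- i.e. the unique redex selected by Felleisen-style evaluation contexts.
    eval1 :: Term -> Maybe Term
    eval1 (Add (Lit m) (Lit n)) = Just (Lit (m + n))
    eval1 (Add l r)             = case eval1 l of
                                    Just l' -> Just (Add l' r)
                                    Nothing -> Add l <$> eval1 r
    eval1 (Lit _)               = Nothing

    -- Compile-time rewriting: simplify x + 0 anywhere in the term.
    refine :: Term -> Term
    refine (Add x (Lit 0)) = refine x
    refine (Add l r)       = Add (refine l) (refine r)
    refine t               = t

Contextual improvement for this pair would assert that, in every context, evaluating refine t takes at most as many eval1 steps as evaluating t.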
A Complete Dependency Pair Framework for Almost-Sure Innermost Termination of Probabilistic Term Rewriting
Abstract
Recently, we adapted the well-known dependency pair (DP) framework to a dependency tuple framework in order to prove almost-sure innermost termination (iAST) of probabilistic term rewrite systems. That approach was incomplete; in this paper, we improve it into a complete criterion for iAST by presenting a new, more elegant definition of DPs for probabilistic term rewriting. Based on this, we extend the probabilistic DP framework with new transformations. Our implementation in the tool AProVE shows that they increase its power considerably.
Jan-Christoph Kassing, Stefan Dollase, Jürgen Giesl
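For readers unfamiliar with the probabilistic setting, a minimal sketch (not the paper's DP framework): a probabilistic rule rewrites a term to a finite distribution over terms, and almost-sure termination asks that a normal form is reached with probability 1, as in the biased random walk below when p > 1/2.

    type Dist a = [(Double, a)]               -- finite probability distribution

    data Walk = Stop | Step Int deriving Show

    rule :: Double -> Walk -> Maybe (Dist Walk)  -- one probabilistic rewrite rule
    rule _ Stop     = Nothing                    -- normal form
    rule _ (Step 0) = Just [(1, Stop)]
    rule p (Step n) = Just [(p, Step (n - 1)), (1 - p, Step (n + 1))]

    -- Probability of terminating within k steps, by exhaustive unfolding;
    -- for p > 1/2 it tends to 1 as k grows, although some runs are infinite.
    pTerm :: Double -> Int -> Walk -> Double
    pTerm _ _ Stop = 1
    pTerm p k t
      | k <= 0    = 0
      | otherwise = case rule p t of
          Nothing -> 1
          Just d  -> sum [ q * pTerm p (k - 1) t' | (q, t') <- d ]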

Algebra

Frontmatter
Tabulation with Zippers
Abstract
Tabulation is a well-known technique for improving the efficiency of recursive functions with redundant function calls. A key point in the application of this technique is to identify a suitable representation for the table. In this paper, we propose the use of zippers as tables in the tabulation process. Our approach relies on a generic function zipWithZipper, which makes strong use of lazy evaluation to traverse two zippers in a circular manner. The technique turns out to be particularly efficient when the arguments to recursive calls are closely situated within the function domain. For example, in the case of natural numbers this means function calls on fairly contiguous values. Likewise, when dealing with tree structures, it means function calls on immediate sub-trees and parent nodes. This results in a concise and efficient zipper-based embedding of attribute grammars.
Marcos Viera, Alberto Pardo, João Saraiva
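A minimal sketch of the flavour of the technique (the paper's zipWithZipper is generic over datatypes): a zipper keeps a focus together with its left and right context, and lazy evaluation lets the table be consumed at nearby positions while it is still being produced. For a list-shaped domain this degenerates to the classic circular table.

    data Zipper a = Zipper [a] a [a]   -- left context (reversed), focus, right

    moveLeft :: Zipper a -> Maybe (Zipper a)
    moveLeft (Zipper (l:ls) x rs) = Just (Zipper ls l (x:rs))
    moveLeft _                    = Nothing

    -- Circular tabulation of Fibonacci: each cell is read back from the table
    -- one and two positions to the left of where it is being written.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    fib :: Int -> Integer
    fib n = fibs !! n    -- fib 10000 is fast; the naive recursion is not

The contiguity remark in the abstract corresponds to the bounded zipper moves (here: two cells to the left) needed to satisfy each recursive call.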
Declarative Pearl: Rigged Contracts
Abstract
Over 20 years ago, Peyton Jones et al. embarked on an adventure in financial engineering with their functional pearl on “Composing Contracts”. They introduced a combinator library—a domain-specific language—for precisely describing complex financial contracts and a formal denotational semantics for computing their value, for which they briefly sketched an implementation.
This paper reworks the design of their library to make the central datatype of contracts less ad-hoc by giving it a well-understood algebraic structure: the semiring. Then, interpreting a contract’s worth as a generic semiring homomorphism directly gives rise to a natural semantics for contracts, of which computing the (monetary) value is but one instance.
Alexander Vandenbroucke, Tom Schrijvers
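A minimal sketch of the algebraic point, with invented combinator names (the paper reworks Peyton Jones et al.'s richer library): contracts carry a semiring structure, and a valuation is the semiring homomorphism determined by how scalars are interpreted.

    class Semiring r where
      zero, one    :: r
      (<+>), (<.>) :: r -> r -> r

    instance Semiring Double where
      zero = 0; one = 1; (<+>) = (+); (<.>) = (*)

    data Contract
      = Zero                        -- the worthless contract
      | One                         -- receive one unit now
      | Contract :+: Contract       -- interpreted by <+>
      | Contract :*: Contract       -- interpreted by <.>
      | Scale Double Contract       -- scale by an observable (here a constant)

    -- The generic homomorphism; obs fixes how scalars land in the semiring.
    value :: Semiring r => (Double -> r) -> Contract -> r
    value _   Zero        = zero
    value _   One         = one
    value obs (c :+: d)   = value obs c <+> value obs d
    value obs (c :*: d)   = value obs c <.> value obs d
    value obs (Scale k c) = obs k <.> value obs c

    -- Monetary value is just one instance, into the (Double, +, *) semiring:
    price :: Contract -> Double
    price = value id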

Applications

Frontmatter
System Description: DeepLLM, Casting Dialog Threads into Logic Programs
Abstract
We automate deep step-by-step reasoning in an LLM dialog thread by recursively exploring alternatives (OR-nodes) and expanding details (AND-nodes) up to a given depth. Starting from a single succinct task-specific initiator, we steer the automated dialog thread to stay focussed on the task by synthesizing a prompt that summarizes the depth-first steps taken so far.
Our algorithm is derived from a simple recursive descent implementation of a Horn Clause interpreter, except that we adapt our logic engine to fit the natural language reasoning patterns LLMs have been trained on. Semantic similarity to ground-truth facts, or oracle advice from another LLM instance, is used to restrict the search space and validate the traces of justification steps returned as focussed and trustworthy answers. At the end, the unique minimal model of a generated Horn Clause program collects the results of the reasoning process.
As applications, we sketch implementations of consequence predictions, causal explanations, recommendation systems and topic-focussed exploration of scientific literature.
Paul Tarau
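A minimal sketch of the interpreter skeleton only, with invented names (in DeepLLM the alternatives and subgoals are synthesized by prompting the LLM, not read from a fixed table):

    type Goal = String

    -- OR-alternatives for a goal, each a list of AND-subgoals; an empty body
    -- is a fact. This fixed table stands in for answers generated by the LLM.
    clauses :: Goal -> [[Goal]]
    clauses "plan"  = [["task1", "task2"], ["task3"]]
    clauses "task1" = [[]]
    clauses "task2" = [[]]
    clauses _       = []

    -- Depth-bounded recursive descent over the implicit AND/OR tree,
    -- in the style of a propositional Horn clause interpreter.
    solve :: Int -> Goal -> Bool
    solve d _ | d <= 0 = False
    solve d g = or [ all (solve (d - 1)) body | body <- clauses g ]

    -- solve 3 "plan" == True, via the first alternative.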
A Constraint-Based Mathematical Modeling Library in Prolog with Answer Constraint Semantics
Abstract
Constraint logic programming emerged in the late 1980s as a highly declarative class of programming languages based on first-order logic and theories with decidable constraint languages, thereby subsuming Prolog, which is restricted to equality constraints over the Herbrand term domain. This approach has proven extremely successful in solving combinatorial problems in industry, which quickly led to the development of a variety of constraint solving libraries in standard programming languages. Later came the design of a purely declarative front-end constraint-based modeling language, MiniZinc, independent of the constraint solvers, in order to compare their performance and create model benchmarks. Beyond that purpose, the use of a high-level modeling language such as MiniZinc to develop complete applications, or to teach constraint programming, is limited by the impossibility of programming search strategies or new constraint solvers in a modeling language, as well as by the absence of an integrated development environment covering both constraint-based modeling and constraint solving. In this paper, we propose to solve those issues by taking Prolog, with its constraint solving libraries, as a unified relation-based modeling and programming language. We present a Prolog library for high-level constraint-based mathematical modeling, inspired by MiniZinc, using subscripted variables (arrays) in addition to lists and terms, and quantifiers and iterators in addition to recursion, together with a patch of the constraint libraries to allow array functional notations in constraints. We show that this approach does not incur a significant computation-time overhead and presents several advantages: one can focus on mathematical modeling, obtain answer constraints in addition to ground solutions, program search strategies or constraint solvers if needed, and debug models within a single modeling and programming environment.
François Fages
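As a language-neutral illustration of the modeling/solving split discussed above (the paper's library is, of course, Prolog): a declarative model states the constraints, while the search strategy remains ordinary code in the same language, here a naive generate-and-test N-queens.

    -- Place one queen per row; a partial solution lists the columns chosen so far.
    queens :: Int -> [[Int]]
    queens n = go n
      where
        go 0 = [[]]
        go k = [ q : qs | qs <- go (k - 1), q <- [1 .. n], safe q qs ]
        -- Constraints: distinct columns and distinct diagonals.
        safe q qs = and [ q /= c && abs (q - c) /= d | (d, c) <- zip [1 ..] qs ]

    -- length (queens 8) == 92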
Grants4Companies: Applying Declarative Methods for Recommending and Reasoning About Business Grants in the Austrian Public Administration (System Description)
Abstract
We describe the methods and technologies underlying the application Grants4Companies. The application uses a logic-based expert system to display a list of business grants suitable for the logged-in business. To evaluate suitability of the grants, formal representations of their conditions are evaluated against properties of the business, taken from the registers of the Austrian public administration. The logical language for the representations of the grant conditions is based on S-expressions. We further describe a Proof of Concept implementation of reasoning over the formalised grant conditions. The proof of concept is implemented in Common Lisp and interfaces with a reasoning engine implemented in Scryer Prolog. The application has recently gone live and is provided as part of the Business Service Portal by the Austrian Federal Ministry of Finance.
Björn Lellmann, Philipp Marek, Markus Triska
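A minimal sketch of the evaluation idea, with an invented condition vocabulary (the application's actual grant-condition language and register data are richer):

    import qualified Data.Map as M

    data SExpr = Atom String | List [SExpr]

    type Business = M.Map String Double    -- properties taken from the registers

    -- Evaluate a grant condition against the properties of a business.
    holds :: Business -> SExpr -> Bool
    holds b (List [Atom "and", x, y])         = holds b x && holds b y
    holds b (List [Atom "or",  x, y])         = holds b x || holds b y
    holds b (List [Atom "<", Atom k, Atom n]) =
      maybe False (< read n) (M.lookup k b)
    holds _ _                                 = False

    -- holds (M.fromList [("employees", 12)])
    --       (List [Atom "<", Atom "employees", Atom "50"])  ==  True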

Program Analysis

Frontmatter
Inferring Non-failure Conditions for Declarative Programs
Abstract
Unintended failures during a computation are painful but frequent during software development. Failures due to external reasons (e.g., missing files, no permissions) can be caught by exception handlers. Programming failures, such as calling a partially defined operation with unintended arguments, are often not caught due to the assumption that the software is correct. This paper presents an approach to verify such assumptions. For this purpose, non-failure conditions for operations are inferred and then checked in all uses of partially defined operations. In the positive case, the absence of such failures is ensured. In the negative case, the programmer could adapt the program to handle possibly failing situations and check the program again. Our method is fully automatic and can be applied to larger declarative programs. The results of an implementation for functional logic Curry programs are presented.
Michael Hanus
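A minimal sketch of the property being inferred, transposed to Haskell for illustration (the paper targets functional logic Curry programs): head is partial, its non-failure condition is not (null xs), and a use site is accepted when it establishes that condition.

    -- A partially defined operation: fails on the empty list.
    headP :: [a] -> a
    headP (x : _) = x
    headP []      = error "head: applied outside its domain"

    -- Accepted use site: the guard establishes the non-failure condition,
    -- so the call to headP can be proven not to fail.
    firstOrZero :: [Int] -> Int
    firstOrZero xs = if null xs then 0 else headP xs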
Being Lazy When It Counts
Practical Constant-Time Memory Management for Functional Programming
Abstract
Functional programming (FP) lets users focus on the business logic of their applications by providing them with high-level and composable abstractions. However, both automatic memory management schemes traditionally used for FP, namely tracing garbage collection and reference counting, may introduce latencies in places that can be hard to predict, which limits the applicability of the FP paradigm.
We reevaluate the use of lazy reference counting in single-threaded functional programming with guaranteed constant-time memory management, meaning that allocation and deallocation take only a bounded and predictable amount of time. This approach does not leak memory as long as we use uniform allocation sizes. Uniform allocation sizes were previously considered impractical in the context of imperative programming, but we find them surprisingly suitable for FP.
Preliminary benchmark results suggest that our approach is practical, as its performance is on par with Koka’s existing state-of-the-art implementation of reference counting for FP, sometimes even outperforming it. We also evaluate the effect of different allocation sizes on application performance and suggest ways of allowing large allocation in non-mission-critical parts of the program via Koka’s effect system.
We believe this potentially opens the door to many new industrial applications of FP, such as its use in real-time embedded software. In fact, the development of a high-level domain-specific language for describing latency-critical quantum physics experiments was one of the original use cases that prompted us to initiate this work.
Chun Kit Lam, Lionel Parreaux
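A minimal executable model of the lazy part, with invented structure (the paper works inside Koka's runtime, and reference counts greater than one are omitted here): releasing a value only pushes it onto a to-free stack, and each allocation reuses at most one deferred node, pushing that node's children in turn, so both operations do a bounded amount of work when allocation sizes are uniform.

    data Node = Node { payload :: Int, children :: [Int] }   -- uniform size

    -- An association list stands in for a constant-time slot array.
    data Heap = Heap { nodes :: [(Int, Node)], toFree :: [Int], fresh :: Int }

    release :: Int -> Heap -> Heap      -- O(1): defer, never recurse
    release r h = h { toFree = r : toFree h }

    alloc :: Node -> Heap -> (Int, Heap)
    alloc nd h = case toFree h of
      []     -> (fresh h, h { nodes = (fresh h, nd) : nodes h
                            , fresh = fresh h + 1 })
      r : rs ->                          -- reuse one deferred node...
        let kids = maybe [] children (lookup r (nodes h))
        in (r, h { nodes  = (r, nd) : filter ((/= r) . fst) (nodes h)
                 , toFree = kids ++ rs })  -- ...deferring its children

Uniform node sizes bound the number of children, which is what keeps the per-allocation work constant.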

Metaprogramming

Frontmatter
MetaOCaml: Ten Years Later
System Description
Abstract
MetaOCaml is a superset of OCaml for convenient code generation with static guarantees: the generated code is well-formed, well-typed and well-scoped, by construction. Not only does completed generated code always compile; code fragments with a variable escaping its scope are detected already during code generation. MetaOCaml has been employed for compiling domain-specific languages, generic programming, automating tedious specializations in high-performance computing, generating efficient computational kernels, and embedded programming. It is used in education, and has served as inspiration for several other metaprogramming systems.
Most well-known in MetaOCaml are the types for values representing generated code and the template-based mechanism to produce such values, a.k.a., brackets and escapes. MetaOCaml also features cross-stage persistence, generating ordinary and mutually-recursive definitions, first-class pattern-matching and heterogeneous metaprogramming.
The extant implementation of MetaOCaml, first presented at FLOPS 2014, has been continuously evolving. We describe the current design and implementation, stressing particularly notable additions. Among them is a new, efficient, and easy-to-retrofit translation from typed code templates to code combinators. Scope extrusion detection unexpectedly brought let-insertion, and a conclusive solution to the twenty-year-old vexing problem of cross-stage persistence.
Oleg Kiselyov
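A minimal sketch, in Haskell and with invented combinators, of what "code combinators" buy (MetaOCaml's brackets and escapes are built into the compiler and offer much stronger guarantees): a phantom type indexes code values by the type of the expression they generate, and higher-order abstract syntax keeps generated binders well-scoped.

    newtype Code a = Code { gen :: Int -> String }   -- Int supplies fresh names

    int :: Int -> Code Int
    int n = Code (\_ -> show n)

    add :: Code Int -> Code Int -> Code Int
    add x y = Code (\i -> "(" ++ gen x i ++ " + " ++ gen y i ++ ")")

    lam :: (Code a -> Code b) -> Code (a -> b)   -- HOAS: no way to escape a scope
    lam f = Code (\i ->
      let x = "x" ++ show i
      in "(\\" ++ x ++ " -> " ++ gen (f (Code (const x))) (i + 1) ++ ")")

    -- gen (lam (\x -> add x (int 1))) 0  ==  "(\\x0 -> (x0 + 1))"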

Open Access

An ML-Style Module System for Cross-Stage Type Abstraction in Multi-stage Programming
Abstract
We propose MetaFM, a novel ML-style module system that enables users to decompose multi-stage programs (i.e., programs written in a typed multi-stage programming language) into loosely coupled components in a manner natural with respect to type abstraction. The distinctive aspect of MetaFM is that it allows values at different stages to be bound in a single structure (i.e., \( \textbf{struct} \ \cdots \ \textbf{end} \)). This feature is crucial, for example, for defining a function and a macro that use one abstract type in common, without revealing the implementation detail of that type. MetaFM also accommodates functors, higher-kinded types, the \(\textbf{with} \ \textbf{type} \)-construct, etc. with staging features. For defining semantics and proving type safety, we employ an elaboration technique, i.e., type-directed translation to a target language, inspired by the formalization of F-ing Modules. Specifically, we give a set of elaboration rules for converting MetaFM programs into System F\(\omega ^{\langle \rangle }\), a multi-stage extension of System F\(\omega \), and prove that the elaboration preserves typing. Additionally, our language supports cross-stage persistence (CSP), a feature for code reuse spanning more than one stage, without breaking type safety.
Takashi Suwa, Atsushi Igarashi
Rhyme: A Data-Centric Multi-paradigm Query Language Based on Functional Logic Metaprogramming
System Description
Abstract
We present Rhyme, a declarative multi-paradigm query language designed for querying and transforming nested structures such as JSON, tensors, and beyond. Rhyme is designed to be multi-paradigm from the ground up, allowing it to seamlessly accommodate typical data processing operations, ranging from aggregations and group-bys to joins, while also having the versatility to express diverse computations like tensor expressions (à la einops) and to declaratively express visualizations (e.g., visualizing query outputs with tables, charts, and so on). Rhyme generates optimized JavaScript code for queries by constructing an intermediate representation that implicitly captures the program structure via dependencies. This paper presents a system description of the Rhyme implementation, highlighting key design decisions and the various use cases Rhyme covers.
Supun Abeysinghe, Tiark Rompf
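A minimal sketch of the kind of computation a typical Rhyme query denotes, with invented data (Rhyme itself is embedded in JavaScript and compiles queries to JavaScript): a group-by aggregation over nested records.

    import qualified Data.Map as M

    sales :: [(String, Double)]        -- (region, amount) pairs
    sales = [("eu", 10), ("us", 25), ("eu", 5)]

    -- "total amount per region", the shape of a group-by query.
    totals :: M.Map String Double
    totals = M.fromListWith (+) sales  -- fromList [("eu",15.0),("us",25.0)]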

Proofs

Frontmatter
Language-parameterized Proofs for Functional Languages with Subtyping
Abstract
Language designers often strive to prove that their programming languages satisfy the properties that were intended at the time of design. \(\textsc {Lang}\text {-}\textsc {n}\text {-}\textsc {Prove}\) is a DSL for expressing language-parametrized proofs, that is, proofs that apply to classes of languages rather than a single language. Prior work has used \(\textsc {Lang}\text {-}\textsc {n}\text {-}\textsc {Prove}\) to express the language-parametrized proofs of type soundness (excluding the substitution lemmas) for a certain class of functional languages. In this paper, we address this class of languages when subtyping is added to them. We provide the language-parametrized proofs of their type soundness (excluding the substitution lemmas) and of the equivalence between algorithmic and declarative subtyping. To express these proofs naturally, we have extended \(\textsc {Lang}\text {-}\textsc {n}\text {-}\textsc {Prove}\) with new operations. Our extension of \(\textsc {Lang}\text {-}\textsc {n}\text {-}\textsc {Prove}\) generates Abella proofs that machine-check the type soundness of a nontrivial class of functional languages with declarative and algorithmic subtyping, when just a few simple lemmas are admitted.
Seth Galasso, Matteo Cimini
System Description: A Theorem-Prover for Subregular Systems: The Language Toolkit and Its Interpreter, Plebby
Abstract
We introduce here a domain-specific language, PLEB. The Piecewise-Local Expression Builder interpreter (plebby) is an interactive system for defining, manipulating, and classifying regular formal languages. The interactive theorem-proving environment provides a generalization of regular expressions with which one can intuitively construct languages via constraints. These constraints retain their semantics upon extension to larger alphabets. The system allows one to decide implications and equalities, either at the language level (with a specified alphabet) or at the logical level (across all possible alphabets). Additionally, one can decide membership in a number of predefined classes, or arbitrary algebraic varieties. With several views of a language, including multiple algebraic structures, the system provides ample opportunity to explore and understand properties of languages.
Dakotah Lambert
Backmatter
Metadata
Title
Functional and Logic Programming
Editors
Jeremy Gibbons
Dale Miller
Copyright Year
2024
Publisher
Springer Nature Singapore
Electronic ISBN
978-981-97-2300-3
Print ISBN
978-981-97-2299-0
DOI
https://doi.org/10.1007/978-981-97-2300-3
