
2021 | Book

Logic-Based Program Synthesis and Transformation

30th International Symposium, LOPSTR 2020, Bologna, Italy, September 7–9, 2020, Proceedings


About this book

This book constitutes the refereed proceedings of the 30th International Symposium on Logic-Based Program Synthesis and Transformation, LOPSTR 2020, which was held during September 7–9, 2020.

The 15 papers presented in this volume were carefully reviewed and selected from 31 submissions. The book also contains two invited talks in full-paper length. The contributions are organized in topical sections: rewriting; unification; types; verification; model checking and probabilistic programming; program analysis and testing; and logics.

Table of Contents

Frontmatter

Rewriting

Frontmatter
Symbolic Computation in Maude: Some Tapas
Abstract
Programming in Maude is executable mathematical modeling. Your mathematical model is the code you execute. Both deterministic systems, specified equationally as so-called functional modules, and concurrent ones, specified in rewriting logic as system modules, are mathematically modeled and programmed this way. But rewriting logic is also a logical framework in which many different logics can be naturally represented. And one would like not only to execute these models, but to reason about them at a high level. For this, symbolic methods that can automate much of the reasoning are crucial. Many of them are actually supported by Maude itself or by some of its tools. These methods are very general: they apply not just to Maude, but to many other logics, languages and tools. This paper presents some tapas about these Maude-based symbolic methods in an informal way to make it easy for many other people to learn about, and benefit from, them.
José Meseguer
Runtime Complexity Analysis of Logically Constrained Rewriting
Abstract
Logically constrained rewrite systems (LCTRSs) are a versatile and efficient rewriting formalism that can be used to model programs from various programming paradigms, as well as simplification systems in compilers and SMT solvers. In this paper, we investigate techniques to analyse the worst-case runtime complexity of LCTRSs. For that, we exploit synergies between previously developed decomposition techniques for standard term rewriting by Avanzini et al. in conjunction with alternating time and size bound approximations for integer programs by Brockschmidt et al., and adapt these techniques suitably to LCTRSs. Furthermore, we provide novel modularization techniques to exploit loop bounds from recurrence equations which yield sublinear bounds. We have implemented the method to test its viability.
Sarah Winkler, Georg Moser
Confluence and Commutation for Nominal Rewriting Systems with Atom-Variables
Abstract
Nominal rewriting was introduced as an extension of first-order term rewriting by a binding mechanism based on the nominal approach. Recently, a new format of nominal rewriting has been introduced where rewrite rules are defined with atom-variables rather than atoms. In this paper, we investigate the difference between the new format and the original nominal rewriting, and prove confluence and commutation for some classes of rewriting systems whose rewrite rules have no proper overlaps which are computed using nominal unification with atom-variables. The properties we prove are expected to be used in a form of program transformation that is realised as an equivalence transformation of rewriting systems.
Kentaro Kikuchi, Takahito Aoto
Pattern Eliminating Transformations
Abstract
Program transformation is a common practice in computer science, and its many applications can have a range of different objectives. For example, a program written in a high-level language could be translated either into machine code for execution, or into a language suitable for formal verification. Such compilations are split into several so-called passes which generally aim at eliminating certain constructions of the original language to obtain a program in some intermediate language and finally generate the target code. Rewriting is a widely established formalism to describe the mechanism and the logic behind such transformations. In a typed context, the underlying type system can be used to give syntactic guarantees on the shape of the results obtained after each pass, but this approach can lead to an accumulation of auxiliary types that must be considered. We propose in this paper a less intrusive approach based on simply annotating the function symbols with the (anti-)patterns the corresponding transformations are supposed to eliminate. We show how this approach allows one to statically check that the rewrite system implementing the transformation is consistent with the annotations and thus that it eliminates the respective patterns.
Horatiu Cirstea, Pierre Lermusiaux, Pierre-Etienne Moreau

Unification

Frontmatter
Nominal Unification with Letrec and Environment-Variables
Abstract
Unification algorithms for nominal expressions with letrec and atom- and expression-variables have already been described in the literature. However, only explicit environments could be treated in nominal unification, and the use of abstract environments was restricted to nominal matching. This severely restricts the use of these algorithms in applications. The following two restrictions permit a step forward and strongly improve the coverage of application cases: expression- and environment-variables may occur at most once in the input equations. A terminating and complete nominal unification algorithm is described that computes complete sets of constrained unifiers. Since the set of ground instances of a complete set may be empty due to constraints, we also provide a decision algorithm for inputs which do not contain permutation-variables and show that nominal unifiability is then NP-complete. For input without an occurrence restriction on expression-variables, and w.r.t. garbage-free ground expressions, we sketch an adapted unification algorithm that produces a complete set of unifiers in NP time. For the decision problem we conjecture that it is harder in this case. We believe that lifting the linearity restrictions for environment-variables leads to a prohibitively high computational complexity.
Manfred Schmidt-Schauß, Yunus Kutz
Terminating Non-disjoint Combined Unification
Abstract
The equational unification problem, where the underlying equational theory may be given as the union of component equational theories, appears often in practice in many fields such as automated reasoning, logic programming, declarative programming, and the formal analysis of security protocols. In this paper, we investigate the unification problem in the non-disjoint union of equational theories via the combination of hierarchical unification procedures. In this context, a unification algorithm known for a base theory is extended with some additional inference rules to take into account the rest of the theory. We present a simple form of hierarchical unification procedure. The approach is particularly well-suited for any theory where a unification procedure can be obtained in a syntactic way using transformation rules to process the axioms of the theory. Hierarchical unification procedures are exemplified with various theories used in protocol analysis. Next, we look at modularity methods for combining theories already using a hierarchical approach. In addition, we consider a new complexity measure that allows us to obtain terminating (combined) hierarchical unification procedures.
Serdar Erbatur, Andrew M. Marshall, Christophe Ringeissen

Types

Frontmatter
slepice: Towards a Verified Implementation of Type Theory in Type Theory
Abstract
Dependent types have proven a useful technique for the development of verified software. Despite the existence of many systems based on dependent type theory, mostly interactive theorem provers but also programming languages, no such system is itself implemented using dependent types. Recently, a new approach to type inference and term synthesis for type theory with dependent types emerged that separates the process into an analysis phase, carried out in type theory, and a search phase, carried out in a logic programming engine.
We describe an architecture of a type inference and term synthesis engine for a language with dependent types that is based on the new approach and is feasible to implement using a dependently typed language. We demonstrate the architecture by describing slepice, a particular implementation.
František Farka

Open Access

Resourceful Program Synthesis from Graded Linear Types
Abstract
Linear types provide a way to constrain programs by specifying that some values must be used exactly once. Recent work on graded modal types augments and refines this notion, enabling fine-grained, quantitative specification of data use in programs. The information provided by graded modal types appears to be useful for type-directed program synthesis, where these additional constraints can be used to prune the search space of candidate programs. We explore one of the major implementation challenges of a synthesis algorithm in this setting: how does the synthesis algorithm efficiently ensure that resource constraints are satisfied throughout program generation? We provide two solutions to this resource management problem, adapting Hodas and Miller’s input-output model of linear context management to a graded modal linear type theory. We evaluate the performance of both approaches via their implementation as a program synthesis tool for the programming language Granule, which provides linear and graded modal typing.
Jack Hughes, Dominic Orchard
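The resource management problem described above can be pictured with a small Python sketch of Hodas and Miller’s input-output model (a simplified illustration only, not the paper’s Granule implementation; the names `use_var` and `synth_pair` and the dictionary representation are invented for this example):

```python
# Input-output context management: instead of guessing how to split the
# linear context between subproblems, each subproblem receives the full
# input context and returns the leftover (output) context, which is then
# threaded into the next subproblem.

def use_var(ctx, name):
    """Consume one use of a linear variable; return the leftover context."""
    if ctx.get(name, 0) == 0:
        raise ValueError(f"linear variable {name!r} unavailable")
    out = dict(ctx)
    out[name] -= 1
    return out

def synth_pair(ctx, left, right):
    """Synthesise a pair: thread the context through both components."""
    ctx1 = left(ctx)      # the left component consumes what it needs...
    ctx2 = right(ctx1)    # ...and the right component gets the leftovers
    return ctx2

ctx = {"x": 1, "y": 1}
leftover = synth_pair(ctx, lambda c: use_var(c, "x"), lambda c: use_var(c, "y"))
assert all(n == 0 for n in leftover.values())  # every resource used exactly once
```

In a graded setting the counts would be grades drawn from a semiring rather than plain natural numbers, but the threading pattern is the same.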

Verification

Frontmatter
Reasoning in the Theory of Heap: Satisfiability and Interpolation
Abstract
In recent work, we have proposed an SMT-LIB theory of heap tailored to Horn-clause verification. The theory makes it possible to lift verification approaches for heap-allocated data-structures to a language-independent level, and this way factor out the treatment of heap in verification tools. This paper gives an overview of the theory, and presents ongoing research on decision and interpolation procedures.
Zafer Esen, Philipp Rümmer
Algorithm Selection for Dynamic Symbolic Execution: A Preliminary Study
Abstract
Given a portfolio of algorithms, the goal of Algorithm Selection (AS) is to select the best algorithm(s) for a new, unseen problem instance. Dynamic Symbolic Execution (DSE) brings together concrete and symbolic execution to maximise program coverage. DSE uses a constraint solver to solve the path conditions and generate new inputs to explore. In this paper we join these lines of research by introducing a model that combines DSE and AS approaches. The proposed AS/DSE model is a generic and flexible framework enabling the DSE engine to solve the path conditions it collects with a portfolio of different solvers, by exploiting and extending the well-known AS techniques that have been developed over the last decade. In this way, one can increase the coverage and sometimes even outperform the aggregate coverage achievable by running all the solvers of the portfolio simultaneously.
Roberto Amadini, Graeme Gange, Peter Schachte, Harald Søndergaard, Peter J. Stuckey
Translation of Interactive Datalog Programs for Microcontrollers to Finite State Machines
Abstract
“Smart” devices have become ubiquitous in modern households and industry. Especially in home-automation, robotics, and sensing tasks, rule-based approaches seem ideal for describing the behavior of interactive systems. But managing input data and state is hard. With little choice of programming language, most code targeted at microcontrollers is written in imperative C or assembler.
Microlog is a deductive database language with a strong logic foundation, based on Datalog extended with a representation of time and calls to external functions that may be used to control sensors and actuators.
In this paper we describe a method to precalculate sets of Datalog facts that may be derivable for a point in time. Values that will be known only at runtime are represented as parameters of those “states”. During “state transition”, a small number of conditions on parameters and input values must be checked. By representing a possibly quite large number of facts as a single state number and a few parameter values, memory and computing time are saved. If no parameters are needed, the result of this compilation is basically a finite state machine.
Mario Wenzel, Stefan Brass
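The shape of such a compilation result can be sketched in Python (purely illustrative; the states, the `motion` input, and the 10-tick timeout are invented for this example, not taken from the paper):

```python
# Sets of derivable facts collapsed into numbered states, with
# runtime-only values kept as parameters. A transition checks a small
# condition on inputs and parameters instead of re-running inference.
#
# state 0: no facts derivable
# state 1: {alarm(T)} derivable, with parameter T = trigger time

def step(state, params, inp, now):
    if state == 0 and inp.get("motion"):
        return 1, {"T": now}          # derive alarm(now)
    if state == 1 and now - params["T"] >= 10:
        return 0, {}                  # alarm fact expires after 10 ticks
    return state, params              # otherwise nothing changes

s, p = 0, {}
s, p = step(s, p, {"motion": True}, now=5)   # enters state 1 with T = 5
s, p = step(s, p, {}, now=20)                # times out, back to state 0
```

The memory footprint at runtime is just the state number plus a few parameter values, which is what makes the approach viable on microcontrollers; with no parameters it degenerates to an ordinary finite state machine.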

Model Checking and Probabilistic Programming

Frontmatter
Generating Functions for Probabilistic Programs
Abstract
This paper investigates the usage of generating functions (GFs) encoding measures over the program variables for reasoning about discrete probabilistic programs. To that end, we define a denotational GF-transformer semantics for probabilistic while-programs, and show that it instantiates Kozen’s seminal distribution transformer semantics. We then study the effective usage of GFs for program analysis. We show that finitely expressible GFs enable checking super-invariants by means of computer algebra tools, and that they can be used to determine termination probabilities. The paper concludes by characterizing a class of—possibly infinite-state—programs whose semantics is a rational GF encoding a discrete phase-type distribution.
Lutz Klinkenberg, Kevin Batz, Benjamin Lucien Kaminski, Joost-Pieter Katoen, Joshua Moerman, Tobias Winkler
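The idea of a generating function encoding a program’s output distribution can be made concrete with a small Python example (illustrative only, not the paper’s transformer semantics): the program `c := 0; while flip(1/2): c := c + 1` leaves `c` geometrically distributed, with probability generating function G(z) = Σ_k (1/2)^(k+1) z^k. Truncating the series lets us read off exact probabilities and watch the mass approach 1, i.e. almost-sure termination.

```python
from fractions import Fraction

def geometric_gf_coeffs(n):
    """First n coefficients of G(z) for the fair-coin geometric program:
    the k-th coefficient is Pr[c = k] = (1/2)^(k+1)."""
    return [Fraction(1, 2 ** (k + 1)) for k in range(n)]

coeffs = geometric_gf_coeffs(20)
term_prob = sum(coeffs)                    # truncated G(1): mass seen so far
assert coeffs[3] == Fraction(1, 16)        # Pr[c = 3] = 1/16, exactly
assert 1 - term_prob == Fraction(1, 2**20) # tail mass vanishes geometrically
```

Working with exact rationals rather than floats mirrors the role of computer algebra in the paper: properties such as invariants or termination probabilities can be checked symbolically instead of being approximated.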
Verification of Multiplayer Stochastic Games via Abstract Dependency Graphs
Abstract
We design and implement an efficient model checking algorithm for alternating-time temporal logic (ATL) on turn-based multiplayer stochastic games with weighted transitions. This logic allows us to query about the existence of multiplayer strategies that aim to maximize the probability of game runs satisfying resource-bounded next and until logical operators, while requiring that the accumulated weight along the successful runs does not exceed a given upper bound. Our method relies on a recently introduced formalism of abstract dependency graphs (ADG) and we provide an efficient reduction of our model checking problem to finding the minimum fixed-point assignment on an ADG over the domain of unit intervals extended with certain-zero optimization. As the fixed-point computation on ADGs is performed in an on-the-fly manner, without the need to generate the whole graph a priori, we achieve a performance that is comparable with the state-of-the-art model checker PRISM-games for finding the exact solutions and sometimes an order of magnitude faster for queries that ask about approximate probability bounds. We document this on a series of scalable experiments from the PRISM-games benchmark that we annotate with weight information.
Søren Enevoldsen, Mathias Claus Jensen, Kim Guldstrand Larsen, Anders Mariegaard, Jiří Srba

Program Analysis and Testing

Frontmatter
Testing Your (Static Analysis) Truths
Abstract
Static analysis is nowadays an essential component of many software development toolsets. Despite some notable successes in the validation of compilers, comparatively little work exists on the systematic validation of static analyzers, whose correctness and reliability are critical if they are to be inserted in production environments. Contributing factors may be the intrinsic difficulty of formally verifying code that is quite complex and of finding suitable oracles for testing it. In this paper, we propose a simple, automatic method for testing abstract interpretation-based static analyzers. Broadly, it consists in checking, over a suite of benchmarks, that the properties inferred statically are satisfied dynamically. The main advantage of our approach is its simplicity, which stems directly from framing it within the Ciao assertion-based validation framework, and its blended static/dynamic assertion checking approach. We show that in this setting, the analysis can be tested with little effort by combining the following components already present in the framework: the static analyzer, the assertion run-time checking mechanism, the random test case generator, and the unit-test framework. Together they compose a tool that can effectively discover and locate errors in the different components of the analysis framework. We apply our approach to test some of CiaoPP’s analysis domains over a wide range of programs, successfully finding non-trivial, previously undetected bugs, with a low degree of effort.
Ignacio Casso, José F. Morales, P. López-García, Manuel V. Hermenegildo
Slicing Unconditional Jumps with Unnecessary Control Dependencies
Abstract
Program slicing is an analysis technique that has a wide range of applications, ranging from compilers to clone detection software, and that has been applied to practically all programming languages. Most program slicing techniques are based on a widely used program representation, the System Dependence Graph (SDG). However, in the presence of unconditional jumps, there exist some situations where most SDG-based slicing techniques are not as accurate as possible, including more code than strictly necessary. In this paper, we identify one of these scenarios, pointing out the cause of the inaccuracy, and describing the initial solution to the problem proposed in the literature, together with an extension, which solves the problem completely. These solutions modify both the SDG generation and the slicing algorithm. Additionally, we propose an alternative solution that solves the problem by modifying only the SDG generation, leaving the slicing algorithm untouched.
Carlos Galindo, Sergio Pérez, Josep Silva

Logics

Frontmatter
A Formal Model for a Linear Time Correctness Condition of Proof Nets of Multiplicative Linear Logic
Abstract
In a previous paper, we reported a new linear-time correctness condition for proof nets of Multiplicative Linear Logic without units, where we gave only an informal description of the algorithm. In this paper, we give a formal model for the algorithm. Our formal model is based on a finite state transition system with queues as well as union-find trees as data structures. The model has been obtained by trial and error based on a concrete implementation of the algorithm. In addition, the algorithm has a subtle mechanism to avoid deadlock. We give an invariant property of the state transition system that guarantees deadlock-freedom.
Satoshi Matsuoka
Synthesis of Modality Definitions and a Theorem Prover for Epistemic Intuitionistic Logic
Abstract
We propose a mechanism for automating the discovery of definitions that, when added to a logic system for which we have a theorem prover, extend it to support an embedding of a new logic system. As a result, the synthesized definitions, when added to the prover, implement a prover for the new logic.
As an instance of the proposed mechanism, we derive a Prolog theorem prover for an interesting but unconventional epistemic logic by starting from the sequent calculus G4IP, which we extend with operator definitions to obtain an embedding in intuitionistic propositional logic (IPC). With the help of a candidate definition formula generator, we discover epistemic operators for which axioms and theorems of Artemov and Protopopescu’s Intuitionistic Epistemic Logic (IEL) hold and formulas expected to be non-theorems fail.
We compare the embedding of IEL in IPC with a similarly discovered successful embedding of Dosen’s double negation modality, judged inadequate as an epistemic operator. Finally, we discuss the failure of the necessitation rule for an otherwise successful S4 embedding and share our thoughts about the intuitions explaining these differences between epistemic and alethic modalities in the context of the Brouwer-Heyting-Kolmogorov semantics of intuitionistic reasoning and knowledge acquisition.
Paul Tarau
Backmatter
Metadata
Title
Logic-Based Program Synthesis and Transformation
Editor
Prof. Maribel Fernández
Copyright Year
2021
Electronic ISBN
978-3-030-68446-4
Print ISBN
978-3-030-68445-7
DOI
https://doi.org/10.1007/978-3-030-68446-4
