2013 | Book

Logic-Based Program Synthesis and Transformation

22nd International Symposium, LOPSTR 2012, Leuven, Belgium, September 18-20, 2012, Revised Selected Papers


About this book

This book constitutes the thoroughly refereed post-conference proceedings of the 22nd International Symposium on Logic-Based Program Synthesis and Transformation, LOPSTR 2012, held in Leuven, Belgium in September 2012.

The 13 revised full papers presented together with 2 invited talks were carefully reviewed and selected from 27 submissions. Among the topics covered are specification, synthesis, verification, analysis, optimization, specialization, security, certification, applications and tools, program/model manipulation, and transformation techniques for any programming language paradigm.

Table of Contents

Frontmatter
Symbolic Evaluation Graphs and Term Rewriting — A General Methodology for Analyzing Logic Programs
(Abstract)
Abstract
There exist many powerful techniques to analyze termination and complexity of term rewrite systems (TRSs). Our goal is to use these techniques for the analysis of other programming languages as well. For instance, approaches to prove termination of definite logic programs by a transformation to TRSs have been studied for decades. However, a challenge is to handle languages with more complex evaluation strategies (such as Prolog, where predicates like the cut influence the control flow).
We present a general methodology for the analysis of such programs. Here, the logic program is first transformed into a symbolic evaluation graph which represents all possible evaluations in a finite way. Afterwards, different analyses can be performed on these graphs. In particular, one can generate TRSs from such graphs and apply existing tools for termination or complexity analysis of TRSs to infer information on the termination or complexity of the original logic program.
More information can be found in the full paper [1].
Jürgen Giesl, Thomas Ströder, Peter Schneider-Kamp, Fabian Emmes, Carsten Fuhs
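As a rough illustration of the kind of program such analyses target (a hypothetical example, not taken from the paper), consider a Prolog predicate whose control flow depends on the cut, so that a naive translation to term rewriting would lose the intended evaluation strategy:

  % Peano division; the cut in the first clause prunes the remaining
  % clauses once the divisor is known to be zero.
  div(_, 0, _) :- !, fail.
  div(0, _, 0) :- !.
  div(X, Y, s(Z)) :- minus(X, Y, X1), div(X1, Y, Z).

  minus(X, 0, X).
  minus(s(X), s(Y), Z) :- minus(X, Y, Z).

  % Without the cut in the first clause, a call like div(s(0), 0, R)
  % would loop via the third clause; with the cut it fails finitely.
  % A symbolic evaluation graph can represent this pruning explicitly
  % before TRSs are extracted for termination or complexity analysis.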
An Introduction to Search Combinators
Abstract
The ability to model search in a constraint solver can be an essential asset for solving combinatorial problems. However, existing infrastructure for defining search heuristics is often inadequate. Either modeling capabilities are extremely limited or users are faced with a general-purpose programming language whose features are not tailored towards writing search heuristics. As a result, major improvements in performance may remain unexplored.
This article introduces search combinators, a lightweight and solver-independent method that bridges the gap between a conceptually simple modeling language for search (high-level, functional and naturally compositional) and an efficient implementation (low-level, imperative and highly non-modular). By allowing the user to define application-tailored search strategies from a small set of primitives, search combinators effectively provide a rich domain-specific language (DSL) for modeling search to the user. Remarkably, this DSL comes at a low implementation cost to the developer of a constraint solver.
Tom Schrijvers, Guido Tack, Pieter Wuille, Horst Samulowitz, Peter J. Stuckey
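To convey the compositional flavour of such a DSL, here is a minimal, hypothetical strategy interpreter in plain Prolog; the combinator names are chosen for illustration and the sketch is not the solver-level implementation described in the paper:

  % A strategy is one of:
  %   prune         - fail immediately
  %   base(Goal)    - enumerate the solutions of Goal
  %   and(S1, S2)   - run S1, then S2 on each of its solutions
  %   or(S1, S2)    - explore S1 and then S2
  search(prune)       :- fail.
  search(base(Goal))  :- call(Goal).
  search(and(S1, S2)) :- search(S1), search(S2).
  search(or(S1, S2))  :- ( search(S1) ; search(S2) ).

  % Example: enumerate X from a small domain, then filter it.
  % ?- search(and(base(member(X, [1,2,3])), base(X > 1))).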
A Declarative Pipeline Language for Complex Data Analysis
Abstract
We introduce BANpipe – a logic-based scripting language designed to model complex compositions of time-consuming analyses. Its declarative semantics is described together with alternative operational semantics facilitating goal-directed execution, parallel execution, change propagation and type checking. A portable implementation is provided, which supports expressing complex pipelines that may integrate different Prolog systems and provides automatic management of files.
Henning Christiansen, Christian Theil Have, Ole Torp Lassen, Matthieu Petit
Semantic Code Clones in Logic Programs
Abstract
In this paper, we study what constitutes a semantic code clone pair in a logic program. Unlike our earlier work, which focused on simple syntactic equivalence for defining clones, we propose a more general approximation based on operational semantics and transformation rules. This new definition captures a wider set of clones, and allows us to formally define the conditions under which a number of refactorings can be applied.
Céline Dandois, Wim Vanhoof
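A hypothetical example of the kind of clone pair such a broader definition can capture: two predicates computing the same relation (list length) with different recursion schemes, so a purely syntactic comparison would not relate them:

  len1([], 0).
  len1([_|T], N) :- len1(T, M), N is M + 1.

  len2(L, N) :- len2(L, 0, N).
  len2([], Acc, Acc).
  len2([_|T], Acc, N) :- Acc1 is Acc + 1, len2(T, Acc1, N).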
Specialization with Constrained Generalization for Software Model Checking
Abstract
We present a method for verifying properties of imperative programs by using techniques based on constraint logic programming (CLP). We consider a simple imperative language, called SIMP, extended with a nondeterministic choice operator, and we address the problem of checking whether or not a safety property ϕ (which specifies that an unsafe configuration cannot be reached) holds for a SIMP program P. The operational semantics of the language SIMP is specified via an interpreter I written as a CLP program. The first phase of our verification method consists in specializing I with respect to P, thereby deriving a specialized interpreter I_P. Then, we specialize I_P with respect to the property ϕ and the input values of P, with the aim of deriving, if possible, a program whose least model is a finite set of constrained facts. For this purpose we introduce a novel generalization strategy which, during specialization, has the objective of preserving the so-called branching behaviour of the predicate definitions. We have fully automated our method and carried out an experimental evaluation on examples taken from the literature. The evaluation shows that our method is competitive with respect to state-of-the-art software model checkers.
Emanuele De Angelis, Fabio Fioravanti, Alberto Pettorossi, Maurizio Proietti
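As a rough sketch of the underlying encoding (assumed for illustration, not the paper's actual interpreter), the reachability of an unsafe configuration can be phrased as a constraint logic program, here using SWI-Prolog's clp(fd):

  :- use_module(library(clpfd)).

  % Imperative fragment:  x := 0;  while (x < n) x := x + 1;
  init(state(0, N))               :- N #>= 0.
  step(state(X, N), state(X1, N)) :- X #< N, X1 #= X + 1.
  unsafe(state(X, N))             :- X #> N.   % x exceeding n is the unsafe configuration

  reach(S)  :- init(S).
  reach(S1) :- reach(S0), step(S0, S1).

  % Safety holds iff the query  ?- reach(S), unsafe(S).  has no solution.
  % Plain evaluation of such programs may not terminate; the paper's
  % specialization strategy aims to produce a program whose least model
  % is a finite set of constrained facts, making the check effective.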
Enhancing Declarative Debugging with Loop Expansion and Tree Compression
Abstract
Declarative debugging is a semi-automatic debugging technique that allows the programmer to debug a program without the need to see the source code. The debugger generates questions about the results obtained in different computations and the programmer only has to answer them to find the bug. Declarative debugging uses an internal representation of programs called the execution tree, whose structure strongly influences the debugger's performance. In this work we introduce two techniques that optimize the structure of execution trees. In particular, we expand and collapse the representation of loops, allowing the debugger to find bugs with a reduced number of questions.
David Insa, Josep Silva, César Tomás
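A small, hypothetical example of the setting: a buggy predicate and the questions a declarative debugger asks while traversing its execution tree (the paper's loop expansion and tree compression reduce how many such questions are needed):

  % Bug: the spurious "+ 1" in the second clause.
  sum([], 0).
  sum([X|Xs], S) :- sum(Xs, S1), S is S1 + X + 1.

  % ?- sum([1,2], S).      % yields S = 5 instead of the intended 3.
  %
  % Typical debugger dialogue over the execution tree:
  %   is sum([], 0) valid?    -> yes
  %   is sum([2], 3) valid?   -> no
  % All children of the node sum([2], 3) are valid, so the second
  % clause is reported as the buggy one.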
XACML 3.0 in Answer Set Programming
Abstract
We present a systematic technique for transforming XACML 3.0 policies into Answer Set Programming (ASP). We show that the resulting logic program has a unique answer set that directly corresponds to our formalisation of the standard semantics of XACML 3.0 from [9]. We demonstrate how our results make it possible to use off-the-shelf ASP solvers to formally verify properties of access control policies represented in XACML, such as checking the completeness of a set of access control policies and verifying policy properties.
Carroline Dewi Puspa Kencana Ramli, Hanne Riis Nielson, Flemming Nielson
Types vs. PDGs in Information Flow Analysis
Abstract
Type-based and PDG-based information flow analysis techniques are currently developed independently in a competing manner, with different strengths regarding coverage of language features and security policies. In this article, we study the relationship between these two approaches. One key insight is that a type-based information flow analysis need not be less precise than a PDG-based analysis. For proving this result we establish a formal connection between the two approaches which can also be used to transfer concepts from one tradition of information flow analysis to the other. The adoption of rely-guarantee-style reasoning from security type systems, for instance, enabled us to develop a PDG-based information flow analysis for multi-threaded programs.
Heiko Mantel, Henning Sudbrock
Galliwasp: A Goal-Directed Answer Set Solver
Abstract
Galliwasp is a goal-directed implementation of answer set programming. Unlike other answer set solvers, it computes partial answer sets that are provably extensible to full answer sets, and it can execute arbitrary answer set programs in a top-down manner similar to SLD resolution. Candidate answer sets are generated by executing ordinary rules in a top-down, goal-directed manner using coinduction; Galliwasp then checks whether the candidate answer sets are consistent with the restrictions imposed by OLON rules, and those that are consistent are reported as solutions. Execution efficiency is significantly improved by performing the consistency check incrementally, i.e., as soon as an element of the candidate answer set is generated. We discuss the design of the Galliwasp system and its implementation, and present performance figures, which are comparable to those of other popular answer set solvers.
Kyle Marple, Gopal Gupta
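For illustration, a tiny answer set program (hypothetical, not taken from the paper) showing the two kinds of rules the solver distinguishes, written in ASP syntax:

  % Ordinary rules: an even loop over negation, giving the candidate
  % answer sets {p} and {q}.
  p :- not q.
  q :- not p.

  % An OLON rule (odd loop over negation): it behaves as a constraint
  % rejecting any answer set that contains q but not r.
  r :- q, not r.

  % The candidate {q} is rejected by the OLON check, so {p} is the only
  % answer set.  A goal-directed solver generates candidates top-down
  % from a query and checks OLON consistency incrementally.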
Computing More Specific Versions of Conditional Rewriting Systems
Abstract
Rewrite systems obtained by some automated transformation often have a poor syntactic structure even if they have good properties from a semantic point of view. For instance, a rewrite system might have overlapping left-hand sides even if it can only produce at most one constructor normal form (i.e., value). In this paper, we propose a method for computing “more specific” versions of deterministic conditional rewrite systems (i.e., typical functional programs) by replacing a given rule (e.g., an overlapping rule) with a finite set of instances of this rule. In some cases, the technique is able to produce a non-overlapping system from an overlapping one. We have applied the transformation to improve the systems produced by a previous technique for function inversion with encouraging results (all the overlapping systems were successfully transformed to non-overlapping systems).
Naoki Nishida, Germán Vidal
Improving Determinization of Grammar Programs for Program Inversion
Abstract
The inversion method proposed by Glück and Kawabe uses grammar programs as intermediate results that comprise sequences of operations (data generation, matching, etc.). The determinization method used in the inversion method fails for a grammar program whose collection of item sets causes a conflict, even if there exists a deterministic program equivalent to it. In this paper, by ignoring shift/shift conflicts, we improve the determinization method so as to cover grammar programs causing shift/shift conflicts. Moreover, we propose a method to eliminate infeasible definitions from unfolded grammar programs and show that the method succeeds in determinizing some grammar programs for which the original method fails. By using the method as a post-process of the original inversion method, we make the original method strictly more powerful.
Minami Niwa, Naoki Nishida, Masahiko Sakai
A Framework for Guided Test Case Generation in Constraint Logic Programming
Abstract
Performing test case generation by symbolic execution on large programs becomes quickly impracticable due to the path explosion problem. A common limitation that this problem poses is the generation of an unnecessarily large number of possibly irrelevant or redundant test cases, even for medium-size programs. Tackling the path explosion problem and selecting high quality test cases are considered major challenges in the software testing community. In this paper we propose a constraint logic programming-based framework to guide symbolic execution, and thus test case generation, towards a more relevant and potentially smaller subset of paths in the program under test. The framework is realized as a tool and empirical results demonstrate its applicability and effectiveness. We show how the framework can help to obtain high quality test cases and to alleviate the scalability issues that limit most symbolic execution-based test generation approaches.
José Miguel Rojas, Miguel Gómez-Zamalloa
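A generic sketch of the underlying idea (assumed for illustration, not the paper's tool, using SWI-Prolog's clp(fd)): each feasible path of the program under test becomes a conjunction of constraints, and labeling a solution yields a concrete test input for that path:

  :- use_module(library(clpfd)).

  % Program under test:  if (x > y) z = x - y; else z = y - x;
  path(X, Y, Z, then_branch) :- X #>  Y, Z #= X - Y.
  path(X, Y, Z, else_branch) :- X #=< Y, Z #= Y - X.

  % One concrete test case per feasible path.
  test_case(Path, X, Y, Z) :-
      [X, Y] ins -100..100,
      path(X, Y, Z, Path),
      labeling([], [X, Y]).

  % ?- test_case(P, X, Y, Z).
  % Guidance, in the sense of the paper, amounts to constraining or
  % pruning which paths are explored, e.g. by adding coverage-driven
  % constraints before labeling.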
Simplifying the Verification of Quantified Array Assertions via Code Transformation
Abstract
Quantified assertions pose a particular challenge for automated software verification tools. They are required when proving even the most basic properties of programs that manipulate arrays, and are thus a major limit on the applicability of fully automatic analysis. This paper presents a simple program transformation approach based on induction to simplify the verification task. The technique simplifies both the program and the assertion to be verified. Experiments using an implementation of this technique show a significant improvement in performance as well as an increase in the range of programs that can be checked fully automatically.
Mohamed Nassim Seghir, Martin Brain
Proving Properties of Co-logic Programs with Negation by Program Transformations
Abstract
A framework for unfold/fold transformation of (constraint) co-logic programs has been proposed recently, which can be used to prove properties of co-logic programs, thereby allowing us to reason about infinite sequences of events such as the behavior of reactive systems. The main problem with this approach is that only definite co-logic programs are considered, thus representing a rather narrow class of co-logic programs. In this paper we consider "negation elimination", a familiar program transformation method, tailored to co-logic programs; given a program for predicate p(X), negation elimination derives a program which computes its negation ¬p(X), when the program satisfies certain conditions. We show that negation elimination can be used for co-logic programs, and its application is correct under the alternating fixpoint semantics of co-logic programs. We show by examples how negation elimination, when incorporated into the previous framework for unfold/fold transformation, allows us to represent and reason about a wider class of co-logic programs. We also discuss the difference between negation elimination applied to co-logic programs and the conventional negative unfolding applied to stratified programs.
Hirohisa Seki
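For readers unfamiliar with co-logic programs, a minimal coinductive example (a generic sketch, assuming SWI-Prolog's library(coinduction); not taken from the paper):

  :- use_module(library(coinduction)).
  :- coinductive stream/1.

  % stream/1 holds for infinite lists of integers; read inductively it
  % could never succeed, read coinductively it succeeds on rational
  % (cyclic) infinite terms.
  stream([H|T]) :- integer(H), stream(T).

  % ?- X = [0,1,2|X], stream(X).   % succeeds coinductively
  %
  % Co-logic programs mix such coinductive predicates with ordinary
  % inductive ones; the paper studies how to derive a program for the
  % negation of such a predicate by transformation.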
Program Analysis and Manipulation to Reproduce Learners’ Erroneous Reasoning
Abstract
Pedagogical research shows that learner errors are seldom random but systematic. Good teachers are capable of inferring from learners’ input the erroneous procedure they are following, and use the result of such deep cognitive diagnoses to repair its incorrect parts. We report a method for the automatic reconstruction of such erroneous procedures based on learner input and the analysis and manipulation of logic programs. The method relies on an iterative application of two algorithms: an innovative use of algorithmic debugging to identify learner errors by the analysis of (initially) correct (sic) Prolog-based procedures, and a subsequent program manipulation phase where errors are introduced into (initially) correct procedures. The iteration terminates with the derivation of an erroneous procedure that was followed by the learner. The procedure, and its step-wise reconstruction, can then be used to inform remedial feedback.
Claus Zinn
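A hypothetical sketch of the kind of procedure and perturbation involved (illustrative only; the paper's domain models are more elaborate): a correct column-subtraction step next to a classic "smaller-from-larger" learner bug obtained by perturbing it:

  % Correct step: borrow when the minuend digit is smaller.
  subtract_digit(M, S, D, borrow)    :- M <  S, D is M + 10 - S.
  subtract_digit(M, S, D, no_borrow) :- M >= S, D is M - S.

  % Perturbed step reproducing the "smaller-from-larger" bug:
  % the learner always subtracts the smaller digit from the larger one.
  buggy_subtract_digit(M, S, D, no_borrow) :- M <  S, D is S - M.
  buggy_subtract_digit(M, S, D, no_borrow) :- M >= S, D is M - S.

  % Algorithmic debugging compares the learner's answers against the
  % correct procedure to locate the disagreeing step; program
  % manipulation then introduces the corresponding perturbation.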
Backmatter
Metadata
Title
Logic-Based Program Synthesis and Transformation
Edited by
Elvira Albert
Copyright Year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-38197-3
Print ISBN
978-3-642-38196-6
DOI
https://doi.org/10.1007/978-3-642-38197-3
