
2013 | Book

Logic for Programming, Artificial Intelligence, and Reasoning

19th International Conference, LPAR-19, Stellenbosch, South Africa, December 14-19, 2013. Proceedings

Edited by: Ken McMillan, Aart Middeldorp, Andrei Voronkov

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 19th International Conference on Logic for Programming, Artificial Intelligence and Reasoning, LPAR-19, held in December 2013 in Stellenbosch, South Africa. The 44 regular papers and 8 tool descriptions and experimental papers included in this volume were carefully reviewed and selected from 152 submissions. The series of International Conferences on Logic for Programming, Artificial Intelligence and Reasoning (LPAR) is a forum where, year after year, some of the most renowned researchers in the areas of logic, automated reasoning, computational logic, programming languages and their applications come to present cutting-edge results, to discuss advances in these fields, and to exchange ideas in a scientifically emerging part of the world.

Table of Contents

Frontmatter
An Algorithm for Enumerating Maximal Models of Horn Theories with an Application to Modal Logics

The fragment of propositional logic known as Horn theories plays a central role in automated reasoning. The problem of enumerating the maximal models of a Horn theory (MaxMod) has been proved to be computationally hard, unless P = NP. To the best of our knowledge, the only algorithm available for it is one based on a brute-force approach. In this paper, we provide an algorithm for the problem of enumerating the maximal subsets of facts that do not entail a distinguished atomic proposition in a definite Horn theory (MaxNoEntail). We show that MaxMod is polynomially reducible to MaxNoEntail (and vice versa), making it possible to solve the former problem as well using the proposed algorithm. Addressing MaxMod via MaxNoEntail opens, inter alia, the possibility of benefiting from the monotonicity of the notion of entailment. (The notion of model does not enjoy such a property.) We also discuss an application of MaxNoEntail to expressiveness issues for modal logics, which reveals the effectiveness of the proposed algorithm.

Luca Aceto, Dario Della Monica, Anna Ingólfsdóttir, Angelo Montanari, Guido Sciavicco
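To fix intuitions: in a definite Horn theory, entailment of an atom from a set of facts is a simple forward-chaining closure, and its monotonicity in the fact set is what the MaxNoEntail formulation exploits. Below is a brute-force sketch (illustrative only, not the paper's algorithm; all names are ours):

```python
from itertools import combinations

def closure(rules, facts):
    """Forward chaining for definite Horn rules (body_set, head)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return derived

def max_no_entail(rules, facts, goal):
    """Subset-maximal sets of facts whose closure does not contain goal.
    Brute force over all subsets; exponential, for illustration only."""
    subsets = [set(c) for k in range(len(facts), -1, -1)
               for c in combinations(facts, k)
               if goal not in closure(rules, set(c))]
    return [s for s in subsets if not any(s < t for t in subsets)]

rules = [({"a", "b"}, "p"), ({"c"}, "p")]
print(max_no_entail(rules, ["a", "b", "c"], "p"))  # [{'a'}, {'b'}]
```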
May-Happen-in-Parallel Analysis for Priority-Based Scheduling

A may-happen-in-parallel (MHP) analysis infers the sets of pairs of program points that may execute in parallel along a program’s execution. This is an essential piece of information for detecting data races, and also for inferring more complex properties of concurrent programs; e.g., deadlock freeness, termination and resource consumption analyses can greatly benefit from the MHP relations to increase their accuracy. Previous MHP analyses have assumed a worst-case scenario by adopting a simplistic (non-deterministic) task scheduler which can select any available task. While the results of the analysis for a non-deterministic scheduler are obviously sound, they can be overly pessimistic. We present an MHP analysis for an asynchronous language with prioritized task buffers. Priority-based scheduling is arguably the most common scheduling strategy adopted in the implementation of concurrent languages. The challenge is to take task priorities into account at static analysis time in order to filter out infeasible MHP pairs.

Elvira Albert, Samir Genaim, Enrique Martin-Martin
The Complexity of Clausal Fragments of LTL

We introduce and investigate a number of fragments of propositional temporal logic LTL over the flow of time (ℤ, <). The fragments are defined in terms of the available temporal operators and the structure of the clausal normal form of the temporal formulas. We determine the computational complexity of the satisfiability problem for each of the fragments, which ranges from NLogSpace to PTime, NP, and PSpace.

Alessandro Artale, Roman Kontchakov, Vladislav Ryzhikov, Michael Zakharyaschev
A Semantic Basis for Proof Queries and Transformations

We add updates to the query language PrQL, designed for inspecting machine representations of proofs. PrQL natively supports hiproofs that express proof structure using hierarchically nested labelled trees, which we claim is a natural way of taming the complexity of huge proofs. Query-driven updates allow us to change this structure, in particular, to transform proofs produced by interactive theorem provers into forms that are easier for humans to understand, or that could be consumed by other tools. In this paper we motivate and define basic update operations, using an abstract denotational semantics of hiproofs and queries. This extends our previous semantics for queries based on syntactic tree representations. We define update operations that add and remove sub-proofs or manipulate the hierarchy to group and ungroup nodes. We show that these basic operations are well-behaved and hence can form a sound core for a hierarchical transformation language. Our study here is firmly in language design and semantics; implementation strategies and study of sub-languages of our query language with good complexity will come later.

David Aspinall, Ewen Denney, Christoph Lüth
Expressive Path Queries on Graphs with Data

Graph data models have recently become popular owing to their applications, e.g., in social networks and the semantic web. Typical navigational query languages over graph databases — such as Conjunctive Regular Path Queries (CRPQs) — cannot express relevant properties of the interaction between the underlying data and the topology. Two languages have recently been proposed to overcome this problem: walk logic (WL) and regular expressions with memory (REM). In this paper, we begin by investigating fundamental properties of WL and REM, i.e., the complexity of evaluation problems and expressive power. We first show that the data complexity of WL is nonelementary, which rules out its practicality. On the other hand, while REM has low data complexity, we point out that many natural data/topology properties of graphs expressible in WL cannot be expressed in REM. To this end, we propose register logic, an extension of REM, which we show to be able to express many natural graph properties expressible in WL, while at the same time preserving the elementariness of the data complexity of REM. Register logic is, however, incomparable in expressive power with WL.

Pablo Barceló, Gaelle Fontaine, Anthony Widjaja Lin
Proving Infinite Satisfiability

We consider the problem of automatically disproving invalid conjectures over data structures such as lists and arrays over integers, in the presence of additional hypotheses over these data structures. We investigate a simple approach based on refutational theorem proving. We assume that the data structure axioms are satisfiable and provide a template language for additional hypotheses such that satisfiability is preserved. Then disproving is done by proving that the negated conjecture follows. By means of examples we demonstrate that our template language is reasonably expressive and that our approach works well with current theorem provers (Z3, SPASS+T and Beagle).

Peter Baumgartner, Joshua Bax
SAT-Based Preprocessing for MaxSAT

State-of-the-art algorithms for industrial instances of the MaxSAT problem rely on iterative calls to a SAT solver. Preprocessing is crucial for the acceleration of SAT solving, and the key preprocessing techniques rely on the application of resolution and subsumption elimination. Additionally, satisfiability-preserving clause elimination procedures are often used. Since MaxSAT computation typically involves a large number of SAT calls, we are interested in whether an input instance of a MaxSAT problem can be preprocessed up-front, i.e. prior to running the MaxSAT solver, rather than (or in addition to) during each iterative SAT solver call. The key requirement in this setting is that the preprocessing has to be sound, i.e. such that the solution can be reconstructed correctly and efficiently after the execution of a MaxSAT algorithm on the preprocessed instance. While, as we demonstrate in this paper, certain clause elimination procedures are sound for MaxSAT, it is well known that this is not the case for resolution and subsumption elimination. In this paper we show how to adapt these preprocessing techniques to MaxSAT. To achieve this we recast the MaxSAT problem in a recently introduced labelled-CNF framework, and show that within the framework the preprocessing techniques can be applied soundly. Furthermore, we show that MaxSAT algorithms restated in the framework have a natural implementation on top of an incremental SAT solver. We evaluate a prototype implementation of the MaxSAT algorithm WMSU1 in this setting, demonstrate the effectiveness of preprocessing, and show overall improvement with respect to non-incremental versions of the algorithm on some classes of problems.

Anton Belov, António Morgado, Joao Marques-Silva
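The soundness subtlety mentioned above can be seen on a two-clause example: adding a resolvent preserves satisfiability but can change the number of falsified clauses, which is precisely what MaxSAT counts. A minimal brute-force demonstration (hypothetical helper, not from the paper):

```python
from itertools import product

def maxsat_cost(clauses, variables):
    """Minimum number of falsified clauses over all assignments (brute force).
    A literal is a non-zero int; negative means negated."""
    best = len(clauses)
    for values in product([False, True], repeat=len(variables)):
        a = dict(zip(variables, values))
        falsified = sum(1 for c in clauses
                        if not any(a[abs(l)] == (l > 0) for l in c))
        best = min(best, falsified)
    return best

f = [[1], [-1]]                    # {x, not-x}: optimum falsifies 1 clause
print(maxsat_cost(f, [1]))         # 1
print(maxsat_cost(f + [[]], [1]))  # 2: adding the resolvent of x and not-x
                                   # (the empty clause) raises the optimum,
                                   # so plain resolution is unsound for MaxSAT
```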
Dynamic and Static Symmetry Breaking in Answer Set Programming

Much research has been devoted to defining semantics for logic programs. The best known is the stable model semantics, which selects for each program one of its canonical models. The stable models of a logic program are, in a certain sense, the minimal Herbrand models of its reduct programs. On the other hand, the notion of symmetry elimination has been widely studied in constraint programming and shown to be useful for increasing the efficiency of the associated solvers. However, symmetry in non-monotonic reasoning remains largely unstudied. For instance, Answer Set Programming (ASP) is a well-known framework, but only a few recent works on symmetry breaking exist in this domain. Ignoring symmetry breaking could cause answer set systems to do redundant work and lose efficiency. Here we study the notions of local and global symmetry in the framework of answer set programming. We show how local symmetries of a logic program can be detected dynamically by means of the automorphisms of its graph representation. We also give properties that allow these symmetries to be eliminated in SAT-based answer set solvers and show how to integrate this symmetry elimination into such solvers in order to enhance their efficiency.

Belaïd Benhamou
HOL Based First-Order Modal Logic Provers

First-order modal logics (FMLs) can be modeled as natural fragments of classical higher-order logic (HOL). The FMLtoHOL tool exploits this fact and enables the application of off-the-shelf HOL provers and model finders for reasoning within FMLs. The tool bridges between the qmf-syntax for FML and the TPTP thf0-syntax for HOL. It currently supports the logics K, K4, D, D4, T, S4, and S5 with respect to constant, varying, and cumulative domain semantics. The approach is evaluated in combination with a meta-prover for HOL, which sequentially schedules various HOL reasoners. The resulting system is very competitive.

Christoph Benzmüller, Thomas Raths
Resourceful Reachability as HORN-LA

The program verification tool SLAyer uses abstractions during analysis and relies on a solver for reachability to refine spurious counterexamples. In this context, we extract a reachability benchmark suite and evaluate methods for encoding reachability properties with heaps using Horn clauses over linear arithmetic. The benchmarks are particularly challenging, and we describe and evaluate pre-processing transformations that are shown to have a significant effect.

Josh Berdine, Nikolaj Bjørner, Samin Ishtiaq, Jael E. Kriener, Christoph M. Wintersteiger
A Seligman-Style Tableau System

Proof systems for hybrid logic typically use @-operators to access information hidden behind modalities; this labeling approach lies at the heart of most resolution, natural deduction, and tableau systems for hybrid logic. But there is another, less well-known approach, which we have come to believe is conceptually clearer. We call this Seligman-style inference, as it was first introduced and explored by Jerry Seligman in the setting of natural deduction and sequent calculus in the late 1990s. The purpose of this paper is to introduce a Seligman-style tableau system.

The most obvious feature of Seligman-style systems is that they work with arbitrary formulas, not just formulas prefixed by @-operators. To achieve this in a tableau system, we introduce a rule called GoTo which allows us to “jump to a named world” on a tableau branch, thereby creating a local proof context (which we call a block) on that branch. To the surprise of some of the authors (who have worked extensively on developing the labeling approach), Seligman-style inference is often clearer: not only is the approach more modular, but individual proofs can also be more direct. We briefly discuss termination and extensions to richer logics, and relate our system to Seligman’s original sequent calculus.

Patrick Blackburn, Thomas Bolander, Torben Braüner, Klaus Frovin Jørgensen
Comparison of LTL to Deterministic Rabin Automata Translators

Increasing interest in control synthesis and probabilistic model checking has spurred the recent development of LTL to deterministic ω-automata translation. The standard approach, represented by the ltl2dstar tool, employs Safra’s construction to determinize a Büchi automaton produced by some LTL to Büchi automata translator. Since 2012, three new LTL to deterministic Rabin automata translators have appeared, namely Rabinizer, LTL3DRA, and Rabinizer 2. They all avoid Safra’s construction and work on LTL fragments only. We compare the performance and the automata produced by the mentioned tools, where ltl2dstar is combined with several LTL to Büchi automata translators: besides the traditionally used LTL2BA, we also consider LTL->NBA, LTL3BA, and Spot.

František Blahoudek, Mojmír Křetínský, Jan Strejček
Tree Interpolation in Vampire

We describe new extensions of the Vampire theorem prover for computing tree interpolants. These extensions generalize Craig interpolation in Vampire, and can also be used to derive sequence interpolants. We evaluated our implementation on a large number of examples over the theory of linear integer arithmetic and integer-indexed arrays, with and without quantifiers. When compared to other methods, our experiments show that some examples could only be solved by our implementation.

Régis Blanc, Ashutosh Gupta, Laura Kovács, Bernhard Kragl
Polarizing Double-Negation Translations

Double-negation translations are used to encode and decode classical proofs in intuitionistic logic. We show that, in the cut-free fragment, we can simplify the translations and introduce fewer negations. To achieve this, we consider the polarization of the formulæ and adapt the translations to the different connectives and quantifiers. We show that the embedding results still hold, using a customized version of the focused classical sequent calculus. We also prove the latter equivalent to more usual versions of the sequent calculus. This polarization process allows lighter embeddings, and sheds some light on the relationship between intuitionistic and classical connectives.

Mélanie Boudard, Olivier Hermant
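For reference, the classical (unpolarized) Gödel-Gentzen translation that such polarized variants refine inserts double negations at atoms and at the positive connectives ∨ and ∃:

$$P^{g} = \neg\neg P \ (P \text{ atomic}), \qquad (A \vee B)^{g} = \neg(\neg A^{g} \wedge \neg B^{g}), \qquad (\exists x\, A)^{g} = \neg \forall x\, \neg A^{g},$$

while commuting with ∧, → and ∀; a formula $A$ is classically provable iff $A^{g}$ is intuitionistically provable. The polarized translations reduce the number of negations introduced.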
Revisiting the Equivalence of Shininess and Politeness

The Nelson-Oppen method [4] allows the combination of satisfiability procedures of stably infinite theories with disjoint signatures. Due to its importance, several attempts were made to extend the method to different and wider classes of theories. In 2005, it was shown that shiny [9] and polite [6] theories could be combined with an arbitrary theory (the relationship between these classes was analysed in [6]). Later, a stronger notion of polite theory was proposed, see [3], in order to overcome a subtle issue with a proof in [6]. In this paper, we analyse the relationship between shiny and strongly polite theories in the one-sorted case. We show that a shiny theory with a decidable quantifier-free satisfiability problem is strongly polite, and provide two different sufficient conditions for a strongly polite theory to be shiny. Based on these results, we derive a combination method for the union of a polite theory with an arbitrary theory.

Filipe Casal, João Rasga
Towards Rational Closure for Fuzzy Logic: The Case of Propositional Gödel Logic

In the field of non-monotonic logics, the notion of rational closure is acknowledged as a landmark, and we are going to see whether such a construction can be adopted in the context of mathematical fuzzy logic, a so far (apparently) unexplored journey. As a first step, we will characterise rational closure in the context of Propositional Gödel Logic.

Giovanni Casini, Umberto Straccia
Multi-objective Discounted Reward Verification in Graphs and MDPs

We study the problem of achieving a given value in Markov decision processes (MDPs) with several independent discounted reward objectives. We consider a generalised version of discounted reward objectives, in which the amount of discounting depends on the states visited and on the objective. This definition extends the usual definition of discounted reward and allows us to capture systems in which the values of different commodities diminish at different and variable rates.

We establish results for two prominent subclasses of the problem, namely state-discount models, where the discount factors depend only on the state of the MDP (and are independent of the objective), and reward-discount models, where they depend only on the objective (but not on the state of the MDP). For the state-discount models we use a straightforward reduction to expected total reward and show that the problem whether a value is achievable can be solved in polynomial time. For the reward-discount models we show that memory and randomisation of the strategies are required, but nevertheless that the problem is decidable and that it is sufficient to consider strategies which after a certain number of steps behave in a memoryless way.

For the general case, we show that when restricted to graphs (i.e. MDPs with no randomisation), pure strategies and discount factors of the form 1/n where n is an integer, the problem is in PSPACE and finite memory suffices for achieving a given value. We also show that when the discount factors are not of the form 1/n, the memory required by a strategy can be infinite.

Krishnendu Chatterjee, Vojtěch Forejt, Dominik Wojtczak
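In standard notation (our reconstruction, not excerpted from the paper), the generalised objective lets the discount factor vary with both the visited state and the objective $j$:

$$\mathrm{Val}_j(\sigma) = \mathbb{E}^{\sigma}\Big[\sum_{t=0}^{\infty} r_j(s_t) \prod_{k=0}^{t-1} \lambda_j(s_k)\Big],$$

so the usual discounted reward is the special case $\lambda_j(s) = \lambda$ for all $j$ and $s$; state-discount models drop the dependence on $j$, and reward-discount models drop the dependence on $s$.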
Description Logics, Rules and Multi-context Systems

The combination of rules and ontologies has been a fertile topic of research in the last years, with the proposal of several different systems that achieve this goal. In this paper, we look at two of these formalisms, Mdl-programs and multi-context systems, which address different aspects of this combination, and include different, incomparable programming constructs. Despite this, we show that every Mdl-program can be transformed in a multi-context system, and this transformation relates the different semantics for each paradigm in a natural way. As an application, we show how a set of design patterns for multi-context systems can be obtained from previous work on Mdl-programs.

Luís Cruz-Filipe, Rita Henriques, Isabel Nunes
Complexity Analysis in Presence of Control Operators and Higher-Order Functions

A polarized version of Girard, Scedrov and Scott’s Bounded Linear Logic is introduced and its normalization properties studied. Following Laurent [25], the logic naturally gives rise to a type system for the λμ-calculus, whose derivations reveal bounds on the time complexity of the underlying term. This is the first example of a type system for the λμ-calculus guaranteeing time complexity bounds for typable programs.

Ugo Dal Lago, Giulio Pellitta
Zenon Modulo: When Achilles Outruns the Tortoise Using Deduction Modulo

We propose an extension of the tableau-based first-order automated theorem prover Zenon to deduction modulo. The theory of deduction modulo is an extension of predicate calculus which allows us to rewrite terms as well as propositions, and which is well suited for proof search in axiomatic theories, as it turns axioms into rewrite rules. We also present a heuristic to perform this latter step automatically, and assess our approach by providing some experimental results obtained on the benchmarks provided by the TPTP library, where this heuristic is able to prove difficult problems in set theory in particular. Finally, we describe an additional backend for Zenon that outputs proof certificates for Dedukti, which is a proof checker based on the λΠ-calculus modulo.

David Delahaye, Damien Doligez, Frédéric Gilbert, Pierre Halmagrand, Olivier Hermant
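A standard deduction modulo example (ours, not taken from the paper) of turning an axiom into a rewrite rule: the definition of set inclusion becomes the proposition rewrite rule

$$x \subseteq y \;\longrightarrow\; \forall z\, (z \in x \Rightarrow z \in y),$$

so proof search never instantiates the defining axiom explicitly; the conversion is applied on the fly whenever a goal mentions $\subseteq$.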
Long-Distance Resolution: Proof Generation and Strategy Extraction in Search-Based QBF Solving

Strategies (and certificates) for quantified Boolean formulas (QBFs) are of high practical relevance as they facilitate the verification of results returned by QBF solvers and the generation of solutions to problems formulated as QBFs. State-of-the-art approaches to obtaining strategies require traversing a Q-resolution proof of a QBF, which for many real-life instances is too large to handle. In this work, we consider the long-distance Q-resolution (LDQ) calculus, which allows particular tautological resolvents. We show that for a family of QBFs, LDQ-resolution allows exponentially shorter proofs than Q-resolution. We further show that an approach to strategy extraction originally presented for Q-resolution proofs can also be applied to LDQ-resolution proofs. As a practical application, we consider search-based QBF solvers which are able to learn tautological clauses based on resolution and the conflict-driven clause learning method. We prove that the resolution proofs produced by these solvers correspond to proofs in the LDQ calculus and can therefore be used as input for strategy extraction algorithms. Experimental results illustrate the potential of the LDQ calculus in search-based QBF solving.

Uwe Egly, Florian Lonsing, Magdalena Widl
Verifying Temporal Properties in Real Models

Based on pioneering work of Läuchli and Leonard in the 1960s, a novel and expressive formal language, Model Expressions, for describing the compositional construction of general linear temporal structures has recently been proposed. A sub-language, Real Model Expressions, is capable of specifying models over the real flow of time but its semantics are subtly different because of the specific properties of the real numbers.

Model checking techniques have been developed for the general linear Model Expressions, and it was shown that checking temporal formulas against structures described in the formal language is PSPACE-complete and linear in the length of the model expression.

In this paper we present a model checker for temporal formulas over real-flowed models. In fact the algorithm, and so its complexity, is the same as for the general linear case.

To show that this is adequate we use a concept of temporal bisimulations and establish that it is respected by the compositional construction method. We can then check the correctness of using the general linear model checking algorithm when applied to real model expressions with their special semantics on real-flowed structures.

Tim French, John McCabe-Dansted, Mark Reynolds
A Graphical Language for Proof Strategies

Complex automated proof strategies are often difficult to extract, visualise, modify, and debug. Traditional tactic languages, often based on stack-based goal propagation, make it easy to write proofs that obscure the flow of goals between tactics and are fragile to minor changes in input, proof structure or changes to tactics themselves. Here, we address this by introducing a graphical language called PSGraph for writing proof strategies. Strategies are constructed visually by “wiring together” collections of tactics and evaluated by propagating goal nodes through the diagram via graph rewriting. Tactic nodes can have many output wires, and use a filtering procedure based on goal-types (predicates describing the features of a goal) to decide where best to send newly-generated sub-goals. In addition to making the flow of goal information explicit, the graphical language can fulfil the role of many tacticals using visual idioms like branching, merging, and feedback loops. We argue that this language enables development of more robust proof strategies and provide several examples, along with a prototype implementation in Isabelle.

Gudmund Grov, Aleks Kissinger, Yuhui Lin
A Proof of Strong Normalisation of the Typed Atomic Lambda-Calculus

The atomic lambda-calculus is a typed lambda-calculus with explicit sharing, which originates in a Curry-Howard interpretation of a deep-inference system for intuitionistic logic. It has been shown that it allows fully lazy sharing to be reproduced in a typed setting. In this paper we prove strong normalization of the typed atomic lambda-calculus using Tait’s reducibility method.

Tom Gundersen, Willem Heijltjes, Michel Parigot
Relaxing Synchronization Constraints in Behavioral Programs

In behavioral programming, a program consists of separate modules called behavior threads, each representing a part of the system’s allowed, necessary or forbidden behavior. An execution of the program is a series of synchronizations between these threads, where at each synchronization point an event is selected to be carried out. As a result, the execution speed is dictated by the slowest thread. We propose an eager execution mechanism for such programs, which builds upon the realization that it is often possible to predict the outcome of a synchronization point even without waiting for slower threads to synchronize. This allows faster threads to continue running uninterrupted, whereas slower ones catch up at a later time. Consequently, eager execution brings about increased system performance, better support for the modular design of programs, and the ability to distribute programs across several machines. It also makes it possible to apply behavioral programming to a variety of problems that were previously outside its scope. We illustrate the method with concrete examples, implemented in a behavioral programming framework in C++.

David Harel, Amir Kantor, Guy Katz
Characterizing Subset Spaces as Bi-topological Structures

Subset spaces constitute a relatively new semantics for bi-modal logic. This semantics admits, in particular, a modern, computer science oriented view of the classic interpretation of the basic modalities in topological spaces à la McKinsey and Tarski. In this paper, we look at the relationship of both semantics from an opposite perspective, as it were, by asking for a consideration of subset spaces in terms of topology and topological modal logic, respectively. Indeed, we shall finally obtain a corresponding characterization result. A third semantics of modal logic, namely the standard relational one, and the associated first-order structures, will play an important part in doing so as well.

Bernhard Heinemann
Proof-Pattern Recognition and Lemma Discovery in ACL2

We present a novel technique for combining statistical machine learning for proof-pattern recognition with symbolic methods for lemma discovery. The resulting tool, ACL2(ml), gathers proof statistics and uses statistical pattern recognition to pre-process data from libraries, and then suggests auxiliary lemmas in new proofs by analogy with already seen examples. This paper presents the implementation of ACL2(ml) alongside theoretical descriptions of the proof-pattern recognition and lemma discovery methods involved in it.

Jónathan Heras, Ekaterina Komendantskaya, Moa Johansson, Ewen Maclean
Semantic A-translations and Super-Consistency Entail Classical Cut Elimination

We show that if a theory R defined by a rewrite system is super-consistent, the classical sequent calculus modulo R enjoys the cut elimination property, which was an open question. For such theories it was already known that proofs strongly normalize in natural deduction modulo R, and that cut elimination holds in the intuitionistic sequent calculus modulo R.

We first define a syntactic and a semantic version of Friedman’s A-translation, showing that it preserves the structure of pseudo-Heyting algebras, our semantic framework. Then we relate the interpretation of a theory in the A-translated algebra and its A-translation in the original algebra. This allows us to show the stability of the super-consistency criterion and the cut elimination theorem.

Lisa Allali, Olivier Hermant
Blocked Clause Decomposition

We demonstrate that it is fairly easy to decompose any propositional formula into two subsets such that both can be solved by blocked clause elimination. Such a blocked clause decomposition is useful to cheaply detect backbone variables and equivalent literals. Blocked clause decompositions are especially useful when they are unbalanced, i.e., one subset is much larger in size than the other one. We present algorithms and heuristics to obtain unbalanced decompositions efficiently. Our techniques have been implemented in the state-of-the-art solver Lingeling. Experiments show that the performance of Lingeling is clearly improved due to these techniques on application benchmarks of the SAT Competition 2013.

Marijn J. H. Heule, Armin Biere
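For background, a clause C is blocked in a CNF F if it contains a literal l such that every resolvent of C with a clause of F on l is a tautology; blocked clause elimination removes such clauses. A self-contained check (our illustration; the decomposition heuristics of the paper are not shown):

```python
def is_tautology(clause):
    """A clause is a tautology if it contains both l and -l."""
    return any(-l in clause for l in clause)

def is_blocked(clause, formula):
    """Return a blocking literal of `clause` w.r.t. `formula`, or None.
    Literals are non-zero ints; a negative int is a negated variable."""
    for l in clause:
        if all(is_tautology((clause - {l}) | (d - {-l}))
               for d in formula if -l in d and d != clause):
            return l
    return None

f = [{1, 2}, {-1, -2}, {2, 3}]
# {1, 2} is blocked on 1: its only resolvent on 1, {2, -2}, is a tautology
print(is_blocked({1, 2}, f))  # 1
```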
Maximal Falsifiability
Definitions, Algorithms, and Applications

Similarly to Maximum Satisfiability (MaxSAT), Minimum Satisfiability (MinSAT) is an optimization extension of the Boolean Satisfiability (SAT) decision problem. In recent years, both problems have been studied in terms of exact and approximation algorithms. In addition, the MaxSAT problem has been characterized in terms of Maximal Satisfiable Subsets (MSSes) and Minimal Correction Subsets (MCSes), as well as Minimal Unsatisfiable Subsets (MUSes) and minimal hitting set dualization. However, and in contrast with MaxSAT, no such characterizations exist for MinSAT. This paper addresses this issue by casting the MinSAT problem in a more general framework. The paper studies Maximal Falsifiability, the problem of computing a subset-maximal set of clauses that can be simultaneously falsified, and shows that MinSAT corresponds to the complement of a largest subset-maximal set of simultaneously falsifiable clauses, i.e. the solution of the Maximum Falsifiability (MaxFalse) problem. Additional contributions of the paper include novel algorithms for Maximum and Maximal Falsifiability, as well as minimal hitting set dualization results for the MaxFalse problem. Moreover, the proposed algorithms are validated on practical instances.

Alexey Ignatiev, Antonio Morgado, Jordi Planes, Joao Marques-Silva
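The stated duality is easy to sanity-check by brute force on toy instances: enumerate all assignments, collect the set of clauses each one falsifies, and keep the subset-maximal sets. A sketch (illustrative only; the paper's algorithms are far more refined):

```python
from itertools import product

def falsified(clauses, assignment):
    """Indices of clauses falsified by `assignment` (dict var -> bool)."""
    return frozenset(
        i for i, c in enumerate(clauses)
        if not any(assignment[abs(l)] == (l > 0) for l in c)
    )

def maximal_falsifiable(clauses, variables):
    """All subset-maximal sets of simultaneously falsifiable clauses."""
    sets = {falsified(clauses, dict(zip(variables, vs)))
            for vs in product([False, True], repeat=len(variables))}
    return [s for s in sets if not any(s < t for t in sets)]

cls = [[1], [-1], [1, 2]]
ms = maximal_falsifiable(cls, [1, 2])
# MinSAT optimum = total clauses minus the size of a largest maximal
# falsifiable set: the complement of MaxFalse, as described above.
print(len(cls) - max(len(s) for s in ms))  # 1
```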
Solving Geometry Problems Using a Combination of Symbolic and Numerical Reasoning

We describe a framework that combines deductive, numeric, and inductive reasoning to solve geometric problems. Applications include the generation of geometric models and animations, as well as problem solving in the context of intelligent tutoring systems.

Our novel methodology uses (i) deductive reasoning to generate a partial program from logical constraints, (ii) numerical methods to evaluate the partial program, thus creating geometric models which are solutions to the original problem, and (iii) inductive synthesis to read off new constraints that are then applied to one more round of deductive reasoning leading to the desired deterministic program. By the combination of methods we were able to solve problems that each of the methods was not able to solve by itself.

The number of nondeterministic choices in a partial program provides a measure of how close a problem is to being solved and can thus be used in the educational context for grading and providing hints.

We have successfully evaluated our methodology on 18 Scholastic Aptitude Test geometry problems and 11 ruler/compass-based geometry construction problems. Our tool solved these problems in an average of a few seconds per problem.

Shachar Itzhaky, Sumit Gulwani, Neil Immerman, Mooly Sagiv
On QBF Proofs and Preprocessing

QBFs (quantified boolean formulas), which are a superset of propositional formulas, provide a canonical representation for PSPACE problems. To overcome the inherent complexity of QBF, significant effort has been invested in developing QBF solvers as well as the underlying proof systems. At the same time, formula preprocessing is crucial for the application of QBF solvers. This paper focuses on a missing link in currently-available technology: How to obtain a certificate (e.g. proof) for a formula that had been preprocessed before it was given to a solver? The paper targets a suite of commonly-used preprocessing techniques and shows how to reconstruct certificates for them. On the negative side, the paper discusses certain limitations of the currently-used proof systems in the light of preprocessing. The presented techniques were implemented and evaluated in the state-of-the-art QBF preprocessor bloqqer.

Mikoláš Janota, Radu Grigore, Joao Marques-Silva
Partial Backtracking in CDCL Solvers

Backtracking is a basic technique of search-based satisfiability (SAT) solvers. In order to backtrack, a SAT solver uses conflict analysis to compute a backtracking level and discards all the variable assignments made between the conflicting level and the backtracking level. We observed that, due to the branching heuristics, the solver may repeat lots of previous decisions and propagations later. In this paper, we present a new backtracking strategy, which we refer to as partial backtracking. We implemented this strategy in our solver Nigma. Using this strategy, Nigma amends the variable assignments instead of discarding them completely so that it does not backtrack as many levels as the classic strategy. Our experiments show that Nigma solves 5% more instances than the version without partial backtracking.

Chuan Jiang, Ting Zhang
Lemma Mining over HOL Light

Large formal mathematical libraries consist of millions of atomic inference steps that give rise to a corresponding number of proved statements (lemmas). Analogously to the informal mathematical practice, only a tiny fraction of such statements is named and re-used in later proofs by formal mathematicians. In this work, we suggest and implement criteria defining the estimated usefulness of the HOL Light lemmas for proving further theorems. We use these criteria to mine the large inference graph of all lemmas in the core HOL Light library, adding thousands of the best lemmas to the pool of named statements that can be re-used in later proofs. The usefulness of the new lemmas is then evaluated by comparing the performance of automated proving of the core HOL Light theorems with and without such added lemmas.

Cezary Kaliszyk, Josef Urban
On Module-Based Abstraction and Repair of Behavioral Programs

The number of states a program has tends to grow exponentially in the size of the code. This phenomenon, known as state explosion, hinders the verification and repair of large programs. A key technique for coping with state explosion is using abstractions, where one substitutes a program’s state graph with smaller over-approximations thereof. We show how module-based abstraction-refinement strategies can be applied to the verification of programs written in the recently proposed framework of Behavioral Programming. Further, we demonstrate how — by using a sought-after repair as a means of refining existing abstractions — these techniques can improve the scalability of existing program repair algorithms. Our findings are supported by a proof-of-concept tool.

Guy Katz
Prediction and Explanation over DL-Lite Data Streams

Stream reasoning is an emerging research area focusing on the development of reasoning techniques applicable to streams of rapidly changing, semantically enhanced data. In this paper, we consider data represented in Description Logics from the popular DL-Lite family, and study the logic foundations of prediction and explanation over DL-Lite data streams, i.e., reasoning from finite segments of streaming data to conjectures about the content of the streams in the future or in the past. We propose a novel formalization of the problem based on temporal “past-future” rules, grounded in Temporal Query Language. Such rules can naturally accommodate complex data association patterns, which are typically discovered through data mining processes, with logical and temporal constraints of varying expressiveness. Further, we analyse the computational complexity of reasoning with rules expressed in different fragments of the temporal language. As a result, we draw precise demarcation lines between NP-, DP- and PSpace-complete variants of our setting and, consequently, suggest relevant restrictions rendering prediction and explanation more feasible in practice.

Szymon Klarman, Thomas Meyer
Forgetting Concept and Role Symbols in $\mathcal{ALCH}$-Ontologies

We develop a resolution-based method for forgetting concept and role symbols in $\mathcal{ALCH}$ ontologies, or for computing uniform interpolants in $\mathcal{ALCH}$. Uniform interpolants use only a restricted set of symbols, while preserving logical consequences of the original ontology involving these symbols. While recent work towards practical methods for uniform interpolation in expressive description logics limits attention to forgetting concept symbols, we believe most applications would benefit from the possibility to forget both role and concept symbols. We focus on the description logic $\mathcal{ALCH}$, which allows for the formalisation of role hierarchies. Our approach is based on a recently developed resolution-based calculus for forgetting concept symbols in $\mathcal{ALC}$ ontologies, which we extend with redundancy elimination techniques to make it practical for larger ontologies. Experiments on $\mathcal{ALCH}$ fragments of real-life ontologies suggest that our method is applicable in many real-life applications.

Patrick Koopmann, Renate A. Schmidt
Simulating Parity Reasoning

Propositional satisfiability (SAT) solvers, which typically operate using conjunctive normal form (CNF), have been successfully applied in many domains. However, in some application areas such as circuit verification, bounded model checking, and logical cryptanalysis, instances can have many parity (xor) constraints which may not be handled efficiently if translated to CNF. Thus, extensions to the CNF-driven search with various parity reasoning engines ranging from equivalence reasoning to incremental Gaussian elimination have been proposed. This paper studies how stronger parity reasoning techniques in the DPLL(XOR) framework can be simulated by simpler systems: resolution, unit propagation, and parity explanations. Such simulations are interesting, for example, for developing the next generation SAT solvers capable of handling parity constraints efficiently.

Tero Laitinen, Tommi Junttila, Ilkka Niemelä
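Gaussian elimination, the strongest of the engines mentioned, treats each xor constraint as a linear equation over GF(2). A compact, non-incremental sketch (ours for illustration; variable and function names are assumptions):

```python
def gauss_gf2(rows):
    """Solve xor constraints over GF(2).  Each row is (set_of_vars, rhs);
    a row asserts that the xor of its variables equals rhs (0 or 1).
    Returns 'UNSAT' or the reduced rows (free variables remain)."""
    rows = [(set(vs), r) for vs, r in rows]
    solved = []
    while rows:
        vs, r = rows.pop()
        if not vs:
            if r:                      # 0 = 1: contradiction
                return "UNSAT"
            continue
        pivot = min(vs)
        # eliminate the pivot from every other row by xor-ing equations
        rows = [(ws ^ vs, s ^ r) if pivot in ws else (ws, s)
                for ws, s in rows]
        solved = [(ws ^ vs, s ^ r) if pivot in ws else (ws, s)
                  for ws, s in solved]
        solved.append((vs, r))
    return solved

# x1^x2=1, x2^x3=1, x1^x3=1 is unsatisfiable (xor of all three gives 0=1)
print(gauss_gf2([({1, 2}, 1), ({2, 3}, 1), ({1, 3}, 1)]))  # UNSAT
```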
Herbrand Theorems for Substructural Logics

Herbrand and Skolemization theorems are obtained for a broad family of first-order substructural logics. These logics typically lack equivalent prenex forms, a deduction theorem, and reductions of semantic consequence to satisfiability. The Herbrand and Skolemization theorems therefore take various forms, applying either to the left or right of the consequence relation, and to restricted classes of formulas.

Petr Cintula, George Metcalfe
On Promptness in Parity Games

Parity games are a powerful formalism for the automatic synthesis and verification of reactive systems. They are closely related to alternating ω-automata and emerge as a natural method for the solution of the μ-calculus model checking problem. Due to these strict connections, parity games are a well-established environment for describing liveness properties such as “every request that occurs infinitely often is eventually responded to”. Unfortunately, the classical form of such a condition suffers from the strong drawback that there is no bound on the effective time that separates a request from its response, i.e., responses are not promptly provided. Recently, to overcome this limitation, several parity game variants have been proposed, in which quantitative requirements are added to the classic qualitative ones.

In this paper, we make a general study of the concept of promptness in parity games, which allows us to put several of the cited variants, along with new ones, under a unique theoretical framework. Also, we describe simple polynomial reductions from all these conditions to either Büchi or parity games, which simplify all previously known procedures. In particular, they improve the complexity results of cost and bounded-cost parity games. Indeed, we provide solution algorithms showing that determining the winner of these games lies in UPTime ∩ CoUPTime.

Fabio Mogavero, Aniello Murano, Loredana Sorrentino
Defining Privacy Is Supposed to Be Easy

Formally specifying privacy goals is not trivial. The most widely used approach in formal methods is based on the static equivalence of frames in the applied pi-calculus, basically asking whether or not the intruder is able to distinguish two given worlds. A subtle question is how we can be sure that we have specified all pairs of worlds to properly reflect our intuitive privacy goal. To address this problem, we introduce in this paper a novel and declarative way to specify privacy goals, called α-β privacy, and relate it to static equivalence. This new approach is based on specifying two formulae α and β in first-order logic with Herbrand universes, where α reflects the intentionally released information and β includes the actual cryptographic (“technical”) messages the intruder can see. Then α-β privacy means that the intruder cannot derive any “non-technical” statement from β that he cannot derive from α already. We use a variety of examples to describe how this notion can be used in practice. Even though α-β privacy does not directly contain a notion of distinguishing between worlds, there is a close relationship to static equivalence of frames that we investigate formally. This allows us to justify (and criticize) the specifications that are currently used in verification tools, and to obtain partial tool support for α-β privacy.

Sebastian A. Mödersheim, Thomas Groß, Luca Viganò
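Stripped of the protocol machinery, the core condition can be phrased model-theoretically (our paraphrase of the definition sketched above): for every formula φ built over the non-technical (payload) vocabulary Σ₀,

$$\beta \models \varphi \;\Longrightarrow\; \alpha \models \varphi,$$

i.e. the technical observations β give the intruder no payload knowledge beyond the deliberately released α.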
Reachability Modules for the Description Logic $\mathcal{SRIQ}$

In this paper we investigate module extraction for the Description Logic $\mathcal{SRIQ}$. We formulate modules in terms of the reachability problem for directed hypergraphs. Using inseparability relations, we investigate the module-theoretic properties of reachability modules and show by means of an empirical evaluation that these modules have the potential of being substantially smaller than syntactic locality modules.

Riku Nortje, Katarina Britz, Thomas Meyer
An Event Structure Model for Probabilistic Concurrent Kleene Algebra

We give a new true-concurrent model for probabilistic concurrent Kleene algebra. The model is based on probabilistic event structures and combines ideas from Katoen’s work on probabilistic concurrency and Varacca’s probabilistic prime event structures. The event structures are compared with a true-concurrent version of Segala’s probabilistic simulation. Finally, the algebraic properties of the model are summarised to the extent that they can be used to derive techniques such as probabilistic rely/guarantee inference rules.

Annabelle McIver, Tahiry Rabehaja, Georg Struth
Three SCC-Based Emptiness Checks for Generalized Büchi Automata

The automata-theoretic approach for the verification of linear time properties involves checking the emptiness of a Büchi automaton. However, generalized Büchi automata, with multiple acceptance sets, are preferred when verifying under weak fairness hypotheses. Existing emptiness checks whose complexity is independent of the number of acceptance sets are all based on the enumeration of Strongly Connected Components (SCCs).

In this paper, we review the state-of-the-art SCC enumeration algorithms to study how they can be turned into emptiness checks. This leads us to define two new emptiness check algorithms (one of them based on the Union-Find data structure), introduce new optimizations, and show that one of these can be of benefit to a classic SCC enumeration algorithm. We have implemented all these variants to compare their relative performance and the overhead induced by the emptiness check compared to the corresponding SCC enumeration algorithm. Our experiments show that these three algorithms are comparable.

Etienne Renault, Alexandre Duret-Lutz, Fabrice Kordon, Denis Poitrenaud
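The baseline all such checks refine is an SCC enumeration: a generalized Büchi automaton is nonempty iff some reachable SCC contains a cycle meeting every acceptance set. A minimal Tarjan-style sketch with state-based acceptance (ours; the paper's algorithms and optimizations differ):

```python
import sys
from itertools import count

def gba_is_empty(init, succ, marks, n_marks):
    """Emptiness of a generalized Büchi automaton, state-based acceptance:
    marks(s) = set of acceptance sets that contain state s.  Nonempty iff
    some reachable SCC contains a cycle and covers all n_marks sets."""
    sys.setrecursionlimit(1 << 16)
    index, low, on_stack, stack = {}, {}, set(), []
    counter = count()
    nonempty = False

    def dfs(v):
        nonlocal nonempty
        index[v] = low[v] = next(counter)
        stack.append(v)
        on_stack.add(v)
        for w in succ(v):
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:                  # v roots an SCC
            scc, acc = [], set()
            while True:
                w = stack.pop()
                on_stack.discard(w)
                scc.append(w)
                acc |= marks(w)
                if w == v:
                    break
            cyclic = len(scc) > 1 or v in succ(v)   # SCC contains a cycle?
            if cyclic and len(acc) == n_marks:
                nonempty = True

    dfs(init)
    return not nonempty

# 0 -> 1 -> 0, state i carries mark i: the loop visits both acceptance sets
succ = {0: [1], 1: [0]}
print(gba_is_empty(0, lambda s: succ[s], lambda s: {s}, 2))  # False (nonempty)
```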
PeRIPLO: A Framework for Producing Effective Interpolants in SAT-Based Software Verification

Propositional interpolation is widely used as a means of overapproximation to achieve efficient SAT-based symbolic model checking. Different verification applications exploit interpolants for different purposes; it is unlikely that a single interpolation procedure could provide interpolants fit for all cases. This paper describes the PeRIPLO framework, an interpolating SAT-solver that implements a set of techniques to generate and manipulate interpolants for different model checking tasks. We demonstrate the flexibility of the framework in two software bounded model checking applications: verification of a given source code incrementally with respect to various properties, and verification of software upgrades with respect to a fixed set of properties. Both applications use interpolation for generating function summaries. Our systematic experimental investigation shows that size and logical strength of interpolants significantly affect verification, that these characteristics depend on the role played by interpolants, and that therefore techniques for tuning size and strength can be used to customize interpolants in different applications.

Simone Fulvio Rollini, Leonardo Alt, Grigory Fedyukovich, Antti E. J. Hyvärinen, Natasha Sharygina
Incremental Tabling for Query-Driven Propagation of Logic Program Updates

We foster a novel implementation technique for logic program updates, which exploits incremental tabling in logic programming, using XSB Prolog to that effect. Propagation of updates of fluents is controlled by initially keeping any fluent updates pending in the database and, on the initiative of queries, making active just those updates up to the timestamp of the actual query, by performing incremental assertions of the pending ones. These assertions, in turn, automatically trigger system-implemented incremental bottom-up tabling of other fluents (or their negated complements), with respect to a predefined overall upper time limit, in order to avoid runaway iteration. The frame problem can then be dealt with by inspecting a table for the latest time a fluent is known to be assuredly true, i.e., the latest time it is not supervened by its negated complement, relative to the given query time. To do so, we adopt the dual program transformation for defining, and helping propagate, also incrementally and bottom-up, the negated complement of a fluent, in order to establish whether a fluent is still true at some time point or whether its complement is. The use of incremental tabling in this approach affords us a form of controlled, but automatic, system-level truth maintenance, up to some actual query time. Consequently, propagation of update side-effects need not employ top-down recursion or bottom-up iteration through a logically defined frame axiom, but can be dealt with by the mechanics of the underlying world. Our approach thus reconciles high-level top-down deliberative reasoning about a query with autonomous low-level bottom-up world reactivity to ongoing updates, and it might be adopted elsewhere for reasoning in logic.

Ari Saptawijaya, Luís Moniz Pereira
Tracking Data-Flow with Open Closure Types

Type systems hide data that is captured by function closures in function types. In most cases this is a beneficial design that enables simplicity and compositionality. However, some applications require explicit information about the data that is captured in closures.

This paper introduces open closure types, that is, function types that are decorated with type contexts. They are used to track data-flow from the environment into the function closure. A simply-typed lambda calculus is used to study the properties of the type theory of open closure types. A distinctive feature of this type theory is that an open closure type of a function can vary in different type contexts. To present an application of the type theory, it is shown that a type derivation establishes a simple non-interference property in the sense of information-flow theory. A publicly available prototype implementation of the system can be used to experiment with type derivations for example programs.

Gabriel Scherer, Jan Hoffmann
Putting Newton into Practice: A Solver for Polynomial Equations over Semirings

We present the first implementation of Newton’s method for solving systems of equations over ω-continuous semirings (based on [5,11]). For instance, such equation systems arise naturally in the analysis of interprocedural programs or the provenance computation for Datalog. Our implementation provides an attractive alternative for computing their exact least solution in some cases where the ascending chain condition is not met and hence standard fixed-point iteration needs to be combined with some over-approximation (e.g., widening techniques) to terminate. We present a generic C++ library along with the main algorithms and analyze their complexity. Furthermore, we describe our implementation of the counting semiring based on semilinear sets. Finally, we discuss motivating examples as well as performance benchmarks.

Maximilian Schlund, Michał Terepeta, Michael Luttenberger
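For contrast with Newton's method, the baseline is Kleene fixed-point iteration: iterate x_{k+1} = f(x_k) from the semiring zero until the sequence stabilizes; the library targets exactly the cases where this naive iteration fails to terminate. A tiny sketch over the tropical (min, +) semiring, where it does converge (all names are ours):

```python
INF = float("inf")

def kleene(f, bottom, max_iter=100):
    """Iterate x_{k+1} = f(x_k) from the semiring zero until a fixed
    point is reached (guaranteed when the ascending chain condition
    holds; Newton's method is the alternative when it does not)."""
    x = bottom
    for _ in range(max_iter):
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt
    raise RuntimeError("no fixed point within max_iter iterations")

# tropical (min, +) semiring: addition = min, multiplication = +, zero = INF
# equation system  X = min(5, Y + 1),  Y = min(2, X + 1)
f = lambda v: (min(5, v[1] + 1), min(2, v[0] + 1))
print(kleene(f, (INF, INF)))   # (3, 2)
```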
System Description: E 1.8

E is a theorem prover for full first-order logic with equality. It reduces first-order problems to clause normal form and employs a saturation algorithm based on the equational superposition calculus. E is built on shared terms with cached rewriting, and employs several innovations for efficient clause indexing. Major strengths of the system are automatic problem analysis and highly flexible search heuristics. The prover can provide verifiable proof objects and answer substitutions with very little overhead. E performs well, solving more than 69% of TPTP-5.4.0 FOF and CNF problems in automatic mode.

Stephan Schulz
Formalization of Laplace Transform Using the Multivariable Calculus Theory of HOL-Light

Algebraic techniques based on the Laplace transform are widely used for solving differential equations and evaluating the transfer of signals while analyzing physical aspects of many safety-critical systems. To facilitate formal analysis of these systems, we present the formalization of the Laplace transform using the multivariable calculus theories of HOL-Light. In particular, we use the integral, differential, transcendental and topological theories of multivariable calculus to formally define the Laplace transform in higher-order logic and reason about the correctness of Laplace transform properties, such as existence, linearity, frequency shifting, and differentiation and integration in the time domain. In order to demonstrate the practical effectiveness of this formalization, we use it to formally verify the transfer function of the Linear Transfer Converter (LTC) circuit, which is a commonly used electrical circuit.

Syeda Hira Taqdees, Osman Hasan
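The verified properties are the textbook ones; for reference (standard definitions, not excerpted from the HOL-Light development):

$$\mathcal{L}\{f\}(s) = \int_{0}^{\infty} f(t)\, e^{-st}\, dt, \qquad \mathcal{L}\{\alpha f + \beta g\} = \alpha\, \mathcal{L}\{f\} + \beta\, \mathcal{L}\{g\},$$

and, under suitable existence conditions, differentiation in the time domain satisfies $\mathcal{L}\{f'\}(s) = s\, \mathcal{L}\{f\}(s) - f(0)$.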
On Minimality and Integrity Constraints in Probabilistic Abduction

Abduction is a type of logical inference that can be successfully combined with probabilistic reasoning. However, the role of integrity constraints has not received much attention when performing logical-probabilistic inference. The contribution of our paper is a probabilistic abductive framework based on the distribution semantics for normal logic programs that handles negation as failure and integrity constraints in the form of denials. Integrity constraints are treated as evidence from the perspective of probabilistic inference. An implementation is provided that computes alternative (non-minimal) abductive solutions, using an appropriately modified abductive system, and generates the probability of a query, for given solutions. An example application of the framework is given, where gene network topologies are abduced according to biological expert knowledge, to probabilistically explain observed gene expressions. The example shows the practical utility of the proposed framework.

Calin-Rares Turliuc, Nataly Maimari, Alessandra Russo, Krysia Broda
Polar: A Framework for Proof Refactoring

We present a prototype refactoring framework based on graph rewriting and bidirectional transformations that is designed to be generic, extensible, and declarative. Our approach uses a language-independent graph meta-model to represent proof developments in a generic way. We use graph rewriting to enrich the meta-model with dependency information and to perform refactorings, which are written as declarative rewrite rules. Our framework, called Polar, is implemented in the GrGen rewriting engine.

Dominik Dietrich, Iain Whiteside, David Aspinall
Backmatter
Metadata
Title
Logic for Programming, Artificial Intelligence, and Reasoning
Edited by
Ken McMillan
Aart Middeldorp
Andrei Voronkov
Copyright year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-45221-5
Print ISBN
978-3-642-45220-8
DOI
https://doi.org/10.1007/978-3-642-45221-5