
2008 | Book

Logic for Programming, Artificial Intelligence, and Reasoning

15th International Conference, LPAR 2008, Doha, Qatar, November 22-27, 2008. Proceedings

Edited by: Iliano Cervesato, Helmut Veith, Andrei Voronkov

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 15th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning, LPAR 2008, which took place in Doha, Qatar, during November 22-27, 2008. The 45 full papers presented together with 3 invited talks were carefully revised and selected from 153 submissions. The papers address all current issues in automated reasoning, computational logic, programming languages and their applications, and are organized in topical sections on automata, linear arithmetic, verification, knowledge representation, proof theory, quantified constraints, as well as modal and temporal logics.

Table of Contents

Frontmatter

Session 1. Constraint Solving

Symmetry Breaking for Maximum Satisfiability

Symmetries are intrinsic to many combinatorial problems including Boolean Satisfiability (SAT) and Constraint Programming (CP). In SAT, the identification of symmetry breaking predicates (SBPs) is a well-known, often effective, technique for solving hard problems. The identification of SBPs in SAT has been the subject of significant improvements in recent years, resulting in more compact SBPs and more effective algorithms. The identification of SBPs has also been applied to pseudo-Boolean (PB) constraints, showing that symmetry breaking can also be an effective technique for PB constraints. This paper extends further the application of SBPs, and shows that SBPs can be identified and used in Maximum Satisfiability (MaxSAT), as well as in its most well-known variants, including partial MaxSAT, weighted MaxSAT and weighted partial MaxSAT. As with SAT and PB, symmetry breaking predicates for MaxSAT and variants are shown to be effective for a representative number of problem domains, allowing the solution of problem instances that current state-of-the-art MaxSAT solvers could not otherwise solve.

Joao Marques-Silva, Inês Lynce, Vasco Manquinho
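As a toy illustration of the idea behind symmetry breaking predicates (a hypothetical sketch for this page, not the authors' algorithm), the following counts models of a small CNF with and without a lex-leader clause for a variable-swap symmetry; the function name and clause representation are invented for the example.

```python
from itertools import product

def count_models(clauses, n, sbp=()):
    """Count satisfying assignments of a CNF over variables 1..n.
    Clauses are tuples of non-zero integers (negative = negated literal);
    'sbp' holds optional symmetry-breaking clauses conjoined to the formula."""
    all_clauses = list(clauses) + list(sbp)
    count = 0
    for bits in product([False, True], repeat=n):
        def val(lit):  # truth value of a literal under this assignment
            return bits[abs(lit) - 1] == (lit > 0)
        if all(any(val(l) for l in c) for c in all_clauses):
            count += 1
    return count

# (x1 or x2) is symmetric under swapping x1 and x2.
# Lex-leader SBP for that swap: x1 >= x2, i.e. the clause (x1 or not x2).
assert count_models([(1, 2)], 2) == 3
assert count_models([(1, 2)], 2, sbp=[(1, -2)]) == 2  # symmetric model pair broken
```

The SBP removes one of each pair of assignments related by the symmetry, shrinking the search space without changing satisfiability.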
Efficient Generation of Unsatisfiability Proofs and Cores in SAT

Some modern DPLL-based propositional SAT solvers now have fast in-memory algorithms for generating unsatisfiability proofs and cores without writing traces to disk. However, in long SAT runs these algorithms still run out of memory.

For several of these algorithms, here we discuss advantages and disadvantages, based on carefully designed experiments with our implementation of each one of them, as well as with our implementation of Zhang and Malik's algorithm, which writes traces to disk. Then we describe a new in-memory algorithm which saves space by doing more bookkeeping to discard unnecessary information, and show that it can handle significantly more instances than the previously existing algorithms, at a negligible expense in time.

Roberto Asín, Robert Nieuwenhuis, Albert Oliveras, Enric Rodríguez-Carbonell
Justification-Based Local Search with Adaptive Noise Strategies

We study a framework called BC SLS for a novel type of stochastic local search (SLS) for propositional satisfiability (SAT). Aimed specifically at solving real-world SAT instances, the approach works directly on a non-clausal structural representation for SAT. This allows for don’t care detection and justification guided search heuristics in SLS by applying the circuit-level SAT technique of justification frontiers. In this paper we extend the BC SLS approach first by developing generalizations of BC SLS which are probabilistically approximately complete (PAC). Second, we develop and study adaptive noise mechanisms for BC SLS, including mechanisms based on dynamically adapting the waiting period for noise increases. Experiments show that a preliminary implementation of the novel adaptive, PAC generalization of the method outperforms a well-known CNF level SLS method with adaptive noise (AdaptNovelty+) on a collection of structured real-world SAT instances.

Matti Järvisalo, Tommi Junttila, Ilkka Niemelä
The Max-Atom Problem and Its Relevance

Let F be a conjunction of atoms of the form max(x, y) + k ≥ z, where x, y, z are variables and k is a constant value. Here we consider the satisfiability problem of such formulas (e.g., over the integers or rationals). This problem, which appears in unexpected forms in many applications, is easily shown to be in NP. However, decades of efforts (in several research communities, see below) have not produced any polynomial decision procedure nor an NP-hardness result for this – apparently so simple – problem.

Here we develop several ingredients (small-model property and lattice structure of the model class, a polynomially tractable subclass and an inference system) which altogether allow us to prove the existence of small unsatisfiability certificates, and hence membership in NP intersection co-NP. As a by-product, we also obtain a weakly polynomial decision procedure.

We show that the Max-atom problem is PTIME-equivalent to several other well-known – and at first sight unrelated – problems on hypergraphs and on Discrete Event Systems, problems for which the existence of PTIME algorithms is also open. Since there are few interesting problems in NP intersection co-NP that are not known to be polynomial, the Max-atom problem appears to be relevant.

Marc Bezem, Robert Nieuwenhuis, Enric Rodríguez-Carbonell
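The problem class defined in this abstract is concrete enough to sketch. The following hypothetical brute-force check tests satisfiability of a conjunction of max-atoms over a small bounded domain only; it is nothing like the paper's decision procedure, which exploits the small-model property.

```python
from itertools import product

def max_atom_sat(atoms, variables, bound):
    """Brute-force satisfiability of a conjunction of max-atoms
    max(x, y) + k >= z over the finite domain {0, ..., bound}.
    Each atom is a tuple (x, y, k, z) with x, y, z variable names."""
    for values in product(range(bound + 1), repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(max(env[x], env[y]) + k >= env[z] for (x, y, k, z) in atoms):
            return env  # a satisfying assignment
    return None

sat = max_atom_sat([("x", "y", 0, "z")], ["x", "y", "z"], 2)
assert sat is not None
# max(x, x) - 1 >= x simplifies to x - 1 >= x, which is unsatisfiable:
assert max_atom_sat([("x", "x", -1, "x")], ["x"], 5) is None
```

Note that exhaustive search over a bounded domain is only a correct decision procedure once a small-model bound (as established in the paper) is known.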

Session 2. Knowledge Representation 1

Towards Practical Feasibility of Core Computation in Data Exchange

Core computation in data exchange is concerned with materializing the minimal target database for a given source database. Gottlob and Nash have recently shown that the core can be computed in polynomial time under very general conditions. Nevertheless, core computation has not yet been incorporated into existing data exchange tools. The principal aim of this paper is to take a significant step towards the practical feasibility of core computation in data exchange by developing an improved algorithm and by presenting a prototype implementation of our new algorithm.

Reinhard Pichler, Vadim Savenkov
Data-Oblivious Stream Productivity

We are concerned with demonstrating productivity of specifications of infinite streams of data, based on orthogonal rewrite rules. In general, this property is undecidable, but for restricted formats computable sufficient conditions can be obtained. The usual analysis, also adopted here, disregards the identity of data, thus leading to approaches that we call data-oblivious. We present a method that is provably optimal among all such data-oblivious approaches. This means that in order to improve on our algorithm one has to proceed in a data-aware fashion.

Jörg Endrullis, Clemens Grabmayer, Dimitri Hendriks
Reasoning about XML with Temporal Logics and Automata

We show that problems arising in static analysis of XML specifications and transformations can be dealt with using techniques similar to those developed for static analysis of programs. Many properties of interest in the XML context are related to navigation, and can be formulated in temporal logics for trees. We choose a logic that admits a simple single-exponential translation into unranked tree automata, in the spirit of the classical LTL-to-Büchi automata translation. Automata arising from this translation have a number of additional properties; in particular, they are convenient for reasoning about unary node-selecting queries, which are important in the XML context. We give two applications of such reasoning: one deals with a classical XML problem of reasoning about navigation in the presence of schemas, and the other relates to verifying security properties of XML views.

Leonid Libkin, Cristina Sirangelo
Distributed Consistency-Based Diagnosis

Many methods exist to prevent errors and incorrect behaviors in a distributed framework, where all peers work together for the same purpose, under the same protocol. For instance, one may limit such errors by replicating data and processes across the network. However, with the emergence of web services, the desire for privacy, and the constant growth of data size, such a solution may not be applicable. For some problems, failure of a peer has to be detected and located by the whole system. In this paper, we propose an approach to diagnose abnormal behaviors of the whole system by extending the well-known consistency-based diagnosis framework to a fully distributed inference system, where each peer only knows the existence of its neighbors. Contrasting with previous works on model-based diagnosis, our approach computes all minimal diagnoses in an incremental way, without needing to compute any conflicts first.

Vincent Armant, Philippe Dague, Laurent Simon

Session 3. Proof-Theory 1

From One Session to Many: Dynamic Tags for Security Protocols

The design and verification of cryptographic protocols is a notoriously difficult task, even in abstract Dolev-Yao models. This is mainly due to several sources of unboundedness (size of messages, number of sessions, ...). In this paper, we characterize a class of protocols for which secrecy for an unbounded number of sessions is decidable. More precisely, we present a simple transformation which maps a protocol that is secure for a single protocol session (a decidable problem) to a protocol that is secure for an unbounded number of sessions.

Our result provides an effective strategy to design secure protocols: (i) design a protocol intended to be secure for one protocol session (this can be verified with existing automated tools); (ii) apply our transformation and obtain a protocol which is secure for an unbounded number of sessions. The proof of our result is closely tied to a particular constraint solving procedure by Comon-Lundh et al.

Myrto Arapinis, Stéphanie Delaune, Steve Kremer
A Conditional Logical Framework

The Conditional Logical Framework LF_K is a variant of Harper-Honsell-Plotkin's Edinburgh Logical Framework LF. It features a generalized form of λ-abstraction where β-reductions fire under the condition that the argument satisfies a logical predicate. The key idea is that the type system memorizes under what conditions and where reductions have yet to fire. Different notions of β-reduction corresponding to different predicates can be combined in LF_K. The framework LF_K subsumes, by simple instantiation, LF (in fact, it is also a subsystem of LF!), as well as a large class of new generalized conditional λ-calculi. These are appropriate to deal smoothly with the side-conditions of both Hilbert and Natural Deduction presentations of Modal Logics. We investigate and characterize the metatheoretical properties of the calculus underpinning LF_K, such as subject reduction, confluence, and strong normalization.

Furio Honsell, Marina Lenisa, Luigi Liquori, Ivan Scagnetto
Nominal Renaming Sets

Nominal techniques are based on the idea of sets with a finitely-supported atoms-permutation action.

We consider the idea of nominal renaming sets, which are sets with a finitely-supported atoms-renaming action; renamings can identify atoms, permutations cannot. We show that nominal renaming sets exhibit many of the useful qualities found in (permutative) nominal sets: an elementary sets-based presentation, inductive datatypes of syntax up to binding, cartesian closure, and being a topos. Unlike the case for nominal sets, the notion of names-abstraction coincides with functional abstraction. Thus we obtain a concrete presentation of sheaves on the category of finite sets in the form of a category of sets with structure.

Murdoch J. Gabbay, Martin Hofmann
Imogen: Focusing the Polarized Inverse Method for Intuitionistic Propositional Logic

In this paper we describe Imogen, a theorem prover for intuitionistic propositional logic using the focused inverse method. We obtain fine-grained control of the search behavior by polarizing the input formula. By manipulating the polarity of atoms and subformulas, we can often improve the search time by several orders of magnitude. We tested our method against seven other systems on the propositional fragment of the ILTP library. We found that our prover outperforms all other provers on a substantial subset of the library.

Sean McLaughlin, Frank Pfenning

Invited Talk

Model Checking – My 27-Year Quest to Overcome the State Explosion Problem

Model Checking is an automatic verification technique for state-transition systems that are finite-state or that have finite-state abstractions. In the early 1980s in a series of joint papers with my graduate students E.A. Emerson and A.P. Sistla, we proposed that Model Checking could be used for verifying concurrent systems and gave algorithms for this purpose. At roughly the same time, Joseph Sifakis and his student J.P. Queille at the University of Grenoble independently developed a similar technique. Model Checking has been used successfully to reason about computer hardware and communication protocols and is beginning to be used for verifying computer software. Specifications are written in temporal logic, which is particularly valuable for expressing concurrency properties. An intelligent, exhaustive search is used to determine if the specification is true or not. If the specification is not true, the Model Checker will produce a counterexample execution trace that shows why the specification does not hold. This feature is extremely useful for finding obscure errors in complex systems. The main disadvantage of Model Checking is the state-explosion problem, which can occur if the system under verification has many processes or complex data structures. Although the state-explosion problem is inevitable in the worst case, over the past 27 years considerable progress has been made on the problem for certain classes of state-transition systems that occur often in practice. In this talk, I will describe what Model Checking is, how it works, and the main techniques that have been developed for combating the state explosion problem.

Edmund M. Clarke
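The exhaustive search and counterexample traces described in this talk can be sketched in a few lines. This is a hypothetical minimal explicit-state reachability checker, not any particular model checker's implementation.

```python
from collections import deque

def check_invariant(init, succ, bad):
    """Minimal explicit-state model checking sketch: breadth-first search
    over the state graph. Returns None if no 'bad' state is reachable,
    otherwise a counterexample trace from an initial state to a bad state."""
    parent = {s: None for s in init}  # also serves as the visited set
    queue = deque(init)
    while queue:
        s = queue.popleft()
        if bad(s):
            trace = []
            while s is not None:  # walk parent pointers back to an initial state
                trace.append(s)
                s = parent[s]
            return list(reversed(trace))
        for t in succ(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None  # bad states unreachable: the invariant holds

# Toy system: a counter mod 8 that increments; the "bad" state is 5.
trace = check_invariant([0], lambda s: [(s + 1) % 8], lambda s: s == 5)
assert trace == [0, 1, 2, 3, 4, 5]
```

The state-explosion problem the talk discusses is visible even here: the `parent` dictionary stores every reachable state explicitly.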

Session 4. Automata

On the Relative Succinctness of Nondeterministic Büchi and co-Büchi Word Automata

The practical importance of automata on infinite objects has motivated a re-examination of the complexity of automata-theoretic constructions. One such construction is the translation, when possible, of nondeterministic Büchi word automata (NBW) to nondeterministic co-Büchi word automata (NCW). Among other applications, it is used in the translation (when possible) of LTL to the alternation-free μ-calculus. The best known upper bound for the translation of NBW to NCW is exponential (given an NBW with n states, the best translation yields an equivalent NCW with 2^{O(n log n)} states). On the other hand, the best known lower bound is trivial (no NBW with n states whose equivalent NCW requires even n + 1 states is known). In fact, only recently was it shown that there is an NBW whose equivalent NCW requires a different structure.

In this paper we improve the lower bound by showing that for every integer k ≥ 1 there is a language L_k over a two-letter alphabet, such that L_k can be recognized by an NBW with 2k + 1 states, whereas the minimal NCW that recognizes L_k has 3k states. Even though this gap is not asymptotically very significant, it nonetheless demonstrates for the first time that NBWs are more succinct than NCWs. In addition, our proof points to a conceptual advantage of the Büchi condition: an NBW can abstract precise counting by counting to infinity with two states. To complete the picture, we consider also the reverse NCW to NBW translation, and show that the known upper bound, which duplicates the state space, is tight.

Benjamin Aminof, Orna Kupferman, Omer Lev
Recurrent Reachability Analysis in Regular Model Checking

We consider the problem of recurrent reachability over infinite systems given by regular relations on words and trees, i.e., whether a given regular set of states can be reached infinitely often from a given initial state in the given transition system. Under the condition that the transitive closure of the transition relation is regular, we show that the problem is decidable, and the set of all initial states satisfying the property is regular. Moreover, our algorithm constructs an automaton for this set in polynomial time, assuming that a transducer of the transitive closure can be computed in poly-time. We then demonstrate that transition systems generated by pushdown systems, regular ground tree rewrite systems, and the well-known process algebra PA satisfy our condition and transducers for their transitive closures can be computed in poly-time. Our result also implies that model checking EF-logic extended by recurrent reachability predicate (EGF) over such systems is decidable.

Anthony Widjaja To, Leonid Libkin
Alternation Elimination by Complementation (Extended Abstract)

In this paper, we revisit constructions from the literature that translate alternating automata into language-equivalent nondeterministic automata. Such constructions are of practical interest in finite-state model checking, since formulas of widely used linear-time temporal logics with future and past operators can directly be translated into alternating automata. We present a construction scheme that can be instantiated for different automata classes to translate alternating automata into language-equivalent nondeterministic automata. The scheme emphasizes the core ingredient of previously proposed alternation-elimination constructions, namely, a reduction to the problem of complementing nondeterministic automata. Furthermore, we clarify and improve previously proposed constructions for different classes of alternating automata by recasting them as instances of our construction scheme. Finally, we present new complementation constructions for 2-way nondeterministic automata from which we then obtain novel alternation-elimination constructions.

Christian Dax, Felix Klaedtke
Discounted Properties of Probabilistic Pushdown Automata

We show that several basic discounted properties of probabilistic pushdown automata related both to terminating and non-terminating runs can be efficiently approximated up to an arbitrarily small given precision.

Tomáš Brázdil, Václav Brožek, Jan Holeček, Antonín Kučera

Session 5. Linear Arithmetic

A Quantifier Elimination Algorithm for Linear Real Arithmetic

We propose a new quantifier elimination algorithm for the theory of linear real arithmetic. This algorithm uses as subroutines satisfiability modulo this theory and polyhedral projection; there are good algorithms and implementations for both of these. The quantifier elimination algorithm presented in the paper is compared, on examples arising from program analysis problems and on random examples, to several other implementations, each of which fails on some of the examples that our algorithm solves easily.

David Monniaux
ME(LIA) - Model Evolution with Linear Integer Arithmetic Constraints

Many applications of automated deduction require reasoning modulo some form of integer arithmetic. Unfortunately, theory reasoning support for the integers in current theorem provers is sometimes too weak for practical purposes. In this paper we propose a novel calculus for a large fragment of first-order logic modulo Linear Integer Arithmetic (LIA) that overcomes several limitations of existing theory reasoning approaches. The new calculus, based on the Model Evolution calculus, a first-order logic version of the propositional DPLL procedure, supports restricted quantifiers, requires only a decision procedure for LIA-validity instead of a complete LIA-unification procedure, and is amenable to strong redundancy criteria. We present a basic version of the calculus and prove it sound and (refutationally) complete.

Peter Baumgartner, Alexander Fuchs, Cesare Tinelli
A Constraint Sequent Calculus for First-Order Logic with Linear Integer Arithmetic

First-order logic modulo the theory of integer arithmetic is the basis for reasoning in many areas, including deductive software verification and software model checking. While satisfiability checking for ground formulae in this logic is well understood, it is still an open question how the general case of quantified formulae can be handled in an efficient and systematic way. As a possible answer, we introduce a sequent calculus that combines ideas from free-variable constraint tableaux with the Omega quantifier elimination procedure. The calculus is complete for theorems of first-order logic (without functions, but with arbitrary uninterpreted predicates), can decide Presburger arithmetic, and is complete for a substantial fragment of the combination of both.

Philipp Rümmer
Encoding Queues in Satisfiability Modulo Theories Based Bounded Model Checking

Using a Satisfiability Modulo Theories (SMT) solver as the back-end in SAT-based software model checking allows common data types to be represented directly in the language of the solver. A problem is that many software systems involve first-in-first-out queues but current SMT solvers do not support the theory of queues. This paper studies how to encode queues in the context of SMT-based bounded model checking, using only widely supported theories such as linear arithmetic and uninterpreted functions. Various encodings with considerably different compactness and requirements for available theories are proposed. An experimental comparison of the relative efficiency of the encodings is given.

Tommi Junttila, Jori Dubrovin
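One natural encoding along the lines this abstract describes treats the queue contents as an uninterpreted function over integer positions, with head/tail pointers handled by linear arithmetic. The following hypothetical sketch executes one such transition relation concretely rather than symbolically; the state representation and operation names are invented for illustration.

```python
def step(state, op, arg=None):
    """One unrolling step of a FIFO queue encoded as (store, head, tail):
    'store' plays the role of an uninterpreted function from integer
    positions to values; head and tail are linear-arithmetic counters.
    In an SMT encoding, each step would add constraints relating the
    state at step i to the state at step i+1."""
    store, head, tail = state
    if op == "enq":  # write at position 'tail', then advance it
        return ({**store, tail: arg}, head, tail + 1)
    if op == "deq":  # reading happens at position 'head'
        assert head < tail, "dequeue on empty queue"
        return (store, head + 1, tail)
    raise ValueError(op)

# Unroll a three-operation trace, as bounded model checking would.
s = ({}, 0, 0)
for op, arg in [("enq", 7), ("enq", 9), ("deq", None)]:
    s = step(s, op, arg)
store, head, tail = s
assert store[head] == 9 and tail - head == 1  # front element and queue length
```

The queue-emptiness and front-element facts used in the assertions are exactly the kind of properties that reduce to linear arithmetic over the counters plus function reads at `head`.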

Session 6. Verification

On Bounded Reachability of Programs with Set Comprehensions

We analyze the bounded reachability problem of programs that use abstract data types and set comprehensions. Such programs are common as high-level executable specifications of complex protocols. We prove decidability and undecidability results of restricted cases of the problem and extend the Satisfiability Modulo Theories approach to support analysis of set comprehensions over tuples and bag axioms. We use the Z3 solver for our implementation and experiments, and we use AsmL as the modeling language.

Margus Veanes, Ando Saabas
Program Complexity in Hierarchical Module Checking

Module checking is a well-investigated technique for verifying the correctness of open systems, which are systems characterized by an ongoing interaction with an external environment. In the classical module checking framework, in order to check whether an open system satisfies a required property, we first translate the entire system into an open model (module) that collects all possible behaviors of the environment and then check it with respect to a formal specification of the property.

Recently, in the case of closed systems, Alur and Yannakakis have considered hierarchical structure models in order to obtain models that are exponentially more succinct. A hierarchical model uses as nodes both ordinary nodes and supernodes, which are hierarchical models themselves. For CTL specifications, it has been shown that for the simple case of models having only single-exit supernodes, the hierarchical model checking problem is not harder than the classical one. On the contrary, for the more general multiple-exit case, the problem becomes Pspace-complete.

In this paper, we investigate the program complexity of the CTL hierarchical module checking problem, that is, we consider the module checking problem for a fixed CTL formula and modules whose supernodes are modules themselves. By exploiting an automata-theoretic approach through the introduction of hierarchical Büchi tree automata, we show that, in the single-exit case, the addressed problem remains in Ptime, while in the multiple-exit case, it becomes Pspace-complete.

Aniello Murano, Margherita Napoli, Mimmo Parente
Valigator: A Verification Tool with Bound and Invariant Generation

We describe Valigator, a software tool for imperative program verification that efficiently combines symbolic computation and automated reasoning in a uniform framework. The system offers support for automatically generating and proving verification conditions and, most importantly, for automatically inferring loop invariants and bound assertions by means of symbolic summation, Gröbner basis computation, and quantifier elimination. We present general principles of the implementation and illustrate them on examples.

Thomas A. Henzinger, Thibaud Hottelier, Laura Kovács
Reveal: A Formal Verification Tool for Verilog Designs

We describe the Reveal formal functional verification system and its application to four representative hardware test cases. Reveal employs counterexample-guided abstraction refinement, or CEGAR, and is suitable for verifying the complex control logic of designs with wide datapaths. Reveal performs automatic datapath abstraction yielding an approximation of the original design with a much smaller state space. This approximation is subsequently used to verify the correctness of control logic interactions. If the approximation proves to be too coarse, it is automatically refined based on the spurious counterexample it generates. Such refinement can be viewed as a form of on-demand “learning” similar in spirit to conflict-based learning in modern Boolean satisfiability solvers. The abstraction/refinement process is iterated until the design is shown to be correct or an actual design error is reported. The Reveal system allows some user control over the abstraction and refinement steps. This paper examines the effect on Reveal’s performance of the various available options for abstraction and refinement. Based on our initial experience with this system, we believe that automating the verification for a useful class of hardware designs is now quite feasible.

Zaher S. Andraus, Mark H. Liffiton, Karem A. Sakallah

Invited Talks

A Formal Language for Cryptographic Pseudocode

Game-based cryptographic proofs are typically expressed using pseudocode, which lacks a formal semantics. This can lead to ambiguous specifications, hidden mistakes, and even wrong proofs. We propose a language for expressing proofs that is expressive enough to specify all constructs occurring in cryptographic games, including probabilistic behaviors, the usage of oracles, and polynomial-time programs. The language is a probabilistic higher-order lambda calculus with recursive types, references, and support for events, and is simple enough that researchers without a strong background in the theory of programming languages can understand it. The language has been implemented in the proof assistant Isabelle/HOL.

Michael Backes, Matthias Berg, Dominique Unruh
Reasoning Using Knots

The deployment of Description Logics (DLs) and Answer Set Programming (ASP), which are well-known knowledge representation and reasoning formalisms, to a growing range of applications has created the need for novel reasoning algorithms and methods. Recently, knots have been introduced as a tool to facilitate reasoning tasks in extensions of ASP with functions symbols. They were then also fruitfully applied for query answering in Description Logics, hinging on the forest-shaped model property of knowledge bases. This paper briefly reviews the knot idea at a generic level and recalls some of the results obtained with them. It also discusses features of knots and relations to other reasoning techniques, and presents issues for further research.

Thomas Eiter, Magdalena Ortiz, Mantas Šimkus

Session 7. Knowledge Representation 2

Role Conjunctions in Expressive Description Logics

We show that adding role conjunctions to the Description Logics (DLs) $\mathcal{SHI}$ and $\mathcal{SHOIF}$ causes a jump in the computational complexity of the standard reasoning tasks from ExpTime-complete to 2ExpTime-complete and from NExpTime-complete to N2ExpTime-hard, respectively. We further show that this increase is due to a subtle interaction between inverse roles, role hierarchies, and role transitivity in the presence of role conjunctions and that for the DL $\mathcal{SHQ}$ a jump in the computational complexity cannot be observed.

Birte Glimm, Yevgeny Kazakov
Default Logics with Preference Order: Principles and Characterisations

Practical use of default logics requires mechanisms to select the more suitable extensions from among the several that a classical default theory often allows. An obvious solution is to order defaults in a preference hierarchy, and use this ordering to select preferred extensions. The literature contains many suggestions on how to implement such a scheme. The problem is that they yield different results: all authors agree that preferred extensions employ preferred defaults, but this apparent agreement hides differences in lower-level decisions. While motivations for these are rarely discussed, their consequences for overall behaviour are wide-ranging. This paper points towards standardisation, discussing principles that ought to hold and then working top-down to determine lower-level details. We present characterisations, uncover anomalies of existing approaches, and suggest repairs.

We build on work by Brewka and Eiter [4], who first identified some of the desiderata discussed here. A slightly modified version of the notion of preferred extension proposed by these authors, and one by Delgrande and Schaub [5], are identified as the most and least inclusive notions of extension satisfying all desiderata. We point out that these two (in the literature previously termed “descriptive” and “prescriptive”, respectively) differ along two rather independent dimensions, and two additional notions are then identified, representing the remaining possibilities.

Tore Langholm
On Computing Constraint Abduction Answers

We address the problem of computing and representing answers of constraint abduction problems over the Herbrand domain. This problem is of interest when performing type inference involving generalized algebraic data types. We show that simply recognizing a maximally general answer or fully maximal answer is co-NP-complete. However, we present an algorithm that computes the (finite) set of fully maximal answers of an abduction problem. The maximally general answers are generally infinite in number, but we show how to generate a finite representation of them when only unary function symbols are present.

Michael Maher, Ge Huang
Fast Counting with Bounded Treewidth

Many intractable problems have been shown to become tractable if the treewidth of the underlying structure is bounded by a constant. An important tool for deriving such results is Courcelle's Theorem, which states that all properties defined by Monadic Second-Order (MSO) sentences are fixed-parameter tractable with respect to the treewidth. Arnborg et al. extended this result to counting problems defined via MSO properties. However, the MSO description of a problem is of course not an algorithm. Consequently, proving the fixed-parameter tractability of some problem via Courcelle's Theorem can be considered as the starting point rather than the endpoint of the search for an efficient algorithm. Gottlob et al. have recently presented a new approach via monadic datalog to actually devise efficient algorithms for decision problems whose tractability follows from Courcelle's Theorem. In this paper, we extend this approach and apply it to some fundamental counting problems in logic and artificial intelligence.

Michael Jakl, Reinhard Pichler, Stefan Rümmele, Stefan Woltran

Session 8. Proof-Theory 2

Cut Elimination for First Order Gödel Logic by Hyperclause Resolution

Efficient, automated elimination of cuts is a prerequisite for proof analysis. The method CERES, based on Skolemization and resolution, has been successfully developed for classical logic for this purpose. We generalize this method to Gödel logic, an important intermediate logic, which is also one of the main formalizations of fuzzy logic.

Matthias Baaz, Agata Ciabattoni, Christian G. Fermüller
Focusing Strategies in the Sequent Calculus of Synthetic Connectives

It is well-known that focusing striates a sequent derivation into phases of like polarity where each phase can be seen as inferring a synthetic connective. We present a sequent calculus of synthetic connectives based on neutral proof patterns, which are a syntactic normal form for such connectives. Different focusing strategies arise from different polarisations and arrangements of synthetic inference rules, which are shown to be complete by synthetic rule permutations. A simple generic cut-elimination procedure for synthetic connectives respects both the ordinary focusing and the maximally multi-focusing strategies, answering the open question of cut-admissibility for maximally multi-focused proofs.

Kaustuv Chaudhuri
An Algorithmic Interpretation of a Deep Inference System

We set out to find something that corresponds to deep inference in the same way that the lambda-calculus corresponds to natural deduction. Starting from natural deduction for the conjunction-implication fragment of intuitionistic logic we design a corresponding deep inference system together with reduction rules on proofs that allow a fine-grained simulation of beta-reduction.

Kai Brünnler, Richard McKinley
Weak βη-Normalization and Normalization by Evaluation for System F

A general version of the fundamental theorem for System F is presented which can be instantiated to obtain proofs of weak β- and βη-normalization and normalization by evaluation.

Andreas Abel
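Normalization by evaluation (NbE) normalizes a term by interpreting it in a semantic domain and then reading back ("reifying") a syntactic normal form. The paper treats System F; the following is only a minimal illustrative sketch of the same idea for the untyped lambda calculus, with a made-up term encoding (tagged tuples) and Python closures as the semantic domain.

```python
# NbE sketch for the untyped lambda calculus: evaluate into a semantic
# domain (Python closures plus neutral terms), then reify a beta-normal
# form, generating fresh variable names along the way.

def evaluate(term, env):
    tag = term[0]
    if tag == 'var':
        return env[term[1]]
    if tag == 'lam':
        _, x, body = term
        return ('fun', lambda v: evaluate(body, {**env, x: v}))
    _, f, a = term  # application
    fv, av = evaluate(f, env), evaluate(a, env)
    if fv[0] == 'fun':
        return fv[1](av)                     # beta step in the semantics
    return ('neutral', ('app', fv, av))      # stuck on a free variable

def reify(value, fresh=0):
    if value[0] == 'fun':
        x = f'x{fresh}'
        body = value[1](('neutral', ('var', x)))
        return ('lam', x, reify(body, fresh + 1))
    return reify_neutral(value[1], fresh)

def reify_neutral(n, fresh):
    if n[0] == 'var':
        return n
    _, f, a = n
    return ('app', reify_neutral(f[1], fresh), reify(a, fresh))

def normalize(term):
    return reify(evaluate(term, {}))

# (\f. \y. f y) (\z. z) normalizes to the identity (up to renaming):
K = ('app', ('lam', 'f', ('lam', 'y', ('app', ('var', 'f'), ('var', 'y')))),
     ('lam', 'z', ('var', 'z')))
print(normalize(K))  # → ('lam', 'x0', ('var', 'x0'))
```

The paper's contribution lies in proving such an evaluation-based normalizer correct for System F via a general fundamental theorem; the sketch above only illustrates the mechanism.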

Session 9. Quantified Constraints

Variable Dependencies of Quantified CSPs

Quantified constraint satisfaction extends classical constraint satisfaction by a linear order of the variables and an associated existential or universal quantifier for each variable. In general, the semantics of the quantifiers does not allow the linear order of the variables to be changed arbitrarily without affecting the truth value of the instance. In this paper we investigate variable dependencies that are caused by the influence of the relative order between these variables on the truth value of the instance. Several approaches have been proposed in the literature for identifying such dependencies in the context of quantified Boolean formulas. We generalize these ideas to quantified constraint satisfaction and present new concepts that allow a refined analysis.

Marko Samer
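The dependency phenomenon described above is easy to see with a brute-force evaluator: swapping a ∀/∃ pair can flip the truth value of an instance. The following is an illustrative sketch over the Boolean domain; the encoding (prefix as a list, matrix as a predicate) is made up for this example.

```python
# Brute-force evaluation of a quantified formula over the domain {0, 1},
# illustrating that the order of quantified variables matters.

def holds(prefix, matrix, assignment=None):
    """prefix: list of ('forall' | 'exists', var); matrix: predicate on a
    dict assigning values to variables."""
    assignment = assignment or {}
    if not prefix:
        return matrix(assignment)
    q, x = prefix[0]
    results = (holds(prefix[1:], matrix, {**assignment, x: v}) for v in (0, 1))
    return all(results) if q == 'forall' else any(results)

eq = lambda a: a['x'] == a['y']
print(holds([('forall', 'x'), ('exists', 'y')], eq))  # → True
print(holds([('exists', 'y'), ('forall', 'x')], eq))  # → False
```

Here y may depend on x in the first prefix but not in the second, which is exactly the kind of variable dependency the paper analyses for general (non-Boolean) QCSPs.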
Treewidth: A Useful Marker of Empirical Hardness in Quantified Boolean Logic Encodings

Theoretical studies show that in some combinatorial problems, there is a close relationship between classes of tractable instances and the treewidth (tw) of graphs describing their structure. In the case of satisfiability for quantified Boolean formulas (QBFs), tractable classes can be related to a generalization of treewidth, which we call quantified treewidth (tw_p). In this paper we investigate the practical relevance of computing tw_p for problem domains encoded as QBFs. We show that an approximation of tw_p is a predictor of empirical hardness, and that it is the only parameter among several other candidates which succeeds consistently in being so. We also provide evidence that QBF solvers benefit from a preprocessing phase geared towards reducing tw_p, and that such a phase is a potential enabler for the solution of hard QBF encodings.

Luca Pulina, Armando Tacchella
Tractable Quantified Constraint Satisfaction Problems over Positive Temporal Templates

A positive temporal template (or a positive temporal constraint language) is a relational structure whose relations can be defined over the dense linear order of the rational numbers using the relation symbol ≤, logical conjunction, and disjunction.

We provide a complexity characterization for quantified constraint satisfaction problems (QCSP) over positive temporal languages. The considered QCSP problems are decidable in LOGSPACE or complete for one of the following classes: NLOGSPACE, P, NP, PSPACE. Our classification is based on the so-called algebraic approach to constraint satisfaction problems: we first classify positive temporal languages depending on their surjective polymorphisms and then give the complexity of QCSP for each obtained class.

The complete characterization is quite complex and does not fit into one paper. Here we prove that QCSP for positive temporal languages is either NP-hard or belongs to P, and we give the whole description of the latter case, that is, we show for which positive temporal languages the problem QCSP is in LOGSPACE, and for which it is NLOGSPACE-complete or P-complete. The classification of the NP-hard cases is given in a separate paper.

Witold Charatonik, Michał Wrona
A Logic of Singly Indexed Arrays

We present a logic interpreted over integer arrays, which allows difference bound comparisons between array elements situated within a constant sized window. We show that the satisfiability problem for the logic is undecidable for formulae with a quantifier prefix {∃,∀}* ∀* ∃* ∀*. For formulae with quantifier prefixes in the ∃*∀* fragment, decidability is established by an automata-theoretic argument. For each formula in the ∃*∀* fragment, we can build a flat counter automaton with difference bound transition rules (FCADBM) whose traces correspond to the models of the formula. The construction is modular, following the syntax of the formula. Decidability of the ∃*∀* fragment of the logic is a consequence of the fact that reachability of a control state is decidable for FCADBM.

Peter Habermehl, Radu Iosif, Tomáš Vojnar
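A concrete universally quantified property of the shape the logic describes (difference bound comparisons within a constant window) can be checked by brute force on a finite array. The following is an illustrative stand-in with a made-up function name, not the paper's decision procedure:

```python
# Check the forall-formula "for all i: a[i+1] - a[i] <= c", i.e. a
# difference bound constraint between elements in a window of size 2.

def bounded_increase(a, c):
    return all(a[i + 1] - a[i] <= c for i in range(len(a) - 1))

print(bounded_increase([0, 2, 3, 5], 2))  # → True
print(bounded_increase([0, 4, 5], 2))     # → False
```

The paper's contribution is deciding such ∃*∀* formulae over all (unbounded) arrays, via translation to flat counter automata, rather than checking one concrete array.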

Session 10. Modal and Temporal Logics

On the Computational Complexity of Spatial Logics with Connectedness Constraints

We investigate the computational complexity of spatial logics extended with the means to represent topological connectedness and restrict the number of connected components. In particular, we show that the connectedness constraints can increase complexity from NP to PSpace, ExpTime and, if component counting is allowed, to NExpTime.

Roman Kontchakov, Ian Pratt-Hartmann, Frank Wolter, Michael Zakharyaschev
Decidable and Undecidable Fragments of Halpern and Shoham’s Interval Temporal Logic: Towards a Complete Classification

Interval temporal logics are based on temporal structures where time intervals, rather than time instants, are the primitive ontological entities. They employ modal operators corresponding to various relations between intervals, known as Allen’s relations. Technically, validity in interval temporal logics translates to dyadic second-order logic, thus explaining their complex computational behavior. The full modal logic of Allen’s relations, called HS, has been proved undecidable by Halpern and Shoham under very weak assumptions on the class of interval structures, and this result discouraged attempts at practical applications and further research in the field. A renewed interest has recently been stimulated by the discovery of interesting decidable fragments of HS. This paper contributes to the characterization of the boundary between decidability and undecidability of HS fragments. It summarizes known positive and negative results, describes the main techniques applied so far in both directions, and establishes a number of new undecidability results for relatively small fragments of HS.

Davide Bresolin, Dario Della Monica, Valentin Goranko, Angelo Montanari, Guido Sciavicco
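Each of Allen's 13 relations between two intervals can be decided by simple endpoint comparisons; the modal operators of HS correspond to these relations. A small illustrative checker (assuming strict intervals with left endpoint below right endpoint; the function name is made up):

```python
# Classify the Allen relation between intervals x = (x1, x2) and
# y = (y1, y2), assuming x1 < x2 and y1 < y2. The six "inverse"
# relations are obtained by swapping the arguments.

def allen(x, y):
    (x1, x2), (y1, y2) = x, y
    if x == y:               return 'equals'
    if x2 < y1:              return 'before'
    if x2 == y1:             return 'meets'
    if x1 == y1:             return 'starts' if x2 < y2 else 'started-by'
    if x2 == y2:             return 'finishes' if x1 > y1 else 'finished-by'
    if y1 < x1 and x2 < y2:  return 'during'
    if x1 < y1 and y2 < x2:  return 'contains'
    if x1 < y1 < x2 < y2:    return 'overlaps'
    # remaining cases are mirror images of before / meets / overlaps
    inverse = {'before': 'after', 'meets': 'met-by', 'overlaps': 'overlapped-by'}
    return inverse[allen(y, x)]

print(allen((1, 3), (3, 5)))  # → meets
print(allen((2, 3), (1, 5)))  # → during
```

Fragments of HS are obtained by restricting which of these relations get a modal operator, which is exactly the parameter along which the paper charts decidability.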
The Variable Hierarchy for the Lattice μ-Calculus

The variable hierarchy problem asks whether every μ-term t is equivalent to a μ-term t′ where the number of fixed-point variables in t′ is bounded by a constant. In this paper we prove that the variable hierarchy of the lattice μ-calculus – whose standard interpretation is over the class of all complete lattices – is infinite, meaning that such a constant does not exist if the μ-terms are built up using the basic lattice operations as well as the least and the greatest fixed point operators. The proof relies on the description of the lattice μ-calculus by means of games and strategies.

Walid Belkhir, Luigi Santocanale
A Formalised Lower Bound on Undirected Graph Reachability

We study the expressivity of Jumping Automata on Graphs (JAGs), an idealised model of computation with logarithmic space. JAGs operate on graphs by using finite control and a constant number of pebbles. In this paper we revisit the proof of Cook & Rackoff that JAGs cannot decide s-t-reachability in undirected graphs. Cook & Rackoff prove this result by constructing, for any given JAG, a finite graph that cannot be traversed exhaustively by the JAG. We generalise this result from the graphs constructed by Cook & Rackoff to a general class of group action graphs. We establish a bound on the number of nodes that a JAG can visit on such action graphs. This generalisation allows us to strengthen the result of Cook & Rackoff to the existence of a graph of small degree whose diameter (rather than its number of nodes) is larger than the number of nodes the JAG can visit. The main result has been formalised in the theorem prover Coq, using Gonthier’s tactic language SSReflect.

Ulrich Schöpp

Session 11. Rewriting

Improving Context-Sensitive Dependency Pairs

Context-sensitive dependency pairs (CS-DPs) are currently the most powerful method for automated termination analysis of context-sensitive rewriting. However, compared to DPs for ordinary rewriting, CS-DPs suffer from two main drawbacks: (a) CS-DPs can be collapsing. This complicates the handling of CS-DPs and makes them less powerful in practice. (b) There does not exist a “DP framework” for CS-DPs which would allow one to apply them in a flexible and modular way. This paper solves drawback (a) by introducing a new definition of CS-DPs. With our definition, CS-DPs are always non-collapsing and thus, they can be handled like ordinary DPs. This allows us to solve drawback (b) as well, i.e., we extend the existing DP framework for ordinary DPs to context-sensitive rewriting. We implemented our results in the tool AProVE and successfully evaluated them on a large collection of examples.

Beatriz Alarcón, Fabian Emmes, Carsten Fuhs, Jürgen Giesl, Raúl Gutiérrez, Salvador Lucas, Peter Schneider-Kamp, René Thiemann
Complexity, Graphs, and the Dependency Pair Method

This paper builds on recent efforts (Hirokawa and Moser, 2008) to exploit the dependency pair method for verifying feasible, i.e., polynomial runtime complexities of term rewrite systems automatically. We extend our earlier results by revisiting dependency graphs in the context of complexity analysis. The obtained new results are easy to implement and considerably extend the analytic power of our existing methods. The gain in power is even more significant when compared to existing methods that directly, i.e., without the use of transformations, induce feasible runtime complexities. We provide ample numerical data for assessing the viability of the method.

Nao Hirokawa, Georg Moser
Uncurrying for Termination

First-order applicative term rewrite systems provide a natural framework for modeling higher-order aspects. In this paper we present a transformation from untyped applicative term rewrite systems to functional term rewrite systems that preserves and reflects termination. Our transformation is less restrictive than other approaches. In particular, head variables in right-hand sides of rewrite rules can be handled. To further increase the applicability of our transformation, we present a version for dependency pairs.

Nao Hirokawa, Aart Middeldorp, Harald Zankl
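The core idea of uncurrying can be illustrated on terms: a nested applicative term built from a binary application symbol, such as app(app(f, x), y), is flattened into the functional term f(x, y). A minimal sketch of this term-level transformation (the paper's full transformation also uncurries the rewrite rules themselves; the encoding below is made up):

```python
# Uncurry an applicative term: terms are strings (variables/constants)
# or ('app', s, t) pairs with a binary application symbol.

def uncurry(term):
    """Return (head, [uncurried argument strings])."""
    if isinstance(term, str):
        return term, []
    _, s, t = term
    head, args = uncurry(s)
    return head, args + [uncurry_term(t)]

def uncurry_term(term):
    head, args = uncurry(term)
    return f"{head}({', '.join(args)})" if args else head

print(uncurry_term(('app', ('app', 'f', 'x'), 'y')))  # → f(x, y)
```

The subtlety the paper addresses is that this transformation must preserve and reflect termination even for systems with head variables, where the head of an application is itself a variable.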
Approximating Term Rewriting Systems: A Horn Clause Specification and Its Implementation

We present a technique for approximating the set of reachable terms of a given term rewriting system starting from a given initial regular set of terms. The technique is based on previous work by other authors with the same goal, and yields a finite tree automaton recognising an over-approximation of the set of reachable terms. Our contributions are, firstly, to use Horn clauses to specify the transitions of a possibly infinite-state tree automaton defining (at least) the reachable terms. Apart from being a clear specification, the Horn clause model is the basis for further automatic approximations using standard logic program analysis techniques, yielding finite-state tree automata. The approximations are applied in two stages: first a regular approximation of the model of the given Horn clauses is constructed, and secondly a more precise relational abstraction is built using the first approximation. The analysis uses efficient representations based on BDDs, leading to more scalable implementations. We report on preliminary experimental results.

John P. Gallagher, Mads Rosendahl
A Higher-Order Iterative Path Ordering

The higher-order recursive path ordering (HORPO) defined by Jouannaud and Rubio provides a method to prove termination of higher-order rewriting. We present an iterative version of HORPO by means of an auxiliary term rewriting system, following an approach originally due to Bergstra and Klop. We study well-foundedness of the iterative definition, discuss its relationship with the original HORPO, and point out possible ways to strengthen the ordering.

Cynthia Kop, Femke van Raamsdonk
Backmatter
Metadata
Title
Logic for Programming, Artificial Intelligence, and Reasoning
Edited by
Iliano Cervesato
Helmut Veith
Andrei Voronkov
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-89439-1
Print ISBN
978-3-540-89438-4
DOI
https://doi.org/10.1007/978-3-540-89439-1
