
2003 | Book

Computer Science Logic

17th International Workshop CSL 2003, 12th Annual Conference of the EACSL, 8th Kurt Gödel Colloquium, KGC 2003, Vienna, Austria, August 25-30, 2003. Proceedings

Edited by: Matthias Baaz, Johann A. Makowsky

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the joint refereed proceedings of the 17th International Workshop on Computer Science Logic, CSL 2003, held as the 12th Annual Conference of the EACSL and of the 8th Kurt Gödel Colloquium, KGC 2003 in Vienna, Austria, in August 2003.

The 30 revised full papers presented together with abstracts of 9 invited presentations were carefully reviewed and selected from a total of 112 submissions. All current aspects of computer science logic are addressed, ranging from mathematical logic and logical foundations to applications of logic in various areas of computing.

Table of contents

Frontmatter
Deciding Monotonic Games

In an earlier work [AČJYK00] we presented a general framework for verification of infinite-state transition systems, where the transition relation is monotonic with respect to a well quasi-ordering on the set of states. In this paper, we investigate extending the framework from the context of transition systems to that of games. We show that monotonic games are in general undecidable. We identify a subclass of monotonic games, called downward closed games. We provide algorithms for analyzing downward closed games subject to winning conditions which are formulated as safety properties.

Parosh Aziz Abdulla, Ahmed Bouajjani, Julien d’Orso
The Commuting V-Diagram
On the Relation of Refinement and Testing

This article discusses the relations between step-wise development through refinement and the design of test-cases. It turns out that a commuting diagram inspired by the V-process model is able to clarify the issues involved. This V-diagram defines the dependencies of specifications, implementations and test-cases in the category of contracts. The objects in this category are contracts defined in the formalism of the refinement calculus. The maps are the refinement steps between these objects. Our framework makes it possible to define a correctness notion for test-cases, to express testing strategies as refinement rules, and to determine which test-cases should be added under refinement.

Bernhard K. Aichernig
Concurrent Construction of Proof-Nets

The functional paradigm of computation has been widely investigated and given a solid mathematical foundation, initiated with the Curry-Howard isomorphism, then elaborated and extended in multiple ways. However, this paradigm is inadequate to capture many useful programming intuitions, arising in particular in the development of applications integrating distributed, autonomous components. Indeed, in this context, non-determinism and true concurrency are the rule, whereas functional programming stresses determinism, and, although it allows some degree of concurrency, it is treated more as a “nice feature to have” than as a primary assumption.

This paper is part of a program the ambition of which is to provide a logical foundation to a set of programming intuitions which, until now, have not been adequately accounted for. In particular, we are interested in the intuitions which lie behind the concept of transaction, a powerful and essential concept in distributed component-based application development. This concept is independent of the application domain and usually captured in an abstract form in middleware architectural layers.

We claim here that proof-construction, and more precisely proof-net construction in Linear Logic, offers an adequate basis for our purpose. We outline the relation, which is of the same nature as the Curry-Howard isomorphism, between transactional concepts and mechanisms on the one hand, and proof-net construction on the other. Finally, we describe an algorithm which performs concurrent proof-net construction, where each expansion step is viewed as a transaction. Conflicts between such transactions are minimised using general topological properties of proof-nets, based on a variant of the notion of “domination tree”, introduced and proved here.

Jean-Marc Andreoli, Laurent Mazaré
Back to the Future: Explicit Logic for Computer Science

We will speak about three traditions in Logic:
– Classical, usually associated with Frege, Hilbert, Gödel, Tarski, and others;
– Intuitionistic, founded by Brouwer, Heyting, Kolmogorov, Gödel, Kleene, and others;
– Explicit, which we trace back to Skolem, Curry, Gödel, Church, and others.

The classical tradition in logic, based on the quantifiers ∀ and ∃, essentially reflected the 19th century mathematician's way of representing dependencies between entities. A sentence ∀x∃y A(x,y), though specifying a certain relation between x and y, did not mean that the latter is a function of the former, let alone a computable one. The intuitionistic approach provided a principal shift toward the effective functional reading of the mathematician's quantifiers. A new, non-Tarskian semantics had been suggested by Kleene: realizability, which revealed the computational content of logical derivations. In a decent intuitionistic system, a proof of ∀x∃y A(x,y) yields a program f that computes y = f(x).

The explicit tradition makes the ultimate step by using representative systems of functions instead of quantifiers from the very beginning. Since the work of Skolem (1920), it has been known that classical logic can be adequately recast in this way. Church in 1936 showed that even the very basic system of function definition and function application is capable of emulating any computable procedure. However, despite this impressive start, the explicit tradition remained a Cinderella of mathematical logic for decades. Now things have changed: due to its very explicitness, this third tradition became the one most closely connected with Computer Science.

In this talk we will show how switching from quantifiers to an explicit functional language helps problem solving in both theoretical logic and its applications. The discovery of a natural system of self-referential proof terms, proof polynomials, was essential in the solution to an open problem of Gödel concerning the formalization of provability. Proof polynomials considerably extend the Curry-Howard isomorphism and lead to a joint calculus of propositions and proofs which unifies several previously unrelated areas. It changes our conception of the appropriate syntax and semantics for reasoning about knowledge, functional programming languages, formalized deduction and verification.

Sergei Artemov
Constraint Satisfaction with Countable Homogeneous Templates

For a fixed countable homogeneous relational structure Γ we study the computational problem whether a given finite structure of the same signature homomorphically maps to Γ. This problem is known as the constraint satisfaction problem CSP(Γ) for Γ and was intensively studied for finite Γ. We show that, as in the case of finite Γ, the computational complexity of CSP(Γ) for countable homogeneous Γ is determined by the clone of polymorphisms of Γ. To this end we prove the following theorem, which is of independent interest: the primitive positive definable relations over an ω-categorical structure Γ are precisely the relations that are invariant under the polymorphisms of Γ.

Constraint satisfaction with countable homogeneous templates is a proper generalization of constraint satisfaction with finite templates. If the age of Γ is finitely axiomatizable, then CSP(Γ) is in NP. If Γ is a digraph we can use the classification of homogeneous digraphs by Cherlin to determine the complexity of CSP(Γ).
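The invariance notion at the heart of this theorem can be made concrete in the finite case. The sketch below (our own illustration with hypothetical names, not from the paper) checks whether an operation is a polymorphism of a relation, i.e. whether the relation is invariant under applying the operation coordinate-wise to its tuples.

```python
from itertools import product

def is_polymorphism(f, arity, relation):
    """Check that the `arity`-ary operation f preserves `relation`:
    applying f coordinate-wise to any choice of `arity` tuples from
    the relation must again yield a tuple of the relation."""
    rel = set(relation)
    for rows in product(rel, repeat=arity):
        image = tuple(f(*column) for column in zip(*rows))
        if image not in rel:
            return False
    return True

# Example over domain {0,1}: the (monotone) majority operation
# preserves the order relation <= given as a set of tuples.
maj = lambda x, y, z: 1 if x + y + z >= 2 else 0
leq = [(0, 0), (0, 1), (1, 1)]
print(is_polymorphism(maj, 3, leq))  # True
```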

Manuel Bodirsky, Jaroslav Nešetřil
Quantified Constraints: Algorithms and Complexity

The standard constraint satisfaction problem over an arbitrary finite domain can be expressed as follows: given a first-order sentence consisting of a conjunction of predicates, where all of the variables are existentially quantified, determine whether the sentence is true. This problem can be parameterized by the set of allowed constraint predicates. With each predicate, one can associate certain predicate-preserving operations, called polymorphisms, and the complexity of the parameterized problem is known to be determined by the polymorphisms of the allowed predicates. In this paper we consider a more general framework for constraint satisfaction problems which allows arbitrary quantifiers over constrained variables, rather than just existential quantifiers. We show that the complexity of such extended problems is determined by the surjective polymorphisms of the constraint predicates. We give examples to illustrate how this result can be used to identify tractable and intractable cases for the quantified constraint satisfaction problem over arbitrary finite domains.
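For intuition, a quantified constraint formula over a finite domain can be decided by brute force, expanding each quantifier over the whole domain. The evaluator below is a minimal sketch of ours (exponential by design, with an assumed encoding of constraints), not an algorithm from the paper.

```python
def eval_qcsp(quantifiers, constraints, domain, assignment=()):
    """Decide Q1 x1 ... Qn xn. C1 & ... & Cm over a finite domain.
    quantifiers: list of 'forall'/'exists' (one per variable);
    constraints: list of (scope, allowed) pairs, where scope is a
    tuple of variable indices and allowed is a set of value tuples."""
    i = len(assignment)
    if i == len(quantifiers):  # all variables bound: test the conjunction
        return all(tuple(assignment[v] for v in scope) in allowed
                   for scope, allowed in constraints)
    branches = (eval_qcsp(quantifiers, constraints, domain,
                          assignment + (d,)) for d in domain)
    return all(branches) if quantifiers[i] == 'forall' else any(branches)

# forall x exists y: x != y, over the domain {0,1} -- true
neq = {(a, b) for a in (0, 1) for b in (0, 1) if a != b}
print(eval_qcsp(['forall', 'exists'], [((0, 1), neq)], (0, 1)))  # True
```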

Ferdinand Börner, Andrei Bulatov, Peter Jeavons, Andrei Krokhin
Verification of Infinite State Systems

The aim of this tutorial is to give an overview on the state-of-the-art in infinite-state model checking and its applications.

We present a unified modeling framework based on word/term rewrite systems and show its relevance in reasoning about several important classes of systems (communication protocols, parametrized distributed algorithms, multithreaded programs, etc.). Then, we address the verification problem of various classes of such models. We consider especially the basic problem of reachability analysis, which consists in computing a (finite) representation of the (potentially infinite) set of reachable configurations.

We show the main existing approaches to tackle this problem:
– specialized constructions for several significant classes of models for which this problem is shown to be decidable,
– general principles to prove the termination and the completeness of the iterative computation of the reachability sets for classes of models,
– generic constructions and fixpoint acceleration techniques, leading to powerful semi-algorithms applicable to general classes of models in order to compute exact/approximate reachability sets.
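As a baseline for the approaches just listed, the following sketch (ours, not the tutorial's) is the naive iterative computation of the reachable configurations; the techniques above address exactly the cases where this loop, taken literally, fails to terminate.

```python
def reachable(initial, post):
    """Compute post*(initial) by iteration. This terminates only if
    the set of reachable configurations is finite; infinite-state
    models need symbolic representations and acceleration instead."""
    seen = set(initial)
    frontier = list(initial)
    while frontier:
        c = frontier.pop()
        for succ in post(c):
            if succ not in seen:
                seen.add(succ)
                frontier.append(succ)
    return seen

# Toy counter machine: from (q, n) one may decrement n while n > 0.
post = lambda s: [(s[0], s[1] - 1)] if s[1] > 0 else []
print(sorted(reachable({('q', 3)}, post)))
# [('q', 0), ('q', 1), ('q', 2), ('q', 3)]
```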

Ahmed Bouajjani
Parity of Imperfection or Fixing Independence

We introduce a fixpoint extension of Hintikka and Sandu’s IF (independence-friendly) logic. We obtain some results on its complexity and expressive power. We relate it to parity games of imperfect information, and show its application to defining independence-friendly modal mu-calculi.

Julian C. Bradfield
Atomic Cut Elimination for Classical Logic

System SKS is a set of rules for classical propositional logic presented in the calculus of structures. Like sequent systems and unlike natural deduction systems, it has an explicit cut rule, which is admissible. In contrast to sequent systems, the cut rule can easily be reduced to atomic form. This allows for a very simple cut elimination procedure based on plugging in parts of a proof, like normalisation in natural deduction and unlike cut elimination in the sequent calculus. It should thus be a good common starting point for investigations into both proof search as computation and proof normalisation as computation.

Kai Brünnler
Computational Mathematics, Computational Logic, and Symbolic Computation

“Computational mathematics” (algorithmic mathematics) is the part of mathematics that strives to solve mathematical problems by algorithms. In a superficial view, some people might believe that computational mathematics is the easy part of mathematics, in which trivial mathematics is made useful by repeating trivial steps sufficiently many times on current powerful machines in order to achieve marginally interesting results. The opposite is true: many times, computational mathematics needs and stimulates deeper mathematics, i.e. deeper mathematical theorems with more difficult proofs, than “pure” mathematics. This is so because establishing an algorithmic method for a given mathematical problem, i.e. reducing the solution of the problem to the few operations that can be executed on machines, requires deeper insight into the given problem domain than reducing the problem to powerful non-algorithmic abstract mathematical operations such as choice functions and the basic quantifiers of set theory.

Bruno Buchberger
Simple Stochastic Parity Games

Many verification, planning, and control problems can be modeled as games played on state-transition graphs by one or two players whose conflicting goals are to form a path in the graph. The focus here is on simple stochastic parity games, that is, two-player games with turn-based probabilistic transitions and ω-regular objectives formalized as parity (Rabin chain) winning conditions. An efficient translation from simple stochastic parity games to nonstochastic parity games is given. As many algorithms are known for solving the latter, the translation yields efficient algorithms for computing the states of a simple stochastic parity game from which a player can win with probability 1.

An important special case of simple stochastic parity games are the Markov decision processes with Büchi objectives. For this special case a first provably subquadratic algorithm is given for computing the states from which the single player has a strategy to achieve a Büchi objective with probability 1. For game graphs with m edges the algorithm works in time $O(m \sqrt{m})$. Interestingly, a similar technique sheds light on the question of the computational complexity of solving simple Büchi games and yields the first provably subquadratic algorithm, with a running time of $O(n^2/\log n)$ for game graphs with n vertices and O(n) edges.

Krishnendu Chatterjee, Marcin Jurdziński, Thomas A. Henzinger
Machine Characterizations of the Classes of the W-Hierarchy

We give machine characterizations of the complexity classes of the W-hierarchy. Moreover, for every class of this hierarchy, we present a parameterized halting problem complete for this class.

Yijia Chen, Jörg Flum
Extending the Dolev-Yao Intruder for Analyzing an Unbounded Number of Sessions

We propose a protocol model which integrates two different ways of analyzing cryptographic protocols: i) analysis w.r.t. an unbounded number of sessions and bounded message size, and ii) analysis w.r.t. an a priori bounded number of sessions but with messages of unbounded size. We show that in this model secrecy is DEXPTIME-complete. This result is obtained by extending the Dolev-Yao intruder to simulate an unbounded number of sessions.

Yannick Chevalier, Ralf Küsters, Michaël Rusinowitch, Mathieu Turuani, Laurent Vigneron
On Relativisation and Complexity Gap for Resolution-Based Proof Systems

We study the proof complexity of Taut, the class of Second-Order Existential (SO∃) logical sentences which fail in all finite models. The Complexity-Gap theorem for Tree-like Resolution says that the shortest Tree-like Resolution refutation of any such sentence Φ is either fully exponential, $2^{\Omega(n)}$, or polynomial, $n^{O(1)}$, where n is the size of the finite model. Moreover, there is a very simple model-theoretic criterion which separates the two cases: the exponential lower bound holds if and only if Φ holds in some infinite model.

In the present paper we prove several generalisations and extensions of the Complexity-Gap theorem.
1. For a natural subclass of Taut, $Rel\left(Taut\right)$, there is a gap between polynomial Tree-like Resolution proofs and sub-exponential, $2^{\Omega \left(n^{\varepsilon }\right)}$, general (DAG-like) Resolution proofs, whilst the separating model-theoretic criterion is the same as before. $Rel\left(Taut\right)$ is the set of all sentences in Taut, relativised with respect to a unary predicate.
2. The gap for stronger systems, $\textrm{Res}^{*}\left(k\right)$, is between polynomial and $\exp \left(\Omega \left(\frac{\log k}{k}n\right)\right)$ for every k, 1 ≤ k ≤ n. $\textrm{Res}^{*}\left(k\right)$ is an extension of Tree-like Resolution, in which literals are replaced by terms (i.e. conjunctions of literals) of size at most k. The lower bound is tight.
3. There is (as expected) no gap for any propositional proof system (including Tree-like Resolution) if we enrich the language of SO logic by a built-in order.

Stefan Dantchev, Søren Riis
Strong Normalization of the Typed λws-Calculus

The λws-calculus is a λ-calculus with explicit substitutions introduced in [4]. It satisfies the desired properties of such a calculus: step-by-step simulation of β, confluence on terms with meta-variables and preservation of strong normalization. It was conjectured in [4] that simply typed terms of λws are strongly normalizable. This was proved in [7] by Di Cosmo et al. by using a translation of λws into the proof nets of linear logic. We give here a direct and elementary proof of this result. The strong normalization is also proved for terms typable with second order types (the extension of Girard's system F). This is a new result.

René David, Bruno Guillaume
A Fixed-Point Logic with Symmetric Choice

Gire and Hoang introduce a fixed-point logic with a ‘symmetric’ choice operator that makes a nondeterministic choice from a definable set of tuples at each stage in the inductive construction of a relation, as long as the set of tuples is an automorphism class of the structure. We present a clean definition of the syntax and semantics of this logic and investigate its expressive power. We extend the logic of Gire and Hoang with parameterized and nested fixed points and first-order combinations of fixed points. We show that the ability to supply parameters to fixed points strictly increases the power of the logic. Our logic can express the graph isomorphism problem and we show that, on almost all structures, it captures PGI, the class of problems decidable in polynomial time by a deterministic Turing machine with an oracle for graph isomorphism.

Anuj Dawar, David Richerby
Positive Games and Persistent Strategies

At CSL 2002, Jerzy Marcinkowski and Tomasz Truderung presented the notions of positive games and persistent strategies [8]. A strategy is persistent if, given any finite or infinite run played on a game graph, each time the player visits some vertex already encountered, this player repeats the decision made when visiting this vertex for the first time. Such strategies require memory, but once a choice is made, it is made for ever. So, persistent strategies are a weakening of memoryless strategies.

The same authors established a direct relation between positive games and the existence of persistent winning strategies. We give a description of such games by means of their topological complexity. In games played on finite graphs, positive games are unexpectedly simple. On the contrary, infinite game graphs, as well as infinite alphabets, yield positive sets involved in non-determined games.

Finally, we discuss positive Muller winning conditions. Although they do not help to discriminate between memoryless and LAR winning strategies, they bear a strong topological characterization.

Jacques Duparc
Generating All Abductive Explanations for Queries on Propositional Horn Theories

Abduction is a fundamental mode of reasoning, which has taken on increasing importance in Artificial Intelligence (AI) and related disciplines. Computing abductive explanations is an important problem, and there is a growing literature on this subject. We contribute to this endeavor by presenting new results on computing multiple (resp. all) of the possibly exponentially many explanations of an abductive query from a propositional Horn theory represented by a Horn CNF. Here the issues are whether a few explanations can be generated efficiently and, in the case of all explanations, whether the computation is possible in polynomial total time (or output-polynomial time), i.e., in time polynomial in the combined size of the input and the output. We explore these issues for queries in CNF and important restrictions thereof. Among the results, we show that computing all explanations for a negative query literal from a Horn CNF is not feasible in polynomial total time unless P = NP, which settles an open issue. However, we show how to compute, under restriction to acyclic Horn theories, polynomially many explanations in input-polynomial time and all explanations in polynomial total time, respectively. Complementing and extending previous results, this draws a detailed picture of the computational complexity of computing multiple explanations for queries on Horn theories.
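To fix the notion being computed, here is a brute-force sketch (our own, exponential, and restricted to definite Horn clauses with a positive atomic query; all names are hypothetical) that enumerates the subset-minimal abductive explanations; the paper's results concern when such enumeration is possible in polynomial total time.

```python
from itertools import chain, combinations

def closure(clauses, facts):
    """Forward chaining for definite Horn clauses given as (body, head)."""
    facts, changed = set(facts), True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in facts and set(body) <= facts:
                facts.add(head)
                changed = True
    return facts

def explanations(clauses, hypotheses, query):
    """All subset-minimal sets E of hypotheses with theory + E |= query,
    found by enumerating candidate sets in order of increasing size."""
    found = []
    candidates = chain.from_iterable(combinations(hypotheses, r)
                                     for r in range(len(hypotheses) + 1))
    for E in candidates:
        if query in closure(clauses, E):
            if not any(set(F) <= set(E) for F in found):  # keep minimal only
                found.append(E)
    return found

# Theory: p & q -> goal, r -> p; abducible hypotheses: q, r, s.
rules = [(('p', 'q'), 'goal'), (('r',), 'p')]
print(explanations(rules, ['q', 'r', 's'], 'goal'))  # [('q', 'r')]
```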

Thomas Eiter, Kazuhisa Makino
Refined Complexity Analysis of Cut Elimination

In [1,2] Zhang shows how the complexity of cut elimination depends on the nesting of quantifiers in cut formulas. By studying the role of contractions we can refine that analysis and show how the complexity depends on a combination of contractions and quantifier nesting. With the refined analysis the upper bound on cut elimination coincides with Statman's lower bound. Every non-elementary growth example must display a combination of nesting of quantifiers and contractions similar to Statman's lower bound example. The upper and lower bounds on cut elimination immediately translate into bounds on Herbrand's theorem. Finally we discuss the role of quantifier alternations and show an elementary upper bound for the ∀-∧ case (resp. the ∃-∨ case).

Philipp Gerhardy
Comparing the Succinctness of Monadic Query Languages over Finite Trees

We study the succinctness of monadic second-order logic and a variety of monadic fixed point logics on trees. All these languages are known to have the same expressive power on trees, but some can express the same queries much more succinctly than others. For example, we show that, under some complexity theoretic assumption, monadic second-order logic is non-elementarily more succinct than monadic least fixed point logic, which in turn is non-elementarily more succinct than monadic datalog.

Succinctness of the languages is closely related to the combined and parameterized complexity of query evaluation for these languages.

Martin Grohe, Nicole Schweikardt
The Arithmetical Complexity of Dimension and Randomness

Constructive dimension and constructive strong dimension are effectivizations of the Hausdorff and packing dimensions, respectively. Each infinite binary sequence A is assigned a dimension $\dim(A) \in [0,1]$ and a strong dimension $\mathrm{Dim}(A) \in [0,1]$.

Let ${\rm DIM}^\alpha$ and ${\rm DIM}_{str}^\alpha$ be the classes of all sequences of dimension α and of strong dimension α, respectively. We show that ${\rm DIM}^0$ is properly $\Pi^{\rm 0}_{\rm 2}$, and that for all $\Delta^{\rm 0}_{\rm 2}$-computable α ∈ (0,1], ${\rm DIM}^\alpha$ is properly $\Pi^{\rm 0}_{\rm 3}$.

To classify the strong dimension classes, we use a more powerful effective Borel hierarchy where a co-enumerable predicate is used rather than an enumerable predicate in the definition of the $\Sigma^{\rm 0}_{\rm 1}$ level. For all $\Delta^{\rm 0}_{\rm 2}$-computable α ∈ [0,1), we show that ${\rm DIM}_{str}^\alpha$ is properly in the $\Pi^{\rm 0}_{\rm 3}$ level of this hierarchy. We show that ${\rm DIM}_{str}^1$ is properly in the $\Pi^{\rm 0}_{\rm 2}$ level of this hierarchy.

We also prove that the class of Schnorr random sequences and the class of computably random sequences are properly $\Pi^{\rm 0}_{\rm 3}$.

John M. Hitchcock, Jack H. Lutz, Sebastiaan A. Terwijn
Towards a Proof System for Admissibility

In [4] a basis for the admissible rules of intuitionistic propositional logic IPC was given. Here we strengthen this result by presenting a system ADM that has the following two properties. $A\vdash_{\sf ADM}B$ implies that A admissibly derives B. ADM is complete in the sense that for every formula A there exists a formula $\Lambda_A$ with $A\vdash_{\sf ADM}\Lambda_A$ such that the admissibly derivable consequences of A are the (normal) consequences of $\Lambda_A$. This work is related to and partly relies upon research by Ghilardi on projective formulas [2, 3].

Rosalie Iemhoff
Program Complexity of Dynamic LTL Model Checking

Using a recent result by Hesse we show that for any fixed linear-time temporal formula the dynamic model checking problem is in Dyn-TC0, a complexity class introduced by Hesse, Immerman, and Patnaik, containing all dynamic problems where the update after an operation has been performed can be computed by a DLOGTIME-uniform constant-depth threshold circuit. The operations permitted to modify the transition system to be verified include insertion and deletion of transitions and relabeling of states.

Detlef Kähler, Thomas Wilke
Coping Polynomially with Numerous but Identical Elements within Planning Problems

The typical AI problem of planning the actions to be performed by a robot, so that it gets from a given initial situation into a set of final situations, is generally exponential (it is even EXPTIME-complete in the case of games ‘Robot against Nature’), so planners are very sensitive to the number of variables, the inherent symmetry of the problem, and the nature of the logical formalisms being used. The paper shows that linear logic provides a convenient tool for representing planning problems. In particular, the paper focuses on planning problems with an unbounded number of functionally identical objects. We show that for such problems linear logic is especially effective and leads to a dramatic contraction of the search space (polynomial instead of exponential). The paper addresses the key issue of how to automatically recognize functional similarity among objects and break the extreme combinatorial explosion caused by this symmetry. This is done by replacing the unbounded number of specific names of objects with one generic name, thereby contracting the exponential search space over ‘real’ objects to a small polynomial search space over the ‘generic’ one, and by providing a more abstract formulation whose solutions are proved to be directly translatable into (optimal) polytime solutions to the original planning problem.

Max Kanovich, Jacqueline Vauzeilles
On Algebraic Specifications of Abstract Data Types

In this paper we address long-standing open problems of Bergstra and Tucker about specifications of abstract data types by means of equations and conditional equations. By an abstract data type we mean the isomorphism type of an algebra. An algebra is algebraically specified if the algebra can be defined uniquely, in a certain precise sense, in terms of a finite number of conditional equations by allowing functions that are not in the original language of the algebra. We provide full solutions to the Bergstra and Tucker problems, and explain the basic ideas, methods, and logical dependencies between the blocks of proofs used in our solutions.

Bakhadyr Khoussainov
On the Complexity of Existential Pebble Games

Existential k-pebble games, k ≥ 2, are combinatorial games played between two players, called the Spoiler and the Duplicator, on two structures. These games were originally introduced in order to analyze the expressive power of Datalog and related infinitary logics with finitely many variables. More recently, however, it was realized that existential k-pebble games have tight connections with certain consistency properties which play an important role in identifying tractable classes of constraint satisfaction problems and in designing heuristic algorithms for solving such problems. Specifically, it has been shown that strong k-consistency can be established for an instance of constraint satisfaction if and only if the Duplicator has a winning strategy for the existential k-pebble game between two finite structures associated with the given instance of constraint satisfaction. In this paper, we pinpoint the computational complexity of determining the winner of the existential k-pebble game. The main result is that the following decision problem is EXPTIME-complete: given a positive integer k and two finite structures A and B, does the Duplicator win the existential k-pebble game on A and B? Thus, all algorithms for determining whether strong k-consistency can be established (when k is part of the input) are inherently exponential.

Phokion G. Kolaitis, Jonathan Panttaja
Computational Aspects of Σ-Definability over the Real Numbers without the Equality Test

In this paper we study the expressive power and algorithmic properties of the language of Σ-formulas intended to represent computability over the real numbers. In order to adequately represent computability, we extend the reals by the structure of hereditarily finite sets. In this setting it is crucial to consider the real numbers without equality since the equality test is undecidable over the reals. We prove Engeler's Lemma for Σ-definability over the reals without the equality test, which relates Σ-definability with definability in the constructive infinitary language $L_{\omega_1\omega}$. Thus, a relation over the real numbers is Σ-definable if and only if it is definable by a disjunction of a recursively enumerable set of quantifier free formulas. This result reveals computational aspects of Σ-definability and also gives a topological characterisation of Σ-definable relations over the reals without the equality test.

Margarita Korovina
The Surprising Power of Restricted Programs and Gödel’s Functionals

Consider the following imperative programming language. The programs operate on registers storing natural numbers, the input $\vec{x}$ is stored in certain registers, and a number b, called the base, is fixed to $\max(\vec{x},1)+1$ before the execution starts. The single primitive instruction X+ increases the number stored in the register X by 1 modulo b. There are two control structures: the loop while X {P}, which executes the program P repeatedly as long as the content of the register X is different from 0, and the composition P;Q, which executes first the program P, then the program Q. This is the whole language. The language is natural, extremely simple, yet powerful. We will prove that it captures LINSPACE, i.e. the numerical relations decidable by such programs are exactly those decidable by Turing machines working in linear space. Variations of the language capturing other important deterministic complexity classes, like e.g. LOGSPACE, P and PSPACE, are possible (see Kristiansen and Voda [5]).
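A minimal interpreter makes the semantics of this three-instruction language concrete. The nested-tuple program encoding below is our own assumption, not the paper's notation.

```python
def run(program, registers):
    """Interpret the register language sketched above. Encoding (assumed):
      ('inc', 'X')       -- X+ : increment register X modulo the base b
      ('while', 'X', P)  -- run P as long as register X is nonzero
      ('seq', P, Q)      -- P ; Q
    The base b is fixed from the inputs before execution starts."""
    b = max(max(registers.values()), 1) + 1  # b = max(inputs, 1) + 1
    def step(p):
        if p[0] == 'inc':
            registers[p[1]] = (registers[p[1]] + 1) % b
        elif p[0] == 'while':
            while registers[p[1]] != 0:
                step(p[2])
        elif p[0] == 'seq':
            step(p[1])
            step(p[2])
    step(program)
    return registers

# Clear register X: incrementing modulo b eventually wraps X around to 0.
print(run(('while', 'X', ('inc', 'X')), {'X': 3}))  # {'X': 0}
```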

Lars Kristiansen, Paul J. Voda
Pebble Games on Trees

It is well known that two structures ${\cal A}$ and ${\cal B}$ are indistinguishable by sentences of the infinitary logic with k variables, $L^k_{\infty\omega}$, iff Duplicator wins the Barwise game on ${\cal A}$ and ${\cal B}$ with k pebbles. The complexity of the problem of who wins the game is in general unknown if k is a part of the input. We prove that the problem is in PTIME for some special classes of structures such as finite directed trees and infinite regular trees. More specifically, we show an algorithm running in time $\log(k) \cdot (|A| + |B|)^{O(1)}$.

The algorithm for regular trees is based on a characterization of the winning pairs $({\cal A}, {\cal B})$ which is valid also for the more general case of (potentially infinite) rooted trees.

Łukasz Krzeszczakowski
Bistability: An Extensional Characterization of Sequentiality

We give a simple order-theoretic construction of a cartesian closed category of sequential functions. It is based on biordered sets analogous to Berry’s bidomains, except that the stable order is replaced with a new notion, the bistable order, and instead of preserving stably bounded greatest lower bounds, functions are required to preserve bistably bounded least upper bounds and greatest lower bounds. We show that bistable cpos and bistable and continuous functions form a CCC, yielding models of functional languages such as the simply-typed λ-calculus and SPCF. We show that these models are strongly sequential and use this fact to prove universality and full abstraction results.

James Laird
Automata on Lempel-Ziv Compressed Strings

Using the Lempel-Ziv-78 compression algorithm to compress a string yields a dictionary of substrings, i.e. an edge-labelled tree with an order-compatible enumeration, here called an LZ-trie. Queries about strings translate to queries about LZ-tries and hence can in principle be answered without decompression. We compare notions of automata accepting LZ-tries and consider the relation between acceptable and MSO-definable classes of LZ-tries. It turns out that regular properties of strings can be checked efficiently on compressed strings by LZ-trie automata.
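As a concrete illustration (a sketch of ours, not the paper's formal definitions), the following builds the LZ78 phrase trie of a string: each phrase is a previously inserted phrase extended by one character, and the insertion order provides the order-compatible enumeration mentioned above.

```python
def lz78_trie(s):
    """Return the LZ78 dictionary of `s` as a map from phrases
    (tuples of characters, i.e. root-to-node paths in the trie)
    to node ids recording the order of insertion."""
    root = {}        # trie node: char -> child node
    trie = {(): 0}   # phrase -> node id; the root is the empty phrase
    next_id = 1
    node, path = root, ()
    for c in s:
        if c in node:              # current phrase extends a known one
            node, path = node[c], path + (c,)
        else:                      # new phrase: add an edge, restart at root
            node[c] = {}
            trie[path + (c,)] = next_id
            next_id += 1
            node, path = root, ()
    return trie

print(lz78_trie("ababbab"))
# phrases a, b, ab, ba:
# {(): 0, ('a',): 1, ('b',): 2, ('a', 'b'): 3, ('b', 'a'): 4}
```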

Hans Leiß, Michel de Rougemont
Complexity of Some Problems in Modal and Intuitionistic Calculi

The complexity of provability and satisfiability problems in many non-classical logics (for instance, intuitionistic logic, various systems of modal logic, and temporal and dynamic logics) was studied in [3, 5, 9, 22, 23, 24]. Ladner [9] proved that the provability problem is PSPACE-complete for the modal logics K, T and S4 and coNP-complete for S5. Statman [24] proved that the problem of determining if an arbitrary implicational formula is intuitionistically valid is PSPACE-complete.

We consider the complexity of some properties of non-classical logics which are known to be decidable. The complexity of the tabularity, pre-tabularity, and interpolation problems in extensions of the intuitionistic logic and of the modal logic S4 is studied, as well as the complexity of amalgamation problems in varieties of Heyting algebras and closure algebras.

Larisa Maksimova, Andrei Voronkov
Goal-Directed Calculi for Gödel-Dummett Logics

In this work we present goal-directed calculi for the Gödel-Dummett logic LC and its finite-valued counterparts, $LC_n$ ($n \geq 2$). We introduce a terminating hypersequent calculus for the implicational fragment of LC with local rules and a single identity axiom. We also give a labelled goal-directed calculus with invertible rules and show that it yields a co-NP decision procedure. Finally we derive labelled goal-directed calculi for $LC_n$.

George Metcalfe, Nicola Olivetti, Dov Gabbay
A Logic for Probability in Quantum Systems

Quantum computation deals with projective measurements and unitary transformations in finite dimensional Hilbert spaces. The paper presents a propositional logic designed to describe quantum computation at an operational level by supporting reasoning about the probabilities associated to such measurements: measurement probabilities, and transition probabilities (a quantum analogue of conditional probabilities). We present two axiomatizations, one for the logic as a whole and one for the fragment dealing just with measurement probabilities. These axiomatizations are proved to be sound and complete. The logic is also shown to be decidable, and we provide results characterizing its complexity in a number of cases.

Ron van der Meyden, Manas Patra
A Strongly Normalising Curry-Howard Correspondence for IZF Set Theory

We propose a method for realising the proofs of Intuitionistic Zermelo-Fraenkel set theory (IZF) by strongly normalising λ-terms. This method relies on the introduction of a Curry-style type theory extended with specific subtyping principles, which is then used as a low-level language to interpret IZF via a representation of sets as pointed graphs inspired by Aczel's hyperset theory.

As a consequence, we refine a classical result of Myhill and Friedman by showing how a strongly normalising λ-term that computes a function of type ℕ→ℕ can be extracted from the proof of its existence in IZF.

Alexandre Miquel
The Epsilon Calculus

Hilbert's ε-calculus [1,2] is based on an extension of the language of predicate logic by a term-forming operator $\varepsilon_x$. This operator is governed by the critical axiom $A(t) \rightarrow A(\varepsilon_x A(x))$, where t is an arbitrary term. Within the ε-calculus, quantifiers become definable by $\exists x\, A(x) \Leftrightarrow A(\varepsilon_x A(x))$ and $\forall x\, A(x) \Leftrightarrow A(\varepsilon_x \lnot A(x))$. (The expression $\varepsilon_x A(x)$ is called an ε-term.)
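As a worked illustration (ours, obtained by applying the two equivalences above from the inside out), a ∀∃-sentence translates into a quantifier-free formula built from nested ε-terms:

```latex
% Inner step: eliminate the existential quantifier.
\exists y\, A(x,y) \;\Leftrightarrow\; A\bigl(x,\ \varepsilon_y A(x,y)\bigr)
% Abbreviate B(x) :\equiv A(x, \varepsilon_y A(x,y)) and eliminate the universal one.
\forall x\, B(x) \;\Leftrightarrow\; B\bigl(\varepsilon_x \lnot B(x)\bigr)
% Hence \forall x\, \exists y\, A(x,y) becomes the quantifier-free formula
A\bigl(e,\ \varepsilon_y A(e,y)\bigr),
\qquad\text{where } e \equiv \varepsilon_x \lnot A\bigl(x,\ \varepsilon_y A(x,y)\bigr).
```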

Georg Moser, Richard Zach
Modular Semantics and Logics of Classes

The semantics of class-based languages can be defined in terms of objects only [1,7,8] if classes are viewed as objects with a constructor method. One obtains a store in which method closures are held together with field values. Such a store is also called “higher-order” and does not come for free [13]. It is much harder to prove properties of such stores, and as a consequence (soundness of) programming logics can become rather contrived (see [2]).

A simpler semantics separates methods from the object store [4,12]. But again, there is a drawback: once the semantics of a package of classes is computed, it is impossible to add other classes in a compositional way. Modular reasoning principles are therefore not obtainable either.

In this paper we improve a simple class-based semantics to deal with extensions compositionally and derive modular reasoning principles for a logic of classes. The domain-theoretic reasoning principle behind this is fixpoint induction. Modularity is obtained by endowing the denotations of classes with an additional parameter that accounts for those classes added “later at linkage time.” Local class definitions (inner classes) are possible, but for dynamic class-loading one cannot do without higher-order store.

Bernhard Reus
Validity of CTL Queries Revisited

We systematically investigate temporal logic queries in model checking, adding to the seminal paper by William Chan at CAV 2000. Chan's temporal logic queries are CTL specifications where one unspecified subformula is to be filled in by the model checker in such a way that the specification becomes true. Chan defined a fragment of CTL queries called $\mbox{CTL}^{v}$ which guarantees the existence of a unique strongest solution. The starting point of our paper is a counterexample to this claim. We then show how the research agenda of Chan can be realized by modifying his fragment appropriately. To this end, we investigate the criteria required by Chan, and define two new fragments $\mbox{CTL}^{v}_{new}$ and $\mbox{CTL}^{d}$, where the first is the one originally intended; the latter fragment also provides unique strongest solutions where possible but also admits cases where the set of solutions is empty.

Marko Samer, Helmut Veith
Calculi of Meta-variables

The notion of meta-variable plays a fundamental role when we define formal systems such as logical and computational calculi, yet it has usually been understood only informally, as is seen in most textbooks of logic. Based on our observations of the usages of meta-variables in textbooks, we propose two formal systems that have the notion of meta-variable.

In both calculi, each variable is given a level (a non-negative integer), which classifies variables into object variables (level 0), meta-variables (level 1), metameta-variables (level 2) and so on. Then, simple arity systems are used to exclude meaningless terms like a meta-level function operating on the metameta-level. A main difference between the two calculi lies in the definitions of substitution. The first calculus uses textual substitution, which can often be found in definitions of quantified formulae: when a term is substituted for a meta-variable, free object-level variables in the term may be captured. The second calculus is based on the observation that predicates can be regarded as meta-level functions on object-level terms, and hence uses capture-avoiding substitution.

We show that both calculi enjoy a number of properties including Church-Rosser and Strong Normalization, which are indispensable when we use them as frameworks to define logical systems.

Masahiko Sato, Takafumi Sakurai, Yukiyoshi Kameyama, Atsushi Igarashi
Henkin Models of the Partial λ-Calculus

We define (set-theoretic) notions of intensional Henkin model and syntactic λ-algebra for Moggi’s partial λ-calculus. These models are shown to be equivalent to the originally described categorical models via the global element construction; the proof makes use of a previously introduced construction of classifying categories. The set-theoretic semantics thus obtained is the foundation of the higher order algebraic specification language HasCasl, which combines specification and functional programming.

Lutz Schröder
Nominal Unification

We present a generalisation of first-order unification to the practically important case of equations between terms involving binding operations. A substitution of terms for variables solves such an equation if it makes the equated terms α-equivalent, i.e. equal up to renaming bound names. For the applications we have in mind, we must consider the simple, textual form of substitution in which names occurring in terms may be captured within the scope of binders upon substitution. We are able to take a ‘nominal’ approach to binding in which bound entities are explicitly named (rather than using nameless, de Bruijn-style representations) and yet get a version of this form of substitution that respects α-equivalence and possesses good algorithmic properties. We achieve this by adapting an existing idea and introducing a key new idea. The existing idea is terms involving explicit substitutions of names for names, except that here we only use explicit permutations (bijective substitutions). The key new idea is that the unification algorithm should solve not only equational problems, but also problems about the freshness of names for terms. There is a simple generalisation of the classical first-order unification algorithm to this setting which retains the latter’s pleasant properties: unification problems involving α-equivalence and freshness are decidable; and solvable problems possess most general solutions.
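The "existing idea" of explicit permutations can be seen in miniature below: a sketch (with our own term encoding) of the name-swapping action, which renames two names uniformly in a term, bound occurrences included, and is therefore compatible with α-equivalence in a way that textual substitution is not.

```python
def swap(a, b, t):
    """Apply the transposition (a b) to a term. Assumed encoding:
    a name is a string; ('abs', name, body) is a binder;
    ('app', t1, t2) is an application. Swapping acts on *all*
    occurrences of a and b, including binding ones."""
    if isinstance(t, str):                 # a name
        return b if t == a else a if t == b else t
    if t[0] == 'abs':                      # the binder itself is swapped too
        return ('abs', swap(a, b, t[1]), swap(a, b, t[2]))
    return ('app', swap(a, b, t[1]), swap(a, b, t[2]))

# Swapping x and y in (\x. x y) renames bound and free names alike:
print(swap('x', 'y', ('abs', 'x', ('app', 'x', 'y'))))
# ('abs', 'y', ('app', 'y', 'x'))
```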

Christian Urban, Andrew Pitts, Murdoch Gabbay
Friends or Foes? Communities in Software Verification

In contrast to hardware, which is finite-state and based on relatively few simple principles, software systems generally give rise to infinite state spaces, and are described in terms of programming languages involving rich semantical concepts. The challenges of software verification can be addressed only by a combined effort of different communities including, most notably, model checking, theorem proving, symbolic computation, static analysis, compilers, and abstract interpretation. We focus on a recent family of tools which use predicate abstraction and theorem proving to extract a finite-state system amenable to model checking.

Helmut Veith
More Computation Power for a Denotational Semantics for First Order Logic

This paper starts from a denotational semantics for first order logic proposed by Apt. He interprets first order logic as a programming language within the declarative paradigm, but different from the usual approach of logic programming. For his proposal Apt obtains soundness with respect to the standard interpretation of first order logic. Here we consider the expressivity (or: computation power) of the system proposed, i.e., the programs for which no error messages are produced.

To gain computational realism, a decrease in expressivity compared to first order logic is inevitable. In [1] the expressivity was compared with logic programming, with both positive and negative results. Here we consider three ways of improving the situation: replacing the original interpretation of conjunction-as-sequential-composition by a symmetric interpretation of conjunction; extending the framework with the option of recursive definitions by Clark completions; and replacing the interpretation of disjunction-as-choice by an interpretation in the style of ‘backtracking’: only produce an error if all options lead to error.

For each improvement we obtain a soundness result. The three improvements are developed independently, in a way that ensures that their combination also preserves soundness. The combination gives a considerable improvement of the expressivity of the denotational semantics that Apt proposed, both in the comparison with first order logic and in the comparison with logic programming.

Kees F. M. Vermeulen
Effective Model Completeness of the Theory of Restricted Pfaffian Functions
(Abstract)

Pfaffian functions, introduced by Khovanskii in the late 70s, are analytic functions satisfying triangular systems of first order partial differential equations with polynomial coefficients. They include, for instance, algebraic and elementary transcendental functions in the appropriate domains, iterated exponentials, and fewnomials. A simple example, due to Osgood, shows that the first order theory of the reals expanded by restricted Pfaffian functions does not admit quantifier elimination. On the other hand, Gabrielov and Wilkie proved (non-constructively) that this theory is model complete, i.e., one type of quantifiers can be eliminated. The talk will explain some ideas behind recent algorithms for this quantifier simplification, which are based on effective cylindrical cell decompositions of sub-Pfaffian sets. The complexities of these algorithms are essentially the same as those previously known for the particular case of semialgebraic sets.

Nicolai Vorobjov
Effective Quantifier Elimination over Real Closed Fields
(Abstract)

In the early 30s, A. Tarski, motivated by a problem of automatic theorem proving in elementary algebra and geometry, suggested an algorithm for quantifier elimination in the first order theory of the reals. The complexity of Tarski's algorithm is a non-elementary function of the format of the input formula. In the mid-70s a group of algorithms appeared based on the idea of a cylindrical cell decomposition, having an elementary albeit doubly-exponential complexity, even for deciding closed existential formulae. The tutorial will explain some ideas behind a new generation of algorithms which were designed during the 80s and 90s and have, in a certain sense, optimal (singly-exponential) complexity. In the useful particular case of closed existential formulae (i.e., deciding feasibility of systems of polynomial equations and inequalities) these new algorithms are theoretically superior to procedures known before in numerical analysis and computer algebra.

Nicolai Vorobjov
Fast Infinite-State Model Checking in Integer-Based Systems

In this paper we discuss the use of logic for reachability analysis of infinite-state systems. Infinite-state systems are formalized using transition systems over a first-order structure. We establish a common ground relating a large class of algorithms by analyzing the connections between the symbolic representation of transition systems and the formulas used in various reachability algorithms. We consider in detail the so-called guarded assignment systems and local reachability algorithms. We show how an implementation of local reachability algorithms and a new incremental algorithm for finding Hilbert's base in the system BRAIN resulted in much faster reachability checking than in systems using constraint libraries and decision procedures for Presburger arithmetic. Experimental results demonstrate that problems in protocol verification which are beyond the reach of other existing systems can be solved completely automatically.

Tatiana Rybina, Andrei Voronkov
Winning Strategies and Synthesis of Controllers

A system consists of a process, an environment, and possible ways of interaction between them. The synthesis problem is: given a specification φ, find a program P for the process such that the overall behaviour of the system satisfies φ, no matter what the behaviour of the environment is.

Igor Walukiewicz
Logical Relations for Dynamic Name Creation

Pitts and Stark's nu-calculus is a typed lambda-calculus which forms a basis for the study of interaction between higher-order functions and dynamically created names. A similar approach has received renewed attention recently through Sumii and Pierce's cryptographic lambda-calculus, which deals with security protocols. Logical relations are a powerful tool to prove properties of such a calculus, notably observational equivalence. While Pitts and Stark construct a logical relation for the nu-calculus, it rests heavily on operational aspects of the calculus and is hard to extend. We propose an alternative Kripke logical relation for the nu-calculus, which is derived naturally from the categorical model of the nu-calculus and the general notion of Kripke logical relation. This is also related to the Kripke logical relation for the name creation monad by Goubault-Larrecq et al. (CSL'2002), which the authors claimed had similarities with Pitts and Stark's logical relation. We show that their Kripke logical relation for names is strictly weaker than Pitts and Stark's. We also show that our Kripke logical relation, which extends the definition of Goubault-Larrecq et al., is equivalent to Pitts and Stark's up to first-order types; our definition rests on purely semantic constituents, and dispenses with the detours through operational semantics that Pitts and Stark use.

Yu Zhang, David Nowak
Backmatter
Metadata
Title
Computer Science Logic
Edited by
Matthias Baaz
Johann A. Makowsky
Copyright year
2003
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-45220-1
Print ISBN
978-3-540-40801-7
DOI
https://doi.org/10.1007/b13224