
2006 | Book

Automata, Languages and Programming

33rd International Colloquium, ICALP 2006, Venice, Italy, July 10-14, 2006, Proceedings, Part II

Edited by: Michele Bugliesi, Bart Preneel, Vladimiro Sassone, Ingo Wegener

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Invited Papers

Differential Privacy

In 1977 Dalenius articulated a desideratum for statistical databases: nothing about an individual should be learnable from the database that cannot be learned without access to the database. We give a general impossibility result showing that a formalization of Dalenius’ goal along the lines of semantic security cannot be achieved. Contrary to intuition, a variant of the result threatens the privacy even of someone not in the database. This state of affairs suggests a new measure, differential privacy, which, intuitively, captures the increased risk to one’s privacy incurred by participating in a database. The techniques developed in a sequence of papers [8, 13, 3], culminating in those described in [12], can achieve any desired level of privacy under this measure. In many cases, extremely accurate information about the database can be provided while simultaneously ensuring very high levels of privacy.
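The standard way to meet this measure (the Laplace mechanism, from the line of work cited above, not spelled out in this abstract) adds noise calibrated to a query's sensitivity. A minimal sketch, with illustrative parameters:

```python
import math
import random

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=random):
    """Release true_answer plus Laplace(0, sensitivity/epsilon) noise.

    A counting query ("how many records satisfy P?") has sensitivity 1:
    adding or removing one individual changes the answer by at most 1.
    """
    scale = sensitivity / epsilon
    # Inverse-transform sampling of Laplace(0, scale): u uniform in (-1/2, 1/2).
    u = rng.random() - 0.5
    sign = 1 if u >= 0 else -1
    return true_answer - scale * sign * math.log(1 - 2 * abs(u))

# Example: release a count of 100 with privacy parameter epsilon = 0.5.
noisy_count = laplace_mechanism(100, sensitivity=1, epsilon=0.5)
```

Smaller epsilon means stronger privacy and larger noise; the released value stays accurate in expectation.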

Cynthia Dwork
The One Way to Quantum Computation

Measurement-based quantum computation has emerged from the physics community as a new approach to quantum computation where measurements rather than unitary transformations are the main driving force of computation. Among measurement-based quantum computation methods the recently introduced one-way quantum computer [RB01] stands out as basic and fundamental.

In this work we present a concrete syntax and an algebra of these patterns derived from a formal semantics. We develop a rewrite theory and prove a general standardization theorem which allows all patterns to be put in a semantically equivalent standard form.

Vincent Danos, Elham Kashefi, Prakash Panangaden

Zero-Knowledge and Signatures

Efficient Zero Knowledge on the Internet

The notion of concurrent zero knowledge was introduced by Dwork et al. [STOC 1998], motivated by the growing use of asynchronous networks such as the Internet.

In this paper we show a transformation that, for any language L admitting a Σ-protocol, produces a 4-round concurrent zero-knowledge argument system with concurrent soundness in the bare public-key (BPK, for short) model. The transformation only adds O(1) modular exponentiations, and uses standard number-theoretic assumptions and polynomial-time simulation.

A tool that we construct and use for our main result is that of efficient concurrent equivocal commitments. We give an efficient construction of this gadget in the BPK model that can be of independent interest.
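The Σ-protocols that serve as input to such transformations have a 3-move commit/challenge/response shape. A toy Schnorr instance (illustrative tiny parameters, not from the paper) makes the shape concrete:

```python
import random

# Toy Schnorr Σ-protocol: proof of knowledge of x such that h = g^x mod p.
# Parameters are tiny and illustrative only; g generates the order-q subgroup.
p, q, g = 23, 11, 4          # p = 2q + 1, both prime; 4 has order 11 mod 23
x = 7                        # prover's secret witness
h = pow(g, x, p)             # public statement

# Move 1 (commitment): prover picks random r and sends a = g^r mod p.
r = random.randrange(q)
a = pow(g, r, p)

# Move 2 (challenge): verifier sends a random challenge e.
e = random.randrange(q)

# Move 3 (response): prover sends z = r + e*x mod q.
z = (r + e * x) % q

# Verifier accepts iff g^z == a * h^e (mod p).
assert pow(g, z, p) == (a * pow(h, e, p)) % p
```

The three-move structure and the verifier's single random challenge are exactly what the Σ-protocol definition requires; real instantiations use cryptographically sized groups.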

Ivan Visconti
Independent Zero-Knowledge Sets

We define and construct Independent Zero-Knowledge Sets (ZKS) protocols. In a ZKS protocol, a Prover commits to a set S, and for any x, proves non-interactively to a Verifier whether x ∈ S or x ∉ S without revealing any other information about S. In the independent ZKS protocols we introduce, the adversary is prevented from successfully correlating her set to that of an honest prover. Our notion of independence in particular implies that the resulting ZKS protocol is non-malleable.

On the way to this result we define the notion of independence for commitment schemes. We show that this notion implies non-malleability, and we argue that it has the potential to simplify the design and security proofs of non-malleable commitment schemes.

Efficient implementations of ZKS protocols are based on the notion of mercurial commitments. Our efficient constructions of independent ZKS protocols require the design of new commitment schemes that are simultaneously independent (and thus non-malleable) and mercurial.

Rosario Gennaro, Silvio Micali
An Efficient Compiler from Σ-Protocol to 2-Move Deniable Zero-Knowledge

Pass showed a 2-move deniable zero-knowledge argument scheme for any ${\cal NP}$ language in the random oracle model at Crypto 2003. However, this scheme is very inefficient because it relies on the cut-and-choose paradigm (via the straight-line witness-extraction technique). In this paper, we propose a very efficient compiler that transforms any Σ-protocol into a 2-move deniable zero-knowledge argument scheme in the random oracle model, which is also a resettable zero-knowledge and resettably-sound argument of knowledge. Since there is no essential loss of efficiency in our transformation, we obtain a very efficient undeniable signature scheme and a very efficient deniable authentication scheme.

Jun Furukawa, Kaoru Kurosawa, Hideki Imai
New Extensions of Pairing-Based Signatures into Universal Designated Verifier Signatures

The concept of universal designated verifier signatures was introduced by Steinfeld, Bull, Wang and Pieprzyk at Asiacrypt 2003. We propose two new efficient constructions based on pairing-based short signatures. The first scheme is based on Boneh-Boyen signatures, and its security can be analyzed in the standard security model: we reduce its resistance to forgery to the hardness of the strong Diffie-Hellman problem, under the knowledge-of-exponent assumption. The second scheme is compatible with Boneh-Lynn-Shacham signatures and is proven unforgeable, in the random oracle model, under the assumption that the computational bilinear Diffie-Hellman problem is intractable. Both schemes are designed for devices with constrained computation capabilities, since the signing and designation procedures are pairing-free.

Damien Vergnaud

Cryptographic Protocols

Corrupting One vs. Corrupting Many: The Case of Broadcast and Multicast Encryption

We analyze group key distribution protocols for broadcast and multicast scenarios that make black-box use of symmetric encryption and a pseudorandom generator (PRG) in deriving the group center’s messages. We first show that for a large class of such protocols, in which each transmitted ciphertext is of the form E_{K1}(K2) (E being the encryption operation; K1, K2 being random or pseudorandom keys), security in the presence of a single malicious receiver is equivalent to that in the presence of collusions of corrupt receivers. On the flip side, we find that for protocols that nest the encryption function (use ciphertexts created by enciphering ciphertexts themselves), such an equivalence fails to hold: there exist protocols that use nested encryption, are secure against single miscreants, but are insecure against collusions.

Our equivalence and separation results are first proven in a symbolic, Dolev-Yao style adversarial model and subsequently translated into the computational model using a general theorem that establishes soundness of the symbolic security notions. Both equivalence and separation are shown to hold in the computational world under mild syntactic conditions (like the absence of encryption cycles).

We apply our results to the security analysis of 11 existing key distribution protocols. As part of our analysis, we uncover security weaknesses in 7 of these protocols, and provide simple fixes that result in provably secure protocols.

Daniele Micciancio, Saurabh Panjwani
Cryptographically Sound Implementations for Communicating Processes

We design a core language of principals running distributed programs over a public network. Our language is a variant of the pi calculus, with secure communications, mobile names, and high-level certificates, but without any explicit cryptography. Within this language, security properties can be conveniently studied using trace properties and observational equivalences, even in the presence of an arbitrary (abstract) adversary.

With some care, these security properties can be achieved in a concrete setting, relying on standard cryptographic primitives and computational assumptions, even in the presence of an adversary modeled as an arbitrary probabilistic polynomial-time algorithm. To this end, we develop a cryptographic implementation that preserves all properties for all safe programs. We give a series of soundness and completeness results that precisely relate the language to its implementation.

Pedro Adão, Cédric Fournet
A Dolev-Yao-Based Definition of Abuse-Free Protocols

We propose a Dolev-Yao-based definition of abuse freeness for optimistic contract-signing protocols which, unlike other definitions, incorporates a rigorous notion of what it means for an outside party to be convinced by a dishonest party that it has the ability to determine the outcome of the protocol with an honest party, i.e., to determine whether it will obtain a valid contract itself or whether it will prevent the honest party from obtaining a valid contract. Our definition involves a new notion of test (inspired by static equivalence) which the outside party can perform. We show that an optimistic contract-signing protocol proposed by Asokan, Shoup, and Waidner is abusive and that a protocol by Garay, Jakobsson, and MacKenzie is abuse-free according to our definition. Our analysis is based on a synchronous concurrent model in which parties can receive several messages at the same time. This results in new vulnerabilities of the protocols depending on how a trusted third party reacts in case it receives abort and resolve requests at the same time.

Detlef Kähler, Ralf Küsters, Thomas Wilke

Secrecy and Protocol Analysis

Preserving Secrecy Under Refinement

We propose a general framework of secrecy and preservation of secrecy for labeled transition systems. Our definition of secrecy is parameterized by the distinguishing power of the observer, the properties to be kept secret, and the executions of interest, and captures a multitude of definitions in the literature. We define a notion of secrecy-preserving refinement between systems by strengthening the classical trace-based refinement so that the implementation leaks a secret only when the specification also leaks it. We show that secrecy is in general not definable in the μ-calculus, and thus not expressible in specification logics supported by standard model-checkers. However, we develop a simulation-based proof technique for establishing secrecy-preserving refinement. This result shows how existing refinement checkers can be used to show correctness of an implementation with respect to a specification.

Rajeev Alur, Pavol Černý, Steve Zdancewic
Quantifying Information Leakage in Process Calculi

We study two quantitative models of information leakage in the pi-calculus. The first model presupposes an attacker with an essentially unlimited computational power. The resulting notion of absolute leakage, measured in bits, is in agreement with secrecy as defined by Abadi and Gordon: a process has an absolute leakage of zero precisely when it satisfies secrecy. The second model assumes a restricted observation scenario, inspired by the testing equivalence framework, where the attacker can only conduct repeated success-or-failure experiments on processes. Moreover, each experiment has a cost in terms of communication actions. The resulting notion of leakage rate, measured in bits per action, is in agreement with the first model: the maximum information that can be extracted by repeated experiments coincides with the absolute leakage A of the process. Moreover, the overall extraction cost is at least A/R, where R is the rate of the process. Strategies to effectively estimate both absolute leakage and rate are also discussed.

Michele Boreale
Symbolic Protocol Analysis in Presence of a Homomorphism Operator and Exclusive Or

Security of a cryptographic protocol for a bounded number of sessions is usually expressed as a symbolic trace reachability problem. We show that symbolic trace reachability for well-defined protocols is decidable in presence of the exclusive-or theory in combination with the homomorphism axiom. These theories allow us to model basic properties of important cryptographic operators.

This trace reachability problem can be expressed as a system of symbolic deducibility constraints for a certain inference system describing the capabilities of the attacker. One main step of our proof consists in reducing deducibility constraints to constraints for deducibility in one step of the inference system. This constraint system, in turn, can be expressed as a system of quadratic equations of a particular form over ℤ/2ℤ[h], the ring of polynomials in one indeterminate over the finite field ℤ/2ℤ. We show that satisfiability of such systems is decidable.

Stéphanie Delaune, Pascal Lafourcade, Denis Lugiez, Ralf Treinen

Cryptographic Primitives

Generalized Compact Knapsacks Are Collision Resistant

In (Micciancio, FOCS 2002), it was proved that solving the generalized compact knapsack problem on the average is as hard as solving certain worst-case problems for cyclic lattices. This result immediately yielded very efficient one-way functions whose security was based on worst-case hardness assumptions. In this work, we show that, while the function proposed by Micciancio is not collision resistant, it can be easily modified to achieve collision resistance under essentially the same complexity assumptions on cyclic lattices. Our modified function is obtained as a special case of a more general result, which yields efficient collision-resistant hash functions based on the worst-case hardness of various new problems. These include new problems from algebraic number theory as well as classic lattice problems (e.g., the shortest vector problem) over ideal lattices, a class of lattices that includes cyclic lattices as a special case.

Vadim Lyubashevsky, Daniele Micciancio
An Efficient Provable Distinguisher for HFE

The HFE cryptosystem has been the subject of several cryptanalytic studies, sometimes successful, but always heuristic. To contrast with this trend, this work goes back to the beginning and achieves in a provable way a first step of cryptanalysis, which consists in distinguishing HFE public keys from random systems of quadratic equations. We provide two distinguishers: the first one has polynomial complexity and subexponential advantage; the second has subexponential complexity and advantage close to one. These distinguishers are built on the differential methodology introduced at Eurocrypt’05 by Fouque et al. Their rigorous study makes extensive use of combinatorics in binary vector spaces. This combinatorial approach is novel in the context of multivariate schemes. We believe that the alliance of both techniques provides a powerful framework for the mathematical analysis of multivariate schemes.

Vivien Dubois, Louis Granboulan, Jacques Stern
A Tight Bound for EMAC

We prove a new upper bound on the advantage of any adversary for distinguishing the encrypted CBC-MAC (EMAC) based on random permutations from a random function. Our proof uses techniques recently introduced in [BPR05], which again were inspired by [DGH+04].

The bound we prove is tight, in the sense that it matches the advantage of known attacks up to a constant factor, for a wide range of the parameters: let n denote the block size, q the number of queries the adversary is allowed to make, and ℓ an upper bound on the length (i.e., number of blocks) of the messages; then for ℓ ≤ 2^{n/8} and q ≥ ℓ² the advantage is in the order of q²/2^n (and in particular independent of ℓ). This improves on the previous bound of q² · ℓ^{Θ(1/ln ln ℓ)}/2^n from [BPR05] and matches the trivial attack (which thus is basically optimal) where one simply asks random queries until a collision is found.
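EMAC's structure is CBC-MAC under one key followed by one extra encryption of the result under an independent key. A structural sketch, with a toy keyed permutation standing in for the block cipher (illustrative only, not a secure cipher):

```python
N = 64  # toy block size in bits

def toy_permutation(key, block):
    """An invertible keyed map on N-bit blocks -- NOT a real cipher,
    just a stand-in for the random permutation E_K in the analysis."""
    # XOR with the key, multiplication by an odd constant, and addition
    # are each invertible mod 2^N, so the composition is a permutation.
    return ((block ^ key) * 0x9E3779B97F4A7C15 + key) % (1 << N)

def emac(k1, k2, blocks):
    """Encrypted CBC-MAC: CBC-chain under k1, then re-encrypt under k2."""
    state = 0
    for m in blocks:
        state = toy_permutation(k1, state ^ m)  # CBC chaining step
    return toy_permutation(k2, state)           # final encryption layer

tag = emac(0x1234, 0x5678, [1, 2, 3])
```

The final encryption under the independent key k2 is what removes plain CBC-MAC's length-extension weakness and enables the q²/2^n bound above.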

Krzysztof Pietrzak
Constructing Single- and Multi-output Boolean Functions with Maximal Algebraic Immunity

The aim of this paper is to construct boolean functions $f:\{0,1\}^n\longrightarrow\{0,1\}^m$ for which the graph gr(f) = {(x, f(x)), x ∈ {0,1}^n} ⊆ {0,1}^{n+m} has maximal algebraic immunity. This research is motivated by the need for appropriate boolean functions serving as building blocks of symmetric ciphers. Such functions should have large algebraic immunity to prevent vulnerability of the cipher to algebraic attacks. We completely solve the problem of constructing explicitly defined single-output functions for which the graph has maximal algebraic immunity. Concerning multi-output functions, we present an efficient algorithm, based on matroid union, which computes for given m, n, d the table of a function $h:\{0,1\}^n\longrightarrow\{0,1\}^m$ for which the algebraic immunity of the graph is greater than d. To the best of our knowledge, this is the first systematic method for constructing multi-output functions of high algebraic immunity.

Frederik Armknecht, Matthias Krause

Bounded Storage and Quantum Models

On Everlasting Security in the Hybrid Bounded Storage Model

The bounded storage model (BSM) bounds the storage space of an adversary rather than its running time. It utilizes the public transmission of a long random string ${\cal R}$ of length r, and relies on the assumption that an eavesdropper cannot possibly store all of this string. Encryption schemes in this model achieve the appealing property of everlasting security. In short, this means that an encrypted message remains secure even if the adversary eventually gains more storage or gains knowledge of (original) secret keys that may have been used. However, if the honest parties do not share any private information in advance, then achieving everlasting security requires high storage capacity from the honest parties (storage of $\Omega(\sqrt{r})$, as shown in [9]).

We consider the idea of a hybrid bounded storage model where computational limitations on the eavesdropper are assumed only up until the time that the transmission of ${\cal R}$ has ended. For example, can the honest parties run a computationally secure key agreement protocol in order to agree on a shared private key for the BSM, and thus achieve everlasting security with low memory requirements? We study the possibility and impossibility of everlasting security in the hybrid bounded storage model. We start by formally defining the model and everlasting security for this model. We show the equivalence of two flavors of definitions: indistinguishability of encryptions and semantic security.

On the negative side, we show that everlasting security with low storage requirements cannot be achieved by black-box reductions in the hybrid BSM. This serves as a further indication of the hardness of achieving low-storage everlasting security, adding to previous results of this nature [9, 15]. On the other hand, we show two augmentations of the model that allow for low-storage everlasting security. The first adds a random oracle to the model, while the second bounds the accessibility of the adversary to the broadcast string ${\cal R}$. Finally, we show that in these two modified models there also exist bounded storage oblivious transfer protocols with low storage requirements.

Danny Harnik, Moni Naor
On the Impossibility of Extracting Classical Randomness Using a Quantum Computer

In this work we initiate the question of whether quantum computers can provide us with an almost perfect source of classical randomness, and more generally, suffice for classical cryptographic tasks such as encryption. Indeed, it was observed [SV86, MP91, DOPS04] that classical computers are insufficient for either one of these tasks when all they have access to is a realistic imperfect source of randomness, such as the Santha-Vazirani source.

We answer this question in the negative, even in the following very restrictive model. We generously assume that quantum computation is error-free, and all the errors come in the measurements. We further assume that all the measurement errors are not only small but also detectable: namely, all that can happen is that with a small probability p_⊥ ≤ δ the (perfectly performed) measurement will result in some distinguished symbol ⊥ (indicating an “erasure”). Specifically, we assume that if an element x was supposed to be observed with probability p_x, in reality it might be observed with probability p_x′ ∈ [(1 – δ)p_x, p_x], for some small δ > 0 (so that p_⊥ = 1 – ∑_x p_x′ ≤ δ).

Yevgeniy Dodis, Renato Renner
Quantum Hardcore Functions by Complexity-Theoretical Quantum List Decoding

We present three new quantum hardcore functions for any quantum one-way function. We also give a “quantum” solution to Damgård’s question (CRYPTO’88) on his pseudorandom generator by proving the quantum hardcore property of his generator, which has been unknown to have the classical hardcore property. Our technical tool is quantum list-decoding of “classical” error-correcting codes (rather than “quantum” error-correcting codes), which is defined on the platform of computational complexity theory and cryptography (rather than information theory). In particular, we give a simple but powerful criterion that makes a polynomial-time computable code (seen as a function) a quantum hardcore for any quantum one-way function. On their own interest, we also give quantum list-decoding algorithms for codes whose associated quantum states (called codeword states) are “nearly” orthogonal using the technique of pretty good measurement.

Akinori Kawachi, Tomoyuki Yamakami

Foundations

Efficient Pseudorandom Generators from Exponentially Hard One-Way Functions

In their seminal paper [HILL99], Håstad, Impagliazzo, Levin and Luby show that a pseudorandom generator can be constructed from any one-way function. This plausibility result is one of the most fundamental theorems in cryptography and helps shape our understanding of hardness and randomness in the field. Unfortunately, the reduction of [HILL99] is not nearly as efficient nor as security-preserving as one may desire. The main reason for the security deterioration is the blowup in the size of the input. In particular, given one-way functions on n bits, one obtains by [HILL99] pseudorandom generators with seed length $\cal{O}$(n^8). Alternative constructions that are far more efficient exist when the one-way function is assumed to have a certain restricted structure (e.g., a permutation or a regular function). Recently, Holenstein [Hol06] addressed a different type of restriction: it is demonstrated in [Hol06] that the blowup in the construction may be reduced when considering one-way functions that have exponential hardness. This result generalizes the original construction of [HILL99] and obtains a generator from any exponentially hard one-way function with a blowup of $\cal{O}$(n^5), and even $\cal{O}$(n^4 log^2 n) if the resulting pseudorandom generator is allowed to have weaker (yet super-polynomial) security.

In this work we show a construction of a pseudorandom generator from any exponentially hard one-way function with a blowup of only $\cal{O}$(n^2), and respectively only $\cal{O}$(n log^2 n) if the resulting pseudorandom generator is allowed to have only super-polynomial security. Our technique does not take the path of the original [HILL99] methodology, but rather uses the tools recently presented in [HHR05] (for the setting of regular one-way functions) and develops them further.

Iftach Haitner, Danny Harnik, Omer Reingold
Hardness of Distinguishing the MSB or LSB of Secret Keys in Diffie-Hellman Schemes

In this paper we introduce very simple deterministic randomness extractors for Diffie-Hellman distributions. More specifically, we show that the k most significant bits or the k least significant bits of a random element in a subgroup of $\mathbb Z^\star_p$ are indistinguishable from a random bit-string of the same length. This allows us to show that, under the Decisional Diffie-Hellman assumption, we can deterministically derive a uniformly random bit-string from a Diffie-Hellman exchange in the standard model. We then show that it can be used in key exchange or encryption schemes to avoid the leftover hash lemma and universal hash functions.
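The extractor itself is just bit truncation. A sketch of taking the k most (or least) significant bits of a Diffie-Hellman value, with toy parameters chosen here purely for illustration:

```python
def msb_extract(value, p, k):
    """Keep the k most significant bits of value in [0, p)."""
    return value >> (p.bit_length() - k)

def lsb_extract(value, k):
    """Keep the k least significant bits of value."""
    return value % (1 << k)

# Toy Diffie-Hellman exchange mod a small prime (illustrative only;
# the paper's result concerns subgroups of Z*_p of cryptographic size).
p, g = 1019, 2
a, b = 345, 678                      # the two parties' secret exponents
shared = pow(pow(g, a, p), b, p)     # both sides compute g^(ab) mod p
key_bits = msb_extract(shared, p, 4) # deterministic 4-bit extraction
```

The point of the paper is that this plain truncation, with no hashing step, already yields bits indistinguishable from uniform under DDH.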

Pierre-Alain Fouque, David Pointcheval, Jacques Stern, Sébastien Zimmer
A Probabilistic Hoare-style Logic for Game-Based Cryptographic Proofs

We extend a Probabilistic Hoare-style logic to formalize game-based cryptographic proofs. Our approach provides a systematic and rigorous framework, thus preventing errors from being introduced. We illustrate our technique by proving semantic security of ElGamal.

Ricardo Corin, Jerry den Hartog

Multi-party Protocols

Generic Construction of Hybrid Public Key Traitor Tracing with Full-Public-Traceability

In Eurocrypt 2005, Chabanne, Phan and Pointcheval introduced an interesting property for traitor tracing schemes called public traceability, which makes tracing a black-box public operation. However, their proposed scheme only worked for two users, and an open question posed by the authors was to provide this property for multi-user systems.

In this paper, we give a comprehensive solution to this problem by giving a generic construction for a hybrid traitor tracing scheme that provides full public traceability. We follow the Tag-KEM/DEM paradigm of hybrid encryption systems and extend it to the multi-receiver scenario. We define Tag-Broadcast KEM/DEM and construct a secure Tag-Broadcast KEM from a CCA-secure PKE and a target-collision-resistant hash function. We then use this Tag-Broadcast KEM together with a semantically secure DEM to give a generic construction for hybrid public key broadcast encryption. The scheme has a black-box tracing algorithm that always correctly identifies a traitor. The hybrid structure makes the system very efficient, both in terms of computation and communication cost. Finally, we show a method of reducing the communication cost by using codes with the identifiable parent property.

Duong Hieu Phan, Reihaneh Safavi-Naini, Dongvu Tonien
An Adaptively Secure Mix-Net Without Erasures

We construct the first mix-net that is secure against adaptive adversaries corrupting any minority of the mix-servers and any set of senders. The mix-net is based on the Paillier cryptosystem and analyzed in the universal composability model without erasures, under the decisional composite residuosity assumption, the strong RSA assumption, and the discrete logarithm assumption. We assume the existence of ideal functionalities for a bulletin board, key generation, and coin-flipping.

Douglas Wikström, Jens Groth
Multipartite Secret Sharing by Bivariate Interpolation

Given a set of participants that is partitioned into distinct compartments, a multipartite access structure is an access structure that does not distinguish between participants that belong to the same compartment. We examine here three types of such access structures – compartmented access structures with lower bounds, compartmented access structures with upper bounds, and hierarchical threshold access structures. We realize those access structures by ideal perfect secret sharing schemes that are based on bivariate Lagrange interpolation. The main novelty of this paper is the introduction of bivariate interpolation and its potential power in designing schemes for multipartite settings, as different compartments may be associated with different lines in the plane. In particular, we show that the introduction of a second dimension may create the same hierarchical effect as polynomial derivatives and Birkhoff interpolation were shown to do in [13].

Tamir Tassa, Nira Dyn
Identity-Based Encryption Gone Wild

In this paper we introduce the notion of identity based encryption with wildcards, or WIBE for short. This allows the encryption of messages to multiple parties with common fields in their identity strings, for example email groups in a corporate hierarchy. We propose a full security notion and give efficient implementations meeting this notion in the standard model and in the random oracle model.

Michel Abdalla, Dario Catalano, Alexander W. Dent, John Malone-Lee, Gregory Neven, Nigel P. Smart

Games

Deterministic Priority Mean-Payoff Games as Limits of Discounted Games

Inspired by the paper of de Alfaro, Henzinger and Majumdar [1] about discounted μ-calculus, we show new surprising links between parity games and different classes of discounted games.

Hugo Gimbert, Wiesław Zielonka
Recursive Concurrent Stochastic Games

We study Recursive Concurrent Stochastic Games (RCSGs), extending our recent analysis of recursive simple stochastic games [14, 15] to a concurrent setting where the two players choose moves simultaneously and independently at each state. For multi-exit games, our earlier work already showed undecidability for basic questions like termination, thus we focus on the important case of single-exit RCSGs (1-RCSGs).

We first characterize the value of a 1-RCSG termination game as the least fixed point solution of a system of nonlinear minimax functional equations, and use it to show PSPACE decidability for the quantitative termination problem. We then give a strategy improvement technique, which we use to show that player 1 (the maximizer) has ε-optimal randomized Stackless & Memoryless (r-SM) strategies, while player 2 (the minimizer) has optimal r-SM strategies. Thus, such games are r-SM-determined. These results mirror and generalize in a strong sense the randomized memoryless determinacy results for finite stochastic games, and extend the classic Hoffman-Karp [19] strategy improvement approach from the finite to an infinite state setting. The proofs in our infinite-state setting are, however, very different.

We show that our upper bounds, even for qualitative termination, cannot be improved without a major breakthrough, by giving two reductions: first a P-time reduction from the long-standing square-root sum problem to the quantitative termination decision problem for finite concurrent stochastic games, and then a P-time reduction from the latter problem to the qualitative termination problem for 1-RCSGs.

Kousha Etessami, Mihalis Yannakakis
Half-Positional Determinacy of Infinite Games

We study infinite games where one of the players always has a positional (memory-less) winning strategy, while the other player may use a history-dependent strategy. We investigate winning conditions which guarantee such a property for all arenas, or all finite arenas. We establish some closure properties of such conditions, and discover some common reasons behind several known and new positional determinacy results. We exhibit several new classes of winning conditions having this property: the class of concave conditions (for finite arenas) and the classes of monotonic conditions and geometrical conditions (for all arenas).

Eryk Kopczyński
A Game-Theoretic Approach to Deciding Higher-Order Matching

We sketch a proof using a game-theoretic argument that the higher-order matching problem is decidable.

Colin Stirling

Semantics

Descriptive and Relative Completeness of Logics for Higher-Order Functions

This paper establishes a strong completeness property of compositional program logics for pure and imperative higher-order functions introduced in [18, 16, 17, 19, 3]. This property, called descriptive completeness, says that for each program there is an assertion fully describing the program’s behaviour up to the standard observational semantics. This formula is inductively calculable from the program text alone. As a consequence we obtain the first relative completeness result for compositional logics of pure and imperative call-by-value higher-order functions in the full type hierarchy.

Kohei Honda, Martin Berger, Nobuko Yoshida
Interpreting Polymorphic FPC into Domain Theoretic Models of Parametric Polymorphism

This paper shows how parametric PILL_Y (Polymorphic Intuitionistic/Linear Lambda calculus with a fixed point combinator Y) can be used as a metalanguage for domain theory, as originally suggested by Plotkin more than a decade ago. Using recent results about solutions to recursive domain equations in parametric models of PILL_Y, we show how to interpret FPC in these. Of particular interest is a model based on “admissible” pers over a reflexive domain, the theory of which can be seen as a domain theory for (impredicative) polymorphism. We show how this model gives rise to a parametric and computationally adequate model of PolyFPC, an extension of FPC with impredicative polymorphism. This is the first model of a language with parametric polymorphism, recursive terms and recursive types in a non-linear setting.

Rasmus Ejlers Møgelberg
Typed GoI for Exponentials

In a recent paper we introduced a typed version of Geometry of Interaction, called the Multi-object Geometry of Interaction (MGoI). Using this framework we gave an interpretation for the unit-free multiplicative fragment of linear logic. In this paper, we extend our work to cover the exponentials. We introduce the notion of a GoI Category that embodies the necessary ingredients for an MGoI interpretation of unit-free multiplicative and exponential linear logic.

Esfandiar Haghverdi
Commutative Locative Quantifiers for Multiplicative Linear Logic

The paper presents a solution to the technical problem, posed by Girard after the introduction of Ludics, of how to define proof nets with quantifiers that commute with multiplicatives. According to the principles of Ludics, the commuting quantifiers have a “locative” nature; in particular, quantified formulas are not defined modulo variable renaming. The solution is given by defining a new correctness criterion for first-order multiplicative proof structures that characterizes the system obtained by adding a congruence implying $\forall x\,(A \otimes B) = \forall x\,A \otimes \forall x\,B$ to first-order multiplicative linear logic with locative quantifiers. In the conclusions we briefly discuss the interpretation of locative quantifiers as storage operators.

Stefano Guerrini, Patrizia Marzuoli

Automata I

The Wadge Hierarchy of Deterministic Tree Languages

We provide a complete description of the Wadge hierarchy for deterministically recognizable sets of infinite trees. In particular we give an elementary procedure to decide if one deterministic tree language is continuously reducible to another. This extends Wagner’s results on the hierarchy of ω-regular languages to the case of trees.

Filip Murlak
Timed Petri Nets and Timed Automata: On the Discriminating Power of Zeno Sequences

Timed Petri nets and timed automata are two standard models for the analysis of real-time systems. In this paper, we prove that they are incomparable with respect to timed language equivalence. We therefore propose an extension of timed Petri nets with read-arcs (RA-TdPN), whose coverability problem is decidable. We also show that this model unifies timed Petri nets and timed automata. We then establish numerous expressiveness results and prove that Zeno behaviours discriminate between several sub-classes of RA-TdPNs. This has surprising consequences for timed automata, e.g. on the power of non-deterministic clock resets.

Patricia Bouyer, Serge Haddad, Pierre-Alain Reynier
On Complexity of Grammars Related to the Safety Problem

Leftist grammars were introduced by Motwani et al., who established the relationship between the complexity of the accessibility problem (or safety problem) for a certain general protection system and the membership problem of these grammars. The membership problem for leftist grammars is decidable, which implies the decidability of the accessibility problem. It is shown that the membership problem for leftist grammars is PSPACE-hard. Therefore, the accessibility problem in the appropriate protection systems is PSPACE-hard as well. Furthermore, the PSPACE-hardness result is adapted to a very restricted class of leftist grammars, when the grammar is part of the input.

Tomasz Jurdziński

Models

Jumbo λ-Calculus

We make an argument that, for any study involving computational effects such as divergence or continuations, the traditional syntax of simply typed lambda-calculus cannot be regarded as canonical, because standard arguments for canonicity rely on isomorphisms that may not exist in an effectful setting. To remedy this, we define a “jumbo lambda-calculus” that fuses the traditional connectives together into more general ones, so-called “jumbo connectives”. We provide two pieces of evidence for our thesis that the jumbo formulation is advantageous.

Firstly, we show that the jumbo lambda-calculus provides a “complete” range of connectives, in the sense of including every possible connective that, within the beta-eta theory, possesses a reversible rule.

Secondly, in the presence of effects, we show that there is no decomposition of jumbo connectives into non-jumbo ones that is valid in both call-by-value and call-by-name.

Paul Blain Levy
λ-RBAC: Programming with Role-Based Access Control

We study mechanisms that permit program components to express role constraints on clients, focusing on programmatic security mechanisms, which permit access controls to be expressed, in situ, as part of the code realizing basic functionality. In this setting, two questions immediately arise:

The user of a component faces the issue of safety: is a particular role sufficient to use the component?

The component designer faces the dual issue of protection: is a particular role demanded in all execution paths of the component?

We provide a formal calculus and static analysis to answer both questions.

Radha Jagadeesan, Alan Jeffrey, Corin Pitcher, James Riely
Communication of Two Stacks and Rewriting

Rewriting systems working on words with a center marker are considered. The derivation is done by erasing a prefix or a suffix and then adding a prefix or a suffix. This can be naturally viewed as two stacks communicating with each other according to a fixed protocol. The paper systematically considers different cases of these systems and determines their expressiveness. Several cases are identified where very limited communication surprisingly yields universal computation power.
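As a purely illustrative sketch of this model (the encoding and function names are our own, not from the paper), a derivation step on a word can be simulated directly: erase a factor at one end, then add a factor at one end. Reading the word as two stacks split at the center marker, erasing a prefix pops the left stack and adding a suffix pushes the right one.

```python
def step(word, erase, erase_at, add, add_at):
    """One hypothetical rewrite step on a plain word (marker omitted).
    erase_at / add_at are "prefix" or "suffix". Returns the new word,
    or None if the erased factor is absent at the chosen end."""
    if erase_at == "prefix":
        if not word.startswith(erase):
            return None
        word = word[len(erase):]
    else:
        if not word.endswith(erase):
            return None
        if erase:  # guard: word[:-0] would empty the word
            word = word[:-len(erase)]
    return add + word if add_at == "prefix" else word + add

# Example protocol: erase the prefix "a", add the suffix "a" -- the word
# rotates, i.e. the left stack passes one symbol to the right stack.
print(step("abc", "a", "prefix", "a", "suffix"))  # -> "bca"
```

Even such minimal pop-then-push protocols can, per the paper, yield universal computation power in some cases.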

Juhani Karhumäki, Michal Kunc, Alexander Okhotin

Equations

On the Axiomatizability of Priority

This paper studies the equational theory of bisimulation equivalence over the process algebra BCCSP extended with the priority operator of Baeten, Bergstra and Klop. It is proven that, in the presence of an infinite set of actions, bisimulation equivalence has no finite, sound, ground-complete equational axiomatization over that language. This negative result applies even if the syntax is extended with an arbitrary collection of auxiliary operators, and motivates the study of axiomatizations using conditional equations. In the presence of an infinite set of actions, it is shown that, in general, bisimulation equivalence has no finite, sound, ground-complete axiomatization consisting of conditional equations over BCCSP. Sufficient conditions on the priority structure over actions are identified that lead to a finite, ground-complete axiomatization of bisimulation equivalence using conditional equations.

Luca Aceto, Taolue Chen, Wan Fokkink, Anna Ingolfsdottir
A Finite Equational Base for CCS with Left Merge and Communication Merge

Using the left merge and communication merge from ACP, we present an equational base (i.e., a ground-complete and ω-complete set of valid equations) for the fragment of CCS without restriction and relabelling. Our equational base is finite if the set of actions is finite.

Luca Aceto, Wan Fokkink, Anna Ingolfsdottir, Bas Luttik
Theories of HNN-Extensions and Amalgamated Products

It is shown that the existential theory of ${\mathbb G}$ with rational constraints, over an HNN-extension ${\mathbb G}=\langle {\mathbb H},t;\ t^{-1}at=\varphi(a)\ (a \in A) \rangle$, is decidable, provided that the same problem is decidable in the base group ${\mathbb H}$ and that $A$ is a finite group. The positive theory of ${\mathbb G}$ is decidable, provided that the existential positive theory of ${\mathbb G}$ is decidable and that $A$ and $\varphi(A)$ are proper subgroups of the base group ${\mathbb H}$ with $A \cap \varphi(A)$ finite. Analogous results are also shown for amalgamated products. As a corollary, the positive theory and the existential theory with rational constraints of any finitely generated virtually-free group are decidable.

Markus Lohrey, Géraud Sénizergues
On Intersection Problems for Polynomially Generated Sets

Some classes of sets of vectors of natural numbers are introduced as generalizations of the semi-linear sets, among them the ‘simple semi-polynomial sets.’ Motivated by verification problems that involve arithmetical constraints, we show results on the intersection of such generalized sets with semi-linear sets, singling out cases where the non-emptiness of intersection is decidable. Starting from these initial results, we list some problems on solvability of arithmetical constraints beyond the semi-linear ones.

Wong Karianto, Aloys Krieg, Wolfgang Thomas

Logics

Invisible Safety of Distributed Protocols

The method of “Invisible Invariants” has been applied successfully to protocols that assume a “symmetric” underlying topology, be it cliques, stars, or rings. In this paper we show how the method can be applied to proving safety properties of distributed protocols running under arbitrary topologies. Many safety properties of such protocols have reachability predicates, which, at first glance, are beyond the scope of the Invisible Invariants method. To overcome this difficulty, we present a technique, called “coloring,” that allows, in many instances, to replace the second-order reachability predicates by first-order predicates, resulting in properties that are amenable to Invisible Invariants. We demonstrate our techniques on several distributed protocols, including a variant of Luby’s Maximal Independent Set protocol, the Leader Election protocol used in the IEEE 1394 (FireWire) distributed bus protocol, and various distributed spanning tree algorithms. All examples have been tested using the symbolic model checker tlv.

Ittai Balaban, Amir Pnueli, Lenore D. Zuck
The fully enriched μ-calculus is the extension of the propositional μ-calculus with inverse programs, graded modalities, and nominals. While satisfiability in several expressive fragments of the fully enriched μ-calculus is known to be decidable and ExpTime-complete, it has recently been proved that the full calculus is undecidable. In this paper, we study the fragments of the fully enriched μ-calculus that are obtained by dropping at least one of the additional constructs. We show that, in all fragments obtained in this way, satisfiability is decidable and ExpTime-complete. Thus, we identify a family of decidable logics that are maximal (and incomparable) in expressive power. Our results are obtained by introducing two new automata models, showing that their emptiness problems are ExpTime-complete, and then reducing satisfiability in the relevant logics to this problem. The automata models we introduce are two-way graded alternating parity automata over infinite trees (2GAPT) and fully enriched automata (FEA) over infinite forests. The former are a common generalization of two incomparable automata models from the literature. The latter extend alternating automata in a similar way as the fully enriched μ-calculus extends the standard μ-calculus.

Piero A. Bonatti, Carsten Lutz, Aniello Murano, Moshe Y. Vardi
Interpreting Tree-to-Tree Queries

We establish correspondences between top-down tree building query languages and predicate logics. We consider the expressive power of the query language XQ, a clean core of the practitioner’s language XQuery. We show that all queries in XQ with only atomic equality are equivalent to “first-order interpretations”, an analog of first-order logic (FO) in the setting of transformations of tree-structured data. When XQ is considered with deep equality, we find that queries can be translated into FO with counting (FO(Cnt)). We establish partial converses to this, characterizing the subset of the FO resp. FO(Cnt) interpretations that correspond to XQ. Finally, we study the expressive power of fragments of XQ and obtain partial characterizations in terms of existential FO and a fragment of FO that is two-variable if the tree node labeling alphabet is assumed fixed.

Michael Benedikt, Christoph Koch

Automata II

Constructing Exponential-Size Deterministic Zielonka Automata

The well-known algorithm of Zielonka describes how to automatically transform a sequential automaton into a deterministic asynchronous trace automaton. In this paper, we improve the construction of deterministic asynchronous automata from finite-state automata. Our construction improves on the well-known one in that the size of the asynchronous automaton is simply exponential in both the size of the sequential automaton and the number of processes. In contrast, Zielonka’s algorithm gives an asynchronous automaton that is doubly exponential in the number of processes (and simply exponential in the size of the automaton).

Blaise Genest, Anca Muscholl
Flat Parametric Counter Automata

In this paper we study the reachability problem for parametric flat counter automata, in relation with the satisfiability problem of three fragments of integer arithmetic. The equivalence between non-parametric flat counter automata and Presburger arithmetic was established previously by Comon and Jurski [5]. We simplify their proof by introducing finite state automata defined over alphabets of a special kind of graphs (zigzags). This framework also allows one to express the reachability problem for parametric automata with one control loop as the existence of solutions of a 1-parametric linear Diophantine system. The latter problem is shown to be decidable, using a number-theoretic argument. Finally, the general reachability problem for parametric flat counter automata with more than one loop is shown to be undecidable, by reduction from Hilbert’s Tenth Problem [9].

Marius Bozga, Radu Iosif, Yassine Lakhnech
Lower Bounds for Complementation of ω-Automata Via the Full Automata Technique

In this paper, we first introduce a new lower bound technique for the state complexity of transformations of automata: namely, we suggest considering the class of full automata in lower bound analysis. We then apply this technique to the complementation of nondeterministic ω-automata and obtain several lower bound results. In particular, we prove an Ω((0.76n)^n) lower bound for Büchi complementation, which also holds for almost every complementation and determinization transformation of nondeterministic ω-automata, and we prove an optimal Ω((nk)^n) lower bound for the complementation of generalized Büchi automata, which holds for Streett automata as well.

Qiqi Yan
Backmatter
Metadata
Title
Automata, Languages and Programming
Edited by
Michele Bugliesi
Bart Preneel
Vladimiro Sassone
Ingo Wegener
Copyright year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-35908-1
Print ISBN
978-3-540-35907-4
DOI
https://doi.org/10.1007/11787006