2008 | Book

Automata, Languages and Programming

35th International Colloquium, ICALP 2008, Reykjavik, Iceland, July 7-11, 2008, Proceedings, Part II

Edited by: Luca Aceto, Ivan Damgård, Leslie Ann Goldberg, Magnús M. Halldórsson, Anna Ingólfsdóttir, Igor Walukiewicz

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

The two-volume set LNCS 5125 and LNCS 5126 constitutes the refereed proceedings of the 35th International Colloquium on Automata, Languages and Programming, ICALP 2008, held in Reykjavik, Iceland, in July 2008. The 126 revised full papers presented together with 4 invited lectures were carefully reviewed and selected from a total of 407 submissions. The papers are grouped in three major tracks on algorithms, automata, complexity and games, on logic, semantics, and theory of programming, and on security and cryptography foundations. LNCS 5126 contains 56 contributions of track B and track C selected from 208 submissions and 2 invited lectures. The papers for track B are organized in topical sections on bounds, distributed computation, real-time and probabilistic systems, logic and complexity, words and trees, nonstandard models of computation, reasoning about computation, and verification. The papers of track C cover topics in security and cryptography such as theory, secure computation, two-party protocols and zero-knowledge, encryption with special properties/quantum cryptography, various types of hashing, as well as public-key cryptography and authentication.

Table of Contents

Frontmatter

Invited Lectures

Composable Formal Security Analysis: Juggling Soundness, Simplicity and Efficiency

A security property of a protocol is composable if it remains intact even when the protocol runs alongside other protocols in the same system. We describe a method for asserting composable security properties, and demonstrate its usefulness. In particular, we show how this method can be used to provide security analysis that is formal, relatively simple, and still does not make unjustified abstractions of the underlying cryptographic algorithms in use. It can also greatly enhance the feasibility of automated security analysis of systems of realistic size.

Ran Canetti
Newton’s Method for ω-Continuous Semirings

Fixed point equations X = f(X) over ω-continuous semirings are a natural mathematical foundation of interprocedural program analysis. Generic algorithms for solving these equations are based on Kleene's theorem, which states that the sequence 0, f(0), f(f(0)), ... converges to the least fixed point. However, this approach is often inefficient. We report on recent work in which we extend Newton's method, the well-known technique from numerical mathematics, to arbitrary ω-continuous semirings, and analyze its convergence speed in the real semiring.

Javier Esparza, Stefan Kiefer, Michael Luttenberger
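
The contrast between the Kleene sequence 0, f(0), f(f(0)), ... and Newton's method can be illustrated on the real semiring with a one-variable equation of our own choosing (not taken from the paper): X = 0.5·X² + 0.5, whose least fixed point is 1.

```python
# Kleene iteration vs. Newton's method for the least fixed point of
# f(X) = 0.5*X**2 + 0.5 over the real semiring (least solution X = 1).
# This one-variable example is our own illustration, not the paper's.

def f(x):
    return 0.5 * x * x + 0.5

def kleene(steps):
    # The sequence 0, f(0), f(f(0)), ... -- converges, but slowly here.
    x = 0.0
    for _ in range(steps):
        x = f(x)
    return x

def newton(steps):
    # Newton iteration for g(X) = f(X) - X = 0, started at 0.
    # Here g'(x) = x - 1, and each step halves the distance to 1.
    x = 0.0
    for _ in range(steps):
        g = f(x) - x
        dg = x - 1.0
        x = x - g / dg
    return x

print(kleene(10))   # still well below 1
print(newton(10))   # exactly 1 - 2**-10
```

On this equation Newton's method halves the error at every step, while the Kleene sequence closes in far more slowly — the kind of convergence-speed gap the paper analyzes.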

Track B: Logic, Semantics, and Theory of Programming

Bounds

The Tractability Frontier for NFA Minimization

We essentially show that minimizing finite automata is NP-hard as soon as one deviates from the class of deterministic finite automata. More specifically, we show that minimization is NP-hard for all finite automata classes that subsume the class that is unambiguous, allows at most one state q with a non-deterministic transition for at most one alphabet symbol a, and is allowed to visit state q at most once in a run. Furthermore, this result holds even for automata that only accept finite languages.

Henrik Björklund, Wim Martens
Finite Automata, Digraph Connectivity, and Regular Expression Size
(Extended Abstract)

Recently, lower bounds on the minimum required size for the conversion of deterministic finite automata into regular expressions, and on the required size of regular expressions resulting from applying some basic language operations on them, were given by Gelade and Neven [8]. We strengthen and extend these results, obtaining lower bounds that are in part optimal; notably, the presented examples are over a binary alphabet, which is best possible. To this end, we develop a different, more versatile lower bound technique based on the star height of regular languages. It is known that for a restricted class of regular languages, the star height can be determined from the digraph underlying the transition structure of the minimal finite automaton accepting that language. In this way, star height is tied to cycle rank, a structural complexity measure for digraphs proposed by Eggan and Büchi, which measures the degree of connectivity of directed graphs.

Hermann Gruber, Markus Holzer
Leftist Grammars Are Non-primitive Recursive

Leftist grammars were introduced by Motwani et al. [7] as a tool to show decidability of the accessibility problem in certain general protection systems. It is shown that the membership problem for languages defined by leftist grammars is non-primitive recursive. Therefore, by the reduction of Motwani et al., the accessibility problem in the appropriate protection systems is non-primitive recursive as well.

Tomasz Jurdziński
On the Computational Completeness of Equations over Sets of Natural Numbers

Systems of equations of the form ϕ_j(X_1, ..., X_n) = ψ_j(X_1, ..., X_n) with 1 ≤ j ≤ m are considered, in which the unknowns X_i are sets of natural numbers, while the expressions ϕ_j, ψ_j may contain singleton constants and the operations of union (possibly replaced by intersection) and pairwise addition. It is shown that the family of sets representable by unique (least, greatest) solutions of such systems is exactly the family of recursive (r.e., co-r.e., respectively) sets of numbers. Basic decision problems for these systems are located in the arithmetical hierarchy.

Artur Jeż, Alexander Okhotin
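
Pairwise addition of sets, S + T = {s + t : s ∈ S, t ∈ T}, and the least-solution semantics can be made concrete with a toy equation of our own (not from the paper), X = (X + X) ∪ {2}, computed by fixed-point iteration up to a cut-off:

```python
# Pairwise addition on sets of naturals and the least solution of the toy
# equation X = (X + X) ∪ {2}, truncated at LIMIT so iteration terminates.
# The equation and cut-off are our own illustration.

LIMIT = 50

def plus(s, t):
    # Pairwise addition {a + b | a in s, b in t}, truncated at LIMIT.
    return {a + b for a in s for b in t if a + b <= LIMIT}

def least_solution():
    # Kleene-style iteration from the empty set up to the least fixed point.
    x = set()
    while True:
        nxt = plus(x, x) | {2}
        if nxt == x:
            return x
        x = nxt

evens = least_solution()
print(sorted(evens))  # the even numbers 2, 4, ..., 50
```

The least solution (below the cut-off) is the set of even numbers ≥ 2: the constant contributes 2, and pairwise addition then generates every larger even number.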

Distributed Computation

Placement Inference for a Client-Server Calculus

Placement inference assigns locations to operations in a distributed program under the constraints that some operations can only execute on particular locations and that values may not be transferred arbitrarily between locations. An optimal choice of locations additionally minimizes the run time of the program, given that operations take different amounts of time at different locations and that a cost is associated with transferring a value from one location to another.

We define a language with a time- and location-aware semantics, formalize placement inference in terms of constraints, and show that solving these constraints is an NP-complete problem. We then show that optimal placements are computable via a reformulation of the semantics in terms of matrices and an application of the max-plus spectral theory. A prototype implementation validates our results.

Matthias Neubauer, Peter Thiemann
Extended pi-Calculi

We demonstrate a general framework for extending the pi-calculus with data terms. In this we generalise and improve on several related efforts such as the spi calculus and the applied pi-calculus, also including pattern matching and polyadic channels. Our framework uses a single untyped notion of agent, name and scope, an operational semantics without structural equivalence and a simple definition of bisimilarity. We provide general criteria on the semantic equivalence of data terms; with these we prove algebraic laws and that bisimulation is preserved by the operators in the usual way. The definitions are simple enough that an implementation in an automated proof assistant is feasible.

Magnus Johansson, Joachim Parrow, Björn Victor, Jesper Bengtson
Completeness and Logical Full Abstraction in Modal Logics for Typed Mobile Processes

We study an extension of Hennessy-Milner logic for the π-calculus which gives a sound and complete characterisation of representative behavioural preorders and equivalences over typed processes. New connectives are introduced representing actual and hypothetical typed parallel composition and hiding. We study three compositional proof systems, characterising the May/Must testing preorders and bisimilarity. The proof systems are uniformly applicable to different type disciplines. Logical axioms distill proof rules for parallel composition studied by Amadio and Dam. We demonstrate the expressiveness of our logic through verification of state transfer in multiparty interactions and fully abstract embeddings of program logics for higher-order functions.

Martin Berger, Kohei Honda, Nobuko Yoshida

Real-Time and Probabilistic Systems

On the Sets of Real Numbers Recognized by Finite Automata in Multiple Bases

This paper studies the expressive power of finite automata recognizing sets of real numbers encoded in positional notation. We consider Muller automata as well as the restricted class of weak deterministic automata, used as symbolic set representations in actual applications. In previous work, it has been established that the sets of numbers that are recognizable by weak deterministic automata in two bases that do not share the same set of prime factors are exactly those that are definable in the first-order additive theory of real and integer numbers. This result extends Cobham's theorem, which characterizes the sets of integer numbers that are recognizable by finite automata in multiple bases.

In this paper, we first generalize this result to multiplicatively independent bases, which brings it closer to the original statement of Cobham's theorem. Then, we study the sets of reals recognizable by Muller automata in two bases. We show with a counterexample that, in this setting, Cobham's theorem does not generalize to multiplicatively independent bases. Finally, we prove that the sets of reals that are recognizable by Muller automata in two bases that do not share the same set of prime factors are exactly those definable in the first-order additive theory of real and integer numbers. These sets are thus also recognizable by weak deterministic automata. This result leads to a precise characterization of the sets of real numbers that are recognizable in multiple bases, and provides a theoretical justification to the use of weak automata as symbolic representations of sets.

Bernard Boigelot, Julien Brusten, Véronique Bruyère
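
For intuition about base-dependent recognizability — on integers rather than the reals treated in the paper, and as our own toy example — the powers of 2 are recognizable by a finite automaton reading base-2 digits, since their base-2 representations form exactly the regular language 10*, while their base-3 representations form no regular language:

```python
# Powers of 2 are base-2 recognizable: their most-significant-digit-first
# base-2 representations are exactly the regular language 10*.
# Our own integer example, not the real-number setting of the paper.
import re

def base_repr(n, b):
    # Representation of n >= 1 in base b, most significant digit first.
    digits = ''
    while n:
        digits = str(n % b) + digits
        n //= b
    return digits

def power_of_two_in_base2(n):
    return re.fullmatch('10*', base_repr(n, 2)) is not None

powers = {2 ** k for k in range(10)}
assert all(power_of_two_in_base2(n) == (n in powers) for n in range(1, 600))
print("base-2 recognizer agrees with the set of powers of 2 below 600")
```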
On Expressiveness and Complexity in Real-Time Model Checking

Metric Interval Temporal Logic (MITL) is a popular formalism for expressing real-time specifications. This logic achieves decidability by restricting the precision of timing constraints, in particular, by banning so-called punctual specifications. In this paper we introduce a significantly more expressive logic that can express a wide variety of punctual specifications, but whose model-checking problem has the same complexity as that of MITL. We conclude that for model checking the most commonly occurring specifications, such as invariance and bounded response, punctuality can be accommodated at no cost.

Patricia Bouyer, Nicolas Markey, Joël Ouaknine, James Worrell
STORMED Hybrid Systems

We introduce STORMED hybrid systems, a decidable class of hybrid systems which is similar to o-minimal hybrid automata in that the continuous dynamics and constraints are described in an o-minimal theory. However, unlike o-minimal hybrid automata, the variables are not initialized in a memoryless fashion at discrete steps. STORMED hybrid systems require flows which are monotonic with respect to some vector in the continuous space and can be characterised as bounded-horizon systems in terms of their discrete transitions. We demonstrate that such systems admit a finite bisimulation, which can be effectively constructed provided the o-minimal theory used to describe the system is decidable. As a consequence, many verification problems for such systems have effective decision algorithms.

Vladimeros Vladimerou, Pavithra Prabhakar, Mahesh Viswanathan, Geir Dullerud
Controller Synthesis and Verification for Markov Decision Processes with Qualitative Branching Time Objectives

We show that the controller synthesis and verification problems for Markov decision processes with qualitative PECTL* objectives are 2-EXPTIME complete. More precisely, the algorithms are polynomial in the size of a given Markov decision process and doubly exponential in the size of a given qualitative PECTL* formula. Moreover, we show that if a given qualitative PECTL* objective is achievable by some strategy, then it is also achievable by an effectively constructible one-counter strategy, where the associated complexity bounds are essentially the same as above. For the fragment of qualitative PCTL objectives, we obtain EXPTIME completeness and the algorithms are only singly exponential in the size of the formula.

Tomáš Brázdil, Vojtěch Forejt, Antonín Kučera

Logic and Complexity

On Datalog vs. LFP

We show that the homomorphism preservation theorem fails for LFP, both in general and in restriction to finite structures. That is, there is a formula of LFP that is preserved under homomorphisms (in the finite) but is not equivalent (in the finite) to a Datalog program. This resolves a question posed by Atserias. The results are established by two different methods: (1) a method of diagonalisation that works only in the presence of infinite structures, but establishes a stronger result showing a hierarchy of homomorphism-preserved problems in LFP; and (2) a method based on a pumping lemma for Datalog due to Afrati, Cosmadakis and Yannakakis which establishes the result in restriction to finite structures. We refine the pumping lemma of Afrati et al. and relate it to the power of Monadic Second-Order Logic on tree decompositions of structures.

Anuj Dawar, Stephan Kreutzer
Directed st-Connectivity Is Not Expressible in Symmetric Datalog

We show that the directed st-connectivity problem cannot be expressed in symmetric Datalog, a fragment of Datalog introduced in [5]. It was shown there that symmetric Datalog programs can be evaluated in logarithmic space and that this fragment of Datalog captures logspace when augmented with negation and an auxiliary successor relation S together with two constant symbols for the smallest and largest elements with respect to S. In contrast, undirected st-connectivity is expressible in symmetric Datalog and is in fact one of the simplest examples of the expressive power of this logic. It follows that undirected non-st-connectivity can be expressed in restricted symmetric monotone Krom SNP, whereas directed non-st-connectivity is only definable in the more expressive restricted monotone Krom SNP. By results of [8], the inexpressibility result for directed st-connectivity extends to a wide class of homomorphism problems that fail to meet a certain algebraic condition.

László Egri, Benoît Larose, Pascal Tesson
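
Undirected st-connectivity, the positive example above, can be computed by the least-fixpoint evaluation typical of Datalog programs. The toy evaluator below is our own sketch (not the paper's formalism); symmetry enters through closing the edge relation under reversal:

```python
# Least-fixpoint evaluation of Datalog-style rules for undirected
# reachability (a hypothetical toy evaluator, not the paper's formalism):
#     path(x, x).
#     path(x, y) :- path(x, z), edge(z, y).
# The edge relation is closed under reversal, so every edge can be
# traversed in both directions.

def reachable(nodes, edges, s, t):
    sym = set(edges) | {(v, u) for (u, v) in edges}   # symmetric closure
    path = {(x, x) for x in nodes}                    # base fact path(x, x)
    changed = True
    while changed:                                    # iterate to fixpoint
        changed = False
        for (x, z) in list(path):
            for (z2, y) in sym:
                if z == z2 and (x, y) not in path:
                    path.add((x, y))
                    changed = True
    return (s, t) in path

nodes = {1, 2, 3, 4}
edges = {(1, 2), (2, 3)}
print(reachable(nodes, edges, 3, 1))  # True: 3-2-1
print(reachable(nodes, edges, 1, 4))  # False: 4 is isolated
```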
Non-dichotomies in Constraint Satisfaction Complexity

We show that every computational decision problem is polynomial-time equivalent to a constraint satisfaction problem (CSP) with an infinite template. We also construct for every decision problem L an ω-categorical template Γ such that L reduces to CSP(Γ) and CSP(Γ) is in coNP^L (i.e., the class coNP with an oracle for L). CSPs with ω-categorical templates are of special interest, because the universal-algebraic approach can be applied to study their computational complexity.

Furthermore, we prove that there are ω-categorical templates with coNP-complete CSPs and ω-categorical templates with coNP-intermediate CSPs, i.e., problems in coNP that are neither coNP-complete nor in P (unless P = coNP). To construct the coNP-intermediate CSP with an ω-categorical template we modify the proof of Ladner's theorem. A similar modification allows us to also prove a non-dichotomy result for a class of left-hand side restricted CSPs, which was left open in [10]. We finally show that if the so-called local-global conjecture for infinite constraint languages (over a finite domain) is false, then there is no dichotomy for the constraint satisfaction problem for infinite constraint languages.

Manuel Bodirsky, Martin Grohe
Quantified Constraint Satisfaction and the Polynomially Generated Powers Property
(Extended Abstract)

The quantified constraint satisfaction problem (QCSP) is the problem of deciding, given a relational structure and a sentence consisting of a quantifier prefix followed by a conjunction of atomic formulas, whether or not the sentence is true in the structure. The general intractability of the QCSP has led to the study of restricted versions of this problem. In this article, we study restricted versions of the QCSP that arise from prespecifying the relations that may occur via a set of relations called a constraint language. A basic tool used is a correspondence that associates an algebra to each constraint language; this algebra can be used to derive information on the behavior of the constraint language.

We identify a new combinatorial property on algebras, the polynomially generated powers (PGP) property, which we show is tightly connected to QCSP complexity. We also introduce another new property on algebras, switchability, which both implies the PGP property and implies positive complexity results on the QCSP. Our main result is a classification theorem on a class of three-element algebras: each algebra is either switchable and hence has the PGP, or provably lacks the PGP. The description of non-PGP algebras is remarkably simple and robust.

Hubie Chen

Words and Trees

When Does Partial Commutative Closure Preserve Regularity?

The closure of a regular language under commutation or partial commutation has been extensively studied [1,11,12,13], notably in connection with regular model checking [2,3,7] or in the study of Mazurkiewicz traces, one of the models of parallelism [14,15,16,22]. We refer the reader to the survey [10,9] or to the recent articles of Ochmański [17,18,19] for further references.

Antonio Cano Gómez, Giovanna Guaiana, Jean-Éric Pin
Weighted Logics for Nested Words and Algebraic Formal Power Series

Nested words, a model for recursive programs proposed by Alur and Madhusudan, have recently gained much interest. In this paper we introduce quantitative extensions and study nested word series which assign to nested words elements of a semiring. We show that regular nested word series coincide with series definable in weighted logics as introduced by Droste and Gastin. For this, we establish a connection between nested words and series-parallel-biposets. Applying our result, we obtain a characterization of algebraic formal power series in terms of weighted logics. This generalizes a result of Lautemann, Schwentick and Thérien on context-free languages.

Christian Mathissen
Tree Languages Defined in First-Order Logic with One Quantifier Alternation

We study tree languages that can be defined in Δ₂. These are tree languages definable by a first-order formula whose quantifier prefix is ∃*∀*, and simultaneously by a first-order formula whose quantifier prefix is ∀*∃*, both formulas over the signature with the descendant relation. We provide an effective characterization of tree languages definable in Δ₂. This characterization is in terms of algebraic equations. Over words, the class of word languages definable in Δ₂ forms a robust class, which was given an effective algebraic characterization by Pin and Weil [11].

Mikołaj Bojańczyk, Luc Segoufin
Duality and Equational Theory of Regular Languages

This paper presents a new result in the equational theory of regular languages, which emerged from lively discussions between the authors about Stone and Priestley duality. Let us call a class of regular languages closed under finite intersection and finite union a lattice of languages. The main results of this paper (Theorems 5.2 and 6.1) can be summarized in a nutshell as follows:

A set of regular languages is a lattice of languages if and only if it can be defined by a set of profinite equations.

The product on profinite words is the dual of the residuation operations on regular languages.

In their more general form, our equations are of the form u ≤ v, where u and v are profinite words. The first result not only subsumes Eilenberg-Reiterman's theory of varieties and their subsequent extensions, but it shows for instance that any class of regular languages defined by a fragment of logic closed under conjunctions and disjunctions (first order, monadic second order, temporal, etc.) admits an equational description. In particular, the celebrated McNaughton-Schützenberger characterisation of first-order definable languages by the aperiodicity condition x^ω = x^{ω+1}, far from being an isolated statement, now appears as an elegant instance of a very general result.

Mai Gehrke, Serge Grigorieff, Jean-Éric Pin

Nonstandard Models of Computation

Reversible Flowchart Languages and the Structured Reversible Program Theorem

Many irreversible computation models have reversible counterparts, but these are poorly understood at present. We introduce reversible flowcharts with an assertion operator and show that any reversible flowchart can be simulated by a structured reversible flowchart using only three control flow operators. Reversible flowcharts are r-Turing-complete, meaning that they can simulate reversible Turing machines without garbage data. We also demonstrate the injectivization of classical flowcharts into reversible flowcharts. The reversible flowchart computation model provides a theoretical justification for low-level machine code for reversible microprocessors as well as high-level block-structured reversible languages. We give examples for both such languages and illustrate them with a lossless encoder for permutations given by Dijkstra.

Tetsuo Yokoyama, Holger Bock Axelsen, Robert Glück
Attribute Grammars and Categorical Semantics

We give a new formulation of attribute grammars (AG for short) called monoidal AGs in traced symmetric monoidal categories. Monoidal AGs subsume existing domain-theoretic, graph-theoretic and relational formulations of AGs. Using a 2-categorical aspect of monoidal AGs, we also show that every monoidal AG is equivalent to a synthesised one when the underlying category is closed, and that there is a sound and complete translation from local dependency graphs to relational AGs.

Shin-ya Katsumata
A Domain Theoretic Model of Qubit Channels

We prove that the spectral order provides a domain theoretic model of qubit channels. Specifically, we show that the spectral order is the unique partial order on quantum states whose least element is the completely mixed state, which satisfies the mixing law and has the property that a qubit channel is unital iff it is Scott continuous and has a Scott closed set of fixed points. This result is used to show that the Holevo capacity of a unital qubit channel is determined by the largest value of its informatic derivative. In particular, these channels always have an informatic derivative that is necessarily not a classical derivative.

Keye Martin
Interacting Quantum Observables

We formalise the constructive content of an essential feature of quantum mechanics: the interaction of complementary quantum observables, and information flow mediated by them. Using a general categorical formulation, we show that pairs of mutually unbiased quantum observables form bialgebra-like structures. We also provide an abstract account on the quantum data encoded in complex phases, and prove a normal form theorem for it. Together these enable us to describe all observables of finite dimensional Hilbert space quantum mechanics. The resulting equations suffice to perform computations with elementary quantum gates, translate between distinct quantum computational models, establish the equivalence of entangled quantum states, and simulate quantum algorithms such as the quantum Fourier transform. All these computations moreover happen within an intuitive diagrammatic calculus.

Bob Coecke, Ross Duncan

Reasoning about Computation

Perpetuality for Full and Safe Composition (in a Constructive Setting)

We study perpetuality in calculi with explicit substitutions having full composition. A simple perpetual strategy is used to define strongly normalising terms inductively. This gives a simple argument to show preservation of β-strong normalisation as well as strong normalisation for typed terms. In particular, the strong normalisation proof is based on implicit substitution rather than explicit substitution, so that it turns out to be modular w.r.t. the well-known proofs for typed lambda-calculus. All the proofs we develop are constructive.

Delia Kesner
A System F with Call-by-Name Exceptions

We present an extension of System F with call-by-name exceptions. The type system is enriched with two syntactic constructs: a union type $A {\rlap{{*}}\cup} \{\varepsilon\}$ for programs of type A whose execution may raise the exception ε at top level, and a corruption type A(ε) for programs that may raise the exception ε in any evaluation context (not necessarily at top level). We present the syntax and reduction rules of the system, as well as its typing and subtyping rules. We then study its properties, such as confluence. Finally, we construct a realizability model using orthogonality techniques, from which we deduce that well-typed programs are weakly normalizing and that those that have the type of natural numbers really compute a natural number, without raising exceptions.

Sylvain Lebresne
Linear Logical Algorithms

Bottom-up logic programming can be used to declaratively specify many algorithms in a succinct and natural way, and McAllester and Ganzinger have shown that it is possible to define a cost semantics that enables reasoning about the running time of algorithms written as inference rules. Previous work with the programming language Lollimon demonstrates the expressive power of logic programming with linear logic in describing algorithms that have imperative elements or that must repeatedly make mutually exclusive choices. In this paper, we identify a bottom-up logic programming language based on linear logic that is amenable to efficient execution and describe a novel cost semantics that can be used for complexity analysis of algorithms expressed in linear logic.

Robert J. Simmons, Frank Pfenning
A Simple Model of Separation Logic for Higher-Order Store

Separation logic is a Hoare-style logic for reasoning about pointer-manipulating programs. Its core ideas have recently been extended from low-level to richer, high-level languages. In this paper we develop a new semantics of the logic for a programming language where code can be stored (i.e., with higher-order store). The main improvement on previous work is the simplicity of the model. As a consequence, several restrictions imposed by the semantics are removed, leading to a considerably more natural assertion language with a powerful specification logic.

Lars Birkedal, Bernhard Reus, Jan Schwinghammer, Hongseok Yang

Verification

Open Implication

We argue that the usual trace-based notions of implication and equivalence for linear temporal logics are too strong and should be complemented by the weaker notions of open implication and open equivalence. Although open implication is harder to compute, it can be used to advantage both in model checking and in synthesis. We study the difference between trace-based equivalence and open equivalence and describe an algorithm to compute open implication of Linear Temporal Logic formulas with an asymptotically optimal complexity. We also show how to compute open implication while avoiding Safra’s construction. We have implemented an open-implication solver for Generalized Reactivity(1) specifications. In a case study, we show that open equivalence can be used to justify the use of an alternative specification that allows us to synthesize much smaller systems in far less time.

Karin Greimel, Roderick Bloem, Barbara Jobstmann, Moshe Vardi
ATL* Satisfiability Is 2EXPTIME-Complete

The two central decision problems that arise during the design of safety-critical systems are the satisfiability and the model checking problem. While model checking can only be applied after implementing the system, satisfiability checking answers the question whether a system that satisfies the specification exists. Model checking is traditionally considered to be the simpler problem: for branching-time and fixed point logics such as CTL, CTL*, ATL, and the classical and alternating-time μ-calculus, the complexity of satisfiability checking is considerably higher than the model checking complexity. We show that ATL* is a notable exception to this rule: both ATL* model checking and ATL* satisfiability checking are 2EXPTIME-complete.

Sven Schewe
Visibly Pushdown Transducers

Visibly pushdown automata have been recently introduced by Alur and Madhusudan as a subclass of pushdown automata. This class enjoys nice properties such as closure under all Boolean operations and the decidability of language inclusion. Along the same line, we introduce here visibly pushdown transducers as a subclass of pushdown transducers. We study properties of those transducers and identify subclasses with useful properties like decidability of type checking as well as preservation of regularity of visibly pushdown languages.

Jean-François Raskin, Frédéric Servais
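
The "visibly" restriction underlying these transducers can be sketched in a few lines: the alphabet is partitioned into call, return and internal symbols, and the stack is pushed exactly on calls and popped exactly on returns. The checker below is our own toy illustration, not a construction from the paper:

```python
# A toy visibly pushdown acceptance check (our own example): the input
# alphabet is partitioned into calls, returns and internal symbols, and
# the stack is driven entirely by that partition -- push on a call, pop
# on a return -- which is the defining restriction of visibly pushdown
# automata.

CALLS, RETURNS, INTERNAL = {'('}, {')'}, {'a'}

def accepts(word):
    stack = []
    for c in word:
        if c in CALLS:
            stack.append(c)          # push exactly on calls
        elif c in RETURNS:
            if not stack:            # unmatched return
                return False
            stack.pop()              # pop exactly on returns
        elif c not in INTERNAL:
            return False             # symbol outside the alphabet
    return not stack                 # all calls matched

print(accepts('(a(a))'))  # True
print(accepts('(a))'))    # False
```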
The Non-deterministic Mostowski Hierarchy and Distance-Parity Automata

Given a Rabin tree-language and natural numbers i, j, the language is said to be i,j-feasible if it is accepted by a parity automaton using priorities {i, i+1, ..., j}. The i,j-feasibility induces a hierarchy over the Rabin tree-languages called the Mostowski hierarchy.

In this paper we prove that the problem of deciding if a language is i,j-feasible is reducible to the uniform universality problem for distance-parity automata. Distance-parity automata form a new model of automata extending both the nested distance desert automata introduced by Kirsten in his proof of decidability of the star-height problem, and parity automata over infinite trees. Distance-parity automata, instead of accepting a language, attach to each tree a cost in ω + 1. The uniform universality problem consists in determining if this cost function is bounded by a finite value.

Thomas Colcombet, Christof Löding
Analyzing Context-Free Grammars Using an Incremental SAT Solver

We consider bounded versions of undecidable problems about context-free languages which restrict the domain of words to some finite length: inclusion, intersection, universality, equivalence, and ambiguity. These are in (co-)NP and thus solvable by a reduction to the (un-)satisfiability problem for propositional logic. We present such encodings, fully utilizing the power of incremental SAT solvers, prove their correctness, and validate this approach with benchmarks.

Roland Axelsson, Keijo Heljanko, Martin Lange
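
The bounded problems above are decidable by brute force, which makes the idea easy to demonstrate even without a SAT solver. The sketch below — our own naive stand-in for the paper's SAT encodings, with toy grammars of our own — decides bounded equivalence of two context-free grammars in Chomsky normal form by running CYK on every word up to a length bound:

```python
# Bounded equivalence of two CNF context-free grammars, decided naively by
# CYK membership tests on all words up to a length bound. A stand-in for
# the paper's SAT encodings; the grammars below are our own toy examples.
from itertools import product

def cyk(grammar, start, word):
    # grammar maps a nonterminal to its productions: ('a',) for a terminal
    # rule A -> a, ('B', 'C') for a binary rule A -> B C.
    n = len(word)
    if n == 0:
        return False
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, c in enumerate(word):
        for a, prods in grammar.items():
            if (c,) in prods:
                table[i][i].add(a)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for a, prods in grammar.items():
                    for p in prods:
                        if (len(p) == 2 and p[0] in table[i][k]
                                and p[1] in table[k + 1][j]):
                            table[i][j].add(a)
    return start in table[0][n - 1]

def bounded_equiv(g1, g2, alphabet, bound):
    # Returns (True, None) or (False, first counterexample word).
    for n in range(1, bound + 1):
        for word in product(alphabet, repeat=n):
            if cyk(g1, 'S', word) != cyk(g2, 'S', word):
                return False, ''.join(word)
    return True, None

# g1 generates {a^n b^n : n >= 1}, g2 generates {(ab)^n : n >= 1};
# they agree on all words up to length 3 and first differ on 'aabb'.
g1 = {'S': [('A', 'B'), ('A', 'C')], 'C': [('S', 'B')],
      'A': [('a',)], 'B': [('b',)]}
g2 = {'S': [('A', 'B'), ('A', 'D')], 'D': [('B', 'S')],
      'A': [('a',)], 'B': [('b',)]}

print(bounded_equiv(g1, g2, 'ab', 4))  # (False, 'aabb')
```

A SAT encoding replaces this exponential enumeration by a propositional formula whose satisfying assignments are exactly the counterexample words, which is what makes the bounded problems practical.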

Track C: Security and Cryptography Foundations

Theory

Weak Pseudorandom Functions in Minicrypt

A family of functions is

weakly

pseudorandom if a random member of the family is indistinguishable from a uniform random function when queried on

random

inputs. We point out a subtle ambiguity in the definition of weak PRFs: there are natural weak PRFs whose security breaks down if the randomness used to sample the inputs is revealed. To capture this ambiguity we distinguish between

public-coin

and

secret-coin

weak PRFs.

We show that the existence of a secret-coin weak PRF which is not also a public-coin weak PRF implies the existence of two-pass key agreement (i.e. public-key encryption). So in Minicrypt, i.e. under the assumption that one-way functions exist but public-key cryptography does not, the notions of public-coin and secret-coin weak PRFs coincide.

Prior to this paper, all positive cryptographic statements known to hold exclusively in Minicrypt concerned the adaptive security of constructions using non-adaptively secure components. Weak PRFs give rise to a new set of statements having this property. As another example we consider the problem of range extension for weak PRFs. We show that in Minicrypt one can beat the best possible range-expansion factor (using a fixed number of distinct keys) for a very general class of constructions (in particular, this class contains all constructions that are known today).

Krzysztof Pietrzak, Johan Sjödin
On Black-Box Ring Extraction and Integer Factorization

The black-box extraction problem over rings has (at least) two important interpretations in cryptography: an efficient algorithm for this problem implies (i) the equivalence of computing discrete logarithms and solving the Diffie-Hellman problem, and (ii) the non-existence of secure ring-homomorphic encryption schemes.

In the special case of a finite field, Boneh/Lipton [1] and Maurer/Raub [2] show that there exist algorithms solving the black-box extraction problem in subexponential time. It is unknown whether there exist more efficient algorithms.

In this work we consider the black-box extraction problem over finite rings of characteristic n, where n has at least two different prime factors. We provide a polynomial-time reduction from factoring n to the black-box extraction problem for a large class of finite commutative unitary rings. Under the factoring assumption, this implies the non-existence of certain efficient generic reductions from computing discrete logarithms to the Diffie-Hellman problem on the one hand, and might be an indicator that secure ring-homomorphic encryption schemes exist on the other.

Kristina Altmann, Tibor Jager, Andy Rupp
Extractable Perfectly One-Way Functions

We propose a new cryptographic primitive, called extractable perfectly one-way (EPOW) functions. Like perfectly one-way (POW) functions, EPOW functions are probabilistic functions that reveal no information about their input, other than the ability to verify guesses. In addition, an EPOW function f guarantees that any party that manages to compute a value in the range of f “knows” a corresponding preimage.

We capture “knowledge of preimage” by way of algorithmic extraction. We formulate two main variants of extractability, namely non-interactive and interactive. The non-interactive variant (i.e., the variant that requires non-interactive extraction) can be regarded as a generalization from specific knowledge assumptions to a notion that is formulated in general computational terms. Indeed, we show how to realize it under several different assumptions. The interactive-extraction variant can be realized from certain POW functions.

We demonstrate the usefulness of the new primitive in two quite different settings. First, we show how EPOW functions can be used to capture, in the standard model, the “knowledge of queries” property that is so useful in the Random Oracle (RO) model. Specifically, we show how to convert a class of CCA2-secure encryption schemes in the RO model to concrete ones by simply replacing the Random Oracle with an EPOW function, without much change in the logic of the original proof. Second, we show how EPOW functions can be used to construct 3-round ZK arguments of knowledge and membership, using weaker knowledge assumptions than the corresponding results due to Hada and Tanaka (Crypto 1998) and Lepinski (M.S. Thesis, 2004). This also opens the door for constructing 3-round ZK arguments based on other assumptions.

Ran Canetti, Ronny Ramzi Dakdouk
Error-Tolerant Combiners for Oblivious Primitives

A robust combiner is a construction that combines several implementations of a primitive based on different assumptions, and yields an implementation guaranteed to be secure if at least some of the assumptions (i.e. sufficiently many, but not necessarily all) are valid.

In this paper we generalize this concept by introducing error-tolerant combiners, which in addition to protection against insecure implementations provide tolerance to functionality failures: an error-tolerant combiner guarantees a secure and correct implementation of the output primitive even if some of the candidates are insecure or faulty. We present simple constructions of error-tolerant robust combiners for oblivious linear function evaluation. The proposed combiners are also interesting in the regular (non-error-tolerant) case, as the construction is much more efficient than the combiners known for oblivious transfer.

Bartosz Przydatek, Jürg Wullschleger

Secure Computation

Asynchronous Multi-Party Computation with Quadratic Communication

We present an efficient protocol for secure multi-party computation in the asynchronous model with optimal resilience. For n parties, up to t < n/3 of them being corrupted, and security parameter κ, a circuit with c gates can be securely computed with communication complexity O(cn²κ) bits, which improves on the previously known solutions by a factor of Ω(n). The construction of the protocol follows the approach introduced by Franklin and Haber (Crypto ’93), based on a public-key encryption scheme with threshold decryption. To achieve the quadratic complexity, we employ several techniques, including circuit randomization due to Beaver (Crypto ’91), and an abstraction of certificates, which can be of independent interest.

Martin Hirt, Jesper Buus Nielsen, Bartosz Przydatek
Improved Garbled Circuit: Free XOR Gates and Applications

We present a new garbled circuit construction for two-party secure function evaluation (SFE). In our one-round protocol, XOR gates are evaluated “for free”, which results in the corresponding improvement over the best garbled circuit implementations (e.g. Fairplay [19]).

We build permutation networks [26] and Universal Circuits (UC) [25] almost exclusively of XOR gates; this results in a factor of up to 4 improvement (in both computation and communication) of their SFE. We also improve integer addition and equality testing by a factor of up to 2.

We rely on the Random Oracle (RO) assumption. Our constructions are proven secure in the semi-honest model.
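As a rough illustration (a sketch, not the paper's full construction): in the free-XOR technique, the garbler chooses all wire labels so that the two labels of any wire differ by one secret global offset R. The evaluator then obtains the output label of an XOR gate by simply XORing the two input labels it holds, with no garbled table and no cryptographic operations. The helper names below are hypothetical.

```python
import secrets

R = secrets.randbits(128)  # secret global offset, known only to the garbler

def garble_wire():
    """Return the pair of labels (for bit 0 and bit 1) of a fresh wire."""
    w0 = secrets.randbits(128)
    return w0, w0 ^ R  # the two labels of every wire differ by R

def eval_xor_gate(label_a, label_b):
    """Evaluator's work for an XOR gate: just XOR the incoming labels."""
    return label_a ^ label_b
```

If the garbler defines the output-wire labels as (a0 ⊕ b0, a0 ⊕ b0 ⊕ R), then for every combination of input bits the evaluator's XOR of the labels it holds is exactly the output label encoding the XOR of those bits.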

Vladimir Kolesnikov, Thomas Schneider
Improving the Round Complexity of VSS in Point-to-Point Networks

We revisit the following question: what is the optimal round complexity of verifiable secret sharing (VSS)? We focus here on the case of perfectly-secure VSS where the number of corrupted parties t satisfies t < n/3, with n being the total number of parties. Work of Gennaro et al. (STOC 2001) and Fitzi et al. (TCC 2006) shows that, assuming a broadcast channel, 3 rounds are necessary and sufficient for efficient VSS. The efficient 3-round protocol of Fitzi et al., however, treats the broadcast channel as being available “for free” and does not attempt to minimize its usage. This approach leads to relatively poor round complexity when protocols are compiled for a point-to-point network.

We show here a VSS protocol that is simultaneously optimal in terms of both the number of rounds and the number of invocations of broadcast. Our protocol also has a certain “2-level sharing” property that makes it useful for constructing protocols for general secure computation.

Jonathan Katz, Chiu-Yuen Koo, Ranjit Kumaresan
How to Protect Yourself without Perfect Shredding

Erasing old data and keys is an important tool in cryptographic protocol design. It is useful in many settings, including proactive security, adaptive security, forward security, and intrusion resilience. Protocols for all these settings typically assume the ability to perfectly erase information. Unfortunately, as amply demonstrated in the systems literature, perfect erasures are hard to implement in practice.

We propose a model of partial erasures where erasure instructions leave almost all of the data to be erased intact, thus giving the honest players only a limited capability for disposing of old data. Nonetheless, we provide a general compiler that transforms any secure protocol using perfect erasures into one that maintains the same security properties when only partial erasures are available. The key idea is a new redundant representation of secret data which can still be computed on, and yet is rendered useless when partially erased. We prove that any such compiler must incur a cost in additional storage, and that our compiler is near-optimal in terms of its storage overhead.

Ran Canetti, Dror Eiger, Shafi Goldwasser, Dah-Yoh Lim

Two-Party Protocols and Zero-Knowledge

Universally Composable Undeniable Signature

How to define the security of undeniable signature schemes is a challenging task. This paper presents two security definitions of undeniable signature schemes which are more useful or natural than the existing definition. It then proves their equivalence.

We first define the UC-security, where UC means universal composability. We next show that there exists a UC-secure undeniable signature scheme which does not satisfy the standard definition of security that has been believed to be adequate so far. More precisely, it does not satisfy the invisibility defined by [10]. We then show a more adequate definition of invisibility which captures a wider class of (naturally secure) undeniable signature schemes.

We finally prove that UC-security against non-adaptive adversaries is equivalent to this definition of invisibility together with strong unforgeability in the F_ZK-hybrid model, where F_ZK denotes the ideal zero-knowledge functionality. Our equivalence result implies that all the known provably secure undeniable signature schemes (including Chaum’s scheme) are UC-secure if the confirmation/disavowal protocols are both UC zero-knowledge.

Kaoru Kurosawa, Jun Furukawa
Interactive PCP

A central line of research in the area of PCPs is devoted to constructing short PCPs. In this paper, we show that if we allow an additional interactive verification phase, with very low communication complexity, then for some NP languages one can construct PCPs that are significantly shorter than the known PCPs (without the additional interactive phase) for these languages. We give many cryptographic applications and motivations for our results and for the study of the new model in general.

More specifically, we study a new model of proofs: interactive-PCP. Roughly speaking, an interactive-PCP (say, for the membership x ∈ L) is a proof-string that can be verified by reading only one of its bits, with the help of an interactive proof with very small communication complexity. We show that for membership in some NP languages L, there are interactive-PCPs that are significantly shorter than the known (non-interactive) PCPs for these languages.

Our main result is that for any constant-depth Boolean formula Φ(z_1,…,z_k) of size n (over the gates ∧, ∨, ⊕, ¬), a prover, Alice, can publish a proof-string for the satisfiability of Φ, where the size of the proof-string is poly(k). Later on, any user who wishes to verify the published proof-string needs to interact with Alice via a short interactive protocol of communication complexity poly(log n), while accessing the proof-string at a single location.

Note that the size of the published proof-string is poly(k), rather than poly(n); i.e., the size is polynomial in the size of the witness, rather than polynomial in the size of the instance. This compares to the known (non-interactive) PCPs, which are of size polynomial in the size of the instance. By reductions, this result extends to many other central NP languages (e.g., SAT, k-Clique, Vertex-Cover, etc.).

More generally, we show that the satisfiability of $\bigwedge_{i=1}^n[\Phi_i(z_1,\ldots,z_k)=0]$, where each Φ_i(z_1,…,z_k) is an arithmetic formula of size n (say, over GF(2)) that computes a polynomial of degree d, can be proved by a published proof-string of size poly(k, d). Later on, any user who wishes to verify the published proof-string needs to interact with the prover via an interactive protocol of communication complexity poly(d, log n), while accessing the proof-string at a single location.

We give many applications and motivations for our results and for the study of the notion of interactive PCP in general. In particular, we have the following applications:

Succinct zero-knowledge proofs: We show that any interactive PCP with certain properties can be converted into a zero-knowledge interactive proof. We use this to construct zero-knowledge proofs of communication complexity polynomial in the size of the witness, rather than polynomial in the size of the instance, for many NP languages.

Succinct probabilistically checkable arguments: In a subsequent paper, we study the new notion of probabilistically checkable argument, and show that any interactive PCP with certain properties translates into a probabilistically checkable argument [18]. We use this to construct probabilistically checkable arguments of size polynomial in the size of the witness, rather than polynomial in the size of the instance, for many NP languages.

Commit-Reveal schemes: We show that Alice can commit to a string w of k bits by a message of size poly(k), and later on, for any predicate Φ of size n whose satisfiability can be proved by an efficient enough interactive PCP with certain properties, Alice can prove the statement Φ(w) = 1 by a zero-knowledge interactive proof with communication complexity poly(log n). (Surprisingly, the communication complexity may be significantly smaller than k and n.)

Yael Tauman Kalai, Ran Raz
Constant-Round Concurrent Non-malleable Zero Knowledge in the Bare Public-Key Model

One of the central questions in cryptography is the design of round-efficient protocols that are secure under concurrent man-in-the-middle attacks. In this paper we present the first constant-round concurrent non-malleable zero-knowledge argument system for NP in the Bare Public-Key model [Canetti et al., STOC 2000], resolving one of the major open problems in this area. To achieve our result, we introduce and study the notion of non-malleable witness indistinguishability, which is of independent interest. Previous results either achieved relaxed forms of concurrency/security, needed stronger setup assumptions, or required a non-constant round complexity.

Rafail Ostrovsky, Giuseppe Persiano, Ivan Visconti

Encryption with Special Properties/Quantum Cryptography

Delegating Capabilities in Predicate Encryption Systems

In predicate encryption systems, given a capability, one can evaluate one or more predicates on the plaintext encrypted, while all other information about the plaintext remains hidden. We consider the role of delegation in such predicate encryption systems. Suppose Alice has a capability, and she wishes to delegate to Bob a more restrictive capability allowing the decryption of a subset of the information Alice can learn about the plaintext encrypted. We formally define delegation in predicate encryption systems, propose a new security definition for delegation, and give an efficient construction supporting conjunctive queries. The security of our construction can be reduced to the general 3-party Bilinear Diffie-Hellman assumption, and the Bilinear Decisional Diffie-Hellman assumption in composite order bilinear groups.

Elaine Shi, Brent Waters
Bounded Ciphertext Policy Attribute Based Encryption

In a ciphertext policy attribute based encryption system, a user’s private key is associated with a set of attributes (describing the user) and an encrypted ciphertext will specify an access policy over attributes. A user will be able to decrypt if and only if his attributes satisfy the ciphertext’s policy.

In this work, we present the first construction of a ciphertext-policy attribute based encryption scheme having a security proof based on a number theoretic assumption and supporting advanced access structures. Previous CP-ABE systems could either support only very limited access structures or had a proof of security only in the generic group model. Our construction can support access structures which can be represented by a bounded size access tree with threshold gates as its nodes. The bound on the size of the access trees is chosen at the time of the system setup. Our security proof is based on the standard Decisional Bilinear Diffie-Hellman assumption.

Vipul Goyal, Abhishek Jain, Omkant Pandey, Amit Sahai
Making Classical Honest Verifier Zero Knowledge Protocols Secure against Quantum Attacks

We show that any problem that has a classical zero-knowledge protocol against the honest verifier also has, under a reasonable condition, a classical zero-knowledge protocol which is secure against all classical and quantum polynomial time verifiers, even cheating ones. Here we refer to the generalized notion of zero-knowledge with classical and quantum auxiliary inputs respectively.

Our condition on the original protocol is that, for positive instances of the problem, the simulated message transcript should be quantum computationally indistinguishable from the actual message transcript. This is a natural strengthening of the notion of honest verifier computational zero-knowledge, and includes, in particular, the complexity class of honest verifier statistical zero-knowledge. Our result answers an open question of Watrous [Wat06], and generalizes classical results by Goldreich, Sahai and Vadhan [GSV98], and Vadhan [Vad06], who showed that honest verifier statistical, respectively computational, zero knowledge is equal to general statistical, respectively computational, zero knowledge.

Sean Hallgren, Alexandra Kolla, Pranab Sen, Shengyu Zhang
Composable Security in the Bounded-Quantum-Storage Model

We give a new, simulation-based definition for security in the bounded-quantum-storage model, and show that this definition allows for sequential composition of protocols. Damgård et al. (FOCS ’05, CRYPTO ’07) showed how to securely implement bit commitment and oblivious transfer in the bounded-quantum-storage model, where the adversary is only allowed to store a limited number of qubits. However, their security definitions only applied to the standalone setting, and it was not clear whether their protocols could be composed. Indeed, we show that these protocols are not composable in our framework without a small refinement. We then prove the security of their randomized oblivious transfer protocol with our refinement. Secure implementations of oblivious transfer and bit commitment follow easily by a (classical) reduction to randomized oblivious transfer.

Stephanie Wehner, Jürg Wullschleger

Various Types of Hashing

On the Strength of the Concatenated Hash Combiner When All the Hash Functions Are Weak

At Crypto 2004 Joux showed a novel attack against the concatenated hash combiner instantiated with Merkle-Damgård iterated hash functions. His method of producing multicollisions in the design was the first in a recent line of generic attacks against the Merkle-Damgård construction. In the same paper, Joux raised an open question concerning the strength of the concatenated hash combiner and asked whether his attack can be improved when the attacker can efficiently find collisions in both underlying compression functions. We solve this open problem by showing that even in the powerful adversarial scenario first introduced by Liskov (SAC 2006), in which the underlying compression functions can be fully inverted (which implies that collisions can be easily generated), collisions in the concatenated hash cannot be created using fewer than 2^{n/2} queries. We then expand this result to include the double-pipe hash construction of Lucks from Asiacrypt 2005. One of the intermediate results is of interest on its own and provides the first streamable construction provably indifferentiable from a random oracle in this model.
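For reference, the concatenated combiner studied here simply concatenates the two hash values. A minimal sketch, using two stdlib hash functions purely as stand-ins for the Merkle-Damgård functions of the abstract:

```python
import hashlib

def concat_combiner(msg: bytes) -> bytes:
    # C(m) = H1(m) || H2(m): a collision for C is necessarily a
    # simultaneous collision for both H1 and H2, which is exactly
    # what Joux's multicollision attack targets when both are
    # Merkle-Damgård constructions.
    h1 = hashlib.sha256(msg).digest()
    h2 = hashlib.sha1(msg).digest()
    return h1 + h2
```

The output length is the sum of the two digest lengths, which is why combiners with shorter outputs (as in Fischlin-Lehmann-Pietrzak below) are of interest.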

Jonathan J. Hoch, Adi Shamir
History-Independent Cuckoo Hashing

Cuckoo hashing is an efficient and practical dynamic dictionary. It provides expected amortized constant update time, worst case constant lookup time, and good memory utilization. Various experiments demonstrated that cuckoo hashing is highly suitable for modern computer architectures and distributed settings, and offers significant improvements compared to other schemes.

In this work we construct a practical history-independent dynamic dictionary based on cuckoo hashing. In a history-independent data structure, the memory representation at any point in time yields no information on the specific sequence of insertions and deletions that led to its current content, other than the content itself. Such a property is significant when preventing unintended leakage of information, and was also found useful in several algorithmic settings.

Our construction enjoys most of the attractive properties of cuckoo hashing. In particular, no dynamic memory allocation is required, updates are performed in expected amortized constant time, and membership queries are performed in worst case constant time. Moreover, with high probability, the lookup procedure queries only two memory entries which are independent and can be queried in parallel. The approach underlying our construction is to enforce a canonical memory representation on cuckoo hashing. That is, up to the initial randomness, each set of elements has a unique memory representation.
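To fix ideas, plain cuckoo hashing (without the history-independence layer added in this paper) keeps two tables and two hash functions; each element lives in one of its two possible slots, so lookups probe at most two locations, and insertions evict and relocate existing elements. A minimal sketch with hypothetical toy hash functions:

```python
class CuckooHash:
    """Toy cuckoo hash table: two tables, two hash functions,
    worst-case constant lookup (at most two probes)."""

    def __init__(self, size: int = 16):
        self.size = size
        self.tables = [[None] * size, [None] * size]

    def _h(self, i: int, key: int) -> int:
        # Toy hash functions for illustration only; a real implementation
        # draws them from a suitable random family.
        return (key * (31 if i == 0 else 37) + i) % self.size

    def lookup(self, key: int) -> bool:
        # At most two memory probes, one per table.
        return any(self.tables[i][self._h(i, key)] == key for i in (0, 1))

    def insert(self, key: int, max_kicks: int = 32) -> None:
        if self.lookup(key):
            return
        for _ in range(max_kicks):
            for i in (0, 1):
                slot = self._h(i, key)
                # Place the key, evicting whatever occupies the slot;
                # the evicted element is reinserted into the other table.
                key, self.tables[i][slot] = self.tables[i][slot], key
                if key is None:
                    return  # found an empty slot; insertion done
        raise RuntimeError("eviction cycle: rebuild with fresh hash functions")
```

The paper's canonical-representation idea constrains which of the two admissible slots each element ends up in, so that the final layout depends only on the stored set (and the initial randomness), not on the insertion order.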

Moni Naor, Gil Segev, Udi Wieder
Building a Collision-Resistant Compression Function from Non-compressing Primitives
(Extended Abstract)

We consider how to build an efficient compression function from a small number of random, non-compressing primitives. Our main goal is to achieve a level of collision resistance as close as possible to the optimal birthday bound. We present a 2n-to-n bit compression function based on three independent n-to-n bit random functions, each called only once. We show that if the three random functions are treated as black boxes, then finding collisions requires Θ(2^{n/2}/n^c) queries for c ≈ 1. This result remains valid if two of the three random functions are replaced by a fixed-key ideal cipher in Davies-Meyer mode (i.e., E_K(x) ⊕ x for a permutation E_K). We also give a heuristic, backed by experimental results, suggesting that the security loss is at most four bits for block sizes up to 256 bits. We believe this is the best result to date on the matter of building a collision-resistant compression function from non-compressing functions. It also relates to an open question from Black et al. (Eurocrypt ’05), who showed that compression functions that invoke a single non-compressing random function cannot suffice.
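The Davies-Meyer mode mentioned above turns a fixed-key permutation E_K into a one-way-style function by feeding the input forward: x ↦ E_K(x) ⊕ x. A minimal sketch, with a hypothetical toy Feistel network standing in for the ideal cipher:

```python
def toy_cipher(x: int, width: int = 32) -> int:
    """Hypothetical fixed-key toy block cipher (a small Feistel network).
    Any Feistel network is a permutation, whatever its round function."""
    half = width // 2
    mask = (1 << half) - 1
    l, r = x >> half, x & mask
    for k in range(8):  # 8 rounds with a fixed, built-in "key schedule"
        l, r = r, l ^ (((r * 2654435761 + k) >> 3) & mask)
    return (l << half) | r

def davies_meyer(x: int, width: int = 32) -> int:
    # E_K(x) XOR x: the feed-forward destroys the invertibility of E_K.
    return toy_cipher(x, width) ^ x
```

Even though toy_cipher is invertible, the XOR feed-forward is what lets a fixed-key (ideal) cipher stand in for a random function in the construction above.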

Thomas Shrimpton, Martijn Stam
Robust Multi-property Combiners for Hash Functions Revisited

A robust multi-property combiner for a set of security properties merges two hash functions such that the resulting function satisfies each of the properties which at least one of the two starting functions has. Fischlin and Lehmann (TCC 2008) recently constructed a combiner which simultaneously preserves collision-resistance, target collision-resistance, message authentication, pseudorandomness, and indifferentiability from a random oracle (IRO). Their combiner produces outputs of 5n bits, where n denotes the output length of the underlying hash functions.

In this paper we propose improved combiners with shorter outputs. By sacrificing the indifferentiability from random oracles we obtain a combiner which preserves all of the other aforementioned properties, but with output length only 2n. This matches a lower bound for black-box combiners for collision-resistance as the only property, showing that the other properties can be achieved without penalizing the length of the hash values. We then propose a combiner which also preserves the IRO property, slightly increasing the output length to 2n + ω(log n). Finally, we show that a twist on our combiners also makes them robust for one-wayness (but at the price of a fixed input length).

Marc Fischlin, Anja Lehmann, Krzysztof Pietrzak

Public-Key Cryptography/Authentication

Homomorphic Encryption with CCA Security

We address the problem of constructing public-key encryption schemes that meaningfully combine useful computability features with non-malleability. In particular, we investigate schemes in which anyone can change an encryption of an unknown message m into an encryption of T(m) (as a feature), for a specific set of allowed functions T, but the scheme is “non-malleable” with respect to all other operations. We formulate precise definitions that capture these intuitive requirements and also show relationships among our new definitions and other more standard ones (IND-CCA, gCCA, and RCCA). We further justify our definitions by showing their equivalence to a natural formulation of security in the Universally Composable framework. We also consider extending the definitions to features which combine multiple ciphertexts, and show that a natural definition is unattainable for a useful class of features. Finally, we describe a new family of encryption schemes that satisfy our definitions for a wide variety of allowed transformations T, and which are secure under the standard Decisional Diffie-Hellman (DDH) assumption.

Manoj Prabhakaran, Mike Rosulek
How to Encrypt with the LPN Problem

We present a probabilistic private-key encryption scheme named LPN-C whose security can be reduced to the hardness of the Learning from Parity with Noise (LPN) problem. The proposed protocol involves only basic operations in GF(2) and an error-correcting code. We show that it achieves indistinguishability under adaptive chosen plaintext attacks (IND-P2-C0). Appending a secure MAC renders the scheme secure under adaptive chosen ciphertext attacks. This scheme enriches the range of available cryptographic primitives whose security relies on the hardness of the LPN problem.

Henri Gilbert, Matthew J. B. Robshaw, Yannick Seurin
Could SFLASH be Repaired?

The SFLASH signature scheme stood for a decade as the most successful cryptosystem based on multivariate polynomials, before an efficient attack was finally found in 2007. In this paper, we review its recent cryptanalysis and we notice that its weaknesses can all be linked to the fact that the cryptosystem is built on the structure of a large field. As the attack demonstrates, this richer structure can be accessed by an attacker by using the specific symmetry of the core function being used. Then, we investigate the effect of restricting this large field to a purely linear subset and we find that the symmetries exploited by the attack are no longer present. At a purely defensive level, this defines a countermeasure which can be used at a moderate overhead. On the theoretical side, this informs us of limitations of the recent attack and raises interesting remarks about the design itself of multivariate schemes.

Jintai Ding, Vivien Dubois, Bo-Yin Yang, Owen Chia-Hsin Chen, Chen-Mou Cheng
Password Mistyping in Two-Factor-Authenticated Key Exchange

We study the problem of Key Exchange (KE), where authentication is two-factor and based on both electronically stored long keys and human-supplied credentials (passwords or biometrics). The latter credential has low entropy and may be adversarially mistyped. Our main contribution is the first formal treatment of mistyping in this setting.

Ensuring security in the presence of mistyping is subtle. We show mistyping-related limitations of previous KE definitions and constructions (of Boyen et al. [6,7,10] and Kolesnikov and Rackoff [16]).

We concentrate on the practical two-factor authenticated KE setting where servers exchange keys with clients, who use short passwords (memorized) and long cryptographic keys (stored on a card). Our work is thus a natural generalization of Halevi-Krawczyk [15] and Kolesnikov-Rackoff [16]. We discuss the challenges that arise due to mistyping. We propose the first KE definitions in this setting, and formally discuss their guarantees. We present efficient KE protocols and prove their security.

Vladimir Kolesnikov, Charles Rackoff
Affiliation-Hiding Envelope and Authentication Schemes with Efficient Support for Multiple Credentials

We present an efficient implementation of affiliation-hiding envelope and authentication schemes. An envelope scheme enables secure message transmission between two parties such that the message can be decrypted only by a receiver who holds a credential from (i.e. is affiliated with) an entity specified by the sender’s authorization policy. An envelope scheme is affiliation-hiding if it hides the receiver’s affiliation, and if the sender’s policy is revealed only to receivers who satisfy it. Similarly, an authentication scheme is affiliation-hiding if it reveals information about affiliations and the authentication policy of a participating party only to counterparties that satisfy this policy.

The novelty of our affiliation-hiding envelope scheme is that it remains practical in the multi-affiliation setting without relying on groups with bilinear maps. Namely, it requires O(n) modular exponentiations and communicates O(n) group elements, even if each party has n credentials and each party’s authentication policy specifies n admissible affiliations. Moreover, our affiliation-hiding envelope is chosen-ciphertext secure, which leads to a provably secure affiliation-hiding authentication scheme with the same O(n) efficiency in the multi-affiliation setting.

Stanisław Jarecki, Xiaomin Liu
Backmatter
Metadata
Title
Automata, Languages and Programming
Edited by
Luca Aceto
Ivan Damgård
Leslie Ann Goldberg
Magnús M. Halldórsson
Anna Ingólfsdóttir
Igor Walukiewicz
Copyright year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-70583-3
Print ISBN
978-3-540-70582-6
DOI
https://doi.org/10.1007/978-3-540-70583-3