
2012 | Book

Theory of Cryptography

9th Theory of Cryptography Conference, TCC 2012, Taormina, Sicily, Italy, March 19-21, 2012. Proceedings


About this Book

This book constitutes the thoroughly refereed proceedings of the 9th Theory of Cryptography Conference, TCC 2012, held in Taormina, Sicily, Italy, in March 2012. The 36 revised full papers presented were carefully reviewed and selected from 131 submissions. The papers are organized in topical sections on secure computation; (blind) signatures and threshold encryption; zero-knowledge and security models; leakage-resilience; hash functions; differential privacy; pseudorandomness; dedicated encryption; security amplification; resettable and parallel zero knowledge.

Table of Contents

Frontmatter

Secure Computation

Computing on Authenticated Data

In tandem with recent progress on computing on encrypted data via fully homomorphic encryption, we present a framework for computing on authenticated data via the notion of slightly homomorphic signatures, or P-homomorphic signatures. With such signatures, it is possible for a third party to derive a signature on the object m′ from a signature of m as long as P(m, m′) = 1 for some predicate P which captures the “authenticatable relationship” between m′ and m. Moreover, a derived signature on m′ reveals no extra information about the parent m.

Our definition is carefully formulated to provide one unified framework for a variety of distinct concepts in this area, including arithmetic, homomorphic, quotable, redactable, transitive signatures and more. It includes being unable to distinguish a derived signature from a fresh one even when given the original signature. The inability to link derived signatures to their original sources prevents some practical privacy and linking attacks, a requirement not satisfied by most prior works.

Under this strong definition, we then provide generic constructions for all univariate and closed predicates, and specific efficient constructions for a broad class of natural predicates such as quoting, subsets, weighted sums, averages, and Fourier transforms. To our knowledge, these are the first efficient constructions for these predicates (excluding subsets) that provably satisfy this strong security notion.
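
As a toy illustration of the predicate interface (a sketch only: P_quote and may_derive are hypothetical names, and no signature scheme from the paper is implemented here), the quoting predicate mentioned above is just a substring test, and derivation of a signature is permitted exactly when the predicate accepts:

def P_quote(m: str, m_prime: str) -> bool:
    # P(m, m') = 1 iff m' is a contiguous quote (substring) of m.
    return m_prime in m

def may_derive(P, m, m_prime) -> bool:
    # A third party may derive a signature on m' from a signature on m
    # exactly when P(m, m') = 1.
    return P(m, m_prime)

msg = "we present a framework for computing on authenticated data"
assert may_derive(P_quote, msg, "computing on authenticated data")
assert not may_derive(P_quote, msg, "computing on encrypted data")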

Jae Hyun Ahn, Dan Boneh, Jan Camenisch, Susan Hohenberger, abhi shelat, Brent Waters
Identifying Cheaters without an Honest Majority

Motivated by problems in secure multiparty computation (MPC), we study a natural extension of identifiable secret sharing to the case where an arbitrary number of players may be corrupted. An identifiable secret sharing scheme is a secret sharing scheme in which the reconstruction algorithm, after receiving shares from all players, either outputs the correct secret or publicly identifies the set of all cheaters (players who modified their original shares) with overwhelming success probability. This property is impossible to achieve without an honest majority. Instead, we settle for having the reconstruction algorithm inform each honest player of the correct set of cheaters. We show that this new notion of secret sharing can be unconditionally realized in the presence of arbitrarily many corrupted players. We demonstrate the usefulness of this primitive by presenting several applications to MPC without an honest majority.

Complete primitives for MPC.

We present the first unconditional construction of a complete primitive for fully secure function evaluation whose complexity does not grow with the complexity of the function being evaluated. This can be used for realizing fully secure MPC using small and stateless tamper-proof hardware. A previous completeness result of Gordon et al. (TCC 2010) required the use of cryptographic signatures.

Applications to partial fairness.

We eliminate the use of cryptography from the online phase of recent protocols for multiparty coin-flipping and MPC with partial fairness (Beimel et al., Crypto 2010 and Crypto 2011). This is a corollary of a more general technique for unconditionally upgrading security against fail-stop adversaries with preprocessing to security against malicious adversaries.

Finally, we complement our positive results by a negative result on identifying cheaters in unconditionally secure MPC. It is known that MPC without an honest majority can be realized unconditionally in the OT-hybrid model, provided that one settles for “security with abort” (Kilian, 1988). That is, the adversary can decide whether to abort the protocol after learning the outputs of corrupted players. We show that such protocols cannot be strengthened so that all honest players agree on the identity of a corrupted player in the event that the protocol aborts, even if a broadcast primitive can be used. This is contrasted with the computational setting, in which this stronger notion of security can be realized under standard cryptographic assumptions (Goldreich et al., 1987).

Yuval Ishai, Rafail Ostrovsky, Hakan Seyalioglu
On the Security of the “Free-XOR” Technique

Yao’s garbled-circuit approach enables constant-round secure two-party computation of any function. In Yao’s original construction, each gate in the circuit requires the parties to perform a constant number of encryptions/decryptions and to send/receive a constant number of ciphertexts. Kolesnikov and Schneider (ICALP 2008) proposed an improvement that allows XOR gates to be evaluated “for free,” incurring no cryptographic operations and zero communication. Their “free-XOR” technique has proven very popular, and has been shown to improve performance of garbled-circuit protocols by up to a factor of 4.

Kolesnikov and Schneider proved security of their approach in the random oracle model, and claimed that (an unspecified variant of) correlation robustness suffices; this claim has been repeated in subsequent work, and similar ideas have since been used in other contexts. We show that the free-XOR technique cannot be proven secure based on correlation robustness alone; somewhat surprisingly, some form of circular security is also required. We propose an appropriate definition of security for hash functions capturing the necessary requirements, and prove security of the free-XOR approach when instantiated with any hash function satisfying our definition.

Our results do not impact the security of the free-XOR technique in practice, or imply an error in the free-XOR work, but instead pin down the assumptions needed to prove security.
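
For readers unfamiliar with the technique being analyzed, here is a minimal sketch of the free-XOR idea itself (the Kolesnikov-Schneider construction, not this paper's proof): every wire carries labels of the form W and W ⊕ R for a single circuit-global random offset R, so the two active input labels of an XOR gate can simply be XORed to obtain the correct output label, with no ciphertexts and no communication. Label sizes and the use of os.urandom are illustrative.

import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

k = 16                                   # label length in bytes
R = os.urandom(k)                        # circuit-global offset shared by all wires
A0, B0 = os.urandom(k), os.urandom(k)    # zero-labels of input wires a and b
C0 = xor(A0, B0)                         # zero-label of the XOR gate's output wire

def label(zero_label: bytes, bit: int) -> bytes:
    # The label encoding bit v on a wire with zero-label W0 is W0 xor (v * R).
    return zero_label if bit == 0 else xor(zero_label, R)

# The evaluator, holding one label per input wire, obtains the correct output
# label for a xor b by XORing them -- no encryption, no communication.
for a in (0, 1):
    for b in (0, 1):
        assert xor(label(A0, a), label(B0, b)) == label(C0, a ^ b)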

Seung Geol Choi, Jonathan Katz, Ranjit Kumaresan, Hong-Sheng Zhou
Secure Two-Party Computation with Low Communication

We propose a 2-party UC-secure protocol that can compute any function securely. The protocol requires only two messages and communication that is poly-logarithmic in the size of the circuit description of the function, and the workload for one of the parties is also only poly-logarithmic in the size of the circuit. This implies, for instance, delegatable computation that requires no expensive off-line phase and remains secure even if the server learns whether the client accepts its results. To achieve this, we define two new notions of extractable hash functions, propose an instantiation based on a knowledge-of-exponent assumption in an RSA group, and build succinct zero-knowledge arguments in the CRS model.

Ivan Damgård, Sebastian Faust, Carmit Hazay

(Blind) Signatures and Threshold Encryption

Non-interactive CCA-Secure Threshold Cryptosystems with Adaptive Security: New Framework and Constructions

In threshold cryptography, private keys are divided into n shares, each one of which is given to a different server in order to avoid single points of failure. In the case of threshold public-key encryption, at least t ≤ n servers need to contribute to the decryption process. A threshold primitive is said to be robust if no coalition of t malicious servers can prevent the remaining honest servers from successfully completing private key operations. So far, most practical non-interactive threshold cryptosystems, where no interactive conversation is required among decryption servers, were only proved secure against static corruptions. In the adaptive corruption scenario (where the adversary can corrupt servers at any time, based on its complete view), all existing robust threshold encryption schemes that also resist chosen-ciphertext attacks (CCA) required, until recently, interaction in the decryption phase. A specific method (in composite order groups) for getting rid of interaction was recently suggested, leaving open the question of more generic frameworks and constructions with better security and better flexibility (i.e., compatibility with distributed key generation).

This paper describes a general construction of adaptively secure robust non-interactive threshold cryptosystems with chosen-ciphertext security. We define the notion of all-but-one perfectly sound threshold hash proof systems, which can be seen as (threshold) hash proof systems with publicly verifiable and simulation-sound proofs. We show that this notion generically implies threshold cryptosystems combining the aforementioned properties. Then, we provide efficient instantiations under well-studied assumptions in bilinear groups (e.g., in such groups of prime order). These instantiations have a tighter security proof and are indeed compatible with distributed key generation protocols.

Benoît Libert, Moti Yung
Round-Optimal Privacy-Preserving Protocols with Smooth Projective Hash Functions

In 2008, Groth and Sahai proposed a powerful suite of techniques for constructing non-interactive zero-knowledge proofs in bilinear groups. Their proof systems have found numerous applications, including group signature schemes, anonymous voting, and anonymous credentials. In this paper, we demonstrate that the notion of smooth projective hash functions can be useful to design round-optimal privacy-preserving interactive protocols. We show that this approach is suitable for designing schemes that rely on standard security assumptions in the standard model with a common-reference string and are more efficient than those obtained using the Groth-Sahai methodology. As an illustration of our design principle, we construct an efficient oblivious signature-based envelope scheme and a blind signature scheme, both round-optimal.

Olivier Blazy, David Pointcheval, Damien Vergnaud
On the Instantiability of Hash-and-Sign RSA Signatures

The hash-and-sign RSA signature is one of the most elegant and well-known signature schemes, extensively used in a wide variety of cryptographic applications. Unfortunately, the only existing analysis of this popular signature scheme is in the random oracle model, where the resulting idealized signature is known as the RSA Full Domain Hash signature scheme (RSA-FDH). In fact, prior work has shown several “uninstantiability” results for various abstractions of RSA-FDH, where the RSA function was replaced by a family of trapdoor random permutations, or the hash function instantiating the random oracle could not be keyed. These abstractions, however, do not allow the reduction and the hash function instantiation to use the algebraic properties of the RSA function, such as the multiplicative group structure of ℤ_n^*. In contrast, the multiplicative property of the RSA function is critically used in many standard model analyses of various RSA-based schemes.

Motivated by closing this gap, we consider the setting where the RSA function representation is generic (i.e., black-box) but multiplicative, whereas the hash function itself is in the standard model, and can be keyed and exploit the multiplicative properties of the RSA function. This setting abstracts all known techniques for designing provably secure RSA-based signatures in the standard model, and aims to address the main limitations of prior uninstantiability results. Unfortunately, we show that it is still impossible to reduce the security of RSA-FDH to any natural assumption even in our model. Thus, our result suggests that in order to prove the security of a given instantiation of RSA-FDH, one should use a non-black-box security proof, or use specific properties of the RSA group that are not captured by its multiplicative structure alone. We complement our negative result with a positive result, showing that RSA-FDH signatures can be proven secure under the standard RSA assumption, provided that the number of signing queries is a priori bounded.

Yevgeniy Dodis, Iftach Haitner, Aris Tentes
Beyond the Limitation of Prime-Order Bilinear Groups, and Round Optimal Blind Signatures

At Eurocrypt 2010, Freeman proposed a transformation from pairing-based schemes in composite-order bilinear groups to equivalent ones in prime-order bilinear groups. His transformation can be applied to pairing-based cryptosystems exploiting only one of two properties of composite-order bilinear groups: cancelling and projecting. At Asiacrypt 2010, Meiklejohn, Shacham, and Freeman showed that prime-order bilinear groups according to Freeman’s construction cannot have both properties simultaneously, except with negligible probability, and, as an instance of implausible conversion, proposed a (partially) blind signature scheme whose security proof exploits both the cancelling and projecting properties of composite-order bilinear groups.

In this paper, we invalidate their evidence by presenting a security proof of the prime-order version of their blind signature scheme. Our security proof follows a different strategy and exploits only the projecting property. Instead of the cancelling property, a new property on prime-order bilinear groups that we call translating plays an important role in the security proof; its existence was not known in composite-order bilinear groups. With this proof, we obtain a 2-move (i.e., round optimal) (partially) blind signature scheme (without random oracles) based on the decisional linear assumption in the common reference string model, which is of independent interest.

As the second contribution of this paper, we construct prime-order bilinear groups that possess both the cancelling and projecting properties at the same time by considering more general base groups. That is, we take a rank-n ℤ_p-submodule of ℤ_p^{n²}, instead of ℤ_p^n, to be a base group G, and consider the projections into its rank-1 submodules. We show that the subgroup decision assumption on this base group G holds in the generic bilinear group model for n = 2, and provide an efficient membership-checking algorithm for G, which was trivial in the previous setting. Consequently, it is still open whether there exists a cryptosystem on composite-order bilinear groups that cannot be constructed on prime-order bilinear groups.

Jae Hong Seo, Jung Hee Cheon

Zero-Knowledge and Security Models

On Efficient Zero-Knowledge PCPs

We revisit the question of Zero-Knowledge PCPs, studied by Kilian, Petrank, and Tardos (STOC ’97). A ZK-PCP is defined similarly to a standard PCP, except that the view of any (possibly malicious) verifier can be efficiently simulated up to a small statistical distance. Kilian et al. obtained a ZK-PCP for NEXP in which the proof oracle is in EXP^NP. They also obtained a ZK-PCP for NP in which the proof oracle is computable in polynomial time, but this ZK-PCP is only zero-knowledge against bounded-query verifiers who make at most an a priori fixed polynomial number of queries. The existence of ZK-PCPs for NP with efficient oracles and arbitrary polynomial-time malicious verifiers was left open. This question is motivated by the recent line of work on cryptography using tamper-proof hardware tokens: an efficient ZK-PCP (for any language) is equivalent to a statistical zero-knowledge proof using only a single stateless token sent to the verifier.

We obtain the following results regarding efficient ZK-PCPs:

Negative Result on Efficient ZK-PCPs.

Assuming that the polynomial-time hierarchy does not collapse, we settle the above question in the negative for ZK-PCPs in which the verifier is nonadaptive (i.e., the queries depend only on the input and secret randomness but not on the PCP answers).

Simplifying Bounded-Query ZK-PCPs.

The bounded-query zero-knowledge PCP of Kilian et al. starts from a weakly-sound bounded-query ZK-PCP of Dwork et al. (CRYPTO ’92) and amplifies its soundness by introducing and constructing a new primitive called a locking scheme, an unconditional oracle-based analogue of a commitment scheme. We simplify the ZK-PCP of Kilian et al. by presenting an elementary new construction of locking schemes. Our locking scheme is purely combinatorial.

Black-Box Sublinear ZK Arguments via ZK-PCPs.

Kilian used PCPs to construct sublinear-communication zero-knowledge arguments for NP which make a non-black-box use of collision-resistant hash functions (STOC ’92). We show that ZK-PCPs can be used to get black-box variants of this result with improved round complexity, as well as an unconditional zero-knowledge variant of Micali’s non-interactive CS Proofs (FOCS ’94) in the Random Oracle Model.

Yuval Ishai, Mohammad Mahmoody, Amit Sahai
Progression-Free Sets and Sublinear Pairing-Based Non-Interactive Zero-Knowledge Arguments

In 2010, Groth constructed the only previously known sublinear-communication NIZK circuit satisfiability argument in the common reference string model. We optimize Groth’s argument by, in particular, reducing both the CRS length and the prover’s computational complexity from quadratic to quasilinear in the circuit size. We also use a (presumably) weaker security assumption, and have tighter security reductions. Our main contribution is to show that the complexity of Groth’s basic arguments is dominated by the quadratic number of monomials in certain polynomials. We collapse the number of monomials to quasilinear by using a recent construction of progression-free sets.

Helger Lipmaa
Point Obfuscation and 3-Round Zero-Knowledge

We construct 3-round proofs and arguments with negligible soundness error satisfying two relaxed notions of zero-knowledge (ZK): weak ZK and witness hiding (WH). At the heart of our constructions lie new techniques based on point obfuscation with auxiliary input (AIPO).

It is known that such protocols cannot be proven secure using black-box reductions (or simulation). Our constructions circumvent these lower bounds, utilizing AIPO (and extensions) as the “non-black-box component” in the security reduction.

Nir Bitansky, Omer Paneth
Confidentiality and Integrity: A Constructive Perspective

Traditional security definitions in the context of secure communication specify properties of cryptographic schemes. For symmetric encryption schemes, these properties are intended to capture the protection of the confidentiality or the integrity of the encrypted messages. A vast variety of such definitions has emerged in the literature and, despite the efforts of previous work, the relations and interplay of many of these notions (which are a priori not composable) are unexplored. Also, the exact guarantees implied by the properties are hard to understand.

In constructive cryptography, notions such as confidentiality and integrity appear as attributes of channels, i.e., the communication itself. This makes the guarantees achieved by cryptographic schemes explicit, and leads to security definitions that are composable.

In this work, we follow the approach of constructive cryptography, questioning the justification for the existing (game-based) security definitions. In particular, we compare these definitions with related constructive notions and find that some are too weak, such as INT-PTXT, or artificially strong, such as INT-CTXT. Others appear unsuitable for symmetric encryption, such as IND-CCA.

Ueli Maurer, Andreas Rüedlinger, Björn Tackmann

Leakage-Resilience

Leakage-Resilient Circuits without Computational Assumptions

Physical cryptographic devices inadvertently leak information through numerous side-channels. Such leakage is exploited by so-called side-channel attacks, which often allow for a complete security breach. A recent trend in cryptography is to propose formal models that incorporate leakage, and to construct schemes that are provably secure within such models.

We design a general compiler that transforms any cryptographic scheme, e.g., a block cipher, into a functionally equivalent scheme which is resilient to any continual leakage, provided that the following three requirements are satisfied: (i) in each observation the leakage is bounded, (ii) different parts of the computation leak independently, and (iii) the randomness that is used for certain operations comes from a simple (non-uniform) distribution. In contrast to earlier work on leakage-resilient circuit compilers, which relied on computational assumptions, our results are purely information-theoretic. In particular, we do not make use of public-key encryption, which was required in all previous works.

Stefan Dziembowski, Sebastian Faust
A Parallel Repetition Theorem for Leakage Resilience

A leakage resilient encryption scheme is one which stays secure even against an attacker that obtains a bounded amount of side information on the secret key (say λ bits of “leakage”). A fundamental question is whether parallel repetition amplifies leakage resilience. Namely, if we secret share our message, and encrypt the shares under two independent keys, will the resulting scheme be resilient to 2λ bits of leakage?

Surprisingly, Lewko and Waters (FOCS 2010) showed that this is false. They gave an example of a public-key encryption scheme that is (CPA) resilient to λ bits of leakage, and yet its 2-repetition is not resilient to even (1 + ε)λ bits of leakage. In their counter-example, the repeated schemes share secretly generated public parameters.

In this work, we show that under a reasonable strengthening of the definition of leakage resilience (one that captures known proof techniques for achieving non-trivial leakage resilience), parallel repetition does in fact amplify leakage (for CPA security). In particular, if fresh public parameters are used for each copy of the Lewko-Waters scheme, then their negative result does not hold, and leakage is amplified by parallel repetition.

More generally, given t schemes that are resilient to λ_1, …, λ_t bits of leakage, respectively, we show that their direct product is resilient to ∑(λ_i − 1) bits. We present our amplification theorem in a general framework that applies to other cryptographic primitives as well.
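
A sketch of the direct-product construction discussed above, under stated stand-ins: the message is additively (XOR) secret-shared and each share is encrypted under an independently keyed copy of the underlying scheme; a one-time pad plays the role of the leakage-resilient encryption here, since only the wiring of the repetition is being illustrated.

import os, functools

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def enc(key, m):           # stand-in for the underlying (leakage-resilient) encryption
    return xor(key, m)      # one-time pad, illustrative only

def dec(key, c):
    return xor(key, c)

def direct_product_enc(keys, m):
    # Secret-share m as m = s_1 xor ... xor s_t, encrypt share i under keys[i].
    t = len(keys)
    shares = [os.urandom(len(m)) for _ in range(t - 1)]
    shares.append(functools.reduce(xor, shares, m))
    return [enc(k, s) for k, s in zip(keys, shares)]

def direct_product_dec(keys, cts):
    shares = [dec(k, c) for k, c in zip(keys, cts)]
    return functools.reduce(xor, shares)

keys = [os.urandom(16) for _ in range(3)]
msg = b"sixteen byte msg"
assert direct_product_dec(keys, direct_product_enc(keys, msg)) == msg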

Zvika Brakerski, Yael Tauman Kalai
Leakage-Tolerant Interactive Protocols

We put forth a framework for expressing security requirements from interactive protocols in the presence of arbitrary leakage. The framework allows capturing different levels of leakage-tolerance of protocols, namely the preservation (or degradation) of security, under coordinated attacks that include various forms of leakage from the secret states of participating components. The framework extends the universally composable (UC) security framework. We also prove a variant of the UC theorem that enables modular design and analysis of protocols even in face of general, non-modular leakage.

We then construct leakage-tolerant protocols for basic tasks, such as secure message transmission, message authentication, commitment, oblivious transfer and zero-knowledge. A central component in several of our constructions is the observation that resilience to adaptive party corruptions (in some strong sense) implies leakage-tolerance in an essentially optimal way.

Nir Bitansky, Ran Canetti, Shai Halevi

Hash Functions

On the Public Indifferentiability and Correlation Intractability of the 6-Round Feistel Construction

We show that the Feistel construction with six rounds and random round functions is publicly indifferentiable from a random invertible permutation (a result that is not known to hold for full indifferentiability). Public indifferentiability (pub-indifferentiability for short) is a variant of indifferentiability introduced by Yoneyama et al. [29] and Dodis et al. [12] where the simulator knows all queries made by the distinguisher to the primitive it tries to simulate, and is useful to argue the security of cryptosystems where all the queries to the ideal primitive are public (as, e.g., in many digital signature schemes). To prove the result, we introduce a new and simpler variant of indifferentiability, that we call sequential indifferentiability (seq-indifferentiability for short), and show that this notion is in fact equivalent to pub-indifferentiability for stateless ideal primitives. We then prove that the 6-round Feistel construction is seq-indifferentiable from a random invertible permutation. We also observe that sequential indifferentiability implies correlation intractability, so that the Feistel construction with six rounds and random round functions yields a correlation intractable invertible permutation, a notion we define analogously to correlation intractable functions introduced by Canetti et al. [4].
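
A sketch of the object under study, with illustrative parameters: a 6-round Feistel network whose round functions are independent random functions, modeled here as lazily sampled random tables. The indifferentiability analysis itself is of course not reproduced.

import os

N = 8        # half-block size in bytes
ROUNDS = 6

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class LazyRandomFunction:
    # A random round function on N-byte strings, sampled lazily on first use.
    def __init__(self):
        self.table = {}
    def __call__(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(N)
        return self.table[x]

F = [LazyRandomFunction() for _ in range(ROUNDS)]

def feistel(left: bytes, right: bytes):
    for i in range(ROUNDS):
        left, right = right, xor(left, F[i](right))
    return left, right

def feistel_inverse(left: bytes, right: bytes):
    for i in reversed(range(ROUNDS)):
        left, right = xor(right, F[i](left)), left
    return left, right

L, R = os.urandom(N), os.urandom(N)
assert feistel_inverse(*feistel(L, R)) == (L, R)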

Avradip Mandal, Jacques Patarin, Yannick Seurin
Collisions Are Not Incidental: A Compression Function Exploiting Discrete Geometry

We present a new construction of a compression function H: {0,1}^{3n} → {0,1}^{2n} that uses two parallel calls to an ideal primitive (an ideal blockcipher or a public random function) from 2n to n bits. This is similar to the well-known MDC-2 or the recently proposed MJH by Lee and Stam (CT-RSA ’11). However, unlike these constructions, we show already in the compression function that an adversary limited (asymptotically in n) to O(2^{2n(1−δ)/3}) queries (for any δ > 0) has disappearing advantage to find collisions. A key component of our construction is the use of the Szemerédi–Trotter theorem over finite fields to bound the number of full compression function evaluations an adversary can make, in terms of the number of queries to the underlying primitives. Moreover, for the security proof we rely on a new abstraction that refines and strengthens existing techniques. We believe that this framework elucidates existing proofs and we consider it of independent interest.

Dimitar Jetchev, Onur Özen, Martijn Stam

Differential Privacy

Lower Bounds in Differential Privacy

This paper is about private data analysis, in which a trusted curator holding a confidential database responds to real vector-valued queries. A common approach to ensuring privacy for the database elements is to add appropriately generated random noise to the answers, releasing only these noisy responses. A line of study initiated in [7] examines the amount of distortion needed to prevent privacy violations of various kinds. The results in the literature vary according to several parameters, including the size of the database, the size of the universe from which data elements are drawn, the “amount” of privacy desired, and, for the purposes of the current work, the arity of the query. In this paper we sharpen and unify these bounds. Our foremost result combines the techniques of Hardt and Talwar [11] and McGregor et al. [13] to obtain linear lower bounds on distortion when providing differential privacy for a (contrived) class of low-sensitivity queries. (A query has low sensitivity if the data of a single individual has small effect on the answer.) Several structural results follow as immediate corollaries:

We separate so-called counting queries from arbitrary low-sensitivity queries, proving the latter requires more noise, or distortion, than does the former;

We separate (ε, 0)-differential privacy from its well-studied relaxation (ε, δ)-differential privacy, even when δ ∈ 2^{−o(n)} is negligible in the size n of the database, proving the latter requires less distortion than the former;

We demonstrate that (ε, δ)-differential privacy is much weaker than (ε, 0)-differential privacy in terms of mutual information of the transcript of the mechanism with the database, even when δ ∈ 2^{−o(n)} is negligible in the size n of the database.

We also simplify the lower bounds on noise for counting queries in [11] and make them unconditional. Further, we use a characterization of (ε, δ)-differential privacy from [13] to obtain lower bounds on the distortion needed to ensure (ε, δ)-differential privacy for ε, δ > 0. We next revisit the LP decoding argument of [10] and combine it with a recent result of Rudelson [15] to improve on a result of Kasiviswanathan et al. [12] on noise lower bounds for privately releasing ℓ-way marginals.
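
For context only (this is textbook background, not a contribution of the paper): the Laplace mechanism is the standard way of adding "appropriately generated random noise" to a sensitivity-1 counting query, with ε the privacy parameter; the database contents and ε below are illustrative.

import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponentials with rate 1/scale is Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def release_count(db, predicate, eps: float) -> float:
    # A counting query has sensitivity 1, so Laplace noise of scale 1/eps gives
    # (eps, 0)-differential privacy for this single query.
    true_count = sum(1 for row in db if predicate(row))
    return true_count + laplace_noise(1.0 / eps)

db = [{"age": a} for a in (23, 35, 41, 62, 29)]
noisy_answer = release_count(db, lambda row: row["age"] >= 30, eps=0.5)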

Anindya De
Iterative Constructions and Private Data Release

In this paper we study the problem of approximately releasing the cut function of a graph while preserving differential privacy, and give new algorithms (and new analyses of existing algorithms) in both the interactive and non-interactive settings.

Our algorithms in the interactive setting are achieved by revisiting the problem of releasing differentially private, approximate answers to a large number of queries on a database. We show that several algorithms for this problem fall into the same basic framework, and are based on the existence of objects which we call iterative database construction (IDC) algorithms. We give a new generic framework in which new (efficient) IDC algorithms give rise to new (efficient) interactive private query release mechanisms. Our modular analysis simplifies and tightens the analysis of previous algorithms, leading to improved bounds. We then give a new IDC algorithm (and therefore a new private, interactive query release mechanism) based on the Frieze/Kannan low-rank matrix decomposition. This new release mechanism gives an improvement on prior work in a range of parameters where the size of the database is comparable to the size of the data universe (such as releasing all cut queries on dense graphs).

We also give a non-interactive algorithm for efficiently releasing private synthetic data for graph cuts with error O(|V|^{1.5}). Our algorithm is based on randomized response and a non-private implementation of the SDP-based, constant-factor approximation algorithm for cut-norm due to Alon and Naor. Finally, we give a reduction based on the IDC framework showing that an efficient, private algorithm for computing sufficiently accurate rank-1 matrix approximations would lead to an improved efficient algorithm for releasing private synthetic data for graph cuts. We leave finding such an algorithm as our main open problem.

Anupam Gupta, Aaron Roth, Jonathan Ullman

Pseudorandomness I

From Non-adaptive to Adaptive Pseudorandom Functions

Unlike the standard notion of pseudorandom functions (PRF), a non-adaptive PRF is only required to be indistinguishable from random in the eyes of a non-adaptive distinguisher (i.e., one that prepares its oracle calls in advance). A recent line of research has studied the possibility of a direct construction of adaptive PRFs from non-adaptive ones, where direct means that the constructed adaptive PRF uses only a few (ideally, a constant number of) calls to the underlying non-adaptive PRF. Unfortunately, this study has so far only yielded negative results, showing that “natural” such constructions are unlikely to exist (e.g., Myers [EUROCRYPT ’04], Pietrzak [CRYPTO ’05, EUROCRYPT ’06]).

We give an affirmative answer to the above question, presenting a direct construction of adaptive PRFs from non-adaptive ones. Our construction is extremely simple, a composition of the non-adaptive PRF with an appropriate pairwise independent hash function.
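
A sketch of that composition under illustrative stand-ins: h(x) = a·x + b over a prime field is a pairwise-independent hash family, and HMAC-SHA256 stands in for the underlying non-adaptive PRF F_k; the adaptive PRF is F'_{k,h}(x) = F_k(h(x)). Parameter sizes are arbitrary and the paper's security proof is not reflected here.

import hashlib, hmac, secrets

P = (1 << 127) - 1                          # a Mersenne prime; hash domain/range is Z_P

def sample_pairwise_hash():
    # h(x) = a*x + b mod P is a pairwise-independent family on Z_P.
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    return lambda x: (a * x + b) % P

def nonadaptive_prf(key: bytes, x: int) -> bytes:
    # Stand-in for the underlying non-adaptive PRF F_k (HMAC-SHA256 is illustrative).
    return hmac.new(key, x.to_bytes(16, "big"), hashlib.sha256).digest()

def make_adaptive_prf(key: bytes):
    h = sample_pairwise_hash()
    # The composition: F'_{k,h}(x) = F_k(h(x)).
    return lambda x: nonadaptive_prf(key, h(x))

F = make_adaptive_prf(secrets.token_bytes(32))
y = F(42)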

Itay Berman, Iftach Haitner
Hardness Preserving Constructions of Pseudorandom Functions

We show a hardness-preserving construction of a PRF from any length-doubling PRG which improves upon known constructions whenever we can put a non-trivial upper bound q on the number of queries to the PRF. Our construction requires only O(log q) invocations of the underlying PRG with each query. In comparison, the number of invocations by the best previous hardness-preserving construction (GGM using Levin’s trick) is logarithmic in the hardness of the PRG.

For example, starting from an exponentially secure PRG {0,1}^n ↦ {0,1}^{2n}, we get a PRF which is exponentially secure if queried at most q = exp(√n) times and where each invocation of the PRF requires Θ(√n) queries to the underlying PRG. This is much less than the Θ(n) required by known constructions.
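
For comparison, a sketch of the classic GGM tree construction that the abstract measures against (not the paper's new construction): a length-doubling PRG, here a SHA-256 stand-in, is invoked once per input bit.

import hashlib

N = 16   # seed length in bytes; the PRG maps N bytes to 2N bytes

def prg(seed: bytes) -> tuple:
    # Stand-in length-doubling PRG G(s) = (G_0(s), G_1(s)); SHA-256 here is
    # only an illustrative placeholder for a real PRG.
    return (hashlib.sha256(b"\x00" + seed).digest()[:N],
            hashlib.sha256(b"\x01" + seed).digest()[:N])

def ggm_prf(key: bytes, input_bits: str) -> bytes:
    # Walk a binary tree: at each input bit, keep the left or right PRG half.
    s = key
    for bit in input_bits:
        left, right = prg(s)
        s = left if bit == "0" else right
    return s

value = ggm_prf(b"\x00" * N, "10110010")   # one PRG call per input bit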

Abhishek Jain, Krzysztof Pietrzak, Aris Tentes
Computational Extractors and Pseudorandomness

Computational extractors are efficient procedures that map a source of sufficiently high min-entropy to an output that is computationally indistinguishable from uniform. By relaxing the statistical closeness property of traditional randomness extractors one hopes to improve the efficiency and entropy parameters of these extractors, while keeping their utility for cryptographic applications. In this work we investigate computational extractors and consider questions of existence and inherent complexity from the theoretical and practical angles, with particular focus on the relationship to pseudorandomness.

An obvious way to build a computational extractor is via the “extract-then-prg” method: apply a statistical extractor and use its output to seed a PRG. This approach carries with it the entropy cost inherent to implementing statistical extractors, namely, the source entropy needs to be substantially higher than the PRG’s seed length. It also requires a PRG and thus relies on one-way functions.
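
A sketch of the extract-then-prg method just described, with clearly marked stand-ins: a universal hash (a·x + b mod p, truncated) serves as the seeded statistical extractor, and SHAKE-256 plays the role of the PRG; both choices and all sizes are illustrative, not taken from the paper.

import hashlib, secrets

P = (1 << 521) - 1            # prime field for the universal hash (seeded extractor)
OUT_BITS = 256                # extractor output length; source entropy must exceed this

def extract(source: int, seed: tuple) -> bytes:
    # Universal hash h_{a,b}(x) = (a*x + b) mod P, truncated: a statistical extractor
    # by the leftover hash lemma when the source has enough min-entropy.
    a, b = seed
    v = (a * source + b) % P
    return (v >> (521 - OUT_BITS)).to_bytes(OUT_BITS // 8, "big")

def prg(seed: bytes, out_len: int) -> bytes:
    # Stand-in PRG: SHAKE-256 used as an extendable-output function.
    return hashlib.shake_256(seed).digest(out_len)

def computational_extractor(source: int, seed: tuple, out_len: int) -> bytes:
    # extract-then-prg: statistically extract a short seed, then stretch it with a PRG.
    return prg(extract(source, seed), out_len)

seed = (secrets.randbelow(P), secrets.randbelow(P))
weak_source = secrets.randbits(2048)       # plays the role of a high-min-entropy source
output = computational_extractor(weak_source, seed, 64)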

We study the necessity of one-way functions in the construction of computational extractors and determine matching lower and upper bounds on the “black-box efficiency” of generic constructions of computational extractors that use a one-way permutation as an oracle. Under this efficiency measure we prove a direct correspondence between the complexity of computational extractors and that of pseudorandom generators, showing the optimality of the extract-then-prg approach for generic constructions of computational extractors and confirming the intuition that to build a computational extractor via a PRG one needs to make up for the entropy gap intrinsic to statistical extractors.

On the other hand, we show that with stronger cryptographic primitives one can have more entropy- and computationally-efficient constructions. In particular, we show a construction of a very practical computational extractor from any weak PRF without resorting to statistical extractors.

Dana Dachman-Soled, Rosario Gennaro, Hugo Krawczyk, Tal Malkin

Dedicated Encryption I

Functional Re-encryption and Collusion-Resistant Obfuscation

We introduce a natural cryptographic functionality called functional re-encryption. Informally, this functionality, for a public-key encryption scheme and a function F with n possible outputs, transforms (“re-encrypts”) an encryption of a message m under an “input public key” pk into an encryption of the same message m under one of the n “output public keys”, namely the public key indexed by F(m).

In many settings, one might require that the program implementing the functional re-encryption functionality should reveal nothing about either the input secret key sk or the function F. As an example, consider a user Alice who wants her email server to share her incoming mail with one of a set of n recipients according to an access policy specified by her function F, but who wants to keep this access policy private from the server. Furthermore, in this setting, we would ideally obtain an even stronger guarantee: that this information remains hidden even when some of the n recipients may be corrupted.

To formalize these issues, we introduce the notion of collusion-resistant obfuscation and define this notion with respect to average-case secure obfuscation (Hohenberger et al., TCC 2007). We then provide a construction of a functional re-encryption scheme for any function F with a polynomial-size domain and show that it satisfies this notion of collusion-resistant obfuscation. We note that collusion-resistant security can be viewed as a special case of dependent auxiliary input security (a setting where virtually no positive results are known), and this notion may be of independent interest.

Finally, we show that collusion-resistant obfuscation of functional re-encryption for a function F gives a way to obfuscate F in the sense of Barak et al. (CRYPTO 2001), indicating that this task is impossible for arbitrary (polynomial-time computable) functions F.

Nishanth Chandran, Melissa Chase, Vinod Vaikuntanathan
How to Delegate and Verify in Public: Verifiable Computation from Attribute-Based Encryption

The wide variety of small, computationally weak devices and the growing number of computationally intensive tasks make it appealing to delegate computation to data centers. However, outsourcing computation is useful only when the returned result can be trusted, which makes verifiable computation (VC) a must for such scenarios.

In this work we extend the definition of verifiable computation in two important directions: public delegation and public verifiability, which have important applications in many practical delegation scenarios. Yet, existing VC constructions based on standard cryptographic assumptions fail to achieve these properties.

As the primary contribution of our work, we establish an important (and somewhat surprising) connection between verifiable computation and attribute-based encryption (ABE), a primitive that has been widely studied. Namely, we show how to construct a VC scheme with public delegation and public verifiability from any ABE scheme. The VC scheme verifies any function in the class of functions covered by the permissible ABE policies (currently Boolean formulas). This scheme enjoys a very efficient verification algorithm that depends only on the output size. Efficient delegation, however, requires the ABE encryption algorithm to be cheaper than the original function computation. Strengthening this connection, we show a construction of a multi-function verifiable computation scheme from an ABE scheme with outsourced decryption, a primitive defined recently by Green, Hohenberger and Waters (USENIX Security 2011). A multi-function VC scheme allows the verifiable evaluation of multiple functions on the same preprocessed input.

In the other direction, we also explore the construction of an ABE scheme from verifiable computation protocols.

Bryan Parno, Mariana Raykova, Vinod Vaikuntanathan
On Black-Box Reductions between Predicate Encryption Schemes

We prove that there is no black-box construction of a threshold predicate encryption system from identity-based encryption. Our result signifies nontrivial progress in a line of research suggested by Boneh, Sahai and Waters (TCC ’11), where they proposed a study of the relative power of predicate encryption for different functionalities. We rely on and extend the techniques of Boneh et al. (FOCS ’08), where they give a black-box separation of identity-based encryption from trapdoor permutations.

In contrast to previous results where only trapdoor permutations were used, our starting point is a more powerful primitive, namely identity-based encryption, which allows planting exponentially many trapdoors in the public-key by only planting a single master public-key of an identity-based encryption system. This makes the combinatorial aspect of our black-box separation result much more challenging. Our work gives the first impossibility result on black-box constructions of any cryptographic primitive from identity-based encryption.

We also study the more general question of constructing predicate encryption for a complexity class 𝔽, given predicate encryption for a (potentially less powerful) complexity class 𝔾. Toward that end, we rule out certain natural black-box constructions of predicate encryption for NC¹ from predicate encryption for AC⁰, assuming a widely believed conjecture in communication complexity.

Vipul Goyal, Virendra Kumar, Satya Lokam, Mohammad Mahmoody

Security Amplification

Lossy Functions Do Not Amplify Well

We consider the problem of amplifying the “lossiness” of functions. We say that an oracle circuit C*: {0,1}^m → {0,1}^* amplifies relative lossiness from ℓ/n to L/m if for every function f: {0,1}^n → {0,1}^n it holds that

1. If f is injective then so is C^f.

2. If f has image size of at most 2^{n − ℓ}, then C^f has image size at most 2^{m − L}.

The question is whether such C* exists for L/m ≫ ℓ/n. This problem arises naturally in the context of cryptographic “lossy functions,” where the relative lossiness is the key parameter.

We show that for every circuit C* that makes at most t queries to f, the relative lossiness of C^f is at most L/m ≤ ℓ/n + O(log t)/n. In particular, no black-box method making a polynomial t = poly(n) number of queries can amplify relative lossiness by more than an O(log n)/n additive term. We show that this is tight by giving a simple construction (cascading with some randomization) that achieves such amplification.

Krzysztof Pietrzak, Alon Rosen, Gil Segev
Counterexamples to Hardness Amplification beyond Negligible

If we have a problem that is mildly hard, can we create a problem that is significantly harder? A natural approach to hardness amplification is the “direct product”; instead of asking an attacker to solve a single instance of a problem, we ask the attacker to solve several independently generated ones. Interestingly, proving that the direct product amplifies hardness is often highly non-trivial, and in some cases may be false. For example, it is known that the direct product (i.e. “parallel repetition”) of general interactive games may not amplify hardness at all. On the other hand, positive results show that the direct product does amplify hardness for many basic primitives such as one-way functions, weakly-verifiable puzzles, and signatures.
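
A sketch of the direct-product operation itself (the generic transformation being discussed, not the paper's counterexamples): k independently generated instances of a candidate one-way function, here a SHA-256 stand-in, must all be inverted at once.

import hashlib, os

def f(x: bytes) -> bytes:
    # Illustrative stand-in for a (weak) one-way function.
    return hashlib.sha256(x).digest()

def direct_product(xs):
    # k-wise direct product: f^k(x_1, ..., x_k) = (f(x_1), ..., f(x_k)).
    return tuple(f(x) for x in xs)

k = 4
challenge = direct_product([os.urandom(32) for _ in range(k)])
# An attacker "breaks" the direct product only by inverting every coordinate at once.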

Even when positive direct product theorems are shown to hold for some primitive, the parameters are surprisingly weaker than what we may have expected. For example, if we start with a weak one-way function that no poly-time attacker can break with probability > 1/2, then the direct product provably amplifies hardness to some negligible probability. Naturally, we would expect that we can amplify hardness exponentially, all the way to 2^{−n} probability, or at least to some fixed/known negligible such as n^{−log n} in the security parameter n, just by taking sufficiently many instances of the weak primitive. Although it is known that such parameters cannot be proven via black-box reductions, they may seem like reasonable conjectures, and, to the best of our knowledge, are widely believed to hold. In fact, a conjecture along these lines was introduced in a survey of Goldreich, Nisan and Wigderson (ECCC ’95). In this work, we show that such conjectures are false by providing simple but surprising counterexamples. In particular, we construct weakly secure signatures and one-way functions, for which standard hardness amplification results are known to hold, but for which hardness does not amplify beyond just negligible. That is, for any negligible function ε(n), we instantiate these primitives so that the direct product can always be broken with probability ε(n), no matter how many copies we take.

Yevgeniy Dodis, Abhishek Jain, Tal Moran, Daniel Wichs

Resettable and Parallel Zero Knowledge

Resettable Statistical Zero Knowledge

Two central notions of Zero Knowledge that provide strong, yet seemingly incomparable security guarantees against malicious verifiers are those of Statistical Zero Knowledge and Resettable Zero Knowledge. The current state of the art includes several feasibility and impossibility results regarding these two notions separately. However, the question of achieving Resettable Statistical Zero Knowledge (i.e., Resettable Zero Knowledge and Statistical Zero Knowledge simultaneously) for non-trivial languages remained open. In this paper, we show:

Resettable Statistical Zero Knowledge with unbounded prover: under the assumption that sub-exponentially hard one-way functions exist, rSZK = SZK. In other words, every language that admits a Statistical Zero-Knowledge (SZK) proof system also admits a Resettable Statistical Zero-Knowledge (rSZK) proof system. (Further, the result can be re-stated unconditionally provided there exists a sub-exponentially hard language in SZK.) Moreover, under the assumption that (standard) one-way functions exist, all languages L such that the complement of L is random self-reducible admit an rSZK proof system; in other words: co-RSR ⊆ rSZK.

Resettable Statistical Zero Knowledge with efficient prover: efficient-prover Resettable Statistical Zero-Knowledge proof systems exist for all languages that admit hash proof systems (e.g., QNR, QR, DDH, DCR). Furthermore, for these languages we construct a two-round resettable statistical witness-indistinguishable argument system.

The round complexity of our proof systems is Õ(log κ), where κ is the security parameter, and all our simulators are black-box.

Sanjam Garg, Rafail Ostrovsky, Ivan Visconti, Akshay Wadia
The Knowledge Tightness of Parallel Zero-Knowledge

We investigate the concrete security of black-box zero-knowledge protocols when composed in parallel. As our main result, we give essentially tight upper and lower bounds (up to logarithmic factors in the security parameter) on the following measure of security (closely related to knowledge tightness): the number of queries made by black-box simulators when zero-knowledge protocols are composed in parallel. As a function of the number of parallel sessions, k, and the round complexity of the protocol, m, the bound is roughly k^{1/m}.

We also construct a modular procedure to amplify simulator-query lower bounds (as above) to generic lower bounds in the black-box concurrent zero-knowledge setting. As a demonstration of our techniques, we give a self-contained proof of the o(log n / log log n) lower bound for the round complexity of black-box concurrent zero-knowledge protocols, first shown by Canetti, Kilian, Petrank and Rosen (STOC 2002). Additionally, we give a new lower bound regarding constant-round black-box concurrent zero-knowledge protocols: the running time of the black-box simulator must be at least n^{Ω(log n)}.

Kai-Min Chung, Rafael Pass, Wei-Lung Dustin Tseng
Simultaneously Resettable Arguments of Knowledge

In this work, we study simultaneously resettable arguments of knowledge. As our main result, we show a construction of a constant-round simultaneously resettable witness-indistinguishable argument of knowledge (simres WI AoK, for short) for any NP language. We also show two applications of simres WI AoK: the first constant-round simultaneously resettable zero-knowledge argument of knowledge in the Bare Public-Key Model; and the first simultaneously resettable identification scheme which follows the knowledge extraction paradigm.

Chongwon Cho, Rafail Ostrovsky, Alessandra Scafuro, Ivan Visconti

Dedicated Encryption II

Subspace LWE

The (decisional) learning with errors problem (LWE) asks to distinguish “noisy” inner products of a secret vector with random vectors from uniform. The learning parities with noise problem (LPN) is the special case where the elements of the vectors are bits. In recent years, the LWE and LPN problems have found many applications in cryptography.

In this paper we introduce a (seemingly) much stronger adaptive assumption, called “subspace LWE” (SLWE), where the adversary can learn the inner product of the secret and random vectors after they were projected into an adaptively and adversarially chosen subspace. We prove that, surprisingly, the SLWE problem mapping into subspaces of dimension d is almost as hard as LWE using secrets of length d (the other direction is trivial).

This result immediately implies that several existing cryptosystems whose security is based on the hardness of the LWE/LPN problems are provably secure in a much stronger sense than anticipated. As an illustrative example we show that the standard way of using LPN for symmetric CPA secure encryption is even secure against a very powerful class of related key attacks.
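
A sketch following the informal description above, written in the binary LPN special case for readability: a standard sample is (a, ⟨a, s⟩ + e mod 2), and in the subspace variant the inner product is taken after both s and a are multiplied by an adversarially chosen matrix. The noise rate, dimensions, and exact shape of the projection are illustrative; the paper's formal SLWE definition may differ in its details.

import random

n, tau = 16, 0.125                                   # dimension and noise rate (illustrative)
s = [random.randrange(2) for _ in range(n)]           # the secret

def inner(u, v):
    return sum(x * y for x, y in zip(u, v)) % 2

def mat_vec(M, v):
    # Matrix-vector product over GF(2).
    return [inner(row, v) for row in M]

def lpn_sample():
    # Standard LPN sample: (a, <a, s> + e mod 2) with Bernoulli(tau) noise e.
    a = [random.randrange(2) for _ in range(n)]
    e = 1 if random.random() < tau else 0
    return a, (inner(a, s) + e) % 2

def subspace_lpn_sample(A):
    # Subspace variant (informal): the inner product is taken after the secret
    # and the random vector are both mapped through the adversary's matrix A.
    a = [random.randrange(2) for _ in range(n)]
    e = 1 if random.random() < tau else 0
    return a, (inner(mat_vec(A, s), mat_vec(A, a)) + e) % 2

A = [[random.randrange(2) for _ in range(n)] for _ in range(n)]   # adversary's choice
sample = subspace_lpn_sample(A)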

Krzysztof Pietrzak
Bounded-Collusion IBE from Key Homomorphism

In this work, we show how to construct IBE schemes that are secure against a bounded number of collusions, starting with underlying PKE schemes which possess linear homomorphisms over their keys. In particular, this enables us to exhibit a new (bounded-collusion) IBE construction based on the quadratic residuosity assumption, without any need to assume the existence of random oracles. The new IBE’s public parameters are of size O(tλ log I), where I is the total number of identities which can be supported by the system, t is the number of collusions which the system is secure against, and λ is a security parameter. While the number of collusions is bounded, we note that an exponential number of total identities can be supported.

More generally, we give a transformation that takes any PKE satisfying Linear Key Homomorphism, Identity Map Compatibility, and the Linear Hash Proof Property and translates it into an IBE secure against bounded collusions. We demonstrate that these properties are more general than our quadratic residuosity-based scheme by showing how a simple PKE based on the DDH assumption also satisfies these properties.

Shafi Goldwasser, Allison Lewko, David A. Wilson
A Unified Approach to Deterministic Encryption: New Constructions and a Connection to Computational Entropy

We propose a general construction of deterministic encryption schemes that unifies prior work and gives novel schemes. Specifically, its instantiations provide:

A construction from any trapdoor function that has sufficiently many hardcore bits.

A construction that provides “bounded” multi-message security from lossy trapdoor functions.

The security proofs for these schemes are enabled by three tools that are of broader interest:

A weaker and more precise sufficient condition for semantic security on a high-entropy message distribution. Namely, we show that to establish semantic security on a distribution M of messages, it suffices to establish indistinguishability for all conditional distributions M|E, where E is an event of probability at least 1/4. (Prior work required indistinguishability on all distributions of a given entropy.)

A result about computational entropy of conditional distributions. Namely, we show that conditioning on an event E of probability p reduces the quality of computational entropy by a factor of p and its quantity by log₂ 1/p.

A generalization of leftover hash lemma to correlated distributions.

We also extend our result about computational entropy to the average case, which is useful in reasoning about leakage-resilient cryptography: leaking λ bits of information reduces the quality of computational entropy by a factor of 2^λ and its quantity by λ.

Benjamin Fuller, Adam O’Neill, Leonid Reyzin

Pseudorandomness II

A Dichotomy for Local Small-Bias Generators

We consider pseudorandom generators in which each output bit depends on a constant number of input bits. Such generators have an appealingly simple structure: they can be described by a sparse input-output dependency graph G and a small predicate P that is applied at each output. Following the works of Cryan and Miltersen (MFCS ’01) and Mossel et al. (FOCS ’03), we ask: which graphs and predicates yield “small-bias” generators (that fool linear distinguishers)?
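
A sketch of the kind of generator being studied: a dependency graph G assigns d inputs to each output, and one fixed small predicate P is applied at every output. The particular predicate and sizes below are only illustrative and carry no claim about the paper's degenerate/non-degenerate classification.

import random

n, m, d = 64, 256, 5                       # inputs, outputs, locality (illustrative)

def predicate(bits):
    # Example d-local predicate P(x1,...,x5) = x1 xor x2 xor x3 xor (x4 and x5).
    return bits[0] ^ bits[1] ^ bits[2] ^ (bits[3] & bits[4])

# The dependency graph G: output j reads the d input positions listed in graph[j].
graph = [random.sample(range(n), d) for _ in range(m)]

def local_generator(x):
    # Apply the same small predicate at every output, on the inputs G assigns to it.
    return [predicate([x[i] for i in S]) for S in graph]

x = [random.randrange(2) for _ in range(n)]
y = local_generator(x)                     # small-biased for suitable G and P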

We identify an explicit class of degenerate predicates and prove the following. For most graphs, all non-degenerate predicates yield small-bias generators f: {0,1}^n → {0,1}^m with output length m = n^{1+ε} for some constant ε > 0. Conversely, we show that for most graphs, degenerate predicates are not secure against linear distinguishers, even when the output length is linear, m = n + Ω(n). Taken together, these results expose a dichotomy: every predicate is either very hard or very easy, in the sense that it either yields a small-bias generator for almost all graphs or fails to do so for almost all graphs.

As a secondary contribution, we give evidence in support of the view that small bias is a good measure of pseudorandomness for local functions with large stretch. We do so by demonstrating that resilience to linear distinguishers implies resilience to a larger class of attacks for such functions.

Benny Applebaum, Andrej Bogdanov, Alon Rosen
Randomness Condensers for Efficiently Samplable, Seed-Dependent Sources

We initiate a study of randomness condensers for sources that are efficiently samplable but may depend on the seed of the condenser. That is, we seek functions Cond: {0,1}^n × {0,1}^d → {0,1}^m such that if we choose a random seed S ← {0,1}^d, and a source X = A(S) is generated by a randomized circuit A of size t such that X has min-entropy at least k given S, then Cond(X; S) should have min-entropy at least some k′ given S. The distinction from the standard notion of randomness condensers is that the source X may be correlated with the seed S (but is restricted to be efficiently samplable). Randomness extractors of this type (corresponding to the special case where k′ = m) have been implicitly studied in the past (by Trevisan and Vadhan, FOCS ’00).
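
A sketch of this syntax with hash-based stand-ins (SHA-256 for a collision-resistant hash keyed by the seed, and a placeholder sampler standing in for the adversarial circuit A); sizes are illustrative, and the min-entropy guarantee mentioned in the comments is the paper's claim for sufficiently strong hash functions, not a property of this toy code.

import hashlib, os

SEED_LEN, OUT_LEN = 16, 32                 # d and m in bytes (illustrative)

def cond(x: bytes, s: bytes) -> bytes:
    # Cond(X; S): a collision-resistant hash keyed with the seed S.
    # SHA-256 is an illustrative stand-in for a "sufficiently strong" hash.
    return hashlib.sha256(s + x).digest()[:OUT_LEN]

def adversarial_sampler(s: bytes) -> bytes:
    # A(S): an efficiently samplable source that may depend on the seed S
    # while still having min-entropy given S (placeholder sampler).
    return hashlib.sha256(s).digest() + os.urandom(32)

S = os.urandom(SEED_LEN)
X = adversarial_sampler(S)
Y = cond(X, S)                             # should retain min-entropy m - O(log t)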

We show that:

Unlike extractors, we can have randomness condensers for samplable, seed-dependent sources whose computational complexity is smaller than the size t of the adversarial sampling algorithm A. Indeed, we show that sufficiently strong collision-resistant hash functions are seed-dependent condensers that produce outputs with min-entropy k′ = m − O(log t), i.e., logarithmic entropy deficiency.

Randomness condensers suffice for key derivation in many cryptographic applications: when an adversary has negligible success probability (or negligible “squared advantage” [3]) for a uniformly random key, we can use instead a key generated by a condenser whose output has logarithmic entropy deficiency.

Randomness condensers for seed-dependent samplable sources that are robust to side information generated by the sampling algorithm imply soundness of the Fiat-Shamir Heuristic when applied to any constant-round, public-coin interactive proof system.

Yevgeniy Dodis, Thomas Ristenpart, Salil Vadhan
Uniqueness Is a Different Story: Impossibility of Verifiable Random Functions from Trapdoor Permutations

Verifiable random functions (VRFs) are pseudorandom functions with the additional property that the owner of the seed SK can issue publicly-verifiable proofs for the statements “f(SK, x) = y”, for any input x. Moreover, the output of VRFs is guaranteed to be unique, which means that y = f(SK, x) is the only image of x that can be proven correct. Despite their popularity, constructing VRFs seems to be a challenging task and only a few constructions based on specific number-theoretic problems are known. Basing a scheme on general assumptions is still an open problem. Towards this direction, Brakerski et al. showed that verifiable random functions cannot be constructed from one-way permutations in a black-box way.

In this paper we continue the study of the relationship between VRFs and well-established cryptographic primitives. Our main result is a separation of VRFs and adaptive trapdoor permutations (ATDPs) in a black-box manner. This result sheds light on the nature of VRFs and is interesting for at least three reasons:

First, the separation result of Brakerski et al. gives the impression that VRFs belong to the “public-key world”, and thus their relationship with other public-key primitives is interesting. Our result, however, shows that VRFs are strictly stronger and cannot be constructed (in a black-box way) from primitives like, e.g., public-key encryption (even CCA-secure), oblivious transfer, and key-agreement.

Second, the notion of VRFs is closely related to weak verifiable random functions and verifiable pseudorandom generators, which are both implied by TDPs. Dwork and Naor (FOCS 2000) asked whether there are transformations between the verifiable primitives similar to the case of “regular” PRFs and PRGs. Here, we give a negative answer to this problem, showing that the case of verifiable random functions is essentially different.

Finally, our result also shows that unique signatures cannot be instantiated from ATDPs. While it is well known that standard signature schemes are equivalent to OWFs, we essentially show that the uniqueness property is crucial to change the relations between primitives.

Dario Fiore, Dominique Schröder
Backmatter
Metadata
Title
Theory of Cryptography
Edited by
Ronald Cramer
Copyright Year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-28914-9
Print ISBN
978-3-642-28913-2
DOI
https://doi.org/10.1007/978-3-642-28914-9