
About this book

This book constitutes the thoroughly refereed proceedings of the 10th Theory of Cryptography Conference, TCC 2013, held in Tokyo, Japan, in March 2013. The 36 revised full papers presented were carefully reviewed and selected from 98 submissions. The papers cover topics such as study of known paradigms, approaches, and techniques, directed towards their better understanding and utilization; discovery of new paradigms, approaches and techniques that overcome limitations of the existing ones; formulation and treatment of new cryptographic problems; study of notions of security and relations among them; modeling and analysis of cryptographic algorithms; and study of the complexity assumptions used in cryptography.

Table of Contents

Frontmatter

Overcoming Weak Expectations

Recently, there has been renewed interest in basing cryptographic primitives on weak secrets, where the only information about the secret is some non-trivial amount of (min-)entropy. From a formal point of view, such results require upper bounding the expectation of some function f(X), where X is the weak source in question. We show an elementary inequality which essentially upper bounds such a ‘weak expectation’ by two terms, the first of which is independent of f, while the second depends only on the ‘variance’ of f under the uniform distribution. Quite remarkably, as relatively simple corollaries of this elementary inequality, we obtain some ‘unexpected’ results, in several cases noticeably simplifying or improving prior techniques for the same problem.

Examples include non-malleable extractors, leakage-resilient symmetric encryption, alternative to the dense model theorem, seed-dependent condensers and improved entropy loss for the leftover hash lemma.
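To make the shape of the bound concrete, here is one elementary inequality of this kind, derivable via Cauchy–Schwarz (a sketch consistent with the abstract; the paper’s exact statements and constants may differ). If X is distributed over {0,1}^n with min-entropy at least n − d, and U is uniform over {0,1}^n, then for any real-valued f:

$$\mathbb{E}[f(X)] \;\le\; \mathbb{E}[f(U)] \,+\, \sqrt{(2^{d}-1)\cdot \mathrm{Var}[f(U)]},$$

so an entropy deficiency of d bits costs only a factor of roughly 2^{d/2} on the standard-deviation term, independently of the expectation of f.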

Yevgeniy Dodis, Yu Yu

A Counterexample to the Chain Rule for Conditional HILL Entropy

And What Deniable Encryption Has to Do with It

A chain rule for an entropy notion H(·) states that the entropy H(X) of a variable X decreases by at most ℓ if conditioned on an ℓ-bit string A, i.e., H(X|A) ≥ H(X) − ℓ. More generally, it satisfies a chain rule for conditional entropy if H(X|Y,A) ≥ H(X|Y) − ℓ.

All natural information-theoretic entropy notions we are aware of (like Shannon or min-entropy) satisfy some kind of chain rule for conditional entropy. Moreover, many computational entropy notions (like Yao entropy, unpredictability entropy, and several variants of HILL entropy) satisfy the chain rule for conditional entropy, though here not only the quantity decreases by ℓ, but also the quality of the entropy decreases exponentially in ℓ. However, for the standard notion of conditional HILL entropy (the computational equivalent of min-entropy) the existence of such a rule was unknown so far.

In this paper, we prove that for conditional HILL entropy no meaningful chain rule exists, assuming the existence of one-way permutations: there exist distributions X, Y, A, where A is a distribution over a single bit, but H^HILL(X|Y) ≫ H^HILL(X|Y,A), even if we simultaneously allow for a massive degradation in the quality of the entropy.

The idea underlying our construction is based on a surprising connection between the chain rule for HILL entropy and deniable encryption.

Stephan Krenn, Krzysztof Pietrzak, Akshay Wadia

Hardness Preserving Reductions via Cuckoo Hashing

A common method for increasing the usability and uplifting the security of pseudorandom function families (PRFs) is to “hash” the inputs into a smaller domain before applying the PRF. This approach, known as “Levin’s trick”, is used to achieve “PRF domain extension” (using a short, e.g., fixed, input-length PRF to get a variable-length PRF), and more recently to transform non-adaptive PRFs into adaptive ones. Such reductions, however, are vulnerable to a “birthday attack”: after $\sqrt{|\mathcal{U}|}$ queries to the resulting PRF, where $\mathcal{U}$ is the range of the hash function, a collision (i.e., two distinct inputs with the same hash value) occurs with high probability. As a consequence, the resulting PRF is insecure against an attacker making this number of queries.

In this work we show how to go beyond the birthday attack barrier by replacing the above simple hashing approach with a variant of cuckoo hashing — a hashing paradigm typically used for resolving hash collisions in a table, by using two hash functions and two tables and cleverly assigning each element to one of the two tables. We use this approach to obtain: (i) a domain extension method that requires just two calls to the original PRF, can withstand as many queries as the original domain size, and has a distinguishing probability that is exponentially small in the amount of non-cryptographic work; and (ii) a security-preserving reduction from non-adaptive to adaptive PRFs.
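To illustrate the two-call, two-hash-function shape of such a domain extension (a hedged sketch only: HMAC-SHA256 stands in for the underlying PRF and for the hash functions, and neither the parameters nor the combination step is taken from the paper):

    # Sketch of a cuckoo-style PRF domain extension: hash the long input
    # with two independent hash functions, make one PRF call per "table",
    # and XOR the results. Illustrative only; not the paper's construction.
    import hashlib
    import hmac

    def _prf(key: bytes, data: bytes) -> bytes:
        return hmac.new(key, data, hashlib.sha256).digest()

    def extended_prf(k: bytes, h1_key: bytes, h2_key: bytes, x: bytes) -> bytes:
        u1 = _prf(h1_key, x)[:16]          # position in "table 1"
        u2 = _prf(h2_key, x)[:16]          # position in "table 2"
        y1 = _prf(k, b"\x01" + u1)         # first call to the original PRF
        y2 = _prf(k, b"\x02" + u2)         # second call to the original PRF
        return bytes(a ^ b for a, b in zip(y1, y2))

    print(extended_prf(b"k" * 16, b"h1" * 8, b"h2" * 8, b"a very long input").hex())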

Itay Berman, Iftach Haitner, Ilan Komargodski, Moni Naor

Concurrent Zero Knowledge in the Bounded Player Model

In this paper we put forward the Bounded Player Model for secure computation. In this new model, the number of players that will ever be involved in secure computations is bounded, but the number of computations is not a priori bounded. Indeed, while the number of devices and people on this planet can be realistically estimated and bounded, the number of computations these devices will run cannot be realistically bounded. Further, we note that in the bounded player model, in addition to there being no a priori bound on the number of sessions, there is no synchronization barrier, no trusted party, and simulation must be performed in polynomial time.

In this setting, we achieve concurrent zero knowledge (cZK) with sub-logarithmic round complexity. Our security proof is (necessarily) non-black-box; our simulator is “straight-line” and works as long as the number of rounds is ω(1).

We further show that, unlike previously studied relaxations of the standard model (e.g., bounded number of sessions, timing assumptions, super-polynomial simulation), concurrent secure computation is still impossible to achieve in the Bounded Player model. This gives evidence that our model is “closer” to the standard model than previously studied models, and study of this model might shed light on constructing round-efficient concurrent zero knowledge in the standard model as well.

Vipul Goyal, Abhishek Jain, Rafail Ostrovsky, Silas Richelson, Ivan Visconti

Public-Coin Concurrent Zero-Knowledge in the Global Hash Model

Public-coin zero-knowledge and concurrent zero-knowledge (cZK) are two classes of zero-knowledge protocols that guarantee some additional desirable properties. Still, to this date no protocol is known that is both public-coin and cZK for a language outside BPP. Furthermore, it is known that no such protocol can be black-box ZK [Pass et al., CRYPTO ’09].

We present a public-coin concurrent ZK protocol for any NP language. The protocol assumes that all verifiers have access to a globally specified function, drawn from a collision-resistant hash function family. (This model, which we call the Global Hash Function, or GHF model, can be seen as a restricted case of the non-programmable reference string model.) We also show that the impossibility of black-box public-coin cZK extends to the GHF model.

Our protocol assumes CRH functions against quasi-polynomial adversaries and takes O(log^{1+ε} n) rounds for any ε > 0, where n is the security parameter. Our techniques combine those for (non-public-coin) black-box cZK with Barak’s non-black-box technique for public-coin constant-round ZK. As a corollary we obtain the first simultaneously resettable zero-knowledge protocol with O(log^{1+ε} n) rounds, in the GHF model.

Ran Canetti, Huijia Lin, Omer Paneth

Succinct Malleable NIZKs and an Application to Compact Shuffles

Depending on the application, malleability in cryptography can be viewed as either a flaw or — especially if sufficiently understood and restricted — a feature. In this vein, Chase, Kohlweiss, Lysyanskaya, and Meiklejohn recently defined malleable zero-knowledge proofs and showed how to control the set of allowable transformations on proofs. As an application, they construct the first compact verifiable shuffle, in which one such controlled-malleable proof suffices to prove the correctness of an entire multi-step shuffle.

Despite these initial steps, a number of natural problems remained: (1) their construction of controlled-malleable proofs relies on the inherent malleability of Groth-Sahai proofs and is thus not based on generic primitives; (2) the classes of allowable transformations they can support are somewhat restrictive.

In this paper, we address these issues by providing a generic construction of controlled-malleable proofs using succinct non-interactive arguments of knowledge, or SNARGs for short. Our construction can support very general classes of transformations, as we no longer rely on the transformations that Groth-Sahai proofs can support.

Melissa Chase, Markulf Kohlweiss, Anna Lysyanskaya, Sarah Meiklejohn

Encrypted Messages from the Heights of Cryptomania

How flexible can encryption be? This question motivated the invention of public key encryption that began modern cryptography. A lot has happened since then. I will focus on two lines of research that I find especially interesting (mainly the second) and the mysterious gap between them.

Craig Gentry

Attribute-Based Functional Encryption on Lattices

We introduce a broad lattice manipulation technique for expressive cryptography, and use it to realize functional encryption for access structures from post-quantum hardness assumptions.

Specifically, we build an efficient key-policy attribute-based encryption scheme, and prove its security in the selective sense from learning-with-errors intractability in the standard model.

Xavier Boyen

When Homomorphism Becomes a Liability

We show that an encryption scheme cannot have a simple decryption function and be homomorphic at the same time, even with added noise. Specifically, if a scheme can homomorphically evaluate the majority function, then its decryption cannot be weakly-learnable (in particular, linear), even if the probability of decryption error is high. (In contrast, without homomorphism, such schemes do exist and are presumed secure, e.g. based on LPN.)

An immediate corollary is that known schemes that are based on the hardness of decoding in the presence of low Hamming-weight noise cannot be fully homomorphic. This applies to known schemes such as LPN-based symmetric or public-key encryption.

Using these techniques, we show that the recent candidate fully homomorphic encryption, suggested by Bogdanov and Lee (ePrint ’11, henceforth BL), is insecure. In fact, we show two attacks on the BL scheme: One that uses homomorphism, and another that directly attacks a component of the scheme.
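The power of homomorphic majority in this context can be appreciated through a standard amplification calculation (a generic step stated for intuition, not the paper’s proof): if Z_1, …, Z_k are independent predictions of the same bit, each correct with probability 1/2 + γ, then by a Chernoff bound

$$\Pr\big[\mathrm{Maj}(Z_1,\ldots,Z_k)\ \text{is wrong}\big] \;\le\; e^{-2\gamma^{2}k},$$

so k = O(γ^{-2} log(1/δ)) copies drive the error below any target δ. A weak learner’s slight advantage can thus be boosted to near-certainty, which gives some intuition for why weakly learnable decryption clashes with the ability to evaluate majority homomorphically.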

Zvika Brakerski

Garbling XOR Gates “For Free” in the Standard Model

Yao’s Garbled Circuit (GC) technique is a powerful cryptographic tool which allows one to “encrypt” a circuit C by another circuit ${\hat C}$ in a way that hides all information except for the final output. Yao’s original construction incurs a constant overhead in both computation and communication per gate of the circuit C (proportional to the complexity of symmetric encryption). Kolesnikov and Schneider (ICALP 2008) introduced an optimized variant that garbles XOR gates “for free” in a way that involves no cryptographic operations and no communication. This variant has become very popular and has led to notable performance improvements.
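For context, the free-XOR trick itself is easy to state: every wire’s two labels differ by a single circuit-global secret offset R, so an XOR gate’s output labels are just the XOR of its input labels and no garbled table is needed. A minimal sketch of the wire-label arithmetic (parameter choices ours):

    # Free-XOR wire labels: the label pair of every wire is (K, K xor R)
    # for a circuit-global secret offset R, so XOR gates need no table.
    import os

    LABEL_LEN = 16
    R = os.urandom(LABEL_LEN)                  # global offset (kept secret)

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def fresh_wire():
        k0 = os.urandom(LABEL_LEN)             # label encoding bit 0
        return k0, xor(k0, R)                  # label encoding bit 1

    def xor_gate(a0: bytes, b0: bytes):
        c0 = xor(a0, b0)                       # output wire's 0-label
        return c0, xor(c0, R)

    a0, a1 = fresh_wire()
    b0, b1 = fresh_wire()
    c0, c1 = xor_gate(a0, b0)
    # Evaluating with labels for input bits (1,0) or (1,1) yields the
    # label of the correct output bit.
    assert xor(a1, b0) == c1 and xor(a1, b1) == c0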

The security of the free-XOR optimization was originally proven in the random oracle model. Despite some partial progress (Choi et al., TCC 2012), the question of replacing the random oracle with a standard cryptographic assumption has remained open.

We resolve this question by showing that the free-XOR approach can be realized in the standard model under the learning parity with noise (LPN) assumption. Our result is obtained in two steps:

– We show that the random oracle can be replaced with a symmetric encryption scheme which remains secure under a combined form of related-key (RK) and key-dependent message (KDM) attacks; and

– We show that such a symmetric encryption scheme can be constructed based on the LPN assumption.

As an additional contribution, we prove that the combination of RK and KDM security is non-trivial: there exists an encryption scheme which achieves both RK security and KDM security but breaks completely in the presence of combined RK-KDM attacks.

Benny Applebaum

Why “Fiat-Shamir for Proofs” Lacks a Proof

The Fiat-Shamir heuristic [CRYPTO ’86] is used to convert any 3-message public-coin proof or argument system into a non-interactive argument, by hashing the prover’s first message to select the verifier’s challenge. It is known that this heuristic is sound when the hash function is modeled as a random oracle. On the other hand, the surprising result of Goldwasser and Kalai [FOCS ’03] shows that there exists a computationally sound argument on which the Fiat-Shamir heuristic is never sound, when instantiated with any actual efficient hash function. This leaves us with the following interesting possibility: perhaps we can securely instantiate the Fiat-Shamir heuristic for all 3-message public-coin statistically sound proofs, even if we must fail for some computationally sound arguments. Indeed, this has been conjectured to be the case by Barak, Lindell and Vadhan [FOCS ’03], but we do not have any provably secure instantiation under any “standard assumption”. In this work, we give a broad black-box separation result showing that the security of the Fiat-Shamir heuristic for statistically sound proofs cannot be proved under virtually any standard assumption via a black-box reduction. More precisely:

– If we want a “universal” instantiation of the Fiat-Shamir heuristic that works for all 3-message public-coin proofs, then we cannot prove its security via a black-box reduction from any assumption that has the format of a “cryptographic game”.

– For many concrete proof systems, if we want a “specific” instantiation of the Fiat-Shamir heuristic for that proof system, then we cannot prove its security via a black-box reduction from any “falsifiable assumption” that has the format of a cryptographic game with an efficient challenger.
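For reference, the heuristic itself is a one-line transformation. The sketch below instantiates it for Schnorr’s 3-message public-coin protocol, with SHA-256 playing the role of the hash function (a textbook illustration, not a construction from the paper; the group parameters are toy-sized and insecure):

    # Fiat-Shamir: replace the verifier's random challenge by a hash of
    # the statement and the prover's first message. Toy parameters only.
    import hashlib
    import secrets

    p = 2**127 - 1        # toy prime modulus (NOT a secure choice)
    g = 3                 # fixed base; exponents are taken mod p - 1

    def H(y: int, a: int) -> int:
        h = hashlib.sha256(f"{y}||{a}".encode()).digest()
        return int.from_bytes(h, "big") % (p - 1)

    def prove(x: int, y: int):
        r = secrets.randbelow(p - 1)
        a = pow(g, r, p)              # prover's first message
        c = H(y, a)                   # hashed challenge (no verifier!)
        z = (r + c * x) % (p - 1)
        return a, z

    def verify(y: int, a: int, z: int) -> bool:
        c = H(y, a)                   # verifier recomputes the challenge
        return pow(g, z, p) == (a * pow(y, c, p)) % p

    x = secrets.randbelow(p - 1)      # witness: discrete log of y
    y = pow(g, x, p)                  # statement
    assert verify(y, *prove(x, y))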

Nir Bitansky, Dana Dachman-Soled, Sanjam Garg, Abhishek Jain, Yael Tauman Kalai, Adriana López-Alt, Daniel Wichs

On the (In)security of Fischlin’s Paradigm

The Fiat-Shamir paradigm was proposed as a way to remove interaction from 3-round proof of knowledge protocols and derive secure signature schemes. This generic transformation leads to very efficient schemes and has thus grown quite popular. However, this transformation is proven secure only in the random oracle model. In FOCS 2003, Goldwasser and Kalai showed that this transformation is provably insecure in the standard model by presenting a counterexample of a 3-round protocol, the Fiat-Shamir transformation of which is (although provably secure in the random oracle model) insecure in the standard model, thus showing that the random oracle is uninstantiable. In particular, for every hash function that is used to replace the random oracle, the resulting signature scheme is existentially forgeable. This result was shown by relying on the non-black-box techniques of Barak (FOCS 2001).

An alternative to the Fiat-Shamir paradigm was proposed by Fischlin in Crypto 2005. Fischlin’s transformation can be applied to any so-called 3-round “Fiat-Shamir proof of knowledge” and can be used to derive non-interactive zero-knowledge proofs of knowledge as well as signature schemes. An attractive property of this transformation is that it provides online extractability (i.e., the extractor works without having to rewind the prover). Fischlin remarks that, in comparison to the Fiat-Shamir transformation, his construction tries to “decouple the hash function from the protocol flow” and hence the counterexample in the work of Goldwasser and Kalai does not seem to carry over to this setting.

In this work, we show a counterexample to Fischlin’s transformation. In particular, we construct a 3-round Fiat-Shamir proof of knowledge (on which Fischlin’s transformation is applicable) and then present an adversary against both the soundness of the resulting non-interactive zero-knowledge proof and the unforgeability of the resulting signature scheme. Our attacks are successful except with negligible probability for any hash function that is used to instantiate the random oracle, provided that there is an a priori (polynomial) bound on the running time of the hash function. By choosing the right bound, secure instantiation of Fischlin’s transformation with most practical cryptographic hash functions can be ruled out.

The techniques used in our work are quite unrelated to the ones used in the work of Goldwasser and Kalai. Our primary technique is to bind the protocol flow to the hash function when the code of the hash function is available. We believe that our ideas are of independent interest and may be applicable in other related settings.

Prabhanjan Ananth, Raghav Bhaskar, Vipul Goyal, Vanishree Rao

Signatures of Correct Computation

We introduce Signatures of Correct Computation (SCC), a new model for verifying dynamic computations in cloud settings. In the SCC model, a trusted source outsources a function f to an untrusted server, along with a public key for that function (to be used during verification). The server can then produce a succinct signature σ vouching for the correctness of the computation of f, i.e., that some result v is indeed the correct outcome of the function f evaluated on some point a. There are two crucial performance properties that we want to guarantee in an SCC construction: (1) verifying the signature should take asymptotically less time than evaluating the function f; and (2) the public key should be efficiently updated whenever the function changes.

We construct SCC schemes (satisfying the above two properties) supporting expressive manipulations over multivariate polynomials, such as polynomial evaluation and differentiation. Our constructions are adaptively secure in the random oracle model and achieve optimal updates, i.e., the function’s public key can be updated in time proportional to the number of updated coefficients, without performing a linear-time computation (in the size of the polynomial).

We also show that signatures of correct computation imply Publicly Verifiable Computation (PVC), a model recently introduced in several concurrent and independent works. Roughly speaking, in the SCC model, any client can verify the signature σ and be convinced of some computation result, whereas in the PVC model only the client that issued a query (or anyone who trusts this client) can verify that the server returned a valid signature (proof) for the answer to the query. Our techniques can be readily adapted to construct PVC schemes with adaptive security, efficient updates, and without the random oracle model.

Charalampos Papamanthou, Elaine Shi, Roberto Tamassia

A Full Characterization of Functions that Imply Fair Coin Tossing and Ramifications to Fairness

It is well known that it is impossible for two parties to toss a coin fairly (Cleve, STOC 1986). This result implies that it is impossible to securely compute with fairness any function that can be used to toss a fair coin. In this paper, we focus on the class of deterministic Boolean functions with finite domain, and we ask for which functions in this class it is possible to information-theoretically toss an unbiased coin, given a protocol for securely computing the function with fairness. We provide a complete characterization of the functions in this class that do and do not imply fair coin tossing. This characterization extends our knowledge of which functions cannot be securely computed with fairness. In addition, it provides a focus as to which functions may potentially be securely computed with fairness, since a function that cannot be used to fairly toss a coin is not ruled out by the impossibility result of Cleve (which is the only known impossibility result for fairness). In addition to the above, we draw corollaries regarding the feasibility of achieving fairness in two possible fail-stop models.

Gilad Asharov, Yehuda Lindell, Tal Rabin

Characterizing the Cryptographic Properties of Reactive 2-Party Functionalities

In secure multi-party computation, a reactive functionality is one which maintains persistent state, takes inputs, and gives outputs over many rounds of interaction with its parties. Reactive functionalities are fundamental and model many interesting and natural cryptographic tasks; yet their security properties are not nearly as well-understood as in the non-reactive case (known as secure function evaluation).

We present new combinatorial characterizations for 2-party reactive functionalities, which we model as finite automata. We characterize the functionalities that have passive-secure protocols, and those which are complete with respect to passive adversaries. Both characterizations are in the information-theoretic setting.

R. Amzi Jeffs, Mike Rosulek

Feasibility and Completeness of Cryptographic Tasks in the Quantum World

It is known that cryptographic feasibility results can change when moving from the classical to the quantum world. With this in mind, we study the feasibility of realizing functionalities in the framework of universal composability, with respect to both computational and information-theoretic security. With respect to computational security, we show that existing feasibility results carry over unchanged from the classical to the quantum world; a functionality is “trivial” (i.e., can be realized without setup) in the quantum world if and only if it is trivial in the classical world. The same holds with regard to functionalities that are complete (i.e., can be used to realize arbitrary other functionalities).

In the information-theoretic setting, the quantum and classical worlds differ. In the quantum world, functionalities in the class we consider are either complete, trivial, or belong to a family of simultaneous-exchange functionalities (e.g., XOR). However, other results in the information-theoretic setting remain roughly unchanged.

Serge Fehr, Jonathan Katz, Fang Song, Hong-Sheng Zhou, Vassilis Zikas

Languages with Efficient Zero-Knowledge PCPs are in SZK

A Zero-Knowledge PCP (ZK-PCP) is a randomized PCP such that the view of any (perhaps cheating) efficient verifier can be efficiently simulated up to small statistical distance. Kilian, Petrank, and Tardos (STOC ’97) constructed ZK-PCPs for all languages in NEXP. Ishai, Mahmoody, and Sahai (TCC ’12), motivated by cryptographic applications, revisited the possibility of efficient ZK-PCPs for all of NP, where the PCP is encoded as a polynomial-size circuit that, given a query i, returns the i-th symbol of the PCP. Ishai et al. showed that there is no efficient ZK-PCP for NP with a non-adaptive verifier, which prepares all of its PCP queries before seeing any answers, unless NP ⊆ coAM and the polynomial-time hierarchy collapses. The question of whether adaptive verification can lead to efficient ZK-PCPs for NP remained open.

In this work, we resolve this question and show that any language or promise problem with efficient ZK-PCPs must be in SZK (the class of promise problems with a statistical zero-knowledge single-prover proof system). Therefore, no NP-complete problem can have an efficient ZK-PCP unless NP ⊆ SZK (which also implies NP ⊆ coAM and the polynomial-time hierarchy collapses). We prove our result by reducing any promise problem with an efficient ZK-PCP to two instances of the Conditional Entropy Approximation problem defined and studied by Vadhan (FOCS ’04), which is known to be complete for the class SZK.

Mohammad Mahmoody, David Xiao

Succinct Non-interactive Arguments via Linear Interactive Proofs

Succinct non-interactive arguments (SNARGs) enable verifying NP statements with lower complexity than required for classical NP verification. Traditionally, the focus has been on minimizing the length of such arguments; nowadays researchers have focused also on minimizing verification time, drawing motivation from the problem of delegating computation.

A common relaxation is a preprocessing SNARG, which allows the verifier to conduct an expensive offline phase that is independent of the statement to be proven later. Recent constructions of preprocessing SNARGs have achieved attractive features: they are publicly verifiable, proofs consist of only O(1) encrypted (or encoded) field elements, and verification is via arithmetic circuits of size linear in the NP statement. Additionally, these constructions seem to have “escaped the hegemony” of probabilistically checkable proofs (PCPs) as a basic building block of succinct arguments.

Nir Bitansky, Alessandro Chiesa, Yuval Ishai, Omer Paneth, Rafail Ostrovsky

Unprovable Security of Perfect NIZK and Non-interactive Non-malleable Commitments

We present barriers to provable security of two fundamental (and well-studied) cryptographic primitives, perfect non-interactive zero knowledge (NIZK) and non-malleable commitments:

– Black-box reductions cannot be used to demonstrate adaptive soundness (i.e., that soundness holds even if the statement to be proven is chosen as a function of the common reference string) of any statistical (and thus also perfect) NIZK for ${\cal NP}$ based on any “standard” intractability assumptions.

– Black-box reductions cannot be used to demonstrate non-malleability of non-interactive, or even 2-message, commitment schemes based on any “standard” intractability assumptions.

We emphasize that the above separations apply even if the construction of the considered primitives makes a non-black-box use of the underlying assumption.

As an independent contribution, we suggest a taxonomy of game-based intractability assumptions based on (1) the security threshold, (2) the number of communication rounds in the security game, (3) the computational complexity of the game challenger, (4) the communication complexity of the challenger, and (5) the computational complexity of the security reduction.

Rafael Pass

Secure Computation for Big Data

Secure computation has been a powerful and important research area in cryptography since the first breakthrough results in the 1980s. For many years this area was purely theoretical, as the feasibility results have not been considered even close to practical. Recently, it appears to have turned a corner, with several research efforts showing that secure computation for large classes of functions, and even generic secure computation, has the potential to become truly practical. This shift is brought on by algorithmic advancements and new cryptographic tools, alongside advancements in CPU speed, parallelism, and storage capabilities; it is further motivated by the explosion of new potential application domains for secure computation.

A compelling motivation for making secure computation practical is provided by the burgeoning field of Big Data, representing the deluge of data being generated, collected, and stored all around us. Protocols for secure computation on big data can provide critical value for many business, medical, legal, and personal applications. However, conventional approaches to secure computation are inherently insufficient in this setting, where even linear computation can be prohibitive.

In this talk I discuss challenges and solutions related to secure computation for big data, following two thrusts:

Overcoming inherent theoretical bounds of (in)efficiency; and

Satisfying immediate practical needs in a theoretically sound way.

Both goals require the development of new models of secure computation, allowing for theoretically and practically meaningful relaxations of the standard model. In particular, I discuss a few works I have participated in over the last decade, which address the challenge of achieving efficient secure computation for massive data. I also share some experiences from the last few years working on secure search over massive data sets. This research has externally imposed practical constraints, such as strict performance requirements. I focus on my perspective as a theoretical cryptographer and discuss some open cryptographic challenges in this emerging domain.

Tal Malkin

Communication Locality in Secure Multi-party Computation

How to Run Sublinear Algorithms in a Distributed Setting

We devise multi-party computation protocols for general secure function evaluation with the property that each party is only required to communicate with a small number of dynamically chosen parties. More explicitly, starting with n parties connected via a complete and synchronous network, our protocol requires each party to send messages to (and process messages from) at most polylog(n) other parties using polylog(n) rounds. It achieves secure computation of any polynomial-time computable randomized function f under cryptographic assumptions, and tolerates up to $({1\over 3} - \epsilon) \cdot n$ statically scheduled Byzantine faults.

We then focus on the particularly interesting setting in which the function to be computed is a sublinear algorithm: an evaluation of f depends on the inputs of at most q = o(n) of the parties, where the identity of these parties can be chosen randomly and possibly adaptively. Typically, q = polylog(n). While the sublinear query complexity of f makes it possible in principle to dramatically reduce the communication complexity of our general protocol, the challenge is to achieve this while maintaining security: in particular, while keeping the identities of the selected inputs completely hidden. We solve this challenge, and we provide a protocol for securely computing such sublinear f that runs in polylog(n) + O(q) rounds, has each party communicating with at most q·polylog(n) other parties, and supports message sizes polylog(n)·(ℓ + n), where ℓ is the parties’ input size.

Our optimized protocols rely on a multi-signature scheme, fully homomorphic encryption (FHE), and simulation-sound adaptive NIZK arguments. We remark, however, that multi-signatures and FHE are used only to obtain our bounds on message size and round complexity; assuming only standard digital signatures and public-key encryption, one can still obtain the property that each party communicates with only polylog(n) other parties. We emphasize that the scheduling of faults can depend on the initial PKI setup of digital signatures and the NIZK parameters.

Elette Boyle, Shafi Goldwasser, Stefano Tessaro

Distributed Oblivious RAM for Secure Two-Party Computation

We present a new method for secure two-party Random Access Memory (RAM) program computation that does not require taking a program and first turning it into a circuit. The method achieves logarithmic overhead compared to an insecure program execution.

At the heart of our construction is a new Oblivious RAM construction in which a client interacts with two non-communicating servers. Our two-server Oblivious RAM for n reads/writes requires O(n) memory for the servers, O(1) memory for the client, and O(log n) amortized read/write overhead for data access. The constants in the big-O notation are tiny, and we show that the storage and data-access overhead of our solution compares favorably in concrete terms with state-of-the-art single-server schemes. Our protocol also enjoys an important feature from a practical perspective: at the heart of almost all previous single-server Oblivious RAM solutions lies a crucial but inefficient process known as oblivious sorting. In our two-server model, we describe a new technique to bypass oblivious sorting, and show how this can be carefully blended with existing techniques to attain a more practical Oblivious RAM protocol in comparison to all prior work.

As alluded to above, our two-server Oblivious RAM protocol leads to a novel application in the realm of secure two-party RAM program computation. We observe that in secure two-party computation, Alice and Bob can play the roles of the two non-colluding servers. We show that our Oblivious RAM construction can be composed with an extended version of the Ostrovsky-Shoup compiler to obtain a new method for secure two-party program computation with lower overhead than all existing constructions.

Steve Lu, Rafail Ostrovsky

Black-Box Proof of Knowledge of Plaintext and Multiparty Computation with Low Communication Overhead

We present a 2-round protocol to prove knowledge of a plaintext corresponding to a given ciphertext. Our protocol is black-box in the underlying cryptographic primitives and it can be instantiated with almost any fully homomorphic encryption scheme.

Since our protocol has only 2 rounds, it cannot be zero-knowledge [GO94]; instead, we prove that our protocol ensures the semantic security of the underlying ciphertext.

To illustrate the merit of this relaxed proof of knowledge property, we use our result to construct a secure multi-party computation protocol for evaluating a function f in the standard model using only black-box access to a threshold fully homomorphic encryption scheme. This protocol requires communication that is independent of |f|; while Gentry [Gen09a] has previously shown how to construct secure multi-party protocols with similar communication rates, the use of our novel primitive (along with other new techniques) avoids the use of complicated generic white-box techniques (cf. PCP encodings [Gen09a] and generic zero-knowledge proofs [AJLA+12, LATV11]).

In this sense, our work demonstrates in principle that practical TFHE can lead to reasonably practical secure computation.

Steven Myers, Mona Sergi, abhi shelat

Testing the Lipschitz Property over Product Distributions with Applications to Data Privacy

In the past few years, the focus of research in the area of statistical data privacy has been on designing algorithms for various problems which satisfy some rigorous notion of privacy. However, not much effort has gone into designing techniques to computationally verify whether a given algorithm satisfies some predefined notion of privacy. In this work, we address the following question:

Can we design algorithms which test whether a given algorithm satisfies some specific rigorous notion of privacy (e.g., differential privacy)?

We design algorithms to test the privacy guarantees of a given algorithm $\mathcal{A}$ when run on a dataset x containing potentially sensitive information about individuals. More formally, we design a computationally efficient algorithm ${\cal T}_{priv}$ that verifies whether $\mathcal{A}$ satisfies the differential privacy on typical datasets (DPTD) guarantee in time sublinear in the size of the domain of the datasets. DPTD, a notion similar to the generalized differential privacy first proposed by [3], is a distributional relaxation of the popular notion of differential privacy [14].

To design the algorithm ${\cal T}_{priv}$, we show a formal connection between testing the privacy guarantee of an algorithm and testing the Lipschitz property of a related function. More specifically, we show that an efficient algorithm for testing the Lipschitz property can be used as a subroutine in ${\cal T}_{priv}$ to test whether an algorithm satisfies differential privacy on typical datasets.
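The textbook connection motivating such a reduction (stated here for pure ε-differential privacy over discrete outputs; the paper works with the DPTD relaxation) is that privacy is exactly a Lipschitz condition on log-probabilities of outputs: for datasets differing in one entry,

$$\mathcal{A}\ \text{is}\ \epsilon\text{-DP} \iff \forall o:\ \ x \mapsto \ln \Pr[\mathcal{A}(x)=o]\ \text{is}\ \epsilon\text{-Lipschitz w.r.t. Hamming distance},$$

so a tester for the Lipschitz property of the map x ↦ ln Pr[A(x) = o] is a natural subroutine for testing privacy.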

Apart from formalizing the connection between testing the privacy guarantee and testing the Lipschitz property, we generalize the work of [21] to the setting of property testing under product distributions. More precisely, we design an efficient Lipschitz tester for the case where the domain points are drawn from the hypercube according to some fixed but unknown product distribution, instead of the uniform distribution.

Kashyap Dixit, Madhav Jha, Sofya Raskhodnikova, Abhradeep Thakurta

Limits on the Usefulness of Random Oracles

In the random oracle model, parties are given oracle access to a random function (i.e., a uniformly chosen function from the set of all functions) and are assumed to have unbounded computational power (though they can only make a bounded number of oracle queries). This model provides powerful properties that allow proving the security of many protocols, even ones that cannot be proved secure in the standard model (under any hardness assumption). The random oracle model is also used for showing that a given cryptographic primitive cannot be used in a black-box way to construct another primitive; in their seminal work, Impagliazzo and Rudich [STOC ’89] showed that no key-agreement protocol exists in the random oracle model, yielding that key agreement cannot be black-box reduced to one-way functions. Their work has a long line of follow-up works (Simon [EC ’98], Gertner et al. [STOC ’00], and Gennaro et al. [SICOMP ’05], to name a few), showing that oracle access to a certain type of function family (e.g., the family that “implements” public-key encryption) is not sufficient for building a given cryptographic primitive (e.g., oblivious transfer). Yet, the following question remained open:

What is the exact power of the random oracle model?

We make progress towards answering this question, showing that, essentially, any no-private-input, semi-honest, two-party functionality that can be securely implemented in the random oracle model can be securely implemented information-theoretically (where parties are assumed to be all-powerful, and no oracle is given). We further generalize the above result to function families that provide some natural combinatorial property.

Our result immediately yields that, essentially, the only no-input functionalities that can be securely realized in the random oracle model (in the sense of secure function evaluation) are the trivial ones (those that can be securely realized information-theoretically). In addition, we use the recent information-theoretic impossibility result of McGregor et al. [FOCS ’10] to show the existence of functionalities (e.g., inner product) that cannot be computed both accurately and in a differentially private manner in the random oracle model; this yields that protocols for computing these functionalities cannot be black-box reduced to one-way functions.

Iftach Haitner, Eran Omri, Hila Zarosim

Analyzing Graphs with Node Differential Privacy

We develop algorithms for the private analysis of network data that provide accurate analysis of realistic networks while satisfying stronger privacy guarantees than those of previous work. We present several techniques for designing node differentially private algorithms, that is, algorithms whose output distribution does not change significantly when a node and all its adjacent edges are added to a graph. We also develop a methodology for analyzing the accuracy of such algorithms on realistic networks.

The main idea behind our techniques is to “project” (in one of several senses) the input graph onto the set of graphs with maximum degree below a certain threshold. We design projection operators, tailored to specific statistics, that have low sensitivity and preserve information about the original statistic. These operators can be viewed as giving a fractional (low-degree) graph that is a solution to an optimization problem described as a maximum flow instance, linear program, or convex program. In addition, we derive a generic, efficient reduction that allows us to apply any differentially private algorithm for bounded-degree graphs to an arbitrary graph. This reduction is based on analyzing the smooth sensitivity of the “naive” truncation that simply discards nodes of high degree.
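As a toy illustration of the naive truncation mentioned above (a hedged sketch: the paper calibrates noise to the smooth sensitivity of the truncation step itself, whereas this code only uses the crude bound that, on graphs of degree at most D, one node affects at most D edges):

    # Node-private edge count via naive truncation: drop nodes of degree
    # > D, then add Laplace noise scaled to D/eps. Illustrative only.
    import math
    import random

    def truncate(adj: dict, D: int) -> dict:
        keep = {v for v, nbrs in adj.items() if len(nbrs) <= D}
        return {v: adj[v] & keep for v in keep}

    def noisy_edge_count(adj: dict, D: int, eps: float) -> float:
        g = truncate(adj, D)
        edges = sum(len(nbrs) for nbrs in g.values()) / 2
        u = random.random() - 0.5          # sample Laplace(D/eps) noise
        noise = -(D / eps) * math.copysign(math.log(1 - 2 * abs(u)), u)
        return edges + noise

    # Example: a small graph stored as adjacency sets.
    G = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4, 5}, 4: {3}, 5: {3}}
    print(noisy_edge_count(G, D=2, eps=1.0))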

Shiva Prasad Kasiviswanathan, Kobbi Nissim, Sofya Raskhodnikova, Adam Smith

Universally Composable Synchronous Computation

In synchronous networks, protocols can achieve security guarantees that are not possible in an asynchronous world: they can simultaneously achieve input completeness (all honest parties’ inputs are included in the computation) and guaranteed termination (honest parties do not “hang” indefinitely). In practice, truly synchronous networks rarely exist, but synchrony can be emulated if channels have (known) bounded latency and parties have loosely synchronized clocks.

The widely used framework of universal composability (UC) is inherently asynchronous, but several approaches for adding synchrony to the framework have been proposed. However, we show that the existing proposals do not provide the expected guarantees. Given this, we propose a novel approach to defining synchrony in the UC framework by introducing functionalities exactly meant to model, respectively, bounded-delay networks and loosely synchronized clocks. We show that the expected guarantees of synchronous computation can be achieved given these functionalities, and that previous similar models can all be expressed within our new framework.

Jonathan Katz, Ueli Maurer, Björn Tackmann, Vassilis Zikas

Multi-Client Non-interactive Verifiable Computation

Gennaro et al. (Crypto 2010) introduced the notion of non-interactive verifiable computation, which allows a computationally weak client to outsource the computation of a function f on a series of inputs x^(1), … to a more powerful but untrusted server. Following a pre-processing phase (that is carried out only once), the client sends some representation of its current input x^(i) to the server; the server returns an answer that allows the client to recover the correct result f(x^(i)), accompanied by a proof of correctness that ensures the client does not accept an incorrect result. The crucial property is that the work done by the client in preparing its input and verifying the server’s proof is less than the time required for the client to compute f on its own.

We extend this notion to the multi-client setting, where n computationally weak clients wish to outsource to an untrusted server the computation of a function f over a series of joint inputs $(x_1^{(1)},\ldots,x_n^{(1)})$, … without interacting with each other. We present a construction for this setting by combining the scheme of Gennaro et al. with a primitive called proxy oblivious transfer.

Seung Geol Choi, Jonathan Katz, Ranjit Kumaresan, Carlos Cid

On the Feasibility of Extending Oblivious Transfer

Oblivious transfer is one of the most basic and important building blocks in cryptography. As such, understanding its cost is of prime importance. Beaver (STOC 1996) showed that it is possible to obtain poly(n) oblivious transfers given only n actual oblivious transfer calls and using one-way functions, where n is the security parameter. In addition, he showed that it is impossible to extend oblivious transfer information-theoretically. The notion of extending oblivious transfer is important theoretically (to understand the complexity of computing this primitive) and practically (since oblivious transfers can be expensive, and thus extending them using only one-way functions is very attractive).

Despite its importance, very little is known about the feasibility of extending oblivious transfer, beyond the fact that it is impossible information-theoretically. Specifically, it is not known whether one-way functions are actually necessary for extending oblivious transfer, whether it is possible to extend oblivious transfers with adaptive security, and whether it is possible to extend oblivious transfers when starting with O(log n) oblivious transfers. In this paper, we address these questions and provide almost complete answers to all of them. We show that the existence of any oblivious transfer extension protocol with security for static semi-honest adversaries implies one-way functions, that an oblivious transfer extension protocol with adaptive security implies oblivious transfer with static security, and that the existence of an oblivious transfer extension protocol from only O(log n) oblivious transfers implies oblivious transfer itself.

Yehuda Lindell, Hila Zarosim

Computational Soundness of Coinductive Symbolic Security under Active Attacks

In Eurocrypt 2010, Micciancio initiated an investigation of cryptographically sound, symbolic security analysis with respect to coinductive adversarial knowledge, and showed that under an adversarially passive model, certain security criteria may be given a computationally sound symbolic characterization without the assumption of key acyclicity. Left open in his work was the fundamental question of “the viability of extending the coinductive approach to prove computational soundness results in the presence of active adversaries.” In this paper we make some initial steps toward this goal with respect to an extension of a trace-based security model (Micciancio and Warinschi, TCC 2004) including asymmetric and symmetric encryption; in particular, we prove that a random computational trace can be soundly abstracted by a coinductive symbolic trace with overwhelming probability, provided that both of the underlying encryption schemes provide IND-CCA2 security (plus ciphertext integrity for the symmetric scheme), and that the diameter of the underlying coinductively-hidden subgraph is constant in every symbolic trace. This result holds even if the protocol allows arbitrarily nested applications of symmetric/asymmetric encryption, unrestricted transmission of symmetric keys, and adversaries who adaptively corrupt users, along with other forms of active attack.

As part of our proof, we formulate a game-based definition of encryption security allowing adaptive corruptions of keys and certain forms of adaptive key-dependent plaintext attack, along with other common forms of CCA2 attack. We prove that (with assumptions similar to the above) security under this game is implied by IND-CCA2 security. This also characterizes a provably benign form of cyclic encryption implied by standard security definitions, which may be of independent interest.

Mohammad Hajiabadi, Bruce M. Kapron

Revisiting Lower and Upper Bounds for Selective Decommitments

In [6,7], Dwork et al. posed the fundamental question of the existence of commitment schemes that are secure against selective opening attacks (SOA, for short). In [2], Bellare, Hofheinz, and Yilek, and Hofheinz in [13], answered it affirmatively by presenting a scheme based solely on the non-black-box use of a one-way permutation and needing a super-constant number of rounds. This result, however, opened other challenging questions about achieving better round complexity and obtaining fully black-box schemes that use the underlying primitives and the code of the adversary in a black-box manner.

Recently, in TCC 2011, Xiao ([23]) investigated how to achieve (nearly) optimal SOA-secure commitment schemes, where optimality is in the sense of both round complexity and black-box use of cryptographic primitives. The work of Xiao focuses on a simulation-based security notion of SOA. Moreover, the various results in [23] focus only on either parallel or concurrent SOA.

In this work we first point out various issues in the claims of [23] that actually re-open several of the questions left open in [2,13]. Then, we provide new lower bounds and concrete constructions that produce a very different state of the art compared to the one claimed in [23].

Rafail Ostrovsky, Vanishree Rao, Alessandra Scafuro, Ivan Visconti

On the Circular Security of Bit-Encryption

Motivated by recent developments in fully homomorphic encryption, we consider the folklore conjecture that every semantically secure bit-encryption scheme is circular secure or, in other words, that every bit-encryption scheme remains secure even when the adversary is given encryptions of the individual bits of the private key. We show the following obstacles to proving this conjecture:

1. We construct a public-key bit-encryption scheme that is plausibly semantically secure, but is not circular secure. The circular security attack manages to fully recover the private key. The construction is based on an extension of the Symmetric External Diffie-Hellman (SXDH) assumption from bilinear groups to ℓ-multilinear groups of order p, where ℓ ≥ c·log p for some c > 1.

While there do exist ℓ-multilinear groups (unconditionally), for ℓ ≥ 3 there are no known candidates for which the SXDH problem is believed to be hard. Nevertheless, there is also no evidence that such groups do not exist. Our result shows that in order to prove the folklore conjecture, one must rule out the possibility that there exist ℓ-multilinear groups for which SXDH is hard.

2. We show that the folklore conjecture cannot be proved using a black-box reduction: that is, there is no reduction of circular security of a bit-encryption scheme to semantic security of that very same scheme that uses both the encryption scheme and the adversary as black boxes.

Both of our negative results extend also to the (seemingly) weaker conjecture that every CCA secure bit-encryption scheme is circular secure.

As a final contribution, we show an equivalence between three seemingly distinct notions of circular security for public-key bit-encryption schemes. In particular, we give a general search-to-decision reduction showing that an adversary that distinguishes between encryptions of the bits of the private key and encryptions of zeros can be used to actually recover the private key.
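For concreteness, the decision notion referred to here can be phrased as the following distinguishing game (a minimal sketch of the standard experiment; the scheme interface and names are ours):

    # Circular-security game for bit encryption: the adversary must tell
    # bit-by-bit encryptions of the secret key from encryptions of zeros.
    import secrets

    def circular_game(scheme, adversary) -> bool:
        pk, sk_bits = scheme.keygen()       # sk_bits: the key as a bit list
        b = secrets.randbits(1)
        if b == 1:
            cts = [scheme.enc(pk, bit) for bit in sk_bits]   # Enc(sk_i)
        else:
            cts = [scheme.enc(pk, 0) for _ in sk_bits]       # Enc(0)
        return adversary(pk, cts) == b      # True iff the adversary wins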

Ron D. Rothblum

Cryptographic Hardness of Random Local Functions–Survey

Constant parallel-time cryptography allows performing complex cryptographic tasks at an ultimate level of parallelism, namely, by local functions, in which each output bit depends on a constant number of input bits. The feasibility of such highly efficient cryptographic constructions was widely studied in the last decade via two main research threads.

Benny Applebaum

On the Power of Correlated Randomness in Secure Computation

We investigate the extent to which correlated secret randomness can help in secure computation with no honest majority. It is known that correlated randomness can be used to evaluate any circuit of size s with perfect security against semi-honest parties or statistical security against malicious parties, where the communication complexity grows linearly with s. This leaves open two natural questions: (1) Can the communication complexity be made independent of the circuit size? (2) Is it possible to obtain perfect security against malicious parties?

We settle the above questions, obtaining both positive and negative results on unconditionally secure computation with correlated randomness. Concretely, we obtain the following results.

Minimizing communication.

Any multiparty functionality can be realized, with perfect security against semi-honest parties or statistical security against malicious parties, by a protocol in which the number of bits communicated by each party is linear in its input length. Our protocol uses an exponential number of correlated random bits. We give evidence that super-polynomial randomness complexity may be inherent.

Perfect security against malicious parties.

Any finite “sender-receiver” functionality, which takes inputs from a sender and a receiver and delivers an output only to the receiver, can be perfectly realized given correlated randomness. In contrast, perfect security is generally impossible for functionalities which deliver outputs to both parties. We also show useful functionalities (such as string equality) for which there are efficient perfectly secure protocols in the correlated randomness model.

Perfect correctness in the plain model.

We present a general approach for transforming perfectly secure protocols for sender-receiver functionalities in the correlated randomness model into secure protocols in the plain model which offer perfect correctness against a malicious sender. This should be contrasted with the impossibility of perfectly sound zero-knowledge proofs.

Yuval Ishai, Eyal Kushilevitz, Sigurd Meldgaard, Claudio Orlandi, Anat Paskin-Cherniavsky

Constant-Overhead Secure Computation of Boolean Circuits using Preprocessing

We present a protocol for securely computing a Boolean circuit C in the presence of a dishonest and malicious majority. The protocol is unconditionally secure, assuming a preprocessing functionality that is not given the inputs. For a large number of players, the work for each player is the same as computing the circuit in the clear, up to a constant factor. Our protocol is the first to obtain these properties for Boolean circuits. On the technical side, we develop new homomorphic authentication schemes based on asymptotically good codes with an additional multiplication property. We also show a new algorithm for verifying the product of Boolean matrices in quadratic time with exponentially small error probability, where previous methods only achieved constant error.
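The classical starting point for such verification is Freivalds’ check: to test A·B = C without recomputing the product, multiply both sides by a random vector, which costs O(n^2) per trial but errs with constant probability per trial; the contribution here is exponentially small error in quadratic time overall. A minimal sketch of the classical check over GF(2):

    # Freivalds' check that A @ B == C over GF(2): compare A(Br) with Cr
    # for random 0/1 vectors r. Each trial costs O(n^2) and misses an
    # inconsistency with probability <= 1/2, so k trials err w.p. <= 2^-k.
    import random

    def mat_vec(M, v):                   # matrix-vector product mod 2
        return [sum(a & b for a, b in zip(row, v)) % 2 for row in M]

    def freivalds(A, B, C, trials: int = 40) -> bool:
        n = len(A)
        for _ in range(trials):
            r = [random.randint(0, 1) for _ in range(n)]
            if mat_vec(A, mat_vec(B, r)) != mat_vec(C, r):
                return False             # definitely A @ B != C
        return True                      # equal, except w.p. <= 2^-trials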

Ivan Damgård, Sarah Zakarias

Implementing Resettable UC-Functionalities with Untrusted Tamper-Proof Hardware-Tokens

Resettable hardware tokens, usually in the form of smart cards, are used for a variety of security-critical tasks in open environments. Many of these tasks require trusted hardware tokens. With the complexity of hardware, however, it is not feasible to check if the hardware contains an internal state or gives away information over side channels. This inspires the question of the cryptographic strength of untrusted resettable hardware tokens in the universal composability framework.

In this work, we consider the problem of realizing general UC-functionalities from untrusted resettable hardware-tokens, with the goal of minimizing both the amount of interaction and the number of tokens employed. Our main result consists of two protocols, realizing functionalities that are sufficient to UC-realize any resettable two-party functionality.

The first protocol requires two rounds of interaction in an initialization phase and only a single hardware-token. The second protocol is fully non-interactive and requires two tokens. One of these relaxations, allowing either communication with the issuer of the token or issuing two tokens, is necessary. We show that even a simple functionality cannot be realized non-interactively using a single token.

Nico Döttling, Thilo Mie, Jörn Müller-Quade, Tobias Nilges

A Cookbook for Black-Box Separations and a Recipe for UOWHFs

We present a new framework for proving fully black-box separations and lower bounds. We prove a general theorem that facilitates the proofs of fully black-box lower bounds from a one-way function (OWF).

Loosely speaking, our theorem says that in order to prove that a fully black-box construction does not securely construct a cryptographic primitive Q (e.g., a pseudo-random generator or a universal one-way hash function) from a OWF, it is enough to come up with a large enough set of functions $\mathcal{F}$ and a parameterized oracle (i.e., an oracle that is defined for every f: {0,1}^n → {0,1}^n) such that $\mathcal{O}_{f}$ breaks the security of the construction when instantiated with f, and the oracle satisfies two local properties.

Our main application of the theorem is a lower bound of Ω(n/log(n)) on the number of calls made by any fully black-box construction of a universal one-way hash function (UOWHF) from a general one-way function. The bound holds even when the OWF is regular, in which case it matches a recent construction of Barhum and Maurer [4].

Kfir Barhum, Thomas Holenstein

Algebraic (Trapdoor) One-Way Functions and Their Applications

In this paper we introduce the notion of Algebraic (Trapdoor) One-Way Functions, which, roughly speaking, captures and formalizes many of the properties of number-theoretic one-way functions. Informally, a (trapdoor) one-way function F: X → Y is said to be algebraic if X and Y are (finite) abelian cyclic groups, the function is homomorphic, i.e., F(x)·F(y) = F(x·y), and it is ring-homomorphic, meaning that it is possible to compute linear operations “in the exponent” over some ring (which may be different from ℤ_p, where p is the order of the underlying group X) without knowing the bases. Moreover, algebraic OWFs must be flexibly one-way in the sense that given y = F(x), it must be infeasible to compute (x′, d) such that F(x′) = y^d (for d ≠ 0). Interestingly, algebraic one-way functions can be constructed from a variety of standard number-theoretic assumptions, such as RSA, Factoring, and CDH over bilinear groups.
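As a quick sanity check of the homomorphic property on the RSA instantiation mentioned above, where F(x) = x^e mod N (toy, insecure parameters chosen only for illustration):

    # RSA as a homomorphic one-way function: F(x) * F(y) = F(x * y) mod N.
    N, e = 3233, 17          # toy modulus N = 61 * 53; NOT secure

    def F(x: int) -> int:
        return pow(x, e, N)

    x, y = 123, 456
    assert (F(x) * F(y)) % N == F((x * y) % N)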

As a second contribution of this paper, we show several applications where algebraic (trapdoor) OWFs turn out to be useful. These include publicly verifiable secure outsourcing of polynomials, linearly homomorphic signatures and batch execution of Sigma protocols.

Dario Catalano, Dario Fiore, Rosario Gennaro, Konstantinos Vamvourellis

Randomness-Dependent Message Security

Traditional definitions of the security of encryption schemes assume that the messages encrypted are chosen independently of the randomness used by the encryption scheme. Recent works, implicitly by Myers and Shelat (FOCS ’09) and Bellare et al. (AsiaCrypt ’09), and explicitly by Hemenway and Ostrovsky (ECCC ’10), consider randomness-dependent message (RDM) security of encryption schemes, where the message to be encrypted may be selected as a function—referred to as the RDM function—of the randomness used to encrypt this particular message, or other messages, but in a circular way. We carry out a systematic study of this notion. Our main results demonstrate the following:

Full RDM security—where the RDM function may be an arbitrary polynomial-size circuit—is not possible.

Any secure encryption scheme can be slightly modified, by just performing some pre-processing on the randomness, to satisfy bounded-RDM security, where the RDM function is restricted to be a circuit of a priori bounded polynomial size. The scheme, however, requires the randomness r needed to encrypt a message m to be slightly longer than the length of m (i.e., |r| > |m| + ω(log k), where k is the security parameter).

We present a black-box provability barrier to compilations of arbitrary public-key encryption into RDM-secure ones using just pre-processing of the randomness, whenever |m| > |r| + ω(log k). On the other hand, under the DDH assumption, we demonstrate the existence of bounded-RDM secure schemes that can encrypt arbitrarily “long” messages using “short” randomness.

We finally note that the existence of public-key encryption schemes implies the existence of a fully RDM-secure encryption scheme in an “ultra-weak” Random-Oracle Model—where the security reduction need not “program” the oracle, or see the queries made by the adversary to the oracle; combined with our impossibility result, this yields the first example of a cryptographic task that has a secure implementation in such a weak Random-Oracle Model, but does not have a secure implementation without random oracles.
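To fix intuition about the object being defined, here is one natural way to phrase a single-message RDM experiment (our hypothetical rendering of the shape of such a game, not the paper’s formal definition): the RDM function g is applied to the very randomness r that then encrypts its output.

    # Hypothetical single-message RDM game: the message is g(r) for the
    # same randomness r used to encrypt it. Scheme interface and game
    # shape are illustrative, not the paper's formal definition.
    import secrets

    def rdm_game(scheme, g, adversary, rand_len: int) -> bool:
        pk, _sk = scheme.keygen()
        r = secrets.token_bytes(rand_len)          # encryption randomness
        b = secrets.randbits(1)
        m = g(r) if b == 1 else bytes(len(g(r)))   # g(r) vs. all-zeros
        c = scheme.enc(pk, m, r)                   # encrypted *with that r*
        return adversary(pk, c) == b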

Eleanor Birrell, Kai-Min Chung, Rafael Pass, Sidharth Telang

Errata to (Nearly) Round-Optimal Black-Box Constructions of Commitments Secure against Selective Opening Attacks

Several proofs initially presented by the author in [2] were shown to be incorrect in a recent work of Ostrovsky et al. [1]. In this notice we summarize the errors and describe the current state of the art after taking the errors and subsequent work into account.

David Xiao

Erratum: Succinct Non-interactive Arguments via Linear Interactive Proofs

Unfortunately, the order of appearance of the authors on the title page is not correct: the second-to-last and last authors were switched. The authors should be listed in alphabetical order, so that Rafail Ostrovsky is second-to-last and Omer Paneth is last.

The correct order is:

Nir Bitansky, Alessandro Chiesa, Yuval Ishai, Rafail Ostrovsky, and Omer Paneth

Nir Bitansky, Alessandro Chiesa, Yuval Ishai, Rafail Ostrovsky, Omer Paneth

Backmatter
