This book constitutes the thoroughly refereed proceedings of the 8th Theory of Cryptography Conference, TCC 2011, held in Providence, Rhode Island, USA, in March 2011. The 35 revised full papers are presented together with 2 invited talks and were carefully reviewed and selected from 108 submissions. The papers are organized in topical sections on hardness amplification, leakage resilience, tamper resilience, encryption, composable security, secure computation, privacy, coin tossing and pseudorandomness, and black-box constructions and separations.

### Input Locality and Hardness Amplification

We establish new hardness amplification results for one-way functions in which each input bit influences only a small number of output bits (a.k.a. input-local functions). Our transformations differ from previous ones in that they approximately preserve input locality and at the same time retain the input size of the original function.

Let f : {0, 1}^n → {0, 1}^m be a one-way function with input locality d, and suppose that f cannot be inverted in time $\exp(\tilde{O}(\sqrt{n}\cdot d))$ on an ε-fraction of inputs. Our main results can be summarized as follows:

- If f is injective then it is equally hard to invert f on a (1 − ε)-fraction of inputs.

- If f is regular then there is a function g : {0, 1}^n → {0, 1}^{m + O(n)} that is d + O(log^3 n) input-local and is equally hard to invert on a (1 − ε)-fraction of inputs.

A natural candidate for a function with small input locality and for which no sub-exponential time attacks are known is Goldreich’s one-way function. To make our results applicable to this function, we prove that when its input locality is set to be d = O(log n), certain variants of the function are (almost) regular with high probability.
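The notions of a local function and of input locality are purely combinatorial, so they are easy to make concrete. The following Python sketch (ours, not from the paper; the predicate and all parameters are arbitrary illustrative choices) samples a Goldreich-style local function and measures its input locality d, i.e., the maximum number of output bits that any single input bit influences:

```python
import random

def random_local_function(n, m, k, seed=0):
    """Sample a Goldreich-style function {0,1}^n -> {0,1}^m: each output bit
    applies a fixed predicate to k randomly chosen input positions."""
    rng = random.Random(seed)
    taps = [rng.sample(range(n), k) for _ in range(m)]

    def f(x):
        out = []
        for t in taps:
            bit = 0
            for i in t[:-2]:            # XOR part of an "XOR-AND" style predicate
                bit ^= x[i]
            bit ^= x[t[-2]] & x[t[-1]]  # single AND term (illustrative choice)
            out.append(bit)
        return out

    return f, taps

def input_locality(taps, n):
    """Input locality d: the max number of output bits one input bit feeds."""
    count = [0] * n
    for t in taps:
        for i in set(t):
            count[i] += 1
    return max(count)

f, taps = random_local_function(n=32, m=48, k=5)
y = f([i % 2 for i in range(32)])       # evaluate on an arbitrary input
d = input_locality(taps, 32)
```

On these toy parameters the function is output-local by construction, and `input_locality` recovers the parameter d that the theorems above are stated in terms of.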

In some cases, our techniques are applicable even when the input locality is not small. We demonstrate this by extending our first main result to one-way functions of the “parity with noise” type.

Andrej Bogdanov, Alon Rosen

### General Hardness Amplification of Predicates and Puzzles

(Extended Abstract)

We give new proofs for the hardness amplification of efficiently samplable predicates and of weakly verifiable puzzles which generalize to new settings. More concretely, in the first part of the paper, we give a new proof of Yao’s XOR-Lemma that additionally applies to related theorems in the cryptographic setting. Our proof seems simpler than previous ones, yet immediately generalizes to statements similar in spirit such as the extraction lemma used to obtain pseudo-random generators from one-way functions [Håstad, Impagliazzo, Levin, Luby, SIAM J. on Comp. 1999].
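The XOR composition at the heart of Yao’s XOR-Lemma is simple to state in code. A minimal sketch (our illustration, with a toy predicate, not from the paper) of the amplified predicate b(x_1) ⊕ … ⊕ b(x_k):

```python
from functools import reduce
from operator import xor

def xor_amplify(predicate, k):
    """Yao-style XOR composition: the amplified predicate on (x_1, ..., x_k)
    outputs predicate(x_1) XOR ... XOR predicate(x_k)."""
    def amplified(xs):
        assert len(xs) == k
        return reduce(xor, (predicate(x) for x in xs))
    return amplified

b = lambda x: x & 1        # toy predicate: low bit of an integer
B = xor_amplify(b, 3)
```

The lemma asserts that if b is mildly hard to predict, then B is much harder; the paper’s contribution is a new proof of this statement, not the composition itself.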

In the second part of the paper, we give a new proof of hardness amplification for weakly verifiable puzzles, which is more general than previous ones in that it gives the right bound even for an arbitrary monotone function applied to the checking circuit of the underlying puzzle.

Both our proofs are applicable in many settings of interactive cryptographic protocols because they satisfy a property that we call “non-rewinding”. In particular, we show that any weak cryptographic protocol whose security is given by the unpredictability of single bits can be strengthened with a natural information theoretic protocol. As an example, we show how these theorems solve the main open question from [Halevi and Rabin, TCC2008] concerning bit commitment.

Thomas Holenstein, Grant Schoenebeck

### Security Amplification for the Cascade of Arbitrarily Weak PRPs: Tight Bounds via the Interactive Hardcore Lemma

We consider the task of amplifying the security of a weak pseudorandom permutation (PRP), called an ε-PRP, for which the computational distinguishing advantage is only guaranteed to be bounded by some (possibly non-negligible) quantity ε < 1. We prove that the cascade (i.e., sequential composition) of m ε-PRPs (with independent keys) is an ((m − (m − 1)ε)ε^m + ν)-PRP, where ν is a negligible function. In the asymptotic setting, this implies security amplification for all $\epsilon < 1 - \frac{1}{\textsf{poly}}$, and the result extends to two-sided PRPs, where the inverse of the given permutation is also queried. Furthermore, we show that this result is essentially tight. This settles a long-standing open problem due to Luby and Rackoff (STOC ’86).
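Concretely, the cascade is just sequential composition of independently keyed instances. A toy Python sketch (ours; three fixed permutations of a 16-element domain stand in for keyed ε-PRPs):

```python
def cascade(perms):
    """Sequential composition: x -> P_m(...P_2(P_1(x))...)."""
    def composed(x):
        for p in perms:
            x = p(x)
        return x
    return composed

# Toy domain {0,...,15}; each map below is a permutation of the domain.
p1 = lambda x: (x + 3) % 16
p2 = lambda x: (x * 5) % 16      # 5 is odd, hence invertible mod 16
p3 = lambda x: x ^ 0b1010
C = cascade([p1, p2, p3])
```

The theorem quantifies how the distinguishing advantage of the cascade C shrinks with the number m of composed instances.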

Our approach relies on the first hardcore lemma for computational indistinguishability of *interactive* systems: given two systems whose states do not depend on the interaction, and which no efficient adversary can distinguish with advantage better than ε, we show that there exist events on the choices of the respective states, each occurring with probability at least 1 − ε, such that the two systems are computationally indistinguishable conditioned on these events.

Stefano Tessaro

### Dense Model Theorems and Their Applications

In 2004, Ben Green and Terry Tao [6] proved that, for every k, there are infinitely many length-k arithmetic progressions made entirely of prime numbers. This settled a very long-standing open question in number theory that had been open even for the k = 4 case.

Luca Trevisan

### Parallel Repetition for Leakage Resilience Amplification Revisited

If a cryptographic primitive remains secure even if ℓ bits about the secret key are leaked to the adversary, one would expect that at least one of n independent instantiations of the scheme remains secure given n·ℓ bits of leakage. This intuition has been proven true for schemes satisfying some special information-theoretic properties by Alwen et al. [Eurocrypt’10]. On the negative side, Lewko and Waters [FOCS’10] construct a CPA secure public-key encryption scheme for which this intuition fails.

The counterexample of Lewko and Waters leaves open the interesting possibility that for any scheme there exists a constant c > 0 such that n-fold repetition remains secure against c·n·ℓ bits of leakage. Furthermore, their counterexample requires the n copies of the encryption scheme to share a common reference parameter, leaving open the possibility that the intuition is true for all schemes without common setup.

In this work we give a stronger counterexample ruling out these possibilities. We construct a signature scheme such that:

1. a single instantiation remains secure given ℓ = log(k) bits of leakage, where k is a security parameter.

2. any polynomial number of independent instantiations can be broken (in the strongest sense of key-recovery) given ℓ′ = poly(k) bits of leakage. Note that ℓ′ does not depend on the number of instances.

The computational assumption underlying our counterexample is that non-interactive computationally sound proofs exist. Moreover, under a stronger (non-standard) assumption about such proofs, our counterexample does not require a common reference parameter.

The underlying idea of our counterexample is rather generic and can be applied to other primitives like encryption schemes.

Abhishek Jain, Krzysztof Pietrzak

### Achieving Leakage Resilience through Dual System Encryption

In this work, we show that strong leakage resilience for cryptosystems with advanced functionalities can be obtained quite naturally within the methodology of dual system encryption, recently introduced by Waters. We demonstrate this concretely by providing fully secure IBE, HIBE, and ABE systems which are resilient to bounded leakage from each of many secret keys per user, as well as many master keys. This can be realized as resilience against continual leakage if we assume keys are periodically updated and no (or logarithmic) leakage is allowed during the update process. Our systems are obtained by applying a simple modification to previous dual system encryption constructions: essentially this provides a generic tool for making dual system encryption schemes leakage-resilient.

Allison Lewko, Yannis Rouselakis, Brent Waters

### Signatures Resilient to Continual Leakage on Memory and Computation

Recent breakthrough results by Brakerski et al. and Dodis et al. have shown that signature schemes can be made secure even if the adversary *continually* obtains information leakage from the secret key of the scheme. However, the schemes currently do not allow leakage on the secret key and randomness *during signing*, except in the random oracle model. Further, the random oracle based schemes require updates to the secret key in order to maintain security, even when no leakage during computation is present.

We present the first signature scheme that is resilient to full continual leakage: memory leakage as well as leakage from processing during signing (both from the secret key and the randomness), in key generation, and in update. Our scheme can tolerate leakage of a 1 − o(1) fraction of the secret key between updates, and is proven secure in the standard model based on the symmetric external DDH (SXDH) assumption in bilinear groups. The time periods between updates are a function of the amount of leakage in the period (and nothing more).

As an additional technical contribution, we introduce a new tool: independent pre-image resistant hash functions, which may be of independent interest.

Tal Malkin, Isamu Teranishi, Yevgeniy Vahlis, Moti Yung

### After-the-Fact Leakage in Public-Key Encryption

What does it mean for an encryption scheme to be leakage-resilient? Prior formulations require that the scheme remains semantically secure even in the presence of leakage, but only considered leakage that occurs *before the challenge ciphertext is generated*. Although seemingly necessary, this restriction severely limits the usefulness of the resulting notion.

In this work we study after-the-fact leakage, namely leakage that the adversary obtains after seeing the challenge ciphertext. We seek a “natural” and realizable notion of security, which is usable in higher-level protocols and applications. To this end, we formulate *entropic leakage-resilient PKE*. This notion captures the intuition that as long as the entropy of the encrypted message is higher than the amount of leakage, the message still has some (pseudo) entropy left. We show that this notion is realized by the Naor-Segev constructions (using hash proof systems).

We demonstrate that entropic leakage-resilience is useful by showing a simple construction that uses it to get semantic security in the presence of after-the-fact leakage, in a model of bounded memory leakage from a split state.

Shai Halevi, Huijia Lin

### One-Time Computable Self-erasing Functions

This paper studies the design of cryptographic schemes that are secure even if implemented on untrusted machines that fall under adversarial control. For example, this includes machines that are infected by a software virus.

We introduce a new cryptographic notion that we call a *one-time computable pseudorandom function (PRF)*, which is a PRF F_K(·) that can be evaluated on *at most one input*, even by an adversary who controls the device storing the key K, as long as: (1) the adversary cannot “leak” the key K out of the device completely (this is similar to the assumptions made in the *Bounded-Retrieval Model*), and (2) the local read/write memory of the machine is restricted, and not too much larger than the size of K. In particular, the *only way* to evaluate F_K(x) on such a device is to overwrite part of the key K during the computation, thus preventing all future evaluations of F_K(·) at any other point x′ ≠ x. We show that this primitive can be used to construct schemes that are secure against dictionary attacks, even by a virus that infects the machine. Our constructions rely on the random-oracle model, and on lower bounds for *graph pebbling* problems.

We show that our techniques can also be used to construct another primitive, called *uncomputable hash functions*, which are hash functions that have a short description but require a large amount of space to compute on any input. We show that this tool can be used to improve the communication complexity of *proofs-of-erasure* schemes, introduced recently by Perito and Tsudik (ESORICS 2010).

Stefan Dziembowski, Tomasz Kazana, Daniel Wichs

### Perfectly Secure Oblivious RAM without Random Oracles

We present an algorithm for implementing a secure oblivious RAM where the access pattern is perfectly hidden in the information theoretic sense, without assuming that the CPU has access to a random oracle. In addition we prove a lower bound on the amount of randomness needed for implementing an information theoretically secure oblivious RAM.

Ivan Damgård, Sigurd Meldgaard, Jesper Buus Nielsen

### Unconditional and Composable Security Using a Single Stateful Tamper-Proof Hardware Token

Cryptographic assumptions regarding tamper-proof hardware tokens have gained increasing attention. Even if the tamper-proof hardware is issued by one of the parties, and hence not necessarily trusted by the other, many tasks become possible: tamper-proof hardware is sufficient for universally composable protocols and for information-theoretically secure protocols, and even allows the creation of software that can only be used once (One-Time Programs). However, all known protocols employing tamper-proof hardware are either indirect, i.e., additional computational assumptions must be used to obtain general two-party computations, or a large number of devices must be used. In this work we present the first protocol realizing universally composable two-party computations (and even trusted One-Time Programs) with information-theoretic security using only a single tamper-proof device issued by one of the mutually distrusting parties.

Nico Döttling, Daniel Kraschewski, Jörn Müller-Quade

### Correlated-Input Secure Hash Functions

We undertake a general study of hash functions secure under *correlated inputs*, meaning that security should be maintained when the adversary sees hash values of many related high-entropy inputs. Such a property is satisfied by a random oracle, and its importance is illustrated by study of the “avalanche effect,” a well-known heuristic in cryptographic hash function design. One can interpret “security” in different ways: e.g., asking for one-wayness or that the hash values look uniformly and independently random; the latter case can be seen as a generalization of correlation-robustness introduced by Ishai et al. (CRYPTO 2003). We give specific applications of these notions to password-based login and efficient search on encrypted data. Our main construction achieves them (without random oracles) for inputs related by *polynomials* over the input space (namely ℤ_p), based on corresponding variants of the q-Diffie-Hellman Inversion assumption. Additionally, we show relations between correlated-input secure hash functions and cryptographic primitives secure under related-key attacks. Using our techniques, we are also able to obtain a host of new results for such related-key attack secure cryptographic primitives.

Vipul Goyal, Adam O’Neill, Vanishree Rao

### Black-Box Circular-Secure Encryption beyond Affine Functions

We show how to achieve public-key encryption schemes that can securely encrypt nonlinear functions of their own secret key. Specifically, we show that for any constant d ∈ ℕ, there exists a public-key encryption scheme that can securely encrypt any function f of its own secret key, assuming f can be expressed as a polynomial of total degree d. Such a scheme is said to be key-dependent message (KDM) secure w.r.t. degree-d polynomials. We also show that for any constants c, e, there exists a public-key encryption scheme that is KDM secure w.r.t. all Turing machines with description size c log λ and running time λ^e, where λ is the security parameter. The security of such public-key schemes can be based either on the standard decision Diffie-Hellman (DDH) assumption or on the learning with errors (LWE) assumption (with certain parameter settings).

In the case of functions that can be expressed as degree-d polynomials, we show that the resulting schemes are also secure with respect to *key cycles* of any length. Specifically, for any polynomial number n of key pairs, our schemes can securely encrypt a degree-d polynomial whose variables are the collection of coordinates of all n secret keys. Prior to this work, it was not known how to achieve this for nonlinear functions.

Our key idea is a general transformation that amplifies KDM security. The transformation takes an encryption scheme that is KDM secure w.r.t. *some* functions even when the secret keys are weak (i.e., chosen from an arbitrary distribution with entropy k), and outputs a scheme that is KDM secure w.r.t. a *richer* class of functions. The resulting scheme may no longer be secure with weak keys. Thus, in some sense, this transformation converts security with weak keys into amplified KDM security.

Zvika Brakerski, Shafi Goldwasser, Yael Tauman Kalai

### Homomorphic Encryption: From Private-Key to Public-Key

We show how to transform any additively homomorphic private-key encryption scheme that is compact, into a public-key encryption scheme. By compact we mean that the length of a homomorphically generated encryption is independent of the number of ciphertexts from which it was created. We do not require anything else on the distribution of homomorphically generated encryptions (in particular, we do not require them to be distributed like real ciphertexts).

Our resulting public-key scheme is homomorphic in the following sense. If the private-key scheme is (i + 1)-hop homomorphic with respect to some set of operations then the public-key scheme we construct is i-hop homomorphic with respect to the same set of operations.

Ron Rothblum

### Identity-Based Encryption Secure against Selective Opening Attack

We present the first IBE schemes that are proven secure against selective opening attack (SOA). This means that if an adversary, given a vector of ciphertexts, adaptively corrupts some fraction of the senders, exposing not only their messages

but also their coins

, the privacy of the unopened messages is guaranteed. Achieving security against such attacks is well-known to be challenging and was only recently done in the PKE case. We show that IBE schemes having a property we call 1-sided public openability (1SPO) yield SOA secure IBE schemes and then provide two 1SPO IBE schemes, the first based on the Boyen-Waters anonymous IBE and the second on Waters’ dual-system approach.

Mihir Bellare, Brent Waters, Scott Yilek

### Functional Encryption: Definitions and Challenges

We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program.

We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area.

Dan Boneh, Amit Sahai, Brent Waters

### Concurrent Non-Malleable Zero Knowledge with Adaptive Inputs

Concurrent non-malleable zero-knowledge (

$\mathcal{CN}\!\mathcal{MZK}$

) considers the concurrent execution of zero-knowledge protocols in a setting where the attacker can simultaneously corrupt multiple provers and verifiers. We provide the first construction of a

$\mathcal{CN}\!\mathcal{MZK}$

protocol that, without any trusted set-up, remains secure even if the attacker may adaptively select the statements to receive proofs of; previous works only handle scenarios where the statements are fixed at the beginning of the execution, or chosen adaptively from a restricted set of statements.

Huijia Lin, Rafael Pass

### Round-Optimal Password-Based Authenticated Key Exchange

We show a general framework for constructing password-based authenticated key exchange protocols with *optimal* round complexity — one message per party, sent simultaneously — in the standard model, assuming a common reference string. When our framework is instantiated using bilinear-map cryptosystems, the resulting protocol is also (reasonably) efficient. Somewhat surprisingly, our framework can be adapted to give protocols in the standard model that are *universally composable* while still using only one (simultaneous) round.

Jonathan Katz, Vinod Vaikuntanathan

### Bringing People of Different Beliefs Together to Do UC

Known constructions of UC secure protocols are based on the premise that different parties collectively agree on some trusted setup. In this paper, we consider the following two intriguing questions: Is it possible to achieve UC if the parties do not want to put all their trust in

one

entity (or more generally, in one setup)? What if the parties have a difference of opinion about what they are willing to trust? The first question has been studied in only a limited way, while the second has never been considered before.

In this paper, we initiate a systematic study to answer the above questions. We consider a scenario with multiple setup instances where each party in the system has some individual *belief* (setup assumption in terms of the given setups). The belief of a party corresponds to what it is willing to trust, and its security is guaranteed given that its belief “holds.” The question considered is: “Given some setups and the (possibly) different beliefs of all the parties, when can UC security be achieved?” We present a general condition on the setups and the beliefs of all the parties under which UC security is possible. Surprisingly, we show that when parties have different beliefs, UC security can be achieved with a more limited “trust” than what is necessary in the traditional setting (where all parties have a common belief).

Sanjam Garg, Vipul Goyal, Abhishek Jain, Amit Sahai

### Secure Two-Party Computation via Cut-and-Choose Oblivious Transfer

Protocols for secure two-party computation enable a pair of parties to compute a function of their inputs while preserving security properties such as privacy, correctness and independence of inputs. Recently, a number of protocols have been proposed for the efficient construction of two-party computation secure in the presence of malicious adversaries (where security is proven under the standard simulation-based ideal/real model paradigm for defining security). In this paper, we present a protocol for this task that follows the methodology of using cut-and-choose to boost Yao’s protocol to be secure in the presence of malicious adversaries. Relying on specific assumptions (DDH), we construct a protocol that is significantly more efficient and far simpler than the protocol of Lindell and Pinkas (Eurocrypt 2007) that follows the same methodology. We provide an exact, concrete analysis of the efficiency of our scheme and demonstrate that (at least for not very small circuits) our protocol is more efficient than any other known today.

Yehuda Lindell, Benny Pinkas

### Practical Adaptive Oblivious Transfer from Simple Assumptions

In an adaptive oblivious transfer (OT) protocol, a sender commits to a database of messages and then repeatedly interacts with a receiver in such a way that the receiver obtains one message per interaction of his choice (and nothing more) while the sender learns nothing about any of the choices. Recently, there has been significant effort to design practical adaptive OT schemes and to use these protocols as a building block for larger database applications. To be well suited for these applications, the underlying OT protocol should: (1) support an efficient initialization phase where *one* commitment can support an arbitrary number of receivers who are guaranteed to have the same view of the database, (2) execute transfers in time independent of the size of the database, and (3) satisfy a strong notion of security under a simple assumption in the standard model.

We present the first adaptive OT protocol simultaneously satisfying these requirements. The sole complexity assumption required is that given (g, g^a, g^b, g^c, Q), where g generates a bilinear group of prime order p and a, b, c are selected randomly from ℤ_p, it is hard to decide if Q = g^{abc}. All prior protocols in the standard model either do not meet our efficiency requirements or require dynamic “q-based” assumptions.
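The stated decisional assumption is easy to write down. A toy sketch (ours, not from the paper; a real instantiation lives in a bilinear group at cryptographic sizes, whereas this uses a small prime-order multiplicative group purely to show the shape of the tuples):

```python
import random

p = 1000003                      # small prime modulus; far too small for security
g = 2
rng = random.Random(42)
a, b, c = (rng.randrange(1, p - 1) for _ in range(3))

# "Real" tuple: Q = g^(abc); "random" tuple: Q is an unrelated group element.
real = (g, pow(g, a, p), pow(g, b, p), pow(g, c, p), pow(g, a * b * c, p))
rand = (g, pow(g, a, p), pow(g, b, p), pow(g, c, p),
        pow(g, rng.randrange(1, p - 1), p))
```

The assumption is that, without knowledge of a, b, c, the two kinds of tuples are computationally indistinguishable.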

Our construction makes an important change to the established “assisted decryption” technique for designing adaptive OT. As in prior works, the sender commits to a database of n messages by publishing an encryption of each message and a signature on each encryption. Then, each transfer phase can be executed in time *independent* of n as the receiver blinds one of the encryptions and proves knowledge of the blinding factors and a signature on this encryption, after which the sender helps the receiver decrypt the chosen ciphertext. One of the main obstacles to designing an adaptive OT scheme from a simple assumption is realizing a suitable signature for this purpose (i.e., enabling signatures on group elements in a manner that later allows for efficient proofs). We make the observation that a secure signature scheme is not necessary for this paradigm, provided that signatures can only be forged in certain ways. We then show how to efficiently integrate an insecure signature into a secure adaptive OT construction.

Matthew Green, Susan Hohenberger

### Completeness Theorems with Constructive Proofs for Finite Deterministic 2-Party Functions

In this paper we present simple but comprehensive combinatorial criteria for completeness of finite deterministic 2-party functions with respect to information-theoretic security. We give a general protocol construction for efficient and statistically secure reduction of oblivious transfer to any finite deterministic 2-party function that fulfills our criteria. For the resulting protocols we prove universal composability. Our results are tight in the sense that our criteria still are necessary for any finite deterministic 2-party function to allow for implementation of oblivious transfer with statistical privacy and correctness.

We unify and generalize results of Joe Kilian (1991, 2000) in two ways. Firstly, we show that his completeness criteria also hold in the UC framework. Secondly, and this is our main contribution, our criteria also cover a wide class of primitives that were not subject to previous criteria. We show that there are non-trivial examples of finite deterministic 2-party functions that are neither symmetric nor asymmetric and therefore have not been covered by existing completeness criteria so far.

As a corollary of our work, every finite deterministic 2-party function is either complete or can be considered equivalent to a non-complete symmetric 2-party function; this assertion holds true with respect to both active and passive adversaries. Thereby, known results on non-complete symmetric 2-party functions are strengthened.

### A Zero-One Law for Secure Multi-party Computation with Ternary Outputs

There are protocols to privately evaluate any function in the passive (honest-but-curious) setting assuming that the honest nodes are in majority. For some specific functions, protocols are known which remain secure even without an honest majority. The seminal work by Chor and Kushilevitz [7] gave a complete characterization of Boolean functions, showing that each Boolean function either requires an honest majority, or is such that it can be privately evaluated regardless of the number of colluding nodes.

The problem of discovering the threshold for secure evaluation of more general functions remains an open problem. Towards a resolution, we provide a complete characterization of the security threshold for functions with three different outputs. Surprisingly, the zero-one law for Boolean functions extends to ℤ_3, meaning that each function with range ℤ_3 either requires honest majority or tolerates up to n colluding nodes.

Gunnar Kreitz

### PCPs and the Hardness of Generating Private Synthetic Data

Assuming the existence of one-way functions, we show that there is no polynomial-time, differentially private algorithm $\mathcal{A}$ that takes a database D ∈ ({0,1}^d)^n and outputs a “synthetic database” $\widehat{D}$ all of whose two-way marginals are approximately equal to those of D. (A two-way marginal is the fraction of database rows x ∈ {0,1}^d with a given pair of values in a given pair of columns.) This answers a question of Barak et al. (PODS ‘07), who gave an algorithm running in time poly(n, 2^d).
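The parenthetical definition is directly computable. A short sketch (ours, purely illustrative) that computes all two-way marginals of a toy binary database:

```python
from itertools import combinations

def two_way_marginals(D):
    """Map (j, k, v_j, v_k) to the fraction of rows of D with value v_j in
    column j and value v_k in column k, for every pair of columns j < k."""
    n = len(D)
    d = len(D[0])
    marg = {}
    for j, k in combinations(range(d), 2):
        for vj in (0, 1):
            for vk in (0, 1):
                cnt = sum(1 for row in D if row[j] == vj and row[k] == vk)
                marg[(j, k, vj, vk)] = cnt / n
    return marg

D = [(0, 1, 1), (1, 1, 0), (0, 1, 0), (0, 0, 1)]   # n = 4 rows, d = 3 columns
m = two_way_marginals(D)
```

The hardness result says that no efficient differentially private algorithm can output a synthetic database whose values under this computation approximately match those of the input database.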

Our proof combines a construction of hard-to-sanitize databases based on digital signatures (by Dwork et al., STOC ‘09) with encodings based on probabilistically checkable proofs.

We also present both negative and positive results for generating “relaxed” synthetic data, where the fraction of rows in D satisfying a predicate c is estimated by applying c to each row of $\widehat{D}$ and aggregating the results in some way.

### Limits of Computational Differential Privacy in the Client/Server Setting

Differential privacy is a well established definition guaranteeing that queries to a database do not reveal “too much” information about specific individuals who have contributed to the database. The standard definition of differential privacy is information theoretic in nature, but it is natural to consider *computational* relaxations and to explore what can be achieved with respect to such notions. Mironov et al. (Crypto 2009) and McGregor et al. (FOCS 2010) recently introduced and studied several variants of computational differential privacy, and show that in the *two-party* setting (where data is split between two parties) these relaxations can offer significant advantages.

Left open by prior work was the extent, if any, to which computational differential privacy can help in the usual *client/server* setting where the entire database resides at the server, and the client poses queries on this data. We show, for queries with output in ℝ^n (for constant n) and with respect to a large class of utilities, that any computationally private mechanism can be converted to a statistically private mechanism that is equally efficient and achieves roughly the same utility.

### Towards Privacy for Social Networks: A Zero-Knowledge Based Definition of Privacy

We put forward a zero-knowledge based definition of privacy. Our notion is strictly stronger than the notion of differential privacy and is particularly attractive when modeling privacy in social networks. We furthermore demonstrate that it can be meaningfully achieved for tasks such as computing averages, fractions, histograms, and a variety of graph parameters and properties, such as average degree and distance to connectivity. Our results are obtained by establishing a connection between zero-knowledge privacy and sample complexity, and by leveraging recent sublinear time algorithms.

Johannes Gehrke, Edward Lui, Rafael Pass

### On the Black-Box Complexity of Optimally-Fair Coin Tossing

A fair two-party coin-tossing protocol is one in which both parties output the same bit that is almost uniformly distributed (i.e., it equals 0 and 1 with probability at most negligibly far from one half). It is well known that it is *impossible* to achieve fair coin tossing even in the presence of fail-stop adversaries (Cleve, FOCS 1986). In fact, Cleve showed that for every coin-tossing protocol running for r rounds, an efficient fail-stop adversary can bias the output by Ω(1/r). Since this is the best possible, a protocol that limits the bias of any adversary to O(1/r) is called *optimally-fair*. The only optimally-fair protocol known to exist relies on the existence of oblivious transfer, because it uses general secure computation (Moran, Naor and Segev, TCC 2009). However, it is possible to achieve a bias of $O(1/\sqrt{r})$ in r rounds relying only on the assumption that one-way functions exist. In this paper we show that it is impossible to achieve optimally-fair coin tossing via a black-box construction from one-way functions for r that is less than O(n/log n), where n is the input/output length of the one-way function used. An important corollary of this is that it is impossible to construct an optimally-fair coin-tossing protocol via a black-box construction from one-way functions whose round complexity is *independent* of the security parameter n determining the security of the one-way function being used. Informally speaking, the main ingredient of our proof is to eliminate the random oracle from “secure” protocols with “low round complexity” and simulate the protocol securely against semi-honest adversaries in the plain model. We believe our simulation lemma to be of broader interest.

Dana Dachman-Soled, Yehuda Lindell, Mohammad Mahmoody, Tal Malkin
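The bias phenomenon behind Cleve’s lower bound can be seen concretely in a deliberately naive protocol. The sketch below is not a protocol from the literature: the XOR output rule and the honest party’s fallback are assumptions chosen only to make a fail-stop attack visible in simulation.

```python
import random

def naive_coin_toss(r: int, adversary: bool) -> int:
    """Toy r-round coin toss: the parties alternately announce random
    bits and the output is the XOR of all announced bits. A fail-stop
    adversary (party B) previews its final message and aborts whenever
    completing the protocol would yield output 1; the honest party A
    then falls back to the XOR of the bits announced so far.
    """
    bits = []
    for rnd in range(r):
        bits.append(random.randint(0, 1))        # A's bit for this round
        b_bit = random.randint(0, 1)             # B's bit for this round
        if adversary and rnd == r - 1 and (sum(bits) + b_bit) % 2 == 1:
            return sum(bits) % 2                 # B aborted; A's fallback
        bits.append(b_bit)
    return sum(bits) % 2

trials = 20000
freq = sum(naive_coin_toss(5, adversary=True) for _ in range(trials)) / trials
# Analytically, this attack drives Pr[output = 1] down to 1/4 -- far
# worse than the Omega(1/r) bias that is unavoidable in general.
```

Real protocols use commitments to hide each party’s contribution until the end, which is why the attack must be subtler and the achievable bias shrinks with the round count r.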

### Tight Bounds for Classical and Quantum Coin Flipping

*Coin flipping* is a cryptographic primitive for which strictly better protocols exist if the players are allowed to exchange not only classical but also quantum messages. During the past few years, several results have appeared which give a tight bound on the range of implementable unconditionally secure coin flips, both in the classical and in the quantum setting, and for both weak and strong coin flipping. But the picture is still incomplete: in the quantum setting, all results consider only protocols with *perfect correctness*, and in the classical setting tight bounds for strong coin flipping are still missing.

We give a general definition of coin flipping which unifies the notion of strong and weak coin flipping (it contains both of them as special cases) and allows the honest players to abort with a certain probability. We give tight bounds on the achievable range of parameters both in the classical and in the quantum setting.

Esther Hänggi, Jürg Wullschleger

### Exploring the Limits of Common Coins Using Frontier Analysis of Protocols

In 2-party secure computation, access to common, trusted randomness is a fundamental primitive. It is widely employed in the setting of computationally bounded players (under various complexity assumptions) to great advantage. In this work we seek to understand the power of trusted randomness, primarily in the computationally unbounded (or information theoretic) setting. We show that a source of common randomness does not add any additional power for secure evaluation of deterministic functions, even when one of the parties has arbitrary influence over the distribution of the common randomness. Further, common randomness helps only in a trivial sense for realizing randomized functions too (namely, it only allows for sampling from publicly fixed distributions), if UC security is required.

To obtain these impossibility results, we employ a recently developed protocol analysis technique, which we call *frontier analysis*. This involves analyzing carefully defined “frontiers” in a weighted tree induced by the protocol’s execution (or executions, with various inputs), and establishing various properties regarding one or more such frontiers. We demonstrate the versatility of this technique by employing carefully chosen frontiers to derive the different results. To analyze randomized functionalities we introduce a frontier argument that involves a geometric analysis of the space of probability distributions.

Finally, we relate our results to computational intractability questions. We give an equivalent formulation of the “cryptomania assumption” (that there is a semi-honest or standalone secure oblivious transfer protocol) in terms of UC-secure reductions among randomized functionalities. Also, we provide an *unconditional result* on the uselessness of common randomness, even in the computationally bounded setting.

Our results make significant progress towards understanding the exact power of shared randomness in cryptography. To the best of our knowledge, our results are the first to comprehensively characterize the power of large classes of randomized functionalities.

Hemanta K. Maji, Pichayoot Ouppaphan, Manoj Prabhakaran, Mike Rosulek

### Limits on the Stretch of Non-adaptive Constructions of Pseudo-Random Generators

The standard approach for constructing a large-stretch pseudo-random generator given a one-way permutation or given a smaller-stretch pseudo-random generator involves repeatedly composing the given primitive with itself. In this paper, we consider whether this approach is necessary, that is, whether there are constructions that do not involve composition. More formally, we consider black-box constructions of pseudo-random generators from pseudo-random generators of smaller stretch or from one-way permutations, where the constructions make only non-adaptive queries to the given object. We consider three classes of such constructions, and for each class, we give a black-box impossibility result that demonstrates a contrast between the stretch that can be achieved by adaptive and non-adaptive black-box constructions.
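The repeated-composition approach referred to above can be sketched as follows. Here `toy_prg`, built from SHA-256, is a stand-in assumption for a real length-doubling PRG, not a secure construction; the point of the sketch is that each oracle call depends on the previous answer, i.e., the queries are inherently adaptive.

```python
import hashlib

def toy_prg(seed: bytes) -> bytes:
    # Toy length-doubling "PRG": 32-byte seed -> 64 bytes. SHA-256 is a
    # stand-in with no pseudorandomness proof; illustration only.
    return (hashlib.sha256(seed + b"\x00").digest()
            + hashlib.sha256(seed + b"\x01").digest())

def stretch(seed: bytes, blocks: int) -> bytes:
    """Iterated composition: feed half of each PRG output back in as
    the next state and emit the other half. Every query to toy_prg
    depends on the previous answer, so the calls are adaptive.
    """
    out, state = b"", seed
    for _ in range(blocks):
        expanded = toy_prg(state)
        state, block = expanded[:32], expanded[32:]
        out += block
    return out

stream = stretch(b"\x00" * 32, blocks=4)   # 128 output bytes from a 32-byte seed
```

A non-adaptive construction would have to fix all its queries up front from the seed alone; the results below show how severely that restriction limits the achievable stretch.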

We first consider constructions that make constantly-many non-adaptive queries to a given pseudo-random generator, where the seed length of the construction is at most O(log n) bits longer than the length n of each oracle query. We show that such constructions cannot achieve stretch that is even a single bit greater than the stretch of the given pseudo-random generator.

We then consider constructions with arbitrarily long seeds, but where oracle queries are collectively chosen in a manner that depends only on a portion of the seed whose length is at most O(log n) bits longer than the length n of each query. We show that such constructions making constantly-many non-adaptive queries cannot achieve stretch that is ω(log n) bits greater than the stretch of the given pseudo-random generator.

Finally, we consider a class of constructions motivated by streaming computation. Specifically, we consider constructions where the computation of each individual output bit depends only on the seed and on the response to a single query to a one-way permutation. We allow the seed to have a public portion that is arbitrarily long but must always be included in the output, and a non-public portion that is at most O(log n) bits longer than the length n of each oracle query. We show that such constructions whose queries are chosen non-adaptively based only on the non-public portion of the seed cannot achieve linear stretch.

Josh Bronson, Ali Juma, Periklis A. Papakonstantinou

### On the Complexity of Non-adaptively Increasing the Stretch of Pseudorandom Generators

We study the complexity of black-box constructions of linear-stretch pseudorandom generators starting from a 1-bit-stretch oracle generator G. We show that there is no construction which makes non-adaptive queries to G and then just outputs bits of the answers. The result extends to constructions that both work in the non-uniform setting and are only black-box in the primitive G (not the proof of correctness), in the sense that any such construction implies NP/poly $\ne$ P/poly. We then argue that not much more can be obtained using our techniques: via a modification of an argument of Reingold, Trevisan, and Vadhan (TCC ’04), we prove in the non-uniform setting that there is a construction which only treats the primitive G as black-box, has polynomial stretch, makes non-adaptive queries to the oracle G, and outputs an affine function (i.e., parity or its complement) of the oracle query answers.

Eric Miles, Emanuele Viola

### Concurrent Security and Non-malleability

The Internet enables concurrent executions of cryptographic protocols. This concurrent setting, however, also brings forth new types of coordinated attacks in which an adversary controls many parties, interleaving the executions of the various protocol instances, and attempts to “maul” messages from one execution to use in another.

In this talk, we will survey some recent methods for achieving concurrent security without relying on any trusted set-up (such as, e.g., common reference strings).

Rafael Pass

### (Nearly) Round-Optimal Black-Box Constructions of Commitments Secure against Selective Opening Attacks

Selective opening attacks against commitment schemes occur when the commitment scheme is repeated in parallel (or concurrently) and an adversary can choose, depending on the commit-phase transcript, to see the values and openings of some subset of the committed bits. Commitments are secure under such attacks if one can prove that the remaining, unopened commitments stay secret.

We prove the following black-box constructions and black-box lower bounds for commitments secure against selective opening attacks:

1. For parallel composition, 4 (resp. 5) rounds are necessary and sufficient to build computationally (resp. statistically) binding and computationally hiding commitments. Also, there are no perfectly binding commitments.

2. For parallel composition, O(1)-round statistically-hiding commitments are equivalent to O(1)-round statistically-binding commitments.

3. For concurrent composition, ω(log n) rounds are sufficient to build statistically binding commitments and are necessary even to build computationally binding and computationally hiding commitments, up to log log n factors.

Our lower bounds improve upon the parameters obtained by the impossibility results of Bellare et al. (EUROCRYPT ’09), and are proved in a fundamentally different way, by observing that essentially all known impossibility results for black-box zero-knowledge can also be applied to the case of commitments secure against selective opening attacks.

David Xiao

### Limits on the Power of Zero-Knowledge Proofs in Cryptographic Constructions

For over 20 years, black-box impossibility results have been used to argue the infeasibility of constructing certain cryptographic primitives (e.g., key agreement) from others (e.g., one-way functions). A widely recognized limitation of such impossibility results, however, is that they say nothing about the usefulness of (known) *non-black-box* techniques. This is unsatisfying, as we would at least like to rule out constructions using the set of techniques we have at our disposal.

With this motivation in mind, we suggest a new framework for black-box constructions that encompasses constructions with a non-black-box flavor: specifically, those that rely on *zero-knowledge proofs* relative to some oracle. We show that our framework is powerful enough to capture the Naor-Yung/Sahai paradigm for building a (shielding) CCA-secure public-key encryption scheme from a CPA-secure one, something ruled out by prior black-box separation results. On the other hand, we show that several black-box impossibility results still hold even in a setting that allows for zero-knowledge proofs.

Zvika Brakerski, Jonathan Katz, Gil Segev, Arkady Yerukhimovich

### Towards Non-Black-Box Lower Bounds in Cryptography

We consider average-case strengthenings of the traditional assumption that coNP is not contained in AM. Under these assumptions, we rule out generic and potentially *non-black-box* constructions of various cryptographic primitives (e.g., one-way permutations, collision-resistant hash functions, constant-round statistically hiding commitments, and constant-round black-box zero-knowledge proofs for NP) from one-way functions, assuming the security reductions are *black-box*.

Rafael Pass, Wei-Lung Dustin Tseng, Muthuramakrishnan Venkitasubramaniam

### On Black-Box Separations among Injective One-Way Functions

A one-way permutation (OWP) is one of the most fundamental cryptographic primitives and can be used as a building block for most basic symmetric-key cryptographic primitives. Despite its importance and usefulness, however, previous black-box separation results have shown that constructing an OWP from another primitive seems hopeless, unless the building blocks already achieve the “one-way” and “permutation” properties simultaneously. In this paper, to shed more light on constructions of an OWP from other primitives, we study the construction of an OWP from primitives that are very close to an OWP. Concretely, as a negative result, we show that there is no fully black-box construction of an OWP from a length-increasing injective one-way function (OWF), even if the latter is just 1-bit-increasing and achieves a strong form of one-wayness. As a corollary, we show that there is no fully black-box construction of an OWP from a regular OWF with regularity greater than 1. Since a permutation is length-preserving and injective, and is a regular OWF with regularity 1, our negative result indicates that constructing an OWP from another primitive is quite difficult, even when the building blocks are primitives very close to an OWP. Moreover, we extend our separation of an OWP from a length-increasing injective OWF, and show a certain restrictive form of black-box separations among injective OWFs in terms of how much a function stretches its input. This result exhibits a hierarchy among injective OWFs (including OWPs).

Takahiro Matsuda, Kanta Matsuura

### Impossibility of Blind Signatures from One-Way Permutations

A seminal result in cryptography is that signature schemes can be constructed (in a black-box fashion) from any one-way function. The minimal assumptions needed to construct *blind* signature schemes, however, have remained unclear. Here, we rule out black-box constructions of blind signature schemes from one-way functions. In fact, we rule out constructions even from a random permutation oracle, and our results hold even for blind signature schemes for 1-bit messages that achieve security only against honest-but-curious behavior.

Jonathan Katz, Dominique Schröder, Arkady Yerukhimovich