About This Book

The two-volume set LNCS 8042 and LNCS 8043 constitutes the refereed proceedings of the 33rd Annual International Cryptology Conference, CRYPTO 2013, held in Santa Barbara, CA, USA, in August 2013. The 61 revised full papers presented in LNCS 8042 and LNCS 8043 were carefully reviewed and selected from numerous submissions. Two abstracts of the invited talks are also included in the proceedings. The papers are organized in topical sections on lattices and FHE; foundations of hardness; cryptanalysis; MPC - new directions; leakage resilience; symmetric encryption and PRFs; key exchange; multilinear maps; ideal ciphers; implementation-oriented protocols; number-theoretic hardness; MPC - foundations; codes and secret sharing; signatures and authentication; quantum security; new primitives; and functional encryption.

Table of Contents

Frontmatter

Session 11: Implementation-Oriented Protocols

Fast Cut-and-Choose Based Protocols for Malicious and Covert Adversaries

In the setting of secure two-party computation, two parties wish to securely compute a joint function of their private inputs, while revealing only the output. One of the primary techniques for achieving efficient secure two-party computation is that of Yao’s garbled circuits (FOCS 1986). In the semi-honest model, where just one garbled circuit is constructed and evaluated, Yao’s protocol has proven itself to be very efficient. However, a malicious adversary who constructs the garbled circuit may construct a garbling of a different circuit computing a different function, and this cannot be detected (due to the garbling). In order to solve this problem, many circuits are sent and some of them are opened to check that they are correct while the others are evaluated. This methodology, called cut-and-choose, introduces significant overhead, both in computation and in communication, which is mainly due to the number of circuits that must be used in order to prevent cheating.

In this paper, we present a cut-and-choose protocol for secure computation based on garbled circuits, with security in the presence of malicious adversaries, that vastly improves on all previous protocols of this type. Concretely, for a cheating probability of at most 2^{−40}, the best previous works send between 125 and 128 circuits. In contrast, in our protocol 40 circuits alone suffice (with some additional overhead). Asymptotically, we achieve a cheating probability of 2^{−s}, where s is the number of garbled circuits, in contrast to the previous best of 2^{−0.32s}. We achieve this by introducing a new cut-and-choose methodology with the property that in order to cheat, all of the evaluated circuits must be incorrect, and not just the majority, as in previous works.
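
The circuit counts quoted above follow directly from the two cheating bounds; a quick sanity check (plain arithmetic, not code from the paper): with cheating probability 2^{−c·s} for s circuits, reaching a 2^{−40} bound requires ⌈40/c⌉ circuits.

```python
import math

def circuits_needed(target_exponent: int, rate: float) -> int:
    """Smallest s with 2^(-rate*s) <= 2^(-target_exponent): the number of
    garbled circuits needed for a given cheating-probability bound."""
    return math.ceil(target_exponent / rate)

# Previous best: cheating probability 2^(-0.32s) -> ~125 circuits.
prev = circuits_needed(40, 0.32)
# This paper: cheating probability 2^(-s) -> 40 circuits.
new = circuits_needed(40, 1.0)
print(prev, new)  # 125 40
```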

Yehuda Lindell

Efficient Secure Two-Party Computation Using Symmetric Cut-and-Choose

Beginning with the work of Lindell and Pinkas, researchers have proposed several protocols for secure two-party computation based on the cut-and-choose paradigm. In current instantiations of this approach, one party generates κ garbled circuits; some fraction of those are “checked” by the other party, and the remaining fraction are evaluated.

We introduce here the idea of symmetric cut-and-choose protocols, in which both parties generate κ circuits to be checked by the other party. The main advantage of our technique is that κ can be reduced by a factor of 3 while attaining the same statistical security level as in prior work. Since the number of garbled circuits dominates the costs of the protocol, especially as larger circuits are evaluated, our protocol is expected to run up to 3 times faster than existing schemes. Preliminary experiments validate this claim.

Yan Huang, Jonathan Katz, David Evans

Garbled Circuits Checking Garbled Circuits: More Efficient and Secure Two-Party Computation

Applying cut-and-choose techniques to Yao’s garbled circuit protocol has been a promising approach for designing efficient Two-Party Computation (2PC) with malicious and covert security, as is evident from various optimizations and software implementations in recent years. We revisit the security and efficiency properties of this popular approach and propose alternative constructions and a new definition that are more suitable for use in practice.

We design an efficient fully-secure 2PC protocol for two-output functions that only requires O(t|C|) symmetric-key operations (with small constant factors, and ignoring factors that are independent of the circuit in use) in the Random Oracle Model, where |C| is the circuit size and t is a statistical security parameter. This is essentially the optimal complexity for protocols based on cut-and-choose, resolving a main question left open by the previous work on the subject.

Our protocol utilizes novel techniques for enforcing garbler’s input consistency and handling two-output functions that are more efficient than all prior solutions.

Motivated by the goal of eliminating the all-or-nothing nature of 2PC with covert security (privacy and correctness are fully compromised if the adversary is not caught in the challenge phase), we propose a new security definition for 2PC that strengthens the guarantees provided by the standard covert model, and offers a smoother security vs. efficiency tradeoff to protocol designers in choosing the right deterrence factor. In our new notion, correctness is always guaranteed, privacy is fully guaranteed with probability 1 − ε, and with probability ε (i.e., in the event of undetected cheating), privacy is only “partially compromised,” with at most a single bit of information leaked in case of an abort.

We present two efficient 2PC constructions achieving our new notion. Both protocols are competitive with the previous covert 2PC protocols based on cut-and-choose.

A distinct feature of the techniques we use in all our constructions is to check consistency of inputs and outputs using new gadgets that are themselves garbled circuits, and to verify the validity of these gadgets using multi-stage cut-and-choose openings.

Payman Mohassel, Ben Riva

Improved OT Extension for Transferring Short Secrets

We propose an optimization and generalization of the OT extension of Ishai et al. from Crypto 2003. For a computational security parameter k, our OT extension for short secrets offers an O(log k) factor performance improvement in communication and computation, compared to prior work. In concrete terms, for today’s security parameters, this means approximately a factor 2-3 improvement.

This results in corresponding improvements in applications relying on such OT. In particular, for two-party semi-honest SFE, this gives an O(log k) factor improvement in communication over state-of-the-art Yao garbled circuits, and has the same asymptotic complexity as the recent multi-round construction of Kolesnikov and Kumaresan from SCN 2012. For multi-party semi-honest SFE, where their construction is inapplicable, our construction implies an O(log k) factor communication and computation improvement over the best previous constructions. As with our OT extension, for today’s security parameters, this means approximately a factor 2 improvement in semi-honest multi-party SFE.

Our building block of independent interest is a novel IKNP-based framework for 1-out-of-n OT extension, which offers an O(log n) factor performance improvement over previous work (for n ≤ k), and a concrete factor improvement of up to 5 for today’s security parameters (n = k = 128).

Our protocol is the first practical OT with communication/computation cost sublinear in the security parameter (the prior sublinear constructions of Ishai et al. [15,16] are not efficient in concrete terms).

Vladimir Kolesnikov, Ranjit Kumaresan

Time-Optimal Interactive Proofs for Circuit Evaluation

Several research teams have recently been working toward the development of practical general-purpose protocols for verifiable computation. These protocols enable a computationally weak verifier to offload computations to a powerful but untrusted prover, while providing the verifier with a guarantee that the prover performed the requested computations correctly. Despite substantial progress, existing implementations require further improvements before they become practical for most settings. The main bottleneck is typically the extra effort required by the prover to return an answer with a guarantee of correctness, compared to returning an answer with no guarantee.

We describe a refinement of a powerful interactive proof protocol due to Goldwasser, Kalai, and Rothblum [20]. Cormode, Mitzenmacher, and Thaler [14] show how to implement the prover in this protocol in time O(S log S), where S is the size of an arithmetic circuit computing the function of interest. Our refinements apply to circuits with sufficiently “regular” wiring patterns; for these circuits, we bring the runtime of the prover down to O(S). That is, our prover can evaluate the circuit with a guarantee of correctness, with only a constant-factor blowup in work compared to evaluating the circuit with no guarantee.

We argue that our refinements capture a large class of circuits, and we complement our theoretical results with experiments on problems such as matrix multiplication and determining the number of distinct elements in a data stream. Experimentally, our refinements yield a 200x speedup for the prover over the implementation of Cormode et al., and our prover is less than 10x slower than a C++ program that simply evaluates the circuit. Along the way, we describe a special-purpose protocol for matrix multiplication that is of interest in its own right.
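
The paper's special-purpose matrix-multiplication protocol is interactive and beyond a short sketch, but the flavor of probabilistic verification of a product can be seen in the classic Freivalds check (a standard technique, not the protocol from the paper), which tests a claimed product C = A·B in O(n²) time per trial with one-sided error:

```python
import random

def freivalds(A, B, C, trials=20):
    """Probabilistically check C == A @ B over the integers.
    Each trial multiplies by a random 0/1 vector x and compares
    A(Bx) with Cx; a wrong product is caught w.p. >= 1/2 per trial."""
    n = len(A)
    for _ in range(trials):
        x = [random.randint(0, 1) for _ in range(n)]
        Bx = [sum(B[i][j] * x[j] for j in range(n)) for i in range(n)]
        ABx = [sum(A[i][j] * Bx[j] for j in range(n)) for i in range(n)]
        Cx = [sum(C[i][j] * x[j] for j in range(n)) for i in range(n)]
        if ABx != Cx:
            return False  # definitely wrong
    return True  # correct with probability >= 1 - 2^(-trials)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]  # the true product A*B
bad = [[19, 22], [43, 51]]   # one entry corrupted
print(freivalds(A, B, good), freivalds(A, B, bad))
```

Note that, unlike the interactive proofs discussed here, this check only verifies the product itself, not an arbitrary circuit evaluation.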

Our final contribution is the design of an interactive proof protocol targeted at general data parallel computation. Compared to prior work, this protocol can more efficiently verify complicated computations as long as that computation is applied independently to many different pieces of data.

Justin Thaler

SNARKs for C: Verifying Program Executions Succinctly and in Zero Knowledge

An argument system for NP is a proof system that allows efficient verification of NP statements, given proofs produced by an untrusted yet computationally-bounded prover. Such a system is non-interactive and publicly verifiable if, after a trusted party publishes a proving key and a verification key, anyone can use the proving key to generate non-interactive proofs for adaptively chosen NP statements, and proofs can be verified by anyone using the verification key.

We present an implementation of a publicly-verifiable non-interactive argument system for NP. The system, moreover, is a zero-knowledge proof-of-knowledge. It directly proves correct executions of programs on TinyRAM, a nondeterministic random-access machine tailored for efficient verification. Given a program P and time bound T, the system allows for proving correct execution of P, on any input x, for up to T steps, after a one-time setup requiring $\tilde{O}(|P| \cdot T)$ cryptographic operations. An honest prover requires $\tilde{O}(|P| \cdot T)$ cryptographic operations to generate such a proof, while proof verification can be performed with only O(|x|) cryptographic operations. This system can be used to prove the correct execution of C programs, using our TinyRAM port of the GCC compiler.

This yields a zero-knowledge Succinct Non-interactive ARgument of Knowledge (zk-SNARK) for program executions in the preprocessing model: a powerful solution for delegating NP computations, with several features not achieved by previously implemented primitives.

Our approach builds on recent theoretical progress in the area. We present efficiency improvements and implementations of two main ingredients:

1. Given a C program, we produce a circuit whose satisfiability encodes the correctness of execution of the program. Leveraging nondeterminism, the generated circuit’s size is merely quasilinear in the size of the computation. In particular, we efficiently handle arbitrary and data-dependent loops, control flow, and memory accesses. This is in contrast with existing “circuit generators”, which in the general case produce circuits of quadratic size.

2. Given a linear PCP for verifying satisfiability of circuits, we produce a corresponding SNARK. We construct such a linear PCP (which, moreover, is zero-knowledge and very efficient) by building and improving on recent work on quadratic arithmetic programs.

Eli Ben-Sasson, Alessandro Chiesa, Daniel Genkin, Eran Tromer, Madars Virza

Invited Talk: Why the Web Still Runs on RC4

Session 12: Number-Theoretic Hardness

On the Function Field Sieve and the Impact of Higher Splitting Probabilities

Application to Discrete Logarithms in $\mathbb{F}_{2^{1971}}$ and $\mathbb{F}_{2^{3164}}$

In this paper we propose a binary field variant of the Joux-Lercier medium-sized Function Field Sieve, which results not only in complexities as low as $L_{q^n}(1/3,(4/9)^{1/3})$ for computing arbitrary logarithms, but also in a heuristic polynomial-time algorithm for finding the discrete logarithms of degree one and two elements when the field has a subfield of an appropriate size. To illustrate the efficiency of the method, we have successfully solved the DLP in the finite fields with 2^{1971} and 2^{3164} elements, setting a record for binary fields.
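
For a rough sense of what this complexity means numerically, one can evaluate the standard L-notation $L_Q(\alpha, c) = \exp(c\,(\ln Q)^{\alpha}(\ln\ln Q)^{1-\alpha})$ at these field sizes (our sketch using the textbook definition; the asymptotic hides o(1) terms, so the resulting bit counts are indicative only):

```python
import math

def log2_L(Q_bits: int, alpha: float, c: float) -> float:
    """log2 of L_Q(alpha, c) = exp(c * (ln Q)^alpha * (ln ln Q)^(1-alpha)),
    for Q on the order of 2^Q_bits."""
    lnQ = Q_bits * math.log(2)
    return c * lnQ**alpha * math.log(lnQ)**(1 - alpha) / math.log(2)

c = (4 / 9) ** (1 / 3)  # the constant from the abstract
for bits in (1971, 3164):
    # Indicative bit-complexity estimate for a field with 2^bits elements.
    print(bits, round(log2_L(bits, 1 / 3, c), 1))
```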

Faruk Göloğlu, Robert Granger, Gary McGuire, Jens Zumbrägel

An Algebraic Framework for Diffie-Hellman Assumptions

We put forward a new algebraic framework to generalize and analyze Diffie-Hellman-like decisional assumptions, which allows us to argue about security and applications by considering only algebraic properties. Our $\mathcal{D}_{\ell,k}\mathsf{MDDH}$ assumption states that it is hard to decide whether a vector in $\mathbb{G}^\ell$ is linearly dependent on the columns of some matrix in $\mathbb{G}^{\ell\times k}$ sampled according to distribution $\mathcal{D}_{\ell,k}$. It covers known assumptions such as DDH, 2-Lin (the linear assumption), and k-Lin (the k-linear assumption). Using our algebraic viewpoint, we can relate the generic hardness of our assumptions in m-linear groups to the irreducibility of certain polynomials which describe the output of $\mathcal{D}_{\ell,k}$. We use the hardness results to find new distributions for which the $\mathcal{D}_{\ell,k}\mathsf{MDDH}$ assumption holds generically in m-linear groups. In particular, our new assumptions 2-SCasc and 2-ILin are generically hard in bilinear groups and, compared to 2-Lin, have a shorter description size, which is a relevant parameter for efficiency in many applications. These results support using our new assumptions as natural replacements for the 2-Lin assumption, which has already been used in a large number of applications.

To illustrate the conceptual advantages of our algebraic framework, we construct several fundamental primitives based on any MDDH assumption. In particular, we can give many instantiations of a primitive in a compact way, including public-key encryption, hash-proof systems, pseudo-random functions, and Groth-Sahai NIZK and NIWI proofs. As an independent contribution we give more efficient NIZK and NIWI proofs for membership in a subgroup of $\mathbb{G}^\ell$, for validity of ciphertexts, and for equality of plaintexts. The results imply very significant efficiency improvements for a large number of schemes, most notably Naor-Yung-type constructions.
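
As a toy illustration of the assumption's shape (our sketch, not from the paper): for ℓ = 2, k = 1 the matrix is a single column and the MDDH problem essentially collapses to DDH. Over a small Schnorr group (parameters illustrative only, far from cryptographic size), membership of a vector in the column span corresponds to a multiplicative relation among the exponents:

```python
import random

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 generates the
# order-q subgroup. Real instantiations use ~256-bit q.
q, p = 1019, 2039
g = 4

rng = random.Random(7)
a1, a2 = rng.randrange(1, q), rng.randrange(1, q)  # matrix A = (a1, a2)^T
w = rng.randrange(1, q)                            # witness

# "Real" instance: v = A*w, published only as group elements.
v1, v2 = (a1 * w) % q, (a2 * w) % q
real_v = (pow(g, v1, p), pow(g, v2, p))

# A vector deliberately just off the column span (stands in for a uniform one).
off_v = (pow(g, v1, p), pow(g, (v2 + 1) % q, p))

def in_span(V1, V2):
    """Check v1*a2 == v2*a1 in the exponent via (g^v1)^a2 == (g^v2)^a1.
    This uses a1, a2, which a distinguisher does NOT know; the MDDH
    assumption says deciding this from the group elements alone is hard."""
    return pow(V1, a2, p) == pow(V2, a1, p)

print(in_span(*real_v), in_span(*off_v))  # True False
```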

Alex Escala, Gottfried Herold, Eike Kiltz, Carla Ràfols, Jorge Villar

Hard-Core Predicates for a Diffie-Hellman Problem over Finite Fields

A long-standing open problem in cryptography is proving the existence of (deterministic) hard-core predicates for the Diffie-Hellman problem defined over finite fields. In this paper, we make progress on this problem by defining a very natural variation of the Diffie-Hellman problem over $\mathbb{F}_{p^2}$ and proving the unpredictability of every single bit of one of the coordinates of the secret DH value.

To achieve our result, we modify an idea presented at CRYPTO’01 by Boneh and Shparlinski [4], originally developed to prove that the LSB of the elliptic curve Diffie-Hellman problem is hard. We extend this idea in two novel ways:

1. We generalize it to the case of finite fields $\mathbb{F}_{p^2}$;

2. We prove that any bit, not just the LSB, is hard, using the list decoding techniques of Akavia et al. [1] (FOCS’03) as generalized at CRYPTO’12 by Duc and Jetchev [6].

In the process, we prove several other interesting results:

- Our result also holds for a larger class of predicates, called segment predicates in [1];

- We extend the result of Boneh and Shparlinski to prove that every bit (and every segment predicate) of the elliptic curve Diffie-Hellman problem is hard-core;

- We define the notion of a partial one-way function over finite fields $\mathbb{F}_{p^2}$ and prove that every bit (and every segment predicate) of one of the input coordinates for these functions is hard-core.

Nelly Fazio, Rosario Gennaro, Irippuge Milinda Perera, William E. Skeith

Session 13: MPC — Foundations

Encoding Functions with Constant Online Rate or How to Compress Garbled Circuits Keys

Randomized encodings of functions can be used to replace a “complex” function f(x) by a “simpler” randomized mapping $\hat{f}(x;r)$ whose output distribution on an input x encodes the value of f(x) and hides any other information about x. One desirable feature of randomized encodings is low online complexity. That is, the goal is to obtain a randomized encoding $\hat{f}$ of f in which most of the output can be precomputed and published before seeing the input x. When the input x is available, it remains to publish only a short string $\hat{x}$, where the online complexity of computing $\hat{x}$ is independent of (and is typically much smaller than) the complexity of computing f. Yao’s garbled circuit construction gives rise to such randomized encodings in which the online part $\hat{x}$ consists of n encryption keys of length κ each, where n = |x| and κ is a security parameter. Thus, the online rate $|\hat{x}|/|x|$ of this encoding is proportional to the security parameter κ.

In this paper, we show that the online rate can be dramatically improved. Specifically, we show how to encode any polynomial-time computable function f : {0,1}^n → {0,1}^{m(n)} with an online rate of 1 + o(1) and with nearly linear online computation. More concretely, the online part $\hat{x}$ consists of an n-bit string and a single encryption key. These constructions can be based on the decisional Diffie-Hellman assumption (DDH), the Learning with Errors assumption (LWE), or the RSA assumption. We also present a variant of this result which applies to arithmetic formulas, where the encoding only makes use of arithmetic operations, as well as several negative results which complement our positive results.

Our positive results can lead to efficiency improvements in most contexts where randomized encodings of functions are used. We demonstrate this by presenting several concrete applications. These include protocols for secure multiparty computation and for non-interactive verifiable computation in the preprocessing model which achieve, for the first time, an optimal online communication complexity, as well as non-interactive zero-knowledge proofs which simultaneously minimize the online communication and the prover’s online computation.

Benny Applebaum, Yuval Ishai, Eyal Kushilevitz, Brent Waters

Efficient Multiparty Protocols via Log-Depth Threshold Formulae

(Extended Abstract)

We put forward a new approach for the design of efficient multiparty protocols:

1. Design a protocol π for a small number of parties (say, 3 or 4) which achieves security against a single corrupted party. Such protocols are typically easy to construct, as they may employ techniques that do not scale well with the number of corrupted parties.

2. Recursively compose π with itself to obtain an efficient n-party protocol which achieves security against a constant fraction of corrupted parties.

The second step of our approach combines the “player emulation” technique of Hirt and Maurer (J. Cryptology, 2000) with constructions of logarithmic-depth formulae which compute threshold functions using only constant fan-in threshold gates.

Using this approach, we simplify and improve on previous results in cryptography and distributed computing. In particular:

- We provide conceptually simple constructions of efficient protocols for Secure Multiparty Computation (MPC) in the presence of an honest majority, as well as broadcast protocols from point-to-point channels and a 2-cast primitive.

- We obtain new results on MPC over blackbox groups and other algebraic structures.

The above results rely on the following complexity-theoretic contributions, which may be of independent interest:

- We show that for every j, k ∈ ℕ such that $m \triangleq \frac{k-1}{j-1}$ is an integer, there is an explicit (poly(n)-time) construction of a logarithmic-depth formula which computes a good approximation of an (n/m)-out-of-n threshold function using only j-out-of-k threshold gates and no constants.

- For the special case of n-bit majority from 3-bit majority gates, a non-explicit construction follows from the work of Valiant (J. Algorithms, 1984). For this special case, we provide an explicit construction with a better approximation than for the general threshold case, and also an exact explicit construction based on standard complexity-theoretic or cryptographic assumptions.
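
The amplification behind such majority-from-MAJ3 constructions can be checked with the standard recurrence (our sketch of the well-known calculation, not the paper's explicit construction): a 3-bit majority gate whose inputs are each correct independently with probability p is correct with probability 3p² − 2p³, and a depth-d tree of such gates iterates this map, driving the error to zero whenever p > 1/2.

```python
def maj3_amplify(p: float, depth: int) -> float:
    """Probability that a depth-`depth` tree of 3-bit majority gates is
    correct, if each leaf is independently correct with probability p
    (the recurrence p -> 3p^2 - 2p^3; p = 1/2 is its unstable fixed point)."""
    for _ in range(depth):
        p = 3 * p**2 - 2 * p**3
    return p

for d in (0, 2, 4, 8):
    print(d, maj3_amplify(0.6, d))
```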

Gil Cohen, Ivan Bjerre Damgård, Yuval Ishai, Jonas Kölker, Peter Bro Miltersen, Ran Raz, Ron D. Rothblum

A Dynamic Tradeoff between Active and Passive Corruptions in Secure Multi-Party Computation

At STOC ’87, Goldreich et al. presented two protocols for secure multi-party computation (MPC) among n parties: the first protocol provides passive security against t < n corrupted parties; the second protocol provides even active security, but only against t < n/2 corrupted parties. Although these protocols provide security against the provably highest possible number of corruptions, each of them has its limitation: the first protocol is rendered completely insecure in the presence of a single active corruption, and the second protocol is rendered completely insecure in the presence of ⌈n/2⌉ passive corruptions.

At Crypto 2006, Ishai et al. combined these two protocols into a single protocol which provides passive security against t < n corruptions and active security against t < n/2 corruptions. This protocol unifies the security guarantees of the passive world and the active world (“best of both worlds”). However, the corruption threshold t < n can be tolerated only when all corruptions are passive. With a single active corruption, the threshold is reduced to t < n/2.

As our main result, we introduce a dynamic tradeoff between active and passive corruptions: we present a protocol which provides security against t < n passive corruptions, against t < n/2 active corruptions, and everything in between. In particular, our protocol provides full security against k active corruptions, as long as fewer than n − k parties are corrupted in total, for any unknown k.

The main technical contribution is a new secret sharing scheme that, in the reconstruction phase, releases secrecy gradually. This allows us to construct non-robust MPC protocols which, in case of an abort, still provide some level of secrecy. Furthermore, using similar techniques, we also construct protocols for reactive MPC with hybrid security, i.e., different thresholds for secrecy, correctness, robustness, and fairness. Intuitively, the more parties are corrupted, the less security is guaranteed.

Martin Hirt, Ueli Maurer, Christoph Lucas

What Information Is Leaked under Concurrent Composition?

A long series of works has established far-reaching impossibility results for concurrently secure computation. On the other hand, some positive results have also been obtained according to various weaker notions of security (such as by using a super-polynomial-time simulator). This suggests that somehow, “not all is lost in the concurrent setting.”

In this work, we ask: what, and exactly how much, private information can an adversary learn by launching a concurrent attack? Inspired by the recent works on leakage-resilient protocols, we consider a security model where the ideal-world adversary (a.k.a. the simulator) is allowed to query the trusted party for some “leakage” on the honest party inputs. (Intuitively, the amount of leakage required by the simulator upper-bounds the security loss in the real world.)

We show for the first time that in the concurrent setting, it is possible to achieve full security for “most” of the sessions, while incurring a significant loss of security in the remaining (fixed polynomial fraction of total) sessions. We also give a lower bound showing that (for general functionalities) this is essentially optimal. Our results also have interesting implications for bounded concurrent secure computation [Barak, FOCS ’01], as well as for precise concurrent zero-knowledge [Pandey et al., Eurocrypt ’08] and concurrently secure computation in the multiple ideal query model [Goyal et al., Crypto ’10].

At the heart of our positive results is a new simulation strategy that is inspired by the classical set covering problem. On the other hand, interestingly, our negative results use techniques from leakage-resilient cryptography [Dziembowski and Pietrzak, FOCS ’08].

Vipul Goyal, Divya Gupta, Abhishek Jain

Session 14: Codes and Secret Sharing

Non-malleable Codes from Two-Source Extractors

We construct an efficient information-theoretically non-malleable code in the split-state model for one-bit messages. Non-malleable codes were introduced recently by Dziembowski, Pietrzak and Wichs (ICS 2010) as a general tool for storing messages securely on hardware that can be subject to tampering attacks. Informally, a code $(\mathsf{Enc} : {\cal M} \rightarrow {\cal L} \times {\cal R}, \mathsf{Dec} : {\cal L} \times {\cal R} \rightarrow {\cal M})$ is non-malleable in the split-state model if any adversary, by manipulating L and R independently (where (L, R) is an encoding of some message M), cannot obtain an encoding of a message M′ that is not equal to M but is “related” to M in some way. Until now it was unknown how to construct an information-theoretically secure code with such a property, even for ${\cal M} = \{0,1\}$. Our construction solves this problem. Additionally, it is leakage-resilient, and the amount of leakage that we can tolerate can be an arbitrary fraction ξ < 1/4 of the length of the codeword. Our code is based on the inner-product two-source extractor, but in general it can be instantiated by any two-source extractor that has large output and has the property of being flexible, which is a new notion that we define.

We also show that the non-malleable codes for one-bit messages have an equivalent, perhaps simpler characterization, namely such codes can be defined as follows: if M is chosen uniformly from {0,1}, then the probability (in the experiment described above) that the output message M′ is not equal to M can be at most 1/2 + ε.

Stefan Dziembowski, Tomasz Kazana, Maciej Obremski

Optimal Coding for Streaming Authentication and Interactive Communication

Error correction and message authentication are well studied in the literature, and various efficient solutions have been suggested and analyzed. This is, however, not the case for data streams, in which the message is very long, possibly infinite, and not known in advance to the sender. Trivial solutions for error-correcting and authenticating data streams either suffer from a long delay at the receiver’s end or cannot perform well when the communication channel is noisy.

In this work we suggest a constant-rate error-correction scheme and an efficient authentication scheme for data streams over a noisy channel (one-way communication, no feedback) in the shared-randomness model. Our first scheme does not assume shared randomness and (non-efficiently) recovers a (1 − 2c)-fraction prefix of the stream sent so far, assuming the noise level is at most c < 1/2. The length of the recovered prefix is tight.

To be able to overcome the c = 1/2 barrier we relax the model and assume the parties pre-share a secret key. Under this assumption we show that for any given noise rate c < 1, there exists a scheme that correctly decodes a (1 − c)-fraction of the stream sent so far with high probability, and moreover, the scheme is efficient. Furthermore, if the noise rate exceeds c, the scheme aborts with high probability. We also show that no constant-rate authentication scheme recovers more than a (1 − c)-fraction of the stream sent so far with non-negligible probability; thus the relation between the noise rate and the recoverable fraction of the stream is tight, and our scheme is optimal.

Our techniques also apply to the task of interactive communication (two-way communication) over a noisy channel. In a recent paper, Braverman and Rao [STOC 2011] show that any function of two inputs has a constant-rate interactive protocol for two users that withstands a noise rate up to 1/4. By assuming that the parties share a secret random string, we extend this result and construct an interactive protocol that succeeds with overwhelming probability against noise rates up to 1/2. We also show that no constant-rate protocol exists for noise rates above 1/2 for functions that require two-way communication. This is contrasted with our first result in which computing the “function” requires only one-way communication and the noise rate can go up to 1.

Matthew Franklin, Ran Gelles, Rafail Ostrovsky, Leonard J. Schulman

Secret Sharing, Rank Inequalities and Information Inequalities

Beimel and Orlov proved that all information inequalities on four or five variables, together with all information inequalities on more than five variables that are known to date, provide lower bounds on the size of the shares in secret sharing schemes that are at most linear in the number of participants. We present here another negative result about the power of information inequalities in the search for lower bounds in secret sharing. Namely, we prove that all information inequalities on a bounded number of variables can only provide lower bounds that are polynomial in the number of participants.

Sebastià Martín, Carles Padró, An Yang

Session 15: Signatures and Authentication

Linearly Homomorphic Structure-Preserving Signatures and Their Applications

Structure-preserving signatures (SPS) are signature schemes where messages, signatures and public keys all consist of elements of a group over which a bilinear map is efficiently computable. This property makes them useful in cryptographic protocols as they nicely compose with other algebraic tools (like the celebrated Groth-Sahai proof systems). In this paper, we consider SPS systems with homomorphic properties and suggest applications that have not been provided before (in particular, not by employing ordinary SPS). We build linearly homomorphic structure-preserving signatures under simple assumptions and show that the primitive makes it possible to verify the calculations performed by a server on outsourced encrypted data (i.e., combining secure computation and authenticated computation to allow reliable and secure cloud storage and computation, while freeing the client from retaining cleartext storage). Then, we give a generic construction of non-malleable (and actually simulation-sound) commitments from any linearly homomorphic SPS. This notably provides the first constant-size non-malleable commitment to group elements.

Benoît Libert, Thomas Peters, Marc Joye, Moti Yung

Man-in-the-Middle Secure Authentication Schemes from LPN and Weak PRFs

We show how to construct, from any weak pseudorandom function, a 3-round symmetric-key authentication protocol that is secure against man-in-the-middle attacks. The construction is very efficient, requiring both the secret key and communication size to be only 3n bits long and involving only one call to the weak-PRF. Our techniques also extend to certain classes of randomized weak-PRFs, chiefly among which are those based on the classical LPN problem and its more efficient variants such as Toeplitz-LPN and Ring-LPN. Building an efficient man-in-the-middle secure authentication scheme from any weak-PRF resolves a problem left open by Dodis et al. (Eurocrypt 2012), while building a man-in-the-middle secure scheme based on any variant of the LPN problem solves the main open question in a long line of research aimed at constructing a practical light-weight authentication scheme based on learning problems, which began with the work of Hopper and Blum (Asiacrypt 2001).
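To illustrate the LPN structure underlying this line of work, here is a toy sketch of one round of the original HB identification protocol of Hopper and Blum; this is only the basic (passively secure) scheme, not the man-in-the-middle secure 3-round protocol constructed in the paper, and the parameters are illustrative:

```python
# Toy HB identification (Hopper-Blum, Asiacrypt 2001), based on LPN.
# Prover and verifier share a secret bit-vector s. In each round the
# verifier sends a random challenge a, and the prover answers with the
# inner product <a, s> flipped with probability tau (the LPN noise).
import random

n, rounds, tau = 128, 256, 0.125  # key length, repetitions, noise rate

def keygen():
    return [random.getrandbits(1) for _ in range(n)]

def respond(s, a):
    # Noisy inner product: <a, s> XOR a Bernoulli(tau) noise bit.
    noise = 1 if random.random() < tau else 0
    return (sum(ai & si for ai, si in zip(a, s)) % 2) ^ noise

def authenticate(s):
    errors = 0
    for _ in range(rounds):
        a = [random.getrandbits(1) for _ in range(n)]  # verifier challenge
        z = respond(s, a)                              # prover response
        if z != sum(ai & si for ai, si in zip(a, s)) % 2:
            errors += 1
    # Accept if the fraction of mismatches is near tau (well below 1/2,
    # which is what an impersonator without s would achieve).
    return errors <= rounds * 0.25

assert authenticate(keygen())
```

The hardness of recovering s from many noisy pairs (a, <a, s> XOR e) is exactly the LPN problem; the paper's contribution is making such schemes secure against active man-in-the-middle adversaries.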

Vadim Lyubashevsky, Daniel Masny

Session 16: Quantum Security

Achieving the Limits of the Noisy-Storage Model Using Entanglement Sampling

A natural measure for the amount of quantum information that a physical system E holds about another system A = A_1,...,A_n is given by the min-entropy H_min(A|E). Specifically, the min-entropy measures the amount of entanglement between E and A, and is the relevant measure when analyzing a wide variety of problems ranging from randomness extraction in quantum cryptography, decoupling used in channel coding, to physical processes such as thermalization or the thermodynamic work cost (or gain) of erasing a quantum system. As such, it is a central question to determine the behaviour of the min-entropy after some process M is applied to the system A. Here we introduce a new generic tool relating the resulting min-entropy to the original one, and apply it to several settings of interest, including sampling of subsystems and measuring in a randomly chosen basis. The results on random measurements yield new high-order entropic uncertainty relations with which we prove the optimality of cryptographic schemes in the bounded quantum storage model. This is an abridged version of the paper; the full version containing all proofs and further applications can be found in [13].
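For reference, the conditional min-entropy used above is the standard quantity introduced by Renner (not a definition specific to this paper):

```latex
H_{\min}(A|E)_{\rho}
  = \max_{\sigma_E} \sup \bigl\{ \lambda \in \mathbb{R} :
      \rho_{AE} \le 2^{-\lambda}\, \mathrm{id}_A \otimes \sigma_E \bigr\}
```

Operationally, when A is classical, H_min(A|E) equals the negative logarithm of the optimal probability of guessing A given access to E.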

Frédéric Dupuis, Omar Fawzi, Stephanie Wehner

Quantum One-Time Programs

(Extended Abstract)

A one-time program is a hypothetical device by which a user may evaluate a circuit on exactly one input of his choice, before the device self-destructs. One-time programs cannot be achieved by software alone, as any software can be copied and re-run. However, it is known that every circuit can be compiled into a one-time program using a very basic hypothetical hardware device called a one-time memory. At first glance it may seem that quantum information, which cannot be copied, might also allow for one-time programs. But it is not hard to see that this intuition is false: one-time programs for classical or quantum circuits based solely on quantum information do not exist, even with computational assumptions.

This observation raises the question, “what assumptions are required to achieve one-time programs for quantum circuits?” Our main result is that any quantum circuit can be compiled into a one-time program assuming only the same basic one-time memory devices used for classical circuits. Moreover, these quantum one-time programs achieve statistical universal composability (UC-security) against any malicious user. Our construction employs methods for computation on authenticated quantum data, and we present a new quantum authentication scheme called the trap scheme for this purpose. As a corollary, we establish UC-security of a recent protocol for delegated quantum computation.

Anne Broadbent, Gus Gutoski, Douglas Stebila

Secure Signatures and Chosen Ciphertext Security in a Quantum Computing World

We initiate the study of quantum-secure digital signatures and quantum chosen ciphertext security. In the case of signatures, we enhance the standard chosen message query model by allowing the adversary to issue quantum chosen message queries: given a superposition of messages, the adversary receives a superposition of signatures on those messages. Similarly, for encryption, we allow the adversary to issue quantum chosen ciphertext queries: given a superposition of ciphertexts, the adversary receives a superposition of their decryptions. These adversaries model a natural ubiquitous quantum computing environment where end-users sign messages and decrypt ciphertexts on a personal quantum computer.

We construct classical systems that remain secure when exposed to such quantum queries. For signatures, we construct two compilers that convert classically secure signatures into signatures secure in the quantum setting and apply these compilers to existing post-quantum signatures. We also show that standard constructions such as Lamport one-time signatures and Merkle signatures remain secure under quantum chosen message attacks, thus giving signatures whose quantum security is based on generic assumptions. For encryption, we define security under quantum chosen ciphertext attacks and present both public-key and symmetric-key constructions.
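Lamport one-time signatures, mentioned above, are simple enough to sketch; the following is the classical construction itself (standard and well known, not the paper's compilers), whose security under quantum chosen message attacks is what the paper analyzes:

```python
# Minimal Lamport one-time signature: for each bit of the message
# digest, the secret key holds two random preimages and the public key
# holds their hashes; signing reveals one preimage per digest bit.
# A key pair must be used to sign at most one message.
import hashlib, os

def H(b):
    return hashlib.sha256(b).digest()

def keygen(bits=256):
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bit(digest, i):
    return (digest[i // 8] >> (i % 8)) & 1

def sign(sk, msg):
    d = H(msg)
    return [sk[i][bit(d, i)] for i in range(len(sk))]

def verify(pk, msg, sig):
    d = H(msg)
    return all(H(sig[i]) == pk[i][bit(d, i)] for i in range(len(pk)))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"world", sig)
```

Because security rests only on the hash function, such schemes are natural candidates for the post-quantum setting the paper considers.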

Dan Boneh, Mark Zhandry

Everlasting Multi-party Computation

A protocol has everlasting security if it is secure against adversaries that are computationally unlimited after the protocol execution. This models the fact that we cannot predict which cryptographic schemes will be broken, say, several decades after the protocol execution. In classical cryptography, everlasting security is difficult to achieve: even using trusted setup like common reference strings or signature cards, many tasks such as secure communication and oblivious transfer cannot be achieved with everlasting security. An analogous result in the quantum setting excludes protocols based on common reference strings, but not protocols using a signature card. We define a variant of the Universal Composability framework, everlasting quantum-UC, and show that in this model, we can implement secure communication and general multi-party computation using signature cards as trusted setup.

Dominique Unruh

Session 17: New Primitives

Instantiating Random Oracles via UCEs

This paper provides a (standard-model) notion of security for (keyed) hash functions, called UCE, that we show enables instantiation of random oracles (ROs) in a fairly broad and systematic way. Goals and schemes we consider include deterministic PKE; message-locked encryption; hardcore functions; point-function obfuscation; OAEP; encryption secure for key-dependent messages; encryption secure under related-key attack; proofs of storage; and adaptively-secure garbled circuits with short tokens. We can take existing, natural and efficient ROM schemes and show that the instantiated scheme resulting from replacing the RO with a UCE function is secure in the standard model. In several cases this results in the first standard-model schemes for these goals. The definition of UCE-security itself is quite simple, asking that outputs of the function look random given some “leakage,” even if the adversary knows the key, as long as the leakage does not permit the adversary to compute the inputs.

Mihir Bellare, Viet Tung Hoang, Sriram Keelveedhi

Obfuscating Conjunctions

We show how to securely obfuscate the class of conjunction functions (functions like $f(x_1, \ldots, x_n) = x_1 \land \lnot x_4 \land \lnot x_6 \land \cdots \land x_{n-2}$). Given any function in the class, we produce an obfuscated program which preserves the input-output functionality of the given function, but reveals nothing else.

Our construction is based on multilinear maps, and can be instantiated using the recent candidates proposed by Garg, Gentry and Halevi (EUROCRYPT 2013) and by Coron, Lepoint and Tibouchi (CRYPTO 2013). We show that the construction is secure when the conjunction is drawn from a distribution, under mild assumptions on the distribution. Security follows from multilinear entropic variants of the Diffie-Hellman assumption. We conjecture that our construction is secure for any conjunction, regardless of the distribution from which it is drawn. We offer supporting evidence for this conjecture, proving that our obfuscator is secure for any conjunction against generic adversaries.
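In the clear, the function class in question is easy to state; the following sketch evaluates a conjunction given as a wildcard pattern (the paper's contribution is an obfuscation of exactly this class that hides the pattern itself):

```python
# A conjunction over n Boolean inputs, represented as a pattern in
# {0, 1, '*'}^n: '0'/'1' means the corresponding literal must take that
# value, '*' means "don't care". Shown here unobfuscated: anyone who
# reads this program learns the pattern, which is what an obfuscator
# must prevent.

def make_conjunction(pattern):
    def f(x):
        return all(p == '*' or int(p) == xi for p, xi in zip(pattern, x))
    return f

# f(x1,...,x8) = x1 AND NOT x4 AND NOT x6 (other positions unconstrained)
f = make_conjunction('1**0*0**')
assert f([1, 0, 1, 0, 1, 0, 0, 1])
assert not f([1, 0, 1, 1, 1, 0, 0, 1])  # x4 = 1 violates NOT x4
```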

Zvika Brakerski, Guy N. Rothblum

Session 18: Functional Encryption I

Fully, (Almost) Tightly Secure IBE and Dual System Groups

We present the first fully secure Identity-Based Encryption scheme (IBE) from the standard assumptions where the security loss depends only on the security parameter and is independent of the number of secret key queries. This partially answers an open problem posed by Waters (Eurocrypt 2005). Our construction combines Waters' dual system encryption methodology (Crypto 2009) with the Naor-Reingold pseudo-random function (J. ACM, 2004) in a novel way. The security of our scheme relies on the DLIN assumption in prime-order groups. Along the way, we introduce a novel notion of dual system groups and a new randomization and parameter-hiding technique for prime-order bilinear groups.

Jie Chen, Hoeteck Wee

Function-Private Identity-Based Encryption: Hiding the Function in Functional Encryption

We put forward a new notion, function privacy, in identity-based encryption and, more generally, in functional encryption. Intuitively, our notion asks that decryption keys reveal essentially no information on their corresponding identities, beyond the absolute minimum necessary. This is motivated by the need for providing predicate privacy in public-key searchable encryption. Formalizing such a notion, however, is not straightforward as given a decryption key it is always possible to learn some information on its corresponding identity by testing whether it correctly decrypts ciphertexts that are encrypted for specific identities.

In light of such an inherent difficulty, any meaningful notion of function privacy must be based on the minimal assumption that, from the adversary's point of view, identities that correspond to its given decryption keys are sampled from somewhat unpredictable distributions. We show that this assumption is in fact sufficient for obtaining a strong and realistic notion of function privacy. Loosely speaking, our framework requires that a decryption key corresponding to an identity sampled from any sufficiently unpredictable distribution is indistinguishable from a decryption key corresponding to an independently and uniformly sampled identity.

Within our framework we develop an approach for designing function-private identity-based encryption schemes, leading to constructions that are based on standard assumptions in bilinear groups (DBDH, DLIN) and lattices (LWE). In addition to function privacy, our schemes are also anonymous, and thus yield the first public-key searchable encryption schemes that are provably keyword private: a search key sk_w makes it possible to identify encryptions of an underlying keyword w, while not revealing any additional information about w beyond the minimum necessary, as long as the keyword w is sufficiently unpredictable.

Dan Boneh, Ananth Raghunathan, Gil Segev

Attribute-Based Encryption for Circuits from Multilinear Maps

In this work, we provide the first construction of Attribute-Based Encryption (ABE) for general circuits. Our construction is based on the existence of multilinear maps. We prove selective security of our scheme in the standard model under the natural multilinear generalization of the BDDH assumption. Our scheme achieves both Key-Policy and Ciphertext-Policy variants of ABE. Our scheme and its proof of security directly translate to the recent multilinear map framework of Garg, Gentry, and Halevi.

Sanjam Garg, Craig Gentry, Shai Halevi, Amit Sahai, Brent Waters

Session 19: Functional Encryption II

Functional Encryption: New Perspectives and Lower Bounds

Functional encryption is an emerging paradigm for public-key encryption that enables fine-grained control of access to encrypted data. In this work, we present new lower bounds and impossibility results on functional encryption, as well as new perspectives on security definitions. Our main contributions are as follows:

We show that functional encryption schemes that satisfy even a weak (non-adaptive) simulation-based security notion are impossible to construct in general. This is the first impossibility result that exploits unbounded collusions in an essential way. In particular, we show that there are no such functional encryption schemes for the class of weak pseudo-random functions (and more generally, for any class of incompressible functions). More quantitatively, our technique also gives us a lower bound for functional encryption schemes secure against bounded collusions. To be secure against q collusions, we show that the ciphertext in any such scheme must have size Ω(q).

We put forth and discuss a simulation-based notion of security for functional encryption, with an unbounded simulator (called USIM). We show that this notion interpolates indistinguishability and simulation-based security notions, and is inspired by results and barriers in the zero-knowledge and multi-party computation literature.

Shweta Agrawal, Sergey Gorbunov, Vinod Vaikuntanathan, Hoeteck Wee

On the Achievability of Simulation-Based Security for Functional Encryption

This work attempts to clarify to what extent simulation-based security (SIM-security) is achievable for functional encryption (FE) and its relation to the weaker indistinguishability-based security (IND-security). Our main result is a compiler that transforms any FE scheme for the general circuit functionality (which we denote by Circuit-FE) meeting indistinguishability-based security (IND-security) to a Circuit-FE scheme meeting SIM-security, where:

In the random oracle model, the resulting scheme is secure for an unbounded number of encryption and key queries, which is the strongest security level one can ask for.

In the standard model, the resulting scheme is secure for a bounded number of encryption and non-adaptive key queries, but an unbounded number of adaptive key queries. This matches known impossibility results and improves upon Gorbunov et al. [CRYPTO’12] (which is only secure for non-adaptive key queries).

Our compiler is inspired by the celebrated Fiat-Lapidot-Shamir paradigm [FOCS’90] for obtaining zero-knowledge proof systems from witness-indistinguishable proof systems. As it is currently unknown whether Circuit-FE meeting IND-security exists, the purpose of this result is to establish that it remains a good target for future research despite known deficiencies of IND-security [Boneh et al. – TCC’11, O’Neill – ePrint ’10]. We also give a tailored construction of SIM-secure hidden vector encryption (HVE) in composite-order bilinear groups. Finally, we revisit the known negative results for SIM-secure FE, extending them to natural weakenings of the security definition and thus providing essentially a full picture of the (in)achievability of SIM-secure FE.

Angelo De Caro, Vincenzo Iovino, Abhishek Jain, Adam O’Neill, Omer Paneth, Giuseppe Persiano

How to Run Turing Machines on Encrypted Data

Cryptographic schemes for computing on encrypted data promise to be a fundamental building block of cryptography. The way one models such algorithms has a crucial effect on the efficiency and usefulness of the resulting cryptographic schemes. As of today, almost all known schemes for fully homomorphic encryption, functional encryption, and garbling schemes work by modeling algorithms as circuits rather than as Turing machines.

As a consequence of this modeling, evaluating an algorithm over encrypted data is as slow as the worst-case running time of that algorithm, a dire fact for many tasks. In addition, in settings where an evaluator needs a description of the algorithm itself in some “encoded” form, the cost of computing and communicating such encoding is as large as the worst-case running time of this algorithm.

In this work, we construct cryptographic schemes for computing Turing machines on encrypted data that avoid the worst-case problem. Specifically, we show:

An attribute-based encryption scheme for any polynomial-time Turing machine and Random Access Machine (RAM).

A (single-key and succinct) functional encryption scheme for any polynomial-time Turing machine.

A reusable garbling scheme for any polynomial-time Turing machine.

These three schemes have the property that the size of a key or of a garbling for a Turing machine is very short: it depends only on the description of the Turing machine and not on its running time. Previously, the only existing constructions of such schemes were for depth-d circuits, where all the parameters grow with d. Our constructions remove this depth-d restriction, have short keys, and moreover, avoid the worst-case running time.

A variant of a fully homomorphic encryption scheme for Turing machines, where one can evaluate a Turing machine M on an encrypted input x in time that is dependent on the running time of M on input x as opposed to the worst-case runtime of M. Previously, such a result was known only for a restricted class of Turing machines and it required an expensive preprocessing phase (with worst-case runtime); our constructions remove both restrictions.

Our results are obtained via a reduction from SNARKs (Bitansky et al.) and an “extractable” variant of witness encryption, a scheme introduced by Garg et al. We prove that the new assumption is secure in the generic group model. We also point out the connection between (the variant of) witness encryption and the obfuscation of point filter functions as defined by Goldwasser and Kalai in 2005.

Shafi Goldwasser, Yael Tauman Kalai, Raluca Ada Popa, Vinod Vaikuntanathan, Nickolai Zeldovich

Erratum: A Dynamic Tradeoff between Active and Passive Corruptions in Secure Multi-Party Computation

The sequence of the authors should be in alphabetical order and therefore reads: “Martin Hirt, Christoph Lucas, Ueli Maurer” instead of “Martin Hirt, Ueli Maurer, Christoph Lucas”.

Martin Hirt, Christoph Lucas, Ueli Maurer

Backmatter
