
2010 | Book

Advances in Cryptology – CRYPTO 2010

30th Annual Cryptology Conference, Santa Barbara, CA, USA, August 15-19, 2010. Proceedings


Table of Contents

Frontmatter

Leakage

Circular and Leakage Resilient Public-Key Encryption under Subgroup Indistinguishability
(or: Quadratic Residuosity Strikes Back)

The main results of this work are new public-key encryption schemes that, under the quadratic residuosity (QR) assumption (or Paillier’s decisional composite residuosity (DCR) assumption), achieve key-dependent message security as well as high resilience to secret-key leakage and to auxiliary input information.

In particular, under what we call the subgroup indistinguishability assumption, of which the QR and DCR are special cases, we can construct a scheme that has:

Key-dependent message (circular) security.

Achieves security even when encrypting affine functions of its own secret key (in fact, w.r.t. affine “key-cycles” of predefined length). Our scheme also meets the requirements for extending key-dependent message security to broader classes of functions beyond affine functions using previous techniques of Brakerski et al. or Barak et al.

Leakage resiliency.

Remains secure even if any adversarial low-entropy (efficiently computable) function of the secret key is given to the adversary. A proper selection of parameters allows for a “leakage rate” of (1 − o(1)) of the length of the secret key.

Auxiliary-input security.

Remains secure even if any sufficiently hard-to-invert (efficiently computable) function of the secret key is given to the adversary.

Our scheme is the first to achieve key-dependent security and auxiliary-input security based on the DCR and QR assumptions. Previous schemes that achieved these properties relied either on the DDH or LWE assumptions. The proposed scheme is also the first to achieve leakage resiliency for leakage rate (1 − o(1)) of the secret key length under the QR assumption. We note that leakage-resilient schemes under the DCR and QR assumptions, for the restricted case where the composite modulus is a product of safe primes, were implied by the work of Naor and Segev, using hash proof systems. However, under the QR assumption, known constructions of hash proof systems only yield a leakage rate of o(1) of the secret key length.

Zvika Brakerski, Shafi Goldwasser
Leakage-Resilient Pseudorandom Functions and Side-Channel Attacks on Feistel Networks

A cryptographic primitive is leakage-resilient, if it remains secure even if an adversary can learn a bounded amount of arbitrary information about the computation with every invocation. As a consequence, the physical implementation of a leakage-resilient primitive is secure against every side-channel as long as the amount of information leaked per invocation is bounded.

In this paper we prove positive and negative results about the feasibility of constructing leakage-resilient pseudorandom functions and permutations (i.e. block-ciphers). Our results are threefold:

1.

We construct (from any standard PRF) a PRF which satisfies a relaxed notion of leakage-resilience where (1) the leakage function is fixed (and not adaptively chosen with each query), and (2) the computation is split into several steps which leak individually (a “step” being one invocation of the underlying PRF).

2.

We prove that a Feistel network with a super-logarithmic number of rounds, each instantiated with a leakage-resilient PRF, is a leakage-resilient PRP. This reduction also holds for the non-adaptive notion just discussed; we thus get a block-cipher which is leakage-resilient against non-adaptive leakage.

3.

We propose generic side-channel attacks against Feistel networks. The attacks are generic in the sense that they work for any round functions (e.g. uniformly random functions) and only require some simple leakage from the inputs to the round functions. For example, we show how to invert an r-round Feistel network over 2n bits making 4·(n + 1)^{r − 2} forward queries, if with each query we are also given as leakage the Hamming weight of the inputs to the r round functions (a toy version of this leakage model is sketched below). This complements the result from the previous item, showing that a super-constant number of rounds is necessary.
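
To make the leakage model concrete: below is a minimal Python sketch (our own illustration, not code from the paper) of an r-round Feistel network over 2n bits whose forward-query oracle also returns the Hamming weight of each round-function input, which is exactly the leakage the attack above consumes.

    import random

    def make_feistel_oracle(r, n, seed=0):
        """Build a forward-query oracle for an r-round Feistel network over 2n bits
        with uniformly random round functions, leaking per-round Hamming weights."""
        rng = random.Random(seed)
        tables = [{} for _ in range(r)]     # lazily materialized random functions

        def F(i, x):
            if x not in tables[i]:
                tables[i][x] = rng.getrandbits(n)
            return tables[i][x]

        def query(left, right):
            leak = []
            for i in range(r):
                leak.append(bin(right).count("1"))   # weight of round i's input
                left, right = right, left ^ F(i, right)
            return (left, right), leak

        return query

    query = make_feistel_oracle(r=6, n=16)
    (cl, cr), weights = query(0x1234, 0xBEEF)
    print(hex(cl), hex(cr), weights)    # ciphertext halves plus leaked weights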

Yevgeniy Dodis, Krzysztof Pietrzak
Protecting Cryptographic Keys against Continual Leakage

Side-channel attacks have often proven to have a devastating effect on the security of cryptographic schemes. In this paper, we address the problem of storing cryptographic keys and computing on them in a manner that preserves security even when the adversary is able to obtain information leakage during the computation on the key.

Using any fully homomorphic encryption with re-randomizable ciphertexts, we show how to encapsulate a key and repeatedly evaluate arbitrary functions on it so that no adversary can gain any useful information from a large class of side-channel attacks. We work in the model of Micali and Reyzin, assuming that only the active part of memory during computation leaks information. Our construction makes use of a single “leak-free” hardware token that samples from a distribution that does not depend on the protected key or the function that is evaluated on it.

Our construction is the first general compiler to achieve resilience against polytime leakage functions without performing any leak-free computation on the protected key. Furthermore, the amount of computation our construction must perform does not grow with the amount of leakage the adversary is able to obtain; instead, it suffices to make a stronger assumption about the security of the fully homomorphic encryption.

Ali Juma, Yevgeniy Vahlis
Securing Computation against Continuous Leakage

We present a general method to compile any cryptographic algorithm into one which resists side-channel attacks of the “only computation leaks information” variety for an unbounded number of executions. Our method uses as a building block a semantically secure subsidiary bit-encryption scheme with the following additional operations: key refreshing, oblivious generation of ciphertexts, leakage-resilience re-generation, and blinded homomorphic evaluation of one single complete gate (e.g. NAND). Furthermore, the security properties of the subsidiary encryption scheme should withstand bounded leakage incurred while performing each of the above operations.

We show how to implement such a subsidiary encryption scheme under the DDH intractability assumption and the existence of a simple secure hardware component. The hardware component is independent of the encryption scheme secret key. The subsidiary encryption scheme resists leakage attacks where the leakage is computable in polynomial time and of length bounded by a constant fraction of the security parameter.

Shafi Goldwasser, Guy N. Rothblum

Lattice

An Efficient and Parallel Gaussian Sampler for Lattices

At the heart of many recent lattice-based cryptographic schemes is a polynomial-time algorithm that, given a ‘high-quality’ basis, generates a lattice point according to a Gaussian-like distribution. Unlike most other operations in lattice-based cryptography, however, the known algorithm for this task (due to Gentry, Peikert, and Vaikuntanathan; STOC 2008) is rather inefficient, and is inherently sequential.

We present a new Gaussian sampling algorithm for lattices that is efficient and highly parallelizable. We also show that in most cryptographic applications, the algorithm’s efficiency comes at almost no cost in asymptotic security. At a high level, our algorithm resembles the “perturbation” heuristic proposed as part of NTRUSign (Hoffstein et al., CT-RSA 2003), though the details are quite different. To our knowledge, this is the first algorithm and rigorous analysis demonstrating the security of a perturbation-like technique.
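
Peikert’s parallel algorithm itself is beyond a short sketch, but the elementary operation such samplers reduce to, sampling a discrete Gaussian over the integers, fits in a few lines. A rejection-sampling toy (parameter names are illustrative; far too slow and too leaky for real use):

    import math, random

    def sample_discrete_gaussian(c, s, rng=random):
        """Sample x from D_{Z,s,c}, i.e. Pr[x] proportional to exp(-pi*(x-c)^2 / s^2),
        by rejection from a uniform proposal on a tail-cut window."""
        tau = 12                                     # tail cut, in multiples of s
        lo = math.floor(c - tau * s)
        hi = math.ceil(c + tau * s)
        while True:
            x = rng.randint(lo, hi)
            if rng.random() < math.exp(-math.pi * (x - c) ** 2 / s ** 2):
                return x

    print([sample_discrete_gaussian(0.5, 4.0) for _ in range(10)])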

Chris Peikert
Lattice Basis Delegation in Fixed Dimension and Shorter-Ciphertext Hierarchical IBE

We present a technique for delegating a short lattice basis that has the advantage of keeping the lattice dimension unchanged upon delegation. Building on this result, we construct two new hierarchical identity-based encryption (HIBE) schemes, with and without random oracles. The resulting systems are very different from earlier lattice-based HIBEs and in some cases result in shorter ciphertexts and private keys. We prove security from classic lattice hardness assumptions.

Shweta Agrawal, Dan Boneh, Xavier Boyen

Homomorphic Encryption

Toward Basing Fully Homomorphic Encryption on Worst-Case Hardness

Gentry proposed a fully homomorphic public key encryption scheme that uses ideal lattices. He based the security of his scheme on the hardness of two problems: an average-case decision problem over ideal lattices, and the sparse (or “low-weight”) subset sum problem (SSSP).

We provide a key generation algorithm for Gentry’s scheme that generates ideal lattices according to a “nice” average-case distribution. Then, we prove a worst-case / average-case connection that bases Gentry’s scheme (in part) on the quantum hardness of the shortest independent vector problem (SIVP) over ideal lattices in the worst case. (We cannot remove the need to assume that the SSSP is hard.) Our worst-case / average-case connection is the first where the average-case lattice is an ideal lattice, which seems to be necessary to support the security of Gentry’s scheme.

Craig Gentry
Additively Homomorphic Encryption with d-Operand Multiplications

The search for encryption schemes that allow the evaluation of functions (or circuits) over encrypted data has attracted a lot of attention since the seminal work on this subject by Rivest, Adleman and Dertouzos in 1978.

In this work we define a theoretical object, chained encryption schemes, which allow an efficient evaluation of polynomials of degree d over encrypted data. Chained encryption schemes are generically constructed by concatenating cryptosystems with the appropriate homomorphic properties; such schemes are common in lattice-based cryptography. As a particular instantiation we propose a chained encryption scheme whose IND-CPA security is based on a worst-case/average-case reduction from uSVP.

Carlos Aguilar Melchor, Philippe Gaborit, Javier Herranz
i-Hop Homomorphic Encryption and Rerandomizable Yao Circuits

Homomorphic encryption (HE) schemes enable computing functions on encrypted data, by means of a public Eval procedure that can be applied to ciphertexts. But the evaluated ciphertexts so generated may differ from freshly encrypted ones. This brings up the question of whether one can keep computing on evaluated ciphertexts. An i-hop homomorphic encryption scheme is one where Eval can be called on its own output up to i times, while still being able to decrypt the result. A multi-hop homomorphic encryption is a scheme which is i-hop for all i. In this work we study i-hop and multi-hop schemes in conjunction with the properties of function-privacy (i.e., Eval’s output hides the function) and compactness (i.e., the output of Eval is short). We provide formal definitions and describe several constructions.

First, we observe that “bootstrapping” techniques can be used to convert any (1-hop) homomorphic encryption scheme into an i-hop scheme for any i, and the result inherits the function-privacy and/or compactness of the underlying scheme. However, if the underlying scheme is not compact (such as schemes derived from Yao circuits) then the complexity of the resulting i-hop scheme can be as high as n^{O(i)}.

We then describe a specific DDH-based multi-hop homomorphic encryption scheme that does not suffer from this exponential blowup. Although not compact, this scheme has complexity linear in the size of the composed function, independently of the number of hops. The main technical ingredient in this solution is a re-randomizable variant of the Yao circuits. Namely, given a garbled circuit, anyone can re-garble it in such a way that even the party that generated the original garbled circuit cannot recognize it. This construction may be of independent interest.

Craig Gentry, Shai Halevi, Vinod Vaikuntanathan

Theory and Applications

Interactive Locking, Zero-Knowledge PCPs, and Unconditional Cryptography

Motivated by the question of basing cryptographic protocols on stateless tamper-proof hardware tokens, we revisit the question of unconditional two-prover zero-knowledge proofs for NP. We show that such protocols exist in the interactive PCP model of Kalai and Raz (ICALP ’08), where one of the provers is replaced by a PCP oracle. This strengthens the feasibility result of Ben-Or, Goldwasser, Kilian, and Wigderson (STOC ’88) which requires two stateful provers. In contrast to previous zero-knowledge PCPs of Kilian, Petrank, and Tardos (STOC ’97), in our protocol both the prover and the PCP oracle are efficient given an NP witness.

Our main technical tool is a new primitive that we call interactive locking, an efficient realization of an unconditionally secure commitment scheme in the interactive PCP model. We implement interactive locking by adapting previous constructions of interactive hashing protocols to our setting, and also provide a direct construction which uses a minimal amount of interaction and improves over our interactive-hashing-based constructions.

Finally, we apply the above results towards showing the feasibility of basing unconditional cryptography on stateless tamper-proof hardware tokens, and obtain the following results.

(1) We show that if tokens can be used to encapsulate other tokens, then there exist unconditional and statistically secure (in fact, UC secure) protocols for general secure computation.

(2) Even if token encapsulation is not possible, there are unconditional and statistically secure commitment protocols and zero-knowledge proofs for NP.

(3) Finally, if token encapsulation is not possible, then no protocol can realize statistically secure oblivious transfer.

Vipul Goyal, Yuval Ishai, Mohammad Mahmoody, Amit Sahai
Fully Secure Functional Encryption with General Relations from the Decisional Linear Assumption

This paper presents a fully secure functional encryption scheme for a wide class of relations, namely those specified by non-monotone access structures combined with inner-product relations. The security is proven under a well-established assumption, the decisional linear (DLIN) assumption, in the standard model. The proposed functional encryption scheme covers, as special cases, (1) key-policy and ciphertext-policy attribute-based encryption with non-monotone access structures, and (2) (hierarchical) predicate encryption with inner-product relations and functional encryption with non-zero inner-product relations.

Tatsuaki Okamoto, Katsuyuki Takashima
Structure-Preserving Signatures and Commitments to Group Elements

A modular approach for cryptographic protocols leads to a simple design but often inefficient constructions. On the other hand, ad hoc constructions may yield efficient protocols at the cost of losing conceptual simplicity. We suggest structure-preserving commitments and signatures to overcome this dilemma and provide a way to construct modular protocols with reasonable efficiency, while retaining conceptual simplicity.

We focus on schemes in bilinear groups that preserve parts of the group structure, which makes it easy to combine them with other primitives such as non-interactive zero-knowledge proofs for bilinear groups.

We say that a signature scheme is structure-preserving if its verification keys, signatures, and messages are elements in a bilinear group, and the verification equation is a conjunction of pairing-product equations. If moreover the verification keys lie in the message space, we call them automorphic. We present several efficient instantiations of automorphic and structure-preserving signatures, enjoying various other additional properties, such as simulatability. Among many applications, we give three examples: adaptively secure round-optimal blind signature schemes, a group signature scheme with efficient concurrent join, and an efficient instantiation of anonymous proxy signatures.

A further contribution is homomorphic trapdoor commitments to group elements which are also length reducing. In contrast, the messages of previous homomorphic trapdoor commitment schemes are exponents.

Masayuki Abe, Georg Fuchsbauer, Jens Groth, Kristiyan Haralambiev, Miyako Ohkubo
Efficient Indifferentiable Hashing into Ordinary Elliptic Curves

We provide the first construction of a hash function into ordinary elliptic curves that is indifferentiable from a random oracle, based on Icart’s deterministic encoding from Crypto 2009. While almost as efficient as Icart’s encoding, this hash function can be plugged into any cryptosystem that requires hashing into elliptic curves, while not compromising proofs of security in the random oracle model.

We also describe a more general (but less efficient) construction that works for a large class of encodings into elliptic curves, for example the Shallue-Woestijne-Ulas (SWU) algorithm. Finally we describe the first deterministic encoding algorithm into elliptic curves in characteristic 3.

Eric Brier, Jean-Sébastien Coron, Thomas Icart, David Madore, Hugues Randriam, Mehdi Tibouchi

Key Exchange, OAEP/RSA, CCA

Credential Authenticated Identification and Key Exchange

This paper initiates a study of two-party identification and key-exchange protocols in which users authenticate themselves by proving possession of credentials satisfying arbitrary policies, instead of using the more traditional mechanism of a public-key infrastructure. Definitions in the universal composability framework are given, and practical protocols satisfying these definitions, for policies of practical interest, are presented. All protocols are analyzed in the common reference string model, assuming adaptive corruptions with erasures, and no random oracles. The new security notion includes password-authenticated key exchange as a special case, and new, practical protocols for this problem are presented as well, including the first such protocol that provides resilience against server compromise (without random oracles).

Jan Camenisch, Nathalie Casati, Thomas Gross, Victor Shoup
Password-Authenticated Session-Key Generation on the Internet in the Plain Model

The problem of password-authenticated key exchange (PAKE) has been extensively studied for the last two decades. Despite extensive studies, no construction was known for a PAKE protocol that is secure in the plain model in the setting of concurrent self-composition, where polynomially many protocol sessions with the same password may be executed on a distributed network (such as the Internet) in an arbitrarily interleaved manner, and where the adversary may corrupt any number of participating parties.

In this paper, we resolve this long-standing open problem. In particular, we give the first construction of a PAKE protocol that is secure (with respect to the standard definition of Goldreich and Lindell) in the fully concurrent setting and without requiring any trusted setup assumptions. We stress that we allow polynomially many concurrent sessions, where the polynomial is not fixed in advance and can be determined by the adversary in an adaptive manner. Interestingly, our proof, among other things, requires important ideas from the Precise Zero Knowledge theory recently developed by Micali and Pass in their STOC ’06 paper.

Vipul Goyal, Abhishek Jain, Rafail Ostrovsky
Instantiability of RSA-OAEP under Chosen-Plaintext Attack

We show that the widely deployed RSA-OAEP encryption scheme of Bellare and Rogaway (Eurocrypt 1994), which combines RSA with two rounds of an underlying Feistel network whose hash (i.e., round) functions are modeled as random oracles, meets indistinguishability under chosen-plaintext attack (IND-CPA) in the standard model based on simple, non-interactive, and non-interdependent assumptions on RSA and the hash functions. To prove this, we first give a result on a more general notion called “padding-based” encryption, saying that such a scheme is IND-CPA if (1) its underlying padding transform satisfies a “fooling” condition against small-range distinguishers on a class of high-entropy input distributions, and (2) its trapdoor permutation is sufficiently lossy as defined by Peikert and Waters (STOC 2008). We then show that the first round of OAEP satisfies condition (1) if its hash function is t-wise independent for appropriate t, and that RSA satisfies condition (2) under the Φ-Hiding Assumption of Cachin et al. (Eurocrypt 1999).

This appears to be the first non-trivial positive result about the instantiability of RSA-OAEP. In particular, it increases our confidence that chosen-plaintext attacks are unlikely to be found against the scheme. In contrast, RSA-OAEP’s predecessor in PKCS #1 v1.5 was shown to be vulnerable to such attacks by Coron et al. (Eurocrypt 2000).
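
For reference, the two-round Feistel structure of OAEP is easy to write down. A toy sketch of the padding transform only (hash and length choices here are illustrative, not the paper’s exact instantiation; the padded block would then go through the RSA permutation):

    import hashlib, os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def G(seed, n):
        """Expand `seed` to n bytes with a counter-mode hash (MGF1-style)."""
        out = b""
        ctr = 0
        while len(out) < n:
            out += hashlib.sha256(seed + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def H(data, n):
        return hashlib.sha256(data).digest()[:n]

    def oaep_pad(m, k=128, r_len=32, k1=32):
        """Two Feistel rounds: s = (m || 0^{k1}) xor G(r);  t = r xor H(s)."""
        assert len(m) == k - r_len - k1
        r = os.urandom(r_len)                  # fresh random seed per encryption
        s = xor(m + b"\x00" * k1, G(r, k - r_len))
        t = xor(r, H(s, r_len))
        return s + t                           # k bytes in total

    print(len(oaep_pad(b"A" * 64)))            # 128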

Eike Kiltz, Adam O’Neill, Adam Smith
Efficient Chosen-Ciphertext Security via Extractable Hash Proofs

We introduce the notion of an extractable hash proof system. Essentially, this is a special kind of non-interactive zero-knowledge proof of knowledge system where the secret keys may be generated in one of two modes to allow for either simulation or extraction.

We show how to derive efficient CCA-secure encryption schemes via extractable hash proofs in a simple and modular fashion. Our construction clarifies and generalizes the recent factoring-based cryptosystem of Hofheinz and Kiltz (Eurocrypt ’09), and is reminiscent of an approach proposed by Rackoff and Simon (Crypto ’91). We show how to instantiate extractable hash proof systems for hard search problems, notably factoring and computational Diffie-Hellman. Using our framework, we obtain the first CCA-secure encryption scheme based on CDH where the public key is a constant number of group elements, and a more modular and conceptually simpler variant of the Hofheinz-Kiltz cryptosystem (though less efficient).

We introduce adaptive trapdoor relations, a relaxation of the adaptive trapdoor functions considered by Kiltz, Mohassel and O’Neill (Eurocrypt ’10), which nonetheless implies CCA-secure encryption schemes. We show how to construct such relations using extractable hash proofs, which in turn yields realizations from hardness of factoring and CDH.

Hoeteck Wee

Attacks

Factorization of a 768-Bit RSA Modulus

This paper reports on the factorization of the 768-bit number RSA-768 by the number field sieve factoring method and discusses some implications for RSA.

Thorsten Kleinjung, Kazumaro Aoki, Jens Franke, Arjen K. Lenstra, Emmanuel Thomé, Joppe W. Bos, Pierrick Gaudry, Alexander Kruppa, Peter L. Montgomery, Dag Arne Osvik, Herman te Riele, Andrey Timofeev, Paul Zimmermann
Correcting Errors in RSA Private Keys

Let pk = (N, e) be an RSA public key with corresponding secret key ${\sf sk}=(p,q,d,d_p,d_q, q_p^{-1})$. Assume that we obtain partial error-free information of sk, e.g., assume that we obtain half of the most significant bits of p. Then there are well-known algorithms to recover the full secret key. As opposed to these algorithms that allow for correcting erasures of the key sk, we present for the first time a heuristic probabilistic algorithm that is capable of correcting errors in sk provided that e is small. That is, on input of a full but error-prone secret key $\widetilde{\sf sk}$ we reconstruct the original sk by correcting the faults.

More precisely, consider an error rate of $\delta \in [0,\frac 1 2)$, where we flip each bit in sk with probability δ, resulting in an erroneous key $\widetilde{\sf sk}$. Our Las Vegas type algorithm allows us to recover sk from $\widetilde{\sf sk}$ in expected time polynomial in log N with success probability close to 1, provided that δ < 0.237. We also obtain a polynomial-time Las Vegas factorization algorithm for recovering the factorization (p, q) from an erroneous version with error rate δ < 0.084. A toy illustration of the error model appears below.
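
The following sketch is our own illustration of the error model (the reconstruction algorithm itself is the paper’s contribution and is not reproduced here): each bit of a secret value flips independently with probability δ, and the public relation N = p·q gives a way to test a fully corrected candidate.

    import random

    def flip_bits(value, bitlen, delta, rng):
        """Flip each of the low `bitlen` bits of `value` with probability delta."""
        for i in range(bitlen):
            if rng.random() < delta:
                value ^= 1 << i
        return value

    rng = random.Random(1)
    p, q = 65521, 65537        # toy primes standing in for RSA-size factors
    N = p * q                  # the public modulus
    delta = 0.1

    p_noisy = flip_bits(p, p.bit_length(), delta, rng)
    q_noisy = flip_bits(q, q.bit_length(), delta, rng)

    # number of injected faults, and the public-key test a corrected pair must pass
    print(bin(p ^ p_noisy).count("1"), p_noisy * q_noisy == N)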

Wilko Henecka, Alexander May, Alexander Meurer
Improved Differential Attacks for ECHO and Grøstl

We present improved cryptanalysis of two second-round SHA-3 candidates: the AES-based hash functions ECHO and Grøstl. We explain methods for building better differential trails for ECHO by increasing the granularity of the truncated differential paths previously considered. In the case of Grøstl, we describe a new technique, the internal differential attack, which shows that when using parallel computations designers should also consider the differential security between the parallel branches. Then, we exploit the recently introduced start-from-the-middle or Super-Sbox attacks, which proved to be very efficient when attacking AES-like permutations, to achieve a very efficient utilization of the available degrees of freedom. Finally, we obtain the best known attacks so far for both ECHO and Grøstl. In particular, we are able to mount a distinguishing attack for the full Grøstl-256 compression function.

Thomas Peyrin
A Practical-Time Related-Key Attack on the KASUMI Cryptosystem Used in GSM and 3G Telephony

The privacy of most GSM phone conversations is currently protected by the 20+ year old A5/1 and A5/2 stream ciphers, which were repeatedly shown to be cryptographically weak. They will soon be replaced by the new A5/3 (and the soon to be announced A5/4) algorithm based on the block cipher KASUMI, which is a modified version of MISTY. In this paper we describe a new type of attack called a sandwich attack, and use it to construct a simple distinguisher for 7 of the 8 rounds of KASUMI with an amazingly high probability of 2^{−14}. By using this distinguisher and analyzing the single remaining round, we can derive the complete 128-bit key of the full KASUMI by using only 4 related keys, 2^{26} data, 2^{30} bytes of memory, and 2^{32} time. These complexities are so small that we have actually simulated the attack in less than two hours on a single PC, and experimentally verified its correctness and complexity. Interestingly, neither our technique nor any other published attack can break MISTY in less than the 2^{128} complexity of exhaustive search, which indicates that the changes made by ETSI’s SAGE group in moving from MISTY to KASUMI resulted in a much weaker cipher.

Orr Dunkelman, Nathan Keller, Adi Shamir

Composition

Universally Composable Incoercibility

We present the UC/c framework, a general definition for secure and incoercible multi-party protocols. Our framework allows modeling arbitrary reactive protocol tasks (by specifying an ideal functionality) and comes with a universal composition theorem. We show that given natural setup assumptions, we can construct incoercible two-party protocols realizing arbitrary functionalities (with respect to static adversaries).

Dominique Unruh, Jörn Müller-Quade
Concurrent Non-Malleable Zero Knowledge Proofs

Concurrent non-malleable zero-knowledge (NMZK) considers the concurrent execution of zero-knowledge protocols in a setting where the attacker can simultaneously corrupt multiple provers and verifiers. Barak, Prabhakaran and Sahai (FOCS ’06) recently provided the first construction of a concurrent NMZK protocol without any set-up assumptions. Their protocol, however, is only computationally sound (a.k.a., a concurrent NMZK argument). In this work we present the first construction of a concurrent NMZK proof without any set-up assumptions. Our protocol requires poly(n) rounds assuming one-way functions, or $\tilde{O}(\log n)$ rounds assuming collision-resistant hash functions.

As an additional contribution, we improve the round complexity of concurrent NMZK arguments based on one-way functions (from poly(n) to $\tilde O(\log n)$), and achieve a near-linear (instead of cubic) security reduction. Taken together, our results close the gap between concurrent ZK protocols and concurrent NMZK protocols (in terms of feasibility, round complexity, hardness assumptions, and tightness of the security reduction).

Huijia Lin, Rafael Pass, Wei-Lung Dustin Tseng, Muthuramakrishnan Venkitasubramaniam
Equivalence of Uniform Key Agreement and Composition Insecurity

We prove that achieving adaptive security from composing two general non-adaptively secure pseudo-random functions is impossible if and only if a uniform-transcript key agreement protocol exists.

It is well known that proving the security of a key agreement protocol (even in a special case where the protocol transcript looks random to an outside observer) is at least as difficult as proving $P \not = NP$. Another (seemingly unrelated) statement in cryptography is the existence of two or more non-adaptively secure pseudo-random functions that do not become adaptively secure under sequential or parallel composition. In 2006, Pietrzak showed that at least one of these two seemingly unrelated statements is true. Pietrzak’s result was significant since it showed a surprising connection between the worlds of public-key (i.e., “cryptomania”) and private-key cryptography (i.e., “minicrypt”). In this paper we show that this duality is far stronger: we show that at least one of these two statements must also be false. In other words, we show their equivalence.

More specifically, Pietrzak’s paper shows that if sequential composition of two non-adaptively secure pseudo-random functions is not adaptively secure, then there exists a key agreement protocol. However, Pietrzak’s construction implies a slightly stronger fact: if sequential composition does not imply adaptive security (in the above sense), then a uniform-transcript key agreement protocol exists, where by uniform-transcript we mean a key agreement protocol where the transcript of the protocol execution is indistinguishable from uniform to eavesdroppers. In this paper, we complete the picture, and show the reverse direction as well as a strong equivalence between these two notions. More specifically, as our main result, we show that if there exists any uniform-transcript key agreement protocol, then composition does not imply adaptive security. Our result holds for both parallel and sequential composition. Our implication holds based on virtually all known key agreement protocols, and can also be based on general complexity assumptions such as the existence of dense trapdoor permutations.

Chongwon Cho, Chen-Kuei Lee, Rafail Ostrovsky

Computation Delegation and Obfuscation

Non-interactive Verifiable Computing: Outsourcing Computation to Untrusted Workers

We introduce and formalize the notion of Verifiable Computation, which enables a computationally weak client to “outsource” the computation of a function F on various dynamically-chosen inputs x_1,...,x_k to one or more workers. The workers return the result of the function evaluation, e.g., y_i = F(x_i), as well as a proof that the computation of F was carried out correctly on the given value x_i. The primary constraint is that the verification of the proof should require substantially less computational effort than computing F(x_i) from scratch.

We present a protocol that allows the worker to return a computationally-sound, non-interactive proof that can be verified in O(m·poly(λ)) time, where m is the bit-length of the output of F, and λ is a security parameter. The protocol requires a one-time pre-processing stage by the client which takes O(|C|·poly(λ)) time, where C is the smallest known Boolean circuit computing F. Unlike previous work in this area, our scheme also provides (at no additional cost) input and output privacy for the client, meaning that the workers do not learn any information about the x_i or y_i values.

Rosario Gennaro, Craig Gentry, Bryan Parno
Improved Delegation of Computation Using Fully Homomorphic Encryption

Following Gennaro, Gentry, and Parno (Cryptology ePrint Archive 2009/547), we use fully homomorphic encryption to design improved schemes for delegating computation. In such schemes, a delegator outsources the computation of a function F on many, dynamically chosen inputs x_i to a worker in such a way that it is infeasible for the worker to make the delegator accept a result other than F(x_i). The “online stage” of the Gennaro et al. scheme is very efficient: the parties exchange two messages, the delegator runs in time poly(log T), and the worker runs in time poly(T), where T is the time complexity of F. However, the “offline stage” (which depends on the function F but not the inputs to be delegated) is inefficient: the delegator runs in time poly(T) and generates a public key of length poly(T) that needs to be accessed by the worker during the online stage.

Our first construction eliminates the large public key from the Gennaro et al. scheme. The delegator still invests poly(T) time in the offline stage, but does not need to communicate or publish anything. Our second construction reduces the work of the delegator in the offline stage to poly(log T) at the price of a 4-message (offline) interaction with a poly(T)-time worker (which need not be the same as the workers used in the online stage). Finally, we describe a “pipelined” implementation of the second construction that avoids the need to re-run the offline construction after errors are detected (assuming errors are not too frequent).

Kai-Min Chung, Yael Kalai, Salil Vadhan
Oblivious RAM Revisited

We reinvestigate the oblivious RAM concept introduced by Goldreich and Ostrovsky, which enables a client that can store locally only a constant amount of data to store remotely n data items, and access them while hiding the identities of the items being accessed. Oblivious RAM is often cited as a powerful tool, but is also commonly considered to be impractical due to its overhead, which is asymptotically efficient but quite high. We redesign the oblivious RAM protocol using modern tools, namely Cuckoo hashing and a new oblivious sorting algorithm. The resulting protocol uses only O(n) external memory, and replaces each data request by only O(log² n) requests.
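
Of the two tools mentioned, Cuckoo hashing is easy to sketch in isolation: every key has one candidate position in each of two tables, and insertions evict and relocate until everything fits. A toy standalone version (our own illustration, ignoring rehashing and the oblivious machinery the protocol adds on top):

    import random

    class CuckooTable:
        def __init__(self, size, seed=0):
            self.size = size
            self.tables = [[None] * size, [None] * size]
            rng = random.Random(seed)
            self.salts = (rng.getrandbits(32), rng.getrandbits(32))

        def _pos(self, key, i):
            return hash((self.salts[i], key)) % self.size

        def insert(self, key, max_kicks=64):
            i = 0
            for _ in range(max_kicks):
                p = self._pos(key, i)
                if self.tables[i][p] is None:
                    self.tables[i][p] = key
                    return True
                # evict the occupant and try to place it in the other table
                self.tables[i][p], key = key, self.tables[i][p]
                i = 1 - i
            return False        # a full implementation would rehash here

        def lookup(self, key):  # exactly two probes, one per table
            return any(self.tables[i][self._pos(key, i)] == key for i in (0, 1))

    t = CuckooTable(64)
    ok = all(t.insert(k) for k in range(40))
    print(ok and all(t.lookup(k) for k in range(40)))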

Benny Pinkas, Tzachy Reinman
On Strong Simulation and Composable Point Obfuscation

The Virtual Black Box (VBB) property for program obfuscators provides a strong guarantee: Anything computable by an efficient adversary given the obfuscated program can also be computed by an efficient simulator with only oracle access to the program. However, we know how to achieve this notion only for very restricted classes of programs.

This work studies a simple relaxation of VBB: allow the simulator unbounded computation time, while still allowing only polynomially many queries to the oracle. We then demonstrate the viability of this relaxed notion, which we call Virtual Grey Box (VGB), in the context of fully composable obfuscators for point programs: it is known that, w.r.t. VBB, if such obfuscators exist, then there exist multi-bit point obfuscators (aka “digital lockers”) and subsequently also very strong variants of encryption that are resilient to various attacks, such as key leakage and key-dependent messages. However, no composable VBB-obfuscators for point programs have been shown. We show fully composable VGB-obfuscators for point programs under a strong variant of the Decision Diffie-Hellman assumption. We show they suffice for the above applications and even for extensions to the public-key setting, as well as for encryption schemes with resistance to certain related-key attacks (RKA).

Nir Bitansky, Ran Canetti

Multiparty Computation

Protocols for Multiparty Coin Toss with Dishonest Majority

Coin-tossing protocols are protocols that generate a random bit with uniform distribution. These protocols are used as a building block in many cryptographic protocols. Cleve [STOC 1986] has shown that if at least half of the parties can be malicious, then, in any

-round coin-tossing protocol, the malicious parties can cause a bias of

to the bit that the honest parties output. However, for more than two decades the best known protocols had bias

, where

is the number of corrupted parties. Recently, in a surprising result, Moran, Naor, and Segev [TCC 2009] have shown that there is an

-round two-party coin-tossing protocol with the optimal bias of

. We extend Moran et al. results to the

multiparty model

when less than 2/3 of the parties are malicious. The bias of our protocol is proportional to

and depends on the gap between the number of malicious parties and the number of honest parties in the protocol. Specifically, for a constant number of parties or when the number of malicious parties is somewhat larger than half, we present an

-round

-party coin-tossing protocol with optimal bias of

.

Amos Beimel, Eran Omri, Ilan Orlov
Multiparty Computation for Dishonest Majority: From Passive to Active Security at Low Cost

Multiparty computation protocols have been known for more than twenty years now, but due to their lack of efficiency their use is still limited in real-world applications: the goal of this paper is the design of efficient two- and multi-party computation protocols aimed at filling the gap between theory and practice. We propose a new protocol to securely evaluate reactive arithmetic circuits that offers security against an active adversary in the universally composable security framework. Instead of the “do-and-compile” approach (where the parties use zero-knowledge proofs to show that they are following the protocol), our key ingredient is an efficient version of the “cut-and-choose” technique, which allows us to achieve active security for just a (small) constant amount of work more than for passive security.

Ivan Damgård, Claudio Orlandi
Secure Multiparty Computation with Minimal Interaction

We revisit the question of secure multiparty computation (MPC) with two rounds of interaction. It was previously shown by Gennaro et al. (Crypto 2002) that 3 or more communication rounds are necessary for general MPC protocols with guaranteed output delivery, assuming that there may be t ≥ 2 corrupted parties. This negative result holds regardless of the total number of parties, even if broadcast is allowed in each round, and even if only fairness is required. We complement this negative result by presenting matching positive results.

Our first main result is that if only one party may be corrupted, then n ≥ 5 parties can securely compute any function of their inputs using only two rounds of interaction over secure point-to-point channels (without broadcast or any additional setup). The protocol makes a black-box use of a pseudorandom generator, or alternatively can offer unconditional security for functionalities in NC^1.

We also prove a similar result in a client-server setting, where there are m ≥ 2 clients who hold inputs and should receive outputs, and n additional servers with no inputs and outputs. For this setting, we obtain a general MPC protocol which requires a single message from each client to each server, followed by a single message from each server to each client. The protocol is secure against a single corrupted client and against coalitions of t < n/3 corrupted servers.

The above protocols guarantee output delivery and fairness. Our second main result shows that under a relaxed notion of security, allowing the adversary to selectively decide (after learning its own outputs) which honest parties will receive their (correct) output, there is a general 2-round MPC protocol which tolerates t < n/3 corrupted parties. This protocol relies on the existence of a pseudorandom generator in NC^1 (which is implied by standard cryptographic assumptions), or alternatively can offer unconditional security for functionalities in NC^1.

Yuval Ishai, Eyal Kushilevitz, Anat Paskin
A Zero-One Law for Cryptographic Complexity with Respect to Computational UC Security

It is well-known that most cryptographic tasks do not have universally composable (UC) secure protocols if no trusted setup is available in the framework. On the other hand, if a task like fair coin-tossing is available as a trusted setup, then all cryptographic tasks have UC-secure protocols. What other trusted setups allow UC-secure protocols for all tasks? More generally, given a particular setup, what tasks have UC-secure protocols?

We show that, surprisingly, every trusted setup is either useless (equivalent to having no trusted setup) or all-powerful (allows UC-secure protocols for all tasks). There are no “intermediate” trusted setups in the UC framework. We prove this zero-one law under a natural intractability assumption, and consider the class of deterministic, finite, 2-party functionalities as candidate trusted setups.

One important technical contribution in this work is to initiate the comprehensive study of the cryptographic properties of reactive functionalities. We model these functionalities as finite automata and develop an automata-theoretic methodology for classifying and studying their cryptographic properties. Consequently, we completely characterize the reactive behaviors that lead to cryptographic non-triviality. Another contribution of independent interest is to optimize the hardness assumption used by Canetti et al. (STOC 2002) in showing that the common random string functionality is complete (a result independently obtained by Damgård et al. (TCC 2010)).

Hemanta K. Maji, Manoj Prabhakaran, Mike Rosulek

Pseudorandomness

On Generalized Feistel Networks

We prove beyond-birthday-bound security for most of the well-known types of generalized Feistel networks: (1) unbalanced Feistel networks, where the n-bit to m-bit round functions may have $n\ne m$; (2) alternating Feistel networks, where the round functions alternate between contracting and expanding; (3) type-1, type-2, and type-3 Feistel networks, where n-bit to n-bit round functions are used to encipher kn-bit strings for some k ≥ 2; and (4) numeric variants of any of the above, where one enciphers numbers in some given range rather than strings of some given size. Using a unified analytic framework, we show that, in any of these settings, for any ε > 0, with enough rounds, the subject scheme can tolerate CCA attacks of up to q ~ N^{1 − ε} adversarial queries, where N is the size of the round functions’ domain (the larger domain for alternating Feistel). Prior analyses for most generalized Feistel networks established security to only q ~ N^{0.5} queries.
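
To fix the definitions, here is a toy sketch of one of the variants, a type-2 Feistel network on k = 4 branches: each round applies an n-bit to n-bit function to every even-indexed branch, XORs the result into the neighboring branch, and rotates the branches. The round functions below are hash-based stand-ins chosen only so the sketch runs.

    import hashlib

    def F(round_key, x, n=16):
        """Toy n-bit round function (a real cipher would use a keyed primitive)."""
        h = hashlib.sha256(round_key + x.to_bytes(4, "big")).digest()
        return int.from_bytes(h[:4], "big") % (1 << n)

    def type2_encrypt(branches, round_keys):
        """Type-2 GFN round: XOR F(even branch) into its neighbor, then rotate."""
        b = list(branches)              # k branches of n bits each, k even
        k = len(b)
        for rk in round_keys:
            for i in range(0, k, 2):
                b[i + 1] ^= F(rk, b[i])
            b = b[1:] + b[:1]           # cyclic left rotation of the branches
        return b

    def type2_decrypt(branches, round_keys):
        b = list(branches)
        k = len(b)
        for rk in reversed(round_keys):
            b = b[-1:] + b[:-1]         # undo the rotation
            for i in range(0, k, 2):
                b[i + 1] ^= F(rk, b[i])  # the XOR layer is its own inverse
        return b

    keys = [bytes([i]) * 4 for i in range(10)]
    pt = [0x1111, 0x2222, 0x3333, 0x4444]
    ct = type2_encrypt(pt, keys)
    print(ct, type2_decrypt(ct, keys) == pt)

Decryption undoes the rotation and re-applies the same XOR layer, which confirms the construction is a permutation for any choice of round functions.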

Viet Tung Hoang, Phillip Rogaway
Cryptographic Extraction and Key Derivation: The HKDF Scheme

In spite of the central role of key derivation functions (KDF) in applied cryptography, there has been little formal work addressing the design and analysis of general multi-purpose KDFs. In practice, most KDFs (including those widely standardized) follow ad-hoc approaches that treat cryptographic hash functions as perfectly random functions. In this paper we close some gaps between theory and practice by contributing to the study and engineering of KDFs in several ways. We provide detailed rationale for the design of KDFs based on the extract-then-expand approach; we present the first general and rigorous definition of KDFs and their security that we base on the notion of computational extractors; we specify a concrete fully practical KDF based on the HMAC construction; and we provide an analysis of this construction based on the extraction and pseudorandom properties of HMAC. The resultant KDF design can support a large variety of KDF applications under suitable assumptions on the underlying hash function; particular attention and effort is devoted to minimizing these assumptions as much as possible for each usage scenario.

Beyond the theoretical interest in modeling KDFs, this work is intended to address two important and timely needs of cryptographic applications: (i) providing a single hash-based KDF design that can be standardized for use in multiple and diverse applications, and (ii) providing a conservative, yet efficient, design that exercises much care in the way it utilizes a cryptographic hash function. (The HMAC-based scheme presented here, named HKDF, is being standardized by the IETF.)
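
The extract-then-expand structure is compact enough to show directly. This sketch follows HKDF as later standardized in RFC 5869, with SHA-256 fixed for concreteness:

    import hashlib, hmac

    HASH = hashlib.sha256
    HASH_LEN = HASH().digest_size    # 32 bytes for SHA-256

    def hkdf_extract(salt, ikm):
        """Extract: concentrate the entropy of `ikm` into a pseudorandom key."""
        if not salt:
            salt = b"\x00" * HASH_LEN
        return hmac.new(salt, ikm, HASH).digest()

    def hkdf_expand(prk, info, length):
        """Expand: stretch `prk` into `length` bytes bound to the context `info`."""
        assert length <= 255 * HASH_LEN
        okm, t = b"", b""
        for i in range(1, -(-length // HASH_LEN) + 1):   # ceil(length / HASH_LEN)
            t = hmac.new(prk, t + info + bytes([i]), HASH).digest()
            okm += t
        return okm[:length]

    prk = hkdf_extract(b"salt", b"input keying material")
    print(hkdf_expand(prk, b"application context", 42).hex())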

Hugo Krawczyk
Time Space Tradeoffs for Attacks against One-Way Functions and PRGs

We study time space tradeoffs in the complexity of attacks against one-way functions and pseudorandom generators.

Fiat and Naor [7] show that for every function f: [N] → [N], there is an algorithm that inverts f everywhere using (ignoring lower order factors) time, space and advice at most N^{3/4}.

We show that there is an algorithm, using time, space and advice at most $\max \{ \epsilon^{\frac 54} N^{\frac 34}, \sqrt{\epsilon N} \}$, that inverts f on at least an ε fraction of inputs. A lower bound of $\tilde \Omega(\sqrt{\epsilon N})$ also holds, making our result tight in the “low end” of $\epsilon \leq \sqrt[3]{\frac{1}{N}}$.

(Both the results of Fiat and Naor and ours are formulated as more general trade-offs between the time and the space and advice length of the algorithm. The results quoted above correspond to the interesting special case in which time equals space and advice length.)
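
The upper bounds in this line of work build on Hellman-style chains: precompute chains of iterates of f, store only endpoint/start pairs as advice, and walk forward at attack time. A toy single-table sketch (the Fiat-Naor construction and this paper’s refinement add multiple tables and hashing tricks to handle general functions; inversion below succeeds only for points covered by some chain):

    import random

    def build_table(f, N, m, t, seed=0):
        """Precompute m chains of length t; store endpoint -> start (the advice)."""
        rng = random.Random(seed)
        table = {}
        for _ in range(m):
            start = rng.randrange(N)
            x = start
            for _ in range(t):
                x = f(x)
            table[x] = start
        return table

    def invert(f, table, t, y):
        """Try to find some x with f(x) = y using only the stored endpoints."""
        x = y
        for _ in range(t):
            if x in table:                 # y may lie on the chain ending at x
                z = table[x]
                for _ in range(t):         # walk that chain, checking candidates
                    if f(z) == y:
                        return z
                    z = f(z)
            x = f(x)
        return None

    N = 1 << 16
    f = lambda x: (x * x + 1) % N          # a fixed toy function on [N]
    table = build_table(f, N, m=256, t=256)
    y = f(12345)
    z = invert(f, table, t=256, y=y)
    print(z is not None and f(z) == y)     # True whenever y is covered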

We also show that for every length-increasing generator G: [N] → [2N] there is an algorithm that achieves distinguishing probability ε between the output of G and the uniform distribution, and that can be implemented in time polynomial in log N and with advice and space O(ε² · N log N). We prove a lower bound of S · T ≥ Ω(ε² N), where T is the time used by the algorithm and S is the amount of advice. This lower bound applies even when the distinguisher has oracle access to G.

We prove stronger lower bounds in the common random string model, for families of one-way permutations and of pseudorandom generators.

Anindya De, Luca Trevisan, Madhur Tulsiani
Pseudorandom Functions and Permutations Provably Secure against Related-Key Attacks

This paper fills an important foundational gap with the first proofs, under standard assumptions and in the standard model, of the existence of PRFs and PRPs resisting rich and relevant forms of relatedkey attack (RKA). An RKA allows the adversary to query the function not only under the target key but under other keys derived from it in adversary-specified ways. Based on the Naor-Reingold PRF we obtain an RKA-PRF whose keyspace is a group and that is proven, under DDH, to resist attacks in which the key may be operated on by

arbitrary

adversary-specified group elements. Our framework yields other RKA-PRFs including a DLIN-based one derived from the Lewko- Waters PRF.We show how to turn these PRFs into PRPs (blockciphers) while retaining security against RKAs. Over the last 17 years cryptanalysts and blockcipher designers have routinely and consistenly targeted RKA-security; it is important for abuse-resistant cryptography; and it helps protect against fault-injection sidechannel attacks. Yet ours are the first significant proofs of existence of secure constructs. We warn that our constructs are proofs-of-concept in the foundational style and not practical.

Mihir Bellare, David Cash

Quantum

Secure Two-Party Quantum Evaluation of Unitaries against Specious Adversaries

We describe how any two-party quantum computation, specified by a unitary which acts simultaneously on the registers of both parties, can be privately implemented against a quantum version of classical semi-honest adversaries that we call specious. Our construction requires two ideal functionalities to guarantee privacy: a private SWAP between registers held by the two parties, and a classical private AND-box equivalent to oblivious transfer. If the unitary to be evaluated is in the Clifford group, then only one call to SWAP is required for privacy. On the other hand, any unitary not in the Clifford group requires one call to an AND-box per R-gate in the circuit. Since SWAP is itself in the Clifford group, this functionality is universal for the private evaluation of any unitary in that group. SWAP can be built from a classical bit-commitment scheme or an AND-box, but an AND-box cannot be constructed from SWAP. It follows that unitaries in the Clifford group are to some extent the easy ones. We also show that SWAP cannot be implemented privately in the bare model.

Frédéric Dupuis, Jesper Buus Nielsen, Louis Salvail
On the Efficiency of Classical and Quantum Oblivious Transfer Reductions

Due to its universality, oblivious transfer (OT) is a primitive of great importance in secure multi-party computation. OT is impossible to implement from scratch in an unconditionally secure way, but there are many reductions of OT to other variants of OT, as well as to other primitives such as noisy channels. It is important to know how efficient such unconditionally secure reductions can be in principle, i.e., how many instances of a given primitive are at least needed to implement OT. For perfect (error-free) implementations good lower bounds are known, e.g. the bounds by Beaver (STOC ’96) or by Dodis and Micali (EUROCRYPT ’99). However, in practice one is usually willing to tolerate a small probability of error, and it is known that these statistical reductions can in general be much more efficient. Thus, the known bounds have only limited application. In the first part of this work we provide bounds on the efficiency of secure (one-sided) two-party computation of arbitrary finite functions from distributed randomness in the statistical case. From these results we derive bounds on the efficiency of protocols that use (different variants of) OT as a black-box. When applied to implementations of OT, our bounds generalize known results to the statistical case. Our results hold in particular for transformations between a finite number of primitives and for any error. Furthermore, we provide bounds on the efficiency of protocols implementing Rabin OT.

In the second part we study the efficiency of quantum protocols implementing OT. Recently, Salvail, Schaffner and Sotakova (ASIACRYPT ’09) showed that most classical lower bounds for perfectly secure reductions of OT to distributed randomness still hold in a quantum setting. We present a statistically secure protocol that violates these bounds by an arbitrarily large factor. We then present a weaker lower bound that does hold in the statistical quantum setting. We use this bound to show that even quantum protocols cannot extend OT. Finally, we present two lower bounds for reductions of OT to commitments and a protocol based on string commitments that is optimal with respect to both of these bounds.

Severin Winkler, Jürg Wullschleger
Sampling in a Quantum Population, and Applications

We propose a framework for analyzing classical sampling strategies for estimating the Hamming weight of a large string from a few sample positions, when applied to a multi-qubit quantum system instead. The framework shows how to interpret the result of such a strategy and how to define its accuracy when applied to a quantum system. Furthermore, we show how the accuracy of any strategy relates to its accuracy in its classical usage, which is well understood for the important examples. We show the usefulness of our framework by using it to obtain new and simple security proofs for the following quantum-cryptographic schemes: BB84 quantum-key-distribution, and quantum oblivious-transfer from bit-commitment.
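
The classical strategies in question are of the following simple kind: estimate the relative Hamming weight of a long string by reading a few uniformly chosen positions. A minimal sketch (illustrative parameters; the paper’s contribution is analyzing what such a strategy means when the positions hold qubits):

    import random

    def estimate_relative_weight(bits, k, rng=random):
        """Estimate the fraction of 1s in `bits` from k sampled positions."""
        positions = rng.sample(range(len(bits)), k)
        return sum(bits[i] for i in positions) / k

    rng = random.Random(7)
    n = 10_000
    bits = [1 if rng.random() < 0.3 else 0 for _ in range(n)]
    est = estimate_relative_weight(bits, 200, rng)
    print(round(est, 3), round(abs(est - sum(bits) / n), 3))  # small w.h.p. (Hoeffding)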

Niek J. Bouman, Serge Fehr
Backmatter
Metadata
Title
Advances in Cryptology – CRYPTO 2010
Edited by
Tal Rabin
Copyright Year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-14623-7
Print ISBN
978-3-642-14622-0
DOI
https://doi.org/10.1007/978-3-642-14623-7