
2007 | Book

Advances in Cryptology - CRYPTO 2007

27th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 19-23, 2007. Proceedings


Table of Contents

Frontmatter

Cryptanalysis I

Practical Cryptanalysis of SFLASH

In this paper, we present a practical attack on the signature scheme SFLASH proposed by Patarin, Goubin and Courtois in 2001, following a design they had introduced in 1998. The attack only needs the public key and requires about one second to forge a signature for any message, after a one-time computation of several minutes. It can be applied both to SFLASH v2, which was accepted by NESSIE, and to SFLASH v3, which is a higher-security version.

Vivien Dubois, Pierre-Alain Fouque, Adi Shamir, Jacques Stern
Full Key-Recovery Attacks on HMAC/NMAC-MD4 and NMAC-MD5

At Crypto ’06, Bellare presented new security proofs for HMAC and NMAC, under the assumption that the underlying compression function is a pseudo-random function family. Conversely, at Asiacrypt ’06, Contini and Yin used collision techniques to obtain forgery and partial key-recovery attacks on HMAC and NMAC instantiated with MD4, MD5, SHA-0 and reduced SHA-1. In this paper, we present the first full key-recovery attacks on NMAC and HMAC instantiated with a real-life hash function, namely MD4. Our main result is an attack on HMAC/NMAC-MD4 which recovers the full MAC secret key after roughly $2^{88}$ MAC queries and $2^{95}$ MD4 computations. We also extend the partial key-recovery Contini-Yin attack on NMAC-MD5 (in the related-key setting) to a full key-recovery attack. The attacks are based on generalizations of collision attacks to recover a secret IV, using new differential paths for MD4.
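For orientation, the HMAC construction targeted here is the standard one: $\mathrm{HMAC}(K,M)=H((K\oplus \mathrm{opad})\,\|\,H((K\oplus \mathrm{ipad})\,\|\,M))$. Below is a minimal Python sketch of this definition (editor's illustration, not material from the paper; MD5 stands in only because MD4 is not reliably available in Python's hashlib, while the attacked instantiations use MD4/MD5):

```python
import hashlib
import hmac  # standard library, used only as a cross-check

BLOCK = 64  # byte block size of MD4/MD5-style hash functions

def hmac_manual(key: bytes, msg: bytes, hash_fn=hashlib.md5) -> bytes:
    # HMAC(K, M) = H((K xor opad) || H((K xor ipad) || M))
    if len(key) > BLOCK:
        key = hash_fn(key).digest()   # long keys are hashed first
    key = key.ljust(BLOCK, b"\x00")   # then padded to the block size
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hash_fn(ipad + msg).digest()
    return hash_fn(opad + inner).digest()

# sanity check against the standard library implementation
assert hmac_manual(b"key", b"msg") == hmac.new(b"key", b"msg", hashlib.md5).digest()
```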

Pierre-Alain Fouque, Gaëtan Leurent, Phong Q. Nguyen

Secure Searching

How Should We Solve Search Problems Privately?

Secure multiparty computation allows a group of distrusting parties to jointly compute a (possibly randomized) function of their inputs. However, it is often the case that the parties executing a computation try to solve a search problem, where one input may have a multitude of correct answers, such as when the parties compute a shortest path in a graph or find a solution to a set of linear equations.

Picking one output arbitrarily from the solution set has significant implications on the privacy of the algorithm. Beimel et al. [STOC 2006] gave a minimal definition for private computation of search problems, with a focus on proving impossibility results. In this work we aim for stronger definitions of privacy for search problems that provide reasonable privacy. We give two alternative definitions and discuss their privacy guarantees. We also supply algorithmic machinery for designing such protocols for a broad selection of search problems.

Amos Beimel, Tal Malkin, Kobbi Nissim, Enav Weinreb
Public Key Encryption That Allows PIR Queries

Consider the following problem: Alice wishes to maintain her email using a storage-provider Bob (such as a Yahoo! or Hotmail e-mail account). This storage-provider should provide for Alice the ability to collect, retrieve, search and delete emails but, at the same time, should learn neither the content of messages sent from the senders to Alice (with Bob as an intermediary), nor the search criteria used by Alice. A trivial solution is that messages will be sent to Bob in encrypted form and Alice, whenever she wants to search for some message, will ask Bob to send her a copy of the entire database of encrypted emails. This, however, is highly inefficient. We will be interested in solutions that are communication-efficient and, at the same time, respect the privacy of Alice. In this paper, we show how to create a public-key encryption scheme for Alice that allows PIR searching over encrypted documents. Our solution is the first to reveal no partial information regarding the user’s search (including the access pattern) in the public-key setting and with non-trivially small communication complexity. This provides a theoretical solution to a problem posed by Boneh, DiCrescenzo, Ostrovsky and Persiano on “Public-key Encryption with Keyword Search.” The main technique of our solution also allows for Single-Database PIR writing with sub-linear communication complexity, which we consider of independent interest.

Dan Boneh, Eyal Kushilevitz, Rafail Ostrovsky, William E. Skeith III

Invited Talk

Information Security Economics – and Beyond

The economics of information security has recently become a thriving and fast-moving discipline. As distributed systems are assembled from machines belonging to principals with divergent interests, incentives are becoming as important to dependability as technical design. The new field provides valuable insights not just into ‘security’ topics such as privacy, bugs, spam, and phishing, but into more general areas such as system dependability (the design of peer-to-peer systems and the optimal balance of effort by programmers and testers), and policy (particularly digital rights management). This research program has been starting to spill over into more general security questions (such as law-enforcement strategy), and into the interface between security and sociology. Most recently it has started to interact with psychology, both through the psychology-and-economics tradition and in response to phishing. The promise of this research program is a novel framework for analyzing information security problems – one that is both principled and effective.

Ross Anderson, Tyler Moore

Theory I

Cryptography with Constant Input Locality
(Extended Abstract)

We study the following natural question: Which cryptographic primitives (if any) can be realized by functions with constant input locality, namely functions in which every bit of the input influences only a constant number of bits of the output? This continues the study of cryptography in low complexity classes. It was recently shown (Applebaum et al., FOCS 2004) that, under standard cryptographic assumptions, most cryptographic primitives can be realized by functions with constant output locality, namely ones in which every bit of the output is influenced by a constant number of bits from the input.

We (almost) characterize what cryptographic tasks can be performed with constant input locality. On the negative side, we show that primitives which require some form of non-malleability (such as digital signatures, message authentication, or non-malleable encryption) cannot be realized with constant input locality. On the positive side, assuming the intractability of certain problems from the domain of error-correcting codes (namely, hardness of decoding a random linear code or the security of the McEliece cryptosystem), we obtain new constructions of one-way functions, pseudorandom generators, commitments, and semantically-secure public-key encryption schemes whose input locality is constant. Moreover, these constructions also enjoy constant output locality. Therefore, they give rise to cryptographic hardware that has constant depth, constant fan-in and constant fan-out. As a byproduct, we obtain a pseudorandom generator whose output and input locality are both optimal (namely, 3).

Benny Applebaum, Yuval Ishai, Eyal Kushilevitz
Universally-Composable Two-Party Computation in Two Rounds

Round complexity is a central measure of efficiency, and characterizing the round complexity of various cryptographic tasks is of both theoretical and practical importance. We show here a universally-composable (UC) protocol (in the common reference string model) for two-party computation of any functionality, where both parties receive output, using only two rounds. (This assumes honest parties are allowed to transmit messages simultaneously in any given round; we obtain a three-round protocol when parties are required to alternate messages.) Our results match the obvious lower bounds for the round complexity of secure two-party computation under any reasonable definition of security, regardless of what setup is used. Thus, our results establish that secure two-party computation can be obtained under a commonly-used setup assumption with maximal security (i.e., security under general composition) in a minimal number of rounds.

To give but one example of the power of our general result, we observe that as an almost immediate corollary we obtain a two-round UC blind signature scheme, matching a result by Fischlin at Crypto 2006 (though, in contrast to Fischlin, we use specific number-theoretic assumptions).

Omer Horvitz, Jonathan Katz
Indistinguishability Amplification

Many aspects of cryptographic security proofs can be seen as the proof that a certain system (e.g. a block cipher) is indistinguishable from an ideal system (e.g. a random permutation), for different types of distinguishers.

This paper presents a new generic approach to proving upper bounds on the information-theoretic distinguishing advantage (from an ideal system) for a combined system, assuming upper bounds of certain types for the component systems. For a general type of combination operation of systems, including the XOR of functions or the cascade of permutations, we prove two amplification theorems. The first is a product theorem, in the spirit of XOR-lemmas: The distinguishing advantage of the combination of two systems is at most twice the product of the individual distinguishing advantages. This bound is optimal. The second theorem states that the combination of systems is secure against some strong class of distinguishers, assuming only that the components are secure against some weaker class of distinguishers.
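In symbols (editor's shorthand, not the paper's notation): writing $\Delta(S)$ for the best information-theoretic distinguishing advantage of a system $S$ from its ideal counterpart, and $S_1 \star S_2$ for the combination operation (e.g., XOR of functions or cascade of permutations), the product theorem reads roughly

$$\Delta(S_1 \star S_2) \;\le\; 2\,\Delta(S_1)\cdot\Delta(S_2).$$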

A key technical tool of the paper is the proof of a tight two-way correspondence, previously only known to hold in one direction, between the distinguishing advantage of two systems and the probability of winning an appropriately defined game.

Ueli Maurer, Krzysztof Pietrzak, Renato Renner

Lattices

A Hybrid Lattice-Reduction and Meet-in-the-Middle Attack Against NTRU

To date the NTRUEncrypt security parameters have been based on the existence of two types of attack: a meet-in-the-middle attack due to Odlyzko, and a conservative extrapolation of the running times of the best (known) lattice reduction schemes to recover the private key. We show that there is in fact a continuum of more efficient attacks between these two attacks. We show that by combining lattice reduction and a meet-in-the-middle strategy one can reduce the number of loops in attacking the NTRUEncrypt private key from $2^{84.2}$ to $2^{60.3}$, for the $k = 80$ parameter set. In practice the attack is still expensive (depending on one’s choice of cost-metric), although there are certain space/time tradeoffs that can be applied. Asymptotically our attack remains exponential in the security parameter $k$, but it dictates that NTRUEncrypt parameters must be chosen so that the meet-in-the-middle attack has complexity $2^k$ even after an initial lattice basis reduction of complexity $2^k$.

Nick Howgrave-Graham
Improved Analysis of Kannan’s Shortest Lattice Vector Algorithm
(Extended Abstract)

The security of lattice-based cryptosystems such as NTRU, GGH and Ajtai-Dwork essentially relies upon the intractability of computing a shortest non-zero lattice vector and a closest lattice vector to a given target vector in high dimensions. The best algorithms for these tasks are due to Kannan, and, though remarkably simple, their complexity estimates have not been improved in over twenty years. Kannan’s algorithm for solving the shortest vector problem (SVP) is in particular crucial in Schnorr’s celebrated block reduction algorithm, on which rely the best known generic attacks against the lattice-based encryption schemes mentioned above. In this paper we improve the complexity upper-bounds of Kannan’s algorithms. The analysis provides new insight into the practical cost of solving SVP, and helps progress towards providing meaningful key-sizes.

Guillaume Hanrot, Damien Stehlé

Random Oracles

Domain Extension of Public Random Functions: Beyond the Birthday Barrier

A public random function is a random function that is accessible by all parties, including the adversary. For example, a (public) random oracle is a public random function $\{0,1\}^* \rightarrow \{0,1\}^n$. The natural problem of constructing a public random oracle from a public random function $\{0,1\}^m \rightarrow \{0,1\}^n$ (for some $m > n$) was first considered at Crypto 2005 by Coron et al. who proved the security of variants of the Merkle-Damgård construction against adversaries issuing up to $O(2^{n/2})$ queries to the construction and to the underlying compression function. This bound is less than the square root of $n2^m$, the number of random bits contained in the underlying random function.

In this paper, we investigate domain extenders for public random functions approaching optimal security. In particular, for all $\epsilon \in (0,1)$ and all functions $m$ and $\ell$ (polynomial in $n$), we provide a construction ${\bf C}_{\epsilon,m,\ell}(\cdot)$ which extends a public random function ${\bf R}: \{0,1\}^n \rightarrow \{0,1\}^n$ to a function ${\bf C}_{\epsilon,m,\ell}({\bf R}): \{0,1\}^{m(n)} \rightarrow \{0,1\}^{\ell(n)}$ with time-complexity polynomial in $n$ and $1/\epsilon$ and which is secure against adversaries making up to $\Theta(2^{n(1-\epsilon)})$ queries. A central tool for achieving high security is the use of special classes of unbalanced bipartite expander graphs with small degree. The achievability of practical (as opposed to complexity-theoretic) efficiency is proved by a non-constructive existence proof.

Combined with the iterated constructions of Coron et al., our result leads to the first iterated construction of a hash function $\{0,1\}^* \rightarrow \{0,1\}^n$ from a component function $\{0,1\}^n \rightarrow \{0,1\}^n$ that withstands all recently proposed generic attacks against iterated hash functions, like Joux’s multi-collision attack, Kelsey and Schneier’s second-preimage attack, and Kelsey and Kohno’s herding attacks.

Ueli Maurer, Stefano Tessaro
Random Oracles and Auxiliary Input

We introduce a variant of the random oracle model where oracle-dependent auxiliary input is allowed. In this setting, the adversary gets an auxiliary input that can contain information about the random oracle. Using simple examples we show that this model should be preferred over the classical variant where the auxiliary input is independent of the random oracle.

In the presence of oracle-dependent auxiliary input, the most important proof technique in the random oracle model—lazy sampling—does not apply directly. We present a theorem and a variant of the lazy sampling technique that allows one to perform proofs in the new model almost as easily as in the old one.

As an application of our approach and to illustrate how existing proofs can be adapted, we prove that RSA-OAEP is IND-CCA2 secure in the random oracle model with oracle-dependent auxiliary input.

Dominique Unruh

Hash Functions

Security-Amplifying Combiners for Collision-Resistant Hash Functions

The classical combiner $\mathsf{Comb}_{\text{class}}^{H_0,H_1}(M)=H_0(M)|| H_1(M)$ for hash functions $H_0$, $H_1$ provides collision-resistance as long as at least one of the two underlying hash functions is secure. This statement is complemented by the multi-collision attack of Joux (Crypto 2004) for iterated hash functions $H_0$, $H_1$ with $n$-bit outputs. He shows that one can break the classical combiner in ${n \over 2} \cdot T_0 + T_1$ steps if one can find collisions for $H_0$ and $H_1$ in time $T_0$ and $T_1$, respectively. Here we address the question of whether there are security-amplifying combiners where the security of the building blocks increases the security of the combined hash function, thus beating the bound of Joux. We discuss that one can indeed have such combiners and, somewhat surprisingly in light of results of Nandi and Stinson (ePrint 2004) and of Hoch and Shamir (FSE 2006), our solution is essentially as efficient as the classical combiner.
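To make the classical combiner concrete, here is a minimal Python sketch (editor's illustration of the classical construction, not the paper's security-amplifying combiner); SHA-256 and SHA-1 stand in for $H_0$ and $H_1$ purely for demonstration:

```python
import hashlib

def classical_combiner(message: bytes) -> bytes:
    """Comb(M) = H0(M) || H1(M): collision-resistant as long as
    at least one of H0, H1 is collision-resistant."""
    h0 = hashlib.sha256(message).digest()  # stands in for H0 (our choice)
    h1 = hashlib.sha1(message).digest()    # stands in for H1 (our choice)
    return h0 + h1

# A collision for the combiner must collide under H0 and H1 simultaneously.
print(classical_combiner(b"abc").hex())
```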

Marc Fischlin, Anja Lehmann
Hash Functions and the (Amplified) Boomerang Attack

Since Crypto 2004, hash functions have been the target of many attacks which showed that several well-known functions such as SHA-0 or MD5 can no longer be considered secure collision-free hash functions. These attacks use classical cryptographic techniques from block cipher analysis such as differential cryptanalysis together with some specific methods. Among those, we can cite the neutral bits of Biham and Chen or the message modification techniques of Wang et al. In this paper, we show that another tool of block cipher analysis, the boomerang attack, can also be used in this context. In particular, we show that using this boomerang attack as a neutral bits tool, it becomes possible to lower the complexity of the attacks on SHA-1.

Antoine Joux, Thomas Peyrin
Amplifying Collision Resistance: A Complexity-Theoretic Treatment

We initiate a complexity-theoretic treatment of hardness amplification for collision-resistant hash functions, namely the transformation of weakly collision-resistant hash functions into strongly collision-resistant ones in the standard model of computation. We measure the level of collision resistance by the maximum probability, over the choice of the key, with which an efficient adversary can find a collision. The goal is to obtain constructions with short output, short keys, small loss in adversarial complexity tolerated, and a good trade-off between compression ratio and computational complexity. We provide an analysis of several simple constructions, and show that many of the parameters achieved by our constructions are almost optimal in some sense.

Ran Canetti, Ron Rivest, Madhu Sudan, Luca Trevisan, Salil Vadhan, Hoeteck Wee

Theory II

How Many Oblivious Transfers Are Needed for Secure Multiparty Computation?

Oblivious transfer (OT) is an essential building block for secure multiparty computation when there is no honest majority. In this setting, current protocols for $n \geq 3$ parties require each pair of parties to engage in a single OT for each gate in the circuit being evaluated. Since implementing OT typically requires expensive public-key operations (alternatively, expensive setup or physical infrastructure), minimizing the number of OTs is a highly desirable goal.

In this work we initiate a study of this problem in both an information-theoretic and a computational setting and obtain the following results.

If the adversary can corrupt up to $t = (1 - \epsilon)n$ parties, where $\epsilon > 0$ is an arbitrarily small constant, then a total of $O(n)$ OT channels between pairs of parties are necessary and sufficient for general secure computation. Combined with previous protocols for “extending OTs”, $O(nk)$ invocations of OT are sufficient for computing arbitrary functions with computational security, where $k$ is a security parameter.

The above result does not improve over the previous state of the art in the important case where $t = n - 1$, when the number of parties is small, or in the information-theoretic setting. For these cases, we show that an arbitrary function $f: \{0,1\}^n \rightarrow \{0,1\}^*$ can be securely computed by a protocol which makes use of a single OT (of strings) between each pair of parties. This result is tight in the sense that at least one OT between each pair of parties is necessary in these cases. A major disadvantage of this protocol is that its communication complexity grows exponentially with $n$. We present natural classes of functions $f$ for which this exponential overhead can be avoided.

Danny Harnik, Yuval Ishai, Eyal Kushilevitz
Simulatable VRFs with Applications to Multi-theorem NIZK

This paper introduces simulatable verifiable random functions (sVRF). VRFs are similar to pseudorandom functions, except that they are also verifiable: corresponding to each seed $SK$, there is a public key $PK$, and for $y=F_{PK}(x)$, it is possible to prove that $y$ is indeed the value of the function seeded by $SK$. A simulatable VRF is a VRF for which this proof can be simulated, so a simulator can pretend that the value of $F_{PK}(x)$ is any $y$.

Our contributions are as follows. We introduce the notion of sVRF. We give two constructions: one from general assumptions (based on NIZK), but inefficient, just as a proof of concept; the other construction is practical and based on a special assumption about composite-order groups with bilinear maps. We then use an sVRF to get a direct transformation from a single-theorem non-interactive zero-knowledge proof system for a language $L$ to a multi-theorem non-interactive proof system for the same language $L$.

Melissa Chase, Anna Lysyanskaya
Cryptography in the Multi-string Model

The common random string model introduced by Blum, Feldman and Micali permits the construction of cryptographic protocols that are provably impossible to realize in the standard model. We can think of this model as a trusted party generating a random string and giving it to all parties in the protocol. However, the introduction of such a third party should set alarm bells ringing: Who is this trusted party? Why should we trust that the string is random? Even if the string is uniformly random, how do we know it does not leak private information to the trusted party? The very point of doing cryptography in the first place is to prevent us from trusting the wrong people with our secrets.

In this paper, we propose the more realistic multi-string model. Instead of having one trusted authority, we have several authorities that generate random strings. We do not trust any single authority; we only assume a majority of them generate the random string honestly. This security model is reasonable, yet at the same time it is very easy to implement. We could for instance imagine random strings being provided on the Internet, and any set of parties that want to execute a protocol just need to agree on which authorities’ strings they want to use.

We demonstrate the use of the multi-string model in several fundamental cryptographic tasks. We define multi-string non-interactive zero-knowledge proofs and prove that they exist under general cryptographic assumptions. Our multi-string NIZK proofs have very strong security properties such as simulation-extractability and extraction zero-knowledge, which makes it possible to compose them with arbitrary other protocols and to reuse the random strings. We also build efficient simulation-sound multi-string NIZK proofs for circuit satisfiability based on groups with a bilinear map. The sizes of these proofs match the best constructions in the single common random string model.

We suggest a universally composable commitment scheme in the multi-string model. It has been proven that UC commitment does not exist in the plain model without setup assumptions. Prior to this work, constructions were only known in the common reference string model and the registered public key model. One of the applications of the UC commitment scheme is a coin-flipping protocol in the multi-string model. Armed with the coin-flipping protocol, we can securely realize any multi-party computation protocol.

Jens Groth, Rafail Ostrovsky

Quantum Cryptography

Secure Identification and QKD in the Bounded-Quantum-Storage Model

We consider the problem of secure identification: user $U$ proves to server $S$ that he knows an agreed (possibly low-entropy) password $w$, while giving away as little information on $w$ as possible; namely, the adversary can exclude at most one possible password for each execution of the scheme. We propose a solution in the bounded-quantum-storage model, where $U$ and $S$ may exchange qubits, and a dishonest party is assumed to have limited quantum memory. No other restriction is posed upon the adversary. An improved version of the proposed identification scheme is also secure against a man-in-the-middle attack, but requires $U$ and $S$ to additionally share a high-entropy key $k$. However, security is still guaranteed if one party loses $k$ to the attacker but notices the loss. In both versions of the scheme, the honest participants need no quantum memory, and noise and imperfect quantum sources can be tolerated. The schemes compose sequentially, and $w$ and $k$ can securely be re-used. A small modification to the identification scheme results in a quantum-key-distribution (QKD) scheme, secure in the bounded-quantum-storage model, with the same re-usability properties of the keys, and without assuming authenticated channels. This is in sharp contrast to known QKD schemes (with unbounded adversary) without authenticated channels, where authentication keys must be updated, and unsuccessful executions can cause the parties to run out of keys.

Ivan B. Damgård, Serge Fehr, Louis Salvail, Christian Schaffner
A Tight High-Order Entropic Quantum Uncertainty Relation with Applications

We derive a new entropic quantum uncertainty relation involving min-entropy. The relation is tight and can be applied in various quantum-cryptographic settings.

Protocols for quantum 1-out-of-2 Oblivious Transfer and quantum Bit Commitment are presented and the uncertainty relation is used to prove the security of these protocols in the bounded-quantum-storage model according to new strong security definitions.

As another application, we consider the realistic setting of Quantum Key Distribution (QKD) against quantum-memory-bounded eavesdroppers. The uncertainty relation allows one to prove the security of QKD protocols in this setting while tolerating considerably higher error rates compared to the standard model with unbounded adversaries. For instance, for the six-state protocol with one-way communication, a bit-flip error rate of up to 17% can be tolerated (compared to 13% in the standard model).

Our uncertainty relation also yields a lower bound on the min-entropy key uncertainty against known-plaintext attacks when quantum ciphers are composed. Previously, the key uncertainty of these ciphers was only known with respect to Shannon entropy.

Ivan B. Damgård, Serge Fehr, Renato Renner, Louis Salvail, Christian Schaffner

Cryptanalysis II

Finding Small Roots of Bivariate Integer Polynomial Equations: A Direct Approach

Coppersmith described at Eurocrypt 96 an algorithm for finding small roots of bivariate integer polynomial equations, based on lattice reduction. A simpler algorithm was later proposed in [9], but it was asymptotically less efficient than Coppersmith’s algorithm. In this paper, we describe an analogous simplification but with the same asymptotic complexity as Coppersmith. We illustrate our new algorithm with the problem of factoring RSA moduli with high-order bits known; in practical experiments our method is several orders of magnitude faster than [9].

Jean-Sébastien Coron
A Polynomial Time Attack on RSA with Private CRT-Exponents Smaller Than $N^{0.073}$

Wiener’s famous attack on RSA with $d < N^{0.25}$ shows that using a small $d$ for an efficient decryption process makes RSA completely insecure. As an alternative, Wiener proposed to use the Chinese Remainder Theorem in the decryption phase, where $d_p = d \bmod (p-1)$ and $d_q = d \bmod (q-1)$ are chosen significantly smaller than $p$ and $q$. The parameters $d_p$, $d_q$ are called private CRT-exponents. Since Wiener’s proposal in 1990, it has been a challenging open question whether there exists a polynomial time attack on small private CRT-exponents. In this paper, we give an affirmative answer to this question, and show that a polynomial time attack exists if $d_p$ and $d_q$ are smaller than $N^{0.073}$.
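For background (standard RSA-CRT decryption, editor's addition rather than material from the abstract): given a ciphertext $c$, the decryptor computes two half-size exponentiations and recombines them via the CRT, which is why small $d_p$, $d_q$ yield fast decryption:

$$m_p = c^{d_p} \bmod p, \qquad m_q = c^{d_q} \bmod q, \qquad m = m_q + q\,\bigl(q^{-1}(m_p - m_q) \bmod p\bigr).$$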

Ellen Jochemsz, Alexander May

Encryption

Invertible Universal Hashing and the TET Encryption Mode

This work describes a mode of operation, TET, that turns a regular block cipher into a length-preserving enciphering scheme for messages of (almost) arbitrary length. When using an $n$-bit block cipher, the resulting scheme can handle input of any bit-length between $n$ and $2^n$ and associated data of arbitrary length.

The mode TET is a concrete instantiation of the generic mode of operation that was proposed by Naor and Reingold, extended to handle tweaks and inputs of arbitrary bit length. The main technical tool is a construction of invertible “universal hashing” on wide blocks, which is as efficient to compute and invert as polynomial-evaluation hash.
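For reference, one common form of the polynomial-evaluation hash mentioned above (editor's notation, not taken from the paper) keys the hash with $\tau \in \mathrm{GF}(2^n)$ and maps a wide block $(x_1,\ldots,x_m)$ of $n$-bit words to

$$h_\tau(x_1,\ldots,x_m) \;=\; \sum_{i=1}^{m} x_i\,\tau^{i} \;\in\; \mathrm{GF}(2^n),$$

computable with one field multiplication per block via Horner’s rule; TET’s contribution is a wide-block universal hash of comparable cost that is additionally invertible.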

Shai Halevi
Reducing Trust in the PKG in Identity Based Cryptosystems

One day, you suddenly find that a private key corresponding to your Identity is up for sale at eBay. Since you do not suspect a key compromise, it must be the PKG who is acting dishonestly and trying to make money by selling your key. How do you find out for sure and even prove it in a court of law?

This paper introduces the concept of Traceable Identity based Encryption which is a new approach to mitigate the (inherent) key escrow problem in identity based encryption schemes. Our main goal is to restrict the ways in which the PKG can misbehave. In our system, if the PKG ever maliciously generates and distributes a decryption key for an Identity, it runs the risk of being caught and prosecuted.

In contrast to other mitigation approaches, our approach does not require multiple key generation authorities.

Vipul Goyal
Pirate Evolution: How to Make the Most of Your Traitor Keys

We introduce a novel attack concept against trace and revoke schemes called pirate evolution. In this setting, the attacker, called an evolving pirate, is handed a number of traitor keys and produces a number of generations of pirate decoders that are successively disabled by the trace and revoke system. A trace and revoke scheme is susceptible to pirate evolution when the number of decoders that the evolving pirate produces exceeds the number of traitor keys that were in his possession. Pirate evolution can threaten trace and revoke schemes even in cases where both the revocation and traceability properties are ideally satisfied: this is because pirate evolution may enable an attacker to “magnify” an initial key-leakage incident and exploit the traitor keys available to him to produce a great number of pirate boxes that will take a long time to disable. Even moderately successful pirate evolution affects the economics of deployment for a trace and revoke system, and thus it is important that it be quantified prior to deployment.

In this work, we formalize the concept of pirate evolution and we demonstrate the susceptibility of the trace and revoke schemes of Naor, Naor and Lotspiech (NNL) from Crypto 2001 to an evolving pirate that can produce up to $t \cdot \log N$ generations of pirate decoders given an initial set of $t$ traitor keys. This is particularly important in the context of AACS, the new standard for high-definition DVDs (HD-DVD and Blu-Ray) that employs the subset difference method of NNL: for example, using our attack strategy, a pirate can potentially produce more than 300 pirate decoder generations by using only 10 traitor keys, i.e., key-leakage incidents in AACS can be substantially magnified.

Aggelos Kiayias, Serdar Pehlivanoglu

Protocol Analysis

A Security Analysis of the NIST SP 800-90 Elliptic Curve Random Number Generator

An elliptic curve random number generator (ECRNG) has been approved in a NIST standard and proposed for ANSI and SECG draft standards. This paper proves that, if three conjectures are true, then the ECRNG is secure. The three conjectures are the hardness of the elliptic curve decisional Diffie-Hellman problem and the hardness of two newer problems, the x-logarithm problem and the truncated point problem. The x-logarithm problem is shown to be hard if the decisional Diffie-Hellman problem is hard, although the reduction is not tight. The truncated point problem is shown to be solvable when the minimum number of bits allowed in NIST standards is truncated, thereby making it insecure for applications such as stream ciphers. Nevertheless, it is argued that for nonce and key generation this distinguishability is harmless.

Daniel R. L. Brown, Kristian Gjøsteen
A Generalization of DDH with Applications to Protocol Analysis and Computational Soundness

In this paper we identify the $(P,Q)$-DDH assumption as an extreme, powerful generalization of the Decisional Diffie-Hellman (DDH) assumption: virtually all previously proposed generalizations of DDH are instances of the $(P,Q)$-DDH problem. We prove that our generalization is no harder than DDH through a concrete reduction that we show to be rather tight in most practical cases. One important consequence of our result is that it yields significantly simpler security proofs for protocols that use extensions of DDH. We exemplify in the case of several group-key exchange protocols (among others we give an elementary, direct proof for the Burmester-Desmedt protocol). Finally, we use our generalization of DDH to extend the celebrated computational soundness result of Abadi and Rogaway [1] so that it can also handle exponentiation and Diffie-Hellman-like keys. The extension that we propose crucially relies on our generalization and seems hard to achieve through other means.
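For reference, the standard DDH assumption being generalized states that, in a suitable cyclic group $G = \langle g \rangle$ of prime order $q$, the two distributions

$$(g,\,g^x,\,g^y,\,g^{xy}) \quad\text{and}\quad (g,\,g^x,\,g^y,\,g^z), \qquad x,y,z \leftarrow \mathbb{Z}_q \text{ uniform},$$

are computationally indistinguishable; roughly speaking (editor's paraphrase, see the paper for the exact formulation), the $(P,Q)$-DDH problem replaces the single product $xy$ in the exponent by families of polynomials in the exponents.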

Emmanuel Bresson, Yassine Lakhnech, Laurent Mazaré, Bogdan Warinschi
Chernoff-Type Direct Product Theorems

Consider a challenge-response protocol where the probability of a correct response is at least $\alpha$ for a legitimate user, and at most $\beta < \alpha$ for an attacker. One example is a CAPTCHA challenge, where a human should have a significantly higher chance of answering a single challenge (e.g., uncovering a distorted letter) than an attacker. Another example would be an argument system without perfect completeness. A natural approach to boost the gap between legitimate users and attackers would be to issue many challenges, and accept if the response is correct for more than a threshold fraction, for the threshold chosen between $\alpha$ and $\beta$. We give the first proof that parallel repetition with thresholds improves the security of such protocols. We do this with a very general result about an attacker’s ability to solve a large fraction of many independent instances of a hard problem, showing a Chernoff-like convergence of the fraction solved incorrectly to the probability of failure for a single instance.
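Schematically (editor's notation; the paper's exact statement and constants differ), for a threshold fraction $\gamma$ with $\beta < \gamma < \alpha$, a Chernoff-type direct product theorem bounds an attacker whose single-instance success probability is at most $\beta$ by

$$\Pr\bigl[\text{attacker answers at least } \gamma n \text{ of } n \text{ challenges correctly}\bigr] \;\le\; e^{-\Omega\left((\gamma-\beta)^2 n\right)},$$

mirroring the Chernoff bound for independent trials, while a legitimate user with success probability $\alpha > \gamma$ passes with overwhelming probability.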

Russell Impagliazzo, Ragesh Jaiswal, Valentine Kabanets

Public-Key Encryption

Rerandomizable RCCA Encryption

We give the first perfectly rerandomizable, Replayable-CCA (RCCA) secure encryption scheme, positively answering an open problem of Canetti et al. (CRYPTO 2003). Our encryption scheme, which we call the Double-strand Cramer-Shoup scheme, is a non-trivial extension of the popular Cramer-Shoup encryption. Its security is based on the standard DDH assumption. To justify our definitions, we define a powerful “Replayable Message Posting” functionality in the Universally Composable (UC) framework, and show that any encryption scheme that satisfies our definitions of rerandomizability and RCCA security is a UC-secure implementation of this functionality. Finally, we enhance the notion of rerandomizable RCCA security by adding a receiver-anonymity (or key-privacy) requirement, and show that it results in a correspondingly enhanced UC functionality. We leave open the problem of constructing a scheme achieving this enhancement.

Manoj Prabhakaran, Mike Rosulek
Deterministic and Efficiently Searchable Encryption

We present as-strong-as-possible definitions of privacy, and constructions achieving them, for public-key encryption schemes where the encryption algorithm is deterministic. We obtain as a consequence database encryption methods that permit fast (i.e. sub-linear, and in fact logarithmic, time) search while provably providing privacy that is as strong as possible subject to this fast search constraint. One of our constructs, called RSA-DOAEP, has the added feature of being length preserving, so that it is the first example of a public-key cipher. We generalize this to obtain a notion of efficiently-searchable encryption schemes which permit more flexible privacy to search-time trade-offs via a technique called bucketization. Our results answer much-asked questions in the database community and provide foundations for work done there.

Mihir Bellare, Alexandra Boldyreva, Adam O’Neill
Secure Hybrid Encryption from Weakened Key Encapsulation

We put forward a new paradigm for building hybrid encryption schemes from constrained chosen-ciphertext secure (CCCA) key-encapsulation mechanisms (KEMs) plus authenticated symmetric encryption. Constrained chosen-ciphertext security is a new security notion for KEMs that we propose. It has less demanding security requirements than standard CCA security (since it requires the adversary to have a certain plaintext-knowledge when making a decapsulation query), yet we can prove that it is sufficient for secure hybrid encryption.

Our notion is not only useful to express the Kurosawa-Desmedt public-key encryption scheme and its generalizations to hash-proof systems in an abstract KEM/DEM security framework. It also has a very constructive appeal, which we demonstrate with a new encryption scheme whose security relies on a class of intractability assumptions that we show (in the generic group model) to be strictly weaker than the Decision Diffie-Hellman (DDH) assumption. This appears to be the first practical public-key encryption scheme in the literature from an algebraic assumption strictly weaker than DDH.

Dennis Hofheinz, Eike Kiltz

Multi-party Computation

Scalable and Unconditionally Secure Multiparty Computation

We present a multiparty computation protocol that is unconditionally secure against adaptive and active adversaries, with communication complexity $\mathcal{O}(\mathcal{C} n) k + \mathcal{O}(D n^2) k + {\rm poly}(n \kappa)$, where $\mathcal{C}$ is the number of gates in the circuit, $n$ is the number of parties, $k$ is the bit-length of the elements of the field over which the computation is carried out, $D$ is the multiplicative depth of the circuit, and $\kappa$ is the security parameter. The corruption threshold is $t < n/3$. For passive security the corruption threshold is $t < n/2$ and the communication complexity is $\mathcal{O}(n \mathcal{C}) k$. These are the first unconditionally secure protocols where the part of the communication complexity that depends on the circuit size is linear in $n$. We also present a protocol with threshold $t < n/2$ and complexity $\mathcal{O}(\mathcal{C} n) k + {\rm poly}(n \kappa)$ based on a complexity assumption which, however, only has to hold during the execution of the protocol; that is, the protocol has so-called everlasting security.

Ivan Damgård, Jesper Buus Nielsen
On Secure Multi-party Computation in Black-Box Groups

We study the natural problem of secure $n$-party computation (in the passive, computationally unbounded attack model) of the $n$-product function $f_G(x_1, \ldots, x_n) = x_1 \cdot x_2 \cdots x_n$ in an arbitrary finite group $(G, \cdot)$, where the input of party $P_i$ is $x_i \in G$ for $i = 1, \ldots, n$. For flexibility, we are interested in protocols for $f_G$ which require only black-box access to the group $G$ (i.e. the only computations performed by players in the protocol are a group operation, a group inverse, or sampling a uniformly random group element).

Our results are as follows. First, on the negative side, we show that if $(G, \cdot)$ is non-abelian and $n \geq 4$, then no $\lceil n/2 \rceil$-private protocol for computing $f_G$ exists. Second, on the positive side, we initiate an approach for the construction of black-box protocols for $f_G$ based on $k$-of-$k$ threshold secret sharing schemes, which are efficiently implementable over any black-box group $G$. We reduce the problem of constructing such protocols to a combinatorial colouring problem in planar graphs. We then give two constructions for such graph colourings. Our first colouring construction gives a protocol with optimal collusion resistance $t < n/2$, but has exponential communication complexity $O(n \binom{2t+1}{t}^2)$ group elements (this construction easily extends to general adversary structures). Our second probabilistic colouring construction gives a protocol with (close to optimal) collusion resistance $t < n/\mu$ for a graph-related constant $\mu \leq 2.948$, and has efficient communication complexity $O(n t^2)$ group elements. Furthermore, we believe that our results can be improved by further study of the associated combinatorial problems.

Yvo Desmedt, Josef Pieprzyk, Ron Steinfeld, Huaxiong Wang
A Note on Secure Computation of the Moore-Penrose Pseudoinverse and Its Application to Secure Linear Algebra

This work deals with the communication complexity of secure multi-party protocols for linear algebra problems. In our model, complexity is measured in terms of the number of secure multiplications required and protocols terminate within a constant number of rounds of communication.

Previous work by Cramer and Damgård proposes secure protocols for solving systems $Ax = b$ of $m$ linear equations in $n$ variables over a finite field, with $m \leq n$. The complexity of those protocols is $n^5$.

We show a new upper bound of $m^4 + n^2 m$ secure multiplications for this problem, which is clearly asymptotically smaller. Our main point, however, is that the advantage can be substantial in case $m$ is much smaller than $n$. Indeed, if $m = \sqrt{n}$, for example, the complexity goes down from $n^5$ to $n^{2.5}$.

Our secure protocols rely on some recent advances concerning the computation of the Moore-Penrose pseudo-inverse of matrices over fields of positive characteristic. These computations are based on the evaluation of a certain characteristic polynomial, in combination with variations on a well-known technique due to Mulmuley that helps to control the effects of non-zero characteristic. We also introduce a new method for secure polynomial evaluation that exploits properties of Chebyshev polynomials, as well as a new secure protocol for computing the characteristic polynomial of a matrix based on Leverrier’s lemma that exploits this new method.
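The claimed gain for $m = \sqrt{n}$ is immediate from the new bound (editor's worked arithmetic):

$$m^4 + n^2 m \;=\; (\sqrt{n})^4 + n^2\sqrt{n} \;=\; n^2 + n^{2.5} \;=\; O(n^{2.5}) \;\ll\; n^5.$$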

Ronald Cramer, Eike Kiltz, Carles Padró
Backmatter
Metadata
Title
Advances in Cryptology - CRYPTO 2007
Edited by
Alfred Menezes
Copyright Year
2007
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-74143-5
Print ISBN
978-3-540-74142-8
DOI
https://doi.org/10.1007/978-3-540-74143-5