
2005 | Book

Advances in Cryptology – EUROCRYPT 2005

24th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Aarhus, Denmark, May 22-26, 2005. Proceedings


About this Book

These are the proceedings of the 24th Annual IACR Eurocrypt Conference. The conference was sponsored by the International Association for Cryptologic Research (IACR; see www.iacr.org), this year in cooperation with the Computer Science Department of the University of Aarhus, Denmark. As General Chair, Ivan Damgård was responsible for local organization. The Eurocrypt 2005 Program Committee (PC) consisted of 30 internationally renowned experts. Their names and affiliations are listed on pages VII and VIII of these proceedings. By the November 15, 2004 submission deadline the PC had received a total of 190 submissions via the IACR Electronic Submission Server. The subsequent selection process was divided into two phases, as usual. In the review phase each submission was carefully scrutinized by at least three independent reviewers, and the review reports, often extensive, were committed to the IACR Web Review System. These were taken as the starting point for the PC-wide Web-based discussion phase. During this phase, additional reports were provided as needed, and the PC eventually had some 700 reports at its disposal. In addition, the discussions generated more than 850 messages, all posted in the system. During the entire PC phase, which started in August 2003 with my earliest invitations to PC members and which continued until March 2005, more than 1000 email messages were communicated. Moreover, the PC received much appreciated assistance from a large body of external reviewers. Their names are listed on page VIII of these proceedings.

Table of Contents

Frontmatter

Cryptanalysis I

Cryptanalysis of the Hash Functions MD4 and RIPEMD

MD4 is a hash function developed by Rivest in 1990. It serves as the basis for most of the dedicated hash functions such as MD5, SHAx, RIPEMD, and HAVAL. In 1996, Dobbertin showed how to find collisions of MD4 with complexity equivalent to 2^20 MD4 hash computations. In this paper, we present a new attack on MD4 which can find a collision with probability 2^-2 to 2^-6, and the complexity of finding a collision does not exceed 2^8 MD4 hash operations. Built upon the collision search attack, we present a chosen-message pre-image attack on MD4 with complexity below 2^8. Furthermore, we show that for a weak message, we can find another message that produces the same hash value. The complexity is only a single MD4 computation, and a random message is a weak message with probability 2^-122.

The attack on MD4 can be directly applied to RIPEMD, which has two parallel copies of MD4, and the complexity of finding a collision is about 2^18 RIPEMD hash operations.

Xiaoyun Wang, Xuejia Lai, Dengguo Feng, Hui Chen, Xiuyuan Yu
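The collision probabilities quoted in the abstract translate directly into an expected number of trials; a minimal sketch of that arithmetic (illustrative only, not from the paper):

```python
# For an attack that succeeds on each trial with probability p, the
# expected number of trials is 1/p (geometric distribution).
def expected_trials(p: float) -> float:
    return 1.0 / p

# The attack's per-message success probability ranges from 2^-6 to 2^-2,
# so the expected number of trials lies between 4 and 64, consistent with
# the quoted bound of at most 2^8 MD4 hash operations per collision.
worst, best = expected_trials(2**-6), expected_trials(2**-2)
print(worst, best)  # 64.0 4.0
```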
How to Break MD5 and Other Hash Functions

MD5 is one of the most widely used cryptographic hash functions nowadays. It was designed in 1992 as an improvement of MD4, and its security has been widely studied since then by several authors. The best known result so far was a semi free-start collision, in which the initial value of the hash function is replaced by a non-standard value, which is the result of the attack. In this paper we present a new powerful attack on MD5 which allows us to find collisions efficiently. We used this attack to find collisions of MD5 in about 15 minutes up to an hour of computation time. The attack is a differential attack, which unlike most differential attacks does not use the exclusive-or as a measure of difference, but instead uses modular integer subtraction as the measure. We call this kind of differential a modular differential. An application of this attack to MD4 can find a collision in less than a fraction of a second. This attack is also applicable to other hash functions, such as RIPEMD and HAVAL.

Xiaoyun Wang, Hongbo Yu
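The distinction between the usual XOR difference and the modular difference used above can be illustrated on 32-bit words; a hypothetical toy example, not taken from the attack itself:

```python
MASK = 0xFFFFFFFF  # work with 32-bit words

def xor_diff(a: int, b: int) -> int:
    # difference measure used by most differential attacks
    return (a ^ b) & MASK

def modular_diff(a: int, b: int) -> int:
    # difference measure used by the modular differential attack:
    # subtraction modulo 2^32 keeps track of carries, unlike XOR
    return (a - b) & MASK

a, b = 0x80000000, 0x7FFFFFFF
print(hex(xor_diff(a, b)))      # 0xffffffff: XOR sees all 32 bits flipped
print(hex(modular_diff(a, b)))  # 0x1: modular subtraction sees a difference of 1
```

The same pair of words can thus have a maximal XOR difference and a minimal modular difference, which is why the choice of measure changes which characteristics are usable.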
Collisions of SHA-0 and Reduced SHA-1

In this paper we describe improvements to the techniques used to cryptanalyze SHA-0 and introduce the first results on SHA-1. The results include a generic multi-block technique that uses near-collisions in order to find collisions, and a four-block collision of SHA-0 found using this technique with complexity 2^51. Then, extensions of this and prior techniques are presented that allow us to find collisions of reduced versions of SHA-1. We give collisions of variants with up to 40 rounds, and show the complexities of longer variants. These techniques show that collisions for up to about 53-58 rounds can still be found faster than by birthday attacks.

Eli Biham, Rafi Chen, Antoine Joux, Patrick Carribault, Christophe Lemuet, William Jalby

Theory I

Reducing Complexity Assumptions for Statistically-Hiding Commitment

Determining the minimal assumptions needed to construct various cryptographic building blocks has been a focal point of research in theoretical cryptography. Here, we revisit the following question: what are the minimal assumptions needed to construct statistically-hiding commitment schemes? Previously, it was known how to construct such schemes based on one-way permutations. We improve upon this by constructing statistically-hiding commitment schemes based on approximable-preimage-size one-way functions. These are one-way functions for which there is an efficient way to approximate the number of preimages of a given output. A special case (for which we show a somewhat simpler construction) is that of regular one-way functions, where all outputs have the same number of preimages.

We utilize two different approaches in constructing statistically-hiding commitment schemes. Our first approach proceeds by showing that the scheme of Naor et al. can be implemented using any one-way function having an output distribution which is “sufficiently similar” to uniform. We then construct one-way functions with this property from approximable-preimage-size one-way functions. Our second approach begins by constructing a commitment scheme which is statistically hiding against an honest-but-curious receiver. We then demonstrate a compiler which transforms any such commitment scheme into one which is statistically hiding even against a malicious receiver. This compiler and its analysis may be of independent interest.

Iftach Haitner, Omer Horvitz, Jonathan Katz, Chiu-Yuen Koo, Ruggero Morselli, Ronen Shaltiel
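For context, the classic concrete example of a statistically-hiding (and computationally binding) commitment is Pedersen's scheme, which relies on discrete-log-type assumptions rather than the general one-way functions studied in this paper; a toy sketch with tiny, insecure parameters chosen for illustration:

```python
# Toy Pedersen commitment in the order-11 subgroup of Z_23^*.
# Parameters are illustrative only; real schemes use large primes.
p, q = 23, 11          # q divides p - 1
g, h = 4, 9            # two elements of order q with unknown log_g(h)

def commit(m: int, r: int) -> int:
    # C = g^m * h^r mod p; a uniform r in Z_q makes C statistically
    # independent of the committed message m (the hiding property)
    return (pow(g, m % q, p) * pow(h, r % q, p)) % p

def verify(c: int, m: int, r: int) -> bool:
    # opening the commitment means revealing (m, r)
    return c == commit(m, r)

c = commit(7, 3)
print(verify(c, 7, 3))  # True
```

Binding here is only computational: opening to a different message would require knowing log_g(h), which is exactly the kind of structured assumption the paper's constructions avoid.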
Smooth Projective Hashing and Two-Message Oblivious Transfer

We present a general framework for constructing two-message oblivious transfer protocols using a modification of Cramer and Shoup’s notion of smooth projective hashing (2002). Our framework is actually an abstraction of the two-message oblivious transfer protocols of Naor and Pinkas (2001) and Aiello et al. (2001), whose security is based on the Decisional Diffie-Hellman Assumption. In particular, we give two new oblivious transfer protocols. The security of one is based on the N’th-Residuosity Assumption, and the security of the other is based on both the Quadratic Residuosity Assumption and the Extended Riemann Hypothesis. Our security guarantees are not simulation based, and are similar to those of previous constructions.

When using smooth projective hashing in this context, we must deal with maliciously chosen smooth projective hash families. This raises new technical difficulties, and in particular it is here that the Extended Riemann Hypothesis comes into play.

Yael Tauman Kalai
On Robust Combiners for Oblivious Transfer and Other Primitives

A (1,2)-robust combiner for a cryptographic primitive

${\mathcal P}$

is a construction that takes two candidate schemes for

${\mathcal P}$

and combines them into one scheme that securely implement

${\mathcal P}$

even if one of the candidates fails. Robust combiners are a useful tool for ensuring better security in applied cryptography, and also a handy tool for constructing cryptographic protocols. For example, we discuss using robust combiners for obtaining universal schemes for cryptographic primitives (a universal scheme is an explicit construction that implements

${\mathcal P}$

under the sole assumption that

${\mathcal P}$

exists).

In this paper we study what primitives admit robust combiners. In addition to known and very simple combiners for one-way functions and equivalent primitives, we show robust combiners for protocols in the world of public key cryptography, namely for Key Agreement(KA).

The main point we make is that things are not as nice for Oblivious Transfer (OT) and in general for secure computation. We prove that there are no ”transparent black-box” robust combiners for OT, giving an indication to the difficulty of finding combiners for OT. On the positive side we show a black box construction of a (2,3)-robust combiner for OT, as well as a generic construction of (1,

n

)-robust OT-combiners from any (1,2)-robust OT-combiner.

Danny Harnik, Joe Kilian, Moni Naor, Omer Reingold, Alon Rosen
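The "very simple combiner for one-way functions" alluded to above is concatenation on independent inputs: inverting the combined function requires inverting both candidates, so one secure candidate suffices. A hedged sketch, using two hash functions as stand-in candidates (the choices are illustrative, not from the paper):

```python
import hashlib

# Two candidate one-way functions; in a real setting these would be
# independent constructions, at least one of which is assumed secure.
def f1(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def f2(x: bytes) -> bytes:
    return hashlib.md5(x).digest()   # the possibly-broken candidate

def combined(x1: bytes, x2: bytes) -> bytes:
    # (1,2)-robust combiner for one-wayness: a preimage of the pair
    # yields preimages for BOTH f1 and f2 on their halves of the input.
    return f1(x1) + f2(x2)

out = combined(b"left input", b"right input")
print(len(out))  # 48 = 32-byte SHA-256 digest + 16-byte MD5 digest
```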

Encryption I

Efficient Identity-Based Encryption Without Random Oracles

We present the first efficient Identity-Based Encryption (IBE) scheme that is fully secure without random oracles. We first present our IBE construction and reduce the security of our scheme to the decisional Bilinear Diffie-Hellman (BDH) problem. Additionally, we show that our techniques can be used to build a new signature scheme that is secure under the computational Diffie-Hellman assumption without random oracles.

Brent Waters
Tag-KEM/DEM: A New Framework for Hybrid Encryption and A New Analysis of Kurosawa-Desmedt KEM

This paper presents a novel framework for generic construction of hybrid encryption schemes secure against chosen ciphertext attack. Our new framework yields new, more efficient CCA-secure schemes and provides insightful explanations of existing schemes that do not fit into the previous frameworks; this may lead to future improvements. Moreover, it allows immediate conversion from a class of threshold public-key encryption to a hybrid one without considerable overhead, which is not possible in the previous approaches.

Finally we present an improved security proof of the Kurosawa-Desmedt scheme, which removes the original need for information-theoretic key derivation and message authentication functions. We show that the scheme can be instantiated with any such functions that are computationally secure, thus extending the applicability of their paradigm and improving its efficiency.

Masayuki Abe, Rosario Gennaro, Kaoru Kurosawa, Victor Shoup

Signatures and Authentication

Secure Remote Authentication Using Biometric Data

Biometric data offer a potential source of high-entropy, secret information that can be used in cryptographic protocols provided two issues are addressed: (1) biometric data are not uniformly distributed; and (2) they are not exactly reproducible. Recent work, most notably that of Dodis, Reyzin, and Smith, has shown how these obstacles may be overcome by allowing some auxiliary public information to be reliably sent from a server to the human user. Subsequent work of Boyen has shown how to extend these techniques, in the random oracle model, to enable unidirectional authentication from the user to the server without the assumption of a reliable communication channel.

We show two efficient techniques enabling the use of biometric data to achieve mutual authentication or authenticated key exchange over a completely insecure (i.e., adversarially controlled) channel. In addition to achieving stronger security guarantees than the work of Boyen, we improve upon his solution in a number of other respects: we tolerate a broader class of errors and, in one case, improve upon the parameters of his solution and give a proof of security in the standard model.

Xavier Boyen, Yevgeniy Dodis, Jonathan Katz, Rafail Ostrovsky, Adam Smith
Stronger Security Bounds for Wegman-Carter-Shoup Authenticators

Shoup proved that various message-authentication codes of the form (n, m) ↦ h(m) + f(n) are secure against all attacks that see at most $\sqrt{1/\epsilon}$ authenticated messages. Here m is a message; n is a nonce chosen from a public group G; f is a secret uniform random permutation of G; h is a secret random function; and ε is a differential probability associated with h.

Shoup’s result implies that if AES is secure then various state-of-the-art message-authentication codes of the form (n, m) ↦ h(m) + AES_k(n) are secure up to $\sqrt{1/\epsilon}$ authenticated messages. Unfortunately, $\sqrt{1/\epsilon}$ is only about 2^50 for some state-of-the-art systems, so Shoup’s result provides no guarantees for long-term keys.

This paper proves that security of the same systems is retained up to $\sqrt{\#G}$ authenticated messages. In a typical state-of-the-art system, $\sqrt{\#G}$ is 2^64. The heart of the paper is a very general “one-sided” security theorem: (n, m) ↦ h(m) + f(n) is secure if there are small upper bounds on differential probabilities for h and on interpolation probabilities for f.

Daniel J. Bernstein
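A toy instance of the (n, m) ↦ h(m) + f(n) shape, with a polynomial hash playing h and a seeded random permutation standing in for the secret permutation f; all parameters are illustrative and far too small to be secure:

```python
import random

P = 257  # tiny prime; the group G here is Z_257 under addition mod 257

# f: a "secret" uniform random permutation of G (seeded for reproducibility)
rng = random.Random(42)
perm = list(range(P))
rng.shuffle(perm)

R = 123  # secret key of the polynomial hash h

def h(message):
    # h(m) = sum m_i * R^(i+1) mod P; a universal hash whose differential
    # probability epsilon is roughly (message length) / P
    return sum(m * pow(R, i + 1, P) for i, m in enumerate(message)) % P

def mac(nonce, message):
    # Wegman-Carter-Shoup authenticator: (n, m) -> h(m) + f(n) in G
    return (h(message) + perm[nonce]) % P

tag = mac(5, [1, 2, 3])
print(0 <= tag < P)  # True: the tag is a single group element
```

Each nonce must be used at most once, since f(n) acts as a one-time mask for the hash value; the paper's bounds concern how many such masked tags an attacker may see.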
3-Move Undeniable Signature Scheme

In undeniable signature schemes, zero-knowledgeness and non-transferability have so far been treated as the same notion. In this paper, by separating these two notions, we show the first 3-move confirmation and disavowal protocols for Chaum’s undeniable signature scheme that are secure against active and concurrent attacks. Our main observation is that while the signer has one public key and one secret key, there exist two witnesses in the confirmation and disavowal proofs of Chaum’s scheme.

Kaoru Kurosawa, Swee-Huay Heng
Group Signatures with Efficient Concurrent Join

A group signature is a basic privacy mechanism. The group joining operation is a critical component of such a scheme. To date, all secure group signature schemes have either employed a trusted-party-aided join operation or a complex joining protocol requiring many interactions between the prospective user and the Group Manager (GM). In addition, no efficient scheme has employed a join protocol proven secure against adversaries that can dynamically initiate multiple concurrent join sessions during an attack.

This work presents the first efficient group signature scheme with a simple joining protocol that is based on a “single message and signature response” interaction between the prospective user and the GM. This single-message-and-signature-response registration paradigm, where no other actions are taken, is the most efficient join interaction possible and was originally alluded to in 1997 by Camenisch and Stadler, but its efficient instantiation remained open until now.

The fact that joining has two short communication flows and does not require secure channels is highly advantageous: for example, it allows users to easily join by a proxy (i.e., a security officer of a company can send a file with all registration requests in his company and get back their certificates for distribution back to members of the company). It further allows an easy and non-interactive global system re-keying operation as well as straightforward treatment of multi-group signatures. We present a strong security model for group signatures (the first explicitly taking into account concurrent join attacks) and an efficient scheme with a single-message and signature-response join protocol.

Aggelos Kiayias, Moti Yung

Algebra and Number Theory I

Floating-Point LLL Revisited

The Lenstra-Lenstra-Lovász lattice basis reduction algorithm (LLL or L^3) is a very popular tool in public-key cryptanalysis and in many other fields. Given an integer d-dimensional lattice basis with vectors of norm less than B in an n-dimensional space, L^3 outputs a so-called L^3-reduced basis in polynomial time O(d^5 n log^3 B), using arithmetic operations on integers of bit-length O(d log B). This worst-case complexity is problematic for lattices arising in cryptanalysis, where d and/or log B are often large. As a result, the original L^3 is almost never used in practice. Instead, one applies floating-point variants of L^3, where the long-integer arithmetic required by Gram-Schmidt orthogonalisation (central in L^3) is replaced by floating-point arithmetic. Unfortunately, this is known to be unstable in the worst case: the usual floating-point L^3 is not even guaranteed to terminate, and the output basis may not be L^3-reduced at all. In this article, we introduce the L^2 algorithm, a new and natural floating-point variant of L^3 which provably outputs L^3-reduced bases in polynomial time O(d^4 n (d + log B) log B). This is the first L^3 algorithm whose running time (without fast integer arithmetic) provably grows only quadratically with respect to log B, like the well-known Euclidean and Gaussian algorithms, which it generalizes.

Phong Q. Nguyen, Damien Stehlé
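The two-dimensional case that L^2 generalizes is Lagrange's (Gaussian) reduction algorithm, whose running time is quadratic in log B; a minimal integer-only sketch of that classical algorithm (illustrative, not the L^2 algorithm itself):

```python
def norm2(v):
    return v[0] * v[0] + v[1] * v[1]

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def lagrange_reduce(u, v):
    # Gaussian/Lagrange reduction of a 2D lattice basis: repeatedly
    # subtract the nearest-integer multiple of the shorter vector,
    # a vector analogue of the Euclidean algorithm.
    while True:
        if norm2(u) > norm2(v):
            u, v = v, u
        m = round(dot(u, v) / norm2(u))
        if m == 0:
            return u, v   # u is now a shortest nonzero lattice vector
        v = (v[0] - m * u[0], v[1] - m * u[1])

u, v = lagrange_reduce((1, 1), (3, 4))
print(norm2(u))  # 1: this basis has determinant 1, so the lattice is Z^2
```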
Practical Cryptography in High Dimensional Tori

At Crypto 2004, van Dijk and Woodruff introduced a new way of using the algebraic tori T_n in cryptography, and obtained an asymptotically optimal n/φ(n) savings in bandwidth and storage for a number of cryptographic applications. However, the computational requirements of compression and decompression in their scheme were impractical, and it was left open to reduce them to a practical level. We give a new method that compresses orders of magnitude faster than the original, while also speeding up the decompression and improving on the compression factor (by a constant term). Further, we give the first efficient implementation that uses T_30, compare its performance to XTR, CEILIDH, and ECC, and present new applications. Our methods achieve better compression than XTR and CEILIDH for the compression of as few as two group elements. This allows us to apply our results to ElGamal encryption with a small message domain to obtain ciphertexts that are 10% smaller than in previous schemes.

Marten van Dijk, Robert Granger, Dan Page, Karl Rubin, Alice Silverberg, Martijn Stam, David Woodruff
A Tool Kit for Finding Small Roots of Bivariate Polynomials over the Integers

We present a new and flexible formulation of Coppersmith’s method for finding small solutions of bivariate polynomials p(x, y) over the integers. Our approach allows us to maximize the bound on the solutions of p(x, y) in a purely combinatorial way. We give various construction rules for different shapes of the Newton polygon of p(x, y). Our method has several applications. Most interestingly, we reduce the case of solving univariate polynomials f(x) modulo some composite number N of unknown factorization to the case of solving bivariate polynomials over the integers. Hence, our approach unifies both methods given by Coppersmith at Eurocrypt 1996.

Johannes Blömer, Alexander May

Quantum Cryptography

Computational Indistinguishability Between Quantum States and Its Cryptographic Application

We introduce a problem of distinguishing between two quantum states as a new underlying problem on which to build a computational cryptographic scheme that is “secure” against a quantum adversary. Our problem is a natural generalization of the distinguishability problem between two probability distributions, which is commonly used in computational cryptography. More precisely, our problem QSCD_ff is the computational distinguishability problem between two types of random coset states with a hidden permutation over the symmetric group. We show that (i) QSCD_ff has the trapdoor property; (ii) the average-case hardness of QSCD_ff coincides with its worst-case hardness; and (iii) QSCD_ff is at least as hard in the worst case as the graph automorphism problem. Moreover, we show that QSCD_ff cannot be efficiently solved by any quantum algorithm that naturally extends Shor’s factorization algorithm. These cryptographic properties of QSCD_ff enable us to construct a public-key cryptosystem which is likely to withstand any attack of a polynomial-time quantum adversary.

Akinori Kawachi, Takeshi Koshiba, Harumichi Nishimura, Tomoyuki Yamakami
Approximate Quantum Error-Correcting Codes and Secret Sharing Schemes

It is a standard result in the theory of quantum error-correcting codes that no code of length n can fix more than n/4 arbitrary errors, regardless of the dimension of the coding and encoded Hilbert spaces. However, this bound only applies to codes which recover the message exactly. Naively, one might expect that correcting errors to very high fidelity would only allow small violations of this bound. This intuition is incorrect: in this paper we describe quantum error-correcting codes capable of correcting up to $\lfloor(n - 1)/2\rfloor$ arbitrary errors with fidelity exponentially close to 1, at the price of increasing the size of the registers (i.e., the coding alphabet). This demonstrates a sharp distinction between exact and approximate quantum error correction. The codes have the property that any t components reveal no information about the message, and so they can also be viewed as error-tolerant secret sharing schemes.

The construction has several interesting implications for cryptography and quantum information theory. First, it suggests that secret sharing is a better classical analogue to quantum error correction than is classical error correction. Second, it highlights an error in a purported proof that verifiable quantum secret sharing (VQSS) is impossible when the number of cheaters t is n/4. In particular, the construction directly yields an honest-dealer VQSS scheme for $t = \lfloor(n - 1)/2\rfloor$. We believe the codes could also potentially lead to improved protocols for dishonest-dealer VQSS and secure multi-party quantum computation.

More generally, the construction illustrates a difference between exact and approximate requirements in quantum cryptography and (yet again) the delicacy of security proofs and impossibility results in the quantum model.

Claude Crépeau, Daniel Gottesman, Adam Smith

Secure Protocols

Compact E-Cash

This paper presents efficient off-line anonymous e-cash schemes where a user can withdraw a wallet containing $2^{\ell}$ coins, each of which she can spend unlinkably. Our first result is a scheme, secure under the strong RSA and the y-DDHI assumptions, where the complexity of the withdrawal and spend operations is ${\mathcal O}(\ell + k)$ and the user’s wallet can be stored using ${\mathcal O}(\ell + k)$ bits, where k is a security parameter. The best previously known schemes require at least one of these complexities to be ${\mathcal O}(2^{\ell}\cdot k)$. In fact, compared to previous e-cash schemes, our whole wallet of $2^{\ell}$ coins has about the same size as one coin in these schemes. Our scheme also offers exculpability of users, that is, the bank can prove to third parties that a user has double-spent. We then extend our scheme to our second result, the first e-cash scheme that provides traceable coins without a trusted third party. That is, once a user has double-spent one of the $2^{\ell}$ coins in her wallet, all her spendings of these coins can be traced. However, the price for this is that the complexity of the spending and withdrawal protocols becomes ${\mathcal O}(\ell \cdot k)$ and ${\mathcal O}(\ell \cdot k+k^{2})$ bits, respectively, and wallets take ${\mathcal O}(\ell \cdot k)$ bits of storage. All our schemes are secure in the random oracle model.

Jan Camenisch, Susan Hohenberger, Anna Lysyanskaya
Cryptographic Asynchronous Multi-party Computation with Optimal Resilience
(Extended Abstract)

We consider secure multi-party computation in the asynchronous model and present an efficient protocol with optimal resilience. For n parties, up to t < n/3 of them being corrupted, and security parameter κ, a circuit with c gates can be securely computed with communication complexity ${\mathcal O}(cn^{3}\kappa)$ bits. In contrast to all previous asynchronous protocols with optimal resilience, our protocol requires access to an expensive broadcast primitive only ${\mathcal O}(n)$ times, independently of the size c of the circuit. This results in a practical protocol with a very low communication overhead.

One major drawback of a purely asynchronous network is that the inputs of up to t honest parties cannot be considered for the evaluation of the circuit. Waiting for all inputs could take infinitely long when the missing inputs belong to corrupted parties. Our protocol can easily be extended to a hybrid model, in which we have one round of synchronicity at the end of the input stage but are fully asynchronous afterwards. In this model, our protocol allows us to evaluate the circuit on the inputs of every honest party.

Martin Hirt, Jesper Buus Nielsen, Bartosz Przydatek

Algebra and Number Theory II

Differential Cryptanalysis for Multivariate Schemes

In this paper we propose a novel cryptanalytic method against multivariate schemes, which adapts differential cryptanalysis to this setting. In multivariate quadratic systems, the differential of the public key is a linear map and has invariants such as the dimension of the kernel. Using linear algebra, the study of this invariant can be used to gain information on the secret key. We successfully apply this new method to break the original Matsumoto-Imai cryptosystem using properties of the differential, thus providing an alternative attack against this scheme besides the attack devised by Patarin. Next, we present an attack against a randomised variant of the Matsumoto-Imai cryptosystem, called PMI. This scheme has recently been proposed by Ding, and according to the author, it resists all previously known attacks. We believe that differential cryptanalysis is a general and powerful method that can give additional insight on most multivariate schemes proposed so far.

Pierre-Alain Fouque, Louis Granboulan, Jacques Stern
A Fast Cryptanalysis of the Isomorphism of Polynomials with One Secret Problem

At Eurocrypt ’96, Patarin proposed [9] new cryptographic schemes based on the Isomorphism of Polynomials with one Secret problem (IP1S) [9]. We study in this paper a restriction of IP1S called the Polynomial Linear Equivalence problem (PLE) [7]. We show that PLE is in fact not a restriction of IP1S, in the sense that any algorithm solving PLE can be efficiently transformed into an algorithm for solving IP1S. Motivated by the cryptanalysis of schemes based on IP1S, we present a new efficient algorithm for solving PLE. This algorithm is mainly based on a differential property of PLE. The main advantage of this approach is that it translates PLE into a simple linear algebra problem. The performance of our algorithm shows that, with the parameters proposed in [9], schemes based on IP1S are far from achieving the security level required for cryptographic applications.

Ludovic Perret
Partial Key Exposure Attacks on RSA up to Full Size Exponents

We present several attacks on RSA that factor the modulus in polynomial time under the condition that a fraction of the most significant bits or least significant bits of the private exponent is available to the attacker. Our new attacks on RSA are the first attacks of this type that work up to full size public or private exponent.

Matthias Ernst, Ellen Jochemsz, Alexander May, Benne de Weger
The RSA Group is Pseudo-Free

We prove, under the strong RSA assumption, that the group of invertible integers modulo the product of two safe primes is pseudo-free. More specifically, no polynomial-time algorithm can output (with non-negligible probability) an unsatisfiable system of equations over the free abelian group generated by the symbols g_1, ..., g_n, together with a solution modulo the product of two randomly chosen safe primes, when g_1, ..., g_n are instantiated to randomly chosen quadratic residues. Ours is the first provably secure construction of pseudo-free abelian groups under a standard cryptographic assumption, and it resolves a conjecture of Rivest (TCC 2004).

Daniele Micciancio

Theory II

Universally Composable Password-Based Key Exchange

We propose and realize a definition of security for password-based key exchange within the framework of universally composable (UC) security, thus providing security guarantees under arbitrary composition with other protocols. In addition, our definition captures some aspects of the problem that were not adequately addressed by most prior notions. For instance, it does not assume any underlying probability distribution on passwords, nor does it assume independence between passwords chosen by different parties. We also formulate a definition of password-based secure channels, and show that such a definition is achievable given password-based key exchange.

Our protocol realizing the new definition of password-based key exchange is in the common reference string model and relies on standard number-theoretic assumptions. The components of our protocol can be instantiated to give a relatively efficient solution which is conceivably usable in practice. We also show that it is impossible to satisfy our definition in the “plain” model (e.g., without a common reference string).

Ran Canetti, Shai Halevi, Jonathan Katz, Yehuda Lindell, Phil MacKenzie
Mercurial Commitments with Applications to Zero-Knowledge Sets
Extended Abstract

We introduce a new flavor of commitment schemes, which we call mercurial commitments. Informally, mercurial commitments are standard commitments that have been extended to allow for soft decommitment. Soft decommitments, on the one hand, are not binding but, on the other hand, cannot be in conflict with true decommitments.

We then demonstrate that a particular instantiation of mercurial commitments has been implicitly used by Micali, Rabin and Kilian to construct zero-knowledge sets. (A zero-knowledge set scheme allows a Prover to (1) commit to a set S in a way that reveals nothing about S and (2) prove to a Verifier, in zero-knowledge, statements of the form x ∈ S and x ∉ S.) The rather complicated construction of Micali et al. becomes easy to understand when viewed as a more general construction with mercurial commitments as an underlying building block.

By providing mercurial commitments based on various assumptions, we obtain several different new zero-knowledge set constructions.

Melissa Chase, Alexander Healy, Anna Lysyanskaya, Tal Malkin, Leonid Reyzin

Encryption II

Hierarchical Identity Based Encryption with Constant Size Ciphertext

We present a Hierarchical Identity Based Encryption (HIBE) system where the ciphertext consists of just three group elements and decryption requires only two bilinear map computations, regardless of the hierarchy depth. Encryption is as efficient as in other HIBE systems. We prove that the scheme is selective-ID secure in the standard model and fully secure in the random oracle model. Our system has a number of applications: it gives very efficient forward secure public key and identity based cryptosystems (with short ciphertexts), it converts the NNL broadcast encryption system into an efficient public key broadcast system, and it provides an efficient mechanism for encrypting to the future. The system also supports limited delegation where users can be given restricted private keys that only allow delegation to bounded depth. The HIBE system can be modified to support sublinear size private keys at the cost of some ciphertext expansion.

Dan Boneh, Xavier Boyen, Eu-Jin Goh
Fuzzy Identity-Based Encryption

We introduce a new type of Identity-Based Encryption (IBE) scheme that we call Fuzzy Identity-Based Encryption. In Fuzzy IBE we view an identity as a set of descriptive attributes. A Fuzzy IBE scheme allows a private key for an identity ω to decrypt a ciphertext encrypted with an identity ω′ if and only if the identities ω and ω′ are close to each other as measured by the “set overlap” distance metric. A Fuzzy IBE scheme can be applied to enable encryption using biometric inputs as identities; the error-tolerance property of a Fuzzy IBE scheme is precisely what allows for the use of biometric identities, which inherently will have some noise each time they are sampled. Additionally, we show that Fuzzy IBE can be used for a type of application that we term “attribute-based encryption”.

In this paper we present two constructions of Fuzzy IBE schemes. Our constructions can be viewed as an Identity-Based Encryption of a message under several attributes that compose a (fuzzy) identity. Our IBE schemes are both error-tolerant and secure against collusion attacks. Additionally, our basic construction does not use random oracles. We prove the security of our schemes under the Selective-ID security model.
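The “set overlap” closeness test above can be made concrete with a toy sketch. This is only an illustration of the decryption condition, not the scheme itself; the attribute names and the error-tolerance threshold d are hypothetical.

```python
# Toy illustration (not the actual Fuzzy IBE construction): decryption
# succeeds when the key identity ω and the ciphertext identity ω′ overlap
# in at least d attributes; d is a hypothetical tolerance parameter.
def can_decrypt(key_attrs: set, ct_attrs: set, d: int) -> bool:
    """Return True iff |ω ∩ ω′| ≥ d (the "set overlap" closeness test)."""
    return len(key_attrs & ct_attrs) >= d

omega  = {"iris-17", "iris-42", "iris-80", "iris-93"}   # key identity ω
omega2 = {"iris-17", "iris-42", "iris-81", "iris-93"}   # noisy sample ω′

print(can_decrypt(omega, omega2, d=3))  # True: three attributes overlap
print(can_decrypt(omega, omega2, d=4))  # False: only three of four match
```

This models why biometric noise is tolerable: a fresh biometric reading need only reproduce enough of the enrolled attributes, not all of them.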

Amit Sahai, Brent Waters

Cryptanalysis II

Second Preimages on n-Bit Hash Functions for Much Less than 2^n Work

We expand a previous result of Dean [Dea99] to provide a second preimage attack on all n-bit iterated hash functions with Damgård-Merkle strengthening and n-bit intermediate states, allowing a second preimage to be found for a 2^k-message-block message with about k × 2^(n/2 + 1) + 2^(n − k + 1) work. Using RIPEMD-160 as an example, our attack can find a second preimage for a 2^60-byte message in about 2^106 work, rather than the previously expected 2^160 work. We also provide slightly cheaper ways to find multicollisions than the method of Joux [Jou04]. Both of these results are based on expandable messages – patterns for producing messages of varying length, which all collide on the intermediate hash result immediately after processing the message. We provide an algorithm for finding expandable messages for any n-bit hash function built using the Damgård-Merkle construction, which requires only a small multiple of the work done to find a single collision in the hash function.

John Kelsey, Bruce Schneier
Predicting and Distinguishing Attacks on RC4 Keystream Generator

In this paper we analyze the statistical distribution of the keystream generator used by the stream ciphers RC4 and RC4A. Our first result is the discovery of statistical biases in the digraph distribution of RC4/RC4A generated streams, where digraphs tend to repeat with short gaps between them. We show how an attacker can use these biased patterns to distinguish RC4 keystreams of 2^26 bytes and RC4A keystreams of 2^26.5 bytes from randomness with a success rate of more than 2/3. Our second result is the discovery of a family of patterns whose probabilities in RC4 keystreams are several times their probabilities in random streams. These patterns can be used to predict bits and words of RC4 with arbitrary advantage, e.g., after 2^45 output words a single bit can be predicted with probability 85%, and after 2^50 output words a single byte can be predicted with probability 82%, contradicting the unpredictability property of PRNGs.
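For reference, the keystream generator whose digraph statistics are analyzed above can be written in a few lines. This is the textbook RC4 (KSA + PRGA), not the paper's distinguisher:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Generate n bytes of textbook RC4 keystream (KSA followed by PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                         # pseudo-random generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

print(rc4_keystream(b"Key", 5).hex())  # eb9f7781b7 (well-known test vector)
```

A digraph here is simply a pair of consecutive keystream bytes; the paper's observation is that such pairs recur at short gaps more often than they would in a truly random stream.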

Itsik Mantin
Related-Key Boomerang and Rectangle Attacks

The boomerang attack and the rectangle attack are two attacks that utilize differential cryptanalysis in a larger construction. Both attacks treat the cipher as a cascade of two sub-ciphers, where there exists a good differential for each sub-cipher, but not for the entire cipher. In this paper we combine the boomerang (and the rectangle) attack with related-key differentials.

The new combination is applicable to many ciphers, and we demonstrate its strength by introducing attacks on reduced-round versions of AES and IDEA. The attack on 9-round AES with 192-bit keys uses 256 different related keys. The 6.5-round attack on IDEA uses four related keys (and has a time complexity of 2^88.1 encryptions). We also apply these techniques to COCONUT98 to obtain a distinguisher that requires only four related-key adaptive chosen plaintexts and ciphertexts. For these ciphers, our results attack a larger number of rounds or have smaller complexities than all previously known attacks.

Eli Biham, Orr Dunkelman, Nathan Keller
On the Impossibility of Highly-Efficient Blockcipher-Based Hash Functions

Fix a small, non-empty set of blockcipher keys ${\mathcal K}$. We say a blockcipher-based hash function is highly-efficient if it makes exactly one blockcipher call for each message block hashed, and all blockcipher calls use a key from ${\mathcal K}$. Although a few highly-efficient constructions have been proposed, no one has been able to prove their security. In this paper we prove, in the ideal-cipher model, that it is impossible to construct a highly-efficient iterated blockcipher-based hash function that is provably secure. Our result implies, in particular, that the Tweakable Chain Hash (TCH) construction suggested by Liskov, Rivest, and Wagner [7] is not correct under an instantiation suggested for this construction, nor can TCH be correctly instantiated by any other efficient means.

John Black, Martin Cochran, Thomas Shrimpton

Broadcast Encryption and Traitor Tracing

Public Traceability in Traitor Tracing Schemes

Traitor tracing schemes are of major importance for the secure distribution of digital content. They aim at protecting content providers against colluding users who build pirate decoders: if such a collusion happens, at least one of its members will be detected. Several solutions have already been proposed in the literature, but the most important remaining problem is to achieve a very good ciphertext/plaintext rate. At Eurocrypt ’02, Kiayias and Yung proposed the first scheme with such a constant rate, though still not an optimal one. In this paper, using bilinear maps, we manage to improve it and obtain an “almost” optimal scheme, since the rate is asymptotically 1. Furthermore, we introduce a new feature, “public traceability”, which means that the center can delegate the tracing capability to any “untrusted” party. This is not the first use of bilinear maps for traitor tracing applications, but among the previous proposals only one has remained unbroken: we present an attack on it by producing an anonymous pirate decoder, and we explain the flaw in its security analysis. For our scheme, we provide a complete proof in the standard model, based on new computational assumptions related to the bilinear Diffie-Hellman ones.

Hervé Chabanne, Duong Hieu Phan, David Pointcheval
One-Way Chain Based Broadcast Encryption Schemes

We propose a new broadcast encryption scheme based on the idea of ‘one key per each punctured interval’. Let r be the number of revoked users. In our scheme with p-punctured c-intervals, the transmission overhead is roughly $\frac{r}{p+1}$ as r grows. Our scheme is very flexible with its two parameters p and c: we may take p as large as possible if a user device allows a large key storage, and set c as small as possible if the storage size and the computing power are limited. As variants of the proposed scheme, we further study a combination of a one-way chain and a hierarchical ring. This combination provides a fine-grained trade-off between user storage and transmission overhead. As one specific instance, the combination includes the subset difference (SD) scheme, which is considered the most efficient one in the literature.
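The r/(p + 1) overhead estimate above can be tabulated to see the storage/bandwidth trade-off. The revocation counts below are illustrative numbers, not from the paper:

```python
def transmission_overhead(r: int, p: int) -> float:
    """Approximate transmission overhead r / (p + 1) for p-punctured intervals."""
    return r / (p + 1)

# With r = 1000 revoked users, raising the puncturing parameter p shrinks
# the broadcast header at the cost of larger per-device key storage.
for p in (0, 1, 3):
    print(p, transmission_overhead(1000, p))  # 1000.0, 500.0, 250.0
```

This is why the scheme lets p grow when devices can store many keys: each unit of extra storage directly divides the bandwidth spent on revocation.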

Nam-Su Jho, Jung Yeon Hwang, Jung Hee Cheon, Myung-Hwan Kim, Dong Hoon Lee, Eun Sun Yoo
Backmatter
Metadata
Title
Advances in Cryptology – EUROCRYPT 2005
Edited by
Ronald Cramer
Copyright Year
2005
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-32055-5
Print ISBN
978-3-540-25910-7
DOI
https://doi.org/10.1007/b136415