
2004 | Book

Advances in Cryptology – CRYPTO 2004

24th Annual International Cryptology Conference, Santa Barbara, California, USA, August 15-19, 2004. Proceedings

Editor: Matt Franklin

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

Crypto 2004, the 24th Annual Crypto Conference, was sponsored by the International Association for Cryptologic Research (IACR) in cooperation with the IEEE Computer Society Technical Committee on Security and Privacy and the Computer Science Department of the University of California at Santa Barbara. The program committee accepted 33 papers for presentation at the conference. These were selected from a total of 211 submissions. Each paper received at least three independent reviews. The selection process included a Web-based discussion phase and a one-day program committee meeting at New York University. These proceedings include updated versions of the 33 accepted papers. The authors had a few weeks to revise them, aided by comments from the reviewers. However, the revisions were not subjected to any editorial review. The conference program included two invited lectures. Victor Shoup's invited talk was a survey on chosen ciphertext security in public-key encryption. Susan Landau's invited talk was entitled "Security, Liberty, and Electronic Communications". Her extended abstract is included in these proceedings. We continued the tradition of a Rump Session, chaired by Stuart Haber. Those presentations (always short, often serious) are not included here.

Table of Contents

Frontmatter

Linear Cryptanalysis

On Multiple Linear Approximations

In this paper we study the long-standing problem of extracting information from multiple linear approximations. We develop a formal statistical framework for block cipher attacks based on this technique and derive explicit and compact gain formulas for generalized versions of Matsui's Algorithm 1 and Algorithm 2. The theoretical framework allows both approaches to be treated in a unified way and predicts significantly improved attack complexities compared to current linear attacks using a single approximation. To substantiate the theoretical claims, we benchmarked the attacks against reduced-round versions of DES and observed a clear reduction of the data and time complexities, in almost perfect correspondence with the predictions. The complexities are reduced by several orders of magnitude for Algorithm 1, and the significant improvement in the case of Algorithm 2 suggests that this approach may outperform the currently best attacks on the full DES algorithm.

Alex Biryukov, Christophe De Cannière, Michaël Quisquater
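
As a rough illustration of the gain estimates above, the short Python sketch below computes the combined capacity of a set of linear approximations, assuming the standard convention that an approximation with bias eps contributes capacity (2*eps)^2; under the paper's framework, data complexity scales inversely with the total capacity. The helper name and the example figures are illustrative, not taken from the paper.

    def combined_capacity(biases):
        # Each linear approximation with bias eps contributes (2*eps)^2;
        # summing over all approximations gives the capacity of the system.
        return sum((2.0 * eps) ** 2 for eps in biases)

    # Example: four approximations of bias 2^-11 give 4x the capacity of one,
    # i.e. roughly a 4x reduction in the required number of known plaintexts.
    single = combined_capacity([2.0 ** -11])
    multiple = combined_capacity([2.0 ** -11] * 4)
    print(multiple / single)  # -> 4.0
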
Feistel Schemes and Bi-linear Cryptanalysis

In this paper we introduce the method of bi-linear cryptanalysis (BLC), designed specifically to attack Feistel ciphers. It allows one to construct periodic biased characteristics that combine for an arbitrary number of rounds. In particular, we present a practical attack on DES based on a 1-round invariant, the fastest known attack based on such an invariant and about as fast as Matsui's best attack. For ciphers similar to DES, based on small S-boxes, we claim that BLC is very closely related to LC, and we do not expect to find a bi-linear attack much faster than one by LC. Nevertheless, we have found bi-linear characteristics that are strictly better than Matsui's best result for 3, 7, 11 and more rounds. For more general Feistel schemes there is no reason whatsoever for BLC to remain only a small improvement over LC. We present a construction of a family of practical ciphers based on a big Rijndael-type S-box that are strongly resistant against linear cryptanalysis (LC) but can be easily broken by BLC, even with 16 or more rounds.

Nicolas T. Courtois

Group Signatures

Short Group Signatures

We construct a short group signature scheme. Signatures in our scheme are approximately the size of a standard RSA signature with the same security. Security of our group signature is based on the Strong Diffie-Hellman assumption and a new assumption in bilinear groups called the Decision Linear assumption. We prove security of our system, in the random oracle model, using a variant of the security definition for group signatures recently given by Bellare, Micciancio, and Warinschi.

Dan Boneh, Xavier Boyen, Hovav Shacham
Signature Schemes and Anonymous Credentials from Bilinear Maps

We propose a new and efficient signature scheme that is provably secure in the plain model. The security of our scheme is based on a discrete-logarithm-based assumption put forth by Lysyanskaya, Rivest, Sahai, and Wolf (LRSW) who also showed that it holds for generic groups and is independent of the decisional Diffie-Hellman assumption. We prove security of our scheme under the LRSW assumption for groups with bilinear maps. We then show how our scheme can be used to construct efficient anonymous credential systems as well as group signature and identity escrow schemes. To this end, we provide efficient protocols that allow one to prove in zero-knowledge the knowledge of a signature on a committed (or encrypted) message and to obtain a signature on a committed message.

Jan Camenisch, Anna Lysyanskaya

Foundations

Complete Classification of Bilinear Hard-Core Functions

Let $f:\{0,1\}^n \rightarrow \{0,1\}^\ell$ be a one-way function. A function $h:\{0,1\}^n \rightarrow \{0,1\}^m$ is called a hard-core function for f if, when given f(x) for a (secret) x drawn uniformly from $\{0,1\}^n$, it is computationally infeasible to distinguish h(x) from a uniformly random m-bit string. A (randomized) function $h:\{0,1\}^n \times \{0,1\}^k \rightarrow \{0,1\}^m$ is a general hard-core function if it is hard-core for every one-way function $f:\{0,1\}^n \rightarrow \{0,1\}^\ell$, where the second input to h is a k-bit uniform random string r. Hard-core functions are a crucial tool in cryptography, in particular for the construction of pseudo-random generators and pseudo-random functions from any one-way function. The first general hard-core predicate, proposed by Goldreich and Levin, and several subsequently proposed hard-core functions are bilinear functions in the two arguments x and r. In this paper we introduce a parameter of bilinear functions $h:\{0,1\}^n \times \{0,1\}^k \rightarrow \{0,1\}^m$, called exponential rank loss, and prove that it characterizes exactly whether or not h is a general hard-core function. The security proofs for the previously proposed bilinear hard-core functions follow as simple consequences. Our results are obtained by extending the class of list-decodable codes and by generalizing Hast's list-decoding algorithm from the Reed-Muller code to general codes.

Thomas Holenstein, Ueli Maurer, Johan Sjödin
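
For concreteness, here is a minimal Python sketch of the bilinear shape discussed above: the Goldreich-Levin predicate is the inner product of x and r over GF(2), and an m-bit bilinear function evaluates m such inner products, one per row of a binary matrix determined by r. This illustrates the function class only, not the paper's characterization; the matrix in the example is arbitrary.

    def gl_predicate(x, r):
        # Goldreich-Levin hard-core predicate: <x, r> mod 2,
        # with x and r given as integers holding n-bit strings.
        return bin(x & r).count("1") & 1

    def bilinear_hardcore(x, rows):
        # An m-bit bilinear function h(x, r): one GF(2) inner product
        # per row of an m x n binary matrix derived from r.
        return [gl_predicate(x, row) for row in rows]

    # Example with n = 8, m = 3:
    print(bilinear_hardcore(0b10110001, [0b11110000, 0b00001111, 0b10101010]))  # -> [1, 1, 0]
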
Finding Collisions on a Public Road, or Do Secure Hash Functions Need Secret Coins?

Many cryptographic primitives begin with parameter generation, which picks a primitive from a family. Such generation can use public coins (e.g., in the discrete-logarithm-based case) or secret coins (e.g., in the factoring-based case). We study the relationship between public-coin and secret-coin collision-resistant hash function families (CRHFs). Specifically, we demonstrate that: (1) there is a lack of attention to the distinction between secret-coin and public-coin definitions in the literature, which has led to some problems in the case of CRHFs; (2) in some cases, public-coin CRHFs can be built out of secret-coin CRHFs; (3) the distinction between the two notions is meaningful, because in general secret-coin CRHFs are unlikely to imply public-coin CRHFs. The last statement is our main result: there is no black-box reduction from public-coin CRHFs to secret-coin CRHFs. Our proof, while employing oracle separations, uses a novel approach, which demonstrates that there is no black-box reduction without demonstrating that there is no relativizing reduction.

Chun-Yuan Hsiao, Leonid Reyzin
Security of Random Feistel Schemes with 5 or More Rounds

We study cryptographic attacks on random Feistel schemes. We denote by m the number of plaintext/ciphertext pairs, and by k the number of rounds. In their famous paper [3], M. Luby and C. Rackoff completely solved the cases $m \ll 2^{n/2}$: the schemes are secure against all adaptive chosen plaintext attacks (CPA-2) when $k \geq 3$ and against all adaptive chosen plaintext and chosen ciphertext attacks (CPCA-2) when $k \geq 4$ (for this second result a proof is given in [9]). In this paper we study the cases $m \ll 2^n$. We use the "coefficients H technique" of proof to analyze known plaintext attacks (KPA), adaptive or non-adaptive chosen plaintext attacks (CPA-1 and CPA-2), and adaptive or non-adaptive chosen plaintext and chosen ciphertext attacks (CPCA-1 and CPCA-2). In the first part of this paper, we show that when $m \ll 2^n$ the schemes are secure against all KPA when $k \geq 4$, against all CPA-2 when $k \geq 5$, and against all CPCA-2 attacks when $k \geq 6$. This solves an open problem of [1] and [14], and it improves the result of [14] (where more rounds were needed and $m \ll 2^{n(1-\varepsilon)}$ was obtained instead of $m \ll 2^n$). The number 5 of rounds is minimal, since CPA-2 attacks on 4 rounds are known when $m \geq O(2^{n/2})$ (see [1], [10]). Furthermore, in all these cases we obtain an explicit upper bound on the distinguishing probability. In the second part of this paper, we present some improved generic attacks. For k = 5 rounds, we present a KPA with $m \simeq 2^{3n/2}$ and a non-adaptive chosen plaintext attack (CPA-1) with $m \simeq 2^n$. For $k \geq 7$ rounds we also show some improved attacks against random Feistel generators (with more than one permutation to analyze and $\geq 2^{2n}$ computations).

Jacques Patarin
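
For readers less familiar with the object under study: a k-round Feistel scheme maps (L, R) to (R, L xor f_i(R)) round by round, with independent round functions f_i. A minimal Python sketch follows; the SHA-256-based round functions are a stand-in for truly random ones and are purely illustrative.

    import hashlib

    def feistel_encrypt(key, rounds, left, right):
        # One Feistel round: (L, R) -> (R, L xor f_i(R)), with an
        # independent pseudo-random round function f_i per round.
        for i in range(rounds):
            f = hashlib.sha256(key + bytes([i]) + right).digest()[:len(left)]
            left, right = right, bytes(a ^ b for a, b in zip(left, f))
        return left, right

    # 5 rounds, matching the security threshold discussed above.
    L, R = feistel_encrypt(b"k", 5, b"\x00" * 8, b"\x01" * 8)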

Efficient Representations

Signed Binary Representations Revisited

The most common methods for computing exponentiations of random elements in Abelian groups are sliding window schemes, which enhance the efficiency of the binary method at the expense of some precomputation. In groups where inversion is easy (e.g., elliptic curves), signed representations of the exponent are meaningful because they decrease the amount of required precomputation. The asymptotically best signed method is wNAF, because it minimizes the precomputation effort while its non-zero density is nearly optimal. Unfortunately, wNAF can be computed only from the least significant bit, i.e., right-to-left. However, for memory-constrained devices, left-to-right recoding schemes are by far more valuable. In this paper we define the MOF (Mutual Opposite Form), a new canonical representation of signed binary strings, which can be computed in any order. We thereby obtain the first left-to-right signed exponent-recoding scheme for general width w, by applying the width-w sliding window conversion to MOF left-to-right. Moreover, the analogous right-to-left conversion on MOF yields wNAF, which indicates that the new class is the natural left-to-right analogue of the useful wNAF. Indeed, the new class inherits the outstanding properties of wNAF: the required precomputation and the achieved non-zero density are exactly the same.

Katsuyuki Okeya, Katja Schmidt-Samoa, Christian Spahn, Tsuyoshi Takagi
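
The right-to-left wNAF recoding that this paper takes as its starting point can be sketched in a few lines of Python: each non-zero digit is the odd signed residue of n modulo 2^w, which forces at least w-1 zeros after it. This is the standard textbook algorithm, not the paper's new left-to-right scheme.

    def wnaf(n, w):
        # Right-to-left width-w NAF: digits are odd, lie in (-2^(w-1), 2^(w-1)),
        # and every non-zero digit is followed by at least w-1 zero digits.
        digits = []
        while n > 0:
            if n & 1:
                d = n % (1 << w)
                if d >= 1 << (w - 1):
                    d -= 1 << w
                n -= d
            else:
                d = 0
            digits.append(d)  # least significant digit first
            n >>= 1
        return digits

    print(wnaf(55, 3))  # -> [-1, 0, 0, -1, 0, 0, 1], i.e. 55 = -1 - 1*8 + 1*64
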
Compressed Pairings

Pairing-based cryptosystems rely on bilinear non-degenerate maps called pairings, such as the Tate and Weil pairings defined over certain elliptic curve groups. In this paper we show how to compress pairing values, how to couple this technique with that of point compression, and how to benefit from the compressed representation to speed up exponentiations involving pairing values, as required in many pairing-based protocols.

Michael Scott, Paulo S. L. M. Barreto
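
One standard way to exponentiate on a compressed (trace) representation, in the spirit of the speedups described above: for a norm-1 element g of $F_{p^2}$, the traces $t_k = g^k + g^{-k}$ satisfy the Lucas recurrence $t_{k+1} = t_1 t_k - t_{k-1}$, so $t_k$ can be computed from $t_1$ alone by a ladder that never leaves $F_p$. A hedged Python sketch, not the paper's exact algorithm:

    def lucas_exp(t, k, p):
        # Given t = trace(g) for a norm-1 element g of F_{p^2}, return
        # trace(g^k), using V_{2m} = V_m^2 - 2 and V_{2m+1} = V_m*V_{m+1} - t
        # (the Lucas sequence V with parameter Q = 1).
        v0, v1 = 2, t % p            # invariant: (v0, v1) = (V_m, V_{m+1})
        for bit in bin(k)[2:]:       # ladder over the bits of k
            if bit == "1":
                v0, v1 = (v0 * v1 - t) % p, (v1 * v1 - 2) % p
            else:
                v0, v1 = (v0 * v0 - 2) % p, (v0 * v1 - t) % p
        return v0

Each bit costs two modular multiplications in $F_p$, instead of full $F_{p^2}$ arithmetic on the uncompressed value.
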
Asymptotically Optimal Communication for Torus-Based Cryptography

We introduce a compact and efficient representation of elements of the algebraic torus. This allows us to design a new discrete-log-based public-key system achieving the optimal communication rate, partially answering the conjecture in [4]. For n the product of distinct primes, we construct efficient ElGamal signature and encryption schemes in a subgroup of $F_{q^n}^*$ in which the number of bits exchanged is only a $\varphi(n)/n$ fraction of that required in traditional schemes, while the security offered remains the same. We also present a Diffie-Hellman key exchange protocol averaging only $\varphi(n)\log_2 q$ bits of communication per key. For the cryptographically important cases of n = 30 and n = 210, we transmit a 4/5 and a 24/35 fraction, respectively, of the number of bits required in XTR [14] and the recent CEILIDH [24] cryptosystems.

Marten van Dijk, David Woodruff
How to Compress Rabin Ciphertexts and Signatures (and More)

Ordinarily, RSA and Rabin ciphertexts and signatures are log N bits, where N is a composite modulus; here, we describe how to “compress” Rabin ciphertexts and signatures (among other things) down to about (2/3)log N bits, while maintaining a tight provable reduction from factoring in the random oracle model. The computational overhead of our compression algorithms is small. We also improve upon Coron’s results regarding partial-domain-hash signature schemes, reducing by over 300 bits the hash output size necessary to prove adequate security.

Craig Gentry

Public Key Cryptanalysis

On the Bounded Sum-of-Digits Discrete Logarithm Problem in Finite Fields

In this paper, we study the bounded sum-of-digits discrete logarithm problem in finite fields. Our results concern primarily the fields ${\mathbf F}_{q^n}$ where n | q-1; such fields are called Kummer extensions of ${\mathbf F}_q$. It is known that we can efficiently construct an element g with order greater than $2^n$ in these fields. Let $S_q(\cdot)$ be the function mapping an integer to the sum of the digits in its q-ary expansion. We first present an algorithm that, given $g^e$ ($0 \leq e < q^n$), finds e in random polynomial time, provided that $S_q(e) < n$. We then show that the problem is solvable in random polynomial time for most of the exponents e with $S_q(e) < 1.32n$, by exploring an interesting connection between the discrete logarithm problem and the problem of list decoding of Reed-Solomon codes, and applying the Guruswami-Sudan algorithm. As a side result, we obtain a sharper lower bound on the number of congruent polynomials generated by linear factors than the one based on the Stothers-Mason ABC theorem. We also prove that in the field ${\mathbf F}_{q^{q-1}}$ the bounded sum-of-digits discrete logarithm with respect to g can be computed in random time $O(f(w)\log^4(q^{q-1}))$, where f is a subexponential function and w is the bound on the q-ary sum of digits of the exponent; hence the problem is fixed-parameter tractable. These results generalize to the Artin-Schreier extension ${\mathbf F}_{p^p}$, where p is a prime. Since every finite field has an extension of reasonable degree which is a Kummer extension, our result reveals an unexpected property of the discrete logarithm problem: the bounded sum-of-digits discrete logarithm problem in any given finite field becomes polynomial-time solvable in certain low-degree extensions.

Qi Cheng
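
The weight function in question is elementary; a small Python helper makes the problem statement concrete (the hard part, of course, is recovering e from $g^e$ under a bound on this weight):

    def sum_of_digits(e, q):
        # S_q(e): the sum of the digits of e written in base q.
        s = 0
        while e > 0:
            s += e % q
            e //= q
        return s

    print(sum_of_digits(2004, 10))  # 2 + 0 + 0 + 4 -> 6
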
Computing the RSA Secret Key Is Deterministic Polynomial Time Equivalent to Factoring

We address one of the most fundamental problems concerning the RSA cryptoscheme: Does the knowledge of the RSA public key/secret key pair (e, d) yield the factorization of N = pq in polynomial time? It is well known that there is a probabilistic polynomial time algorithm that on input (N, e, d) outputs the factors p and q. We present the first deterministic polynomial time algorithm that factors N, provided that e, d < φ(N) and that the factors p, q are of the same bit-size. Our approach is an application of Coppersmith's technique for finding small roots of bivariate integer polynomials.

Alexander May
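
The well-known probabilistic algorithm referred to above (the one this paper derandomizes) factors N from any valid pair (e, d) because ed - 1 is a multiple of the order of every unit mod N: stripping out powers of two and squaring back up exposes a nontrivial square root of 1 with constant probability per trial. A Python sketch of that folklore algorithm:

    import math, random

    def factor_from_key(N, e, d):
        # ed - 1 annihilates the group: g^(ed-1) = 1 mod N when gcd(g, N) = 1.
        k = e * d - 1
        s, t = 0, k
        while t % 2 == 0:
            s, t = s + 1, t // 2
        while True:
            g = random.randrange(2, N - 1)
            if math.gcd(g, N) > 1:
                return math.gcd(g, N)          # lucky: g shares a factor with N
            x = pow(g, t, N)
            for _ in range(s):
                y = pow(x, 2, N)
                if y == 1 and x not in (1, N - 1):
                    return math.gcd(x - 1, N)  # nontrivial square root of 1
                x = y
            # this g was unlucky; retry with a fresh one

    print(factor_from_key(3233, 17, 413))  # 3233 = 53 * 61; prints 53 or 61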

Zero-Knowledge

Multi-trapdoor Commitments and Their Applications to Proofs of Knowledge Secure Under Concurrent Man-in-the-Middle Attacks

We introduce the notion of multi-trapdoor commitments, which is a stronger form of trapdoor commitment schemes. We then construct two very efficient instantiations of multi-trapdoor commitment schemes, one based on the Strong RSA Assumption and the other on the Strong Diffie-Hellman Assumption. The main application of our new notion is the construction of a compiler that takes any proof of knowledge and transforms it into one which is secure against a concurrent man-in-the-middle attack (in the common reference string model). When using our specific implementations, this compiler is very efficient (it requires no more than four exponentiations) and maintains the round complexity of the original proof of knowledge. The main practical applications of our results are concurrently secure identification protocols. For these applications our results are the first simple and efficient solutions based on the Strong RSA or Diffie-Hellman Assumption.

Rosario Gennaro
Constant-Round Resettable Zero Knowledge with Concurrent Soundness in the Bare Public-Key Model

In the bare public-key model (BPK in short), each verifier is assumed to have deposited a public key in a file that is accessible by all users at all times. In this model, introduced by Canetti et al. [STOC 2000], constant-round black-box concurrent and resettable zero knowledge is possible, as opposed to the standard model for zero knowledge. As pointed out by Micali and Reyzin [Crypto 2001], the notion of soundness in this model is more subtle and complex than in the classical model, and indeed four distinct notions have been introduced (from weakest to strongest): one-time, sequential, concurrent and resettable soundness. In this paper we present the first constant-round concurrently sound resettable zero-knowledge argument system in the bare public-key model for $\mathcal{NP}$. More specifically, we present a 4-round protocol, which is optimal as far as the number of rounds is concerned. Our result solves the main open problem on resettable zero knowledge in the BPK model and improves on the previous works of Micali and Reyzin [EuroCrypt 2001] and Zhao et al. [EuroCrypt 2003], since they achieved concurrent soundness in stronger models.

Giovanni Di Crescenzo, Giuseppe Persiano, Ivan Visconti
Zero-Knowledge Proofs and String Commitments Withstanding Quantum Attacks

The concept of zero-knowledge (ZK) has become of fundamental importance in cryptography. However, in a setting where entities are modeled by quantum computers, classical arguments for proving ZK fail to hold since, in the quantum setting, the concept of rewinding is not generally applicable. Moreover, known classical techniques that avoid rewinding have various shortcomings in the quantum setting. We propose new techniques for building quantum zero-knowledge (QZK) protocols, which remain secure even under (active) quantum attacks. We obtain computational QZK proofs and perfect QZK arguments for any NP language in the common reference string model. This is based on a general method converting an important class of classical honest-verifier ZK (HVZK) proofs into QZK proofs, which leads to quite practical protocols if the underlying HVZK proof is efficient. These are the first proof protocols enjoying these properties, and in particular the first to achieve perfect QZK. As part of our construction, we propose a general framework for building unconditionally hiding (trapdoor) string commitment schemes, secure against quantum attacks, as well as concrete instantiations based on specific (believed to be) hard problems. This is of independent interest, as these are the first unconditionally hiding string commitment schemes withstanding quantum attacks. Finally, we give a partial answer to the question of whether QZK is possible in the plain model. We propose a new notion of QZK, non-oblivious verifier QZK, which is strictly stronger than honest-verifier QZK but weaker than full QZK, and we show that this notion can be achieved by means of efficient (quantum) protocols.

Ivan Damgård, Serge Fehr, Louis Salvail
The Knowledge-of-Exponent Assumptions and 3-Round Zero-Knowledge Protocols

Hada and Tanaka [11,12] showed the existence of 3-round, negligible-error zero-knowledge arguments for NP based on a pair of non-standard assumptions, here called KEA1 and KEA2. In this paper we show that KEA2 is false. This renders vacuous the results of [11,12]. We recover these results, however, under a suitably modified new assumption called KEA3. What we believe is most interesting is that we show that it is possible to “falsify” assumptions like KEA2 that, due to their nature and quantifier-structure, do not lend themselves easily to “efficient falsification” (Naor [15]).

Mihir Bellare, Adriana Palacio
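
For orientation, KEA1 (the weaker of the pair, going back to Damgård) can be stated roughly as follows; this paraphrase is from memory and elides the precise quantification:

    For every efficient adversary A there is an efficient extractor A* such that,
    on input (q, g, g^a) with g of prime order q: whenever A outputs a pair (C, Y)
    satisfying Y = C^a, the extractor A* (given A's input and random coins)
    outputs c with g^c = C.

Intuitively, the only way to produce such a pair is to "know" the exponent c. KEA2 is a stronger variant along the same lines, giving the adversary additional related inputs, and it is exactly that extra flexibility that the paper exploits to construct an adversary no extractor can handle.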

Hash Collisions

Near-Collisions of SHA-0

In this paper we find two near-collisions of the full compression function of SHA-0, in which up to 142 of the 160 bits of the output are equal. We also find many full collisions of 65-round reduced SHA-0, a large improvement over the best previous result of 35 rounds. We use the very surprising fact that the messages have many neutral bits, some of which do not affect the differences for about 15-20 rounds. We also show that 82-round SHA-0 is much weaker than the (80-round) SHA-0, although it has more rounds. This demonstrates that the strength of SHA-0 is not monotonic in the number of rounds.

Eli Biham, Rafi Chen
Multicollisions in Iterated Hash Functions. Application to Cascaded Constructions

In this paper, we study the existence of multicollisions in iterated hash functions. We show that finding multicollisions, i.e., r-tuples of messages that all hash to the same value, is not much harder than finding ordinary collisions, i.e., pairs of messages, even for extremely large values of r. More precisely, the ratio of the complexities of the two attacks is approximately equal to the logarithm of r. Then, using large multicollisions as a tool, we solve a long-standing open problem and prove that concatenating the results of several iterated hash functions in order to build a larger one does not yield a secure construction. We also discuss the potential impact of our attack on several published schemes. Quite surprisingly, for subtle reasons, the schemes we study happen to be immune to our attack.

Antoine Joux
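
The construction behind the log r ratio is simple enough to demonstrate on a toy iterated hash in Python (the 24-bit compression function below is deliberately weak so each stage collision is found by brute force): chain t successive collisions on the chaining value, and any of the 2^t ways of picking one block per stage yields the same final digest.

    import hashlib, os

    def f(h, block):
        # Toy 24-bit compression function (for illustration only).
        return hashlib.sha256(h + block).digest()[:3]

    def stage_collision(h):
        # Birthday-search one collision f(h, b) = f(h, b') with b != b'.
        seen = {}
        while True:
            b = os.urandom(8)
            out = f(h, b)
            if out in seen and seen[out] != b:
                return seen[out], b, out
            seen[out] = b

    def multicollision(t, iv=b"\x00" * 3):
        # Chain t stage collisions: picking either block of each pair gives
        # 2^t distinct messages, all with the same iterated hash value.
        pairs, h = [], iv
        for _ in range(t):
            b0, b1, h = stage_collision(h)
            pairs.append((b0, b1))
        return pairs, h

Here each stage costs about 2^12 compression calls, so a 2^20-multicollision costs only about 20 * 2^12 calls, illustrating the "log r" overhead claimed above.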

Secure Computation

Adaptively Secure Feldman VSS and Applications to Universally-Composable Threshold Cryptography

We propose the first distributed discrete-log key generation (DLKG) protocol from scratch which is adaptively secure in the non-erasure model, and at the same time completely avoids the use of interactive zero-knowledge proofs. As a consequence, the protocol can be proven secure in a universally-composable (UC) like framework which prohibits rewinding. We prove the security in what we call the single-inconsistent-player UC model, which guarantees arbitrary composition as long as all protocols are executed by the same players. As an application, we propose a fully UC threshold Schnorr signature scheme. Our results are based on a new adaptively-secure Feldman VSS scheme. Although adaptive security was already addressed by Feldman in the original paper, the scheme requires secure communication, secure erasure, and either a linear number of rounds or digital signatures to resolve disputes. Our scheme overcomes all of these shortcomings, but on the other hand requires some restriction on the corruption behavior of the adversary, which however disappears in some applications, including our new DLKG protocol. We also propose several new adaptively-secure protocols which may find other applications, such as a sender non-committing encryption scheme, a distributed trapdoor-key generation protocol for Pedersen's commitment scheme, and distributed-verifier proofs for proving relations among commitments or even any NP relations in general.

Masayuki Abe, Serge Fehr
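
As background for the scheme being strengthened: in Feldman's VSS the dealer broadcasts $g^{a_j}$ for each coefficient of the sharing polynomial, and player i checks its share f(i) against those commitments in the exponent. A toy Python sketch with artificially small, insecure parameters (the group choice is illustrative only):

    import random

    p, q, g = 2579, 1289, 4   # toy group: q | p - 1, g generates the order-q subgroup

    def deal(secret, t, n):
        # Share `secret` with threshold t among n players, Feldman-style.
        coeffs = [secret % q] + [random.randrange(q) for _ in range(t - 1)]
        commits = [pow(g, a, p) for a in coeffs]               # broadcast
        shares = {i: sum(a * pow(i, j, q) for j, a in enumerate(coeffs)) % q
                  for i in range(1, n + 1)}                    # sent privately
        return commits, shares

    def verify(i, share, commits):
        # Check g^share == prod_j commits[j]^(i^j)  (all mod p).
        rhs = 1
        for j, c in enumerate(commits):
            rhs = rhs * pow(c, pow(i, j, q), p) % p
        return pow(g, share, p) == rhs

    commits, shares = deal(42, 3, 5)
    assert all(verify(i, s, commits) for i, s in shares.items())
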
Round-Optimal Secure Two-Party Computation

We consider the central cryptographic task of secure two-party computation: two parties wish to compute some function of their private inputs (each receiving possibly different outputs) where security should hold with respect to arbitrarily malicious behavior of either of the participants. Despite extensive research in this area, the exact round complexity of this fundamental problem (i.e., the number of rounds required to compute an arbitrary poly-time functionality) was not previously known. Here, we establish the exact round complexity of secure two-party computation with respect to black-box proofs of security. We first show a lower bound establishing (unconditionally) that four rounds are not sufficient to securely compute the coin-tossing functionality for any super-logarithmic number of coins; this rules out 4-round protocols for other natural functionalities as well. Next, we construct protocols for securely computing any (randomized) functionality using only five rounds. Our protocols may be based either on certified trapdoor permutations or on homomorphic encryption schemes satisfying certain additional properties. The former assumption is implied by, e.g., the RSA assumption for large public exponents, while the latter is implied by, e.g., the DDH assumption. Finally, we show how our protocols may be modified – without increasing their round complexity and without requiring erasures – to tolerate an adaptive malicious adversary.

Jonathan Katz, Rafail Ostrovsky

Invited Talk

Security, Liberty, and Electronic Communications

We live in perilous times. We live in times where a dirty bomb going off in lower Manhattan is not unimaginable. We live in times where the CIA interrogations of al Qaeda leaders were so harsh that the FBI would not let its agents participate [36]. We live in times when security and liberty are both endangered. We also live in times of unimaginable technical creativity. It is faster to use Instant Messaging to query a colleague halfway across the world than it is to walk down the hallway and ask the question, and Google can search four billion web pages faster than the time it takes to pull the right volume of the Oxford English Dictionary off the library shelf. We live surrounded by a plethora of communicating and computing devices – telephones, PDAs, cell phones, laptops, PCs, computers – and this is only the beginning of the communications revolution.

Susan Landau

Stream Cipher Cryptanalysis

An Improved Correlation Attack Against Irregular Clocked and Filtered Keystream Generators

In this paper we propose a new key recovery attack on irregular clocked keystream generators where the stream is filtered by a nonlinear Boolean function. We show that the attack is much more efficient than expected from previous analytic methods, and we believe it improves all previous attacks on the cipher model.

Håvard Molland, Tor Helleseth
Rewriting Variables: The Complexity of Fast Algebraic Attacks on Stream Ciphers

Recently proposed algebraic attacks [2,6] and fast algebraic attacks [1,5] have provided the best analyses against some deployed LFSR-based ciphers. The process complexity is exponential in the degree of the equations. Fast algebraic attacks were introduced [5] as a way of reducing run-time complexity by reducing the degree of the system of equations. Previous reports on fast algebraic attacks [1,5] have underestimated the complexity of substituting the keystream into the system of equations, which in some cases dominates the attack. We also show how the Fast Fourier Transform (FFT) [4] can be applied to decrease the complexity of the substitution step. Finally, it is shown that all functions of degree d satisfy a common, function-independent linear combination that may be used in the pre-computation step of the fast algebraic attack. An explicit factorization of the corresponding characteristic polynomial yields the fastest known method for performing the pre-computation step.

Philip Hawkes, Gregory G. Rose
Faster Correlation Attack on Bluetooth Keystream Generator E0

We study both distinguishing and key-recovery attacks against E0, the keystream generator used in Bluetooth, by means of correlation. First, a powerful computation method for correlations is formulated by a recursive expression, which makes it easier to calculate correlations of the finite state machine output sequences up to 26 bits for E0 and allows us to verify for the first time that the two known correlations are the largest. Second, we apply the concept of convolution to the analysis of the distinguisher based on all correlations, and propose an efficient distinguisher due to the linear dependency of the largest correlations. Last, we propose a novel maximum likelihood decoding algorithm based on the fast Walsh transform to recover the closest codeword for any linear code of dimension L and length n. It requires time $O(n + L \cdot 2^L)$ and memory $\min(n, 2^L)$. This can speed up many attacks, such as fast correlation attacks. We apply it to E0, and our best key-recovery attack works in $2^{39}$ time given $2^{39}$ consecutive bits after $O(2^{37})$ precomputation. This is the best known attack against E0 so far.

Yi Lu, Serge Vaudenay
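
The third contribution (maximum-likelihood decoding of any dimension-L linear code via a fast Walsh transform) is generic enough to sketch: bucket the ±1-mapped received bits by their generator columns, Walsh-transform the buckets, and the transform value at m is exactly the agreement score of message m's codeword. A Python sketch under standard conventions, not the paper's code:

    def fwht(a):
        # In-place fast Walsh-Hadamard transform of a list of length 2^L.
        h = 1
        while h < len(a):
            for i in range(0, len(a), 2 * h):
                for j in range(i, i + h):
                    a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
            h *= 2

    def ml_decode(columns, bits, L):
        # columns[i]: L-bit generator column at position i (as an int);
        # bits[i]: received bit. Returns the information word m maximizing
        # the agreement sum_i (-1)^(bits[i] + <m, columns[i]>).
        u = [0] * (1 << L)
        for x, b in zip(columns, bits):
            u[x] += -1 if b else 1       # bucketing: O(n)
        fwht(u)                          # all 2^L scores at once: O(L * 2^L)
        return max(range(1 << L), key=lambda m: u[m])

Total cost is O(n + L * 2^L) time and 2^L memory, matching the bound quoted above.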

Public Key Encryption

A New Paradigm of Hybrid Encryption Scheme

In this paper, we show that a key encapsulation mechanism (KEM) does not have to be IND-CCA secure in the construction of hybrid encryption schemes, as was previously believed. That is, we present a more efficient hybrid encryption scheme than Shoup's [12] by using a KEM which is not necessarily IND-CCA secure. Nevertheless, our scheme is secure in the sense of IND-CCA under the DDH assumption in the standard model. This result is further generalized to universal$_2$ projective hash families.

Kaoru Kurosawa, Yvo Desmedt
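
For readers new to the KEM/DEM framing the paper works in, here is a toy hybrid encryption sketch in Python: an ElGamal-style KEM over an artificially small group plus a hash-stream DEM. Every parameter and helper name here is illustrative, the group is far too small to be secure, and this is not the paper's construction.

    import hashlib, hmac, secrets

    p, q, g = 2579, 1289, 4             # toy group parameters (insecure sizes)

    def kem_keygen():
        x = secrets.randbelow(q)
        return x, pow(g, x, p)          # (secret key, public key)

    def kem_encap(pk):
        r = secrets.randbelow(q)
        c = pow(g, r, p)                # the encapsulation sent on the wire
        k = hashlib.sha256(pow(pk, r, p).to_bytes(2, "big")).digest()
        return c, k                     # the session key k is never sent

    def kem_decap(sk, c):
        return hashlib.sha256(pow(c, sk, p).to_bytes(2, "big")).digest()

    def dem_encrypt(k, m):              # one-time hash stream + MAC (toy DEM)
        stream = hashlib.sha256(k + b"enc").digest()[:len(m)]
        ct = bytes(a ^ b for a, b in zip(m, stream))
        return ct, hmac.new(k, ct, hashlib.sha256).digest()
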
Secure Identity Based Encryption Without Random Oracles

We present a fully secure Identity Based Encryption scheme whose proof of security does not rely on the random oracle heuristic. Security is based on the Decision Bilinear Diffie-Hellman assumption. This solves an open problem posed by Boneh and Franklin in 2001.

Dan Boneh, Xavier Boyen

Bounded Storage Model

Non-interactive Timestamping in the Bounded Storage Model

A timestamping scheme is non-interactive if a stamper can stamp a document without communicating with any other player; the only communication occurs at validation time. Non-interactive timestamping has many advantages, such as information-theoretic privacy and enhanced robustness. Unfortunately, no such scheme exists against polynomial-time adversaries that have unbounded storage at their disposal. In this paper we show that non-interactive timestamping is possible in the bounded storage model. In this model it is assumed that all parties participating in the protocol have small storage, and that at the beginning of the protocol a very long random string (which is too long to be stored by the players) is transmitted. To the best of our knowledge, this is the first example of a cryptographic task that is possible in the bounded storage model but impossible in the "standard cryptographic setting", even under cryptographic assumptions. We give an explicit construction that is secure against all bounded storage adversaries, and a significantly more efficient construction secure against all bounded storage adversaries that run in polynomial time.

Tal Moran, Ronen Shaltiel, Amnon Ta-Shma

Key Management

IPAKE: Isomorphisms for Password-Based Authenticated Key Exchange

In this paper we revisit one of the most popular password-based key exchange protocols, namely the OKE (Open Key Exchange) scheme, proposed by Lucks in 1997. Our results can be highlighted as follows. First, we define a new primitive that we call trapdoor hard-to-invert isomorphisms, and give some candidates. Then we present a generic password-based key exchange construction that admits a security proof assuming that these objects exist. Finally, we instantiate our general scheme with some concrete examples, such as the Diffie-Hellman function and the RSA function, but more interestingly the modular square root function, which leads to the first scheme with security related to the integer factorization problem. Furthermore, the latter variant is very efficient for one party (the server). Our results hold in the random-oracle model.

Dario Catalano, David Pointcheval, Thomas Pornin
Randomness Extraction and Key Derivation Using the CBC, Cascade and HMAC Modes

We study the suitability of common pseudorandomness modes associated with cryptographic hash functions and block ciphers (CBC-MAC, Cascade and HMAC) for the task of "randomness extraction", namely, the derivation of keying material from semi-secret and/or semi-random sources. Important applications for such extractors include the derivation of strong cryptographic keys from non-uniform sources of randomness (for example, to extract a seed for a pseudorandom generator from a weak source of physical or digital noise), and the derivation of pseudorandom keys from a Diffie-Hellman value. Extractors are closely related in their applications to pseudorandom functions, and it is thus attractive to (re)use the common pseudorandom modes as randomness extractors. Yet the crucial difference between pseudorandom generation and randomness extraction is that the former uses random secret keys while the latter uses random but known keys. We show that under a variety of assumptions on the underlying primitives (block ciphers and compression functions), ranging from ideal randomness assumptions to realistic universal-hashing properties, these modes induce good extractors. Hence, these schemes represent a more practical alternative to combinatorial extractors (which are seldom used in practice), and a better-analyzed alternative to the common practice of using SHA-1 or MD5 (as a single un-keyed function) for randomness extraction. In particular, our results serve to validate the method of key extraction and key derivation from Diffie-Hellman values used in the IKE (IPsec's Key Exchange) protocol.

Yevgeniy Dodis, Rosario Gennaro, Johan Håstad, Hugo Krawczyk, Tal Rabin
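
The "random but known key" point is worth seeing in code: the extractor key (salt) is public, unlike a PRF key. Below is a minimal extract-then-expand sketch with HMAC in Python, in the spirit of the IKE key-derivation practice the paper analyzes (this style of construction later crystallized as HKDF; the exact formatting here is illustrative, not a standard):

    import hmac, hashlib

    def extract(public_salt, source_material):
        # Randomness extraction: HMAC keyed with a random but PUBLIC salt,
        # applied to the imperfect source (e.g. a Diffie-Hellman value).
        return hmac.new(public_salt, source_material, hashlib.sha256).digest()

    def expand(prk, info, length):
        # Counter-mode expansion of the extracted pseudorandom key prk.
        okm, block, i = b"", b"", 1
        while len(okm) < length:
            block = hmac.new(prk, block + info + bytes([i]), hashlib.sha256).digest()
            okm += block
            i += 1
        return okm[:length]
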
Efficient Tree-Based Revocation in Groups of Low-State Devices

We study the problem of broadcasting confidential information to a collection of n devices while providing the ability to revoke an arbitrary subset of those devices (and tolerating collusion among the revoked devices). In this paper, we restrict our attention to low-memory devices, that is, devices that can store at most O(log n) keys. We consider solutions for both the zero-state and low-state cases, where such devices are organized in a tree structure T. We allow the group controller to encrypt broadcasts to any subtree of T, even if the tree is based on a multi-way organizational chart or a severely unbalanced multicast tree.

Michael T. Goodrich, Jonathan Z. Sun, Roberto Tamassia
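
A standard tree-based revocation idea of this kind is the complete-subtree method (given here as orientation, not as this paper's exact scheme): give each device the keys on its leaf-to-root path, and broadcast under the keys of the maximal subtrees containing no revoked device. A Python sketch for a complete binary tree:

    def cover(n_leaves, revoked):
        # Nodes are numbered 1..2n-1 in heap order; leaf i sits at node n + i.
        n = n_leaves                      # assumed to be a power of two
        steiner = set()                   # all ancestors of revoked leaves
        for r in revoked:
            v = n + r
            while v >= 1:
                steiner.add(v)
                v //= 2
        if not steiner:
            return [1]                    # nobody revoked: root key suffices
        out = []
        for v in steiner:
            for c in (2 * v, 2 * v + 1):  # clean children of Steiner nodes
                if c < 2 * n and c not in steiner:
                    out.append(c)
        return out                        # encrypt the broadcast key under these

    print(sorted(cover(8, [2, 5])))       # -> [4, 7, 11, 12]

Each non-revoked device holds exactly one key in the cover (one of its path keys), while revoked devices, even colluding, hold none.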

Computationally Unbounded Adversaries

Privacy-Preserving Datamining on Vertically Partitioned Databases

In a recent paper Dinur and Nissim considered a statistical database in which a trusted database administrator monitors queries and introduces noise to the responses with the goal of maintaining data privacy [5]. Under a rigorous definition of breach of privacy, Dinur and Nissim proved that unless the total number of queries is sub-linear in the size of the database, a substantial amount of noise is required to avoid a breach, rendering the database almost useless.As databases grow increasingly large, the possibility of being able to query only a sub-linear number of times becomes realistic. We further investigate this situation, generalizing the previous work in two important directions: multi-attribute databases (previous work dealt only with single-attribute databases) and vertically partitioned databases, in which different subsets of attributes are stored in different databases. In addition, we show how to use our techniques for datamining on published noisy statistics.

Cynthia Dwork, Kobbi Nissim
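
The model under discussion is easy to make concrete: the database is a bit vector, a query names a subset of rows, and the curator returns the subset sum plus bounded noise. A minimal Python sketch (the noise distribution here is a placeholder; Dinur and Nissim's point is that its magnitude must be large unless the number of queries is sub-linear in the database size):

    import random

    def noisy_subset_sum(db, subset, noise_bound):
        # db: list of 0/1 entries; subset: indices being queried.
        answer = sum(db[i] for i in subset)
        return answer + random.randint(-noise_bound, noise_bound)

    db = [random.randint(0, 1) for _ in range(1000)]
    print(noisy_subset_sum(db, range(500), noise_bound=20))
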
Optimal Perfectly Secure Message Transmission

In the perfectly secure message transmission (PSMT) problem, two synchronized non-faulty players (or processors), the Sender S and the Receiver R, are connected by n wires (each of which facilitates 2-way communication); S has an ℓ-bit message that he wishes to send to R; after exchanging messages in phases, R should correctly obtain S's message, while an adversary listening on and actively controlling any set of t (or fewer) wires should have no information about S's message. We measure the quality of a protocol for securely transmitting an ℓ-bit message using the following parameters: the number of wires n, the number of phases r, and the total number of bits transmitted b. The optima for n and r are respectively 2t+1 and 2. We prove that any 2-phase reliable message transmission protocol, and hence any secure protocol, over n wires out of which at most t are faulty is required to transmit at least $b = \frac{n\ell}{n-2t}$ bits. While no known protocol is simultaneously optimal in both communication and phase complexity, we present one such optimal protocol for the case n = 2t+1 when the message is large enough, viz., $\ell = \Omega(t \log t)$ bits; that is, our optimal protocol has n = 2t+1, r = 2 and b = O(nℓ) bits. Note that privacy comes for free if the message is large enough. We also demonstrate how randomness can effectively improve the phase complexity. Specifically, while the (worst-case) lower bound on r is 2, we design an efficient optimally tolerant protocol for PSMT that terminates in a single phase with arbitrarily high probability. Finally, we consider the case when the adversary is mobile, that is, when he can corrupt a different set of t wires in different phases. Again, the optima for n and r are respectively 2t+1 and 2; however, we show that $b \geq \frac{n\ell}{n-2t}$ bits irrespective of r. We present the first protocol that is (asymptotically) optimal in b for n = 2t+1. Our protocol has a phase complexity of O(t).

K. Srinathan, Arvind Narayanan, C. Pandu Rangan
Pseudo-signatures, Broadcast, and Multi-party Computation from Correlated Randomness

Unconditionally secure multi-party computations in general, and broadcast in particular, are impossible if any third of the players can be actively corrupted and if no additional information-theoretic primitive is given. In this paper, we relativize this pessimistic result by showing that such a primitive can be as simple as noisy communication channels between the players or weakly correlated pieces of information. We consider the scenario where three players have access to random variables X, Y, and Z, respectively, and give the exact condition on the joint distribution $P_{XYZ}$ under which unconditional broadcast is possible. More precisely, we show that this condition characterizes the possibility of realizing so-called pseudo-signatures between the players. As a consequence of our results, we can give conditions for the possibility of achieving unconditional broadcast between n players and any minority of cheaters and, hence, general multi-party computation under the same condition.

Matthias Fitzi, Stefan Wolf, Jürg Wullschleger
Backmatter
Metadata
Title: Advances in Cryptology – CRYPTO 2004
Editor: Matt Franklin
Copyright Year: 2004
Publisher: Springer Berlin Heidelberg
Electronic ISBN: 978-3-540-28628-8
Print ISBN: 978-3-540-22668-0
DOI: https://doi.org/10.1007/b99099