
2000 | Book

Advances in Cryptology — EUROCRYPT 2000

International Conference on the Theory and Application of Cryptographic Techniques Bruges, Belgium, May 14–18, 2000 Proceedings

Edited by: Bart Preneel

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Factoring and Discrete Logarithm

Factorization of a 512-Bit RSA Modulus

This paper reports on the factorization of the 512-bit number RSA-155 by the Number Field Sieve factoring method (NFS) and discusses the implications for RSA.

Stefania Cavallar, Bruce Dodson, Arjen K. Lenstra, Walter Lioen, Peter L. Montgomery, Brian Murphy, Herman te Riele, Karen Aardal, Jeff Gilchrist, Gérard Guillerm, Paul Leyland, Jöel Marchand, François Morain, Alec Muffett, Chris and Craig Putnam, Paul Zimmermann
An Algorithm for Solving the Discrete Log Problem on Hyperelliptic Curves

We present an index-calculus algorithm for the computation of discrete logarithms in the Jacobian of hyperelliptic curves defined over finite fields. The complexity analysis predicts that it is faster than the Rho method for genus greater than 4. To demonstrate the efficiency of our approach, we describe our breaking of a cryptosystem based on a curve of genus 6 recently proposed by Koblitz.

Pierrick Gaudry
Analysis and Optimization of the TWINKLE Factoring Device

We describe an enhanced version of the TWINKLE factoring device and analyse to what extent it can be expected to speed up the sieving step of the Quadratic Sieve and Number Field Sieve factoring algorithms. The bottom line of our analysis is that the TWINKLE-assisted factorization of 768-bit numbers is difficult but doable in about 9 months (including the sieving and matrix parts) by a large organization which can use 80,000 standard Pentium II PC’s and 5,000 TWINKLE devices.

Arjen K. Lenstra, Adi Shamir

Cryptanalysis I: Digital Signatures

Noisy Polynomial Interpolation and Noisy Chinese Remaindering

The noisy polynomial interpolation problem is a new intractability assumption introduced last year in the context of oblivious polynomial evaluation. It also appeared independently in password identification schemes, due to its connection with secret sharing schemes based on Lagrange’s polynomial interpolation. This paper presents new algorithms to solve the noisy polynomial interpolation problem. In particular, we prove a reduction from noisy polynomial interpolation to the lattice shortest vector problem, when the parameters satisfy a certain condition that we make explicit. Standard lattice reduction techniques appear to solve many instances of the problem. It follows that noisy polynomial interpolation is much easier than expected. We therefore suggest simple modifications to several recently proposed cryptographic schemes, in order to change the intractability assumption. We also discuss analogous methods for the related noisy Chinese remaindering problem, which arises from the well-known analogy between polynomials and integers.
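
For context, the noise-free core of the problem is ordinary Lagrange interpolation over a finite field: k + 1 correct point/value pairs determine the degree-k polynomial uniquely, and the noisy variant replaces each value by a small set of candidates of which only one is correct. The Python sketch below shows just this exact-interpolation core with an arbitrary toy field and polynomial (not parameters from the paper).

```python
# Exact Lagrange interpolation over GF(p): the noise-free core of the
# noisy polynomial interpolation problem. Field size, points and the
# example polynomial are illustrative only.

p = 101

def poly_mul_linear(b, xj, p):
    """Multiply polynomial b (coefficients low to high) by (x - xj) mod p."""
    res = [0] * (len(b) + 1)
    for k, c in enumerate(b):
        res[k] = (res[k] - xj * c) % p
        res[k + 1] = (res[k + 1] + c) % p
    return res

def interpolate(points, p):
    """Coefficients (low to high) of the unique polynomial of degree
    < len(points) passing through the given (x, y) pairs mod p."""
    coeffs = [0] * len(points)
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, xj, p)
                denom = (denom * (xi - xj)) % p
        scale = yi * pow(denom, -1, p) % p
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

# P(x) = 3x^2 + 7x + 1 mod 101, sampled at x = 1, 2, 5
pts = [(1, 11), (2, 27), (5, 10)]
print(interpolate(pts, p))   # -> [1, 7, 3]
```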

Daniel Bleichenbacher, Phong Q. Nguyen
A Chosen Messages Attack on the ISO/IEC 9796-1 Signature Scheme

We introduce an attack against the ISO/IEC 9796-1 digital signature scheme using redundancy, taking advantage of the multiplicative property of the RSA and Rabin cryptosystems. The forged signature of one message is obtained from the signatures of three others for any public exponent v. For even v, the modulus is factored from the signatures of four messages, or just two for v = 2. The attacker must select the above messages from a particular message subset, whose size grows exponentially with the public modulus bit size. The attack is computationally inexpensive, and works for any modulus of 16z, 16z ± 1, or 16z ± 2 bits. This prompts the need to revise ISO/IEC 9796-1, or to avoid its use in situations where an adversary could obtain the signature of even a few mostly chosen messages.
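
The property the attack leverages is the multiplicativity of raw (unpadded) RSA and Rabin signatures: the signature of a product of messages equals the product of their signatures modulo N. The snippet below demonstrates only this property with toy parameters; the actual attack additionally has to cope with the ISO/IEC 9796-1 redundancy, which is what the careful message selection is for.

```python
# Multiplicativity of textbook RSA signatures: sig(m1*m2 mod N) equals
# sig(m1)*sig(m2) mod N. Toy primes, illustration only; ISO/IEC 9796-1
# adds redundancy that the real attack works around by choosing messages
# from a special subset.

p, q = 1009, 1013
N = p * q
e = 5
d = pow(e, -1, (p - 1) * (q - 1))   # Python 3.8+ modular inverse

def sign(m):
    return pow(m, d, N)

m1, m2 = 12345, 6789
assert (sign(m1) * sign(m2)) % N == sign((m1 * m2) % N)
print("multiplicative relation verified")
```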

François Grieu
Cryptanalysis of Countermeasures Proposed for Repairing ISO 9796-1

ISO 9796-1, published in 1991, was the first standard specifying a digital signature scheme with message recovery. In [4], Coron, Naccache and Stern described an attack on a slight modification of ISO 9796-1. Then, Coppersmith, Halevi and Jutla turned it into an attack against the standard in full [2]. They also proposed five countermeasures for repairing it. In this paper, we show that all these countermeasures can be attacked, either by using already existing techniques (including a very recent one), or by introducing new techniques, one of them based on the decomposition of an integer into sums of two squares.

Marc Girault, Jean-François Misarsky
Security Analysis of the Gennaro-Halevi-Rabin Signature Scheme

We exhibit an attack against a signature scheme recently proposed by Gennaro, Halevi and Rabin [9]. The scheme’s security is based on two assumptions, namely the strong RSA assumption and the existence of a division-intractable hash function. For the latter, the authors conjectured a security level exponential in the hash function’s digest size, whereas our attack is sub-exponential with respect to the digest size. Moreover, since the new attack is optimal, the length of the hash function can now be rigorously fixed. In particular, to get a security level equivalent to 1024-bit RSA, one should use a digest size of approximately 1024 bits instead of the 512 bits suggested in [9].

Jean-Sébastien Coron, David Naccache

Invited Talk

On the Security of 3GPP Networks

Later this year we shall see the release of the Third Generation Partnership Project (3GPP) specifications for WCDMA — the first third generation standard for mobile communications. This 3G system combines elements of both a radical departure and a timid evolution from the 2G system known as GSM. It is radically different from GSM in having a wide-band CDMA system for its air-interface, but it hangs on to the GSM/GPRS core switching network with its MAP based signalling system. In this paper we consider the security features in WCDMA, taking a critical look at where they depart from those in GSM, where they are still very much the same and how they may develop as the core switching network is replaced by an IP based infrastructure.

Michael Walker

Private Information Retrieval

One-Way Trapdoor Permutations Are Sufficient for Non-trivial Single-Server Private Information Retrieval

We show that general one-way trapdoor permutations are sufficient to privately retrieve an entry from a database of size n with total communication complexity strictly less than n. More specifically, we present a protocol in which the user sends O(K^2) bits and the server sends n − cn/K bits (for any constant c), where K is the security parameter of the trapdoor permutations. Thus, for sufficiently large databases (e.g., when K = n^ε for some small ε) our construction breaks the information-theoretic lower-bound (of at least n bits). This demonstrates the feasibility of basing single-server private information retrieval on general complexity assumptions. An important implication of our result is that we can implement a 1-out-of-n Oblivious Transfer protocol with communication complexity strictly less than n based on any one-way trapdoor permutation.
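
Reading the stated counts at face value (user O(K^2) bits, server n − cn/K bits) and plugging in the suggested parameter K = n^ε, the total communication is

$$ O(K^2) + n - \frac{cn}{K} = n - c\,n^{1-\varepsilon} + O(n^{2\varepsilon}) < n $$

for any fixed small ε > 0 and sufficiently large n, which is how the construction beats the trivial n-bit bound.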

Eyal Kushilevitz, Rafail Ostrovsky
Single Database Private Information Retrieval Implies Oblivious Transfer

A Single-Database Private Information Retrieval (PIR) is a protocol that allows a user to privately retrieve an entry from a database with as small a communication complexity as possible. We call a PIR protocol non-trivial if its total communication is strictly less than the size of the database. Non-trivial PIR is an important cryptographic primitive with many applications. Thus, understanding which assumptions are necessary for implementing such a primitive is an important task, although (so far) not a well-understood one. In this paper we show that any non-trivial PIR implies Oblivious Transfer, a far better understood primitive. Our result not only significantly clarifies our understanding of any non-trivial PIR protocol, but also yields the following consequences: (1) any non-trivial PIR is complete for all two-party and multi-party secure computations; (2) there exists a communication-efficient reduction from any PIR protocol to a 1-out-of-n Oblivious Transfer protocol (also called SPIR); (3) there is strong evidence that the assumption of the existence of a one-way function is necessary but not sufficient for any non-trivial PIR protocol.

Giovanni Di Crescenzo, Tal Malkin, Rafail Ostrovsky

Key Management Protocols

Authenticated Key Exchange Secure against Dictionary Attacks

Password-based protocols for authenticated key exchange (AKE) are designed to work despite the use of passwords drawn from a space so small that an adversary might well enumerate, off line, all possible passwords. While several such protocols have been suggested, the underlying theory has been lagging. We begin by defining a model for this problem, one rich enough to deal with password guessing, forward secrecy, server compromise, and loss of session keys. The one model can be used to define various goals. We take AKE (with “implicit” authentication) as the “basic” goal, and we give definitions for it, and for entity-authentication goals as well. Then we prove correctness for the idea at the center of the Encrypted Key-Exchange (EKE) protocol of Bellovin and Merritt: we prove security, in an ideal-cipher model, of the two-flow protocol at the core of EKE.

Mihir Bellare, David Pointcheval, Phillip Rogaway
Provably Secure Password-Authenticated Key Exchange Using Diffie-Hellman

When designing password-authenticated key exchange protocols (as opposed to key exchange protocols authenticated using cryptographically secure keys), one must not allow any information to be leaked that would allow verification of the password (a weak shared key), since an attacker who obtains this information may be able to run an off-line dictionary attack to determine the correct password. We present a new protocol called PAK which is the first Diffie-Hellman-based password-authenticated key exchange protocol to provide a formal proof of security (in the random oracle model) against both passive and active adversaries. In addition to the PAK protocol that provides mutual explicit authentication, we also show a more efficient protocol called PPK that is provably secure in the implicit-authentication model. We then extend PAK to a protocol called PAK-X, in which one side (the client) stores a plaintext version of the password, while the other side (the server) only stores a verifier for the password. We formally prove security of PAK-X, even when the server is compromised. Our formal model for password-authenticated key exchange is new, and may be of independent interest.

Victor Boyko, Philip MacKenzie, Sarvar Patel
Fair Encryption of RSA Keys

Cryptography is more and more concerned with elaborate protocols involving many participants. In some cases, it is crucial to be sure that players behave fairly, especially when they use public key encryption. Accordingly, mechanisms are needed to check the correctness of encrypted data, without compromising secrecy. We consider an optimistic scenario in which users have pairs of public and private keys and give an encryption of their secret key with the public key of a third party. In this setting we wish to provide a publicly verifiable proof that the third party is able to recover the secret key if needed. Our emphasis is on size; we believe that the proof should be of the same length as the original key. In this paper, we propose such proofs of fair encryption for El Gamal and RSA keys, using the Paillier cryptosystem. Our proofs are really efficient since in practical terms they are only a few hundred bytes long. As an application, we design a very simple and efficient key recovery system.
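
Since the proofs are built from the Paillier cryptosystem, here is a minimal toy-parameter sketch of Paillier encryption, decryption, and its additive homomorphism; this is only the underlying tool, not the fair-encryption proof system of the paper, and the parameters are far too small for real use.

```python
# Minimal Paillier sketch: Enc(m) = (1+n)^m * r^n mod n^2, additively
# homomorphic. Toy primes, illustration only.
import random
from math import gcd

p, q = 1009, 1013
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)      # lcm(p-1, q-1)

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * pow(lam, -1, n) % n    # L(u) = (u-1)/n

c1, c2 = encrypt(41), encrypt(58)
assert decrypt((c1 * c2) % n2) == 41 + 58          # product of ciphertexts = sum of plaintexts
```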

Guillaume Poupard, Jacques Stern

Threshold Cryptography and Digital Signatures

Computing Inverses over a Shared Secret Modulus

We discuss the following problem: given an integer φ shared secretly among n players and a prime number e, how can the players efficiently compute a sharing of e^{-1} mod φ? The most interesting case is when φ is the Euler function of a known RSA modulus N, φ = φ(N). The problem has several applications, among which the construction of threshold variants for two recent signature schemes proposed by Gennaro-Halevi-Rabin and Cramer-Shoup. We present new and efficient protocols to solve this problem, improving over previous solutions by Boneh-Franklin and Frankel et al. Our basic protocol (secure against honest but curious players) requires only two rounds of communication and a single GCD computation. The robust protocol (secure against malicious players) adds only a couple of rounds and a few modular exponentiations to the computation.
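
As a point of reference, the value the players jointly compute is just the modular inverse d = e^{-1} mod φ that a single party holding φ would obtain with one extended-GCD computation. The sketch below shows only this plaintext computation, with toy parameters, not the secret-shared protocol of the paper.

```python
# The value being computed, in the clear: d = e^{-1} mod phi via the
# extended Euclidean algorithm. In the paper phi stays secret-shared
# among the players; this is only the non-distributed reference point.

def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

e = 65537
phi = (1009 - 1) * (1013 - 1)        # phi(N) for toy primes 1009, 1013

g, x, _ = ext_gcd(e, phi)
assert g == 1
d = x % phi
assert (e * d) % phi == 1
```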

Dario Catalano, Rosario Gennaro, Shai Halevi
Practical Threshold Signatures

We present an RSA threshold signature scheme. The scheme enjoys the following properties: (1) it is unforgeable and robust in the random oracle model, assuming the RSA problem is hard; (2) signature share generation and verification is completely non-interactive; (3) the size of an individual signature share is bounded by a constant times the size of the RSA modulus.

Victor Shoup
Adaptively Secure Threshold Cryptography: Introducing Concurrency, Removing Erasures
(Extended Abstract)

We put forward two new measures of security for threshold schemes secure in the adaptive adversary model: security under concurrent composition, and security without the assumption of reliable erasure. Using novel constructions and analytical tools, in both these settings, we exhibit efficient secure threshold protocols for a variety of cryptographic applications. In particular, based on the recent scheme by Cramer-Shoup, we construct adaptively secure threshold cryptosystems secure against adaptive chosen ciphertext attack under the DDH intractability assumption. Our techniques are also applicable to other cryptosystems and signature schemes, like RSA, DSS, and ElGamal. Our techniques include the first efficient implementation, for a wide but special class of protocols, of secure channels in the erasure-free adaptive model. Of independent interest, we present the notion of a committed proof.

Stanisław Jarecki, Anna Lysyanskaya
Confirmer Signature Schemes Secure against Adaptive Adversaries
(Extended Abstract)

The main difference between confirmer signatures and ordinary digital signatures is that a confirmer signature can be verified only with the assistance of a semitrusted third party, the confirmer. Additionally, the confirmer can selectively convert single confirmer signatures into ordinary signatures. This paper points out that previous models for confirmer signature schemes are too restricted to address the case where several signers share the same confirmer. More seriously, we show that various proposed schemes (some of which are provably secure in these restricted models) are vulnerable to an adaptive signature-transformation attack. We define a new stronger model that covers this kind of attack and provide a generic solution based on any secure ordinary signature scheme and public key encryption scheme. We also exhibit a concrete instance thereof.

Jan Camenisch, Markus Michels

Public-Key Encryption

Public-Key Encryption in a Multi-user Setting: Security Proofs and Improvements

This paper addresses the security of public-key cryptosystems in a “multi-user” setting, namely in the presence of attacks involving the encryption of related messages under different public keys, as exemplified by Håstad’s classical attacks on RSA. We prove that security in the single-user setting implies security in the multi-user setting as long as the former is interpreted in the strong sense of “indistinguishability,” thereby pin-pointing many schemes guaranteed to be secure against Håstad-type attacks. We then highlight the importance, in practice, of considering and improving the concrete security of the general reduction, and present such improvements for two Diffie-Hellman based schemes, namely El Gamal and Cramer-Shoup.

Mihir Bellare, Alexandra Boldyreva, Silvio Micali
Using Hash Functions as a Hedge against Chosen Ciphertext Attack

The cryptosystem recently proposed by Cramer and Shoup [CS98] is a practical public key cryptosystem that is secure against adaptive chosen ciphertext attack provided the Decisional Diffie-Hellman assumption is true. Although this is a reasonable intractability assumption, it would be preferable to base a security proof on a weaker assumption, such as the Computational Diffie-Hellman assumption. Indeed, this cryptosystem in its most basic form is in fact insecure if the Decisional Diffie-Hellman assumption is false. In this paper we present a practical hybrid scheme that is just as efficient as the scheme of Cramer and Shoup (indeed, slightly more efficient than the one originally presented by Cramer and Shoup); we prove that the scheme is secure if the Decisional Diffie-Hellman assumption is true, and we give strong evidence that the scheme is secure if the weaker Computational Diffie-Hellman assumption is true by providing a proof of security in the random oracle model.

Victor Shoup

Quantum Cryptography

Security Aspects of Practical Quantum Cryptography

The use of quantum bits (qubits) in cryptography holds the promise of secure cryptographic quantum key distribution schemes. Unfortunately, the implemented schemes are often operated in a regime which excludes unconditional security. We provide a thorough investigation of security issues for practical quantum key distribution, taking into account channel losses, a realistic detection process, and modifications of the “qubits” sent from the sender to the receiver. We first show that even quantum key distribution with perfect qubits might not be achievable over long distances when fixed channel losses and fixed dark count errors are taken into account. Then we show that existing experimental schemes (based on weak pulses) currently do not offer unconditional security for the reported distances and signal strength. Finally we show that parametric downconversion offers enhanced performance compared to its weak coherent pulse counterpart.

Gilles Brassard, Norbert Lütkenhaus, Tal Mor, Barry C. Sanders
Perfectly Concealing Quantum Bit Commitment from any Quantum One-Way Permutation

We show that although unconditionally secure quantum bit commitment is impossible, it can be based upon any family of quantum one-way permutations. The resulting scheme is unconditionally concealing and computationally binding. Unlike the classical reduction of Naor, Ostrovsky, Venkatesan and Yung, our protocol is non-interactive and has communication complexity O(n) qubits for security parameter n.

Paul Dumais, Dominic Mayers, Louis Salvail

Multi-party Computation and Information Theory

General Secure Multi-party Computation from any Linear Secret-Sharing Scheme

We show that verifiable secret sharing (VSS) and secure multi-party computation (MPC) among a set of n players can efficiently be based on any linear secret sharing scheme (LSSS) for the players, provided that the access structure of the LSSS allows MPC or VSS at all. Because an LSSS neither guarantees reconstructability when some shares are false, nor verifiability of a shared value, nor allows for the multiplication of shared values, an LSSS is an apparently much weaker primitive than VSS or MPC. Our approach to secure MPC is generic and applies to both the information-theoretic and the cryptographic setting. The construction is based on 1) a formalization of the special multiplicative property of an LSSS that is needed to perform a multiplication on shared values, 2) an efficient generic construction to obtain from any LSSS a multiplicative LSSS for the same access structure, and 3) an efficient generic construction to build verifiability into every LSSS (always assuming that the adversary structure allows for MPC or VSS at all). The protocols are efficient. In contrast to all previous information-theoretically secure protocols, the field size is not restricted (e.g., to be greater than n). Moreover, we exhibit adversary structures for which our protocols are polynomial in n while all previous approaches to MPC for non-threshold adversaries provably have super-polynomial complexity.
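
For a concrete LSSS to keep in mind, Shamir's threshold scheme is the standard example: each share is a linear function of the secret and the sharing randomness. The sketch below gives plain sharing and reconstruction over a prime field; it contains none of the multiplicativity or verifiability machinery that the paper adds generically on top of an arbitrary LSSS.

```python
# Shamir secret sharing over GF(p): the canonical linear secret sharing
# scheme (LSSS). Sharing and reconstruction only; field size and the
# example secret are illustrative.
import random

p = 2**61 - 1

def share(secret, t, n):
    """t-out-of-n sharing: random polynomial of degree t-1 with f(0) = secret."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = (num * -xj) % p
                den = (den * (xi - xj)) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
assert reconstruct(shares[1:4]) == 123456789
```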

Ronald Cramer, Ivan Damgård, Ueli Maurer
Minimal-Latency Secure Function Evaluation

Sander, Young and Yung recently exhibited a protocol for computing on encrypted inputs, for functions computable in NC^1. In their variant of secure function evaluation, Bob (the “CryptoComputer”) accepts homomorphically-encrypted inputs (x) from client Alice, and then returns a string from which Alice can extract f(x, y) (where y is Bob’s input, or e.g. the function f itself). Alice must not learn more about y than what f(x, y) reveals by itself. We extend their result to encompass NLOGSPACE (nondeterministic log-space functions). In the domain of multiparty computations, constant-round protocols have been known for years [BB89,FKN95]. This paper introduces novel parallelization techniques that, coupled with the [SYY99] methods, reduce the constant to 1 with preprocessing. This resolves the conjecture that NLOGSPACE subcomputations (including log-slices of circuit computation) can be evaluated with latency 1 (as opposed to just O(1)).

Donald Beaver
Information-Theoretic Key Agreement: From Weak to Strong Secrecy for Free

One of the basic problems in cryptography is the generation of a common secret key between two parties, for instance in order to communicate privately. In this paper we consider information-theoretically secure key agreement. Wyner and subsequently Csiszár and Körner described and analyzed settings for secret-key agreement based on noisy communication channels. Maurer as well as Ahlswede and Csiszár generalized these models to a scenario based on correlated randomness and public discussion. In all these settings, the secrecy capacity and the secret-key rate, respectively, have been defined as the maximal achievable rates at which a highly secret key can be generated by the legitimate partners. However, the privacy requirements were too weak in all these definitions: they required only that the ratio between the adversary’s information and the length of the key be negligible, and hence tolerated the adversary obtaining a possibly substantial amount of information about the resulting key in absolute terms. We give natural stronger definitions of secrecy capacity and secret-key rate, requiring that the adversary obtains virtually no information about the entire key. We show not only that secret-key agreement satisfying the strong secrecy condition is possible, but also that the achievable key-generation rates are equal to the previous weak notions of secrecy capacity and secret-key rate. Hence the unsatisfactory old definitions can be completely replaced by the new ones. We prove these results by a generic reduction of strong to weak key agreement. The reduction makes use of extractors, which make it possible to keep the required amount of communication negligible compared to the length of the resulting key.

Ueli Maurer, Stefan Wolf

Cryptanalysis II: Public-Key Encryption

New Attacks on PKCS#1 v1.5 Encryption

This paper introduces two new attacks on PKCS#1 v1.5, an RSA-based encryption standard proposed by RSA Laboratories. As opposed to Bleichenbacher’s attack, our attacks are chosen-plaintext only, i.e. they do not make use of a decryption oracle. The first attack applies to small public exponents and shows that a plaintext ending with sufficiently many zeroes can be recovered efficiently when two or more ciphertexts corresponding to the same plaintext are available. We believe the technique we employ to be of independent interest, as it extends Coppersmith’s low-exponent attack to certain length parameters. Our second attack is applicable to arbitrary public exponents, provided that most message bits are zeroes. It seems to constitute the first chosen-plaintext attack on an RSA-based encryption standard that yields practical results for any public exponent.

Jean-Sébastien Coron, Marc Joye, David Naccache, Pascal Paillier
A NICE Cryptanalysis

We present a chosen-ciphertext attack against both NICE cryptosystems. These two cryptosystems are based on computations in the class group of non-maximal imaginary quadratic orders. More precisely, the systems make use of the canonical surjection between the class group of the quadratic order of discriminant $$ - pq^2 $$ and the class group of the quadratic order of discriminant $$ - p $$. In this paper, we examine the properties of this canonical surjection and use them to build a chosen-ciphertext attack that recovers the secret key (p and q) from two ciphertext/cleartext pairs.

Éliane Jaulmes, Antoine Joux
Efficient Algorithms for Solving Overdefined Systems of Multivariate Polynomial Equations

The security of many recently proposed cryptosystems is based on the difficulty of solving large systems of quadratic multivariate polynomial equations. This problem is NP-hard over any field. When the number of equations m is the same as the number of unknowns n the best known algorithms are exhaustive search for small fields, and a Gröbner basis algorithm for large fields. Gröbner basis algorithms have large exponential complexity and cannot solve in practice systems with n ≥ 15. Kipnis and Shamir [9] have recently introduced a new algorithm called “relinearization”. The exact complexity of this algorithm is not known, but for sufficiently overdefined systems it was expected to run in polynomial time. In this paper we analyze the theoretical and practical aspects of relinearization. We ran a large number of experiments for various values of n and m, and analyzed which systems of equations were actually solvable. We show that many of the equations generated by relinearization are linearly dependent, and thus relinearization is less efficient than one could expect. We then develop an improved algorithm called XL which is both simpler and more powerful than relinearization. For all 0 < ε ≤ 1/2 and m ≥ εn^2, XL and relinearization are expected to run in polynomial time of approximately $$ n^{\mathcal{O}(1/\sqrt \varepsilon )} $$. Moreover, we provide strong evidence that relinearization and XL can solve randomly generated systems of polynomial equations in subexponential time when m exceeds n by a number that increases slowly with n.
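
Roughly, XL with degree parameter D multiplies each of the m equations by every monomial of total degree at most D − 2, treats every monomial of degree at most D as a fresh variable, and then runs Gaussian elimination on the resulting linear system. The sketch below shows only the final linearization step (the degenerate D = 2 case) on a tiny hand-made GF(2) system; the system and its size are mine, chosen for illustration, not taken from the paper.

```python
# Plain linearization (the degenerate D = 2 case of XL) on a tiny
# hand-made GF(2) system; over GF(2), x^2 = x.
# Unknown monomials, in order: x1, x2, x1*x2
#   x1 + x2 + x1*x2 = 1
#   x1      + x1*x2 = 1
#        x2 + x1*x2 = 0
rows = [
    [1, 1, 1, 1],   # coefficients of x1, x2, x1*x2 | constant
    [1, 0, 1, 1],
    [0, 1, 1, 0],
]

# Gaussian elimination over GF(2).
n = 3
for col in range(n):
    piv = next(r for r in range(col, n) if rows[r][col])
    rows[col], rows[piv] = rows[piv], rows[col]
    for r in range(n):
        if r != col and rows[r][col]:
            rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]

solution = {name: rows[i][3] for i, name in enumerate(["x1", "x2", "x1*x2"])}
print(solution)   # {'x1': 1, 'x2': 0, 'x1*x2': 0}, consistent: x1*x2 == x1 AND x2
```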

Nicolas Courtois, Alexander Klimov, Jacques Patarin, Adi Shamir
Cryptanalysis of Patarin’s 2-Round Public Key System with S Boxes (2R)

In a series of papers Patarin proposes new efficient public key systems. A very interesting proposal, called 2-Round Public Key System with S Boxes, or 2R, is based on the difficulty of decomposing the structure of several rounds of unknown linear transformations and S boxes. This difficulty is due to the difficulty of decomposing compositions of multivariate binary functions. In this paper we present a novel attack which breaks the 64-bit block variant with complexity about 2^30 steps, and the more secure 128-bit block variant with complexity about 2^60 steps. It is interesting to note that this cryptanalysis uses only the ciphertexts of selected plaintexts, and does not analyze the details of the supplied encryption code.

Eli Biham

Invited Talk

Colossus and the German Lorenz Cipher — Code Breaking in WW II

The Lorenz cipher system was used by the German Army High Command in World War II. It used a binary additive method for enciphering teleprinter signals.

Anthony E Sale

Zero-Knowledge

Efficient Concurrent Zero-Knowledge in the Auxiliary String Model

We show that if any one-way function exists, then 3-round concurrent zero-knowledge arguments for all NP problems can be built in a model where a short auxiliary string with a prescribed distribution is available to the players. We also show that a wide range of known efficient proofs of knowledge using specialized assumptions can be modified to work in this model with no essential loss of efficiency. We argue that the assumptions of the model will be satisfied in many practical scenarios where public key cryptography is used, in particular our construction works given any secure public key infrastructure. Finally, we point out that in a model with preprocessing (and no auxiliary string) proposed earlier, concurrent zero-knowledge for NP can be based on any one-way function.

Ivan Damgård
Efficient Proofs that a Committed Number Lies in an Interval

Alice wants to prove that she is young enough to borrow money from her bank, without revealing her age. She therefore needs a tool for proving that a committed number lies in a specific interval. Up to now, such tools were either inefficient (too many bits to compute and to transmit) or inexact (i.e. they only proved membership in a much larger interval). This paper presents a new proof, which is both efficient and exact. Here, “efficient” means that there are fewer than 20 exponentiations to perform and less than 2 Kbytes to transmit. The potential areas of application of this proof are numerous (electronic cash, group signatures, publicly verifiable secret encryption, etc.).

Fabrice Boudot

Symmetric Cryptography

A Composition Theorem for Universal One-Way Hash Functions

In this paper we present a new scheme for constructing universal one-way hash functions that hash arbitrarily long messages out of universal one-way hash functions that hash fixed-length messages. The new construction is extremely simple and is also very efficient, yielding shorter keys than previously proposed composition constructions.

Victor Shoup
Exposure-Resilient Functions and All-or-Nothing Transforms

We study the problem of partial key exposure. Standard cryptographic definitions and constructions do not guarantee any security even if a tiny fraction of the secret key is compromised. We show how to build cryptographic primitives that remain secure even when an adversary is able to learn almost all of the secret key. The key to our approach is a new primitive of independent interest, which we call an Exposure-Resilient Function (ERF): a deterministic function whose output appears random (in a perfect, statistical or computational sense) even if almost all the bits of the input are known. ERFs by themselves efficiently solve the partial key exposure problem in the setting where the secret is simply a random value, as in private-key cryptography. They can also be viewed as very secure pseudorandom generators, and have many other applications. To solve the general partial key exposure problem, we use the (generalized) notion of an All-Or-Nothing Transform (AONT), an invertible (randomized) transformation T which, nevertheless, reveals “no information” about x even if almost all the bits of T(x) are known. By applying an AONT to the secret key of any cryptographic system, we obtain security against partial key exposure. To date, the only known security analyses of AONT candidates were made in the random oracle model. We show how to construct ERFs and AONTs with nearly optimal parameters. Our computational constructions are based on any one-way function. We also provide several applications and additional properties concerning these notions.

Ran Canetti, Yevgeniy Dodis, Shai Halevi, Eyal Kushilevitz, Amit Sahai
The Sum of PRPs Is a Secure PRF

Given d independent pseudorandom permutations (PRPs) π_1, ..., π_d over {0, 1}^n, it appears natural to define a pseudorandom function (PRF) by adding (or XORing) the permutation results: sum_d(x) = π_1(x) ⊕ ··· ⊕ π_d(x). This paper investigates the security of sum_d and also considers a variant that only uses one single PRP over {0, 1}^n.
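
A minimal sketch of the construction, with random toy permutations of an 8-bit domain standing in for keyed PRPs (so it shows the structure of sum_d, not its security):

```python
# sum_d(x) = pi_1(x) XOR ... XOR pi_d(x): combine d independent
# permutations of {0,1}^n into one function. Toy 8-bit permutations
# stand in for keyed PRPs such as a block cipher; illustration only.
import random

n = 8
domain = list(range(2**n))

def random_permutation():
    perm = domain[:]
    random.shuffle(perm)
    return perm                      # perm[x] plays the role of pi_i(x)

d = 2
pis = [random_permutation() for _ in range(d)]

def sum_d(x):
    out = 0
    for pi in pis:
        out ^= pi[x]
    return out

# sum_d is generally not a permutation (collisions can occur), which is
# exactly why it can behave like a random function rather than a PRP.
values = [sum_d(x) for x in domain]
print("distinct outputs:", len(set(values)), "of", len(domain))
```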

Stefan Lucks

Boolean Functions and Hardware

Construction of Nonlinear Boolean Functions with Important Cryptographic Properties

This paper addresses the problem of obtaining new construction methods for cryptographically significant Boolean functions. We show that for each positive integer m, there are infinitely many integers n (both odd and even), such that it is possible to construct n-variable, m-resilient functions having nonlinearity greater than $$ 2^{n - 1} - 2^{\left\lfloor {\tfrac{n} {2}} \right\rfloor } $$. We also obtain better results than all published works on the construction of n-variable, m-resilient functions, including cases where the constructed functions have the maximum possible algebraic degree n − m − 1. Next we modify the Patterson-Wiedemann functions to construct balanced Boolean functions on n variables having nonlinearity strictly greater than $$ 2^{n - 1} - 2^{\tfrac{{n - 1}} {2}} $$ for all odd n ≥ 15. In addition, we consider the strict avalanche criterion and propagation characteristics, which are important for the design of S-boxes in block ciphers, and construct such functions with very high nonlinearity and algebraic degree.
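
For concreteness, the nonlinearity discussed here can be computed from the Walsh spectrum as $$ 2^{n - 1} - \tfrac{1}{2}\max_a |W_f(a)| $$. The sketch below performs that computation on an arbitrary 4-variable bent function, not on one of the paper's constructions.

```python
# Nonlinearity of a Boolean function f: {0,1}^n -> {0,1} via its Walsh
# spectrum W_f(a) = sum_x (-1)^(f(x) XOR <a,x>). Example f is arbitrary.

def dot(a, x):
    return bin(a & x).count("1") & 1

def walsh(f_table, n):
    return [sum((-1) ** (f_table[x] ^ dot(a, x)) for x in range(2**n))
            for a in range(2**n)]

def nonlinearity(f_table, n):
    return 2**(n - 1) - max(abs(w) for w in walsh(f_table, n)) // 2

n = 4
# f(x1,x2,x3,x4) = x1*x2 XOR x3*x4: a bent function on 4 variables, so
# its nonlinearity reaches the maximum 2^(n-1) - 2^(n/2 - 1) = 6.
f_table = [((x >> 3) & (x >> 2) ^ (x >> 1) & x) & 1 for x in range(2**n)]
print(nonlinearity(f_table, n))   # -> 6
```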

Palash Sarkar, Subhamoy Maitra
Propagation Characteristics and Correlation-Immunity of Highly Nonlinear Boolean Functions

We investigate the link between the nonlinearity of a Boolean function and its propagation characteristics. We prove that highly nonlinear functions usually have good propagation properties regarding different criteria. Conversely, any Boolean function satisfying the propagation criterion with respect to a linear subspace of codimension 1 or 2 has a high nonlinearity. We also point out that most highly nonlinear functions with a three-valued Walsh spectrum can be transformed into 1-resilient functions.

Anne Canteaut, Claude Carlet, Pascale Charpin, Caroline Fontaine
Cox-Rower Architecture for Fast Parallel Montgomery Multiplication

This paper proposes a fast parallel Montgomery multiplication algorithm based on Residue Number Systems (RNS). It is easy to construct a fast modular exponentiation by applying the algorithm repeatedly. To realize an efficient RNS Montgomery multiplication, the main contribution of this paper is a new RNS base extension algorithm. The Cox-Rower Architecture described in this paper is hardware suited to the RNS Montgomery multiplication. In this architecture, a base extension algorithm is executed in parallel by plural Rower units controlled by a Cox unit. Each Rower unit is a single-precision modular multiplier-and-accumulator, whereas the Cox unit is typically a 7-bit adder. Although the main body of the algorithm processes numbers in RNS form, efficient procedures to transform RNS to or from a radix representation are also provided. The exponentiation algorithm can thus be adapted to an existing standard radix interface of the RSA cryptosystem.
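
As background for readers new to RNS: a number is held as its residues modulo a base of pairwise coprime moduli, arithmetic is done componentwise, and the radix form is recovered with the Chinese Remainder Theorem. The sketch below shows only this representation and reconstruction with made-up small moduli; the paper's contribution, the parallel base-extension algorithm and the Cox-Rower hardware, is not reproduced here.

```python
# Residue Number System basics: represent x by its residues modulo a
# base of pairwise coprime moduli, multiply componentwise, and recover
# the result with the CRT. Moduli are small and illustrative only.
from math import prod

base = [13, 17, 19, 23]          # pairwise coprime moduli
M = prod(base)

def to_rns(x):
    return [x % m for m in base]

def from_rns(residues):
    x = 0
    for r, m in zip(residues, base):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x

a, b = 1234, 5678
c = [(ra * rb) % m for ra, rb, m in zip(to_rns(a), to_rns(b), base)]
assert from_rns(c) == (a * b) % M
```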

Shinichi Kawamura, Masanobu Koike, Fumihiko Sano, Atsushi Shimbo

Voting Schemes

Efficient Receipt-Free Voting Based on Homomorphic Encryption

Voting schemes that provide receipt-freeness prevent voters from proving their cast vote, and hence thwart vote-buying and coercion. We analyze the security of the multi-authority voting protocol of Benaloh and Tuinstra and demonstrate that this protocol is not receipt-free, contrary to what was claimed in the paper and believed before. Furthermore, we propose the first practicable receipt-free voting scheme. Its only physical assumption is the existence of secret one-way communication channels from the authorities to the voters, and due to the public verifiability of the tally, voters only join a single stage of the protocol, realizing the “vote-and-go” concept. The protocol combines the advantages of the receipt-free protocol of Sako and Kilian and of the very efficient protocol of Cramer, Gennaro, and Schoenmakers, with the help of designated-verifier proofs of Jakobsson, Sako, and Impagliazzo. Compared to the receipt-free protocol of Sako and Kilian for security parameter ℓ (the number of repetitions in the non-interactive cut-and-choose proofs), the protocol described in this paper achieves an improvement of the total bit complexity by a factor ℓ.
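
The homomorphic-tallying building block behind this line of voting schemes can be sketched with "exponential ElGamal": each ballot encrypts g^vote, the ciphertexts are multiplied together into an encryption of g^sum, and only the aggregate is decrypted. The toy parameters and code below are illustrative only and implement none of the receipt-freeness machinery of the paper.

```python
# Toy "exponential ElGamal" tallying: Enc(v) = (g^r, g^v * h^r) mod p.
# Componentwise products of ciphertexts encrypt the sum of the votes;
# only the total is ever decrypted. Tiny parameters, illustration only.
import random

p = 2879              # prime, p = 2*1439 + 1 with 1439 prime
g = 2                 # generates the order-1439 subgroup; enough for the toy
sk = random.randrange(2, p - 1)
h = pow(g, sk, p)

def encrypt(vote):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (pow(g, vote, p) * pow(h, r, p)) % p)

votes = [1, 0, 1, 1, 0, 1]
cts = [encrypt(v) for v in votes]

# Homomorphic aggregation: componentwise product of all ciphertexts.
A = B = 1
for a, b in cts:
    A, B = (A * a) % p, (B * b) % p

# Decrypt the aggregate to g^total, then brute-force the small exponent.
g_total = (B * pow(A, p - 1 - sk, p)) % p
total = next(t for t in range(len(votes) + 1) if pow(g, t, p) == g_total)
assert total == sum(votes)
print("tally:", total)
```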

Martin Hirt, Kazue Sako
How to Break a Practical MIX and Design a New One

A MIX net takes a list of ciphertexts (c_1, ..., c_N) and outputs a permuted list of the plaintexts (m_1, ..., m_N) without revealing the relationship between (c_1, ..., c_N) and (m_1, ..., m_N). This paper first shows that Jakobsson’s MIX net of Eurocrypt’98, which was believed to be resilient and very efficient, is broken. We next propose an efficient t-resilient MIX net with O(t^2) servers in which the cost of each MIX server is O(N). Two new concepts are introduced, existential-honesty and limited-open-verification. They will be useful for distributed computation in general.

Yvo Desmedt, Kaoru Kurosawa

Cryptanalysis III: Stream Ciphers and Block Ciphers

Improved Fast Correlation Attacks Using Parity-Check Equations of Weight 4 and 5

This paper describes new techniques for fast correlation attacks, based on Gallager’s iterative decoding algorithm using parity-check equations of weight greater than 3. These attacks can be applied to any keystream generator based on LFSRs and do not require the involved feedback polynomial to have a low weight. We give a theoretical analysis of all fast correlation attacks, which shows that our algorithm with parity-check equations of weight 4 or 5 is usually much more efficient than correlation attacks based on convolutional codes or on turbo codes. Simulation results confirm the validity of this comparison. In this context, we also point out the major role played by the nonlinearity of the Boolean function used in a combination generator.
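
The parity-check equations in question come from the linear recurrence of the target LFSR (and from low-weight multiples of its feedback polynomial). The toy sketch below generates an LFSR keystream and verifies the weight-3 check given directly by an illustrative feedback polynomial; the attack itself works with weight-4 and weight-5 checks on a noisy version of such a stream.

```python
# Toy LFSR with feedback polynomial x^5 + x^2 + 1 (illustrative choice).
# Every output bit satisfies s[t+5] = s[t+2] XOR s[t], i.e. the weight-3
# parity check s[t] XOR s[t+2] XOR s[t+5] = 0 -- the kind of linear
# relation fast correlation attacks exploit on a noisy keystream.

def lfsr(state, taps, nbits):
    """Fibonacci LFSR; state is a list of bits, taps are the exponents of
    the feedback polynomial (excluding the leading term)."""
    out = []
    for _ in range(nbits):
        out.append(state[0])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]
    return out

s = lfsr([1, 0, 1, 1, 0], taps=[0, 2], nbits=200)
assert all(s[t] ^ s[t + 2] ^ s[t + 5] == 0 for t in range(len(s) - 5))
print("parity checks hold on", len(s) - 5, "positions")
```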

Anne Canteaut, Michaël Trabbia
Advanced Slide Attacks

Recently a powerful cryptanalytic tool—the slide attack—was introduced [3]. Slide attacks are very successful in breaking iterative ciphers with a high degree of self-similarity and even more surprisingly are independent of the number of rounds of a cipher. In this paper we extend the applicability of slide attacks to a larger class of ciphers. We find very efficient known- and chosen-text attacks on generic Feistel ciphers with a periodic key-schedule with four independent subkeys, and consequently we are able to break a DES variant proposed in [2] using just 128 chosen texts and negligible time for the analysis (for one out of every 2^16 keys). We also describe known-plaintext attacks on DESX and Even-Mansour schemes with the same complexity as the best previously known chosen-plaintext attacks on these ciphers. Finally, we provide new insight into the design of GOST by successfully analyzing a 20-round variant (GOST⊕) and demonstrating weak key classes for all 32 rounds.
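
The self-similarity that slide attacks exploit is easy to see in code: if the cipher is the same keyed round function F iterated r times, then a slid pair P' = F(P) satisfies E(P') = F(E(P)) for every r. The round function and parameters below are a toy of my own, not one of the ciphers analysed in the paper.

```python
# The self-similarity a slide attack exploits: for E = F^r (same round
# function and subkey every round), P' = F(P) implies E(P') = F(E(P)),
# independent of the number of rounds r. Toy 16-bit round function.
MASK = 0xFFFF

def rotl(x, r):
    return ((x << r) | (x >> (16 - r))) & MASK

def F(x, k):
    """One toy round: add the subkey, rotate, XOR the subkey."""
    return rotl((x + k) & MASK, 5) ^ k

def encrypt(p, k, rounds=12):
    for _ in range(rounds):
        p = F(p, k)
    return p

k = 0xBEEF
P = 0x1234
P_slid = F(P, k)                       # a slid pair by construction
assert encrypt(P_slid, k) == F(encrypt(P, k), k)
print("slid relation holds: E(F(P)) == F(E(P))")
```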

Alex Biryukov, David Wagner
Backmatter
Metadata
Title
Advances in Cryptology — EUROCRYPT 2000
Edited by
Bart Preneel
Copyright Year
2000
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-45539-4
Print ISBN
978-3-540-67517-4
DOI
https://doi.org/10.1007/3-540-45539-6