
1999 | Book

Advances in Cryptology — EUROCRYPT ’99

International Conference on the Theory and Application of Cryptographic Techniques Prague, Czech Republic, May 2–6, 1999 Proceedings

Edited by: Jacques Stern

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Cryptanalysis I

Cryptanalysis of RSA with Private Key d Less than $$N^{0.292}$$

We show that if the private exponent d used in the RSA public-key cryptosystem is less than $$N^{0.292}$$ then the system is insecure. This is the first improvement over an old result of Wiener showing that when $$d < N^{0.25}$$ the RSA system is insecure. We hope our approach can be used to eventually improve the bound to $$d < N^{0.5}$$.

Dan Boneh, Glenn Durfee
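
The Wiener bound mentioned in the abstract can be made concrete. The sketch below is not the Boneh-Durfee lattice attack itself (which relies on Coppersmith-style techniques); it is a minimal Python implementation of Wiener's classical continued-fraction attack for the weaker bound $$d < N^{0.25}$$, with textbook-sized toy parameters chosen for illustration:

```python
from math import isqrt

def convergents(e, n):
    """Yield the convergents k/d of the continued fraction of e/n."""
    a, b = e, n
    quotients = []
    while b:
        quotients.append(a // b)
        a, b = b, a % b
    h0, h1 = 1, quotients[0]
    k0, k1 = 0, 1
    yield h1, k1
    for q in quotients[1:]:
        h0, h1 = h1, q * h1 + h0
        k0, k1 = k1, q * k1 + k0
        yield h1, k1

def wiener_attack(e, n):
    """Recover the private exponent d when d < N^0.25."""
    for k, d in convergents(e, n):
        if k == 0:
            continue
        if (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k          # candidate for phi(N)
        s = n - phi + 1                 # equals p + q if phi is correct
        disc = s * s - 4 * n
        if disc >= 0:
            r = isqrt(disc)
            if r * r == disc and (s + r) % 2 == 0:
                return d                # p, q = (s ± r) / 2 are integral
    return None
```

With the toy key N = 90581 = 239 · 379 and e = 17993, the attack recovers d = 5.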
Cryptanalysis of Skipjack Reduced to 31 Rounds Using Impossible Differentials

In this paper we present a new cryptanalytic technique, based on impossible differentials, and use it to show that Skipjack reduced from 32 to 31 rounds can be broken by an attack which is faster than exhaustive search.

Eli Biham, Alex Biryukov, Adi Shamir

Hash Functions

Software Performance of Universal Hash Functions

This paper compares the parameter sizes and software performance of several recent constructions for universal hash functions: bucket hashing, polynomial hashing, Toeplitz hashing, division hashing, evaluation hashing, and MMH hashing. An objective comparison between these widely varying approaches is achieved by defining constructions that offer a comparable security level. It is also demonstrated how the security of these constructions compares favorably to that of existing MAC algorithms, the security of which is less well understood.

Wim Nevelsteen, Bart Preneel

Foundations I

Lower Bounds for Oblivious Transfer Reductions

We prove the first general and non-trivial lower bound for the number of times a 1-out-of-n Oblivious Transfer of strings of length l should be invoked so as to obtain, by an information-theoretically secure reduction, a 1-out-of-N Oblivious Transfer of strings of length L. Our bound is tight in many significant cases. We also prove the first non-trivial lower bound for the number of random bits needed to implement such a reduction whenever the receiver sends no messages to the sender. This bound is also tight in many significant cases.

Yevgeniy Dodis, Silvio Micali
On the (Im)possibility of Basing Oblivious Transfer and Bit Commitment on Weakened Security Assumptions

We consider the problem of basing Oblivious Transfer (OT) and Bit Commitment (BC), with information-theoretic security, on seemingly weaker primitives. We introduce a general model for describing such primitives, called Weak Generic Transfer (WGT). This model includes as important special cases Weak Oblivious Transfer (WOT), where both the sender and receiver may learn too much about the other party’s input, and a new, more realistic model of noisy channels, called unfair noisy channels. An unfair noisy channel has a known range of possible noise levels; protocols must work for any level within this range against adversaries who know the actual noise level. We give a precise characterization of when one can base OT on WOT. When the deviation of the WOT from the ideal is above a certain threshold, we show that no information-theoretic reductions from OT (even against passive adversaries) and BC exist; when the deviation is below this threshold, we give a reduction from OT (and hence BC) that is information-theoretically secure against active adversaries. For unfair noisy channels we show a similar threshold phenomenon for bit commitment. If the upper bound on the noise is above a threshold (given as a function of the lower bound), then no information-theoretic reduction from OT (even against passive adversaries) or BC exists; when it is below this threshold, we give a reduction from BC. As a partial result, we give a reduction from OT to UNC for smaller noise intervals.

Ivan Damgård, Joe Kilian, Louis Salvail
Conditional Oblivious Transfer and Timed-Release Encryption

We consider the problem of sending messages “into the future.” Previous constructions for this task were either based on heuristic assumptions or did not provide anonymity to the sender of the message. In the public-key setting, we present an efficient and secure timed-release encryption scheme using a “time server” which inputs the current time into the system. The server has to interact only with the receiver and never learns the sender’s identity. The scheme’s computational and communication costs per request are only logarithmic in the time parameter. The construction of our scheme is based on a novel cryptographic primitive: a variant of oblivious transfer which we call conditional oblivious transfer. We define this primitive (which may be of independent interest) and show an efficient construction for an instance of it based on the quadratic residuosity assumption.

Giovanni Di Crescenzo, Rafail Ostrovsky, Sivaramakrishnan Rajagopalan

Public Key

An Efficient Threshold Public Key Cryptosystem Secure Against Adaptive Chosen Ciphertext Attack (Extended Abstract)

This paper proposes a simple threshold Public-Key Cryptosystem (PKC) which is secure against adaptive chosen ciphertext attack, under the Decisional Diffie-Hellman (DDH) intractability assumption. Previously, it was shown how to design non-interactive threshold PKCs secure under chosen ciphertext attack, in the random-oracle model and under the DDH intractability assumption [25]. The random oracle was used both in the proof of security and to eliminate interaction. General completeness results for multi-party computations [6,13] make it possible in principle to convert any single-server PKC secure against CCA (e.g., [19,17]) into a threshold one, but the conversions are inefficient and require much interaction among the servers for each ciphertext decrypted. The recent work by Cramer and Shoup [17] on single-server PKCs secure against adaptive CCA is the starting point for the new proposal.

Ran Canetti, Shafi Goldwasser
Proving in Zero-Knowledge that a Number is the Product of Two Safe Primes

We present the first efficient statistical zero-knowledge protocols to prove statements such as:
- A committed number is a prime.
- A committed (or revealed) number is the product of two safe primes, i.e., primes p and q such that (p - 1)/2 and (q - 1)/2 are prime.
- A given integer has large multiplicative order modulo a composite number that consists of two safe prime factors.
The main building blocks of our protocols are statistical zero-knowledge proofs of knowledge that are of independent interest. We show how to prove the correct computation of a modular addition, a modular multiplication, and a modular exponentiation, where all values including the modulus are committed to but not publicly known. Apart from the validity of the equations, no other information about the modulus (e.g., a generator whose order equals the modulus) or any other operand is exposed. Our techniques can be generalized to prove that any multivariate modular polynomial equation is satisfied, where only commitments to the variables of the polynomial and to the modulus need to be known. This improves on previous results, where the modulus is publicly known. We show how these building blocks allow us to prove statements such as those listed earlier.

Jan Camenisch, Markus Michels
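
For reference, the safe-prime property itself is easy to check in the clear; the paper's contribution is proving such statements in zero-knowledge about committed values. A minimal Python sketch of the plain check, using trial division, which is adequate only for toy sizes:

```python
def is_prime(n):
    """Trial division; fine for small illustrative numbers only."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def is_safe_prime(p):
    """p is a safe prime if both p and (p - 1) / 2 are prime."""
    return p > 4 and is_prime(p) and is_prime((p - 1) // 2)
```

For example, 23 is a safe prime (11 is prime) while 13 is not ((13 - 1)/2 = 6 is composite).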
Secure Hash-and-Sign Signatures Without the Random Oracle

We present a new signature scheme which is existentially unforgeable under chosen message attacks, assuming some variant of the RSA conjecture. This scheme is not based on “signature trees”; instead it uses the so-called “hash-and-sign” paradigm. It is unique in that the assumptions made on the cryptographic hash function in use are well defined and reasonable (although non-standard). In particular, we do not model this function as a random oracle. We construct our proof of security in steps. First we describe and prove a construction which operates in the random oracle model. Then we show that the random oracle in this construction can be replaced by a hash function which satisfies some strong (but well defined!) computational assumptions. Finally, we demonstrate that these assumptions are reasonable, by proving that a function satisfying them exists under standard intractability assumptions.

Rosario Gennaro, Shai Halevi, Tal Rabin

Watermarking and Fingerprinting

A Note on the Limits of Collusion-Resistant Watermarks

In one proposed use of digital watermarks, the owner of a document D sells slightly different documents, D1, D2, ..., to each buyer; if a buyer posts his/her document Di to the web, the owner can identify the source of the leak. More general attacks are however possible in which k buyers create some composite document D*; the goal of the owner is to identify at least one of the conspirators. We show, for a reasonable model of digital watermarks, fundamental limits on their efficacy against collusive attacks. In particular, if the effective document length is n, then at most $$O(\sqrt{n/\ln n})$$ adversaries can defeat any watermarking scheme. Our attack is, in the theoretical model, oblivious to the watermarking scheme being used; in practice, it uses very little information about the watermarking scheme. Thus, using a proprietary system seems to give only a very weak defense.

Funda Ergun, Joe Kilian, Ravi Kumar
Coin-Based Anonymous Fingerprinting

Fingerprinting schemes are technical means to discourage people from illegally redistributing the digital data they have legally purchased. These schemes enable the original merchant to identify the original buyer of the digital data. In so-called asymmetric fingerprinting schemes the fingerprinted data item is only known to the buyer after a sale, and if the merchant finds an illegally redistributed copy, he obtains a proof convincing a third party to whom this copy belonged. All these fingerprinting schemes require the buyers to identify themselves just for the purpose of fingerprinting and thus offer the buyers no privacy. Hence anonymous asymmetric fingerprinting schemes were introduced, which preserve the anonymity of the buyers as long as they do not redistribute the data item. In this paper a new anonymous fingerprinting scheme based on the principles of digital coins is introduced. The construction replaces the general zero-knowledge techniques of the known certificate-based construction with explicit protocols, thus bringing anonymous fingerprinting far nearer to practicality.

Birgit Pfitzmann, Ahmad-Reza Sadeghi

Elliptic Curve

On the Performance of Hyperelliptic Cryptosystems

In this paper we discuss various aspects of cryptosystems based on hyperelliptic curves. In particular we cover the implementation of the group law on such curves and how to generate suitable curves for use in cryptography. This paper presents a practical comparison between the performance of elliptic curve based digital signature schemes and schemes based on hyperelliptic curves. We conclude that, at present, hyperelliptic curves offer no performance advantage over elliptic curves.

Nigel P. Smart
Fast Elliptic Curve Algorithm Combining Frobenius Map and Table Reference to Adapt to Higher Characteristic

A new elliptic curve scalar multiplication algorithm is proposed. The algorithm offers about twice the throughput of some conventional OEF-based algorithms because it combines the Frobenius map with a table reference method based on base-φ expansion. Furthermore, since this algorithm suits conventional computational units such as 16, 32 and 64 bits, its base field $$F_{p^m}$$ is expected to enhance elliptic curve operation efficiency more than $$F_q$$ (q a prime) or $$F_{2^n}$$.

Tetsutaro Kobayashi, Hikaru Morita, Kunio Kobayashi, Fumitaka Hoshino
Comparing the MOV and FR Reductions in Elliptic Curve Cryptography

This paper addresses the discrete logarithm problem in elliptic curve cryptography. In particular, we generalize the Menezes, Okamoto, and Vanstone (MOV) reduction so that it can be applied to some non-supersingular elliptic curves (ECs); develop Frey and Rück (FR)’s idea to describe the FR reduction in detail and to implement it for actual elliptic curves over finite fields of practical size; and, based on these, compare the (extended) MOV and FR reductions from an algorithmic point of view. (This paper has primarily an expository role.)

Ryuichi Harasawa, Junji Shikata, Joe Suzuki, Hideki Imai

New Schemes

Unbalanced Oil and Vinegar Signature Schemes

In [16], J. Patarin designed a new scheme, called “Oil and Vinegar”, for computing asymmetric signatures. It is very simple, can be computed very fast (with both the secret and the public key) and requires very little RAM in smartcard implementations. The idea consists in hiding quadratic equations in n unknowns called “oil” and v = n unknowns called “vinegar” over a finite field K, with linear secret functions. This original scheme was broken in [10] by A. Kipnis and A. Shamir. In this paper, we study some very simple variations of the original scheme where v > n (instead of v = n). These schemes are called “Unbalanced Oil and Vinegar” (UOV), since we have more “vinegar” unknowns than “oil” unknowns. We show that, when v ≃ n, the attack of [10] can be extended, but when v ≥ 2n for example, the security of the scheme is still an open problem. Moreover, when $$v \simeq \tfrac{n^2}{2}$$, the security of the scheme is exactly equivalent (if we accept a very natural but unproven property) to the problem of solving a random set of n quadratic equations in $$\tfrac{n^2}{2}$$ unknowns (with no trapdoor). However, we show that (in characteristic 2) when $$v \ge n^2$$, finding a solution is generally easy. Then we will see that it is very easy to combine the Oil and Vinegar idea with the HFE schemes of [14]. The resulting scheme, called HFEV, at present looks very interesting from both a practical and a theoretical point of view. The length of a UOV signature can be as short as 192 bits, and for HFEV it can be as short as 80 bits.

Aviad Kipnis, Jacques Patarin, Louis Goubin
Public-Key Cryptosystems Based on Composite Degree Residuosity Classes

This paper investigates a novel computational problem, namely the Composite Residuosity Class Problem, and its applications to public-key cryptography. We propose a new trapdoor mechanism and derive from this technique three encryption schemes: a trapdoor permutation and two homomorphic probabilistic encryption schemes computationally comparable to RSA. Our cryptosystems, based on usual modular arithmetics, are provably secure under appropriate assumptions in the standard model.

Pascal Paillier
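
The homomorphic probabilistic scheme proposed in this paper became widely known as the Paillier cryptosystem. A toy Python sketch of the basic scheme follows; the hard-coded primes are illustrative only, and a real deployment needs large random primes:

```python
import math
import random

def paillier_keygen(p, q):
    """Toy key generation from two given primes (illustration only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # the standard simple generator choice
    # With g = n + 1, L(g^lam mod n^2) = lam, so mu = lam^{-1} mod n
    mu = pow(lam, -1, n)
    return (n, g), (lam, mu)

def paillier_encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)     # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return pow(g, m, n * n) * pow(r, n, n * n) % (n * n)

def paillier_decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    x = pow(c, lam, n * n)
    return (x - 1) // n * mu % n   # L(x) = (x - 1) / n
```

The additive homomorphism is the scheme's hallmark: multiplying ciphertexts modulo n² decrypts to the sum of the plaintexts modulo n.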
New Public Key Cryptosystems Based on the Dependent-RSA Problems

Since the Diffie-Hellman paper, asymmetric encryption has been a very important and, furthermore, very well studied topic. However, between the efficiency of RSA and the security of some less efficient schemes, no trade-off has ever been provided. In this paper, we propose better than a trade-off: indeed, we first present a new problem, derived from the RSA assumption, the “Dependent-RSA Problem”. A careful study of its difficulty is performed and some variants are proposed, namely the “Decisional Dependent-RSA Problem”. These are then used to provide new encryption schemes which are both secure and efficient. More precisely, the main scheme is proven semantically secure in the standard model. Then, two variants are derived with improved security properties, namely against adaptive chosen-ciphertext attacks, in the random oracle model. Furthermore, all those schemes are more or less as efficient as the original RSA encryption scheme and reach semantic security.

David Pointcheval

Block Ciphers

Resistance Against General Iterated Attacks

In this paper we study the resistance of a block cipher against a class of general attacks which we call “iterated attacks”. This class includes some elementary versions of differential and linear cryptanalysis. We prove that we can upper bound the complexity of the attack by using decorrelation techniques. Our main theorem enables us to prove the security against these attacks (in our model) of the recently proposed block ciphers COCONUT98 and PEANUT98, as well as the AES candidate DFC. We point out that decorrelation of order 2d is required for proving security against iterated attacks of order d.

Serge Vaudenay
XOR and Non-XOR Differential Probabilities

Differential cryptanalysis is a well-known attack on iterated ciphers whose success is determined by the probability of predicting sequences of differences from one round of the cipher to the next. The notion of difference is typically defined with respect to the group operation(s) used to combine the subkey in the round function F. For a given round operation π of F, such as an S-box, let DP⊗(π) denote the probability of the most likely non-trivial difference for π when differences are defined with respect to ⊗. In this paper we investigate how the distribution of DP⊗(π) varies as the group operation ⊗ is varied when π is a uniformly selected permutation. We prove that DP⊗(π) is maximized with high probability when differences are defined with respect to XOR.

Philip Hawkes, Luke O’Connor
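
The quantity DP⊗(π) defined in the abstract can be computed exhaustively for small permutations. A Python sketch (toy sizes only) comparing differences defined with respect to XOR against differences defined with respect to addition modulo the table size:

```python
def max_dp(pi, diff):
    """DP(pi): over all nonzero input differences a, the probability of
    the most likely output difference, where `diff` defines the notion
    of difference with respect to some group operation."""
    n = len(pi)
    best = 0
    for a in range(1, n):
        counts = {}
        for x in range(n):
            b = diff(pi, x, a)
            counts[b] = counts.get(b, 0) + 1
        best = max(best, max(counts.values()))
    return best / n

def xor_diff(pi, x, a):
    # output difference with respect to XOR
    return pi[x] ^ pi[x ^ a]

def add_diff(pi, x, a):
    # output difference with respect to addition mod len(pi)
    n = len(pi)
    return (pi[(x + a) % n] - pi[x]) % n
```

For the identity permutation both measures are trivially 1; sampling many random S-boxes and comparing max_dp(pi, xor_diff) with max_dp(pi, add_diff) illustrates the distributional claim empirically.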
S-boxes with Controllable Nonlinearity

In this paper, we give a relationship between the nonlinearity of rational functions over $$\mathbb{F}_{2^n}$$ and the number of points of the associated hyperelliptic curve. Using this, we get a lower bound on the nonlinearity of rational-typed vector Boolean functions over $$\mathbb{F}_{2^n}$$. While previous works give a lower bound on nonlinearity only for special-typed monomials, our result gives a general bound applicable to all rational functions defined over $$\mathbb{F}_{2^n}$$. As an application of our results, we get a lower bound on the nonlinearity of n × kn S-boxes.

Jung Hee Cheon, Seongtaek Chee, Choonsik Park

Distributed Cryptography

Secure Distributed Key Generation for Discrete-Log Based Cryptosystems

Distributed key generation is a main component of threshold cryptosystems and distributed cryptographic computing in general. Solutions to the distributed generation of private keys for discrete-log based cryptosystems have been known for several years and used in a variety of protocols and in many research papers. However, these solutions fail to provide the full security required and claimed by these works. We show how an active attacker controlling a small number of parties can bias the values of the generated keys, thus violating basic correctness and secrecy requirements of a key generation protocol. In particular, our attacks point out the places where the proofs of security fail. Based on these findings we designed a distributed key generation protocol which we present here together with a rigorous proof of security. Our solution, which achieves optimal resiliency, can be used as a drop-in replacement for key generation modules as well as other components of threshold or proactive discrete-log based cryptosystems.

Rosario Gennaro, Stanisław Jarecki, Hugo Krawczyk, Tal Rabin
Efficient Multiparty Computations Secure Against an Adaptive Adversary

We consider verifiable secret sharing (VSS) and multiparty computation (MPC) in the secure-channels model, where a broadcast channel is given and a non-zero error probability is allowed. In this model Rabin and Ben-Or proposed VSS and MPC protocols secure against an adversary that can corrupt any minority of the players. In this paper, we first observe that a subprotocol of theirs, known as weak secret sharing (WSS), is not secure against an adaptive adversary, contrary to what was believed earlier. We then propose new and adaptively secure protocols for WSS, VSS and MPC that are substantially more efficient than the original ones. Our protocols generalize easily to provide security against general Q2-adversaries.

Ronald Cramer, Ivan Damgård, Stefan Dziembowski, Martin Hirt, Tal Rabin
Distributed Pseudo-random Functions and KDCs

This work describes schemes for distributing between n servers the evaluation of a function f which is an approximation to a random function, such that only authorized subsets of servers are able to compute the function. A user who wants to compute f(x) should send x to the members of an authorized subset and receive information which enables him to compute f(x). We require that such a scheme is consistent, i.e. that given an input x all authorized subsets compute the same value f(x). The solutions we present enable the operation of many servers, preventing bottlenecks and single points of failure. There are also no single entities which can compromise the security of the entire network. The solutions can be used to distribute the operation of a Key Distribution Center (KDC). They are far better than the known partitioning-into-domains or replication solutions to this problem, and are especially suited to handle users of multicast groups.

Moni Naor, Benny Pinkas, Omer Reingold

Cryptanalysis II

Improved Fast Correlation Attacks on Stream Ciphers via Convolutional Codes

This paper describes new methods for fast correlation attacks, based on the theory of convolutional codes. They can be applied to arbitrary LFSR feedback polynomials, in contrast to the previous methods, which mainly focus on feedback polynomials of low weight. The results improve significantly on the few previous results for this general case, and are in many cases comparable with corresponding results for low-weight feedback polynomials.

Thomas Johansson, Fredrik Jönsson
Cryptanalysis of an Identification Scheme Based on the Permuted Perceptron Problem

This paper describes an attack on an identification scheme based on the permuted perceptron problem (PPP) as suggested by Pointcheval. The attack finds the secret key, a vector of n binary elements, in time much faster than estimated by its designer. The basic idea in the attack is to use several applications of a simulated annealing algorithm and combine the outcomes into an improved search. It is left as an open problem to what extent the methods developed in this paper are also useful for other combinatorial problems.

Lars R. Knudsen, Willi Meier

Tools from Related Areas

An Analysis of Exponentiation Based on Formal Languages

A recoding rule for exponentiation is a method for reducing the cost of the exponentiation $$a^e$$ by reducing the number of required multiplications. If w(e) is the (Hamming) weight of e, and $$\bar e$$ the result of applying the recoding rule A to e, then the purpose is to reduce $$w_A(\bar e)$$ as compared to w(e). A well-known example of a recoding rule is to convert a binary exponent into a signed-digit representation in terms of the digits {1, $$\bar 1$$, 0} where $$\bar 1 = -1$$, by recoding runs of 1’s. In this paper we show how three recoding rules can be modelled via regular languages to obtain precise information about the resulting weight distributions. In particular we analyse the recoding rules employed by the $$2^k$$-ary, sliding window and optimal signed-digit exponentiation algorithms. We prove that the sliding window method has an expected recoded weight of approximately n/(k+1) for relevant k-bit windows and n-bit exponents, and also that the variance is small. We also prove for the optimal signed-digit method that the expected weight is approximately n/3 with a variance of 2n/27. In general the sliding window method provides the best performance, and performs fewer than 85% of the multiplications required by the other methods for a majority of exponents.

Luke O’Connor
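
The optimal signed-digit recoding with expected weight n/3 is commonly realized as the non-adjacent form (NAF), in which no two adjacent digits are nonzero. A short Python sketch computing a NAF and its recoded weight:

```python
def naf(e):
    """Non-adjacent form of e: digits in {-1, 0, 1}, least significant
    first, with no two adjacent nonzero digits."""
    digits = []
    while e > 0:
        if e & 1:
            d = 2 - (e % 4)   # e mod 4 == 1 -> digit 1; == 3 -> digit -1
            e -= d
        else:
            d = 0
        digits.append(d)
        e //= 2
    return digits

def weight(digits):
    """Number of nonzero digits (each costs one multiplication)."""
    return sum(1 for d in digits if d)
```

For example, 255 has binary weight 8 but NAF weight 2, since its NAF is the signed representation 256 - 1; the NAF weight never exceeds the binary weight.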
Dealing Necessary and Sufficient Numbers of Cards for Sharing a One-Bit Secret Key (Extended Abstract)

Using a random deal of cards to players and a computationally unlimited eavesdropper, all players wish to share a one-bit secret key which is information-theoretically secure from the eavesdropper. This can be done by a protocol to make several pairs of players share one-bit secret keys so that all these pairs form a spanning tree over players. In this paper we obtain a necessary and sufficient condition on the number of cards for the existence of such a protocol. Our condition immediately yields an efficient linear-time algorithm to determine whether there exists a protocol to achieve such a secret key sharing.

Takaaki Mizuki, Hiroki Shizuya, Takao Nishizeki

Foundations II

Computationally Private Information Retrieval with Polylogarithmic Communication

We present a single-database computationally private information retrieval scheme with polylogarithmic communication complexity. Our construction is based on a new, but reasonable intractability assumption, which we call the Φ-Hiding Assumption (ΦHA): essentially the difficulty of deciding whether a small prime divides Φ(m), where m is a composite integer of unknown factorization.

Christian Cachin, Silvio Micali, Markus Stadler
On the Concurrent Composition of Zero-Knowledge Proofs

We examine the concurrent composition of zero-knowledge proofs. By concurrent composition, we mean a single prover that is involved in multiple, simultaneous zero-knowledge proofs with one or more verifiers. Under this type of composition it is believed that standard zero-knowledge protocols are no longer zero-knowledge. We show that, modulo certain complexity assumptions, any statement in NP has $$k^\epsilon$$-round proofs and arguments in which one can efficiently simulate any $$k^{O(1)}$$ concurrent executions of the protocol.

Ransom Richardson, Joe Kilian
Pseudorandom Function Tribe Ensembles Based on One-Way Permutations: Improvements and Applications

Pseudorandom function tribe ensembles are pseudorandom function ensembles that have an additional collision resistance property: almost all functions have disjoint ranges. We present an alternative to the construction of pseudorandom function tribe ensembles based on one-way permutations given by Canetti, Micciancio and Reingold [7]. Our approach yields two different but related solutions: one construction is somewhat theoretical, but conceptually simple, and therefore gives an easier proof that one-way permutations suffice to construct pseudorandom function tribe ensembles. The other, slightly more complicated solution provides a practical construction; it starts with an arbitrary pseudorandom function ensemble and assimilates the one-way permutation into this ensemble. Therefore, the second solution inherits important characteristics of the underlying pseudorandom function ensemble: it is almost as efficient, and if the starting pseudorandom function ensemble is invertible then so is the derived tribe ensemble. We also show that the latter solution yields so-called committing private-key encryption schemes, i.e., where each ciphertext corresponds to exactly one plaintext — independently of the choice of the secret key or the random bits used in the encryption process.

Marc Fischlin

Broadcast and Multicast

Secure Communication in Broadcast Channels: The Answer to Franklin and Wright’s Question

Problems of secure communication and computation have been studied extensively in network models. Goldreich, Goldwasser, and Linial, Franklin and Yung, and Franklin and Wright have initiated the study of secure communication and secure computation in multi-recipient (broadcast) models. A “broadcast channel” (such as Ethernet) enables one processor to send the same message—simultaneously and privately—to a fixed subset of processors. In their Eurocrypt ’98 paper, Franklin and Wright showed that if there are n broadcast lines between a sender and a receiver and there are at most t malicious (Byzantine style) processors, then the condition n > t is necessary and sufficient for achieving efficient probabilistically reliable and probabilistically private communication. They also showed that if n > ⌈3t/2⌉ then there is an efficient protocol to achieve probabilistically reliable and perfectly private communication. And they left open the question whether there exists an efficient protocol to achieve probabilistically reliable and perfectly private communication when ⌈3t/2⌉ ≥ n > t. In this paper, by using a different authentication scheme, we answer this question affirmatively and study related problems.

Yongge Wang, Yvo Desmedt
Efficient Communication-Storage Tradeoffs for Multicast Encryption

We consider re-keying protocols for secure multicasting in a dynamic multicast group with a center. There is a variety of different scenarios using multicast, presenting a wide range of efficiency requirements with respect to several parameters. We give an upper bound on the tradeoffs between storage and communication parameters. In particular, we suggest an improvement of the schemes by Wallner et al. and Wong et al. [13,14] with sub-linear center storage, without a significant loss in other parameters. By correctly selecting the parameters of our scheme we can efficiently accommodate a wide range of scenarios. This is demonstrated by applying the protocol to some known benchmark scenarios. We also show lower bounds on the tradeoffs between communication and user storage, and show that our scheme is almost optimal with respect to these lower bounds.

Ran Canetti, Tal Malkin, Kobbi Nissim
Backmatter
Metadata
Title
Advances in Cryptology — EUROCRYPT ’99
Edited by
Jacques Stern
Copyright year
1999
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48910-8
Print ISBN
978-3-540-65889-4
DOI
https://doi.org/10.1007/3-540-48910-X