1993 | Book

Advances in Cryptology — CRYPTO’ 92

12th Annual International Cryptology Conference Santa Barbara, California, USA August 16–20, 1992 Proceedings

Edited by: Ernest F. Brickell

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

Crypto'92 took place on August 16-20, 1992. It was the twelfth in the series of annual cryptology conferences held on the beautiful campus of the University of California, Santa Barbara. Once again, it was sponsored by the International Association for Cryptologic Research, in cooperation with the IEEE Computer Society Technical Committee on Security and Privacy. The conference ran smoothly, due to the diligent efforts of the general chair, Spyros Magliveras of the University of Nebraska. One of the measures of the success of this series of conferences is the ever-increasing number of papers submitted. This year, there were 135 submissions to the conference, which represents a new record. Following the practice of recent program committees, the papers received anonymous review. The program committee accepted 38 papers for presentation. In addition, there were two invited presentations, one by Miles Smid on the Digital Signature Standard, and one by Mike Fellows on presenting the concepts of cryptology to elementary-age students. These proceedings contain these 40 papers plus 3 papers that were presented at the Rump Session. I would like to thank all of the authors of the submitted papers and all of the speakers who presented papers. I would like to express my sincere appreciation for the work of the program committee: Ivan Damgård (Aarhus University, Denmark), Oded Goldreich (Technion, Israel), Burt Kaliski (RSA Data Security, USA), Joe Kilian (NEC, USA).

Table of Contents

Frontmatter

Digital Signatures and Identification I

Provably Unforgeable Signatures

Very strong definitions of security for signature schemes have been proposed in the literature. Constructions for such schemes have been proposed, but so far they have only been of theoretical interest and have been considered far too inefficient for practical use. Here we present a new scheme that satisfies these strongest definitions and uses essentially the same amount of computation and memory as the widely applied RSA scheme. The scheme is based on the well-known RSA assumption. Our signatures can be thought of as products resulting from a two-dimensional Lamport scheme, where one dimension consists of a list of public constants, and the other is the sequence of odd primes.
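Since the abstract describes the signatures as products arising from a two-dimensional Lamport scheme, a minimal sketch of the underlying one-dimensional Lamport one-time signature may help fix ideas. It is illustrative only and is not the scheme of the paper; SHA-256 stands in for the one-way function, and all names and parameters are assumptions made here.

```python
import os
import hashlib

def H(x: bytes) -> bytes:
    """One-way function; SHA-256 is a stand-in choice."""
    return hashlib.sha256(x).digest()

def keygen(msg_bits: int = 256):
    """Secret key: two random preimages per message bit. Public key: their images."""
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(msg_bits)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    """Reveal one preimage per bit of H(msg)."""
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(sk))]
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(pk, msg: bytes, sig) -> bool:
    """Check that each revealed preimage hashes to the published image."""
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(len(pk))]
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits)))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
```

The one-time restriction and the large key size of this basic scheme are exactly what constructions such as the one above aim to overcome while keeping provable unforgeability.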

Jurjen N. E. Bos, David Chaum
New Constructions of Fail-Stop Signatures and Lower Bounds
Extended Abstract

With a fail-stop signature scheme, the supposed signer of a forged signature can prove to everybody else that it was a forgery. Thus the signer is secure even against computationally unrestricted forgers. Until recently, efficient constructions were only known for restricted cases, but at Eurocrypt ’92, van Heijst and Pedersen presented an efficient general scheme, where the unforgeability is based on the discrete logarithm. We present a similar scheme based on factoring: Signing a message block requires approximately one modular exponentiation, and testing it requires a little more than two exponentiations. It is useful to have such alternative constructions in case one of the unproven assumptions is broken. With all fail-stop signatures so far, the size of the secret key is linear in the number of messages to be signed. In one sense, we prove that this cannot be avoided: The signer needs so many secretly chosen random bits. However, this does not imply that these bits ever have to be secretly stored at the same time: We present a practical construction with only logarithmic secret storage and a less practical one where the amount of secret storage is constant. We also prove rather small lower bounds for the length of public keys and signatures. All three lower bounds are within a small factor of what can be achieved with one of the known schemes. Finally, we prove that with unconditionally secure signatures, like those presented by Chaum and Roijakkers at Crypto ’90, the length of a signature is at least linear in the number of participants who can test it. This shows that such schemes cannot be as efficient as fail-stop signatures.

Eugène van Heijst, Torben Pryds Pedersen, Birgit Pfitzmann
Provably Secure and Practical Identification Schemes and Corresponding Signature Schemes

This paper presents a three-move interactive identification scheme and proves it to be as secure as the discrete logarithm problem. This provably secure scheme is almost as efficient as the Schnorr identification scheme, while the Schnorr scheme is not provably secure. This paper also presents another practical identification scheme which is proven to be as secure as the factoring problem and is almost as efficient as the Guillou-Quisquater identification scheme: the Guillou-Quisquater scheme is not provably secure. We also propose practical digital signature schemes based on these identification schemes. The signature schemes are almost as efficient as the Schnorr and Guillou-Quisquater signature schemes, while the security assumptions of our signature schemes are weaker than those of the Schnorr and Guillou-Quisquater signature schemes. This paper also gives a theoretically generalized result: a three-move identification scheme can be constructed which is as secure as the random-self-reducible problem. Moreover, this paper proposes a variant which is proven to be as secure as the difficulty of solving both the discrete logarithm problem and the specific factoring problem simultaneously. Some other variants such as an identity-based variant and an elliptic curve variant are also proposed.

Tatsuaki Okamoto
An Efficient Digital Signature Scheme Based on an Elliptic Curve over the Ring Zn

We propose a practical digital signature scheme based on the elliptic curve modulo n, where n = p²q such that p and q are large secret primes. The signature generation speed of our scheme is more than 10 times faster than that of the RSA scheme. Moreover, a pre-processing technique can significantly increase the signature generation speed.

Tatsuaki Okamoto, Atsushi Fujioka, Eiichiro Fujisaki

The Digital Signature Standard

Designing and Detecting Trapdoors for Discrete Log Cryptosystems

Using a number field sieve, discrete logarithms modulo primes of special forms can be found faster than for standard primes. This has raised concerns about trapdoors in discrete log cryptosystems, such as the Digital Signature Standard. This paper discusses the practical impact of these trapdoors, and how to avoid them.

Daniel M. Gordon
Response to Comments on the NIST Proposed Digital Signature Standard

NIST received comments from 109 separate government agencies, companies, and private individuals concerning the proposed Digital Signature Standard. Both positive and negative comments were received. However the number of negative comments was significantly larger than normally received for a proposed Federal Information Processing Standard (FIPS). This paper summarizes the major comments, both positive and negative, and provides responses where appropriate. The paper highlights the anticipated significant modifications to the proposed standard and concludes by discussing the future milestones that need to be accomplished before the proposed DSS becomes a FIPS.

Miles E. Smid, Dennis K. Branstad

Applications and New Problems

Wallet Databases with Observers
Extended Abstract

Previously there have been essentially only two models for computers that people can use to handle ordinary consumer transactions: (1) the tamper-proof module, such as a smart card, that the person cannot modify or probe; and (2) the personal workstation whose inner workings are totally under the control of the individual. The first part of this article argues that a particular combination of these two kinds of mechanism can overcome the limitations of each alone, providing both security and correctness for organizations as well as privacy and even anonymity for individuals. Then it is shown how this combined device, called a wallet, can carry a database containing personal information. The construction presented ensures that no single part of the device (i.e. neither the tamper-proof part nor the workstation) can learn the contents of the database — this information can only be recovered by the two parts together.

David Chaum, Torben Pryds Pedersen
Making Electronic Refunds Safer

We show how to break an electronic cash protocol due to van Antwerpen (a refinement of the system proposed by Chaum, Fiat, and Naor), and give an alternative protocol that fixes the problem.

Rafael Hirschfeld
Fair Public-Key Cryptosystems

We show how to construct public-key cryptosystems that are fair, that is, strike a good balance, in a democratic country, between the needs of the Government and those of the Citizens. Fair public-key cryptosystems guarantee that: (1) the system cannot be misused by criminal organizations and (2) the Citizens maintain exactly the same rights to privacy they currently have under the law. We actually show how to transform any public-key cryptosystem into a fair one. The transformed systems preserve the security and efficiency of the original ones. Thus one can still use whatever system he believes to be more secure, and enjoy the additional properties of fairness. Moreover, for today’s best known cryptosystems, we show that the transformation to fair ones is particularly efficient and convenient. As we shall explain, our solution compares favorably with the Clipper Chip, the encryption proposal recently put forward by the Clinton Administration for solving similar problems.

Silvio Micali
Pricing via Processing or Combatting Junk Mail

We present a computational technique for combatting junk mail in particular and controlling access to a shared resource in general. The main idea is to require a user to compute a moderately hard, but not intractable, function in order to gain access to the resource, thus preventing frivolous use. To this end we suggest several pricing functions, based on, respectively, extracting square roots modulo a prime, the Fiat-Shamir signature scheme, and the Ong-Schnorr-Shamir (cracked) signature scheme.
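As a rough illustration of the first pricing function mentioned above (extracting square roots modulo a prime), here is a hedged sketch rather than the paper's construction: the Mersenne prime 2^127 − 1, the hash-based challenge derivation, and the function names are all illustrative assumptions. The point is the asymmetry: the sender pays with modular exponentiations of size log p, while the receiver checks with a single modular squaring.

```python
import hashlib

# Illustrative prime: the Mersenne prime 2^127 - 1, which is congruent to 3 (mod 4).
P = (1 << 127) - 1

def challenge(mail: bytes, stamp: int) -> int:
    """Derive a challenge mod P from the mail plus a counter (illustrative encoding)."""
    h = hashlib.sha256(mail + stamp.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") % P

def price(mail: bytes):
    """Sender's work: find a stamp whose challenge is a quadratic residue and extract
    its square root. Each attempt costs full-size modular exponentiations."""
    stamp = 0
    while True:
        x = challenge(mail, stamp)
        if pow(x, (P - 1) // 2, P) == 1:           # Euler's criterion: x is a QR mod P
            return stamp, pow(x, (P + 1) // 4, P)  # square root, valid since P ≡ 3 (mod 4)
        stamp += 1

def check(mail: bytes, stamp: int, root: int) -> bool:
    """Receiver's work: essentially one modular squaring."""
    return pow(root, 2, P) == challenge(mail, stamp)

stamp, root = price(b"hi, please read my mail")
assert check(b"hi, please read my mail", stamp, root)
```

In a deployment of this flavor the prime would be sized so that the sender's cost is noticeable but tolerable, while verification stays negligible for the receiver.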

Cynthia Dwork, Moni Naor

Secret Sharing I

On the Information Rate of Secret Sharing Schemes
Extended Abstract

We derive new limitations on the information rate and the average information rate of secret sharing schemes for access structures represented by graphs. We give the first proof of the existence of access structures with optimal information rate and optimal average information rate less than 1/2 + ε, where ε is an arbitrary positive constant. We also provide several general lower bounds on the information rate and average information rate of graphs. In particular, we show that any graph with n vertices admits a secret sharing scheme with information rate Ω((log n)/n).

C. Blundo, A. De Santis, L. Gargano, U. Vaccaro
New General Lower Bounds on the Information Rate of Secret Sharing Schemes

We use two combinatorial techniques to apply a decomposition construction in obtaining general lower bounds on information rate and average information rate of certain general classes of access structures. The first technique uses combinatorial designs (in particular, Steiner systems S(t, k, v)). The second technique uses equitable edge-colourings of bipartite graphs. For uniform access structures of rank t, this second technique improves the best previous general bounds by a factor of t (asymptotically).

D. R. Stinson
Universally Ideal Secret Sharing Schemes
Preliminary Version

Given a set of parties {1, ..., n}, an access structure is a monotone collection of subsets of the parties. For a certain domain of secrets, a secret sharing scheme for an access structure is a method for a dealer to distribute shares to the parties, such that only subsets in the access structure can reconstruct the secret. A secret sharing scheme is ideal if the domains of the shares are the same as the domain of the secrets. An access structure is universally ideal if there is an ideal secret sharing scheme for it over every finite domain of secrets. An obvious necessary condition for an access structure to be universally ideal is to be ideal over the binary and ternary domains of secrets. In this work, we prove that this condition is also sufficient. In addition, we give an exact characterization for each of these two conditions, and show that each condition by itself is not sufficient for universally ideal access structures.
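As a concrete reference point for the terminology, and not something taken from the paper: Shamir's threshold scheme over a prime field is the standard example of an ideal scheme, since the shares live in the same domain GF(P) as the secret. A minimal sketch, with an illustrative field size:

```python
import random

P = 2**61 - 1  # prime field; shares and secret both live in GF(P), so the scheme is ideal

def share(secret: int, t: int, n: int):
    """Split `secret` among n parties so that any t of them can reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(1234567, t=3, n=5)
assert reconstruct(shares[:3]) == 1234567
```

The question the paper addresses is which non-threshold access structures admit schemes that are similarly ideal over every finite domain of secrets.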

Amos Beimel, Benny Chor

Theory I

Perfect Zero-Knowledge Arguments for NP Can Be Based on General Complexity Assumptions
Extended Abstract

“Zero-knowledge arguments” is a fundamental cryptographic primitive which allows one polynomial-time player to convince another polynomial-time player of the validity of an NP statement, without revealing any additional information in the information-theoretic sense. Despite their practical and theoretical importance, it was only known how to implement zero-knowledge arguments based on specific algebraic assumptions; basing them on a general complexity assumption was open since their introduction in 1986 [BCC, BC, CH]. In this paper, we finally show a general construction, which can be based on any one-way permutation. We stress that our scheme is efficient: both players can execute only polynomial-time programs during the protocol. Moreover, the security achieved is on-line: in order to cheat and validate a false theorem, the prover must break a cryptographic assumption on-line during the conversation, while the verifier cannot find (ever!) any information unconditionally (in the information theoretic sense).

Moni Naor, Rafail Ostrovsky, Ramarathnam Venkatesan, Moti Yung
Low communication 2-prover zero-knowledge proofs for NP
Preliminary Version

We exhibit a two-prover perfect zero-knowledge proof system for 3-SAT. In this protocol, the verifier sends a single message to each prover, whose size grows logarithmically in the size of the 3-SAT formula. Each prover’s answer consists of only a constant number of bits. The verifier will always accept correct proofs. Given an unsatisfiable formula S, the verifier will reject with probability at least Ω((|S| − max-sat(S))/|S|), where max-sat(S) denotes the maximum number of clauses of S that may be simultaneously satisfied, and |S| denotes the total number of clauses of S. Using a recent result by Arora et al. [2], we can construct for any language in NP a protocol with the property that any non-member of the language is rejected with constant probability.

Cynthia Dwork, Uri Feige, Joe Kilian, Moni Naor, Muli Safra
Invariant Signatures and Non-Interactive Zero-Knowledge Proofs are Equivalent
Extended Abstract

The standard definition of digital signatures allows a document to have many valid signatures. In this paper, we consider a subclass of digital signatures, called invariant signatures, in which all legal signatures of a document must be identical according to some polynomial-time computable function (of a signature) which is hard to predict given an unsigned document. We formalize this notion and show its equivalence to non-interactive zero-knowledge proofs.

Shafi Goldwasser, Rafail Ostrovsky
On the Discrepancy between Serial and Parallel of Zero-Knowledge Protocols
Extended Abstract

In this paper, we investigate the discrepancy between a serial version and a parallel version of zero-knowledge protocols, and clarify the information “leaked” in the parallel version, which is not zero-knowledge unlike the case of the serial version. We consider two sides: one negative and the other positive in the parallel version of zero-knowledge protocols, especially of the Fiat-Shamir scheme.

Kouichi Sakurai, Toshiya Itoh

Cryptographic Functions

On the Design of SP Networks from an Information Theoretic Point of View

The cryptographic strength of an SP network depends crucially on the strength of its substitution boxes (S-boxes). In this paper we use the concept of information leakage to evaluate the strength of S-boxes and SP networks. We define an equivalence class on n × n S-boxes that is invariant in information leakage. Simulation results for a 16 × 16 SP network suggest that after a sufficient number of rounds the distribution of the output XOR in the SP network looks random. We further present simulation results to show that the information leakage for an SP network diminishes more rapidly with the number of rounds when the S-boxes are cryptographically strong.

M. Sivabalan, S. E. Tavares, L. E. Peppard
Partially-bent functions

We study a conjecture stated in [6] about the numbers of non-zeros of, respectively, the auto-correlation function and the Walsh transform of the function (−1)^f(x), where f(x) is any Boolean function on {0, 1}^n. The result that we obtain leads us to introduce the class of partially-bent functions. We study the propagation criterion within this class of functions. We characterize those partially-bent functions which are balanced and prove a relation between their number (which is unknown) and the number of non-balanced partially-bent functions on {0, 1}^(n−1). Finally, we study their correlation immunity.
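For small n the two quantities in the conjecture can be computed exhaustively from a truth table. The sketch below is illustrative only (the example function and all names are assumptions made here, not the paper's): it counts the non-zeros of the Walsh transform of (−1)^f and of its autocorrelation function; the example is a quadratic function, which falls within the partially-bent class.

```python
def walsh_autocorr_nonzeros(f, n):
    """f: truth table of a Boolean function on n bits, a list of 0/1 of length 2**n.
    Returns (#nonzero Walsh coefficients of (-1)^f, #nonzero autocorrelation values)."""
    N = 1 << n
    chi = [(-1) ** f[x] for x in range(N)]                       # the sign function (-1)^f(x)
    walsh = [sum(chi[x] * (-1) ** bin(x & w).count("1") for x in range(N)) for w in range(N)]
    autoc = [sum(chi[x] * chi[x ^ s] for x in range(N)) for s in range(N)]
    return sum(w != 0 for w in walsh), sum(a != 0 for a in autoc)

# Example: f(x1, x2, x3) = x1*x2 XOR x3 (quadratic, hence partially-bent).
f = [(((x >> 2) & 1) & ((x >> 1) & 1)) ^ (x & 1) for x in range(8)]
print(walsh_autocorr_nonzeros(f, 3))
```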

C. Carlet

Digital Signatures and Identification II

Practical Approaches to Attaining Security against Adaptively Chosen Ciphertext Attacks
Extended Abstract

This paper presents three methods for strengthening public key cryptosystems in such a way that they become secure against adaptively chosen ciphertext attacks. In an adaptively chosen ciphertext attack, an attacker can query the deciphering algorithm with any ciphertexts, except for the exact object ciphertext to be cryptanalyzed. The first strengthening method is based on the use of one-way hash functions, the second on the use of universal hash functions and the third on the use of digital signature schemes. Each method is illustrated by an example of a public key cryptosystem based on the intractability of computing discrete logarithms in finite fields. Two other issues, namely applications of the methods to public key cryptosystems based on other intractable problems and enhancement of information authentication capability to the cryptosystems, are also discussed.

Yuliang Zheng, Jennifer Seberry
On the Security of the Permuted Kernel Identification Scheme

A zero-knowledge identification scheme built upon the so-called Permuted Kernel Problem (PKP) was proposed by Adi Shamir in 1989 [1]. In this paper, we present a time-memory trade-off leading to a reduction of the computation time for solving the PKP problem, as compared with the best known attack mentioned in [1].

Thierry Baritaud, Mireille Campana, Pascal Chauvaud, Henri Gilbert

Computational Number Theory

Massively Parallel Computation of Discrete Logarithms

Numerous cryptosystems have been designed to be secure under the assumption that the computation of discrete logarithms is infeasible. This paper reports on an aggressive attempt to discover the size of fields of characteristic two for which the computation of discrete logarithms is feasible. We discover several things that were previously overlooked in the implementation of Coppersmith’s algorithm, some positive, and some negative. As a result of this work we have shown that fields as large as GF(2^503) can definitely be attacked.

Daniel M. Gordon, Kevin S. McCurley
A Quadratic Sieve on the n-Dimensional Cube

Let N be a large odd integer. We show how to produce a long sequence {(Xi, Yi)}, i = 1, ..., 2^n, of integers modulo N which satisfy Xi² ≡ Yi (mod N), where Xi > N^{1/2} and |Yi| < cN^{1/2}. Our sequence corresponds to a Hamiltonian path on the n-dimensional hypercube Cn, where n is Θ(log N/log log N). One application of these techniques is that, at each vertex of the hypercube, it is possible to search for equations of the form U² ≡ V (mod N) with V a product of small primes. The search is as in the quadratic sieve algorithm and therefore very fast. This yields a faster way of changing polynomials in the Multiple Polynomial Quadratic Sieve algorithm, since moving along the hypercube turns out to be very cheap.
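As background, and not from the paper itself, here is a minimal sketch of the basic quadratic-sieve relation search that the hypercube construction accelerates: take X close to √N, so that Y = X² − N is of order N^{1/2}, and keep those Y that factor completely over a small factor base. The toy modulus, factor base, and search span are illustrative assumptions; trial division stands in for actual sieving.

```python
from math import isqrt

def find_relations(N, factor_base, count=5, span=100000):
    """Collect pairs (X, exponents) with X^2 ≡ Y (mod N) and Y smooth over `factor_base`."""
    relations = []
    for X in range(isqrt(N) + 1, isqrt(N) + 1 + span):
        Y = X * X - N                     # small because X is close to sqrt(N)
        y, exponents = Y, {}
        for p in factor_base:
            while y % p == 0:
                y //= p
                exponents[p] = exponents.get(p, 0) + 1
        if y == 1:                        # Y is factor-base smooth: a usable relation
            relations.append((X, exponents))
            if len(relations) >= count:
                break
    return relations

N = 87463                                 # toy composite (149 * 587); real N has hundreds of bits
print(find_relations(N, factor_base=[2, 3, 5, 7, 11, 13, 17, 19, 23, 29]))
```

Collected relations are then combined so that the product of the Y values is a square, which is the step the abstract's hypercube walk feeds more cheaply.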

René Peralta
Efficient Multiplication on Certain Nonsupersingular Elliptic Curves

Elliptic curves defined over finite fields have been proposed for Diffie-Hellman type cryptosystems. Koblitz has suggested to use “anomalous” elliptic curves in characteristic 2, as these are nonsupersingular and allow for efficient multiplication of points by an integer. For anomalous curves E defined over F_2 and regarded as curves over the extension field F_{2^n}, a new algorithm for computing multiples of arbitrary points on E is developed. The algorithm is shown to be three times faster than double and add, is easy to implement and does not rely on precomputation or additional memory. The algorithm is used to generate efficient one-way permutations involving pairs of twisted elliptic curves by extending a construction of Kaliski to finite fields of characteristic 2.

Willi Meier, Othmar Staffelbach
Speeding up Elliptic Cryptosystems by Using a Signed Binary Window Method

The basic operation in elliptic cryptosystems is the computation of a multiple d·P of a point P on the elliptic curve modulo n. We propose a fast and systematic method of reducing the number of operations over elliptic curves. The proposed method is based on pre-computation to generate an adequate addition-subtraction chain for the multiplier d. By increasing the average length of zero runs in a signed binary representation of d, we can speed up the window method. Formulating the time complexity of the proposed method makes clear that the proposed method is faster than other methods. For example, for d with length 512 bits, the proposed method requires 602.6 multiplications on average. Finally, we point out that each addition/subtraction over the elliptic curve using homogeneous coordinates can be done in 3 multiplications if parallel processing is allowed.
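The speedup rests on recoding the multiplier d over the digit set {−1, 0, 1} so that zero runs are long and each nonzero digit costs one curve addition or subtraction. The sketch below computes the standard non-adjacent form (NAF), a simple signed binary recoding that may differ in detail from the window recoding the authors optimize, and compares the nonzero-digit counts.

```python
def naf(d: int):
    """Non-adjacent form of d: digits in {-1, 0, 1}, no two adjacent nonzeros,
    least significant digit first."""
    digits = []
    while d > 0:
        if d & 1:
            r = 2 - (d % 4)          # +1 or -1, chosen so that (d - r) is divisible by 4
            digits.append(r)
            d -= r
        else:
            digits.append(0)
        d //= 2
    return digits

d = 0b1101110111011101
rep = naf(d)
assert sum(c * (1 << i) for i, c in enumerate(rep)) == d   # recoding is exact
print("binary nonzeros:", bin(d).count("1"), "| NAF nonzeros:", sum(c != 0 for c in rep))
```

Fewer nonzero digits translate directly into fewer point additions/subtractions in the double-and-add loop, which is the effect the paper quantifies for its window variant.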

Kenji Koyama, Yukio Tsuruoka
On Generation of Probable Primes by Incremental Search

This paper examines the following algorithm for generating a probable prime number: choose a random k-bit odd number n, and test the numbers n, n + 2, ... for primality using t iterations of Rabin’s test, until a probable prime has been found or some maximum number s of candidates have been tested. We show an explicit upper bound as a function of k, t and s on the probability that this algorithm outputs a composite. From Hardy and Littlewood's prime r-tuple conjecture, an upper bound follows on the probability that the algorithm fails. We propose the entropy of the output distribution as a natural measure of the quality of the output. Under the prime r-tuple conjecture, we show a lower bound on the entropy of the output distribution over the primes. This bound shows that as k → ∞ the entropy becomes almost equal to the largest possible value. Variants allowing repeated choice of starting points or arbitrary search length are also examined. They are guaranteed not to fail, and their error probability and output entropy can be analysed to some extent.
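A minimal sketch of the algorithm being analysed, with illustrative parameter choices that are not taken from the paper: choose a random k-bit odd n and test n, n + 2, ... with t iterations of the Rabin (Miller-Rabin) test, giving up after s candidates.

```python
import random

def miller_rabin(n: int, t: int) -> bool:
    """t iterations of Rabin's probabilistic primality test."""
    if n < 4:
        return n in (2, 3)
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(t):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # witness found: n is composite
    return True                       # probably prime

def incremental_prime_search(k: int, t: int, s: int):
    """Random k-bit odd start, then test n, n+2, ... for at most s candidates."""
    n = random.getrandbits(k - 1) | (1 << (k - 1)) | 1   # force k bits and oddness
    for _ in range(s):
        if miller_rabin(n, t):
            return n
        n += 2
    return None                       # the basic variant may fail after s candidates

print(incremental_prime_search(k=256, t=20, s=10 * 256))
```

The paper's question is precisely how to bound the probability that such a run returns a composite, and how much entropy the output distribution over primes retains.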

Jørgen Brandt, Ivan Damgård

Cryptography Education

Kid Krypto

Cryptographic ideas and protocols that are accessible to children are described, and the case is made that cryptography can provide an excellent context and motivation for fundamental ideas of mathematics and computer science in the school curriculum, and in other venues such as children’s science museums. It is pointed out that we may all be doing “Kid Krypto” unawares. Crayon-technology cryptosystems can be a source of interesting research problems; a number of these are described.

Michael Fellows, Neal Koblitz

Theory II

On Defining Proofs of Knowledge

The notion of a “proof of knowledge,” suggested by Goldwasser, Micali and Rackoff, has been used in many works as a tool for the construction of cryptographic protocols and other schemes. Yet the commonly cited formalizations of this notion are unsatisfactory and in particular inadequate for some of the applications in which they are used. Consequently, new researchers keep getting misled by existing literature. The purpose of this paper is to indicate the source of these problems and suggest a definition which resolves them.

Mihir Bellare, Oded Goldreich
Public Randomness in Cryptography

The main contribution of this paper is the introduction of a formal notion of public randomness in the context of cryptography. We show how this notion affects the definition of the security of a cryptographic primitive and the definition of how much security is preserved when one cryptographic primitive is reduced to another. Previous works considered the public random bits as a part of the input, and security was parameterized in terms of the total length of the input. We parameterize security solely in terms of the length of the private input, and treat the public random bits as a separate resource. This separation allows us to independently address the important issues of how much security is preserved by a reduction and how many public random bits are used in the reduction. To exemplify these new definitions, we present reductions from weak one-way permutations to one-way permutations with strong security preserving properties that are simpler than previously known reductions.

Amir Herzberg, Michael Luby
Necessary and Sufficient Conditions for Collision-Free Hashing

This paper determines an exact relationship between collision-free hash functions and other cryptographic primitives. Namely, it introduces a new concept, the pseudo-permutation, and shows that the existence of collision-free hash functions is equivalent to the existence of claw-free pairs of pseudo-permutations. When considered as one bit contractors (functions from k + 1 bits to k bits), the collision-free hash functions constructed are more efficient than those proposed originally, requiring a single (claw-free) function evaluation rather than k.

Alexander Russell
Certifying Cryptographic Tools: The Case of Trapdoor Permutations

In cryptographic protocols it is often necessary to verify/certify the “tools” in use. This work demonstrates certain subtleties in treating a family of trapdoor permutations in this context, noting the necessity to “check” certain properties of these functions. The particular case we illustrate is that of non-interactive zero-knowledge. We point out that the elegant recent protocol of Feige, Lapidot and Shamir for proving NP statements in non-interactive zero-knowledge requires an additional certification of the underlying trapdoor permutation, and suggest a certification method to fill this gap.

Mihir Bellare, Moti Yung

Key Distribution

Protocols for Secret Key Agreement by Public Discussion Based on Common Information

Consider the following scenario: Alice and Bob, two parties who share no secret key initially but whose goal it is to generate a (large amount of) information-theoretically secure (or unconditionally secure) shared secret key, are connected only by an insecure public channel to which an eavesdropper Eve has perfect (read) access. Moreover, there exists a satellite broadcasting random bits at a very low signal power. Alice and Bob can receive these bits with certain bit error probabilities εA and εB, respectively (e.g. εA = εB = 30%), while Eve is assumed to receive the same bits much more reliably with bit error probability εE ≪ εA, εB (e.g. εE = 1%). The errors on the three channels are assumed to occur at least partially independently. Practical protocols are discussed by which Alice and Bob can generate a secret key despite the facts that Eve possesses more information than both of them and is assumed to have unlimited computational resources as well as complete knowledge of the protocols. The described scenario is a special case of a much more general setup in which Alice, Bob and Eve are assumed to know random variables X, Y and Z jointly distributed according to some probability distribution PXYZ, respectively. The results of this paper suggest building cryptographic systems that are provably secure against enemies with unlimited computing power under realistic assumptions about the partial independence of the noise on the involved communication channels.
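As a toy Monte-Carlo illustration of the satellite scenario (the parameters come from the example above, but the repeat-code protocol and Eve's majority strategy below are simple choices made here for brevity, not necessarily the protocols of the paper): Alice broadcasts her received bits XORed with a random bit R, Bob accepts a round only when his copy decodes consistently, and one can compare his conditional error on R with Eve's.

```python
import random

eA, eB, eE, N, TRIALS = 0.30, 0.30, 0.01, 10, 300000
noisy = lambda bit, e: bit ^ (random.random() < e)      # pass `bit` through a BSC(e)

accepted = bob_err = eve_err = 0
for _ in range(TRIALS):
    S = [random.getrandbits(1) for _ in range(N)]       # satellite bits
    X = [noisy(s, eA) for s in S]                       # Alice's reception
    Y = [noisy(s, eB) for s in S]                       # Bob's reception
    Z = [noisy(s, eE) for s in S]                       # Eve's reception
    R = random.getrandbits(1)
    C = [x ^ R for x in X]                              # Alice's public message
    D = [c ^ y for c, y in zip(C, Y)]
    if len(set(D)) == 1:                                # Bob accepts only a constant decode
        accepted += 1
        bob_err += (D[0] != R)
        eve_votes = sum(c ^ z for c, z in zip(C, Z))    # Eve's majority guess of R
        eve_err += ((eve_votes * 2 > N) != R)

a = max(accepted, 1)
print(f"accepted={accepted}  Bob err={bob_err/a:.3f}  Eve err={eve_err/a:.3f}")
```

The simulation shows the qualitative effect the abstract describes: by discarding most rounds, Bob can end up knowing R more reliably than Eve does, even though Eve's raw channel is far better.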

Ueli M. Maurer
Perfectly-Secure Key Distribution for Dynamic Conferences

A key distribution scheme for dynamic conferences is a method by which initially an (off-line) trusted server distributes private individual pieces of information to a set of users. Later any group of users of a given size (a dynamic conference) is able to compute a common secure key. In this paper we study the theory and applications of such perfectly secure systems. In this setting, any group of t users can compute a common key, each user using only his private piece of information and the identities of the other t − 1 group users. Keys are secure against coalitions of up to k users, that is, even if k users pool together their pieces they cannot compute anything about a key of any t-size conference comprised of other users. First we consider a non-interactive model where users compute the common key without any interaction. We prove a lower bound on the size of the user’s piece of information of $\binom{k+t-1}{t-1}$ times the size of the common key. We then establish the optimality of this bound, by describing and analyzing a scheme which exactly meets this limitation (the construction extends the one in [2]). Then, we consider the model where interaction is allowed in the common key computation phase, and show a gap between the models by exhibiting an interactive scheme in which the user’s information is only k + t − 1 times the size of the common key. We further show various applications and useful modifications of our basic scheme. Finally, we present its adaptation to network topologies with neighborhood constraints.
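The non-interactive model for conferences of size t = 2 is realized by Blom-style key predistribution with a symmetric bivariate polynomial, which appears to be the kind of construction the abstract says it extends; the sketch below is an illustration of that t = 2, k-secure special case, with an assumed prime field and user identities, not the paper's scheme.

```python
import random

P = 2**61 - 1          # prime field for all arithmetic (illustrative choice)

def setup(k: int):
    """Trusted server: random symmetric polynomial f(x,y) of degree k in each variable."""
    c = [[0] * (k + 1) for _ in range(k + 1)]
    for i in range(k + 1):
        for j in range(i, k + 1):
            c[i][j] = c[j][i] = random.randrange(P)   # symmetry: c[i][j] == c[j][i]
    return c

def user_share(c, uid: int):
    """Private piece for user `uid`: coefficients of the univariate g_u(y) = f(uid, y)."""
    k = len(c) - 1
    return [sum(c[i][j] * pow(uid, i, P) for i in range(k + 1)) % P for j in range(k + 1)]

def pair_key(share, other_uid: int) -> int:
    """Non-interactive common key with `other_uid`: evaluate g_u at the other identity."""
    return sum(a * pow(other_uid, j, P) for j, a in enumerate(share)) % P

c = setup(k=3)
alice, bob = user_share(c, 17), user_share(c, 42)
assert pair_key(alice, 42) == pair_key(bob, 17)       # f(17,42) == f(42,17) by symmetry
```

Each user stores k + 1 field elements, i.e. k + t − 1 elements for t = 2, matching the flavor of the bounds discussed in the abstract; the paper's generalization handles conferences of arbitrary size t.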

Carlo Blundo, Alfredo De Santis, Amir Herzberg, Shay Kutten, Ugo Vaccaro, Moti Yung

DES

Differential Cryptanalysis of the Full 16-round DES

In this paper we develop the first known attack which is capable of breaking the full 16-round DES in less than the 2^55 complexity of exhaustive search. The data analysis phase computes the key by analyzing about 2^36 ciphertexts in 2^37 time. The 2^36 usable ciphertexts are obtained during the data collection phase from a larger pool of 2^47 chosen plaintexts by a simple bit repetition criterion which discards more than 99.9% of the ciphertexts as soon as they are generated. While earlier versions of differential attacks were based on huge counter arrays, the new attack requires negligible memory and can be carried out in parallel on up to 2^33 disconnected processors with linear speedup. In addition, the new attack can be carried out even if the analyzed ciphertexts are derived from up to 2^33 different keys due to frequent key changes during the data collection phase. The attack can be carried out incrementally with any number of available ciphertexts, and its probability of success grows linearly with this number (e.g., when 2^29 usable ciphertexts are generated from a smaller pool of 2^40 plaintexts, the analysis time decreases to 2^30 and the probability of success is about 1%).

Eli Biham, Adi Shamir
Iterative Characteristics of DES and s2-DES

In this paper we show that we are close to a proof that the type of characteristics used by Biham and Shamir in their differential attack on DES [3] are in fact the best characteristics that can be found for DES. Furthermore we show that the criteria for the construction of DES-like S-boxes proposed by Kim [6] are insufficient to assure resistance against differential attacks. We show several good iterative characteristics for these S-boxes to be used in differential attacks. Finally we examine the probabilities of the two characteristics used by Biham and Shamir in [3]. We found that for some keys we do not get the probabilities used in the attack. We suggest the use of five characteristics instead of two in the attack on DES.

Lars Ramkilde Knudsen
DES is not a Group

We prove that the set of DES permutations (encryption and decryption for each DES key) is not closed under functional composition. This implies that, in general, multiple DES-encryption is not equivalent to single DES-encryption, and that DES is not susceptible to a particular known-plaintext attack which requires, on average, 2^28 steps. We also show that the size of the subgroup generated by the set of DES permutations is greater than 10^2499, which is too large for potential attacks on DES which would exploit a small subgroup.

Keith W. Campbell, Michael J. Wiener
A High-speed DES Implementation for Network Applications

A high-speed data encryption chip implementing the Data Encryption Standard (DES) has been developed. The DES modes of operation supported are Electronic Code Book and Cipher Block Chaining. The chip is based on a gallium arsenide (GaAs) gate array containing 50K transistors. At a clock frequency of 250 MHz, data can be encrypted or decrypted at a rate of 1 GBit/second, making this the fastest single-chip implementation reported to date. High performance and high density have been achieved by using custom-designed circuits to implement the core of the DES algorithm. These circuits employ precharged logic, a methodology novel to the design of GaAs devices. A pipelined flow-through architecture and an efficient key exchange mechanism make this chip suitable for low-latency network controllers.

Hans Eberle

Secret Sharing II

Threshold Schemes with Disenrollment

When a shadow of a threshold scheme is publicized, new shadows have to be reconstructed and redistributed in order to maintain the same level of security. In this paper we consider threshold schemes with disenrollment capabilities where the new shadows can be created by broadcasts through a public channel. We establish a lower bound on the size of each shadow in a scheme that allows L disenrollments. We exhibit three systems that achieve the lower bound on shadow size.

Bob Blakley, G. R. Blakley, A. H. Chan, J. L. Massey
Non-existence of homomorphic general sharing schemes for some key spaces
Extended Abstract

Homomorphic threshold schemes were introduced by Benaloh and have found several applications. In this paper we prove that there do not exist perfect finite homomorphic general monotone sharing schemes for which the key space is a finite non-Abelian group (except for very particular access structures). This result is valid for the most general case, e.g., if each participant receives shares from different sets and when these sets are not necessarily groups. We extend the definition of homomorphic threshold scheme to allow the homomorphic property to hold for two operations. When the set of keys is a finite Boolean algebra or a Galois field then there does not exist a perfect finite two-operation-homomorphic general sharing scheme.

Yair Frankel, Yvo Desmedt, Mike Burmester
An l-Span Generalized Secret Sharing Scheme

For some secret sharing applications, the reconstructed secret is not revealed to the participants, and therefore the secret and shadows can be used repeatedly without having to be changed. But for other applications, in which the reconstructed secret is revealed to the participants, a new secret must be chosen and its corresponding shadows must be regenerated and then secretly distributed to the participants again, in order to enforce the same secret sharing policy. This is inefficient because of the overhead in the generation and distribution of shadows. In this paper, an l-span secret sharing scheme for the general sharing policy is proposed to solve the secret/shadow regeneration problem by extending the life span of the shadows from 1 to l, i.e., the shadows can be reused l times to generate l different secrets.

Lein Harn, Hung-Yu Lin

Rump Session

Provable Security Against Differential Cryptanalysis

The purpose of this paper is to show that there exist DES-like iterated ciphers which are provably resistant against differential attacks. The main result on the security of a DES-like cipher with independent round keys is Theorem 1, which gives an upper bound on the probability of r-round differentials, as defined in [3]; this upper bound depends only on the round function of the iterated cipher. Moreover, it is shown that there exist functions such that the probabilities of differentials are less than or equal to 2^(2−n), where n is the length of the plaintext block. We also show a prototype of an iterated block cipher which is compatible with DES and has proven security against differential attacks.

Kaisa Nyberg, Lars Ramkilde Knudsen
Content-Addressable Search Engines and DES-like Systems

A very simple parallel architecture using a modified version of content-addressable memory (CAM) can be used to cheaply and efficiently encipher and decipher data with DES-like systems. This paper will describe how to implement DES on these modified content-addressable memories at speeds approaching some of the better specialized hardware. This implementation is often much more attractive for system designers because the CAM can be reprogrammed to encrypt the data with other DES-like systems such as Khufu or perform system tasks like data compression or graphics. The CAM memory architecture is also easily extendable to build a large-scale engine for exhaustively searching the entire keyspace. This paper estimates that it will be possible to build a machine to test 2^55 keys of DES in one day for $30 million. This design is much less hypothetical than some of the others in the literature because it is based upon hardware that will be available off-the-shelf in late 1992. The architecture of this key search machine is much more attractive to an attacker because it is easily reprogrammable to handle modified DES-like algorithms such as the UNIX password system or Khufu.

Peter C. Wayner
FFT-Hash-II is not yet Collision-free

In this paper, we show that the FFT-Hash function proposed by Schnorr [2] is not collision-free. Finding a collision requires about 2^24 computations of the basic function of FFT-Hash. This can be done in a few hours on a SUN4 workstation. In fact, it is at most as strong as a one-way hash function which returns a 48-bit value. Thus, we can invert the proposed FFT hash function with 2^48 basic computations. Some simple improvements of the FFT hash function are also proposed to try to get rid of these weaknesses.

Serge Vaudenay
Metadata
Title
Advances in Cryptology — CRYPTO’ 92
Edited by
Ernest F. Brickell
Copyright year
1993
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48071-6
Print ISBN
978-3-540-57340-1
DOI
https://doi.org/10.1007/3-540-48071-4