
1994 | Book

Advances in Cryptology — CRYPTO ’93

13th Annual International Cryptology Conference Santa Barbara, California, USA August 22–26, 1993 Proceedings

Edited by: Douglas R. Stinson

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this Book

The CRYPTO ’93 conference was sponsored by the International Association for Cryptologic Research (IACR) and Bell-Northern Research (a subsidiary of Northern Telecom), in co-operation with the IEEE Computer Society Technical Committee. It took place at the University of California, Santa Barbara, from August 22–26, 1993. This was the thirteenth annual CRYPTO conference, all of which have been held at UCSB. The conference was very enjoyable and ran very smoothly, largely due to the efforts of the General Chair, Paul Van Oorschot. It was a pleasure working with Paul throughout the months leading up to the conference.

There were 136 submitted papers which were considered by the Program Committee. Of these, 38 were selected for presentation at the conference. There was also one invited talk at the conference, presented by Miles Smid, the title of which was “A Status Report on the Federal Government Key Escrow System.”

The conference also included the customary Rump Session, which was presided over by Whit Diffie in his usual inimitable fashion. Thanks again to Whit for organizing and running the Rump Session. This year, the Rump Session included an interesting and lively panel discussion on issues pertaining to key escrowing. Those taking part were W. Diffie, J. Gilmore, S. Goldwasser, M. Hellman, A. Herzberg, S. Micali, R. Rueppel, G. Simmons and D. Weitzner.

Table of Contents

Frontmatter

Cryptosystems

Efficient Signature Schemes Based on Birational Permutations

Many public key cryptographic schemes (such as cubic RSA) are based on low degree polynomials whose inverses are high degree polynomials. These functions are very easy to compute but time consuming to invert even by their legitimate users. To overcome this problem, it is natural to consider the class of birational permutations ƒ over k-tuples of numbers, in which both ƒ and ƒ−1 are low degree rational functions. In this paper we develop two new families of birational permutations, and discuss their cryptographic applications.

Adi Shamir
A new identification scheme based on syndrome decoding

Zero-knowledge proofs were introduced in 1985, in a paper by Goldwasser, Micali and Rackoff ([6]). Their practical significance was soon demonstrated in the work of Fiat and Shamir ([4]), who turned zero-knowledge proofs of quadratic residuosity into efficient means of establishing user identities. Still, as is almost always the case in public-key cryptography, the Fiat-Shamir scheme relied on arithmetic operations on large numbers. In 1989, there were two attempts to build identification protocols that only use simple operations (see [11,10]). One appeared in the EUROCRYPT proceedings and relies on the intractability of some coding problems, the other was presented at the CRYPTO rump session and depends on the so-called Permuted Kernel problem (PKP). Unfortunately, the first of the schemes was not really practical. In the present paper, we propose a new identification scheme, based on error-correcting codes, which is zero-knowledge and is of practical value. Furthermore, we describe several variants, including one which has an identity-based character. The security of our scheme depends on the hardness of decoding a word of given syndrome w.r.t. some binary linear error-correcting code.

Jacques Stern
The Shrinking Generator

We present a new construction of a pseudorandom generator based on a simple combination of two LFSRs. The construction has attractive properties such as simplicity (conceptual and implementation-wise), scalability (hardware and security), proven minimal security conditions (exponential period, exponential linear complexity, good statistical properties), and resistance to known attacks. The construction is suitable for practical implementation of efficient stream cipher cryptosystems.

Don Coppersmith, Hugo Krawczyk, Yishay Mansour
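
The combination described above is simple enough to sketch: a data LFSR A and a selection LFSR S are clocked together, and the current bit of A is emitted only when S outputs 1. A minimal Python sketch follows; the register widths, tap positions and seeds are illustrative choices, not taken from the paper (maximal period requires primitive feedback polynomials):

```python
def lfsr(state, taps, nbits):
    """Fibonacci LFSR: emit the low bit, shift in the XOR of the tap bits."""
    while True:
        out = state & 1
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (nbits - 1))
        yield out

def shrinking_generator(a_bits, s_bits, n):
    """Output bit A_i whenever selector bit S_i == 1; discard it otherwise."""
    out = []
    while len(out) < n:
        a = next(a_bits)
        s = next(s_bits)
        if s == 1:
            out.append(a)
    return out
```

In hardware this is just two shift registers and an output gate, which is the simplicity and scalability the abstract refers to.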

Stream Ciphers and Cryptographic Functions

An Integrity Check Value Algorithm for Stream Ciphers

A method of calculating an integrity check value (icv) with the use of a stream cipher is presented. The strength of the message integrity this provides is analysed and proven to be dependent on the unpredictability of the stream cipher used. A way of efficiently providing both integrity and encryption with the use of a single stream cipher is also explained. Note that the method of providing message integrity, used with or without encryption, is not subject to a number of attacks that succeed against many conventional integrity schemes. Specifically, any legitimate message-icv pair that is copied or removed and subsequently replayed will have an appropriately small chance of deceiving the receiver. Furthermore, any message-icv pair generated by an attacker and injected into the communication channel will have an appropriately small chance of escaping detection unless the attacker has actually broken the stream cipher. This is the case even if the attacker obtains any number of chosen messages and corresponding icvs or performs any number of calculations.

Richard Taylor
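
The general shape of such a scheme, an icv computed from the message together with keystream from the stream cipher, can be sketched as follows. This is NOT Taylor's algorithm (the paper specifies its own); it is an illustrative stand-in: an inner-product universal hash over a prime field whose coefficients and final masking word are drawn from the keystream:

```python
P = 2**31 - 1  # Mersenne prime modulus for the toy hash

def icv(message: bytes, keystream: bytes) -> int:
    """Toy integrity check value: hash the message with keystream-derived
    coefficients, then mask the result with one more keystream word.
    Illustrative only -- not the algorithm from the paper."""
    blocks = [message[i:i + 4] for i in range(0, len(message), 4)]
    assert len(keystream) >= 4 * (len(blocks) + 1), "keystream too short"
    acc = 0
    for i, blk in enumerate(blocks):
        m = int.from_bytes(blk.ljust(4, b'\0'), 'big')
        k = int.from_bytes(keystream[4 * i:4 * i + 4], 'big')
        acc = (acc + m * k) % P
    mask = int.from_bytes(keystream[4 * len(blocks):4 * len(blocks) + 4], 'big')
    return (acc + mask) % P
```

Because fresh keystream words mask each tag, a replayed or altered message-icv pair fails verification unless the attacker can predict the keystream.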
Nonlinearly Balanced Boolean Functions and Their Propagation Characteristics
Extended Abstract

Three of the most important criteria for cryptographically strong Boolean functions are the balancedness, the nonlinearity and the propagation criterion. This paper studies systematic methods for constructing Boolean functions satisfying some or all of the three criteria. We show that concatenating, splitting, modifying and multiplying sequences can yield balanced Boolean functions with a very high nonlinearity. In particular, we show that balanced Boolean functions obtained by modifying and multiplying sequences achieve a nonlinearity higher than that attainable by any previously known construction method. We also present methods for constructing highly nonlinear balanced Boolean functions satisfying the propagation criterion with respect to all but one or three vectors. A technique is developed to transform the vectors where the propagation criterion is not satisfied in such a way that the functions constructed satisfy the propagation criterion of high degree while preserving the balancedness and nonlinearity of the functions. The algebraic degrees of functions constructed are also discussed, together with examples illustrating the various constructions.

Jennifer Seberry, Xian-Mo Zhang, Yuliang Zheng

Proof Systems and Zero-knowledge

A Low Communication Competitive Interactive Proof System for Promised Quadratic Residuosity

A notion of “competitive” interactive proof systems was defined by Bellare and Goldwasser as a natural extension of the question of whether computing a witness w of x ∈ L is harder than deciding x ∈ L for a language L ∈ NP. It is widely believed that quadratic residuosity (QR) does not have a competitive interactive proof system. Bellare and Goldwasser, however, introduced a notion of “representative” of Z_N^* and showed that there exists a competitive interactive proof system for promised QR, i.e., when the modulus N is guaranteed to be the product of k = O(log log |N|) distinct odd primes. In this paper, we consider how to reduce the communication complexity of a competitive interactive proof system for promised QR and how to relax the constraint on k from O(log log |N|) to O(log |N|). To do this, we introduce a notion of “dominant” of Z_N^* and show that promised QR with the constraint that k = O(log |N|) has a competitive interactive proof system with considerably lower communication complexity.

Toshiya Itoh, Masafumi Hoshi, Shigeo Tsujii
Secret Sharing and Perfect Zero Knowledge

In this work we study relations between secret sharing and perfect zero knowledge in the non-interactive model. Both secret sharing schemes and non-interactive zero knowledge are important cryptographic primitives with several applications in the management of cryptographic keys, in multi-party secure protocols, and many other areas. Secret sharing schemes are very well-studied objects, while non-interactive perfect zero-knowledge proofs seem to be very elusive. In fact, since the introduction of the non-interactive model for zero knowledge, the only perfect zero-knowledge proof known was for quadratic non-residues.

In this work, we show that a large class of languages related to quadratic residuosity admits non-interactive perfect zero-knowledge proofs. More precisely, we give a protocol for proving non-interactively and in perfect zero knowledge the veridicity of any “threshold” statement whose atoms are statements about the quadratic character of input elements. We show that our technique is very general and extend this result to any secret sharing scheme (of which threshold schemes are just an example).

A. De Santis, G. Di Crescenzo, G. Persiano
One Message Proof Systems with Known Space Verifiers

We construct a proof system for any NP statement, in which the proof is a single message sent from the prover to the verifier. No other interaction is required, neither before nor after this single message is sent. In the “envelope” model, the prover sends a sequence of envelopes to the verifier, where each envelope contains one bit of the prover’s proof. It suffices for the verifier to open a constant number of envelopes in order to verify the correctness of the proof (in a probabilistic sense). Even if the verifier opens polynomially many envelopes, the proof remains perfectly zero knowledge.

We transform this proof system to the “known-space verifier” model of De-Santis et al. [7]. In this model it suffices for the verifier to have space S_min in order to verify the proof, and the proof should remain statistically zero knowledge with respect to verifiers that use space at most S_max. We resolve an open question of [7], showing that arbitrary ratios S_max/S_min are achievable. However, we question the extent to which these proof systems (that of [7] and ours) are really zero knowledge. We do show that our proof system is witness indistinguishable, and hence has applications in cryptographic scenarios such as identification schemes.

Yonatan Aumann, Uriel Feige
Interactive Hashing can Simplify Zero-Knowledge Protocol Design Without Computational Assumptions
Extended Abstract

We show that any 3-round protocol (in general, any bounded round protocol) in which the verifier sends only random bits, and which is zero-knowledge against an honest verifier, can be transformed into a protocol that is zero-knowledge in general. The transformation is based on the interactive hashing technique of Naor, Ostrovsky, Venkatesan and Yung. No assumption is made on the computing power of prover or verifier, and the transformation therefore is valid in both the proof and argument model, and does not rely on any computational assumptions such as the existence of one-way permutations. The technique is also applicable to proofs of knowledge. The transformation preserves perfect and statistical zero-knowledge. As corollaries, we show first a generalization of a result by Damgård on the construction of bit commitments from zero-knowledge proofs. Other corollaries give results on non-interactive zero-knowledge, one-sided proof systems, and black-box simulation.

Ivan B. Damgård

Secret Sharing

Fully Dynamic Secret Sharing Schemes

We consider secret sharing schemes in which the dealer has the feature of being able (after a preprocessing stage) to activate a particular access structure out of a given set and/or to allow the participants to reconstruct different secrets (in different time instants) by sending to all participants the same broadcast message. In this paper we establish a formal setting to study such secret sharing schemes. The security of the schemes presented is unconditional, since they are not based on any computational assumption. We give bounds on the size of the shares held by participants and on the size of the broadcast message in such schemes.

C. Blundo, A. Cresti, A. De Santis, U. Vaccaro
Multisecret Threshold Schemes

A threshold scheme is a system that protects a secret (key) among a group of participants in such a way that it can only be reconstructed from the joint information held by some predetermined number of these participants. In this paper we extend this problem to one where there is more than one secret that participants can reconstruct using the information that they hold. In particular we consider the situation where there is a secret sK associated with each k-subset K of participants and sK can be reconstructed by any group of t participants in K (t ≤ k). We establish bounds on the minimum amount of information that participants must hold in order to ensure that up to w participants (0 ≤ w ≤ n − k + t − 1) cannot obtain any information about a secret with which they are not associated. We also discuss examples of systems that satisfy this bound.

Wen-Ai Jackson, Keith M. Martin, Christine M. O’Keefe
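
The single-secret construction underlying such threshold schemes is Shamir's polynomial scheme: the dealer hides the secret in the constant term of a random degree-(t − 1) polynomial over a prime field and hands each participant one point evaluation; any t points determine the polynomial, while t − 1 reveal nothing. A minimal sketch (the field size is an illustrative choice, and a real dealer would use a cryptographic RNG rather than Python's `random`):

```python
import random

P = 2**127 - 1  # a Mersenne prime large enough for 16-byte secrets

def share(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

The multisecret schemes of the paper generalize this setting; the sketch shows only the classic one-secret case.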
Secret Sharing Made Short

A well-known fact in the theory of secret sharing schemes is that shares must be at least as long as the secret itself. However, the proof of this lower bound uses the notion of information-theoretic secrecy. A natural (and very practical) question is whether one can do better for secret sharing if the notion of secrecy is computational, namely, against resource-bounded adversaries. In this note we observe that, indeed, one can do much better in the computational model (which is the one used in most applications).

We present an m-threshold scheme, where m shares recover the secret but m − 1 shares give no (computational) information on the secret, in which shares corresponding to a secret S are of size |S|/m plus a short piece of information whose length does not depend on the secret size but only on the security parameter. (The bound of |S|/m is clearly optimal if the secret is to be recovered from m shares.) Therefore, for moderately large secrets (a confidential file, a long message, a large database) the savings in space and communication over traditional schemes are remarkable. The scheme is very simple and combines in a natural way traditional (perfect) secret sharing schemes, encryption, and information dispersal. It is provably secure given a secure (e.g., private-key) encryption function.

Hugo Krawczyk
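
The combination can be sketched in the degenerate m-out-of-m case (every share needed): encrypt the secret under a short key, XOR-share the key, and split the ciphertext evenly, so each share is |S|/m ciphertext bytes plus a 16-byte key share. This is a simplified sketch only; the full scheme replaces the even split with Rabin-style information dispersal to get m-out-of-n, and the SHA-256 counter-mode keystream here is a stand-in cipher, not from the paper:

```python
import hashlib
import secrets

def _stream(key: bytes, n: int) -> bytes:
    """Counter-mode keystream from SHA-256 (stand-in for a secure cipher)."""
    out = b''
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, 'big')).digest()
        ctr += 1
    return out[:n]

def share(secret: bytes, m: int):
    """m-of-m computational sharing: |share| = |secret|/m + 16 key bytes."""
    key = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(secret, _stream(key, len(secret))))
    key_shares = [secrets.token_bytes(16) for _ in range(m - 1)]
    last = key
    for ks in key_shares:
        last = bytes(a ^ b for a, b in zip(last, ks))
    key_shares.append(last)  # XOR of all m key shares equals the key
    chunk = (len(ct) + m - 1) // m
    return [(key_shares[i], ct[i * chunk:(i + 1) * chunk]) for i in range(m)]

def reconstruct(shares):
    """Shares must be supplied in order: the ciphertext chunks concatenate."""
    key = bytes(16)
    for ks, _ in shares:
        key = bytes(a ^ b for a, b in zip(key, ks))
    ct = b''.join(c for _, c in shares)
    return bytes(a ^ b for a, b in zip(ct, _stream(key, len(ct))))
```

Note how the per-share overhead (the 16-byte key share) is independent of |S|, which is the point of the abstract's size claim.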

Number Theory and Algorithms

A Subexponential Algorithm for Discrete Logarithms over All Finite Fields

There are numerous subexponential algorithms for computing discrete logarithms over certain classes of finite fields. However, there appears to be no published subexponential algorithm for computing discrete logarithms over all finite fields. We present such an algorithm and a heuristic argument that there exists a c ∈ ℝ_{>0} such that for all sufficiently large prime powers p^n, the algorithm computes discrete logarithms over GF(p^n) within expected time:

$$ e^{c(\log(p^n)\,\log\log(p^n))^{1/2}} $$

Leonard M. Adleman, Jonathan DeMarrais
An implementation of the general number field sieve
Extended abstract

It was shown in [2] that under reasonable assumptions the general number field sieve (GNFS) is the asymptotically fastest known factoring algorithm. It is, however, not known how this algorithm behaves in practice. In this report we describe practical experience with our implementation of the GNFS whose first version was completed in January 1993 at the Department of Computer Science at the Universität des Saarlandes.

J. Buchmann, J. Loho, J. Zayer
On the factorization of RSA-120

We present data concerning the factorization of the 120-digit number RSA-120, which we factored on July 9, 1993, using the quadratic sieve method. The factorization took approximately 825 MIPS years and was completed within three months real time. At the time of writing RSA-120 is the largest integer ever factored by a general purpose factoring algorithm. We also present some conservative extrapolations to estimate the difficulty of factoring even larger numbers, using either the quadratic sieve method or the number field sieve, and discuss the issue of the crossover point between these two methods.

T. Denny, B. Dodson, A. K. Lenstra, M. S. Manasse
Comparison of three modular reduction functions

Three modular reduction algorithms for large integers are compared with respect to their performance in portable software: the classical algorithm, Barrett’s algorithm and Montgomery’s algorithm. These algorithms are a time-critical step in the implementation of the modular exponentiation operation. For each of these algorithms their application in the modular exponentiation operation is considered. Modular exponentiation constitutes the basis of many well-known and widely used public key cryptosystems. A fast and portable modular exponentiation will considerably enhance the speed and applicability of these systems.

Antoon Bosselaers, René Govaerts, Joos Vandewalle
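
Of the three algorithms compared, Montgomery's is the one most often favored for repeated multiplications, because it replaces division by the modulus with shifts and masks by a power of two. A sketch of the core REDC step (the moduli and word sizes in the usage are illustrative choices):

```python
def mont_setup(n: int, rbits: int) -> int:
    """Precompute n' with n * n' == -1 (mod R), R = 2**rbits; n must be odd."""
    R = 1 << rbits
    assert n % 2 == 1 and n < R
    return (-pow(n, -1, R)) % R

def mont_redc(T: int, n: int, rbits: int, n_prime: int) -> int:
    """REDC: return T * R^-1 mod n for 0 <= T < n*R, with no division by n."""
    mask = (1 << rbits) - 1
    m = ((T & mask) * n_prime) & mask   # m = T * n' mod R
    t = (T + m * n) >> rbits            # exact shift: T + m*n == 0 (mod R)
    return t - n if t >= n else t

def mont_mul(a_m: int, b_m: int, n: int, rbits: int, n_prime: int) -> int:
    """Multiply two Montgomery-form values (x_m = x * R mod n)."""
    return mont_redc(a_m * b_m, n, rbits, n_prime)
```

A modular exponentiation keeps all operands in Montgomery form throughout, converting once on entry and once on exit, which is why the per-multiplication savings dominate.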

Differential Cryptanalysis

Differential Cryptanalysis of Lucifer

Differential cryptanalysis was introduced as an approach to analyze the security of DES-like cryptosystems. The first example of a DES-like cryptosystem was Lucifer, the direct predecessor of DES, which is still believed by many people to be much more secure than DES, since it has 128 key bits, and since no attacks against (the full variant of) Lucifer were ever reported in the cryptographic literature. In this paper we introduce a new extension of differential cryptanalysis, devised to extend the class of vulnerable cryptosystems. This new extension suggests key-dependent characteristics, called conditional characteristics, selected to enlarge the characteristics’ probabilities for keys in subsets of the key space. The application of conditional characteristics to Lucifer shows that more than half of the keys of Lucifer are insecure, and the attack requires about 2^36 complexity and chosen plaintexts to find these keys. The same extension can also be used to attack a new variant of DES, called RDES, which was designed to be immune against differential cryptanalysis. These new attacks shed new light on the design of DES, and show that the transition from Lucifer to DES strengthened the latter cryptosystem.

Ishai Ben-Aroya, Eli Biham
Differential Attack on Message Authentication Codes

We discuss the security of Message Authentication Code (MAC) schemes from the viewpoint of differential attack, and propose an attack that is effective against DES-MAC and FEAL-MAC. The attack derives the secret authentication key in the chosen plaintext scenario. For example, DES(8-round)-MAC can be broken with 2^34 pairs of plaintext, while FEAL8-MAC can be broken with 2^22 pairs. The proposed attack is applicable to any MAC scheme, even if 32 bits are randomly selected from among the 64 bits of ciphertext generated by a cryptosystem vulnerable to differential attack in the chosen plaintext scenario.

Kazuo Ohta, Mitsuru Matsui
Cryptanalysis of the CFB mode of the DES with a reduced number of rounds

Three attacks on the DES with a reduced number of rounds in the Cipher Feedback Mode (CFB) are studied, namely a meet in the middle attack, a differential attack, and a linear attack. These attacks are based on the same principles as the corresponding attacks on the ECB mode. They are compared to the three basic attacks on the CFB mode. In 8-bit CFB and with 8 rounds instead of 16, a differential attack with 2^39.4 chosen ciphertexts can find 3 key bits, and a linear attack with 2^31 known plaintexts can find 7 key bits. This suggests that it is not safe to reduce the number of rounds in order to improve the performance. Moreover, it is shown that the final permutation has some cryptographic significance in the CFB mode.

Bart Preneel, Marnix Nuttin, Vincent Rijmen, Johan Buelens
Weak Keys for IDEA

Large classes of weak keys have been found for the block cipher algorithm IDEA, previously known as IPES [2]. IDEA has a 128-bit key and encrypts blocks of 64 bits. For a class of 2^23 keys IDEA exhibits a linear factor. For a certain class of 2^35 keys the cipher has a global characteristic with probability 1. For another class of 2^51 keys only two encryptions and solving a set of 16 nonlinear Boolean equations with 12 variables is sufficient to test whether the used key belongs to this class. If it does, its particular value can be calculated efficiently. It is shown that the problem of weak keys can be eliminated by slightly modifying the key schedule of IDEA.

Joan Daemen, René Govaerts, Joos Vandewalle

Complexity Theory

Entity Authentication and Key Distribution

We provide the first formal treatment of entity authentication and authenticated key distribution appropriate to the distributed environment. Addressed in detail are the problems of mutual authentication and authenticated key exchange for the symmetric, two-party setting. For each we present a definition, protocol, and proof that the protocol meets its goal, assuming only the existence of a pseudorandom function.

Mihir Bellare, Phillip Rogaway
On the Existence of Statistically Hiding Bit Commitment Schemes and Fail-Stop Signatures

We show that the existence of a statistically hiding bit commitment scheme with non-interactive opening and public verification implies the existence of fail-stop signatures. Therefore such signatures can now be based on any one-way permutation — the weakest assumption known to be sufficient for fail-stop signatures. We also show that genuinely practical fail-stop signatures follow from the existence of any collision-intractable hash function. A similar idea is used to improve a commitment scheme of Naor and Yung, so that one can commit to several bits with amortized O(1) bits of communication per bit committed to.

Conversely, we show that any fail-stop signature scheme with a property we call the almost unique secret key property can be transformed into a statistically hiding bit commitment scheme. All previously known fail-stop signature schemes have this property. We even obtain an equivalence since we can modify the construction of fail-stop signatures from bit commitments such that it has this property.

Ivan B. Damgård, Torben P. Pedersen, Birgit Pfitzmann
Joint Encryption and Message-Efficient Secure Computation

This paper connects two areas of recent cryptographic research: secure distributed computation and group-oriented cryptography. We construct a probabilistic public-key encryption scheme with the following properties:

— It is easy to encrypt using the public keys of any subset of parties, such that it is hard to decrypt without the cooperation of every party in the subset.
— It is easy for any private key holder to give a “witness” of its contribution to the decryption (e.g., for parallel decryption).
— It is “blindable”: from an encrypted bit it is easy for anyone to compute a uniformly random encryption of the same bit.
— It is “xor-homomorphic”: from two encrypted bits it is easy for anyone to compute an encryption of their xor.
— It is “compact”: the size of an encryption does not depend on the number of participants.

Using this joint encryption scheme as a tool, we show how to reduce the message complexity of secure computation versus a passive adversary (gossiping faults).

Matthew Franklin, Stuart Haber
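
The xor-homomorphic and blindable properties listed in the abstract are exactly those of classic Goldwasser-Micali encryption, which makes a useful single-key reference point (the paper's joint, multi-party scheme is a different construction). A toy sketch with deliberately tiny parameters, far too small for any real security:

```python
import math
import random

# toy factors -- illustrative only, far too small for real use
p, q = 499, 547
N = p * q

def _is_qr(c: int, prime: int) -> bool:
    """Euler's criterion: is c a quadratic residue mod an odd prime?"""
    return pow(c % prime, (prime - 1) // 2, prime) == 1

# y: a quadratic non-residue modulo both factors (Jacobi symbol +1 mod N)
y = next(x for x in range(2, N) if not _is_qr(x, p) and not _is_qr(x, q))

def encrypt(bit: int) -> int:
    r = random.randrange(2, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(2, N)
    return (pow(y, bit, N) * r * r) % N

def decrypt(c: int) -> int:
    # a valid ciphertext encrypts 0 iff it is a square mod N; checking
    # mod p suffices because y is a non-residue mod p as well
    return 0 if _is_qr(c, p) else 1

def xor_cipher(c1: int, c2: int) -> int:
    """Homomorphic XOR: the product encrypts b1 ^ b2."""
    return (c1 * c2) % N

def blind(c: int) -> int:
    """Re-randomize a ciphertext by multiplying in a fresh encryption of 0."""
    return xor_cipher(c, encrypt(0))
```

Both properties fall out of multiplication modulo N, which is why they compose so cheaply.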
Cryptographic Primitives Based on Hard Learning Problems

Modern cryptography has had considerable impact on the development of computational learning theory. Virtually every intractability result in Valiant’s model [13] (which is representation-independent in the sense that it does not rely on an artificial syntactic restriction on the learning algorithm’s hypotheses) has at its heart a cryptographic construction [4, 9, 1, 10]. In this paper, we give results in the reverse direction by showing how to construct several cryptographic primitives based on certain assumptions on the difficulty of learning. In doing so, we develop further a line of thought introduced by Impagliazzo and Levin [6].

Avrim Blum, Merrick Furst, Michael Kearns, Richard J. Lipton

Applications

Extensions of Single-term Coins

We show how the electronic cash scheme in [Fer93a] can be extended to provide n-spendable coins. Furthermore, we show how observers can be incorporated in the protocols to provide prior restraint against double-spending by the user, instead of just detection after the fact.

Niels Ferguson
Untraceable Off-line Cash in Wallet with Observers
Extended abstract

Incorporating the property of untraceability of payments into off-line electronic cash systems has turned out to be no easy matter. Two key concepts have been proposed in order to attain the same level of security against double-spending as can be trivially attained in systems with full traceability of payments.

The first of these, one-show blind signatures, ensures traceability of double-spenders after the fact. The realizations of this concept that have been proposed unfortunately require either a great sacrifice in efficiency or seem to have questionable security, if not both.

The second concept, wallets with observers, guarantees prior restraint of double-spending, while still offering traceability of double-spenders after the fact in case tamper-resistance is compromised. No realization of this concept has yet been proposed in the literature, which is a serious problem. It seems that the known cash systems cannot be extended to this important setting without significantly worsening the problems related to efficiency and security.

We introduce a new primitive that we call restrictive blind signatures. In conjunction with the so-called representation problem in groups of prime order, this gives rise to highly efficient off-line cash systems that can be extended at virtually no extra cost to wallets with observers under the most stringent of privacy requirements. The workload for the observer is so small that it can be performed by a tamper-resistant smart card capable of performing the Schnorr identification scheme.

We also introduce new extensions in functionality (unconditional protection against framing, anonymous accounts, multi-spendable coins) and improve some known constructions (computational protection against framing, electronic checks). The security of our cash system and all its extensions can be derived directly from the security of the well-known Schnorr identification and signature schemes, and the security of our new primitive.

Stefan Brands
Discreet Solitary Games

Cryptographic techniques have been used intensively in the past to show how to play multiparty games in an adversarial scenario. We now investigate the cryptographic power of a deck of cards in a solitary scenario. In particular, we show how a person can select a random permutation satisfying a certain criterion discreetly (without knowing which one was picked) using a simple deck of cards. We also show how it is possible using cards to play games of partial information such as POKER, BRIDGE and other card games in solitary.

Claude Crépeau, Joe Kilian

Authentication Codes

On Families of Hash Functions via Geometric Codes and Concatenation

In this paper we use coding theory to give simple explanations of some recent results on universal hashing. We first apply our approach to give a precise and elegant analysis of the Wegman-Carter construction for authentication codes. Using Reed-Solomon codes and the well-known concept of concatenated codes we can then give some new constructions, which require much less key size than previously known constructions. The relation to coding theory allows the use of codes from algebraic curves for the construction of hash functions. In particular, we show how codes derived from Artin-Schreier curves, Hermitian curves and Suzuki curves yield good classes of universal hash functions.

Jürgen Bierbrauer, Thomas Johansson, Gregory Kabatianskii, Ben Smeets
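
The Reed-Solomon view of the Wegman-Carter construction can be stated directly: the message blocks are the coefficients of a polynomial over a finite field, the secret key is an evaluation point, the hash is the evaluation, and a second key element one-time-pads the result. A small sketch over a Mersenne prime field (field and key values illustrative):

```python
P = 2**61 - 1  # prime field; the hash key is a single field element

def rs_hash(blocks, alpha):
    """Evaluate the message polynomial at the secret point alpha (Horner).
    Two distinct l-block messages collide for at most l - 1 of the P keys,
    since their nonzero difference polynomial has at most l - 1 roots."""
    acc = 0
    for b in blocks:
        acc = (acc * alpha + b) % P
    return acc

def wc_mac(blocks, alpha, pad):
    """Wegman-Carter: universal hash, then add a fresh one-time-pad element."""
    return (rs_hash(blocks, alpha) + pad) % P
```

The key-size savings in the abstract come from refining this picture with concatenated and algebraic-geometry codes; the sketch shows only the Reed-Solomon core.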
On the Construction of Perfect Authentication Codes that Permit Arbitration

Authentication codes that permit arbitration are codes that unconditionally protect against deceptions from outsiders and additionally also protect against some forms of deceptions from insiders. Simmons introduced this authentication model and he also showed a way of constructing such codes, called the Cartesian product construction. We present a general way of constructing such codes and we also derive two specific classes of such codes: one that is perfect in the sense that it meets the lower bounds on the size of the transmitter’s and the receiver’s keys, and one that allows the number of source states to be chosen arbitrarily large.

Thomas Johansson
Codes for Interactive Authentication

An authentication protocol is a procedure by which an informant tries to convey n bits of information, which we call an input message, to a recipient. An intruder, I, controls the network over which the informant and the recipient talk and may change any message before it reaches its destination. If the protocol has security p, then the recipient must detect such cheating with probability at least 1 − p. This paper is devoted to characterizing the amount of secret information that the sender and recipient must share in a p-secure protocol.

We provide a single-round authentication protocol which requires log(n) + 5 log(1/p) bits of secrecy, as well as a single-round protocol which requires log(n) + 2 log(1/p) bits of secrecy based on non-constructive random codes. We prove a lower bound of log(n) + log(1/p) secret bits for single-round protocols.

We introduce authentication protocols with more than one round of communication (multi-round protocols) and present a k-round protocol which reduces the amount of secret information that the two parties need to log^(k)(n) + 5 log(1/p), where log^(k) denotes the k-times iterated logarithm. When the number of rounds is log*(n), our protocol requires 2 log(1/p) + O(1) bits. Hence interaction helps when log(n) > log(1/p). We also show a lower bound of log^(k)(n) on the number of shared random bits in a k-round protocol.

Pete Gemmell, Moni Naor

Hash Functions

Hash functions based on block ciphers: a synthetic approach

Constructions for hash functions based on a block cipher are studied where the size of the hashcode is equal to the block length of the block cipher and where the key size is approximately equal to the block length. A general model is presented, and it is shown that this model covers 9 schemes that have appeared in the literature. Within this general model 64 possible schemes exist, and it is shown that 12 of these are secure; they can be reduced to 2 classes based on linear transformations of variables. The properties of these 12 schemes with respect to weaknesses of the underlying block cipher are studied. The same approach can be extended to study keyed hash functions (MACs) based on block ciphers and hash functions based on modular arithmetic. Finally a new attack is presented on a scheme suggested by R. Merkle.

Bart Preneel, René Govaerts, Joos Vandewalle
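One well-known secure scheme in this single-length model is the Davies-Meyer mode, H_i = E(M_i, H_{i-1}) XOR H_{i-1}, where the message block keys the cipher and the feed-forward XOR makes each step hard to invert. The sketch below shows the mode only; the toy 64-bit Feistel "cipher" is an assumption (deliberately insecure) standing in for a real block cipher so the example runs self-contained.

```python
# Davies-Meyer hashing mode over a TOY block cipher (illustration only).

MASK32 = 0xFFFFFFFF

def toy_encrypt(key, block, rounds=16):
    """Toy 64-bit Feistel network keyed by a 64-bit integer. NOT secure."""
    l, r = block >> 32, block & MASK32
    for i in range(rounds):
        k = (key >> (i % 2 * 32)) & MASK32
        f = (((r * 2654435761 + k + i) & MASK32) >> 3) ^ r  # toy round function
        l, r = r, l ^ f
    return (l << 32) | r

def davies_meyer_hash(message_blocks, iv=0x0123456789ABCDEF):
    """H_i = E(M_i, H_{i-1}) XOR H_{i-1}; the XOR feed-forward is essential."""
    h = iv
    for m in message_blocks:
        h = toy_encrypt(m, h) ^ h
    return h

digest = davies_meyer_hash([0xDEAD, 0xBEEF, 0xCAFE])
```

Without the feed-forward, anyone could run the cipher backwards from the hashcode, which is one reason only 12 of the 64 candidate schemes in the model survive.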
Security of Iterated Hash Functions Based on Block Ciphers

Cryptographic hash functions obtained by iterating a round function constructed from a block cipher, and for which the hash-code length is twice the block length m of the underlying block cipher, are considered. The computational security of such hash functions against two particular attacks, namely, the free-start target and free-start collision attacks, is investigated; these two attacks differ from the “usual” target and collision attacks in that they do not specify the initial value of the iterations. The motivation is that an iterated hash function that is computationally secure against these two particular attacks is also computationally secure against the “usual” target and collision attacks. For a general class of such 2m-bit iterated hash functions, tighter upper bounds than those previously published in the literature on the complexity of free-start target and free-start collision attacks are derived. A proposal for a 2m-bit iterated hash function achieving these upper bounds is made; this new proposal is shown to be computationally more secure against free-start target and free-start collision attacks than some of the already proposed schemes falling into this general class. It is also shown that our proposal is better than the present proposal for an ISO standard in the sense that both schemes achieve these upper bounds, but only one encryption is required in our proposal for hashing one m-bit message block, as opposed to two encryptions in the ISO proposal. Finally, two new attacks on the LOKI Double-Block-Hash function are presented with lower complexities than the known ones.

Walter Hohl, Xuejia Lai, Thomas Meier, Christian Waldvogel

Cryptanalysis

Improved Algorithms for the Permuted Kernel Problem

In 1989, Adi Shamir published a new asymmetric identification scheme, based on the intractability of the Permuted Kernel Problem (PKP) [3]. In 1992, an algorithm to solve the PKP problem was suggested by J. Georgiades [2], and also in 1992 T. Baritaud, M. Campana, P. Chauvaud and H. Gilbert [1] independently found another algorithm for this problem. These algorithms still need a huge amount of time and/or memory in order to solve the PKP problem with the values suggested by A. Shamir. In this paper, we will see that it is possible to solve the PKP problem using less time than was needed in [1] and [2], and much less memory than was needed in [1]. First we will investigate how the ideas of [1] and [2] can be combined. This will enable us to obtain a small reduction in the time needed. Then, some new ideas will enable us to obtain a considerable reduction in the memory required, and another small reduction in time. Since our new algorithms are quicker and more practical than previous algorithms, they confirm the idea stated in [1] that for strong security requirements, the smallest values (n = 32, m = 16, p = 251) mentioned in [3] are not recommended.

Jacques Patarin, Pascal Chauvaud
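For readers unfamiliar with the underlying problem, the hedged sketch below states PKP directly and solves a toy instance by exhaustive search; the instance, field size, and helper names are illustrative assumptions, and the algorithms in the paper are of course far more refined than brute force.

```python
# Permuted Kernel Problem (PKP), toy form: given an m x n matrix A over
# GF(p) and a vector V of n entries, find a permutation of V's entries
# that lies in the kernel of A (i.e. A applied to it is the zero vector).
# Brute force is only feasible for tiny n; Shamir's parameters make it
# intractable, which is what the identification scheme relies on.

from itertools import permutations

def in_kernel(a, v, p):
    """Check A * v == 0 over GF(p)."""
    return all(sum(row[i] * v[i] for i in range(len(v))) % p == 0 for row in a)

def solve_pkp(a, v, p):
    """Exhaustively try every permutation of v's entries."""
    for perm in permutations(v):
        if in_kernel(a, perm, p):
            return perm
    return None

# usage: one equation x + 2y + 3z = 0 mod 7 with entries {1, 1, 2}
solution = solve_pkp([[1, 2, 3]], [1, 1, 2], 7)  # (2, 1, 1): 2 + 2 + 3 = 7 = 0 mod 7
```

The search space is n! permutations, which is why even modest parameters like n = 32 were conjectured hard, a conjecture this paper's improved algorithms put under pressure for the smallest suggested values.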
On the Distribution of Characteristics in Composite Permutations

Differential cryptanalysis is a method of attacking iterated mappings which has been applied with varying success to a number of product ciphers and hash functions [1, 2]. Let ρ: Z_2^c × Z_2^m → Z_2^m be a mapping that takes c ‘control’ bits and m ‘data’ bits. The mapping ρ contains 2^c m-bit permutations π_i: Z_2^m → Z_2^m, 0 ≤ i ≤ 2^c − 1, one of which is selected (multiplexed) by the control bits, and a substitution is then performed on the data bits using the selected permutation. Such mappings will be called composite permutations. The S-boxes of DES are composite permutations of the form S_i: Z_2^2 × Z_2^4 → Z_2^4 with 2 control bits and 4 data bits. In differential cryptanalysis the attacker is interested in the largest entry in a given XOR table, and the fraction of the XOR table that is zero. In this paper we determine the distribution of characteristics in the XOR tables of composite permutations, which leads to approximations for the largest entry in the XOR table and the density of zero entries.

Luke O’Connor
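The XOR table the abstract refers to can be computed directly for small mappings. The sketch below builds it for an assumed 4-bit permutation (a DES-like S-box row, used purely as an example): entry DDT[a][b] counts the inputs x with π(x) ⊕ π(x ⊕ a) = b, and the attacker's two quantities of interest fall out immediately.

```python
# XOR (difference distribution) table of a small permutation.

def xor_table(perm):
    """DDT[a][b] = #{x : perm[x] ^ perm[x ^ a] == b}."""
    n = len(perm)
    ddt = [[0] * n for _ in range(n)]
    for x in range(n):
        for a in range(n):
            ddt[a][perm[x] ^ perm[x ^ a]] += 1
    return ddt

# example 4-bit permutation (illustrative S-box row)
pi = [14, 4, 13, 1, 2, 15, 11, 8, 3, 10, 6, 12, 5, 9, 0, 7]
ddt = xor_table(pi)

# the two statistics differential cryptanalysis cares about,
# excluding the trivial a = 0 row:
largest_entry = max(max(ddt[a]) for a in range(1, len(pi)))
zero_fraction = sum(ddt[a].count(0) for a in range(1, len(pi))) / (15 * 16)
```

Every row sums to 2^m and every entry is even, since inputs x and x ⊕ a contribute in pairs; the paper characterizes how the large entries and zeros distribute when such permutations are multiplexed by control bits.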
Remark on the Threshold RSA Signature Scheme

Shared generation of secure signatures, known as threshold signatures, was introduced by Desmedt and Frankel in 1991. A threshold signature scheme is not only a threshold scheme but also a signature scheme, so it should possess the properties of both. In this paper, we investigate conspiracy attacks on Desmedt and Frankel's threshold RSA signature scheme. We also discuss the requirements of a secure threshold signature scheme.

Chuan-Ming Li, Tzonelih Hwang, Narn-Yih Lee
Another Method for Attaining Security Against Adaptively Chosen Ciphertext Attacks

Practical approaches to constructing public key cryptosystems secure against chosen ciphertext attacks were first initiated by Damgard and further extended by Zheng and Seberry. In this paper we first point out that in some cryptosystems proposed by Zheng and Seberry the method for adding authentication capability may fail even under known plaintext attacks. Next, we present a new method for immunizing public key cryptosystems against adaptively chosen ciphertext attacks. In the proposed immunization method, the deciphering algorithm first checks that the ciphertext is legitimate and outputs the matching plaintext only when the check is successful. This is in contrast with Zheng and Seberry's methods, where the deciphering algorithm first recovers the plaintext and then outputs it only when a checking condition on it is satisfied. Such a ciphertext-based validity check will be particularly useful in group-oriented cryptosystems, where almost all deciphering operations are performed by third parties rather than the actual receiver.

Chae Hoon Lim, Pil Joong Lee
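A minimal sketch of the check-then-decrypt ordering, assuming symmetric stand-ins (a hash-based stream cipher and HMAC) rather than the paper's public-key construction: the receiver validates the ciphertext first and recovers no plaintext at all when the check fails.

```python
# Ciphertext-validity-first decryption, sketched with symmetric primitives.

import hmac, hashlib

def _keystream(key, n):
    """Counter-mode keystream from SHA-256 (illustrative, not a standard)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(enc_key, mac_key, plaintext):
    body = bytes(a ^ b for a, b in zip(plaintext, _keystream(enc_key, len(plaintext))))
    tag = hmac.new(mac_key, body, hashlib.sha256).digest()
    return body + tag

def decrypt(enc_key, mac_key, ciphertext):
    body, tag = ciphertext[:-32], ciphertext[-32:]
    # the validity check comes FIRST; an illegitimate ciphertext yields nothing
    expected = hmac.new(mac_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None
    return bytes(a ^ b for a, b in zip(body, _keystream(enc_key, len(body))))
```

Because the check depends only on the ciphertext, a third party can perform it without ever seeing plaintext, which is the property the abstract highlights for group-oriented settings.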
Attacks on the Birational Permutation Signature Schemes

Shamir presents in [3] a family of cryptographic signature schemes based on birational permutations of the integers modulo a large integer N of unknown factorization. These schemes are attractive because of the low computational requirements, both for signature generation and signature verification. However, the two schemes presented in Shamir’s paper are weak. We show here how to break the first scheme, by first reducing it algebraically to the earlier Ong-Schnorr-Shamir signature scheme, and then applying the Pollard solution to that scheme. We then show some attacks on the second scheme. These attacks give ideas which can be applied to schemes in this general family.

Don Coppersmith, Jacques Stern, Serge Vaudenay

Key Distribution

Interaction in Key Distribution Schemes
Extended Abstract

A (g, b) key distribution scheme allows conferences of g users to generate secret keys, so that disjoint coalitions of b users cannot gain any information on the key (in the information theoretic sense). In this work we study the relationships between interaction and space efficiency of key distribution schemes. We prove that interaction does not help in the context of unrestricted schemes. On the other hand, we show that for restricted schemes, which are secure for a limited number of conferences, interaction can substantially improve the space efficiency.

Amos Beimel, Benny Chor
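A classical example of a non-interactive (g = 2, b = t − 1) scheme in this spirit is Blom's scheme, sketched below under illustrative assumptions (a small prime field and Vandermonde public vectors): a dealer hands each user a vector derived from a secret symmetric matrix, and any two users can compute a common key with no interaction at all.

```python
# Blom-style pairwise key distribution (toy parameters, illustration only).

Q = 101  # small prime field, far too small for real use

def public_vec(u, t):
    """Public Vandermonde vector (1, u, u^2, ..., u^(t-1)) mod Q."""
    return [pow(u, i, Q) for i in range(t)]

def dealer(t, rng):
    """Dealer picks a random symmetric t x t matrix D over GF(Q)."""
    d = [[0] * t for _ in range(t)]
    for i in range(t):
        for j in range(i, t):
            d[i][j] = d[j][i] = rng.randrange(Q)
    return d

def user_secret(d, u, t):
    """User u privately receives D * v_u."""
    v = public_vec(u, t)
    return [sum(d[i][j] * v[j] for j in range(t)) % Q for i in range(t)]

def pair_key(secret_row, other_user, t):
    """Key for (u, w) is v_w^T * (D v_u); symmetry of D makes it shared."""
    v = public_vec(other_user, t)
    return sum(secret_row[i] * v[i] for i in range(t)) % Q
```

Each user stores t field elements and any t − 1 colluders learn nothing about other pairs' keys, illustrating the space-versus-resilience trade-off the abstract studies.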
Secret-Key Agreement without Public-Key Cryptography
Extended Abstract

In this paper, we describe novel approaches to secret-key agreement. Our schemes are based neither on public-key cryptography nor on number theory. They are extremely efficient when implemented in software, or they can make use of very simple, inexpensive hardware. Our technology is particularly well-suited for use in cryptographic scenarios like those of the Clipper Chip, the recent encryption proposal put forward by the Clinton Administration.

Tom Leighton, Silvio Micali
Broadcast Encryption

We introduce new theoretical measures for the qualitative and quantitative assessment of encryption schemes designed for broadcast transmissions. The goal is to allow a central broadcast site to broadcast secure transmissions to an arbitrary set of recipients while minimizing key management related transmissions. We present several schemes that allow a center to broadcast a secret to any subset of privileged users out of a universe of size n so that coalitions of k users not in the privileged set cannot learn the secret. The most interesting scheme requires every user to store O(k log k log n) keys and the center to broadcast O(k^2 log^2 k log n) messages regardless of the size of the privileged set. This scheme is resilient to any coalition of k users. We also present a scheme that is resilient with probability p against a random subset of k users. This scheme requires every user to store O(log k log(1/p)) keys and the center to broadcast O(k log^2 k log(1/p)) messages.

Amos Fiat, Moni Naor
Backmatter
Metadaten
Titel
Advances in Cryptology — CRYPTO’ 93
herausgegeben von
Douglas R. Stinson
Copyright-Jahr
1994
Verlag
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48329-8
Print ISBN
978-3-540-57766-9
DOI
https://doi.org/10.1007/3-540-48329-2