1983 | Book

Advances in Cryptology

Proceedings of Crypto 82

Edited by: David Chaum, Ronald L. Rivest, Alan T. Sherman

Publisher: Springer US


Table of Contents

Frontmatter

Algorithms and Theory

Frontmatter
Fast Computation of Discrete Logarithms in GF(q)

The Merkle-Adleman algorithm computes discrete logarithms in GF(q), the finite field with q elements, in subexponential time when q is a prime number p. This paper shows that similar asymptotic behavior can be obtained for the logarithm problem when q = p^m, in the case that m grows with p fixed. A method of partial precomputation, applicable to either problem, is also presented. The precomputation is particularly useful when many logarithms need to be computed for fixed values of p and m.
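For contrast with the subexponential methods discussed here, a generic square-root-time algorithm such as baby-step giant-step (not the Merkle-Adleman method) can be sketched in a few lines for a prime field:

```python
# Baby-step giant-step: solves g^x = y (mod p) in about sqrt(p) time and
# space.  Illustrative only -- the index-calculus methods discussed in the
# paper are subexponential, i.e., far faster for large p.
from math import isqrt

def discrete_log(g, y, p):
    """Return x with pow(g, x, p) == y, or None if no such x exists."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    step = pow(g, -m, p)                         # g^(-m) mod p (Python 3.8+)
    gamma = y % p
    for i in range(m):                           # giant steps y * g^(-i*m)
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * step % p
    return None

assert pow(2, discrete_log(2, 30, 101), 101) == 30
```

For q = p^m one works in a polynomial-basis representation of GF(p^m) rather than integers mod p, but the square-root barrier for generic methods is the same.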

Martin E. Hellman, Justin M. Reyneri
Some Remarks on the Herlestam-Johannesson Algorithm for Computing Logarithms over GF(2^p)

At the 1981 IEEE Symposium on Information Theory, T. Herlestam and R. Johannesson presented a heuristic method for computing logarithms over GF(2^p). They reported computing logarithms over GF(2^31) with surprisingly few iterations and claimed that the running time of their algorithm was polynomial in p. If this were true, the algorithm could be used to cryptanalyze the Pohlig-Hellman cryptosystem, currently in use by Mitre Corporation for key distribution. The Mitre system operates in GF(2^127). However, the algorithm was not implemented for GF(2^p) for p > 31 because it would require multiple precision arithmetic. Consequently, attempts to evaluate the possible threat to the Pohlig-Hellman cryptosystem have centered on modeling the algorithm so that some predictions could be made analytically about the number of iterations required to find logarithms over GF(2^p) for p > 31.

E. F. Brickell, J. H. Moore
A Public-Key Cryptosystem Based on the Matrix Cover NP-Complete Problem

The design and implementation of a public-key cryptosystem based on the matrix cover NP-complete problem is described. Section 1 explains the problem, provides the necessary background to understand the implementation and serves to establish the terminology used. The implementation is described in detail in section 2. Section 3 contains further comments on the system and also examines its signature capability. The system borrows quite a few ideas from the Merkle-Hellman scheme [3] and a very brief comparison between the two systems appears in section 4.

Ravi Janardan, K. B. Lakshmanan
Infinite Structures in Information Theory

The idea of infinite structures, even of continua, in information theory is not a new one [K056]. This note is devoted to infinite one-time pads [BL81] and an infinite projective geometric [BL79] threshold scheme. Perhaps an infinite structure can be better understood than its finite analog if it is amenable to investigation by methods from calculus or harmonic analysis. It is conceivable that an existing error control code, pool/split/restitute process [AS82], or cryptosystem can be better understood by examining an infinite version.

G. R. Blakley, Laif Swanson
A Fast Modular Multiplication Algorithm with Application to Two Key Cryptography

This paper presents an algorithm which will perform multiplication modulo C in [log_2 C] + 7 clock pulses.
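The specific circuit is in the paper; as a point of reference, the classical interleaved shift-and-add method, which this style of hardware speeds up, performs one shift, one conditional add, and one reduction per bit of the multiplier (a software sketch, not Brickell's design):

```python
def modmul_interleaved(a, b, c):
    """Compute (a * b) % c by scanning the bits of b from the most
    significant bit down, interleaving the modular reduction with the
    shift-and-add so intermediate values stay below 2*c."""
    r = 0
    for i in range(b.bit_length() - 1, -1, -1):
        r = (r << 1) % c              # shift: r = 2r mod c
        if (b >> i) & 1:
            r = (r + a) % c           # add a when this bit of b is set
    return r

assert modmul_interleaved(12345, 67890, 99991) == (12345 * 67890) % 99991
```

Hardware designs like the one described keep the running value in redundant (carry-save) form so each step costs a single clock pulse.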

Ernest F. Brickell
Comparison of Two Pseudo-Random Number Generators

What do we want from a pseudo-random sequence generator? Ideally, we would like a pseudo-random sequence generator to quickly produce, from short seeds, long sequences (of bits) that appear in every way to be generated by successive flips of a fair coin.
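The full paper compares the 1/P generator with the x^2 mod N generator; a toy sketch of the latter (the parameters here are far too small for any security):

```python
def x_squared_mod_n_bits(seed, n, count):
    """Quadratic-residue generator: iterate x -> x^2 mod n and emit the
    least significant bit of each state.  n should be a Blum integer
    (product of two primes both congruent to 3 mod 4); the tiny modulus
    below is for illustration only."""
    x = seed
    bits = []
    for _ in range(count):
        x = x * x % n
        bits.append(x & 1)
    return bits

n = 7 * 19   # 7 = 3 (mod 4) and 19 = 3 (mod 4): a (tiny) Blum integer
bits = x_squared_mod_n_bits(seed=3, n=n, count=8)
assert len(bits) == 8
```

The short seed is (seed, n); the security argument rests on the difficulty of distinguishing these bits from fair coin flips without factoring n.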

Lenore Blum, Manuel Blum, Michael Shub
On Computationally Secure Authentication Tags Requiring Short Secret Shared Keys

As an application of strongly universal-2 classes of hash functions, Wegman and Carter have proposed a provably secure authentication tag system. Their technique allows the receiver to be certain that a message is genuine. An enemy, even one with infinite computing power, cannot forge or modify a message without detection. Moreover, there are no messages that just happen to be easy to forge. Unfortunately, their scheme requires that the sender and the receiver share a rather long secret key if they wish to use the system more than once. Indeed, the length of the key is essentially n log(1/p), where n is the number of messages they wish to be able to authenticate before having to agree on a new secret key, and p is the probability of undetected forgery they are willing to tolerate. Since they also proved that n log(1/p) is a lower bound on the number of bits required by any tag system that assures security against infinite computing power, it is clearly necessary to resort to computational complexity if we wish to have a scheme usable in practice allowing a potentially very large number of messages to be authenticated.
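A minimal sketch of a Wegman-Carter style tag using the strongly universal-2 family h(x) = ax + b over a prime field, with a fresh key pair per message (toy single-block messages; the field size here is an illustrative assumption, not from the paper):

```python
import secrets

P = (1 << 61) - 1        # a Mersenne prime; messages and tags live mod P

def keygen():
    """Fresh (a, b), each uniform mod P.  h(x) = a*x + b mod P is
    strongly universal-2, so even an unbounded forger who sees one
    (message, tag) pair succeeds with probability only 1/P."""
    return secrets.randbelow(P), secrets.randbelow(P)

def tag(key, msg):
    a, b = key
    return (a * msg + b) % P

key = keygen()
t = tag(key, 123456789)
assert tag(key, 123456789) == t    # receiver recomputes and compares
```

Spending one fresh (a, b) pair per message is exactly the key cost, roughly log(1/p) bits each time, that motivates the computational relaxation in this paper.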

Gilles Brassard

Modes of Operation

Frontmatter
Some Regular Properties of the ‘Data Encryption Standard’ Algorithm

A cipher function y = E(k,x) should appear to be a random function of both the key k and the plaintext x. Any regular behaviour is of interest to the users. In the extreme case, regular properties might point to a weakness of the cipher. Precautions are needed in the use of a cipher that has regular features. This note describes five regular properties of the ‘Data Encryption Standard’ or DES; two of these have been described elsewhere but are included for completeness.

Donald W. Davies
The Average Cycle Size of the Key Stream in Output Feedback Encipherment

Output feedback is a method of using the ‘Data Encryption Standard’ (DES), and it is defined in Federal Information Processing Standards Publication 81 produced by the US National Bureau of Standards. Its purpose is to provide a method of using the DES without error extension, since both encipherment and decipherment consist of adding a ‘key-stream’ modulo 2 to the data stream. It has been defined for the values m = 1, 2, 3, ..., 64 of the parameter m, which is the feedback width.
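DES itself is not in the Python standard library, so a hash-based stand-in below plays the role of the block function; the point is the shape of the full-width (m = 64) feedback loop whose cycle statistics the paper analyzes:

```python
import hashlib

def block_fn(key, block):
    """Stand-in for the DES block function (an assumption for this
    sketch): a keyed pseudo-random mapping on 64-bit blocks."""
    return hashlib.sha256(key + block).digest()[:8]

def ofb_keystream(key, iv, nblocks):
    """Full-width output feedback: the entire 64-bit output is fed back
    as the next input, so the keystream walks a cycle of the mapping."""
    x, out = iv, []
    for _ in range(nblocks):
        x = block_fn(key, x)
        out.append(x)
    return b"".join(out)

ks = ofb_keystream(b"k" * 8, b"\x00" * 8, 4)
assert len(ks) == 32
# encipherment and decipherment both XOR this keystream onto the data
```

With m < 64 only part of the output is fed back, which turns the iteration into a random function rather than a permutation and sharply shortens the expected cycle, the paper's central concern.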

Donald W. Davies, Graeme I. P. Parkin
Analysis of Certain Aspects of Output Feedback Mode

The Output Feedback (OFB) mode of operation of the Data Encryption Standard (DES) is discussed and compared to the other DES modes. The advantages of the Output Feedback mode’s insensitivity to transmission errors and its applicability to bulk encryption of multiple users’ transmissions are presented, along with the disadvantages of an increased sensitivity to bit slippage and a requirement for more complex synchronization procedures.

It is concluded that the Manipulation Detection Code technique suggested in draft Federal Standards 1025 and 1026 is unsound, and that therefore there are only differences of degree in the vulnerability to active (spoofing) attacks between the various modes. Two separate encryption operations are required to provide cryptographic protection against both the passive and the active threat, but a quadratic residue checksum is proposed as a possible alternative. However, considerations of the physical media involved and the types of traffic carried may make even this level of protection unnecessary for many applications.

The problem of transmission in depth is discussed, and Output Feedback mode is analyzed with respect to the probability of repeating a given output prior to exhausting the space of 2^64 variables. Reiterating the advice of Davies and Parkin, the user is cautioned not to use K < 64 bit feedback, and it is recommended that FIPS PUB 81 be revised to delete that option. Numerical data are presented for various reinitialization rates which indicate that when OFB is used, not more than four billion iterations or 10,000 reinitializations or one day of operation should occur between DES key changes. One week to one month between master key changes is suggested, especially for cryptographic networks of more than two stations.

Blakley’s shadow key concept is recommended as a way of minimizing the possibility of human compromise. Appendices discuss the existence of 256 weak, semi-weak, and demi-semi-weak keys, plus the derivations of the formulas for the probability of repetition for the various cases.

Robert R. Jueneman
Drainage and the DES Summary

In our paper, we investigate a statistical property of random functions we named drainage. (Definitions for drainage, random function, etc. will be given shortly.) Our motivation for doing so is twofold. First, it is generally assumed that a good cryptographic system will exhibit no simple statistical regularity. For example, the function from key to ciphertext when a block cipher is used to encode a fixed plaintext should appear to be completely random. We were therefore interested in studying the behavior of drainage for a random function, and then comparing it to the measured behavior for a real cryptosystem, the DES. Secondly, drainage is closely related to statistical properties which are important to the performance of the generalized cryptanalytic attack proposed by Hellman [1], discussed below.

Martin E. Hellman, Justin M. Reyneri
Security of a Keystream Cipher with Secret Initial Value

A keystream can be produced by driving a known pseudo-random function with a counter, starting with a secret initial value. Knowledge of M blocks of keystream allows a speed-up of cryptanalysis by a factor of M over exhaustive search. A similar result holds when the function is a secret choice from a parameterized family. These results are the best possible under a black-box model, i.e., where the function is revealed to the analyst only by calling an oracle.

A synchronous cryptosystem can be produced by driving a pseudo-random function with a counter to generate a keystream. The initial value of the counter may be kept secret as part or all of the key. This paper shows that this provides some additional security, but not as much as might appear at first glance. The author wishes to thank H. Amirazizi, M. Hellman, E. Karnin, and J. Reyneri for useful conversations.

Naturally such a system depends upon the cryptographic security of the basic function. It must be one-way in a strong sense: given the values of the function on some set of arguments, it must be infeasible to compute the value on any argument not in the set. For this paper, we assume that the basic function is a cryptographically secure pseudo-random function. We consider only attacks which are uniform, attacks which do not use particular weaknesses of the function. We present such an attack which reduces the cryptanalyst’s work by a factor of M over exhaustive search, when he is given M corresponding blocks of ciphertext and plaintext. Furthermore, we show that this is optimal for such uniform attacks. In the first system we consider, the only secret is the initial value of the counter.
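The first system (counter plus secret initial value) can be sketched with a stand-in PRF; the hash-based function below is an illustrative assumption, standing in for the paper's abstract cryptographically secure function:

```python
import hashlib

def prf(key, x):
    """Stand-in pseudo-random function (an assumption for this sketch,
    not from the paper): a keyed mapping from counters to 64-bit blocks."""
    return hashlib.sha256(key + x.to_bytes(8, "big")).digest()[:8]

def keystream(key, iv, nblocks):
    """Counter-driven keystream: block i is F(key, iv + i).  Winternitz's
    observation: given M known keystream blocks, each trial value of the
    secret counter can be checked against all M blocks at once, an M-fold
    speed-up over plain exhaustive search of the initial value."""
    return [prf(key, iv + i) for i in range(nblocks)]

blocks = keystream(b"secret-k", iv=42, nblocks=3)
assert blocks == keystream(b"secret-k", 42, 3)   # deterministic
```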

Robert S. Winternitz
Using Data Uncertainty to Increase the Crypto-Complexity of Simple Private Key Enciphering Schemes

Computational efficiency is of prime importance to any micro-processor based cryptosystem. A technique is presented here which permits a reduction in the enciphering complexity of private key schemes without a loss in security. The net result can be a simplification of the system’s implementation, a reduction in cryptographic overhead and the potential for a simple mathematical analysis of the security system.

G. M. Avis, S. E. Tavares
Randomized Encryption Techniques

A randomized encryption procedure enciphers a message by randomly choosing a ciphertext from a set of ciphertexts corresponding to the message under the current encryption key. At the cost of increasing the required bandwidth, such procedures may achieve greater cryptographic security than their deterministic counterparts by increasing the apparent size of the message space, eliminating the threat of chosen plaintext attacks, and improving the a priori statistics for the inputs to the encryption algorithms. In this paper we explore various ways of using randomization in encryption.

Ronald L. Rivest, Alan T. Sherman

Protocols and Transaction Security

Frontmatter
On the Security of Multi-Party Protocols in Distributed Systems

Security of protocols for network communication has received considerable attention in recent years. We concentrate on ensuring the security of cryptographic protocols in distributed systems.

In a distributed system, beyond eavesdropping, a saboteur may impersonate another user or alter messages being sent. A saboteur who is also a user may send conflicting messages or use other illegal messages in order to uncover secret information.

The problem we address, in its most general form, is: “given a multi-party protocol which is provably secure when all the participants monitor every message being sent, can the protocol be modified to be secure in a distributed system?” We use the Byzantine Agreement, Crusader Agreement, and other specific checks to improve protocols by making them secure in a general distributed network. We examine the trade-off between detection of faulty behaviour and the number of messages exchanged.

Danny Dolev, Avi Wigderson
On the Security of Ping-Pong Protocols

Consider the class of protocols, for two participants, in which the initiator applies a sequence of operators to a message M and sends it to the other participant; in each step, one of the participants applies a sequence of operators to the message received last, and sends it back. This “ping-pong” action continues several times, using sequences of operators as specified by the protocol. The set of operators may include public-key encryptions and decryptions. We present an O(n^3) algorithm which determines the security of a given protocol (of length n). This is an improvement of the algorithm of Dolev and Yao [DY].

D. Dolev, S. Even, R. M. Karp
The Use of Public-Key Cryptography for Signing Checks

We want to build a secure system in which customers of a bank can make transactions and be able to keep a proof of each transaction. We also want the system to satisfy (as far as possible) the following constraints (our ultimate purpose is to remove all physical money): the customers are able to make transactions over the phone (just by exchanging messages); no communication with the bank is required for a transaction; no directory of customers is available.

Luc Longpré
Blind Signatures for Untraceable Payments

Automation of the way we pay for goods and services is already underway, as can be seen by the variety and growth of electronic banking services available to consumers. The ultimate structure of the new electronic payments system may have a substantial impact on personal privacy as well as on the nature and extent of criminal use of payments. Ideally a new payments system should address both of these seemingly conflicting sets of concerns.

David Chaum
A Randomized Protocol for Signing Contracts

Suppose two parties A and B, in a communication network, have negotiated a contract, which they wish to sign. To this end, they need a protocol which has the two following properties: (1) At the end of an honest execution of the protocol, each party has a signature of the other. (2) If one party, X, executes the protocol honestly, his counterpart, Y, cannot obtain X’s signature to the contract without yielding his own signature.

S. Even, O. Goldreich, A. Lempel
On Signatures and Authentication

The design of cryptographic protocols using trapdoor and one-way functions has received considerable attention in the past few years [1–8]. More recently, attention has been paid to providing rigorous correctness proofs based on simple mathematical assumptions, for example, in coin flipping (Blum [1]) and mental poker (Goldwasser and Micali [4]). It is perhaps reasonable to speculate at this time that all cryptographic protocols can eventually be designed to be provably secure under simple assumptions, such as that factoring large numbers or inverting RSA functions is computationally intractable in the appropriate sense.

S. Goldwasser, S. Micali, A. Yao

Applications

Frontmatter
Cryptographic Protection of Personal Data Cards

Plastic cards for different types of stored data are in wide use at present. Examples are credit cards and cards bearing access control information for automatic teller machines. More powerful devices with non-volatile read/write memory of several kilobytes, possibly with some intelligence (Personal Data Cards), open new fields of applications in banking, administration, health care and communications.

If sensitive data is stored on such cards, protection of this data and authentication of the authorized user become crucial. This paper describes a method for user verification and selective record protection in a network of terminals and one or more trusted Authentication Servers. The method is based on Single Key and/or Public Key Cryptography in conjunction with personal feature recognition (such as fingerprints) and selective key distribution. All the system information that needs secrecy protection is one key in the Authentication Server(s). The reference pattern for the feature recognition is stored on the card in encrypted form. The Authentication Server(s) can be kept very simple and inexpensive since no long-term data storage is required. As no user specific information remains permanently in the terminals, full user mobility is assured.

Christian Mueller-Schloer, Neal R. Wagner
Non-Public Key Distribution

Assume that it should be possible to protect messages transmitted in an N-user network by encryption. The encryption can be performed either by a public-key cryptosystem or by a conventional cipher. In the first case there is no need for key distribution. In the second case we have two choices: either to distribute keys from a key distribution center or to use a public key distribution algorithm.

Rolf Blom
Cryptographic Solution to a Multilevel Security Problem

A scheme based on cryptography is proposed for enforcing multilevel security in a system where hierarchy is represented by a partially ordered set (or poset). Straightforward implementation of the scheme requires users highly placed in the hierarchy to store a large number of cryptographic keys. A time-versus-storage trade-off is then described for addressing this key management problem.
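Akl and Taylor's construction uses modular exponentiation and is not reproduced here; for intuition only, a hash-chain sketch of the same goal for a tree-shaped hierarchy, where a highly placed user re-derives (rather than stores) every descendant key:

```python
import hashlib

def derive(parent_key, child_label):
    """Derive a child's key from its parent's key and public label.
    Anyone holding an ancestor's key can re-derive every descendant key,
    so a high-level user need store only one key.  (This hash-chain trick
    covers tree hierarchies; the Akl-Taylor scheme handles arbitrary
    partially ordered sets.)"""
    return hashlib.sha256(parent_key + child_label.encode()).digest()

root = b"\x00" * 32                   # hypothetical top-level master key
k_secret = derive(root, "secret")
k_conf = derive(k_secret, "confidential")
# a "secret"-level user derives, and never stores, the lower-level key:
assert derive(k_secret, "confidential") == k_conf
```

The paper's time-versus-storage trade-off addresses the same tension: less stored key material in exchange for more derivation work.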

Selim G. Akl, Peter D. Taylor
Local Network Cryptosystem Architecture: Access Control

A mechanism which uses keyed verification sequences at the link layer of protocol to control access to data transport facilities of a local area network will be described.

Thomas A. Berson
Implementing an Electronic Notary Public

Many communication security problems admit both “physical” and “mathematical” solutions. For example, sending a message from A to B without exposing it to C can be accomplished physically by means of a secure courier, or mathematically by means of encryption. With the advent of public key cryptography, many problems originally believed to be solvable only by physical means have been shown to have mathematical solutions (e.g. key distribution [DB], secret sharing [S], coin flipping [B], mental poker playing [SRA]). In this paper we describe a mathematical solution to a communication security problem which arose in connection with the Nuclear Test Ban Treaty, and for which only physical solutions were known. The problem concerns the implementation of an electronic notary public: a device which can certify information for a group of mutually distrusting parties, among which may be the builder of the device.

Leonard M. Adleman
Quantum Cryptography, or Unforgeable Subway Tokens

The use of quantum mechanical systems, such as polarized photons, to record information gives rise to novel cryptographic phenomena, not achievable with classical recording media: 1) a Verify Only Memory (VOM) that, with high probability, cannot be read or copied by someone ignorant of its contents; 2) the multiplexing of two messages in such a way that, with high probability, either message may be recovered at the cost of irreversibly destroying the other. Quantum multiplexing can be combined with public-key cryptography to produce unforgeable subway tokens that resist counterfeiting even by an opponent with a supply of good tokens and complete knowledge of the turnstiles that test them.

Charles H. Bennett, Gilles Brassard, Seth Breidbart, Stephen Wiesner

Special Session on Cryptanalysis

Frontmatter
A Polynomial Time Algorithm for Breaking the Basic Merkle-Hellman Cryptosystem

The cryptographic security of the Merkle-Hellman system (which is one of the two public-key cryptosystems proposed so far) has been a major open problem since 1976. In this paper we show that when the elements of the public key a_1, ..., a_n are modular multiples of a superincreasing sequence (as proposed by Merkle and Hellman), almost all the equations of the form x_1 a_1 + x_2 a_2 + ... + x_n a_n = b, with each x_i in {0, 1}, can be solved in polynomial time, and thus the cleartexts x_1, ..., x_n that correspond to given ciphertexts b can be easily found.
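A toy rendering of the basic Merkle-Hellman construction and its legitimate trapdoor decryption (Shamir's attack instead recovers the x_i from the public key alone, without the trapdoor):

```python
def mh_keys(superinc, m, w):
    """Basic Merkle-Hellman: public key a_i = w * s_i mod m, where s is
    a superincreasing sequence, m > sum(s), and gcd(w, m) = 1.
    Toy parameters only."""
    assert m > sum(superinc)
    return [w * s % m for s in superinc]

def encrypt(pub, bits):
    return sum(a for a, x in zip(pub, bits) if x)     # b = sum of x_i * a_i

def decrypt(superinc, m, w, b):
    """Legitimate decryption: undo the modular multiplication by w, then
    solve the superincreasing knapsack greedily, largest element first."""
    t = b * pow(w, -1, m) % m
    bits = [0] * len(superinc)
    for i in range(len(superinc) - 1, -1, -1):
        if superinc[i] <= t:
            bits[i], t = 1, t - superinc[i]
    return bits

s, m, w = [2, 3, 7, 14, 30], 61, 17
pub = mh_keys(s, m, w)
msg = [1, 0, 1, 1, 0]
assert decrypt(s, m, w, encrypt(pub, msg)) == msg
```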

Adi Shamir
A Preliminary Report on the Cryptanalysis of Merkle-Hellman Knapsack Cryptosystems

In April 1982, Adi Shamir caused a furor with the announcement [1] of “A Polynomial Time Algorithm for Breaking Merkle-Hellman Cryptosystems.” Like many others who received his “extended abstract,” members of the mathematics department at the Sandia National Laboratories undertook a careful study of both the algorithm and the underlying mathematical concepts. This paper summarizes some of our findings. In order to meet the deadline for Crypto ’82, the style will be deliberately telegraphic -- and informal. A complete paper will be presented this fall at the Twelfth Conference on Numerical Mathematics and Computing at Winnipeg, the proceedings of which will be published in Congressus Numerantium. It should also be remarked that in discussions with Adi at Crypto ’82, we learned that some of the results presented here as new were also arrived at independently by him in the period since his original announcement.

E. F. Brickell, J. A. Davis, G. J. Simmons
On Breaking the Iterated Merkle-Hellman Public-Key Cryptosystem

In 1976 Diffie and Hellman introduced the concept of a public-key cryptosystem [1]. In 1977 Rivest, Shamir, and Adleman discovered the first incarnation of such a system [4], and soon afterwards Merkle and Hellman produced a second one [3]. Despite the widespread interest in the area, the years have produced no other public-key cryptosystems which have attracted widespread interest.

Leonard M. Adleman

Rump Session: Impromptu Talks by Conference Attendees

Frontmatter
Long Key Variants of DES

The Federal Data Encryption Standard (DES) [1] is a block product cipher which converts 64-bit blocks of plaintext into 64-bit blocks of ciphertext, or vice versa, under the control of a 56-bit key. There has in the past been considerable controversy over the adequacy of the DES key length [2,3,4]. Easily implemented modifications to the DES key schedule (KS) would allow the use of keys longer than 56 bits.

Thomas A. Berson
On the Security of Multi-Party Ping-Pong Protocols

We define a p-party ping-pong protocol and its security problem, along the lines of the Dolev and Yao definition of a two-party ping-pong protocol.

S. Even, O. Goldreich
Inferring a Sequence Generated by a Linear Congruence

A pseudo-random number generator is considered cryptographically secure if, even when a cryptanalyst has obtained long segments of the generator’s output, he or she is unable to compute any other segment within certain time and space complexity bounds. A pseudo-random number generator which is as cryptographically secure as the Rivest-Shamir-Adleman encryption scheme is presented in [Shamir]. This method for generating pseudo-random numbers is quite slow, though, and it is not known whether any statistical biases might be present in the sequences it generates. Blum and Micali [BlMi] give a pseudo-random bit generator, with arbitrarily small bias, which is cryptographically strong, assuming the problem of index finding is intractable. But their method is also slow. Other cryptographically strong, but slow, pseudo-random bit generators are given in [BBS] and [Yao]. This suggests the question of whether any of the pseudo-random number generators commonly in use are also cryptographically secure.

In particular, the linear congruential method, X_{i+1} = a*X_i + b mod m, is very popular and fast. Obviously, this method is not cryptographically secure if the modulus m is known. In that case, one could solve for a in the congruence (X_2 − X_1) = a*(X_1 − X_0) mod m. Then the remainder of the sequence could be correctly predicted using X_{i+1} = a*X_i + (X_1 − a*X_0) mod m. In [K1980], Knuth has discussed this problem, assuming m is known and is a power of two, but assuming that only the high order bits of the numbers generated are actually used. We have looked at the problem assuming that m is unknown and arbitrary, but that the low order bits are also used. We have shown that, under these assumptions, the linear congruential method is cryptographically insecure. A similar result is given in [Reeds], but, among other problems, that result relies on the assumption that factoring is easy.
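The known-modulus observation can be sketched directly (toy parameters; Plumstead's result is the harder case where m is unknown and arbitrary):

```python
def crack_lcg_known_m(x0, x1, x2, m):
    """Recover multiplier a and increment b of X_{i+1} = a*X_i + b mod m
    from three consecutive outputs, assuming m is known and (x1 - x0) is
    invertible mod m (true here since m is prime and x1 != x0)."""
    a = (x2 - x1) * pow(x1 - x0, -1, m) % m
    b = (x1 - a * x0) % m
    return a, b

m, a, b = 2**16 + 1, 75, 74            # toy parameters; 2^16 + 1 is prime
xs = [12345]
for _ in range(4):
    xs.append((a * xs[-1] + b) % m)    # generate five outputs

ra, rb = crack_lcg_known_m(xs[0], xs[1], xs[2], m)
assert (ra, rb) == (a, b)
assert (ra * xs[3] + rb) % m == xs[4]  # predict the next output
```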

Joan B. Plumstead
Key Reconstruction

The key reconstruction problem is an application of (n,t)-threshold schemes. Such schemes permit an arbitrary piece of information to be ‘broken’ into n different ‘fragments,’ any t of which may be used to completely reconstruct the original information. No smaller number of fragments provides any information concerning the original, aside from its size in bits. These n fragments may then be distributed to different individuals, any t of whom can later cooperate in the reconstruction of the original information. For our purposes, the original information might be a private key in a public key cryptosystem.
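The abstract does not fix a particular scheme; Shamir's polynomial construction is the standard instance of an (n,t)-threshold scheme and sketches in a few lines:

```python
import random

P = 2**31 - 1   # prime field; the secret is an integer mod P

def split(secret, n, t):
    """Shamir's (n, t)-threshold scheme: hide the secret as f(0) of a
    random polynomial of degree t-1, and hand out the points f(1..n)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = split(123456789, n=5, t=3)
assert reconstruct(shares[:3]) == 123456789    # any 3 of the 5 suffice
assert reconstruct(shares[2:]) == 123456789
```

Fewer than t points leave f(0) equidistributed over the field, which is exactly the "no smaller number of fragments provides any information" property.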

Michael Merritt
Nondeterministic Cryptography

By employing a special type of stutter and executing it at truly random places in a key stream, the complexity of the stream can be vastly enhanced.

Carl R. Nicolai
A Short Report on the RSA Chip

The nMOS “RSA chip” described in our article [1] was initially fabricated by Hewlett-Packard. Testing revealed that while the control portion of the chip worked correctly, the arithmetic section suffered from transient errors and was usually too unreliable to complete a full encryption. We tested a number of chips and found the same problem, enough to convince us that the cause was probably a design error and not a fabrication problem.

Ronald L. Rivest
Backmatter
Metadata
Title
Advances in Cryptology
Edited by
David Chaum
Ronald L. Rivest
Alan T. Sherman
Copyright Year
1983
Publisher
Springer US
Electronic ISBN
978-1-4757-0602-4
Print ISBN
978-1-4757-0604-8
DOI
https://doi.org/10.1007/978-1-4757-0602-4