
1986 | Book

Advances in Cryptology — CRYPTO ’85 Proceedings

edited by: Hugh C. Williams

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Signatures and authentication

Breaking the Ong-Schnorr-Shamir Signature Scheme for Quadratic Number Fields

Recently Ong, Schnorr, and Shamir [OSS1, OSS2] have presented new public key signature schemes based on quadratic equations. We will refer to these as the OSS schemes. The security of the schemes rests in part on the difficulty of finding solutions to (1) $$ X^2 - KY^2 \equiv M \pmod{n}, $$ where n is the product of two large rational primes. In the original OSS scheme [OSS1], K, M, X, and Y were to be rational integers. However, when this version succumbed to an attack by Pollard [PS, S1], a new version was introduced [OSS2], where M, X, and Y were to be quadratic integers, i.e. elements of the ring $$ Z[\sqrt d ] $$. In this paper we will show that the OSS system in $$ Z[\sqrt d ] $$ is also breakable. The method by which we do this is to reduce the problem of solving the congruence over the ring $$ Z[\sqrt d ] $$ to the problem of solving the congruence over the integers, for which we can use Pollard’s algorithm.
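As a concrete illustration of the congruence at the heart of the scheme, here is a minimal sketch of the verification check in the rational-integer version: a pair (X, Y) is accepted as a signature on M exactly when it satisfies (1). All parameters below are toy values for illustration, not from the paper.

```python
def oss_verify(X, Y, M, K, n):
    """Accept (X, Y) as a signature on M iff X^2 - K*Y^2 = M (mod n)."""
    return (X * X - K * Y * Y - M) % n == 0

n, K = 77, 3                 # toy public parameters
X, Y = 10, 5                 # candidate signature
M = (X * X - K * Y * Y) % n  # a message this pair "signs": 100 - 75 = 25
assert oss_verify(X, Y, M, K, n)
```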

Dennis Estes, Leonard M. Adleman, Kireeti Kompella, Kevin S. McCurley, Gary L. Miller
Another Birthday Attack

We show that a meet-in-the-middle attack can successfully defraud the Davies-Price message authentication scheme. Their scheme used message blocks in an iterated encipherment of an initial block, and it went through the message blocks twice, in order to prevent just such a “birthday” attack.

Don Coppersmith
Attacks on Some RSA Signatures

Two simple redundancy schemes are shown to be inadequate in securing RSA signatures against attacks based on multiplicative properties. The schemes generalize the requirement that each valid message starts or ends with a fixed number of zero bits. Even though only messages with proper redundancy are signed, forgers are able to construct signatures on messages of their choice.
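For context, the multiplicative property such redundancy schemes try to block is easy to see in plain ("textbook") RSA. The toy sketch below (illustrative parameters, not the paper's schemes) forges a signature on m1*m2 from signatures on m1 and m2 alone.

```python
n, e, d = 3233, 17, 413     # toy RSA: n = 61 * 53, e*d = 1 (mod lcm(60, 52))

sign = lambda m: pow(m, d, n)
verify = lambda m, s: pow(s, e, n) == m % n

m1, m2 = 42, 71
forged = (sign(m1) * sign(m2)) % n       # never queried a signature on m1*m2
assert verify((m1 * m2) % n, forged)
```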

Wiebren de Jonge, David Chaum
An Attack on a Signature Scheme Proposed by Okamoto and Shiraishi

Recently Okamoto and Shiraishi proposed a public key authentication system [1]. The security of the scheme is based on the difficulty of solving quadratic inequalities. This new system is interesting since the amount of computing needed for the proposed scheme is significantly less than that needed for an RSA encryption. This report is an investigation into the security of the proposed digital signature scheme. We demonstrate that if the system is used as it is presented, an opponent could sign messages without factoring the modulus. Further, we suggest a modification which may not have the same flaw as the proposed scheme.

Ernest F. Brickell, John M. DeLaurentis
A Secure Subliminal Channel (?)

At Crypto’83, the present author showed that a transmitter and chosen receiver(s) -- by secretly exchanging some side information -- could pervert an authentication without secrecy channel to allow them to convert a portion of the authentication information to a hidden (covert) communications channel [1]. It was also shown that under quite reasonable conditions even the detection of the existence of this covert channel could be made as difficult as the underlying authentication algorithm was “cryptosecure”. In view of this open -- but undetectable -- existence, such a covert channel was called a “subliminal” channel. The examples constructed in [1] were more in the nature of existence proofs than of practical subliminal communications channels.

At Eurocrypt’84 [2], however, it was shown how to use digital signature schemes as a way of realizing practical subliminal channels and, in particular, subliminal channels were devised using Ong and Schnorr’s quadratic approximation scheme [3], Ong, Schnorr and Shamir’s quadratic representation schemes [4] and Ong, Schnorr and Shamir’s cubic signature scheme [5], as well as El Gamal’s discrete logarithm-based digital signature scheme [6]. Unfortunately, from the standpoint of providing a secure (and feasible) subliminal channel, all of these digital signature schemes were cryptanalyzed [7],[8] shortly after being proposed. At Crypto’84, a fourth variant of the earlier digital signature schemes of Ong, Schnorr and Shamir was presented by Schnorr [9], which was also quickly cryptanalyzed [10]. At the 1985 IEEE Symposium on Security and Privacy, Okamoto and Shiraishi proposed yet another digital signature scheme based on quadratic inequalities [11], which had been designed to avoid the cryptanalytic weaknesses that had flawed the schemes of Schnorr, et al. The cryptanalysis of this scheme by Brickell and DeLaurentis is reported elsewhere in these Proceedings [12].

In view of the short-lived nature of all of these schemes, it has become a high risk venture to propose subliminal channels based on digital signatures. The motivation for doing so is that digital signatures can be much easier to calculate and verify than full-fledged two-key ciphers. As a result, the benefits (of a successful implementation) far outweigh the risks of perhaps having an insecure digital signature (or subliminal) channel slip by undetected. Based on the cumulative experience gained in cryptanalyzing the six digital signature schemes mentioned above, Brickell and DeLaurentis propose a new scheme in their paper that appears to avoid the weaknesses exploited in the earlier cryptanalyses.

Gustavus J. Simmons
Unconditionally Secure Authentication Schemes and Practical and Theoretical Consequences

The Vernam scheme protects privacy unconditionally, but provides no protection at all for the authenticity of a message. This paper discusses schemes that protect authenticity unconditionally. Unconditional security is defined, stream cipher authentication schemes are proposed, and the consequences for information protection using RSA and DES are discussed.

Yvo Desmedt

Protocols

On the Security of Ping-Pong Protocols when Implemented using the RSA (Extended Abstract)

The security of the RSA implementation of ping-pong protocols is considered. It is shown that the obvious RSA properties, such as “multiplicativity”, do not endanger the security of ping-pong protocols. Namely, if a ping-pong protocol is secure in general, then its implementation using an “ideal RSA” is also secure.

Shimon Even, Oded Goldreich, Adi Shamir
A Secure Poker Protocol that Minimizes the Effect of Player Coalitions

What can we expect from a poker protocol? How close to reality can we come? From the outset of this research, we realized that a cryptographic protocol could achieve more security than its real-life counterpart (with physical cards). But every protocol proposed until now was far from offering all the possibilities of a real deck of cards, or could not achieve the full security we were expecting.

Claude Crépeau
A Framework for the Study of Cryptographic Protocols

We develop a simple model of computation under which to study the meaning of cryptographic protocol and security. We define a protocol as a mathematical object and security as a possible property of this object. Having formalized the concept of a secure protocol we study its general properties. We back up our contention that the model is reasonable by solving some well known cryptography problems within the framework of the model.

Richard Berger, Sampath Kannan, René Peralta
Cheating at Mental Poker

We review the “mental poker” scheme described by Shamir, Rivest and Adleman [SRA]. We present two possible means of cheating, depending on careless implementation of the SRA scheme. One will work if the prime p is such that p-1 has a small prime divisor. In the other, the names of the cards (“TWO OF CLUBS”, etc.) have been extended by random-looking bits chosen by the cheater.
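The first kind of cheating rests on small-order characters of the multiplicative group leaking through SRA encryption c = m^e mod p. The best-known instance is the quadratic-residue leak sketched below (toy parameters; the paper's attack generalizes to other small divisors of p-1). Since gcd(e, p-1) = 1 forces e to be odd, the Legendre symbol of a card is unchanged by encryption, so an opponent can tell residue-cards from non-residue-cards in the encrypted deck.

```python
p = 2579                   # toy prime; p - 1 = 2 * 1289
e = 3                      # gcd(e, p - 1) = 1, a valid SRA exponent (and odd)

def legendre(m):           # m^((p-1)/2): 1 for residues, p-1 otherwise
    return pow(m, (p - 1) // 2, p)

for card in [2, 3, 5, 7, 11]:             # stand-ins for card names
    c = pow(card, e, p)                   # SRA-encrypted card
    assert legendre(card) == legendre(c)  # the symbol survives encryption
```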

Don Coppersmith
Security for the DoD Transmission Control Protocol

In securing packet switched digital communications, it is possible to add the security measures at almost any layer of the Open Systems Interconnection (OSI) model of network functioning. At one extreme, security may be supplied either by physical protection of the communication links (with no impact at all on network communication protocols) or by independent encryption of the traffic on each link of the network (with little protocol impact). Solutions of this sort are called link security and, although widely employed, have the disadvantage of requiring the users to place a high degree of trust in the network. At the other extreme, it is possible, using cryptography, to add security to each individual user level application. This has the advantage of minimizing the user’s need to trust the network and thus providing end-to-end security, but also has the disadvantage of requiring a multiplicity of implementations.

Whitfield Diffie
Symmetric Public-Key Encryption

Public-key encryption would seem to be inherently asymmetric, in that only messages sent to a user can be encrypted using his public key. We demonstrate that the use of interactive protocols for sending encrypted messages enables a symmetric use of public keys; we give cryptographic protocols for the following tasks:
1. Probabilistic encryption, using the same public key, both of messages that are sent to a particular user and of messages that the user sends to others, without compromising the key. We propose a public-key cryptosystem based on these protocols which has only one key, owned by a cryptographic server.
2. Authentication both of the sender and of the receiver of a probabilistically encrypted message.
3. Probabilistic encryption which is provably secure against both chosen-message and chosen-ciphertext attack.

Zvi Galil, Stuart Haber, Moti Yung

Copy Protection

Software Protection: Myth or Reality?

Staggering amounts of commercial software are marketed to fulfill needs from the PC explosion. Unfortunately, such software is trivial to duplicate! From the vendors’ viewpoint a way to protect profit is needed. Typically, they have resorted to various schemes that attempt to inhibit the duplication process.

Although protection of future profit is important, so is protection against current loss. Commercial and business related software must be adequately protected lest data be stolen or manipulated. However, more important than any of these classes is protection of government computer resources, especially classified and operational software and data. Loss of control in this realm could be detrimental to national security.

This paper addresses current technologies employed in protection schemes: signatures (magnetic and physical) on floppy disks, Software Analysis Denial (SAD), Hardware Security Devices (HSD), and Technology Denial Concepts (TDC) are presented, with an emphasis on SAD. Advantages and disadvantages of these schemes will be clarified.

James R. Gosler
Public Protection of Software

One of the overwhelming problems that software producers must contend with is the unauthorized use and distribution of their products. Copyright laws concerning software are rarely enforced, causing major losses to software companies. Technical means of protecting software from illegal duplication are required, but the available means are imperfect. We present protocols that enable software protection without causing overhead in distribution and maintenance. The protocols may be implemented with a conventional cryptosystem, such as DES, or with a public key cryptosystem, such as RSA. Both implementations are proved to satisfy the required security criteria.

Amir Herzberg, Shlomit S. Pinter
Fingerprinting Long Forgiving Messages

In his 1983 paper, Neal Wagner [1] defines a perfect fingerprint to be an identifying fingerprint added to an object in such a way that any alteration that makes the fingerprint unrecognizable will also make the object unusable. A perfect fingerprinting scheme for binary data would seem difficult to devise, since it would be possible to discover the fingerprints by comparing different fingerprinted copies of the same piece of data. In this paper we discuss a fingerprinting scheme which, although it does not surmount this problem entirely, at least specifies the number of copies an opponent must obtain in order to erase the fingerprints.

G. R. Blakley, C. Meadows, G. B. Purdy

Single Key Cryptology

Cryptanalysis of DES with a Reduced Number of Rounds
Sequences of Linear Factors in Block Ciphers

A blockcipher is said to have a linear factor if, for all plaintexts and keys, there is a fixed non-empty set of key bits whose simultaneous complementation leaves the exclusive-or sum of a fixed non-empty set of ciphertext bits unchanged.
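A brute-force check of this definition is straightforward on a toy cipher. The sketch below (a hypothetical 3-bit cipher, purely illustrative) enumerates all candidate key-bit sets A and ciphertext-bit masks B.

```python
def parity(x):
    return bin(x).count('1') & 1

def linear_factors(enc, nbits):
    """Return all (A, B): complementing key bits A never changes the
    parity of ciphertext bits B, over all plaintexts p and keys k."""
    return [(A, B)
            for A in range(1, 1 << nbits)          # key bits to complement
            for B in range(1, 1 << nbits)          # ciphertext-bit mask
            if all(parity(enc(p, k) & B) == parity(enc(p, k ^ A) & B)
                   for p in range(1 << nbits) for k in range(1 << nbits))]

# Toy cipher with an obvious linear factor: key bit 2 never influences
# the ciphertext, so A = 0b100 pairs with every mask B.
toy = lambda p, k: p ^ (k & 0b011)
print(linear_factors(toy, 3))
```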

David Chaum, Jan-Hendrik Evertse
Is DES a Pure Cipher? (Results of More Cycling Experiments on DES) (Preliminary Abstract)

During summer 1985, we performed eight cycling experiments on the Data Encryption Standard (DES) to see if DES has certain algebraic weaknesses. Using special-purpose hardware, we applied the cycling closure test described in our Eurocrypt 85 paper to determine whether DES is a pure cipher. We also carried out a stronger version of this test. (A cipher is pure if, for any keys i, j, k, there exists some key l such that T_i T_j^{-1} T_k = T_l, where T_w denotes encryption under key w.) In addition, we followed the orbit of a randomly chosen DES transformation for 2^36 steps, as well as the orbit of the composition of two of the “weak key” transformations. Except for the weak key experiment, our results are consistent with the hypothesis that DES acts like a set of randomly chosen permutations. In particular, our results show with overwhelming confidence that DES is not pure. The weak key experiment produced a short cycle of about 2^33 steps, the consequence of hitting a fixed point for each weak key.
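The purity condition itself can be checked by brute force on toy ciphers. The sketch below (a 4-point message space and hand-picked permutations, nothing to do with DES or the special-purpose hardware) tests whether T_i T_j^{-1} T_k always lands back inside the cipher.

```python
from itertools import product

def inverse(p):
    q = [0] * len(p)
    for x, y in enumerate(p):
        q[y] = x
    return tuple(q)

def compose(p, q):                        # (p o q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(q)))

def is_pure(ciphers):
    """ciphers: dict key -> permutation tuple. Pure iff T_i T_j^-1 T_k
    is always itself one of the cipher's permutations."""
    perms = set(ciphers.values())
    return all(
        compose(compose(ciphers[i], inverse(ciphers[j])), ciphers[k]) in perms
        for i, j, k in product(ciphers, repeat=3))

# The four cyclic shifts of {0,1,2,3} form a group, hence a pure cipher...
shifts = {r: tuple((x + r) % 4 for x in range(4)) for r in range(4)}
assert is_pure(shifts)
# ...while dropping a key breaks closure, so the smaller cipher is not pure.
del shifts[3]
assert not is_pure(shifts)
```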

Burton S. Kaliski Jr., Ronald L. Rivest, Alan T. Sherman
A Layered Approach to the Design of Private Key Cryptosystems

This paper presents a layered approach to the design of private key cryptographic algorithms based on a few strategically chosen layers. Each layer is a conceptually simple invertible transformation that may be weak in isolation, but makes a necessary contribution to the security of the algorithm. This is in contrast to algorithms such as DES which utilize many layers and depend on S-boxes that have no simple mathematical interpretation. A property called transparency is introduced to deal with the interaction of layers and how they must be selected to eliminate system weaknesses.

Utilizing this layered approach, a private key cryptographic algorithm consisting of three layers is constructed to demonstrate the design criteria. The algorithm has an adequate key space and valid keys can be easily generated. The design is based on a symmetrical layered configuration, which allows encryption and decryption to be performed using the same algorithm. The algorithm is suitable for VLSI implementation. Some statistical tests are applied to the algorithm in order that its cryptographic performance can be evaluated. The test results and attempts at cryptanalysis suggest that the three-layered algorithm is secure.

T. E. Moore, S. E. Tavares
Lifetimes of Keys in Cryptographic Key Management Systems

The key lifetimes necessary to attain a certain low disclosure rate have been investigated for two types of schemes, with DES employed as the example encryption algorithm. This paper assumes the weakest form of cryptanalysis, namely exhaustive key search; more effective attacks may exist. As a result, we recommend adopting SCHEME 2 and changing the master key at least every few years.

E. Okamoto, K. Nakamura
Correlation Immunity and the Summation Generator

It is known that for a memoryless mapping from GF(2)^N into GF(2) the nonlinear order of the mapping and its correlation-immunity form a linear tradeoff. In this paper it is shown that the same tradeoff no longer holds when the function is allowed to have memory. Moreover, it is shown that integer addition, when viewed over GF(2), defines an inherently nonlinear function with memory whose correlation-immunity is maximum. The summation generator, which sums N binary sequences over the integers, is presented as an application of integer addition in random sequence generation.
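A minimal sketch of the summation combiner described here: the N input streams are added as integers, the low bit of the sum is output, and the carry is retained as the combiner's memory. The random input streams below are stand-ins for the keystream sources a real design would use.

```python
import random

def summation_generator(streams):
    """streams: iterables of 0/1 bits. Yields the combined keystream."""
    carry = 0
    for bits in zip(*streams):
        total = sum(bits) + carry        # integer addition over Z
        yield total & 1                  # output: the sum bit
        carry = total >> 1               # memory: the carry

rng = random.Random(1)
inputs = [[rng.randrange(2) for _ in range(16)] for _ in range(3)]
print(list(summation_generator(inputs)))
```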

Rainer A. Rueppel
Design of Combiners to Prevent Divide and Conquer Attacks

A finite state machine driven by n independent sources each generating a q-ary sequence is investigated. The q-ary output sequence of that device is considered as the running-key sequence in a stream cipher. Possible definitions for Correlation-Immunity are discussed and a simple condition is given which ensures that divide-and-conquer attacks on such generators are prevented.

T. Siegenthaler
On the Security of DES

The purpose of this note is to describe some anomalies found in the structure of the S-boxes in the Data Encryption Standard. These anomalies are potentially dangerous, but so far they have not led to any successful cryptanalytic attack. While their significance is still unknown, they clearly demonstrate the deficiencies of current certification techniques and the need for provably secure cryptosystems.

Adi Shamir
Information theory without the finiteness assumption, II. Unfolding the DES

The DES is described in purely mathematical terms by means of confusion, diffusion and arithmetic involving a group of messages and a group of keys. It turns out to be a diffusion/arithmetic cryptosystem in which confusion plays no role, although the S-boxes effect an arithmetic operation of replacement (which is sometimes mistaken for confusion) as an important part of the encryption process.

G. R. Blakley

Two Key Cryptology

Analysis of a Public Key Approach Based on Polynomial Substitution

We set out to build a public key cryptosystem by repeatedly substituting for variables in multivariate polynomials and simplifying the results to conceal the substitution process. There seems, however, to be no way to build such a system that is both secure and has a public key of practical size when the devices used to limit the number of coefficients are nilpotence and J-rings. We have only shown, however, that it is impossible to produce such a system if the total degree of the encryption polynomial determines the size of the public key. Perhaps, by properly choosing p0 and p1, we can employ the fundamental scheme to produce sparse encrypting polynomials. Then the public key could be kept small while the encrypting polynomial has large total degree and is difficult to invert.

Harriet Fell, Whitfield Diffie
Developing an RSA Chip

FAF’4 is a fast arithmetic processor designed specifically for modular operations, including exponentiation, on large integers. It is at present implemented as an array of 32-bit bit-slice processors, which may be interconnected without additional circuitry to obtain word lengths of up to 1023 bits. With 512-bit operands, exponentiation takes 133 milliseconds at worst (100 ms typically).

Martin Kochanski
An M3 Public-Key Encryption Scheme

It is well known that the RSA public-key cryptosystem can be broken if the composite modulus can be factored. It is not known, however, whether the problem of breaking any RSA system is equivalent in difficulty to factoring the modulus. In 1979 Rabin [5] introduced a public-key cryptosystem which is as difficult to break as it is to factor a modulus R = p1p2, where p1, p2 are two distinct large primes. Essentially, Rabin suggested that the designer of such a scheme first determine p1 and p2, keep them secret, and make R public. Anyone wishing to send a secure message M (0 < M < R) to the designer would encrypt M as K, where $$ K \equiv M^2 \pmod{R} $$ and 0 < K < R, then transmit K to the designer.
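A minimal sketch of the arithmetic just described, under the common extra assumption p1 ≡ p2 ≡ 3 (mod 4) so that square roots modulo each prime are a single exponentiation; the designer combines them by the Chinese remainder theorem to obtain the four candidate plaintexts. The toy primes are illustrative only.

```python
p1, p2 = 7, 11               # toy primes, both = 3 (mod 4); kept secret
R = p1 * p2                  # public modulus

def encrypt(M):
    return (M * M) % R

def decrypt(K):
    """Return the four square roots of K mod R; the plaintext is one of them."""
    r1 = pow(K, (p1 + 1) // 4, p1)       # sqrt of K mod p1 (since p1 = 3 mod 4)
    r2 = pow(K, (p2 + 1) // 4, p2)       # sqrt of K mod p2
    roots = set()
    for s1 in (r1, p1 - r1):
        for s2 in (r2, p2 - r2):
            # CRT: x = s1 (mod p1) and x = s2 (mod p2)
            x = (s1 * p2 * pow(p2, -1, p1) + s2 * p1 * pow(p1, -1, p2)) % R
            roots.add(x)
    return roots

M = 23
assert M in decrypt(encrypt(M))
```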

H. C. Williams
Trapdoor Rings And Their Use In Cryptography

This paper examines possible trapdoor structures which can be used to design public key cryptosystems based on the factorization problem. Some examples of such finite trapdoor systems which might serve as a basis for an extended RSA cryptosystem are proposed.

V. Varadharajan
On Computing Logarithms Over Finite Fields

The problem of computing logarithms over finite fields has proved to be of interest in different fields [4]. Subexponential time algorithms for computing logarithms over the special cases GF(p), GF(p^2) and GF(p^m) for a fixed p and m → ∞ have been obtained. In this paper, we present some results towards a subexponential time algorithm for the remaining cases GF(p^m) for p → ∞ and fixed m ≠ 1, 2. The algorithm depends on mapping the field GF(p^m) into a suitable cyclotomic extension of the integers (or rationals). Once an isomorphism between GF(p^m) and a subset of the cyclotomic field Q(θ_q) is obtained, the algorithm becomes similar to the previous algorithms for m = 1, 2. A rigorous proof of subexponential time is not yet available, but using some heuristic arguments we can show how it could be proved. If a proof were obtained, it would use results on the distribution of certain classes of integers and results on the distribution of some ideal classes in cyclotomic fields.

Taher ElGamal
On Using RSA with Low Exponent in a Public Key Network

We consider the problem of solving systems of equations P_i(x) ≡ 0 (mod n_i), i = 1...k, where the P_i are polynomials of degree d, the n_i are distinct relatively prime numbers, and x < min n_i. We prove that if k > d(d+1)/2 we can recover x in polynomial time provided n_i ≫ 2^k. This shows that RSA with low exponent is not a good alternative to use as a public key cryptosystem in a large network. It also shows that a protocol by Broder and Dolev [4] is insecure if RSA with low exponent is used.
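The simplest instance of such a system is the classic low-exponent broadcast: the same x sent under exponent 3 to three coprime moduli gives P_i(x) = x^3 - c_i. A hedged sketch (toy moduli, illustrative only) of recovering x by Chinese remaindering and an integer cube root:

```python
from functools import reduce

def crt(residues, moduli):
    """Combine x = r_i (mod n_i) into x mod (n_1 * ... * n_k)."""
    N = reduce(lambda u, v: u * v, moduli)
    return sum(r * (N // n) * pow(N // n, -1, n)
               for r, n in zip(residues, moduli)) % N

def icbrt(n):                            # integer cube root by bisection
    lo, hi = 0, 1 << (n.bit_length() // 3 + 2)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

moduli = [143, 323, 667]                 # pairwise coprime toy moduli
x = 42                                   # x < min(n_i)
cts = [pow(x, 3, n) for n in moduli]     # the three intercepted ciphertexts
assert icbrt(crt(cts, moduli)) == x      # x^3 < n1*n2*n3, so CRT gives x^3 exactly
```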

Johan Hastad
Lenstra’s Factorisation Method Based on Elliptic Curves

The purpose of this exposition is to explain the method due to H.W. Lenstra, Jr. [1] of determining a non-trivial factor, p, of a composite number, n. The method uses the theory of elliptic curves and has an expected running time of $$ L(p)^{\sqrt 2} $$ where $$ L(p) = \exp(\sqrt{\log p \log \log p}) $$. The aim of the exposition is to be completely elementary, with an introduction to the arithmetic of elliptic curves sufficient to enable the reader to follow the later section explaining the method. The paper ends with a few remarks on techniques for the practical implementation of the algorithm.
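A compact sketch of the method under its usual presentation: pick a random curve y^2 = x^3 + ax + b and point over Z/nZ, multiply the point by k! with the chord-and-tangent formulas, and harvest a factor from a failed modular inversion. Curve count and smoothness bound are ad-hoc toy choices, and an unlucky run may return None.

```python
import math, random

class FactorFound(Exception):
    """Raised when a modular inversion exposes a divisor of n."""

def inv_mod(x, n):
    g = math.gcd(x % n, n)
    if g != 1:
        raise FactorFound(g)        # gcd > 1: a (possibly trivial) divisor
    return pow(x, -1, n)

def ec_add(P, Q, a, n):
    """Chord-and-tangent addition on y^2 = x^3 + a*x + b over Z/nZ."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % n == 0:
        return None                 # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, n) % n
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, n) % n
    x3 = (lam * lam - x1 - x2) % n
    return (x3, (lam * (x1 - x3) - y1) % n)

def ec_mul(k, P, a, n):
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P, a, n)
        P = ec_add(P, P, a, n)
        k >>= 1
    return R

def lenstra_ecm(n, curves=100, bound=100):
    for _ in range(curves):
        x, y, a = (random.randrange(1, n) for _ in range(3))
        P = (x, y)      # on y^2 = x^3 + a*x + b for the b this choice implies
        try:
            for k in range(2, bound):
                P = ec_mul(k, P, a, n)  # P becomes (bound-1)! * original P
                if P is None:
                    break               # hit infinity without finding a factor
        except FactorFound as e:
            if 1 < e.args[0] < n:
                return e.args[0]
    return None                         # unlucky run: retry or raise bounds

print(lenstra_ecm(455839))              # e.g. 455839 = 599 * 761
```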

N. M. Stephens
Use of Elliptic Curves in Cryptography

We discuss the use of elliptic curves in cryptography. In particular, we propose an analogue of the Diffie-Hellman key exchange protocol which appears to be immune from attacks of the style of Western, Miller, and Adleman. With the current bounds for infeasible attack, it appears to be about 20% faster than the Diffie-Hellman scheme over GF(p). As computational power grows, this disparity should get rapidly bigger.
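A toy sketch of the key-exchange analogue: both parties scalar-multiply a public base point on a curve y^2 = x^3 + ax + b over GF(p) by their secret integers and exchange the results. The curve, base point, and secrets below are illustrative stand-ins; a real instantiation needs a large prime and a base point of large order.

```python
p, a = 97, 2                     # toy curve y^2 = x^3 + 2x + 3 over GF(97)
G = (3, 6)                       # base point on that curve (36 = 27 + 6 + 3)

def ec_add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                  # point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P):                # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

alice, bob = 13, 27                            # toy secret scalars
A, B = ec_mul(alice, G), ec_mul(bob, G)        # exchanged in the clear
assert ec_mul(alice, B) == ec_mul(bob, A)      # same shared secret point
```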

Victor S. Miller

Randomness and Other Problems

Cryptography with Cellular Automata

This abstract discusses a stream cipher based on a simple one-dimensional cellular automaton. The cellular automaton consists of a circular register with N cells, each having a value a_i equal to 0 or 1. The values are updated synchronously in discrete time steps according to the rule (1a) $$ a'_i = a_{i-1} \; \mathrm{XOR} \; (a_i \; \mathrm{OR} \; a_{i+1}), $$ or, equivalently, (1b) $$ a'_i = (a_{i-1} + a_i + a_{i+1} + a_i a_{i+1}) \bmod 2 . $$ The initial state of the register is used as a seed or key. The values a(t) attained by a particular cell through time can then serve as a random sequence. Ciphertext C can be obtained from binary plaintext P as usual according to C_i = P_i XOR a(t); the plaintext can be recovered by repeating the same operation, but only if the sequence a(t) is known.

Cellular automata such as (1) have been investigated in studies of the origins of randomness in physical systems [2]. They are related to non-linear feedback shift registers, but have slightly different boundary conditions.

Figure 1 shows the pattern of cell values produced by (1) with a seed consisting of a single nonzero cell in a large register. The time sequence of values of the centre cell shows no statistical regularities under the tests of ref. [3] (for sequence lengths up to 2^19 ≈ 5×10^5). Some definite space-time patterns are nevertheless produced by the cellular automaton rule.

In the limit N → ∞, the cellular automaton evolution is like an iterated continuous mapping of the Cantor set, and can be studied using dynamical systems theory [4]. One result is that the evolution is unstable with respect to small perturbations in the initial seed. A change produced by reversing a single cell value typically expands at a rate given by Lyapunov exponents, equal to 0.25 on the left and 1 on the right. Length-T time sequences of cell values are found, however, to be affected on average only by about 1.19T initial values.

Iterations of the cellular automaton rule (1) can be considered as Boolean functions of the initial cell values. Disjunctive normal forms (minimized using [5]) for these functions are found to increase in size roughly as 4^{0.65t}, giving some indication of the complexity of the cellular automaton evolution.

Figure 2 shows the complete state transition diagram for the cellular automaton (1) in a register of size N = 11. For large N, an overwhelming fraction of states lie on the longest cycle. But there are also shorter cycles, often corresponding to states with special symmetries. Figure 3 shows the length of the longest cycle as a function of N. The results (up to N = 53, which gives cycle length 40114679273) fit approximately 2^{0.61N}. The mapping (1) is not a bijection, but is almost so; only a fraction κ^N ≈ 0.85^N of states do not have unique predecessors [6] (κ is the real root of 4κ^3 − 2κ^2 − 1 = 0).

The security of a cryptographic system based on (1) relies on the difficulty of finding the seed from a time sequence of cell values. This problem is in the class NP. No systematic algorithm for its solution is currently known that takes time less than exponential in N. No statistical regularities have been found in sequences shorter than the cycle length.

One approach to the problem of finding the seed [6] uses the near linearity of the rule (1). Equation (1) can be written in the alternative form a_{i-1} = a'_i XOR (a_i OR a_{i+1}). Given the values of cells in two adjacent columns, this allows the values of all cells in a triangle to the left to be reconstructed. But the sequence provided gives only one column. Values in the other column can be guessed, and then determined from the consistency of Boolean equations for the seed. But in disjunctive normal form the number of terms in these equations increases linearly with N, presumably making their solution take a time more than polynomial in N.

The cellular automaton (1) can be implemented efficiently on an integrated circuit; it requires less than ten gate delay times to generate each output bit, and can thus potentially be used in a variety of high-bandwidth cryptographic applications.

Much of the work summarized here was done while I was consulting at Thinking Machines Corporation (Cambridge, MA). I am grateful for discussions with many people, including Persi Diaconis, Carl Feynman, Richard Feynman, Shafi Goldwasser, Erica Jen and John Milnor.
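A minimal sketch of the stream cipher just described: a circular register evolves under rule (1a), one cell's successive values form the keystream, and the seed is the key. Register size, tap position, and seed below are toy choices.

```python
def ca_keystream(seed, nbits, tap=0):
    """seed: list of 0/1 cell values (the key). Yields nbits keystream bits."""
    a = list(seed)
    N = len(a)
    for _ in range(nbits):
        yield a[tap]
        # rule (1a): a'_i = a_(i-1) XOR (a_i OR a_(i+1)), circular boundaries
        a = [a[i - 1] ^ (a[i] | a[(i + 1) % N]) for i in range(N)]

def ca_crypt(bits, seed):
    return [b ^ k for b, k in zip(bits, ca_keystream(seed, len(bits)))]

seed = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]      # toy 11-cell key
plain = [1, 0, 1, 1, 0, 0, 1, 0]
cipher = ca_crypt(plain, seed)
assert ca_crypt(cipher, seed) == plain        # XOR keystream is its own inverse
```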

Stephen Wolfram
Efficient Parallel Pseudo-Random Number Generation

We present a parallel algorithm for pseudo-random number generation. Given a seed of n^ε truly random bits for any ε > 0, our algorithm generates n^c pseudo-random bits for any c > 1. This takes poly-log time using n^{ε′} processors, where ε′ = κε for some fixed small constant κ > 1. We show that the pseudo-random bits output by our algorithm can not be distinguished from truly random bits in parallel poly-log time using a polynomial number of processors with probability 1/2 + 1/n^{O(1)}, if the multiplicative inverse problem almost always can not be solved in RNC. The proof is interesting and is quite different from previous proofs for sequential pseudo-random number generators.

Our generator is fast and its output is provably as effective for RNC algorithms as truly random bits. Our generator passes all the statistical tests in Knuth [14]. Moreover, the existence of our generator has a number of central consequences for complexity theory. Given a randomized parallel algorithm A (over a wide class of machine models such as parallel RAMs and fixed connection networks) with time bound T(n) and processor bound P(n), we show A can be simulated by a parallel algorithm with time bound T(n) + O((log n)(log log n)), processor bound P(n)n^{ε′}, and only using n^ε truly random bits for any ε > 0.

Also, we show that if the multiplicative inverse problem is almost always not in RNC, then RNC is within the class of languages accepted by uniform poly-log depth circuits with unbounded fan-in and strictly sub-exponential size $$ \bigcap\limits_{\varepsilon > 0} {2^{n^\varepsilon } } $$.

J. H. Reif, J. D. Tygar
How to Construct Pseudo-random Permutations from Pseudo-random Functions

Let F_n be the set of all functions from n bits to n bits. Let f^n specify for each key k of a given length a function f^n_k ∈ F_n. We say f^n is pseudo-random if the following two properties hold:
1. Given a key k and an input α of length n, the time to evaluate f^n_k(α) is polynomial in n.
2. If a random key k is chosen, f^n_k “looks like” a random function chosen from F_n to any algorithm which is allowed to evaluate f^n_k at polynomially many (in n) input values.
Let P_{2n} be the set of permutations (1-1 onto functions) from 2n bits to 2n bits. Let p^{2n} specify for each key k of a given length a permutation p^{2n}_k ∈ P_{2n}. We present a simple method for describing p^{2n} in terms of f^n. The method has the property that if f^n is pseudo-random then p^{2n} is also pseudo-random. The method was inspired by a study of the security of the Data Encryption Standard. This result, together with the result of Goldreich, Goldwasser and Micali [GGM], implies that if there is a pseudo-random number generator then there is a pseudo-random invertible permutation generator. We also prove that if two permutation generators which are “slightly secure” are cryptographically composed, the result is more secure than either one alone.
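Luby and Rackoff's method is usually presented as a Feistel network: each round XORs f of one half of the block into the other half, so the resulting 2n-bit map is invertible whatever f is. The sketch below uses three rounds with independent keys and an HMAC-based stand-in for the abstract pseudo-random function f.

```python
import hmac, hashlib

N = 8                                     # half-block size in bytes

def f(key, x):
    """Stand-in PRF mapping N bytes to N bytes (not from the paper)."""
    return hmac.new(key, x, hashlib.sha256).digest()[:N]

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

def feistel_encrypt(keys, block):
    L, R = block[:N], block[N:]
    for k in keys:                        # one independent round key each
        L, R = R, xor(L, f(k, R))
    return L + R

def feistel_decrypt(keys, block):
    L, R = block[:N], block[N:]
    for k in reversed(keys):              # undo the rounds in reverse order
        L, R = xor(R, f(k, L)), L
    return L + R

keys = [b'k1', b'k2', b'k3']              # three rounds, as in the construction
msg = b'attack at dawn!!'                 # one 2N-byte block
assert feistel_decrypt(keys, feistel_encrypt(keys, msg)) == msg
```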

Michael Luby, Charles Rackoff
The Bit Security of Modular Squaring given Partial Factorization of the Modulos

It is known that given a composite integer N = p_1 p_2 (such that p_1 ≡ p_2 ≡ 3 (mod 4)), and q a quadratic residue modulo N, guessing the least significant bit of a square root of q with any non-negligible advantage is as hard as factoring N.

In this paper we extend the above result to multi-prime numbers N = p_1 p_2 ... p_l (such that p_1 ≡ p_2 ≡ ... ≡ p_l ≡ 3 (mod 4)). We show that given N and q, a quadratic residue mod N, guessing the least significant bit of a square root of q is as hard as completely factoring N. Furthermore, the difficulty of guessing the least significant bit of the square root of q remains unchanged even when all but two of the prime factors of N, p_3, ..., p_l, are known. The result is useful in designing multi-party cryptographic protocols.

Benny Chor, Oded Goldreich, Shafi Goldwasser
Some Cryptographic Aspects of Womcodes

We consider the following cryptographic and coding questions in relation with the use of “write-once” memories (or woms):
1. How to prevent anyone from reusing the wom (immutable codes).
2. How to fix the written information in the wom after a given number of generations (locking codes).
3. How to encode a “credit” in a way that guarantees the user t generations or “purchases” in any possible way and makes it impossible to cheat; i.e., writing on the wom necessarily increases the spent amount of money. This coding will be called “incremental locked”.
These questions were only raised in [5], where the accent was put on the generation of womcodes possessing an “easy reading-reserved writing” property.

Philippe Godlewski, Gerard D. Cohen
How to Reduce your Enemy’s Information (extended abstract)

If no eavesdropping occurred over the private channel, it is possible for Alice and Bob to publicly verify that no transmission errors nor tampering occurred either, with a 2^{-K} error probability, and end up with an entirely secret final string that is only K bits shorter than the original private transmission. This is optimal. A somewhat shorter common string, on which Eve still has no information, can also be obtained with high probability despite transmission errors over the private channel.

If partial eavesdropping occurred over the private channel, leaking up to K bits of information to Eve, in Shannon’s sense, it is still possible for Alice and Bob to publicly verify that no transmission errors nor tampering occurred, with a 2^{-L} error probability, and end up with a final string that is K+L+S bits shorter than the original private transmission, on which Eve has less than 2^{-S}/ln 2 bit of information. Here again, transmission errors can be handled at the cost of further reducing the length of the final common string.

Finally, if partial eavesdropping over the private channel is restricted to K physical bits secretly chosen by Eve, it becomes possible again for Alice and Bob to verify with high probability that no errors nor tampering occurred, and end up with a new string on which Eve has no information whatsoever. However, the new string is substantially shorter than if Alice and Bob had tolerated knowledge by Eve of an arbitrarily small fraction of one bit of information.

Charles H. Bennett, Gilles Brassard, Jean-Marc Robert
Encrypting Problem Instances
Or..., Can You Take Advantage of Someone Without Having to Trust Him?

This paper describes ongoing work on the task of encrypting problem instances, also known as computing with encrypted data. A problem is specified by a function f and an instance by a value x in the domain of f. The scenario involves two people, A and B. A has instances {x_i} of f to which she needs answers, but she lacks the resources to compute them. We use the term resources completely generally: she may be lacking time, space, algorithmic knowledge, or appropriate hardware, or she may simply be too lazy to implement a solution that she knows others have already implemented. B has the resources to compute f(x) and is willing to let A use them, i.e., he is willing to send her f(x) if she sends him x. She would like to take advantage of his generosity without having to trust him, i.e., she does not want to reveal any more about her data than she must in order to enable him to compute the correct answer. Intuitively, we say that f is encryptable if A can easily transform instance x into instance x′, obtain f(x′) from B, and easily compute f(x) from f(x′) in such a way that B cannot infer x from x′.

Joan Feigenbaum
Divergence Bounds on Key Equivocation and Error Probability in Cryptanalysis

A general method, based on the f-divergence (Csiszár), is presented to obtain divergence bounds on error probability and key equivocation. The method presented here is applicable to discrete data as well as to continuous data. As a special case of the f-divergence it is shown that the upper bound on key equivocation derived by Blom is of the Bhattacharyya type. For a pure cipher model using a discrete memoryless message source, a recursive formula is derived for the error probability. A generalization of the β-unicity distance is given, from which it is shown why the key equivocation is a poor measure of theoretical security in many cases, and why lower bounds on error probability must be considered instead of upper bounds. Finally the concept of unicity distance is generalized in terms of the error probability and is called the P_e-security distance.

Johan van Tilburg, Dick E. Boekee

Impromptu Talks

A chosen text attack on the RSA cryptosystem and some discrete logarithm schemes

A new attack on the RSA cryptosystem is presented. This attack assumes less than previous chosen ciphertext attacks, since the cryptanalyst has to obtain the plaintext versions of some carefully chosen ciphertexts only once, and can then proceed to decrypt further ciphertexts without further recourse to the authorized user’s decrypting facility. This attack is considerably more efficient than the best algorithms that are known for factoring the public modulus. The same idea can also be used to develop an attack on the three-pass system of transmitting information using exponentiation in a finite field.

Y. Desmedt, A. M. Odlyzko
On the Design of S-Boxes

The ideas of completeness and the avalanche effect were first introduced by Kam and Davida [1] and Feistel [2], respectively. If a cryptographic transformation is complete, then each ciphertext bit must depend on all of the plaintext bits. Thus, if it were possible to find the simplest Boolean expression for each ciphertext bit in terms of the plaintext bits, each of those expressions would have to contain all of the plaintext bits if the function is complete. Alternatively, the function f is complete if, for every pair $$ \{ (i,j) \mid 1 \leqslant i,j \leqslant n\} $$, there is at least one pair of n-bit plaintext vectors X and X_i that differ only in bit i such that f(X) and f(X_i) differ at least in bit j.
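For small block sizes this criterion can be tested exhaustively. The sketch below (toy 4-bit maps, purely illustrative) checks every (i, j) pair against every input.

```python
def is_complete(f, n):
    """f: int -> int over n-bit blocks. Complete iff every (i, j) has some
    input X where flipping input bit i changes output bit j."""
    needed = {(i, j) for i in range(n) for j in range(n)}
    for X in range(1 << n):
        fX = f(X)
        for i in range(n):
            diff = fX ^ f(X ^ (1 << i))
            for j in range(n):
                if (diff >> j) & 1:
                    needed.discard((i, j))
    return not needed

n = 4
assert not is_complete(lambda x: x, n)          # identity: bit i only moves bit i
mix = lambda x: (x * 7 + (x >> 2)) % (1 << n)   # toy mixing map, for illustration
print(is_complete(mix, n))
```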

A. F. Webster, S. E. Tavares
The Real Reason for Rivest’s Phenomenon
Don Coppersmith
The Importance of “Good” Key Scheduling Schemes (How to Make a Secure DES* Scheme with ≤ 48 Bit Keys?)

In DES the key scheduling scheme uses mainly shift registers. By modifying this key scheduling, conventional cryptosystems can be designed which are, e.g., strong against exhaustive key search attacks (without increasing the key size), or have public key like properties. Other effects obtainable by modifying the key scheduling and their importance are discussed.

Jean-Jacques Quisquater, Yvo Desmedt, Marc Davio
Access Control at the Netherlands Postal and Telecommunications Services

The Netherlands Postal and Telecommunications Services (PTT) have developed a system that controls the entrance to their buildings by use of magnetic stripe cards. In this note some cryptographic aspects of the system are explained.

Willem Haemers
Backmatter
Metadata
Title
Advances in Cryptology — CRYPTO ’85 Proceedings
edited by
Hugh C. Williams
Copyright year
1986
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-39799-1
Print ISBN
978-3-540-16463-0
DOI
https://doi.org/10.1007/3-540-39799-X