1991 | Book

Advances in Cryptology-CRYPTO ’90

Proceedings

Edited by: Alfred J. Menezes, Scott A. Vanstone

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

Crypto ’90 marked the tenth anniversary of the Crypto conferences held at the University of California at Santa Barbara. The conference was held from August 11 to August 15, 1990, and was sponsored by the International Association for Cryptologic Research, in cooperation with the IEEE Computer Society Technical Committee on Security and Privacy and the Department of Computer Science of the University of California at Santa Barbara. Crypto ’90 attracted 227 participants from twenty countries around the world. Roughly 35% of attendees were from academia, 45% from industry and 20% from government. The program was intended to provide a balance between the purely theoretical and the purely practical aspects of cryptography to meet the needs and diversified interests of these various groups. The overall organization of the conference was superbly handled by the general chairperson Sherry McMahan. All of the outstanding features of Crypto, which we have come to expect over the years, were again present and, in addition to all of this, she did a magnificent job in the preparation of the book of abstracts. This is a crucial part of the program and we owe her a great deal of thanks.

Table of Contents

Frontmatter

Cryptanalysis

Differential Cryptanalysis of DES-like Cryptosystems
Extended Abstract

The Data Encryption Standard (DES) is the best known and most widely used cryptosystem for civilian applications. It was developed at IBM and adopted by the National Bureau of Standards in the mid-1970s, and has successfully withstood all the attacks published so far in the open literature. In this paper we develop a new type of cryptanalytic attack which can break DES with up to eight rounds in a few minutes on a PC and can break DES with up to 15 rounds faster than an exhaustive search. The new attack can be applied to a variety of DES-like substitution/permutation cryptosystems, and demonstrates the crucial role of the (unpublished) design rules.
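The XOR-difference statistics such an attack exploits can be tabulated for a small S-box. Below is a hypothetical sketch using a toy 4-bit S-box (`TOY_SBOX` and `difference_table` are illustrative names, not part of the paper):

```python
# Sketch: difference distribution table of a toy 4-bit S-box.  Differential
# cryptanalysis looks for input XORs dx whose output XOR dy is unusually
# likely; high table entries are the starting point of the attack.
TOY_SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def difference_table(sbox):
    """ddt[dx][dy] = number of inputs x with sbox[x] ^ sbox[x ^ dx] == dy."""
    n = len(sbox)
    ddt = [[0] * n for _ in range(n)]
    for x in range(n):
        for dx in range(n):
            ddt[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return ddt

ddt = difference_table(TOY_SBOX)
# dx = 0 always gives dy = 0; a non-zero dx spreads unevenly over dy values.
assert ddt[0][0] == 16
print(max(max(row) for row in ddt[1:]))  # the attacker's best single entry
```

An attacker chains such high-probability entries across rounds to distinguish the cipher from random and eventually recover key bits.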

Eli Biham, Adi Shamir
A Statistical Attack of the FEAL-8 Cryptosystem

This paper presents a chosen plaintext cryptanalysis of the FEAL-8 cryptosystem. The attack requires the ciphertext corresponding to approximately 10000 pairs of 64-bit plaintext blocks. The difference (bitwise xor) between the two blocks of each pair is equal to an appropriately selected constant. We first show that some differential statistics for intermediate values of the data randomizer are non-uniform and independent of the encryption key. We then show that these statistics can be used to compute gradually the expanded key of the data randomizer.

Henri Gilbert, Guy Chassé
An Improved Linear Syndrome Algorithm in Cryptanalysis With Applications

A new algorithm is developed for attacking certain comparatively simple LFSR-based ciphersystems. Special attention is paid to minimizing the solution distance and guaranteeing the success probability of the attacks. The algorithm is then applied to crack the random generators of Geffe (1973) and Beth-Piper (1984).

Kencheng Zeng, C. H. Yang, T. R. N. Rao

Protocols

Quantum Bit Commitment and Coin Tossing Protocols

In the late 1960s a physicist, Stephen Wiesner, had the idea that the uncertainty principle could be used for cryptography (though he published his result much later [Wie83]). One of his ideas was that it would be possible to use a stream of polarized photons to transmit two messages in a way that would make only one of them readable at the receiver’s choosing. This notion, which he called “multiplexing”, is remarkably similar to the “one-out-of-two oblivious transfer” to be reinvented many years later [EGL83], and it even predates Rabin’s notion of oblivious transfer [Rab81] by more than a decade. In the late 1970s, Wiesner’s invention was brought back to life by the work of Charles H. Bennett and Gilles Brassard, which resulted in a CRYPTO ’82 paper [BBBW82]. Subsequently, Bennett and Brassard used quantum cryptographic principles to implement basic cryptographic protocols, such as secret key exchange and coin tossing by telephone [BB84]. There has recently been much excitement in the field of quantum cryptography because a working prototype of the quantum key exchange channel has been successfully built at the IBM T. J. Watson Research Laboratory, Yorktown Heights [BBBSS90].

Gilles Brassard, Claude Crépeau
Security with Low Communication Overhead
Extended Abstract

We consider the communication complexity of secure multiparty computations by networks of processors, each with unlimited computing power. Say that an n-party protocol for a function of m bits is efficient if it uses a constant number of rounds of communication and a total number of message bits that is polynomial in max(m, n). We show that any function has an efficient protocol that achieves (n log n)/m resilience. Ours is the first secure multiparty protocol in which the communication complexity is independent of the computational complexity of the function being computed.

We also consider the communication complexity of zero-knowledge proofs of properties of committed bits. We show that every function f of m bits has an efficient notarized envelope scheme; that is, there is a protocol in which a computationally unlimited prover commits a sequence of bits x to a computationally unlimited verifier and then proves in perfect zero-knowledge (without decommitting x) that f(x) = 1, using a constant number of rounds and poly(m) message bits. Ours is the first notarized envelope scheme in which the communication complexity is independent of the computational complexity of f.

Finally, we establish a new upper bound on the number of oracles needed in instance-hiding schemes for arbitrary functions. These schemes allow a computationally limited querier to capitalize on the superior power of one or more computationally unlimited oracles in order to obtain f(x) without revealing its private input x to any one of the oracles. We show that every function of m bits has an (m/log m)-oracle instance-hiding scheme.

The central technique used in all of these results is locally random reducibility, which was used for the first time in [7] and is formally defined for the first time here. In addition to the applications that we present, locally random reducibility has been applied to interactive proof systems, program checking, and program testing.

D. Beaver, J. Feigenbaum, J. Kilian, P. Rogaway
Fair Computation of General Functions in Presence of Immoral Majority

This paper describes a method for n players, a majority of which may be faulty, to compute correctly, privately, and fairly any computable function f(x1, ..., xn), where xi is the input of the i-th player. The method uses as a building block an oblivious transfer primitive.

Previous methods achieved these properties only for boolean functions, which, in particular, precluded composition of such protocols.

We also propose a simpler definition of security for multi-player protocols which still implies previous definitions of privacy and correctness.

Shafi Goldwasser, Leonid Levin
One-Way Group Actions

Bit commitment schemes are central to all zero-knowledge protocols [GMR89] for NP-complete problems [GMW86, BC86a, BC86b, BCC88, BCY89, FS89, etc.]. One-way group actions are a natural and powerful primitive for the implementation of bit commitment schemes. They generalize one-way group homomorphisms [IY88], which were not powerful enough to capture the bit commitment scheme based on graph isomorphism [BC86b]. One-way group actions provide a unified theory for all the known bit commitment schemes that offer unconditional protection for the originator of the commitments, and for many of those that offer her statistical protection. (Unconditional protection means that the value of the bit committed to is always perfectly concealed. Statistical protection means either that this is almost always the case, or that only an arbitrarily small probabilistic bias about this bit can leak; in either case, statistical protection must hold even against unlimited computing power.)

Bit commitment schemes based on one-way group actions automatically have the chameleon property [BCC88] (also called trap-door [FS89]), which is useful for the parallelization of zero-knowledge protocols [BCY89, FS89]. Moreover, these bit commitment schemes allow the originator of two commitments to convince the receiver that they are commitments to the same bit, provided that this is so, without disclosing any information about which bit this is.

In addition, one-way group actions are also a natural primitive for the implementation of claw-free pairs of functions [GMRi88].

Gilles Brassard, Moti Yung

Algebra and Number Theory

Solving Large Sparse Linear Systems Over Finite Fields

Many of the fast methods for factoring integers and computing discrete logarithms require the solution of large sparse linear systems of equations over finite fields. This paper presents the results of implementations of several linear algebra algorithms. It shows that very large sparse systems can be solved efficiently by using combinations of structured Gaussian elimination and the conjugate gradient, Lanczos, and Wiedemann methods.
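As a sketch of the simplest of these methods, here is dense Gaussian elimination over GF(2); this is a hypothetical toy version, since the structured-elimination and iterative (conjugate gradient, Lanczos, Wiedemann) methods compared in the paper specifically exploit sparsity, which this version does not:

```python
def solve_gf2(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination.

    A is a square, invertible 0/1 matrix given as a list of rows; b is a
    0/1 vector.  Arithmetic mod 2 reduces addition to XOR.
    """
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]               # augmented matrix
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])  # find a pivot row
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:                     # clear the column
                M[r] = [x ^ y for x, y in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

A = [[1, 1, 0], [0, 1, 1], [0, 0, 1]]
b = [1, 1, 1]
x = solve_gf2(A, b)
# Check the solution by re-multiplying modulo 2.
assert all(sum(A[i][j] * x[j] for j in range(3)) % 2 == b[i] for i in range(3))
```

For the matrices arising in factoring and discrete-logarithm computations, which are huge but extremely sparse, dense elimination like this is exactly what the paper's combinations of methods avoid.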

B. A. LaMacchia, A. M. Odlyzko
On the Computation of Discrete Logarithms in Class Groups
Extended Abstract

In [3] and [1] a new key exchange system was introduced whose security is related to the difficulty of solving the discrete logarithm problem in the class group of imaginary quadratic orders. Subsequently, in [5] and [4] a subexponential algorithm for computing those class groups was presented and it was shown how to use this algorithm and the index-calculus method to calculate discrete logarithms.

Johannes Buchmann, Stephan Düllmann
Matrix Extensions of the RSA Algorithm

A new matrix extension of the RSA algorithm is proposed which is based on the Cayley-Hamilton theorem and a one-way function. The security of this algorithm rests upon both that of the RSA algorithm and the one-way function. The computational efficiency of the new algorithm depends on the dimension of the matrix. The most efficient implementation is the 2×2 case, in which both encryption and decryption use a single modular multiplication and a single evaluation of the one-way function.
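The underlying matrix-power primitive can be sketched as square-and-multiply exponentiation of a 2×2 matrix modulo n. This is an illustration of the primitive only, not the scheme itself, which also involves the one-way function and RSA key material; `mat_mul` and `mat_pow` are illustrative names:

```python
def mat_mul(A, B, n):
    """Product of two 2x2 matrices, entries reduced modulo n."""
    return [[(A[0][0]*B[0][0] + A[0][1]*B[1][0]) % n,
             (A[0][0]*B[0][1] + A[0][1]*B[1][1]) % n],
            [(A[1][0]*B[0][0] + A[1][1]*B[1][0]) % n,
             (A[1][0]*B[0][1] + A[1][1]*B[1][1]) % n]]

def mat_pow(A, e, n):
    """A**e modulo n by square-and-multiply: O(log e) matrix products."""
    R = [[1, 0], [0, 1]]            # 2x2 identity
    while e:
        if e & 1:
            R = mat_mul(R, A, n)
        A = mat_mul(A, A, n)
        e >>= 1
    return R

# Sanity check via a classical identity: powers of [[1,1],[1,0]] contain
# the Fibonacci numbers, so the 10th power holds F(10) = 55 off-diagonal.
assert mat_pow([[1, 1], [1, 0]], 10, 1000)[0][1] == 55
```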

Chih-Chwen Chuang, James George Dunham
Constructing Elliptic Curve Cryptosystems in Characteristic 2

Since the group of an elliptic curve defined over a finite field Fq was proposed for Diffie-Hellman type cryptosystems in [7] and [15], some work on implementation has been done using special types of elliptic curves for which the order of the group is trivial to compute ([2], [13]). A consideration which discourages the use of an arbitrary elliptic curve is that one needs Schoof’s algorithm [16] to count the order of the corresponding group, and this algorithm, in addition to being rather complicated, has running time O(log^9 q) for an elliptic curve defined over Fq. Thus, in applications of elliptic curves where one needs extremely large q — for example, the original version of the elliptic curve primality test ([4], [11]) — this algorithm is too time-consuming.

Neal Koblitz

Signatures and Authentication

Identification Tokens — or: Solving The Chess Grandmaster Problem

Fiat and Shamir have proposed to use zero-knowledge interactive proofs to obtain secure identification mechanisms. Real-time attacks in which active eavesdroppers relay questions and answers, or in which the prover deliberately helps an impersonator, have been described [4]. In this paper a solution against such frauds is given, and (based on some physical assumptions) it is proved that the solution protects against the real-time attacks.

Thomas Beth, Yvo Desmedt
Arbitrated Unconditionally Secure Authentication Can Be Unconditionally Protected against Arbiter’s Attacks
Extended Abstract

Given an arbiter whose arbitration is trusted, an authentication scheme is presented which is unconditionally secure against impersonation and/or substitution attacks performed by the arbiter, whereas previous schemes did not protect against such attacks. Furthermore, the scheme protects unconditionally against impersonation/substitution attacks by an outsider, against disavowal of a message by the sender, and against the receiver forging a message which was never sent. A practical scheme based on finite geometry is presented. Adaptations of the scheme realize an asymmetric conventional authentication scheme, and the set-up of an unconditionally secure oblivious transfer system.

Yvo Desmedt, Moti Yung
Convertible Undeniable Signatures

We introduce a new concept called convertible undeniable signature schemes. In these schemes, release of a single bit string by the signer turns all of his signatures, which were originally undeniable signatures, into ordinary digital signatures. We prove that the existence of such schemes is implied by the existence of digital signature schemes. Then, looking at the problem more practically, we present a very efficient convertible undeniable signature scheme. This scheme has the added benefit that signatures can also be selectively converted.

Joan Boyar, David Chaum, Ivan Damgård, Torben Pedersen
Unconditionally-Secure Digital Signatures

All known digital signature schemes can be forged by anyone having enough computing power. For a finite set of participants, we can overcome this weakness.

We present a polynomial-time protocol in which a participant can convince (with an exponentially small error probability) any other participant that his signature is valid. Moreover, such a convinced participant can convince any other participant of the signature’s validity, without interaction with the original signer.

An extension allows, in most cases, a participant who receives a signature from any source to convince each other participant of its validity. If a participant cannot use the signature to convince others, he knows so when he receives it.

David Chaum, Sandra Roijakkers

Secret Sharing

Geometric Shared Secret and/or Shared Control Schemes

A shared secret scheme is normally specified in terms of a desired security, Pd, and a concurrence scheme, Γ. The concurrence scheme (aka access structure) identifies subsets of participants (also called trustees or shareholders), each of which should be able to cooperatively recover the secret and/or initiate the controlled action. The security requirement is expressed as the maximum acceptable probability, Pd, that the secret can be exposed or the controlled action initiated by a collection of persons that doesn’t include at least one of the authorized subsets identified in the concurrence scheme. A concurrence scheme is said to be monotone if every set of participants that includes one or more sets from Γ is also able to recover the secret. The closure of Γ, denoted by Γ̂, is the collection of all supersets (not necessarily proper) of the sets in Γ, i.e., the collection of all sets of participants that can recover the secret and/or initiate the controlled action. A shared secret scheme implementing a concurrence scheme Γ is said to be perfect if the probability of recovering the secret is the same for every set, C, of participants with C ∉ Γ̂. Since, in particular, C could consist of only nonparticipants, i.e., of persons with no insider information about the secret, the probability, P, of an unauthorized recovery of the secret in a perfect scheme is just the probability of being able to “guess” the secret using only public information about Γ and the shared secret scheme implementing Γ: P ≤ Pd.
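For a small participant set, the closure Γ̂ of a concurrence scheme can be enumerated directly. A hypothetical toy example (`closure` is an illustrative name, not from the paper):

```python
from itertools import chain, combinations

def closure(gamma, participants):
    """Return all subsets of `participants` containing some set in `gamma`,
    i.e. the closure of the concurrence scheme, as frozensets."""
    everyone = list(participants)
    all_subsets = chain.from_iterable(
        combinations(everyone, k) for k in range(len(everyone) + 1))
    return {frozenset(s) for s in all_subsets
            if any(b <= frozenset(s) for b in gamma)}

# A (2, 3)-style scheme: any two of three trustees may recover the secret.
gamma = [frozenset('AB'), frozenset('BC'), frozenset('AC')]
hat_gamma = closure(gamma, 'ABC')
print(sorted(''.join(sorted(s)) for s in hat_gamma))  # the three pairs + ABC
```

A perfect scheme must then make the secret equally hard to guess for every set outside `hat_gamma`, including the empty set of outsiders.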

Gustavus J. Simmons
Some Improved Bounds on the Information Rate of Perfect Secret Sharing Schemes
Extended Abstract

Informally, a secret sharing scheme is a method of sharing a secret key K among a finite set of participants, in such a way that certain specified subsets of participants can compute a key. Suppose that P is the set of participants. Denote by Γ the set of subsets of participants which we desire to be able to determine the key; hence Γ ⊆ 2^P. Γ is called the access structure of the secret sharing scheme. It seems reasonable to require that Γ be monotone, i.e., if B ∈ Γ and B ⊆ C ⊆ P, then C ∈ Γ. For any Γ0 ⊆ 2^P, define the closure of Γ0 to be cl(Γ0) = {C : B ⊆ C ⊆ P for some B ∈ Γ0}. Note that the closure of any set of subsets is monotone.

E. F. Brickell, D. R. Stinson
Collective Coin Tossing Without Assumptions nor Broadcasting

To obtain security, one needs to utilize many resources. Among these are one-way functions, physically secure communication channels, and —though less well known— broadcasting.

Silvio Micali, Tal Rabin

Key Distribution

A Key Distribution “Paradox”

The so-called Rabin “paradox” is a proof that a given signature system which is secure under ciphertext-only attack is insecure under chosen message attack. The construction that is used to prove the first clause is also used to prove the second. For several years it was believed to be inherent to public key signature systems. A similar problem existed for public key cryptosystems (under chosen ciphertext attack). Trap-door functions were inherent in the construction of the “paradox.”

In 1984 Goldwasser, Micali and Rivest constructively showed that one can overcome the “paradox.” Naor and Yung (1989) resolved the similar problem for public key cryptosystems. Both solutions actually solve two problems. They resolve the “paradox” with the strictest definition of security (for a cryptosystem it amounts to the demand that, for a given cryptogram c and two messages m0, m1, it should be infeasible to decide whether c resulted from m0 or m1 with probability significantly greater than half). Both solutions are very complicated.

We show that a similar “paradox” exists for many key distribution systems, even if non-trapdoor one-way functions are used (as in the Diffie-Hellman variations). Using the simple basic definition of security (given the messages exchanged during the protocol, it should be impossible to find the resulting session key in probabilistic polynomial time) we show a simple and practical key distribution system which is provably free of the paradox.
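For reference, the basic Diffie-Hellman exchange alluded to above can be sketched as follows; the prime here is toy-sized, and a real deployment would need a large prime and authenticated messages:

```python
import secrets

p = 2**61 - 1    # a Mersenne prime; far too small for real security
g = 3            # public base, chosen here purely for illustration

a = secrets.randbelow(p - 2) + 1   # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1   # Bob's secret exponent
A = pow(g, a, p)                   # sent Alice -> Bob in the clear
B = pow(g, b, p)                   # sent Bob -> Alice in the clear

k_alice = pow(B, a, p)             # both parties compute g**(a*b) mod p
k_bob = pow(A, b, p)
assert k_alice == k_bob            # the shared session key
```

The security question the abstract raises is exactly about this session key: given only p, g, A, and B, an eavesdropper should be unable to compute it in probabilistic polynomial time.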

Yacov Yacobi
A Modular Approach to Key Distribution

The purpose of key management is to provide procedures for handling cryptographic keying material to be used in symmetric or asymmetric cryptographic mechanisms. As a result of varied design decisions appropriate to different conditions, a large variety of key distribution protocols exist. There is a need to explicate key distribution protocols in a way that allows one to understand which results they achieve and on which assumptions they depend. We define a modular system that can be used to transform cryptographic protocols into a generic form and that has proven to be useful in the analysis and the construction of such protocols.

Walter Fumy, Michael Munzert

Hash Functions

Structural Properties of One-Way Hash Functions

We study the following two kinds of one-way hash functions: universal one-way hash functions (UOHs) and collision intractable hash functions (CIHs). The main property of the former is that given an initial-string x, it is computationally difficult to find a different string y that collides with x. And the main property of the latter is that it is computationally difficult to find a pair x ≠ y of strings such that x collides with y. Our main results are as follows. First we prove that UOHs with respect to initial-strings chosen arbitrarily exist if and only if UOHs with respect to initial-strings chosen uniformly at random exist. Then, as an application of the result, we show that UOHs with respect to initial-strings chosen arbitrarily can be constructed under a weaker assumption, the existence of one-way quasi-injections. Finally, we investigate relationships among various versions of one-way hash functions. We prove that some versions of one-way hash functions are strictly included in others by explicitly constructing hash functions that are one-way in the sense of the former but not in the sense of the latter.

Yuliang Zheng, Tsutomu Matsumoto, Hideki Imai
The MD4 Message Digest Algorithm

The MD4 message digest algorithm takes an input message of arbitrary length and produces an output 128-bit “fingerprint” or “message digest”, in such a way that it is (hopefully) computationally infeasible to produce two messages having the same message digest, or to produce any message having a given prespecified target message digest. The MD4 algorithm is thus ideal for digital signature applications: a large file can be securely “compressed” with MD4 before being signed with (say) the RSA public-key cryptosystem.

The MD4 algorithm is designed to be quite fast on 32-bit machines. For example, on a SUN Sparc station, MD4 runs at 1,450,000 bytes/second (11.6 Mbit/sec). In addition, the MD4 algorithm does not require any large substitution tables; the algorithm can be coded quite compactly.

The MD4 algorithm is being placed in the public domain for review and possible adoption as a standard.

Ronald L. Rivest

Zero-Knowledge

Achieving Zero-Knowledge Robustly

We introduce the notion of robust transformations of interactive proof systems. A robust transformation takes an interactive proof system (P, V), and produces a new interactive proof system, (P*, V*), such that the power of P* is within a polynomial factor of that of P. We show that, given an ideal protocol for secure circuit evaluation, there exists a robust transformation that converts interactive proof systems to zero-knowledge interactive proof systems.

J. Kilian
Hiding Instances in Zero-Knowledge Proof Systems
Extended Abstract

Informally speaking, an instance-hiding proof system for the function f is a protocol in which a polynomial-time verifier is convinced of the value of f(x) but does not reveal the input x to the provers. We show here that a boolean function f has an instance-hiding proof system if and only if it is the characteristic function of a language in NEXP ∩ coNEXP. We formalize the notion of zero-knowledge for instance-hiding proof systems with several provers and show that all such systems can be made perfect zero-knowledge.

Donald Beaver, Joan Feigenbaum, Victor Shoup
Multi-Language Zero Knowledge Interactive Proof Systems

Suppose that two ZKIPs are given for languages L1 and L2. The total number of bits communicated is the sum of the two. This paper shows that it is possible to get the same effect with a smaller amount of communication. We call such protocols “multi-language zero knowledge interactive proof systems”.

Kaoru Kurosawa, Shigeo Tsujii
Publicly Verifiable Non-Interactive Zero-Knowledge Proofs

In this paper we construct the first publicly verifiable non-interactive zero-knowledge proof for any NP statement under the general assumption that one way permutations exist. If the prover is polynomially bounded then our scheme is based on the stronger assumption that trapdoor permutations exist. In both cases we assume that P and V have a common random string, and use it to prove a single theorem (which may be chosen as a function of the known string).

Dror Lapidot, Adi Shamir
Cryptographic Applications of the Non-Interactive Metaproof and Many-prover Systems

In a companion paper [DeYu] we have developed a non-interactive proof-system tool we call the “Metaproof” (μ-NIZK proof system); it provides a proof of “the existence of a proof to a statement”. Using a reduction of the theorem to a set of claims about encrypted values, we were able to develop a crucial proof-system property which we call an “on-line simulatable NIZK proof-system”. This was used to implement the “Many-Prover Non-Interactive Proof-System”, in which independent users can send proofs (which was not known in the original system and was open), and a “Self-Referential NIZK proof system”, in which the random reference string is available to the polynomial-time opponent who chooses the theorem to prove (an intriguing question regarding such systems).

In this abstract we present an introduction to the basic tools and their possible applications. The subject of this paper is a variety of cryptographic applications provided by the new tools. We demonstrate their applicability in enhancing the security and properties of a methodology for signature and authentication developed by Bellare and Goldwasser [BeGo] (by using the Metaproof system to solve the open problem of a many-prover NIZK system). We also show, among other things, how the tools can be used to provide security mechanisms such as an “Oblivious Warden”, which translates non-interactive proofs to random ones independently of the proof itself, and the notion of “Gradual opening of a zero-knowledge computation”, which is first demonstrated to be correct using a non-interactive proof, and then is opened gradually and fast (i.e., without further proofs).

Alfredo De Santis, Moti Yung
Interactive Proofs with Provable Security against Honest Verifiers

Nearly all of the work on constructing zero-knowledge proof systems relies on very strong complexity theoretic assumptions. We consider a form of “no use” zero-knowledge, and show that every language in PSPACE has an interactive proof system that provably achieves “no-use” zero-knowledge against honest verifiers.

J. Kilian

Randomness

On the Universality of the Next Bit Test

The next bit test was shown by Yao to be a universal test for sources of unbiased independent bits. The aim of this paper is to provide a rigorous methodology of how to test other properties of sources whose output distribution is not necessarily uniform. We prove the surprising result that the natural extension of the next bit test, even in the simplest case of biased independent bits, is no longer universal: We construct a source of biased bits, whose bits are obviously dependent and yet none of these bits can be predicted with probability of success greater than the bias. To overcome this difficulty, we develop new universal tests for arbitrary models of (potentially imperfect) sources of randomness.

A. W. Schrift, A. Shamir
A Universal Statistical Test for Random Bit Generators

A new statistical test for random bit generators is presented that is universal in the sense that any significant deviation of the output statistics from the statistics of a perfect random bit generator is detected with high probability when the defective generator can be modeled as an ergodic stationary source with finite memory. This is in contrast to most presently used statistical tests which can detect only one type of non-randomness, for example, a bias in the distribution of 0’s and 1’s or a correlation between consecutive bits. Moreover, the new test, whose formulation was motivated by considering the universal data compression algorithms of Elias and of Willems, measures the entropy per output bit of a generator. This is shown to be the correct quality measure for a random bit generator in cryptographic applications. A generator is thus rejected with high probability if and only if the cryptographic significance of a statistical defect is above a specified threshold. The test is easy to implement and very fast and thus well-suited for practical applications.
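The core statistic of such a test can be sketched in a few lines: the average log2 of the gap since each L-bit block last occurred, which for an ideal generator approaches the entropy per block. This is a toy sketch with illustrative parameters and the name `maurer_statistic`; the full test also specifies segment sizes, a variance analysis, and acceptance thresholds:

```python
import math, random

def maurer_statistic(bits, L=4, Q=64):
    """Average log2 of the gap (in blocks) since each L-bit block last
    occurred, after a Q-block initialization segment.  Low-entropy
    sources repeat blocks quickly, giving small gaps and a small value."""
    blocks = [int(''.join(map(str, bits[i:i + L])), 2)
              for i in range(0, len(bits) - L + 1, L)]
    last = {}
    for i in range(Q):                     # initialization segment
        last[blocks[i]] = i
    total, K = 0.0, 0
    for i in range(Q, len(blocks)):        # test segment
        gap = i - last.get(blocks[i], -1)  # -1: block never seen before
        total += math.log2(gap)
        last[blocks[i]] = i
        K += 1
    return total / K

random.seed(1)
good = [random.getrandbits(1) for _ in range(4096)]
print(round(maurer_statistic(good), 2))    # near the entropy of a 4-bit block
assert maurer_statistic([0] * 400) == 0.0  # constant source: zero entropy
```

A defective generator modeled as a stationary source with finite memory repeats patterns sooner than an ideal one, which is exactly what this gap statistic measures.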

Ueli M. Maurer
On the Impossibility of Private Key Cryptography with Weakly Random Keys

The properties of weak sources of randomness have been investigated in many contexts and using several models of weakly random behaviour. For two such models, developed by Santha and Vazirani, and Chor and Goldreich, it is known that the output from one such source cannot be “compressed” to produce nearly random bits. At the same time, however, a single source is sufficient to solve problems in the randomized complexity classes BPP and RP. It is natural to ask exactly which tasks can be done using a single, weak source of randomness and which cannot. The present work begins to answer this question by establishing that a single weakly random source of either model cannot be used to obtain a secure “one-time-pad” type of cryptosystem.

James L. McInnes, Benny Pinkas

Applications

How to Time-Stamp a Digital Document

The prospect of a world in which all text, audio, picture, and video documents are in digital form on easily modifiable media raises the issue of how to certify when a document was created or last changed. The problem is to time-stamp the data, not the medium. We propose computationally practical procedures for digital time-stamping of such documents so that it is infeasible for a user either to back-date or to forward-date his document, even with the collusion of a time-stamping service. Our procedures maintain complete privacy of the documents themselves, and require no record-keeping by the time-stamping service.
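The linking idea behind such schemes can be sketched as a hash chain of certificates, where each certificate commits to its predecessor so that back-dating would require recomputing every later link. Field names and the `stamp` helper below are hypothetical, not the paper's format; note that only a hash of each document is stored, which is one way to preserve document privacy:

```python
import hashlib, json, time

def stamp(chain, doc_bytes):
    """Append a certificate committing to the document hash, the time,
    and the previous certificate's hash (illustrative field names)."""
    prev = chain[-1]["cert_hash"] if chain else "0" * 64
    cert = {"seq": len(chain),
            "time": time.time(),
            "doc_hash": hashlib.sha256(doc_bytes).hexdigest(),
            "prev": prev}
    body = json.dumps({k: cert[k] for k in ("seq", "time", "doc_hash", "prev")},
                      sort_keys=True).encode()
    cert["cert_hash"] = hashlib.sha256(body).hexdigest()
    chain.append(cert)
    return cert

chain = []
stamp(chain, b"first document")
stamp(chain, b"second document")
# Each certificate commits to its predecessor.
assert chain[1]["prev"] == chain[0]["cert_hash"]
```

Altering the time on an early certificate changes its hash, which breaks the `prev` link of every certificate after it, so even the time-stamping service cannot quietly back-date a document.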

Stuart Haber, W. Scott Stornetta
How to Utilize the Randomness of Zero-Knowledge Proofs
Extended Abstract

In zero-knowledge interactive proofs, a lot of randomized information is exchanged between the prover and the verifier, and the randomness of the prover is used in satisfying the zero-knowledge condition. In this paper, we show a new methodology that utilizes the randomness of the prover in a zero-knowledge proof for some positive objectives as well as for the zero-knowledge condition. Based on this idea, we propose two types of applications: key distribution and digital signatures. We propose identity-based key distribution schemes that are provably secure against strong active attacks (chosen-message known-key active attacks) assuming the difficulty of factoring a composite number. In addition, we show that non-transitive digital signature schemes can be constructed if and only if a one-way function exists. We also show some practical non-transitive digital signature schemes. A new general method of constructing identity-based cryptographic schemes is presented as an application of the identity-based non-transitive digital signature schemes. We also propose a new digital signature scheme based on the (extended) Fiat-Shamir identification scheme.

Tatsuaki Okamoto, Kazuo Ohta
Fast Software Encryption Functions

Encryption hardware is not available on most computer systems in use today. Despite this fact, there is no well-accepted encryption function designed for software implementation; instead, hardware designs are emulated in software and the resulting performance loss is tolerated. The obvious solution is to design an encryption function for implementation in software. Such an encryption function is presented here: on a SUN 4/260 it can encrypt at 4 to 8 megabits per second. The combination of modern processor speeds and a faster algorithm makes software encryption feasible in applications which previously would have required hardware. This will effectively reduce the cost and increase the availability of cryptographic protection.

Ralph C. Merkle
CORSAIR: A Smart Card for Public Key Cryptosystems

Algorithms best suited for flexible smart card applications are based on public key cryptosystems: RSA, zero-knowledge protocols, etc. Their practical implementation (execution in ≈1 second) entails a computing power beyond the reach of classical smart cards, since large integers (512 bits) have to be manipulated in complex ways (exponentiation). CORSAIR achieves up to 40 (8-bit) MIPS at a clock speed of 6 MHz. This allows the computation of X^E mod M, with 512-bit operands, in less than 1.5 seconds (0.4 sec for a signature). The new smart card is in the final design stage; the first test chips should be available by the end of 1990.
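The operation being accelerated, X^E mod M, is modular exponentiation, ordinarily computed by square-and-multiply; a software sketch of what the card performs on 512-bit operands:

```python
def mod_exp(x, e, m):
    """Compute x**e mod m by square-and-multiply: one squaring per
    exponent bit, plus one multiplication per set bit of e."""
    result, x = 1, x % m
    while e:
        if e & 1:                       # this exponent bit is set
            result = (result * x) % m
        x = (x * x) % m                 # square for the next bit
        e >>= 1
    return result

# Agrees with Python's built-in three-argument pow.
assert mod_exp(1234567, 891011, 2**61 - 1) == pow(1234567, 891011, 2**61 - 1)
```

A 512-bit exponent thus needs on the order of 512 squarings and up to 512 multiplications of 512-bit numbers, which is why dedicated arithmetic hardware matters on a smart card.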

Dominique de Waleffe, Jean-Jacques Quisquater

Design and Analysis I

Fast Checkers for Cryptography

Program correctness is a serious concern, and has consequently received considerable attention. This attention has taken three approaches — mathematical: prove programs correct; empirical: test programs for bugs; and engineering: design programs well. While considerable progress has been made, the question of whether a given computation was performed correctly still has no practical solution. However, a new approach, proposed by Manuel Blum, promises to be both practical and of theoretical significance, and is intrinsically closer to a computer scientist’s heart, since the approach is algorithmic: check every computation using a program checker. Program checkers are designed for specific computational problems; a checker is an algorithm such that, given a program that is supposed to solve that problem and an input value, if the program computes correctly at that input, the checker says “CORRECT” (with a small probability of error); alternatively, if the program has a bug, it says “BUGGY” (again, with a small chance of error).
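A classic concrete example of a randomized checker in this spirit — not taken from the paper itself — is Freivalds' check for matrix multiplication: it verifies a claimed product C = A·B in O(n²) time per trial, answering “CORRECT” or “BUGGY” with a small, controllable error probability. The function name and setup below are illustrative.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Randomized checker (Freivalds): test whether C == A @ B.

    If C is correct, always returns "CORRECT"; if C is wrong, returns
    "BUGGY" except with probability at most 2**-trials.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute B*r, then A*(B*r), then C*r -- each in O(n^2) time.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return "BUGGY"       # witness vector found: C != A @ B
    return "CORRECT"             # (small probability of error)

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
good = [[19, 22], [43, 50]]      # correct product of A and B
bad  = [[19, 22], [43, 51]]      # off by one in the last entry
```

Each trial avoids the O(n³) cost of recomputing the product, which is exactly the point of a checker: it is asymptotically cheaper than the computation it validates.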

Kireeti Kompella, Leonard Adleman
Complexity Theoretic Issues Concerning Block Ciphers Related to D.E.S.

The D.E.S. cipher is naturally viewed as a composition of sixteen invertible transformations on 64-bit strings (where the transformations depend on the value of a 56-bit key). Each of the transformations has a special form and satisfies the particular property that each of its output bits is determined by a “small” number of its input bits. We investigate the computational power of block ciphers on n-bit strings that can be expressed as polynomial-length (with respect to n) compositions of invertible transformations that have a form similar to those of D.E.S. In particular, we require that the basic transformations have the property that each of their output bits depends on the value of a small number of their input bits (where “small” is somewhere in the range between O(1) and O(log n)). We present some sufficient conditions for ciphers of this type to be “pseudorandom function generators” and, thus, to yield private key cryptosystems that are secure against adaptive chosen plaintext attacks.

Richard Cleve
The Redoc II Cryptosystem

A new cryptosystem called REDOC II is introduced. Analysis of a miniature (one-round) version of this system is given.

Thomas W. Cusick, Michael C. Wood
A Recursive Construction Method of S-boxes Satisfying Strict Avalanche Criterion

S(ubstitution)-boxes are quite important components of modern symmetric cryptosystems. S-boxes bring nonlinearity to cryptosystems and strengthen their cryptographic security. An S-box satisfies the strict avalanche criterion (SAC) if and only if, for any single input bit of the S-box, inverting it changes each output bit with probability one half. We present some interesting properties of S-boxes and propose an efficient and systematic means of generating bijective S-boxes of arbitrary input size satisfying the SAC, by applying simple rules recursively to given 3-bit-input bijective S-box(es) satisfying the SAC.
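The SAC definition above translates directly into an exhaustive test for a small S-box: for every input-bit position, flipping that bit must change each output bit for exactly half of all inputs. A minimal sketch (the function name is illustrative, not from the paper):

```python
def satisfies_sac(sbox, n):
    """Check the strict avalanche criterion for an n-bit S-box,
    given as a list of 2**n output values.

    For every input-bit position i and output-bit position j, flipping
    input bit i must change output bit j for exactly half of the inputs.
    """
    half = 1 << (n - 1)
    for i in range(n):                      # input bit to flip
        for j in range(n):                  # output bit to observe
            flips = sum(
                ((sbox[x] ^ sbox[x ^ (1 << i)]) >> j) & 1
                for x in range(1 << n)
            )
            if flips != half:
                return False
    return True
```

For example, the identity S-box fails the SAC: flipping input bit i always changes output bit i and never any other, so no output bit changes with probability one half.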

Kwangjo Kim, Tsutomu Matsumoto, Hideki Imai

Design and Analysis II

A Comparison of Practical Public-Key Cryptosystems based on Integer Factorization and Discrete Logarithms
extended abstract

Since its inception in the mid 1970’s, public-key cryptography has flourished as a research activity, and significant theoretical advances have been made. In more recent years, many public-key concepts have gained acceptance in the commercial world. Without question, the best-known public-key cryptosystem is the RSA system of Rivest, Shamir and Adleman [28]. Although not as well-known, another public-key cryptosystem of practical interest is that due to ElGamal [11]. The latter system and its variations use a basic extension of Diffie-Hellman key exchange [9] for encryption, together with an accompanying signature scheme. Elliptic curve cryptosystems, introduced by Miller [24] and Koblitz [12], have also recently received much attention as cryptographic alternatives.

Paul C. van Oorschot
Nonlinear Parity Circuits and their Cryptographic Applications

This paper proposes a new family of nonlinear cryptographic functions called parity circuits. These parity circuits compute a one-to-one Boolean function, and they can be applied to symmetric block ciphers. In this paper, parity circuits are first defined. Next, these circuits are proven to satisfy some of the properties required in cryptography: involution, nonlinearity, the probability of bit complementation, avalanche effect, equivalent keys, and computational efficiency. Finally, the speed of parity circuits implemented using current hardware technology is estimated to show they can achieve 160 Mbps with a 64-bit block size, 8 rounds, and 3.2 K gates.

Kenji Koyama, Routo Terada
Cryptographic Significance of the Carry for Ciphers Based on Integer Addition

Integer addition has been proposed for use in cryptographic transformations since this operation is nonlinear when considered over GF(2). In these applications nonlinearity, or confusion, is achieved via the carry. If the carry happens to be biased, correlations to linear functions result, which can be cryptanalytically exploited. The aim of the present paper is to investigate the probability distribution of the carry for integer addition with an arbitrary number n of inputs. It is shown that asymptotically the carry is balanced for even n and biased for odd n. As a result, for n = 3 the carry is strongly biased, whereas for increasing n it is shown that the bias tends to 0.
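The carry distribution can be explored numerically with a simple Markov-chain model — an illustrative sketch, not the paper's analysis. Assuming the n addends have i.i.d. uniform random bits, the carry into each column is a state in {0, …, n−1}, each column contributes a Binomial(n, 1/2) bit sum s, and the new carry is ⌊(s + c)/2⌋:

```python
from math import comb

def carry_distribution(n, positions=64):
    """Distribution of the carry when adding n integers with i.i.d.
    uniform random bits, iterated column by column.

    State = carry value c in {0, ..., n-1}; each column adds a bit sum
    s ~ Binomial(n, 1/2), and the new carry is (s + c) // 2.
    """
    probs = {0: 1.0}                                  # carry starts at 0
    bit_sum = [comb(n, s) / 2**n for s in range(n + 1)]
    for _ in range(positions):
        nxt = {}
        for c, p in probs.items():
            for s, q in enumerate(bit_sum):
                nc = (s + c) // 2
                nxt[nc] = nxt.get(nc, 0.0) + p * q
        probs = nxt
    return probs
```

Under this model the chain converges quickly: for n = 2 the carry bit settles to probability exactly 1/2 (balanced), while for n = 3 the stationary distribution is (1/6, 2/3, 1/6) over carries (0, 1, 2), so the carry is odd with probability 2/3 — a strong bias, consistent with the abstract's claim for n = 3.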

Othmar Staffelbach, Willi Meier

Impromptu Talks

Computation of Discrete Logarithms in Prime Fields
Extended Abstract

If p is a prime and g and x integers, then computation of y such that $$ y \equiv g^x \bmod p, \quad 0 \leqslant y \leqslant p - 1 \qquad (1.1)$$ is referred to as discrete exponentiation. Using the successive squaring method, it is very fast (polynomial in ∣p∣ + ∣g∣ + ∣x∣, the number of bits). On the other hand, the inverse problem — namely, given p, g, and y, to compute some x such that Equation 1.1 holds — is referred to as the discrete logarithm problem and appears to be quite hard in general. Many of the most widely used public key cryptosystems are based on the assumption that discrete logarithms are indeed hard to compute, at least for carefully chosen primes.
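The successive squaring method mentioned above can be sketched in a few lines: scan the exponent's bits from the most significant end, squaring at every step and multiplying by g when the bit is set, so only O(log x) modular multiplications are needed.

```python
def discrete_exp(g, x, p):
    """Compute g**x mod p by successive squaring (left-to-right binary
    exponentiation), using O(log x) modular multiplications."""
    y = 1
    for bit in bin(x)[2:]:        # exponent bits, most significant first
        y = (y * y) % p           # always square
        if bit == '1':
            y = (y * g) % p       # multiply in g when the bit is set
    return y
```

In Python the same computation is available as the built-in `pow(g, x, p)`; the explicit loop just makes the polynomial-time claim visible.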

B. A. LaMacchia, A. M. Odlyzko
Systolic Modular Multiplication

A simple systolic array for achieving the effect of modular reduction, in linear time, is described. This circuit, in conjunction with Atrubin’s multiplier, performs modular multiplication in linear time.

Shimon Even
Finding Four Million Large Random Primes

A number n is a (base two) pseudoprime if it is composite and satisfies the identity $$ 2^{n - 1} \equiv 1 \pmod n. \qquad (1)$$ Every prime satisfies (1), but very few composite numbers are pseudoprimes. If pseudoprimes are very rare, then one could even find large “industrial strength” primes (say for cryptographic use) by simply choosing large random values for n until an n is found that satisfies (1). How rare are pseudoprimes? We performed an experiment that attempts to provide an answer. We also provide some references to the literature for theoretical analyses.
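The definition is easy to test directly for small n — a minimal sketch (the trial-division compositeness check is only practical for small numbers, and the function name is illustrative):

```python
def is_base2_pseudoprime(n):
    """True when n is composite yet satisfies 2**(n-1) ≡ 1 (mod n)."""
    if n < 3 or n % 2 == 0:           # base-2 pseudoprimes are odd
        return False
    if pow(2, n - 1, n) != 1:         # must satisfy identity (1)
        return False
    # Confirm n is composite by trial division (fine for small n).
    return any(n % d == 0 for d in range(3, int(n**0.5) + 1, 2))
```

The smallest base-two pseudoprime is 341 = 11 · 31: it satisfies (1) despite being composite, which is why condition (1) alone yields only “industrial strength” primes rather than certified ones.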

Ronald L. Rivest
The FEAL Cipher Family

FEAL-8 has been expanded to FEAL-N (N-round FEAL with 64-bit key), where FEAL-N with N = 4/8 is identical to the previously published FEAL-4/-8, respectively. N is a user-defined parameter (N ≥ 4, N even; N = 2^x, x ≥ 2 is recommended). FEAL-N has also been expanded to FEAL-NX (X: expansion; N-round FEAL with 128-bit key) that accepts 128-bit keys. When the right half of the 128-bit key is all zeros, FEAL-NX works as FEAL-N. Upward compatibility is an important concept of the FEAL cipher family [1].

Shoji Miyaguchi
Discrete-Log With Compressible Exponents

Many key distribution systems are based on the assumption that the Discrete-Log (DL) problem is hard. The implementations could be more efficient if a significantly smaller exponent could be used, without lowering the complexity of the DL problem. When the exponent is known to reside in an interval of size w, the DL problem can be computed in time $$ O(\sqrt w) $$, using Pollard's “Lambda method for catching Kangaroos”.

Suppose we want a level of security of 300 years on a 1 MIP machine, with 1K bit operations per instruction. Then w = 2^127 currently seems sufficient (with a 512-bit modulus). It is not clear, however, whether methods other than “Kangaroo” exist with lower complexity.

Let s and m denote the number of squarings and multiplications, respectively, required to exponentiate. It is well known that s roughly equals the size in bits of the exponent (L), and m is roughly 1.5 · L/lg2(L), for the most efficient methods, in the practical range.

We show that by using an exponent which is known to be compressible by a factor η, using the Ziv-Lempel method, we reduce m exponentially in η, on the average (integer multiplications may be more than twice as expensive as squarings, hence this is not negligible).

This can be used to speed up cryptographic key distribution systems of the Diffie-Hellman family. However, it is not clear how safe compressible exponents are.
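The operation counts s and m are easy to make concrete for plain left-to-right binary exponentiation (the abstract's 1.5 · L/lg2(L) figure refers to more efficient windowed methods; the counting below is an illustrative baseline, and the function name is not from the paper):

```python
def binary_exp_cost(e):
    """Count (squarings, multiplications) used by plain left-to-right
    binary exponentiation with exponent e > 0.

    One squaring per exponent bit after the first; one multiplication
    per set bit after the first.  So s = L - 1 and, for a random L-bit
    exponent, m averages about L/2.
    """
    bits = bin(e)[2:]
    squarings = len(bits) - 1
    multiplies = bits.count('1') - 1
    return squarings, multiplies
```

This makes the paper's point visible: a low-entropy (compressible) exponent such as 2^127 costs 127 squarings but zero extra multiplications, whereas a random 128-bit exponent costs roughly 64 multiplications on top of its squarings.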

Yacov Yacobi
Backmatter
Metadata
Title
Advances in Cryptology — CRYPTO ’90
edited by
Alfred J. Menezes
Scott A. Vanstone
Copyright year
1991
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-38424-3
Print ISBN
978-3-540-54508-8
DOI
https://doi.org/10.1007/3-540-38424-3