
1988 | Book

Advances in Cryptology — CRYPTO ’87

Proceedings

Edited by: Carl Pomerance

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

Zero-knowledge interactive proof systems are a new technique which can be used as a cryptographic tool for designing provably secure protocols. Goldwasser, Micali, and Rackoff originally suggested this technique for controlling the knowledge released in an interactive proof of membership in a language, and for classification of languages [19]. In this approach, knowledge is defined in terms of complexity theory, and a message is said to convey knowledge if it gives a computational advantage to the receiver, for example by giving him the result of an intractable computation. The formal model of interacting machines is described in [19, 15, 17]. A proof-system (for a language L) is an interactive protocol by which one user, the prover, attempts to convince another user, the verifier, that a given input x is in L. We assume that the verifier is a probabilistic machine which is limited to expected polynomial-time computation, while the prover is an unlimited probabilistic machine. (In cryptographic applications the prover has some trapdoor information, or knows the cleartext of a publicly known ciphertext.) A correct proof-system must have the following properties: If x ∈ L, the prover will convince the verifier to accept the proof with very high probability. If x ∉ L, no prover, no matter what program it follows, is able to convince the verifier to accept the proof, except with vanishingly small probability.
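The completeness and soundness conditions above can be seen concretely in the classic interactive proof of quadratic residuosity, sketched below (a toy illustration; the modulus, witness, and round count are made up for the example and far too small for real use):

```python
import random

def zk_quadratic_residue_round(n, x, w):
    """One round of the Goldwasser-Micali-Rackoff style interactive proof
    that x is a quadratic residue mod n; the prover knows w with w^2 = x.
    Returns True iff the verifier accepts this round."""
    # Prover commits to a random residue y = r^2 mod n.
    r = random.randrange(1, n)
    y = (r * r) % n
    # Verifier flips a coin.
    b = random.randrange(2)
    # Honest prover answers with z = r * w^b mod n.
    z = (r * pow(w, b, n)) % n
    # Verifier checks z^2 == y * x^b (mod n).
    return (z * z) % n == (y * pow(x, b, n)) % n

# Toy parameters: n = 61 * 53, witness w, statement x = w^2 mod n.
n = 3233
w = 49
x = (w * w) % n
# Completeness: an honest prover is accepted in every round.
assert all(zk_quadratic_residue_round(n, x, w) for _ in range(50))
```

A cheating prover who does not know a square root of x can answer only one of the two possible challenges per round, so k rounds bound its success probability by 2^−k.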

Table of contents

Frontmatter

Communication Networks and Standards

Standards for Data Security — a Change of Direction

Standards for data security - the achievement of acceptable privacy and integrity in data communication and storage - have been in preparation for the last fourteen years, beginning with the US Data Encryption Standard (DES). The DES was adopted as a US federal standard (1) in 1977, followed by adoption as an ANSI standard (2) in 1981. Since 1980 work has been in progress to develop a corresponding International Standards Organization (ISO) text. For most practical purposes the ISO text was identical with the ANSI text; the only significant departure was that the eight parity bits allocated to the key in the US standard were left unallocated in the ISO text. The responsible ISO body was at first Technical Committee 97 (information processing), Working Group 1, TC97/WG1, which was followed by Sub-Committee 20 (data cryptographic techniques) of TC97, TC97/SC20. In May 1986 a discussion, followed by a resolution, took place in TC97, meeting in Washington, as a result of which a reference was made to the central governing body of ISO on which all national member bodies are represented, ISO Council, to decide whether it was wise to proceed to publication of the ISO standard. The outcome of this reference was that ISO Council decided to abandon work on the DES as a potential international standard. The decision was taken very late in the process of preparing the standard text; publication had been imminent.

Wyn L. Price
Integrating Cryptography in ISDN

Security services in ISDN have been briefly discussed earlier in the literature [25, 26 and 27]. This paper deals with the protocol aspects of integrating cryptography in ISDN.

Kåre Presttun

Protocols

Special Uses and Abuses of the Fiat-Shamir Passport Protocol (extended abstract)

If the physical description of a person were unique and adequately used and tested, then the security of the Fiat-Shamir scheme would not be based on zero-knowledge. Otherwise some new frauds exist. The Feige-Fiat-Shamir scheme always suffers from these frauds. Using an extended notion of subliminal channels, several other undetectable abuses of the Fiat-Shamir protocol, which are not possible with ordinary passports, are discussed. This technique can be used by a terrorist-sponsoring country to communicate 500 new words of secret information each time a tourist passport is verified. A non-trivial solution to avoid these subliminal channel problems is presented. The notion of relative zero-knowledge is introduced.

Yvo Desmedt, Claude Goutier, Samy Bengio
Direct Minimum-Knowledge Computations (Extended Abstract)

We present a protocol scheme which directly simulates any given computation, defined on any computational device, in a minimum-knowledge fashion. We also present a scheme for simulation of computation in dual (perfect) minimum-knowledge fashion. Using the simulation protocol, we can ensure that one user transfers to another user exactly the result of a given computation and nothing more. The simulation is direct and efficient; it extends, simplifies and unifies important recent results which have useful applications in cryptographic protocol design. Our technique can be used to implement several different sorts of transfer of knowledge, including: transfer of computational results, proving possession of information, proving knowledge of knowledge, gradual and adaptive revealing of information, and commitment to input values. The novelty of the simulation technique is the separation of the data encryption from the encryption of the device's structural (or control) information.

Russell Impagliazzo, Moti Yung
Non-Interactive Zero-Knowledge Proof Systems

The intriguing notion of a Zero-Knowledge Proof System has been introduced by Goldwasser, Micali and Rackoff [GMR] and its wide applicability has been demonstrated by Goldreich, Micali and Wigderson [GMW1]-[GMW2]. Based on complexity theoretic assumptions, Zero-Knowledge Proof Systems exist, provided that (i) the prover and the verifier are allowed to talk back and forth, and (ii) the verifier is allowed to flip coins whose result the prover cannot see. Blum, Feldman and Micali [BFM] have recently shown that, based on a specific complexity theoretic assumption (the computational difficulty of distinguishing products of two primes from products of three primes), both requirements (i) and (ii) above are unnecessary for the existence of Zero-Knowledge Proof Systems. Instead of (i), it is enough for the prover only to talk and for the verifier only to listen. Instead of (ii), it is enough that both the prover and verifier share a randomly selected string. We strengthen their result by showing that Non-Interactive Zero-Knowledge Proof Systems exist based on the weaker and well-known assumption that quadratic residuosity is hard.

Alfredo De Santis, Silvio Micali, Giuseppe Persiano
How to Solve any Protocol Problem - An Efficiency Improvement (Extended Abstract)

Consider n parties having local inputs x1, x2, ..., xn respectively, and wishing to compute the value f(x1, ..., xn), where f is a predetermined function. Loosely speaking, an n-party protocol for this purpose has maximum privacy if whatever a subset of the users can efficiently compute when participating in the protocol, they can also compute from their local inputs and the value f(x1, ..., xn). Recently, Goldreich, Micali and Wigderson have presented a polynomial-time algorithm that, given a Turing machine for computing the function f, outputs an n-party protocol with maximum privacy for distributively computing f(x1, ..., xn). The maximum privacy protocol output uses as a subprotocol a maximum privacy two-party protocol for computing a particular simple function p1(·,·). More recently, Haber and Micali have improved the efficiency of the above n-party protocols, using a maximum privacy two-party protocol for computing another particular function p2(·,·). Both works use a general result of Yao in order to implement protocols for the particular functions p1 and p2. In this paper, we present direct solutions to the above two particular protocol problems, avoiding the use of Yao's general result. In fact, we present two alternative approaches for solving both problems. The first approach consists of a simple reduction of these two problems to a variant of Oblivious Transfer. The second approach consists of designing direct solutions to these two problems, assuming the intractability of the Quadratic Residuosity problem. Both approaches yield simpler and more efficient solutions than the ones obtained by Yao's result.
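A toy illustration of maximum privacy for one specific function f - not the paper's construction - is additive secret sharing for a distributed sum, where even n − 1 colluding parties learn nothing beyond their own inputs and the total:

```python
import random

P = 2**31 - 1  # shares live in a prime field (illustrative modulus choice)

def share(x, n):
    """Split x into n additive shares that sum to x mod P.
    Any n-1 shares are uniformly random and reveal nothing about x."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    return parts + [(x - sum(parts)) % P]

def secure_sum(inputs):
    """Each party i distributes shares of its input; party j adds up all
    the j-th shares it receives; publishing the per-party subtotals
    reveals only the grand total f(x1,...,xn) = sum(xi)."""
    n = len(inputs)
    all_shares = [share(x, n) for x in inputs]
    subtotals = [sum(s[j] for s in all_shares) % P for j in range(n)]
    return sum(subtotals) % P

assert secure_sum([3, 5, 9]) == 17
```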

Oded Goldreich, Ronen Vainish
Multiparty Computations Ensuring Privacy of Each Party’s Input and Correctness of the Result

A protocol is presented that allows a set of parties to collectively perform any agreed computation, where every party is able to choose secret inputs and verify that the resulting output is correct, and where all secret inputs are optimally protected. The protocol has the following properties: One participant is allowed to hide his secrets unconditionally, i.e. the protocol releases no Shannon information about these secrets. This means that a participant with bounded resources can perform computations securely with a participant who may have unlimited computing power. To the best of our knowledge, our protocol is the first of its kind to provide this possibility. The cost of our protocol is linear in the number of gates in a circuit performing the computation, and in the number of participants. We believe it is conceptually simpler and more efficient than other protocols solving related problems ([Y1], [GoMiWi] and [GaHaYu]). It therefore leads to practical solutions of problems involving small circuits. The protocol is openly verifiable, i.e. any number of people can later come in and rechallenge any participant to verify that no cheating has occurred. The protocol is optimally secure against conspiracies: even if n − 1 out of the n participants collude, they will not find out more about the remaining participants' secrets than what they could already infer from their own input and the public output. Each participant has a chance of undetected cheating that is only exponentially small in the amount of time and space needed for the protocol. The protocol adapts easily, and with negligible extra cost, to various additional requirements, e.g. making part of the output private to some participant, ensuring that the participants learn the output simultaneously, etc. Participants can prove relations between data used in different instances of the protocol, even if those instances involve different groups of participants. For example, it can be proved that the output of one computation was used as input to another, without revealing more about this data. The protocol can be used as an essential tool in proving that all languages in IP have zero knowledge proof systems, i.e. any statement which can be proved interactively can also be proved in zero knowledge. The rest of this paper is organised as follows: First we survey some related results. Then Section 2 gives an intuitive introduction to the protocol. In Section 3, we present one of the main tools used in this paper: bit commitment schemes. Sections 4 and 5 contain the notation, terminology, etc. used in the paper. In Section 6, the protocol is presented, along with proofs of its security and correctness. In Section 7, we show how to adapt the protocol to various extra requirements and discuss some generalisations and optimisations. Finally, Section 8 contains some remarks on how to construct zero knowledge proof systems for any language in IP.
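Bit commitment, the main tool of Section 3, can be sketched with a hash-based commitment (a modern stand-in chosen purely for illustration; the paper's unconditionally hiding commitments are number-theoretic, and the use of SHA-256 here is an assumption of this sketch):

```python
import hashlib
import os

def commit(bit):
    """Commit to a bit: hash it together with a random blinding string.
    Returns the public commitment and the private opening."""
    r = os.urandom(32)
    c = hashlib.sha256(r + bytes([bit])).hexdigest()
    return c, (r, bit)

def open_commitment(c, opening):
    """Verify that the opening (blinding string, bit) matches the commitment."""
    r, bit = opening
    return hashlib.sha256(r + bytes([bit])).hexdigest() == c

c, opening = commit(1)
assert open_commitment(c, opening)          # honest opening accepted
assert not open_commitment(c, (opening[0], 0))  # cannot open to the other bit
```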

David Chaum, Ivan B. Damgård, Jeroen van de Graaf
Society and Group Oriented Cryptography: a New Concept

Messages are frequently addressed to a group of people, e.g., a board of directors. Conventional and public key systems (in the sense of Diffie and Hellman [4]) are not adapted when messages are intended for a group instead of for an individual. To understand more deeply why the above cryptosystems fall short when messages are intended for (or originate from) a group of people, let us nevertheless attempt to use these systems. When conventional and public key systems are used to protect privacy, the legitimate receiver(s) has (have) to know the secret key to decrypt. This means that a first solution could be to send the message to all members of the group, e.g., using their public keys. A second is that the secret key is known to all members and that the message is sent only once. All other solutions using a conventional or public key system are combinations of the above two solutions. We now explain briefly why these two obvious solutions are not adapted to the security needs specific to the protection of information intended for groups.

Yvo Desmedt
A Simple and Secure Way to Show the Validity of Your Public Key

We present a protocol for convincing an opponent that an integer N is of the form P^r·Q^s, with P and Q primes congruent to 3 modulo 4 and with r and s odd. Our protocol is provably secure in the sense that it does not reveal the factors of N. The protocol is very fast and therefore can be used in practice.

Jeroen van de Graaf, René Peralta
Cryptographic Computation: Secure Fault-Tolerant Protocols and the Public-Key Model (Extended Abstract)

We give a general procedure for designing correct, secure, and fault-tolerant cryptographic protocols for many parties, thus enlarging the domain of tasks that can be performed efficiently by cryptographic means. We model the most general sort of feasible adversarial behavior, and describe fault-recovery procedures that can tolerate it. Our constructions minimize the use of cryptographic resources. By applying the complexity-theoretic approach to knowledge, we are able to measure and control the computational knowledge released to the various users, as well as its temporal availability.

Zvi Galil, Stuart Haber, Moti Yung
Gradual and Verifiable Release of a Secret (Extended Abstract)

Protocols are presented allowing someone with a secret discrete logarithm to release it, bit by bit, such that anyone can verify each bit’s correctness as they receive it. This new notion of release of secrets generalizes and extends that of the already known exchange of secrets protocols. Consequently, the protocols presented allow exchange of secret discrete logs between any number of parties.The basic protocol solves an even more general problem than that of releasing a discrete log. Given any instance of a discrete log problem in a group with public group operation, the party who knows the solution can make public some interval I and convince anyone that the solution belongs to I, while releasing no additional information, such as any hint as to where in I the solution is.This can be used directly to release a discrete log, or to transfer it securely between different groups, i.e. prove that two instances are related such that knowledge of the solution to one implies knowledge of the solution to the other.We show how this last application can be used to implement a more efficient release protocol by transferring the given discrete log instance to a group with special properties. In this scenario, each bit of the secret can be verified by a single modular squaring, and unlike the direct use of the basic protocol, no interactive proofs are needed after the basic setup has been done.Finally, it is shown how the basic protocol can be used to release the factorization of a public composite number.

Ernest F. Brickell, David Chaum, Ivan B. Damgård, Jeroen van de Graaf
Strong Practical Protocols

Recent progress in the area of cryptography has given rise to strong cryptoalgorithms using complex mathematical systems. These algorithms often require quite sophisticated computing capabilities for their implementation and are designed to withstand attack by equally sophisticated opponents with nearly unlimited resources available to them. However, the mere existence of such algorithms is not enough to solve the problems of message secrecy and authentication. The procedures for handling the data, including the use of a cryptoalgorithm, must ensure that the desired level of security is achieved. Such a set of rules or procedures is known as a cryptographic protocol.

Judy H. Moore

Key Distribution Systems

Identity-based conference key distribution systems

This paper proposes identity-based key distribution systems for generating a common secret conference key for two or more users. Users are connected in a ring, a complete graph, or a star network. Messages among users are authenticated using each user’s identification information. The security of the proposed systems is based on the difficulty of both factoring large numbers and computing discrete logarithms over large finite fields.
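The ring topology can be illustrated with a plain Diffie-Hellman-style group key agreement in a ring (a simplified, non-identity-based sketch, not the authors' authenticated scheme; the prime, generator, and secrets below are toy values):

```python
def ring_group_key(p, g, secrets):
    """Group key agreement on a ring of n parties: each party holds a
    secret exponent; intermediate values circulate the ring, each hop
    being exponentiated by the holder's secret. After n hops every
    party holds the same key g^(s_0*s_1*...*s_{n-1}) mod p."""
    n = len(secrets)
    vals = [g] * n  # vals[i] is the value party i currently holds
    for _ in range(n):
        # party i receives from its left neighbour and exponentiates
        vals = [pow(vals[(i - 1) % n], secrets[i], p) for i in range(n)]
    return vals

# Toy parameters: prime p, generator g, three parties' secrets.
p, g = 467, 2
keys = ring_group_key(p, g, [3, 5, 7])
assert len(set(keys)) == 1                    # all parties agree
assert keys[0] == pow(g, 3 * 5 * 7, p)        # on g^(s0*s1*s2) mod p
```

An eavesdropper sees only intermediate powers of g, so recovering the key requires solving discrete logarithms, which is the hardness assumption the abstract names.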

Kenji Koyama, Kazuo Ohta
On the KEY PREDISTRIBUTION SYSTEM: A Practical Solution to the Key Distribution Problem

To utilize common-key encryption for efficient message protection in a large communication network, the problem of how to distribute the common keys must be settled. This paper describes a practical solution called the key predistribution system (KPS, for short), which has been proposed by the present authors. On request, the KPS quickly brings a common key to an arbitrary group of entities in a network. Using the KPS, it is quite easy to construct an enciphered one-way communication system, as well as an enciphered two-way (interactive) communication system. For example, even in a very large public network, the KPS can be applied to realize a practical enciphered electronic mailing service directed to individuals. This paper presents secure and efficient realization schemes for the KPS, and also discusses security issues and a variety of applications.

Tsutomu Matsumoto, Hideki Imai
Key Distribution Systems Based on Identification Information

Two types of key distribution systems based on identification information are proposed, one for decentralized networks and the other for centralized networks. The system suitable for decentralized networks is analogous to the Diffie-Hellman public key distribution system, in which the former uses each user's identification information instead of a public file used in the latter. The proposed system is able to defend the networks from impostors. The system suitable for centralized networks, which is less analogous to the Diffie-Hellman system, can also defend the networks from impostors, though the network center does not have any directory of public or private key-encrypting keys of the users. Neither of the systems proposed in this paper requires any services of a center to distribute work keys, nor users to keep directories of key-encrypting keys. Therefore, key management in cryptosystems can be made practical and simple by adopting the identity-based key distribution systems.

Eiji Okamoto
Secret Distribution of Keys for Public-Key Systems

This paper proposes new public-key cryptosystems whose security is based on the tamperfreeness of a device and the existence of secret-key cryptosystems, instead of on the computational complexity of a trapdoor one-way function (as in RSA).

Jean-Jacques Quisquater

Public Key Systems

An Impersonation-Proof Identity Verification Scheme

Most schemes for the verification of personal identity are logically flawed in that they require an individual to exhibit a piece of private, i.e., secret, information such as a computer access password, a telephone credit card number, a personal identification number (PIN), etc., to prove his identity. The logical problem is that this information, once exhibited, is potentially compromised and could be used by anyone to undetectably impersonate the legitimate owner. What is needed is a protocol that will allow an individual to “prove” that he knows the secret piece of information, whose possession is equated with his identity, without revealing anything about the information itself which could aid a would-be cheater to impersonate him. Several investigators have proposed identification schemes to accomplish this [1, 2, 3, 4] that depend on interactive-proof schemes, often referred to as zero-knowledge proofs or ping-pong protocols, in which the individual responds to a series of queries in a way that the legitimate user could, but which an impostor (probably) could not. We describe a simpler identity verification scheme which uses a public authentication channel to validate a private authentication channel belonging to the individual who wishes to prove his identity. The public and the private channels can be completely independent and can even be based on different authentication algorithms, or they can both be of the same type. This scheme also provides certified receipts for transactions whose legitimacy can later be verified by impartial arbiters who were not involved in the transaction itself.

Gustavus J. Simmons
Arbitration in Tamper Proof Systems
If DES ≈ RSA Then What’s the Difference Between True Signature and Arbitrated Signature Schemes?

Given that tamperfree devices exist, it is possible to construct true signature schemes that have the advantages of arbitrated signature schemes - protection against disavowing or forging messages - while lacking certain shortcomings. Other cryptographic protocols can also be improved. The contents of tamperfree devices can be neither examined nor modified.

George I. Davida, Brian J. Matt
Efficient Digital Public-Key Signatures with Shadow

This paper describes a strictly deterministic digital signature scheme using public-key cryptosystems. The scheme is under discussion inside a working group of ISO on signature schemes (TC97/SC20/WG2); a working draft has been written and recently accepted (with formal modifications to be added). By this presentation we hope to receive useful remarks for improving the scheme. It is the fastest known scheme for the verification of signatures (only one squaring plus some very easy operations).

Louis Guillou, Jean-Jacques Quisquater
Security-Related Comments Regarding McEliece’s Public-Key Cryptosystem

The optimal values for the parameters of the McEliece public key cryptosystem are computed. Using these values improves the cryptanalytic complexity of the system and decreases its data expansion. Secondly it is shown that the likelihood of the existence of more than one trapdoor in the system is very small.

Carlisle M. Adams, Henk Meijer

Design and Analysis of Cryptographic Systems

Components and Cycles of a Random Function

This investigation examines the average distribution of the components and cycles of a random function. Here we refer to the mappings from a finite set of, say, n elements into itself, denoted by Гn. Suppose the elements of Гn are assigned equal probability, i.e. P(γ) = n^−n, γ ∈ Гn. The directed graph that is naturally associated with γ consists of several components, each with a unique cycle. Define χn(s, t)(γ) as the number of components in γ containing at least the fraction s of the total number of nodes, with the size of each component's cycle not exceeding t·n^1/2. We show that the expected value of χn(s, t) can be approximated by a double integral; the average number of components of a given size with cycles of a specified length approximately equals the volume under the graph of the integrand. This expression can be used to estimate the probability that a function has a component which contains a significant percentage of the total number of nodes and yet whose cycle is relatively small.
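For small n, the component and cycle statistics in question can be measured directly; the sketch below decomposes the functional graph of any mapping on {0, ..., n−1} into components with their cycle lengths:

```python
def cycle_structure(f):
    """Decompose the functional graph of f: {0..n-1} -> {0..n-1}.
    Returns a list of (component_size, cycle_length) pairs, one per
    component; every component of a functional graph has exactly one cycle."""
    n = len(f)
    comp = [-1] * n      # component id assigned to each node (-1 = unseen)
    results = {}         # component id -> [size, cycle_length]
    for start in range(n):
        if comp[start] != -1:
            continue
        # Walk forward until we hit an already-classified node or
        # revisit a node from this same walk (i.e. we found a new cycle).
        path, pos = [], {}
        x = start
        while comp[x] == -1 and x not in pos:
            pos[x] = len(path)
            path.append(x)
            x = f[x]
        if comp[x] != -1:
            cid = comp[x]                       # joined an existing component
        else:
            cid = len(results)                  # closed a brand-new cycle
            results[cid] = [0, len(path) - pos[x]]
        for y in path:
            comp[y] = cid
        results[cid][0] += len(path)
    return [tuple(v) for v in results.values()]

assert cycle_structure([0, 0, 1, 2]) == [(4, 1)]       # one tree into a fixed point
assert cycle_structure([1, 0, 3, 2]) == [(2, 2), (2, 2)]  # two 2-cycles
```

Sampling random mappings with this routine gives empirical estimates of the distribution the paper approximates analytically.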

J. M. DeLaurentis
Fast Spectral Tests for Measuring Nonrandomness and the DES

Two spectral tests for detecting nonrandomness were proposed in 1977. One test, developed by J. Gait [1], considered properties of power spectra obtained from the discrete Fourier transform of finite binary strings. Gait tested the DES [10], [11] in output-feedback mode, as a pseudorandom generator. Unfortunately, Gait's test was not properly developed [3], [4], nor was his design for testing the DES adequate. Another test, developed by C. Yuen [2], considered analogous properties for the Walsh transform. In estimating the variance of spectral bands, Yuen assumed the spectral components to be independent. Except for the special case of Gaussian random numbers, this assumption introduces a significant error into his estimate. We recently [3], [4] constructed a new test for detecting nonrandomness in finite binary strings, which extends and quantifies Gait's test. Our test is based on an evaluation of a statistic, which is a function of Fourier periodograms [5]. Binary strings produced using short-round versions of the DES in output-feedback mode were tested. By varying the number of DES rounds from 1 to 16, it was thought possible to gradually vary the degree of randomness of the resulting strings. However, we found that each of the short-round versions, consisting of 1, 2, 3, 5 and 7 rounds, generated ensembles for which at least 10% of the test strings were rejected as random, at a confidence level approaching certainty. A new test, based on an evaluation of the Walsh spectrum, is presented here. This test extends the earlier test of C. Yuen. Testing of the DES, including short-round versions, has produced results consistent with those previously obtained in [3]. We prove that our measure of the Walsh spectrum is equivalent to a measure of the skirts of the logical autocorrelation function. It is clear that an analogous relationship exists between Fourier periodograms and the circular autocorrelation function.
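The Walsh spectrum on which such tests operate can be computed with the fast Walsh-Hadamard transform; the sketch below is generic and does not reproduce the paper's statistic:

```python
def walsh_spectrum(bits):
    """In-place fast Walsh-Hadamard transform of a binary string.
    The string is mapped {0,1} -> {+1,-1} first; length must be a
    power of two. Runs in O(n log n)."""
    x = [1 - 2 * b for b in bits]
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

# The all-zero string concentrates all energy in the DC coefficient...
assert walsh_spectrum([0, 0, 0, 0]) == [4, 0, 0, 0]
# ...and Parseval's relation holds: sum of squared coefficients = n^2.
assert sum(c * c for c in walsh_spectrum([0, 1, 1, 0])) == 16
```

A randomness test would compare the distribution of these coefficients (or bands of them) against what a truly random ±1 sequence would produce.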

Frank A. Feldman
Other Cycling Tests for DES

A very preliminary version of this paper was presented at the rump session of CRYPTO '86, kindly delivered by Gus Simmons (Sandia). The full paper will be a complete description of some cycling experiments applied to DES, using only very fast software for the computations. Using software alone permitted many measurements and tests.

Jean-Jacques Quisquater, Jean-Paul Delescaille
A Crypto-Engine

In this paper we present a design for a crypto-engine. We shall discuss the design and show the instruction set of this coprocessor and then show how this could be used to implement most of the known encryption algorithms. We will discuss why a coprocessor approach may be a better solution than adoption of specific encryption algorithms which can be broken or decertified.

George I. Davida, Frank B. Dancs
A Natural Taxonomy for Digital Information Authentication Schemes

There are two objectives that prompt the authentication of information; one is to verify that the information was, in all probability, actually originated by the purported originator, i.e., source identification; the other is to verify the integrity of the information, i.e., to establish that even if the message was originated by the authorized source, it hasn't been subsequently altered, repeated, delayed, etc. These two objectives are normally treated in the theory of authentication as though they are inseparable, and will also be treated in that way here, although recent results by Chaum [1] demonstrating message integrity with source anonymity and by Fiat and Shamir [2], by Goldreich, Micali and Wigderson [3], and by others demonstrating verification of source identity with no additional information exchange show that the functions can in some instances be separated. The relevance of this comment to the subject matter of this paper is that it suggests that there may be a fourth independent coordinate in information authentication besides the three that will be discussed here. In spite of considerable effort, we have been unable to produce a convincing argument for or against this being the case, so we only mention the possibility for completeness.

Gustavus J. Simmons
Analyzing Encryption Protocols Using Formal Verification Techniques (Extended Abstract)

Much work has been done in the area of analyzing encryption algorithms, such as DES [Dav 81, Bri 85, BMP 86]. A vast amount of work has also been expended on formally verifying communication protocols [IEE 82, STE 82, RW 83, LS 84, Hol 87]. In contrast, very little work has been devoted to the analysis and formal verification of encryption protocols.

Richard A. Kemmerer
Cryptosystems based on an analog of heat flow

It is commonplace to base computer security and information security on hard problems. Recent cryptosystems have been based on the knapsack problem [DE83, pp. 118-126; BR85] and the problem of factoring an integer [DE83, pp. 104-109]. The former problem is NP-complete [GA79, p. 247]. The place of the latter problem in complexity theory is not well understood, but it has been around in number theory for a long time.

G. R. Blakley, William Rundell
A Combinatorial Approach to Threshold Schemes

We investigate the combinatorial properties of threshold schemes. Informally, a (t, w)-threshold scheme is a way of distributing partial information (shadows) to w participants, so that any t of them can easily calculate a key, but no subset of fewer than t participants can determine the key. Our interest is in perfect threshold schemes: no subset of fewer than t participants can determine any partial information regarding the key. We give a combinatorial characterization of a certain type of perfect threshold scheme. We also investigate the maximum number of keys which a perfect (t, w)-threshold scheme can incorporate, as a function of t, w, and the total number of possible shadows, v. This maximum can be attained when there is a Steiner system S(t, w, v) which can be partitioned into Steiner systems S(t − 1, w, v). Using known constructions for such Steiner systems, we present two new classes of perfect threshold schemes, and discuss their implementation.
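For comparison, the best-known perfect (t, w)-threshold scheme is Shamir's polynomial construction, a different, non-combinatorial realization (the field modulus below is an arbitrary illustrative choice):

```python
import random

P = 2**61 - 1  # prime field for shares (illustrative modulus)

def make_shares(secret, t, w):
    """Shamir-style (t, w)-threshold scheme: pick a random polynomial of
    degree t-1 with constant term = secret; shadow i is (i, poly(i))."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, poly(i)) for i in range(1, w + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 from any t shadows."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(12345, t=3, w=5)
assert recover(shares[:3]) == 12345   # any 3 of the 5 shadows suffice
assert recover(shares[2:]) == 12345
```

Fewer than t shadows determine a polynomial family in which every possible secret is equally likely, which is exactly the perfectness property the abstract defines.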

D. R. Stinson, S. A. Vanstone
A Realization Scheme for the Identity-Based Cryptosystem

At Crypto '84, Shamir presented the new concept of an identity-based cryptosystem, but gave no realization scheme. In this paper a new realization scheme for a modified identity-based cryptosystem is proposed. The basic idea of the scheme rests on the discrete logarithm problem and the difficulty of factoring a large integer composed of two large primes. The scheme seems to be very secure if all members of the system keep their secret keys safe, but if a constant number of users conspire, the center's secret will be disclosed; the scheme thus has a close relation to the well-known "threshold scheme". To cope with such conspiracy, the basic system is extended to obtain a new scheme whose "threshold" is higher. Detailed considerations on the scheme are also given.

Hatsukazu Tanaka
Equivalence Between Two Flavours of Oblivious Transfers

The concept of oblivious transfer (O.T.) introduced by Halpern and Rabin [HR] turned out to be a very useful tool in designing cryptographic protocols. The related notion of "one-out-of-two oblivious transfer" was proposed by Even, Goldreich and Lempel in [EGL] together with some applications. Some more applications of this protocol can be found in recent papers [BCR], [GMW]. So far, the two notions were believed to be closely related but not known to be equivalent. This paper presents a proof that these two notions are computationally equivalent.

Claude Crépeau
A construction for authentication / secrecy codes from certain combinatorial designs

If we agree to use one of v possible messages to communicate one of k possible source states, then an opponent can successfully impersonate a transmitter with probability at least k / v, and can successfully substitute a message with a fraudulent one with probability at least (k − 1) / (v − 1). We wish to limit an opponent to these bounds. In addition, we desire that the observation of any two messages in the communication channel will give an opponent no clue as to the two source states. We describe a construction for a code which achieves these goals, and which does so with the minimum possible number of encoding rules (namely, v·(v − 1) / 2). The construction uses a structure from combinatorial design theory known as a perpendicular array.

D. R. Stinson
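The bounds quoted in the abstract are purely combinatorial, so they can be illustrated with a short numeric sketch. The parameter values v = 7, k = 3 below are arbitrary choices for illustration, not taken from the paper.

```python
# Numeric illustration of the bounds stated in the abstract:
# impersonation probability >= k/v, substitution >= (k-1)/(v-1),
# and minimum number of encoding rules = v*(v-1)/2.
from fractions import Fraction

def bounds(v: int, k: int):
    p_impersonate = Fraction(k, v)
    p_substitute = Fraction(k - 1, v - 1)
    min_rules = v * (v - 1) // 2
    return p_impersonate, p_substitute, min_rules

pi, ps, rules = bounds(v=7, k=3)
print(pi, ps, rules)  # 3/7 1/3 21
```

A code meeting the construction's goal would hold an opponent to exactly these probabilities while using only 21 encoding rules for v = 7.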

Applications

A Digital Signature Based on a Conventional Encryption Function

A new digital signature based only on a conventional encryption function (such as DES) is described which is as secure as the underlying encryption function -- the security does not depend on the difficulty of factoring and the high computational costs of modular arithmetic are avoided. The signature system can sign an unlimited number of messages, and the signature size increases logarithmically as a function of the number of messages signed. Signature size in a ‘typical’ system might range from a few hundred bytes to a few kilobytes, and generation of a signature might require a few hundred to a few thousand computations of the underlying conventional encryption function.

Ralph C. Merkle
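The logarithmic signature growth mentioned in the abstract comes from a hash tree: each signature carries only the sibling hashes on the path from its leaf to the published root. The sketch below illustrates that hash-tree idea with SHA-256 over power-of-two leaf counts; it is not Merkle's exact construction.

```python
# Minimal sketch of the hash-tree idea behind Merkle's scheme: leaves
# commit to one-time keys, only the root is published, and each signature
# carries an authentication path whose length is log2(number of leaves).
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def auth_path(leaves, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    level = [h(x) for x in leaves]
    path = []
    while len(level) > 1:
        path.append(level[index ^ 1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify(leaf, index, path, root):
    node = h(leaf)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

keys = [b"otk0", b"otk1", b"otk2", b"otk3"]
root = merkle_root(keys)
print(verify(keys[2], 2, auth_path(keys, 2), root))  # True
```

With four leaves the path holds two hashes; doubling the number of signable messages adds only one hash per signature, which is the logarithmic growth the abstract describes.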
How to Make Replicated Data Secure

Many distributed systems manage some form of long-lived data, such as files or data bases. The performance and fault-tolerance of such systems may be enhanced if the repositories for the data are physically distributed. Nevertheless, distribution makes security more difficult, since it may be hard to ensure that each repository is physically secure, particularly if the number of repositories is large. This paper proposes new techniques for ensuring the security of long-lived, physically distributed data. These techniques adapt replication protocols for fault-tolerance to the more demanding requirements of security. For a given threshold value, one set of protocols ensures that an adversary cannot ascertain the state of a data object by observing the contents of fewer than a threshold of repositories. These protocols are cheap; the message traffic needed to tolerate a given number of compromised repositories is only slightly more than the message traffic needed to tolerate the same number of failures. A second set of protocols ensures that an object’s state cannot be altered by an adversary who can modify the contents of fewer than a threshold of repositories. These protocols are more expensive; to tolerate t-1 compromised repositories, clients executing certain operations must communicate with t-1 additional sites.

Maurice P. Herlihy, J. D. Tygar
A Study of Password Security

Our work is motivated by the question of whether or not the password scheme used in UNIX is secure. The following password scheme is a somewhat simplified version of the actual password scheme used in UNIX. We feel that this simplified version captures the essential features of the actual password scheme used in UNIX. When a user logs in for the first time he creates a random password and types his user name together with the password into the system. The system creates an encryption of the password using the Data Encryption Standard (DES) and stores this (only the encryption, not the password) together with the user name in a password file. Thereafter, whenever the user logs in and types in his user name and password the system computes the encryption of the password and only allows the user to successfully log in if the encryption matches the entry stored with the user name in the password file.

Michael Luby, Charles Rackoff
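The simplified scheme described above can be sketched in a few lines. SHA-256 stands in here for the DES-based transformation actually used by UNIX crypt(3); the point is only that the file stores a one-way image of the password, never the password itself.

```python
# Sketch of the simplified password scheme described in the abstract,
# with SHA-256 as an illustrative stand-in for the DES-based encryption.
import hashlib

password_file = {}  # user name -> stored digest (never the password itself)

def register(user: str, password: str) -> None:
    password_file[user] = hashlib.sha256(password.encode()).hexdigest()

def login(user: str, password: str) -> bool:
    digest = hashlib.sha256(password.encode()).hexdigest()
    return password_file.get(user) == digest

register("alice", "s3cret")
print(login("alice", "s3cret"), login("alice", "guess"))  # True False
```

An attacker who reads the file sees only digests; recovering passwords then requires inverting the one-way transformation, which is exactly the question the paper studies for the DES-based case.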
A Video Scrambling Technique Based On Space Filling Curves (Extended Abstract)

In this paper we propose a video scrambling technique which scans a picture stored in a frame buffer along a pseudo-random space filling curve. We describe several efficient methods for generating cryptographically strong curves, and show that they actually decrease the bandwidth required to transmit the picture.

Yossi Matias, Adi Shamir
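The scheme's core operation, reading a frame buffer out along a space filling curve, can be illustrated with a plain Hilbert curve. The paper's curves are pseudo-random and cryptographically strong; the deterministic curve below shows only the scan order, not the security of the scheme.

```python
# Illustration of scanning a frame buffer along a space filling curve.
# A fixed Hilbert curve stands in for the pseudo-random curves of the
# paper; it demonstrates the scan, not the cryptographic construction.

def hilbert_d2xy(order, d):
    """Map index d along a Hilbert curve to (x, y) in a 2^order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:  # rotate the quadrant for horizontal segments
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def scramble(frame):
    """Emit the pixels of a 2^order-by-2^order frame in curve order."""
    order = len(frame).bit_length() - 1
    n = len(frame) * len(frame)
    return [frame[y][x] for x, y in (hilbert_d2xy(order, d) for d in range(n))]

print(scramble([[0, 1], [2, 3]]))  # [0, 2, 3, 1]
```

Consecutive samples along the curve are neighbouring pixels, which keeps the transmitted signal locally smooth; that locality is what allows the bandwidth reduction the abstract claims.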
Secure Audio Teleconference

A number of alternative encryption techniques have been suggested for secure audio teleconferencing implementable on a public switched network, in which the centralized facility, called the bridge, does not hold any secret. The role of the bridge is to synchronously add simultaneous encrypted signals, modulo some known number, and then transmit the result to all the participants. Each terminal has a secret key with which it can decrypt the above modular sum of encrypted signals to obtain the desired ordinary sum of cleartext signals. The secrecy of the systems is analyzed; some of them are provably secure, assuming the existence of one-way functions, while for the others we have partial cryptanalysis. We also present N-party identification and signature systems based on Fiat and Shamir’s single-party systems, and another N-party signature system based on the discrete-log problem. Our systems have communication complexity 2N times that of the basic Fiat-Shamir systems (as compared to a factor of N² in the direct application of the basic scheme to all pairs).

E. F. Brickell, P. J. Lee, Y. Yacobi
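The bridge's modular-addition role can be sketched in a toy model. Key handling is simplified to shared random pads purely for illustration; the paper's actual key management is more involved.

```python
# Toy model of the additive bridge described above: each terminal masks
# its signal samples with a pad mod M; the bridge adds ciphertexts mod M
# without learning anything; removing the summed pads yields the ordinary
# sum of the cleartext signals. Pads here are illustrative shared keys.
import random

M = 2 ** 16  # modulus known to everyone

def encrypt(signal, pad):
    return [(s + p) % M for s, p in zip(signal, pad)]

def bridge_sum(ciphertexts):
    return [sum(col) % M for col in zip(*ciphertexts)]

def decrypt_sum(mixed, pads):
    total_pad = [sum(col) % M for col in zip(*pads)]
    return [(c - p) % M for c, p in zip(mixed, total_pad)]

signals = [[100, 200], [5, 7], [30, 40]]
pads = [[random.randrange(M) for _ in range(2)] for _ in range(3)]
mixed = bridge_sum([encrypt(s, p) for s, p in zip(signals, pads)])
print(decrypt_sum(mixed, pads))  # [135, 247]
```

The bridge only ever sees uniformly masked samples, which matches the abstract's requirement that the centralized facility hold no secret.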

Informal Contributions

Attack on the Koyama-Ohta Identity Based Key Distribution Scheme

Koyama and Ohta proposed an identity based key distribution scheme. They considered three configurations: ring, complete graph, and star. The most practical configuration is the star, which is used in teleconferencing. We show the Koyama-Ohta star scheme to be insecure. Specifically, we show that an active eavesdropper may cut one of the lines and perform a bidirectional impersonation, thus establishing two separate keys, one with each side.

Yacov Yacobi
On the F-function of FEAL

The cryptographic strength of a Feistel cipher depends strongly on the properties of its F-function. Certain characteristics of the F-function of the Fast Data Encipherment Algorithm (FEAL) are investigated and compared to characteristics of the F-function of the Data Encryption Standard (DES). The effects of several straightforward modifications of FEAL’s F-function are discussed.

Walter Fumy
Patterns of Entropy Drop of the Key in an S-Box of the DES (Extended Abstract)

The S-boxes used in the DES have been examined in various aspects, such as non-linearity, propagation characteristics and input/output correlation immunity. In the present work we make a cryptographic study of these boxes from a new viewpoint, namely, by investigating the way in which the uncertainty of the 6-bit key vector k which controls the work of an S-box diminishes when a certain set of distinct plaintext vectors

K. C. Zeng, J. H. Yang, Z. T. Dai
The Rao-Nam Scheme is Insecure Against a Chosen-Plaintext Attack

The Rao-Nam scheme is discussed and generalized to Fq. It is shown that the scheme is insecure against a chosen-plaintext attack for practical code lengths. Based on these observations, an improved scheme is given which is not vulnerable to the chosen-plaintext attacks described.

René Struik, Johan van Tilburg
On Struik-Tilburg Cryptanalysis of Rao-Nam Scheme

A private-key cryptosystem using algebraic codes was presented at CRYPTO ’86 [1]; it is referred to later [2], and also here, as the Rao-Nam scheme. Subsequently, a chosen-plaintext attack was presented by Struik and van Tilburg in a rump session of CRYPTO ’87 and appears in this issue [2]. This note addresses a major difference between the types of chosen-plaintext attacks on private-key algebraic code cryptosystems and those on other, more conventional private-key schemes, and presents a rebuttal to the conclusions given in [2].

T. R. N Rao
A Generalization of Hellman’s Extension of Shannon’s Approach to Cryptography (Abstract)

In his landmark 1977 paper [Hell77], Hellman extends the Shannon theory approach to cryptography [Shan49]. In particular, he shows that the expected number of spurious key decipherments on length-n messages is at least 2^(H(K) - nD) - 1 for any uniquely encipherable, uniquely decipherable cipher, as long as each key is equally likely and the set of meaningful cleartext messages follows a uniform distribution (where H(K) is the key entropy and D is the redundancy of the source language). In this paper, we show that Hellman’s result holds with no restrictions on the distribution of keys and messages. We also bound from above and below the key equivocation upon seeing the ciphertext. Sufficient conditions for these bounds to be tight are given. The results are obtained through very simple, purely information-theoretic arguments, with no need for (explicit) counting arguments.

Pierre Beauchemin, Gilles Brassard
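Hellman's bound is easy to evaluate numerically. The figures below (a 56-bit key and a redundancy of about 3.2 bits per character, roughly English over an 8-bit alphabet) are illustrative assumptions, not values from the paper.

```python
# Numeric reading of the bound quoted above: the expected number of
# spurious key decipherments on length-n messages is at least
# 2^(H(K) - n*D) - 1, with H(K) the key entropy in bits and D the
# per-character redundancy. Parameter values are illustrative only.

def spurious_keys_lower_bound(key_entropy_bits, n, redundancy_bits_per_char):
    return 2 ** (key_entropy_bits - n * redundancy_bits_per_char) - 1

# 56-bit key, redundancy ~3.2 bits/character:
for n in (10, 15, 20):
    print(n, spurious_keys_lower_bound(56, n, 3.2))
```

The bound reaches zero near n = H(K)/D (here about 17.5 characters), the classical unicity distance; beyond that point the lower bound is vacuous.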
Multiparty Unconditionally Secure Protocols (Abstract)

It has been shown previously how almost any multiparty protocol problem can be solved. All the constructions suggested so far rely on trapdoor one-way functions, and therefore essentially assume that public-key cryptography is possible. It has also been shown that unconditional protection of a single designated participant is all that can be achieved under that model. Assuming only authenticated secrecy channels between pairs of participants, we show that essentially any multiparty protocol problem can be solved. Such a model actually implies the further requirement that fewer than one third of the participants deviate from the protocol. The techniques presented do not, however, rely on any cryptographic assumptions; they achieve the optimal result and provide security as good as the secrecy and authentication of the channels used. Moreover, the constructions have a built-in fault tolerance: once the participants have sent messages committing themselves to the secrets they will use in the protocol, there is no way fewer than a third of them can stop the protocol from completing correctly. Our technique relies on the so-called key-safeguarding or secret-sharing schemes proposed by Blakley and Shamir as basic building blocks. The usefulness of their homomorphic structure was observed by Benaloh, who proposed techniques very similar to ours.

David Chaum, Claude Crépeau, Ivan Damgård
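The homomorphic structure mentioned at the end of the abstract can be sketched with Shamir's scheme: adding shares pointwise yields shares of the sum of the secrets. The field modulus and threshold below are illustrative choices, not parameters from the paper.

```python
# Sketch of the homomorphic property of Shamir secret sharing that the
# construction above builds on: shares of two secrets, added pointwise,
# reconstruct the sum of the secrets. Field and threshold are illustrative.
import random

P = 2 ** 31 - 1  # a prime field modulus

def share(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P)."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num = den = 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

a = share(123, t=3, n=5)
b = share(456, t=3, n=5)
summed = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a, b)]
print(reconstruct(summed[:3]))  # 579
```

No share holder ever sees either secret, yet any three of them can open the sum 123 + 456; this is the building block Benaloh observed and the protocols above exploit.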
Backmatter
Metadata
Title
Advances in Cryptology — CRYPTO ’87
Edited by
Carl Pomerance
Copyright year
1988
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48184-3
Print ISBN
978-3-540-18796-7
DOI
https://doi.org/10.1007/3-540-48184-2