
Table of Contents

Frontmatter

Protocol Design and Analysis

A Calculus for Access Control in Distributed Systems

We study some of the concepts, protocols, and algorithms for access control in distributed systems, from a logical perspective. We account for how a principal may come to believe that another principal is making a request, either on his own or on someone else’s behalf. We also provide a logical language for access control lists, and theories for deciding whether requests should be granted.

M. Abadi, M. Burrows, B. Lampson, G. Plotkin

Deriving the Complete Knowledge of Participants in Cryptographic Protocols

Extended Abstract

This paper shows how to derive a representation of the participants’ knowledge in a cryptographic protocol. The model is based on the assumption that the underlying cryptographic system is perfect, and is an extension of the “Hidden Automorphism Model” introduced by Merritt. It can be used to establish the security of protocols.

Marie-Jeanne Toussaint

Systematic Design of Two-Party Authentication Protocols

We investigate protocols for authenticated exchange of messages between two parties in a communication network. Secure authenticated exchange is essential for network security. It is not difficult to design simple and seemingly correct solutions for it; however, many such ‘solutions’ can be broken. We give some examples of such protocols and we show a useful methodology which can be used to break many protocols. In particular, we break a protocol that is being standardized by the ISO.

We present a new authenticated exchange protocol which is both provably secure and highly efficient and practical. The security of the protocol is proven based on an assumption about the cryptosystem employed (namely, that it is secure when used in CBC mode on a certain message space). We think that this assumption is quite reasonable for many cryptosystems, and furthermore it is often assumed in practical use of the DES cryptosystem. Our protocol cannot be broken using the methodology we present (which was strong enough to catch all protocol flaws we found). The reduction to the security of the encryption mode indeed captures the non-existence of the exposures that the methodology catches (specialized to the actual use of encryption in our protocol). Furthermore, the protocol prevents chosen plaintext or ciphertext attacks on the cryptosystem.

The proposed protocol is efficient and practical in several respects. First, it uses only conventional cryptography (like the DES, or any privately-shared one-way function) and no public-key operations. Second, the protocol does not require synchronized clocks or counter management. Third, only a small number of encryption operations is needed (we use no decryption), all with a single shared key. In addition, only three messages are exchanged during the protocol, and the size of these messages is minimal. These properties are similar to existing and proposed actual protocols. This is essential for integration of the proposed protocol into existing systems and for embedding it in existing communication protocols.
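
The paper's protocol is built on shared-key encryption in CBC mode; as a hedged illustration of the general three-message, nonce-based shape such two-party authentication protocols take (not the authors' actual construction), here is a sketch using HMAC as the privately-shared one-way function, with hypothetical identities A and B:

```python
import hmac, hashlib, os

def mac(key: bytes, *parts: bytes) -> bytes:
    """Keyed MAC over length-prefixed parts (stand-in for the shared-key function)."""
    h = hmac.new(key, digestmod=hashlib.sha256)
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

def run_handshake(key: bytes) -> bool:
    """Simulate the three messages of a nonce-based mutual authentication."""
    # Message 1 (A -> B): A's fresh nonce.
    na = os.urandom(16)
    # Message 2 (B -> A): B's nonce plus a MAC binding both nonces and
    # both identities, so the reply cannot be replayed or reflected.
    nb = os.urandom(16)
    tag_b = mac(key, b"A", b"B", na, nb)
    # A verifies B's tag before answering.
    if not hmac.compare_digest(tag_b, mac(key, b"A", b"B", na, nb)):
        return False
    # Message 3 (A -> B): a MAC over B's nonce, proving A is live.
    tag_a = mac(key, b"response", nb)
    return hmac.compare_digest(tag_a, mac(key, b"response", nb))
```

Like the protocol in the paper, this shape needs no synchronized clocks or counters, only fresh nonces and a single shared key.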

Ray Bird, Inder Gopal, Amir Herzberg, Phil Janson, Shay Kutten, Refik Molva, Moti Yung

Combinatorics and Authentication

Combinatorial characterizations of authentication codes

In this paper, we prove two new combinatorial characterizations of authentication codes. Authentication codes without secrecy are characterized in terms of orthogonal arrays; and general authentication codes are characterized in terms of balanced incomplete block designs.

D. R. Stinson

Universal hashing and authentication codes

In this paper, we study the application of universal hashing to the construction of unconditionally secure authentication codes without secrecy. This idea is most useful when the number of authenticators is exponentially small compared to the number of possible source states (plaintext messages). We formally define some new classes of hash functions and then prove some new bounds and give some general constructions for these classes of hash functions. Then we discuss the implications to authentication codes.
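
As background, a minimal sketch of the idea: a strongly universal hash family such as h_{a,b}(x) = ax + b mod p serves directly as an unconditionally secure one-time authentication code without secrecy, with forgery probability 1/p regardless of the adversary's computing power. The parameters below are illustrative, not from the paper:

```python
import secrets

P = 2**61 - 1  # a prime; authenticators (tags) live in Z_P

def keygen():
    # A key is a random pair (a, b). Because h_{a,b}(x) = a*x + b mod P
    # is strongly universal, a single tag reveals nothing usable: any
    # forgery on a new message succeeds with probability at most 1/P.
    return secrets.randbelow(P), secrets.randbelow(P)

def tag(key, message: int) -> int:
    """Authenticator for a source state encoded as an integer mod P."""
    a, b = key
    return (a * message + b) % P
```

The abstract's point shows up here: the authenticator (one element of Z_P) can be exponentially smaller than the space of source states.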

D. R. Stinson

On Correlation-immune functions

We establish the link between correlation-immune functions and orthogonal arrays. We give a recursive definition of any correlation-immune function of maximal degree. We describe the set of quadratic balanced correlation-immune functions of maximal order. Some constructions are then deduced.

P. Camion, C. Carlet, P. Charpin, N. Sendrier

Secret Sharing and Information Theory

On the Size of Shares for Secret Sharing Schemes

A secret sharing scheme permits a secret to be shared among participants in such a way that only qualified subsets of participants can recover the secret, but any non-qualified subset has absolutely no information on the secret. The set of all qualified subsets defines the access structure to the secret. Sharing schemes are useful in the management of cryptographic keys and in multi-party secure protocols.

We analyze the relationships among the entropies of the sample spaces from which the shares and the secret are chosen. We show that there are access structures with 4 participants for which any secret sharing scheme must give to a participant a share at least 50% greater than the secret size. This is the first proof that there exist access structures for which the best achievable information rate (i.e., the ratio between the size of the secret and that of the largest share) is bounded away from 1. The bound is the best possible, as we construct a secret sharing scheme for the above access structures which meets the bound with equality.
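
For threshold access structures (any k of n participants qualified), Shamir's polynomial scheme achieves information rate 1: each share is exactly as large as the secret. The abstract's result is that more general access structures can force strictly larger shares. A minimal sketch of the threshold baseline, over an illustrative prime field:

```python
import random

P = 2**31 - 1  # prime field; secret and shares are elements of GF(P)

def share(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them recover it (Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):  # Horner evaluation of the random degree-(k-1) polynomial
        y = 0
        for c in reversed(coeffs):
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(P) from any k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Each share (x, f(x)) carries one field element of actual share data, matching the rate-1 benchmark that the paper's 4-participant access structures provably cannot reach.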

R. M. Capocelli, A. De Santis, L. Gargano, U. Vaccaro

On Verification in Secret Sharing

Verifiable Secret Sharing (VSS) has proven to be a powerful tool in the construction of fault-tolerant distributed algorithms. Previous results show that Unverified Secret Sharing, in which there are no requirements when the dealer is faulty during distribution of the secret, requires the same number of processors as VSS. This is counterintuitive: verification that the secret is well shared out should come at a price. In this paper, by focusing on information leaked to nonfaulty processors during verification, we separate a certain strong version of Unverified Secret Sharing (USS) from its VSS analogue in terms of the required number of processors. The proof of the separation theorem yields information about communication needed for the original VSS problem. In order to obtain the separation result we introduce a new definition of secrecy, different from the Shannon definition, capturing the intuition that “information” received from faulty processors may not be informative at all.

Cynthia Dwork

Non-Interactive and Information-Theoretic Secure Verifiable Secret Sharing

It is shown how to distribute a secret to n persons such that each person can verify that he has received correct information about the secret without talking with other persons. Any k of these persons can later find the secret (1 ≤ k ≤ n), whereas fewer than k persons get no (Shannon) information about the secret. The information rate of the scheme is 1/2, and the distribution as well as the verification requires approximately 2k modular multiplications per bit of the secret. It is also shown how a number of persons can choose a secret “in the well” and distribute it verifiably among themselves.
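
A toy sketch of the Pedersen-style construction: the dealer broadcasts commitments g^{f_i} h^{t_i} to the coefficients of two polynomials, and each participant checks his share pair against them with no interaction. The group parameters below are tiny illustrative values, not secure ones; note that each share is a pair of field elements, matching the stated information rate of 1/2:

```python
import random

# Toy parameters: q | p-1, with g and h of prime order q in Z_p*.
# In a real instantiation log_g(h) must be unknown to the dealer.
P, Q = 607, 101
G, H = 64, 122  # 2^6 mod 607 and 3^6 mod 607, both of order 101

def deal(secret, k, n):
    """Dealer: share `secret` among n parties, threshold k, with commitments."""
    f = [secret] + [random.randrange(Q) for _ in range(k - 1)]
    t = [random.randrange(Q) for _ in range(k)]  # blinding polynomial
    # Broadcast commitments C_i = g^{f_i} h^{t_i}: they bind the
    # polynomials without revealing anything about the secret.
    commits = [pow(G, fi, P) * pow(H, ti, P) % P for fi, ti in zip(f, t)]
    def ev(poly, x):
        return sum(c * pow(x, i, Q) for i, c in enumerate(poly)) % Q
    shares = [(j, ev(f, j), ev(t, j)) for j in range(1, n + 1)]
    return commits, shares

def verify(commits, share):
    """Participant j checks g^{s_j} h^{t_j} == prod_i C_i^{j^i}, alone."""
    j, sj, tj = share
    lhs = pow(G, sj, P) * pow(H, tj, P) % P
    rhs = 1
    for i, c in enumerate(commits):
        rhs = rhs * pow(c, pow(j, i, Q), P) % P
    return lhs == rhs
```

Verification uses only the participant's own share and the public commitments, which is exactly the non-interactive property claimed in the abstract.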

Torben Pryds Pedersen

Multiparty Secret Key Exchange Using a Random Deal of Cards

Extended Abstract

We consider the problem of multiparty secret key exchange. A “team” of players P1 through Pk wishes to determine an n-bit secret key in the presence of a computationally unlimited eavesdropper, Eve. The team players are dealt hands of cards of prespecified sizes from a deck of d distinct cards; any remaining cards are dealt to Eve. We explore how the team can use the information contained in their hands of cards to determine an n-bit key that is secret from Eve, that is, an n-bit string which each team player knows exactly but for which Eve’s probability of guessing the key correctly is 1/2^n both before and after she hears the communication between the team players. We describe randomized protocols for secret key exchange that work for certain classes of deals, and we present some conditions on the deal for such a protocol to exist.

Michael J. Fischer, Rebecca N. Wright

Cryptanalysis

Differential Cryptanalysis of Snefru, Khafre, REDOC-II, LOKI and Lucifer

Extended Abstract

In [1,2] we introduced the notion of differential cryptanalysis based on chosen plaintext attacks. In [3,4] we described the application of differential cryptanalysis to Feal[13,12] and extended the method to known plaintext attacks. In this paper we apply differential cryptanalytic methods to the hash function Snefru[10] and to the cryptosystems Khafre[11], REDOC-II[6,7], LOKI[5] and Lucifer[8].

Eli Biham, Adi Shamir

A Known Plaintext Attack of FEAL-4 and FEAL-6

We present new results on the cryptanalysis of the FEAL-N block cipher. As a matter of fact, almost all the attacks on this cryptosystem published so far are chosen plaintext attacks [3,4,5,7], except the announcement in [7] of a non-differential known plaintext attack on FEAL-4 which requires about 100000 plaintext blocks. We describe known plaintext attacks on FEAL-4 and FEAL-6, which require about 1000 and 20000 plaintext blocks respectively and are based on correlations with linear functions. Using similar methods, we have also found more recently an improved attack on FEAL-4, which requires only 200 known plaintext blocks.

Anne Tardy-Corfdir, Henri Gilbert

A switching closure test to analyze cryptosystems

Extended Abstract

The closure test MCT (meet-in-the-middle closure test) was introduced to analyze the algebraic properties of cryptosystems [KaRiSh]. Since MCT needs a large amount of memory, it is hard to implement with an ordinary meet-in-the-middle method. As a feasible version of MCT, this paper presents a switching closure test SCT, which is based on a new memoryless meet-in-the-middle method. To achieve the memoryless method, appropriate techniques, such as the extension of cycle detection methods for one function to a method for two functions and an efficient intersection search method that uses only a small amount of memory, are used in an extremely effective manner.

Hikaru Morita, Kazuo Ohta, Shoji Miyaguchi

An Attack on the Last Two Rounds of MD4

In [Rive90] the MD4 message digest algorithm was introduced, taking an input message of arbitrary length and producing an output 128-bit message digest. It is conjectured that it is computationally infeasible to produce two messages having the same message digest, or to produce any message having a given prespecified target message digest. In this paper it is shown that if the three-round MD4 algorithm is stripped of its first round, it is possible to find, for a given (initial) input value, two different messages hashing to the same output. A computer program implementing this attack takes about 1 millisecond on a 16 MHz IBM PS/2 to find such a collision.

Bert den Boer, Antoon Bosselaers

The Cryptanalysis of a New Public-Key Cryptosystem based on Modular Knapsacks

At the 1990 EuroCrypt Conference, Niemi proposed a new public-key cryptosystem based on modular knapsacks. Y. M. Chee in Singapore, and A. Joux and J. Stern in Paris, independently found that this cryptosystem is insecure. Our two cryptanalytic methods are slightly different, but both are based on the LLL algorithm. This is one more example of a cryptosystem that can be broken using this powerful algorithm.

Yeow Meng Chee, Antoine Joux, Jacques Stern

Complexity Theory

A One-Round, Two-Prover, Zero-Knowledge Protocol for NP

The model of zero-knowledge multi-prover interactive proofs was introduced by Ben-Or, Goldwasser, Kilian and Wigderson. A major open problem associated with these protocols is whether they can be executed in parallel. A positive answer was claimed by Fortnow, Rompel and Sipser, but its proof was later shown to be flawed by Fortnow, who demonstrated that the probability of cheating in n independent parallel rounds can be exponentially higher than the probability of cheating in n independent sequential rounds. In this paper we use refined combinatorial arguments to settle this problem by proving that the probability of cheating in a parallelized BGKW protocol is at most 2^{-n/9}, and thus every problem in NP has a one-round two-prover protocol which is perfectly zero-knowledge under no cryptographic assumptions.

Dror Lapidot, Adi Shamir

Interactive Proofs with Space Bounded Provers

Recent results in interactive proof systems [12][13][1] seem to indicate that it is easier for a prover in a single-prover interactive proof system to cheat the verifier than it is for a prover in a multiple-prover interactive proof system. We show that this is not the case for a single prover in which all but a fixed polynomial amount of the prover’s space is erased between each round. One consequence of this is that any multiple-prover interactive protocol in which the provers need only a polynomial amount of space can be easily transformed into a single-prover interactive protocol where the prover has only a fixed polynomial amount of space. This result also shows that one can easily transform checkers [5] into adaptive checkers [7] under the assumption that the program being checked has space bounded by a fixed polynomial.

Joe Kilian, Ronitt Rubinfeld

Functional Inversion and Communication Complexity

In this paper, we study the relation between multi-party communication complexity over various communication topologies and the complexity of inverting functions and/or permutations. In particular, we show that if a function has a ring-protocol or a tree-protocol of communication complexity bounded by H, then there is a circuit of size O(2^H n) which computes an inverse of the function. Consequently, we prove that although inverting NC^0 Boolean circuits is NP-complete, planar NC^1 Boolean circuits can be inverted in NC, and hence in polynomial time. In general, planar NC^k Boolean circuits can be inverted in n^{O(log^{k-1} n)} time. Also, from the ring-protocol results, we derive an Ω(n log n) lower bound on the VLSI area needed to lay out any one-way function. Our results on inverting Boolean circuits can be extended to invert algebraic circuits over finite rings.

One significant aspect of our result is that it enables us to compare the communication power of two topologies. We prove that on some topologies, neither a one-way function nor its inverse can be computed with bounded communication complexity.

Shang-Hua Teng

The Use of Interaction in Public Cryptosystems

Extended Abstract

For every k, we construct an oracle relative to which secret agreement can be done in k passes, but not in k−1. In particular, for k=3, we get an oracle relative to which secret agreement is possible, but relative to which trapdoor functions do not exist. Thus, unlike the case of private cryptosystems, there is no black-box reduction from a k-pass system to a (k−1)-pass system. Our construction is natural, suggesting that real-world protocols could trade higher interaction costs for an assumption strictly weaker than the existence of trapdoor functions. Finding a complexity-theoretic assumption necessary and sufficient for public cryptosystems to exist is one of the important open questions of cryptography. Our results make clear the possibility that this question is impossible to answer because it contains a false hidden assumption: that the existence of a 2-pass public cryptosystem follows from the existence of a k-pass system. The question should really be a family of questions: given k, find an assumption equivalent to the existence of a k-pass public cryptosystem.

Steven Rudich

Cryptographic Schemes Based on Number Theory

New Public-Key Schemes Based on Elliptic Curves over the Ring Zn

Three new trapdoor one-way functions are proposed that are based on elliptic curves over the ring Zn. The first class of functions is a naive construction, which can be used only in a digital signature scheme, and not in a public-key cryptosystem. The second, preferred class of functions does not suffer from this problem and can be used for the same applications as the RSA trapdoor one-way function, including zero-knowledge identification protocols. The third class of functions has properties similar to the Rabin trapdoor one-way functions. Although the security of these proposed schemes is based on the difficulty of factoring n, like the RSA and Rabin schemes, they seem to be more secure from the viewpoint of attacks that do not involve factoring, such as low-multiplier attacks. The new schemes are somewhat less efficient than the RSA and Rabin schemes.

Kenji Koyama, Ueli M. Maurer, Tatsuaki Okamoto, Scott A. Vanstone

Efficient Algorithms for the Construction of Hyperelliptic Cryptosystems

The Jacobian of a hyperelliptic curve, including an elliptic curve as a special case, offers a good primitive for cryptosystems, since cryptosystems (discrete logarithms) based on Jacobians seem to be more intractable than those based on conventional multiplicative groups. In this paper, we show that the problem of determining the group structure of the Jacobian can be characterized as being in NP ∩ co-NP, when the Jacobian is of non-degenerate type (“non-half-degenerate”). We also show that the hyperelliptic discrete logarithm can be characterized as being in NP ∩ co-NP, when the group structure is non-half-degenerate. Moreover, we show the reducibility of the hyperelliptic discrete logarithm to a multiplicative discrete logarithm. The extended Weil pairing over the Jacobian is the key tool for these algorithms.

Tatsuaki Okamoto, Kouichi Sakurai

CM-Curves with Good Cryptographic Properties

Our purpose is to describe elliptic curves with complex multiplication which in characteristic 2 have the following useful properties for constructing Diffie-Hellman type cryptosystems: (1) they are nonsupersingular (so that one cannot use the Menezes-Okamoto-Vanstone reduction of discrete log from elliptic curves to finite fields); (2) the order of the group has a large prime factor (so that discrete logs cannot be computed by giant-step/baby-step or the Pollard rho method); (3) doubling of points can be carried out almost as efficiently as in the case of the supersingular curves used by Vanstone; (4) the curves are easy to find.

Neal Koblitz

A New ID-Based Key Sharing System

Non-interactive ID-based key sharing schemes are convenient in practice since they do not need preliminary communication. However, they are vulnerable to conspiracy among entities. This paper proposes a new ID-based non-interactive key sharing scheme with high security against such conspiracies.

Shigeo Tsujii, Jinhui Chao

Pseudorandomness

Pseudo-random Generators from One-way Functions

One of the basic primitives in cryptography and other areas of computer science is a pseudo-random generator. The usefulness of a pseudo-random generator is demonstrated by the fact that it can be used to construct a private-key cryptosystem that is secure even against chosen plaintext attack. A pseudo-random generator can also be used to conserve random bits and allows reproducibility of results in Monte Carlo simulation experiments. Intuitively, a pseudo-random generator is a polynomial time computable function g that stretches a short random string x into a much longer string g(x) that “looks” just like a random string to any polynomial time adversary that is allowed to examine g(x). Thus, a pseudo-random generator can be used to efficiently convert a small amount of true randomness into a much longer string that is indistinguishable from a truly random string of the same length to any polynomial time adversary.

On the other hand, there seem to be a variety of natural examples of another basic primitive: the one-way function. Intuitively, a function f is one-way if: (1) given any x, f(x) can be computed in polynomial time; (2) given f(x) for a randomly chosen x, it is not possible on average to find an inverse x' such that f(x') = f(x) in polynomial time. It has not been proven that there are any one-way functions, but there are a number of problems from number theory, coding theory, graph theory and combinatorial theory that are candidates for problems that might eventually be proven to be one-way functions.

We show how to construct a pseudo-random generator from any one-way function. The journal version of this work (in preparation) is the combination of the results announced in the conference papers [ILL, Impagliazzo Levin Luby] and [H, Håstad].
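
For intuition, a toy sketch of the classical easier case (a one-way permutation rather than an arbitrary one-way function, which is what makes this paper's result hard): appending a Goldreich-Levin hardcore bit stretches the seed by one pseudorandom bit. The permutation candidate x ↦ x^3 mod 101 is purely illustrative and far too small to be one-way:

```python
# Toy one-way-permutation candidate: x -> x^3 mod p permutes Z_p*
# whenever gcd(3, p-1) = 1 (here p = 101, p-1 = 100).
P = 101

def f(x: int) -> int:
    return pow(x, 3, P)

def gl_bit(x: int, r: int) -> int:
    """Goldreich-Levin hardcore bit: the inner product <x, r> mod 2."""
    return bin(x & r).count("1") % 2

def prg(x: int, r: int):
    """G(x, r) = (f(x), r, <x,r>): output is one bit longer than the seed.
    Predicting the last bit from (f(x), r) would mean inverting f."""
    return f(x), r, gl_bit(x, r)
```

Iterating the same step (feed f(x) back in, keep r fixed, collect one bit per step) gives the Blum-Micali-style stretch to any polynomial length.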

Michael Luby

New Results on Pseudorandom Permutation Generators Based on the Des Scheme

We denote by ψk the permutation generator based on the DES scheme with k rounds, where the S-boxes are replaced by random independent functions. We denote by |P1 − P1*| (respectively |P1 − P1**|) the probability of distinguishing such a permutation from a random function (respectively from a random permutation) by means of a distinguishing circuit that has m oracle gates.

In 1988, M. Luby and C. Rackoff [1] proved that for all k ≥ 3, |P1 − P1*| ≤ m(m−1)/2^n. At Eurocrypt '90, J. Pieprzyk wondered at the end of his paper [4] whether that inequality could be improved. This is the problem we consider here. In particular, such an improvement could greatly reduce the length of the keys used in a “direct” application of these theorems to a cryptosystem.

Our main results are:
1. For ψ3 and ψ4 there is no really tighter inequality than ≤ m(m−1)/2^n.
2. However, for ψ5 (and then for ψk, k ≥ 5), there is a much tighter inequality than Luby and Rackoff's. For example, for ψ6, |P1 − P1*| and |P1 − P1**| are ≤ 12m/2^n + 18m^3/2^{2n}.
3. When m is very small (m = 2 or 3, for example) it is possible to give an explicit evaluation of the effect of the number of rounds k on the “better and better pseudorandomness” of ψk.
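
A minimal sketch of the object under study: ψk applies k Feistel rounds whose round functions are independent, uniformly random functions on n-bit halves, and the result is always an invertible permutation on 2n-bit blocks. The half-width below is an illustrative choice:

```python
import random

N = 16  # bits per half; the permutation acts on 2N-bit blocks

def random_round_function():
    """A uniformly random function {0,1}^N -> {0,1}^N, as a lookup table."""
    table = [random.getrandbits(N) for _ in range(2 ** N)]
    return lambda x: table[x]

def feistel(rounds, L, R):
    """Apply the Feistel rounds: (L, R) -> (R, L xor f(R)) per round.
    With k random round functions this is the generator psi_k."""
    for f in rounds:
        L, R = R, L ^ f(R)
    return L, R

def feistel_inv(rounds, L, R):
    """Invert by running the rounds backwards; note the round functions
    themselves never need to be inverted."""
    for f in reversed(rounds):
        L, R = R ^ f(L), L
    return L, R
```

This is why the construction matters: it turns random (non-invertible) functions into a permutation, and the quoted bounds measure how close that permutation is to a truly random one.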

Jacques Patarin

Applications and Implementations

Faster Modular Multiplication by Operand Scaling

There are a number of techniques known for speeding up modular multiplication, which is the main arithmetic operation in RSA cryptography. This note shows how to gain speed by scaling the modulus. The resulting hardware is limited only by the speed of addition. Detailed analysis of fan-out shows that over existing methods the speedup is potentially as much as two-fold. This is because the addition and fan-out can now be done in parallel. Of course, in RSA the modulus can be chosen to need no scaling, so that most of the minor extra costs are eliminated.
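
For context, a sketch of the baseline operation being accelerated: interleaved (shift-and-add) modular multiplication, where each step doubles the accumulator, conditionally adds the multiplicand, and reduces by conditional subtraction rather than division. The paper's operand-scaling idea speeds up exactly this reduction step in hardware; the code below is only a software illustration of the baseline, not the scaled design:

```python
def modmul_interleaved(a: int, b: int, m: int, nbits: int) -> int:
    """Compute a*b mod m, MSB first, assuming a < m and b < 2**nbits.
    The partial result is kept reduced with (at most two) conditional
    subtractions per step, so no full division is ever performed."""
    acc = 0
    for i in range(nbits - 1, -1, -1):
        acc <<= 1                # shift: corresponds to one clock of hardware
        if (b >> i) & 1:
            acc += a             # conditional add of the multiplicand
        while acc >= m:          # bounded conditional subtract (runs <= 2x)
            acc -= m
    return acc
```

Deciding whether to subtract requires comparing against m across the full word, which is where scaling the modulus (so its leading digits are trivial to compare against) pays off.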

Colin D. Walter

Universal Electronic Cash

This paper proposes the first ideal untraceable electronic cash system, which solves the most crucial problem inherent in real cash and all previous untraceable electronic cash systems. The main advantage of the new system is that the customer can subdivide his cash balance, C (dollars), into many pieces in any way he pleases, until the total value of all subdivided pieces equals C. This system can be implemented efficiently. In a typical implementation, the data size of one piece of electronic cash is less than 100 bytes regardless of the face value of the piece, and the computation time for each transaction is several seconds, assuming the existence of a Rabin scheme chip. The security of this scheme relies on the difficulty of factoring.

Tatsuaki Okamoto, Kazuo Ohta

How to Break and Repair a “Provably Secure” Untraceable Payment System

Extended Abstract

At Crypto '88, an untraceable payment system with provable security against abuse by individuals was presented by Damgård. We show how to break the untraceability of that system completely.

Next, an improved version of the system is presented. We also augment the system with security for the individuals against loss of money, and we introduce the possibility of receipts for payments. Finally, whereas all this concerns an on-line system, we present a similar construction for untraceable electronic cash.

Birgit Pfitzmann, Michael Waidner

Practical Quantum Oblivious Transfer

We describe a protocol for quantum oblivious transfer, utilizing faint pulses of polarized light, by which one of two mutually distrustful parties (“Alice”) transmits two one-bit messages in such a way that the other party (“Bob”) can choose which message he gets but cannot obtain information about both messages (he will learn his chosen bit’s value with exponentially small error probability and may gain at most exponentially little information about the value of the other bit), and Alice will be entirely ignorant of which bit he received. Neither party can cheat (i.e., deviate from the protocol while appearing to follow it) in such a way as to obtain more information than what is given by the description of the protocol. Our protocol is easy to modify in order to implement the All-or-Nothing Disclosure of one out of two string messages, and it can be used to implement bit commitment and oblivious circuit evaluation without complexity-theoretic assumptions, in a way that remains secure even against cheaters that have unlimited computing power. Moreover, this protocol is practical in that it can be realized with available optoelectronic apparatus while being immune to any technologically feasible attack for the foreseeable future.

Charles H. Bennett, Gilles Brassard, Claude Crépeau, Marie-Hélène Skubiszewska

Exploiting Parallelism in Hardware Implementation of the DES

The Data Encryption Standard algorithm has features which may be used to advantage in parallelizing an implementation. The kernel of the algorithm, a single round, may be decomposed into several parallel computations, resulting in a structure with minimal delay. These rounds may also be computed in a pipelined parallel structure for operation modes which do not require ciphertext feedback. Finally, system I/O may be performed in parallel with the encryption computation for further gain. Although several of these ideas have been discussed before separately, the composite presentation is novel.

Albert G. Broscius, Jonathan M. Smith

Secure Computation Protocols

Foundations of Secure Interactive Computing

The problem of secure multiparty computation is usually described as follows: each of n players in a network holds a private input x_i. Together they would like to compute a function F(x_1, ..., x_n) without revealing the inputs, even though no particular player can be trusted. Attempts to contrive formal definitions for the problem have treated properties of the solution separately (correctness, privacy, etc.), giving an ad hoc collection of desirable properties and varied definitions that do not support clear or comparable proofs.

We propose a clear, concise, and unified definition for security and reliability in interactive computations. We develop a reduction called relative resilience that captures all desired properties at a single blow. Relative resilience allows one to classify and compare arbitrary protocols in terms of security and reliability, in the same way that Turing reductions allow one to classify and compare algorithms in terms of complexity. Security and reliability reduce to a simple statement: a protocol for F is resilient if it is as resilient as an ideal protocol in which a trusted host is available to compute F. Relative resilience captures the notions of security and reliability for a wide variety of interactive computations, including zero-knowledge proof systems, Byzantine Agreement, oblivious transfer, and two-party oblivious circuit evaluation, among others.

Relative resilience provides modular proof techniques that other approaches lack: one may compare a sequence of protocols ranging from the real-world protocol to the ideal protocol, proving the relative resilience of each successive protocol with greater clarity and less complexity. Folk theorems about the “transitivity” of security and the security of concatenated protocols are now provable; and the proofs reveal that such folk theorems fail under subtle conditions that have previously gone unnoticed. The conciseness and modularity of our definitions and proof techniques provide great clarity in designing and reasoning about protocols and have already led to provably secure protocols that are significantly more efficient than those appearing in the literature.

Donald Beaver

Secure Computation

Abstract

We define what it means for a network of communicating players to securely compute a function of privately held inputs. Intuitively, we wish to correctly compute its value in a manner which protects the privacy of each player’s contribution, even though a powerful adversary may endeavor to disrupt this enterprise.

This highly general and desirable goal has been around a long time, inspiring a large body of protocols, definitions, and ideas, starting with Yao [1982, 1986] and Goldreich, Micali and Wigderson [1987]. But all the while, it had resisted a full and satisfactory formulation.

Our definition is built on several new ideas. Among them:
• Closely mimicking an ideal evaluation. A secure protocol must mimic this abstraction in a run-by-run manner, our definition depending as much on individual executions as on global properties of ensembles.
• Blending privacy and correctness in a novel way, using a special type of simulator designed for the purpose.
• Requiring adversarial awareness, capturing the idea that the adversary should know, in a very strong sense, certain information associated to the execution of a protocol.

Among the noteworthy and desirable properties of our definition is the reducibility of secure protocols, which we believe to be a cornerstone in a mature theory of secure computation.

Silvio Micali, Phillip Rogaway

A Cryptographic Scheme for Computerized General Elections

This paper presents a novel cryptographic scheme which fully conforms to the requirements of holding large scale general elections. The participants of the scheme are the voters, the candidates and the government. The scheme ensures independence between the voters in that they do not have to be present at the same time or go through several phases together; no global computation is needed. The scheme preserves the privacy of the votes against any subset of dishonest voters, and against any proper subset of dishonest candidates, including the government. Robustness is ensured in that no subset of voters can corrupt or disrupt the election. This also means that no voter is able to vote more than once without being detected. The verifiability of the scheme ensures that the government and the candidates cannot present a false tally without being caught. “Voting by telephone” is possible by employing the proposed scheme.

Kenneth R. Iversen

Efficient Multiparty Protocols Using Circuit Randomization

The difference between theory and practice often rests on one major factor: efficiency. In distributed systems, communication is usually expensive, and protocols designed for practical use must require as few rounds of communication and as little message traffic as possible.

A secure multiparty protocol to compute function F is a protocol in which, when each player i of n players starts with private input x_i, each participant i learns F(x_1, ..., x_n) without revealing more information than what can be derived from learning the function value. Some number l of players may be corrupted by an adversary who may then change the messages they send. Recent solutions to this problem have suffered in practical terms: while theoretically using only polynomially many rounds, in practice the constants and exponents of such polynomials are too great. Normally, such protocols express F as a circuit C_F, call on each player to secretly share x_i, and proceed to perform “secret addition and multiplication” on secretly shared values. The cost is proportional to the depth of C_F times the cost of secret multiplication; and multiplication requires several rounds of interaction.

We present a protocol that simplifies the body of such a protocol and significantly reduces the number of rounds of interaction. The steps of our protocol take advantage of a new and counterintuitive technique for evaluating a circuit: set every input to every gate in the circuit completely at random, and then make corrections. Our protocol replaces each secret multiplication (multiplication that requires further sharing, addition, zero-knowledge proofs, and secret reconstruction) used in the body of a standard protocol by a simple reconstruction of secretly shared values, thereby reducing rounds by an order of magnitude. Furthermore, these reconstructions require only broadcast messages (but do not require Byzantine Agreement). The simplicity of broadcast and reconstruction provides efficiency and ease of implementation. Our transformation is simple and compatible with other techniques for reducing rounds.

Donald Beaver
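The randomize-then-correct step the abstract describes can be sketched in modern terms as follows. This is a hedged, illustrative simulation (the technique is now commonly called a Beaver triple), not the paper's exact protocol: a pre-shared random triple (a, b, c) with c = a*b lets the players multiply shared values using only broadcasts and local corrections. Modulus and player count are illustrative, and `random` is not a cryptographic RNG.

```python
# Sketch of round-efficient secret multiplication via a random triple.
import random

P = 2**31 - 1  # prime modulus (illustrative choice)

def share(secret, n):
    """Additively share `secret` among n players mod P."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh, n):
    # Preprocessing: a secret-shared random triple (a, b, c = a*b).
    a, b = random.randrange(P), random.randrange(P)
    a_sh, b_sh, c_sh = share(a, n), share(b, n), share(a * b % P, n)
    # Online phase: open the randomized values d = x - a and e = y - b
    # (broadcast shares, then reconstruct in the clear).
    d = reconstruct([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
    e = reconstruct([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])
    # Local correction: [xy] = [c] + d*[b] + e*[a], plus d*e added once,
    # since xy = c + d*b + e*a + d*e when c = a*b.
    z_sh = [(ci + d * bi + e * ai) % P
            for ci, ai, bi in zip(c_sh, a_sh, b_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P
    return z_sh

n = 5
z_sh = beaver_multiply(share(21, n), share(2, n), n)
print(reconstruct(z_sh))  # 42
```

The opened values d and e are uniformly random, so broadcasting them leaks nothing about x and y; that is what lets the interactive multiplication collapse to reconstructions.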

Public-Key Cryptosystems and Signatures

Non-Interactive Zero-Knowledge Proof of Knowledge and Chosen Ciphertext Attack

The zero-knowledge proof of knowledge, first defined by Feige, Fiat and Shamir, was used by Galil, Haber and Yung as a means of constructing (out of a trapdoor function) an interactive public-key cryptosystem provably secure against chosen ciphertext attack. We introduce a revised setting which permits the definition of a non-interactive analogue, the non-interactive zero-knowledge proof of knowledge, and show how it may be constructed in that setting from a non-interactive zero-knowledge proof system for NP (of the type introduced by Blum, Feldman and Micali). We give a formalization of chosen ciphertext attack in our model which is stronger than the “lunchtime attack” considered by Naor and Yung, and prove that a non-interactive public-key cryptosystem based on non-interactive zero-knowledge proof of knowledge is secure against it.

Charles Rackoff, Daniel R. Simon

Towards Practical Public Key Systems Secure Against Chosen Ciphertext Attacks

We present two efficient constructions aimed at making public key systems secure against chosen ciphertext attacks. The first one applies to any deterministic public key system and modifies it into a system that is provably as hard to break under a passive attack as the original one, but has the potential of making a chosen ciphertext attack useless to an enemy. The second construction applies to the El Gamal/Diffie-Hellman public key system. Again, the modified system is provably as hard to break under a passive attack as the original one, and under an additional cryptographic assumption, a chosen ciphertext attack is provably useless to an enemy. We also point out a connection between such public-key systems and efficient identification schemes.

Ivan Damgård
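The second construction can be sketched in the form it is commonly described in later literature (often called “Damgård ElGamal”): ElGamal with an additional key pair whose only role is a consistency check, so malformed ciphertexts are rejected instead of decrypted. The details below are an assumption rather than a verbatim rendering of the paper, and the parameters are toy-sized.

```python
# Hedged sketch: ElGamal with a second key pair used as a consistency check.
import random

p = 2039   # toy safe prime, p = 2*1019 + 1; real use needs a large prime
q = 1019   # prime order of the subgroup generated by g
g = 4      # generator of the order-q subgroup (toy choice)

x1, x2 = random.randrange(1, q), random.randrange(1, q)  # secret keys
y1, y2 = pow(g, x1, p), pow(g, x2, p)                    # public keys

def encrypt(m):
    r = random.randrange(1, q)
    # (g^r, y1^r, m * y2^r): the middle element exists only to be checked.
    return pow(g, r, p), pow(y1, r, p), (m * pow(y2, r, p)) % p

def decrypt(c1, c2, c3):
    if pow(c1, x1, p) != c2:   # reject inconsistent ciphertexts outright,
        return None            # denying a chosen-ciphertext attacker answers
    s = pow(c1, x2, p)                 # shared secret y2^r
    return (c3 * pow(s, p - 2, p)) % p # divide out via Fermat inverse
```

Under a passive attack this is as hard to break as plain ElGamal, since the extra components are just another ElGamal instance; the gain is that decryption queries on tampered ciphertexts return nothing useful.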

Shared generation of authenticators and signatures

Extended Abstract

Often it is desired that the power to sign or authenticate messages be shared. This paper presents methods to collectively generate RSA signatures, provably secure authenticators, and unconditionally secure authenticators. In the new schemes, l individuals are given shares such that any k ≤ l of them can generate a signature (authenticator), while fewer than k cannot. When the k people have finished signing (authenticating), nobody can perform an impersonation or substitution attack. These schemes are called threshold signature (authentication) schemes. They are clearly preferable to having each of the k individuals send a separate authenticator for each message, or having each of them send their share to a “trusted” person who signs on their behalf.

In all of the schemes we assume that the shareholders (senders) and the receiver have secure workstations, but the network and servers are not necessarily secure.

Yvo Desmedt, Yair Frankel
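The k-out-of-l access structure behind such threshold schemes can be illustrated with Shamir secret sharing over a prime field. This sketch only shows share generation and recovery of a secret (e.g. a signing key); the paper's point is precisely that one can sign *without* ever reconstructing the key, so parameters and names here are purely illustrative.

```python
# Shamir k-of-l secret sharing: any k shares recover the secret, fewer
# than k reveal nothing (the polynomial is random in its top k-1 coeffs).
import random

P = 2**61 - 1  # a Mersenne prime modulus (illustrative choice)

def deal(secret, k, l):
    """Evaluate a random degree-(k-1) polynomial f with f(0) = secret."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, l + 1)]

def recover(shares):
    """Lagrange-interpolate f at x = 0 from any k points (xi, yi)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (P is prime).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = deal(12345, k=3, l=5)
print(recover(shares[:3]))  # 12345
```

Any subset of 3 of the 5 shares works in the example; any 2 shares are consistent with every possible secret, which is the threshold property the abstract relies on.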

Cryptographically Strong Undeniable Signatures, Unconditionally Secure for the Signer

Extended Abstract

We present the first undeniable signature schemes where signers are unconditionally secure. In the efficient variants, the security for the recipients relies on a discrete logarithm assumption or on factoring; in a theoretical version, it relies on claw-free permutation pairs.

Moreover, the efficient variants are the first practical cryptographically strong undeniable signature schemes at all, and in many cases they are more efficient than previous signature schemes unconditionally secure for the signer.

Interesting new subprotocols include efficient collision-free hash functions based on a discrete logarithm assumption, efficient perfectly hiding commitments for elements of ℤp (p prime), and fairly practical perfect zero-knowledge proofs for arithmetic formulas in ℤp or ℤ.

David Chaum, Eugène van Heijst, Birgit Pfitzmann
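The discrete-log-based collision-free hash mentioned among the subprotocols can be sketched as follows, assuming the construction has the well-known form H(x1, x2) = g^x1 · h^x2 mod p: any collision immediately yields log_g(h), so finding collisions is as hard as computing discrete logarithms. Parameters below are toy-sized and purely illustrative; in practice h must be generated so that nobody knows log_g(h).

```python
# Collision-free compression under a discrete-log assumption (toy sketch).
p = 2039   # safe prime, p = 2*1019 + 1
q = 1019   # prime order of the subgroup used for exponents
g = 4      # generator of the order-q subgroup (toy choice)
h = 9      # second subgroup element; log_g(h) must be unknown in real use

def dl_hash(x1, x2):
    """Compress a pair of exponents in Z_q to a single element mod p."""
    assert 0 <= x1 < q and 0 <= x2 < q
    return (pow(g, x1, p) * pow(h, x2, p)) % p

print(dl_hash(123, 456))
```

A collision dl_hash(x1, x2) = dl_hash(x1', x2') with x2 ≠ x2' gives g^(x1-x1') = h^(x2'-x2), from which log_g(h) = (x1-x1')·(x2'-x2)^(-1) mod q is computable, contradicting the assumption.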

Backmatter
