1994 | Book

Advances in Cryptology — CRYPTO ’94

14th Annual International Cryptology Conference Santa Barbara, California, USA August 21–25, 1994 Proceedings

Edited by: Yvo G. Desmedt

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

The CRYPTO ’94 conference is sponsored by the International Association for Cryptologic Research (IACR), in co-operation with the IEEE Computer Society Technical Committee on Security and Privacy. It has taken place at the University of California, Santa Barbara, from August 21–25, 1994. This is the fourteenth annual CRYPTO conference, all of which have been held at UCSB. This is the first time that proceedings are available at the conference. The General Chair, Jimmy R. Upton, has been responsible for local organization, registration, etc.

There were 114 submitted papers which were considered by the Program Committee. Of these, 1 was withdrawn and 38 were selected for the proceedings. There are also 3 invited talks. Two of these are on aspects of cryptography in the commercial world. The one on hardware aspects will be presented by David Maher (AT&T), the one on software aspects by Joseph Pato (Hewlett-Packard). There will also be a panel discussion on “Securing an Electronic World: Are We Ready?” The panel members will be: Ross Anderson, Bob Blakley, Matt Blaze, George Davida, Yvo Desmedt (moderator), Whitfield Diffie, Joan Feigenbaum, Blake Greenlee, Martin Hellman, David Maher, Miles Smid. The topic of the panel will be introduced by the invited talk of Whitfield Diffie on “Securing the Information Highway.”

These proceedings contain revised versions of the 38 contributed talks. Each paper was sent to at least 3 members of the program committee for comments.

Table of Contents

Frontmatter

Block Ciphers: Differential and Linear Cryptanalysis

The First Experimental Cryptanalysis of the Data Encryption Standard

This paper describes an improved version of linear cryptanalysis and its application to the first successful computer experiment in breaking the full 16-round DES. The scenario is a known-plaintext attack based on two new linear approximate equations, each of which provides candidates for 13 secret key bits with negligible memory. Moreover, reliability of the key candidates is taken into consideration, which increases the success rate. As a result, the full 16-round DES is breakable with high success probability if $$ 2^{43} $$ random plaintexts and their ciphertexts are available. The author carried out the first experimental attack using twelve computers to confirm this: he finally reached all of the 56 secret key bits in fifty days, out of which forty days were spent for generating plaintexts and their ciphertexts and only ten days were spent for the actual key search.

Mitsuru Matsui
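
Linear cryptanalysis rests on finding an approximation whose bias over random inputs is measurably far from zero; the data complexity of the attack then scales roughly as the inverse square of that bias. Below is a minimal sketch of the counting step, using a hypothetical 4-bit S-box rather than DES, purely as an illustration of the idea.

```python
# Toy illustration of the counting step in linear cryptanalysis.
# The 4-bit S-box below is hypothetical; DES S-boxes map 6 bits to 4 bits.

SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]

def parity(x: int) -> int:
    """Parity of the bits of x."""
    return bin(x).count("1") & 1

def bias(in_mask: int, out_mask: int) -> float:
    """Fraction of inputs with <in_mask, x> == <out_mask, S(x)>, minus 1/2."""
    hits = sum(parity(x & in_mask) == parity(SBOX[x] & out_mask)
               for x in range(16))
    return hits / 16.0 - 0.5

# Scan all non-trivial mask pairs and report the strongest approximation.
best = max(((a, b, bias(a, b)) for a in range(1, 16) for b in range(1, 16)),
           key=lambda t: abs(t[2]))
print("best masks a=%x b=%x, bias=%+.3f" % best)
```
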
Linear Cryptanalysis of the Fast Data Encipherment Algorithm

This paper discusses the security of the Fast Data Encipherment Algorithm (FEAL) against Linear Cryptanalysis. It has been confirmed that the entire subkeys used in FEAL-8 can be derived with $$ 2^{25} $$ pairs of known plaintexts and ciphertexts with a success rate of approximately 70%, spending about 1 hour using a WS (SPARCstation 10 Model 30). This paper also evaluates the security of FEAL-N in comparison with that of the Data Encryption Standard (DES).

Kazuo Ohta, Kazumaro Aoki
Differential-Linear Cryptanalysis

This paper introduces a new chosen-text attack on iterated cryptosystems, such as the Data Encryption Standard (DES). The attack is very efficient for 8-round DES, recovering 10 bits of key with 80% probability of success using only 512 chosen plaintexts. The probability of success increases to 95% using 768 chosen plaintexts. More key can be recovered with reduced probability of success. The attack takes less than 10 seconds on a SUN-4 workstation. While comparable in speed to existing attacks, this 8-round attack represents an order of magnitude improvement in the amount of required text.

Susan K. Langford, Martin E. Hellman
Linear Cryptanalysis Using Multiple Approximations

We present a technique which aids in the linear cryptanalysis of a block cipher and allows for a reduction in the amount of data required for a successful attack. We note the limits of this extension when applied to DES, but illustrate that it is generally applicable and might be exceptionally successful when applied to other block ciphers. This forces us to reconsider some of the initial attempts to quantify the resistance of block ciphers to linear cryptanalysis, and by taking account of this new technique we cover several issues which have not yet been considered.

Burton S. Kaliski Jr., M. J. B. Robshaw

Schemes Based on New Problems

Hashing with SL2

We propose a new family of hash functions based on computations over a finite field of characteristic 2. These functions can be computed quickly, detect small modifications of the input text, and their security is equivalent to a precise mathematical problem. They rely on the arithmetic of the group of matrices SL2, and improve upon previous functions based on the same strategy.

Jean-Pierre Tillich, Gilles Zémor
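
The construction hashes a bit string as a product of two fixed generator matrices, one per bit value. The toy sketch below uses the SL2(Z_p) generators of Zémor's earlier scheme over a small prime only to illustrate the matrix-product idea; the characteristic-2 arithmetic of this paper is not reproduced, and the parameters are far too small for security.

```python
# Toy matrix-product hash in SL_2(Z_p): bit 0 -> A, bit 1 -> B, hash = product.
# Generators follow Zemor's earlier construction; the paper's scheme works over
# a field of characteristic 2 and differs in detail. The modulus is a toy value.

P = 2**31 - 1    # a Mersenne prime, used only as an illustrative modulus

A = ((1, 1), (0, 1))
B = ((1, 0), (1, 1))

def matmul(x, y, p=P):
    return (((x[0][0]*y[0][0] + x[0][1]*y[1][0]) % p,
             (x[0][0]*y[0][1] + x[0][1]*y[1][1]) % p),
            ((x[1][0]*y[0][0] + x[1][1]*y[1][0]) % p,
             (x[1][0]*y[0][1] + x[1][1]*y[1][1]) % p))

def sl2_hash(bits):
    h = ((1, 0), (0, 1))            # identity matrix
    for b in bits:
        h = matmul(h, A if b == 0 else B)
    return h

msg = [int(b) for b in "1011001110001111"]
print(sl2_hash(msg))
```
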
Design of Elliptic Curves with Controllable Lower Boundary of Extension Degree for Reduction Attacks

In this paper, we present a design strategy of elliptic curves whose extension degrees needed for reduction attacks have a controllable lower boundary, based on the complex multiplication fields method of Atkin and Morain over prime fields.

Jinhui Chao, Kazuo Tanada, Shigeo Tsujii
Cryptographic Protocols Based on Discrete Logarithms in Real-quadratic Orders

We generalize and improve the schemes of [4]. We introduce analogues of exponentiation and discrete logarithms in the principal cycle of real quadratic orders. This enables us to implement many cryptographic protocols based on discrete logarithms, e.g. a variant of the signature scheme of ElGamal [8].

Ingrid Biehl, Johannes Buchmann, Christoph Thiel

Signatures I

Designated Confirmer Signatures and Public-Key Encryption are Equivalent

The concept of designated confirmer signatures was introduced by Chaum [Cha94] to improve a shortcoming of undeniable signatures. The present paper formalizes the definition of designated confirmer signatures and proves that a designated confirmer signature scheme is equivalent to a public-key encryption scheme with respect to existence. In addition, the paper proposes practical designated confirmer signature schemes which are more efficient in signing than the previous scheme [Cha94].

Tatsuaki Okamoto
Directed Acyclic Graphs, One-way Functions and Digital Signatures
Extended Abstract

The goals of this paper are to formalize and investigate the general concept of a digital signature scheme based on a general one-way function without trapdoor for signing a predetermined number of messages. It generalizes and unifies previous work of Lamport, Winternitz, Merkle, Even et al. and Vaudenay. The structure of the computation yielding a public key from a secret key corresponds to a directed acyclic graph $$ \mathcal{G} $$. A signature scheme for $$ \mathcal{G} $$ can be defined as an antichain in the poset of minimal verifiable sets of vertices of $$ \mathcal{G} $$, with the naturally defined computability relation as the order relation, and where a set is verifiable if and only if the public key can be computed from the set.

Daniel Bleichenbacher, Ueli M. Maurer
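
Since the framework generalizes Lamport's one-time construction, a minimal Lamport signature sketch may help fix ideas. SHA-256 stands in for the one-way function; the key signs a single 256-bit digest and must never be reused.

```python
# Minimal Lamport one-time signature, the simplest instance of the kind of
# one-way-function-based scheme this framework generalizes. Illustration only.
import hashlib, os

N = 256  # sign a 256-bit message digest

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def keygen():
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(N)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(pk, msg: bytes, sig) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))

sk, pk = keygen()
sig = sign(sk, b"one message only")
print(verify(pk, b"one message only", sig))   # True
print(verify(pk, b"another message", sig))    # False (with overwhelming probability)
```
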
An Identity-Based Signature Scheme with Bounded Life-Span

The aim of this paper is to present a signature scheme in which a signer's ability to sign messages is limited to a fixed number k of signatures. It is an identity-based signature scheme in which each signature can be used only once. We call such schemes “bounded life-span”. The scheme is based on mental games and uses zero-knowledge tools. A validation center is needed to initialize this identity-based scheme. A credential center is used to ensure the unicity and the bounded life-span aspects. It allows delegation and numerous practical applications.

Olivier Delos, Jean-Jacques Quisquater

Implementation and Hardware Aspects

More Flexible Exponentiation with Precomputation

A new precomputation method is presented for computing $$ g^R $$ for a fixed element g and a randomly chosen exponent R in a given group. Our method is more efficient and flexible than the previously proposed methods, especially in the case where the amount of storage available is very small or quite large. It is also very efficient in computing $$ g^R y^E $$ for a small size E and variable number y, which occurs in the verification of Schnorr’s identification scheme or its variants. Finally it is shown that our method is well-suited for parallel processing as well.

Chae Hoon Lim, Pil Joong Lee
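
A minimal sketch of the generic fixed-base idea that such precomputation methods refine: store the powers $$ g^{2^i} $$ once, after which any $$ g^R $$ costs only multiplications over the set bits of R. The more elaborate table organization of the paper is not reproduced, and the parameters below are illustrative toy values.

```python
# Generic fixed-base precomputation: store g^(2^i) mod p once, then g^R is a
# product over the set bits of R. A sketch of the idea that windowing/comb
# methods refine; modulus, base and exponent size are toy values.

p = 2**127 - 1          # hypothetical prime modulus (a Mersenne prime)
g = 5                   # hypothetical fixed base
BITS = 160              # exponents drawn from [0, 2^160)

TABLE = []
x = g % p
for _ in range(BITS):
    TABLE.append(x)     # TABLE[i] = g^(2^i) mod p
    x = (x * x) % p

def fixed_base_exp(R: int) -> int:
    """Compute g^R mod p using only multiplications against the table."""
    acc = 1
    i = 0
    while R:
        if R & 1:
            acc = (acc * TABLE[i]) % p
        R >>= 1
        i += 1
    return acc

R = 0xDEADBEEF
assert fixed_base_exp(R) == pow(g, R, p)
print(fixed_base_exp(R))
```
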
A Parallel Permutation Multiplier for a PGM Crypto-chip

A symmetric key cryptosystem, called PGM, based on logarithmic signatures for finite permutation groups was invented by S. Magliveras in the late 1970’s. PGM is intended to be used in cryptosystems with high data rates. This requires exploitation of the potential parallelism in composition of permutations. As a first step towards a full VLSI implementation, a parallel multiplier has been designed and implemented on an FPGA (Field Programmable Gate Array) chip. The chip works as a co-processor in a DSP system. This paper explains the principles of the architecture, reports about implementation details and concludes by giving an estimate of the expected performance in VLSI.

Tamás Horváth, Spyros S. Magliveras, Tran van Trung
Cryptographic Randomness from Air Turbulence in Disk Drives

A computer disk drive’s motor speed varies slightly but irregularly, principally because of air turbulence inside the disk’s enclosure. The unpredictability of turbulence is well-understood mathematically; it reduces not to computational complexity, but to information losses. By timing disk accesses, a program can efficiently extract at least 100 independent, unbiased bits per minute, at no hardware cost. This paper has three parts: a mathematical argument tracing our RNG’s randomness to a formal definition of turbulence’s unpredictability, a novel use of the FFT as an unbiasing algorithm, and a “sanity check” data analysis.

Don Davis, Ross Ihaka, Philip Fenstermacher

Authentication and Secret Sharing

Cryptanalysis of the Gemmell and Naor Multiround Authentication Protocol

Gemmell and Naor proposed a new protocol for the authentication of long messages which was based on block codes and which used a transmission channel k times. This multiround authentication makes it possible to limit the key size independently of the message length. We propose a new attack and show that the probability analysis made by Gemmell and Naor, which was based only on the minimum distance property of the codes, does not hold for our attack. Considering also the impersonation attack, we conclude that the number of rounds has to be odd.

Christian Gehrmann
LFSR-based Hashing and Authentication

We present simple and efficient hash functions applicable to secure authentication of information. The constructions are mainly intended for message authentication in systems implementing stream cipher encryption and are suitable for other applications as well. The proposed hash functions are implemented through linear feedback shift registers and are therefore attractive for hardware applications. As an example, a single 64-bit LFSR will be used to authenticate 1 Gbit of information with a failure probability of less than $$ 2^{-30} $$. One of the constructions is the cryptographic version of the well-known cyclic redundancy codes (CRC); the other is based on Toeplitz hashing where the matrix entries are generated by an LFSR. The latter construction achieves essentially the same hashing and authentication strength as a completely random matrix but at a substantially lower cost in randomness, key size and implementation complexity. Of independent interest is our characterization of the properties required from a family of hash functions in order to be secure for authentication when combined with a (secure) stream cipher.

Hugo Krawczyk
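
A sketch of the CRC-style construction described here: the message polynomial is multiplied by $$ x^n $$ and reduced modulo a secret degree-n irreducible polynomial over GF(2). The irreducibility test for key generation and the stream-cipher pad applied to the resulting tag are omitted, and the fixed polynomial below is only a placeholder for a randomly chosen key.

```python
# Sketch of the CRC-style hash: hash(m) = m(x) * x^n mod p(x) over GF(2),
# where p(x) is a secret degree-n irreducible polynomial. Key generation
# (picking a random irreducible p) and the stream-cipher pad are omitted.

N = 64
# Placeholder "key": x^64 + x^4 + x^3 + x + 1, a commonly listed low-weight
# irreducible polynomial. A real key would be a fresh random irreducible poly.
P = (1 << 64) | 0b11011

def gf2_mod(a: int, p: int, deg: int) -> int:
    """Reduce the GF(2) polynomial a modulo p (deg = degree of p)."""
    while a.bit_length() > deg:
        a ^= p << (a.bit_length() - deg - 1)
    return a

def crc_hash(message: bytes, p: int = P, n: int = N) -> int:
    m = int.from_bytes(message, "big")
    return gf2_mod(m << n, p, n)        # m(x) * x^n mod p(x)

print(hex(crc_hash(b"authenticate 1 Gbit of data, 64 bits at a time")))
```
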
New Bound on Authentication Code with Arbitration

For the authentication model with arbitration (A2-code), Johansson showed a lower bound on the size of encoding rules. However, this bound is no longer tight if the size of source states is large. This paper presents a tighter lower bound on the size of encoding rules for large source states. An A2-code which approximately meets the proposed bound is also shown. Further, we show that the size of encoding rules for the transmitter can be greatly reduced if the receiver’s cheating probability is allowed to be slightly larger.

Kaoru Kurosawa
Multi-Secret Sharing Schemes
Extended Abstract

A multi-secret sharing scheme is a protocol to share m arbitrarily related secrets $$ s_1, \ldots, s_m $$ among a set of participants $$ \mathcal{P} $$. In this paper we put forward a general theory of multi-secret sharing schemes by using an information theoretical framework. We prove lower bounds on the size of information held by each participant for various access structures. Finally, we prove the optimality of the bounds by providing protocols.

Carlo Blundo, Alfredo De Santis, Giovanni Di Crescenzo, Antonio Giorgio Gaggia, Ugo Vaccaro
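
For background, here is a minimal single-secret Shamir threshold sketch over a prime field; the multi-secret schemes and the information-theoretic bounds of the paper build on this kind of polynomial sharing but are not shown here.

```python
# Background sketch: Shamir (t, n) threshold sharing of one secret over Z_p.
# The paper's multi-secret schemes generalize this setting; this is not their scheme.
import random

P = 2**61 - 1   # a Mersenne prime, large enough for a toy field

def share(secret: int, t: int, n: int):
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0, given exactly t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
print(reconstruct(shares[:3]))   # 123456789
```
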

Zero-Knowledge

Designing Identification Schemes with Keys of Short Size

In the last few years, there have been several attempts to build identification protocols that do not rely on arithmetical operations with large numbers but only use simple operations (see [10, 8]). One was presented at the CRYPTO 89 rump session ([8]) and depends on the so-called Permuted Kernel problem (PKP). Another appeared in the CRYPTO 93 proceedings and is based on the syndrome decoding problem (SD) from the theory of error correcting codes ([11]). In this paper, we introduce a new scheme of the same family with the distinctive character that both the secret key and the public identification key can be taken to be of short length. By short, we basically mean the usual size of conventional symmetric cryptosystems. As is known, the possibility of using short keys has been a challenge in public key cryptography and has practical applications. Our scheme relies on a combinatorial problem which we call Constrained Linear Equations (CLE for short) and which consists of solving a set of linear equations modulo some small prime q, the unknowns being constrained to lie in a specific subset of the integers mod q. Thus, we enlarge the set of tools that can be used in cryptography.

Jacques Stern
Proofs of Partial Knowledge and Simplified Design of Witness Hiding Protocols

Suppose we are given a proof of knowledge $$ \mathcal{P} $$ in which a prover demonstrates that he knows a solution to a given problem instance. Suppose also that we have a secret sharing scheme $$ \mathcal{S} $$ on n participants. Then under certain assumptions on $$ \mathcal{P} $$ and $$ \mathcal{S} $$, we show how to transform $$ \mathcal{P} $$ into a witness indistinguishable protocol, in which the prover demonstrates knowledge of the solution to some subset of n problem instances out of a collection of subsets defined by $$ \mathcal{S} $$. For example, using a threshold scheme, the prover can show that he knows at least d out of n solutions without revealing which d instances are involved. If the instances are independently generated, we get a witness hiding protocol, even if $$ \mathcal{P} $$ did not have this property. Our results can be used to efficiently implement general forms of group oriented identification and signatures. Our transformation produces a protocol with the same number of rounds as $$ \mathcal{P} $$ and communication complexity n times that of $$ \mathcal{P} $$. Our results use no unproven complexity assumptions.

Ronald Cramer, Ivan Damgård, Berry Schoenmakers
Language Dependent Secure Bit Commitment

In this paper, we define two classes of languages, one induces opaque/transparent bit commitments and the other induces transparent/opaque bit commitments. As an application of opaque/transparent and transparent/opaque properties, we first show that if a language L induces an opaque/transparent bit commitment, then there exists a prover-practical perfect zero-knowledge proof for L, and we then show that if a language L induces a transparent/opaque bit commitment, then there exists a bounded round perfect zero-knowledge proof for L.

Toshiya Itoh, Yuji Ohta, Hiroki Shizuya
On the length of cryptographic hash-values used in identification schemes

Many interactive identification schemes based on the zero-knowledge concept use cryptographic hash-values, either in their basic design or in specific variants. In this paper, we first show that 64-bit hash-values, a length often suggested, definitely decrease the level of security of all these schemes. (Of course, this does not compromise the security of the schemes by themselves.) Then we prove that collision-resistance is a sufficient condition to achieve the claimed level of security. Finally, by using a weaker notion of collision-resistance, we present interesting variants of some of these schemes (in particular the Schnorr and the Guillou-Quisquater schemes) which minimize the number of communication bits for a given level of security.

Marc Girault, Jacques Stern
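
The quantitative concern usually cited for 64-bit hash values is the standard birthday bound: for an n-bit hash, a collision can be found with roughly $$ 2^{n/2} $$ evaluations,

$$ n = 64 \;\Rightarrow\; \text{collision search cost} \approx 2^{32} \text{ hash evaluations,} $$

which is within reach, whereas inverting the hash directly would still cost on the order of $$ 2^{64} $$ operations. This back-of-the-envelope arithmetic is offered only as context for the 64-bit discussion above.
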

Signatures II

Incremental Cryptography: The Case of Hashing and Signing

We initiate the investigation of a new kind of efficiency for cryptographic transformations. The idea is that having once applied the transformation to some document M, the time to update the result upon modification of M should be “proportional” to the “amount of modification” done to M. Thereby one obtains much faster cryptographic primitives for environments where closely related documents are undergoing the same cryptographic transformations. We provide some basic definitions enabling treatment of the new notion. We then exemplify our approach by suggesting incremental schemes for hashing and signing which are efficient according to our new measure.

Mihir Bellare, Oded Goldreich, Shafi Goldwasser
An Efficient Existentially Unforgeable Signature Scheme and its Applications

We present a practical existentially unforgeable signature scheme and point out applications where its use is desirable. A signature scheme is existentially unforgeable if, given any polynomial (in the security parameter) number of pairs $$ (m_1, S(m_1)), (m_2, S(m_2)), \ldots, (m_k, S(m_k)) $$, where S(m) denotes the signature on the message m, it is computationally infeasible to generate a pair $$ (m_{k+1}, S(m_{k+1})) $$ for any message $$ m_{k+1} \notin \{m_1, \ldots, m_k\} $$. We have developed a signature scheme that requires at most 6 times the amount of time needed to generate a signature using RSA (which is not existentially unforgeable).

Cynthia Dwork, Moni Naor

Combinatorics and its Applications

Bounds for Resilient Functions and Orthogonal Arrays
Extended Abstract

Orthogonal arrays (OAs) are basic combinatorial structures, which appear under various disguises in cryptology and the theory of algorithms. Among their applications are universal hashing, authentication codes, resilient and correlation-immune functions, derandomization of algorithms, and perfect local randomizers. In this paper, we give new bounds on the size of orthogonal arrays using Delsarte’s linear programming method. Then we derive bounds on resilient functions and discuss when these bounds can be met.

Jürgen Bierbrauer, K. Gopalakrishnan, D. R. Stinson
Tracing Traitors

We give cryptographic schemes that help trace the source of leaks when sensitive or proprietary data is made available to a large set of parties. This is particularly important for broadcast and database access systems, where the data should be accessible only to authorized users. Such schemes are very relevant in the context of pay television, and easily combine with and complement the Broadcast Encryption schemes of [6].

Benny Chor, Amos Fiat, Moni Naor

Number Theory

Towards the Equivalence of Breaking the Diffie-Hellman Protocol and Computing Discrete Logarithms

Let G be an arbitrary cyclic group with generator g and order |G| with known factorization. G could be the subgroup generated by g within a larger group H. Based on an assumption about the existence of smooth numbers in short intervals, we prove that breaking the Diffie-Hellman protocol for G and base g is equivalent to computing discrete logarithms in G to the base g when a certain side information string S of length 2 log |G| is given, where S depends only on |G| but not on the definition of G and appears to be of no help for computing discrete logarithms in G. If every prime factor p of |G| is such that one of a list of expressions in p, including p − 1 and p + 1, is smooth for an appropriate smoothness bound, then S can efficiently be constructed and therefore breaking the Diffie-Hellman protocol is equivalent to computing discrete logarithms.

Ueli M. Maurer
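
For reference, a toy Diffie-Hellman exchange; the paper relates breaking exactly this key agreement (computing $$ g^{ab} $$ from $$ g^a $$ and $$ g^b $$) to computing discrete logarithms in the group. The modulus and generator below are illustrative stand-ins, far too small and unstructured for real use.

```python
# Toy Diffie-Hellman key agreement in Z_p^* (parameters far too small for real use).
import secrets

p = 2**127 - 1      # a Mersenne prime, used here only as a toy modulus
g = 3               # hypothetical base; a real system fixes a generator of a
                    # large prime-order subgroup with known group order

a = secrets.randbelow(p - 2) + 1        # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1        # Bob's secret exponent

A = pow(g, a, p)                        # Alice -> Bob
B = pow(g, b, p)                        # Bob -> Alice

k_alice = pow(B, a, p)
k_bob   = pow(A, b, p)
assert k_alice == k_bob                 # both sides derive g^(ab) mod p

# Diffie-Hellman problem: given only (g, A, B), compute g^(ab) mod p.
# Discrete logarithm problem: given (g, A), compute a.
print(hex(k_alice)[:18], "...")
```
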
Fast Generation of Provable Primes Using Search in Arithmetic Progressions

Many cryptographic algorithms use number theory. They share the problem of generating large primes with a given (fixed) number n of bits. In a series of articles, Brandt, Damgård, Landrock and Pomerance address the problem of optimal use of probabilistic primality proofs for the generation of cryptographic primes. Maurer proposed using the Pocklington lemma for generating provable primes. His approach loses efficiency due to involved mechanisms for generating a close to uniform distribution of primes. We propose an algorithm which generates provable primes and can be shown to be the most efficient prime generation algorithm to date. This is possible at the cost of a slight reduction of the set of primes which may be produced by the algorithm. However, the entropy of the primes produced by this algorithm is asymptotically equal to the entropy of primes with random uniform distribution. Primes are sought in arithmetic progressions and proved by recursion. Search in arithmetic progressions allows the use of Eratosthenes sieves, which finally leads to saving 1/3 of the pseudoprime tests compared to random search.

Preda Mihailescu
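
A compact sketch of recursive provable-prime generation via the Pocklington criterion, the approach in the Maurer line of work that this paper refines: from a proven prime p, candidates N = kp + 1 with even k < p are tested, and a passing candidate is certified prime. The sieved search in arithmetic progressions that gives the paper its speedup is not reproduced here.

```python
# Sketch of recursive provable-prime generation via Pocklington's criterion:
# if N = k*p + 1 with p prime, k < p, 2^(N-1) = 1 (mod N) and gcd(2^k - 1, N) = 1,
# then N is prime. Roughly doubling the size at each step yields a proven prime.
# Simplified sketch only, not the sieved arithmetic-progression search of the paper.
import math, random

def provable_prime(bits: int) -> int:
    if bits <= 20:
        # Small base case: trial division is itself a primality proof here.
        while True:
            n = random.getrandbits(bits) | (1 << (bits - 1)) | 1
            if all(n % d for d in range(3, math.isqrt(n) + 1, 2)):
                return n
    p = provable_prime((bits + 1) // 2 + 1)     # proven prime with > bits/2 bits
    while True:
        k = random.randrange(2, min(p, 1 << (bits - p.bit_length())), 2)  # even k < p
        N = k * p + 1
        if N.bit_length() != bits:
            continue
        if pow(2, N - 1, N) == 1 and math.gcd(pow(2, k, N) - 1, N) == 1:
            return N                            # prime by Pocklington's theorem

print(provable_prime(256))
```
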

Cryptanalysis and Protocol Failures

Attack on the Cryptographic Scheme NIKS-TAS

The NIKS-TAS scheme, proposed by Tsujii, Araki, and Sekine in 1993, is an ID-based cryptographic key sharing scheme. We present an algebraic method for attacking this scheme, requiring the cooperation of a small number of collaborators to discover the key shared by any two parties.

Don Coppersmith
On the Risk of Opening Distributed Keys

We describe an insider known-key attack on key distribution systems which are based on public keys. This is of a general type and applies to the key distribution system presented by Yacobi at Crypto ’90, the Goss system, the Günther system presented at Eurocrypt ’89 and the key exchange version of COMSET, based on a system presented by Brandt et al. at Crypto ’89. The attack is primarily theoretical, in the sense that it assumes that some session keys are leaked or lost. Well designed systems will prevent this. However it could have practical consequences with certain applications (e.g. negotiation of contracts or poor implementations). We discuss the implications and ways to prevent the attack.

Mike Burmester
Cryptanalysis of Cryptosystems based on Remote Chaos Replication

In the last five years, many cryptosystems based on the chaos phenomenon have been proposed. Most of them use chaotic maps, i.e., discrete-time chaos. The recent announcement of a cryptosystem based on continuous-time chaos that is generated by a very simple electronic circuit known as Chua’s circuit passed unrecognized by a large part of the cryptographic community. It is an analog of the Vernam cipher system, but uses auto-synchronization through remote replication of the chaotic masking signal. After an introductory description of continuous-time chaotic systems and their synchronization, a general definition and discussion of cryptosystems based on remote chaos replication is given. A cryptanalytic attack on these systems is developed that can break the cryptosystem using Chua’s circuit for all types of information-bearing signals.

Th. Beth, D. E. Lazic, A. Mathias

Pseudo-Random Generation

A Fourier Transform Approach to the Linear Complexity of Nonlinearly Filtered Sequences

A method for analyzing the linear complexity of nonlinear filterings of PN-sequences that is based on the Discrete Fourier Transform is presented. The method makes use of “Blahut’s theorem”, which relates the linear complexity of an N-periodic sequence in $$ GF(q)^N $$ and the Hamming weight of its frequency-domain associate. To illustrate the power of this approach, simple proofs are given of Key’s bound on linear complexity and of a generalization of a condition of Groth and Key for which equality holds in this bound.

James L. Massey, Shirlei Serconek
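
For reference, the statement of Blahut's theorem invoked above: for an N-periodic sequence $$ s_0, s_1, \ldots $$ over GF(q) with gcd(N, q) = 1 and α an element of order N in an extension field of GF(q), the linear complexity equals the Hamming weight of the frequency-domain associate,

$$ L(s) = \#\{\, k : A_k \neq 0,\; 0 \le k \le N-1 \,\}, \qquad A_k = \sum_{j=0}^{N-1} s_j \alpha^{jk} . $$
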

Block Ciphers: Design and Cryptanalysis

The Security of Cipher Block Chaining

The Cipher Block Chaining — Message Authentication Code (CBC MAC) specifies that a message $$ x = x_1 \cdots x_m $$ be authenticated among parties who share a secret key a by tagging x with a prefix of $$ f_a^{(m)}(x) \stackrel{\mathrm{def}}{=} f_a(f_a(\cdots f_a(f_a(x_1) \oplus x_2) \oplus \cdots \oplus x_{m-1}) \oplus x_m), $$ where f is some underlying block cipher (e.g., f = DES). This method is a pervasively used international and U.S. standard. We provide its first formal justification, showing the following general lemma: cipher block chaining a pseudorandom function gives a pseudorandom function. Underlying our results is a technical lemma of independent interest, bounding the success probability of a computationally unbounded adversary in distinguishing between a random ml-bit to l-bit function and the CBC MAC of a random l-bit to l-bit function.

Mihir Bellare, Joe Kilian, Phillip Rogaway
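
A minimal sketch of the CBC MAC computation being analyzed. A keyed, truncated SHA-256 function stands in for the block cipher $$ f_a $$ purely so the example runs; it is not part of the paper, and the security statement above assumes a fixed number of blocks m.

```python
# CBC MAC over l-bit blocks: tag = f_a(f_a(...f_a(f_a(x1) ^ x2)...) ^ xm).
# A keyed, truncated SHA-256 stands in for the block cipher f_a so that the
# sketch runs; it is not part of the paper's construction.
import hashlib, hmac

BLOCK = 8  # l = 64 bits

def f(key: bytes, block: bytes) -> bytes:
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def cbc_mac(key: bytes, message: bytes) -> bytes:
    # The raw CBC MAC is analyzed for inputs of a fixed number of blocks.
    assert len(message) % BLOCK == 0, "block-aligned, fixed-length input assumed"
    y = bytes(BLOCK)                     # zero chaining value, so y1 = f_a(x1)
    for i in range(0, len(message), BLOCK):
        x = message[i:i + BLOCK]
        y = f(key, bytes(a ^ b for a, b in zip(y, x)))
    return y

key = b"0123456789abcdef"
print(cbc_mac(key, b"sixteen byte msg" * 4).hex())
```
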
A Chosen Plaintext Attack of the 16-round Khufu Cryptosystem

In 1990, Merkle proposed two fast software encryption functions, Khafre and Khufu, as possible replacements for DES [1]. In 1991, Biham and Shamir applied their differential cryptanalysis technique to Khafre [2], and obtained an efficient attack on the 16-round version and some bounds on the 24-round version. However, these attacks take advantage of the fact that the S-boxes used for Khafre are public; they cannot be applied to Khufu, which uses secret S-boxes, and no attack on Khufu has been proposed so far. In this paper, we present a chosen plaintext attack on the 16-round version of Khufu, which is based on differential properties of this algorithm. The derivation of first information concerning the secret key requires about $$ 2^{31} $$ chosen plaintexts and $$ 2^{31} $$ operations. Our estimate of the resources required for breaking the entire scheme is about $$ 2^{43} $$ chosen plaintexts and about $$ 2^{43} $$ operations.

Henri Gilbert, Pascal Chauvaud
Ciphertext Only Attack for One-way function of the MAP using One Ciphertext

FEAL-N, proposed by NTT, is an N-round block cipher which is well suited for fast software execution. We study a method which improves the FEAL algorithm against cryptanalysis, based on a careful analysis of the data randomizer. Since the round function f used as the data randomizer has an inverse function $$ f^{-1} $$, we have already shown that there exists a key-independent relation between plaintext and ciphertext. In this paper, we apply our attack to the ciphertext only attack for the one-way function of the MAP using only one ciphertext. Computer implementation of the proposed attack shows that the computational complexity of obtaining the plaintext is reduced to $$ O(2^{32}) $$.

Yukiyasu Tsunoo, Eiji Okamoto, Tomohiko Uyematsu
Pitfalls in Designing Substitution Boxes
Extended Abstract

Two significant recent advances in cryptanalysis, namely the differential attack put forward by Biham and Shamir [3] and the linear attack by Matsui [7, 8], have had a devastating impact on data encryption algorithms. An eminent problem that researchers are facing is to design S-boxes or substitution boxes so that an encryption algorithm that employs the S-boxes is immune to the attacks. In this paper we present evidence indicating that there are many pitfalls on the road to achieving this goal. In particular, we show that certain types of S-boxes which are seemingly very appealing do not exist. We also show that, contrary to previous perception, techniques such as chopping or repeating permutations do not yield cryptographically strong S-boxes. In addition, we reveal an important combinatorial structure associated with certain quadratic permutations, namely, that the difference distribution table of each differentially 2-uniform quadratic permutation embodies a Hadamard matrix. As an application of this result, we show that chopping a differentially 2-uniform quadratic permutation results in an S-box that is very prone to differential cryptanalytic attack.

Jennifer Seberry, Xian-Mo Zhang, Yuliang Zheng
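
The combinatorial object at the center of the Hadamard-matrix result is the difference distribution table of an S-box: entry (a, b) counts the inputs x with S(x ⊕ a) ⊕ S(x) = b. A small sketch that tabulates it for a sample 4-bit permutation (not one taken from the paper):

```python
# Difference distribution table (DDT) of a small S-box: DDT[a][b] counts the
# inputs x with S(x ^ a) ^ S(x) = b. The 4-bit permutation below is a sample.

SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            table[a][sbox[x ^ a] ^ sbox[x]] += 1
    return table

T = ddt(SBOX)
# Differential uniformity: the largest entry outside the trivial row a = 0.
print("differential uniformity:", max(max(row) for row in T[1:]))
```
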

Secure Computations and Protocols

A Randomness-Rounds Tradeoff in Private Computation

We study the role of randomness in multi-party private computations. In particular, we give several results that prove the existence of a randomness-rounds tradeoff in multi-party private computation of xor. We show that with a single random bit, Θ(n) rounds are necessary and sufficient to privately compute xor of n input bits. With d ≥ 2 random bits, Ω(log n/d) rounds are necessary, and O(log n/log d) are sufficient. More generally, we show that the private computation of a boolean function f, using d ≥ 2 random bits, requires Ω(log S(f)/d) rounds, where S(f) is the sensitivity of f. Using a single random bit, Ω(S(f)) rounds are necessary.

Eyal Kushilevitz, Adi Rosén
Secure Voting Using Partially Compatible Homomorphisms

We introduce a new number-theoretic based protocol for secure electronic voting. Our scheme is much more communication-efficient than previous schemes of its type, and has a much lower round complexity than is currently possible using the anonymous-channel/mixer techniques. Preprocessing allows for nearly all of the communication and computation to be performed before any voting takes place. Unlike the mixer-based protocols, anyone can verify that everyone’s vote has been properly counted. Also, our techniques allow for a wide variety of different schemes. Our protocols are based on families of homomorphic encryptions which have a partial compatibility property, generalizing a method of Benaloh and Yung [2]. We use these functions to generate very simple interactive proofs on encrypted shares. We also develop amortization techniques yielding dramatic efficiency improvements over our simple protocols. Our protocols can be realized by current-generation PCs with access to an electronic bulletin board.

Kazue Sako, Joe Kilian
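
To illustrate homomorphic tallying in general, and not the paper's partially compatible families, here is a sketch using exponential ElGamal: multiplying encrypted ballots yields an encryption of the vote sum, which a single authority decrypts and reads off by a small exhaustive search. Threshold authorities and proofs of ballot validity, both essential in a real protocol, are omitted, and the parameters are toy values.

```python
# Homomorphic tallying sketch with exponential ElGamal (votes encoded as g^v).
# Illustrates the general idea only; the paper's protocol uses families of
# partially compatible homomorphic encryptions, shared authorities and proofs
# of vote validity, none of which appear here. Toy parameters.
import secrets

p = 2**127 - 1                     # toy prime modulus
g = 3                              # hypothetical base element

x = secrets.randbelow(p - 2) + 1   # authority's secret key
h = pow(g, x, p)                   # public key

def encrypt(vote: int):
    r = secrets.randbelow(p - 2) + 1
    return (pow(g, r, p), pow(g, vote, p) * pow(h, r, p) % p)

def combine(cts):
    c1 = c2 = 1
    for a, b in cts:
        c1, c2 = c1 * a % p, c2 * b % p     # product of ciphertexts = sum of votes
    return c1, c2

def decrypt_tally(ct, max_votes=1000):
    c1, c2 = ct
    gm = c2 * pow(c1, p - 1 - x, p) % p     # g^(sum of votes)
    for t in range(max_votes + 1):          # small exhaustive search for the tally
        if pow(g, t, p) == gm:
            return t

ballots = [encrypt(v) for v in (1, 0, 1, 1, 0, 1)]
print(decrypt_tally(combine(ballots)))      # 4
```
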
Maintaining Security in the Presence of Transient Faults

Consider a multiparty system where parties may occasionally be “infected” by malicious, coordinated agents, called viruses. After some time the virus is expelled and the party wishes to regain its security. Since the leaving virus knows the entire contents of the infected party’s memory, a source of “fresh” randomness seems essential for regaining security (e.g., for selecting new keys). However, such an “on-line” source of randomness may not always be readily available. We describe a scheme which, using randomness only at the beginning of the computation, supplies each party with a new pseudorandom number at each round of communication. Each generated number is unpredictable by an adversary controlling the viruses, even if the party was infected in previous rounds. Our scheme is valid as long as in each round there is at least one noninfected party, and some of the communication links are secure. We describe an important application of our scheme to secure sign-on protocols.

Ran Canetti, Amir Herzberg
Backmatter
Metadata
Title
Advances in Cryptology — CRYPTO ’94
Edited by
Yvo G. Desmedt
Copyright year
1994
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-48658-9
Print ISBN
978-3-540-58333-2
DOI
https://doi.org/10.1007/3-540-48658-5