
1997 | Book

Advances in Cryptology — EUROCRYPT ’97

International Conference on the Theory and Application of Cryptographic Techniques, Konstanz, Germany, May 11–15, 1997, Proceedings

Editor: Walter Fumy

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

About this book

EUROCRYPT '97, the 15th annual EUROCRYPT conference on the theory and application of cryptographic techniques, was organized and sponsored by the International Association for Cryptologic Research (IACR). The IACR organizes two series of international conferences each year, the EUROCRYPT meeting in Europe and CRYPTO in the United States. The history of EUROCRYPT started 15 years ago in Germany with the Burg Feuerstein Workshop (see Springer LNCS 149 for the proceedings). It was due to Thomas Beth's initiative and hard work that the 76 participants from 14 countries gathered in Burg Feuerstein for the first open meeting in Europe devoted to modern cryptography. I am proud to have been one of the participants and still fondly remember my first encounters with some of the celebrities in cryptography. Since those early days the conference has been held in a different European location each year (Udine, Paris, Linz, Linköping, Amsterdam, Davos, Houthalen, Aarhus, Brighton, Balatonfüred, Lofthus, Perugia, Saint-Malo, Saragossa) and it has enjoyed steady growth. The IACR has been involved since the second conference (Udine, 1983), and the name EUROCRYPT has been used since the Paris meeting in 1984. For its 15th anniversary, EUROCRYPT finally returned to Germany. The scientific program for EUROCRYPT '97 was put together by an 18-member program committee which considered 104 high-quality submissions. These proceedings contain the revised versions of the 34 papers that were accepted for presentation. In addition, there were two invited talks, by Ernst Bovelander and by Gerhard Frey.

Table of Contents

Frontmatter

Block Ciphers

Two Attacks on Reduced IDEA
Extended Abstract

In 1991 Lai, Massey, and Murphy introduced the IPES (Improved Proposed Encryption Standard), later renamed IDEA (International Data Encryption Algorithm). In this paper we give two new attacks on reduced-round versions of IDEA: a truncated differential attack on IDEA reduced to 3.5 rounds, and a differential-linear attack on IDEA reduced to 3 rounds. The truncated differential attack contains a novel method for determining the secret key.

Johan Borst, Lars R. Knudsen, Vincent Rijmen
Combinatorial Properties of Basic Encryption Operations
Extended Abstract

The basic ingredients of modern fast software block encryption schemes are computer instructions like SHIFT, ADD, XOR etc. We analyze the algebraic structure of different combinations of these cryptographic primitives from a purely combinatorial point of view. Different subsets of such operations yield an interesting variety of permutation groups, e.g. semidirect products, affine linear groups, wreath products, and symmetric groups. As we will show, a simple pair of a SHIFT and an ADD operation is already powerful enough to generate every possible encryption function on its set of input blocks. On the other hand, any possible combination of SHIFT and XOR operations can only produce a subset of at most n · 2^n functions within the symmetric group of order n!. These results are primarily of theoretical interest; their cryptographic application lies in providing practical tools for analyzing the algebraic structure of new block encryption schemes and evaluating their subroutines.
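
As a concrete illustration of the group-closure computations behind such results, here is a small brute-force sketch: it computes the order of the permutation group generated by one rotation (SHIFT) and one constant addition (ADD) on toy 3-bit blocks. The block size and the particular generators are illustrative assumptions, not the paper's exact setting, so the printed order is simply reported next to |S_8| = 8! for comparison rather than asserted.

```python
# Brute-force closure of the permutation group generated by rotate-left-by-1
# (SHIFT) and add-1 mod 8 (ADD) on 3-bit blocks; toy parameters only.
from math import factorial

N_BITS = 3
SIZE = 1 << N_BITS                       # 8 possible input blocks

def rol(x):                              # SHIFT: cyclic left rotation by 1 bit
    return ((x << 1) | (x >> (N_BITS - 1))) & (SIZE - 1)

def add1(x):                             # ADD: addition of the constant 1 mod 2^n
    return (x + 1) & (SIZE - 1)

gens = [tuple(rol(x) for x in range(SIZE)),
        tuple(add1(x) for x in range(SIZE))]

def compose(p, q):                       # apply q first, then p
    return tuple(p[q[x]] for x in range(SIZE))

# Breadth-first closure: all finite products of the generators form the group.
group = set(gens)
frontier = list(gens)
while frontier:
    nxt = []
    for perm in frontier:
        for g in gens:
            c = compose(g, perm)
            if c not in group:
                group.add(c)
                nxt.append(c)
    frontier = nxt

print(f"generated group order: {len(group)}   |S_{SIZE}| = {factorial(SIZE)}")
```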

Thilo Zieschang

Public Key Systems

A New Public-Key Cryptosystem

This paper describes a new public-key cryptosystem where the ciphertext is obtained by multiplying the public keys indexed by the message bits, and the cleartext is recovered by factoring the ciphertext raised to a secret power. Encryption requires four multiplications per byte, and decryption is roughly equivalent to the generation of an RSA signature.
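
A minimal sketch of the encrypt/decrypt mechanics the abstract describes, with deliberately tiny, insecure parameters (the modulus, prime list, and secret exponent are illustrative assumptions, not the paper's proposed parameters):

```python
# Toy multiplicative sketch: publish v_i = p_i^(1/s) mod p for small primes
# p_i and a secret exponent s; the ciphertext is the product of the v_i over
# the set message bits, and c^s mod p factors over the p_i, revealing the bits.
p = 2147483647                   # Mersenne prime 2^31 - 1 (toy-sized, insecure)
small_primes = [2, 3, 5, 7, 11, 13, 17, 19]   # one prime per message bit
s = 5                            # secret exponent; gcd(s, p - 1) == 1
s_inv = pow(s, -1, p - 1)        # 1/s mod (p - 1); needs Python 3.8+
pub = [pow(q, s_inv, p) for q in small_primes]  # published public keys

def encrypt(bits):
    c = 1
    for b, v in zip(bits, pub):
        if b:
            c = c * v % p
    return c

def decrypt(c):
    d = pow(c, s, p)             # equals prod p_i^(b_i), since prod p_i < p
    return [1 if d % q == 0 else 0 for q in small_primes]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert decrypt(encrypt(msg)) == msg
print("recovered:", decrypt(encrypt(msg)))
```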

David Naccache, Jacques Stern
On the Importance of Checking Cryptographic Protocols for Faults
Extended abstract

We present a theoretical model for breaking various cryptographic schemes by taking advantage of random hardware faults. We show how to attack certain implementations of RSA and Rabin signatures. We also show how various authentication protocols, such as Fiat-Shamir and Schnorr, can be broken using hardware faults.
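
The flavor of such fault attacks is easy to demonstrate on CRT-based RSA signatures, where a fault in one half-computation leaks a factor of the modulus through a gcd. A toy sketch of this style of attack, under illustrative assumptions (tiny primes; a single flipped bit stands in for the random hardware fault):

```python
# Fault attack sketch on RSA-CRT: a fault in the mod-q half yields a faulty
# signature s' with s == s' (mod p) but s != s' (mod q), so gcd(s - s', N)
# reveals the secret factor p.
from math import gcd

p, q = 10007, 10009              # toy primes (real moduli are far larger)
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))
m = 123456

def sign_crt(m, fault=False):
    sp = pow(m, d % (p - 1), p)  # half-computation mod p
    sq = pow(m, d % (q - 1), q)  # half-computation mod q
    if fault:
        sq ^= 1                  # induced fault flips a bit in the mod-q half
    h = ((sp - sq) * pow(q, -1, p)) % p   # CRT (Garner) recombination
    return sq + q * h

s_good = sign_crt(m)
s_bad = sign_crt(m, fault=True)
factor = gcd(s_good - s_bad, N)
print("recovered factor:", factor, "== p:", factor == p)
```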

Dan Boneh, Richard A. DeMillo, Richard J. Lipton
Lattice Attacks on NTRU

NTRU is a new public key cryptosystem proposed at Crypto '96 by Hoffstein, Pipher and Silverman of the Mathematics Department of Brown University. It attracted considerable attention, and is being advertised over the Internet by NTRU Cryptosystems. Its security is based on the difficulty of analyzing the result of polynomial arithmetic modulo two unrelated moduli, and its correctness is based on clustering properties of the sums of random variables. In this paper, we apply new lattice basis reduction techniques to cryptanalyze the scheme, recovering either the original secret key or an alternative secret key which is equally useful in decoding the ciphertexts.

Don Coppersmith, Adi Shamir

Protocols

Kleptography: Using Cryptography Against Cryptography

The notion of a Secretly Embedded Trapdoor with Universal Protection (SETUP) has been recently introduced. In this paper we extend the study of stealing information securely and subliminally from black-box cryptosystems. The SETUP mechanisms presented here, in contrast with previous ones, leak secret key information without using an explicit subliminal channel. This extends this area of threats, which we call “kleptography”. We introduce new definitions of SETUP attacks (strong, regular, and weak SETUPs) and the notion of m out of n leakage bandwidth. We show a strong attack which is based on the discrete logarithm problem. We then show how to use this SETUP to compromise the Diffie-Hellman key exchange protocol. We also strengthen the previous SETUP against RSA. The strong attacks employ the discrete logarithm as a one-way function (assuring what is called “forward secrecy”), public-key cryptography, and a technique which we call probabilistic bias removal.

Adam Young, Moti Yung
Fast and Secure Immunization Against Adaptive Man-in-the-Middle Impersonation

We present a simple method for constructing identification schemes resilient against impersonation and man-in-the-middle attacks. Though zero-knowledge or witness hiding protocols are known to withstand attacks of the first kind, all such protocols previously proposed suffer from a weakness observed by Bengio et al.: a malicious verifier may simply act as a moderator between the prover and yet another verifier, thus enabling the malicious verifier to pass as the prover. We exhibit a general class of identification schemes that can be efficiently and securely transformed into identification schemes withstanding an adaptive man-in-the-middle attacker. The complexity of the resulting (witness hiding) schemes is roughly twice that of the originals. Basically, any three-move, public coin identification scheme that is zero-knowledge against the honest verifier and that is secure against passive impersonation attacks is eligible for our transformation. This indicates that we need only seemingly weak cryptographic intractability assumptions to construct a practical identification scheme resisting adaptive man-in-the-middle impersonation attacks. Moreover, the required primitive protocols can be constructed efficiently under the factoring or discrete logarithm assumptions.

Ronald Cramer, Ivan Damgård
Anonymous Fingerprinting

Fingerprinting schemes deter people from illegally redistributing digital data by enabling the original merchant of the data to identify the original buyer of a redistributed copy. Recently, asymmetric fingerprinting schemes were introduced. Here, only the buyer knows the fingerprinted copy after a sale, and if the merchant finds this copy somewhere, he obtains a proof that it was the copy of this particular buyer. A problem with all previous fingerprinting schemes arises in the context of electronic marketplaces where untraceable electronic cash offers buyers privacy similar to that when buying books or music in normal shops with normal cash. Now buyers would have to identify themselves solely for the purpose of fingerprinting. To remedy this, we introduce and construct anonymous asymmetric fingerprinting schemes, where buyers can buy information anonymously, but can nevertheless be identified if they redistribute this information illegally. A subresult of independent interest is an asymmetric fingerprinting protocol with reasonable collusion tolerance and 2-party trials, which have several practical advantages over the previous 3-party trials. Our results can also be applied to so-called traitor tracing, the equivalent of fingerprinting for broadcast encryption.

Birgit Pfitzmann, Michael Waidner
A Secure and Optimally Efficient Multi-Authority Election Scheme

In this paper we present a new multi-authority secret-ballot election scheme that guarantees privacy, universal verifiability, and robustness. It is the first scheme for which the performance is optimal in the sense that time and communication complexity are minimal both for the individual voters and for the authorities. An interesting property of the scheme is that the time and communication complexity for the voter is independent of the number of authorities. A voter simply posts a single encrypted message accompanied by a compact proof that it contains a valid vote. Our result is complementary to the result of Cramer, Franklin, Schoenmakers, and Yung in the sense that in their scheme the work for voters is linear in the number of authorities but can be instantiated to yield information-theoretic privacy, while in our scheme the voter's effort is independent of the number of authorities but always provides computational privacy protection. We will also point out that the majority of proposed voting schemes provide computational privacy only (often without even considering the lack of information-theoretic privacy), and that our new scheme is by far superior to those schemes.

Ronald Cramer, Rosario Gennaro, Berry Schoenmakers

Key Escrow

Binding ElGamal: A Fraud-Detectable Alternative to Key-Escrow Proposals

We propose a concept for a worldwide information security infrastructure that protects law-abiding citizens, but not criminals, even if the latter use it fraudulently (i.e. when not complying with the agreed rules). It can be seen as a middle course between the inflexible but fraud-resistant KMI proposal [8] and the flexible but non-fraud-resistant concept used in TIS-CKE [2]. Our concept consists of adding binding data to the latter concept; this will not prevent fraud by criminals, but at least makes it detectable by third parties without the need for any secret information. In [19], we depict a worldwide framework in which this concept could present a security tool that is flexible enough to be incorporated in any national cryptography policy, on both the domestic and foreign use of cryptography. Here, we present a construction of binding data for ElGamal-type public key encryption schemes. As a side result, we show that a particular simplification in a multiuser version of ElGamal does not affect its security.

Eric R. Verheul, Henk C. A. van Tilborg
The GCHQ Protocol and Its Problems

The UK government is fielding an architecture for secure electronic mail based on the NSA’s Message Security Protocol, with a key escrow scheme inspired by Diffie-Hellman. Attempts have been made to have this protocol adopted by other governments and in various domestic applications. The declared policy goal is to entrench commercial key escrow while simultaneously creating a large enough market that software houses will support the protocol as a standard feature rather than charging extra for it.We describe this protocol and show that, like the ‘Clipper’ proposal of a few years ago, it has a number of problems. It provides the worst of both secret and public key systems, without delivering the advantages of either; it does not support nonrepudiation; and there are serious problems with the replacement of compromised keys, the protection of security labels, and the support of complex or dynamic administrative structures.

Ross Anderson, Michael Roe

Hash-Functions

Bucket Hashing with a Small Key Size

In this paper we consider very fast evaluation of strongly universal hash functions, or equivalently, authentication codes. We show how it is possible to modify some known families of hash functions into a form such that the evaluation is similar to “bucket hashing”, a technique for very fast hashing introduced by Rogaway. Rogaway’s bucket hash family has a huge key size, which for common parameter choices can be more than a hundred thousand bits. The proposed hash families have a key size that is close to the key size of the theoretically best known constructions, typically a few hundred bits, and the evaluation has a time complexity that is similar to bucket hashing.
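
For background, a classical strongly universal hash family of the kind the abstract refers to is the affine family h_{a,b}(x) = (a·x + b) mod p; a minimal sketch follows. This is generic Wegman-Carter-style background with illustrative parameters, not the paper's bucket-hashing construction:

```python
# Strongly universal hashing as a one-time authentication code: with a fresh
# uniform key (a, b) per message, the tag forgery probability is about 1/P.
import secrets

P = (1 << 61) - 1                # Mersenne prime modulus (illustrative)

def keygen():
    return secrets.randbelow(P), secrets.randbelow(P)

def tag(key, x):                 # x: message encoded as an integer < P
    a, b = key
    return (a * x + b) % P

key = keygen()
print("tag:", tag(key, 123456789))
```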

Thomas Johansson
A New Paradigm for Collision-Free Hashing: Incrementality at Reduced Cost

We present a simple, new paradigm for the design of collision-free hash functions. Any function emanating from this paradigm is incremental. (This means that if a message x which I have previously hashed is modified to x′, then rather than having to re-compute the hash of x′ from scratch, I can quickly “update” the old hash value to the new one, in time proportional to the amount of modification made in x to get x′.) Also, any function emanating from this paradigm is parallelizable, which is useful for hardware implementation. We derive several specific functions from our paradigm. All use a standard hash function, assumed ideal, and some algebraic operations. The first function, MuHASH, uses one modular multiplication per block of the message, making it reasonably efficient and significantly faster than previous incremental hash functions. Its security is proven based on the hardness of the discrete logarithm problem. A second function, AdHASH, is even faster, using additions instead of multiplications, with security proven given either that approximation of the length of shortest lattice vectors is hard or that the weighted subset sum problem is hard. A third function, LtHASH, is a practical variant of recent lattice-based functions, with security proven based, again, on the hardness of shortest lattice vector approximation.
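
A minimal sketch of the multiplicative, incremental idea as the abstract describes it for MuHASH: hash each (index, block) pair with a standard hash function modeled as ideal, multiply the results modulo a prime, and update in place when one block changes. SHA-256 and the toy modulus are illustrative stand-ins, not the paper's parameters:

```python
# MuHASH-style incremental hashing sketch (toy modulus, insecure sizes).
import hashlib

P = 2**127 - 1                   # toy prime modulus (Mersenne prime)

def h(i, block):                 # ideal hash stand-in, mapped into Z_P
    data = i.to_bytes(8, "big") + block
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def muhash(blocks):
    acc = 1
    for i, b in enumerate(blocks):
        acc = acc * h(i, b) % P
    return acc

def update(old_hash, i, old_block, new_block):
    # Incremental update: divide out the old block's factor, multiply in the new.
    return old_hash * pow(h(i, old_block), -1, P) * h(i, new_block) % P

blocks = [b"alpha", b"beta", b"gamma"]
H = muhash(blocks)
H2 = update(H, 1, b"beta", b"BETA")
assert H2 == muhash([b"alpha", b"BETA", b"gamma"])
print("incremental update matches full recomputation")
```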

Mihir Bellare, Daniele Micciancio

Information Theory

Smooth Entropy and Rényi Entropy

The notion of smooth entropy allows a unifying, generalized formulation of privacy amplification and entropy smoothing. Smooth entropy is a measure for the number of almost uniform random bits that can be extracted from a random source by probabilistic algorithms. It is known that the Rényi entropy of order at least 2 of a random variable is a lower bound for its smooth entropy. On the other hand, an assumption about Shannon entropy (which is Rényi entropy of order 1) is too weak to guarantee any non-trivial amount of smooth entropy. In this work we close the gap between Rényi entropy of order 1 and 2. In particular, we show that Rényi entropy of order α for any 1 < α < 2 is a lower bound for smooth entropy, up to a small parameter depending on α, the alphabet size and the failure probability. The results have applications in cryptography for unconditionally secure protocols such as quantum key agreement, key agreement from correlated information, oblivious transfer, and bit commitment.
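
A quick numerical illustration of the quantities involved: Rényi entropy H_α is non-increasing in the order α, so the orders 1 < α < 2 studied here sit between Shannon entropy (α = 1) and collision entropy (α = 2). A small sketch with an arbitrary example distribution:

```python
# Renyi entropy H_alpha for a discrete distribution; the printed values
# decrease as alpha grows, so H_2 <= H_alpha <= H_1 for 1 < alpha < 2.
from math import log2

def renyi(probs, alpha):
    if alpha == 1.0:             # Shannon entropy as the limit alpha -> 1
        return -sum(p * log2(p) for p in probs if p > 0)
    return log2(sum(p**alpha for p in probs)) / (1 - alpha)

dist = [0.5, 0.25, 0.125, 0.125]           # example distribution
for a in (1.0, 1.25, 1.5, 1.75, 2.0):
    print(f"H_{a:<4} = {renyi(dist, a):.4f}")
```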

Christian Cachin
Information-Theoretically Secure Secret-Key Agreement by NOT Authenticated Public Discussion

All information-theoretically secure key agreement protocols (e.g. based on quantum cryptography or on noisy channels) described in the literature are secure only against passive adversaries in the sense that they assume the existence of an authenticated public channel. The goal of this paper is to investigate information-theoretic security even against active adversaries with complete control over the communication channel connecting the two parties who want to agree on a secret key. Several impossibility results are proved and some scenarios are characterized in which secret-key agreement secure against active adversaries is possible. In particular, when each of the parties, including the adversary, can observe a sequence of random variables that are correlated between the parties, the rate at which key agreement against active adversaries is possible is characterized completely: it is either 0 or equal to the rate achievable against passive adversaries, and the condition for distinguishing between the two cases is given.

Ueli Maurer

Stream Ciphers

Linear Statistical Weakness of Alleged RC4 Keystream Generator

A keystream generator known as RC4 is analyzed by the linear model approach. It is shown that the second binary derivative of the least significant bit output sequence is correlated to 1 with a correlation coefficient close to 15 · 2^(-3n), where n is the variable word size of RC4. The output sequence length required for detecting the linear statistical weakness may be realistic in high speed applications if n ≤ 8. The result can be used to distinguish RC4 from other keystream generators and to determine the unknown parameter n, as well as for plaintext uncertainty reduction if n is small.
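
A rough empirical version of this statistic can be run on a small-word generalization of RC4. The sketch below assumes a textbook RC4 key schedule and output rule simply reduced mod 2^n with n = 3, and merely compares the measured correlation of the second binary derivative (s_t ⊕ s_{t+2}) of the lsb sequence with the predicted magnitude 15 · 2^(-3n); the sample sizes and key handling are illustrative:

```python
# Empirical bias estimate for word-size-n RC4 (here n = 3, table of 2^3 words).
import random

N_BITS = 3
M = 1 << N_BITS

def rc4_words(key, nwords):
    S = list(range(M))
    j = 0
    for i in range(M):                       # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % M
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(nwords):                  # pseudo-random generation
        i = (i + 1) % M
        j = (j + S[i]) % M
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % M])
    return out

random.seed(1)
ones = total = 0
for _ in range(2000):                        # many short keystreams, fresh keys
    key = [random.randrange(M) for _ in range(M)]
    lsb = [w & 1 for w in rc4_words(key, 64)]
    d2 = [lsb[t] ^ lsb[t + 2] for t in range(len(lsb) - 2)]  # 2nd derivative
    ones += sum(d2)
    total += len(d2)
corr = 2 * ones / total - 1                  # empirical correlation to 1
print(f"empirical: {corr:+.4f}   predicted magnitude: {15 * 2**(-3*N_BITS):.4f}")
```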

Jovan Dj. Golić
Cryptanalysis of Alleged A5 Stream Cipher

A binary stream cipher, known as A5, consisting of three short LFSRs of total length 64 that are mutually clocked in the stop/go manner, is cryptanalyzed. It is allegedly used in the GSM standard for digital cellular mobile telephones. Very short keystream sequences are generated from different initial states obtained by combining a 64-bit secret session key and a known 22-bit public key. A basic divide-and-conquer attack recovering the unknown initial state from a known keystream sequence is first introduced. It exploits the specific clocking rule used and has average computational complexity around 2^40. A time-memory trade-off attack based on the birthday paradox, which yields the unknown internal state at a known time for a known keystream sequence, is then pointed out. The attack is successful if T · M ≥ 2^63.32, where T and M are the required computational time and memory (in 128-bit words), respectively. The precomputation time is O(M) and the required number of known keystream sequences generated from different public keys is about T/102. For example, one can choose T ≈ 2^27.67 and M ≈ 2^35.65. To obtain the secret session key from the determined internal state, a so-called internal state reversion attack is proposed and analyzed by the theory of critical and subcritical branching processes.

Jovan Dj. Golić

Complexity Theory

Lower Bounds for Discrete Logarithms and Related Problems

This paper considers the computational complexity of the discrete logarithm and related problems in the context of “generic algorithms”, that is, algorithms which do not exploit any special properties of the encodings of group elements, other than the property that each group element is encoded as a unique binary string. Lower bounds on the complexity of these problems are proved that match the known upper bounds: any generic algorithm must perform Ω(p^(1/2)) group operations, where p is the largest prime dividing the order of the group. Also, a new method for correcting a faulty Diffie-Hellman oracle is presented.
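
The matching O(p^(1/2)) upper bound is achieved by generic algorithms such as Shanks's baby-step giant-step. A minimal sketch over a toy prime field (parameters illustrative; the recovered exponent is checked by re-exponentiation):

```python
# Baby-step giant-step: finds x with g^x = y (mod p) in about sqrt(order)
# group operations, matching the Omega(p^(1/2)) generic lower bound.
from math import isqrt

def bsgs(g, y, p, order):
    m = isqrt(order) + 1
    baby = {pow(g, j, p): j for j in range(m)}     # baby steps: g^j -> j
    step = pow(g, -m, p)                           # giant step: multiply by g^(-m)
    gamma = y
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * step % p
    return None

p = 104729                    # the 10000th prime; toy-sized on purpose
g, x = 2, 70462
y = pow(g, x, p)
x_found = bsgs(g, y, p, p - 1)
assert pow(g, x_found, p) == y                     # verify by re-exponentiation
print("found exponent:", x_found)
```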

Victor Shoup
Stronger Security Proofs for RSA and Rabin Bits

The RSA and Rabin encryption functions are respectively defined as E_N(x) = x^e mod N and E_N(x) = x^2 mod N, where N is a product of two large random primes p, q and e is relatively prime to φ(N). We present a much simpler and stronger proof of the result of Alexi, Chor, Goldreich and Schnorr [ACGS88] that the following problems are equivalent by probabilistic polynomial time reductions: (1) given E_N(x), find x; (2) given E_N(x), predict the least-significant bit of x with success probability 1/2 + 1/poly(n), where N has n bits. The new proof consists of a more efficient algorithm for inverting the RSA/Rabin function with the help of an oracle that predicts the least-significant bit of x. It yields provable security guarantees for RSA message bits and for the RSA random number generator for moduli N of practical size.

R. Fischlin, C. P. Schnorr
Round-Optimal Zero-Knowledge Arguments Based on Any One-Way Function

We fill a gap in the theory of zero-knowledge protocols by presenting NP-arguments that achieve negligible error probability and computational zero-knowledge in four rounds of interaction, assuming only the existence of a one-way function. This result is optimal in the sense that four rounds and a one-way function are each individually necessary to achieve a negligible error zero-knowledge argument for NP.

Mihir Bellare, Markus Jakobsson, Moti Yung
Efficient Cryptographic Protocols Based on Noisy Channels

The Wire-Tap Channel of Wyner [19] shows that a Binary Symmetric Channel (BSC) may be used as a basis for exchanging a secret key, in a cryptographic scenario of two honest people facing an eavesdropper. Later, Crépeau and Kilian [9] showed how a BSC may be used to implement Oblivious Transfer in a cryptographic scenario of two possibly dishonest people facing each other. Unfortunately, this result is rather impractical as it requires Ω(n^11) bits to be transmitted through the BSC to accomplish a single OT. The current paper provides efficient protocols to achieve the cryptographic primitives of Bit Commitment and Oblivious Transfer based on the existence of a Binary Symmetric Channel. Our protocols respectively require sending O(n) and O(n^3) bits through the BSC. These results are based on a technique known as Generalized Privacy Amplification [1] that allows two people to extract secret information from partially compromised data.

Claude Crépeau
Rapid Demonstration of Linear Relations Connected by Boolean Operators

Consider a polynomial-time prover holding a set of secrets. We describe how the prover can rapidly demonstrate any satisfiable boolean formula for which the atomic propositions are relations that are linear in the secrets, without revealing more information about the secrets than what is conveyed by the formula itself. Our protocols support many proof modes, and are as secure as the Discrete Logarithm assumption or the RSA/factoring assumption.

Stefan Brands
Oblivious Transfers and Privacy Amplification

Assume 𝒜 owns two secret k-bit strings. She is willing to disclose one of them to ℬ, at his choosing, provided he does not learn anything about the other string. Conversely, ℬ does not want 𝒜 to learn which secret he chose to learn. A protocol for the above task is said to implement One-out-of-two String Oblivious Transfer, denoted (²₁)-OT^k. This primitive is particularly useful in a variety of cryptographic settings. An apparently simpler task corresponds to the case k = 1 of two one-bit secrets: this is known as One-out-of-two Bit Oblivious Transfer, denoted (²₁)-OT. We address the question of reducing (²₁)-OT^k to (²₁)-OT. This question is not new: it was introduced in 1986. However, most solutions until now have implicitly or explicitly depended on the notion of self-intersecting codes. It can be proved that this restriction makes it asymptotically impossible to implement (²₁)-OT^k with fewer than about 3.5277k instances of (²₁)-OT. The current paper introduces the idea of using privacy amplification as the underlying technique to reduce (²₁)-OT^k to (²₁)-OT. This allows for more efficient solutions at the cost of an exponentially small probability of failure: it is sufficient to use slightly more than 2k instances of (²₁)-OT in order to implement (²₁)-OT^k. Moreover, we show that privacy amplification allows for the efficient implementation of (²₁)-OT^k from generalized versions of (²₁)-OT that would not have been suitable for the earlier techniques based on self-intersecting codes. An application of this more general reduction is given.

Gilles Brassard, Claude Crépeau

Implementation

SHA: A Design for Parallel Architectures?

To enhance system performance, computer architectures tend to incorporate an increasing number of parallel execution units. This paper shows that the new generation of MD4-based customized hash functions (RIPEMD-128, RIPEMD-160, SHA-1) contains much more software parallelism than any of these computer architectures is currently able to provide. It is conjectured that the parallelism found in SHA-1 is a design principle. The critical path of SHA-1 is half as long as that of its closest contender, RIPEMD-160, but exploiting it would require a 7-way multiple-issue architecture. It will also be shown that, due to the organization of RIPEMD-160 in two independent lines, it will probably be easier for future architectures to exploit its software parallelism.

Antoon Bosselaers, René Govaerts, Joos Vandewalle
Fast Arithmetic Architectures for Public-Key Algorithms over Galois Fields GF((2^n)^m)

This contribution describes a new class of arithmetic architectures for Galois fields GF(2^k). The main applications of the architecture are public-key systems which are based on the discrete logarithm problem for elliptic curves. The architectures use a representation of the field GF(2^k) as GF((2^n)^m), where k = n · m. The approach explores bit-parallel arithmetic in the subfield GF(2^n) and serial processing for the extension field arithmetic. This mixed parallel-serial (hybrid) approach can lead to very fast implementations. The principle of this approach was initially suggested by Mastrovito. As the core module, a hybrid multiplier is introduced and several optimizations are discussed. We provide two different approaches to squaring which, in conjunction with the multiplier, yield fast exponentiation architectures. The hybrid architectures are capable of exploring the time-space trade-off paradigm in a flexible manner. In particular, the number of clock cycles for one field multiplication, which is the atomic operation in most public-key schemes, can be reduced by a factor of n compared to all other known realizations. The acceleration is achieved at the cost of an increased computational complexity. We describe a proof-of-concept implementation of an ASIC for exponentiation in GF((2^n)^m), m variable.
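
The subfield operation at the heart of such architectures is multiplication in GF(2^n): a carry-less polynomial multiply followed by reduction modulo an irreducible polynomial. A small software model for GF(2^4) follows; the field size and irreducible polynomial are illustrative, and the paper's architecture realizes this operation in bit-parallel hardware rather than software:

```python
# Software model of GF(2^4) multiplication: carry-less multiply, then
# reduction modulo the irreducible polynomial x^4 + x + 1 (0b10011).
N = 4
IRRED = 0b10011

def gf2n_mul(a, b):
    acc = 0
    for i in range(N):                        # schoolbook carry-less multiply
        if (b >> i) & 1:
            acc ^= a << i
    for i in range(2 * N - 2, N - 1, -1):     # polynomial reduction
        if (acc >> i) & 1:
            acc ^= IRRED << (i - N)
    return acc

# Sanity check: every nonzero element has a multiplicative inverse.
for a in range(1, 1 << N):
    assert any(gf2n_mul(a, b) == 1 for b in range(1, 1 << N))
print("GF(2^4) nonzero elements form a multiplicative group")
```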

Christof Paar, Pedro Soria-Rodriguez
Finding Good Random Elliptic Curves for Cryptosystems Defined over 𝔽_{2^n}

One of the main difficulties in implementing cryptographic schemes based on elliptic curves defined over finite fields is the necessary computation of the cardinality of these curves. In the case of finite fields 𝔽_{2^n}, recent theoretical breakthroughs yield a significant speed-up of the computations. After describing some of these ideas in the first part of this paper, we show that our current implementation runs from 2 up to 10 times faster than what was done previously. In the second part, we exhibit a slight change to Schoof's algorithm to choose curves with a number of points “nearly” prime, and so construct cryptosystems based on random elliptic curves instead of specific curves, as used to be the case.

Reynald Lercier

Authentication

Incremental Cryptography and Memory Checkers

We introduce the relationship between incremental cryptography and memory checkers. We present an incremental message authentication scheme based on the XOR MACs which supports insertion, deletion and other single-block operations. Our scheme takes only a constant number of pseudorandom function evaluations for each update step and produces smaller authentication codes than the tree scheme presented in [BGG95]. Furthermore, it is secure against message substitution attacks, where the adversary is allowed to tamper with messages before update steps, making it applicable to virus protection. From this scheme we derive memory checkers for data structures based on lists. Conversely, we use a lower bound for memory checkers to show that so-called message substitution detecting schemes produce signatures or authentication codes with size proportional to the message length.

Marc Fischlin
Almost k-wise Independent Sample Spaces and Their Cryptologic Applications

An almost k-wise independent sample space is a small subset of m-bit sequences in which any k bits are “almost independent”. We show that this idea has close relationships with useful cryptologic notions such as multiple authentication codes (multiple A-codes), almost strongly universal hash families and almost k-resilient functions. We use almost k-wise independent sample spaces to construct new efficient multiple A-codes such that the number of key bits grows linearly as a function of k (here k is the number of messages to be authenticated with a single key). This improves on the construction of Atici and Stinson [2], in which the number of key bits is Ω(k^2). We also introduce the concept of ε-almost k-resilient functions and give a construction that has parameters superior to k-resilient functions. Finally, new bounds (necessary conditions) are derived for almost k-wise independent sample spaces, multiple A-codes and balanced ε-almost k-resilient functions.

Kaoru Kurosawa, Thomas Johansson, Douglas Stinson

Boolean Functions

More Correlation-Immune and Resilient Functions over Galois Fields and Galois Rings

We show that the usual constructions of bent functions, when they are suitably modified, allow constructions of correlation-immune and resilient functions over Galois fields and, in some cases, over Galois rings.

Claude Carlet
Design of SAC/PC(l) of Order k Boolean Functions and Three Other Cryptographic Criteria

A Boolean function f satisfies PC(l) of order k if f(x) ⊕ f(x ⊕ α) is balanced for any α such that 1 ≤ W(α) ≤ l even if any k input bits are kept constant, where W(α) denotes the Hamming weight of α. This paper shows the first design method for such functions which provides deg(f) ≥ 3. Moreover, we show how to design “balanced” such functions. High nonlinearity and large degree are also obtained. Further, we present balanced SAC(k) functions which achieve the maximum degree. Finally, we extend our technique to vector output Boolean functions.
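
The criterion is easy to test exhaustively for small input sizes. The sketch below checks PC(l) of order k by brute force on the 4-variable bent function x1x2 ⊕ x3x4, which is a standard example satisfying PC(4) of order 0; the example function and parameters are illustrative, not the paper's constructions (which additionally achieve deg(f) ≥ 3 and balancedness):

```python
# Brute-force PC(l) of order k checker: f(x) XOR f(x XOR a) must be balanced
# for every direction a with 1 <= W(a) <= l, even with any k input bits fixed.
from itertools import combinations

N = 4

def f(x):                                     # x is an N-bit integer
    b = [(x >> i) & 1 for i in range(N)]
    return (b[0] & b[1]) ^ (b[2] & b[3])      # bent function x1*x2 XOR x3*x4

def satisfies_pc(fn, l, k):
    for a in range(1, 1 << N):
        if bin(a).count("1") > l:
            continue
        for fixed in combinations(range(N), k):       # which bits are frozen
            if any((a >> i) & 1 for i in fixed):
                continue                              # frozen bits cannot flip
            for vals in range(1 << k):                # every frozen assignment
                deriv = [fn(x) ^ fn(x ^ a)
                         for x in range(1 << N)
                         if all((x >> i) & 1 == (vals >> j) & 1
                                for j, i in enumerate(fixed))]
                if sum(deriv) * 2 != len(deriv):      # not balanced
                    return False
    return True

print("PC(4), order 0:", satisfies_pc(f, 4, 0))       # expect True (bent)
print("PC(1), order 1:", satisfies_pc(f, 1, 1))
```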

Kaoru Kurosawa, Takashi Satoh

Signatures

Distributed “Magic Ink” Signatures

The physical analog of “blind signatures” of Chaum is a document and a carbon paper put into an envelope, allowing the signer to transfer his signature onto the document by signing on the envelope, and without opening it. Only the receiver can present the signed document while the signer cannot “unblind” its signature and get the document signed. When an authority signs “access tokens”, “electronic coins”, “credentials” or “passports”, it makes sense to assume that whereas the users can typically enjoy the disassociation of the blindly signed token and the token itself (i.e. anonymity and privacy), there may be cases which require “unblinding” of a signature by the signing authority itself (to establish what is known as an “audit trail” and to “revoke anonymity” in case of criminal activity). This leads us to consider a new notion of signature with the following physical parallel: The signer places a piece of paper with a carbon paper on top in an envelope as before (but the document on the paper is not yet written). The receiver then writes the document on the envelope using magic ink, e.g., ink that is only visible after being “developed”. Due to the carbon copy, this results in the document being written in visible ink on the internal paper. Then, the signer signs the envelope (so its signature on the document is made available). The receiver gets the internal paper and the signer retains the envelope with the magic ink copy. Should the signer need to unblind the document, he can develop the magic ink and get the document copy on the envelope. Note that the signing is not blinded forever to the signer. We call this new type of signature a magic ink signature. We present an efficient method for distributively generating magic ink signatures, requiring a quorum of servers to produce a signature and a (possibly different) quorum to unblind a signature. The scheme is robust, and the unblinding is guaranteed to work even if a set of up to a threshold of signers refuses to cooperate, or actively cheats during either the signing or the unblinding protocol. We base our specific implementation on the DSS algorithm. Our construction demonstrates the extended power of distributed signing.

Markus Jakobsson, Moti Yung
Efficient and Generalized Group Signatures

The concept of group signatures was introduced by Chaum et al. at Eurocrypt '91. It allows a member of a group to sign messages anonymously on behalf of the group. In case of a later dispute, a designated group manager can revoke the anonymity and identify the originator of a signature. In this paper we propose a new efficient group signature scheme. Furthermore, we present a model and the first realization of generalized group signatures. Such a scheme allows the definition of coalitions of group members that are able to sign on the group's behalf.

Jan Camenisch
Collision-Free Accumulators and Fail-Stop Signature Schemes Without Trees

One-way accumulators, introduced by Benaloh and de Mare, can be used to accumulate a large number of values into a single one, which can then be used to authenticate every input value without the need to transmit the others. However, the one-way property alone is not sufficient for all applications. In this paper, we generalize the definition of accumulators and define and construct a collision-free subtype. As an application, we construct a fail-stop signature scheme in which many one-time public keys are accumulated into one short public key. In contrast to previous constructions with tree authentication, the length of both this public key and the signatures can be independent of the number of messages that can be signed.
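
A toy model of the underlying one-way accumulator of Benaloh and de Mare, which this paper generalizes: values are accumulated into an RSA-style exponentiation, and a witness for one value is the accumulation of all the others. Parameters below are tiny and purely illustrative (a real modulus must be a large RSA modulus with unknown factorization):

```python
# Accumulator sketch: acc = g^(product of values) mod N; the witness for one
# value is g raised to the product of all the other values.
N = 10007 * 10009                # toy modulus; its factors must stay unknown
g = 3                            # public base

values = [5, 11, 17, 23]         # items to accumulate (e.g. hashed keys)

def accumulate(vals):
    acc = g
    for v in vals:
        acc = pow(acc, v, N)     # exponents multiply, so order is irrelevant
    return acc

acc = accumulate(values)

# Witness for values[1]: accumulate everything except that value.
wit = accumulate([v for i, v in enumerate(values) if i != 1])

# Verification: raising the witness to the value reproduces the accumulator.
assert pow(wit, values[1], N) == acc
print("membership of", values[1], "verified against accumulator", acc)
```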

Niko Barić, Birgit Pfitzmann
Selective Forgery of RSA Signatures Using Redundancy

We show the weakness of several RSA signature schemes using redundancy (i.e. completing the message to be signed with some additional bits which are fixed or message-dependent) by exhibiting chosen-message attacks based on the multiplicative property of the RSA signature function. Our attacks, which largely extend those of De Jonge and Chaum [DJC], make extensive use of an affine variant of Euclid's algorithm due to Okamoto and Shiraishi [OS]. When the redundancy consists of appending fixed bits to the message m to be signed (more generally, when the redundancy takes the form of an affine function of m), our attack is valid if the redundancy is less than half the length of the public modulus. When the redundancy consists of appending to m the remainder of m modulo some fixed value (or, more generally, any function of this remainder), our attack is valid if the redundancy is less than half the length of the public modulus minus the length of the remainder. We successfully apply our attack to a scheme proposed for discussion within ISO.
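
The multiplicative property at the root of these attacks is easy to exhibit with textbook RSA, where sig(m1) · sig(m2) ≡ sig(m1 · m2 mod N) (mod N). A toy demonstration with illustrative parameters (redundancy schemes exist precisely to destroy this structure):

```python
# Homomorphic property of raw RSA signatures: signatures on two chosen
# messages combine into a valid signature on their product mod N.
p, q = 10007, 10009              # toy primes
N = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

def sign(m):
    return pow(m, d, N)          # "textbook" RSA, no padding or redundancy

m1, m2 = 4242, 1337
forged = sign(m1) * sign(m2) % N
assert forged == sign(m1 * m2 % N)
print("sig(m1)*sig(m2) is a valid signature on m1*m2 mod N")
```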

Marc Girault, Jean-François Misarsky
Backmatter
Metadata
Title
Advances in Cryptology — EUROCRYPT ’97
Editor
Walter Fumy
Copyright Year
1997
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-69053-5
Print ISBN
978-3-540-62975-7
DOI
https://doi.org/10.1007/3-540-69053-0