2003 | Book

Advances in Cryptology — EUROCRYPT 2003

International Conference on the Theory and Applications of Cryptographic Techniques, Warsaw, Poland, May 4–8, 2003 Proceedings

Edited by: Eli Biham

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Cryptanalysis I

Cryptanalysis of the EMD Mode of Operation

In this paper, we study the security of the Encrypt-Mask-Decrypt mode of operation, also called EMD, which was recently proposed for applications such as disk-sector encryption. The EMD mode transforms an ordinary block cipher operating on n-bit blocks into a tweakable block cipher operating on large blocks of size nm bits. We first show that EMD is not a secure tweakable block cipher and then describe efficient attacks in the context of disk-sector encryption. We note that the parallelizable variant of EMD, called EME, which was proposed at the same time, is also subject to these attacks. In the course of developing one of the attacks, we revisit Wagner’s generalized birthday algorithm and show that in some special cases it performs much more efficiently than in the general case. Due to the large scope of applicability of this algorithm, even when restricted to these special cases, we believe that this result is of independent interest.

Antoine Joux
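
The special cases revisited above concern Wagner’s k-list algorithm; for orientation, here is a minimal Python sketch of its basic 4-list instance (all names and parameters are ours, chosen toy-small): find one element from each of four lists XORing to zero, by first merging pairs of lists on their low-order bits.

    import os
    from collections import defaultdict

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def four_list_birthday(L1, L2, L3, L4, k_bytes=1):
        """Wagner's 4-list generalized birthday sketch: find x1, x2, x3, x4
        (one per list) with x1 ^ x2 ^ x3 ^ x4 = 0, by joining pairs of lists
        on their low k_bytes and then looking for a full collision."""
        def merge(A, B):
            idx = defaultdict(list)
            for b in B:
                idx[b[-k_bytes:]].append(b)
            # Keep XORs whose low k_bytes vanish, remembering the operands.
            return [(xor(a, b), a, b) for a in A for b in idx[a[-k_bytes:]]]

        L12, L34 = merge(L1, L2), merge(L3, L4)
        table = {s: (c, d) for s, c, d in L34}
        for s, a, b in L12:
            if s in table:                 # x1^x2 = x3^x4, so the 4-XOR is 0
                c, d = table[s]
                return a, b, c, d
        return None                        # unlucky draw; rerun with fresh lists

    # Toy run: 24-bit values, lists of size 2^8 = 2^(24/3), so one solution
    # is expected on average (the call may occasionally return None).
    lists = [[os.urandom(3) for _ in range(256)] for _ in range(4)]
    print(four_list_birthday(*lists))

With lists of size about $2^{n/3}$ this finds a solution in time and memory about $2^{n/3}$, versus the $2^{n/2}$ of a plain two-list birthday attack.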
On the Optimality of Linear, Differential, and Sequential Distinguishers

In this paper, we consider the statistical decision processes behind linear and differential cryptanalysis. By applying techniques and concepts of statistical hypothesis testing, we describe precisely the shape of optimal linear and differential distinguishers and improve known results of Vaudenay concerning their asymptotic behaviour. Furthermore, we formalize the concept of a “sequential distinguisher” and illustrate potential applications of such tools in various statistical attacks.

Pascal Junod
A Toolbox for Cryptanalysis: Linear and Affine Equivalence Algorithms

This paper presents two algorithms for solving the linear and the affine equivalence problems for arbitrary permutations (S-boxes). For a pair of n × n-bit permutations the complexity of the linear equivalence algorithm (LE) is $O(n^3 2^n)$. The affine equivalence algorithm (AE) has complexity $O(n^3 2^{2n})$. The algorithms are efficient and allow one to study linear and affine equivalences for bijective S-boxes of all popular sizes (LE is efficient up to n ≤ 32). Using these tools, new equivalent representations are found for a variety of ciphers: Rijndael, DES, Camellia, Serpent, Misty, Kasumi, Khazad, etc. The algorithms are furthermore extended to the case of non-bijective n-to-m-bit S-boxes with a small value of |n − m| and to the case of almost equivalent S-boxes. The algorithms also provide new attacks on a generalized Even-Mansour scheme. Finally, the paper defines a new problem of S-box decomposition in terms of Substitution-Permutation Networks (SPN) with layers of smaller S-boxes. Simple information-theoretic bounds are proved for such decompositions.

Alex Biryukov, Christophe De Cannière, An Braeken, Bart Preneel

Secure Multi-party Computation I

Two-Threshold Broadcast and Detectable Multi-party Computation

Classical distributed protocols like broadcast or multi-party computation provide security as long as the number of malicious players f is bounded by some given threshold t, i.e., f ≤ t. If f exceeds t then these protocols are completely insecure. We relax this binary concept to the notion of two-threshold security: such protocols guarantee full security as long as f ≤ t for some small threshold t, and still provide some degraded security when t < f ≤ T for a larger threshold T. In particular, we propose the following problems.

Broadcast with Extended Validity: standard broadcast is achieved when f ≤ t. When t < f ≤ T, then either broadcast is achieved, or every player learns that there are too many faults. Furthermore, when the sender is honest, broadcast is always achieved.

Broadcast with Extended Consistency: standard broadcast is achieved when f ≤ t. When t < f ≤ T, then either broadcast is achieved, or every player learns that there are too many faults. Furthermore, the players agree on whether or not broadcast is achieved.

Detectable Multi-Party Computation: secure computation is achieved when f ≤ t. When t < f ≤ T, then either the computation is secure, or all players detect that there are too many faults and abort.

The above protocols for n players exist if and only if t = 0 or t + 2T < n.

Matthias Fitzi, Martin Hirt, Thomas Holenstein, Jürg Wullschleger
On the Limitations of Universally Composable Two-Party Computation without Set-up Assumptions

The recently proposed universally composable (UC) security framework for analyzing security of cryptographic protocols provides very strong security guarantees. In particular, a protocol proven secure in this framework is guaranteed to maintain its security even when deployed in arbitrary multi-party, multi-protocol, multi-execution environments. Protocols for securely carrying out essentially any cryptographic task in a universally composable way exist, both in the case of an honest majority (in the plain model, i.e., without set-up assumptions) and in the case of no honest majority (in the common reference string model). However, in the plain model, little was known for the case of no honest majority and, in particular, for the important special case of two-party protocols. We study the feasibility of universally composable two-party function evaluation in the plain model. Our results show that very few functions can be computed in this model so as to provide the UC security guarantees. Specifically, for the case of deterministic functions, we provide a full characterization of the functions computable in this model. (Essentially, these are the functions that depend on at most one of the parties’ inputs, and furthermore are “efficiently invertible” in a sense defined within.) For the case of probabilistic functions, we show that the only functions computable in this model are those where one of the parties can essentially uniquely determine the joint output.

Ran Canetti, Eyal Kushilevitz, Yehuda Lindell
Fair Secure Two-Party Computation
Extended Abstract

We demonstrate a transformation of Yao’s protocol for secure two-party computation into a fair protocol in which neither party gains any substantial advantage by terminating the protocol prematurely. The transformation adds steps before and after the execution of the original protocol, but does not otherwise change it, and does not use a trusted third party. It is based on the use of gradual release timed commitments, a new variant of timed commitments, and on a novel use of blind signatures for verifying that the committed values are correct.

Benny Pinkas

Invited Talk I

Facts and Myths of Enigma: Breaking Stereotypes

In spite of a relatively large number of publications about the breaking of Enigma by the Allies before and during World War II, this subject remains relatively unknown not only to the general public, but also to people professionally involved in cryptological research. For example, the story of Enigma is rarely a part of a modern textbook on cryptology or a modern course on cryptography and network security. There exist multiple reasons for this situation. First, there are still a few unresolved issues, resulting from conflicting reports, the lack of reliable sources, and the long period required for declassifying documents related to any cryptological activity during the war. Secondly, the issue is highly political, and there is little consensus in weighing the contributions of all involved countries. Thirdly, many contemporary cryptologists honestly believe that there is little to learn from the analysis of old cryptosystems, because of the tremendous progress in the theory and practice of cryptography and the little similarity between old and modern ciphers. In this paper we confront these opinions by presenting a look at the current state of knowledge about the cryptological methods and devices used to break Enigma. We introduce all the major players involved in these activities, and we make an effort to weigh their original contributions. Finally, we show that the story of Enigma can still provide contemporary cryptographers with many useful lessons about organizing and building any large-scale security system.

Kris Gaj, Arkadiusz Orłowski

Zero-Knowledge Protocols

Resettable Zero-Knowledge in the Weak Public-Key Model

A new public-key model for resettable zero-knowledge (rZK) protocols, an extension and generalization of the upper-bounded public-key (UPK) model introduced by Micali and Reyzin [EuroCrypt’01, pp. 373–393], is introduced and named the weak public-key (WPK) model. The motivations and applications of the WPK model are justified in the distributed smart-card/server setting, where it seems preferable in practice, especially for e-commerce over the Internet. In this WPK model a 3-round (optimal) black-box resettable zero-knowledge argument with concurrent soundness for $\mathcal{NP}$ is presented, assuming the security of RSA with large exponents against subexponential-time adversaries. Our result improves Micali and Reyzin’s resettable zero-knowledge argument with concurrent soundness for $\mathcal{NP}$ in the UPK model. Note that although Micali and Reyzin’s protocol satisfies concurrent soundness in the UPK model, it does not satisfy even sequential soundness in our WPK model. Our protocol works in a somewhat “parallel repetition” manner to reduce the error probability, and the black-box zero-knowledge simulator works in strict polynomial time rather than expected polynomial time. The critical tools used are: verifiable random functions, introduced by Micali, Rabin and Vadhan [FOCS’99, pp. 120–130]; zaps, presented by Dwork and Naor [FOCS’00, pp. 283–293]; and complexity leveraging, introduced by Canetti, Goldreich, Goldwasser and Micali [STOC’00, pp. 235–244].

Yunlei Zhao, Xiaotie Deng, C. H. Lee, Hong Zhu
Simulatable Commitments and Efficient Concurrent Zero-Knowledge

We define and construct simulatable commitments. These are commitment schemes such that there is an efficient interactive proof system to show that a given string c is a legitimate commitment on a given value v, and, furthermore, this proof is efficiently simulatable given any proper pair (c, v). Our construction is provably secure based on the Decisional Diffie-Hellman (DDH) assumption. Using simulatable commitments, we show how to efficiently transform any public-coin honest-verifier zero-knowledge proof system into a proof system that is concurrent zero-knowledge with respect to any (possibly cheating) verifier via black-box simulation. By efficient we mean that our transformation incurs only an additive overhead (both in terms of the number of rounds and the computational and communication complexity of each round), and the additive term is close to optimal (for black-box simulation): only ω(log n) additional rounds, and ω(log n) additional public-key operations for each round of the original protocol, where n is a security parameter, and ω(log n) can be any superlogarithmic function of n independent of the complexity of the original protocol. The transformation preserves (up to negligible additive terms) the soundness and completeness error probabilities, and the new proof system is proved secure based on the DDH assumption, in the standard model of computation, i.e., no random oracles, shared random strings, or public-key infrastructure is assumed.

Daniele Micciancio, Erez Petrank
Simulation in Quasi-Polynomial Time, and Its Application to Protocol Composition

We propose a relaxation of zero-knowledge in which the simulator is allowed to run in quasi-polynomial time. We show that protocols satisfying this notion can be constructed in settings where the standard definition is too restrictive. Specifically, we construct constant-round straight-line concurrent quasi-polynomial-time simulatable arguments and show that such arguments can be used in advanced composition operations without any set-up assumptions. Our protocols rely on slightly strong but standard assumptions (namely, the existence of one-to-one one-way functions secure against subexponential circuits).

Rafael Pass
Strengthening Zero-Knowledge Protocols Using Signatures

Recently there has been interest in zero-knowledge protocols with stronger properties, such as concurrency, unbounded simulation soundness, non-malleability, and universal composability. In this paper, we show a novel technique for converting a large class of existing honest-verifier zero-knowledge protocols into ones with these stronger properties in the common reference string model. More precisely, our technique utilizes a signature scheme existentially unforgeable against adaptive chosen-message attacks, and transforms any Σ-protocol (which is honest-verifier zero-knowledge) into an unbounded simulation-sound concurrent zero-knowledge protocol. We also introduce Ω-protocols, a variant of Σ-protocols for which our technique further achieves the properties of non-malleability and/or universal composability. In addition to its conceptual simplicity, a main advantage of this new technique over previous ones is that it avoids the Cook-Levin theorem, which tends to be rather inefficient. Indeed, our technique allows for very efficient instantiation based on the security of some efficient signature schemes and standard number-theoretic assumptions. For instance, one instantiation of our technique yields a universally composable zero-knowledge protocol under the Strong RSA assumption, incurring an overhead of a small constant number of exponentiations, plus the generation of two signatures.

Juan A. Garay, Philip MacKenzie, Ke Yang
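
For concreteness, here is a minimal sketch of a Σ-protocol of the kind such transformations take as input: Schnorr’s three-move (commit, challenge, respond) proof of knowledge of a discrete logarithm. The group parameters below are toy-sized illustrations, not the paper’s instantiation.

    import secrets

    # Schnorr Sigma-protocol: prove knowledge of x with y = g^x (mod p).
    p = 2**127 - 1                    # a Mersenne prime; illustrative only
    g = 3
    x = secrets.randbelow(p - 1)      # prover's witness
    y = pow(g, x, p)                  # public statement

    r = secrets.randbelow(p - 1)      # prover: random commitment exponent
    a = pow(g, r, p)                  # first move: commitment
    c = secrets.randbelow(2**64)      # second move: verifier's challenge
    z = (r + c * x) % (p - 1)         # third move: prover's response

    # Verifier accepts iff g^z = a * y^c (mod p), which holds since
    # g^(r + c*x) = g^r * (g^x)^c.
    assert pow(g, z, p) == (a * pow(y, c, p)) % p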

Foundations and Complexity Theoretic Security

Nearly One-Sided Tests and the Goldreich-Levin Predicate

We study statistical tests with binary output that rarely output one, which we call nearly one-sided statistical tests. We provide an efficient reduction establishing improved security for the Goldreich-Levin hard-core bit against nearly one-sided tests. The analysis is extended to prove the security of the Blum-Micali pseudo-random generator combined with the Goldreich-Levin bit. Furthermore, applications where nearly one-sided tests naturally occur are discussed. This includes cryptographic constructions that replace real randomness with pseudo-randomness and where the adversary’s success can easily be verified. In particular, this applies to signature schemes that utilize a pseudo-random generator as a provider of randomness.

Gustav Hast
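
For reference, a minimal sketch of the objects named above: the Goldreich-Levin predicate (the GF(2) inner product of the input with a random string) plugged into the Blum-Micali generator. The one-way function and all parameters below are stand-ins for illustration.

    def gl_bit(x: int, r: int) -> int:
        """Goldreich-Levin predicate: inner product <x, r> over GF(2),
        i.e. the parity of the bitwise AND of x and r."""
        return bin(x & r).count("1") & 1

    def blum_micali_gl(f, x0: int, r: int, n_bits: int) -> list:
        """Blum-Micali generator with the GL hard-core bit: iterate the
        (assumed one-way) function f, emitting one bit per state.
        (Expositions differ on output order; this is one common variant.)"""
        x, out = x0, []
        for _ in range(n_bits):
            out.append(gl_bit(x, r))
            x = f(x)
        return out

    # Toy run; squaring modulo a fixed odd number stands in for a one-way
    # function (hypothetical parameters, illustration only).
    N = 2**61 - 1
    print(blum_micali_gl(lambda x: pow(x, 2, N), x0=0xDEADBEEF, r=0x5A5A5A5A, n_bits=16))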
Efficient and Non-malleable Proofs of Plaintext Knowledge and Applications
Extended Abstract

We describe efficient protocols for non-malleable (interactive) proofs of plaintext knowledge for the RSA, Rabin, Paillier, and El Gamal encryption schemes. We also highlight some important applications of these protocols:

Chosen-ciphertext-secure, interactive encryption. In settings where both parties are on-line, an interactive encryption protocol may be used. We construct chosen-ciphertext-secure interactive encryption schemes based on any of the schemes above. In each case, the improved scheme requires only a small overhead beyond the original, semantically-secure scheme.

Password-based authenticated key exchange. We derive efficient protocols for password-based key exchange in the public-key model [28], [5] whose security may be based on any of the cryptosystems mentioned above.

Deniable authentication. Our techniques give the first efficient constructions of deniable authentication protocols based on, e.g., the RSA or computational Diffie-Hellman assumption.

Of independent interest, we consider the concurrent composition of proofs of knowledge; this is essential to prove security of our protocols when run in an asynchronous, concurrent environment.

Jonathan Katz

Public Key Encryption

A Public Key Encryption Scheme Based on the Polynomial Reconstruction Problem

The Polynomial Reconstruction problem (PR) was introduced in 1999 as a new hard problem. Several cryptographic primitives based on this problem have been constructed; for instance, Naor and Pinkas have proposed a protocol for oblivious polynomial evaluation. It has since been studied from the point of view of robustness, and several important properties have been discovered and proved by Kiayias and Yung. Furthermore, the same authors constructed a symmetric cipher based on the PR problem. In the present paper, we use the published security results and construct a new public-key encryption scheme based on the hardness of the Polynomial Reconstruction problem. The scheme presented is the first public-key encryption scheme based on this problem. We also present some attacks, discuss their performance, and state the size of the parameters required to reach the desired security level. In conclusion, this leads to a cryptosystem where the cost of encryption and decryption per bit is low, and where the public key is kept relatively small.

Daniel Augot, Matthieu Finiasz
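
For reference, the standard statement of the underlying problem (our notation, parameters generic):

$$ \textbf{PR}(n,k,t):\ \text{given } (x_1,y_1),\dots,(x_n,y_n)\in\mathbb{F}^2,\ \text{find all } p\in\mathbb{F}[x]\ \text{with}\ \deg p < k\ \text{and}\ \#\{\,i : p(x_i)=y_i\,\}\ \ge\ t. $$

Hardness is conjectured when t is too small for Reed-Solomon list decoding to apply.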
A Simpler Construction of CCA2-Secure Public-Key Encryption under General Assumptions

In this paper we present a simpler construction of a public-key encryption scheme that achieves adaptive chosen ciphertext security (CCA2), assuming the existence of trapdoor permutations. We build on previous works of Sahai and De Santis et al. and construct a scheme that we believe is the easiest to understand to date. In particular, it is only slightly more involved than the Naor-Yung encryption scheme that is secure against passive chosen-ciphertext attacks (CCA1). We stress that the focus of this paper is on simplicity only.

Yehuda Lindell
A Forward-Secure Public-Key Encryption Scheme

Cryptographic computations are often carried out on insecure devices for which the threat of key exposure represents a serious and realistic concern. In an effort to mitigate the damage caused by exposure of secret data (e.g., keys) stored on such devices, the paradigm of forward security was introduced. In a forward-secure scheme, secret keys are updated at regular periods of time; furthermore, exposure of a secret key corresponding to a given time period does not enable an adversary to “break” the scheme (in the appropriate sense) for any prior time period. A number of constructions of forward-secure digital signature schemes, key-exchange protocols, and symmetric-key schemes are known. We present the first construction of a (non-interactive) forward-secure public-key encryption scheme. Our main construction achieves security against chosen-plaintext attacks under the decisional bilinear Diffie-Hellman assumption in the standard model. It is practical, and all complexity parameters grow at most logarithmically with the total number of time periods. The scheme can also be extended to achieve security against chosen-ciphertext attacks.

Ran Canetti, Shai Halevi, Jonathan Katz
Certificate-Based Encryption and the Certificate Revocation Problem

We introduce the notion of certificate-based encryption. In this model, a certificate — or, more generally, a signature — acts not only as a certificate but also as a decryption key. To decrypt a message, a keyholder needs both its secret key and an up-to-date certificate from its CA (or a signature from an authorizer). Certificate-based encryption combines the best aspects of identity-based encryption (implicit certification) and public key encryption (no escrow). We demonstrate how certificate-based encryption can be used to construct an efficient PKI requiring less infrastructure than previous proposals, including Micali’s Novomodo, Naor-Nissim and Aiello-Lodha-Ostrovsky.

Craig Gentry

New Primitives

CAPTCHA: Using Hard AI Problems for Security

We introduce captcha, an automated test that humans can pass, but current computer programs can’t pass: any program that has high success over a captcha can be used to solve an unsolved Artificial Intelligence (AI) problem. We provide several novel constructions of captchas. Since captchas have many applications in practical security, our approach introduces a new class of hard problems that can be exploited for security purposes. Much like research in cryptography has had a positive impact on algorithms for factoring and discrete log, we hope that the use of hard AI problems for security purposes allows us to advance the field of Artificial Intelligence. We introduce two families of AI problems that can be used to construct captchas and we show that solutions to such problems can be used for steganographic communication. CAPTCHAs based on these AI problem families, then, imply a win-win situation: either the problems remain unsolved and there is a way to differentiate humans from computers, or the problems are solved and there is a way to communicate covertly on some channels.

Luis von Ahn, Manuel Blum, Nicholas J. Hopper, John Langford
Concealment and Its Applications to Authenticated Encryption

We introduce a new cryptographic primitive we call concealment, which is related to, but quite different from, the notion of commitment. A concealment is a publicly known randomized transformation which, on input m, outputs a hider h and a binder b. Together, h and b allow one to recover m, but separately, (1) the hider h reveals “no information” about m, while (2) the binder b can be “meaningfully opened” by at most one hider h. While setting b = m, h = ∅ is a trivial concealment, the challenge is to make |b| ≪ |m|, which we call a “non-trivial” concealment. We show that non-trivial concealments are equivalent to the existence of collision-resistant hash functions. Moreover, our construction of concealments is extremely simple, optimal, and yet very general, giving rise to a multitude of efficient implementations. We show that concealments have natural and important applications in the area of authenticated encryption. Specifically, let $\mathcal{AE}$ be an authenticated encryption scheme (either public- or symmetric-key) designed to work on short messages. We show that concealments are exactly the right abstraction allowing one to use $\mathcal{AE}$ for encrypting long messages. Namely, to encrypt “long” m, one uses a concealment scheme to get h and b, and outputs the authenticated ciphertext $\langle \mathcal{AE}(b), h \rangle$. More surprisingly, the above paradigm leads to a very simple and general solution to the problem of remotely keyed (authenticated) encryption (RKAE) [12, 13]. In this problem, one wishes to split the task of high-bandwidth authenticated encryption between a secure but low-bandwidth, computationally limited device and an insecure but computationally powerful host. We give formal definitions for RKAE, which we believe are simpler and more natural than all the previous definitions. We then show that our composition paradigm satisfies our (very strong) definition. Namely, for authenticated encryption, the host simply sends a short value b to the device (which stores the actual secret key for $\mathcal{AE}$), gets back $\mathcal{AE}(b)$, and outputs $\langle \mathcal{AE}(b), h \rangle$ (authenticated decryption is similar). Finally, we also observe that the particular schemes of [13, 17] are all special examples of our general paradigm.

Yevgeniy Dodis, Jee Hea An
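
To make the composition concrete, here is a minimal sketch of a non-trivial concealment and its use for long-message authenticated encryption. The construction shown (one-time encryption for the hider, a key plus a collision-resistant hash for the binder) is our folklore-style illustration of the equivalence stated above, not necessarily the paper’s exact scheme.

    import hashlib, os

    def conceal(m: bytes):
        """Concealment sketch: hider h hides m; short binder b opens it."""
        tau = os.urandom(32)                                  # one-time key
        pad = hashlib.shake_256(tau).digest(len(m))           # keystream from tau
        h = bytes(x ^ y for x, y in zip(m, pad))              # hider: "no info" about m
        b = tau + hashlib.sha256(h).digest()                  # binder: |b| << |m|
        return h, b

    def open_concealment(h: bytes, b: bytes) -> bytes:
        tau, digest = b[:32], b[32:]
        assert hashlib.sha256(h).digest() == digest, "binder does not match hider"
        pad = hashlib.shake_256(tau).digest(len(h))
        return bytes(x ^ y for x, y in zip(h, pad))

    # Long-message AE per the abstract: apply any short-message authenticated
    # encryption AE to b alone and send <AE(b), h>.
    h, b = conceal(b"a long disk sector ..." * 100)
    assert open_concealment(h, b).startswith(b"a long disk sector")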

Cryptanalysis II

Predicting the Shrinking Generator with Fixed Connections

We propose a novel distinguishing attack on the shrinking generator with known feedback polynomial for the generating LFSR. The attack can, for example, reliably distinguish a shrinking generator with a weight-4 feedback polynomial of degree as large as 10000, using $2^{32}$ output bits. As the feedback polynomial of an arbitrary LFSR is known to have a polynomial multiple of low weight, our distinguisher applies to arbitrary shrunken LFSRs of moderate length. The analysis can also be used to predict the distribution of blocks in the generated keystream.

Patrik Ekdahl, Willi Meier, Thomas Johansson
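
For orientation, here is a minimal sketch of the target construction itself, the shrinking generator: the generating LFSR A is decimated by a selection LFSR S, keeping A’s output bit exactly when S outputs 1. Register sizes and taps below are toy values, nothing like the degree-10000 polynomials above.

    def lfsr_stream(state, taps, nbits):
        """Fibonacci LFSR over GF(2): state is a list of bits; taps are the
        positions fed back (fixing the, here low-weight, feedback polynomial)."""
        out = []
        for _ in range(nbits):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t]
            state = [fb] + state[:-1]
        return out

    def shrinking_generator(a_state, a_taps, s_state, s_taps, n, budget=10**5):
        """Keep the generating LFSR A's bit whenever the selection LFSR S
        outputs 1; discard it otherwise."""
        a = lfsr_stream(a_state, a_taps, budget)
        s = lfsr_stream(s_state, s_taps, budget)
        return [ai for ai, si in zip(a, s) if si == 1][:n]

    # Toy parameters (illustrative, far below cryptographic sizes).
    print(shrinking_generator([1, 0, 1, 1, 0, 1, 0, 1], [0, 5],
                              [0, 1, 1, 0, 1, 0, 0, 1], [0, 3], n=32))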
Algebraic Attacks on Stream Ciphers with Linear Feedback

A classical construction of stream ciphers is to combine several LFSRs and a highly non-linear Boolean function f. Their security is usually analysed in terms of correlation attacks, which can be seen as solving a system of multivariate linear equations, true with some probability. At ICISC’02 this approach was extended to systems of higher-degree multivariate equations, giving an attack in $2^{92}$ for Toyocrypt, a Cryptrec submission. In this attack the key is found by solving an overdefined system of algebraic equations. In this paper we show how to substantially lower the degree of these equations by multiplying them by well-chosen multivariate polynomials. Thus we are able to break Toyocrypt in $2^{49}$ CPU clocks, with only 20 Kbytes of keystream, the fastest attack proposed so far. We also successfully attack the Nessie submission LILI-128, within $2^{57}$ CPU clocks (not the fastest attack known). In general, we show that if the Boolean function uses only a small subset (e.g., 10) of state/LFSR bits, the cipher can be broken, whatever Boolean function is used (worst case). Our new general algebraic attack breaks stream ciphers satisfying all the previously known design criteria in at most the square root of the complexity of the previously known generic attack.

Nicolas T. Courtois, Willi Meier
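
The degree-lowering step fits in one displayed equation (our notation): write the keystream as $z_t = f(L^t s)$, where s is the initial state and L is the linear LFSR update; if a low-degree g is chosen so that $h = f \cdot g$ also has low degree d, then every output bit gives a degree-d equation in s:

$$ z_t = f\left(L^{t}s\right) \;\Longrightarrow\; z_t\, g\left(L^{t}s\right) \;=\; f\left(L^{t}s\right) g\left(L^{t}s\right) \;=\; h\left(L^{t}s\right). $$

Collecting enough such equations and linearizing the monomials of degree at most d yields the overdefined systems whose solution gives the complexities quoted above.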

Elliptic Curves Cryptography

Counting Points on Elliptic Curves over Finite Fields of Small Characteristic in Quasi Quadratic Time

Let p be a small prime and $q = p^n$. Let E be an elliptic curve over $\mathbb{F}_q$. We propose an algorithm which computes, without any preprocessing, the j-invariant of the canonical lift of E at a cost of O(log n) times the cost of computing a power of the lift of the Frobenius. Let μ be a constant such that the product of two n-bit integers can be carried out in $O(n^\mu)$ bit operations; this yields an algorithm to compute the number of points on elliptic curves which reaches, at the expense of an $O(n^{5/2})$ space complexity, a theoretical time complexity bound equal to $O(n^{\max(1.19,\mu)+\mu+1/2} \log n)$. When the field has a Gaussian Normal Basis of small type, we obtain furthermore an algorithm with $O(\log(n)\, n^{2\mu})$ time and $O(n^2)$ space complexities. From a practical viewpoint, the corresponding algorithm is particularly well suited to implementations. We illustrate this with a 100002-bit computation.

Reynald Lercier, David Lubicz
The GHS Attack Revisited

We generalize the Weil descent construction of the GHS attack to arbitrary Artin-Schreier extensions. We give a formula for the characteristic polynomial of Frobenius of the obtained curves and prove that the large cyclic factor of the input elliptic curve is not contained in the kernel of the composition of the conorm and norm maps. As an application we almost square the number of elliptic curves which succumb to the basic GHS attack, thereby weakening curves over $\mathbb{F}_{2^{155}}$ further. We also discuss other possible extensions or variations of the GHS attack and conclude that they are not likely to yield further improvements.

Florian Hess
Improved Algorithms for Efficient Arithmetic on Elliptic Curves Using Fast Endomorphisms

In most algorithms involving elliptic curves, the most expensive part consists in computing multiples of points. This paper investigates how to extend the τ-adic expansion from Koblitz curves to a larger class of curves, defined over a prime field and having an efficiently computable endomorphism φ, in order to perform point multiplication with efficiency similar to Solinas’ approach presented at CRYPTO ’97. Furthermore, many elliptic curve cryptosystems require the computation of $k_0 P + k_1 Q$. Following the work of Solinas on the Joint Sparse Form, we introduce the notion of the φ-Joint Sparse Form, which combines the advantages of a φ-expansion with the additional speedup of the Joint Sparse Form. We also present an efficient algorithm to obtain the φ-Joint Sparse Form. Then, the double exponentiation can be done using the φ endomorphism instead of doubling, resulting in an average of l applications of φ and l/2 additions, where l is the size of the $k_i$’s. This results in an important speed-up when the computation of φ is particularly effective, as in the case of Koblitz curves.

Mathieu Ciet, Tanja Lange, Francesco Sica, Jean-Jacques Quisquater
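
As background, a minimal sketch of the baseline these techniques improve on: Straus-Shamir simultaneous double-and-add for $k_0P + k_1Q$, which shares one doubling per bit and adds P, Q, or P+Q as the bit pair dictates. The φ-Joint Sparse Form above further reduces the additions and, on suitable curves, replaces the doublings by the cheap endomorphism φ. The group below is a toy stand-in, not a curve.

    def double_scalar_mult(k0, k1, P, Q, add, dbl, identity):
        """Straus-Shamir simultaneous double-and-add for k0*P + k1*Q:
        one shared doubling per bit, plus one addition of P, Q, or P+Q
        whenever the current bit pair of (k0, k1) is nonzero."""
        table = {(1, 0): P, (0, 1): Q, (1, 1): add(P, Q)}
        R = identity
        for i in range(max(k0.bit_length(), k1.bit_length()) - 1, -1, -1):
            R = dbl(R)
            bits = ((k0 >> i) & 1, (k1 >> i) & 1)
            if bits != (0, 0):
                R = add(R, table[bits])
        return R

    # Sanity check in the additive group Z/1009 (a stand-in for a curve group).
    p = 1009
    add, dbl = (lambda x, y: (x + y) % p), (lambda x: (2 * x) % p)
    assert double_scalar_mult(123, 456, 7, 11, add, dbl, 0) == (123 * 7 + 456 * 11) % p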

Digital Signatures

A Signature Scheme as Secure as the Diffie-Hellman Problem

We show a signature scheme whose security is tightly related to the Computational Diffie-Hellman (CDH) assumption in the Random Oracle Model. Existing discrete-log based signature schemes, such as ElGamal, DSS, and Schnorr signatures, either require non-standard assumptions, or their security is only loosely related to the discrete logarithm (DL) assumption using Pointcheval and Stern’s “forking” lemma. Since the hardness of the CDH problem is widely believed to be closely related to the hardness of the DL problem, the signature scheme presented here offers better security guarantees than existing discrete-log based signature schemes. Furthermore, the new scheme has comparable efficiency to existing schemes.The signature scheme was previously proposed in the cryptographic literature on at least two occasions. However, no security analysis was done, probably because the scheme was viewed as a slight modification of Schnorr signatures. In particular, the scheme’s tight security reduction to CDH has remained unnoticed until now. Interestingly, this discrete-log based signature scheme is similar to the trapdoor permutation based PSS signatures proposed by Bellare and Rogaway, and has a tight reduction for a similar reason.

Eu-Jin Goh, Stanisław Jarecki
Aggregate and Verifiably Encrypted Signatures from Bilinear Maps

An aggregate signature scheme is a digital signature that supports aggregation: Given n signatures on n distinct messages from n distinct users, it is possible to aggregate all these signatures into a single short signature. This single signature (and the n original messages) will convince the verifier that the n users did indeed sign the n original messages (i.e., user i signed message $M_i$ for i = 1, ..., n). In this paper we introduce the concept of an aggregate signature, present security models for such signatures, and give several applications for aggregate signatures. We construct an efficient aggregate signature from a recent short signature scheme based on bilinear maps due to Boneh, Lynn, and Shacham. Aggregate signatures are useful for reducing the size of certificate chains (by aggregating all signatures in the chain) and for reducing message size in secure routing protocols such as SBGP. We also show that aggregate signatures give rise to verifiably encrypted signatures. Such signatures enable the verifier to test that a given ciphertext C is the encryption of a signature on a given message M. Verifiably encrypted signatures are used in contract-signing protocols. Finally, we show that similar ideas can be used to extend the short signature scheme to give simple ring signatures.

Dan Boneh, Craig Gentry, Ben Lynn, Hovav Shacham
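
The aggregation itself is a one-line equation (our notation): with a bilinear map $e$, generator $g$, public keys $v_i = g^{x_i}$, and BLS signatures $\sigma_i = H(M_i)^{x_i}$, the aggregate and its verification are

$$ \sigma \;=\; \prod_{i=1}^{n} \sigma_i, \qquad e(\sigma, g) \;\overset{?}{=}\; \prod_{i=1}^{n} e\!\left(H(M_i),\, v_i\right), $$

which holds by bilinearity, since $e(\sigma, g) = \prod_i e(H(M_i)^{x_i}, g) = \prod_i e(H(M_i), g^{x_i})$.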
Hypercubic Lattice Reduction and Analysis of GGH and NTRU Signatures

In this paper, we introduce a new lattice reduction technique applicable to the narrow but important class of hypercubic lattices ($L \cong \mathbb{Z}^N$). Hypercubic lattices arise during transcript analysis of certain GGH and NTRUSign signature schemes. After a few thousand signatures, key recovery amounts to discovering a hidden unitary matrix U from its Gram matrix $G = UU^T$. This case of the Gram Matrix Factorization Problem is equivalent to finding the shortest vectors in the hypercubic lattice $L_G$ defined by the quadratic form G. Our main result is a polynomial-time reduction to a conjecturally easier problem: the Lattice Distinguishing Problem. Additionally, we propose a heuristic solution to this distinguishing problem with a distributed computation of many “relatively short” vectors.

Michael Szydlo

Invited Talk II

Why Provable Security Matters?

Recently, methods from provable security, developed within the research community over the last twenty years, have been extensively used to support emerging standards. This in turn has led researchers as well as practitioners to raise some concerns about the methodology. Should provable security be restricted to the standard computational model, or can it rely on the so-called random oracle model? In the latter case, what is the practical meaning of security estimates obtained using this model? Also, the fact that proofs themselves need time to be validated through public discussion was somehow overlooked. Building on two case studies, we discuss these concerns. One example covers the public-key encryption formatting scheme OAEP, originally proposed in [3]. The other comes from the area of signature schemes and is related to the security proof of ESIGN [43]. Both examples show that provable security is more subtle than it first appears.

Jacques Stern

Cryptanalysis III

On the Security of RDSA

A variant of Schnorr’s signature scheme called RDSA was proposed by I. Biehl, J. Buchmann, S. Hamdy and A. Meyer for use in finite abelian groups of unknown order, such as the class group of imaginary quadratic orders. We describe in this paper a total break of RDSA under a plain known-message attack for the parameters that were originally proposed. It recovers the secret signature key from the knowledge of fewer than 10 signatures of known messages, with very low computational complexity. We also compare a repaired version of RDSA with the GPS scheme, another Schnorr variant with similar properties, and we show that GPS should be preferred for most applications.

Pierre-Alain Fouque, Guillaume Poupard
Cryptanalysis of the Public-Key Encryption Based on Braid Groups

At CRYPTO 2000, a new public-key encryption scheme based on braid groups was introduced. This paper demonstrates how to solve its underlying problem using the Burau representation. By this method, we show that the private key can be recovered from the public key for several parameters with significant probability in a reasonable time. Our attack can be mounted directly on the revised scheme mentioned at ASIACRYPT 2001 as well. On the other hand, we give a new requirement for parameters secure against our attack, which more or less conflicts with the requirement against brute-force attack.

Eonkyung Lee, Je Hong Park
A Theoretical Treatment of Related-Key Attacks: RKA-PRPs, RKA-PRFs, and Applications

We initiate a theoretical investigation of the popular block-cipher design goal of security against “related-key attacks” (RKAs). We begin by introducing definitions for the concepts of PRPs and PRFs secure against classes of RKAs, each such class being specified by an associated set of “related-key deriving (RKD) functions.” Then, for some such classes of attacks, we prove impossibility results, showing that no block cipher can resist these attacks, while, for other, related classes of attacks that include popular targets in the block-cipher community, we prove possibility results that provide theoretical support for the view that security against them is achievable. Finally, we prove security of various block-cipher-based constructs that use related keys, including a tweakable block cipher given in [14].

Mihir Bellare, Tadayoshi Kohno

Key Exchange

Provably Secure Threshold Password-Authenticated Key Exchange
Extended Abstract

We present two protocols for threshold password-authenticated key exchange. In this model, the password is not stored in a single authenticating server but rather shared among a set of n servers, so that an adversary can learn the password only by breaking into t+1 of them. The protocols require n > 3t servers to work. The goal is to protect the password against hacker attacks that break into the authenticating server and steal password information. All known centralized password authentication schemes are susceptible to such an attack. Ours are the first protocols which are provably secure in the standard model (i.e., no random oracles are used for the proof of security). Moreover, our protocols are reasonably efficient and implementable in practice. In particular, a goal of the design was to avoid costly zero-knowledge proofs in order to keep interaction to a minimum.

Mario Di Raimondo, Rosario Gennaro
A Framework for Password-Based Authenticated Key Exchange
Extended Abstract

In this paper we present a general framework for password-based authenticated key exchange protocols, in the common reference string model. Our protocol is actually an abstraction of the key exchange protocol of Katz et al. and is based on the recently introduced notion of smooth projective hashing by Cramer and Shoup. We gain a number of benefits from this abstraction. First, we obtain a modular protocol that can be described using just three high-level cryptographic tools. This allows a simple and intuitive understanding of its security. Second, our proof of security is significantly simpler and more modular. Third, we are able to derive analogues to the Katz et al. protocol under additional cryptographic assumptions. Specifically, in addition to the DDH assumption used by Katz et al., we obtain protocols under both the Quadratic and N-Residuosity assumptions. In order to achieve this, we construct new smooth projective hash functions.

Rosario Gennaro, Yehuda Lindell
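
Briefly, the key tool named above, in our paraphrase of Cramer and Shoup’s notion: a smooth projective hash family consists of keyed functions $H_k : X \to G$ together with a projection key $\alpha(k)$ such that

$$ x \in L:\ H_k(x)\ \text{is efficiently computable from } \alpha(k)\ \text{and a witness for } x; \qquad x \notin L:\ H_k(x)\ \text{is statistically close to uniform even given } \alpha(k). $$

The first property lets honest parties agree on the hash value; the second is what hides the password from everyone else.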

Information Theoretic Cryptography

The Security of Many-Round Luby-Rackoff Pseudo-Random Permutations

Luby and Rackoff showed how to construct a (super-)pseudo-random permutation $\{0,1\}^{2n} \to \{0,1\}^{2n}$ from some number r of pseudo-random functions $\{0,1\}^n \to \{0,1\}^n$. Their construction, motivated by DES, consists of a cascade of r Feistel permutations. A Feistel permutation for a pseudo-random function f is defined as $(L, R) \mapsto (R, L \oplus f(R))$, where L and R are the left and right parts of the input and ⊕ denotes bitwise XOR or, in this paper, any other group operation on $\{0,1\}^n$. The only non-trivial step of the security proof consists of proving that the cascade of r Feistel permutations with independent uniform random functions $\{0,1\}^n \to \{0,1\}^n$, denoted $\Psi_r^{2n}$, is indistinguishable from a uniform random permutation $\{0,1\}^{2n} \to \{0,1\}^{2n}$ by any computationally unbounded adaptive distinguisher making at most $O(2^{cn})$ combined chosen plaintext/ciphertext queries for any c < α, where α is a security parameter. Luby and Rackoff proved α = 1/2 for r = 4. A natural problem, proposed by Pieprzyk, is to improve on α for larger r. The best known result, α = 3/4 for r = 6, is due to Patarin. In this paper we prove α = 1 − O(1/r), i.e., the trivial upper bound α = 1 can be approached. The proof uses some new techniques that can be of independent interest.

Ueli Maurer, Krzysztof Pietrzak
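
A minimal executable sketch of the object under study, a cascade of Feistel permutations with the round map $(L, R) \mapsto (R, L \oplus f(R))$ quoted above; keyed hashes stand in for the independent uniform random functions, and all sizes are illustrative.

    import os, hashlib

    def feistel_cascade(L, R, fs):
        """Cascade of Feistel permutations (L, R) -> (R, L xor f(R))."""
        for f in fs:
            L, R = R, L ^ f(R)
        return L, R

    def feistel_inverse(L, R, fs):
        """Undo the cascade: each round inverts to (L, R) -> (R xor f(L), L)."""
        for f in reversed(fs):
            L, R = R ^ f(L), L
        return L, R

    # Toy instantiation with n = 32 and r = 6 rounds; keyed hashes stand in
    # for independent uniform random functions {0,1}^n -> {0,1}^n.
    n_bytes, r = 4, 6
    keys = [os.urandom(16) for _ in range(r)]
    fs = [lambda x, k=k: int.from_bytes(
              hashlib.sha256(k + x.to_bytes(n_bytes, "big")).digest()[:n_bytes], "big")
          for k in keys]
    L, R = feistel_cascade(0x01234567, 0x89ABCDEF, fs)
    assert feistel_inverse(L, R, fs) == (0x01234567, 0x89ABCDEF)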
New Bounds in Secret-Key Agreement: The Gap between Formation and Secrecy Extraction

Perfectly secret message transmission can be realized with only partially secret and weakly correlated information shared by the parties as soon as this information allows for the extraction of information-theoretically secret bits. The best known upper bound on the rate S at which such key bits can be generated has been the intrinsic information of the distribution modeling the parties’, including the adversary’s, knowledge. Based on a new property of the secret-key rate S, we introduce a conditional mutual information measure which is a stronger upper bound on S. Having thus seen that the intrinsic information of a distribution P is not always suitable for determining the number of secret bits extractable from P, we prove a different significance of it in the same context: It is a lower bound on the number of key bits required to generate P by public communication. Taken together, these two results imply that sometimes, (a possibly arbitrarily large fraction of) the correlation contained in distributed information cannot be extracted in the form of secret keys by any protocol.

Renato Renner, Stefan Wolf
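
For reference, the bound being strengthened (Maurer and Wolf’s intrinsic information; notation ours): the secret-key rate of a distribution $P_{XYZ}$ satisfies

$$ S(X;Y\|Z) \;\le\; I(X;Y\!\downarrow\! Z) \;=\; \inf_{P_{\bar Z|Z}} I(X;Y \mid \bar Z), $$

where the infimum ranges over all channels $P_{\bar Z|Z}$ by which the adversary may process its data Z.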

Secure Multi-party Computation II

Round Efficiency of Multi-party Computation with a Dishonest Majority

We consider the round complexity of multi-party computation in the presence of a static adversary who controls a majority of the parties. Here, n players wish to securely compute some functionality and up to n − 1 of these players may be arbitrarily malicious. Previous protocols for this setting (when a broadcast channel is available) require O(n) rounds. We present two protocols with improved round complexity: The first assumes only the existence of trapdoor permutations and dense cryptosystems, and achieves round complexity O(log n) based on a proof scheduling technique of Chor and Rabin [13]; the second requires a stronger hardness assumption (along with the non-black-box techniques of Barak [2]) and achieves O(1) round complexity.

Jonathan Katz, Rafail Ostrovsky, Adam Smith
Efficient Multi-party Computation over Rings

Secure multi-party computation (MPC) is an active research area, and a wide range of literature can be found nowadays suggesting improvements and generalizations of existing protocols in various directions. However, all current techniques for secure MPC apply to functions that are represented by (boolean or arithmetic) circuits over finite fields. We are motivated by two limitations of these techniques:

Generality. Existing protocols do not apply to computation over more general algebraic structures (except via a brute-force simulation of computation in these structures).

Efficiency. The best known constant-round protocols do not efficiently scale even to the case of large finite fields.

Our contribution goes in these two directions. First, we propose a basis for unconditionally secure MPC over an arbitrary finite ring, an algebraic object with a much less nice structure than a field, and obtain efficient MPC protocols requiring only black-box access to the ring operations and to random ring elements. Second, we extend these results to the constant-round setting, and suggest efficiency improvements that are relevant also for the important special case of fields. We demonstrate the usefulness of the above results by presenting a novel application of MPC over (non-field) rings to the round-efficient secure computation of the maximum function.

Ronald Cramer, Serge Fehr, Yuval Ishai, Eyal Kushilevitz

Group Signatures

Foundations of Group Signatures: Formal Definitions, Simplified Requirements, and a Construction Based on General Assumptions

This paper provides theoretical foundations for the group signature primitive. We introduce strong, formal definitions for the core requirements of anonymity and traceability. We then show that these imply the large set of sometimes ambiguous existing informal requirements in the literature, thereby unifying and simplifying the requirements for this primitive. Finally we prove the existence of a construct meeting our definitions based only on the sole assumption that trapdoor permutations exist.

Mihir Bellare, Daniele Micciancio, Bogdan Warinschi
Extracting Group Signatures from Traitor Tracing Schemes

Digital signatures emerge naturally from public-key encryption based on trapdoor permutations, and the “duality” of the two primitives was noted as early as Diffie and Hellman’s seminal work. The present work is centered around the crucial observation that two well-known cryptographic primitives whose connection has not been noticed so far in the literature enjoy an analogous “duality.” The primitives are group signature schemes and public-key traitor tracing. Based on the observed “duality,” we introduce new design methodologies for group signatures that convert a traitor tracing scheme into its “dual” group signature scheme. Our first methodology applies to generic public-key traitor tracing schemes. We demonstrate its power by applying it to the Boneh-Franklin scheme and obtaining its “dual” group signature. This scheme is the first provably secure group signature scheme whose signature size is not proportional to the size of the group and which is based only on DDH and a random oracle. The existence of such schemes was open. Our second methodology introduces a generic way of turning any group signature scheme with signature size linear in the group size into a group signature scheme with only logarithmic dependency on the group size. To this end it employs the notion of traceability codes (a central component of combinatorial traitor tracing schemes, already used in the first such scheme by Chor, Fiat and Naor). We note that our signatures, obtained by generic transformations, are proportional to a bound on the anticipated maximum malicious coalition size. Without the random oracle assumption, our schemes give rise to provably secure and efficient Identity Escrow schemes.

Aggelos Kiayias, Moti Yung
Backmatter
Metadata
Title
Advances in Cryptology — EUROCRYPT 2003
Edited by
Eli Biham
Copyright year
2003
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-39200-2
Print ISBN
978-3-540-14039-9
DOI
https://doi.org/10.1007/3-540-39200-9