
Crypto 2001, the 21st Annual Crypto conference, was sponsored by the International Association for Cryptologic Research (IACR) in cooperation with the IEEE Computer Society Technical Committee on Security and Privacy and the Computer Science Department of the University of California at Santa Barbara. The conference received 156 submissions, of which the program committee selected 34 for presentation; one was later withdrawn. These proceedings contain the revised versions of the 33 submissions that were presented at the conference. These revisions have not been checked for correctness, and the authors bear full responsibility for the contents of their papers.

The conference program included two invited lectures. Mark Sherwin spoke on “Quantum information processing in semiconductors: an experimentalist’s view.” Daniel Weitzner spoke on “Privacy, Authentication & Identity: A recent history of cryptographic struggles for freedom.” The conference program also included its perennial “rump session,” chaired by Stuart Haber, featuring short, informal talks on late-breaking research news.

As I try to account for the hours of my life that flew off to oblivion, I realize that most of my time was spent cajoling talented innocents into spending even more time on my behalf. I have accumulated more debts than I can ever hope to repay. As mere statements of thanks are certainly insufficient, consider the rest of this preface my version of Chapter 11.

On the (Im)possibility of Obfuscating Programs

Extended Abstract

Informally, an obfuscator $$\mathcal{O}$$ is an (efficient, probabilistic) “compiler” that takes as input a program (or circuit) P and produces a new program $$\mathcal{O}$$(P) that has the same functionality as P yet is “unintelligible” in some sense. Obfuscators, if they exist, would have a wide variety of cryptographic and complexity-theoretic applications, ranging from software protection to homomorphic encryption to complexity-theoretic analogues of Rice’s theorem. Most of these applications are based on an interpretation of the “unintelligibility” condition in obfuscation as meaning that $$\mathcal{O}$$(P) is a “virtual black box,” in the sense that anything one can efficiently compute given $$\mathcal{O}$$(P), one could also efficiently compute given oracle access to P. In this work, we initiate a theoretical investigation of obfuscation. Our main result is that, even under very weak formalizations of the above intuition, obfuscation is impossible. We prove this by constructing a family of functions $$\mathcal{F}$$ that are inherently unobfuscatable in the following sense: there is a property π: $$\mathcal{F}$$ → {0,1} such that (a) given any program that computes a function f ∈ $$\mathcal{F}$$, the value π(f) can be efficiently computed, yet (b) given oracle access to a (randomly selected) function f ∈ $$\mathcal{F}$$, no efficient algorithm can compute π(f) much better than by random guessing. We extend our impossibility result in a number of ways, ruling out even obfuscators that (a) are not necessarily computable in polynomial time, (b) only approximately preserve the functionality, and (c) only need to work for very restricted models of computation ($$TC^0$$). We also rule out several potential applications of obfuscators, by constructing “unobfuscatable” signature schemes, encryption schemes, and pseudorandom function families.

Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai, Salil Vadhan, Ke Yang

Universally Composable Commitments

Extended Abstract

We propose a new security measure for commitment protocols, called Universally Composable (UC) Commitment. The measure guarantees that commitment protocols behave like an “ideal commitment service,” even when concurrently composed with an arbitrary set of protocols. This is a strong guarantee: it implies that security is maintained even when an unbounded number of copies of the scheme are running concurrently, it implies non-malleability (not only with respect to other copies of the same protocol but even with respect to other protocols), it provides resilience to selective decommitment, and more. Unfortunately, two-party UC commitment protocols do not exist in the plain model. However, we construct two-party UC commitment protocols, based on general complexity assumptions, in the common reference string model, where all parties have access to a common string taken from a predetermined distribution. The protocols are non-interactive, in the sense that both the commitment and the opening phases consist of a single message from the committer to the receiver.

Ran Canetti, Marc Fischlin

Revocation and Tracing Schemes for Stateless Receivers

We deal with the problem of a center sending a message to a group of users such that some subset of the users is considered revoked and should not be able to obtain the content of the message. We concentrate on the stateless receiver case, where the users do not (necessarily) update their state from session to session. We present a framework called the Subset-Cover framework, which abstracts a variety of revocation schemes, including some previously known ones. We provide sufficient conditions that guarantee the security of a revocation algorithm in this class. We describe two explicit Subset-Cover revocation algorithms; these algorithms are very flexible and work for any number of revoked users. The schemes require storage at the receiver of log N and ½ log² N keys respectively (N is the total number of users), and in order to revoke r users the required message lengths are r log N and 2r keys respectively. We also provide a general traitor tracing mechanism that can be integrated with any Subset-Cover revocation scheme that satisfies a “bifurcation property”. This mechanism does not need an a priori bound on the number of traitors and does not expand the message length by much compared to the revocation of the same set of traitors. The main improvements of these methods over previously suggested methods, when adapted to the stateless scenario, are: (1) reducing the message length to O(r) regardless of the coalition size, while maintaining a single decryption operation at the user’s end; and (2) providing a seamless integration between revocation and tracing, so that the tracing mechanism does not require any change to the revocation algorithm.

Dalit Naor, Moni Naor, Jeff Lotspiech
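The storage and message-length bounds quoted in the abstract can be tabulated with a short sketch (illustrative arithmetic only; the function and key names are ours, and the two schemes are listed in the abstract's order):

```python
import math

# Receiver-storage and header-size bounds for the two Subset-Cover
# instantiations, exactly as quoted in the abstract.
def subset_cover_costs(N, r):
    logN = math.log2(N)
    first_scheme = {
        "keys_per_receiver": logN,           # log N keys stored
        "header_keys": r * logN,             # r log N keys to revoke r users
    }
    second_scheme = {
        "keys_per_receiver": 0.5 * logN**2,  # (1/2) log^2 N keys stored
        "header_keys": 2 * r,                # 2r keys, independent of N
    }
    return first_scheme, second_scheme

# For a million-user system (N = 2^20) revoking r = 100 users, the second
# scheme trades 10x more receiver storage for a 10x shorter header.
first, second = subset_cover_costs(N=2**20, r=100)
```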

Self Protecting Pirates and Black-Box Traitor Tracing

We present a new generic black-box traitor tracing model in which the pirate-decoder employs a self-protection technique. This mechanism is simple, easy to implement in any (software or hardware) device, and is a natural way by which a pirate (an adversary) that is accessible only as a black box may try to evade detection. We present a necessary combinatorial condition for black-box traitor tracing of self-protecting devices. We constructively prove that any system that fails this condition is incapable of tracing pirate-decoders that contain keys based on a superlogarithmic number of traitor keys. We then combine the above condition with specific properties of concrete systems. We show that the Boneh-Franklin (BF) scheme as well as the Kurosawa-Desmedt scheme have no black-box tracing capability in the self-protecting model when the number of traitors is superlogarithmic, unless the ciphertext size is as large as in a trivial system, namely linear in the number of users. This partially settles in the negative the open problem of Boneh and Franklin regarding the general black-box traceability of the BF scheme, at least for the case of superlogarithmically many traitors. Our negative result does not apply to the Chor-Fiat-Naor (CFN) scheme (which, in fact, allows tracing in our self-protecting model); this separates CFN black-box traceability from that of BF. We also investigate a weaker form of black-box tracing called single-query “black-box confirmation.” We show that, when suspicion is modeled as a confidence weight (which biases the uniform distribution of traitors), such single-query confirmation is essentially not possible against a self-protecting pirate-decoder that contains keys based on a superlogarithmic number of traitor keys.

Aggelos Kiayias, Moti Yung

Minimal Complete Primitives for Secure Multi-party Computation

The study of the minimal cryptographic primitives needed to implement secure computation among two or more players is a fundamental question in cryptography. The issue of complete primitives for the case of two players has been thoroughly studied. However, in the multi-party setting, when there are n > 2 players of whom t are corrupted, the question of the simplest complete primitives remained open for t ≥ n/3. We consider this question, and introduce complete primitives of minimal cardinality for secure multi-party computation. The cardinality issue (the number of players accessing the primitive) is essential in settings where the primitives are implemented by some other means, since the simpler the primitive, the easier it is to realize. We show that our primitives are complete and of the minimal cardinality possible.

Matthias Fitzi, Juan A. Garay, Ueli Maurer, Rafail Ostrovsky

Robustness for Free in Unconditional Multi-Party Computation

We present a very efficient multi-party computation protocol unconditionally secure against an active adversary. The security is maximal, i.e., active corruption of up to t < n/3 of the n players is tolerated. The communication complexity for securely evaluating a circuit with m multiplication gates over a finite field is $$\mathcal{O}(mn^2 )$$ field elements, including the communication required for simulating broadcast, but excluding some overhead costs (independent of m) for sharing the inputs and reconstructing the outputs. This corresponds to the complexity of the best known protocols for the passive model, where the corrupted players are guaranteed not to deviate from the protocol. The complexity of our protocol may well be optimal. The constant overhead factor for robustness is small and the protocol is practical.

Martin Hirt, Ueli Maurer

Secure Distributed Linear Algebra in a Constant Number of Rounds

Consider a network of processors among which elements in a finite field K can be verifiably shared in a constant number of rounds. Assume furthermore that constant-round protocols are available for generating random shared values, for secure multiplication, and for addition of shared values. These requirements can be met by known techniques in all standard models of communication. In this model we construct protocols allowing the network to securely solve standard computational problems in linear algebra. In particular, we show how the network can securely, efficiently, and in a constant number of rounds compute the determinant, the characteristic polynomial, the rank, and the solution space of linear systems of equations. Constant-round solutions follow for all problems that can be solved by direct application of such linear-algebraic methods, such as deciding whether a graph contains a perfect matching. If the basic protocols (for shared random values, addition, and multiplication) we start from are unconditionally secure, then so are our protocols. Our results offer solutions that are significantly more efficient than previous techniques for secure linear algebra, and they work for arbitrary fields; they therefore extend the class of functions previously known to be computable in a constant number of rounds with unconditional security. In particular, we obtain an unconditionally secure constant-round protocol for computing a function f, with complexity polynomial in the span program size of f over an arbitrary finite field.

Ronald Cramer, Ivan Damgård

Two-Party Generation of DSA Signatures

Extended Abstract

We describe a means of sharing the DSA signature function, so that two parties can efficiently generate a DSA signature with respect to a given public key but neither can alone. We focus on a certain instantiation that allows a proof of security for concurrent execution in the random oracle model, and that is very practical. We also briefly outline a variation that requires more rounds of communication, but that allows a proof of security for sequential execution without random oracles.

Philip MacKenzie, Michael K. Reiter

Oblivious Transfer in the Bounded Storage Model

Building on important previous work of Cachin, Crépeau, and Marcil [15], we present a provably secure and more efficient protocol for $$\binom{2}{1}$$-Oblivious Transfer with a storage-bounded receiver. A public random string of n bits is employed, and the protocol is secure against any receiver who can store γn bits, γ < 1. Our work improves on the work of CCM [15] in two ways. First, the CCM protocol requires the sender and receiver to store $$O(n^c)$$ bits, c ≈ 2/3. We give a similar but more efficient protocol that requires the sender and receiver to store only $$O(\sqrt{kn})$$ bits, where k is a security parameter. Second, the basic CCM protocol was proved in [15] to guarantee that a dishonest receiver who can store O(n) bits succeeds with probability at most $$O(n^{-d})$$, d ≈ 1/3, although repetition of the protocol can make this probability of cheating exponentially small [20]. Combining the methodologies of [24] and [15], we prove that in our protocol, a dishonest storage-bounded receiver succeeds with probability only $$2^{-O(k)}$$, without repetition of the protocol. Our results answer an open problem raised by CCM in the affirmative.

Yan Zong Ding

Parallel Coin-Tossing and Constant-Round Secure Two-Party Computation

In this paper we show that any two-party functionality can be securely computed in a constant number of rounds, where security is obtained against malicious adversaries that may arbitrarily deviate from the protocol specification. This is in contrast to Yao’s constant-round protocol that ensures security only in the face of semi-honest adversaries, and to its malicious-adversary version, which requires a polynomial number of rounds. In order to obtain our result, we present a constant-round protocol for secure coin-tossing of polynomially many coins (in parallel). We then show how this protocol can be used in conjunction with other existing constructions in order to obtain a constant-round protocol for securely computing any two-party functionality. On the subject of coin-tossing, we also present a constant-round perfect coin-tossing protocol, where by “perfect” we mean that the resulting coins are guaranteed to be statistically close to uniform (and not just pseudorandom).

Yehuda Lindell

Faster Point Multiplication on Elliptic Curves with Efficient Endomorphisms

The fundamental operation in elliptic curve cryptographic schemes is the multiplication of an elliptic curve point by an integer. This paper describes a new method for accelerating this operation on classes of elliptic curves that have efficiently-computable endomorphisms. One advantage of the new method is that it is applicable to a larger class of curves than previous such methods. For this special class of curves, a speedup of up to 50% can be expected over the best general methods for point multiplication.

Robert P. Gallant, Robert J. Lambert, Scott A. Vanstone

On the Unpredictability of Bits of the Elliptic Curve Diffie-Hellman Scheme

Let $$\mathbb{E}/\mathbb{F}_p$$ be an elliptic curve, and let $$G \in \mathbb{E}/\mathbb{F}_p$$. Define the Diffie-Hellman function as $$DH_{\mathbb{E},G}(aG, bG) = abG$$. We show that if there is an efficient algorithm for predicting the LSB of the x or y coordinate of abG given $$\left\langle \mathbb{E}, G, aG, bG \right\rangle$$ for a certain family of elliptic curves, then there is an algorithm for computing the Diffie-Hellman function on all curves in this family. This seems stronger than the best analogous results for the Diffie-Hellman function in $$\mathbb{F}_p^*$$. Boneh and Venkatesan showed that in $$\mathbb{F}_p^*$$, computing approximately $$(\log p)^{1/2}$$ of the bits of the Diffie-Hellman secret is as hard as computing the entire secret. Our results show that just predicting one bit of the elliptic curve Diffie-Hellman secret in a family of curves is as hard as computing the entire secret.

Dan Boneh, Igor E. Shparlinski

Identity-Based Encryption from the Weil Pairing

We propose a fully functional identity-based encryption scheme (IBE). The scheme has chosen ciphertext security in the random oracle model assuming an elliptic curve variant of the computational Diffie-Hellman problem. Our system is based on the Weil pairing. We give precise definitions for secure identity based encryption schemes and give several applications for such systems.

Dan Boneh, Matt Franklin

A Chosen Ciphertext Attack on RSA Optimal Asymmetric Encryption Padding (OAEP) as Standardized in PKCS #1 v2.0

An adaptive chosen ciphertext attack against PKCS #1 v2.0 RSA OAEP encryption is described. It recovers the plaintext (not the private key) from a given ciphertext in a little over $$\log_2 n$$ queries to an oracle implementing the algorithm, where n is the RSA modulus. The high likelihood of implementations being susceptible to this attack is explained, as well as the practicality of the attack. Improvements to the algorithm to defend against the attack are discussed.

James Manger
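The query count in the abstract reflects a halving argument: each oracle answer roughly halves the attacker's uncertainty about the plaintext. The toy sketch below shows only that counting principle, with a hypothetical comparison oracle; Manger's actual attack instead multiplies the ciphertext by chosen factors and uses a padding oracle that reveals whether the decrypted value lies below a threshold B.

```python
# Counting-principle sketch only (NOT the real attack): a yes/no oracle
# answering "is the secret <= x?" pins down an unknown value in [0, n)
# with about log2(n) adaptive queries.
def recover_with_oracle(oracle, n):
    lo, hi, queries = 0, n - 1, 0
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1
        if oracle(mid):          # "secret <= mid?"
            hi = mid
        else:
            lo = mid + 1
    return lo, queries

secret, n = 123_456_789, 2**30   # hypothetical 30-bit "modulus"
value, queries = recover_with_oracle(lambda x: secret <= x, n)
# value equals the secret after at most 30 queries, matching log2(n)
```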

OAEP Reconsidered

Extended Abstract

The OAEP encryption scheme was introduced by Bellare and Rogaway at Eurocrypt ’94. It converts any trapdoor permutation scheme into a public-key encryption scheme. OAEP is widely believed to provide resistance against adaptive chosen ciphertext attack. The main justification for this belief is a supposed proof of security in the random oracle model, assuming the underlying trapdoor permutation scheme is one-way. This paper shows conclusively that this justification is invalid. First, it observes that there appears to be a non-trivial gap in the OAEP security proof. Second, it proves that this gap cannot be filled, in the sense that there can be no standard “black box” security reduction for OAEP. This is done by proving that there exists an oracle relative to which the general OAEP scheme is insecure. The paper also presents a new scheme, OAEP+, along with a complete proof of security in the random oracle model. OAEP+ is essentially just as efficient as OAEP, and even has a tighter security reduction. It should be stressed that these results do not imply that a particular instantiation of OAEP, such as RSA-OAEP, is insecure. They simply undermine the original justification for its security. In fact, it turns out, essentially by accident rather than by design, that RSA-OAEP is secure in the random oracle model; however, this fact relies on special algebraic properties of the RSA function, and not on the security of the general OAEP scheme.

Victor Shoup

RSA-OAEP Is Secure under the RSA Assumption

Recently Victor Shoup noted that there is a gap in the widely believed security result for OAEP against adaptive chosen-ciphertext attacks. Moreover, he showed that, presumably, OAEP cannot be proven secure from the one-wayness of the underlying trapdoor permutation. This paper establishes another result on the security of OAEP. It proves that OAEP offers semantic security against adaptive chosen-ciphertext attacks, in the random oracle model, under the partial-domain one-wayness of the underlying permutation. This is a formally stronger assumption. Nevertheless, since partial-domain one-wayness of the RSA function is equivalent to its (full-domain) one-wayness, it follows that the security of RSA-OAEP can actually be proven under the sole RSA assumption, although the reduction is not tight.

Eiichiro Fujisaki, Tatsuaki Okamoto, David Pointcheval, Jacques Stern

Simplified OAEP for the RSA and Rabin Functions

Optimal Asymmetric Encryption Padding (OAEP) is a technique for converting the RSA trapdoor permutation into a chosen-ciphertext secure system in the random oracle model. OAEP padding can be viewed as two rounds of a Feistel network. We show that for the Rabin and RSA trapdoor functions, a much simpler padding scheme is sufficient for chosen-ciphertext security in the random oracle model: only one round of a Feistel network. The proof of security uses the algebraic properties of the RSA and Rabin functions.

Dan Boneh
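The two-Feistel-round view of OAEP mentioned above can be sketched as follows (toy parameters; `G` and `H` are hash-based masks standing in for the random oracles, and the padded block would then be fed through RSA or Rabin):

```python
import hashlib

K = 32  # bytes per Feistel half (toy parameter)

def G(x): return hashlib.sha256(b"G" + x).digest()  # mask generator
def H(x): return hashlib.sha256(b"H" + x).digest()  # second-round hash
def xor(a, b): return bytes(p ^ q for p, q in zip(a, b))

def oaep_pad(m, r):          # m = message half, r = random half (K bytes each)
    s = xor(m, G(r))         # Feistel round 1: mask the message
    t = xor(r, H(s))         # Feistel round 2: mask the randomness
    return s + t

def oaep_unpad(padded):
    s, t = padded[:K], padded[K:]
    r = xor(t, H(s))         # invert round 2
    return xor(s, G(r))      # invert round 1, recovering m

# The paper's simplification: for RSA/Rabin, a single Feistel round
# (just s = m XOR G(r), with r carried alongside) already suffices for
# chosen-ciphertext security in the random oracle model.
```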

Online Ciphers and the Hash-CBC Construction

We initiate a study of on-line ciphers. These are ciphers that can take input plaintexts of large and varying lengths and will output the i-th block of the ciphertext after having processed only the first i blocks of the plaintext. Such ciphers permit length-preserving encryption of a data stream with only a single pass through the data. We provide security definitions for this primitive and study its basic properties. We then provide attacks on some possible candidates, including CBC with a fixed IV. Finally, we provide a construction called HCBC, which is based on a given block cipher E and a family of AXU functions. HCBC is proven secure against chosen-plaintext attacks, assuming that E is a PRP secure against chosen-plaintext attacks.

Mihir Bellare, Alexandra Boldyreva, Lars Knudsen, Chanathip Namprempre
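One of the candidate attacks mentioned above, CBC with a fixed IV, can be seen in a few lines. The block "cipher" here is a toy XOR with the key (not a real PRP, purely for illustration); the failure shown holds for any deterministic block cipher:

```python
BLOCK = 16

def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

def cbc_fixed_iv(key, iv, plaintext):
    encrypt_block = lambda b: xor(b, key)   # toy stand-in for a block cipher
    blocks, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        prev = encrypt_block(xor(plaintext[i:i + BLOCK], prev))
        blocks.append(prev)
    return b"".join(blocks)

key, iv = b"k" * BLOCK, b"\x00" * BLOCK
c1 = cbc_fixed_iv(key, iv, b"attack at dawn!!" + b"A" * BLOCK)
c2 = cbc_fixed_iv(key, iv, b"attack at dawn!!" + b"B" * BLOCK)
# Equal first plaintext blocks give equal first ciphertext blocks under a
# fixed IV, so a chosen-plaintext adversary detects plaintext repetition.
leak = c1[:BLOCK] == c2[:BLOCK]
```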

The Order of Encryption and Authentication for Protecting Communications (or: How Secure Is SSL?)

We study the question of how to generically compose symmetric encryption and authentication when building “secure channels” for the protection of communications over insecure networks. We show that any secure-channels protocol designed to work with any combination of secure encryption (against chosen plaintext attacks) and secure MAC must use the encrypt-then-authenticate method. We demonstrate this by showing that the other common methods of composing encryption and authentication, including the authenticate-then-encrypt method used in SSL, are not generically secure. We show an example of an encryption function that provides (Shannon’s) perfect secrecy but, when combined with any MAC function under the authenticate-then-encrypt method, yields a totally insecure protocol (for example, finding passwords or credit card numbers transmitted under the protection of such a protocol becomes an easy task for an active attacker). The same applies to the encrypt-and-authenticate method used in SSH. On the positive side, we show that the authenticate-then-encrypt method is secure if the encryption method in use is either CBC mode (with an underlying secure block cipher) or a stream cipher (that XORs the data with a random or pseudorandom pad). Thus, while we show the generic security of SSL to be broken, the current practical implementations of the protocol that use the above modes of encryption are safe.

Hugo Krawczyk
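The recommended encrypt-then-authenticate order can be sketched as follows; this is a generic composition under stated assumptions, not the paper's constructions. The cipher is a toy SHA-256 counter-mode keystream standing in for any CPA-secure encryption, with HMAC-SHA256 as the MAC:

```python
import hashlib
import hmac

def keystream(key, nonce, length):
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def seal(enc_key, mac_key, nonce, plaintext):
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()  # MAC the ciphertext
    return ct, tag

def open_(enc_key, mac_key, nonce, ct, tag):
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):      # authenticate first,
        raise ValueError("invalid tag")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))  # then decrypt

ek, mk, nonce = b"e" * 32, b"m" * 32, b"n" * 16
ct, tag = seal(ek, mk, nonce, b"card number 1234")
```

Verifying the tag before touching the ciphertext is what makes the order matter: a forged or modified ciphertext is rejected without ever being decrypted.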

Forward-Secure Signatures with Optimal Signing and Verifying

We propose the first forward-secure signature scheme for which both signing and verifying are as efficient as for one of the most efficient ordinary signature schemes (Guillou-Quisquater [GQ88]), each requiring just two modular exponentiations with a short exponent. All previously proposed forward-secure signature schemes took significantly longer to sign and verify than ordinary signature schemes. Our scheme requires only fractional increases to the sizes of keys and signatures, and no additional public storage. Like the underlying [GQ88] scheme, our scheme is provably secure in the random oracle model.

Gene Itkis, Leonid Reyzin

Improved Online/Offline Signature Schemes

The notion of on-line/off-line signature schemes was introduced in 1990 by Even, Goldreich and Micali. They presented a general method for converting any signature scheme into an on-line/off-line signature scheme, but their method is not very practical as it increases the length of each signature by a quadratic factor. In this paper we use the recently introduced notion of a trapdoor hash function to develop a new paradigm called hash-sign-switch, which can convert any signature scheme into a highly efficient on-line/off-line signature scheme: In its recommended implementation, the on-line complexity is equivalent to about 0.1 modular multiplications, and the size of each signature increases only by a factor of two. In addition, the new paradigm enhances the security of the original signature scheme since it is only used to sign random strings chosen off-line by the signer. This makes the converted scheme secure against adaptive chosen message attacks even if the original scheme is secure only against generic chosen message attacks or against random message attacks.

Adi Shamir, Yael Tauman

An Efficient Scheme for Proving a Shuffle

In this paper, we propose a novel and efficient protocol for proving the correctness of a shuffle without leaking how the shuffle was performed. Using this protocol, we can prove the correctness of a shuffle of n data items with roughly 18n exponentiations, whereas the protocol of Sako-Kilian [SK95] required 642n and that of Abe [Ab99] required 22n log n. The length of the proof is only $$2^{11}n$$ bits in our protocol, as opposed to the $$2^{18}n$$ bits and $$2^{14}n \log n$$ bits required by Sako-Kilian and Abe, respectively. The proposed protocol can serve as a building block of an efficient, universally verifiable mix-net, whose application to voting systems is prominent.

Jun Furukawa, Kazue Sako
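The cost figures quoted in the abstract can be compared directly (illustrative arithmetic only; the function and key names are ours):

```python
import math

# Exponentiation counts and proof lengths (in bits) for a shuffle of n
# items, exactly as quoted in the abstract.
def shuffle_costs(n):
    return {
        "proposed":    (18 * n,                2**11 * n),
        "Sako-Kilian": (642 * n,               2**18 * n),
        "Abe":         (22 * n * math.log2(n), 2**14 * n * math.log2(n)),
    }

costs = shuffle_costs(n=1024)   # e.g. a 1024-vote mix
```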

An Identity Escrow Scheme with Appointed Verifiers

An identity escrow scheme allows a member of a group to prove membership in this group without revealing any extra information. At the same time, in case of abuse, his identity can still be discovered. Such a scheme allows anonymous access control. In this paper, we put forward the notion of an identity escrow scheme with appointed verifiers. Such a scheme allows the user to convince only an appointed verifier (or several appointed verifiers) of his membership; no unauthorized verifier can verify a user’s group membership even if the user fully cooperates, unless the user is completely under the verifier’s control. We provide a formal definition of this new notion and give an efficient construction of an identity escrow scheme with appointed verifiers, provably secure under common number-theoretic assumptions in the public-key model.

Jan Camenisch, Anna Lysyanskaya

Session-Key Generation Using Human Passwords Only

We present session-key generation protocols in a model where the legitimate parties share only a human-memorizable password. The security guarantee holds with respect to probabilistic polynomial-time adversaries that control the communication channel (between the parties), and may omit, insert, and modify messages at their choice. Loosely speaking, the effect of such an adversary that attacks an execution of our protocol is comparable to an attack in which an adversary is only allowed to make a constant number of queries of the form “is w the password of Party A”. We stress that the result holds also in case the passwords are selected at random from a small dictionary, so that it is feasible (for the adversary) to scan the entire dictionary. We note that prior to our result, it was not clear whether or not such protocols were attainable without the use of random oracles or additional setup assumptions.

Oded Goldreich, Yehuda Lindell

Cryptanalysis of RSA Signatures with Fixed-Pattern Padding

A fixed-pattern padding consists of concatenating a fixed pattern P to the message m. The RSA signature is then obtained by computing $$(P|m)^d \bmod N$$, where d is the private exponent and N the modulus. In Eurocrypt ’97, Girault and Misarsky showed that the size of P must be at least half the size of N (in other words, the parameter configurations |P| < |N|/2 are insecure), but the security of RSA fixed-pattern padding remained unknown for |P| > |N|/2. In this paper we show that the size of P must be at least two-thirds of the size of N, i.e. we show that |P| < 2|N|/3 is insecure.

Eric Brier, Christophe Clavier, Jean-Sébastien Coron, David Naccache
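The signing operation described above, $$(P|m)^d \bmod N$$, can be made concrete with toy numbers (illustrative only: a 12-bit modulus and a 4-bit pattern, whereas the attack concerns far larger parameters):

```python
# Toy fixed-pattern RSA signature: sign by computing (P|m)^d mod N.
p, q = 61, 53
N = p * q                          # 3233, a toy modulus
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+ modular inverse)

M_BITS = 4                         # toy message length in bits
P = 0b1011                         # the fixed pattern, prepended to m

def pad(m):
    return (P << M_BITS) | m       # the integer P|m

def sign(m):
    return pow(pad(m), d, N)       # (P|m)^d mod N

def verify(m, sig):
    return pow(sig, e, N) == pad(m)

sig = sign(0b0110)
ok = verify(0b0110, sig)
```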

Correlation Analysis of the Shrinking Generator

The shrinking generator is a well-known keystream generator composed of two linear feedback shift registers, LFSR1 and LFSR2, where LFSR1 is clock-controlled according to the regularly clocked LFSR2. We conduct a probabilistic analysis of the shrinking generator which shows that this generator can be vulnerable to a specific fast correlation attack. The first stage of the attack is based on a recursive computation of the posterior probabilities of individual bits of the regularly clocked LFSR1 sequence when conditioned on a given segment of the keystream sequence. Theoretical analysis shows that these probabilities are significantly different from one half and can hence be used for reconstructing the initial state of LFSR1 by iterative probabilistic decoding algorithms for fast correlation attacks on regularly clocked LFSRs. In the second stage of the attack, the initial state of LFSR2 is reconstructed in a similar way, based on a recursive computation of the posterior probabilities of individual bits of the LFSR2 sequence when conditioned on the keystream sequence and on the reconstructed LFSR1 sequence.

Jovan D. Golić

Nonlinear Vector Resilient Functions

An (n, m, k)-resilient function is a function $$f:\mathbb{F}_2^n \to \mathbb{F}_2^m$$ such that every possible output m-tuple is equally likely to occur when the values of k arbitrary inputs are fixed by an adversary and the remaining n − k input bits are chosen independently at random. In this paper we propose a new method to generate an (n + D + 1, m, d − 1)-resilient function for any non-negative integer D whenever an [n, m, d] linear code exists. This function has algebraic degree D and nonlinearity at least $$2^{n + D} - 2^n \left\lfloor {\sqrt {2^{n + D + 1} } } \right\rfloor + 2^{n - 1}$$. If we apply this method to the simplex code, we can get a $$(t(2^m - 1) + D + 1, m, t \cdot 2^{m-1} - 1)$$-resilient function with algebraic degree D for any positive integers m, t, and D. Note that if we increase the input size by D in the proposed construction, we get a resilient function with the same parameters except that the algebraic degree increases by D.

Jung Hee Cheon

New Public Key Cryptosystem Using Finite Non Abelian Groups

Most public-key cryptosystems to date have been constructed based on abelian groups. In this paper we propose a new public-key cryptosystem built on finite non-abelian groups. It is convertible to a scheme in which encryption and decryption are much faster than in other well-known public-key cryptosystems, even with no message expansion. Furthermore, a signature scheme can easily be derived from it, whereas it is difficult to find a signature scheme using a non-abelian group.

Seong-Hun Paeng, Kil-Chan Ha, Jae Heon Kim, Seongtaek Chee, Choonsik Park

Pseudorandomness from Braid Groups

Recently the braid groups were introduced as a new source for cryptography. The group operations are performed efficiently, and the features are quite different from those of other cryptographically popular groups. As a first step toward bringing the braid groups into the area of pseudorandomness, this article presents some cryptographic primitives under two related assumptions in braid groups. First, assuming that the conjugacy problem is a one-way function, say f, we show which particular bit of the argument x is pseudorandom given f(x). Next, under the decision Ko-Lee assumption, we construct two provably secure pseudorandom schemes: a pseudorandom generator and a pseudorandom synthesizer.

Eonkyung Lee, Sang Jin Lee, Sang Geun Hahn

On the Cost of Reconstructing a Secret, or VSS with Optimal Reconstruction Phase

Consider a scenario where an l-bit secret has been distributed among n players by an honest dealer using some secret sharing scheme. Then, if all players behave honestly, the secret can be reconstructed in one round with zero error probability, and by broadcasting nl bits. We ask the following question: how close to this ideal can we get if up to t players (but not the dealer) are corrupted by an adaptive, active adversary with unbounded computing power, where in addition we of course require that the adversary does not learn the secret ahead of reconstruction time? It is easy to see that t = ⌊(n − 1)/2⌋ is the maximal value of t that can be tolerated, and furthermore, we show that the best we can hope for is a one-round reconstruction protocol where every honest player outputs the correct secret or “failure”. For any such protocol with failure probability at most $$2^{-\Omega(k)}$$, we show a lower bound of $$\Omega(nl + kn^2)$$ bits on the information communicated. We further show that this is tight up to a constant factor. The lower bound trivially applies as well to VSS schemes, where also the dealer may be corrupt. Using generic methods, the scheme establishing the upper bound can be turned into a VSS with efficient reconstruction. However, the distribution phase becomes very inefficient. Closing this gap, we present a new VSS protocol where the distribution complexity matches that of the previously best known VSS, but where the reconstruction phase meets our lower bound up to a constant factor. The reconstruction is a factor of n better than in previous VSS protocols. We show an application of this to multi-party computation with pre-processing, improving the complexity of earlier similar protocols by a factor of n.

Ronald Cramer, Ivan Damgård, Serge Fehr

Secure and Efficient Asynchronous Broadcast Protocols

Extended Abstract

Christian Cachin, Klaus Kursawe, Frank Petzold, Victor Shoup

Soundness in the Public-Key Model

The public-key model for interactive proofs has proved to be quite effective in improving protocol efficiency [CGGM00]. We argue, however, that its soundness notion is more subtle and complex than in the classical model, and that it should be better understood to avoid designing erroneous protocols. Specifically, for the public-key model, we identify four meaningful notions of soundness; prove that, under minimal complexity assumptions, these four notions are distinct; identify the exact soundness notions satisfied by prior interactive protocols; and identify the round complexity of some of the new notions.

Silvio Micali, Leonid Reyzin

Robust Non-interactive Zero Knowledge

Non-Interactive Zero Knowledge (NIZK), introduced by Blum, Feldman, and Micali in 1988, is a fundamental cryptographic primitive which has attracted considerable attention in the last decade and has been used throughout modern cryptography in several essential ways. For example, NIZK plays a central role in building provably secure public-key cryptosystems based on general complexity-theoretic assumptions that achieve security against chosen ciphertext attacks. In essence, in a multi-party setting, given a fixed common random string of polynomial size which is visible to all parties, NIZK allows an arbitrary polynomial number of Provers to send messages to polynomially many Verifiers, where each message constitutes an NIZK proof for an arbitrary polynomial-size NP statement. In this paper, we take a closer look at NIZK in the multi-party setting. First, we consider non-malleable NIZK, and, generalizing and substantially strengthening the results of Sahai, we give the first construction of NIZK which remains non-malleable after polynomially many NIZK proofs. Second, we turn to the definition of standard NIZK itself, and propose a strengthening of it. In particular, one of the concerns in the technical definition of NIZK (as well as non-malleable NIZK) is that the so-called “simulator” of the Zero-Knowledge property is allowed to pick a different “common random string” from the one that Provers must actually use to prove NIZK statements in real executions. In this paper, we propose a new definition for NIZK that eliminates this shortcoming, and where Provers and the simulator use the same common random string. Furthermore, we show that both standard and non-malleable NIZK (as well as NIZK Proofs of Knowledge) can be constructed achieving this stronger definition. We call such NIZK Robust NIZK and show how to achieve it.
Our results also yield the simplest known public-key encryption scheme based on general assumptions secure against adaptive chosen-ciphertext attack (CCA2).

Alfredo De Santis, Giovanni Di Crescenzo, Rafail Ostrovsky, Giuseppe Persiano, Amit Sahai