
2006 | Book

Advances in Cryptology - CRYPTO 2006

26th Annual International Cryptology Conference, Santa Barbara, California, USA, August 20-24, 2006. Proceedings


Table of Contents

Frontmatter
Rigorous Bounds on Cryptanalytic Time/Memory Tradeoffs

In this paper we formalize a general model of cryptanalytic time/memory tradeoffs for the inversion of a random function f : {0,1,...,N–1} ↦ {0,1,...,N–1}. The model contains all the known tradeoff techniques as special cases. It is based on a new notion of stateful random graphs. The evolution of a path in the stateful random graph depends on a hidden state, such as the color in the Rainbow scheme or the table number in the classical Hellman scheme. We prove an upper bound on the number of images y = f(x) for which f can be inverted, and derive from it a lower bound on the number of hidden states. These bounds hold for an overwhelming majority of the functions f, and their proofs are based on a rigorous combinatorial analysis. With some additional natural assumptions on the behavior of the online phase of the scheme, we prove a lower bound on its worst-case time complexity $T=\Omega(\frac{N^2}{M^2 \ln N})$, where M is the memory complexity. Finally, we describe new rainbow-based time/memory/data tradeoffs, and a new method for improving the time complexity of the online phase (by a small factor) by performing a deeper analysis during preprocessing.
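
As background for the setting (a minimal single-table Hellman-style sketch of our own; the function, parameters, and names below are illustrative, while the paper's model and bounds cover all such schemes):

```python
import random

def make_f(N, seed=1):
    """A fixed pseudo-random function f: {0,...,N-1} -> {0,...,N-1}."""
    rng = random.Random(seed)
    table = [rng.randrange(N) for _ in range(N)]
    return lambda x: table[x]

def build_table(f, N, m, t):
    """Preprocessing: m chains of length t; store only (endpoint -> start)."""
    chains = {}
    for _ in range(m):
        start = x = random.randrange(N)
        for _ in range(t):
            x = f(x)
        chains[x] = start
    return chains

def invert(f, chains, y, t):
    """Online phase: walk y forward; on hitting a stored endpoint, replay
    that chain from its start, checking for a preimage of y (this also
    filters out false alarms caused by chain merges)."""
    z = y
    for _ in range(t):
        if z in chains:
            x = chains[z]
            for _ in range(t):
                if f(x) == y:
                    return x
                x = f(x)
        z = f(z)
    return None  # y is not covered by this table

N = 2**16
f = make_f(N)
chains = build_table(f, N, m=256, t=256)
x = invert(f, chains, f(12345), t=256)
assert x is None or f(x) == f(12345)
```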

Elad Barkan, Eli Biham, Adi Shamir
On the Power of the Randomized Iterate

We consider two of the most fundamental theorems in cryptography. The first, due to Håstad et al. [HILL99], is that pseudorandom generators can be constructed from any one-way function. The second, due to Yao [Yao82], states that the existence of weak one-way functions (i.e., functions on which every efficient algorithm fails to invert with some noticeable probability) implies the existence of full-fledged one-way functions. These powerful plausibility results shape our understanding of hardness and randomness in cryptography. Unfortunately, the reductions given in [HILL99, Yao82] are not as security-preserving as one may desire. The main reason for the security deterioration is the input blow-up in both of these constructions. For example, given one-way functions on n bits, one obtains by [HILL99] pseudorandom generators with seed length $\Omega(n^8)$.

This paper revisits a technique that we call the Randomized Iterate, introduced by Goldreich et al. [GKL93]. This technique was used to give a construction of pseudorandom generators from regular one-way functions. We simplify and strengthen this technique in order to obtain a similar reduction where the seed length of the resulting generators is as short as ${\cal{O}}(n \log n)$ rather than $\Omega(n^3)$ in [GKL93]. Our technique has the potential of implying seed length ${\cal{O}}(n)$, and the only bottleneck for such a result is the parameters of current generators against space-bounded computations. We give a reduction with similar parameters for security amplification of regular one-way functions. This improves upon the reduction of Goldreich et al. [GIL+90] in that the reduction does not need to know the regularity parameter of the functions (in terms of security, the two reductions are incomparable). Finally, we show that the randomized iterate may even be useful in the general context of [HILL99]. In particular, we use the randomized iterate to replace the basic building block of the [HILL99] construction. Interestingly, this modification improves efficiency by an $n^3$ factor and reduces the seed length to ${\cal{O}}(n^7)$ (which also implies an improvement in the security of the construction).
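
For intuition, the randomized iterate interleaves f with freshly chosen hash functions (a toy sketch of our own; the h family below is only loosely pairwise independent since N is not prime, and the paper's real work lies in generating the h_i from a short seed):

```python
import random

rng = random.Random(0)
N = 2**16  # toy domain size
table = [rng.randrange(N) for _ in range(N)]
f = lambda x: table[x]  # stand-in for a (regular) one-way function

def sample_h():
    """A random 'hash' h(x) = a*x + b mod N, freshly chosen per iteration."""
    a, b = rng.randrange(1, N), rng.randrange(N)
    return lambda x: (a * x + b) % N

def randomized_iterate(f, x, hs):
    """The randomized iterate: x_{i+1} = f(h_i(x_i)).  Re-randomizing
    between applications of f is what keeps later iterates hard to
    invert for regular f."""
    for h in hs:
        x = f(h(x))
    return x

print(randomized_iterate(f, 7, [sample_h() for _ in range(10)]))
```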

Iftach Haitner, Danny Harnik, Omer Reingold
Strengthening Digital Signatures Via Randomized Hashing

We propose randomized hashing as a mode of operation for cryptographic hash functions, intended for use with standard digital signatures and without necessitating any changes in the internals of the underlying hash function (e.g., the SHA family) or in the signature algorithms (e.g., RSA or DSA). The goal is to free practical digital signature schemes from their current reliance on strong collision resistance by basing the security of these schemes on significantly weaker properties of the underlying hash function, thus providing a safety net in case the (current or future) hash functions in use turn out to be less resilient to collision search than initially thought.

We design a specific mode of operation that takes into account engineering considerations (such as simplicity, efficiency and compatibility with existing implementations) as well as analytical soundness. Specifically, the scheme consists of a regular use of the hash function with randomization applied only to the message before it is input to the hash function. We formally show the sufficiency of weaker than collision-resistance assumptions for proving the security of the scheme.
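
For intuition, here is one way such a mode can look (a sketch of our own; the XOR-style message mixing and the salt length are illustrative assumptions, not the paper's exact transform). The hash function itself is used unmodified, and only the message is randomized:

```python
import hashlib, os

def randomized_hash(msg, salt=None):
    """Randomize the MESSAGE, not the hash function: pick a fresh
    per-signature salt r, XOR it into every message block, and apply
    the unmodified hash (SHA-256 here).  The signer signs the digest
    and transmits r alongside the signature."""
    if salt is None:
        salt = os.urandom(64)
    block = 64
    padded = msg + b"\x00" * (-len(msg) % block)
    mixed = bytes(b ^ salt[i % block] for i, b in enumerate(padded))
    return salt, hashlib.sha256(salt + mixed).digest()

r, digest = randomized_hash(b"message to be signed")
```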

Shai Halevi, Hugo Krawczyk
Round-Optimal Composable Blind Signatures in the Common Reference String Model

We build concurrently executable blind signature schemes in the common reference string model, based on general complexity assumptions, and with optimal round complexity. Namely, each interactive signature generation requires the requesting user and the issuing bank to transmit only one message each. We also put forward the definition of universally composable blind signature schemes, and show how to extend our concurrently executable blind signature protocol to derive such universally composable schemes in the common reference string model under general assumptions. While this protocol then guarantees very strong security properties when executed within larger protocols, it still supports signature generation in two moves.

Marc Fischlin
On Signatures of Knowledge

In a traditional signature scheme, a signature σ on a message m is issued under a public key PK, and can be interpreted as follows: “The owner of the public key PK and its corresponding secret key has signed message m.” In this paper we consider schemes that allow one to issue signatures on behalf of any NP statement, that can be interpreted as follows: “A person in possession of a witness w to the statement that x ∈ L has signed message m.” We refer to such schemes as signatures of knowledge.

We formally define the notion of a signature of knowledge. We begin by extending the traditional definition of digital signature schemes, captured by Canetti’s ideal signing functionality, to the case of signatures of knowledge. We then give an alternative definition in terms of games that also seems to capture the necessary properties one may expect from a signature of knowledge. We then gain additional confidence in our two definitions by proving them equivalent.

We construct signatures of knowledge under standard complexity assumptions in the common-random-string model.

We then extend our definition to allow signatures of knowledge to be nested, i.e., a signature of knowledge (or another accepting input to a UC-realizable ideal functionality) can itself serve as a witness for another signature of knowledge. Thus, as a corollary, we obtain the first delegatable anonymous credential system, i.e., a system in which one can use one’s anonymous credentials as a secret key for issuing anonymous credentials to others.

Melissa Chase, Anna Lysyanskaya
Non-interactive Zaps and New Techniques for NIZK

In 2000, Dwork and Naor proved a very surprising result: that there exist “Zaps”, two-round witness-indistinguishable proofs in the plain model without a common reference string, where the Verifier asks a single question and the Prover sends back a single answer. This left open the following tantalizing question: does there exist a non-interactive witness-indistinguishable proof, where the Prover sends a single message to the Verifier, for some non-trivial NP-language? In 2003, Barak, Ong and Vadhan answered this question affirmatively by derandomizing Dwork and Naor’s construction under a complexity-theoretic assumption, namely that Hitting Set Generators against co-nondeterministic circuits exist.

In this paper, we construct non-interactive Zaps for all NP-languages. We accomplish this by introducing new techniques for building Non-Interactive Zero Knowledge (NIZK) Proof and Argument systems, which we believe to be of independent interest, and then modifying these to yield our main result. Our construction is based on the Decisional Linear Assumption, which can be seen as a bilinear group variant of the Decisional Diffie-Hellman Assumption.

Furthermore, our single-message witness-indistinguishable proof for Circuit Satisfiability is of size O(k|C|) bits, where k is a security parameter and |C| is the size of the circuit. This is much more efficient than previous constructions of 1- or 2-move Zaps.

Jens Groth, Rafail Ostrovsky, Amit Sahai
Rankin’s Constant and Blockwise Lattice Reduction

Lattice reduction is a hard problem of interest to both public-key cryptography and cryptanalysis. Despite its importance, extremely few algorithms are known. The best algorithm known in high dimension is due to Schnorr, proposed in 1987 as a block generalization of the famous LLL algorithm. This paper deals with Schnorr’s algorithm and potential improvements. We prove that Schnorr’s algorithm outputs better bases than what was previously known: namely, we decrease all former bounds on Schnorr’s approximation factors to their (ln 2)-th power. On the other hand, we also show that the output quality may have intrinsic limitations, even if an improved reduction strategy was used for each block, thereby strengthening recent results by Ajtai. This is done by making a connection between Schnorr’s algorithm and a mathematical constant introduced by Rankin more than 50 years ago as a generalization of Hermite’s constant. Rankin’s constant leads us to introduce the so-called smallest volume problem, a new lattice problem which generalizes the shortest vector problem, and which has applications to blockwise lattice reduction generalizing LLL and Schnorr’s algorithm, possibly improving their output quality. Schnorr’s algorithm is actually based on an approximation algorithm for the smallest volume problem in low dimension. We obtain a slight improvement over Schnorr’s algorithm by presenting a cheaper approximation algorithm for the smallest volume problem, which we call transference reduction.

Nicolas Gama, Nick Howgrave-Graham, Henrik Koy, Phong Q. Nguyen
Lattice-Based Cryptography

We describe some of the recent progress on lattice-based cryptography, starting from the seminal work of Ajtai, and ending with some recent constructions of very efficient cryptographic schemes.

Oded Regev
A Method for Making Password-Based Key Exchange Resilient to Server Compromise

This paper considers the problem of password-authenticated key exchange (PAKE) in a client-server setting, where the server authenticates using a stored password file, and it is desirable to maintain some degree of security even if the server is compromised. A PAKE scheme is said to be resilient to server compromise if an adversary who compromises the server must at least perform an offline dictionary attack to gain any advantage in impersonating a client. (Of course, offline dictionary attacks should be infeasible in the absence of server compromise.) One can see that this is the best security possible, since by definition the password file has enough information to allow one to play the role of the server, and thus to verify passwords in an offline dictionary attack.

While some previous PAKE schemes have been proven resilient to server compromise, there was no known general technique to take an arbitrary PAKE scheme and make it provably resilient to server compromise. This paper presents a practical technique for doing so which requires essentially one extra round of communication and one signature computation/verification. We prove security in the universal composability framework by (1) defining a new functionality for PAKE with resilience to server compromise, (2) specifying a protocol combining this technique with a (basic) PAKE functionality, and (3) proving (in the random oracle model) that this protocol securely realizes the new functionality.

Craig Gentry, Philip MacKenzie, Zulfikar Ramzan
Mitigating Dictionary Attacks on Password-Protected Local Storage

We address the issue of encrypting data in local storage using a key that is derived from the user’s password. The typical solution in use today is to derive the key from the password using a cryptographic hash function. This solution provides relatively weak protection, since an attacker that gets hold of the encrypted data can mount an off-line dictionary attack on the user’s password, thereby recovering the key and decrypting the stored data.

We propose an approach for limiting off-line dictionary attacks in this setting without relying on secret storage or secure hardware. In our proposal, the process of deriving a key from the password requires the user to solve a puzzle that is presumed to be solvable only by humans (e.g., a CAPTCHA). We describe a simple protocol using this approach: many different puzzles are stored on the disk, the user’s password is used to specify which of them need to be solved, and the encryption key is derived from the password and the solutions of the specified puzzles. Completely specifying and analyzing this simple protocol, however, raises a host of modeling and technical issues, such as new properties of human-solvable puzzles and some seemingly hard combinatorial problems. Here we analyze this protocol in some interesting special cases.
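
A minimal sketch (ours) of the protocol flow just described, with stand-in index derivation and KDF; puzzle generation and human solving are outside its scope:

```python
import hashlib

def select_puzzles(password, num_stored, k):
    """The password pseudorandomly selects which k of the stored puzzles
    must be solved (indices derived by hashing password || counter)."""
    indices, ctr = [], 0
    while len(indices) < k:
        h = hashlib.sha256(f"{password}|{ctr}".encode()).digest()
        i = int.from_bytes(h[:4], "big") % num_stored
        if i not in indices:
            indices.append(i)
        ctr += 1
    return indices

def derive_key(password, solutions):
    """The key depends on the password AND the human-provided solutions,
    so an offline attacker must solve puzzles for every password guess."""
    material = password.encode() + b"".join(solutions)
    return hashlib.pbkdf2_hmac("sha256", material, b"storage-salt", 100_000)

indices = select_puzzles("hunter2", num_stored=1000, k=5)
# solutions[j] would come from the user solving the puzzle at indices[j]
key = derive_key("hunter2", [b"ans1", b"ans2", b"ans3", b"ans4", b"ans5"])
```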

Ran Canetti, Shai Halevi, Michael Steiner
Rationality and Adversarial Behavior in Multi-party Computation

We study multi-party computation in the model where none of the n participating parties are honest: they are either rational, acting in their selfish interest to maximize their utility, or adversarial, acting arbitrarily. In this new model, which we call the mixed-behavior model, we define a class of functions that can be computed in the presence of an adversary using a trusted mediator. We then give a protocol that allows the rational parties to emulate the mediator and jointly compute the function such that (1) assuming that each rational party prefers that it learns the output while others do not, no rational party has an incentive to deviate from the protocol; and (2) the rational parties are protected from a malicious adversary controlling $\lceil \frac{n}{2} \rceil - 2$ of the participants: the adversary can either cause all rational participants to abort (so no one learns the function they are trying to compute), or learn only whatever information is implied by the output of the function.

Anna Lysyanskaya, Nikos Triandopoulos
When Random Sampling Preserves Privacy

Many organizations such as the U.S. Census publicly release samples of data that they collect about private citizens. These datasets are first anonymized using various techniques and then a small sample is released so as to enable “do-it-yourself” calculations. This paper investigates the privacy of the second step of this process: sampling. We observe that rare values – values that occur with low frequency in the table – can be problematic from a privacy perspective. To our knowledge, this is the first work that quantitatively examines the relationship between the number of rare values in a table and the privacy in a released random sample. If we require ε-privacy (where the larger ε is, the worse the privacy guarantee) with probability at least 1 – δ, we say that a value is rare if it occurs in at most $\tilde{O}(\frac{1}{\epsilon})$ rows of the table (ignoring log factors). If there are no rare values, then we establish a direct connection between the sample size that is safe to release and privacy. Specifically, if we select each row of the table with probability at most ε, then the sample is O(ε)-private with high probability. In the case that there are t rare values, the sample is $\tilde{O}(\epsilon \delta /t)$-private with probability at least 1 – δ.
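
For concreteness, the release mechanism analyzed here is plain Bernoulli sampling (our illustrative sketch; the table contents and parameter values are arbitrary):

```python
import random

def release_sample(rows, eps, rng=random.Random(0)):
    """Include each row independently with probability eps.  The result
    above says that, absent rare values, this release is O(eps)-private
    with high probability."""
    return [row for row in rows if rng.random() < eps]

table = [("age=34", "zip=90210"), ("age=71", "zip=10001")] * 500
sample = release_sample(table, eps=0.05)
print(f"released {len(sample)} of {len(table)} rows")
```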

Kamalika Chaudhuri, Nina Mishra
Tight Bounds for Unconditional Authentication Protocols in the Manual Channel and Shared Key Models

We address the message authentication problem in two seemingly different communication models. In the first model, the sender and receiver are connected by an insecure channel and by a low-bandwidth auxiliary channel that enables the sender to “manually” authenticate one short message to the receiver (for example, by typing a short string or comparing two short strings). We consider this model in a setting where no computational assumptions are made, and prove that for any 0 < ε < 1 there exists a log* n-round protocol for authenticating n-bit messages, in which only 2 log(1/ε) + O(1) bits are manually authenticated, and any adversary (even computationally unbounded) has probability at most ε of cheating the receiver into accepting a fraudulent message. Moreover, we develop a proof technique showing that our protocol is essentially optimal by providing a lower bound of 2 log(1/ε) – 6 on the required length of the manually authenticated string.

The second model we consider is the traditional message authentication model. In this model the sender and the receiver share a short secret key; however, they are connected only by an insecure channel. Once again, we apply our proof technique and prove a lower bound of 2 log(1/ε) – 2 on the required Shannon entropy of the shared key. This settles an open question posed by Gemmell and Naor (CRYPTO ’93).

Finally, we prove that one-way functions are essential (and sufficient) for the existence of protocols breaking the above lower bounds in the computational setting.
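
As a concrete reading of these bounds (our arithmetic, not from the paper): for a cheating probability of $\epsilon = 2^{-20}$,

\[
2\log_2(1/\epsilon) - 6 \;=\; 2 \cdot 20 - 6 \;=\; 34 \text{ bits (lower bound)}, \qquad
2\log_2(1/\epsilon) + O(1) \;=\; 40 + O(1) \text{ bits (protocol)},
\]

so the manually authenticated string is within an additive constant of optimal.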

Moni Naor, Gil Segev, Adam Smith
Robust Fuzzy Extractors and Authenticated Key Agreement from Close Secrets

Consider two parties holding correlated random variables W and W′, respectively, that are within distance t of each other in some metric space. These parties wish to agree on a uniformly distributed secret key R by sending a single message over an insecure channel controlled by an all-powerful adversary. We consider both the keyless case, where the parties share no additional secret information, and the keyed case, where the parties share a long-term secret SK that they can use to generate a sequence of session keys {R_j} using multiple pairs {(W_j, W′_j)}. The former has applications to, e.g., biometric authentication, while the latter arises in, e.g., the bounded storage model with errors.

Our results improve upon previous work in several respects:

– The best previous solution for the keyless case with no errors (i.e., t = 0) requires the min-entropy of W to exceed 2|W|/3. We show a solution when the min-entropy of W exceeds the minimal threshold |W|/2.

– Previous solutions for the keyless case in the presence of errors (i.e., t > 0) required random oracles. We give the first constructions (for certain metrics) in the standard model.

– Previous solutions for the keyed case were stateful. We give the first stateless solution.

Yevgeniy Dodis, Jonathan Katz, Leonid Reyzin, Adam Smith
On Forward-Secure Storage

We study the problem of secure data storage in the recently introduced Limited Communication Model. We propose a new cryptographic primitive that we call Forward-Secure Storage (FSS). This primitive is a special kind of encryption scheme, which produces huge (say, 5 GB) ciphertexts even from small plaintexts, and has the following non-standard security property. Suppose an adversary gets access to a ciphertext C = E(K, M) and is allowed to compute any function h of C, with the restriction that |h(C)| ≪ |C| (say, |h(C)| = 1 GB). We require that h(C) give the adversary no information about M, even if he later learns K.

A practical application of this concept is as follows. Suppose a ciphertext C is stored on a machine on which an adversary can install a virus. In many cases it is completely infeasible for the virus to retrieve 1 GB of data from the infected machine. So if the adversary (at some point later) learns K, then M remains secret.

We provide a formal definition of FSS, propose some FSS schemes, and show that FSS schemes can be composed sequentially in a secure way. We also show connections between FSS and the theory of compressibility of NP-instances (recently developed by Harnik and Naor).
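
To convey the flavor (a toy sketch of our own in the bounded-retrieval spirit; it is not one of the paper's schemes and makes no security claim): the key selects positions in a huge random string R, and an adversary whose leakage h(C) is much smaller than C is unlikely to have retained those positions.

```python
import hashlib, secrets

def _pad(key, R, length):
    """Derive a pad from positions of R that are unpredictable without
    the key (positions come from hashing key || counter)."""
    out = bytearray()
    for ctr in range(length):
        h = hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        out.append(R[int.from_bytes(h[:8], "big") % len(R)])
    return bytes(out)

def fss_encrypt(key, msg, blowup=2**20):
    R = secrets.token_bytes(blowup)  # the deliberately huge random part of C
    return R, bytes(m ^ p for m, p in zip(msg, _pad(key, R, len(msg))))

def fss_decrypt(key, ciphertext):
    R, body = ciphertext
    return bytes(c ^ p for c, p in zip(body, _pad(key, R, len(body))))

K = secrets.token_bytes(16)
C = fss_encrypt(K, b"small plaintext")
assert fss_decrypt(K, C) == b"small plaintext"
```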

Stefan Dziembowski
Construction of a Non-malleable Encryption Scheme from Any Semantically Secure One

There are several candidate semantically secure encryption schemes, yet in many applications non-malleability of encryptions is crucial. We show how to transform any semantically secure encryption scheme into one that is non-malleable for arbitrarily many messages.

Rafael Pass, abhi shelat, Vinod Vaikuntanathan
Anonymous Hierarchical Identity-Based Encryption (Without Random Oracles)

We present an identity-based cryptosystem that features fully anonymous ciphertexts and hierarchical key delegation. We give a proof of security in the standard model, based on the mild Decision Linear complexity assumption in bilinear groups. The system is efficient and practical, with small ciphertexts of size linear in the depth of the hierarchy. Applications include search on encrypted data, fully private communication, etc.

Our results resolve two open problems pertaining to anonymous identity-based encryption, our scheme being the first to offer provable anonymity in the standard model, in addition to being the first to realize fully anonymous HIBE at all levels in the hierarchy.

Xavier Boyen, Brent Waters
Fast Algorithms for the Free Riders Problem in Broadcast Encryption

We provide algorithms to solve the free riders problem in broadcast encryption. In this problem, the broadcast server is allowed to choose some small subset F of the revoked set R of users to allow to decrypt the broadcast, despite having been revoked. This may allow the server to significantly reduce network traffic while only allowing a small set of non-privileged users to decrypt the broadcast.

Although there are worst-case instances of broadcast encryption schemes where the free riders problem is difficult to solve (or even approximate), we show that for many specific broadcast encryption schemes, there are efficient algorithms. In particular, for the complete subtree method [25] and some other schemes in the subset-cover framework, we show how to find the optimal assignment of free riders in O(|R||F|) time, which is independent of the total number of users. We also define an approximate version of this problem, and study specific distributions of R for which this relaxation yields even faster algorithms.

Along the way we develop the first approximation algorithms for the following problem: given two integer sequences $a_1 \geq a_2 \geq \cdots \geq a_n$ and $b_1 \geq b_2 \geq \cdots \geq b_n$, output for all i an integer j′ for which $a_{j'} + b_{i-j'} \leq (1+\epsilon) \min_j (a_j + b_{i-j})$. We show that if the differences $a_i - a_{i+1}$, $b_i - b_{i+1}$ are bounded, then there is an $O(n^{4/3}/\epsilon^{2/3})$-time algorithm for this problem, improving upon the $O(n^2)$ time of the naive algorithm.
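
For reference, the naive quadratic baseline mentioned above can be stated directly (our sketch; 0-indexed, restricting j so both indices stay in range):

```python
def naive_min_plus(a, b):
    """Exact O(n^2) baseline: for every i, find j minimizing
    a[j] + b[i - j].  The paper gives a (1+eps)-approximation in
    O(n^(4/3) / eps^(2/3)) time when consecutive differences are
    bounded; this is only the trivial reference algorithm."""
    n = len(a)
    best = []
    for i in range(n):
        # j must satisfy 0 <= j <= i so that i - j is a valid index
        j_star = min(range(i + 1), key=lambda j: a[j] + b[i - j])
        best.append((j_star, a[j_star] + b[i - j_star]))
    return best

# toy usage with nonincreasing sequences
a = [9, 7, 4, 4, 1]
b = [8, 5, 5, 2, 0]
print(naive_min_plus(a, b))
```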

Zulfikar Ramzan, David P. Woodruff
The Number Field Sieve in the Medium Prime Case

In this paper, we study several variations of the number field sieve to compute discrete logarithms in finite fields of the form ${\mathbb F}_{p^n}$, with p a medium to large prime. We show that when n is not too large, this yields a $L_{p^n}(1/3)$ algorithm with efficiency similar to that of the regular number field sieve over prime fields. This approach complements the recent results of Joux and Lercier on the function field sieve. Combining both results, we deduce that computing discrete logarithms has heuristic complexity $L_{p^n}(1/3)$ in all finite fields. To illustrate the efficiency of our algorithm, we computed discrete logarithms in a 120-digit finite field ${\mathbb F}_{p^3}$.

Antoine Joux, Reynald Lercier, Nigel Smart, Frederik Vercauteren
Inverting HFE Is Quasipolynomial

In the last ten years, multivariate cryptography has emerged as a possible alternative to public key cryptosystems based on hard computational problems from number theory. Notably, the HFE scheme [17] appears to combine efficiency and resistance to attacks, as expected from any public key scheme. However, its security is not yet completely understood. On one hand, since the security is related to the hardness of solving quadratic systems of multivariate binary equations, an NP-complete problem, there were hopes that the system could be immune to subexponential attacks. On the other hand, several lines of attack have been explored, based on so-called relinearization techniques [12,5], or on the use of Gröbner basis algorithms [7]. The latter approach was used to break the first HFE Challenge 1 in 96 hours on an 833 MHz Alpha workstation with 4 Gbytes of memory. At a more abstract level, Faugère and Joux discovered an algebraic invariant that explains why the computation finishes earlier than expected. In the present paper, we pursue this line and study the asymptotic behavior of these Gröbner basis based attacks. More precisely, we consider the complexity of the decryption attack, which uses Gröbner bases to recover the plaintext, and the complexity of a related distinguisher. We show that the decryption attack has quasipolynomial complexity, where quasipolynomial denotes a subexponential expression much smaller than the classical subexponential expressions encountered in factoring or discrete logarithm computations. The same analysis shows that the related distinguisher has provable quasipolynomial complexity.

Louis Granboulan, Antoine Joux, Jacques Stern
Cryptanalysis of 2R− Schemes

In this paper, we study the security of 2R− schemes [17,18], which are the “minus variant” of two-round schemes. This variant consists in removing some of the n polynomials of the public key, and permits one to thwart an attack described at Crypto ’99 [25] against two-round schemes. Usually, the “minus variant” leads to a real strengthening of the considered schemes. We show here that this is actually not true for 2R− schemes. We indeed propose an efficient algorithm for decomposing 2R− schemes. For instance, we can remove up to $\left \lfloor\frac{n}{2} \right \rfloor$ equations and still be able to recover a decomposition in $O(n^{12})$. We provide experimental results illustrating the efficiency of our approach. In practice, we have been able to decompose 2R− schemes in less than a handful of hours for most of the challenges proposed by the designers [18]. We believe that this result makes the principle of two-round schemes, including 2R− schemes, useless.

Jean-Charles Faugère, Ludovic Perret
Receipt-Free Universally-Verifiable Voting with Everlasting Privacy

We present the first universally verifiable voting scheme that can be based on a general assumption (existence of a non-interactive commitment scheme). Our scheme is also the first receipt-free scheme to give “everlasting privacy” for votes: even a computationally unbounded party does not gain any information about individual votes (other than what can be inferred from the final tally).

Our voting protocols are designed to be used in a “traditional” setting, in which voters cast their ballots in a private polling booth (which we model as an untappable channel between the voter and the tallying authority). Following in the footsteps of Chaum and Neff [7,16], our protocol ensures that the integrity of an election cannot be compromised even if the computers running it are all corrupt (although ballot secrecy may be violated in this case).

We give a generic voting protocol which we prove to be secure in the Universal Composability model, given that the underlying commitment is universally composable. We also propose a concrete implementation, based on the hardness of discrete log, that is slightly more efficient (and can be used in practice).

Tal Moran, Moni Naor
Cryptographic Protocols for Electronic Voting

Electronic voting has seen a surge of growth in the US over the past five years; yet many questions have been raised about the trustworthiness of today’s e-voting systems. One solution that has been proposed involves use of sophisticated cryptographic protocols to prove to voters that their vote has been recorded and counted correctly. These protocols are of interest both for their use of many fundamental tools in cryptography (e.g., zero-knowledge proofs, mixnets, threshold cryptosystems) as well as because of the social importance of the problem. I will survey some promising recent work in this area, discuss several open problems for future research, and suggest some ways that cryptographers might be able to contribute to improving the integrity of public elections.

David Wagner
Asymptotically Optimal Two-Round Perfectly Secure Message Transmission

The problem of perfectly secure message transmission concerns two synchronized non-faulty processors, sender ${\mathcal{S}}$ and receiver ${\mathcal{R}}$, that are connected by a synchronous network of n ≥ 2t + 1 noiseless two-way communication channels. Their goal is to communicate privately and reliably, despite the presence of an adversary that may actively corrupt at most t of those channels. These properties should hold information-theoretically and without error.

We propose an asymptotically optimal solution for this problem. The proposed protocol consists of two communication rounds, and a total of O(ℓn) bits are exchanged in order to transmit a message of ℓ bits. Earlier, at CRYPTO 2004, an equally optimal solution was claimed. However, we give a counter-example showing that their result is not perfectly reliable. The flaw seems to be fundamental and non-trivial to repair. Our approach is overall entirely different, yet it also makes essential use of their neat communication-efficient technique for reliably transmitting conflict graphs.

What distinguishes our approach from previous ones is a technique that allows us to identify all actively corrupted channels, initially trading this off against privacy. A perfectly secure and reliable secret key is then distilled by privacy amplification.

Saurabh Agarwal, Ronald Cramer, Robbert de Haan
Random Selection with an Adversarial Majority

We consider the problem of random selection, where p players follow a protocol to jointly select a random element of a universe of size n. However, some of the players may be adversarial and collude to force the output to lie in a small subset of the universe. We describe essentially the first protocols that solve this problem in the presence of a dishonest majority in the full-information model (where the adversary is computationally unbounded and all communication is via non-simultaneous broadcast). Our protocols are nearly optimal in several parameters, including the round complexity (as a function of n), the randomness complexity, the communication complexity, and the tradeoffs between the fraction of honest players, the probability that the output lies in a small subset of the universe, and the density of this subset.

Ronen Gradwohl, Salil Vadhan, David Zuckerman
Oblivious Transfer and Linear Functions

We study unconditionally secure 1-out-of-2 Oblivious Transfer (1–2 OT). We first point out that a standard security requirement for 1–2 OT of bits, namely that the receiver only learns one of the bits sent, holds if and only if the receiver has no information on the XOR of the two bits. We then generalize this to 1–2 OT of strings and show that the security can be characterized in terms of binary linear functions. More precisely, we show that the receiver learns only one of the two strings sent if and only if he has no information on the result of applying any binary linear function (which non-trivially depends on both inputs) to the two strings.

We then argue that this result not only gives new insight into the nature of 1–2 OT, but in particular provides a very powerful tool for analyzing 1–2 OT protocols. We demonstrate this by showing that with our characterization at hand, the reducibility of 1–2 OT (of strings) to a wide range of weaker primitives follows by a very simple argument. This is in sharp contrast to previous literature, where reductions of 1–2 OT to weaker flavors have rather complicated and sometimes even incorrect proofs.
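
A toy illustration of the bit case (ours, not from the paper): if a 1–2 OT protocol leaks the XOR of the two bits, a receiver who legitimately learns one bit recovers the other, matching the characterization above.

```python
import secrets

# Sender's two bits in a 1-2 OT of bits
b0, b1 = secrets.randbelow(2), secrets.randbelow(2)

# Receiver legitimately obtains one bit of his choice...
choice = 0
known = b0 if choice == 0 else b1

# ...and suppose the protocol additionally leaks the XOR of the two bits.
leaked_xor = b0 ^ b1

# Then the receiver reconstructs the bit he was NOT supposed to learn,
# so any information on the XOR breaks receiver security.
other = known ^ leaked_xor
assert other == (b1 if choice == 0 else b0)
```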

Ivan B. Damgård, Serge Fehr, Louis Salvail, Christian Schaffner
On Expected Constant-Round Protocols for Byzantine Agreement

In a seminal paper, Feldman and Micali (STOC ’88) show an n-party Byzantine agreement protocol tolerating t < n/3 malicious parties that runs in expected constant rounds. Here, we show an expected constant-round protocol for authenticated Byzantine agreement assuming honest majority (i.e., t < n/2), relying only on the existence of a secure signature scheme and a public-key infrastructure (PKI). Combined with existing results, this gives the first expected constant-round protocol for secure computation with honest majority in a point-to-point network assuming only one-way functions and a PKI. Our key technical tool, a new primitive we introduce called moderated VSS, also yields a simpler proof of the Feldman-Micali result.

We also show a simple technique for sequential composition of protocols without simultaneous termination (something that is inherent for Byzantine agreement protocols using o(n) rounds) for the case of t < n/2.

Jonathan Katz, Chiu-Yuen Koo
Robust Multiparty Computation with Linear Communication Complexity

We present a robust multiparty computation protocol. The protocol is for the cryptographic model with open channels and a poly-time adversary, and allows n parties to actively and securely evaluate any poly-sized circuit with resilience t < n/2. The total communication complexity in bits over the point-to-point channels is ${\mathcal{O}}(S n \kappa + n {\mathcal{BC}})$, where S is the size of the circuit being securely evaluated, κ is the security parameter, and ${\mathcal{BC}}$ is the communication complexity of one broadcast of a κ-bit value. This means that the average number of bits sent and received by a single party is ${\mathcal{O}}(S \kappa + {\mathcal{BC}})$, which is almost independent of the number of participating parties. This is the first robust multiparty computation protocol with this property.

Martin Hirt, Jesper Buus Nielsen
On Combining Privacy with Guaranteed Output Delivery in Secure Multiparty Computation

In the setting of multiparty computation, a set of parties wish to jointly compute a function of their inputs, while preserving security in the case that some subset of them are corrupted. The typical security properties considered are privacy, correctness, independence of inputs, guaranteed output delivery, and fairness. Until now, all works in this area have considered either the case that the corrupted subset of parties constitutes a strict minority, or the case that half or more of the parties are corrupted. Secure protocols for the case of an honest majority achieve full security, and thus output delivery and fairness are guaranteed. However, the security of these protocols is completely compromised if there is no honest majority. In contrast, protocols for the case of no honest majority do not guarantee output delivery, but do provide privacy, correctness, and independence of inputs for any number of corrupted parties. Unfortunately, an adversary controlling only a single party can disrupt the computation of these protocols and prevent output delivery.

In this paper, we study the possibility of obtaining general protocols for multiparty computation that simultaneously guarantee security (allowing abort) in the case that an arbitrary number of parties are corrupted and full security (including guaranteed output delivery) in the case that only a minority of the parties are corrupted. That is, we wish to obtain the best of both worlds in a single protocol, depending on the corruption case. We obtain both positive and negative results on this question, depending on the type of the functionality to be computed (standard or reactive) and the type of dishonest majority (semi-honest or malicious).

Yuval Ishai, Eyal Kushilevitz, Yehuda Lindell, Erez Petrank
Scalable Secure Multiparty Computation

We present the first general protocol for secure multiparty computation which is scalable, in the sense that the amortized work per player does not grow, and in some natural settings even vanishes, with the number of players. Our protocol is secure against an active adversary which may adaptively corrupt up to some constant fraction of the players. The protocol can be implemented in a constant number of rounds assuming the existence of a “computationally simple” pseudorandom generator, or in a small non-constant number of rounds assuming an arbitrary pseudorandom generator.

Ivan Damgård, Yuval Ishai
Algebraic Geometric Secret Sharing Schemes and Secure Multi-Party Computations over Small Fields

We introduce algebraic geometric techniques in secret sharing and in secure multi-party computation (MPC) in particular. The main result is a linear secret sharing scheme (LSSS) defined over a finite field ${\mathbb F}_q$, with the following properties.

1. It is ideal. The number of players n can be as large as $\#C({\mathbb F}_q)$, where C is an algebraic curve of genus g defined over ${\mathbb F}_q$.

2. It is quasi-threshold: it is t-rejecting and (t+1+2g)-accepting, but not necessarily (t+1)-accepting. It is thus in particular a ramp scheme. High information rate can be achieved.

3. It has strong multiplication with respect to the t-threshold adversary structure if $t<\frac{1}{3}n-\frac{4}{3}g$. This is a multi-linear algebraic property on an LSSS facilitating zero-error multi-party multiplication, unconditionally secure against corruption by an active t-adversary.

4. The finite field ${\mathbb F}_q$ can be dramatically smaller than n. This is achieved by using algebraic curves with many ${\mathbb F}_q$-rational points. For example, for each small enough ε, there is a finite field ${\mathbb F}_q$ such that for infinitely many n there is an LSSS over ${\mathbb F}_q$ with strong multiplication satisfying $(\frac{1}{3}- \epsilon) n\leq t < \frac{1}{3}n$.

5. Shamir’s scheme, which requires q > n and which has strong multiplication for $t<\frac{1}{3}n$, is a special case obtained by taking g = 0.

Now consider the classical (“BGW”) scenario of MPC unconditionally secure (with zero error probability) against an active t-adversary with $t<\frac{1}{3}n$, in a synchronous n-player network with secure channels. By known results it now follows that there exist MPC protocols in this scenario achieving the same communication complexity, in terms of the number of field elements exchanged in the network, as known Shamir-based solutions. However, in return for decreasing the corruption tolerance by a small ε-fraction, q may be dramatically smaller than n. This tolerance decrease is unavoidable due to properties of MDS codes. The techniques extend to other models of MPC. Results on less specialized LSSS can be obtained from more general coding-theoretic arguments.
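
Item 5's genus-0 special case is classical Shamir sharing; for reference, a minimal sketch (ours; the prime modulus and parameters are illustrative, and the paper's point is precisely to avoid the q > n requirement via curves of higher genus):

```python
import secrets

P = 2**61 - 1  # prime field F_p; Shamir requires p > n

def share(secret, t, n):
    """Shamir sharing: a random degree-t polynomial with f(0) = secret;
    shares are evaluations at x = 1..n (the g = 0 case above)."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 from any t+1 shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(secret=42, t=2, n=5)
assert reconstruct(shares[:3]) == 42   # any 3 = t+1 shares suffice
```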

Hao Chen, Ronald Cramer
Automated Security Proofs with Sequences of Games

This paper presents the first automatic technique for proving not only protocols but also primitives in the exact-security computational model. Automatic proofs of cryptographic protocols were up to now reserved to the Dolev-Yao model, which however makes quite strong assumptions on the primitives. On the other hand, with proofs by reduction in the complexity-theoretic framework, more subtle security assumptions can be considered, but security analyses are manual. A process calculus is thus defined in order to take into account the probabilistic semantics of the computational model. It is already rich enough to describe all the usual security notions of both symmetric and asymmetric cryptography, as well as the basic computational assumptions. As an example, we illustrate the use of the new tool with a proof of a quite famous asymmetric primitive: unforgeability under chosen-message attacks (UF-CMA) of the Full-Domain Hash signature scheme under the (trapdoor) one-wayness of some permutations.

Bruno Blanchet, David Pointcheval
On Robust Combiners for Private Information Retrieval and Other Primitives

Let $\mathcal A$ and $\mathcal B$ denote cryptographic primitives. A (k,m)-robust $\mathcal A$-to-$\mathcal B$ combiner is a construction which takes m implementations of primitive $\mathcal A$ as input, and yields an implementation of primitive $\mathcal B$ which is guaranteed to be secure as long as at least k of the input implementations are secure. The main motivation for such constructions is tolerance against wrong assumptions on which the security of implementations is based. For example, a (1,2)-robust $\mathcal A$-to-$\mathcal B$ combiner yields a secure implementation of $\mathcal B$ even if an assumption underlying one of the input implementations of $\mathcal A$ turns out to be wrong.

In this work we study robust combiners for private information retrieval (PIR), oblivious transfer (OT), and bit commitment (BC). We propose a (1,2)-robust PIR-to-PIR combiner, and describe various optimizations based on properties of existing PIR protocols. The existence of simple PIR-to-PIR combiners is somewhat surprising, since OT, a very closely related primitive, seems difficult to combine (Harnik et al., Eurocrypt ’05). Furthermore, we present (1,2)-robust PIR-to-OT and PIR-to-BC combiners. To the best of our knowledge these are the first constructions of $\mathcal A$-to-$\mathcal B$ combiners with $\mathcal A \neq \mathcal B$. Such combiners, in addition to being interesting in their own right, offer insights into the relationships between cryptographic primitives. In particular, our PIR-to-OT combiner together with the impossibility result for OT-combiners of Harnik et al. rules out certain types of reductions of PIR to OT. Finally, we suggest a more fine-grained approach to the construction of robust combiners, which may lead to more efficient and practical combiners in many scenarios.

Remo Meier, Bartosz Przydatek
On the Impossibility of Efficiently Combining Collision Resistant Hash Functions

Let $H_1$, $H_2$ be two hash functions. We wish to construct a new hash function H that is collision resistant if at least one of $H_1$ or $H_2$ is collision resistant. Concatenating the outputs of $H_1$ and $H_2$ clearly works, but at the cost of doubling the hash output size. We ask whether a better construction exists, namely, can we hedge our bets without doubling the size of the output? We take a step towards answering this question in the negative — we show that any secure construction that evaluates each hash function once cannot output fewer bits than simply concatenating the given functions.
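
The trivial concatenation combiner mentioned above is easy to state (a sketch; SHA-256 and SHA-1 are stand-in choices for $H_1$ and $H_2$):

```python
import hashlib

def concat_combiner(msg: bytes) -> bytes:
    """H(x) = H1(x) || H2(x).  A collision for H would be a simultaneous
    collision for both H1 and H2, so H is collision resistant if either
    component is, at the cost of doubling the output length, which the
    paper proves is unavoidable for constructions evaluating each
    function once."""
    h1 = hashlib.sha256(msg).digest()
    h2 = hashlib.sha1(msg).digest()
    return h1 + h2

print(concat_combiner(b"hello").hex())
```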

Dan Boneh, Xavier Boyen
On the Higher Order Nonlinearities of Algebraic Immune Functions

One of the most basic requirements concerning Boolean functions used in cryptosystems is that they must have high algebraic degree. This simple criterion is not always well adapted to the concrete situations in which Boolean functions are used in symmetric cryptography, since changing one or several output bits of a Boolean function considerably changes its algebraic degree while possibly not changing its robustness. The proper characteristic is the r-th order nonlinearity profile (which includes the first-order nonlinearity). However, studying it is difficult, and almost no paper in the literature has been able to give general effective results on it. The values of the nonlinearity profile are known for very few functions, and these functions have little cryptographic interest. A recent paper has given a lower bound on the nonlinearity profile of functions, given their algebraic immunity. We improve upon it, and we deduce that it is enough for a Boolean function to have high algebraic immunity in order to have a non-weak low-order nonlinearity profile (even when it cannot be evaluated), except maybe for the first order.

Claude Carlet
New Proofs for NMAC and HMAC: Security Without Collision-Resistance

HMAC was proved in [3] to be a PRF assuming that (1) the underlying compression function is a PRF, and (2) the iterated hash function is weakly collision-resistant. However, recent attacks show that assumption (2) is false for MD5 and SHA-1, removing the proof-based support for HMAC in these cases. This paper proves that HMAC is a PRF under the sole assumption that the compression function is a PRF. This recovers a proof-based guarantee, since no known attacks compromise the pseudorandomness of the compression function, and it also helps explain the resistance to attack that HMAC has shown even when implemented with hash functions whose (weak) collision resistance is compromised. We also show that an even weaker-than-PRF condition on the compression function, namely that it is a privacy-preserving MAC, suffices to establish that HMAC is a secure MAC, as long as the hash function meets the very weak requirement of being computationally almost universal, where again the value lies in the fact that known attacks do not invalidate the assumptions made.
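
For reference, the construction under discussion is standard HMAC, shown here instantiated with SHA-256 (the sanity check against Python's hmac module is our addition; in practice, use the standard library directly):

```python
import hashlib

def hmac_sha256(key: bytes, msg: bytes) -> bytes:
    """Textbook HMAC: HMAC(K, m) = H((K xor opad) || H((K xor ipad) || m)).
    This sketch only spells out the two nested hash calls."""
    block = 64  # SHA-256 block size in bytes
    if len(key) > block:
        key = hashlib.sha256(key).digest()
    key = key.ljust(block, b"\x00")
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha256(ipad + msg).digest()
    return hashlib.sha256(opad + inner).digest()

import hmac  # sanity check against the standard library
assert hmac_sha256(b"k", b"m") == hmac.new(b"k", b"m", hashlib.sha256).digest()
```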

Mihir Bellare
Backmatter
Metadata
Title
Advances in Cryptology - CRYPTO 2006
Editor
Cynthia Dwork
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-37433-6
Print ISBN
978-3-540-37432-9
DOI
https://doi.org/10.1007/11818175
