2008 | Book

Advances in Cryptology – CRYPTO 2008

28th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 17-21, 2008. Proceedings

About this Book

This book constitutes the refereed proceedings of the 28th Annual International Cryptology Conference, CRYPTO 2008, held in Santa Barbara, CA, USA in August 2008. The 32 revised full papers presented were carefully reviewed and selected from 184 submissions. Addressing all current foundational, theoretical and research aspects of cryptology, cryptography, and cryptanalysis as well as advanced applications, the papers are organized in topical sections on random oracles, applications, public-key crypto, hash functions, cryptanalysis, multiparty computation, privacy, zero knowledge, and oblivious transfer.

Table of Contents

Frontmatter

Random Oracles

The Random Oracle Model and the Ideal Cipher Model Are Equivalent

The Random Oracle Model and the Ideal Cipher Model are two well known idealised models of computation for proving the security of cryptosystems. At Crypto 2005, Coron et al. showed that security in the random oracle model implies security in the ideal cipher model; namely they showed that a random oracle can be replaced by a block cipher-based construction, and the resulting scheme remains secure in the ideal cipher model. The other direction was left as an open problem, i.e. constructing an ideal cipher from a random oracle. In this paper we solve this open problem and show that the Feistel construction with 6 rounds is enough to obtain an ideal cipher; we also show that 5 rounds are insufficient by providing a simple attack. This contrasts with the classical Luby-Rackoff result that 4 rounds are necessary and sufficient to obtain a (strong) pseudo-random permutation from a pseudo-random function.

Jean-Sébastien Coron, Jacques Patarin, Yannick Seurin
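
As a rough illustration of the object under study (a sketch, not the paper's indifferentiability proof), the following builds a 6-round Feistel permutation whose round functions are derived from SHA-256 standing in for independent random oracles; the block size and key-free round derivation are assumptions made for the example.

```python
# Minimal 6-round Feistel sketch; SHA-256 models the random oracle.
import hashlib

def round_fn(round_idx: int, half: bytes) -> bytes:
    """Model each round function as an independent random oracle."""
    return hashlib.sha256(bytes([round_idx]) + half).digest()[:len(half)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_encrypt(block: bytes, rounds: int = 6) -> bytes:
    half = len(block) // 2
    left, right = block[:half], block[half:]
    for i in range(rounds):
        # One Feistel round: (L, R) -> (R, L xor F_i(R))
        left, right = right, xor(left, round_fn(i, right))
    return left + right

def feistel_decrypt(block: bytes, rounds: int = 6) -> bytes:
    half = len(block) // 2
    left, right = block[:half], block[half:]
    for i in reversed(range(rounds)):
        # Invert one round: (L', R') -> (R' xor F_i(L'), L')
        left, right = xor(right, round_fn(i, left)), left
    return left + right

msg = b"0123456789abcdef"
assert feistel_decrypt(feistel_encrypt(msg)) == msg

```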
Programmable Hash Functions and Their Applications

We introduce a new information-theoretic primitive called programmable hash functions (PHFs). PHFs can be used to program the output of a hash function such that it contains solved or unsolved discrete logarithm instances with a certain probability. This is a technique originally used for security proofs in the random oracle model. We give a variety of standard model realizations of PHFs (with different parameters).

The programmability of PHFs makes them a suitable tool to obtain black-box proofs of cryptographic protocols when considering adaptive attacks. We propose generic digital signature schemes from the strong RSA problem and from some hardness assumption on bilinear maps that can be instantiated with any PHF. Our schemes offer various improvements over known constructions. In particular, for a reasonable choice of parameters, we obtain short standard model digital signatures over bilinear maps.

Dennis Hofheinz, Eike Kiltz

Applications

One-Time Programs

In this work, we introduce one-time programs, a new computational paradigm geared towards security applications. A one-time program can be executed on a single input, whose value can be specified at run time. Other than the result of the computation on this input, nothing else about the program is leaked. Hence, a one-time program is like a black box function that may be evaluated once and then “self destructs.” This also extends to $k$-time programs, which are like black box functions that can be evaluated $k$ times and then self destruct.

One-time programs serve many of the same purposes as program obfuscation, the obvious one being software protection, but also including applications such as temporary transfer of cryptographic ability. Moreover, the applications of one-time programs go well beyond those of obfuscation, since one-time programs can only be executed once (or more generally, a limited number of times) while obfuscated programs have no such bounds. For example, one-time programs lead naturally to electronic cash or token schemes: coins are generated by a program that can only be run once, and thus cannot be double spent.

Most significantly, the new paradigm of one-time computing opens new avenues for conceptual research. In this work we explore one such avenue, presenting the new concept of “one-time proofs,” proofs that can only be verified once and then become useless and unconvincing.

All these tasks are clearly impossible using software alone, as any piece of software can be copied and run again, enabling the user to execute the program on more than one input. All our solutions employ a secure memory device, inspired by the cryptographic notion of interactive oblivious transfer protocols, that stores two secret keys $(k_0, k_1)$. The device takes as input a single bit $b \in \{0,1\}$, outputs $k_b$, and then self destructs. Using such devices, we demonstrate that for every input length, any standard program (Turing machine) can be efficiently compiled into a functionally equivalent one-time program. We also show how this memory device can be used to construct one-time proofs. Specifically, we show how to use this device to efficiently convert a classical witness for any NP statement into a “one-time proof” for that statement.

Shafi Goldwasser, Yael Tauman Kalai, Guy N. Rothblum
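
To make the hardware assumption concrete, here is a minimal software model of the one-time memory device the abstract describes; real one-time semantics of course require tamper-proof hardware, so this class is illustrative only.

```python
# Software model of a one-time memory device: stores (k_0, k_1), reveals
# exactly one key on input bit b, then self-destructs. Illustration only;
# software alone cannot enforce the one-time property.
class OneTimeMemory:
    def __init__(self, k0: bytes, k1: bytes):
        self._keys = [k0, k1]
        self._used = False

    def read(self, b: int) -> bytes:
        """Output k_b for b in {0,1}, then self-destruct."""
        if self._used:
            raise RuntimeError("device has self-destructed")
        if b not in (0, 1):
            raise ValueError("input must be a single bit")
        self._used = True
        key = self._keys[b]
        self._keys = None  # erase both keys
        return key

otm = OneTimeMemory(b"secret-0", b"secret-1")
assert otm.read(1) == b"secret-1"
# A second call, otm.read(0), would now raise RuntimeError.

```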
Adaptive One-Way Functions and Applications

We introduce new and general complexity theoretic hardness assumptions. These assumptions abstract out concrete properties of a random oracle and are significantly stronger than traditional cryptographic hardness assumptions; however, assuming their validity we can resolve a number of long-standing open problems in cryptography.

Omkant Pandey, Rafael Pass, Vinod Vaikuntanathan

Public-Key Crypto I

Bits Security of the Elliptic Curve Diffie–Hellman Secret Keys

We show that the least significant bits (LSB) of the elliptic curve Diffie–Hellman secret keys are hardcore. More precisely, we prove that if one can efficiently predict the LSB with non-negligible advantage on a polynomial fraction of all the curves defined over a given finite field $\mathbb{F}_p$, then with polynomial factor overhead, one can compute the entire Diffie–Hellman secret on a polynomial fraction of all the curves over the same finite field. Our approach is based on random self-reducibility (assuming GRH) of the Diffie–Hellman problem among elliptic curves of the same order. As a part of the argument, we prove a refinement of H. W. Lenstra's lower bounds on the sizes of the isogeny classes of elliptic curves, which may be of independent interest.

Dimitar Jetchev, Ramarathnam Venkatesan
Improved Bounds on Security Reductions for Discrete Log Based Signatures

Despite considerable research efforts, no efficient reduction from the discrete log problem to forging a discrete log based signature (e.g. Schnorr) is currently known. In fact, negative results are known. Paillier and Vergnaud [PV05] show that the forgeability of several discrete log based signatures cannot be equivalent to solving the discrete log problem in the standard model, assuming the so-called one-more discrete log assumption and algebraic reductions. They also show, under the same assumptions, that any security reduction in the Random Oracle Model (ROM) from discrete log to forging a Schnorr signature must lose a factor of at least $\sqrt{q_h}$ in the success probability. Here $q_h$ is the number of queries the forger makes to the random oracle. The best known positive result, due to Pointcheval and Stern [PS00], also in the ROM, gives a reduction that loses a factor of $q_h$. In this paper, we improve the negative result from [PV05]. In particular, we show that any algebraic reduction in the ROM from discrete log to forging a Schnorr signature must lose a factor of at least $q_h^{2/3}$, assuming the one-more discrete log assumption. We also hint at certain circumstances (by way of restrictions on the forger) under which this lower bound may be tight. These negative results indicate that huge loss factors may be inevitable in reductions from discrete log to discrete log based signatures.

Sanjam Garg, Raghav Bhaskar, Satyanarayana V. Lokam
Circular-Secure Encryption from Decision Diffie-Hellman

We describe a public-key encryption system that remains secure even encrypting messages that depend on the secret keys in use. In particular, it remains secure under a “key cycle” usage, where we have a cycle of public/secret key-pairs $(\mathrm{pk}_i, \mathrm{sk}_i)$ for $i = 1, \ldots, n$, and we encrypt each $\mathrm{sk}_i$ under $\mathrm{pk}_{(i \bmod n)+1}$. Such usage scenarios sometimes arise in key-management systems and in the context of anonymous credential systems. Also, security against key cycles plays a role when relating “axiomatic security” of protocols that use encryption to the “computational security” of concrete instantiations of these protocols.

The existence of encryption systems that are secure in the presence of key cycles was wide open until now: on the one hand we had no constructions that provably meet this notion of security (except by relying on the random-oracle heuristic); on the other hand we had no examples of secure encryption systems that become demonstrably insecure in the presence of key-cycles of length greater than one.

Here we construct an encryption system that is circular-secure against chosen-plaintext attacks under the Decision Diffie-Hellman assumption (without relying on random oracles). Our proof of security holds even if the adversary obtains an encryption clique, that is, encryptions of $\mathrm{sk}_i$ under $\mathrm{pk}_j$ for all $1 \le i, j \le n$. We also construct a circular counterexample: a one-way secure encryption scheme that breaks completely if an encryption cycle (of any size) is published.

Dan Boneh, Shai Halevi, Mike Hamburg, Rafail Ostrovsky
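
The “key cycle” usage pattern is easy to state in code. The sketch below shows only the cycle indexing ($\mathrm{sk}_i$ encrypted under $\mathrm{pk}_{(i \bmod n)+1}$) with placeholder keygen/encrypt callables; the toy XOR scheme is deliberately insecure and merely stands in for any public-key encryption.

```python
import os

def key_cycle(keygen, encrypt, n: int):
    """Encrypt each sk_i under pk_((i mod n)+1), 1-indexed as in the abstract."""
    pairs = [keygen() for _ in range(n)]  # pairs[i-1] == (pk_i, sk_i)
    cycle = []
    for i in range(1, n + 1):
        pk_next, _ = pairs[i % n]         # pk_((i mod n)+1)
        _, sk_i = pairs[i - 1]
        cycle.append(encrypt(pk_next, sk_i))
    return cycle

# Toy stand-ins, NOT secure: they only illustrate the cycle structure.
toy_keygen = lambda: (os.urandom(16), os.urandom(16))
toy_encrypt = lambda pk, m: bytes(a ^ b for a, b in zip(pk, m))
ciphertexts = key_cycle(toy_keygen, toy_encrypt, n=3)
assert len(ciphertexts) == 3

```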
Public-Key Locally-Decodable Codes

In this paper we introduce the notion of a Public-Key Encryption Scheme that is also a Locally-Decodable Error-Correcting Code (PKLDC). In particular, we allow any polynomial-time adversary to read the entire ciphertext, and corrupt a constant fraction of the bits of the entire ciphertext. Nevertheless, the decoding algorithm can recover any bit of the plaintext with all but negligible probability by reading only a sublinear number of bits of the (corrupted) ciphertext.

We give a general construction of a PKLDC from any Semantically-Secure Public Key Encryption (SS-PKE) and any Private Information Retrieval (PIR) protocol. Since Homomorphic encryption implies PIR, we also show a reduction from any Homomorphic encryption protocol to PKLDC.

Applying our construction to the best known PIR protocol (that of Gentry and Ramzan), we obtain a PKLDC, which for messages of size $n$ and security parameter $k$ achieves ciphertexts of size $\mathcal{O}(n)$, public key of size $\mathcal{O}(n+k)$, and locality of size $\mathcal{O}(k^2)$. This means that for messages of length $n = \omega(k^{2+\epsilon})$, we can decode a bit of the plaintext from a corrupted ciphertext while doing computation sublinear in $n$.

Brett Hemenway, Rafail Ostrovsky

Hash Functions I

Key-Recovery Attacks on Universal Hash Function Based MAC Algorithms

This paper discusses key recovery and universal forgery attacks on several MAC algorithms based on universal hash functions. The attacks use a substantial number of verification queries but eventually allow for universal forgeries instead of existential or multiple forgeries. This means that the security of the algorithms completely collapses once a few forgeries are found. Some of these attacks start off by exploiting a weak key property, but turn out to become full-fledged divide and conquer attacks because of the specific structure of the universal hash functions considered. Partial information on a secret key can be exploited too, in the sense that it renders some key recovery attacks practical as soon as a few key bits are known. These results show that while universal hash functions offer provable security, high speeds and parallelism, their simple combinatorial properties make them less robust than conventional message authentication primitives.

Helena Handschuh, Bart Preneel
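
As background for why simple combinatorial structure can bite, here is a toy polynomial universal hash of the general shape the attacked MACs build on (an illustrative stand-in, not any of the concrete algorithms analyzed in the paper); note how a single weak key collapses it entirely.

```python
# Toy polynomial universal hash: h_key(m) = sum of m_i * key^j mod P.
# Illustration only; parameters are arbitrary assumptions.
P = 2**127 - 1  # prime modulus chosen for the example

def poly_hash(key: int, blocks: list) -> int:
    acc = 0
    for m in blocks:
        acc = (acc + m) * key % P  # Horner evaluation in key
    return acc

# Weak-key property of the kind such attacks exploit: for key == 0 the
# hash collapses, so any two messages collide.
assert poly_hash(0, [1, 2, 3]) == poly_hash(0, [9, 9, 9]) == 0

```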
Cryptanalysis of the GOST Hash Function

In this article, we analyze the security of the GOST hash function. The GOST hash function, defined in the Russian standard GOST 34.11-94, is an iterated hash function producing a 256-bit hash value. As opposed to most commonly used hash functions such as MD5 and SHA-1, the GOST hash function defines, in addition to the common iterative structure, a checksum computed over all input message blocks. This checksum is then part of the final hash value computation.

As a result of our security analysis of the GOST hash function, we present the first collision attack with a complexity of about $2^{105}$ evaluations of the compression function. Furthermore, we are able to significantly improve upon the results of Mendel et al. with respect to preimage and second preimage attacks. Our improved attacks have a complexity of about $2^{192}$ evaluations of the compression function.

Florian Mendel, Norbert Pramstaller, Christian Rechberger, Marcin Kontak, Janusz Szmidt
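
The structural feature the abstract highlights, an iterated hash that additionally feeds a checksum of all message blocks into the final computation, can be sketched generically. This is not the actual GOST 34.11-94 algorithm (in particular, the real compression function and padding differ); SHA-256 is used here only as a placeholder compression function.

```python
# Generic "iterated hash + block checksum" structure, GOST-style. Sketch
# only: placeholder compression function and naive zero-padding.
import hashlib

BLOCK = 32  # 256-bit blocks, as in GOST 34.11-94

def compress(h: bytes, m: bytes) -> bytes:
    return hashlib.sha256(h + m).digest()  # stand-in, not GOST's function

def checksum_hash(message: bytes) -> bytes:
    h = bytes(BLOCK)  # chaining value
    sigma = 0         # running checksum over all blocks mod 2**256
    for i in range(0, len(message), BLOCK):
        block = message[i:i + BLOCK].ljust(BLOCK, b"\x00")
        h = compress(h, block)
        sigma = (sigma + int.from_bytes(block, "big")) % 2**(8 * BLOCK)
    # Unlike plain MD/SHA iteration, the checksum enters the final step.
    return compress(h, sigma.to_bytes(BLOCK, "big"))

digest = checksum_hash(b"example message")

```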
Preimages for Reduced SHA-0 and SHA-1

In this paper, we examine the resistance of the popular hash function SHA-1 and its predecessor SHA-0 against dedicated preimage attacks. In order to assess the security margin of these hash functions against these attacks, two new cryptanalytic techniques are developed:

- Reversing the inversion problem: the idea is to start with an impossible expanded message that would lead to the required digest, and then to correct this message until it becomes valid without destroying the preimage property.

- $P^3$ graphs: an algorithm based on the theory of random graphs that allows the conversion of preimage attacks on the compression function to attacks on the hash function with less effort than traditional meet-in-the-middle approaches.

Combining these techniques, we obtain preimage-style shortcut attacks for up to 45 steps of SHA-1, and up to 50 steps of SHA-0 (out of 80).

Christophe De Cannière, Christian Rechberger

Cryptanalysis I

On the Power of Power Analysis in the Real World: A Complete Break of the KeeLoq Code Hopping Scheme

KeeLoq remote keyless entry systems are widely used for access control purposes such as garage openers or car door systems. We present the first successful differential power analysis attacks on numerous commercially available products employing KeeLoq code hopping. Our new techniques combine side-channel cryptanalysis with specific properties of the KeeLoq algorithm. They allow for efficiently revealing both the secret key of a remote transmitter and the manufacturer key stored in a receiver. As a result, a remote control can be cloned from only ten power traces, allowing for a practical key recovery in a few minutes. After extracting the manufacturer key once, with similar techniques, we demonstrate how to recover the secret key of a remote control and replicate it from a distance, just by eavesdropping on at most two messages. This key-cloning without physical access to the device has serious real-world security implications, as the technically challenging part can be outsourced to specialists. Finally, we mount a denial of service attack on a KeeLoq access control system. All proposed attacks have been verified on several commercial KeeLoq products.

Thomas Eisenbarth, Timo Kasper, Amir Moradi, Christof Paar, Mahmoud Salmasizadeh, Mohammad T. Manzuri Shalmani
Bug Attacks

In this paper we present a new kind of cryptanalytic attack which utilizes bugs in the hardware implementation of computer instructions. The best known example of such a bug is the Intel division bug, which resulted in slightly inaccurate results for extremely rare inputs. Whereas in most applications such bugs can be viewed as a minor nuisance, we show that in the case of RSA (even when protected by OAEP), Pohlig-Hellman, elliptic curve cryptography, and several other schemes, such bugs can be a security disaster: decrypting ciphertexts on any computer which multiplies even one pair of numbers incorrectly can lead to full leakage of the secret key, sometimes with a single well-chosen ciphertext.

Eli Biham, Yaniv Carmeli, Adi Shamir

Multiparty Computation I

Scalable Multiparty Computation with Nearly Optimal Work and Resilience

We present the first general protocol for secure multiparty computation in which the total amount of work required by $n$ players to compute a function $f$ grows only polylogarithmically with $n$ (ignoring an additive term that depends on $n$ but not on the complexity of $f$). Moreover, the protocol is also nearly optimal in terms of resilience, providing computational security against an active, adaptive adversary corrupting a $(1/2 - \epsilon)$ fraction of the players, for an arbitrary $\epsilon > 0$.

Ivan Damgård, Yuval Ishai, Mikkel Krøigaard, Jesper Buus Nielsen, Adam Smith
Cryptographic Complexity of Multi-Party Computation Problems: Classifications and Separations

We develop new tools to study the relative complexities of secure multi-party computation tasks in the Universal Composition framework. When one task can be securely realized using another task as a black-box, we interpret this as a qualitative, complexity-theoretic reduction between the two tasks. Virtually all previous characterizations of MPC functionalities, in the UC model or otherwise, focus exclusively on secure function evaluation. In comparison, the tools we develop do not rely on any special internal structure of the functionality, thus applying to functionalities with arbitrary behavior. Our tools additionally apply uniformly to both the PPT and unbounded computation models.

Our first main tool is an exact characterization of realizability in the UC framework with respect to a large class of communication channel functionalities. Using this characterization, we can rederive all previously-known impossibility results as immediate and simple corollaries. We also complete the combinatorial characterization of 2-party secure function evaluation initiated by [12] and partially extend the combinatorial conditions to the multi-party setting. Our second main tool allows us to translate complexity separations in simpler MPC settings (such as the honest-but-curious corruption model) to the standard (malicious) setting. Using this tool, we demonstrate the existence of functionalities which are neither realizable nor complete, in the unbounded computation model.

Manoj Prabhakaran, Mike Rosulek

Cryptanalysis II

Cryptanalysis of MinRank

In this paper, we investigate the difficulty of one of the most relevant problems in multivariate cryptography, namely MinRank, about which no real progress has been reported since [9, 19]. Our starting point is the Kipnis-Shamir attack [19]. We first show new properties of the ideal generated by Kipnis-Shamir's equations. We then propose a new modeling of the problem. Concerning the practical resolution, we adopt a Gröbner basis approach that permitted us to actually solve challenges A and B proposed by Courtois in [8]. Using the multi-homogeneous structure of the algebraic system, we have been able to provide a theoretical complexity bound reflecting the practical behavior of our approach. Namely, when $r$, the dimension of the matrices minus the rank of the target matrix in the MinRank problem, is constant, we have a polynomial time attack of complexity $\mathcal{O}\left(\ln(q)\, n^{3r^2}\right)$. For the challenge C [8], we obtain a theoretical bound of $2^{66.3}$ operations.

Jean-Charles Faugère, Françoise Levy-dit-Vehel, Ludovic Perret
New State Recovery Attack on RC4

The stream cipher RC4 was designed by R. Rivest in 1987, and it is a widely deployed cipher. In this paper we analyse the class RC4-$N$ of RC4-like stream ciphers, where $N$ is the modulus of operations, as well as the length of internal arrays. Our new attack is a state recovery attack which accepts the keystream of a certain length, and recovers the internal state. For the reduced RC4-100, our attack has total complexity of around $2^{93}$ operations, whereas the best previous attack (from Knudsen et al.) needs $2^{236}$ of time.

The complexity of the attack applied to the original RC4-256 depends on the parameters of specific states (patterns), which are in turn hard to discover. Extrapolated parameters from smaller patterns give us an attack of complexity about $2^{241}$, which is much smaller than the complexity of the best known previous attack, $2^{779}$. The algorithm of the new attack was implemented and verified on small cases.

Alexander Maximov, Dmitry Khovratovich
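
For reference, the internal state such an attack recovers is just a permutation of $\{0, \ldots, N-1\}$ plus two indices; the sketch below is the standard RC4 output loop (PRGA) generalized to modulus $N$, with key scheduling omitted for brevity.

```python
def rc4_keystream(S, N=256):
    """RC4-N PRGA: S is a permutation of {0,...,N-1}; yields keystream words."""
    i = j = 0
    while True:
        i = (i + 1) % N
        j = (j + S[i]) % N
        S[i], S[j] = S[j], S[i]       # swap, mutating the internal state
        yield S[(S[i] + S[j]) % N]

# A state recovery attack observes this output and reconstructs S, i, j.
S = list(range(16))                   # toy state; a real one comes from the KSA
gen = rc4_keystream(S, N=16)
prefix = [next(gen) for _ in range(8)]

```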

Public-Key Crypto II

Dynamic Threshold Public-Key Encryption

This paper deals with threshold public-key encryption, which allows a pool of players to decrypt a ciphertext if a given threshold of authorized players cooperate. We generalize this primitive to the dynamic setting, where any user can dynamically join the system as a possible recipient; the sender can dynamically choose the authorized set of recipients for each ciphertext; and the sender can dynamically set the threshold $t$ for decryption capability among the authorized set. We first give a formal security model, which includes strong robustness notions, and then we propose a candidate achieving all the above dynamic properties, that is semantically secure in the standard model under a new non-interactive assumption that fits into the general Diffie-Hellman exponent framework on groups with a bilinear map. It furthermore compares favorably with previous proposals, a.k.a. threshold broadcast encryption, since this is the first threshold public-key encryption with a dynamic authorized set of recipients and a dynamic threshold that provides constant-size ciphertexts.

Cécile Delerablée, David Pointcheval
On Notions of Security for Deterministic Encryption, and Efficient Constructions without Random Oracles

The study of deterministic public-key encryption was initiated by Bellare et al. (CRYPTO '07), who provided the “strongest possible” notion of security for this primitive (called PRIV) and constructions in the random oracle (RO) model. We focus on constructing efficient deterministic encryption schemes without random oracles. To do so, we propose a slightly weaker notion of security, saying that no partial information about encrypted messages should be leaked as long as each message is a-priori hard-to-guess given the others (while PRIV did not have the latter restriction). Nevertheless, we argue that this version seems adequate for many practical applications. We show equivalence of this definition to single-message and indistinguishability-based ones, which are easier to work with. Then we give general constructions of both chosen-plaintext (CPA) and chosen-ciphertext-attack (CCA) secure deterministic encryption schemes, as well as efficient instantiations of them under standard number-theoretic assumptions. Our constructions build on the recently-introduced framework of Peikert and Waters (STOC '08) for constructing CCA-secure probabilistic encryption schemes, extending it to the deterministic-encryption setting as well.

Alexandra Boldyreva, Serge Fehr, Adam O’Neill
Deterministic Encryption: Definitional Equivalences and Constructions without Random Oracles

We strengthen the foundations of deterministic public-key encryption via definitional equivalences and standard-model constructs based on general assumptions. Specifically we consider seven notions of privacy for deterministic encryption, including six forms of semantic security and an indistinguishability notion, and show them all equivalent. We then present a deterministic scheme for the secure encryption of uniformly and independently distributed messages based solely on the existence of trapdoor one-way permutations. We show a generalization of the construction that allows secure deterministic encryption of independent high-entropy messages. Finally we show relations between deterministic and standard (randomized) encryption.

Mihir Bellare, Marc Fischlin, Adam O’Neill, Thomas Ristenpart
Communication Complexity in Algebraic Two-Party Protocols

In cryptography, there has been tremendous success in building various two-party protocols with small communication complexity out of homomorphic semantically-secure encryption schemes, using their homomorphic properties in a black-box way. A few notable examples of such primitives include items like single database Private Information Retrieval (PIR) schemes (introduced in [15]) and private database update with small communication (introduced in [5]). In this paper, we illustrate a general methodology for determining what types of protocols can and cannot be implemented with small communication by using homomorphic encryption in a black-box way.

We hope that this work will provide a simple “litmus test” of feasibility for black-box use of known homomorphic encryption schemes by other cryptographic researchers attempting to develop new protocols with low communication. Additionally, a precise mathematical language for reasoning about such problems is developed in this work, which may be of independent interest. We stress that the class of algebraic structures for which we prove communication complexity lower bounds is large, and covers practically all known semantically-secure homomorphic cryptosystems (including those based upon bilinear maps).

Finally, we show the following equivalence which relates group homomorphic encryption and a major open question of designing a so-called fully-homomorphic cryptosystem: a fully homomorphic encryption scheme (over a non-zero ring) exists if and only if there exists homomorphic encryption over any finite non-abelian simple group. This result somewhat generalizes results of Barrington [1] (to any group containing a finite non-abelian simple subgroup) and of Maurer and Rhodes [18], and in fact gives a constructive proof of the 1974 result of Werner [28]. (This also answers an open question posed by Rappe in [23], who in 2004 proved a special case of this result.)

Rafail Ostrovsky, William E. Skeith III

Privacy

Distributed Private Data Analysis: Simultaneously Solving How and What

We examine the combination of two directions in the field of privacy concerning computations over distributed private inputs: secure function evaluation (SFE) and differential privacy. While in both the goal is to privately evaluate some function of the individual inputs, the privacy requirements are significantly different. The general feasibility results for SFE suggest a natural paradigm for implementing differentially private analyses distributively: first choose what to compute, i.e., a differentially private analysis; then decide how to compute it, i.e., construct an SFE protocol for this analysis. We initiate an examination of whether there are advantages to a paradigm where both decisions are made simultaneously. In particular, we investigate under which accuracy requirements it is beneficial to adapt this paradigm for computing a collection of functions including Binary Sum, Gap Threshold, and Approximate Median queries. Our results yield new separations between the local and global models of computations for private data analysis.

Amos Beimel, Kobbi Nissim, Eran Omri
New Efficient Attacks on Statistical Disclosure Control Mechanisms

The goal of a statistical database is to provide statistics about a population while simultaneously protecting the privacy of the individual records in the database. The tension between privacy and usability of statistical databases has attracted much attention in statistics, theoretical computer science, security, and database communities in recent years. A line of research initiated by Dinur and Nissim investigates for a particular type of queries, lower bounds on the distortion needed in order to prevent gross violations of privacy. The first result in the current paper simplifies and sharpens the Dinur and Nissim result.

The Dinur-Nissim style results are strong because they demonstrate insecurity of all low-distortion privacy mechanisms. The attacks have an all-or-nothing flavor: letting $n$ denote the size of the database, $\Omega(n)$ queries are made before anything is learned, at which point $\Theta(n)$ secret bits are revealed. Restricting attention to a wide and realistic subset of possible low-distortion mechanisms, our second result is a more acute attack, requiring only a fixed number of queries for each bit revealed.

Cynthia Dwork, Sergey Yekhanin
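
To make the flavor of such reconstruction attacks concrete, here is a brute-force toy version (exponential in the database size and purely illustrative; the actual attacks are far more efficient): with subset-sum answers distorted by at most $E$, the attacker keeps exactly those candidate databases consistent with every answer.

```python
# Toy reconstruction attack: enumerate all candidate 0/1 databases and keep
# those whose subset sums agree with every (noisy) answer within E.
from itertools import product

def reconstruct(n, queries, answers, E):
    survivors = []
    for cand in product([0, 1], repeat=n):
        if all(abs(sum(cand[i] for i in q) - a) <= E
               for q, a in zip(queries, answers)):
            survivors.append(cand)
    return survivors

secret = (1, 0, 1, 1)
queries = [(0, 1), (1, 2, 3), (0, 2), (0, 1, 2, 3)]
answers = [sum(secret[i] for i in q) for q in queries]  # distortion E = 0
assert reconstruct(4, queries, answers, E=0) == [secret]

```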
Hash Functions II

Beyond Uniformity: Better Security/Efficiency Tradeoffs for Compression Functions

Suppose we are given a perfect $(n+c)$-to-$n$ bit compression function $f$ and we want to construct a larger $(m+s)$-to-$s$ bit compression function $H$ instead. What level of security, in particular collision resistance, can we expect from $H$ if it makes $r$ calls to $f$? We conjecture that typically collisions can be found in $2^{(nr + cr - m)/(r+1)}$ queries. This bound is also relevant for building an $(m+s)$-to-$s$ bit compression function based on a blockcipher with $k$-bit keys and $n$-bit blocks: simply set $c = k$, or $c = 0$ in case of fixed keys.

We also exhibit a number of (conceptual) compression functions whose collision resistance is close to this bound. In particular, we consider the following four scenarios:

1. A $2n$-to-$n$ bit compression function making two calls to an $n$-to-$n$ bit primitive, providing collision resistance up to $2^{n/3}/n$ queries. This beats a recent bound by Rogaway and Steinberger that $2^{n/4}$ queries to the underlying random $n$-to-$n$ bit function suffice to find collisions in any rate-1/2 compression function. In particular, this shows that Rogaway and Steinberger's recent bound of $2^{(nr - m - s/2)/r}$ queries (for $c = 0$) crucially relies upon a uniformity assumption; a blanket generalization to arbitrary compression functions would be incorrect.

2. A $3n$-to-$2n$ bit compression function making a single call to a $3n$-to-$n$ bit primitive, providing collision resistance up to $2^n$ queries.

3. A $3n$-to-$2n$ bit compression function making two calls to a $2n$-to-$n$ bit primitive, providing collision resistance up to $2^n$ queries.

4. A single call compression function with parameters satisfying $m \le n + c$, $n \le s$, $c \le m$. This result provides a tradeoff between how many bits you can compress for what level of security given a single call to an $(n+c)$-to-$n$ bit random function.

Martijn Stam
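
A quick arithmetic check (our own, with an illustrative $n = 128$ chosen only for the example) confirms that the conjectured bound $2^{(nr+cr-m)/(r+1)}$ reproduces the exponents quoted in the first three scenarios, up to polynomial factors:

```python
# Evaluate the conjectured collision exponent (n*r + c*r - m)/(r + 1)
# for the abstract's scenarios; exact rational arithmetic via Fraction.
from fractions import Fraction

def collision_exponent(n, c, m, r):
    """log2 of the conjectured number of queries to find a collision."""
    return Fraction(n * r + c * r - m, r + 1)

n = 128  # any block size works; only the ratios matter
# 2n-to-n function, two calls to an n-to-n primitive (m=n, c=0, r=2): n/3
assert collision_exponent(n, 0, n, 2) == Fraction(n, 3)
# 3n-to-2n function, one call to a 3n-to-n primitive (m=n, c=2n, r=1): n
assert collision_exponent(n, 2 * n, n, 1) == n
# 3n-to-2n function, two calls to a 2n-to-n primitive (m=n, c=n, r=2): n
assert collision_exponent(n, n, n, 2) == n

```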
Compression from Collisions, or Why CRHF Combiners Have a Long Output

A black-box combiner for collision resistant hash functions (CRHF) is a construction which given black-box access to two hash functions is collision resistant if at least one of the components is collision resistant.

In this paper we prove a lower bound on the output length of black-box combiners for CRHFs. The bound we prove is basically tight, as it is achieved by a recent construction of Canetti et al. [Crypto'07]. The best previously known lower bounds only ruled out a very restricted class of combiners having a very strong security reduction: the reduction was required to output collisions for both underlying candidate hash-functions given a single collision for the combiner (Canetti et al. [Crypto'07], building on Boneh and Boyen [Crypto'06] and Pietrzak [Eurocrypt'07]).

Our proof uses a lemma similar to the elegant “reconstruction lemma” of Gennaro and Trevisan [FOCS'00], which states that any function which is not one-way is compressible (and thus a uniformly random function must be one-way). In a similar vein we show that a function which is not collision resistant is compressible. We also borrow ideas from recent work by Haitner et al. [FOCS'07], who show that one can prove the reconstruction lemma even relative to some very powerful oracles (in our case this will be an exponential time collision-finding oracle).

Krzysztof Pietrzak
Constructing Cryptographic Hash Functions from Fixed-Key Blockciphers

We propose a family of compression functions built from fixed-key blockciphers and investigate their collision and preimage security in the ideal-cipher model. The constructions have security approaching and in many cases equaling the security upper bounds found in previous work of the authors [24]. In particular, we describe a $2n$-bit to $n$-bit compression function using three $n$-bit permutation calls that has collision security $N^{0.5}$, where $N = 2^n$, and we describe $3n$-bit to $2n$-bit compression functions using five and six permutation calls and having collision security of at least $N^{0.55}$ and $N^{0.63}$.

Phillip Rogaway, John Steinberger

Zero Knowledge

Efficient Constructions of Composable Commitments and Zero-Knowledge Proofs

Canetti et al. [7] recently proposed a new framework — termed Generalized Universal Composability (GUC) — for properly analyzing concurrent execution of cryptographic protocols in the presence of a global setup, and constructed the first known GUC-secure implementations of commitment (GUCC) and zero-knowledge (GUC ZK), which suffice to implement any two-party or multi-party functionality under several natural and relatively mild setup assumptions. Unfortunately, the feasibility results of [7] used rather inefficient constructions.

In this paper, we dramatically improve the efficiency of (adaptively-secure) GUCC and GUC ZK assuming data erasures are allowed. Namely, using the same minimal setup assumptions as those used by [7], we build:

- a direct and efficient constant-round GUC ZK for $R$ from any “dense” $\Omega$-protocol [21] for $R$. As a corollary, we get a semi-efficient construction from any $\Sigma$-protocol for $R$ (without doing the Cook-Levin reduction), and a very efficient GUC ZK for proving knowledge of a discrete log representation;

- the first constant-rate (and constant-round) GUCC scheme.

Additionally, we show how to properly model a random oracle in the GUC framework without losing deniability, which is one of the attractive features of the GUC framework. In particular, by adding the random oracle to the setup assumptions used by [7], we build the first two-round (which we show is optimal), deniable, straight-line extractable and simulatable ZK proof for any NP relation $R$.

Yevgeniy Dodis, Victor Shoup, Shabsi Walfish
Noninteractive Statistical Zero-Knowledge Proofs for Lattice Problems

We construct noninteractive statistical zero-knowledge (NISZK) proof systems for a variety of standard approximation problems on lattices, such as the shortest independent vectors problem and the complement of the shortest vector problem. Prior proof systems for lattice problems were either interactive or leaked knowledge (or both).

Our systems are the first known NISZK proofs for any cryptographically useful problems that are not related to integer factorization. In addition, they are proofs of knowledge, have reasonable complexity, and generally admit efficient prover algorithms (given appropriate auxiliary input). In some cases, they even imply the first known interactive statistical zero-knowledge proofs for certain cryptographically important lattice problems.

We also construct an NISZK proof for a special kind of disjunction (i.e., OR gate) related to the shortest vector problem. This may serve as a useful tool in potential constructions of noninteractive (computational) zero-knowledge proofs for NP based on lattice assumptions.

Chris Peikert, Vinod Vaikuntanathan

Oblivious Transfer

A Framework for Efficient and Composable Oblivious Transfer

We propose a simple and general framework for constructing oblivious transfer (OT) protocols that are efficient, universally composable, and generally realizable under any one of a variety of standard number-theoretic assumptions, including the decisional Diffie-Hellman assumption, the quadratic residuosity and decisional composite residuosity assumptions, and worst-case lattice assumptions.

Our OT protocols are round-optimal (one message each way), quite efficient in computation and communication, and can use a single common string for an unbounded number of executions between the same sender and receiver. Furthermore, the protocols can provide statistical security to either the sender or the receiver, simply by changing the distribution of the common string. For certain instantiations of the protocol, even a common uniformly random string suffices.

Our key technical contribution is a simple abstraction that we call a dual-mode cryptosystem. We implement dual-mode cryptosystems by taking a unified view of several cryptosystems that have what we call “messy” public keys, whose defining property is that a ciphertext encrypted under such a key carries no information (statistically) about the encrypted message.

As a contribution of independent interest, we also provide a multi-bit amortized version of Regev's lattice-based cryptosystem (STOC 2005) whose time and space complexity are improved by a linear factor in the security parameter $n$. The resulting amortized encryption and decryption times are only $\tilde{O}(n)$ bit operations per message bit, and the ciphertext expansion can be made as small as a constant; the public key size and underlying lattice assumption remain essentially the same.

Chris Peikert, Vinod Vaikuntanathan, Brent Waters
Founding Cryptography on Oblivious Transfer – Efficiently

We present a simple and efficient compiler for transforming secure multi-party computation (MPC) protocols that enjoy security only with an honest majority into MPC protocols that guarantee security with no honest majority, in the oblivious-transfer (OT) hybrid model. Our technique works by combining a secure protocol in the honest majority setting with a protocol achieving only security against semi-honest parties in the setting of no honest majority.

Applying our compiler to variants of protocols from the literature, we get several applications for secure two-party computation and for MPC with no honest majority. These include:

- Constant-rate two-party computation in the OT-hybrid model. We obtain a statistically UC-secure two-party protocol in the OT-hybrid model that can evaluate a general circuit $C$ of size $s$ and depth $d$ with a total communication complexity of $O(s) + \mathrm{poly}(k, d, \log s)$ and $O(d)$ rounds. The above result generalizes to a constant number of parties.

- Extending OTs in the malicious model. We obtain a computationally efficient protocol for generating many string OTs from few string OTs with only a constant amortized communication overhead compared to the total length of the string OTs.

- Black-box constructions for constant-round MPC with no honest majority. We obtain general computationally UC-secure MPC protocols in the OT-hybrid model that use only a constant number of rounds, and only make black-box access to a pseudorandom generator. This gives the first constant-round protocols for three or more parties that only make a black-box use of cryptographic primitives (and avoid expensive zero-knowledge proofs).

Yuval Ishai, Manoj Prabhakaran, Amit Sahai
Multiparty Computation II

Efficient Secure Linear Algebra in the Presence of Covert or Computationally Unbounded Adversaries

In this work we study the design of secure protocols for linear algebra problems. All current solutions to the problem are either inefficient in terms of communication complexity or assume that the adversary is honest but curious. We design protocols for two different adversarial settings: first, we achieve security in the presence of a covert adversary, a notion recently introduced by [Aumann and Lindell, TCC 2007]. Roughly speaking, this guarantees that if the adversary deviates from the protocol in a way that allows him to cheat, then he will be caught with good probability. Second, we achieve security against arbitrary malicious behaviour in the presence of a computationally unbounded adversary that controls less than a third of the parties. Our main result is a new upper bound of $O(n^{2 + 1/t})$ communication for testing singularity of a shared $n \times n$ matrix in a constant number of rounds, for any constant $t$, in both of these adversarial environments. We use this construction to design secure protocols for computing the rank of a shared matrix and solving a shared linear system of equations with similar efficiency.

We use different techniques from computer algebra, together with recent ideas from [Cramer, Kiltz, and Padró, CRYPTO 2007], to reduce the problem of securely deciding singularity to the problem of securely computing matrix product. We then design new and efficient protocols for secure matrix product in both adversarial settings. In the two-party setting, we combine cut-and-choose techniques on random additive decomposition of the input, with a careful use of the random strings of a homomorphic encryption scheme to achieve simulation-based security. Thus, our protocol avoids general zero-knowledge proofs and only makes a black-box use of a homomorphic encryption scheme.

Payman Mohassel, Enav Weinreb
Collusion-Free Protocols in the Mediated Model

Prior approaches [14, 15] to building collusion-free protocols require exotic channels. By taking a conceptually new approach, we are able to use a more digitally-friendly communication channel to construct protocols that achieve a stronger collusion-free property.

We consider a communication channel which can filter and rerandomize message traffic. We then provide a new security definition that captures collusion-freeness in this new setting; our new setting even allows the mediator to be corrupted, in which case security gracefully degrades to standard privacy and correctness. This stronger notion makes the property useful in more settings.

To illustrate feasibility, we construct a commitment scheme and a zero-knowledge proof of knowledge that meet our definition in its two variations.

Joël Alwen, Abhi Shelat, Ivan Visconti
Backmatter
Metadata
Title
Advances in Cryptology – CRYPTO 2008
Edited by
David Wagner
Copyright Year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-85174-5
Print ISBN
978-3-540-85173-8
DOI
https://doi.org/10.1007/978-3-540-85174-5
