
2010 | Book

Advances in Cryptology – EUROCRYPT 2010

29th Annual International Conference on the Theory and Applications of Cryptographic Techniques, French Riviera, May 30 – June 3, 2010. Proceedings


About this book

These are the proceedings of Eurocrypt 2010, the 29th in the series of European conferences on the Theory and Application of Cryptographic Techniques. The conference was sponsored by the International Association for Cryptologic Research and held on the French Riviera, May 30–June 3, 2010. A total of 191 papers were received of which 188 were retained as valid submissions. These were each assigned to at least three Program Committee members and a total of 606 review reports were produced. The printed record of the reviews and extensive online discussions that followed would be almost as voluminous as these proceedings. In the end 35 submissions were accepted, with two submission pairs being merged to give 33 papers presented at the conference. The final papers in these proceedings were not subject to a second review before publication and the authors are responsible for their contents. The Program Committee, listed on the next page, deserves particular thanks for all their hard work, their outstanding expertise, and their constant commitment to all aspects of the evaluation process. These thanks are of course extended to the very many external reviewers who took the time to help out during the evaluation process. It was also a great pleasure to honor and welcome Moti Yung, who gave the 2010 IACR Distinguished Lecture.

Table of contents

Frontmatter

Cryptosystems I

On Ideal Lattices and Learning with Errors over Rings

The “learning with errors” (LWE) problem is to distinguish random linear equations, which have been perturbed by a small amount of noise, from truly uniform ones. The problem has been shown to be as hard as worst-case lattice problems, and in recent years it has served as the foundation for a plethora of cryptographic applications. Unfortunately, these applications are rather inefficient due to an inherent quadratic overhead in the use of LWE. A main open question was whether LWE and its applications could be made truly efficient by exploiting extra algebraic structure, as was done for lattice-based hash functions (and related primitives).
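For concreteness, here is a minimal sketch of sampling from the LWE distribution; the dimension, modulus, and noise bound below are illustrative toy values, not parameters from the paper.

```python
import random

def lwe_samples(n=8, q=97, num=5, noise=1):
    """Toy LWE samples (a, b) with b = <a, s> + e mod q for a fixed secret s.

    The LWE problem is to distinguish such pairs from pairs where b is
    uniform in Z_q. All parameters here are illustrative only.
    """
    s = [random.randrange(q) for _ in range(n)]      # hidden secret vector
    samples = []
    for _ in range(num):
        a = [random.randrange(q) for _ in range(n)]  # uniform public vector
        e = random.randint(-noise, noise)            # small error term
        b = (sum(x * y for x, y in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples

secret, samples = lwe_samples()
print(samples[0])  # one noisy linear equation in the secret
```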

We resolve this question in the affirmative by introducing an algebraic variant of LWE called ring-LWE, and proving that it too enjoys very strong hardness guarantees. Specifically, we show that the ring-LWE distribution is pseudorandom, assuming that worst-case problems on ideal lattices are hard for polynomial-time quantum algorithms. Applications include the first truly practical lattice-based public-key cryptosystem with an efficient security reduction; moreover, many of the other applications of LWE can be made much more efficient through the use of ring-LWE. Finally, the algebraic structure of ring-LWE might lead to new cryptographic applications previously not known to be based on LWE.

Vadim Lyubashevsky, Chris Peikert, Oded Regev
Fully Homomorphic Encryption over the Integers

We construct a simple fully homomorphic encryption scheme, using only elementary modular arithmetic. We use Gentry’s technique to construct a fully homomorphic scheme from a “bootstrappable” somewhat homomorphic scheme. However, instead of using ideal lattices over a polynomial ring, our bootstrappable encryption scheme merely uses addition and multiplication over the integers. The main appeal of our scheme is its conceptual simplicity.

We reduce the security of our scheme to finding an approximate integer gcd – i.e., given a list of integers that are near-multiples of a hidden integer, output that hidden integer. We investigate the hardness of this task, building on earlier work of Howgrave-Graham.
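As a toy illustration of such a somewhat homomorphic scheme over the integers, the sketch below encrypts a bit m as m + 2r + pq for a secret odd integer p, so that adding and multiplying ciphertexts acts as XOR and AND on the plaintext bits. All sizes are far too small to be secure and are purely illustrative.

```python
import random

def keygen(bits=32):
    # Secret key: a random odd integer p (toy size; real parameters are far larger).
    return random.randrange(2**(bits - 1), 2**bits) | 1

def encrypt(p, m, q_bits=16, r_bound=4):
    # Ciphertext c = m + 2r + p*q: a "near-multiple" of p hiding the bit m.
    q = random.randrange(2**q_bits)
    r = random.randrange(-r_bound, r_bound + 1)
    return m + 2 * r + p * q

def decrypt(p, c):
    # Centered reduction mod p strips p*q; reducing mod 2 strips the even noise 2r.
    return (((c % p) + p // 2) % p - p // 2) % 2

p = keygen()
c0, c1 = encrypt(p, 0), encrypt(p, 1)
assert decrypt(p, c0 + c1) == 1   # ciphertext addition XORs the plaintext bits
assert decrypt(p, c0 * c1) == 0   # ciphertext multiplication ANDs them
```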

Marten van Dijk, Craig Gentry, Shai Halevi, Vinod Vaikuntanathan
Converting Pairing-Based Cryptosystems from Composite-Order Groups to Prime-Order Groups

We develop an abstract framework that encompasses the key properties of bilinear groups of composite order that are required to construct secure pairing-based cryptosystems, and we show how to use prime-order elliptic curve groups to construct bilinear groups with the same properties. In particular, we define a generalized version of the subgroup decision problem and give explicit constructions of bilinear groups in which the generalized subgroup decision assumption follows from the decision Diffie-Hellman assumption, the decision linear assumption, and/or related assumptions in prime-order groups.

We apply our framework and our prime-order group constructions to create more efficient versions of cryptosystems that originally required composite-order groups. Specifically, we consider the Boneh-Goh-Nissim encryption scheme, the Boneh-Sahai-Waters traitor tracing system, and the Katz-Sahai-Waters attribute-based encryption scheme. We give a security theorem for the prime-order group instantiation of each system, using assumptions of comparable complexity to those used in the composite-order setting. Our conversion of the last two systems to prime-order groups answers a problem posed by Groth and Sahai.

David Mandell Freeman
Fully Secure Functional Encryption: Attribute-Based Encryption and (Hierarchical) Inner Product Encryption

We present two fully secure functional encryption schemes: a fully secure attribute-based encryption (ABE) scheme and a fully secure (attribute-hiding) predicate encryption (PE) scheme for inner-product predicates. In both cases, previous constructions were only proven to be selectively secure. Both results use novel strategies to adapt the dual system encryption methodology introduced by Waters. We construct our ABE scheme in composite order bilinear groups, and prove its security from three static assumptions. Our ABE scheme supports arbitrary monotone access formulas. Our predicate encryption scheme is constructed via a new approach on bilinear pairings using the notion of dual pairing vector spaces proposed by Okamoto and Takashima.

Allison Lewko, Tatsuaki Okamoto, Amit Sahai, Katsuyuki Takashima, Brent Waters

Obfuscation and Side Channel Security

Secure Obfuscation for Encrypted Signatures

Obfuscation is one of the most intriguing open problems in cryptography and only a few positive results are known. In TCC’07, Hohenberger et al. proposed an obfuscator for a re-encryption functionality, which takes a ciphertext for a message encrypted under Alice’s public key and transforms it into a ciphertext for the same message under Bob’s public key [24]. It is the first complicated cryptographic functionality that can be securely obfuscated, but obfuscators for such cryptographic functionalities are still elusive. In this paper, we consider obfuscation for encrypted signature (ES) functionalities, which generate a signature on a given message under Alice’s secret signing key and encrypt the signature under Bob’s public encryption key. We propose a special ES functionality, which is the sequential composition of Waters’s signature scheme [33] and the linear encryption scheme proposed by Boneh, Boyen, and Shacham [5], and construct a secure obfuscator for it. We follow the security argument by Hohenberger et al. to prove that our proposed obfuscator satisfies a virtual black-box property (VBP), which guarantees that the security of the signature scheme is preserved even when adversaries are given an obfuscated program. Our security argument is in the standard model.

Satoshi Hada
Public-Key Encryption in the Bounded-Retrieval Model

We construct the first public-key encryption scheme in the Bounded-Retrieval Model (BRM), providing security against various forms of adversarial “key leakage” attacks. In this model, the adversary is allowed to learn arbitrary information about the decryption key, subject only to the constraint that the overall amount of “leakage” is bounded by at most ℓ bits. The goal of the BRM is to design cryptographic schemes that can flexibly tolerate arbitrary leakage bounds ℓ (a few bits or many gigabytes) by only increasing the size of the secret key proportionally, while keeping all the other parameters — including the size of the public key, ciphertext, encryption/decryption time, and the number of secret-key bits accessed during decryption — small and independent of ℓ.

As our main technical tool, we introduce the concept of an Identity-Based Hash Proof System (IB-HPS), which generalizes the notion of hash proof systems of Cramer and Shoup [CS02] to the identity-based setting. We give three different constructions of this primitive based on: (1) bilinear groups, (2) lattices, and (3) quadratic residuosity. As a result of independent interest, we show that an IB-HPS almost immediately yields an Identity-Based Encryption (IBE) scheme which is secure against (small) partial leakage of the target identity’s decryption key. As our main result, we use IB-HPS to construct public-key encryption (and IBE) schemes in the Bounded-Retrieval Model.

Joël Alwen, Yevgeniy Dodis, Moni Naor, Gil Segev, Shabsi Walfish, Daniel Wichs
Protecting Circuits from Leakage: the Computationally-Bounded and Noisy Cases

Physical computational devices leak side-channel information that may, and often does, reveal secret internal states. We present a general transformation that compiles any circuit into a new, functionally equivalent circuit which is resilient against well-defined classes of leakage. Our construction requires a small, stateless and computation-independent leak-proof component that draws random elements from a fixed distribution. In essence, we reduce the problem of shielding arbitrarily complex circuits to the problem of shielding a single, simple component.

Our approach is based on modeling the adversary as a powerful observer that inspects the device via a limited measurement apparatus. We allow the apparatus to access all the bits of the computation (except those inside the leak-proof component) and the amount of leaked information to grow unbounded over time. However, we assume that the apparatus is limited either in its computational ability (namely, it lacks the ability to decode certain linear encodings and outputs a limited number of bits per iteration), or its precision (each observed bit is flipped with some probability). While our results apply in general to such leakage classes, in particular, we obtain security against:

Constant-depth circuits leakage, where the measurement apparatus can be implemented by an AC^0 circuit (namely, a constant-depth circuit composed of NOT gates and unbounded fan-in AND and OR gates), or an ACC^0[p] circuit (which is the same as AC^0, except that it also uses MOD_p gates) which outputs a limited number of bits.

Noisy leakage, where the measurement apparatus reveals all the bits of the state of the circuit, perturbed by independent binomial noise. Namely, each bit of the computation is perturbed with probability p, and remains unchanged with probability 1 − p.
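A minimal simulation of the noisy-leakage measurement just described; the flip probability and state size are arbitrary toy choices, not values from the paper.

```python
import random

def noisy_leakage(state_bits, p=0.1):
    """Reveal every bit of the circuit state, flipping each one
    independently with probability p (a toy rendering of the
    noisy-leakage model above)."""
    return [b ^ (random.random() < p) for b in state_bits]

state = [random.randrange(2) for _ in range(16)]  # bits of one computation step
print(state)
print(noisy_leakage(state))
```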

Sebastian Faust, Tal Rabin, Leonid Reyzin, Eran Tromer, Vinod Vaikuntanathan

2-Party Protocols

Partial Fairness in Secure Two-Party Computation

A seminal result of Cleve (STOC ’86) is that complete fairness is impossible to achieve in two-party computation. In light of this, various techniques for obtaining partial fairness have been suggested in the literature. We propose a definition of partial fairness within the standard real-/ideal-world paradigm that addresses deficiencies of prior definitions. We also show broad feasibility results with respect to our definition: partial fairness is possible for any (randomized) functionality $f : X \times Y \rightarrow Z_1 \times Z_2$ at least one of whose domains or ranges is polynomial in size. Our protocols are always private, and when one of the domains has polynomial size our protocols also simultaneously achieve the usual notion of security with abort. In contrast to some prior work, we rely on standard assumptions only.

We also show that, as far as general feasibility is concerned, our results are optimal (with respect to our definition).

S. Dov Gordon, Jonathan Katz
Secure Message Transmission with Small Public Discussion

In the problem of Secure Message Transmission in the public discussion model (SMT-PD), a Sender wants to send a message to a Receiver privately and reliably. Sender and Receiver are connected by n channels, up to t < n of which may be maliciously controlled by a computationally unbounded adversary, as well as one public channel, which is reliable but not private.

The SMT-PD abstraction has been shown instrumental in achieving secure multi-party computation on sparse networks, where a subset of the nodes are able to realize a broadcast functionality, which plays the role of the public channel. However, the implementation of such a public channel in point-to-point networks is highly costly and non-trivial, which makes minimizing the use of this resource an intrinsically compelling issue.

In this paper, we present the first SMT-PD protocol with sublinear (i.e., logarithmic in m, the message size) communication on the public channel. In addition, the protocol incurs a private communication complexity of $O(\frac{mn}{n-t})$, which, as we also show, is optimal. By contrast, the best known bounds in both public and private channels were linear. Furthermore, our protocol has an optimal round complexity of (3,2), meaning three rounds, two of which must invoke the public channel.

Finally, we ask the question whether some of the lower bounds on resource use for a single execution of SMT-PD can be beaten on average through amortization. In other words, if Sender and Receiver must send several messages back and forth (where later messages depend on earlier ones), can they do better than the naïve solution of repeating an SMT-PD protocol each time? We show that amortization can indeed drastically reduce the use of the public channel: it is possible to limit the total number of uses of the public channel to two, no matter how many messages are ultimately sent between two nodes. (Since two uses of the public channel are required to send any reliable communication whatsoever, this is best possible.)

Juan Garay, Clint Givens, Rafail Ostrovsky
On the Impossibility of Three-Move Blind Signature Schemes

We investigate the possibility of proving security of the well-known blind signature schemes by Chaum, and by Pointcheval and Stern, in the standard model, i.e., without random oracles. We subsume these schemes under a more general class of blind signature schemes and show that finding security proofs for these schemes via black-box reductions in the standard model is hard. Technically, our result deploys meta-reduction techniques, showing that black-box reductions for such schemes could be turned into efficient solvers for hard non-interactive cryptographic problems like RSA or discrete-log. Our approach yields significantly stronger impossibility results than previous meta-reductions in other settings by playing off the two security requirements of blind signatures (unforgeability and blindness).

Marc Fischlin, Dominique Schröder
Efficient Device-Independent Quantum Key Distribution

An efficient protocol for quantum key distribution is proposed, the security of which is entirely device-independent and not even based on the accuracy of quantum physics. A scheme of that type relies on the quantum-physical phenomenon of non-local correlations and on the assumption that no illegitimate information flows within and between Alice’s and Bob’s laboratories. The latter can be enforced via the non-signaling postulate of relativity if all measurements are carried out simultaneously enough.

Esther Hänggi, Renato Renner, Stefan Wolf

Cryptanalysis

New Generic Algorithms for Hard Knapsacks

In this paper, we study the complexity of solving hard knapsack problems, i.e., knapsacks with a density close to 1, where lattice-based low-density attacks are not an option. For such knapsacks, the current state of the art is a 31-year-old algorithm by Schroeppel and Shamir which is based on birthday-paradox techniques and yields a running time of $\tilde{O}(2^{n/2})$ for knapsacks of n elements and uses $\tilde{O}(2^{n/4})$ storage. We propose here two new algorithms which improve on this bound, finally lowering the running time down to either $\tilde{O}(2^{0.385\, n})$ or $\tilde{O}(2^{0.3113\, n})$ under a reasonable heuristic. We also demonstrate the practicality of these algorithms with an implementation.
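For intuition, here is the textbook birthday-style (meet-in-the-middle) solver underlying the $\tilde{O}(2^{n/2})$ baseline; it is not the paper's new algorithms, and the example instance is made up.

```python
from itertools import combinations

def knapsack_mitm(weights, target):
    """Find a subset of `weights` summing to `target` by splitting the set
    in half and matching partial sums: time and space roughly 2^(n/2),
    the baseline the paper's algorithms improve on."""
    n = len(weights)
    left, right = weights[:n // 2], weights[n // 2:]

    def subset_sums(items):
        # Map each achievable sum to one subset (as a tuple of indices).
        sums = {}
        for r in range(len(items) + 1):
            for combo in combinations(range(len(items)), r):
                sums.setdefault(sum(items[i] for i in combo), combo)
        return sums

    left_sums = subset_sums(left)
    for s_right, combo_right in subset_sums(right).items():
        combo_left = left_sums.get(target - s_right)
        if combo_left is not None:
            return ([left[i] for i in combo_left] +
                    [right[i] for i in combo_right])
    return None

print(knapsack_mitm([5, 11, 23, 47, 95, 191], 123))  # -> [5, 23, 95]
```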

Nick Howgrave-Graham, Antoine Joux
Lattice Enumeration Using Extreme Pruning

Lattice enumeration algorithms are the most basic algorithms for solving hard lattice problems such as the shortest vector problem and the closest vector problem, and are often used in public-key cryptanalysis either as standalone algorithms, or as subroutines in lattice reduction algorithms. Here we revisit these fundamental algorithms and show that surprising exponential speedups can be achieved both in theory and in practice by using a new technique, which we call extreme pruning. We also provide what is arguably the first sound analysis of pruning, which was introduced in the 1990s by Schnorr et al.

Nicolas Gama, Phong Q. Nguyen, Oded Regev
Algebraic Cryptanalysis of McEliece Variants with Compact Keys

In this paper we propose a new approach to investigate the security of the McEliece cryptosystem. We recall that this cryptosystem relies on the use of error-correcting codes. Since its invention thirty years ago, no efficient attack had been devised that managed to recover the private key. We prove that the private key of the cryptosystem satisfies a system of bi-homogeneous polynomial equations. This property is due to the particular class of codes considered, namely alternant codes. We have used these highly structured algebraic equations to mount an efficient key-recovery attack against two recent variants of the McEliece cryptosystem that aim at reducing public key sizes. These two compact variants of McEliece managed to propose keys with less than 20,000 bits. To do so, they proposed to use quasi-cyclic or dyadic structures. An implementation of our algebraic attack in the computer algebra system Magma allows us to find the secret key in negligible time (less than one second) for almost all the proposed challenges. For instance, a private key designed for 256-bit security has been found in 0.06 seconds with about $2^{17.8}$ operations.

Jean-Charles Faugère, Ayoub Otmani, Ludovic Perret, Jean-Pierre Tillich
Key Recovery Attacks of Practical Complexity on AES-256 Variants with up to 10 Rounds

AES is the best known and most widely used block cipher. Its three versions (AES-128, AES-192, and AES-256) differ in their key sizes (128 bits, 192 bits and 256 bits) and in their number of rounds (10, 12, and 14, respectively). While for AES-128 there are no known attacks faster than exhaustive search, AES-192 and AES-256 were recently shown to be breakable by attacks which require $2^{176}$ and $2^{99.5}$ time, respectively. While these complexities are much faster than exhaustive search, they are completely non-practical, and do not seem to pose any real threat to the security of AES-based systems.

In this paper we aim to increase our understanding of AES security, and we concentrate on attacks with practical complexity, i.e., attacks that can be experimentally verified. We show attacks on reduced-round variants of AES-256 with up to 10 rounds with complexity which is feasible. One of our attacks uses only two related keys and $2^{39}$ time to recover the complete 256-bit key of a 9-round version of AES-256 (the best previous attack on this variant required 4 related keys and $2^{120}$ time). Another attack can break a 10-round version of AES-256 in $2^{45}$ time, but it uses a stronger type of related-subkey attack (the best previous attack on this variant required 64 related keys and $2^{172}$ time). While the full AES-256 cannot be directly broken by these attacks, the fact that 10 rounds can be broken with such a low complexity raises serious concerns about the remaining safety margin offered by AES-256.

Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, Adi Shamir

IACR Distinguished Lecture

Cryptography between Wonderland and Underland

Cryptography is a very broad field, interdisciplinary in nature, and connected to many other areas (in mathematics, computer science, computer systems and engineering). On the one hand, in theoretical cryptography many new notions have been defined, constructed and improved, especially new protocols and cryptosystems that are very powerful and surprising, including solutions to challenging and even seemingly paradoxical problems. On the other hand, cryptography is often required in actual computing systems, where the computing and communication infrastructure is very dynamic and evolves at a very fast pace. Thus, actual systems may need solutions that are highly constrained, non-trivial, and not covered by merely combining existing cryptographic tools and protocols in a black-box fashion. These solutions are the subject of industrial development of specific cryptographic systems that are much less known than their theoretical counterparts. We discuss the interplay between the theory of cryptographic protocols and actual industrial cryptographic systems, the differences in specifying, analyzing, modeling, designing and validating in each sub-area, as well as the similarity and the mutual influence between the two sub-areas.

Moti Yung

Automated Tools and Formal Methods

Automatic Search for Related-Key Differential Characteristics in Byte-Oriented Block Ciphers: Application to AES, Camellia, Khazad and Others

While differential behavior of modern ciphers in a single secret key scenario is relatively well understood, and simple techniques for computation of security lower bounds are readily available, the security of modern block ciphers against related-key attacks is still very ad hoc. In this paper we make a first step towards provable security of block ciphers against related-key attacks by presenting an efficient search tool for finding differential characteristics both in the state and in the key (note that due to similarities between block ciphers and hash functions, such a tool will be useful in the analysis of hash functions as well). We use this tool to search for the best possible (in terms of the number of rounds) related-key differential characteristics in AES, byte-Camellia, Khazad, FOX, and Anubis. We show the best related-key differential characteristics for 5, 11, and 14 rounds of AES-128, AES-192, and AES-256 respectively. We use the optimal differential characteristics to design the best related-key and chosen-key attacks on AES-128 (7 out of 10 rounds), AES-192 (full 12 rounds), byte-Camellia (full 18 rounds) and Khazad (7 and 8 out of 8 rounds). We also show that the ciphers FOX and Anubis have no related-key attacks on more than 4-5 rounds.

Alex Biryukov, Ivica Nikolić
Plaintext-Dependent Decryption: A Formal Security Treatment of SSH-CTR

This paper presents a formal security analysis of SSH in counter mode in a security model that accurately captures the capabilities of real-world attackers, as well as security-relevant features of the SSH specifications and the OpenSSH implementation of SSH. Under reasonable assumptions on the block cipher and MAC algorithms used to construct the SSH Binary Packet Protocol (BPP), we are able to show that the SSH BPP meets a strong and appropriate notion of security: indistinguishability under buffered, stateful chosen-ciphertext attacks. This result helps to bridge the gap between the existing security analysis of the SSH BPP by Bellare et al. and the recently discovered attacks against the SSH BPP by Albrecht et al. which partially invalidate that analysis.

Kenneth G. Paterson, Gaven J. Watson
Computational Soundness, Co-induction, and Encryption Cycles

We analyze the relation between induction, co-induction and the presence of encryption cycles in the context of computationally sound symbolic equivalence of cryptographic expressions. Our main finding is that the use of co-induction in the symbolic definition of the adversarial knowledge allows one to prove soundness results without requiring syntactic restrictions, like the absence of encryption cycles, that are common to most previous work in the area. Encryption cycles are relevant only to the extent that the key recovery function associated to acyclic expressions can be shown to have a unique fixed point. So, when a cryptographic expression has no encryption cycles, the inductive (least fixed point) and co-inductive (greatest fixed point) security definitions produce the same results, and the computational soundness of the inductive definitions for acyclic expressions follows as a special case of the soundness of the co-inductive definition.

Daniele Micciancio

Models and Proofs

Encryption Schemes Secure against Chosen-Ciphertext Selective Opening Attacks

Imagine many small devices send data to a single receiver, encrypted using the receiver’s public key. Assume an adversary that has the power to adaptively corrupt a subset of these devices. Given the information obtained from these corruptions, do the ciphertexts from uncorrupted devices remain secure?

Recent results suggest that conventional security notions for encryption schemes (like IND-CCA security) do not suffice in this setting. To fill this gap, the notion of security against selective-opening attacks (SOA security) has been introduced. It has been shown that lossy encryption implies SOA security against a passive, i.e., only eavesdropping and corrupting, adversary (SO-CPA). However, the known results on SOA security against an active adversary (SO-CCA) are rather limited. Namely, while there exist feasibility results, the (time and space) complexity of currently known SO-CCA secure schemes depends on the number of devices in the setting above.

In this contribution, we devise a new solution to the selective opening problem that does not build on lossy encryption. Instead, we combine techniques from non-committing encryption and hash proof systems with a new technique (dubbed “cross-authentication codes”) to glue several ciphertext parts together. The result is a rather practical SO-CCA secure public-key encryption scheme that does not suffer from the efficiency drawbacks of known schemes. Since we build upon hash proof systems, our scheme can be instantiated using standard number-theoretic assumptions such as decisional Diffie-Hellman (DDH), decisional composite residuosity (DCR), and quadratic residuosity (QR). Besides, we construct a conceptually very simple and comparatively efficient SO-CPA secure scheme from (slightly enhanced) trapdoor one-way permutations.

We stress that our schemes are completely independent of the number of challenge ciphertexts, and we do not make assumptions about the underlying message distribution (beyond being efficiently samplable). In particular, we do not assume efficient conditional re-samplability of the message distribution. Hence, our schemes are secure in arbitrary settings, even if it is not known in advance how many ciphertexts might be considered for corruptions.

Serge Fehr, Dennis Hofheinz, Eike Kiltz, Hoeteck Wee
Cryptographic Agility and Its Relation to Circular Encryption

We initiate a provable-security treatment of cryptographic agility. A primitive (for example PRFs, authenticated encryption schemes or digital signatures) is agile when multiple, individually secure schemes can securely share the same key. We provide a surprising connection between two seemingly unrelated but challenging questions. The first, new to this paper, is whether wPRFs (weak-PRFs) are agile. The second, already posed several times in the literature, is whether every secure (IND-R) encryption scheme is secure when encrypting cycles. We resolve the second question in the negative and thereby the first as well. We go on to provide a comprehensive treatment of agility, with definitions for various different primitives. We explain the practical motivations for agility. We provide foundational results that show to what extent it is achievable and practical constructions to achieve it to the best extent possible. On the theoretical side our work uncovers new notions and relations and settles stated open questions, and on the practical side it serves to guide developers.

Tolga Acar, Mira Belenkiy, Mihir Bellare, David Cash
Bounded Key-Dependent Message Security

We construct the first public-key encryption scheme that is proven secure (in the standard model, under standard assumptions) even when the attacker gets access to encryptions of arbitrary efficient functions of the secret key. Specifically, under either the DDH or LWE assumption, and for arbitrary but fixed polynomials L and N, we obtain a public-key encryption scheme that resists key-dependent message (KDM) attacks for up to N(k) public keys and functions of circuit size up to L(k), where k denotes the size of the secret key. We call such a scheme bounded KDM secure. Moreover, we show that our scheme suffices for one of the important applications of KDM security: ability to securely instantiate symbolic protocols with axiomatic proofs of security.

We also observe that any fully homomorphic encryption scheme that additionally enjoys circular security and circuit privacy is fully KDM secure in the sense that its algorithms can be independent of the polynomials L and N as above. Thus, the recent fully homomorphic encryption scheme of Gentry (STOC 2009) is fully KDM secure under certain non-standard hardness assumptions.

Finally, we extend an impossibility result of Haitner and Holenstein (TCC 2009), showing that it is impossible to prove KDM security against a family of query functions that contains exponentially hard pseudorandom functions if the proof makes only a black-box use of the query function and the adversary attacking the scheme. This shows that the non-black-box use of the query function in our proof of security is inherent.

Boaz Barak, Iftach Haitner, Dennis Hofheinz, Yuval Ishai

Multiparty Protocols

Perfectly Secure Multiparty Computation and the Computational Overhead of Cryptography

We study the following two related questions:

What are the minimal computational resources required for general secure multiparty computation in the presence of an honest majority?

What are the minimal resources required for two-party primitives such as zero-knowledge proofs and general secure two-party computation?

We obtain a nearly tight answer to the first question by presenting a perfectly secure protocol which allows n players to evaluate an arithmetic circuit of size s by performing a total of $\mathcal{O}(s\log s\log^2 n)$ arithmetic operations, plus an additive term which depends (polynomially) on n and the circuit depth, but only logarithmically on s. Thus, for typical large-scale computations whose circuit width is much bigger than their depth and the number of players, the amortized overhead is just polylogarithmic in n and s. The protocol provides perfect security with guaranteed output delivery in the presence of an active, adaptive adversary corrupting a $(1/3 - \varepsilon)$ fraction of the players, for an arbitrary constant $\varepsilon > 0$ and sufficiently large n. The best previous protocols in this setting could only offer computational security with a computational overhead of $\mathrm{poly}(k, \log n, \log s)$, where k is a computational security parameter, or perfect security with a computational overhead of $\mathcal{O}(n\log n)$.

We then apply the above result towards making progress on the second question. Concretely, under standard cryptographic assumptions, we obtain zero-knowledge proofs for circuit satisfiability with $2^{-k}$ soundness error in which the amortized computational overhead per gate is only polylogarithmic in k, improving over the $\omega(k)$ overhead of the best previous protocols. Under stronger cryptographic assumptions, we obtain similar results for general secure two-party computation.

Ivan Damgård, Yuval Ishai, Mikkel Krøigaard
Adaptively Secure Broadcast

A broadcast protocol allows a sender to distribute a message through a point-to-point network to a set of parties, such that (i) all parties receive the same message, even if the sender is corrupted, and (ii) this is the sender’s message, if he is honest. Broadcast protocols satisfying these properties are known to exist if and only if $t < n/3$, where n denotes the total number of parties, and t denotes the maximal number of corruptions. When a setup allowing signatures is available to the parties, then such protocols exist even for $t < n$.

Since its invention in [LSP82], broadcast has been used as a primitive in numerous multi-party protocols making it one of the fundamental primitives in the distributed-protocols literature. The security of these protocols is analyzed in a model where a broadcast primitive which behaves in an ideal way is assumed. Clearly, a definition of broadcast should allow for secure composition, namely, it should be secure to replace an assumed broadcast primitive by a protocol satisfying this definition. Following recent cryptographic reasoning, to allow secure composition the ideal behavior of broadcast can be described as an ideal functionality, and a simulation-based definition can be used.

In this work, we show that the property-based definition of broadcast does not imply the simulation-based definition for the natural broadcast functionality. In fact, most broadcast protocols in the literature do not securely realize this functionality, which raises a composability issue for these broadcast protocols. In particular, we do not know of any broadcast protocol which could be securely invoked in a multi-party computation protocol in the secure-channels model. The problem is that existing protocols for broadcast do not preserve the secrecy of the message while being broadcasted, and in particular allow the adversary to corrupt the sender (and change the message), depending on the message being broadcasted. For example, when every party should broadcast a random bit, the adversary could corrupt those parties who intend to broadcast 0, and make them broadcast 1.

More concretely, we show that simulatable broadcast in a model with secure channels is possible if and only if $t < n/3$, respectively $t \le n/2$ when a signature setup is available. The positive results are proven by constructing secure broadcast protocols.

Martin Hirt, Vassilis Zikas
Universally Composable Quantum Multi-party Computation

The Universal Composability model (UC) by Canetti (FOCS 2001) allows for secure composition of arbitrary protocols. We present a quantum version of the UC model which enjoys the same compositionality guarantees. We prove that in this model statistically secure oblivious transfer protocols can be constructed from commitments. Furthermore, we show that every statistically classically UC secure protocol is also statistically quantum UC secure. Such implications are not known for other quantum security definitions. As a corollary, we get that quantum UC secure protocols for general multi-party computation can be constructed from commitments.

Dominique Unruh

Cryptosystems II

A Simple BGN-Type Cryptosystem from LWE

We construct a simple public-key encryption scheme that supports polynomially many additions and one multiplication, similar to the cryptosystem of Boneh, Goh, and Nissim (BGN). Security is based on the hardness of the learning with errors (LWE) problem, which is known to be as hard as certain worst-case lattice problems.

Some features of our cryptosystem include support for a large message space, an easy way of achieving formula-privacy, a better message-to-ciphertext expansion ratio than BGN, and an easy way of multiplying two encrypted polynomials. Also, the scheme can be made identity-based and leakage-resilient (at the cost of a higher message-to-ciphertext expansion ratio).
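The following toy sketch is not the paper's scheme; it only illustrates the additive homomorphism that symmetric LWE-based encryption of this general kind inherits: ciphertexts add coordinate-wise, and the encoded bits add mod 2. The dimension, modulus, and noise below are illustrative choices.

```python
import random

N, Q = 16, 4096   # toy dimension and modulus (illustrative only)

def enc(s, m):
    # Symmetric LWE encryption of a bit: c = (a, <a,s> + e + m*Q/2 mod Q).
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randint(-2, 2)
    b = (sum(x * y for x, y in zip(a, s)) + e + m * (Q // 2)) % Q
    return a, b

def dec(s, c):
    a, b = c
    noisy = (b - sum(x * y for x, y in zip(a, s))) % Q
    return int(Q // 4 < noisy <= 3 * Q // 4)   # round to nearest multiple of Q/2

def add(c1, c2):
    # Ciphertexts add coordinate-wise; plaintext bits add mod 2 (noise grows).
    (a1, b1), (a2, b2) = c1, c2
    return [(x + y) % Q for x, y in zip(a1, a2)], (b1 + b2) % Q

s = [random.randrange(Q) for _ in range(N)]
c = add(enc(s, 1), enc(s, 1))
assert dec(s, c) == 0   # 1 + 1 = 0 (mod 2)
```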

Craig Gentry, Shai Halevi, Vinod Vaikuntanathan
Bonsai Trees, or How to Delegate a Lattice Basis

We introduce a new lattice-based cryptographic structure called a bonsai tree, and use it to resolve some important open problems in the area. Applications of bonsai trees include: an efficient, stateless ‘hash-and-sign’ signature scheme in the standard model (i.e., no random oracles), and the first hierarchical identity-based encryption (HIBE) scheme (also in the standard model) that does not rely on bilinear pairings.

Interestingly, the abstract properties of bonsai trees seem to have no known realization in conventional number-theoretic cryptography.

David Cash, Dennis Hofheinz, Eike Kiltz, Chris Peikert
Efficient Lattice (H)IBE in the Standard Model

We construct an efficient identity-based encryption system based on the standard learning with errors (LWE) problem. Our security proof holds in the standard model. The key step in the construction is a family of lattices for which there are two distinct trapdoors for finding short vectors. One trapdoor enables the real system to generate short vectors in all lattices in the family. The other trapdoor enables the simulator to generate short vectors for all lattices in the family except for one. We extend this basic technique to an adaptively-secure IBE and a Hierarchical IBE.

Shweta Agrawal, Dan Boneh, Xavier Boyen

Hash and MAC

Multi-property-preserving Domain Extension Using Polynomial-Based Modes of Operation

In this paper, we propose a new double-piped mode of operation for multi-property-preserving domain extension of MACs (message authentication codes), PRFs (pseudorandom functions) and PROs (pseudorandom oracles). Our mode of operation performs twice as fast as the original double-piped mode of operation of Lucks [15] while providing comparable security. Our construction, which uses a class of polynomial-based compression functions proposed by Stam [22,23], makes a single call to a 3n-bit to n-bit primitive at each iteration and uses a finalization function $f_2$ at the last iteration, producing an n-bit hash function $H[f_1, f_2]$ satisfying the following properties.

1. $H[f_1, f_2]$ is unforgeable up to $O(2^n/n)$ query complexity as long as $f_1$ and $f_2$ are unforgeable.
2. $H[f_1, f_2]$ is pseudorandom up to $O(2^n/n)$ query complexity as long as $f_1$ is unforgeable and $f_2$ is pseudorandom.
3. $H[f_1, f_2]$ is indifferentiable from a random oracle up to $O(2^{2n/3})$ query complexity as long as $f_1$ and $f_2$ are public random functions.

To our knowledge, our result constitutes the first time $O(2^n/n)$ unforgeability has been achieved using only an unforgeable primitive of n-bit output length. (Yasuda showed unforgeability of $O(2^{5n/6})$ for Lucks’ construction assuming an unforgeable primitive, but the analysis is sub-optimal; in the appendix, we show how Yasuda’s bound can be improved to $O(2^n)$.)

In related work, we strengthen Stam’s collision resistance analysis of polynomial-based compression functions (showing that unforgeability of the primitive suffices) and discuss how to implement our mode by replacing $f_1$ with a 2n-bit key blockcipher in Davies-Meyer mode or by replacing $f_1$ with the cascade of two 2n-bit to n-bit compression functions.

Jooyoung Lee, John Steinberger
Stam’s Collision Resistance Conjecture

At CRYPTO 2008 Stam [7] made the following conjecture: if an (m+s)-bit to s-bit compression function F makes r calls to a primitive f of n-bit input, then a collision for F can be obtained (with high probability) using $r\,2^{(nr-m)/(r+1)}$ queries to f. For example, a 2n-bit to n-bit compression function making two calls to a random function of n-bit input cannot have collision security exceeding $2^{n/3}$. We prove this conjecture up to a constant multiplicative factor and under the condition $m' := (2m - n(r-1))/(r+1) \ge \log_2(17)$. This covers nearly all cases $r = 1$ of the conjecture and the aforementioned example of a 2n-bit to n-bit compression function making two calls to a primitive of n-bit input.
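Plugging the example's parameters into the conjectured bound is a one-line check (a worked instance of the formula above):

```latex
% 2n-bit to n-bit compression function: input length m + s = 2n with
% output s = n, hence m = n; it makes r = 2 calls to an n-bit-input primitive.
\[
  r \, 2^{(nr - m)/(r + 1)}
    = 2 \cdot 2^{(2n - n)/3}
    = 2^{\,n/3 + 1},
\]
% i.e., a collision is expected within O(2^{n/3}) queries, matching the
% stated upper limit on collision security.
```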

John Steinberger
Universal One-Way Hash Functions via Inaccessible Entropy

This paper revisits the construction of Universal One-Way Hash Functions (UOWHFs) from any one-way function due to Rompel (STOC 1990). We give a simpler construction of UOWHFs, which also obtains better efficiency and security. The construction exploits a strong connection to the recently introduced notion of inaccessible entropy (Haitner et al. STOC 2009). With this perspective, we observe that a small tweak of any one-way function f is already a weak form of a UOWHF: Consider F(x, i) that outputs the i-bit long prefix of f(x). If F were a UOWHF then given a random x and i it would be hard to come up with $x' \neq x$ such that F(x, i) = F(x', i). While this may not be the case, we show (rather easily) that it is hard to sample x' with almost full entropy among all the possible such values of x'. The rest of our construction simply amplifies and exploits this basic property.
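A toy rendering of the tweaked function F(x, i) from the observation above, with SHA-256 standing in for the one-way function f; the stand-in and the parameter choices are illustrative, not part of the paper.

```python
import hashlib
import os
import random

def f(x: bytes) -> bytes:
    # Stand-in one-way function; SHA-256 is used here purely for illustration.
    return hashlib.sha256(x).digest()

def F(x: bytes, i: int) -> str:
    # Output the i-bit prefix of f(x): the "small tweak" described above.
    bits = ''.join(format(byte, '08b') for byte in f(x))
    return bits[:i]

x = os.urandom(16)             # random input x
i = random.randrange(1, 257)   # random prefix length i (SHA-256 has 256 bits)
print(i, F(x, i))
```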

With this and other recent works, we have that the constructions of three fundamental cryptographic primitives (Pseudorandom Generators, Statistically Hiding Commitments and UOWHFs) out of one-way functions are to a large extent unified. In particular, all three constructions rely on and manipulate computational notions of entropy in similar ways. Pseudorandom Generators rely on the well-established notion of pseudoentropy, whereas Statistically Hiding Commitments and UOWHFs rely on the newer notion of inaccessible entropy.

Iftach Haitner, Thomas Holenstein, Omer Reingold, Salil Vadhan, Hoeteck Wee

Foundational Primitives

Constant-Round Non-malleable Commitments from Sub-exponential One-Way Functions

We present a constant-round non-malleable commitment scheme based on the existence of sub-exponential one-way functions and using a black-box proof of security. As far as we know, this is the first construction of a constant-round non-malleable protocol based on only one-wayness, or to admit a black-box proof of security under any standard-type assumption.

Rafael Pass, Hoeteck Wee
Constructing Verifiable Random Functions with Large Input Spaces

We present a family of verifiable random functions which are provably secure for exponentially large input spaces under a noninteractive complexity assumption. Prior constructions required either an interactive complexity assumption or one that could tolerate a factor $2^n$ security loss for n-bit inputs. Our construction is practical and inspired by the pseudorandom functions of Naor and Reingold and the verifiable random functions of Lysyanskaya. Set in a bilinear group, where the Decisional Diffie-Hellman problem is easy to solve, we require the ℓ-Decisional Diffie-Hellman Exponent assumption in the standard model, without a common reference string. Our core idea is to apply a simulation technique where the large space of VRF inputs is collapsed into a small (polynomial-size) input in the view of the reduction algorithm. This view, however, is information-theoretically hidden from the attacker. Since the input space is exponentially large, we can first apply a collision-resistant hash function to handle arbitrarily large inputs.

Susan Hohenberger, Brent Waters
Adaptive Trapdoor Functions and Chosen-Ciphertext Security

We introduce the notion of adaptive trapdoor functions (ATDFs); roughly, ATDFs remain one-way even when the adversary is given access to an inversion oracle. Our main application is the black-box construction of chosen-ciphertext secure public-key encryption (CCA-secure PKE). Namely, we give a black-box construction of CCA-secure PKE from ATDFs, as well as a construction of ATDFs from correlation-secure TDFs introduced by Rosen and Segev (TCC ’09). Moreover, by an extension of a recent result of Vahlis (TCC ’10), we show that ATDFs are strictly weaker than the latter (in a black-box sense). Thus, adaptivity appears to be the weakest condition on a TDF currently known to yield the first implication.

We also give a black-box construction of CCA-secure PKE from a natural extension of ATDFs we call tag-based ATDFs that, when applied to our constructions of the latter from either correlation-secure TDFs, or lossy TDFs introduced by Peikert and Waters (STOC ’08), yield precisely the CCA-secure PKE schemes in these works. This helps to unify and clarify their schemes. Finally, we show how to realize tag-based ATDFs from an assumption on RSA inversion not known to yield correlation-secure TDFs.

Eike Kiltz, Payman Mohassel, Adam O’Neill
Backmatter
Metadata
Title
Advances in Cryptology – EUROCRYPT 2010
Edited by
Henri Gilbert
Copyright year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-13190-5
Print ISBN
978-3-642-13189-9
DOI
https://doi.org/10.1007/978-3-642-13190-5