2012 | Book

Advances in Cryptology – CRYPTO 2012

32nd Annual Cryptology Conference, Santa Barbara, CA, USA, August 19-23, 2012. Proceedings

Editors: Reihaneh Safavi-Naini, Ran Canetti

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science

About this book

This book constitutes the refereed proceedings of the 32nd Annual International Cryptology Conference, CRYPTO 2012, held in Santa Barbara, CA, USA, in August 2012. The 48 revised full papers presented were carefully reviewed and selected from 225 submissions. The volume also contains the abstracts of two invited talks. The papers are organized in topical sections on symmetric cryptosystems, secure computation, attribute-based and functional encryption, proof systems, protocols, hash functions, composable security, privacy, leakage and side-channels, signatures, implementation analysis, black-box separation, cryptanalysis, quantum cryptography, and key encapsulation and one-way functions.

Table of Contents

Frontmatter
An Enciphering Scheme Based on a Card Shuffle

We introduce the swap-or-not shuffle and show that the technique gives rise to a new method to convert a pseudorandom function (PRF) into a pseudorandom permutation (PRP) (or, alternatively, to directly build a confusion/diffusion blockcipher). We then prove that swap-or-not has excellent quantitative security bounds, giving a Luby-Rackoff type result that ensures security (assuming an ideal round function) to a number of adversarial queries that is nearly the size of the construction’s domain. Swap-or-not provides a direct solution for building a small-domain cipher and achieving format-preserving encryption, yielding the best bounds known for a practical scheme for enciphering credit-card numbers. The analysis of swap-or-not is based on the theory of mixing times of Markov chains.
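
For concreteness, a minimal Python sketch of the swap-or-not rounds; the HMAC-based round function and the parameter choices are hypothetical stand-ins for the ideal round functions assumed in the analysis:

    import hmac, hashlib

    def swap_or_not_encrypt(x, round_keys, secret):
        # In round i with key k, the pair {x, x XOR k} is swapped or left
        # alone according to a pseudorandom bit derived from the canonical
        # representative max(x, x XOR k); decryption runs the same rounds
        # in reverse order.
        for i, k in enumerate(round_keys):
            partner = x ^ k
            rep = max(x, partner)
            msg = i.to_bytes(4, "big") + rep.to_bytes(8, "big")
            if hmac.new(secret, msg, hashlib.sha256).digest()[0] & 1:
                x = partner
        return x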

Viet Tung Hoang, Ben Morris, Phillip Rogaway
Tweakable Blockciphers with Beyond Birthday-Bound Security

Liskov, Rivest and Wagner formalized the tweakable blockcipher (TBC) primitive at CRYPTO’02. The typical recipe for instantiating a TBC is to start with a blockcipher, and then build up a construction that admits a tweak. Almost all such constructions enjoy provable security only to the birthday bound, and the one that does achieve security beyond the birthday bound (due to Minematsu) severely restricts the tweak size and requires per-invocation blockcipher rekeying.

This paper gives the first TBC construction that simultaneously allows for arbitrarily “wide” tweaks, does not rekey, and delivers provable security beyond the birthday bound. Our construction is built from a blockcipher and an ε-AXU₂ hash function.
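
For orientation, the classic LRW-style composition of a blockcipher with a tweak hash, which this line of work builds on, can be sketched as follows. Here block_cipher is an assumed 16-byte permutation, and the truncated SHA-256 below is only a placeholder, not an ε-AXU₂ hash; this is not the paper's beyond-birthday construction:

    import hashlib

    def xor16(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def axu_stub(h_key, tweak):
        # A real instantiation would evaluate an eps-AXU_2 hash, e.g. a
        # polynomial over GF(2^128); this SHA-256 truncation is purely
        # illustrative.
        return hashlib.sha256(h_key + tweak).digest()[:16]

    def lrw_encrypt(block_cipher, h_key, tweak, block):
        # E~_{K,h}(T, X) = E_K(X xor h(T)) xor h(T)
        offset = axu_stub(h_key, tweak)
        return xor16(block_cipher(xor16(block, offset)), offset)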

As an application of the TBC primitive, LRW suggest the TBC-MAC construction (similar to CBC-MAC but chaining through the tweak), but leave open the question of its security. We close this question, both for TBC-MAC as a PRF and as a MAC. Along the way, we find a nonce-based variant of TBC-MAC that has a tight reduction to the security of the underlying TBC, and also displays graceful security degradation when nonces are misused. This result is interesting on its own, but it also serves as an application of our new TBC construction, ultimately giving a variable input-length PRF with beyond birthday-bound security.

Will Landecker, Thomas Shrimpton, R. Seth Terashima
Breaking and Repairing GCM Security Proofs

In this paper, we study the security proofs of GCM (Galois/Counter Mode of Operation). We first point out that a lemma, which is related to the upper bound on the probability of a counter collision, is invalid. Both the original privacy and authenticity proofs by the designers are based on the lemma. We further show that the observation can be translated into a distinguishing attack that invalidates the main part of the privacy proof. It turns out that the original security proofs of GCM contain a flaw, and hence the claimed security bounds are not justified. A very natural question is then whether the proofs can be repaired. We give an affirmative answer to the question by presenting new security bounds, both for privacy and authenticity. As a result, although the security bounds are larger than what were previously claimed, GCM maintains its provable security. We also show that, when the nonce length is restricted to 96 bits, GCM has better security bounds than a general case of variable length nonces.
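
For context, here is a sketch of how GCM derives its initial counter block from the nonce, transcribed from the GCM specification (NIST SP 800-38D); the counter-collision question concerns the GHASH-derived branch taken for nonces that are not 96 bits long:

    def gf_mul(x, y):
        # Multiplication in GF(2^128) with the GCM reduction polynomial,
        # blocks represented as big-endian integers.
        z, v = 0, y
        for i in range(128):
            if (x >> (127 - i)) & 1:
                z ^= v
            v = (v >> 1) ^ (0xE1 << 120) if v & 1 else v >> 1
        return z

    def ghash(h, data):
        # data must be a multiple of 16 bytes
        y = 0
        for i in range(0, len(data), 16):
            y = gf_mul(y ^ int.from_bytes(data[i:i + 16], "big"), h)
        return y

    def initial_counter(nonce, h):
        if len(nonce) == 12:                     # 96-bit nonce: fixed format
            return int.from_bytes(nonce + b"\x00\x00\x00\x01", "big")
        # Otherwise J0 = GHASH_H(nonce || zero-padding || 64-bit length),
        # and distinct nonces may lead to colliding counter values.
        pad = b"\x00" * ((16 - len(nonce) % 16) % 16)
        return ghash(h, nonce + pad + (8 * len(nonce)).to_bytes(16, "big"))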

Tetsu Iwata, Keisuke Ohashi, Kazuhiko Minematsu
On the Distribution of Linear Biases: Three Instructive Examples

Although we evidently have very good block ciphers at hand today, some fundamental questions about their security remain unsolved. One such fundamental problem is to precisely assess the security of a given block cipher with respect to linear cryptanalysis. In the vast majority of cases we have to make (clearly wrong) assumptions, e.g., assume independent round keys. Besides being unsatisfactory from a scientific perspective, this lack of fundamental understanding might have an impact on the performance of the ciphers we use. Because we do not understand the security well enough, we often tend to embed a security margin – from an efficiency perspective, nothing but wasted performance. The aim of this paper is to stimulate research on these foundations of block ciphers. We do this by presenting three examples of ciphers that behave differently from what is normally assumed. These examples thus serve both as counterexamples to common beliefs and as a guideline for future work.

Mohamed Ahmed Abdelraheem, Martin Ågren, Peter Beelen, Gregor Leander
Substitution-Permutation Networks, Pseudorandom Functions, and Natural Proofs

This paper takes a new step towards closing the troubling gap between pseudorandom functions (PRFs) and their popular, bounded-input-length counterparts. This gap is both quantitative, because these counterparts are more efficient than PRFs in various ways, and methodological, because these counterparts usually fit in the substitution-permutation network (SPN) paradigm, which has not been used to construct PRFs.

We give several candidate PRFs $\mathcal{F}_i$ that are inspired by the SPN paradigm. This paradigm involves a “substitution function” (S-box). Our main candidates are:

$\mathcal{F}_1 : \{0, 1\}^n \to \{0, 1\}^n$ is an SPN whose S-box is a random function on b bits given as part of the seed. We prove unconditionally that $\mathcal{F}_1$ resists attacks that run in time ≤ 2^{εb}. Setting $b = \omega(\lg n)$ we obtain an inefficient PRF, which however seems to be the first such construction using the SPN paradigm.

$\mathcal{F}_2 : \{0, 1\}^n \to \{0, 1\}^n$ is an SPN where the S-box is (patched) field inversion, a common choice in practical constructions. $\mathcal{F}_2$ is computable with Boolean circuits of size n·log^{O(1)} n, and in particular with seed length n·log^{O(1)} n. We prove that this candidate has exponential security 2^{Ω(n)} against linear and differential cryptanalysis.

$\mathcal{F}_3 : \{0, 1\}^n \to \{0, 1\}$ is a non-standard variant on the SPN paradigm, where “states” grow in length. $\mathcal{F}_3$ is computable with size n^{1+ε}, for any ε > 0, in the restricted circuit class TC^0 of unbounded fan-in majority circuits of constant depth. We prove that $\mathcal{F}_3$ is almost 3-wise independent.

$\mathcal{F}_4 : \{0, 1\}^n \to \{0, 1\}$ uses an extreme setting of the SPN parameters (one round, one S-box, no diffusion matrix). The S-box is again (patched) field inversion. We prove that this candidate fools all parity tests that look at ≤ 2^{0.9n} outputs.

Assuming the security of our candidates, our work also narrows the gap between the “Natural Proofs barrier” [Razborov & Rudich; JCSS ’97] and existing lower bounds, in three models: unbounded-depth circuits, TC^0 circuits, and Turing machines. In particular, the efficiency of the circuits computing $\mathcal{F}_3$ is related to a result by Allender and Koucký [JACM ’10], who show that a lower bound for such circuits would imply a lower bound for TC^0.

Eric Miles, Emanuele Viola
The End of Crypto

This talk will reflect on the core purposes of cryptology, and the extent to which those purposes are served – and servable – in today’s digital environment.

Jonathan Zittrain
Must You Know the Code of f to Securely Compute f?

When Alice and Bob want to securely evaluate a function of their shared inputs, they typically first express the function as a (boolean or arithmetic) circuit and then securely evaluate that circuit, gate by gate. In other words, a secure protocol for evaluating f is typically obtained in a non-black-box way from f itself. Consequently, secure computation protocols have high overhead (in communication and computation) that is directly linked to the circuit-description complexity of f.

In other settings throughout cryptography, black-box constructions invariably lead to better practical efficiency than comparable non-black-box constructions. Could secure computation protocols similarly be made more practical by eliminating their dependence on a circuit representation of the target function? Or, in other words, must one know the code of f to securely evaluate f?

In this work we initiate the theoretical study of this question. We show the following:

1. A complete characterization of the 2-party tasks which admit such security against semi-honest adversaries. The characterization is inspired by notions of autoreducibility from computational complexity theory. From this characterization, we show a class of pseudorandom functions that cannot be securely evaluated (when one party holds the seed and the other holds the input) without “knowing” the code of the function in question. On the positive side, we show a class of functions (related to blind signatures) that can indeed be securely computed without “knowing” the code of the function.

2. Sufficient conditions for such security against malicious adversaries, also based on autoreducibility. We show that it is not possible to prove membership in the image of a one-way function in zero-knowledge without “knowing” the code of the one-way function. We also describe a variant of the GMW compiler for transforming semi-honest to malicious security while preserving the specific black-box property considered here.

Mike Rosulek
Adaptively Secure Multi-Party Computation with Dishonest Majority

Adaptively secure multiparty computation is an essential and fundamental notion in cryptography. In this work we focus on the basic question of constructing a multiparty computation protocol secure against a malicious, adaptive adversary in the stand-alone setting, without assuming an honest majority, in the plain model. It has been believed that this question can be resolved by composing known protocols from the literature. We show that in fact, this belief is fundamentally mistaken. In particular, we show:

Round inefficiency is unavoidable when using black-box simulation: there does not exist any $o(\frac{n}{\log{n}})$-round protocol that adaptively securely realizes a (natural) n-party functionality with a black-box simulator. Note that most previously known protocols in the adaptive security setting relied on black-box simulators.

A constant-round protocol using non-black-box simulation: we construct a constant-round adaptively secure multiparty computation protocol, in a setting without an honest majority, that makes crucial use of non-black-box techniques.

Taken together, these results give the first resolution to the question of adaptively secure multiparty computation protocols with a malicious dishonest majority in the plain model, open since the first formal treatment of adaptive security for multiparty computation in 1996.

Sanjam Garg, Amit Sahai
Collusion-Preserving Computation

In collusion-free protocols, subliminal communication is impossible and parties are thus unable to communicate any information “beyond what the protocol allows.” Collusion-free protocols are interesting for several reasons, but have specifically attracted attention because they can be used to reduce trust in game-theoretic mechanisms. Collusion-free protocols are impossible to achieve (in general) when all parties are connected by point-to-point channels, but exist under certain physical assumptions (Lepinski et al., STOC 2005) or when parties are connected in specific network topologies (Alwen et al., Crypto 2008).

We provide a “clean-slate” definition of the stronger notion of collusion preservation. Our goals in revisiting the definition are:

To give a definition with respect to arbitrary communication resources (including as special cases the communication models from prior work). We can then, in particular, better understand what types of resources enable collusion-preserving protocols.

To construct protocols that allow no additional subliminal communication when parties can communicate via other means. (This property is not implied by collusion-freeness.)

To support composition, so protocols can be designed in a modular fashion using sub-protocols run among subsets of the parties.

In addition to proposing the definition, we explore implications of our model and show a general feasibility result for collusion-preserving computation of arbitrary functionalities. We formalize a model for concurrently playing multiple extensive-form, mediated games while preserving many important equilibrium notions.

Joël Alwen, Jonathan Katz, Ueli Maurer, Vassilis Zikas
Secret Sharing Schemes for Very Dense Graphs

A secret-sharing scheme realizes a graph if every two vertices connected by an edge can reconstruct the secret while every independent set in the graph does not get any information on the secret. Similar to secret-sharing schemes for general access structures, there are gaps between the known lower bounds and upper bounds on the share size for graphs. Motivated by the question of what makes a graph “hard” for secret-sharing schemes, we study very dense graphs, that is, graphs whose complement contains few edges. We show that if a graph with n vertices contains $\binom{n}{2}-n^{1+\beta}$ edges for some constant 0 ≤ β < 1, then there is a scheme realizing the graph with total share size of $\tilde{O}(n^{5/4+3\beta/4})$. This should be compared to O(n^2/log n) – the best upper bound known for general graphs. Thus, if a graph is “hard”, then the graph and its complement should have many edges. We generalize these results to nearly complete k-homogeneous access structures for a constant k. To complement our results, we prove lower bounds for secret-sharing schemes realizing very dense graphs, e.g., for linear secret-sharing schemes we prove a lower bound of Ω(n^{1+β/2}).
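
For a baseline on share size, here is a sketch of the folklore edge-by-edge scheme, which realizes any graph with total share size proportional to the number of edges; the paper's contribution is doing far better for very dense graphs:

    import secrets

    def share_graph(secret, edges, modulus):
        # Each edge (u, v) gets an independent additive 2-out-of-2 sharing
        # of the secret; an independent set sees only one share per edge,
        # each of which is uniform on its own, and so learns nothing.
        shares = {}
        for (u, v) in edges:
            r = secrets.randbelow(modulus)
            shares.setdefault(u, {})[(u, v)] = r
            shares.setdefault(v, {})[(u, v)] = (secret - r) % modulus
        return shares   # u and v reconstruct as (r + (secret - r)) % modulus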

Amos Beimel, Oriol Farràs, Yuval Mintz
Functional Encryption with Bounded Collusions via Multi-party Computation

We construct functional encryption schemes for polynomial-time computable functions secure against an a-priori bounded polynomial number of collusions. Our constructions require only semantically secure public-key encryption schemes and pseudorandom generators computable by small-depth circuits (known to be implied by most concrete intractability assumptions). For certain special cases such as predicate encryption schemes with public index, the construction requires only semantically secure encryption schemes.

Along the way, we show a “bootstrapping theorem” that builds a q-query functional encryption scheme for arbitrary functions starting from a q-query functional encryption scheme for bounded-degree functions. All our constructions rely heavily on techniques from secure multi-party computation and randomized encodings.

Our constructions are secure under a strong simulation-based definition of functional encryption.

Sergey Gorbunov, Vinod Vaikuntanathan, Hoeteck Wee
New Proof Methods for Attribute-Based Encryption: Achieving Full Security through Selective Techniques

We develop a new methodology for utilizing the prior techniques to prove selective security for functional encryption systems as a direct ingredient in devising proofs of full security. This deepens the relationship between the selective and full security models and provides a path for transferring the best qualities of selectively secure systems to fully secure systems. In particular, we present a Ciphertext-Policy Attribute-Based Encryption scheme that is proven fully secure while matching the efficiency of the state of the art selectively secure systems.

Allison Lewko, Brent Waters
Dynamic Credentials and Ciphertext Delegation for Attribute-Based Encryption

Motivated by the question of access control in cloud storage, we consider the problem of using Attribute-Based Encryption (ABE) in a setting where users’ credentials may change and ciphertexts may be stored by a third party. Our main result is obtained by pairing two contributions:

We first ask how a third party who is not trusted with secret key information can process a ciphertext to disqualify revoked users from decrypting data encrypted in the past. Our core tool is a new procedure called ciphertext delegation that allows a ciphertext to be ‘re-encrypted’ to a more restrictive policy using only public information.

Second, we study the problem of revocable attribute-based encryption. We provide the first fully secure construction by modifying an attribute-based encryption scheme due to Lewko et al. [9] and prove security in the standard model.

We then combine these two results into a new approach to revocation on stored data. Our scheme allows a storage server to update stored ciphertexts to disqualify revoked users from accessing data that was encrypted before the user’s access was revoked, while key update broadcasts can dynamically revoke selected users.

Amit Sahai, Hakan Seyalioglu, Brent Waters
Functional Encryption for Regular Languages

We provide a functional encryption system that supports functionality for regular languages. In our system a secret key is associated with a Deterministic Finite Automaton (DFA) M. A ciphertext CT encrypts a message m and is associated with an arbitrary-length string w. A user is able to decrypt the ciphertext CT if and only if the DFA M associated with his private key accepts the string w.
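
The decryption condition is ordinary DFA acceptance, sketched below with a hypothetical example automaton (it accepts binary strings with an even number of 1s):

    def dfa_accepts(transitions, start, accepting, w):
        state = start
        for symbol in w:
            state = transitions[(state, symbol)]
        return state in accepting

    even_ones = {("q0", "0"): "q0", ("q0", "1"): "q1",
                 ("q1", "0"): "q1", ("q1", "1"): "q0"}
    assert dfa_accepts(even_ones, "q0", {"q0"}, "1001")      # two 1s: accept
    assert not dfa_accepts(even_ones, "q0", {"q0"}, "1011")  # three 1s: reject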

Compared with other known functional encryption systems, this is the first system where the functionality is capable of recognizing an unbounded language. For example, in (Key-Policy) Attribute-Based Encryption (ABE) a private key SK is associated with a single boolean formula φ which operates over a fixed number of boolean variables from the ciphertext. In contrast, in our system a DFA M will meaningfully operate over an arbitrary-length input w.

We propose a system that utilizes bilinear groups. Our solution is a “public index” system, where the message m is hidden, but the string w is not. We prove security in the selective model under a variant of the decision ℓ-Bilinear Diffie-Hellman Exponent (BDHE) assumption that we call the decision ℓ-Expanded BDHE problem.

Brent Waters
Secure Database Commitments and Universal Arguments of Quasi Knowledge

In this work we focus on a simple database commitment functionality where besides the standard security properties, one would like to hide the size of the input of the sender. Hiding the size of the input of a player is a critical requirement in some applications, and relatively few works have considered it. Notable exceptions are the work on zero-knowledge sets introduced in [14], and recent work on size-hiding private set intersection [1]. However, neither of these achieves a secure computation (i.e., a reduction of a real-world attack of a malicious adversary into an ideal-world attack) of the proposed functionality.

Our first result consists in defining “secure” database commitment and in observing that previous constructions do not satisfy this definition. This leaves open the question of whether this functionality can be achieved at all.

We then provide an affirmative answer to this question by using new techniques that combined together achieve “secure” database commitment. Our construction is in particular optimized to require only a constant number of rounds, to provide non-interactive proofs on the content of the database, and to rely on the existence of a family of CRHFs. This is the first result where input-size hiding secure computation is achieved for an interesting functionality and moreover we obtain this result with standard security (i.e., simulation in expected polynomial time against fully malicious adversaries, without random oracles, without non-black-box extraction assumptions, without hardness assumptions against super-polynomial time adversaries).

A key building block in our construction is a universal argument enjoying an improved proof of knowledge property, that we call quasi-knowledge. This property is significantly closer to the standard proof of knowledge property than the weak proof of knowledge property satisfied by previous constructions.

Melissa Chase, Ivan Visconti
Succinct Arguments from Multi-prover Interactive Proofs and Their Efficiency Benefits

Succinct arguments of knowledge are computationally-sound proofs of knowledge for NP where the verifier’s running time is independent of the time complexity of the NP nondeterministic machine for the considered language.

Existing succinct argument constructions are, typically, based on techniques that combine cryptographic hashing and probabilistically-checkable proofs (PCPs), and thus, in light of today’s state-of-the-art PCP technology, are quite inefficient: either one uses long PCP proofs with lots of redundancy to make the verifier fast but at the cost of making the prover slow, or one uses short PCP proofs to make the prover fast but at the cost of making the verifier slow.

To obtain better efficiency, we propose to investigate the alternative approach of constructing succinct arguments based on multi-prover interactive proofs (MIPs) and stronger cryptographic techniques:

(1) We construct a one-round succinct MIP of knowledge protocol where (i) each prover is highly efficient in terms of both time and space, and also (ii) the verifier is highly efficient.

(2) We show how to transform any one-round MIP protocol into a succinct four-message argument (with a single prover), while preserving the time and space efficiency of the original MIP protocol. As a main tool for this transformation, we construct a succinct multi-function commitment that (a) allows the sender to commit to a vector of functions in time and space complexity that are essentially the same as those needed for a single evaluation of the functions, and (b) ensures that the receiver’s running time is essentially independent of the function. The scheme is based on fully-homomorphic encryption (and no additional assumptions are needed for our succinct argument).

(3) In addition, we revisit the problem of non-interactive succinct arguments of knowledge (SNARKs), where known impossibilities rule out solutions based on black-box reductions to standard assumptions. We formulate a natural (though non-standard) variant of homomorphic encryption that has a homomorphism-extraction property. We then show that this primitive essentially allows us to “squash” our interactive protocol, while again preserving time and space efficiency. We further show that this variant is, in fact, implied by the existence of SNARKs.

Nir Bitansky, Alessandro Chiesa
On the Security of TLS-DHE in the Standard Model

TLS is the most important cryptographic protocol in use today. However, up to now there has been no complete cryptographic security proof of TLS in the standard model, nor in any other model. We give the first such proof for the core cryptographic protocol of TLS ciphersuites based on ephemeral Diffie-Hellman key exchange (TLS-DHE), which include the cipher suite TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA mandatory in TLS 1.0 and TLS 1.1. It is impossible to prove security of the TLS Handshake protocol in any classical key-indistinguishability-based security model (like, for instance, the Bellare-Rogaway or the Canetti-Krawczyk model), due to subtle issues with the encryption of the final Finished messages. Therefore we start by proving the security of a truncated version of the TLS-DHE Handshake protocol, which has been considered in previous works on TLS. Then we define the notion of authenticated and confidential channel establishment (ACCE), a new security model which captures precisely the security properties expected from TLS in practice, and show that the combination of the TLS Handshake with data encryption in the TLS Record Layer can be proven secure in this model.

Tibor Jager, Florian Kohlar, Sven Schäge, Jörg Schwenk
Semantic Security for the Wiretap Channel

The wiretap channel is a setting where one aims to provide information-theoretic privacy of communicated data based solely on the assumption that the channel from sender to adversary is “noisier” than the channel from sender to receiver. It has developed in the Information and Coding (I&C) community over the last 30 years largely divorced from the parallel development of modern cryptography. This paper aims to bridge the gap with a cryptographic treatment involving advances on two fronts, namely definitions and schemes. On the first front (definitions), we explain that the mis-r definition in current use is weak and propose two alternatives: mis (based on mutual information) and ss (based on the classical notion of semantic security). We prove them equivalent, thereby connecting two fundamentally different ways of defining privacy and providing a new, strong and well-founded target for constructions. On the second front (schemes), we provide the first explicit scheme with all the following characteristics: it is proven to achieve both security (ss and mis, not just mis-r) and decodability; it has optimal rate; and both the encryption and decryption algorithms are proven to be polynomial-time.

Mihir Bellare, Stefano Tessaro, Alexander Vardy
Multi-instance Security and Its Application to Password-Based Cryptography

This paper develops a theory of multi-instance (mi) security and applies it to provide the first proof-based support for the classical practice of salting in password-based cryptography. Mi-security comes into play in settings (like password-based cryptography) where it is computationally feasible to compromise a single instance, and provides a second line of defense, aiming to ensure (in the case of passwords, via salting) that the effort to compromise all of some large number m of instances grows linearly with m. The first challenge is definitional: we suggest LORX-security as a good metric for mi security of encryption and support this claim by showing it implies other natural metrics, illustrating in the process that even lifting simple results from the single-instance (si) setting to the mi one calls for new techniques. Next we provide a composition-based framework to transfer standard si security to mi-security with the aid of a key-derivation function. By analyzing password-based KDFs from the PKCS#5 standard and showing that they meet our indifferentiability-style mi-security definition for KDFs, we are able to conclude with the first proof that per-password salts amplify mi-security as hoped in practice. We believe that mi-security is of interest in other domains and that this work provides the foundation for its further theoretical development and practical application.
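
To fix ideas, here is a sketch of the salting practice in question, using the PBKDF2 function from PKCS#5 via Python's standard library (the iteration count and lengths are illustrative, not the paper's):

    import hashlib, secrets

    def derive_key(password, salt=None):
        if salt is None:
            salt = secrets.token_bytes(16)   # fresh per-password salt
        key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=32)
        return salt, key                     # the salt is stored in the clear

Because each instance uses its own salt, an attacker cannot amortize one expensive dictionary computation across m password-derived keys, which is exactly the linear-in-m effort that mi-security formalizes.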

Mihir Bellare, Thomas Ristenpart, Stefano Tessaro
Hash Functions Based on Three Permutations: A Generic Security Analysis

We consider the family of 2n-to-n-bit compression functions that are solely based on at most three permutation executions and on XOR-operators, and analyze its collision and preimage security. Despite their elegance and simplicity, these designs are not covered by the results of Rogaway and Steinberger (CRYPTO 2008). By defining a carefully chosen equivalence relation on this family of compression functions, we obtain the following results. In the setting where the three permutations π₁, π₂, π₃ are selected independently and uniformly at random, there exist at most four equivalence classes that achieve optimal 2^{n/2} collision resistance. Under a certain extremal graph theory based conjecture, these classes are then proven optimally collision secure. Three of these classes allow for finding preimages in 2^{n/2} queries, and only one achieves optimal 2^{2n/3} preimage resistance (with respect to the bounds of Rogaway and Steinberger, EUROCRYPT 2008). Consequently, a compression function is optimally collision and preimage secure if and only if it is equivalent to F(x₁, x₂) = x₁ ⊕ π₁(x₁) ⊕ π₂(x₂) ⊕ π₃(x₁ ⊕ x₂ ⊕ π₁(x₁)). For compression functions that make three calls to the same permutation we obtain a surprising negative result, namely the impossibility of optimal 2^{n/2} collision security: for any scheme, collisions can be found with 2^{2n/5} queries. This result casts some doubt over the existence of any (larger) secure permutation-based compression function built only on XOR-operators and (multiple invocations of) a single permutation.
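
The singled-out function F is easy to state in code; a toy sketch with small random permutations standing in for the ideal n-bit permutations of the model:

    import random

    def rand_perm(bits, seed):
        table = list(range(1 << bits))
        random.Random(seed).shuffle(table)
        return table

    n = 8                                  # toy size, for illustration only
    pi1, pi2, pi3 = (rand_perm(n, s) for s in (1, 2, 3))

    def compress(x1, x2):
        # F(x1, x2) = x1 ^ pi1(x1) ^ pi2(x2) ^ pi3(x1 ^ x2 ^ pi1(x1))
        a = pi1[x1]
        return x1 ^ a ^ pi2[x2] ^ pi3[x1 ^ x2 ^ a]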

Bart Mennink, Bart Preneel
To Hash or Not to Hash Again? (In)Differentiability Results for H^2 and HMAC

We show that the second iterate H^2(M) = H(H(M)) of a random oracle H cannot achieve strong security in the sense of indifferentiability from a random oracle. We do so by proving that indifferentiability for H^2 holds only with poor concrete security, providing a lower bound (via an attack) and a matching upper bound (via a proof requiring new techniques) on the complexity of any successful simulator. We then investigate HMAC when it is used as a general-purpose hash function with arbitrary keys (and not as a MAC or PRF with uniform, secret keys). We uncover that HMAC’s handling of keys gives rise to two types of weak key pairs. The first allows trivial attacks against its indifferentiability; the second gives rise to structural issues similar to those that ruled out strong indifferentiability bounds in the case of H^2. However, such weak key pairs do not arise, as far as we know, in any deployed applications of HMAC. For example, using keys of any fixed length shorter than d − 1, where d is the block length in bits of the underlying hash function, completely avoids weak key pairs. We therefore conclude with a positive result: a proof that HMAC is indifferentiable from a RO (with standard, good bounds) when applications use keys of a fixed length less than d − 1.
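
For reference, a textbook sketch of HMAC's keying rule, whose handling of long and padded keys is the source of the key-pair ambiguities discussed above (for instance, a key longer than the block length and its hash yield identical outputs, as do keys differing only in trailing zero bytes):

    import hashlib

    def hmac_sha256(key, msg, d=64):             # d = block length in bytes
        if len(key) > d:
            key = hashlib.sha256(key).digest()   # long keys are pre-hashed
        key = key.ljust(d, b"\x00")              # then zero-padded to d bytes
        ipad = bytes(b ^ 0x36 for b in key)
        opad = bytes(b ^ 0x5C for b in key)
        inner = hashlib.sha256(ipad + msg).digest()
        return hashlib.sha256(opad + inner).digest()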

Yevgeniy Dodis, Thomas Ristenpart, John Steinberger, Stefano Tessaro
New Preimage Attacks against Reduced SHA-1

This paper shows preimage attacks against reduced SHA-1 up to 57 steps. The best previous attack, presented at CRYPTO 2009, covered 48 steps, finding a two-block preimage with incorrect padding at the cost of 2^{159.3} evaluations of the compression function. For the same variant our attacks find a one-block preimage at 2^{150.6} and a correctly padded two-block preimage at 2^{151.1} evaluations of the compression function. The improved results come out of a differential view on the meet-in-the-middle technique originally developed by Aoki and Sasaki. The new framework closely relates meet-in-the-middle attacks to differential cryptanalysis, which turns out to be particularly useful for hash functions with linear message expansion and weak diffusion properties.

Simon Knellwolf, Dmitry Khovratovich
Stam’s Conjecture and Threshold Phenomena in Collision Resistance

At CRYPTO 2008 Stam [8] conjectured that if an (m+s)-bit to s-bit compression function F makes r calls to a primitive f of n-bit input, then a collision for F can be obtained (with high probability) using r·2^{(nr−m)/(r+1)} queries to f, which is sometimes less than the birthday bound. Steinberger [9] proved Stam’s conjecture up to a constant multiplicative factor for most cases in which r = 1 and for certain other cases that reduce to the case r = 1. In this paper we prove the general case of Stam’s conjecture (also up to a constant multiplicative factor). Moreover, our result is qualitatively different from Steinberger’s, as we show the following novel threshold phenomenon: exponentially many (more exactly, 2^{s−2(m−n)/(r+1)}) collisions are obtained with high probability after O(1)·r·2^{(nr−m)/(r+1)} queries. This in particular shows that threshold phenomena observed in practical compression functions such as JH are, in fact, unavoidable for compression functions with those parameters.

John Steinberger, Xiaoming Sun, Zhe Yang
Universal Composability from Essentially Any Trusted Setup

It is impossible to securely carry out general multi-party computation in arbitrary network contexts like the Internet, unless protocols have access to some trusted setup. In this work we classify the power of such trusted (2-party) setup functionalities. We show that nearly every setup is either useless (ideal access to the setup is equivalent to having no setup at all) or else complete (composably secure protocols for all tasks exist in the presence of the setup). We further argue that those setups which are neither complete nor useless are highly unnatural.

The main technical contribution in this work is an almost-total characterization of completeness for 2-party setups. Our characterization treats setup functionalities as black boxes, and is therefore the first work to classify completeness of arbitrary setup functionalities (i.e., randomized, reactive, and having behavior that depends on the global security parameter).

Mike Rosulek
Impossibility Results for Static Input Secure Computation

Consider a setting of two mutually distrustful parties Alice and Bob who want to securely evaluate some function on pre-specified inputs. The well-studied notion of two-party secure computation allows them to do so in the stand-alone setting. Consider a deterministic function (e.g., 1-out-of-2 bit OT) that Alice and Bob cannot evaluate trivially and which allows only Bob to receive the output. We show that Alice and Bob cannot securely compute any such function in the concurrent setting, even when their inputs are pre-specified. Our impossibility result also extends to all deterministic functions in which both Alice and Bob get the same output. Our results have implications in the bounded-concurrent setting as well.

Sanjam Garg, Abishek Kumarasubramanian, Rafail Ostrovsky, Ivan Visconti
New Impossibility Results for Concurrent Composition and a Non-interactive Completeness Theorem for Secure Computation

We consider the client-server setting for the concurrent composition of secure protocols: a single server interacts with multiple clients concurrently, executing with each client a specified protocol in which only the client should receive any nontrivial output. Such a setting is easily motivated from an application standpoint. There are important special cases for which positive results are known – such as concurrent zero-knowledge protocols – and it has been an open question whether other natural functionalities such as Oblivious Transfer (OT) are possible in this setting.

In this work:

We resolve this open question by showing that, unfortunately, even in this very limited concurrency setting, broad new impossibility results hold, ruling out not only OT, but in fact all nontrivial finite asymmetric functionalities. Our new negative results hold even if the inputs of all honest parties are fixed in advance and the adversary receives no auxiliary information.

Along the way, we establish a new unconditional completeness result for asymmetric functionalities, where we characterize the functionalities that are non-interactively complete, secure against active adversaries. When we say that a functionality $\mathcal{F}$ is non-interactively complete, we mean that every other asymmetric functionality can be realized by parallel invocations of several copies of $\mathcal{F}$, with no other communication in any direction. Our result subsumes a completeness result of Kilian [STOC ’00] that uses protocols which require additional interaction in both directions.

Shweta Agrawal, Vipul Goyal, Abhishek Jain, Manoj Prabhakaran, Amit Sahai
Black-Box Constructions of Composable Protocols without Set-Up

We present the first black-box construction of a secure multi-party computation protocol that satisfies a meaningful notion of concurrent security in the plain model (without any set-up, and without assuming an honest majority). Moreover, our protocol relies on the minimal assumption of the existence of a semi-honest OT protocol, and our security notion, “UC with super-polynomial helpers” (Canetti et al., STOC ’10), is closed under universal composition and implies super-polynomial-time simulation security.

Huijia Lin, Rafael Pass
Crowd-Blending Privacy

We introduce a new definition of privacy called crowd-blending privacy that strictly relaxes the notion of differential privacy. Roughly speaking, k-crowd-blending private sanitization of a database requires that each individual i in the database “blends” with k other individuals j in the database, in the sense that the output of the sanitizer is “indistinguishable” if i’s data is replaced by j’s.

We demonstrate crowd-blending private mechanisms for histograms and for releasing synthetic data points, achieving strictly better utility than what is possible using differentially private mechanisms. Additionally, we demonstrate that if a crowd-blending private mechanism is combined with a “pre-sampling” step, where the individuals in the database are randomly drawn from some underlying population (as is often the case during data collection), then the combined mechanism satisfies not only differential privacy, but also the stronger notion of zero-knowledge privacy. This holds even if the pre-sampling is slightly biased and an adversary knows whether certain individuals were sampled or not. Taken together, our results yield a practical approach for collecting and privately releasing data while ensuring higher utility than previous approaches.
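
As a toy illustration of the histogram flavor (not the paper's exact mechanism): suppressing every bin with fewer than k individuals lets each reported individual blend with at least k − 1 others in its bin:

    from collections import Counter

    def blended_histogram(records, k):
        # keep only bins in which at least k individuals "blend"
        counts = Counter(records)
        return {bin_: c for bin_, c in counts.items() if c >= k}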

Johannes Gehrke, Michael Hay, Edward Lui, Rafael Pass
Differential Privacy with Imperfect Randomness

In this work we revisit the question of basing cryptography on imperfect randomness. Bosley and Dodis (TCC’07) showed that if a source of randomness $\mathcal{R}$ is “good enough” to generate a secret key capable of encrypting k bits, then one can deterministically extract nearly k almost uniform bits from $\mathcal{R}$, suggesting that traditional privacy notions (namely, indistinguishability of encryption) require an “extractable” source of randomness. Other, even stronger impossibility results are known for achieving privacy under specific “non-extractable” sources of randomness, such as the γ-Santha-Vazirani (SV) source, where each next bit has fresh entropy but is allowed to have a small bias γ < 1 (possibly depending on prior bits).

We ask whether similar negative results also hold for a more recent notion of privacy called differential privacy (Dwork et al., TCC’06), concentrating, in particular, on achieving differential privacy with the Santha-Vazirani source. We show that the answer is no. Specifically, we give a differentially private mechanism for approximating arbitrary “low sensitivity” functions that works even with randomness coming from a γ-Santha-Vazirani source, for any γ < 1. This provides a somewhat surprising “separation” between traditional privacy and differential privacy with respect to imperfect randomness.

Interestingly, the design of our mechanism is quite different from the traditional “additive-noise” mechanisms (e.g., the Laplace mechanism) successfully utilized to achieve differential privacy with perfect randomness. Indeed, we show that any (non-trivial) “SV-robust” mechanism for our problem requires a demanding property called consistent sampling, which is strictly stronger than differential privacy, and cannot be satisfied by any additive-noise mechanism.
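
For contrast, a minimal sketch of the standard additive-noise Laplace mechanism, which assumes perfect uniform randomness; answering a query of sensitivity s with noise of scale s/ε gives ε-differential privacy:

    import math, random

    def laplace_noise(scale):
        u = random.random() - 0.5            # uniform on [-1/2, 1/2)
        u = max(u, -0.4999999999)            # avoid log(0) at the boundary
        return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

    def laplace_mechanism(true_answer, sensitivity, eps):
        return true_answer + laplace_noise(sensitivity / eps)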

Yevgeniy Dodis, Adriana López-Alt, Ilya Mironov, Salil Vadhan
Tamper and Leakage Resilience in the Split-State Model

It is notoriously difficult to create hardware that is immune from side channel and tampering attacks. A lot of recent literature, therefore, has instead considered algorithmic defenses from such attacks. In this paper, we show how to algorithmically secure any cryptographic functionality from continual split-state leakage and tampering attacks. A split-state attack on cryptographic hardware is one that targets separate parts of the hardware separately. Our construction does not require the hardware to have access to randomness. In contrast, prior work on protecting from continual combined leakage and tampering [23] required true randomness for each update. Our construction is in the common reference string (CRS) model; the CRS must be hard-wired into the device. We note that prior negative results show that it is impossible to algorithmically secure a cryptographic functionality against a combination of arbitrary continual leakage and tampering attacks without true randomness; therefore restricting our attention to the split-state model is justified. Our construction is simple and modular, and relies on a new construction, in the CRS model, of non-malleable codes with respect to split-state tampering functions, which may be of independent interest.

Feng-Hao Liu, Anna Lysyanskaya
Securing Circuits against Constant-Rate Tampering

We present a compiler that converts any circuit into one that remains secure even if a constant fraction of its wires are tampered with. Following the seminal work of Ishai et al. (Eurocrypt 2006), we consider adversaries who may choose an arbitrary set of wires to corrupt, and may set each such wire to 0 or to 1, or may toggle the wire. We prove that such adversaries, who continuously tamper with the circuit, can learn at most logarithmically many bits of secret information (in addition to black-box access to the circuit). Our results are information-theoretic.

Dana Dachman-Soled, Yael Tauman Kalai
How to Compute under $\mathsf{AC}^0$ Leakage without Secure Hardware

We study the problem of computing securely in the presence of leakage on the computation’s internals. Our main result is a general compiler that compiles any algorithm P, viewed as a boolean circuit, into a functionally equivalent algorithm P′. The compiled P′ can then be run repeatedly on adversarially chosen inputs in the presence of leakage on its internals: in each execution of P′, an $\mathsf{AC}^0$ adversary can (adaptively) choose any leakage function that can be computed in $\mathsf{AC}^0$ and has bounded output length, apply it to the values on P′’s internal wires in that execution, and view its output. We show that no such leakage adversary can learn more than P’s input-output behavior. In particular, the internals of P are protected.

Security does not rely on any secure hardware, and is proved under a computational intractability assumption regarding the hardness of computing inner products for $\mathsf{AC}^0$ circuits with pre-processing. This new assumption has connections to long-standing open problems in complexity theory.

Guy N. Rothblum
Recent Advances and Existing Research Questions in Platform Security

In this talk I will describe recent uses Intel has made of cryptography in our platforms, including a hardware random number generator, anonymous signatures, and improved performance of cryptographic algorithms. I will discuss how processor capabilities could be used more effectively by cryptographic algorithms. I will then turn to research questions in cryptographic protocols and platform security that are motivated by our goals.

Ernie Brickell
Group Signatures with Almost-for-Free Revocation

Group signatures are a central cryptographic primitive where users can anonymously and accountably sign messages in the name of a group they belong to. Several efficient constructions with security proofs in the standard model (i.e., without the random oracle idealization) have appeared in recent years. However, like standard PKIs, group signatures need an efficient revocation system to be practical. Despite years of research, membership revocation remains a non-trivial problem: many existing solutions do not scale well, due to either high overhead or constraining operational requirements (like the need for all users to update their keys after each revocation). Only recently, Libert, Peters and Yung (Eurocrypt’12) suggested a new scalable revocation method, based on the Naor-Naor-Lotspiech (NNL) broadcast encryption framework, that interacts nicely with techniques for building group signatures in the standard model. While promising, their mechanism imposes significant storage requirements on group members: membership certificates, which used to have constant size in existing standard-model constructions, now have polylog size in the maximal cardinality of the group (NNL, after all, is a tree-based technique, and such a dependency is naturally expected). In this paper we show how to obtain private keys of constant size. To this end, we introduce a new technique to leverage the NNL subset cover framework in the context of group signatures but, perhaps surprisingly, without a logarithmic relationship between the size of private keys and the group cardinality. Namely, we provide a way for users to efficiently prove their membership in one of the generic subsets of the NNL subset cover framework. This technique makes our revocable group signatures competitive with ordinary group signatures (i.e., without revocation) in the standard model. Moreover, unrevoked members (as in PKIs) still do not need to update their keys at each revocation.

Benoît Libert, Thomas Peters, Moti Yung
Tightly Secure Signatures and Public-Key Encryption

We construct the first public-key encryption scheme whose chosen-ciphertext (i.e., IND-CCA) security can be proved under a standard assumption and does not degrade in either the number of users or the number of ciphertexts. In particular, our scheme can be safely deployed in unknown settings in which no a-priori bound on the number of encryptions and/or users is known.

As a central technical building block, we construct the first structure-preserving signature scheme with a tight security reduction. (This signature scheme may be of independent interest.) Combining this scheme with Groth-Sahai proofs yields a tightly simulation-sound non-interactive zero-knowledge proof system for group equations. If we use this proof system in the Naor-Yung double encryption scheme, we obtain a tightly IND-CCA secure public-key encryption scheme from the Decision Linear assumption.

We point out that our techniques are not specific to public-key encryption security. Rather, we view our signature scheme and proof system as general building blocks that can help to achieve a tight security reduction.

Dennis Hofheinz, Tibor Jager
Efficient Padding Oracle Attacks on Cryptographic Hardware

We show how to exploit the encrypted key import functions of a variety of different cryptographic devices to reveal the imported key. The attacks are padding oracle attacks, where error messages resulting from incorrectly padded plaintexts are used as a side channel. In the asymmetric encryption case, we modify and improve Bleichenbacher’s attack on RSA PKCS#1v1.5 padding, giving new cryptanalysis that allows us to carry out the ‘million message attack’ in a mean of 49 000 and median of 14 500 oracle calls in the case of cracking an unknown valid ciphertext under a 1024 bit key (the original algorithm takes a mean of 215 000 and a median of 163 000 in the same case). We show how implementation details of certain devices admit an attack that requires only 9 400 operations on average (3 800 median). For the symmetric case, we adapt Vaudenay’s CBC attack, which is already highly efficient. We demonstrate the vulnerabilities on a number of commercially available cryptographic devices, including security tokens, smartcards and the Estonian electronic ID card. The attacks are efficient enough to be practical: we give timing details for all the devices found to be vulnerable, showing how our optimisations make a qualitative difference to the practicality of the attack. We give mathematical analysis of the effectiveness of the attacks, extensive empirical results, and a discussion of countermeasures.
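
The side channel in the asymmetric case is PKCS#1 v1.5 conformance; here is a sketch of the predicate that a vulnerable device effectively exposes, for an RSA-decrypted block em of k bytes:

    def pkcs1_v15_conforming(em, k):
        # EM = 0x00 || 0x02 || PS || 0x00 || M, with PS >= 8 nonzero bytes
        if len(em) != k or em[0] != 0x00 or em[1] != 0x02:
            return False
        try:
            sep = em.index(0x00, 2)          # first zero byte after the header
        except ValueError:
            return False
        return sep >= 10                     # i.e. len(PS) = sep - 2 >= 8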

Romain Bardou, Riccardo Focardi, Yusuke Kawamoto, Lorenzo Simionato, Graham Steel, Joe-Kai Tsay
Public Keys

We performed a sanity check of public keys collected on the web and found that the vast majority works as intended. Our main goal was to test the validity of the assumption that different random choices are made each time keys are generated. We found that this is not always the case, resulting in public keys that offer no security. Our conclusion is that generating secure public keys in the real world is challenging. We did not study usage of public keys.
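
The failure mode behind the insecure keys is shared prime factors across RSA moduli, detectable by pairwise gcd; a naive quadratic sketch (checks at the scale of this paper use much faster product-tree methods):

    from math import gcd

    def shared_factors(moduli):
        # any nontrivial gcd factors BOTH moduli involved
        broken = {}
        for i in range(len(moduli)):
            for j in range(i + 1, len(moduli)):
                g = gcd(moduli[i], moduli[j])
                if 1 < g < moduli[i]:
                    broken[(i, j)] = g
        return broken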

Arjen K. Lenstra, James P. Hughes, Maxime Augier, Joppe W. Bos, Thorsten Kleinjung, Christophe Wachter
Multiparty Computation from Somewhat Homomorphic Encryption

We propose a general multiparty computation protocol secure against an active adversary corrupting up to n − 1 of the n players. The protocol may be used to compute securely arithmetic circuits over any finite field $\mathbb{F}_{p^k}$. Our protocol consists of a preprocessing phase that is independent of both the function to be computed and the inputs, and a much more efficient online phase where the actual computation takes place. The online phase is unconditionally secure and has total computational (and communication) complexity linear in n, the number of players, where earlier work was quadratic in n. Moreover, the work done by each player is only a small constant factor larger than what one would need to compute the circuit in the clear. We show this is optimal for computation in large fields. In practice, for 3 players, a secure 64-bit multiplication can be done in 0.05 ms. Our preprocessing is based on a somewhat homomorphic cryptosystem. We extend a scheme by Brakerski et al. so that we can perform distributed decryption and handle many values in parallel in one ciphertext. The computational complexity of our preprocessing phase is dominated by the public-key operations: we need O(n^2/s) operations per secure multiplication, where s is a parameter that increases with the security parameter of the cryptosystem. Earlier work in this model needed Ω(n^2) operations. In practice, the preprocessing prepares a secure 64-bit multiplication for 3 players in about 13 ms.
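
To illustrate the cheap online phase typical of this preprocessing model, here is a sketch of multiplication on additive shares using a preprocessed Beaver triple (a toy version; the actual protocol additionally authenticates shares with MACs for active security):

    p = 2**61 - 1                       # toy prime modulus

    def open_value(shares):             # parties broadcast and sum shares
        return sum(shares) % p

    def beaver_multiply(x_sh, y_sh, a_sh, b_sh, c_sh):
        # Given [x], [y] and a triple [a], [b], [c] with c = a*b, open
        # d = x - a and e = y - b; then xy = c + d*b + e*a + d*e.
        d = open_value([(x - a) % p for x, a in zip(x_sh, a_sh)])
        e = open_value([(y - b) % p for y, b in zip(y_sh, b_sh)])
        z_sh = [(c + d * b + e * a) % p for a, b, c in zip(a_sh, b_sh, c_sh)]
        z_sh[0] = (z_sh[0] + d * e) % p  # the public d*e term is added once
        return z_sh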

Ivan Damgård, Valerio Pastro, Nigel Smart, Sarah Zakarias
Near-Linear Unconditionally-Secure Multiparty Computation with a Dishonest Minority

In the setting of unconditionally-secure MPC, where dishonest players are unbounded and no cryptographic assumptions are used, it has been known since the 1980s that an honest majority of players is both necessary and sufficient to achieve privacy and correctness, assuming secure point-to-point and broadcast channels. The main open question left is to establish the exact communication complexity.

We settle the above question by showing an unconditionally-secure MPC protocol, secure against a dishonest minority of malicious players, that matches the communication complexity of the best known MPC protocol in the honest-but-curious setting. More specifically, we present a new n-player MPC protocol that is secure against a computationally-unbounded malicious adversary that can adaptively corrupt t < n/2 of the players. For polynomially-large binary circuits that are not too unshaped, our protocol has an amortized communication complexity of O(n log n + κ/n^{const}) bits per multiplication (i.e. AND) gate, where κ denotes the security parameter and const ∈ ℤ is an arbitrary non-negative constant. This improves on the previously most efficient protocol with the same security guarantee, which offers an amortized communication complexity of O(n^2 κ) bits per multiplication gate. For any κ polynomial in n, the amortized communication complexity of our protocol matches the O(n log n) bit communication complexity of the best known MPC protocol with passive security.

We introduce several novel techniques that are of independent interest, and we believe they will have wider applicability. One is a novel idea of computing authentication tags by means of a mini MPC, which allows us to avoid expensive double-sharings; the other is a batch-wise multiplication verification that allows us to speed up Beaver’s “multiplication triples”.

Eli Ben-Sasson, Serge Fehr, Rafail Ostrovsky
A New Approach to Practical Active-Secure Two-Party Computation

We propose a new approach to practical two-party computation secure against an active adversary. All prior practical protocols were based on Yao’s garbled circuits. We use an OT-based approach and get efficiency via OT extension in the random oracle model. To get a practical protocol we introduce a number of novel techniques for relating the outputs and inputs of OTs in a larger construction.

Jesper Buus Nielsen, Peter Sebastian Nordholt, Claudio Orlandi, Sai Sheshank Burra
The Curious Case of Non-Interactive Commitments – On the Power of Black-Box vs. Non-Black-Box Use of Primitives

It is well-known that one-way permutations (and even one-to-one one-way functions) imply the existence of non-interactive commitments. Furthermore, the construction is black-box (i.e., the underlying one-way function is used as an oracle to implement the commitment scheme, and an adversary attacking the commitment scheme is used as an oracle in the proof of security).

We rule out the possibility of black-box constructions of non-interactive commitments from general (possibly not one-to-one) one-way functions. As far as we know, this is the first result showing a natural cryptographic task that can be achieved in a black-box way from one-way permutations but not from one-way functions.

We next extend our black-box separation to constructions of non-interactive commitments from a stronger notion of one-way functions, which we refer to as hitting one-way functions. Perhaps surprisingly, Barak, Ong, and Vadhan (Siam JoC ’07) showed that there does exist a non-black-box construction of non-interactive commitments from hitting one-way functions. As far as we know, this is the first result to establish a “separation” between the power of black-box and non-black-box use of a primitive to implement a natural cryptographic task.

We finally show that unless the complexity class NP has program checkers, the above separations extend also to non-interactive instance-based commitments, and to 3-message public-coin honest-verifier zero-knowledge protocols with O(log n)-bit verifier messages. The well-known classical zero-knowledge proof for NP falls into this category.

Mohammad Mahmoody, Rafael Pass
Efficient Dissection of Composite Problems, with Applications to Cryptanalysis, Knapsacks, and Combinatorial Search Problems

In this paper we show that a large class of diverse problems have a bicomposite structure which makes it possible to solve them with a new type of algorithm called dissection, which has much better time/memory tradeoffs than previously known algorithms. A typical example is the problem of finding the key of multiple encryption schemes with r independent n-bit keys. All the previous error-free attacks required time T and memory M satisfying TM = 2^{rn}, and even if “false negatives” are allowed, no attack could achieve TM < 2^{3rn/4}. Our new technique yields the first algorithm which never errs and finds all the possible keys with a smaller product of TM, such as T = 2^{4n} time and M = 2^{n} memory for breaking the sequential execution of r = 7 block ciphers. The improvement ratio we obtain increases in an unbounded way as r increases, and if we allow algorithms which can sometimes miss solutions, we can get even better tradeoffs by combining our dissection technique with parallel collision search. To demonstrate the generality of the new dissection technique, we show how to use it in a generic way in order to attack hash functions with a rebound attack, to solve hard knapsack problems, and to find the shortest solution to a generalized version of Rubik’s cube with better time complexities (for small memory complexities) than the best previously known algorithms.
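To make the baseline tradeoff TM = 2^{rn} concrete, here is a toy meet-in-the-middle attack on double encryption (r = 2): time about 2^n and memory about 2^n, so TM = 2^{2n}. The miniature block cipher is an illustrative stand-in, and this is the classical baseline that dissection improves on for larger r, not the new algorithm itself.

    N = 12                      # toy key/block size in bits
    MASK = (1 << N) - 1
    MUL = 0x6A5                 # odd constant, invertible mod 2^N
    MUL_INV = pow(MUL, -1, 1 << N)

    def enc(k, x):
        """Toy n-bit block cipher: XOR key, odd multiply, add key."""
        return (((x ^ k) * MUL) + k) & MASK

    def dec(k, y):
        return (((y - k) * MUL_INV) & MASK) ^ k

    def mitm_double(pt, ct):
        """Meet-in-the-middle on E_{k2}(E_{k1}(pt)) = ct:
        time ~ 2^n, memory ~ 2^n, so T*M = 2^{2n} = 2^{rn} for r = 2."""
        table = {}
        for k1 in range(1 << N):        # forward phase: store midpoints
            table.setdefault(enc(k1, pt), []).append(k1)
        hits = []
        for k2 in range(1 << N):        # backward phase: look up matches
            for k1 in table.get(dec(k2, ct), []):
                hits.append((k1, k2))
        return hits

    k1, k2, pt = 0x123, 0xABC, 0x456
    ct = enc(k2, enc(k1, pt))
    assert (k1, k2) in mitm_double(pt, ct)

With a single plaintext/ciphertext pair many false candidates survive; real attacks filter them using additional pairs.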

Itai Dinur, Orr Dunkelman, Nathan Keller, Adi Shamir
Resistance against Iterated Attacks by Decorrelation Revisited

Iterated attacks consist of iterating adversaries that can make d plaintext queries in each iteration to compute a bit, and that try to distinguish between a random cipher C and the ideal random cipher C* based on all these bits. In EUROCRYPT ’99, Vaudenay showed that a 2d-decorrelated cipher resists iterated attacks of order d when iterations make almost no common queries. He first asked what the necessary conditions are for a cipher to resist a non-adaptive iterated attack of order d. Second, he speculated that repeating a plaintext query in different iterations does not provide any advantage to a non-adaptive distinguisher. Here we close these two long-standing open problems.
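As background on the notion involved (a standard definition from Vaudenay’s decorrelation theory, paraphrased here rather than quoted from this paper): the order-d distribution matrix of a cipher C and its distance to the ideal cipher C* are

$[C]^d_{(x_1,\ldots,x_d),(y_1,\ldots,y_d)} = \Pr[C(x_1)=y_1, \ldots, C(x_d)=y_d], \qquad \mathrm{Dec}^d(C) = \big\| [C]^d - [C^*]^d \big\|$

for a suitable matrix norm; a “2d-decorrelated” cipher is one for which this distance at order 2d is negligible.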

We show that, in order to resist non-adaptive iterated attacks of order d, decorrelation of order 2d − 1 is not sufficient. We do this by providing a counterexample consisting of a cipher decorrelated to the order 2d − 1 and a successful non-adaptive iterated attack of order d against it.

Moreover, we prove that the aforementioned claim is wrong by showing that a higher probability of having a common query between different iterations can translate to a high advantage of the adversary in distinguishing C from C*. We provide a counterintuitive example consisting of a cipher decorrelated to the order 2d which can be broken by an iterated attack of order 1 having a high probability of common queries.

Aslı Bay, Atefeh Mashatan, Serge Vaudenay
Secure Identity-Based Encryption in the Quantum Random Oracle Model

We give the first proof of security for an identity-based encryption scheme in the quantum random oracle model. This is the first proof of security for any scheme in this model that requires no additional assumptions. Our techniques are quite general, and we use them to obtain security proofs for two random oracle hierarchical identity-based encryption schemes and a random oracle signature scheme, all of which have previously resisted quantum security proofs, even using additional assumptions. We also explain how to remove the extra assumptions from prior quantum random oracle model proofs. We accomplish these results by developing new tools for arguing that quantum algorithms cannot distinguish between two oracle distributions. Using a particular class of oracle distributions, so-called semi-constant distributions, we argue that the aforementioned cryptosystems are secure against quantum adversaries.

Mark Zhandry
Quantum to Classical Randomness Extractors

The goal of randomness extraction is to distill (almost) perfect randomness from a weak source of randomness. When the source outputs a classical string X, many extractor constructions are known. Yet, when considering a physical randomness source, X is itself ultimately the result of a measurement on an underlying quantum system. When characterizing the power of a source to supply randomness, it is hence natural to ask how much classical randomness we can extract from a quantum system. To tackle this question, we here take up the study of quantum-to-classical randomness extractors (QC-extractors).

We provide constructions of QC-extractors based on measurements in a full set of mutually unbiased bases (MUBs), and certain single qubit measurements. The latter are particularly appealing since they are not only easy to implement, but appear throughout quantum cryptography. We proceed to prove an upper bound on the maximum amount of randomness that we could hope to extract from any quantum state. Some of our QC-extractors almost match this bound. We show two applications of our results.
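For concreteness, the single-qubit case mentioned above uses the three mutually unbiased eigenbases of the Pauli operators (standard background, not this paper’s notation):

$\{|0\rangle, |1\rangle\}, \quad \{\tfrac{1}{\sqrt{2}}(|0\rangle + |1\rangle), \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle)\}, \quad \{\tfrac{1}{\sqrt{2}}(|0\rangle + i|1\rangle), \tfrac{1}{\sqrt{2}}(|0\rangle - i|1\rangle)\}$

Any two of these bases satisfy $|\langle e | f \rangle|^2 = 1/2$, the unbiasedness condition; in dimension d, a full set consists of d + 1 MUBs, known to exist whenever d is a prime power.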

First, we show that any QC-extractor gives rise to entropic uncertainty relations with respect to quantum side information. Such relations were previously only known for two measurements. In particular, we obtain strong relations in terms of the von Neumann (Shannon) entropy as well as the min-entropy for measurements in (almost) unitary 2-designs, a full set of MUBs, and single qubit measurements in three MUBs each.

Second, we finally resolve the central open question in the noisy-storage model [Wehner et al., PRL 100, 220502 (2008)] by linking security to the quantum capacity of the adversary’s storage device. More precisely, we show that any two-party cryptographic primitive can be implemented securely as long as the adversary’s storage device has sufficiently low quantum capacity. Our protocol requires no quantum storage to implement and is feasible with present-day technology.

Mario Berta, Omar Fawzi, Stephanie Wehner
Actively Secure Two-Party Evaluation of Any Quantum Operation

We provide the first two-party protocol allowing Alice and Bob to evaluate privately, even against active adversaries, any completely positive, trace-preserving map, given as a quantum circuit, upon their joint quantum input state. Our protocol leaks no more to any active adversary than an ideal functionality for the map, provided Alice and Bob have the cryptographic resources for active secure two-party classical computation. Our protocol is constructed from the protocol for the same task secure against specious adversaries presented in [4].

Frédéric Dupuis, Jesper Buus Nielsen, Louis Salvail
On the Impossibility of Constructing Efficient Key Encapsulation and Programmable Hash Functions in Prime Order Groups

In this paper, we discuss the (im)possibility of constructing chosen ciphertext secure (CCA secure) key encapsulation mechanisms (KEMs) with low ciphertext overhead. More specifically, we rule out the existence of algebraic black-box reductions from the (bounded) CCA security of a natural class of KEMs to any non-interactive problem. The class of KEMs captures the structure of the currently most efficient KEMs defined in standard prime order groups, but restricts an encapsulation to consist of a single group element and a string. This result suggests that we cannot rely on existing techniques to construct a CCA secure KEM in standard prime order groups with a ciphertext overhead lower than two group elements. Furthermore, we show how the properties of an (algebraic) programmable hash function can be used to construct a simple, efficient and CCA secure KEM based on the hardness of the decisional Diffie-Hellman problem with a ciphertext overhead of just a single group element. Since this KEM construction is covered by the above mentioned impossibility result, this enables us to derive a lower bound on the hash key size of an algebraic programmable hash function, and rule out the existence of algebraic (poly, n)-programmable hash functions in prime order groups for any integer n. The latter result answers an open question posed by Hofheinz and Kiltz (CRYPTO’08) in the case of algebraic programmable hash functions in prime order groups.
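To illustrate the kind of KEM in question, here is a hashed-ElGamal-style sketch over a prime-order group in which the ciphertext overhead is exactly one group element; the toy group, hash choice, and names are illustrative assumptions, not the paper’s construction, and the sketch conveys only the shape, not CCA security.

    import hashlib, random

    # Toy prime-order subgroup: q = 113, p = 227 = 2q + 1, and G = 4
    # generates the order-q subgroup of quadratic residues. Insecure sizes.
    P, Q, G = 227, 113, 4

    def keygen():
        sk = random.randrange(1, Q)
        return sk, pow(G, sk, P)

    def encapsulate(pk):
        r = random.randrange(1, Q)
        c = pow(G, r, P)                      # the single group element sent
        k = hashlib.sha256(str(pow(pk, r, P)).encode()).digest()
        return c, k                           # ciphertext overhead: just c

    def decapsulate(sk, c):
        return hashlib.sha256(str(pow(c, sk, P)).encode()).digest()

    sk, pk = keygen()
    c, k = encapsulate(pk)
    assert decapsulate(sk, c) == k

Achieving CCA security at this one-element overhead is exactly what the paper’s impossibility result constrains and what its programmable-hash-based DDH construction accomplishes.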

Goichiro Hanaoka, Takahiro Matsuda, Jacob C. N. Schuldt
Hardness of Computing Individual Bits for One-Way Functions on Elliptic Curves

We prove that if one can predict any of the bits of the input to an elliptic curve based one-way function over a finite field, then we can invert the function. In particular, our result implies that if one can predict any of the bits of the input to a classical pairing-based one-way function with non-negligible advantage over a random guess, then one can efficiently invert this function and thus solve the Fixed Argument Pairing Inversion problem (FAPI-1/FAPI-2). The latter has implications for the security of various pairing-based schemes such as the identity-based encryption scheme of Boneh–Franklin, Hess’ identity-based signature scheme, as well as Joux’s three-party one-round key agreement protocol. Moreover, if one can solve FAPI-1 and FAPI-2 in polynomial time, then one can solve the Computational Diffie–Hellman problem (CDH) in polynomial time.
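For orientation, the bilinearity underlying these one-way functions and the two fixed-argument inversion problems can be stated informally as follows (standard pairing background; see the paper for precise formulations):

$e(aP, bQ) = e(P, Q)^{ab}; \qquad \text{FAPI-1: given } P \text{ and } z = e(P, Q), \text{ find } Q; \qquad \text{FAPI-2: given } Q \text{ and } z = e(P, Q), \text{ find } P.$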

Our result implies that all the bits of the functions defined above are hard-to-compute assuming these functions are one-way. The argument is based on a list-decoding technique via discrete Fourier transforms due to Akavia–Goldwasser–Safra as well as an idea due to Boneh–Shparlinski.

Alexandre Duc, Dimitar Jetchev
Homomorphic Evaluation of the AES Circuit

We describe a working implementation of leveled homomorphic encryption (without bootstrapping) that can evaluate the AES-128 circuit in three different ways. One variant takes just under 36 hours to evaluate an entire AES encryption operation, using NTL (over GMP) as our underlying software platform, and running on a large-memory machine. Using SIMD techniques, we can process over 54 blocks in each evaluation, yielding an amortized rate of just under 40 minutes per block. Another implementation takes just over two and a half days to evaluate the AES operation, but can process 720 blocks in each evaluation, yielding an amortized rate of just over five minutes per block. We also detail a third implementation, which theoretically could yield even better amortized complexity, but in practice turns out to be less competitive.
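Treating the rounded figures as exact, the amortized rates follow directly:

$\frac{36 \times 60\ \text{min}}{54\ \text{blocks}} = 40\ \text{min/block}, \qquad \frac{2.5 \times 24 \times 60\ \text{min}}{720\ \text{blocks}} = 5\ \text{min/block}.$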

For our implementations we develop both AES-specific optimizations as well as several “generic” tools for FHE evaluation. These last tools include (among others) a different variant of the Brakerski-Vaikuntanathan key-switching technique that does not require reducing the norm of the ciphertext vector, and a method of implementing the Brakerski-Gentry-Vaikuntanathan modulus-switching transformation on ciphertexts in CRT representation.

Craig Gentry, Shai Halevi, Nigel P. Smart
Fully Homomorphic Encryption without Modulus Switching from Classical GapSVP

We present a new tensoring technique for LWE-based fully homomorphic encryption. While in all previous works the ciphertext noise grows quadratically (B → B^2·poly(n)) with every multiplication (before “refreshing”), our noise only grows linearly (B → B·poly(n)).
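To see what the linear growth buys, compare the noise after L successive multiplications (a standard back-of-the-envelope comparison, not a computation taken from the paper):

$B \to B^2\,\mathrm{poly}(n) \;\Rightarrow\; B^{2^L}\,\mathrm{poly}(n)^{2^L - 1}, \qquad B \to B\,\mathrm{poly}(n) \;\Rightarrow\; B\,\mathrm{poly}(n)^{L},$

so the modulus-to-noise ratio q/B required for correctness falls from doubly exponential in the depth to n^{O(L)}, which for polylogarithmic depth is quasi-polynomial, matching the quasi-polynomial GapSVP approximation factor mentioned below.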

We use this technique to construct a scale-invariant fully homomorphic encryption scheme, whose properties only depend on the ratio between the modulus q and the initial noise level B, and not on their absolute values.

Our scheme has a number of advantages over previous candidates: It uses the same modulus throughout the evaluation process (no need for “modulus switching”), and this modulus can take arbitrary form. In addition, security can be classically reduced from the worst-case hardness of the GapSVP problem (with quasi-polynomial approximation factor), whereas previous constructions could only exhibit a quantum reduction from GapSVP.

Zvika Brakerski
Backmatter
Metadata
Title: Advances in Cryptology – CRYPTO 2012
Editors: Reihaneh Safavi-Naini, Ran Canetti
Copyright Year: 2012
Publisher: Springer Berlin Heidelberg
Electronic ISBN: 978-3-642-32009-5
Print ISBN: 978-3-642-32008-8
DOI: https://doi.org/10.1007/978-3-642-32009-5
