2014 | Book

Advances in Cryptology – CRYPTO 2014

34th Annual Cryptology Conference, Santa Barbara, CA, USA, August 17-21, 2014, Proceedings, Part I

Edited by: Juan A. Garay, Rosario Gennaro

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

The two-volume set, LNCS 8616 and LNCS 8617, constitutes the refereed proceedings of the 34th Annual International Cryptology Conference, CRYPTO 2014, held in Santa Barbara, CA, USA, in August 2014.

The 60 revised full papers presented in LNCS 8616 and LNCS 8617 were carefully reviewed and selected from 227 submissions. The papers are organized in topical sections on symmetric encryption and PRFs; formal methods; hash functions; groups and maps; lattices; asymmetric encryption and signatures; side channels and leakage resilience; obfuscation; FHE; quantum cryptography; foundations of hardness; number-theoretic hardness; information-theoretic security; key exchange and secure communication; zero knowledge; composable security; secure computation - foundations; secure computation - implementations.

Table of contents

Frontmatter

Symmetric Encryption and PRFs

Security of Symmetric Encryption against Mass Surveillance

Motivated by revelations concerning population-wide surveillance of encrypted communications, we formalize and investigate the resistance of symmetric encryption schemes to mass surveillance. The focus is on algorithm-substitution attacks (ASAs), where a subverted encryption algorithm replaces the real one. We assume that the goal of “big brother” is undetectable subversion, meaning that ciphertexts produced by the subverted encryption algorithm should reveal plaintexts to big brother yet be indistinguishable to users from those produced by the real encryption scheme. We formalize security notions to capture this goal and then offer both attacks and defenses. In the first category we show that successful (from the point of view of big brother) ASAs may be mounted on a large class of common symmetric encryption schemes. In the second category we show how to design symmetric encryption schemes that avoid such attacks and meet our notion of security. The lesson that emerges is the danger of choice: randomized, stateless schemes are subject to attack while deterministic, stateful ones are not.
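
As a concrete illustration of the attack template, here is a toy sketch of ours, in the spirit of the paper's biased-randomness attacks on randomized, stateless schemes (the toy stream cipher and all names are our own, not the paper's construction): the subverted encryptor rejection-samples its randomness until a PRF under big brother's key, applied to the public IV, equals the next bit of the user's key. The outputs are valid ciphertexts that users cannot distinguish from honest ones, yet big brother decodes one key bit per ciphertext.

    import hashlib, hmac, os

    def keystream(key, iv, n):
        # Toy stream cipher: SHA-256 in counter mode (illustration only).
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + iv + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:n]

    def honest_encrypt(key, msg):
        iv = os.urandom(16)
        return iv + bytes(a ^ b for a, b in zip(msg, keystream(key, iv, len(msg))))

    def subverted_encrypt(key, msg, bb_key, bit_index):
        # Rejection-sample the IV until one PRF bit under big brother's key
        # equals the chosen bit of the user's key (about 2 tries on average).
        target = (key[bit_index // 8] >> (bit_index % 8)) & 1
        while True:
            iv = os.urandom(16)
            if hmac.new(bb_key, iv, hashlib.sha256).digest()[0] & 1 == target:
                return iv + bytes(a ^ b for a, b in zip(msg, keystream(key, iv, len(msg))))

    def extract_bit(ct, bb_key):
        # Big brother reads the leaked key bit straight off the IV.
        return hmac.new(bb_key, ct[:16], hashlib.sha256).digest()[0] & 1

    user_key, bb_key = os.urandom(16), os.urandom(16)
    ct = subverted_encrypt(user_key, b"attack at dawn", bb_key, 0)
    assert extract_bit(ct, bb_key) == user_key[0] & 1

Iterating over all key bits recovers the full key from as many ciphertexts; deterministic, stateful schemes remove exactly this freedom to bias the randomness.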

Mihir Bellare, Kenneth G. Paterson, Phillip Rogaway
The Security of Multiple Encryption in the Ideal Cipher Model

Multiple encryption—the practice of composing a blockcipher several times with itself under independent keys—has received considerable attention of late from the standpoint of provable security. Despite these efforts, proving definitive security bounds (i.e., with matching attacks) has remained elusive even for the special case of triple encryption. In this paper we close the gap by improving both the best known attacks and the best known provable security, so that both bounds match. Our results apply for an arbitrary number of rounds and show that the security of ℓ-round multiple encryption is precisely

$\exp(\kappa + \min\{\kappa(\ell'-2)/2,\; n(\ell'-2)/\ell'\})$

where $\exp(t) = 2^t$ and where ℓ′ = 2⌈ℓ/2⌉ is the smallest even integer greater than or equal to ℓ, for all ℓ ≥ 1. Our technique is based on Patarin’s H-coefficient method and relies on a combinatorial result of Chen and Steinberger originally required in the context of key-alternating ciphers.
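
For a concrete reading of this bound (an illustrative instantiation of ours, not a computation from the paper): triple encryption has ℓ = 3, hence ℓ′ = 4, and the bound specializes to

$\exp(\kappa + \min\{\kappa(4-2)/2,\; n(4-2)/4\}) = 2^{\kappa + \min\{\kappa,\, n/2\}}$

which, for DES-like parameters κ = 56 and n = 64, gives $2^{56 + \min\{56,\,32\}} = 2^{88}$ queries.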

Yuanxi Dai, Jooyoung Lee, Bart Mennink, John Steinberger
Minimizing the Two-Round Even-Mansour Cipher

The $r$-round (iterated) Even-Mansour cipher (also known as the key-alternating cipher) defines a block cipher from $r$ fixed public $n$-bit permutations $P_1, \ldots, P_r$ as follows: given a sequence of $n$-bit round keys $k_0, \ldots, k_r$, an $n$-bit plaintext $x$ is encrypted by xoring round key $k_0$, applying permutation $P_1$, xoring round key $k_1$, etc. The (strong) pseudorandomness of this construction in the random permutation model (i.e., when the permutations $P_1, \ldots, P_r$ are public random permutation oracles that the adversary can query in a black-box way) was studied in a number of recent papers, culminating with the work of Chen and Steinberger (EUROCRYPT 2014), who proved that the $r$-round Even-Mansour cipher is indistinguishable from a truly random permutation up to $\mathcal{O}(2^{\frac{rn}{r+1}})$ queries of any adaptive adversary (which is an optimal security bound since it matches a simple distinguishing attack). All results in this entire line of work share the common restriction that they only hold under the assumption that the round keys $k_0, \ldots, k_r$ and the permutations $P_1, \ldots, P_r$ are independent. In particular, for two rounds, the current state of knowledge is that the block cipher $E(x) = k_2 \oplus P_2(k_1 \oplus P_1(k_0 \oplus x))$ is provably secure up to $\mathcal{O}(2^{2n/3})$ queries of the adversary, when $k_0$, $k_1$, and $k_2$ are three independent $n$-bit keys, and $P_1$ and $P_2$ are two independent random $n$-bit permutations. In this paper, we ask whether one can obtain a similar bound for the two-round Even-Mansour cipher from just one $n$-bit key and one $n$-bit permutation. Our answer is positive: when the three $n$-bit round keys $k_0$, $k_1$, and $k_2$ are adequately derived from an $n$-bit master key $k$, and the same permutation $P$ is used in place of $P_1$ and $P_2$, we prove a qualitatively similar $\widetilde{\mathcal{O}}(2^{2n/3})$ security bound (in the random permutation model). To the best of our knowledge, this is the first “beyond the birthday bound” security result for AES-like ciphers that does not assume independent round keys.
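
To make the data flow concrete, here is a minimal executable sketch of the key-alternating construction on toy 16-bit blocks (our own illustration: the seeded permutations and the identity key schedule are placeholders, not one of the paper's proven-secure derivations):

    import random

    N = 2 ** 16  # toy domain: n = 16-bit blocks

    def random_permutation(seed):
        # Stand-in for a fixed public n-bit permutation.
        rng = random.Random(seed)
        table = list(range(N))
        rng.shuffle(table)
        return table

    def even_mansour(perms, keys, x):
        # keys = [k0, ..., kr], perms = [P1, ..., Pr]: xor a key, apply a
        # permutation, xor the next key, and so on.
        y = x ^ keys[0]
        for P, k in zip(perms, keys[1:]):
            y = P[y] ^ k
        return y

    P1, P2 = random_permutation(1), random_permutation(2)
    k = random.randrange(N)            # n-bit master key
    k0, k1, k2 = k, k, k               # placeholder schedule (see lead-in)
    ct = even_mansour([P1, P2], [k0, k1, k2], 0x1234)

The paper's point is precisely that the schedule deriving $k_0, k_1, k_2$ from $k$, and the reuse of a single $P$, must be chosen carefully for the $\widetilde{\mathcal{O}}(2^{2n/3})$ bound to survive.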

Shan Chen, Rodolphe Lampe, Jooyoung Lee, Yannick Seurin, John Steinberger
Block Ciphers – Focus on the Linear Layer (feat. PRIDE)

The linear layer is a core component in any substitution-permutation network block cipher. Its design significantly influences both the security and the efficiency of the resulting block cipher. Surprisingly, not many general constructions are known that allow one to choose trade-offs between security and efficiency. In particular, compared to Sboxes, the linear layer is crucially understudied. In this paper, we propose a general methodology to construct good, sometimes optimal, linear layers allowing for a large variety of trade-offs. We give several instances of our construction and, on top of that, underline its value by presenting a new block cipher. PRIDE is optimized for 8-bit micro-controllers and significantly outperforms all academic solutions both in terms of code size and cycle count.

Martin R. Albrecht, Benedikt Driessen, Elif Bilge Kavun, Gregor Leander, Christof Paar, Tolga Yalçın
Related-Key Security for Pseudorandom Functions Beyond the Linear Barrier

Related-key attacks (RKAs) concern the security of cryptographic primitives in the situation where the key can be manipulated by the adversary. In the RKA setting, the adversary’s power is expressed through the class of related-key deriving (RKD) functions which the adversary is restricted to using when modifying keys. Bellare and Kohno (Eurocrypt 2003) first formalised RKAs and pin-pointed the foundational problem of constructing RKA-secure pseudorandom functions (RKA-PRFs). To date there are few constructions for RKA-PRFs under standard assumptions, and it is a major open problem to construct RKA-PRFs for larger classes of RKD functions. We make significant progress on this problem. We first show how to repair the Bellare-Cash framework for constructing RKA-PRFs and extend it to handle the more challenging case of classes of RKD functions that contain claws. We apply this extension to show that a variant of the Naor-Reingold function already considered by Bellare and Cash is an RKA-PRF for a class of affine RKD functions under the DDH assumption, albeit with an exponential-time security reduction. We then develop a second extension of the Bellare-Cash framework, and use it to show that the same Naor-Reingold variant is actually an RKA-PRF for a class of degree-$d$ polynomial RKD functions under the stronger decisional $d$-Diffie-Hellman inversion assumption. As a significant technical contribution, our proof of this result avoids the exponential-time security reduction that was inherent in the work of Bellare and Cash and in our first result.

Michel Abdalla, Fabrice Benhamouda, Alain Passelègue, Kenneth G. Paterson

Formal Methods

Automated Analysis of Cryptographic Assumptions in Generic Group Models

We initiate the study of principled, automated methods for analyzing hardness assumptions in generic group models, following the approach of symbolic cryptography. We start by defining a broad class of generic and symbolic group models for different settings—symmetric or asymmetric (leveled) $k$-linear groups—and by proving “computational soundness” theorems for the symbolic models. Based on this result, we formulate a very general master theorem that formally relates the hardness of a (possibly interactive) assumption in these models to solving problems in polynomial algebra. Then, we systematically analyze these problems. We identify different classes of assumptions and obtain decidability and undecidability results. Then, we develop and implement automated procedures for verifying the conditions of master theorems, and thus the validity of hardness assumptions in generic group models. The concrete outcome of this work is an automated tool which takes as input the statement of an assumption, and outputs either a proof of its generic hardness or an algebraic attack against the assumption.

Gilles Barthe, Edvard Fagerholm, Dario Fiore, John Mitchell, Andre Scedrov, Benedikt Schmidt

Hash Functions

The Exact PRF-Security of NMAC and HMAC

NMAC is a mode of operation which turns a fixed input-length keyed hash function $f$ into a variable input-length function. A practical single-key variant of NMAC called HMAC is a very popular and widely deployed message authentication code (MAC). Security proofs and attacks for NMAC can typically be lifted to HMAC.

NMAC was introduced by Bellare, Canetti and Krawczyk [Crypto’96], who proved it to be a secure pseudorandom function (PRF), and thus also a MAC, assuming that (1) $f$ is a PRF and (2) the function we get when cascading $f$ is weakly collision-resistant. Unfortunately, HMAC is typically instantiated with cryptographic hash functions like MD5 or SHA-1 for which (2) has been found to be wrong. To restore the provable guarantees for NMAC, Bellare [Crypto’06] showed its security based solely on the assumption that $f$ is a PRF, albeit via a non-uniform reduction.

Our first contribution is a simpler and uniform proof for this fact: if $f$ is an $\varepsilon$-secure PRF (against $q$ queries) and a $\delta$-non-adaptively secure PRF (against $q$ queries), then NMAC$^f$ is an $(\varepsilon + \ell q\delta)$-secure PRF against $q$ queries of length at most $\ell$ blocks each.

We then show that this $\varepsilon + \ell q\delta$ bound is basically tight. For the most interesting case where $\ell q\delta \geq \varepsilon$, we prove this by constructing an $f$ for which an attack with advantage $\ell q\delta$ exists. This also violates the bound $O(\ell\varepsilon)$ on the PRF-security of NMAC recently claimed by Koblitz and Menezes.

Finally, we analyze the PRF-security of a modification of NMAC called NI [An and Bellare, Crypto’99] that differs mainly by using a compression function with an additional keying input. This avoids the constant rekeying on multi-block messages in NMAC and allows for a security proof starting with the standard switch from a PRF to a random function, followed by an information-theoretic analysis. We carry out such an analysis, obtaining a tight $\ell q^2/2^c$ bound for this step, improving over the trivial bound of $\ell^2 q^2/2^c$. The proof borrows combinatorial techniques originally developed for proving the security of CBC-MAC [Bellare et al., Crypto’05].
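
For reference, the standard HMAC construction that instantiates the single-key variant discussed above can be written directly against a concrete hash function. This sketch (ours) checks the textbook ipad/opad definition against Python's built-in hmac module:

    import hashlib, hmac

    def hmac_sha256(key, msg):
        B = 64                                   # SHA-256 block size in bytes
        if len(key) > B:
            key = hashlib.sha256(key).digest()   # hash down over-long keys
        key = key.ljust(B, b"\x00")
        ipad = bytes(b ^ 0x36 for b in key)
        opad = bytes(b ^ 0x5C for b in key)
        inner = hashlib.sha256(ipad + msg).digest()    # inner cascade, key K ^ ipad
        return hashlib.sha256(opad + inner).digest()   # outer call, key K ^ opad

    assert hmac_sha256(b"k", b"m") == hmac.new(b"k", b"m", hashlib.sha256).digest()

NMAC is the two-key idealization of this: the inner cascade is keyed with one key and the outer compression-function call with another, whereas HMAC derives both from a single key via ipad/opad.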

Peter Gaži, Krzysztof Pietrzak, Michal Rybár
Updates on Generic Attacks against HMAC and NMAC

In this paper, we present new generic attacks against HMAC and other similar MACs when instantiated with an $n$-bit output hash function maintaining an $\ell$-bit internal state. Firstly, we describe two types of selective forgery attacks (a forgery for which the adversary commits on the forged message beforehand). The first type is a tight attack which requires $O(2^{\ell/2})$ computations, while the second one requires $O(2^{2\ell/3})$ computations, but offers many more degrees of freedom in the choice of the committed message. Secondly, we propose an improved universal forgery attack which significantly reduces the complexity of the best known attack from $O(2^{5\ell/6})$ to $O(2^{3\ell/4})$. Finally, we describe the very first time-memory tradeoff for key recovery attacks on HMAC. With $O(2^{\ell})$ precomputation, the internal key $K_{out}$ is first recovered with $O(2^{2\ell/3})$ computations by exploiting Hellman’s time-memory tradeoff, and then the other internal key $K_{in}$ is recovered with $O(2^{3\ell/4})$ computations by a novel approach. This tends to indicate an inefficiency in using long keys for HMAC.

Jian Guo, Thomas Peyrin, Yu Sasaki, Lei Wang
Improved Generic Attacks against Hash-Based MACs and HAIFA

The security of HMAC (and more general hash-based MACs) against state-recovery and universal forgery attacks was very recently shown to be suboptimal, following a series of surprising results by Leurent et al. and Peyrin et al. These results have shown that such powerful attacks require much less than $2^{\ell}$ computations, contradicting the common belief (where $\ell$ denotes the internal state size). In this work, we revisit and extend these results, with a focus on properties of concrete hash functions such as a limited message length, and special iteration modes.

We begin by devising the first state-recovery attack on HMAC with a HAIFA hash function (using a block counter in every compression function call), with complexity $2^{4\ell/5}$. Then, we describe improved trade-offs between the message length and the complexity of a state-recovery attack on HMAC. Consequently, we obtain improved attacks on several HMAC constructions used in practice, in which the hash functions limit the maximal message length (e.g., SHA-1 and SHA-2). Finally, we present the first universal forgery attacks which can be applied with short message queries to the MAC oracle. In particular, we devise the first universal forgery attacks applicable to SHA-1 and SHA-2.

Itai Dinur, Gaëtan Leurent
Cryptography from Compression Functions: The UCE Bridge to the ROM

This paper suggests and explores the use of UCE security for the task of turning VIL-ROM schemes into FIL-ROM ones. The benefits we offer over indifferentiability, the current leading method for this task, are the ability to handle multi-stage games and greater efficiency. The paradigm consists of (1) showing that a VIL UCE function can instantiate the VIL RO in the scheme, and (2) constructing the VIL UCE function given a FIL random oracle. The main technical contributions of the paper are domain extension transforms that implement the second step. Leveraging known results for the first step, we automatically obtain FIL-ROM constructions for several primitives whose security notions are underlain by multi-stage games.

Our first domain extender exploits indifferentiability, showing that although the latter does not work directly for multi-stage games it can be used indirectly, through UCE, as a tool for this end. Our second domain extender targets performance. It is parallelizable and shown through implementation to provide significant performance gains over indifferentiable domain extenders.

Mihir Bellare, Viet Tung Hoang, Sriram Keelveedhi
Indistinguishability Obfuscation and UCEs: The Case of Computationally Unpredictable Sources

Random oracles are powerful cryptographic objects. They facilitate the security proofs of an impressive number of practical cryptosystems ranging from KDM-secure and deterministic encryption to point-function obfuscation and many more. However, due to an uninstantiability result of Canetti, Goldreich, and Halevi (STOC 1998) random oracles have become somewhat controversial. Recently, Bellare, Hoang, and Keelveedhi (BHK; CRYPTO 2013 and ePrint 2013/424, August 2013) introduced a new abstraction called Universal Computational Extractors (UCEs), and showed that they suffice to securely replace random oracles in a number of prominent applications, including all those mentioned above, without suffering from the aforementioned uninstantiability result. This, however, leaves open the question of constructing UCEs in the standard model.

We show that the existence of indistinguishability obfuscation (iO) implies (non-black-box) attacks on all the definitions that BHK proposed within their UCE framework in the original version of their paper, in the sense that no concrete hash function can satisfy them. We also show that this limitation can be overcome, to some extent, by restraining the class of admissible adversaries via a statistical notion of unpredictability. Following our attack, BHK (ePrint 2013/424, September 2013) independently adopted this approach in their work.

In the updated version of their paper, BHK (ePrint 2013/424, September 2013) also introduce two other novel source classes, called bounded parallel sources and split sources, which aim at recovering the computational applications of UCEs that fall outside the statistical fix. These notions keep to a computational notion of unpredictability, but impose structural restrictions on the adversary so that our original iO attack no longer applies. We extend our attack to show that indistinguishability obfuscation is sufficient to also break the UCE security of any hash function against bounded parallel sources. Towards this goal, we use the randomized encodings paradigm of Applebaum, Ishai, and Kushilevitz (STOC 2004) to parallelize the obfuscated circuit used in our attack, so that it can be computed by a bounded parallel source whose second stage consists of constant-depth circuits. BHK, in the latest version of their paper (ePrint 2013/424, May 2014), have subsequently replaced bounded parallel sources with new source classes. We conclude by discussing the composability and feasibility of hash functions secure against split sources.

Christina Brzuska, Pooya Farshim, Arno Mittelbach

Groups and Maps

Low Overhead Broadcast Encryption from Multilinear Maps

We use multilinear maps to provide a solution to the long-standing problem of public-key broadcast encryption where all parameters in the system are small. In our constructions, ciphertext overhead, private key size, and public key size are all poly-logarithmic in the total number of users. The systems are fully collusion-resistant against any number of colluders. All our systems are based on an $O(\log N)$-way multilinear map to support a broadcast system for $N$ users. We present three constructions based on different types of multilinear maps and providing different security guarantees. Our systems naturally give identity-based broadcast systems with short parameters.

Dan Boneh, Brent Waters, Mark Zhandry
Security Analysis of Multilinear Maps over the Integers

At Crypto 2013, Coron, Lepoint, and Tibouchi (CLT) proposed a practical Graded Encoding Scheme (GES) over the integers, which has very similar cryptographic features to ideal multilinear maps. In fact, the scheme of Coron et al. is the second proposal of a secure GES, and has advantages over the first scheme of Garg, Gentry, and Halevi (GGH). For example, unlike the GGH construction, the subgroup decision assumption holds in the CLT construction. Immediately following the elegant innovations of the GES, numerous GES-based cryptographic applications were proposed. Although these applications rely on the security of the underlying GES, the security of the GES has not been analyzed in detail, aside from the original papers produced by Garg et al. and Coron et al.

We present an attack algorithm against the system parameters of the CLT GES. The proposed algorithm’s complexity $\tilde{\mathcal{O}}(2^{\rho/2})$ is exponentially smaller than the $\tilde{\mathcal{O}}(2^{\rho})$ of the previous best attack of Coron et al., where $\rho$ is a function of the security parameter. Furthermore, we identify a flaw in the generation of the zero-testing parameter of the CLT GES, which drastically reduces the running time of the proposed algorithm. The experimental results demonstrate the practicality of our attack.

Hyung Tae Lee, Jae Hong Seo
Converting Cryptographic Schemes from Symmetric to Asymmetric Bilinear Groups

We propose a method to convert schemes designed over symmetric bilinear groups into schemes over asymmetric bilinear groups. The conversion assigns variables to one or both of the two source groups in asymmetric bilinear groups so that all original computations in the symmetric bilinear groups go through over asymmetric groups without having to compute isomorphisms between the source groups. Our approach is to represent dependencies among variables using a directed graph, and split it into two graphs so that variables associated to the nodes in each graph are assigned to one of the source groups. Though searching for the best split is cumbersome by hand, our graph-based approach allows us to automate the task with a simple program. With the help of the automated search, our conversion method is applied to several existing schemes including one that has been considered hard to convert.

Masayuki Abe, Jens Groth, Miyako Ohkubo, Takeya Tango
Polynomial Spaces: A New Framework for Composite-to-Prime-Order Transformations

At Eurocrypt 2010, Freeman presented a framework to convert cryptosystems based on composite-order groups into ones that use prime-order groups. Such a transformation is interesting not only from a conceptual point of view, but also since for relevant parameters, operations in prime-order groups are faster than composite-order operations by an order of magnitude. Since Freeman’s work, several other works have shown improvements, but also lower bounds on the efficiency of such conversions.

In this work, we present a new framework for composite-to-prime-order conversions. Our framework is in the spirit of Freeman’s work; however, we develop a different, “polynomial” view of his approach, and revisit several of his design decisions. This eventually leads to significant efficiency improvements, and enables us to circumvent previous lower bounds. Specifically, we show how to verify Groth-Sahai proofs in a prime-order environment (with a symmetric pairing) almost twice as efficiently as the state of the art.

We also show that our new conversions are optimal in a very broad sense. Besides, our conversions also apply in settings with a multilinear map, and can be instantiated from a variety of computational assumptions (including, e.g., the $k$-linear assumption).

Gottfried Herold, Julia Hesse, Dennis Hofheinz, Carla Ràfols, Andy Rupp

Lattices

Revisiting the Gentry-Szydlo Algorithm

We put the Gentry-Szydlo algorithm into a mathematical framework, and show that it is part of a general theory of “lattices with symmetry”. For large ranks, there is no good algorithm that decides whether a given lattice has an orthonormal basis. But when the lattice is given with enough symmetry, we can construct a provably deterministic polynomial time algorithm to accomplish this, based on the work of Gentry and Szydlo. The techniques involve algorithmic algebraic number theory, analytic number theory, commutative algebra, and lattice basis reduction. This sheds new light on the Gentry-Szydlo algorithm, and the ideas should be applicable to a range of questions in cryptography.

H. W. Lenstra, A. Silverberg
Faster Bootstrapping with Polynomial Error

Bootstrapping is a technique, originally due to Gentry (STOC 2009), for “refreshing” ciphertexts of a somewhat homomorphic encryption scheme so that they can support further homomorphic operations. To date, bootstrapping remains the only known way of obtaining fully homomorphic encryption for arbitrary unbounded computations.

Over the past few years, several works have dramatically improved the efficiency of bootstrapping and the hardness assumptions needed to implement it. Recently, Brakerski and Vaikuntanathan (ITCS 2014) reached the major milestone of a bootstrapping algorithm based on Learning With Errors for polynomial approximation factors. Their method uses the Gentry-Sahai-Waters (GSW) cryptosystem (CRYPTO 2013) in conjunction with Barrington’s “circuit sequentialization” theorem (STOC 1986). This approach, however, results in very large polynomial runtimes and approximation factors. (The approximation factors can be improved, but at even greater costs in runtime and space.)

In this work we give a new bootstrapping algorithm whose runtime and associated approximation factor are both small polynomials. Unlike most previous methods, ours implements an elementary and efficient arithmetic procedure, thereby avoiding the inefficiencies inherent to the use of boolean circuits and Barrington’s Theorem. For $2^{\lambda}$ security under conventional lattice assumptions, our method requires only a quasi-linear Õ($\lambda$) number of homomorphic operations on GSW ciphertexts, which is optimal (up to polylogarithmic factors) for schemes that encrypt just one bit per ciphertext. As a contribution of independent interest, we also give a technically simpler variant of the GSW system and a tighter error analysis for its homomorphic operations.

Jacob Alperin-Sheriff, Chris Peikert
Hardness of k-LWE and Applications in Traitor Tracing

We introduce the $k$-LWE problem, a Learning With Errors variant of the $k$-SIS problem. The Boneh-Freeman reduction from SIS to $k$-SIS suffers from an exponential loss in $k$. We improve and extend it to an LWE to $k$-LWE reduction with a polynomial loss in $k$, by relying on a new technique involving trapdoors for random integer kernel lattices. Based on this hardness result, we present the first algebraic construction of a traitor tracing scheme whose security relies on the worst-case hardness of standard lattice problems. The proposed LWE traitor tracing is almost as efficient as the LWE encryption. Further, it achieves public traceability, i.e., it allows the authority to delegate the tracing capability to “untrusted” parties. To this aim, we introduce the notion of a projective sampling family in which each sampling function is keyed and, with a projection of the key on a well-chosen space, one can simulate the sampling function in a computationally indistinguishable way. The construction of a projective sampling family from $k$-LWE allows us to achieve public traceability by publishing the projected keys of the users. We believe that the new lattice tools and the projective sampling family are quite general and may have applications in other areas.

San Ling, Duong Hieu Phan, Damien Stehlé, Ron Steinfeld
Improved Short Lattice Signatures in the Standard Model

We present a signature scheme provably secure in the standard model (no random oracles) based on the worst-case complexity of approximating the Shortest Vector Problem in ideal lattices within polynomial factors. The distinguishing feature of our scheme is that it achieves short signatures (consisting of a single lattice vector) and relatively short public keys (consisting of $O(\log n)$ vectors). Previous lattice schemes in the standard model with similarly short signatures, due to Boyen (PKC 2010) and Micciancio and Peikert (Eurocrypt 2012), had substantially longer public keys consisting of $\Omega(n)$ vectors (even when implemented with ideal lattices).

Léo Ducas, Daniele Micciancio
New and Improved Key-Homomorphic Pseudorandom Functions

A key-homomorphic pseudorandom function (PRF) family $\{F_s : D \to R\}$ allows one to efficiently compute the value $F_{s+t}(x)$ given $F_s(x)$ and $F_t(x)$. Such functions have many applications, such as distributing the operation of a key-distribution center and updatable symmetric encryption. The only known construction of key-homomorphic PRFs without random oracles, due to Boneh et al. (CRYPTO 2013), is based on the learning with errors (LWE) problem and hence on worst-case lattice problems. However, the security proof relies on a very strong LWE assumption (i.e., very large approximation factors), and hence has quite inefficient parameter sizes and runtimes.

In this work we give new constructions of key-homomorphic PRFs that are based on much weaker LWE assumptions, are much more efficient in time and space, and are still highly parallel. More specifically, we improve the LWE approximation factor from exponential in the input length to exponential in its logarithm (or less). For input length $\lambda$ and $2^{\lambda}$ security against known lattice algorithms, we improve the key size from $\lambda^3$ to $\lambda$ bits, the public parameters from $\lambda^6$ to $\lambda^2$ bits, and the runtime from $\lambda^7$ to $\lambda^{\omega+1}$ bit operations (ignoring polylogarithmic factors in $\lambda$), where $\omega \in [2, 2.373]$ is the exponent of matrix multiplication. In addition, we give even more efficient ring-LWE-based constructions whose key sizes, public parameters, and incremental runtimes on consecutive inputs are all quasi-linear Õ($\lambda$), which is optimal up to polylogarithmic factors. To our knowledge, these are the first low-depth PRFs (whether key homomorphic or not) enjoying any of these efficiency measures together with nontrivial proofs of $2^{\lambda}$ security under any conventional assumption.
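
The key-homomorphic property itself is easy to see in code for the classical random-oracle construction $F_s(x) = H(x)^s$ (not the paper's lattice-based, standard-model schemes; this toy sketch of ours only illustrates the homomorphism $F_{s+t}(x) = F_s(x) \cdot F_t(x)$):

    import hashlib, secrets

    p = 2**255 - 19          # prime modulus of a toy group (illustration only)
    q = p - 1                # exponents live modulo the group order

    def H(x):
        # Hash into Z_p^*, modeled as a random oracle.
        return int.from_bytes(hashlib.sha256(x).digest(), "big") % p or 1

    def F(s, x):
        return pow(H(x), s, p)

    s, t = secrets.randbelow(q), secrets.randbelow(q)
    x = b"some input"
    # Key-homomorphism: F_{s+t}(x) is computable from F_s(x) and F_t(x) alone.
    assert F((s + t) % q, x) == F(s, x) * F(t, x) % p

The paper's point is to obtain this property in the standard model with small parameters; the sketch above needs a random oracle and a DDH-hard group.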

Abhishek Banerjee, Chris Peikert

Asymmetric Encryption and Signatures

Homomorphic Signatures with Efficient Verification for Polynomial Functions

A homomorphic signature scheme for a class of functions $\mathcal{C}$ allows a client to sign and upload elements of some data set $D$ on a server. At any later point, the server can derive a (publicly verifiable) signature that certifies that some $y$ is the result of computing some $f \in \mathcal{C}$ on the basic data set $D$. This primitive has been formalized by Boneh and Freeman (Eurocrypt 2011), who also proposed the only known construction for the class of multivariate polynomials of fixed degree $d \geq 1$. In this paper we construct new homomorphic signature schemes for such functions. Our schemes provide the first alternatives to the one of Boneh-Freeman, and improve over their solution in three main aspects. First, our schemes do not rely on random oracles. Second, we obtain security in a stronger fully-adaptive model: while the solution of Boneh-Freeman requires the adversary to query messages in a given data set all at once, our schemes can tolerate adversaries that query one message at a time, in a fully-adaptive way. Third, signature verification is more efficient (in an amortized sense) than computing the function from scratch. The latter property opens the way to using homomorphic signatures for publicly-verifiable computation on outsourced data. Our schemes rely on a new assumption on leveled graded encodings which we show to hold in a generic model.

Dario Catalano, Dario Fiore, Bogdan Warinschi
Structure-Preserving Signatures from Type II Pairings

We investigate structure-preserving signatures in asymmetric bilinear groups with an efficiently computable homomorphism from one source group to the other, i.e., the Type II setting. It has been shown that in the Type I and Type III settings, structure-preserving signatures need at least 2 verification equations and 3 group elements. It is therefore natural to conjecture that this would also be required in the intermediate Type II setting, but surprisingly this turns out not to be the case. We construct structure-preserving signatures in the Type II setting that only require a single verification equation and consist of only 2 group elements. This shows that the Type II setting with partial asymmetry is different from the other two settings in a way that permits the construction of cryptographic schemes with unique properties.

We also investigate lower bounds on the size of the public verification key in the Type II setting. Previous work on structure-preserving signatures has explored lower bounds on the number of verification equations and the number of group elements in a signature, but the size of the verification key has not been investigated before. We show that in the Type II setting it is necessary to have at least 2 group elements in the public verification key in a signature scheme with a single verification equation.

Our constructions match the lower bounds so they are optimal with respect to verification complexity, signature sizes and verification key sizes. In fact, in terms of verification complexity, they are the most efficient structure preserving signature schemes to date.

We give two structure-preserving signature schemes with a single verification equation where both the signatures and the public verification keys consist of two group elements each. One signature scheme is strongly existentially unforgeable, the other is fully randomizable. Having such simple and elegant structure-preserving signatures may make the Type II setting the easiest to use when designing new structure-preserving cryptographic schemes, and lead to schemes with the greatest conceptual simplicity.

Masayuki Abe, Jens Groth, Miyako Ohkubo, Mehdi Tibouchi
(Hierarchical) Identity-Based Encryption from Affine Message Authentication

We provide a generic transformation from any affine message authentication code (MAC) to an identity-based encryption (IBE) scheme over pairing groups of prime order. If the MAC satisfies a security notion related to unforgeability against chosen-message attacks and, for example, the $k$-Linear assumption holds, then the resulting IBE scheme is adaptively secure. Our security reduction is tightness preserving, i.e., if the MAC has a tight security reduction so has the IBE scheme. Furthermore, the transformation also extends to hierarchical identity-based encryption (HIBE). We also show how to construct affine MACs with a tight security reduction to standard assumptions. This, among other things, provides the first tightly secure HIBE in the standard model.

Olivier Blazy, Eike Kiltz, Jiaxin Pan
Witness Encryption from Instance Independent Assumptions

Witness encryption was proposed by Garg, Gentry, Sahai, and Waters as a means to encrypt to an instance, $x$, of an NP language and produce a ciphertext. In such a system, any decryptor that knows of a witness $w$ that $x$ is in the language can decrypt the ciphertext and learn the message. In addition to proposing the concept, their work provided a candidate for a witness encryption scheme built using multilinear encodings. However, one significant limitation of the work is that the candidate had no proof of security (other than essentially assuming the scheme secure).

In this work we provide a proof framework for proving witness encryption schemes secure under instance independent assumptions. At the highest level we introduce the abstraction of positional witness encryption, which allows a proof reduction of a witness encryption scheme via a sequence of $2^n$ hybrid experiments, where $n$ is the witness length of the NP-statement. Each hybrid step proceeds by looking at a single witness candidate and using the fact that it does not satisfy the NP-relation to move the proof forward. We show that this “isolation strategy” enables one to create a witness encryption system that is provably secure from assumptions that are (maximally) independent of any particular encryption instance. We demonstrate the viability of our approach by implementing this strategy using level $n$-linear encodings, where $n$ is the witness length. Our complexity assumption has ≈ $n$ group elements, but does not otherwise depend on the NP-instance $x$.

Craig Gentry, Allison Lewko, Brent Waters

Side Channels and Leakage Resilience I

RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis

Many computers emit a high-pitched noise during operation, due to vibration in some of their electronic components. These acoustic emanations are more than a nuisance: as we show in this paper, they can leak the key used in cryptographic operations. This is surprising, since the acoustic information has very low bandwidth (under 20 kHz using common microphones, and a few hundred kHz using ultrasound microphones), which is many orders of magnitude below the GHz-scale clock rates of the attacked computers. We describe a new acoustic cryptanalysis attack which can extract full 4096-bit RSA keys from the popular GnuPG software, within an hour, using the sound generated by the computer during the decryption of some chosen ciphertexts. We experimentally demonstrate such attacks, using a plain mobile phone placed next to the computer, or a more sensitive microphone placed 10 meters away.

Daniel Genkin, Adi Shamir, Eran Tromer
On the Impossibility of Cryptography with Tamperable Randomness

We initiate a study of the security of cryptographic primitives in the presence of efficient tampering attacks on the randomness of honest parties. More precisely, we consider $p$-tampering attackers that may efficiently tamper with each bit of the honest parties’ random tape with probability $p$, but have to do so in an “online” fashion. Our main result is a strong negative result: we show that any secure encryption scheme, bit commitment scheme, or zero-knowledge protocol can be “broken” with probability $p$ by a $p$-tampering attacker. The core of this result is a new Fourier analytic technique for biasing the output of bounded-value functions, which may be of independent interest.

We also show that this result cannot be extended to primitives such as signature schemes and identification protocols: assuming the existence of one-way functions, such primitives can be made resilient to $(1/\text{poly}(n))$-tampering attacks, where $n$ is the security parameter.
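
To see the model in action, here is a small simulation (ours, purely illustrative) against the simplest bounded-value target, the parity of the random tape: an online attacker who may overwrite each bit with independent probability $p$, seeing only the prefix, shifts Pr[parity = 1] from 1/2 to 1/2 + p/2, since against parity only a tampering opportunity on the final bit helps.

    import random

    def tampered_parity(n, p):
        # One run over an n-bit tape; each bit is tamperable with probability p.
        parity = 0
        for i in range(n):
            bit = random.randrange(2)
            if random.random() < p and i == n - 1:
                bit = 1 ^ parity      # greedy choice: force the final parity to 1
            parity ^= bit
        return parity

    n, p, trials = 8, 0.25, 200_000
    freq = sum(tampered_parity(n, p) for _ in range(trials)) / trials
    print(f"Pr[parity=1] ~ {freq:.3f}  (untampered 0.500, predicted {0.5 + p/2:.3f})")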

Per Austrin, Kai-Min Chung, Mohammad Mahmoody, Rafael Pass, Karn Seth

Obfuscation I

Multiparty Key Exchange, Efficient Traitor Tracing, and More from Indistinguishability Obfuscation

In this work, we show how to use indistinguishability obfuscation (iO) to build multiparty key exchange, efficient broadcast encryption, and efficient traitor tracing. Our schemes enjoy several interesting properties that have not been achievable before:

Our multiparty non-interactive key exchange protocol does not require a trusted setup. Moreover, the size of the published value from each user is independent of the total number of users.

Our broadcast encryption schemes support distributed setup, where users choose their own secret keys rather than be given secret keys by a trusted entity. The broadcast ciphertext size is independent of the number of users.

Our traitor tracing system is fully collusion resistant with short ciphertexts, secret keys, and public key. Ciphertext size is logarithmic in the number of users and secret key size is independent of the number of users. Our public key size is polylogarithmic in the number of users. The recent functional encryption system of Garg, Gentry, Halevi, Raykova, Sahai, and Waters also leads to a traitor tracing scheme with similar ciphertext and secret key size, but the construction in this paper is simpler and more direct. These constructions resolve an open problem relating to differential privacy.

Generalizing our traitor tracing system gives a private broadcast encryption scheme (where broadcast ciphertexts reveal minimal information about the recipient set) with optimal size ciphertext.

Several of our proofs of security introduce new tools for proving security using indistinguishability obfuscation.

Dan Boneh, Mark Zhandry
Indistinguishability Obfuscation from Semantically-Secure Multilinear Encodings

We define a notion of semantic security of multilinear (a.k.a. graded) encoding schemes, which stipulates security of a class of algebraic “decisional” assumptions: roughly speaking, we require that for every nuPPT distribution $D$ over two constant-length sequences $m_0$, $m_1$ and auxiliary elements $z$ such that all arithmetic circuits (respecting the multilinear restrictions and ending with a zero-test) are constant with overwhelming probability over $(m_b, z)$, $b \in \{0,1\}$, we have that encodings of $m_0, z$ are computationally indistinguishable from encodings of $m_1, z$. Assuming the existence of semantically secure multilinear encodings and the LWE assumption, we demonstrate the existence of indistinguishability obfuscators for all polynomial-size circuits.

Rafael Pass, Karn Seth, Sidharth Telang
On the Implausibility of Differing-Inputs Obfuscation and Extractable Witness Encryption with Auxiliary Input

The notion of differing-inputs obfuscation (diO) was introduced by Barak et al. (CRYPTO 2001). It guarantees that, for any two circuits $C_0$, $C_1$, if it is difficult to come up with an input $x$ on which $C_0(x) \neq C_1(x)$, then it should also be difficult to distinguish the obfuscation of $C_0$ from that of $C_1$. This is a strengthening of indistinguishability obfuscation, where the above is only guaranteed for circuits that agree on all inputs: $C_0(x) = C_1(x)$ for all $x$. Two recent works of Ananth et al. (ePrint 2013) and Boyle et al. (TCC 2014) study the notion of diO in the setting where the attacker is also given some auxiliary information related to the circuits, showing that this notion leads to many interesting applications.

In this work, we show that the existence of general-purpose diO with general auxiliary input has a surprising consequence: it implies that a specific circuit $C^*$ with specific auxiliary input $aux^*$ cannot be obfuscated in a way that hides some specific information. In other words, under the conjecture that such special-purpose obfuscation exists, we show that general-purpose diO cannot exist. We do not know if this special-purpose obfuscation assumption is implied by diO itself, and hence we do not get an unconditional impossibility result. However, the special-purpose obfuscation assumption is a falsifiable assumption which we do not know how to break for candidate obfuscation schemes. Showing the existence of general-purpose diO with general auxiliary input would necessitate showing how to break this assumption.

We also show that the special-purpose obfuscation assumption implies the impossibility of extractable witness encryption with auxiliary input, a notion proposed by Goldwasser et al. (CRYPTO 2013). A variant of this assumption also implies the impossibility of “output-only dependent” hardcore bits for general one-way functions, as recently constructed by Bellare and Tessaro (ePrint 2013) using diO.

Sanjam Garg, Craig Gentry, Shai Halevi, Daniel Wichs

FHE

Maliciously Circuit-Private FHE

We present a framework for transforming FHE (fully homomorphic encryption) schemes with no circuit privacy requirements into maliciously circuit-private FHE. That is, even if both a maliciously formed public key and ciphertext are used, encrypted outputs only reveal the evaluation of the circuit on some well-formed input $x^*$. Previous literature on FHE only considered semi-honest circuit privacy. Circuit-private FHE schemes have direct applications to computing on encrypted data. In that setting, one party (a receiver) holding an input $x$ wishes to learn the evaluation of a circuit $C$ held by another party (a sender). The goal is to make the receiver’s work sublinear in (and ideally independent of) $\left\lvert C \right\rvert$, using a 2-message protocol. The transformation technique may be of independent interest, and have various additional applications. The framework uses techniques akin to Gentry’s bootstrapping and conditional disclosure of secrets (CDS [AIR01]), combining a non-circuit-private FHE scheme with a homomorphic encryption (HE) scheme for a smaller class of circuits which is maliciously circuit-private. We devise the first known maliciously circuit-private FHE by instantiating our framework with various (standard) FHE schemes from the literature.

Rafail Ostrovsky, Anat Paskin-Cherniavsky, Beni Paskin-Cherniavsky
Algorithms in HElib

HElib is a software library that implements homomorphic encryption (HE), specifically the Brakerski-Gentry-Vaikuntanathan (BGV) scheme, focusing on effective use of the Smart-Vercauteren ciphertext packing techniques and the Gentry-Halevi-Smart optimizations. The underlying cryptosystem serves as the equivalent of a “hardware platform” for HElib, in that it defines a set of operations that can be applied homomorphically, and specifies their cost. This “platform” is a SIMD environment (somewhat similar to Intel SSE and the like), but with unique cost metrics and parameters. In this report we describe some of the algorithms and optimization techniques that are used in HElib for data movement, linear algebra, and other operations over this “platform.”
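
As one concrete example of the kind of "platform" algorithm the report covers, a matrix-vector product over packed slots decomposes into rotations and slot-wise multiplications (the diagonal method). This plaintext toy model of ours shows the data movement without any HElib API: a Python list stands in for a packed ciphertext, list rotation for the homomorphic rotate, and elementwise multiplication for the slot-wise product.

    def rotate(v, k):
        # Stand-in for the homomorphic rotation of packed slots.
        return v[k:] + v[:k]

    def mat_vec(A, v):
        # acc[i] = sum_k A[i][(i+k) % n] * v[(i+k) % n] = sum_j A[i][j] * v[j]
        n = len(v)
        acc = [0] * n
        for k in range(n):
            diag = [A[i][(i + k) % n] for i in range(n)]   # k-th generalized diagonal
            prod = [d * x for d, x in zip(diag, rotate(v, k))]
            acc = [a + p for a, p in zip(acc, prod)]
        return acc

    A = [[1, 2], [3, 4]]
    assert mat_vec(A, [5, 6]) == [1*5 + 2*6, 3*5 + 4*6]   # == [17, 39]

The point of the decomposition is that only n rotations and n slot-wise multiplications are needed, all of which are native operations of the SIMD "platform".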

Shai Halevi, Victor Shoup
Backmatter
Metadata
Title
Advances in Cryptology – CRYPTO 2014
Edited by
Juan A. Garay
Rosario Gennaro
Copyright year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-44371-2
Print ISBN
978-3-662-44370-5
DOI
https://doi.org/10.1007/978-3-662-44371-2
