
About this Book

This book constitutes the proceedings of the 33rd Annual International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT 2014, held in Copenhagen, Denmark, in May 2014. The 38 full papers included in this volume were carefully reviewed and selected from 197 submissions. They deal with public-key cryptanalysis, identity-based encryption, key derivation and quantum computing, secret-key analysis and implementations, obfuscation and multilinear maps, authenticated encryption, symmetric encryption, multi-party encryption, side-channel attacks, signatures and public-key encryption, functional encryption, foundations and multi-party computation.

Table of Contents

Frontmatter

Public-Key Cryptanalysis

A Heuristic Quasi-Polynomial Algorithm for Discrete Logarithm in Finite Fields of Small Characteristic

The difficulty of computing discrete logarithms in fields $\mathbb{F}_{q^k}$ depends on the relative sizes of $k$ and $q$. Until recently, all the cases had a sub-exponential complexity of type $L(1/3)$, similar to the factorization problem. In 2013, Joux designed a new algorithm with a complexity of $L(1/4 + \epsilon)$ in small characteristic. In the same spirit, we propose in this article another heuristic algorithm that provides a quasi-polynomial complexity when $q$ is of size at most comparable with $k$. By quasi-polynomial, we mean a runtime of $n^{O(\log n)}$ where $n$ is the bit-size of the input. For larger values of $q$ that stay below the limit $L_{q^k}(1/3)$, our algorithm loses its quasi-polynomial nature, but still surpasses the Function Field Sieve. Complexity results in this article rely on heuristics which have been checked experimentally.
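For reference (a standard definitional gloss added here, not part of the abstract), the $L$-notation used above is the usual subexponential complexity scale:

```latex
L_{q^k}(\alpha) \;=\; \exp\Bigl( O\bigl( (\log q^k)^{\alpha}\,(\log\log q^k)^{1-\alpha} \bigr) \Bigr)
```

so $\alpha = 0$ corresponds to polynomial time and $\alpha = 1$ to fully exponential time; a quasi-polynomial runtime $n^{O(\log n)}$ sits qualitatively below every $L(\alpha)$ with $\alpha > 0$.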

Razvan Barbulescu, Pierrick Gaudry, Antoine Joux, Emmanuel Thomé

Polynomial Time Attack on Wild McEliece over Quadratic Extensions

We present a polynomial-time structural attack against the McEliece system based on Wild Goppa codes from a quadratic finite field extension. This attack uses the fact that such codes can be distinguished from random codes to compute a filtration, that is, a family of nested subcodes that reveals their secret algebraic description.

Alain Couvreur, Ayoub Otmani, Jean-Pierre Tillich

Symmetrized Summation Polynomials: Using Small Order Torsion Points to Speed Up Elliptic Curve Index Calculus

Decomposition-based index calculus methods are currently efficient only for elliptic curves $E$ defined over non-prime finite fields of very small extension degree $n$. This corresponds to the fact that the Semaev summation polynomials, which encode the relation search (or “sieving”), grow over-exponentially with $n$. Actually, even their computation is a first stumbling block, and the largest Semaev polynomial ever computed is the 6th. Following ideas from Faugère, Gaudry, Huot and Renault, our goal is to use the existence of small order torsion points on $E$ to define new summation polynomials whose symmetrized expressions are much more compact and easier to compute. This setting allows us to consider smaller factor bases, and the high sparsity of the new summation polynomials provides a very efficient decomposition step. In this paper the focus is on 2-torsion points, as this is the most important case in practice. We obtain records of two kinds: we successfully compute up to the 8th symmetrized summation polynomial and give new timings for the computation of relations with degree 5 extension fields.

Jean-Charles Faugère, Louise Huot, Antoine Joux, Guénaël Renault, Vanessa Vitse

Identity-Based Encryption

Why Proving HIBE Systems Secure Is Difficult

Proving security of Hierarchical Identity-Based Encryption (HIBE) and Attribute-Based Encryption (ABE) schemes is a challenging problem. There are multiple well-known schemes in the literature where the best known (adaptive) security proofs degrade exponentially in the maximum hierarchy depth. However, we do not have a rigorous understanding of why better proofs are not known. (For ABE, the analog of hierarchy depth is the maximum number of attributes used in a ciphertext.)

In this work, we define a certain commonly found checkability property on ciphertexts and private keys. Roughly, the property states that any two different private keys that are both “supposed to” decrypt a ciphertext will decrypt it to the same message. We show that any simple black-box reduction to a non-interactive assumption for a HIBE or ABE system that satisfies this property will suffer an exponential degradation of security.

Allison Lewko, Brent Waters

Identity-Based Encryption Secure against Selective Opening Chosen-Ciphertext Attack

Security against selective opening attack (SOA) requires that in a multi-user setting, even if an adversary has access to all ciphertexts from users, and adaptively corrupts some fraction of the users by exposing not only their messages but also the random coins, the remaining unopened messages retain their privacy. Recently, Bellare, Waters and Yilek considered SOA-security in the identity-based setting, and presented the first identity-based encryption (IBE) schemes that are proven secure against selective opening chosen plaintext attack (SO-CPA). However, how to achieve SO-CCA security for IBE is still open.

In this paper, we introduce a new primitive called extractable IBE and define its IND-ID-CCA security notion. We present a generic construction of SO-CCA secure IBE from an IND-ID-CCA secure extractable IBE with “One-Sided Public Openability” (1SPO), a collision-resistant hash function and a strengthened cross-authentication code. Finally, we propose two concrete constructions of extractable 1SPO-IBE schemes, resulting in the first simulation-based SO-CCA secure IBE schemes without random oracles.

Junzuo Lai, Robert H. Deng, Shengli Liu, Jian Weng, Yunlei Zhao

Key Derivation and Quantum Computing

Key Derivation without Entropy Waste

We revisit the classical problem of converting an imperfect source of randomness into a usable cryptographic key. Assume that we have some cryptographic application $P$ that expects a uniformly random $m$-bit key $R$ and ensures that the best attack (in some complexity class) against $P(R)$ has success probability at most $\delta$. Our goal is to design a key-derivation function (KDF) $h$ that converts any random source $X$ of min-entropy $k$ into a sufficiently “good” key $h(X)$, guaranteeing that $P(h(X))$ has comparable security $\delta'$ which is “close” to $\delta$.

Seeded randomness extractors provide a generic way to solve this problem for all applications $P$, with resulting security $\delta' = O(\delta)$, provided that we start with entropy $k \ge m + 2\log(1/\delta) - O(1)$. By a result of Radhakrishnan and Ta-Shma, this bound on $k$ (called the “RT-bound”) is also known to be tight in general. Unfortunately, in many situations the loss of $2\log(1/\delta)$ bits of entropy is unacceptable. This motivates the study of KDFs with less entropy waste, obtained by placing some restrictions on the source $X$ or the application $P$.

In this work we obtain the following new positive and negative results in this regard:

Efficient samplability of the source $X$ does not help beat the RT-bound for general applications. This resolves the SRT (samplable RT) conjecture of Dachman-Soled et al. [DGKM12] in the affirmative, and also shows that the existence of computationally-secure extractors beating the RT-bound implies the existence of one-way functions.

We continue in the line of work initiated by Barak et al. [BDK+11] and construct new information-theoretic KDFs which beat the RT-bound for large but restricted classes of applications. Specifically, we design efficient KDFs that work for all unpredictability applications $P$ (e.g., signatures, MACs, one-way functions, etc.) and can either: (1) extract all of the entropy $k = m$ with a very modest security loss $\delta' = O(\delta \cdot \log(1/\delta))$, or alternatively, (2) achieve essentially optimal security $\delta' = O(\delta)$ with a very modest entropy loss $k \ge m + \log\log(1/\delta)$. In comparison, the best prior results from [BDK+11] for this class of applications would only guarantee $\delta' = O(\sqrt{\delta})$ when $k = m$, and would need $k \ge m + \log(1/\delta)$ to get $\delta' = O(\delta)$.
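To make the entropy accounting concrete, here is a small numeric sketch (our own illustration, not from the paper) comparing the generic RT-bound with the bound stated above for unpredictability applications:

```python
import math

def rt_bound(m, delta):
    """Generic seeded-extractor requirement: k >= m + 2*log2(1/delta), up to O(1)."""
    return m + 2 * math.log2(1 / delta)

def unpredictability_bound(m, delta):
    """Bound for unpredictability applications: k >= m + log2(log2(1/delta))."""
    return m + math.log2(math.log2(1 / delta))

# For a 128-bit key and target security delta = 2^-64:
# the RT-bound demands 128 + 2*64 = 256 bits of min-entropy,
# while the unpredictability bound demands only 128 + log2(64) = 134 bits.
```

The gap of roughly $2\log(1/\delta)$ bits is exactly the "entropy waste" the paper sets out to avoid.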

The weaker bounds of [BDK+11] hold for a larger class of so-called “square-friendly” applications (which includes all unpredictability, but also some important indistinguishability, applications). Unfortunately, we show that these weaker bounds are tight for the larger class of applications.

We abstract out a clean, information-theoretic notion of $(k, \delta, \delta')$-unpredictability extractors, which guarantee “induced” security $\delta'$ for any $\delta$-secure unpredictability application $P$, and characterize the parameters achievable for such unpredictability extractors. Of independent interest, we also relate this notion to the previously-known notion of (min-entropy) condensers, and improve the state-of-the-art parameters for such condensers.

Yevgeniy Dodis, Krzysztof Pietrzak, Daniel Wichs

Efficient Non-malleable Codes and Key-Derivation for Poly-size Tampering Circuits

Non-malleable codes, defined by Dziembowski, Pietrzak and Wichs (ICS ’10), provide roughly the following guarantee: if a codeword $c$ encoding some message $x$ is tampered to $c' = f(c)$ such that $c' \ne c$, then the tampered message $x'$ contained in $c'$ reveals no information about $x$. Non-malleable codes have applications to immunizing cryptosystems against tampering attacks and related-key attacks.

One cannot have an efficient non-malleable code that protects against all efficient tampering functions $f$. However, in this work we show “the next best thing”: for any polynomial bound $s$ given a priori, there is an efficient non-malleable code that protects against all tampering functions $f$ computable by a circuit of size $s$. More generally, for any family of tampering functions $\mathcal{F}$ of size $|\mathcal{F}| \leq 2^{s}$, there is an efficient non-malleable code that protects against all $f \in \mathcal{F}$. The rate of our codes, defined as the ratio of message to codeword size, approaches 1. Our results are information-theoretic and our main proof technique relies on a careful probabilistic method argument using limited independence. As a result, we get an efficiently samplable family of efficient codes, such that a random member of the family is non-malleable with overwhelming probability. Alternatively, we can view the result as providing an efficient non-malleable code in the “common reference string” (CRS) model.

We also introduce a new notion of non-malleable key derivation, which uses randomness $x$ to derive a secret key $y = h(x)$ in such a way that, even if $x$ is tampered to a different value $x' = f(x)$, the derived key $y' = h(x')$ does not reveal any information about $y$. Our results for non-malleable key derivation are analogous to those for non-malleable codes.

As a useful tool in our analysis, we rely on the notion of “leakage-resilient storage” of Davì, Dziembowski and Venturi (SCN ’10) and, as a result of independent interest, we also significantly improve on the parameters of such schemes.

Sebastian Faust, Pratyay Mukherjee, Daniele Venturi, Daniel Wichs

Revocable Quantum Timed-Release Encryption

Timed-release encryption is a kind of encryption scheme in which a recipient can decrypt only after a specified amount of time $T$ (assuming that we have a moderately precise estimate of his computing power). A revocable timed-release encryption is one where, before the time $T$ is over, the sender can “give back” the timed-release encryption, provably losing all access to the data. We show that revocable timed-release encryption without trusted parties is possible using quantum cryptography (while trivially impossible classically).

Along the way, we develop two proof techniques in the quantum random oracle model that we believe may have applications also for other protocols.

Finally, we also develop another new primitive, unknown recipient encryption, which allows us to send a message to an unknown/unspecified recipient over an insecure network in such a way that at most one recipient will get the message.

Dominique Unruh

Secret-Key Analysis and Implementations

Generic Universal Forgery Attack on Iterative Hash-Based MACs

In this article, we study the security of iterative hash-based MACs, such as HMAC or NMAC, with regard to universal forgery attacks. Leveraging recent advances in the analysis of functional graphs built from the iteration of HMAC or NMAC, we exhibit the very first generic universal forgery attack against hash-based MACs. In particular, our work implies that the universal forgery resistance of an $n$-bit output HMAC construction is not $2^n$ queries, as long believed by the community. The techniques we introduce extend the previous functional-graph-based attacks that only took into account the cycle structure or the collision probability: we show that one can extract much more meaningful secret information by also analyzing the distance of a node from the cycle of its component in the functional graph.
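The functional-graph quantities the attack exploits — cycle length, and a node’s distance to the cycle of its component — can be computed for any small function by plain iteration. The following sketch is our own generic illustration, not the attack itself:

```python
def tail_and_cycle(f, x0):
    """Iterate f from x0 until a value repeats.
    Returns (distance of x0 to the cycle of its component, cycle length)."""
    seen = {}          # value -> step index at which it was first reached
    x, i = x0, 0
    while x not in seen:
        seen[x] = i
        x = f(x)
        i += 1
    # x is the first repeated value; it entered the walk at step seen[x]
    return seen[x], i - seen[x]
```

For an iterated MAC, the node is a chaining value and `f` is the (secret) compression step; the paper shows these distances leak meaningful secret information.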

Thomas Peyrin, Lei Wang

Links between Truncated Differential and Multidimensional Linear Properties of Block Ciphers and Underlying Attack Complexities

The sheer number of apparently different statistical attacks on block ciphers has raised the question of their relationships, which would allow us to classify them and determine those that give essentially complementary information about the security of block ciphers. While mathematical links between some statistical attacks have been derived in the last couple of years, the important link between general truncated differential and multidimensional linear attacks has been missing. In this work we close this gap. The new link is then exploited to relate the complexities of chosen-plaintext and known-plaintext distinguishing attacks of differential and linear types, and further, to explore the relations between the key-recovery attacks. Our analysis shows that a statistical saturation attack is the same as a truncated differential attack, which allows us, for the first time, to provide a justifiable analysis of the complexity of the statistical saturation attack and discuss its validity on 24 rounds of the PRESENT block cipher. By studying the data, time and memory complexities of a multidimensional linear key-recovery attack and its relation with a truncated differential one, we also show that in most cases a known-plaintext attack can be transformed into a less costly chosen-plaintext attack. In particular, we show that there is a differential attack in the chosen-plaintext model on 26 rounds of PRESENT with less memory complexity than the best previous attack, which assumes known plaintext. The links between the statistical attacks discussed in this paper give further examples of attacks where the method used to sample the data required by the statistical test is more differentiating than the method used for finding the distinguishing property.

Céline Blondeau, Kaisa Nyberg

Faster Compact Diffie–Hellman: Endomorphisms on the x-line

We describe an implementation of fast elliptic curve scalar multiplication, optimized for Diffie–Hellman Key Exchange at the 128-bit security level. The algorithms are compact (using only $x$-coordinates), run in constant time with uniform execution patterns, and do not distinguish between the curve and its quadratic twist; they thus have a built-in measure of side-channel resistance. (For comparison, we also implement two faster but non-constant-time algorithms.) The core of our construction is a suite of two-dimensional differential addition chains driven by efficient endomorphism decompositions, built on curves selected from a family of ℚ-curve reductions over $\mathbb{F}_{p^2}$ with $p = 2^{127} - 1$. We include state-of-the-art experimental results for twist-secure, constant-time, $x$-coordinate-only scalar multiplication.
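The “uniform execution patterns” property can be illustrated with a textbook Montgomery ladder, shown here for modular exponentiation rather than the paper’s x-line elliptic-curve arithmetic (a simplification of ours): every scalar bit triggers the same multiply-then-square pattern, with only the operand roles swapped.

```python
def mont_ladder(base, scalar, modulus):
    """Montgomery ladder: one multiplication and one squaring per scalar bit,
    regardless of the bit's value. (A production implementation would replace
    the visible branch with a constant-time conditional swap.)"""
    r0, r1 = 1, base
    for i in reversed(range(scalar.bit_length())):
        if (scalar >> i) & 1:
            r0, r1 = (r0 * r1) % modulus, (r1 * r1) % modulus
        else:
            r1, r0 = (r0 * r1) % modulus, (r0 * r0) % modulus
    return r0
```

The invariant is that `r1 / r0` stays equal to `base`, which is what makes the two branches structurally identical; the paper applies the two-dimensional analogue of this idea to x-only curve arithmetic.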

Craig Costello, Huseyin Hisil, Benjamin Smith

Obfuscation and Multilinear Maps

Replacing a Random Oracle: Full Domain Hash from Indistinguishability Obfuscation

Our main result gives a way to instantiate the random oracle with a concrete hash function in “full domain hash” applications. The term full domain hash was first proposed by Bellare and Rogaway [BR93, BR96] and referred to a signature scheme from any trapdoor permutation that was part of their seminal work introducing the random oracle heuristic. Over time the term full domain hash has (informally) encompassed a broader range of notable cryptographic schemes including the Boneh-Franklin [BF01] IBE scheme and Boneh-Lynn-Shacham (BLS) [BLS01] signatures. All of the above described schemes required a hash function that had to be modeled as a random oracle to prove security. Our work utilizes recent advances in indistinguishability obfuscation to construct specific hash functions for use in these schemes. We then prove security of the

original

cryptosystems when instantiated with our specific hash function.

Of particular interest, our work evades the impossibility results of Dodis, Oliveira, and Pietrzak [DOP05], who showed that there can be no black-box construction of hash functions that allow Full-Domain Hash Signatures to be based on trapdoor permutations, and its extension by Dodis, Haitner, and Tentes [DHT12] to the RSA Full-Domain Hash Signatures. This indicates our techniques applying indistinguishability obfuscation may be useful for circumventing other black-box impossibility proofs.

Susan Hohenberger, Amit Sahai, Brent Waters

Protecting Obfuscation against Algebraic Attacks

Recently, Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013) constructed a general-purpose obfuscating compiler for NC$^1$ circuits. We describe a simplified variant of this compiler, and prove that it is a virtual black box obfuscator in a generic multilinear map model. This improves on Brakerski and Rothblum (eprint 2013), who gave such a result under a strengthening of the Exponential Time Hypothesis. We remove this assumption, and thus resolve an open question of Garg et al. As shown by Garg et al., a compiler for NC$^1$ circuits can be bootstrapped to a compiler for all polynomial-sized circuits under the learning with errors (LWE) hardness assumption.

Our result shows that there is a candidate obfuscator that cannot be broken by algebraic attacks, hence reducing the task of creating secure obfuscators in the plain model to obtaining sufficiently strong security guarantees on candidate instantiations of multilinear maps.

Boaz Barak, Sanjam Garg, Yael Tauman Kalai, Omer Paneth, Amit Sahai

GGHLite: More Efficient Multilinear Maps from Ideal Lattices

The GGH Graded Encoding Scheme [9], based on ideal lattices, is the first plausible approximation to a cryptographic multilinear map. Unfortunately, using the security analysis in [9], the scheme requires very large parameters to provide security for its underlying “encoding re-randomization” process. Our main contributions are to formalize, simplify and improve the efficiency and the security analysis of the re-randomization process in the GGH construction. This results in a new construction that we call GGHLite. In particular, we first lower the size of a standard deviation parameter of the re-randomization process of [9] from exponential to polynomial in the security parameter. This first improvement is obtained via a finer security analysis of the “drowning” step of re-randomization, in which we apply the Rényi divergence instead of the conventional statistical distance as a measure of distance between distributions. Our second improvement is to reduce the number of randomizers needed from $\Omega(n \log n)$ to 2, where $n$ is the dimension of the underlying ideal lattices. These two contributions allow us to decrease the bit size of the public parameters from $O(\lambda^5 \log \lambda)$ for the GGH scheme to $O(\lambda \log^2 \lambda)$ in GGHLite, with respect to the security parameter $\lambda$ (for a constant multilinearity parameter $\kappa$).

Adeline Langlois, Damien Stehlé, Ron Steinfeld

Authenticated Encryption

Reconsidering Generic Composition

In the context of authenticated encryption (AE), generic composition has referred to the construction of an AE scheme by gluing together a conventional (privacy-only) encryption scheme and a MAC. Since the work of Bellare and Namprempre (2000) and then Krawczyk (2001), the conventional wisdom has become that there are three forms of generic composition, with Encrypt-then-MAC the only one that generically works. However, many caveats to this understanding have surfaced over the years. Here we explore this issue further, showing how this understanding oversimplifies the situation because it ignores the results’ sensitivity to definitional choices. When encryption is formalized differently, making it either IV-based or nonce-based, rather than probabilistic, and when the AE goal is likewise changed to take in a nonce, qualitatively different results emerge. We explore these alternative versions of the generic-composition story. We also evidence the overreaching understanding of prior generic-composition results by pointing out that the Encrypt-then-MAC mechanism of ISO 19772 is completely wrong.

Chanathip Namprempre, Phillip Rogaway, Thomas Shrimpton

Parallelizable Rate-1 Authenticated Encryption from Pseudorandom Functions

This paper proposes a new scheme for authenticated encryption (AE) which is typically realized as a blockcipher mode of operation. The proposed scheme has attractive features for fast and compact operation. When it is realized with a blockcipher, it requires one blockcipher call to process one input block (i.e. rate-1), and uses the encryption function of the blockcipher for both encryption and decryption. Moreover, the scheme enables one-pass, parallel operation under two-block partition. The proposed scheme thus attains characteristics similar to those of the seminal OCB mode, without using the inverse blockcipher. The key idea of our proposal is a novel usage of a two-round Feistel permutation, where the round functions are derived from the theory of tweakable blockciphers. We also provide basic software results, and describe some ideas on using a non-invertible primitive, such as a keyed hash function.
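The two-round Feistel idea at the core of the scheme can be sketched as follows (our own generic sketch; the paper derives its round functions from tweakable-blockcipher theory, which is not modeled here):

```python
def feistel2(left, right, f1, f2):
    """Two-round Feistel: a permutation on the pair (left, right) built from
    two round functions f1, f2 that need not be invertible themselves."""
    right = bytes(a ^ b for a, b in zip(right, f1(left)))
    left = bytes(a ^ b for a, b in zip(left, f2(right)))
    return left, right

def feistel2_inv(left, right, f1, f2):
    """Invert feistel2 using only FORWARD calls to f1 and f2."""
    left = bytes(a ^ b for a, b in zip(left, f2(right)))
    right = bytes(a ^ b for a, b in zip(right, f1(left)))
    return left, right
```

This is why such a mode needs only the forward direction of its underlying primitive: inverting the Feistel permutation never inverts `f1` or `f2`.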

Kazuhiko Minematsu

Symmetric Encryption

Honey Encryption: Security Beyond the Brute-Force Bound

We introduce honey encryption (HE), a simple, general approach to encrypting messages using low min-entropy keys such as passwords. HE is designed to produce a ciphertext which, when decrypted with any of a number of incorrect keys, yields plausible-looking but bogus plaintexts called honey messages. A key benefit of HE is that it provides security in cases where too little entropy is available to withstand brute-force attacks that try every key; in this sense, HE provides security beyond conventional brute-force bounds. HE can also provide a hedge against partial disclosure of high min-entropy keys.

HE significantly improves security in a number of practical settings. To showcase this improvement, we build concrete HE schemes for password-based encryption of RSA secret keys and credit card numbers. The key challenges are development of appropriate instances of a new type of randomized message encoding scheme called a distribution-transforming encoder (DTE), and analyses of the expected maximum loading of bins in various kinds of balls-and-bins games.
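A toy DTE for 4-digit PINs makes the idea tangible (entirely our own illustration; the paper’s DTEs for RSA keys and card numbers are far more involved). Encoding maps a PIN to a near-uniform 16-bit seed, and every seed decodes to some valid PIN, so decrypting under a wrong password yields a plausible-looking PIN rather than recognizable garbage:

```python
import hashlib
import secrets

def pad(password, n):
    """Toy password-derived keystream (stand-in for a real KDF + cipher)."""
    return hashlib.sha256(password.encode()).digest()[:n]

def encode_pin(pin):
    """Randomized DTE encode: pick a uniform 16-bit seed that decodes to pin."""
    copies = 65536 // 10000 + (1 if pin < 65536 % 10000 else 0)
    seed = pin + 10000 * secrets.randbelow(copies)
    return seed.to_bytes(2, "big")

def decode_seed(seed_bytes):
    """DTE decode: every 16-bit seed maps to some valid PIN."""
    return int.from_bytes(seed_bytes, "big") % 10000

def he_encrypt(pin, password):
    return bytes(a ^ b for a, b in zip(encode_pin(pin), pad(password, 2)))

def he_decrypt(ct, password):
    seed = bytes(a ^ b for a, b in zip(ct, pad(password, 2)))
    return decode_seed(seed)
```

A brute-force attacker who tries every password sees a valid-looking PIN for each guess and gets no signal about which one is correct — that is the “security beyond the brute-force bound.”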

Ari Juels, Thomas Ristenpart

Sometimes-Recurse Shuffle

Almost-Random Permutations in Logarithmic Expected Time

We describe a security-preserving construction of a random permutation of domain size $N$ from a random function, the construction tolerating adversaries asking all $N$ plaintexts, yet employing just $\Theta(\lg N)$ calls, on average, to the one-bit-output random function. The approach is based on card shuffling. The basic idea is to use the sometimes-recurse transformation: lightly shuffle the deck (with some other shuffle), cut the deck, and then recursively shuffle one of the two halves. Our work builds on a recent paper of Ristenpart and Yilek.

Ben Morris, Phillip Rogaway

Tight Security Bounds for Key-Alternating Ciphers

A $t$-round key-alternating cipher (also called iterated Even-Mansour cipher) can be viewed as an abstraction of AES. It defines a cipher $E$ from $t$ fixed public permutations $P_1, \ldots, P_t : \{0,1\}^n \rightarrow \{0,1\}^n$ and a key $k = k_0 \| \cdots \| k_t \in \{0,1\}^{n(t+1)}$ by setting $E_k(x) = k_t \oplus P_t(k_{t-1} \oplus P_{t-1}(\cdots k_1 \oplus P_1(k_0 \oplus x) \cdots))$. The indistinguishability of $E_k$ from a truly random permutation by an adversary who also has oracle access to the (public) random permutations $P_1, \ldots, P_t$ was investigated in 1997 by Even and Mansour for $t = 1$, and for higher values of $t$ in a series of recent papers. For $t = 1$, Even and Mansour proved indistinguishability security up to $2^{n/2}$ queries, which is tight. Much later, Bogdanov et al. (2011) conjectured that security should be $2^{\frac{t}{t+1}n}$ queries for general $t$, which matches an easy distinguishing attack (so security cannot be more). A number of partial results have been obtained supporting this conjecture, besides Even and Mansour’s original result for $t = 1$: Bogdanov et al. proved security of $2^{\frac{2}{3}n}$ for $t \geq 2$, Steinberger (2012) proved security of $2^{\frac{3}{4}n}$ for $t \geq 3$, and Lampe, Patarin and Seurin (2012) proved security of $2^{\frac{t}{t+2}n}$ for all even values of $t$, thus “barely” falling short of the desired $2^{\frac{t}{t+1}n}$.

Our contribution in this work is to prove the long-sought-for security bound of $2^{\frac{t}{t+1}n}$, up to a constant multiplicative factor depending on $t$. Our method is essentially an application of Patarin’s H-coefficient technique.
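As a concrete illustration (our own sketch, not the authors’ code), a toy key-alternating cipher over $n$-bit blocks can be written with table-based public permutations; `make_permutation` and the parameter choices below are illustrative only:

```python
import random

def make_permutation(n_bits, seed):
    """A fixed public permutation P_i of {0,1}^n, modeled as a shuffled lookup table."""
    rng = random.Random(seed)
    table = list(range(2 ** n_bits))
    rng.shuffle(table)
    return table

def encrypt(perms, subkeys, x):
    """t-round key-alternating (iterated Even-Mansour) cipher:
    E_k(x) = k_t XOR P_t(k_{t-1} XOR ... P_1(k_0 XOR x) ...),
    with len(subkeys) == len(perms) + 1."""
    state = x
    for P, k in zip(perms, subkeys):  # pairs P_i with k_{i-1}; stops before k_t
        state = P[state ^ k]
    return state ^ subkeys[-1]
```

Since each $P_i$ and each key XOR is a bijection, $E_k$ is itself a permutation; the abstract’s security question is how many queries an adversary needs to distinguish $E_k$ from random when it can also query the public $P_i$ directly.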

Shan Chen, John Steinberger

The Locality of Searchable Symmetric Encryption

This paper proves a lower bound on the trade-off between server storage size and the locality of memory accesses in searchable symmetric encryption (SSE). Namely, when encrypting an index of $N$ identifier/keyword pairs, the encrypted index must have size $\omega(N)$, or the scheme must perform searching with $\omega(1)$ non-contiguous reads to memory, or the scheme must read many more bits than is necessary to compute the results. Recent implementations have shown that non-locality of server memory accesses creates a throughput bottleneck on very large databases. Our lower bound shows that this is due to the security notion and not a defect of the constructions. An upper bound is also given in the form of a new SSE construction with an $O(N \log N)$ size encrypted index that performs $O(\log N)$ reads during a search.

David Cash, Stefano Tessaro

Multi-party Computation

A Bound for Multiparty Secret Key Agreement and Implications for a Problem of Secure Computing

We consider secret key agreement by multiple parties observing correlated data and communicating interactively over an insecure communication channel. Our main contribution is a single-shot upper bound on the length of the secret keys that can be generated, without making any assumptions on the distribution of the underlying data. Heuristically, we bound the secret key length in terms of “how far” the joint distribution of the initial observations of the parties and the eavesdropper is from a distribution that renders the observations of the parties conditionally independent across some partition, when conditioned on the eavesdropper’s side information. The closeness of the two distributions is measured in terms of the exponent of the probability of error of type II for a binary hypothesis testing problem, thus bringing out a structural connection between secret key agreement and binary hypothesis testing. When the underlying data consists of an independent and identically distributed sequence, an application of our bound recovers several known upper bounds for the asymptotic rate of a secret key that can be generated, without requiring the agreement error probability or the security index to vanish asymptotically.

Also, we consider the following problem of secure function computation with trusted parties: Multiple parties observing correlated data seek to compute a function of their collective data. To this end, they communicate interactively over an insecure communication channel. It is required that the value of the function be concealed from an eavesdropper with access to the communication. When is such a secure computation of a given function feasible? Using the aforementioned upper bound, we derive a necessary condition for the existence of a communication protocol that allows the parties to reliably recover the value of a given function, while keeping this value concealed from an eavesdropper with access to (only) the communication.

Himanshu Tyagi, Shun Watanabe

Non-Interactive Secure Computation Based on Cut-and-Choose

In recent years, secure two-party computation (2PC) has been demonstrated to be feasible in practice. However, all efficient general-computation 2PC protocols require multiple rounds of interaction between the two players. This property restricts 2PC to scenarios where both players can be simultaneously online and where communication latency is not an issue.

This work considers the model of 2PC with a single round of interaction, called Non-Interactive Secure Computation (NISC). In addition to the non-interaction property, we also consider a flavor of NISC that allows reusing the first message for many different 2PC invocations, possibly with different players acting as the player who sends the second message, similar to a public-key encryption where a single public key can be used to encrypt many different messages.

We present a NISC protocol that is based on the cut-and-choose paradigm of Lindell and Pinkas (Eurocrypt 2007). This protocol achieves concrete efficiency similar to that of the best multi-round 2PC protocols based on the cut-and-choose paradigm. The protocol requires only $t$ garbled circuits for achieving a cheating probability of $2^{-t}$, similar to the recent result of Lindell (Crypto 2013), but needs only a single round of interaction.

To validate the efficiency of our protocol, we provide a prototype implementation of it and show experiments that confirm its competitiveness with that of the best multi-round 2PC protocols. This is the first prototype implementation of an efficient NISC protocol.

In addition to our NISC protocol, we introduce a new encoding technique that significantly reduces communication in the NISC setting. We further show how our NISC protocol can be improved in the multi-round setting, resulting in a highly efficient constant-round 2PC that is also suitable for pipelined implementation.

Arash Afshar, Payman Mohassel, Benny Pinkas, Ben Riva

Garbled RAM Revisited

The notion of garbled random-access machines (garbled RAMs) was introduced by Lu and Ostrovsky (Eurocrypt 2013). It can be seen as an analogue of Yao’s garbled circuits that allows a user to garble a RAM program directly, without performing the expensive step of converting it into a circuit. In particular, the size of the garbled program and the time it takes to create and evaluate it are only proportional to its running time on a RAM rather than its circuit size. Lu and Ostrovsky gave a candidate construction of this primitive based on pseudo-random functions (PRFs).

The starting point of this work is pointing out a subtle circularity in the hardness assumption of the Lu-Ostrovsky construction. Specifically, the construction requires a complex "circular" security assumption on the underlying Yao garbled circuits and PRFs. We then proceed to abstract, simplify and generalize the main ideas behind the Lu-Ostrovsky construction, and show two alternative constructions that overcome the circularity of assumptions. Our first construction breaks the circularity by replacing the PRF-based encryption in the Lu-Ostrovsky construction by identity-based encryption (IBE). The result retains the same asymptotic performance characteristics of the original Lu-Ostrovsky construction, namely an overhead of $O(\mathrm{poly}(k)\,\mathrm{polylog}(n))$ (with k the security parameter and n the data size). Our second construction breaks the circularity assuming only the existence of one-way functions, but with overhead $O(\mathrm{poly}(k)\,n^{\varepsilon})$ for any constant ε > 0. This construction works by adaptively "revoking" the PRFs at selected points, and using a delicate recursion argument to get successively better performance characteristics. It remains an interesting open problem to achieve an overhead of $\mathrm{poly}(k)\,\mathrm{polylog}(n)$ assuming only the existence of one-way functions.

Craig Gentry, Shai Halevi, Steve Lu, Rafail Ostrovsky, Mariana Raykova, Daniel Wichs

Side-Channel Attacks

Unifying Leakage Models: From Probing Attacks to Noisy Leakage

A recent trend in cryptography is to formally show the leakage resilience of cryptographic implementations in a given leakage model. A realistic model is to assume that leakages are sufficiently noisy, following real-world observations. While the noisy leakage assumption was first studied in the seminal work of Chari et al. (Crypto 99), the recent work of Prouff and Rivain (Eurocrypt 2013) provides the first analysis of a full masking scheme under a physically motivated noise model. Unfortunately, the security analysis of Prouff and Rivain has three important shortcomings: (1) it requires leak-free gates, (2) it considers a restricted adversarial model (random message attacks), and (3) the security proof has limited application for cryptographic settings. In this work, we provide an alternative security proof in the same noisy model that overcomes these three challenges. We achieve this goal by a new reduction from noisy leakage to the important theoretical model of probing adversaries (Ishai et al., Crypto 2003). Our work can be viewed as a next step in closing the gap between theory and practice in leakage-resilient cryptography: while our security proofs heavily rely on concepts of theoretical cryptography, we solve problems in practically motivated leakage models.

Alexandre Duc, Stefan Dziembowski, Sebastian Faust

Higher Order Masking of Look-Up Tables

We describe a new algorithm for masking look-up tables of block ciphers at any order, as a countermeasure against side-channel attacks. Our technique is a generalization of the classical randomized table countermeasure against first-order attacks. We prove the security of our new algorithm against t-th order attacks in the usual Ishai-Sahai-Wagner model from Crypto 2003; we also improve the bound on the number of shares from n ≥ 4t + 1 to n ≥ 2t + 1 for an adversary who can adaptively move its probes between successive executions.

Our algorithm has the same time complexity $\mathcal{O}(n^2)$ as the Rivain-Prouff algorithm for AES and its extension by Carlet et al. to any look-up table. In practice, for AES our algorithm is less efficient than Rivain-Prouff, which can take advantage of the special algebraic structure of the AES Sbox; for DES, however, our algorithm performs slightly better.

Jean-Sébastien Coron

How to Certify the Leakage of a Chip?

Evaluating side-channel attacks and countermeasures requires determining the amount of information leaked by a target device. For this purpose, information extraction procedures published so far essentially combine a "leakage model" with a "distinguisher". Fair evaluations ideally require exploiting a perfect leakage model (i.e., one exactly corresponding to the true leakage distribution) with a Bayesian distinguisher. But since such perfect models are generally unknown, density estimation techniques have to be used to approximate the leakage distribution. This raises the fundamental problem that all security evaluations are potentially biased by both estimation and assumption errors. Hence, the best we can hope for is to be aware of these errors. In this paper, we provide and implement methodological tools to solve this issue. Namely, we show how sound statistical techniques allow both quantifying the leakage of a chip and certifying that the amount of information extracted is close to the maximum value that would be obtained with a perfect model.

François Durvaux, François-Xavier Standaert, Nicolas Veyrat-Charvillon

Signatures and Public-Key Encryption

Efficient Round Optimal Blind Signatures

Known constructions of blind signature schemes suffer from at least one of the following limitations: (1) they rely on parties having access to a common reference string or a random oracle, (2) they are not round-optimal, or (3) they are prohibitively expensive.

In this work, we construct the first blind-signature scheme that does not suffer from any of these limitations. In other words, besides being round-optimal and having a standard-model proof of security, our scheme is very efficient. Specifically, in our scheme one signature is of size 6.5 KB and the communication complexity of the signing protocol is roughly 100 KB. An amortized variant of our scheme has communication complexity of less than 1 KB.

Sanjam Garg, Divya Gupta

Key-Versatile Signatures and Applications: RKA, KDM and Joint Enc/Sig

This paper introduces key-versatile signatures, which allow us to sign with keys already in use for another purpose, without changing the keys and without impacting the security of the original purpose. This lets us obtain advances across a collection of challenging domains, including joint Enc/Sig, security against related-key attacks (RKA), and security for key-dependent messages (KDM). Specifically, we can (1) add signing capability to existing encryption capability with zero overhead in the size of the public key; (2) obtain RKA-secure signatures from any RKA-secure one-way function, yielding new RKA-secure signature schemes; and (3) add integrity to encryption while maintaining KDM-security.

Mihir Bellare, Sarah Meiklejohn, Susan Thomson

Non-malleability from Malleability: Simulation-Sound Quasi-Adaptive NIZK Proofs and CCA2-Secure Encryption from Homomorphic Signatures

Verifiability is central to building protocols and systems with integrity. Initially, efficient methods employed the Fiat-Shamir heuristic. Since 2008, the Groth-Sahai techniques have been the most efficient in constructing non-interactive witness-indistinguishable and zero-knowledge proofs for algebraic relations in the standard model. For the important task of proving membership in linear subspaces, Jutla and Roy (Asiacrypt 2013) gave significantly more efficient proofs in the quasi-adaptive setting (QA-NIZK). For membership of the row space of a t × n matrix, their QA-NIZK proofs save Ω(t) group elements compared to Groth-Sahai. Here, we give QA-NIZK proofs made of a constant number of group elements – regardless of the number of equations or the number of variables – and additionally prove them unbounded simulation-sound. Unlike previous unbounded simulation-sound Groth-Sahai-based proofs, our construction does not involve quadratic pairing product equations and does not rely on a chosen-ciphertext-secure encryption scheme. Instead, we build on structure-preserving signatures with homomorphic properties. We apply our methods to design new and improved CCA2-secure encryption schemes. In particular, we build the first efficient threshold CCA-secure keyed-homomorphic encryption scheme (i.e., one where homomorphic operations can only be carried out using a dedicated evaluation key) with publicly verifiable ciphertexts.

Benoît Libert, Thomas Peters, Marc Joye, Moti Yung

Functional Encryption

Fully Key-Homomorphic Encryption, Arithmetic Circuit ABE and Compact Garbled Circuits

We construct the first (key-policy) attribute-based encryption (ABE) system with short secret keys: the size of keys in our system depends only on the depth of the policy circuit, not its size. Our constructions extend naturally to arithmetic circuits with arbitrary fan-in gates, thereby further reducing the circuit depth. Building on this ABE system, we obtain the first reusable circuit garbling scheme that produces garbled circuits whose size is the same as the original circuit plus an additive poly(λ, d) bits, where λ is the security parameter and d is the circuit depth. All previous constructions incurred a multiplicative poly(λ) blowup.

We construct our ABE using a new mechanism we call fully key-homomorphic encryption, a public-key system that lets anyone translate a ciphertext encrypted under a public key x into a ciphertext encrypted under the public key (f(x), f) of the same plaintext, for any efficiently computable f. We show that this mechanism gives an ABE with short keys. Security of our construction relies on the subexponential hardness of the learning with errors problem.

We also present a second (key-policy) ABE, using multilinear maps, with short ciphertexts: an encryption to an attribute vector x has the size of x plus poly(λ, d) additional bits. This gives a reusable circuit garbling scheme where the garbled input is short.

Dan Boneh, Craig Gentry, Sergey Gorbunov, Shai Halevi, Valeria Nikolaenko, Gil Segev, Vinod Vaikuntanathan, Dhinakaran Vinayagamurthy

Dual System Encryption via Doubly Selective Security: Framework, Fully Secure Functional Encryption for Regular Languages, and More

Dual system encryption techniques, introduced by Waters in Crypto '09, are a powerful approach for constructing fully secure functional encryption (FE) for many predicates. However, for certain predicates dual system encryption techniques seem inapplicable, and hence fully secure realizations of FE for those predicates remain an important problem. A notable example is FE for regular languages, introduced by Waters in Crypto '12.

We propose a generic framework that abstracts the concept of dual system encryption techniques. We introduce a new primitive called a pair encoding scheme for predicates and show that it implies fully secure functional encryption (for the same predicates) via a generic construction. Using the framework, we obtain the first fully secure schemes for functional encryption primitives of which only selectively secure schemes were known so far. Our three main instantiations include FE for regular languages, unbounded attribute-based encryption (ABE) for large universes, and ABE with constant-size ciphertexts.

Our main ingredient for overcoming the inapplicability of dual system techniques to certain predicates is a computational security notion for the pair encoding scheme which we call doubly selective security. This is in contrast with most previous dual-system-based schemes, where information-theoretic security is implicitly utilized. The doubly selective security notion combines selective security and its complementary notion, co-selective security; hence its name. Our framework can be regarded as a method for boosting doubly selective security (of the encoding) to full security (of the functional encryption scheme).

Besides the generality of our framework, we remark that improved security is also obtained: our security proof enjoys a tighter reduction than previous schemes. Notably, the reduction cost does not depend on the total number of queries, but only on the number of pre-challenge queries.

Nuttapong Attrapadung

Multi-input Functional Encryption

We introduce the problem of Multi-Input Functional Encryption, where a secret key sk_f can correspond to an n-ary function f that takes multiple ciphertexts as input. We formulate both indistinguishability-based and simulation-based definitions of security for this notion, and show close connections with indistinguishability and virtual black-box definitions of obfuscation.

Assuming indistinguishability obfuscation for circuits, we present constructions achieving indistinguishability security for a large class of settings. We show how to modify this construction to achieve simulation-based security as well, in those settings where simulation security is possible.

Shafi Goldwasser, S. Dov Gordon, Vipul Goyal, Abhishek Jain, Jonathan Katz, Feng-Hao Liu, Amit Sahai, Elaine Shi, Hong-Sheng Zhou

Foundations

Salvaging Indifferentiability in a Multi-stage Setting

The indifferentiability framework of Maurer, Renner and Holenstein (MRH; TCC 2004) formalizes a sufficient condition to safely replace a random oracle by a construction based on a (hopefully) weaker assumption such as an ideal cipher. Indeed, many indifferentiable hash functions have been constructed and could since be used in place of random oracles. Unfortunately, Ristenpart, Shacham, and Shrimpton (RSS; Eurocrypt 2011) discovered that for a large class of security notions, the MRH composition theorem actually does not apply. To bridge the gap, they suggested a stronger notion called reset indifferentiability and established a generalized version of the MRH composition theorem. However, as recent works by Demay et al. (Eurocrypt 2013) and Baecher et al. (Asiacrypt 2013) brought to light, reset indifferentiability is not achievable, thereby re-opening the quest for a notion that is sufficient for multi-stage games and achievable at the same time.

We present a condition on multi-stage games called unsplittability. We show that if a game is unsplittable for a hash construction, then the MRH composition theorem can be salvaged. Unsplittability captures a restricted yet broad class of games together with a set of practical hash constructions, including HMAC, NMAC and several Merkle-Damgård variants. We show unsplittability for the chosen distribution attack (CDA) game (Bellare et al., Asiacrypt 2009), a multi-stage game capturing the security of deterministic encryption schemes; for message-locked encryption (Bellare et al., Eurocrypt 2013), a related primitive that allows for secure deduplication; for universal computational extractors (UCE) (Bellare et al., Crypto 2013), a recently introduced standard-model assumption to replace random oracles; as well as for the proof-of-storage game given by Ristenpart et al. as a counterexample to the general applicability of the indifferentiability framework.

Arno Mittelbach

Déjà Q: Using Dual Systems to Revisit q-Type Assumptions

After more than a decade of usage, bilinear groups have established their place in the cryptographic canon by enabling the construction of many advanced cryptographic primitives. Unfortunately, this explosion in functionality has been accompanied by an analogous growth in the complexity of the assumptions used to prove security. Many of these assumptions have been gathered under the umbrella of the "uber-assumption," yet certain classes of these assumptions – namely, q-type assumptions – are stronger and require larger parameter sizes than their static counterparts. In this paper, we show that in certain bilinear groups, many classes of q-type assumptions are in fact implied by subgroup hiding (a well-established, static assumption). Our main tool in this endeavor is the dual-system technique, as introduced by Waters in 2009. As a case study, we first show that in composite-order groups, we can prove the security of the Dodis-Yampolskiy PRF based solely on subgroup hiding, and allow for a domain of arbitrary size (the original proof only allowed a logarithmically-sized domain). We then turn our attention to classes of q-type assumptions and show that they are implied – when instantiated in appropriate groups – solely by subgroup hiding. These classes are quite general and include assumptions such as q-SDH. Concretely, our result implies that every construction relying on such assumptions for security (e.g., Boneh-Boyen signatures) can, when instantiated in appropriate composite-order bilinear groups, be proved secure under subgroup hiding instead.
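For reference, the Dodis-Yampolskiy PRF discussed above evaluates $F_s(x) = g^{1/(s+x)}$ in a group of prime order. The sketch below illustrates the formula over a tiny, deliberately insecure subgroup of $\mathbb{Z}_p^*$; the parameter choices are illustrative assumptions only, not the composite-order bilinear setting considered in the paper.

```python
# Toy Dodis-Yampolskiy PRF: F_s(x) = g^(1/(s+x)) in a group of prime
# order q. We use the order-q subgroup of Z_p* for the safe prime
# p = 2q + 1; real instantiations use cryptographically sized
# (bilinear) groups.

q = 1019            # prime group order
p = 2 * q + 1       # 2039, also prime
g = pow(2, 2, p)    # squaring lands in the order-q subgroup of Z_p*
assert pow(g, q, p) == 1

def dy_prf(s, x):
    """Evaluate F_s(x) = g^(1/(s+x) mod q) mod p (undefined if s+x ≡ 0 mod q)."""
    e = pow((s + x) % q, -1, q)  # inverse of (s+x) in the exponent group
    return pow(g, e, p)

s = 123                      # secret key
y = dy_prf(s, 42)
# Consistency check: raising the output to (s+x) recovers the generator,
# since (1/(s+x)) * (s+x) = 1 in the exponent group.
assert pow(y, (s + 42) % q, p) == g
```

The inversion in the exponent is what gives rise to the q-SDH-style assumptions in the original security proof; the paper's point is that, in suitable composite-order groups, such assumptions follow from subgroup hiding alone.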

Melissa Chase, Sarah Meiklejohn

Distributed Point Functions and Their Applications

For $x, y \in \{0,1\}^*$, the point function $P_{x,y}$ is defined by $P_{x,y}(x) = y$ and $P_{x,y}(x') = 0^{|y|}$ for all $x' \neq x$. We introduce the notion of a distributed point function (DPF), which is a keyed function family $F_k$ with the following property. Given $x, y$ specifying a point function, one can efficiently generate a key pair $(k_0, k_1)$ such that: (1) $F_{k_0}\oplus F_{k_1}=P_{x,y}$, and (2) each of $k_0$ and $k_1$ hides $x$ and $y$. Our main result is an efficient construction of a DPF under the (minimal) assumption that a one-way function exists.
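The correctness requirement $F_{k_0}\oplus F_{k_1}=P_{x,y}$ can be illustrated by the trivial construction that additively shares the entire truth table. In this sketch each key alone is a uniformly random table (so it hides x and y), but the keys have size exponential in the input length; the paper's contribution is precisely to achieve short keys from a one-way function, which this toy does not do.

```python
import secrets

def gen(n_bits, x, y):
    """Share the point function P_{x,y} on domain {0,1}^n_bits into two keys.

    k0 is a uniformly random table; k1 equals k0 except at position x,
    where it is offset by y. Each table alone is uniformly random.
    """
    k0 = [secrets.randbits(8) for _ in range(2 ** n_bits)]
    k1 = list(k0)
    k1[x] ^= y  # the two tables differ only at the special point x
    return k0, k1

def eval_share(k, u):
    """Evaluate one party's share of the point function at input u."""
    return k[u]

# Demo: XOR of the two shares reconstructs P_{x,y} on a 4-bit domain.
n = 4
x, y = 11, 0x5A  # y must fit in the 8-bit table entries
k0, k1 = gen(n, x, y)
for u in range(2 ** n):
    out = eval_share(k0, u) ^ eval_share(k1, u)
    assert out == (y if u == x else 0)
```

The PIR applications below follow this template: each server evaluates its share on every database index, and the client XORs the two answers, so only the special point x contributes.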

Distributed point functions have applications to private information retrieval (PIR) and related problems, as well as to worst-case to average-case reductions. Concretely, assuming the existence of a strong one-way function, we obtain the following applications.

Polylogarithmic 2-server binary PIR. We present the first 2-server computational PIR protocol in which the length of each query is polylogarithmic in the database size n and the answers consist of a single bit each. This improves over the $2^{O(\sqrt{\log n})}$ query length of the protocol of Chor and Gilboa (STOC '97). Similarly, we get a polylogarithmic "PIR writing" scheme, allowing secure non-interactive updates of a database shared between two servers. Assuming just a standard one-way function, we get the first 2-server private keyword search protocol in which the query length is polynomial in the keyword size, the answers consist of a single bit, and there is no error probability. In all these protocols, the computational cost on the server side is comparable to applying a symmetric encryption scheme to the entire database.

Worst-case to average-case reductions. We present the first worst-case to average-case reductions for PSPACE- and EXPTIME-complete languages that require only a constant number of oracle queries. These reductions complement a recent negative result of Watson (TOTC '12).

Niv Gilboa, Yuval Ishai

Multi-party Computation

A Full Characterization of Completeness for Two-Party Randomized Function Evaluation

We settle the long-standing open problem of fully characterizing the completeness of (potentially randomized) finite functions for 2-party computation that is secure against active adversaries. Since the first such complete function was discovered [Kilian, FOCS 1988], the question of which finite 2-party functions are complete has been studied extensively, leading to characterizations in many special cases. In this work, we completely settle this problem.

We provide a polynomial-time algorithm to test whether a 2-party finite secure function evaluation (SFE) functionality (possibly randomized) is complete or not. The main tools in our solution include the following: we show that any complete function f can implement any (randomized) circuit C using only O(|C| + κ) calls to f, where κ is the statistical security parameter. In particular, for any two-party functionality g, this establishes a universal notion of its quantitative "cryptographic complexity" that is independent of the setup and has close connections to circuit complexity.

Daniel Kraschewski, Hemanta K. Maji, Manoj Prabhakaran, Amit Sahai

On the Complexity of UC Commitments

Motivated by applications to secure multiparty computation, we study the complexity of realizing universally composable (UC) commitments. Several recent works obtain practical UC commitment protocols in the common reference string (CRS) model under the DDH assumption. These protocols have two main disadvantages. First, even when applied to long messages, they can only achieve a small constant rate (namely, the communication complexity is larger than the length of the message by a large constant factor). Second, they require computationally expensive public-key operations for each block of each message being committed.

Our main positive result is a UC commitment protocol that simultaneously avoids both of these limitations. It achieves an optimal rate of 1 (strictly speaking, 1 − o(1)) by making only a few calls to an ideal oblivious transfer (OT) oracle and additionally making black-box use of a (computationally inexpensive) PRG. By plugging in known efficient protocols for UC-secure OT, we get rate-1, computationally efficient UC commitment protocols under a variety of setup assumptions (including the CRS model) and under a variety of standard cryptographic assumptions (including DDH). We are not aware of any previous UC commitment protocols that achieve an optimal asymptotic rate.

A corollary of our technique is a rate-1 construction for UC commitment length extension, that is, a UC commitment protocol for a long message using a single ideal commitment for a short message. The extension protocol additionally requires the use of a semi-honest (stand-alone) OT protocol. This raises a natural question: can we achieve UC commitment length extension while using only inexpensive PRG operations, as is the case for stand-alone commitments and UC OT? We answer this question in the negative, showing that the existence of a semi-honest OT protocol is necessary (and sufficient) for UC commitment length extension. This shows, quite surprisingly, that UC commitments are qualitatively different from both stand-alone commitments and UC OT.

Juan A. Garay, Yuval Ishai, Ranjit Kumaresan, Hoeteck Wee

Universally Composable Symbolic Analysis for Two-Party Protocols Based on Homomorphic Encryption

We consider a class of two-party function evaluation protocols in which the parties are allowed to use ideal functionalities as well as a set of powerful primitives, namely commitments, homomorphic encryption, and certain zero-knowledge proofs. With these it is possible to capture protocols for oblivious transfer, coin-flipping, and generation of multiplication-triples.

We show how any protocol in our class can be compiled to a symbolic representation expressed as a process in an abstract process calculus, and prove a general computational soundness theorem implying that if the protocol realises a given ideal functionality in the symbolic setting, then the original version also realises the ideal functionality in the standard computational UC setting. In other words, the theorem allows us to transfer a proof in the abstract symbolic setting to a proof in the standard UC model.

Finally, we have verified that the symbolic interpretation is simple enough in a number of cases for the symbolic proof to be partly automated using the ProVerif tool.

Morten Dahl, Ivan Damgård

Backmatter
