
About this Book

This book constitutes the refereed proceedings of the 30th Annual International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT 2011, held in Tallinn, Estonia, in May 2011. The 31 papers, presented together with 2 invited talks, were carefully reviewed and selected from 167 submissions. The papers are organized in topical sections on lattice-based cryptography, implementation and side channels, homomorphic cryptography, signature schemes, information-theoretic cryptography, symmetric key cryptography, attacks and algorithms, secure computation, composability, key dependent message security, and public key encryption.



Invited Talks

The Arithmetic Codex: Theory and Applications

We define the notion of an arithmetic codex (codex, for short) and, as a special case, arithmetic secret sharing. This notion encompasses as well as generalizes, in a single mathematical framework, all known types of specialized secret sharing schemes from the area of secure multi-party computation, i.e., the so-called (strongly) multiplicative linear secret sharing schemes.


These schemes were first studied as an abstract primitive by Cramer, Damgård, and Maurer in the late 1990s. They showed that the “Fundamental Theorem of Information-Theoretically Secure Multi-Party Computation,” the landmark 1988 result by Ben-Or, Goldwasser, and Wigderson and, independently at the same time, by Chaum, Crépeau, and Damgård, admits a proof that uses this primitive as a black box: it is possible to bootstrap, in a black-box fashion, from this primitive a set of atomic sub-protocols upon which general secure computation can be based. They also showed when and how multiplicative schemes (but not strongly multiplicative ones) reduce to ordinary ones and gave applications to security against non-threshold adversaries.

In 2006, Chen and Cramer showed an “asymptotically good” version of the Fundamental Theorem, where the size of the network is unbounded and where an adversary corrupts a constant fraction of the network, yet the information rate of the secret sharing primitive is constant. Their result relies on a careful choice of algebraic geometric codes, in combination with the earlier work of Cramer, Damgård, and Maurer.

In 2007 this asymptotic result turned out to have a surprising application in two-party cryptography, through the work of Ishai, Kushilevitz, Ostrovsky and Sahai (“Multi-Party Computation in the Head”). This first application was to zero knowledge for circuit satisfiability, but soon after, other applications to secure two-party computation and information theory (correlation extractors) followed.

Our notion of arithmetic secret sharing is not merely a unification for its own sake. First, it casts these schemes in terms of a dedicated “representation” of K-algebras, thereby bringing the relevant mathematical structure to the surface. Second, it identifies novel types of special secret sharing schemes. And, third, there are novel cryptographic applications.

Besides presenting some elementary examples and giving an overview of the basic theory and the main applications, we discuss a construction of arithmetic secret sharing schemes based on a novel algebraic-geometric paradigm that we also introduce. This talk is mainly based on several recent joint works with Nacho Cascudo (CWI) and Chaoping Xing (NTU). But in part it is also based on recent joint work with Ivan Damgård (Aarhus University) and Valerio Pastro (Aarhus University).

Ronald Cramer

Lattice Reduction Algorithms: Theory and Practice

Lattice reduction algorithms have surprisingly many applications in mathematics and computer science, notably in cryptology. On the one hand, lattice reduction algorithms are widely used in public-key cryptanalysis, for instance to attack special settings of RSA and DSA/ECDSA. On the other hand, there are more and more cryptographic schemes whose security requires that certain lattice problems be hard. In this talk, we survey lattice reduction algorithms, present their performance, and discuss the differences between theory and practice.

Phong Q. Nguyen

Lattice-Based Cryptography

Efficient Authentication from Hard Learning Problems

We construct efficient authentication protocols and message-authentication codes (MACs) whose security can be reduced to the learning parity with noise (LPN) problem.

Despite a large body of work – starting with the HB protocol of Hopper and Blum in 2001 – until now it was not even known how to construct an efficient authentication protocol from LPN which is secure against man-in-the-middle (MIM) attacks. A MAC implies such a (two-round) protocol.

Eike Kiltz, Krzysztof Pietrzak, David Cash, Abhishek Jain, Daniele Venturi
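
The LPN problem underlying these constructions is easy to state concretely. The sketch below (all names and parameters are illustrative, not taken from the paper) generates noisy parity samples and runs a toy HB-style verification round in the spirit of Hopper and Blum's protocol; as the abstract notes, such basic protocols are not MIM-secure, which is what the paper's MAC-based construction addresses.

```python
import random

def lpn_sample(s, tau, rng):
    """One LPN sample: random a, inner product <a,s> mod 2, flipped with prob. tau."""
    a = [rng.randrange(2) for _ in range(len(s))]
    z = sum(ai & si for ai, si in zip(a, s)) % 2
    if rng.random() < tau:
        z ^= 1
    return a, z

def hb_round(s, tau, rounds, threshold, rng):
    """HB-style check: the verifier counts how often the prover's noisy
    answer disagrees with the true inner product, and accepts if the
    error count stays near the expected noise rate."""
    errors = 0
    for _ in range(rounds):
        a, z = lpn_sample(s, tau, rng)
        errors += z != sum(ai & si for ai, si in zip(a, s)) % 2
    return errors <= threshold

rng = random.Random(1)
secret = [rng.randrange(2) for _ in range(64)]
# expected errors ~ 256 * 0.125 = 32, so a threshold of 48 accepts
# a legitimate prover with overwhelming probability
print(hb_round(secret, tau=0.125, rounds=256, threshold=48, rng=rng))
```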

Making NTRU as Secure as Worst-Case Problems over Ideal Lattices


NTRUEncrypt, proposed in 1996 by Hoffstein, Pipher and Silverman, is the fastest known lattice-based encryption scheme. Its moderate key sizes, excellent asymptotic performance and conjectured resistance to quantum computers could make it a desirable alternative to factorisation- and discrete-log-based encryption schemes. However, since its introduction, doubts have regularly arisen about its security. In the present work, we show how to modify NTRUEncrypt to make it provably secure in the standard model, under the assumed quantum hardness of standard worst-case lattice problems, restricted to a family of lattices related to some cyclotomic fields. Our main contribution is to show that if the secret key polynomials are selected by rejection from discrete Gaussians, then the public key, which is their ratio, is statistically indistinguishable from uniform over its domain. The security then follows from the already proven hardness of the R-LWE problem.

Damien Stehlé, Ron Steinfeld
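
The key-generation idea — selecting secret polynomials by rejection from discrete Gaussians — can be illustrated coefficient-wise. A minimal rejection sampler for a discrete Gaussian on the integers (toy parameters chosen for illustration, not the paper's):

```python
import math, random

def discrete_gaussian(sigma, tail, rng):
    """Sample from a discrete Gaussian on Z by rejection: propose uniformly
    on [-tail, tail], accept with the unnormalized Gaussian density."""
    while True:
        x = rng.randint(-tail, tail)
        rho = math.exp(-x * x / (2 * sigma * sigma))
        if rng.random() < rho:
            return x

rng = random.Random(0)
samples = [discrete_gaussian(sigma=3.0, tail=18, rng=rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum(x * x for x in samples) / len(samples)
print(round(mean, 2), round(var, 1))  # mean near 0, variance near sigma^2 = 9
```

In the scheme itself, each coefficient of the secret polynomials would be drawn this way (with rejection conditions on the whole polynomial) so that the ratio of the two polynomials is statistically close to uniform.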

Implementation and Side Channels

Faster Explicit Formulas for Computing Pairings over Ordinary Curves

We describe efficient formulas for computing pairings on ordinary elliptic curves over prime fields. First, we generalize lazy reduction techniques, previously considered only for arithmetic in quadratic extensions, to the whole pairing computation, including towering and curve arithmetic. Second, we introduce a new compressed squaring formula for cyclotomic subgroups and a new technique to avoid performing an inversion in the final exponentiation when the curve is parameterized by a negative integer. The techniques are illustrated in the context of pairing computation over Barreto-Naehrig curves, where they have a particularly efficient realization, and are also combined with other important developments in the recent literature. The resulting formulas reduce the number of required operations and, consequently, execution time, improving on the state-of-the-art performance of cryptographic pairings by 28%-34% on several popular 64-bit computing platforms. In particular, our techniques allow a pairing to be computed in under 2 million cycles for the first time on such architectures.

Diego F. Aranha, Koray Karabina, Patrick Longa, Catherine H. Gebotys, Julio López
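
The lazy-reduction idea is independent of pairings: accumulate several double-precision products and reduce modulo p once, instead of after every multiplication. A sketch over a toy quadratic extension F_p[i]/(i^2+1) (the prime and inputs are illustrative; real BN-curve code works with 256-bit primes and Montgomery arithmetic):

```python
p = 2**61 - 1  # toy prime with p = 3 mod 4, so i^2 = -1 is irreducible

def fp2_mul_lazy(a, b):
    """(a0 + a1*i)(b0 + b1*i) over F_p, delaying each reduction mod p
    until the coefficient is fully accumulated (one reduction each)."""
    a0, a1 = a
    b0, b1 = b
    c0 = a0 * b0 - a1 * b1   # double-precision accumulation, no reduction yet
    c1 = a0 * b1 + a1 * b0
    return (c0 % p, c1 % p)

def fp2_mul_eager(a, b):
    """Same product, reducing after every single multiplication."""
    a0, a1 = a
    b0, b1 = b
    return ((a0 * b0 % p - a1 * b1 % p) % p, (a0 * b1 % p + a1 * b0 % p) % p)

x = (123456789, 987654321)
y = (192837465, 564738291)
print(fp2_mul_lazy(x, y) == fp2_mul_eager(x, y))  # True: same result, fewer reductions
```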

Pushing the Limits: A Very Compact and a Threshold Implementation of AES

Our contribution is twofold: first we describe a very compact hardware implementation of AES-128, which requires only 2400 GE. This is, to the best of our knowledge, the smallest implementation reported so far. Then we apply the threshold countermeasure of Nikova et al. to the AES S-box and obtain an implementation of AES with an improved level of resistance against first-order side-channel attacks. Our experimental results on real-world power traces show that although our implementation provides additional security, it is still susceptible to some sophisticated attacks given a sufficient number of measurements.

Amir Moradi, Axel Poschmann, San Ling, Christof Paar, Huaxiong Wang

Fully Leakage-Resilient Signatures

A signature scheme is fully leakage resilient (Katz and Vaikuntanathan, ASIACRYPT ’09) if it is existentially unforgeable under an adaptive chosen-message attack even in a setting where an adversary may obtain bounded (yet arbitrary) leakage information on all intermediate values that are used throughout the lifetime of the system. This is a strong and meaningful notion of security that captures a wide range of side-channel attacks.

One of the main challenges in constructing fully leakage-resilient signature schemes is dealing with leakage that may depend on the random bits used by the signing algorithm, and constructions of such schemes are known only in the random-oracle model. Moreover, even in the random-oracle model, known schemes are only resilient to leakage of less than half the length of their signing key.

In this paper we construct fully leakage-resilient signature schemes without random oracles. We present a scheme that is resilient to any leakage of length $(1-o(1))L$ bits, where $L$ is the length of the signing key. Our approach relies on generic cryptographic primitives, and at the same time admits rather efficient instantiations based on specific number-theoretic assumptions. In addition, we show that our approach extends to the continual-leakage model, recently introduced by Dodis, Haralambiev, Lopez-Alt and Wichs (FOCS ’10), and by Brakerski, Tauman Kalai, Katz and Vaikuntanathan (FOCS ’10). In this model the signing key is allowed to be refreshed, while its corresponding verification key remains fixed, and the amount of leakage is assumed to be bounded only in between any two successive key refreshes.

Elette Boyle, Gil Segev, Daniel Wichs

A Formal Study of Power Variability Issues and Side-Channel Attacks for Nanoscale Devices

Variability is a central issue in deep submicron technologies, in which it becomes increasingly difficult to produce two chips with the same behavior. While the impact of variability is well understood from the microelectronic point of view, very few works have investigated its significance for cryptographic implementations. This is an important concern as 65-nanometer and smaller technologies are soon going to equip an increasing number of security-enabled devices. Based on measurements performed on 20 prototype chips of an AES S-box, this paper provides the first comprehensive treatment of variability issues for side-channel attacks. We show that technology scaling implies important changes in terms of physical security. First, common leakage models (e.g. based on the Hamming weight of the manipulated data) are no longer valid as the size of transistors shrinks, even for standard CMOS circuits. This impacts both the evaluation of hardware countermeasures and formal works assuming that independent computations lead to independent leakage. Second, we discuss the consequences of variability for profiled side-channel attacks. We study the extent to which a leakage model that is carefully profiled for one device can lead to successful attacks against another device. We also define the perceived information to quantify this context; it generalizes the notion of mutual information to possibly degraded leakage models. Our results show that existing side-channel attacks are not perfectly suited to this new context. They constitute an important step towards better understanding the challenges raised by future technologies for the theory and practice of leakage resilient cryptography.

Mathieu Renauld, François-Xavier Standaert, Nicolas Veyrat-Charvillon, Dina Kamel, Denis Flandre
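
The “common leakage model” the paper shows breaking down at nanoscale is the Hamming-weight model. The toy simulation below (a hypothetical single-XOR target and an illustrative noise level, not the paper's measurement setup) shows why the model is attractive when it does hold: a simple correlation attack recovers the key byte.

```python
import random

def hamming_weight(x):
    return bin(x).count("1")

def simulated_trace(value, noise_sigma, rng):
    """Idealized power sample: Hamming weight of the handled byte plus
    Gaussian noise -- exactly the model that fails for scaled devices."""
    return hamming_weight(value) + rng.gauss(0.0, noise_sigma)

def best_key_guess(plaintexts, traces):
    """Correlation attack: the key guess whose HW predictions correlate
    best with the traces wins."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs) ** 0.5
        vy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (vx * vy)
    return max(range(256),
               key=lambda k: corr([hamming_weight(p ^ k) for p in plaintexts], traces))

rng = random.Random(7)
key = 0x3C
pts = [rng.randrange(256) for _ in range(500)]
trs = [simulated_trace(p ^ key, 0.5, rng) for p in pts]
print(hex(best_key_guess(pts, trs)))
```

The paper's point is that once leakage deviates from this model (and varies chip to chip), a model profiled on one device no longer transfers to another.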

Homomorphic Cryptography

Implementing Gentry’s Fully-Homomorphic Encryption Scheme

We describe a working implementation of a variant of Gentry’s fully homomorphic encryption scheme (STOC 2009), similar to the variant used in an earlier implementation effort by Smart and Vercauteren (PKC 2010). Smart and Vercauteren implemented the underlying “somewhat homomorphic” scheme, but were not able to implement the bootstrapping functionality that is needed to get the complete scheme to work. We show a number of optimizations that allow us to implement all aspects of the scheme, including the bootstrapping functionality.

Our main optimization is a key-generation method for the underlying somewhat homomorphic encryption that does not require full polynomial inversion. This reduces the asymptotic complexity from $\tilde{O}(n^{2.5})$ to $\tilde{O}(n^{1.5})$ when working with dimension-$n$ lattices (and practically reduces the time from many hours/days to a few seconds/minutes). Other optimizations include a batching technique for encryption, a careful analysis of the degree of the decryption polynomial, and some space/time trade-offs for the fully-homomorphic scheme.

We tested our implementation with lattices of several dimensions, corresponding to several security levels: from a “toy” setting in dimension 512, to “small,” “medium,” and “large” settings in dimensions 2048, 8192, and 32768, respectively. The public key ranges in size from 70 Megabytes for the “small” setting to 2.3 Gigabytes for the “large” setting. The time to run one bootstrapping operation (on a 1-CPU 64-bit machine with large memory) ranges from 30 seconds for the “small” setting to 30 minutes for the “large” setting.

Craig Gentry, Shai Halevi
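
To see what “somewhat homomorphic” means and why bootstrapping is needed, a deliberately insecure integer toy in the style of the DGHV scheme (not Gentry's lattice-based scheme implemented in this paper) is instructive: ciphertext noise grows with each homomorphic operation, and decryption is correct only while the noise stays below the secret modulus — bootstrapping is what resets it.

```python
import random

rng = random.Random(42)
p = 1000003  # secret odd modulus (toy scale, utterly insecure)

def enc(m):
    """DGHV-style toy: c = m + 2r + p*q. The 'noise' is m + 2r;
    decryption works only while the accumulated noise stays below p."""
    q = rng.randrange(1, 10**6)
    r = rng.randrange(1, 50)
    return m + 2 * r + p * q

def dec(c):
    return (c % p) % 2

a, b = 1, 0
ca, cb = enc(a), enc(b)
# adding ciphertexts adds the noise; multiplying them multiplies it
print(dec(ca + cb), dec(ca * cb))  # 1 0  (homomorphic XOR and AND)
```

After a few multiplications the noise exceeds p and decryption fails; Gentry's bootstrapping homomorphically evaluates the decryption circuit to produce a fresh low-noise ciphertext, which is the expensive operation this paper makes practical.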

Homomorphic Signatures for Polynomial Functions

We construct the first homomorphic signature scheme that is capable of evaluating multivariate polynomials on signed data. Given the public key and a signed data set, there is an efficient algorithm to produce a signature on the mean, standard deviation, and other statistics of the signed data. Previous systems for computing on signed data could only handle linear operations. For polynomials of constant degree, the length of a derived signature only depends logarithmically on the size of the data set.

Our system uses ideal lattices in a way that is a “signature analogue” of Gentry’s fully homomorphic encryption. Security is based on hard problems on ideal lattices similar to those in Gentry’s system.

Dan Boneh, David Mandell Freeman

Semi-homomorphic Encryption and Multiparty Computation

An additively homomorphic encryption scheme enables us to compute linear functions of an encrypted input by manipulating only the ciphertexts. We define the relaxed notion of a semi-homomorphic encryption scheme, where the plaintext can be recovered as long as the computed function does not increase the size of the input “too much”. We show that a number of existing cryptosystems are captured by our relaxed notion. In particular, we give examples of semi-homomorphic encryption schemes based on lattices, subset sum and factoring. We then demonstrate how semi-homomorphic encryption schemes allow us to construct an efficient multiparty computation protocol for arithmetic circuits, UC-secure against a dishonest majority. The protocol consists of a preprocessing phase and an online phase. Neither the inputs nor the function to be computed have to be known during preprocessing. Moreover, the online phase is extremely efficient as it requires no cryptographic operations: the parties only need to exchange additive shares and verify information-theoretic MACs. Our contribution is therefore twofold: from a theoretical point of view, we can base multiparty computation on a variety of different assumptions, while on the practical side we offer a protocol with better efficiency than any previous solution.

Rikke Bendlin, Ivan Damgård, Claudio Orlandi, Sarah Zakarias
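
The “no cryptographic operations online” claim rests on two facts: additive shares and linear MACs both combine by local addition. A minimal sketch (field size and sharing layout illustrative; the paper's protocol distributes MAC keys pairwise, which this toy omits):

```python
import random

P = 2**61 - 1  # arithmetic over a prime field
rng = random.Random(3)

def share(x, n):
    """Additive secret sharing: n random-looking shares summing to x mod P."""
    parts = [rng.randrange(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

x, y = 12345, 67890
xs, ys = share(x, 3), share(y, 3)

# online phase: each party adds its own shares locally -- no cryptography
zs = [(a + b) % P for a, b in zip(xs, ys)]
print(sum(zs) % P == (x + y) % P)  # True: zs is a sharing of x + y

# information-theoretic MAC m = alpha * v mod P: MACs are additively
# homomorphic, so additions stay verifiable without any crypto either
alpha = rng.randrange(P)
mx, my = alpha * x % P, alpha * y % P
print((mx + my) % P == alpha * (x + y) % P)  # True
```

A cheating party who changes a share must also forge the corresponding MAC, which succeeds with probability only 1/P.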

Signature Schemes

Tight Proofs for Signature Schemes without Random Oracles

We present the first tight security proofs for two general classes of Strong RSA based signature schemes. Among the affected signature schemes are the Cramer-Shoup, Camenisch-Lysyanskaya, Zhu, and Fischlin signature schemes. We also present two bilinear variants of our signature classes that produce short signatures. As before, we show that these variants have tight security proofs under the Strong Diffie-Hellman (SDH) assumption. We thus obtain very efficient SDH-based variants of the Cramer-Shoup, Fischlin, and Zhu signature schemes and the first tight security proof of the recent Camenisch-Lysyanskaya scheme that was proposed and proven secure under the SDH assumption. Central to our results is a new proof technique that allows the simulator to avoid guessing which of the attacker’s signature queries are re-used in the forgery. In contrast to previous proofs, our security reduction does not lose a factor of $q$, the number of signature queries.

Sven Schäge

Adaptive Pseudo-free Groups and Applications

In this paper we explore a powerful extension of the notion of pseudo-free groups, proposed by Rivest at TCC 2004. We identify, motivate, and study pseudo-freeness in the face of adaptive adversaries who may learn solutions to other non-trivial equations before having to solve a new non-trivial equation.

We present a novel, carefully crafted definition of adaptive pseudo-freeness that walks a fine line between being too weak and being unsatisfiable. We show that groups that satisfy our definition yield, via a generic construction, digital and network coding signature schemes.

Finally, we obtain concrete constructions of such schemes in the RSA group by showing this group to be adaptive pseudo-free. In particular, we demonstrate the generality of our framework for signatures by showing that most existing schemes are instantiations of our generic construction.

Dario Catalano, Dario Fiore, Bogdan Warinschi

Commuting Signatures and Verifiable Encryption

Verifiable encryption allows one to encrypt a signature while preserving its public verifiability. We introduce a new primitive called commuting signatures and verifiable encryption that extends this in multiple ways, such as enabling encryption of both signature and message while proving validity. More importantly, given a ciphertext, a signer can create a verifiably encrypted signature on the encrypted (unknown) message, which leads to the same result as first signing the message and then verifiably encrypting the message/signature pair; thus, signing and encrypting commute. Our instantiation is based on the recently introduced automorphic signatures and Groth-Sahai proofs, which we show to be homomorphic. We also prove a series of other properties and provide a novel approach to simulation.

As an application, we give an instantiation of delegatable anonymous credentials, a primitive introduced by Belenkiy et al. Our construction is arguably simpler than theirs and it is the first to provide non-interactive (and thus concurrently secure) issuing and delegation protocols, which are significantly more efficient. Moreover, the size of our credentials and the cost of verification are less than half of those of the previous instantiation. All our constructions are proven secure in the standard model under known non-interactive assumptions.

Georg Fuchsbauer

Information-Theoretic Cryptography

Secure Authentication from a Weak Key, without Leaking Information

We study the problem of authentication based on a weak key in the information-theoretic setting. A key is weak if its min-entropy is an arbitrarily small fraction of its bit length. This problem has recently received considerable attention, with different solutions optimizing different parameters. We study the problem in an extended setting, where the weak key is a one-time session key that is derived from a public source of randomness with the help of a (potentially also weak) long-term key. Our goal now is to authenticate a message by means of the weak session key in such a way that (nearly) no information on the long-term key is leaked. Ensuring privacy of the long-term key is vital for the long-term key to be re-usable. Previous work has not considered such a privacy issue, and previous solutions do not seem to satisfy this requirement.

We show the existence of a practical four-round protocol that provides message authentication from a weak session key and that avoids non-negligible leakage on the long-term key. The security of our scheme also holds in the quantum setting, where the adversary may have limited quantum side information on the weak session key. As an application of our scheme, we show the existence of an identification scheme in the bounded quantum storage model that is secure against a man-in-the-middle attack and that is truly password-based: it does not need any high-entropy key, in contrast to the scheme proposed by Damgård et al.
Niek J. Bouman, Serge Fehr
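
The notion of a weak key is quantified by min-entropy, which measures only the most likely value rather than the average surprise. A one-line computation on a toy distribution:

```python
import math

def min_entropy(dist):
    """H_min(X) = -log2(max_x Pr[X = x]); a key is 'weak' when this is a
    small fraction of its bit length."""
    return -math.log2(max(dist.values()))

# a 2-bit key heavily biased toward 00: far less than 2 bits of min-entropy
dist = {"00": 0.7, "01": 0.1, "10": 0.1, "11": 0.1}
print(round(min_entropy(dist), 3))  # 0.515
```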

Secret Keys from Channel Noise

We study the problem of unconditionally secure Secret Key Establishment (SKE) when Alice and Bob are connected by two noisy channels that are eavesdropped by Eve. We consider the case that Alice and Bob do not have any sources of initial randomness at their disposal. We start by discussing special cases of interest where SKE is impossible and then provide a simple SKE construction over binary symmetric channels that achieves some rates of secret key. We next focus on the Secret Key (SK) capacity and provide lower and upper bounds on this capacity. We prove the lower bound by proposing a multi-round SKE protocol, called the main protocol. The main protocol consists of an initialization round and the repetition of a two-round SKE sub-protocol, called the basic protocol. We show that the two bounds coincide when the channels do not leak information to the adversary. We apply the results to the case that the communicants are connected by binary symmetric channels.

Hadi Ahmadi, Reihaneh Safavi-Naini

Almost Optimum t-Cheater Identifiable Secret Sharing Schemes

In Crypto’95, Kurosawa, Obana and Ogata proposed a $k$-out-of-$n$ secret sharing scheme capable of identifying up to $t$ cheaters with probability $1-\epsilon$ under the condition $t \leq \lfloor (k-1)/3 \rfloor$. The size of share $|{\cal V}_i|$ of the scheme satisfies $|{\cal V}_i| = |{\cal S}|/\epsilon^{t+2}$, which was the most efficient scheme known so far. In this paper, we propose new $k$-out-of-$n$ secret sharing schemes capable of identifying cheaters. The proposed scheme possesses the same security parameters $k$, $n$, $t$, $\epsilon$ as those of Kurosawa et al. The scheme is surprisingly simple and its size of share is $|{\cal V}_i|=|{\cal S}|/\epsilon$, which is much smaller than that of Kurosawa et al. and is almost optimum with respect to the size of share; that is, the size of share is only one bit longer than the existing bound. Further, this is the first scheme which can identify cheaters and whose size of share is independent of any of $n$ and $t$. We also present schemes which can identify up to $\lfloor (k-1)/3 \rfloor$ and $\lfloor (k-1)/2 \rfloor$ cheaters, whose sizes of share can be approximately written as $|{\cal V}_i|\approx (n\cdot(t+1)\cdot 2^{3t-1}\cdot|{\cal S}|)/\epsilon$ and $|{\cal V}_i|\approx ((n\cdot t\cdot 2^{3t})^2\cdot|{\cal S}|)/\epsilon^2$, respectively. The numbers of cheaters that the latter two schemes can identify meet the theoretical upper bound.

Satoshi Obana
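
These schemes build cheater identification on top of ordinary $k$-out-of-$n$ secret sharing. For reference, plain Shamir sharing — where a share is a single field element of size $|{\cal S}|$, with no identification capability — can be sketched as:

```python
import random

P = 2**31 - 1  # prime field for a toy Shamir scheme
rng = random.Random(9)

def shamir_share(secret, k, n):
    """k-out-of-n Shamir sharing: shares are points on a random
    degree-(k-1) polynomial whose constant term is the secret."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over F_P."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = shamir_share(secret=424242, k=3, n=7)
print(reconstruct(shares[:3]))  # 424242
```

A cheater handing in a modified point still yields *some* polynomial, so reconstruction silently returns a wrong secret; the identification machinery in the paper adds the extra share information needed to catch this, and the results above bound how little extra is needed.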

Symmetric Key Cryptography

On Linear Hulls, Statistical Saturation Attacks, PRESENT and a Cryptanalysis of PUFFIN

We discuss the complexities of advanced linear attacks. In particular, we argue why it is often more appropriate to examine the median of the complexity than the average value. Moreover, we apply our methods to the block ciphers PUFFIN and PRESENT. For PUFFIN, a 128-bit key cipher, we present an attack which breaks the cipher for at least a quarter of the keys with a complexity less than $2^{58}$. In the case of PRESENT we show that the design is sound: the design criteria are sufficient to ensure resistance against linear attacks, taking into account the notion of linear hulls. Finally, we show that statistical saturation attacks and multidimensional linear attacks are almost identical.

Gregor Leander

Domain Extension for MACs Beyond the Birthday Barrier

Given an $n$-bit to $n$-bit MAC (e.g., a fixed-key blockcipher) with MAC security $\epsilon$ against $q$ queries, we design a variable-length MAC achieving MAC security $O(\epsilon q\,\mathrm{poly}(n))$ against queries of total length $qn$. In particular, our construction is the first to break the “birthday barrier” for MAC domain extension from noncompressing primitives, since our security bound is meaningful even for $q = 2^{n/2}$ (assuming $\epsilon$ is the best possible $O(q/2^n)$). In contrast, the previous best construction for MAC domain extension for $n$-bit to $n$-bit primitives, due to Dodis and Steinberger [11], achieved MAC security of $O(\epsilon q^2\,\mathrm{poly}(n))$, which means that $q$ cannot cross the “birthday bound” of $2^{n/2}$.




Yevgeniy Dodis, John Steinberger
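
The “birthday barrier” refers to the collision probability among $q$ random $n$-bit values, which reaches roughly 1/2 once $q = 2^{n/2}$ — the point beyond which collision-based MAC bounds become vacuous. The standard approximation:

```python
def birthday_collision_prob(q, n_bits):
    """Union-bound approximation q(q-1)/2 / 2^n for the probability
    that q uniform n-bit values contain at least one collision."""
    return min(1.0, q * (q - 1) / 2 / 2**n_bits)

n = 64
print(birthday_collision_prob(2**(n // 2), n))  # ~0.5 at q = 2^(n/2)
```

A bound of the form $\epsilon q^2$ therefore becomes trivial at $q = 2^{n/2}$, while the paper's $\epsilon q$-type bound remains meaningful there.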

Attacks and Algorithms

Statistical Attack on RC4: Distinguishing WPA

In this paper we construct several tools for manipulating pools of biases in the analysis of RC4. Then, we show that optimized strategies can break WEP based on 4 000 packets by assuming that the first bytes of plaintext are known for each packet. We describe similar attacks for WPA. Firstly, we describe a distinguisher for WPA of complexity $2^{43}$ and advantage 0.5 which uses $2^{40}$ packets. Then, based on several partial temporary key recovery attacks, we recover the full 128-bit temporary key by using $2^{38}$ packets. It works within a complexity of $2^{96}$. So far, this is the best attack against WPA. We believe that our analysis brings further insights into the security of RC4.

Pouyan Sepehrdad, Serge Vaudenay, Martin Vuagnoux
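
The “pools of biases” exploited here are statistical irregularities in the RC4 keystream. The best-known single example, the Mantin–Shamir second-byte bias, is easy to reproduce empirically (this demonstrates only the bias, not the WPA attack itself):

```python
import random

def rc4_keystream(key, nbytes):
    """Standard RC4: key-scheduling algorithm (KSA), then PRGA."""
    S = list(range(256))
    j = 0
    for i in range(256):                       # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(nbytes):                    # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

# Mantin-Shamir bias: the second keystream byte is 0 with prob. ~2/256,
# twice the uniform 1/256
rng = random.Random(5)
trials, hits = 4000, 0
for _ in range(trials):
    key = [rng.randrange(256) for _ in range(16)]
    hits += rc4_keystream(key, 2)[1] == 0
print(hits / trials)  # noticeably above 1/256 = 0.0039
```

Attacks like the ones in the paper aggregate many such biases across keystream positions instead of using one in isolation.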

Improved Generic Algorithms for Hard Knapsacks

At Eurocrypt 2010, Howgrave-Graham and Joux described an algorithm for solving hard knapsacks of density close to 1 in time ${\mathcal{\tilde O}}(2^{0.337n})$ and memory ${\mathcal{\tilde O}}(2^{0.256n})$, thereby improving a 30-year-old algorithm by Shamir and Schroeppel. In this paper we extend the Howgrave-Graham–Joux technique to get an algorithm with running time down to ${\mathcal{\tilde O}}(2^{0.291n})$. An implementation shows the practicability of the technique. Another challenge is to reduce the memory requirement. We describe a constant-memory algorithm based on cycle finding with running time ${\mathcal{\tilde O}}(2^{0.72n})$; we also show a time-memory tradeoff.

Anja Becker, Jean-Sébastien Coron, Antoine Joux
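
All of these algorithms refine the basic meet-in-the-middle approach to subset sum, which already beats brute force by splitting the weights in half (this is the classic ${\mathcal{\tilde O}}(2^{n/2})$-time, ${\mathcal{\tilde O}}(2^{n/2})$-memory version; Schroeppel–Shamir and the algorithms above improve the exponents):

```python
from itertools import combinations

def knapsack_mitm(weights, target):
    """Meet-in-the-middle for subset sum: enumerate all subset sums of
    the left half into a table, then look up the complement of every
    right-half sum. Time/memory ~ 2^(n/2) instead of 2^n."""
    n = len(weights)
    half = n // 2

    def subset_sums(ws):
        sums = {}
        for r in range(len(ws) + 1):
            for idx in combinations(range(len(ws)), r):
                sums.setdefault(sum(ws[i] for i in idx), idx)
        return sums

    left_sums = subset_sums(weights[:half])
    for s, idx_r in subset_sums(weights[half:]).items():
        if target - s in left_sums:
            return sorted(left_sums[target - s]) + sorted(i + half for i in idx_r)
    return None

weights = [31, 44, 57, 68, 71, 83, 90, 99, 105, 111]
solution = knapsack_mitm(weights, target=289)
print(solution, sum(weights[i] for i in solution))
```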

Secure Computation

Two-Output Secure Computation with Malicious Adversaries

We present a method to compile Yao’s two-player garbled circuit protocol into one that is secure against malicious adversaries and that relies on witness indistinguishability. Our approach can enjoy lower communication and computation overhead than methods based on cut-and-choose [13] and lower overhead than methods based on zero-knowledge proofs [8] (or Σ-protocols [14]). To do so, we develop and analyze new solutions to issues arising with this transformation:

How to guarantee the generator’s input consistency

How to support different outputs for each player without adding extra gates to the circuit of the function $f$ being computed

How the evaluator can retrieve input keys but avoid selective failure attacks

Challenging 3/5 of the circuits is near optimal for cut-and-choose (and better than challenging 1/2)

Our protocols require the existence of secure-OT and claw-free functions that have a weak malleability property. We discuss an experimental implementation of our protocol to validate our efficiency claims.

Abhi shelat, Chih-hao Shen

Efficient Non-interactive Secure Computation

Suppose that a receiver $R$ wishes to publish an encryption of her secret input $x$ so that every sender $S$, holding an input $y$, can reveal $f(x,y)$ to $R$ by sending her a single message. This should be done while simultaneously protecting the secrecy of $y$ against a corrupted $R$ and preventing a corrupted $S$ from having an unfair influence on the output of $R$ beyond what is allowed by $f$.

When the parties are semi-honest, practical solutions can be based on Yao’s garbled circuit technique. However, for the general problem when the parties, or even $S$ alone, may be malicious, all known polynomial-time solutions are highly inefficient. This is due in part to the fact that known solutions make a non-black-box use of cryptographic primitives, e.g., for providing non-interactive zero-knowledge proofs of statements involving cryptographic computations on secrets.

Motivated by the above question, we consider the problem of secure two-party computation in a model that allows only parallel calls to an ideal oblivious transfer (OT) oracle with no additional interaction. We obtain the following results.


We present the first general protocols in this model which only make a black-box use of a pseudorandom generator (PRG). All previous OT-based protocols either make a non-black-box use of cryptographic primitives or require multiple rounds of interaction.


We also consider the question of minimizing the asymptotic number of PRG calls made by such protocols. We show that $\mathrm{polylog}(\kappa)$ calls are sufficient for each gate in a (large) boolean circuit computing $f$, where $\kappa$ is a statistical security parameter guaranteeing at most $2^{-\kappa}$ simulation error of a malicious sender. Furthermore, the number of PRG calls per gate can be made constant by settling for a relaxed notion of security which allows a malicious $S$ to arbitrarily correlate the event that $R$ detects cheating with the input of $R$. This improves over the state of the art also for interactive constant-round black-box protocols, which required $\Omega(\kappa)$ PRG calls per gate, even with similar relaxations of the notion of security.

Combining the above results with 2-message (parallel) OT protocols in the CRS model, we get the first solutions to the initial motivating question which only make a black-box use of standard cryptographic primitives.

Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky, Manoj Prabhakaran, Amit Sahai

Towards a Game Theoretic View of Secure Computation

We demonstrate how Game Theoretic concepts and formalism can be used to capture cryptographic notions of security. In the restricted but indicative case of two-party protocols in the face of malicious fail-stop faults, we first show how the traditional notions of secrecy and correctness of protocols can be captured as properties of Nash equilibria in games for rational players. Next, we concentrate on fairness. Here we demonstrate a Game Theoretic notion and two different cryptographic notions that turn out to all be equivalent. In addition, we provide a simulation based notion that implies the previous three. All four notions are weaker than existing cryptographic notions of fairness. In particular, we show that they can be met in some natural setting where existing notions of fairness are provably impossible to achieve.

Gilad Asharov, Ran Canetti, Carmit Hazay

Highly-Efficient Universally-Composable Commitments Based on the DDH Assumption

Universal composability (a.k.a. UC security) provides very strong security guarantees for protocols that run in complex real-world environments. In particular, security is guaranteed to hold when the protocol is run concurrently many times with other secure and possibly insecure protocols. Commitment schemes are a basic building block in many cryptographic constructions, and as such universally composable commitments are of great importance in constructing UC-secure protocols. In this paper, we construct highly efficient UC-secure commitments from the standard DDH assumption, in the common reference string model. Our commitment stage is non-interactive, has a common reference string with $O(1)$ group elements, and has complexity of $O(1)$ exponentiations for committing to a group element (the effective cost is a small constant number of exponentiations overall, for both the commit and decommit stages). We present a construction that is secure in the presence of static adversaries, and a construction that is secure in the presence of adaptive adversaries with erasures, where the latter construction has an effective additional cost of just a few exponentiations.

Yehuda Lindell
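
For intuition about discrete-log-style commitments, the classical Pedersen commitment is a useful reference point (it is perfectly hiding and binding under discrete log; it is *not* the paper's DDH-based UC construction, and the parameters below are toy-sized for illustration only):

```python
import random

# Toy order-q subgroup of Z_p^* with p = 2q + 1 (509 and 1019 are prime).
# g and h are squares, hence generators of the order-q subgroup; security
# would require log_g(h) to be unknown and the parameters to be huge.
p, q = 1019, 509
g, h = 4, 9
rng = random.Random(11)

def commit(m):
    """Pedersen commitment c = g^m * h^r mod p with fresh randomness r."""
    r = rng.randrange(q)
    return pow(g, m, p) * pow(h, r, p) % p, r

def verify(c, m, r):
    return c == pow(g, m, p) * pow(h, r, p) % p

c, r = commit(123)
print(verify(c, 123, r), verify(c, 124, r))  # True False
```

Plain Pedersen commitments are not UC-secure on their own (a simulator cannot extract or equivocate); achieving that efficiently under DDH in the CRS model is precisely the paper's contribution.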


Concurrent Composition in the Bounded Quantum Storage Model

We define the BQS-UC model, a variant of the UC model, that deals with protocols in the bounded quantum storage model. We present a statistically secure commitment protocol in the BQS-UC model that composes concurrently with other protocols and an (a-priori) polynomially-bounded number of instances of itself. Our protocol has an efficient simulator which is important if one wishes to compose our protocol with protocols that are only computationally secure. Combining our result with prior results, we get a statistically BQS-UC secure constant-round protocol for general two-party computation without the need for any setup assumption.

Dominique Unruh

Careful with Composition: Limitations of the Indifferentiability Framework

We exhibit a hash-based storage auditing scheme which is provably secure in the random-oracle model (ROM), but easily broken when one instead uses typical indifferentiable hash constructions. This contradicts the widely accepted belief that the indifferentiability composition theorem from [27] applies to any cryptosystem. We characterize the uncovered limitations of indifferentiability by showing that the formalizations used thus far implicitly exclude security notions captured by experiments that have multiple, disjoint adversarial stages. Examples include deterministic public-key encryption (PKE), password-based cryptography, hash function nonmalleability, and more. We formalize a stronger notion, reset indifferentiability, that enables a composition theorem covering such multi-stage security notions, but our results show that practical hash constructions cannot be reset indifferentiable. We finish by giving direct security proofs for several important PKE schemes.

Thomas Ristenpart, Hovav Shacham, Thomas Shrimpton

Key Dependent Message Security

Efficient Circuit-Size Independent Public Key Encryption with KDM Security

Key Dependent Message (KDM) secure encryption is a new area which has attracted much research in recent years. Roughly speaking, a KDM secure scheme w.r.t. a function set F provides security even if one encrypts a key-dependent message f(sk) for any f in F. We present a construction of an efficient public key encryption scheme which is KDM secure with respect to a large function set F: the functions computable by polynomial-size Modular Arithmetic Circuits (MACs); we represent the set as Straight Line Programs computing multi-variable polynomials (an extended scheme includes all rational functions whose denominator and numerator are functions as above). Unlike previous schemes, in our scheme the size of the ciphertext depends on the degree bound for the polynomials, and beyond this all parameters of the scheme are completely independent of the size of the function or the number of secret keys (users). We note that although KDM security has practical applications, all previous works in the standard model are either inefficient feasibility results when dealing with general-circuit function sets, or are for a small set of functions such as linear functions. Efficiency of our scheme is dramatically improved compared to the previous feasibility results.

Tal Malkin, Isamu Teranishi, Moti Yung
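
The straight-line-program representation used above can be made concrete with a minimal evaluator. The encoding below (triples of an opcode and two operand indices) is a hypothetical illustration of the idea, not the paper's formalism:

```python
# Minimal straight-line program (SLP) evaluator over Z_N, illustrating
# how a multi-variable polynomial can be described as a sequence of
# modular arithmetic steps. Each step combines two earlier values
# (inputs or previous results) with +, -, or *.

def eval_slp(program, inputs, modulus):
    """program: list of (op, i, j) triples indexing into the value list;
    inputs: the input variables x_0..x_{n-1}; arithmetic is mod modulus."""
    vals = list(inputs)
    ops = {"add": lambda a, b: a + b,
           "sub": lambda a, b: a - b,
           "mul": lambda a, b: a * b}
    for op, i, j in program:
        vals.append(ops[op](vals[i], vals[j]) % modulus)
    return vals[-1]

# Example: the degree-2 polynomial x0*x1 + x0 in two variables.
prog = [("mul", 0, 1),  # vals[2] = x0 * x1
        ("add", 2, 0)]  # vals[3] = x0 * x1 + x0
```

The length of the program, rather than the size of an equivalent circuit, is what bounds the work of the evaluator; the degree of the computed polynomial is what the scheme's ciphertext size depends on.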

Key-Dependent Message Security: Generic Amplification and Completeness

Key-dependent message (KDM) secure encryption schemes provide secrecy even when the attacker sees encryptions of messages related to the secret key sk. Namely, the scheme should remain secure even when messages of the form f(sk) are encrypted, where f is taken from some function class F. A KDM amplification procedure takes an encryption scheme which satisfies F-KDM security and boosts it into a G-KDM secure scheme, where the function class G should be richer than F. It was recently shown by Brakerski et al. (TCC 2011) and Barak et al. (EUROCRYPT 2010) that a strong form of amplification is possible, provided that the underlying encryption scheme satisfies some special additional properties.

In this work, we prove the first generic KDM amplification theorem which relies solely on the KDM security of the underlying scheme without making any other assumptions. Specifically, we show that an elementary form of KDM security against functions in which each output bit either copies or flips a single bit of the key (aka projections) can be amplified into KDM security with respect to any function family that can be computed in arbitrary fixed polynomial time. Furthermore, our amplification theorem and its proof are insensitive to the exact setting of KDM security, and they hold in the presence of multiple keys and in the symmetric-key/public-key and the CPA/CCA cases. As a result, we can amplify the security of all known KDM constructions, including ones that could not be amplified before.

Finally, we study the minimal conditions under which full-KDM security (with respect to all functions) can be achieved. We show that under a strong notion of KDM security, the existence of cyclic-secure fully-homomorphic encryption is not only sufficient for full-KDM security, as shown by Barak et al., but also necessary. On the other hand, we observe that for standard KDM security, this condition can be relaxed by adapting Gentry’s bootstrapping technique (STOC 2009) to the KDM setting.

Benny Applebaum
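
The base class of the amplification theorem, projections, is simple enough to sketch directly: each output bit copies or flips exactly one bit of the key. The encoding below is a hypothetical illustration of that class:

```python
# Toy model of the "projection" function class: every output bit either
# copies or flips a single bit of the key. Per the abstract, KDM
# security against this elementary class can be amplified to KDM
# security for arbitrary fixed polynomial-time function families.

def projection(spec):
    """spec: one (key_bit_index, flip) pair per output bit.
    Returns f mapping a key (tuple of bits) to a tuple of output bits."""
    def f(key):
        return tuple(key[i] ^ int(flip) for i, flip in spec)
    return f

# f(k) = (k[2], NOT k[0]): output bit 0 copies key bit 2,
# output bit 1 flips key bit 0.
f = projection([(2, False), (0, True)])
```

Note that each output bit of such a function depends on only one key bit, which is what makes the class "elementary" relative to the rich families it is amplified to.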

Public Key Encryption

Unbounded HIBE and Attribute-Based Encryption

In this work, we present HIBE and ABE schemes which are “unbounded” in the sense that the public parameters do not impose additional limitations on the functionality of the systems. In all previous constructions of HIBE in the standard model, a maximum hierarchy depth had to be fixed at setup. In all previous constructions of ABE in the standard model, either a small universe size or a bound on the size of attribute sets had to be fixed at setup. Our constructions avoid these limitations. We use a nested dual system encryption argument to prove full security for our HIBE scheme and selective security for our ABE scheme, both in the standard model and relying on static assumptions. Our ABE scheme supports LSSS matrices as access structures and also provides delegation capabilities to users.

Allison Lewko, Brent Waters

Decentralizing Attribute-Based Encryption

We propose a Multi-Authority Attribute-Based Encryption (ABE) system. In our system, any party can become an authority and there is no requirement for any global coordination other than the creation of an initial set of common reference parameters. A party can simply act as an ABE authority by creating a public key and issuing private keys to different users that reflect their attributes. A user can encrypt data in terms of any boolean formula over attributes issued from any chosen set of authorities. Finally, our system does not require any central authority.

In constructing our system, our largest technical hurdle is to make it collusion resistant. Prior Attribute-Based Encryption systems achieved collusion resistance when the ABE system authority “tied” together different components (representing different attributes) of a user’s private key by randomizing the key. However, in our system each component will come from a potentially different authority, where we assume no coordination between such authorities. We create new techniques to tie key components together and prevent collusion attacks between users with different global identifiers.

We prove our system secure using the recent dual system encryption methodology where the security proof works by first converting the challenge ciphertext and private keys to a semi-functional form and then arguing security. We follow a recent variant of the dual system proof technique due to Lewko and Waters and build our system using bilinear groups of composite order. We prove security under similar static assumptions to the LW paper in the random oracle model.

Allison Lewko, Brent Waters

Threshold and Revocation Cryptosystems via Extractable Hash Proofs

We present a new unifying framework for constructing non-interactive threshold encryption and signature schemes, as well as broadcast encryption schemes, and in particular, derive several new cryptosystems based on hardness of factoring, including:

a threshold signature scheme (in the random oracle model) that supports ad-hoc groups (i.e., exponential number of identities and the set-up is independent of the total number of parties) and implements the standard Rabin signature;

a threshold encryption scheme that supports ad-hoc groups, where encryption is the same as that in the Blum-Goldwasser cryptosystem and therefore more efficient than RSA-based implementations;

a CCA-secure threshold encryption scheme in the random oracle model;

a broadcast encryption scheme (more precisely, a revocation cryptosystem) that supports ad-hoc groups, whose complexity is comparable to that of the Naor-Pinkas scheme; moreover, we provide a variant of the construction that is CCA-secure in the random oracle model.

Our framework rests on a new notion of

threshold extractable hash proofs

. The latter can be viewed as a generalization of the extractable hash proofs, which are a special kind of non-interactive zero-knowledge proof of knowledge.

Hoeteck Wee
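
As background for the threshold setting, a minimal t-out-of-n Shamir secret-sharing sketch shows the reconstruction pattern that non-interactive threshold schemes distribute across parties. This is standard background material, not the paper's extractable-hash-proof construction:

```python
# Standard t-out-of-n Shamir secret sharing over a prime field:
# the secret is the constant term of a random degree-(t-1) polynomial;
# any t shares recover it by Lagrange interpolation at x = 0.
import random

P = 2_147_483_647  # prime field modulus (2^31 - 1)

def share(secret, t, n):
    """Split secret into n shares such that any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over any t distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any subset of fewer than t shares reveals nothing about the secret, while any t shares determine it completely; threshold decryption and signing schemes apply the same t-of-n pattern to partial decryptions or signature shares rather than to a stored secret.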

Deniable Encryption with Negligible Detection Probability: An Interactive Construction

Deniable encryption

, introduced in 1997 by Canetti, Dwork, Naor, and Ostrovsky, guarantees that the sender or the receiver of a secret message is able to “fake” the message encrypted in a specific ciphertext in the presence of a coercing adversary, without the adversary detecting that he was not given the real message. To date, constructions are only known either for weakened variants with separate “honest” and “dishonest” encryption algorithms, or for single-algorithm schemes with non-negligible detection probability.

We propose the first sender-deniable public key encryption system with a single encryption algorithm and negligible detection probability. We describe a generic interactive construction based on a public key bit encryption scheme that has certain properties, and we give two examples of encryption schemes with these properties, one based on the quadratic residuosity assumption and the other on trapdoor permutations.

Markus Dürmuth, David Mandell Freeman

