2012 | Book

Public Key Cryptography – PKC 2012

15th International Conference on Practice and Theory in Public Key Cryptography, Darmstadt, Germany, May 21-23, 2012. Proceedings

Edited by: Marc Fischlin, Johannes Buchmann, Mark Manulis

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

This book constitutes the refereed proceedings of the 15th International Conference on Practice and Theory in Public Key Cryptography, PKC 2012, held in Darmstadt, Germany, in May 2012. The 41 papers presented were carefully reviewed and selected from 188 submissions. The book also contains one invited talk. The papers are organized in the following topical sections: homomorphic encryption and LWE, signature schemes, code-based and multivariate crypto, public key encryption: special properties, identity-based encryption, public-key encryption: constructions, secure two-party and multi-party computations, key exchange and secure sessions, public-key encryption: relationships, DL, DDH, and more number theory, and beyond ordinary signature schemes.

Table of Contents

Frontmatter

Homomorphic Encryption and LWE

Better Bootstrapping in Fully Homomorphic Encryption

Gentry’s bootstrapping technique is currently the only known method of obtaining a “pure” fully homomorphic encryption (FHE) scheme, and it may offer performance advantages even in cases that do not require pure FHE (e.g., when using the noise-control technique of Brakerski-Gentry-Vaikuntanathan).

The main bottleneck in bootstrapping is the need to evaluate homomorphically the reduction of one integer modulo another. This is typically done by emulating a binary modular-reduction circuit, using bit operations on the binary representation of integers. We present a simpler approach that bypasses the homomorphic modular-reduction bottleneck to some extent, by working with a modulus very close to a power of two. Our method is easier to describe and implement than the generic binary-circuit approach, and we expect it to be faster in practice (although we have not implemented it yet). In some cases it also allows us to store the encryption of the secret key as a single ciphertext, thus reducing the size of the public key.

We also show how to combine our new method with the SIMD homomorphic computation techniques of Smart-Vercauteren and Gentry-Halevi-Smart, to get a bootstrapping method that works in time quasi-linear in the security parameter. This last part requires extending the techniques from prior work to handle arithmetic not only over fields, but also over some rings. (Specifically, our method uses arithmetic modulo a power of two, rather than over characteristic-two fields.)
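To see why a modulus close to a power of two helps, here is a plaintext-arithmetic sketch (ours, with the illustrative choice q = 2^r + 1, not taken from the paper) of the identity that replaces a generic binary division circuit: since 2^r ≡ −1 (mod q), reduction modulo q needs only a shift, a mask, and one subtraction.

r = 8
q = 2**r + 1                          # illustrative modulus close to 2**r

def reduce_near_pow2(z):
    # Split z = a*2**r + b; since 2**r = -1 (mod q), z = b - a (mod q).
    a, b = z >> r, z & (2**r - 1)
    return (b - a) % q

assert all(reduce_near_pow2(z) == z % q for z in range(10**5))

Homomorphically, extracting a and b amounts to bit manipulation, which is the simplification the abstract alludes to.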

Craig Gentry, Shai Halevi, Nigel P. Smart
Polly Cracker, Revisited, Revisited

In this paper, we consider the Polly Cracker with Noise (PCN) cryptosystem by Albrecht, Farshim, Faugère, and Perret (Asiacrypt 2011), which is a public-key cryptosystem based on the hardness of computing Gröbner bases for noisy random systems of multivariate equations. We examine four settings, covering all possible parameter ranges of PCN with zero-degree noise. In the first setting, the PCN cryptosystem is known to be equivalent to Regev’s LWE-based scheme. In the second, it is known to be at most as secure as Regev’s scheme. We show that in one further setting it is equivalent to a less efficient variant of Regev’s scheme, while in the last setting it is completely insecure, and we give an efficient key-recovery attack. Unrelated to the attack, we also fix some flaws in the security proofs of PCN.

Gottfried Herold
Ring-LWE in Polynomial Rings

The Ring-LWE problem, introduced by Lyubashevsky, Peikert, and Regev (Eurocrypt 2010), has been steadily finding many uses in numerous cryptographic applications. Still, the Ring-LWE problem defined in [LPR10] involves the fractional ideal R^∨, the dual of the ring R, which is the source of many theoretical and implementation technicalities. Until now, getting rid of R^∨ required some relatively complex transformation that substantially increases the magnitude of the error polynomial and the practical complexity to sample it. It is only for rings R = ℤ[X]/(X^n + 1), where n is a power of 2, that this transformation is simple and benign.

In this work we show that by applying a different, and much simpler, transformation, one can transfer the results from [LPR10] into an “easy-to-use” Ring-LWE setting (i.e., without the dual ring R^∨), with only a very slight increase in the magnitude of the noise coefficients. Additionally, we show that creating the correct noise distribution can also be simplified by generating a Gaussian distribution over a particular extension ring of R, and then performing a reduction modulo f(X). In essence, our results show that one does not need to resort to using any algebraic structure more complicated than polynomial rings in order to fully utilize the hardness of the Ring-LWE problem as a building block for cryptographic applications.
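As a small illustration (ours, not from the paper) of the plain polynomial-ring arithmetic advocated here: in ℤ[X]/(X^n + 1), reduction modulo f(X) = X^n + 1 just wraps coefficient i ≥ n to position i − n with a sign flip, since X^n = −1.

def mul_mod_xn_plus_1(a, b, n):
    # Multiply two polynomials (coefficient lists) in Z[X]/(X^n + 1).
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] += ai * bj
            else:
                res[k - n] -= ai * bj   # X^k = -X^(k-n) since X^n = -1
    return res

print(mul_mod_xn_plus_1([1, 2], [3, 4], 2))   # (1+2X)(3+4X) = -5+10X mod X^2+1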

Léo Ducas, Alain Durmus
On Homomorphic Encryption and Chosen-Ciphertext Security

Chosen-Ciphertext (IND-CCA) security is generally considered the right notion of security for a cryptosystem. Because of its central importance, much effort has been devoted to constructing IND-CCA secure cryptosystems.

In this work, we consider constructing IND-CCA secure cryptosystems from (group) homomorphic encryption. Our main results give natural and efficient constructions of IND-CCA secure cryptosystems from any homomorphic encryption scheme that satisfies weak cyclic properties, either in the plaintext, ciphertext or randomness space. Our results have the added benefit of being simple to describe and analyze.

Brett Hemenway, Rafail Ostrovsky

Signature Schemes

Waters Signatures with Optimal Security Reduction

Waters signatures (Eurocrypt 2005) can be shown existentially unforgeable under chosen-message attacks under the assumption that the computational Diffie-Hellman problem in the underlying (pairing-friendly) group is hard. The corresponding security proof has a reduction loss of O(ℓ·q), where ℓ is the bitlength of messages and q is the number of adversarial signature queries. The original reduction has meanwhile been improved to $O(\sqrt{\ell}\cdot q)$ (Hofheinz and Kiltz, Crypto 2008); however, it is currently unknown whether a better reduction exists. We answer this question as follows:

(a) We give a simple modification of Waters signatures, where messages are encoded such that any two encoded messages have a suitably large Hamming distance. Somewhat surprisingly, this simple modification suffices to prove security under the CDH assumption with a reduction loss of O(q).

(b) We also show that any black-box security proof for a signature scheme with re-randomizable signatures must have a reduction loss of at least Ω(q), or the underlying hardness assumption is false. Since both Waters signatures and our variant from (a) are re-randomizable, this proves our reduction from (a) optimal up to a constant factor.

Understanding and optimizing the security loss of a cryptosystem is important to derive concrete parameters, such as the size of the underlying group. We provide a complete picture for Waters-like signatures: there is an inherent lower bound for the security loss, and we show how to achieve it.
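For concreteness (our arithmetic, using the bounds stated above): with message length ℓ = 256 and q = 2^{30} signature queries, the loss factor improves from ℓ·q = 2^{38} in the original proof, via $\sqrt{\ell}\cdot q = 2^{34}$ (Hofheinz-Kiltz), to q = 2^{30} for the modified scheme, which translates directly into smaller group-size requirements for the same security level.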

Dennis Hofheinz, Tibor Jager, Edward Knapp
Strong Security from Probabilistic Signature Schemes

We introduce a new and very weak security notion for signature schemes called target randomness security. In contrast to previous security definitions, we focus on signature schemes with (public coin) probabilistic signature generation where the randomness used during signature generation is exposed as part of the signature. To prove the practical usefulness of our notion we present a new signature transformation for mapping target randomness secure signature schemes to weakly secure signature schemes. It is well-known that, using chameleon hash functions, the resulting weakly secure scheme can then be turned into a fully secure one. Our transformation outputs signature schemes that in general produce signatures with l elements, where l is the bit length of the input randomness. We present an instantiation of a target randomness secure signature scheme based on the RSA assumption and show that after applying our new signature transformation to this scheme, we can accumulate the l signature elements into a single element. This results in a new efficient RSA-based signature scheme. In contrast to traditional security definitions, all signature schemes obtained with our transformation enjoy strong security, i.e. they remain secure even if the adversary outputs a new signature on a previously queried message. In our proofs, we rely on the prefix-based technique introduced by Hohenberger and Waters at Crypto’09. However, using a precise analysis we are able to decrease the security loss in proofs relying on the prefix-based technique. This result may be of independent interest.

Sven Schäge
Space Efficient Signature Schemes from the RSA Assumption

Signature schemes from the RSA assumption are very important because of their highly reliable security. Despite their importance, only a few digital signature schemes from the RSA assumption are currently known. Thus, improving efficiency in this area seems to be very important. In this paper, we propose various signature schemes from the RSA assumption. First, we propose a scheme that simultaneously provides the shortest signatures and public key length among the known schemes. Compared with the known best schemes, the signature size is the same as that of the scheme proposed recently by Hofheinz, Jager, and Kiltz, whereas the public key size is about half that of the Hohenberger-Waters scheme. The drawback of the scheme is its heavy signing and verification algorithms. Second, we also propose a scheme whose public key is longer than in our first scheme, but whose signing and verification costs are more efficient. The scheme can be seen as a generalization of our first scheme and the Hofheinz-Jager-Kiltz scheme. Finally, we propose a scheme whose signing and verification algorithms are more efficient than those of our first and second schemes, whereas the signature size is longer. All these schemes are constructed based on a new observation about the relation between m-time signature schemes and short signature schemes.

Shota Yamada, Goichiro Hanaoka, Noboru Kunihiro
The Construction of Ambiguous Optimistic Fair Exchange from Designated Confirmer Signature without Random Oracles

Ambiguous Optimistic Fair Exchange (AOFE), introduced by Huang et al. at ASIACRYPT 2008, is an extension of OFE that enhances the fairness of the two communicating parties in the exchange of signatures. The first scheme was proven secure without random oracles, but its partial signatures contain dozens of group elements. Recently, interactive AOFE was introduced and the construction is more practical, where one partial signature only contains three group elements. It is based on the existence of Designated Confirmer Signatures (DCS) with a special property where one is able to sample a confirmer signature efficiently from a signer’s signature space. Nevertheless, we note that there are only a few DCS schemes that have this special property. Security of the interactive AOFE construction relies on the q-Computational and Decisional Hidden Strong Diffie-Hellman assumptions. In this paper, we propose a new construction of interactive AOFE from DCS, where the underlying DCS is standard and does not require any special property. We also propose a new DCS construction. By applying our transformation from DCS to interactive AOFE, we build a concrete interactive AOFE which is secure under more standard number-theoretic assumptions, namely the Strong Diffie-Hellman and Decision Linear assumptions, without random oracles. A partial signature of the interactive AOFE contains six group elements, while a full signature contains only two.

Qiong Huang, Duncan S. Wong, Willy Susilo

Code-Based and Multivariate Crypto

Efficient Implementation of a CCA2-Secure Variant of McEliece Using Generalized Srivastava Codes

In this paper we present efficient implementations of McEliece variants using quasi-dyadic codes. We provide secure parameters for a classical McEliece encryption scheme based on quasi-dyadic generalized Srivastava codes, and successively convert our scheme to a CCA2-secure protocol in the random oracle model by applying the Fujisaki-Okamoto transform. In contrast with all other CCA2-secure code-based cryptosystems that work in the random oracle model, our conversion does not require a constant weight encoding function. We present results for both the 128-bit and 80-bit security levels, and for the latter we also feature an implementation for an embedded device.

Pierre-Louis Cayrel, Gerhard Hoffmann, Edoardo Persichetti
Solving Underdetermined Systems of Multivariate Quadratic Equations Revisited

Solving systems of m $\mathcal M$ultivariate $\mathcal Q$uadratic ($\mathcal{MQ}$) equations in n variables is one of the main challenges of algebraic cryptanalysis. Although the associated $\mathcal{MQ}$-problem is proven to be NP-complete, we know that it is solvable in polynomial time over fields of even characteristic if either m ≥ n(n − 1)/2 (overdetermined) or n ≥ m(m + 1) (underdetermined). It is widely believed that m = n has worst case complexity. Indeed, in the overdetermined case Gröbner basis algorithms show a gradual decrease in complexity from m = n to m ≥ n(n − 1)/2 as more and more equations are available. For the underdetermined case no similar behavior was known. Up to now the best way to deal with the case m < n < m(m + 1) was to randomly guess variables until m = n. This article shows how to smartly use additional variables and thus obtain a gradual change of complexity over even characteristics also for the underdetermined case. Namely, we show how a linear change of variables can be used to reduce the overall complexity of solving an $\mathcal{MQ}$-system with m equations and n = ωm variables, for some ω ∈ ℚ with ω > 1, to the complexity of solving an $\mathcal{MQ}$-system with only $(m-\left\lfloor \omega\right\rfloor+1)$ equations and variables. Our algorithm can be seen as an extension of the previously known algorithm of Kipnis-Patarin-Goubin (extended version of Eurocrypt ’99) and improves an algorithm of Courtois et al. which eliminates $\left\lfloor \log_2\omega\right\rfloor$ variables. For small ω we also adapt our algorithm to fields of odd characteristic. We apply our result to break current instances of the Unbalanced Oil and Vinegar public key signature scheme that uses n = 3m and hence ω = 3.
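Concretely, for the UOV parameters just mentioned (ω = 3), the reduction replaces an instance with m equations in n = 3m variables by one with m − ⌊3⌋ + 1 = m − 2 equations and variables.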

Enrico Thomae, Christopher Wolf
Public-Key Identification Schemes Based on Multivariate Cubic Polynomials

Solving a system of multivariate polynomials over a finite field is a promising problem in cryptography. Recently, Sakumoto et al. proposed public-key identification schemes based on the quadratic version of the problem, which is called the MQ problem. However, it is still an open question whether or not it is possible to build efficient constructions of public-key identification based on multivariate polynomials of degree greater than two. In this paper, we tackle the cubic case of this question and construct public-key identification schemes based on the cubic version of the problem, which is called the MC problem. The MQ problem is a special case of the MC problem. Our schemes consist of a protocol which is a zero-knowledge argument of knowledge for the MC problem under the assumption of the existence of a non-interactive commitment scheme. For a practical parameter choice, the efficiency of our schemes is highly comparable to that of the schemes based on the MQ problem. Furthermore, the parallel version of our scheme also achieves security under active attack with some additional cost.

Koichi Sakumoto
Public-Key Cryptography from New Multivariate Quadratic Assumptions

In this work, we study a new multivariate quadratic (MQ) assumption that can be used to construct public-key encryption schemes. In particular, we pursue the following two directions:

We establish a precise asymptotic formulation of a family of hard MQ problems, and provide empirical evidence to confirm the hardness.

We construct public-key encryption schemes, and prove their security under the hardness assumption of this family. Also, we provide a new perspective on MQ systems that plays a key role in our design and proof of security.

As a consequence, we construct the first public-key encryption scheme that is provably secure under the MQ assumption. Moreover, our public-key encryption scheme is efficient in the sense that it only needs a ciphertext length of L + poly(k) to encrypt a message M ∈ {0, 1}^L for any un-prespecified polynomial L, where k is the security parameter. This is essentially optimal, since an additive overhead is the best we can hope for.

Yun-Ju Huang, Feng-Hao Liu, Bo-Yin Yang

Public-Key Encryption: Special Properties

Anonymous Broadcast Encryption: Adaptive Security and Efficient Constructions in the Standard Model

In this paper we consider anonymity in the context of Broadcast Encryption (BE). This issue has received very little attention so far and all but one of the currently available BE schemes fail to provide anonymity. Yet, we argue that it is intrinsically desirable to provide anonymity in standard applications of BE and that it can be achieved at a moderate cost. We provide a security definition for Anonymous Broadcast Encryption (ANOBE) and show that it is achievable assuming only the existence of IND-CCA secure public key encryption (PKE). Focusing on reducing the size of ciphertexts, we then give two generic constructions for ANOBE. The first is from any anonymous (key-private) IND-CCA secure PKE scheme, and the second is from any IBE scheme that satisfies a weak security notion in the multi-TA setting. Furthermore, we show how randomness re-use techniques can be deployed in the ANOBE context to reduce computational and communication costs, and how a new cryptographic primitive, anonymous hint systems, can be used to speed up the decryption process in our ANOBE constructions. All of our results are in the standard model, achieving fully collusion-resistant ANOBE schemes secure against adaptive IND-CCA adversaries.

Benoît Libert, Kenneth G. Paterson, Elizabeth A. Quaglia
Outsider-Anonymous Broadcast Encryption with Sublinear Ciphertexts

In the standard setting of broadcast encryption, information about the receivers is transmitted as part of the ciphertext. In several broadcast scenarios, however, the identities of the users authorized to access the content are often as sensitive as the content itself. In this paper, we propose the first broadcast encryption scheme with sublinear ciphertexts to attain meaningful guarantees of receiver anonymity. We formalize the notion of outsider-anonymous broadcast encryption (oABE), and describe generic constructions in the standard model that achieve outsider-anonymity under adaptive corruptions in the chosen-plaintext and chosen-ciphertext settings. We also describe two constructions with enhanced decryption, one under the gap Diffie-Hellman assumption, in the random oracle model, and the other under the decisional Diffie-Hellman assumption, in the standard model.

Nelly Fazio, Irippuge Milinda Perera
Verifiable Predicate Encryption and Applications to CCA Security and Anonymous Predicate Authentication

In this paper, we focus on verifiability of predicate encryption. A verifiable predicate encryption scheme guarantees that all legitimate receivers of a ciphertext will obtain the same message upon decryption. While verifiability of predicate encryption might be a desirable property by itself, we furthermore show that this property enables interesting applications.

Specifically, we provide two applications of verifiable predicate encryption. Firstly, we show that for a large class of verifiable predicate encryption schemes, it is always possible to convert a chosen-plaintext secure scheme into a chosen-ciphertext secure one. Secondly, we show that a verifiable predicate encryption scheme allows the construction of a deniable predicate authentication scheme. This primitive enables a user to authenticate a message to a verifier using a private key satisfying a specified relation while at the same time allowing the user to deny ever having interacted with the verifier. This scheme furthermore guarantees the anonymity of the user in the sense that the verifier will learn nothing about the user’s private key except that it satisfies the specified relation.

Lastly, we show that many currently known predicate encryption schemes already provide verifiability, and furthermore demonstrate that many predicate encryption schemes which do not provide verifiability can be easily converted into schemes providing verifiability.

Our results not only highlight that verifiability is a very useful property of predicate encryption, but also show that efficient and practical schemes with this property can be obtained relatively easily.

Shota Yamada, Nuttapong Attrapadung, Bagus Santoso, Jacob C. N. Schuldt, Goichiro Hanaoka, Noboru Kunihiro
Public Key Encryption against Related Key Attacks

In this work, we present efficient public-key encryption schemes resilient against linear related key attacks (RKA) under standard assumptions and in the standard model. Specifically, we obtain encryption schemes based on hardness of factoring, BDDH and LWE that remain secure even against an adversary that may query the decryption oracle on linear shifts of the actual secret key. Moreover, the ciphertext overhead is only an additive constant number of group elements.

Hoeteck Wee

Identity-Based Encryption

Functional Encryption for Threshold Functions (or Fuzzy IBE) from Lattices

Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities.

In this work, we construct “Fuzzy” Identity Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA and CCA secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles towards realizing lattice-based attribute-based encryption (ABE).

Shweta Agrawal, Xavier Boyen, Vinod Vaikuntanathan, Panagiotis Voulgaris, Hoeteck Wee
Variants of Waters’ Dual System Primitives Using Asymmetric Pairings
(Extended Abstract)

Waters, in 2009, introduced an important technique, called dual system encryption, to construct identity-based encryption (IBE) and related schemes. The resulting IBE scheme was described in the setting of symmetric pairing. A key feature of the construction is the presence of random tags in the ciphertext and decryption key. Later work by Lewko and Waters removed the tags and, proceeding through composite-order pairings, led to a more efficient dual system IBE scheme using asymmetric pairings whose security is based on non-standard but static assumptions. In this work, we systematically simplify Waters’ 2009 IBE scheme in the setting of asymmetric pairing. The simplifications retain the tags used in the original description. This leads to several variants, the first of which is based on standard assumptions and, in comparison to Waters’ original scheme, reduces ciphertexts and keys by two elements each. Going through several stages of simplifications, we finally obtain a simple scheme whose security can be based on two standard assumptions and a natural and minimal extension of the decision Diffie-Hellman problem for asymmetric pairing groups. The scheme itself is also minimal in the sense that apart from the tags, both encryption and key generation use exactly one randomiser each. This final scheme is more efficient than both the previous dual system IBE scheme in the asymmetric setting due to Lewko and Waters and the more recent dual system IBE scheme due to Lewko. We extend the IBE scheme to hierarchical IBE (HIBE) and broadcast encryption (BE) schemes. Both primitives are secure in their respective full models and have better efficiency compared to previously known schemes offering the same level and type of security.

Somindu C. Ramanna, Sanjit Chatterjee, Palash Sarkar
From Selective to Full Security: Semi-generic Transformations in the Standard Model

In this paper, we propose an efficient, standard model, semi-generic transformation of selective-secure (Hierarchical) Identity-Based Encryption schemes into fully secure ones. The main step is a procedure that uses admissible hash functions (whose existence is implied by collision-resistant hash functions) to convert any selective-secure wildcarded identity-based encryption (WIBE) scheme into a fully secure (H)IBE scheme. Since building a selective-secure WIBE, especially with a selective-secure HIBE already in hand, is usually much less involved than directly building a fully secure HIBE, this transform already significantly simplifies the latter task. This black-box transformation easily extends to schemes secure in the Continual Memory Leakage (CML) model of Brakerski et al. (FOCS 2010), which allows us to obtain a new fully secure IBE in that model. We furthermore show that if a selective-secure HIBE scheme satisfies a particular security notion, then it can be generically transformed into a selective-secure WIBE. We demonstrate that several current schemes already fit this new definition, while some others that do not obviously satisfy it can still be easily modified into a selective-secure WIBE.

Michel Abdalla, Dario Fiore, Vadim Lyubashevsky
Circular and KDM Security for Identity-Based Encryption

We initiate the study of security for key-dependent messages (KDM), sometimes also known as “circular” or “clique” security, in the setting of identity-based encryption (IBE). Circular/KDM security requires that ciphertexts preserve secrecy even when they encrypt messages that may depend on the secret keys, and arises in natural usage scenarios for IBE.

We construct an IBE system that is circular secure for affine functions of users’ secret keys, based on the learning with errors (LWE) problem (and hence on worst-case lattice problems). The scheme is secure in the standard model, under a natural extension of a selective-identity attack. Our three main technical contributions are (1) showing the circular/KDM-security of a “dual”-style LWE public-key cryptosystem, (2) proving the hardness of a version of the “extended LWE” problem due to O’Neill, Peikert and Waters (CRYPTO’11), and (3) building an IBE scheme around the dual-style system using a novel lattice-based “all-but-d” trapdoor function.

Jacob Alperin-Sheriff, Chris Peikert

Public-Key Encryption: Constructions

NTRUCCA: How to Strengthen NTRUEncrypt to Chosen-Ciphertext Security in the Standard Model

NTRUEncrypt is a fast and practical lattice-based public-key encryption scheme, which has been standardized by IEEE, but until recently, its security analysis relied only on heuristic arguments. Recently, Stehlé and Steinfeld showed that a slight variant (that we call pNE) could be proven to be secure under chosen-plaintext attack (IND-CPA), assuming the hardness of worst-case problems in ideal lattices. We present a variant of pNE called NTRUCCA, that is IND-CCA2 secure in the standard model assuming the hardness of worst-case problems in ideal lattices, and only incurs a constant factor overhead in ciphertext and key length over the pNE scheme. To our knowledge, our result gives the first IND-CCA2 secure variant of NTRUEncrypt in the standard model, based on standard cryptographic assumptions.

As an intermediate step, we present a construction for an All-But-One (ABO) lossy trapdoor function from pNE, which may be of independent interest. Our scheme uses the lossy trapdoor function framework of Peikert and Waters, which we generalize to the case of (k − 1)-of-k-correlated input distributions.

Ron Steinfeld, San Ling, Josef Pieprzyk, Christophe Tartary, Huaxiong Wang
Generating Provable Primes Efficiently on Embedded Devices

This paper introduces new techniques to generate provable prime numbers efficiently on embedded devices such as smartcards, based on variants of Pocklington’s and the Brillhart-Lehmer-Selfridge-Tuckerman-Wagstaff theorems. We introduce two new generators that, combined with cryptoprocessor-specific optimizations, open the way to efficient and tamper-resistant on-board generation of provable primes. We also report practical results from our implementations. Both our theoretical and experimental results show that constructive methods can generate provable primes essentially as efficiently as state-of-the-art generators for probable primes based on Fermat and Miller-Rabin pseudo-tests. We evaluate the output entropy of our two generators and provide techniques to ensure a high level of resistance against physical attacks. This paper intends to provide practitioners with the first practical solutions for fast and secure generation of provable primes in embedded security devices.
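As a rough illustration of the constructive approach, here is a minimal Pocklington-based sketch (ours, with heavy simplifications; the paper's generators add Brillhart-Lehmer-Selfridge-style refinements, cryptoprocessor-specific optimizations, and entropy guarantees not reproduced here): grow a proven prime p into a roughly double-sized proven prime N = 2tp + 1, recursively.

import math, random

def pocklington_prime_test(N, p):
    # Pocklington: if N - 1 = 2*t*p with proven prime p > sqrt(N), and
    # some a has a^(N-1) = 1 (mod N) and gcd(a^((N-1)/p) - 1, N) = 1,
    # then N is prime (the pair (a, p) is a primality certificate).
    if p * p <= N:
        return False                       # need p > sqrt(N)
    for _ in range(64):
        a = random.randrange(2, N - 1)
        if pow(a, N - 1, N) != 1:
            return False                   # Fermat test fails: composite
        if math.gcd(pow(a, (N - 1) // p, N) - 1, N) == 1:
            return True                    # N is provably prime
    return False                           # give up on this candidate

def grow_provable_prime(p, bits):
    # p is a proven prime; choosing t < p/2 keeps p > sqrt(N), so each
    # accepted step roughly doubles the size of the proven prime.
    while p.bit_length() < bits:
        t = random.randrange(1, max(2, p // 2))
        N = 2 * t * p + 1
        if pocklington_prime_test(N, p):
            p = N
    return p

print(grow_provable_prime(3, 256))         # 3 is prime by inspection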

Christophe Clavier, Benoit Feix, Loïc Thierry, Pascal Paillier

Invited Talk

Password-Based Authenticated Key Exchange

Authenticated Key Exchange protocols enable several parties to establish a shared cryptographically strong key over an insecure network using various authentication means, such as strong cryptographic keys or short (i.e., low-entropy) common secrets. The latter example is definitely the most interesting in practice, since no additional device is required, but just a human-memorable password, for authenticating the players.

After the seminal work by Bellovin and Merritt, many settings and security notions have been defined, and many protocols have been proposed, in the two-user setting and in the group setting.

David Pointcheval

Secure Two-Party and Multi-party Computations

Constant-Round Multi-party Private Set Union Using Reversed Laurent Series

We introduce the idea of associating a set of elements with a rational function represented using a reversed Laurent series. Using this representation, we propose private set-union protocols in the multi-party setting, assuming an honest majority. Our protocols are the first efficient protocols for private set union with constant round complexity (in both the semi-honest and malicious settings), as well as the first with statistical security (in the semi-honest setting).

Jae Hong Seo, Jung Hee Cheon, Jonathan Katz
Policy-Enhanced Private Set Intersection: Sharing Information While Enforcing Privacy Policies

Companies, organizations, and individuals often wish to share information to realize valuable social and economic goals. Unfortunately, privacy concerns often stand in the way of such information sharing and exchange.

This paper proposes a novel cryptographic paradigm called Policy-Enhanced Private Set Intersection (PPSI), allowing two parties to share information while enforcing the desired privacy policies. Our constructions require minimal additional overhead over traditional Private Set Intersection (PSI) protocols, and yet we can handle rich policy semantics previously not possible with traditional PSI and Authorized Private Set Intersection (APSI) protocols. Our scheme involves running a standard PSI protocol over carefully crafted encodings of elements formed as part of a challenge-response mechanism. The structure of these encodings resembles techniques used for aggregating BLS signatures in bilinear groups. We prove that our scheme is secure in the malicious model, in the random oracle model, under the CBDH assumption and the assumption that the underlying PSI protocol is secure against malicious adversaries.

Emil Stefanov, Elaine Shi, Dawn Song
Efficiently Shuffling in Public

We revisit shuffling in public [AW07a], a scheme which allows a shuffle to be precomputed. We show how to obfuscate a Paillier shuffle with $O(N \log^{3.5} N)$ exponentiations, leading to a very robust and efficient mixnet: when distributed over O(N) nodes the mixnet achieves mixing in polylogarithmic time, independent of the level of privacy or verifiability required. Our construction involves the use of layered Paillier applied to permutation networks. With an appropriate network the shuffle may be confined to a particular subset of permutations, for example to rotations. While it is possible that the mixnet may produce biased output, we show that certain networks lead to an acceptable bias-efficiency tradeoff.

Udaya Parampalli, Kim Ramchen, Vanessa Teague

Key Exchange and Secure Sessions

Efficient Password Authenticated Key Exchange via Oblivious Transfer

We present a new framework for constructing efficient password authenticated key exchange (PAKE) protocols based on oblivious transfer (OT). Using this framework, we obtain:

an efficient and simple UC-secure PAKE protocol that is secure against adaptive corruptions without erasures;

efficient and simple PAKE protocols under the Computational Diffie-Hellman (CDH) assumption and the hardness of factoring. (Previous efficient constructions rely on hash proof systems, which appear to be inherently limited to decisional assumptions.)

All of our constructions assume a common reference string (CRS) but do not rely on random oracles.

Ran Canetti, Dana Dachman-Soled, Vinod Vaikuntanathan, Hoeteck Wee
Strongly Secure Authenticated Key Exchange from Factoring, Codes, and Lattices

An unresolved problem in research on authenticated key exchange (AKE) is to construct a secure protocol against advanced attacks such as key compromise impersonation and maximal exposure attacks without relying on random oracles. HMQV, a state-of-the-art AKE protocol, achieves both efficiency and the strong security model proposed by Krawczyk (we call it the CK^+ model), which includes resistance to advanced attacks. However, the security proof is given in the random oracle model. We propose a generic construction of AKE from a key encapsulation mechanism (KEM). The construction is based on a chosen-ciphertext secure KEM, and the resultant AKE protocol is CK^+ secure in the standard model. The protocol gives the first CK^+ secure AKE protocols based on the hardness of the integer factorization problem, code-based problems, or learning problems with errors. In addition, instantiations under the Diffie-Hellman assumption or its variant can be proved to have strong security without non-standard assumptions such as πPRF and KEA1.

Atsushi Fujioka, Koutarou Suzuki, Keita Xagawa, Kazuki Yoneyama
Relatively-Sound NIZKs and Password-Based Key-Exchange

We define a new notion of relatively-sound non-interactive zero-knowledge (NIZK) proofs, where a private verifier with access to a trapdoor continues to be sound even when the Adversary has access to simulated proofs and common reference strings. It is likely that this weaker notion of relative-soundness suffices in most applications that need simulation-soundness. We show that for certain languages which are diverse groups, and hence allow smooth projective hash functions, one can obtain more efficient single-theorem relatively-sound NIZKs as opposed to simulation-sound NIZKs. We also show that such relatively-sound NIZKs can be used to build rather efficient publicly-verifiable CCA2-encryption schemes.

By employing this new publicly-verifiable encryption scheme along with an associated smooth projective hash, we show that a recent PAK-model single-round password-based key exchange protocol of Katz and Vaikuntanathan (Proc. TCC 2011) can be made much more efficient. We also show a new single-round UC-secure password-based key exchange protocol with only a constant number of group elements as communication cost, whereas the previous single-round UC-protocol required Ω(k) group elements, where k is the security parameter.

Charanjit Jutla, Arnab Roy
Multi-location Leakage Resilient Cryptography

Understanding and modeling leakage in the context of cryptographic systems (connecting physical protection of keys and cryptographic operation) is an emerging area with many missing issues and hard-to-understand aspects. In this work we initiate the study of leakage out of cryptographic devices when the operation is inherently replicated in multiple locations. This setting (allowing the adversary access to leakage at different locations) arises naturally in cases like protocols, where different parties activate the same cryptographic function, or in the case of global service providers (like cloud operators) which need to replicate the cryptographic function to allow for accessible and responsive services. We specifically deal with the theoretical setting of “leakage resilient cryptography” (modeling leakage as a bound associated with algorithmic steps), and in the most general model of continual leakage on memory, randomness (and thus computation), with periods of operation and refresh of private keys between them.

We first investigate public-key cryptography, and construct a multi-location leakage resilient signature scheme (with an unbounded number of locations) with optimal (i.e., total n(1 − o(1)) leakage) in a period, and O(log n) leakage during updates (n is the key size). The crucial new issue behind our scheme is how to maintain leakage at each location at the level of key leakage in the single-location variant, even under parallel adaptive leakage at the different locations. We then construct a shared-symmetric-key authenticated session protocol that is resilient to leakage on both the sender and the receiver, and tolerates O(log n) bits of leakage per computation. We construct and utilize a single-location pseudorandom generator which is the first to tolerate continual leakage with only an efficient pseudorandom function as a primitive component. This protocol highlights the importance of protocol-level “per message synchronization” against leakage adversaries. Interestingly, the construction is secure in spite of the entire randomness used in the refresh processes being publicly available.

Ali Juma, Yevgeniy Vahlis, Moti Yung

Public-Key Encryption: Relationships

On Definitions of Selective Opening Security

Assume that an adversary observes many ciphertexts, and may then ask for openings, i.e. the plaintext and the randomness used for encryption, of some of them. Do the unopened ciphertexts remain secure? There are several ways to formalize this question, and the ensuing security notions are not known to be implied by standard notions of encryption security. In this work, we relate the two existing flavors of selective opening security. Our main result is that indistinguishability-based selective opening security and simulation-based selective opening security do not imply each other.

We show our claims by counterexamples. Concretely, we construct two public-key encryption schemes. One scheme is secure under selective openings in a simulation-based sense, but not in an indistinguishability-based sense. The other scheme is secure in an indistinguishability-based sense, but not in a simulation-based sense.

Our results settle an open question of Bellare et al. (Eurocrypt 2009). Also, taken together with known results about selective opening secure encryption, we get an almost complete picture of how the two flavors of selective opening security relate to standard security notions.

Florian Böhl, Dennis Hofheinz, Daniel Kraschewski
New Definitions and Separations for Circular Security

Traditional definitions of encryption security guarantee secrecy for any plaintext that can be computed by an outside adversary. In some settings, such as anonymous credential or disk encryption systems, this is not enough, because these applications encrypt messages that depend on the secret key. A natural question to ask is: do standard definitions capture these scenarios? One area of interest is n-circular security, where the ciphertexts $E(pk_1,sk_2), E(pk_2,sk_3), \dots, E(pk_{n-1},sk_n), E(pk_n, sk_1)$ must be indistinguishable from encryptions of zero. Acar et al. (Eurocrypt 2010) provided a CPA-secure public key cryptosystem that is not 2-circular secure due to a distinguishing attack.

In this work, we consider a natural relaxation of this definition. Informally, a cryptosystem is n-weak circular secure if an adversary given the cycle $E(pk_1,sk_2), E(pk_2,sk_3), \dots, E(pk_{n-1},sk_n), E(pk_n, sk_1)$ has no significant advantage in the regular security game (e.g., CPA or CCA), where ciphertexts of chosen messages must be distinguished from ciphertexts of zero. Since this definition is sufficient for some practical applications and the Acar et al. counterexample no longer applies, the hope is that it would be easier to realize, or perhaps even implied by standard definitions. We show that this is unfortunately not the case: even this weaker notion is not implied by standard definitions. Specifically, we show:

For symmetric encryption, under the minimal assumption that one-way functions exist, n-weak circular (CPA) security is not implied by CCA security, for any n. In fact, it is not even implied by authenticated encryption security, where ciphertext integrity is guaranteed.

For public-key encryption, under a number-theoretic assumption, 2-weak circular security is not implied by CCA security.

In both of these results, which also apply to the stronger circular security definition, we actually show for the first time an attack in which the adversary can recover the secret key of an otherwise-secure encryption scheme after an encrypted key cycle is published.

These negative results are an important step in answering deep questions about which attacks are prevented by commonly-used definitions and systems of encryption. They say to practitioners: if key cycles may arise in your system, then even if you use CCA-secure encryption, your system may break catastrophically; that is, a passive adversary might be able to recover your secret keys.

David Cash, Matthew Green, Susan Hohenberger
Correlated Product Security from Any One-Way Function

It is well-known that the k-wise product of one-way functions remains one-way, but may no longer be when the k inputs are correlated. At TCC 2009, Rosen and Segev introduced a new notion known as Correlated Product secure functions. These functions have the property that a k-wise product of them remains one-way even under correlated inputs. Rosen and Segev gave a construction of injective trapdoor functions which were correlated product secure from the existence of Lossy Trapdoor Functions (introduced by Peikert and Waters in STOC 2008).

In this work we continue the study of correlated product security, and find many differences depending on whether the functions have trapdoors.

The first main result of this work shows that a family of correlated product secure functions (without trapdoors) can be constructed from any one-way function. Because correlated product secure functions are trivially one-way, this shows an equivalence between the existence of these two cryptographic primitives.

In the second main result of this work, we consider a natural decisional variant of correlated product security. Roughly, a family of functions is Decisional Correlated Product (DCP) secure if $f_1(x_1),\dots,f_k(x_1)$ is indistinguishable from $f_1(x_1),\dots,f_k(x_k)$ when $x_1,\dots,x_k$ are chosen uniformly at random.

When considering DCP secure functions with trapdoors, we give a construction based on Lossy Trapdoor Functions, and show that any DCP secure function family with trapdoors satisfies the security requirements for Deterministic Encryption as defined by Bellare, Boldyreva and O’Neill in CRYPTO 2007. In fact, we also show that, definitionally, DCP secure functions with trapdoors are a strict subset of Deterministic Encryption functions, by showing an example of a Deterministic Encryption function which according to the definition is not a DCP secure function.

Brett Hemenway, Steve Lu, Rafail Ostrovsky
Relations between Constrained and Bounded Chosen Ciphertext Security for Key Encapsulation Mechanisms

In CRYPTO 2007, Hofheinz and Kiltz formalized a security notion for key encapsulation mechanisms (KEMs), called constrained chosen ciphertext (CCCA) security, which is strictly weaker than ordinary chosen ciphertext (CCA) security, and showed a new composition paradigm for CCA secure hybrid encryption. Thus, CCCA security of a KEM turned out to be quite useful. However, since the notion is relatively new and its definition is slightly complicated, relations among CCCA security and other security notions have not been clarified well. In this paper, in order to better understand CCCA security and the construction of CCCA secure KEMs, we study relations between CCCA and bounded CCA security, where the latter notion considers security against adversaries that make an a-priori bounded number of decapsulation queries, and is also strictly weaker than CCA security. Specifically, we show that in most cases there are separations between these notions, while there is some unexpected implication from (a slightly stronger version of) CCCA security to a weak form of 1-bounded CCA security. We also revisit the construction of a KEM from a hash proof system (HPS) with computational security properties, and show that the HPS-based KEM, which was previously shown CCCA secure, is actually 1-bounded CCA secure as well. This result, together with the above general implication, suggests that 1-bounded CCA security can essentially be seen as a “necessary” condition for a CCCA secure KEM.

Takahiro Matsuda, Goichiro Hanaoka, Kanta Matsuura

DL, DDH, and More Number Theory

Solving a Discrete Logarithm Problem with Auxiliary Input on a 160-Bit Elliptic Curve

A discrete logarithm problem with auxiliary input (DLPwAI) is the problem of finding α from G, αG, and α^d G in an additive cyclic group generated by an element G of prime order r, where d is a positive integer satisfying d | (r − 1). The infeasibility of this problem assures the security of some cryptographic schemes. In 2006, Cheon proposed a novel algorithm for solving DLPwAI (Cheon’s algorithm). This paper reports our experimental results on Cheon’s algorithm, obtained by implementing it with some speed-up techniques. In fact, we succeeded in solving a DLPwAI instance on a pairing-friendly elliptic curve of 160-bit order in 1314 core days. Implications of our experiments for cryptographic schemes are also discussed.
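For orientation (a standard statement of Cheon's bound, ours rather than quoted from this paper): given $G$, $\alpha G$ and $\alpha^d G$ with $d \mid (r-1)$, Cheon's algorithm recovers $\alpha$ in $O\left(\sqrt{(r-1)/d} + \sqrt{d}\right)$ group operations, compared with $O(\sqrt{r})$ for a generic discrete-logarithm algorithm; this gap is what brings a well-parameterized 160-bit instance within experimental reach.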

Yumi Sakemi, Goichiro Hanaoka, Tetsuya Izu, Masahiko Takenaka, Masaya Yasuda
Inferring Sequences Produced by Nonlinear Pseudorandom Number Generators Using Coppersmith’s Methods

Number-theoretic pseudorandom generators work by iterating an algebraic map F (public or private) over a residue ring $\mathbb{Z}_N$ on a secret random initial seed value $v_0 \in \mathbb{Z}_N$ to compute values $v_{n+1} = F(v_n) \bmod{N}$ for $n \in \mathbb{N}$. They output some consecutive bits of the state value $v_n$ at each iteration, and their efficiency and security are thus strongly related to the number of output bits. In 2005, Blackburn, Gomez-Perez, Gutierrez and Shparlinski proposed a deep analysis of the security of such generators. In this paper, we revisit the security of number-theoretic generators by proposing better attacks based on Coppersmith’s techniques for finding small roots of polynomial equations. Using intricate constructions, we are able to significantly improve the security bounds obtained by Blackburn et al.
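To make the setup concrete, here is a toy instance (ours; the map F(v) = v² + c, the modulus, and the output rule are purely illustrative) of the generator family described above:

import secrets

N = 2**64 - 59                 # illustrative public modulus
c = 12345                      # illustrative constant of the map F

def quadratic_generator(v0, steps, out_bits=32):
    # Iterate v_{n+1} = F(v_n) mod N with F(v) = v*v + c, outputting
    # the out_bits most significant bits of each state; attacks of the
    # kind studied here bound how many such bits can safely be exposed.
    v = v0
    for _ in range(steps):
        v = (v * v + c) % N
        yield v >> (N.bit_length() - out_bits)

seed = secrets.randbelow(N)    # secret seed v_0
print(list(quadratic_generator(seed, 5)))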

Aurélie Bauer, Damien Vergnaud, Jean-Christophe Zapalowicz
Extended-DDH and Lossy Trapdoor Functions

Lossy Trapdoor Functions (LTFs) were introduced by Peikert and Waters in STOC ’08 and since then have found many applications and have proven to be an extremely useful and versatile cryptographic primitive. Lossy trapdoor functions were used to build the first injective trapdoor functions based on DDH, the first IND-CCA cryptosystems based on lattice assumptions, and they are known to imply deterministic encryption, collision resistant hash-functions, oblivious transfer and a host of other important primitives. While LTFs can be instantiated under most known cryptographic hardness assumptions, until today no constructions existed based on generic cryptographic primitives. In this work, we show that any Homomorphic Smooth Hash Proof System, introduced by Cramer and Shoup in EUROCRYPT ’02, can be used to construct LTFs. In addition to providing a connection between two important cryptographic primitives, our construction implies the first construction of LTFs based on the QR assumption.

Smooth Hash Proof Systems (SHPs) can be seen as a generalization of the DDH assumption, yet can be built on other cryptographic assumptions, such as the DCR or QR assumptions. Yet, until today, a “translation” of results proven secure under DDH to results under DCR or QR has always been fraught with difficulties. Thus, as our second goal of this paper, we ask the following question: is it possible to streamline such translations from DDH to QR and other primitives? Our second result formally provides this connection. More specifically, we define an Extended Decisional Diffie Hellman (EDDH) assumption, which is a simple and natural generalization of DDH. We show that EDDH can be instantiated under both the DCR and QR assumptions. This gives a much simpler connection between the DDH and the DCR and QR assumptions and provides an easy way to translate proofs from DDH to DCR or QR. That is, the advantage of the EDDH assumption is that most schemes (including LTFs) proven secure under the DDH assumption can easily be instantiated under the DCR and QR assumptions with almost no change to their proofs of security.

Brett Hemenway, Rafail Ostrovsky
DDH-Like Assumptions Based on Extension Rings

We introduce and study a new type of DDH-like assumptions based on groups of prime order q. Whereas standard DDH is based on encoding elements of $\mathbb{F}_{q}$ “in the exponent” of elements in the group, we ask what happens if instead we put in the exponent elements of the extension ring $R_f= \mathbb{F}_{q}[X]/(f)$, where f is a degree-d polynomial. The decision problem that follows naturally reduces to the case where f is irreducible. This variant is called the d-DDH problem, where 1-DDH is standard DDH. We show in the generic group model that d-DDH is harder than DDH for d > 1 and that we obtain, in fact, an infinite hierarchy of progressively weaker assumptions whose complexities lie “between” DDH and CDH. This leads to a large number of new schemes because virtually all known DDH-based constructions can very easily be upgraded to be based on d-DDH. We use the same construction and security proof but get better security and, moreover, the amortized complexity (e.g., computation per encrypted bit) is the same as when using DDH. We also show that d-DDH, just like DDH, is easy in bilinear groups. We therefore suggest a different type of assumption, the d-vector DDH problems (d-VDDH), which are based on f(X) = X^d, but with a twist to avoid problems with reducible polynomials. We show in the generic group model that d-VDDH is hard in bilinear groups and that the problems become harder with increasing d. We show that hardness of d-VDDH implies CCA-secure encryption, efficient Naor-Reingold style pseudorandom functions, and auxiliary input secure encryption. This can be seen as an alternative to the known family of k-LIN assumptions.

Ronald Cramer, Ivan Damgård, Eike Kiltz, Sarah Zakarias, Angela Zottarel

Beyond Ordinary Signature Schemes

Security of Blind Signatures Revisited

We revisit the definition of unforgeability of blind signatures as proposed by Pointcheval and Stern (Journal of Cryptology 2000). Surprisingly, we show that this established definition falls short in two ways of what one would intuitively expect from a secure blind signature scheme: It is not excluded that an adversary submits the same message m twice for signing, and then produces a signature for m′ ≠ m. The reason is that the forger only succeeds if all messages are distinct. Moreover, it is not excluded that an adversary performs k signing queries and produces signatures on k + 1 messages as long as each of these signatures does not pass verification with probability 1.

Finally, we propose a new definition, honest-user unforgeability, that covers these attacks. We give a simple and efficient transformation that transforms any unforgeable blind signature scheme (with deterministic verification) into an honest-user unforgeable one.

Dominique Schröder, Dominique Unruh
Efficient Network Coding Signatures in the Standard Model

Network coding is a routing technique where each node may actively modify the received packets before transmitting them. While this departure from passive networks improves throughput and resilience to packet loss, it renders transmission susceptible to pollution attacks, where nodes can misbehave and change the transmitted messages in a malicious way. Nodes cannot use standard signature schemes to authenticate the modified packets: this would require knowledge of the original sender’s signing key. Network coding signature schemes offer a cryptographic solution to this problem. Very roughly, such signatures allow signing vector spaces (or rather bases of such spaces), and these signatures are homomorphic: given signatures on a set of vectors it is possible to create signatures for any linear combination of these vectors. Designing such schemes is a difficult task, and the few existing constructions either rely on random oracles or are rather inefficient. In this paper we introduce two new network coding signature schemes. Both of our schemes are provably secure in the standard model, rely on standard assumptions, and are in the same efficiency class as previous solutions based on random oracles.

Dario Catalano, Dario Fiore, Bogdan Warinschi
Improved Security for Linearly Homomorphic Signatures: A Generic Framework

We propose a general framework that converts (ordinary) signature schemes having certain properties into linearly homomorphic signature schemes, i.e., schemes that allow authentication of linear functions on signed data. The security of the homomorphic scheme follows from the same computational assumption as is used to prove security of the underlying signature scheme. We show that the following signature schemes have the required properties and thus give rise to secure homomorphic signatures in the standard model:

The scheme of Waters (Eurocrypt 2005), secure under the computational Diffie-Hellman assumption in bilinear groups.

The scheme of Boneh and Boyen (Eurocrypt 2004, J. Cryptology 2008), secure under the q-strong Diffie-Hellman assumption in bilinear groups.

The scheme of Gennaro, Halevi, and Rabin (Eurocrypt 1999), secure under the strong RSA assumption.

The scheme of Hohenberger and Waters (Crypto 2009), secure under the RSA assumption.

Our systems not only allow weaker security assumptions than were previously available for homomorphic signatures in the standard model, but are also secure in a model that allows a stronger adversary than in other proposed schemes.

Our framework also leads to efficient linearly homomorphic signatures that are secure against our stronger adversary under weak assumptions (CDH or RSA) in the random oracle model; all previous proofs of security in the random oracle model break down completely when faced with our stronger adversary.

David Mandell Freeman
On the Security of Dynamic Group Signatures: Preventing Signature Hijacking

We identify a potential weakness in the standard security model for dynamic group signatures which appears to have been overlooked previously. More specifically, we highlight that even if a scheme provably meets the security requirements of the model, a malicious group member can potentially claim ownership of a group signature produced by an honest group member by forging a proof of ownership. This property leads to a number of vulnerabilities in scenarios in which dynamic group signatures are likely to be used. We furthermore show that the currently most efficient dynamic group signature scheme does not provide protection against this type of malicious behavior.

To address this, we introduce the notion of

opening soundness

for group signatures which essentially requires that it is infeasible to produce a proof of ownership of a valid group signature for any user except the original signer. We then show a relatively simple modification of the scheme by Groth (ASIACRYPT 2007, full version) which allows us to prove opening soundness for the modified scheme without introducing any additional assumptions.

We believe that opening soundness is an important and natural security requirement for group signatures, and hope that future schemes will adopt this type of security.

Yusuke Sakai, Jacob C. N. Schuldt, Keita Emura, Goichiro Hanaoka, Kazuo Ohta
Backmatter
Metadata
Title
Public Key Cryptography – PKC 2012
Edited by
Marc Fischlin
Johannes Buchmann
Mark Manulis
Copyright year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-30057-8
Print ISBN
978-3-642-30056-1
DOI
https://doi.org/10.1007/978-3-642-30057-8
