
2014 | Book

Public-Key Cryptography – PKC 2014

17th International Conference on Practice and Theory in Public-Key Cryptography, Buenos Aires, Argentina, March 26-28, 2014. Proceedings


About this book

This book constitutes the refereed proceedings of the 17th International Conference on Practice and Theory in Public-Key Cryptography, PKC 2014, held in Buenos Aires, Argentina, in March 2014. The 38 papers presented were carefully reviewed and selected from 145 submissions. The papers are organized in topical sections on chosen ciphertext security, re-encryption, verifiable outsourcing, cryptanalysis, identity- and attribute-based encryption, enhanced encryption, signature schemes, related-key security, functional authentication, quantum impossibility, privacy, and protocols.

Table of Contents

Frontmatter

Chosen Ciphertext Security

Simple Chosen-Ciphertext Security from Low-Noise LPN

Recently, Döttling et al. (ASIACRYPT 2012) proposed the first chosen-ciphertext (IND-CCA) secure public-key encryption scheme from the learning parity with noise (LPN) assumption. In this work we give an alternative scheme which is conceptually simpler and more efficient. At the core of our construction is a trapdoor technique originally proposed for lattices by Micciancio and Peikert (EUROCRYPT 2012), which we adapt to the LPN setting. The main technical tool is a new double-trapdoor mechanism, together with a trapdoor switching lemma based on a computational variant of the leftover hash lemma.

Eike Kiltz, Daniel Masny, Krzysztof Pietrzak
Leakage-Flexible CCA-secure Public-Key Encryption: Simple Construction and Free of Pairing

In AsiaCrypt 2013, Qin and Liu proposed a new approach to CCA-security of Public-Key Encryption (PKE) in the presence of bounded key-leakage, from any universal hash proof system (due to Cramer and Shoup) and any one-time lossy filter (a simplified version of lossy algebraic filters, due to Hofheinz). They presented two instantiations under the DDH and DCR assumptions, which result in leakage rate (defined as the ratio of leakage amount to the secret-key length) of 1/2 − o(1). In this paper, we extend their work to broader assumptions and to flexible leakage rate, more specifically to leakage rate of 1 − o(1).

We introduce the Refined Subgroup Indistinguishability (RSI) assumption, which is a subclass of subgroup indistinguishability assumptions, including many standard number-theoretical assumptions, like the quadratic residuosity assumption, the decisional composite residuosity assumption and the subgroup decision assumption over a group of known order defined by Boneh et al.

We show that universal hash proof (UHP) system and one-time lossy filter (OT-LF) can be simply and efficiently constructed from the RSI assumption. Applying Qin and Liu’s paradigm gives simple and efficient PKE schemes under the RSI assumption.

With the RSI assumption over a specific group (free of pairing), public parameters of UHP and OT-LF can be chosen in a flexible way, resulting in a leakage-flexible CCA-secure PKE scheme. More specifically, we get the first CCA-secure PKE with leakage rate of 1 − o(1) without pairing.

Baodong Qin, Shengli Liu
A Black-Box Construction of a CCA2 Encryption Scheme from a Plaintext Aware (sPA1) Encryption Scheme

We present a construction of a CCA2-secure encryption scheme from a plaintext aware (sPA1), weakly simulatable public key encryption scheme. The notion of plaintext aware, weakly simulatable public key encryption has been considered previously by Myers, Sergi and shelat (SCN, 2012), and natural encryption schemes such as the Damgård Elgamal Scheme (Damgård, Crypto, 1991) and the Cramer-Shoup Lite Scheme (Cramer and Shoup, SIAM J. Comput., 2003) were shown to satisfy these properties.

Recently, Myers, Sergi and shelat (SCN, 2012) defined an extension of non-malleable CCA1 security, called cNM-CCA1, and showed how to construct a cNM-CCA1-secure encryption scheme from a plaintext aware and weakly simulatable public key encryption scheme. Our work extends and improves on this result by showing that a full CCA2-secure encryption scheme can be constructed from the same assumptions.

Dana Dachman-Soled
Chosen Ciphertext Security via UCE

Bellare, Hoang, and Keelveedhi (CRYPTO’13) introduced a security notion for a family of (hash) functions called universal computational extractor (UCE), and showed how it can be used to realize various kinds of cryptographic primitives in the standard model whose (efficient) constructions were only known in the random oracle model. Although the results of Bellare et al. have shown that UCEs are quite powerful and useful, the notion of UCE is new, and its potential power and limitation do not seem to have been clarified well. To further widen and deepen our understanding of UCE, in this paper we study the construction of chosen ciphertext secure (CCA secure) public key encryption (PKE), one of the most important primitives in the area of cryptography, for which the (in)applicability of UCEs was not covered by the work of Bellare et al.

We concretely consider the setting in which, other than a UCE, we only use chosen plaintext secure (CPA secure) PKE as an additional building block, and obtain several negative and positive results. As our negative results, we show difficulties of instantiating the random oracle in the Fujisaki-Okamoto (FO) construction (PKC’99) with a UCE, by exhibiting pairs of CPA secure PKE and a UCE for which the FO construction instantiated with these pairs becomes insecure (assuming that CPA secure PKE and a UCE exist at all). Then, as our main positive result, we show how to construct a CCA secure PKE scheme using only CPA secure PKE and a UCE as building blocks. Furthermore, we also show how to extend this result to a CCA secure deterministic PKE scheme for block sources (with some constraint on the running time of the sources). Our positive results employ the ideas and techniques from the Dolev-Dwork-Naor (DDN) construction (STOC’91), and for convenience we abstract and formalize the “core” structure of the DDN construction as a stand-alone primitive that we call puncturable tag-based encryption, which might be of independent interest.

Takahiro Matsuda, Goichiro Hanaoka

Re-encryption

Proxy Re-encryption from Lattices

We propose a new unidirectional proxy re-encryption scheme based on the hardness of the LWE problem. Our construction is collusion-safe and does not require any trusted authority for the re-encryption key generation. We extend a recent trapdoor definition for a lattice of Micciancio and Peikert. Our proxy re-encryption scheme is provably CCA-1 secure in the selective model under the LWE assumption.

Elena Kirshanova
Re-encryption, Functional Re-encryption, and Multi-hop Re-encryption: A Framework for Achieving Obfuscation-Based Security and Instantiations from Lattices

In this work we define multiple relaxations to the definition of correctness in secure obfuscation. While still remaining meaningful, these relaxations provide ways to obfuscate many primitives in a more direct and efficient way. In particular, we first show how to construct a secure obfuscator for the re-encryption primitive from the Decisional Learning with Errors (DLWE) assumption, without going through fully homomorphic encryption. This can be viewed as a meaningful way to trade correctness for efficiency. Next, we show how our tools can be used to construct secure obfuscators for the functional re-encryption and multi-hop unidirectional re-encryption primitives. In the former case, we improve upon the efficiency of the only previously known construction that satisfies the stronger notion of collusion-resistant obfuscation (due to Chandran et al., TCC 2012) and obtain a construction with input ciphertexts of constant length. In the latter case, we provide the first known obfuscation-based definition and construction; additionally, our scheme is the first scheme where the size of the ciphertexts does not grow with every hop.

Nishanth Chandran, Melissa Chase, Feng-Hao Liu, Ryo Nishimaki, Keita Xagawa

Verifiable Outsourcing

Verifiable Set Operations over Outsourced Databases

We study the problem of verifiable delegation of computation over outsourced data, whereby a powerful worker maintains a large data structure for a weak client in a verifiable way. Compared to the well-studied problem of verifiable computation, this setting imposes additional difficulties since the verifier also needs to check the consistency of updates succinctly and without maintaining large state. We present a scheme for verifiable evaluation of hierarchical set operations (unions, intersections and set-differences) applied to a collection of dynamically changing sets of elements from a given domain. The verification cost incurred is proportional only to the size of the final outcome set and to the size of the query, and is independent of the cardinalities of the involved sets. The cost of updates is optimal (involving O(1) modular operations per update). Our construction extends that of [Papamanthou et al., CRYPTO 2011] and relies on a modified version of the extractable collision-resistant hash function (ECRH) construction, introduced in [Bitansky et al., ITCS 2012], that can be used to succinctly hash univariate polynomials.

Ran Canetti, Omer Paneth, Dimitrios Papadopoulos, Nikos Triandopoulos
Verifiable Oblivious Storage

We formalize the notion of Verifiable Oblivious Storage (VOS), where a client outsources the storage of data to a server while ensuring data confidentiality, access pattern privacy, and integrity and freshness of data accesses. VOS generalizes the notion of Oblivious RAM (ORAM) in that it allows the server to perform computation, and also explicitly considers data integrity and freshness.

We show that allowing server-side computation enables us to construct asymptotically more efficient VOS schemes whose bandwidth overhead cannot be matched by any ORAM scheme, due to a known lower bound by Goldreich and Ostrovsky. Specifically, for large block sizes we can construct a VOS scheme with constant bandwidth per query; further, answering queries requires only poly-logarithmic server computation. We describe applications of VOS to Dynamic Proofs of Retrievability, and RAM-model secure multi-party computation.

Daniel Apon, Jonathan Katz, Elaine Shi, Aishwarya Thiruvengadam
Achieving Privacy in Verifiable Computation with Multiple Servers – Without FHE and without Pre-processing

Cloud services provide a powerful resource to which weak clients may outsource their computation. While tremendously useful, they come with their own security challenges. One of the fundamental issues in cloud computation is: how does a client efficiently verify the correctness of computation performed on an untrusted server? Furthermore, how can the client be assured that the server learns nothing about its private inputs? In recent years, a number of proposals have been made for constructing verifiable computation protocols. Unfortunately, solutions that guarantee privacy of inputs (in addition to the correctness of computation) rely on the use of fully homomorphic encryption (FHE). An unfortunate consequence of this dependence on FHE is that all hope of making verifiable computation implementable in practice hinges on the challenge of making FHE deployable in practice. This brings us to the following question: do we need fully homomorphic encryption to obtain input privacy in verifiable computation protocols?

Another drawback of existing protocols is that they require the client to run a pre-processing stage, in which the work done by the client is proportional to the function being outsourced and hence the outsourcing benefit is obtained only in an amortized sense. This brings us to our next question: can we build verifiable computation protocols that allow the client to efficiently outsource even a computation that it wishes to execute just once?

In this paper, we consider a model in which the client outsources his computation to multiple (say n ≥ 2) servers. In this model, we construct verifiable computation protocols that do not make use of FHE and that do not have a pre-processing stage. In the two-server setting, we present an extremely practical protocol based only on one-way functions. We also present a solution, based on the DDH assumption, for the multi-server model for any arbitrary n. All these protocols are secure as long as at least one server is honest. Finally, even in the n-server model, we present a solution based solely on one-way functions. This protocol tolerates up to a constant fraction of corrupted servers.

Prabhanjan Ananth, Nishanth Chandran, Vipul Goyal, Bhavana Kanukurthi, Rafail Ostrovsky
Efficient Delegation of Zero-Knowledge Proofs of Knowledge in a Pairing-Friendly Setting

Since their introduction in 1985 by Goldwasser, Micali and Rackoff, followed by Feige, Fiat and Shamir, zero-knowledge proofs have played a significant role in modern cryptography: they allow a party to convince another party of the validity of a statement (proof of membership) or of its knowledge of a secret (proof of knowledge). Cryptographers frequently use them as building blocks in complex protocols since they offer quite useful soundness features, which exclude cheating players. In most modern telecommunication services, the execution of these protocols involves a prover on a portable device with limited capacities, typically split into a distinct trusted part and a more powerful part. The former thus has to delegate some computations to the latter. However, since the latter is not fully trusted, it should not learn any secret information.

This paper focuses on proofs of knowledge of discrete logarithm relations sets (DLRS), and the delegation of some of the prover’s computations, without leaking any critical information to the delegatee. We achieve several efficiency improvements, ensuring perfect zero-knowledge against the verifier and partial zero-knowledge, still reasonable in many contexts, against the delegatee.

Sébastien Canard, David Pointcheval, Olivier Sanders

Cryptanalysis I

Rounding and Chaining LLL: Finding Faster Small Roots of Univariate Polynomial Congruences

In a seminal work at EUROCRYPT ’96, Coppersmith showed how to find all small roots of a univariate polynomial congruence in polynomial time: this has found many applications in public-key cryptanalysis and in a few security proofs. However, the running time of the algorithm is a high-degree polynomial, which limits experiments: the bottleneck is an LLL reduction of a high-dimensional matrix with extra-large coefficients. We present in this paper the first significant speedups over Coppersmith’s algorithm. The first speedup is based on a special property of the matrices used by Coppersmith’s algorithm, which allows us to provably speed up the LLL reduction by rounding, and which can also be used to improve the complexity analysis of Coppersmith’s original algorithm. The exact speedup depends on the LLL algorithm used: for instance, the speedup is asymptotically quadratic in the bit-size of the small-root bound if one uses the Nguyen-Stehlé $L^2$ algorithm. The second speedup is heuristic and applies whenever one wants to enlarge the root size of Coppersmith’s algorithm by exhaustive search. Instead of performing several LLL reductions independently, we exploit hidden relationships between these matrices so that the LLL reductions can be somewhat chained to decrease the global running time. When both speedups are combined, the new algorithm is in practice hundreds of times faster for typical parameters.
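For readers less familiar with the "small-root bound" mentioned above, a standard statement of Coppersmith's theorem (general background, not quoted from this paper): for a monic polynomial $f(x)$ of degree $\delta$ and a modulus $N$ of unknown factorization, one can find, in time polynomial in $\log N$ and $\delta$, all integers $x_0$ such that

\[ f(x_0) \equiv 0 \pmod{N} \quad\text{and}\quad |x_0| \le N^{1/\delta}. \]

Reaching roots beyond $N^{1/\delta}$ is typically done by splitting the larger range into sub-ranges and running the algorithm on each, which is the exhaustive-search setting that the chaining speedup targets.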

Jingguo Bi, Jean-Sébastien Coron, Jean-Charles Faugère, Phong Q. Nguyen, Guénaël Renault, Rina Zeitoun
Elliptic and Hyperelliptic Curves: A Practical Security Analysis

Motivated by the advantages of using elliptic curves for discrete logarithm-based public-key cryptography, there is an active research area investigating the potential of using hyperelliptic curves of genus 2. For both types of curves, the best known algorithms to solve the discrete logarithm problem are generic attacks such as Pollard rho, for which it is well-known that the algorithm can be sped up when the target curve comes equipped with an efficiently computable automorphism. In this paper we incorporate all of the known optimizations (including those relating to the automorphism group) in order to perform a systematic security assessment of two elliptic curves and two hyperelliptic curves of genus 2. We use our software framework to give concrete estimates on the number of core years required to solve the discrete logarithm problem on four curves that target the 128-bit security level: on the standardized NIST Curve P-256, on a popular curve from the Barreto-Naehrig family, and on their respective analogues in genus 2.

Joppe W. Bos, Craig Costello, Andrea Miele
Discrete Logarithm in GF(2^809) with FFS

The year 2013 has seen several major complexity advances for the discrete logarithm problem in multiplicative groups of small-characteristic finite fields. These outmatch, asymptotically, the Function Field Sieve (FFS) approach, which was so far the most efficient algorithm known for this task. Yet, on the practical side, it is not clear whether the new algorithms are uniformly better than FFS. This article presents the state of the art with regard to the FFS algorithm, and reports data from a record-sized discrete logarithm computation in a prime-degree extension field.

Razvan Barbulescu, Cyril Bouvier, Jérémie Detrey, Pierrick Gaudry, Hamza Jeljeli, Emmanuel Thomé, Marion Videau, Paul Zimmermann

Identity- and Attribute-Based Encryption

Identity-Based Lossy Trapdoor Functions: New Definitions, Hierarchical Extensions, and Implications

Lossy trapdoor functions, introduced by Peikert and Waters (STOC’08), have received a lot of attention in the last years, because of their wide range of applications. The notion has been recently extended to the identity-based setting by Bellare et al. (Eurocrypt’12). An identity-based trapdoor function (IB-TDF) satisfying the lossy property introduced by Bellare et al. can be used to construct other cryptographic primitives in the identity-based setting: encryption schemes with semantic security under chosen-plaintext attacks, deterministic encryption schemes, and hedged encryption schemes that maintain some security when messages are encrypted using randomness of poor quality. However, the constructed primitives can be proved secure only against selective adversaries who select the target identity upfront.

Our first contribution is an alternative definition for the lossiness of an identity-based trapdoor function. We prove that an IB-TDF satisfying the new property can be used to construct all the aforementioned primitives, in the identity-based setting, with security against adaptive adversaries. We further consider the new definition and its implications in the more general scenario of hierarchical identity-based cryptography, which has proved very useful both for practical applications and to establish theoretical relations with other cryptographic primitives (including encryption with chosen-ciphertext security or with forward-security).

As a second contribution, we describe a pairing-based hierarchical IB-TDF satisfying the new definition of lossiness against either selective or, for hierarchies of constant depth, adaptive adversaries. This is also the first example of hierarchical trapdoor functions based on traditional (i.e., non-lattice-related) number theoretic assumptions. As a direct consequence of our two contributions, we obtain a hierarchical identity-based (HIB) encryption scheme with chosen-plaintext security, a HIB deterministic encryption scheme and a HIB hedged encryption scheme, all of them with security against adaptive adversaries.

Alex Escala, Javier Herranz, Benoît Libert, Carla Ràfols
Bounded-Collusion Identity-Based Encryption from Semantically-Secure Public-Key Encryption: Generic Constructions with Short Ciphertexts

To circumvent the lack of generic constructions of identity-based encryption (IBE), Dodis et al. (EUROCRYPT ’02) introduced the notion of bounded-collusion IBE (BC-IBE), where attackers only learn secret keys of an a-priori bounded number t of identities. They provided a generic BC-IBE construction from any semantically-secure encryption scheme which, however, suffers from a ω(t) blow-up in ciphertext size. Goldwasser et al. (TCC 2012) recently presented a generic construction with no ciphertext-length blow-up. Their construction requires an underlying public-key scheme with a key homomorphism, as well as a hash-proof-style security definition that is strictly stronger than semantic security. This latter requirement in particular reduces the applicability of their construction to existing schemes.

In this paper, we present the first generic constructions of BC-IBE from semantically-secure encryption schemes with no ciphertext-length blow-up. Our constructions require different degrees of key-homomorphism and malleability properties that are usually easy to verify. We provide concrete instantiations based on the DDH, QR, NTRU, and LWE assumptions. For all of these assumptions, our schemes present the smallest BC-IBE ciphertext size known to date. Our NTRU-based construction is particularly interesting, due to the lack of NTRU-based IBE constructions as well as the fact that it supports fully-homomorphic evaluation.

Our results also yield new constructions of bounded CCA-secure cryptosystems.

Stefano Tessaro, David A. Wilson
A Framework and Compact Constructions for Non-monotonic Attribute-Based Encryption

In this paper, we propose new non-monotonic attribute-based encryption schemes with compact parameters. The first three schemes are key-policy attribute-based encryption (KP-ABE) schemes and the fourth is a ciphertext-policy attribute-based encryption (CP-ABE) scheme.

Our first scheme achieves the shortest ciphertext overhead in the literature. Compared to the scheme by Attrapadung et al. (PKC 2011), which is the best scheme in terms of ciphertext overhead, our scheme shortens the ciphertext overhead by 33%. The scheme also reduces the size of the master public key to about half.

Our second scheme is proven secure under the decisional bilinear Diffie-Hellman (DBDH) assumption, which is one of the most standard assumptions in bilinear groups. Compared to the non-monotonic KP-ABE scheme from the same assumption by Ostrovsky et al. (ACM-CCS’07), our scheme reduces the size of the master public key and the ciphertext to about half.

Our third scheme is the first non-monotonic KP-ABE scheme that can deal with attribute sets and access policies of unbounded size. That is, there is no restriction on the size of attribute sets or on the number of allowed repetitions of the same attribute in an access policy. The master public key of our scheme consists of only a constant number of group elements.

Our fourth scheme is the first non-monotonic CP-ABE scheme that can deal with attribute sets and access policies of unbounded size. The master public key of the scheme consists of only a constant number of group elements.

We construct our KP-ABE schemes in a modular manner. We first introduce a special type of predicate encryption that we call two-mode identity-based broadcast encryption (TIBBE). Then, we show that any TIBBE scheme satisfying a certain condition can be generically converted into a non-monotonic KP-ABE scheme. Finally, we construct efficient TIBBE schemes and apply this conversion to obtain the above new non-monotonic KP-ABE schemes.

Shota Yamada, Nuttapong Attrapadung, Goichiro Hanaoka, Noboru Kunihiro
Online/Offline Attribute-Based Encryption

Attribute-based encryption (ABE) is a type of public key encryption that allows users to encrypt and decrypt messages based on user attributes. For instance, one can encrypt a message to any user satisfying the boolean formula (“crypto conference attendee” AND “PhD student”) OR “IACR member”. One drawback is that encryption and key generation computational costs scale with the complexity of the access policy or number of attributes. In practice, this makes encryption and user key generation a possible bottleneck for some applications.

To address this problem, we develop new techniques for ABE that split the computation for these algorithms into two phases: a preparation phase that does the vast majority of the work to encrypt a message or create a secret key before it knows the message or the attribute list/access control policy that will be used (or even the size of the list or policy). A second phase can then rapidly assemble an ABE ciphertext or key when the specifics become known. This concept is sometimes called “online/offline” encryption when only the message is unknown during the preparation phase; we note that the addition of unknown attribute lists and access policies makes ABE significantly more challenging.

One motivating application for this technology is mobile devices: the preparation work can be performed while the phone is plugged into a power source, then it can later rapidly perform ABE operations on the move without significantly draining the battery.

Susan Hohenberger, Brent Waters

Enhanced Encryption

Scale-Invariant Fully Homomorphic Encryption over the Integers

At Crypto 2012, Brakerski constructed a scale-invariant fully homomorphic encryption scheme based on the LWE problem, in which the same modulus is used throughout the evaluation process, instead of a ladder of moduli when doing “modulus switching”. In this paper we describe a variant of the van Dijk et al. FHE scheme over the integers with the same scale-invariant property. Our scheme has a single secret modulus whose size is linear in the multiplicative depth of the circuit to be homomorphically evaluated, instead of exponential; we therefore construct a leveled fully homomorphic encryption scheme. This scheme can be transformed into a pure fully homomorphic encryption scheme using bootstrapping, and its security is still based on the Approximate-GCD problem.

We also describe an implementation of the homomorphic evaluation of the full AES encryption circuit, and obtain significantly improved performance compared to previous implementations: about 23 seconds (resp. 3 minutes) per AES block at the 72-bit (resp. 80-bit) security level on a mid-range workstation.

Finally, we prove the equivalence between the (error-free) decisional Approximate-GCD problem introduced by Cheon et al. (Eurocrypt 2013) and the classical computational Approximate-GCD problem. This equivalence allows one to get rid of the additional noise in all the integer-based FHE schemes described so far, and therefore to simplify their security proofs.

Jean-Sébastien Coron, Tancrède Lepoint, Mehdi Tibouchi
Enhanced Chosen-Ciphertext Security and Applications

We introduce and study a new notion of enhanced chosen-ciphertext security (ECCA) for public-key encryption. Loosely speaking, in the ECCA security experiment, the decryption oracle provided to the adversary is augmented to return not only the output of the decryption algorithm on a queried ciphertext but also that of a randomness-recovery algorithm associated to the scheme. Our results mainly concern the case where the randomness-recovery algorithm is efficient.

We provide constructions of ECCA-secure encryption from adaptive trapdoor functions as defined by Kiltz et al. (EUROCRYPT 2010), resulting in ECCA encryption from standard number-theoretic assumptions. We then give two applications of ECCA-secure encryption: (1) We use it as a unifying concept in showing equivalence of adaptive trapdoor functions and tag-based adaptive trapdoor functions, resolving an open question of Kiltz et al. (2) We show that ECCA-secure encryption can be used to securely realize an approach to public-key encryption with non-interactive opening (PKENO) originally suggested by Damgård and Thorbek (EUROCRYPT 2007), resulting in new and practical PKENO schemes quite different from those in prior work.

Our results demonstrate that ECCA security is of both practical and theoretical interest.

Dana Dachman-Soled, Georg Fuchsbauer, Payman Mohassel, Adam O’Neill

Signature Schemes

Lattice-Based Group Signature Scheme with Verifier-Local Revocation

Support of membership revocation is a desirable functionality for any group signature scheme. Among the known revocation approaches, verifier-local revocation (VLR) seems to be the most flexible one, because it only requires the verifiers to possess some up-to-date revocation information, but not the signers. All of the contemporary VLR group signatures operate in the bilinear map setting, and all of them will be insecure once quantum computers become a reality. In this work, we introduce the first lattice-based VLR group signature, and thus, the first such scheme that is believed to be quantum-resistant. In comparison with existing lattice-based group signatures, our scheme has several noticeable advantages: support of membership revocation, logarithmic-size signatures, and weaker security assumption. In the random oracle model, our scheme is proved to be secure based on the hardness of the $\mathsf{SIVP}_{\widetilde{\mathcal{O}}(n^{1.5})}$ problem in general lattices, an assumption that is as weak as those of state-of-the-art lattice-based standard signatures. Moreover, our construction works without relying on encryption schemes, which is an intriguing feature for group signatures.

Adeline Langlois, San Ling, Khoa Nguyen, Huaxiong Wang
Leakage-Resilient Signatures with Graceful Degradation

We investigate new models and constructions which allow leakage-resilient signatures secure against existential forgeries, where the signature is much shorter than the leakage bound. Current models of leakage-resilient signatures against existential forgeries demand that the adversary cannot produce a new valid message/signature pair (m, σ) even after receiving some λ bits of leakage on the signing key. If |σ| ≤ λ, then the adversary can just choose to leak a valid signature σ, and hence signatures must be larger than the allowed leakage, which is impractical as the goal often is to have large signing keys to allow a lot of leakage.

We propose a new notion of leakage-resilient signatures against existential forgeries where we demand that the adversary cannot produce $n = \lfloor \lambda / \vert \sigma \vert \rfloor + 1$ distinct valid message/signature pairs $(m_1, \sigma_1), \ldots, (m_n, \sigma_n)$ after receiving λ bits of leakage. If λ = 0, this is the usual notion of existential unforgeability. If 1 < λ < |σ|, this is essentially the usual notion of existential unforgeability in the presence of leakage. In addition, for λ ≥ |σ| our new notion still guarantees the best possible, namely that the adversary cannot produce more forgeries than he could have leaked, hence graceful degradation.
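As a worked instance of this bound (with illustrative numbers, not parameters from the paper): for signatures of length $|\sigma| = 256$ bits and a leakage bound of $\lambda = 1000$ bits,

\[ n = \left\lfloor \frac{\lambda}{|\sigma|} \right\rfloor + 1 = \left\lfloor \frac{1000}{256} \right\rfloor + 1 = 3 + 1 = 4, \]

so an adversary who could have leaked at most three complete signatures must still be unable to produce four distinct valid message/signature pairs.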

Besides the game-based notion hinted above, we also consider a variant which is more simulation-based, in that it asks that from the leakage a simulator can “extract” a set of n − 1 messages (to be thought of as the messages corresponding to the leaked signatures), and no adversary can produce forgeries not in this small set. The game-based notion is easier to prove for a concrete instantiation of a signature scheme. The simulation-based notion is easier to use, when leakage-resilient signatures are used as components in larger protocols.

We prove that the two notions are equivalent and present a generic construction of signature schemes meeting our new notion, as well as a concrete instantiation under fairly standard assumptions. We further give an application to leakage-resilient identification.

Jesper Buus Nielsen, Daniele Venturi, Angela Zottarel
On the Lossiness of the Rabin Trapdoor Function

Lossy trapdoor functions, introduced by Peikert and Waters (STOC ’08), are functions that can be generated in two indistinguishable ways: either the function is injective, and there is a trapdoor to invert it, or the function is lossy, meaning that the size of its range is strictly smaller than the size of its domain. Kakvi and Kiltz (EUROCRYPT 2012) proved that the Full Domain Hash signature scheme based on a lossy trapdoor function has a

tight

security reduction from the lossiness of the trapdoor function. Since Kiltz, O’Neill, and Smith (CRYPTO 2010) showed that the RSA trapdoor function is lossy under the

$\varPhi$

-Hiding assumption of Cachin, Micali, and Stadler (EUROCRYPT ’99), this implies that the RSA Full Domain Hash signature scheme has a

tight

security reduction from the Φ-Hiding assumption (for public exponents

e

 < 

N

1/4

). In this work, we consider the Rabin trapdoor function,

i.e.

modular squaring over ℤ*

N

. We show that when adequately restricting its domain (either to the set

${\mathbb{QR}}_N$

of quadratic residues, or to (

${\mathbb J}_N)^+$

, the set of positive integers 1 ≤ 

x

 ≤ (

N

 − 1)/2 with Jacobi symbol +1) the Rabin trapdoor function is lossy, the injective mode corresponding to Blum integers

N

 = 

pq

with

$p,q\equiv 3\bmod 4$

, and the lossy mode corresponding to what we call pseudo-Blum integers

N

 = 

pq

with

$p,q\equiv 1 \bmod 4$

. This lossiness result holds under a natural extension of the Φ-Hiding assumption to the case

e

 = 2 that we call the 2-

$\varPhi/4$

-Hiding assumption. We then use this result to prove that deterministic variants of Rabin-Williams Full Domain Hash signatures have a tight reduction from the 2-

$\varPhi$

/4-Hiding assumption. We also show that these schemes are unlikely to have a tight reduction from the factorization problem by extending a previous “meta-reduction” result by Coron (EUROCRYPT 2002), later corrected by Kakvi and Kiltz (EUROCRYPT 2012). These two results therefore answer one of the main questions left open by Bernstein (EUROCRYPT 2008) in his work on Rabin-Williams signatures.
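As a quick illustration of the injective-versus-lossy dichotomy described above, the following sketch (toy parameters chosen for illustration, not taken from the paper) enumerates the quadratic residues for a tiny Blum integer and a tiny pseudo-Blum integer and compares the image size of the squaring map restricted to them:

    # Illustrative check: squaring restricted to the quadratic residues QR_N
    # is injective for a Blum integer (p, q = 3 mod 4) and many-to-one, hence
    # lossy, for a "pseudo-Blum" integer (p, q = 1 mod 4).
    from math import gcd

    def quadratic_residues(n):
        """Set QR_n of quadratic residues modulo n (among the units)."""
        return {x * x % n for x in range(1, n) if gcd(x, n) == 1}

    def image_of_squaring(domain, n):
        """Image of the map x -> x^2 mod n restricted to `domain`."""
        return {x * x % n for x in domain}

    for label, (p, q) in [("Blum (3 mod 4)", (7, 11)),
                          ("pseudo-Blum (1 mod 4)", (13, 17))]:
        n = p * q
        qr = quadratic_residues(n)
        img = image_of_squaring(qr, n)
        print(f"{label}: N = {n}, |QR_N| = {len(qr)}, |image| = {len(img)}")
        # Blum integer:        |image| == |QR_N|   (injective mode)
        # pseudo-Blum integer: |image| <  |QR_N|   (lossy mode)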

Yannick Seurin

Cryptanalysis II

Solving Random Subset Sum Problem by l_p-norm SVP Oracle

It is well known that almost all random subset sum instances with density less than 0.6463... can be solved with an $l_2$-norm SVP oracle by Lagarias and Odlyzko. Later, Coster et al. improved the bound to 0.9408... by using a different lattice. In this paper, we generalize this classical result to the $l_p$-norm. More precisely, we show that for $p \in \mathbb{Z}^+$, an $l_p$-norm SVP oracle can be used to solve almost all random subset sum instances with density bounded by $\delta_p$, where $\delta_1 = 0.5761$ and $\delta_p = 1/\left(\frac{1}{2^p}\log_2(2^{p+1}-2)+\log_2\left(1+\frac{1}{(2^p-1)\left(1-\left(\frac{1}{2^{p+1}-2}\right)^{2^p-1}\right)}\right)\right)$ for $p \geq 3$ (asymptotically, $\delta_p \approx 2^p/(p+2)$). Since $\delta_p$ goes increasingly to infinity when p tends to infinity, it can be concluded that an $l_p$-norm SVP oracle with bigger p can solve more subset sum instances. An interesting phenomenon is that an $l_p$-norm SVP oracle with $p \geq 3$ can help solve almost all random subset sum instances with density one, which are thought to be the most difficult instances.
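As a small numeric check of the density bound quoted above (using the formula as reconstructed here, so the printed values are indicative), the following snippet evaluates $\delta_p$ for a few values of p and compares it with the asymptotic estimate $2^p/(p+2)$:

    # Evaluate the density bound delta_p for p >= 3 and compare with the
    # asymptotic estimate 2^p / (p + 2) quoted in the abstract.
    from math import log2

    def delta(p: int) -> float:
        t1 = log2(2 ** (p + 1) - 2) / 2 ** p
        t2 = log2(1 + 1 / ((2 ** p - 1) *
                           (1 - (1 / (2 ** (p + 1) - 2)) ** (2 ** p - 1))))
        return 1 / (t1 + t2)

    for p in range(3, 8):
        print(p, round(delta(p), 4), round(2 ** p / (p + 2), 4))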

Gengran Hu, Yanbin Pan, Feng Zhang
Parallel Gauss Sieve Algorithm: Solving the SVP Challenge over a 128-Dimensional Ideal Lattice

In this paper, we report that we have solved the SVP Challenge over a 128-dimensional lattice in the Ideal Lattice Challenge from TU Darmstadt, which is currently the highest dimension in the challenge that has ever been solved. The security of lattice-based cryptography is based on the hardness of solving the shortest vector problem (SVP) in lattices. In 2010, Micciancio and Voulgaris proposed the Gauss Sieve algorithm for heuristically solving the SVP using a list L of Gauss-reduced vectors. Milde and Schneider proposed a parallel implementation method for the Gauss Sieve algorithm. However, in their implementation the efficiency decreased when more than 10 threads were used, due to the large number of non-Gauss-reduced vectors appearing in the distributed list of each thread. In this paper, we propose a more practical parallelized Gauss Sieve algorithm. Our algorithm deploys an additional Gauss-reduced list V of sample vectors assigned to each thread, and all vectors in list L remain Gauss-reduced by mutually reducing them using all sample vectors in V. Therefore, our algorithm allows the Gauss Sieve algorithm to run for large dimensions with a small communication overhead. Finally, we succeeded in solving the SVP Challenge over a 128-dimensional ideal lattice generated by the cyclotomic polynomial x^128 + 1 using about 30,000 CPU hours.

Tsukasa Ishiguro, Shinsaku Kiyomoto, Yutaka Miyake, Tsuyoshi Takagi
Lazy Modulus Switching for the BKW Algorithm on LWE

Some recent constructions based on LWE do not sample the secret uniformly at random but rather from some distribution which produces small entries. The most prominent of these is the binary-LWE problem where the secret vector is sampled from $\{0,1\}^*$ or $\{-1,0,1\}^*$. We present a variant of the BKW algorithm for binary-LWE and other small secret variants and show that this variant reduces the complexity for solving binary-LWE. We also give estimates for the cost of solving binary-LWE instances in this setting and demonstrate the advantage of this BKW variant over standard BKW and lattice reduction techniques applied to the SIS problem. Our variant can be seen as a combination of the BKW algorithm with a lazy variant of modulus switching which might be of independent interest.
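For context, a standard (non-lazy) modulus-switching step, stated here as general background rather than as this paper's algorithm: an LWE sample modulo q is scaled and rounded to a smaller modulus p,

\[ \bigl(\mathbf{a},\; b = \langle \mathbf{a}, \mathbf{s}\rangle + e \bmod q\bigr) \;\longmapsto\; \Bigl(\Bigl\lfloor \tfrac{p}{q}\,\mathbf{a} \Bigr\rceil \bmod p,\; \Bigl\lfloor \tfrac{p}{q}\, b \Bigr\rceil \bmod p\Bigr), \]

which preserves the LWE structure up to an additional rounding error whose size grows with $\|\mathbf{s}\|$. This is why small-secret variants such as binary-LWE tolerate aggressive modulus reduction, and why combining BKW with a (lazy) modulus-switching step is attractive.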

Martin R. Albrecht, Jean-Charles Faugère, Robert Fitzpatrick, Ludovic Perret
Practical Cryptanalysis of a Public-Key Encryption Scheme Based on New Multivariate Quadratic Assumptions

In this paper, we investigate the security of a public-key encryption scheme introduced by Huang, Liu and Yang (HLY) at PKC’12. This new scheme can be provably reduced to the hardness of solving a set of quadratic equations whose coefficients of highest degree are chosen according to a discrete Gaussian distribution, the other terms being chosen uniformly at random. Such a problem is a variant of the classical problem of solving a system of non-linear equations (PoSSo), which is known to be hard for random systems. The main hypothesis of Huang, Liu and Yang is that their variant is not easier than solving PoSSo for random instances. In this paper, we disprove this hypothesis. To this end, we exploit the fact that the new problem proposed by Huang, Liu and Yang reduces to an easy instance of the Learning With Errors (LWE) problem. The main contribution of this paper is to show that security and efficiency are essentially incompatible for the HLY proposal. That is, one cannot find parameters which yield a secure and a practical scheme. For instance, we estimate that a public key of at least 1.03 GB is required to achieve 80-bit security against the simplest of our attacks. As a proof of concept, we present 3 practical attacks against all the parameters proposed by Huang, Liu and Yang. With the most efficient attack, we have been able to recover the private key in roughly 5 minutes for the first challenge (i.e. Case 1) proposed by HLY and in less than 30 minutes for the second challenge (i.e. Case 2).

Martin R. Albrecht, Jean-Charles Faugère, Robert Fitzpatrick, Ludovic Perret, Yosuke Todo, Keita Xagawa

Related-Key Security

Related Randomness Attacks for Public Key Encryption

Several recent and high-profile incidents give cause to believe that randomness failures of various kinds are endemic in deployed cryptographic systems. In the face of this, it behoves cryptographic researchers to develop methods to immunise – to the extent that it is possible – cryptographic schemes against such failures. This paper considers the practically-motivated situation where an adversary is able to force a public key encryption scheme to reuse random values, and functions of those values, in encryption computations involving adversarially chosen public keys and messages. It presents a security model appropriate to this situation, along with variants of this model. It also provides necessary conditions on the set of functions used in order to attain this security notion, and demonstrates that these conditions are also sufficient in the Random Oracle Model. Further standard model constructions achieving weaker security notions are also given, with these constructions having interesting connections to other primitives including: pseudo-random functions that are secure in the related-key attack setting; Correlated Input Secure hash functions; and public key encryption schemes that are secure in the auxiliary input setting (this being a special type of leakage resilience).

Kenneth G. Paterson, Jacob C. N. Schuldt, Dale L. Sibborn
Encryption Schemes Secure under Related-Key and Key-Dependent Message Attacks

We construct secret-key encryption (SKE) schemes that are secure against related-key attacks and in the presence of key-dependent messages (RKA-KDM secure). We emphasize that RKA-KDM security is not merely the conjunction of individual security properties, but covers attacks in which ciphertexts of key-dependent messages under related keys are available. Besides being interesting in their own right, RKA-KDM secure schemes allow one to garble circuits with XORs very efficiently (Applebaum, TCC 2013). Until now, the only known RKA-KDM secure SKE scheme (due to Applebaum) is based on the LPN assumption. Our schemes are based on various other computational assumptions, namely DDH, LWE, QR, and DCR.

We abstract from Applebaum’s construction and proof, and formalize three generic technical properties that imply RKA-KDM security: one property is IND-CPA security, and the other two are the existence of suitable oracles that produce ciphertexts under related keys, resp. of key-dependent messages. We then give simple SKE schemes that achieve these properties. Our constructions are variants of known KDM-secure public-key encryption schemes. To additionally achieve RKA security, we isolate suitable homomorphic properties of the underlying schemes in order to simulate ciphertexts under related keys in the security proof. RKA-KDM security for our schemes holds w.r.t. affine functions (over the respective mathematical domain).

From a conceptual point of view, our work provides a generic and extensible way to construct encryption schemes with multiple special security properties.

Florian Böhl, Gareth T. Davies, Dennis Hofheinz

Functional Authentication

Functional Signatures and Pseudorandom Functions

We introduce two new cryptographic primitives: functional digital signatures and functional pseudorandom functions.

In a functional signature scheme, in addition to a master signing key that can be used to sign any message, there are signing keys for a function f, which allow one to sign any message in the range of f. As a special case, this implies the ability to generate keys for predicates P, which allow one to sign any message m for which P(m) = 1.

We show applications of functional signatures to constructing succinct non-interactive arguments and delegation schemes. We give several general constructions for this primitive based on different computational hardness assumptions, and describe the trade-offs between them in terms of the assumptions they require and the size of the signatures.

In a functional pseudorandom function, in addition to a master secret key that can be used to evaluate the pseudorandom function F on any point in the domain, there are additional secret keys for a function f, which allow one to evaluate F on any y for which there exists an x such that f(x) = y. As a special case, this implies pseudorandom functions with selective access, where one can delegate the ability to evaluate the pseudorandom function on inputs y for which a predicate P(y) = 1 holds. We define and provide a sample construction of a functional pseudorandom function family for prefix-fixing functions. This construction yields, in particular, punctured pseudorandom functions, which have proven an invaluable tool in recent advances in obfuscation (Sahai and Waters ePrint 2013).
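To make the prefix-fixing case concrete, here is a minimal GGM-style sketch of a prefix-constrained PRF (a common way to realize this functionality; it is not necessarily the paper's exact construction, and SHA-256 stands in for a length-doubling PRG purely for illustration):

    import hashlib

    def prg_half(seed: bytes, bit: int) -> bytes:
        """Toy stand-in for one half of a length-doubling PRG (NOT a real PRG)."""
        return hashlib.sha256(seed + bytes([bit])).digest()

    def prf(key: bytes, x: str) -> bytes:
        """GGM-style PRF: walk the binary tree along the bits of x."""
        seed = key
        for b in x:
            seed = prg_half(seed, int(b))
        return seed

    def constrain(key: bytes, prefix: str) -> bytes:
        """Key for the prefix-fixing function: the subtree seed at `prefix`."""
        return prf(key, prefix)

    def eval_constrained(ckey: bytes, prefix: str, x: str) -> bytes:
        """Evaluate F(k, x) for any x that starts with `prefix`."""
        assert x.startswith(prefix)
        seed = ckey
        for b in x[len(prefix):]:
            seed = prg_half(seed, int(b))
        return seed

    master = b"\x00" * 32
    ck = constrain(master, "101")          # key for the prefix 101
    assert eval_constrained(ck, "101", "101001") == prf(master, "101001")

A punctured key for a point can be obtained from the same tree by releasing the sibling seeds along the path to that point, which is why prefix-constrained PRFs immediately give punctured PRFs.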

Elette Boyle, Shafi Goldwasser, Ioana Ivan
Policy-Based Signatures

We introduce policy-based signatures (PBS), where a signer can only sign messages conforming to some authority-specified policy. The main requirements are unforgeability and privacy, the latter meaning that signatures do not reveal the policy. PBS offers value along two fronts: (1) On the practical side, they allow a corporation to control what messages its employees can sign under the corporate key. (2) On the theoretical side, they unify existing work, capturing other forms of signatures as special cases or allowing them to be easily built. Our work focuses on definitions of PBS, proofs that this challenging primitive is realizable for arbitrary policies, efficient constructions for specific policies, and a few representative applications.

Mihir Bellare, Georg Fuchsbauer
Generalizing Homomorphic MACs for Arithmetic Circuits

Homomorphic MACs, introduced by Gennaro and Wichs in 2013, allow anyone to validate computations on authenticated data without knowledge of the secret key. Moreover, the secret-key owner can verify the validity of the computation without needing to know the original (authenticated) inputs. Beyond security, homomorphic MACs are required to produce short tags (succinctness) and to support composability (i.e., outputs of authenticated computations should be re-usable as inputs for new computations).

At Eurocrypt 2013, Catalano and Fiore proposed two realizations of homomorphic MACs that support a restricted class of computations (arithmetic circuits of polynomial degree), are practically efficient, but fail to achieve both succinctness and composability at the same time.

In this paper, we generalize the work of Catalano and Fiore in several ways. First, we abstract away their results using the notion of encodings with limited malleability, thus yielding new schemes based on different algebraic settings. Next, we generalize their constructions to work with graded encodings, and more abstractly with k-linear groups. The main advantage of this latter approach is that it allows for homomorphic MACs which are (somewhat) composable while retaining succinctness. Interestingly, our construction uses graded encodings in a generic way. Thus, all its limitations (limited composability and non-constant size of the tags) solely depend on the fact that currently known multilinear maps share similar constraints. This means, for instance, that our scheme would support arbitrary circuits (polynomial depth) if we had compact multilinear maps with an exponential number of levels.

Dario Catalano, Dario Fiore, Rosario Gennaro, Luca Nizzardo

Quantum Impossibility

General Impossibility of Group Homomorphic Encryption in the Quantum World

Group homomorphic encryption represents one of the most important building blocks in modern cryptography. It forms the basis of widely-used, more sophisticated primitives, such as CCA2-secure encryption or secure multiparty computation. Unfortunately, recent advances in quantum computation show that many of the existing schemes completely break down once quantum computers reach maturity (mainly due to Shor’s algorithm). This leads to the challenge of constructing quantum-resistant group homomorphic cryptosystems.

In this work, we prove the general impossibility of (abelian) group homomorphic encryption in the presence of quantum adversaries, when assuming the IND-CPA security notion as the minimal security requirement. To this end, we prove a new result on the probability of sampling generating sets of finite (sub-)groups if sampling is done with respect to an arbitrary, unknown distribution. Finally, we provide a sufficient condition on homomorphic encryption schemes for our quantum attack to work and discuss its satisfiability in non-group homomorphic cases. The impact of our results on recent fully homomorphic encryption schemes poses itself as an open question.

Frederik Armknecht, Tommaso Gagliardoni, Stefan Katzenbeisser, Andreas Peter

Privacy

On Minimal Assumptions for Sender-Deniable Public Key Encryption

The primitive of deniable encryption was introduced by Canetti et al. (CRYPTO, 1997). Deniable encryption is an encryption scheme with the added feature that after transmitting a message m, both sender and receiver may produce random coins showing that the transmitted ciphertext was an encryption of any message m′ in the message space. Deniable encryption is a key tool for constructing incoercible protocols, since it allows a party to send one message and later provide apparent evidence to a coercer that a different message was sent. In addition, deniable encryption may be used to obtain adaptively-secure multiparty computation (MPC) protocols and is secure under selective-opening attacks. Different flavors such as sender-deniable and receiver-deniable encryption, where only the sender or receiver produce fake random coins, have been considered.

Recently, over 15 years after the primitive was first introduced, Sahai and Waters (IACR Cryptology ePrint Archive, 2013) gave the first construction of sender-deniable encryption schemes with super-polynomial security, where an adversary has negligible advantage in distinguishing real and fake openings. Their construction is based on the construction of an indistinguishability obfuscator for general programs recently introduced in a breakthrough result of Garg et al. (FOCS, 2013). Although feasibility has now been demonstrated, the question of determining the minimal assumptions necessary for sender-deniable encryption with super-polynomial security remains open.

The primitive of simulatable public key encryption (PKE), introduced by Damgård and Nielsen (CRYPTO, 2000), is a public key encryption scheme with additional properties that allow oblivious sampling of public keys and ciphertexts. It is one of the low-level primitives used to construct adaptively-secure MPC protocols and was used by O’Neill et al. in their construction of bi-deniable encryption in the multi-distributional model (CRYPTO, 2011). Moreover, the original construction of sender-deniable encryption with polynomial security given by Canetti et al. can be instantiated with simulatable PKE. Thus, a natural question to ask is whether it is possible to construct sender-deniable encryption with super-polynomial security from simulatable PKE.

In this work, we investigate the possibility of constructing sender-deniable public key encryption from simulatable PKE in a black-box manner. We show that there is no black-box construction of sender-deniable public key encryption with super-polynomial security from simulatable PKE. This indicates that improving on the original construction of Canetti et al. requires the use of non-black-box techniques, stronger assumptions, or interaction, thus giving some evidence that strong assumptions such as those used by Sahai and Waters are necessary.

Dana Dachman-Soled
Traceable Group Encryption

Group encryption (GE) is the encryption analogue of group signatures. It allows a sender to verifiably encrypt a message for some certified but anonymous member of a group. The sender is further able to convince a verifier that the ciphertext is a well-formed encryption under some group member’s public key. As in group signatures, an opening authority is empowered with the capability of identifying the receiver if the need arises. One application of such a scheme is a secure repository at an unknown but authorized cloud server, where the archive is made accessible by a judge's order in the case of misbehavior, like a server hosting illegal transaction records (this is done in order to balance individual rights and society’s safety). In this work we describe a traceable GE system, a group encryption scheme with refined tracing capabilities akin to those of the primitive of “traceable signatures” (thus balancing privacy vs. safety better). Our primitive enjoys the properties of group encryption, and, in addition, it allows the opening authority to reveal a user-specific trapdoor which makes it possible to publicly trace all the ciphertexts encrypted for that user without harming the anonymity of other ciphertexts. In addition, group members are able to non-interactively prove whether specific ciphertexts are intended for them or not. This work provides rigorous definitions, concrete constructions in the standard model, and security proofs.

Benoît Libert, Moti Yung, Marc Joye, Thomas Peters
Practical Covert Authentication

Von Ahn, Hopper, and Langford [vAHL05] introduced the notion of two-party steganographic, a.k.a. covert, computation, which assures that neither party can distinguish its counterparty from a random noise generator, except for what is revealed by the final output of the securely computed function. The flagship motivation for covert computation is covert authentication, where two parties want to authenticate each other, e.g. as some credential holders, but a party who lacks the credentials is not only unable to pass the authentication protocol, but cannot even distinguish a protocol instance from random noise.

Previous work on covert computation [vAHL05, CGOS07] showed general-purpose protocols whose efficiency is linear in the size of the circuit representation of the computed function. Here we show the first practical (assuming a large-enough random steganographic channel) covert protocol for the specific task of two-party mutual authentication, secure under the strong RSA, DQR, and DDH assumptions. The protocol takes 5 rounds (3 in ROM), O(1) modular exponentiations, and supports revocation and identity escrow. The main technical contribution which enables it is a compiler from a special honest-verifier zero-knowledge proof to a covert conditional key encapsulation mechanism for the same language.

Stanislaw Jarecki

Protocols

Fine-Tuning Groth-Sahai Proofs

Groth-Sahai proofs are efficient non-interactive zero-knowledge proofs that have found widespread use in pairing-based cryptography. We propose efficiency improvements of Groth-Sahai proofs in the SXDH setting, which is the one that yields the most efficient non-interactive zero-knowledge proofs.

We replace some of the commitments with ElGamal encryptions, which reduces the prover’s computation and for some types of equations reduces the proof size.

Groth-Sahai proofs are zero-knowledge when no public elements are paired to each other. We observe that they are also zero-knowledge when base elements for the groups are paired to public constants.

The prover’s computation can be reduced by letting her pick her own common reference string. Since she gives a proof that she has picked a valid common reference string, this does not compromise soundness.

We define a type-based commit-and-prove scheme, which allows commitments to be reused in many different proofs.

Alex Escala, Jens Groth
Cross-Domain Secure Computation

Consider the setting of two mutually distrustful parties Alice and Bob communicating over the Internet, who want to securely evaluate desired functions on their private inputs. In this setting all known protocols for securely evaluating general functions either require honest parties to trust an external party or provide only weaker notions of security. Thus, the question of minimizing or removing trusted set-up assumptions remains open. In this work, we introduce the cross-domain model (CD) for secure computation as a means of reducing the level of required trust. In this model, each domain consists of a set of mutually trusting parties along with a key-registration authority, where we would like parties from distinct domains to be able to perform multiple secure computation tasks concurrently. In this setting, we show the following:

Positive Construction for 2 domains: We give a multiparty protocol that concurrently and securely evaluates any function in the CD model with two domains, using only a constant number of rounds and relying only on standard assumptions.

Impossibility Results for 3 or more domains: Consider a deterministic function (e.g., 1-out-of-2 bit OT) that Alice and Bob in the standalone setting cannot evaluate trivially and which allows only Bob to receive the output. In this setting, if besides Alice and Bob there is a third party (such that all three are from distinct domains), then they cannot securely compute any such function in the CD model in the concurrent setting, even when their inputs are pre-specified.

These results extend to the setting of multiple parties as well. In particular, there exists an n-party concurrently secure protocol in the CD model of n domains if and only if there are exactly n domains in the system.

Chongwon Cho, Sanjam Garg, Rafail Ostrovsky
On the Security of the Pre-shared Key Ciphersuites of TLS

TLS is by far the most important protocol on the Internet for negotiating secure session keys and providing authentication. Only very recently, the standard ciphersuites of TLS have been shown to provide provably secure guarantees under a new notion called Authenticated and Confidential Channel Establishment (ACCE), introduced by Jager et al. at CRYPTO’12. In this work, we analyse the variants of TLS that make use of pre-shared keys (TLS-PSK). In various environments, TLS-PSK is an interesting alternative for remote authentication between servers and constrained clients like smart cards, for example for mobile phone authentication, EMV-based payment transactions or authentication via electronic ID cards. First, we introduce a new and strong definition of ACCE security that covers protocols with pre-shared keys. Next, we prove that all ciphersuite families of TLS-PSK meet our strong notion of ACCE security. Our results rely neither on random oracles nor on any non-standard assumption.

Yong Li, Sven Schäge, Zheng Yang, Florian Kohlar, Jörg Schwenk
Backmatter
Metadata
Title
Public-Key Cryptography – PKC 2014
Edited by
Hugo Krawczyk
Copyright Year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-54631-0
Print ISBN
978-3-642-54630-3
DOI
https://doi.org/10.1007/978-3-642-54631-0