
2013 | Book

Public-Key Cryptography – PKC 2013

16th International Conference on Practice and Theory in Public-Key Cryptography, Nara, Japan, February 26 – March 1, 2013. Proceedings

Edited by: Kaoru Kurosawa, Goichiro Hanaoka

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the 16th International Conference on Practice and Theory in Public-Key Cryptography, PKC 2013, held in Nara, Japan, in February/March 2013.

The 28 papers presented together with 2 invited talks were carefully reviewed and selected from numerous submissions. The papers are organized in the following topical sections: homomorphic encryption, primitives, functional encryption/signatures, RSA, IBE and IPE, key exchange, signature schemes, encryption, and protocols.

Table of Contents

Frontmatter

Homomorphic Encryption

Packed Ciphertexts in LWE-Based Homomorphic Encryption
Abstract
In this short note we observe that the Peikert-Vaikuntanathan-Waters (PVW) method of packing many plaintext elements in a single Regev-type ciphertext can be used for performing SIMD homomorphic operations on packed ciphertexts. This provides an alternative to the Smart-Vercauteren (SV) ciphertext-packing technique that relies on polynomial-CRT. While the SV technique is only applicable to schemes that rely on ring-LWE (or other hardness assumptions in ideal lattices), the PVW method can also be used for cryptosystems whose security is based on standard LWE (or, more broadly, on the hardness of “General-LWE”).
Although using the PVW method with LWE-based schemes leads to worse asymptotic efficiency than using the SV technique with ring-LWE schemes, the simplicity of this method may still offer some practical advantages. Also, the two techniques can be used in tandem with “general-LWE” schemes, suggesting yet another tradeoff that can be optimized for different settings.
Zvika Brakerski, Craig Gentry, Shai Halevi
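A minimal toy sketch of the packing idea described in this abstract, assuming illustrative (nowhere near secure) parameters: a single Regev-type ciphertext \((\mathbf{a}, \mathbf{b})\) encrypts a whole vector of bits, and componentwise ciphertext addition acts as a SIMD XOR across all slots. The paper's full PVW treatment and multiplicative operations are not reproduced here.

```python
# Toy packed Regev-style encryption: one ciphertext holds ell plaintext
# bits, and adding two ciphertexts adds all slots at once (SIMD).
import numpy as np

n, ell, q = 64, 8, 2**16           # lattice dimension, slots, modulus (toy sizes)
rng = np.random.default_rng(0)
S = rng.integers(0, q, size=(ell, n))   # secret key: one row per plaintext slot

def encrypt(m):                    # m: vector of ell bits
    a = rng.integers(0, q, size=n)
    e = rng.integers(-2, 3, size=ell)   # small per-slot noise
    return a, (S @ a + e + (q // 2) * m) % q

def decrypt(ct):
    a, b = ct
    d = (b - S @ a) % q
    return ((d > q // 4) & (d < 3 * q // 4)).astype(int)  # round to nearest bit

def simd_add(c1, c2):              # one addition acts on all ell slots at once
    return (c1[0] + c2[0]) % q, (c1[1] + c2[1]) % q

m1, m2 = rng.integers(0, 2, size=ell), rng.integers(0, 2, size=ell)
assert np.array_equal(decrypt(simd_add(encrypt(m1), encrypt(m2))), m1 ^ m2)
```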
Feasibility and Infeasibility of Adaptively Secure Fully Homomorphic Encryption
Abstract
Fully homomorphic encryption (FHE) is a form of public-key encryption that enables arbitrary computation over encrypted data. The past few years have seen several realizations of FHE under different assumptions, and FHE has been used as a building block in many cryptographic applications.
Adaptive security for public-key encryption schemes is an important security notion proposed by Canetti et al. It is intended to ensure security when encryption is used within an interactive protocol and parties may be adaptively corrupted by an adversary during the course of the protocol execution. Due to the extensive applications of FHE to protocol design, it is natural to understand whether adaptively secure FHE is achievable.
In this paper we show two contrasting results in this direction. First, we show that adaptive security is impossible for FHE satisfying the (standard) compactness requirement. On the other hand, we show a construction of adaptively secure FHE that is not compact, but that does achieve circuit privacy.
Jonathan Katz, Aishwarya Thiruvengadam, Hong-Sheng Zhou
Chosen Ciphertext Secure Keyed-Homomorphic Public-Key Encryption
Abstract
In homomorphic encryption schemes, anyone can perform homomorphic operations, and therefore it is difficult to manage when, where, and by whom they are performed. In addition, the property that anyone can “freely” perform the operation inevitably means that ciphertexts are malleable, and it is well known that adaptive chosen ciphertext (CCA) security and the homomorphic property can never be achieved simultaneously. In this paper, we show that CCA security and the homomorphic property can be simultaneously handled in situations where the users who can perform homomorphic operations on encrypted data should be controlled or limited, and propose a new concept of homomorphic public-key encryption, which we call keyed-homomorphic public-key encryption (KH-PKE). By introducing a secret key for homomorphic operations, we can control who is allowed to perform them. To construct KH-PKE schemes, we introduce a new concept, a homomorphic transitional universal hash family, and present a number of KH-PKE schemes through hash proof systems. We also present a practical construction of KH-PKE from the DDH assumption. For ℓ-bit security, our DDH-based scheme yields ciphertexts only ℓ bits longer than those of the Cramer-Shoup PKE scheme.
Keita Emura, Goichiro Hanaoka, Go Ohtake, Takahiro Matsuda, Shota Yamada

Invited Talk (1)

Functional Encryption: Origins and Recent Developments
Abstract
In this talk, we will present the notion of functional encryption and recent progress in the area. We will begin by describing the concept and origins of functional encryption. Next, we will describe intuitively why current bilinear map based constructions appear to be “stuck” with boolean formula type functionality even in the public index setting. Finally, we will see some very recent work that uses multilinear forms to move beyond these barriers and achieve functionality for any circuit.
Brent Waters

Primitives

Vector Commitments and Their Applications
Abstract
We put forward the study of a new primitive that we call Vector Commitment (VC, for short). Informally, VCs allow one to commit to an ordered sequence of q values \((m_1, \ldots, m_q)\) in such a way that one can later open the commitment at specific positions (e.g., prove that \(m_i\) is the i-th committed message). For security, Vector Commitments are required to satisfy a notion that we call position binding, which states that an adversary should not be able to open a commitment to two different values at the same position. Moreover, what makes our primitive interesting is that we require VCs to be concise, i.e., the size of the commitment string and of its openings has to be independent of the vector length.
We show two realizations of VCs based on standard and well-established assumptions, namely RSA and Computational Diffie-Hellman (in bilinear groups). Next, we turn our attention to applications and show that Vector Commitments are useful in a variety of contexts, as they allow for compact and efficient solutions which significantly improve previous works either in terms of the efficiency of the resulting solutions, or in terms of the “quality” of the underlying assumption, or both. These applications include: Verifiable Databases with Efficient Updates, Updatable Zero-Knowledge Databases, and Universal Dynamic Accumulators.
Dario Catalano, Dario Fiore
Efficient, Adaptively Secure, and Composable Oblivious Transfer with a Single, Global CRS
Abstract
We present a general framework for efficient, universally composable oblivious transfer (OT) protocols in which a single, global, common reference string (CRS) can be used for multiple invocations of oblivious transfer by arbitrary pairs of parties. In addition:
  • Our framework is round-efficient. E.g., under the DLIN or SXDH assumptions we achieve round-optimal protocols with static security, or 3-round protocols with adaptive security (assuming erasure).
  • Our resulting protocols are more efficient than any known previously, and in particular yield protocols for string OT using O(1) exponentiations and communicating O(1) group elements.
Our result improves on that of Peikert et al. (Crypto 2008), which uses a CRS whose length depends on the number of parties in the network and achieves only static security. Compared to Garay et al. (Crypto 2009), we achieve adaptive security with better round complexity and efficiency.
Seung Geol Choi, Jonathan Katz, Hoeteck Wee, Hong-Sheng Zhou
Cryptography Using Captcha Puzzles
Abstract
A Captcha is a puzzle that is easy for humans but hard to solve for computers. A formal framework, modelling Captcha puzzles (as hard AI problems), was introduced by Ahn, Blum, Hopper, and Langford ([1], Eurocrypt 2003). Despite their attractive features and wide adoption in practice, the use of Captcha puzzles for general cryptographic applications has been limited.
In this work, we explore various ways to formally model Captcha puzzles and their human component, and explore new applications for Captcha. We show that by defining Captcha with additional (strong but realistic) properties, it is possible to broaden Captcha applicability, including using it to learn a machine’s “secret internal state.” To facilitate this, we introduce the notion of a human-extractable Captcha, which we believe may be of independent interest. We show that this type of Captcha yields a constant-round protocol for fully concurrent non-malleable zero-knowledge. To enable this we also define and construct a Captcha-based commitment scheme which admits “straight line” extraction. We also explore Captcha definitions in the setting of Universal Composability (UC). We show that there are two (incomparable) ways to model Captcha within the UC framework that lead to different results. In particular, we show that in the so-called indirect access model, for every polynomial time functionality \(\mathcal{F}\) there exists a protocol that UC-realizes \(\mathcal{F}\) using human-extractable Captcha, while in the so-called direct access model, UC is impossible, even with the help of human-extractable Captcha.
The security of our constructions using human-extractable Captcha is proven against the (standard) class of all polynomial time adversaries. In contrast, most previous works guarantee security only against a very limited class of adversaries, called the conservative adversaries.
Abishek Kumarasubramanian, Rafail Ostrovsky, Omkant Pandey, Akshay Wadia
Improved Zero-Knowledge Proofs of Knowledge for the ISIS Problem, and Applications
Abstract
In all existing efficient proofs of knowledge of a solution to the infinity norm Inhomogeneous Small Integer Solution (\(\mathrm{ISIS}^{\infty}\)) problem, the knowledge extractor outputs a solution vector that is only guaranteed to be \(\widetilde{O}(n)\) times longer than the witness possessed by the prover. As a consequence, in many cryptographic schemes that use these proof systems as building blocks, there exists a gap between the hardness of solving the underlying \(\mathrm{ISIS}^{\infty}\) problem and the hardness underlying the security reductions. In this paper, we generalize Stern’s protocol to obtain two statistical zero-knowledge proofs of knowledge for the \(\mathrm{ISIS}^{\infty}\) problem that remove this gap. Our result yields the potential of relying on weaker security assumptions for various lattice-based cryptographic constructions. As applications of our proof system, we introduce a concurrently secure identity-based identification scheme based on the worst-case hardness of the \(\mathrm{SIVP}_{\widetilde{O}(n^{1.5})}\) problem (in the \(\ell_2\) norm) in general lattices in the random oracle model, and an efficient statistical zero-knowledge proof of plaintext knowledge with small constant gap factor for Regev’s encryption scheme.
San Ling, Khoa Nguyen, Damien Stehlé, Huaxiong Wang

Functional Encryption/Signatures

Decentralized Attribute-Based Signatures
Abstract
We present the first decentralized multi-authority attribute-based signature (DMA-ABS) scheme, in which no central authority and no trusted setup are required. The proposed DMA-ABS scheme for a large class of (non-monotone) predicates is fully secure (adaptive-predicate unforgeable and perfectly private) under a standard assumption, the decisional linear (DLIN) assumption, in the random oracle model. Our DMA-ABS scheme is comparable in efficiency to the most efficient ABS schemes. As a by-product, this paper also presents an adaptively secure DMA functional encryption (DMA-FE) scheme under the DLIN assumption.
Tatsuaki Okamoto, Katsuyuki Takashima
On the Semantic Security of Functional Encryption Schemes
Abstract
Functional encryption (FE) is a powerful cryptographic primitive that generalizes many asymmetric encryption systems proposed in recent years. Syntax and security definitions for FE were proposed by Boneh, Sahai, and Waters (BSW) (TCC 2011) and independently by O’Neill (ePrint 2010/556). In this paper we revisit these definitions, identify several shortcomings in them, and propose a new definitional approach that overcomes these limitations. Our definitions display good compositionality properties and allow us to obtain new feasibility and impossibility results for adaptive token-extraction attack scenarios that shed further light on the potential reach of general FE for practical applications.
Manuel Barbosa, Pooya Farshim
Attribute-Based Encryption with Fast Decryption
Abstract
Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption.
This work focuses on designing ABE schemes with fast decryption algorithms. We restrict our attention to expressive systems without system-wide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of |Γ|, where Γ is the set of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.
Susan Hohenberger, Brent Waters

On RSA

Recovering RSA Secret Keys from Noisy Key Bits with Erasures and Errors
Abstract
We discuss how to recover RSA secret keys from noisy key bits with erasures and errors. There are two known algorithms for recovering the original secret key from noisy keys. At Crypto 2009, Heninger and Shacham proposed a method for the case where the erroneous version of the secret key contains only erasures. Subsequently, at Crypto 2010, Henecka et al. proposed a method for an erroneous version containing only errors. For physical attacks such as side-channel and cold boot attacks, we need to study key recovery from a noisy secret key containing both erasures and errors. In this paper, we propose a method to recover the secret key from such an erroneous version and analyze the conditions on the error and erasure rates under which our algorithm succeeds in finding the correct secret key in polynomial time. We also evaluate a theoretical bound for recovering the secret key and discuss to what extent our algorithm achieves this bound.
Noboru Kunihiro, Naoyuki Shinohara, Tetsuya Izu
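For flavor, here is a hedged toy sketch of the erasures-only branch-and-prune idea of Heninger and Shacham mentioned in the abstract, restricted to the factors (p, q) alone. The paper's setting (errors as well as erasures, and the additional key components d, d_p, d_q) needs statistical rather than exact pruning; the pruning rule below, p·q ≡ N (mod 2^(i+1)), is the standard exact one.

```python
# Toy branch-and-prune key recovery (erasures only): grow p and q bit by
# bit (LSB first), keeping a candidate only if it matches the known bits
# and is consistent with p * q = N modulo the bits fixed so far.
def recover(N, p_bits, q_bits):          # each entry: 0, 1, or None (erased)
    cands = [(1, 1)]                     # p and q are odd
    for i in range(1, len(p_bits)):
        step = []
        for p, q in cands:
            for pb in ([p_bits[i]] if p_bits[i] is not None else [0, 1]):
                for qb in ([q_bits[i]] if q_bits[i] is not None else [0, 1]):
                    pp, qq = p | (pb << i), q | (qb << i)
                    if (pp * qq) % (1 << (i + 1)) == N % (1 << (i + 1)):
                        step.append((pp, qq))
        cands = step
    return [(p, q) for p, q in cands if p * q == N]

# 143 = 11 * 13; one bit of each factor has been erased (None), LSB first.
assert (11, 13) in recover(143, [1, None, 0, 1], [1, 0, None, 1])
```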
Combined Attack on CRT-RSA
Why Public Verification Must Not Be Public?
Abstract
This article introduces a new Combined Attack on a CRT-RSA implementation resistant against Side-Channel Analysis and Fault Injection attacks. Such implementations prevent the attacker from obtaining the signature when a fault has been induced during the computation. Indeed, such a value would allow the attacker to recover the RSA private key by computing the gcd of the public modulus and the faulty signature. The principle of our attack is to inject a fault during the signature computation and to perform a Side-Channel Analysis targeting a sensitive value processed during the Fault Injection countermeasure execution. The resulting information is then used to factorize the public modulus, leading to the disclosure of the whole RSA private key. After presenting a detailed account of our attack, we explain how its complexity can be significantly reduced by using lattice reduction techniques. We also provide simulations that confirm the efficiency of our attack, as well as two different countermeasures having a very small impact on the performance of the algorithm. As it performs a Side-Channel Analysis during a Fault Injection countermeasure to retrieve the secret value, this article recalls the need to design Fault Injection and Side-Channel Analysis countermeasures as monolithic implementations.
Guillaume Barbu, Alberto Battistello, Guillaume Dabosville, Christophe Giraud, Guénaël Renault, Soline Renner, Rina Zeitoun
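For background, the gcd computation mentioned in the abstract (the classical Bellcore-style fault attack that signature verification countermeasures are meant to block) fits in a few lines. Everything below uses hypothetical toy parameters, not values from the paper.

```python
# Why a faulty CRT-RSA signature must never be released: if one CRT
# branch is faulted, S^e matches the message mod p but not mod q, so a
# single gcd factors the public modulus N. Toy parameters only.
from math import gcd

p, q, e = 1009, 1013, 65537
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))          # private exponent (Python 3.8+)

def sign_crt(m, fault=False):
    sp = pow(m, d % (p - 1), p)            # half-signature mod p
    sq = pow(m, d % (q - 1), q)            # half-signature mod q
    if fault:
        sq = (sq + 1) % q                  # fault injected in the q branch
    # Garner recombination of the two halves into S mod N
    return sp + p * (((sq - sp) * pow(p, -1, q)) % q)

m = 42
S = sign_crt(m, fault=True)                # the faulty signature leaks out...
assert gcd((pow(S, e, N) - m) % N, N) == p # ...and N is factored
```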

IBE and IPE

Revocable Identity-Based Encryption Revisited: Security Model and Construction
Abstract
In ACM CCS 2008, Boldyreva et al. proposed an elegant way of achieving Identity-Based Encryption (IBE) with efficient revocation, which we call revocable IBE (RIBE). One of the significant benefits of their construction is scalability: the overhead of the trusted authority grows only logarithmically in the number of users, whereas that of the naive Boneh-Franklin revocation approach grows linearly. All subsequent RIBE schemes follow the Boldyreva et al. security model and syntax. In this paper, we first revisit the Boldyreva et al. security model, and aim at capturing the exact notion of security of the naive but non-scalable Boneh-Franklin RIBE scheme. To this end, we consider a realistic threat, which we call decryption key exposure. We also show that all prior RIBE constructions except for the Boneh-Franklin one are vulnerable to decryption key exposure. As our second contribution, we revisit approaches to achieving (efficient and adaptively secure) scalable RIBE schemes, and propose a simple RIBE scheme which is the first scalable RIBE scheme with decryption key exposure resistance, and is more efficient than previous (adaptively secure) scalable RIBE schemes. In particular, our construction has the shortest ciphertext size and the fastest decryption algorithm even compared with all scalable RIBE schemes without decryption key exposure resistance.
Jae Hong Seo, Keita Emura
Improved (Hierarchical) Inner-Product Encryption from Lattices
Abstract
Inner-product encryption (IPE) provides fine-grained access control and has attractive applications. Agrawal, Freeman, and Vaikuntanathan (Asiacrypt 2011) proposed the first IPE scheme from lattices by twisting the identity-based encryption (IBE) scheme of Agrawal, Boneh, and Boyen (Eurocrypt 2010). Their IPE scheme supports inner-product predicates over \(R^{\mu}\), where the ring is \(R = \mathbb{Z}_q\). Several applications require the ring R to be exponentially large and thus set \(q = 2^{O(n)}\) to implement them. This choice results in the AFV IPE scheme having public parameters of size \(O(\mu n^2 \lg^3{q}) = O(\mu n^5)\) and ciphertexts of size \(O(\mu n \lg^3{q}) = O(\mu n^4)\), where n is the security parameter. Hence, this makes the scheme impractical, as they noted.
We address this efficiency issue by “untwisting” their twist and providing another twist. Our scheme supports inner-product predicates over \(R^{\mu}\) where \(R = \mathrm{GF}(q^n)\) instead of \(\mathbb{Z}_q\). Our scheme has public parameters of size \(O(\mu n^2 \lg^2{q})\) and ciphertexts of size \(O(\mu n \lg^2{q})\). Since the cardinality of \(\mathrm{GF}(q^n)\) is inherently exponential in n, there is no need to set q exponentially large for such applications.
As side contributions, we extend our IPE scheme to a hierarchical IPE (HIPE) scheme and propose a fuzzy IBE scheme from IPE. Our HIPE scheme is more efficient than that developed by Abdalla, De Caro, and Mochetti (Latincrypt 2012). Our fuzzy IBE is secure under a much weaker assumption than that employed by Agrawal et al. (PKC 2012), who constructed the first lattice-based fuzzy IBE scheme.
Keita Xagawa
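A back-of-the-envelope comparison using the asymptotic sizes quoted in the abstract; constants and lower-order terms are dropped, and n, μ, and lg q below are arbitrary assumed example values, so the outputs only illustrate the scaling gap, not real parameter sizes.

```python
n, mu = 128, 4                        # security parameter, predicate dimension

lg_q_afv = n                          # AFV applications need q = 2^Theta(n)
afv_pp = mu * n**2 * lg_q_afv**3      # O(mu n^2 lg^3 q) = O(mu n^5)
afv_ct = mu * n * lg_q_afv**3         # O(mu n  lg^3 q) = O(mu n^4)

lg_q_new = 16                         # the new scheme can keep q small
new_pp = mu * n**2 * lg_q_new**2      # O(mu n^2 lg^2 q)
new_ct = mu * n * lg_q_new**2         # O(mu n  lg^2 q)

# Both sizes shrink by n^3 / lg^2 q = 8192x at these example values.
print(afv_pp // new_pp, afv_ct // new_ct)
```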

Invited Talk (2)

Techniques for Efficient Secure Computation Based on Yao’s Protocol
Abstract
In the setting of secure two-party computation, two parties wish to securely compute a function of their joint private inputs. The theoretical foundations of this problem were laid down in the 1980s, and it has been heavily studied due to its generality and many applications. Until recently, however, secure computation was considered a problem of purely theoretical interest. This has changed, and progress on the question of efficient secure computation has been extraordinarily fast in the past five years. In this talk, we survey some of this recent progress and describe the main techniques used for obtaining fast two-party computation based on Yao’s garbled circuit protocol. We present the main algorithmic/protocol improvements, as well as implementation issues that have turned out to be a big factor in obtaining concrete efficiency. In addition, we consider the settings of semi-honest, covert, and malicious adversaries, and describe the challenges that arise for each, along with the solutions and major open questions.
Yehuda Lindell

Key Exchange

Non-Interactive Key Exchange
Abstract
Non-interactive key exchange (NIKE) is a fundamental but much-overlooked cryptographic primitive. It appears as a major contribution in the ground-breaking paper of Diffie and Hellman, but NIKE has remained largely unstudied since then. In this paper, we provide different security models for this primitive and explore the relationships between them. We then give constructions for secure NIKE in the Random Oracle Model based on the hardness of factoring, and in the standard model based on the hardness of a variant of the decisional Bilinear Diffie-Hellman problem for asymmetric pairings. We also study the relationship between NIKE and public key encryption (PKE), showing that a secure NIKE scheme can be generically converted into an IND-CCA secure PKE scheme. Our conversion also illustrates the fundamental nature of NIKE in public key cryptography.
Eduarda S. V. Freire, Dennis Hofheinz, Eike Kiltz, Kenneth G. Paterson
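For intuition, the classic Diffie-Hellman instantiation of NIKE (with a hash to derive the key, as in random-oracle-model analyses) fits in a few lines. The group parameters and identity handling below are illustrative assumptions, not the paper's factoring- or pairing-based constructions.

```python
# Textbook Diffie-Hellman NIKE: each party publishes a single value once;
# any two parties later derive the same shared key with no interaction.
import hashlib, secrets

p = 2**255 - 19          # a well-known prime; toy choice, not a vetted DH group
g = 2

def keygen():
    x = secrets.randbelow(p - 1)               # secret exponent
    return x, pow(g, x, p)                     # (sk, pk = g^x)

def shared_key(my_id, my_sk, peer_id, peer_pk):
    z = pow(peer_pk, my_sk, p)                 # g^(xy) mod p, same on both sides
    ids = b"|".join(sorted([my_id, peer_id]))  # order-independent key label
    return hashlib.sha256(ids + z.to_bytes(32, "big")).hexdigest()

a_sk, a_pk = keygen()
b_sk, b_pk = keygen()
assert shared_key(b"alice", a_sk, b"bob", b_pk) == \
       shared_key(b"bob", b_sk, b"alice", a_pk)
```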
Efficient UC-Secure Authenticated Key-Exchange for Algebraic Languages
Abstract
Authenticated Key Exchange (AKE) protocols enable two parties to establish a shared, cryptographically strong key over an insecure network using various authentication means, such as cryptographic keys, short (i.e., low-entropy) secret keys, or credentials. In this paper, we provide a general framework that encompasses several previous AKE primitives, such as (verifier-based) Password-Authenticated Key Exchange and Secret Handshakes, which we call LAKE, for Language-Authenticated Key Exchange.
We first model this general primitive in the Universal Composability (UC) setting. Thereafter, we show that the Gennaro-Lindell approach can efficiently address this goal. But we need smooth projective hash functions on new languages, whose efficient implementations are of independent interest. We indeed provide such hash functions for languages defined by combinations of linear pairing product equations.
Combined with an efficient commitment scheme derived from Lindell’s highly efficient UC-secure commitment, we obtain a very practical realization of Secret Handshakes, as well as Credential-Authenticated Key Exchange protocols. All the protocols are UC-secure in the standard model with a common reference string, under the classical Decisional Linear assumption.
Fabrice Ben Hamouda, Olivier Blazy, Céline Chevalier, David Pointcheval, Damien Vergnaud

Signature Schemes I

Tighter Reductions for Forward-Secure Signature Schemes
Abstract
In this paper, we revisit the security of factoring-based signature schemes built via the Fiat-Shamir transform and show that they can admit tighter reductions to certain decisional complexity assumptions such as the quadratic-residuosity, the high-residuosity, and the φ-hiding assumptions. We do so by proving that the underlying identification schemes used in these schemes are a particular case of the lossy identification notion recently introduced by Abdalla et al. at Eurocrypt 2012. Next, we show how to extend these results to the forward-security setting based on ideas from the Itkis-Reyzin forward-secure signature scheme. Unlike the original Itkis-Reyzin scheme, our construction can be instantiated under different decisional complexity assumptions and has a much tighter security reduction. Finally, we show that the tighter security reductions provided by our proof methodology can result in concrete efficiency gains in practice, both in the standard and forward-security setting, as long as the use of stronger security assumptions is deemed acceptable. All of our results hold in the random oracle model.
Michel Abdalla, Fabrice Ben Hamouda, David Pointcheval
Tagged One-Time Signatures: Tight Security and Optimal Tag Size
Abstract
We present an efficient structure-preserving tagged one-time signature scheme with tight security reductions to the decision-linear assumption. Our scheme features short tags consisting of a single group element and gives rise to the currently most efficient structure-preserving signature scheme based on the decision-linear assumption, with constant-size signatures of only 14 group elements, where the record so far was 17 elements.
To demonstrate the advantages of our scheme, we revisit the work by Hofheinz and Jager (CRYPTO 2012) and present the currently most efficient tightly secure public-key encryption scheme. We also obtain the first structure-preserving public-key encryption scheme featuring both tight security and public verifiability.
Masayuki Abe, Bernardo David, Markulf Kohlweiss, Ryo Nishimaki, Miyako Ohkubo

Encryption

Key Encapsulation Mechanisms from Extractable Hash Proof Systems, Revisited
Abstract
In CRYPTO 2010, Wee proposed the notion of “extractable hash proof systems” (XHPS), and its richer version, “all-but-one XHPS” (ABO-XHPS), and showed that chosen ciphertext secure (CCA secure) key encapsulation mechanisms (KEMs) can be constructed from them. This elegantly explains several recently proposed practical KEMs constructed based on the “all-but-one” simulation paradigm in a unified framework. Somewhat frustratingly, however, there still exist popular KEMs whose construction and security proofs are not captured by this framework. In this paper, we revisit the framework of the ABO-XHPS-based KEM. First, we show that some requirements can be relaxed in proving CCA security of the ABO-XHPS-based KEM. This relaxation widens the applicability of the original framework, and explains why many known practical KEMs can be proved CCA secure. Second, we introduce new properties for ABO-XHPS, and show how one of the properties leads to KEMs that achieve “constrained” CCA security, which is a useful security notion of KEMs for obtaining CCA secure public key encryption via hybrid encryption. Third, we investigate the relationships among the computational properties that we introduce in this paper, and derive a useful theorem that enables us to understand the structure of KEMs of a certain type in a modular manner. Finally, we show that the ABO-XHPS-based KEM can be extended to efficient multi-recipient KEMs. Our results significantly extend the framework for constructing a KEM from ABO-XHPS, enable us to capture and explain more existing practical CCA secure schemes (most notably those based on the decisional Diffie-Hellman assumption), and lead to a number of new instantiations of (single- and multi-recipient) KEMs.
Takahiro Matsuda, Goichiro Hanaoka
Robust Encryption, Revisited
Abstract
We revisit the notions of robustness introduced by Abdalla, Bellare, and Neven (TCC 2010). One of the main motivations for the introduction of strong robustness for public-key encryption (PKE) by Abdalla et al. is to prevent certain types of attack on Sako’s auction protocol. We show, perhaps surprisingly, that Sako’s protocol is still vulnerable to attacks exploiting robustness problems in the underlying PKE scheme, even when it is instantiated with a strongly robust scheme. This demonstrates that current notions of robustness are insufficient even for one of their most natural applications. To address this and other limitations in existing notions, we introduce a series of new robustness notions for PKE and explore their relationships. In particular, we introduce complete robustness, our strongest new notion of robustness, and give a number of constructions for completely robust PKE schemes.
Pooya Farshim, Benoît Libert, Kenneth G. Paterson, Elizabeth A. Quaglia
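To see why robustness is not automatic, recall the standard observation that plain ElGamal decryption never rejects: a ciphertext produced for one key also “decrypts” under any other key. A toy sketch with illustrative (assumed) parameters, showing the kind of cross-key confusion robust PKE rules out:

```python
# Plain ElGamal has no robustness: decryption under the wrong key still
# returns a plausible-looking plaintext instead of failing. Toy parameters.
import secrets

p = 2**255 - 19                     # prime modulus (toy choice)
g = 5

sk1, sk2 = secrets.randbelow(p - 1), secrets.randbelow(p - 1)
pk1 = pow(g, sk1, p)

m, r = 42, secrets.randbelow(p - 1)
ct = (pow(g, r, p), m * pow(pk1, r, p) % p)     # encrypt m to pk1

def dec(sk, ct):
    c1, c2 = ct
    return c2 * pow(c1, p - 1 - sk, p) % p      # c2 / c1^sk (Fermat inverse)

assert dec(sk1, ct) == m       # the intended key recovers m
print(dec(sk2, ct))            # wrong key: no error, just a garbage "plaintext"
```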
Sender-Equivocable Encryption Schemes Secure against Chosen-Ciphertext Attacks Revisited
Abstract
In Eurocrypt 2010, Fehr et al. proposed the first sender-equivocable encryption scheme secure against chosen-ciphertext attacks (NC-CCA) and proved that NC-CCA security implies security against selective opening chosen-ciphertext attacks (SO-CCA). The NC-CCA security proof of the scheme relies on the security against substitution attacks of a new primitive, the “cross-authentication code”. However, the security of the cross-authentication code cannot be guaranteed when all the keys used in the code are exposed. Our key observation is that in the NC-CCA security game, the randomness used in the generation of the challenge ciphertext is exposed to the adversary. This random information can be used to recover all the keys involved in the cross-authentication code, and to forge a ciphertext (as in a substitution attack on the cross-authentication code) that is different from, but related to, the challenge ciphertext. The response of the decryption oracle to the forged ciphertext then leaks information, and this leaked information can be employed by an adversary to spoil the NC-CCA security proof of Fehr et al.’s scheme encrypting multi-bit plaintexts. We also show that Fehr et al.’s scheme encrypting single-bit plaintexts can be refined to achieve NC-CCA security, free of any cross-authentication code.
Zhengan Huang, Shengli Liu, Baodong Qin

Signature Schemes II

Efficient Completely Context-Hiding Quotable and Linearly Homomorphic Signatures
Abstract
Homomorphic signatures are primitives that allow for public computations over authenticated data for a class of specified predicates. An enhanced privacy notion, called complete context-hiding security, was recently motivated by Attrapadung et al. (Asiacrypt’12). This notion ensures that a signature derived from any valid signatures is perfectly indistinguishable from a newly generated signature (on the same message), and seems desirable in many applications requiring computation on authenticated data. In this paper, we focus on two useful predicates, namely substring quotation predicates and linear dependency predicates, and present the first completely context-hiding schemes for these in the standard model. Moreover, our new quotable signature scheme is the first such construction with signatures of linear size. In comparison with the initial scheme of Ahn et al. (TCC 2012), we thus reduce the signature size from O(n log n) to O(n), where n is the message size. Our scheme also allows signing messages of arbitrary length using constant-size public keys.
Nuttapong Attrapadung, Benoît Libert, Thomas Peters
Verifiably Encrypted Signatures with Short Keys Based on the Decisional Linear Problem and Obfuscation for Encrypted VES
Abstract
Verifiably encrypted signatures (VES) are signatures encrypted under the public key of a trusted third party, whose validity can be verified without decryption. This paper proposes a new VES scheme which is secure under the decisional linear (DLIN) assumption in the standard model. We also propose new obfuscators for encrypted signatures (ES) and encrypted VES (EVES) which are secure under the DLIN assumption.
All previous efficient VES schemes in the standard model are either secure under standard assumptions (such as the computational Diffie-Hellman assumption) with large verification (or secret) keys or secure under (non-standard) dynamic q-type assumptions (such as the q-strong Diffie-Hellman extraction assumption) with short verification keys. Our construction is the first efficient VES scheme with short verification (and secret) keys secure under a standard assumption (DLIN).
As by-products of our VES scheme, we construct new obfuscators for ES/EVES based on it. They are more efficient than previous obfuscators with respect to public key size. Previous obfuscators for EVES are secure under non-standard assumptions and use zero-knowledge (ZK) proof systems with the Fiat-Shamir heuristic to obtain non-interactive ZK, i.e., their security is considered in the random oracle model. Thus, our construction also has advantages with respect to assumptions and security models. Our new obfuscator for ES is obtained from our new obfuscator for EVES.
Ryo Nishimaki, Keita Xagawa
Sequential Aggregate Signatures with Short Public Keys: Design, Analysis and Implementation Studies
Abstract
The notion of aggregate signature has been motivated by applications; it enables any user to compress different signatures signed by different signers on different messages into a short signature. Sequential aggregate signature, in turn, is a special kind of aggregate signature that only allows a signer to add his signature to an aggregate signature in sequential order. This latter scheme has applications in diversified settings, such as reducing the bandwidth of certificate chains and securing routing protocols. Lu, Ostrovsky, Sahai, Shacham, and Waters presented the first sequential aggregate signature scheme in the standard (non-idealized, no random oracle) model. The size of their public key, however, is quite large (i.e., the number of group elements is proportional to the security parameter), and therefore they suggested as an open problem the construction of such a scheme with short keys. Schröder recently proposed a sequential aggregate signature (SAS) scheme with short public keys using the Camenisch-Lysyanskaya signature scheme, but its security is only proven under an interactive assumption (which is considered a relaxed notion of security). In this paper, we propose the first sequential aggregate signature scheme with short public keys (i.e., a constant number of group elements) in prime order (asymmetric) bilinear groups which is secure under static assumptions in the standard model. Technically, we start with a public key signature scheme based on the recent dual system encryption technique of Lewko and Waters. This technique cannot directly give an aggregate signature scheme since, as we observed, additional elements must be published in the public key to support aggregation. Thus, our construction carefully augments the dual system technique to support a sequential aggregate signature scheme. We further implemented our scheme and conducted a performance study and implementation optimization.
Kwangsu Lee, Dong Hoon Lee, Moti Yung
New Constructions and Applications of Trapdoor DDH Groups
Abstract
Trapdoor Decisional Diffie-Hellman (TDDH) groups, introduced by Dent and Galbraith (ANTS 2006), are groups where the DDH problem is hard unless one is in possession of a secret trapdoor which enables solving it efficiently. Despite their intuitively appealing properties, they have so far found very few cryptographic applications. Moreover, of the two constructions of such groups proposed by Dent and Galbraith, only the one based on hidden pairings remains unbroken. In this paper, we extend the set of trapdoor DDH groups by giving a construction based on composite residuosity. We also introduce a more restrictive variant of these groups, which we name static trapdoor DDH groups, where the trapdoor only enables solving the DDH problem with respect to a fixed pair \((G, G^x)\) of group elements. We give two constructions for such groups whose security relies respectively on the RSA and factoring assumptions. Then, we show that static trapdoor DDH groups yield elementary constructions of convertible undeniable signature schemes allowing delegatable verification. Using our constructions of static trapdoor DDH groups from the RSA or factoring assumptions, we obtain slightly simpler variants of the undeniable signature schemes of Gennaro, Rabin, and Krawczyk (J. Cryptology, 2000) and of Galbraith and Mao (CT-RSA 2003). These new schemes are conceptually more satisfying, since they can strictly be viewed as instantiations, in an adequate group, of the original undeniable signature scheme of Chaum and van Antwerpen (CRYPTO ’89).
Yannick Seurin

Protocols

Rate-Limited Secure Function Evaluation: Definitions and Constructions
Abstract
We introduce the notion of rate-limited secure function evaluation (RL-SFE). Loosely speaking, in an RL-SFE protocol participants can monitor and limit the number of distinct inputs (i.e., rate) used by their counterparts in multiple executions of an SFE, in a private and verifiable manner. The need for RL-SFE naturally arises in a variety of scenarios: e.g., it enables service providers to “meter” their customers’ usage without compromising their privacy, or can be used to prevent oracle attacks against SFE constructions.
We consider three variants of RL-SFE providing different levels of security. As a stepping stone, we also formalize the notion of commit-first SFE (cf-SFE) wherein parties are committed to their inputs before each SFE execution. We provide compilers for transforming any cf-SFE protocol into each of the three RL-SFE variants. Our compilers are accompanied with simulation-based proofs of security in the standard model and show a clear tradeoff between the level of security offered and the overhead required. Moreover, motivated by the fact that in many client-server applications clients do not keep state, we also describe a general approach for transforming the resulting RL-SFE protocols into stateless ones.
As a case study, we take a closer look at the oblivious polynomial evaluation (OPE) protocol of Hazay and Lindell, show that it is commit-first, and instantiate efficient rate-limited variants of it.
Özgür Dagdelen, Payman Mohassel, Daniele Venturi
Verifiable Elections That Scale for Free
Abstract
In order to guarantee a fair and transparent voting process, electronic voting schemes must be verifiable. Most of the time, however, it is important that elections also be anonymous. The notion of a verifiable shuffle describes how to satisfy both properties at the same time: ballots are submitted to a public bulletin board in encrypted form, verifiably shuffled by several mix servers (thus guaranteeing anonymity), and then verifiably decrypted by an appropriate threshold decryption mechanism. To guarantee transparency, the intermediate shuffles and decryption results, together with proofs of their correctness, are posted on the bulletin board throughout this process.
In this paper, we present a verifiable shuffle and threshold decryption scheme in which, for security parameter k, L voters, M mix servers, and N decryption servers, the proof that the end tally corresponds to the original encrypted ballots is only O(k(L + M + N)) bits long. Previous verifiable shuffle constructions had proofs of size O(kLM + kLN), which, for elections with thousands of voters, mix servers, and decryption servers, meant that verifying an election on an ordinary computer in a reasonable amount of time was out of the question.
The linchpin of each construction is a controlled-malleable proof (cm-NIZK), which allows each server, in turn, to take a current set of ciphertexts and a proof that the computation done by the other servers so far has proceeded correctly. After shuffling or partially decrypting these ciphertexts, the server can update the proof of correctness, obtaining a cumulative proof that the computation is correct so far. To verify the end result, it therefore suffices to verify just the proof produced by the last server.
Melissa Chase, Markulf Kohlweiss, Anna Lysyanskaya, Sarah Meiklejohn
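To get a feel for the improvement claimed above, here is the asymptotic comparison evaluated at illustrative (assumed) election sizes; constants are dropped, so the outputs indicate only the scaling gap, not real proof sizes.

```python
# Proof-size scaling from the abstract: O(kLM + kLN) previously versus
# O(k(L + M + N)) in this work. All sizes below are assumptions.
k = 128                      # security parameter (bits)
L, M, N = 10_000, 20, 20     # voters, mix servers, decryption servers

previous = k * L * M + k * L * N    # grows with L*M and L*N
this_work = k * (L + M + N)         # grows only with L + M + N

print(f"previous : {previous:,} bits")    # 51,200,000
print(f"this work: {this_work:,} bits")   # 1,285,120
```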
On the Connection between Leakage Tolerance and Adaptive Security
Abstract
We revisit the context of leakage-tolerant interactive protocols as defined by Bitansky, Canetti and Halevi (TCC 2012). Our contributions can be summarized as follows:
1. For the purpose of secure message transmission, any encryption protocol with message space \(\mathcal{M}\) and secret key space \(\mathcal{SK}\) tolerating poly-logarithmic leakage on the secret state of the receiver must satisfy \(|\mathcal{SK}| \ge (1-\epsilon)|\mathcal{M}|\), for every \(0 < \epsilon \le 1\); and if \(|\mathcal{SK}| = |\mathcal{M}|\), then the scheme must use a fresh key pair to encrypt each message.
2. More generally, we show that any n-party protocol tolerates leakage of \(\approx \mathrm{poly}(\log \kappa)\) bits from one party at the end of the protocol execution if and only if the protocol has passive adaptive security against an adaptive corruption of one party at the end of the protocol execution. This shows that as soon as a little leakage is tolerated, one needs full adaptive security.
3. In case more than one party can be corrupted, we get that leakage tolerance is equivalent to a weaker form of adaptivity, which we call semi-adaptivity. Roughly, a protocol has semi-adaptive security if there exists a simulator which can simulate the internal state of corrupted parties; this state is not required to be indistinguishable from a real state, only to be one that would have led to the simulated communication.
All our results can be based solely on the assumption that collision-resistant function ensembles exist.
Jesper Buus Nielsen, Daniele Venturi, Angela Zottarel
Backmatter
Metadata
Title
Public-Key Cryptography – PKC 2013
Edited by
Kaoru Kurosawa
Goichiro Hanaoka
Copyright year
2013
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-36362-7
Print ISBN
978-3-642-36361-0
DOI
https://doi.org/10.1007/978-3-642-36362-7
