
2015 | Book

Advances in Cryptology -- ASIACRYPT 2015

21st International Conference on the Theory and Application of Cryptology and Information Security, Auckland, New Zealand, November 29 -- December 3, 2015, Proceedings, Part I

Edited by: Tetsu Iwata, Jung Hee Cheon

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 9452 and 9453 constitutes the refereed proceedings of the 21st International Conference on the Theory and Applications of Cryptology and Information Security, ASIACRYPT 2015, held in Auckland, New Zealand, in November/December 2015.

The 64 revised full papers and 3 invited talks presented were carefully selected from 251 submissions. They are organized in topical sections on indistinguishability obfuscation; PRFs and hashes; discrete logarithms and number theory; signatures; multiparty computation; public key encryption; ABE and IBE; zero-knowledge; attacks on ASASA; number field sieve; hashes and MACs; symmetric encryption; foundations; side-channel attacks; design of block ciphers; authenticated encryption; symmetric analysis; cryptanalysis; privacy and lattices.

Table of Contents

Frontmatter

Best Paper

Frontmatter
Improved Security Proofs in Lattice-Based Cryptography: Using the Rényi Divergence Rather Than the Statistical Distance

The Rényi divergence is a measure of closeness of two probability distributions. We show that it can often be used as an alternative to the statistical distance in security proofs for lattice-based cryptography. Using the Rényi divergence is particularly suited for security proofs of primitives in which the attacker is required to solve a search problem (e.g., forging a signature). We show that it may also be used in the case of distinguishing problems (e.g., semantic security of encryption schemes), when they enjoy a public sampleability property. The techniques lead to security proofs for schemes with smaller parameters, and sometimes to simpler security proofs than the existing ones.

Shi Bai, Adeline Langlois, Tancrède Lepoint, Damien Stehlé, Ron Steinfeld
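
For readers unfamiliar with the measure, the order-a Rényi divergence between two discrete distributions P and Q with Supp(P) ⊆ Supp(Q) is standardly defined as follows; this is generic background notation, not a restatement of the paper's exact conventions:

$$R_a(P \| Q) \;=\; \Bigg( \sum_{x \in \mathrm{Supp}(P)} \frac{P(x)^a}{Q(x)^{a-1}} \Bigg)^{\frac{1}{a-1}}, \qquad a \in (1, +\infty), \qquad R_\infty(P \| Q) = \max_{x \in \mathrm{Supp}(P)} \frac{P(x)}{Q(x)}.$$

Whereas the statistical distance bounds an adversary's advantage additively, the Rényi divergence relates the probability of any event under P to its probability under Q multiplicatively, which is what makes it well suited to search problems such as signature forgery.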

Indistinguishability Obfuscation

Frontmatter
Multi-input Functional Encryption for Unbounded Arity Functions

The notion of multi-input functional encryption (MI-FE) was recently introduced by Goldwasser et al. [EUROCRYPT'14] as a means to non-interactively compute aggregate information on the joint private data of multiple users. A fundamental limitation of their work, however, is that the total number of users (which corresponds to the arity of the functions supported by the MI-FE scheme) must be a priori bounded and fixed at system setup time. In this work, we overcome this limitation by introducing the notion of unbounded input MI-FE that supports the computation of functions with unbounded arity. We construct such an MI-FE scheme with indistinguishability security in the selective model based on the existence of public-coin differing-inputs obfuscation for Turing machines and collision-resistant hash functions. Our result enables several new exciting applications, including a new paradigm of on-the-fly secure multiparty computation where new users can join the system dynamically.

Saikrishna Badrinarayanan, Divya Gupta, Abhishek Jain, Amit Sahai
Multi-party Key Exchange for Unbounded Parties from Indistinguishability Obfuscation

Existing protocols for non-interactive multi-party key exchange either (1) support a bounded number of users, (2) require a trusted setup, or (3) rely on knowledge-type assumptions. We construct the first non-interactive key exchange protocols which support an unbounded number of parties and have a security proof that does not rely on knowledge assumptions. Our non-interactive key exchange protocol does not require a trusted setup and extends easily to the identity-based setting. Our protocols suffer only a polynomial loss to the underlying hardness assumptions.

Dakshita Khurana, Vanishree Rao, Amit Sahai

PRFs and Hashes

Frontmatter
Adaptively Secure Puncturable Pseudorandom Functions in the Standard Model

We study the adaptive security of constrained PRFs in the standard model. We initiate our exploration with puncturable PRFs. A puncturable PRF family is a special class of constrained PRFs, where the constrained key is associated with an element x′ in the input domain. The key allows evaluation at all points x ≠ x′. We show how to build puncturable PRFs with adaptive security proofs in the standard model that involve only polynomial loss to the underlying assumptions. Prior work had either super-polynomial loss or applied the random oracle heuristic. Our construction uses indistinguishability obfuscation and DDH-hard algebraic groups of composite order. More generally, one can consider a t-puncturable PRF: PRFs that can be punctured at any set of inputs S, provided the size of S is less than a fixed polynomial. We additionally show how to transform any (single) puncturable PRF family to a t-puncturable PRF family, using indistinguishability obfuscation.

Susan Hohenberger, Venkata Koppula, Brent Waters
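
As background for the notion of puncturing, the following minimal Python sketch shows the folklore GGM-style puncturable PRF (selectively secure only), with SHA-256 standing in for a length-doubling PRG. It is not the adaptively secure construction of the paper, which relies on indistinguishability obfuscation and composite-order groups.

    import hashlib

    def _prg(seed: bytes, bit: int) -> bytes:
        # left (bit=0) or right (bit=1) child seed of a GGM tree node
        return hashlib.sha256(bytes([bit]) + seed).digest()

    def eval_prf(key: bytes, x: str) -> bytes:
        # x is a bit string such as "0110"; walk the GGM tree from the root
        s = key
        for b in x:
            s = _prg(s, int(b))
        return s

    def puncture(key: bytes, x_star: str):
        # co-path seeds: one per level, enough to evaluate everywhere except x_star
        copath, s = [], key
        for i, b in enumerate(x_star):
            copath.append((x_star[:i] + str(1 - int(b)), _prg(s, 1 - int(b))))
            s = _prg(s, int(b))
        return copath

    def eval_punctured(copath, x: str) -> bytes:
        for prefix, seed in copath:
            if x.startswith(prefix):
                s = seed
                for b in x[len(prefix):]:
                    s = _prg(s, int(b))
                return s
        raise ValueError("cannot evaluate at the punctured point")

    # sanity check: the punctured key agrees with the real PRF away from x_star
    k = b"\x00" * 32
    pk = puncture(k, "0110")
    assert eval_punctured(pk, "0111") == eval_prf(k, "0111")
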
Multilinear and Aggregate Pseudorandom Functions: New Constructions and Improved Security

Since their introduction, pseudorandom functions (PRFs) have become one of the main building blocks of cryptographic protocols. In this work, we revisit two recent extensions of standard PRFs, namely multilinear and aggregate PRFs, and provide several new results for these primitives. In the case of aggregate PRFs, one of our main results is a proof of security for the Naor-Reingold PRF with respect to read-once boolean aggregate queries under the standard Decision Diffie-Hellman problem, which was an open problem. In the case of multilinear PRFs, one of our main contributions is the construction of new multilinear PRFs achieving indistinguishability from random symmetric and skew-symmetric multilinear functions, which was also left as an open problem. In order to achieve these results, our main technical tool is a simple and natural generalization of the recent linearly independent polynomial framework for PRFs proposed by Abdalla, Benhamouda, and Passelègue at Crypto 2015, which can handle larger classes of PRF constructions. In addition to simplifying and unifying proofs for multilinear and aggregate PRFs, our new framework also yields new constructions which are secure under weaker assumptions, such as the decisional k-linear assumption.

Michel Abdalla, Fabrice Benhamouda, Alain Passelègue
New Realizations of Somewhere Statistically Binding Hashing and Positional Accumulators

A somewhere statistically binding (SSB) hash, introduced by Hubáček and Wichs (ITCS '15), can be used to hash a long string x to a short digest y = H_hk(x) using a public hashing key hk. Furthermore, there is a way to set up the hash key hk to make it statistically binding on some arbitrary hidden position i, meaning that: (1) the digest y completely determines the i-th bit (or symbol) of x, so that all pre-images of y have the same value in the i-th position; (2) it is computationally infeasible to distinguish the position i on which hk is statistically binding from any other position i′. Lastly, the hash should have a local opening property analogous to Merkle-tree hashing, meaning that given x and y = H_hk(x) it should be possible to create a short proof π that certifies the value of the i-th bit (or symbol) of x without having to provide the entire input x. A similar primitive called a positional accumulator, introduced by Koppula, Lewko and Waters (STOC '15), further supports dynamic updates of the hashed value. These tools, which are interesting in their own right, also serve as one of the main technical components in several recent works building advanced applications from indistinguishability obfuscation (iO). The prior constructions of SSB hashing and positional accumulators required fully homomorphic encryption (FHE) and iO, respectively. In this work, we give new constructions of these tools based on well-studied number-theoretic assumptions such as DDH, Phi-Hiding and DCR, as well as a general construction from lossy/injective functions.

Tatsuaki Okamoto, Krzysztof Pietrzak, Brent Waters, Daniel Wichs
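
To make the local-opening property concrete, here is a minimal Merkle-tree sketch in Python (assuming the number of leaves is a power of two). It illustrates only the short proof π for one position; it is not somewhere statistically binding and is not one of the paper's DDH/Phi-Hiding/DCR constructions.

    import hashlib

    def _h(a: bytes, b: bytes) -> bytes:
        return hashlib.sha256(a + b).digest()

    def merkle_root(leaves):
        level = [hashlib.sha256(x).digest() for x in leaves]
        while len(level) > 1:
            level = [_h(level[j], level[j + 1]) for j in range(0, len(level), 2)]
        return level[0]

    def open_at(leaves, i):
        # proof: the sibling hash at each level on the path from leaf i to the root
        level = [hashlib.sha256(x).digest() for x in leaves]
        proof = []
        while len(level) > 1:
            proof.append(level[i ^ 1])
            level = [_h(level[j], level[j + 1]) for j in range(0, len(level), 2)]
            i //= 2
        return proof

    def verify(root, i, leaf, proof):
        h = hashlib.sha256(leaf).digest()
        for sib in proof:
            h = _h(h, sib) if i % 2 == 0 else _h(sib, h)
            i //= 2
        return h == root

    # usage: certify the value at position 5 without revealing the other leaves
    data = [bytes([b]) for b in range(8)]
    rt = merkle_root(data)
    assert verify(rt, 5, data[5], open_at(data, 5))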

Discrete Logarithms and Number Theory

Frontmatter
Computing Individual Discrete Logarithms Faster in GF(p^n) with the NFS-DL Algorithm

The Number Field Sieve (NFS) algorithm is the best known method to compute discrete logarithms (DL) in finite fields $\mathbb{F}_{p^n}$, with p medium to large and n ≥ 1 small. This algorithm comprises four steps: polynomial selection, relation collection, linear algebra and finally, individual logarithm computation. The first step outputs two polynomials defining two number fields, and a map from the polynomial ring over the integers modulo each of these polynomials to $\mathbb{F}_{p^n}$. After the relation collection and linear algebra phases, the (virtual) logarithm of a subset of elements in each number field is known. Given the target element in $\mathbb{F}_{p^n}$, the fourth step computes a preimage in one number field. If one can write the target preimage as a product of elements of known (virtual) logarithm, then one can deduce the discrete logarithm of the target. As recently shown by the Logjam attack, this final step can be critical when it can be computed very quickly. But we realized that computing an individual DL is much slower in medium- and large-characteristic non-prime fields $\mathbb{F}_{p^n}$ with n ≥ 3, compared to prime fields and quadratic fields $\mathbb{F}_{p^2}$. We optimize the first part of the individual DL computation, the booting step, by dramatically reducing the size of the preimage norm. Its smoothness probability is higher, hence the running time of the booting step is much improved. Our method is very efficient for small extension fields with 2 ≤ n ≤ 6 and applies to any n > 1, in medium and large characteristic.

Aurore Guillevic
Multiple Discrete Logarithm Problems with Auxiliary Inputs

Let $g$ be an element of prime order $p$ in an abelian group and let $\alpha_1, \ldots, \alpha_L \in \mathbb{Z}_p$ for a positive integer $L$. First, we show that, if $g$, $g^{\alpha_i}$, and $g^{\alpha_i^d}$ ($i = 1, \ldots, L$) are given for $d \mid p-1$, all the discrete logarithms $\alpha_i$ can be computed probabilistically in $\tilde{O}(\sqrt{L \cdot p/d} + \sqrt{L \cdot d})$ group exponentiations with $O(L)$ storage, under the condition that $L \ll \min\{(p/d)^{1/4}, d^{1/4}\}$. Let $f \in \mathbb{F}_p[x]$ be a polynomial of degree $d$ and let $\rho_f$ be the number of rational points over $\mathbb{F}_p$ on the curve determined by $f(x) - f(y) = 0$. Second, if $g$, $g^{\alpha_i}$, $g^{\alpha_i^2}, \ldots, g^{\alpha_i^d}$ are given for any $d \geq 1$, then we propose an algorithm that solves all the $\alpha_i$ in $O(\max\{\sqrt{L \cdot p^2/\rho_f},\, L \cdot d\})$ group exponentiations with $\tilde{O}(\sqrt{L \cdot p^2/\rho_f})$ storage. In particular, we have explicit choices for a polynomial $f$ when $d \mid p \pm 1$ that yield a running time of $\tilde{O}(\sqrt{L \cdot p/d})$ whenever $L \leq \frac{p}{c \cdot d^3}$ for some constant $c$.

Taechan Kim
Solving Linear Equations Modulo Unknown Divisors: Revisited

We revisit the problem of finding small solutions to a collection of linear equations modulo an unknown divisor p of a known composite integer N. In CaLC 2001, Howgrave-Graham introduced an efficient algorithm for solving univariate linear equations; since then, two forms of multivariate generalizations have been considered in the context of cryptanalysis: modular multivariate linear equations by Herrmann and May (Asiacrypt '08) and simultaneous modular univariate linear equations by Cohn and Heninger (ANTS '12). Their algorithms have many important applications in cryptanalysis, such as the factoring with known bits problem, fault attacks on RSA signatures, and analysis of the approximate GCD problem. In this paper, by introducing multiple parameters, we propose several generalizations of the above equations. The motivation behind these extensions is that some attacks on RSA variants can be reduced to solving these generalized equations, while previous algorithms do not apply. We present new approaches to solve them; compared with previous methods, our new algorithms are more flexible and especially suitable for some cases. Applying our algorithms, we obtain the best analytical/experimental results for some attacks on RSA and its variants; specifically:
– We improve May's results (PKC '04) on the small secret exponent attack on the RSA variant with moduli N = p^r q (r ≥ 2).
– We experimentally improve Boneh et al.'s algorithm (Crypto '98) for factoring N = p^r q (r ≥ 2) with known bits.
– We significantly improve the Jochemsz-May attack (Asiacrypt '06) on Common Prime RSA.
– We extend Nitaj's result (Africacrypt '12) on weak encryption exponents of RSA and CRT-RSA.

Yao Lu, Rui Zhang, Liqiang Peng, Dongdai Lin
Fourℚ: Four-Dimensional Decompositions on a ℚ-curve over the Mersenne Prime

We introduce Fourℚ, a high-security, high-performance elliptic curve that targets the 128-bit security level. At the highest arithmetic level, cryptographic scalar multiplications on Fourℚ can use a four-dimensional Gallant-Lambert-Vanstone decomposition to minimize the total number of elliptic curve group operations. At the group arithmetic level, Fourℚ admits the use of extended twisted Edwards coordinates and can therefore exploit the fastest known elliptic curve addition formulas over large prime characteristic fields. Finally, at the finite field level, arithmetic is performed modulo the extremely fast Mersenne prime p = 2^127 − 1. We show that this powerful combination facilitates scalar multiplications that are significantly faster than all prior works. On Intel's Haswell, Ivy Bridge and Sandy Bridge architectures, our software computes a variable-base scalar multiplication in 59,000, 71,000 and 74,000 cycles, respectively; and, on the same platforms, our software computes a Diffie-Hellman shared secret in 92,000, 110,000 and 116,000 cycles, respectively.

Craig Costello, Patrick Longa
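
To illustrate why the choice p = 2^127 − 1 makes field arithmetic cheap, the following Python sketch performs modular reduction by folding high bits, using the identity 2^127 ≡ 1 (mod p). This is only a high-level illustration of Mersenne reduction; Fourℚ's actual field is GF(p^2) and its implementation is far more optimized.

    import random

    P = (1 << 127) - 1  # the Mersenne prime used by Fourℚ

    def mersenne_reduce(x: int) -> int:
        # reduce a non-negative integer modulo P using 2^127 ≡ 1 (mod P):
        # split x = hi*2^127 + lo and replace it by hi + lo until it fits
        while x > P:
            x = (x & P) + (x >> 127)
        return 0 if x == P else x

    # sanity check against Python's built-in modular reduction
    for _ in range(1000):
        a, b = random.randrange(P), random.randrange(P)
        assert mersenne_reduce(a * b) == (a * b) % P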

Signatures

Frontmatter
Efficient Fully Structure-Preserving Signatures for Large Messages

We construct both randomizable and strongly existentially unforgeable structure-preserving signatures for messages consisting of many group elements. To sign a message consisting of N = mn group elements, we have a verification key size of m group elements and signatures contain n + 2 elements. Verification of a signature requires evaluating n + 1 pairing product equations. We also investigate the case of fully structure-preserving signatures, where it is required that the secret signing key consists of group elements only. We show that a variant of our signature scheme, in which the signer picks part of the verification key at the time of signing, is still secure. This gives us both randomizable and strongly existentially unforgeable fully structure-preserving signatures. In the fully structure-preserving scheme the verification key is a single group element, signatures contain m + n + 1 group elements and verification requires evaluating n + 1 pairing product equations.

Jens Groth
A Provably Secure Group Signature Scheme from Code-Based Assumptions

We solve an open question in code-based cryptography by introducing the first provably secure group signature scheme from code-based assumptions. Specifically, the scheme satisfies the CPA-anonymity and traceability requirements in the random oracle model, assuming the hardness of the McEliece problem, the Learning Parity with Noise problem, and a variant of the Syndrome Decoding problem. Our construction produces smaller key and signature sizes than the existing post-quantum group signature schemes from lattices, as long as the cardinality of the underlying group does not exceed the population of the Netherlands (≈ 2^24 users). The feasibility of the scheme is supported by implementation results. Additionally, the techniques introduced in this work might be of independent interest: a new verifiable encryption protocol for the randomized McEliece encryption and a new approach to design formal security reductions from the Syndrome Decoding problem.

Martianus Frederic Ezerman, Hyung Tae Lee, San Ling, Khoa Nguyen, Huaxiong Wang
Type 2 Structure-Preserving Signature Schemes Revisited

At CRYPTO 2014, Abe et al. presented generic-signer structure-preserving signature schemes using Type 2 pairings. According to the authors, the proposed constructions are optimal with only two group elements in each signature and just one verification equation. The schemes beat the known lower bounds in the Type 3 setting and thereby establish that the Type 2 setting permits construction of cryptographic schemes with unique properties not achievable in Type 3. In this paper we undertake a concrete analysis of the Abe et al. claims. By properly accounting for the actual structure of the underlying groups and for subgroup membership testing of group elements in signatures, we show that the schemes are not as efficient as claimed. We present natural Type 3 analogues of the Type 2 schemes, and show that the Type 3 schemes are superior to their Type 2 counterparts in every aspect. We also formally establish that in the concrete mathematical structure of asymmetric pairings, all Type 2 structure-preserving signature schemes can be converted to the Type 3 setting without any penalty in security or efficiency, and show that the converse is false. Furthermore, we prove that the Type 2 setting does not allow one to circumvent the known lower bound result for the Type 3 setting. Our analysis puts the optimality claims for Type 2 structure-preserving signatures in a concrete perspective and indicates an incompleteness in the definition of a generic bilinear group in the Type 2 setting.

Sanjit Chatterjee, Alfred Menezes
Design Principles for HFEv- Based Multivariate Signature Schemes

The Hidden Field Equations (HFE) Cryptosystem as proposed by Patarin is one of the best known and most studied multivariate schemes. While the security of the basic scheme appeared to be very weak, the HFEv- variant seems to be a good candidate for digital signature schemes on the basis of multivariate polynomials. However, the currently existing scheme of this type, the QUARTZ signature scheme, is hardly used in practice because of its poor efficiency. In this paper we analyze recent results from Ding and Yang about the degree of regularity of HFEv- systems and derive from them design principles for signature schemes of the HFEv- type. Based on these results we propose the new HFEv- based signature scheme Gui, which is more than 100 times faster than QUARTZ and therefore highly comparable with classical signature schemes such as RSA and ECDSA.

Albrecht Petzoldt, Ming-Shing Chen, Bo-Yin Yang, Chengdong Tao, Jintai Ding

Multiparty Computation I

Frontmatter
Oblivious Network RAM and Leveraging Parallelism to Achieve Obliviousness

Oblivious RAM (ORAM) is a cryptographic primitive that allows a trusted CPU to securely access untrusted memory, such that the access patterns reveal nothing about sensitive data. ORAM is known to have broad applications in secure processor design and secure multiparty computation for big data. Unfortunately, due to a logarithmic lower bound by Goldreich and Ostrovsky (Journal of the ACM, '96), ORAM is bound to incur a moderate cost in practice. In particular, with the latest developments in ORAM constructions, we are quickly approaching this limit, and the room for performance improvement is small. In this paper, we consider new models of computation in which the cost of obliviousness can be fundamentally reduced in comparison with the standard ORAM model. We propose the Oblivious Network RAM model of computation, where a CPU communicates with multiple memory banks, such that the adversary observes only which bank the CPU is communicating with, but not the address offset within each memory bank. In other words, obliviousness within each bank comes for free, either because the architecture prevents a malicious party from observing the address accessed within a bank, or because another solution is used to obfuscate memory accesses within each bank; hence we only need to obfuscate communication patterns between the CPU and the memory banks. We present new constructions for obliviously simulating general or parallel programs in the Network RAM model. We describe applications of our new model in secure processor design and in distributed storage applications with a network adversary.

Dana Dachman-Soled, Chang Liu, Charalampos Papamanthou, Elaine Shi, Uzi Vishkin
Three-Party ORAM for Secure Computation

An Oblivious RAM (ORAM) protocol [13] allows a client to retrieve the N-th element of a data array D stored by a server such that the server learns no information about N. A related notion is that of an ORAM for Secure Computation (SC-ORAM) [17], which is a protocol that securely implements a RAM functionality, i.e. given a secret-sharing of both D and N, it computes a secret-sharing of D[N]. SC-ORAM can be used as a subprotocol for implementing the RAM functionality for secure computation of RAM programs [7,14,17]. It can also implement a public database service which hides each client's access pattern even if a threshold of servers colludes with any number of clients. Most previous works used two-party secure computation to implement each step of an ORAM client algorithm, but since secure computation of many functions becomes easier in the three-party honest-majority setting than in the two-party setting, it is natural to ask if the cost of an SC-ORAM scheme can be reduced if one is willing to use three servers instead of two and assumes an honest majority. We show a 3-party SC-ORAM scheme which is based on a variant of the Binary-Tree Client-Server ORAM of Shi et al. [20]. However, whereas previous SC-ORAM implementations used general 2PC or MPC techniques like Yao's garbled circuits, e.g. [14,22], homomorphic encryption [11], or the SPDZ protocol for arithmetic circuits [15], our techniques are custom-made for the three-party setting, giving rise to a protocol which is secure against honest-but-curious faults with bandwidth and CPU costs comparable to those of the underlying Client-Server ORAM.

Sky Faber, Stanislaw Jarecki, Sotirios Kentros, Boyang Wei
On Cut-and-Choose Oblivious Transfer and Its Variants

Motivated by the recent progress in improving the efficiency of secure computation, we study cut-and-choose oblivious transfer, a basic building block of state-of-the-art constant-round two-party secure computation protocols that was introduced by Lindell and Pinkas (TCC 2011). In particular, we study the question of realizing cut-and-choose oblivious transfer and its variants in the OT-hybrid model. Towards this, we provide new definitions of cut-and-choose oblivious transfer (and its variants) that suffice for its application in cut-and-choose techniques for garbled circuit based two-party protocols. Furthermore, our definitions conceptually simplify previous definitions, including those proposed by Lindell (Crypto 2013), Huang et al. (Crypto 2014), and Lindell and Riva (Crypto 2014). Our main result is an efficient realization (under our new definitions) of cut-and-choose OT and its variants with small concrete communication overhead in an OT-hybrid model. Among other things, this implies that we can base cut-and-choose OT and its variants on a variety of assumptions, including those that are believed to be resilient to quantum attacks. By contrast, previous constructions of cut-and-choose OT and its variants relied on DDH and could not take advantage of OT extension. Also, our new definitions lead us to more efficient constructions for multistage cut-and-choose OT, a variant proposed by Huang et al. (Crypto 2014) that is useful in the multiple execution setting.

Vladimir Kolesnikov, Ranjit Kumaresan

Public Key Encryption

Frontmatter
An Asymptotically Optimal Method for Converting Bit Encryption to Multi-Bit Encryption

Myers and Shelat (FOCS 2009) showed how to convert a chosen ciphertext secure (CCA secure) PKE scheme that can encrypt only 1-bit plaintexts into a CCA secure scheme that can encrypt arbitrarily long plaintexts (via the notion of key encapsulation mechanism (KEM) and hybrid encryption), and subsequent works improved efficiency and simplicity. In terms of efficiency, the best known construction of a CCA secure KEM from a CCA secure 1-bit PKE scheme has public key size Ω(k) · |pk| and ciphertext size Ω(k^2) · |c|, where k is a security parameter, and |pk| and |c| denote the public key size and the ciphertext size of the underlying 1-bit scheme, respectively. In this paper, we show a new CCA secure KEM based on a CCA secure 1-bit PKE scheme which achieves public key size 2 · |pk| and ciphertext size (2k + o(k)) · |c|. These sizes are asymptotically optimal in the sense that they are the same as those of the simplest "bitwise-encrypt" construction (seen as a KEM by encrypting a k-bit random session key) that works in the chosen plaintext attack and non-adaptive chosen ciphertext attack settings. We achieve our main result by developing several new techniques and results on the "double-layered" construction (which builds a KEM from an inner PKE/KEM and an outer PKE scheme) by Myers and Shelat, and on the notion of detectable PKE/KEM by Hohenberger, Lewko, and Waters (EUROCRYPT 2012).

Takahiro Matsuda, Goichiro Hanaoka
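
The "bitwise-encrypt" baseline mentioned above is simple enough to sketch: encrypt each bit of a random k-bit session key under the given 1-bit PKE scheme. The Python sketch below assumes a hypothetical interface pke_enc(pk, bit) / pke_dec(sk, ct) for the 1-bit scheme (not part of the paper); it shows only the trivial baseline used for the size comparison, not the paper's CCA-secure KEM.

    import secrets

    def bits_to_bytes(bits):
        # pack a list of bits (k assumed to be a multiple of 8) into bytes
        return bytes(
            sum(bit << j for j, bit in enumerate(bits[i:i + 8]))
            for i in range(0, len(bits), 8)
        )

    def bitwise_encaps(pke_enc, pk, k=128):
        # choose a random k-bit session key and encrypt it bit by bit
        key_bits = [secrets.randbits(1) for _ in range(k)]
        cts = [pke_enc(pk, b) for b in key_bits]
        return bits_to_bytes(key_bits), cts

    def bitwise_decaps(pke_dec, sk, cts):
        return bits_to_bytes([pke_dec(sk, c) for c in cts])
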
Selective Opening Security for Receivers

In a selective opening (SO) attack an adversary breaks into a subset of honestly created ciphertexts and tries to learn information on the plaintexts of some untouched (but potentially related) ciphertexts. Contrary to intuition, standard security notions do not always imply security against this type of adversary, making SO security an important standalone goal. In this paper we study receiver security, where the attacker is allowed to obtain the decryption keys corresponding to some of the ciphertexts. First we study the relation between two existing security definitions, one based on simulation and the other based on indistinguishability, and show that the former is strictly stronger. We continue with feasibility results for both notions, which we show can be achieved from (variants of) non-committing encryption schemes. In particular, we show that indistinguishability-based SO security can be achieved from a tweaked variant of non-committing encryption which, in turn, can be instantiated from a variety of basic, well-established assumptions. We conclude our study by showing that SO security is however strictly weaker than all variants of non-committing encryption that we consider, leaving potentially more efficient constructions as an interesting open problem.

Carmit Hazay, Arpita Patra, Bogdan Warinschi
Function-Hiding Inner Product Encryption

We extend the reach of functional encryption schemes that are provably secure under simple assumptions against unbounded collusion to include function-hiding inner product schemes. Our scheme is a private key functional encryption scheme, where ciphertexts correspond to vectors $\overrightarrow{x}$, secret keys correspond to vectors $\overrightarrow{y}$, and a decryptor learns $\langle\overrightarrow{x},\overrightarrow{y}\rangle$. Our scheme employs asymmetric bilinear maps and relies only on the SXDH assumption to satisfy a natural indistinguishability-based security notion where arbitrarily many key and ciphertext vectors can be simultaneously changed as long as the key-ciphertext dot product relationships are all preserved.

Allison Bishop, Abhishek Jain, Lucas Kowalczyk

ABE and IBE

Frontmatter
Idealizing Identity-Based Encryption

We formalize the standard application of identity-based encryption (IBE), namely non-interactive secure communication, as realizing an ideal system which we call delivery controlled channel (DCC). This system allows users to be registered (by a central authority) for an identity and to send messages securely to other users only known by their identity. Quite surprisingly, we show that existing security definitions for IBE are not sufficient to realize DCC. In fact, it is impossible to do so in the standard model. We show, however, how to adjust any IBE scheme that satisfies the standard security definition IND-ID-CPA to achieve this goal in the random oracle model. We also show that the impossibility result can be avoided in the standard model by considering a weaker ideal system that requires all users to be registered in an initial phase before any messages are sent. To achieve this, a weaker security notion, which we introduce and call IND-ID1-CPA, is actually sufficient. This justifies our new security definition and might open the door for more efficient schemes. We further investigate which ideal systems can be realized with schemes satisfying the standard notion and variants of selective security. As a contribution of independent interest, we show how to model features of an ideal system that are potentially available to dishonest parties but not guaranteed, and which such features arise when using IBE.

Dennis Hofheinz, Christian Matt, Ueli Maurer
A Framework for Identity-Based Encryption with Almost Tight Security

We show a framework for constructing identity-based encryption (IBE) schemes that are (almost) tightly secure in the multi-challenge and multi-instance setting. In particular, we formalize a new notion called broadcast encoding, analogously to encoding notions by Attrapadung (Eurocrypt 2014) and Wee (TCC 2014). We then show that it can be converted into such an IBE. By instantiating the framework using several encoding schemes (new or known ones), we obtain the following:
– We obtain (almost) tightly secure IBE in the multi-challenge, multi-instance setting, both in composite and prime-order groups. The latter resolves the open problem posed by Hofheinz et al. (PKC 2015).
– We obtain the first (almost) tightly secure IBE with sub-linear size public parameters (master public keys). In particular, we can set the size of the public parameters to constant at the cost of longer ciphertexts and private keys. This gives a partial solution to the open problem posed by Chen and Wee (Crypto 2013).
By applying (a variant of) the Canetti-Halevi-Katz transformation to our schemes, we obtain several CCA-secure PKE schemes with tight security in the multi-challenge, multi-instance setting. One of our schemes achieves very small ciphertext overhead, consisting of less than 12 group elements. This significantly improves the state-of-the-art construction by Libert et al. (in ePrint Archive) which requires 47 group elements. Furthermore, by modifying one of our IBE schemes obtained above, we can make it anonymous. This gives the first anonymous IBE whose security is almost tightly shown in the multi-challenge setting.

Nuttapong Attrapadung, Goichiro Hanaoka, Shota Yamada
Riding on Asymmetry: Efficient ABE for Branching Programs

In an Attribute-Based Encryption (ABE) scheme, the ciphertext encrypting a message μ is associated with a public attribute vector x, and a secret key sk_P is associated with a predicate P. The decryption returns μ if and only if P(x) = 1. ABE provides an efficient and simple mechanism for data sharing supporting fine-grained access control. Moreover, it is used as a critical component in constructions of succinct functional encryption, reusable garbled circuits, token-based obfuscation and more. In this work, we describe a new efficient ABE scheme for a family of branching programs with short secret keys and from a mild assumption. In particular, in our construction the size of the secret key for a branching program P is |P| + poly(λ), where λ is the security parameter. Our construction is secure assuming the standard Learning With Errors (LWE) problem with approximation factors n^{ω(1)}. Previous constructions relied on n^{O(log n)} approximation factors of LWE (resulting in a less efficient parameter instantiation) or had large secret keys of size |P| × poly(λ). We rely on techniques developed by Boneh et al. (EUROCRYPT '14) and Brakerski et al. (ITCS '14) in the context of ABE for circuits and fully-homomorphic encryption.

Sergey Gorbunov, Dhinakaran Vinayagamurthy
Conversions Among Several Classes of Predicate Encryption and Applications to ABE with Various Compactness Tradeoffs

Predicate encryption is an advanced form of public-key encryption that yields high flexibility in terms of access control. In the literature, many predicate encryption schemes have been proposed, such as fuzzy-IBE, KP-ABE, CP-ABE, (doubly) spatial encryption (DSE), and ABE for arithmetic span programs. In this paper, we study relations among them and show that some of them are in fact equivalent by giving conversions among them. More specifically, our main contributions are as follows:
– We show that monotonic, small-universe KP-ABE (CP-ABE) with bounds on the size of attribute sets and span programs (or linear secret sharing matrices) can be converted into DSE. Furthermore, we show that DSE implies non-monotonic CP-ABE (and KP-ABE) with the same bounds on parameters. This implies that monotonic/non-monotonic KP/CP-ABE (with the bounds) and DSE are all equivalent in the sense that one implies another.
– We also show that if we start from KP-ABE without bounds on the size of span programs (but with bounds on the size of attribute sets), we can obtain ABE for arithmetic span programs. The other direction is also shown: ABE for arithmetic span programs can be converted into KP-ABE. These results imply, somewhat surprisingly, that KP-ABE without bounds on span program sizes is in fact equivalent to ABE for arithmetic span programs, which was thought to be more expressive or at least incomparable.
By applying these conversions to existing schemes, we obtain many non-trivial consequences. We obtain the first non-monotonic, large-universe CP-ABE (that supports span programs) with constant-size ciphertexts, the first KP-ABE with constant-size private keys, the first (adaptively secure, multi-use) ABE for arithmetic span programs with constant-size ciphertexts, and more. We also obtain the first attribute-based signature scheme that supports non-monotone span programs and achieves constant-size signatures via our techniques.

Nuttapong Attrapadung, Goichiro Hanaoka, Shota Yamada

Zero-Knowledge

Frontmatter
QA-NIZK Arguments in Asymmetric Groups: New Tools and New Constructions

A sequence of recent works has constructed constant-size quasi-adaptive (QA) NIZK arguments of membership in linear subspaces of $\mathbb{\hat{G}}^m$, where $\mathbb{\hat{G}}$ is a group equipped with a bilinear map $e : \mathbb{\hat{G}} \times \mathbb{\check{H}} \rightarrow \mathbb{T}$. Although applicable to any bilinear group, these techniques are less useful in the asymmetric case. For example, Jutla and Roy (Crypto 2014) show how to do QA aggregation of Groth-Sahai proofs, but the types of equations which can be aggregated are more restricted in the asymmetric setting. Furthermore, there are natural statements which cannot be expressed as membership in linear subspaces, for example the satisfiability of quadratic equations. In this paper we develop specific techniques for asymmetric groups. We introduce a new computational assumption, under which we can recover all the aggregation results of Groth-Sahai proofs known in the symmetric setting. We adapt the arguments of membership in linear spaces of $\mathbb{\hat{G}}^m$ to linear subspaces of $\mathbb{\hat{G}}^m \times \mathbb{\check{H}}^n$. In particular, we give a constant-size argument that two sets of Groth-Sahai commitments, defined over the different groups $\mathbb{\hat{G}}$ and $\mathbb{\check{H}}$, open to the same scalars in $\mathbb{Z}_q$, a useful tool to prove satisfiability of quadratic equations in $\mathbb{Z}_q$. We then use one of the arguments for subspaces in $\mathbb{\hat{G}}^m \times \mathbb{\check{H}}^n$ and develop new techniques to give constant-size QA-NIZK proofs that a commitment opens to a bit-string. To the best of our knowledge, these are the first constant-size proofs for quadratic equations in $\mathbb{Z}_q$ under standard and falsifiable assumptions. As a result, we obtain improved threshold Groth-Sahai proofs for pairing product equations, ring signatures, proofs of membership in a list, and various types of signature schemes.

Alonso González, Alejandro Hevia, Carla Ràfols
Dual-System Simulation-Soundness with Applications to UC-PAKE and More

We introduce a novel concept of dual-system simulation-sound non-interactive zero-knowledge (NIZK) proofs. A dual-system NIZK proof system can be seen as a two-tier proof system. As opposed to the usual notion of zero-knowledge proofs, the dual system defines an intermediate partial-simulation world, where the proof simulator may have access to additional auxiliary information about the word, for example a membership bit, and simulation of proofs is only guaranteed if the membership bit is correct. Further, dual-system NIZK proofs allow a quasi-adaptive setting where the CRS can be generated based on language parameters. This allows for the further possibility that the partial-world CRS simulator may have access to additional trapdoors related to the language parameters. We show that for important hard languages like the Diffie-Hellman language, such dual-system proof systems can be given which allow unbounded partial simulation soundness, and which further allow transition between the partial-simulation world and the single-theorem full-simulation world even when proofs are sought on non-members. The construction is surprisingly simple, involving only two additional group elements for general linear-subspace languages in asymmetric bilinear pairing groups. As a direct application we give a short keyed-homomorphic CCA-secure encryption scheme. The ciphertext in this scheme consists of only six group elements (under the SXDH assumption) and the security reduction is tight. An earlier scheme of Libert et al. based on their efficient unbounded simulation-sound QA-NIZK proofs only provided a loose security reduction, and further had ciphertexts almost twice as long as ours. We also show a single-round universally composable password-authenticated key-exchange (UC-PAKE) protocol which is secure under adaptive corruption in the erasure model. The single message flow only requires four group elements under the SXDH assumption. This is the shortest known UC-PAKE even without considering adaptive corruption. The latest published scheme which considered adaptive corruption, by Abdalla et al. [ABB+13], required a non-constant number of group elements (more than 10 times the bit-size of the password).

Charanjit S. Jutla, Arnab Roy
Secret Sharing and Statistical Zero Knowledge

We show a general connection between various types of statistical zero-knowledge (SZK) proof systems and (unconditionally secure) secret sharing schemes. Viewed through the SZK lens, we obtain several new results on secret sharing:
– Characterizations: We obtain an almost-characterization of access structures for which there are secret-sharing schemes with an efficient sharing algorithm (but not necessarily efficient reconstruction). In particular, we show that for every language L ∈ SZK_L (the class of languages that have statistical zero-knowledge proofs with log-space verifiers and simulators), a (monotonized) access structure associated with L has such a secret-sharing scheme. Conversely, we show that such secret-sharing schemes can only exist for languages in SZK.
– Constructions: We show new constructions of secret-sharing schemes with both efficient sharing and efficient reconstruction for access structures associated with languages that are in P, but are not known to be in NC, namely Bounded-Degree Graph Isomorphism and constant-dimensional lattice problems. In particular, this gives us the first combinatorial access structure that is conjectured to be outside NC but has an efficient secret-sharing scheme. Previous such constructions (Beimel and Ishai; CCC 2001) were algebraic and number-theoretic in nature.
– Limitations: We also show that universally efficient secret-sharing schemes, where the complexity of computing the shares is a polynomial independent of the complexity of deciding the access structure, cannot exist for all (monotone languages in) P, unless there is a polynomial q such that P ⊆ DSPACE(q(n)).

Vinod Vaikuntanathan, Prashant Nalini Vasudevan
Compactly Hiding Linear Spans: Tightly Secure Constant-Size Simulation-Sound QA-NIZK Proofs and Applications

Quasi-adaptive non-interactive zero-knowledge (QA-NIZK) proofs are a recent paradigm, suggested by Jutla and Roy (Asiacrypt '13), which is motivated by the seminal Groth-Sahai techniques for efficient non-interactive zero-knowledge (NIZK) proofs. In this paradigm, the common reference string may depend on specific language parameters, a fact that allows much shorter proofs in important cases. It even makes certain standard model applications competitive with the Fiat-Shamir heuristic in the Random Oracle idealization. Such QA-NIZK proofs were recently optimized to constant size by Jutla and Roy (Crypto '14) and Libert et al. (Eurocrypt '14) for the important case of proving that a vector of group elements belongs to a linear subspace. While the QA-NIZK arguments of Libert et al. provide unbounded simulation-soundness and constant proof length, their simulation-soundness is only loosely related to the underlying assumption (with a gap proportional to the number of adversarial queries) and it is unknown how to alleviate this limitation without sacrificing efficiency. In this paper, we deal with the question of whether we can simultaneously optimize the proof size and the tightness of security reductions, allowing for important applications with tight security (which are typically quite lengthy) to be of shorter size. We resolve this question by designing a novel simulation-sound QA-NIZK argument showing that a vector $v \in \mathbb{G}^n$ belongs to a subspace of rank t < n using a constant number of group elements. Unlike previous short QA-NIZK proofs of such statements, the unbounded simulation-soundness of our system is nearly tightly related (i.e., the reduction only loses a factor proportional to the security parameter) to the standard Decision Linear assumption. To show simulation-soundness in the constrained context of tight reductions, we explicitly point at a technique, which may be of independent interest, of hiding the linear span of a vector defined by a signature (which is part of an OR proof). As an application, we design a public-key cryptosystem with almost tight CCA2-security in the multi-challenge, multi-user setting with improved length (asymptotically optimal for long messages). We also adapt our scheme to provide CCA security in the key-dependent message scenario (KDM-CCA2) with ciphertext length reduced by 75% when compared to the best known tightly secure KDM-CCA2 system so far.

Benoît Libert, Thomas Peters, Marc Joye, Moti Yung
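
For orientation, the linear-subspace statements addressed by these constant-size QA-NIZK arguments can be written as follows (standard notation, written additively "in the exponent"; this is generic background rather than the exact formulation of the paper): given a language parameter $M \in \mathbb{Z}_q^{n \times t}$ published as group elements $[M] \in \mathbb{G}^{n \times t}$, the language is

$$\mathcal{L}_{[M]} \;=\; \big\{\, [\mathbf{v}] \in \mathbb{G}^{n} \;:\; \exists\, \mathbf{w} \in \mathbb{Z}_q^{t} \text{ such that } \mathbf{v} = M\mathbf{w} \,\big\},$$

i.e., the prover shows that a vector of group elements lies in the column span of $[M]$ without revealing the witness $\mathbf{w}$.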

Multiparty Computation II

Frontmatter
A Unified Approach to MPC with Preprocessing Using OT

SPDZ, TinyOT and MiniMAC are a family of MPC protocols based on secret sharing with MACs, where a preprocessing stage produces multiplication triples in a finite field. This work describes new protocols for generating multiplication triples in fields of characteristic two using OT extensions. Before this work, TinyOT, which works on binary circuits, was the only protocol in this family using OT extensions. Previous SPDZ protocols for triples in large finite fields require somewhat homomorphic encryption, which leads to very inefficient runtimes in practice, while no dedicated preprocessing protocol for MiniMAC (which operates on vectors of small field elements) was previously known. Since actively secure OT extensions can be performed very efficiently using only symmetric primitives, it is highly desirable to base MPC protocols on these rather than expensive public key primitives. We analyze the practical efficiency of our protocols, showing that they should all perform favorably compared with previous works; we estimate our protocol for SPDZ triples in $\mathbb{F}_{{2}^{40}}$ will perform around 2 orders of magnitude faster than the best known previous protocol.

Tore Kasper Frederiksen, Marcel Keller, Emmanuela Orsini, Peter Scholl
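
As context for what the preprocessing produces, the Python sketch below shows how one multiplication triple (a, b, c = a·b) is consumed in the online phase to multiply two additively secret-shared values. MACs, networking and the OT-based triple generation that the paper actually constructs are all omitted, and the modulus is an arbitrary prime chosen for illustration.

    import random

    q = (1 << 61) - 1  # illustrative prime modulus

    def share(x, n=2):
        # additive secret sharing of x modulo q among n parties
        s = [random.randrange(q) for _ in range(n - 1)]
        return s + [(x - sum(s)) % q]

    def reveal(shares):
        return sum(shares) % q

    def mul_gate(x_sh, y_sh, a_sh, b_sh, c_sh):
        # Beaver's trick: open epsilon = x - a and delta = y - b, then
        # x*y = c + epsilon*b + delta*a + epsilon*delta
        eps = reveal([(xi - ai) % q for xi, ai in zip(x_sh, a_sh)])
        dlt = reveal([(yi - bi) % q for yi, bi in zip(y_sh, b_sh)])
        z_sh = [(ci + eps * bi + dlt * ai) % q
                for ci, ai, bi in zip(c_sh, a_sh, b_sh)]
        z_sh[0] = (z_sh[0] + eps * dlt) % q  # the public term is added by one party only
        return z_sh

    # usage: a fresh preprocessed triple turns multiplication into cheap openings
    a, b = random.randrange(q), random.randrange(q)
    x, y = 12345, 67890
    z = mul_gate(share(x), share(y), share(a), share(b), share(a * b % q))
    assert reveal(z) == (x * y) % q
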
Secure Computation from Millionaire

The standard method for designing a secure computation protocol for a function f first transforms f into either a circuit or a RAM program and then applies a generic secure computation protocol that either handles boolean gates or translates the RAM program into oblivious RAM instructions. In this paper, we show a large class of functions for which a different, iterative approach to secure computation results in more efficient protocols. The first such examples of this technique were presented by Aggarwal, Mishra, and Pinkas (J. of Cryptology, 2010) for computing the median; later, Brickell and Shmatikov (Asiacrypt 2005) showed a similar technique for shortest path problems. We generalize the technique in both of those works and show that it applies to a large class of problems including certain matroid optimizations, sub-modular optimization, convex hulls, and other scheduling problems. The crux of our technique is to securely reduce these problems to secure comparison operations and to employ the idea of gradually releasing part of the output. We then identify conditions under which both of these techniques for protocol design are compatible with achieving simulation-based security in the honest-but-curious and covert adversary models. In special cases, such as the median, we also show how to achieve malicious security.

Abhi Shelat, Muthuramakrishnan Venkitasubramaniam
Garbling Scheme for Formulas with Constant Size of Garbled Gates

We provide a garbling scheme which creates garbled circuits of a very small constant size (four bits per gate) for circuits with fan-out one (formulas). For arbitrary fan-out, we additionally need only two ciphertexts per additional connection of each gate output wire. We make use of a trapdoor permutation for which we define a generalized notion of correlation robustness. We show that our notion is implied by PRIV security, a notion for deterministic (searchable) encryption. We prove our scheme secure in the programmable random oracle model.

Carmen Kempka, Ryo Kikuchi, Susumu Kiyoshima, Koutarou Suzuki
Card-Based Cryptographic Protocols Using a Minimal Number of Cards

Secure multiparty computation can be done with a deck of playing cards. For example, den Boer (EUROCRYPT '89) devised his famous "five-card trick", which is a secure two-party AND protocol using five cards. However, the output of the protocol is revealed in the process and it is therefore not suitable for general circuits with hidden intermediate results. To overcome this limitation, protocols in committed format, i.e., with concealed output, have been introduced, among them the six-card AND protocol of Mizuki and Sone (FAW 2009). In their paper, the authors ask whether six cards are minimal for committed format AND protocols. We give a comprehensive answer to this problem: there is a four-card AND protocol with a runtime that is finite in expectation (i.e., a Las Vegas protocol), but no protocol with finite runtime. Moreover, we show that five cards are sufficient for finite runtime. In other words, improving on "The Five-Card Trick can be done with four cards" (Mizuki, Kumamoto and Sone, ASIACRYPT 2012), our results can be stated as "The Five-Card Trick can be done in committed format" and furthermore it "can be done with four cards in Las Vegas committed format". By devising a Las Vegas protocol for any k-ary boolean function using 2k cards, we address the open question posed by Nishida et al. (TAMC 2015) on whether 2k + 6 cards are necessary for computing any k-ary boolean function. For this we use the shuffle abstraction as introduced in the computational model of card-based protocols by Mizuki and Shizuya (Int. J. Inf. Secur., 2014). We augment this result by a discussion on implementing such general shuffle operations.

Alexander Koch, Stefan Walzer, Kevin Härtel
Backmatter
Metadata
Title
Advances in Cryptology -- ASIACRYPT 2015
Edited by
Tetsu Iwata
Jung Hee Cheon
Copyright Year
2015
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-48797-6
Print ISBN
978-3-662-48796-9
DOI
https://doi.org/10.1007/978-3-662-48797-6
