
2018 | Book

Advances in Cryptology – EUROCRYPT 2018

37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Tel Aviv, Israel, April 29 – May 3, 2018, Proceedings, Part I


About this Book

The three volumes LNCS 10820, 10821, and 10822 constitute the thoroughly refereed proceedings of the 37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT 2018, held in Tel Aviv, Israel, in April/May 2018.
The 69 full papers presented were carefully reviewed and selected from 294 submissions. The papers are organized into the following topical sections: foundations; lattices; random oracle model; fully homomorphic encryption; permutations; galois counter mode; attribute-based encryption; secret sharing; blockchain; multi-collision resistance; signatures; private simultaneous messages; masking; theoretical multiparty computation; obfuscation; symmetric cryptanalysis; zero-knowledge; implementing multiparty computation; non-interactive zero-knowledge; anonymous communication; isogeny; leakage; key exchange; quantum; non-malleable codes; and provable symmetric cryptography.

Table of Contents

Frontmatter

Foundations

Frontmatter
On the Bit Security of Cryptographic Primitives
Abstract
We introduce a formal quantitative notion of “bit security” for a general type of cryptographic games (capturing both decision and search problems), aimed at capturing the intuition that a cryptographic primitive with k-bit security is as hard to break as an ideal cryptographic function requiring a brute force attack on a k-bit key space. Our new definition matches the notion of bit security commonly used by cryptographers and cryptanalysts when studying search (e.g., key recovery) problems, where the use of the traditional definition is well established. However, it produces a quantitatively different metric in the case of decision (indistinguishability) problems, where the use of (a straightforward generalization of) the traditional definition is more problematic and leads to a number of paradoxical situations or mismatches between theoretical/provable security and practical/common sense intuition. Key to our new definition is to consider adversaries that may explicitly declare failure of the attack. We support and justify the new definition by proving a number of technical results, including tight reductions between several standard cryptographic problems, a new hybrid theorem that preserves bit security, and an application to the security analysis of indistinguishability primitives making use of (approximate) floating point numbers. This is the first result showing that (standard precision) 53-bit floating point numbers can be used to achieve 100-bit security in the context of cryptographic primitives with general indistinguishability-based security definitions. Previous results of this type applied only to search problems, or special types of decision problems.
Daniele Micciancio, Michael Walter
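The quantitative gap between the traditional and the new bit-security metric can be illustrated with a small calculation. The formulas below are a simplified reading of the metrics discussed in the abstract (the exact definitions in the paper are more involved), and the parameter values are illustrative.

```python
import math

def bits_search(T, eps):
    # Traditional metric for search games: log2(cost / success probability).
    return math.log2(T / eps)

def bits_decision_traditional(T, delta):
    # Straightforward generalization to decision games: log2(cost / advantage).
    return math.log2(T / delta)

def bits_decision_new(T, delta):
    # Simplified reading of the new metric: a distinguisher with advantage
    # delta is scored roughly like a searcher succeeding with probability
    # delta^2, avoiding the paradoxes of the traditional generalization.
    return math.log2(T / delta ** 2)

# A one-query distinguisher with advantage 2^-40 against a "100-bit" scheme:
T, delta = 1, 2 ** -40
print(bits_decision_traditional(T, delta))  # 40.0 -- looks like a 40-bit break
print(bits_decision_new(T, delta))          # 80.0 -- much closer to intuition
```

Under the traditional reading, any distinguisher with tiny advantage would already cap the claimed bit security; the squared-advantage accounting is what removes that mismatch for decision problems.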
On the Gold Standard for Security of Universal Steganography
Abstract
While symmetric-key steganography is quite well understood both in the information-theoretic and in the computational setting, many fundamental questions about its public-key counterpart resist persistent attempts to solve them. The computational model for public-key steganography was proposed by von Ahn and Hopper in EUROCRYPT 2004. At TCC 2005, Backes and Cachin gave the first universal public-key stegosystem – i.e. one that works on all channels – achieving security against replayable chosen-covertext attacks (ss-rcca) and asked whether security against non-replayable chosen-covertext attacks (ss-cca) is achievable. Later, Hopper (ICALP 2005) provided such a stegosystem for every efficiently sampleable channel, but did not achieve universality. He posed the question whether universality and ss-cca-security can be achieved simultaneously. No progress on this question has been achieved since more than a decade. In our work we solve Hopper’s problem in a somehow complete manner: As our main positive result we design an ss-cca-secure stegosystem that works for every memoryless channel. On the other hand, we prove that this result is the best possible in the context of universal steganography. We provide a family of 0-memoryless channels – where the already sent documents have only marginal influence on the current distribution – and prove that no ss-cca-secure steganography for this family exists in the standard non-look-ahead model.
Sebastian Berndt, Maciej Liśkiewicz
Memory Lower Bounds of Reductions Revisited
Abstract
In Crypto 2017, Auerbach et al. initiated the study of memory-tight reductions and proved two negative results on the memory-tightness of restricted black-box reductions from multi-challenge security to single-challenge security for signatures and an artificial hash function. In this paper, we revisit the results by Auerbach et al. and show that for a large class of reductions treating multi-challenge security, it is impossible to avoid loss of memory-tightness unless we sacrifice running-time efficiency. Specifically, we show three lower bound results. Firstly, we show a memory lower bound of natural black-box reductions from the multi-challenge unforgeability of unique signatures to any computational assumption. Then we show a lower bound of restricted reductions from multi-challenge security to single-challenge security for a wide class of cryptographic primitives with unique keys in the multi-user setting. Finally, we extend the lower bound result shown by Auerbach et al. treating a hash function to one treating any hash function with a large domain.
Yuyu Wang, Takahiro Matsuda, Goichiro Hanaoka, Keisuke Tanaka
Fiat-Shamir and Correlation Intractability from Strong KDM-Secure Encryption
Abstract
A hash function family is called correlation intractable if for all sparse relations, it is hard to find, given a random function from the family, an input-output pair that satisfies the relation (Canetti et al., STOC 1998). Correlation intractability (CI) captures a strong Random-Oracle-like property of hash functions. In particular, when security holds for all sparse relations, CI suffices for guaranteeing the soundness of the Fiat-Shamir transformation from any constant round, statistically sound interactive proof to a non-interactive argument. However, to date, the only CI hash function for all sparse relations (Kalai et al., Crypto 2017) is based on general program obfuscation with exponential hardness properties.
We construct a simple CI hash function for arbitrary sparse relations, from any symmetric encryption scheme that satisfies some natural structural properties, and in addition guarantees that key recovery attacks mounted by polynomial-time adversaries have only exponentially small success probability – even in the context of key-dependent messages (KDM). We then provide parameter settings where ElGamal encryption and Regev encryption plausibly satisfy the needed properties. Our techniques are based on those of Kalai et al., with the main contribution being substituting a statistical argument for the use of obfuscation, thereby greatly simplifying the construction and basing security on better-understood intractability assumptions.
In addition, we extend the definition of correlation intractability to handle moderately sparse relations so as to capture the properties required in proof-of-work applications (e.g. Bitcoin). We also discuss the applicability of our constructions and analyses in that regime.
Ran Canetti, Yilei Chen, Leonid Reyzin, Ron D. Rothblum
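To make the Fiat-Shamir transformation concrete, here is a toy non-interactive Schnorr proof in which SHA-256 stands in for the hash family; the abstract's question is precisely which hash properties make this transformation sound. The group, generator, and modulus below are illustrative toy choices, not the paper's CI construction.

```python
import hashlib
import secrets

# Toy Fiat-Shamir transform of Schnorr's identification protocol.
P = 2 ** 127 - 1               # Mersenne prime modulus (toy parameter)
g = 3                          # fixed base
x = secrets.randbelow(P - 1)   # prover's witness
h = pow(g, x, P)               # public statement: h = g^x mod P

def fs_challenge(*vals):
    # The Fiat-Shamir step: the verifier's challenge is derived by
    # hashing the transcript instead of being sent interactively.
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (P - 1)

def prove():
    r = secrets.randbelow(P - 1)
    a = pow(g, r, P)                 # prover's commitment
    c = fs_challenge(g, h, a)        # hash replaces the verifier
    z = (r + c * x) % (P - 1)        # response
    return a, z

def verify(a, z):
    c = fs_challenge(g, h, a)
    return pow(g, z, P) == a * pow(h, c, P) % P

assert verify(*prove())
```

Soundness of this non-interactive variant is exactly what correlation intractability of the hash family is meant to guarantee.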

Lattices

Frontmatter
Shortest Vector from Lattice Sieving: A Few Dimensions for Free
Abstract
Asymptotically, the best known algorithms for solving the Shortest Vector Problem (SVP) in a lattice of dimension n are sieve algorithms, which have heuristic complexity estimates ranging from \((4/3)^{n+o(n)}\) down to \((3/2)^{n/2 +o(n)}\) when Locality Sensitive Hashing techniques are used. Sieve algorithms are however outperformed by pruned enumeration algorithms in practice by several orders of magnitude, despite the larger super-exponential asymptotic complexity \(2^{\varTheta (n \log n)}\) of the latter.
In this work, we show a concrete improvement of sieve-type algorithms. Precisely, we show that a few calls to the sieve algorithm in lattices of dimension less than \(n-d\) solves SVP in dimension n, where \(d = \varTheta (n/\log n)\).
Although our improvement is only sub-exponential, its practical effect in relevant dimensions is quite significant. We implemented it over a simple sieve algorithm with \((4/3)^{n+o(n)}\) complexity, and it outperforms the best sieve algorithms from the literature by a factor of 10 in dimensions 70–80. It runs less than an order of magnitude slower than pruned enumeration in the same range.
By design, this improvement can also be applied to most other variants of sieve algorithms, including LSH sieve algorithms and tuple-sieve algorithms. In this light, we may expect sieve techniques to outperform pruned enumeration in practice in the near future.
Léo Ducas
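A back-of-the-envelope calculation shows why even a sub-exponential saving of \(d = \varTheta(n/\log n)\) dimensions matters in practice. The constant in front of n/log n is an assumption here (the paper derives the precise value), and the o(n) terms of the sieve cost are ignored.

```python
import math

def sieve_cost(n):
    # Heuristic cost (4/3)^(n + o(n)) of the basic sieve, o(n) terms ignored.
    return (4 / 3) ** n

def dims_for_free(n, c=1.0):
    # d = Theta(n / log n); the constant c = 1.0 is an illustrative assumption.
    return int(c * n / math.log(n))

for n in (70, 80, 100):
    d = dims_for_free(n)
    speedup = sieve_cost(n) / sieve_cost(n - d)
    print(f"n = {n}: solve SVP via sieves in dimension {n - d}, "
          f"heuristic speedup ~{speedup:.0f}x")
```

Even for n around 70–80 the predicted saving is roughly two orders of magnitude, consistent with the factor-10-plus improvements reported in the abstract once implementation overheads are accounted for.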
On the Ring-LWE and Polynomial-LWE Problems
Abstract
The Ring Learning With Errors problem (\(\mathsf {RLWE}\)) comes in various forms. Vanilla \(\mathsf {RLWE}\) is the decision dual-\(\mathsf {RLWE}\) variant, consisting in distinguishing from uniform a distribution depending on a secret belonging to the dual \(\mathcal {O}_K^{\vee }\) of the ring of integers \(\mathcal {O}_K\) of a specified number field K. In primal-\(\mathsf {RLWE}\), the secret instead belongs to \(\mathcal {O}_K\). Both decision dual-\(\mathsf {RLWE}\) and primal-\(\mathsf {RLWE}\) enjoy search counterparts. Also widely used is (search/decision) Polynomial Learning With Errors (\(\mathsf {PLWE}\)), which is not defined using a ring of integers \(\mathcal {O}_K\) of a number field K but a polynomial ring \(\mathbb {Z}[x]/f\) for a monic irreducible \(f \in \mathbb {Z}[x]\). We show that there exist reductions between all of these six problems that incur limited parameter losses. More precisely: we prove that the (decision/search) dual to primal reduction from Lyubashevsky et al. [EUROCRYPT 2010] and Peikert [SCN 2016] can be implemented with a small error rate growth for all rings (the resulting reduction is non-uniform polynomial time); we extend it to polynomial-time reductions between (decision/search) primal \(\mathsf {RLWE}\) and \(\mathsf {PLWE}\) that work for a family of polynomials f that is exponentially large as a function of \(\deg f\) (the resulting reduction is also non-uniform polynomial time); and we exploit the recent technique from Peikert et al. [STOC 2017] to obtain a search to decision reduction for \(\mathsf {RLWE}\) for arbitrary number fields. The reductions incur error rate increases that depend on intrinsic quantities related to K and f.
Miruna Rosca, Damien Stehlé, Alexandre Wallet
Faster Gaussian Sampling for Trapdoor Lattices with Arbitrary Modulus
Abstract
We present improved algorithms for Gaussian preimage sampling using the lattice trapdoors of (Micciancio and Peikert, CRYPTO 2012). The MP12 work only offered a highly optimized algorithm for the on-line stage of the computation in the special case when the lattice modulus q is a power of two. For arbitrary modulus q, the MP12 preimage sampling procedure resorted to general lattice algorithms with complexity cubic in the bitsize of the modulus (or quadratic, but with substantial preprocessing and storage overheads). Our new preimage sampling algorithm (for any modulus q) achieves linear complexity with very modest storage requirements, and experimentally outperforms the generic method of MP12 already for small values of q. As an additional contribution, we give a new, quasi-linear time algorithm for the off-line perturbation sampling phase of MP12 in the ring setting. Our algorithm is based on a variant of the Fast Fourier Orthogonalization (FFO) algorithm of (Ducas and Prest, ISSAC 2016), but avoids the need to precompute and store the FFO matrix by a careful rearrangement of the operations. All our algorithms are fairly simple, with small hidden constants, and offer a practical alternative for using MP12 trapdoor lattices in a broad range of cryptographic applications.
Nicholas Genise, Daniele Micciancio
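For orientation, here is the textbook rejection sampler for a discrete Gaussian over the integers — the naive baseline that trapdoor preimage samplers such as MP12 build on. The tail-cut parameter tau = 6 is a common heuristic choice, not a parameter from the paper.

```python
import math
import random

def sample_z_gaussian(sigma, tau=6):
    # Naive rejection sampling from the discrete Gaussian D_{Z, sigma}:
    # propose uniformly in the tail-cut window [-tau*sigma, tau*sigma] and
    # accept with probability proportional to exp(-z^2 / (2 sigma^2)).
    bound = int(math.ceil(tau * sigma))
    while True:
        z = random.randint(-bound, bound)
        if random.random() < math.exp(-z * z / (2 * sigma * sigma)):
            return z

samples = [sample_z_gaussian(3.0) for _ in range(1000)]
print(min(samples), max(samples))
```

Preimage sampling for a trapdoor lattice reduces to many correlated samples of this kind; the paper's contribution is doing that stage in linear time for arbitrary modulus, which this baseline does not capture.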
Short, Invertible Elements in Partially Splitting Cyclotomic Rings and Applications to Lattice-Based Zero-Knowledge Proofs
Abstract
When constructing practical zero-knowledge proofs based on the hardness of the Ring-LWE or the Ring-SIS problems over polynomial rings \(\mathbb {Z}_p[X]/(X^n+1)\), it is often necessary that the challenges come from a set \(\mathcal {C}\) that satisfies three properties: the set should be large (around \(2^{256}\)), the elements in it should have small norms, and all the non-zero elements in the difference set \(\mathcal {C}-\mathcal {C}\) should be invertible. The first two properties are straightforward to satisfy, while the third one requires us to make efficiency compromises. We can either work over rings where the polynomial \(X^n+1\) only splits into two irreducible factors modulo p, which makes the speed of the multiplication operation in the ring sub-optimal; or we can limit our challenge set to polynomials of smaller degree, which requires them to have (much) larger norms.
In this work we show that one can use the optimal challenge sets \(\mathcal {C}\) and still have the polynomial \(X^n+1\) split into more than two factors. This comes as a direct application of our more general result that states that all non-zero polynomials with “small” coefficients in the cyclotomic ring \(\mathbb {Z}_p[X]/(\varPhi _m(X))\) are invertible (where “small” depends on the size of p and how many irreducible factors the \(m^{th}\) cyclotomic polynomial \(\varPhi _m(X)\) splits into). We furthermore establish sufficient conditions for p under which \(\varPhi _m(X)\) will split in such fashion.
For the purposes of implementation, if the polynomial \(X^n+1\) splits into k factors, we can run FFT for \(\log {k}\) levels until switching to Karatsuba multiplication. Experimentally, we show that increasing the number of levels from one to three or four results in a speedup by a factor of \(\approx 2\) – 3. We point out that this improvement comes completely for free simply by choosing a modulus p that has certain algebraic properties. In addition to the speed improvement, having the polynomial split into many factors has other applications – e.g. when one embeds information into the Chinese Remainder representation of the ring elements, the more the polynomial splits, the more information one can embed into an element.
Vadim Lyubashevsky, Gregor Seiler

Random Oracle Model

Frontmatter
Random Oracles and Non-uniformity
Abstract
We revisit security proofs for various cryptographic primitives in the auxiliary-input random-oracle model (AI-ROM), in which an attacker \(\mathcal A\) can compute arbitrary S bits of leakage about the random oracle \(\mathcal O\) before attacking the system and then use additional T oracle queries to \(\mathcal O\) during the attack. This model has natural applications in settings where traditional random-oracle proofs are not useful: (a) security against non-uniform attackers; (b) security against preprocessing. We obtain a number of new results about the AI-ROM:
  • Unruh (CRYPTO’07) introduced the pre-sampling technique, which generically reduces security proofs in the AI-ROM to a much simpler P-bit-fixing random-oracle model (BF-ROM), where the attacker can arbitrarily fix the values of \(\mathcal O\) on some P coordinates, but then the remaining coordinates are chosen at random. Unruh’s security loss for this transformation is \(\sqrt{ST/P}\). We improve this loss to the optimal value O(ST / P), obtaining nearly tight bounds for a variety of indistinguishability applications in the AI-ROM.
  • While the basic pre-sampling technique cannot give tight bounds for unpredictability applications, we introduce a novel “multiplicative version” of pre-sampling, which allows one to dramatically reduce the size P of the pre-sampled set to \(P=O(ST)\) and yields nearly tight security bounds for a variety of unpredictability applications in the AI-ROM. Qualitatively, it validates Unruh’s “polynomial pre-sampling conjecture”—disproved in general by Dodis et al. (EUROCRYPT’17)—for the special case of unpredictability applications.
  • Using our techniques, we reprove nearly all AI-ROM bounds obtained by Dodis et al. (using a much more laborious compression technique), but we also apply it to many settings where the compression technique is either inapplicable (e.g., computational reductions) or appears intractable (e.g., Merkle-Damgård hashing).
  • We show that for any salted Merkle-Damgård hash function with m-bit output there exists a collision-finding circuit of size \(\varTheta (2^{m/3})\) (taking salt as the input), which is significantly below the \(2^{m/2}\) birthday security conjectured against uniform attackers.
  • We build two compilers to generically extend the security of applications proven in the traditional ROM to the AI-ROM. One compiler simply prepends a public salt to the random oracle, showing that salting generically provably defeats preprocessing.
Overall, our results make it much easier to get concrete security bounds in the AI-ROM. These bounds in turn give concrete conjectures about the security of these applications (in the standard model) against non-uniform attackers.
Sandro Coretti, Yevgeniy Dodis, Siyao Guo, John Steinberger
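The improvement from \(\sqrt{ST/P}\) to O(ST/P) has a direct quantitative consequence for how large the pre-sampled set P must be to reach a target security loss. The parameter values below are examples, not numbers from the paper.

```python
from math import log2

S, T = 2 ** 20, 2 ** 40     # example leakage size and online query budget
target = 2 ** -40           # acceptable additive loss in the BF-ROM reduction

# Unruh's loss sqrt(S*T/P) <= target  requires  P >= S*T / target^2,
# while the improved loss   S*T/P    <= target  requires  P >= S*T / target.
P_unruh = S * T / target ** 2
P_tight = S * T / target
print(log2(P_unruh), log2(P_tight))  # 140.0 100.0
```

The tight bound shrinks the required pre-sampled set by a factor of 1/target, which is exactly what makes concrete AI-ROM bounds practical to state.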
Another Step Towards Realizing Random Oracles: Non-malleable Point Obfuscation
Abstract
The random oracle paradigm allows us to analyze the security of protocols and constructions in an idealized model, where all parties have access to a truly random function. This is one of the most successful and well-studied models in cryptography. However, being such a strong idealized model, it is known to be susceptible to various weaknesses when implemented naively in “real-life”, as shown by Canetti, Goldreich and Halevi (J. ACM 2004).
As a counter-measure, one could try to identify and implement only one or a few of the properties of a random oracle that are needed for a specific setting. Such a systematic study was initiated by Canetti (CRYPTO 1997), who showed how to implement the property that the output of the function does not reveal anything regarding the input by constructing a point-function obfuscator. This property turned out to suffice in many follow-up works and applications.
In this work, we tackle another natural property of random oracles and implement it in the standard model. The property we focus on is non-malleability, where it is guaranteed that the output on an input cannot be used to generate the output on any related point. We construct a point-obfuscator that is both point-hiding (à la Canetti) and non-malleable. The cost of our construction is a single exponentiation on top of Canetti’s construction, and it can be used in any application where point obfuscators are used, yielding improved security guarantees. The security of our construction relies on variants of the DDH and power-DDH assumptions.
On the technical side, we introduce a new technique for proving security of a construction based on a DDH-like assumption. We call this technique “double-exponentiation” and believe it will be useful in the future.
Ilan Komargodski, Eylon Yogev
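For context, Canetti's basic point obfuscator (the construction the abstract builds on) can be sketched in a few lines: to hide a point x, publish a random group element r together with \(r^x\). The Mersenne-prime group below is a toy choice for illustration, and the paper's non-malleability layer (the extra exponentiation) is not shown.

```python
import secrets

P = 2 ** 127 - 1  # Mersenne prime; toy group Z_P^* (illustrative parameters)

def obfuscate_point(x):
    # Canetti-style point obfuscation: output (r, r^x) for random r.
    # Hiding rests on DDH-like assumptions; only correctness is shown here.
    r = secrets.randbelow(P - 2) + 2
    return (r, pow(r, x, P))

def evaluate(obf, y):
    # The obfuscated program: accept y exactly when r^y matches r^x.
    r, rx = obf
    return pow(r, y, P) == rx

obf = obfuscate_point(1234)
print(evaluate(obf, 1234), evaluate(obf, 999))  # True False
```

Non-malleability then asks that no adversary can turn such a pair (r, r^x) into a valid obfuscation of a related point, which is the property this paper adds.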
The Wonderful World of Global Random Oracles
Abstract
The random-oracle model by Bellare and Rogaway (CCS’93) is an indispensable tool for the security analysis of practical cryptographic protocols. However, the traditional random-oracle model fails to guarantee security when a protocol is composed with arbitrary protocols that use the same random oracle. Canetti, Jain, and Scafuro (CCS’14) put forth a global but non-programmable random oracle in the Generalized UC framework and showed that some basic cryptographic primitives with composable security can be efficiently realized in their model. Because their random-oracle functionality is non-programmable, there are many practical protocols that have no hope of being proved secure using it. In this paper, we study alternative definitions of a global random oracle and, perhaps surprisingly, show that these allow one to prove GUC-secure existing, very practical realizations of a number of essential cryptographic primitives including public-key encryption, non-committing encryption, commitments, Schnorr signatures, and hash-and-invert signatures. Some of our results hold generically for any suitable scheme proven secure in the traditional ROM, some hold for specific constructions only. Our results include many highly practical protocols, for example, the folklore commitment scheme \(\mathcal {H}(m\Vert r)\) (where m is a message and r is the random opening information) which is far more efficient than the construction of Canetti et al.
Jan Camenisch, Manu Drijvers, Tommaso Gagliardoni, Anja Lehmann, Gregory Neven

Fully Homomorphic Encryption

Frontmatter
Homomorphic Lower Digits Removal and Improved FHE Bootstrapping
Abstract
Bootstrapping is a crucial operation in Gentry’s breakthrough work on fully homomorphic encryption (FHE), where a homomorphic encryption scheme evaluates its own decryption algorithm. There have been a couple of implementations of bootstrapping, among which HElib arguably marks the state-of-the-art in terms of throughput, ciphertext/message size ratio and support for large plaintext moduli.
In this work, we applied a family of “lowest digit removal” polynomials to design an improved homomorphic digit extraction algorithm, a crucial part of bootstrapping for both the FV and BGV schemes. When the secret key has 1-norm \(h=||s||_1\) and the plaintext modulus is \(t = p^r\), we achieved bootstrapping depth \(\log h + \log ( \log _p(ht))\) in the FV scheme. In the case of the BGV scheme, we brought down the depth from \(\log h + 2 \log t\) to \(\log h + \log t\).
We implemented bootstrapping for FV in the SEAL library. We also introduced another “slim mode”, which restricts the plaintexts to batched vectors in \(\mathbb {Z}_{p^r}\). The slim mode has similar throughput to the full mode, while each individual run is much faster and uses much smaller memory. For example, bootstrapping takes 6.75 s for vectors over GF(127) with 64 slots and 1381 s for vectors over \(GF(257^{128})\) with 128 slots. We also implemented our improved digit extraction procedure for the BGV scheme in HElib.
Hao Chen, Kyoohyung Han
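One algebraic fact underlying homomorphic digit extraction can be verified directly: raising to the p-th power "forgets" everything above the lowest base-p digit when working modulo \(p^2\). The lowest-digit-removal polynomials in the paper generalize this observation to arbitrary precision; the sketch below only checks the base case.

```python
p = 5

# (x + kp)^p = x^p + C(p,1) * x^(p-1) * kp + ...  ==  x^p  (mod p^2),
# so x^p mod p^2 depends only on the lowest base-p digit of x --
# the hook that lets bootstrapping peel digits off a noisy value.
for z0 in range(p):
    images = {pow(z0 + p * k, p, p * p) for k in range(p)}
    assert len(images) == 1          # one image per lowest digit
    assert next(iter(images)) % p == z0  # and it still equals z0 mod p (Fermat)

print("x^p mod p^2 is determined by x mod p")
```

Subtracting such a power from the original value therefore cancels the lowest digit, and iterating (with the paper's improved polynomials) removes as many lower digits as needed during bootstrapping.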
Homomorphic SIMD Operations: Single Instruction Much More Data
Abstract
In 2014, Smart and Vercauteren introduced a packing technique for homomorphic encryption schemes by decomposing the plaintext space using the Chinese Remainder Theorem. This technique allows one to encrypt multiple data values simultaneously into one ciphertext and execute Single Instruction Multiple Data operations homomorphically. In this paper we improve and generalize their results by introducing a flexible Laurent polynomial encoding technique and by using a more fine-grained CRT decomposition of the plaintext space. The Laurent polynomial encoding provides a convenient common framework for all conventional ways in which input data types can be represented, e.g. finite field elements, integers, rationals, floats and complex numbers. Our methods greatly increase the packing capacity of the plaintext space, as well as one’s flexibility in optimizing the system parameters with respect to efficiency and/or security.
Wouter Castryck, Ilia Iliashenko, Frederik Vercauteren
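The core CRT packing idea can be demonstrated on a toy plaintext ring \(\mathbb{Z}_{17}[X]/(X^4+1)\), which splits into four linear factors since 17 ≡ 1 mod 8: four slot values are packed into one ring element, and ring multiplication acts slot-wise. This illustrates only the SIMD principle, not the paper's Laurent encoding or fine-grained decomposition.

```python
p, n = 17, 4
roots = [r for r in range(p) if pow(r, n, p) == p - 1]  # roots of X^4+1 mod 17

def unpack(f):
    # Evaluating at each root of X^4+1 projects onto the CRT slots.
    return [sum(c * pow(r, i, p) for i, c in enumerate(f)) % p for r in roots]

def pack(slots):
    # Lagrange interpolation mod p: the unique degree-<4 poly hitting the slots.
    f = [0] * n
    for r, v in zip(roots, slots):
        num, den = [1], 1
        for s in roots:
            if s != r:
                num = [(a - s * b) % p for a, b in zip([0] + num, num + [0])]
                den = den * (r - s) % p
        coef = v * pow(den, -1, p) % p
        f = [(a + coef * b) % p for a, b in zip(f, num)]
    return f

def ring_mul(f, g):
    # Negacyclic convolution: multiplication modulo X^4 + 1 and p.
    res = [0] * n
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            k = i + j
            res[k % n] = (res[k % n] + (a * b if k < n else -a * b)) % p
    return res

x, y = pack([3, 1, 4, 1]), pack([2, 5, 2, 5])
print(unpack(ring_mul(x, y)))  # slot-wise products: [6, 5, 8, 5]
```

One ring multiplication thus performs four independent slot multiplications at once; the paper's techniques enlarge how much data fits into each such element.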
Bootstrapping for Approximate Homomorphic Encryption
Abstract
This paper extends the leveled homomorphic encryption scheme for approximate arithmetic of Cheon et al. (ASIACRYPT 2017) to a fully homomorphic encryption, i.e., we propose a new technique to refresh low-level ciphertexts based on Gentry’s bootstrapping procedure. The modular reduction operation is the main bottleneck in the homomorphic evaluation of the decryption circuit. We exploit a scaled sine function as an approximation of the modular reduction operation and present an efficient evaluation strategy. Our method requires only one homomorphic multiplication for each iteration and so the total computation cost grows linearly with the depth of the decryption circuit. We also show how to recrypt packed ciphertexts on the RLWE construction with an open-source implementation. For example, it takes 139.8 s to refresh a ciphertext that encrypts 128 numbers with 12 bits of precision, yielding an amortized rate of 1.1 s per slot.
Jung Hee Cheon, Kyoohyung Han, Andrey Kim, Miran Kim, Yongsoo Song
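The scaled-sine trick is easy to see numerically: for t = qI + m with |m| much smaller than q, the smooth function \((q/2\pi)\sin(2\pi t/q)\) closely approximates the centered reduction t mod q, and smooth functions are what homomorphic evaluation can handle. The parameter values below are illustrative.

```python
import math

def sine_mod(t, q):
    # Smooth approximation of centered modular reduction, the function
    # evaluated homomorphically in place of "t mod q" during bootstrapping.
    return q / (2 * math.pi) * math.sin(2 * math.pi * t / q)

q = 1 << 20
for m in (3, -17, 250):
    t = 7 * q + m            # a multiple of q plus a small remainder m
    print(m, round(sine_mod(t, q), 3))
```

The approximation error scales like \((m/q)^3\), so it is negligible exactly in the regime where the message is tiny relative to the modulus, which is the situation during decryption of a fresh-enough ciphertext.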

Permutations

Frontmatter
Full Indifferentiable Security of the Xor of Two or More Random Permutations Using the \(\chi ^2\) Method
Abstract
The construction \(\mathsf {XORP}\) (bitwise-xor of outputs of two independent n-bit random permutations) has gained broad attention over the last two decades due to its high security. Very recently, Dai et al. (CRYPTO’17), by using a method which they term the Chi-squared method (\(\chi ^2\) method), have shown n-bit security of \(\mathsf {XORP}\) when the underlying random permutations are kept secret to the adversary. In this work, we consider the case where the underlying random permutations are publicly available to the adversary. The best known security of \(\mathsf {XORP}\) in this security game (also known as indifferentiable security) is \(\frac{2n}{3}\)-bit, due to Mennink et al. (ACNS’15). Later, Lee (IEEE-IT’17) proved a better \(\frac{(k-1)n}{k}\)-bit security for the general construction \(\mathsf {XORP}[k]\) which returns the xor of k (\(\ge 2\)) independent random permutations. However, the security was shown only for the cases where k is an even integer. In this paper, we improve all these known bounds and prove full, i.e., n-bit (indifferentiable) security of \(\mathsf {XORP}\) as well as \(\mathsf {XORP}[k]\) for any k. Our main result is n-bit security of \(\mathsf {XORP}\), and we use the \(\chi ^2\) method to prove it.
Srimanta Bhattacharya, Mridul Nandi
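The construction itself is one line: \(\mathsf{XORP}\) maps x to \(\pi_1(x) \oplus \pi_2(x)\). A quick experiment shows the result behaves like a random function rather than a permutation (collisions appear), which is exactly why its full n-bit security is notable. The domain size and seed below are arbitrary choices for the demo.

```python
import random

def xorp(pi1, pi2):
    # XORP: pointwise xor of two independent permutations of {0, ..., N-1}.
    return [a ^ b for a, b in zip(pi1, pi2)]

random.seed(1)
N = 256
pi1, pi2 = list(range(N)), list(range(N))
random.shuffle(pi1)
random.shuffle(pi2)

f = xorp(pi1, pi2)
# Almost surely fewer than N distinct outputs: XORP is a function, not a
# permutation, with roughly N * (1 - 1/e) distinct values expected.
print(len(set(f)), "distinct outputs out of", N)
```

The security question the abstract addresses is how well this function hides the two underlying permutations — even when the adversary may query \(\pi_1\) and \(\pi_2\) directly.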
An Improved Affine Equivalence Algorithm for Random Permutations
Abstract
In this paper we study the affine equivalence problem, where given two functions \(\varvec{F},\varvec{G}: \{0,1\}^n \rightarrow \{0,1\}^n\), the goal is to determine whether there exist invertible affine transformations \(A_1,A_2\) over \(GF(2)^n\) such that \(\varvec{G} = A_2 \circ \varvec{F} \circ A_1\). Algorithms for this problem have several well-known applications in the design and analysis of Sboxes, cryptanalysis of white-box ciphers and breaking a generalized Even-Mansour scheme.
We describe a new algorithm for the affine equivalence problem and focus on the variant where \(\varvec{F},\varvec{G}\) are permutations over n-bit words, as it has the widest applicability. The complexity of our algorithm is about \(n^3 2^n\) bit operations with very high probability whenever \(\varvec{F}\) (or \(\varvec{G})\) is a random permutation. This improves upon the best known algorithms for this problem (published by Biryukov et al. at EUROCRYPT 2003), where the first algorithm has time complexity of \(n^3 2^{2n}\) and the second has time complexity of about \(n^3 2^{3n/2}\) and roughly the same memory complexity.
Our algorithm is based on a new structure (called a rank table) which is used to analyze particular algebraic properties of a function that remain invariant under invertible affine transformations. Besides its standard application in our new algorithm, the rank table is of independent interest and we discuss several of its additional potential applications.
Itai Dinur
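As a baseline, the naive algorithm for tiny n simply guesses the affine map \(A_1\), solves for the induced candidate \(A_2 = G \circ A_1^{-1} \circ F^{-1}\), and tests whether that candidate is affine. The exhaustive sketch below (n = 3 only) is this baseline, not the paper's rank-table algorithm.

```python
import itertools
import random

n = 3
N = 1 << n

def linmap(imgs, x):
    # Apply the GF(2)-linear map sending basis vector e_i to imgs[i].
    y = 0
    for i in range(n):
        if (x >> i) & 1:
            y ^= imgs[i]
    return y

def invertible_linear_maps():
    # Enumerate tuples of basis images that are linearly independent.
    for imgs in itertools.product(range(N), repeat=n):
        span, ok = {0}, True
        for b in imgs:
            if b in span:
                ok = False
                break
            span |= {s ^ b for s in span}
        if ok:
            yield imgs

def is_affine(table):
    # T is affine iff x -> T(x) xor T(0) is linear.
    c = table[0]
    lin = [v ^ c for v in table]
    return all(lin[x ^ y] == lin[x] ^ lin[y] for x in range(N) for y in range(N))

random.seed(7)
F = list(range(N))
random.shuffle(F)
lins = list(invertible_linear_maps())
L1, c1 = random.choice(lins), random.randrange(N)
L2, c2 = random.choice(lins), random.randrange(N)
G = [linmap(L2, F[linmap(L1, x) ^ c1]) ^ c2 for x in range(N)]  # G = A2 o F o A1

found = False
for L in lins:
    for c in range(N):
        A2 = [0] * N
        for x in range(N):
            A2[F[linmap(L, x) ^ c]] = G[x]  # candidate A2 = G o A1^{-1} o F^{-1}
        if is_affine(A2):
            found = True
print(found)
```

This brute force costs roughly \(2^{n^2+n}\) affine guesses and is hopeless beyond toy sizes; the rank-table algorithm replaces the guessing with affine-invariant algebraic statistics to reach about \(n^3 2^n\) bit operations.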

Galois Counter Mode

Frontmatter
Optimal Forgeries Against Polynomial-Based MACs and GCM
Abstract
Polynomial-based authentication algorithms, such as GCM and Poly1305, have seen widespread adoption in practice. Due to their importance, a significant amount of attention has been given to understanding and improving both proofs and attacks against such schemes. At EUROCRYPT 2005, Bernstein published the best known analysis of the schemes when instantiated with PRPs, thereby establishing the most lenient limits on the amount of data the schemes can process per key. A long line of work, initiated by Handschuh and Preneel at CRYPTO 2008, finds the best known attacks, advancing our understanding of the fragility of the schemes. Yet surprisingly, no known attacks perform as well as the predicted worst-case attacks allowed by Bernstein’s analysis, nor has there been any advancement in proofs improving Bernstein’s bounds, and the gap between attacks and analysis is significant. We settle the issue by finding a novel attack against polynomial-based authentication algorithms using PRPs, and combine it with new analysis, to show that Bernstein’s bound, and our attacks, are optimal.
Atul Luykx, Bart Preneel
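The structure these attacks exploit is visible in the basic polynomial hash itself: two messages collide exactly when the secret evaluation point is a root of their difference polynomial. Below is a toy Poly1305-style hash; the prime is Poly1305's, but the scheme shown is a simplification without the encryption/masking layer, and the collision is crafted with knowledge of the key purely to exhibit the algebra.

```python
P = 2 ** 130 - 5  # the Poly1305 prime

def poly_hash(r, blocks):
    # Horner evaluation of the message polynomial at the secret point r:
    # H = b1*r^m + b2*r^(m-1) + ... + bm*r  (mod P)
    acc = 0
    for b in blocks:
        acc = (acc + b) * r % P
    return acc

r = 123456789
m = [42, 1000]
# (b1+1)*r^2 + (b2-r)*r == b1*r^2 + b2*r: the difference polynomial
# (X^2 - r*X) vanishes at X = r, so the two messages collide.
forged = [(m[0] + 1) % P, (m[1] - r) % P]
assert poly_hash(r, m) == poly_hash(r, forged)
print("collision found (using knowledge of r)")
```

A real forger does not know r; attacks of the Handschuh-Preneel line instead submit messages whose difference polynomials have many roots, so that some key in a large class collides, and the paper pins down the optimal version of this strategy.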
Revisiting AES-GCM-SIV: Multi-user Security, Faster Key Derivation, and Better Bounds
Abstract
This paper revisits the multi-user (mu) security of symmetric encryption, from the perspective of delivering an analysis of the \(\mathsf {AES\text {-}GCM\text {-}SIV}\) AEAD scheme. Our end result shows that its mu security is comparable to that achieved in the single-user setting. In particular, even when instantiated with short keys (e.g., 128 bits), the security of \(\mathsf {AES\text {-}GCM\text {-}SIV}\) is not impacted by the collisions of two user keys, as long as each individual nonce is not re-used by too many users. Our bounds also improve existing analyses in the single-user setting, in particular when messages of variable lengths are encrypted. We also validate security against a general class of key-derivation methods, including one that halves the complexity of the final proposal.
As an intermediate step, we consider mu security in a setting where the data processed by every user is bounded, and where user keys are generated according to arbitrary, possibly correlated distributions. This viewpoint generalizes the currently adopted one in mu security, and can be used to analyze re-keying practices.
Priyanka Bose, Viet Tung Hoang, Stefano Tessaro

Attribute-Based Encryption

Frontmatter
Unbounded ABE via Bilinear Entropy Expansion, Revisited
Abstract
We present simpler and improved constructions of unbounded attribute-based encryption (ABE) schemes with constant-size public parameters under static assumptions in bilinear groups. Concretely, we obtain:
  • a simple and adaptively secure unbounded ABE scheme in composite-order groups, improving upon a previous construction of Lewko and Waters (Eurocrypt ’11) which only achieves selective security;
  • an improved adaptively secure unbounded ABE scheme based on the k-linear assumption in prime-order groups with shorter ciphertexts and secret keys than those of Okamoto and Takashima (Asiacrypt ’12);
  • the first adaptively secure unbounded ABE scheme for arithmetic branching programs under static assumptions.
At the core of all of these constructions is a “bilinear entropy expansion” lemma that allows us to generate any polynomial amount of entropy starting from constant-size public parameters; the entropy can then be used to transform existing adaptively secure “bounded” ABE schemes into unbounded ones.
Jie Chen, Junqing Gong, Lucas Kowalczyk, Hoeteck Wee
Anonymous IBE, Leakage Resilience and Circular Security from New Assumptions
Abstract
In anonymous identity-based encryption (IBE), ciphertexts not only hide their corresponding messages, but also their target identity. We construct an anonymous IBE scheme based on the Computational Diffie-Hellman (CDH) assumption in general groups (and thus, as a special case, based on the hardness of factoring Blum integers).
Our approach extends and refines the recent tree-based approach of Cho et al. (CRYPTO ’17) and Döttling and Garg (CRYPTO ’17). Whereas the tools underlying their approach do not seem to provide any form of anonymity, we introduce two new building blocks which we utilize for achieving anonymity: blind garbled circuits (which we construct based on any one-way function), and blind batch encryption (which we construct based on CDH).
We then further demonstrate the applicability of our newly-developed tools by showing that batch encryption implies a public-key encryption scheme that is both resilient to leakage of a \((1-o(1))\)-fraction of its secret key, and KDM secure (or circular secure) with respect to all linear functions of its secret key (which, in turn, is known to imply KDM security for bounded-size circuits). These yield the first high-rate leakage-resilient encryption scheme and the first KDM-secure encryption scheme based on the CDH or Factoring assumptions.
Finally, relying on our techniques we also construct a batch encryption scheme based on the hardness of the Learning Parity with Noise (LPN) problem, albeit with very small noise rate \(\varOmega (\log ^2(n)/n)\). Although this batch encryption scheme is not blind, we show that it still implies standard (i.e., non-anonymous) IBE, leakage resilience and KDM security. IBE and high-rate leakage resilience were not previously known from LPN, even with extremely low noise.
Zvika Brakerski, Alex Lombardi, Gil Segev, Vinod Vaikuntanathan

Secret Sharing

Frontmatter
Towards Breaking the Exponential Barrier for General Secret Sharing
Abstract
A secret-sharing scheme for a monotone Boolean (access) function \(F: \{0,1\}^n \rightarrow \{0,1\}\) is a randomized algorithm that on input a secret, outputs n shares \(s_1,\ldots ,s_n\) such that for any \((x_1,\ldots ,x_n) \in \{0,1\}^n\), the collection of shares \( \{ s_i : x_i = 1 \}\) determine the secret if \(F(x_1,\ldots ,x_n)=1\) and reveal nothing about the secret otherwise. The best secret sharing schemes for general monotone functions have shares of size \(\varTheta (2^n)\). It has long been conjectured that one cannot do much better than \(2^{\varOmega (n)}\) share size, and indeed, such a lower bound is known for the restricted class of linear secret-sharing schemes.
In this work, we refute two natural strengthenings of the above conjecture:
  • First, we present secret-sharing schemes for a family of \(2^{2^{n/2}}\) monotone functions over \(\{0,1\}^n\) with sub-exponential share size \(2^{O(\sqrt{n} \log n)}\). This unconditionally refutes the stronger conjecture that circuit size is, within polynomial factors, a lower bound on the share size.
  • Second, we disprove the analogous conjecture for non-monotone functions. Namely, we present “non-monotone secret-sharing schemes” for every access function over \(\{0,1\}^n\) with shares of size \(2^{O(\sqrt{n} \log n)}\).
Our construction draws upon a rich interplay amongst old and new problems in information-theoretic cryptography: from secret-sharing, to multi-party computation, to private information retrieval. Along the way, we also construct the first multi-party conditional disclosure of secrets (CDS) protocols for general functions \(F:\{0,1\}^n \rightarrow \{0,1\}\) with communication complexity \(2^{O(\sqrt{n} \log n)}\).
Tianren Liu, Vinod Vaikuntanathan, Hoeteck Wee
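The secret-sharing definition in the abstract above can be illustrated with a minimal sketch (an assumption-laden toy example, not the paper's sub-exponential construction): additive sharing over a prime field realizes the single access function \(F = \mathrm{AND}\), where only the full share set \(\{s_1,\ldots ,s_n\}\) determines the secret and every proper subset reveals nothing.

```python
import secrets

# A hypothetical Mersenne prime modulus chosen for illustration.
P = 2**61 - 1

def share_and(secret: int, n: int, p: int = P) -> list[int]:
    """Additive secret sharing for the n-party AND access function
    F(x_1, ..., x_n) = x_1 AND ... AND x_n: the shares sum to the
    secret mod p, so all n shares are needed for reconstruction,
    while any n-1 shares are uniformly distributed."""
    shares = [secrets.randbelow(p) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % p)
    return shares

def reconstruct(shares: list[int], p: int = P) -> int:
    """Recover the secret from the complete set of shares."""
    return sum(shares) % p

shares = share_and(42, 5)
assert reconstruct(shares) == 42
```

For general monotone access functions the abstract's point is exactly that no comparably cheap scheme is known: the best general constructions use shares of size \(\varTheta (2^n)\), which the paper improves to \(2^{O(\sqrt{n} \log n)}\) for the stated families.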
Improving the Linear Programming Technique in the Search for Lower Bounds in Secret Sharing
Abstract
We present a new improvement in the linear programming technique for deriving lower bounds on the information ratio of secret sharing schemes. We obtain non-Shannon-type bounds without using information inequalities explicitly. Our new technique makes it possible to determine the optimal information ratio of linear secret sharing schemes for all access structures on 5 participants and all graph-based access structures on 6 participants. In addition, new lower bounds are also presented for some small matroid ports; in particular, the optimal information ratios of the linear secret sharing schemes for the ports of the Vámos matroid are determined.
Oriol Farràs, Tarik Kaced, Sebastià Martín, Carles Padró
Backmatter
Metadata
Title
Advances in Cryptology – EUROCRYPT 2018
edited by
Jesper Buus Nielsen
Prof. Dr. Vincent Rijmen
Copyright Year
2018
Electronic ISBN
978-3-319-78381-9
Print ISBN
978-3-319-78380-2
DOI
https://doi.org/10.1007/978-3-319-78381-9