2018 | Book

Advances in Cryptology – CRYPTO 2018

38th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 19–23, 2018, Proceedings, Part I

About this book

The three-volume set, LNCS 10991, LNCS 10992, and LNCS 10993, constitutes the refereed proceedings of the 38th Annual International Cryptology Conference, CRYPTO 2018, held in Santa Barbara, CA, USA, in August 2018.
The 79 revised full papers presented were carefully reviewed and selected from 351 submissions. The papers are organized in the following topical sections: secure messaging; implementations and physical attacks prevention; authenticated and format-preserving encryption; cryptanalysis; searchable encryption and differential privacy; secret sharing; encryption; symmetric cryptography; proofs of work and proofs of stake; proof tools; key exchange; symmetric cryptanalysis; hashes and random oracles; trapdoor functions; round optimal MPC; foundations; lattices; lattice-based ZK; efficient MPC; quantum cryptography; MPC; garbling; information-theoretic MPC; oblivious transfer; non-malleable codes; zero knowledge; and obfuscation.

Table of Contents

Frontmatter

Secure Messaging

Towards Bidirectional Ratcheted Key Exchange
Abstract
Ratcheted key exchange (RKE) is a cryptographic technique used in instant messaging systems like Signal and the WhatsApp messenger for attaining strong security in the face of state exposure attacks. RKE received academic attention in the recent works of Cohn-Gordon et al. (EuroS&P 2017) and Bellare et al. (CRYPTO 2017). While the former is analytical, in the sense that it aims primarily at assessing the security that one particular protocol does achieve (which might be weaker than the notion it should achieve), the authors of the latter develop and instantiate a notion of security from scratch, independently of existing implementations. Unfortunately, however, their model is quite restricted: for example, it considers only unidirectional communication and the exposure of only one of the two parties.
In this article we resolve the limitations of prior work by developing alternative security definitions, for unidirectional RKE as well as for RKE where both parties contribute. We follow a purist approach, aiming at finding strong yet convincing notions that cover a realistic communication model with fully concurrent operation of both participants. We further propose secure instantiations (as the protocols analyzed or proposed by Cohn-Gordon et al. and Bellare et al. turn out to be weak in our models). While our scheme for the unidirectional case builds on a generic KEM as the main building block (unlike prior work, which explicitly requires Diffie–Hellman), our schemes for bidirectional RKE require a stronger, HIBE-like component.
Bertram Poettering, Paul Rösler
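For readers new to ratcheting, the following minimal symmetric hash ratchet in Python sketches the general idea (compromising the current state does not expose past message keys); it is a toy under that single assumption, not the KEM- or HIBE-based constructions of this paper, and all names in it are illustrative.
```python
# A minimal symmetric hash ratchet, for intuition only: it illustrates the
# general ratcheting idea (old keys become unrecoverable after each state
# update), not this paper's KEM- or HIBE-based RKE schemes.
import hashlib

def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive a one-time message key and the next chain key."""
    message_key = hashlib.sha256(b"msg" + chain_key).digest()
    next_chain_key = hashlib.sha256(b"chain" + chain_key).digest()
    return message_key, next_chain_key

state = hashlib.sha256(b"initial shared secret").digest()
for i in range(3):
    mk, state = ratchet(state)
    print(f"message key {i}: {mk.hex()[:16]}...")
# Exposing `state` now reveals nothing about earlier message keys (forward
# secrecy); healing after exposure needs the asymmetric ratcheting that
# this line of work studies.
```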
Optimal Channel Security Against Fine-Grained State Compromise: The Safety of Messaging
Abstract
We aim to understand the best possible security of a (bidirectional) cryptographic channel against an adversary that may arbitrarily and repeatedly learn the secret state of either communicating party. We give a formal security definition and a proven-secure construction. This construction provides better security against state compromise than the Signal Double Ratchet Algorithm or any other known channel construction. To facilitate this we define and construct new forms of public-key encryption and digital signatures that update their keys over time.
Joseph Jaeger, Igors Stepanovs
Out-of-Band Authentication in Group Messaging: Computational, Statistical, Optimal
Abstract
Extensive efforts are currently put into securing messaging platforms, where a key challenge is that of protecting against man-in-the-middle attacks when setting up secure end-to-end channels. The vast majority of these efforts, however, have so far focused on securing user-to-user messaging, and recent attacks indicate that the security of group messaging is still quite fragile.
We initiate the study of out-of-band authentication in the group setting, extending the user-to-user setting where messaging platforms (e.g., Telegram and WhatsApp) protect against man-in-the-middle attacks by assuming that users have access to an external channel for authenticating one short value (e.g., two users who recognize each other’s voice can compare a short value). Inspired by the frameworks of Vaudenay (CRYPTO ’05) and Naor et al. (CRYPTO ’06) in the user-to-user setting, we assume that users communicate over a completely-insecure channel, and that a group administrator can out-of-band authenticate one short message to all users. An adversary may read, remove, or delay this message (for all or for some of the users), but cannot undetectably modify it.
Within our framework we establish tight bounds on the tradeoff between the adversary’s success probability and the length of the out-of-band authenticated message (which is a crucial bottleneck given that the out-of-band channel is of low bandwidth). We consider both computationally-secure and statistically-secure protocols, and for each flavor of security we construct an authentication protocol and prove a lower bound showing that our protocol achieves essentially the best possible tradeoff.
In particular, considering groups that consist of an administrator and k additional users, for statistically-secure protocols we show that at least \((k+1)\cdot (\log (1/\epsilon ) - \varTheta (1))\) bits must be out-of-band authenticated, whereas for computationally-secure ones \(\log (1/\epsilon ) + \log k\) bits suffice, where \(\epsilon \) is the adversary’s success probability. Moreover, instantiating our computationally-secure protocol in the random-oracle model yields an efficient and practically-relevant protocol (which, alternatively, can also be based on any one-way function in the standard model).
Lior Rotem, Gil Segev
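To get a feel for the gap between the two bounds, the following back-of-the-envelope Python calculation instantiates them for an illustrative group of ten (k = 9) and epsilon = 2^-30, ignoring the Theta(1) and omega(1) terms from the statement.
```python
# Rough comparison of the abstract's two bounds (constant terms ignored):
# statistically secure protocols need about (k+1)*log2(1/eps) out-of-band
# bits; computationally secure ones need about log2(1/eps) + log2(k).
import math

eps = 2 ** -30           # target adversarial success probability (illustrative)
k = 9                    # users besides the administrator (group of 10)

statistical = (k + 1) * math.log2(1 / eps)
computational = math.log2(1 / eps) + math.log2(k)
print(f"statistical  : ~{statistical:.0f} bits")    # ~300 bits
print(f"computational: ~{computational:.1f} bits")  # ~33.2 bits
```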

Implementations and Physical Attacks Prevention

Faster Homomorphic Linear Transformations in HElib
Abstract
HElib is a software library that implements homomorphic encryption (HE), with a focus on effective use of “packed” ciphertexts. An important operation is applying a known linear map to a vector of encrypted data. In this paper, we describe several algorithmic improvements that significantly speed up this operation: in our experiments, our new algorithms are 30–75 times faster than those previously implemented in HElib for typical parameters.
One application that can benefit from faster linear transformations is bootstrapping (in particular, “thin bootstrapping” as described in [Chen and Han, Eurocrypt 2018]). In some settings, our new algorithms for linear transformations result in a \(6{\times }\) speedup for the entire thin bootstrapping operation.
Our techniques also reduce the size of the large public evaluation key, often using 33%–50% less space than the previous HElib implementation. We also implemented a new tradeoff that enables a drastic reduction in size, a factor of \(25{\times }\) or more for some parameters, paying only a penalty of a 2–\(4{\times }\) slowdown in running time (and giving up some parallelization opportunities).
Shai Halevi, Victor Shoup
CAPA: The Spirit of Beaver Against Physical Attacks
Abstract
In this paper we introduce two things. First, the Tile-Probe-and-Fault model, which generalises the wire-probe model of Ishai et al., extending it to cover both more realistic side-channel leakage scenarios on a chip and fault and combined attacks. Second, CAPA: a combined Countermeasure Against Physical Attacks. Our countermeasure is motivated by our model, and aims to provide security against higher-order SCA, multiple-shot FA and combined attacks. The tile-probe-and-fault model leads one to naturally look (by analogy) at actively secure multi-party computation protocols. Indeed, CAPA draws much inspiration from the MPC protocol SPDZ. To demonstrate that the model and the CAPA countermeasure are not just theoretical constructions, but could also serve to build practical countermeasures, we present initial experiments with proof-of-concept designs using the CAPA methodology: a hardware implementation of the KATAN and AES block ciphers, as well as a software bitsliced AES S-box implementation. We demonstrate experimentally that the design can resist second-order DPA attacks, even when the attacker is presented with many hundreds of thousands of traces. In addition, our proof-of-concept can also detect faults within our model with high probability, in accordance with the methodology.
Oscar Reparaz, Lauren De Meyer, Begül Bilgin, Victor Arribas, Svetla Nikova, Ventzislav Nikov, Nigel Smart

Authenticated and Format-Preserving Encryption

Fast Message Franking: From Invisible Salamanders to Encryptment
Abstract
Message franking enables cryptographically verifiable reporting of abusive messages in end-to-end encrypted messaging. Grubbs, Lu, and Ristenpart recently formalized the needed underlying primitive, what they call compactly committing authenticated encryption (AE), and analyzed the security of a number of approaches. But all known secure schemes are still slow compared to the fastest standard AE schemes. For this reason Facebook Messenger uses AES-GCM for franking of attachments such as images or videos.
We show how to break Facebook’s attachment franking scheme: a malicious user can send an objectionable image to a recipient but that recipient cannot report it as abuse. The core problem stems from use of fast but non-committing AE, and so we build the fastest compactly committing AE schemes to date. To do so we introduce a new primitive, called encryptment, which captures the essential properties needed. We prove that, unfortunately, schemes with performance profile similar to AES-GCM won’t work. Instead, we show how to efficiently transform Merkle-Damgård-style hash functions into secure encryptments, and how to efficiently build compactly committing AE from encryptment. Ultimately our main construction allows franking using just a single computation of SHA-256 or SHA-3. Encryptment proves useful for a variety of other applications, such as remotely keyed AE and concealments, and our results imply the first single-pass schemes in these settings as well.
Yevgeniy Dodis, Paul Grubbs, Thomas Ristenpart, Joanne Woodage
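As a conceptual illustration of the committing property at stake (not the paper's encryptment construction), the Python sketch below shows a franking-style tag that binds the message via HMAC-SHA256, so a recipient can later open it to a moderator; the encryption layer is omitted and all names are illustrative.
```python
# A toy "committing tag" in the spirit of message franking, assuming a
# separate encryption layer carries the opening key to the recipient. It
# sketches the property the paper needs (the tag binds the message), not
# the paper's single-pass encryptment scheme.
import hmac, hashlib, os

def commit(message: bytes) -> tuple[bytes, bytes]:
    opening = os.urandom(32)                 # per-message opening key
    tag = hmac.new(opening, message, hashlib.sha256).digest()
    return tag, opening                      # tag travels alongside the ciphertext

def report(message: bytes, opening: bytes, tag: bytes) -> bool:
    # The moderator recomputes the commitment; HMAC-SHA256 is believed
    # binding here, unlike the GCM tag exploited in the Facebook attack.
    recomputed = hmac.new(opening, message, hashlib.sha256).digest()
    return hmac.compare_digest(recomputed, tag)

tag, opening = commit(b"abusive message")
assert report(b"abusive message", opening, tag)
assert not report(b"innocent message", opening, tag)
```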
Indifferentiable Authenticated Encryption
Abstract
We study Authenticated Encryption with Associated Data (AEAD) from the viewpoint of composition in arbitrary (single-stage) environments. We use the indifferentiability framework to formalize the intuition that a “good” AEAD scheme should have random ciphertexts subject to decryptability. Within this framework, we can then apply the indifferentiability composition theorem to show that such schemes offer extra safeguards wherever the relevant security properties are not known, or cannot be predicted in advance, as in general-purpose crypto libraries and standards.
We show, on the negative side, that generic composition (in many of its configurations) and well-known classical and recent schemes fail to achieve indifferentiability. On the positive side, we give a provably indifferentiable Feistel-based construction, which reduces the round complexity from at least 6, needed for blockciphers, to only 3 for encryption. This result is not too far off the theoretical optimum as we give a lower bound that rules out the indifferentiability of any construction with less than 2 rounds.
Manuel Barbosa, Pooya Farshim
The Curse of Small Domains: New Attacks on Format-Preserving Encryption
Abstract
Format-preserving encryption (FPE) produces ciphertexts which have the same format as the plaintexts. Building secure FPE is very challenging, and recent attacks (Bellare, Hoang, Tessaro, CCS ’16; Durak and Vaudenay, CRYPTO ’17) have highlighted security deficiencies in the recent NIST SP800-38G standard. This has left the question open of whether practical schemes with high security exist.
In this paper, we continue the investigation of attacks against FPE schemes. Our first contribution is a set of new known-plaintext message-recovery attacks against Feistel-based FPEs (such as FF1/FF3 from the NIST SP800-38G standard) which improve upon previous work in terms of amortized complexity in multi-target scenarios, where multiple ciphertexts are to be decrypted. Our attacks are also qualitatively better in that they make no assumptions on the correlation between the targets to be decrypted and the known plaintexts. We also surface a new vulnerability specific to FF3 and the way it handles odd-length domains, which leads to a substantial speedup in our attacks.
We also show the first attacks against non-Feistel based FPEs. Specifically, we show a strong message-recovery attack for FNR, a construction proposed by Cisco which replaces two rounds in the Feistel construction with a pairwise-independent permutation, following the paradigm by Naor and Reingold (JoC, ’99). We also provide a strong ciphertext-only attack against a variant of the DTP construction by Brightwell and Smith, which is deployed by Protegrity within commercial applications. All of our attacks show that existing constructions fall short of achieving desirable security levels. For Feistel and the FNR schemes, our attacks become feasible on small domains, e.g., 8 bits, for suggested round numbers. Our attack against the DTP construction is practical even for large domains. We provide proof-of-concept implementations of our attacks that verify our theoretical findings.
Viet Tung Hoang, Stefano Tessaro, Ni Trieu

Cryptanalysis

Cryptanalysis via Algebraic Spans
Abstract
We introduce a method for obtaining provable polynomial time solutions of problems in nonabelian algebraic cryptography. This method is widely applicable, easier to apply, and more efficient than earlier methods. After demonstrating its applicability to the major classic nonabelian protocols, we use this method to cryptanalyze the Triple Decomposition key exchange protocol, the only classic group theory based key exchange protocol that could not be cryptanalyzed by earlier methods.
Adi Ben-Zvi, Arkadius Kalka, Boaz Tsaban
Improved Division Property Based Cube Attacks Exploiting Algebraic Properties of Superpoly
Abstract
The cube attack is an important technique for the cryptanalysis of symmetric key primitives, especially for stream ciphers. Aiming at recovering some secret key bits, the adversary reconstructs a superpoly with the secret key bits involved, by summing over a set of plaintexts/IVs called a cube. Traditional cube attacks only exploit linear/quadratic superpolies. Moreover, for a long time after its proposal, the size of the cubes has been largely confined to an experimental range, e.g., typically 40. These limits were first overcome by the division property based cube attacks proposed by Todo et al. at CRYPTO 2017. Based on MILP-modelled division property, for a cube (index set) I, they identify the small (index) subset J of the secret key bits involved in the resultant superpoly. During the precomputation phase, which dominates the complexity of the cube attacks, \(2^{|I|+|J|}\) encryptions are required to recover the superpoly. Therefore, their attacks are only feasible when the restriction \(|I|+|J|<n\) is met.
In this paper, we introduce several techniques to improve the division property based cube attacks by exploiting various algebraic properties of the superpoly.
1. We propose the “flag” technique to enhance the preciseness of MILP models, so that the proper non-cube IV assignments can be identified to obtain a non-constant superpoly.
2. A degree evaluation algorithm is presented to upper bound the degree of the superpoly. With the knowledge of its degree, the superpoly can be recovered without constructing its whole truth table. This enables us to explore larger cubes I even if \(|I|+|J|\ge n\).
3. We provide a term enumeration algorithm for finding the monomials of the superpoly, so that the complexity of many attacks can be further reduced.
As an illustration, we apply our techniques to attack the initialization of several ciphers. Specifically, we mount key recovery attacks on 839-round Trivium, 891-round Kreyvium, 184-round Grain-128a and 750-round Acorn, respectively.
Qingju Wang, Yonglin Hao, Yosuke Todo, Chaoyun Li, Takanori Isobe, Willi Meier
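To make the cube/superpoly terminology concrete, here is a toy Python cube attack on a hand-made Boolean polynomial whose superpoly can be verified by hand; real attacks recover superpolies of ciphers via MILP-modelled division property, which this sketch does not attempt.
```python
# A toy cube attack: summing a Boolean function over a cube of public bits
# cancels every monomial not divisible by the full cube product, leaving
# the superpoly in the key bits. The "cipher" here is hand-made.
from itertools import product

def f(v, k):
    # f = v0*v1*k0 XOR v0*v1*v2 XOR v1*k1 XOR k0*k1 XOR v2  (over GF(2))
    return (v[0] & v[1] & k[0]) ^ (v[0] & v[1] & v[2]) ^ (v[1] & k[1]) ^ (k[0] & k[1]) ^ v[2]

def superpoly_value(k, noncube_v2=0):
    # Sum f over the cube {v0, v1} in {0,1}^2 with v2 fixed (a "non-cube IV
    # assignment" in the paper's language).
    acc = 0
    for v0, v1 in product((0, 1), repeat=2):
        acc ^= f((v0, v1, noncube_v2), k)
    return acc

# For v2 = 0, the surviving terms give the superpoly k0: one cube sum
# leaks one key bit.
for k0, k1 in product((0, 1), repeat=2):
    assert superpoly_value((k0, k1)) == k0
print("superpoly over cube {v0, v1} with v2 = 0 is k0")
```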
Generic Attacks Against Beyond-Birthday-Bound MACs
Abstract
In this work, we study the security of several recent MAC constructions with provable security beyond the birthday bound. We consider block-cipher based constructions with a double-block internal state, such as SUM-ECBC, PMAC+, 3kf9, GCM-SIV2, and some variants (LightMAC+, 1kPMAC+). All these MACs have a security proof up to \(2^{2n/3}\) queries, but there are no known attacks with less than \(2^{n}\) queries.
We describe a new cryptanalysis technique for double-block MACs based on finding quadruples of messages with four pairwise collisions in halves of the state. We show how to detect such quadruples in SUM-ECBC, PMAC+, 3kf9, GCM-SIV2 and their variants with \(\mathcal {O}(2^{3n/4})\) queries, and how to build a forgery attack with the same query complexity. The time complexity of these attacks is above \(2^n\), but it shows that the schemes do not reach full security in the information theoretic model. Surprisingly, our attack on LightMAC+ also invalidates a recent security proof by Naito.
Moreover, we give a variant of the attack against SUM-ECBC and GCM-SIV2 with time and data complexity \(\tilde{\mathcal {O}}(2^{6n/7})\). As far as we know, this is the first attack with complexity below \(2^n\) against a deterministic beyond-birthday-bound secure MAC.
As a side result, we also give a birthday attack against 1kf9, a single-key variant of 3kf9 that was withdrawn due to issues with the proof.
Gaëtan Leurent, Mridul Nandi, Ferdinand Sibleyras

Searchable Encryption and Differential Privacy

Structured Encryption and Leakage Suppression
Abstract
Structured encryption (STE) schemes encrypt data structures in such a way that they can be privately queried. One aspect of STE that is still poorly understood is its leakage. In this work, we describe a general framework to design STE schemes that do not leak the query/search pattern (i.e., if and when a query was previously made).
Our framework consists of two compilers. The first can be used to make any dynamic STE scheme rebuildable in the sense that the encrypted structures it produces can be rebuilt efficiently using only O(1) client storage. The second transforms any rebuildable scheme that leaks the query/search pattern into a new scheme that does not. Our second compiler is a generalization of Goldreich and Ostrovsky’s square root oblivious RAM (ORAM) solution but does not make use of black-box ORAM simulation. We show that our framework produces STE schemes with query complexity that is asymptotically better than ORAM simulation in certain (natural) settings and comparable to special-purpose oblivious data structures.
We use our framework to design a new STE scheme that is “almost” zero-leakage in the sense that it reveals an, intuitively-speaking, small amount of information. We also show how the scheme can be used to achieve zero-leakage queries when one can tolerate a probabilistic guarantee of correctness. This construction results from applying our compilers to a new STE scheme we design called the piggyback scheme. This scheme is a general-purpose STE construction (in the sense that it can encrypt any data structure) that leaks the search/query pattern but hides the response length on non-repeating queries.
Seny Kamara, Tarik Moataz, Olya Ohrimenko
Searchable Encryption with Optimal Locality: Achieving Sublogarithmic Read Efficiency
Abstract
We propose the first linear-space searchable encryption scheme with constant locality and sublogarithmic read efficiency, strictly improving the previously best known read efficiency bound (Asharov et al., STOC 2016) from \(\varTheta (\log N \log \log N)\) to \(O(\log ^{\gamma } N)\) where \(\gamma =\frac{2}{3}+\delta \) for any fixed \(\delta >0\) and where N is the number of keyword-document pairs. Our scheme employs four different allocation algorithms for storing the keyword lists, depending on the size of the list considered each time. For our construction we develop (i) new probability bounds for the offline two-choice allocation problem; and (ii) a new I/O-efficient oblivious RAM with \(\tilde{O}(n^{1/3})\) bandwidth overhead and zero failure probability, both of which can be of independent interest.
Ioannis Demertzis, Dimitrios Papadopoulos, Charalampos Papamanthou
Tight Tradeoffs in Searchable Symmetric Encryption
Abstract
A searchable symmetric encryption (SSE) scheme enables a client to store data on an untrusted server while supporting keyword searches in a secure manner. Recent experiments have indicated that the practical relevance of such schemes heavily relies on the tradeoff between their space overhead, locality (the number of non-contiguous memory locations that the server accesses with each query), and read efficiency (the ratio between the number of bits the server reads with each query and the actual size of the answer). These experiments motivated Cash and Tessaro (EUROCRYPT ’14) and Asharov et al. (STOC ’16) to construct SSE schemes offering various such tradeoffs, and to prove lower bounds for natural SSE frameworks. Unfortunately, the best-possible tradeoff has not been identified, and there are substantial gaps between the existing schemes and lower bounds, indicating that a better understanding of SSE is needed.
We establish tight bounds on the tradeoff between the space overhead, locality and read efficiency of SSE schemes within two general frameworks that capture the memory access pattern underlying all existing schemes. First, we introduce the “pad-and-split” framework, refining that of Cash and Tessaro while still capturing the same existing schemes. Within our framework we significantly strengthen their lower bound, proving that any scheme with locality L must use space \(\varOmega ( N \log N / \log L )\) for databases of size N. This is a tight lower bound, matching the tradeoff provided by the scheme of Demertzis and Papamanthou (SIGMOD ’17) which is captured by our pad-and-split framework.
Then, within the “statistical-independence” framework of Asharov et al. we show that their lower bound is essentially tight: We construct a scheme whose tradeoff matches their lower bound within an additive \(O(\log \log \log N)\) factor in its read efficiency, once again improving upon the existing schemes. Our scheme offers optimal space and locality, and nearly-optimal read efficiency that depends on the frequency of the queried keywords: For a keyword that is associated with \(n = N^{1 - \epsilon (n)}\) document identifiers, the read efficiency is \(\omega (1) \cdot {\epsilon }(n)^{-1}+ O(\log \log \log N)\) when retrieving its identifiers (where the \(\omega (1)\) term may be arbitrarily small, and \(\omega (1) \cdot {\epsilon }(n)^{-1}\) is the lower bound proved by Asharov et al.). In particular, for any keyword that is associated with at most \(N^{1 - 1/o(\log \log \log N)}\) document identifiers (i.e., for any keyword that is not exceptionally common), we provide read efficiency \(O(\log \log \log N)\) when retrieving its identifiers.
Gilad Asharov, Gil Segev, Ido Shahaf
Hardness of Non-interactive Differential Privacy from One-Way Functions
Abstract
A central challenge in differential privacy is to design computationally efficient non-interactive algorithms that can answer large numbers of statistical queries on a sensitive dataset. That is, we would like to design a differentially private algorithm that takes a dataset \(D \in X^n\) consisting of some small number of elements n from some large data universe X, and efficiently outputs a summary that allows a user to efficiently obtain an answer to any query in some large family Q.
Ignoring computational constraints, this problem can be solved even when X and Q are exponentially large and n is just a small polynomial; however, all algorithms with remotely similar guarantees run in exponential time. There have been several results showing that, under the strong assumption of indistinguishability obfuscation, no efficient differentially private algorithm exists when X and Q can be exponentially large. However, there are no strong separations between information-theoretic and computationally efficient differentially private algorithms under any standard complexity assumption.
In this work we show that, if one-way functions exist, there is no general purpose differentially private algorithm that works when X and Q are exponentially large, and n is an arbitrary polynomial. In fact, we show that this result holds even if X is just subexponentially large (assuming only polynomially-hard one-way functions). This result solves an open problem posed by Vadhan in his recent survey [52].
Lucas Kowalczyk, Tal Malkin, Jonathan Ullman, Daniel Wichs
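As background on the statistical-query setting, the sketch below answers a single counting query with the basic Laplace mechanism in Python; it illustrates the kind of differentially private answer at issue, not the paper's hardness result, and all parameters are illustrative.
```python
# The basic Laplace mechanism for one counting query, as background for
# the statistical-query setting the paper studies.
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential variables is Laplace-distributed.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def private_count(dataset, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1, so noise of scale 1/epsilon gives
    # epsilon-differential privacy for this single query.
    true_count = sum(1 for x in dataset if predicate(x))
    return true_count + laplace_noise(1.0 / epsilon)

data = [random.randint(0, 99) for _ in range(1000)]
print(private_count(data, lambda x: x < 50, epsilon=0.1))
```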
Risky Traitor Tracing and New Differential Privacy Negative Results
Abstract
In this work we seek to construct collusion-resistant traitor tracing systems with small ciphertexts from standard assumptions that also move toward practical efficiency. In our approach we will hold steadfast to the principle of collusion resistance, but relax the requirement on catching a traitor from a successful decoding algorithm. We define an f-risky traitor tracing system as one where the probability of identifying a traitor is \(f(\lambda ,n)\) times the probability a successful box is produced. We then go on to show how to build such systems from prime order bilinear groups with assumptions close to those used in prior works. Our core system achieves, for any \(k > 0\), \(f(\lambda ,n) \approx \frac{k}{n + k - 1}\) where ciphertexts consist of \((k + 4)\) group elements and decryption requires \((k + 3)\) pairing operations.
At first glance the utility of such a system might seem questionable since the f we achieve for short ciphertexts is relatively small. Indeed an attacker in such a system can more likely than not get away with producing a decoding box. However, we believe this approach to be viable for four reasons:
1. A risky traitor tracing system will provide deterrence against risk-averse attackers. In some settings the consequences of being caught might bear a high cost, and an attacker will have to weigh the utility of producing a decryption box D against the expected cost of being caught.
2. Consider a broadcast system where we want to support low-overhead broadcast encrypted communications, but will periodically allow for a more expensive key refresh operation. We refer to an adversary-produced algorithm that maintains the ability to decrypt across key refreshes as a persistent decoder. We show that if we employ a risky traitor tracing system in this setting, even for a small f, we can amplify the chances of catching such a “persistent decoder” to be negligibly close to 1.
3. In certain resource-constrained settings risky traitor tracing provides a best tracing effort where there are no other collusion-resistant alternatives. For instance, suppose we had to support 100K users over a radio link that had just 10 KB of additional resources for extra ciphertext overhead. None of the existing \(\sqrt{N}\) bilinear map systems can fit in these constraints. On the other hand, a risky traitor tracing system provides a spectrum of tracing probability versus overhead tradeoffs and can be configured to at least give some deterrence in this setting.
4. Finally, we can capture impossibility results for differential privacy from \(\frac{1}{n}\)-risky traitor tracing. Since our ciphertexts are short (\(O(\lambda )\)), we get a negative result matching what one would get by plugging the obfuscation-based tracing system of Boneh and Zhandry [9] into the prior impossibility result of Dwork et al. [14].
Rishab Goyal, Venkata Koppula, Andrew Russell, Brent Waters
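Plugging illustrative numbers into the abstract's tradeoff \(f(\lambda ,n) \approx k/(n+k-1)\), with ciphertexts of k + 4 group elements, shows the spectrum such a system offers; the 32-byte group-element size below is an assumption for illustration only.
```python
# Illustrative evaluation of the risky-tracing tradeoff from the abstract:
# tracing probability f ~ k/(n+k-1) versus ciphertext size of k+4 group
# elements (assumed 32 bytes each; a toy figure, not from the paper).
n = 100_000                        # users, as in the radio-link example
for k in (1, 10, 100, 1000):
    f = k / (n + k - 1)            # chance a produced box implicates a traitor
    ct_bytes = (k + 4) * 32        # k+4 group elements at an assumed 32 B each
    print(f"k={k:>5}: f ~ {f:.2e}, ciphertext ~ {ct_bytes / 1024:.1f} KiB")
```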

Secret Sharing

Non-malleable Secret Sharing for General Access Structures
Abstract
Goyal and Kumar (STOC’18) recently introduced the notion of non-malleable secret sharing. Very roughly, the guarantee they seek is the following: the adversary may potentially tamper with all of the shares, and still, either the reconstruction procedure outputs the original secret, or, the original secret is “destroyed” and the reconstruction outputs a string which is completely “unrelated” to the original secret. Prior works on non-malleable codes in the 2 split-state model imply constructions which can be seen as 2-out-of-2 non-malleable secret sharing (NMSS) schemes. Goyal and Kumar proposed constructions of t-out-of-n NMSS schemes. These constructions have already been shown to have a number of applications in cryptography.
We continue this line of research and construct NMSS for more general access structures. We give a generic compiler that converts any statistical (resp. computational) secret sharing scheme realizing any access structure into another statistical (resp. computational) secret sharing scheme that not only realizes the same access structure but also ensures statistical non-malleability against a computationally unbounded adversary who tampers each of the shares arbitrarily and independently. Instantiating with known schemes, we get unconditional NMSS schemes that realize any access structure generated by polynomial-size monotone span programs. Similarly, we also obtain conditional NMSS schemes realizing access structures in \(\mathbf {monotone \;P}\) (resp. \(\mathbf {monotone \;NP}\)) assuming one-way functions (resp. witness encryption).
Towards considering more general tampering models, we also propose a construction of n-out-of-n NMSS. Our construction is secure even if the adversary could divide the shares into any two (possibly overlapping) subsets and then arbitrarily tamper the shares in each subset. Our construction is based on a property of inner product and an observation that the inner-product based construction of Aggarwal, Dodis and Lovett (STOC’14) is in fact secure against a tampering class that is stronger than 2 split-states. We also show applications of our construction to the problem of non-malleable message transmission.
Vipul Goyal, Ashutosh Kumar
On the Local Leakage Resilience of Linear Secret Sharing Schemes
Abstract
We consider the following basic question: to what extent are standard secret sharing schemes and protocols for secure multiparty computation that build on them resilient to leakage? We focus on a simple local leakage model, where the adversary can apply an arbitrary function of a bounded output length to the secret state of each party, but cannot otherwise learn joint information about the states.
We show that additive secret sharing schemes and high-threshold instances of Shamir’s secret sharing scheme are secure under local leakage attacks when the underlying field is of a large prime order and the number of parties is sufficiently large. This should be contrasted with the fact that any linear secret sharing scheme over a small characteristic field is clearly insecure under local leakage attacks, regardless of the number of parties. Our results are obtained via tools from Fourier analysis and additive combinatorics.
We present two types of applications of the above results and techniques. As a positive application, we show that the “GMW protocol” for honest-but-curious parties, when implemented using shared products of random field elements (so-called “Beaver Triples”), is resilient in the local leakage model for sufficiently many parties and over certain fields. This holds even when the adversary has full access to a constant fraction of the views. As a negative application, we rule out multi-party variants of the share conversion scheme used in the 2-party homomorphic secret sharing scheme of Boyle et al. (Crypto 2016).
Fabrice Benhamouda, Akshay Degwekar, Yuval Ishai, Tal Rabin
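The abstract's negative observation for small-characteristic fields is easy to demonstrate: for additive sharing over GF(2), a single leaked bit per party (the share itself) reveals the secret, for any number of parties. A minimal Python demonstration, with illustrative names:
```python
# Additive secret sharing over GF(2) is broken by one bit of local leakage
# per party, since each one-bit share fits the leakage bound outright.
import random
from functools import reduce
from operator import xor

def share_gf2(secret_bit: int, n: int) -> list[int]:
    shares = [random.getrandbits(1) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, secret_bit))  # force XOR of all = secret
    return shares

secret = 1
leaked = share_gf2(secret, n=100)   # each party "leaks" its 1-bit share
assert reduce(xor, leaked) == secret
# Over a large prime field, by contrast, the paper shows such bounded local
# leakage reveals essentially nothing once there are sufficiently many parties.
```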

Encryption

Threshold Cryptosystems from Threshold Fully Homomorphic Encryption
Abstract
We develop a general approach to adding a threshold functionality to a large class of (non-threshold) cryptographic schemes. A threshold functionality enables a secret key to be split into a number of shares, so that only a threshold of parties can use the key, without reconstructing the key. We begin by constructing a threshold fully-homomorphic encryption scheme (ThFHE) from the learning with errors (LWE) problem. We next introduce a new concept, called a universal thresholdizer, from which many threshold systems are possible. We show how to construct a universal thresholdizer from our ThFHE. A universal thresholdizer can be used to add threshold functionality to many systems, such as CCA-secure public-key encryption (PKE), signature schemes, pseudorandom functions, and other primitives. In particular, by applying this paradigm to a (non-threshold) lattice signature system, we obtain the first single-round threshold signature scheme from LWE.
Dan Boneh, Rosario Gennaro, Steven Goldfeder, Aayush Jain, Sam Kim, Peter M. R. Rasmussen, Amit Sahai
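To make "splitting a key into shares" concrete, here is a minimal t-out-of-n Shamir sharing over a prime field in Python. Note that this toy does reconstruct the key, whereas the point of the paper's thresholdizer is to use the shares homomorphically without ever reconstructing it; all parameters are illustrative.
```python
# Minimal t-out-of-n Shamir secret sharing over a prime field: any t shares
# determine the degree-(t-1) polynomial, and hence the secret at x = 0.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a toy 16-byte key

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)
shares = share(key, t=3, n=5)
assert reconstruct(shares[:3]) == key   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == key
```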
Multi-Input Functional Encryption for Inner Products: Function-Hiding Realizations and Constructions Without Pairings
Abstract
We present new constructions of multi-input functional encryption (MIFE) schemes for the inner-product functionality that improve the state of the art solution of Abdalla et al. (Eurocrypt 2017) in two main directions.
First, we put forward a novel methodology to convert single-input functional encryption for inner products into multi-input schemes for the same functionality. Our transformation is surprisingly simple, general and efficient. In particular, it does not require pairings and it can be instantiated with all known single-input schemes. This leads to two main advances. First, we enlarge the set of assumptions this primitive can be based on, notably, obtaining new MIFEs for inner products from plain DDH, LWE, and Decisional Composite Residuosity. Second, we obtain the first MIFE schemes from standard assumptions where decryption works efficiently even for messages of super-polynomial size.
Our second main contribution is the first function-hiding MIFE scheme for inner products based on standard assumptions. To this end, we show how to extend the original, pairing-based, MIFE by Abdalla et al. in order to make it function hiding, thus obtaining a function-hiding MIFE from the MDDH assumption.
Michel Abdalla, Dario Catalano, Dario Fiore, Romain Gay, Bogdan Ursu

Symmetric Cryptography

Encrypt or Decrypt? To Make a Single-Key Beyond Birthday Secure Nonce-Based MAC
Abstract
At CRYPTO 2016, Cogliati and Seurin proposed a highly secure nonce-based MAC called the Encrypted Wegman-Carter with Davies-Meyer (\(\textsf {EWCDM}\)) construction, computed as \(\textsf {E}_{K_2}\bigl (\textsf {E}_{K_1}(N)\oplus N\oplus \textsf {H}_{K_h}(M)\bigr )\) for a nonce N and a message M. This construction achieves MAC security up to roughly \(2^{2n/3}\) queries, under the assumption that \(\textsf {E}\) is a PRP-secure n-bit block cipher and \(\textsf {H}\) is an almost-xor-universal n-bit hash function. In this paper we propose the Decrypted Wegman-Carter with Davies-Meyer (\(\textsf {DWCDM}\)) construction, which is structurally very similar to its predecessor \(\textsf {EWCDM}\) except that the outer encryption call is replaced by decryption. The biggest advantage of \(\textsf {DWCDM}\) is that we can make a truly single-key MAC: the two block cipher calls can use the same block cipher key \(K=K_1=K_2\). Moreover, we can derive the hash key as \(K_h=\textsf {E}_K(1)\), as long as \(|K_h|=n\). Whether we use encryption or decryption in the outer layer makes a huge difference; using decryption instead enables us to apply an extended version of Patarin's mirror theory to the security analysis of the construction. \(\textsf {DWCDM}\) is secure beyond the birthday bound, roughly up to \(2^{2n/3}\) MAC queries and \(2^n\) verification queries against nonce-respecting adversaries. \(\textsf {DWCDM}\) remains secure up to \(2^{n/2}\) MAC queries and \(2^n\) verification queries against nonce-misusing adversaries.
Nilanjan Datta, Avijit Dutta, Mridul Nandi, Kan Yasuda
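The following Python sketch transcribes the two formulas, with a toy 32-bit Feistel network standing in for the block cipher E and a truncated HMAC standing in for the almost-xor-universal hash H; these stand-ins and all sizes are illustrative assumptions, not the paper's instantiation.
```python
# Toy transcription of EWCDM and DWCDM. A 4-round Feistel over 32 bits
# plays the block cipher E (so it is invertible), and truncated HMAC plays
# the AXU hash H. Toy components only; here n = 32.
import hmac, hashlib

N_BYTES = 4
MASK = (1 << 16) - 1

def _round(k: bytes, x: int, i: int) -> int:
    d = hmac.new(k, bytes([i]) + x.to_bytes(2, "big"), hashlib.sha256).digest()
    return int.from_bytes(d[:2], "big")

def E(k: bytes, block: int) -> int:            # a toy PRP
    left, right = block >> 16, block & MASK
    for i in range(4):
        left, right = right, left ^ _round(k, right, i)
    return (left << 16) | right

def E_inv(k: bytes, block: int) -> int:        # Feistel rounds undone in reverse
    left, right = block >> 16, block & MASK
    for i in reversed(range(4)):
        left, right = right ^ _round(k, left, i), left
    return (left << 16) | right

def H(kh: bytes, msg: bytes) -> int:           # stand-in for an AXU hash
    return int.from_bytes(hmac.new(kh, msg, hashlib.sha256).digest()[:N_BYTES], "big")

def ewcdm(k1: bytes, k2: bytes, kh: bytes, nonce: int, msg: bytes) -> int:
    return E(k2, E(k1, nonce) ^ nonce ^ H(kh, msg))

def dwcdm(k: bytes, nonce: int, msg: bytes) -> int:
    kh = E(k, 1).to_bytes(N_BYTES, "big")      # hash key derived as E_K(1)
    return E_inv(k, E(k, nonce) ^ nonce ^ H(kh, msg))  # outer call decrypts

print(hex(dwcdm(b"single key!!", nonce=42, msg=b"hello")))
```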
Rasta: A Cipher with Low ANDdepth and Few ANDs per Bit
Abstract
Recent developments in multi-party computation (MPC) and fully homomorphic encryption (FHE) promoted the design and analysis of symmetric cryptographic schemes that minimize multiplications in one way or another. In this paper, we propose with Rasta a design strategy for symmetric encryption that has ANDdepth d and at the same time only needs d ANDs per encrypted bit. Even for very low values of d between 2 and 6 we can give strong evidence that attacks may not exist. This contributes to a better understanding of the limits of what concrete symmetric-key constructions can theoretically achieve with respect to AND-related metrics, and is to the best of our knowledge the first attempt that minimizes both metrics simultaneously. Furthermore, we can give evidence that for choices of d between 4 and 6 the resulting implementation properties may well be competitive by testing our construction in the use-case of removing the large ciphertext expansion when using the BGV scheme.
Christoph Dobraunig, Maria Eichlseder, Lorenzo Grassi, Virginie Lallemand, Gregor Leander, Eik List, Florian Mendel, Christian Rechberger
Non-Uniform Bounds in the Random-Permutation, Ideal-Cipher, and Generic-Group Models
Abstract
The random-permutation model (RPM) and the ideal-cipher model (ICM) are idealized models that offer a simple and intuitive way to assess the conjectured standard-model security of many important symmetric-key and hash-function constructions. Similarly, the generic-group model (GGM) captures generic algorithms against assumptions in cyclic groups by modeling encodings of group elements as random injections, and allows one to derive simple bounds on the advantage of such algorithms.
Unfortunately, both well-known attacks, e.g., based on rainbow tables (Hellman, IEEE Transactions on Information Theory ’80), and more recent ones, e.g., against the discrete-logarithm problem (Corrigan-Gibbs and Kogan, EUROCRYPT ’18), suggest that the concrete security bounds one obtains from such idealized proofs are often completely inaccurate if one considers non-uniform or preprocessing attacks in the standard model. To remedy this situation, this work
  • defines the auxiliary-input (AI) RPM/ICM/GGM, which capture both non-uniform and preprocessing attacks by allowing an attacker to leak an arbitrary (bounded-output) function of the oracle’s function table;
  • derives the first non-uniform bounds for a number of important practical applications in the AI-RPM/ICM, including constructions based on the Merkle-Damgård and sponge paradigms, which underlie the SHA hashing standards, and for AI-RPM/ICM applications with computational security; and
  • using simpler proofs, recovers the AI-GGM security bounds obtained by Corrigan-Gibbs and Kogan against preprocessing attackers, for a number of assumptions related to cyclic groups, such as discrete logarithms and Diffie-Hellman problems, and provides new bounds for two assumptions.
An important step in obtaining these results is to port the tools used in recent work by Coretti et al. (EUROCRYPT ’18) from the ROM to the RPM/ICM/GGM, resulting in very powerful and easy-to-use tools for proving security bounds against non-uniform and preprocessing attacks.
Sandro Coretti, Yevgeniy Dodis, Siyao Guo
Provable Security of (Tweakable) Block Ciphers Based on Substitution-Permutation Networks
Abstract
Substitution-Permutation Networks (SPNs) refer to a family of constructions which build a wn-bit block cipher from n-bit public permutations (often called S-boxes), alternating keyless, “local” substitution steps utilizing such S-boxes with keyed, “global” permutation steps that are non-cryptographic. Many widely deployed block ciphers are constructed based on SPNs, but there are essentially no provable-security results about SPNs.
In this work, we initiate a comprehensive study of the provable security of SPNs as (possibly tweakable) wn-bit block ciphers, when the underlying n-bit permutation is modeled as a public random permutation. When the permutation step is linear (which is the case for most existing designs), we show that 3 SPN rounds are necessary and sufficient for security. On the other hand, even 1-round SPNs can be secure when non-linearity is allowed. Moreover, 2-round non-linear SPNs can achieve “beyond-birthday” (up to \(2^{2n/3}\) adversarial queries) security, and, as the number of non-linear rounds increases, our bounds are meaningful for the number of queries approaching \(2^n\). Finally, our non-linear SPNs can be made tweakable by incorporating the tweak into the permutation layer, and provide good multi-user security.
As an application, our construction can turn two public n-bit permutations (or fixed-key block ciphers) into a tweakable block cipher working on wn-bit inputs, 6n-bit key and an n-bit tweak (for any \(w\ge 2\)); the tweakable block cipher provides security up to \(2^{2n/3}\) adversarial queries in the random permutation model, while only requiring w calls to each permutation, and 3w field multiplications for each wn-bit input.
Benoît Cogliati, Yevgeniy Dodis, Jonathan Katz, Jooyoung Lee, John Steinberger, Aishwarya Thiruvengadam, Zhe Zhang
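A toy one-round SPN in Python, to make the abstract's decomposition concrete: a keyless "local" S-box layer over n-bit words followed by a keyed "global" linear layer. The S-box and the linear layer below are arbitrary invertible placeholders, not a secure or recommended instantiation.
```python
# One toy SPN round on w n-bit words (w = 4, n = 8): keyless local
# substitution, then a keyed global (linear, invertible) permutation step.
W, N = 4, 8
SBOX = [(x * 7 + 3) % 256 for x in range(256)]   # bijective since gcd(7, 256) = 1

def spn_round(state: list[int], round_key: list[int]) -> list[int]:
    # Local substitution: the public S-box applied to each n-bit word.
    state = [SBOX[x] for x in state]
    # Global keyed permutation: an invertible GF(2)-linear word mixing,
    # followed by a round-key XOR.
    mixed = [state[i] ^ state[(i + 1) % W] ^ state[(i + 2) % W] for i in range(W)]
    return [m ^ k for m, k in zip(mixed, round_key)]

print(spn_round([1, 2, 3, 4], [0xAA, 0xBB, 0xCC, 0xDD]))
```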

Proofs of Work and Proofs of Stake

Verifiable Delay Functions
Abstract
We study the problem of building a verifiable delay function (VDF). A VDF requires a specified number of sequential steps to evaluate, yet produces a unique output that can be efficiently and publicly verified. VDFs have many applications in decentralized systems, including public randomness beacons, leader election in consensus protocols, and proofs of replication. We formalize the requirements for VDFs and present new candidate constructions that are the first to achieve an exponential gap between evaluation and verification time.
Dan Boneh, Joseph Bonneau, Benedikt Bünz, Ben Fisch
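The "delay" half of a VDF can be illustrated with the classic sequential-squaring time-lock computation, sketched below in Python; the paper's actual contribution, efficient public verifiability of the output, is not captured by this toy, and the modulus here has published factors.
```python
# Sequential squaring: computing y = x^(2^T) mod N is believed to require
# T sequential squarings when the factorization of N is unknown. This
# sketches only the delay property, not the VDF's efficient verification.
def sequential_squaring(x: int, T: int, N: int) -> int:
    y = x % N
    for _ in range(T):       # inherently sequential: each step needs the last
        y = y * y % N
    return y

# Toy modulus built from two known Mersenne primes (so its factors are
# public); a real instantiation needs a group of genuinely unknown order.
N = (2**61 - 1) * (2**89 - 1)
print(sequential_squaring(x=3, T=100_000, N=N))
```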
Proofs of Work From Worst-Case Assumptions
Abstract
We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of Ball et al. (STOC ’17), which presents PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. Those PoWs, however, were presented as a ‘proof of concept’ of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs’ structure can be exploited in ways previous heuristic PoWs could not.
This creates full PoWs that are provably hard from worst-case assumptions (previously, PoWs were either only based on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS ’16)) while still retaining significant structure to enable extra properties of our PoWs. Namely, we show that the PoWs of (Ball et al., STOC ’17) can be modified to have much faster verification time, can be proved in zero knowledge, and more.
Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.
Marshall Ball, Alon Rosen, Manuel Sabin, Prashant Nalini Vasudevan
Backmatter
Metadata

Title: Advances in Cryptology – CRYPTO 2018
Editors: Hovav Shacham, Alexandra Boldyreva
Copyright Year: 2018
Electronic ISBN: 978-3-319-96884-1
Print ISBN: 978-3-319-96883-4
DOI: https://doi.org/10.1007/978-3-319-96884-1
