2018 | Book

Advances in Cryptology – EUROCRYPT 2018

37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Tel Aviv, Israel, April 29 - May 3, 2018 Proceedings, Part II


About this book

The three volumes LNCS 10820, 10821, and 10822 constitute the thoroughly refereed proceedings of the 37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT 2018, held in Tel Aviv, Israel, in April/May 2018.
The 69 full papers presented were carefully reviewed and selected from 294 submissions. The papers are organized into the following topical sections: foundations; lattices; random oracle model; fully homomorphic encryption; permutations; Galois counter mode; attribute-based encryption; secret sharing; blockchain; multi-collision resistance; signatures; private simultaneous messages; masking; theoretical multiparty computation; obfuscation; symmetric cryptanalysis; zero-knowledge; implementing multiparty computation; non-interactive zero-knowledge; anonymous communication; isogeny; leakage; key exchange; quantum; non-malleable codes; and provable symmetric cryptography.

Table of Contents

Frontmatter

Blockchain

Frontmatter
Thunderella: Blockchains with Optimistic Instant Confirmation
Abstract
State machine replication, or “consensus”, is a central abstraction for distributed systems where a set of nodes seek to agree on an ever-growing, linearly-ordered log. In this paper, we propose a practical new paradigm called Thunderella for achieving state machine replication by combining a fast, asynchronous path with a (slow) synchronous “fall-back” path (which only gets executed if something goes wrong); as a consequence, we get simple state machine replication protocols that are essentially as robust as the best synchronous protocols, yet “optimistically” (if a super majority of the players are honest), the protocol “instantly” confirms transactions.
We provide instantiations of this paradigm in both permissionless (using proof-of-work) and permissioned settings. Most notably, this yields a new blockchain protocol (for the permissionless setting) that remains resilient assuming only that a majority of the computing power is controlled by honest players, yet optimistically—if 3/4 of the computing power is controlled by honest players, and a special player called the “accelerator”, is honest—transactions are confirmed as fast as the actual message delay in the network. We additionally show the 3/4 optimistic bound is tight for protocols that are resilient assuming only an honest majority.
Rafael Pass, Elaine Shi
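
To make the optimistic fast path concrete, here is a minimal sketch of the confirmation rule described above; the helper names (`Ack`, `optimistically_confirmed`) and the committee abstraction are illustrative stand-ins, not the paper's formal protocol.

```python
# Sketch of Thunderella's optimistic fast path (illustrative only; the
# names below are hypothetical, not the paper's formalism).
from dataclasses import dataclass

@dataclass(frozen=True)
class Ack:
    committee_member: str   # identity of the acknowledging member
    txn: str                # transaction being acknowledged

def optimistically_confirmed(txn: str, acks: list[Ack],
                             committee: set[str],
                             accelerator_signed: bool) -> bool:
    """A transaction is 'instantly' confirmed on the fast path when the
    accelerator has signed it and more than 3/4 of the committee acked
    it; otherwise the protocol falls back to the slow synchronous chain."""
    ackers = {a.committee_member for a in acks
              if a.txn == txn and a.committee_member in committee}
    return accelerator_signed and len(ackers) > 3 * len(committee) / 4

# Example: an 8-member committee where 7 members acknowledged.
committee = {f"P{i}" for i in range(8)}
acks = [Ack(f"P{i}", "tx42") for i in range(7)]
print(optimistically_confirmed("tx42", acks, committee, accelerator_signed=True))
```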
But Why Does It Work? A Rational Protocol Design Treatment of Bitcoin
Abstract
An exciting recent line of work has focused on formally investigating the core cryptographic assumptions underlying the security of Bitcoin. In a nutshell, these works conclude that Bitcoin is secure if and only if the majority of the mining power is honest. Despite their great impact, however, these works do not address an incisive question asked by positivists and Bitcoin critics, which is fuelled by the fact that Bitcoin indeed works in reality: Why should the real-world system adhere to these assumptions?
In this work we employ the machinery from the Rational Protocol Design (RPD) framework by Garay et al. [FOCS 2013] to analyze Bitcoin and address questions such as the above. We show that under the natural class of incentives for the miners’ behavior—i.e., rewarding them for adding blocks to the blockchain but having them pay for mining—we can reserve the honest majority assumption as a fallback, or even, depending on the application, completely replace it by the assumption that the miners aim to maximize their revenue.
Our results underscore the appropriateness of RPD as a “rational cryptography” framework for analyzing Bitcoin. Along the way, we devise significant extensions to the original RPD machinery that broaden its applicability to cryptocurrencies, which may be of independent interest.
Christian Badertscher, Juan Garay, Ueli Maurer, Daniel Tschudi, Vassilis Zikas
Ouroboros Praos: An Adaptively-Secure, Semi-synchronous Proof-of-Stake Blockchain
Abstract
We present “Ouroboros Praos”, a proof-of-stake blockchain protocol that, for the first time, provides security against fully-adaptive corruption in the semi-synchronous setting: Specifically, the adversary can corrupt any participant of a dynamically evolving population of stakeholders at any moment, as long as the stakeholder distribution maintains an honest majority of stake; furthermore, the protocol tolerates an adversarially-controlled message delivery delay unknown to protocol participants.
To achieve these guarantees we formalize and realize in the universal composition setting a suitable form of forward secure digital signatures and a new type of verifiable random function that maintains unpredictability under malicious key generation. Our security proof develops a general combinatorial framework for the analysis of semi-synchronous blockchains that may be of independent interest. We prove our protocol secure under standard cryptographic assumptions in the random oracle model.
Bernardo David, Peter Gaži, Aggelos Kiayias, Alexander Russell
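
The following is a schematic sketch of Praos-style private slot-leader election, with a keyed hash standing in for the paper's malicious-key-secure VRF; the function names and the default active-slot coefficient are illustrative assumptions, though the stake-dependent threshold \(1-(1-f)^{\alpha }\) follows the protocol's description.

```python
# Schematic slot-leader election in the spirit of Ouroboros Praos.
# The real protocol uses a verifiable random function (VRF) that stays
# unpredictable under malicious key generation; here a keyed hash stands
# in for the VRF, so this is a structural sketch only.
import hashlib

def vrf_stand_in(secret_key: bytes, slot: int, epoch_nonce: bytes) -> float:
    """Map (sk, slot, nonce) to a pseudo-random value in [0, 1)."""
    digest = hashlib.sha256(secret_key + slot.to_bytes(8, "big") + epoch_nonce).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def is_slot_leader(secret_key: bytes, slot: int, epoch_nonce: bytes,
                   relative_stake: float, f: float = 0.05) -> bool:
    """A stakeholder leads a slot when its VRF output falls below a
    threshold that grows with its stake (f is the active-slot coefficient)."""
    threshold = 1.0 - (1.0 - f) ** relative_stake
    return vrf_stand_in(secret_key, slot, epoch_nonce) < threshold

# Example: a party holding 10% of the stake checks slots 0..99.
sk, nonce = b"party-secret", b"epoch-randomness"
print([s for s in range(100) if is_slot_leader(sk, s, nonce, 0.10)])
```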
Sustained Space Complexity
Abstract
Memory-hard functions (MHF) are functions whose evaluation cost is dominated by memory cost. MHFs are egalitarian, in the sense that evaluating them on dedicated hardware (like FPGAs or ASICs) is not much cheaper than on off-the-shelf hardware (like x86 CPUs). MHFs have interesting cryptographic applications, most notably to password hashing and securing blockchains.
Alwen and Serbinenko [STOC’15] define the cumulative memory complexity (cmc) of a function as the sum (over all time-steps) of the amount of memory required to compute the function. They advocate that a good MHF must have high cmc. Unlike previous notions, cmc takes into account that dedicated hardware might exploit amortization and parallelism. Still, cmc has been criticized as insufficient, as it fails to capture possible time-memory trade-offs; as memory cost doesn’t scale linearly, functions with the same cmc could still have very different actual hardware cost.
In this work we address this problem, and introduce the notion of sustained-memory complexity, which requires that any algorithm evaluating the function must use a large amount of memory for many steps. We construct functions (in the parallel random oracle model) whose sustained-memory complexity is almost optimal: our function can be evaluated using n steps and \(O(n/\log (n))\) memory, in each step making one query to the (fixed-input length) random oracle, while any algorithm that can make arbitrary many parallel queries to the random oracle, still needs \(\varOmega (n/\log (n))\) memory for \(\varOmega (n)\) steps.
As has been done for various notions (including cmc) before, we reduce the task of constructing MHFs with high sustained-memory complexity to proving pebbling lower bounds on DAGs. Our main technical contribution is the construction of a family of DAGs on n nodes with constant indegree and high “sustained-space complexity”, meaning that any parallel black-pebbling strategy requires \(\varOmega (n/\log (n))\) pebbles for at least \(\varOmega (n)\) steps.
Along the way we construct a family of maximally “depth-robust” DAGs with maximum indegree \(O(\log n)\), improving upon the construction of Mahmoody et al. [ITCS’13] which had maximum indegree \(O\left( \log ^2 n \cdot {{\mathsf {polylog}}} (\log n)\right) \).
Joël Alwen, Jeremiah Blocki, Krzysztof Pietrzak
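
Since the argument reduces sustained-memory complexity to pebbling, the two graph measures involved are easy to state in code; this small sketch (our own illustration, not from the paper) computes the cumulative and s-sustained space of a given parallel pebbling, represented as a sequence of pebble sets.

```python
# Pebbling-based complexity measures: given a (parallel) pebbling of a
# DAG as a sequence of pebble configurations P_0, P_1, ..., cumulative
# complexity sums |P_t| over time, while s-sustained space counts the
# steps that keep at least s pebbles on the graph.
def cumulative_complexity(pebbling: list[set[int]]) -> int:
    return sum(len(p) for p in pebbling)

def sustained_space(pebbling: list[set[int]], s: int) -> int:
    return sum(1 for p in pebbling if len(p) >= s)

# Toy pebbling of the path graph 0 -> 1 -> 2 -> 3, keeping only the frontier:
pebbling = [{0}, {0, 1}, {1, 2}, {2, 3}]
print(cumulative_complexity(pebbling))  # 7
print(sustained_space(pebbling, 2))     # 3 steps hold >= 2 pebbles
```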

Multi-collision Resistance

Frontmatter
Multi-Collision Resistant Hash Functions and Their Applications
Abstract
Collision resistant hash functions are functions that shrink their input, but for which it is computationally infeasible to find a collision, namely two strings that hash to the same value (although collisions are abundant).
In this work we study multi-collision resistant hash functions (\(\mathsf {MCRH}\)), a natural relaxation of collision resistant hash functions in which it is difficult to find a t-way collision (i.e., t strings that hash to the same value) although finding \((t-1)\)-way collisions could be easy. We show the following:
  • The existence of \(\mathsf {MCRH}\) follows from the average case hardness of a variant of the Entropy Approximation problem. The goal in this problem (Goldreich, Sahai and Vadhan, CRYPTO ’99) is to distinguish circuits whose output distribution has high entropy from those having low entropy.
  • \(\mathsf {MCRH}\) imply the existence of constant-round statistically hiding (and computationally binding) commitment schemes. As a corollary, using a result of Haitner et al. (SICOMP, 2015), we obtain a black-box separation of \(\mathsf {MCRH}\) from any one-way permutation.
Itay Berman, Akshay Degwekar, Ron D. Rothblum, Prashant Nalini Vasudevan
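
For intuition, t-way collisions are abundant in any compressing function; an \(\mathsf {MCRH}\) only requires that they be hard to find for the chosen t. A toy brute-force search (our illustration, with an artificially tiny 8-bit hash) that exhibits a 3-way collision:

```python
# Finding a 3-way collision of a toy 8-bit hash by brute force. At real
# output sizes this search is infeasible, which is exactly the gap that
# multi-collision resistance formalizes.
import hashlib
from collections import defaultdict

def toy_hash(x: bytes) -> int:
    return hashlib.sha256(x).digest()[0]   # keep only 8 bits: very compressing

buckets = defaultdict(list)
for i in range(2000):
    msg = i.to_bytes(2, "big")
    h = toy_hash(msg)
    buckets[h].append(msg)
    if len(buckets[h]) == 3:               # first 3-way collision found
        print([m.hex() for m in buckets[h]])
        break
```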
Collision Resistant Hashing for Paranoids: Dealing with Multiple Collisions
Abstract
A collision resistant hash (CRH) function is one that compresses its input, yet it is hard to find a collision, i.e., \(x_1 \ne x_2\) such that \(h(x_1) = h(x_2)\). Collision resistant hash functions are among the most useful cryptographic primitives both in theory and in practice; two prominent applications are signature schemes and succinct zero-knowledge arguments.
In this work we consider a relaxation of the above requirement that we call Multi-CRH: a function where it is hard to find \(x_1, x_2, \ldots , x_k\) which are all distinct, yet \( h(x_1) = h(x_2) = \cdots = h(x_k)\). We show that for some of the major applications of CRH functions it is possible to replace them by the weaker notion of a Multi-CRH, albeit at the price of adding interaction: we show a constant-round statistically-hiding commitment scheme with succinct interaction (committing to \(\mathsf {poly}(n)\) bits requires exchanging \(\tilde{O}(n)\) bits) that can be opened locally (without revealing the full string). This in turn can be used to provide succinct arguments for any \({\textsf {NP}}\) statement.
We formulate four possible worlds of hashing-related assumptions (in the spirit of Impagliazzo’s worlds). They are (1) Nocrypt, where no one-way functions exist, (2) Unihash, where one-way functions exist, and hence also UOWHFs and signature schemes, but no Multi-CRH functions exist, (3) Minihash, where Multi-CRH functions exist but no CRH functions exist, and (4) Hashomania, where CRH functions exist. We show that these four worlds are distinct in a black-box model: we show a separation of CRH from Multi-CRH and a separation of Multi-CRH from one-way functions.
Ilan Komargodski, Moni Naor, Eylon Yogev

Signatures

Frontmatter
Synchronized Aggregate Signatures from the RSA Assumption
Abstract
In this work we construct efficient aggregate signatures from the RSA assumption in the synchronized setting. In this setting, the signing algorithm takes as input a (time) period t as well as the secret key and message. A signer should sign at most once for each t. A set of signatures can be aggregated so long as they were all created for the same period t. Synchronized aggregate signatures are useful in systems where there is a natural reporting period, such as log and sensor data, or for signatures embedded in a blockchain protocol.
We design a synchronized aggregate signature scheme that works for a bounded number of periods T that is given as a parameter to a global system setup. The big technical question is whether we can create solutions that will perform well with the large T values that we might use in practice. For instance, if one wanted signing keys to last up to ten years and be able to issue signatures every second, then we would need to support a period bound of upwards of \(2^{28}\).
We build our solution in stages where we start with an initial solution that establishes feasibility, but has an impractically large signing time where the number of exponentiations and prime searches grows linearly with T. We prove this scheme secure in the standard model under the RSA assumption with respect to honestly-generated keys. We then provide a trade-off method where one can trade off the time to create signatures against the space required to store private keys. One point in the trade-off is where each scales with \(\sqrt{T}\).
Finally, we reach our main innovation which is a scheme where both the signing time and storage scale with \(\lg {T}\) which allows for us to keep both computation and storage costs modest even for large values of T. Conveniently, our final scheme uses the same verification algorithm, and has the same distribution of public keys and signatures as the first scheme. Thus we are able to recycle the existing security proof for the new scheme.
We also extend our results to the identity-based setting in the random oracle model, which can further reduce the overall cryptographic overhead. We conclude with a detailed evaluation of the signing time and storage requirements for various settings of the system parameters.
Susan Hohenberger, Brent Waters
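
To illustrate the \(\sqrt{T}\) point on the trade-off curve in generic terms, the sketch below uses a one-way hash chain as a stand-in for the paper's RSA-based key material, so it shows only the time/space bookkeeping, not the actual scheme: storing a checkpoint every \(\sqrt{T}\) steps bounds the per-period derivation cost by \(\sqrt{T}\) evaluations.

```python
# Generic sqrt(T) time/space trade-off on a one-way chain (a stand-in
# for the paper's RSA construction, not the scheme itself): keep
# sqrt(T) checkpoints so that any period's value is reachable in at
# most sqrt(T) steps from the nearest stored checkpoint.
import hashlib
import math

def step(state: bytes) -> bytes:
    return hashlib.sha256(state).digest()

def make_checkpoints(seed: bytes, T: int) -> tuple[list[bytes], int]:
    stride = math.isqrt(T)
    state, checkpoints = seed, [seed]
    for t in range(1, T):
        state = step(state)
        if t % stride == 0:
            checkpoints.append(state)      # ~sqrt(T) stored states
    return checkpoints, stride

def value_at(checkpoints: list[bytes], stride: int, t: int) -> bytes:
    state = checkpoints[t // stride]       # nearest stored checkpoint
    for _ in range(t % stride):            # at most sqrt(T) evaluations
        state = step(state)
    return state

cps, stride = make_checkpoints(b"seed", T=1 << 10)
print(value_at(cps, stride, 777).hex()[:16])
```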
More Efficient (Almost) Tightly Secure Structure-Preserving Signatures
Abstract
We provide a structure-preserving signature (SPS) scheme with an (almost) tight security reduction to a standard assumption. Compared to the state-of-the-art tightly secure SPS scheme of Abe et al. (CRYPTO 2017), our scheme has smaller signatures and public keys (of about \(56\%\), resp. \(40\%\) of the size of signatures and public keys in Abe et al.’s scheme), and a lower security loss (of \(\mathbf{O}(\log Q)\) instead of \(\mathbf{O}(\lambda )\), where \(\lambda \) is the security parameter, and \(Q=\mathsf {poly}(\lambda )\) is the number of adversarial signature queries).
While our scheme is still less compact than structure-preserving signature schemes without tight security reduction, it significantly lowers the price to pay for a tight security reduction. In fact, when accounting for a non-tight security reduction with larger key (i.e., group) sizes, the computational efficiency of our scheme becomes at least comparable to that of non-tightly secure SPS schemes.
Technically, we combine and refine recent existing works on tightly secure encryption and SPS schemes. Our technical novelties include a modular treatment (that develops an SPS scheme out of a basic message authentication code), and a refined hybrid argument that enables a lower security loss of \(\mathbf{O}(\log Q)\) (instead of \(\mathbf{O}(\lambda )\)).
Romain Gay, Dennis Hofheinz, Lisa Kohl, Jiaxin Pan

Private Simultaneous Messages

Frontmatter
The Communication Complexity of Private Simultaneous Messages, Revisited
Abstract
Private Simultaneous Message (PSM) protocols were introduced by Feige, Kilian and Naor (STOC ’94) as a minimal non-interactive model for information-theoretic three-party secure computation. While it is known that every function \(f:\{0,1\}^k\times \{0,1\}^k \rightarrow \{0,1\}\) admits a PSM protocol with exponential communication of \(2^{k/2}\) (Beimel et al., TCC ’14), the best known (non-explicit) lower-bound is \(3k-O(1)\) bits. To prove this lower-bound, FKN identified a set of simple requirements, showed that any function that satisfies these requirements is subject to the \(3k-O(1)\) lower-bound, and proved that a random function is likely to satisfy the requirements.
We revisit the FKN lower-bound and prove the following results:
(Counterexample) We construct a function that satisfies the FKN requirements but has a PSM protocol with communication of \(2k+O(1)\) bits, revealing a gap in the FKN proof.
(PSM lower-bounds) We show that, by imposing additional requirements, the FKN argument can be fixed, leading to a \(3k-O(\log k)\) lower-bound for a random function. We also get a similar lower-bound for a function that can be computed by a polynomial-size circuit (or even polynomial-time Turing machine under standard complexity-theoretic assumptions). This yields the first non-trivial lower-bound for an explicit Boolean function, partially resolving an open problem of Data, Prabhakaran and Prabhakaran (Crypto ’14, IEEE Information Theory ’16). We further extend these results to the setting of imperfect PSM protocols which may have small correctness or privacy error.
(CDS lower-bounds) We show that the original FKN argument applies (as is) to some weak form of PSM protocols which are strongly related to the setting of Conditional Disclosure of Secrets (CDS). This connection yields a simple combinatorial criterion for establishing linear \(\varOmega (k)\)-bit CDS lower-bounds. As a corollary, we settle the complexity of the Inner Product predicate resolving an open problem of Gay, Kerenidis, and Wee (Crypto ’15).
Benny Applebaum, Thomas Holenstein, Manoj Mishra, Ofer Shayevitz
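
As a reminder of the PSM model itself, here is the classic toy protocol for \(f(x_1,x_2)=x_1\oplus x_2\): using shared randomness hidden from the referee, each party sends a single one-time-padded message, and the referee learns the output and nothing more. (This standard example is for orientation; it is not taken from the paper.)

```python
# The textbook PSM protocol for XOR: shared randomness r masks both
# inputs, the referee XORs the two single messages and recovers only
# x1 XOR x2, since each message on its own is uniformly distributed.
import secrets

def psm_xor(x1: int, x2: int) -> int:
    r = secrets.randbits(1)        # shared randomness, hidden from referee
    m1 = x1 ^ r                    # party 1's single message
    m2 = x2 ^ r                    # party 2's single message
    return m1 ^ m2                 # referee's output: x1 XOR x2

assert all(psm_xor(a, b) == a ^ b for a in (0, 1) for b in (0, 1))
print("correct on all inputs")
```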
The Complexity of Multiparty PSM Protocols and Related Models
Abstract
We study the efficiency of computing arbitrary k-argument functions in the Private Simultaneous Messages (PSM) model of [10, 14]. This question was recently studied by Beimel et al. [6] in the two-party case (\(k=2\)). We tackle this question in the general case of PSM protocols for \(k\ge 2\) parties. Our motivation is two-fold: On one hand, there are various applications (old and new) of PSM protocols for constructing other cryptographic primitives, where obtaining more efficient PSM protocols implies more efficient primitives. On the other hand, improved PSM protocols are an interesting goal in their own right. In particular, we pay careful attention to the case of a small number of parties (e.g., \(k=3,4,5\)), which may be especially interesting in practice, and optimize our protocols for those cases.
Our new upper bounds include a k-party PSM protocol, for any \(k>2\) and any function \(f:[N]^k\rightarrow \{0,1\}\), of complexity \(O(\mathsf {poly}(k)\cdot N^{k/2})\) (compared to the previous upper bound of \(O(\mathsf {poly}(k)\cdot N^{k-1})\)), and even better bounds for small values of k; e.g., an \(O(N)\) PSM protocol for the case \(k=3\). We also handle the more involved case where different parties have inputs of different sizes, which is useful both in practice and for applications.
As applications, we obtain more efficient Non-Interactive secure Multi-Party (NIMPC) protocols (a variant of PSM, where some of the parties may collude with the referee [5]), improved ad-hoc PSM protocols (another variant of PSM, where the subset of participating parties is not known in advance [4, 7]), secret-sharing schemes for uniform access structures with smaller share size than previously known, and better homogeneous distribution designs [4] (a primitive with many cryptographic applications on its own).
Amos Beimel, Eyal Kushilevitz, Pnina Nissim

Masking

Frontmatter
Formal Verification of Masked Hardware Implementations in the Presence of Glitches
Abstract
Masking provides a high level of resistance against side-channel analysis. However, in practice there are many possible pitfalls when masking schemes are applied, and implementation flaws are easily overlooked. Over the recent years, the formal verification of masked software implementations has made substantial progress. In contrast to software implementations, hardware implementations are inherently susceptible to glitches. Therefore, the same methods tailored for software implementations are not readily applicable.
In this work, we introduce a method to formally verify the security of masked hardware implementations that takes glitches into account. Our approach does not require any intermediate modeling steps of the targeted implementation. The verification is performed directly on the circuit’s netlist in the probing model with glitches and also covers higher-order flaws. For this purpose, a sound but conservative estimation of the Fourier coefficients of each gate in the netlist is calculated; these coefficients characterize the statistical dependence of each gate on the inputs and thus allow possible leakage to be predicted. In contrast to existing practical evaluations, like t-tests, this formal verification approach makes security statements beyond specific measurement methods, the number of evaluated leakage traces, and the evaluated devices. Furthermore, flaws detected by the verifier are automatically localized. We have implemented our method on the basis of a SAT solver and demonstrate its suitability on a range of correctly and incorrectly protected circuits of different masking schemes and for different protection orders. Our verifier is efficient enough to prove the security of a full masked first-order AES S-box, and of the Keccak S-box up to the third protection order.
Roderick Bloem, Hannes Gross, Rinat Iusupov, Bettina Könighofer, Stefan Mangard, Johannes Winter
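
For readers unfamiliar with the Fourier view used by the verifier: over \(\{\pm 1\}\)-encoded wires, a gate's Walsh-Fourier coefficients measure its correlation with every subset of inputs, and a non-zero coefficient on a set involving a single share of a secret signals potential leakage. A minimal sketch of the underlying definition (not the paper's conservative estimator):

```python
# Walsh-Fourier coefficients of a gate g : {-1,1}^n -> {-1,1}, computed
# exactly by averaging g(x)*chi_S(x) over all inputs, where chi_S is the
# product of the wires in subset S.
from itertools import product

def fourier_coefficients(gate, n: int) -> dict:
    points = list(product((-1, 1), repeat=n))
    coeffs = {}
    for subset in product((0, 1), repeat=n):   # indicator vector of S
        total = 0
        for x in points:
            chi = 1
            for xi, sel in zip(x, subset):
                if sel:
                    chi *= xi
            total += gate(x) * chi
        coeffs[subset] = total / len(points)
    return coeffs

# AND of two wires in the +/-1 encoding (-1 encodes logical 1):
and_gate = lambda x: -1 if x == (-1, -1) else 1
print(fourier_coefficients(and_gate, 2))
# {(0, 0): 0.5, (0, 1): 0.5, (1, 0): 0.5, (1, 1): -0.5}
```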
Masking the GLP Lattice-Based Signature Scheme at Any Order
Abstract
Recently, numerous physical attacks have been demonstrated against lattice-based schemes, often exploiting their unique properties such as the reliance on Gaussian distributions, rejection sampling and FFT-based polynomial multiplication. As the call for concrete implementations and deployment of post-quantum cryptography becomes more pressing, protecting against those attacks is an important problem. However, few countermeasures have been proposed so far. In particular, masking has been applied to the decryption procedure of some lattice-based encryption schemes, but the much more difficult case of signatures (which are highly non-linear and typically involve randomness) has not been considered until now.
In this paper, we describe the first masked implementation of a lattice-based signature scheme. Since masking Gaussian sampling and other procedures involving contrived probability distributions would be prohibitively inefficient, we focus on the GLP scheme of Güneysu, Lyubashevsky and Pöppelmann (CHES 2012). We show how to provably mask it in the Ishai–Sahai–Wagner model (CRYPTO 2003) at any order in a relatively efficient manner, using extensions of the techniques of Coron et al. for converting between arithmetic and Boolean masking. Our proof relies on a mild generalization of probing security that supports the notion of public outputs. We also provide a proof-of-concept implementation to assess the efficiency of the proposed countermeasure.
Gilles Barthe, Sonia Belaïd, Thomas Espitau, Pierre-Alain Fouque, Benjamin Grégoire, Mélissa Rossi, Mehdi Tibouchi
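
As background for the countermeasure, here is a minimal sketch of order-d arithmetic masking over \(\mathbb {Z}_q\), the basic sharing that the GLP masking builds on. The modulus below is the one commonly quoted for GLP, but any modulus works for this sketch, and the hard part the paper addresses (masking the non-linear steps and converting between arithmetic and Boolean masking) is deliberately not shown:

```python
# Order-d arithmetic masking over Z_q: a sensitive value is split into
# d+1 random shares whose sum is the secret; any d of the shares are
# uniform and jointly independent of the secret, which is the probing
# model's guarantee.
import secrets

Q = 8383489  # modulus commonly quoted for GLP; any modulus works here

def mask(secret: int, d: int) -> list[int]:
    shares = [secrets.randbelow(Q) for _ in range(d)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def unmask(shares: list[int]) -> int:
    return sum(shares) % Q

s = 123456
shares = mask(s, d=3)            # 4 shares, secure against 3 probes
assert unmask(shares) == s
print(shares)
```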
Masking Proofs Are Tight and How to Exploit it in Security Evaluations
Abstract
Evaluating the security level of a leaking implementation against side-channel attacks is a challenging task. This is especially true when countermeasures such as masking are implemented, since in this case: (i) the amount of measurements to perform a key recovery may become prohibitive for certification laboratories, and (ii) applying optimal (multivariate) attacks may be computationally intensive and technically challenging. In this paper, we show that by taking advantage of the tightness of masking security proofs, we can significantly simplify this evaluation task in a very general manner. More precisely, we show that the evaluation of a masked implementation can essentially be reduced to that of an unprotected implementation. In addition, we show that although optimal attacks against masking schemes are computationally intensive for large numbers of shares, heuristic (soft analytical side-channel) attacks can approach optimality efficiently. As part of this second contribution, we also improve over the recent multivariate (aka horizontal) side-channel attacks proposed at CHES 2016 by Battistello et al.
Vincent Grosso, François-Xavier Standaert

Best Young Researcher Paper Award

Frontmatter
The Discrete-Logarithm Problem with Preprocessing
Abstract
This paper studies discrete-log algorithms that use preprocessing. In our model, an adversary may use a very large amount of precomputation to produce an “advice” string about a specific group (e.g., NIST P-256). In a subsequent online phase, the adversary’s task is to use the preprocessed advice to quickly compute discrete logarithms in the group. Motivated by surprising recent preprocessing attacks on the discrete-log problem, we study the power and limits of such algorithms.
In particular, we focus on generic algorithms—these are algorithms that operate in every cyclic group. We show that any generic discrete-log algorithm with preprocessing that uses an S-bit advice string, runs in online time T, and succeeds with probability \(\epsilon \), in a group of prime order N, must satisfy \(ST^2 = {\widetilde{\varOmega }}(\epsilon N)\). Our lower bound, which is tight up to logarithmic factors, uses a synthesis of incompressibility techniques and classic methods for generic-group lower bounds. We apply our techniques to prove related lower bounds for the CDH, DDH, and multiple-discrete-log problems.
Finally, we demonstrate two new generic preprocessing attacks: one for the multiple-discrete-log problem and one for certain decisional-type problems in groups. This latter result demonstrates that, for generic algorithms with preprocessing, distinguishing tuples of the form \((g,{g^x}, {g^{(x^2)}})\) from random is much easier than the discrete-log problem.
Henry Corrigan-Gibbs, Dmitry Kogan
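
To see what the bound means concretely, a short back-of-the-envelope instantiation (our arithmetic, not a quotation from the paper), using the S, T, \(\epsilon \), N of the abstract:

```latex
% Instantiating the lower bound $ST^2 = \widetilde{\Omega}(\epsilon N)$
% with constant success probability $\epsilon = \Theta(1)$: balancing
% the parameters gives
\[
  S = T = N^{1/3}
  \quad\Longrightarrow\quad
  S\,T^{2} = N .
\]
% For a group of order $N \approx 2^{256}$ (e.g., NIST P-256), even an
% adversary with unbounded preprocessing therefore needs advice and
% online time of roughly $2^{85}$; since the bound is tight up to
% logarithmic factors, generic preprocessing attacks match this
% trade-off.
```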

Best Paper Awards

Frontmatter
Simple Proofs of Sequential Work
Abstract
At ITCS 2013, Mahmoody, Moran and Vadhan [MMV13] introduced and constructed publicly verifiable proofs of sequential work: a protocol for proving that one spent sequential computational work related to some statement. The original motivation for such proofs included non-interactive time-stamping and universally verifiable CPU benchmarks. A more recent application, and our main motivation, is blockchain design, where proofs of sequential work can be used – in combination with proofs of space – as a more ecological and economical substitute for the proofs of work that currently secure Bitcoin and other cryptocurrencies.
The construction proposed by [MMV13] is based on a hash function and can be proven secure in the random oracle model, or assuming inherently sequential hash-functions, which is a new standard model assumption introduced in their work.
In a proof of sequential work, a prover gets a “statement” \(\chi \), a time parameter N and access to a hash-function \(\mathsf{H}\), which for the security proof is modelled as a random oracle. Correctness requires that an honest prover can make a verifier accept making only N queries to \(\mathsf{H}\), while soundness requires that any prover who makes the verifier accept must have made (almost) N sequential queries to \(\mathsf{H}\). Thus a solution constitutes a proof that N time passed since \(\chi \) was received. Solutions must be publicly verifiable in time at most polylogarithmic in N.
The construction of [MMV13] is based on “depth-robust” graphs, and as a consequence has rather poor concrete parameters. But the major drawback is that the prover needs not just N time, but also N space to compute a proof.
In this work we propose a proof of sequential work which is much simpler, more efficient and achieves much better concrete bounds. Most importantly, the space required can be as small as \(\log (N)\) (but we get better soundness using slightly more memory than that).
One open problem stated in [MMV13], which our construction does not solve either, is achieving a “unique” proof, where even a cheating prover can only generate a single accepting proof. This property would be extremely useful for applications to blockchains.
Bram Cohen, Krzysztof Pietrzak
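
A minimal sketch of the sequential-work core only, under the random-oracle-style interface described above: N sequential queries to \(\mathsf{H}\), each depending on the previous answer, computed with constant memory. The actual construction additionally commits to this computation so that proofs are publicly verifiable in time polylogarithmic in N, which this sketch deliberately omits.

```python
# The prover's sequential computation in a PoSW, sketched: each query
# to H depends on the previous answer, so parallelism does not help,
# and only the running state must be kept in memory.
import hashlib

def sequential_work(statement: bytes, N: int) -> bytes:
    state = statement
    for i in range(N):
        state = hashlib.sha256(state + i.to_bytes(8, "big")).digest()
    return state  # constant memory: only the running state is kept

print(sequential_work(b"chi", 1 << 16).hex())
```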
Two-Round Multiparty Secure Computation from Minimal Assumptions
Abstract
We provide new two-round multiparty secure computation (MPC) protocols assuming the minimal assumption that two-round oblivious transfer (OT) exists. If the assumed two-round OT protocol is secure against semi-honest adversaries (in the plain model) then so is our two-round MPC protocol. Similarly, if the assumed two-round OT protocol is secure against malicious adversaries (in the common random/reference string model) then so is our two-round MPC protocol. Previously, two-round MPC protocols were only known under relatively stronger computational assumptions. Finally, we provide several extensions.
Sanjam Garg, Akshayaram Srinivasan
k-Round Multiparty Computation from k-Round Oblivious Transfer via Garbled Interactive Circuits
Abstract
We present new constructions of round-efficient, or even round-optimal, Multi-Party Computation (MPC) protocols from Oblivious Transfer (OT) protocols. Our constructions establish a tight connection between MPC and OT: In the setting of semi-honest security, for any \(k \ge 2\), k-round semi-honest OT is necessary and complete for k-round semi-honest MPC. In the round-optimal case of \(k = 2\), we obtain 2-round semi-honest MPC from 2-round semi-honest OT, resolving the round complexity of semi-honest MPC under this weak and necessary assumption. In comparison, previous 2-round constructions rely on either the heavy machinery of indistinguishability obfuscation or witness encryption, or the algebraic structure of bilinear pairing groups. More generally, for an arbitrary number of rounds k, all previous constructions of k-round semi-honest MPC require at least OT with \(k'\) rounds for \(k' \le \lfloor k/2 \rfloor \).
In the setting of malicious security, we show: For any \(k \ge 5\), k-round malicious OT is necessary and complete for k-round malicious MPC. In fact, OT satisfying a weaker notion of delayed-semi-malicious security suffices. In the common reference string model, for any \(k \ge 2\), we obtain k-round malicious universally composable (UC) protocols from any k-round semi-malicious OT and non-interactive zero-knowledge. Previous 5-round protocols in the plain model, and 2-round protocols in the common reference string model, all require algebraic assumptions such as DDH or LWE.
At the core of our constructions is a new framework for garbling interactive circuits. Roughly speaking, it allows for garbling interactive machines that participate in interactions of a special form. The garbled machine can emulate the original interactions, receiving messages sent in the clear (without being encoded using secrets), and reveals only the transcript of the interactions, provided that the transcript is computationally uniquely defined. We show that garbled interactive circuits for the purpose of constructing MPC can be implemented using OT. Along the way, we also propose a new primitive of witness selector that strengthens witness encryption, and a new notion of zero-knowledge functional commitments.
Fabrice Benhamouda, Huijia Lin

Theoretical Multiparty Computation

Frontmatter
Adaptively Secure Garbling with Near Optimal Online Complexity
Abstract
We construct an adaptively secure garbling scheme with an online communication complexity of \(n+m+\mathsf {poly}(\log |C|, \lambda )\) where \(C: \{0,1\}^n \rightarrow \{0,1\}^{m}\) is the circuit being garbled, and \(\lambda \) is the security parameter. The security of our scheme can be based on (polynomial hardness of) the Computational Diffie-Hellman (CDH) assumption, or the Factoring assumption or the Learning with Errors assumption. This is nearly the best achievable in the standard model (i.e., without random oracles) as the online communication complexity must be larger than both n and m. The online computational complexity of our scheme is \(O(n+m)+\mathsf {poly}(\log |C|, \lambda )\). Previously known standard model adaptively secure garbling schemes had asymptotically worse online cost or relied on exponentially hard computational assumptions.
Sanjam Garg, Akshayaram Srinivasan
A New Approach to Black-Box Concurrent Secure Computation
Abstract
We consider the task of constructing concurrently composable protocols for general secure computation by making only black-box use of underlying cryptographic primitives. Existing approaches for this task first construct a black-box version of CCA-secure commitments which provide a strong form of concurrent security to the committed value(s). This strong form of security is then crucially used to construct higher level protocols such as concurrently secure OT/coin-tossing (and eventually all functionalities).
This work explores a fresh approach. We first aim to construct a concurrently-secure OT protocol whose concurrent security is proven directly using concurrent simulation techniques; in particular, it does not rely on the usual “non-polynomial oracles” of CCA-secure commitments. The notion of concurrent security we target is super-polynomial simulation (SPS). We show that such an OT protocol can be constructed from polynomial hardness assumptions in a black-box manner, and within a constant number of rounds. In fact, we only require the existence of (constant round) semi-honest OT and standard collision-resistant hash functions.
Next, we show that such an OT protocol is sufficient to obtain SPS-secure (concurrent) multiparty computation (MPC) for general functionalities. This transformation does not require any additional assumptions; it also maintains the black-box nature as well as the constant round feature of the original OT protocol. Prior to our work, the only known black-box construction of constant-round concurrently composable MPC required stronger assumptions; namely, verifiable perfectly binding homomorphic commitment schemes and PKE with oblivious public-key generation.
Sanjam Garg, Susumu Kiyoshima, Omkant Pandey

Obfuscation

Frontmatter
Obfustopia Built on Secret-Key Functional Encryption
Abstract
We show that indistinguishability obfuscation (IO) for all circuits can be constructed solely from secret-key functional encryption (SKFE). In the construction, the SKFE scheme needs to be able to issue an a-priori unbounded number of functional keys; that is, it must be collusion-resistant. Our strategy is to replace public-key functional encryption (PKFE) in the construction of IO proposed by Bitansky and Vaikuntanathan (FOCS 2015) with puncturable SKFE. Bitansky and Vaikuntanathan introduced the notion of puncturable SKFE and observed that the strategy works. However, it has not been clear whether we can construct puncturable SKFE without assuming PKFE. In particular, it has not been known whether puncturable SKFE can be constructed from ordinary SKFE. In this work, we show that a relaxed variant of puncturable SKFE can be constructed from collusion-resistant SKFE. Moreover, we show that the relaxed variant of puncturable SKFE is sufficient for constructing IO.
In addition, we also study the relation of collusion-resistance and succinctness for SKFE. Functional encryption is said to be weakly-succinct if the size of its encryption circuit is sub-linear in the size of functions. We show that collusion-resistant SKFE can be constructed from weakly-succinct SKFE supporting only one functional key.
By combining the above two results, we show that IO for all circuits can be constructed from weakly-succinct SKFE supporting only one functional key.
Fuyuki Kitagawa, Ryo Nishimaki, Keisuke Tanaka
Limits on Low-Degree Pseudorandom Generators (Or: Sum-of-Squares Meets Program Obfuscation)
Abstract
An m-output pseudorandom generator \(\mathcal {G}:(\{\pm 1\}^b)^n \rightarrow \{\pm 1\}^m\) that takes input n blocks of b bits each is said to be \(\ell \)-block local if every output is a function of at most \(\ell \) blocks. We show that such \(\ell \)-block local pseudorandom generators can have output length at most \(\tilde{O}(2^{\ell b} n^{\lceil \ell /2 \rceil })\), by presenting a polynomial time algorithm that distinguishes inputs of the form \(\mathcal {G}(x)\) from inputs where each coordinate is sampled from the uniform distribution on m bits.
As a corollary, we refute some conjectures recently made in the context of constructing provably secure indistinguishability obfuscation (iO). This includes refuting the assumptions underlying Lin and Tessaro’s [47] recently proposed candidate iO from bilinear maps. Specifically, they assumed the existence of a secure pseudorandom generator \(\mathcal {G}:\{ \pm 1 \}^{nb} \rightarrow \{\pm 1\}^{2^{cb}n}\) as above for large enough \(c>3\) and \(\ell =2\). (Following this work, and an independent work of Lombardi and Vaikuntanathan [49], Lin and Tessaro retracted the bilinear maps based candidate from their manuscript.)
Our results actually hold for the much wider class of low-degree, non-binary valued pseudorandom generators: if every output of \(\mathcal {G}:\{\pm 1\}^n \rightarrow \mathbb R^m\) (\(\mathbb R\) = reals) is a polynomial (over \(\mathbb R\)) of degree at most d with at most s monomials and \(m \ge \tilde{\varOmega }(sn^{\lceil d/2 \rceil })\), then there is a polynomial time algorithm for distinguishing the output \(\mathcal {G}(x)\) from z where each coordinate \(z_i\) is sampled independently from the marginal distribution on \(\mathcal {G}_i\). Furthermore, our results continue to hold under arbitrary pre-processing of the seed. This implies that any such map \(\mathcal {G}\), with arbitrary seed pre-processing, cannot be a pseudorandom generator in the mild sense of fooling a product distribution on the output space. This allows us to rule out various natural modifications to the notion of generators suggested in other works that still allow obtaining indistinguishability obfuscation from bilinear maps.
Our algorithms are based on the Sum of Squares (SoS) paradigm, and in most cases can even be defined more simply using a canonical semidefinite program. We complement our algorithm by presenting a class of candidate generators with block-wise locality 3 and constant block size, that resists both Gaussian elimination and sum of squares (SOS) algorithms whenever \(m = n^{1.5-\varepsilon }\). This class is extremely easy to describe: Let \(\mathbb G\) be any simple non-abelian group with the group operation “\(*\)”, and interpret the blocks of x as elements in \(\mathbb G\). The description of the pseudorandom generator is a sequence of m triples of indices (ijk) chosen at random and each output of the generator is of the form \(x_i *x_j *x_k\).
Boaz Barak, Zvika Brakerski, Ilan Komargodski, Pravesh K. Kothari
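
The candidate generator in the last paragraph is simple enough to write down directly. The sketch below instantiates \(\mathbb G\) with the alternating group \(A_5\) (the smallest simple non-abelian group, our choice for illustration), represents group elements as permutations of five points, and emits \(x_i *x_j *x_k\) for randomly chosen public triples:

```python
# The block-local candidate generator over a simple non-abelian group:
# the seed is n elements of G (here A5 as permutations of 5 points),
# the public description is m random index triples (i, j, k), and each
# output is the group product x_i * x_j * x_k.
import random
from itertools import permutations

def compose(p, q):
    """Permutation product p*q, acting as (p*q)(v) = p(q(v))."""
    return tuple(p[q[v]] for v in range(len(p)))

def parity(p):
    """0 for even permutations, 1 for odd (via cycle decomposition)."""
    seen, swaps = set(), 0
    for start in range(len(p)):
        v = start
        while v not in seen:
            seen.add(v)
            v = p[v]
            if v not in seen:
                swaps += 1
    return swaps % 2

A5 = [p for p in permutations(range(5)) if parity(p) == 0]  # 60 elements

def group_prg(seed, triples):
    return [compose(compose(seed[i], seed[j]), seed[k]) for i, j, k in triples]

rng = random.Random(0)
n, m = 16, 40
triples = [tuple(rng.randrange(n) for _ in range(3)) for _ in range(m)]  # public
seed = [rng.choice(A5) for _ in range(n)]                                # secret
print(group_prg(seed, triples)[:2])
```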

Symmetric Cryptanalysis

Frontmatter
Boomerang Connectivity Table: A New Cryptanalysis Tool
Abstract
A boomerang attack is a cryptanalysis framework that regards a block cipher E as the composition of two sub-ciphers \(E_1\circ E_0\) and builds a particular characteristic for E with probability \(p^2q^2\) by combining differential characteristics for \(E_0\) and \(E_1\) with probability p and q, respectively. Crucially, the validity of this figure rests on the assumption that the characteristics for \(E_0\) and \(E_1\) can be chosen independently. Indeed, Murphy has shown that independently chosen characteristics may turn out to be incompatible. On the other hand, several researchers observed that the probability can be improved to p or q around the boundary between \(E_0\) and \(E_1\) by considering a positive dependency of the two characteristics, e.g. the ladder switch and S-box switch by Biryukov and Khovratovich. This phenomenon was later formalised by Dunkelman et al. as a sandwich attack that regards E as \(E_1\circ E_m \circ E_0\), where \(E_m\) satisfies some differential propagation among four texts with probability r, and the entire probability is \(p^2q^2r\). In this paper, we revisit the issue of dependency of two characteristics in \(E_m\), and propose a new tool called the Boomerang Connectivity Table (BCT), which evaluates r in a systematic and easy-to-understand way when \(E_m\) is composed of a single S-box layer. With the BCT, previous observations on the S-box, including the incompatibility, the ladder switch and the S-box switch, are represented in a unified manner. Moreover, the BCT can detect a new switching effect, which shows that the probability around the boundary may be even higher than p or q. To illustrate the power of the BCT-based analysis, we improve boomerang attacks against Deoxys-BC, and disclose the mechanism behind an unsolved probability amplification for generating a quartet in SKINNY. Lastly, we discuss the issue of searching for S-boxes having good BCT and extending the analysis to modular addition.
Carlos Cid, Tao Huang, Thomas Peyrin, Yu Sasaki, Ling Song
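
The BCT itself is straightforward to compute for a concrete S-box. The sketch below builds the table for PRESENT's 4-bit S-box (our choice of example) from the entry definition \(\mathrm{BCT}(\varDelta_i,\nabla_o) = \#\{x : S^{-1}(S(x)\oplus \nabla_o) \oplus S^{-1}(S(x\oplus \varDelta_i)\oplus \nabla_o) = \varDelta_i\}\):

```python
# Boomerang Connectivity Table of a 4-bit S-box: entry BCT[di][do]
# counts x with S^-1(S(x) ^ do) ^ S^-1(S(x ^ di) ^ do) == di.
# The S-box below is PRESENT's, used purely as a familiar example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
INV = [SBOX.index(y) for y in range(16)]

def bct(sbox, inv):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for di in range(n):
        for do in range(n):
            table[di][do] = sum(
                1 for x in range(n)
                if inv[sbox[x] ^ do] ^ inv[sbox[x ^ di] ^ do] == di)
    return table

T = bct(SBOX, INV)
print(T[0][0])                            # 16: the first row/column is trivial
print(max(T[1][do] for do in range(16)))  # best switching count for di = 1
```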
Correlation Cube Attacks: From Weak-Key Distinguisher to Key Recovery
Abstract
In this paper, we describe a new variant of cube attacks called the correlation cube attack. The new attack recovers the secret key of a cryptosystem by exploiting conditional correlation properties between the superpoly of a cube and a specific set of low-degree polynomials that we call a basis, which satisfies that the superpoly is a zero constant when all the polynomials in the basis are zeros. We present a detailed procedure of the correlation cube attack for the general case, including how to find a basis of the superpoly of a given cube. One of the most significant advantages of this new analysis technique over other variants of cube attacks is that it converts a weak-key distinguisher into a key recovery attack.
As an illustration, we apply the attack to round-reduced variants of the stream cipher Trivium. Based on the tool of numeric mapping introduced by Liu at CRYPTO 2017, we develop a specific technique to efficiently find a basis of the superpoly of a given cube as well as a large set of potentially good cubes used in the attack on Trivium variants, and further set up deterministic or probabilistic equations on the key bits according to the conditional correlation properties between the superpolys of the cubes and their bases. For a variant when the number of initialization rounds is reduced from 1152 to 805, we can recover about 7-bit key information on average with time complexity \(2^{44}\), using \(2^{45}\) keystream bits and preprocessing time \(2^{51}\). For a variant of Trivium reduced to 835 rounds, we can recover about 5-bit key information on average with the same complexity. All the attacks are practical and fully verified by experiments. To the best of our knowledge, they are thus far the best known key recovery attacks for these variants of Trivium, and this is the first time that a weak-key distinguisher on the Trivium stream cipher has been converted into a key recovery attack.
Meicheng Liu, Jingchun Yang, Wenhao Wang, Dongdai Lin
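
For orientation, the elementary cube-attack step that the correlation variant builds on: summing the output of f over all assignments to the cube variables, with the remaining variables fixed, evaluates the cube's superpoly at that fixed point. A self-contained toy example (ours, not from the paper):

```python
# Evaluating a cube's superpoly by summation over the cube: XOR f over
# all 2^|I| assignments to the cube variables I, holding the other
# variables fixed; the result is the superpoly at that fixed point.
from itertools import product

def superpoly_value(f, cube_indices, fixed):
    acc = 0
    for bits in product((0, 1), repeat=len(cube_indices)):
        x = list(fixed)
        for idx, b in zip(cube_indices, bits):
            x[idx] = b
        acc ^= f(x)
    return acc

# Toy function f = x0*x1*x2 + x0*x1*x3 + x4: over the cube {x0, x1},
# the superpoly is x2 + x3 (the x4 term cancels, being summed 4 times).
f = lambda x: (x[0] & x[1] & x[2]) ^ (x[0] & x[1] & x[3]) ^ x[4]
for x2, x3 in product((0, 1), repeat=2):
    assert superpoly_value(f, [0, 1], [0, 0, x2, x3, 1]) == x2 ^ x3
print("superpoly over cube {x0, x1} equals x2 + x3")
```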
The Missing Difference Problem, and Its Applications to Counter Mode Encryption
Abstract
The counter mode (CTR) is a simple, efficient and widely used encryption mode using a block cipher. It comes with a security proof that guarantees no attacks up to the birthday bound (i.e. as long as the number of encrypted blocks \(\sigma \) satisfies \(\sigma \ll 2^{n/2}\)), and a matching attack that can distinguish plaintext/ciphertext pairs from random using about \(2^{n/2}\) blocks of data.
The main goal of this paper is to study attacks against the counter mode beyond this simple distinguisher. We focus on message recovery attacks, with realistic assumptions about the capabilities of an adversary, and evaluate the full time complexity of the attacks rather than just the query complexity. Our main result is an attack to recover a block of message with complexity \(\tilde{\mathcal {O}}(2^{n/2})\). This shows that the actual security of CTR is similar to that of CBC, where collision attacks are well known to reveal information about the message.
To achieve this result, we study a simple algorithmic problem related to the security of the CTR mode: the missing difference problem. We give efficient algorithms for this problem in two practically relevant cases: where the missing difference is known to be in some linear subspace, and when the amount of data is higher than strictly required.
As a further application, we show that the second algorithm can also be used to break some polynomial MACs such as GMAC and Poly1305, with a universal forgery attack with complexity \(\tilde{\mathcal {O}}(2^{2n/3})\).
Gaëtan Leurent, Ferdinand Sibleyras
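
The missing difference problem is easy to state in code. In the toy simulation below (our illustration of the naive sieve, not the paper's \(\tilde{\mathcal {O}}(2^{n/2})\) algorithms), a random permutation plays the block cipher: because a permutation's outputs are distinct, the secret block s is exactly the difference \(a \oplus b\) that never occurs.

```python
# Naive sieve for the missing difference problem on a toy block size:
# A holds keystream blocks (known plaintext), B holds encryptions of a
# fixed secret block s; every pair difference a ^ b differs from s, so
# eliminating all observed differences leaves exactly s.
import random

n = 8                                   # toy block size
rng = random.Random(1)
perm = list(range(1 << n))
rng.shuffle(perm)                       # a random permutation = block cipher

s = rng.randrange(1 << n)               # the secret block to recover
A = [perm[i] for i in range(180)]                # keystream blocks
B = [perm[i] ^ s for i in range(180, 256)]       # ciphertexts of s

candidates = set(range(1 << n))
for a in A:
    for b in B:
        candidates.discard(a ^ b)       # a ^ b = perm[i]^perm[j]^s != s
print(candidates == {s})
```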
Fast Near Collision Attack on the Grain v1 Stream Cipher
Abstract
Modern stream ciphers often adopt a large internal state to resist various attacks, where the cryptanalysts have to deal with a large number of variables when mounting state recovery attacks. In this paper, we propose a general new cryptanalytic method on stream ciphers, called the fast near collision attack, to address this situation. It combines a near collision property with the divide-and-conquer strategy so that only subsets of the internal state, associated with different keystream vectors, are recovered first and merged carefully later to retrieve the full large internal state. A self-contained method is introduced and improved to derive the target subset of the internal state from the partial state difference efficiently. As an application, we propose a new key recovery attack on Grain v1, one of the 7 finalists selected by the eSTREAM project, in the single-key setting. Both the pre-computation and the online phases are tailored according to its internal structure, to provide an attack for any fixed IV in \(2^{75.7}\) cipher ticks after the pre-computation of \(2^{8.1}\) cipher ticks, given \(2^{28}\)-bit memory and about \(2^{19}\) keystream bits. Practical experiments on Grain v1 itself whenever possible and on an 80-bit reduced version confirmed our results.
Bin Zhang, Chao Xu, Willi Meier
Backmatter
Metadata
Title
Advances in Cryptology – EUROCRYPT 2018
Editors
Jesper Buus Nielsen
Prof. Dr. Vincent Rijmen
Copyright Year
2018
Electronic ISBN
978-3-319-78375-8
Print ISBN
978-3-319-78374-1
DOI
https://doi.org/10.1007/978-3-319-78375-8
