
2016 | Book

Advances in Cryptology – CRYPTO 2016

36th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 14-18, 2016, Proceedings, Part II

Edited by: Matthew Robshaw, Jonathan Katz

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The three-volume set, LNCS 9814, LNCS 9815, and LNCS 9816, constitutes the refereed proceedings of the 36th Annual International Cryptology Conference, CRYPTO 2016, held in Santa Barbara, CA, USA, in August 2016.

The 70 revised full papers presented were carefully reviewed and selected from 274 submissions. The papers are organized in the following topical sections: provable security for symmetric cryptography; asymmetric cryptography and cryptanalysis; cryptography in theory and practice; compromised systems; symmetric cryptanalysis; algorithmic number theory; symmetric primitives; asymmetric cryptography; symmetric cryptography; cryptanalytic tools; hardware-oriented cryptography; secure computation and protocols; obfuscation; quantum techniques; spooky encryption; IBE, ABE, and functional encryption; automated tools and synthesis; zero knowledge; theory.

Table of Contents

Frontmatter

Asymmetric Cryptography

Frontmatter
Adversary-Dependent Lossy Trapdoor Function from Hardness of Factoring Semi-smooth RSA Subgroup Moduli
Abstract
Lossy trapdoor functions (LTDFs), proposed by Peikert and Waters (STOC’08), are known to have a number of applications in cryptography. They have been constructed based on various assumptions, including the quadratic residuosity (QR) and decisional composite residuosity (DCR) assumptions, both of which are factoring-based decision assumptions. However, there is no known construction of an LTDF based on the factoring assumption or other factoring-related search assumptions. In this paper, we first define a notion of adversary-dependent lossy trapdoor functions (ad-LTDFs) that is a weaker variant of LTDFs. Then we construct an ad-LTDF based on the hardness of factorizing RSA moduli of a special form called semi-smooth RSA subgroup (SS) moduli proposed by Groth (TCC’05). Moreover, we show that ad-LTDFs can replace LTDFs in many applications. In particular, we obtain the first factoring-based deterministic encryption scheme that satisfies the security notion defined by Boldyreva et al. (CRYPTO’08) without relying on a decision assumption. Besides direct applications of ad-LTDFs, by a similar technique, we construct a chosen-ciphertext-secure public-key encryption scheme whose ciphertext overhead is the shortest among existing schemes based on the factoring assumption w.r.t. SS moduli.
Takashi Yamakawa, Shota Yamada, Goichiro Hanaoka, Noboru Kunihiro
Optimal Security Proofs for Signatures from Identification Schemes
Abstract
We perform a concrete security treatment of digital signature schemes obtained from canonical identification schemes via the Fiat-Shamir transform. If the identification scheme is random self-reducible and satisfies the weakest possible security notion (hardness of key-recoverability), then the signature scheme obtained via Fiat-Shamir is unforgeable against chosen-message attacks in the multi-user setting. Our security reduction is in the random oracle model and loses a factor of roughly \(Q_h\), the number of hash queries. Previous reductions incorporated an additional multiplicative loss of N, the number of users in the system. Our analysis is done in small steps via intermediate security notions, and all our implications have relatively simple proofs. Furthermore, for each step, we show the optimality of the given reduction in terms of model assumptions and tightness.
As an important application of our framework, we obtain a concrete security treatment for Schnorr signatures in the multi-user setting.
Eike Kiltz, Daniel Masny, Jiaxin Pan
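The Fiat-Shamir transform analyzed above derives the challenge by hashing the prover's commitment together with the message. The following sketch applies it to Schnorr's identification scheme; the toy safe-prime group and the message encoding are illustrative assumptions, not the paper's construction, and the sizes are far too small to be secure.

```python
import hashlib
import secrets

# Toy safe-prime group: p = 2q + 1, and g = 2^2 generates the subgroup of
# prime order q. Illustration only; real deployments use >= 256-bit groups.
p, q, g = 2027, 1013, 4

def keygen():
    x = secrets.randbelow(q - 1) + 1          # secret key in [1, q-1]
    return x, pow(g, x, p)                    # (sk, pk = g^x)

def challenge(R, msg):
    h = hashlib.sha256(f"{R}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % q       # Fiat-Shamir: c = H(R, m)

def sign(x, msg):
    r = secrets.randbelow(q - 1) + 1          # commitment randomness
    R = pow(g, r, p)                          # identification "commitment"
    c = challenge(R, msg)                     # hash replaces the verifier
    s = (r + c * x) % q                       # identification "response"
    return c, s

def verify(y, msg, sig):
    c, s = sig
    # g^s * y^{-c} = g^{r + cx} * g^{-cx} = g^r; y has order q, so
    # y^{-c} = y^{q - c}.
    R = (pow(g, s, p) * pow(y, (q - c) % q, p)) % p
    return c == challenge(R, msg)

sk, pk = keygen()
assert verify(pk, "hello", sign(sk, "hello"))
```

Because the challenge is derived from the commitment and the message, no interactive verifier is needed, which is the essence of the transform.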
FHE Circuit Privacy Almost for Free
Abstract
Circuit privacy is an important property for many applications of fully homomorphic encryption. Prior approaches for achieving circuit privacy rely on superpolynomial noise flooding or on bootstrapping. In this work, we present a conceptually different approach to circuit privacy based on a novel characterization of the noise growth during homomorphic evaluation. In particular, we show that a variant of the GSW FHE for branching programs already achieves circuit privacy; this immediately yields a circuit-private FHE for NC\(^1\) circuits under the standard LWE assumption with polynomial modulus-to-noise ratio. Our analysis relies on a variant of the discrete Gaussian leftover hash lemma, which states that \(\mathbf {e}^\intercal \mathbf {G}^{-1}(\mathbf {v})\) plus small noise does not depend on \(\mathbf {v}\). We believe that this result is of independent interest.
Florian Bourse, Rafaël Del Pino, Michele Minelli, Hoeteck Wee

Symmetric Cryptography

Frontmatter
Cryptanalysis of a Theorem: Decomposing the Only Known Solution to the Big APN Problem
Abstract
The existence of Almost Perfect Non-linear (APN) permutations operating on an even number of bits was a long-standing open question until Dillon et al., researchers at the NSA, provided a 6-bit example in 2009.
In this paper, we apply methods intended to reverse-engineer S-Boxes with unknown structure to this permutation and find a simple decomposition relying on the cube function over \(GF(2^3)\). More precisely, we show that it is a particular case of a permutation structure we introduce, the butterfly. Such butterflies are 2n-bit mappings with two CCZ-equivalent representations: one is a quadratic non-bijective function and one is a degree \(n+1\) permutation. We show that these structures always have differential uniformity at most 4 when n is odd. A particular case of this structure is actually a 3-round Feistel Network with similar differential and linear properties. These functions also have excellent non-linearity for \(n=3,5,7\).
Furthermore, we deduce a bitsliced implementation and significantly reduce the hardware cost of a 6-bit APN permutation using this decomposition, thus simplifying the use of such a permutation as a building block for a cryptographic primitive.
Léo Perrin, Aleksei Udovenko, Alex Biryukov
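The cube function over \(GF(2^3)\) at the heart of this decomposition is itself APN, a fact small enough to verify by brute force. The sketch below is illustrative only: the field arithmetic is written from scratch, and the reduction polynomial \(x^3+x+1\) is an arbitrary irreducible choice.

```python
# GF(2^3) arithmetic with reduction polynomial x^3 + x + 1.
MOD = 0b1011

def gf8_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a                # add (XOR) the current multiple of a
        b >>= 1
        a <<= 1
        if a & 0b1000:            # degree reached 3: reduce mod x^3 + x + 1
            a ^= MOD
    return r

def cube(x):
    return gf8_mul(x, gf8_mul(x, x))

def differential_uniformity(f, n=3):
    """Max number of x with f(x) + f(x + a) = b over nonzero diffs a."""
    worst = 0
    for a in range(1, 1 << n):           # nonzero input difference
        for b in range(1 << n):          # candidate output difference
            count = sum(1 for x in range(1 << n)
                        if (f(x) ^ f(x ^ a)) == b)
            worst = max(worst, count)
    return worst

assert differential_uniformity(cube) == 2    # uniformity 2 means APN
```

Any function on GF(2)^n has even, at-least-2 counts for each achievable output difference, so 2 is the best possible value, which is what "almost perfect non-linear" refers to.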
The SKINNY Family of Block Ciphers and Its Low-Latency Variant MANTIS
Abstract
We present a new tweakable block cipher family SKINNY, whose goal is to compete with the NSA's recent design SIMON in terms of hardware/software performance, while additionally providing much stronger security guarantees with regard to differential/linear attacks. In particular, unlike SIMON, we are able to provide strong bounds for all versions, not only in the single-key model but also in the related-key or related-tweak model. SKINNY has flexible block/key/tweak sizes and can also benefit from very efficient threshold implementations for side-channel protection. Regarding performance, it outperforms all known ciphers for ASIC round-based implementations, while still reaching an extremely small area for serial implementations and very good efficiency for software and micro-controller implementations (SKINNY has the smallest total number of AND/OR/XOR gates used for the encryption process).
Second, we present MANTIS, a dedicated variant of SKINNY for low-latency implementations, which constitutes a very efficient solution to the problem of designing a tweakable block cipher for memory encryption. MANTIS reuses well-understood, previously studied components; yet, by putting those components together in a new fashion, we obtain a cipher competitive with PRINCE in latency and area, while being enhanced with a tweak input.
Christof Beierle, Jérémy Jean, Stefan Kölbl, Gregor Leander, Amir Moradi, Thomas Peyrin, Yu Sasaki, Pascal Sasdrich, Siang Meng Sim

Cryptanalytic Tools

Frontmatter
Automatic Search of Meet-in-the-Middle and Impossible Differential Attacks
Abstract
Tracking bits through block ciphers and optimizing attacks by hand are among the tedious tasks symmetric cryptanalysts have to deal with. It would be nice if a program could automatically handle them, at least for well-known attack techniques, so that cryptanalysts could focus on finding new attacks. However, current automatic tools cannot be used as is, either because they are tailored for specific ciphers or because they only recover a specific part of an attack, so cryptographers are still needed to finalize the analysis.
In this paper we describe a generic algorithm that exhaustively searches for the best meet-in-the-middle and impossible differential attacks on a very large class of block ciphers, from byte-oriented to bit-oriented designs, covering SPN, Feistel, and Lai-Massey block ciphers. Unlike previous tools, which aim to find the best differential/linear paths in the cipher and leave it to the cryptanalyst to derive an attack from these paths, we automatically find the best attacks by considering both the cipher and the key-schedule algorithm. The building blocks of our algorithm led to two algorithms designed to find the best simple meet-in-the-middle attacks and the best impossible truncated differential attacks, respectively. We recover and improve many attacks on AES, mCRYPTON, SIMON, IDEA, KTANTAN, PRINCE and ZORRO. We show that this tool can be used by designers to improve their analysis.
Patrick Derbez, Pierre-Alain Fouque
Memory-Efficient Algorithms for Finding Needles in Haystacks
Abstract
One of the most common tasks in cryptography and cryptanalysis is to find some interesting event (a needle) in an exponentially large collection (haystack) of \(N=2^n\) possible events, or to demonstrate that no such event is likely to exist. In particular, we are interested in finding needles which are defined as events that happen with an unusually high probability of \(p \gg 1/N\) in a haystack which is an almost uniform distribution on N possible events. When the search algorithm can only sample values from this distribution, the best known time/memory tradeoff for finding such an event requires \(O(1/Mp^2)\) time given O(M) memory.
In this paper we develop much faster needle searching algorithms in the common cryptographic setting in which the distribution is defined by applying some deterministic function f to random inputs. Such a distribution can be modelled by a random directed graph with N vertices in which almost all the vertices have O(1) predecessors while the vertex we are looking for has an unusually large number of O(pN) predecessors. When we are given only a constant amount of memory, we propose a new search methodology which we call NestedRho. As p increases, such random graphs undergo several subtle phase transitions, and thus the log-log dependence of the time complexity T on p becomes a piecewise linear curve which bends four times. Our new algorithm is faster than the \(O(1/p^2)\) time complexity of the best previous algorithm in the full range of \(1/N<p<1\), and in particular it improves the previous time complexity by a significant factor of \(\sqrt{N}\) for any p in the range \(N^{-0.75}<p< N^{-0.5}\). When we are given more memory, we show how to combine the NestedRho technique with the parallel collision search technique in order to further reduce its time complexity. Finally, we show how to apply our new search technique to more complicated distributions with multiple peaks when we want to find all the peaks whose probabilities are higher than p.
Itai Dinur, Orr Dunkelman, Nathan Keller, Adi Shamir
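The constant-memory starting point that NestedRho generalizes is the classical rho method: iterate a deterministic function from a random start and detect a cycle with Floyd's tortoise-and-hare. The sketch below is only this plain rho collision search, not the paper's nested variant, and the function f is a made-up stand-in for a "random" deterministic function.

```python
import hashlib

def f(x):
    """A fixed 'random-looking' function on a 20-bit domain."""
    h = hashlib.sha256(x.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:4], "big") % (1 << 20)

def rho_collision(start=1):
    # Phase 1: Floyd's tortoise-and-hare, cycle detection with O(1) memory.
    slow, fast = f(start), f(f(start))
    while slow != fast:
        slow, fast = f(slow), f(f(fast))
    # Phase 2: walk from the start and the meeting point in lockstep; they
    # meet at the cycle entry, and their predecessors form a collision.
    slow, prev_a, prev_b = start, None, None
    while slow != fast:
        prev_a, prev_b = slow, fast
        slow, fast = f(slow), f(fast)
    if prev_a is None:            # start was already on the cycle: retry
        return rho_collision(start + 1)
    return prev_a, prev_b         # f(prev_a) == f(prev_b), prev_a != prev_b

a, b = rho_collision()
assert a != b and f(a) == f(b)
```

The whole search keeps only two pointers in memory, at the cost of roughly square-root-of-domain-size evaluations, which is the time/memory regime the paper's algorithms improve upon.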
Breaking Symmetric Cryptosystems Using Quantum Period Finding
Abstract
Due to Shor’s algorithm, quantum computers are a severe threat to public-key cryptography. This has motivated the cryptographic community to search for quantum-safe solutions. On the other hand, the impact of quantum computing on secret-key cryptography is much less understood. In this paper, we consider attacks where an adversary can query an oracle implementing a cryptographic primitive in a quantum superposition of different states. This model gives a lot of power to the adversary, but recent results show that it is nonetheless possible to build secure cryptosystems in it.
We study applications of a quantum procedure called Simon’s algorithm (the simplest quantum period finding algorithm) in order to attack symmetric cryptosystems in this model. Following previous works in this direction, we show that several classical attacks based on finding collisions can be dramatically sped up using Simon’s algorithm: finding a collision requires \(\varOmega (2^{n/2})\) queries in the classical setting, but when collisions happen with some hidden periodicity, they can be found with only O(n) queries in the quantum model.
We obtain attacks with very strong implications. First, we show that the most widely used modes of operation for authentication and authenticated encryption (e.g. CBC-MAC, PMAC, GMAC, GCM, and OCB) are completely broken in this security model. Our attacks are also applicable to many CAESAR candidates: CLOC, AEZ, COPA, OTR, POET, OMD, and Minalpher. This is quite surprising compared to the situation with encryption modes: Anand et al. show that standard modes are secure with a quantum-secure PRF.
Second, we show that Simon’s algorithm can also be applied to slide attacks, leading to an exponential speed-up of a classical symmetric cryptanalysis technique in the quantum model.
Marc Kaplan, Gaëtan Leurent, Anthony Leverrier, María  Naya-Plasencia
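The structure these attacks exploit can be seen classically: if \(f(x) = f(x \oplus s)\) for a hidden period s, then any collision reveals s as the XOR of the colliding inputs, but finding that collision classically costs birthday-bound work, which Simon's algorithm replaces with O(n) quantum queries. A toy classical illustration (the underlying function g and the period are demo values invented for the sketch):

```python
import hashlib

n, s = 16, 0b1010110011100101       # bit length and hidden period (demo)

def g(x):
    """Arbitrary 'random-looking' function with 64-bit outputs, so that
    accidental collisions are overwhelmingly unlikely."""
    h = hashlib.sha256(x.to_bytes(2, "big")).digest()
    return int.from_bytes(h[:8], "big")

def f(x):
    return g(min(x, x ^ s))          # forces f(x) == f(x ^ s) for every x

# Classical birthday-style search: ~2^(n/2) expected work and memory.
seen, recovered = {}, None
for x in range(1 << n):
    y = f(x)
    if y in seen:
        recovered = seen[y] ^ x      # colliding inputs differ by the period
        break
    seen[y] = x

assert recovered == s
```

The collision search above is the step the quantum attacks bypass: Simon's algorithm extracts s from O(n) superposition queries without ever materializing a collision.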

Hardware-Oriented Cryptography

Frontmatter
Efficiently Computing Data-Independent Memory-Hard Functions
Abstract
A memory-hard function (MHF) f is equipped with a space cost \({\sigma } \) and time cost \({\tau } \) parameter such that repeatedly computing \(f_{{\sigma },{\tau }}\) on an application-specific integrated circuit (ASIC) is not economically advantageous relative to a general-purpose computer. Technically, we would like that any (generalized) circuit for evaluating an iMHF \(f_{{\sigma },{\tau }}\) has area \(\times \) time (AT) complexity \(\varTheta ({\sigma } ^2 * {\tau })\). A data-independent MHF (iMHF) has the added property that it can be computed with almost optimal memory and time complexity by an algorithm which accesses memory in a pattern independent of the input value. Such functions can be specified by fixing a directed acyclic graph (DAG) G on \(n=\varTheta ({\sigma } * {\tau })\) nodes representing its computation graph.
In this work we develop new tools for analyzing iMHFs. First we define and motivate a new complexity measure capturing the amount of energy (i.e. electricity) required to compute a function. We argue that, in practice, this measure is at least as important as the more traditional AT-complexity. Next we describe an algorithm \({{\mathcal {A}}} \) for repeatedly evaluating an iMHF based on an arbitrary DAG G. We upperbound both its energy and AT complexities per instance evaluated in terms of a certain combinatorial property of G.
Next we instantiate our attack for several general classes of DAGs which include those underlying many of the most important iMHF candidates in the literature. In particular, we obtain the following results which hold for all choices of parameters \({\sigma } \) and \({\tau } \) (and thread-count) such that \(n={\sigma } *{\tau } \).
  • The Catena-Dragonfly function of [FLW13] has AT and energy complexities \(O(n^{1.67})\).
  • The Catena-Butterfly function of [FLW13] has complexities in \(O(n^{1.67})\).
  • The Double-Buffer and the Linear functions of [CGBS16] both have complexities in \(O(n^{1.67})\).
  • The Argon2i function of [BDK15] (winner of the Password Hashing Competition [PHC]) has complexities \(O(n^{7/4}\log (n))\).
  • The Single-Buffer function of [CGBS16] has complexities \(O(n^{7/4}\log (n))\).
  • Any iMHF can be computed by an algorithm with complexities \(O(n^2/\log ^{1-{\epsilon }}(n))\) for all \({\epsilon } > 0\). In particular when \({\tau } =1\) this shows that the goal of constructing an iMHF with AT-complexity \(\varTheta ({\sigma } ^2 * {\tau })\) is unachievable.
Along the way we prove a lemma upper-bounding the depth-robustness of any DAG which may prove to be of independent interest.
Joël Alwen, Jeremiah Blocki
Towards Sound Fresh Re-keying with Hard (Physical) Learning Problems
Abstract
Most leakage-resilient cryptographic constructions aim at limiting the information adversaries can obtain about secret keys. In the case of asymmetric algorithms, this is usually obtained by secret sharing (aka masking) the key, which is made easy by their algebraic properties. In the case of symmetric algorithms, it is rather key evolution that is exploited. While more efficient, the scope of this second solution is limited to stateful primitives that easily allow for key evolution such as stream ciphers. Unfortunately, it seems generally hard to avoid the need for (at least one) execution of a stateless primitive, both for encryption and authentication protocols. As a result, fresh re-keying has emerged as an alternative solution, in which a block cipher that is hard to protect against side-channel attacks is re-keyed with a stateless function that is easy to mask. While previous proposals in this direction were all based on heuristic arguments, we propose two new constructions that, for the first time, allow a more formal treatment of fresh re-keying. More precisely, we reduce the security of our re-keying schemes to two building blocks that can be of independent interest. The first one is an assumption of Learning Parity with Leakage, which leverages the noise that is available in side-channel measurements. The second one is based on the Learning With Rounding assumption, which can be seen as an alternative solution for low-noise implementations. Both constructions are efficient and easy to mask, since they are key homomorphic or almost key homomorphic.
Stefan Dziembowski, Sebastian Faust, Gottfried Herold, Anthony Journault, Daniel Masny, François-Xavier Standaert
ParTI – Towards Combined Hardware Countermeasures Against Side-Channel and Fault-Injection Attacks
Abstract
Side-channel analysis and fault-injection attacks are known as major threats to any cryptographic implementation. Hardening cryptographic implementations with appropriate countermeasures is thus essential before they are deployed in the wild. However, countermeasures for both threats are of completely different nature: Side-channel analysis is mitigated by techniques that hide or mask key-dependent information while resistance against fault-injection attacks can be achieved by redundancy in the computation for immediate error detection. Since already the integration of any single countermeasure in cryptographic hardware comes with significant costs in terms of performance and area, a combination of multiple countermeasures is expensive and often associated with undesired side effects.
In this work, we introduce a countermeasure for cryptographic hardware implementations that combines the concept of a provably-secure masking scheme (i.e., threshold implementation) with an error detecting approach against fault injection. As a case study, we apply our generic construction to the lightweight LED cipher. Our LED instance achieves first-order resistance against side-channel attacks combined with a fault detection capability that is superior to that of simple duplication for most error distributions at an increased area demand of 12 %.
Tobias Schneider, Amir Moradi, Tim Güneysu

Secure Computation and Protocols I

Frontmatter
Network-Hiding Communication and Applications to Multi-party Protocols
Abstract
As distributed networks are heavily used in modern applications, new security challenges emerge. In a multi-party computation (in short, MPC) protocol over an incomplete network, such a challenge is to hide, to the extent possible, the topology of the underlying communication network. Such a topology-hiding (aka network hiding) property is in fact very relevant in applications where anonymity is needed.
To our knowledge, with the exception of two recent works by Chandran et al. [ITCS 2015] and by Moran et al. [TCC 2015], existing MPC protocols do not hide the topology of the underlying communication network. Moreover, the above two solutions are either not applicable to arbitrary networks (as is the case for [ITCS 2015]) or, as in [TCC 2015], they make non-black-box and recursive use of cryptographic primitives, resulting in an unrealistic communication and computation complexity even for simple, i.e., low-degree and low-diameter, networks.
Our work suggests the first topology-hiding communication protocol for incomplete networks which makes black-box use of the underlying cryptographic assumption—in particular, a public-key encryption scheme—and tolerates any adversary who passively corrupts arbitrarily many network nodes. Our solutions are based on a new, enhanced variant of threshold homomorphic encryption, in short TH-PKE, that requires no a priori setup and makes it possible to circulate an encrypted message over any (unknown) incomplete network and then decrypt it without revealing any network information to intermediate nodes. We show how to realize this enhanced TH-PKE from the DDH assumption. The black-box nature of our scheme, along with some optimization tricks that we employ, makes our communication protocol more efficient than existing solutions.
We then use our communication protocol to make any semi-honest secure MPC protocol topology-hiding with a reasonable—i.e., for simple networks, polynomial with small constants—communication and computation overhead. We further show how to construct anonymous broadcast without using expensive MPCs to setup the original pseudonyms.
Martin Hirt, Ueli Maurer, Daniel Tschudi, Vassilis Zikas
Network Oblivious Transfer
Abstract
Motivated by the goal of improving the concrete efficiency of secure multiparty computation (MPC), we study the possibility of implementing an infrastructure for MPC. We propose an infrastructure based on oblivious transfer (OT), which would consist of OT channels between some pairs of parties in the network. We devise information-theoretically secure protocols that allow additional pairs of parties to establish secure OT correlations using the help of other parties in the network in the presence of a dishonest majority. Our main technical contribution is an upper bound that matches a lower bound of Harnik, Ishai, and Kushilevitz (Crypto 2007), who studied the number of OT channels necessary and sufficient for MPC. In particular, we characterize which n-party OT graphs G allow t-secure computation of OT correlations between all pairs of parties, showing that this is possible if and only if the complement of G does not contain the complete bipartite graph \(K_{n-t,n-t}\) as a subgraph.
Ranjit Kumaresan, Srinivasan Raghuraman, Adam Sealfon
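The graph-theoretic characterization above can be checked directly for small networks: compute the complement of the OT graph and search for a complete bipartite \(K_{n-t,n-t}\) between two disjoint vertex sets. A brute-force sketch (exponential in n, which is fine for toy sizes; the path-network example is our own, not from the paper):

```python
from itertools import combinations

def has_complete_bipartite(non_edges, n, k):
    """Does the complement graph contain K_{k,k} on disjoint vertex sets?"""
    for left in combinations(range(n), k):
        rest = [v for v in range(n) if v not in left]
        for right in combinations(rest, k):
            if all(frozenset((u, v)) in non_edges
                   for u in left for v in right):
                return True
    return False

def ot_extendable(edges, n, t):
    """t-secure OT between all pairs is possible iff the complement of the
    OT graph contains no K_{n-t,n-t} (the paper's characterization)."""
    all_pairs = {frozenset(p) for p in combinations(range(n), 2)}
    non_edges = all_pairs - {frozenset(e) for e in edges}
    return not has_complete_bipartite(non_edges, n, n - t)

# A 4-party path network 0-1-2-3: its complement has only 3 edges, too few
# to contain K_{2,2}, so OT correlations extend to all pairs for t = 2.
path = [(0, 1), (1, 2), (2, 3)]
assert ot_extendable(path, 4, 2)
# With no OT channels at all, the complement is complete and K_{2,2} exists.
assert not ot_extendable([], 4, 2)
```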
On the Power of Secure Two-Party Computation
Abstract
Ishai, Kushilevitz, Ostrovsky and Sahai (STOC 2007, SIAM JoC 2009) introduced the powerful “MPC-in-the-head” technique that provided a general transformation of information-theoretic MPC protocols secure against passive adversaries to a ZK proof in a “black-box” way. In this work, we extend this technique and provide a generic transformation of any semi-honest secure two-party computation (2PC) protocol (with mild adaptive security guarantees) in the so-called oblivious-transfer hybrid model to an adaptive ZK proof for any NP language, in a “black-box” way assuming only one-way functions. Our basic construction based on Goldreich-Micali-Wigderson’s 2PC protocol yields an adaptive ZK proof with communication complexity quadratic in the size of the circuit implementing the NP relation. Previously such proofs relied on an expensive Karp reduction of the NP language to Graph Hamiltonicity (Lindell and Zarosim (TCC 2009, Journal of Cryptology 2011)). We also improve our basic construction to obtain the first linear-rate adaptive ZK proofs by relying on efficient maliciously secure 2PC protocols. Core to this construction is a new way of transforming 2PC protocols into efficient (adaptively secure) instance-dependent commitment schemes.
As our second contribution, we provide a general transformation to construct a randomized encoding of a function f from any 2PC protocol that securely computes a related functionality (in a black-box way). We show that if the 2PC protocol has mild adaptive security guarantees then the resulting randomized encoding (RE) can be decomposed to an offline/online encoding.
As an application of our techniques, we show how to improve the construction of Lapidot and Shamir (Crypto 1990) to obtain a four-round ZK proof with an “input-delayed” property. Namely, the honest prover’s algorithm does not require the actual statement to be proved until the last round. We further generalize this to obtain a four-round “commit and prove” zero-knowledge with the same property where the prover commits to a witness w in the second message and proves a statement x regarding the witness w that is determined only in the fourth round.
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
Secure Protocol Transformations
Abstract
In the rich literature of secure multi-party computation (MPC), several important results rely on “protocol transformations,” whereby protocols from one model of MPC are transformed to protocols from another model. Motivated by the goal of simplifying and unifying results in the area of MPC, we formalize a general notion of black-box protocol transformations that captures previous transformations from the literature as special cases, and present several new transformations. We motivate our study of protocol transformations by presenting the following applications.
  • Simplifying feasibility results:
    • Easily rederive a result in Goldreich’s book (2004), on MPC with full security in the presence of an honest majority, from an earlier result in the book, on MPC that offers “security with abort.”
    • Rederive the classical result of Rabin and Ben-Or (1989) by applying a transformation to the simpler protocols of Ben-Or et al. or Chaum et al. (1988).
  • Efficiency improvements:
    • The first “constant-rate” MPC protocol for a constant number of parties that offers full information-theoretic security with an optimal threshold, improving over the protocol of Rabin and Ben-Or;
    • A fully secure MPC protocol with optimal threshold that improves over a previous protocol of Ben-Sasson et al. (2012) in the case of “deep and narrow” computations;
    • A fully secure MPC protocol with near-optimal threshold that improves over a previous protocol of Damgård et al. (2010) by improving the dependence on the security parameter from linear to polylogarithmic;
    • An efficient new transformation from passive-secure two-party computation in the OT-hybrid and OLE-hybrid model to zero-knowledge proofs, improving over a recent similar transformation of Hazay and Venkitasubramaniam (2016) for the case of static zero-knowledge, which is restricted to the OT-hybrid model and requires a large number of commitments.
Finally, we prove the impossibility of two simple types of black-box protocol transformations, including an unconditional variant of a previous negative result of Rosulek (2012) that relied on the existence of one-way functions.
Yuval Ishai, Eyal Kushilevitz, Manoj Prabhakaran, Amit Sahai, Ching-Hua Yu
On the Communication Required for Unconditionally Secure Multiplication
Abstract
Many information-theoretically secure protocols are known for general secure multi-party computation, in the honest majority setting, and in the dishonest majority setting with preprocessing. All known protocols that are efficient in the circuit size of the evaluated function follow the same “gate-by-gate” design pattern: we work through an arithmetic (boolean) circuit on secret-shared inputs, such that after we process a gate, the output of the gate is represented as a random secret sharing among the players. This approach usually allows non-interactive processing of addition gates but requires communication for every multiplication gate. Thus, while information-theoretically secure protocols are very efficient in terms of computational work, they (seem to) require more communication and more rounds than computationally secure protocols. Whether this is inherent is an open and probably very hard problem. However, in this work we show that it is indeed inherent for protocols that follow the “gate-by-gate” design pattern. We present the following results:
  • In the honest majority setting, as well as for dishonest majority with preprocessing, any gate-by-gate protocol must communicate \(\varOmega (n)\) bits for every multiplication gate, where n is the number of players.
  • In the honest majority setting, we show that one cannot obtain a bound that also grows with the field size. Moreover, for a constant number of players, amortizing over several multiplication gates does not allow us to save on the computational work, and – in a restricted setting – we show that this also holds for communication.
All our lower bounds are met up to a constant factor by known protocols that follow the typical gate-by-gate paradigm. Our results imply that a fundamentally new approach must be found in order to improve the communication complexity of known protocols, such as BGW, GMW, SPDZ etc.
Ivan Damgård, Jesper Buus Nielsen, Antigoni Polychroniadou, Michael Raskin
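The "gate-by-gate" pattern discussed above is easy to see with plain additive secret sharing: addition gates need no communication, which is exactly why multiplication gates dominate the cost. A minimal sketch for three parties (the field modulus and gate values are demo choices; real protocols add interaction, e.g. via Beaver triples, to handle multiplication):

```python
import secrets

P = 2 ** 61 - 1   # prime field modulus (Mersenne prime, demo choice)
N = 3             # number of parties

def share(x):
    """Split x into N additive shares summing to x mod P."""
    parts = [secrets.randbelow(P) for _ in range(N - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

# Addition gate: each party adds its own two shares -- purely local,
# since (x1 + y1) + (x2 + y2) + (x3 + y3) = x + y.
xs, ys = share(12), share(30)
zs = [(a + b) % P for a, b in zip(xs, ys)]
assert reconstruct(zs) == 42

# A multiplication gate has no such local rule: expanding
# (x1 + x2 + x3)(y1 + y2 + y3) produces cross terms like x1*y2 held by
# different parties, so communication is needed -- the per-gate cost the
# paper lower-bounds by Omega(n) bits.
```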

Obfuscation

Frontmatter
Universal Constructions and Robust Combiners for Indistinguishability Obfuscation and Witness Encryption
Abstract
Over the last few years a new breed of cryptographic primitives has arisen: on the one hand they have previously unimagined utility, and on the other hand they are not based on simple-to-state, well-studied assumptions. With the ongoing study of these primitives, we are left with several different candidate constructions, each based on a different mathematical assumption that is not easy to express, and some candidates have even turned out to be insecure.
A combiner for a cryptographic primitive takes several candidate constructions of the primitive and outputs one construction that is as good as any of the input constructions. Furthermore, this combiner must be efficient: the resulting construction should remain polynomial-time even when combining polynomially many candidates. Combiners are especially important for a primitive where there are several competing constructions whose security is hard to evaluate, as is the case for indistinguishability obfuscation (IO) and witness encryption (WE).
One place where the need for combiners appears is in design of a universal construction, where one wishes to find “one construction to rule them all”: an explicit construction that is secure if any construction of the primitive exists.
In a recent paper, Goldwasser and Kalai posed as a challenge finding universal constructions for indistinguishability obfuscation and witness encryption. In this work we resolve this issue: we construct universal schemes for IO, and for witness encryption, and also resolve the existence of combiners for these primitives along the way. For IO, our universal construction and combiners can be built based on either assuming DDH, or assuming LWE, with security against subexponential adversaries. For witness encryption, we need only one-way functions secure against polynomial time adversaries.
Prabhanjan Ananth, Aayush Jain, Moni Naor, Amit Sahai, Eylon Yogev
Obfuscation Combiners
Abstract
Obfuscation is challenging; we currently have practical candidates with rather vague security guarantees on the one side, and theoretical constructions which have recently experienced jeopardizing attacks against the underlying cryptographic assumptions on the other side. This motivates us to study and present robust combiners for obfuscators, which integrate several candidate obfuscators into a single obfuscator which is secure as long as a quorum of the candidates is indeed secure.
We give several results about building obfuscation combiners, with matching upper and lower bounds for the precise quorum of secure candidates. Namely, we show that one can build 3-out-of-4 obfuscation combiners, which are secure when at least three of the four candidates are secure, whereas 2-out-of-3 structural combiners (which combine the obfuscator candidates in a black-box sense) with only two secure candidates are impossible. Our results generalize to \((2\gamma +1)\)-out-of-\((3\gamma +1)\) combiners for the positive result, and to \(2\gamma \)-out-of-\(3\gamma \) results for the negative result, for any integer \(\gamma \).
To reduce overhead, we define detecting combiners, where the combined obfuscator may sometimes produce an error indication instead of the desired output, signaling that some of the component obfuscators are faulty. We present a \((\gamma +1)\)-out-of-\((2\gamma +1)\) detecting combiner for any integer \(\gamma \), bypassing the previous lower bound. We further show that \(\gamma \)-out-of-\(2\gamma \) structural detecting combiners are again impossible.
Since our approach can be used for practical obfuscators, as well as for obfuscators proven secure (based on assumptions), we also briefly report on implementation results for some applied obfuscator programs.
Marc Fischlin, Amir Herzberg, Hod Bin-Noon, Haya Shulman
On Statistically Secure Obfuscation with Approximate Correctness
Abstract
Goldwasser and Rothblum (TCC ’07) prove that statistical indistinguishability obfuscation (iO) cannot exist if the obfuscator must maintain perfect correctness (under a widely believed complexity theoretic assumption: \(\mathcal {NP}\not \subseteq \mathcal {SZK}\subseteq \mathcal {AM}\cap \mathbf {co}\mathcal {AM}\)). However, for many applications of iO, such as constructing public-key encryption from one-way functions (one of the main open problems in theoretical cryptography), approximate correctness is sufficient. It had been unknown thus far whether statistical approximate iO (saiO) can exist.
We show that saiO does not exist, even for a minimal correctness requirement, if \(\mathcal {NP}\not \subseteq \mathcal {AM}\cap \mathbf {co}\mathcal {AM}\), and if one-way functions exist. A simple complementary observation shows that if one-way functions do not exist, then average-case saiO exists. Technically, previous approaches utilized the behavior of the obfuscator on evasive functions, for which saiO always exists. We overcome this barrier by using a PRF as a “baseline” for the obfuscated program.
We broaden our study and consider relaxed notions of security for iO. We introduce the notion of correlation obfuscation, where the obfuscations of equivalent circuits only need to be mildly correlated (rather than statistically indistinguishable). Perhaps surprisingly, we show that correlation obfuscators exist via a trivial construction for some parameter regimes, whereas our impossibility result extends to other regimes. Interestingly, within the gap between the parameter regimes that we show possible and impossible, there is a small fraction of parameters that still allows building public-key encryption from one-way functions and thus deserves further investigation.
Zvika Brakerski, Christina Brzuska, Nils Fleischhacker
Revisiting the Cryptographic Hardness of Finding a Nash Equilibrium
Abstract
The exact hardness of computing a Nash equilibrium is a fundamental open question in algorithmic game theory. This problem is complete for the complexity class PPAD. It is well known that problems in PPAD cannot be \(\mathrm {NP}\)-complete unless \(\mathrm {NP}=\mathrm {coNP}\). Therefore, a natural direction is to reduce the hardness of PPAD to the hardness of problems used in cryptography.
Bitansky, Paneth, and Rosen [FOCS 2015] prove the hardness of PPAD assuming the existence of quasi-polynomially hard indistinguishability obfuscation and sub-exponentially hard one-way functions. This leaves open the possibility of basing PPAD hardness on simpler, polynomially hard, computational assumptions.
We make further progress in this direction and reduce PPAD hardness directly to polynomially hard assumptions. Our first result proves hardness of PPAD assuming the existence of polynomially hard indistinguishability obfuscation (\(i\mathcal {O}\)) and one-way permutations. While this improves upon Bitansky et al.’s work, it does not give us a reduction to simpler, polynomially hard computational assumptions, because constructions of \(i\mathcal {O}\) inherently seem to require assumptions with sub-exponential hardness. In contrast, public-key functional encryption is a much simpler primitive and does not suffer from this drawback. Our second result shows that \(\mathsf{PPAD}\) hardness can be based on polynomially hard compact public-key functional encryption and one-way permutations. Our results further demonstrate the power of polynomially hard compact public-key functional encryption, which is believed to be weaker than indistinguishability obfuscation. Our techniques are general and we expect them to have various applications.
Sanjam Garg, Omkant Pandey, Akshayaram Srinivasan

Asymmetric Cryptography and Cryptanalysis II

Frontmatter
Cryptanalysis of GGH15 Multilinear Maps
Abstract
We describe a cryptanalysis of the GGH15 multilinear maps. Our attack breaks the multipartite key-agreement protocol in polynomial time by generating an equivalent user private key; it also applies to GGH15 with safeguards. We also describe attacks against variants of the GGH13 multilinear maps proposed by Halevi (ePrint 2015/866) aiming at supporting graph-induced constraints, as in GGH15.
Jean-Sébastien Coron, Moon Sung Lee, Tancrède Lepoint, Mehdi Tibouchi
Annihilation Attacks for Multilinear Maps: Cryptanalysis of Indistinguishability Obfuscation over GGH13
Abstract
In this work, we present a new class of polynomial-time attacks on the original multilinear maps of Garg, Gentry, and Halevi (2013). Previous polynomial-time attacks on GGH13 were “zeroizing” attacks that generally required the availability of low-level encodings of zero. Most significantly, such zeroizing attacks were not applicable to candidate indistinguishability obfuscation (iO) schemes. iO has been the subject of intense study.
To address this gap, we introduce annihilation attacks, which attack multilinear maps using non-linear polynomials. Annihilation attacks can work in situations where there are no low-level encodings of zero. Using annihilation attacks, we give the first polynomial-time cryptanalysis of candidate iO schemes over GGH13. More specifically, we exhibit two simple programs that are functionally equivalent, and show how to efficiently distinguish between the obfuscations of these two programs.
Given the enormous applicability of iO, it is important to devise iO schemes that can avoid attack. We discuss some initial directions for safeguarding against annihilation attacks.
Eric Miles, Amit Sahai, Mark Zhandry
Three’s Compromised Too: Circular Insecurity for Any Cycle Length from (Ring-)LWE
Abstract
A public-key encryption scheme is k-circular secure if a cycle of k encrypted secret keys \((\mathsf {Enc} _{pk_{1}}(sk_{2}), \mathsf {Enc} _{pk_{2}}(sk_{3}), \ldots , \mathsf {Enc} _{pk_{k}}(sk_{1}))\) is indistinguishable from encryptions of zeros. Circular security has applications in a wide variety of settings, ranging from security of symbolic protocols to fully homomorphic encryption. A fundamental question is whether standard security notions like IND-CPA/CCA imply k-circular security.
For the case \(k=2\), several works over the past years have constructed counterexamples—i.e., schemes that are CPA or even CCA secure but not 2-circular secure—under a variety of well-studied assumptions (SXDH, decision linear, and LWE). However, for \(k > 2\) the only known counterexamples are based on strong general-purpose obfuscation assumptions.
In this work we construct k-circular security counterexamples for any \(k \ge 2\) based on (ring-)LWE. Specifically:
  • for any constant \(k=O(1)\), we construct a counterexample based on n-dimensional (plain) LWE for \(\mathrm{poly}(n)\) approximation factors;
  • for any \(k=\mathrm{poly}(\lambda )\), we construct one based on degree-n ring-LWE for at most subexponential \(\exp (n^{\varepsilon })\) factors.
Moreover, both schemes are \(k'\)-circular insecure for \(2 \le k' \le k\).
Notably, our ring-LWE construction does not immediately translate to an LWE-based one, because matrix multiplication is not commutative. To overcome this, we introduce a new “tensored” variant of LWE which provides the desired commutativity, and which we prove is actually equivalent to plain LWE.
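The key-cycle structure at the heart of the circular-security definition can be illustrated with a toy sketch (purely illustrative, not any scheme from this paper): generate k keys and encrypt each key under its predecessor in the cycle. We use a one-time-pad-style XOR toy cipher with the secret key used directly as the encryption key; a real counterexample would of course be a public-key scheme.

```python
import os

K = 4  # cycle length k

def toy_enc(key: bytes, msg: bytes) -> bytes:
    # Toy XOR cipher standing in for Enc_{pk}(.); illustration only.
    return bytes(a ^ b for a, b in zip(key, msg))

# k secret keys sk_1, ..., sk_k (16 bytes each).
keys = [os.urandom(16) for _ in range(K)]

# The cycle (Enc_{pk_1}(sk_2), Enc_{pk_2}(sk_3), ..., Enc_{pk_k}(sk_1)).
cycle = [toy_enc(keys[i], keys[(i + 1) % K]) for i in range(K)]

# k-circular security asks that `cycle` be indistinguishable from
# encryptions of all-zero messages under the same keys.
zeros = [toy_enc(keys[i], bytes(16)) for i in range(K)]
print(len(cycle), len(zeros))  # 4 4
```

Note that this toy cipher is in fact trivially circular insecure for XOR-based reasons; the counterexamples in the paper achieve IND-CPA security under (ring-)LWE while still breaking for every cycle length up to k.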
Navid Alamati, Chris Peikert
Circular Security Separations for Arbitrary Length Cycles from LWE
Abstract
We describe a public-key encryption scheme that is IND-CPA secure under the Learning with Errors (LWE) assumption, but that is not circular secure for arbitrary length cycles. Previous separation results for cycle length greater than 2 require the use of indistinguishability obfuscation, which is not currently realizable under standard assumptions.
Venkata Koppula, Brent Waters
Backmatter
Metadata
Title
Advances in Cryptology – CRYPTO 2016
Edited by
Matthew Robshaw
Jonathan Katz
Copyright Year
2016
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-53008-5
Print ISBN
978-3-662-53007-8
DOI
https://doi.org/10.1007/978-3-662-53008-5
