
2017 | Book

Advances in Cryptology – CRYPTO 2017

37th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 20–24, 2017, Proceedings, Part II


About this book

The three-volume set LNCS 10401, LNCS 10402, and LNCS 10403 constitutes the refereed proceedings of the 37th Annual International Cryptology Conference, CRYPTO 2017, held in Santa Barbara, CA, USA, in August 2017.
The 72 revised full papers presented were carefully reviewed and selected from 311 submissions. The papers are organized in the following topical sections: functional encryption; foundations; two-party computation; bitcoin; multiparty computation; award papers; obfuscation; conditional disclosure of secrets; OT and ORAM; quantum; hash functions; lattices; signatures; block ciphers; authenticated encryption; public-key encryption, stream ciphers, lattice crypto; leakage and subversion; symmetric-key crypto; and real-world crypto.

Table of Contents

Frontmatter

OT and ORAM

Frontmatter
Secure Computation Based on Leaky Correlations: High Resilience Setting
Abstract
Correlated private randomness, or correlation for short, is a fundamental cryptographic resource that helps parties compute securely over their private data. An offline preprocessing step, which is independent of the eventual secure computation, generates correlated secret shares for the parties, and the parties use these shares during the final secure computation step. However, these secret shares are vulnerable to leakage attacks.
Inspired by the quintessential problem of privacy amplification, Ishai, Kushilevitz, Ostrovsky, and Sahai (FOCS 2009) introduced the concept of correlation extractors. Correlation extractors are interactive protocols that take leaky correlations as input and produce secure independent copies of oblivious transfer (OT), the building blocks of secure computation protocols. Although their initial feasibility result is resilient to linear leakage and produces a linear number of “fresh” OTs, the constants involved are minuscule. The output of this correlation extractor can be used to perform only small secure computation tasks, because the number of OTs needed to evaluate a functionality securely is roughly proportional to its circuit size. Recently, Gupta, Ishai, Maji, and Sahai (CRYPTO 2015) constructed an extractor that is resilient to 1/4 fractional leakage and has near-linear production rate. They also constructed an extractor from a large correlation that has 1/2 fractional resilience but produces only one OT, which does not suffice to compute even constant size functionalities securely.
In this paper, we show the existence of a correlation that produces n-bit shares for the parties and allows the extraction of \(n^{1-o(1)}\) secure OTs, despite n/2 bits of leakage. The key technical idea is to embed several multiplications over a field into one multiplication over an extension field. The packing efficiency of this embedding directly translates into the production rate of our correlation extractor. Our work establishes a connection between this problem and a rich vein of research in additive combinatorics on constructing dense sets of integers that are free of arithmetic progressions, a.k.a. 3-free sets. We introduce a new combinatorial problem that suffices for our multiplication embedding, and produces concrete embeddings that beat the efficiency of the embeddings inspired by the reduction to 3-free sets.
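As a small, self-contained illustration of the 3-free sets mentioned above (only the combinatorial definition made concrete in Python, not the authors' multiplication embedding), a greedy construction of a set of integers containing no three-term arithmetic progression looks as follows:

def greedy_3_free(limit):
    # Greedily pick integers from 0..limit-1 while avoiding any three-term
    # arithmetic progression a, b, c with c - b == b - a.
    chosen = []
    for c in range(limit):
        # c closes a progression exactly when 2*b - a == c for previously chosen a, b.
        if any(2 * b - a == c for a in chosen for b in chosen):
            continue
        chosen.append(c)
    return chosen

print(greedy_3_free(30))  # [0, 1, 3, 4, 9, 10, 12, 13, 27, 28]

Denser 3-free sets correspond, via the connection described above, to better packing efficiency and hence a higher production rate for the correlation extractor.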
Finally, the paper introduces a graph-theoretic measure to upper-bound the leakage resilience of correlations, namely the simple partition number. This measure is similar in spirit to graph covering problems like the biclique partition number. If the simple partition number of a correlation is \(2^\lambda \), then it is impossible to extract even one OT if parties can perform \(\lambda \) bits of leakage. We compute tight estimates of the simple partition number of several correlations that are relevant to this paper, and, in particular, show that our extractor and the extractor for the large correlation by Gupta et al. have optimal leakage resilience and (qualitatively) optimal simulation error.
Alexander R. Block, Hemanta K. Maji, Hai H. Nguyen
Laconic Oblivious Transfer and Its Applications
Abstract
In this work, we introduce a novel technique for secure computation over large inputs. Specifically, we provide a new oblivious transfer (OT) protocol with a laconic receiver. Laconic OT allows a receiver to commit to a large input D (of length M) via a short message. Subsequently, a single short message by a sender allows the receiver to learn \(m_{D[L]}\), where the messages \(m_0, m_1\) and the location \(L \in [M]\) are dynamically chosen by the sender. All prior constructions of OT required the receiver’s outgoing message to grow with the size of D.
Our key contribution is an instantiation of this primitive based on the Decisional Diffie-Hellman (DDH) assumption in the common reference string (CRS) model. The technical core of this construction is a novel use of somewhere statistically binding (SSB) hashing in conjunction with hash proof systems. Next, we show applications of laconic OT to non-interactive secure computation on large inputs and multi-hop homomorphic encryption for RAM programs.
Chongwon Cho, Nico Döttling, Sanjam Garg, Divya Gupta, Peihan Miao, Antigoni Polychroniadou
Black-Box Parallel Garbled RAM
Abstract
In 1982, Yao introduced a technique of “circuit garbling” that became a central building block in cryptography. The question of garbling general random-access memory (RAM) programs was introduced by Lu and Ostrovsky in 2013. The most recent results of Garg, Lu, and Ostrovsky (FOCS 2015) achieve a garbled RAM with black-box use of any one-way function and poly-log overhead of data and program garbling in all the relevant parameters, including program run-time. The advantage of garbled RAM is that large data can be garbled first and act as persistent garbled storage (e.g., in the cloud); later, programs can be garbled and sent to be executed on this garbled database in a non-interactive manner.
One of the main advantages of cloud computing is not only that it has large storage but also that it has a large number of parallel processors. Despite multiple successful efforts on parallelizing (interactive) Oblivious RAM, the non-interactive garbling of parallel programs remained open until very recently. Specifically, Boyle, Chung and Pass in their TCC 2016-A paper [4] have shown how to garble PRAM programs with poly-logarithmic (parallel) overhead assuming non-black-box use of identity-based encryption (IBE). This left open the question of whether the IBE assumption, and in particular the non-black-box use of such a strong assumption, is needed. In this paper, we resolve this question and show how to garble parallel programs, with black-box use of only one-way functions and with only poly-log overhead in the (parallel) running time. Our result works for any number of parallel processors.
Steve Lu, Rafail Ostrovsky

Foundations II

Frontmatter
Non-Malleable Codes for Space-Bounded Tampering
Abstract
Non-malleable codes—introduced by Dziembowski, Pietrzak and Wichs at ICS 2010—are key-less coding schemes in which mauling attempts to an encoding of a given message, w.r.t. some class of tampering adversaries, result in a decoded value that is either identical or unrelated to the original message. Such codes are very useful for protecting arbitrary cryptographic primitives against tampering attacks on memory. Clearly, non-malleability is hopeless if the class of tampering adversaries includes the decoding and encoding algorithms. To circumvent this obstacle, the majority of past research focused on designing non-malleable codes for various tampering classes, albeit assuming that the adversary is unable to decode. Nonetheless, in many concrete settings, this assumption is not realistic.
In this paper, we explore one particular such scenario where the class of tampering adversaries naturally includes the decoding (but not the encoding) algorithm. In particular, we consider the class of adversaries that are restricted in terms of memory/space. Our main contributions can be summarized as follows:
  • We initiate a general study of non-malleable codes resisting space-bounded tampering. In our model, the encoding procedure requires large space, but decoding can be done in small space, and thus can be also performed by the adversary. Unfortunately, in such a setting it is impossible to achieve non-malleability in the standard sense, and we need to aim for slightly weaker security guarantees. In a nutshell, our main notion (dubbed leaky space-bounded non-malleability) ensures that this is the best the adversary can do, in that space-bounded tampering attacks can be simulated given a small amount of leakage on the encoded value.
  • We provide a simple construction of a leaky space-bounded non-malleable code. Our scheme is based on any Proof of Space (PoS)—a concept recently put forward by Ateniese et al. (SCN 2014) and Dziembowski et al. (CRYPTO 2015)—satisfying a variant of soundness. As we show, our paradigm can be instantiated by extending the analysis of the PoS construction by Ren and Devadas (TCC 2016-A), based on so-called stacks of localized expander graphs.
  • Finally, we show that our flavor of non-malleability yields a natural security guarantee against memory tampering attacks, where one can trade a small amount of leakage on the secret key for protection against space-bounded tampering attacks.
Sebastian Faust, Kristina Hostáková, Pratyay Mukherjee, Daniele Venturi
Four-Round Concurrent Non-Malleable Commitments from One-Way Functions
Abstract
How many rounds and which assumptions are required for concurrent non-malleable commitments? The above question has puzzled researchers for several years. Pass in [TCC 2013] showed a lower bound of 3 rounds for the case of black-box reductions to falsifiable hardness assumptions with respect to polynomial-time adversaries. On the other side, Goyal [STOC 2011], Lin and Pass [STOC 2011] and Goyal et al. [FOCS 2012] showed that one-way functions (OWFs) are sufficient with a constant number of rounds. More recently Ciampi et al. [CRYPTO 2016] showed a 3-round construction based on subexponentially strong one-way permutations.
In this work we show as main result the first 4-round concurrent non-malleable commitment scheme assuming the existence of any one-way function.
Our approach builds on a new security notion for argument systems against man-in-the-middle attacks: Simulation-Witness-Independence. We show how to construct a 4-round one-many simulation-witness-independent argument system from one-way functions. We then combine this new tool in parallel with a weak form of non-malleable commitments constructed by Goyal et al. in [FOCS 2014], obtaining the main result of our work.
Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, Ivan Visconti
Distinguisher-Dependent Simulation in Two Rounds and its Applications
Abstract
We devise a novel simulation technique that makes black-box use of the adversary as well as the distinguisher. Using this technique we construct several round-optimal protocols, many of which were previously unknown even using non-black-box simulation techniques:
  • Two-round witness indistinguishable (WI) arguments for \(\mathrm {NP}\) from different assumptions than previously known.
  • Two-round arguments and three-round arguments of knowledge for \(\mathrm {NP}\) that achieve strong WI, witness hiding (WH) and distributional weak zero knowledge (WZK) properties in a setting where the instance is only determined by the prover in the last round of the interaction. The soundness of these protocols is guaranteed against adaptive provers.
  • Three-round two-party computation satisfying input-indistinguishable security as well as a weaker notion of simulation security against malicious adversaries.
  • Three-round extractable commitments with guaranteed correctness of extraction from polynomial hardness assumptions.
Our three-round protocols can be based on DDH or QR or \(\text {N}^{\text {th}}\) residuosity and our two-round protocols require quasi-polynomial hardness of the same assumptions. In particular, prior to this work, two-round WI arguments for NP were only known based on assumptions such as the existence of trapdoor permutations, hardness assumptions on bilinear maps, or the existence of program obfuscation; we give the first construction based on (quasi-polynomial) DDH or QR or \(\text {N}^{\text {th}}\) residuosity.
Our simulation technique bypasses known lower bounds on black-box simulation [Goldreich-Krawczyk’96] by using the distinguisher’s output in a meaningful way. We believe that this technique is likely to find additional applications in the future.
Abhishek Jain, Yael Tauman Kalai, Dakshita Khurana, Ron Rothblum

Obfuscation II

Frontmatter
Incremental Program Obfuscation
Abstract
Recent advances in program obfuscation suggest that it is possible to create software that can provably safeguard secret information. However, software systems usually contain large executable code that is updated multiple times and sometimes very frequently. Freshly obfuscating the program for every small update will lead to a considerable efficiency loss. Thus, an extremely desirable property for obfuscation algorithms is incrementality: small changes to the underlying program translate into small changes to the corresponding obfuscated program.
We initiate a thorough investigation of incremental program obfuscation. We show that the strong simulation-based notions of program obfuscation, such as “virtual black-box” and “virtual grey-box” obfuscation, cannot be incremental (according to our efficiency requirements) even for very simple functions such as point functions. We then turn to the indistinguishability-based notions, and present two security definitions of varying strength — namely, a weak one and a strong one. To understand the overall strength of our definitions, we formulate the notion of incremental best-possible obfuscation and show that it is equivalent to our strong indistinguishability-based notion.
Finally, we present constructions for incremental program obfuscation satisfying both our security notions. We first give a construction achieving the weaker security notion based on the existence of general purpose indistinguishability obfuscation. Next, we present a generic transformation using oblivious RAM to amplify security from weaker to stronger, while maintaining the incrementality property.
Sanjam Garg, Omkant Pandey
From Obfuscation to the Security of Fiat-Shamir for Proofs
Abstract
The Fiat-Shamir paradigm [CRYPTO’86] is a heuristic for converting three-round identification schemes into signature schemes, and more generally, for collapsing rounds in constant-round public-coin interactive protocols. This heuristic is very popular both in theory and in practice, and its security has been the focus of extensive study.
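As a reminder of how the heuristic operates, here is a generic sketch following the usual description of the transform (not the specific hash family constructed in this paper; prover_commit, prover_respond, and verify_transcript are placeholder callbacks for an underlying three-round public-coin identification scheme):

import hashlib

def fs_sign(message, prover_commit, prover_respond):
    # The verifier's random challenge is replaced by a hash of the prover's
    # first message together with the signed message (all byte strings).
    commitment, state = prover_commit()
    challenge = hashlib.sha256(commitment + message).digest()
    return commitment, prover_respond(state, challenge)

def fs_verify(message, signature, verify_transcript):
    commitment, response = signature
    challenge = hashlib.sha256(commitment + message).digest()
    return verify_transcript(commitment, challenge, response)

The security question studied here is precisely when such a hash can soundly replace the verifier's coins in the plain model.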
In particular, this paradigm was shown to be secure in the Random Oracle Model. However, in the plain model, the results shown were mostly negative. In particular, the heuristic was shown to be insecure when applied to computationally sound proofs (also known as arguments). Moreover, recently it was shown that even in the restricted setting where the heuristic is applied to interactive proofs (as opposed to arguments), its soundness cannot be proven via a black-box reduction to any so-called falsifiable assumption.
In this work, we give a positive result for the security of this paradigm in the plain model. Specifically, we construct a hash function for which the Fiat-Shamir paradigm is secure when applied to proofs (as opposed to arguments), assuming the existence of a sub-exponentially secure indistinguishability obfuscator, the existence of an exponentially secure input-hiding obfuscator for the class of multi-bit point functions, and the existence of a sub-exponentially secure one-way function.
More generally, we construct a hash family that is correlation intractable (under the computational assumptions above), solving an open problem originally posed by Canetti, Goldreich and Halevi (JACM, 2004), under the above assumptions.
In addition, we show that our result resolves a long-standing open problem about zero-knowledge proofs: it implies that there does not exist a public-coin constant-round zero-knowledge proof with negligible soundness (under the assumptions stated above).
Yael Tauman Kalai, Guy N. Rothblum, Ron D. Rothblum
Indistinguishability Obfuscation for Turing Machines: Constant Overhead and Amortization
Abstract
We study the asymptotic efficiency of indistinguishability obfuscation (\(\mathsf {i}\mathcal {O}\)) on two fronts:
  • Obfuscation size: Present constructions of indistinguishability obfuscation (\(\mathsf {i}\mathcal {O}\)) create obfuscated programs whose size is at least a multiplicative factor of the security parameter larger than the size of the original program.
    In this work, we construct the first \(\mathsf {i}\mathcal {O}\) scheme for (bounded-input) Turing machines that achieves only a constant multiplicative overhead in size. The constant in our scheme is, in fact, 2.
  • Amortization: Suppose we want to obfuscate an arbitrary polynomial number of (bounded-input) Turing machines \(M_1,\ldots ,M_n\). We ask whether it is possible to obfuscate \(M_1,\ldots ,M_n\) using a single application of an \(\mathsf {i}\mathcal {O}\) scheme for a circuit family where the size of any circuit is independent of n as well as the size of any Turing machine \(M_i\).
    In this work, we resolve this question in the affirmative, obtaining a new bootstrapping theorem for obfuscating arbitrarily many Turing machines.
In order to obtain both of these results, we develop a new template for obfuscating Turing machines that is of independent interest and likely to find applications in the future. The security of our results relies on the existence of sub-exponentially secure \(\mathsf {i}\mathcal {O}\) for circuits and re-randomizable encryption schemes.
Prabhanjan Ananth, Abhishek Jain, Amit Sahai

Quantum

Frontmatter
Quantum Security of NMAC and Related Constructions
PRF Domain Extension Against Quantum Attacks
Abstract
We prove the security of NMAC, HMAC, AMAC, and the cascade construction with fixed input-length as quantum-secure pseudo-random functions (PRFs). Namely, they are indistinguishable from a random oracle against any polynomial-time quantum adversary that can make quantum superposition queries. In contrast, many blockcipher-based PRFs including CBC-MAC were recently broken by quantum superposition attacks.
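For orientation, a minimal sketch of the cascade and NMAC structure (illustrative only: compress is a stand-in built from SHA-256, whereas the actual constructions use the underlying hash function's compression function directly):

import hashlib

def compress(chaining, block):
    # Stand-in compression function; both arguments are byte strings.
    return hashlib.sha256(chaining + block).digest()

def cascade(key, blocks):
    # Fixed-input-length cascade: iterate the compression function starting from the key.
    state = key
    for b in blocks:
        state = compress(state, b)
    return state

def nmac(k_inner, k_outer, blocks):
    # NMAC: an independently keyed outer call applied to the cascade output.
    return compress(k_outer, cascade(k_inner, blocks))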
Classical proof strategies for these constructions do not generalize to the quantum setting, and we observe that they sometimes even fail completely (e.g., the universal-hash then PRF paradigm for proving security of NMAC). Instead, we propose a direct hybrid argument as a new proof strategy (both classically and quantumly). We first show that a quantum-secure PRF is secure against key-recovery attacks, and remains secure under random leakage of the key. Next, as a key technical tool, we extend the oracle indistinguishability framework of Zhandry in two directions: we consider distributions on functions rather than strings, and we also consider a relative setting, where an additional oracle, possibly correlated with the distributions, is given to the adversary as well. This enables a hybrid argument to prove the security of NMAC. Security proofs for other constructions follow similarly.
Fang Song, Aaram Yun
Quantum Non-malleability and Authentication
Abstract
In encryption, non-malleability is a highly desirable property: it ensures that adversaries cannot manipulate the plaintext by acting on the ciphertext. In [6], Ambainis et al. gave a definition of non-malleability for the encryption of quantum data. In this work, we show that this definition is too weak, as it allows adversaries to “inject” plaintexts of their choice into the ciphertext. We give a new definition of quantum non-malleability which resolves this problem. Our definition is expressed in terms of entropic quantities, considers stronger adversaries, and does not assume secrecy. Rather, we prove that quantum non-malleability implies secrecy; this is in stark contrast to the classical setting, where the two properties are completely independent. For unitary schemes, our notion of non-malleability is equivalent to encryption with a two-design (and hence also to the definition of [6]).
Our techniques also yield new results regarding the closely-related task of quantum authentication. We show that “total authentication” (a notion recently proposed by Garg et al. [18]) can be satisfied with two-designs, a significant improvement over the eight-design construction of [18]. We also show that, under a mild adaptation of the rejection procedure, both total authentication and our notion of non-malleability yield quantum authentication as defined by Dupuis et al. [16].
Gorjan Alagic, Christian Majenz
New Security Notions and Feasibility Results for Authentication of Quantum Data
Abstract
We give a new class of security definitions for authentication in the quantum setting. These definitions capture and strengthen existing definitions of security against quantum adversaries for both classical message authentication codes (MACs) as well as full quantum state authentication schemes. The main feature of our definitions is that they precisely characterize the effective behavior of any adversary when the authentication protocol accepts, including correlations with the key. Our definitions readily yield a host of desirable properties and interesting consequences; for example, our security definition for full quantum state authentication implies that the entire secret key can be re-used if the authentication protocol succeeds.
Next, we present several protocols satisfying our security definitions. We show that the classical Wegman-Carter authentication scheme with 3-universal hashing is secure against superposition attacks, as well as adversaries with quantum side information. We then present conceptually simple constructions of full quantum state authentication.
Finally, we prove a lifting theorem which shows that, as long as a protocol can securely authenticate the maximally entangled state, it can securely authenticate any state, even those that are entangled with the adversary. This shows that protocols satisfying a fairly weak form of authentication security automatically satisfy a stronger notion of security (in particular, the definition of Dupuis et al. (2012)).
Sumegha Garg, Henry Yuen, Mark Zhandry

Hash Functions

Frontmatter
Time-Memory Tradeoff Attacks on the MTP Proof-of-Work Scheme
Abstract
Proof-of-work (PoW) schemes are cryptographic primitives with numerous applications, and in particular, they play a crucial role in maintaining consensus in cryptocurrency networks. Ideally, a cryptocurrency PoW scheme should have several desired properties, including efficient verification on one hand, and high memory consumption of the prover’s algorithm on the other hand, making the scheme less attractive for implementation on dedicated hardware.
At the USENIX Security Symposium 2016, Biryukov and Khovratovich presented a new promising PoW scheme called MTP (Merkle Tree Proof) that achieves essentially all desired PoW properties. As a result, MTP has received substantial attention from the cryptocurrency community. The scheme uses a Merkle hash tree construction over a large array of blocks computed by a memory-consuming (memory-hard) function. Despite the fact that only a small fraction of the memory is verified by the efficient verification algorithm, the designers claim that a cheating prover that uses a small amount of memory will suffer from a significant computational penalty.
In this paper, we devise a sub-linear computation-memory tradeoff attack on MTP. We apply our attack to the concrete instance proposed by the designers which uses the memory-hard function Argon2d and computes a proof by allocating 2 gigabytes of memory. The attack computes arbitrary malicious proofs using less than a megabyte of memory (about 1/3000 of the honest prover’s memory) at a relatively mild penalty of 170 in computation. This is more than 55,000 times faster than what is claimed by the designers. The attack requires a one-time precomputation step of complexity \(2^{64}\), but its online cost is only increased by a factor which is less than 2 when spending \(2^{48}\) precomputation time.
The main idea of the attack is to exploit the fact that Argon2d accesses its memory in a way which is determined by its previous computations. This makes it possible to inject a small fraction of carefully selected memory blocks that manipulate Argon2d’s memory access patterns, significantly weakening its memory-hardness.
Itai Dinur, Niv Nadler
Functional Graph Revisited: Updates on (Second) Preimage Attacks on Hash Combiners
Abstract
This paper studies functional-graph-based (second) preimage attacks against hash combiners. By exploiting more properties of the functional graph, we find an improved preimage attack against the XOR combiner with a complexity of \(2^{5n/8}\), while the previous best-known complexity is \(2^{2n/3}\). Moreover, we find the first generic second-preimage attack on Zipper hash with an optimal complexity of \(2^{3n/5}\).
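For comparison, the gain in the exponent is \(\frac{2n}{3} - \frac{5n}{8} = \frac{16n-15n}{24} = \frac{n}{24}\), i.e., the new attack on the XOR combiner is faster than the previous best by a factor of \(2^{n/24}\).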
Zhenzhen Bao, Lei Wang, Jian Guo, Dawu Gu
Non-full Sbox Linearization: Applications to Collision Attacks on Round-Reduced Keccak
Abstract
The Keccak hash function is the winner of the SHA-3 competition and became the SHA-3 standard of NIST in 2015. In this paper, we focus on practical collision attacks against the round-reduced Keccak hash function, and two main results are achieved: the first practical collision attacks against 5-round Keccak-224 and an instance of the 6-round Keccak collision challenge. Both improve the number of practically attacked rounds by one. These results are obtained by carefully studying the algebraic properties of the nonlinear layer in the underlying permutation of Keccak and applying linearization to it. In particular, techniques for partially linearizing the output bits of the nonlinear layer are proposed, utilizing which attack complexities are reduced significantly from the previous best results.
Ling Song, Guohong Liao, Jian Guo

Lattices

Frontmatter
Gaussian Sampling over the Integers: Efficient, Generic, Constant-Time
Abstract
Sampling integers with a Gaussian distribution is a fundamental problem that arises in almost every application of lattice cryptography, and it can be both time-consuming and challenging to implement. Most previous work has focused on the optimization and implementation of integer Gaussian sampling in the context of specific applications, with fixed sets of parameters. We present new algorithms for discrete Gaussian sampling that are generic (application independent), efficient, and more easily implemented in constant time without incurring a substantial slow-down, making them more resilient to side-channel (e.g., timing) attacks. As an additional contribution, we present new analytical techniques that can be used to simplify the precision/security evaluation of floating point cryptographic algorithms, and an experimental comparison of our algorithms with previous algorithms from the literature.
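For background, a textbook rejection sampler for a discrete Gaussian over the integers looks as follows (this is not one of the paper's algorithms; note that its running time depends on the rejected candidates, which is exactly the kind of data-dependent behavior that constant-time samplers are designed to avoid):

import math, random

def discrete_gaussian(sigma, tail=12):
    # Sample x in Z with probability proportional to exp(-x^2 / (2*sigma^2)),
    # restricted to [-tail*sigma, tail*sigma]; the truncated tail mass is negligible.
    bound = int(math.ceil(tail * sigma))
    while True:
        x = random.randint(-bound, bound)
        if random.random() < math.exp(-x * x / (2 * sigma * sigma)):
            return x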
Daniele Micciancio, Michael Walter
LPN Decoded
Abstract
We propose new algorithms with small memory consumption for the Learning Parity with Noise (LPN) problem, both classically and quantumly. Our goal is to predict the hardness of LPN depending on both parameters, its dimension k and its noise rate \(\tau \), as accurately as possible both in theory and practice. Therefore, we analyze our algorithms asymptotically, run experiments on medium size parameters and provide bit complexity predictions for large parameters.
Our new algorithms are modifications and extensions of the simple Gaussian elimination algorithm with recent advanced techniques for decoding random linear codes. Moreover, we enhance our algorithms with the dimension reduction technique of Blum, Kalai, and Wasserman. This results in a hybrid algorithm that is capable of achieving the best currently known run time for any fixed amount of memory.
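A minimal sketch of the plain Gaussian-elimination approach that these algorithms extend (illustrative only; the verification threshold and parameters are ad hoc, and none of the paper's decoding or BKW-style improvements appear here):

import random

def lpn_sample(secret, tau):
    # One LPN sample: uniform a in F_2^k and b = <a, s> + e (mod 2) with Pr[e = 1] = tau.
    a = [random.randrange(2) for _ in secret]
    e = 1 if random.random() < tau else 0
    return a, (sum(x & y for x, y in zip(a, secret)) + e) % 2

def solve_f2(rows, rhs):
    # Gauss-Jordan elimination over F_2; returns a solution or None if the system is singular.
    k = len(rhs)
    m = [row[:] + [b] for row, b in zip(rows, rhs)]
    for col in range(k):
        pivot = next((i for i in range(col, k) if m[i][col]), None)
        if pivot is None:
            return None
        m[col], m[pivot] = m[pivot], m[col]
        for i in range(k):
            if i != col and m[i][col]:
                m[i] = [x ^ y for x, y in zip(m[i], m[col])]
    return [m[i][k] for i in range(k)]

def gauss_lpn(secret, tau, tests=200):
    # Repeat: draw k samples, hope all are error-free (probability (1 - tau)^k),
    # solve for a candidate secret, and check it against fresh samples.
    k = len(secret)
    while True:
        samples = [lpn_sample(secret, tau) for _ in range(k)]
        cand = solve_f2([a for a, _ in samples], [b for _, b in samples])
        if cand is None:
            continue
        agree = sum((sum(x & y for x, y in zip(a, cand)) % 2) == b
                    for a, b in (lpn_sample(secret, tau) for _ in range(tests)))
        if agree >= tests * (0.75 - tau / 2):  # crude threshold between 1 - tau and 1/2
            return cand

This toy version is only practical for small k; the point of the paper is to combine such Gaussian elimination with advanced decoding techniques and the dimension reduction mentioned above in order to reach instances such as \(k=243, \tau = \frac{1}{8}\).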
On the asymptotic side, we achieve significant improvements for the run time exponents, both classically and quantumly. To the best of our knowledge, we provide the first quantum algorithms for LPN.
Due to the small memory consumption of our algorithms, we are able to solve, for the first time, LPN instances of medium size, e.g., with \(k=243, \tau = \frac{1}{8}\), in only 15 days on 64 threads.
Our algorithms result in bit-complexity predictions that require relatively large k for small \(\tau \). For instance, for small-noise LPN with \(\tau = \frac{1}{\sqrt{k}}\), we predict 80-bit classical and only 64-bit quantum security for \(k~\ge ~2048\). For the common cryptographic choice \(k=512, \tau = \frac{1}{8}\), we achieve with limited memory 102-bit classical and 69-bit quantum security.
Andre Esser, Robert Kübler, Alexander May

Signatures

Frontmatter
Optimal Security Reductions for Unique Signatures: Bypassing Impossibilities with a Counterexample
Abstract
Optimal security reductions for unique signatures (Coron, Eurocrypt 2002) and their generalization, i.e., efficiently re-randomizable signatures (Hofheinz et al. PKC 2012 & Bader et al. Eurocrypt 2016), have been well studied in the literature. In particular, it has been shown that under a non-interactive hard assumption, any security reduction (with or without random oracles) for a unique signature scheme or an efficiently re-randomizable signature scheme must lose a factor of at least \(q_s\) in the security model of existential unforgeability against chosen-message attacks (EU-CMA), where \(q_s\) denotes the number of signature queries. Note that the number \(q_s\) can be as large as \(2^{30}\) in practice. From these impossibility results, it was concluded that all unique signature schemes and efficiently re-randomizable signature schemes are accompanied by loose reductions.
Somewhat surprisingly, in contrast to previous impossibility results (Coron, Eurocrypt 2002; Hofheinz et al. PKC 2012; Bader et al. Eurocrypt 2016), in this work we show that, without changing the assumption type and security model, it is not always the case that a security reduction must lose a factor of at least \(q_s\). As a counterexample, we propose a unique signature scheme with a tight reduction in the EU-CMA security model under the Computational Diffie-Hellman (CDH) assumption. Precisely, in the random oracle model, we can program a security reduction with a loss factor of at most \(nq^{1/{n}}\), where n can be any integer independent of the security parameter for the scheme construction and q is the number of hash queries to random oracles. The loss factor in our reduction can be very small. Considering \(n=25\) and \(q=2^{50}\) as an example, the loss factor is at most \(nq^{1/{n}}=100\) and therefore our security reduction is tight.
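For concreteness, the cited example works out as \(n\,q^{1/n} = 25 \cdot (2^{50})^{1/25} = 25 \cdot 2^{2} = 100\), which is negligible compared to a loss of \(q_s\), which can be as large as \(2^{30}\).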
Notice that the previous impossibility results are derived from proofs via a so-called meta-reduction technique. We stress that instead of indicating any flaw in their meta-reduction proofs, our counterexample merely demonstrates that their given meta-reduction proofs fail to capture all security reductions. More precisely, we adopt a reduction called a query-based reduction, where the reduction uses a hash query from the adversary to solve an underlying hard problem. We show that the meta-reduction proofs break down for our query-based reduction. The query-based reduction is not a new notion and it has been adopted in encryption proofs, but this work is the first to apply a query-based reduction to digital signatures.
The counterexample given in this work is of independent interest, as it implies a generic way of constructing a digital signature scheme (including unique signatures) with a tight reduction in the random oracle model from a digital signature scheme with a loose reduction. Although our proposed methodology is somewhat impractical due to the large signature length, it introduces a completely new approach for tight proofs that is different from traditional approaches using a random salt.
Fuchun Guo, Rongmao Chen, Willy Susilo, Jianchang Lai, Guomin Yang, Yi Mu
Compact Structure-Preserving Signatures with Almost Tight Security
Abstract
In structure-preserving cryptography, every building block shares the same bilinear groups. These groups must be generated for a specific, a priori fixed security level, and thus it is vital that the security reduction of all involved building blocks is as tight as possible. In this work, we present the first generic construction of structure-preserving signature schemes whose reduction cost is independent of the number of signing queries. Its chosen-message security is almost tightly reduced to the chosen-plaintext security of a structure-preserving public-key encryption scheme and the security of the Groth-Sahai proof system. Technically, we adapt the adaptive partitioning technique of Hofheinz (Eurocrypt 2017) to the setting of structure-preserving signature schemes. To achieve a structure-preserving scheme, our new variant of the adaptive partitioning technique relies only on generic group operations in the scheme itself. Interestingly, however, we will use non-generic operations during our security analysis. Instantiated over asymmetric bilinear groups, the security of our concrete scheme is reduced to the external Diffie-Hellman assumption with a reduction cost linear in the security parameter, independently of the number of signing queries. The signatures in our schemes consist of a larger number of group elements than those in other non-tight schemes, but can be verified faster, assuming their security reduction loss is compensated by increasing the security parameter to the next standard level.
Masayuki Abe, Dennis Hofheinz, Ryo Nishimaki, Miyako Ohkubo, Jiaxin Pan
Snarky Signatures: Minimal Signatures of Knowledge from Simulation-Extractable SNARKs
Abstract
We construct a pairing based simulation-extractable SNARK (SE-SNARK) that consists of only 3 group elements and has highly efficient verification. By formally linking SE-SNARKs to signatures of knowledge, we then obtain a succinct signature of knowledge consisting of only 3 group elements.
SE-SNARKs enable a prover to give a proof that they know a witness to an instance in a manner which is: (1) succinct - proofs are short and verifier computation is small; (2) zero-knowledge - proofs do not reveal the witness; (3) simulation-extractable - it is only possible to prove instances to which you know a witness, even when you have already seen a number of simulated proofs.
We also prove that any pairing based signature of knowledge or SE-NIZK argument must have at least 3 group elements and 2 verification equations. Since our constructions match these lower bounds, we have the smallest size signature of knowledge and the smallest size SE-SNARK possible.
Jens Groth, Mary Maller
Fast Secure Two-Party ECDSA Signing
Abstract
ECDSA is a standard digital signature scheme that is widely used in TLS, Bitcoin, and elsewhere. Unlike other schemes such as RSA and Schnorr signatures, it is particularly hard to construct efficient threshold signature protocols for ECDSA (and DSA). As a result, the best-known protocols today for secure distributed ECDSA require running heavy zero-knowledge proofs and computing many large-modulus exponentiations for every signing operation. In this paper, we consider the specific case of two parties (and thus no honest majority) and construct a protocol that is approximately two orders of magnitude faster than the previous best. Concretely, our protocol achieves good performance, with a single signing operation for curve P-256 taking approximately 37 ms between two standard machine types in Azure (utilizing a single core only). Our protocol is proven secure under standard assumptions using a game-based definition. In addition, we prove security by simulation under a plausible yet non-standard assumption regarding Paillier.
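For context, recall the standard single-party ECDSA signing equation (writing q for the group order and x for the secret key): \(R = k\cdot G\), \(r = R_x \bmod q\), \(s = k^{-1}(H(m) + r\,x) \bmod q\). The modular inversion of the ephemeral nonce k is the non-linear step that makes it hard to evaluate a signature on secret-shared k and x, in contrast to the linear structure of Schnorr signatures.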
Yehuda Lindell

Block Ciphers

Frontmatter
Proving Resistance Against Invariant Attacks: How to Choose the Round Constants
Abstract
Many lightweight block ciphers apply a very simple key schedule in which the round keys only differ by addition of a round-specific constant. Generally, there is not much theory on how to choose appropriate constants. In fact, several of those schemes were recently broken using invariant attacks, i.e., invariant subspace or nonlinear invariant attacks. This work analyzes the resistance of such ciphers against invariant attacks and reveals the precise mathematical properties that render those attacks applicable. As a first practical consequence, we prove that some ciphers including Prince, Skinny-64 and \(\textsf {Mantis}_{\mathsf {7}}\) are not vulnerable to invariant attacks. Also, we show that the invariant factors of the linear layer have a major impact on the resistance against those attacks. Most notably, if the number of invariant factors of the linear layer is small (e.g., if its minimal polynomial has a high degree), we can easily find round constants which guarantee the resistance to all types of invariant attacks, independently of the choice of the S-box layer. We also explain how to construct optimal round constants for a given, but arbitrary, linear layer.
Christof Beierle, Anne Canteaut, Gregor Leander, Yann Rotella
Breaking the FF3 Format-Preserving Encryption Standard over Small Domains
Abstract
The National Institute of Standards and Technology (NIST) recently published a Format-Preserving Encryption standard accepting two Feistel-structure-based schemes called FF1 and FF3. In particular, FF3 is a tweakable block cipher based on an 8-round Feistel network. In CCS 2016, Bellare et al. gave an attack to break FF3 (and FF1) with time and data complexity \(O(N^5\log (N))\), which is much larger than the code book (but using many tweaks), where \(N^2\) is the domain size of the Feistel network. In this work, we give a new practical total break attack on the FF3 scheme (also known as the BPS scheme). Our FF3 attack requires \(O(N^{\frac{11}{6}})\) chosen plaintexts with time complexity \(O(N^{5})\). Our attack was successfully tested with \(N\leqslant 2^9\). It is a slide attack (using two tweaks) that exploits the bad domain separation of the FF3 design. Due to this weakness, we reduce the FF3 attack to an attack on a 4-round Feistel network. Biryukov et al. already gave a 4-round Feistel structure attack in SAC 2015. However, it works with chosen plaintexts and ciphertexts, whereas we need a known-plaintext attack. Therefore, we develop a new generic known-plaintext attack on 4-round Feistel networks that reconstructs the entire tables for all round functions. It works with \(N^{\frac{3}{2}} \left( \frac{N}{2} \right) ^{\frac{1}{6}}\) known plaintexts and time complexity \(O(N^{3})\). Our 4-round attack easily extends to five and more rounds with complexity \(N^{(r-5)N+o(N)}\). It shows that FF1 with \(N=7\) and FF3 with \(7\leqslant N\leqslant 10\) do not offer 128-bit security. Finally, we provide an easy and intuitive fix to protect the FF3 scheme against our \(O(N^{5})\) attack.
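As background for the 4-round attack, the analysis treats FF3 as a balanced Feistel network over \(\mathbb {Z}_N \times \mathbb {Z}_N\) (domain size \(N^2\)); a generic sketch of that structure is shown below (illustrative only, not the FF3 specification, whose round functions are derived from AES under a key and tweak):

def feistel_encrypt(left, right, round_funcs, N):
    # Generic balanced Feistel over Z_N x Z_N; round_funcs are per-round maps Z_N -> Z_N.
    for f in round_funcs:
        left, right = right, (left + f(right)) % N
    return left, right

def feistel_decrypt(left, right, round_funcs, N):
    # Inverse: undo the rounds in reverse order.
    for f in reversed(round_funcs):
        left, right = (right - f(left)) % N, left
    return left, right

The known-plaintext attack described above recovers the full table of each round function of such a network.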
F. Betül Durak, Serge Vaudenay
Insuperability of the Standard Versus Ideal Model Gap for Tweakable Blockcipher Security
Abstract
Two types of tweakable blockciphers based on classical blockciphers have been presented in recent years: non-tweak-rekeyable and tweak-rekeyable, depending on whether the tweak may influence the key input to the underlying blockcipher. In the former direction, the best possible security is conjectured to be \(2^{\sigma n/(\sigma +1)}\), where n is the size of the blockcipher and \(\sigma \) is the number of blockcipher calls. In the latter direction, Mennink and Wang et al. presented optimally secure schemes, but only in the ideal cipher model. We investigate the possibility of constructing a tweak-rekeyable cipher that achieves optimal security in the standard cipher model. As a first step, we note that all standard-model security results in the literature implicitly rely on a generic standard-to-ideal transformation, which replaces all keyed blockcipher calls by random secret permutations, at the cost of the security of the blockcipher. Then, we prove that if this proof technique is adopted, tweak-rekeying will not help in achieving optimal security: if \(2^{\sigma n/(\sigma +1)}\) is the best one can get without tweak-rekeying, optimal \(2^n\) provable security with tweak-rekeying is impossible.
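To illustrate the conjectured standard-model bound: for \(\sigma = 1\) it equals the birthday bound \(2^{n/2}\), for \(\sigma = 2\) it equals \(2^{2n/3}\), and it approaches the optimal \(2^{n}\) only as \(\sigma \) grows.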
Bart Mennink
Backmatter
Metadata
Title
Advances in Cryptology – CRYPTO 2017
Edited by
Jonathan Katz
Hovav Shacham
Copyright Year
2017
Electronic ISBN
978-3-319-63715-0
Print ISBN
978-3-319-63714-3
DOI
https://doi.org/10.1007/978-3-319-63715-0