
About this Book

The three-volume proceedings LNCS 10210-10212 constitute the thoroughly refereed proceedings of the 36th Annual International Conference on the Theory and Applications of Cryptographic Techniques, EUROCRYPT 2017, held in Paris, France, in April/May 2017.

The 67 full papers included in these volumes were carefully reviewed and selected from 264 submissions. The papers are organized in topical sections named: lattice attacks and constructions; obfuscation and functional encryption; discrete logarithm; multiparty computation; universal composability; zero knowledge; side-channel attacks and countermeasures; functional encryption; elliptic curves; symmetric cryptanalysis; provable security for symmetric cryptography; security models; blockchain; memory hard functions; symmetric-key constructions; obfuscation; quantum cryptography; public-key encryption and key-exchange.

Table of Contents

Frontmatter

Functional Encryption II

Frontmatter

On Removing Graded Encodings from Functional Encryption

Abstract
Functional encryption (FE) has emerged as an outstanding concept. By now, we know that beyond the immediate application to computation over encrypted data, variants with succinct ciphertexts are so powerful that they yield the full might of indistinguishability obfuscation (IO). Understanding how, and under which assumptions, such succinct schemes can be constructed has become a grand challenge of current research in cryptography. Whereas the first schemes were themselves based on IO, recent progress has produced constructions based on constant-degree graded encodings. Still, our comprehension of such graded encodings remains limited, as the instantiations given so far have exhibited different vulnerabilities.
Our main result is that, assuming LWE, black-box constructions of sufficiently succinct FE schemes from constant-degree graded encodings can be transformed to rely on a much better-understood object — bilinear groups. In particular, under an über assumption on bilinear groups, such constructions imply IO in the plain model. The result demonstrates that the exact level of ciphertext succinctness of FE schemes is of major importance. In particular, we draw a fine line between known FE constructions from constant-degree graded encodings, which just fall short of the required succinctness, and the holy grail of basing IO on better-understood assumptions.
At the heart of our result are new techniques for removing ideal graded encoding oracles from FE constructions. Complementing the result, for weaker ideal models, namely the generic group model and the random oracle model, we show a transformation from collusion-resistant FE in either of the two models directly to FE (and IO) in the plain model, without assuming bilinear groups.
Nir Bitansky, Huijia Lin, Omer Paneth

Functional Encryption: Deterministic to Randomized Functions from Simple Assumptions

Abstract
Functional encryption (FE) enables fine-grained control of sensitive data by allowing users to compute only certain functions for which they have a key. The vast majority of work in FE has focused on deterministic functions, but for several applications, such as privacy-aware auditing, differentially-private data release, proxy re-encryption, and more, the functionality of interest is more naturally captured by a randomized function. Recently, Goyal et al. (TCC 2015) initiated a formal study of FE for randomized functionalities with security against malicious encrypters, and gave a selectively secure construction from indistinguishability obfuscation. To date, this is the only construction of FE for randomized functionalities in the public-key setting. This stands in stark contrast to FE for deterministic functions, which has been realized from a variety of assumptions.
Our key contribution in this work is a generic transformation that converts any general-purpose, public-key FE scheme for deterministic functionalities into one that supports randomized functionalities. Our transformation uses the underlying FE scheme in a black-box way and can be instantiated using very standard number-theoretic assumptions (for instance, the DDH and RSA assumptions suffice). When applied to existing FE constructions, we obtain several adaptively-secure, public-key functional encryption schemes for randomized functionalities with security against malicious encrypters, from many different assumptions such as concrete assumptions on multilinear maps, indistinguishability obfuscation, and, in the bounded-collusion setting, the existence of public-key encryption together with standard number-theoretic assumptions.
Additionally, we introduce a new, stronger definition for malicious security as the existing one falls short of capturing an important class of correlation attacks. In realizing this definition, our compiler combines ideas from disparate domains like related-key security for pseudorandom functions and deterministic encryption in a novel way. We believe that our techniques could be useful in expanding the scope of new variants of functional encryption (e.g., multi-input, hierarchical, and others) to support randomized functionalities.
Shashank Agrawal, David J. Wu

Lattice Attacks and Constructions IV

Frontmatter

Random Sampling Revisited: Lattice Enumeration with Discrete Pruning

Abstract
In 2003, Schnorr introduced random sampling to find very short lattice vectors, as an alternative to enumeration. An improved variant has been used in the past few years by Kashiwabara et al. to solve the largest Darmstadt SVP challenges. However, the behaviour of random sampling and its variants is not well understood: all analyses so far rely on a questionable heuristic assumption, namely that the lattice vectors produced by some algorithm are uniformly distributed over certain parallelepipeds. In this paper, we introduce lattice enumeration with discrete pruning, which generalizes random sampling and its variants, and provides a novel geometric description based on partitions of the n-dimensional space. We obtain what is arguably the first sound analysis of random sampling, by showing how discrete pruning can be rigorously analyzed under the well-known Gaussian heuristic, in the same model as the Gama-Nguyen-Regev analysis of pruned enumeration from EUROCRYPT ’10, albeit using different tools: we show how to efficiently compute the volume of the intersection of a ball with a box, and how to efficiently approximate a large sum of many such volumes, based on statistical inference. Furthermore, we show how to select good parameters for discrete pruning by enumerating integer points in an ellipsoid. Our analysis is backed up by experiments and allows, for the first time, to reasonably estimate the success probability of random sampling and its variants, and to make comparisons with previous forms of pruned enumeration. Our work unifies random sampling and pruned enumeration and shows that they are complementary: they have different characteristics and offer different trade-offs to speed up enumeration.
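The ball-box volume computation above is the crux of the new analysis. As a rough illustration of the quantity involved, here is a minimal Monte Carlo sketch under toy parameters (our illustration only; the paper computes these volumes efficiently and approximates large sums of them via statistical inference, not by naive sampling):

```python
import random

def ball_box_intersection_volume(radius, box, samples=100_000):
    """Monte Carlo estimate of vol(B(0, radius) ∩ box), where box is a
    list of (lo, hi) intervals, one per dimension."""
    box_vol = 1.0
    for lo, hi in box:
        box_vol *= hi - lo
    hits = 0
    for _ in range(samples):
        point = [random.uniform(lo, hi) for lo, hi in box]
        if sum(x * x for x in point) <= radius * radius:
            hits += 1
    return box_vol * hits / samples

# Toy example: 4-dimensional ball of radius 1.2 against the unit box [0,1]^4.
print(ball_box_intersection_volume(1.2, [(0.0, 1.0)] * 4))
```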
Yoshinori Aono, Phong Q. Nguyen

On Dual Lattice Attacks Against Small-Secret LWE and Parameter Choices in HElib and SEAL

Abstract
We present novel variants of the dual-lattice attack against LWE in the presence of an unusually short secret. These variants are informed by recent progress in BKW-style algorithms for solving LWE. Applying them to parameter sets suggested by the homomorphic encryption libraries HElib and SEAL yields revised security estimates. Our techniques scale the exponent of the dual-lattice attack by a factor of \(2L/(2L+1)\) when \(\log q = \varTheta(L \log n)\) and the secret has constant Hamming weight \(h\), where \(L\) is the maximum depth of supported circuits. They also allow halving the dimension of the lattice under consideration at a multiplicative cost of \(2^{h}\) operations. Moreover, our techniques yield revised concrete security estimates. For example, both libraries promise 80 bits of security for LWE instances with \(n=1024\) and \(\log _2 q \approx {47}\), while the techniques described in this work lead to estimated costs of 68 bits (SEAL) and 62 bits (HElib).
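As a quick arithmetic reading of the exponent scaling above (hypothetical numbers; the paper's full cost model involves more than this single factor):

```python
def scaled_exponent(exponent_bits: float, L: int) -> float:
    """Scale the dual-lattice attack exponent by 2L/(2L+1), as stated in
    the abstract; L is the maximum depth of supported circuits."""
    return exponent_bits * (2 * L) / (2 * L + 1)

# Hypothetical example: depth L = 5 and an 80-bit baseline exponent.
print(f"{scaled_exponent(80, 5):.1f}")  # 72.7
```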
Martin R. Albrecht

Small CRT-Exponent RSA Revisited

Abstract
Since May (Crypto’02) revealed the vulnerability of small CRT-exponent RSA using Coppersmith’s lattice-based method, several papers have studied the problem and two major improvements have been made. Bleichenbacher and May (PKC’06) proposed an attack for small \(d_q\) when the prime factor p is significantly smaller than the other prime factor q; the attack works for \(p<N^{0.468}\). Jochemsz and May (Crypto’07) proposed an attack for small \(d_p\) and \(d_q\) where the prime factors p and q are balanced; the attack works for \(d_p,d_q<N^{0.073}\). Even though a decade has passed since their proposals, the above two attacks are still considered to be the state of the art, and no improvements have been made thus far. A novel technique seems to be required for further improvements, since the attacks have already been studied with all the applicable techniques for Coppersmith’s methods proposed by Durfee-Nguyen (Asiacrypt’00), Jochemsz-May (Asiacrypt’06), and Herrmann-May (Asiacrypt’09, PKC’10). In this paper, we propose two improved attacks on small CRT-exponent RSA: a small \(d_q\) attack for \(p<N^{0.5}\) (an improvement of Bleichenbacher-May’s) and a small \(d_p\) and \(d_q\) attack for \(d_p,d_q<N^{0.091}\) (an improvement of Jochemsz-May’s). We use Coppersmith’s lattice-based method to solve modular equations and obtain the improvements from a novel lattice construction that exploits useful algebraic structures of the CRT-RSA key generation. We give explicit proofs of our attacks and verify their validity through computer experiments. In addition to the two main attacks, we propose small \(d_q\) attacks on several variants of RSA.
Atsushi Takayasu, Yao Lu, Liqiang Peng

Multiparty Computation II

Frontmatter

Group-Based Secure Computation: Optimizing Rounds, Communication, and Computation

Abstract
A recent work of Boyle et al. (Crypto 2016) suggests that “group-based” cryptographic protocols, namely ones that only rely on a cryptographically hard (Abelian) group, can be surprisingly powerful. In particular, they present succinct two-party protocols for securely computing branching programs and \({\mathsf{NC}^1}\) circuits under the DDH assumption, providing the first alternative to fully homomorphic encryption.
In this work we further explore the power of group-based secure computation protocols, improving both their asymptotic and concrete efficiency. We obtain the following results.
  • Black-box use of the group. We modify the succinct protocols of Boyle et al. so that they make only black-box use of the underlying group, eliminating an expensive non-black-box setup phase.
  • Round complexity. For any constant number of parties, we obtain 2-round MPC protocols based on a PKI setup under the DDH assumption. Prior to our work, such protocols were only known using fully homomorphic encryption or indistinguishability obfuscation.
  • Communication complexity. Under DDH, we present a secure 2-party protocol for any \({\mathsf{NC}^1}\) or log-space computation with n input bits and m output bits using \(n+(1+o(1)) m+\mathsf{poly}(\lambda )\) bits of communication, where \(\lambda \) is a security parameter. In particular, our protocol can generate n instances of bit-oblivious-transfer using \((4+o(1))\cdot n\) bits of communication. This gives the first constant-rate OT protocol under DDH (a back-of-the-envelope reading of these bounds appears after this list).
  • Computation complexity. We present several techniques for improving the computational cost of the share conversion procedure of Boyle et al., improving the concrete efficiency of group-based protocols by several orders of magnitude.
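As a back-of-the-envelope reading of the communication bounds above (a sketch that drops the \(o(1)\) terms and treats \(\mathsf{poly}(\lambda)\) as a fixed constant, both simplifying assumptions of ours):

```python
def two_party_comm_bits(n: int, m: int, poly_lambda: int) -> int:
    """n + (1+o(1))m + poly(lambda) bits, with the o(1) term dropped."""
    return n + m + poly_lambda

def bit_ot_comm_bits(n: int) -> int:
    """(4+o(1))n bits for n bit-OTs, again with the o(1) term dropped."""
    return 4 * n

# Hypothetical parameters: 10^6 input bits, 10^3 output bits, 10^4 overhead.
print(two_party_comm_bits(10**6, 10**3, 10**4))  # 1011000
print(bit_ot_comm_bits(10**6))                   # 4000000
```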
Elette Boyle, Niv Gilboa, Yuval Ishai

On the Exact Round Complexity of Self-composable Two-Party Computation

Abstract
The round complexity of secure computation has been a fundamental problem in cryptography. Katz and Ostrovsky proved that 5 rounds are both necessary and sufficient for secure computation in the stand-alone setting, thus resolving the exact round complexity of stand-alone secure computation.
In contrast, the round complexity of secure computation in the concurrent setting, where several protocols may run simultaneously, is poorly understood. Since standard polynomial-time simulation is impossible in the concurrent setting, alternative security notions have been proposed, e.g., super-polynomial simulation (SPS). While SPS security can be achieved in constant rounds, the actual constant (\(> 20\)) is far from optimal.
In this work, we take the first steps towards studying the exact round complexity of concurrent secure computation. We focus on the two party case and present a new secure computation protocol that achieves SPS security under concurrent self-composition. Our protocol has 5 rounds assuming quasi-polynomially-hard injective one-way functions (or 7 rounds assuming standard polynomially-hard collision-resistant hash functions). We also require other standard assumptions, specifically trapdoor OWPs and lossy TDFs. This matches the rounds for stand-alone secure computation.
More specifically, our security proof presents a polynomial time reduction from SPS security to 3-round public-coin non-malleable commitments with appropriate extractability properties. Such commitments are known based on quasi-polynomially-hard injective OWFs. (The reduction also works with a special 6-round non-malleable commitment to yield the 7-round result under CRHFs.)
Sanjam Garg, Susumu Kiyoshima, Omkant Pandey

High-Throughput Secure Three-Party Computation for Malicious Adversaries and an Honest Majority

Abstract
In this paper, we describe a new protocol for secure three-party computation of any functionality, with an honest majority and a malicious adversary. Our protocol has both an information-theoretic and computational variant, and is distinguished by extremely low communication complexity and very simple computation. We start from the recent semi-honest protocol of Araki et al. (ACM CCS 2016) in which the parties communicate only a single bit per AND gate, and modify it to be secure in the presence of malicious adversaries. Our protocol follows the paradigm of first constructing Beaver multiplication triples and then using them to verify that circuit gates are correctly computed. As in previous work (e.g., the so-called TinyOT and SPDZ protocols), we rely on the cut-and-choose paradigm to verify that triples are correctly constructed. We are able to utilize the fact that at most one of three parties is corrupted in order to construct an extremely simple and efficient method of constructing such triples. We also present an improved combinatorial analysis for this cut-and-choose which can be used to achieve improvements in other protocols using this approach.
Jun Furukawa, Yehuda Lindell, Ariel Nof, Or Weinstein

Symmetric Cryptanalysis I

Frontmatter

Conditional Cube Attack on Reduced-Round Keccak Sponge Function

Abstract
The security analysis of Keccak, the winner of the SHA-3 competition, has attracted considerable interest. Recently, some attention has been paid to the analysis of keyed modes of the Keccak sponge function. As a notable example, the most efficient key recovery attacks on Keccak-MAC and Keyak were reported at EUROCRYPT’15, where cube attacks and cube-attack-like cryptanalysis were applied. In this paper, we develop a new type of cube distinguisher, the conditional cube tester, for the Keccak sponge function. By imposing bit conditions on certain cube variables, we are able to construct cube testers with smaller dimensions. Our conditional cube testers are used to analyse Keccak in keyed modes. For reduced-round Keccak-MAC and Keyak, our attacks greatly improve the best known key recovery attacks in terms of the number of rounds or the complexity. Moreover, our new model can also be applied in the keyless setting to distinguish the Keccak sponge function from a random permutation. We provide a search algorithm that produces the most efficient conditional cube tester by modeling the problem as a mixed integer linear programming (MILP) problem. As a result, we significantly improve the previous distinguishing attacks on the Keccak sponge function. Most of our attacks have been implemented and verified on desktop computers. Finally, we remark that our attacks on reduced-round Keccak do not threaten the security margin of the Keccak sponge function.
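To fix ideas, here is the generic cube-sum core on which such testers build (a toy sketch of ours; the paper's conditional cube tester additionally imposes bit conditions on certain cube variables to shrink the cube dimension, and targets Keccak rather than the toy function below):

```python
from itertools import product

def cube_sum(f, cube_size, fixed_bits):
    """XOR f over all 2^cube_size assignments to the cube variables.
    If f has algebraic degree below cube_size in those variables the sum
    is 0, while for a random function it is an unbiased coin, which
    yields a distinguisher."""
    acc = 0
    for assignment in product([0, 1], repeat=cube_size):
        acc ^= f(list(assignment) + fixed_bits)
    return acc

# Toy target of degree 2: any 3-dimensional cube sums it to 0.
toy_f = lambda bits: (bits[0] & bits[1]) ^ bits[2]
print(cube_sum(toy_f, cube_size=3, fixed_bits=[]))  # 0
```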
Senyang Huang, Xiaoyun Wang, Guangwu Xu, Meiqin Wang, Jingyuan Zhao

A New Structural-Differential Property of 5-Round AES

Abstract
AES is probably the most widely studied and used block cipher. Versions with a reduced number of rounds are also used as a building block in many cryptographic schemes; for example, several candidates in the SHA-3 and CAESAR competitions are based on them.
So far, non-random properties which are independent of the secret key are known for up to 4 rounds of AES. These include differential, impossible differential, and integral properties.
In this paper we describe a new structural property for up to 5 rounds of AES. It is differential in nature and independent of the secret key, of the details of the MixColumns matrix (with the exception that the branch number must be maximal), and of the SubBytes operation. It is very simple: by appropriate choices of difference for a number of input pairs, it is possible to ensure that the number of resulting output pairs whose difference lies in a particular subspace is always a multiple of 8.
We not only observe this property experimentally (using a small-scale version of AES), we also give a detailed proof of why it must exist. As a first application of this property, we describe a way to distinguish the 5-round AES permutation (or its inverse) from a random permutation with only \(2^{32}\) chosen texts, a computational cost of \(2^{35.6}\) look-ups into a memory of size \(2^{36}\) bytes, and a success probability greater than 99%.
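A minimal rendering of the resulting distinguisher loop (our sketch; `encrypt_5r`, `in_subspace`, and the pair-selection rule are assumed oracles, not code from the paper):

```python
def multiple_of_8_test(encrypt_5r, in_subspace, input_pairs):
    """Count input pairs whose output difference lies in the fixed
    subspace. For 5-round AES with suitably chosen pairs the count is
    always a multiple of 8; for a random permutation this holds only
    with probability about 1/8."""
    count = 0
    for p0, p1 in input_pairs:
        diff = bytes(a ^ b for a, b in zip(encrypt_5r(p0), encrypt_5r(p1)))
        if in_subspace(diff):
            count += 1
    return count % 8 == 0
```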
Lorenzo Grassi, Christian Rechberger, Sondre Rønjom

Zero Knowledge II

Frontmatter

Removing the Strong RSA Assumption from Arguments over the Integers

Abstract
Committing to integers and proving relations between them is an essential ingredient in many cryptographic protocols. Among these, range proofs have been shown to be fundamental. They consist in proving that a committed integer lies in a public interval, which can be seen as a particular case of the more general class of Diophantine relations: for a committed vector of integers \(\varvec{x}\), there exists a vector of integers \(\varvec{w}\) such that \(P(\varvec{x},\varvec{w})=0\), where \(P\) is a polynomial.
In this paper, we revisit the security strength of the statistically hiding commitment scheme over the integers due to Damgård-Fujisaki, and the zero-knowledge proofs of knowledge of openings. Our first main contribution shows how to remove the Strong RSA assumption and replace it by the standard RSA assumption in the security proofs. This improvement naturally extends to generalized commitments and more complex proofs without modifying the original protocols.
As a second contribution, we design an interactive technique turning a commitment scheme over the integers into a commitment scheme modulo a prime p. Still under the RSA assumption, this results in more efficient proofs of relations between committed values. Our methods thus improve upon existing proof systems for Diophantine relations both in terms of performance and security. We illustrate this with more efficient range proofs under the RSA assumption alone.
Geoffroy Couteau, Thomas Peters, David Pointcheval

Magic Adversaries Versus Individual Reduction: Science Wins Either Way

Abstract
We prove that, assuming there exists an injective one-way function f, at least one of the following statements is true:
  • (Infinitely-often) Non-uniform public-key encryption and key agreement exist;
  • The Feige-Shamir protocol instantiated with f is distributional concurrent zero knowledge for a large class of distributions over any OR NP-relations with small distinguishability gap.
The questions of whether we can achieve these goals are known to be subject to black-box limitations. Our win-win result also establishes an unexpected connection between the complexity of public-key encryption and the round-complexity of concurrent zero knowledge.
As the main technical contribution, we introduce a dissection procedure for concurrent adversaries, which enables us to transform a magic concurrent adversary that breaks the distributional concurrent zero knowledge of the Feige-Shamir protocol into non-black-box constructions of (infinitely-often) public-key encryption and key agreement.
This dissection of complex algorithms gives insight into the fundamental gap between the known universal security reductions/simulations, in which a single reduction algorithm or simulator works for all adversaries, and the natural security definitions (which are sufficient for almost all cryptographic primitives/protocols), which switch the order of quantifiers and only require that for every adversary there exists an individual reduction or simulator.
Yi Deng

Provable Security for Symmetric Cryptography I

Frontmatter

The Multi-user Security of Double Encryption

Abstract
It is widely known that double encryption does not substantially increase the security of a block cipher. Indeed, the classical meet-in-the-middle attack recovers the 2k-bit secret key at the cost of roughly \(2^k\) off-line enciphering operations, in addition to very few known plaintext-ciphertext pairs; that is, essentially as efficiently as attacking the underlying cipher with a k-bit key.
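For concreteness, a minimal sketch of that classical attack on a toy cipher (the 8-bit "cipher" below is our stand-in, chosen only to keep the demo instant):

```python
def meet_in_the_middle(enc, dec, p, c, key_bits):
    """Find candidate pairs (k1, k2) with E_k2(E_k1(p)) = c using about
    2^key_bits work and memory, instead of 2^(2*key_bits) work."""
    forward = {enc(k1, p): k1 for k1 in range(2 ** key_bits)}
    return [(forward[dec(k2, c)], k2)
            for k2 in range(2 ** key_bits)
            if dec(k2, c) in forward]

# Toy 8-bit cipher (insecure, for illustration only).
enc = lambda k, x: (x + k) % 256
dec = lambda k, y: (y - k) % 256
c = enc(17, enc(42, 5))
cands = meet_in_the_middle(enc, dec, 5, c, key_bits=8)
print((42, 17) in cands, len(cands))  # True 256; a second known pair
                                      # filters the false candidates.
```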
This paper revisits double encryption under the lens of multi-user security. We prove that its security degrades only very mildly with an increasing number of users, as opposed to single encryption, where security drops linearly. More concretely, we give a tight bound for the multi-user security of double encryption as a pseudorandom permutation in the ideal-cipher model, and describe matching attacks.
Our contribution is also conceptual: To prove our result, we enhance and generalize the generic technique recently proposed by Hoang and Tessaro for lifting single-user to multi-user security. We believe this technique to be broadly applicable.
Viet Tung Hoang, Stefano Tessaro

Public-Seed Pseudorandom Permutations

Abstract
This paper initiates the study of standard-model assumptions on permutations – or more precisely, on families of permutations indexed by a public seed. We introduce and study the notion of a public-seed pseudorandom permutation (psPRP), which is inspired by the UCE notion by Bellare, Hoang, and Keelveedhi (CRYPTO ’13). It considers a two-stage security game, where the first-stage adversary is known as the source, and is restricted to prevent trivial attacks – the security notion is consequently parameterized by the class of allowable sources. To this end, we define in particular unpredictable and reset-secure sources analogous to similar notions for UCEs.
We first study the relationship between psPRPs and UCEs. To start with, we provide efficient constructions of UCEs from psPRPs for both reset-secure and unpredictable sources, thus showing that most applications of the UCE framework admit instantiations from psPRPs. We also show a converse of this statement, namely that the five-round Feistel construction yields a psPRP for reset-secure sources when the round function is built from UCEs for reset-secure sources, hence making psPRP and UCE equivalent notions for such sources.
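For reference, the balanced Feistel network at the core of that construction (a generic sketch of ours; the paper instantiates five rounds with UCE-based round functions, whereas the round functions below are arbitrary toy callables):

```python
def feistel(round_fns, left, right):
    """Apply one Feistel round per round function: (L, R) -> (R, L ^ f(R))."""
    for f in round_fns:
        left, right = right, left ^ f(right)
    return left, right

def feistel_inverse(round_fns, left, right):
    """Undo the rounds in reverse order."""
    for f in reversed(round_fns):
        left, right = right ^ f(left), left
    return left, right

# Five toy rounds on 16-bit halves.
rounds = [lambda x, k=k: (x * 2654435761 + k) & 0xFFFF for k in range(5)]
l, r = feistel(rounds, 0x1234, 0x5678)
assert feistel_inverse(rounds, l, r) == (0x1234, 0x5678)
```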
In addition to studying such reductions, we suggest generic instantiations of psPRPs from both block ciphers and (keyless) permutations, and analyze them in ideal models. Also, as an application of our notions, we show that a simple modification of a recent highly-efficient garbling scheme by Bellare et al. (S&P ’13) is secure under our psPRP assumption.
Pratik Soni, Stefano Tessaro

Security Models I

Frontmatter

Cryptography with Updates

Abstract
Starting with the work of Bellare, Goldreich and Goldwasser [CRYPTO’94], a rich line of work has studied the design of updatable cryptographic primitives. For example, in an updatable signature scheme, it is possible to efficiently transform a signature over a message into a signature over a related message without recomputing a fresh signature.
In this work, we continue this line of research, and perform a systematic study of updatable cryptography. We take a unified approach towards adding updatability features to recently studied cryptographic objects such as attribute-based encryption, functional encryption, witness encryption, indistinguishability obfuscation, and many others that support non-interactive computation over inputs. We, in fact, go further and extend our approach to classical protocols such as zero-knowledge proofs and secure multiparty computation.
To accomplish this goal, we introduce a new notion of updatable randomized encodings that extends the standard notion of randomized encodings to incorporate updatability features. We show that updatable randomized encodings can be used to generically transform cryptographic primitives to their updatable counterparts.
We provide various definitions and constructions of updatable randomized encodings based on varying assumptions, ranging from one-way functions to compact functional encryption.
Prabhanjan Ananth, Aloni Cohen, Abhishek Jain

Fixing Cracks in the Concrete: Random Oracles with Auxiliary Input, Revisited

Abstract
We revisit the security of cryptographic primitives in the random-oracle model against attackers having a bounded amount of auxiliary information about the random oracle. This situation arises most naturally when an attacker carries out offline preprocessing to generate state (namely, auxiliary information) that is later used as part of an online attack, with perhaps the best-known example being the use of rainbow tables for function inversion. The resulting model is also critical for obtaining accurate bounds against non-uniform attackers when the random oracle is instantiated by a concrete hash function.

Unruh (Crypto 2007) introduced a generic technique (called pre-sampling) for analyzing security in this model: a random oracle about which S bits of arbitrary auxiliary information are known can be replaced by a random oracle whose value is fixed in some way on P points; the two are distinguishable with probability at most \(O(\sqrt{ST/P})\) by attackers making at most T oracle queries. Unruh conjectured that the distinguishing advantage could be made negligible for a sufficiently large polynomial P. We show that Unruh’s conjecture is false by proving that the distinguishing probability is at least \(\varOmega (ST/P)\).
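To see why this lower bound refutes the conjecture (a one-line instantiation added here for clarity, not text from the paper): since \(S, T \ge 1\), for every polynomial \(P = P(n)\) the bound gives

```latex
\mathrm{Adv} \;\ge\; \varOmega\!\left(\frac{ST}{P}\right)
  \;\ge\; \varOmega\!\left(\frac{1}{P(n)}\right)
  \;=\; \frac{1}{\mathrm{poly}(n)},
```

which is non-negligible, so no polynomial amount of pre-sampling can make the distinguishing advantage negligible.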
Faced with this negative general result, we establish new security bounds, which are nearly optimal and beat the pre-sampling bounds, for specific applications of random oracles, including one-way functions, pseudorandom functions/generators, and message authentication codes. We also explore the effectiveness of salting as a mechanism to defend against offline preprocessing, and give quantitative bounds demonstrating that salting provably helps in the context of one-wayness, collision-resistance, pseudorandom generators/functions, and message authentication codes. In each case, using (at most) n bits of salt, where n is the length of the secret key, we get the same security \(O(T/2^n)\) in the random-oracle model with auxiliary input as we get without auxiliary input.
At the heart of our results is the compression technique of Gennaro and Trevisan, and its extensions by De, Trevisan and Tulsiani.
Yevgeniy Dodis, Siyao Guo, Jonathan Katz

Provable Security for Symmetric Cryptography II

Frontmatter

Modifying an Enciphering Scheme After Deployment

Abstract
Assume that a symmetric encryption scheme has been deployed and used with a secret key. We must later change the encryption scheme in a way that preserves the ability to decrypt (a subset of) previously encrypted plaintexts. Frequent real-world examples are migrating from a token-based encryption system for credit-card numbers to a format-preserving encryption (FPE) scheme, or extending the message space of an already deployed FPE scheme. The ciphertexts may be stored in systems from which it is not easy or efficient to retrieve them (in order to re-encrypt the plaintexts under the new scheme).
We introduce methods for functionality-preserving modifications to encryption, focusing particularly on deterministic, length-preserving ciphers such as those used to perform format-preserving encryption. We provide a new technique, which we refer to as the Zig-Zag construction, that allows one to combine two ciphers with different domains in a way that results in a secure cipher on one of the domains. We explore its use in the two settings above: replacing token-based systems and extending message spaces. We develop appropriate security goals and prove security relative to them, assuming the underlying ciphers are themselves secure as strong pseudorandom permutations.
Paul Grubbs, Thomas Ristenpart, Yuval Yarom

Separating Semantic and Circular Security for Symmetric-Key Bit Encryption from the Learning with Errors Assumption

Abstract
In this work we separate private-key semantic security from 1-circular security for bit encryption using the Learning with Errors assumption. Prior works used the less standard assumptions of multilinear maps or indistinguishability obfuscation. To achieve our results we develop new techniques for obliviously evaluating branching programs.
Rishab Goyal, Venkata Koppula, Brent Waters

Security Models II

Frontmatter

Toward Fine-Grained Blackbox Separations Between Semantic and Circular-Security Notions

Abstract
We address the problems of whether t-circular-secure encryption can be based on \((t-1)\)-circular-secure encryption or, for \(t = 1\), on semantic (CPA) security. While for \(t = 1\) a folklore construction, based on CPA-secure encryption, can be used to build a 1-circular-secure encryption scheme with the same secret-key and message space, no such constructions are known for the bit-encryption case, which is of particular importance in fully-homomorphic encryption. Also, all constructions of t-circular encryption (bitwise or otherwise) are based on specific assumptions.
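One common rendering of that folklore construction, as a hedged sketch (our illustration, assuming the key space is contained in the message space; not code from the paper):

```python
class OneCircular:
    """Patch a CPA-secure scheme (enc, dec) so that encrypting the secret
    key itself yields a flagged, simulatable ciphertext, giving 1-circular
    security with the same secret-key and message space."""
    def __init__(self, enc, dec, key):
        self.enc, self.dec, self.key = enc, dec, key

    def encrypt(self, m):
        if m == self.key:                       # the length-1 key cycle
            return (1, self.enc(self.key, b"\x00" * len(m)))
        return (0, self.enc(self.key, m))

    def decrypt(self, ct):
        flag, c = ct
        return self.key if flag else self.dec(self.key, c)
```

The flagged branch is exactly what is unavailable for bit encryption, where each message is a single bit and cannot encode the key; the separations below target that case.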
We make progress toward these problems by ruling out all fully-blackbox constructions of
  • 1-seed-circular-secure bit encryption from CPA-secure encryption;
  • t-seed-circular-secure encryption from \((t-1)\)-seed-circular-secure encryption, for any \(t > 1\).
Informally, seed-circular security is a variant of the circular security notion in which the seed of the key-generation algorithm, instead of the secret key, is encrypted. We also show how to extend our first result to rule out a large and non-trivial class of constructions of 1-circular-secure bit encryption, which we dub key-isolating constructions. Our separations follow the model of Gertner, Malkin and Reingold (FOCS’01), which is a weaker separation model than that of Impagliazzo and Rudich.
Mohammad Hajiabadi, Bruce M. Kapron

A Note on Perfect Correctness by Derandomization

Abstract
We show a general compiler that transforms a large class of erroneous cryptographic schemes (such as public-key encryption, indistinguishability obfuscation, and secure multiparty computation schemes) into perfectly correct ones. The transformation works for schemes that are correct on all inputs with probability noticeably larger than half, and are secure under parallel repetition. We assume the existence of one-way functions and of functions with deterministic (uniform) time complexity \(2^{O(n)}\) and non-deterministic circuit complexity \(2^{\varOmega (n)}\).
Our transformation complements previous results that showed how public-key encryption and indistinguishability obfuscation that err on a noticeable fraction of inputs can be turned into ones that are often correct for all inputs.
The technique relies on the idea of “reverse randomization” (Naor, Crypto 1989) and on Nisan-Wigderson style derandomization, previously used in cryptography to remove interaction from witness-indistinguishable proofs and commitment schemes (Barak, Ong and Vadhan, Crypto 2003).
Nir Bitansky, Vinod Vaikuntanathan

Blockchain

Frontmatter

Decentralized Anonymous Micropayments

Abstract
Micropayments (payments worth a few pennies) have numerous potential applications. A challenge in achieving them is that payment networks charge fees that are high compared to “micro” sums of money.
Wheeler (1996) and Rivest (1997) proposed probabilistic payments as a technique to achieve micropayments: a merchant receives a macro-value payment with a given probability so that, in expectation, he receives a micro-value payment. Despite much research and trial deployment, micropayment schemes have not seen adoption, partly because a trusted party is required to process payments and resolve disputes.
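The expected-value argument is simple enough to state in a few lines (a toy simulation of ours, not the paper's ledger-based protocol):

```python
import random

def probabilistic_payment(micro_value, macro_value):
    """Wheeler/Rivest-style probabilistic payment: transfer macro_value
    with probability micro/macro, so the expected transfer is micro_value."""
    return macro_value if random.random() < micro_value / macro_value else 0.0

# 1-cent micropayments realized as $10 payments with probability 1/1000.
trials = 100_000
total = sum(probabilistic_payment(0.01, 10.0) for _ in range(trials))
print(total / trials)  # ~0.01 on average
```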
The widespread adoption of decentralized currencies such as Bitcoin (2009) suggests that decentralized micropayment schemes are easier to deploy. Pass and Shelat (2015) proposed several micropayment schemes for Bitcoin, but their schemes provide no more privacy guarantees than Bitcoin itself, whose transactions are recorded in plaintext in a public ledger.
We formulate and construct decentralized anonymous micropayment (DAM) schemes, which enable parties with access to a ledger to conduct offline probabilistic payments with one another, directly and privately. Our techniques extend those of Zerocash (2014) with a new privacy-preserving probabilistic payment protocol. One of the key ingredients of our construction is fractional message transfer (FMT), a primitive that enables probabilistic message transmission between two parties, and for which we give an efficient instantiation.
Double spending in our setting cannot be prevented. Our second contribution is an economic analysis that bounds the additional utility gain of any cheating strategy, and applies to virtually any probabilistic payment scheme with offline validation. In our construction, this bound allows us to deter double spending by way of advance deposits that are revoked when cheating is detected.
Alessandro Chiesa, Matthew Green, Jingcheng Liu, Peihan Miao, Ian Miers, Pratyush Mishra

Analysis of the Blockchain Protocol in Asynchronous Networks

Abstract
Nakamoto’s famous blockchain protocol enables achieving consensus in a so-called permissionless setting—anyone can join (or leave) the protocol execution, and the protocol instructions do not depend on the identities of the players. His ingenious protocol prevents “sybil attacks” (where an adversary spawns any number of new players) by relying on computational puzzles (a.k.a. “moderately hard functions”) introduced by Dwork and Naor (Crypto’92).
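A toy instance of such a puzzle (a hedged sketch in the Dwork-Naor/Hashcash style; Nakamoto's actual header format and difficulty encoding differ):

```python
import hashlib

def solve_puzzle(data: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that SHA-256(data || nonce) has difficulty_bits
    leading zero bits; expected work is about 2^difficulty_bits hashes."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(solve_puzzle(b"block header", difficulty_bits=16))
```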
The analysis of the blockchain consensus protocol (a.k.a. Nakamoto consensus) has been a notoriously difficult task. Prior works that analyze it either make the simplifying assumption that network channels are fully synchronous (i.e. messages are instantly delivered without delays) (Garay et al., Eurocrypt’15) or only consider specific attacks (Nakamoto’08; Sompolinsky and Zohar, FinancialCrypt’15); additionally, as far as we know, none of them deal with players joining or leaving the protocol.
In this work we prove that the blockchain consensus mechanism satisfies strong forms of consistency and liveness in an asynchronous network with adversarial delays that are a priori bounded, within a formal model allowing for adaptive corruption and spawning of new players, assuming that the computational puzzle is modeled as a random oracle. (We complement this result by showing a simple attack against the blockchain protocol in a fully asynchronous setting, showing that the “puzzle-hardness” needs to be appropriately set as a function of the maximum network delay; this attack applies even for static corruption.)
As an independent contribution, we define an abstract blockchain protocol and identify appropriate security properties of such protocols; we prove that Nakamoto’s blockchain protocol satisfies them and that these properties are sufficient for typical applications; we hope that this abstraction may simplify further applications of blockchains.
Rafael Pass, Lior Seeman, Abhi Shelat

Backmatter
