2017 | Book

Theory of Cryptography

15th International Conference, TCC 2017, Baltimore, MD, USA, November 12-15, 2017, Proceedings, Part I

About this book

The two-volume set LNCS 10677 and LNCS 10678 constitutes the refereed proceedings of the 15th International Conference on Theory of Cryptography, TCC 2017, held in Baltimore, MD, USA, in November 2017.

The 51 revised full papers presented in the two volumes were carefully reviewed and selected from 150 submissions. The Theory of Cryptography Conference deals with the paradigms, approaches, and techniques used to conceptualize natural cryptographic problems and to provide algorithmic solutions to them.

Table of Contents

Frontmatter

Impossibilities and Barriers

Frontmatter
Barriers to Black-Box Constructions of Traitor Tracing Systems
Abstract
Reducibility between different cryptographic primitives is a fundamental problem in modern cryptography. As one such primitive, traitor tracing systems help content distributors recover the identities of users who collaborated in building a pirate decryption box, by tracing such boxes. We present the first negative result on designing efficient traitor tracing systems via black-box constructions from symmetric cryptographic primitives, e.g., one-way functions. More specifically, we show that there is no secure traitor tracing scheme in the random oracle model with \(\ell _k\cdot \ell _c^2<\widetilde{\varOmega }(n)\), where \(\ell _k\) is the length of a user key, \(\ell _c\) is the length of a ciphertext, and n is the number of users, under the assumption that the scheme does not access the oracle to generate private user keys. To the best of our knowledge, all existing cryptographic schemes (not limited to traitor tracing systems) obtained via black-box constructions from one-way functions satisfy this assumption. Thus, our negative result indicates that most of the standard black-box reductions in cryptography cannot help construct a more efficient traitor tracing system.
We prove our results by extending the connection between traitor tracing systems and differentially private database sanitizers to the setting with random oracle access. After that, we prove the lower bound for traitor tracing schemes by constructing a differentially private sanitizer that only queries the random oracle polynomially many times. In order to reduce the query complexity of the sanitizer, we prove a large deviation bound for decision forests, which might be of independent interest.
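One quantitative consequence of this bound, using only the parameters defined above: any such random-oracle scheme must trade key length against ciphertext length, e.g.
\[
\ell _k\cdot \ell _c^{\,2}\;\ge\;\widetilde{\varOmega }(n)
\quad\Longrightarrow\quad
\ell _k=O(1)\ \text{ forces }\ \ell _c\ge \widetilde{\varOmega }\big (\sqrt{n}\big ).
\]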
Bo Tang, Jiapeng Zhang
On the Impossibility of Entropy Reversal, and Its Application to Zero-Knowledge Proofs
Abstract
Zero knowledge proof systems have been widely studied in cryptography. In the statistical setting, two classes of proof systems studied are Statistical Zero Knowledge (SZK) and Non-Interactive Statistical Zero Knowledge (NISZK), where the difference is that in NISZK only very limited communication is allowed between the verifier and the prover. It is an open problem whether these two classes are in fact equal. In this paper, we rule out efficient black box reductions between SZK and NISZK.
We achieve this by studying algorithms which can reverse the entropy of a function. The problem of estimating the entropy of a circuit is complete for NISZK. Hence, reversing the entropy of a function is equivalent to a black box reduction of NISZK to its complement, which is known to be equivalent to a black box reduction of SZK to NISZK [Goldreich et al. CRYPTO 1999]. We show that any such black box algorithm incurs an exponential loss of parameters, and hence cannot be implemented efficiently.
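For context, the NISZK-complete problem referred to here is the Entropy Approximation promise problem of Goldreich et al.; roughly stated (thresholds as in that line of work), one must distinguish
\[
\big \{(C,k)\,:\,H\big (C(U_n)\big )\ge k+1\big \}
\quad\text{from}\quad
\big \{(C,k)\,:\,H\big (C(U_n)\big )\le k-1\big \},
\]
where \(H\) is the Shannon entropy of the circuit's output on a uniform input. "Reversing" entropy amounts to mapping instances of this problem to instances of its complement.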
Shachar Lovett, Jiapeng Zhang
Position-Based Cryptography and Multiparty Communication Complexity
Abstract
Position based cryptography (PBC), proposed in the seminal work of Chandran, Goyal, Moriarty, and Ostrovsky (SIAM J. Computing, 2014), aims at constructing cryptographic schemes in which the identity of the user is his geographic position. Chandran et al. construct PBC schemes for secure positioning and position-based key agreement in the bounded-storage model (Maurer, J. Cryptology, 1992). Apart from bounded memory, their security proofs need a strong additional restriction on the power of the adversary: he cannot compute joint functions of his inputs. Removing this assumption is left as an open problem.
We show that an answer to this question would resolve a long-standing open problem in multiparty communication complexity: finding a function that is hard to compute with low communication complexity in the simultaneous message model, but easy to compute in the fully adaptive model.
On the positive side, we also show implications in the other direction: we prove that lower bounds on the communication complexity of certain multiparty problems imply the existence of PBC primitives. Using this result we then show two attractive ways to "bypass" our hardness result: the first uses the random oracle model, the second weakens the locality requirement in the bounded-storage model to online computability. The random oracle construction is arguably one of the simplest proposed so far in this area. Our results indicate that constructing improved provably secure protocols for PBC requires a better understanding of multiparty communication complexity. This is yet another example where negative results in one area (in our case: lower bounds in multiparty communication complexity) can be used to construct secure cryptographic schemes.
Joshua Brody, Stefan Dziembowski, Sebastian Faust, Krzysztof Pietrzak
When Does Functional Encryption Imply Obfuscation?
Abstract
Realizing indistinguishability obfuscation (IO) based on well-understood computational assumptions is an important open problem. Recently, realizing functional encryption (FE) has emerged as a promising direction towards that goal. This is because: (1) compact single-key FE (where the functional secret-key is of length double the ciphertext length) is known to imply IO [Ananth and Jain, CRYPTO 2015; Bitansky and Vaikuntanathan, FOCS 2015] and (2) several strong variants of single-key FE are known based on various standard computational assumptions.
In this work, we study when FE can be used for obtaining IO. We show that any single-key FE for function families with "short" enough outputs (specifically, the output length is less than the ciphertext length by at least \(\omega (n + \mathrm {\kappa })\), where n is the message length and \(\mathrm {\kappa }\) is the security parameter) is insufficient for IO, even when non-black-box use of the underlying FE is allowed to some degree. Namely, our impossibility result holds even if we are allowed to plant FE sub-routines as gates inside the circuits for which functional secret-keys are issued (which is exactly how the known FE-to-IO constructions work).
Complementing our negative result, we show that our condition of "short" enough outputs is almost tight. More specifically, we show that any compact single-key FE with functional secret-key output length strictly larger than the ciphertext length is sufficient for IO. Furthermore, we show that non-black-box use of the underlying FE is necessary for such a construction, by ruling out any fully black-box construction of IO from FE even with arbitrarily long output.
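In summary, writing \(\ell _{\mathrm {out}}\) for the functional secret-key output length and \(\ell _c\) for the ciphertext length (notation ours), the two results leave only a narrow gap:
\[
\ell _{\mathrm {out}}\le \ell _c-\omega (n+\mathrm {\kappa })\;\Rightarrow\;\text{insufficient for IO (even with limited non-black-box use)},
\qquad
\ell _{\mathrm {out}}>\ell _c\;\Rightarrow\;\text{sufficient for IO}.
\]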
Sanjam Garg, Mohammad Mahmoody, Ameer Mohammed

Obfuscation

Frontmatter
Limits on the Locality of Pseudorandom Generators and Applications to Indistinguishability Obfuscation
Abstract
Lin and Tessaro (ePrint 2017) recently proposed indistinguishability obfuscation (IO) and functional encryption (FE) candidates and proved their security based on two assumptions: a standard assumption on bilinear maps and a non-standard assumption on "Goldreich-like" pseudorandom generators. In a nutshell, their second assumption requires the existence of pseudorandom generators \(G:[q]^n \rightarrow \{0,1\}^m\) for some \(\mathsf {poly}(n)\)-size alphabet q, each of whose output bits depends on at most two input alphabet symbols, and which achieve sufficiently large stretch. We show polynomial-time attacks against such generators, invalidating the security of the IO and FE candidates. Our attack uses tools from the literature on two-source extractors (Chor and Goldreich, SICOMP 1988) and efficient refutation of random 2-XOR instances (Charikar and Wirth, FOCS 2004).
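To make the attacked object concrete, here is a toy sketch of a generator of this shape; the random wiring and the predicate are illustrative placeholders, not the specific candidates the paper considers:

```python
# Toy 2-local "Goldreich-like" generator G: [q]^n -> {0,1}^m.
# Assumptions: the wiring and predicate P are illustrative only.
import random

def sample_generator(n, q, m, seed=0):
    rng = random.Random(seed)
    # each output bit reads exactly two input alphabet symbols (locality 2)
    wires = [(rng.randrange(n), rng.randrange(n)) for _ in range(m)]
    P = lambda a, b: (a * b + a + b) % 2   # a fixed public predicate [q]^2 -> {0,1}
    def G(x):                              # x is a vector in [q]^n
        return [P(x[i], x[j]) for (i, j) in wires]
    return G

q, n, m = 97, 64, 4096                     # poly(n)-size alphabet, large stretch
G = sample_generator(n, q, m)
out = G([random.randrange(q) for _ in range(n)])  # 4096 bits from 64 symbols
```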
Alex Lombardi, Vinod Vaikuntanathan
Decomposable Obfuscation: A Framework for Building Applications of Obfuscation from Polynomial Hardness
Abstract
There is some evidence that indistinguishability obfuscation (iO) requires either exponentially many assumptions or (sub)exponentially hard assumptions, and indeed, all known ways of building obfuscation suffer one of these two limitations. As such, any application built from iO suffers from these limitations as well. However, for most applications, such limitations do not appear to be inherent to the application, just the approach using iO. Indeed, several recent works have shown how to base applications of iO instead on functional encryption (FE), which can in turn be based on the polynomial hardness of just a few assumptions. However, these constructions are quite complicated and recycle a lot of similar techniques.
In this work, we unify the results of previous works in the form of a weakened notion of obfuscation, called Decomposable Obfuscation. We show (1) how to build decomposable obfuscation from functional encryption, and (2) how to build a variety of applications from decomposable obfuscation, including all of the applications already known from FE. The construction in (1) hides most of the difficult techniques in the prior work, whereas the constructions in (2) are much closer to the comparatively simple constructions from iO. As such, decomposable obfuscation represents a convenient new platform for obtaining more applications from polynomial hardness.
Qipeng Liu, Mark Zhandry

Functional Encryption

Frontmatter
Functional Encryption for Bounded Collusions, Revisited
Abstract
We provide a new construction of functional encryption (FE) for circuits in the bounded collusion model. In this model, security of the scheme is guaranteed as long as the number of colluding adversaries is a priori bounded by some polynomial Q. Our construction supports arithmetic circuits, in contrast to all prior works, which support Boolean circuits. The ciphertext of our scheme is sublinear in the circuit size for the circuit class \(\mathsf{NC}_1\); this implies the first construction of arithmetic reusable garbled circuits for \(\mathsf{NC}_1\).
Additionally, our construction achieves several desirable features:
  • Our construction for reusable garbled circuits for \(\mathsf{NC}_1\) achieves the optimal “full” simulation based security.
  • When generalised to handle Q queries for any fixed polynomial Q, our ciphertext size grows additively with \(Q^2\). In contrast, previous works that achieve full security [5, 39] suffered a multiplicative growth of \(Q^4\).
  • The ciphertext of our scheme can be divided into a succinct data dependent component and a non-succinct data independent component. This makes it well suited for optimization in an online-offline model that allows a majority of the computation to be performed in an offline phase, before the data becomes available.
Security of our reusable garbled circuits construction for \(\mathsf{NC_1}\) is based on the Ring Learning With Errors assumption (RLWE), while the bounded collusion construction (with non-succinct ciphertext) may also be based on the standard Learning with Errors (LWE) assumption. To achieve our result, we provide new public key and ciphertext evaluation algorithms. These algorithms are general, and may find application elsewhere.
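Schematically, the growth claims read as follows (with s denoting the Q-independent part of the ciphertext and \(\mathrm {poly}(\kappa )\) a fixed polynomial; notation ours):
\[
|\mathsf {ct}|_{\text{this work}}\;=\;s+Q^2\cdot \mathrm {poly}(\kappa )
\qquad\text{vs.}\qquad
|\mathsf {ct}|_{[5,39]}\;=\;Q^4\cdot s'.
\]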
Shweta Agrawal, Alon Rosen
Attribute-Hiding Predicate Encryption in Bilinear Groups, Revisited
Abstract
We present new techniques for achieving strong attribute-hiding in prime-order bilinear groups under the standard k-Linear assumption. Our main result is a “partially hiding” predicate encryption scheme for functions that compute an arithmetic branching program on public attributes, followed by an inner product predicate on private attributes. This constitutes the first “best of both worlds” result in bilinear groups that simultaneously generalizes existing attribute-based encryption schemes and inner product predicate encryption. Our scheme achieves a variant of simulation-based security in the semi-adaptive setting. Along the way, we introduce a conceptually simpler and more modular approach towards achieving the strong attribute-hiding guarantee.
Hoeteck Wee

Constrained PRFs

Frontmatter
Constrained Keys for Invertible Pseudorandom Functions
Abstract
A constrained pseudorandom function (PRF) is a secure PRF for which one can generate constrained keys that can only be used to evaluate the PRF on a subset of the domain. Constrained PRFs are used widely, most notably in applications of indistinguishability obfuscation (\(i\mathcal {O}\)). In this paper we show how to constrain an invertible PRF (IPF), which is significantly harder. An IPF is a secure injective PRF accompanied by an inversion algorithm. A constrained key for an IPF can only be used to evaluate the IPF on a subset S of the domain, and to invert the IPF on the image of S. We first define the notion of a constrained IPF and then give two main constructions: one for puncturing an IPF and the other for (single-key) circuit constraints. Both constructions rely on recent work on private constrained PRFs. We also show that constrained pseudorandom permutations for many classes of constraints are impossible under our definition.
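For readers new to the primitive, the following minimal GGM-style puncturable PRF illustrates the constrained-key syntax for the simplest constraint class (puncturing a single point). It is only a sketch of the interface; it is not the paper's invertible-PRF constructions, which additionally support inversion on the image of the constrained set.

```python
import hashlib, os

def prg(seed: bytes):
    # length-doubling PRG via SHA-256: seed -> (left child, right child)
    return (hashlib.sha256(b"0" + seed).digest(),
            hashlib.sha256(b"1" + seed).digest())

def eval_prf(key: bytes, x: str) -> bytes:
    # PRF value at x: walk the GGM tree from the root along the bits of x
    s = key
    for b in x:
        s = prg(s)[int(b)]
    return s

def puncture(key: bytes, x: str):
    # constrained key for the set {0,1}^|x| \ {x}: off-path siblings along x
    out, s = [], key
    for i, b in enumerate(x):
        left, right = prg(s)
        if b == "0":
            out.append((x[:i] + "1", right)); s = left
        else:
            out.append((x[:i] + "0", left)); s = right
    return out

def eval_punctured(pkey, y: str) -> bytes:
    # evaluates at every y except the punctured point
    for prefix, seed in pkey:
        if y.startswith(prefix):
            return eval_prf(seed, y[len(prefix):])
    raise ValueError("y is the punctured point")

k = os.urandom(32)
pk = puncture(k, "0110")
assert eval_punctured(pk, "1110") == eval_prf(k, "1110")
```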
Dan Boneh, Sam Kim, David J. Wu
Private Constrained PRFs (and More) from LWE
Abstract
In a constrained PRF, the owner of the PRF key K can generate constrained keys \(K_f\) that allow anyone to evaluate the PRF on inputs x that satisfy the predicate f (namely, where f(x) is “true”) but reveal no information about the PRF evaluation on the other inputs. A private constrained PRF goes further by requiring that the constrained key \(K_f\) hides the predicate f.
Boneh, Kim and Montgomery (EUROCRYPT 2017) recently presented a construction of private constrained PRF for point function constraints, and Canetti and Chen (EUROCRYPT 2017) presented a completely different construction for more general NC\(^1\) constraints. In this work, we show two constructions of LWE-based constraint-hiding constrained PRFs for general predicates described by polynomial-size circuits.
The two constructions are based on two distinct techniques that we show have further applicability, by constructing weak attribute-hiding predicate encryption schemes. In a nutshell, the first construction imports the technique of modulus switching from the FHE world into the domain of trapdoor extension and homomorphism. The second construction shows how to use the duality between FHE secret-key/randomness and ABE randomness/secret-key to construct a scheme with dual use of the same values for both FHE and ABE purposes.
Zvika Brakerski, Rotem Tsabary, Vinod Vaikuntanathan, Hoeteck Wee

Encryption

Frontmatter
The Edited Truth
Abstract
We introduce two new cryptographic notions in the realm of public and symmetric key encryption.
  • Encryption with invisible edits is an encryption scheme with two tiers of users: "privileged" and "unprivileged". Privileged users know a key pair \(({\mathsf {pk}}, {\mathsf {sk}})\) and unprivileged users know a key pair \(({\mathsf {pk}}_e, {{\mathsf {sk}}_e})\) associated with an underlying edit e to be applied to encrypted messages. When an unprivileged user decrypts a ciphertext that a privileged user generated for an underlying plaintext \(m\), the result is an edited plaintext \(m' = {\mathsf {Edit}}(m,e)\). Here, \({\mathsf {Edit}}\) is a supported edit function and e is a description of the particular edit. A user should not be able to tell whether he is a privileged or an unprivileged user.
  • Encryption with deniable edits is an encryption scheme that allows a user who owns a ciphertext c encrypting a large corpus of data m under a secret key \({\mathsf {sk}}\) to generate an alternative but legitimate-looking secret key \({{\mathsf {sk}}_{c,e}}\) that decrypts c to an "edited" version of the data \(m'={\mathsf {Edit}}(m,e)\). This generalizes classical receiver-deniable encryption, which is the special case of deniable edits where the edit completely replaces the original data. The new flexibility makes it possible to design solutions with much smaller keys than classical receiver-deniable encryption requires, since the key size scales only with the description size of the edit e, which can be much smaller than the plaintext data m.
We construct encryption schemes with deniable and invisible edits for any polynomial-time computable edit function under minimal assumptions: in the public-key setting we require the existence of standard public-key encryption, and in the symmetric-key setting we require the existence of one-way functions.
The solutions to both problems use common ideas; however, there is a significant conceptual difference between deniable edits and invisible edits. Whereas encryption with deniable edits enables a user to modify the meaning of a single ciphertext in hindsight, the goal of encryption with invisible edits is to enable ongoing modifications of multiple ciphertexts.
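A toy illustration of why keys can be small: the description of an edit e can be far shorter than the data m it modifies. The patch format below is a hypothetical example; the paper supports any polynomial-time computable edit function.

```python
# Sketch: a compact edit description e = (offset, replacement).
def edit(m: bytes, e) -> bytes:
    offset, replacement = e
    return m[:offset] + replacement + m[offset + len(replacement):]

corpus = b"meeting at 10pm, bring the documents" * 1000  # large plaintext m
e = (11, b"expense rep")        # tiny description of the edit
edited = edit(corpus, e)
assert len(e[1]) < len(corpus)  # key size may scale with |e|, not |m|
```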
Shafi Goldwasser, Saleet Klein, Daniel Wichs
A Modular Analysis of the Fujisaki-Okamoto Transformation
Abstract
The Fujisaki-Okamoto (FO) transformation (CRYPTO 1999 and Journal of Cryptology 2013) turns any weakly secure public-key encryption scheme into a strongly (i.e., \(\mathsf {IND}\text {-}\mathsf {CCA}\)) secure one in the random oracle model. Unfortunately, the FO analysis suffers from several drawbacks, such as a non-tight security reduction and the need for a perfectly correct scheme. While several alternatives to the FO transformation have been proposed, they have stronger requirements or do not obtain all desired properties.
In this work, we provide a fine-grained and modular toolkit of transformations for turning weakly secure into strongly secure public-key encryption schemes. All of our transformations are robust against schemes with correctness errors, and their combination leads to several tradeoffs among tightness of the reduction, efficiency, and the required security level of the used encryption scheme. For instance, one variant of the FO transformation constructs an \(\mathsf {IND}\text {-}\mathsf {CCA}\) secure scheme from an \(\mathsf {IND}\text {-}\mathsf {CPA}\) secure one with a tight reduction and very small efficiency overhead. Another variant assumes only an \(\mathsf {OW}\text {-}\mathsf {CPA}\) secure scheme, but leads to an \(\mathsf {IND}\text {-}\mathsf {CCA}\) secure scheme with larger ciphertexts.
We also analyze our transformations in the quantum random oracle model, which yields security guarantees in a post-quantum setting.
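A minimal sketch of the FO-style pattern discussed above: encryption is derandomized with a hash G of the message, and decapsulation re-encrypts to detect malformed ciphertexts. The "PKE" below is a deliberately insecure placeholder standing in for any weakly secure scheme, with G and H modeled by SHA-256; the paper's actual modular transformations differ in details such as implicit vs. explicit rejection.

```python
import hashlib, os

def H(*parts):                                  # hash to 32 bytes
    h = hashlib.sha256()
    for p in parts:
        h.update(len(p).to_bytes(4, "big") + p)
    return h.digest()

# --- toy, INSECURE randomized PKE (placeholder for an OW/IND-CPA scheme) ---
def keygen():
    sk = os.urandom(32)
    return H(b"pk", sk), sk                     # (pk, sk)

def enc(pk, m, r):                              # randomness r made explicit
    pad = H(b"enc", pk, r)
    return r, bytes(a ^ b for a, b in zip(m, pad))

def dec(sk, ct):
    r, c = ct
    pad = H(b"enc", H(b"pk", sk), r)
    return bytes(a ^ b for a, b in zip(c, pad))

# --- FO-style KEM: coins fixed to G(m); re-encryption check on decapsulation ---
def encaps(pk):
    m = os.urandom(32)
    ct = enc(pk, m, H(b"G", m))
    return ct, H(b"K", m, *ct)                  # session key K = H(m, ct)

def decaps(sk, ct):
    m = dec(sk, ct)
    if enc(H(b"pk", sk), m, H(b"G", m)) != ct:  # reject malformed ciphertexts
        return None
    return H(b"K", m, *ct)

pk, sk = keygen()
ct, K = encaps(pk)
assert decaps(sk, ct) == K
```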
Dennis Hofheinz, Kathrin Hövelmanns, Eike Kiltz
From Selective IBE to Full IBE and Selective HIBE
Abstract
Starting with any selectively secure identity-based encryption (IBE) scheme, we give generic constructions of fully secure IBE and selectively secure hierarchical IBE (HIBE) schemes. Our HIBE scheme allows for delegation arbitrarily many times.
Nico Döttling, Sanjam Garg
Multi-key Authenticated Encryption with Corruptions: Reductions Are Lossy
Abstract
We study the security of symmetric encryption schemes in settings with multiple users and realistic adversaries who can adaptively corrupt encryption keys. To avoid confinement to any particular definitional paradigm, we propose a general framework for multi-key security definitions. By appropriate settings of the parameters of the framework, we obtain multi-key variants of many of the existing single-key security notions.
This framework is instrumental in establishing our main results. We show that for all single-key secure encryption schemes satisfying a minimal key uniqueness assumption and almost any instantiation of our general multi-key security notion, any reasonable reduction from the multi-key game to a standard single-key game necessarily incurs a linear loss in the number of keys. We prove this result for all three classical single-key security notions capturing confidentiality, authenticity and the combined authenticated encryption notion.
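For intuition: the standard hybrid argument shows that, schematically (notation ours),
\[
\mathbf {Adv}^{n\text{-key}}(\mathcal {A})\;\le\;n\cdot \mathbf {Adv}^{1\text{-key}}(\mathcal {B}),
\]
and the result above says that for key-unique schemes this factor-n loss is not an artifact of the proof technique: no reasonable reduction can avoid it.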
Tibor Jager, Martijn Stam, Ryan Stanley-Oakes, Bogdan Warinschi

Moderately Hard Functions

Frontmatter
On the Depth-Robustness and Cumulative Pebbling Cost of Argon2i
Abstract
Argon2i is a data-independent memory-hard function that won the Password Hashing Competition. The password hashing algorithm has already been incorporated into several open source crypto libraries such as libsodium. In this paper we analyze the cumulative memory cost of computing Argon2i. On the positive side we provide a lower bound for Argon2i. On the negative side we exhibit an improved attack against Argon2i which demonstrates that our lower bound is nearly tight. In particular, we show that
(1) An Argon2i DAG is \(\left( e,O\left( n^3/e^3\right) \right) \)-reducible.
(2) The cumulative pebbling cost for Argon2i is at most \(O\left( n^{1.768}\right) \). This improves upon the previous best upper bound of \(O\left( n^{1.8}\right) \) [AB17].
(3) An Argon2i DAG is \(\left( e,\tilde{\varOmega }\left( n^3/e^3\right) \right) \)-depth robust. By contrast, the analysis of [ABP17a] only established that Argon2i was \(\left( e,\tilde{\varOmega }\left( n^2/e^2\right) \right) \)-depth robust.
(4) The cumulative pebbling complexity of Argon2i is at least \(\tilde{\varOmega }\left( n^{1.75}\right) \). This improves on the previous best bound of \(\varOmega \left( n^{1.66}\right) \) [ABP17a] and demonstrates that Argon2i has higher cumulative memory cost than competing proposals such as Catena or Balloon Hashing.
We also show that Argon2i has high fractional depth-robustness which strongly suggests that data-dependent modes of Argon2 are resistant to space-time tradeoff attacks.
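For intuition about the combinatorial objects involved, the sketch below samples an Argon2i-like data-independent DAG and measures its depth after removing a set of nodes, the quantity underlying \((e,d)\)-reducibility and depth-robustness. The uniform choice of the second parent is a simplification; Argon2i's actual edge distribution is non-uniform.

```python
import random

def sample_dag(n, seed=0):
    # node i depends on node i-1 and one data-independently chosen earlier node
    rng = random.Random(seed)
    parents = {0: []}
    for i in range(1, n):
        j = rng.randrange(i)
        parents[i] = [i - 1] + ([j] if j != i - 1 else [])
    return parents

def depth_after_removal(parents, removed):
    # longest path among surviving nodes: an (e, d)-reducibility attack removes
    # e nodes so that the remaining depth drops below d
    depth = {}
    for i in sorted(parents):
        if i in removed:
            continue
        preds = [depth[p] for p in parents[i] if p not in removed]
        depth[i] = 1 + max(preds, default=0)
    return max(depth.values(), default=0)

g = sample_dag(1 << 10)
print(depth_after_removal(g, removed=set(range(0, 1 << 10, 32))))
```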
Jeremiah Blocki, Samson Zhou
Bandwidth Hard Functions for ASIC Resistance
Abstract
Cryptographic hash functions have wide applications, including password hashing, pricing functions for spam and denial-of-service countermeasures, and proof of work in cryptocurrencies. Recent progress on ASIC (Application-Specific Integrated Circuit) hash engines raises concerns about the security of these applications. This has led to growing interest in ASIC-resistant hash functions and ASIC-resistant proof of work schemes, i.e., those that do not give ASICs a huge advantage. The standard approach towards ASIC resistance today is through memory hard functions or memory hard proof of work schemes. However, we observe that the memory hardness approach is an incomplete solution. It only attempts to provide resistance to an ASIC’s area advantage but overlooks the more important energy advantage. In this paper, we propose the notion of bandwidth hard functions to reduce an ASIC’s energy advantage. CPUs cannot compete with ASICs for energy efficiency in computation, but we can rely on memory accesses to reduce an ASIC’s energy advantage because energy costs of memory accesses are comparable for ASICs and CPUs. We propose a model for hardware energy cost that has sound foundations in practice. We then analyze the bandwidth hardness property of ASIC resistant candidates. We find that scrypt, Catena-BRG, and Balloon are bandwidth hard with suitable parameters. Lastly, we observe that a capacity hard function is not necessarily bandwidth hard, with a stacked double butterfly graph being a counterexample.
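One schematic way to read such an energy-cost model (symbols ours: \(t\) counts operations of each type and \(c\) is per-operation energy):
\[
\mathrm {cost}\;=\;t_{\mathrm {comp}}\,c_{\mathrm {comp}}+t_{\mathrm {mem}}\,c_{\mathrm {mem}},
\qquad
\text{ASIC energy advantage}\;\approx\;
\frac{t_{\mathrm {comp}}\,c^{\mathrm {cpu}}_{\mathrm {comp}}+t_{\mathrm {mem}}\,c^{\mathrm {cpu}}_{\mathrm {mem}}}
     {t_{\mathrm {comp}}\,c^{\mathrm {asic}}_{\mathrm {comp}}+t_{\mathrm {mem}}\,c^{\mathrm {asic}}_{\mathrm {mem}}}.
\]
Since \(c^{\mathrm {cpu}}_{\mathrm {mem}}\approx c^{\mathrm {asic}}_{\mathrm {mem}}\) while \(c^{\mathrm {asic}}_{\mathrm {comp}}\ll c^{\mathrm {cpu}}_{\mathrm {comp}}\), a function whose cost is dominated by the memory term keeps this ratio close to one.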
Ling Ren, Srinivas Devadas
Moderately Hard Functions: Definition, Instantiations, and Applications
Abstract
Several cryptographic schemes and applications are based on functions that are both reasonably efficient to compute and moderately hard to invert, including client puzzles for Denial-of-Service protection, password protection via salted hashes, and recent proof-of-work blockchain systems. Despite their wide use, a definition of this concept has not yet been distilled and formalized explicitly. Instead, either the applications are proven directly based on the assumptions underlying the function, or some property of the function is proven, but the security of the application is argued only informally. The goal of this work is to provide a (universal) definition that decouples the efforts of designing new moderately hard functions and of building protocols based on them, serving as an interface between the two.
On a technical level, beyond the mentioned definitions, we instantiate the model for four different notions of hardness. We extend the work of Alwen and Serbinenko (STOC 2015) by providing a general tool for proving security for the first notion of memory-hard functions that allows for provably secure applications. The tool allows us to recover all of the graph-theoretic techniques developed for proving security under the older, non-composable, notion of security used by Alwen and Serbinenko. As an application of our definition of moderately hard functions, we prove the security of two different schemes for proofs of effort (PoE). We also formalize and instantiate the concept of a non-interactive proof of effort (niPoE), in which the proof is not bound to a particular communication context but rather any bit-string chosen by the prover.
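As one classical, concrete instance of the concept, here is a Hashcash-style client puzzle: solving requires about \(2^{d}\) hash evaluations while verification costs one. This is only an illustration of the (ni)PoE syntax, where the statement may be an arbitrary bit-string chosen by the prover; it is not one of the schemes constructed in the paper.

```python
import hashlib
from itertools import count

def solve(statement: bytes, d: int) -> int:
    # find a nonce whose hash has d leading zero bits (~2^d expected attempts)
    for nonce in count():
        h = hashlib.sha256(statement + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") >> (256 - d) == 0:
            return nonce

def verify(statement: bytes, d: int, nonce: int) -> bool:
    h = hashlib.sha256(statement + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") >> (256 - d) == 0

n = solve(b"any-bit-string-context", 12)
assert verify(b"any-bit-string-context", 12, n)  # verification: a single hash
```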
Joël Alwen, Björn Tackmann

Blockchains

Frontmatter
Overcoming Cryptographic Impossibility Results Using Blockchains
Abstract
Blockchain technology has the potential to disrupt how cryptography is done. In this work, we propose to view blockchains as an “enabler”, much like indistinguishability obfuscation [5, 23, 46] or one-way functions, for building a variety of cryptographic systems. Our contributions in this work are as follows:
1. A Framework for Proof-of-Stake based Blockchains: We provide an abstract framework for formally analyzing and defining useful security properties for Proof-of-Stake (POS) based blockchain protocols. Interestingly, for some of our applications, POS based protocols are more suitable. We believe our framework and assumptions will be useful in building applications on top of POS based blockchain protocols in the future.
2. Blockchains as an Alternative to Trusted Setup Assumptions in Cryptography: A trusted setup, such as a common reference string (CRS), has been used to realize numerous systems in cryptography. The prime example of a primitive requiring trusted setup is a non-interactive zero-knowledge (NIZK) system. We show that already existing blockchain systems, including Bitcoin and Ethereum, can be used as a foundation (instead of a CRS) to realize NIZK systems. The novel aspect of our work is that it allows for utilizing an already existing (and widely trusted) setup rather than proposing a new one. Our construction does not require any additional functionality from the miners over the already existing ones, nor do we need to modify the underlying blockchain protocol. If an adversary could violate the security of our NIZK, it could potentially also take over billions of dollars worth of coins in Bitcoin, Ethereum, or any such cryptocurrency!
We believe that such a “trusted setup” represents significant progress over a CRS published by a central trusted party. Indeed, NIZKs could further serve as a foundation for a variety of other cryptographic applications such as round-efficient secure computation [33, 36].
3. One-time programs and pay-per-use programs: Goldwasser et al. [29] introduced the notion of a one-time program and presented a construction using tamper-proof hardware. As noted by Goldwasser et al. [29], a one-time program clearly cannot be solely software based, as software can always be copied and run again. While there have been a number of follow-up works [4, 6, 30], there are no known constructions of one-time programs which do not rely on self-destructing tamper-proof hardware (even if one uses trusted setup or random oracles). Somewhat surprisingly, we show that it is possible to base one-time programs on POS based blockchain systems without relying on trusted hardware. Our ideas do not seem to translate over to Proof-of-Work (POW) based blockchains.
We also introduce the notion of pay-per-use programs, which are simply a contract between two parties: a service provider and a customer. A service provider supplies a program such that if the customer transfers a specific amount of coins to the provider, it can evaluate the program on any input of its choice once, even if the provider is offline. This is naturally useful in a subscription-based model where payment is based on usage.
Rishab Goyal, Vipul Goyal

Multiparty Computation

Frontmatter
Secure Two-Party Computation with Fairness - A Necessary Design Principle
Abstract
Protocols for secure two-party computation enable a pair of mutually distrustful parties to carry out a joint computation of their private inputs without revealing anything but the output. One important security property that has been considered is that of fairness which guarantees that if one party learns the output then so does the other. In the case of two-party computation, fairness is not always possible, and in particular two parties cannot fairly toss a coin (Cleve, 1986). Despite this, it is actually possible to securely compute many two-party functions with fairness (Gordon et al., 2008 and follow-up work). However, all known two-party protocols that achieve fairness have the unique property that the effective input of the corrupted party is determined at an arbitrary point in the protocol. This is in stark contrast to almost all other known protocols that have an explicit fixed round at which the inputs are committed.
In this paper, we ask whether the property of not having an input committal round is inherent for achieving fairness for two parties. In order to do so, we revisit the definition of security of Micali and Rogaway (Technical report, 1992), which explicitly requires the existence of such a committal round. We adapt the definition of Canetti to the two-party setting to incorporate the spirit of a committal round, and show that under such a definition, it is impossible to achieve fairness for any non-constant two-party function. This result deepens our understanding as to the type of protocol construction that is needed for achieving fairness. In addition, our result reveals a fundamental difference between the definition of security of Micali and Rogaway and that of Canetti (Journal of Cryptology, 2000), which has become the standard today. Specifically, many functions can be securely computed with fairness under the definition of Canetti, but no non-constant function can be securely computed with fairness under the definition of Micali and Rogaway.
Yehuda Lindell, Tal Rabin
Designing Fully Secure Protocols for Secure Two-Party Computation of Constant-Domain Functions
Abstract
In a sense, a two-party protocol achieves fairness if the output from the computation is obtained simultaneously by both parties. A seminal result by Cleve (STOC 1986) states that fairness is impossible, in general. Surprisingly, Gordon et al. (JACM 2011) showed that there exist interesting functions that are computable with fairness. The two results give rise to a distinction between fair functions and unfair ones. The question of characterizing these functions has been studied in a sequence of works leading to the complete characterization of (symmetric) Boolean functions by Asharov et al. (TCC 2015). In this paper, we design new fully secure protocols for functions that were previously unknown to be fair. To this end, our main technical contribution is a generic construction of a fully secure (fair) protocol, starting with a constant-round protocol satisfying limited security requirements. Our construction introduces new conceptual tools for the analysis of fairness that apply to arbitrary (constant-domain) functions. While the characterization remains open, we believe that our results lay the foundation for a deeper understanding of fairness.
Vanesa Daza, Nikolaos Makriyannis
On Secure Two-Party Computation in Three Rounds
Abstract
We revisit the exact round complexity of secure two-party computation. While four rounds are known to be sufficient for securely computing general functions that provide output to one party [Katz-Ostrovsky, CRYPTO’04], Goldreich-Krawczyk [SIAM J. Computing’96] proved that three rounds are insufficient for this task w.r.t. black-box simulation.
In this work, we study the feasibility of secure computation in three rounds using non-black-box simulation. Our main result is a three-round two-party computation protocol for general functions against adversaries with auxiliary inputs of a priori bounded size. This result relies on a new two-round input-extraction protocol based on succinct randomized encodings.
We also provide a partial answer to the question of achieving security against non-uniform adversaries. Assuming sub-exponentially secure iO and one-way functions, we rule out three-round protocols that achieve polynomial simulation-based security against the output party and exponential indistinguishability-based security against the other party.
Prabhanjan Ananth, Abhishek Jain
Four Round Secure Computation Without Setup
Abstract
We construct a 4-round multi-party computation protocol in the plain model for any functionality, secure against a malicious adversary. Our protocol relies on the sub-exponential hardness of the Learning with Errors (LWE) problem with slightly super-polynomial noise ratio, and on the existence of adaptively secure commitments based on standard assumptions. Our round complexity matches a lower bound of Garg et al. (EUROCRYPT ’16) and improves upon the state of the art: 6 rounds based on assumptions similar to ours, and 5 rounds relying on indistinguishability obfuscation and other strong assumptions.
To do this, we construct an LWE based multi-key FHE scheme with a very simple one-round distributed setup procedure (vs. the trusted setup required in previous LWE based constructions). This lets us construct the first 3-round semi-malicious MPC protocol without setup from standard LWE using the approach of Mukherjee and Wichs (EUROCRYPT ’16). Finally, subexponential hardness and adaptive commitments are used to “compile” the protocol into the fully malicious setting.
Zvika Brakerski, Shai Halevi, Antigoni Polychroniadou
Round-Optimal Secure Two-Party Computation from Trapdoor Permutations
Abstract
In this work we continue the study on the round complexity of secure two-party computation with black-box simulation.
Katz and Ostrovsky in CRYPTO 2004 showed a 5-round construction assuming trapdoor permutations for the general case where both players receive the output, and proved that their result is round optimal. This lower bound has been recently revisited by Garg et al. in Eurocrypt 2016, where a 4-round (optimal) protocol is shown assuming a simultaneous message exchange channel. Unfortunately, there is no instantiation of the protocol of Garg et al. under standard polynomial-time hardness assumptions.
In this work we close the above gap by showing a 4 (optimal) round construction for secure two-party computation in the simultaneous message channel model with black-box simulation, assuming trapdoor permutations against polynomial-time adversaries.
Our construction for secure two-party computation relies on a special 4-round protocol for oblivious transfer that nicely composes with other protocols in parallel. We define and construct such special oblivious transfer protocol from trapdoor permutations. This building block is clearly interesting on its own. Our construction also makes use of a recent advance on non-malleability: a delayed-input 4-round non-malleable zero knowledge argument.
Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, Ivan Visconti
Delayed-Input Non-Malleable Zero Knowledge and Multi-Party Coin Tossing in Four Rounds
Abstract
In this work we start from the following two results in the state of the art:
1. 4-round non-malleable zero knowledge (NMZK): Goyal et al. in FOCS 2014 showed the first 4-round one-one NMZK argument from one-way functions (OWFs). Their construction requires the prover to know the instance and the witness already at the 2nd round.
2. 4-round multi-party coin tossing (MPCT): Garg et al. in Eurocrypt 2016 showed the first 4-round protocol for MPCT. Their result crucially relies on 3-round 3-robust parallel non-malleable commitments. So far there is no candidate construction for such a commitment scheme under standard polynomial-time hardness assumptions.
We improve the state of the art on NMZK and MPCT by presenting the following two results:
1. a delayed-input 4-round one-many NMZK argument \(\varPi _{\mathsf {NMZK}}\) from OWFs; moreover, \(\varPi _{\mathsf {NMZK}}\) is also a delayed-input many-many synchronous NMZK argument;
2. a 4-round MPCT protocol \(\varPi _{\mathsf {MPCT}}\) from one-to-one OWFs; \(\varPi _{\mathsf {MPCT}}\) uses \(\varPi _{\mathsf {NMZK}}\) as a subprotocol and exploits the special properties (e.g., delayed input, many-many synchronous) of \(\varPi _{\mathsf {NMZK}}\).
Both \(\varPi _{\mathsf {NMZK}}\) and \(\varPi _{\mathsf {MPCT}}\) make use of a special proof of knowledge that offers additional security guarantees when played in parallel with other protocols. The new technique behind such a proof of knowledge is an additional contribution of this work and is of independent interest.
Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, Ivan Visconti
Round Optimal Concurrent MPC via Strong Simulation
Abstract
In this paper, we study the round complexity of concurrently secure multi-party computation (MPC) with super-polynomial simulation (SPS) in the plain model. In the plain model, there are known explicit attacks that show that concurrently secure MPC with polynomial simulation is impossible to achieve; SPS security is the most widely studied model for concurrently secure MPC in the plain model. We obtain the following results:
  • Three-round concurrent MPC with SPS security against Byzantine adversaries, assuming sub-exponentially secure DDH and LWE.
  • Two-round concurrent MPC with SPS security against Byzantine adversaries for input-less randomized functionalities, assuming sub-exponentially secure indistinguishability obfuscation and DDH. In particular, this class includes sampling functionalities that allow parties to jointly sample a secure common reference string for cryptographic applications.
Prior to our work, to the best of our knowledge, concurrent MPC with SPS security required roughly 20 rounds, although we are not aware of any work that even gave an approximation of the constant round complexity sufficient for the multi-party setting. We also improve over the previous best round complexity for the two-party setting, where 5 rounds were needed (Garg, Kiyoshima, and Pandey, Eurocrypt 2017).
To obtain our results, we compile protocols that already achieve security against “semi-malicious” adversaries, to protocols secure against fully malicious adversaries, additionally assuming sub-exponential DDH. Our protocols develop new techniques to use two-round zero-knowledge with super-polynomial strong simulation, defined by Pass (Eurocrypt 2003) and very recently realized by Khurana and Sahai (FOCS 2017). These remain zero-knowledge against adversaries running in time larger than the running time of the simulator.
Saikrishna Badrinarayanan, Vipul Goyal, Abhishek Jain, Dakshita Khurana, Amit Sahai
A Unified Approach to Constructing Black-Box UC Protocols in Trusted Setup Models
Abstract
We present a unified framework for obtaining black-box constructions of universally composable (UC) protocols in trusted setup models. Our result is analogous to the unified framework of Lin, Pass, and Venkitasubramaniam [STOC’09, Asiacrypt’12], which, however, only yields non-black-box constructions of UC protocols. Our unified framework shows that to obtain black-box constructions of UC protocols, it suffices to implement a special-purpose commitment scheme that is, in particular, concurrently extractable using a given trusted setup. Using our framework, we improve black-box constructions in the common reference string and tamper-proof hardware token models by weakening the underlying computational and setup assumptions.
Susumu Kiyoshima, Huijia Lin, Muthuramakrishnan Venkitasubramaniam
Backmatter
Metadata
Title
Theory of Cryptography
Editors
Yael Kalai
Leonid Reyzin
Copyright Year
2017
Electronic ISBN
978-3-319-70500-2
Print ISBN
978-3-319-70499-9
DOI
https://doi.org/10.1007/978-3-319-70500-2
