
2016 | Book

Theory of Cryptography

14th International Conference, TCC 2016-B, Beijing, China, October 31-November 3, 2016, Proceedings, Part I

Edited by: Martin Hirt, Adam Smith

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 9985 and LNCS 9986 constitutes the refereed proceedings of the 14th International Conference on Theory of Cryptography, TCC 2016-B, held in Beijing, China, in November 2016.

The total of 45 revised full papers presented in the proceedings were carefully reviewed and selected from 113 submissions. The papers were organized in topical sections named: TCC test-of-time award; foundations; unconditional security; foundations of multi-party protocols; round complexity and efficiency of multi-party computation; differential privacy; delegation and IP; public-key encryption; obfuscation and multilinear maps; attribute-based encryption; functional encryption; secret sharing; new models.

Table of Contents

Frontmatter

TCC Test-of-Time Award

Frontmatter
From Indifferentiability to Constructive Cryptography (and Back)
Abstract
The concept of indifferentiability of systems, a generalized form of indistinguishability, was proposed in 2004 to provide a simplified and generalized explanation of impossibility results like the non-instantiability of random oracles by hash functions due to Canetti, Goldreich, and Halevi (STOC 1998). But indifferentiability is actually a constructive notion, leading to possibility results. For example, Coron et al. (Crypto 2005) argued that the soundness of the construction C(f) of a hash function from a compression function f can be demonstrated by proving that C(R) is indifferentiable from a random oracle if R is an ideal random compression function.
The purpose of this short paper is to describe how the indifferentiability notion was a precursor to the theory of constructive cryptography and thereby to provide a simplified and generalized treatment of indifferentiability as a special type of constructive statement.
Ueli Maurer, Renato Renner

Foundations

Frontmatter
Fast Pseudorandom Functions Based on Expander Graphs
Abstract
We present direct constructions of pseudorandom function (PRF) families based on Goldreich’s one-way function. Roughly speaking, we assume that non-trivial local mappings \(f:\{0,1\}^n\rightarrow \{0,1\}^m\) whose input-output dependency graph forms an expander are hard to invert. We show that this one-wayness assumption yields PRFs with relatively low complexity. This includes weak PRFs that can be computed in linear time O(n) on a RAM machine with \(O(\log n)\) word size, or by a depth-3 circuit with unbounded fan-in AND and OR gates (AC0 circuit), and standard PRFs that can be computed by a quasilinear-size circuit or by a constant-depth circuit with unbounded fan-in AND, OR and Majority gates (TC0).
Our proofs are based on a new search-to-decision reduction for expander-based functions. This extends a previous reduction of the first author (STOC 2012), which applied only to the special case of random local functions. Additionally, we present a new family of highly efficient hash functions whose outputs on exponentially many inputs jointly form (with high probability) a good expander graph. These hash functions are based on the techniques of Miles and Viola (Crypto 2012). Although some of our reductions provide only relatively weak security guarantees, we believe that they yield a novel approach for constructing PRFs, and therefore enrich the study of pseudorandomness.
Benny Applebaum, Pavel Raykov
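To make the object underlying the assumption above concrete, here is a toy sketch (not the paper's construction) of a Goldreich-style local function: every output bit applies a fixed small predicate to a constant number of input bits, and the assumption is that such functions are one-way when the dependency graph is a sufficiently good expander. The predicate and the randomly sampled graph below are illustrative placeholders.

```python
import random

def local_function(x, graph, predicate):
    """Evaluate a Goldreich-style local map f: {0,1}^n -> {0,1}^m, where
    output bit j applies `predicate` to the input bits listed in graph[j]."""
    return [predicate([x[i] for i in graph[j]]) for j in range(len(graph))]

def predicate(bits):
    # Illustrative 5-local predicate; real instantiations are chosen with care.
    a, b, c, d, e = bits
    return a ^ b ^ c ^ (d & e)

n, m, d = 32, 48, 5          # input length, output length, locality
random.seed(0)
# Placeholder dependency graph: d random neighbors per output bit. A real
# instantiation would use an explicit expander rather than random sampling.
graph = [random.sample(range(n), d) for _ in range(m)]
x = [random.randrange(2) for _ in range(n)]
print(local_function(x, graph, predicate))
```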
3-Message Zero Knowledge Against Human Ignorance
Abstract
The notion of Zero Knowledge has driven the field of cryptography since its conception over thirty years ago. It is well established that two-message zero-knowledge protocols for NP do not exist, and that four-message zero-knowledge arguments exist under the minimal assumption of one-way functions. Resolving the precise round complexity of zero-knowledge has been an outstanding open problem for far too long.
In this work, we present a three-message zero-knowledge argument system with soundness against uniform polynomial-time cheating provers. The main component in our construction is the recent delegation protocol for RAM computations (Kalai and Paneth, TCC 2016-B; Brakerski, Holmgren and Kalai, ePrint 2016). Concretely, we rely on a three-message variant of their protocol based on a key-less collision-resistant hash function secure against uniform adversaries, as well as other standard primitives.
More generally, beyond uniform provers, our protocol provides a natural and meaningful security guarantee against real-world adversaries, which we formalize following Rogaway’s “human-ignorance” approach (VIETCRYPT 2006): in a nutshell, we give an explicit uniform reduction from any adversary breaking the soundness of our protocol to finding collisions in the underlying hash function.
Nir Bitansky, Zvika Brakerski, Yael Kalai, Omer Paneth, Vinod Vaikuntanathan
The GGM Function Family Is a Weakly One-Way Family of Functions
Abstract
We give the first demonstration of the cryptographic hardness of the Goldreich-Goldwasser-Micali (GGM) function family when the secret key is exposed. We prove that for any constant \(\epsilon >0\), the GGM family is a \(1/n^{2+\epsilon }\)-weakly one-way family of functions, when the lengths of secret key, inputs, and outputs are equal. Namely, any efficient algorithm fails to invert GGM with probability at least \(1/n^{2+\epsilon }\), even when given the secret key.
Additionally, we state natural conditions under which the GGM family is strongly one-way.
Aloni Cohen, Saleet Klein
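For orientation, a minimal sketch of the GGM construction that the result above concerns: the PRF walks a binary tree from the root key, applying a length-doubling PRG at each level and keeping the half selected by the next input bit. SHA-256 stands in for the PRG purely for illustration; the weak one-wayness statement says that recovering the input x from the output is hard even when the key k is public.

```python
import hashlib

def prg(seed: bytes) -> tuple:
    """Length-doubling PRG stand-in (SHA-256 used only for illustration)."""
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())

def ggm(key: bytes, x: str) -> bytes:
    """GGM PRF: descend the binary tree, branching on each input bit."""
    node = key
    for bit in x:
        left, right = prg(node)
        node = right if bit == "1" else left
    return node

key = bytes(32)              # in the paper's setting the key may be public
print(ggm(key, "01101001").hex())
```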
On the (In)Security of SNARKs in the Presence of Oracles
Abstract
In this work we study the feasibility of knowledge extraction for succinct non-interactive arguments of knowledge (SNARKs) in a scenario that, to the best of our knowledge, has not been analyzed before. While prior work focuses on the case of adversarial provers that may receive (statically generated) auxiliary information, here we consider the scenario where adversarial provers are given access to an oracle. For this setting we study if and under what assumptions such provers can admit an extractor. Our contribution is mainly threefold.
First, we formalize the question of extraction in the presence of oracles by proposing a suitable proof of knowledge definition for this setting. We call SNARKs satisfying this definition O-SNARKs. Second, we show how to use O-SNARKs to obtain formal and intuitive security proofs for three applications (homomorphic signatures, succinct functional signatures, and SNARKs on authenticated data) where we identify an issue when carrying out the proof under the standard proof of knowledge definition of SNARKs. Third, we study whether O-SNARKs exist, providing both negative and positive results. On the negative side, we show that, assuming one-way functions, there do not exist O-SNARKs in the standard model for every signing oracle family (and thus for general oracle families as well). On the positive side, we show that, when considering signature schemes with appropriate restrictions on the message length, O-SNARKs for the corresponding signing oracles exist, based on classical SNARKs and assuming extraction with respect to specific distributions of auxiliary input.
Dario Fiore, Anca Nitulescu
Leakage Resilient One-Way Functions: The Auxiliary-Input Setting
Abstract
Most cryptographic schemes are designed in a model where perfect secrecy of the secret key is assumed. In most physical implementations, however, some form of information leakage is inherent and unavoidable. To deal with this, a flurry of works showed how to construct basic cryptographic primitives that are resilient to various forms of leakage.
Dodis et al. (FOCS ’10) formalized and constructed leakage resilient one-way functions. These are one-way functions f such that given a random image f(x) and leakage g(x) it is still hard to invert f(x). Based on any one-way function, Dodis et al. constructed such a one-way function that is leakage resilient assuming that an attacker can leak any lossy function g of the input.
In this work we consider the problem of constructing leakage resilient one-way functions that are secure with respect to arbitrary computationally hiding leakage (a.k.a. auxiliary input). We consider both types of leakage — selective and adaptive — and prove various possibility and impossibility results.
On the negative side, we show that if the leakage is an adaptively-chosen arbitrary one-way function, then it is impossible to construct leakage resilient one-way functions. The latter is proved both in the random oracle model (without any further assumptions) and in the standard model based on a strong vector-variant of DDH. On the positive side, we observe that when the leakage is chosen ahead of time, there are leakage resilient one-way functions based on a variety of assumptions.
Ilan Komargodski
Simulating Auxiliary Inputs, Revisited
Abstract
For any pair \((X, Z)\) of correlated random variables we can think of Z as a randomized function of X. If the domain of Z is small, one can make this function computationally efficient by allowing it to be only approximately correct. In folklore this problem is known as simulating auxiliary inputs. The idea of simulating auxiliary information turns out to be a very useful tool, finding applications in complexity theory, cryptography, pseudorandomness and zero-knowledge. In this paper we revisit this problem, achieving the following results:
(a)
We present a novel boosting algorithm for constructing the simulator. This boosting proof is of independent interest, as it shows how to handle “negative mass” issues when constructing probability measures by shifting distinguishers in descent algorithms. Our technique essentially fixes the flaw in the TCC’14 paper “How to Fake Auxiliary Inputs”.
 
(b)
The complexity of our simulator is better than in previous works, including results derived from the uniform min-max theorem due to Vadhan and Zheng. To achieve \((s,\epsilon )\)-indistinguishability we need complexity \(O\left( s\cdot 2^{5\ell }\epsilon ^{-2}\right) \) in time/circuit size, which improves previous bounds by a factor of \(\epsilon ^{-2}\) (restated symbolically after this abstract). In particular, with this bound we get meaningful provable security for the EUROCRYPT’09 leakage-resilient stream cipher instantiated with a standard 256-bit block cipher, like \(\mathsf {AES256}\).
 
Our boosting technique uses a two-step approach. In the first step we shift the current result (as in gradient or sub-gradient descent algorithms), and in a separate step we fix the biggest violation of the non-negative mass constraint (if applicable).
Maciej Skórski
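Read symbolically, the bound in item (b) can be paraphrased as follows (our paraphrase under the abstract's notation, not a verbatim theorem statement): for correlated \((X, Z)\) with Z taking values in \(\{0,1\}^{\ell}\), there is a simulator h with

```latex
% Paraphrase of the bound in (b); sizes and notation follow the abstract.
\mathrm{size}(h) = O\!\left(s \cdot 2^{5\ell}\,\epsilon^{-2}\right)
\qquad \text{such that} \qquad
(X, Z) \;\approx_{(s,\epsilon)}\; \bigl(X, h(X)\bigr).
```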

Unconditional Security

Frontmatter
Pseudoentropy: Lower-Bounds for Chain Rules and Transformations
Abstract
Computational notions of entropy have recently found many applications, including leakage-resilient cryptography, deterministic encryption and memory delegation. The two main types of results which make computational notions so useful are (1) chain rules, which quantify by how much the computational entropy of a variable decreases if conditioned on some other variable, and (2) transformations, which quantify to what extent one type of entropy implies another.
Such chain rules and transformations typically lose a significant amount in the quality of the entropy, and are the reason why applying these results yields rather weak quantitative security bounds. In this paper we prove, for the first time, lower bounds in this context, showing that existing results for transformations are, unfortunately, basically optimal for non-adaptive black-box reductions (and it is hard to imagine how non-black-box reductions or adaptivity could be useful here).
A variable X has k bits of HILL entropy of quality \((\epsilon ,s)\) if there exists a variable Y with k bits min-entropy which cannot be distinguished from X with advantage \(\epsilon \) by distinguishing circuits of size s. A weaker notion is Metric entropy, where we switch quantifiers, and only require that for every distinguisher of size s, such a Y exists.
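The difference between the two notions is exactly the order of quantifiers; written out (a transcription of the definitions above, with \(\mathcal{D}_s\) denoting distinguishers of size s):

```latex
% HILL: one Y with k bits of min-entropy fools every size-s distinguisher.
H^{\mathsf{HILL}}_{\epsilon,s}(X) \ge k \iff
  \exists\, Y\ \bigl(H_\infty(Y)\ge k\bigr)\ \forall\, D \in \mathcal{D}_s:\;
  \bigl|\Pr[D(X)=1]-\Pr[D(Y)=1]\bigr| \le \epsilon
% Metric: the quantifiers are switched; each distinguisher may get its own Y.
H^{\mathsf{Metric}}_{\epsilon,s}(X) \ge k \iff
  \forall\, D \in \mathcal{D}_s\ \exists\, Y\ \bigl(H_\infty(Y)\ge k\bigr):\;
  \bigl|\Pr[D(X)=1]-\Pr[D(Y)=1]\bigr| \le \epsilon
```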
We first describe our result concerning transformations. By definition, HILL implies Metric without any loss in quality. Metric entropy often comes up in applications, but must be transformed to HILL for meaningful security guarantees. The best known result states that if a variable X has k bits of Metric entropy of quality \((\epsilon ,s)\), then it has k bits of HILL with quality \((2\epsilon ,s\cdot \epsilon ^2)\). We show that this loss of a factor \({\varOmega }(\epsilon ^{-2})\) in circuit size is necessary. In fact, we show the stronger result that this loss is already necessary when transforming so-called deterministic real-valued Metric entropy to randomised boolean Metric entropy (both these variants of Metric entropy are implied by HILL without loss in quality).
The chain rule for HILL entropy states that if X has k bits of HILL entropy of quality \((\epsilon ,s)\), then for any variable Z of length m, X conditioned on Z has \(k-m\) bits of HILL entropy with quality \((\epsilon ,s\cdot \epsilon ^2/ 2^{m})\). We show that a loss of \({\varOmega }(2^m/\epsilon )\) in circuit size is necessary here. Note that this still leaves a gap of \(\epsilon \) between the known bound and our lower bound.
Krzysztof Pietrzak, Maciej Skórski
Oblivious Transfer from Any Non-trivial Elastic Noisy Channel via Secret Key Agreement
Abstract
A \((\gamma ,\delta )\)-elastic channel is a binary symmetric channel between a sender and a receiver where the error rate of an honest receiver is \(\delta \) while the error rate of a dishonest receiver lies within the interval \([\gamma , \delta ]\). In this paper, we show that from any non-trivial elastic channel (i.e., \(0<\gamma<\delta <\frac{1}{2}\)) we can implement oblivious transfer with information-theoretic security. This was previously (Khurana et al., Eurocrypt 2016) only known for a subset of these parameters. Our technique relies on a new way to exploit protocols for information-theoretic key agreement from noisy channels. We also show that information-theoretically secure commitments where the receiver commits follow from any non-trivial elastic channel.
Ignacio Cascudo, Ivan Damgård, Felipe Lacerda, Samuel Ranellucci
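A toy simulation of the channel model above, purely to make the definition concrete (parameter values are arbitrary): the honest receiver always sees error rate \(\delta\), while a dishonest receiver may tune the error rate anywhere down to \(\gamma\).

```python
import random

def bsc(bit: int, error_rate: float) -> int:
    """Binary symmetric channel: flip the transmitted bit w.p. error_rate."""
    return bit ^ (random.random() < error_rate)

gamma, delta = 0.1, 0.3   # non-trivial elastic channel: 0 < gamma < delta < 1/2
sent = [random.randrange(2) for _ in range(10_000)]
honest = sum(bsc(b, delta) != b for b in sent)   # honest receiver: rate delta
cheater = sum(bsc(b, gamma) != b for b in sent)  # dishonest: as low as gamma
print(f"honest error rate ~ {honest / len(sent):.3f}, "
      f"cheating receiver ~ {cheater / len(sent):.3f}")
```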
Simultaneous Secrecy and Reliability Amplification for a General Channel Model
Abstract
We present a general notion of channel for cryptographic purposes, which can model either a (classical) physical channel or the consequences of a cryptographic protocol, or any hybrid. We consider simultaneous secrecy and reliability amplification for such channels. We show that simultaneous secrecy and reliability amplification is not possible for the most general model of channel, but, at least for some values of the parameters, it is possible for a restricted class of channels that still includes both standard information-theoretic channels and keyless cryptographic protocols.
Even in the restricted model, we require that, for the original channel, the attacker’s failure chance be a factor c larger than that of the intended receiver. We show that for any \(c > 4 \), there is a one-way protocol (where the sender sends information to the receiver only) which achieves simultaneous secrecy and reliability. From results of Holenstein and Renner (CRYPTO’05), there are no such one-way protocols for \(c < 2\). On the other hand, we also show that for \(c > 1.5\), there are two-way protocols that achieve simultaneous secrecy and reliability.
We propose using similar models to address other questions in the theory of cryptography, such as using noisy channels for secret agreement, trade-offs between reliability and secrecy, and the equivalence of various notions of oblivious channels and secure computation.
Russell Impagliazzo, Ragesh Jaiswal, Valentine Kabanets, Bruce M. Kapron, Valerie King, Stefano Tessaro
Proof of Space from Stacked Expanders
Abstract
Recently, proof of space (PoS) has been suggested as a more egalitarian alternative to the traditional hash-based proof of work. In PoS, a prover proves to a verifier that it has dedicated some specified amount of space. A closely related notion is memory-hard functions (MHF), functions that require a lot of memory/space to compute. Despite promising progress, existing PoS and MHF constructions suffer from several problems. First, there are large gaps between the desired space-hardness and what can be proven. Second, it has been pointed out that PoS and MHF should require a lot of space not just at some point, but throughout the entire computation/protocol; few proposals consider this issue. Third, the two existing PoS constructions are both based on a class of graphs called superconcentrators, which are either hard to construct or add a logarithmic factor overhead to efficiency. In this paper, we construct PoS from stacked expander graphs. Our constructions are simpler, more efficient, and have tighter provable space-hardness than prior works. Our results also apply to a recent MHF called Balloon hash. We show that Balloon hash has tighter space-hardness than previously believed, and consistent space-hardness throughout its computation.
Ling Ren, Srinivas Devadas
Perfectly Secure Message Transmission in Two Rounds
Abstract
In the model that has become known as “Perfectly Secure Message Transmission” (PSMT), a sender Alice is connected to a receiver Bob through n parallel two-way channels. A computationally unbounded adversary Eve controls t of these channels, meaning she can acquire and alter any data that is transmitted over these channels. The sender Alice wishes to communicate a secret message to Bob privately and reliably, i.e. in such a way that Eve will not get any information about the message while Bob will be able to recover it completely.
In this paper, we focus on protocols that work in two transmission rounds for \(n= 2t+1\). We break from previous work by following a conceptually simpler blueprint for achieving a PSMT protocol. We reduce the previously best-known communication complexity, i.e. the number of transmitted bits necessary to communicate a 1-bit secret, from \(O(n^3\log n)\) to \(O(n^2\log n)\). Our protocol also answers a question raised by Kurosawa and Suzuki and hitherto left open: their protocol reaches optimal transmission rate for a secret of size \(O(n^2 \log n)\) bits, and the authors raised the problem of lowering this threshold. The present solution does this for a secret of \(O(n \log n)\) bits.
Gabriele Spini, Gilles Zémor

Foundations of Multi-Party Protocols

Frontmatter
Almost-Optimally Fair Multiparty Coin-Tossing with Nearly Three-Quarters Malicious
Abstract
An \(\alpha \)-fair coin-tossing protocol allows a set of mutually distrustful parties to generate a uniform bit, such that no efficient adversary can bias the output bit by more than \(\alpha \). Cleve [STOC 1986] has shown that if half of the parties can be corrupted, then no \(r\)-round coin-tossing protocol is \(o(1/r)\)-fair. For over two decades the best known m-party protocols, tolerating up to \({t}\ge m/2\) corrupted parties, were only \(O\left( {t}/\sqrt{r} \right) \)-fair. In a surprising result, Moran, Naor, and Segev [TCC 2009] constructed an \(r\)-round two-party \(O(1/r)\)-fair coin-tossing protocol, i.e., an optimally fair protocol. Beimel, Omri, and Orlov [Crypto 2010] extended the result of Moran et al. to the multiparty setting where strictly fewer than 2/3 of the parties are corrupted. They constructed a \(2^{2^k}/r\)-fair r-round m-party protocol, tolerating up to \(t=\frac{m+k}{2}\) corrupted parties.
Recently, in a breakthrough result, Haitner and Tsfadia [STOC 2014] constructed an \(O\left( \log ^3(r)/r \right) \)-fair (almost optimal) three-party coin-tossing protocol. Their work brought forth a combination of novel techniques for coping with the difficulties of constructing fair coin-tossing protocols. Still, the best coin-tossing protocols for the case where more than 2/3 of the parties may be corrupted (and even when \(t=2m/3\), for \(m>3\)) were only \(\varTheta \left( 1/\sqrt{r} \right) \)-fair. We construct an \(O\left( \log ^3(r)/r \right) \)-fair m-party coin-tossing protocol, tolerating up to t corrupted parties, whenever m is constant and \(t<3m/4\).
Bar Alon, Eran Omri
Binary AMD Circuits from Secure Multiparty Computation
Abstract
An AMD circuit over a finite field \(\mathbb {F}\) is a randomized arithmetic circuit that offers the “best possible protection” against additive attacks. That is, the effect of every additive attack that may blindly add a (possibly different) element of \(\mathbb {F}\) to every internal wire of the circuit can be simulated by an ideal attack that applies only to the inputs and outputs.
Genkin et al. (STOC 2014, Crypto 2015) introduced AMD circuits as a means for protecting MPC protocols against active attacks, and showed that every arithmetic circuit C over \(\mathbb {F}\) can be transformed into an equivalent AMD circuit of size O(|C|) with \(O(1/|\mathbb {F}|)\) simulation error. However, for the case of the binary field \(\mathbb {F}=\mathbb {F}_2\), their constructions relied on a tamper-proof output decoder and could only realize a weaker notion of security.
We obtain the first constructions of fully secure binary AMD circuits. Given a boolean circuit C and a statistical security parameter \(\sigma \), we construct an equivalent binary AMD circuit \(C'\) of size \(|C|\cdot {\text {polylog} }(|C|,\sigma )\) (ignoring lower order additive terms) with \(2^{-\sigma }\) simulation error. That is, the effect of toggling an arbitrary subset of wires can be simulated by toggling only input and output wires.
Our construction combines in a general way two types of “simple” honest-majority MPC protocols: protocols that only offer security against passive adversaries, and protocols that only offer correctness against active adversaries. As a corollary, we get a conceptually new technique for constructing active-secure two-party protocols in the OT-hybrid model, and reduce the open question of obtaining such protocols with constant computational overhead to a similar question in these simpler MPC models.
Daniel Genkin, Yuval Ishai, Mor Weiss
Composable Security in the Tamper-Proof Hardware Model Under Minimal Complexity
Abstract
We put forth a new formulation of tamper-proof hardware in the Global Universal Composable (GUC) framework introduced by Canetti et al. in TCC 2007. Almost all previous works rely on the formulation by Katz in Eurocrypt 2007, and this formulation does not fully capture tokens in a concurrent setting. We address these shortcomings by relying on the GUC framework, where we make the following contributions:
1.
We construct secure Two-Party Computation (2PC) protocols for general functionalities with optimal round complexity and computational assumptions using stateless tokens. More precisely, we show how to realize arbitrary functionalities in the two-party setting with GUC security in two rounds under the minimal assumption of One-Way Functions (OWFs). Moreover, our construction relies on the underlying function in a black-box way. As a corollary, we obtain feasibility of Multi-Party Computation (MPC) with GUC-security under the minimal assumption of OWFs. As an independent contribution, we identify an issue with a claim in a previous work by Goyal, Ishai, Sahai, Venkatesan and Wadia in TCC 2010 regarding the feasibility of UC-secure computation with stateless tokens assuming collision-resistant hash-functions (and the extension based only on one-way functions).
 
2.
We then construct a 3-round MPC protocol that securely realizes arbitrary functionalities with GUC-security starting from any semi-honest secure MPC protocol. For this construction, we require the so-called one-many commit-and-prove primitive introduced in the original work of Canetti, Lindell, Ostrovsky and Sahai in STOC 2002 that is round-efficient and black-box in the underlying commitment. Using specially designed “input-delayed” protocols we realize this primitive (with a 3-round protocol in our framework) using stateless tokens and one-way functions (where the underlying one-way function is used in a black-box way).
 
Carmit Hazay, Antigoni Polychroniadou, Muthuramakrishnan Venkitasubramaniam
Composable Adaptive Secure Protocols Without Setup Under Polytime Assumptions
Abstract
All previous constructions of general multiparty computation protocols that are secure against adaptive corruptions in the concurrent setting either require some form of setup or non-standard assumptions. In this paper we provide the first general construction of a secure multi-party computation protocol without any setup that guarantees composable security in the presence of an adaptive adversary, based on standard polynomial-time assumptions. We prove security under the notion of “UC with super-polynomial helpers” introduced by Canetti et al. (FOCS 2010), which is closed under universal composition and implies “super-polynomial-time simulation”. Moreover, our construction relies on the underlying cryptographic primitives in a black-box manner.
Next, we revisit the zero-one law for two-party secure function evaluation initiated by the work of Maji, Prabhakaran and Rosulek (CRYPTO 2010). According to this law, every two-party functionality is either trivial (meaning it can be reduced to any other functionality) or complete (meaning any other functionality can be reduced to it) in the Universal Composability (UC) framework. As our second contribution, assuming the existence of a simulatable public-key encryption scheme, we establish a zero-one law in the adaptive setting. Our result implies that every two-party non-reactive functionality is either trivial or complete in the UC framework in the presence of adaptive, malicious adversaries.
Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
Adaptive Security of Yao’s Garbled Circuits
Abstract
A garbling scheme is used to garble a circuit C and an input x in a way that reveals the output C(x) but hides everything else. Yao’s construction from the 1980s is known to achieve selective security, where the adversary chooses the circuit C and the input x in one shot. It has remained an open problem whether the construction also achieves adaptive security, where the adversary can choose the input x after seeing the garbled version of the circuit C.
A recent work of Hemenway et al. (CRYPTO’16) modifies Yao’s construction and shows that the resulting scheme is adaptively secure. This is done by encrypting the garbled circuit from Yao’s construction with a special type of “somewhere equivocal encryption” and giving the key together with the garbled input. The efficiency of the scheme and the security loss of the reduction is captured by a certain pebbling game over the circuit.
In this work we prove that Yao’s construction itself is already adaptively secure, where the security loss can be captured by the same pebbling game. For example, we show that for circuits of depth d, the security loss of our reduction is \(2^{O(d)}\), meaning that Yao’s construction is adaptively secure for NC1 circuits without requiring complexity leveraging. Our technique is inspired by the “nested hybrids” of Fuchsbauer et al. (Asiacrypt’14, CRYPTO’15) and relies on a careful sequence of hybrids where each hybrid involves some limited guessing about the adversary’s adaptive choices. Although it doesn’t match the parameters achieved by Hemenway et al. in their full generality, the main advantage of our work is to prove the security of Yao’s construction as is, without any additional encryption layer.
Zahra Jafargholi, Daniel Wichs
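As background for the construction discussed above, here is a toy garbling of a single AND gate in the spirit of Yao (a textbook-style sketch with a hash-derived pad standing in for the double encryption, not the paper's formalization): each wire gets two random labels, and the garbled table lets an evaluator holding one label per input wire recover exactly the label of the correct output value.

```python
import os, random, hashlib

def pad(a: bytes, b: bytes) -> bytes:
    # Hash-derived one-time pad; a textbook stand-in for the PRF-based
    # double encryption used in Yao's actual construction.
    return hashlib.sha256(a + b).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

ZERO = b"\x00" * 16  # redundancy so the evaluator can recognize the right row

# Two random 16-byte labels per wire, indexed by the bit they encode.
labels = {w: (os.urandom(16), os.urandom(16)) for w in ("u", "v", "out")}

# Garbled table for one AND gate, shuffled to hide which row is which.
table = [xor(pad(labels["u"][a], labels["v"][b]), labels["out"][a & b] + ZERO)
         for a in (0, 1) for b in (0, 1)]
random.shuffle(table)

def evaluate(ku: bytes, kv: bytes) -> bytes:
    """With one label per input wire, trial-decrypt every row; only the
    matching row (w.h.p.) ends in the all-zero redundancy pattern."""
    for row in table:
        m = xor(row, pad(ku, kv))
        if m.endswith(ZERO):
            return m[:16]
    raise ValueError("no row decrypted")

# Holding the labels for u=1 and v=1 yields exactly the label for out=1.
assert evaluate(labels["u"][1], labels["v"][1]) == labels["out"][1]
```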

Round Complexity and Efficiency of Multi-party Computation

Frontmatter
Efficient Secure Multiparty Computation with Identifiable Abort
Abstract
We study secure multiparty computation (MPC) in the dishonest majority setting providing security with identifiable abort, where if the protocol aborts, the honest parties can agree upon the identity of a corrupt party. All known constructions that achieve this notion require expensive zero-knowledge techniques to obtain active security, and so are not practical.
In this work, we present the first efficient MPC protocol with identifiable abort. Our protocol has an information-theoretic online phase with message complexity \(O(n^2)\) for each secure multiplication (where n is the number of parties), similar to the BDOZ protocol (Bendlin et al., Eurocrypt 2011); this is lower than the identifiable abort protocol of Ishai et al. (Crypto 2014) by a factor of the security parameter. A key component of our protocol is a linearly homomorphic information-theoretic signature scheme, for which we provide the first definitions and a construction based on a previous non-homomorphic scheme. We then show how to implement the preprocessing for our protocol using somewhat homomorphic encryption, similarly to the SPDZ protocol (Damgård et al., Crypto 2012).
Carsten Baum, Emmanuela Orsini, Peter Scholl
Secure Multiparty RAM Computation in Constant Rounds
Abstract
Secure computation of a random access machine (RAM) program typically entails that it first be converted into a circuit. This conversion is unimaginable in the context of big-data applications, where the size of the circuit can be exponential in the running time of the original RAM program. Realizing these constructions, without relinquishing the efficiency of RAM programs, often poses considerable technical hurdles. Our understanding of these techniques in the multi-party setting is largely limited. Specifically, the round complexity of all known protocols grows linearly with the running time of the program being computed. In this work, we consider the multi-party case and obtain the following results:
  • Semi-honest model: We present a constant-round black-box secure computation protocol for RAM programs. This protocol is obtained by building on the new black-box garbled RAM construction by Garg, Lu, and Ostrovsky [FOCS 2015] and the constant-round secure computation protocol for circuits of Beaver, Micali, and Rogaway [STOC 1990]. This construction allows execution of multiple programs on the same persistent database.
  • Malicious model: Next, we show how to extend our semi-honest results to the malicious setting, while ensuring that the new protocol is still constant-round and black-box in nature.
Sanjam Garg, Divya Gupta, Peihan Miao, Omkant Pandey
Constant-Round Maliciously Secure Two-Party Computation in the RAM Model
Abstract
The random-access memory (RAM) model of computation allows programs constant-time memory lookup and is more applicable in practice today, covering many important algorithms. This is in contrast to the classic setting of secure two-party computation (2PC), which mostly follows the approach where the desired functionality must be represented as a boolean circuit. In this work we design the first constant-round maliciously secure two-party protocol in the RAM model. Our starting point is the garbled RAM construction of Gentry et al. [16], which readily induces a constant-round semi-honest two-party protocol for any RAM program assuming identity-based encryption schemes. We show how to enhance the security of their construction to the malicious setting while facing several challenges that stem from handling the data memory. Next, we show how to apply our techniques to a more recent garbled RAM construction by Garg et al. [13] that is based on one-way functions.
Carmit Hazay, Avishay Yanai
More Efficient Constant-Round Multi-party Computation from BMR and SHE
Abstract
We present a multi-party computation protocol for the case of a dishonest majority which has very low round complexity. Our protocol sits philosophically between Gentry’s Fully Homomorphic Encryption based protocol and the SPDZ-BMR protocol of Lindell et al. (CRYPTO 2015), and avoids various inefficiencies of both. Compared to Gentry’s protocol, we only require Somewhat Homomorphic Encryption (SHE). In comparison to the SPDZ-BMR protocol, we require only quadratic (as opposed to cubic) complexity in the number of players, have fewer rounds, and require fewer proofs of correctness of ciphertexts. Additionally, we present a variant of our protocol which trades the depth of the garbling circuit (computed using SHE) for some more multiplications in the offline and online phases.
Yehuda Lindell, Nigel P. Smart, Eduardo Soria-Vazquez
Cross and Clean: Amortized Garbled Circuits with Constant Overhead
Abstract
Garbled circuits (GC) are one of the main tools for secure two-party computation. One of the most promising techniques for efficiently achieving active security in the context of GCs is the so-called cut-and-choose approach, and the main measure of efficiency in cut-and-choose based protocols is the number of garbled circuits which need to be constructed, exchanged and evaluated.
In this paper we investigate the following natural question: how many garbled circuits are needed to achieve active security? We show that in the amortized setting (for large enough circuits and numbers of executions), it is possible to achieve active security while using only a constant number of garbled circuits.
Jesper Buus Nielsen, Claudio Orlandi

Differential Privacy

Frontmatter
Separating Computational and Statistical Differential Privacy in the Client-Server Model
Abstract
Differential privacy is a mathematical definition of privacy for statistical data analysis. It guarantees that any (possibly adversarial) data analyst is unable to learn too much information that is specific to an individual. Mironov et al. (CRYPTO 2009) proposed several computational relaxations of differential privacy (CDP), which relax this guarantee to hold only against computationally bounded adversaries. Their work and subsequent work showed that CDP can yield substantial accuracy improvements in various multiparty privacy problems. However, these works left open whether such improvements are possible in the traditional client-server model of data analysis. In fact, Groce, Katz and Yerukhimovich (TCC 2011) showed that, in this setting, it is impossible to take advantage of CDP for many natural statistical tasks.
Our main result shows that, assuming the existence of sub-exponentially secure one-way functions and 2-message witness indistinguishable proofs (zaps) for \(\mathbf {NP}\), there is in fact a computational task in the client-server model that can be efficiently performed with CDP, but is infeasible to perform with information-theoretic differential privacy.
Mark Bun, Yi-Hsiu Chen, Salil Vadhan
Concentrated Differential Privacy: Simplifications, Extensions, and Lower Bounds
Abstract
“Concentrated differential privacy” was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Rényi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of “approximate concentrated differential privacy”.
Mark Bun, Thomas Steinke
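As a sketch of the Rényi-divergence reformulation mentioned above (stated here in the common zCDP form; see the paper for the exact definition used):

```latex
% Sketch: M satisfies (\xi,\rho)-zCDP if for all neighboring datasets x, x'
% and all orders \alpha \in (1,\infty),
D_{\alpha}\!\bigl(M(x)\,\big\|\,M(x')\bigr) \;\le\; \xi + \rho\,\alpha ,
% where D_{\alpha} denotes the R\'enyi divergence of order \alpha.
```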
Strong Hardness of Privacy from Weak Traitor Tracing
Abstract
A central problem in differential privacy is to accurately answer a large family Q of statistical queries over a data universe X. A statistical query on a dataset \(D \in X^n\) asks “what fraction of the elements of D satisfy a given predicate p on X?” Ignoring computational constraints, it is possible to accurately answer exponentially many queries on an exponential size universe while satisfying differential privacy (Blum et al., STOC’08). Dwork et al. (STOC’09) and Boneh and Zhandry (CRYPTO’14) showed that if both Q and X are of polynomial size, then there is an efficient differentially private algorithm that accurately answers all the queries. They also proved that if Q and X are both exponentially large, then under a plausible assumption, no efficient algorithm exists.
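The statistical query notion above, written out (a direct transcription of the verbal definition):

```latex
q_p(D) \;=\; \frac{1}{n}\,\Bigl|\bigl\{\, i \in [n] : p(x_i) = 1 \,\bigr\}\Bigr|,
\qquad D = (x_1,\ldots,x_n) \in X^n .
```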
We show that, under the same assumption, if either the number of queries or the data universe is of exponential size, then there is no differentially private algorithm that answers all the queries. Specifically, we prove that if one-way functions and indistinguishability obfuscation exist, then:
1.
For every n, there is a family Q of \({\tilde{O}}(n^7)\) queries on a data universe X of size \(2^d\) such that no \(\mathrm {poly}(n,d)\) time differentially private algorithm takes a dataset \(D \in X^n\) and outputs accurate answers to every query in Q.
 
2.
For every n, there is a family Q of \(2^d\) queries on a data universe X of size \({\tilde{O}}(n^7)\) such that no \(\mathrm {poly}(n,d)\) time differentially private algorithm takes a dataset \(D \in X^n\) and outputs accurate answers to every query in Q.
 
In both cases, the result is nearly quantitatively tight, since there is an efficient differentially private algorithm that answers \({\tilde{{\varOmega }}}(n^2)\) queries on an exponential size data universe, and one that answers exponentially many queries on a data universe of size \({\tilde{{\varOmega }}}(n^2)\).
Our proofs build on the connection between the hardness of differential privacy and traitor-tracing schemes (Dwork et al., STOC’09; Ullman, STOC’13). We prove our hardness results for a polynomial size query set (resp., data universe) by showing that they follow from the existence of a special type of traitor-tracing scheme with very short ciphertexts (resp., secret keys) but very weak security guarantees, and then constructing such a scheme.
Lucas Kowalczyk, Tal Malkin, Jonathan Ullman, Mark Zhandry
Backmatter
Metadata
Title
Theory of Cryptography
Edited by
Martin Hirt
Adam Smith
Copyright Year
2016
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-53641-4
Print ISBN
978-3-662-53640-7
DOI
https://doi.org/10.1007/978-3-662-53641-4