
2015 | Book

Theory of Cryptography

12th Theory of Cryptography Conference, TCC 2015, Warsaw, Poland, March 23-25, 2015, Proceedings, Part II

Editors: Yevgeniy Dodis, Jesper Buus Nielsen

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 9014 and LNCS 9015 constitutes the refereed proceedings of the 12th International Conference on Theory of Cryptography, TCC 2015, held in Warsaw, Poland in March 2015.
The 52 revised full papers presented were carefully reviewed and selected from 137 submissions. The papers are organized in topical sections on foundations, symmetric key, multiparty computation, concurrent and resettable security, non-malleable codes and tampering, privacy amplification, encryption and key exchange, pseudorandom functions and applications, proofs and verifiable computation, differential privacy, functional encryption, and obfuscation.

Table of Contents

Frontmatter

Pseudorandom Functions and Applications

Constrained Key-Homomorphic PRFs from Standard Lattice Assumptions
Or: How to Secretly Embed a Circuit in Your PRF
Abstract
Boneh et al. (Crypto 2013) and Banerjee and Peikert (Crypto 2014) constructed pseudorandom functions (PRFs) from the Learning with Errors (LWE) assumption by embedding combinatorial objects, a path and a tree respectively, in instances of the LWE problem. In this work, we show how to generalize this approach to embed circuits, inspired by recent progress in the study of Attribute-Based Encryption.
Embedding a universal circuit for some class of functions allows us to produce constrained keys for functions in this class, which gives us the first standard-lattice-assumption-based constrained PRF (CPRF) for general bounded-description bounded-depth functions, for arbitrary polynomial bounds on the description size and the depth. (A constrained key w.r.t. a circuit C enables one to evaluate the PRF on all x for which C(x) = 1, but reveals nothing about the PRF values at other points.) We rely on the LWE assumption and on the one-dimensional SIS (Short Integer Solution) assumption, which are both related to the worst-case hardness of general lattice problems. Previous constructions for similar function classes relied on such exotic assumptions as the existence of multilinear maps or secure program obfuscation. The main drawback of our construction is that it does not allow collusion (i.e., it does not remain secure once more than a single constrained key is provided to an adversary). Similarly to the aforementioned previous works, our PRF family is also key-homomorphic.
Interestingly, our constrained keys are very short: their length does not depend directly on either the size of the constraint circuit or the input length. We are not aware of any prior construction achieving this property, even under strong assumptions such as indistinguishability obfuscation.
Zvika Brakerski, Vinod Vaikuntanathan
Key-Homomorphic Constrained Pseudorandom Functions
Abstract
A pseudorandom function (PRF) is a keyed function \(F : {\mathcal K}\times{\mathcal X}\rightarrow{\mathcal Y}\) where, for a random key \(k\in{\mathcal K}\), the function F(k,·) is indistinguishable from a uniformly random function, given black-box access. A key-homomorphic PRF has the additional feature that for any keys k,k′ and any input x, we have F(k + k′, x) = F(k,x) ⊕ F(k′,x) for some group operations +, ⊕ on \(\mathcal{K}\) and \(\mathcal{Y}\), respectively. A constrained PRF for a family of sets \({\mathcal S} \subseteq \mathcal{P}({\mathcal X})\) has the property that, given any key k and set \(S \in \mathcal{S}\), one can efficiently compute a “constrained” key \(k_S\) that enables evaluation of F(k,x) on all inputs x ∈ S, while the values F(k,x) for x ∉ S remain pseudorandom even given \(k_S\).
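For intuition, a classical example of key homomorphism (in the random-oracle model, and not specific to the constructions below): for a group \(\mathbb{G}\) of prime order q and a hash function \(H : {\mathcal X} \rightarrow \mathbb{G}\) modeled as a random oracle, the function \(F(k, x) = H(x)^k\) with key space \(\mathbb{Z}_q\) satisfies
\[ F(k + k', x) = H(x)^{k + k'} = F(k, x) \cdot F(k', x), \]
so the key group is \((\mathbb{Z}_q, +)\) and the output group is \((\mathbb{G}, \cdot)\).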
In this paper we construct PRFs that are simultaneously constrained and key homomorphic, where the homomorphic property holds even for constrained keys. We first show that the multilinear map-based bit-fixing and circuit-constrained PRFs of Boneh and Waters (Asiacrypt 2013) can be modified to also be key-homomorphic. We then show that the LWE-based key-homomorphic PRFs of Banerjee and Peikert (Crypto 2014) are essentially already prefix-constrained PRFs, using a (non-obvious) definition of constrained keys and associated group operation. Moreover, the constrained keys themselves are pseudorandom, and the constraining and evaluation functions can all be computed in low depth.
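To illustrate the notion of a prefix-constrained PRF in isolation (a folklore property of the GGM tree construction, not the lattice-based scheme discussed above), here is a minimal Python sketch in which SHA-256 stands in for a length-doubling PRG; the constrained key for a prefix is simply the GGM tree node at that prefix.

    # Folklore GGM-style prefix-constrained PRF (illustrative only).
    # SHA-256 plays the role of a length-doubling PRG G(s) = G_0(s) || G_1(s).
    import hashlib

    def prg_half(seed: bytes, bit: int) -> bytes:
        return hashlib.sha256(bytes([bit]) + seed).digest()

    def eval_prf(key: bytes, x: str) -> bytes:
        s = key
        for b in x:                      # walk the GGM tree along the bits of x
            s = prg_half(s, int(b))
        return s

    def constrain(key: bytes, prefix: str) -> bytes:
        # The constrained key for a prefix is the tree node at that prefix;
        # it permits evaluating exactly the inputs extending the prefix.
        return eval_prf(key, prefix)

    def eval_constrained(ck: bytes, prefix: str, x: str) -> bytes:
        assert x.startswith(prefix)
        s = ck
        for b in x[len(prefix):]:
            s = prg_half(s, int(b))
        return s

    k = b"\x00" * 32
    ck = constrain(k, "01")
    assert eval_constrained(ck, "01", "0110") == eval_prf(k, "0110")

Note that this toy version is not key-homomorphic; obtaining both properties simultaneously is exactly the paper's subject.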
As an application of key-homomorphic constrained PRFs, we construct a proxy re-encryption scheme with fine-grained access control. This scheme allows storing encrypted data on an untrusted server, where each file can be encrypted relative to some attributes, so that only parties whose constrained keys match the attributes can decrypt. Moreover, the server can re-key (arbitrary subsets of) the ciphertexts without learning anything about the plaintexts, thus permitting efficient and fine-grained revocation.
Abhishek Banerjee, Georg Fuchsbauer, Chris Peikert, Krzysztof Pietrzak, Sophie Stevens
Aggregate Pseudorandom Functions and Connections to Learning
Abstract
In the first part of this work, we introduce a new type of pseudorandom function for which “aggregate queries” over exponential-sized sets can be efficiently answered. We show how to use algebraic properties of the underlying classical pseudorandom functions to construct such “aggregate pseudorandom functions” for a number of classes of aggregation queries under cryptographic hardness assumptions. For example, one aggregate query we achieve is the product of all function values accepted by a polynomial-sized read-once Boolean formula. On the flip side, we show that certain aggregate queries are impossible to support. Aggregate pseudorandom functions fall within the framework of the work of Goldreich, Goldwasser, and Nussboim [GGN10] on the “Implementation of Huge Random Objects,” providing truthful implementations of pseudorandom functions for which aggregate queries can be answered.
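As a toy illustration (ours, not one of the paper's constructions) of how algebraic structure can tame exponential-sized aggregates: for the product-form function \(F(k, x) = \prod_{i=1}^{n} k_i^{x_i}\) over a field, the sum over all \(2^n\) inputs factorizes as
\[ \sum_{x \in \{0,1\}^n} \prod_{i=1}^{n} k_i^{x_i} = \prod_{i=1}^{n} (1 + k_i), \]
so the aggregate is computable with n multiplications rather than \(2^n\) evaluations.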
In the second part of this work, we show how various extensions of pseudorandom functions considered recently in the cryptographic literature yield impossibility results for various extensions of machine learning models, continuing a line of investigation originated by Valiant and Kearns in the 1980s. The extended pseudorandom functions we address include constrained pseudorandom functions, aggregatable pseudorandom functions, and pseudorandom functions secure under related-key attacks.
Aloni Cohen, Shafi Goldwasser, Vinod Vaikuntanathan
Oblivious Polynomial Evaluation and Secure Set-Intersection from Algebraic PRFs
Abstract
In this paper we study two fundamental functionalities, oblivious polynomial evaluation in the exponent and set-intersection, and introduce a new technique for designing efficient secure protocols for these problems (and others). Our starting point is the technique of [6] (CRYPTO 2011) for verifiable delegation of polynomial evaluations using algebraic PRFs. We use this tool, originally designed to achieve verifiability in the outsourced setting, in order to achieve privacy in the standard two-party setting. Our results imply new, simple and efficient oblivious polynomial evaluation (OPE) protocols. We further show that our OPE protocols readily yield secure set-intersection, implying much simpler protocols in the plain model. As a side result, we demonstrate the usefulness of algebraic PRFs for various search functionalities, such as keyword search and oblivious transfer with adaptive queries. Our protocols are secure under full simulation-based definitions in the presence of malicious adversaries.
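For reference, the plain OPE functionality can be summarized (our notation) as
\[ \mathcal{F}_{\mathrm{OPE}} : (p, \, t) \mapsto (\lambda, \, p(t)), \]
where the sender inputs a polynomial p of publicly known degree bound, the receiver inputs a point t and learns p(t) but nothing else about p, and the sender learns nothing (\(\lambda\) denotes empty output); the “in the exponent” variant instead lets the receiver learn \(g^{p(t)}\) for a group generator g.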
Carmit Hazay
Verifiable Random Functions from Weaker Assumptions
Abstract
The construction of a verifiable random function (VRF) with large input space and full adaptive security from a static, non-interactive complexity assumption, like decisional Diffie-Hellman, has proven to be a challenging task. To date it is not even clear that such a VRF exists. Most known constructions either allow only a small input space of polynomially-bounded size, or do not achieve full adaptive security under a static, non-interactive complexity assumption.
The only known constructions without these restrictions are based on non-static, so-called “q-type” assumptions, which are parametrized by an integer q. Since q-type assumptions get stronger with larger q, it is desirable to have q as small as possible. In current constructions, q is either a polynomial (e.g., Hohenberger and Waters, Eurocrypt 2010) or at least linear (e.g., Boneh et al., CCS 2010) in the security parameter.
We show that it is possible to construct relatively simple and efficient verifiable random functions with full adaptive security and large input space from non-interactive q-type assumptions, where q is only logarithmic in the security parameter. Interestingly, our VRF is essentially identical to the verifiable unpredictable function (VUF) by Lysyanskaya (Crypto 2002), but very different from Lysyanskaya’s VRF from the same paper. Thus, our result can also be viewed as a new, direct VRF-security proof for Lysyanskaya’s VUF. As a technical tool, we introduce and construct balanced admissible hash functions.
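For orientation (a standard recap, not specific to this paper): a VRF is a keyed function \(F_{sk}\) equipped with proofs, where evaluation on x outputs a value y together with a proof \(\pi\) that anyone holding the public verification key can check; uniqueness requires that no verification key and input admit valid proofs for two distinct values, and pseudorandomness requires \(F_{sk}(x)\) to be indistinguishable from random on any point for which no proof was issued.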
Tibor Jager

Proofs and Verifiable Computation

Multi-Client Verifiable Computation with Stronger Security Guarantees
Abstract
At TCC 2013, Choi et al. introduced the notion of multi-client verifiable computation (MVC), in which a set of clients outsource to an untrusted server the computation of a function f over their collective inputs in a sequence of time periods. In that work, the authors defined and realized multi-client verifiable computation satisfying soundness against a malicious server and privacy against the semi-honest corruption of a single client. Very recently, Goldwasser et al. (Eurocrypt 2014) provided an alternative solution relying on multi-input functional encryption.
Here we conduct a systematic study of MVC, with the goal of satisfying stronger security requirements. We begin by introducing a simulation-based notion of security that provides a unified way of defining soundness and privacy, and automatically captures several attacks not addressed in previous work. We then explore the feasibility of achieving this notion of security. Assuming no collusion between the server and the clients, we demonstrate a protocol for multi-client verifiable computation that achieves stronger security than the protocol of Choi et al. in several respects. When server-client collusion is possible, we show (somewhat surprisingly) that simulation-based security cannot be achieved, even assuming only semi-honest behavior.
S. Dov Gordon, Jonathan Katz, Feng-Hao Liu, Elaine Shi, Hong-Sheng Zhou
Public Verification of Private Effort
Abstract
We introduce a new framework for polling responses from a large population. Our framework allows gathering information without violating the responders’ anonymity and at the same time enables public verification of the poll’s result. In contrast to prior approaches to the problem, we do not require trusting the pollster for faithfully announcing the poll’s results, nor do we rely on strong identity verification.
We propose an “effort based” polling protocol whose results can be publicly verified by constructing a “responder certification graph” whose nodes are labeled by responders’ replies to the poll, and whose edges cross-certify that adjacent nodes correspond to honest participants. Cross-certification is achieved using a newly introduced (privately verifiable) “Private Proof of Effort” (PPE). In effect, our protocol gives a general method for converting privately-verifiable proofs into a publicly-verifiable protocol. The soundness of the transformation relies on expansion properties of the certification graph.
Our results are applicable to a variety of settings in which crowd-sourced information gathering is required. This includes crypto-currencies, political polling, elections, recommendation systems, viewer voting in TV shows, and prediction markets.
Giulia Alberini, Tal Moran, Alon Rosen
Primary-Secondary-Resolver Membership Proof Systems
Abstract
We consider Primary-Secondary-Resolver Membership Proof Systems (PSR for short) and show different constructions of that primitive. A PSR system is a 3-party protocol in which a trusted primary commits to a set of members and their values and generates public and secret keys, so that secondaries (provers with knowledge of both keys) and resolvers (verifiers who only know the public key) can engage in interactive proof sessions regarding elements in the universe and their values. The motivation for such systems is constructing a secure Domain Name System (DNSSEC) that does not reveal any unnecessary information to its clients.
We require our systems to be complete (honest executions result in correct conclusions by the resolvers), sound (malicious secondaries cannot cheat resolvers), and zero-knowledge (resolvers do not learn additional information about elements they did not query explicitly). Providing proofs of membership is easy, as the primary can simply precompute signatures over all the members of the set. Providing proofs of non-membership, i.e. a denial-of-existence mechanism, is trickier and is the main issue in constructing PSR systems.
The construction we present in this paper uses a set of cryptographic keys for all elements of the universe which are not members, which we implement using hierarchical identity-based encryption. In the full version of this paper we present a full analysis of two additional strategies for constructing a denial-of-existence mechanism. One uses cuckoo hashing with a stash: to prove non-membership, a secondary proves that a search for the element would fail. The other uses a verifiable “random looking” function and proves non-membership by showing that an element’s value lies between two consecutive values of members.
For all three constructions we suggest fairly efficient implementations, of order comparable to other public-key operations such as signatures and encryption. The first approach offers perfect ZK and does not reveal the size of the set in question, the second can be implemented based on very solid cryptographic assumptions and uses the unique structure of cuckoo hashing, while the last technique has the potential to be highly efficient, if one could construct an efficient and secure VRF/VUF or if one is willing to live in the random oracle model.
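A minimal sketch (ours) of the ordering idea behind the last strategy, in the spirit of DNSSEC-style denial of existence; HMAC-SHA256 stands in for the verifiable “random looking” function, so the output here is only the straddling pair that a real system would additionally sign and prove:

    # Toy denial of existence via consecutive values (illustrative only).
    # HMAC-SHA256 plays the role of the verifiable random-looking function.
    import bisect, hashlib, hmac

    def vrf_like(key: bytes, name: bytes) -> bytes:
        return hmac.new(key, name, hashlib.sha256).digest()

    key = b"primary-secret"
    members = [b"alice", b"bob", b"carol"]
    images = sorted(vrf_like(key, m) for m in members)   # sorted member images

    def prove_absent(name: bytes):
        y = vrf_like(key, name)
        assert y not in images, "name is a member"
        # Exhibit the consecutive images (lo, hi) with lo < y < hi; since the
        # function is "random looking", nothing maps strictly between them.
        i = bisect.bisect_left(images, y)
        lo = images[i - 1] if i > 0 else None            # None: below smallest
        hi = images[i] if i < len(images) else None      # None: above largest
        return lo, y, hi

    print(prove_absent(b"mallory"))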
Moni Naor, Asaf Ziv
Tight Parallel Repetition Theorems for Public-Coin Arguments Using KL-Divergence
Abstract
We present a new and conceptually simpler proof of a tight parallel-repetition theorem for public-coin arguments [Pass-Venkitasubramaniam, STOC ’07], [Håstad et al., TCC ’10], [Chung-Liu, TCC ’10]. We follow the same proof framework as the previous non-tight parallel-repetition theorem of Håstad et al.—which relied on statistical distance to measure the distance between experiments—and show that it can be made tight (and further simplified) by instead relying on KL-divergence as the distance between the experiments.
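For context, the KL-divergence of distributions P and Q is \(\mathrm{KL}(P \,\|\, Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)}\), and Pinsker’s inequality relates it to statistical distance:
\[ \mathrm{SD}(P, Q) \le \sqrt{\tfrac{1}{2} \, \mathrm{KL}(P \,\|\, Q)}. \]
Roughly, KL-divergence admits a clean chain rule that statistical distance lacks, which is what enables the tighter accounting across the parallel repetitions.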
We then use this new proof to present the first tight “Chernoff-type” parallel repetition theorem for arbitrary public-coin arguments, demonstrating that parallel-repetition can be used to simultaneously decrease both the soundness and completeness error of any public-coin argument at a rate matching the standard Chernoff bound.
Kai-Min Chung, Rafael Pass
Stretching Groth-Sahai: NIZK Proofs of Partial Satisfiability
Abstract
Groth, Ostrovsky and Sahai constructed a non-interactive Zap for NP-languages by observing that the common reference string of their proof system for circuit satisfiability admits what they call correlated key generation. The latter means that it is possible to create from scratch two common reference strings in such a way that it can be publicly verified that at least one of them guarantees perfect soundness, while it is computationally infeasible to tell which one. Their technique also implies that it is possible to have NIWI Groth-Sahai proofs for certain types of equations over bilinear groups in the plain model.
We extend the result of Groth, Ostrovsky and Sahai in several directions. Given as input a predicate P computable by a monotone span program over a finite field, we show how to generate a set of common reference strings in such a way that it can be publicly verified that the subset of them which guarantees perfect soundness is accepted by the span program. We give several different flavors of the technique suitable for different application scenarios and different equation types. We use this to stretch the expressivity of Groth-Sahai proofs and construct NIZK proofs of partial satisfiability of sets of equations in a bilinear group, as well as more efficient Groth-Sahai NIWI proofs without a common reference string for a larger class of equation types. Finally, we apply our results to significantly reduce the size of the signatures of the ring signature scheme of Chandran, Groth and Sahai, or to give a more efficient proof in the standard model that a commitment opens to an element of a public list.
Carla Ràfols

Differential Privacy

Outlier Privacy
Abstract
We introduce a generalization of differential privacy called tailored differential privacy, where an individual’s privacy parameter is “tailored” for the individual based on the individual’s data and the data set. In this paper, we focus on a natural instance of tailored differential privacy, which we call outlier privacy: an individual’s privacy parameter is determined by how much of an “outlier” the individual is. We provide a new definition of an outlier and use it to introduce our notion of outlier privacy. Roughly speaking, ε(·)-outlier privacy requires that each individual in the data set is guaranteed “ε(k)-differential privacy protection”, where k is a number quantifying the “outlierness” of the individual. We demonstrate how to release accurate histograms that satisfy ε(·)-outlier privacy for various natural choices of ε(·). Additionally, we show that ε(·)-outlier privacy with our weakest choice of ε(·)—which offers no explicit privacy protection for “non-outliers”—already implies a “distributional” notion of differential privacy w.r.t. a large and natural class of distributions.
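For reference, standard \(\varepsilon\)-differential privacy requires that for every pair of data sets D, D′ differing in one individual’s record and every set S of outputs,
\[ \Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[\mathcal{M}(D') \in S]; \]
outlier privacy replaces the single global \(\varepsilon\) by \(\varepsilon(k)\), where k quantifies how much of an outlier the affected individual is.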
Edward Lui, Rafael Pass

Functional Encryption

Function-Private Functional Encryption in the Private-Key Setting
Abstract
Functional encryption supports restricted decryption keys that allow users to learn specific functions of the encrypted messages. Although the vast majority of research on functional encryption has so far focused on the privacy of the encrypted messages, in many realistic scenarios it is crucial to offer privacy also for the functions for which decryption keys are provided.
Whereas function privacy is inherently limited in the public-key setting, in the private-key setting it has tremendous potential. Specifically, one can hope to construct schemes where encryptions of messages \(m_1, \ldots, m_T\) together with decryption keys corresponding to functions \(f_1, \ldots, f_T\) reveal essentially no information other than the values \(\{ f_i(m_j) \}_{i,j \in [T]}\). Despite its great potential, the known function-private private-key schemes either support rather limited families of functions (such as inner products), or offer somewhat weak notions of function privacy.
We present a generic transformation that yields a function-private functional encryption scheme, starting with any non-function-private scheme for a sufficiently rich function class. Our transformation preserves the message privacy of the underlying scheme, and can be instantiated using a variety of existing schemes. Plugging in known constructions of functional encryption schemes, we obtain function-private schemes based on the Learning with Errors assumption, on obfuscation assumptions, on simple multilinear-map assumptions, or even on the existence of any one-way function (offering various trade-offs between security and efficiency).
Zvika Brakerski, Gil Segev
Functional Encryption for Randomized Functionalities
Abstract
In this work, we present the first definitions and constructions for functional encryption supporting randomized functionalities. The setting of randomized functionalities requires us to revisit functional encryption definitions by, for the first time, explicitly adding security requirements for dishonest encryptors, to ensure that they cannot improperly tamper with the randomness that will be used for computing outputs. Our constructions are built using indistinguishability obfuscation.
Vipul Goyal, Abhishek Jain, Venkata Koppula, Amit Sahai
Functional Encryption for Randomized Functionalities in the Private-Key Setting from Minimal Assumptions
Abstract
We present a construction of a private-key functional encryption scheme for any family of randomized functionalities based on any such scheme for deterministic functionalities that is sufficiently expressive. Instantiating our construction with existing schemes for deterministic functionalities, we obtain schemes for any family of randomized functionalities based on a variety of assumptions (including the LWE assumption, simple assumptions on multilinear maps, and even the existence of any one-way function) offering various trade-offs between security and efficiency.
Previously, Goyal, Jain, Koppula and Sahai [TCC, 2015] constructed a public-key functional encryption scheme for any family of randomized functionalities based on indistinguishability obfuscation.
One of the key insights underlying our work is that, in the private-key setting, a sufficiently expressive functional encryption scheme may be appropriately utilized for implementing proof techniques that were so far implemented based on obfuscation assumptions (such as the punctured programming technique of Sahai and Waters [STOC, 2014]). We view this as a contribution of independent interest that may prove useful in other settings as well.
Ilan Komargodski, Gil Segev, Eylon Yogev

Obfuscation

Separations in Circular Security for Arbitrary Length Key Cycles
Abstract
While standard notions of security suffice to protect any message supplied by an adversary, in some situations stronger notions of security are required. One such notion is n-circular security, where the ciphertexts \(\mathsf{Enc}(pk_1, sk_2), \mathsf{Enc}(pk_2, sk_3), \ldots, \mathsf{Enc}(pk_n, sk_1)\) should be indistinguishable from encryptions of zero.
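A toy sketch (ours) of the key-cycle structure only; a one-time-pad-style symmetric toy stands in for encryption, whereas the circular-security question above concerns genuine public-key schemes, so nothing here is security-relevant:

    # Structure of an n-key encryption cycle (illustrative only, insecure).
    import os

    def keygen(nbytes: int = 16) -> bytes:
        return os.urandom(nbytes)

    def enc(key: bytes, msg: bytes) -> bytes:
        # XOR "encryption": purely structural, standing in for Enc(pk_i, ·).
        return bytes(k ^ m for k, m in zip(key, msg))

    n = 4
    keys = [keygen() for _ in range(n)]
    # The n-cycle: key i encrypts key i+1, with indices wrapping mod n.
    cycle = [enc(keys[i], keys[(i + 1) % n]) for i in range(n)]
    print([c.hex() for c in cycle])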
In this work we prove the following results for n-circular security, based upon recent candidate constructions of indistinguishability obfuscation [18,16] and one-way functions:
– For any n there exists an encryption scheme that is IND-CPA secure but not n-circular secure.
– There exists a bit encryption scheme that is IND-CPA secure, but not 1-circular secure.
– If there exists an encryption system where an attacker can distinguish a key encryption cycle from an encryption of zeroes, then in a transformed cryptosystem there exists an attacker that recovers secret keys from the encryption cycles.
The last result is generic and applies to any such cryptosystem.
Venkata Koppula, Kim Ramchen, Brent Waters
ZAPs and Non-Interactive Witness Indistinguishability from Indistinguishability Obfuscation
Abstract
We present new constructions of two-message and one-message witness-indistinguishable proofs (ZAPs and NIWIs). This includes:
  • ZAPs (or, equivalently, non-interactive zero-knowledge in the common random string model) from indistinguishability obfuscation and one-way functions.
  • NIWIs from indistinguishability obfuscation and one-way permutations.
The previous construction of ZAPs [Dwork and Naor, FOCS 00] was based on trapdoor permutations. The two previous NIWI constructions were based either on ZAPs and a derandomization-type complexity assumption [Barak, Ong, and Vadhan CRYPTO 03], or on a specific number theoretic assumption in bilinear groups [Groth, Sahai, and Ostrovsky, CRYPTO 06].
Nir Bitansky, Omer Paneth
Random-Oracle Uninstantiability from Indistinguishability Obfuscation
Abstract
Assuming the existence of indistinguishability obfuscation (iO), we show that a number of prominent transformations in the random-oracle model are uninstantiable in the standard model. We start by showing that the Encrypt-with-Hash transform of Bellare, Boldyreva and O’Neill (CRYPTO 2007) for converting randomized public-key encryption schemes to deterministic ones is not instantiable in the standard model. To this end, we build on the recent work of Brzuska, Farshim and Mittelbach (CRYPTO 2014) and rely on the existence of iO for Turing machines or for circuits to derive two flavors of uninstantiability. The techniques that we use to establish this result are flexible and lend themselves to a number of other transformations, such as the classical Fujisaki–Okamoto transform (CRYPTO 1998) and transformations akin to those by Bellare and Keelveedhi (CRYPTO 2011) and Douceur et al. (ICDCS 2002) for obtaining KDM-secure encryption and de-duplication schemes respectively. Our results call for a re-assessment of scheme design in the random-oracle model and highlight the need for new transforms that do not suffer from iO-based attacks.
Christina Brzuska, Pooya Farshim, Arno Mittelbach
On Obfuscation with Random Oracles
Abstract
Assuming trapdoor permutations, we show that there exist function families that cannot be VBB-obfuscated even if both the obfuscator and the obfuscated program have access to a random oracle. Specifically, these families are the robust unobfuscatable families of [Bitansky-Paneth, STOC 13].
Our result stands in contrast to the general VBB obfuscation algorithms in more structured idealized models where the oracle preserves certain algebraic homomorphisms [Canetti-Vaikuntanathan, ePrint 13; Brakerski-Rothblum, TCC 14; Barak et al., Eurocrypt 14].
Ran Canetti, Yael Tauman Kalai, Omer Paneth
Obfuscation of Probabilistic Circuits and Applications
Abstract
This paper studies the question of how to define, construct, and use obfuscators for probabilistic programs. Such obfuscators compile a possibly randomized program into a deterministic one, which achieves computationally indistinguishable behavior from the original program as long as it is run on each input at most once. For obfuscation, we propose a notion that extends indistinguishability obfuscation to probabilistic circuits: it should be hard to distinguish between the obfuscations of any two circuits whose output distributions at each input are computationally indistinguishable, possibly in the presence of some auxiliary input. We call the resulting notion probabilistic indistinguishability obfuscation (pIO).
We define several variants of pIO, and study relations among them. Moreover, we give a construction of one of our variants, called X-pIO, from sub-exponentially hard indistinguishability obfuscation (for deterministic circuits) and one-way functions.
We then move on to show a number of applications of pIO. In particular, we first give a general and natural methodology for achieving fully homomorphic encryption (FHE) from variants of pIO and of semantically secure encryption schemes. In particular, one instantiation leads to FHE from any X-pIO obfuscator and any re-randomizable encryption scheme that is slightly super-polynomially secure.
We note that this constitutes the first construction of full-fledged FHE that does not rely on encryption with circular security.
Moreover, assuming sub-exponentially secure puncturable PRFs computable in NC1, sub-exponentially-secure indistinguishability obfuscation for (deterministic) NC1 circuits can be bootstrapped to obtain indistinguishability obfuscation for arbitrary (deterministic) poly-size circuits (previously such bootstrapping was known only assuming FHE with NC1 decryption algorithm).
Ran Canetti, Huijia Lin, Stefano Tessaro, Vinod Vaikuntanathan
Graph-Induced Multilinear Maps from Lattices
Abstract
Graded multilinear encodings have found extensive applications in cryptography, ranging from non-interactive key exchange protocols, to broadcast and attribute-based encryption, and even to software obfuscation. Despite seemingly unlimited applicability, essentially only two candidate constructions are known (GGH and CLT). In this work, we describe a new graph-induced multilinear encoding scheme from lattices. In a graph-induced multilinear encoding scheme, the arithmetic operations that are allowed are restricted through an explicitly defined directed graph (somewhat similar to the “asymmetric variant” of previous schemes). Our construction encodes Learning With Errors (LWE) samples in short square matrices of higher dimensions. Addition and multiplication of the encodings correspond naturally to addition and multiplication of the LWE secrets. Security of the new scheme is not known to follow from LWE hardness (or any other “nice” assumption); at present it requires making new hardness assumptions.
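At a high level (our paraphrase of the basic encoding relation), each graph node u carries a public matrix \(\mathbf{A}_u\), and an encoding of a secret \(\mathbf{S}\) on the edge \(u \to v\) is a short matrix \(\mathbf{D}\) satisfying
\[ \mathbf{A}_u \mathbf{D} = \mathbf{S} \mathbf{A}_v + \mathbf{E} \pmod{q} \]
for a small error \(\mathbf{E}\); multiplying encodings along a path \(u \to v \to w\) then gives \(\mathbf{A}_u \mathbf{D}_1 \mathbf{D}_2 \approx \mathbf{S}_1 \mathbf{S}_2 \mathbf{A}_w\), so products of encodings track products of secrets along graph paths.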
Craig Gentry, Sergey Gorbunov, Shai Halevi
Obfuscating Circuits via Composite-Order Graded Encoding
Abstract
We present a candidate obfuscator based on composite-order Graded Encoding Schemes (GES), which are a generalization of multilinear maps. Our obfuscator operates on circuits directly, without converting them into formulas or branching programs as was done in previous solutions. As a result, the time and size complexity of the obfuscated program, measured by the number of GES elements, is directly proportional to the circuit complexity of the program being obfuscated. This improves upon previous constructions whose complexity was related to the formula or branching program size. Known instantiations of Graded Encoding Schemes allow us to obfuscate circuit classes of polynomial degree, which include, for example, families of circuits of logarithmic depth.
We prove that our obfuscator is secure against a class of generic algebraic attacks, formulated by a generic graded encoding model. We further consider a more robust model which provides more power to the adversary and extend our results to this setting as well.
As a secondary contribution, we define a new simple notion of algebraic security (which was implicit in previous works) and show that it captures standard security relative to an ideal GES oracle.
Benny Applebaum, Zvika Brakerski
Adaptively Secure Two-Party Computation from Indistinguishability Obfuscation
Abstract
We present the first two-round, two-party general function evaluation protocol that is secure against honest-but-curious adaptive corruption of both parties. In addition, the protocol is incoercible for one of the parties, and fully leakage-tolerant. It requires a global (non-programmable) reference string and is based on one-way functions and general-purpose indistinguishability obfuscation with sub-exponential security, as well as augmented non-committing encryption.
A Byzantine version of the protocol, obtained by applying the Canetti et al. [STOC 02] compiler, achieves UC security with comparable efficiency parameters, but is no longer incoercible.
Ran Canetti, Shafi Goldwasser, Oxana Poburinnaya
Adaptively Secure, Universally Composable, Multiparty Computation in Constant Rounds
Abstract
Cryptographic protocols with adaptive security ensure that security holds against an adversary who can dynamically determine which parties to corrupt as the protocol progresses—or even after the protocol is finished. In the setting where all parties may potentially be corrupted, and secure erasure is not assumed, it has been a long-standing open question to design secure-computation protocols with adaptive security running in constant rounds.
Here, we show a constant-round, universally composable protocol for computing any functionality, tolerating a malicious, adaptive adversary corrupting any number of parties. Interestingly, our protocol can compute all functionalities, not just adaptively well-formed ones. The protocol relies on indistinguishability obfuscation, and assumes a common reference string.
Dana Dachman-Soled, Jonathan Katz, Vanishree Rao
Two-Round Adaptively Secure MPC from Indistinguishability Obfuscation
Abstract
Adaptively secure Multi-Party Computation (MPC), first studied by Canetti, Feige, Goldreich, and Naor in 1996, is a fundamental notion in cryptography. Adaptive security is particularly hard to achieve in settings where an arbitrary number of parties can be corrupted and honest parties are not trusted to properly erase their internal state. We did not know how to realize constant-round protocols for this task even if we were to restrict ourselves to semi-honest adversaries and to the simpler two-party setting. Specifically, the round complexity of known protocols grows with the depth of the circuit the parties are trying to compute.
In this work, using indistinguishability obfuscation, we construct a two-round UC multi-party computation protocol secure against any active, adaptive adversary corrupting an arbitrary number of parties.
Sanjam Garg, Antigoni Polychroniadou
Obfuscation-Based Non-black-box Simulation and Four Message Concurrent Zero Knowledge for NP
Abstract
We show the following result: assuming the existence of public-coin differing-inputs obfuscation (pc-diO) for the class of all polynomial-time Turing machines, there exists a four-message, fully concurrent zero-knowledge proof system for all languages in NP with negligible soundness error. This result is constructive: given pc-diO, our reduction yields an explicit protocol along with an explicit simulator that is “straight line” and runs in strict polynomial time. The obfuscation security property is used only to prove soundness.
Public-coin differing-inputs obfuscation is a notion of obfuscation closely related to indistinguishability obfuscation. Most importantly for our result, pc-diO does not suffer from any known impossibility results: recent negative results on standard differing-inputs obfuscation do not apply to pc-diO. Furthermore, candidate constructions of pc-diO for the class of all polynomial-time Turing machines are known.
Our reduction relies on a new non-black-box simulation technique which does not use the PCP theorem. We view the development of this new non-black-box simulation technique as the main contribution of our work. In addition to assuming pc-diO, our reduction also relies on standard, polynomial-time cryptographic assumptions such as collision-resistant hash functions.
Omkant Pandey, Manoj Prabhakaran, Amit Sahai
Public-Coin Differing-Inputs Obfuscation and Its Applications
Abstract
Differing-inputs obfuscation (diO) is a strengthening of indistinguishability obfuscation (iO) that has recently found applications to improving the efficiency and generality of obfuscation, functional encryption, and related primitives. Roughly speaking, a diO scheme ensures that the obfuscations of two efficiently generated programs are indistinguishable not only if the two programs are equivalent, but also if it is hard to find an input on which their outputs differ. The above “indistinguishability” and “hardness” conditions should hold even in the presence of an auxiliary input that is generated together with the programs.
The recent works of Boyle and Pass (ePrint 2013) and Garg et al. (Crypto 2014) cast serious doubt on the plausibility of general-purpose diO with respect to general auxiliary inputs. This leaves open the existence of a variant of diO that is plausible, simple, and useful for applications.
We suggest such a diO variant that we call public-coin diO. Public-coin diO restricts the original definition of diO by requiring the auxiliary input to be a public random string which is given as input to all relevant algorithms. In contrast to standard diO, we argue that it remains very plausible that current candidate constructions of iO for circuits satisfy the public-coin diO requirement.
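Slightly more formally (our paraphrase): for a sampler outputting \((C_0, C_1, \mathsf{aux})\), diO demands that if every efficient adversary given \((C_0, C_1, \mathsf{aux})\) finds an input x with \(C_0(x) \neq C_1(x)\) with at most negligible probability, then \(\mathcal{O}(C_0)\) and \(\mathcal{O}(C_1)\) are computationally indistinguishable even given \(\mathsf{aux}\); the public-coin variant requires \(\mathsf{aux}\) to consist exactly of the random coins used by the sampler.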
We demonstrate the usefulness of the new notion by showing that several applications of diO can be obtained by relying on the public-coin variant instead. These include constructions of succinct obfuscation and functional encryption schemes for Turing Machines, where the size of the obfuscated code or keys is essentially independent of the input-length, running time and space.
Yuval Ishai, Omkant Pandey, Amit Sahai
Backmatter
Metadata
Title
Theory of Cryptography
Editors
Yevgeniy Dodis
Jesper Buus Nielsen
Copyright Year
2015
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-46497-7
Print ISBN
978-3-662-46496-0
DOI
https://doi.org/10.1007/978-3-662-46497-7
