
2016 | Book

Advances in Cryptology – CRYPTO 2016

36th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 14-18, 2016, Proceedings, Part III

Edited by: Matthew Robshaw, Jonathan Katz

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The three-volume set, LNCS 9814, LNCS 9815, and LNCS 9816, constitutes the refereed proceedings of the 36th Annual International Cryptology Conference, CRYPTO 2016, held in Santa Barbara, CA, USA, in August 2016.

The 70 revised full papers presented were carefully reviewed and selected from 274 submissions. The papers are organized in the following topical sections: provable security for symmetric cryptography; asymmetric cryptography and cryptanalysis; cryptography in theory and practice; compromised systems; symmetric cryptanalysis; algorithmic number theory; symmetric primitives; asymmetric cryptography; symmetric cryptography; cryptanalytic tools; hardware-oriented cryptography; secure computation and protocols; obfuscation; quantum techniques; spooky encryption; IBE, ABE, and functional encryption; automated tools and synthesis; zero knowledge; theory.

Table of Contents

Frontmatter

Quantum Techniques

Frontmatter
Quantum Homomorphic Encryption for Polynomial-Sized Circuits
Abstract
We present a new scheme for quantum homomorphic encryption which is compact and allows for efficient evaluation of arbitrary polynomial-sized quantum circuits. Building on the framework of Broadbent and Jeffery [BJ15] and recent results in the area of instantaneous non-local quantum computation [Spe15], we show how to construct quantum gadgets that allow perfect correction of the errors which occur during the homomorphic evaluation of T gates on encrypted quantum data. Our scheme can be based on any classical (leveled) fully homomorphic encryption (FHE) scheme and requires no computational assumptions besides those already used by the classical scheme. The size of our quantum gadget depends on the space complexity of the classical decryption function – which aligns well with the current efforts to minimize the complexity of the decryption function.
Our scheme (or slight variants of it) offers a number of additional advantages such as ideal compactness, the ability to supply gadgets “on demand”, and circuit privacy for the evaluator against passive adversaries.
Yfke Dulek, Christian Schaffner, Florian Speelman
Adaptive Versus Non-Adaptive Strategies in the Quantum Setting with Applications
Abstract
We prove a general relation between adaptive and non-adaptive strategies in the quantum setting, i.e., between strategies where the adversary can or cannot adaptively base its action on some auxiliary quantum side information. Our relation holds in a very general setting, and is applicable as long as we can control the bit-size of the side information, or, more generally, its “information content”. Since adaptivity is notoriously difficult to handle in the analysis of (quantum) cryptographic protocols, this gives us a very powerful tool: as long as we have enough control over the side information, it is sufficient to restrict ourselves to non-adaptive attacks.
We demonstrate the usefulness of this methodology with two examples. The first is a quantum bit commitment scheme based on 1-bit cut-and-choose. Since bit commitment implies oblivious transfer (in the quantum setting), and oblivious transfer is universal for two-party computation, this implies the universality of 1-bit cut-and-choose, and thus solves the main open problem of [9]. The second example is a quantum bit commitment scheme proposed in 1993 by Brassard et al. It was originally suggested as an unconditionally secure scheme, back when this was thought to be possible. We partly restore the scheme by proving it secure in (a variant of) the bounded quantum storage model.
In both examples, the fact that the adversary holds quantum side information obstructs a direct analysis of the scheme, and we circumvent it by analyzing a non-adaptive version, which can be done by means of known techniques, and applying our main result.
Frédéric Dupuis, Serge Fehr, Philippe Lamontagne, Louis Salvail
Semantic Security and Indistinguishability in the Quantum World
Abstract
At CRYPTO 2013, Boneh and Zhandry initiated the study of quantum-secure encryption. They proposed the first indistinguishability definitions for the quantum world, where the actual indistinguishability only holds for classical messages, and they provided arguments why it might be hard to achieve a stronger notion. In this work, we show that stronger notions are achievable, where the indistinguishability holds for quantum superpositions of messages. We investigate exhaustively the possibilities and subtle differences in defining such a quantum indistinguishability notion for symmetric-key encryption schemes. We justify our stronger definition by showing its equivalence to novel quantum semantic-security notions that we introduce. Furthermore, we show that our new security definitions cannot be achieved by a large class of ciphers, namely those which quasi-preserve the message length. On the other hand, we provide a secure construction based on quantum-resistant pseudorandom permutations; this construction can be used as a generic transformation for turning a large class of encryption schemes into quantum-indistinguishable and hence quantum semantically secure ones. Moreover, our construction is the first completely classical encryption scheme shown to be secure against an even stronger notion of indistinguishability, which was previously known to be achievable only by using quantum messages and arbitrary quantum encryption circuits.
Tommaso Gagliardoni, Andreas Hülsing, Christian Schaffner

Spooky Encryption

Frontmatter
Spooky Encryption and Its Applications
Abstract
Consider encrypting n inputs under n independent public keys. Given the ciphertexts \(\{c_i = \mathsf {Enc}_{\mathsf {pk}_i}(x_i)\}_i\), Alice outputs ciphertexts \(c'_1,\ldots ,c'_n\) that decrypt to \(y_1,\ldots ,y_n\) respectively. What relationships between the \(x_i\)’s and \(y_i\)’s can Alice induce?
Motivated by applications to delegating computations, Dwork et al. [11] showed that a semantically secure scheme disallows signaling in this setting, meaning that \(y_i\) cannot depend on \(x_j\) for \(j \ne i\). On the other hand if the scheme is homomorphic then any local (component-wise) relationship is achievable, meaning that each \(y_i\) can be an arbitrary function of \(x_i\). However, there are also relationships which are neither signaling nor local. Dwork et al. asked if it is possible to have encryption schemes that support such “spooky” relationships. Answering this question is the focus of our work.
Our first result shows that, under the \(\textsf {LWE}\) assumption, there exist encryption schemes supporting a large class of “spooky” relationships, which we call additive function sharing (AFS) spooky. In particular, for any polynomial-time function f, Alice can ensure that \(y_1,\ldots ,y_n\) are random subject to \(\sum _{i=1}^n y_i = f(x_1,\ldots ,x_n)\). For this result, the public keys all depend on common public randomness. Our second result shows that, assuming sub-exponentially hard indistinguishability obfuscation (iO) (and additional more standard assumptions), we can remove the common randomness and choose the public keys completely independently. Furthermore, in the case of \(n=2\) inputs, we get a scheme that supports an even larger class of spooky relationships.
We discuss several implications of AFS-spooky encryption. Firstly, it gives a strong counter-example to a method proposed by Aiello et al. [1] for building arguments for \(\textsf {NP}\) from homomorphic encryption. Secondly, it gives a simple 2-round multi-party computation protocol where, at the end of the first round, the parties can locally compute an additive secret sharing of the output. Lastly, it immediately yields a function secret sharing (FSS) scheme for all functions.
We also define a notion of spooky-free encryption, which ensures that no spooky relationship is achievable. We show that any non-malleable encryption scheme is spooky-free. Furthermore, we can construct spooky-free homomorphic encryption schemes from SNARKs, and it remains an open problem whether it is possible to do so from falsifiable assumptions.
Yevgeniy Dodis, Shai Halevi, Ron D. Rothblum, Daniel Wichs
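To make the AFS-spooky target relationship concrete, the following minimal Python sketch produces outputs that are random subject to summing to \(f(x_1,\ldots ,x_n)\). It models only the output relationship; the modulus and the helper afs_share are illustrative assumptions, and the LWE-based encryption that achieves this over independently encrypted inputs is not modelled here.

```python
# Toy illustration of the additive-function-sharing (AFS) relationship:
# y_1..y_n are uniformly random subject to sum(y_i) = f(x_1..x_n) mod q.
# This is NOT the paper's spooky encryption scheme, only its target relation.
import secrets

Q = 2**61 - 1  # illustrative modulus; any group order works for the relation

def afs_share(f, xs, q=Q):
    """Return y_1..y_n, uniformly random subject to sum(y) = f(xs) mod q."""
    n = len(xs)
    ys = [secrets.randbelow(q) for _ in range(n - 1)]
    ys.append((f(*xs) - sum(ys)) % q)
    return ys

# Example: n = 3 inputs, f = product of the inputs.
xs = [7, 11, 13]
ys = afs_share(lambda a, b, c: a * b * c, xs)
assert sum(ys) % Q == (7 * 11 * 13) % Q
```

Any one share alone is uniform, which is why the abstract's 2-round MPC application ends with the parties holding an additive secret sharing of the output.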
Spooky Interaction and Its Discontents: Compilers for Succinct Two-Message Argument Systems
Abstract
We are interested in constructing short two-message arguments for various languages, where the complexity of the verifier is small (e.g. linear in the input size, or even sublinear if the input is coded appropriately).
In 2000 Aiello et al. suggested the tantalizing possibility of obtaining such arguments for all of NP. These have proved elusive, despite extensive efforts. Our work builds on the compiler of Kalai and Raz, which takes as input an interactive proof system consisting of several rounds and produces a two-message argument system. The proof of soundness of their compiler relies on superpolynomial hardness assumptions.
In this work we obtain a succinct two-message argument system for any language in NC, where the verifier’s work is linear (or even polylogarithmic). Soundness relies on any standard (polynomially hard) private information retrieval scheme or fully homomorphic encryption scheme. This is the first non-trivial two-message succinct argument system that is based on a standard polynomial-time hardness assumption. We obtain this result by proving that the compiler is sound (under standard polynomial hardness assumptions) if the verifier in the original protocol uses public coins and runs in logarithmic space. We obtain our two-message argument by applying the compiler to an interactive proof protocol of Goldwasser, Kalai and Rothblum. On the other hand, we prove that under standard assumptions there is a sound interactive proof protocol that, when run through the compiler, results in a protocol that is not sound.
Cynthia Dwork, Moni Naor, Guy N. Rothblum

Secure Computation and Protocols II

Frontmatter
Adaptively Secure Garbled Circuits from One-Way Functions
Abstract
A garbling scheme is used to garble a circuit C and an input x in a way that reveals the output C(x) but hides everything else. In many settings, the circuit can be garbled off-line without strict efficiency constraints, but the input must be garbled very efficiently on-line, with much lower complexity than evaluating the circuit. Yao’s garbling scheme [31] has essentially optimal on-line complexity, but only achieves selective security, where the adversary must choose the input x prior to seeing the garbled circuit. It has remained an open problem to achieve adaptive security, where the adversary can choose x after seeing the garbled circuit, while preserving on-line efficiency.
In this work, we modify Yao’s scheme in a way that allows us to prove adaptive security under one-way functions. In our main instantiation we achieve on-line complexity only proportional to the width w of the circuit. Alternatively, we can also get an instantiation with on-line complexity only proportional to the depth d (and the output size) of the circuit, albeit incurring a \(2^{O(d)}\) security loss in our reduction. More broadly, we relate the on-line complexity of adaptively secure garbling schemes in our framework to a certain type of pebble complexity of the circuit. As our main tool, of independent interest, we develop a new notion of somewhere equivocal encryption, which allows us to efficiently equivocate on a small subset of the message bits.
Brett Hemenway, Zahra Jafargholi, Rafail Ostrovsky, Alessandra Scafuro, Daniel Wichs
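For readers unfamiliar with garbling, here is a toy Python sketch of a single AND gate in the selective, Yao style that the paper starts from. All names and parameters are illustrative assumptions; the paper's actual contribution (adaptive security via somewhere equivocal encryption) is not modelled.

```python
# Minimal single-gate sketch of Yao-style garbling (selective flavour only).
# Each wire gets two random labels; each truth-table row encrypts the output
# label under the hash of the two matching input labels.
import hashlib, os, secrets

KLEN = 16
TAG = b"\x00" * KLEN  # redundancy so the evaluator recognises a valid row

def H(ka, kb, row):
    return hashlib.sha256(ka + kb + bytes([row])).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    labels = {w: (os.urandom(KLEN), os.urandom(KLEN)) for w in "abc"}
    table = []
    for row, (x, y) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
        out = labels["c"][x & y]
        table.append(xor(H(labels["a"][x], labels["b"][y], row), out + TAG))
    secrets.SystemRandom().shuffle(table)  # hide the row order
    return labels, table

def evaluate(table, ka, kb):
    # Try every (row, ciphertext) pair; only the genuine one reveals the tag.
    for row in range(4):
        for ct in table:
            pt = xor(H(ka, kb, row), ct)
            if pt.endswith(TAG):
                return pt[:KLEN]
    raise ValueError("no row decrypted")

labels, table = garble_and()
for x in (0, 1):
    for y in (0, 1):
        assert evaluate(table, labels["a"][x], labels["b"][y]) == labels["c"][x & y]
```

The adaptive-security difficulty the paper addresses arises exactly here: the table is fixed before the input labels are chosen, so a simulator must be able to "equivocate" parts of it after the fact.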
Rate-1, Linear Time and Additively Homomorphic UC Commitments
Abstract
We construct the first UC commitment scheme for binary strings with the optimal properties of rate approaching 1 and linear time complexity (in the amortised sense, using a small number of seed OTs). On top of this, the scheme is additively homomorphic, which allows for applications to maliciously secure 2-party computation. As tools for obtaining this, we make three contributions of independent interest: we construct the first (binary) linear time encodable codes with non-trivial distance and rate approaching 1, we construct the first almost universal hash function with small seed that can be computed in linear time, and we introduce a new primitive called interactive proximity testing that can be used to verify whether a string is close to a given linear code.
Ignacio Cascudo, Ivan Damgård, Bernardo David, Nico Döttling, Jesper Buus Nielsen
UC Commitments for Modular Protocol Design and Applications to Revocation and Attribute Tokens
Abstract
Complex cryptographic protocols are often designed from simple cryptographic primitives, such as signature schemes, encryption schemes, verifiable random functions, and zero-knowledge proofs, by bridging between them with commitments to some of their inputs and outputs. Unfortunately, the known universally composable (UC) functionalities for commitments and the cryptographic primitives mentioned above do not allow such constructions of higher-level protocols as hybrid protocols. Therefore, protocol designers typically resort to primitives with property-based definitions, often resulting in complex monolithic security proofs that are prone to mistakes and hard to verify.
We address this gap by presenting a UC functionality for non-interactive commitments that enables modular constructions of complex protocols within the UC framework. We also show how the new functionality can be used to construct hybrid protocols that combine different UC functionalities and use commitments to ensure that the same inputs are provided to different functionalities. We further provide UC functionalities for attribute tokens and revocation that can be used as building blocks together with our UC commitments. As an example of building a complex system from these new UC building blocks, we provide a construction (a hybrid protocol) of anonymous attribute tokens with revocation. Unlike existing accumulator-based schemes, our scheme allows one to accumulate several revocation lists into a single commitment value and to hide the revocation status of a user from other users and verifiers.
Jan Camenisch, Maria Dubovitskaya, Alfredo Rial
Probabilistic Termination and Composability of Cryptographic Protocols
Abstract
When analyzing the round complexity of multi-party computation (MPC), one often overlooks the fact that underlying resources, such as a broadcast channel, can by themselves be expensive to implement. For example, it is impossible to implement a broadcast channel by a (deterministic) protocol in a sub-linear (in the number of corrupted parties) number of rounds. The seminal works of Rabin and Ben-Or from the early 1980s demonstrated that limitations such as the above can be overcome by allowing parties to terminate in different rounds, igniting the study of protocols with probabilistic termination. However, absent a rigorous simulation-based definition, the suggested protocols are proven secure in a property-based manner, guaranteeing only limited composability. In this work, we define MPC with probabilistic termination in the UC framework. We further prove a special universal composition theorem for probabilistic-termination protocols, which makes it possible to compile a protocol using deterministic-termination hybrids into a protocol that uses expected-constant-round protocols for emulating these hybrids, preserving the expected round complexity of the calling protocol.
We showcase our definitions and compiler by providing the first composable protocols (with simulation-based security proofs) for the following primitives, relying on point-to-point channels: (1) expected-constant-round perfect Byzantine agreement, (2) expected-constant-round perfect parallel broadcast, and (3) perfectly secure MPC with round complexity independent of the number of parties.
Ran Cohen, Sandro Coretti, Juan Garay, Vassilis Zikas
Concurrent Non-Malleable Commitments (and More) in 3 Rounds
Abstract
The round complexity of commitment schemes secure against man-in-the-middle attacks has been the focus of extensive research for about 25 years. The recent breakthrough of Goyal et al. [22] showed that 3 rounds are sufficient for (one-left, one-right) non-malleable commitments. This result matches a lower bound of [41]. This state of affairs still leaves open the intriguing problem of constructing 3-round concurrent non-malleable commitment schemes.
In this paper we solve the above open problem by showing how to transform any 3-round (one-left, one-right) non-malleable commitment scheme (with some extractability property) into a 3-round concurrent non-malleable commitment scheme. Our transform makes use of complexity leveraging and, when instantiated with the construction of [22], gives a 3-round concurrent non-malleable commitment scheme from one-way permutations secure w.r.t. subexponential-time adversaries.
We also show a 3-round argument of knowledge and a 3-round identification scheme secure against concurrent man-in-the-middle attacks.
Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, Ivan Visconti

IBE, ABE, and Functional Encryption

Frontmatter
Programmable Hash Functions from Lattices: Short Signatures and IBEs with Small Key Sizes
Abstract
Driven by the open problem raised by Hofheinz and Kiltz [34], we study the formalization of lattice-based programmable hash functions (PHFs), and give two types of constructions using several techniques, such as a novel combination of cover-free sets and lattice trapdoors. Under the Inhomogeneous Small Integer Solution (ISIS) assumption, we show that any (non-trivial) lattice-based PHF is collision-resistant, which gives a direct application of this new primitive. We further demonstrate the power of lattice-based PHFs by giving generic constructions of signatures and identity-based encryption (IBE) in the standard model, which not only provide a way to unify several previous lattice-based schemes using the partitioning proof techniques, but also allow us to obtain a new short signature scheme and a new fully secure IBE scheme with keys consisting of a logarithmic number of matrices/vectors in the security parameter \(\kappa \). In addition, we give a refined way of combining two concrete PHFs to construct an improved short signature scheme with short verification keys from weaker assumptions. In particular, our methods depart from the confined guessing technique of Böhl et al. [8] that was used to construct previous standard-model short signature schemes with short verification keys by Ducas and Micciancio [24] and by Alperin-Sheriff [6], and allow us to achieve existential unforgeability against chosen message attacks (EUF-CMA) without resorting to chameleon hash functions.
Jiang Zhang, Yu Chen, Zhenfeng Zhang
Fully Secure Functional Encryption for Inner Products, from Standard Assumptions
Abstract
Functional encryption is a modern public-key paradigm where a master secret key can be used to derive sub-keys \(SK_F\) associated with certain functions F in such a way that the decryption operation reveals F(M), if M is the encrypted message, and nothing else. Recently, Abdalla et al. gave simple and efficient realizations of the primitive for the computation of linear functions on encrypted data: given an encryption of a vector \(\varvec{y}\) over some specified base ring, a secret key \(SK_{\varvec{x}}\) for the vector \(\varvec{x}\) allows computing \(\langle \varvec{x},\varvec{y} \rangle \). Their technique surprisingly allows for instantiations under standard assumptions, like the hardness of the Decision Diffie-Hellman (\(\mathsf {DDH}\)) and Learning-with-Errors (\(\mathsf {LWE}\)) problems. Their constructions, however, are only proved secure against selective adversaries, which have to declare the challenge messages \(M_0\) and \(M_1\) at the outset of the game.
In this paper, we provide constructions that provably achieve security against more realistic adaptive attacks (where the messages \(M_0\) and \(M_1\) may be chosen in the challenge phase, based on the previously collected information) for the same inner product functionality. Our constructions are obtained from hash proof systems endowed with homomorphic properties over the key space. They are (almost) as efficient as those of Abdalla et al. and rely on the same hardness assumptions.
In addition, we obtain a solution based on Paillier’s composite residuosity assumption, which was an open problem even in the case of selective adversaries. We also propose \(\mathsf {LWE}\)-based schemes that allow evaluation of inner products modulo a prime p, as opposed to the schemes of Abdalla et al. that are restricted to evaluations of integer inner products of short integer vectors. We finally propose a solution based on Paillier’s composite residuosity assumption that enables evaluation of inner products modulo an RSA integer \(N = p \cdot q\).
We demonstrate that the functionality of inner products over a prime field is powerful and can be used to construct bounded collusion FE for all circuits.
Shweta Agrawal, Benoît Libert, Damien Stehlé
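As background for the functionality, here is a toy Python sketch in the spirit of the selectively secure DDH-based inner-product scheme of Abdalla et al. that this paper strengthens. The tiny group parameters are assumptions chosen purely for illustration and make the sketch wholly insecure; the point is only that a key for \(\varvec{x}\) decrypts an encryption of \(\varvec{y}\) to \(\langle \varvec{x},\varvec{y} \rangle \) and nothing else.

```python
# Toy DDH-style inner-product FE (in the spirit of Abdalla et al.; NOT the
# adaptively secure construction of this paper, and insecure at this size).
import secrets

P, Q, G = 2039, 1019, 4  # toy Schnorr group: G generates the order-Q subgroup

def setup(l):
    msk = [secrets.randbelow(Q) for _ in range(l)]     # s_1..s_l
    mpk = [pow(G, s, P) for s in msk]                  # h_i = G^{s_i}
    return mpk, msk

def encrypt(mpk, y):
    r = secrets.randbelow(Q)
    ct0 = pow(G, r, P)
    cts = [pow(h, r, P) * pow(G, yi, P) % P for h, yi in zip(mpk, y)]
    return ct0, cts

def keygen(msk, x):
    return sum(si * xi for si, xi in zip(msk, x)) % Q  # sk_x = <s, x>

def decrypt(ct, x, sk_x):
    ct0, cts = ct
    num = 1
    for c, xi in zip(cts, x):
        num = num * pow(c, xi, P) % P
    e = num * pow(ct0, Q - sk_x, P) % P                # e = G^{<x, y>}
    for v in range(Q):                                 # brute-force dlog
        if pow(G, v, P) == e:
            return v
    raise ValueError("dlog not found")

mpk, msk = setup(3)
y, x = [2, 3, 5], [1, 4, 2]
ct = encrypt(mpk, y)
assert decrypt(ct, x, keygen(msk, x)) == 2*1 + 3*4 + 5*2  # <x, y> = 24
```

The final brute-force discrete logarithm is why such DDH-based schemes only recover small inner products, one of the restrictions the paper's Paillier- and LWE-based schemes lift.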
Circuit-ABE from LWE: Unbounded Attributes and Semi-adaptive Security
Abstract
We construct an LWE-based key-policy attribute-based encryption (ABE) scheme that supports attributes of unbounded polynomial length. Namely, the size of the public parameters is a fixed polynomial in the security parameter and a depth bound, and with these fixed length parameters, one can encrypt attributes of arbitrary length. Similarly, any polynomial size circuit that adheres to the depth bound can be used as the policy circuit regardless of its input length (recall that a depth d circuit can have as many as \(2^d\) inputs). This is in contrast to previous LWE-based schemes where the length of the public parameters has to grow linearly with the maximal attribute length.
We prove that our scheme is semi-adaptively secure, namely, the adversary can choose the challenge attribute after seeing the public parameters (but before any decryption keys). Previous LWE-based constructions were only able to achieve selective security. (We stress that the “complexity leveraging” technique is not applicable for unbounded attributes).
We believe that our techniques are of interest at least as much as our end result. Fundamentally, selective security and bounded attributes are both shortcomings that arise out of the current LWE proof techniques that program the challenge attributes into the public parameters. The LWE toolbox we develop in this work allows us to delay this programming. In a nutshell, the new tools include a way to generate an a-priori unbounded sequence of LWE matrices, and have fine-grained control over which trapdoor is embedded in each and every one of them, all with succinct representation.
Zvika Brakerski, Vinod Vaikuntanathan

Automated Tools and Synthesis

Frontmatter
Design in Type-I, Run in Type-III: Fast and Scalable Bilinear-Type Conversion Using Integer Programming
Abstract
Bilinear-type conversion transforms cryptographic schemes designed over symmetric groups instantiated with imperilled curves into ones that run over more secure and efficient asymmetric groups. In this paper we introduce a novel type-conversion method called IPConv using 0-1 Integer Programming. Instantiated with a widely available IP solver, it instantly converts existing intricate schemes, and can process large-scale schemes that involve more than a thousand variables and hundreds of pairings.
Such a quick and scalable method allows a new approach to designing cryptographic schemes over asymmetric bilinear groups. Namely, designers can work without much concern for the asymmetry of the computation, and the converted scheme nevertheless runs well in the asymmetric setting. We demonstrate the usefulness of conversion-aided design by presenting somewhat counter-intuitive examples where converted DLIN-based Groth-Sahai proofs are more compact than manually built SXDH-based proofs.
Masayuki Abe, Fumitaka Hoshino, Miyako Ohkubo
Linicrypt: A Model for Practical Cryptography
Abstract
A wide variety of objectively practical cryptographic schemes can be constructed using only symmetric-key operations and linear operations. To formally study this restricted class of cryptographic algorithms, we present a new model called Linicrypt. A Linicrypt program has access to a random oracle whose inputs and outputs are field elements, and otherwise manipulates data only via fixed linear combinations.
Our main technical result is that it is possible to decide in polynomial time whether two given Linicrypt programs induce computationally indistinguishable distributions (against arbitrary PPT adversaries, in the random oracle model).
We show also that indistinguishability of Linicrypt programs can be expressed as an existential formula, making the model amenable to automated program synthesis. In other words, it is possible to use a SAT/SMT solver to automatically generate Linicrypt programs satisfying a given security constraint. Interestingly, the properties of Linicrypt imply that this synthesis approach is both sound and complete. We demonstrate this approach by synthesizing Linicrypt constructions of garbled circuits.
Brent Carmer, Mike Rosulek
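To illustrate the question Linicrypt decides, here is a toy Python sketch that compares two "programs" applying fixed linear combinations to uniform field elements, checking by exhaustion whether they induce identical output distributions. This brute-force check is an illustrative assumption on my part; the paper's result is a polynomial-time decision procedure that also handles random-oracle calls, which this sketch does not model.

```python
# Do two linear programs over uniform field elements induce the same output
# distribution? Checked here by exhaustion over GF(5), for intuition only.
from collections import Counter
from itertools import product

P = 5  # tiny prime field GF(5)

def output_distribution(matrix, num_vars):
    """Distribution of M*v for uniform v in GF(P)^num_vars."""
    dist = Counter()
    for v in product(range(P), repeat=num_vars):
        out = tuple(sum(m * x for m, x in zip(row, v)) % P for row in matrix)
        dist[out] += 1
    return dist

# Program A outputs r1 + r2; program B outputs r1. Both outputs are uniform
# on GF(5), hence perfectly indistinguishable despite different code.
prog_a = [[1, 1]]
prog_b = [[1, 0]]
assert output_distribution(prog_a, 2) == output_distribution(prog_b, 2)
```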

Zero Knowledge

Frontmatter
On the Relationship Between Statistical Zero-Knowledge and Statistical Randomized Encodings
Abstract
Statistical zero-knowledge proofs (Goldwasser et al. [GMR89]) allow a computationally unbounded server to convince a computationally limited client that an input x is in a language \(\varPi \) without revealing any additional information about x that the client cannot compute by herself. Randomized encoding (RE) of functions (Ishai and Kushilevitz [IK00]) allows a computationally limited client to publish a single (randomized) message, \(\mathsf {Enc} (x)\), from which the server learns whether x is in \(\varPi \) and nothing else.
It is known that \(\mathcal {SRE} \), the class of problems that admit statistically private randomized encoding with polynomial-time client and computationally unbounded server, is contained in the class \(\mathcal {SZK} \) of problems that have statistical zero-knowledge proof. However, the exact relation between these two classes, and, in particular, the possibility of equivalence was left as an open problem.
In this paper, we explore the relationship between \(\mathcal {SRE} \) and \(\mathcal {SZK} \), and derive the following results:
  • In a non-uniform setting, statistical randomized encoding with one-sided privacy (\(\textit{1}\mathcal {RE} \)) is equivalent to non-interactive statistical zero-knowledge (\(\mathcal {NISZK} \)). These variants were studied in the past as natural relaxations/strengthenings of the original notions. Our theorem shows that proving \(\mathcal {SRE} =\mathcal {SZK} \) is equivalent to showing that \(\textit{1}\mathcal {RE} =\mathcal {SRE} \) and \(\mathcal {SZK} =\mathcal {NISZK} \). The latter is a well-known open problem (Goldreich et al. [GSV99]).
  • If \(\mathcal {SRE} \) is non-trivial (not in \(\mathcal {BPP} \)), then infinitely-often one-way functions exist. The analogous hypothesis for \(\mathcal {SZK} \) yields only auxiliary-input one-way functions (Ostrovsky [Ost91]), which is believed to be a significantly weaker implication.
  • If there exists an average-case hard language with perfect randomized encoding, then collision-resistant hash functions (CRH) exist. Again, a similar assumption for \(\mathcal {SZK} \) implies only constant-round statistically-hiding commitments, a primitive which seems weaker than CRH.
We believe that our results sharpen the relationship between \(\mathcal {SRE} \) and \(\mathcal {SZK} \) and illuminate the core differences between these two classes.
Benny Applebaum, Pavel Raykov
How to Prove Knowledge of Small Secrets
Abstract
We propose a new zero-knowledge protocol applicable to additively homomorphic functions that map integer vectors to an Abelian group. The protocol demonstrates knowledge of a short preimage and achieves amortised efficiency comparable to the approach of Cramer and Damgård from Crypto 2010, but gives a much tighter bound on what we can extract from a dishonest prover. Towards achieving this result, we develop an analysis for bins-and-balls games that might be of independent interest. We also provide a general analysis of rewinding of a cut-and-choose protocol, as well as a method to use Lyubashevsky’s rejection sampling technique efficiently in an interactive protocol when many proofs are given simultaneously.
Our new protocol yields improved proofs of plaintext knowledge for (Ring-)LWE-based cryptosystems, where such general techniques were not known before. Moreover, they can be extended to prove preimages of homomorphic hash functions as well.
Carsten Baum, Ivan Damgård, Kasper Green Larsen, Michael Nielsen
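To illustrate the rejection-sampling ingredient mentioned above, here is a toy Python sketch of Lyubashevsky-style rejection sampling for a single response; the parameters are illustrative assumptions, and the paper's amortised multi-proof machinery is not modelled.

```python
# Toy Lyubashevsky-style rejection sampling. With a uniform mask y in [-B, B]
# and |c*s| <= M, releasing z = y + c*s only when |z| <= B - M makes z
# uniform on [-(B-M), B-M], independent of the secret s.
import secrets

def respond(s, c, B, M):
    """One attempt: return z = y + c*s, or None if rejected."""
    y = secrets.randbelow(2 * B + 1) - B  # uniform mask in [-B, B]
    z = y + c * s
    return z if abs(z) <= B - M else None

s, c = 3, 5        # secret and challenge, with |c*s| <= M
B, M = 1000, 15
while (z := respond(s, c, B, M)) is None:
    pass           # retry until an attempt is accepted
assert abs(z) <= B - M  # the accepted z leaks nothing about s
```

The correctness of the leak-freeness claim is elementary: for every target z in the accepted range there is exactly one mask y producing it, whatever s is.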
Efficient Zero-Knowledge Proof of Algebraic and Non-Algebraic Statements with Applications to Privacy Preserving Credentials
Abstract
Practical anonymous credential systems are generally built around sigma-protocol ZK proofs. This requires that credentials be based on specially formed signatures. Here we ask whether we can instead use a standard (say, RSA, or (EC)DSA) signature that includes formatting and hashing messages, as a credential, and still provide privacy. Existing techniques do not provide efficient solutions for proving knowledge of such a signature: On the one hand, ZK proofs based on garbled circuits (Jawurek et al. 2013) give efficient proofs for checking formatting of messages and evaluating hash functions. On the other hand they are expensive for checking algebraic relations such as RSA or discrete-log, which can be done efficiently with sigma protocols.
We design new constructions obtaining the best of both worlds: combining the efficiency of the garbled circuit approach for non-algebraic statements and that of sigma protocols for algebraic ones. We then discuss how to use these as building-blocks to construct privacy-preserving credential systems based on standard RSA and (EC)DSA signatures.
Other applications of our techniques include anonymous credentials with more complex policies, the ability to efficiently switch between commitments (and signatures) in different groups, and secure two-party computation on committed/signed inputs.
Melissa Chase, Chaya Ganesh, Payman Mohassel
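As a concrete instance of the "algebraic side" that sigma protocols handle efficiently, here is the classic Schnorr protocol for knowledge of a discrete logarithm, sketched in Python with toy group parameters (an illustrative assumption; the garbled-circuit side for hashing and message formatting is not sketched here).

```python
# Schnorr sigma protocol: prove knowledge of w with H_pub = G^w, without
# revealing w. Toy order-Q subgroup of Z_P^*; illustrative only.
import secrets

P, Q, G = 2039, 1019, 4       # G generates the order-Q subgroup of Z_P^*

w = secrets.randbelow(Q)      # prover's witness
H_pub = pow(G, w, P)          # public statement: H_pub = G^w

r = secrets.randbelow(Q)      # commit
a = pow(G, r, P)
e = secrets.randbelow(Q)      # verifier's random challenge
z = (r + e * w) % Q           # respond

# Verify: G^z == a * H_pub^e (mod P)
assert pow(G, z, P) == a * pow(H_pub, e, P) % P
```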

Theory

Frontmatter
Fine-Grained Cryptography
Abstract
Fine-grained cryptographic primitives are ones that are secure against adversaries with an a-priori bounded polynomial amount of resources (time, space or parallel-time), where the honest algorithms use less resources than the adversaries they are designed to fool. Such primitives were previously studied in the context of time-bounded adversaries (Merkle, CACM 1978), space-bounded adversaries (Cachin and Maurer, CRYPTO 1997) and parallel-time-bounded adversaries (Håstad, IPL 1987). Our goal is to come up with fine-grained primitives (in the setting of parallel-time-bounded adversaries) and to show unconditional security of these constructions when possible, or to base security on widely believed separations of worst-case complexity classes. We show:
1. \({\textsf {NC}^{1}}\)-cryptography: Under the assumption that \({\textsf {NC}^{1}} \subsetneq \oplus {\textsf {L}}/\mathsf {poly}\), we construct one-way functions, pseudo-random generators (with sub-linear stretch), collision-resistant hash functions and, most importantly, public-key encryption schemes, all computable in \({\textsf {NC}^{1}}\) and secure against all \({\textsf {NC}^{1}}\) circuits. Our results rely heavily on the notion of randomized encodings pioneered by Applebaum, Ishai and Kushilevitz, and crucially make non-black-box use of randomized encodings for logspace classes.
2. \({\textsf {AC}^{0}}\)-cryptography: We construct (unconditionally secure) pseudo-random generators with arbitrary polynomial stretch, weak pseudo-random functions, secret-key encryption and, perhaps most interestingly, collision-resistant hash functions, computable in \({\textsf {AC}^{0}}\) and secure against all \({\textsf {AC}^{0}}\) circuits. Previously, one-way permutations and pseudo-random generators (with linear stretch) computable in \({\textsf {AC}^{0}}\) and secure against \({\textsf {AC}^{0}}\) circuits were known from the works of Håstad and Braverman.
Akshay Degwekar, Vinod Vaikuntanathan, Prashant Nalini Vasudevan
TWORAM: Efficient Oblivious RAM in Two Rounds with Applications to Searchable Encryption
Abstract
We present \(\mathsf {TWORAM}\), an asymptotically efficient oblivious RAM (ORAM) protocol providing oblivious access (read and write) of a memory index y in exactly two rounds: The client prepares an encrypted query encapsulating y and sends it to the server. The server accesses memory \(\mathsf {M}\) obliviously and returns encrypted information containing the desired value \(\mathsf {M}[y]\). The cost of \(\mathsf {TWORAM}\) is only a multiplicative factor of security parameter higher than the tree-based ORAM schemes such as the path ORAM scheme of Stefanov et al. [34].
\(\mathsf {TWORAM}\) gives rise to interesting applications, and in particular to a 4-round symmetric searchable encryption scheme where search is sublinear in the worst case and the search pattern is not leaked—the access pattern can also be concealed assuming the documents are stored in the obliviously accessed memory \(\mathsf {M}\).
Sanjam Garg, Payman Mohassel, Charalampos Papamanthou
Bounded Indistinguishability and the Complexity of Recovering Secrets
Abstract
Motivated by cryptographic applications, we study the notion of bounded indistinguishability, a natural relaxation of the well studied notion of bounded independence.
We say that two distributions \(\mu \) and \(\nu \) over \(\varSigma ^n\) are k-wise indistinguishable if their projections to any k symbols are identical. We say that a function \(f:\varSigma ^n \rightarrow \{0,1\}\) is \(\epsilon \)-fooled by k-wise indistinguishability if f cannot distinguish with advantage \(\epsilon \) between any two k-wise indistinguishable distributions \(\mu \) and \(\nu \) over \(\varSigma ^n\).
We are interested in characterizing the class of functions that are fooled by k-wise indistinguishability. While the case of k-wise independence (corresponding to one of the distributions being uniform) is fairly well understood, the more general case remained unexplored.
When \(\varSigma = \mathrm {\{0,1\}}\), we observe that whether f is fooled is closely related to its approximate degree. For larger alphabets \(\varSigma \), we obtain several positive and negative results. Our results imply the first efficient secret sharing schemes with a high secrecy threshold in which the secret can be reconstructed in AC\(^0\). More concretely, we show that for every \(0< \sigma < \rho \le 1\) it is possible to share a secret among n parties so that any set of fewer than \(\sigma n\) parties can learn nothing about the secret, any set of at least \(\rho n\) parties can reconstruct the secret, and where both the sharing and the reconstruction are done by constant-depth circuits of size \(\mathrm {{poly}}(n)\). We present additional cryptographic applications of our results to low-complexity secret sharing, visual secret sharing, leakage-resilient cryptography, and eliminating “selective failure” attacks.
Andrej Bogdanov, Yuval Ishai, Emanuele Viola, Christopher Williamson
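The definition above can be checked by brute force for small examples. Here is a minimal Python sketch (an illustrative assumption, exponential in n and useful only for tiny distributions) together with the classic even/odd-parity pair.

```python
# Brute-force check of k-wise indistinguishability: two distributions over
# Sigma^n are k-wise indistinguishable if every projection onto k coordinates
# induces identical marginals.
from collections import Counter
from itertools import combinations

def projection(dist, coords):
    """Marginal of a {string: probability} distribution on given coordinates."""
    marg = Counter()
    for s, p in dist.items():
        marg[tuple(s[i] for i in coords)] += p
    return marg

def k_wise_indistinguishable(mu, nu, k, n):
    return all(projection(mu, c) == projection(nu, c)
               for c in combinations(range(n), k))

# Over {0,1}^3: uniform on even-parity strings vs uniform on odd-parity
# strings. These are 2-wise indistinguishable, yet parity (a function every
# 3 bits see) tells them apart perfectly.
even = {s: 0.25 for s in ["000", "011", "101", "110"]}
odd  = {s: 0.25 for s in ["111", "100", "010", "001"]}
assert k_wise_indistinguishable(even, odd, 2, 3)
assert not k_wise_indistinguishable(even, odd, 3, 3)
```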
Two-Message, Oblivious Evaluation of Cryptographic Functionalities
Abstract
We study the problem of two-round oblivious evaluation of cryptographic functionalities. In this setting, one party \(P_1\) holds a private key \(\textit{sk}\) for a provably secure instance of a cryptographic functionality \(\mathcal {F} \) and the second party \(P_2\) wishes to evaluate \(\mathcal {F} _\textit{sk}\) on a value x. Although it has been known for 22 years that general functionalities cannot be computed securely in the presence of malicious adversaries with only two rounds of communication, we show the existence of a round-optimal protocol that obliviously evaluates cryptographic functionalities. Our protocol is provably secure against malicious receivers under standard assumptions and does not rely on heuristic (setup) assumptions. Our main technical contribution is a novel non-black-box technique, which makes non-black-box use of the security reduction of \(\mathcal {F} _\textit{sk}\). Specifically, our proof of malicious receiver security uses the code of the reduction, which reduces the security of \(\mathcal {F} _\textit{sk}\) to some hard problem, in order to break that problem directly. Instantiating our framework, we obtain the first two-round oblivious pseudorandom function that is secure in the standard model. This question had been open since the invention of OPRFs in 1997.
Nico Döttling, Nils Fleischhacker, Johannes Krupp, Dominique Schröder
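For contrast with the paper's standard-model result, here is the folklore blinded-DH OPRF in Python, which achieves the same two-message shape but only in the random-oracle model; the toy group and helper names are illustrative assumptions, and this is emphatically not the paper's construction.

```python
# Folklore blinded-DH OPRF: PRF_k(x) = H(x)^k. The server never sees x; the
# client never learns k. Random-oracle based, toy parameters, insecure sizes.
import hashlib, secrets

P, Q, G = 2039, 1019, 4            # toy order-Q subgroup of Z_P^*

def hash_to_group(x: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % (P - 1) + 1
    return pow(h, 2, P)            # square into the order-Q subgroup

k = secrets.randbelow(Q - 1) + 1   # server's PRF key

# Message 1 (client -> server): blind the hashed input.
x = b"my secret input"
r = secrets.randbelow(Q - 1) + 1
blinded = pow(hash_to_group(x), r, P)

# Message 2 (server -> client): raise the blinded value to the key.
response = pow(blinded, k, P)

# Client unblinds: (H(x)^{r k})^{r^{-1} mod Q} = H(x)^k.
prf_out = pow(response, pow(r, -1, Q), P)
assert prf_out == pow(hash_to_group(x), k, P)
```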
Backmatter
Metadata
Title
Advances in Cryptology – CRYPTO 2016
Edited by
Matthew Robshaw
Jonathan Katz
Copyright year
2016
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-53015-3
Print ISBN
978-3-662-53014-6
DOI
https://doi.org/10.1007/978-3-662-53015-3
