
About this Book

The two-volume set LNCS 10677 and LNCS 10678 constitutes the refereed proceedings of the 15th International Conference on Theory of Cryptography, TCC 2017, held in Baltimore, MD, USA, in November 2017.

The 51 revised full papers presented in the proceedings were carefully reviewed and selected from 150 submissions. The Theory of Cryptography Conference deals with the paradigms, approaches, and techniques used to conceptualize natural cryptographic problems and to provide algorithmic solutions to them.



Garbled Circuits and Oblivious RAM


Actively Secure Garbled Circuits with Constant Communication Overhead in the Plain Model

We consider the problem of constant-round secure two-party computation in the presence of active (malicious) adversaries. We present the first protocol that has only a constant multiplicative communication overhead compared to Yao’s protocol for passive adversaries and can be implemented in the plain model by only making a black-box use of (parallel) oblivious transfer and a pseudo-random generator. This improves over the polylogarithmic overhead of the previous best protocol. A similar result could previously be obtained only in an amortized setting, using preprocessing, or by assuming bit-oblivious-transfer as an ideal primitive that has a constant cost.
We present two variants of this result, one which is aimed at minimizing the number of oblivious transfers and another which is aimed at optimizing concrete efficiency. Our protocols are based on a novel combination of previous techniques together with a new efficient protocol to certify that pairs of strings transmitted via oblivious transfer satisfy a global relation. The communication complexity of the second variant of our protocol can beat the best previous protocols even for realistic values of the circuit size and the security parameter. This variant is particularly attractive in the offline-online setting, where the online cost is dominated by a single evaluation of an authenticated garbled circuit, and can also be made non-interactive using the Fiat-Shamir heuristic.
Carmit Hazay, Yuval Ishai, Muthuramakrishnan Venkitasubramaniam

Adaptively Indistinguishable Garbled Circuits

A garbling scheme is used to garble a circuit C and an input x in a way that reveals the output C(x) but hides everything else. An adaptively secure scheme allows the adversary to specify the input x after seeing the garbled circuit. Applebaum et al. (CRYPTO ’13) showed that in any garbling scheme with adaptive simulation-based security, the size of the garbled input must exceed the output size of the circuit. Here we show how to circumvent this lower bound and achieve significantly better efficiency under the minimal assumption that one-way functions exist by relaxing the security notion from simulation-based to indistinguishability-based.
We rely on the recent work of Hemenway et al. (CRYPTO ’16) which constructed an adaptive simulation-based garbling scheme under one-way functions. The size of the garbled input in their scheme is as large as the output size of the circuit plus a certain pebble complexity of the circuit, where the latter is (e.g.,) bounded by the space complexity of the computation. By building on top of their construction and adapting their proof technique, we show how to remove the output size dependence in their result when considering indistinguishability-based security.
As an application of the above result, we get a symmetric-key functional encryption based on one-way functions, with indistinguishability-based security where the adversary can obtain an unbounded number of function secret keys and then adaptively a single challenge ciphertext. The size of the ciphertext only depends on the maximal pebble complexity of each of the functions but not on the number of functions or their circuit size.
Zahra Jafargholi, Alessandra Scafuro, Daniel Wichs
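The garbling interface the two papers above build on can be made concrete with a toy single-gate sketch. This is not either paper's construction; it is a minimal Yao-style illustration using HMAC as the pseudo-random function and a zero tag with trial decryption in place of the standard point-and-permute optimization (all names and parameters here are illustrative):

```python
import hashlib, hmac, os, random

LABEL, TAG = 32, 16  # wire-label bytes; zero-tag bytes used to recognize the right row

def pad(la, lb):
    # PRF keyed by both input labels; HMAC-SHA512 yields 64 bytes, we use 48.
    return hmac.new(la + lb, b"gate0", hashlib.sha512).digest()[:LABEL + TAG]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and():
    # Two random labels per wire: index 0 encodes bit 0, index 1 encodes bit 1.
    a = (os.urandom(LABEL), os.urandom(LABEL))
    b = (os.urandom(LABEL), os.urandom(LABEL))
    out = (os.urandom(LABEL), os.urandom(LABEL))
    # Encrypt the AND output label under each input-label pair.
    table = [xor(pad(a[i], b[j]), out[i & j] + b"\x00" * TAG)
             for i in (0, 1) for j in (0, 1)]
    random.shuffle(table)  # hide which row is which (toy stand-in for point-and-permute)
    return a, b, out, table

def evaluate(la, lb, table):
    # Given one label per input wire, trial-decrypt each row; the zero tag
    # identifies the correct one, revealing only the output label.
    for ct in table:
        pt = xor(ct, pad(la, lb))
        if pt.endswith(b"\x00" * TAG):
            return pt[:LABEL]
    raise ValueError("no row decrypted")
```

Evaluating with labels for bits (1, 1) yields the label for output bit 1 and nothing else; extending this to a circuit means feeding recovered output labels into downstream gates.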

Circuit OPRAM: Unifying Statistically and Computationally Secure ORAMs and OPRAMs

An Oblivious Parallel RAM (OPRAM) provides a general method to simulate any Parallel RAM (PRAM) program, such that the resulting memory access patterns leak nothing about secret inputs. OPRAM was originally proposed by Boyle et al. as the natural parallel counterpart of Oblivious RAM (ORAM), which was shown to have broad applications, e.g., in cloud outsourcing, secure processor design, and secure multi-party computation. Since parallelism is common in modern computing architectures such as multi-core processors or cluster computing, OPRAM is a powerful and desirable building block, just as its sequential counterpart ORAM is.
Although earlier works have shown how to construct OPRAM schemes with polylogarithmic simulation overhead, in comparison with the best known sequential ORAM constructions, all existing OPRAM schemes are (poly-)logarithmic factors more expensive. In this paper, we present a new framework in which we construct both statistically secure and computationally secure OPRAM schemes whose asymptotic performance matches the best known ORAM schemes in each setting. Since an OPRAM scheme with simulation overhead \(\chi \) directly implies an ORAM scheme with simulation overhead \(\chi \), our result can be regarded as providing a unifying framework in which we can subsume all known results on statistically and computationally secure ORAMs and OPRAMs alike. Particularly for the case of OPRAMs, we also improve the state-of-the-art scheme by superlogarithmic factors.
Achieving the aforementioned results requires us to combine a variety of techniques involving (1) efficient parallel oblivious algorithm design; and (2) designing tight randomized algorithms and proving measure concentration bounds for the rather involved stochastic process induced by the OPRAM algorithm.
T.-H. Hubert Chan, Elaine Shi
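For readers new to ORAM, the security goal can be illustrated with the trivial linear-scan construction: every logical access touches every physical slot, so the observable access pattern is independent of the address, at O(n) overhead. The schemes above achieve the same hiding goal with polylogarithmic overhead. A minimal sketch (block encryption, which a real ORAM would apply on every touch, is omitted for brevity):

```python
class LinearScanORAM:
    def __init__(self, n, block=16):
        # n fixed-size blocks; in a real ORAM each block would be stored encrypted.
        self.store = [bytes(block) for _ in range(n)]

    def access(self, op, addr, data=None):
        out = None
        for i in range(len(self.store)):       # touch every slot on every access
            if i == addr:
                out = self.store[i]            # value before a write, as usual
                if op == "write":
                    self.store[i] = data
            else:
                self.store[i] = self.store[i]  # dummy re-write (re-encrypt in practice)
        return out
```

An observer of the memory bus sees the same full scan whether address 0 or address n-1 was accessed, which is exactly the obliviousness property; the cost of this naive approach is what ORAM/OPRAM research drives down.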

Zero-Knowledge and Non-Malleability


Resettably-Sound Resettable Zero Knowledge in Constant Rounds

In FOCS 2001 Barak et al. conjectured the existence of zero-knowledge arguments that remain secure against resetting provers and resetting verifiers. The conjecture was proven true by Deng et al. in FOCS 2009 under various complexity assumptions and requiring a polynomial number of rounds. Later on in FOCS 2013 Chung et al. improved the assumptions requiring one-way functions only but still with a polynomial number of rounds.
In this work we show a constant-round resettably-sound resettable zero-knowledge argument system, therefore improving the round complexity from polynomial to constant. We obtain this result through the following steps.
We show an explicit transform from any \(\ell \)-round concurrent zero-knowledge argument system into an \(O(\ell )\)-round resettable zero-knowledge argument system. The transform is based on techniques proposed by Barak et al. in FOCS 2001 and by Deng et al. in FOCS 2009. Then, we make use of a recent breakthrough presented by Chung et al. in CRYPTO 2015 that solved the longstanding open question of constructing a constant-round concurrent zero-knowledge argument system from plausible polynomial-time hardness assumptions. Starting with their construction \(\varGamma \) we obtain a constant-round resettable zero-knowledge argument system \(\varLambda \).
We then show that by carefully embedding \(\varLambda \) inside \(\varGamma \) (i.e., essentially by playing a modification of the construction of Chung et al. against the construction of Chung et al.) we obtain the first constant-round resettably-sound concurrent zero-knowledge argument system \(\varDelta \).
Finally, we apply a transformation due to Deng et al. to \(\varDelta \) obtaining a resettably-sound resettable zero-knowledge argument system \(\varPi \), the main result of this work.
While our round-preserving transform for resettable zero knowledge requires one-way functions only, \(\varLambda \), \(\varDelta \), and \(\varPi \) extend the work of Chung et al. and as such they rely on the same assumptions (i.e., families of collision-resistant hash functions, one-way permutations and indistinguishability obfuscation for \(\mathcal {P}/\mathtt{{poly}}\), with slightly super-polynomial security).
Wutichai Chongchitmate, Rafail Ostrovsky, Ivan Visconti

Round Optimal Concurrent Non-malleability from Polynomial Hardness

Non-malleable commitments are a central cryptographic primitive that guarantee security against man-in-the-middle adversaries, and their exact round complexity has been a subject of great interest. Pass (TCC 2013, CC 2016) proved that non-malleable commitments with respect to commitment are impossible to construct in less than three rounds, via black-box reductions to polynomial hardness assumptions. Obtaining a matching positive result has remained an open problem so far.
While three-round constructions of non-malleable commitments have been achieved, beginning with the work of Goyal, Pandey and Richelson (STOC 2016), current constructions require super-polynomial assumptions.
In this work, we settle the question of whether three-round non-malleable commitments can be based on polynomial hardness assumptions. We give constructions based on polynomial hardness of ZAPs, as well as one out of DDH/QR/\(N^{th}\) residuosity. Our protocols also satisfy concurrent non-malleability.
Dakshita Khurana

Zero Knowledge Protocols from Succinct Constraint Detection

We study the problem of constructing proof systems that achieve both soundness and zero knowledge unconditionally (without relying on intractability assumptions). Known techniques for this goal are primarily combinatorial, despite the fact that constructions of interactive proofs (IPs) and probabilistically checkable proofs (PCPs) heavily rely on algebraic techniques to achieve their properties.
We present simple and natural modifications of well-known ‘algebraic’ IP and PCP protocols that achieve unconditional (perfect) zero knowledge in recently introduced models, overcoming limitations of known techniques.
  • We modify the PCP of Ben-Sasson and Sudan [BS08] to obtain zero knowledge for \(\mathbf {NEXP}\) in the model of Interactive Oracle Proofs [BCS16, RRR16], where the verifier, in each round, receives a PCP from the prover.
  • We modify the IP of Lund et al. [LFKN92] to obtain zero knowledge for \(\mathbf {\#P}\) in the model of Interactive PCPs [KR08], where the verifier first receives a PCP from the prover and then interacts with him.
The simulators in our zero knowledge protocols rely on solving a problem that lies at the intersection of coding theory, linear algebra, and computational complexity, which we call the succinct constraint detection problem, and consists of detecting dual constraints with polynomial support size for codes of exponential block length. Our two results rely on solutions to this problem for fundamental classes of linear codes:
  • An algorithm to detect constraints for Reed–Muller codes of exponential length. This algorithm exploits the Raz–Shpilka [RS05] deterministic polynomial identity testing algorithm, and shows, to our knowledge, a first connection of algebraic complexity theory with zero knowledge.
  • An algorithm to detect constraints for PCPs of Proximity of Reed–Solomon codes [BS08] of exponential degree. This algorithm exploits the recursive structure of the PCPs of Proximity to show that small-support constraints are “locally” spanned by a small number of small-support constraints.
Eli Ben-Sasson, Alessandro Chiesa, Michael A. Forbes, Ariel Gabizon, Michael Riabzev, Nicholas Spooner

Leakage and Tampering


How to Construct a Leakage-Resilient (Stateless) Trusted Party

Trusted parties and devices are commonly used in the real world to securely perform computations on secret inputs. However, their security can often be compromised by side-channel attacks in which the adversary obtains partial leakage on intermediate computation values. This gives rise to the following natural question: To what extent can one protect the trusted party against leakage?
Our goal is to design a hardware device T that allows \(m\ge 1\) parties to securely evaluate a function \(f(x_1,\ldots ,x_m)\) of their inputs by feeding T with encoded inputs that are obtained using local secret randomness. Security should hold even in the presence of an active adversary that can corrupt a subset of parties and obtain restricted leakage on the internal computations in T.
We design hardware devices T in this setting both for zero-knowledge proofs and for general multi-party computations. Our constructions can unconditionally resist either \(\mathsf{{AC}} ^0\) leakage or a strong form of “only computation leaks” (OCL) leakage that captures realistic side-channel attacks, providing different tradeoffs between efficiency and security.
Daniel Genkin, Yuval Ishai, Mor Weiss

Blockwise p-Tampering Attacks on Cryptographic Primitives, Extractors, and Learners

Austrin et al. [1] studied the notion of bitwise p-tampering attacks over randomized algorithms in which an efficient ‘virus’ gets to control each bit of the randomness with independent probability p in an online way. The work of [1] showed how to break certain ‘privacy primitives’ (e.g., encryption, commitments, etc.) through bitwise p-tampering, by giving a bitwise p-tampering biasing attack for increasing the average \({\mathbb {E}}[f(U_n)]\) of any efficient function \(f :\{0,1\}^n \mapsto [-1,+1]\) by \(\varOmega (p \cdot {\text {Var}}[f(U_n)])\).
In this work, we revisit and extend the bitwise tampering model of [1] to the blockwise setting, where blocks of randomness become tamperable with independent probability p. Our main result is an efficient blockwise p-tampering attack to bias the average \({\mathbb {E}}[f(\overline{X})]\) of any efficient function f mapping arbitrary \(\overline{X}\) to \([-1,+1]\) by \(\varOmega (p \cdot {\text {Var}}[f(\overline{X})])\) regardless of how \(\overline{X}\) is partitioned into individually tamperable blocks \(\overline{X}=(X_1,\dots ,X_n)\). Relying on previous works of [1, 19, 36], our main biasing attack immediately implies efficient attacks against the privacy primitives as well as seedless multi-source extractors, in a model where the attacker gets to tamper with each block (or source) of the randomness with independent probability p. Further, we show how to increase the classification error of deterministic learners in the so-called ‘targeted poisoning’ attack model under Valiant’s adversarial noise. In this model, an attacker has a ‘target’ test data d in mind and wishes to increase the error of classifying d while she gets to tamper with each training example with independent probability p in an online way.
Saeed Mahloujifar, Mohammad Mahmoody



On Zero-Testable Homomorphic Encryption and Publicly Verifiable Non-interactive Arguments

We define and study zero-testable homomorphic encryption (ZTHE) – a semantically secure, somewhat homomorphic encryption scheme equipped with a weak zero test that can identify trivial zeros. These are ciphertexts that result from homomorphically evaluating an arithmetic circuit computing the zero polynomial over the integers. This is a relaxation of the (strong) zero test provided by the notion of graded encodings, which identifies all encodings of zero.
We show that ZTHE can suffice for powerful applications. Based on any ZTHE scheme that satisfies the additional properties of correctness on adversarial ciphertexts and multi-key homomorphism, we construct publicly verifiable non-interactive arguments for delegating computation. Such arguments were previously constructed from indistinguishability obfuscation or based on so-called knowledge assumptions. The arguments we construct are adaptively sound, based on an efficiently falsifiable assumption, and only make black-box use of the underlying cryptographic primitives.
We also show that a ZTHE scheme that is sufficient for our application can be constructed based on an efficiently-falsifiable assumption over so-called “clean” graded encodings.
Omer Paneth, Guy N. Rothblum

Non-Malleable Codes


Inception Makes Non-malleable Codes Stronger

Non-malleable codes (NMCs), introduced by Dziembowski et al. [DPW10], provide a useful message integrity guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. NMCs have emerged as a fundamental object at the intersection of coding theory and cryptography.
A large body of recent work has focused on various constructions of non-malleable codes in the split-state model. Many variants of NMCs have been introduced in the literature, e.g., strong NMCs, super-strong NMCs, and continuous NMCs. Perhaps the most useful notion among these is that of continuous non-malleable codes, which allows for continuous tampering by the adversary.
In this paper we give the first efficient, information-theoretically secure construction of continuous non-malleable codes in the split-state model. En route to our main result, we obtain constructions for almost all possible notions of non-malleable codes that have been considered in the split-state model, and for which such a construction is possible. Our result is obtained by a series of black-box reductions starting from the non-malleable codes of [ADL14].
One of the main technical ingredients of our result is a new concept that we call inception coding, which we believe may be of independent interest. Our construction is also used as a building block for non-persistent (resettable) continuous non-malleable codes in the constant split-state model in [DNO16].
Divesh Aggarwal, Tomasz Kazana, Maciej Obremski

Four-State Non-malleable Codes with Explicit Constant Rate

Non-malleable codes (NMCs), introduced by Dziembowski, Pietrzak and Wichs (ITCS 2010), generalize the classical notion of error correcting codes by providing a powerful guarantee even in scenarios where error correcting codes cannot provide any guarantee: a decoded message is either the same or completely independent of the underlying message, regardless of the number of errors introduced into the codeword. Informally, NMCs are defined with respect to a family of tampering functions \(\mathcal {F}\) and guarantee that any tampered codeword either decodes to the same message or to an independent message, so long as it is tampered using a function \(f \in \mathcal {F}\).
Nearly all known constructions of NMCs are for the t-split-state family, where the adversary tampers each of the t “states” of a codeword, arbitrarily but independently. Cheraghchi and Guruswami (TCC 2014) obtain a rate-1 non-malleable code for the case where \(t = \mathcal {O}(n)\) with n being the codeword length and, in (ITCS 2014), show an upper bound of \(1-1/t\) on the best achievable rate for any t-split-state NMC. For \(t=10\), Chattopadhyay and Zuckerman (FOCS 2014) achieve a constant rate construction where the constant is unknown. In summary, there is no known construction of an NMC with an explicit constant rate for any \(t= o(n)\), let alone one that comes close to matching Cheraghchi and Guruswami’s lower bound!
In this work, we construct an efficient non-malleable code in the t-split-state model, for \(t=4\), that achieves a constant rate of \(\frac{1}{3+\zeta }\), for any constant \(\zeta > 0\), and error \(2^{-\varOmega (\ell / \log ^{c+1} \ell )}\), where \(\ell \) is the length of the message and \(c > 0\) is a constant.
Bhavana Kanukurthi, Sai Lakshmi Bhavana Obbattu, Sruthi Sekar

Secret Sharing


Evolving Secret Sharing: Dynamic Thresholds and Robustness

Threshold secret sharing schemes enable a dealer to share a secret among n parties such that only subsets of parties of cardinality at least \(k = k(n)\) can reconstruct the secret. Komargodski, Naor and Yogev (TCC 2016-B) proposed an efficient scheme for sharing a secret among an unbounded number of parties such that only subsets of k parties can recover the secret, where k is any fixed constant. This access structure is known as k-threshold. They left open the possibility of an efficient scheme for the dynamic threshold access structure, in which the qualified sets are of increasing size as the number of parties increases. We resolve this open problem and present a construction in which the share size of the t-th party is \(O(t^4\cdot \log t)\) bits.
Furthermore, we show how to generically translate any scheme for k-threshold into a scheme which is robust, where a shared secret can be recovered even if some parties hand-in incorrect shares. This answers another open problem of Komargodski et al. Our construction is based on the construction of robust (classical) secret sharing schemes of Cramer et al. (EUROCRYPT 2008) using algebraic manipulation detection codes.
Ilan Komargodski, Anat Paskin-Cherniavsky
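The classical building block behind threshold schemes is Shamir's k-out-of-n sharing over a prime field: the secret is the constant term of a random degree-(k-1) polynomial, and any k evaluations recover it by Lagrange interpolation. A minimal sketch (the prime and parameters are illustrative; this is not the evolving/dynamic-threshold scheme of the paper, whose share sizes grow with the party index):

```python
import random

P = 2**127 - 1  # a Mersenne prime defining the field (assumes secret < P)

def share(secret, k, n):
    # Random polynomial of degree k-1 with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 from any k (or more) shares.
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

Any k-1 shares are statistically independent of the secret, while any k shares determine it, which is the threshold guarantee the evolving setting extends to an unbounded, growing set of parties.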

Linear Secret-Sharing Schemes for Forbidden Graph Access Structures

A secret-sharing scheme realizes the forbidden graph access structure determined by a graph \(G=(V,E)\) if a pair of vertices can reconstruct the secret if and only if it is an edge in G. Secret-sharing schemes for forbidden graph access structures of bipartite graphs are equivalent to conditional disclosure of secrets protocols, a primitive that is used to construct attribute-based encryption schemes.
We study the complexity of realizing a forbidden graph access structure by linear secret-sharing schemes. A secret-sharing scheme is linear if the reconstruction of the secret from the shares is a linear mapping. In many applications of secret-sharing, it is required that the scheme will be linear. We provide efficient constructions and lower bounds on the share size of linear secret-sharing schemes for sparse and dense graphs, closing the gap between upper and lower bounds: Given a sparse graph with n vertices and at most \(n^{1+\beta }\) edges, for some \( 0 \le \beta < 1\), we construct a linear secret-sharing scheme realizing its forbidden graph access structure in which the total size of the shares is \(\tilde{O} (n^{1+\beta /2})\). We provide an additional construction showing that every dense graph with n vertices and at least \(\left( {\begin{array}{c}n\\ 2\end{array}}\right) - n^{1+\beta }\) edges can be realized by a linear secret-sharing scheme with the same total share size.
We provide lower bounds on the share size of linear secret-sharing schemes realizing forbidden graph access structures. We prove that for most forbidden graph access structures, the total share size of every linear secret-sharing scheme realizing these access structures is \(\varOmega (n^{3/2})\), which shows that the construction of Gay, Kerenidis, and Wee [CRYPTO 2015] is optimal. Furthermore, we show that for every \( 0 \le \beta < 1\) there exist a graph with at most \(n^{1+\beta }\) edges and a graph with at least \(\left( {\begin{array}{c}n\\ 2\end{array}}\right) -n^{1+\beta }\) edges, such that the total share size of every linear secret-sharing scheme realizing these forbidden graph access structures is \(\varOmega (n^{1+\beta /2})\). This shows that our constructions are optimal (up to poly-logarithmic factors).
Amos Beimel, Oriol Farràs, Yuval Mintz, Naty Peter

Near-Optimal Secret Sharing and Error Correcting Codes in \(\mathsf {AC}^0\)

We study the question of minimizing the computational complexity of (robust) secret sharing schemes and error correcting codes. In standard instances of these objects, both encoding and decoding involve linear algebra, and thus cannot be implemented in the class \(\mathsf {AC}^0\). The feasibility of non-trivial secret sharing schemes in \(\mathsf {AC}^0\) was recently shown by Bogdanov et al. (Crypto 2016) and that of (locally) decoding errors in \(\mathsf {AC}^0\) by Goldwasser et al. (STOC 2007).
In this paper, we show that by allowing some slight relaxation such as a small error probability, we can construct much better secret sharing schemes and error correcting codes in the class \(\mathsf {AC}^0\). In some cases, our parameters are close to optimal and would be impossible to achieve without the relaxation. Our results significantly improve previous constructions in various parameters.
Our constructions combine several ingredients in pseudorandomness and combinatorics in an innovative way. Specifically, we develop a general technique to simultaneously amplify security threshold and reduce alphabet size, using a two-level concatenation of protocols together with a random permutation. We demonstrate the broader usefulness of this technique by applying it in the context of a variant of secure broadcast.
Kuan Cheng, Yuval Ishai, Xin Li

OT Combiners


Resource-Efficient OT Combiners with Active Security

An OT-combiner takes n candidate implementations of the oblivious transfer (OT) functionality, some of which may be faulty, and produces a secure instance of oblivious transfer as long as a large enough number of the candidates are secure. We see an OT-combiner as a 2-party protocol that can make several black-box calls to each of the n OT candidates, and we want to protect against an adversary that can corrupt one of the parties and a certain number of the OT candidates, obtaining their inputs and (in the active case) full control of their outputs.
In this work we consider perfectly (unconditionally, zero-error) secure OT-combiners and we focus on minimizing the number of calls to the candidate OTs.
First, we construct a single-use (one call per OT candidate) OT-combiner which is perfectly secure against active adversaries corrupting one party and a constant fraction of the OT candidates. This extends a previous result by Ishai et al. (ISIT 2014) that proves the same fact for passive adversaries.
Second, we consider a more general asymmetric corruption model where an adversary can corrupt different sets of OT candidates depending on whether it is Alice or Bob who is corrupted. We give sufficient and necessary conditions for the existence of an OT combiner with a given number of calls to the candidate OTs in terms of the existence of secret sharing schemes with certain access structures and share-lengths. This allows in some cases to determine the optimal number of calls to the OT candidates which are needed to construct an OT combiner secure against a given adversary.
Ignacio Cascudo, Ivan Damgård, Oriol Farràs, Samuel Ranellucci
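The basic idea of an OT combiner can be seen in the classical XOR-sharing construction for passive security: the sender secret-shares each of its two messages across the n candidates, and the receiver uses the same choice bit in every candidate, so the non-chosen message stays hidden as long as at least one candidate keeps its shares private. This is only a passive, single-use sketch, not the actively secure combiners of the paper:

```python
import functools, os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def xor_shares(msg, n):
    # n-out-of-n additive sharing: all n shares XOR to msg.
    shares = [os.urandom(len(msg)) for _ in range(n - 1)]
    shares.append(functools.reduce(xor, shares, msg))
    return shares

def combined_ot(m0, m1, c, candidate_ots):
    # candidate_ots: callables modeling 1-out-of-2 OT, f((a, b), c) -> the c-th input.
    # Equal-length messages assumed, as in standard OT.
    s0 = xor_shares(m0, len(candidate_ots))
    s1 = xor_shares(m1, len(candidate_ots))
    received = [ot((a, b), c) for ot, a, b in zip(candidate_ots, s0, s1)]
    return functools.reduce(xor, received)   # XOR of the chosen shares = m_c
```

Even if all but one candidate leaks both its inputs, the missing share of the non-chosen message keeps it uniformly random from the receiver's view; tolerating active corruption of candidates, as in the paper, requires additional machinery.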



An Equivalence Between Attribute-Based Signatures and Homomorphic Signatures, and New Constructions for Both

In Attribute-Based Signatures (ABS; first defined by Maji, Prabhakaran and Rosulek, CT-RSA 2011) an authority can generate multiple signing keys, where each key is associated with an attribute x. Messages are signed with respect to a constraint f, such that a key for x can sign messages respective to f only if \(f(x) = 0\). The security requirements are unforgeability and key privacy (signatures should not expose the specific signing key used). In (single-hop) Homomorphic Signatures (HS; first defined by Boneh and Freeman, PKC 2011), given a signature for a data-set x, one can evaluate a signature for the pair (f(x), f), for functions f. In context-hiding HS, evaluated signatures do not reveal information about the original (pre-evaluated) signatures.
In this work we start by showing that these two notions are in fact equivalent. The first implication of this equivalence is a new lattice-based ABS scheme for polynomial-depth circuits, based on the HS construction of Gorbunov, Vaikuntanathan and Wichs (GVW; STOC 2015).
We then construct a new ABS candidate from a worst case lattice assumption (\(\mathrm {SIS}\)), with different parameters. Using our equivalence again, now in the opposite direction, our new ABS implies a new lattice-based HS scheme with different parameter trade-off, compared to the aforementioned GVW.
Rotem Tsabary

On the One-Per-Message Unforgeability of (EC)DSA and Its Variants

The American signature standards DSA and ECDSA, as well as their Russian and Chinese counterparts GOST 34.10 and SM2, are of utmost importance in the current security landscape. The mentioned schemes are all rooted in the Elgamal signature scheme (1984) and use a hash function and a cyclic group as building blocks. Unfortunately, authoritative security guarantees for the schemes are still lacking: All existing positive results on their security use aggressive idealization approaches, like the generic group model, leading to debatable overall results.
In this work we conduct security analyses for a set of classic signature schemes, including the ones mentioned above, providing positive results in the following sense: If the hash function (which is instantiated with SHA1 or SHA2 in a typical DSA/ECDSA setup) is modeled as a random oracle, and the signer issues at most one signature per message, then the schemes are unforgeable if and only if they are key-only unforgeable, where the latter security notion captures that the adversary has access to the verification key but not to sample signatures. Put differently, for the named signature schemes, in the one-signature-per-message setting the signature oracle is redundant.
Manuel Fersch, Eike Kiltz, Bertram Poettering
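The Elgamal-family structure shared by DSA and its variants can be made concrete with a toy DSA over deliberately tiny, completely insecure parameters; real deployments use a 2048-bit modulus (or an elliptic-curve group) with a 256-bit subgroup and a SHA-2 based hash truncation, not the stand-ins below:

```python
import hashlib, random

p, q, g = 809, 101, 256   # toy params: q prime, q | p-1, g = 2^8 mod p has order q

def H(msg: bytes) -> int:
    # Hash into Z_q, standing in for the standards' SHA truncation.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % q

def keygen():
    x = random.randrange(1, q)     # secret key
    return x, pow(g, x, p)         # (x, public key y = g^x mod p)

def sign(x, msg):
    while True:
        k = random.randrange(1, q)                 # fresh per-signature nonce
        r = pow(g, k, p) % q
        s = pow(k, -1, q) * (H(msg) + x * r) % q   # s = k^{-1}(H(m) + x*r) mod q
        if r and s:
            return r, s

def verify(y, msg, sig):
    r, s = sig
    if not (0 < r < q and 0 < s < q):
        return False
    w = pow(s, -1, q)
    u1, u2 = H(msg) * w % q, r * w % q
    return pow(g, u1, p) * pow(y, u2, p) % p % q == r   # g^{u1} y^{u2} = g^k
```

The correctness identity k = u1 + x*u2 mod q also shows why nonce reuse is fatal here: two signatures with the same k reveal x by linear algebra over Z_q, which is one reason the one-signature-per-message analysis above is a natural setting.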

Verifiable Random Functions


A Generic Approach to Constructing and Proving Verifiable Random Functions

Verifiable Random Functions (VRFs) as introduced by Micali, Rabin and Vadhan are a special form of Pseudo Random Functions (PRFs) wherein a secret key holder can also prove validity of the function evaluation relative to a statistically binding commitment.
Prior works have approached the problem of constructing VRFs by proposing a candidate under a specific number theoretic setting — mostly in bilinear groups — and then grappling with the challenges of proving security in the VRF environments. These constructions achieved different results and tradeoffs in practical efficiency, tightness of reductions and cryptographic assumptions.
In this work we take a different approach. Instead of tackling the VRF problem as a whole, we demonstrate a simple and generic way of building Verifiable Random Functions from more basic and narrow cryptographic primitives. Then we can turn to exploring solutions to these primitives with a more focused mindset. In particular, we show that VRFs can be constructed generically from the ingredients of: (1) a 1-bounded constrained pseudo random function for a functionality that is “admissible hash friendly”, (2) a non-interactive statistically binding commitment scheme (without trusted setup) and (3) non-interactive witness indistinguishable proofs or NIWIs. The first primitive can be replaced with a more basic puncturable PRF constraint if one is willing to settle for selective security or assume sub-exponential hardness of assumptions.
In the second half of our work, we support our generic approach by giving new constructions of the underlying primitives. We first provide new constructions of perfectly binding commitments from the Learning with Errors (LWE) and Learning Parity with Noise (LPN) assumptions. Second, we give two new constructions of 1-bounded constrained PRFs for admissible hash friendly constructions. Our first construction is from the \(n\text {-}\mathsf {power DDH}\) assumption. The next is from the \(\phi \) hiding assumption.
Rishab Goyal, Susan Hohenberger, Venkata Koppula, Brent Waters

Verifiable Random Functions from Non-interactive Witness-Indistinguishable Proofs

Verifiable random functions (VRFs) are pseudorandom functions where the owner of the seed, in addition to computing the function’s value y at any point x, can also generate a non-interactive proof \(\pi \) that y is correct, without compromising pseudorandomness at other points. Being a natural primitive with a wide range of applications, considerable efforts have been directed towards the construction of such VRFs. While these efforts have resulted in a variety of algebraic constructions (from bilinear maps or the RSA problem), the relation between VRFs and other general primitives is still not well understood.
We present new constructions of VRFs from general primitives, the main one being non-interactive witness-indistinguishable proofs (NIWIs). This includes:
  • A selectively-secure VRF assuming NIWIs and non-interactive commitments. As usual, the VRF can be made adaptively-secure assuming subexponential hardness of the underlying primitives.
  • An adaptively-secure VRF assuming (polynomially-hard) NIWIs, non-interactive commitments, and (single-key) constrained pseudorandom functions for a restricted class of constraints.
The above primitives can be instantiated under various standard assumptions, which yields corresponding VRF instantiations, under different assumptions than were known so far. One notable example is a non-uniform construction of VRFs from subexponentially-hard trapdoor permutations, or more generally, from verifiable pseudorandom generators (the construction can be made uniform under a standard derandomization assumption). This partially answers an open question by Dwork and Naor (FOCS ’00).
The construction and its analysis are quite simple. Both draw from ideas commonly used in the context of indistinguishability obfuscation.
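The VRF syntax described above, a keyed function with values that come with publicly checkable proofs, can be illustrated with a minimal sketch. This is emphatically not a secure VRF (the toy “proof” is the seed itself, which destroys pseudorandomness at all other points); it only shows the (Gen, Eval, Verify) interface and the correctness condition. All names are illustrative.

```python
import hashlib
import os

# Toy sketch of the VRF interface (Gen, Eval, Verify).
# WARNING: insecure by design -- the "proof" reveals the seed.
# It illustrates only the syntax and the correctness property.

def gen():
    sk = os.urandom(32)                       # secret seed
    pk = hashlib.sha256(b"pk" + sk).digest()  # public commitment to the seed
    return sk, pk

def evaluate(sk: bytes, x: bytes):
    y = hashlib.sha256(b"out" + sk + x).digest()  # function value F_sk(x)
    pi = sk                                       # toy "proof" (leaks everything!)
    return y, pi

def verify(pk: bytes, x: bytes, y: bytes, pi: bytes) -> bool:
    # Check that pi is the committed seed and that y is its value at x.
    return (hashlib.sha256(b"pk" + pi).digest() == pk
            and hashlib.sha256(b"out" + pi + x).digest() == y)
```

A real construction replaces the revealed seed with a NIWI proof that y is consistent with the committed seed, which is precisely what the constructions in this paper achieve.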
Nir Bitansky

Fully Homomorphic Encryption


Batched Multi-hop Multi-key FHE from Ring-LWE with Compact Ciphertext Extension

Traditional fully homomorphic encryption (FHE) schemes support computation on data encrypted under a single key. In STOC 2012, López-Alt et al. introduced the notion of multi-key FHE (MKFHE), which allows homomorphic computation on ciphertexts encrypted under different keys. In this work, we focus on MKFHE constructions from standard assumptions and propose a new ring-LWE-based multi-hop MKFHE scheme. Our work is based on the Brakerski-Gentry-Vaikuntanathan (BGV) FHE scheme, whereas all previous multi-key FHE schemes from standard assumptions were based on the Gentry-Sahai-Waters (GSW) FHE scheme. Therefore, our construction can encrypt a ring element rather than a single bit and naturally inherits BGV’s advantages in the ciphertext/plaintext ratio and the complexity of homomorphic operations. Moreover, our MKFHE scheme supports the Chinese Remainder Theorem (CRT)-based ciphertext packing technique and achieves \(\mathrm{poly}(k, L, \log n)\) computation overhead for k users, circuits of depth at most L, and an n-dimensional lattice, giving, to our knowledge, the first batched MKFHE scheme based on standard assumptions. Furthermore, while the ciphertext extension algorithms of previous schemes need to perform complex computation on each ciphertext, our extension algorithm just needs to generate evaluation keys for the extended scheme, so the complexity of ciphertext extension depends only on the number of associated parties and not on the number of ciphertexts. Besides, our scheme also admits a threshold decryption protocol, from which a generalized two-round MPC protocol can be obtained as in prior works.
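The CRT-based packing mentioned above can be previewed with plain modular arithmetic: several small plaintext slots are packed into one residue modulo a product of pairwise-coprime moduli, and ring addition and multiplication then act slot-wise. This is a sketch of the packing idea only, with toy integer moduli standing in for the factors of the plaintext ring; it involves no encryption.

```python
from math import prod

# Toy CRT packing: pack slot values into one residue mod M = 7 * 11 * 13.
# Addition and multiplication mod M act independently on each slot.

moduli = [7, 11, 13]          # pairwise coprime "slots" (illustrative)
M = prod(moduli)

def pack(slots):
    # CRT reconstruction: find x mod M with x = slots[i] (mod moduli[i]).
    x = 0
    for s, m in zip(slots, moduli):
        n = M // m
        x += s * n * pow(n, -1, m)
    return x % M

def unpack(x):
    return [x % m for m in moduli]

a, b = pack([1, 2, 3]), pack([4, 5, 6])
assert unpack((a + b) % M) == [5, 7, 9]               # slot-wise addition
assert unpack((a * b) % M) == [4, 10, 5]              # slot-wise product mod each slot
```

In the batched scheme, the same principle is applied to the factorization of the plaintext ring, so one homomorphic operation processes many plaintexts at once.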
Long Chen, Zhenfeng Zhang, Xueqing Wang

Database Privacy


Strengthening the Security of Encrypted Databases: Non-transitive JOINs

Database management systems that operate over encrypted data are gaining significant commercial interest. CryptDB is one such notable system, supporting a variety of SQL queries over encrypted data (Popa et al., SOSP ’11). It is a practical system obtained by combining a number of encryption schemes with a new cryptographic primitive for supporting SQL’s join operator.
This new primitive, an adjustable join scheme, is an encoding scheme that makes it possible to generate, for any two database columns, a token for computing their join given only their encodings. Popa et al. presented a framework for modeling the security of adjustable join schemes, but it is not completely clear what types of potential adversarial behavior it captures. Most notably, CryptDB’s join operator is transitive, which may reveal a significant amount of sensitive information.
In this work we put forward a strong and intuitive notion of security for adjustable join schemes and argue that it indeed captures the security of such schemes: we additionally introduce natural simulation-based and indistinguishability-based notions (capturing the “minimal leakage” of such schemes) and prove that our notion lies between their adaptive and non-adaptive variants.
Then, we construct an adjustable join scheme that satisfies our notion of security based on the linear assumption (or on the seemingly stronger matrix-DDH assumption for improved efficiency) in bilinear groups. Instantiating CryptDB with our scheme strengthens its security by providing a non-transitive join operator, at the following cost: the size of CryptDB’s encodings increases from one group element to four (or two under matrix-DDH), and the running time of the adjustment operation increases from one group exponentiation to four bilinear map computations (or two under matrix-DDH). Most importantly, however, the most critical and frequent operation underlying our scheme remains the comparison of single group elements, exactly as in CryptDB’s join scheme.
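The transitivity issue the abstract refers to can be made concrete with a toy DDH-style adjustable join in the spirit of CryptDB’s original scheme: each column has a key, values are encoded as group elements, and a token re-keys one column’s encodings onto another’s. The parameters below are tiny and the scheme is purely illustrative, not secure; all names are hypothetical.

```python
import hashlib
import math
import secrets

# Toy adjustable join: encode value v in column with key k as g^(H(v)*k) mod p.
# A join token for (src, dst) is dst_key * src_key^{-1} mod (p-1); raising an
# encoding to the token moves it from the src column to the dst column.
# Tiny prime, no security -- illustration of the transitivity leak only.

p = 8191            # small Mersenne prime; exponents live modulo p - 1
g = 17

def H(v: str) -> int:
    return int.from_bytes(hashlib.sha256(v.encode()).digest(), "big") % (p - 1)

def keygen() -> int:
    while True:
        k = secrets.randbelow(p - 1)
        if math.gcd(k, p - 1) == 1:   # invertible mod p-1, so tokens exist
            return k

def encode(v: str, k: int) -> int:
    return pow(g, H(v) * k % (p - 1), p)

def join_token(k_src: int, k_dst: int) -> int:
    return k_dst * pow(k_src, -1, p - 1) % (p - 1)

def adjust(enc: int, token: int) -> int:
    return pow(enc, token, p)

kA, kB, kC = keygen(), keygen(), keygen()
tAB, tBC = join_token(kA, kB), join_token(kB, kC)

# Intended use: adjust column A onto column B, then compare for equality.
assert adjust(encode("alice", kA), tAB) == encode("alice", kB)

# Transitivity leak: composing the tokens for (A, B) and (B, C) yields a
# valid token for (A, C), a join that was never authorized.
tAC = tAB * tBC % (p - 1)
assert adjust(encode("alice", kA), tAC) == encode("alice", kC)
```

A non-transitive scheme, such as the one constructed in this paper, is designed so that tokens for (A, B) and (B, C) give no way to compute a token for (A, C).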
Ilya Mironov, Gil Segev, Ido Shahaf

Can We Access a Database Both Locally and Privately?

We consider the following strong variant of private information retrieval (PIR). There is a large database x that we want to make publicly available. To this end, we post an encoding X of x together with a short public key \(\mathsf{pk}\) in a publicly accessible repository. The goal is to allow any client who comes along to retrieve a chosen bit \(x_i\) by reading a small number of bits from X, whose positions may be randomly chosen based on i and \(\mathsf{pk}\), such that even an adversary who can fully observe the access to X does not learn information about i.
Towards solving this problem, we study a weaker secret-key variant where the data is encoded and accessed by the same party. This primitive, which we call an oblivious locally decodable code (OLDC), is independently motivated by applications such as searchable symmetric encryption. We reduce the public-key variant of PIR to OLDC using an ideal form of obfuscation that can be instantiated heuristically with existing indistinguishability obfuscation candidates, or alternatively implemented with small and stateless tamper-proof hardware.
Finally, a central contribution of our work is the first proposal of an OLDC candidate. Our candidate is based on a secretly permuted Reed-Muller code. We analyze the security of this candidate against several natural attacks and leave its further study to future work.
Elette Boyle, Yuval Ishai, Rafael Pass, Mary Wootters

Towards Doubly Efficient Private Information Retrieval

Private Information Retrieval (PIR) allows a client to obtain data from a public database without disclosing the locations accessed. Traditionally, the emphasis is on preserving sublinear work for the client, while the server’s work is assumed to be inevitably at least linear in the database size. Beimel, Ishai and Malkin (JoC 2004) showed PIR schemes in which, following a linear-work preprocessing stage, the server’s work per query is sublinear in the database size. However, that work only addresses the case of multiple non-colluding servers; the existence of single-server PIR with sublinear server work remained unaddressed.
We consider single-server PIR schemes where, following a preprocessing stage in which the server obtains an encoded version of the database and the client obtains a short key, the per-query work of both server and client is polylogarithmic in the database size. Concentrating on the case where the client’s key is secret, we show:
  • A scheme, based on one-way functions, that works for a bounded number of queries, and where the server storage is linear in the number of queries plus the database size.
  • A family of schemes for an unbounded number of queries, whose security follows from a corresponding family of new hardness assumptions related to the hardness of solving a system of noisy linear equations.
We also show the insufficiency of a natural approach for obtaining doubly efficient PIR in the setting where the preprocessing is public.
Ran Canetti, Justin Holmgren, Silas Richelson



On Iterative Collision Search for LPN and Subset Sum

Iterative collision search procedures play a key role in developing combinatorial algorithms for the subset sum and learning parity with noise (LPN) problems. In both scenarios, the single-list pair-wise iterative collision search finds the most solutions and offers the best efficiency. However, due to its complex probabilistic structure, no rigorous analysis for it appears to be available to the best of our knowledge. As a result, theoretical works often resort to overly constrained and sub-optimal iterative collision search variants in exchange for analytic simplicity. In this paper, we present rigorous analysis for the single-list pair-wise iterative collision search method and its applications in subset sum and LPN. In the LPN literature, the method is known as the LF2 heuristic. Besides LF2, we also present rigorous analysis of other LPN solving heuristics and show that they work well when combined with LF2. Putting it together, we significantly narrow the gap between theoretical and heuristic algorithms for LPN.
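The core step of the single-list pair-wise method (LF2) described above can be sketched in a few lines: bucket the current list by the last b bits and XOR every pair within a bucket, producing a new single list whose elements have b more zeroed bits. This is a simplified illustration on bare integers; a real LPN solver would carry the (a, b) sample pairs through the XORs and account for noise accumulation.

```python
import random
from collections import defaultdict

# One round of single-list pair-wise iterative collision search (LF2-style):
# group samples that agree on their last b bits, then XOR all pairs within
# each group. Every output has its last b bits equal to zero.

def lf2_round(samples, b):
    mask = (1 << b) - 1
    buckets = defaultdict(list)
    for s in samples:
        buckets[s & mask].append(s)
    out = []
    for group in buckets.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                out.append(group[i] ^ group[j])   # last b bits cancel
    return out

random.seed(1)
xs = [random.getrandbits(32) for _ in range(4096)]
ys = lf2_round(xs, 8)
assert all(y & 0xFF == 0 for y in ys)
```

Because a bucket of size m yields m(m-1)/2 pairs rather than floor(m/2) disjoint ones, this variant produces the most collisions per round, which is exactly what makes its probabilistic structure hard to analyze rigorously.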
Srinivas Devadas, Ling Ren, Hanshen Xiao

Can PPAD Hardness be Based on Standard Cryptographic Assumptions?

We consider the question of whether PPAD hardness can be based on standard cryptographic assumptions, such as the existence of one-way functions or public-key encryption. This question is particularly well-motivated in light of new devastating attacks on obfuscation candidates and their underlying building blocks, which are currently the only known source for PPAD hardness.
Central in the study of obfuscation-based PPAD hardness is the sink-of-verifiable-line (SVL) problem, an intermediate step in constructing instances of the PPAD-complete problem source-or-sink. Within the framework of black-box reductions we prove the following results:
  • Average-case PPAD hardness (and even SVL hardness) does not imply any form of cryptographic hardness (not even one-way functions). Moreover, even when assuming the existence of one-way functions, average-case PPAD hardness (and, again, even SVL hardness) does not imply any public-key primitive. Thus, strong cryptographic assumptions (such as obfuscation-related ones) are not essential for average-case PPAD hardness.
  • Average-case SVL hardness cannot be based either on standard cryptographic assumptions or on average-case PPAD hardness. In particular, average-case SVL hardness is not essential for average-case PPAD hardness.
  • Any attempt to base the average-case hardness of the PPAD-complete problem source-or-sink on standard cryptographic assumptions must result in instances with a nearly-exponential number of solutions. This stands in striking contrast to the obfuscation-based approach, which results in instances having a unique solution.
Taken together, our results imply that it may still be possible to base PPAD hardness on standard cryptographic assumptions, but any such black-box attempt must significantly deviate from the obfuscation-based approach: It cannot go through the SVL problem, and it must result in source-or-sink instances with a nearly-exponential number of solutions.
Alon Rosen, Gil Segev, Ido Shahaf

