
2023 | Book

Theory of Cryptography

21st International Conference, TCC 2023, Taipei, Taiwan, November 29–December 2, 2023, Proceedings, Part III


About this book

The four-volume set LNCS 14369–14372 constitutes the refereed proceedings of the 21st International Conference on Theory of Cryptography, TCC 2023, held in Taipei, Taiwan, in November/December 2023. The 68 full papers presented in the proceedings were carefully reviewed and selected from 168 submissions. They focus on topics such as proofs and outsourcing; theoretical foundations; multi-party computation; encryption; secret sharing, PIR and memory checking; anonymity, surveillance and tampering; lower bounds; IOPs and succinctness; lattices; quantum cryptography; and Byzantine agreement, consensus and composability.

Table of Contents

Frontmatter

Anonymity, Surveillance and Tampering

Frontmatter
Lower Bounds on Anonymous Whistleblowing
Abstract
Anonymous transfer, recently introduced by Agrikola, Couteau and Maier [3] (TCC ’22), allows a sender to leak a message anonymously by participating in a public non-anonymous discussion in which everyone knows who said what. This opens up the intriguing possibility of using cryptography to ensure strong anonymity guarantees in a seemingly non-anonymous environment.
The work of [3] presented a lower bound on anonymous transfer, ruling out constructions with strong anonymity guarantees (where the adversary’s advantage in identifying the sender is negligible) against arbitrary polynomial-time adversaries. They also provided a (heuristic) upper bound, giving a scheme with weak anonymity guarantees (the adversary’s advantage in identifying the sender is inversely proportional to the number of rounds) against fine-grained adversaries whose run-time is bounded by some fixed polynomial that exceeds the run-time of the honest users. This leaves a large gap between the lower bound and the upper bound, raising the intriguing possibility that one may be able to achieve weak anonymity against arbitrary polynomial-time adversaries, or strong anonymity against fine-grained adversaries.
In this work, we present improved lower bounds on anonymous transfer, that rule out both of the above possibilities:
  • We rule out the existence of anonymous transfer with any non-trivial anonymity guarantees against general polynomial time adversaries.
  • Even if we restrict ourselves to fine-grained adversaries whose run-time is essentially equivalent to that of the honest parties, we cannot achieve strong anonymity, or even quantitatively improve over the inverse polynomial anonymity guarantees (heuristically) achieved by [3].
Consequently, constructions of anonymous transfer can only provide security against fine-grained adversaries, and even in that case they achieve at most weak quantitative forms of anonymity.
Willy Quach, LaKyah Tyner, Daniel Wichs
Anonymous Permutation Routing
Abstract
The Non-Interactive Anonymous Router (NIAR) model was introduced by Shi and Wu [SW21] as an alternative to conventional solutions to the anonymous routing problem, in which a set of senders wish to send messages to a set of receivers. In contrast to most known approaches to support anonymous routing (e.g. mix-nets, DC-nets, etc.), which rely on a network of routers communicating with users via interactive protocols, the NIAR model assumes a single router and is inherently non-interactive (after an initial setup phase). In addition to being non-interactive, the NIAR model is compelling due to the security it provides: instead of relying on the honesty of some subset of the routers, the NIAR model requires anonymity even if the router (as well as an arbitrary subset of senders/receivers) is corrupted by an honest-but-curious adversary.
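For orientation, the following is a minimal, deliberately insecure Python sketch of the NIAR syntax and data flow only; the function names (setup, enc, route, dec) are illustrative rather than the paper’s formalism, and in this toy the routing token contains the permutation in the clear, whereas a real NIAR scheme must hide it from the honest-but-curious router.

```python
import os
from typing import Dict, List, Tuple

# Deliberately insecure toy, only illustrating the NIAR syntax and data flow.
# A real scheme must keep the permutation hidden from the router.

def setup(n: int, perm: List[int]) -> Tuple[List[bytes], List[bytes], Dict]:
    # perm[i] = index of the sender whose message receiver i should obtain.
    sender_keys = [os.urandom(16) for _ in range(n)]
    receiver_keys = [sender_keys[perm[i]] for i in range(n)]
    token = {"perm": perm}              # real schemes: an *oblivious* routing token
    return sender_keys, receiver_keys, token

def enc(key: bytes, msg: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(msg, key))   # one-time pad, 16-byte messages

def route(token: Dict, cts: List[bytes]) -> List[bytes]:
    # Non-interactive: the router only permutes ciphertexts using its token.
    return [cts[token["perm"][i]] for i in range(len(cts))]

def dec(key: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, key))

if __name__ == "__main__":
    perm = [2, 0, 1]
    sk, rk, token = setup(3, perm)
    msgs = [b"hello from s0 !!", b"hello from s1 !!", b"hello from s2 !!"]
    cts = [enc(sk[j], m) for j, m in enumerate(msgs)]
    routed = route(token, cts)
    assert [dec(rk[i], routed[i]) for i in range(3)] == [msgs[perm[i]] for i in range(3)]
```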
In this paper, we present a protocol for the NIAR model that improves upon the results from [SW21] in two ways:
  • Improved computational efficiency (quadratic to near-linear): Our protocol matches the communication complexity of [SW21] for each sender/receiver, while reducing the router’s computational overhead per message from linear to polylogarithmic.
  • Relaxation of assumptions: Security of the protocol in [SW21] relies on the Decisional Linear assumption in bilinear groups, while security of our protocol follows from the existence of any rate-1 oblivious transfer (OT) protocol (instantiations of which are known to exist under the DDH, QR and LWE assumptions [DGI+19, GHO20]).
Paul Bunn, Eyal Kushilevitz, Rafail Ostrovsky
Non-Interactive Anonymous Router with Quasi-Linear Router Computation
Abstract
Anonymous routing is an important cryptographic primitive that allows users to communicate privately on the Internet, without revealing their message contents or their contacts. Until the very recent work of Shi and Wu (Eurocrypt’21), all classical anonymous routing schemes were interactive protocols, and their security relied on a threshold number of the routers being honest. The recent work of Shi and Wu suggested a new abstraction called Non-Interactive Anonymous Router (NIAR), and showed how to achieve anonymous routing non-interactively for the first time. In particular, a single untrusted router receives a token which allows it to obliviously apply a permutation to a set of encrypted messages from the senders. Shi and Wu’s construction suffers from two drawbacks: 1) the router takes time quadratic in the number of senders to obliviously route their messages; and 2) the scheme is proven secure only in the presence of static corruptions.
In this work, we show how to construct a non-interactive anonymous router scheme with sub-quadratic router computation that achieves security in the presence of adaptive corruptions. To get this result, we assume the existence of indistinguishability obfuscation and one-way functions. Our final result is obtained through a sequence of stepping stones. First, we show how to achieve the desired efficiency, but with security under static corruption and in a selective, single-challenge setting. Then, we go through a sequence of upgrades which eventually yield the final result. We devise various new techniques along the way which lead to some additional results. In particular, our techniques for reasoning about a network of obfuscated programs may be of independent interest.
Rex Fernando, Elaine Shi, Pratik Soni, Nikhil Vanjani, Brent Waters
Multi-instance Randomness Extraction and Security Against Bounded-Storage Mass Surveillance
Abstract
Consider a state-level adversary who observes and stores large amounts of encrypted data from all users on the Internet, but does not have the capacity to store it all. Later, it may target certain “persons of interest” in order to obtain their decryption keys. We would like to guarantee that, if the adversary’s storage capacity is only (say) \(1\%\) of the total encrypted data size, then even if it can later obtain the decryption keys of arbitrary users, it can only learn something about the contents of (roughly) \(1\%\) of the ciphertexts, while the rest will maintain full security. This can be seen as an extension of incompressible cryptography (Dziembowski CRYPTO’06, Guan, Wichs and Zhandry EUROCRYPT’22) to the multi-user setting. We provide solutions in both the symmetric key and public key setting with various trade-offs in terms of computational assumptions and efficiency.
As the core technical tool, we study an information-theoretic problem which we refer to as “multi-instance randomness extraction”. Suppose \(X_1, \ldots , X_t\) are correlated random variables whose total joint min-entropy rate is \(\alpha \), but we know nothing else about their individual entropies. We choose t random and independent seeds \(S_1,\ldots ,S_t\) and attempt to individually extract some small amount of randomness \(Y_i = \textsf{Ext}(X_i;S_i)\) from each \(X_i\). We’d like to say that roughly an \(\alpha \)-fraction of the extracted outputs \(Y_i\) should be indistinguishable from uniform even given all the remaining extracted outputs and all the seeds. We show that this indeed holds for specific extractors based on Hadamard and Reed-Muller codes.
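As a minimal sketch of this multi-instance setup, the snippet below instantiates \(\textsf{Ext}\) with the one-bit inner-product (Hadamard-code) extractor and applies it instance-by-instance with fresh public seeds; the correlated-source distribution and parameters are purely illustrative.

```python
import secrets
from typing import List, Tuple

def hadamard_ext(x: List[int], s: List[int]) -> int:
    # Inner-product extractor over GF(2): one output bit per (source, seed) pair.
    return sum(xi & si for xi, si in zip(x, s)) & 1

def multi_instance_extract(sources: List[List[int]]) -> Tuple[List[int], List[List[int]]]:
    # A fresh independent seed per instance; the seeds are public in the model.
    seeds = [[secrets.randbits(1) for _ in x] for x in sources]
    outputs = [hadamard_ext(x, s) for x, s in zip(sources, seeds)]
    return outputs, seeds

if __name__ == "__main__":
    t, n = 8, 64
    # Illustrative correlated sources: copies of one random string in which only
    # a few positions are re-randomized, so the joint min-entropy rate is low.
    base = [secrets.randbits(1) for _ in range(n)]
    sources = []
    for _ in range(t):
        x = base[:]
        for _ in range(4):
            x[secrets.randbelow(n)] = secrets.randbits(1)
        sources.append(x)
    outputs, _seeds = multi_instance_extract(sources)
    print(outputs)
```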
Jiaxin Guan, Daniel Wichs, Mark Zhandry
Efficiently Testable Circuits Without Conductivity
Abstract
The notion of “efficiently testable circuits” (ETC) was recently put forward by Baig et al. (ITCS’23). Informally, an ETC compiler takes as input any Boolean circuit C and outputs a circuit/inputs tuple \((C',\mathbb {T})\) where (completeness) \(C'\) is functionally equivalent to C and (security) if \(C'\) is tampered in some restricted way, then this can be detected as \(C'\) will err on at least one input in the small test set \(\mathbb {T}\). The compiler of Baig et al. detects tampering even if the adversary can tamper with all wires in the compiled circuit. Unfortunately, the model requires a strong “conductivity” restriction: the compiled circuit has gates with fan-out up to 3, but wires can only be tampered in one way even if they have fan-out greater than one. In this paper, we solve the main open question from their work and construct an ETC compiler without this conductivity restriction. While Baig et al. use gadgets computing the AND and OR of particular subsets of the wires, our compiler computes inner products with random vectors. We slightly relax their security notion and only require that tampering is detected with high probability over the choice of the randomness. Our compiler increases the size of the circuit by only a small constant factor. For a parameter \(\lambda \) (think \(\lambda \le 5\)), the number of additional input and output wires is \(|C|^{1/\lambda }\), while the number of test queries to detect an error with constant probability is around \(2^{2\lambda }\).
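To give a flavor of the inner-product idea (a simplified illustration, not the actual compiler of this paper): appending a few random-parity outputs to a circuit’s wire values means a single flipped wire escapes detection with probability only \(2^{-k}\) over the choice of the k random vectors.

```python
import secrets
from typing import List

def toy_circuit_wires(x: List[int]) -> List[int]:
    # Wire values of a tiny Boolean circuit on input x (inputs plus internal gates).
    a, b, c = x
    g1 = a & b
    g2 = g1 ^ c
    g3 = g1 | a
    return [a, b, c, g1, g2, g3]

def parity_checks(wires: List[int], vectors: List[List[int]]) -> List[int]:
    # Extra outputs: inner products of the wire vector with random 0/1 vectors.
    return [sum(w & v for w, v in zip(wires, vec)) & 1 for vec in vectors]

if __name__ == "__main__":
    k = 4
    vectors = [[secrets.randbits(1) for _ in range(6)] for _ in range(k)]
    honest = toy_circuit_wires([1, 0, 1])

    tampered = honest[:]
    tampered[3] ^= 1                    # adversary flips one internal wire value

    detected = parity_checks(honest, vectors) != parity_checks(tampered, vectors)
    print("tampering detected:", detected)   # fails only with probability 2^-k
```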
Mirza Ahad Baig, Suvradip Chakraborty, Stefan Dziembowski, Małgorzata Gałązka, Tomasz Lizurej, Krzysztof Pietrzak
Immunizing Backdoored PRGs
Abstract
A backdoored Pseudorandom Generator (PRG) is a PRG which looks pseudorandom to the outside world, but a saboteur can break PRG security by planting a backdoor into a seemingly honest choice of public parameters, pk, for the system. Backdoored PRGs became increasingly important due to revelations about NIST’s backdoored Dual EC PRG, and later results about its practical exploitability.
Motivated by this, at Eurocrypt’15 Dodis et al. [22] initiated the question of immunizing backdoored PRGs. A k-immunization scheme repeatedly applies a post-processing function to the output of k backdoored PRGs, to render any (unknown) backdoors provably useless. For \(k=1\), [22] showed that no deterministic immunization is possible, but then constructed “seeded” 1-immunizers, either in the random oracle model or under strong non-falsifiable assumptions. As our first result, we show that no seeded 1-immunization scheme can be black-box reduced to any efficiently falsifiable assumption.
This motivates studying k-immunizers for \(k\ge 2\), which have the additional advantage of being deterministic (i.e., “seedless”). Indeed, prior work at CCS’17 [37] and CRYPTO’18 [8] gave supporting evidence that simple k-immunizers might exist, albeit in slightly different settings. Unfortunately, we show that the simple standard-model proposals of [8, 37] (including the XOR function [8]) provably do not work in our setting. On the positive side, we confirm the intuition of [37] that a (seedless) random oracle is a provably secure 2-immunizer. On the negative side, no (seedless) 2-immunization scheme can be black-box reduced to any efficiently falsifiable assumption, at least for a large class of natural 2-immunizers which includes all “cryptographic hash functions.”
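For illustration, a minimal sketch of a random-oracle-style 2-immunizer as discussed above, with SHA-256 standing in for the random oracle and toy streams standing in for the (possibly backdoored) PRG outputs; the block size and labels are illustrative choices, not the paper’s construction.

```python
import hashlib
from typing import Iterator

BLOCK = 32  # bytes of each PRG's output consumed per immunized block (illustrative)

def immunize2(prg_a: Iterator[bytes], prg_b: Iterator[bytes]) -> Iterator[bytes]:
    # Seedless 2-immunizer in the random-oracle style: hash each pair of blocks
    # together (SHA-256 models the random oracle). Note that plain XOR of the
    # two streams is exactly the kind of post-processing shown NOT to work here.
    for block_a, block_b in zip(prg_a, prg_b):
        yield hashlib.sha256(b"immunize|" + block_a + block_b).digest()

def toy_stream(seed: bytes) -> Iterator[bytes]:
    # Stand-in for a (possibly backdoored) PRG output stream.
    ctr = 0
    while True:
        yield hashlib.sha256(seed + ctr.to_bytes(8, "big")).digest()[:BLOCK]
        ctr += 1

if __name__ == "__main__":
    out = immunize2(toy_stream(b"seed-A"), toy_stream(b"seed-B"))
    print(next(out).hex())
```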
In summary, our results show that k-immunizers occupy a peculiar place in the cryptographic world. While they likely exist, and can be made practical and efficient, it is unlikely one can reduce their security to a “clean” standard-model assumption.
Marshall Ball, Yevgeniy Dodis, Eli Goldin

Lower Bounds

Frontmatter
Communication Lower Bounds of Key-Agreement Protocols via Density Increment Arguments
Abstract
Constructing key-agreement protocols in the random oracle model (ROM) is a viable method to assess the feasibility of developing public-key cryptography within Minicrypt. Unfortunately, as shown by Impagliazzo and Rudich (STOC 1989) and Barak and Mahmoody (Crypto 2009), such protocols can only guarantee limited security: any \(\ell \)-query protocol can be attacked by an \(O(\ell ^2)\)-query adversary. This quadratic gap matches the key-agreement protocol proposed by Merkle (CACM 78), known as Merkle’s Puzzles.
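For concreteness, here is a toy Python rendering of Merkle’s Puzzles in the random-oracle model (SHA-256 plays the oracle, and the domain size is an illustrative choice): the honest parties make \(O(\ell )\) oracle queries and exchange \(\varOmega (\ell )\) bits, while an eavesdropper needs on the order of \(\ell ^2\) queries to recover the key.

```python
import hashlib
import secrets

def H(x: int) -> bytes:
    # SHA-256 stands in for the random oracle.
    return hashlib.sha256(x.to_bytes(8, "big")).digest()

def merkle_puzzles(l: int) -> bytes:
    domain = l * l                       # the quadratic gap comes from here

    # Alice: l oracle queries; she publishes l puzzle tags (Omega(l) communication).
    alice_secrets = [secrets.randbelow(domain) for _ in range(l)]
    published = {H(x): x for x in alice_secrets}

    # Bob: queries random domain points until he hits a published tag
    # (about l queries in expectation), then announces that tag in the clear.
    while True:
        y = secrets.randbelow(domain)
        tag = H(y)
        if tag in published:
            break

    # Shared key, derived from the preimage known only to Alice and Bob.
    key_bob = hashlib.sha256(b"key" + y.to_bytes(8, "big")).digest()
    key_alice = hashlib.sha256(b"key" + published[tag].to_bytes(8, "big")).digest()
    assert key_alice == key_bob

    # An eavesdropper sees the l tags and Bob's announced tag; inverting it
    # requires searching the domain, i.e. about l^2 oracle queries.
    return key_alice

if __name__ == "__main__":
    print(merkle_puzzles(64).hex())
```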
Besides query complexity, the communication complexity of key-agreement protocols in the ROM is also an interesting question in the realm of fine-grained cryptography, even though only limited security is achievable. Haitner et al. (ITCS 2019) first observed that in Merkle’s Puzzles, to obtain secrecy against an eavesdropper with \(O(\ell ^2)\) queries, the honest parties must exchange \(\varOmega (\ell )\) bits. Therefore, they conjectured that high communication complexity is unavoidable: any \(\ell \)-query protocol with c bits of communication can be attacked by an \(O(c\cdot \ell )\)-query adversary. This, if true, would suggest that Merkle’s Puzzles is also optimal regarding communication complexity. Building upon techniques from communication complexity, Haitner et al. (ITCS 2019) confirmed this conjecture for two types of key-agreement protocols with certain natural properties.
This work affirms the above conjecture for all non-adaptive protocols with perfect completeness. Our proof uses a novel idea called the density increment argument. This method could be of independent interest, as it differs from previous communication lower-bound techniques (and bypasses some technical barriers).
Mi-Ying Huang, Xinyu Mao, Guangxu Yang, Jiapeng Zhang
Searching for ELFs in the Cryptographic Forest
Abstract
Extremely Lossy Functions (ELFs) are families of functions that, depending on the choice during key generation, either operate in injective mode or instead have only a polynomial image size. The choice of the mode is indistinguishable to an outsider. ELFs were introduced by Zhandry (Crypto 2016) and have been shown to be very useful in replacing random oracles in a number of applications.
One open question is to determine the minimal assumption needed to instantiate ELFs. While all constructions of ELFs depend on some form of exponentially-secure public-key primitive, it was conjectured that exponentially-secure secret-key primitives, such as one-way functions, hash functions or one-way product functions, might be sufficient to build ELFs. In this work we answer this conjecture largely in the negative: We show that no primitive which can be derived from a random oracle (which includes all secret-key primitives mentioned above) is enough to construct even moderately lossy functions in a black-box manner. However, we also show that (extremely) lossy functions themselves do not imply public-key cryptography, leaving open the option to build ELFs from some intermediate primitive between the classical categories of secret-key and public-key cryptography. (The full version can be found at https://eprint.iacr.org/2023/1403.)
Marc Fischlin, Felix Rohrbach
On Time-Space Lower Bounds for Finding Short Collisions in Sponge Hash Functions
Abstract
The sponge paradigm, used in the design of SHA-3, is an alternative hashing technique to the popular Merkle-Damgård paradigm. We revisit the problem of finding B-block-long collisions in sponge hash functions in the auxiliary-input random permutation model, in which an attacker gets a piece of S-bit advice about the random permutation and makes T (forward or inverse) oracle queries to the random permutation.
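For orientation, a toy Python sponge (absorb/squeeze) with illustrative byte-level rate r and capacity c and a Feistel-based stand-in for the public permutation; it mirrors the structure underlying SHA-3 but none of its concrete parameter or permutation choices.

```python
import hashlib

R, C = 8, 16          # toy rate and capacity in bytes (real sponges use far larger values)
STATE = R + C

def _perm(state: bytes) -> bytes:
    # Toy public permutation: a 4-round Feistel over the 24-byte state,
    # with SHA-256 as the round function (illustrative stand-in for Keccak-f).
    left, right = state[:STATE // 2], state[STATE // 2:]
    for rnd in range(4):
        f = hashlib.sha256(bytes([rnd]) + right).digest()[:STATE // 2]
        left, right = right, bytes(a ^ b for a, b in zip(left, f))
    return left + right

def sponge_hash(msg: bytes, out_len: int = 32) -> bytes:
    # Absorb: pad to a multiple of R (simplified 10*1-style padding), XOR each
    # R-byte block into the rate part of the state, then apply the permutation.
    msg = msg + b"\x01" + b"\x00" * ((-len(msg) - 2) % R) + b"\x80"
    state = bytes(STATE)
    for i in range(0, len(msg), R):
        block = msg[i:i + R] + bytes(C)
        state = _perm(bytes(a ^ b for a, b in zip(state, block)))
    # Squeeze: output R bytes of the rate part per permutation call.
    out = b""
    while len(out) < out_len:
        out += state[:R]
        state = _perm(state)
    return out[:out_len]

if __name__ == "__main__":
    print(sponge_hash(b"theory of cryptography").hex())
```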
Recently, significant progress has been made in the Merkle-Damgård setting and optimal bounds are known for a large range of parameters, including all constant values of B. However, the sponge setting is widely open: there exist significant gaps between known attacks and security bounds even for \(B=1\).
Freitag, Ghoshal and Komargodski (CRYPTO 2022) showed a novel attack for \(B=1\) that takes advantage of the inverse queries and achieves advantage \(\widetilde{\varOmega }(\min (S^2T^2/2^{2c}, (S^2T/2^{2c})^{2/3})+T^2/2^r)\), where r is the bit-rate and c is the capacity of the random permutation. However, they only showed an \(\widetilde{O}(ST/2^c+T^2/2^r)\) security bound, leaving open an intriguing quadratic gap. For \(B=2\), they beat the general security bound by Coretti, Dodis and Guo (CRYPTO 2018), which holds for arbitrary values of B. However, their highly non-trivial argument is quite laborious, and no better (than the general) bounds are known for \(B\ge 3\).
In this work, we study the possibility of proving better security bounds in the sponge setting. To this end,
  • For \(B=1\), we prove an improved \(\widetilde{O}(S^2T^2/2^{2c}+S/2^c+T/2^c+T^2/2^r)\) bound. Our bound strictly improves the bound by Freitag et al., and is optimal for \(ST^2\le 2^c\).
  • For \(B=2\), we give a considerably simpler and more modular proof, recovering the bound obtained by Freitag et al.
  • We obtain our bounds by adapting the recent multi-instance technique of Akshima, Guo and Liu (CRYPTO 2022), which bypasses the limitations of prior techniques in the Merkle-Damgård setting. To complement our results, we prove that this multi-instance technique cannot further improve our bounds for \(B=1,2\), nor the general bound by Coretti et al. for \(B\ge 3\).
Overall, our results yield state-of-the-art security bounds for finding short collisions and fully characterize the power of the multi-instance technique in the sponge setting.
Akshima, Xiaoqi Duan, Siyao Guo, Qipeng Liu
On the Cost of Post-compromise Security in Concurrent Continuous Group-Key Agreement
Abstract
Continuous Group-Key Agreement (CGKA) allows a group of users to maintain a shared key. It is the fundamental cryptographic primitive underlying group messaging schemes and related protocols, most notably TreeKEM, the key agreement protocol underlying the Messaging Layer Security (MLS) protocol, a standard for group messaging by the IETF. CGKA works in an asynchronous setting where parties only occasionally need to come online, and their messages are relayed by an untrusted server. The most expensive operation provided by CGKA is the one that allows a user to refresh their key material in order to achieve forward secrecy (old messages are secure when a user is compromised) and post-compromise security (users can heal from compromise). One caveat of early CGKA protocols is that these update operations had to be performed sequentially: any user wanting to update their key material first had to receive and process all previous updates. Later versions of TreeKEM do allow for concurrent updates, at the cost of a communication overhead per update message that is linear in the number of updating parties. This was shown to be indeed necessary for achieving PCS in just two rounds of communication [Bienstock et al., TCC’20].
The recently proposed protocol CoCoA [Alwen et al. Eurocrypt’22], however, shows that this overhead can be reduced if PCS requirements are relaxed, and only a logarithmic number of rounds is required. The natural question, thus, is whether CoCoA is optimal in this setting.
In this work we answer this question, providing a lower bound on the cost (concretely, the amount of data to be uploaded to the server) for CGKA protocols that heal in an arbitrary number k of rounds, which shows that CoCoA is very close to optimal. Additionally, we extend CoCoA to heal in an arbitrary number of rounds, and propose a modification of it with a reduced communication cost for certain k.
We prove our bound in a combinatorial setting where the state of the protocol progresses in rounds, and the state of the protocol in each round is captured by a set system, each set specifying a set of users who share a secret key. We show this combinatorial model is equivalent to a symbolic model capturing building blocks including PRFs and public-key encryption, related to the one used by Bienstock et al.
Our lower bound is of order \(k\cdot n^{1+1/(k-1)}/\log (k)\), where \(2\le k\le \log (n)\) is the number of updates per user the protocol requires to heal. This generalizes the \(n^2\) bound for \(k=2\) from Bienstock et al. The bound almost matches the \(k\cdot n^{1+2/(k-1)}\) or \(k^2\cdot n^{1+1/(k-1)}\) efficiency we obtain for the variants of the CoCoA protocol also introduced in this paper.
Benedikt Auerbach, Miguel Cueto Noval, Guillermo Pascual-Perez, Krzysztof Pietrzak
Generic-Group Lower Bounds via Reductions Between Geometric-Search Problems: With and Without Preprocessing
Abstract
The generic-group model (\( \textrm{GGM}\)) aims to capture algorithms working over groups of prime order that only rely on the group operation, but do not exploit any additional structure given by the concrete implementation of the group. Within this model, it is possible to prove information-theoretic lower bounds on the hardness of problems like the discrete logarithm (DL) or computational Diffie-Hellman (CDH). Thus, since its introduction, it has served as a valuable tool to assess the concrete security provided by cryptographic schemes based on such problems. A work on the related algebraic-group model (AGM) introduced a method, used by many subsequent works, to adapt \( \textrm{GGM}\) lower bounds for one problem to another by means of conceptually simple reductions.
In this work, we propose an alternative approach to extend \( \textrm{GGM}\) bounds from one problem to another. Following an idea by Yun [EC15], we show that, in the \( \textrm{GGM}\), the security of a large class of problems can be reduced to that of geometric search-problems. By reducing the security of the resulting geometric-search problems to variants of the search-by-hypersurface problem, for which information theoretic lower bounds exist, we give alternative proofs of several results that used the AGM approach.
The main advantage of our approach is that our reduction from geometric search-problems works, as well, for the \(\textrm{GGM}\) with preprocessing (more precisely the bit-fixing \( \textrm{GGM}\) introduced by Coretti, Dodis and Guo [Crypto18]). As a consequence, this opens up the possibility of transferring preprocessing \( \textrm{GGM}\) bounds from one problem to another, also by means of simple reductions. Concretely, we prove novel preprocessing bounds on the hardness of the d-strong discrete logarithm, the d-strong Diffie-Hellman inversion, and multi-instance CDH problems, as well as a large class of Uber assumptions. Additionally, our approach applies to Shoup’s GGM without additional restrictions on the query behavior of the adversary, while the recent works of Zhang, Zhou, and Katz [AC22] and Zhandry [Crypto22] highlight that this is not the case for the AGM approach.
Benedikt Auerbach, Charlotte Hoffmann, Guillermo Pascual-Perez

IOPs and Succinctness

Frontmatter
Holographic SNARGs for P and Batch-NP from (Polynomially Hard) Learning with Errors
Abstract
A succinct non-interactive argument (SNARG) is called holographic if the verifier runs in time sub-linear in the input length when given oracle access to an encoding of the input. We present holographic SNARGs for P and Batch-NP under the learning with errors (LWE) assumption. Our holographic SNARG for P has a verifier that runs in time \(\textsf{poly}(\lambda , \log T, \log n)\) for T-time computations and \(n\)-bit inputs (\(\lambda \) is the security parameter), while our holographic SNARG for Batch-NP has a verifier that runs in time \(\textsf{poly}(\lambda , T, \log k)\) for k instances of T-time computations. Before this work, constructions with the same asymptotic efficiency were known in the designated-verifier setting or under the sub-exponential hardness of the LWE assumption. We obtain our holographic SNARGs (in the public-verification setting under the polynomial hardness of the LWE assumption) by constructing holographic SNARGs for certain hash computations and then applying known/trivial transformations.
As an application, we use our holographic SNARGs to weaken the assumption needed for a recent public-coin 3-round zero-knowledge (ZK) argument [Kiyoshima, CRYPTO 2022]. Specifically, we use our holographic SNARGs to show that a public-coin 3-round ZK argument exists under the same assumptions as the state-of-the-art private-coin 3-round ZK argument [Bitansky et al., STOC 2018].
Susumu Kiyoshima
Chainable Functional Commitments for Unbounded-Depth Circuits
Abstract
A functional commitment (FC) scheme allows one to commit to a vector \(\boldsymbol{x}\) and later produce a short opening proof of \((f, f(\boldsymbol{x}))\) for any admissible function f. Since their inception, FC schemes supporting ever more expressive classes of functions have been proposed.
In this work, we introduce a novel primitive that we call chainable functional commitment (CFC), which extends the functionality of FCs by allowing one to 1) open to functions of multiple inputs \(f(\boldsymbol{x}_1, \ldots , \boldsymbol{x}_m)\) that are committed independently, and 2) preserve the output also in committed form. We show that CFCs for quadratic polynomial maps generically imply FCs for circuits. Then, we efficiently realize CFCs for quadratic polynomials over pairing groups and lattices, resulting in the first FC schemes for circuits of unbounded depth based on either pairing-based or lattice-based falsifiable assumptions. Our FCs require fixing a priori only the maximal width of the circuit to be evaluated, and have opening proof size depending only on the circuit depth. Additionally, our FCs feature other nice properties such as being additively homomorphic and supporting sublinear-time verification after offline preprocessing.
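A schematic Python interface for the CFC syntax described above may help fix ideas; the method names and types are illustrative rather than the paper’s formalism. The key point is that open returns the output in committed form, so commitments can be chained through successive layers of a circuit.

```python
from abc import ABC, abstractmethod
from typing import Any, Callable, List, Sequence, Tuple

# Schematic interface for a chainable functional commitment (CFC); the method
# names and signatures are illustrative, not the paper's formalism.

class ChainableFC(ABC):
    @abstractmethod
    def setup(self, security_param: int, max_width: int) -> Any:
        """Public parameters; only the maximal circuit width is fixed a priori."""

    @abstractmethod
    def commit(self, pp: Any, x: Sequence[int]) -> Tuple[Any, Any]:
        """Commit to a vector x, returning (commitment, decommitment info)."""

    @abstractmethod
    def open(self, pp: Any, f: Callable[..., Sequence[int]],
             coms: List[Any], decs: List[Any]) -> Tuple[Any, Any]:
        """Open com_1..com_m to f(x_1,...,x_m): return a commitment to the
        output (so it can be chained into the next layer) and a short proof."""

    @abstractmethod
    def verify(self, pp: Any, f: Callable[..., Sequence[int]],
               coms: List[Any], out_com: Any, proof: Any) -> bool:
        """Check that out_com commits to f of the values inside coms."""
```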
Using a recent transformation that constructs homomorphic signatures (HS) from FCs, we obtain the first pairing- and lattice-based realisations of HS for bounded-width, but unbounded-depth, circuits. Prior to this work, the only HS scheme for general circuits was lattice-based and required bounding the circuit depth at setup time.
David Balbás, Dario Catalano, Dario Fiore, Russell W. F. Lai
Multilinear Schwartz-Zippel Mod N and Lattice-Based Succinct Arguments
Abstract
We show that for \(\textbf{x}\overset{\$}{\leftarrow }[0,2^\lambda )^\mu \) and any integer N, the probability that \(f(\textbf{x})\equiv 0 \bmod N\) for any non-zero multilinear polynomial \(f\in \mathbb {Z}[X_1, \dots ,X_\mu ]\) co-prime to N is inversely proportional to N. As a corollary, we show that if \(\log _2 N\ge \log _2(2\mu )\lambda +8\mu ^2 \) then the probability is bounded by \(\frac{\mu +1}{2^\lambda }\). We also give tighter numerically derived bounds, showing that if \(\log _2 N\ge 418\) and \(\mu \le 20\), the probability is bounded by \(\frac{\mu }{2^\lambda }+2^{-120}\).
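As a quick empirical illustration of the statement (with deliberately small moduli so the effect is visible; the lemma itself targets large N, where the probability becomes negligible), one can sample \(\textbf{x}\) from \([0,2^\lambda )^\mu \) and count how often a fixed multilinear polynomial vanishes mod N; the hit rate is roughly 1/N.

```python
import secrets

LAMBDA, MU, TRIALS = 32, 3, 100_000      # deliberately small, illustrative parameters

def f(x1: int, x2: int, x3: int) -> int:
    # A fixed non-zero multilinear polynomial in Z[X1, X2, X3] (MU = 3).
    return 5 * x1 * x2 * x3 + 3 * x1 * x3 + 7 * x2 + 11

for N in (101, 1009, 10007):             # small moduli so vanishing events are visible
    hits = sum(
        f(*[secrets.randbits(LAMBDA) for _ in range(MU)]) % N == 0
        for _ in range(TRIALS)
    )
    print(f"N = {N:>5}: empirical Pr[f(x) = 0 mod N] ~ {hits / TRIALS:.1e}, 1/N = {1 / N:.1e}")
```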
We then apply this Multilinear Composite Schwartz-Zippel Lemma (LCSZ) to resolve an open problem in the literature on succinct arguments: that the Bulletproofs protocol for linear relations over classical Pedersen commitments in prime-order groups remains knowledge sound when generalized to commitment schemes that are binding only over short integer vectors. In particular, this means that the Bulletproofs protocol can be instantiated with plausibly post-quantum commitments from lattice hardness assumptions (SIS/R-SIS/M-SIS). It can also be instantiated with commitments based on groups of unknown order (GUOs), in which case the verification time becomes logarithmic instead of linear time.\(^{1}\)
Prior work on lattice-based Bulletproofs (Crypto 2020) and its extensions required modifying the protocol to sample challenges from special sets of polynomial size. This results in a non-negligible knowledge error, necessitating parallel repetition to amplify soundness, which impacts efficiency and poses issues for the Fiat-Shamir transform. Our analysis shows knowledge soundness for the original Bulletproofs protocol with the exponential-size integer challenge set \([0,2^\lambda ]\) and thus achieves a negligible soundness error without repetition, circumventing a previous impossibility result (Crypto 2021). Our analysis also closes a critical gap in the original security proof of DARK, a GUO-based polynomial commitment scheme (Eurocrypt 2020). Along the way to achieving our result we also define Almost Special Soundness (AMSS), a generalization of special soundness. Our main result is divided into two parts: (1) that the Bulletproofs protocol over generalized commitments is AMSS, and (2) that AMSS implies knowledge soundness. This framework serves to simplify the application of our analytical techniques to protocols beyond Bulletproofs in the future. (\(^{1}\)This paper incorporates content published in the updated ePrint of DARK [18]. The full version of this paper containing proofs is available online [17].)
Benedikt Bünz, Ben Fisch
Generalized Special-Sound Interactive Proofs and Their Knowledge Soundness
Abstract
A classic result in the theory of interactive proofs shows that a special-sound \(\varSigma \)-protocol is automatically a proof of knowledge. This result is very useful to have, since the latter property is typically tricky to prove from scratch, while the former is often easy to argue — if it is satisfied. While classic \(\varSigma \)-protocols often are special-sound, this is unfortunately not the case for many recently proposed, highly efficient interactive proofs, at least not in this strict sense. Motivated by this, the original result was recently generalized to k-special-sound \(\varSigma \)-protocols (for arbitrary, polynomially bounded k), and to multi-round versions thereof. This generalization is sufficient to analyze (e.g.) Bulletproofs-like protocols, but is still insufficient for many other examples.
In this work, we push the relaxation of the special soundness property to the extreme, by allowing an arbitrary access structure \(\varGamma \) to specify for which subsets of challenges it is possible to compute a witness, when given correct answers to these challenges (for a fixed first message). Concretely, for any access structure \(\varGamma \), we identify parameters \(t_\varGamma \) and \(\kappa _\varGamma \), and we show that any \(\varGamma \)-special-sound \(\varSigma \)-protocol is a proof of knowledge with knowledge error \(\kappa _\varGamma \) if \(t_\varGamma \) is polynomially bounded. Similarly for multi-round protocols.
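As a sanity check (an illustration consistent with the prior work on k-special soundness cited above, not a statement taken from this paper), instantiating \(\varGamma \) with the k-out-of-N threshold structure over a challenge set \(\mathcal {C}\) of size N recovers the familiar parameters:
\[
  \varGamma_k = \{\, S \subseteq \mathcal{C} : |S| \ge k \,\}, \qquad |\mathcal{C}| = N,
  \qquad t_{\varGamma_k} = k, \qquad \kappa_{\varGamma_k} = \frac{k-1}{N},
\]
so that a k-special-sound \(\varSigma \)-protocol is a proof of knowledge with the well-known knowledge error \((k-1)/N\).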
We apply our general result to a couple of simple but important example protocols, where we obtain a tight knowledge error as an immediate corollary. Beyond these simple examples, we analyze the FRI protocol. Here, showing the general special soundness notion is non-trivial, but can be done (for a certain range of parameters) by recycling some of the techniques used to argue ordinary soundness of the protocol (as an IOP). Again as a corollary, we then derive that the FRI protocol, as an interactive proof by using a Merkle-tree commitment, has a knowledge extractor with almost optimal knowledge error, with the caveat that the extractor requires (expected) quasi-polynomial time.
Finally, building on the technique for the parallel repetition of k-special-sound \(\varSigma \)-protocols, we show the same strong parallel repetition result for \(\varGamma \)-special-sound \(\varSigma \)-protocols and their multi-round variants.
Thomas Attema, Serge Fehr, Nicolas Resch
From Polynomial IOP and Commitments to Non-malleable zkSNARKs
Abstract
We study sufficient conditions to compile simulation-extractable zkSNARKs from information-theoretic interactive oracle proofs (IOP) using a simulation-extractable commit-and-prove system for its oracles. Specifically, we define simulation extractability for opening and evaluation proofs of polynomial commitment schemes, which we then employ to prove the security of zkSNARKs obtained from polynomial IOP proof systems. To instantiate our methodology, we additionally prove that KZG commitments satisfy our simulation extractability requirement, despite being naturally malleable. To this end, we design a relaxed notion of simulation extractability that matches how KZG commitments are used and optimized in real-world proof systems. The proof that KZG satisfies this relaxed simulation extractability property relies on the algebraic group model and random oracle model.
Antonio Faonio, Dario Fiore, Markulf Kohlweiss, Luigi Russo, Michal Zajac
How to Compile Polynomial IOP into Simulation-Extractable SNARKs: A Modular Approach
Abstract
Most succinct arguments (SNARKs) are initially only proven knowledge sound (KS). We show that the commonly employed compilation strategy from polynomial interactive oracle proofs (PIOP) via polynomial commitments to knowledge-sound SNARKs actually also achieves other desirable properties: weak unique response (WUR) and trapdoorless zero-knowledge (TLZK); and that together they imply simulation extractability (SIM-EXT).
The factoring of SIM-EXT into KS + WUR + TLZK is becoming a cornerstone of the analysis of non-malleable SNARK systems. We show how to prove WUR and TLZK for PIOP-compiled SNARKs under mild falsifiable assumptions on the polynomial commitment scheme. This means that the analysis of knowledge soundness from PIOP properties, which inherently relies on non-falsifiable or idealized assumptions such as the algebraic group model (AGM) or generic group model (GGM), need not be repeated.
While the proof of WUR requires only mild assumptions on the PIOP, TLZK is a different matter. As perfectly hiding polynomial commitments sometimes come at a substantial performance premium, SNARK designers prefer to employ deterministic commitments with some leakage. This results in the need for a stronger zero-knowledge property for the PIOP.
The modularity of our approach implies that any analysis improvements (e.g., in terms of tightness, the credibility of the knowledge assumption and model of the KS analysis, or the precision of capturing real-world optimizations for TLZK) also benefit the SIM-EXT guarantees.
Markulf Kohlweiss, Mahak Pancholi, Akira Takahashi
Backmatter
Metadata
Title
Theory of Cryptography
edited by
Guy Rothblum
Hoeteck Wee
Copyright Year
2023
Electronic ISBN
978-3-031-48621-0
Print ISBN
978-3-031-48620-3
DOI
https://doi.org/10.1007/978-3-031-48621-0
