A succinct non-interactive argument (SNARG) for NP allows a prover to convince a verifier that an NP statement x is true with a proof whose size is sublinear in the length of the traditional NP witness. Moreover, a SNARG is adaptively sound if the adversary can choose the statement it wants to prove after seeing the scheme parameters. Very recently, Waters and Wu (STOC 2024) showed how to construct adaptively-sound SNARGs for NP in the plain model from falsifiable assumptions (specifically, sub-exponentially secure indistinguishability obfuscation, sub-exponentially secure one-way functions, and polynomial hardness of discrete log).
We consider the batch setting where the prover wants to prove a collection of T statements \(x_1, \ldots , x_T\) and its goal is to construct a proof whose size is sublinear in both the size of a single witness and the number of instances T. In this setting, existing constructions either require the size of the public parameters to scale linearly with T (and thus, can only support an a priori bounded number of instances), or only provide non-adaptive soundness, or have proof size that scales linearly with the size of a single NP witness. In this work, we give two approaches for batching adaptively-sound SNARGs for NP, and in particular, show that under the same set of assumptions as those underlying the Waters-Wu adaptively-sound SNARG, we can obtain an adaptively-sound SNARG for batch NP where the size of the proof is \(\textsf{poly} (\lambda )\) and the size of the CRS is \(\textsf{poly} (\lambda + |C|)\), where \(\lambda \) is a security parameter and |C| is the size of the circuit that computes the associated NP relation.
Our first approach builds directly on top of the Waters-Wu construction and relies on indistinguishability obfuscation and a homomorphic re-randomizable one-way function. Our second approach shows how to combine ideas from the Waters-Wu SNARG with the chaining-based approach by Garg, Sheridan, Waters, and Wu (TCC 2022) to obtain a SNARG for batch NP.
A subsequent work [41] also shows how to construct an adaptively-sound SNARG using \(i\mathcal {O} \) and sub-exponentially-secure one-way functions without any additional algebraic assumptions.
A BARG is somewhere extractable if the CRS can be programmed on a (hidden) index \(i \in [T]\). Then, given a valid BARG proof \(\pi \) on a batch of statements \((x_1, \ldots , x_T)\), there is an efficient extraction algorithm that recovers a witness \(w_i\) for \(x_i\). The special index i is computationally hidden by the CRS. Somewhere extractable BARGs can be constructed from most number-theoretic assumptions [12–14, 17, 23, 26, 27, 29, 36, 39].
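To make the syntax concrete, the following is a toy sketch of the somewhere extractable BARG interface described above. It is illustrative only: the proof is neither succinct nor is the programmed index actually hidden, and the relation \(R\) and all function names are placeholders chosen for this sketch, not part of any cited construction.

```python
import hashlib
from dataclasses import dataclass

# Toy NP relation for illustration: witness w satisfies statement x
# if sha256(w) == x. A real BARG works for any circuit relation.
def R(x: bytes, w: bytes) -> bool:
    return hashlib.sha256(w).digest() == x

@dataclass
class CRS:
    i: int  # programmed index; in a real scheme this is computationally hidden

def gen(T: int, i: int) -> CRS:
    """Sample a CRS programmed on index i in [T]."""
    assert 0 <= i < T
    return CRS(i=i)

def prove(crs: CRS, xs: list, ws: list):
    """Toy proof: a digest over all witnesses, plus the witness at the
    programmed index in the clear (a real BARG proof is succinct)."""
    digest = hashlib.sha256(b"".join(ws)).digest()
    return (digest, ws[crs.i])

def verify(crs: CRS, xs: list, pi) -> bool:
    # Toy check: only the programmed statement is verified here; a real
    # BARG verifier checks all T statements succinctly.
    _, w_i = pi
    return R(xs[crs.i], w_i)

def extract(crs: CRS, xs: list, pi) -> bytes:
    """Recover a witness for the statement x_i at the programmed index."""
    _, w_i = pi
    return w_i
```

The point of the sketch is the extraction guarantee: any valid proof on \((x_1, \ldots , x_T)\) yields a witness for the statement at the programmed index, while index hiding (absent from the toy) ensures a prover cannot tell which index is programmed.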
Note that the original Waters-Wu construction did not require \(\textsf{GenSol}\) and \(\textsf{GenChall}\) to take the bit \(b \in \{0, 1\}\) as input. Instead, \(\textsf{GenSol}\) computed \(b = \textsf{F} (k_{\textsf{sel} }, x)\) and output \(z = \textsf{F} (k_b, x)\), while \(\textsf{GenChall}\) simply output \(f(\textsf{F} (k_0, x))\) and \(f(\textsf{F} (k_1, x))\). The adaptation here is equivalent to the original Waters-Wu construction, and the updated syntax is more conducive to the extension to batch arguments.
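The equivalence between the two syntaxes can be sketched as follows. This is a toy illustration only: HMAC-SHA256 stands in for the (puncturable) PRF \(\textsf{F}\), SHA-256 stands in for the one-way function f, and the bit b is taken as the first bit of the PRF output; none of these instantiation choices come from the construction itself.

```python
import hashlib
import hmac

def F(key: bytes, x: bytes) -> bytes:
    """PRF stand-in (HMAC-SHA256); the construction uses a puncturable PRF."""
    return hmac.new(key, x, hashlib.sha256).digest()

def f(z: bytes) -> bytes:
    """One-way function stand-in (SHA-256)."""
    return hashlib.sha256(z).digest()

# Original Waters-Wu syntax: GenSol derives the selector bit b itself.
def gensol_original(k_sel: bytes, k0: bytes, k1: bytes, x: bytes) -> bytes:
    b = F(k_sel, x)[0] & 1          # b = F(k_sel, x), truncated to one bit
    return F((k0, k1)[b], x)        # z = F(k_b, x)

def genchall_original(k0: bytes, k1: bytes, x: bytes):
    # Challenges for both branches: f(F(k_0, x)) and f(F(k_1, x)).
    return f(F(k0, x)), f(F(k1, x))

# Updated syntax: the bit b is an explicit input to GenSol.
def gensol(k0: bytes, k1: bytes, x: bytes, b: int) -> bytes:
    return F((k0, k1)[b], x)        # z = F(k_b, x)
```

Calling the updated `gensol` with `b` set to the bit that `gensol_original` would have derived from \(k_{\textsf{sel}}\) reproduces the original behavior, which is the sense in which the two syntaxes are equivalent.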