
2025 | Book

Theory of Cryptography

22nd International Conference, TCC 2024, Milan, Italy, December 2–6, 2024, Proceedings, Part II


About this book

The four-volume set LNCS 15364-15367 constitutes the refereed proceedings of the 22nd International Conference on Theory of Cryptography, TCC 2024, held in Milan, Italy, in December 2024.
The 68 full papers presented in these proceedings were carefully reviewed and selected from 172 submissions. They focus on topics such as: proofs; math and foundations; consensus and messaging; quantum; Kolmogorov and OWFs; encryption; quantum and black-box separations; authentication and sequentiality; obfuscation and homomorphism; multi-party computation; information-theoretic cryptography; and secret sharing.

Table of Contents

Frontmatter

Quantum I

Frontmatter
Quantum Pseudorandom Scramblers
Abstract
Quantum pseudorandom state generators (PRSGs) have stimulated exciting developments in recent years. A PRSG, on a fixed initial (e.g., all-zero) state, produces an output state that is computationally indistinguishable from a Haar random state. However, pseudorandomness of the output state is not guaranteed on other initial states. In fact, known PRSG constructions provably fail on some initial states.
In this work, we propose and construct quantum Pseudorandom State Scramblers (PRSSs), which can produce a pseudorandom state from an arbitrary initial state. In the information-theoretic setting, we obtain a scrambler that maps an arbitrary initial state to a distribution of quantum states that is close to Haar random in total variation distance. As a result, our scrambler exhibits a dispersing property: loosely, it can span an \(\epsilon \)-net of the state space. This significantly strengthens what standard PRSGs can achieve, as they may concentrate on a small region of the state space as long as the average output state approximates a Haar random state.
Our PRSS construction develops a parallel extension of the famous Kac’s walk, and we show that it mixes exponentially faster than the standard Kac’s walk. This constitutes the core of our proof. We also describe a few applications of PRSSs. While our PRSS construction assumes a post-quantum one-way function, PRSSs are potentially a weaker primitive and can be separated from one-way functions in a relativized world similar to standard PRSGs.
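For orientation, the following is a minimal numpy sketch of the standard (sequential) Kac's walk on the real unit sphere, the classical process the construction extends; the paper's parallel variant and its quantum implementation are more involved, so this illustrates only the underlying walk.

```python
import numpy as np

def kacs_walk(x, steps, rng=None):
    """Standard Kac's walk: repeatedly pick a random coordinate pair (i, j)
    and a uniform angle theta, and rotate coordinates i and j by theta.
    Each step preserves the Euclidean norm, so the walk stays on the sphere."""
    rng = rng or np.random.default_rng()
    x = np.array(x, dtype=float)
    n = len(x)
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        theta = rng.uniform(0.0, 2.0 * np.pi)
        c, s = np.cos(theta), np.sin(theta)
        x[i], x[j] = c * x[i] - s * x[j], s * x[i] + c * x[j]
    return x

# Starting from a fixed basis vector, the walk gradually disperses over the sphere.
x0 = np.zeros(16)
x0[0] = 1.0
print(np.round(kacs_walk(x0, steps=1000), 3))
```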
Chuhan Lu, Minglong Qin, Fang Song, Penghui Yao, Mingnan Zhao
Real-Valued Somewhat-Pseudorandom Unitaries
Abstract
We explore a very simple distribution of unitaries: random (binary) phase—Hadamard—random (binary) phase—random computational-basis permutation. We show that this distribution is statistically indistinguishable from random Haar unitaries for any polynomial set of orthogonal input states (in any basis) with polynomial multiplicity. This shows that even though real-valued unitaries cannot be completely pseudorandom (Haug, Bharti, Koh, arXiv:2306.11677), we can still obtain some pseudorandom properties without giving up on the simplicity of a real-valued unitary.
Our analysis shows that an even simpler construction, applying a random (binary) phase followed by a random computational-basis permutation, would suffice, assuming that the input states are orthogonal and flat (that is, have high min-entropy when measured in the computational basis).
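As a concrete illustration, the following numpy sketch samples a matrix from the four-layer distribution described above; it materializes the full \(2^n \times 2^n\) matrix, which is only feasible for small n, whereas the paper's efficient instantiation replaces the random choices with quantum-secure pseudorandom functions and permutations.

```python
import numpy as np

def sample_somewhat_pseudorandom_unitary(n_qubits, rng=None):
    """Sample: random binary phase, then Hadamard layer, then random binary
    phase, then random computational-basis permutation. All entries are real,
    so the result is a real orthogonal (hence unitary) matrix."""
    rng = rng or np.random.default_rng()
    N = 2 ** n_qubits
    H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    H = np.array([[1.0]])
    for _ in range(n_qubits):
        H = np.kron(H, H1)                 # n-fold tensor product of Hadamard
    d1 = rng.choice([-1.0, 1.0], size=N)   # first random binary phase
    d2 = rng.choice([-1.0, 1.0], size=N)   # second random binary phase
    perm = rng.permutation(N)              # random basis permutation, applied last
    return (np.diag(d2) @ H @ np.diag(d1))[perm, :]

U = sample_somewhat_pseudorandom_unitary(3)
print(np.allclose(U @ U.T, np.eye(8)))  # True: U is orthogonal
```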
Using quantum-secure one-way functions (which imply quantum-secure pseudorandom functions and permutations), we obtain an efficient cryptographic instantiation of the above.
Zvika Brakerski, Nir Magrafta
Split-State Non-malleable Codes and Secret Sharing Schemes for Quantum Messages
Abstract
Non-malleable codes are fundamental objects at the intersection of cryptography and coding theory. These codes provide security guarantees even in settings where error correction and detection are impossible, and have found applications to several other cryptographic tasks. One of the strongest and most well-studied adversarial tampering models is 2-split-state tampering. Here, a codeword is split into two parts which are stored in physically distant servers, and the adversary can then independently tamper with each part using arbitrary functions. This model can be naturally extended to the secret sharing setting with several parties by having the adversary independently tamper with each share. Previous works on non-malleable coding and secret sharing in the split-state tampering model only considered the encoding of classical messages. Furthermore, until recent work by Aggarwal, Boddu, and Jain (IEEE Trans. Inf. Theory 2024 & arXiv 2022), adversaries with quantum capabilities and shared entanglement had not been considered, and it is a priori not clear whether previous schemes remain secure in this model.
In this work, we introduce the notions of split-state non-malleable codes and secret sharing schemes for quantum messages secure against quantum adversaries with shared entanglement. Then, we present explicit constructions of such schemes that achieve low-error non-malleability. More precisely, for some constant \(c>0\), we construct efficiently encodable and decodable split-state non-malleable codes and secret sharing schemes for quantum messages preserving entanglement with external systems and achieving security against quantum adversaries having shared entanglement with codeword length n, any message length at most \(n^c\), and error \(\varepsilon =2^{-{n^{c}}}\). In the easier setting of average-case non-malleability, we achieve efficient non-malleable coding with rate close to 1/11.
Naresh Goud Boddu, Vipul Goyal, Rahul Jain, João Ribeiro
Cryptography in the Common Haar State Model: Feasibility Results and Separations
Abstract
The common random string model is a popular model in classical cryptography. We study a quantum analogue of this model called the common Haar state (CHS) model. In this model, every party participating in the cryptographic system receives many copies of one or more i.i.d. Haar random states.
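For concreteness, a Haar random pure state can be sampled classically (as a state vector) by normalizing a vector of i.i.d. standard complex Gaussians, a standard fact; the sketch below illustrates the states handed out in the CHS model.

```python
import numpy as np

def haar_random_state(dim, rng=None):
    """Sample a Haar-random pure state: a normalized vector of i.i.d.
    standard complex Gaussian entries is Haar-distributed on the sphere."""
    rng = rng or np.random.default_rng()
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

# In the CHS model, every party receives many copies of the same sampled
# state; classically we can only hand out the description of the vector.
psi = haar_random_state(2 ** 5)
print(np.vdot(psi, psi).real)  # ~1.0
```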
We study feasibility and limitations of cryptographic primitives in this model and its variants:
  • We present a construction of pseudorandom function-like states with security against computationally unbounded adversaries, as long as the adversaries only receive an a priori bounded number of copies. By suitably instantiating the CHS model, we obtain a new approach to construct pseudorandom function-like states in the plain model.
  • We present separations between pseudorandom function-like states (with super-logarithmic length) and quantum cryptographic primitives, such as interactive key agreement and bit commitment, with classical communication. To show these separations, we prove new results on the indistinguishability of identical versus independent Haar states against LOCC (local operations, classical communication) adversaries.
Prabhanjan Ananth, Aditya Gulati, Yao-Ting Lin
Robust Combiners and Universal Constructions for Quantum Cryptography
Abstract
A robust combiner combines many candidates for a cryptographic primitive and generates a new candidate for the same primitive. Its correctness and security hold as long as one of the original candidates satisfies correctness and security. A universal construction is a notion closely related to a robust combiner: a universal construction for a primitive is an explicit construction of the primitive that is correct and secure as long as the primitive exists. It is known that a universal construction for a primitive can be constructed from a robust combiner for the primitive in many cases.
Although robust combiners and universal constructions for classical cryptography are widely studied, robust combiners and universal constructions for quantum cryptography have not been explored so far. In this work, we define robust combiners and universal constructions for several quantum cryptographic primitives including one-way state generators, public-key quantum money, quantum bit commitments, and unclonable encryption, and provide constructions of them.
On a different note, it was an open problem how to expand the plaintext length of unclonable encryption. In one of our universal constructions for unclonable encryption, we can expand the plaintext length, which resolves the open problem.
Taiga Hiroka, Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
Unbounded Leakage-Resilience and Intrusion-Detection in a Quantum World
Abstract
Can an adversary hack into our system and steal sensitive data such as cryptographic keys? This question is as old as the Internet and significant effort has been spent on designing mechanisms to prevent and detect hacking attacks. Once quantum computers arrive, will the situation remain the same or can we hope to live in a better world?
We first consider ubiquitous side-channel attacks, which aim to leak side information on secret system components, studied in the leakage-resilient cryptography literature. Classical leakage-resilient cryptography must necessarily impose restrictions on the type of leakage one aims to protect against, such as the popular bounded leakage model. Although such leakage bounds are necessary, many real-world side-channel attacks cannot be captured by bounded leakage. In this work, we design cryptographic schemes that provide guarantees against arbitrary side-channel attacks:
  • Using techniques from unclonable quantum cryptography, we design several basic leakage-resilient primitives, such as public- and private-key encryption, pseudorandom functions, digital signatures, and quantum money schemes, which remain secure under unbounded adaptive classical leakage over an unbounded number of rounds.
  • What if the adversary simply breaks into our system to steal our secret keys, rather than mounting only a side-channel attack? What if the adversary can even tamper with the data arbitrarily, for example to cover its tracks? We initiate the study of intrusion-detection in the quantum setting, where one would like to detect if security has been compromised even in the face of such attacks. We design cryptographic schemes supporting intrusion-detection for a host of primitives such as public- and private-key encryption, digital signatures, functional encryption, program obfuscation, and software protection. Our schemes are based on techniques from cryptography with secure key leasing and certified deletion.
Alper Çakan, Vipul Goyal, Chen-Da Liu-Zhang, João Ribeiro

Math & Foundations

Frontmatter
Bit-Security Preserving Hardness Amplification
Abstract
Hardness amplification is one of the important reduction techniques in cryptography, and it has been extensively studied in the literature. The standard XOR lemma known in the literature evaluates hardness in terms of the probability of correct prediction; the hardness is amplified from mildly hard (prediction probability close to 1) to very hard (prediction probability \(1/2 + \varepsilon \)) at the cost of an \(\varepsilon ^2\) multiplicative decrease in circuit size. Translating such a statement into the bit-security framework introduced by Micciancio-Walter (EUROCRYPT 2018) and Watanabe-Yasunaga (ASIACRYPT 2021), it may cause a bit-security loss of \(\log (1/\varepsilon )\). To resolve this issue, we derive a new variant of the XOR lemma in terms of the Rényi advantage, which directly characterizes bit security. In the course of proving this result, we prove a new variant of the hardcore lemma in terms of the conditional squared advantage; our proof uses a boosting algorithm that may output the \(\bot \) symbol in addition to 0 and 1, which may be of independent interest.
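For orientation, the standard XOR lemma referenced above has, schematically, the following shape (parameters simplified; this is the classical statement, not the paper's new Rényi-advantage variant): if every circuit of size s predicts f correctly with probability at most \(1 - \delta \), then every circuit C of size roughly \(\varepsilon ^2 \cdot s\) satisfies \[ \Pr [C(x_1, \dots , x_k) = f(x_1) \oplus \cdots \oplus f(x_k)] \le \frac{1}{2} + \varepsilon , \] where \(\varepsilon \) decays with the number k of independent copies, roughly like \((1-2\delta )^k\).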
Shun Watanabe, Kenji Yasunaga
Bit Security: Optimal Adversaries, Equivalence Results, and a Toolbox for Computational-Statistical Security Analysis
Abstract
We investigate the notion of bit-security for decisional cryptographic properties, as originally proposed in (Micciancio & Walter, Eurocrypt 2018), and its main variants and extensions, with the goal of clarifying the relation between different definitions and facilitating their use. Specific contributions of this paper include: (1) identifying the optimal adversaries achieving the highest possible MW advantage, showing that they are deterministic and have a very simple threshold structure; (2) giving a simple proof that a competing definition proposed by (Watanabe & Yasunaga, Asiacrypt 2021) is actually equivalent to the original MW definition; and (3) developing tools for the use of the extended notion of computational-statistical bit-security introduced in (Li, Micciancio, Schultz & Sorrell, Crypto 2022), showing that it fully supports common cryptographic proof techniques like hybrid arguments and probability replacement theorems. On the technical side, our results are obtained by introducing a new notion of “fuzzy” distinguisher (which we prove equivalent to the “aborting” distinguishers of Micciancio and Walter), and a tight connection between the MW advantage and the Le Cam metric, a standard quantity used in statistics.
Daniele Micciancio, Mark Schultz-Wu
Low-Degree Security of the Planted Random Subgraph Problem
Abstract
The planted random subgraph detection conjecture of Abram et al. (TCC 2023) asserts the pseudorandomness of a pair of graphs \((H, G)\), where G is an Erdős-Rényi random graph on n vertices, and H is a random induced subgraph of G on k vertices. Assuming the hardness of distinguishing these two distributions (with two leaked vertices), Abram et al. construct communication-efficient, computationally secure (1) 2-party private simultaneous messages (PSM) and (2) secret sharing for forbidden graph structures.
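For concreteness, the planted distribution can be sampled as follows (a sketch of the distribution only; the conjecture concerns the hardness of distinguishing such a correlated pair from a pair of independently sampled graphs):

```python
import numpy as np

def planted_pair(n, k, p=0.5, rng=None):
    """Sample (H, G): G is an Erdos-Renyi G(n, p) graph given as an adjacency
    matrix, and H is the subgraph induced by a uniformly random k-subset of
    its vertices (the random subset order also randomizes H's vertex labels)."""
    rng = rng or np.random.default_rng()
    A = rng.random((n, n)) < p
    A = np.triu(A, 1)
    G = (A | A.T).astype(np.int8)          # symmetric adjacency, no self-loops
    S = rng.choice(n, size=k, replace=False)
    H = G[np.ix_(S, S)]                    # induced subgraph on S
    return H, G

H, G = planted_pair(n=64, k=8)
print(H.shape, G.shape)  # (8, 8) (64, 64)
```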
We prove the low-degree hardness of detecting planted random subgraphs all the way up to \(k\le n^{1 - \varOmega (1)}\). This improves over Abram et al.’s analysis for \(k \le n^{1/2 - \varOmega (1)}\). The hardness extends to r-uniform hypergraphs for constant r.
Our analysis is tight in the distinguisher’s degree, its advantage, and in the number of leaked vertices. Extending the constructions of Abram et al., we apply the conjecture towards (1) communication-optimal multiparty PSM protocols for random functions and (2) bit secret sharing with share size \((1 + \epsilon )\log n\) for any \(\epsilon > 0\) in which arbitrary minimal coalitions of up to r parties can reconstruct and secrecy holds against all unqualified subsets of up to \(\ell = o(\epsilon \log n)^{1/(r-1)}\) parties.
Andrej Bogdanov, Chris Jones, Alon Rosen, Ilias Zadik
Sparse Linear Regression and Lattice Problems
Abstract
Sparse linear regression (SLR) is a well-studied problem in statistics where one is given a design matrix \(\textbf{X} \in \mathbb {R}^{m \times n}\) and a response vector \(\textbf{y} = \textbf{X} \boldsymbol{\theta }^* + \textbf{w}\) for a k-sparse vector \(\boldsymbol{\theta }^*\) (that is, \(\Vert \boldsymbol{\theta }^*\Vert _0 \le k\)) and small, arbitrary noise \(\textbf{w}\), and the goal is to find a k-sparse \(\widehat{\boldsymbol{\theta }} \in \mathbb {R}^{n}\) that minimizes the mean squared prediction error \(\frac{1}{m} \Vert \textbf{X} \widehat{\boldsymbol{\theta }} - \textbf{X} \boldsymbol{\theta }^*\Vert ^2_2\). While \(\ell _1\)-relaxation methods such as basis pursuit, Lasso, and the Dantzig selector solve SLR when the design matrix is well-conditioned, no general algorithm is known, nor is there any formal evidence of hardness in an average-case setting with respect to all efficient algorithms.
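To make the setup concrete, the following sketch generates an SLR instance and evaluates the mean squared prediction error of a candidate; the noise here is Gaussian purely for illustration, whereas the problem statement allows small arbitrary noise.

```python
import numpy as np

def slr_instance(m, n, k, noise=0.01, rng=None):
    """Generate y = X theta* + w with a k-sparse theta*.
    X is Gaussian here only for illustration; the hardness results concern
    ill-conditioned design matrices arising from lattice problems."""
    rng = rng or np.random.default_rng()
    X = rng.normal(size=(m, n))
    theta_star = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    theta_star[support] = rng.normal(size=k)
    y = X @ theta_star + noise * rng.normal(size=m)
    return X, y, theta_star

def mspe(X, theta_hat, theta_star):
    """Mean squared prediction error (1/m) * ||X theta_hat - X theta*||_2^2."""
    return np.mean((X @ (theta_hat - theta_star)) ** 2)

X, y, theta_star = slr_instance(m=200, n=500, k=5)
print(mspe(X, np.zeros(500), theta_star))  # error of the trivial all-zero guess
```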
We give evidence of average-case hardness of SLR w.r.t. all efficient algorithms assuming the worst-case hardness of lattice problems. Specifically, we give an instance-by-instance reduction from a variant of the bounded distance decoding (BDD) problem on lattices to SLR, where the condition number of the lattice basis that defines the BDD instance is directly related to the restricted eigenvalue condition of the design matrix, which characterizes some of the classical statistical-computational gaps for sparse linear regression. Also, by appealing to worst-case to average-case reductions from the world of lattices, this shows hardness for a distribution of SLR instances; while the design matrices are ill-conditioned, the resulting SLR instances are in the identifiable regime.
Furthermore, for well-conditioned (essentially) isotropic Gaussian design matrices, where Lasso is known to behave well in the identifiable regime, we show hardness of outputting any good solution in the unidentifiable regime where there are many solutions, assuming the worst-case hardness of standard and well-studied lattice problems.
Aparna Gupte, Neekon Vafa, Vinod Vaikuntanathan
Worst-Case to Average-Case Hardness of LWE: An Alternative Perspective
Abstract
In this work, we study the worst-case to average-case hardness of the Learning with Errors problem (LWE) under an alternative measure of hardness: the maximum success probability achievable by a probabilistic polynomial-time (PPT) algorithm. Previous works by Regev (STOC 2005), Peikert (STOC 2009), and Brakerski, Langlois, Peikert, Regev, and Stehlé (STOC 2013) give worst-case to average-case reductions from lattice problems to LWE, specifically from the approximate decision variant of the Shortest Vector Problem (GapSVP) and the Bounded Distance Decoding (BDD) problem. These reductions, however, are lossy in the sense that even the strongest assumption on the worst-case hardness of GapSVP or BDD implies only mild hardness of LWE. Our alternative perspective gives a much tighter reduction and strongly relates the hardness of LWE to that of BDD. In particular, we show that under a reasonable assumption about the success probability of solving BDD via a PPT algorithm, we obtain a nearly tight lower bound on the highest possible success probability for solving LWE via a PPT algorithm. Furthermore, we show a tight relationship between the best success probability achievable by any PPT algorithm for decision-LWE and that for search-LWE. Our results not only refine our understanding of the computational complexity of LWE, but also provide a useful framework for analyzing its practical security implications.
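For reference, the two problems related by the reduction are, in their standard forms (not the paper's specific parameterization): in search-LWE, given samples \((\textbf{a}_i, b_i)\) with \(\textbf{a}_i\) uniform over \(\mathbb {Z}_q^n\) and \[ b_i = \langle \textbf{a}_i, \textbf{s} \rangle + e_i \bmod q \] for a fixed secret \(\textbf{s}\) and small errors \(e_i\), the goal is to recover \(\textbf{s}\); in BDD, given a lattice basis \(\textbf{B}\) and a target \(\textbf{t}\) promised to satisfy \(\textrm{dist}(\textbf{t}, \mathcal {L}(\textbf{B})) < \alpha \cdot \lambda _1(\mathcal {L}(\textbf{B}))\) for some \(\alpha < 1/2\), the goal is to find the unique lattice vector closest to \(\textbf{t}\).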
Divesh Aggarwal, Jin Ming Leong, Alexandra Veliche

Proofs II

Frontmatter
Batching Adaptively-Sound SNARGs for NP
Abstract
A succinct non-interactive argument (SNARG) for NP allows a prover to convince a verifier that an NP statement x is true with a proof whose size is sublinear in the length of the traditional NP witness. Moreover, a SNARG is adaptively sound if the adversary can choose the statement it wants to prove after seeing the scheme parameters. Very recently, Waters and Wu (STOC 2024) showed how to construct adaptively-sound SNARGs for NP in the plain model from falsifiable assumptions (specifically, sub-exponentially secure indistinguishability obfuscation, sub-exponentially secure one-way functions, and polynomial hardness of discrete log).
We consider the batch setting where the prover wants to prove a collection of T statements \(x_1, \ldots , x_T\) and its goal is to construct a proof whose size is sublinear in both the size of a single witness and the number of instances T. In this setting, existing constructions either require the size of the public parameters to scale linearly with T (and thus, can only support an a priori bounded number of instances), or only provide non-adaptive soundness, or have proof size that scales linearly with the size of a single NP witness. In this work, we give two approaches for batching adaptively-sound SNARGs for NP, and in particular, show that under the same set of assumptions as those underlying the Waters-Wu adaptively-sound SNARG, we can obtain an adaptively-sound SNARG for batch NP where the size of the proof is \(\textsf{poly} (\lambda )\) and the size of the CRS is \(\textsf{poly} (\lambda + |C|)\), where \(\lambda \) is a security parameter and |C| is the size of the circuit that computes the associated NP relation.
Our first approach builds directly on top of the Waters-Wu construction and relies on indistinguishability obfuscation and a homomorphic re-randomizable one-way function. Our second approach shows how to combine ideas from the Waters-Wu SNARG with the chaining-based approach by Garg, Sheridan, Waters, and Wu (TCC 2022) to obtain a SNARG for batch NP.
Lalita Devadas, Brent Waters, David J. Wu
Doubly-Efficient Batch Verification in Statistical Zero-Knowledge
Abstract
A sequence of recent works, concluding with Mu et al. (Eurocrypt 2024), has shown that every problem \(\varPi \) admitting a non-interactive statistical zero-knowledge proof (\(\text {NISZK}\)) has an efficient zero-knowledge batch verification protocol, namely an \(\text {NISZK}\) protocol for proving that \(x_1,\dots ,x_k \in \varPi \) with communication that only scales poly-logarithmically with k. A caveat of this line of work is that the prover runs in exponential time, whereas for \(\text {NP}\) problems it is natural to hope for a doubly-efficient proof, that is, a prover that runs in polynomial time given the k \(\text {NP}\) witnesses.
In this work we show that every problem in \(\text {NISZK}\cap \text {UP}\) has a doubly-efficient interactive statistical zero-knowledge proof with communication \(\textrm{poly}(n,\log (k))\) and \(\textrm{poly}(\log (k),\log (n))\) rounds. The prover runs in time \(\textrm{poly}(n,k)\) given access to the k \(\text {UP}\) witnesses. Here n denotes the length of each individual input, and \(\text {UP}\) is the subclass of \(\text {NP}\) relations in which YES instances have unique witnesses.
This result yields doubly-efficient statistical zero-knowledge batch verification protocols for a variety of concrete and central cryptographic problems from the literature.
Or Keret, Ron D. Rothblum, Prashant Nalini Vasudevan
Monotone Policy BARGs from BARGs and Additively Homomorphic Encryption
Abstract
A monotone policy batch \(\textsf{NP} \) language \(\mathcal {L}_{\mathcal {R}, P}\) is parameterized by a monotone policy \(P :\{0,1\}^k \rightarrow \{0,1\}\) and an \(\textsf{NP} \) relation \(\mathcal {R}\). A statement \((x_1, \ldots , x_k)\) is a yes instance if there exists \(w_1, \ldots , w_k\) where \(P(\mathcal {R}(x_1, w_1), \ldots , \mathcal {R}(x_k, w_k)) = 1\). For example, we might say that an instance \((x_1, \ldots , x_k)\) is a yes instance if a majority of the statements are true. A monotone policy batch argument (BARG) for \(\textsf{NP} \) allows a prover to prove that \((x_1, \ldots , x_k) \in \mathcal {L}_{\mathcal {R}, P}\) with a proof of size \(\textsf{poly}(\lambda , |\mathcal {R}|, \log k)\), where \(\lambda \) is the security parameter, \(|\mathcal {R}|\) is the size of the Boolean circuit that computes \(\mathcal {R}\), and k is the number of instances. Recently, Brakerski, Brodsky, Kalai, Lombardi, and Paneth (CRYPTO 2023) gave the first monotone policy BARG for \(\textsf{NP} \) from the learning with errors (LWE) assumption.
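To unpack the definition, the following Python sketch checks membership in \(\mathcal {L}_{\mathcal {R}, P}\) with the majority policy from the example; the relation R is a toy stand-in, and the point of a monotone policy BARG is to certify this computation succinctly without revealing the witnesses.

```python
def in_monotone_batch_language(R, P, xs, ws):
    """Membership in L_{R,P}: evaluate the NP relation R on each
    (statement, witness) pair and feed the resulting bits to the
    monotone policy P."""
    bits = [R(x, w) for x, w in zip(xs, ws)]
    return P(bits)

# The majority policy from the abstract, and a toy relation for illustration:
# R(x, w) accepts when w is a square root of 1 modulo x.
majority = lambda bits: sum(bits) > len(bits) // 2
R = lambda x, w: (w * w) % x == 1
print(in_monotone_batch_language(R, majority, xs=[8, 15, 21], ws=[3, 4, 8]))  # True
```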
In this work, we describe a generic approach for constructing monotone policy BARGs from any BARG for \(\textsf{NP} \) together with an additively homomorphic encryption scheme. This yields the first constructions of monotone policy BARGs from the k-\(\textsf{Lin}\) assumption in prime-order pairing groups as well as the (subexponential) DDH assumption in pairing-free groups. Central to our construction is a notion of a zero-fixing hash function, which is a relaxed version of a predicate-extractable hash function from the work of Brakerski et al. Our relaxation enables a direct realization of zero-fixing hash functions from BARGs for \(\textsf{NP} \) and additively homomorphic encryption, whereas the previous notion relied on leveled homomorphic encryption, and by extension, the LWE assumption.
As an application, we also show how to combine a monotone policy BARG with a puncturable signature scheme to obtain a monotone policy aggregate signature scheme. Our work yields the first (statically-secure) monotone policy aggregate signatures that support general monotone Boolean circuits from standard pairing-based assumptions. Previously, this was only known from LWE.
Shafik Nassar, Brent Waters, David J. Wu
Batch Arguments to NIZKs from One-Way Functions
Abstract
Succinctness and zero-knowledge are two fundamental properties in the study of cryptographic proof systems. Several recent works have formalized the connections between these two notions by showing how to realize non-interactive zero-knowledge (NIZK) arguments from succinct non-interactive arguments. Specifically, Champion and Wu (CRYPTO 2023) as well as Bitansky, Kamath, Paneth, Rothblum, and Vasudevan (ePrint 2023) recently showed how to construct a NIZK argument for \(\textsf{NP}\) from a (somewhere-sound) non-interactive batch argument (BARG) and a dual-mode commitment scheme (and in the case of the Champion-Wu construction, a local pseudorandom generator). The main open question is whether a BARG suffices for a NIZK (just assuming one-way functions).
In this work, we first show that an adaptively-sound BARG for \(\textsf{NP}\) together with a one-way function implies a computational NIZK argument for \(\textsf{NP}\). We then show that existing BARGs from standard algebraic assumptions, which achieve the weaker notion of somewhere soundness, are also adaptively sound if we assume sub-exponential security. This transformation may also be of independent interest. Taken together, we obtain a NIZK argument for \(\textsf{NP}\) from one-way functions and a sub-exponentially-secure somewhere-sound BARG for \(\textsf{NP}\).
If we instead assume plain public-key encryption, we show that a standard polynomially-secure somewhere-sound batch argument for \(\textsf{NP}\) suffices for the same implication. As a corollary, this means a somewhere-sound BARG can be used to generically upgrade any semantically-secure public-key encryption scheme into one secure against chosen-ciphertext attacks. More broadly, our results demonstrate that constructing non-interactive batch arguments for \(\textsf{NP}\) is essentially no easier than constructing NIZK arguments for \(\textsf{NP}\).
Eli Bradley, Brent Waters, David J. Wu
Security Bounds for Proof-Carrying Data from Straightline Extractors
Abstract
Proof-carrying data (PCD) is a powerful cryptographic primitive that allows mutually distrustful parties to perform distributed computation in an efficiently verifiable manner. Real-world deployments of PCD have sparked keen interest within the applied community and industry.
Known constructions of PCD are obtained by recursively composing SNARKs or related primitives. Unfortunately, known security analyses incur expensive blowups, which practitioners have disregarded, as the analyses would lead to prohibitively expensive parameter settings.
In this work we study the concrete security of recursive composition, with the goal of better understanding how to reasonably set parameters for certain PCD constructions of practical interest. Our main result is that PCD obtained from SNARKs with straightline knowledge soundness has essentially the same security as the underlying SNARK (i.e., recursive composition incurs essentially no security loss).
We describe how straightline knowledge soundness is achieved by SNARKs in several oracle models, which results in a highly efficient security analysis of PCD that makes black-box use of the SNARK’s oracle (there is no need to instantiate the oracle to carry out the security reduction).
As a notable application, our work offers an idealized model that provides new, albeit heuristic, insights for the concrete security of recursive STARKs used in blockchain systems. Our work could be viewed as partial evidence justifying the parameter choices for recursive STARKs made by practitioners.
Alessandro Chiesa, Ziyi Guan, Shahar Samocha, Eylon Yogev
Backmatter
Metadata
Title
Theory of Cryptography
Edited by
Elette Boyle
Mohammad Mahmoody
Copyright Year
2025
Electronic ISBN
978-3-031-78017-2
Print ISBN
978-3-031-78016-5
DOI
https://doi.org/10.1007/978-3-031-78017-2