
2023 | Book

Theory of Cryptography

21st International Conference, TCC 2023, Taipei, Taiwan, November 29–December 2, 2023, Proceedings, Part IV


About this book

The four-volume set LNCS 14369–14372 constitutes the refereed proceedings of the 21st International Conference on Theory of Cryptography, TCC 2023, held in Taipei, Taiwan, in November/December 2023. The 68 full papers presented in the proceedings were carefully reviewed and selected from 168 submissions. They focus on topics such as proofs and outsourcing; theoretical foundations; multi-party computation; encryption; secret sharing, PIR and memory checking; anonymity, surveillance and tampering; lower bounds; IOPs and succinctness; lattices; quantum cryptography; Byzantine agreement, consensus and composability.

Table of Contents

Frontmatter

Lattices

Frontmatter
Rigorous Foundations for Dual Attacks in Coding Theory
Abstract
Dual attacks aiming at decoding generic linear codes have recently been found to outperform, for certain parameters, information set decoding techniques, which have been for 60 years the dominant tool for solving this problem and for choosing the parameters of code-based cryptosystems. However, the complexity analysis of these dual attacks relies on some unproven assumptions that are not even fully backed up by experimental evidence. These dual attacks can actually be viewed as the code-based analogue of dual attacks in lattice-based cryptography. There too, dual attacks have in recent years been found to be strong competitors to primal attacks, and a controversy has emerged over whether similar heuristics, made for instance on the independence of certain random variables, really hold. We show that dual attacks in coding theory can be studied by first providing a simple alternative expression of the fundamental quantity used in these attacks. We then show that this expression can be studied without relying on any independence assumptions. This study leads us to discover that there is indeed a problem with the latest and most powerful dual attack proposed in [CDMT22]: for the parameters chosen in this algorithm, false candidates are produced that are not predicted by the analysis given there, which relies on independence assumptions. We then suggest a slight modification of this algorithm consisting of a further verification step, analyze it thoroughly, provide experimental evidence that our analysis is accurate, and show that the complexity claims made in [CDMT22] are indeed valid for this modified algorithm. This approach provides a simple methodology for rigorously studying dual attacks, which could prove useful for further developing the subject.
Charles Meyer-Hilfiger, Jean-Pierre Tillich
On the Multi-user Security of LWE-Based NIKE
Abstract
Non-interactive key exchange (NIKE) schemes like the Diffie-Hellman key exchange are a widespread building block in several cryptographic protocols. Since the Diffie-Hellman key exchange is not post-quantum secure, it is important to investigate post-quantum alternatives.
We analyze the security of the LWE-based NIKE by Ding et al. (ePrint 2012) and Peikert (PQCrypto 2014) in a multi-user setting where the same public key is used to generate shared keys with multiple other users. The Diffie-Hellman key exchange achieves this security notion. The LWE-based NIKE, however, comes with an inherent correctness error (Guo et al., PKC 2020), which has significant implications for its multi-user security and necessitates a closer examination.
For NIKE schemes with negligible correctness error, single-user security generically implies multi-user security when all users generate their keys honestly. However, the LWE-based NIKE requires a super-polynomial modulus to achieve a negligible correctness error, which makes the scheme less efficient. We show that
  • generically, single-user security does not imply multi-user security when the correctness error is non-negligible, but despite this
  • the LWE-based NIKE with polynomial modulus is multi-user secure for honest users when the number of users is fixed in advance. This result takes advantage of the leakage-resilience properties of LWE.
We then turn to a stronger model of multi-user security that allows adversarially generated public keys. For this model, we consider a variant of the LWE-based NIKE where each public key is equipped with a NIZKPoK of the secret key. Adding NIZKPoKs is a standard technique for this stronger model, and Hesse et al. (Crypto 2018) showed that it suffices to achieve security in this model for perfectly correct NIKEs (which the LWE-based NIKE is not). We show that
  • for certain parameters that include all parameters with polynomial modulus, the LWE-based NIKE can be efficiently attacked with adversarially generated public keys, despite the use of NIZKPoKs, but
  • for suitable parameters (that require a super-polynomial modulus), this security notion is achieved by the LWE-based NIKE with NIZKPoKs.
This stronger security notion has been previously achieved for LWE-based NIKE only in the QROM, while all our results are in the standard model.
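To illustrate where the correctness error comes from, here is a minimal sketch of the Ding et al./Peikert blueprint in simplified notation (the rounding function and parameters are illustrative, not the exact scheme analyzed above). With a public matrix \(A \in \mathbb{Z}_q^{n \times n}\), the two parties publish noisy LWE samples and derive the shared key by rounding an inner product:
\[
\textsf{pk}_A = A^{\top} s_A + e_A, \qquad \textsf{pk}_B = A\, s_B + e_B,
\]
\[
k_A = \big\lfloor s_A^{\top}\, \textsf{pk}_B \big\rceil_2 = \big\lfloor s_A^{\top} A\, s_B + s_A^{\top} e_B \big\rceil_2,
\qquad
k_B = \big\lfloor \textsf{pk}_A^{\top} s_B \big\rceil_2 = \big\lfloor s_A^{\top} A\, s_B + e_A^{\top} s_B \big\rceil_2 .
\]
The two inner products agree only up to the small cross terms \(s_A^{\top} e_B\) and \(e_A^{\top} s_B\); whenever the shared value \(s_A^{\top} A\, s_B\) lands close to a rounding boundary, Alice and Bob round to different keys. With a polynomial modulus \(q\) this happens with non-negligible (roughly \(\mathrm{poly}(n)/q\)) probability, which is the correctness error referred to above; driving it to negligible requires a super-polynomial modulus.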
Roman Langrehr
Ideal-SVP is Hard for Small-Norm Uniform Prime Ideals
Abstract
The presumed hardness of the Shortest Vector Problem for ideal lattices (Ideal-SVP) has been a fruitful assumption to understand other assumptions on algebraic lattices and as a security foundation of cryptosystems. Gentry [CRYPTO’10] proved that Ideal-SVP enjoys a worst-case to average-case reduction, where the average-case distribution is the uniform distribution over the set of inverses of prime ideals of small algebraic norm (below \(d^{O(d)}\) for cyclotomic fields, where d refers to the field degree). De Boer et al. [CRYPTO’20] obtained another random self-reducibility result for an average-case distribution involving integral ideals of norm \(2^{O(d^2)}\).
In this work, we show that Ideal-SVP for the uniform distribution over inverses of small-norm prime ideals reduces to Ideal-SVP for the uniform distribution over small-norm prime ideals. Combined with Gentry’s reduction, this leads to a worst-case to average-case reduction for the uniform distribution over the set of small-norm prime ideals. Using the reduction from Pellet-Mary and Stehlé [ASIACRYPT’21], this notably leads to the first distribution over NTRU instances with a polynomial modulus whose hardness is supported by a worst-case lattice problem.
Joël Felderhoff, Alice Pellet-Mary, Damien Stehlé, Benjamin Wesolowski
Revocable Cryptography from Learning with Errors
Abstract
Quantum cryptography leverages unique properties of quantum information in order to construct cryptographic primitives that are oftentimes impossible classically. In this work, we build on the no-cloning principle of quantum mechanics and design cryptographic schemes with key revocation capabilities. We consider schemes where secret keys are represented as quantum states with the guarantee that, once the secret key is successfully revoked from a user, they no longer have the ability to perform the same functionality as before.
We define and construct several fundamental cryptographic primitives with key-revocation capabilities, namely pseudorandom functions, secret-key and public-key encryption, and even fully homomorphic encryption, assuming the quantum sub-exponential hardness of the learning with errors problem. Central to all our constructions is our method of making the Dual-Regev encryption (Gentry, Peikert and Vaikuntanathan, STOC 2008) scheme revocable.
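For context, here is a minimal recap of (classical) Dual-Regev encryption of a single bit, in simplified notation (a hedged sketch of the standard GPV scheme, not the revocable variant constructed in this paper):
\[
\textsf{sk} = r \in \{0,1\}^m, \qquad \textsf{pk} = \big(A,\ u = A r\big) \ \text{with} \ A \in \mathbb{Z}_q^{n \times m},
\]
\[
\textsf{Enc}(\textsf{pk}, b) = \big(c_0 = A^{\top} s + e,\ \ c_1 = u^{\top} s + e' + b \lfloor q/2 \rfloor \big),
\qquad
\textsf{Dec}\big(\textsf{sk}, (c_0, c_1)\big) = \Big\lfloor \tfrac{2}{q}\big(c_1 - r^{\top} c_0\big) \Big\rceil \bmod 2 .
\]
Decryption works because \(c_1 - r^{\top} c_0 = b \lfloor q/2 \rfloor + e' - r^{\top} e\) and the noise term is small. Roughly speaking, the revocable schemes above replace the classical short preimage \(r\) of \(u\) with a quantum state, so that the decryption capability can later be certifiably returned.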
Prabhanjan Ananth, Alexander Poremba, Vinod Vaikuntanathan

Quantum Cryptography

Frontmatter
Pseudorandomness with Proof of Destruction and Applications
Abstract
Two fundamental properties of quantum states that quantum information theory explores are pseudorandomness and provability of destruction. We introduce the notion of quantum pseudorandom states with proofs of destruction (PRSPD) that combines both these properties. Like standard pseudorandom states (PRS), these are efficiently generated quantum states that are indistinguishable from random, but they can also be measured to create a classical string. This string is verifiable (given the secret key) and certifies that the state has been destructed. We show that, similarly to PRS, PRSPD can be constructed from any post-quantum one-way function. As far as the authors are aware, this is the first construction of a family of states that satisfies both pseudorandomness and provability of destruction.
We show that many cryptographic applications previously shown to follow from PRS variants using quantum communication can be based on (variants of) PRSPD using only classical communication. This includes symmetric encryption, message authentication, one-time signatures, commitments, and classically verifiable private quantum coins.
Amit Behera, Zvika Brakerski, Or Sattath, Omri Shmueli
Semi-quantum Copy-Protection and More
Abstract
Properties of quantum mechanics have enabled the emergence of quantum cryptographic protocols achieving important goals which are proven to be impossible classically. Unfortunately, this usually comes at the cost of needing quantum power from every party in the protocol, while arguably a more realistic scenario would be a network of classical clients, classically interacting with a quantum server.
In this paper, we focus on copy-protection, a quantum primitive that allows a program to be evaluated but not copied, and which has attracted particular interest due to its links to other unclonable cryptographic primitives. Our main contribution is to show how to dequantize quantum copy-protection schemes constructed from hidden coset states, by giving a construction for classically-instructed remote state preparation for coset states which preserves the hardness properties of hidden coset states. We then apply this dequantizer to obtain semi-quantum cryptographic protocols for copy-protection and tokenized signatures with strong unforgeability. In the process, we present the first secure copy-protection scheme for point functions in the plain model and a new direct product hardness property of coset states, which immediately implies a strongly unforgeable tokenized signature scheme.
Céline Chevalier, Paul Hermouet, Quoc-Huy Vu
Weakening Assumptions for Publicly-Verifiable Deletion
Abstract
We develop a simple compiler that generically adds publicly-verifiable deletion to a variety of cryptosystems. Our compiler only makes use of one-way functions (or one-way state generators, if we allow the public verification key to be quantum). Previously, similar compilers either relied on indistinguishability obfuscation along with any one-way function (Bartusek et al., ePrint:2023/265), or on almost-regular one-way functions (Bartusek, Khurana and Poremba, CRYPTO 2023).
James Bartusek, Dakshita Khurana, Giulio Malavolta, Alexander Poremba, Michael Walter
Public-Key Encryption with Quantum Keys
Abstract
In the framework of Impagliazzo’s five worlds, a distinction is often made between two worlds, one where public-key encryption exists (Cryptomania), and one in which only one-way functions exist (MiniCrypt). However, the boundaries between these worlds can change when quantum information is taken into account. Recent work has shown that quantum variants of oblivious transfer and multi-party computation, both primitives that are classically in Cryptomania, can be constructed from one-way functions, placing them in the realm of quantum MiniCrypt (the so-called MiniQCrypt). This naturally raises the following question: Is it possible to construct a quantum variant of public-key encryption, which is at the heart of Cryptomania, from one-way functions or potentially weaker assumptions?
In this work, we initiate the formal study of the notion of quantum public-key encryption (qPKE), i.e., public-key encryption where keys are allowed to be quantum states. We propose new definitions of security and several constructions of qPKE based on the existence of one-way functions (OWF), or even weaker assumptions, such as pseudorandom function-like states (PRFS) and pseudorandom function-like states with proof of destruction (PRFSPD). Finally, to give a tight characterization of this primitive, we show that computational assumptions are necessary to build quantum public-key encryption. That is, we give a self-contained proof that no quantum public-key encryption scheme can provide information-theoretic security.
Khashayar Barooti, Alex B. Grilo, Loïs Huguenin-Dumittan, Giulio Malavolta, Or Sattath, Quoc-Huy Vu, Michael Walter
Publicly Verifiable Deletion from Minimal Assumptions
Abstract
We present a general compiler to add the publicly verifiable deletion property to various cryptographic primitives, including public-key encryption, attribute-based encryption, and quantum fully homomorphic encryption. Our compiler only uses one-way functions, or more generally hard quantum planted problems for \(\textsf{NP}\), which are implied by one-way functions. It relies on minimal assumptions and enables us to add the publicly verifiable deletion property with no additional assumption for the above primitives. Previously, such a compiler needed additional assumptions such as injective trapdoor one-way functions or pseudorandom group actions [Bartusek-Khurana-Poremba, CRYPTO 2023]. Technically, we upgrade an existing compiler for privately verifiable deletion [Bartusek-Khurana, CRYPTO 2023] to achieve publicly verifiable deletion by using digital signatures.
Fuyuki Kitagawa, Ryo Nishimaki, Takashi Yamakawa
One-Out-of-Many Unclonable Cryptography: Definitions, Constructions, and More
Abstract
The no-cloning principle of quantum mechanics enables us to achieve amazing unclonable cryptographic primitives, which are impossible in classical cryptography. However, the security definitions for unclonable cryptography are tricky, and achieving desirable security notions for unclonability is a challenging task. In particular, there is no known indistinguishable-secure unclonable encryption or quantum copy-protection for single-bit output point functions in the standard model. To tackle this problem, we introduce and study relaxed but meaningful security notions for unclonable cryptography in this work. We call the new security notion one-out-of-many unclonable security.
We obtain the following results.
  • We show that one-time strong anti-piracy secure secret key single-decryptor encryption (SDE) implies one-out-of-many indistinguishable-secure unclonable encryption.
  • We construct a one-time strong anti-piracy secure secret key SDE scheme in the standard model from the LWE assumption. This scheme can encrypt multi-bit messages.
  • We construct one-out-of-many copy-protection for single-bit output point functions from one-out-of-many indistinguishable-secure unclonable encryption and the LWE assumption.
  • We construct one-out-of-many unclonable predicate encryption (PE) from one-out-of-many indistinguishable-secure unclonable encryption and the LWE assumption.
Thus, we obtain one-out-of-many indistinguishable-secure unclonable encryption, one-out-of-many copy-protection for single-bit output point functions, and one-out-of-many unclonable PE in the standard model from the LWE assumption. In addition, our one-time SDE scheme is the first multi-bit SDE scheme that does not rely on oracle heuristics or strong assumptions such as indistinguishability obfuscation and witness encryption.
Fuyuki Kitagawa, Ryo Nishimaki

Group-Based Cryptography

Frontmatter
Limits in the Provable Security of ECDSA Signatures
Abstract
Digital signatures are ubiquitous in modern computing. One of the most widely used digital signature schemes is \(\textsf{ECDSA}\) due to its use in TLS, various blockchains such as Bitcoin and Ethereum, and many other applications. Yet the formal analysis of \(\textsf{ECDSA}\) is comparatively sparse. In particular, all known security results for \(\textsf{ECDSA}\) rely on some idealized model such as the generic group model or the programmable (bijective) random oracle model.
In this work, we study the question of whether these strong idealized models are necessary for proving the security of \(\textsf{ECDSA}\). Specifically, we focus on the programmability of \(\textsf{ECDSA}\)'s “conversion function”, which maps an elliptic curve point to its x-coordinate modulo the group order. Unfortunately, our main results are negative. We establish, by means of meta-reductions, that an algebraic security reduction for \(\textsf{ECDSA}\) can only exist if the security reduction is allowed to program the conversion function. As a consequence, a meaningful security proof for \(\textsf{ECDSA}\) is unlikely to exist without strong idealization.
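As a reminder of where the conversion function sits, here is textbook ECDSA in generic notation (group of prime order \(q\) generated by \(G\), hash function \(H\), secret key \(d\), public key \(pk = d \cdot G\); this is the standard description, not the paper's notation):
\[
f(R) = x(R) \bmod q, \qquad
\textsf{Sign}(d, m):\ R = k \cdot G,\ \ r = f(R),\ \ s = k^{-1}\big(H(m) + r \cdot d\big) \bmod q,
\]
\[
\textsf{Verify}\big(pk, m, (r, s)\big):\ \ r \overset{?}{=} f\big(s^{-1} H(m) \cdot G + s^{-1} r \cdot pk\big).
\]
The negative results above concern whether a security reduction may treat (i.e., program) this fixed, non-cryptographic map \(f\) as if it were an idealized object.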
Dominik Hartmann, Eike Kiltz
Round-Robin is Optimal: Lower Bounds for Group Action Based Protocols
Abstract
A hard homogeneous space (HHS) is a finite group acting on a set, with the group action being hard to invert and the set lacking any algebraic structure. As such, HHSs could potentially replace finite groups in which the discrete logarithm is hard as a foundation for building cryptographic primitives and protocols in a post-quantum world.
Threshold HHS-based primitives typically require parties to compute the group action of a secret-shared input on a public set element. On the one hand, this could be done through generic MPC techniques, although they incur prohibitive costs due to the high complexity of the circuits known to date for evaluating group actions. On the other hand, round-robin protocols only require black-box usage of the HHS. However, these are highly sequential procedures, taking as many rounds as there are parties involved. The high round complexity appears to be inherent due to the lack of homomorphic properties in HHSs, yet no lower bounds were known so far.
In this work we formally show that round-robin protocols are optimal: any at least passively secure distributed computation of a group action making black-box use of an HHS must take a number of rounds greater than or equal to the threshold parameter. We furthermore study fair protocols, in which all users receive the output in the same round (unlike plain round-robin), and prove communication and computation lower bounds of \(\varOmega(n \log_2 n)\) for n parties. Our results are proven in Shoup's Generic Action Model (GAM) and hold regardless of the underlying computational assumptions.
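To make the round-robin pattern concrete, here is a minimal Python sketch of the n-round evaluation of a group action on an additively secret-shared input (the act function and the sharing are placeholders standing in for an actual HHS; this illustrates the generic pattern whose round complexity is shown optimal above, not code from the paper).

def round_robin_action(act, shares, x0):
    """Round-robin evaluation of s * x0 where s = sum(shares):
    each party in turn applies its own share to the previous party's
    output and forwards the result, one communication round per party."""
    x = x0
    for share in shares:      # round i: party i applies its share
        x = act(share, x)     # black-box use of the group action only
    return x

# Toy usage (NOT a hard homogeneous space, just to exercise the code):
# the additive group acting on Z_p^* by g * x := x * h^g mod p.
p, h = 1000003, 5
act = lambda g, x: (x * pow(h, g, p)) % p
shares = [123, 456, 789]      # additive shares of the secret s
assert round_robin_action(act, shares, 1) == pow(h, sum(shares), p)

Each step needs the previous party's output, which is exactly the sequentiality that the lower bound above shows cannot be avoided when the HHS is used as a black box.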
Daniele Cozzo, Emanuele Giunta
(Verifiable) Delay Functions from Lucas Sequences
Abstract
Lucas sequences are constant-recursive integer sequences with a long history of applications in cryptography, both in the design of cryptographic schemes and cryptanalysis. In this work, we study the sequential hardness of computing Lucas sequences over an RSA modulus.
First, we show that modular Lucas sequences are at least as sequentially hard as the classical delay function given by iterated modular squaring proposed by Rivest, Shamir, and Wagner (MIT Tech. Rep. 1996) in the context of time-lock puzzles. Moreover, there is no obvious reduction in the other direction, which suggests that the assumption of sequential hardness of modular Lucas sequences is strictly weaker than that of iterated modular squaring. In other words, the sequential hardness of modular Lucas sequences might hold even in the case of an algorithmic improvement violating the sequential hardness of iterated modular squaring.
Second, we demonstrate the feasibility of constructing practically-efficient verifiable delay functions based on the sequential hardness of modular Lucas sequences. Our construction builds on the work of Pietrzak (ITCS 2019) by leveraging the intrinsic connection between the problem of computing modular Lucas sequences and exponentiation in an appropriate extension field.
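To make the two sequential tasks concrete, here is a minimal Python sketch (toy parameters; the choice \(Q = 1\) and the use of the doubling identity are illustrative simplifications, not the paper's exact construction) comparing T sequential steps of iterated modular squaring with T sequential doubling steps of a Lucas sequence.

def iterated_squaring(x, T, N):
    """RSW-style delay baseline: x^(2^T) mod N via T sequential squarings."""
    for _ in range(T):
        x = (x * x) % N
    return x

def lucas_delay(P, T, N):
    """Lucas analogue for Q = 1: with V_0 = 2, V_1 = P and
    V_k = P*V_{k-1} - V_{k-2} (mod N), the doubling identity
    V_{2k} = V_k^2 - 2 (mod N) yields V_{2^T} after T sequential steps."""
    v = P % N
    for _ in range(T):
        v = (v * v - 2) % N
    return v

Both loops are inherently sequential, since each iteration needs the previous value; this sequentiality is the source of the delay in both cases.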
Charlotte Hoffmann, Pavel Hubáček, Chethan Kamath, Tomáš Krňák
Algebraic Group Model with Oblivious Sampling
Abstract
In the algebraic group model (AGM), an adversary has to return, with each group element, a linear representation with respect to the input group elements. In many groups, it is easy to sample group elements obliviously, without knowing such a linear representation. Since the AGM does not model this, it can be used to prove the security of spurious knowledge assumptions; we show that several well-known zk-SNARKs use such assumptions. We propose AGM with oblivious sampling (AGMOS), an AGM variant where the adversary has access to an oracle that allows sampling group elements obliviously from some distribution. We show that AGM and AGMOS are different by studying the family of “total knowledge-of-exponent” assumptions, showing that they are all secure in the AGM, but most are not secure in the AGMOS if the discrete logarithm assumption holds. We show an important separation in the case of the KZG commitment scheme. We also show that many known AGM reductions go through in the AGMOS as well, assuming a novel falsifiable assumption \(\textrm{TOFR}\).
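As an illustration of the algebraic-adversary requirement (generic notation, not the paper's): an algebraic adversary that has received group elements \(X_1, \dots, X_k\) must accompany every group element it outputs with exponents explaining it,
\[
\mathcal{A}(X_1, \dots, X_k) \rightarrow \Big(Z,\ (c_1, \dots, c_k)\Big) \quad \text{such that} \quad Z = \prod_{i=1}^{k} X_i^{\,c_i}.
\]
AGMOS relaxes this by additionally giving the adversary an oracle that returns group elements sampled from some distribution without any such representation, reflecting the fact that in many groups elements can be sampled obliviously (e.g., by hashing into the group).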
Helger Lipmaa, Roberto Parisella, Janno Siim

Byzantine Agreement, Consensus and Composability

Frontmatter
Zombies and Ghosts: Optimal Byzantine Agreement in the Presence of Omission Faults
Abstract
Studying the feasibility of Byzantine Agreement (BA) in realistic fault models is an important question in the area of distributed computing and cryptography. In this work, we revisit the mixed fault model with Byzantine (malicious) faults and omission faults put forth by Hauser, Maurer, and Zikas (TCC 2009), who showed that BA (and MPC) is possible with t Byzantine faults, s send faults (whose outgoing messages may be dropped) and r receive faults (whose incoming messages may be lost) if \(n>3t+r+s\). We generalize their techniques and results by showing that BA is possible if \(n>2t+r+s\), given the availability of a cryptographic setup. Our protocol is the first to match the recent lower bound of Eldefrawy, Loss, and Terner (ACNS 2022) for this setting.
Julian Loss, Gilad Stern
Concurrent Asynchronous Byzantine Agreement in Expected-Constant Rounds, Revisited
Abstract
It is well known that without randomization, Byzantine agreement (BA) requires a linear number of rounds in the synchronous setting, while it is flat out impossible in the asynchronous setting. The primitive that allows bypassing this limitation is known as an oblivious common coin (OCC). It allows parties to agree with constant probability on a random coin, where agreement is oblivious, i.e., players are not aware whether or not agreement has been achieved.
The starting point of our work is the observation that no known protocol exists for information-theoretic multi-valued OCC with optimal resiliency in the asynchronous setting (with eventual message delivery).
This apparent hole in the literature is particularly problematic, as multi-valued OCC is implicitly or explicitly used in several constructions.
In this paper, we present the first information-theoretic multi-valued OCC protocol in the asynchronous setting with optimal resiliency, i.e., tolerating \(t<n/3\) corruptions, thereby filling this important gap. Further, our protocol efficiently implements OCC with an exponential-size domain, a property which is not even achieved by known constructions in the simpler, synchronous setting.
We then turn to the problem of round-preserving parallel composition of asynchronous BA. A protocol for this task was proposed by Ben-Or and El-Yaniv [Distributed Computing ’03]. Their construction, however, is flawed in several ways. Thus, as a second contribution, we provide a simpler, more modular protocol for the above task. Finally, and as a contribution of independent interest, we provide proofs in Canetti’s Universal Composability framework; this makes our work the first one offering composability guarantees, which are important as BA is a core building block of secure multi-party computation protocols.
Ran Cohen, Pouyan Forghani, Juan Garay, Rutvik Patel, Vassilis Zikas
Simplex Consensus: A Simple and Fast Consensus Protocol
Abstract
We present a theoretical framework for analyzing the efficiency of consensus protocols, and apply it to analyze the optimistic and pessimistic confirmation times of state-of-the-art partially-synchronous protocols in the so-called “rotating leader/random leader” model of consensus (recently popularized in the blockchain setting).
We next present a new and simple consensus protocol in the partially synchronous setting, tolerating \(f < n/3\) Byzantine faults. In our eyes, this protocol is essentially as simple to describe as the simplest known protocols, but it also enjoys an even simpler security proof, while matching, and even improving upon, the efficiency of the state-of-the-art (according to our theoretical framework).
As with the state-of-the-art protocols, our protocol assumes a (bare) PKI, a digital signature scheme, collision-resistant hash functions, and a random leader election oracle, which may be instantiated with a random oracle (or a CRS).
Benjamin Y. Chan, Rafael Pass
Agile Cryptography: A Universally Composable Approach
Abstract
Being capable of updating cryptographic algorithms is an inevitable and essential practice in cryptographic engineering. This cryptographic agility, as it has been called, is a fundamental desideratum for long-term cryptographic system security that still poses significant challenges from a modeling perspective. For instance, current formulations of agility fail to express the fundamental security that is expected to stem from timely implementation updates, namely the fact that the system retains some of its security properties provided that the update is performed before the deprecated implementation becomes exploited.
In this work we put forth a novel framework for expressing updateability in the context of cryptographic primitives within the universal composition model. Our updatable ideal functionality framework provides a general template for expressing the security we expect from cryptographic agility, capturing in a fine-grained manner all the properties that can be retained across implementation updates. We exemplify our framework over two basic cryptographic primitives, digital signatures and non-interactive zero-knowledge (NIZK), where we demonstrate how to achieve updateability with consistency and backwards-compatibility across updates in a composable manner. We also illustrate how our notion is a continuation of the much broader concept of agility introduced by Acar, Belenkiy, Bellare, and Cash at Eurocrypt 2010 in the context of symmetric cryptographic primitives.
Christian Badertscher, Michele Ciampi, Aggelos Kiayias
Composable Long-Term Security with Rewinding
Abstract
We circumvent known impossibility results for composable long-term security with new techniques, enabling rewinding-based simulation in a way that achieves universal composability. This allows us to construct a long-term-secure composable commitment scheme in the CRS-hybrid model, which is provably impossible in the notion of Müller-Quade and Unruh. We base our construction on a statistically hiding commitment scheme in the CRS-hybrid model with CCA-like properties. To provide a CCA oracle, we cannot rely on super-polynomial extraction techniques and instead extract the value committed to via rewinding. To this end, we incorporate rewinding-based commitment extraction into the UC framework via a helper, in analogy to Canetti, Lin and Pass (FOCS 2010), allowing both the adversary and the environment to extract statistically hiding commitments.
Our new framework provides the first setting in which a commitment scheme that is both statistically hiding and universally composable can be constructed from standard polynomial-time hardness assumptions and a CRS only. We also prove that our CCA oracle is k-robust extractable. This asserts that extraction is possible without rewinding a concurrently executed k-round protocol. Consequently any k-round (standard) UC-secure protocol remains secure in the presence of our helper.
Finally, we prove that building long-term-secure oblivious transfer (and thus general two-party computations) from long-term-revealing setups remains impossible in our setting. Still, our long-term-secure commitment scheme suffices for natural applications, such as long-term secure and composable (commit-and-prove) zero-knowledge arguments of knowledge.
Robin Berger, Brandon Broadnax, Michael Klooß, Jeremias Mechler, Jörn Müller-Quade, Astrid Ottenhues, Markus Raiber
Backmatter
Metadata
Title
Theory of Cryptography
edited by
Guy Rothblum
Hoeteck Wee
Copyright Year
2023
Electronic ISBN
978-3-031-48624-1
Print ISBN
978-3-031-48623-4
DOI
https://doi.org/10.1007/978-3-031-48624-1
