
## About This Book

The two-volume proceedings set LNCS 12710 and 12711 constitutes the proceedings of the 24th IACR International Conference on Practice and Theory of Public Key Cryptography, PKC 2021, which was held online during May 10-13, 2021. The conference was originally planned to take place in Edinburgh, UK, but had to change to an online format due to the COVID-19 pandemic.

The 52 papers included in these proceedings were carefully reviewed and selected from 156 submissions. They focus on all aspects of public-key cryptography, covering theory, implementations, and applications. This year, post-quantum cryptography (PQC constructions and cryptanalysis) received special attention.

## Table of Contents

### More Efficient Digital Signatures with Tight Multi-user Security

Abstract
We construct the currently most efficient signature schemes with tight multi-user security against adaptive corruptions. Ours is the first generic construction of such schemes, based on lossy identification schemes (Abdalla et al., JoC 2016), and the first to achieve strong existential unforgeability. It also has significantly more compact signatures than the previously most efficient construction by Gjøsteen and Jager (CRYPTO 2018). When instantiated based on the decisional Diffie–Hellman assumption, a signature consists of only three exponents.
We propose a new variant of the generic construction of signatures from sequential OR-proofs by Abe, Ohkubo, and Suzuki (ASIACRYPT 2002) and Fischlin, Harasser, and Janson (EUROCRYPT 2020). In comparison to Fischlin et al., who focus on constructing signatures in the non-programmable random oracle model (NPROM), we aim to achieve tight security against adaptive corruptions, maximize efficiency, and to directly achieve strong existential unforgeability (also in the NPROM). This yields a slightly different construction and we use slightly different and additional properties of the lossy identification scheme.
Signatures with tight multi-user security against adaptive corruptions are a commonly-used standard building block for tightly-secure authenticated key exchange protocols. We also show how our construction improves the efficiency of all existing tightly-secure AKE protocols.
Denis Diemert, Kai Gellert, Tibor Jager, Lin Lyu

### Multiparty Cardinality Testing for Threshold Private Intersection

Abstract
Threshold Private Set Intersection (PSI) allows multiple parties to compute the intersection of their input sets if and only if the intersection is larger than $$n-t$$, where n is the size of each set and t is some threshold. The main appeal of this primitive is that, in contrast to standard PSI, known upper bounds on the communication complexity depend only on the threshold t and not on the sizes of the input sets. Current threshold PSI protocols consist of two components: a Cardinality Testing phase, where parties decide if the intersection is larger than some threshold, and a PSI phase, where the intersection is computed. The former is the main source of inefficiency in threshold PSI.
In this work, we present a new Cardinality Testing protocol that allows N parties to check if the intersection of their input sets is larger than $$n-t$$. The protocol incurs $$\tilde{ \mathcal {O}} (Nt^2)$$ communication complexity. We thus obtain a Threshold PSI scheme for N parties with communication complexity $$\tilde{\mathcal {O}}(Nt^2)$$.
Pedro Branco, Nico Döttling, Sihang Pu

### Verifiable Random Functions with Optimal Tightness

Abstract
Verifiable random functions (VRFs), introduced by Micali, Rabin and Vadhan (FOCS’99), are the public-key equivalent of pseudorandom functions. A public verification key and proofs accompanying the output enable all parties to verify the correctness of the output. However, all known standard model VRFs have a reduction loss that is much worse than what one would expect from known optimal constructions of closely related primitives like unique signatures. We show that:
1. Every security proof for a VRF that relies on a non-interactive assumption has to lose a factor of Q, where Q is the number of adversarial queries. To that end, we extend the meta-reduction technique of Bader et al. (EUROCRYPT’16) to also cover VRFs.
2. This raises the question: is this bound optimal? We answer it in the affirmative by presenting the first VRF with a reduction from the non-interactive qDBDHI assumption to the security of the VRF that achieves this optimal loss.

We thus paint a complete picture of the achievability of tight verifiable random functions: We show that a security loss of Q is unavoidable and present the first construction that achieves this bound.
David Niehues

### A Geometric Approach to Homomorphic Secret Sharing

Abstract
An $$(n,m,t)$$-homomorphic secret sharing (HSS) scheme allows n clients to share their inputs across m servers, such that the inputs are hidden from any t colluding servers, and moreover the servers can evaluate functions over the inputs locally by mapping their input shares to compact output shares. Such compactness makes HSS a useful building block for communication-efficient secure multi-party computation (MPC).
In this work, we propose a simple compiler for HSS evaluating multivariate polynomials based on two building blocks: (1) homomorphic encryption for linear functions or low-degree polynomials, and (2) information-theoretic HSS for low-degree polynomials. Our compiler leverages the power of the first building block towards improving the parameters of the second.
We use our compiler to generalize and improve on the HSS scheme of Lai, Malavolta, and Schröder [ASIACRYPT’18], which is only efficient when the number of servers is at most logarithmic in the security parameter. In contrast, we obtain efficient schemes for polynomials of higher degrees and an arbitrary number of servers. This application of our general compiler extends techniques that were developed in the context of information-theoretic private information retrieval (Woodruff and Yekhanin [CCC’05]), which use partial derivatives and Hermite interpolation to support the computation of polynomials of higher degrees.
In addition to the above, we propose a new application of HSS to MPC with preprocessing. By pushing the computation of some HSS servers to a preprocessing phase, we obtain communication-efficient MPC protocols for low-degree polynomials that use fewer parties than previous protocols based on the same assumptions. The online communication of these protocols is linear in the input size, independently of the description size of the polynomial.
Yuval Ishai, Russell W. F. Lai, Giulio Malavolta
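The compactness at the heart of HSS is easiest to see in the trivial information-theoretic base case. Below is a minimal Python sketch of a 2-server additive scheme for linear functions only (an illustrative assumption; this is not the paper's compiler, which handles higher-degree polynomials):

```python
import secrets

P = 2**61 - 1  # prime modulus (an illustrative choice)

def share(x, m=2):
    """Additively share x across m servers: the shares sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(m - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def eval_linear(coeffs, input_shares):
    """Each server locally maps its input shares to ONE compact output
    share by evaluating the linear map sum_i c_i * x_i on them."""
    return sum(c * s for c, s in zip(coeffs, input_shares)) % P

def reconstruct(output_shares):
    return sum(output_shares) % P

# Two clients share inputs x and y across two servers.
x, y = 10, 20
x_sh, y_sh = share(x), share(y)
# Server j holds (x_sh[j], y_sh[j]) and evaluates f(x, y) = 3x + 5y locally.
out = [eval_linear([3, 5], (x_sh[j], y_sh[j])) for j in range(2)]
assert reconstruct(out) == 3 * x + 5 * y  # 130
```

Each output share is a single field element no matter how many inputs the servers hold; that compactness is what makes HSS useful for low-communication MPC.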

### Generic Negation of Pair Encodings

Abstract
Attribute-based encryption (ABE) is a cryptographic primitive which supports fine-grained access control on encrypted data, making it an appealing building block for many applications. Pair encodings (Attrapadung, EUROCRYPT 2014) are simple primitives that can be used for constructing fully secure ABE schemes associated with a predicate relative to the encoding. We propose a generic transformation that takes any pair encoding scheme (PES) for a predicate $$P$$ and produces a PES for its negated predicate $$\bar{P}$$. This construction finally solves a problem that had been open since 2015. Our techniques bring new insight into the expressivity and generality of PES and can be of independent interest. We also provide, to the best of our knowledge, the first pair encoding scheme for negated doubly spatial encryption (obtained with our transformation) and explore several other consequences of our results.
Miguel Ambrona

### On Selective-Opening Security of Deterministic Primitives

Abstract
Classically, selective-opening attack (SOA) has been studied for randomized primitives, like randomized encryption schemes and commitments. The study of SOA for deterministic primitives, which presents some unique challenges, was initiated by Bellare et al. (PKC 2015), who showed negative results. Subsequently, Hoang et al. (ASIACRYPT 2016) showed positive results in the non-programmable random oracle model. Here we show the first positive results for SOA security of deterministic primitives in the standard (RO devoid) model. Our results are:
• Any 2t-wise independent hash function is SOA secure for an unbounded number of “t-correlated” messages, meaning any group of up to t messages may be arbitrarily correlated.
• A construction of a deterministic encryption scheme with analogous security, combining a regular lossy trapdoor function with a 2t-wise independent hash function.
• The one-more-RSA problem of Bellare et al. (J. Cryptology 2003), which can be seen as a form of SOA, is hard under the $$\varPhi$$-Hiding Assumption with large enough encryption exponent.
Somewhat surprisingly, the last result yields the first proof of RSA-based Chaum’s blind signature scheme (CRYPTO 1982), albeit for large exponent e, based on a “standard” computational assumption. Notably, it avoids the impossibility result of Pass (STOC 2011) because lossiness of RSA endows the scheme with non-unique signatures.
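The 2t-wise independent hash functions in the first result can be instantiated by the classic random-polynomial construction; a Python sketch (an illustration of the standard family, not the paper's proof):

```python
import secrets

def sample_kwise_hash(k, p):
    """Sample h from a k-wise independent family over GF(p): a uniformly
    random polynomial of degree < k, h(x) = a_0 + a_1*x + ... mod p."""
    coeffs = [secrets.randbelow(p) for _ in range(k)]
    def h(x):
        acc = 0
        for a in reversed(coeffs):  # Horner evaluation
            acc = (acc * x + a) % p
        return acc
    return h

p = 2**61 - 1                     # field size (an illustrative choice)
t = 3
h = sample_kwise_hash(2 * t, p)   # a 2t-wise independent function
# Over the random choice of h, the outputs on any fixed set of up to
# 2t = 6 distinct inputs are uniform and independent.
outputs = [h(x) for x in range(6)]
assert all(0 <= y < p for y in outputs)
```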

### Revisiting (R)CCA Security and Replay Protection

Abstract
This paper takes a fresh approach to systematically characterizing, comparing, and understanding CCA-type security definitions for public-key encryption (PKE), a topic with a long history. The justification for a concrete security definition X is relative to a benchmark application (e.g. confidential communication): Does the use of a PKE scheme satisfying X imply the security of the application? Because unnecessarily strong definitions may lead to unnecessarily inefficient schemes or unnecessarily strong computational assumptions, security definitions should be as weak as possible, i.e. as close as possible to (but above) the benchmark. Understanding the hierarchy of security definitions, partially ordered by the implication (i.e. at least as strong) relation, is hence important, as is placing the relevant applications as benchmark levels within the hierarchy.
CCA-2 security is apparently the strongest notion, but because it is arguably too strong, Canetti, Krawczyk, and Nielsen (Crypto 2003) proposed the relaxed notions of Replayable CCA security (RCCA) as perhaps the weakest meaningful definition, and they investigated the space between CCA and RCCA security by proposing two versions of Detectable RCCA (d-RCCA) security which are meant to ensure that replays of ciphertexts are either publicly or secretly detectable (and hence preventable).
The contributions of this paper are three-fold. First, following the work of Coretti, Maurer, and Tackmann (Asiacrypt 2013), we formalize the three benchmark applications of PKE that serve as the natural motivation for security notions, namely the construction of certain types of (possibly replay-protected) confidential channels (from an insecure and an authenticated communication channel). Second, we prove that RCCA does not achieve the confidentiality benchmark and, contrary to previous belief, that the proposed d-RCCA notions are not even relaxations of CCA-2 security. Third, we propose the natural security notions corresponding to the three benchmarks: an appropriately strengthened version of RCCA to ensure confidentiality, as well as two notions for capturing public and secret replay detectability.
Christian Badertscher, Ueli Maurer, Christopher Portmann, Guilherme Rito

### Single-to-Multi-theorem Transformations for Non-interactive Statistical Zero-Knowledge

Abstract
Non-interactive zero-knowledge proofs or arguments allow a prover to show validity of a statement without further interaction. For non-trivial statements such protocols require a setup assumption in form of a common random or reference string (CRS). Generally, the CRS can only be used for one statement (single-theorem zero-knowledge) such that a fresh CRS would need to be generated for each proof. Fortunately, Feige, Lapidot and Shamir (FOCS 1990) presented a transformation for any non-interactive zero-knowledge proof system that allows the CRS to be reused any polynomial number of times (multi-theorem zero-knowledge). This FLS transformation, however, is only known to work for either computational zero-knowledge or requires a structured, non-uniform common reference string.
In this paper we present FLS-like transformations that work for non-interactive statistical zero-knowledge arguments in the common random string model. They allow going from single-theorem to multi-theorem zero-knowledge and also preserve soundness, for both properties in the adaptive and non-adaptive case. Our first transformation is based on the general assumption that one-way permutations exist, while our second transformation uses lattice-based assumptions. Additionally, we define different possible soundness notions for non-interactive arguments and discuss their relationships.
Marc Fischlin, Felix Rohrbach

### On the CCA Compatibility of Public-Key Infrastructure

Abstract
In this work, we put forth the notion of compatibility of any key generation or setup algorithm. We focus on the specific case of encryption, and say that a key generation algorithm $$\mathsf {KeyGen}$$ is $$\mathsf {X}\text {-compatible}$$ (for $$\mathsf {X} \in \{\mathsf {CPA},\mathsf {CCA1},\mathsf {CCA2}\}$$) if there exist encryption and decryption algorithms that together with $$\mathsf {KeyGen}$$, result in an $$\mathsf {X}$$-secure public-key encryption scheme.
We study the following question: Is every $$\mathsf {CPA}\text {-compatible}$$ key generation algorithm also $$\mathsf {CCA}\text {-compatible}$$? We obtain the following answers:
• Every sub-exponentially $$\mathsf {CPA}\text {-compatible}$$ $$\mathsf {KeyGen}$$ algorithm is $$\mathsf {CCA1}\text {-compatible}$$, assuming the existence of hinting PRGs and sub-exponentially secure keyless collision resistant hash functions.
• Every sub-exponentially $$\mathsf {CPA}\text {-compatible}$$ $$\mathsf {KeyGen}$$ algorithm is also $$\mathsf {CCA2}\text {-compatible}$$, assuming the existence of non-interactive CCA2 secure commitments, in addition to sub-exponential security of the assumptions listed in the previous bullet.
Here, sub-exponentially $$\mathsf {CPA}\text {-compatible}$$ $$\mathsf {KeyGen}$$ refers to any key generation algorithm for which there exist encryption and decryption algorithms that result in a $$\mathsf {CPA}$$-secure public-key encryption scheme against sub-exponential adversaries.
This gives a way to perform CCA secure encryption given any public key infrastructure that has been established with only (sub-exponential) CPA security in mind. The resulting CCA encryption makes black-box use of the CPA scheme and all other underlying primitives.
Dakshita Khurana, Brent Waters

### Round-Optimal Verifiable Oblivious Pseudorandom Functions from Ideal Lattices

Abstract
Verifiable Oblivious Pseudorandom Functions (VOPRFs) are protocols that allow a client to learn verifiable pseudorandom function (PRF) evaluations on inputs of their choice. The PRF evaluations are computed by a server using their own secret key. The security of the protocol prevents both the server from learning anything about the client’s input, and likewise the client from learning anything about the server’s key. VOPRFs have many applications including password-based authentication, secret-sharing, anonymous authentication and efficient private set intersection. In this work, we construct the first round-optimal (online) VOPRF protocol that retains security from well-known subexponential lattice hardness assumptions. Our protocol requires constructions of non-interactive zero-knowledge arguments of knowledge (NIZKAoK). Using recent developments in the area of post-quantum zero-knowledge arguments of knowledge, we show that our VOPRF may be securely instantiated in the quantum random oracle model. We construct such arguments as extensions of prior work in the area of lattice-based zero-knowledge proof systems.
Martin R. Albrecht, Alex Davidson, Amit Deo, Nigel P. Smart

### BETA: Biometric-Enabled Threshold Authentication

Abstract
In the past decades, user authentication has been dominated by server-side password-based solutions that rely on “what users know”. This approach is susceptible to breaches and phishing attacks, and poses usability challenges. As a result, the industry is gradually moving to biometric-based client-side solutions that do not store any secret information on servers. This shift necessitates the safe storage of biometric templates and private keys, which are used to generate tokens, on user devices.
We propose a new generic framework called Biometric Enabled Threshold Authentication (BETA) to protect sensitive client-side information like biometric templates and cryptographic keys. Towards this, we formally introduce the notion of Fuzzy Threshold Tokenizer ($$\text {FTT}$$) where an initiator can use a “close” biometric measurement to generate an authentication token if at least t (the threshold) devices participate. We require that the devices only talk to the initiator, and not to each other, to capture the way user devices are connected in the real world. We use the universal composability (UC) framework to model the security properties of $$\text {FTT}$$, including the unforgeability of tokens and the privacy of the biometric values (template and measurement), under a malicious adversary. We construct three protocols that meet our definition.
Our first two protocols are general feasibility results that work for any distance function, any threshold t and tolerate the maximal (i.e. $$t-1$$) amount of corruption. They are based on any two round UC-secure multi-party computation protocol in the standard model (with a CRS) and threshold fully homomorphic encryption, respectively. We show how to effectively use these primitives to build protocols in a constrained communication model with just four rounds of communication.
For the third protocol, we consider inner-product based distance metrics (cosine similarity, Euclidean distance, etc.) specifically, motivated by the recent interest in its use for face recognition. We use Paillier encryption, efficient NIZKs for specific languages, and a simple garbled circuit to build an efficient protocol for the common case of $$n=3$$ devices with one compromised.
Shashank Agrawal, Saikrishna Badrinarayanan, Payman Mohassel, Pratyay Mukherjee, Sikhar Patranabis
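The third protocol above relies on the additive homomorphism of Paillier encryption. A toy Python sketch of that homomorphism, using deliberately tiny primes (an illustrative assumption; real deployments use 2048-bit moduli and hardened implementations):

```python
import secrets
from math import gcd

# Toy Paillier parameters: tiny primes, for illustration only.
p_, q_ = 104729, 104723
n = p_ * q_
n2 = n * n
lam = (p_ - 1) * (q_ - 1) // gcd(p_ - 1, q_ - 1)  # lcm(p-1, q-1)
g = n + 1

def enc(m):
    """Enc(m) = g^m * r^n mod n^2 for random r coprime to n."""
    while True:
        r = secrets.randbelow(n)
        if r and gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    L = (pow(c, lam, n2) - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (L * mu) % n

c1, c2 = enc(41), enc(1)
assert dec((c1 * c2) % n2) == 42   # ciphertext product = plaintext sum
assert dec(pow(c1, 3, n2)) == 123  # exponentiation = scalar multiplication
```

Adding ciphertexts and multiplying by public scalars without decrypting is what lets the protocol compute inner-product distances on encrypted biometric templates.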

### Amortizing Multiplication Triples Across Conditionals
Abstract
A classic approach to MPC uses preprocessed multiplication triples to evaluate arbitrary Boolean circuits. If the target circuit features conditional branching, e.g. as the result of an IF program statement, then triples are wasted: one triple is consumed per $$\mathtt {AND}$$ gate, even if the output of the gate is entirely discarded by the circuit’s conditional behavior.
In this work, we show that multiplication triples can be re-used across conditional branches. For a circuit with b branches, each having n $$\mathtt {AND}$$ gates, we need only a total of n triples, rather than the typically required $$b\cdot n$$. Because preprocessing triples is often the most expensive step in protocols that use them, this significantly improves performance.
Prior work similarly amortized oblivious transfers across branches in the classic GMW protocol (Heath et al., Asiacrypt 2020, [HKP20]). In addition to demonstrating that conditional improvements are possible for a different class of protocols, we also concretely improve over [HKP20]: their maximum improvement is bounded by the topology of the circuit. Our protocol yields improvement independent of topology: we need triples proportional to the size of the program’s longest execution path, regardless of the structure of the program branches.
We implemented our approach in C++. Our experiments show that we significantly improve over a “naïve” protocol and over prior work: for a circuit with 16 branches and in terms of total communication, we improved over naïve by $$12\times$$ and over [HKP20] by an average of $$2.6\times$$.
Our protocol is secure against the semi-honest corruption of $$p-1$$ parties.
David Heath, Vladimir Kolesnikov, Stanislav Peceny
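For context, a multiplication triple is consumed as in the classic Beaver protocol. The following Python sketch shows the trick for two semi-honest parties over a prime field (an illustrative assumption; the paper works with Boolean circuits, where the field is GF(2)):

```python
import secrets

P = 2**61 - 1  # prime field (an illustrative choice)

def share(v):
    """2-party additive secret sharing of v."""
    r = secrets.randbelow(P)
    return r, (v - r) % P

def beaver_mul(x_sh, y_sh, triple_sh):
    """Multiply secret-shared x and y using one preprocessed triple
    (a, b, c) with c = a*b: the parties open d = x - a and e = y - b,
    then each computes its share of x*y = c + d*b + e*a + d*e locally."""
    (x0, x1), (y0, y1) = x_sh, y_sh
    (a0, a1), (b0, b1), (c0, c1) = triple_sh
    d = (x0 - a0 + x1 - a1) % P   # opened value x - a (a masks x)
    e = (y0 - b0 + y1 - b1) % P   # opened value y - b
    z0 = (c0 + d * b0 + e * a0 + d * e) % P  # party 0 adds the public d*e
    z1 = (c1 + d * b1 + e * a1) % P
    return z0, z1

# Preprocessing: one random triple per multiplication (AND) gate.
a, b = secrets.randbelow(P), secrets.randbelow(P)
triple = (share(a), share(b), share(a * b % P))

x, y = 7, 9
z0, z1 = beaver_mul(share(x), share(y), triple)
assert (z0 + z1) % P == x * y  # 63
```

One triple is consumed per multiplication; the paper's contribution is that across b conditional branches of n AND gates each, n triples suffice instead of b·n.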

### Multi-party Threshold Private Set Intersection with Sublinear Communication

Abstract
In multi-party threshold private set intersection (PSI), n parties each with a private set wish to compute the intersection of their sets if the intersection is sufficiently large. Previously, Ghosh and Simkin (CRYPTO 2019) studied this problem for the two-party case and demonstrated interesting lower and upper bounds on the communication complexity. In this work, we investigate the communication complexity of the multi-party setting $$(n\ge 2)$$. We consider two functionalities for multi-party threshold PSI. In the first, parties learn the intersection if each of their sets and the intersection differ by at most T. In the second functionality, parties learn the intersection if the union of all their sets and the intersection differ by at most T.
For both functionalities, we show that any protocol must have communication complexity $$\varOmega (nT)$$. We build protocols with a matching upper bound of O(nT) communication complexity for both functionalities assuming threshold FHE. We also construct a computationally more efficient protocol for the second functionality with communication complexity $$\widetilde{O}(nT)$$ under a weaker assumption of threshold additive homomorphic encryption. As a direct implication, we solve one of the open problems in the work of Ghosh and Simkin (CRYPTO 2019) by designing a two-party protocol with communication cost $$\widetilde{O}(T)$$ from assumptions weaker than FHE.
As a consequence of our results, we achieve the first “regular” multi-party PSI protocol where the communication complexity only grows with the size of the set difference and does not depend on the size of the input sets.
Saikrishna Badrinarayanan, Peihan Miao, Srinivasan Raghuraman, Peter Rindal
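The two functionalities above can be pinned down as ideal (non-private) computations; a Python sketch, with plain set operations standing in for the secure protocol:

```python
def threshold_psi_v1(sets, T):
    """Functionality 1: reveal the intersection iff every party's set
    differs from the intersection by at most T elements."""
    inter = set.intersection(*sets)
    return inter if all(len(s - inter) <= T for s in sets) else None

def threshold_psi_v2(sets, T):
    """Functionality 2: reveal the intersection iff the union of all
    sets differs from the intersection by at most T elements."""
    inter = set.intersection(*sets)
    return inter if len(set.union(*sets) - inter) <= T else None

parties = [{1, 2, 3, 4}, {2, 3, 4, 5}, {3, 4, 5, 6}]
assert threshold_psi_v1(parties, 2) == {3, 4}
assert threshold_psi_v2(parties, 3) is None   # |union - intersection| = 4
assert threshold_psi_v2(parties, 4) == {3, 4}
```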

### On the (In)Security of the Diffie-Hellman Oblivious PRF with Multiplicative Blinding

Abstract
Oblivious Pseudorandom Function (OPRF) is a protocol between a client holding input x and a server holding key k for a PRF F. At the end, the client learns $$F_k(x)$$ and nothing else while the server learns nothing. OPRFs have found diverse applications as components of larger protocols, and the currently most efficient instantiation, with security proven in the UC model, is $$F_k(x)=H_2(x,(H_1(x))^k)$$ computed using so-called exponential blinding, i.e. the client sends $$a=(H_1(x))^r$$ for random r, the server responds $$b=a^k$$, which the client unblinds as $$v=b^{1/r}$$ to compute $$F_k(x)=H_2(x,v)$$.
However, this protocol requires two variable-base exponentiations on the client, while a more efficient multiplicative blinding scheme replaces one or both client exponentiations with fixed-base exponentiations, decreasing the client’s computational cost by a factor between two and six, depending on pre-computation.
We analyze the security of the above OPRF with multiplicative blinding, showing surprising weaknesses that offer attack avenues which are not present using exponential blinding. We characterize the security of this OPRF implementation as a “Correlated OPRF” functionality, a relaxation of UC OPRF functionality used in prior work.
On the positive side, we show that the Correlated OPRF suffices for the security of OPAQUE, the asymmetric PAKE protocol, hence allowing OPAQUE the computational advantages of multiplicative blinding. Unfortunately, we also show examples of other OPRF applications which become insecure when using such blinding. The conclusion is that the use of multiplicative blinding for $$F_k(x)$$ as defined above, in settings where the correct value $$g^k$$ (needed for multiplicative blinding) is not authenticated and OPRF inputs are of low entropy, must be carefully analyzed or avoided altogether. We complete the picture by showing a simple and safe alternative definition of the function $$F_k(x)$$ which offers (full) UC OPRF security using either form of blinding.
Stanisław Jarecki, Hugo Krawczyk, Jiayu Xu
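The two blinding modes compared above can be made concrete. A toy Python sketch over the quadratic-residue subgroup of a small safe prime (illustrative parameters only, far below secure sizes); note how unblinding in the multiplicative mode requires the value $$z = g^k$$, which is exactly the authenticity issue the abstract highlights:

```python
import hashlib, secrets

# Toy group: the order-q subgroup of quadratic residues mod a safe prime.
p = 23                 # safe prime: q = (p - 1) // 2 = 11 is prime
q = (p - 1) // 2
g = 4                  # generator of the order-q subgroup

def H1(x: bytes) -> int:
    """Hash into the subgroup by squaring mod p."""
    d = int.from_bytes(hashlib.sha256(x).digest(), "big")
    return pow(2 + d % (p - 3), 2, p)

def H2(x: bytes, v: int) -> bytes:
    return hashlib.sha256(x + v.to_bytes(32, "big")).digest()

k = secrets.randbelow(q - 1) + 1   # server's PRF key
x = b"low-entropy password"

# Exponential blinding: a = H1(x)^r, b = a^k, v = b^(1/r).
r = secrets.randbelow(q - 1) + 1
a = pow(H1(x), r, p)
b = pow(a, k, p)                   # server's response
v = pow(b, pow(r, -1, q), p)       # unblind with r^-1 mod q
assert v == pow(H1(x), k, p)

# Multiplicative blinding: a = H1(x) * g^r; unblinding needs z = g^k,
# which must be authentic (the issue the abstract analyzes).
z = pow(g, k, p)
a2 = (H1(x) * pow(g, r, p)) % p
b2 = pow(a2, k, p)
v2 = (b2 * pow(z, q - r, p)) % p   # multiply by z^(-r) = g^(-k*r)
assert v2 == v

out = H2(x, v)                     # the PRF value F_k(x)
```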

### An Efficient and Generic Construction for Signal’s Handshake (X3DH): Post-Quantum, State Leakage Secure, and Deniable

Abstract
The Signal protocol is a secure instant messaging protocol that underlies the security of numerous applications such as WhatsApp, Skype, and Facebook Messenger, among many others. The Signal protocol consists of two sub-protocols known as the X3DH protocol and the double ratchet protocol, where the latter has recently gained much attention. For instance, Alwen, Coretti, and Dodis (Eurocrypt’19) provided a concrete security model along with a generic construction based on simple building blocks that are instantiable from versatile assumptions, including post-quantum ones. In contrast, as far as we are aware, works focusing on the X3DH protocol seem limited.
In this work, we cast the X3DH protocol as a specific type of authenticated key exchange (AKE) protocol, which we call a Signal-conforming AKE protocol, and formally define its security model based on the vast prior work on AKE protocols. We then provide the first efficient generic construction of a Signal-conforming AKE protocol based on standard cryptographic primitives such as key encapsulation mechanisms (KEM) and signature schemes. Specifically, this results in the first post-quantum secure replacement of the X3DH protocol on well-established assumptions. Similar to the X3DH protocol, our Signal-conforming AKE protocol offers a strong (or stronger) flavor of security, where the exchanged key remains secure even when all the non-trivial combinations of the long-term secrets and session-specific secrets are compromised. Moreover, our protocol has a weak flavor of deniability and we further show how to strengthen it using ring signatures. Finally, we provide a full-fledged, generic C implementation of our (weakly deniable) protocol. We instantiate it with several Round 3 candidates (finalists and alternates) to the NIST post-quantum standardization process and compare the resulting bandwidth and computation performances. Our implementation is publicly available.
Keitaro Hashimoto, Shuichi Katsumata, Kris Kwiatkowski, Thomas Prest

### Cryptographic Pseudorandom Generators Can Make Cryptosystems Problematic

Abstract
Randomness is an essential resource for cryptography. For practical randomness generation, the security notion of pseudorandom generators (PRGs) is intended to automatically preserve the (computational) security of cryptosystems when PRGs are used in implementations. Nevertheless, opposite cases, such as computational randomness extractors (Barak et al., CRYPTO 2011), are known (though not yet systematically studied) in which security can be lost even by applying secure PRGs. The present paper aims to push forward the observation and understanding of this phenomenon; we reveal such situations at the layers of primitives and protocols as well, not just of building blocks like randomness extractors. We present three typical types of such cases: (1) adversaries can legally see the seed of the PRG (including the case of randomness extractors); (2) the set of “bad” randomness may not be efficiently recognizable; (3) the formulation of a desired property implicitly involves non-uniform distinguishers for PRGs. We point out that the semi-honest security of multiparty computation also belongs to Type 1, while correctness with negligible decryption error for public-key encryption belongs to Types 2 and 3. We construct examples for each type where a secure PRG (against uniform distinguishers only, for Type 3) does not preserve the security or correctness of the original scheme, and discuss some countermeasures to avoid such issues.
Koji Nuida

### Publicly Verifiable Zero Knowledge from (Collapsing) Blockchains

Abstract
Publicly verifiable zero-knowledge proofs are known to exist only from setup assumptions such as a trusted common reference string or a random oracle. Unfortunately, the former requires a trusted party while the latter does not exist.
Blockchains are distributed systems that already exist and provide certain security properties (under some honest-majority assumption). A natural recent research direction has therefore been to use a blockchain as an alternative setup assumption.
In TCC 2017 Goyal and Goyal proposed a construction of a publicly verifiable zero-knowledge (pvZK) proof system for some proof-of-stake blockchains. The zero-knowledge property of their construction however relies on some additional and not fully specified assumptions about the current and future behavior of honest blockchain players.
In this paper we provide several contributions. First, we show that when using a blockchain to design a provably secure protocol, it is dangerous to rely on demanding additional requirements on behaviors of the blockchain players. We do so by showing an “attack of the clones” whereby a malicious verifier can use a smart contract to slyly (not through bribing) clone capabilities of honest stakeholders and use those to invalidate the zero-knowledge property of the proof system by Goyal and Goyal.
Second, we propose a new publicly verifiable zero-knowledge proof system that relies on non-interactive commitments and on an assumption on the min-entropy of some blocks appearing on the blockchain.
Third, motivated by the fact that blockchains are a recent innovation and their resilience in the long run is still controversial, we introduce the concept of collapsing blockchain, and we prove that the zero-knowledge property of our scheme holds even if the blockchain eventually becomes insecure and all blockchain players eventually become dishonest.
Alessandra Scafuro, Luisa Siniscalchi, Ivan Visconti

### Two-Server Distributed ORAM with Sublinear Computation and Constant Rounds

Abstract
Distributed ORAM (DORAM) is a multi-server variant of Oblivious RAM. Originally proposed to lower bandwidth, DORAM has recently been of great interest due to its applicability to secure computation in the RAM model, where circuit complexity and rounds of communication are equally important metrics of efficiency. All prior DORAM constructions either involve linear work per server (e.g., Floram) or logarithmic rounds of communication between servers (e.g., square root ORAM). In this work, we construct the first DORAM schemes in the 2-server, semi-honest setting that simultaneously achieve sublinear server computation and constant rounds of communication. We provide two constant-round constructions, one based on square root ORAM that has $$O(\sqrt{N}\log N)$$ local computation and another based on secure computation of a doubly efficient PIR that achieves local computation of $$O(N ^\epsilon )$$ for any $$\epsilon > 0$$ but that allows the servers to distinguish between reads and writes. As a building block in the latter construction, we provide secure computation protocols for evaluation and interpolation of multivariate polynomials based on the Fast Fourier Transform, which may be of independent interest.
Ariel Hamlin, Mayank Varia

### Flexible and Efficient Verifiable Computation on Encrypted Data

Abstract
We consider the problem of verifiable and private delegation of computation [Gennaro et al. CRYPTO’10] in which a client stores private data on an untrusted server and asks the server to compute functions over this data. In this scenario we aim to achieve three main properties: the server should not learn information on inputs and outputs of the computation (privacy), the server cannot return wrong results without being caught (integrity), and the client can verify the correctness of the outputs faster than running the computation (efficiency). A known paradigm to solve this problem is to use a (non-private) verifiable computation (VC) to prove correctness of a homomorphic encryption (HE) evaluation on the ciphertexts. Despite the research advances in obtaining efficient VC and HE, using these two primitives together in this paradigm is concretely expensive. Recent work [Fiore et al. CCS’14, PKC’20] addressed this problem by designing specialized VC solutions that however require the HE scheme to work with very specific parameters; notably HE ciphertexts must be over $$\mathbb {Z}_q$$ for a large prime q.
In this work we propose a new solution that allows a flexible choice of HE parameters, while staying modular (based on the paradigm combining VC and HE) and efficient (the VC and the HE schemes are both executed at their best efficiency). At the core of our new protocol are new homomorphic hash functions for Galois rings. As an additional contribution we extend our results to support non-deterministic computations on encrypted data and an additional privacy property by which verifiers do not learn information on the inputs of the computation.
Alexandre Bois, Ignacio Cascudo, Dario Fiore, Dongwoo Kim

### Transferable E-Cash: A Cleaner Model and the First Practical Instantiation

Abstract
Transferable e-cash is the most faithful digital analog of physical cash, as it allows users to transfer coins among themselves in isolation, that is, without interacting with a bank or a “ledger”. Appropriately protecting user privacy while, at the same time, providing means to trace fraudulent behavior (double-spending of coins) has made instantiating the concept notoriously hard. Baldimtsi et al. (PKC’15) gave a first instantiation, but, as it relies on a powerful cryptographic primitive, the scheme is not practical. We also point out a flaw in their scheme.
In this paper we revisit the model for transferable e-cash and propose simpler yet stronger security definitions. We then provide the first concrete construction, based on bilinear groups, give rigorous proofs that it satisfies our model, and analyze its efficiency in detail.
Balthazar Bauer, Georg Fuchsbauer, Chen Qian

### Private Set Operations from Oblivious Switching

Abstract
Private set intersection reveals the intersection of two private sets, but many real-world applications require the parties to learn only partial information about the intersection. In this paper we introduce a new approach for computing arbitrary functions of the intersection, provided that it is safe to also reveal the cardinality of the intersection. In the most general case, our new protocol provides the participants with secret shares of the intersection, which can be fed into any generic 2PC protocol. Certain computations on the intersection can also be done even more directly and efficiently, avoiding this secret-sharing step. These cases include computing only the cardinality of intersection, or the “cardinality-sum” application proposed in Ion et al. (ePrint 2017). Compared to the state-of-the-art protocol for computing on intersection (Pinkas et al., Eurocrypt 2019), our protocol has about $$2.5-3\times$$ less communication, and has faster running time on slower (50 Mbps) networks.
Our new techniques can also be used to privately compute the union of two sets as easily as computing the intersection. Our protocol concretely improves the leading private set union protocol (Kolesnikov et al., Asiacrypt 2020) by a factor of $$2-2.5\times$$, depending on the network speed. We then show how private set union can be used in a simple way to realize the “Private-ID” functionality suggested by Buddhavarapu et al. (ePrint 2020). Our protocol is significantly faster than the prior Private-ID protocol, especially on fast networks.
All of our protocols are in the two-party setting and are secure against semi-honest adversaries.
Gayathri Garimella, Payman Mohassel, Mike Rosulek, Saeed Sadeghian, Jaspal Singh

### On Publicly-Accountable Zero-Knowledge and Small Shuffle Arguments

Abstract
Constructing interactive zero-knowledge arguments from simple assumptions with small communication complexity and good computational efficiency is an important but difficult problem. In this work, we study interactive arguments with noticeable soundness error in their full generality and for the specific purpose of constructing concretely efficient shuffle arguments.
To counterbalance the effects of a larger soundness error, we show how to transform such three-move arguments into publicly-accountable ones which allow the verifier to convince third parties of detected misbehavior by a cheating prover. This may be particularly interesting for applications where a malicious prover has to balance the profits it can make from cheating successfully and the losses it suffers from being caught.
We construct interactive, public-coin, zero-knowledge arguments with noticeable soundness error for proving that a target vector of commitments is a pseudorandom permutation of a source vector. Our arguments do not rely on any trusted setup and only require the existence of collision-resistant hash functions. The communication complexity of our arguments is independent of the length of the shuffled vector. For a soundness error of $$2^{-5}=1/32$$, the communication cost is 153 bytes without and 992 bytes with public accountability, meaning that our arguments are shorter than shuffle arguments realized using Bulletproofs (IEEE S&P 2018) and even competitive in size with SNARKs, despite only relying on simple assumptions.
Nils Fleischhacker, Mark Simkin

### Beyond Security and Efficiency: On-Demand Ratcheting with Security Awareness

Abstract
Secure asynchronous two-party communication applies ratcheting to strengthen privacy in the presence of internal state exposures. Ratcheting provides security in two forms: forward security and post-compromise security. Several such secure protocols have been proposed in the last few years; however, they come at a high cost.
In this paper, we propose two generic constructions with favorable properties. Concretely, our first construction achieves security awareness: it allows users to detect non-persistent active attacks, to determine which messages are not safe given a potential leakage pattern, and to acknowledge deliveries.
In our second construction, we define a hybrid system formed by combining two protocols: typically, a weakly secure “light” protocol and a strongly secure “heavy” protocol. The design goals of our hybrid construction are, first, to let the sender decide which one to use in order to obtain an efficient protocol with on-demand ratcheting; and second, to restore the communication between honest participants after a message loss or an active attack.
We can apply our generic constructions to any existing protocol.
Andrea Caforio, F. Betül Durak, Serge Vaudenay

### Group Encryption: Full Dynamicity, Message Filtering and Code-Based Instantiation

Abstract
Group encryption (GE), introduced by Kiayias, Tsiounis and Yung (Asiacrypt’07), is the encryption analogue of group signatures. It allows sending verifiably encrypted messages satisfying certain requirements to certified members of a group, while keeping the receivers anonymous. Similar to the tracing mechanism in group signatures, the receiver of any ciphertext can be identified by an opening authority, should the need arise. GE is motivated by a number of interesting privacy-preserving applications, including the filtering of encrypted emails sent to certified members of an organization.
This paper aims to improve the state of affairs of GE systems. Our first contribution is the formalization of fully dynamic group encryption (FDGE) - a GE system simultaneously supporting dynamic user enrolments and user revocations, the latter of which had not previously been considered for GE. As a second contribution, we realize the message-filtering feature for GE based on a list of t-bit keywords and two commonly used policies: “permissive” - accept the message if it contains at least one of the keywords as a substring; “prohibitive” - accept the message if all of its t-bit substrings are at Hamming distance at least d from all keywords, for $$d \ge 1$$. This feature had not been substantially addressed in existing instantiations of GE based on DCR, DDH, pairing-based and lattice-based assumptions. Our third contribution is the first instantiation of GE under code-based assumptions. The scheme is more efficient than the lattice-based construction of Libert et al. (Asiacrypt’16), which, prior to our work, was the only known instantiation of GE under post-quantum assumptions. Our scheme supports the two suggested policies for message filtering and, in the random oracle model, satisfies the stringent security notions for FDGE that we put forward.
Khoa Nguyen, Reihaneh Safavi-Naini, Willy Susilo, Huaxiong Wang, Yanhong Xu, Neng Zeng
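The two filtering policies described above can be stated concretely. The following toy sketch checks a plaintext bit-string against a list of t-bit keywords, purely to illustrate the predicates; the actual scheme enforces them on verifiably encrypted messages, which this code does not model.

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit-strings."""
    return sum(x != y for x, y in zip(a, b))

def permissive(msg_bits, keywords):
    """Accept if the message contains at least one keyword as a substring."""
    return any(k in msg_bits for k in keywords)

def prohibitive(msg_bits, keywords, d):
    """Accept if every t-bit substring of the message is at Hamming
    distance at least d from every keyword."""
    t = len(keywords[0])
    return all(hamming(msg_bits[i:i + t], k) >= d
               for i in range(len(msg_bits) - t + 1)
               for k in keywords)

msg = "0110101"
kws = ["101", "000"]
assert permissive(msg, kws)           # "101" occurs as a substring
assert not prohibitive(msg, kws, 1)   # a window matches "101" exactly (distance 0)
assert prohibitive("1111111", ["000"], 2)  # every window "111" is at distance 3
```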

### Steel: Composable Hardware-Based Stateful and Randomised Functional Encryption

Abstract
Trusted execution environments (TEEs) enable secure execution of programs on untrusted hosts and cryptographically attest the correctness of outputs. As these are complex systems, it is essential to formally capture the exact security achieved by protocols employing TEEs, and ultimately, prove their security under composition, as TEEs are typically employed in multiple protocols, simultaneously.
Our contribution is twofold. On the one hand, we show that under existing definitions of attested execution setup, we can realise cryptographic functionalities that are unrealisable in the standard model. On the other hand, we extend the adversarial model to capture a broader class of realistic adversaries, demonstrate weaknesses of existing security definitions against this class, and propose stronger ones.
Specifically, we first define a generalization of Functional Encryption that captures Stateful and Randomised functionalities ($$\mathrm {FESR}$$). Then, assuming the ideal functionality for attested execution of Pass et al. (Eurocrypt 2017), we construct the associated protocol, $$\mathsf {Steel}$$, and we prove that $$\mathsf {Steel}$$ UC-realises $$\mathrm {FESR}$$ in the universal composition with global subroutines model of Badertscher et al. (TCC 2020). Our work also validates the compositionality of the $$\mathsf {Iron}$$ protocol by Fisch et al. (CCS 2017), which captures (non-stateful) hardware-based functional encryption.
As the existing functionality for attested execution of Pass et al. is too strong for real world use, we propose a weaker functionality that allows the adversary to conduct rollback and forking attacks. We demonstrate that $$\mathsf {Steel}$$ (realising stateful functionalities), contrary to the stateless variant corresponding to $$\mathsf {Iron}$$, is not secure in this setting and discuss possible mitigation techniques.
Pramod Bhatotia, Markulf Kohlweiss, Lorenzo Martinico, Yiannis Tselekounis

### Adventures in Crypto Dark Matter: Attacks and Fixes for Weak Pseudorandom Functions

Abstract
A weak pseudorandom function (weak PRF) is an important cryptographic primitive because of its efficiency, although it offers a weaker security guarantee than a standard PRF.
Recently, Boneh et al. (TCC’18) introduced two new weak-PRF candidates, called the basic and the alternative Mod-2/Mod-3 weak PRF. Both mix linear computations over different small moduli to achieve conceptual simplicity, low complexity (depth-2 $$\mathsf{ACC^0}$$) and MPC friendliness. The candidates are conjectured to be exponentially secure against any adversary given exponentially many samples, and the basic Mod-2/Mod-3 weak PRF is the only candidate satisfying all of the features above. However, no prior direct attack on the basic or alternative Mod-2/Mod-3 weak PRF exploits their specific structure.
In this paper, we investigate these weak PRFs from two perspectives: attacks and fixes. We first propose direct attacks on the alternative Mod-2/Mod-3 weak PRF, and on the basic Mod-2/Mod-3 weak PRF when a circulant matrix is used as the secret key.
For the alternative Mod-2/Mod-3 weak PRF, we prove that the adversary’s advantage is at least $$2^{-0.105n}$$, where n is the size of the input space of the weak PRF. Similarly, we show that the advantage of our heuristic attack on the weak PRF with a circulant matrix key is larger than $$2^{-0.21n}$$, contrary to the previous expectation that a structured secret key does not affect the security of a weak PRF. Thus, for the optimistic parameter choice $$n = 2\lambda$$, where $$\lambda$$ is the security parameter, parameters should be increased to preserve $$\lambda$$-bit security when an adversary obtains exponentially many samples.
Next, we suggest a simple method for repairing two weak PRFs affected by our attack while preserving the parameters.
Jung Hee Cheon, Wonhee Cho, Jeong Han Kim, Jiseung Kim
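The basic Mod-2/Mod-3 candidate attacked above has a simple description, which is what gives it its low depth and MPC friendliness. A minimal plaintext sketch, assuming the standard description from Boneh et al. (TCC'18): compute $$y = Ax$$ over $$\mathbb{Z}_2$$ for a secret binary matrix A, then output the sum of the bits of y modulo 3. This toy code is illustrative only and makes no security claims.

```python
import secrets

def mod2_mod3_prf(A, x):
    """Basic Mod-2/Mod-3 weak-PRF candidate (toy version):
    y = A.x over Z_2, then map y to the sum of its entries mod 3."""
    n = len(x)
    y = [sum(A[i][j] & x[j] for j in range(n)) % 2 for i in range(len(A))]
    return sum(y) % 3

# Toy parameters; real instantiations take n on the order of the
# security parameter (the paper's attacks concern e.g. circulant keys).
n = 8
A = [[secrets.randbelow(2) for _ in range(n)] for _ in range(n)]
x = [secrets.randbelow(2) for _ in range(n)]
out = mod2_mod3_prf(A, x)
assert out in (0, 1, 2)
```

With the identity matrix as (a deliberately degenerate) key, the output is simply the Hamming weight of x mod 3, e.g. `mod2_mod3_prf(I, [1,1,1,1]) == 1`.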

### Backmatter
