
About This Book

This volume constitutes the proceedings of the 11th International Conference on Post-Quantum Cryptography, PQCrypto 2020, held in Paris, France, in April 2020.
The 29 full papers presented in this volume were carefully reviewed and selected from 86 submissions. They cover a broad spectrum of research within the conference's scope, including code-, hash-, isogeny-, and lattice-based cryptography, multivariate cryptography, and quantum cryptanalysis.



Code-Based Cryptography


Randomized Decoding of Gabidulin Codes Beyond the Unique Decoding Radius

We address the problem of decoding Gabidulin codes beyond their unique error-correction radius. The complexity of this problem is important for assessing the security of some rank-metric code-based cryptosystems. We propose an approach that introduces row or column erasures to decrease the rank of the error in order to use any proper polynomial-time Gabidulin code error-erasure decoding algorithm. The expected work factor of this new randomized decoding approach is a polynomial term times \(q^{m(n-k)-w(n+m)+w^2+\min \{2\xi (\frac{n+k}{2}-\xi ),wk\} }\), where n is the code length, q the size of the base field, m the extension degree of the field, k the code dimension, w the number of errors, and \(\xi := w-\tfrac{n-k}{2}\). It improves upon generic rank-metric decoders by an exponential factor.
Julian Renner, Thomas Jerkovits, Hannes Bartz, Sven Puchinger, Pierre Loidreau, Antonia Wachter-Zeh
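The exponent of the work factor above can be evaluated numerically for candidate parameters. The following sketch does so; the parameter values in the example are illustrative and not taken from the paper:

```python
# Exponent E of the expected work factor q^E from the abstract:
#   E = m(n-k) - w(n+m) + w^2 + min{2*xi*((n+k)/2 - xi), w*k},
# with xi = w - (n-k)/2. Parameter values below are illustrative only.

def work_factor_exponent(m, n, k, w):
    xi = w - (n - k) / 2
    mixed = min(2 * xi * ((n + k) / 2 - xi), w * k)
    return m * (n - k) - w * (n + m) + w**2 + mixed

# Example: a rank-metric code with extension degree m = 64, length n = 64,
# dimension k = 32, and w = 20 errors (4 beyond the unique radius of 16).
print(work_factor_exponent(64, 64, 32, 20))  # exponent E of q^E
```

At the unique decoding radius itself (\(w = \tfrac{n-k}{2}\), so \(\xi = 0\)) the mixed term vanishes and the exponent reduces to \(m(n-k)-w(n+m)+w^2\).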

About Low DFR for QC-MDPC Decoding

McEliece-like code-based key exchange mechanisms using QC-MDPC codes can reach IND-CPA security under hardness assumptions from coding theory, namely quasi-cyclic syndrome decoding and quasi-cyclic codeword finding. To reach higher security requirements, like IND-CCA security, it is necessary in addition to prove that the decoding failure rate (DFR) is negligible, for some decoding algorithm and a proper choice of parameters. Getting a formal proof of a low DFR is a difficult task. Instead, we propose to ensure this low DFR under some additional security assumption on the decoder. This assumption relates to the asymptotic behavior of the decoder and is supported by several other works. We define a new decoder, Backflip, which features a low DFR. We evaluate the Backflip decoder by simulation and extrapolate its DFR under the decoder security assumption. We also measure the accuracy of our simulation data, in the form of confidence intervals, using standard techniques from communication systems.
Nicolas Sendrier, Valentin Vasseur

QC-MDPC Decoders with Several Shades of Gray

QC-MDPC code-based KEMs rely on decoders that have a small or even negligible Decoding Failure Rate (DFR). These decoders should be efficient and implementable in constant time. One example of a QC-MDPC KEM is the Round-2 candidate of the NIST PQC standardization project, “BIKE”. We have recently shown that the Black-Gray decoder achieves the required properties. In this paper, we define several new variants of the Black-Gray decoder. One of them, called Black-Gray-Flip, needs only 7 steps to achieve a smaller DFR than Black-Gray with 9 steps, for the same block size. On current platforms, our BIKE-1 (Level-1) constant-time decapsulation is \(1.9\times \) faster than the previous decapsulation with Black-Gray. We also report an additional \(1.25\times \) decapsulation speedup using new instructions available on the “Ice-Lake” micro-architecture.
Nir Drucker, Shay Gueron, Dusan Kostic


Implementation


Isochronous Gaussian Sampling: From Inception to Implementation

With Applications to the Falcon Signature Scheme
Gaussian sampling over the integers is a crucial tool in lattice-based cryptography, but has proven over recent years to be surprisingly challenging to perform in a generic, efficient and provably secure manner. In this work, we present a modular framework for generating discrete Gaussians with arbitrary center and standard deviation. Our framework is extremely simple, and it is precisely this simplicity that allowed us to make it easy to implement, provably secure, portable, efficient, and provably resistant against timing attacks. Our sampler is a good candidate for any trapdoor sampling and it is actually the one that has been recently implemented in the Falcon signature scheme. Our second contribution aims at systematizing the detection of implementation errors in Gaussian samplers. We provide a statistical testing suite for discrete Gaussians called SAGA (Statistically Acceptable GAussian). In a nutshell, our two contributions take a step towards trustable and robust real-world implementations of Gaussian sampling.
James Howe, Thomas Prest, Thomas Ricosset, Mélissa Rossi

Benchmarking Post-quantum Cryptography in TLS

Post-quantum cryptographic primitives have a range of trade-offs compared to traditional public key algorithms, either having slower computation or larger public keys and ciphertexts/signatures, or both. While the performance of these algorithms in isolation is easy to measure and has been a focus of optimization techniques, performance in realistic network conditions has been less studied. Google and Cloudflare have reported results from running experiments with post-quantum key exchange algorithms in the Transport Layer Security (TLS) protocol with real users’ network traffic. Such experiments are highly realistic, but cannot be replicated without access to Internet-scale infrastructure, and do not allow for isolating the effect of individual network characteristics.
In this work, we develop and make use of a framework for running such experiments in TLS cheaply by emulating network conditions using the networking features of the Linux kernel. Our testbed allows us to independently control variables such as link latency and packet loss rate, and then examine the performance impact of various post-quantum primitives on TLS connection establishment, specifically hybrid elliptic curve/post-quantum key exchange and post-quantum digital signatures, based on implementations from the Open Quantum Safe project. Among our key results, we observe that packet loss rates above 3–5% start to have a significant impact on post-quantum algorithms that fragment across many packets, such as those based on unstructured lattices. The results from this emulation framework are also complemented by results on the latency of loading entire web pages over TLS in real network conditions, which show that network latency hides most of the impact from algorithms with slower computations (such as supersingular isogenies).
Christian Paquin, Douglas Stebila, Goutam Tamvada
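Emulating link latency and loss with the Linux kernel is typically done through the `tc`/netem queueing discipline. The dry-run sketch below only builds the standard netem commands (the device name is illustrative, and the paper's framework may configure netem differently); applying them requires root on a Linux host:

```python
# Build the standard Linux `tc`/netem commands used to emulate link
# latency and packet loss -- the kernel feature this kind of testbed
# relies on. Dry-run sketch: it only constructs the commands; running
# them needs root and a real interface (device name is illustrative).
import shlex

def netem_commands(dev, delay_ms, loss_pct):
    add = f"tc qdisc add dev {dev} root netem delay {delay_ms}ms loss {loss_pct}%"
    delete = f"tc qdisc del dev {dev} root"
    return [shlex.split(add), shlex.split(delete)]

add_cmd, del_cmd = netem_commands("eth0", 50, 3)
print(" ".join(add_cmd))
# To apply for real: subprocess.run(add_cmd, check=True)  # needs root
```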

Efficient Key Generation for Rainbow

Multivariate Cryptography is one of the main candidates for securing communication in a post-quantum world. One of the most promising schemes from this area is the Rainbow signature scheme. While this scheme provides very fast signature generation and verification, the key generation process of Rainbow is relatively slow. In this paper, we propose an algorithm which speeds up the key generation process of the standard Rainbow signature scheme by up to two orders of magnitude, thus eliminating one of the few drawbacks of this scheme. Furthermore, we present an improved key generation algorithm for the CyclicRainbow signature scheme. This algorithm makes it possible to generate a key pair for CyclicRainbow in essentially the same time as a key pair for standard Rainbow, thus making CyclicRainbow a practical alternative to the standard scheme. Our algorithms are implemented in the Rainbow proposal for the second round of the NIST standardization process for post-quantum cryptosystems.
Albrecht Petzoldt

Isogeny-Based Cryptography


CSIDH on the Surface

For primes \(p \equiv 3 \bmod 4\), we show that setting up CSIDH on the surface, i.e., using supersingular elliptic curves with endomorphism ring \(\mathbf {Z}[(1 + \sqrt{-p})/2]\), amounts to just a few sign switches in the underlying arithmetic. If \(p \equiv 7 \bmod 8\) then horizontal 2-isogenies can be used to help compute the class group action. The formulas we derive for these 2-isogenies are very efficient (they basically amount to a single exponentiation in \(\mathbf {F}_p\)) and allow for a noticeable speed-up, e.g., our resulting CSURF-512 protocol runs about \(5.68\%\) faster than CSIDH-512. This improvement is completely orthogonal to all previous speed-ups, constant-time measures and construction of cryptographic primitives that have appeared in the literature so far. At the same time, moving to the surface gets rid of the redundant factor \(\mathbf {Z}_3\) of the acting ideal-class group, which is present in the case of CSIDH and offers no extra security.
Wouter Castryck, Thomas Decru

LegRoast: Efficient Post-quantum Signatures from the Legendre PRF

We introduce an efficient post-quantum signature scheme that relies on the one-wayness of the Legendre PRF. This “LEGendRe One-wAyness SignaTure” (LegRoast) builds upon the MPC-in-the-head technique to construct an efficient zero-knowledge proof, which is then turned into a signature scheme with the Fiat-Shamir transform. Unlike many other Fiat-Shamir signatures, the security of LegRoast can be proven without using the forking lemma, and this leads to a tight (classical) ROM proof. We also introduce a generalization that relies on the one-wayness of higher-power residue characters; the “POwer Residue ChaRacter One-wAyness SignaTure” (PorcRoast).
LegRoast outperforms existing MPC-in-the-head-based signatures (most notably Picnic/Picnic2) in terms of signature size and speed. Moreover, PorcRoast outperforms LegRoast by a factor of 2 in both signature size and signing time. For example, one of our parameter sets targeting NIST security level I results in a signature size of 7.2 KB and a signing time of 2.8ms. This makes PorcRoast the most efficient signature scheme based on symmetric primitives in terms of signature size and signing time.
Ward Beullens, Cyprien Delpech de Saint Guilhem
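The Legendre PRF underlying LegRoast is simple to state: key K and input x are mapped to a bit via the Legendre symbol of K + x modulo a prime p. A minimal sketch with a toy prime (LegRoast uses a cryptographically large p, and this is only the keyed function, not the signature scheme):

```python
# Toy sketch of the Legendre PRF: L_K(x) = 0 if K + x is a quadratic
# residue mod p, else 1. The tiny prime below is for illustration only.

def legendre_symbol(a, p):
    """Euler's criterion: a^((p-1)/2) mod p is 1 for residues, p-1 otherwise."""
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def legendre_prf(key, x, p):
    """One output bit of the Legendre PRF; the rare symbol-0 case maps to 0."""
    return 0 if legendre_symbol(key + x, p) >= 0 else 1

p = 101  # toy prime
key = 42
bits = [legendre_prf(key, x, p) for x in range(8)]
print(bits)
```

PorcRoast generalizes this by replacing the quadratic character \(a^{(p-1)/2}\) with a higher-power residue character, so each evaluation yields more than one bit.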

The Supersingular Isogeny Problem in Genus 2 and Beyond

Let \(A/\overline{\mathbb {F}}_p\) and \(A'/\overline{\mathbb {F}}_p\) be superspecial principally polarized abelian varieties of dimension \(g>1\). For any prime \(\ell \ne p\), we give an algorithm that finds a path \(\phi :A \rightarrow A'\) in the \((\ell , \dots , \ell )\)-isogeny graph in \(\widetilde{O}(p^{g-1})\) group operations on a classical computer, and \(\widetilde{O}(\sqrt{p^{g-1}})\) calls to the Grover oracle on a quantum computer. The idea is to find paths from A and \(A'\) to nodes that correspond to products of lower dimensional abelian varieties, and to recurse down in dimension until an elliptic path-finding algorithm (such as Delfs–Galbraith) can be invoked to connect the paths in dimension \(g=1\). In the general case where A and \(A'\) are any two nodes in the graph, this algorithm presents an asymptotic improvement over all of the algorithms in the current literature. In the special case where A and \(A'\) are a known and relatively small number of steps away from each other (as is the case in higher dimensional analogues of SIDH), it gives an asymptotic improvement over the quantum claw finding algorithms and an asymptotic improvement over the classical van Oorschot–Wiener algorithm.
Craig Costello, Benjamin Smith

Sashimi: Cutting up CSI-FiSh Secret Keys to Produce an Actively Secure Distributed Signing Protocol

We present the first actively secure variant of a distributed signature scheme based on isogenies. The protocol produces signatures from the recent CSI-FiSh signature scheme. Our scheme works for any access structure, as we use a replicated secret sharing scheme to define the underlying secret sharing; as such it is only practical when the number of maximally unqualified sets is relatively small. This, however, includes the important case of full threshold, and \((n,t)\)-threshold schemes when n is small.
Daniele Cozzo, Nigel P. Smart

Lattice-Based Cryptography


Defeating NewHope with a Single Trace

The key encapsulation mechanism “NewHope” allows two parties to agree on a secret key. The scheme includes a private and a public key. While the public key is used to encipher a random shared secret, the private key is used to decipher the ciphertext. NewHope is a candidate in the NIST post-quantum project, whose aim is to standardize cryptographic systems that are secure against attacks originating from both quantum and classical computers. While NewHope relies on the theory of quantum-resistant lattice problems, practical implementations have shown vulnerabilities against side-channel attacks targeting the extraction of the private key. In this paper, we demonstrate a new attack on the shared secret. The target is the C reference implementation as submitted to the NIST contest, executed on a Cortex-M4 processor. Using power measurements, the complete shared secret can be extracted from a single trace. Further, we analyze the impact of different compiler directives. When the code is compiled with optimizations turned off, the shared secret can be read directly from the oscilloscope display with the naked eye. When optimizations are enabled, the attack requires more sophisticated techniques, but it still works on single power traces.
Dorian Amiet, Andreas Curiger, Lukas Leuenberger, Paul Zbinden

Decryption Failure Is More Likely After Success

The user of an imperfectly correct lattice-based public-key encryption scheme leaks information about their secret key with each decryption query that they answer—even if they answer all queries successfully. Through a refinement of the D’Anvers–Guo–Johansson–Nilsson–Vercauteren–Verbauwhede failure boosting attack, we show that an adversary can use this information to improve their odds of finding a decryption failure. We also propose a new definition of \(\delta \)-correctness, and we re-assess the correctness of several submissions to NIST’s post-quantum standardization effort.
Nina Bindel, John M. Schanck

Compact Privacy Protocols from Post-quantum and Timed Classical Assumptions

While basic lattice-based primitives like encryption and digital signature schemes are already fairly short, more advanced privacy-preserving protocols (e.g. group signatures) that are believed to be post-quantum secure have outputs of at least several hundred kilobytes. In this paper, we propose a framework for building privacy protocols with significantly smaller parameter sizes whose secrecy is based on post-quantum assumptions, but soundness additionally assumes that some classical assumption, e.g., the discrete logarithm problem (DLP), is hard to break within a short amount of time.
The main ingredients of our constructions are statistical zero-knowledge proofs of knowledge for certain relations, whose soundness relies on the hardness of solving the discrete logarithm problem for a fresh DLP instance per proof. This notion has recently been described by the term quantum annoyance. Using such proofs, while also enforcing that they be completed in a fixed amount of time, we then show how to construct privacy-preserving primitives such as (dynamic) group signatures and DAA schemes, where soundness is based on the hardness of the “timed” discrete logarithm problem and SIS. The outputs of our schemes are significantly shorter (\({\approx }30\times \)) than those of purely lattice-based schemes.
Jonathan Bootle, Anja Lehmann, Vadim Lyubashevsky, Gregor Seiler

Efficient Post-quantum SNARKs for RSIS and RLWE and Their Applications to Privacy

In this paper we give efficient statistical zero-knowledge arguments of knowledge (SNARKs) for Module/Ring LWE and Module/Ring SIS relations, providing the remaining ingredient for building efficient cryptographic protocols from lattice-based hardness assumptions. We achieve our results by exploiting the linear-algebraic nature of the statements supported by the Aurora proof system (Ben-Sasson et al.), which allows us to easily and efficiently encode the linear-algebraic statements that arise in lattice schemes and to side-step the issue of “relaxed extractors”, meaning extractors that only recover a witness for a larger relation than the one for which completeness is guaranteed. We apply our approach to the example use case of partially dynamic group signatures and obtain a lattice-based group signature that protects users against corrupted issuers, and that produces signatures smaller than the state of the art, with signature sizes of less than 300 KB for the comparably secure version of the scheme. To obtain our argument size estimates for proof of knowledge of an RLWE secret, we implemented the NIZK using libiop.
Cecilia Boschini, Jan Camenisch, Max Ovsiankin, Nicholas Spooner

Short Zero-Knowledge Proof of Knowledge for Lattice-Based Commitment

Commitment schemes, together with zero-knowledge proofs, are fundamental tools for cryptographic design. Recently, Baum et al. proposed a commitment scheme (BDLOP), which is by far the most efficient lattice-based one and has been applied in several recent constructions of zero-knowledge proofs. In this paper, we propose a more efficient zero-knowledge proof of knowledge of a BDLOP commitment opening, with a shorter proof. There are a few technical challenges, and we develop some new techniques: First, we adapt the BDLOP commitment by evaluating the opening with respect to the singular value rather than the \(\ell _2\) norm in order to obtain compact parameters. Then, we use the bimodal Gaussian technique to minimize the size of the proof. Finally, utilizing a modulus-switching technique, we can retain the size of the commitment.
Yang Tao, Xi Wang, Rui Zhang

COSAC: COmpact and Scalable Arbitrary-Centered Discrete Gaussian Sampling over Integers

The arbitrary-centered discrete Gaussian sampler is a fundamental subroutine in implementing lattice trapdoor sampling algorithms. However, existing approaches typically rely on either a fast implementation of another discrete Gaussian sampler or pre-computations for specific discrete Gaussian distributions with fixed centers and standard deviations. These approaches may only support sampling from standard deviations within a limited range, or cannot efficiently sample from arbitrary standard deviations determined on-the-fly at run-time.
In this paper, we propose a compact and scalable rejection sampling algorithm by sampling from a continuous normal distribution and performing rejection sampling on rounded samples. Our scheme does not require pre-computations related to any specific discrete Gaussian distributions. Our scheme can sample from both arbitrary centers and arbitrary standard deviations determined on-the-fly at run-time. In addition, we show that our scheme only requires a low number of trials close to 2 per sample on average, and our scheme maintains good performance when scaling up the standard deviation. We also provide a concrete error analysis of our scheme based on the Rényi divergence. We implement our sampler and analyse its performance in terms of storage and speed compared to previous results. Our sampler’s running time is center-independent and is therefore applicable to implementation of convolution-style lattice trapdoor sampling and identity-based encryption resistant against timing side-channel attacks.
Raymond K. Zhao, Ron Steinfeld, Amin Sakzad
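As a baseline for what such a sampler computes, the following naive rejection sampler targets a discrete Gaussian with center and standard deviation chosen at run-time. It is not the COSAC algorithm (which instead draws from a continuous normal distribution and rejects rounded samples to achieve roughly 2 trials per output); it is only a simple reference sketch:

```python
# Naive rejection sampler for the discrete Gaussian D_{Z, sigma, c}:
# propose an integer uniformly from a tau*sigma window around c and
# accept with probability exp(-(z - c)^2 / (2 sigma^2)). This is NOT
# the COSAC algorithm -- only a simple baseline with a much higher
# rejection rate; COSAC rounds continuous normal samples instead.
import math
import random

def sample_discrete_gaussian(sigma, c, rng, tau=10):
    lo = math.floor(c - tau * sigma)
    hi = math.ceil(c + tau * sigma)
    while True:
        z = rng.randint(lo, hi)
        if rng.random() < math.exp(-((z - c) ** 2) / (2 * sigma ** 2)):
            return z

rng = random.Random(2020)
samples = [sample_discrete_gaussian(2.0, 1.3, rng) for _ in range(5000)]
print(sum(samples) / len(samples))  # empirical mean, close to the center 1.3
```

Both center and standard deviation are plain arguments here, which is exactly the "arbitrary-centered, arbitrary deviation, no pre-computation" interface the paper is after, albeit without its efficiency.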

Multivariate Cryptography


Combinatorial Rank Attacks Against the Rectangular Simple Matrix Encryption Scheme

In 2013, Tao et al. introduced the ABC Simple Matrix Scheme for Encryption, a multivariate public key encryption scheme. The scheme boasts great efficiency in encryption and decryption, though it suffers from very large public keys. It was quickly noted that the original proposal, utilizing square matrices, suffered from a very high decryption failure rate. As a consequence, the designers later published updated parameters, replacing the square matrices with rectangular matrices and altering other parameters to avoid the cryptanalysis of the original scheme presented in 2014 by Moody et al.
In this work we show that making the matrices rectangular, while decreasing the decryption failure rate, actually, and ironically, diminishes security. We show that the combinatorial rank methods employed in the original attack of Moody et al. can be enhanced by the same added degrees of freedom that reduce the decryption failure rate. Moreover, and quite interestingly, if the decryption failure rate is still reasonably high, as exhibited by the proposed parameters, we are able to mount a reaction attack to further enhance the combinatorial rank methods. To our knowledge this is the first instance of a reaction attack creating a significant advantage in this context.
Daniel Apon, Dustin Moody, Ray Perlner, Daniel Smith-Tone, Javier Verbel

A Structural Attack on Block-Anti-Circulant UOV at SAC 2019

At SAC 2019, Szepieniec and Preneel proposed a new variant of the Unbalanced Oil and Vinegar signature scheme (UOV) called block-anti-circulant UOV (BAC-UOV). In this scheme, the matrices representing the quadratic parts of the public key are designed to be block-anti-circulant matrices, which drastically reduces its public key size compared to UOV that originally has a relatively large public key size.
In this paper, we show that this block-anti-circulant property enables us to apply a special linear transformation to the variables in the public key polynomials. By executing the UOV attack on the quadratic terms in partial variables of the resulting polynomial system, we obtain a polynomial system with fewer quadratic terms, which can be solved algebraically faster than by the plain direct attack. Our proposed attack reduces the bit complexity of breaking BAC-UOV by about 20% compared with the previously known attacks. For example, the complexity of our attack on the 147-bit BAC-UOV parameter set (claimed security level II in the NIST PQC project by its authors) is only 119 bits.
Hiroki Furue, Koha Kinjo, Yasuhiko Ikematsu, Yacheng Wang, Tsuyoshi Takagi

Generalization of Isomorphism of Polynomials with Two Secrets and Its Application to Public Key Encryption

Most of the public key encryption (PKE) schemes based on multivariate quadratic polynomials rely on the Hidden Field Equation (HFE) paradigm. However, most HFE-based schemes have been broken within just a few years of their introduction. In this paper, we propose an alternative paradigm for constructing PKE based on multivariate quadratic polynomials. At the heart of our proposal is a new family of computational problems based on a generalization of the Isomorphism of Polynomials with Two Secrets (IP2S) problem. The main computational problem in the new family is proven as hard as the original IP2S problem and is more robust, in the sense that we can associate it with circulant matrices as solutions without degrading its computational hardness too much, in contrast to the original IP2S problem, which immediately becomes easy as soon as it is associated with circulant matrices. By associating it with circulant matrices, we obtain a Diffie-Hellman-like structure which allows us to have an ElGamal-like PKE scheme.
Bagus Santoso

Practical Cryptanalysis of k-ary \(C^*\)

Recently, an article by Felke appeared in Cryptography and Communications discussing the security of biquadratic \(C^*\) and a further generalization, k-ary \(C^*\). The article derives lower bounds for the complexity of an algebraic attack, directly inverting the public key, under an assumption that the first-fall degree is a good approximation of the solving degree, an assumption that the paper notes requires “greater justification and clarification.”
In this work, we provide a practical attack breaking all k-ary \(C^*\) schemes. The attack is based on differential techniques and requires nothing but the ability to evaluate the public key and solve linear systems. In particular, the attack breaks the parameters provided in CryptoChallenge 11 by constructing and solving linear systems of moderate size in a few minutes.
Daniel Smith-Tone

A Rank Attack Against Extension Field Cancellation

Extension Field Cancellation (EFC) is a multivariate-based primitive for encryption proposed by Szepieniec, Ding and Preneel in 2016. They claim to provide 80 bits of security for all the proposed variants and parameters. In this paper, we develop a rigorous security analysis and show that none of the proposed variants achieves the claimed security levels. While the Joux-Vitse algorithm can perform message recovery on the variants EFC\(_{p}^{-}(2,83,10)\) and EFC\(_{pt^{2}}^{-}(2,83,8)\) in less than \(2^{80}\) bit operations, we offer a new key recovery technique based on MinRank that can break the last proposed variant EFC\(^{-}_{p}(3,59,6)\) with complexity \(2^{73}\). We also introduce a new technique based on a spectral decomposition with respect to a subfield to recover the first half of the isomorphism of polynomials in EFC\(^{-}_{p}(q,n,a)\), when \(a = 0,1\). This technique is of independent interest.
Daniel Smith-Tone, Javier Verbel

Multivariate Encryption Schemes Based on Polynomial Equations over Real Numbers

The MQ problem, an NP-complete problem, is related to the security of Multivariate Public Key Cryptography (MPKC). Its variant, the constrained MQ problem, was first considered in constructing secure multivariate encryption schemes using the pq-method proposed at ProvSec 2018. In this paper, we propose an encryption scheme named PERN, whose key space completely includes that of the pq-method. The decryption of PERN uses methods for solving nonlinear equations over the real numbers, which differs from the decryption of the existing encryption schemes in MPKC. The construction of PERN is fairly flexible, which enables us to construct a multivariate encryption scheme whose public key consists of multivariate polynomials of degree 2, 3 or higher while constraining its public key to a reasonable size.
Takanori Yasuda, Yacheng Wang, Tsuyoshi Takagi

Quantum Algorithms


Improved Quantum Circuits for Elliptic Curve Discrete Logarithms

We present improved quantum circuits for elliptic curve scalar multiplication, the most costly component in Shor’s algorithm to compute discrete logarithms in elliptic curve groups. We optimize low-level components such as reversible integer and modular arithmetic through windowing techniques and more adaptive placement of uncomputing steps, and improve over previous quantum circuits for modular inversion by reformulating the binary Euclidean algorithm. Overall, we obtain an affine Weierstrass point addition circuit that has lower depth and uses fewer T gates than previous circuits. While previous work mostly focuses on minimizing the total number of qubits, we present various trade-offs between different cost metrics including the number of qubits, circuit depth and T-gate count. Finally, we provide a full implementation of point addition in the Q# quantum programming language that allows unit tests and automatic quantum resource estimation for all components.
Thomas Häner, Samuel Jaques, Michael Naehrig, Martin Roetteler, Mathias Soeken
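The windowing idea mentioned above is classical: precompute a table of small powers and consume several exponent (or scalar) bits per lookup, trading storage for fewer group operations. A sketch for modular exponentiation, the same trade-off the paper applies to reversible modular arithmetic (this is an illustrative classical analogue, not the paper's circuit):

```python
# Fixed-window (w-bit) modular exponentiation: precompute base^0..base^(2^w - 1)
# and process the exponent w bits at a time, most-significant window first.
# Each window costs w squarings plus one table multiplication.

def window_pow(base, exp, mod, w=4):
    table = [1] * (1 << w)
    for i in range(1, 1 << w):
        table[i] = table[i - 1] * base % mod
    result = 1
    n_windows = (max(exp.bit_length(), 1) + w - 1) // w
    for i in reversed(range(n_windows)):
        for _ in range(w):          # shift the accumulator left by w bits
            result = result * result % mod
        digit = (exp >> (i * w)) & ((1 << w) - 1)
        result = result * table[digit] % mod
    return result

print(window_pow(5, 123, 97), pow(5, 123, 97))  # the two agree
```

In the quantum setting the table lookup becomes a small multiplexed circuit, which is where the qubit/depth/T-count trade-offs studied in the paper arise.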

The Power of Few Qubits and Collisions – Subset Sum Below Grover’s Bound

Let \(a_1, \ldots , a_n, t\) be a solvable subset sum instance, i.e. there exists a subset of the \(a_i\) that sums to t. Such a subset can be found with Grover search in time \(2^{\frac{n}{2}}\), the square root of the search space, using only \(\mathcal {O}(n)\) qubits. The only quantum algorithms that beat Grover’s square root bound – such as the Left-Right-Split algorithm of Brassard, Høyer, and Tapp – use either an exponential number of qubits or an exponential amount of expensive classical memory with quantum random access (QRAM). We propose the first subset sum quantum algorithm that breaks the square root Grover bound with linearly many qubits and without QRAM. Building on the representation technique and the quantum collision finding algorithm of Chailloux, Naya-Plasencia and Schrottenloher (CNS), we obtain a quantum algorithm with running time \(2^{0.48n}\).
Using the Schroeppel-Shamir list construction technique, we further improve the running time to \(2^{0.43n}\). The price we pay for beating the square root bound is that, as opposed to Grover search, our algorithms require classical memory (but no QRAM), i.e. we obtain a time/memory/qubit tradeoff. Thus, our algorithms have to be compared to purely classical time/memory subset sum trade-offs such as those of Howgrave-Graham and Joux. Our quantum algorithms improve on these purely classical algorithms for all memory complexities \(M < 2^{0.2n}\). As an example, for memory \(2^{0.1n}\) we obtain running time \(2^{0.47n}\), as opposed to \(2^{0.63n}\) for the best classical algorithm.
Alexander Helm, Alexander May
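The classical \(2^{n/2}\) benchmark these algorithms are measured against is the Horowitz-Sahni meet-in-the-middle method: enumerate all subset sums of each half of the instance and look for a complementary pair. A minimal sketch on a toy instance:

```python
# Classical meet-in-the-middle subset sum (Horowitz-Sahni): time and
# memory 2^{n/2}. The quantum algorithms in the paper improve on
# classical time/memory trade-offs derived from this baseline.
from itertools import combinations

def subset_sum_mitm(a, t):
    half = len(a) // 2
    left, right = a[:half], a[half:]
    # All subset sums of the left half, remembering one subset per sum.
    left_sums = {}
    for r in range(len(left) + 1):
        for combo in combinations(range(len(left)), r):
            left_sums.setdefault(sum(left[i] for i in combo), combo)
    # Scan right-half subsets for a complementary sum t - s.
    for r in range(len(right) + 1):
        for combo in combinations(range(len(right)), r):
            need = t - sum(right[i] for i in combo)
            if need in left_sums:
                return (left_sums[need], tuple(half + i for i in combo))
    return None  # instance not solvable

a = [267, 493, 869, 961, 1000, 1153, 1246, 1598]
print(subset_sum_mitm(a, 267 + 961 + 1246))  # indices of a valid subset
```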

On Quantum Distinguishers for Type-3 Generalized Feistel Network Based on Separability

In this work, we derive a method for constructing quantum distinguishers for GFNs (Generalized Feistel-like schemes with invertible inner functions and XORs), where for simplicity 4 branches are considered. The construction technique is demonstrated on the Type-3 GFN, and some other cyclically inequivalent GFNs are considered as examples. Introducing the property of separability, we observe that finding a suitable partition of the input blocks implies that some branches can be represented as a sum of functions with almost disjoint variables, which simplifies the application of Simon’s algorithm. However, for higher numbers of rounds most branches do not satisfy this property, and in order to derive a quantum distinguisher for these branches we employ Simon’s and Grover’s algorithms in combination with a suitable system of equations given in terms of the input blocks and the inner functions involved in the round function. As a result, we are able to construct a 5-round quantum distinguisher for Type-3 GFNs using only a quantum encryption oracle, with query complexity \(2^{N/4}\cdot \mathcal {O}(N/4)\), where N is the size of the input block.
Samir Hodžić, Lars Ramkilde Knudsen, Andreas Brasen Kidmose

Security Proofs


Many a Mickle Makes a Muckle: A Framework for Provably Quantum-Secure Hybrid Key Exchange

Hybrid Authenticated Key Exchange (AKE) protocols combine keying material from different sources (post-quantum, classical, and quantum key distribution (QKD)) to build protocols that are resilient to catastrophic failures of the different components. These failures may be due to advances in quantum computing, implementation vulnerabilities, or our evolving understanding of the quantum (and even classical) security of supposedly quantum-secure primitives. This hybrid approach is a prime candidate for initial deployment of post-quantum-secure cryptographic primitives because it hedges against undiscovered weaknesses. We propose a general framework \(\mathsf {HAKE}\) for analysing the security of such hybrid AKE protocols. \(\mathsf {HAKE}\) extends the classical Bellare-Rogaway model for AKE security to encompass forward security, post-compromise security, fine-grained compromise of different cryptographic components, and more. We use the framework to provide a security analysis of a new hybrid AKE protocol named \(\mathsf {Muckle}\). This protocol operates in one round trip and leverages the pre-established symmetric keys that are inherent to current QKD designs to provide message authentication, avoiding the need to use expensive post-quantum signature schemes. We provide an implementation of our Muckle protocol, instantiating our generic construction with classical and post-quantum Diffie-Hellman-based algorithmic choices. Finally, we report on benchmarking exercises against our implementation, examining its performance in terms of clock cycles, elapsed wall-time, and additional latency in both LAN and WAN settings.
Benjamin Dowling, Torben Brandt Hansen, Kenneth G. Paterson
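The core hybrid idea, that the session key should remain secure as long as any one key source holds, is commonly realized by chaining the secrets through a key-derivation function. The sketch below is a generic HKDF-style combiner, not the Muckle key schedule itself; the label string and salt are illustrative assumptions.

```python
# Generic sketch of a hybrid key combiner: classical, post-quantum, and
# QKD-derived secrets are chained through HKDF-Extract so that the final
# key depends on all of them. This is NOT the Muckle key schedule; it
# only illustrates the chaining pattern.
import hmac
import hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    out, block, counter = b"", b"", 1
    while len(out) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        out += block
        counter += 1
    return out[:length]

def combine_keys(k_classical: bytes, k_pq: bytes, k_qkd: bytes,
                 transcript: bytes) -> bytes:
    # chain each secret into the extraction state: an attacker must
    # break every source to predict the output
    ck = hkdf_extract(b"\x00" * 32, k_classical)
    ck = hkdf_extract(ck, k_pq)
    ck = hkdf_extract(ck, k_qkd)
    # bind the session key to the protocol transcript (hypothetical label)
    return hkdf_expand(ck, b"hybrid session key" + transcript)
```

Binding the transcript into the expansion step is what lets the pre-established QKD keys double as message authenticators, avoiding a post-quantum signature.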

A Note on the Instantiability of the Quantum Random Oracle

In a highly influential paper from fifteen years ago [10], Canetti, Goldreich, and Halevi showed a fundamental separation between the Random Oracle Model (ROM) and the standard model. They constructed a signature scheme which can be proven secure in the ROM, but is insecure when instantiated with any hash function (and thus insecure in the standard model). In 2011, Boneh et al. defined the notion of the Quantum Random Oracle Model (QROM), where queries to the random oracle may be made in quantum superposition. Because the QROM generalizes the ROM, a proof of security in the QROM is stronger than one in the ROM. This leaves open the possibility that security in the QROM could imply security in the standard model. In this work, we show that this is not the case, and that security in the QROM cannot imply standard-model security. We do this by showing that the original schemes that show a separation between the standard model and the ROM are also secure in the QROM. We consider two schemes that establish such a separation, one with length-restricted messages, and one without, and show both to be secure in the QROM. Our results give further understanding to the landscape of proofs in the ROM versus the QROM or standard model, and point towards the QROM and ROM being much closer to each other than either is to standard model security.
Edward Eaton, Fang Song

Collapseability of Tree Hashes

One oft-endeavored security property for cryptographic hash functions is collision resistance: it should be computationally infeasible to find distinct inputs \(x,x'\) such that \(H(x) = H(x')\), where H is the hash function. Unruh (EUROCRYPT 2016) proposed collapseability as its quantum equivalent. The Merkle-Damgård and sponge hashing modes have recently been proven to be collapseable under the assumption that the underlying primitive is collapseable. These modes are inherently sequential. In this work, we investigate collapseability of tree hashing. We first consider fixed length tree hashing modes, and derive conditions under which their collapseability can be reduced to the collapseability of the underlying compression function. Then, we extend the result to two methods for achieving variable length hashing: tree hashing with domain separation between message and chaining value, and tree hashing with length encoding at the end of the tree. The proofs are performed using the collapseability composability framework of Fehr (TCC 2018), which allows us to dispense with deeply technical quantum details and to focus on proper composition of the tree hashes from their compression function.
Aldo Gunsing, Bart Mennink
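The two variable-length techniques mentioned above can be sketched concretely. The toy binary tree hash below uses one-byte domain-separation tags (leaf vs. inner node) and appends a length encoding at the root; the tags, block size, and use of SHA-256 as the compression function are illustrative assumptions, not the modes analysed in the paper.

```python
# Toy binary tree hash illustrating domain separation (distinct tags for
# leaves and inner nodes) and length encoding at the root. Tag values and
# the SHA-256 "compression function" are hypothetical choices.
import hashlib

def compress(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def tree_hash(msg: bytes, block: int = 32) -> bytes:
    # leaves: message blocks, domain-separated with tag 0x00
    leaves = [compress(b"\x00" + msg[i:i + block])
              for i in range(0, max(len(msg), 1), block)]
    # inner nodes: chaining values, domain-separated with tag 0x01
    level = leaves
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            pair = level[i] + (level[i + 1] if i + 1 < len(level) else b"")
            nxt.append(compress(b"\x01" + pair))
        level = nxt
    # root: encode the message length to separate messages of different sizes
    return compress(b"\x02" + level[0] + len(msg).to_bytes(8, "big"))
```

Unlike Merkle-Damgård, the levels of the tree can be computed in parallel; the reduction in the paper shows that collapseability of `compress` suffices for collapseability of such a mode.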

Encryption Schemes Using Random Oracles: From Classical to Post-Quantum Security

The security proofs of post-quantum cryptographic schemes often consider only classical adversaries. Therefore, whether such schemes are really post-quantum secure remains unknown until the proofs take quantum adversaries into account. Switching to a quantum adversary might require adapting the security notion. In particular, post-quantum security proofs for schemes which use random oracles have to be in the quantum random oracle model (\(\mathrm {QROM}\)), while classical security proofs are in the random oracle model (\(\mathrm {ROM}\)). We remedy this state of affairs by introducing a framework to obtain post-quantum security of public key encryption schemes which use random oracles. We define a class of encryption schemes, called oracle-simple, and identify game hops which are used to prove such schemes secure in the \(\mathrm {ROM}\). For these game hops, we state both simple and sufficient conditions to validate that a proof also holds in the \(\mathrm {QROM}\). The strength of our framework lies in its simplicity, its generality, and its applicability. We demonstrate this by applying it to the code-based encryption scheme \(\mathrm {ROLLO{\hbox {-}}II}\) (Round 2 NIST candidate) and the lattice-based encryption scheme \(\mathrm {LARA}\) (FC 2019). Thereby we prove that both schemes are post-quantum secure, which had not been shown before.
Juliane Krämer, Patrick Struck
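A canonical example of an encryption scheme whose one-wayness is masked through a random oracle is hashed ElGamal, sketched below with SHA-256 standing in for the oracle. This is a generic textbook illustration of a random-oracle-based PKE, not the oracle-simple class or the schemes analysed in the paper, and the toy group parameters offer no real security.

```python
# Toy hashed-ElGamal: the random oracle H masks the Diffie-Hellman value,
# so IND security reduces (in the ROM) to a Diffie-Hellman assumption.
# SHA-256 instantiates H heuristically; parameters are illustrative only.
import hashlib
import secrets

P = (1 << 127) - 1  # Mersenne prime used as a toy group modulus
G = 3               # hypothetical generator choice

def H(x: int) -> int:
    # "random oracle": hash the group element to a 128-bit mask
    digest = hashlib.sha256(x.to_bytes(16, "big")).digest()
    return int.from_bytes(digest[:16], "big")

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk  # (public key, secret key)

def enc(pk: int, m: int):
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), H(pow(pk, r, P)) ^ m  # (c1, c2)

def dec(sk: int, ct):
    c1, c2 = ct
    return H(pow(c1, sk, P)) ^ c2
```

The classical ROM proof for such a scheme hops from the real game to one where the mask `H(...)` is replaced by fresh randomness; the framework in the paper gives conditions under which hops of this shape remain valid against adversaries that query H in superposition.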

