About this Book

This book contains revised selected papers from the 24th International Conference on Selected Areas in Cryptography, SAC 2017, held in Ottawa, ON, Canada in August 2017.

The 23 full papers presented in this volume were carefully reviewed and selected from 66 submissions. The focus of the conference was on specific themes in the area of cryptographic system design and analysis such as:

- Design and analysis of symmetric key cryptosystems
- Primitives for symmetric key cryptography, including block and stream ciphers, hash functions, and MAC algorithms
- Efficient implementations of symmetric and public key algorithms



Discrete Logarithms


Second Order Statistical Behavior of LLL and BKZ

The LLL algorithm (from Lenstra, Lenstra and Lovász) and its generalization BKZ (from Schnorr and Euchner) are widely used in cryptanalysis, especially for lattice-based cryptography. Precisely understanding their behavior is crucial for deriving appropriate key sizes for cryptographic schemes subject to lattice-reduction attacks. Current models, e.g. the Geometric Series Assumption and Chen-Nguyen's BKZ simulator, have provided a decent first-order analysis of the behavior of LLL and BKZ. However, they only focus on the average behavior and are not perfectly accurate. In this work, we initiate a second-order analysis of this behavior. We confirm and quantify discrepancies between models and experiments, in particular in the head and tail regions, and study their consequences. We also provide statistics on variations around the mean and on correlations, and study their impact. While mostly based on experiments, by pointing at and quantifying unaccounted-for phenomena, our study lays the groundwork for a theoretical and predictive understanding of LLL and BKZ performance at the second order.
Yang Yu, Léo Ducas
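The Geometric Series Assumption mentioned above predicts that the Gram-Schmidt norms of a reduced basis decay geometrically. A minimal sketch of that first-order prediction (the function name and the default root Hermite factor 1.0219, a common LLL estimate, are our illustrative assumptions):

```python
import math

def gsa_profile(n, log_volume, delta=1.0219):
    """Predicted log Gram-Schmidt norms log ||b*_i|| under the Geometric
    Series Assumption for an n-dimensional lattice of given log-volume.
    delta is the root Hermite factor (1.0219 is a common LLL estimate)."""
    log_b1 = n * math.log(delta) + log_volume / n
    # slope chosen so that the predicted norms multiply back to the volume
    slope = -2 * n * math.log(delta) / (n - 1)
    return [log_b1 + i * slope for i in range(n)]

profile = gsa_profile(100, 0.0)
assert abs(sum(profile)) < 1e-9   # profile is volume-consistent
```

The paper's point is precisely that measured profiles deviate from this straight line, most visibly in the head and tail regions.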

Refinement of the Four-Dimensional GLV Method on Elliptic Curves

In this paper we refine the four-dimensional GLV method on elliptic curves presented by Longa and Sica (ASIACRYPT 2012). First, we improve the twofold Cornacchia-type algorithm and show that the improved algorithm possesses a better theoretical upper bound on the decomposition coefficients. In particular, our proof is much simpler than Longa and Sica's. We also apply the twofold Cornacchia-type algorithm to GLS curves over \(\mathbb {F}_{p^4}\). Second, in the case of curves with j-invariant 0, we compare this improved version with the almost optimal algorithm proposed by Hu, Longa and Xu in 2012 (Designs, Codes and Cryptography). Computational implementations show that they have almost the same performance, which provides further evidence that the improved version is a sufficiently good scalar decomposition approach.
Hairong Yi, Yuqing Zhu, Dongdai Lin

Key Agreement


Post-Quantum Static-Static Key Agreement Using Multiple Protocol Instances

Some key agreement protocols leak information about secret keys if dishonest participants use specialized public keys. We formalize these protocols and attacks, and present a generic transformation that can be applied to such key agreement protocols to resist such attacks. Simply put, each party generates k different keys, and two parties perform key agreement using all \(k^2\) combinations of their individual keys. We consider this transformation in the context of various post-quantum key agreement schemes and analyze the attacker's success probabilities (which depend on the details of the underlying key agreement protocol) to determine the necessary parameter sizes for 128-bit security. Our transformation increases key sizes by a factor of k and computation times by a factor of \(k^2\), a significant but still feasible cost. Our transformation is particularly well-suited to supersingular isogeny Diffie-Hellman, in which one can take \(k=113\) instead of the usual \(k=256\) at the 128-bit quantum security level. These results represent a potential path forward towards solving the open problem of securing long-term static-static key exchange against quantum adversaries.
Reza Azarderakhsh, David Jao, Christopher Leonardi
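The \(k^2\)-combination transformation described above can be sketched with a toy (classical, insecure) Diffie-Hellman stand-in for the underlying key agreement; the parameters and function names here are illustrative assumptions, not the paper's construction:

```python
import hashlib, secrets

P = 2**127 - 1   # toy Mersenne-prime modulus, for illustration only
G = 3

def keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def agree(sk, their_pk):
    return pow(their_pk, sk, P)

def multi_keygen(k):
    """Each party generates k independent key pairs."""
    return [keygen() for _ in range(k)]

def multi_agree(my_keys, their_pks):
    """Run the base agreement on all k^2 key combinations and hash the
    (sorted, hence order-independent) shared values into one session key."""
    shares = sorted(agree(sk, pk) for sk, _ in my_keys for pk in their_pks)
    h = hashlib.sha256()
    for s in shares:
        h.update(s.to_bytes(16, "big"))
    return h.digest()

k = 3
alice, bob = multi_keygen(k), multi_keygen(k)
key_a = multi_agree(alice, [pk for _, pk in bob])
key_b = multi_agree(bob, [pk for _, pk in alice])
assert key_a == key_b
```

Sorting the shared values before hashing makes the derivation symmetric, since both parties compute the same multiset of \(k^2\) agreements.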

Side-Channel Attacks on Quantum-Resistant Supersingular Isogeny Diffie-Hellman

In this paper, we present three side-channel attacks on the quantum-resistant supersingular isogeny Diffie-Hellman (SIDH) key exchange protocol. These refined power analysis attacks target the representation of a zero value in a physical implementation of SIDH to extract bits of the secret key. To understand the behavior of these zero-attacks on SIDH, we investigate the representation of zero in the context of quadratic extension fields and isogeny arithmetic. We then present three different refined power analysis attacks on SIDH. Our first and second attacks target the Jao, De Feo, and Plût three-point Montgomery ladder by utilizing a partial-zero attack and zero-value attack, respectively. Our third attack proposes a method to break the large-degree isogeny by utilizing zero-values in the context of isogenies. The goal of this paper is to illustrate additional security concerns for an SIDH static-key user.
Brian Koziel, Reza Azarderakhsh, David Jao



Computing Discrete Logarithms in \({\mathbb F}_{p^6}\)

The security of torus-based and pairing-based cryptography relies on the difficulty of computing discrete logarithms in small degree extensions of finite fields of large characteristic. It has already been shown that for degrees 2 and 3, the discrete logarithm problem is not as hard as once thought. We address the question of degree 6 and aim at providing real-life timings for such problems. We report on a record DL computation in a 132-bit subgroup of \({\mathbb F}_{{p}^{6}}\) for a 22-decimal digit prime, with \(p^6\) having 422 bits. The previous record was for a 79-bit subgroup in a 240-bit field. We used NFS-DL with a sieving phase over degree 2 polynomials, instead of the more classical degree 1 case. We show how to improve many parts of the NFS-DL algorithm to reach this target.
Laurent Grémy, Aurore Guillevic, François Morain, Emmanuel Thomé

Computing Low-Weight Discrete Logarithms

We propose some new baby-step giant-step algorithms for computing “low-weight” discrete logarithms; that is, for computing discrete logarithms in which the radix-b representation of the exponent is known to have only a small number of nonzero digits. Prior to this work, such algorithms had been proposed for the case where the exponent is known to have low Hamming weight (i.e., the radix-2 case). Our new algorithms (i) improve the best-known deterministic complexity for the radix-2 case, and then (ii) generalize from radix-2 to arbitrary radixes \(b>1\). We also discuss how our new algorithms can be used to attack several recent Verifier-based Password Authenticated Key Exchange (VPAKE) protocols from the cryptographic literature with the conclusion that the new algorithms render those constructions completely insecure in practice.
Bailey Kacsmar, Sarah Plosker, Ryan Henry
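For reference, the classic baby-step giant-step procedure that the low-weight variants above refine can be sketched as follows (a plain generic-group version, not the paper's low-weight algorithm):

```python
from math import isqrt

def bsgs(g, h, p, order):
    """Classic baby-step giant-step: return x with g^x = h (mod p),
    searching 0 <= x < order in roughly sqrt(order) group operations."""
    m = isqrt(order) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j -> j
    step = pow(g, -m, p)                         # giant step: g^{-m}
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * step % p
    return None

assert bsgs(2, 9, 23, 11) == 5   # 2^5 = 32 = 9 (mod 23)
```

The low-weight algorithms of the paper replace the exhaustive baby table with tables indexed by the sparse digit patterns of the exponent's radix-b representation, shrinking both phases accordingly.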

Efficient Implementation


sLiSCP: Simeck-Based Permutations for Lightweight Sponge Cryptographic Primitives

In this paper, we propose a family of lightweight cryptographic permutations, named sLiSCP, with the sole aim of providing a realistic minimal design that suits a variety of lightweight device applications. More precisely, we argue that for such devices the area dedicated to security purposes should not only be consumed by an encryption or hashing algorithm, but should also be used to provide as many cryptographic functionalities as possible. Our main contribution is the design of a lightweight permutation employing a 4-subblock Type-2 Generalized Feistel-like Structure (GFS) and round-reduced unkeyed Simeck with either a 48- or 64-bit block length as the two round functions, thus resulting in two lightweight instances of the permutation, sLiSCP-192 and sLiSCP-256. We leverage the extensive security analysis of both Simeck (Simon-like functions) and Type-2 GFSs and present bounds against differential and linear cryptanalysis. Moreover, we analyze sLiSCP against a wide range of distinguishing attacks, and accordingly claim that there exist no structural distinguishers for sLiSCP with a complexity below \(2^{b/2}\), where b is the state size. We demonstrate how sLiSCP can be used as a unified round function in the duplex sponge construction to build (authenticated) encryption and hashing functionalities. The parallel hardware implementation area of the unified duplex mode of sLiSCP-192 (resp. sLiSCP-256) in CMOS 65 nm ASIC is 2289 (resp. 3039) GEs with a throughput of 29.62 (resp. 44.44) kbps.
Riham AlTawy, Raghvendra Rohit, Morgan He, Kalikinkar Mandal, Gangqiang Yang, Guang Gong

Efficient Reductions in Cyclotomic Rings - Application to Ring-LWE Based FHE Schemes

With Fully Homomorphic Encryption (FHE), it is possible to process encrypted data without access to the private key. This has a wide range of applications, most notably the offloading of sensitive data processing. Most research on FHE has focused on improving its efficiency, namely by introducing schemes based on Ring-Learning With Errors (RLWE), and techniques such as batching, which allows for the encryption of multiple messages in the same ciphertext. Much of the related research has focused on RLWE relying on power-of-two cyclotomic polynomials. While it is possible to achieve efficient arithmetic with such polynomials, one cannot exploit batching. Herein, the efficiency of ring arithmetic underpinned by non-power-of-two cyclotomic polynomials is analyzed and improved. Two methods for polynomial reduction are proposed, one based on the Barrett reduction and the other on a Montgomery representation. Speed-ups of up to 2.66 are obtained for the reduction operation on an i7-5960X processor when compared with a straightforward implementation of the Barrett reduction. Moreover, the proposed methods are exploited to enhance the homomorphic multiplication of the Fan-Vercauteren (FV) and Brakerski-Gentry-Vaikuntanathan (BGV) encryption schemes, producing experimental speed-ups of up to 1.37.
Jean-Claude Bajard, Julien Eynard, Anwar Hasan, Paulo Martins, Leonel Sousa, Vincent Zucca
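The Barrett reduction underlying the first proposed method is easiest to illustrate in its classic integer form (the paper's polynomial version is analogous); this is a sketch, with the modulus chosen arbitrarily:

```python
def barrett_setup(m):
    """Precompute k with 2^k > m and mu = floor(2^{2k} / m)."""
    k = m.bit_length()
    return k, (1 << (2 * k)) // m

def barrett_reduce(x, m, k, mu):
    """Reduce 0 <= x < m^2 modulo m using one multiplication by the
    precomputed mu plus shifts, instead of a division by m."""
    q = (x * mu) >> (2 * k)   # q is floor(x/m) or floor(x/m) - 1
    r = x - q * m
    if r >= m:                # at most one correction is needed
        r -= m
    return r

m = 0xFFFFFFFB
k, mu = barrett_setup(m)
for x in (0, 1, m - 1, m, 12345678901234567):
    assert barrett_reduce(x, m, k, mu) == x % m
```

Since \(x < m^2 < 2^{2k}\), the quotient estimate is off by at most one, so a single conditional subtraction suffices; that bound is what makes the method branch-light and division-free.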

How to (Pre-)Compute a Ladder

Improving the Performance of X25519 and X448
In the RFC 7748 memorandum, the Internet Research Task Force specified a Montgomery-ladder scalar multiplication function based on two recently adopted elliptic curves, "curve25519" and "curve448". The purpose of this function is to support the Diffie-Hellman key exchange algorithm that will be included in the forthcoming version of the Transport Layer Security cryptographic protocol. In this paper, we describe a ladder variant that accelerates the fixed-point multiplication function inherent to the Diffie-Hellman key pair generation phase. Our proposal combines a right-to-left version of the Montgomery ladder with the pre-computation of constant values directly derived from the base point and its multiples. To our knowledge, this is the first proposal of a Montgomery ladder procedure for prime elliptic curves that admits the extensive use of pre-computation. In exchange for very modest memory resources and a small extra programming effort, the proposed ladder obtains significant speedups for software implementations. Moreover, our proposal fully complies with the RFC 7748 specification. A software implementation of the X25519 and X448 functions using our pre-computable ladder yields an acceleration factor of roughly 1.20, and 1.25 when implemented on the Haswell and the Skylake micro-architectures, respectively.
Thomaz Oliveira, Julio López, Hüseyin Hışıl, Armando Faz-Hernández, Francisco Rodríguez-Henríquez
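The baseline ladder structure being improved can be sketched with modular exponentiation standing in for elliptic-curve scalar multiplication; this is the classic left-to-right ladder, not the paper's right-to-left precomputation-friendly variant:

```python
def montgomery_ladder(x, k, p):
    """Compute x^k mod p with the classic ladder: the pair (r0, r1)
    maintains the invariant r1 = r0 * x throughout, which on elliptic
    curves is what lets differential addition formulas work from
    x-coordinates only."""
    r0, r1 = 1, x % p
    for bit in bin(k)[2:]:
        if bit == '0':
            r0, r1 = r0 * r0 % p, r0 * r1 % p
        else:
            r0, r1 = r0 * r1 % p, r1 * r1 % p
    return r0

assert montgomery_ladder(3, 1234567, 1000003) == pow(3, 1234567, 1000003)
```

Every iteration performs the same two operations regardless of the key bit, which is the property that makes ladders attractive for constant-time implementations.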

HILA5: On Reliability, Reconciliation, and Error Correction for Ring-LWE Encryption

We describe a new reconciliation method for Ring-LWE that has a significantly smaller failure rate than previous proposals while reducing ciphertext size and the amount of randomness required. It is based on a simple, deterministic variant of Peikert's reconciliation that works with our new "safe bits" selection and constant-time error correction techniques. The new method does not need randomized smoothing to achieve non-biased secrets. When used with the very efficient "New Hope" Ring-LWE parametrization we achieve a decryption failure rate well below \(2^{-128}\) (compared to \(2^{-60}\) of the original), making the scheme suitable for public key encryption in addition to key exchange protocols; the reconciliation approach saves about \(40 \%\) in ciphertext size when compared to the common LP11 Ring-LWE encryption scheme. We perform a combinatorial failure analysis using full probability convolutions, leading to a precise understanding of decryption failure conditions at the bit level. Even with additional implementation security and safety measures the new scheme is still essentially as fast as New Hope, but has slightly shorter messages. The new techniques have been instantiated and implemented as a Key Encapsulation Mechanism (KEM) and public key encryption scheme designed to meet the requirements of NIST's Post-Quantum Cryptography effort at a very high security level.
Markku-Juhani O. Saarinen

Public Key Encryption


A Public-Key Encryption Scheme Based on Non-linear Indeterminate Equations

In this paper, we propose a post-quantum public-key encryption scheme whose security depends on a problem arising from a multivariate non-linear indeterminate equation. The security of lattice cryptosystems, which are considered to be the most promising candidates for a post-quantum cryptosystem, is based on the shortest vector problem or the closest vector problem in the discrete linear solution spaces of simultaneous equations. However, several improved attacks on the underlying problems have recently been developed by using approximation methods, which result in requiring longer key sizes. As a scheme that avoids such attacks, we propose a public-key encryption scheme based on the "smallest" solution problem in the non-linear solution spaces of multivariate indeterminate equations, developed from the algebraic surface cryptosystem. Since no efficient algorithm to find such a smallest solution is currently known, we introduce a new computational assumption under which the proposed scheme is proven to be secure in the sense of IND-CPA. We then perform computational experiments based on known attack methods and show that the key size of our scheme can be much shorter than those of previous lattice cryptosystems.
Koichiro Akiyama, Yasuhiro Goto, Shinya Okumura, Tsuyoshi Takagi, Koji Nuida, Goichiro Hanaoka

NTRU Prime: Reducing Attack Surface at Low Cost

Several ideal-lattice-based cryptosystems have been broken by recent attacks that exploit special structures of the rings used in those cryptosystems. The same structures are also used in the leading proposals for post-quantum lattice-based cryptography, including the classic NTRU cryptosystem and typical Ring-LWE-based cryptosystems.
This paper (1) proposes NTRU Prime, which tweaks NTRU to use rings without these structures; (2) proposes Streamlined NTRU Prime, a public-key cryptosystem optimized from an implementation perspective, subject to the standard design goal of IND-CCA2 security; (3) finds high-security post-quantum parameters for Streamlined NTRU Prime; and (4) optimizes a constant-time implementation of those parameters. The resulting sizes and speeds show that reducing the attack surface has very low cost.
Daniel J. Bernstein, Chitchanok Chuengsatiansup, Tanja Lange, Christine van Vredendaal
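Streamlined NTRU Prime works in the ring \(\mathbb {Z}_q[x]/(x^p - x - 1)\); its arithmetic can be sketched with schoolbook multiplication (a toy illustration, not the paper's constant-time implementation, and with a tiny non-cryptographic p for the example):

```python
def ring_mul(f, g, p, q):
    """Multiply two elements of Z_q[x]/(x^p - x - 1), given as
    length-p coefficient lists, lowest degree first."""
    h = [0] * (2 * p - 1)
    for i, fi in enumerate(f):              # schoolbook product
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    for d in range(2 * p - 2, p - 1, -1):   # fold down using x^p = x + 1
        c, h[d] = h[d], 0
        h[d - p + 1] += c                   # x^d = x^{d-p+1} + x^{d-p}
        h[d - p] += c
    return [c % q for c in h[:p]]

# x * x^2 = x^3 = x + 1 in Z_7[x]/(x^3 - x - 1)
assert ring_mul([0, 1, 0], [0, 0, 1], 3, 7) == [1, 1, 0]
```

Real parameters require p prime with \(x^p - x - 1\) irreducible modulo q; it is exactly the absence of small subrings and unusual automorphisms in this ring that the "reduced attack surface" of the title refers to.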



Leighton-Micali Hash-Based Signatures in the Quantum Random-Oracle Model

Digital signatures constructed solely from hash functions offer competitive signature sizes and fast signing and verifying times. Moreover, the security of hash functions against a quantum adversary is believed to be well understood. This means that hash-based signatures are strong candidates for standard use in a post-quantum world. The Leighton-Micali signature scheme (LMS) is one such scheme being considered for standardization. However, all systematic analyses of LMS have only considered a classical adversary. In this work we close this gap by showing a proof of the security of LMS in the quantum random-oracle model. Our results match the bounds imposed by Grover's search algorithm within a constant factor, and remain tight in the multi-user setting.
Edward Eaton

Efficient Post-Quantum Undeniable Signature on 64-Bit ARM

We present full-fledged, highly optimized, constant-time software for post-quantum supersingular isogeny-based undeniable signatures (SIUS) on ARMv8 platforms, providing 83- and 110-bit quantum security levels. To the best of our knowledge, this work is the first empirical implementation of an isogeny-based quantum-resistant undeniable signature presented to date. The proposed software is developed on top of our optimized hand-written ARMv8 assembly arithmetic library and benchmarked on a variety of platforms. The entire protocol runs in less than a second on a Huawei Nexus smartphone at the 83-bit quantum security level. Moreover, our signature and public key sizes are 25% smaller than in the original SIUS scheme. We remark that the SIUS protocol, similar to other isogeny-based schemes, suffers from an excessive number of operations, affecting its overall performance. Nonetheless, its significantly smaller key and signature sizes make it a promising candidate for post-quantum cryptography.
Amir Jalali, Reza Azarderakhsh, Mehran Mozaffari-Kermani

“Oops, I Did It Again” – Security of One-Time Signatures Under Two-Message Attacks

One-time signatures (OTS) are called one-time because the accompanying security reductions only guarantee security under single-message attacks. However, this does not imply that efficient attacks are possible under two-message attacks. Especially in the context of hash-based OTS (which are basic building blocks of recent standardization proposals) this leads to the question of whether accidental reuse of a one-time key pair leads to immediate loss of security or to graceful degradation.
In this work we analyze the security of the most prominent hash-based OTS, Lamport's scheme, its optimized variant, and WOTS, under different kinds of two-message attacks. Interestingly, it turns out that the schemes are still secure under two-message attacks, asymptotically. However, this does not imply anything for typical parameters. Our results show that for Lamport's scheme, security only slowly degrades in the relevant attack scenarios and typical parameters are still somewhat secure, even in the case of a two-message attack. As we move on to optimized Lamport and its generalization WOTS, security degrades faster and faster, and typical parameters do not provide any reasonable level of security under two-message attacks.
Leon Groot Bruinderink, Andreas Hülsing
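Why one-time keys are one-time is easiest to see in Lamport's scheme itself; a minimal sketch (using SHA-256 as the hash is our choice for illustration):

```python
import hashlib, secrets

H = lambda b: hashlib.sha256(b).digest()
N = 256   # number of digest bits to sign

def keygen():
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(N)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def digest_bits(msg):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(N)]

def sign(sk, msg):
    # reveal exactly one of the two secret preimages per digest bit
    return [sk[i][b] for i, b in enumerate(digest_bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][b]
               for (i, b), s in zip(enumerate(digest_bits(msg)), sig))

sk, pk = keygen()
sig = sign(sk, b"only message")
assert verify(pk, b"only message", sig)
# Signing a second message reveals both preimages at every bit position
# where the two digests differ, letting an attacker mix-and-match the
# two signatures -- the degradation the paper quantifies.
```

In plain Lamport the attacker must still find a target message whose digest bits are covered by the revealed preimages, which is why its security degrades only slowly; the optimized variants reveal more per signature and degrade faster.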



Low-Communication Parallel Quantum Multi-Target Preimage Search

The most important pre-quantum threat to AES-128 is the 1994 van Oorschot–Wiener “parallel rho method”, a low-communication parallel pre-quantum multi-target preimage-search algorithm. This algorithm uses a mesh of p small processors, each running for approximately \(2^{128}\!/pt\) fast steps, to find one of t independent AES keys \(k_1,\dots ,k_t\), given the ciphertexts for a shared plaintext 0.
NIST has claimed a high post-quantum security level for AES-128, starting from the following rationale: “Grover’s algorithm requires a long-running serial computation, which is difficult to implement in practice. In a realistic attack, one has to run many smaller instances of the algorithm in parallel, which makes the quantum speedup less dramatic.” NIST has also stated that resistance to multi-key attacks is desirable; but, in a realistic parallel setting, a straightforward multi-key application of Grover’s algorithm costs more than targeting one key at a time.
This paper introduces a different quantum algorithm for multi-target preimage search. This algorithm shows, in the same realistic parallel setting, that quantum preimage search benefits asymptotically from having multiple targets. The new algorithm requires a revision of NIST’s AES-128, AES-192, and AES-256 security claims.
Gustavo Banegas, Daniel J. Bernstein
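The classical multi-target advantage mentioned above is easy to demonstrate at a toy scale: with t targets, a brute-force search expects to hit one of them after roughly \(2^n/t\) evaluations rather than \(2^n\). A small illustrative sketch (the 16-bit toy function is our construction, not an AES attack):

```python
import hashlib, secrets

def f(k):
    """Toy one-way function: 16-bit key in, 16-bit output."""
    return hashlib.sha256(k.to_bytes(2, "big")).digest()[:2]

t = 16
targets = {f(secrets.randbelow(2**16)) for _ in range(t)}

trials = 0
for k in range(2**16):
    trials += 1
    if f(k) in targets:     # hitting any one of the t keys suffices
        break

# expected trials is roughly 2^16 / t, versus about 2^16 for one target
assert 1 <= trials <= 2**16
```

The paper's contribution is showing that, under realistic parallel cost accounting, quantum preimage search enjoys an analogous asymptotic benefit from multiple targets, contrary to the single-target Grover intuition.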

Lattice Klepto

Turning Post-Quantum Crypto Against Itself
This paper studies ways to backdoor lattice-based systems following Young and Yung’s work on backdooring RSA and discrete-log based systems. For the NTRU encryption scheme we show how to build a backdoor and to change the system so that each ciphertext leaks information about the plaintext to the owner of the backdoor. For signature schemes the backdoor leaks information about the signing key to the backdoor owner.
As in Young and Yung's work, the backdoor uses the freedom that random selections offer in the protocol to hide a secret message encrypted to the backdoor owner. The most interesting, and very different, part is how to hide and retrieve the hidden messages.
Robin Kwant, Tanja Lange, Kimberley Thissen

Total Break of the SRP Encryption Scheme

Multivariate Public Key Cryptography (MPKC) is one of the main candidates for secure communication in a post-quantum era. Recently, Yasuda and Sakurai proposed in [7] a new multivariate encryption scheme called SRP, which combines the Square encryption scheme with the Rainbow signature scheme and the Plus modifier.
In this paper we propose a practical key recovery attack against the SRP scheme, which is based on the min-Q-rank property of the system. Our attack is very efficient and allows us to break the parameter sets recommended in [7] within minutes. Our attack shows that combining a weak scheme with a secure one does not automatically increase the security of the weak scheme.
Ray Perlner, Albrecht Petzoldt, Daniel Smith-Tone

Approximate Short Vectors in Ideal Lattices of \(\mathbb {Q}(\zeta _{p^e})\) with Precomputation of \(\mathrm {Cl}(\mathcal {O}_K)\)

Let a, b be constants such that \(b\le 7a - 2\) and \(\frac{2}{5}< a < \frac{1}{2}\). We present a classical heuristic PIP resolution method that finds a generator of any input \(\mathfrak {I}\) such that \(\mathcal {N}(\mathfrak {I})\le 2^{n^b}\) in time \(2^{n^{a + o(1)}}\), given a one-time classical precomputation of cost and size \(2^{n^{2-3a+o(1)}}\).
We also present a quantum variant of this PIP algorithm with precomputation. Let \(1/3< a < 1/2\). With a quantum coprocessor running Shor's algorithm, our algorithm solves the \(\gamma \)-ideal-SVP for \(\gamma = 2^{n^{1/2+o(1)}}\) in time \(2^{n^{a + o(1)}}\) using \(\tilde{O}(n^{2-a})\) qubits and a one-time classical precomputation on \(\mathbb {Q}(\zeta _{p^e})\) of cost \(2^{n^{2-3a+o(1)}}\). This is a superpolynomial improvement over the best classical method relying on the BKZ reduction, and it uses asymptotically fewer qubits than the quantum polynomial-time method relying on the PIP algorithm of [BS16], which requires \(\varOmega (n^3)\) qubits.
Jean-François Biasse

Quantum Key-Recovery on Full AEZ

AEZ is an authenticated encryption algorithm, submitted to the CAESAR competition. It has been selected for the third round of the competition. While some classical analyses of the algorithm have been published, the cost of these attacks is beyond the security claimed by the designers.
In this paper, we show that all the versions of AEZ are completely broken against a quantum adversary. For this, we propose a generalisation of Simon's algorithm for quantum period finding that allows us to build efficient attacks.
Xavier Bonnetain

Quantum Key Search with Side Channel Advice

Recently, a number of results have been published that show how to combine classical cryptanalysis with quantum algorithms, thereby (potentially) achieving considerable speed-ups. We follow this trend but add a novel twist by considering how to utilise side channel leakage in a quantum setting. This is non-trivial because Grover's algorithm deals with unstructured data; however, we are interested in searching through a key space which has structure due to the side channel information. We present a novel variation of a key enumeration algorithm that produces batches of keys that can be efficiently tested using Grover's algorithm. This results in the first quantum key search that benefits from side channel information.
Daniel P. Martin, Ashley Montanaro, Elisabeth Oswald, Dan Shepherd

Multidimensional Zero-Correlation Linear Cryptanalysis of Reduced Round SPARX-128

SPARX is a family of ARX-based block ciphers proposed at ASIACRYPT 2016. This family was designed with the aim of providing provable security against single-characteristic linear and differential cryptanalysis. SPARX-128/128 and SPARX-128/256 are two members of this family which operate on data blocks of length 128 bits and keys of length 128 and 256 bits, respectively. In this work, we propose a zero-correlation distinguisher that covers 5 steps (20 rounds) for both variants of SPARX-128. Then, using specific linear masks at its output and utilizing some properties of the employed linear layer and S-box, we extend this distinguisher to 5.25 steps (21 rounds).
By exploiting some properties of the key schedule, we extend the 20-round distinguisher by 4 rounds to present a 24-round multidimensional zero-correlation attack against SPARX-128/256, i.e., 6 steps out of 10 steps. The 24-round attack is then extended to a 25-round (6.25 out of 10 steps) zero-correlation attack against SPARX-128/256 with the full codebook by using the developed 21-round distinguisher. In addition, we extend the 21-round distinguisher by one round to launch a 22-round multidimensional zero-correlation attack against SPARX-128/128, i.e., 5.5 steps out of 8 steps.
Mohamed Tolba, Ahmed Abdelkhalek, Amr M. Youssef

Categorising and Comparing Cluster-Based DPA Distinguishers

Side-channel distinguishers play an important role in differential power analysis, where real-world leakage information is compared against hypothetical predictions in order to guess at the underlying secret key. A class of distinguishers which can be described as 'cluster-based' have the advantage that they are able to exploit multi-dimensional leakage samples in scenarios where only loose, 'semi-profiled' approximations of the true leakage forms are available. This is in contrast with univariate distinguishers exploiting only single points (e.g. correlation), and with Template Attacks, which require concise fitted models that can be overly sensitive to mismatch between the profiling and attack acquisitions. This paper collects together (to our knowledge, for the first time) the various different proposals for cluster-based DPA (concretely, Differential Cluster Analysis, First Principal Components Analysis, and Linear Discriminant Analysis), and shows how they fit within the robust 'semi-profiling' attack procedure proposed by Whitnall et al. at CHES 2015. We provide a discussion of the theoretical similarities and differences of the separately proposed distinguishers as well as an empirical comparison of their performance in a range of (real and simulated) leakage scenarios and with varying parameters. Our findings have application for practitioners constrained to rely on 'semi-profiled' models who wish to make informed choices about the best known procedures to exploit such information.
Xinping Zhou, Carolyn Whitnall, Elisabeth Oswald, Degang Sun, Zhu Wang
