
About This Book

This book contains revised selected papers from the 26th International Conference on Selected Areas in Cryptography, SAC 2019, held in Waterloo, ON, Canada, in August 2019. The 26 full papers presented in this volume were carefully reviewed and selected from 74 submissions. They cover the following research areas: design and analysis of symmetric key primitives and cryptosystems, including block and stream ciphers, hash functions, MAC algorithms, and authenticated encryption schemes; efficient implementations of symmetric and public key algorithms; mathematical and algorithmic aspects of applied cryptology; and cryptography for the Internet of Things.



Invited Talks


Looking Back—My Life as a Mathematician and Cryptographer

In this paper, I look back at my career as a mathematician and mathematical cryptographer, mainly concentrating on my student days and the early parts of my career. I also discuss my research philosophy and what I mean by the term “combinatorial cryptography.” Along the way, I recall some influential people, books and papers.
Douglas R. Stinson

Supersingular Isogeny Key Exchange for Beginners

This is an informal tutorial on the supersingular isogeny Diffie-Hellman protocol aimed at non-isogenists.
Craig Costello

Differential Cryptanalysis


Probabilistic Mixture Differential Cryptanalysis on Round-Reduced AES

At Eurocrypt 2017, the first secret-key distinguisher for 5-round AES was presented. Although it allows distinguishing a random permutation from an AES-like one, it seems (rather) hard to exploit such a distinguisher to mount a key-recovery attack faster than brute force. On the other hand, this result has been exploited to set up a new (competitive) secret-key distinguisher for 4-round AES, called “Mixture Differential Cryptanalysis”.
In this paper, we combine this new 4-round distinguisher with a modified version of a truncated differential distinguisher in order to set up a new 5-round distinguisher that exploits properties which are independent of the secret key, of the details of the S-Box, and of the MixColumns matrix. As a result, while a “classical” truncated differential distinguisher exploits the probability that a pair of (two) texts satisfies or does not satisfy a given differential trail independently of the other pairs, our distinguisher works with sets of \(N\gg 2 \) (related) pairs of texts. In particular, our new 5-round AES distinguisher exploits the fact that such sets of texts satisfy some properties with a probability different from that of a random permutation.
Even though this 5-round distinguisher has a higher complexity than, e.g., the “multiple-of-8” one in the literature, it can be used as a starting point to set up the first key-recovery attack on 6-round AES that directly exploits a 5-round secret-key distinguisher. The goal of this paper is indeed to present and explore new approaches, showing that even a distinguisher like the one presented at Eurocrypt – believed to be hard to exploit – can be the starting point for new secret-key distinguishers and/or key-recovery attacks.
Lorenzo Grassi

Iterative Differential Characteristic of TRIFLE-BC

TRIFLE is a Round 1 candidate of the NIST Lightweight Cryptography Standardization process. In this paper, we present an interesting 1-round iterative differential characteristic of the underlying block cipher TRIFLE-BC used in TRIFLE, which holds with probability \(2^{-3}\). Consequently, it allows mounting a distinguishing attack on TRIFLE-BC for up to 43 (out of 50) rounds with data complexity \(2^{124}\) and time complexity \(2^{124}\). Most importantly, with such an iterative differential characteristic, the forgery attack on TRIFLE can reach up to 21 (out of 50) rounds with data complexity \(2^{63}\) and time complexity \(2^{63}\). Finally, to achieve a key-recovery attack on reduced TRIFLE, we construct a differential characteristic covering three blocks by carefully choosing the positions of the iterative differential characteristic. As a result, we can mount a key-recovery attack on TRIFLE for up to 11 rounds with data complexity \(2^{63}\) and time complexity \(2^{104}\). Although the results in this paper do not threaten the security margin of TRIFLE, we hope they help further the understanding of its security.
Fukang Liu, Takanori Isobe
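As a back-of-the-envelope illustration (our own naive cost model, not the paper's full analysis), iterating a 1-round characteristic of probability \(2^{-3}\) over r rounds yields a trail of probability \(2^{-3r}\), and a distinguisher needs data on the order of the inverse of that probability:

```python
# Naive cost model for iterating a 1-round differential characteristic:
# the r-round trail holds with probability (2^-3)^r, so a distinguisher
# needs roughly 2^(3r) chosen pairs. The reported 43-round complexity of
# 2^124 reflects further attack details beyond this simple estimate.
def trail_prob_log2(rounds: int, per_round_log2: int = -3) -> int:
    return rounds * per_round_log2

data_log2 = -trail_prob_log2(41)  # a 41-round trail already needs ~2^123 data
```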

Tweakable Block Ciphers


Plaintext Recovery Attacks Against XTS Beyond Collisions

\(\mathsf {XTS}\) is a popular encryption scheme for storage devices standardized by IEEE and NIST. It is based on Rogaway’s \(\mathsf {XEX}\) tweakable block cipher and is known to be secure up to collisions between the blocks, thus up to around \(2^{n/2}\) blocks for n-bit blocks. However, this only implies that the theoretical indistinguishability notion is broken with \(O(2^{n/2})\) queries; it does not indicate the practical risk of plaintext recovery when \(\mathsf {XTS}\) is targeted. We show several plaintext recovery attacks against \(\mathsf {XTS}\) beyond collisions, and evaluate their practical impact.
Takanori Isobe, Kazuhiko Minematsu
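The \(2^{n/2}\) barrier mentioned in the abstract is the standard birthday bound; a rough sketch of the underlying collision-probability estimate (our own illustration, not from the paper):

```python
# Among q random n-bit blocks, some pair collides with probability
# roughly q*(q-1)/2^(n+1) (birthday bound). For XTS with AES, n = 128.
def collision_prob(q: int, n: int) -> float:
    return q * (q - 1) / 2 ** (n + 1)

p_small = collision_prob(2 ** 32, 128)    # ~2^-65: still negligible
p_barrier = collision_prob(2 ** 64, 128)  # ~0.5: the birthday barrier
```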

Cryptanalysis of SKINNY in the Framework of the SKINNY 2018–2019 Cryptanalysis Competition

In April 2018, Beierle et al. launched the 3rd SKINNY cryptanalysis competition, a contest aimed at motivating the analysis of their recent tweakable block cipher SKINNY. In contrast to the previous editions, the focus was on practical attacks: contestants were asked to recover a 128-bit secret key from a given set of \(2^{20}\) plaintext blocks. The suggested SKINNY instances are 4- to 20-round reduced variants of SKINNY-64-128 and SKINNY-128-128. In this paper, we explain how to solve the challenges for 10-round SKINNY-128-128 and for 12-round SKINNY-64-128 in time equivalent to roughly \(2^{52}\) simple operations. Both techniques benefit from the highly biased sets of messages that are provided and that actually correspond to the encryption of various books in ECB mode.
Patrick Derbez, Virginie Lallemand, Aleksei Udovenko

Block Ciphers and Permutations


Algebraic Cryptanalysis of Variants of Frit

Frit is a cryptographic 384-bit permutation recently proposed by Simon et al. and follows a novel design approach for built-in countermeasures against fault attacks. We analyze the cryptanalytic security of Frit in different use cases and propose attacks on the full-round primitive. We show that the inverse \(\textsc {Frit}^{-1}\) of Frit is significantly weaker than Frit from an algebraic perspective, despite the better diffusion of the inverse of the mixing functions \(\sigma \): Its round function has an effective algebraic degree of only about 1.325. We show how to craft structured input spaces to linearize up to 4 (or, conditionally, 5) rounds and thus further reduce the degree. As a result, we propose very low-dimensional start-in-the-middle zero-sum partitioning distinguishers for unkeyed Frit, as well as integral distinguishers for reduced-round Frit and full-round \(\textsc {Frit}^{-1}\). We also consider keyed Frit variants using Even-Mansour or arbitrary round keys. By using optimized interpolation attacks and symbolically evaluating up to 5 rounds of \(\textsc {Frit}^{-1}\), we obtain key-recovery attacks with a complexity of either \(2^{59}\) chosen plaintexts and \(2^{67}\) time, or \(2^{18}\) chosen ciphertexts and time (about 5 seconds in practice).
Christoph Dobraunig, Maria Eichlseder, Florian Mendel, Markus Schofnegger

Improved Interpolation Attacks on Cryptographic Primitives of Low Algebraic Degree

Symmetric cryptographic primitives with low multiplicative complexity have been proposed to improve the performance of emerging applications such as secure Multi-Party Computation. However, primitives composed of round functions with low algebraic degree require a careful evaluation to assess their security against algebraic cryptanalysis, and in particular interpolation attacks. This paper proposes new low-memory interpolation attacks on symmetric key primitives of low degree. Moreover, we present generic attacks on block ciphers with a simple key schedule; our attacks require either constant memory or constant data complexity. The improved attack is applied to the block cipher MiMC, which aims to minimize the number of multiplications in large finite fields. As a result, we can break MiMC-129/129 reduced to 38 rounds with time and data complexities \(2^{65.5}\) and \(2^{60.2}\), respectively, and with negligible memory; this attack invalidates one of the security claims of the designers. Our attack indicates that for MiMC-129/129 the full 82 rounds are necessary even with restrictions on the memory available to the attacker. For variants of MiMC with larger keys, we present new attacks with reduced complexity. Our results do not affect the security claims of the full round MiMC.
Chaoyun Li, Bart Preneel

A General Framework for the Related-Key Linear Attack Against Block Ciphers with Linear Key Schedules

We present a general framework for the related-key linear attack that can be applied to iterative block ciphers with linear key schedules. The attack utilizes a newly introduced related-key linear approximation that is obtained directly from a linear trail. The attack makes use of known related-key data consisting of triplets of a plaintext, a ciphertext, and a key difference such that the ciphertext is the encrypted value of the plaintext under the key that is the XOR of the key to be recovered and the specified key difference. If such a block cipher has a linear trail with linear correlation \(\epsilon \), it admits attacks with related-key data of size \(O(\epsilon ^{-2})\), just as with classical Matsui’s Algorithms. But since the attack makes use of related-key data, the attacker can use a linear trail with squared correlation less than \(2^{-n}\), n being the block size, when the key size is larger than n. Moreover, the standard key hypotheses seem to be appropriate even when the trail is not dominant, as validated by experiments.
The attack can be applied in two ways. First, using a linear trail with squared correlation smaller than \(2^{-n}\), one can get an effective attack covering more rounds than existing attacks against some ciphers, such as Simon48/96, Simon64/128 and Simon128/256. Secondly, using a trail with large squared correlation, one can use related-key data for key recovery even when the data is not suitable for existing linear attacks.
Jung-Keun Lee, Bonwook Koo, Woo-Hwan Kim

Real-World Cryptography


Towards a Practical Cluster Analysis over Encrypted Data

Cluster analysis is one of the most significant unsupervised machine learning methods, and it is utilized in various fields associated with privacy issues, including bioinformatics, finance and image processing. In this paper, we propose a practical solution for privacy-preserving cluster analysis based on homomorphic encryption (HE). Our work is the first HE solution for the mean-shift clustering algorithm. To reduce the super-linear complexity of the original mean-shift algorithm, we adopt a novel random sampling method called the dust sampling approach, which is well suited to HE and achieves linear complexity. We also substitute a new polynomial kernel for the usual non-polynomial kernels so that it can be computed efficiently in HE.
The HE implementation of our modified mean-shift clustering algorithm based on the approximate HE scheme HEAAN shows prominent performance in terms of speed and accuracy. It takes approximately 30 min with \(99\%\) accuracy over several public datasets with hundreds of data points, and even for a dataset with 262,144 data points, it takes only 82 min when the SIMD operations of HEAAN are applied. Our results outperform the previously best known result (SAC 2018) by over 400 times.
Jung Hee Cheon, Duhyeong Kim, Jai Hyun Park
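For orientation, a minimal plaintext sketch of one mean-shift step with a polynomial (and hence HE-friendly) kernel; the kernel shape, parameters, and data below are hypothetical stand-ins of our own, not the paper's HEAAN implementation:

```python
# One mean-shift step: move the query point to the kernel-weighted mean
# of the data. The kernel (1 - d^2)^degree is a polynomial, which is the
# kind of function that evaluates cheaply under homomorphic encryption.
def mean_shift_step(points, query, degree=4):
    weights = []
    for pt in points:
        d2 = sum((a - b) ** 2 for a, b in zip(pt, query))
        w = max(0.0, 1.0 - d2) ** degree  # clipped so distant points drop out
        weights.append(w)
    total = sum(weights)
    if total == 0:
        return query
    dim = len(query)
    return tuple(sum(w * pt[i] for w, pt in zip(weights, points)) / total
                 for i in range(dim))

# Toy data: three nearby points and one outlier; the query converges to
# the centroid of the near cluster, ignoring the outlier.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.9, 0.9)]
q = (0.05, 0.05)
for _ in range(10):
    q = mean_shift_step(points, q)
```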

Breaking the Bluetooth Pairing – The Fixed Coordinate Invalid Curve Attack

Bluetooth is a widely deployed standard for wireless communications between mobile devices. It uses authenticated Elliptic Curve Diffie-Hellman for its key exchange. In this paper we show that the authentication provided by the Bluetooth pairing protocols is insufficient and does not provide the promised MitM protection. We present a new attack that modifies the y-coordinates of the public keys (while preserving the x-coordinates). The attack compromises the encryption keys of all of the current Bluetooth authenticated pairing protocols, provided both paired devices are vulnerable. Specifically, it successfully compromises the encryption keys of 50% of the Bluetooth pairing attempts, while in the other 50% the pairing of the victims is terminated. The affected vendors have been informed and have patched their products accordingly, and the Bluetooth specification has been modified to address the new attack. We named our new attack the “Fixed Coordinate Invalid Curve Attack”. Unlike the well-known “Invalid Curve Attack” of Biehl et al. [2], which recovers the private key by sending multiple specially crafted points to the victim, our attack is a MitM attack which modifies the public keys in a way that lets the attacker deduce the shared secret.
Eli Biham, Lior Neumann
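The core observation can be seen on a toy curve (our own simplified illustration, not the Bluetooth P-256 protocol): changing only the y-coordinate of a point moves it onto a different curve with the same coefficient a but a different constant b, which a check that looks only at x cannot detect:

```python
# Toy short Weierstrass curve y^2 = x^3 + a*x + b over GF(p); all
# parameters here are illustrative, not Bluetooth's P-256.
p, a, b = 97, 2, 3

def on_curve(x, y, bb):
    return (y * y - (x ** 3 + a * x + bb)) % p == 0

# Find some point on the legitimate curve.
x0, y0 = next((x, y) for x in range(p) for y in range(1, p)
              if on_curve(x, y, b))

# Attacker keeps x but forces y = 0 (the "fixed coordinate" idea, highly
# simplified): the forged point lies on a related curve with a different
# constant term b', so validating only the x-coordinate misses it.
b_forged = (0 - (x0 ** 3 + a * x0)) % p
```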

Using TopGear in Overdrive: A More Efficient ZKPoK for SPDZ

The HighGear protocol (Eurocrypt 2018) is the fastest currently known approach to preprocessing for the SPDZ Multi-Party Computation scheme. Its backbone is formed by an Ideal Lattice-based Somewhat Homomorphic Encryption scheme and accompanying Zero-Knowledge proofs. Unfortunately, due to certain characteristics of HighGear, current implementations limit the security parameters in a number of places, mainly because of memory and bandwidth consumption constraints.
In this work we present a new approach to the ZKPoKs for the SPDZ Multi-Party Computation scheme. We rigorously formalize the original approach of HighGear and show how to improve upon it using a different proof strategy. This allows us to increase the security of the underlying protocols, whilst simultaneously also increasing the performance in terms of memory and bandwidth consumption as well as overall throughput of the SPDZ offline phase.
Carsten Baum, Daniele Cozzo, Nigel P. Smart

On the Real-World Instantiability of Admissible Hash Functions and Efficient Verifiable Random Functions

Verifiable random functions (VRFs) are essentially digital signatures with additional properties, namely verifiable uniqueness and pseudorandomness, which make VRFs a useful tool, e.g., to prevent enumeration in DNSSEC Authenticated Denial of Existence and the CONIKS key management system, or in the random committee selection of the Algorand blockchain.
Most standard-model VRFs rely on admissible hash functions (AHFs) to achieve security against adaptive attacks in the standard model. Known AHF constructions are based on error-correcting codes, which yield asymptotically efficient constructions. However, previous works do not clarify how the code should be instantiated concretely in the real world. The rate and the minimal distance of the selected code have a significant impact on the efficiency of the resulting cryptosystem; therefore, it is unclear if and how the aforementioned constructions can be used in practice.
First, we explain inherent limitations of code-based AHFs. Concretely, we show that even if we were given codes that achieve the well-known Gilbert-Varshamov or McEliece-Rodemich-Rumsey-Welch bounds, existing AHF-based constructions of verifiable random functions (VRFs) could only be instantiated quite inefficiently. Then we introduce and construct computational AHFs (cAHFs). While classical AHFs are information-theoretic, and therefore work even in the presence of computationally unbounded adversaries, cAHFs provide security only against computationally bounded adversaries. However, we show that cAHFs can be instantiated significantly more efficiently. Finally, we use our cAHF to construct the currently most efficient verifiable random function with full adaptive security in the standard model.
Tibor Jager, David Niehues

Stream Ciphers and Lightweight Cryptography


Tight Security Bounds for Generic Stream Cipher Constructions

The design of modern stream ciphers is strongly influenced by the fact that Time-Memory-Data tradeoff (TMD-TO) attacks reduce their effective key length to half of the inner state length. The classical solution is to design the cipher in accordance with the Large-State-Small-Key principle, which implies that the state length is at least twice as large as the session key length. In lightweight cryptography, which considers heavily resource-constrained devices, a large number of inner state cells is a significant drawback of this type of construction.
Recent stream cipher proposals like Lizard, Sprout, Plantlet and Fruit employ new techniques to avoid a large inner state size. However, when considering indistinguishability, none of the ciphers mentioned above provides security beyond the birthday barrier with regard to the state length.
In this paper, we present a formal indistinguishability framework for proving lower bounds on the resistance of generic stream cipher constructions against TMD-TO attacks. In particular, we first present a tight lower bound on constructions following the Large-State-Small-Key principle. Further, we show a close-to-optimal lower bound for stream cipher constructions that continuously use the initial value during keystream generation. These constructions would allow shortening the inner state size significantly and hence reducing the resource requirements of the cipher. We thus believe that Continuous-IV-Use constructions are a promising direction for future research.
Matthias Hamann, Matthias Krause, Alexander Moch
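The Large-State-Small-Key rule of thumb behind this line of work can be stated as a schematic calculation (our own illustration, not the paper's formal bounds): generic TMD-TO attacks on an s-bit inner state cost about \(2^{s/2}\), so matching a k-bit key requires a state of at least 2k bits:

```python
# Schematic TMD-TO balance: time * memory ≈ 2^s for an s-bit inner state,
# minimized at T = M = 2^(s/2). Hence generic attacks only match a k-bit
# key when the state has at least 2k bits.
def tmd_to_cost_log2(state_bits: int) -> float:
    return state_bits / 2  # attack cost exponent at the balanced point

def min_state_bits(key_bits: int) -> int:
    return 2 * key_bits    # the Large-State-Small-Key requirement

# e.g. an 80-bit key needs a 160-bit state to keep generic attacks at 2^80
```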

On the Data Limitation of Small-State Stream Ciphers: Correlation Attacks on Fruit-80 and Plantlet

Many cryptographers have focused on lightweight cryptography, and a huge number of lightweight block ciphers have been proposed. On the other hand, designing lightweight stream ciphers is a challenging task due to the well-known security criterion that the state size of stream ciphers must be at least twice the key size. The designers of Sprout addressed this issue by involving the secret key not only in the initialization but also in the keystream generation, so the state size of such stream ciphers can be smaller than twice the key size. Following this seminal work, several small-state stream ciphers have been proposed, such as Fruit, Plantlet, and LIZARD. Unlike conventional stream ciphers, these small-state stream ciphers limit the number of keystream bits that can be generated from the same key and IV pair. In this paper, our motivation is to show whether the data limitation claimed by the designers is sufficient. The correlation attack is one of the attack methods that exploit many keystream bits generated from the same key and IV pair, and we apply it to Fruit-80 and Plantlet. As a result, we can break the full Fruit-80, i.e., the designers’ data limitation is not sufficient. We can also recover the secret key of Plantlet if it allows about \(2^{53}\) keystream bits from the same key and IV pair.
Yosuke Todo, Willi Meier, Kazumaro Aoki

A Lightweight Alternative to PMAC

PMAC is a parallelizable message authentication code (MAC) based on a block cipher. PMAC has many desirable features, such as parallelizability, essential optimality in terms of the number of block cipher calls, and provable security. However, PMAC needs a pre-processing step of one block cipher call on the all-zero block to produce the input masks for all subsequent block cipher calls. This incurs an overhead in both time and memory, which is often non-negligible. In particular, this makes PMAC’s state size 3n bits. To address these issues, we propose a new parallelizable MAC as an alternative to PMAC, which we call \(\text {LAPMAC}\). \(\text {LAPMAC}\) enables high parallelizability and, unlike PMAC, does not need a pre-processing step to create an input mask. This leads to a 2n-bit state memory compared to PMAC’s 3n-bit state. Moreover, \(\text {LAPMAC}\) is highly optimized in terms of the number of block cipher calls: for example, it requires exactly the same number of block cipher calls as PMAC when one pre-processing call is allowed, and it achieves the same number of block cipher calls as the state-of-the-art serial MACs that do not need the pre-processing call.
We prove that \(\text {LAPMAC}\) is secure up to around \(2^{n/2}\) queried blocks, under the standard pseudorandomness assumption of the underlying block cipher.
Kazuhiko Minematsu

Post-quantum Analysis


An Improved Security Analysis on an Indeterminate Equation Public Key Cryptosystem by Evaluation Attacks

Akiyama, Goto, Okumura, Takagi, Nuida and Hanaoka introduced an indeterminate equation analogue of the learning with errors (IE-LWE) problem as a new computationally hard problem and constructed a candidate post-quantum cryptosystem, called “Giophantus”. Giophantus satisfies indistinguishability under chosen plaintext attack (IND-CPA) if the IE-LWE problem is computationally infeasible. Akiyama et al., Shimizu and Ikematsu submitted an improved Giophantus to the post-quantum standardization project. Beullens, Castryck and Vercauteren proposed an evaluation-at-one attack against the IND-CPA security of Giophantus. However, Akiyama et al. assert that the recommended parameters can resist Vercauteren et al.’s attack. Therefore, security analysis of Giophantus is still needed.
In this paper, we propose a new kind of evaluation attack against the IND-CPA security of Giophantus. Our attack solves the IE-LWE problem by combining a part of Vercauteren et al.’s attack with a lattice attack on low-rank lattices, e.g., rank-6 lattices for the recommended parameters. Moreover, we investigate a way to avoid our attack and some variants of it. We give some remarks on modification of the IE-LWE problem. Our experimental analysis shows that our attack can solve the IE-LWE problem efficiently, and that Giophantus does not satisfy IND-CPA security unless the IE-LWE problem is modified appropriately.
Akifumi Muroi, Shinya Okumura, Atsuko Miyaji

Ternary Syndrome Decoding with Large Weight

The Syndrome Decoding problem is at the core of many code-based cryptosystems. In this paper, we study ternary Syndrome Decoding with large weight. This problem was introduced in the Wave signature scheme but has never been thoroughly studied. We perform an algorithmic study of this problem which results in an update of the Wave parameters. On a more fundamental level, we show that ternary Syndrome Decoding with large weight is a significantly harder problem than the binary Syndrome Decoding problem, which could have several applications for the design of code-based cryptosystems.
Rémi Bricout, André Chailloux, Thomas Debris-Alazard, Matthieu Lequesne
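To make the problem concrete, here is a toy ternary Syndrome Decoding instance (a tiny example of our own, only to state the problem, not the paper's algorithms):

```python
from itertools import product

# Ternary Syndrome Decoding over GF(3): given a parity-check matrix H and
# a syndrome s, find an error vector e of prescribed (here: large)
# Hamming weight with H*e = s (mod 3). H, s and the weight are toy values.
H = [[1, 0, 2, 1],
     [0, 1, 1, 2]]
s = [2, 1]
target_weight = 3  # "large weight": most coordinates are nonzero

def syndrome(e):
    return [sum(h * x for h, x in zip(row, e)) % 3 for row in H]

# Brute force is fine at this size; at cryptographic sizes this search is
# exactly the hard step the cryptosystems rely on.
solutions = [e for e in product(range(3), repeat=4)
             if sum(x != 0 for x in e) == target_weight and syndrome(e) == s]
```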

Exploring Trade-offs in Batch Bounded Distance Decoding

Algorithms for solving the Bounded Distance Decoding problem (BDD) are used for estimating the security of lattice-based cryptographic primitives, since these algorithms can be employed to solve variants of the Learning with Errors problem (LWE). In certain parameter regimes where the target vector is small and/or sparse, batches of BDD instances emerge from a combinatorial approach where several components of the target vector are guessed before decoding. In this work we explore trade-offs in solving “Batch-BDD”, and apply our techniques to the small-secret Learning with Errors problem. We compare our techniques to previous works which solve batches of BDD instances, such as the hybrid lattice-reduction and meet-in-the-middle attack. Our results are a mixed bag. We show that, in the “enumeration setting” and with BKZ reduction, our techniques outperform a variant of the hybrid attack which does not consider time-memory trade-offs in the guessing phase for certain Round5 (17 bits out of 466), Round5-IoT (19 bits out of 240), and NTRU LPrime (23 bits out of 385) parameter sets. On the other hand, our techniques do not outperform the Hybrid Attack under standard, albeit unrealistic, assumptions. Finally, as expected, our techniques do not improve on previous works in the “sieving setting” (under standard assumptions), where combinatorial attacks in general do not perform well.
Martin R. Albrecht, Benjamin R. Curtis, Thomas Wunderer

On Quantum Slide Attacks

At Crypto 2016, Kaplan et al. proposed the first quantum exponential acceleration of a classical symmetric cryptanalysis technique: they showed that, in the superposition query model, Simon’s algorithm could be applied to accelerate the slide attack on the alternate-key cipher. This allows recovering an n-bit key with \(\mathop {}\mathopen {}\mathcal {O}\mathopen {}\left( n\right) \) queries.
In this paper we propose many other types of quantum slide attacks, inspired by classical techniques including sliding with a twist, complementation slide and mirror slidex. We also propose four-round self-similarity attacks for Feistel ciphers using XOR operations. Some of these variants combined with whitening keys (FX construction) can also be successfully attacked. We present a surprising new result involving composition of quantum algorithms that allows combining some quantum slide attacks with a quantum attack on the round function, enabling efficient key recovery even if this function is classically strong.
Finally, we analyze the case of quantum slide attacks exploiting cycle-finding, whose possibility was mentioned in a 2015 paper by Bar-On et al., where these attacks were introduced. We show that the speed-up is smaller than expected and less impressive than the above variants, but we nevertheless provide improved complexities over the previously known quantum attacks in the superposition model for some self-similar SPN and Feistel constructions.
Xavier Bonnetain, María Naya-Plasencia, André Schrottenloher
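The classical slide property that these quantum attacks build on can be demonstrated with a toy key-alternating cipher (our own illustration; the quantum part, running Simon's algorithm over slid pairs, cannot be reproduced in a few lines):

```python
# Toy 16-bit key-alternating cipher: E iterates a single round F with the
# same key every round, so E(F(x)) = F(E(x)) for every x -- the slide
# commutation property that classical and quantum slide attacks exploit.
K = 0xBEEF  # illustrative round key

def F(x: int) -> int:
    x ^= K                                   # key addition
    return ((x << 3) | (x >> 13)) & 0xFFFF   # fixed left-rotation by 3

def E(x: int, rounds: int = 10) -> int:
    for _ in range(rounds):
        x = F(x)
    return x

# E(F(x)) = F^(r+1)(x) = F(E(x)): a slid pair (x, F(x)) stays slid
# through the whole cipher, which is what the attacks detect.
```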

Post-quantum Implementations


XMSS and Embedded Systems: XMSS Hardware Accelerators for RISC-V

We describe a software-hardware co-design for the hash-based post-quantum signature scheme XMSS on a RISC-V embedded processor. We provide software optimizations for the XMSS reference implementation for SHA-256 parameter sets and several hardware accelerators that allow balancing area usage and performance based on individual needs. By integrating our hardware accelerators into the RISC-V processor, the version with the best time-area product generates a key pair (that can be used to generate \(2^{10}\) signatures) in 3.44 s, achieving an over \(54 \times \) speedup in wall-clock time compared to the pure software version. For such a key pair, signature generation takes less than 10 ms and verification takes less than 6 ms, bringing speedups of over \(42 \times \) and \(17 \times \) respectively. We tested and measured the cycle count of our implementation on an Intel Cyclone V SoC FPGA. The integration of our XMSS accelerators into an embedded RISC-V processor shows that it is possible to use hash-based post-quantum signatures for a large variety of embedded applications.
Wen Wang, Bernhard Jungk, Julian Wälde, Shuwen Deng, Naina Gupta, Jakub Szefer, Ruben Niederhagen

A Timing Attack on the HQC Encryption Scheme

The HQC public-key encryption scheme is a promising code-based submission to NIST’s post-quantum cryptography standardization process. The scheme is based on the decisional decoding problem for random quasi-cyclic codes. One problem of the HQC’s reference implementation submitted to NIST in the first round of the standardization process is that the decryption operation is not constant-time. In particular, the decryption time depends on the number of errors decoded by a BCH decoder. We use this to present the first timing attack against HQC. The attack is practical, requiring the attacker to record the decryption time of around 400 million ciphertexts for a set of HQC parameters corresponding to 128 bits of security. This makes the use of constant-time decoders mandatory for the scheme to be considered secure.
Thales Bandiera Paiva, Routo Terada

Block-Anti-Circulant Unbalanced Oil and Vinegar

We introduce a new technique for compressing the public keys of the UOV signature scheme that makes use of block-anti-circulant matrices. These matrices admit a compact representation, as for every block the remaining elements can be inferred from its first row. This space saving translates to the public key, which as a result of this technique can be shrunk by a small integer factor. We propose parameter sets that take into account the most important attacks, and present performance statistics derived from a C implementation along with a comparison to LUOV.
Alan Szepieniec, Bart Preneel

Symmetric Cryptography


A DFA Attack on White-Box Implementations of AES with External Encodings

Attacks based on DFA are an important threat to the security of white-box AES implementations. DFA typically requires that the output of AES is known. The use of external encodings that obfuscate this output is therefore a straightforward and well-known measure against such attacks. This paper presents a new DFA attack on a class of white-box implementations of AES that use a specific type of external encoding on the output. The expected work factor of the new attack is dominated by \(2^{32}\) executions of the white-box implementation.
Alessandro Amadori, Wil Michiels, Peter Roelse

Parallelizable Authenticated Encryption with Small State Size

Authenticated encryption (AE) is a symmetric-key encryption function that provides confidentiality and authenticity of a message. One of the evaluation criteria for AE is state size, the memory size needed for encryption. State size is especially important when the cryptosystem is implemented on constrained devices, while a trivial reduction obtained by using a small primitive is generally not acceptable, as it leads to degraded security.
Recently, the state size of AE has been very actively studied and a number of small-state AE schemes have been proposed, but they are inherently serial. It is a natural question whether we can come up with a parallelizable AE with a smaller state size than the state of the art.
In this paper, we study the seminal OCB mode for parallelizable AE and propose a method to reduce its state size without losing its bit security. More precisely, while the most small-state variant of OCB has a 3n-bit state, by carefully treating the checksum, which is halved, we can achieve a 2.5n-bit state while keeping the n/2-bit security of the original. We also propose an inverse-free variant of it based on OTR. While the original OTR has a 4n-bit state, ours has a 3.5n-bit state. To our knowledge these numbers are the smallest ones achieved by blockcipher modes for parallel AE and inverse-free parallel AE.
Akiko Inoue, Kazuhiko Minematsu

Deep Neural Network Attribution Methods for Leakage Analysis and Symmetric Key Recovery

Deep Neural Networks (DNNs) have recently received significant attention in the side-channel community due to their state-of-the-art performance in security testing of embedded systems. However, research on the subject mostly focused on techniques to improve the attack efficiency in terms of the number of traces required to extract secret parameters. What has not been investigated in detail is a constructive approach of DNNs as a tool to evaluate and improve the effectiveness of countermeasures against side-channel attacks. In this work, we close this gap by applying attribution methods that aim for interpreting Deep Neural Network (DNN) decisions in order to identify leaking operations in cryptographic implementations. In particular, we investigate three different approaches that have been proposed for feature visualization in image classification tasks and compare them regarding their suitability to reveal Points of Interest (POIs) in side-channel traces. We show by experiments with four separate data sets that the three methods are especially interesting in the context of side-channel protected implementations and misaligned measurements. Finally, we demonstrate that attribution can also serve as a powerful side-channel distinguisher leading to a successful retrieval of the secret key with at least five times fewer traces compared to standard key recovery in DNN-based attack setups.
Benjamin Hettwer, Stefan Gehrer, Tim Güneysu

Post-quantum Constructions


BBQ: Using AES in Picnic Signatures

This work studies the use of the AES block cipher for Picnic-style signatures, which work in the multiparty-computation-in-the-head model. It applies advancements in arithmetic circuits for the computation of the AES S-box over multiparty computation in the preprocessing model to obtain an improvement in signature sizes of 40% on average compared to using binary circuits for AES-128, AES-192 and AES-256 in combination with previous techniques. This work also discusses other methods for the computation of the S-box and provides insights into the reach and limits of the multiparty-computation-in-the-head paradigm.
Cyprien Delpech de Saint Guilhem, Lauren De Meyer, Emmanuela Orsini, Nigel P. Smart

Towards Practical GGM-Based PRF from (Module-)Learning-with-Rounding

We investigate the efficiency of a \(\mathsf {(module}\text {-}\mathsf {)LWR}\)-based PRF built using the GGM design. Our construction enjoys the security proof of the GGM construction and the \(\mathsf {(module}\text {-}\mathsf {)LWR}\) hardness assumption, which is believed to be post-quantum secure. We propose GGM-based PRFs from PRGs with a larger ratio of output to input. This reduces the number of PRG invocations, which improves the PRF performance and reduces the security loss in the GGM security reduction. Our construction bridges the gap between practical and provably secure PRFs. We demonstrate the efficiency of our construction by providing parameters achieving at least 128-bit post-quantum security and optimized implementations utilizing AVX2 vector instructions. Our PRF requires, on average, only 39.4 cycles per output byte.
Chitchanok Chuengsatiansup, Damien Stehlé
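A minimal sketch of the GGM tree with an enlarged PRG fanout, the efficiency lever the abstract describes; SHA-256 stands in for the paper's (module-)LWR-based PRG, and the fanout, depth and key below are hypothetical choices of our own:

```python
import hashlib

def prg(seed: bytes, fanout: int = 4) -> list:
    # Stand-in length-expanding PRG: SHA-256 in counter mode turns one
    # 32-byte seed into `fanout` child seeds. The paper instantiates this
    # step from the (module-)LWR assumption instead.
    return [hashlib.sha256(seed + bytes([i])).digest() for i in range(fanout)]

def ggm_prf(key: bytes, x: int, depth: int = 8, fanout: int = 4) -> bytes:
    # Walk the GGM tree from the root (the key) to a leaf (the output).
    # Each level consumes log2(fanout) input bits, so a larger fanout
    # means fewer PRG invocations for the same input length.
    bits = fanout.bit_length() - 1  # bits consumed per level (fanout = 2^bits)
    node = key
    for level in reversed(range(depth)):
        digit = (x >> (bits * level)) & (fanout - 1)
        node = prg(node, fanout)[digit]
    return node

out = ggm_prf(b"\x00" * 32, 0xB3)  # deterministic 32-byte PRF output
```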

