
2015 | Book

Advances in Cryptology – ASIACRYPT 2015

21st International Conference on the Theory and Application of Cryptology and Information Security, Auckland, New Zealand, November 29 -- December 3, 2015, Proceedings, Part II

Edited by: Tetsu Iwata, Jung Hee Cheon

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this Book

The two-volume set LNCS 9452 and 9453 constitutes the refereed proceedings of the 21st International Conference on the Theory and Application of Cryptology and Information Security, ASIACRYPT 2015, held in Auckland, New Zealand, in November/December 2015.

The 64 revised full papers and 3 invited talks presented were carefully selected from 251 submissions. They are organized in topical sections on indistinguishability obfuscation; PRFs and hashes; discrete logarithms and number theory; signatures; multiparty computation; public key encryption; ABE and IBE; zero-knowledge; attacks on ASASA; number field sieve; hashes and MACs; symmetric encryption; foundations; side-channel attacks; design of block ciphers; authenticated encryption; symmetric analysis; cryptanalysis; privacy and lattices.

Table of Contents

Frontmatter

Attacks on ASASA

Frontmatter
Key-Recovery Attacks on ASASA

The $$\mathsf {ASASA}$$ construction is a new design scheme introduced at Asiacrypt 2014 by Biryukov, Bouillaguet and Khovratovich. Its versatility was illustrated by building two public-key encryption schemes, a secret-key scheme, as well as super S-box subcomponents of a white-box scheme. However, one of the two public-key cryptosystems was recently broken at Crypto 2015 by Gilbert, Plût and Treger. As our main contribution, we propose a new algebraic key-recovery attack able to break at once the secret-key scheme as well as the remaining public-key scheme, in time complexity $$2^{63}$$ and $$2^{39}$$ respectively (the security parameter is 128 bits in both cases). Furthermore, we present a second attack of independent interest on the same public-key scheme, which heuristically reduces its security to solving an $$\mathsf {LPN}$$ instance with tractable parameters. This allows key recovery in time complexity $$2^{56}$$. Finally, as a side result, we outline a very efficient heuristic attack on the white-box scheme, which breaks an instance claiming 64 bits of security in under one minute on a single desktop computer.

Brice Minaud, Patrick Derbez, Pierre-Alain Fouque, Pierre Karpman
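The layering behind the construction's name (affine, S-box, affine, S-box, affine) can be illustrated with a toy sketch. Everything here is an illustrative stand-in (an 8-bit state, the 4-bit PRESENT S-box, and rotation-plus-constant affine maps), not the parameters of the actual schemes:

```python
# Toy sketch of the ASASA layering: three secret affine layers over GF(2)
# sandwiching two nonlinear S-box layers. Parameters are illustrative only.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]  # PRESENT 4-bit S-box

def s_layer(x):
    """Apply the 4-bit S-box to both nibbles of an 8-bit state."""
    return (SBOX[x >> 4] << 4) | SBOX[x & 0xF]

def affine_layer(x, rot, const):
    """A toy affine map over GF(2)^8: bit rotation (linear) plus a constant."""
    x = ((x << rot) | (x >> (8 - rot))) & 0xFF
    return x ^ const

def asasa(x, key=(1, 0x5A, 3, 0xC3, 5, 0x0F)):
    """A-S-A-S-A: the secret key parameterizes the three affine layers."""
    r1, c1, r2, c2, r3, c3 = key
    x = affine_layer(x, r1, c1)
    x = s_layer(x)
    x = affine_layer(x, r2, c2)
    x = s_layer(x)
    return affine_layer(x, r3, c3)
```

In the public-key variants, the composed map is made public while its decomposition into layers stays secret; the key-recovery attacks above recover that decomposition.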

Number Field Sieve

Frontmatter
The Tower Number Field Sieve

The security of pairing-based cryptosystems relies on the difficulty of computing discrete logarithms in finite fields $${\mathbb F}_{p^n}$$ where n is a small integer larger than 1. The state-of-the-art algorithm is the number field sieve (NFS) together with its many variants. When p has a special form (SNFS), as in many pairing constructions, NFS has a faster variant due to Joux and Pierrot. We present a new NFS variant for SNFS computations, which is better for some cryptographically relevant cases, according to a precise comparison of norm sizes. The new algorithm is an adaptation of Schirokauer’s variant of NFS based on tower extensions, for which we give a middlebrow presentation.

Razvan Barbulescu, Pierrick Gaudry, Thorsten Kleinjung

Hashes and MACs

Frontmatter
On the Impact of Known-Key Attacks on Hash Functions

Hash functions are often constructed based on permutations or blockciphers, and security proofs are typically done in the ideal permutation or cipher model. However, once these random primitives are instantiated, vulnerabilities of these instantiations may nullify the security. At ASIACRYPT 2007, Knudsen and Rijmen introduced known-key security of blockciphers, which gave rise to many distinguishing attacks on existing blockcipher constructions. In this work, we analyze the impact of such attacks on primitive-based hash functions. We present and formalize the weak cipher model, which captures the case where a blockcipher has a certain weakness but is perfectly random otherwise. A specific instance of this model, considering the existence of sets of B queries whose XOR equals 0 at bit-positions C, where C is an index set, covers a wide range of known-key attacks in the literature. We apply this instance to the PGV compression functions, as well as to the Grøstl (based on two permutations) and Shrimpton-Stam (based on three permutations) compression functions, and show that these designs do not seriously succumb to any differential known-key attack known to date.

Bart Mennink, Bart Preneel
Generic Security of NMAC and HMAC with Input Whitening

HMAC and its variant NMAC are the most popular approaches to deriving a MAC (and more generally, a PRF) from a cryptographic hash function. Despite nearly two decades of research, their exact security still remains far from understood in many different contexts. Indeed, recent works have renewed interest in generic attacks, i.e., attacks that treat the compression function of the underlying hash function as a black box. Generic security can be proved in a model where the underlying compression function is modeled as a random function – yet, to date, the question of proving tight, non-trivial bounds on the generic security of HMAC/NMAC even as a PRF remains a challenging open question. In this paper, we ask whether a small modification to HMAC and NMAC can allow us to exactly characterize the security of the resulting constructions, while incurring only a small penalty in efficiency. To this end, we present simple variants of NMAC and HMAC for which we prove tight bounds on the generic PRF security, expressed in terms of the numbers of construction and compression function queries necessary to break the construction. All of our constructions are obtained via a (near) black-box modification of NMAC and HMAC, which can be interpreted as an initial step of key-dependent message pre-processing. While our focus is on PRF security, a further attractive feature of our new constructions is that they clearly defeat all recent generic attacks against properties such as state recovery and universal forgery. These attacks exploit properties of the so-called “functional graph” which are not directly accessible in our new constructions.

Peter Gaži, Krzysztof Pietrzak, Stefano Tessaro
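The "input whitening" idea (key-dependent message pre-processing) can be sketched as follows. The compression function, block size, and whitening rule below are illustrative stand-ins, not the paper's exact constructions:

```python
# Sketch of an NMAC-style cascade with input whitening: each message block
# is XORed with a whitening key kw before entering the cascade keyed by k1,
# and the result is fed through an independently keyed outer call under k2.
# The truncated-SHA-256 compression function and sizes are stand-ins.
import hashlib

BLOCK = 16  # bytes per message block (illustrative)

def compress(chaining, block):
    """Stand-in compression function f: 128-bit chaining x 128-bit block."""
    return hashlib.sha256(chaining + block).digest()[:BLOCK]

def whitened_nmac(k1, k2, kw, message):
    """Cascade under k1 over kw-whitened blocks, then an outer call under k2."""
    assert len(message) % BLOCK == 0
    state = k1
    for i in range(0, len(message), BLOCK):
        block = bytes(a ^ b for a, b in zip(message[i:i + BLOCK], kw))
        state = compress(state, block)
    return compress(k2, state)  # outer, independently keyed call
```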

Symmetric Encryption

Frontmatter
On the Optimality of Non-Linear Computations of Length-Preserving Encryption Schemes

It is well known that three and four rounds of a balanced Feistel cipher, or Luby-Rackoff (LR) encryption, for two-block messages are a pseudorandom permutation (PRP) and a strong pseudorandom permutation (SPRP) respectively. A block is n bits long for some positive integer n, and a (possibly keyed) block-function is a nonlinear function mapping all blocks to themselves, e.g. a blockcipher. XLS (eXtended Latin Square) encryption, defined over two-block inputs with three blockcipher calls, was claimed to be an SPRP. However, Nandi later showed that it is not an SPRP. Motivated by these observations, we consider the following question in this paper: what is the minimum number of invocations of block-functions required to achieve PRP or SPRP security over $$\ell $$-block inputs? To answer this question, we consider all length-preserving encryption schemes, called linear encryption modes, for which the only nonlinear operations are block-functions. We prove the following results for these encryption schemes:

1. At least $$2\ell $$ (or $$2\ell -1$$) invocations of block-functions are required to achieve SPRP (or PRP, respectively). These bounds are also tight.
2. To achieve the above bound for PRP over $$\ell > 1$$ blocks, either we need at least two keys, or the scheme cannot be inverse-free (i.e., it must apply the inverses of block-functions in decryption). In particular, we show that a single-keyed inverse-free PRP needs $$2\ell $$ invocations of block-functions.
3. We show that 3-round LR using a single-keyed pseudorandom function (PRF) is a PRP if we xor a block of the input with a masking key.

Mridul Nandi
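The third result above (3-round LR with a single-keyed PRF and an input mask) can be sketched like this; the HMAC-based PRF and the 16-byte block size are illustrative assumptions:

```python
# Sketch of a 3-round Luby-Rackoff (balanced Feistel) cipher on 2n-bit
# inputs using a single keyed PRF, with one input block XOR-masked by an
# extra masking key, as in the abstract's third result. The PRF is an
# HMAC-SHA-256 stand-in; n = 128 bits here is an illustrative choice.
import hashlib
import hmac

N = 16  # block size n in bytes (illustrative)

def prf(key, x):
    """Single keyed PRF used in every round (HMAC stand-in)."""
    return hmac.new(key, x, hashlib.sha256).digest()[:N]

def lr3_masked(key, mask, left, right):
    """Three Feistel rounds under one key; right half is pre-masked."""
    right = bytes(a ^ b for a, b in zip(right, mask))
    for _ in range(3):
        left, right = right, bytes(
            a ^ b for a, b in zip(left, prf(key, right)))
    return left + right
```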
Beyond-Birthday-Bound Security for Tweakable Even-Mansour Ciphers with Linear Tweak and Key Mixing

The iterated Even-Mansour construction defines a block cipher from a tuple of public n-bit permutations $$(P_1,\ldots ,P_r)$$ by alternately xoring an n-bit round key $$k_i$$, $$i=0,\ldots ,r$$, and applying permutation $$P_i$$ to the state. The tweakable Even-Mansour construction generalizes the conventional Even-Mansour construction by replacing the n-bit round keys with n-bit strings derived from a master key and a tweak, thereby defining a tweakable block cipher. Constructions of this type have been analyzed previously, but they were either secure only up to the birthday bound, or they used a nonlinear mixing function of the key and the tweak (typically, multiplication of the key and the tweak seen as elements of some finite field) which might be costly to implement. In this paper, we tackle the question of whether it is possible to achieve beyond-birthday-bound security for such a construction by using only linear operations for mixing the key and the tweak into the state. We answer positively, describing a 4-round construction with a 2n-bit master key and an n-bit tweak which is provably secure in the Random Permutation Model up to roughly $$2^{2n/3}$$ adversarial queries.

Benoît Cogliati, Yannick Seurin
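The shape of such a construction, with only XORs mixing key and tweak into the state, can be sketched on a toy 8-bit state. The explicit shuffles stand in for the public permutations, and the mask schedule is illustrative, not the paper's proven 4-round design:

```python
# Toy 4-round tweakable Even-Mansour structure with purely linear (XOR)
# key/tweak mixing: each round input is masked with k1 ^ t or k2 ^ t.
# The state is 8 bits so the public permutations can be explicit shuffles.
import random

_rng = random.Random(0)
PERMS = [_rng.sample(range(256), 256) for _ in range(4)]  # public P_1..P_4

def tem4(k1, k2, tweak, x):
    """Four Even-Mansour rounds; all key/tweak mixing is a plain XOR."""
    masks = [k1 ^ tweak, k2 ^ tweak, k1 ^ tweak, k2 ^ tweak]
    for perm, mask in zip(PERMS, masks):
        x = perm[x ^ mask]
    return x ^ (k1 ^ tweak)  # final linear whitening
```

For every fixed key pair and tweak this is a permutation of the state, which is the minimum sanity property a tweakable block cipher sketch must satisfy.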
An Inverse-Free Single-Keyed Tweakable Enciphering Scheme

In CRYPTO 2003, Halevi and Rogaway proposed CMC, a tweakable enciphering scheme (TES) based on a blockcipher. It requires two blockcipher keys and is not inverse-free (i.e., the decryption algorithm uses the inverse (decryption) of the underlying blockcipher). We present here a new inverse-free, single-keyed TES. Our construction is a tweakable strong pseudorandom permutation (TSPRP), i.e., it is secure against chosen-plaintext-ciphertext adversaries, assuming that the underlying blockcipher is a pseudorandom permutation (PRP), i.e., secure against chosen-plaintext adversaries. In comparison, the SPRP assumption on the blockcipher is required for the TSPRP security of CMC. Our scheme can be viewed as a mixture of type-1 and type-3 Feistel ciphers, and so we call it FMix, or mixed-type Feistel cipher.

Ritam Bhaumik, Mridul Nandi

Foundations

Frontmatter
On Black-Box Complexity of Universally Composable Security in the CRS Model

In this work, we study the intrinsic complexity of black-box Universally Composable (UC) secure computation based on general assumptions. We present a thorough study of various corruption models while focusing on achieving security in the common reference string (CRS) model. Our results include the following:

Static UC secure computation. We design the first static UC-secure oblivious transfer protocol based on public-key encryption and stand-alone semi-honest oblivious transfer. As a corollary we obtain the first black-box constructions of UC-secure computation assuming only two-round semi-honest oblivious transfer.

One-sided UC secure computation. We design adaptively UC-secure two-party computation with single corruptions assuming public-key encryption with oblivious ciphertext generation.

Adaptive UC secure computation. We design an adaptively secure UC commitment scheme assuming only public-key encryption with oblivious ciphertext generation. As a corollary we obtain the first black-box constructions of adaptive UC-secure computation assuming only (trapdoor) simulatable public-key encryption (as well as a variety of concrete assumptions). We remark that such a result was not known even for non-black-box constructions.

Carmit Hazay, Muthuramakrishnan Venkitasubramaniam
Public Verifiability in the Covert Model (Almost) for Free

The covert security model (Aumann and Lindell, TCC 2007) offers an important security/efficiency trade-off: a covert player may arbitrarily cheat, but is caught with a certain fixed probability. This permits more efficient protocols than the malicious setting while still giving meaningful security guarantees. However, one drawback is that cheating cannot be proven to a third party, which prevents the use of covert protocols in many practical settings. Recently, Asharov and Orlandi (ASIACRYPT 2012) enhanced the covert model by allowing the honest player to generate a proof of cheating, checkable by any third party. Their model, which we call the PVC (publicly verifiable covert) model, offers a very compelling trade-off. Asharov and Orlandi (AO) propose a practical protocol in the PVC model, which, however, relies on a specific expensive oblivious transfer (OT) protocol incompatible with OT extension. In this work, we improve the performance of the PVC model by constructing a PVC-compatible OT extension as well as making several practical improvements to the AO protocol. As compared to the state-of-the-art OT extension-based two-party covert protocol, our PVC protocol adds relatively little: four signatures and an $$\approx 67\,\%$$ wider OT extension matrix. This is a significant improvement over the AO protocol, which requires public-key-based OTs per input bit. We present detailed estimates showing (up to orders of magnitude) concrete performance improvements over the AO protocol and a recent malicious protocol.

Vladimir Kolesnikov, Alex J. Malozemoff
Limits of Extractability Assumptions with Distributional Auxiliary Input

Extractability, or “knowledge,” assumptions have recently gained popularity in the cryptographic community, leading to the study of primitives such as extractable one-way functions, extractable hash functions, succinct non-interactive arguments of knowledge (SNARKs), and (public-coin) differing-inputs obfuscation ((PC-)$$di\mathcal {O}$$), and spurring the development of a wide spectrum of new applications relying on these primitives. For most of these applications, it is required that the extractability assumption holds even in the presence of attackers receiving some auxiliary information that is sampled from some fixed efficiently computable distribution $$\mathcal {Z}$$. We show that, assuming the existence of public-coin collision-resistant hash functions, there exists an efficient distribution $$\mathcal {Z}$$ such that either PC-$$di\mathcal {O}$$ for Turing machines does not exist, or extractable one-way functions w.r.t. auxiliary input $$\mathcal {Z}$$ do not exist. A corollary of this result shows that, additionally assuming the existence of fully homomorphic encryption with decryption in $$NC^1$$, there exists an efficient distribution $$\mathcal {Z}$$ such that either SNARKs for $$\mathsf {NP}$$ w.r.t. auxiliary input $$\mathcal {Z}$$ do not exist, or PC-$$di\mathcal {O}$$ for $$NC^1$$ circuits does not exist. To achieve our results, we develop a “succinct punctured program” technique, mirroring the powerful punctured program technique of Sahai and Waters (STOC’14), and present several other applications of this new technique. In particular, we construct succinct perfect zero-knowledge SNARGs and give a universal instantiation of random oracles in full-domain hash applications, based on PC-$$di\mathcal {O}$$. As a final contribution, we demonstrate that even in the absence of auxiliary input, care must be taken when making use of extractability assumptions. We show that (standard) $$di\mathcal {O}$$ w.r.t. any distribution $$\mathcal {D}$$ over programs and bounded-length auxiliary input is directly implied by any obfuscator that satisfies the weaker indistinguishability obfuscation (i$$\mathcal {O}$$) security notion and $$di\mathcal {O}$$ for a slightly modified distribution $$\mathcal {D}'$$ of programs (of slightly greater size) and no auxiliary input. As a consequence, we directly obtain negative results for (standard) $$di\mathcal {O}$$ in the absence of auxiliary input.

Elette Boyle, Rafael Pass
Composable and Modular Anonymous Credentials: Definitions and Practical Constructions

It takes time for theoretical advances to get used in practical schemes. Anonymous credential schemes are no exception. For instance, existing schemes suited for real-world use lack formal, composable definitions, partly because they do not support straight-line extraction and rely on random oracles for their security arguments. To address this gap, we propose unlinkable redactable signatures (URS), a new building block for privacy-enhancing protocols, which we use to construct the first efficient UC-secure anonymous credential system that supports multiple issuers, selective disclosure of attributes, and pseudonyms. Our scheme is one of the first such systems for which both the size of a credential and its presentation proof are independent of the number of attributes issued in a credential. Moreover, our new credential scheme does not rely on random oracles. As an important intermediary step, we address the problem of building a functionality for a complex credential system that can cover many different features. Namely, we design a core building block for a single issuer that supports credential issuance and presentation with respect to pseudonyms and then show how to construct a full-fledged credential system with multiple issuers in a modular way. We expect this definitional approach to be of independent interest.

Jan Camenisch, Maria Dubovitskaya, Kristiyan Haralambiev, Markulf Kohlweiss

Side-Channel Attacks

Frontmatter
ASCA, SASCA and DPA with Enumeration: Which One Beats the Other and When?

We describe three contributions regarding the Soft Analytical Side-Channel Attacks (SASCA) introduced at Asiacrypt 2014. First, we compare them with Algebraic Side-Channel Attacks (ASCA) in a noise-free simulated setting. We observe that SASCA allow more efficient key recoveries than ASCA, even in this context (favorable to the latter). Second, we describe the first working experiments of SASCA against an actual AES implementation. Doing so, we analyse their profiling requirements, put forward the significant gains they provide over profiled Differential Power Analysis (DPA) in terms of number of traces needed for key recoveries, and discuss the specificities of such concrete attacks compared to simulated ones. Third, we evaluate the distance between SASCA and DPA enhanced with computational power to perform enumeration, and show that the gap between both attacks can be quite reduced in this case. Therefore, our results bring interesting feedback for evaluation laboratories. They suggest that in several relevant scenarios (e.g. attacks exploiting many known plaintexts), taking a small margin over the security level indicated by standard DPA with enumeration should be sufficient to prevent more elaborate attacks such as SASCA. By contrast, SASCA may remain the only option in more extreme scenarios (e.g. attacks with unknown plaintexts/ciphertexts or against leakage-resilient primitives). We conclude by recalling the algorithmic dependency of the latter attacks, and therefore that our conclusions are specific to the AES.

Vincent Grosso, François-Xavier Standaert
Counting Keys in Parallel After a Side Channel Attack

Side channels provide additional information to skilled adversaries that reduces the effort needed to determine an unknown key. If sufficient side channel information is available, identification of the secret key can even become trivial. However, if not enough side information is available, some effort is still required to find the key in the key space (which now has reduced entropy). To understand the security implications of side channel attacks it is then crucial to evaluate this remaining effort in a meaningful manner. Quantifying this effort can be done by looking at two key questions: first, how ‘deep’ (at most) is the unknown key in the remaining key space, and second, how ‘expensive’ is it to enumerate keys up to a certain depth? We provide results for these two challenges. Firstly, we show how to construct an extremely efficient algorithm that accurately computes the rank of a (known) key in the list of all keys, when ordered according to some side channel attack scores. Secondly, we show how our approach can be tweaked so that it can also be utilised to enumerate the most likely keys in a parallel fashion. We are hence the first to demonstrate that a smart and parallel key enumeration algorithm exists.

Daniel P. Martin, Jonathan F. O’Connell, Elisabeth Oswald, Martijn Stam
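The notion of key rank that the first question asks about can be pinned down with a brute-force toy computation. Real algorithms, including the paper's, avoid this exponential enumeration; the sketch below only fixes the definition:

```python
# Brute-force definition of key rank: order all candidate full keys by the
# product of their per-subkey side-channel scores, and count how many score
# strictly higher than the true key. Illustrative only: feasible solely for
# tiny key spaces, unlike the efficient algorithm the paper constructs.
import math
from itertools import product

def key_rank(score_lists, true_key):
    """score_lists[i][v] = attack score assigned to value v of subkey i."""
    true_score = math.prod(score_lists[i][v] for i, v in enumerate(true_key))
    # Rank = number of full keys with a strictly higher combined score.
    return sum(
        1
        for cand in product(*[range(len(s)) for s in score_lists])
        if math.prod(score_lists[i][v] for i, v in enumerate(cand)) > true_score
    )
```

With two 1-bit subkeys scored `[0.1, 0.9]` each, the key `(1, 1)` has rank 0 and `(0, 0)` has rank 3, matching the intuition that the attack ranks it last.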
A Unified Metric for Quantifying Information Leakage of Cryptographic Devices Under Power Analysis Attacks

To design effective countermeasures for cryptosystems against side-channel power analysis attacks, the evaluation of system leakage has to be lightweight and often performed at an early stage, e.g., on the cryptographic algorithm or its source code. When real implementations and power leakage measurements are not available, security evaluation has to rely on metrics for the information leakage of algorithms. In this work, we propose such a general and unified metric: the information leakage amount (ILA). ILA has several distinct advantages over existing metrics. It unifies the measure of information leakage across various attacks: first-order and higher-order DPA and CPA attacks. It works on algorithms with no mask protection or with perfect/imperfect masking countermeasures. It is explicitly connected to the success rates of attacks, the ultimate security metric for physical implementations. Therefore, we believe ILA is an accurate indicator of the side-channel security level of the physical system, and can be used effectively and efficiently during the countermeasure design stage for choosing the best countermeasure.

Liwei Zhang, A. Adam Ding, Yunsi Fei, Pei Luo
How Secure is AES Under Leakage

While cryptographic algorithms have traditionally been designed with black-box security in mind, they often have to deal with a much stronger adversary – namely, an attacker that has some access to the execution environment of the cryptographic algorithm. This can happen in grey-box settings such as physical side-channel attacks or digital forensics, as well as due to Trojans. In this paper, we aim to address this challenge for symmetric-key cryptography. We study the security of the Advanced Encryption Standard (AES) in the presence of explicit leakage: we let a part of the internal secret state leak in each operation. We consider a wide spectrum of settings – from adversaries with limited control all the way to more powerful attacks with more knowledge of the computational platform. To mount key recoveries under leakage, we develop several novel cryptanalytic techniques such as differential bias attacks. Moreover, we demonstrate and quantify the effect of uncertainty and implementation countermeasures under such attacks: black-boxed rounds, space randomization, time randomization, and dummy operations. We observe that the residual security of AES can be considerable, especially with uncertainty and basic countermeasures in place.

Andrey Bogdanov, Takanori Isobe

Design of Block Ciphers

Frontmatter
A Synthetic Indifferentiability Analysis of Interleaved Double-Key Even-Mansour Ciphers

The iterated Even-Mansour scheme (IEM) is a generalization of the basic 1-round proposal (ASIACRYPT ’91). The scheme can use one key, two keys, or completely independent keys. Most of the published security proofs for IEM against related-key and chosen-key attacks focus on the case where all the round keys are derived from a single master key. Results beyond this barrier are relevant to the cryptographic question of whether a secure blockcipher with key size twice the block size can be built by mixing two relatively independent keys into IEM and iterating sufficiently many rounds; this strategy has in fact been used in blockcipher design for a long time. This work makes the first step towards breaking this barrier and considers IEM with interleaved double independent round keys: $$\begin{aligned} \text {IDEM}_r((k_1,k_2),m)=k_i\oplus (P_r(\ldots k_1\oplus P_2(k_2\oplus P_1(k_1\oplus m))\ldots )), \end{aligned}$$ where $$i=2$$ when r is odd, and $$i=1$$ when r is even. As results, this work proves that 15 rounds achieve (full) indifferentiability from an ideal cipher with an $$O({q^{8}}/{2^n})$$ security bound. It also proves that 7 rounds are sufficient and necessary to achieve sequential indifferentiability (a notion introduced at TCC 2012) with an $$O({q^{6}}/{2^n})$$ security bound, so that $$\text {IDEM}_{7}$$ is already correlation intractable and secure against any attack that exploits evasive relations between its input-output pairs.

Chun Guo, Dongdai Lin
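The $$\text {IDEM}_r$$ formula above transcribes directly into code; here the ideal permutations are replaced by explicit random shuffles on a toy 8-bit state:

```python
# Direct transcription of the IDEM definition: round keys alternate
# k1, k2, k1, ... before P_1, P_2, P_3, ..., and the final whitening key
# is k2 when r is odd and k1 when r is even, as in the abstract's formula.
# The explicit shuffles stand in for ideal public permutations.
import random

def make_perms(r, seed=1):
    """Generate r public 8-bit permutations P_1..P_r (toy stand-ins)."""
    rng = random.Random(seed)
    return [rng.sample(range(256), 256) for _ in range(r)]

def idem(perms, k1, k2, m):
    r = len(perms)
    keys = [k1 if i % 2 == 0 else k2 for i in range(r)]  # k1, k2, k1, ...
    x = m
    for perm, k in zip(perms, keys):
        x = perm[x ^ k]
    # Final whitening key k_i: i=2 when r is odd, i=1 when r is even.
    return x ^ (k2 if r % 2 == 1 else k1)
```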
Midori: A Block Cipher for Low Energy

In the past few years, lightweight cryptography has become a popular research discipline, with a number of ciphers and hash functions proposed. The designers’ focus has been predominantly to minimize hardware area, while other goals such as low latency have been addressed only rather recently. However, the optimization goal of low energy for block cipher design has not been explicitly addressed so far, even though it is a crucial measure of goodness for an algorithm. Indeed, a cipher optimized with respect to energy has wide applications, especially in constrained environments running on a tight power/energy budget such as medical implants. This paper presents the block cipher Midori (the name of the cipher is the Japanese translation of the word Green) that is optimized with respect to the energy consumed by the circuit per bit in an encryption or decryption operation. We deliberate on the design choices that lead to low energy consumption in an electrical circuit, and try to optimize each component of the circuit as well as its entire architecture for energy. An added motivation is to make both encryption and decryption functionalities available by a small tweak in the circuit that does not incur significant area or energy overheads. We propose two energy-efficient block ciphers, Midori128 and Midori64, with block sizes of 128 and 64 bits respectively. These ciphers have the added property that a circuit providing both encryption and decryption can be designed with very little overhead in terms of area and energy. We compare our results with other ciphers with similar characteristics: the energy consumption of Midori64 and Midori128 is by far better than that of comparable ciphers like PRINCE and NOEKEON.

Subhadeep Banik, Andrey Bogdanov, Takanori Isobe, Kyoji Shibutani, Harunaga Hiwatari, Toru Akishita, Francesco Regazzoni
Optimally Secure Block Ciphers from Ideal Primitives

Recent advances in block-cipher theory deliver security analyses in models where one or more underlying components (e.g., a function or a permutation) are ideal (i.e., randomly chosen). This paper addresses the question of finding new constructions achieving the highest possible security level under minimal assumptions in such ideal models. We present a new block-cipher construction, derived from the Swap-or-Not construction by Hoang et al. (CRYPTO ’12). With n-bit block length, our construction is a secure pseudorandom permutation (PRP) against attackers making $$2^{n - O(\log n)}$$ block-cipher queries and $$2^{n - O(1)}$$ queries to the underlying component (which itself has domain size roughly n). This security level is nearly optimal. So far, only key-alternating ciphers have been known to achieve comparable security, using O(n) independent random permutations. In contrast, we use only a single function or permutation, and still achieve similar efficiency. Our second contribution is a generic method to enhance a block cipher, initially only secure as a PRP, to additionally withstand related-key attacks without substantial loss in concrete security.

Stefano Tessaro

Authenticated Encryption

Frontmatter
Security of Full-State Keyed Sponge and Duplex: Applications to Authenticated Encryption

We provide a security analysis for full-state keyed Sponge and full-state Duplex constructions. Our results can be used for making a large class of Sponge-based authenticated encryption schemes more efficient by concurrent absorption of associated data and message blocks. In particular, we introduce and analyze a new variant of SpongeWrap with almost free authentication of associated data. The idea of using full-state message absorption for higher efficiency was first made explicit in the Donkey Sponge MAC construction, but without any formal security proof. Recently, Gaži, Pietrzak and Tessaro (CRYPTO 2015) have provided a proof for the fixed-output-length variant of Donkey Sponge. Yasuda and Sasaki (CT-RSA 2015) have considered partially full-state Sponge-based authenticated encryption schemes for efficient incorporation of associated data. In this work, we unify, simplify, and generalize these results about the security and applicability of full-state keyed Sponge and Duplex constructions; in particular, for designing more efficient authenticated encryption schemes. Compared to the proof of Gaži et al., our analysis directly targets the original Donkey Sponge construction as an arbitrary-output-length function. Our treatment is also more general than that of Yasuda and Sasaki, while yielding a more efficient authenticated encryption mode for the case that associated data might be longer than messages.

Bart Mennink, Reza Reyhanitabar, Damian Vizár
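The full-state absorption idea can be sketched as a toy duplex step in which the entire state, rather than only an outer rate, is XORed with input before each permutation call. The hash-based permutation and the sizes are illustrative stand-ins, not the analyzed constructions:

```python
# Toy full-state keyed duplex: every call absorbs a block as wide as the
# whole state, which is what lets associated data and message material be
# absorbed concurrently. SHA-256 stands in for the sponge permutation.
import hashlib

STATE = 32  # state size in bytes (illustrative)

def perm(state):
    """Stand-in for the public sponge permutation on the full state."""
    return hashlib.sha256(state).digest()

def duplex_init(key):
    """Initialize the keyed state (toy key loading)."""
    return perm(key.ljust(STATE, b'\x00'))

def duplex_step(state, block, squeeze=16):
    """Absorb a full-state-sized block; return new state and keystream."""
    mixed = bytes(a ^ b for a, b in zip(state, block.ljust(STATE, b'\x00')))
    state = perm(mixed)
    return state, state[:squeeze]
```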
Heuristic Tool for Linear Cryptanalysis with Applications to CAESAR Candidates

Differential and linear cryptanalysis are general-purpose tools for analyzing various cryptographic primitives. Both techniques have in common that they rely on the existence of good differential or linear characteristics. The difficulty of finding such characteristics depends on the primitive. For instance, AES is designed to be resistant against differential and linear attacks and therefore provides upper bounds on the probability of possible linear characteristics. On the other hand, we have primitives like SHA-1, SHA-2, and Keccak, where finding good and useful characteristics is an open problem. This becomes particularly interesting when considering, for example, competitions like CAESAR. In such competitions, many cryptographic primitives are waiting for analysis. Without suitable automatic tools, this is a virtually infeasible job. In recent years, various tools have been introduced to search for characteristics. The majority of these only deal with differential characteristics. In this work, we present a heuristic search tool which is capable of finding linear characteristics even for primitives with a relatively large state, and without a strongly aligned structure. As a proof of concept, we apply the presented tool to the underlying permutations of the first-round CAESAR candidates Ascon, ICEPOLE, Keyak, Minalpher and Prøst.

Christoph Dobraunig, Maria Eichlseder, Florian Mendel
Collision Attacks Against CAESAR Candidates
Forgery and Key-Recovery Against AEZ and Marble

In this paper we study authenticated encryption algorithms inspired by the OCB mode (Offset Codebook). These algorithms use secret offsets (masks derived from a whitening key) to turn a block cipher into a tweakable block cipher, following the XE or XEX construction. OCB has a security proof up to $$2^{n/2}$$ queries, and a matching forgery attack was described by Ferguson, where the main step of the attack recovers the whitening key. In this work we study recent authenticated encryption algorithms inspired by OCB, such as Marble, AEZ, and COPA. While Ferguson’s attack is not applicable to those algorithms, we show that it is still possible to recover the secret mask with birthday complexity. Recovering the secret mask easily leads to a forgery attack, but it also leads to more devastating attacks, with a key-recovery attack against Marble and AEZ v2 and v3 with birthday complexity. For Marble, this clearly violates the security claims of full n-bit security. For AEZ, this matches the security proof, but we believe it is nonetheless a quite undesirable property that collision attacks allow recovery of the master key, and more robust designs would be desirable. Our attack against AEZ is generic and independent of the internal permutation (in particular, it still works with the full AES), but the key recovery is specific to the key derivation used in AEZ v2 and v3. Against Marble, the forgery attack is generic, but the key recovery exploits the structure of the E permutation (4 AES rounds). In particular, we introduce a novel cryptanalytic method to attack 3 AES rounds followed by 3 inverse AES rounds, which can be of independent interest.

Thomas Fuhr, Gaëtan Leurent, Valentin Suder
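The XE/XEX masking described in the abstract above is easy to sketch. The snippet below is a toy model for illustration only: `toy_E` is a hash-based stand-in for the block cipher (so only the encryption direction is shown), and this is not the actual AEZ or Marble construction. The point it demonstrates is that every block is whitened by a mask $$2^i \cdot E_K(N)$$ derived from one secret value, so recovering that single mask (e.g. via a birthday-bound collision, as in the attacks above) unravels the whitening for all blocks.

```python
import hashlib

BLOCK = 16  # 128-bit blocks

def xtime(x: int) -> int:
    """Double an element of GF(2^128), reducing by the standard polynomial x^128 + x^7 + x^2 + x + 1."""
    x <<= 1
    if x >> 128:
        x = (x & ((1 << 128) - 1)) ^ 0x87
    return x

def toy_E(key: bytes, block: bytes) -> bytes:
    """Stand-in keyed function, NOT a real block cipher (hash-based, for illustration)."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xex_encrypt_block(key: bytes, nonce: bytes, i: int, m: bytes) -> bytes:
    """XEX: C_i = E_K(M_i xor Delta_i) xor Delta_i, with Delta_i = 2^i * E_K(N)."""
    delta = int.from_bytes(toy_E(key, nonce), "big")  # the whitening mask L = E_K(N)
    for _ in range(i):
        delta = xtime(delta)                          # block offset: 2^i * L in GF(2^128)
    d = delta.to_bytes(BLOCK, "big")
    x = bytes(a ^ b for a, b in zip(m, d))
    y = toy_E(key, x)
    return bytes(a ^ b for a, b in zip(y, d))
```

Every block's mask is a public function of the single secret L, which is why mask recovery at birthday complexity immediately yields forgeries.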

Symmetric Analysis

Frontmatter
Optimized Interpolation Attacks on LowMC

LowMC is a collection of block cipher families introduced at Eurocrypt 2015 by Albrecht et al. Its design is optimized for instantiations in multi-party computation, fully homomorphic encryption, and zero-knowledge proofs. A unique feature of LowMC is that its internal affine layers are chosen at random, and thus each block cipher family contains a huge number of instances. The Eurocrypt paper proposed two specific block cipher families of LowMC, having 80-bit and 128-bit keys. In this paper, we mount interpolation attacks (algebraic attacks introduced by Jakobsen and Knudsen) on LowMC, and show that a practically significant fraction of $$2^{-38}$$ of its 80-bit key instances could be broken $$2^{23}$$ times faster than exhaustive search. Moreover, essentially all instances that are claimed to provide 128-bit security could be broken about 1000 times faster. In order to obtain these results we optimize the interpolation attack using several new techniques. In particular, we present an algorithm that combines the two main variants of the interpolation attack, resulting in an attack which is more efficient than either one.

Itai Dinur, Yunwen Liu, Willi Meier, Qingju Wang
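The core idea of an interpolation attack, referenced in the abstract above, can be sketched in a few lines: if the cipher output is a low-degree polynomial of the input, Lagrange interpolation through a handful of known plaintext/ciphertext pairs predicts fresh ciphertexts without ever learning the key. The toy cipher below (a degree-3 polynomial over a prime field, standing in for a cipher's actual algebraic structure) is purely illustrative and is not LowMC itself.

```python
P = 2**61 - 1  # a Mersenne prime field, standing in for the binary field a real cipher uses

def lagrange_eval(xs, ys, x, p=P):
    """Evaluate at x the unique polynomial of degree < len(xs) through (xs, ys), mod p."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if j != i:
                num = num * ((x - xj) % p) % p
                den = den * ((xi - xj) % p) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p  # Fermat inverse of den
    return total

def toy_cipher(key, m, p=P):
    """Toy 'cipher' whose ciphertext is a degree-3 polynomial of the plaintext."""
    k3, k2, k1, k0 = key
    return (k3 * m**3 + k2 * m**2 + k1 * m + k0) % p

key = (123456789, 42, 7, 31337)                       # unknown to the attacker
pairs = [(m, toy_cipher(key, m)) for m in range(4)]   # degree 3 needs only 4 known pairs
xs, ys = zip(*pairs)
prediction = lagrange_eval(list(xs), list(ys), 1000)  # predict a fresh ciphertext, no key
```

The attack cost is governed by the polynomial's degree (number of pairs needed), which is why the attacks above target instances whose algebraic degree grows slowly enough.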
Another Tradeoff Attack on Sprout-Like Stream Ciphers

Sprout is a new lightweight stream cipher with a shorter internal state, proposed at FSE 2015, which uses key-dependent state updating in the keystream generation phase. Some analyses have been available on eprint so far. In this paper, we extend the design paradigm in general and study the security of Sprout-like ciphers in a unified framework. Our key new idea is to investigate the k-normality of the augmented function, a vectorial Boolean function derived from the primitive. Based on it, a dedicated time/memory/data tradeoff attack is developed for such designs. It is shown that Sprout can be broken in $$2^{79-x-y}$$ time, given $${\left[ {c \cdot (2x + 2y - 58) \cdot {2^{71 - x - y}}} \right] }$$ bits of memory and $$2^{9+x+y}$$ bits of keystream, where x / y is the number of forward/backward steps and c is a small constant. Our attack is highly flexible and compares favorably to all previous results. With carefully chosen parameters, the new attack is at least $$2^{20}$$ times faster than the Lallemand/Naya-Plasencia attack at Crypto 2015, the Maitra et al. attack and the Banik attack, and $$2^{10}$$ times faster than the Esgin/Kara attack, while using much less memory.

Bin Zhang, Xinxin Gong
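For concreteness, the tradeoff point stated in the abstract above can be evaluated numerically. The helper below is a hypothetical illustration: it simply instantiates the stated formulas on a log2 scale, with the small constant c set to 1 and a sample choice of x and y that are not taken from the paper.

```python
import math

def sprout_tradeoff(x, y, c=1.0):
    """Return (log2 time, log2 memory bits, log2 keystream bits) for the curve
    in the abstract: time 2^(79-x-y), memory c*(2x+2y-58)*2^(71-x-y) bits,
    keystream 2^(9+x+y) bits. Requires 2x + 2y > 58."""
    log_time = 79 - x - y
    log_mem = math.log2(c * (2 * x + 2 * y - 58)) + (71 - x - y)
    log_data = 9 + x + y
    return log_time, log_mem, log_data

# e.g. x = 20 forward steps and y = 13 backward steps (illustrative values)
t, m, d = sprout_tradeoff(20, 13)  # time 2^46, memory 2^41 bits, keystream 2^42 bits
```

Increasing x + y trades time for keystream one-for-one along this curve, which is the flexibility the abstract refers to.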
Reverse-Engineering of the Cryptanalytic Attack Used in the Flame Super-Malware

In May 2012, a highly advanced malware for espionage dubbed Flame was found targeting the Middle East. As it turned out, it used a forged signature to infect Windows machines by MITM-ing Windows Update. Using counter-cryptanalysis, Stevens found that the forged signature was made possible by a chosen-prefix attack on MD5 [25]. He uncovered some details proving that this attack differs from collision attacks in the public literature, yet many questions about techniques and complexity remained unanswered. In this paper, we demonstrate that significantly more information can be deduced from the example collision: these details are actually sufficient to reconstruct the collision attack to a great extent using some weak logical assumptions. In particular, we contribute an analysis of the differential path family for each of the four near-collision blocks, the chaining value differences elimination procedure, and a complexity analysis of the near-collision block attacks and the associated birthday search for various parameter choices. Furthermore, we were able to prove a lower bound for the attack’s complexity. This reverse-engineering of a non-academic cryptanalytic attack exploited in the real world seems to be without precedent. As it allegedly was developed by some nation-state(s) [11, 12, 19], we discuss potential insights into their cryptanalytic knowledge and capabilities.

Max Fillinger, Marc Stevens
Analysis of SHA-512/224 and SHA-512/256

In 2012, NIST standardized SHA-512/224 and SHA-512/256, two truncated variants of SHA-512, in FIPS 180-4. These two hash functions are faster than SHA-224 and SHA-256 on 64-bit platforms, while maintaining the same hash size and claimed security level. So far, no third-party analysis of SHA-512/224 or SHA-512/256 has been published. In this work, we examine the collision resistance of step-reduced versions of SHA-512/224 and SHA-512/256 by using differential cryptanalysis in combination with sophisticated search tools. We are able to generate practical examples of free-start collisions for 44-step SHA-512/224 and 43-step SHA-512/256. Thus, the truncation performed by these variants on their larger state allows us to attack several more rounds compared to the untruncated family members. In addition, we improve upon the best published collisions for 24-step SHA-512 and present practical collisions for 27 steps of SHA-512/224, SHA-512/256, and SHA-512.

Christoph Dobraunig, Maria Eichlseder, Florian Mendel

Cryptanalysis

Frontmatter
Tradeoff Cryptanalysis of Memory-Hard Functions

We explore time-memory and other tradeoffs for memory-hard functions, which are supposed to impose significant computational and time penalties if less memory is used than intended. We analyze three finalists of the Password Hashing Competition: Catena, which was presented at Asiacrypt 2014, yescrypt, and Lyra2. We demonstrate that Catena’s proof of tradeoff resilience is flawed, and attack it with a novel precomputation tradeoff. We show that using $$M^{4/5}$$ memory instead of M incurs no time penalty and reduces the AT cost by a factor of 25. We further generalize our method for a wide class of schemes with predictable memory access. For a wide class of data-dependent schemes, which access memory unpredictably, we develop a novel ranking tradeoff and show how to decrease the time-memory and the time-area product by significant factors. We then apply our method to yescrypt and Lyra2, also exploiting the iterative structure of their internal compression functions. The designers confirmed our attacks and responded by adding a new mode for Catena and tweaking Lyra2.

Alex Biryukov, Dmitry Khovratovich
Property Preserving Symmetric Encryption Revisited

At EUROCRYPT 2012 Pandey and Rouselakis introduced the notion of property preserving symmetric encryption, which enables checking for a property on plaintexts by running a public test on the corresponding ciphertexts. Their primary contributions are: (i) a separation between ‘find-then-guess’ and ‘left-or-right’ security notions; (ii) a concrete construction for left-or-right secure orthogonality testing in composite order bilinear groups. This work undertakes a comprehensive (crypt)analysis of property preserving symmetric encryption on both these fronts. We observe that the quadratic residue based property used in their separation result is a special case of testing equality of one-bit messages, suggest a very simple and efficient deterministic encryption scheme for testing equality, and show that the two security notions, find-then-guess and left-or-right, are tightly equivalent in this setting. On the other hand, the separation result easily generalizes for the equality property. So contextualized, we posit that the question of separation between security notions is property specific and subtler than what the authors envisaged, mandating further critical investigation. Next, we show that given a find-then-guess secure orthogonality preserving encryption of vectors of length 2n, there exists left-or-right secure orthogonality preserving encryption of vectors of length n, giving further evidence that find-then-guess is indeed a meaningful notion of security for property preserving encryption. Finally, we cryptanalyze the scheme for testing orthogonality. A simple distinguishing attack establishes that it does not even achieve the weakest notion, selective find-then-guess security. Our main attack extracts the subgroup elements used to mask the message vector and indicates greater vulnerabilities in the construction beyond indistinguishability. Overall, our work underlines the importance of cryptanalysis in provable security.

Sanjit Chatterjee, M. Prem Laxman Das
Refinements of the k-tree Algorithm for the Generalized Birthday Problem

We study two open problems proposed by Wagner in his seminal work on the generalized birthday problem. First, with the use of multicollisions, we improve Wagner’s k-tree algorithm, which solves the generalized birthday problem for the cases when k is not a power of two. The new k-tree only slightly outperforms Wagner’s k-tree. However, in some applications this suffices, and as a proof of concept, we apply the new 3-tree algorithm to slightly reduce the security of two CAESAR proposals. Next, with the use of multiple collisions based on Hellman’s table, we give improvements to the best known time-memory tradeoffs for the k-tree. As a result, we obtain a new tradeoff curve $$T^2 \cdot M^{\lg k -1} = k \cdot N$$. For instance, when $$k=4$$, the tradeoff has the form $$T^2 M = 4 \cdot N$$.

Ivica Nikolić, Yu Sasaki
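Wagner’s original k-tree algorithm for k = 4 — the baseline that the refinements in the abstract above improve on — is easy to sketch: merge the lists pairwise on the low bits, then look for a full collision between the merged lists. The parameters below (24-bit values, lists of 2^10 elements) are chosen generously so a solution exists with overwhelming probability; this illustrates the classic algorithm, not the paper’s new variants.

```python
import random

def join(A, B, mask):
    """Hash-join: all combined entries (a ^ b, parts) with (a ^ b) & mask == 0."""
    table = {}
    for val, parts in B:
        table.setdefault(val & mask, []).append((val, parts))
    out = []
    for val, parts in A:
        for bval, bparts in table.get(val & mask, []):
            out.append((val ^ bval, parts + bparts))
    return out

def four_tree(n=24, size=1 << 10, seed=7):
    """Find x1, x2, x3, x4 (one per random list) with x1 ^ x2 ^ x3 ^ x4 == 0."""
    rng = random.Random(seed)
    lists = [[(v, (v,)) for v in (rng.getrandbits(n) for _ in range(size))]
             for _ in range(4)]
    low = (1 << (n // 3)) - 1                 # level 1: agree on the low n/3 bits
    L12 = join(lists[0], lists[1], low)
    L34 = join(lists[2], lists[3], low)
    full = (1 << n) - 1                       # level 2: remaining bits must vanish too
    for _, parts in join(L12, L34, full):
        return parts
    return None

solution = four_tree()
```

With lists of size 2^{n/3}, each level keeps the list sizes roughly constant while zeroing n/3 more bits, giving Wagner’s O(2^{n/3}) cost instead of the 2^{n/2} birthday bound.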
How to Sequentialize Independent Parallel Attacks?
Biased Distributions Have a Phase Transition

We assume a scenario where an attacker can mount several independent attacks on a single CPU. Each attack can be run several times in independent ways, and each attack succeeds after a given number of steps with some given and known probability. A natural question is what the optimal strategy is for running steps of the attacks in sequence. In this paper, we develop a formalism to tackle this problem. When the number of attacks is infinite, we show that there is a magic number of steps m such that the optimal strategy is to run an attack for m steps and to try again with another attack until one succeeds. We also study the case of a finite number of attacks. We describe this problem for attacks that are exhaustive key searches, but the result is more general. We apply our result to the learning parity with noise ($$\mathsf {LPN}$$) problem and the password search problem. Although the optimal m decreases as the distribution becomes more biased, we observe a phase transition in all cases: the decrease is very abrupt, from m corresponding to exhaustive search on a single target to $$m=1$$ corresponding to running a single step of the attack on each target. For all practical biased examples, we show that the best strategy is to use $$m=1$$. For $$\mathsf {LPN}$$, this means guessing that the noise vector is 0 and solving for the secret by Gaussian elimination. This is actually better than all variants of the Blum-Kalai-Wasserman ($$\mathsf {BKW}$$) algorithm.

Sonia Bogos, Serge Vaudenay
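The LPN strategy singled out at the end of the abstract above — guess that the noise bits of a batch of samples are all zero, solve for the secret by Gaussian elimination over GF(2), and retry with a fresh batch until a candidate checks out statistically — can be sketched directly. The parameters below (n = 16 secret bits, noise rate 0.05, 400 samples) are toy values chosen for illustration.

```python
import random

def solve_gf2(eqs, n):
    """Gaussian elimination over GF(2). Each equation is an int whose low n bits
    hold the vector a and whose bit n holds b; solves <a, s> = b for s."""
    pivots = {}
    for eq in eqs:
        for col in range(n):                  # reduce against existing pivots
            if (eq >> col) & 1:
                if col in pivots:
                    eq ^= pivots[col]
                else:
                    pivots[col] = eq
                    break
    if len(pivots) < n:
        return None                           # singular system; caller retries
    s = 0
    for col in sorted(pivots, reverse=True):  # back-substitute, high columns first
        b = (pivots[col] >> n) & 1
        for c2 in range(col + 1, n):
            if (pivots[col] >> c2) & 1:
                b ^= (s >> c2) & 1
        if b:
            s |= 1 << col
    return s

def recover(samples, n, tries=300, seed=1):
    """Guess noise = 0 on n random samples, solve, and verify statistically."""
    mask = (1 << n) - 1
    def plausible(s):                         # true secret disagrees only on noisy samples
        bad = sum((bin(eq & mask & s).count("1") & 1) != (eq >> n) for eq in samples)
        return bad < len(samples) // 4
    rng = random.Random(seed)
    for _ in range(tries):
        s = solve_gf2(rng.sample(samples, n), n)
        if s is not None and plausible(s):
            return s
    return None

# toy LPN instance: b = <a, secret> xor (Bernoulli noise with rate tau)
rng = random.Random(42)
n, tau = 16, 0.05
secret = rng.getrandbits(n)
samples = []
for _ in range(400):
    a = rng.getrandbits(n)
    b = (bin(a & secret).count("1") & 1) ^ int(rng.random() < tau)
    samples.append(a | (b << n))
```

A batch of n samples is all-noiseless with probability about (1 - tau)^n, so for small tau only a handful of retries are needed, which is the m = 1 intuition from the abstract.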

Privacy and Lattices

Frontmatter
Pure Differential Privacy for Rectangle Queries via Private Partitions

We consider the task of data analysis with pure differential privacy. We construct new and improved mechanisms for statistical release of interval and rectangle queries. We also obtain a new algorithm for counting over a data stream under continual observation, whose error has optimal dependence on the data stream’s length. A central ingredient in all of these results is a differentially private partition mechanism. Given a set of data items drawn from a large universe, this mechanism outputs a partition of the universe into a small number of segments, each of which contains only a few of the data items.

Cynthia Dwork, Moni Naor, Omer Reingold, Guy N. Rothblum
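The continual-observation counting task mentioned in the abstract has a classic baseline, the binary (dyadic-interval) mechanism from earlier work, which the paper's partition-based algorithm improves upon. The sketch below is that baseline, not the paper's mechanism: each full dyadic block of the stream gets one Laplace-noised sum, and every prefix count is assembled from at most log T + 1 blocks, giving pure eps-differential privacy with polylogarithmic error.

```python
import math
import random

def laplace(scale, rng):
    """Sample Laplace(0, scale) by inverse-CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def noisy_prefix_counts(stream, eps, seed=0):
    """eps-DP estimates of all prefix sums of a 0/1 stream (binary mechanism)."""
    rng = random.Random(seed)
    T = len(stream)
    levels = T.bit_length()
    scale = (levels + 1) / eps             # each item sits in <= levels+1 blocks
    block = {}                             # one noisy sum per full dyadic block
    for l in range(levels + 1):
        size = 1 << l
        for k in range(T // size):
            lo = k * size
            block[(l, k)] = sum(stream[lo:lo + size]) + laplace(scale, rng)
    out = []
    for t in range(1, T + 1):              # decompose [0, t) into dyadic blocks
        est, pos = 0.0, 0
        for l in reversed(range(levels + 1)):
            size = 1 << l
            if pos + size <= t:
                est += block[(l, pos >> l)]
                pos += size
        out.append(est)
    return out
```

Because each prefix touches only O(log T) noisy blocks, the error grows polylogarithmically in T, whereas adding fresh noise to every prefix independently would force noise of scale T/eps (each item affects all later prefixes).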
Implementing Candidate Graded Encoding Schemes from Ideal Lattices

Multilinear maps have become popular tools for designing cryptographic schemes since a first approximate realisation candidate was proposed by Garg, Gentry and Halevi (GGH). This construction was later improved by Langlois, Stehlé and Steinfeld, who proposed GGHLite, which offers smaller parameter sizes. In this work, we provide the first implementation of such approximate multilinear maps based on ideal lattices. Implementing GGH-like schemes naively would not allow instantiating them for non-trivial parameter sizes. We hence propose a strategy which reduces parameter sizes further, and several technical improvements to allow for an efficient implementation. In particular, since finding a prime ideal when generating instances is an expensive operation, we show how we can drop this requirement. We also propose algorithms and implementations for sampling from discrete Gaussians, for inverting in some cyclotomic number fields and for computing norms of ideals in some cyclotomic number rings. Due to our improvements we were able to compute a multilinear jigsaw puzzle for $$\kappa =52$$ (resp. $$\kappa =38$$) and $$\lambda = 52$$ (resp. $$\lambda = 80$$).

Martin R. Albrecht, Catalin Cocis, Fabien Laguillaumie, Adeline Langlois
New Circular Security Counterexamples from Decision Linear and Learning with Errors

We investigate new constructions of n-circular counterexamples with a focus on the case of $$n=2$$. We have a particular interest in what qualities a cryptosystem must have to be able to separate such circular security from IND-CPA or IND-CCA security. To start, we ask whether there is something special about the asymmetry in bilinear groups that is inherent in the works of [1, 18] or whether it is actually the bilinearity that matters. As a further question, we explore whether such counterexamples are derivable from other assumptions such as the Learning with Errors (LWE) problem. If it were difficult to find such counterexamples, this might bolster our confidence in using 2-circular encryption as a method of bootstrapping Fully Homomorphic Encryption systems that are based on lattice assumptions. The results of this paper broadly expand the class of assumptions under which we can build 2-circular counterexamples. We first show for any constant $$k \ge 2$$ how to build counterexamples from a bilinear group under the decision k-linear assumption. Recall that the decision k-linear assumption becomes progressively weaker as k becomes larger. This means that we can instantiate counterexamples from symmetric bilinear groups and shows that asymmetric groups do not have any inherently special property needed for this problem. We then show how to create 2-circular counterexamples from the Learning with Errors problem. This extends the reach of these systems beyond bilinear groups and obfuscation.

Allison Bishop, Susan Hohenberger, Brent Waters
Backmatter
Metadata
Title
Advances in Cryptology – ASIACRYPT 2015
Edited by
Tetsu Iwata
Jung Hee Cheon
Copyright year
2015
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-48800-3
Print ISBN
978-3-662-48799-0
DOI
https://doi.org/10.1007/978-3-662-48800-3
