About this Book

The two-volume set LNCS 8873 and 8874 constitutes the refereed proceedings of the 20th International Conference on the Theory and Application of Cryptology and Information Security, ASIACRYPT 2014, held in Kaohsiung, Taiwan, in December 2014. The 55 revised full papers and two invited talks presented were carefully selected from 255 submissions. They are organized in topical sections on cryptology and coding theory; authenticated encryption; symmetric key cryptanalysis; side channel analysis; hyperelliptic curve cryptography; factoring and discrete log; cryptanalysis; signatures; zero knowledge; encryption schemes; outsourcing and delegation; obfuscation; homomorphic cryptography; secret sharing; block ciphers and passwords; black-box separation; composability; multi-party computation.



Cryptology and Coding Theory

Solving LPN Using Covering Codes

We present a new algorithm for solving the LPN problem. The algorithm has a similar form to some previous methods, but includes a new key step that makes use of approximations of random words to a nearest codeword in a linear code. It outperforms previous methods for many parameter choices. In particular, we can now solve instances suggested for 80-bit security in cryptographic schemes like HB variants, LPN-C and Lapin, in less than \(2^{80}\) operations.
Qian Guo, Thomas Johansson, Carl Löndahl
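
The LPN problem itself is easy to state: the solver sees noisy inner products of random vectors with a secret. The sketch below (toy parameters, plain Python, not the paper's covering-code algorithm) generates LPN samples and shows the distinguishing gap that solving algorithms exploit:

```python
import random

def lpn_samples(secret, m, tau, rng):
    """Generate m LPN samples (a, <a, s> + e mod 2) with e ~ Bernoulli(tau)."""
    k = len(secret)
    out = []
    for _ in range(m):
        a = [rng.randrange(2) for _ in range(k)]
        noiseless = sum(ai * si for ai, si in zip(a, secret)) % 2
        e = 1 if rng.random() < tau else 0
        out.append((a, noiseless ^ e))
    return out

def disagreement(guess, samples):
    """Fraction of samples whose parity disagrees with the guessed secret."""
    return sum((sum(ai * gi for ai, gi in zip(a, guess)) % 2) != b
               for a, b in samples) / len(samples)

rng = random.Random(0)
s = [rng.randrange(2) for _ in range(16)]
samples = lpn_samples(s, 200, 0.125, rng)

# The true secret disagrees on roughly a tau-fraction of the samples,
# while a wrong guess disagrees on about half of them.
wrong = [b ^ 1 for b in s]
print(disagreement(s, samples), disagreement(wrong, samples))
```

The gap between the two disagreement rates is exactly what makes LPN solvable at all; the art, as in the paper above, is in reducing the dimension of the secret before paying for that statistical test.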

Algebraic Attack against Variants of McEliece with Goppa Polynomial of a Special Form

In this paper, we present a new algebraic attack against some special cases of Wild McEliece Incognito, a generalization of the original McEliece cryptosystem. This attack does not threaten the original McEliece cryptosystem. We prove that recovering the secret key for such schemes is equivalent to solving a system of polynomial equations whose solutions have the structure of a usual vector space. Consequently, to recover a basis of this vector space, we can greatly reduce the number of variables in the corresponding algebraic system. From these solutions, we can then deduce the basis of a GRS code. Finally, the last step of the cryptanalysis of those schemes corresponds to attacking a McEliece scheme instantiated with particular GRS codes (with a polynomial relation between the support and the multipliers), which can be done in polynomial time thanks to a variant of the Sidelnikov-Shestakov attack. For Wild McEliece & Incognito, we also show that solving the corresponding algebraic system is notably easier in the case of a non-prime base field \({\mathbb F}_q\). To support our theoretical results, we have been able to practically break, in a few minutes or hours, several parameters defined over a non-prime base field \({\mathbb F}_q\) with q ∈ {9, 16, 25, 27, 32}, t ≤ 6, extension degrees m ∈ {2, 3}, and security level up to \(2^{129}\) against information set decoding.
Jean-Charles Faugère, Ludovic Perret, Frédéric de Portzamparc

New Proposals

Bivariate Polynomials Modulo Composites and Their Applications

We investigate the hardness of finding solutions to bivariate polynomial congruences modulo RSA composites. We establish necessary conditions for a bivariate polynomial to be one-way, second preimage resistant, and collision resistant based on arithmetic properties of the polynomial. From these conditions we deduce a new computational assumption that implies an efficient algebraic collision-resistant hash function. We explore the assumption and relate it to known computational problems. The assumption leads to (i) a new statistically hiding commitment scheme that composes well with Pedersen commitments, (ii) a conceptually simple cryptographic accumulator, and (iii) an efficient chameleon hash function.
Dan Boneh, Henry Corrigan-Gibbs

Cryptographic Schemes Based on the ASASA Structure: Black-Box, White-Box, and Public-Key (Extended Abstract)

In this paper we pick up an old challenge to design public key or white-box constructions from symmetric cipher components. We design several encryption schemes based on the ASASA structure ranging from fast and generic symmetric ciphers to compact public key and white-box constructions based on generic affine transformations combined with specially designed low degree non-linear layers. While explaining our design process we show several instructive attacks on the weaker variants of our schemes.
Alex Biryukov, Charles Bouillaguet, Dmitry Khovratovich

Authenticated Encryption

Beyond \(2^{c/2}\) Security in Sponge-Based Authenticated Encryption Modes

The Sponge function is known to achieve \(2^{c/2}\) security, where c is its capacity. This bound was carried over to keyed variants of the function, such as SpongeWrap, to achieve a \(\min\{2^{c/2}, 2^{\kappa}\}\) security bound, with κ the key length. Similarly, many CAESAR competition submissions are designed to comply with the classical \(2^{c/2}\) security bound. We show that Sponge-based constructions for authenticated encryption can achieve the significantly higher bound of \(\min\{2^{b/2}, 2^{c}, 2^{\kappa}\}\) asymptotically, with b > c the permutation size, by proving that the CAESAR submission NORX achieves this bound. Furthermore, we show how to apply the proof to five other Sponge-based CAESAR submissions: Ascon, CBEAM/STRIBOB, ICEPOLE, Keyak, and two out of the three PRIMATEs. A direct application of the result shows that the parameter choices of these submissions are overly conservative. Simple tweaks render the schemes considerably more efficient without sacrificing security. For instance, NORX64 can increase its rate and decrease its capacity by 128 bits and Ascon-128 can encrypt three times as fast, both without affecting the security level of their underlying modes in the ideal permutation model.
Philipp Jovanovic, Atul Luykx, Bart Mennink
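
The practical consequence of the improved bound is simple arithmetic. The sketch below compares the two bounds in bits of security, using illustrative parameters (a 1600-bit permutation, 128-bit key; these are assumptions for the example, not any submission's exact parameter set):

```python
def classical_bits(c, kappa):
    """Security in bits under the classical min{2^(c/2), 2^kappa} bound."""
    return min(c // 2, kappa)

def improved_bits(b, c, kappa):
    """Security in bits under the improved min{2^(b/2), 2^c, 2^kappa} bound."""
    return min(b // 2, c, kappa)

# Illustrative (hypothetical) parameters: 1600-bit permutation,
# 256-bit capacity, 128-bit key.
b, c, kappa = 1600, 256, 128
print(classical_bits(c, kappa), improved_bits(b, c, kappa))      # 128 128

# Shrinking the capacity to 128 bits (i.e. raising the rate, hence the
# throughput) keeps the improved bound at the key length, while the
# classical bound halves:
print(classical_bits(128, kappa), improved_bits(b, 128, kappa))  # 64 128
```

This is the sense in which the conservative parameter choices leave performance on the table: under the improved bound, capacity can be traded for rate without dropping below the key's security level.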

How to Securely Release Unverified Plaintext in Authenticated Encryption

Scenarios in which authenticated encryption schemes output decrypted plaintext before successful verification raise many security issues. These situations are sometimes unavoidable in practice, such as when devices have insufficient memory to store an entire plaintext, or when a decrypted plaintext needs early processing due to real-time requirements. We introduce the first formalization of the releasing unverified plaintext (RUP) setting. To achieve privacy, we propose using plaintext awareness (PA) along with IND-CPA. An authenticated encryption scheme is PA if it has a plaintext extractor, which tries to fool adversaries by mimicking the decryption oracle, without the secret key. Releasing unverified plaintext to the attacker then becomes harmless as it is infeasible to distinguish the decryption oracle from the plaintext extractor. We introduce two notions of plaintext awareness in the symmetric-key setting, PA1 and PA2, and show that they expose a new layer of security between IND-CPA and IND-CCA. To achieve integrity, INT-CTXT in the RUP setting is required, which we refer to as INT-RUP. These new security notions are compared with conventional definitions, and are used to make a classification of symmetric-key schemes in the RUP setting. Furthermore, we re-analyze existing authenticated encryption schemes, and provide solutions to fix insecure schemes.
Elena Andreeva, Andrey Bogdanov, Atul Luykx, Bart Mennink, Nicky Mouha, Kan Yasuda

Forging Attacks on Two Authenticated Encryption Schemes COBRA and POET

At FSE 2014, an authenticated encryption mode COBRA [4], based on a pseudorandom permutation (PRP) blockcipher, and POET [3], based on an Almost XOR-Universal (AXU) hash and a strong pseudorandom permutation (SPRP), were proposed. A few weeks later, the COBRA mode and a simple variant of the original proposal of POET (due to a forging attack [13] on the original proposal), with AES as the underlying blockcipher, were submitted to CAESAR, a competition [1] for authenticated encryption (AE). In this paper, we show a forging attack on the mode COBRA based on any n-bit blockcipher. Our attack on COBRA requires about O(n) queries and succeeds with probability about 1/2. This disproves the security claim proved in the FSE 2014 paper. We also show both privacy and forging attacks on the parallel version of POET, denoted POET-m. For the modes POET and POE (the underlying encryption modes), we demonstrate a distinguishing attack making only one encryption query when the underlying AXU hash function is instantiated with a different AXU hash function, namely a uniform random involution. Thus, our result violates the designers' main claim (Theorem 8.1 in [1]). However, the attacks cannot be extended to the specifications of POET submitted to the CAESAR competition.
Mridul Nandi

Symmetric Key Cryptanalysis

Low Probability Differentials and the Cryptanalysis of Full-Round CLEFIA-128

So far, low probability differentials for the key schedule of block ciphers have been used as a straightforward proof of security against related-key differential analysis. To achieve resistance, it is believed that for a cipher with a k-bit key it suffices for the upper bound on the probability to be \(2^{-k}\). Surprisingly, we show that this reasonable assumption is incorrect, and the probability should be (much) lower than \(2^{-k}\). Our counterexample is a related-key differential analysis of the well-established block cipher CLEFIA-128. We show that although the key schedule of CLEFIA-128 prevents differentials with a probability higher than \(2^{-128}\), the linear part of the key schedule that produces the round keys, and the Feistel structure of the cipher, allow particularly chosen differentials with a probability as low as \(2^{-128}\) to be exploited. CLEFIA-128 has \(2^{14}\) such differentials, which translate to \(2^{14}\) pairs of weak keys. The probability of each differential is too low, but the weak keys have a special structure which allows a divide-and-conquer approach to gain an advantage of \(2^{7}\) over generic analysis. We exploit the advantage and give a membership test for the weak-key class and provide analysis of the hashing modes. The proposed analysis has been tested with computer experiments on small-scale variants of CLEFIA-128. Our results do not threaten the practical use of CLEFIA.
Sareh Emami, San Ling, Ivica Nikolić, Josef Pieprzyk, Huaxiong Wang

Automatic Security Evaluation and (Related-key) Differential Characteristic Search: Application to SIMON, PRESENT, LBlock, DES(L) and Other Bit-Oriented Block Ciphers

We propose two systematic methods to describe the differential property of an S-box with linear inequalities, based on logical condition modelling and computational geometry respectively. In one method, inequalities are generated according to some conditional differential properties of the S-box; in the other, inequalities are extracted from the H-representation of the convex hull of all possible differential patterns of the S-box. For the second method, we develop a greedy algorithm for selecting a given number of inequalities from the convex hull. Using these inequalities combined with the Mixed-Integer Linear Programming (MILP) technique, we propose an automatic method for evaluating the security of bit-oriented block ciphers against the (related-key) differential attack, with several techniques for obtaining tighter security bounds, and a new tool for finding (related-key) differential characteristics automatically for bit-oriented block ciphers.
Siwei Sun, Lei Hu, Peng Wang, Kexin Qiao, Xiaoshuang Ma, Ling Song
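
For the second method, the point set fed into the convex-hull computation is the set of possible differential patterns of the S-box, which can be read off its difference distribution table (DDT). A minimal sketch for the public 4-bit PRESENT S-box (encoding each pattern as an (input difference, output difference) pair rather than as the 8-bit vector the paper's H-representation works over):

```python
# PRESENT's public 4-bit S-box
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    """Difference distribution table: ddt[din][dout] = #{x : S(x^din) ^ S(x) = dout}."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for x in range(n):
        for din in range(n):
            table[din][sbox[x ^ din] ^ sbox[x]] += 1
    return table

T = ddt(SBOX)
# The possible differential patterns, i.e. the non-zero DDT entries; these
# are the points whose convex hull yields the linear inequalities.
patterns = [(din, dout) for din in range(16) for dout in range(16)
            if T[din][dout] > 0]
print(len(patterns), max(T[d][o] for d in range(1, 16) for o in range(16)))
```

Extracting the H-representation of the hull of these points (and greedily pruning it) is then a job for a computational-geometry package; only the pattern extraction is shown here.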

Scrutinizing and Improving Impossible Differential Attacks: Applications to CLEFIA, Camellia, LBlock and Simon

Impossible differential cryptanalysis has proven to be a very powerful form of cryptanalysis against block ciphers. These attacks, even if extensively used, remain not fully understood because of their high technicality. Indeed, in numerous applications mistakes have been discovered, or the attacks lack optimality. This paper aims, in a first step, at formalizing and improving this type of attack and, in a second step, at applying our work to block ciphers based on the Feistel construction. In this context, we derive generic complexity analysis formulas for mounting such attacks and develop new ideas for optimizing impossible differential cryptanalysis. These ideas include, for example, testing parts of the internal state to reduce the number of involved key bits. We also develop in a more general way the concept of using multiple differential paths, an idea introduced before in a more restrained context. These advances lead to improvements of previous attacks against well-known ciphers such as CLEFIA-128 and Camellia, as well as to new attacks against 23-round LBlock and all members of the Simon family.
Christina Boura, María Naya-Plasencia, Valentin Suder

A Simplified Representation of AES

We show that the so-called super S-box representation of AES – that provides a simplified view of two consecutive AES rounds – can be further simplified. In the untwisted representation of AES presented here, two consecutive AES rounds are viewed as the composition of a non-linear transformation S and an affine transformation R that respectively operate on the four 32-bit columns and on the four 32-bit rows of their 128-bit input. To illustrate that this representation can be helpful for analysing the resistance of AES-like ciphers or AES-based hash functions against some structural attacks, we present some improvements of the known-key distinguisher for the 7-round variant of AES presented by Knudsen and Rijmen at ASIACRYPT 2007. We first introduce a known-key distinguisher for the 8-round variant of AES which constructs a \(2^{64}\)-tuple of (input, output) pairs satisfying a simple integral property. While this new 8-round known-key distinguisher is outperformed for 8 AES rounds by known-key differential distinguishers of time complexity \(2^{48}\) and \(2^{44}\) presented by Gilbert and Peyrin at FSE 2010 and Jean, Naya-Plasencia, and Peyrin at SAC 2013, we show that one can take advantage of its specific features to mount a known-key distinguisher for the 10-round AES with independent subkeys and the full AES-128. The obtained 10-round distinguisher has the same time complexity \(2^{64}\) as the 8-round distinguisher it is derived from, but the highlighted input-output correlation property is more intricate, and therefore its impact on the security of the 10-round AES when used as a known-key primitive, e.g. in a hash function construction, is questionable. The new known-key distinguishers do not affect the security of AES at all when used as a keyed primitive, for instance for encryption or message authentication purposes.
Henri Gilbert

Side Channel Analysis I

Simulatable Leakage: Analysis, Pitfalls, and New Constructions

In 2013, Standaert et al. proposed the notion of simulatable leakage to connect theoretical leakage resilience with the practice of side channel attacks. Their use of simulators, based on physical devices, to support proofs of leakage resilience allows verification of underlying assumptions: the indistinguishability game, involving real vs. simulated leakage, can be ‘played’ by an evaluator. Using a concrete, block cipher based leakage resilient PRG and high-level simulator definition (based on concatenating two partial leakage traces), they included detailed reasoning why said simulator (for AES-128) resists state-of-the-art side channel attacks.
In this paper, we demonstrate a distinguisher against their simulator and thereby falsify their hypothesis. Our distinguishing technique, which is evaluated using concrete implementations of the Standaert et al. simulator on several platforms, is based on ‘tracking’ consistency (resp. identifying simulator inconsistencies) in leakage traces by means of cross-correlation. In an attempt to rescue the approach, we propose several alternative simulator definitions based on splitting traces at points of low intrinsic cross-correlation. Unfortunately, these come with significant caveats, and we conclude that the most natural way of producing simulated leakage is by using the underlying construction ‘as is’ (but with a random key).
Jake Longo, Daniel P. Martin, Elisabeth Oswald, Daniel Page, Martijn Stam, Michael J. Tunstall

Multi-target DPA Attacks: Pushing DPA Beyond the Limits of a Desktop Computer

Following the pioneering CRYPTO ’99 paper by Kocher et al., differential power analysis (DPA) was initially geared around low-cost computations performed using standard desktop equipment with minimal reliance on device-specific assumptions. In subsequent years, the scope was broadened by, e.g., making explicit use of (approximate) power models. An important practical incentive for doing so is to reduce the data complexity of attacks, usually at the cost of increased computational complexity. It is this trade-off which we seek to explore in this paper. We draw together emerging ideas from several strands of the literature—high performance computing, post-side-channel global key enumeration, and effective combination of separate information sources—by way of advancing (non-profiled) ‘standard DPA’ towards a more realistic threat model in which trace acquisitions are scarce but adversaries are well resourced. Using our specially designed computing platform (including our parallel and scalable DPA implementation, which allows us to work efficiently with as many as \(2^{32}\) key hypotheses), we demonstrate some dramatic improvements that are possible for ‘standard DPA’ when combining DPA outcomes for several intermediate targets. Unlike most previous ‘information combining’ attempts, we are able to evidence the fact that the improvements apply even when the exact trace locations of the relevant information (i.e. the ‘interesting points’) are not known a priori but must be searched simultaneously with the correct subkey.
Luke Mather, Elisabeth Oswald, Carolyn Whitnall

GLV/GLS Decomposition, Power Analysis, and Attacks on ECDSA Signatures with Single-Bit Nonce Bias

The fastest implementations of elliptic curve cryptography in recent years have been achieved on curves endowed with nontrivial efficient endomorphisms, using techniques due to Gallant–Lambert–Vanstone (GLV) and Galbraith–Lin–Scott (GLS). In such implementations, a scalar multiplication [k]P is computed as a double multiplication \([k_1]P + [k_2]\psi(P)\), for ψ an efficient endomorphism and \(k_1, k_2\) appropriate half-size scalars. To compute a random scalar multiplication, one can either select the scalars \(k_1, k_2\) at random, hoping that the resulting \(k = k_1 + k_2\lambda\) is close to uniform, or pick a uniform k instead and decompose it as \(k_1 + k_2\lambda\) afterwards. The main goal of this paper is to discuss security issues that may arise using either approach.
When \(k_1\) and \(k_2\) are chosen uniformly at random in \([0,\sqrt{n})\), n = ord(P), we provide security proofs under mild assumptions. However, if they are chosen as random integers of \(\lfloor\frac12\log_2 n\rfloor\) bits, the resulting k is slightly skewed, and hence not suitable for use in schemes like ECDSA. Indeed, for GLS curves, we show that this results in a bias of up to 1 bit on a suitable multiple of \(k\bmod n\), and that this bias is practically exploitable: while lattice-based attacks cannot exploit a single bit of bias, we demonstrate that an earlier attack strategy by Bleichenbacher makes it possible. In doing so, we set a record by carrying out the first ECDSA full key recovery using a single bit of bias.
On the other hand, computing \(k_1\) and \(k_2\) by decomposing a uniformly random k ∈ [0,n) avoids any statistical bias, but the decomposition algorithm may leak side-channel information. Early proposed algorithms relied on lattice reduction and exhibited a significant amount of timing channel leakage. More recently, constant-time approaches have also been proposed, but we show that they are amenable to power analysis: we describe a template attack that can be combined with classical lattice-based attacks on ECDSA to achieve full key recovery on physical devices.
Diego F. Aranha, Pierre-Alain Fouque, Benoît Gérard, Jean-Gabriel Kammerer, Mehdi Tibouchi, Jean-Christophe Zapalowicz
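
The skew from recomposing half-size scalars can be seen exhaustively on a toy group; in the sketch below, n and lam are small arbitrary illustrative values (not real curve parameters), and the two halves of [0, n) are hit unequally often:

```python
# Exhaustive toy illustration: recomposing k = k1 + k2*lam mod n from
# half-size k1, k2 does not cover [0, n) uniformly. n and lam are
# arbitrary illustrative values, not real curve parameters.
n, lam = 1009, 123
half_bits = (n.bit_length() - 1) // 2        # floor(0.5 * log2(n))
ks = [(k1 + k2 * lam) % n
      for k1 in range(2 ** half_bits)
      for k2 in range(2 ** half_bits)]
lower = sum(k < n // 2 for k in ks)
upper = len(ks) - lower
print(lower, upper)    # the two halves of [0, n) are hit unequally often
```

At real scalar sizes the deviation from uniform is far smaller, which is why exploiting it takes Bleichenbacher's technique rather than lattice attacks.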

Soft Analytical Side-Channel Attacks

In this paper, we introduce a new approach to side-channel key recovery that combines the low time/memory complexity and noise tolerance of standard (divide-and-conquer) differential power analysis with the optimal data complexity of algebraic side-channel attacks. Our fundamental contribution for this purpose is to change the way of expressing the problem, from the system of equations used in algebraic attacks to a code, essentially inspired by low density parity check codes. We then show that such codes can be efficiently decoded, taking advantage of the sparsity of the information corresponding to intermediate variables in actual leakage traces. The resulting soft analytical side-channel attacks work under the same profiling assumptions as template attacks, and directly exploit the vectors of probabilities produced by these attacks. As a result, we bridge the gap between popular side-channel distinguishers based on simple statistical tests and previous approaches to analytical side-channel attacks that could only exploit hard information so far.
Nicolas Veyrat-Charvillon, Benoît Gérard, François-Xavier Standaert

Hyperelliptic Curve Cryptography

On the Enumeration of Double-Base Chains with Applications to Elliptic Curve Cryptography

The Double-Base Number System (DBNS) uses two bases, 2 and 3, in order to represent any integer n. A Double-Base Chain (DBC) is a special case of a DBNS expansion. DBCs have been introduced to speed up the scalar multiplication [n]P on certain families of elliptic curves used in cryptography. In this context, our contributions are twofold. First, given integers n, a, and b, we outline a recursive algorithm to compute the number of different DBCs with a leading factor dividing \(2^a3^b\) and representing n. A simple modification of the algorithm allows one to determine the number of DBCs with a specified length as well as the actual expansions. In turn, this gives rise to a method to compute an optimal DBC representing n, i.e. an expansion with minimal length. Our implementation is able to return an optimal expansion for most integers up to 260 bits in a few minutes. Second, we introduce an original and potentially more efficient approach to compute a random scalar multiplication [n]P, based on the concept of a controlled DBC. Instead of generating a random integer n and then trying to find an optimal, or at least a short, DBC to represent it, we propose to directly generate n as a random DBC with a chosen leading factor \(2^a3^b\) and length ℓ. To inform the selection of those parameters, in particular ℓ, which drives the trade-off between the efficiency and the security of the underlying cryptosystem, we enumerate the total number of DBCs having a given leading factor \(2^a3^b\) and a certain length ℓ. The comparison between this total number of DBCs and the total number of integers that we wish to represent a priori provides some guidance regarding the selection of suitable parameters. Experiments indicate that our new Near Optimal Controlled DBC approach provides a speedup of at least 10% with respect to the NAF for sizes from 192 to 512 bits.
Computations involve elliptic curves defined over \(\mathbb{F}_p\), using the Inverted Edwards coordinate system and state-of-the-art scalar multiplication techniques.
Christophe Doche
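
As a point of reference for double-base representations, the sketch below computes a greedy unsigned DBNS expansion, writing n as a sum of terms \(2^a3^b\). Note this is only the plain DBNS, not the chain-constrained (divisibility-ordered) expansions the paper enumerates, and the greedy result is generally far from length-optimal:

```python
def greedy_dbns(n):
    """Greedy unsigned double-base expansion: n as a sum of 2^a * 3^b terms."""
    terms = []
    while n > 0:
        # pick the largest 2^a * 3^b not exceeding the remainder
        best = max(2 ** a * 3 ** b
                   for a in range(n.bit_length() + 1)
                   for b in range(n.bit_length() + 1)
                   if 2 ** a * 3 ** b <= n)
        terms.append(best)
        n -= best
    return terms

t = greedy_dbns(841232)
print(t, sum(t))
```

Since there is always a power of 2 in (n/2, n], each greedy term exceeds half the remainder, so the terms are strictly decreasing and the expansion has at most log2(n) terms.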

Kummer Strikes Back: New DH Speed Records

This paper sets new speed records for high-security constant-time variable-base-point Diffie–Hellman software: 305395 Cortex-A8-slow cycles; 273349 Cortex-A8-fast cycles; 88916 Sandy Bridge cycles; 88448 Ivy Bridge cycles; 54389 Haswell cycles. There are no higher speeds in the literature for any of these platforms.
The new speeds rely on a synergy between (1) state-of-the-art formulas for genus-2 hyperelliptic curves and (2) a modern trend towards vectorization in CPUs. The paper introduces several new techniques for efficient vectorization of Kummer-surface computations.
Daniel J. Bernstein, Chitchanok Chuengsatiansup, Tanja Lange, Peter Schwabe

Jacobian Coordinates on Genus 2 Curves

This paper presents a new projective coordinate system and new explicit algorithms which together boost the speed of arithmetic in the divisor class group of genus 2 curves. The proposed formulas generalise the use of Jacobian coordinates on elliptic curves, and their application improves the speed of performing cryptographic scalar multiplications in Jacobians of genus 2 curves over prime fields by an approximate factor of 1.25x. For example, on a single core of an Intel Core i7-3770M (Ivy Bridge), we show that replacing the previous best formulas with our new set improves the cost of generic scalar multiplications from 243,000 to 195,000 cycles, and drops the cost of specialised GLV-style scalar multiplications from 166,000 to 129,000 cycles.
Huseyin Hisil, Craig Costello

Factoring and Discrete Log

Mersenne Factorization Factory

We present work in progress to completely factor seventeen Mersenne numbers using a variant of the special number field sieve in which sieving on the algebraic side is shared among the numbers. This is expected to reduce the overall factoring effort by more than 50%. As far as we know, this is the first practical application of Coppersmith’s “factorization factory” idea. Most factorizations used a new double-product approach that led to additional savings in the matrix step.
Thorsten Kleinjung, Joppe W. Bos, Arjen K. Lenstra

Improving the Polynomial Time Precomputation of Frobenius Representation Discrete Logarithm Algorithms: Simplified Setting for Small Characteristic Finite Fields

In this paper, we revisit the recent small characteristic discrete logarithm algorithms. We show that a simplified description of the algorithm, together with some additional ideas, allows us to obtain an improved complexity for the polynomial time precomputation that arises during the discrete logarithm computation. With our new improvements, this is reduced to \(O(q^6)\), where q is the cardinality of the basefield we are considering. This should be compared to the best currently documented complexity for this part, namely \(O(q^7)\). With our simplified setting, the complexity of the precomputation in the general case becomes similar to the complexity known for Kummer (or twisted Kummer) extensions.
Antoine Joux, Cécile Pierrot

Invited Talk I

Big Bias Hunting in Amazonia: Large-Scale Computation and Exploitation of RC4 Biases (Invited Paper)

RC4 is (still) a very widely-used stream cipher. Previous work by AlFardan et al. (USENIX Security 2013) and Paterson et al. (FSE 2014) exploited the presence of biases in the RC4 keystreams to mount plaintext recovery attacks against TLS-RC4 and WPA/TKIP. We improve on the latter work by performing large-scale computations to obtain accurate estimates of the single-byte and double-byte distributions in the early portions of RC4 keystreams for the WPA/TKIP context and by then using these distributions in a novel variant of the previous plaintext recovery attacks. The distribution computations were conducted using the Amazon EC2 cloud computing infrastructure and involved the coordination of \(2^{13}\) hyper-threaded cores running in parallel over a period of several days. We report on our experiences of computing at this scale using commercial cloud services. We also study Microsoft’s Point-to-Point Encryption protocol and its use of RC4, showing that it is also vulnerable to our attack techniques.
Kenneth G. Paterson, Bertram Poettering, Jacob C. N. Schuldt
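
A minimal RC4 implementation makes the kind of keystream bias being estimated easy to reproduce. The sketch below checks the classical "Key"/"Plaintext" test vector and then measures the well-known Mantin-Shamir bias of the second keystream byte toward zero (the paper targets far finer-grained single- and double-byte biases in the WPA/TKIP setting):

```python
import random

def rc4_keystream(key, n):
    """RC4 KSA + PRGA, returning the first n keystream bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key scheduling
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):                        # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Classical test vector: key "Key" encrypting "Plaintext".
ct = bytes(k ^ p for k, p in zip(rc4_keystream(b"Key", 9), b"Plaintext"))
print(ct.hex().upper())   # BBF316E8D940AF0AD3

# Mantin-Shamir bias: the second keystream byte is 0 with probability
# roughly 2/256 instead of the uniform 1/256.
rng = random.Random(1)
trials = 20000
hits = sum(rc4_keystream(bytes(rng.randrange(256) for _ in range(16)), 2)[1] == 0
           for _ in range(trials))
print(hits / trials)
```

Estimating the full single- and double-byte distributions to the accuracy the attacks need is exactly what required the large-scale EC2 computation described above.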


Cryptanalysis

Multi-user Collisions: Applications to Discrete Logarithm, Even-Mansour and PRINCE

In this paper, we investigate the multi-user setting both in public and in secret-key cryptanalytic applications. In this setting, the adversary tries to recover keys of many users in parallel more efficiently than with classical attacks, i.e., the number of recovered keys multiplied by the time complexity to find a single key, by amortizing the cost among several users. One possible scenario is to recover a single key in a large set of users more efficiently than to recover a key in the classical model. Another possibility is, after some shared precomputation, to be able to learn individual keys very efficiently. This latter model is close to traditional time/memory tradeoff attacks with precomputation. With these goals in mind, we introduce two new algorithmic ideas to improve collision-based attacks in the multi-user setting. Both ideas are derived from the parallelizable collision search as proposed by van Oorschot and Wiener. This collision search uses precomputed chains obtained by iterating some basic function. In our cryptanalytic application, each pair of merging chains can be used to correlate the keys of two distinct users. The first idea is to construct a graph, whose vertices are keys and whose edges are these correlations. When the graph becomes connected, we simultaneously recover all the keys. Thanks to random graph analysis techniques, we can show that the number of edges needed to make this event occur is small enough to obtain some improved attacks. The second idea modifies the basic technique of van Oorschot and Wiener: instead of waiting for two chains to merge, we now require that they become parallel.
We first show that, using the first idea alone, we can recover the discrete logarithms of L users in a group of size N in time \(\widetilde{O}(\sqrt{NL})\). We then put the two ideas together and show that in the multi-user Even-Mansour scheme, all the keys of \(L = N^{1/3}\) users can be found with \(N^{1/3+\epsilon}\) queries for each user (where N is the domain size). Finally, we consider the PRINCE block cipher (with 128-bit keys and 64-bit blocks) and find the keys of 2 users among a set of \(2^{32}\) users in time \(2^{65}\). We also describe a new generic attack in the classical model for PRINCE.
Pierre-Alain Fouque, Antoine Joux, Chrysanthi Mavromati
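
While the multi-user chain technique above is more elaborate, the collision principle behind Even-Mansour attacks can be illustrated on the single-key variant \(E(x) = P(x \oplus k) \oplus k\): since \(E(x) \oplus x\) evaluated at \(x \oplus k\) equals \(P(x) \oplus x\), a collision between the two functions suggests the key. The sketch below uses a toy 16-bit domain and seeded randomness; it is a minimal classical-model illustration, not the paper's multi-user algorithm:

```python
import random

N = 1 << 16                                  # toy domain size
rng = random.Random(7)
perm = list(range(N))
rng.shuffle(perm)                            # public random permutation P
key = rng.randrange(N)                       # single-key Even-Mansour

def E(x):                                    # E(x) = P(x ^ k) ^ k
    return perm[x ^ key] ^ key

# f(x) = E(x) ^ x satisfies f(x ^ k) = P(x) ^ x = g(x), so a collision
# f(x) = g(y) suggests the key candidate x ^ y.
samples = 1024
f = {}
for _ in range(samples):
    x = rng.randrange(N)
    f.setdefault(E(x) ^ x, x)

recovered = None
for _ in range(samples):
    y = rng.randrange(N)
    x = f.get(perm[y] ^ y)                   # g(y) = P(y) ^ y
    if x is not None:
        cand = x ^ y
        # filter accidental collisions with a few trial encryptions
        if all(E(t) == perm[t ^ cand] ^ cand for t in (0, 1, 2)):
            recovered = cand
            break

print(recovered == key)
```

With about \(\sqrt{N}\) evaluations on each side, a true collision appears with high probability by the birthday bound; the verification step weeds out the chance collisions.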

Cryptanalysis of Iterated Even-Mansour Schemes with Two Keys

The iterated Even-Mansour (EM) scheme is a generalization of the original 1-round construction proposed in 1991, and can use one key, two keys, or completely independent keys. In this paper, we methodically analyze the security of all the possible iterated Even-Mansour schemes with two n-bit keys and up to four rounds, and show that none of them provides more than n-bit security. Our attacks are based on a new cryptanalytic technique called multibridge, which splits the cipher into different parts in a novel way, such that they can be analyzed independently, exploiting its self-similarity properties. After the analysis of the parts, the key suggestions are efficiently joined using a meet-in-the-middle procedure.
As a demonstration of the multibridge technique, we devise a new attack on 4 steps of the LED-128 block cipher, reducing the time complexity of the best known attack on this scheme from \(2^{96}\) to \(2^{64}\). Furthermore, we show that our technique can be used as a generic key-recovery tool, when combined with some statistical distinguishers (like those recently constructed in reflection cryptanalysis of GOST and PRINCE).
Itai Dinur, Orr Dunkelman, Nathan Keller, Adi Shamir

Meet-in-the-Middle Attacks on Generic Feistel Constructions

We show key recovery attacks on generic balanced Feistel ciphers. The analysis is based on the meet-in-the-middle technique and exploits truncated differentials that are present in the ciphers due to the Feistel construction. Depending on the type of round function, we distinguish two types of Feistels and show attacks on both. For the first type, which is the most general Feistel, we show a 5-round distinguisher (based on a truncated differential), which allows us to launch 6-round and 10-round attacks, for single-key and double-key sizes, respectively. For the second type, we assume the round function follows the SPN structure with a linear layer P that has a maximal branch number, and based on a 7-round distinguisher, we show attacks that reach up to 14 rounds. Our attacks outperform all the known attacks for any key size, have been experimentally verified (implemented on a regular PC), and provide new lower bounds on the number of rounds required to achieve a practical and secure Feistel.
Jian Guo, Jérémy Jean, Ivica Nikolić, Yu Sasaki
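The generic balanced Feistel structure analyzed above can be sketched as follows; this is a minimal illustration with a placeholder round function F (the attacks are generic in F), not any concrete cipher from the paper:

```python
# Minimal sketch of a generic balanced Feistel cipher on 16-bit halves.

def F(half, round_key):
    # Placeholder keyed round function; any function works here, since
    # the Feistel structure is invertible regardless of F.
    x = (half ^ round_key) & 0xFFFF
    return ((x * x + x) % 0x10001) & 0xFFFF

def feistel_encrypt(left, right, round_keys):
    # One round: (L, R) -> (R, L xor F(R, k))
    for k in round_keys:
        left, right = right, left ^ F(right, k)
    return left, right

def feistel_decrypt(left, right, round_keys):
    # Undo the rounds in reverse order.
    for k in reversed(round_keys):
        left, right = right ^ F(left, k), left
    return left, right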

XLS is Not a Strong Pseudorandom Permutation

At FSE 2007, Ristenpart and Rogaway described XLS, a generic method to construct a length-preserving strong pseudorandom permutation (SPRP) over bit-strings of length at least n. It requires a length-preserving permutation \(\mathcal{E}\) over bit-strings whose length is a multiple of n, and a blockcipher E with block size n. The SPRP security of XLS was proved under the SPRP assumptions on both \(\mathcal{E}\) and E. In this paper we disprove this claim by demonstrating an SPRP distinguisher against XLS which makes only three queries and has distinguishing advantage of about 1/2. XLS uses a multi-permutation linear function called mix2. We also show that if we replace mix2 by any invertible linear function, the XLS construction remains insecure. Thus the mode has an inherent weakness.
Mridul Nandi


Structure-Preserving Signatures on Equivalence Classes and Their Application to Anonymous Credentials

Structure-preserving signatures are a quite recent but important building block for many cryptographic protocols. In this paper, we introduce a new type of structure-preserving signatures, which allows signing vectors of group elements and consistently randomizing signatures and messages without knowledge of any secret. More precisely, we consider messages to be (representatives of) equivalence classes on vectors of group elements (coming from a single prime-order group), which are determined by the mutual ratios of the discrete logarithms of the representative’s vector components. By multiplying each component with the same scalar, a different representative of the same equivalence class is obtained. We propose a definition of such a signature scheme and a security model, and give an efficient construction, which is secure in the SXDH setting, where EUF-CMA security holds against generic forgers in the generic group model and the so-called class-hiding property holds under the DDH assumption.
As a second contribution, we use the proposed signature scheme to build an efficient multi-show attribute-based anonymous credential (ABC) system that allows encoding an arbitrary number of attributes. This is, to the best of our knowledge, the first ABC system that provides constant-size credentials and constant-size showings. To allow an efficient construction in combination with the proposed signature scheme, we also introduce a new, efficient, randomizable polynomial commitment scheme. Aside from these two building blocks, the credential system requires a very short and constant-size proof of knowledge to provide freshness in the showing protocol.
Christian Hanser, Daniel Slamanig
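A minimal sketch (purely illustrative, not the proposed scheme) of the equivalence classes described above: in multiplicative notation, "multiplying each component with the same scalar" corresponds to raising every group element of the vector to that scalar, which preserves the mutual ratios of the discrete logarithms; the tiny group parameters are assumptions for illustration only:

```python
# Equivalence classes on vectors of group elements: two vectors are
# equivalent iff one is a component-wise s-th power of the other.
# Toy prime-order subgroup: g = 4 has order q = 1019 in Z_2039^*.

p, q, g = 2039, 1019, 4

def representative(exponents):
    # The vector (g^e1, g^e2, ...) representing the class of (e1 : e2 : ...).
    return [pow(g, e, p) for e in exponents]

def change_representative(M, s):
    # Raise every component to the same scalar s: a new representative of
    # the SAME class, since all discrete-log ratios e_i / e_j are preserved.
    return [pow(m, s, p) for m in M]
```

For example, randomizing the representative of exponent vector (3, 5) with s = 7 yields the representative of (21, 35), whose component ratio 21/35 equals 3/5.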

On Tight Security Proofs for Schnorr Signatures

The Schnorr signature scheme is the most efficient signature scheme based on the discrete logarithm problem, and a long line of research investigates the existence of a tight security reduction for this scheme in the random oracle model. Almost all recent works present lower tightness bounds, and most recently Seurin (Eurocrypt 2012) showed that under certain assumptions the non-tight security proof for Schnorr signatures in the random oracle model by Pointcheval and Stern (Eurocrypt 1996) is essentially optimal. All previous works in this direction rule out tight reductions from the (one-more) discrete logarithm problem. In this paper we introduce a new meta-reduction technique, which shows lower bounds for the large and very natural class of generic reductions. A generic reduction is independent of a particular representation of group elements, and most reductions in state-of-the-art security proofs have this desirable property. Our approach shows unconditionally that there is no tight generic reduction from any natural computational problem Π defined over algebraic groups (including even interactive problems) to breaking Schnorr signatures, unless solving Π is easy.
Nils Fleischhacker, Tibor Jager, Dominique Schröder
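For reference, the Schnorr signature scheme itself can be sketched as follows; this is a toy instantiation over a tiny subgroup, with parameters far too small for real security:

```python
import hashlib
import secrets

# Toy Schnorr signatures: g = 4 generates the order-q subgroup of Z_p^*.
p, q, g = 2039, 1019, 4

def H(R, msg):
    # Hash the commitment R together with the message, reduced mod q.
    data = str(R).encode() + msg
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1     # secret key in [1, q-1]
    return x, pow(g, x, p)               # (sk, pk = g^x)

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1     # ephemeral nonce
    R = pow(g, k, p)                     # commitment
    c = H(R, msg)                        # challenge
    s = (k + c * x) % q                  # response
    return R, s

def verify(y, msg, sig):
    R, s = sig
    c = H(R, msg)
    return pow(g, s, p) == (R * pow(y, c, p)) % p
```

Verification works because g^s = g^(k + c·x) = g^k · (g^x)^c = R · y^c; the reductions discussed above relate forging such signatures to problems like discrete log.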


Square Span Programs with Applications to Succinct NIZK Arguments

We propose a new characterization of NP using square span programs (SSPs). We first characterize NP as affine map constraints on small vectors. We then relate this characterization to SSPs, which are similar to, but simpler than, Quadratic Span Programs (QSPs) and Quadratic Arithmetic Programs (QAPs), since they use a single series of polynomials rather than two or three.
We use SSPs to construct succinct non-interactive zero-knowledge arguments of knowledge. For performance, our proof system is defined over Type III bilinear groups; proofs consist of just 4 group elements, verified in just 6 pairings. Concretely, using the Pinocchio libraries, we estimate that proofs will consist of 160 bytes verified in less than 6 ms.
George Danezis, Cédric Fournet, Jens Groth, Markulf Kohlweiss

Better Zero-Knowledge Proofs for Lattice Encryption and Their Application to Group Signatures

Lattice problems are an attractive basis for cryptographic systems because they seem to offer better security than discrete logarithm and factoring based problems. Efficient lattice-based constructions are known for signature and encryption schemes. However, the constructions known for more sophisticated schemes such as group signatures are still far from being practical. In this paper we make a number of steps towards efficient lattice-based constructions of more complex cryptographic protocols. First, we provide a more efficient way to prove knowledge of plaintexts for lattice-based encryption schemes. We then show how our new protocol can be combined with a proof of knowledge for Pedersen commitments in order to prove that the committed value is the same as the encrypted one. Finally, we make use of this to construct a new group signature scheme that is a “hybrid” in the sense that privacy holds under a lattice-based assumption while security is discrete-logarithm-based.
Fabrice Benhamouda, Jan Camenisch, Stephan Krenn, Vadim Lyubashevsky, Gregory Neven

