2015 | Book

Progress in Cryptology -- INDOCRYPT 2015

16th International Conference on Cryptology in India, Bangalore, India, December 6-9, 2015, Proceedings

About this book

This book constitutes the refereed proceedings of the 16th International Conference on Cryptology in India, INDOCRYPT 2015, held in Bangalore, India, in December 2015. The 19 revised full papers presented in this book were carefully reviewed and selected from 60 submissions. The papers are organized in topical sections on public key encryption; cryptanalysis; side channel attacks; information theoretic cryptography; and lightweight cryptography.

Table of Contents

Frontmatter

Public Key Encryption

Frontmatter
Compact Attribute-Based Encryption and Signcryption for General Circuits from Multilinear Maps
Abstract
In this paper, we start by presenting a key-policy attribute-based encryption (ABE) scheme supporting decryption policies realizable as general polynomial-size circuits and featuring compactness, in the sense that our ABE construction exhibits short ciphertexts and shorter decryption keys compared to existing similar works. We then design a key-policy attribute-based signcryption (ABSC) scheme which enjoys several interesting properties that were not achievable before. It supports signing and decryption policies representable as arbitrary polynomial-size circuits, and it generates short ciphertexts. Our constructions employ multilinear maps and achieve selective security in the standard model under standard complexity assumptions. More interestingly, our key-policy constructions can be converted to the corresponding ciphertext-policy variants, still achieving short ciphertexts, by utilizing the technique of universal circuits.
Pratish Datta, Ratna Dutta, Sourav Mukhopadhyay
Dynamic Key-Aggregate Cryptosystem on Elliptic Curves for Online Data Sharing
Abstract
The recent advent of cloud computing and the IoT has made it imperative to have efficient and secure cryptographic schemes for online data sharing. Data owners would ideally want to store their data/files online in an encrypted manner, and delegate decryption rights for some of these to users with appropriate credentials. An efficient and recently proposed solution in this regard is to use the concept of aggregation that allows users to decrypt multiple classes of data using a single key of constant size. In this paper, we propose a secure and dynamic key aggregate encryption scheme for online data sharing that operates on elliptic curve subgroups while allowing dynamic revocation of user access rights. We augment this basic construction to a generalized two-level hierarchical structure that achieves optimal space and time complexities, and also efficiently accommodates extension of data classes. Finally, we propose an extension to the generalized scheme that allows use of efficiently computable bilinear pairings for encryption and decryption operations. Each scheme is formally proven to be semantically secure. Practical experiments have been conducted to validate all claims made in the paper.
Sikhar Patranabis, Yash Shrivastava, Debdeep Mukhopadhyay
Lite-Rainbow: Lightweight Signature Schemes Based on Multivariate Quadratic Equations and Their Secure Implementations
Abstract
The Rainbow signature scheme, based on multivariate quadratic equations, is one of the alternatives for guaranteeing secure communication in the post-quantum world. It is about dozens of times faster than the classical public-key signatures RSA and ECDSA, but its keys are much larger than those of the classical schemes. We propose lightweight variants of Rainbow, Lite-Rainbow-0 and Lite-Rainbow-1, for constrained devices. By replacing parts of the public key or the secret key with small random seeds expanded by a pseudo-random number generator, we reduce the public key in Lite-Rainbow-1 and the secret key in Lite-Rainbow-0 by factors of 71 % and 99.8 %, respectively, compared to Rainbow. Although our schemes require additional cost for the key recovery processes, they are still highly competitive in terms of performance. We also prove unforgeability of our scheme with special parameter sets in the random oracle model under the hardness assumption of the multivariate quadratic polynomial solving problem. Finally, we propose countermeasures for Rainbow-like schemes against side channel attacks, such as power analysis, for their secure implementations.
Kyung-Ah Shim, Cheol-Min Park, Yoo-Jin Baek
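The key-compression idea in the abstract above, storing a short random seed in place of bulky key material and regenerating that material on demand with a pseudo-random number generator, can be illustrated with a minimal Python sketch. This is only the generic seed-expansion principle, not the actual Lite-Rainbow construction; the 32-byte seed size and the use of SHAKE-256 as the PRNG are assumptions made here for illustration.

```python
import hashlib

def expand_seed(seed: bytes, n_bytes: int) -> bytes:
    # Deterministically expand a short seed into pseudo-random key material
    # using SHAKE-256 as an extendable-output function.
    return hashlib.shake_256(seed).digest(n_bytes)

# Instead of storing a large key component, store only a 32-byte seed and
# regenerate the component whenever it is needed.
seed = b"\x01" * 32
full_key_material = expand_seed(seed, 100_000)  # regenerated on demand

# Regeneration is reproducible, so the large component never has to be stored.
assert expand_seed(seed, 100_000) == full_key_material
```

The storage saving is the ratio of seed length to expanded length; the price, as the abstract notes, is the extra cost of re-expanding the key at signing or verification time.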
Lossy Projective Hashing and Its Applications
Abstract
In this paper, we introduce a primitive called lossy projective hashing. It was previously unknown whether smooth projective hashing (Cramer-Shoup, Eurocrypt'02) can be constructed from dual projective hashing (Wee, Eurocrypt'12). Lossy projective hashing builds a bridge between dual projective hashing and smooth projective hashing. We give instantiations of lossy projective hashing from the DDH, DCR, QR and general subgroup membership assumptions (including the \(2^k\)-th residue, p-subgroup and higher residue assumptions). We also show how to construct lossy encryption and fully IND secure deterministic public key encryption from lossy projective hashing.
  • We give a construction of lossy projective hashing via dual projective hashing. We prove that lossy projective hashing can be converted to smooth projective hashing via pairwise independent hash functions, which in turn yields smooth projective hashing from dual projective hashing.
  • We propose a direct construction of lossy encryption via lossy projective hashing. Our construction is different from that given by Hemenway et al. (Eurocrypt 2011) via smooth projective hashing. In addition, we give a fully IND secure deterministic public key encryption scheme via lossy projective hashing and the one-round UCE secure hash functions recently introduced by Bellare et al. (Crypto 2013).
Haiyang Xue, Yamin Liu, Xianhui Lu, Bao Li
(De-)Constructing TLS 1.3
Abstract
SSL/TLS is one of the most widely deployed cryptographic protocols on the Internet. It is used to protect the confidentiality and integrity of transmitted data in various client-server applications. The currently specified version is TLS 1.2, and its security has been analyzed extensively in the cryptographic literature. The IETF working group is actively developing a new version, TLS 1.3, which is designed to address several flaws inherent to previous versions.
In this paper, we analyze the security of a slightly modified version of the current TLS 1.3 draft. (We do not encrypt the server’s certificate.) Our security analysis is performed in the constructive cryptography framework. This ensures that the resulting security guarantees are composable and can readily be used in subsequent protocol steps, such as password-based user authentication over a TLS-based communication channel in which only the server is authenticated. Most steps of our proof hold in the standard model, with the sole exception that the key derivation function HKDF is used in a way that has a proof only in the random-oracle model. Beyond the technical results on TLS 1.3, this work also exemplifies a novel approach towards proving the security of complex protocols by a modular, step-by-step decomposition, in which smaller sub-steps are proved in isolation and then the security of the protocol follows by the composition theorem.
Markulf Kohlweiss, Ueli Maurer, Cristina Onete, Björn Tackmann, Daniele Venturi

Cryptanalysis

Frontmatter
Cryptanalysis of Variants of RSA with Multiple Small Secret Exponents
Abstract
In this paper, we analyze the security of two variants of the RSA public key cryptosystem where multiple encryption and decryption exponents are used with a common modulus. For the most well known variant, CRT-RSA, assume that n encryption and decryption exponents \((e_l,d_{p_l},d_{q_l})\), where \(l=1,\cdots ,n\), are used with a common CRT-RSA modulus N. By utilizing a Minkowski-sum-based lattice construction and combining several modular equations which share a common variable, we prove that one can factor N when \(d_{p_l},d_{q_l}<N^{\frac{2n-3}{8n+2}}\) for all \(l=1,\cdots ,n\). We further improve this bound to \(d_{p_l}(\mathrm {or}\,d_{q_l})<N^{\frac{9n-14}{24n+8}}\) for all \(l=1,\cdots ,n\). Moreover, our experimental results improve upon the previous works of Jochemsz-May (Crypto 2007) and Herrmann-May (PKC 2010) when multiple exponents are used. For Takagi's variant of RSA, assume that n key pairs \((e_l,d_l)\) for \(l=1,\cdots ,n\) are available for a common modulus \(N=p^rq\) where \(r\ge 2\). By solving several simultaneous modular univariate linear equations, we show that when \(d_l<N^{(\frac{r-1}{r+1})^{\frac{n+1}{n}}}\) for all \(l=1,\cdots ,n\), one can factor the common modulus N.
Liqiang Peng, Lei Hu, Yao Lu, Santanu Sarkar, Jun Xu, Zhangjie Huang
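The two CRT-RSA exponent bounds quoted in the abstract above, \(N^{\frac{2n-3}{8n+2}}\) and the improved \(N^{\frac{9n-14}{24n+8}}\), can be tabulated for a few values of n to see how the attack strengthens as more exponent tuples become available. This short Python sketch is purely illustrative arithmetic on the stated exponents:

```python
from fractions import Fraction

def basic_bound(n: int) -> Fraction:
    # Exponent in d_{p_l}, d_{q_l} < N^bound (first result of the paper).
    return Fraction(2 * n - 3, 8 * n + 2)

def improved_bound(n: int) -> Fraction:
    # Exponent in the improved result.
    return Fraction(9 * n - 14, 24 * n + 8)

for n in (2, 5, 10, 100):
    print(n, float(basic_bound(n)), float(improved_bound(n)))
```

As n grows, the two exponents approach 1/4 and 3/8 respectively, i.e. the more exponents share the modulus, the larger the private CRT exponents may be while factoring still succeeds.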
Some Results on Sprout
Abstract
Sprout is a lightweight stream cipher proposed by Armknecht and Mikhalev at FSE 2015. It has a Grain-like structure with two state registers of size 40 bits each, which is exactly half the state size of Grain v1. In spite of this, the cipher does not appear to lose security against generic Time-Memory-Data Tradeoff attacks, due to the novelty of its design. In this paper, we first present improved results on key recovery with partial knowledge of the internal state. We show that if 50 of the 80 bits of the internal state are guessed, then the remaining bits along with the secret key can be found in a reasonable time using a SAT solver. Thereafter, we show that it is possible to perform a distinguishing attack on the full Sprout stream cipher in the multiple IV setting using around \(2^{40}\) randomly chosen IVs on average. The attack requires around \(2^{48}\) bits of memory. We then show that for every secret key, there exist around \(2^{30}\) IVs for which the LFSR used in Sprout enters the all-zero state during the keystream generating phase. Using this observation, we first show that it is possible to enumerate Key-IV pairs that produce keystream bits with period as small as 80. We then outline a simple key recovery attack that takes time equivalent to \(2^{66.7}\) encryptions with negligible memory requirement. Although this is not the best attack reported against this cipher in terms of time complexity, it is the best in terms of the memory required to perform the attack.
Subhadeep Banik
Linear Cryptanalysis of Reduced-Round SIMECK Variants
Abstract
SIMECK is a family of three lightweight block ciphers designed by Yang et al. They follow the framework used by Beaulieu et al. from the United States National Security Agency (NSA) to design SIMON and SPECK. A cipher in this family with a K-bit key and an N-bit block is called SIMECK N/K. We show that the security of this block cipher family against linear cryptanalysis is not as good as that of its predecessor SIMON. More precisely, while the best known linear attack for SIMON32/64 using Algorithm 1 of Matsui covers 13 rounds, we present a linear attack in this scenario which covers 14 rounds of SIMECK32/64. Similarly, using Algorithm 1 of Matsui, we present attacks on 19 and 22 rounds of SIMECK48/96 and SIMECK64/128 respectively, compared with known attacks on 16 and 19 rounds of SIMON48/96 and SIMON64/128. In addition, we use Algorithm 2 of Matsui to attack 18, 23 and 27 rounds of SIMECK32/64, SIMECK48/96 and SIMECK64/128 respectively, compared with known attacks on 18, 19 and 21 rounds of SIMON32/64, SIMON48/96 and SIMON64/128.
Nasour Bagheri
Improved Linear Cryptanalysis of Reduced-Round SIMON-32 and SIMON-48
Abstract
In this paper we analyse two variants of the SIMON family of lightweight block ciphers against variants of linear cryptanalysis and present the best linear cryptanalytic results on these reduced-round SIMON variants to date.
We propose a time-memory trade-off method that finds differential/linear trails for any permutation admitting low-Hamming-weight differential/linear trails. Our method combines low-Hamming-weight trails found via the correlation matrix representing the target permutation with heavy-Hamming-weight trails found using a Mixed Integer Programming model representing the target differential/linear trail. Our method enables us to find a 17-round linear approximation for SIMON-48, which is the best current linear approximation for SIMON-48. Using only the correlation matrix method, we are able to find a 14-round linear approximation for SIMON-32, which is also the current best linear approximation for SIMON-32.
The presented linear approximations allow us to mount a 23-round key recovery attack on SIMON-32 and a 24-round key recovery attack on SIMON-48/96, which are the current best results on SIMON-32 and SIMON-48. In addition, we present an attack on 24 rounds of SIMON-32 with marginal complexity.
Mohamed Ahmed Abdelraheem, Javad Alizadeh, Hoda A. Alkhzaimi, Mohammad Reza Aref, Nasour Bagheri, Praveen Gauravaram
Some Results Using the Matrix Methods on Impossible, Integral and Zero-Correlation Distinguishers for Feistel-Like Ciphers
Abstract
While many recent publications have shown strong relations between impossible differential, integral and zero-correlation distinguishers for SPNs and Feistel-like ciphers, this paper contributes to this research direction in two ways. First, it studies the Type-III, the Source-Heavy (SH) and the Target-Heavy (TH) Feistel-like ciphers with respect to those three kinds of distinguishers. Second, it links the matrix methods used to find such distinguishers with the adjacency matrix of the graph of a Feistel-like cipher.
Thierry P. Berger, Marine Minier
Improved Meet-in-the-Middle Attacks on 7 and 8-Round ARIA-192 and ARIA-256
Abstract
The ARIA block cipher has been established as a Korean encryption standard by the Korean government since 2004. In this work, we re-evaluate the security bound of reduced-round ARIA-192 and ARIA-256 against meet-in-the-middle (MITM) key recovery attacks in the single key model. We present a new 4-round distinguisher to demonstrate the best 7- and 8-round MITM attacks on ARIA-192/256. Our 7-round attack on ARIA-192 has data, time and memory complexity of \(2^{113}\), \(2^{135.1}\) and \(2^{130}\) respectively. For our 7-round attack on ARIA-256, the data/time/memory complexities are \(2^{115}\), \(2^{136.1}\) and \(2^{130}\) respectively. These attacks improve upon the previous best MITM attacks on the same targets in all three dimensions. Our 8-round attack on ARIA-256 requires \(2^{113}\) cipher calls and has time and memory complexity of \(2^{245.9}\) and \(2^{138}\) respectively. This improves upon the previous best MITM attack on ARIA-256 in terms of both time and memory complexity. Further, in our attacks, we are able to recover the actual secret key, unlike the previous cryptanalytic attacks on ARIA-192/256. To the best of our knowledge, this is the first actual key recovery attack on ARIA so far. We apply the multiset attack, a variant of the meet-in-the-middle attack, to achieve these results.
Akshima, Donghoon Chang, Mohona Ghosh, Aarushi Goel, Somitra Kumar Sanadhya
Structural Evaluation for Generalized Feistel Structures and Applications to LBlock and TWINE
Abstract
The generalized Feistel structure (GFS) is the variant of the Feistel structure with \(m>2\) branches. While the GFS is widely used, its security is not well studied. In this paper, we propose a generic algorithm for searching for integral distinguishers. By applying the algorithm, we prove that the lower bound on the length of integral distinguishers is \(m^2+m-1\) and \(2m+1\) for Type-1 GFS and Type-2 GFS, respectively. Meanwhile, we evaluate the security of the improved Type-1 and Type-2 GFSs when the size of each branch and the algebraic degree of the F-functions are specified. Our results show that the distinguishers are affected by these parameters to various degrees, which will provide a valuable reference for designing GFS ciphers. Although our search algorithm is generic, it can also improve integral distinguishers for specific ciphers. For instance, it constructs several 16-round integral distinguishers for LBlock and TWINE, which directly extend the number of attacked rounds.
Huiling Zhang, Wenling Wu

Side Channel Attacks

Frontmatter
Cryptanalysis of Two Fault Countermeasure Schemes
Abstract
In this paper, we look at two fault countermeasure schemes proposed very recently in the literature. The first, proposed at ACISP 2015, constructs a transformation function using cellular-automata-based linear diffusion and a non-linear layer built from a series of bent functions. This countermeasure is meant for the protection of block ciphers like AES. The second countermeasure was proposed at IEEE HOST 2015 and protects the Grain-128 stream cipher. The design divides the output function used in Grain-128 into two components. The first, called the masking function, masks the input bits to the output function with some additional randomness and computes the value of the function. The second, called the unmasking function, is computed securely using a different register and undoes the effect of the masking with random bits. We show that there exists a weakness in the way in which both these schemes use the internally generated random bits, which makes these designs vulnerable. We outline attacks that cryptanalyze the above schemes using 66 and 512 faults respectively.
Subhadeep Banik, Andrey Bogdanov
Differential Fault Analysis of SHA-3
Abstract
In this paper we present the first differential fault analysis (DFA) of SHA-3. The attack can recover the internal state of two versions of SHA-3 (namely, SHA3-512 and SHA3-384) and can be used to forge MACs that use these versions of SHA-3. Assuming that the attacker can inject a random single-bit fault into the intermediate state of the hash computation, and given the output of the SHA-3 version for a correct message and 80 faulty messages, we can extract 1592 out of the 1600 bits of the compression function's internal state. To the best of our knowledge, this is the first public analysis of SHA-3 against DFA. Although our results do not compromise any security claim of SHA-3, they show the feasibility of DFA on this scheme, and possibly on other sponge-based MACs, and increase our understanding of SHA-3.
Nasour Bagheri, Navid Ghaedi, Somitra Kumar Sanadhya
A Key to Success
Success Exponents for Side-Channel Distinguishers
Abstract
The success rate is the classical metric for evaluating the performance of side-channel attacks. It is generally computed empirically from measurements for a particular device or using simulations. Closed-form expressions of the success rate are desirable because they provide an explicit functional dependence on relevant parameters, such as the number of measurements and the signal-to-noise ratio, which helps in understanding the effectiveness of a given attack and how its threat can be mitigated by countermeasures. However, such closed-form expressions involve high-dimensional complex statistical functions that are hard to estimate.
In this paper, we define the success exponent (SE) of an arbitrary side-channel distinguisher as the first-order exponent of the success rate as the number of measurements increases. Under fairly general assumptions such as soundness, we give a general simple formula for any arbitrary distinguisher and derive closed-form expressions of it for \(\mathsf {DoM}\), \(\mathsf {CPA}\), \(\mathsf {MIA}\) and the optimal distinguisher when the model is known (template attack). For \(\mathsf {DoM}\) and \(\mathsf {CPA}\) our results are in line with the literature. Experiments confirm that the theoretical closed-form expression of the SE coincides with the empirically computed one, even for reasonably small numbers of measurements. Finally, we highlight that our study raises many new perspectives for comparing and evaluating side-channel attacks, countermeasures and implementations.
Sylvain Guilley, Annelie Heuser, Olivier Rioul

Information Theoretic Cryptography

Frontmatter
Non-malleable Extractors with Shorter Seeds and Their Applications
Abstract
Motivated by the problem of how to communicate over a public channel with an active adversary, Dodis and Wichs (STOC’09) introduced the notion of a non-malleable extractor. A non-malleable extractor \(\textsf {nmExt}: \{0, 1\}^n \times \{0, 1\}^d \rightarrow \{0, 1\}^m\) takes two inputs, a weakly-random W and a uniformly random seed S, and outputs a string which is nearly uniform, given S as well as \(\textsf {nmExt}(W, \mathcal {A}(S))\), for an arbitrary function \(\mathcal {A}\) with \(\mathcal {A}(S) \ne S\).
In this paper, by developing combination and permutation techniques, we improve the error estimation of the extractor of Raz (STOC'05), which plays an extremely important role in the constraints on the non-malleable extractor parameters, including the seed length. We then present an improved explicit construction of non-malleable extractors. Though our construction is the same as that given by Cohen, Raz and Segev (CCC'12), the parameters are improved. More precisely, we construct an explicit \((1016, \frac{1}{2})\)-non-malleable extractor \(\textsf {nmExt}: \{0, 1\}^{n} \times \{0, 1\}^d \rightarrow \{0, 1\}\) with \(n=2^{10}\) and seed length \(d=19\), while Cohen et al. showed that the seed length must be no less than \(\frac{46}{63} + 66\). Therefore, our method beats the condition "\(2.01 \cdot \log n \le d \le n\)" proposed by Cohen et al., since d is just \(1.9 \cdot \log n\) in our construction. We also improve the parameters of the general explicit construction given by Cohen et al. Finally, we give applications to privacy amplification.
Yanqing Yao, Zhoujun Li
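The seed-length claim in the abstract above is a simple arithmetic check: with \(n = 2^{10}\), a seed of length \(d = 19\) corresponds to \(1.9 \cdot \log n\), which lies strictly below the \(2.01 \cdot \log n\) threshold of Cohen et al. A few lines of Python make the check explicit:

```python
import math

n = 2 ** 10           # input length from the construction
d = 19                # seed length achieved
log_n = math.log2(n)  # = 10.0

# d equals 1.9 * log n, beating the "2.01 * log n <= d" condition.
assert abs(d - 1.9 * log_n) < 1e-9
assert d < 2.01 * log_n
```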
Efficiently Simulating High Min-entropy Sources in the Presence of Side Information
Abstract
In this paper we show that every source X having very high min-entropy conditioned on side information Z can be efficiently simulated from Z. That is, there exists a simulator \(\mathsf {Sim}(\cdot )\) such that (a) it is efficient, (b) \((\mathsf {Sim}(Z),Z)\) and (X, Z) are indistinguishable and (c) the min-entropy of \(\mathsf {Sim}(Z)\) and X given Z is (almost, up to a few bits) the same. Concretely, the simulator achieves \((s,\epsilon )\)-indistinguishability running in time \(s\cdot \mathrm {poly}\left( \frac{1}{\epsilon },2^\varDelta ,|Z|\right) \), where \(\varDelta \) is the entropy deficiency and |Z| is the length of Z.
This extends the result of Trevisan, Tulsiani and Vadhan (CCC '09), who proved the special case when Z is empty.
Our technique is based on the standard min-max theorem combined with a new convex \(L_2\)-approximation result resembling the well-known result due to Maurey-Jones-Barron.
Maciej Skórski

Lightweight Cryptography

Frontmatter
BitCryptor: Bit-Serialized Flexible Crypto Engine for Lightweight Applications
Abstract
There is a significant effort in building lightweight cryptographic operations, yet the proposed solutions are typically single-purpose modules, each implementing a single functionality. In contrast, we propose BitCryptor, a multi-purpose, compact processor for cryptographic applications on reconfigurable hardware. The proposed crypto engine can perform pseudo-random number generation, strong collision-resistant hashing and variable-key block cipher encryption. The hardware architecture utilizes SIMON, a recent lightweight block cipher, as its core. The complete engine uses a bit-serial design methodology to minimize the area. Implementation results on the Xilinx Spartan-3 s50 FPGA show that the proposed architecture occupies 95 slices (187 LUTs, 102 registers), which is 10\(\times \) smaller than the nearest comparable multi-purpose design. BitCryptor is also smaller than the majority of recently proposed lightweight single-purpose designs. Therefore, it is a very efficient cryptographic IP block for resource-constrained domains, providing good performance at minimal area overhead.
Ege Gulcan, Aydin Aysu, Patrick Schaumont
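BitCryptor builds all three services on top of the SIMON block cipher. For orientation, one round of SIMON32/64 (16-bit words) can be sketched in Python using the standard word-oriented description of the cipher; this is the public SIMON round function, not the paper's bit-serial hardware realization, and the sample inputs below are arbitrary values chosen here for illustration.

```python
WORD_BITS = 16
MASK = (1 << WORD_BITS) - 1  # 16-bit words for SIMON32/64

def rotl(v: int, r: int) -> int:
    """Rotate a 16-bit word left by r positions."""
    return ((v << r) | (v >> (WORD_BITS - r))) & MASK

def simon_round(x: int, y: int, k: int):
    """One SIMON Feistel round with round key k, where
    f(x) = ((x <<< 1) & (x <<< 8)) ^ (x <<< 2)."""
    f = (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)
    return (y ^ f ^ k) & MASK, x

# The Feistel round is invertible, as required for decryption:
x1, y1 = simon_round(0x6565, 0x6877, 0x0100)
f = (rotl(y1, 1) & rotl(y1, 8)) ^ rotl(y1, 2)
assert y1 == 0x6565 and (x1 ^ f ^ 0x0100) & MASK == 0x6877
```

A bit-serial implementation, as used in BitCryptor, computes this same round one bit per clock cycle instead of one word per cycle, trading latency for the very small slice count reported in the abstract.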
Low-Resource and Fast Binary Edwards Curves Cryptography
Abstract
Elliptic curve cryptography (ECC) is an ideal choice for low-resource applications because it provides the same level of security with smaller key sizes than other existing public key encryption schemes. For low-resource applications, designing efficient functional units for elliptic curve computations over binary fields results in an effective platform for an embedded co-processor. This paper proposes such a co-processor, designed for area-constrained devices by utilizing state-of-the-art binary Edwards curve formulas for mixed point addition and point doubling. The binary Edwards curve offers the security advantage of completeness and is therefore immune to exceptional-point attacks. In conjunction with the Montgomery ladder, such a curve is naturally immune to most types of simple power and timing attacks. The recently presented formulas for mixed point addition in [1] were found to be invalid, but were corrected such that the speed and register usage were maintained. We utilize the corrected mixed point addition and doubling formulas to achieve a secure but still fast implementation of point multiplication on binary Edwards curves. Our synthesis results over the NIST-recommended fields for ECC indicate that the proposed co-processor requires about 50 % fewer clock cycles for point multiplication and occupies a similar silicon area compared to the most recent work in the literature.
Brian Koziel, Reza Azarderakhsh, Mehran Mozaffari-Kermani
Backmatter
Metadata
Title: Progress in Cryptology -- INDOCRYPT 2015
Editors: Alex Biryukov, Vipul Goyal
Copyright Year: 2015
Electronic ISBN: 978-3-319-26617-6
Print ISBN: 978-3-319-26616-9
DOI: https://doi.org/10.1007/978-3-319-26617-6