
2015 | Book

Smart Card Research and Advanced Applications

13th International Conference, CARDIS 2014, Paris, France, November 5-7, 2014. Revised Selected Papers


About this Book

This book constitutes the thoroughly refereed post-conference proceedings of the 13th International Conference on Smart Card Research and Advanced Applications, CARDIS 2014, held in Paris, France, in November 2014. The 15 revised full papers presented in this book were carefully reviewed and selected from 56 submissions. The papers are organized in topical sections on Java cards; software countermeasures; side-channel analysis; embedded implementations; public-key cryptography; and leakage and fault attacks.

Table of Contents

Frontmatter

Java Cards

Frontmatter
Memory Forensics of a Java Card Dump
Abstract
Several papers have shown the ability to dump the EEPROM area of Java Cards, leading to the disclosure of already loaded applets and of the card's data structures. Such a reverse engineering process is costly and prone to errors, and currently there are no tools available to assist it. We propose here an approach to locate, in the raw data obtained after a dump, the areas containing the code and the data. Then, once the code area has been identified, we propose to rebuild the original binary CAP file in order to obtain the source code of the applet stored in the card.
Jean-Louis Lanet, Guillaume Bouffard, Rokia Lamrani, Ranim Chakra, Afef Mestiri, Mohammed Monsif, Abdellatif Fandi
Heap $$\ldots $$ Hop! Heap Is Also Vulnerable
Abstract
Several logical attacks against Java-based smart cards have been published recently. Most of them rely on the hypothesis that type verification is not performed, which allows a type confusion to be obtained dynamically. To mitigate such attacks, typed stacks have been introduced on recent smart cards. We propose here a new attack path for performing a type confusion even in the presence of a typed stack. Then, using a Fault Tree Analysis, we propose a way to design countermeasures efficiently in a top-down approach. These countermeasures are then evaluated on a Java Card virtual machine.
Guillaume Bouffard, Michael Lackner, Jean-Louis Lanet, Johannes Loinig

Software Countermeasures

Frontmatter
Study of a Novel Software Constant Weight Implementation
Abstract
While in the early 2000s much research focused on first- and second-order Differential Power Analysis, the recent trend is toward even higher orders. As this order grows, countermeasures such as masking need to be designed in a more generic way. In this paper, we introduce a new constant weight implementation of the AES extending the idea of the software dual-rail countermeasure proposed by Hoogvorst et al. at COSADE 2011. Notably, we illustrate its practicality on a 16-bit microcontroller in terms of speed and complexity. This countermeasure applies to all devices that leak a function of the Hamming weight of the internal variables. Under this assumption, our constant weight implementation is theoretically inherently resistant to side-channel attacks of any order. A security evaluation is conducted to analyze its resistance when the leakage slightly deviates from the Hamming weight assumption. It reveals that the countermeasure remains as good as several well-known masking countermeasures. Moreover, the proposed countermeasure offers the possibility to detect some classes of faults.
Victor Servant, Nicolas Debande, Houssem Maghrebi, Julien Bringer
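The core idea behind such constant weight encodings is the dual-rail principle the abstract refers to: each data bit is stored together with its complement, so every encoded word has the same Hamming weight regardless of its value. The following minimal Python sketch illustrates the encoding principle only; it is not the paper's AES implementation.

    def dual_rail_encode(byte):
        """Expand an 8-bit value into a 16-bit constant-weight codeword."""
        out = 0
        for i in range(8):
            b = (byte >> i) & 1
            out |= b << (2 * i)            # positive rail carries the data bit
            out |= (1 - b) << (2 * i + 1)  # negative rail carries its complement
        return out

    def dual_rail_decode(word):
        """Recover the original byte from the positive rails."""
        return sum(((word >> (2 * i)) & 1) << i for i in range(8))

    for v in (0x00, 0x3A, 0xFF):
        enc = dual_rail_encode(v)
        assert bin(enc).count("1") == 8        # Hamming weight is always 8
        assert dual_rail_decode(enc) == v
        print(f"{v:#04x} -> {enc:#06x}  HW = {bin(enc).count('1')}")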
Balanced Encoding to Mitigate Power Analysis: A Case Study
Abstract
Most side channel countermeasures for software implementations of cryptography either rely on masking or randomize the execution order of the cryptographic implementation. This work proposes a countermeasure that has constant leakage in common linear leakage models. Constant leakage is achieved not only for internal state values, but also for their transitions. The proposed countermeasure provides perfect protection in the theoretical leakage model. To study the practical relevance of the proposed countermeasure, it is applied to a software implementation of the block cipher Prince. This case study allows us to give realistic values for resulting implementation overheads as well as for the resulting side channel protection levels that can be achieved in realistic implementation scenarios.
Cong Chen, Thomas Eisenbarth, Aria Shahverdi, Xin Ye
On the Cost of Lazy Engineering for Masked Software Implementations
Abstract
Masking is one of the most popular countermeasures to mitigate side-channel analysis. Yet, its deployment in actual cryptographic devices is well known to be challenging, since designers have to ensure that the leakage corresponding to different shares is independent. Several works have shown that such an independent leakage assumption may be contradicted in practice, because of physical effects such as “glitches” or “transition-based” leakages. As a result, implementing masking securely can be a time-consuming engineering problem. This is in strong contrast with recent and promising approaches for the automatic, compiler-based insertion of countermeasures, which aim to limit the development time of side-channel resistant software. Motivated by this contrast, we question what can be hoped for from these approaches, or more generally from masked software implementations based on careless assembly generation. For this purpose, our first contribution is a simple reduction from security proofs obtained in a (usual but not always realistic) model where leakages depend on the intermediate variables manipulated by the target device, to security proofs in a (more realistic) model where the transitions between these intermediate variables are leaked. We show that the cost of moving from one context to the other implies a division of the security order by two for masking schemes. Next, our second and main contribution is to provide a comprehensive empirical validation of this reduction, based on two microcontrollers and several (handwritten and compiler-based) ways of generating assembly code, with and without “recycling” the randomness used for sharing. These experiments confirm the relevance of our analysis, and therefore quantify the cost of lazy engineering for masking.
Josep Balasch, Benedikt Gierlichs, Vincent Grosso, Oscar Reparaz, François-Xavier Standaert
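To see why transitions can halve the security order, consider a first-order Boolean masking of a byte x into shares s1 = x XOR m and s2 = m. Each share on its own leaks nothing about x, but if a register that held s1 is overwritten with s2, Hamming-distance leakage reveals HW(s1 XOR s2) = HW(x), exactly what an unmasked implementation would leak. The toy Python snippet below illustrates this effect; it is not the paper's formal reduction.

    import random

    def hw(v):
        return bin(v).count("1")

    random.seed(0)
    x = 0xA7                      # secret byte
    m = random.randrange(256)     # first-order Boolean mask
    share1, share2 = x ^ m, m

    # Value leakage of each share alone carries no information about x ...
    print("HW(share1) =", hw(share1), " HW(share2) =", hw(share2))
    # ... but the Hamming distance of the register transition share1 -> share2
    # equals HW(x), i.e. what an unmasked implementation would have leaked.
    print("HW(share1 ^ share2) =", hw(share1 ^ share2), " HW(x) =", hw(x))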

Side-Channel Analysis

Frontmatter
Efficient Stochastic Methods: Profiled Attacks Beyond 8 Bits
Abstract
Template attacks and stochastic models are among the most powerful side-channel attacks. However, they can be computationally expensive when processing a large number of samples. Various compression techniques have been used very successfully to reduce the data dimensionality prior to applying template attacks, most notably Principal Component Analysis (PCA) and Fisher’s Linear Discriminant Analysis (LDA). These make the attacks more efficient computationally and help the profiling phase to converge faster. We show how these ideas can also be applied to implement stochastic models more efficiently, and we also show that they can be applied and evaluated even for more than eight unknown data bits at once.
Marios O. Choudary, Markus G. Kuhn
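As an illustration of the compression step mentioned in the abstract, the sketch below (simulated traces and illustrative parameters, not the authors' experiments) projects traces onto the leading principal components of the per-class mean traces before any profiling is done.

    import numpy as np

    rng = np.random.default_rng(1)
    n_classes, n_per_class, n_samples = 16, 200, 1000

    # Simulated traces: each class has its own mean trace plus Gaussian noise.
    class_signal = rng.normal(size=(n_classes, n_samples))
    labels = np.repeat(np.arange(n_classes), n_per_class)
    traces = class_signal[labels] + rng.normal(scale=2.0, size=(labels.size, n_samples))

    # PCA on the class means (between-class scatter), then project all traces.
    class_means = np.array([traces[labels == c].mean(axis=0) for c in range(n_classes)])
    centered = class_means - class_means.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    projection = vt[:5].T                  # keep the 5 leading components

    compressed = traces @ projection       # 1000-sample traces -> 5 features
    print("compressed shape:", compressed.shape)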
Kangaroos in Side-Channel Attacks
Abstract
Side-channel attacks are a powerful tool to discover the cryptographic secrets of a chip or other device, but all too often they require too many traces or leave too many possible keys to explore. In this paper we show that for side-channel attacks on discrete-logarithm-based systems significantly more unknown bits can be handled by using Pollard’s kangaroo method: if \(b\) bits are unknown then the attack runs in \(2^{b/2}\) instead of \(2^b\). If an attacker has many targets in the same group and thus has reason to invest in precomputation, the costs can even be brought down to \(2^{b/3}\).
Usually the separation between known and unknown key bits is not this clear cut: they are known with probabilities ranging between 100 % and 0 %. Enumeration and rank estimation of cryptographic keys based on partial information derived from cryptanalysis have become important tools for security evaluations. They make the line between a broken and a secure device clearer and thus help security evaluators determine how high the security of a device is. For symmetric-key cryptography there has been some recent work on key enumeration and rank estimation, but for discrete-logarithm-based systems these algorithms fail because the subkeys are not independent and the algorithms cannot take advantage of the above-mentioned faster attacks. We present \(\epsilon \)-enumeration as a new method to compute the rank of a key by using the probabilities together with (variations of) Pollard’s kangaroo algorithm and give experimental evidence.
Tanja Lange, Christine van Vredendaal, Marnix Wakker
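The following self-contained Python sketch illustrates the square-root speed-up that Pollard's kangaroo method gives when only b key bits are unknown. The group, generator and jump parameters are illustrative choices, not those used in the paper.

    import math

    P = 2**61 - 1            # Mersenne prime; we work in the group Z_P^*
    G = 3                    # illustrative base

    def kangaroo(h, b, salt=0):
        """Find x in [0, 2^b) with pow(G, x, P) == h in roughly 2^(b/2) steps."""
        N = 1 << b
        k = int(math.log2(math.isqrt(N))) + 3
        jump = lambda y: 1 << ((y ^ salt) % k)       # pseudo-random jump sizes

        # Tame kangaroo: start at the top of the interval, record footprints.
        tame = {}
        pos, dist = pow(G, N, P), 0
        for _ in range(4 * math.isqrt(N)):
            tame[pos] = dist
            s = jump(pos)
            pos, dist = (pos * pow(G, s, P)) % P, dist + s

        # Wild kangaroo: start at h = G^x, hop until it lands on a footprint.
        pos, dist = h, 0
        for _ in range(10 * math.isqrt(N)):
            if pos in tame:
                return N + tame[pos] - dist
            s = jump(pos)
            pos, dist = (pos * pow(G, s, P)) % P, dist + s
        return kangaroo(h, b, salt + 1)              # rare miss: retry with new jumps

    x = 0xABCDE1                                     # 24 unknown bits
    assert kangaroo(pow(G, x, P), 24) == x
    print("recovered", hex(x), "with roughly 2^12 group operations")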
Combining Leakage-Resilient PRFs and Shuffling
Towards Bounded Security for Small Embedded Devices
Abstract
Combining countermeasures is usually assumed to be the best way to protect embedded devices against side-channel attacks. These combinations are at least expected to increase the number of measurements required for successful attacks to some reasonable extent, and at best to guarantee a bounded time complexity independent of the number of measurements. This latter guarantee, only possible in the context of leakage-resilient constructions, had so far been reached only for stateful (pseudo-random generator) constructions or for large parallel implementations. In this paper, we describe a first proposal of a stateless (pseudo-random function) construction for which we have strong hints that security-bounded implementations are reachable under the constraints of small embedded devices. Our proposal essentially combines the well-known shuffling countermeasure with a tweaked pseudo-random function introduced at CHES 2012. We first detail its performance. Then we analyze it against standard differential power analysis and discuss the different parameters influencing its security bounds. Finally, we put forward that its implementation in 8-bit microcontrollers can provide a better security vs. performance trade-off than state-of-the-art (combinations of) countermeasures.
Vincent Grosso, Romain Poussier, François-Xavier Standaert, Lubos Gaspar
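As a reminder of the shuffling half of the proposal, the sketch below processes the sixteen S-box look-ups of an AES-like state in a fresh random order on every execution. It is illustrative only; the paper combines shuffling with a tweaked leakage-resilient PRF, which is not shown here.

    import secrets

    SBOX = list(range(256))        # identity stand-in for the real AES S-box

    def shuffled_sub_bytes(state):
        """Apply the S-box to the 16 state bytes in a fresh random order."""
        order = list(range(16))
        secrets.SystemRandom().shuffle(order)   # new permutation every execution
        out = [0] * 16
        for idx in order:                       # the time slot of each byte varies
            out[idx] = SBOX[state[idx]]
        return out

    print(shuffled_sub_bytes(list(range(16))))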

Embedded Implementations

Frontmatter
Double Level Montgomery Cox-Rower Architecture, New Bounds
Abstract
Recently, the Residue Number System and the Cox-Rower architecture have been used to implement Elliptic Curve Cryptography efficiently on FPGA. In this paper, we rewrite the conditions of Kawamura's theorem for error-free base extension in order to define the maximal range of the set from which the moduli can be chosen to build a base. At the same time, we give a procedure to compute the truncation function of the Cox module correctly. We also present a modified ALU of the Rower architecture using a second level of Montgomery representation. Such an architecture allows us to select the moduli within the new upper bound defined by the condition. This modification makes the Cox-Rower architecture suitable for computing 521-bit ECC with a radix down to 16 bits, compared to 18 bits with the classical Cox-Rower architecture. We validate our results through an FPGA implementation of a scalar multiplication at classical cryptography security levels (NIST curves). Our implementation uses 35 % fewer LUTs than the state-of-the-art generic implementation of ECC using RNS for the same performance [5]. We also slightly improve the computation time (latency), and our implementation shows the best throughput/area ratio for RNS computation supporting any curve independently of the chosen base.
Jean-Claude Bajard, Nabil Merkiche
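The sketch below illustrates the Residue Number System arithmetic that underlies the Cox-Rower architecture: a large operand is split into small independent residues and reconstructed with the Chinese Remainder Theorem. The moduli form a classical illustrative base, not the paper's parameters, and the base extension and Montgomery levels discussed in the paper are not shown.

    from math import prod

    MODULI = [2**16 - 1, 2**16, 2**16 + 1]     # a classic pairwise-coprime base
    M = prod(MODULI)

    def to_rns(x):
        return [x % m for m in MODULI]

    def rns_mul(a, b):
        # channel-wise, fully parallelisable multiplications
        return [(ai * bi) % m for ai, bi, m in zip(a, b, MODULI)]

    def from_rns(res):
        """Chinese Remainder Theorem reconstruction."""
        x = 0
        for r, m in zip(res, MODULI):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    a, b = 123456789, 987654321
    assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % M
    print("RNS product reconstructed correctly")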
How to Use Koblitz Curves on Small Devices?
Abstract
Koblitz curves allow very efficient scalar multiplications because point doublings can be traded for cheap Frobenius endomorphisms by representing the scalar as a \(\tau \)-adic expansion. Typically, elliptic curve cryptosystems such as ECDSA also require the scalar as an integer. This results in a need for conversions between integers and the \(\tau \)-adic domain, which are costly and prevent the use of Koblitz curves on very constrained devices, such as RFID tags or wireless sensors. In this paper, we provide a solution to this problem by showing how complete cryptographic processes, such as ECDSA signing, can be completed in the \(\tau \)-adic domain with very few resources, consequently outsourcing the expensive conversions to a more powerful party. We also provide small circuitries that require about 76 gate equivalents on 0.13 \(\upmu \)m CMOS and that are applicable for all Koblitz curves.
Kimmo Järvinen, Ingrid Verbauwhede
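The costly conversion the paper discusses is the integer-to-\(\tau \)-adic expansion. The sketch below implements the standard \(\tau \)-adic NAF recoding (Solinas' algorithm) for a Koblitz curve with \(\tau ^2 = \mu \tau - 2\); it illustrates the conversion itself, not the authors' lightweight circuitry.

    def tnaf(k, mu=1):
        """Tau-adic NAF digits of k (least significant first), tau^2 = mu*tau - 2."""
        r0, r1, digits = k, 0, []
        while r0 != 0 or r1 != 0:
            if r0 % 2:                            # odd: choose digit u in {-1, +1}
                u = 2 - ((r0 - 2 * r1) % 4)
                r0 -= u
            else:
                u = 0
            digits.append(u)
            # divide (r0 + r1*tau) by tau, using 1/tau = (mu - tau)/2
            r0, r1 = r1 + mu * (r0 // 2), -(r0 // 2)
        return digits

    def evaluate(digits, mu=1):
        """Check the expansion at tau = (mu + sqrt(-7))/2."""
        tau = (mu + complex(0, 7 ** 0.5)) / 2
        return sum(d * tau ** i for i, d in enumerate(digits))

    k = 1234567
    d = tnaf(k)
    print("length:", len(d), "round-trip value:", round(evaluate(d).real))   # ~ k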

Public-Key Cryptography

Frontmatter
Caml Crush: A PKCS#11 Filtering Proxy
Abstract
PKCS#11 is a very popular cryptographic API: it is the standard used by many Hardware Security Modules, smartcards and software cryptographic tokens. Several attacks have been uncovered against PKCS#11 at different levels: intrinsic logical flaws, cryptographic vulnerabilities or severe compliance issues. Since affected hardware remains widespread in computer infrastructures, we propose a user-centric and pragmatic approach for secure usage of vulnerable devices. We introduce Caml Crush, a PKCS#11 filtering proxy. Our solution makes it possible to dynamically protect PKCS#11 cryptographic tokens from state-of-the-art attacks. This is the first approach that is immediately applicable to commercially available products. We provide a fully functional open source implementation with an extensible filter engine effectively shielding critical resources. This yields additional advantages to using Caml Crush that go beyond classical PKCS#11 weakness mitigations.
Ryad Benadjila, Thomas Calderon, Marion Daubignard
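The kind of rule such a filtering proxy can enforce is pictured by the toy check below, which rejects PKCS#11 attribute templates that enable the classic wrap-and-decrypt key extraction. It is written in Python for brevity and is only a sketch of the concept, not Caml Crush's OCaml filter engine.

    # Attribute combinations known to enable wrap/decrypt style key extraction.
    CONFLICTS = [("CKA_WRAP", "CKA_DECRYPT"), ("CKA_UNWRAP", "CKA_ENCRYPT")]

    def filter_key_template(template):
        """Reject a PKCS#11 attribute template before it reaches the token."""
        enabled = {name for name, value in template.items() if value}
        for a, b in CONFLICTS:
            if a in enabled and b in enabled:
                raise PermissionError(f"conflicting attributes: {a} and {b}")
        return template

    try:
        filter_key_template({"CKA_WRAP": True, "CKA_DECRYPT": True})
    except PermissionError as err:
        print("blocked by proxy:", err)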
Algorithms for Outsourcing Pairing Computation
Abstract
We address the question of how a computationally limited device may outsource pairing computation in cryptography to another, potentially malicious, but much more computationally powerful device. We introduce two new efficient protocols for securely outsourcing pairing computations to an untrusted helper. The first generic scheme is proven computationally secure (and can be proven statistically secure at the expense of worse performance). It allows various communication-efficiency trade-offs. The second specific scheme, for the optimal Ate pairing on a Barreto-Naehrig curve, is unconditionally secure and does not rely on any hardness assumption. Both protocols are more efficient than the actual computation of the pairing by the restricted device, and in particular they are more efficient than all previous proposals.
Aurore Guillevic, Damien Vergnaud

Leakage and Fault Attacks

Frontmatter
Bounded, yet Sufficient? How to Determine Whether Limited Side Channel Information Enables Key Recovery
Abstract
This work presents a novel algorithm to quantify the relation between three factors that characterize a side channel adversary: the amount of observed side channel leakage, the workload of full key recovery, and its achievable success rate. The proposed algorithm can be used by security evaluators to derive a realistic bound on the capabilities of a side channel adversary. Furthermore, it provides an optimal strategy for combining subkey guesses to achieve any predefined success rate. Hence, it can be used by a side channel adversary to determine whether observed leakage suffices for key recovery before expending computation time. The algorithm is applied to a series of side channel measurements of a microcontroller AES implementation and simulations. A comparison to related work shows that the new algorithm improves on existing algorithms in several respects.
Xin Ye, Thomas Eisenbarth, William Martin
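A common way to relate per-subkey side-channel scores to a full-key workload is to convolve histograms of the subkey log-probabilities, which yields an estimate of the rank of the correct key. The sketch below uses simulated scores and shows this generic combination step; it is not the specific algorithm proposed in the paper.

    import numpy as np

    rng = np.random.default_rng(2)
    N_SUB, N_CAND, N_BINS = 16, 256, 2048

    # Simulated per-subkey probabilities, as produced by a profiled attack.
    probs = rng.dirichlet(np.full(N_CAND, 0.5), size=N_SUB)
    true_key = rng.integers(0, N_CAND, size=N_SUB)

    logp = np.log2(probs)
    lo, width = logp.min(), -logp.min() / N_BINS   # common bin grid over [lo, 0]
    edges = lo + width * np.arange(N_BINS + 1)

    # Convolving the per-subkey histograms approximates, for every possible
    # summed log-probability, how many full keys reach it.
    conv = np.array([1.0])
    for i in range(N_SUB):
        counts, _ = np.histogram(logp[i], bins=edges)
        conv = np.convolve(conv, counts)

    # Mass of full keys scoring at least as well as the correct key ~ its rank.
    score = sum(logp[i, true_key[i]] for i in range(N_SUB))
    idx = int((score - N_SUB * lo) / width)
    rank = conv[idx:].sum()
    print(f"estimated rank of the correct key: about 2^{np.log2(max(rank, 1.0)):.1f}")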
On the Security of Fresh Re-keying to Counteract Side-Channel and Fault Attacks
Abstract
At AFRICACRYPT 2010 and CARDIS 2011, fresh re-keying schemes to counter side-channel and fault attacks were introduced. The idea behind those schemes is to shift the main burden of side-channel protection to a re-keying function \(g\) that is easier to protect than the main block cipher. This function produces new session keys based on the secret master key and random nonces for every block of message that is encrypted. In this paper, we present a generic chosen-plaintext key-recovery attack on both fresh re-keying schemes. The attack is based on two observations: Since session key collisions for the same message are easy to detect, it is possible to recover one session key with a simple time-memory trade-off strategy; and if the re-keying function is easy to invert (such as the suggested multiplication constructions), the attacker can use the session key to recover the master key. The attack has a complexity of about \(2 \cdot 2^{n/2}\) (instead of the expected \(2^n\)) for an \(n\)-bit key. For the typically employed block cipher AES-128, this would result in a key-recovery attack complexity of only \(2^{65}\). If weaker primitives like 80-bit PRESENT are used, even lower attack complexities are possible.
Christoph Dobraunig, Maria Eichlseder, Stefan Mangard, Florian Mendel
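The attack idea can be reproduced at toy scale: precompute ciphertexts of a fixed message under roughly \(2^{n/2}\) candidate session keys, observe re-keyed encryptions of the same message until one of the device's session keys collides with the table, then invert the re-keying function to obtain the master key. The snippet below uses a 19-bit toy key space and stand-in primitives, not the AES-based schemes analyzed in the paper.

    import hashlib, secrets

    N_BITS = 19
    P = (1 << N_BITS) - 1          # 524287, a Mersenne prime: toy key space

    def enc(key, msg):
        """Stand-in block cipher: truncated keyed hash of the message."""
        return hashlib.sha256(key.to_bytes(4, "big") + msg).digest()[:8]

    def g(master, nonce):
        """Toy re-keying function, easy to invert once a session key is known."""
        return (master * nonce) % P

    MASTER = secrets.randbelow(P - 1) + 1
    MSG = b"fixed message"

    # Offline: ciphertexts of MSG under about 2^(n/2) chosen session keys.
    half = 1 << (N_BITS // 2 + 1)
    table = {enc(k, MSG): k for k in range(1, half)}

    # Online: observe (nonce, ciphertext) pairs until a session key collides.
    for _ in range(16 * half):
        nonce = secrets.randbelow(P - 1) + 1
        ct = enc(g(MASTER, nonce), MSG)            # response of the device
        if ct in table:
            session = table[ct]
            master = (session * pow(nonce, -1, P)) % P
            print("master key recovered:", master == MASTER)
            break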
Evidence of a Larger EM-Induced Fault Model
Abstract
Electromagnetic waves have recently been pointed out as a medium for fault injection within circuits featuring cryptographic modules. Indeed, it has been experimentally demonstrated by A. Dehbaoui et al. [3] that an electromagnetic pulse, produced with a high voltage pulse generator and a probe similar to that used to perform EM analyses, can create faults exploitable from a cryptanalysis viewpoint. An analysis of the induced faults [4] revealed that they originated from timing constraint violations.
This paper experimentally demonstrates that EM injection, performed with enhanced probes, is very local and can produce not only timing faults but also bit-set and bit-reset faults. This result clearly extends the range of the threats associated with EM fault injection.
S. Ordas, L. Guillaume-Sage, K. Tobich, J.-M. Dutertre, P. Maurine
Backmatter
Metadata
Title
Smart Card Research and Advanced Applications
Edited by
Marc Joye
Amir Moradi
Copyright Year
2015
Electronic ISBN
978-3-319-16763-3
Print ISBN
978-3-319-16762-6
DOI
https://doi.org/10.1007/978-3-319-16763-3
