
2016 | Book

Applied Cryptography and Network Security

14th International Conference, ACNS 2016, Guildford, UK, June 19-22, 2016. Proceedings


About this book

This book constitutes the refereed proceedings of the 14th International Conference on Applied Cryptography and Network Security, ACNS 2016, held in Guildford, UK, in June 2016. The 35 revised full papers included in this volume, presented together with 2 invited talks, were carefully reviewed and selected from 183 submissions. ACNS is an annual conference focusing on innovative research and current developments that advance the areas of applied cryptography, cyber security and privacy.

Table of Contents

Frontmatter

Authentication and Key Establishment

Frontmatter
On the Security of the Algebraic Eraser Tag Authentication Protocol

The Algebraic Eraser has been gaining prominence as SecureRF, the company commercializing the algorithm, increases its marketing reach. The scheme is claimed to be well-suited to IoT applications, but a lack of detail in available documentation has hampered peer review. Recently more details of the system have emerged after a tag authentication protocol built using the Algebraic Eraser was proposed for standardization in ISO/IEC SC31 and SecureRF provided an open public description of the protocol. In this paper we describe a range of attacks on this protocol that include very efficient and practical tag impersonation as well as partial, and total, tag secret key recovery. Most of these results have been practically verified; they contrast with the 80-bit security that is claimed for the protocol, and they emphasize the importance of independent public review for any cryptographic proposal.

Simon R. Blackburn, M. J. B. Robshaw
A Cryptographic Analysis of UMTS/LTE AKA

Secure communications between mobile subscribers and their associated operator networks require mutual authentication and key derivation protocols. The 3GPP standard provides the AKA protocol for just this purpose. Its structure is generic, to be instantiated with a set of seven cryptographic algorithms. The currently-used proposal instantiates these by means of a set of AES-based algorithms called MILENAGE; as an alternative, the ETSI SAGE committee submitted the TUAK algorithms, which rely on a truncation of the internal permutation of Keccak. In this paper, we provide a formal security analysis of the AKA protocol in its complete three-party setting. We formulate requirements both with respect to Man-in-the-Middle (MiM) adversaries, i.e. key-indistinguishability and impersonation security, and with respect to local untrusted serving networks, denoted “servers”, namely state-confidentiality and soundness. We prove that the unmodified AKA protocol attains these properties as long as servers cannot be corrupted. Furthermore, adding a unique server identifier suffices to guarantee all the security statements even in the presence of corrupted servers. We use a modular proof approach: the first step is to prove the security of (modified and unmodified) AKA with generic cryptographic algorithms that can be represented as a unitary pseudorandom function (PRF) keyed either with the client’s secret key or with the operator key. A second step shows that TUAK and MILENAGE guarantee this type of pseudorandomness, though the guarantee for MILENAGE requires a stronger assumption. Our paper provides (to our knowledge) the first complete, rigorous analysis of the original AKA protocol and these two instantiations. We stress that such an analysis is important for any protocol deployed in real-life scenarios.

Stephanie Alt, Pierre-Alain Fouque, Gilles Macario-rat, Cristina Onete, Benjamin Richard
Low-Cost Mitigation Against Cold Boot Attacks for an Authentication Token

Hardware tokens for user authentication need a secure and usable mechanism to lock them when not in use. The Pico academic project proposes an authentication token unlocked by the proximity of simpler wearable devices that provide shares of the token’s master key. This method, however, is vulnerable to a cold boot attack: an adversary who captures a running Pico could extract the master key from its RAM and steal all of the user’s credentials. We present a cryptographic countermeasure—bivariate secret sharing—that protects all the credentials except the one in use at that time, even if the token is captured while it is on. Remarkably, our key storage costs for the wearables that supply the cryptographic shares are very modest (256 bits) and remain constant even if the token holds thousands of credentials. Although bivariate secret sharing has been used before in slightly different ways, our scheme is leaner and more efficient and achieves a new property—cold boot protection. We validated the efficacy of our design by implementing it on a commercial Bluetooth Low Energy development board and measuring its latency and energy consumption. For reasonable choices of latency and security parameters, a standard CR2032 button-cell battery can power our prototype for 5–7 months, and we demonstrate a simple enhancement that could make the same battery last for over 9 months.
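The bivariate idea can be sketched in a few lines of Python. This is a toy illustration of bivariate secret sharing with Lagrange reconstruction, not the authors' actual scheme: the field size, threshold, and storage layout are invented here, and the sketch does not reproduce their constant 256-bit wearable storage.

```python
import random

P = 2**61 - 1  # prime modulus for the toy field (illustrative choice)

def make_poly(t, d, seed=0):
    """Random bivariate polynomial f(x, y), degree < t in x and < d in y."""
    rng = random.Random(seed)
    return [[rng.randrange(P) for _ in range(d)] for _ in range(t)]

def f(coeffs, x, y):
    """Evaluate f(x, y) = sum_{a,b} c[a][b] x^a y^b mod P."""
    return sum(c * pow(x, a, P) * pow(y, b, P)
               for a, row in enumerate(coeffs)
               for b, c in enumerate(row)) % P

def wearable_share(coeffs, i, j):
    """Share that wearable i contributes towards credential j: f(i, j)."""
    return f(coeffs, i, j)

def recover_credential_key(shares):
    """Lagrange-interpolate {(i, f(i, j))} in x at x = 0 to get f(0, j)."""
    key = 0
    for i, fi in shares.items():
        num, den = 1, 1
        for m in shares:
            if m != i:
                num = num * m % P       # product of (0 - m) terms, sign folded in
                den = den * (m - i) % P
        key = (key + fi * num * pow(den, P - 2, P)) % P
    return key
```

In this toy, the key f(0, j) for credential j is materialized only while credential j is in use, which mirrors the cold-boot property described in the abstract.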

Ian Goldberg, Graeme Jenkinson, Frank Stajano
Two More Efficient Variants of the J-PAKE Protocol

Recently, the password-authenticated key exchange protocol J-PAKE of Hao and Ryan (Workshop on Security Protocols 2008) was formally proven secure in the algebraic adversary model by Abdalla et al. (IEEE S&P 2015). In this paper, we propose and examine two variants of J-PAKE - which we call RO-J-PAKE and CRS-J-PAKE - that each make use of two fewer zero-knowledge proofs than the original protocol. We show that they are provably secure following a similar strategy to that of Abdalla et al. We also study their efficiency compared to J-PAKE’s, taking into account how the groups are chosen. Namely, we treat the cases of subgroups of finite fields and elliptic curves. Our work reveals that, for subgroups of finite fields, CRS-J-PAKE is indeed more efficient than J-PAKE, while RO-J-PAKE is much less efficient. On the other hand, when instantiated with elliptic curves, both RO-J-PAKE and CRS-J-PAKE are more efficient than J-PAKE, with CRS-J-PAKE being the best of the three. Regardless of implementation, we note that RO-J-PAKE has a looser security reduction than both J-PAKE and CRS-J-PAKE. CRS-J-PAKE has the tightest security proof, but relies on an additional trust assumption at setup time.
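Each J-PAKE message carries zero-knowledge proofs of knowledge of the discrete logarithms involved, and the two variants each save two such proofs per run. A minimal Fiat-Shamir Schnorr proof of knowledge over a toy group illustrates what one such proof looks like (the parameters and the fixed nonce are for illustration only; a real proof uses a large group and a fresh random nonce):

```python
import hashlib

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(*vals):
    """Fiat-Shamir challenge hash, reduced mod the group order."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x, r=5):
    """Prove knowledge of x with y = g^x (nonce r fixed here for determinism;
    it MUST be fresh and random in any real use)."""
    y = pow(g, x, p)
    t = pow(g, r, p)        # commitment
    c = H(g, y, t)          # challenge
    s = (r + c * x) % q     # response
    return y, (t, s)

def verify(y, proof):
    t, s = proof
    c = H(g, y, t)
    return pow(g, s, p) == t * pow(y, c, p) % p
```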

Jean Lancrenon, Marjan Škrobot, Qiang Tang
Hash-Based TPM Signatures for the Quantum World

Trusted Platform Modules (TPMs) provide trust and attestation services to the platforms they reside on, using public key encryption and digital signatures among other cryptography operations. However, the current standards mandate primitives that will be insecure in the presence of quantum computers. In this paper, we study how to eliminate these insecure primitives. We replace RSA-based digital signatures with a hash-based scheme. We show that this scheme can be implemented using reasonable amounts of space on the TPM. We also show how to protect the TPM from rollback attacks against these state-sensitive signature operations.
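As background on the hash-based approach, the classic Lamport one-time signature shows how a signature scheme can be built from a hash function alone. This is a textbook warm-up, not the scheme proposed in the paper (which must additionally manage state and space on the TPM):

```python
import hashlib
import secrets

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # Two secret 32-byte preimages per message-digest bit; public key is their hashes.
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(a), H(b)] for a, b in sk]
    return sk, pk

def _bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one preimage per digest bit; the key must never be reused.
    return [sk[i][b] for i, b in enumerate(_bits(msg))]

def verify(pk, msg: bytes, sig):
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(_bits(msg)))
```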

Megumi Ando, Joshua D. Guttman, Alberto R. Papaleo, John Scire

Signatures with Advanced Properties

Frontmatter
Fuzzy Signatures: Relaxing Requirements and a New Construction

Takahashi et al. (ACNS 2015) introduced the notion of fuzzy signature, which is a signature scheme that allows a signature to be generated using “fuzzy data” (i.e. a noisy string such as a biometric feature) as a signing key, without using any additional user-specific data (such as a helper string in the context of fuzzy extractors). They gave a generic construction of a fuzzy signature scheme from the combination of an ordinary signature scheme with some homomorphic properties regarding keys and signatures, and a new primitive that they call linear sketch, and showed a concrete instantiation based on the Waters signature scheme (EUROCRYPT 2005). A major weakness of their scheme is that fuzzy data is assumed to be distributed uniformly; another is that it has somewhat large public parameters (proportional to the security parameter) and requires bilinear groups. Either (or both) of these properties could be a barrier to implementation and/or practical use. In this paper, we revisit the results of Takahashi et al.: we show that in their generic construction, the requirements on each of the building blocks can be relaxed in several aspects. More specifically, our relaxation for the underlying linear sketch scheme allows us to use a new linear sketch scheme (that we propose) for a fuzzy key setting different from that of Takahashi et al., for which we only require that the average min-entropy of fuzzy data is high (even when some part of its information is leaked). Furthermore, our relaxation on the underlying signature scheme enables us to use the Schnorr signature scheme as a building block; this relaxation routes through a variant of related-key security for signature schemes. Our concrete instantiation of a fuzzy signature scheme is, although relying on a random oracle, arguably much more practical than the scheme by Takahashi et al.

Takahiro Matsuda, Kenta Takahashi, Takao Murakami, Goichiro Hanaoka
Foundations of Fully Dynamic Group Signatures

Group signatures are a central cryptographic primitive that has received a considerable amount of attention from the cryptographic community. They allow members of a group to anonymously sign on behalf of the group. Membership is overseen by a designated group manager. There is also a tracing authority that can revoke anonymity by revealing the identity of the signer if and when needed, to enforce accountability and deter abuse. For the primitive to be applicable in practice, it needs to support fully dynamic groups, i.e. users can join and leave at any time. In this work we take a close look at existing security definitions for fully dynamic group signatures. We identify a number of shortcomings in existing security definitions and fill the gap by providing a formal rigorous security model for the primitive. Our model is general and is not tailored towards a specific design paradigm and can therefore, as we show, be used to argue about the security of different existing constructions following different design paradigms. Our definitions are stringent and when possible incorporate protection against maliciously chosen keys. In the process, we identify a subtle issue inherent to one design paradigm, where new members might try to implicate older ones by means of back-dated signatures. This is not captured by existing models. We propose some inexpensive fixes for some existing constructions to avoid the issue.

Jonathan Bootle, Andrea Cerulli, Pyrros Chaidos, Essam Ghadafi, Jens Groth
A Lattice-Based Group Signature Scheme with Message-Dependent Opening

Group signatures are an important anonymity primitive allowing users to sign messages while hiding in a crowd. At the same time, signers remain accountable since an authority is capable of de-anonymizing signatures via a process called opening. In many situations, this authority is granted too much power as it can identify the author of any signature. Sakai et al. proposed a flavor of the primitive, called Group Signature with Message-Dependent Opening (GS-MDO), where opening operations are only possible when a separate authority (called “admitter”) has revealed a trapdoor for the corresponding message. So far, all existing GS-MDO constructions rely on bilinear maps, partially because the message-dependent opening functionality inherently implies identity-based encryption. This paper proposes the first GS-MDO candidate based on lattice assumptions. Our construction combines the group signature of Ling, Nguyen and Wang (PKC’15) with two layers of identity-based encryption. These components are tied together using suitable zero-knowledge argument systems.

Benoît Libert, Fabrice Mouhartem, Khoa Nguyen
Threshold-Optimal DSA/ECDSA Signatures and an Application to Bitcoin Wallet Security

While threshold signature schemes have been presented before, there has never been an optimal threshold signature algorithm for DSA. The properties of DSA make it quite challenging to build a threshold version. In this paper, we present a threshold DSA scheme that is efficient and optimal. We also present a compelling application to use our scheme: securing Bitcoin wallets. Bitcoin thefts are on the rise, and threshold DSA is necessary to secure Bitcoin wallets. Our scheme is the first general threshold DSA scheme that does not require an honest majority and is useful for securing Bitcoin wallets.
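Recall the DSA signing equation s = k^(-1)(H(m) + x·r) mod q: a threshold version must evaluate this jointly on shares of the long-term key x and a shared ephemeral k, and the modular inversion of k is precisely what resists naive secret sharing. A toy, insecurely small DSA with a stand-in "hash" makes the equation concrete (illustrative only; real DSA uses large parameters, SHA-2, and a fresh random k per signature):

```python
p, q, g = 2039, 1019, 4   # toy parameters: p = 2q + 1, g of order q (far too small for real use)

def Hq(msg: bytes) -> int:
    # Stand-in "hash" so the example is fully deterministic; real DSA uses SHA-2.
    return sum(msg) % q

def sign(x, msg, k=7):
    # s = k^{-1} (H(m) + x r) mod q: the inverse of the shared k and the product
    # x*r are the operations a threshold protocol must compute on shares.
    r = pow(g, k, p) % q
    s = pow(k, q - 2, q) * (Hq(msg) + x * r) % q
    return r, s

def verify(y, msg, sig):
    r, s = sig
    w = pow(s, q - 2, q)
    u1, u2 = Hq(msg) * w % q, r * w % q
    return pow(g, u1, p) * pow(y, u2, p) % p % q == r
```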

Rosario Gennaro, Steven Goldfeder, Arvind Narayanan
Legally Fair Contract Signing Without Keystones

In two-party computation, achieving both fairness and guaranteed output delivery is well known to be impossible. Despite this limitation, many approaches provide solutions of practical interest by somewhat weakening the fairness requirement. Such approaches fall roughly into three categories: “gradual release” schemes assume that the aggrieved party can eventually reconstruct the missing information; “optimistic” schemes assume a trusted third-party arbitrator that can restore fairness in case of litigation; and “concurrent” or “legally fair” schemes, in which a breach of fairness is compensated by the aggrieved party holding a digitally signed cheque from the other party (called the keystone). In this paper we describe and analyse a new contract signing paradigm that does not require keystones to achieve legal fairness, and give a concrete construction based on Schnorr signatures which is compatible with standard Schnorr signatures and provably secure.

Houda Ferradi, Rémi Géraud, Diana Maimuț, David Naccache, David Pointcheval

DoS Attacks and Network Anomaly Detection

Frontmatter
Why Software DoS Is Hard to Fix: Denying Access in Embedded Android Platforms

A new class of software Denial of Service (DoS) attacks against Android platforms was recently discovered, where the attacks can render the victim device unresponsive, target and terminate other applications on the device, and continuously soft reboot the device [26]. After Google was informed of these DoS attacks, their attempt to resolve the problem did not adequately address the fundamental underlying attack principles. In this paper, we show that engineering software DoS defenses is challenging, especially for embedded and resource-constrained devices. To support our findings, we detail a revised DoS attack strategy for the latest version of Android. In our experimental evaluation, we demonstrate that the new class of DoS attacks is even more damaging to embedded Android devices. As part of our proof-of-concept attacks, we were able to render the Sony Bravia XBR-43X830C Android TV and the Amazon Fire TV Stick 1st generation devices permanently unusable. In addition, other devices, including the Moto 360 1st generation smartwatch, required flashing firmware images, whereas the Nvidia Shield Android TV and the Amazon Fire 7″ Tablet required a factory reset to recover. Our attack is applicable to most Android devices and requires manual intervention to attempt to recover the device. The proposed attack strategy is more debilitating to devices that do not provide means for the end-user to easily access safe mode, recovery mode, or the ability to flash firmware images. To mitigate the attack, we created an open-source defense application that has a 100 % prevention rate after a single soft reboot of the device while incurring less than 1.6 % performance overhead.

Ryan Johnson, Mohamed Elsabagh, Angelos Stavrou
Network Anomaly Detection Using Unsupervised Feature Selection and Density Peak Clustering

Intrusion detection systems (IDSs) play a significant role in defending crucial computer systems and networks against attackers on the Internet. Anomaly detection is an effective way to detect intrusions, as it can discover patterns that do not conform to expected behavior. Mainstream anomaly detection systems (ADSs) use data mining technology to automatically extract normal and abnormal patterns from a large set of network data and distinguish them from each other. However, supervised and semi-supervised approaches in data mining rely on data label information, which is not practical when the network data is large-scale. In this paper, we propose a two-stage approach, combining unsupervised feature selection and density peak clustering, to tackle situations where labels are lacking. First, the density-peak-based clustering approach is introduced for network anomaly detection; it considers both the distance and density nature of data. Second, to improve the clustering process, we use the maximal information coefficient and feature clustering to remove redundant and irrelevant features. Experimental results show that our method can discard useless features of high-dimensional data while achieving high detection accuracy and efficiency.
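The density-peak idea rests on two quantities per point: the local density rho (number of neighbours within a cutoff) and delta, the distance to the nearest denser point. High rho·delta marks cluster centers; low rho marks anomaly candidates. A toy version follows (the tie-breaking convention and parameters are illustrative, not the paper's exact algorithm):

```python
def density_peaks(points, d_c):
    """Toy density-peak computation on 2-D points.
    rho[i]  = number of other points within distance d_c,
    delta[i] = distance to the nearest point of higher density
               (ties broken by index, a toy convention)."""
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    n = len(points)
    rho = [sum(1 for j in range(n) if j != i and dist(points[i], points[j]) < d_c)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist(points[i], points[j]) for j in range(n)
                  if rho[j] > rho[i] or (rho[j] == rho[i] and j < i)]
        delta.append(min(higher) if higher
                     else max(dist(points[i], points[j]) for j in range(n) if j != i))
    return rho, delta
```

On two tight blobs plus one far-away point, the outlier gets rho = 0 and a large delta, while the two blob representatives dominate the rho·delta ranking.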

Xiejun Ni, Daojing He, Sammy Chan, Farooq Ahmad

Deterministic and Functional Encryption

Frontmatter
More Efficient Constructions for Inner-Product Encryption

We propose two new constructions for inner-product encryption (IPE), both secure under the eXternal Diffie-Hellman assumption (SXDH) in asymmetric pairing groups. The first scheme has constant-size ciphertexts, whereas the second one is weakly attribute hiding. The first scheme is derived from the identity-based encryption scheme of Jutla and Roy (Asiacrypt 2013), which was in turn built from tag-based quasi-adaptive non-interactive zero-knowledge (QA-NIZK) proofs for linear subspaces of vector spaces over bilinear groups. The verifier common reference string (CRS) in these tag-based systems is split into two parts that are combined during verification. We consider an alternate form of the tag-based QA-NIZK proof with a single verifier CRS that already includes a tag, different from the one defining the language. The verification succeeds as long as the two tags are unequal. Essentially, we embed a two-equation revocation mechanism in the verification. The new QA-NIZK proof system leads to an IPE scheme with very short, constant-size ciphertexts. Both IPE schemes are obtained by applying the n-equation revocation technique of Attrapadung and Libert (PKC 2010) to the corresponding identity-based encryption schemes and are proved secure under the SXDH assumption. As an application, we show how our schemes can be specialised to obtain the first fully secure identity-based broadcast encryption based on SXDH with a trade-off among the public parameters, ciphertext and key sizes, all of them being sub-linear in the maximum number of recipients of a broadcast.

Somindu C. Ramanna
Attribute Based Encryption with Direct Efficiency Tradeoff

We propose the first fully secure unbounded Attribute-Based Encryption (ABE) scheme in which the key size and ciphertext size can be directly traded off. Our proposed scheme is parameterized by a positive integer d, which can be arbitrarily chosen at setup. In our scheme, the ciphertext size is O(t/d), the private key size is O(md), and the public key size is O(d), where t, m are the sizes of the attribute sets and policies corresponding to the ciphertext and private key, respectively. Our scheme can be considered a generalization that includes two of the state-of-the-art ABE instantiations, namely, the unbounded ABE scheme and the ABE scheme with constant-size ciphertexts proposed by Attrapadung (Eurocrypt 2014). Indeed, these two schemes correspond to the two extreme cases of our scheme, obtained by setting d = 1 and by setting d to the maximum size of allowed attribute sets, respectively. Furthermore, our scheme also yields a tradeoff between encryption and decryption time. Interestingly, when estimating efficiency using numerical parameters, the decryption time is minimized at a value of d somewhere in the middle of the spectrum. We believe that this tradeoff can provide advantages in applications where size and/or time resources are concretely fixed in advance, as we can flexibly adjust d to match available resources and thus make the most of them. Such situations include, but are not limited to, implementations of ABE in tiny hardware tokens.
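A back-of-the-envelope model shows the shape of the tradeoff. All constants are set to 1 here, which is a hypothetical cost model, not the scheme's concrete group-element counts:

```python
from math import ceil

def sizes(t, m, d):
    """Component counts for parameter d with unit constants:
    ciphertext O(t/d), private key O(m*d), public key O(d)."""
    return ceil(t / d), m * d, d

t, m = 100, 10   # hypothetical attribute-set and policy sizes
# Minimizing the combined size is lowest neither at d = 1 nor at d = t,
# mirroring the abstract's observation that the optimum sits in the middle.
best_d = min(range(1, t + 1), key=lambda d: sum(sizes(t, m, d)))
```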

Nuttapong Attrapadung, Goichiro Hanaoka, Tsutomu Matsumoto, Tadanori Teruya, Shota Yamada
Turing Machines with Shortcuts: Efficient Attribute-Based Encryption for Bounded Functions

We propose a direct construction of an attribute-based encryption (ABE) scheme for bounded multi-stack deterministic pushdown automata (DPDAs) and Turing machines that have polynomial runtime in the security parameter. In particular, we show how to extend our construction to handle bounded DPDAs with two or more stacks, which leads to an ABE scheme for deterministic Turing machines (DTMs) with polynomial runtime. Our ABE schemes have “input-specific” decryption runtime, meaning that the decryption time depends on the semantics of the attributes: if a machine halts prematurely on a certain input, its execution can be cut short. To the best of our knowledge, our ABE scheme is the first one that achieves this property and has security proofs based on a standard cryptographic assumption. The key technical ingredient we apply is a special graph encoding of the executions of bounded multi-stack DPDAs, allowing us to remember just enough of the execution history to enforce correct evaluation. The security of our scheme is based on the learning with errors (LWE) problem in the selective security model.

Xavier Boyen, Qinyi Li
Offline Witness Encryption

Witness encryption (WE) was introduced by Garg et al. [GGSW13]. A WE scheme is defined for some NP language L and lets a sender encrypt messages relative to instances x. A ciphertext for x can be decrypted using a witness w for x ∈ L, but hides the message if x ∉ L. Garg et al. construct WE from multilinear maps and give another construction [GGH+13b] using indistinguishability obfuscation (iO) for circuits. Due to the reliance on such heavy tools, WE can currently hardly be implemented even on powerful hardware and is unlikely to be realizable on constrained devices like smart cards any time soon. We construct a WE scheme where encryption is done by simply computing a Naor-Yung ciphertext (two CPA encryptions and a NIZK proof). To achieve this, our scheme has a setup phase, which outputs public parameters containing an obfuscated circuit (only required for decryption), two encryption keys and a common reference string (used for encryption). This setup need only be run once, and the parameters can be used for arbitrarily many encryptions. Our scheme can also be turned into a functional WE scheme, where a message is encrypted w.r.t. a statement and a function f, and decryption with a witness w yields f(m, w). Our construction is inspired by the functional encryption scheme by Garg et al., and we prove (selective) security assuming iO and statistically simulation-sound NIZK. We give a construction of the latter in bilinear groups; combining it with ElGamal encryption, our ciphertexts are of size 1.3 kB at a 128-bit security level and can be computed on a smart card.
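The Naor-Yung shape of the ciphertext (two CPA encryptions of the same message plus a consistency proof) can be sketched with toy ElGamal. The hash below is only a placeholder for the NIZK and provides no soundness, and the group is far too small for real use:

```python
import hashlib

p, q, g = 2039, 1019, 4  # toy Schnorr group: p = 2q + 1, g of order q

def keygen(x):
    return x, pow(g, x, p)

def enc(y, M, r):
    """Toy ElGamal encryption of a group element M under public key y."""
    return pow(g, r, p), M * pow(y, r, p) % p

def dec(x, ct):
    c1, c2 = ct
    return c2 * pow(pow(c1, x, p), p - 2, p) % p

def ny_encrypt(y1, y2, M, r1=11, r2=13):
    """Naor-Yung structure: the same M under two independent keys, plus a
    stand-in 'proof' (NOT a real NIZK of plaintext equality)."""
    ct1, ct2 = enc(y1, M, r1), enc(y2, M, r2)
    pi = hashlib.sha256(repr((ct1, ct2)).encode()).hexdigest()  # placeholder only
    return ct1, ct2, pi
```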

Hamza Abusalah, Georg Fuchsbauer, Krzysztof Pietrzak
Deterministic Public-Key Encryption Under Continual Leakage

Deterministic public-key encryption, introduced by Bellare, Boldyreva, and O’Neill (CRYPTO 2007), is an important technique for searchable encryption; it allows quick, logarithmic-time search over encrypted data items. The technique is most effective in scenarios where frequent search queries are performed over a huge database of unpredictable data items. We initiate the study of deterministic public-key encryption (D-PKE) in the presence of leakage. We formulate appropriate security notions for leakage-resilient D-PKE, and present constructions that achieve them in the standard model. We work in the continual leakage model, where the secret key is updated at regular intervals and an attacker can learn arbitrary but bounded leakage on the secret key during each time interval. We, however, do not consider leakage during the updates. Our main construction is based on the (standard) linear assumption in bilinear groups, tolerating up to a 0.5 − o(1) fraction of arbitrary leakage. The leakage rate can be improved to 1 − o(1) by relying on the SXDH assumption. At a technical level, we propose and construct a “continual leakage resilient” version of the all-but-one lossy trapdoor functions, introduced by Peikert and Waters (STOC 2008). Our formulation and construction of leakage-resilient lossy TDFs is of independent interest for leakage-resilient cryptography.

Venkata Koppula, Omkant Pandey, Yannis Rouselakis, Brent Waters

Computing on Encrypted Data

Frontmatter
Better Preprocessing for Secure Multiparty Computation

We present techniques and protocols for the preprocessing of secure multiparty computation (MPC), focusing on the so-called SPDZ MPC scheme [14] and its derivatives [1, 11, 13]. These MPC schemes consist of a so-called preprocessing or offline phase, where correlated randomness is generated that is independent of the inputs and the evaluated function, and an online phase, where this correlated randomness is consumed to securely and efficiently evaluate circuits. In recent years, such protocols (e.g. [5, 17, 18]) have been shown to be very efficient in practice. While much research has been conducted towards optimizing the online phase of MPC protocols, there has been less focus on the offline phase (except for [11]). With this work, we want to close this gap and give a toolbox of techniques that aim at optimizing the preprocessing. We support instantiations over small fields and large rings, using somewhat homomorphic encryption and the Paillier cryptosystem [19], respectively. In the case of small fields, we show how the preprocessing overhead can essentially be made independent of the field characteristic. In the case of large rings, we present a protocol based on the Paillier cryptosystem which has a lower message complexity than previous protocols and employs more efficient zero-knowledge proofs that, to the best of our knowledge, were not presented in previous work.
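For the large-ring case, the underlying tool is Paillier's additively homomorphic cryptosystem: multiplying two ciphertexts yields an encryption of the sum of the plaintexts. A textbook-sized sketch with toy primes (real deployments use primes of at least 1024 bits):

```python
from math import gcd

p_, q_ = 101, 113          # toy primes; far too small for real use
n = p_ * q_
n2 = n * n
lam = (p_ - 1) * (q_ - 1) // gcd(p_ - 1, q_ - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)       # valid because we use g = n + 1 (Python 3.8+)

def enc(m, r):
    """Encrypt m in Z_n with randomness r coprime to n: (1+n)^m r^n mod n^2."""
    assert gcd(r, n) == 1
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def dec(c):
    """Decrypt via the L-function: m = L(c^lam mod n^2) * mu mod n."""
    L = (pow(c, lam, n2) - 1) // n
    return L * mu % n
```

Multiplying ciphertexts adds plaintexts, which is the homomorphism the preprocessing exploits: `dec(enc(a, r1) * enc(b, r2) % n2) == (a + b) % n`.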

Carsten Baum, Ivan Damgård, Tomas Toft, Rasmus Zakarias
Trinocchio: Privacy-Preserving Outsourcing by Distributed Verifiable Computation

Verifiable computation allows a client to outsource computations to a worker with a cryptographic proof of correctness of the result that can be verified faster than performing the computation. Recently, the highly efficient Pinocchio system was introduced as a major leap towards practical verifiable computation. Unfortunately, Pinocchio and other efficient verifiable computation systems require the client to disclose the inputs to the worker, which is undesirable for sensitive inputs. To solve this problem, we propose Trinocchio: a system that distributes Pinocchio to three (or more) workers, none of which individually learns which inputs it is computing on. We fully exploit the almost linear structure of Pinocchio proofs, letting each worker essentially perform the work for a single Pinocchio proof; verification by the client remains the same. Moreover, we extend Trinocchio to enable joint computation with multiple mutually distrusting inputters and outputters, still with very fast verification. We show the feasibility of our approach by analysing the performance of an implementation in a case study.

Berry Schoenmakers, Meilof Veeningen, Niels de Vreede
Verifiable Multi-party Computation with Perfectly Private Audit Trail

We propose an efficient protocol for the evaluation of functions taking their inputs from multiple parties in a way that guarantees the correctness of the result. In our setting, a worker is trusted with the confidentiality of the inputs and, given this assumption, our protocol guarantees perfect privacy to the clients. Our protocol offers an interesting middle ground between traditional verifiable computation protocols, which usually do not come with privacy guarantees and focus on one or a small number of clients, and secure multi-party computation protocols, which distribute the privacy trust between a number of parties, at the cost of much more expensive protocols (especially for NP functions and functions that do not admit an efficient static circuit representation) and a demanding infrastructure of independently managed servers interacting in multiple rounds. By contrast, our protocol is single-pass: the clients submit their inputs asynchronously, and everyone can collect the result at any later time. We present three unrelated applications of our technique: solving a system of linear equations, an auction scheme, and the search for the shortest path in a shared graph. These examples illustrate the ease of use and the complexity advantage of our approach. We made a prototype implementation that illustrates the practicality of our solution.

Édouard Cuvelier, Olivier Pereira
Practical Fault-Tolerant Data Aggregation

At Financial Cryptography 2012, Chan et al. presented a novel privacy-preserving fault-tolerant data aggregation protocol. Compared to previous work, their scheme guaranteed provable privacy of individuals and could work even if some number of users refused to participate. In this paper we demonstrate that despite its merits, their method provides unacceptably low accuracy of aggregated data for a wide range of assumed parameters and cannot be used in the majority of real-life systems. To show this we use both analytic and experimental methods. Additionally, we present a precise data aggregation protocol that provides a provable level of security even when facing massive failures of nodes. Moreover, the protocol requires significantly less computation (limited use of heavy cryptography) than most currently known fault-tolerant aggregation protocols and offers better security guarantees, making it suitable for systems of limited resources (including sensor networks). To obtain our result, however, we relax the model and allow some limited communication between the nodes.

Krzysztof Grining, Marek Klonowski, Piotr Syga
Accelerating Homomorphic Computations on Rational Numbers

Fully Homomorphic Encryption (FHE) schemes are conceptually very powerful tools for outsourcing computations on confidential data. However, experience shows that FHE-based solutions are not yet sufficiently efficient for practical applications. Hence, there is huge interest in improving the performance of applying FHE to concrete use cases. What has been mainly overlooked so far is that not only the FHE schemes themselves contribute to the slowdown, but also the choice of data encoding. While FHE schemes usually allow for homomorphic execution of algebraic operations over finite fields (often Z_2), many applications call for different algebraic structures like signed rational numbers. Thus, before an FHE scheme can be used at all, the data needs to be mapped into the structure supported by the FHE scheme. We show that the choice of the encoding can already incur a significant slowdown of the overall process, independent of the efficiency of the employed FHE scheme. We compare different methods for representing signed rational numbers and investigate their impact on the effort needed for processing encrypted values. In addition to introducing a new encoding technique which is superior under some circumstances, we also present further techniques to speed up computations on encrypted data under certain conditions, each of independent interest. We confirm our results by experiments.
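A simple example of the encoding issue: fixed-point encoding maps a signed rational to a scaled integer, so homomorphic addition becomes integer addition, but every multiplication doubles the scale and forces a rescaling step. The sketch below works over plain integers; inside an FHE scheme the same arithmetic happens modulo the plaintext modulus, where rescaling causes exactly the kind of overhead the paper studies:

```python
F = 8  # number of fractional bits (illustrative choice)

def encode(x: float) -> int:
    """Map a signed rational to a scaled integer: round(x * 2^F)."""
    return round(x * (1 << F))

def decode(e: int, scale: int = F) -> float:
    return e / (1 << scale)

a, b = encode(1.5), encode(2.25)   # 384 and 576 at scale F
sum_enc = a + b                     # addition keeps the scale at F
prod_enc = a * b                    # multiplication doubles the scale to 2F
```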

Angela Jäschke, Frederik Armknecht

Non-Interactive Proofs and PRFs

Frontmatter
New Techniques for Non-interactive Shuffle and Range Arguments

We construct the most efficient non-interactive Argument of Correctness of a Shuffle and Range Argument under falsifiable assumptions in asymmetric bilinear groups. Our constructions use as a common building block a novel quasi-adaptive argument for proving that n commitments open to messages in a public set S, with proof-size independent of n.

Alonso González, Carla Ráfols
Constrained PRFs for Unbounded Inputs with Short Keys

A constrained pseudorandom function (CPRF) $$F:\mathcal{K} \times \mathcal{X} \rightarrow \mathcal{Y}$$ for a family $$\mathcal{T}$$ of subsets of $$\mathcal{X}$$ is a function where for any key $$k \in \mathcal{K}$$ and set $$S \in \mathcal{T}$$ one can efficiently compute a short constrained key $$k_S$$, which allows one to evaluate $$F(k,\cdot)$$ on all inputs $$x \in S$$, while the outputs on all inputs $$x \notin S$$ look random even given $$k_S$$. Abusalah et al. recently constructed the first constrained PRF for inputs of arbitrary length whose sets S are decided by Turing machines. They use their CPRF to build broadcast encryption and the first ID-based non-interactive key exchange for an unbounded number of users. Their constrained keys are obfuscated circuits and are therefore large. In this work we drastically reduce the key size and define a constrained key for a Turing machine M as a short signature on M. For this, we introduce a new signature primitive with constrained signing keys that let one sign only certain messages, while forging a signature on others is hard even when knowing the coins for key generation.
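To make the CPRF interface above concrete, here is a toy prefix-constrained PRF in the classic GGM style: a constrained key for the set S of all inputs starting with a fixed bit prefix is simply the GGM tree node reached by that prefix. This is only an illustration with hypothetical names, using HMAC-SHA256 as the PRG step; it is far weaker than the paper's Turing-machine-constrained construction, whose keys are short signatures on machines.

```python
import hmac, hashlib

def prg(seed: bytes, bit: int) -> bytes:
    # One half of a length-doubling PRG, modelled here with HMAC-SHA256.
    return hmac.new(seed, bytes([bit]), hashlib.sha256).digest()

def evaluate(key: bytes, x: str) -> bytes:
    # F(k, x): walk the GGM tree from the root along the bits of x.
    node = key
    for b in x:
        node = prg(node, int(b))
    return node

def constrain(key: bytes, prefix: str) -> bytes:
    # Constrained key k_S for S = {x : x starts with `prefix`}:
    # just the inner tree node reached by the prefix.
    node = key
    for b in prefix:
        node = prg(node, int(b))
    return node

def eval_constrained(k_s: bytes, prefix: str, x: str) -> bytes:
    # Evaluate F(k, x) for x in S using only the constrained key.
    assert x.startswith(prefix)
    node = k_s
    for b in x[len(prefix):]:
        node = prg(node, int(b))
    return node

k = b"\x00" * 32
ks = constrain(k, "10")
assert eval_constrained(ks, "10", "1011") == evaluate(k, "1011")
```

Security of the sketch follows the usual GGM argument: nodes outside the subtree rooted at the prefix remain pseudorandom given the constrained key.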

Hamza Abusalah, Georg Fuchsbauer

Symmetric Ciphers

Frontmatter
Wide Trail Design Strategy for Binary MixColumns
Enhancing Lower Bound of Number of Active S-boxes

AES is one of the most common block ciphers, and many AES-like primitives have been proposed. Recently, many lightweight symmetric-key cryptographic primitives have also been proposed. Some such primitives implement diffusion with element-wise XORs, using what we call binary matrices in this paper, rather than MDS matrices, because the element-wise XOR can be implemented efficiently in a lightweight environment. However, since the branch number of binary matrices is generally lower than that of MDS matrices, such primitives require more rounds to guarantee security against several cryptanalyses. In this paper, we focus on binary matrices and discuss their useful cryptographic properties. Specifically, we focus on AES-like primitives with binary MixColumns, whose output is computed using a binary matrix. One of the benefits of AES-like primitives is that four rounds guarantee $$\mathcal{B}^2$$ differentially and linearly active S-boxes, where $$\mathcal{B}$$ denotes the branch number of the matrix. We argue that there are binary MixColumns for which the lower bound on the number of active S-boxes exceeds $$\mathcal{B}^2$$ in the 4-round characteristic. For some binary matrices, the lower bound is improved from $$\mathcal{B}^2$$ to $$\mathcal{B}(\mathcal{B}+2)$$.

Yosuke Todo, Kazumaro Aoki
Automatic Search of Linear Trails in ARX with Applications to SPECK and Chaskey

In this paper, we study linear cryptanalysis of the ARX structure by means of automatic search. To evaluate the security of ARX designs against linear cryptanalysis, it is crucial to find (round-reduced) linear trails with maximum correlation. We model the problem of finding optimal linear trails as a boolean satisfiability problem (SAT), translate the propagation of masks through ARX operations into bitwise expressions and constraints, and then solve the problem using a SAT solver. We apply the method to find optimal linear trails for round-reduced versions of the block cipher SPECK and the MAC algorithm Chaskey. For SPECK with block size 32/48/64/96/128, we can find optimal linear trails for 22/11/13/9/9 rounds respectively, which substantially improves on previous results, especially for the larger versions. To the best of our knowledge, a 3-round optimal linear trail of Chaskey is presented for the first time. In addition, our method can be used to enumerate the trails in a linear hull, and we present two linear hulls with the distributions of trails for round-reduced SPECK32. Our work provides designers with a more accurate evaluation of ARX designs against linear cryptanalysis, especially for primitives with large block sizes and many rounds.

Yunwen Liu, Qingju Wang, Vincent Rijmen
Square Attack on 7-Round Kiasu-BC

Kiasu-BC is a tweakable block cipher presented within the TWEAKEY framework at ASIACRYPT 2014. Kiasu-BC is almost identical to AES-128; the only difference is the tweak addition, where the 64-bit tweak is XORed to the first two rows of every round key. The designers' security analysis focuses primarily on related-key related-tweak differential characteristics and meet-in-the-middle attacks. For other attacks, they conclude that the security level of Kiasu-BC is similar to that of AES-128. In this work, we provide the first third-party analysis of Kiasu-BC. We show that we can mount Square attacks on up to 7-round Kiasu-BC with a complexity of about $$2^{48.5}$$ encryptions, which improves upon the best published 7-round attacks on AES-128. Furthermore, we show that such attacks are applicable to the round-reduced $$\Theta$$CB3-like mode of the CAESAR candidate Kiasu. Specifically, we show a key-recovery attack on 7-round Kiasu$$\ne$$ with a complexity of about $$2^{82}$$ encryptions.
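As a rough sketch of the tweak addition described above, the fragment below XORs a 64-bit tweak into the first two rows of a 128-bit AES round key. The column-major 4x4 byte layout and the left-to-right byte ordering of the tweak are our assumptions for illustration, not taken from the Kiasu specification; this shows only the tweak injection, not full Kiasu-BC.

```python
def add_tweak(round_key: bytes, tweak: bytes) -> bytes:
    # View the 16-byte round key as a 4x4 byte matrix stored column-major:
    # byte 4*col + row holds row `row` of column `col`. The 8-byte tweak
    # contributes two bytes per column, landing in rows 0 and 1.
    assert len(round_key) == 16 and len(tweak) == 8
    out = bytearray(round_key)
    for col in range(4):
        out[4 * col] ^= tweak[2 * col]          # row 0 of this column
        out[4 * col + 1] ^= tweak[2 * col + 1]  # row 1 of this column
    return bytes(out)

rk = bytes(range(16))
t = bytes(range(1, 9))
# XOR is an involution: adding the same tweak twice restores the round key.
assert add_tweak(add_tweak(rk, t), t) == rk
```

Because rows 2 and 3 are never touched, half of each round key is tweak-independent, which is exactly the kind of structure a third-party analysis can try to exploit.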

Christoph Dobraunig, Maria Eichlseder, Florian Mendel
On the Design Rationale of Simon Block Cipher: Integral Attacks and Impossible Differential Attacks against Simon Variants

Simon is a lightweight block cipher designed by the NSA in 2013. The NSA presented the specification and the implementation efficiency, but did not provide a detailed security analysis or the design rationale. The original Simon has rotation constants (1, 8, 2); Kölbl et al. regarded the constants as a parameter (a, b, c) and analyzed the security of Simon block cipher variants against differential and linear attacks for all choices of (a, b, c). This paper complements the result of Kölbl et al. by considering integral and impossible differential attacks. First, we search for the number of rounds of integral distinguishers using a supercomputer. Our search algorithm follows the previous approach by Wang et al.; however, we introduce a new choice of the set of plaintexts satisfying the integral property. We show that the new choice indeed extends the number of rounds for several parameters. We also search for the number of rounds of impossible differential characteristics based on the miss-in-the-middle approach. Finally, we compare all parameters from our results and the observations by Kölbl et al. Interesting observations are obtained; for instance, we find that the parameters optimal with respect to resistance against differential attacks are not stronger than the original parameter with respect to integral and impossible differential attacks. We also obtain a parameter that is better than the original parameter with respect to security against these four attacks.

Kota Kondo, Yu Sasaki, Tetsu Iwata
Correlation Power Analysis of Lightweight Block Ciphers: From Theory to Practice

Side-Channel Analysis (SCA) represents a serious threat to the security of millions of smart devices that form part of the so-called Internet of Things (IoT). Choosing the “right” cryptographic primitive for the IoT is a highly challenging task due to the resource constraints of IoT devices and the variety of primitives. An important criterion to assess the suitability of a lightweight cipher with respect to SCA is the amount of leakage available to an adversary. In this paper, we analyze the efficiency of different selection functions that are commonly used in Correlation Power Analysis (CPA) attacks on symmetric primitives. To this end, we attacked implementations of the lightweight block ciphers AES, Fantomas, LBlock, Piccolo, PRINCE, RC5, Simon, and Speck on an 8-bit AVR processor. By exploring the relation between the nonlinearity of the studied selection functions and the measured leakages, we discovered some imperfections when using nonlinearity to quantify the resilience against CPA. Then, we applied these findings in an evaluation of the “intrinsic” CPA-resistance of unprotected implementations of the eight mentioned ciphers. We show that certain implementation aspects can influence the leakage level and try to explain why. Our results shed new light on the resilience of basic operations executed by these ciphers against CPA and help to bridge the gap between theory and practice.
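To illustrate the attack setting, the sketch below runs a textbook CPA with a Hamming-weight selection function on simulated, noiseless leakage of a 4-bit S-box (the PRESENT S-box, chosen here purely for brevity; the paper attacks implementations of eight ciphers on a real 8-bit AVR processor, and all names below are ours).

```python
# PRESENT 4-bit S-box, used as a small example target.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hw(x: int) -> int:
    # Hamming weight: the usual leakage model for unprotected software.
    return bin(x).count("1")

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

k_true = 0x9
plaintexts = list(range(16))
# Simulated "traces": noiseless leakage of the S-box output Sbox(p ^ k_true).
traces = [hw(SBOX[p ^ k_true]) for p in plaintexts]

# CPA: for every key guess k, correlate the selection function
# hw(Sbox(p ^ k)) with the traces; the correct key maximizes |correlation|.
scores = {k: abs(pearson([hw(SBOX[p ^ k]) for p in plaintexts], traces))
          for k in range(16)}
best = max(scores, key=scores.get)
assert best == k_true
```

With real, noisy measurements the correct key's correlation drops well below 1, and how sharply it still stands out depends on the nonlinearity of the selection function — the relation the paper examines empirically.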

Alex Biryukov, Daniel Dinu, Johann Großschädl

Cryptography in Software

Frontmatter
Assisted Identification of Mode of Operation in Binary Code with Dynamic Data Flow Slicing

Verification of software security properties, when conducted at the binary code level, is a difficult and cumbersome task. This paper focuses on the reverse engineering task that needs to be performed prior to any thorough analysis. A previous line of work has been dedicated to the identification of cryptographic primitives. Relying on the techniques that have been proposed, we devise a semi-automated solution to identify modes of operation. Our solution produces a concise representation of the data transfers occurring within a cryptographic scheme. Inspired by program slicing techniques, we extract from a dynamic data flow a slice, defined as the smallest subgraph that is distance preserving for the set of cryptographic parameters. We apply our solution to several modes of operation, including CBC, CTR, HMAC, and OCB. For each of them, we successfully obtain a complete and readable representation. Moreover, we show with an example that our solution can be applied to non-standard schemes to quickly discover security flaws.

Pierre Lestringant, Frédéric Guihéry, Pierre-Alain Fouque
Parallel Implementation of BDD Enumeration for LWE

One of the most attractive problems for post-quantum secure cryptographic schemes is the LWE problem. Besides combinatorial and algebraic attacks, LWE can be solved by a lattice-based Bounded Distance Decoding (BDD) approach. We provide the first parallel implementation of an enumeration-based BDD algorithm that employs the Lindner-Peikert and Linear Length pruning strategies. We ran our algorithm on a large variety of LWE parameters, from which we derive the following interesting results. First, our parallel enumeration achieves almost perfect speed-up, which allows us to provide, for the first time, practical cryptanalytic results on standard LWE parameters of meaningful size. Second, we conclude that lattice-based attacks perform better than recent advanced BKW-type algorithms even for small noise, while requiring far fewer samples. Third, we experimentally show weaknesses in a binary-matrix LWE proposal of Galbraith.

Elena Kirshanova, Alexander May, Friedrich Wiemer
Memory Carving in Embedded Devices: Separate the Wheat from the Chaff

This paper investigates memory carving techniques for embedded devices. Given that cryptographic material in memory dumps makes carving techniques inefficient, we introduce a methodology to distinguish meaningful information from cryptographic material in small-sized memory dumps. The proposed methodology uses an adaptive boosting technique with statistical tests. In experiments on EMV cards, the methodology recognized 92 % of meaningful information and 98 % of cryptographic material.

Thomas Gougeon, Morgan Barbier, Patrick Lacharme, Gildas Avoine, Christophe Rosenberger

Security for Human Use

Frontmatter
CAPTCHaStar! A Novel CAPTCHA Based on Interactive Shape Discovery

Over the last years, most websites on which users can register (e.g., email providers and social networks) have adopted CAPTCHAs (Completely Automated Public Turing tests to tell Computers and Humans Apart) as a countermeasure against automated attacks. The battle of wits between designers and attackers of captchas has led to current ones being annoying and hard to solve for users, while still being vulnerable to automated attacks. In this paper, we propose CAPTCHaStar, a new image-based captcha that relies on user interaction. This novel captcha leverages the innate human ability to recognize shapes in a confused environment. We assess the effectiveness of our proposal for the two key aspects of captchas, i.e., usability and resiliency to automated attacks. In particular, we evaluated usability by carrying out a thorough user study, and we tested the resiliency of our proposal against several types of automated attacks: traditional ones, attacks designed ad hoc for our proposal, and attacks based on machine learning. Compared to the state of the art, our proposal is more user-friendly (e.g., only some 35 % of users prefer current solutions, such as text-based captchas) and more resilient to automated attacks.

Mauro Conti, Claudio Guarisco, Riccardo Spolaor
TMGuard: A Touch Movement-Based Security Mechanism for Screen Unlock Patterns on Smartphones

Secure user authentication is a big challenge for smartphone security. To overcome the drawbacks of knowledge-based methods, various graphical passwords have been proposed to enhance user authentication on smartphones. Android unlock patterns are an Android OS feature aiming to authenticate users based on graphical patterns. However, recent studies have shown that attackers can easily compromise this unlock mechanism (e.g., by means of smudge attacks). We advocate that additional mechanisms should be added to improve the security of unlock patterns. In this paper, we first show that users perform a touch movement differently when interacting with the touchscreen, and that a user's touch movements become fairly stable for the same pattern after several trials. We then develop a touch movement-based security mechanism, called TMGuard, to enhance the authentication security of Android unlock patterns by verifying users' touch movement during pattern input. In the evaluation, our user study with 75 participants demonstrates that TMGuard can positively improve the security of Android unlock patterns without compromising its usability.

Weizhi Meng, Wenjuan Li, Duncan S. Wong, Jianying Zhou
Gesture-Based Continuous Authentication for Wearable Devices: The Smart Glasses Use Case

We study the feasibility of touch gesture behavioural biometrics for implicit authentication of users on smart glasses by proposing a continuous authentication system on Google Glass using two classifiers: SVM with RBF kernel, and a new classifier based on Chebyshev's concentration inequality. Based on data collected from 30 users, we show that such authentication is feasible both in terms of classification accuracy and computational load on Glass. We achieve a classification accuracy of up to 99 % with only 75 training samples using behavioural biometric data from four different types of touch gestures. To show that our system can be generalized, we test its performance on touch data from smartphones and find the accuracy to be similar to that on Glass. Finally, our experiments on the permanence of gestures show that the negative impact of changing user behaviour over time on classification accuracy is best alleviated by periodically replacing older training samples with new randomly chosen samples.

Jagmohan Chauhan, Hassan Jameel Asghar, Anirban Mahanti, Mohamed Ali Kaafar
Backmatter
Metadata
Title
Applied Cryptography and Network Security
Editors
Mark Manulis
Ahmad-Reza Sadeghi
Steve Schneider
Copyright Year
2016
Electronic ISBN
978-3-319-39555-5
Print ISBN
978-3-319-39554-8
DOI
https://doi.org/10.1007/978-3-319-39555-5