
2015 | Book

Information Security Practice and Experience

11th International Conference, ISPEC 2015, Beijing, China, May 5-8, 2015, Proceedings

About this book

This book constitutes the proceedings of the 11th International Conference on Information Security Practice and Experience, ISPEC 2015, held in Beijing, China, in May 2015.

The 38 papers presented in this volume were carefully reviewed and selected from 117 submissions. The regular papers are organized in topical sections named: system security, stream cipher, analysis, key exchange protocol, elliptic curve cryptography, authentication, attribute-based encryption, mobile security, theory, implementation, privacy and indistinguishability.

Table of Contents

Frontmatter
Erratum: A Rapid and Scalable Method for Android Application Repackaging Detection

The paper starting on page 349 of this volume inadvertently omitted a reference to the following paper:

Zhauniarovich, Y., Gadyatskaya, O., Crispo, B., La Spina, F., Moser, E.: FSquaDRA: Fast Detection of Repackaged Applications. In: Atluri, V., Pernul, G. (eds.) DBSec 2014. LNCS, vol. 8566, pp. 130-145. Springer, Heidelberg (2014)

The authors of this paper would like, via this Erratum, to credit the work done by Y. Zhauniarovich et al. on the subject of repackaging detection based on resource similarity.

Sibei Jiao, Yao Cheng, Lingyun Ying, Purui Su, Dengguo Feng

System Security

Frontmatter
Operating System Security Policy Hardening via Capability Dependency Graphs

An operating system relies heavily on its access control mechanisms to defend against various attacks. The complexity of modern access control mechanisms and the scale of possible configurations are often overwhelming to system administrators and software developers, so misconfigurations are common and their security consequences are serious. It is therefore necessary to detect and eliminate such misconfigurations. We propose an automated and systematic approach to correcting misconfigurations, based on capability dependency graph generation and MaxSAT solving. Given the attacker's initial capabilities, we first automatically generate a capability dependency graph to describe the attacker's potential capabilities and the dependency relationships among them. Based on the capability dependency graph, we then develop a solution that automates the task of hardening an operating system security policy against multi-step attacks resulting from misconfigurations. In this solution, we first represent each capability obtained by an attacker as a propositional logic formula over initial conditions, and then transform the policy hardening problem into a MaxSAT problem. Finally, we present a notion called normal capability loss to help an administrator select an optimal hardening solution with minimum system usability loss. We apply our approach to analyze misconfigurations in Ubuntu 10.04 shipped with SELinux and study an attack case to evaluate the effectiveness of our approach.
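A minimal sketch of the hardening step as a weighted MaxSAT search (the condition names, weights and goal formula below are invented for illustration; the paper's graph generation and solver are not reproduced): each attacker capability is a propositional formula over initial conditions, and we look for the cheapest set of conditions to disable so that the goal capability becomes false.

    from itertools import combinations

    # Hypothetical initial conditions with "normal capability loss" weights.
    conditions = {"httpd_exec": 3, "tmp_writable": 1, "suid_helper": 2, "nfs_share": 1}

    def target_capability(on):
        # Illustrative formula for the attacker's goal capability, as would be
        # derived from a capability dependency graph: two attack paths.
        return (on["httpd_exec"] and on["tmp_writable"]) or \
               (on["suid_helper"] and on["nfs_share"])

    def cheapest_hardening():
        # Brute-force weighted MaxSAT: try every subset of conditions to
        # disable, keep the one falsifying the goal at minimum usability loss.
        names, best = list(conditions), None
        for r in range(len(names) + 1):
            for disabled in combinations(names, r):
                on = {c: c not in disabled for c in names}
                if not target_capability(on):
                    cost = sum(conditions[c] for c in disabled)
                    if best is None or cost < best[0]:
                        best = (cost, set(disabled))
        return best

    print(cheapest_hardening())  # -> (2, {'tmp_writable', 'nfs_share'})

A real instance would enumerate exponentially many subsets, which is why the paper hands the problem to a MaxSAT solver instead.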

Zhihui Han, Liang Cheng, Yang Zhang, Dengguo Feng
Expanding an Operating System’s Working Space with a New Mode to Support Trust Measurement

Integrity measurement for Operating Systems (OS) is of practical significance. To make a measurement trustworthy, it is essential to protect the Integrity Measurement Mechanisms (IMM), yet much remains to be done to this end. This paper takes a step toward that goal. First, it puts forward the concept of trust mode, which expands the working space of an OS from two modes, user mode and kernel mode, to three: user mode, kernel mode and trust mode. Trust mode has the highest privilege level, and the Core Measurement Mechanism (CMM) of the OS executes in it. The CMM is in charge of measuring the IMM, which runs in kernel mode. Even if the OS kernel is compromised, the CMM continues to work without interference. The paper then proposes an approach to building the trust mode and develops a prototype that implements it by fully exploiting the capabilities of modern hardware.

Chenglong Wei, Wenchang Shi, Bo Qin, Bin Liang

Stream Cipher I

Frontmatter
Differential Fault Analysis of Streebog

In August 2012, the Streebog hash function was selected as the new Russian federal hash function standard (GOST R 34.11-2012). In this paper, we present a fault analysis attack on this new hashing standard. In particular, our attack considers the compression function in the secret key setting, where both the input chaining value and the message block are unknown. The fault model adopted is one in which an attacker is assumed to be able to cause a bit-flip at a random byte in the internal state of the underlying cipher of the compression function. We also consider the case where the position of the faulted byte can be chosen by the attacker. We then propose a two-stage approach that recovers the two secret inputs of the compression function using an average number of faults between 338 and 1640, depending on the assumptions of the employed fault model. Moreover, we show that the attack can be extended to the iterated hash function using a feasible pre-computation stage. Finally, we analyze Streebog in different MAC settings and demonstrate how our attack can be used to recover the secret key of HMAC/NMAC-GOST.

Riham AlTawy, Amr M. Youssef
Fault Attacks on Stream Cipher Scream

In this paper we present a differential fault attack (DFA) on the stream cipher Scream, which was designed by the IBM researchers Coppersmith, Halevi, and Jutla in 2002. The known linear distinguishing attack on Scream requires $2^{120}$ output words, and there is no key recovery attack on it, since the $S$-box used by Scream is key-dependent and complex. Under the assumption that we can inject random byte faults in the same location multiple times, the 128-bit key can be recovered with $2^{94}$ computations and $2^{72}$ bytes of memory by injecting around 2000 faults. Combined with the assumption of related key attacks, we can then retrieve the key with $2^{44}$ computations and $2^{40}$ bytes of memory. The result is verified by experiments. To the best of our knowledge, this is the first DFA and key recovery attack on Scream.

Shaoyu Du, Bin Zhang, Zhenqi Li, Dongdai Lin
New Related Key Attacks on the RAKAPOSHI Stream Cipher

RAKAPOSHI is a hardware-oriented stream cipher designed by Cid et al. in 2009. It is based on Dynamic Linear Feedback Shift Registers, with a simple and potentially scalable design, and is particularly suitable for hardware applications with restricted resources. The RAKAPOSHI stream cipher offers 128-bit security. In this paper, we point out some mistakes in the related key attack on RAKAPOSHI by Isobe et al., and propose a new related key attack on RAKAPOSHI which recovers the 128-bit secret key with a time complexity of $2^{56}$, requiring one related key and $2^{55}$ chosen IVs. Furthermore, an improved key recovery attack on RAKAPOSHI in the multiple related key setting is proposed with a time complexity of $2^{33}$, requiring $2^{12.58}$ chosen IVs. As confirmed by experimental results, our new attack can recover all 128 key bits of RAKAPOSHI in less than 1.5 hours on a PC.

Lin Ding, Chenhui Jin, Jie Guan, Shaowu Zhang, Ting Cui, Wei Zhao

Analysis

Frontmatter
Cramer-Shoup Like Chosen Ciphertext Security from LPN

We propose two chosen ciphertext secure public key encryption schemes from the learning parity with noise (LPN) problem. Currently, all existing chosen ciphertext secure public key encryption schemes from hard learning problems are constructed using the All-But-One technique, while our schemes are based on the Cramer-Shoup technique.

Xiaochao Sun, Bao Li, Xianhui Lu
Partial Prime Factor Exposure Attacks on RSA and Its Takagi’s Variant

There are many partial key exposure attacks on RSA or its variants under the assumption that a portion of the bits of the decryption exponent $d$ is exposed. Sarkar and Maitra presented a further attack where some bits of the private prime factor $q$ in the modulus $N = pq$ are simultaneously revealed, and the total number of bits of $q$ and $d$ required to be known is reduced compared to previous partial key exposure attacks. In this paper, for both the standard RSA with moduli $N = pq$ and the Takagi variant of RSA with moduli $N = p^2 q$, we propose partial key exposure attacks when most significant bits (MSBs) or least significant bits of $q$ are exposed. Compared with previous results, our theoretical analysis and experimental results show a substantial improvement in reducing the number of known bits of the private key needed to factor $N$.
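To see why exposed MSBs of $q$ help at all, here is a toy illustration that simply scans the unknown low window (feasible only at toy sizes; at cryptographic sizes the attacks refined in this paper use lattice-based Coppersmith techniques, not scanning, and the primes below are invented for the demo):

    # Tiny primes for demonstration only.
    p, q = 7919, 10009
    N = p * q
    unknown_bits = 6
    q_high = (q >> unknown_bits) << unknown_bits   # attacker knows MSBs of q

    for low in range(1 << unknown_bits):           # enumerate the missing LSBs
        cand = q_high | low
        if cand > 1 and N % cand == 0:
            print("recovered q:", cand)            # -> 10009
            break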

Liqiang Peng, Lei Hu, Zhangjie Huang, Jun Xu
Analysis of Fractional ωmbNAF for Scalar Multiplication

In the current work we analyze the average Hamming weight of the recoded sequence obtained by the fractional $\omega$mbNAF algorithm using Markov theory. A cost comparison between fractional $\omega$mbNAF and different scalar recoding methods is given. Regardless of memory restraints, it is shown that $\{2,3,5\}\mbox{NAF}_{3+\frac{3}{4}}$ improves on tree-based double base chains by 6.8% and 13.2% in Jacobian curves (with the efficiency-oriented parameter choice $a = 3$) and inverted Edwards curves, respectively.

Weixuan Li, Wei Yu, Kunpeng Wang
On the Effectiveness of Different Botnet Detection Approaches

Botnets represent one of the most significant threats to cyber security. They employ different techniques, topologies and communication protocols at different stages of their lifecycle. Identifying botnets has therefore become very challenging, especially given that they can upgrade their methodology at any time. In this work, we investigate four different botnet detection approaches based on the technique used and the type of data employed. Two of them are public rule-based systems (BotHunter and Snort), and the other two are data mining based techniques with different feature extraction methods (packet payload based and traffic flow based). The performance of these systems ranges from 0% to 100% on the five publicly available botnet data sets employed in this work. We discuss the evaluation results for these systems, their features, and the models learned by the data mining based techniques.
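To make the flow-based, data-mining style of detector concrete, here is a minimal sketch (the feature set, toy flows and labels are invented for illustration; the paper's detectors, features and data sets are not reproduced):

    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import classification_report

    # Each flow: [duration_s, bytes_sent, bytes_received, packets, dst_port]
    flows = [
        [0.2,   120,    90,   4, 6667],   # short, chatty beacon (botnet-like)
        [0.1,   100,    80,   3, 6667],
        [45.0, 9000, 52000, 120,  443],   # bulky HTTPS download (normal)
        [30.0, 7000, 41000,  90,   80],
        [0.3,   130,    95,   4, 6667],
        [60.0, 8000, 60000, 150,  443],
    ]
    labels = [1, 1, 0, 0, 1, 0]           # 1 = botnet, 0 = normal

    X_train, X_test, y_train, y_test = train_test_split(
        flows, labels, test_size=0.33, random_state=0, stratify=labels)
    clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))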

Fariba Haddadi, Duc Le Cong, Laura Porter, A. Nur Zincir-Heywood

Key Exchange Protocol

Frontmatter
Strongly Secure Key Exchange Protocol with Minimal KEM

In this paper, we give a generic construction of a two-pass authenticated key exchange (AKE) protocol from key encapsulation mechanisms (KEMs). Our construction is provably secure without random oracles in the CK+ model, which is stronger than the CK and eCK models. Compared with similar KEM-based AKE protocols, our generic construction achieves CK+ security with minimal KEMs (namely, one CCA-secure KEM and one CPA-secure KEM).

Baoping Tian, Fushan Wei, Chuangui Ma
sHMQV: An Efficient Key Exchange Protocol for Power-Limited Devices

In this paper we focus on designing authenticated key exchange protocols for practical scenarios in which a party consists of a powerful but untrusted host (e.g., a PC or mobile phone) and a power-limited but trusted device (e.g., a Trusted Platform Module, Mobile Trusted Module, or smart card). The HMQV and (s,r)OAKE protocols are the state of the art in combining security and efficiency. However, we find that they are not suitable for the above scenarios, as all (or part) of the online exponentiations must be performed in the power-limited trusted device, which makes them inefficient to deploy in practice.

To overcome this inefficiency, we propose a variant of the HMQV protocol, denoted sHMQV, under some new design rationales that bring the following advantages: 1) the validation of ephemeral public keys, which costs one exponentiation, is eliminated; 2) the power-limited trusted device performs only one exponentiation, which can be pre-computed offline; 3) all online exponentiations can be performed in the powerful host. These advantages give sHMQV better performance than HMQV and (s,r)OAKE, especially when deployed in the scenarios considered in this paper. We finally formally prove the security of sHMQV in the CK model.

Shijun Zhao, Qianying Zhang

Elliptic Curve Cryptography

Frontmatter
Models of Curves from GHS Attack in Odd Characteristic

The idea behind the GHS attack is to transform the discrete logarithm problem (DLP) in the Jacobian of a (hyper-)elliptic curve over an extension field into DLPs in Jacobians of covering curves over the base field. Diem gives a condition under which explicit defining equations for some coverings can be computed. In this paper, we show that his method works without that condition. We also give an explicit map from the covering to the original curve when the covering is hyperelliptic. Our method is based on a formula for the embedding of the rational subfield of the function field of a (hyper)elliptic curve into that of its hyperelliptic covering.

Song Tian, Wei Yu, Bao Li, Kunpeng Wang
Some Elliptic Subcovers of Genus 3 Hyperelliptic Curves

A morphism from an algebraic curve $C$ to an elliptic curve is called an elliptic subcover of $C$. Elliptic subcovers provide a means of solving the discrete logarithm problem in elliptic curves over extension fields. The GHS attack yields only degree-2 minimal elliptic subcovers of hyperelliptic curves of genus 3. In this paper, we study the properties of elliptic subcovers of genus 3 hyperelliptic curves. Using these properties, we find some minimal elliptic subcovers of degree 4, which cannot be constructed by the GHS attack.

Song Tian, Wei Yu, Bao Li, Kunpeng Wang
Batch Blind Signatures on Elliptic Curves

Blind signatures are a fundamental tool in electronic cash. In most existing blind signature schemes, both the signer and the verifier need to compute expensive modular exponentiations. This situation deteriorates in significant monetary transactions, in which a large number of (multi-)exponentiations need to be calculated. This paper proposes batch blind signatures to reduce the computation overhead at both the signer and the verifier sides in blind signatures on elliptic curves. To this end, we first propose a batch multi-exponentiation algorithm that allows a batch of multi-base exponentiations on elliptic curves to be processed simultaneously. We then apply our batch multi-exponentiation algorithm to speed up the Okamoto-Schnorr blind signature scheme in both the signing and the verification procedures. Specifically, the proposed algorithm is exploited for generating blind signatures so that multiple messages can be signed in a batch to save computation costs. The algorithm is further employed in the verification process, which gives a batch signature verification approach different from the existing batch verification algorithm. An attractive feature of our approach is that, unlike existing batch verification approaches, ours distinguishes all valid signatures within a batch of purported signatures (a mix of correct and erroneous ones). This is desirable in e-cash systems, where a signature represents a certain value of e-cash and no valid signature should be passed up. The experimental results show that, compared with acceleration using the existing simultaneous exponentiation algorithm, our batch approach is about 55% and 45% more efficient in generating and verifying blind signatures, respectively.
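As background for the baseline the authors compare against, here is the classical simultaneous ("Shamir's trick") multi-exponentiation, sketched over $\mathbb{Z}_p^*$ rather than an elliptic curve group for brevity (the group and parameters are illustrative only):

    def simultaneous_exp(g, h, a, b, p):
        """Compute (g**a * h**b) % p with one shared square-and-multiply pass."""
        table = {1: g % p, 2: h % p, 3: (g * h) % p}  # g^1h^0, g^0h^1, g^1h^1
        result = 1
        for i in range(max(a.bit_length(), b.bit_length()) - 1, -1, -1):
            result = (result * result) % p            # one squaring per bit...
            bits = ((a >> i) & 1) | (((b >> i) & 1) << 1)
            if bits:                                  # ...and at most one multiply
                result = (result * table[bits]) % p
        return result

    p, g, h = 1000003, 2, 3
    assert simultaneous_exp(g, h, 123456, 654321, p) == \
        pow(g, 123456, p) * pow(h, 654321, p) % p

The paper's batch algorithm goes further by amortizing work across many such multi-exponentiations at once.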

Yang Sun, Qianhong Wu, Bo Qin, Yujue Wang, Jianwei Liu

Stream Cipher II

Frontmatter
Improved Differential Analysis of Block Cipher PRIDE

At CRYPTO 2014, Albrecht et al. introduced PRIDE, a 20-round iterative lightweight block cipher based on a good linear layer, achieving a tradeoff between security and efficiency. A recent analysis was presented by Zhao et al. Inspired by their work, we use an automatic search method to find 56 iterative differential characteristics of PRIDE, including 24 one-round iterative characteristics. Based on three of them we construct a 15-round differential and perform a differential attack on 19-round PRIDE, with data, time and memory complexities of $2^{62}$, $2^{63}$ and $2^{71}$, respectively.

Qianqian Yang, Lei Hu, Siwei Sun, Kexin Qiao, Ling Song, Jinyong Shan, Xiaoshuang Ma
Estimating Differential-Linear Distinguishers and Applications to CTC2

At FSE 2014, Blondeau et al. proposed an exact expression for the bias of a differential-linear approximation and a multidimensional generalization of differential-linear distinguishers. In this paper, we study the application of the theory to concrete designs. We first propose a meet-in-the-middle style searching-and-estimating process. Then, we show that the capacity of a multiple differential distinguisher using the $\chi^2$ statistical test can be written as the sum of squared correlations of several differential-linear distinguishers. This link provides another approach to estimating the theoretical capacity of a multiple differential distinguisher.

We apply the above methods to CTC2. CTC2 was designed by Courtois to show the strength of algebraic cryptanalysis on block ciphers. For CTC2 with a 255-bit block size and key, we give a multiple differential attack against the 11-round version, which to our knowledge is the best with respect to the number of attacked rounds. Experimental results firmly verify the correctness of the proposed method. The attack itself, and its potential to be extended further, reveals that the resistance of CTC2 against statistical attacks may be much weaker than previously expected.

Chun Guo, Hailong Zhang, Dongdai Lin
Combined Cache Timing Attacks and Template Attacks on Stream Cipher MUGI

The stream cipher MUGI was proposed by Hitachi, Ltd. in 2002 and is specified in ISO/IEC 18033-4 for keystream generation. Assuming that noise-free cache timing measurements are possible, we give a cryptanalysis of MUGI under the cache attack model. Our simulation results show that we can reduce the computational complexity of recovering the full 1216-bit internal state of MUGI to about $O(2^{76})$ when it is implemented on processors with 64-byte cache lines. The attack reveals new inherent weaknesses of MUGI's structure. These weaknesses can also be used to conduct a noiseless template attack of $O(2^{60.51})$ computational complexity to restore the state of MUGI. Combining the two attacks, we can then conduct a key-recovery attack on MUGI with about $O(2^{30})$ computational complexity. To the best of our knowledge, this is the first time that cache timing attacks and template attacks have been applied to the full version of MUGI, and the first time these two classes of attacks have been combined to attack a cipher. Moreover, the combination can be used to improve the error-tolerance of each attack. If each measurement has one additional error, the key-recovery attack takes about $O(2^{50})$ computational complexity.

Shaoyu Du, Zhenqi Li, Bin Zhang, Dongdai Lin

Authentication

Frontmatter
Half a Century of Practice: Who Is Still Storing Plaintext Passwords?

Text-based passwords are probably the most common way to authenticate a user on the Internet today. To implement a password system, it is critical to ensure the confidentiality of the stored passwords: if an attacker obtains a password, they get full access to that account. However, in the past several years, we have witnessed several major password leakages in which all the passwords were stored in plaintext. Considering the severity of these security breaches, we believe that the website owners should have upgraded their systems to store password hashes. Unfortunately, there are still many websites that store plaintext passwords. Given the persistence of such bad practice, it is crucial to raise public awareness about this issue, find these websites, and shed light on best practices. As such, in this paper, we systematically analyze websites in both industry and academia and check whether they are still storing plaintext passwords (or used to do so). In industry, we find 11 such websites in Alexa's top 500 list. Our analysis of almost 3,000 sites also shows that this is a universal problem, regardless of a website's profile. Interestingly, we also find that even when end users have reported websites that store plaintext passwords, a significant number of website owners ignore the reports. On the academic side, our analysis of 135 conference submission sites shows that the majority of them are also still storing plaintext passwords, despite the existence of patches that fix this problem.
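The best practice the paper points to is to store only a salted, slow hash of each password, never the password itself. A minimal sketch using Python's standard library (the scrypt parameters are common illustrative choices, not a recommendation from the paper):

    import hashlib, hmac, os

    def hash_password(password: str) -> bytes:
        salt = os.urandom(16)
        digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return salt + digest                       # store salt alongside the hash

    def verify_password(password: str, stored: bytes) -> bool:
        salt, digest = stored[:16], stored[16:]
        candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

    record = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", record)
    assert not verify_password("wrong guess", record)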

Erick Bauman, Yafeng Lu, Zhiqiang Lin
User Identity Verification Based on Touchscreen Interaction Analysis in Web Contexts

The ever-increasing popularity of smartphones amplifies the risk of loss or theft, thus increasing the threat of attackers hijacking critical user accounts. In this paper, we present a framework to secure accounts by continuously verifying user identities based on user interaction behavior with smartphone touchscreens. This enables us to protect user accounts by disabling critical functionality and enforcing a reauthentication in case of suspicious behavior. We take advantage of standard mobile web browser capabilities to remotely capture and analyze touchscreen interactions. This approach is completely transparent for the user and works on everyday smartphones without requiring any special software or privileges on the user’s device. We show how to successfully classify users even on the basis of limited and imprecise touch interaction data as is prevalent in web contexts. We evaluate the performance of our framework and show that the user identification accuracy is higher than 99% after collecting about a dozen touch interactions.

Michael Velten, Peter Schneider, Sascha Wessel, Claudia Eckert
Adaptive-ID Secure Revocable Identity-Based Encryption from Lattices via Subset Difference Method

In view of the expiration or exposure of a user's private credential (or private key) in realistic scenarios, identity-based encryption (IBE) schemes with an efficient key revocation mechanism, or revocable identity-based encryption (RIBE) schemes for short, become prominently significant. In this paper, we present an RIBE scheme from lattices by combining two IBE schemes of Agrawal et al. with the subset difference (SD) method. Our scheme is secure against adaptive identity-time attacks in the standard model under the learning with errors (LWE) assumption. In particular, our scheme serves as one solution to the challenge posed by Chen et al. (ACISP '12).

Shantian Cheng, Juanyang Zhang

Attribute-Based Encryption

Frontmatter
Outsourcing the Re-encryption Key Generation: Flexible Ciphertext-Policy Attribute-Based Proxy Re-encryption

In this paper, we introduce a new proxy re-encryption (PRE) notion for attribute-based encryption in which the re-encryption key generation can be outsourced. We call this new notion flexible ciphertext-policy attribute-based proxy re-encryption (flexible CP-AB-PRE). In an ordinary PRE scheme, re-encryption keys are generated from the user's decryption key and an access structure, so whenever the access structure changes, a PRE user has to generate new re-encryption keys. To overcome this disadvantage, the re-encryption key generation of the proposed scheme is divided into two steps. First, a user generates a universal re-encryption key ${\sf urk}_S$, which encodes the delegator's attribute set $S$. Second, an authority holding a re-encryption secret key ${\sf rsk}$ generates an ordinary re-encryption key ${\sf rk}_{S\rightarrow{\Bbb M}'}$ from ${\sf urk}_S$, ${\sf rsk}$, and an access structure ${\Bbb M}'$. The user only has to generate a single ${\sf urk}_S$ for all re-encryption keys. By this "outsourcing", the user's share of the re-encryption key generation task is reduced to generating one ${\sf urk}_S$. Furthermore, if the Private Key Generator (PKG) generates ${\sf urk}$ at the time of decryption key generation, the load of re-encryption key generation on users almost vanishes.

Yutaka Kawai
Revocable Threshold Attribute-Based Signature against Signing Key Exposure

For a cryptosystem with a large number of users, an efficient revocation mechanism is necessary to preserve the security of the whole system. In this paper, we aim to provide a scalable revocation mechanism for attribute-based signature (ABS). Specifically, we first formally define the syntax of revocable ABS (RABS), together with a corresponding security model that considers a realistic threat called signing key exposure. Then, building on the ideas of an ABS scheme and a binary data structure, we present a concrete construction of RABS with signing key exposure resistance. Finally, the proposed scheme is proved existentially unforgeable under adaptively chosen message attacks in the selective-predicate model, without random oracles. In addition to the necessary revocation functionality, the proposed scheme remains efficient in terms of storage cost and computational complexity.

Jianghong Wei, Xinyi Huang, Xuexian Hu, Wenfen Liu
Fully Secure Online/Offline Predicate and Attribute-Based Encryption

This paper presents the first fully secure online/offline predicate encryption (PE) and attribute-based encryption (ABE) schemes that split the computation required for encryption into two phases: a preparation phase that does the vast majority of the work to encrypt a message before the actual message and the attributes or access control policy are known, and a second phase that can rapidly assemble a ciphertext once the specifications become known. Our PE schemes support generalized inner-product predicates, while our ABE scheme supports non-monotone access structures. All the proposed schemes are unbounded in the sense that the size of the public parameters is constant. The security of all the schemes is based on the Decisional Linear assumption. The best part of our constructions is that they exhibit better online performance despite providing stronger security guarantees compared to existing work.
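The online/offline split itself predates this work; a toy sketch of the idea on plain ElGamal (not the paper's PE/ABE constructions, and with illustrative rather than secure parameter choices) shows how all exponentiations can be moved ahead of message arrival:

    import secrets

    p = (1 << 127) - 1                   # toy Mersenne prime, not a secure choice
    g = 3
    sk = secrets.randbelow(p - 2) + 1
    pk = pow(g, sk, p)

    def offline_phase():
        """Do all exponentiations before the message exists."""
        r = secrets.randbelow(p - 2) + 1
        return pow(g, r, p), pow(pk, r, p)

    def online_phase(m, prepared):
        """Bind the now-known message with a single multiplication."""
        c1, mask = prepared
        return c1, (m * mask) % p

    prepared = offline_phase()           # precomputed at idle time
    c1, c2 = online_phase(1234567, prepared)
    assert (c2 * pow(c1, p - 1 - sk, p)) % p == 1234567  # decrypt: m = c2 / c1^sk

The technical challenge the paper addresses is achieving this split for PE/ABE while keeping full (rather than selective) security.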

Pratish Datta, Ratna Dutta, Sourav Mukhopadhyay

Mobile Security

Frontmatter
A Rapid and Scalable Method for Android Application Repackaging Detection

Nowadays the security issues of Android applications (apps) are more and more serious. One of the main security threats comes from repackaged apps. There are already several studies detecting repackaged apps using similarity measurement. However, so far, all the existing techniques for repackaging detection are based on code similarity or feature (e.g., permission set) similarity evaluation. In this paper, we propose a novel approach called ImageStruct that applies image similarity techniques to locate and detect the changes introduced by repackaging effectively. ImageStruct performs rapid repackaging detection by considering the similarity of images in target apps. The intuition behind our approach is that repackaged apps still need to maintain the "look and feel" of the original apps by including the original images, even though they might have additional code included or some of the original code removed. To prove the effectiveness and evaluate the reliability of our approach, we carry out comparison experiments between ImageStruct and the code-similarity scores of AndroGuard. The results demonstrate that ImageStruct not only has good performance and scalability, but is also resistant to code obfuscation.
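A sketch of the image-similarity intuition using a generic perceptual "average hash" (assumed here for illustration; ImageStruct's actual image features and matching are not reproduced, and the file paths are hypothetical):

    from PIL import Image

    def average_hash(path, size=8):
        """64-bit perceptual hash: bit set where a pixel beats the mean brightness."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, px in enumerate(pixels) if px > mean)

    def similarity(hash_a, hash_b, bits=64):
        """1.0 for identical images, lower as the Hamming distance grows."""
        return 1 - bin(hash_a ^ hash_b).count("1") / bits

    # Compare images extracted from an original and a suspect APK (paths hypothetical).
    h1 = average_hash("original_apk/res/icon.png")
    h2 = average_hash("suspect_apk/res/icon.png")
    print("likely repackaged" if similarity(h1, h2) > 0.9 else "different app")

Because such hashes survive recompression and small edits, they resist the code obfuscation that defeats code-similarity detectors.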

Sibei Jiao, Yao Cheng, Lingyun Ying, Purui Su, Dengguo Feng
Comprehensive Analysis of the Android Google Play’s Auto-update Policy

Google Play provides a large Android application repository, and its companion service application handles the initial installation and update processes. To ease management effort, a recent policy change by Google allows users to configure auto-update for installed applications based on permission groups rather than individual permissions. By analyzing the effects of the new auto-update policy on the Android permission system, with an emphasis on permission groups and protection levels, we find a new privilege escalation attack vector. We then evaluate 1200 Android applications to identify potential privilege escalation candidates, and investigate 1260 malware samples to study how the new attack vector could be utilized by malware to increase the chance of distribution without users' attention. Based on the evaluation results, we confirm that the new policy can easily be manipulated by malicious developers to gain highly privileged permissions without users' consent. We highly recommend that users of the new auto-update feature carefully review the permissions obtained after each update via the global settings, or simply turn the feature off.
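A small sketch of the policy issue: under group-based auto-update, an update that adds a permission whose group is already granted installs without a prompt (the group table below is a tiny illustrative excerpt, not the full Android mapping):

    PERMISSION_GROUP = {
        "READ_SMS": "SMS", "SEND_SMS": "SMS",
        "READ_CONTACTS": "CONTACTS", "WRITE_CONTACTS": "CONTACTS",
        "ACCESS_FINE_LOCATION": "LOCATION",
    }

    def silent_escalations(old_perms, new_perms):
        """Permissions an update gains without triggering a user prompt."""
        old_groups = {PERMISSION_GROUP[p] for p in old_perms}
        return {p for p in new_perms - old_perms
                if PERMISSION_GROUP.get(p) in old_groups}

    v1 = {"READ_SMS", "READ_CONTACTS"}
    v2 = {"READ_SMS", "READ_CONTACTS", "SEND_SMS", "WRITE_CONTACTS"}
    print(silent_escalations(v1, v2))  # {'SEND_SMS', 'WRITE_CONTACTS'}: no prompt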

Craig Sanders, Ayush Shah, Shengzhi Zhang
IVDroid: Static Detection for Input Validation Vulnerability in Android Inter-component Communication

Input validation vulnerabilities in Android inter-component communication are a class of severe vulnerabilities in Android apps. Attackers can exploit them to bypass Android's security mechanisms and compromise the integrity, confidentiality and availability of Android devices. However, so far there is no sound source-code-level approach designed for app developers to detect such vulnerabilities. In this paper we propose a novel approach to detecting input validation flaws in Android apps and implement a prototype named IVDroid, which provides practical static analysis of Java source code. IVDroid leverages backward program slicing to abstract application logic from Java source code. At the slice level, IVDroid detects flaws of known patterns by security rule matching and flaws of unknown patterns by mining duplicate validation behavior. IVDroid then semi-automatically confirms the suspicious rule violations and reports the confirmed ones as vulnerabilities. We evaluate IVDroid on 3 versions of Android spanning from 2.2 to 4.4.2, and it detects 37 vulnerabilities including confused deputy and denial of service attacks. Our results show that IVDroid can provide a practical defence solution for app developers.

Zhejun Fang, Qixu Liu, Yuqing Zhang, Kai Wang, Zhiqiang Wang

Theory

Frontmatter
New Constructions of T-function

A T-function is a mapping from a Boolean matrix to a Boolean matrix with the property that the $i$-th column of the output matrix depends only on the first $i$ columns of the input matrix. In 2003, Klimov and Shamir first introduced the concept of the T-function. Using T-functions we can construct invertible functions, which can be used to build block ciphers, while invertible T-functions with a full cycle can be used to build stream ciphers. The inversion procedure of these functions differs from that of normal invertible functions and is also hard. In 2005, Hong et al. constructed a new class of full-cycle T-functions in which the $i$-th column of the output matrix actually depends only on the $i$-th column of the input matrix. Our construction is more general than Hong et al.'s and has good cryptographic properties. In this paper, we present a new construction of a T-function and study its invertibility, cycle length and nonlinearity. Furthermore, we construct a new full-cycle T-function. Invertible and highly nonlinear functions can be used to design block ciphers, and full-cycle functions can be used in place of the linear state update function of an LFSR-based stream cipher to attain the full period.
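For background, the classic Klimov-Shamir T-function $x \mapsto x + (x^2 \vee 5) \bmod 2^n$ (shown here as a quick check, not the paper's new construction) illustrates both defining properties: bit $i$ of the output depends only on bits $0..i$ of the input, yet the map is invertible and traverses all $2^n$ states in a single cycle.

    def ks_tfunction(x, n=8):
        # x + (x^2 OR 5) mod 2^n: single-word T-function with one full cycle
        return (x + (x * x | 5)) % (1 << n)

    n, x, seen = 8, 0, set()
    while x not in seen:            # follow the cycle starting from 0
        seen.add(x)
        x = ks_tfunction(x, n)
    print(len(seen) == 1 << n)      # True: full cycle over all 256 states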

Dibyendu Roy, Ankita Chaturvedi, Sourav Mukhopadhyay
A New Lattice-Based Threshold Attribute-Based Signature Scheme

In this paper, we present a new construction of an attribute-based signature (ABS) scheme supporting flexible threshold predicates from lattices. The new construction is proved selective-predicate and adaptive-message unforgeable under chosen message attacks in the random oracle model if the small integer solution (SIS) assumption holds. In addition, the scheme achieves privacy, meaning that a signature reveals nothing about the attributes or identity of the real signer. Compared with the existing lattice-based threshold ABS scheme, the new construction provides better efficiency.

Qingbin Wang, Shaozhen Chen, Aijun Ge
Hard Invalidation of Electronic Signatures

We present a new concept for invalidating electronic signatures which, in many situations, seems better suited to real business and society applications. We do not rely on an administrative invalidation process executed separately for each single signing key and based on certificate revocation lists. Instead, all signatures created with a certain group are invalidated by a certain event. We propose hard invalidation via release of the inherent cryptographic proof value, instead of soft invalidation via revoking certificates, which leaves the cryptographic strength of signatures intact (even if legal validity is partially lost).

We present concrete efficient realizations of our ideas based on verifiable encryption, trapdoor discrete logarithm groups and ring signatures.

Lucjan Hanzlik, Mirosław Kutyłowski, Moti Yung

Implementation

Frontmatter
Lightweight Function Pointer Analysis

How to detect and classify the huge number of malware samples received every day is a major challenge in the security area. In recent years, using function call graphs to detect and classify malicious software has become a feasible method. As the basic technology of call graph construction, function pointer analysis has become more prominent. Previous works often use the results of pointer analysis to determine the possible targets of function pointer calls. However, the inherent complexity and efficiency problems of pointer analysis often lead to unsatisfactory results when it is applied to practical programs. This paper presents a strongly connected component (SCC) level flow-sensitive and context-sensitive function pointer analysis algorithm (referred to as the FP algorithm). The algorithm not only makes up for the speed deficiency of pointer analysis but also obtains higher precision. Measurements on 8 practical C programs show that the FP algorithm is on average 42.6 times faster than the DSA algorithm, with improved precision.

Wei Zhang, Yu Zhang
Accelerating RSA with Fine-Grained Parallelism Using GPU

RSA is a public key cryptosystem widely used for end-to-end authentication and key exchange in various Internet protocols, such as SSL and TLS. Compared with symmetric cryptography, the cryptographic operations in RSA are much more time consuming. This puts pressure on the performance of service providers using secure protocols and hinders these protocols from being more widely deployed. Graphics Processing Units (GPUs) are increasingly used for data-parallel general purpose computing, and often provide better throughput than CPUs at the same cost. In this paper, we propose a new approach to parallelizing Montgomery multiplication under the Single Instruction Multiple Thread (SIMT) threading model of GPUs, and construct a parallel RSA implementation based on this approach, combined with other optimization techniques at both the algorithmic and implementation levels. The performance evaluation shows our RSA implementation achieves a record-breaking latency for RSA decryption on GPUs: 2.6 ms for RSA-1024 and 6.5 ms for RSA-2048. The peak throughput of our implementation reaches 5,244 decryptions per second for RSA-2048 and 34,981 for RSA-1024, much faster than existing integer-based implementations. The peak throughput is slightly lower than that of the fastest floating-point based implementation, while our latency is 3 times lower.
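The primitive being parallelized is Montgomery multiplication, which replaces division by the modulus with shifts by a power of two. A scalar reference sketch in Python (the modulus and word size are toy values; the paper's SIMT decomposition across GPU threads is not shown):

    def montgomery_setup(n, bits):
        r = 1 << bits
        r_inv = pow(r, -1, n)            # modular inverse (Python 3.8+)
        n_prime = (r * r_inv - 1) // n   # n_prime = -n^{-1} mod r
        return r, n_prime

    def redc(t, n, r, n_prime):
        """Montgomery reduction: return t * r^{-1} mod n without dividing by n."""
        m = ((t % r) * n_prime) % r
        u = (t + m * n) // r             # exact: t + m*n == 0 (mod r)
        return u - n if u >= n else u

    # Verify a*b mod n via the Montgomery domain (toy 32-bit odd modulus).
    n, bits = 0xF00D_1234 | 1, 32
    r, n_prime = montgomery_setup(n, bits)
    a, b = 123456789 % n, 987654321 % n
    a_bar, b_bar = (a * r) % n, (b * r) % n      # to Montgomery form
    prod_bar = redc(a_bar * b_bar, n, r, n_prime)
    assert redc(prod_bar, n, r, n_prime) == (a * b) % n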

Yang Yang, Zhi Guan, Huiping Sun, Zhong Chen
On the Impacts of Mathematical Realization over Practical Security of Leakage Resilient Cryptographic Schemes

In the real world, transforming an abstract, generic cryptographic scheme into an actual physical implementation usually involves two processes: mathematical realization at the algorithmic level and physical realization at the implementation level. In the black-box model (i.e., the leakage-free setting), a cryptographic scheme can be mathematically realized without affecting its theoretical security as long as the mathematical components meet the required cryptographic properties. However, up to now, no previous work has formally shown whether a leakage resilient cryptographic scheme can be mathematically realized in existing ways without affecting its practical security. Our results give a negative answer to this important question by introducing attacks against several kinds of mathematical realizations of a practical leakage resilient cryptographic scheme. Specifically, there may exist a big gap between the theoretical and practical numbers of tolerated leakage bits for the same leakage resilient scheme if the mathematical components of its realization are not provably secure in the leakage setting.

Guangjun Fan, Yongbin Zhou, François-Xavier Standaert, Dengguo Feng
Non-interactive Revocable Identity-Based Access Control over e-Healthcare Records

Revocation of access control over private e-healthcare records (EHRs) allows the access rights of previously valid users to be revoked. Most existing solutions rely too heavily on a trusted third party to generate and update decryption keys, or require computation by non-revoked users during revocation, which makes them impractical for more complicated scenarios. In this paper, we propose a new revocation model, referred to as non-interactive revocable identity-based access control (NRIBAC) on EHRs. In NRIBAC, a trusted third party only needs to generate secret keys for group authorities, and each group authority can generate decryption keys for the users in its domain. NRIBAC distinguishes itself from other revocation schemes by the advantageous feature that it does not require any participation of non-revoked users in the revocation. We construct an NRIBAC scheme with short ciphertexts and decryption keys by leveraging hierarchical identity-based encryption and introducing version information. We formally prove the security of the NRIBAC scheme and conduct thorough theoretical analysis to evaluate its performance. The results reveal that the scheme provides a favorable revocation procedure without disturbing non-revoked users.

Yunya Zhou, Jianwei Liu, Hua Deng, Bo Qin, Lei Zhang
Efficient File Sharing in Electronic Health Records

The issue of handling electronic health records has become of paramount interest to practitioners and the security community, due to their sensitivity. In this paper, we propose a framework that enables medical practitioners to securely communicate among themselves to discuss health matters, while patients can rest assured that the information will only be made available to eligible medical practitioners. Specifically, we construct a new cryptographic primitive to enable File Sharing in Electronic Health Records (FSEHR). This primitive enables doctors to read the information sent by the hospital, or by any other individuals (such as patients' health records), once the doctors have their 'license' validated by that hospital. We construct such a cryptographic primitive and capture its security requirements in a set of security models. Subsequently, we present a concrete scheme that is proven selectively chosen-ciphertext (CCA-1) secure under the Decisional Bilinear Diffie-Hellman Exponent (DBDHE) assumption and fully collusion resistant.

Clémentine Gritti, Willy Susilo, Thomas Plantard
A Framework for Analyzing Verifiability in Traditional and Electronic Exams

The main concern for institutions that organize exams is to detect when students cheat. In fact more frauds are possible, and even the authorities can be dishonest. If institutions wish to keep exams a trustworthy business, anyone, and not only the authorities, should be allowed to look into an exam's records and verify the presence or absence of frauds. In short, exams should be verifiable. However, what verifiability means for exams is unclear, and no tool to analyze an exam's verifiability is available. In this paper we address both issues: we formalize several individual and universal verifiability properties for traditional and electronic exams, thus proposing a set of verifiability properties and clarifying their meaning; we then implement our framework in ProVerif, making it a tool to analyze exam verifiability. We validate our framework by analyzing the verifiability of two existing exam systems: an electronic one and a paper-and-pencil one.

Jannik Dreier, Rosario Giustolisi, Ali Kassem, Pascal Lafourcade, Gabriele Lenzini

Privacy and Indistinguishability

Frontmatter
ADKAM: A-Diversity K-Anonymity Model via Microaggregation

A great challenge in privacy preservation is to trade off two important issues, data utility and privacy, in the publication of datasets that usually contain sensitive information. Anonymization is a well-established approach to achieve this, and several anonymity models exist. Most of those models focus on protecting privacy by exerting identical protection over the whole table with pre-defined parameters. As a result, they cannot meet the diverse protection requirements of different sensitive values. Motivated by this, this paper first introduces an a-diversity k-anonymity model (ADKAM) to satisfy diverse deassociation requirements for sensitive values, and then designs a framework based on an improved microaggregation algorithm, as an alternative to generalization/suppression, to achieve anonymization. Using this framework, we improve data utility and reduce the risk of privacy disclosure. We conduct several experiments to validate our scheme.
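A minimal sketch of the microaggregation idea the framework builds on (a plain sort-and-chunk pass over one attribute; the paper's improved algorithm and its a-diversity handling are not reproduced, and k and the data are illustrative):

    def microaggregate(values, k):
        """Replace each value by its group centroid; every group has >= k members."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        out = [0.0] * len(values)
        for start in range(0, len(order), k):
            group = order[start:start + k]
            if len(group) < k and start > 0:   # fold a short tail into the last group
                group = order[start - k:]
            centroid = sum(values[i] for i in group) / len(group)
            for i in group:
                out[i] = centroid
        return out

    ages = [23, 25, 24, 41, 40, 39, 67]
    print(microaggregate(ages, k=3))
    # -> each age replaced by its group mean, so every published record is
    #    indistinguishable from at least k-1 others on this attribute

Unlike generalization/suppression, the published values stay numeric, which is the data-utility advantage the paper exploits.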

Liang Cheng, Shaoyin Cheng, Fan Jiang
Visualizing Privacy Risks of Mobile Applications through a Privacy Meter

When it comes to installing mobile applications on Android devices, users tend to ignore privacy warning messages about the permissions being requested. Warning messages are often shown too late and are hard for normal users to interpret. To improve users' awareness of the potential privacy implications of installing an application, we designed a "privacy meter" that visualizes the risks (in a slider bar format) imposed by the types of permissions being requested. Interpreting and understanding privacy risks thus becomes quick and easy.

Our lab study shows that the privacy meter is the most effective solution compared to Google’s existing permission screens and privacy fact feature. With the privacy meter in place, only about 26% of participants recommended applications that have high privacy risks to their friends and family members. That is a significant improvement from the 61% of participants who recommended high risk applications when Google’s permission screens were used. The time taken to make recommendation decisions, on average, also dropped from 72 seconds to 26 seconds when the privacy meter was used.
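A toy version of the meter's scoring step (the weights and the mapping to a 0-100 slider value are invented for illustration; the paper's calibration is not reproduced):

    RISK_WEIGHT = {"READ_SMS": 30, "READ_CONTACTS": 20, "ACCESS_FINE_LOCATION": 15,
                   "CAMERA": 10, "INTERNET": 5, "VIBRATE": 0}

    def privacy_meter(requested):
        """Aggregate per-permission risk weights into a 0-100 slider value."""
        score = sum(RISK_WEIGHT.get(p, 5) for p in requested)  # unknowns: small default
        return min(score, 100)

    print(privacy_meter({"READ_SMS", "READ_CONTACTS", "INTERNET"}))  # 55: high-risk zone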

Jina Kang, Hyoungshick Kim, Yun Gyung Cheong, Jun Ho Huh
One-Round Witness Indistinguishability from Indistinguishability Obfuscation

In this work, we build up the relationship between witness indistinguishability (WI) and indistinguishability obfuscation ($i\mathcal{O}$) by constructing a one-round witness indistinguishable argument system for all languages in NP, based on the existence of an indistinguishability obfuscator for a general circuit class and a number-theoretic assumption. The key tool in our construction is a witness encryption scheme with unique decryption, which is also proposed and constructed in this work. Our construction of the witness encryption scheme with unique decryption is based on a general witness encryption scheme and a weak auxiliary-input multi-bit output point obfuscation.

Qihua Niu, Hongda Li, Guifang Huang, Bei Liang, Fei Tang
Backmatter
Metadata
Title
Information Security Practice and Experience
Editors
Javier Lopez
Yongdong Wu
Copyright Year
2015
Electronic ISBN
978-3-319-17533-1
Print ISBN
978-3-319-17532-4
DOI
https://doi.org/10.1007/978-3-319-17533-1
