
2014 | Book

Computer Security - ESORICS 2014

19th European Symposium on Research in Computer Security, Wroclaw, Poland, September 7-11, 2014. Proceedings, Part I

Editors: Mirosław Kutyłowski, Jaideep Vaidya

Publisher: Springer International Publishing

Book Series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 8712 and LNCS 8713 constitutes the refereed proceedings of the 19th European Symposium on Research in Computer Security, ESORICS 2014, held in Wroclaw, Poland, in September 2014. The 58 revised full papers presented were carefully reviewed and selected from 234 submissions. The papers address issues such as cryptography, formal methods and theory of security, security services, intrusion/anomaly detection and malware mitigation, security in hardware, systems security, network security, database and storage security, software and application security, and human and societal aspects of security and privacy.

Table of Contents

Frontmatter
Detecting Malicious Domains via Graph Inference
Abstract
Enterprises routinely collect terabytes of security-relevant data, e.g., network logs and application logs, for several reasons such as cheaper storage, forensic analysis, and regulatory compliance. Analyzing these big data sets to identify actionable security information and hence to improve enterprise security, however, is a relatively unexplored area. In this paper, we introduce a system to detect malicious domains accessed by an enterprise's hosts from the enterprise's HTTP proxy logs. Specifically, we model the detection problem as a graph inference problem: we construct a host-domain graph from proxy logs, seed the graph with minimal ground truth information, and then use belief propagation to estimate the marginal probability of a domain being malicious. Our experiments on data collected at a global enterprise show that our approach scales well, achieves high detection rates with low false positive rates, and identifies previously unknown malicious domains when compared with state-of-the-art systems. Since malware infections inside an enterprise spread primarily via malware domain accesses, our approach can be used to detect and prevent malware infections.
Pratyusa K. Manadhata, Sandeep Yadav, Prasad Rao, William Horne
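As a rough illustration of the graph-inference idea described in this abstract, here is a minimal sketch of pairwise loopy belief propagation over a toy host-domain graph. It is not the paper's system: the priors, edge potentials, iteration count, and toy graph below are illustrative assumptions chosen for readability.

    from collections import defaultdict

    STATES = (0, 1)  # 0 = benign, 1 = malicious
    EDGE_POT = {(0, 0): 0.51, (0, 1): 0.49,
                (1, 0): 0.49, (1, 1): 0.51}  # mild "guilt by association" coupling

    def prior(node, seeds):
        # Seeded ground truth gets a strong prior; unlabeled nodes stay neutral.
        if node in seeds:
            p = 0.95 if seeds[node] == 1 else 0.05
            return {0: 1.0 - p, 1: p}
        return {0: 0.5, 1: 0.5}

    def belief_propagation(edges, seeds, iters=10):
        nbrs = defaultdict(set)
        for host, domain in edges:          # host <-> domain edges from proxy logs
            nbrs[host].add(domain)
            nbrs[domain].add(host)
        # Directed messages msg[(i, j)][state of j], initialised uniformly.
        msg = {(i, j): {s: 1.0 for s in STATES} for i in nbrs for j in nbrs[i]}
        for _ in range(iters):
            new = {}
            for (i, j) in msg:
                out = {}
                for xj in STATES:
                    total = 0.0
                    for xi in STATES:
                        incoming = 1.0
                        for k in nbrs[i]:
                            if k != j:
                                incoming *= msg[(k, i)][xi]
                        total += prior(i, seeds)[xi] * EDGE_POT[(xi, xj)] * incoming
                    out[xj] = total
                z = sum(out.values())
                new[(i, j)] = {s: v / z for s, v in out.items()}
            msg = new
        beliefs = {}
        for i in nbrs:
            b = dict(prior(i, seeds))
            for k in nbrs[i]:
                for s in STATES:
                    b[s] *= msg[(k, i)][s]
            z = sum(b.values())
            beliefs[i] = {s: v / z for s, v in b.items()}
        return beliefs

    # Toy run: host1 contacts a known-bad domain, so its other domain inherits suspicion.
    edges = [("host1", "bad.example"), ("host1", "evil.example"),
             ("host2", "good.example")]
    seeds = {"bad.example": 1, "good.example": 0}   # minimal ground truth
    print(belief_propagation(edges, seeds)["evil.example"][1])  # marginal P(malicious)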
Empirically Measuring WHOIS Misuse
Abstract
WHOIS is a publicly-accessible online directory used to map domain names to the contact information of the people who registered them (registrants). Regrettably, registrants have anecdotally complained about their WHOIS information being misused, e.g., for spam, while there is also concrete evidence that maliciously registered domains often map to bogus or protected information. All of this has brought into question whether WHOIS is still needed. In this study, we empirically assess which factors, if any, lead to a measurable degree of misuse of WHOIS data. We register 400 domains spread over the five most popular global top level domains (gTLD), using unique artificial registrant identities linked to email addresses, postal addresses, and phone numbers under our control. We collect, over six months, instances of misuse targeting our artificial registrants, revealing quantitative insights on both the extent of WHOIS misuse and the factors (gTLD, domain type, presence of anti-harvesting mechanisms) that appear to have a statistically significant impact on it.
Nektarios Leontiadis, Nicolas Christin
EncDNS: A Lightweight Privacy-Preserving Name Resolution Service
Abstract
Users are increasingly switching to third party DNS resolvers (e.g., Google Public DNS and OpenDNS). The resulting monitoring capabilities constitute an emerging threat to online privacy. In this paper we present EncDNS, a novel lightweight privacy-preserving name resolution service as a replacement for conventional third-party resolvers. The EncDNS protocol, which is based on DNSCurve, encapsulates encrypted messages in standards-compliant DNS messages. User privacy is protected by exploiting the fact that a conventional DNS resolver provides sender anonymity against the EncDNS server. Unlike traditional privacy-preserving techniques like mixes or onion routing, which introduce considerable delays due to routing messages over multiple hops, the EncDNS architecture introduces only one additional server in order to achieve a sufficient level of protection against realistic adversaries. EncDNS is open source software. An initial test deployment is available for public use.
Dominik Herrmann, Karl-Peter Fuchs, Jens Lindemann, Hannes Federrath
Ubic: Bridging the Gap between Digital Cryptography and the Physical World
Abstract
Advances in computing technology increasingly blur the boundary between the digital domain and the physical world. Although the research community has developed a large number of cryptographic primitives and has demonstrated their usability in all-digital communication, many of them have not yet made their way into the real world due to usability issues. We aim to take another step towards a tighter integration of digital cryptography into real world interactions. We describe Ubic, a framework that allows users to bridge the gap between digital cryptography and the physical world. Ubic relies on head-mounted displays, like Google Glass, resource-friendly computer vision techniques as well as mathematically sound cryptographic primitives to provide users with better security and privacy guarantees. The framework covers key cryptographic primitives, such as secure identification, document verification using a novel secure physical document format, as well as content hiding. To make a contribution of practical value, we focused on making Ubic as simple, easily deployable, and user friendly as possible.
Mark Simkin, Dominique Schröder, Andreas Bulling, Mario Fritz
Updaticator: Updating Billions of Devices by an Efficient, Scalable and Secure Software Update Distribution over Untrusted Cache-enabled Networks
Abstract
Secure and fast distribution of software updates and patches is essential for improving functionality and security of computer systems. Today, each device downloads updates individually from a software provider distribution server. Unfortunately, this approach does not scale to large systems with billions of devices where the network bandwidth of the server and the local Internet gateway become bottlenecks. Cache-enabled Network (CN) services (whether proprietary, such as Akamai, or open Content-Distribution Networks) can reduce these bottlenecks. However, they do not offer security guarantees against potentially untrusted CN providers that try to threaten the confidentiality of the updates or the privacy of the users. In this paper, we propose Updaticator, the first protocol for software updates over Cache-enabled Networks that is scalable to billions of concurrent device updates while being secure against malicious networks. We evaluate our proposal considering Named-Data Networking, a novel instance of Cache-enabled overlay Networks. Our analysis and experimental evaluation show that Updaticator removes the bottlenecks of individual device-update distribution, by reducing the network load at the distribution server: from linear in the number of devices to a constant, even if billions of devices are requesting updates. Furthermore, when compared to the state-of-the-art individual device-update mechanisms, the download time with Updaticator is negligible, due to local caching.
Moreno Ambrosin, Christoph Busold, Mauro Conti, Ahmad-Reza Sadeghi, Matthias Schunter
Local Password Validation Using Self-Organizing Maps
Abstract
The commonly used heuristics to promote password strength (e.g. minimum length, forced use of alphanumeric characters, etc.) have been found considerably ineffective and, worse, often counterproductive. When coupled with the predominance of dictionary-based attacks and leaks of large password data sets, this situation has led, in recent years, to the idea that the most useful criterion on which to classify the strength of a candidate password is the frequency with which it has appeared in the past.
Maintaining an updated and representative record of past password choices does, however, require the processing and storage of high volumes of data, making the schemes thus far proposed centralized. Unfortunately, requiring that users submit their chosen candidate passwords to a central engine for validation may have security implications and does not allow offline password generation. Another major limitation of the currently proposed systems is the lack of generalisation capability: a password similar to a common password is usually considered safe.
In this article, we propose an algorithm which addresses both limitations. It is designed for local operation, avoiding the need to disclose candidate passwords, and is focused on generalisation, recognizing as dangerous not only frequently occurring passwords, but also candidates similar to them. An implementation of this algorithm is released in the form of a Google Chrome browser extension.
Diogo Mónica, Carlos Ribeiro
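As a rough illustration only: the sketch below trains a tiny self-organizing map on feature vectors of common passwords and scores a candidate by its distance to the closest map unit. The feature encoding, map size, training schedule, and password lists are assumptions made for readability; they are not the representation or parameters used by the paper or its Chrome extension.

    import numpy as np

    def features(pw):
        # Toy feature vector: length plus character-class counts (assumption).
        return np.array([len(pw),
                         sum(c.isdigit() for c in pw),
                         sum(c.isupper() for c in pw),
                         sum(not c.isalnum() for c in pw)], dtype=float)

    def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0):
        rng = np.random.default_rng(0)
        w = rng.random((grid[0], grid[1], data.shape[1]))
        coords = np.stack(np.meshgrid(np.arange(grid[0]), np.arange(grid[1]),
                                      indexing="ij"), axis=-1)
        n_steps = epochs * len(data)
        for t, x in enumerate(np.tile(data, (epochs, 1))):
            lr = lr0 * np.exp(-t / n_steps)            # decaying learning rate
            sigma = sigma0 * np.exp(-t / n_steps)      # shrinking neighbourhood
            bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(-1)), grid)
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
            w += lr * h * (x - w)                      # pull neighbourhood toward x
        return w

    def quantization_error(w, pw):
        # Distance of a candidate to its best-matching unit on the trained map.
        x = features(pw)
        return np.sqrt(((w - x) ** 2).sum(-1)).min()

    common = np.array([features(p) for p in ["123456", "password", "qwerty1",
                                             "letmein", "abc123"]])
    som = train_som(common)
    # Candidates that land close to the map of common passwords are treated as risky.
    for cand in ["passw0rd", "T7#vq!m9Lx$2"]:
        print(cand, quantization_error(som, cand))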
Verifiable Delegation of Computations with Storage-Verification Trade-off
Abstract
Outsourcing computations has attracted much attention in recent years. An important security challenge is ensuring the correctness of the computed results. In the verifiable computation (VC) model of Gennaro, Gentry and Parno (CRYPTO 2010), a client can delegate the computation of its function to a cloud server, and efficiently verify the correctness of any computed results. In the existing VC schemes, the server must store an encoding of the function that doubles the required cloud storage, compared with storing the function itself. In this paper, we introduce a parameter that measures the trade-off between the required cloud storage and the client’s verification time. We construct four (privately or publicly) VC schemes for delegating polynomials and matrices. These schemes allow the client to significantly reduce the consumed cloud storage by slightly increasing its verification time.
Liang Feng Zhang, Reihaneh Safavi-Naini
Identity-Based Encryption with Post-Challenge Auxiliary Inputs for Secure Cloud Applications and Sensor Networks
Abstract
Identity-based encryption (IBE) is useful for providing end-to-end access control and data protection in many scenarios such as cloud applications and wireless sensor networks. However, there are some practical threats for the data owner or the sensor, who encrypts raw data, and for the data user or the control centre, who decrypts the ciphertext and recovers the raw data.
In this paper, we tackle the open problem of proposing a leakage-resilience encryption model that can capture leakage from both the secret key owner (the data user or control centre) and the encryptor (the data owner or sensor), in the auxiliary input model. Existing models only allow the leakage of the secret key and do not allow adversaries to query more leakage information after seeing the challenge ciphertext of the security games. We solve this problem by defining the post-challenge auxiliary input model in which the family of leakage functions must be defined before the adversary is given the public key. The post-challenge query will return the leakage of the encryption randomness used by the encryptor. This model is able to capture a wider class of real-world attacks.
To realize our model, we propose a generic transformation from the auxiliary input model to our new post-challenge auxiliary input model for both public key encryption (PKE) and IBE. Furthermore, we extend Canetti et al.'s technique, which converts CPA-secure IBE to CCA-secure PKE, to the leakage-resilient setting.
Tsz Hon Yuen, Ye Zhang, Siu Ming Yiu, Joseph K. Liu
Verifiable Computation over Large Database with Incremental Updates
Abstract
The notion of a verifiable database (VDB) enables a resource-constrained client to securely outsource a very large database to an untrusted server so that it can later retrieve a database record and update a record by assigning a new value. Also, any attempt by the server to tamper with the data will be detected by the client. When the database undergoes frequent but small modifications, the client must re-compute and update the encrypted version (ciphertext) on the server at all times. For very large data, it is extremely expensive for the resource-constrained client to perform both operations from scratch. In this paper, we formalize the notion of a verifiable database with incremental updates (Inc-VDB). Besides, we propose a general Inc-VDB framework by incorporating the primitive of vector commitment and the encrypt-then-incremental MAC mode of encryption. We also present a concrete Inc-VDB scheme based on the computational Diffie-Hellman (CDH) assumption. Furthermore, we prove that our construction achieves the desired security properties.
Xiaofeng Chen, Jin Li, Jian Weng, Jianfeng Ma, Wenjing Lou
DroidMiner: Automated Mining and Characterization of Fine-grained Malicious Behaviors in Android Applications
Abstract
Most existing malicious Android app detection approaches rely on manually selected detection heuristics, features, and models. In this paper, we describe a new, complementary system, called DroidMiner, which uses static analysis to automatically mine malicious program logic from known Android malware, abstracts this logic into a sequence of threat modalities, and then seeks out these threat modality patterns in other unknown (or newly published) Android apps. We formalize a two-level behavioral graph representation used to capture Android app program logic, and design new techniques to identify and label elements of the graph that capture malicious behavioral patterns (or malicious modalities). After the automatic learning of these malicious behavioral models, DroidMiner can scan a new Android app to (i) determine whether it contains malicious modalities, (ii) diagnose the malware family to which it is most closely associated, and (iii) provide further evidence as to why the app is considered to be malicious by including a concise description of identified malicious behaviors. We evaluate DroidMiner using 2,466 malicious apps, identified from a corpus of over 67,000 third-party market Android apps, plus an additional set of over 10,000 official market Android apps. Using this set of real-world apps, we demonstrate that DroidMiner achieves a 95.3% detection rate, with only a 0.4% false positive rate. We further evaluate DroidMiner's ability to classify malicious apps under their proper family labels, and measure its label accuracy at 92%.
Chao Yang, Zhaoyan Xu, Guofei Gu, Vinod Yegneswaran, Phillip Porras
Detecting Targeted Smartphone Malware with Behavior-Triggering Stochastic Models
Abstract
Malware for current smartphone platforms is becoming increasingly sophisticated. The presence of advanced networking and sensing functions in the device is giving rise to a new generation of targeted malware characterized by greater situational awareness, in which decisions are made on the basis of factors such as the device location, the user profile, or the presence of other apps. This complicates behavioral detection, as the analyst must reproduce very specific activation conditions in order to trigger malicious payloads. In this paper, we propose a system that addresses this problem by relying on stochastic models of usage and context events derived from real user traces. By incorporating the behavioral particularities of a given user, our scheme provides a solution for detecting malware targeting such a specific user. Our results show that the properties of these models follow a power-law distribution: a fact that facilitates an efficient generation of automatic testing patterns tailored for individual users, when done in conjunction with a cloud infrastructure supporting device cloning and parallel testing. We report empirical results with various representative case studies, demonstrating the effectiveness of this approach to detect complex activation patterns.
Guillermo Suarez-Tangil, Mauro Conti, Juan E. Tapiador, Pedro Peris-Lopez
TrustDump: Reliable Memory Acquisition on Smartphones
Abstract
With the widespread use of smartphones in our daily life, new malware is emerging to compromise the mobile OS and steal sensitive data from mobile applications. Anti-malware tools should be continuously updated via static and dynamic malware analysis to detect and prevent the newest malware. Dynamic malware analysis depends on a reliable memory acquisition of the OS and the applications running on the smartphones. In this paper, we develop a TrustZone-based memory acquisition mechanism called TrustDump that is capable of reliably obtaining the RAM memory and CPU registers of the mobile OS even if the OS has crashed or has been compromised. The mobile OS runs in the TrustZone's normal domain, and the memory acquisition tool runs in the TrustZone's secure domain, which has the access privilege to the memory in the normal domain. Instead of using a hypervisor to ensure isolation between the OS and the memory acquisition tool, we rely on ARM TrustZone to achieve a hardware-assisted isolation with a small trusted computing base (TCB) of about 450 lines of code. We build a TrustDump prototype on the Freescale i.MX53 QSB.
He Sun, Kun Sun, Yuewu Wang, Jiwu Jing, Sushil Jajodia
A Framework to Secure Peripherals at Runtime
Abstract
Secure hardware forms the foundation of a secure system. However, securing hardware devices remains an open research problem. In this paper, we present IOCheck, a framework to enhance the security of I/O devices at runtime. It leverages System Management Mode (SMM) to quickly check the integrity of I/O configurations and firmware. IOCheck is agnostic to the operating system. We use random-polling and event-driven approaches to switch into SMM. We implement a prototype of IOCheck and conduct extensive experiments on physical machines. Our experimental results show that IOCheck takes 10 milliseconds to check the integrity of a network card and a video card. Also, IOCheck introduces a low overhead on Windows and Linux platforms. We show that IOCheck achieves a faster switching time than the Dynamic Root of Trust Measurement approach.
Fengwei Zhang, Haining Wang, Kevin Leach, Angelos Stavrou
StealthGuard: Proofs of Retrievability with Hidden Watchdogs
Abstract
This paper presents StealthGuard, an efficient and provably secure proof of retrievability (POR) scheme. StealthGuard makes use of a privacy-preserving word search (WS) algorithm to search, as part of a POR query, for randomly valued blocks called watchdogs that are inserted in the file before outsourcing. Thanks to the privacy-preserving features of the WS, neither the cloud provider nor a third-party intruder can guess which watchdog is queried in each POR query. Similarly, the responses to POR queries are also obfuscated. Hence, to answer every new set of POR queries correctly, the cloud provider has to retain the file in its entirety. StealthGuard stands out from the earlier sentinel-based POR scheme proposed by Juels and Kaliski (JK) due to its use of WS and its support for an unlimited number of queries. The paper also presents a formal security analysis of the protocol.
Monir Azraoui, Kaoutar Elkhiyaoui, Refik Molva, Melek Önen
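To make the watchdog idea concrete, here is a heavily simplified sketch in the spirit of sentinel-based POR (closer to the Juels-Kaliski scheme the abstract contrasts with): keyed pseudorandom watchdog blocks are interleaved with the file before upload, and an audit re-derives a few of them and checks that the server still returns them. StealthGuard's actual contribution, retrieving watchdogs via a privacy-preserving word search so the server cannot tell which ones are checked and supporting unlimited queries, is deliberately omitted; the block size, PRF choice, and challenge counts are illustrative assumptions.

    import hashlib
    import hmac
    import random

    BLOCK = 16  # toy block size in bytes

    def prf(key, label, i):
        # Keyed pseudorandom block derived from the client's secret key.
        return hmac.new(key, f"{label}:{i}".encode(), hashlib.sha256).digest()[:BLOCK]

    def layout(key, n_file_blocks, n_watchdogs):
        # Deterministically replay where each watchdog lands among the file blocks.
        slots = [("file", j) for j in range(n_file_blocks)]
        for i in range(n_watchdogs):
            pos = int.from_bytes(prf(key, "pos", i), "big") % (len(slots) + 1)
            slots.insert(pos, ("dog", i))
        return slots

    def outsource(key, file_blocks, n_watchdogs):
        # Interleave keyed watchdogs with the (toy, unencrypted) file before upload.
        return [file_blocks[idx] if kind == "file" else prf(key, "dog", idx)
                for kind, idx in layout(key, len(file_blocks), n_watchdogs)]

    def audit(key, fetch_block, n_file_blocks, n_watchdogs, challenges=4):
        # Re-derive a few watchdogs and check the server still stores them intact.
        slots = layout(key, n_file_blocks, n_watchdogs)
        dogs = [(pos, idx) for pos, (kind, idx) in enumerate(slots) if kind == "dog"]
        for pos, idx in random.sample(dogs, challenges):
            if fetch_block(pos) != prf(key, "dog", idx):
                return False
        return True

    key = b"client-secret-key"
    stored = outsource(key, [bytes([j]) * BLOCK for j in range(10)], n_watchdogs=8)
    print(audit(key, lambda p: stored[p], 10, 8))    # True: file intact
    half = len(stored) // 2
    stored[half:] = [b"\x00" * BLOCK] * (len(stored) - half)  # server drops half the data
    print(audit(key, lambda p: stored[p], 10, 8))    # almost certainly False now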
An Efficient Cloud-Based Revocable Identity-Based Proxy Re-encryption Scheme for Public Clouds Data Sharing
Abstract
Identity-based encryption (IBE) eliminates the necessity of having a costly certificate verification process. However, revocation remains a daunting task in terms of ciphertext update and key update phases. In this paper, we provide an affirmative solution to the efficiency problem incurred by revocation. We propose the first cloud-based revocable identity-based proxy re-encryption (CR-IB-PRE) scheme that supports not only user revocation but also delegation of decryption rights. Whether or not a user is revoked, at the end of a given time period the cloud, acting as a proxy, re-encrypts all of the user's ciphertexts from the current time period to the next one. If the user is revoked in the forthcoming time period, he can no longer decrypt the ciphertexts using the expired private key. Compared to naive solutions that require a private key generator (PKG) to interact with non-revoked users in each time period, the new scheme provides definite advantages in terms of communication and computation efficiency.
Kaitai Liang, Joseph K. Liu, Duncan S. Wong, Willy Susilo
Verifiable Computation on Outsourced Encrypted Data
Abstract
On one hand, homomorphic encryption allows a cloud server to perform computation on outsourced encrypted data but provides no verifiability that the computation is correct. On the other hand, homomorphic authenticator, such as homomorphic signature with public verifiability and homomorphic MAC with private verifiability, guarantees authenticity of computation over outsourced data but does not provide data confidentiality. Since cloud servers are usually operated by third-party providers which are almost certain to be outside the trust domain of cloud users, neither homomorphic encryption nor homomorphic authenticator suffices for verifiable computation on outsourced encrypted data in the cloud. In this paper, we propose verifiable homomorphic encryption (VHE), which enables verifiable computation on outsourced encrypted data.
We first introduce a new cryptographic primitive called homomorphic encrypted authenticator (HEA), which may be of independent interest. Informally, HEA can be viewed as a homomorphic authenticator in which the authenticator itself does not leak any information about the message it authenticates. Next, we show that the fully homomorphic MAC scheme, proposed by Gennaro and Wichs recently, is a fully HEA with weak unforgeability in the sense that an adversary is not allowed to make verification queries. We then propose a linearly HEA which can tolerate any number of malicious verification queries, i.e., it achieves (strong) unforgeability. Finally, we define VHE formally, and give a generic construction of VHE based on homomorphic encryption and HEA. Instantiating the generic construction, we derive a fully VHE with weak verifiability as well as a linearly VHE with (strong) verifiability.
Junzuo Lai, Robert H. Deng, Hweehwa Pang, Jian Weng
Verifiable Computation with Reduced Informational Costs and Computational Costs
Abstract
Outsourcing computation is a fundamental principle of the new cloud computing paradigm. Among its various aspects, the correctness of the computation result remains paramount. This motivates the birth of verifiable computation, which aims at efficiently checking the result for general-purpose computation. The common goal of recently sprouted verifiable computation protocols is to reduce the costs associated with verification at both the prover and the verifier. Unfortunately, the high computation and communication costs of verification still keep general verifiable computation away from practicality. Besides the computational costs, we observe that another type of verification cost has been generally ignored until now: the informational costs, namely, the information required for the verification. In particular, in the context of third-party verification, this cost implies the leakage of sensitive information regarding the computational task and its results. In this paper, we introduce the new verifiable-computation protocol RIVER, which reduces the computational costs of the verifier and of the prover compared to the most recent alternative protocols, and (for the first time in the context of verifiable computation) addresses and decreases informational costs.
Gang Xu, George T. Amariucai, Yong Guan
Detangling Resource Management Functions from the TCB in Privacy-Preserving Virtualization
Abstract
Recent research has developed virtualization architectures to protect the privacy of guest virtual machines. The key technique is to include an access control matrix in the hypervisor. However, existing approaches either have limited functionality in the hypervisor or a Trusted Computing Base (TCB) that is too large to secure. In this paper, we propose a new architecture, MyCloud SEP, which separates resource allocation and management from the hypervisor in order to reduce the TCB size while supporting privacy protection. In our design, the hypervisor checks all resource accesses against an access control matrix. While providing the flexibility of plugging in resource management modules, the size of the TCB is significantly reduced compared with commercial hypervisors. Using a virtual disk manager as an example, we implement a prototype on the x86 architecture. The performance evaluation results also show acceptable overheads.
Min Li, Zili Zha, Wanyu Zang, Meng Yu, Peng Liu, Kun Bai
Securely Outsourcing Exponentiations with Single Untrusted Program for Cloud Storage
Abstract
Provable Data Possession (PDP) allows a file owner to outsource her files to a storage server such that a verifier can check the integrity of the outsourced file. Publicly verifiable PDP schemes allow anyone other than the file owner to be a verifier. At the client side (file owner or verifier), a substantial number of modular exponentiations is often required. In this paper we make PDP more practical by proposing a protocol to securely outsource the (most generic) variable-exponent variable-base exponentiations in the one-untrusted-program model. Our protocol demonstrates advantages in efficiency or privacy over existing protocols that cope only with special cases in the two- or single-untrusted-program model. We then apply our generic protocol to Shacham-Waters PDP and a variant of Yuan-Yu PDP. The analyses show that our protocol makes PDP much more efficient at the client side.
Yujue Wang, Qianhong Wu, Duncan S. Wong, Bo Qin, Sherman S. M. Chow, Zhen Liu, Xiao Tan
Quantitative Workflow Resiliency
Abstract
A workflow is resilient when the unavailability of some users does not force a choice between a violation of the security policy and an early termination of the workflow. Although checking the resiliency of a workflow is a well-studied problem, solutions usually only provide a binary answer, leaving a workflow designer with little help when the workflow is not resilient. In this paper we propose instead a measure of quantitative resiliency, indicating how likely a workflow is to terminate for a given security policy and a given user availability model. We define this notion by encoding the resiliency problem as a decision problem, reducing the finding of an optimal user-task assignment to solving a Markov Decision Process. We illustrate the flexibility of our encoding by considering different measures of resiliency, and we empirically analyse them, showing the existence of a trade-off between multiple aspects such as success rate, expected termination step, and computation time, thus providing a toolbox that could help a workflow designer improve or fix a workflow.
John C. Mace, Charles Morisset, Aad van Moorsel
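The following is a generic value-iteration sketch, not the paper's encoding: it treats a tiny two-task workflow with a separation-of-duty constraint as a Markov Decision Process in which each user is independently available with an assumed probability, so the value of the initial state is the best achievable probability that the workflow terminates. The toy workflow, the availability model, and the reward scheme are illustrative assumptions.

    def value_iteration(states, actions, transitions, eps=1e-9):
        # transitions(s, a) -> list of (probability, next_state, reward)
        v = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                acts = actions(s)
                if not acts:          # terminal state (workflow finished or failed)
                    continue
                best = max(sum(p * (r + v[ns]) for p, ns, r in transitions(s, a))
                           for a in acts)
                delta = max(delta, abs(best - v[s]))
                v[s] = best
            if delta < eps:
                return v

    # Toy workflow: two tasks, two users; separation of duty forbids one user
    # doing both tasks; each user is independently available with probability 0.8.
    AVAIL = 0.8
    states = ["t1", ("t2", "u1"), ("t2", "u2"), "done", "fail"]

    def actions(s):
        if s == "t1":
            return ["u1", "u2"]                        # assign task 1
        if isinstance(s, tuple):                       # task 2, excluding prior user
            return [u for u in ("u1", "u2") if u != s[1]]
        return []                                      # done / fail are terminal

    def transitions(s, user):
        nxt = ("t2", user) if s == "t1" else "done"
        reward = 1.0 if nxt == "done" else 0.0
        return [(AVAIL, nxt, reward), (1 - AVAIL, "fail", 0.0)]

    v = value_iteration(states, actions, transitions)
    print(v["t1"])   # 0.64: best achievable probability that the workflow terminates

Under these assumptions the printed value is 0.8 x 0.8 = 0.64, i.e. the chance that both assigned users turn out to be available under the best assignment policy.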
Who Is Touching My Cloud
Abstract
Advanced access controls have been proposed to secure sensitive data maintained by a third party. A subtle issue in such systems is that some access credentials may be leaked for various reasons, which could severely damage data security. In this paper, we investigate leakage-tracing-enabled access control over outsourced data, so that one can revoke the suspected leaked credentials or prepare judicial evidence for legal procedures if necessary. Specifically, we propose a leaked access credential tracing (LACT) framework to secure data outsourced to clouds and formalize its security model. Following the framework, we construct a concrete LACT scheme that is provably secure. The proposed scheme offers fine-grained access control over outsourced data, by which the data owner can specify an access policy to ensure that the data is only accessible to users meeting the policy. In case of suspected illegal access to outsourced data with leaked credentials, a tracing procedure can be invoked to trace, in a black-box manner, at least one of the users who leaked their access credentials. The tracing procedure can run without disturbing the cloud service provider. Analysis shows that the introduction of tracing access credential leakage incurs little additional cost to either the data outsourcing or the access procedure.
Hua Deng, Qianhong Wu, Bo Qin, Jian Mao, Xiao Liu, Lei Zhang, Wenchang Shi
A Fast Single Server Private Information Retrieval Protocol with Low Communication Cost
Abstract
Existing single server Private Information Retrieval (PIR) protocols are far from practical. To be practical, a single server PIR protocol has to be both communicationally and computationally efficient. In this paper, we present a single server PIR protocol that has low communication cost and is much faster than existing protocols. A major building block of the PIR protocol in this paper is a tree-based compression scheme, which we call folding/unfolding. This compression scheme enables us to lower the communication complexity to O(log log n). The other major building block is the BGV fully homomorphic encryption scheme. We show how we design the protocol to exploit the internal parallelism of the BGV scheme. This significantly reduces the server side computational overhead and makes our protocol much faster than the existing protocols. Our protocol can be further accelerated by utilising hardware parallelism. We have built a prototype of the protocol. We report on the performance of our protocol based on the prototype and compare it with the current most efficient protocols.
Changyu Dong, Liqun Chen
Privacy-Preserving Complex Query Evaluation over Semantically Secure Encrypted Data
Abstract
In the last decade, several techniques have been proposed to evaluate different types of queries (e.g., range and aggregate queries) over encrypted data in a privacy-preserving manner. However, solutions supporting the privacy-preserving evaluation of complex queries over encrypted data have been developed only recently. Such recent techniques, however, are either insecure or not feasible for practical applications. In this paper, we propose a novel privacy-preserving query processing framework that supports complex queries over encrypted data in the cloud computing environment and addresses the shortcomings of previous approaches. At a high level, our framework utilizes both homomorphic encryption and garbled circuit techniques at different stages in query processing to achieve the best performance, while at the same time protecting the confidentiality of data, privacy of the user’s input query and hiding data access patterns. Also, as a part of query processing, we provide an efficient approach to systematically combine the predicate results (in encrypted form) of a query to derive the corresponding query evaluation result in a privacy-preserving manner. We theoretically and empirically analyze the performance of this approach and demonstrate its practical value over the current state-of-the-art techniques. Our proposed framework is very efficient from the user’s perspective, thus allowing a user to issue queries even using a resource constrained device (e.g., PDAs and cell phones).
Bharath Kumar Samanthula, Wei Jiang, Elisa Bertino
Authorized Keyword Search on Encrypted Data
Abstract
Cloud computing has drawn much attention from research and industry in recent years. Plenty of enterprises and individuals are outsourcing their data to cloud servers. As those data may contain sensitive information, they should be encrypted before being outsourced to cloud servers. In order to ensure that only authorized users can search and further access the encrypted data, two important capabilities must be supported: keyword search and access control. Recently, rigorous efforts have been made on either keyword search or access control over encrypted data. However, to the best of our knowledge, there is no encryption scheme supporting both capabilities in a public-key scenario so far. In this paper, we propose an authorized searchable public-key encryption scheme supporting expressive search capability and prove it fully secure in the standard model.
Jie Shi, Junzuo Lai, Yingjiu Li, Robert H. Deng, Jian Weng
Double-Authentication-Preventing Signatures
Abstract
Digital signatures are often used by trusted authorities to make unique bindings between a subject and a digital object; for example, certificate authorities certify that a public key belongs to a domain name, and time-stamping authorities certify that a certain piece of information existed at a certain time. Traditional digital signature schemes, however, impose no uniqueness conditions, so a trusted authority could make multiple certifications for the same subject but different objects, be it intentionally, by accident, or under (legal or illegal) coercion. We propose the notion of a double-authentication-preventing signature, in which a value to be signed is split into two parts: a subject and a message. If a signer ever signs two different messages for the same subject, enough information is revealed to allow anyone to compute valid signatures on behalf of the signer. This double-signature forgeability property discourages signers from misbehaving (a form of self-enforcement) and would give binding authorities like CAs cryptographic arguments to resist legal coercion. We give a generic construction using a new type of trapdoor function with extractability properties, which we show can be instantiated using the group of sign-agnostic quadratic residues modulo a Blum integer.
Bertram Poettering, Douglas Stebila
Statistical Properties of Pseudo Random Sequences and Experiments with PHP and Debian OpenSSL
Abstract
NIST SP800-22 (2010) proposed state-of-the-art statistical testing techniques for testing the quality of (pseudo)random generators. However, it is easy to construct natural functions that are considered good pseudorandom generators by the NIST SP800-22 test suite even though their output is easily distinguishable from the uniform distribution. This paper proposes solutions to address this challenge by using statistical-distance-based testing techniques. We carried out both NIST tests and LIL-based tests on the following pseudorandom generators, generating more than 200TB of data in total: (1) the standard C linear congruential generator, (2) the Mersenne Twister pseudorandom generator, (3) PHP random generators (both Mersenne Twister and linear congruential based), and (4) the Debian Linux (CVE-2008-0166) pseudorandom generator with OpenSSL 0.9.8c-1. As a first important result, our experiments show that the PHP pseudorandom generator implementations (both linear congruential and Mersenne Twister) output completely insecure bits if the output is not further processed. As a second result, we illustrate the advantages of our LIL-based testing over NIST testing. It is known that the Debian Linux (CVE-2008-0166) pseudorandom generator based on OpenSSL 0.9.8c-1 is flawed and its output sequences are predictable. Our LIL tests on these sequences discovered the flaws in the Debian Linux implementation, whereas the NIST SP800-22 test suite is not able to detect this flaw using the NIST-recommended parameters. We conclude that the NIST SP800-22 test suite is not sufficient and that distance-based LIL testing techniques should be included in statistical testing practice. We also recommend that all pseudorandom generator implementations be comprehensively tested using state-of-the-art statistically robust testing tools.
Yongge Wang, Tony Nicol
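As a small illustration of the statistic behind LIL-based testing (not the paper's full statistical-distance test): by the law of the iterated logarithm, the normalized random walk S_n / sqrt(2 n ln ln n) of a truly random bit sequence stays roughly within [-1, 1] and approaches those bounds infinitely often. The paper compares the distribution of this statistic over many sequences against the theoretical one using statistical distances; the sketch below only evaluates the statistic at a few checkpoints, and the checkpoint sizes, seed, and contrast bias are illustrative assumptions.

    import math
    import random

    def lil_statistic(bits, n):
        # S_lil(n) = S_n / sqrt(2 n ln ln n), with bits in {0, 1} mapped to {-1, +1}.
        s = sum(2 * b - 1 for b in bits[:n])
        return s / math.sqrt(2 * n * math.log(math.log(n)))

    random.seed(1)
    fair = [random.getrandbits(1) for _ in range(1 << 20)]
    for n in (1 << 14, 1 << 16, 1 << 18, 1 << 20):
        print(n, round(lil_statistic(fair, n), 4))   # typically well inside [-1, 1]

    # A biased generator drifts: its walk grows linearly in n, so the statistic
    # escapes the [-1, 1] band at large n and the bias becomes visible.
    biased = [1 if random.random() < 0.51 else 0 for _ in range(1 << 20)]
    print(round(lil_statistic(biased, 1 << 20), 4))  # far outside [-1, 1]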
Efficient Hidden Vector Encryption with Constant-Size Ciphertext
Abstract
A Hidden Vector Encryption (HVE) scheme is a special type of anonymous identity-based encryption (IBE) scheme where the attribute string associated with the ciphertext or the user secret key can contain wildcards. In this paper, we introduce two constant-size ciphertext-policy hidden vector encryption (CP-HVE) schemes. Our first scheme is constructed on composite order bilinear groups, while the second one is built on prime order bilinear groups. Both schemes are proven secure in a selective security model which captures plaintext (or payload) and attribute hiding. To the best of our knowledge, our schemes are the first HVE constructions that can achieve constant-size ciphertext among all the existing HVE schemes.
Tran Viet Xuan Phuong, Guomin Yang, Willy Susilo
Enabling Short Fragments for Uncoordinated Spread Spectrum Communication
Abstract
Uncoordinated spread spectrum (USS) protocols have been proposed for anti-jamming communication in wireless settings without shared secrets. The existing USS protocols assume that fragments of hundreds of bits can be transmitted on different channels in order to identify fragments that belong to the same message. However, such long transmissions are susceptible to reactive jamming. To address this problem, we present a protocol that allows the use of short fragments of a few bits only. This makes our scheme resilient to a large class of reactive jammers. We prove that reassembling the fragmented message is not only feasible but also efficient: it can be completed in polynomial time in the size of the message, even if the jammer is computationally resourceful. We demonstrate the protocol efficiency by simulating the reassembly process at the link layer under different design parameters.
Naveed Ahmed, Christina Pöpper, Srdjan Capkun
Fingerprinting Far Proximity from Radio Emissions
Abstract
As wireless mobile devices become more pervasive and are adopted in critical applications, it is increasingly important to measure the physical proximity of these devices in a secure way. Although various techniques have been developed to identify whether a device is close, the problem of identifying far proximity (i.e., that a target is at least a certain distance away) has been neglected by the research community. Meanwhile, verifying far proximity is desirable and critical for enhancing the security of emerging wireless applications. In this paper, we propose a secure far proximity identification approach that determines whether or not a remote device is far away. The key idea of the proposed approach is to estimate the far proximity from the unforgeable "fingerprint" of the proximity. We have validated and evaluated the effectiveness of the proposed far proximity identification method through experiments on real measured channel data. The experiment results show that the proposed approach can detect far proximity with a success rate of 0.85 for the non-line-of-sight (NLoS) scenario, and the success rate can be further increased to 0.99 for the line-of-sight (LoS) scenario.
Tao Wang, Yao Liu, Jay Ligatti
A Cross-Layer Key Establishment Scheme in Wireless Mesh Networks
Abstract
Cryptographic keys are necessary to secure communications among mesh clients in wireless mesh networks. Traditional key establishment schemes are implemented at higher layers, and the security of most such designs relies on the complexity of computational problems. Extracting cryptographic keys at the physical layer is a promising approach with information-theoretical security. But due to the nature of communications at the physical layer, none of the existing designs supports key establishment if communicating parties are out of each other’s radio range, and all schemes are insecure against man-in-the-middle attacks. This paper presents a cross-layer key establishment scheme where the established key is determined by two partial keys: one extracted at the physical layer and the other generated at higher layers. The analysis shows that the proposed cross-layer key establishment scheme not only eliminates the aforementioned shortcomings of key establishment at each layer but also provides a flexible solution to the key generation rate problem.
Yuexin Zhang, Yang Xiang, Xinyi Huang, Li Xu
Backmatter
Metadata

Title: Computer Security - ESORICS 2014
Editors: Mirosław Kutyłowski, Jaideep Vaidya
Copyright Year: 2014
Publisher: Springer International Publishing
Electronic ISBN: 978-3-319-11203-9
Print ISBN: 978-3-319-11202-2
DOI: https://doi.org/10.1007/978-3-319-11203-9
