About this book

This book constitutes the refereed proceedings of the 8th FTRA International Conference on Secure and Trust Computing, Data Management, and Applications, STA 2011, held in Loutraki, Greece, in June 2011. STA 2011 is the first conference after the merger of the successful SSDU, UbiSec, and TRUST symposium series, previously held from 2006 until 2010 in various locations. The 29 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers address various theories and practical applications of secure and trust computing and data management in future environments.



STA 2011 - Security Track

Embedding High Capacity Covert Channels in Short Message Service (SMS)

Covert channels constitute an important security threat because they are used to exfiltrate sensitive information, to disseminate malicious code, and, more alarmingly, to transfer instructions to a criminal (or terrorist). This work presents zero-day vulnerabilities and weaknesses that we discovered in the Short Message Service (SMS) protocol, which allow the embedding of high capacity covert channels. We show that an intruder, by exploiting these SMS vulnerabilities, can bypass the existing security infrastructure (including firewalls, intrusion detection systems, content filters) of a sensitive organization and the primitive content filtering software at an SMS Center (SMSC). We found that SMS itself, along with its value added services (like picture SMS, ring tone SMS), appears to be much more susceptible to security vulnerabilities than other services in IP-based networks. To demonstrate the effectiveness of covert channels in SMS, we have used our tool GeheimSMS, which practically embeds data bytes (not only secret, but also hidden) by composing the SMS in Protocol Data Unit (PDU) mode and transmitting it from a mobile device over a serial or Bluetooth link. The contents of the overt (benign) message are not corrupted; hence the secret communication remains unsuspicious during the transmission and reception of SMS. Our experiments on active cellular networks show that 1 KB of a secret message can be transmitted in less than 3 minutes by sending 26 SMS messages without raising an alarm over suspicious activity.
M. Zubair Rafique, Muhammad Khurram Khan, Khaled Alghathbar, Muddassar Farooq

A Framework for Detecting Malformed SMS Attack

Malformed messages in different protocols pose a serious threat because they are used to remotely launch malicious activity. Furthermore, they are capable of crashing servers and end points, sometimes with a single message. Recently, it was shown that a malformed SMS can crash a mobile phone or gain unfettered access to it. In spite of this, little research has been done to protect mobile phones against malformed SMS messages. In this paper, we propose an SMS malformed message detection framework that extracts novel syntactical features from SMS messages at the access layer of a smart phone. Our framework operates in four steps: (1) it analyzes the syntax of the SMS protocol, (2) extracts syntactical features from SMS messages and represents them in a suffix tree, (3) uses well-known feature selection schemes to remove the redundancy in the feature set, and (4) uses standard distance measures to raise the final alarm. The benefit of our framework is that it is lightweight, requiring little processing and memory, and provides a high detection rate and a low false alarm rate. We evaluated our system on a real-world SMS dataset consisting of more than 5000 benign and malformed SMS messages. The results of our experiments demonstrate that our framework achieves a detection rate of more than 99% with a false alarm rate of less than 0.005%. Last but not least, its processing and memory requirements are relatively small; as a result, it can be easily deployed on resource-constrained smart phones or mobile devices.
M. Zubair Rafique, Muhammad Khurram Khan, Khaled Alghathbar, Muddassar Farooq
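The pipeline of syntactic feature extraction followed by a distance-based alarm can be sketched in a few lines of Python. This is only an illustration of the idea, not the authors' suffix-tree framework; the corpus, n-gram size, and Manhattan distance are all placeholder choices:

```python
from collections import Counter

def ngram_profile(messages, n=2):
    """Build a normalized byte n-gram frequency profile from a message corpus."""
    counts = Counter()
    for msg in messages:
        data = msg.encode("utf-8", errors="replace")
        for i in range(len(data) - n + 1):
            counts[data[i:i + n]] += 1
    total = sum(counts.values()) or 1
    return {g: c / total for g, c in counts.items()}

def manhattan_distance(profile, msg, n=2):
    """Distance between one message and the benign profile; large values raise an alarm."""
    m = ngram_profile([msg], n)
    grams = set(profile) | set(m)
    return sum(abs(profile.get(g, 0.0) - m.get(g, 0.0)) for g in grams)

benign = ngram_profile(["hello there", "see you at noon", "call me later"])
# A message of atypical control bytes lands far from the benign profile.
malformed = "\x00\x7f\x00\x7f\x00\x7f"
assert manhattan_distance(benign, malformed) > manhattan_distance(benign, "hello you")
```

A deployed detector would threshold the distance; the paper's feature selection step would additionally prune redundant n-grams before the comparison.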

A Metadata Model for Data Centric Security

Data-sharing agreements across organisations are often used to derive security policies that enforce the access, usage and routing of data across different trust and administrative domains. The data exchanged is usually annotated with metadata to describe its meaning in different applications and contexts, which may be used by the enforcement points of such data-sharing policies. In this paper, we present a metadata model for describing data-centric security, i.e. any security information that may be used to annotate data. Such metadata may be used to describe attributes of the data as well as their security requirements. We demonstrate an applicability scenario of our model in the context of organisations sharing scientific data.
Benjamin Aziz, Shirley Crompton, Michael Wilson

SecPAL4DSA: A Policy Language for Specifying Data Sharing Agreements

Data sharing agreements are a common mechanism by which enterprises can legalise and express acceptable circumstances for the sharing of information and digital assets across their administrative boundaries. Such agreements, often written in some natural language, are expected to form the basis for the low-level policies that control the access to and usage of such digital assets. This paper contributes to the problem of expressing data sharing requirements in security policy languages such that the resulting policies can enforce the terms of a data sharing agreement. We extend one such language, SecPAL, with constructs for expressing permissions, obligations, penalties and risk, which often occur as clauses in a data sharing agreement.
Benjamin Aziz, Alvaro Arenas, Michael Wilson

Self-keying Identification Mechanism for Small Devices

We present a strong authentication mechanism intended for embedded systems based on standard but weak processors without support for cryptographic operations. Whereas previous work has mainly aimed at methods based on relatively short keys and complex computations, we take advantage of the availability of larger non-volatile memory and confine ourselves to basic bit operations available on every processor.
Krzysztof Barczyński, Przemysław Błaśkiewicz, Marek Klonowski, Mirosław Kutyłowski
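A minimal Python sketch of the underlying idea: challenge-response identification using only table lookups and XOR over a large stored key. The key size, challenge format, and protocol shape are illustrative assumptions, not the paper's actual scheme; in particular, reused challenge positions would leak linear relations, so a real design must manage them carefully:

```python
import secrets

KEY_SIZE = 4096                        # large non-volatile key, cheap where flash is abundant
device_key = secrets.token_bytes(KEY_SIZE)   # installed at enrollment
verifier_key = device_key                    # verifier holds a copy

def respond(key, challenge):
    """XOR together the key bytes selected by the challenge.
    Only indexing and XOR -- no cryptographic coprocessor needed."""
    acc = 0
    for idx in challenge:
        acc ^= key[idx % len(key)]
    return acc

# Verifier picks random positions; device proves key knowledge by matching.
challenge = [secrets.randbelow(KEY_SIZE) for _ in range(64)]
assert respond(device_key, challenge) == respond(verifier_key, challenge)
```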

A Software Architecture for Introducing Trust in Java-Based Clouds

The distributed software paradigms of grid and cloud computing offer massive computational power at commodity prices. Unfortunately, a number of security risks exist. In this paper we propose a software architecture which leverages the Trusted Computing principle of Remote Attestation to assess the trustworthiness of nodes in computing clouds. We combine hardware-security based on the Trusted Platform Module and Intel Trusted Execution Technology with an integrity-guaranteeing virtualization platform. Cloud services are offered by an easy-to-use Java middleware that performs role based access control and trust decisions hidden from the developer.
Siegfried Podesser, Ronald Toegl

A Network Data Abstraction Method for Data Set Verification

Network data sets are often used for evaluating the performance of intrusion detection systems and intrusion prevention systems [1]. The KDD CUP 99 data set, which was modeled after MIT Lincoln Laboratory network data, has been a popular data set for evaluating network intrusion detection algorithms and systems. However, many issues have been discovered concerning the modeling method of the KDD CUP 99 data. This paper proposes both a measure to compare the similarity between two data groups and an optimization method to efficiently model data sets with the proposed measure. We then quantitatively compare the similarity between the KDD CUP 99 data and the MIT Lincoln Laboratory data with the similarity between our data set, composed from the MIT Lincoln Laboratory data, and the original MIT Lincoln Laboratory data.
Jaeik Cho, Kyuwon Choi, Taeshik Shon, Jongsub Moon
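One simple instance of such a group-similarity measure is the total-variation distance between the empirical feature distributions of two data sets. The sketch below is purely illustrative (it is not the measure proposed in the paper, and the protocol labels are made up):

```python
from collections import Counter

def total_variation(a, b):
    """Total-variation distance between the empirical distributions of two data groups.
    0.0 means identical distributions; 1.0 means completely disjoint support."""
    ca, cb = Counter(a), Counter(b)
    na, nb = len(a), len(b)
    keys = set(ca) | set(cb)
    return 0.5 * sum(abs(ca[k] / na - cb[k] / nb) for k in keys)

real    = ["tcp", "tcp", "udp", "icmp", "tcp", "udp"]   # "Lincoln Lab" traffic
model_a = ["tcp", "udp", "tcp", "icmp", "tcp", "udp"]   # faithful model
model_b = ["icmp"] * 6                                  # poor model
assert total_variation(real, model_a) < total_variation(real, model_b)
assert total_variation(real, real) == 0.0
```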

Efficient De-Fragmented Writeback for Journaling File System

Journaling file systems are widely used to guarantee data integrity. However, system performance is degraded by fragmented data requests. We propose a technique that efficiently handles fragmented data workloads in journaling file systems, which we call De-Fragmented Writeback (DFW). The first method of DFW sorts the write order of the atomic data set according to disk block numbers before issuing write requests. The second method searches for data fragments in the sorted data set and tries to fill the holes between adjacent fragments with dirty blocks from main memory. Filling the holes between fragments converts fragmented data blocks into sequential ones, which lowers the number of write requests and reduces unnecessary disk-head movements.
Seung-Ho Lim, Hyun Jin Choi, Jong Hyuk Park
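The two DFW steps, sorting by block number and filling holes with dirty in-memory blocks, can be sketched as a toy model in Python (real journaling writeback operates on buffer heads and bios, not integer block lists):

```python
def coalesce_writes(atomic_set, dirty_in_memory):
    """DFW sketch: sort the atomic data set by block number, then fill holes
    between adjacent fragments with blocks already dirty in memory, so that
    fragmented writes become fewer, longer sequential requests."""
    blocks = sorted(set(atomic_set))
    filled = set(blocks)
    for lo, hi in zip(blocks, blocks[1:]):
        hole = range(lo + 1, hi)
        if len(hole) > 0 and all(b in dirty_in_memory for b in hole):
            filled.update(hole)
    ordered = sorted(filled)
    # Each contiguous run of block numbers becomes one write request.
    requests = 1 + sum(1 for a, b in zip(ordered, ordered[1:]) if b != a + 1)
    return ordered, requests

# Fragments {10, 13} with dirty blocks 11, 12 in memory -> one sequential request.
blocks, reqs = coalesce_writes([13, 10], dirty_in_memory={11, 12})
assert blocks == [10, 11, 12, 13] and reqs == 1
# Without fillable dirty blocks the fragments stay separate: two requests.
_, reqs = coalesce_writes([13, 10], dirty_in_memory=set())
assert reqs == 2
```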

IMS Session Management Based on Usage Control

Multimedia applications have made their way to the wireless/mobile world, and this is not likely to change. However, people have not stopped using multimedia services through wired networks, and this is also not foreseen to change. What really is changing in the coming networks is the separation of network providers from service providers; this separation creates new security challenges, since the end user does not have the same trust relationships with all of these providers. This paper proposes an architecture for protecting end users from untrusted and unreliable multimedia service providers in Next Generation Networks (NGNs) that utilize the IP Multimedia Subsystem (IMS) for multimedia delivery. Our proposal is based on the Usage Control (UCON) model and continuously monitors the multimedia content delivered to the end user to ensure that it is the proper content requested by the user.
Giorgos Karopoulos, Fabio Martinelli

Using Counter Cache Coherence to Improve Memory Encryptions Performance in Multiprocessor Systems

When memory encryption schemes are applied in multiprocessor systems, new problems arise, such as increased inter-processor communication overhead and increased cache coherence protocol overhead. We propose a counter cache coherence optimization scheme, AOW, to improve the counter cache hit rate. Analogous to the MESI protocol, which marks each cache line with one of four states, AOW marks each counter line with one of three encryption states: 'Autonomy', 'Operating', and 'Waiting'. According to the simulation results, applying AOW decreases memory access time and noticeably improves execution speed over the non-AOW method.
Zhang Yuanyuan, Gu Junzhong

Dynamic Combinatorial Key Pre-distribution Scheme for Heterogeneous Sensor Networks

Previous research on sensor network security considers homogeneous sensor networks, i.e. it assumes all sensors have the same capabilities. Many schemes designed for these networks suffer from a high volume of communication and/or large storage requirements. In this paper, we propose the Dynamic Combinatorial Key Pre-distribution scheme (DCKP), which differentiates between sensors according to their capabilities. DCKP makes use of the Exclusion Basis System (EBS) and sensors' location information. Performance evaluations demonstrate that DCKP is very efficient in terms of storage at a given local connectivity, while at the same time providing better security.
Bidi Ying, Dimitrios Makrakis, Hussein T. Mouftah, Wenjun Lu

Exclusion Based VANETs (EBV)

Vehicular Ad hoc NETworks (VANETs) were proposed to improve driving safety. In VANETs, vehicles communicate with other vehicles and with the infrastructure's Road Side Units (RSUs) to achieve this safety. The 5.9 GHz band was assigned by the FCC for the use of VANETs in Dedicated Short Range Communications (DSRC) [1]. VANET security has been a hot topic since the very first day of its announcement. While IEEE 1609.2 is setting security standards for VANETs, many researchers around the world are working hard to provide an ideal system. IEEE 1609.2 chose Public Key Infrastructure (PKI) for use in VANETs [2]. PKI comes with inherent issues such as key management and certificate revocation lists (CRLs), in addition to issues related to wireless networks such as privacy and linkability. In this paper we propose Exclusion-Based VANETs (EBV), a novel framework based on a combination of PKI and Exclusion Basis Systems (EBS) [3] to address these issues. EBV eliminates the need for CRLs and guarantees privacy and scalability.
Ahmad A. Al-Daraiseh, Mohammed A. Moharrum

Embedding Edit Distance to Allow Private Keyword Search in Cloud Computing

Recently, Li et al. introduced a fuzzy keyword search over encrypted data in Cloud Computing. Their approach relies on fuzzy keyword sets which are used by a symmetric searchable encryption protocol. The idea behind these fuzzy keyword sets is to index, before the search phase, not only the exact keywords but also those differing slightly, according to a fixed bound on the tolerated edit distance. We suggest a different construction here: we exploit a classical embedding of the edit distance into the Hamming distance. This enables us to adapt results on private identification schemes to this new context, and allows more flexibility in the tolerated edit distance.
Julien Bringer, Hervé Chabanne
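To see how an edit-distance problem can be moved into Hamming space, consider the simple q-gram signature below. It is only an illustrative embedding (small edits flip few bits, so Hamming distance loosely tracks edit distance); it is not the specific classical embedding the authors use:

```python
def qgram_signature(word, q=2, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """Binary presence vector over all q-grams of the alphabet (illustrative only)."""
    grams = [a + b for a in alphabet for b in alphabet]
    present = {word[i:i + q] for i in range(len(word) - q + 1)}
    return [1 if g in present else 0 for g in grams]

def hamming(u, v):
    """Number of positions where two equal-length bit vectors differ."""
    return sum(a != b for a, b in zip(u, v))

# One edit ("keyword" -> "keywork") perturbs far fewer q-grams than an
# unrelated word, so the Hamming distance reflects string closeness.
sig = qgram_signature
assert hamming(sig("keyword"), sig("keywork")) < hamming(sig("keyword"), sig("banana"))
```

In the private-search setting, the point is that Hamming-distance comparisons over such vectors can be carried out by existing private identification protocols.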

Efficient Secret Sharing Schemes

We propose a new XOR-based (k,n) threshold secret sharing scheme (SSS), where the secret is a binary string and only XOR operations are used to make shares and recover the secret. Moreover, it is easy to extend our scheme to a multi-secret sharing scheme. When k is close to n, the computation costs are much lower than those of existing XOR-based schemes in both the distribution and recovery phases. In our scheme, using more shares (≥ k) accelerates the recovery.
Chunli Lv, Xiaoqi Jia, Jingqiang Lin, Jiwu Jing, Lijun Tian, Mingli Sun
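The XOR building block is easy to demonstrate with a plain (n, n) scheme, where all n shares are required to recover the secret. The paper's (k, n) threshold construction is more elaborate, so treat this as background only:

```python
import secrets
from functools import reduce

def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(secret, n):
    """(n, n) XOR sharing: any n-1 shares reveal nothing about the secret;
    XOR-ing all n shares together recovers it exactly."""
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def recover(shares):
    return reduce(xor_bytes, shares)

shares = make_shares(b"topsecret", 5)
assert recover(shares) == b"topsecret"
assert recover(shares[:-1]) != b"topsecret"   # one share missing -> garbage
```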

Two Dimensional PalmPhasor Enhanced by Multi-orientation Score Level Fusion

The security and protection of biometric templates has been the bottleneck of their application, because biometric traits are permanent. Texture codes are important approaches for palmprint recognition; unfortunately, no ideal cancelable scheme for palmprint coding has been proposed so far. We propose a novel cancelable palmprint template, called "PalmPhasor", which is a set of binary codes fusing a user-specific tokenised pseudo-random number (PRN) and multi-orientation texture features of PalmCodes. PalmPhasor is extended from one dimension to two dimensions to reduce computational complexity. Two-dimensional (2D) PalmPhasor in the row orientation performs better than one-dimensional (1D) PalmPhasor. Furthermore, 2D PalmPhasor fuses multi-orientation texture features at the score level to enhance recognition performance. The number of texture-feature orientations to fuse is also discussed in this paper. Experimental results on the PolyU palmprint database show the feasibility and efficiency of 2D PalmPhasor enhanced by multi-orientation score-level fusion.
Lu Leng, Jiashu Zhang, Gao Chen, Muhammad Khurram Khan, Ping Bai

Improved Steganographic Embedding Exploiting Modification Direction in Multimedia Communications

Steganography provides secure communications over the Internet with a cover image. However, it is difficult to transfer many messages with small-sized images. We have improved EMD (Exploiting Modification Direction), proposed by Zhang and Wang, to solve this problem. In this paper, we have developed a (2^(n+2) - 1)-ary scheme. Our scheme shows a higher embedding rate, R = log2(2^(n+2) - 1)/n, which is greater than that of the EMD scheme, whose embedding rate is R = log2(2n + 1)/n, for n ≥ 2. The experimental results show that our scheme is able to embed twice as many secret bits in multimedia communications compared to the existing EMD embedding method. Our method has low complexity and achieves higher embedding performance with good perceptual quality compared to earlier art. An experiment verified our proposed data hiding method in multimedia communications.
Cheonshik Kim, Dongkyoo Shin, Dongil Shin, Xinpeng Zhang
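The two embedding rates from the abstract can be compared directly; the short script below simply evaluates the formulas R = log2(2n + 1)/n for original EMD and R = log2(2^(n+2) - 1)/n for the improved scheme:

```python
import math

def emd_rate(n):
    """Original EMD: one of 2n+1 modifications per group of n pixels."""
    return math.log2(2 * n + 1) / n

def improved_rate(n):
    """Improved (2^(n+2) - 1)-ary scheme from the abstract."""
    return math.log2(2 ** (n + 2) - 1) / n

# For n = 2 the improved scheme's rate is roughly 1.95 vs 1.16 bits per pixel,
# consistent with the claim of embedding about twice as many secret bits.
assert improved_rate(2) > emd_rate(2)
assert improved_rate(2) / emd_rate(2) > 1.6
```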

An Ontology Based Information Security Requirements Engineering Framework

Software Requirement Specifications (SRS) evolve frequently to reflect requirements changes during project development. Therefore, they need enhancement to facilitate authoring and reuse. This paper proposes a framework for building the part of an SRS related to information security requirements (ISRs) using ontologies. Such a framework ensures ISR traceability and reuse. The framework uses three kinds of generic ontologies as a solution to this problem: a software requirement ontology, an application domain ontology, and an information security ontology. We propose to enhance the SRS by associating the ISRs with specific entities within the ontologies. We aim to facilitate a semantics-based interpretation of ISRs by restricting their interpretation through the three ontologies. The semantic form improves our ability to create, manage, and maintain ISRs. We anticipate that the proposed framework will be very helpful for requirements engineers in creating and understanding ISRs.
Azeddine Chikh, Muhammad Abulaish, Syed Irfan Nabi, Khaled Alghathbar

Simulating Malicious Users in a Software Reputation System

Today, computer users have trouble separating malicious from legitimate software. Traditional countermeasures such as anti-virus tools mainly protect against truly malicious programs, but the situation is complicated by a "grey zone" of questionable programs that are difficult to classify. We therefore suggest a software reputation system (SRS) to help computer users separate legitimate software from its counterparts. In this paper we simulate the usage of an SRS to investigate the effects that malicious users have on the system. Our results show that malicious users have little impact on the overall system if kept within 10% of the population. However, a coordinated attack against a selected subset of the applications may distort the reputation of those applications. The results also show that there are ways to detect attack attempts at an early stage. Our conclusion is that an SRS could be used as a decision support system to protect against questionable software.
Anton Borg, Martin Boldt, Bengt Carlsson
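A toy version of such a simulation already shows both reported effects. The population sizes, rating noise, and attack model below are invented for illustration and do not reproduce the paper's setup:

```python
import random

def simulate(n_apps=50, n_users=1000, malicious_frac=0.10, target=0, seed=7):
    """Honest users rate every app near its true quality; malicious users
    all give the minimum rating to one target app (a coordinated attack)."""
    random.seed(seed)
    quality = [random.uniform(3.0, 4.5) for _ in range(n_apps)]
    ratings = [[] for _ in range(n_apps)]
    n_bad = int(n_users * malicious_frac)
    for u in range(n_users):
        if u < n_bad:
            ratings[target].append(1.0)           # coordinated attack
        else:
            for a in range(n_apps):
                ratings[a].append(random.gauss(quality[a], 0.3))
    reputation = [sum(r) / len(r) for r in ratings]
    return quality, reputation

quality, rep = simulate()
# Untargeted apps keep reputations close to their true quality...
assert all(abs(quality[a] - rep[a]) < 0.1 for a in range(1, 50))
# ...while the targeted app's reputation is visibly distorted.
assert quality[0] - rep[0] > 0.15
```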

Weak Keys of the Block Cipher SEED-192 for Related-Key Differential Attacks

In this paper, we analyze the block cipher SEED-192 which is an extended version of the ISO/IEC block cipher SEED. According to the result of this paper, there exist weak keys in 8 out of 20 rounds of SEED-192 against related-key differential attacks. This is the first cryptanalytic result for the key schedule of SEED-192.
Jongsung Kim, Jong Hyuk Park, Young-Gon Kim

The Method of Database Server Detection and Investigation in the Enterprise Environment

When a forensic investigation is carried out in an enterprise environment, most of the important data is stored in database servers, and the data stored in them is a very important element of the investigation. There are more than ten kinds of database servers in use, such as SQL Server, MySQL, and Oracle. Methods for investigating each individual database system are important, but this study proposes a single methodology capable of investigating all database systems, based on the common characteristics of database systems. Methods for detecting a server, acquiring data, and investigating the data on the server can be usefully applied to such investigations in the enterprise environment. The methodology is explained by carrying out a forensic investigation on Microsoft's SQL Server database.
Namheun Son, Keun-gi Lee, SangJun Jeon, Hyunji Chung, Sangjin Lee, Changhoon Lee

Sensitive Privacy Data Acquisition in the iPhone for Digital Forensic Analysis

As a diverse range of smartphones has been developed recently, smartphone use has increased dramatically, and smartphones now handle many tasks that used to require a computer. In particular, along with the increase in smartphone use, the number of users of Social Network Services (SNS) has also risen sharply. An SNS saves a variety of information, such as exchanged pictures and videos, voice mails, location sharing, and chat history, as well as simple user data, so the acquisition of data that is useful for digital forensics is achievable. This paper reviews the types of SNS available for the iPhone, a recent example of a highly used smartphone, and studies the data to be collected on the client and the corresponding analysis methods.
Jinhyung Jung, Chorong Jeong, Keunduk Byun, Sangjin Lee

STA 2011 - Data Management Track

An Efficient Processing Scheme for Continuous Queries Involving RFID and Sensor Data Streams

RFID technology is being applied to the wide field of distribution and logistics for item-level tracking of tagged goods. Recently, spoilage and abnormalities of products in Blood Supply Chains (BSC), Cold Chain Management (CCM), and Hazardous Materials (HAZMAT) management have become issues: monitoring an item's health status is as crucial as keeping track of the item. By augmenting passive RFID tags with sensor nodes, we can easily extend existing RFID systems and reduce operational cost. However, this approach suffers from the high overhead of joining two streaming data sources in real time. To address this problem, we propose a continuous query processing scheme to monitor the sensing information of tagged items in real time. The scheme integrates the two data streams, summarizes the integrated stream data using Minimum Bounding Rectangles (MBRs), and compares the summarized data with the registered queries; query processing is then performed only between the integrated stream data and the matched queries. Experimental results show that our method can handle both RFID tag data and sensing information efficiently.
Jeongwoo Park, Kwangjae Lee, Wooseok Ryu, Joonho Kwon, Bonghee Hong
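The MBR-based filtering step can be illustrated as follows: summarize a window of sensor readings by its bounding box, and run tuple-level query processing only when the box overlaps a registered range query. The attribute pair and thresholds are made up for the example:

```python
def mbr(readings):
    """Minimum bounding rectangle over (temperature, humidity) readings in a window."""
    temps = [t for t, _ in readings]
    hums = [h for _, h in readings]
    return (min(temps), min(hums)), (max(temps), max(hums))

def intersects(box, query):
    """Does the window's MBR overlap a registered range query?
    Only overlapping windows need tuple-level query processing."""
    (lo1, lo2), (hi1, hi2) = box
    (qlo1, qlo2), (qhi1, qhi2) = query
    return lo1 <= qhi1 and qlo1 <= hi1 and lo2 <= qhi2 and qlo2 <= hi2

window = [(4.0, 60.0), (5.5, 58.0), (6.0, 61.0)]   # cold-chain readings
box = mbr(window)
assert box == ((4.0, 58.0), (6.0, 61.0))
# Query: "alert on items above 6.5 C" -> the MBR lets us skip this window entirely.
assert not intersects(box, ((6.5, 0.0), (100.0, 100.0)))
```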

Smart Warehouse Modeling Using Re-recording Methodology with Sensor Tag

With the recent growth of RFID technologies, productivity goals have been achieved in the various industry areas where the technologies are used. With this change, a number of RFID logistics systems have been studied, and various standard systems have also been proposed. However, in the storage stage of the logistics process, the environmental conditions of goods, i.e. sensor technology, have not been studied in combination with RFID. Many systems process status information with conventional sensor nodes, but in such systems the nodes are placed at fixed locations and each sensor reports sensing information for a large space. Such systems are therefore inappropriate for environments where a temperature-sensitive product has to be checked thoroughly, down to a tiny section. To solve this problem, as a part of a smart logistics system, this paper suggests a smart storage modeling technology that uses sensor tags. The suggested technology solves the spatial-granularity problem of sensor-node-based storage.
Gwangsoo Lee, Wooseok Ryu, Bonghee Hong, Joonho Kwon

SIMOnt: A Security Information Management Ontology Framework

In this paper, we propose the design of a Security Information Management Ontology (SIMOnto) framework, which utilizes natural language processing and statistical analysis to mine an exhaustive list of concepts and their relationships automatically. Concepts are extracted using TF-IDF and LSA techniques, whereas the relations between them are mined using semantic and co-occurrence-based analyses. The mined concepts and relations are presented to domain experts for validation before the ontology is created using Protégé.
Muhammad Abulaish, Syed Irfan Nabi, Khaled Alghathbar, Azeddine Chikh

Using Bloom Filters for Mining Top-k Frequent Itemsets in Data Streams

In this paper, we study the problem of finding the top-k most frequent itemsets in data streams, possibly restricted to sub-domains of the workspace or to the result of some query. Most previous algorithms are not suitable for this problem under limited memory, for instance when a fixed budget is allocated for each stream summary. We therefore propose a memory-efficient method for mining frequent itemsets from massive, high-speed data streams. Our algorithm, named MineTop-k, uses a Bloom filter structure, which permits the efficient computation and maintenance of the results. We show that our approach is a memory-efficient method for the top-k problem.
Younghee Kim, Kyungsoo Cho, Jaeyeol Yoon, Ieejoon Kim, Ungmo Kim
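A sketch of the general idea: count with a fixed-size, Bloom-filter-style table (a Count-Min sketch) and rank candidates by their estimated counts. This is not the MineTop-k algorithm itself, and the explicit candidate set is kept here only so the demo stays exact; a streaming implementation would bound the candidate set as well:

```python
import heapq

def count_min_topk(stream, k, width=64, depth=4):
    """Fixed memory of width*depth counters, independent of the number of
    distinct items; frequency estimates can only overcount, never undercount."""
    table = [[0] * width for _ in range(depth)]

    def estimate(item):
        return min(table[d][hash((d, item)) % width] for d in range(depth))

    seen = set()
    for item in stream:
        for d in range(depth):
            table[d][hash((d, item)) % width] += 1
        seen.add(item)   # kept only to make this small demo exact
    return heapq.nlargest(k, seen, key=estimate)

stream = ["a"] * 50 + ["b"] * 30 + ["c"] * 20 + list("defghij")
assert set(count_min_topk(stream, 2)) == {"a", "b"}
```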

An RSN Tool: A Test Dataset Generator for Evaluating RFID Middleware

Evaluating RFID middleware is a complex process due to the cost of constructing a test bed involving RFID readers and tags. The input dataset of RFID middleware is a tag event stream from the connected readers. A randomly generated dataset can be considered for stress testing, but it cannot guarantee that the middleware provides correct answers on a given dataset. To enable this, the dataset should be meaningful, representing tags' activities based on business rules. This paper presents the RSN Tool, which generates semantic datasets based on tags' behavior. The basic idea is to virtualize RFID environments from the viewpoint of business processes. To do this, the RSN Tool provides modeling of real-world RFID environments using graph representations, and execution mechanisms for tags' movements under several business rules. The experimental results show that the RSN Tool can create a semantically valid dataset which reflects tags' behavior.
Wooseok Ryu, Joonho Kwon, Bonghee Hong

Design and Implementation of a Virtual Reader for Emulation of RFID Physical Reader

Recently, RFID technology has become an essential technology applied in a wide range of areas such as logistics, manufacturing, and pharmaceuticals. Testing RFID middleware is a critical process for increasing the stability of an RFID system. Since using real RFID readers is expensive, a virtual reader that emulates an RFID reader is needed. To emulate a physical reader, it is necessary to consider the reader's operational characteristics as well as protocol-level emulation. In this paper, we propose a mechanism for a virtual RFID reader which closely emulates a physical reader by considering the communication characteristics between the reader and RFID tags. To do this, we analyze the characteristics of RF communication and parameterize them. Based on these parameters, we present a mechanism for the virtual reader. Experimental results show that our approach emulates a physical RFID reader more faithfully using the set parameters.
Jiwan Lee, Jekwan Park, Wooseok Ryu, Bonghee Hong, Joonho Kwon

Adaptive Optimal Global Resource Scheduling for a Cloud-Based Virtualized Resource Pool

This paper proposes to employ linear programming algorithms for global resource scheduling, reducing the extra cost of power consumption, operational expenditure, and remote resource access in a cloud-based resource pool with concrete networking-environment constraints. The scheduler adapts the problem-modeling granularity and solution to the differing demands of the various stages of a continual process covering the initial construction and subsequent operation of a cloud-based resource pool. In particular, the proposed algorithms take into account resource configuration, service deployment, and real-time load, among other factors, to strike a tradeoff among scheduling performance, response time, and computation cost. Different environment-modeling methods are provided according to the specific location of the networking resource bottleneck. A simple greedy algorithm is provided for a small-scale pool with abundant networking resources.
Lingli Deng, Qing Yu, Jin Peng
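For the small-scale case, a greedy strategy can be sketched as: place services in decreasing order of demand onto the host with the most free capacity. The service names, demands, and capacities are illustrative, and this is only one plausible greedy rule, not necessarily the paper's:

```python
def greedy_place(services, hosts):
    """Greedy sketch for a small pool with abundant network resources:
    assign each service (largest demand first) to the host with the most
    remaining capacity, failing loudly if nothing fits."""
    free = dict(hosts)
    placement = {}
    for name, demand in sorted(services.items(), key=lambda s: -s[1]):
        host = max(free, key=free.get)
        if free[host] < demand:
            raise RuntimeError(f"cannot place {name}")
        placement[name] = host
        free[host] -= demand
    return placement

services = {"web": 4, "db": 8, "cache": 2}
hosts = {"h1": 10, "h2": 8}
plan = greedy_place(services, hosts)
assert plan["db"] == "h1"   # the largest demand lands on the largest host
assert sum(services[s] for s in plan if plan[s] == "h2") <= hosts["h2"]
```

A linear-programming formulation, as the paper proposes for the general case, would instead optimize a global cost objective under the same capacity constraints.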

Maximal Cliques Generating Algorithm for Spatial Co-location Pattern Mining

The spatial co-location pattern represents relationships between spatial features that are frequently located close together, and is one of the most important concepts in spatial data mining. The spatial co-location pattern mining approach based on association analysis, which uses maximal cliques as input data, is general and useful. However, no existing algorithm can generate all maximal cliques from large, dense spatial data sets in polynomial execution time. We propose a polynomial algorithm called AGSMC to generate all maximal cliques from general spatial data sets. It includes an enhancement of an existing materializing method to extract neighborhood relationships between spatial objects, and a tree-type data structure to express maximal cliques. AGSMC constructs the tree-type data structures using the materializing method, and generates maximal cliques by scanning the constructed trees. AGSMC supports spatial co-location pattern mining efficiently, and is also useful for listing the maximal cliques of graphs whose vertices are geometric objects.
Seung Kwan Kim, Younghee Kim, Ungmo Kim
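As a reference point for what AGSMC computes, the classic Bron-Kerbosch algorithm below enumerates all maximal cliques of a neighborhood graph. It is worst-case exponential, which is precisely the behavior a tree-based approach like AGSMC aims to avoid on spatial data; the five-object graph is invented for the example:

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of all maximal cliques of an undirected graph
    given as {vertex: set of neighbors} (reference method, no pivoting)."""
    cliques = []

    def expand(r, p, x):
        if not p and not x:
            cliques.append(sorted(r))   # r is maximal: nothing can extend it
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}

    expand(set(), set(adj), set())
    return sorted(cliques)

# Neighborhood graph of five spatial objects: A, B, C pairwise close,
# D close only to C, and E isolated.
adj = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
    "D": {"C"}, "E": set(),
}
assert maximal_cliques(adj) == [["A", "B", "C"], ["C", "D"], ["E"]]
```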

