
2014 | Book

Information and Communication Technology

Second IFIP TC5/8 International Conference, ICT-EurAsia 2014, Bali, Indonesia, April 14-17, 2014. Proceedings

Editors:  Linawati, Made Sudiana Mahendra, Erich J. Neuhold, A Min Tjoa, Ilsun You

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

This book constitutes the refereed proceedings of the Second IFIP TC 5/8 International Conference on Information and Communication Technology, ICT-EurAsia 2014, with AsiaARES 2014 collocated as a special track on Availability, Reliability and Security, held in Bali, Indonesia, in April 2014. The 70 revised full papers presented were carefully reviewed and selected from numerous submissions. The papers are organized in the following topical sections: applied modeling and simulation; mobile computing; advanced urban-scale ICT applications; semantic web and knowledge management; cloud computing; image processing; software engineering; collaboration technologies and systems; e-learning; data warehousing and data mining; e-government and e-health; biometric and bioinformatics systems; network security; dependable systems and applications; privacy and trust management; cryptography; and multimedia security and dependable systems and applications.

Table of Contents

Frontmatter
Erratum to: Acceptance and Use of Information System: E-Learning Based on Cloud Computing in Vietnam

The affiliation of Thanh D. Nguyen and Dung T. Nguyen was incorrectly rendered as “HCM University of Technology, Vietnam”. The correct designation is “HCMC University of Technology, Vietnam”.

Thanh D. Nguyen, Dung T. Nguyen, Thi H. Cao

Information & Communication Technology-EurAsia Conference 2014, ICT-EurAsia 2014

Applied Modeling and Simulation

Agent-Based Methods for Simulation of Epidemics with a Low Number of Infected Persons

Modeling infectious diseases with a low number of infections is a task that arises often, since most real epidemics affect only a small fraction of the population. Agent-based methods simulate individuals and their behavior; when the model is run, the epidemic arises automatically without being explicitly defined. Surprisingly, it is not easy to produce such epidemics with small infection numbers: model improvements are needed to accomplish that task. In this paper, we present different extensions addressing the person's behavior, the pathogen's behavior and environmental impacts. It turns out that the discussed improvements have different consequences, so they need to be used deliberately to overcome the modeling issues of a specific epidemic in an appropriate and valid way. Moreover, these improvements address the underlying behavior of epidemics and hence can provide deeper insight into the real spreading process of a disease.
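
As a minimal illustration of how an epidemic "arises automatically" from simulated individuals, the sketch below runs a toy agent-based SIR process seeded with only a few infected agents. All parameters (population size, contact count, probabilities) are invented for illustration and are not the paper's model.

```python
# Toy agent-based SIR sketch (illustrative only, not the authors' model).
# The epidemic emerges from local agent interactions; with a low seed
# count it may also die out, which is the modeling difficulty discussed.
import random

random.seed(42)

N, STEPS, CONTACTS = 1000, 100, 5          # hypothetical parameters
INFECT_PROB, RECOVER_PROB = 0.03, 0.10

states = [0] * N                           # 0 = S, 1 = I, 2 = R
for i in random.sample(range(N), 3):       # seed a low number of infections
    states[i] = 1

for t in range(STEPS):
    new_states = states[:]
    for i, s in enumerate(states):
        if s != 1:
            continue
        for j in random.sample(range(N), CONTACTS):   # random contacts
            if states[j] == 0 and random.random() < INFECT_PROB:
                new_states[j] = 1
        if random.random() < RECOVER_PROB:
            new_states[i] = 2
    states = new_states

print("final S/I/R counts:", [states.count(k) for k in (0, 1, 2)])
```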

Florian Miksch, Philipp Pichler, Kurt J. Espinosa, Niki Popper
Cellular Automata Model of Urbanization in Camiguin, Philippines

Monitoring and forecasting land use and land change in the Philippines are necessary for urban planning, agricultural mapping, resource allocation and conservation. Land change studies are important in order to guide policymakers, government and private institutions. In this paper, two-dimensional cellular automata with Markov chains are used to numerically simulate land change in Camiguin, an island province in the Philippines, over a period of 50 years. The preliminary findings of the study identify wooded areas that may be at risk of disappearing. The study also emphasizes the need for updated land cover data in the Philippines in order to improve land use forecasts.
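
A toy sketch of the CA-Markov idea used here: each cell's next land class is drawn from a Markov transition matrix, with the urban transition probability boosted by urbanized neighbors (the cellular-automaton component). Classes and probabilities are hypothetical, not the study's calibrated values.

```python
# CA-Markov toy model: Markov transitions per cell, modulated by the
# urbanization level of the cell's neighborhood. Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
WOOD, FARM, URBAN = 0, 1, 2
# P[i, j] = probability of class i turning into class j per time step.
P = np.array([[0.90, 0.07, 0.03],
              [0.02, 0.88, 0.10],
              [0.00, 0.00, 1.00]])   # urban assumed irreversible

grid = rng.choice([WOOD, FARM], size=(50, 50), p=[0.6, 0.4])
grid[20:25, 20:25] = URBAN           # hypothetical initial urban core

def step(grid):
    out = grid.copy()
    for r in range(grid.shape[0]):
        for c in range(grid.shape[1]):
            p = P[grid[r, c]].copy()
            # CA component: urban neighbors raise urbanization pressure
            nbhd = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            p[URBAN] *= 1 + 2 * (nbhd == URBAN).mean()
            p /= p.sum()
            out[r, c] = rng.choice(3, p=p)
    return out

for _ in range(50):                  # 50 annual steps ~ 50 years
    grid = step(grid)

print("final share wood/farm/urban:",
      [round(float((grid == k).mean()), 2) for k in (WOOD, FARM, URBAN)])
```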

Maria Isabel Beltran, Guido David
A Flexible Agent-Based Framework for Infectious Disease Modeling

Agent-based modeling is a method to model a system by autonomous entities. The proposed framework models single persons with personal behavior, different health states and ability to spread the disease. Upon simulation, the epidemic emerges automatically. This approach is clear and easily understandable but requires extensive knowledge of the epidemic’s background. Such real-world model structures produce realistic epidemics, allowing detailed examination of the transmission process or testing and analyzing the outcome of interventions like vaccinations. Due to changed epidemic propagation, effects like herd immunity or serotype shift arise automatically. Beyond that, a modular structure splits the model into parts, which can be developed and validated separately. This approach makes development more efficient, increases credibility of the results and allows reusability and exchangeability of existing modules. Thus, knowledge and models can be easily and efficiently transferred, for example to compute scenarios for different countries and similar diseases.

Florian Miksch, Christoph Urach, Patrick Einzinger, Günther Zauner

Mobile Computing

Transformation of Digital Ecosystems: The Case of Digital Payments

In digital ecosystems, the fusion relation between business and technology means that the decision on the technical compatibility of the offering is also the decision on how to position the firm relative to the coopetitive relations that characterize business ecosystems. In this article we develop the Digital Ecosystem Technology Transformation (DETT) framework for explaining technology-based transformation of digital ecosystems by integrating theories of business and technology ecosystems. The framework depicts ecosystem transformation as distributed and emergent from micro-, meso-, and macro-level coopetition. The DETT framework constitutes an alternative to the existing explanations of digital ecosystem transformation as the rational management of one central actor balancing ecosystem tensions. We illustrate the use of the framework by a case study of transformation in the digital payment ecosystem.

Stefan Henningsson, Jonas Hedman
Do Personality Traits Work as a Moderator on the Intention to Purchase Mobile Applications? - A Pilot Study

Mobile application markets are transforming into a multibillion-dollar business. Understanding consumers' intention to purchase mobile applications, and the moderating effect of individual differences in personality traits, lets us know more about consumers. The objective of this research is to determine whether personality traits have any moderating effect on the consumer's intention to purchase mobile applications. Our preliminary data for the pilot study consist of 147 participants, office workers and students who use smart devices and live in Bangkok, Thailand. Hierarchical multiple regression was used to analyze the data. The results of our pilot study indicate that personality traits did not show a significant moderating effect. However, the results support the original Theory of Reasoned Action and demonstrate that there are differences in attitudes toward the purchase of mobile applications between office workers and students. This research has therefore thrown up many questions in need of further investigation; as the initial sample was only 147 respondents, a more extensive study will have to follow up our preliminary results.
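
To illustrate the analysis method, here is a two-step hierarchical regression with an interaction (moderation) term using statsmodels on simulated data; the variable names are hypothetical and do not reflect the study's instrument.

```python
# Two-step hierarchical moderation test sketch (synthetic data).
# A moderating effect would show up as a significant interaction term
# and a gain in R^2 from step 1 to step 2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 147
attitude = rng.normal(size=n)                 # hypothetical predictor
extraversion = rng.normal(size=n)             # hypothetical moderator
intention = 0.5 * attitude + rng.normal(size=n)

# Step 1: main effects only.
X1 = sm.add_constant(np.column_stack([attitude, extraversion]))
m1 = sm.OLS(intention, X1).fit()

# Step 2: add the interaction (moderation) term.
X2 = sm.add_constant(np.column_stack(
    [attitude, extraversion, attitude * extraversion]))
m2 = sm.OLS(intention, X2).fit()

print(f"R2 step1={m1.rsquared:.3f}  step2={m2.rsquared:.3f}")
print(f"interaction p-value={m2.pvalues[-1]:.3f}")
```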

Charnsak Srisawatsakul, Gerald Quirchmayr, Borworn Papasratorn
Intelligent Method for Dipstick Urinalysis Using Smartphone Camera

This paper introduces an intelligent method for helping people maintain their health by performing self-urinalysis using a smartphone camera. A color sensing method based on the smartphone camera is designed to determine the value of each reagent strip on a urinalysis dipstick. In dipstick urinalysis, a color change in each reagent strip is examined. This color change results from the reaction of the dipstick to the chemical contents of urine, including pH, protein, glucose, ketones, leucocytes, nitrite, bilirubin, and urobilinogen. Dipstick urinalysis can be performed almost anywhere, even in very remote areas where no medical laboratory can be found, and it is much easier and cheaper than a medical lab visit.

The proposed intelligent method includes a framework for color acquisition using a smartphone camera which covers the color management system, color correction, color matching, and quantification. The use of the RGB and CIELAB color spaces is discussed in the color management part. An automated color constancy approach is introduced to provide better color correction, and a step-wise linear interpolation is introduced to better estimate the urinalysis values.

To implement the proposed method, a closed acquisition box is not required: dipstick capture can be done directly with a smartphone camera in almost any normal lighting condition.
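
As an illustration of the color matching and step-wise linear interpolation steps, the sketch below estimates a reagent value by interpolating between the two nearest reference pad colors; the reference chart values are invented for illustration.

```python
# Step-wise linear interpolation between reference pad colors (sketch).
# The (value, RGB) reference chart below is hypothetical.
import numpy as np

reference = [(0,   (250, 245, 180)),
             (100, (200, 220, 120)),
             (250, (120, 180,  90)),
             (500, ( 60, 120,  70))]

def estimate(rgb):
    rgb = np.asarray(rgb, float)
    # color distance of the measured pad to every reference pad
    d = np.array([np.linalg.norm(rgb - np.asarray(c, float))
                  for _, c in reference])
    i, j = np.argsort(d)[:2]          # two closest reference pads
    w = d[j] / (d[i] + d[j])          # inverse-distance weight
    return w * reference[i][0] + (1 - w) * reference[j][0]

# a color lying between the 100 and 250 reference pads
print(round(estimate((180, 210, 110)), 1))
```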

R. V. Hari Ginardi, Ahmad Saikhu, Riyanarto Sarno, Dwi Sunaryono, Ali Sofyan Kholimi, Ratna Nur Tiara Shanty

Advanced Urban-Scale ICT Applications

Empirical and Computational Issues of Microclimate Simulation

The dynamic variability of weather conditions and complex geometry and semantics of urban domain impose significant constraints on the empirical study of urban microclimate. Thus, numerical modeling is being increasingly deployed to capture the very dynamics of urban microclimate. In this context, the present paper illustrates the basic processes of calibrating and preparing a numerical model for the simulation of the urban microclimate.

Aida Maleki, Kristina Kiesel, Milena Vuckovic, Ardeshir Mahdavi
A Distributed Generic Data Structure for Urban Level Building Data Monitoring

Building a generic data structure that handles building-related data at an urban scale offers certain challenges. Real-world entities must be captured in an environment that allows for the communication of relevant data, and the associated software components must be maintainable and reliable. The present contribution describes efforts to enhance a well-tested building monitoring framework to handle building data at an urban scale. This requires the development of a distributed, generic and extensible data store, as well as the conceptualization of a modular and scalable application architecture. The scalable data store is introduced, as well as the modularization process of the application logic, including data handling and communication routines. Furthermore, the concept of Virtual Datapoints and Virtual Datapoint Collections enables urban entities (for instance buildings) to communicate their status to the system in an effective way.

Stefan Glawischnig, Harald Hofstätter, Ardeshir Mahdavi
Toward a Data-Driven Performance-Guided Urban Decision-Support Environment

The present contribution briefly presents the structure and main features of SEMERGY, a performance-guided multi-objective building optimization environment supported by Semantic Web technologies. It establishes the importance of urban-scale performance considerations, discusses particular features of urban data, and suggests a framework for upscaling the SEMERGY approach towards the development of a data-driven, performance-guided urban decision support environment. The suggested task-based ontology framework can facilitate data and knowledge sharing within the domain of urban performance inquiries.

Neda Ghiassi, Stefan Glawischnig, Ulrich Pont, Ardeshir Mahdavi

Semantic Web and Knowledge Management

Knowledge Management: Organization Culture in Healthcare Indonesia

Nowadays organizations realize that knowledge is an important asset for achieving a competitive advantage. It is therefore necessary for organizations to manage and utilize knowledge as much as possible through knowledge management (KM). The KM concept is not only used in large companies, but has also begun to be adopted by healthcare organizations in an effort to improve the quality of services. Managing knowledge is not easy; there are many factors to consider, one of which is the culture of the organization. Organizational culture is defined as a set of practices, values, and assumptions held by members of the organization that are able to influence its behavior [12, 13]. According to Kim Cameron and Robert Quinn (2006), organizational culture can be examined using the 'Organizational Culture Assessment Instrument' (OCAI), which is based on the competing values framework (CVF). This framework consists of four culture types, i.e. clan, adhocracy, market, and hierarchy. In this research, we found that healthcare organizations in Indonesia have developed a dominant culture style: a mix of market and hierarchy. In addition, we also discuss the relationship of the four culture types with KM, and six dimensions of organizational culture.

Dana Indra Sensuse, Yudho Giri Sucahyo, Siti Rohajawati, Haya Rizqi, Pinkie Anggia
Semantic Versioning of In-Process Scientific Document

The development of scientific documents is an iterative process. Scientific documents go through a continuous informal review phase during the writing process and as a result keep changing, and the informal review changes are only casually recorded. The key issue in maintaining the changes in a scientific document is to maintain the review history of the individual components within the source file at component level. A scientific document is meaningfully organized and can easily be transformed into an ontology. For this purpose, we use a Document Ontology to map each component of the scientific document, and manage changes in this ontology by enhancing an existing technique for semantic repository versioning. In this paper, we explore the document change process using semantic versioning and provide the review comment history along with each change. In addition, we define a usage scenario to present the viability and benefit of our approach. To achieve this, we developed a prototype system which meaningfully tracks changes in the individual components of a scientific document, provides the review comment history along with each change, and lets the author see a progress report at the end of document writing.

Imran Asif, M. Shuaib Karim
Towards Semantic Mashup Tools for Big Data Analysis

Big Data is generally characterized by three V's: volume, velocity, and variety. For the Semantic Web community, the variety dimension could be the most appropriate and interesting aspect to contribute to. Since the real-world use of Big Data is for the data analytics purposes of knowledge workers in different domains, we can consider the mashup approach an effective tool for creating user-generated solutions based on available private/public resources. This paper gives a brief overview and comparison of some semantic mashup tools which can be employed to mash up various data sources in heterogeneous data formats.

Hendrik, Amin Anjomshoaa, A. Min Tjoa

Cloud Computing

Acceptance and Use of Information System: E-Learning Based on Cloud Computing in Vietnam

E-learning is an inevitable trend in the future of education. Although there are several studies of E-learning based on cloud computing, there are few on cloud computing adoption models, and few studies on the adoption of cloud-based E-learning either in Vietnam or worldwide. This study adapts the extended Unified Theory of Acceptance and Use of Technology (UTAUT2) [48] to research the acceptance and use of E-learning based on cloud computing in Vietnam. The elements of facilitating conditions, performance expectancy, effort expectancy, social influence, hedonic motivation, price value and habit influence the intention to use, and the use of, cloud-based E-learning; the results show that seven out of eleven hypotheses are supported. The results will help make the implementation of cloud-based E-learning and learning strategies more successful.

Thanh D. Nguyen, Dung T. Nguyen, Thi H. Cao
Requirements Identification for Migrating eGovernment Applications to the Cloud

Increasing citizens' participation in the use of eGovernment services is one of the main goals that governments all around the world aim to satisfy. As the number of Internet users increases rapidly and the use of ICT services follows the same increase, governments are seeking to take advantage of modern alternative technological solutions in order to design the next generation of eGovernment systems and services. Without any doubt, one of the most advanced solutions, offering many advantages at both the hardware and software levels, is cloud computing. This paper aims at identifying the major functional and non-functional requirements that a traditional eGovernment system should realise for its safe migration into a cloud environment.

Evangelos Gongolidis, Christos Kalloniatis, Evangelia Kavakli
A GPU-Based Enhanced Genetic Algorithm for Power-Aware Task Scheduling Problem in HPC Cloud

In this paper, we consider power-aware task scheduling (PATS) in HPC clouds. Users request virtual machines (VMs) to execute their tasks. Each task is executed on a single VM, and requires a fixed number of cores (i.e., processors), a computing power (million instructions per second, MIPS) for each core, a fixed start time, and non-preemption for its duration. Each physical machine has a maximum resource capacity in processors (cores), and each core has limited computing power. The energy consumption of each placement is measured for cost calculation purposes; the power consumption of a physical machine is in a linear relationship with its CPU utilization. We want to minimize the total energy consumption of the task placements. We propose a genetic algorithm (GA) to solve the PATS problem, developed in two versions: (1) BKGPUGA, adaptively implemented using NVIDIA's Compute Unified Device Architecture (CUDA) framework; and (2) SGA, a serial GA version on a CPU. The experimental results show that the BKGPUGA program executed on a single NVIDIA® Tesla™ M2090 GPU (512 cores) card obtains significant speedups compared to the SGA program executed on an Intel® Xeon™ E5-2630 (2.3 GHz) for the same input problem size. Both versions share the same GA parameters (e.g. number of generations, crossover and mutation probability, etc.), with a very small difference (on the order of 10⁻¹¹) between the fitness values obtained by BKGPUGA and SGA. Moreover, the proposed BKGPUGA program can handle large-scale task scheduling problems with scalable speedup under the limitations of the GPU device (e.g. device memory, number of GPU cores, etc.).
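
To make the linear power model concrete, here is a small Python sketch of the energy term such a GA would minimize. The idle/peak wattages, MIPS capacities and the assumption that unloaded hosts consume nothing are illustrative, not the paper's benchmark values.

```python
# Energy objective sketch for power-aware placement (hypothetical values).
P_IDLE, P_PEAK = 100.0, 250.0        # watts per physical machine

def power(utilization):
    """Linear power model: P(u) = P_idle + (P_peak - P_idle) * u."""
    return P_IDLE + (P_PEAK - P_IDLE) * utilization

def energy(placement, hosts, duration_h):
    """Total energy (Wh) of a task->host placement over a fixed duration.
    placement: list of (task MIPS demand, host index); hosts: MIPS capacity.
    Hosts with no load are assumed switched off."""
    load = [0.0] * len(hosts)
    for task_mips, host in placement:
        load[host] += task_mips
    return sum(power(min(load[h] / hosts[h], 1.0)) * duration_h
               for h in range(len(hosts)) if load[h] > 0)

hosts = [10000.0, 10000.0]           # capacities in MIPS
placement = [(4000.0, 0), (3000.0, 0), (5000.0, 1)]   # a GA candidate
print(f"{energy(placement, hosts, duration_h=1.0):.1f} Wh")
```

A GA over such placements would use this energy value (or its negative) as the fitness to minimize.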

Nguyen Quang-Hung, Le Thanh Tan, Chiem Thach Phat, Nam Thoai

Image Processing

Ball Distance Estimation and Tracking System of Humanoid Soccer Robot

Modern humanoid soccer robots operating in uncontrolled environments need to be vision-based and versatile. This paper proposes an object distance measurement and ball tracking method using a Kalman filter for humanoid soccer, because the ability to accurately track a ball is one of the important features for processing high-definition images. Color-based object detection is used for detecting the ball, while a PID controller is used for controlling the pan-tilt camera system. We also modify the robot's CM-510 controller so that it can communicate efficiently with the main controller. The proposed method is able to determine and estimate the position of a ball and kick the ball correctly with a success rate greater than 90%. We evaluate and present the performance of the system.
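
The following is a minimal constant-velocity Kalman filter of the kind used for ball tracking, assuming a 30 fps camera and hand-picked noise covariances; it is a generic sketch, not the robot's actual filter.

```python
# Constant-velocity Kalman filter sketch for 2-D ball tracking.
# All covariances and the frame rate are illustrative assumptions.
import numpy as np

dt = 1.0 / 30                         # assumed camera frame period
F = np.array([[1, 0, dt, 0],          # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]])
H = np.array([[1, 0, 0, 0],           # only (x, y) is measured
              [0, 1, 0, 0]])
Q = np.eye(4) * 1e-3                  # process noise
R = np.eye(2) * 5e-2                  # measurement noise

x, P = np.zeros(4), np.eye(4)         # initial state and covariance

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the detected ball centre z = (x, y)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [(0.0, 0.0), (0.1, 0.05), (0.2, 0.1)]:   # fake detections
    x, P = kalman_step(x, P, np.array(z))
print("estimated position/velocity:", np.round(x, 3))
```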

Widodo Budiharto, Bayu Kanigoro, Viska Noviantri
Image Clustering Using Multi-visual Features

This paper presents research on clustering an image collection using multiple visual features. The proposed method extracts a set of visual features from each image and performs multi-dimensional K-Means clustering on the whole collection. Furthermore, this work experiments with different numbers of visual features in combination: sets of 2, 3, 5 and 7 visual features, chosen from a total of 8, are used to measure the impact of using more visual features on clustering performance. The results show that the accuracy of multi-visual-feature clustering is promising, but using too many visual features might be a drawback.
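
A minimal sketch of the clustering step with scikit-learn; the five-column feature matrix below stands in for the paper's eight visual features, and the data is random.

```python
# Multi-feature K-Means clustering sketch (synthetic feature matrix).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
# pretend each row is one image: [mean R, mean G, mean B, edge density, ...]
features = rng.random((200, 5))

# scaling matters when concatenating heterogeneous visual features
X = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", np.bincount(labels))
```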

Bilih Priyogi, Nungki Selviandro, Zainal A. Hasibuan, Mubarik Ahmad
A Robust Visual Object Tracking Approach on a Mobile Device

In this paper, we present an approach for tracking an object in video captured on a mobile device. We use a colour-based approach; the performance of many such approaches degrades due to lighting changes and occlusion. To address the issue of lighting changes, our approach makes use of a colour histogram that is generated by accumulating histograms derived from target objects imaged under different conditions. A CAMShift tracking algorithm is applied to the back-projected image to track the target object.

We have tested our approach by tracking an Emergency Exit sign, and the results obtained show that the tracking is robust against lighting changes.
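
A hedged OpenCV sketch of the described pipeline: a hue histogram is accumulated over target crops taken under different lighting, back-projected onto each frame, and tracked with CAMShift. The file names and initial window are placeholders, not the paper's data.

```python
# Accumulated-histogram back-projection + CAMShift sketch (OpenCV).
import cv2
import numpy as np

# Accumulate a hue histogram from several training crops of the target
# (e.g. the Emergency Exit sign under different lighting conditions).
hist = np.zeros((180, 1), np.float32)
for name in ["exit_bright.png", "exit_dim.png"]:     # placeholder files
    crop = cv2.imread(name)
    if crop is None:                                 # skip missing samples
        continue
    hsv = cv2.cvtColor(crop, cv2.COLOR_BGR2HSV)
    hist += cv2.calcHist([hsv], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

cap = cv2.VideoCapture("video.mp4")                  # placeholder video
track_window = (100, 100, 60, 40)                    # initial (x, y, w, h)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # back-project the accumulated histogram onto the current frame
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(backproj, track_window, criteria)
    print("tracked window:", track_window)
```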

Abdulmalik Danlami Mohammed, Tim Morris

Software Engineering

Self-generating Programs – Cascade of the Blocks

When building complex applications the only way not to get lost is to split the application into simpler components. Current programming languages, including object-oriented ones, offer very good utilities to create such components. However, once the components are created, they need to be connected together, and these languages are not a very suitable tool for that. To help with the composition of components we introduce the cascade, a dynamic acyclic structure built from blocks, inspired by the Function Block approach. The cascade generates itself on-the-fly during its evaluation to match requirements specified by the input data, and automatically orders the execution of the individual blocks. Thus the structure of a given cascade does not need to be predefined entirely during its composition/implementation and fixed during its execution, as is usually assumed by most approaches. It also provides real-time and fully automatic visualization of all blocks and their connections to ease debugging and inspection of the application.

Josef Kufner, Radek Mařík
State Machine Abstraction Layer

Smalldb uses a non-deterministic parametric finite automaton combined with Kripke structures to describe the lifetime of an entity, usually stored in a traditional SQL database. It allows one to formally prove some interesting properties of the resulting application, such as user access control, and provides a primary source of metadata for various parts of the application, for example automatically generated user interfaces and documentation.

Josef Kufner, Radek Mařík
Concern Based SaaS Application Architectural Design

With a SaaS application, the tenant can focus on application utilization while the Independent Software Vendor (ISV) is responsible for application deployment, installation, operation and maintenance. Using Aspect Oriented Software Development (AOSD), we propose eight concerns, i.e. configurability, discriminator, measurement, monitoring, tenant management, billing management, performance management, and application management. Those concerns are integrated into a SaaS system architectural design to enhance SaaS operational flexibility and maintainability. As a proof of concept, we developed a SaaS operational environment using Spring and AOP. Two Java applications have been integrated into this environment after tailoring. We have tested the modules, classes and services and then the applications, to demonstrate that the platform is able to run a set of web applications as a SaaS. Using this system, an ISV can easily modify an existing Java application to become part of a SaaS, measure resource usage, and monitor SaaS operation via a dashboard.

Aldo Suwandi, Inggriani Liem, Saiful Akbar

Collaboration Technologies and Systems

Hybridization of Haar Wavelet Decomposition and Computational Intelligent Algorithms for the Estimation of Climate Change Behavior

We propose a hybrid of Haar wavelet decomposition, relevance vector machine, and adaptive linear neural network (HWD-RVMALNN) for the estimation of climate change behavior. The HWD-RVMALNN is able to improve the estimation accuracy of climate change beyond the approaches already discussed in the literature. Comparative simulation results show that the HWD-RVMALNN outperforms cyclical weight/bias rule, Levenberg-Marquardt, resilient back-propagation, support vector machine, and learning vector quantization neural networks in both estimation accuracy and computational efficiency. The model proposed in this study can provide future knowledge of climate change behavior, which policy makers can use in formulating policies that drastically reduce the negative impact of climate change, and in staying alert to possible consequences expected to occur in the future.

Haruna Chiroma, Sameem Abdulkareem, Adamu I. Abubakar, Eka Novita Sari, Tutut Herawan, Abdulsalam Ya’u Gital
An Improved Ant Colony Matching by Using Discrete Curve Evolution

In this paper we present an improved Ant Colony Optimization (ACO) approach for contour matching, which can be used to match 2D shapes. The Discrete Curve Evolution (DCE) technique is used to simplify the extracted contour. In order to find the best correspondence between shapes, the matching process is formulated as a Quadratic Assignment Problem (QAP) and resolved using Ant Colony Optimization (ACO). The experimental results show that Discrete Curve Evolution (DCE) performs better than the previously used Constant Sampling (CS) technique for ACO matching.

Younes Saadi, Eka Novita Sari, Tutut Herawan
A Novel Approach to Gasoline Price Forecasting Based on Karhunen-Loève Transform and Network for Vector Quantization with Voronoid Polyhedral

We propose an intelligent approach to gasoline price forecasting as an alternative to the statistical and econometric approaches typically applied in the literature. The linear nature of statistical and econometric models assumes a normal distribution for input data, which makes them unsuitable for forecasting nonlinear and volatile gasoline prices. A Karhunen-Loève Transform and Network for Vector Quantization (KLNVQ) is proposed to build a model for forecasting gasoline prices. Experimental findings indicate that the proposed KLNVQ outperforms the Autoregressive Integrated Moving Average, multiple linear regression, and vector autoregression models. The KLNVQ model constitutes an alternative for the forecasting of gasoline prices and adds to the methods proposed in the literature. Accurate forecasting of gasoline prices has implications for the formulation of policies that can help alleviate the hardship of gasoline shortages.

Haruna Chiroma, Sameem Abdulkareem, Adamu I. Abubakar, Eka Novita Sari, Tutut Herawan

E-Learning

Enhancing the Use of Digital Model with Team-Based Learning Approach in Science Teaching

This study describes the introduction of digital models and team-based learning (TBL) for teaching science; in this case, the teaching of the magnetic induction portion of a physics class. This new approach required students' active construction of knowledge both as individuals and as a team. Students were asked to begin their studies by viewing digital models in videos through an online learning portal. Camtasia Studio was utilized to create videos containing class material and experiments along with the teacher's audio explanation. The TBL approach was implemented as the instructional strategy during in-class sessions. A portion of the classroom time was spent ensuring that students mastered the class material, and the vast majority of class time was used for team assignments that focused on problem-based learning and simulating complex questions that students would face as the course developed. The utilization of digital models and TBL improved the students' ability to learn independently and to present their ideas coherently, transforming them into more engaged, independent learners, not just in science learning but also in their overall academic experience.

Bens Pardamean, Teddy Suparyanto, Suyanta, Eryadi Masli, Jerome Donovan
Improving Reusability of OER
Educational Patterns for Content Sharing

The effect of Open Educational Resources (OER) on Higher Education is still disappointing. (Re)use of materials, which can be accessed and employed freely, has not developed in such a way that it has changed the attitudes and behavior of teachers. After analyzing several aspects of the problem, the article focuses on educational reasons to improve this situation. It is argued that striving for context-free learning objects is heading in the wrong direction. The author proposes to link OER not only with an educational taxonomy of learning outcomes but also with typical patterns of educational scenarios.

Peter Baumgartner
Online Learning for Two Novel Latent Topic Models

Latent topic models have proven to be an efficient tool for modeling multi-topic count data. One of the most well-known models is latent Dirichlet allocation (LDA). In this paper we propose two improvements to LDA using generalized Dirichlet and Beta-Liouville prior assumptions. Moreover, we apply an online learning approach to both introduced models. We choose a challenging application, namely natural scene classification, for comparison and evaluation purposes.
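
As a sketch of the online-learning idea, scikit-learn's mini-batch (online) LDA is shown below on synthetic count data. The paper's generalized Dirichlet and Beta-Liouville priors are not available there, so standard LDA merely stands in to illustrate the streaming updates.

```python
# Online (mini-batch) LDA sketch on synthetic count data.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(3)
vocab_size, n_topics = 100, 5
lda = LatentDirichletAllocation(n_components=n_topics,
                                learning_method="online",
                                random_state=0)

for _ in range(10):                   # stream of mini-batches
    batch = rng.poisson(0.5, size=(32, vocab_size))   # fake count vectors
    lda.partial_fit(batch)            # incremental (online) update

doc = rng.poisson(0.5, size=(1, vocab_size))
print("topic mixture:", np.round(lda.transform(doc), 2))
```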

Ali Shojaee Bakhtiari, Nizar Bouguila

Data Warehousing and Data Mining

An Infinite Mixture Model of Generalized Inverted Dirichlet Distributions for High-Dimensional Positive Data Modeling

We propose an infinite mixture model for the clustering of positive data. The proposed model is based on the generalized inverted Dirichlet distribution, which has a more general covariance structure than the inverted Dirichlet that has recently been widely used in several machine learning and data mining applications. The proposed mixture is developed in an elegant way that allows simultaneous clustering and feature selection, and is learned using a fully Bayesian approach via Gibbs sampling. The merits of the proposed approach are demonstrated using a challenging application, namely image categorization.

Nizar Bouguila, Mohamed Al Mashrgy
On If-Then Multi Soft Sets-Based Decision Making

Soft set theory, a new mathematical tool for dealing with uncertainties first introduced by Molodtsov, has experienced rapid growth. Various applications of soft sets for the purpose of decision-making have been shown by several researchers. Most of the studies presented show the role of soft sets as a tool for collecting the various attributes a person needs to determine which decision will be taken. In this paper, we show how soft sets can play a role in decisions made by a person based on a history of decisions made earlier and used as a reference for the next decision. Therefore, we introduce (if-then) multi soft sets as a development of the application of soft sets, stated in the form if (antecedent) then (consequence). The antecedent and consequence are derived from several previous decisions that people have made when using a soft set as a tool to help them make a decision.

R. B. Fajriya Hakim, Eka Novita Sari, Tutut Herawan
Predicting Size of Forest Fire Using Hybrid Model

This paper outlines a hybrid data mining approach to predicting the size of forest fires using meteorological and Fire Weather Index (FWI) variables such as Fine Fuel Moisture Code (FFMC), Duff Moisture Code (DMC), Drought Code (DC), Initial Spread Index (ISI), temperature, Relative Humidity (RH), wind and rain. The hybrid model is developed with clustering and classification approaches. Fuzzy C-Means (FCM) is used to cluster the historical variables, and the clustered data are then used as inputs to Back-Propagation Neural Network classification. Records with a fire area size greater than zero are clustered using FCM into two categorical clusters, Light Burn and Heavy Burn, while records with a fire area of zero are labeled No Burn Area. A Back-Propagation Neural Network (BPNN) is trained on these data to classify the output (burn area) into three categories: No Burn Area, Light Burn and Heavy Burn. The experiment shows promising results, classifying the size of forest fires with a confusion-matrix accuracy of around 97.50% and a Cohen's Kappa of 0.954. This research also compares the performance of the proposed model with other classification methods such as SVM, Naive Bayes, DCT Tree, and K-NN, and shows that BPNN has the best performance.
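
A compact sketch of the cluster-then-classify pipeline on synthetic data: a minimal one-dimensional fuzzy C-means produces Light/Heavy Burn labels for the nonzero burned areas, and scikit-learn's MLPClassifier stands in for the back-propagation network. None of the numbers come from the paper's dataset.

```python
# FCM labeling followed by neural-network classification (sketch).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)

def fuzzy_cmeans(x, c=2, m=2.0, iters=100):
    """Minimal 1-D fuzzy C-means; returns centres and memberships."""
    u = rng.random((len(x), c))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = u ** m
        centres = (w * x[:, None]).sum(0) / w.sum(0)
        d = np.abs(x[:, None] - centres[None, :]) + 1e-9
        u = 1.0 / d ** (2 / (m - 1))          # standard FCM update
        u /= u.sum(axis=1, keepdims=True)
    return centres, u

area = rng.gamma(2.0, 5.0, size=300)          # synthetic burned areas > 0
centres, u = fuzzy_cmeans(area)               # Light vs Heavy Burn clusters
labels = u.argmax(axis=1)                     # crisp cluster labels

weather = rng.random((300, 8))                # stand-ins for FFMC, DMC, ...
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(weather, labels)
print("training accuracy:", clf.score(weather, labels))
```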

Guruh Fajar Shidik, Khabib Mustofa

E-Government and E-Health

Understanding eParticipation Services in Indonesian Local Government

This study aims at understanding how local governments in a developing country, in this case Indonesia, implement and manage eParticipation services. In doing so, we combine institutional theory and stakeholder theory to build a sharper analytical lens. From an interpretive case study in the city of Yogyakarta, we reveal the institutionalization process of the services since their inception and identify the major stakeholders and their salience. Based on our findings, we propose implications for practice and suggest implications for further research. Future work, based on a multiple case strategy including several eParticipation cases from other parts of Indonesia, will further explore the findings reported here.

Fathul Wahid, Øystein Sæbø
Document Tracking Technology to Support Indonesian Local E-Governments

Currently, many information and communication technologies are used to make electronic government systems (e-gov) more effective and efficient. At the practical level, however, some specific issues still need to be handled, for example how to manage, handle and track electronic documents in government institutions in a way that also supports frequent business process modification. In this paper, we propose integrating document tracking technology into e-government business processes to improve the efficiency and effectiveness of e-government applications in Indonesia. We offer three integrated generic models of a document tracking system. The models have been applied in a pilot project at an administration office and in a city that is enthusiastic about applying a complete e-gov system. We expect that this solution approach can be applied in other local e-governments in Indonesia. Initial results show that the document tracking prototype application can enhance productivity, clarify and simplify the business process, and support process measurement, such that it can be used to improve the quality of local e-government services.

Wikan Sunindyo, Bayu Hendradjaya, G. A. Putri Saptawati, Tricya E. Widagdo
Modern Framework for Distributed Healthcare Data Analytics Based on Hadoop

Evolution in the fields of IT, electronics and networking has resulted in enhanced connectivity and computation capabilities, and the proliferation of miniaturized devices has paved the way for the Body Area Network, a network of lightweight wearable sensor nodes that sense human body functions. Healthcare systems have been going through cycles of modernization with the advent of IT systems in the field of medical sciences. Modern healthcare informatics systems produce a lot of data that emerge from sensors; yet even though many such systems exist and produce volumes of data, these solutions exist in silos. Existing healthcare IT systems gather multivariate medical data about patients in electronic format, process them, and store them in an RDBMS. Inferring knowledge from such systems is tedious. This paper proposes a framework for modernizing healthcare informatics systems. The proposed framework is based on the Apache Hadoop platform, which is open source, and its implementation is distributed in nature, as it is deployable at various healthcare centers in different geographic locations.

P. Vignesh Raja, E. Sivasankar

Biometric and Bioinformatics Systems

A Bioinformatics Workflow for Genetic Association Studies of Traits in Indonesian Rice

Asian rice is a staple food in Indonesia and worldwide, and its production is essential to food security. Cataloging and linking genetic variation in Asian rice to important traits, such as quality and yield, is needed in developing superior varieties of rice. We develop a bioinformatics workflow for quality control and data analysis of genetic and trait data for a diversity panel of 467 rice varieties found in Indonesia. The bioinformatics workflow operates using a back-end relational database for data storage and retrieval. Quality control and data analysis procedures are implemented and automated using the whole genome data analysis toolset, PLINK, and the [R] statistical computing language. The 467 rice varieties were genotyped using a custom array (717,312 genotypes total) and phenotyped for 12 traits in four locations in Indonesia across multiple seasons. We applied our bioinformatics workflow to these data and present prototype genome-wide association results for a continuous trait - days to flowering. Two genetic variants, located on chromosome 4 and 12 of the rice genome, showed evidence for association in these data. We conclude by outlining extensions to the workflow and plans for more sophisticated statistical analyses.
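
The core statistical step such a workflow automates (via PLINK and [R]) is a per-variant test of the trait against genotype dosage. The sketch below simulates this on random genotypes; the sample counts and the causal SNP are invented for illustration.

```python
# Per-variant association sketch: trait ~ genotype dosage (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n_varieties, n_snps = 467, 1000
geno = rng.integers(0, 3, size=(n_varieties, n_snps))   # 0/1/2 dosages
# simulate a continuous trait with one causal variant (SNP 42, invented)
days_to_flowering = 90 + 2.0 * geno[:, 42] + rng.normal(0, 5, n_varieties)

pvals = np.empty(n_snps)
for j in range(n_snps):
    # simple linear regression of the trait on each SNP's dosage
    slope, intercept, r, p, se = stats.linregress(geno[:, j],
                                                  days_to_flowering)
    pvals[j] = p

print("top hit: SNP", pvals.argmin(), "p =", pvals.min())
```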

James W. Baurley, Bens Pardamean, Anzaludin S. Perbangsa, Dwinita Utami, Habib Rijzaani, Dani Satyawan
Fuzzy Logic Weight Estimation in Biometric-Enabled Co-authentication Systems

In this paper, we introduce a co-authentication system that combines a password with biometric features (face, voice) in order to improve the false reject rate (FRR) and false accept rate (FAR) of an Android smartphone authentication system. Since system performance is often affected by external conditions and variability, we also propose a fuzzy logic weight estimation method which takes three inputs (password complexity, face image illuminance and audio signal-to-noise ratio) to automatically adjust the weight of each factor for improved security. The proposed method is evaluated using the Yale [5] and Voxforge [1] databases. The experimental results are very promising: the FAR is 0.4% and the FRR almost 0% when the user remembers his password.
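
A hedged sketch of the fuzzy weighting idea: triangular memberships score how favorable each factor's conditions are, and the normalized scores become the fusion weights. All thresholds and membership shapes are invented for illustration.

```python
# Fuzzy weight estimation sketch (hypothetical membership functions).
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to c; returns a degree in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def factor_weights(pw_complexity, face_lux, audio_snr_db):
    # "good condition" degree per factor, each in [0, 1]
    scores = {
        "password": tri(pw_complexity, 0, 10, 20),   # entropy-like score
        "face":     tri(face_lux, 50, 400, 1000),    # illuminance (lux)
        "voice":    tri(audio_snr_db, 0, 30, 60),    # SNR (dB)
    }
    total = sum(scores.values()) or 1.0
    return {k: v / total for k, v in scores.items()}  # weights sum to 1

# e.g. a dim face image lowers the face weight relative to voice
print(factor_weights(pw_complexity=12, face_lux=150, audio_snr_db=25))
```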

Van Nhan Nguyen, Vuong Quoc Nguyen, Minh Ngoc Binh Nguyen, Tran Khanh Dang
The Human Face of Mobile

As the landscape around Big Data continues to evolve exponentially, the "big" facet of Big Data is no longer the number one priority of researchers and IT professionals. The race has recently become more about how to sift through torrents of data to find the hidden diamond and engineer a better, smarter and healthier world. The ease with which our mobile captures daily data about ourselves makes it an exceptionally suitable means for ultimately improving the quality of our lives and gaining valuable insights into our affective, mental and physical state. This paper takes the first exploratory step in this direction by using the mobile to process and analyze the "digital exhaust" it collects, to automatically recognize our emotional states and respond to them in the most effective and "human" way possible. To achieve this, we treat the technical, psycho-somatic, and cognitive aspects of emotion observation and prediction, and repackage all these elements into a mobile multimodal emotion recognition system that can be used on any mobile device.

Hajar Mousannif, Ismail Khalil

The 2014 Asian Conference on Availability, Reliability and Security, AsiaARES 2014

Network Security

A Real-Time Intrusion Detection and Protection System at System Call Level under the Assistance of a Grid

In this paper, we propose a security system at system call level, named the Intrusion Detection and Protection System (IDPS for short), which creates personal profiles for users to keep track of their usage habits as forensic features, and determines whether a legally logged-in user is the owner of the account by comparing his/her current computer usage behaviors with the usage habits collected in the account holder's personal profile. The IDPS uses a local computational grid to detect malicious behaviors in real time. Our experimental results show that the IDPS's user identification accuracy is 93%, its accuracy in detecting internal malicious attempts is up to 99%, and its response time is less than 0.45 sec, implying that it can protect a system from internal attacks effectively and efficiently.

Fang-Yie Leu, Yi-Ting Hsiao, Kangbin Yim, Ilsun You
A Hybrid System for Reducing Memory and Time Overhead of Intrusion Detection System

With the growing use of the internet worldwide, internet security becomes more and more important. There are many techniques available for intrusion detection, but various issues remain to be improved, such as detection rate, false positive rate, memory overhead, and time overhead. In this paper, a new hybrid system for network intrusion detection using principal component analysis and C4.5 is presented, which has a good detection rate and keeps the false positive and false negative rates at an acceptable level for different types of network attacks. In particular, this system effectively reduces the memory and time overhead of building the intrusion detection model. These claims are verified by experimental results on the KDD Cup 99 benchmark network intrusion detection dataset.
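
A minimal sketch of the hybrid on synthetic data, with scikit-learn's DecisionTreeClassifier standing in for C4.5 and random features replacing KDD Cup 99; it shows how PCA shrinks the feature space before the tree is induced, which is where the memory and time savings come from.

```python
# PCA + decision-tree hybrid IDS sketch (synthetic stand-in for KDD'99).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(9)
X = rng.random((5000, 41))                  # 41 features, as in KDD Cup 99
y = (X[:, 0] + X[:, 5] > 1.0).astype(int)   # toy "attack" label

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
# PCA reduces 41 features to 10 components before tree induction.
model = make_pipeline(PCA(n_components=10),
                      DecisionTreeClassifier(random_state=0))
model.fit(Xtr, ytr)
print("test accuracy:", round(model.score(Xte, yte), 3))
```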

Zhi-Guo Chen, Sung-Ryul Kim
LDFGB Algorithm for Anomaly Intrusion Detection

With the development of internet technology, more and more risks are appearing on the internet, and internet security has become an important issue. Intrusion detection technology is an important part of internet security, and it is important to have a fast and effective method to detect both known and unknown attacks. In this paper, we present a graph-based intrusion detection algorithm using an outlier detection method based on the local deviation factor (LDFGB). This algorithm has better detection rates than a previous clustering algorithm; moreover, it is able to detect clusters of any shape while keeping a high detection rate for known and unknown attacks. The LDFGB algorithm uses a graph-based clustering algorithm (GB) to obtain an initial partition of the dataset, which depends on a cluster precision parameter; the outlier detection algorithm is then used to further process the results of the graph-based clustering. This measure is effective in improving the detection and false positive rates.

Shang-nan Yin, Zhi-guo Chen, Sung-Ryul Kim

Dependable Systems and Applications

Assets Dependencies Model in Information Security Risk Management

Information security risk management is a fundamental process conducted to secure an organization's information assets. It usually involves asset identification and valuation, threat analysis, risk analysis and the implementation of countermeasures. Correct asset valuation is the basis for accurate risk analysis, but there is a lack of work describing the valuation process with respect to dependencies among assets. In this work we propose a method for inspecting asset dependencies, based on the common security attributes: confidentiality, integrity and availability. Our method should yield more detailed outputs from the risk analysis and therefore make the process more objective.

Jakub Breier, Frank Schindler
Creation of Assurance Case Using Collaboration Diagram

Recently, serious failures of complex IT systems have become a social problem. The assurance case is attracting attention as a technique to assure the dependability of critical systems. We have proposed d* framework, an extended assurance case notation based on a network of dependable actors. In this paper, an assurance case creation procedure that derives the assurance case from a collaboration diagram is proposed, and a case study is performed using this procedure. In the case study, the result is described in d* framework.

Takuya Saruwatari, Shuichiro Yamamoto
Using Model Driven Security Approaches in Web Application Development

With the rise of Model Driven Engineering (MDE) as a software development methodology, which increases productivity and, supported by powerful code generation tools, allows a less error-prone implementation process, the idea of modeling security aspects during the design phase of the software development process was first suggested by the research community almost a decade ago. While various approaches to Model Driven Security (MDS) have been proposed over the years, it is still unclear how these concepts compare to each other and whether they can improve the security of software projects. In this paper, we provide an evaluation of current MDS approaches based on a simple web application scenario and discuss the strengths and limitations of the various techniques, as well as the practicability of MDS for web application security in general.

Christoph Hochreiner, Zhendong Ma, Peter Kieseberg, Sebastian Schrittwieser, Edgar Weippl
An Evaluation of Argument Patterns Based on Data Flow

In this paper, we introduce some of the problem areas that software engineers are susceptible to during the creation of assurance cases, based on the author's experience teaching assurance cases. To mitigate these problems, assurance case patterns based on data flow diagrams are proposed, which help engineers develop assurance cases by reusing those patterns. An evaluation of applying the assurance case patterns to develop an assurance case for a smart card application system is also presented.

Shuichiro Yamamoto
A Design of Web Log Integration Framework Using NoSQL

Web services are a representative software technology for information communication, currently used to create dynamic system environments configured to fulfill users' needs. The log data generated during service provision has therefore become a significant source of basic data in web service research. The development of cloud computing technology has resulted in centralized points of data generation and in data enlargement, and research is now being conducted on creating information by collecting, processing and converting this flood of data, and on obtaining various new items of information. Against this backdrop, the collection, storage and analysis of web log data in a conventional RDBMS may be inadequate for processing the enlarged log data. This research proposes a framework that integrates web logs for storage using HBase, a cloud computing-based NoSQL repository. In addition, data validation must be completed in the pre-processing step when collecting web logs; the validated log is stored in a modeling structure that takes the features of web logs into account. According to the results, the introduction of a NoSQL system integrates the enlarged log data more efficiently, and comparisons with an existing RDBMS in terms of data processing performance showed that the NoSQL-based database had superior performance.

Huijin Jeong, Junho Choi, Chang Choi, Ilsun You, Pankoo Kim
Motivation-Based Risk Analysis Process for IT Systems

Information security management is one of the most important issues to be resolved, and risk analysis is the key element of this process. The standards (ISO/IEC 27000, ISO/IEC 31000) are based on a complex and time-consuming process of defining vulnerabilities and threats for all organisation assets. In this article we present a new approach to analysing the risk of an attack on information systems. We focus on a human factor, motivation, and show its relation to hacker profiles as well as impacts. We first introduce a new model of motivation-based risk analysis, and then describe a case study illustrating our approach on a simple set of organisation processes.

Agata Niescieruk, Bogdan Ksiezopolski
Formalizing Information Flow Control in a Model-Driven Approach

Information flow control is a promising formal technique to guarantee the privacy and desired release of our data in an always connected world. However, it is not easy to apply in practice. IFlow is a model-driven approach that supports the development of distributed systems with information flow control. A system is modeled with UML and automatically transformed into a formal specification as well as Java code. This paper shows how this specification is generated and presents several advantages of a model-driven approach to information flow control.

Kurt Stenzel, Kuzman Katkalov, Marian Borek, Wolfgang Reif
Security Assessment of Computer Networks Based on Attack Graphs and Security Events

Security assessment is an important task in the operation of modern computer networks. The paper suggests a security assessment technique based on attack graphs which can be implemented in contemporary SIEM systems. It is based on a taxonomy of security metrics and on different techniques for calculating these metrics from data about current events. The proposed metrics form the basis for security awareness and reflect the current security situation, including the development of attacks, attack sources and targets, and attacker characteristics. The suggested technique is demonstrated in a case study.

Igor Kotenko, Elena Doynikova
A Pipeline Optimization Model for QKD Post-processing System

Quantum key distribution (QKD) technology can create unconditionally secure keys between communicating parties, but its key generation rate cannot satisfy high-speed network applications. As an essential part of a QKD system, the design of the post-processing system has a huge influence on the key generation rate. To address the challenges of real-time and high-speed processing requirements, we propose a pipeline optimization model for QKD post-processing systems. With the variable-granularity division policies in our model, a high-speed pipelined QKD post-processing system can be designed under the constraints of limited computing and storage resources and of security requirements. Simulation results show that for a GHz BB84-protocol-based QKD system, the secure key generation rate of our post-processing system can reach 600 kbps at a distance of 25 km. We believe that our work can provide useful guidance for the design of future QKD systems.

Jianyi Zhou, Bo Liu, Baokang Zhao, Bo Liu

Privacy and Trust Management

Aggregation of Network Protocol Data Near Its Source

In network anomaly and botnet detection, the main source of input for analysis is the network traffic, which has to be transmitted from its capture source to the analysis system. High-volume data sources often generate traffic volumes that prohibit passing bulk data directly into researchers' hands.

In this paper we achieve a reduction in volume of transmitted test data from network flow captures by aggregating raw data using extraction of protocol semantics. This is orthogonal to classic bulk compression algorithms. We propose a formalization for this concept called Descriptors and extend it to network flow data.

A comparison with common bulk data file compression formats is given for full Packet Capture (PCAP) files; Descriptors achieve a size reduction of 4 to 5 orders of magnitude.

Our approach aims to be compatible with Internet Protocol Flow Information Export (IPFIX) and other standardized network flow data formats as possible inputs.

Marcel Fourné, Kevin Stegemann, Dominique Petersen, Norbert Pohlmann
The Mediating Role of Social Competition Identity Management Strategy in the Predictive Relationship between Susceptibility to Social Influence, Internet Privacy Concern, and Online Political Efficacy

There is little psychotechnological research on politics in Indonesia, even though Indonesia will enter a "Year of Politics" with the General Election to be held in 2014. This research examines the hypothesis of a predictive relationship, at the individual level, between susceptibility to social influence and internet privacy concern (as predictor variables) and online political efficacy (as the dependent variable). The research hypothesizes that the relationship between the variables is mediated by an online identity management strategy in the form of social competition. A questionnaire was employed to collect the variable data. Participants were 214 undergraduate students of Bina Nusantara University, Jakarta (106 males, 108 females). The results show that five of the seven proposed hypotheses are supported by the empirical data. Implications of the results for the development of online systems for political activities are discussed at the end of the article.

Juneman Abraham, Murty Magda Pane
Formal Security Analysis and Performance Evaluation of the Linkable Anonymous Access Protocol

The introduction of e-Health applications has not only brought benefits, but also raised serious concerns regarding the security and privacy of health data. The increasing demand for access to health data highlights critical questions and challenges concerning the confidentiality of electronic patient records and the efficiency of accessing these records. The aim of this paper is therefore to provide secure and efficient access to electronic patient records. We propose a novel protocol called the Linkable Anonymous Access protocol (LAA). We formally verify and analyse the protocol against security properties such as secrecy and authentication using the Casper/FDR2 verification tool. In addition, we have implemented the protocol in Java to evaluate its performance. Our formal security analysis and performance evaluation show that the LAA protocol supports secure access to electronic patient records without compromising performance.

Rima Addas, Ning Zhang
On Safety of Pseudonym-Based Location Data in the Context of Constraint Satisfaction Problems

Pseudonymization is a promising technique for publishing a trajectory location data set in a privacy-preserving way. However, it is not trivial to determine whether a given data set is safely publishable against an adversary with partial knowledge about users' movements. We therefore formulate this safety decision problem within the framework of constraint satisfaction problems (CSPs) and evaluate its performance on a real location data set. We show that our approach with an existing CSP solver outperforms a polynomial-time verification algorithm designed specifically for this safety problem.
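
A toy version of the CSP formulation using the python-constraint package: the adversary's partial knowledge becomes constraints over pseudonym-to-user assignments, and publication is unsafe if some pseudonym has a unique consistent assignment. The users, pseudonyms and knowledge below are invented, and the encoding is a simplification of the paper's.

```python
# Pseudonym-safety as a CSP (toy encoding with python-constraint).
from constraint import AllDifferentConstraint, Problem

users = ["alice", "bob", "carol"]
pseudonyms = ["p1", "p2", "p3"]

problem = Problem()
for p in pseudonyms:
    problem.addVariable(p, users)          # each pseudonym maps to a user
problem.addConstraint(AllDifferentConstraint())

# adversary's partial knowledge (invented): alice was not at p1's
# observed location, and bob must be behind p1 or p2
problem.addConstraint(lambda u: u != "alice", ("p1",))
problem.addConstraint(lambda a, b: "bob" in (a, b), ("p1", "p2"))

solutions = problem.getSolutions()
print("consistent assignments:", len(solutions))
for p in pseudonyms:
    candidates = {s[p] for s in solutions}
    # a singleton set means the pseudonym is re-identified: unsafe
    print(p, "->", candidates)
```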

Tomoya Tanjo, Kazuhiro Minami, Ken Mano, Hiroshi Maruyama
Advanced Techniques for Computer Sharing and Management of Strategic Data

This paper presents some advances in computer methods used for the encryption and division of confidential data, as well as modern approaches to the management of divided information. Computer techniques for secret information sharing aim to secure information against disclosure to unauthorized persons. The paper presents algorithms for information division and sharing on the basis of biometric or personal features. The development of computer techniques for classified information sharing should also be useful in the process of shared information distribution and management. For this purpose, a new approach to information management based on cognitive information systems is presented.

Marek R. Ogiela, Lidia Ogiela, Urszula Ogiela
On the Modelling of the Computer Security Impact on the Reputation Systems

Reputation systems are an important factor in building trust in virtual communities. In this article we introduce a reputation module for the Quality of Protection Modelling Language which allows the reputation system to be represented as part of the protocol, where all operations and communication steps can be consistently modelled. Owing to the proposed approach, reputation systems can be formally specified, and the computer security impact can be considered as a factor in the reputation system's metrics. Finally, we model and analyse a case study of the eBay reputation system with a modification referring to the computer security impact.

Bogdan Ksiezopolski, Adam Wierzbicki, Damian Rusinek

Cryptography

Efficient Variant of Rainbow without Triangular Matrix Representation

Multivariate Public Key Cryptosystems (MPKC) are one of the candidates for post-quantum cryptography. Rainbow is an MPKC digital signature scheme with relatively efficient signature generation and verification processes. However, the size of an MPKC key is substantially larger than that of an RSA cryptosystem at the same security level. In this paper, we propose a variant of Rainbow that has a smaller secret key. The smaller secret key results from a description of the quadratic polynomials appearing in the secret key that differs from the original Rainbow. In addition, our scheme improves the efficiency of Rainbow's signature generation: the secret key is reduced in size by about 40% and the signature generation is sped up by about 30% at the 100-bit security level.

Takanori Yasuda, Tsuyoshi Takagi, Kouichi Sakurai
Efficient Lattice HIBE in the Standard Model with Shorter Public Parameters

The concept of the identity-based cryptosystem was introduced by Adi Shamir in 1984. In this paradigm, a user's public key can be any string that uniquely identifies the user. The task of the Public Key Generator (PKG) in IBE is to authenticate the identity of an entity, generate the private key corresponding to that identity, and finally transmit the private key securely to the entity. In a large network the PKG has a burdensome job, so the notion of Hierarchical IBE (HIBE) was introduced in [11,12] to distribute the workload by delegating the capability of private key generation and identity authentication to lower-level PKGs. At Eurocrypt 2010, Agrawal et al. [1] presented an efficient lattice-based HIBE scheme that is secure in the standard model under the weaker selective-ID security notion. Based on [1], Singh et al. [18] constructed an adaptive-ID secure HIBE with short public parameters, yet the public parameters remain very large (a total of l′′ × h + 2 matrices). In this paper, we reduce the size of the public parameters from l′′ × h + 2 matrices to l′′ + 2 matrices using Chatterjee and Sarkar's technique [8] and the blocking technique [7], where h is the number of levels in the HIBE.
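To illustrate the scale of the saving with hypothetical parameters: for l′′ = 16 and a hierarchy of depth h = 10, the public parameters shrink from 16 × 10 + 2 = 162 matrices to 16 + 2 = 18 matrices, i.e. a reduction by roughly a factor of h.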

Kunwar Singh, C. Pandu Rangan, A. K. Banerjee
Security Analysis of Public Key Encryptions Based on Conjugacy Search Problem

We report a fatal flaw in the CSP-ElG scheme, one of the public key encryption schemes based on the conjugacy search problem proposed at INSCRYPT 2010: as it stands, it does not satisfy the claimed security property. We also discuss imperfections in the security proofs of the other proposals, the CSP-hElG and CSP-CS schemes. Following the technique given by Gennaro et al. for smoothing the distribution of DH-transform outputs, we introduce a computational assumption related to monoid actions and fix the CSP-ElG scheme using a universal hash function and the leftover hash lemma.
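For reference, the leftover hash lemma invoked in the fix states, in its standard form, that if H is drawn uniformly from a universal family of hash functions into \{0,1\}^m and the input X has min-entropy k, then the statistical distance satisfies

\Delta\big((H, H(X)),\, (H, U_m)\big) \;\le\; \tfrac{1}{2}\, 2^{-(k-m)/2},

so applying a universal hash to sufficiently unpredictable outputs, such as the smoothed DH-transform outputs above, yields near-uniform keys.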

Akihiro Yamamura
Cryptanalysis of Unidirectional Proxy Re-Encryption Scheme

At Eurocrypt 1998, Blaze, Bleumer and Strauss [7] presented a new primitive called Proxy Re-Encryption (PRE). This primitive allows a semi-trusted proxy to transform a ciphertext for Alice (the delegator) into a ciphertext for Bob (the delegatee) without learning the message. Ateniese et al. [6] introduced master secret security as an additional security requirement for unidirectional PRE: no coalition of a dishonest proxy and malicious delegatees may be able to compute the master secret key (private key) of the delegator. In this paper, we first show that Aono et al.'s scheme [4] is not secure under the master secret security model; in other words, if the proxy and a delegatee collude, they can compute the delegator's private key. Second, building on Aono et al.'s paper [4], we construct a unidirectional PRE that is secure under the master secret security model. Like [4], our scheme is also multi-use.

Kunwar Singh, C. Pandu Rangan, A. K. Banerjee
An Algorithm to Analyze Non-injective S-Boxes

We present an algorithm for constructing pairs of an invertible mapping A and an affine mapping B such that AS = SB for a given S-box. To do so, we introduce and analyse the link graph of an S-box. We apply the algorithm to the eight DES S-boxes. All obtained pairs (A, B) are those reported in previous work, in which it was required that both A and B are invertible affine mappings. In particular, the relaxation that A need not be affine does not yield new pairs.
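As a toy illustration of the search problem (not the authors' link-graph algorithm), the sketch below exhaustively finds all pairs (A, B), with A an arbitrary bijection and B an affine bijection, such that A(S(x)) = S(B(x)) for a hypothetical 2-bit S-box; for the real 6-to-4-bit DES S-boxes such brute force is infeasible, which is why the paper's algorithm is needed.

# Toy illustration with a hypothetical 2-bit S-box (NOT the paper's
# link-graph algorithm): exhaustively find all pairs (A, B) with A an
# arbitrary bijection and B an affine bijection so that A(S(x)) = S(B(x)).
from itertools import permutations, product

S = [2, 0, 3, 1]  # hypothetical 2-bit S-box as a lookup table

def affine_bijections(n=2):
    """Yield every affine bijection x -> Mx + c over GF(2)^n as a lookup table."""
    points = range(2 ** n)
    for cols in product(points, repeat=n):          # candidate columns of M
        linear = []
        for x in points:
            y = 0
            for bit in range(n):
                if (x >> bit) & 1:
                    y ^= cols[bit]                  # XOR in the selected columns
            linear.append(y)
        if len(set(linear)) == 2 ** n:              # keep M only if invertible
            for c in points:
                yield [y ^ c for y in linear]       # add the offset c

pairs = [(A, tuple(B))
         for A in permutations(range(4))            # every bijection A
         for B in affine_bijections()
         if all(A[S[x]] == S[B[x]] for x in range(4))]
print(len(pairs), "commuting pairs (A, B) found")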

Leandro Marin, Ludo Tolhuizen
Attribute-Based Fine-Grained Access Control with User Revocation

Attribute-based encryption brings great convenience to access control, but it introduces several challenges with regard to user revocation. In this paper, we propose an access control mechanism that uses a new key update technique to enforce access control policies with efficient user revocation. Access control is achieved through efficient key updates that take advantage of attribute-based encryption and key distribution. We demonstrate how to apply the proposed mechanism to securely manage cloud data. The analysis results indicate that the proposed scheme is efficient and secure with respect to user revocation.

Jun Ye, Wujun Zhang, Shu-lin Wu, Yuan-yuan Gao, Jia-tao Qiu
A Full Privacy-Preserving Scheme for Location-Based Services

Location-based services (LBS) pose risks to users' privacy, as they have access to a user's identity, location and usage profile. Many approaches have been proposed to deal with these privacy problems, but few of them meet the requirement of full privacy. In this paper, we propose a protocol that does not require a trusted third party and provides full privacy. We use group anonymous authentication to achieve identity privacy and program obfuscation to satisfy the privacy requirement on the usage profile, and we assume that geographic or geometric methods exist to form a cloaking region that meets the location privacy requirement.

Fei Shao, Rong Cheng, Fangguo Zhang
Implementation of Efficient Operations over GF(2^32) Using Graphics Processing Units

Evaluating non-linear multivariate polynomial systems over finite fields is an important subroutine, e.g., for encryption and signature verification in multivariate public-key cryptography. The security of multivariate cryptography certainly becomes lower when a larger field is used instead of GF(2) for the same number of key bits. However, we would still like to use larger fields, because multivariate cryptography tends to run faster at the same level of security when a larger field is used. In this paper, we compare the efficiency of several techniques for evaluating multivariate polynomial systems over GF(2^32) via their implementations on graphics processing units.
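As a minimal sketch of the basic field arithmetic involved (CPU-side Python, not the paper's GPU kernels), multiplication in GF(2^32) is carry-less polynomial multiplication followed by modular reduction; the modulus below, x^32 + x^7 + x^3 + x^2 + 1, is a commonly tabulated low-weight irreducible polynomial, though the paper's field representation may differ.

# Multiplication in GF(2^32): carry-less multiply, then reduce modulo
# x^32 + x^7 + x^3 + x^2 + 1 (a commonly tabulated irreducible polynomial).
POLY_LOW = 0x8D  # bits of x^7 + x^3 + x^2 + 1, i.e. x^32 after reduction

def gf2_32_mul(a: int, b: int) -> int:
    r = 0
    for i in range(32):                    # schoolbook carry-less multiply
        if (b >> i) & 1:
            r ^= a << i
    for i in range(62, 31, -1):            # fold terms of degree >= 32 back down
        if (r >> i) & 1:
            r ^= (1 << i) | (POLY_LOW << (i - 32))
    return r

assert gf2_32_mul(1, 0xDEADBEEF) == 0xDEADBEEF   # 1 is the multiplicative identity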

Satoshi Tanaka, Takanori Yasuda, Kouichi Sakurai
M-SRS: Secure and Trustworthy Mobile Service Review System Based on Mobile Cloudlet

The scope of services has expanded to such an extent that service consumers need to quickly gauge the quality of a service offered by different vendors through Service Review Systems (SRSs). In this paper, we consider the trustworthiness of an SRS without a trusted review management center in location-based Service-oriented Mobile Social Networks (S-MSNs). First, we describe several review-statistic modification attacks, which matter greatly to consumers who rely on reviews to choose a service. Second, we construct the M-SRS network model based on Mobile Cloud Computing (MCC), which protects security while reducing communication and computation overhead. Data entanglement and verifiable service utilization tickets are adopted to prevent the proposed attacks on existing SRSs and to guarantee the trustworthiness of the review statistics. Final results show that M-SRS effectively resists existing service review attacks and is efficient in terms of review submission and review authenticity verification for the whole system.

Tao Jiang, Xiaofeng Chen, Jin Li, Jianfeng Ma

Multimedia Security

High-Quality Reversible Data Hiding Approach Based on Evaluating Multiple Prediction Methods

Reversible data hiding based on prediction methods is an effective technique for hiding secret bits in cover images efficiently. In this paper, we propose a reversible data hiding method based on four candidate prediction methods and local complexity to enhance stego-image quality. In our proposed method, before embedding the secret message at each level, we evaluate the four prediction methods by calculating their efficiency ratios to decide which one to use. When the selected prediction method is applied, a threshold based on local complexity determines which pixels join the shifting and embedding process; more pixels therefore avoid pixel shifting, resulting in stego-images with lower distortion. The experimental results show that our image quality is superior to that of other approaches at the same capacity.
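For readers new to the area, the sketch below shows the generic shift-and-embed step on a 1-D signal that such schemes refine; it makes hypothetical simplifications (a single left-neighbour predictor, embedding only at prediction error 0, a payload padded to fill every zero-error slot, no overflow handling) and so is far simpler than the four-predictor, complexity-thresholded method proposed here.

# Minimal 1-D sketch of prediction-error histogram shifting.
def embed(pixels, payload):
    bits = iter(payload)
    stego = [pixels[0]]
    for prev, cur in zip(pixels, pixels[1:]):
        e = cur - prev                          # prediction error (left neighbour)
        if e == 0:
            stego.append(cur + next(bits, 0))   # embed one bit at error 0
        elif e > 0:
            stego.append(cur + 1)               # shift positive errors right
        else:
            stego.append(cur)                   # negative errors stay put
    return stego

def extract(stego):
    pixels, bits = [stego[0]], []
    for cur in stego[1:]:
        e = cur - pixels[-1]                    # error w.r.t. recovered neighbour
        if e in (0, 1):
            bits.append(e)                      # an embedded bit
            pixels.append(cur - e)
        elif e > 1:
            pixels.append(cur - 1)              # undo the shift
        else:
            pixels.append(cur)
    return pixels, bits

assert extract(embed([5, 5, 7, 3], [1]))[0] == [5, 5, 7, 3]   # reversibility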

Cheng-Hsing Yang, Kuan-Liang Liu, Chun-Hao Chang, Yi-Jhong Jhang
Steganalysis to Data Hiding of VQ Watermarking Upon Grouping Strategy

This paper presents a steganalysis method for a data hiding scheme based on VQ compression. The data hiding algorithm in question divides the codebook into groups of two codewords each, and the Euclidean distance of the group is used instead of a codeword as in traditional VQ compression. A PoV-like effect under the Chi-square attack is observed and used as a detection feature. In the proposed steganalysis, we first detect whether an unknown image is a VQ-compressed image, and then perform targeted detection of codeword-grouping data hiding methods. We apply the proposed scheme to Yang et al.'s watermarking scheme. The large test image database UCID (Uncompressed Colour Image Database) is used under various conditions: cover images, traditional VQ-compressed images and stego images. The experiments show that the proposed steganalysis method is able to identify the stego images among the others, with an accuracy rate of over 90%.
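The detection feature rests on the classic pairs-of-values (PoV) chi-square test: embedding that flips values within fixed pairs drives the two members of each pair toward equal frequencies. A rough sketch follows (applied here to plain value frequencies, whereas the paper applies the idea to two-codeword groups; at least two populated pairs are assumed).

# Rough sketch of the pairs-of-values (PoV) chi-square test.
from collections import Counter
from scipy.stats import chi2

def pov_embedding_probability(values, pairs):
    """Westfeld-Pfitzmann style statistic: a result close to 1 means the
    pair frequencies look equalized, i.e. stego-like."""
    freq = Counter(values)
    stat, k = 0.0, 0
    for a, b in pairs:
        n = freq[a] + freq[b]
        if n == 0:
            continue                      # skip pairs absent from the image
        expected = n / 2
        stat += (freq[a] - expected) ** 2 / expected
        k += 1
    return 1 - chi2.cdf(stat, k - 1)      # chi-square with k-1 degrees of freedom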

Ya-Ting Chang, Min-Hao Wu, Shiuh-Jeng Wang
Experimental Evaluation of an Algorithm for the Detection of Tampered JPEG Images

This paper experimentally evaluates the performance of one popular algorithm for the detection of tampered JPEG images: the algorithm by Lin et al. [1]. We developed a reference implementation of this algorithm and performed a thorough experimental analysis, measuring its performance on the images of the CASIA TIDE public dataset, the de facto standard for the experimental analysis of this family of algorithms. Our first results were very positive, confirming the good performance of the algorithm. However, a closer inspection revealed an unexpected anomaly in a considerable portion of the images of the CASIA TIDE dataset that may have influenced our results, as well as the results of previous studies using this dataset. By taking advantage of this anomaly, we developed a variant of the original algorithm that exhibits better performance on the same dataset.

Giuseppe Cattaneo, Gianluca Roscigno, Umberto Ferraro Petrillo

Dependable Systems and Applications

A Semantic-Based Malware Detection System Design Based on Channels

With the development of information technology, massive and heterogeneous data resources have appeared on the internet, and malware likewise appears in many different forms; traditional text-based malware detection cannot detect these various malwares efficiently. Realizing semantic-based malware detection has therefore become a great challenge. This paper proposes an intelligent and active data-interaction coordination model based on channels. Coordination channels are the basic construction unit of this model and can realize various kinds of data transmission. By defining coordination channels, coordination atoms and coordination units, the model can support diverse data interactions and understand the semantics of different data resources. Moreover, the model supports a graphical representation of data interaction, so complex data interaction systems can be designed in the form of flow graphs. Finally, we design a semantic-based malware detection system using our model; the system can understand the behavioral semantics of different malwares, realizing intelligent and active malware detection.

Peige Ren, Xiaofeng Wang, Chunqing Wu, Baokang Zhao, Hao Sun
An Efficient Semantic-Based Organization and Similarity Search Method for Internet Data Resources

A large number of data resources of different types are appearing on the internet with the development of information technology, and some harmful ones damage society and citizens. Discovering such resources among the heterogeneous, massive data in cyberspace is therefore important, and internet resource discovery has attracted increasing attention. In this paper, we present iHash, a semantic-based organization and similarity search method for internet data resources. First, iHash normalizes internet data objects into a high-dimensional feature space, addressing the "feature explosion" problem of the feature space; second, we partition the high-dimensional data in the feature space using a clustering method, transform the data clusters into regular shapes, and use a Pyramid-like method to organize the high-dimensional data; finally, we realize range and kNN queries on top of this organization. Our performance evaluation shows that iHash performs similarity search efficiently.

Peige Ren, Xiaofeng Wang, Hao Sun, Baokang Zhao, Chunqing Wu
Efficient DVFS to Prevent Hard Faults for Many-Core Architectures

Dynamic Voltage and Frequency Scaling (DVFS) is a widely used and efficient technique for Dynamic Power Management (DPM). To avoid hard faults caused by voltage and frequency scaling, some overhead is always imposed on application performance due to the latency of DVFS. Moreover, on many-core architectures, the design of multiple voltage domains makes the latency of DVFS a much more significant issue. In this paper, we propose an efficient DVFS scheme that prevents hard faults while eliminating the impact of DVFS latency as far as possible. The main idea is to apply Retroactive Frequency Scaling (RFS) wherever the latency of DVFS would otherwise be introduced. Based on our analysis, the approach is expected to achieve noticeable performance improvements on many-core architectures.
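For background, the lever that DVFS exploits is the classic CMOS dynamic-power relation

P_{\text{dyn}} = \alpha\, C\, V^{2} f,

where \alpha is the activity factor, C the switched capacitance, V the supply voltage and f the clock frequency; since the attainable frequency also falls with the voltage, scaling both yields roughly cubic power savings at a linear performance cost, which is why hiding the scaling latency is worthwhile.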

Zhiquan Lai, Baokang Zhao, Jinshu Su
Improving Availability through Energy-Saving Optimization in LEO Satellite Networks

Recently, satellite networks have been widely used for communication in areas lacking network infrastructure, and they will act as backbones in the next-generation internet; their availability is therefore very important. In space, energy is always limited for satellites, and highly efficient energy utilization improves the availability of satellite systems. In this paper, we consider energy-saving optimization for a whole LEO satellite network rather than a single satellite. We modify and extend the multicommodity flow model [3] to switch off as many satellite nodes and links as possible in LEO satellite networks. Taking advantage of the multi-coverage scheme and the traffic distribution patterns of satellite networks, we improve the heuristic algorithms of [3] to turn off up to 59% of unnecessary satellites, 61% of up-down links and 72% of inter-satellite links under constraints on link utilization and the routing-hop increase ratio, with a total energy saving ratio of up to 65%. In this way, the availability of LEO satellite networks is substantially improved.
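As a rough sketch of the kind of model being extended (the notation here is hypothetical, not the paper's), an energy-aware multicommodity flow formulation minimizes

\min \sum_{n \in N} P_n\, y_n \;+\; \sum_{l \in L} P_l\, x_l

subject to flow conservation for every traffic demand, \sum_{d} f^{d}_{l} \le \mu\, c_l\, x_l for every link l (a maximum-utilization constraint with threshold \mu), and x_l \le y_n for every link l incident to node n, where the binary variables y_n and x_l indicate whether satellite n and link l remain powered on.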

Zhu Tang, Chunqing Wu, Zhenqian Feng, Baokang Zhao, Wanrong Yu
An Effective Cloud-Based Active Defense System against Malicious Codes

With the rapid development of cloud computing techniques, network security has attracted more and more attention. Of all network threats, malicious code is the major one. Due to the surge in the number and diversity of malicious code species, it is intractable for existing antivirus techniques to defend against all of these attacks. In this paper, we construct an effective cloud-based active defense system against malicious code. The system uses a honeypot subsystem to collect threat data, and multiple behavior analysis engines work in parallel to generate a comprehensive program behavior analysis report. Furthermore, intelligent algorithms running on several computing servers perform automatic analysis of these reports. Together with the multiple scan engines, these components form a comprehensive, reinforced and more intelligent active defense system.

Zhenyu Zhang, Wujun Zhang, Jianfeng Wang, Xiaofeng Chen
Backmatter
Metadata
Title
Information and Communication Technology
Editors
Linawati
Made Sudiana Mahendra
Erich J. Neuhold
A Min Tjoa
Ilsun You
Copyright Year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-55032-4
Print ISBN
978-3-642-55031-7
DOI
https://doi.org/10.1007/978-3-642-55032-4
