
2015 | Book

Information Science and Applications


About this book

This proceedings volume provides a snapshot of the latest issues encountered in technical convergence and the convergence of security technologies. It explores how information science is core to most current research, industrial and commercial activities, and consists of contributions covering topics including Ubiquitous Computing, Networks and Information Systems, Multimedia and Visualization, Middleware and Operating Systems, Security and Privacy, Data Mining and Artificial Intelligence, Software Engineering, and Web Technology. The proceedings introduce the most recent information technologies and ideas, applications and problems related to technology convergence, illustrated through case studies, and review existing converging security techniques. Through this volume, readers will gain an understanding of the current state of the art in information strategies and technologies of convergence security.

The intended readership comprises researchers in academia, industry, and other research institutes focusing on information science and technology.

Table of Contents

Frontmatter
Erratum to: Finding Knee Solutions in Multi-Objective Optimization Using Extended Angle Dominance Approach
Sufian Sudeng, Naruemon Wattanapongsakorn, Sanan Srakaew


Frontmatter
QoS-aware Mapping and Scheduling for Integrated CWDM-PON and WiMAX Network

Worldwide Interoperability for Microwave Access (WiMAX) has emerged as one of the key technologies for wireless broadband access networks, while Coarse Wavelength Division Multiplexing-Passive Optical Network (CWDM-PON) is one of the potential solutions for future high-speed broadband access networks. Integrating both networks could enhance overall network performance through cost-effectiveness, higher capacity, wider coverage, better network flexibility and higher reliability. In this work, a scheduling algorithm is proposed as a means to maintain the Quality of Service (QoS) requirements of two different media whilst allocating bandwidth to the subscribers. The NS-2 simulation results demonstrate how the performance of the integrated CWDM-PON and WiMAX network is improved in terms of delay and throughput.

Siti H. Mohammad, Nadiatulhuda Zulkifli, Sevia M. Idrus
Resource Management Scheme Based on Position and Direction for Handoff Control in Micro/Pico-Cellular Networks

We propose a handoff control scheme to accommodate mobile multimedia traffic based on a resource reservation procedure using direction estimation. The proposed scheme uses a novel mobile tracking method based on Fuzzy Multi-Criteria Decision Making (FMCDM), in which uncertain parameters such as the Pilot Signal Strength (PSS), the distance between the Mobile Terminal (MT) and the Base Station (BS), the moving direction, and the previous location are combined in the decision process using an aggregation function from fuzzy set theory. In the performance analysis, our proposed scheme provides better performance than previous schemes.

Dong Chun Lee, Kuinam J. Kim, Jong Chan Lee
A Pilot Study of Embedding Android Apps with Arduino for Monitoring Rehabilitation Process

This paper proposes the monitoring of post-stroke rehabilitation activities via Android smart phones. The study focuses on designing the hardware, developing the software and presenting the results in an interactive manner. The results are documented for the purpose of post-processing and progressive status tracking. Furthermore, the interactive output may motivate patients to keep using the system for rehabilitation. The subject wears a set of sensors over the palm while performing a few basic arm movements. The data are converted into a series of readable values and then transferred to the smart phone via Bluetooth. The experiment demonstrates the capability of the sensors to produce information and of the Android app to respond to such hand movements. It is believed that the system offers more information than conventional methods, as well as the ability to improve training quality, results and patient progress. For an initial proof of concept, the system is tested on a healthy subject.

Safyzan Salim, Wan Nurshazwani Wan Zakaria, M. Mahadi Abdul Jamil
A Security Protocol based-on Mutual Authentication Application toward Wireless Sensor Network

This paper presents an application of a security protocol based on a mutual authentication procedure between the involved entities that provides data and network security in wireless sensor network communication. Through the WSN, a user can access the base station of the wireless sensor network and retrieve the data. This paper proposes a secure authentication protocol in which the involved entities are strongly and mutually verified before accessing the data. The proposal provides standard security features such as data integrity, mutual authentication and session key establishment. Furthermore, an extensive analysis shows that the proposed scheme withstands popular attacks, achieves better efficiency and can safeguard real wireless sensor network applications.

Ndibanje Bruce, YoungJin Kang, Hyeong Rag Kim, SuHyun Park, Hoon-Jae Lee
A Noninvasive Sensor System for Discriminating Mobility Pattern on a Bed

In this paper, we propose a noninvasive sensor system for discriminating the mobility patterns of a resident on a bed without inconvenience. The proposed system consists of a thin and wide film-style piezoelectric force sensor, a signal processing board, and a data collection program. Four different types of motion were simulated by non-patient volunteers, and about 10,000 experimental motions were performed. The sensor data produced by human motions were collected, preprocessed by a moving average filter, transformed by the FFT, and classified by the k-NN algorithm with k = 1. The experiment yielded an overall discrimination rate of 89.4 %. The proposed system will contribute to differentiating mobility patterns on a bed and distinguishing the physical characteristics of a person.

Seung Ho Cho, Seokhyang Cho
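The preprocessing-and-classification pipeline described above (moving-average filter, FFT magnitudes, 1-NN) can be sketched as follows; the synthetic signals, window size and motion labels are illustrative assumptions, not the paper's sensor data:

```python
import numpy as np

def extract_features(signal, window=5):
    """Moving-average smoothing followed by FFT magnitude features."""
    kernel = np.ones(window) / window
    smoothed = np.convolve(signal, kernel, mode="same")
    return np.abs(np.fft.rfft(smoothed))

def classify_1nn(train_feats, train_labels, query_feats):
    """1-NN: return the label of the closest training sample."""
    dists = np.linalg.norm(train_feats - query_feats, axis=1)
    return train_labels[int(np.argmin(dists))]

# Toy example: two synthetic "motions" with different dominant frequencies
t = np.linspace(0, 1, 64, endpoint=False)
train = np.array([extract_features(np.sin(2 * np.pi * 4 * t)),
                  extract_features(np.sin(2 * np.pi * 12 * t))])
labels = np.array(["roll", "sit-up"])
query = extract_features(np.sin(2 * np.pi * 4 * t) + 0.1)
print(classify_1nn(train, labels, query))  # → roll
```

With k = 1, classification reduces to returning the label of the single nearest training vector, which matches the configuration reported in the abstract.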
A Fault-Tolerant Multi-Path Multi-Channel Routing Protocol for Cognitive Radio Ad Hoc Networks

Cognitive Radio (CR) has been proposed as a promising technology to solve the problems of radio spectrum shortage and spectrum underutilization. In Cognitive Radio Ad Hoc Networks (CRAHNs), which operate without centralized infrastructure support, data routing is one of the most important issues to be taken into account and requires further study. Moreover, in such networks, a path failure can easily occur during data transmission, caused by the activity of licensed users, node mobility, node faults, or link degradation, and network performance is severely degraded by a large number of path failures. In this paper, the Fault-Tolerant Cognitive Ad hoc Routing Protocol (FTCARP) is proposed to provide fast and efficient route recovery in the presence of path failures during data delivery in CRAHNs. In FTCARP, a backup path is immediately utilized when a failure occurs over a primary transmission route, so that subsequent data packets can be transferred without severe service disruption. The protocol uses different route recovery mechanisms to handle different causes of path failure. The performance evaluation is conducted through simulation using the NS-2 simulator, and the protocol is benchmarked against the Dual Diversity Cognitive Ad hoc Routing Protocol (D2CARP). The simulation results show that FTCARP achieves better average throughput and average end-to-end delay than D2CARP.

Zamree Che-aron, Aisha Hassan Abdalla, Khaizuran Abdullah, Wan Haslina Hassan, Md. Arafatur Rahman
Study of Sound and Haptic Feedback in Smart Wearable Devices to Improve Driving Performance of Elders

This paper's objective is to study the influence of sound and haptic feedback from smart wearable devices on the performance of elderly drivers. Performance is measured under the assumption that drivers who spend more time applying the brake tend to be more prepared and aware of their surroundings. We created a prototype wearable device and experimented on how it affects drivers. A total of 9 elderly drivers were measured more than 108 times on how long they applied the brake while driving. We then performed a paired-sample t-test and found the performance change from sound feedback to be statistically significant. We also calculated the correlation between several factors and performance; the factors with statistical significance were familiarity with smart devices and gender.

Chularas Natpratan, Nagul Cooharojananone
Indoor WIFI localization on embedded systems

Localization has recently become increasingly important for ubiquitous computing. Most embedded devices require location-based services for monitoring and tracking purposes. Typically, the location services provided by these devices rely on the global positioning system (GPS). Although GPS can indicate absolute position based on a satellite system, several studies have demonstrated its inaccuracy when deployed indoors due to signal degradation from obstacles and building structures. In this paper, we develop a testbed that provides indoor positioning services based on a multilateration technique. The empirical results show that an appropriate indoor propagation model can improve positioning accuracy; several propagation models have been studied, and statistical models have been proposed for a site-specific environment.

Anya Apavatjrut, Ekkawit Boonyasiwapong
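A minimal sketch of multilateration from ranges, with ranges obtained from RSSI via a log-distance path-loss model; the model constants below are assumed, site-specific values (the kind of statistical propagation parameters the paper fits per environment):

```python
import numpy as np

def rssi_to_distance(rssi, rssi_d0=-40.0, n=2.7, d0=1.0):
    """Log-distance path-loss model; rssi_d0 (RSSI at reference distance
    d0) and path-loss exponent n are illustrative, site-specific values."""
    return d0 * 10 ** ((rssi_d0 - rssi) / (10 * n))

def multilaterate(anchors, dists):
    """Least-squares position from >= 3 anchor positions and ranges.

    Linearizes the circle equations by subtracting the last one, then
    solves the resulting (possibly overdetermined) linear system.
    """
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    (xn, yn), dn = anchors[-1], dists[-1]
    A = 2.0 * (anchors[-1] - anchors[:-1])          # rows: 2(xn-xi), 2(yn-yi)
    b = (dists[:-1] ** 2 - dn ** 2
         - (anchors[:-1] ** 2).sum(axis=1) + xn ** 2 + yn ** 2)
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

print(multilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65 ** 0.5, 45 ** 0.5]))
# ≈ [3. 4.]
```

With exact ranges the solver recovers the true position; with RSSI-derived ranges, the accuracy depends on how well the propagation model fits the site, which is the point the abstract makes.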
Bacterial Foraging-based Power Allocation for Cooperative Wireless Sensor Networks

Cooperative communication has become a popular area of research due to its strength and wide application scope in wireless networking and communications. This technique largely improves communication performance in terms of capacity, energy efficiency, timeliness and contention. Power allocation plays an important role in the cooperative communication paradigm in obtaining the desired performance improvements in the aforementioned aspects. In this paper, we present a bacterial foraging optimization algorithm (BFOA)-based power allocation method for cooperative communications in wireless systems. Comparisons with non-cooperative approaches are made to justify our proposed method.

Mohammad Abdul Azim, Zeyar Aung, Mario E. Rivero-Angeles


Frontmatter
Maintaining a Trajectory Data Warehouse Under Schema Changes with the Mobile Agent View Adaptation Technique

With the development of pervasive systems and positioning technology, the analysis of data resulting from moving-object trajectories has attracted particular interest. These data, called trajectory data, are stored in a suitable repository called a trajectory data warehouse (TDW). TDW view definitions are constructed from the schemas of heterogeneous mobile information sources. These sources are increasingly autonomous and often evolve by changing their contents and/or their schemas, so a TDW view definition may become undefined and the analysis process may consequently be affected. For this reason, it is important to restore or synchronize view definitions following schema changes at the heterogeneous mobile information sources. The goal of this paper is to propose TDW view definition maintenance based on a mobile agent view adaptation system.

Wided Oueslati, Jalel Akaichi
Traffic Analysis in Concurrent Multi-Channel Viewing on P2PTV

In recent years, peer-to-peer (P2P) video streaming services (P2PTV) have attracted much attention because of their ability to decrease the load on content servers by distributing the data delivery function to peers. On the other hand, P2P overlay networks are oblivious to the physical network topology and thus may cause undesirable traffic straddling some Internet service providers (ISPs). To optimize P2PTV traffic, several traffic measurement studies have revealed the characteristics of P2PTV traffic. However, these studies did not focus on users' behavior or the traffic flow of each user. In this paper, we focus on PPTV, one of the most famous P2PTV services, and collect traffic data when multiple channels are viewed at the same time. Through this measurement, we observed changes in the number of peers, traffic flows, and packet arrival times. As a result, we found new characteristics of PPTV, such as its transmission state, by monitoring the variation in the number of newly arriving peers. Moreover, we could detect video servers by simultaneously analyzing multi-channel PPTV traffic.

Koki Mizutani, Takumi Miyoshi, Olivier Fourmaux
Evaluation Method of MANET over Next Generation Optical Wireless Access Network

Access networks are increasingly shaped by the emerging trend of user applications that call for converged layers to meet the requirements of growing spectrum resources. Currently, however, the majority of previous works in this domain are still limited to a specific functional layer. MANETs in the access network alone have the salient characteristic of being bandwidth-constrained, as the wireless links of the front-end topology will continue to have significantly lower capacity than their wired counterparts. Hence, a unified, hybrid method to evaluate the performance of MANET routing protocols (AODVUU, OLSR and DYMOUM) based on an IEEE 802.11 mesh topology over a passive optical access network is proposed through simulation in the OMNeT environment. The results show that DYMO has the best performance compared to AODV and OLSR in terms of the throughput and delay performance metrics.

M. A. Wong, N. Zulkifli, S. M. Idrus, M. Elshaikh
A Feedback Mechanism Based on Randomly Suppressed Timer for ForCES Protocol

As the shortcomings of closed networks are highlighted, the requirements of next-generation networks for openness are becoming increasingly intense. Based on the fact that reliable multicast in the ForCES protocol plays a significant role in improving the performance of a ForCES router, this paper studies congestion control within the reliable multicast process and analyzes scalability issues of reliable multicast based on the ForCES protocol. This paper also proposes a feedback mechanism based on randomly suppressed timers, which effectively avoids the ACK-implosion problem, by analyzing a mathematical model of the randomized multicast feedback process. According to the test results, this mechanism enables reliable multicast to better adapt to the instability of the network environment.

Lijie Cen, Chuanhuang Li, Weiming Wang
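The randomly suppressed timer idea can be illustrated with a small simulation: every receiver that must send feedback draws a uniform random timer, and a feedback message heard before a receiver's own timer fires suppresses it. The parameters below are illustrative, not from the paper's model:

```python
import random

def simulate_feedback(n_receivers, max_timer, prop_delay, seed=0):
    """Randomly suppressed timers: each receiver draws a uniform timer in
    [0, max_timer]; the earliest timer triggers a feedback message, and any
    receiver whose timer would fire after that message reaches it (within
    prop_delay) suppresses its own.  Returns how many messages are sent."""
    rng = random.Random(seed)
    timers = sorted(rng.uniform(0, max_timer) for _ in range(n_receivers))
    first = timers[0]
    return sum(1 for t in timers if t < first + prop_delay)

sent = simulate_feedback(n_receivers=1000, max_timer=1.0, prop_delay=0.01)
print(sent)  # far fewer than 1000: the implosion is suppressed
```

Even with a thousand receivers, only the handful whose timers fall within one propagation delay of the minimum actually transmit, which is exactly the implosion-avoidance effect the mechanism targets.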
Operational Analysis of Interarrival Rate-based Queuing Management Mechanism for High-speed Networks

With the development of the Internet, it has become essential to operate an effective queue management mechanism that improves Quality of Service (QoS), such as throughput, end-to-end delay, and loss, on the Internet. The performance of TCP applications relies on the selection of a queue management mechanism in the network. The current mechanisms used in the Internet respond slowly to congestion, which results in large queue size variation, and detect and notify congestion in an untimely manner. These consequences degrade performance significantly because of high queuing delays and packet loss. This paper presents a proactive queuing management mechanism that efficiently addresses the congestion state upon every incoming packet arrival by deciding the probability with which the packet should be dropped or marked. It is designed based on the packet interarrival rate and the actual queue size. The performance evaluation showed that the proposed mechanism can be adjusted to efficiently control the queue size to some desirable value, decreasing the queuing delay while maintaining high link utilization and low packet drops.

Mohammed M. Kadhum, Selvakumar Manickam
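The decision structure can be sketched as follows; the equal weighting of arrival load and queue occupancy is an illustrative assumption, not the paper's actual drop-probability formula:

```python
def drop_probability(interarrival_rate, service_rate, queue_size, queue_limit):
    """Drop/mark probability evaluated on every packet arrival, driven by
    the packet interarrival rate and the actual queue size.  The 50/50
    weighting below is an illustrative assumption, not the paper's model."""
    load = min(interarrival_rate / service_rate, 2.0) / 2.0   # clamp to [0, 1]
    occupancy = min(queue_size / queue_limit, 1.0)            # [0, 1]
    return min(1.0, 0.5 * load + 0.5 * occupancy)

print(drop_probability(500, 1000, 10, 100))    # light load -> low probability
print(drop_probability(2000, 1000, 100, 100))  # overload + full queue -> 1.0
```

Because the probability reacts to the interarrival rate of each packet rather than only to an averaged queue length, congestion is signalled proactively, before the queue overflows.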
E-Government project enquiry framework for a continuous improvement process, status in Malaysia and Comoros

Information and Communication Technology (ICT) has transformed the way we live today, in both the private and public sectors. Technologies such as e-Commerce and banking systems show advancement in the private sector, while the public sector is characterized by the reinvention of public service processes and delivery. However, many obstacles have been encountered in developing and least developed countries, leading to the failure of many e-Government projects. This study proposes a hypothetical e-Government project enquiry (EPE) framework that can serve as a basis for investigating e-Government project implementations against best practices, techniques, and methodologies in this area. The study focuses on three levels of enquiry, namely the strategy level, the operational level, and the technical level. On the basis of a sound study of the status of the Malaysian and Comorian e-Government systems, we found that the proposed framework is noteworthy for questioning the performance of current e-Government environments and for guiding the implementation of new e-Government project initiatives.

Said Abdou Mfoihaya, Mokhtar Mohd Yusof
Objective Non-intrusive Conversational VoIP Quality Prediction using Data mining methods

Nowadays, a growing number of applications running on the Internet involve real-time transmission of speech and audio streams. Among these applications, Voice over Internet Protocol (VoIP) has become widespread, based on the Internet Protocol (IP). However, its quality of service (QoS) is not robust to network impairments and codecs, and it is hard to determine conversational voice quality within a real-time network using the ITU-T standards PESQ and the E-model. In this research, three data mining methods, regression, decision trees and neural networks, were used to create prediction models. The datasets were generated from the combination of PESQ and the E-model, and a statistical error analysis was conducted to compare the accuracy of each model. The results show that the neural network model is the most suitable prediction model for VoIP quality of service.

Sake Valaisathien, Vajirasak Vanijja
A Hybrid Incentive-based Peer Selection Scheme for Unstructured Peer-to-Peer Media Streaming Systems

The success of peer-to-peer file sharing protocols such as BitTorrent and Gnutella makes peer-to-peer networks an attractive alternative for implementing media streaming services. However, due to the continuous nature of the content, the key components of such conventional file sharing applications cannot satisfy the more stringent requirements of media streaming. In this study, we propose an alternative to one of these key components, the incentive-based peer selection routine. The primary motivation of our proposed scheme is the utilization of peer contribution information in both local and global contexts for a more informed peer selection process.

Victor Romero II, Cedric Angelo Festin
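The hybrid local/global idea can be sketched as a weighted peer score; the linear combination and the alpha weight are illustrative assumptions, not the paper's scheme:

```python
def peer_score(local, glob, alpha=0.7):
    """Blend a peer's contribution as observed locally with its globally
    reported contribution; alpha is an assumed tuning weight."""
    return alpha * local + (1 - alpha) * glob

def select_peers(peers, k, alpha=0.7):
    """Pick the k peers with the highest blended contribution score."""
    ranked = sorted(peers,
                    key=lambda p: peer_score(p["local"], p["global"], alpha),
                    reverse=True)
    return ranked[:k]

peers = [{"id": "a", "local": 0.9, "global": 0.1},
         {"id": "b", "local": 0.2, "global": 0.8}]
print([p["id"] for p in select_peers(peers, 1)])  # → ['a']
```

Shifting alpha toward the global term lets a newcomer with no local history still be selected on the strength of its network-wide contribution, which is the benefit of combining both contexts.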
A Transmission Method to Improve the Quality of Multimedia in Hybrid Broadcast/Mobile Networks

This paper proposes a method for recovering broadcast content reception errors through the mobile communication network in a hybrid broadcast/mobile network. In general, the mobile communication network is a pay network and operates peer-to-peer. Therefore, it is necessary to reduce the amount of retransmitted data in order to reduce the recovery load on users and networks. This paper proposes a recovery method for a hybrid DMB system that combines T-DMB, a major mobile TV standard, with the mobile communication network. The proposed method utilizes Reed-Solomon techniques based on a cross-layer design and transmits additional recovery information when errors are detected. It can reduce the resources by % compared with retransmission of the original MPEG2-TS.

Hyung-Yoon Seo, Byungjun Bae, Jong-Deok Kim
A Failureless Pipelined Aho-Corasick Algorithm for FPGA-based Parallel String Matching Engine

This paper proposes a failureless pipelined Aho-Corasick (FPAC) algorithm that generates a failureless pipelined deterministic finite automaton (DFA). The failureless pipelined DFA generated by the FPAC algorithm does not store failure pointers, reducing hardware overhead. Moreover, by sharing common prefixes, the information for storing states can be compressed. Because a pipeline register stores the state in each stage, the failureless pipelined DFA can perform multiple state transitions in parallel; therefore, throughput can be increased with multiple homogeneous DFAs. In experiments with cost-effective FPGAs, the implementation of the proposed FPAC algorithm shows high performance and low hardware overhead compared to several FPGA-based string matching engines.

Hyun Jin Kim
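For reference, a software sketch of the classic Aho-Corasick automaton, the baseline whose failure pointers FPAC eliminates in hardware; the prefix-sharing trie below is the same compression opportunity the paper exploits:

```python
from collections import deque

def build_ac(patterns):
    """Classic Aho-Corasick: a prefix-sharing trie (goto), failure links
    (fail) and per-state matched patterns (out)."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        s = 0
        for ch in pat:
            if ch not in goto[s]:
                goto.append({}); fail.append(0); out.append(set())
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        out[s].add(pat)
    queue = deque(goto[0].values())          # depth-1 states keep fail = 0
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            nxt = goto[f].get(ch, 0)
            fail[t] = nxt if nxt != t else 0
            out[t] |= out[fail[t]]
    return goto, fail, out

def search(text, goto, fail, out):
    """Scan text in one pass, one state transition per input character."""
    hits, s = [], 0
    for i, ch in enumerate(text):
        while s and ch not in goto[s]:
            s = fail[s]
        s = goto[s].get(ch, 0)
        hits += [(i - len(p) + 1, p) for p in out[s]]
    return hits

print(sorted(search("ushers", *build_ac(["he", "she", "his", "hers"]))))
# → [(1, 'she'), (2, 'he'), (2, 'hers')]
```

The failure-link chase in the inner loop is exactly the per-state storage and branching that FPAC's failureless pipelined DFA removes for hardware efficiency.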
Revised P2P Data Sharing Scheme over Distributed Cloud Networks

Distributed cloud networks can be seen as a cooperative network composed of millions of hosts spread around the world, forming a distributed shared resource. In P2P technology, as opposed to the existing client/server concept, devices are actively connected with one another in order to share resources, and every participant is both a server and a client at the same time. In P2P distributed cloud networks, peers are able to directly share and exchange information without the help of a server. This results in prompt and secure sharing of network resources and data handling. However, the flooding algorithm used in distributed P2P networks generates query messages excessively. Our objective in this paper is to present a restricted path flooding algorithm that decreases the number of query messages in order to solve this problem. We include the concepts as well as the systematic procedures of the proposed scheme for fast path flooding in distributed P2P cloud networks.

Wonhyuk Lee, TaeYeon Kim, Seungae Kang, HyunCheol Kim
LQDV Routing Protocol Implementation on Arduino Platform and Xbee module

To date, most routing protocols in ad hoc networks have been evaluated by simulation. However, simulation usually does not reflect the impact of the real environment on the performance of the routing protocols. In this paper, we implement the Link Quality Distance Vector (LQDV) routing protocol, a reliable routing protocol for static wireless networks, on the Arduino platform. We also evaluate the performance of our LQDV implementation in a real environment. Furthermore, we introduce a simple way to help developers create implementations based on the Arduino platform and the Xbee module.

Ho Sy Khanh, Myung Kyun Kim
A Virtualization and Management Architecture of Micro-Datacenter

With cloud computing, storage and computing resources are moving to remote resources such as virtual servers and storage systems in large DCs (Data Centers), which raises many performance and management challenges. We consider distributed cloud systems, which deploy micro DCs that are geographically distributed over a large number of locations in a wide-area network. In this article, we also argue for a micro DC-based network model that provides higher-level connectivity and a logical network abstraction that are integral parts of wellness applications. We revisit our previously proposed logical network models, which are used to configure the logical wellness network.

Byungyeon Park, Wonhyuk Lee, TaeYeon Kim, HyunCheol Kim
Multi-disjoint Routes Mechanism for Source Routing in Mobile Ad-hoc Networks

The capabilities of Mobile Ad hoc Networks (MANETs) pave the way for a wide range of potential applications in personal, civilian, and military environments as well as emergency operations. The characteristics of MANETs impose serious challenges to deploying these networks efficiently. The limited transmission range of mobile nodes necessitates the use of multi-hop routing to exchange data between nodes across the network. Routing is the most important process affecting the overall performance of MANETs. A variety of routing protocols have been developed for MANETs, but these protocols perform poorly under different conditions due to their strategies for learning about routes to destinations. In this paper, we present an innovative multi-disjoint routes mechanism based on the source routing concept to improve routing in MANETs. The simulation results show that the proposed routing mechanism outperforms the common routing protocols DSR and AODV.

Baidaa Hamza Khudayer, Mohammed M. Kadhum, Wan Tat Chee
Passive Network Monitoring using REAMS

As computer networks grow in size and complexity, monitoring them becomes more challenging. In order to meet the needs of IT administrators maintaining such networks, various Network Monitoring Systems (NMS) have been developed. Most NMSs rely solely on active scanning techniques in order to detect the topology of the networks they monitor. We propose a passive scanning solution using the logs produced by the systems within the networks. Additionally, we demonstrate how passive monitoring can be used to develop a holistic knowledge graph of the network landscape.

Amir Azodi, David Jaeger, Feng Cheng, Christoph Meinel
Low Latency Video Transmission Device

This paper presents a low-latency video transmission device that includes a protocol engine and a packet buffer for delivering data. The packet buffer is designed to transfer data packets between different clock domains. The protocol engine is designed for video transmission and especially for hardware implementation. The adopted protocol is simpler to implement in hardware than TCP and offers a degree of reliability that UDP does not have. The proposed device achieves low latency and good quality of service.

Chanho Park, Hagyoung Kim


Frontmatter
Comparative Investigation of Medical Contents using Desktop and Tabletop Technologies

In recent years, studies in the field of human-computer interaction have shown that interactive technologies, constructive methods and appropriate interfaces have a great impact in improving learning and creative education. Such interactive technologies, along with suitable interfaces, are beneficial for humans in various domains, especially in the field of healthcare. In practice, medical monitoring and diagnosis systems are generally based on desktop computers and have been less explored on multi-touch tabletops, due to the latter's limited availability, even though tabletops support collaborative activities and a natural form of interaction. This paper presents an experimental, technology-based comparative study of collaborative activities using desktop and tabletop technologies. A prototype was implemented based on the proposed conceptual framework, and a quantitative investigation of the usefulness of these technologies was conducted with twenty students. The experiment showed that the investigation of medical elements was more significant on tabletop technologies than on desktop technologies.

Tahir Mustafa Madni, Yunus Nayan, Suziah Sulaiman, Muhammad Tahir
Dynamic-Time-Warping Analysis of Feature-Vector Reliability for Cognitive Stimulation Therapy Assessment

Cognitive stimulation therapy (CST) can help people with mental illness improve their health condition. In particular, CST provides an alternative treatment for people with mild to moderate dementia. Signal processing and pattern recognition methods are promising tools for automated assessment of the effectiveness of CST in treating individuals with dementia. This paper applies dynamic time warping to investigate the reliability of photoplethysmography-derived features, extracted by the largest Lyapunov exponents and spectral distortion, for CST evaluation.

Tuan D. Pham
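A minimal sketch of the dynamic time warping distance used here, on one-dimensional feature sequences (the photoplethysmography-derived features themselves are beyond this sketch):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping with a full cost matrix,
    O(len(a) * len(b)) time and space."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Best of: insertion, deletion, match along the warping path
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Warping absorbs the duplicated sample, so the distance is zero
print(dtw_distance([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0]))  # → 0.0
```

Because DTW aligns sequences that are locally stretched or compressed in time, two feature trajectories with the same shape but different pacing still score as similar, which is what makes it suitable for comparing physiological recordings across sessions.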
An Algorithm for Rock Pore Image Segmentation

A new algorithm for rock pore segmentation is presented in this paper. Using the morphological erosion operator and the hit-or-miss transform from mathematical morphology, the possible positions of the dividing lines are derived. The direction of the dividing lines can be further confirmed by dilation, and their authenticity can be verified using conditional dilation. Consequently, accurate pore segmentation can be achieved. Various practical applications have shown that this algorithm is relatively accurate, rapidly executed, and insensitive to noise.

Zhang Jiqun, Hu Chungjin, Liu Xin, He Dongmei, Li Hua
Action Recognition by Extracting Pyramidal Motion Features from Skeleton Sequences

Human action recognition has been a long-standing problem in computer vision. Computational efficiency is an important aspect in the design of a practical action-recognition system. This paper presents a framework for efficient human action recognition. Novel pyramidal motion features are proposed to represent skeleton sequences by computing position offsets of 3D skeletal body joints. In the recognition phase, a Naive-Bayes-Nearest-Neighbors (NBNN) classifier is used to take into account the spatial independence of body joints. We conducted experiments to systematically test our framework on the public UCF dataset. Experimental results show that, compared with state-of-the-art approaches, the presented framework is more effective and more accurate for action recognition, while having a high potential to be more efficient in computation.

Guoliang LU, Yiqi ZHOU, Xueyong LI, Chen LV
P-PCC: Parallel Pearson Correlation Condition for Robust Cosmetic Makeup Face Recognitions

The performance of face recognition has improved over the years; however, some limitations remain, especially with noise and defects such as occlusion, face pose, expression, and, in particular, cosmetic makeup. Makeup directly affects face characteristics, e.g., face shape, texture, and color, potentially leading to low classification precision. Thus, this research proposes a robust approach to enhance recognition accuracy under makeup using Pearson Correlation (PC) combined with channel selection (PCC). To further optimize the complexity, a parallel version of PCC was then investigated. This technique demonstrates its practicality and proficiency by outperforming a traditional PCA and plain PC in both accuracy and computational time.

Kanokmon Rujirakul, Chakchai So-In
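A minimal sketch of Pearson-correlation matching on flattened image vectors; mean-centering and normalization make the score invariant to global brightness and contrast shifts, which is one reason correlation is attractive under appearance changes. The gallery entries below are toy vectors, not face images:

```python
import numpy as np

def pearson(u, v):
    """Pearson correlation between two flattened image vectors."""
    u = u - u.mean()
    v = v - v.mean()
    return float((u @ v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def match(query, gallery):
    """Index of the gallery entry most correlated with the query."""
    return max(range(len(gallery)), key=lambda i: pearson(query, gallery[i]))

g0 = np.arange(16.0)           # stand-ins for flattened face images
g1 = g0[::-1]
query = 2.0 * g0 + 5.0         # brightness/contrast-shifted copy of g0
print(match(query, [g0, g1]))  # → 0
```

Each per-channel correlation is independent of the others, so the channel-wise scores parallelize naturally, which is the direction the parallel PCC variant takes.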
Flight Simulator for Serious Gaming

Providing entertainment is the primary concern of gaming. When this primary objective shifts to providing learning and training material, the result is called a simulator, or serious gaming. Learning through experiencing or facing an actual scenario is considered an effective learning technique. The limitations of experiential learning, and how simulations address those limitations, are also reviewed in this paper. Aviation is one of the most critical and potentially high-risk fields, where one has to spend a great deal of money and resources on training. Hence, serious gaming concepts have been serving as an effective cost-cutting solution in aviation training. This paper discusses the seriousness of a selected flight simulator and how it adopts teaching and learning concepts. How the simulator can be used in the learning curve is also discussed separately.

Aruni Nisansala, Maheshya Weerasinghe, G. K. A. Dias, Damitha Sandaruwan, Chamath Keppitiyagama, Nihal Kodikara, Chamal Perera, Prabhath Samarasinghe
Potential Z-Fighting Conflict Detection System in 3D Level Design Tools

Z-fighting is an effect that occurs in 3D scenes when two co-planar surfaces share similar values in the z-buffer, which leads to flickering and visual artifacts during rendering due to the conflicting order in which the surfaces are drawn. In 3D level design, scenes created by the tools can be complex, and level designers can inadvertently place co-planar surfaces that are susceptible to z-fighting. Level designers typically notice z-fighting artifacts through visual inspection during a 3D walkthrough test of the scene, which is time-consuming and easy to miss. To solve this issue, a z-fighting detection system for level design tools is proposed to streamline the process of detecting potential hotspots where z-fighting conflicts may occur between co-planar objects.

Pisal Setthawong
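The abstract does not publish the detection algorithm itself, so the following is only a minimal sketch of one way such a hotspot scan could work: flag pairs of triangles whose plane equations coincide within a tolerance. All function names and the tolerance value are illustrative assumptions, not the paper's method.

```python
import numpy as np

def plane_of(tri):
    """Normalized plane (unit normal n, offset d) of a triangle, with a
    canonical sign so opposite-facing co-planar surfaces still compare equal."""
    a, b, c = (np.asarray(v, dtype=float) for v in tri)
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, a)
    # Flip so the first nonzero normal component is positive.
    if n[0] < 0 or (n[0] == 0 and n[1] < 0) or (n[0] == 0 and n[1] == 0 and n[2] < 0):
        n, d = -n, -d
    return n, d

def coplanar(tri1, tri2, eps=1e-6):
    """Flag two triangles whose planes coincide within eps: a z-fighting hotspot."""
    n1, d1 = plane_of(tri1)
    n2, d2 = plane_of(tri2)
    return np.allclose(n1, n2, atol=eps) and abs(d1 - d2) < eps

floor   = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]   # z = 0 plane
overlay = [(2, 2, 0), (3, 2, 0), (2, 3, 0)]   # also z = 0: potential conflict
wall    = [(0, 0, 0), (0, 1, 0), (0, 0, 1)]   # x = 0 plane

print(coplanar(floor, overlay))   # True  -> report as a potential hotspot
print(coplanar(floor, wall))      # False
```

A production tool would additionally test whether the co-planar polygons actually overlap in space before reporting a conflict.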
Efficient Motion Estimation Algorithms for HEVC/H.265 Video Coding

This paper presents two fast motion estimation algorithms, based on the structures of the triangle and the pentagon respectively, for HEVC/H.265 video coding. These new search patterns determine motion vectors faster than the two TZSearch patterns - diamond and square - that are built into the motion estimation engine of HEVC. The proposed algorithms achieve a faster run-time with negligible video quality loss and increase in bit rate. Experimental results show that, at their best, the triangle and pentagon algorithms offer 63 % and 61.9 % run-time speed-ups respectively, compared to the TZSearch algorithms in the HEVC reference software.

Edward Jaja, Zaid Omar, Ab Al-Hadi Ab Rahman, Muhammad Mun’im Zabidi
A Synthesis of 2D Mammographic Image Using Super- Resolution Technique: A Phantom Study

The gold standard for early detection of breast cancer has been the mammogram. However, this technique still has limitations for women with dense breasts. Combining the mammogram with digital breast tomosynthesis overcomes the limitation but approximately doubles the exposure dose. This study focuses on reducing the radiation dose by synthesizing the 2D mammographic image from multiple tomosynthesis projection images using an image Super-Resolution technique based on sparse representation. We evaluated the resulting images using peak signal-to-noise ratio (PSNR), mean structural similarity (MSSIM), and phantom passing score. We compared the 2D mammographic image synthesized from multiple projection images to the one synthesized from a single central projection image. The one from multiple images yields better results, with a PSNR of 27.2426 and an MSSIM of 0.4436. For the phantom passing score, we obtained 5, 2, and 4 for fibers, groups of microcalcifications, and masses, respectively.

Surangkana Kantharak, Thanarat H. Chalidabhongse, Jenjeera Prueksadee
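PSNR, one of the quality metrics quoted in the abstract above, has a compact definition. A minimal sketch (the paper's actual evaluation pipeline and image data are not reproduced here; the noisy test image is synthetic):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)
# Add small uniform noise in [-5, 5] as a stand-in for reconstruction error.
noisy = np.clip(ref.astype(int) + rng.integers(-5, 6, size=ref.shape),
                0, 255).astype(np.uint8)
print(round(float(psnr(ref, noisy)), 1))   # roughly 38 dB for this noise level
```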
Using complex events to represent domain concepts in graphs

We have developed an event-based visualisation model for analysing patterns between news story data and stock prices. Visual analytics systems generally show a direct mapping from data to visualisation. We show that by inserting an intermediate step, which models an expert manipulating data, we can provide unique results that display patterns within the data under investigation and assist less expert users.

Riley T Perry, Cat Kutay, Fethi Rabhi
Unequal Loss Protection Mechanism for Wi-Fi based Broadcasting system using a Video Quality Prediction Model

The Wi-Fi based broadcasting system is a mobile Internet Protocol Television (IPTV) technology that transmits multimedia content to many local mobile users in real time. Unlike conventional Wi-Fi multimedia streaming systems that use separate unicast packets to serve each user, the Wi-Fi broadcasting system uses broadcast packets for scalability, because one broadcast packet can serve many users simultaneously. However, it must use FEC technology to recover from broadcast packet loss, because IEEE 802.11 does not provide a packet loss recovery method for broadcast packets. Considering the characteristics of multimedia data, adding FEC packets to the video source equally is not efficient; for better efficiency, Unequal Loss Protection (ULP) technology, for example, has been developed. This paper models the quality deterioration caused by broadcast packet losses in a Wi-Fi based broadcasting system and proposes a newly developed ULP system based on that quality deterioration model. The system is implemented and evaluated experimentally to demonstrate its superiority.

Dong Hyun Kim, Hyung-Yoon Seo, Byungjun Bae, Jong-Deok Kim
An Approximate Matching Preprocessing for Efficient Phase-Only Correlation-Based Image Retrieval

In this paper, we focus on phase-only correlation-based image matching, which is often used for biometrics image retrieval. Although systems using phase-only correlation of images provide image matching results of high quality, phase-only correlation-based image matching takes a long time, especially when there are a large number of images to be processed in the matching. In this paper, we propose an approximate matching preprocessing for pruning images that do not have to be matched to a query image. An empirical study shows the superiority of our proposal.

Honghang Wang, Masayoshi Aritsugi
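Phase-only correlation itself has a compact definition: the inverse FFT of the magnitude-normalized cross-power spectrum. This illustrative sketch (not the authors' implementation) shows why it is both accurate and expensive on large image sets, since every query/database pair costs several FFTs:

```python
import numpy as np

def phase_only_correlation(f, g):
    """POC surface: inverse FFT of the magnitude-normalized cross-power
    spectrum. The peak location gives the translation between f and g;
    the peak height (close to 1 for a pure shift) serves as a match score."""
    F, G = np.fft.fft2(f), np.fft.fft2(g)
    cross = F * np.conj(G)
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    return np.real(np.fft.ifft2(cross))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
shifted = np.roll(img, shift=(3, 5), axis=(0, 1))   # circular shift by (3, 5)

poc = phase_only_correlation(shifted, img)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(poc), poc.shape))
print(peak, round(float(poc[peak]), 3))             # (3, 5), peak near 1.0
```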
Pixel Art Color Palette Synthesis

Pixel art is created together with a color palette, which greatly affects the overall visual quality. From a given image with hundreds of thousands of colors, it is difficult to pick out a limited number of colors for the color palette. We propose an automatic system, which adopts steps similar to the manual creation process of pixel art artists, to effectively synthesize the color palette of a pixel art from a given image. Based on our approach, both artists and novices can easily derive a low-resolution image which is very close to the required final pixel art.

Ming-Rong Huang, Ruen-Rone Lee
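The paper's artist-inspired palette pipeline is not reproduced in the abstract; as a common baseline for this kind of task, palette synthesis is often approximated by color-space clustering. A small k-means sketch (initialization, k, and the iteration count are arbitrary choices, not the authors' parameters):

```python
import numpy as np

def palette_kmeans(img, k=4, iters=20, seed=0):
    """Choose k palette colors by k-means clustering over RGB pixels."""
    pixels = img.reshape(-1, 3).astype(float)
    uniq = np.unique(pixels, axis=0)
    rng = np.random.default_rng(seed)
    # Seed centers from distinct colors so no two start identical.
    centers = uniq[rng.choice(len(uniq), min(k, len(uniq)), replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest current palette color.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers.round().astype(int)

# Toy image with two flat regions: a 2-color palette recovers both colors.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[:, :4] = (255, 0, 0)
img[:, 4:] = (0, 0, 255)
palette = palette_kmeans(img, k=2)
print(sorted(map(tuple, palette)))
```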
Introducing a New Radiation Detection Device Calibration Method and Estimating 3D Distance to Radiation Sources

Radiation detection devices, also known as particle detectors, are widely used to track and identify radioactive sources within a given area. The 3D distance to such radioactive sources can be estimated using stereo radiation detection devices. In stereo vision, the devices have to be calibrated before they are used to acquire stereo images. In this work, we first introduce a new method to calibrate stereo radiation detection devices using a homography translation relationship. The radiation detection devices used in our approach are pinhole cameras. The calibrated pinhole cameras are then used to generate stereo images of radioactive sources using a pan/tilt device, and the 3D distance is estimated using the intrinsic and extrinsic calibration data and triangulation. Stereo vision cameras are used along with the pinhole cameras to obtain coincident 2D visual information. We performed two experiments to estimate the 3D distance using different input image data sets. The inferred 3D distance results had around a 5~6 % error, which confirms the accuracy of our proposed calibration method.

Pathum Rathnayaka, Seung-Hae Baek, Soon-Yong Park
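The final step mentioned in the abstract, triangulation from calibrated stereo geometry, reduces in the rectified case to Z = f·B/d. A toy sketch with made-up rig parameters (these are not the authors' calibration values):

```python
def triangulate_depth(focal_px, baseline_m, x_left, x_right):
    """Depth of a point from a rectified stereo pair: Z = f * B / disparity."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

f, B = 800.0, 0.10                                   # hypothetical rig
z = triangulate_depth(f, B, x_left=420.0, x_right=400.0)
print(z)                                             # 4.0 m at 20 px disparity

# A 1 px disparity error at this range shifts depth by roughly 5 %,
# comparable to the 5~6 % error band reported in the abstract.
z_err = triangulate_depth(f, B, 421.0, 400.0)
print(round(abs(z_err - z) / z * 100, 1))            # 4.8
```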
Integrating Feature Descriptors to Scanpath-based Decoding of Deformed Barcode Images

There is an extensive literature on the localization and decoding of barcode images. Most traditional scanline-based methods have difficulty decoding a highly deformed barcode. This paper thus presents a scanpath-based method for decoding deformed barcode images. The notion of local orientedness based on HOG descriptors, called Modal Orientation Deviation (MOD), is introduced to aid scanpath construction. The effectiveness of the proposed method is also discussed based on the reported test results.

Poonna Yospanya, Yachai Limpiyakorn
Detecting and extracting text in video via sparse representation based classification method

This paper describes a new approach to detect and extract video text more precisely and efficiently. The proposed approach combines frame differencing and the sparse representation based classification (SRC) method. The experiments demonstrate that the proposed method is effective and efficient for both scenes, i.e., detecting and extracting captions in a video and characters in a sign.

Bo Sun, Yang Wu, Feng Xu, Yongkang Xiao, Jun He, Chen Chao

.

Frontmatter
A Cost Model for Client-Side CaaS

Deploying cache-as-a-service (CaaS) at the corporate level reduces network bandwidth expense and improves performance. Careful consideration must be given when choosing a CaaS to achieve cost-effectiveness. This requires a CaaS service model that allows custom-made SLAs. This paper presents a flexible economic model of client-side CaaS as an early attempt in the field. The model has been evaluated against a realistic scenario and found promising.

Chaturong Sriwiroj, Thepparit Banditwattanawong
A Reliability Analysis of Cascaded TMR Systems

Cascaded TMR (Triple Modular Redundancy) has been used in various areas such as pipeline processing and redundant poly-Si TFTs. The original cascaded TMR with one voter has a single-point-of-failure problem, so many model studies of cascaded TMR have been carried out to obtain models more reliable and cost-effective than the original. The one-voter model in previous work improves reliability over the original by solving the single-point-of-failure problem; however, it requires a strict rule to achieve that improvement. The way to loosen the strict rule is to use more than one voter per stage, but using more than one voter in every stage is not a cost-effective solution. In this paper, we suggest a cost-effective new model of cascaded TMR that also loosens the strict rule of previous works.

Hyun Joo Yi, Tae-Sun Chung, Sungsoo Kim
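The standard reliability algebra behind such models is compact: a TMR stage survives when at least two of its three modules survive, and a cascade multiplies stage reliabilities. A sketch of the baseline model the paper builds on (the module and voter reliabilities are illustrative numbers, not the paper's):

```python
def tmr_stage(r_module, r_voter=1.0):
    """One TMR stage works when at least 2 of 3 modules work and its voter works:
    R = (3R_m^2 - 2R_m^3) * R_v."""
    return (3 * r_module**2 - 2 * r_module**3) * r_voter

def cascaded_tmr(r_module, r_voter, stages):
    """A cascade of identical TMR stages is a series system of stages."""
    return tmr_stage(r_module, r_voter) ** stages

r_simplex = 0.95 ** 3                     # 3 plain modules in series
r_tmr = cascaded_tmr(0.95, 0.999, 3)      # 3 TMR stages with near-ideal voters
print(round(r_simplex, 4), round(r_tmr, 4))
```

Note that 2-of-3 voting only helps while each module's reliability exceeds 0.5; at exactly 0.5 the stage reliability equals the module's.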
Availability Analysis for A Data Center Cooling System with (n,k)-way CRACs

Data center cooling systems cool computer equipment. Recently, these systems have been expected to economize energy consumption and to perform effectively in various environments, so it is important to design an appropriate data center cooling system infrastructure and analyze its dependability, especially availability and reliability. In this paper, we propose an (n,k)-way data center cooling system composed of n units of main CRAC (Computer Room Air Conditioner) and k units of spare CRAC. We also calculate an adequate number of main CRAC units (n). In addition, optimal values of n and k that achieve a cost-effective solution are derived from a cost-effectiveness equation.

Sohyun Koo, Tae-Sun Chung, Sungsoo Kim
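Under the usual assumption of identical, independent units, the availability of such an (n,k) arrangement is an n-out-of-(n+k) binomial sum. This sketch (the unit availability and n are made-up figures, and the paper's cost equation is not reproduced) shows the diminishing return that motivates optimizing k:

```python
from math import comb

def availability_n_k(n, k, a_unit):
    """Steady-state availability of n main + k spare CRACs: the system is up
    while at least n of the n + k identical, independent units are up."""
    total = n + k
    return sum(comb(total, up) * a_unit**up * (1 - a_unit)**(total - up)
               for up in range(n, total + 1))

# Hypothetical figures: each CRAC is available 99 % of the time, 4 mains needed.
for spares in range(3):
    print(spares, round(availability_n_k(4, spares, 0.99), 6))
```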
A Recovery Model for Improving Reliability of PCM WL-Reviver System

PCM (Phase Change Memory), which is taking center stage, has the characteristics of non-volatile memory, but as it supports only a limited number of writes, a wear-leveling technique is required. The WL-Reviver (Wear Leveling Reviver) system is a wear-leveling technique tailored to PCM, but it fails when internal or external errors occur. This paper proposes a recovery model based on the WL-Reviver system for internal and external error situations. Compared to previous work, our newly suggested model improves system reliability.

Taehoon Roh, Tae-Sun Chung, Sungsoo Kim
Real-time Sitting Posture Monitoring System for Functional Scoliosis Patients

In recent years, with the increase in sitting time, a real-time posture monitoring system is needed. We developed a sitting posture monitoring system with an accelerometer to evaluate postural balance. The unstable structure of the system was designed to assess asymmetrical balance caused by habitual sitting positions. Postural patterns of a normal group and a group of patients with functional scoliosis were analyzed. Consequently, the inclination angles of the scoliosis group were tilted to the posterior and left side. From these results, we conclude that the real-time sitting posture monitoring system can be utilized to measure the asymmetrical balance of patients and to support accurate diagnosis as well as treatment for individuals.

Ji-Yong Jung, Soo-Kyung Bok, Bong-Ok Kim, Yonggwan Won, Jung-Ja Kim

.

Frontmatter
Decreasing Size of Parameter for Computing Greatest Common Divisor to Speed up New Factorization Algorithm Based on Pollard Rho

Pollard Rho is an integer factorization algorithm for factoring the modulus in order to recover the private key, which is one of the two keys of RSA and is kept secret. However, this algorithm cannot finish for all values of the modulus. Later, the New Factorization algorithm (NF), based on Pollard Rho, was proposed to solve this problem of Pollard Rho. Nevertheless, both Pollard Rho and NF are time-consuming in finding the two large prime factors of the modulus, because they must compute the greatest common divisor in every iteration of the computation. In this paper, a method to speed up NF is presented by reducing the size of one of the two parameters used to compute the greatest common divisor; if the size of one of the two parameters is reduced, the computation time of the greatest common divisor is also decreased. The experimental results show that the computation time of this method is decreased for all values of the modulus. Moreover, the average computation time of the proposed method for factoring the modulus is about 6 % faster than NF.

Kritsanapong Somsuk
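For reference, the baseline the paper speeds up computes one gcd in every iteration; the NF variant and the proposed parameter-size reduction are not reproduced here. A classic Pollard Rho sketch on a toy modulus:

```python
from math import gcd

def pollard_rho(n, c=1):
    """Classic Pollard Rho with Floyd cycle detection: one gcd per iteration,
    which is exactly the per-iteration cost the paper targets."""
    if n % 2 == 0:
        return 2
    f = lambda x: (x * x + c) % n
    x = y = 2
    d = 1
    while d == 1:
        x = f(x)                      # tortoise: one step
        y = f(f(y))                   # hare: two steps
        d = gcd(abs(x - y), n)        # the per-iteration gcd computation
    return d if d != n else None      # d == n means this c failed; retry another c

n = 10403                             # 101 * 103, a toy RSA-style modulus
p = pollard_rho(n)
print(p, n // p)                      # the two prime factors of n
```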
Network Security Situation Assessment: A Review and Discussion

The number of network intrusion attempts has reached an alarming level. Questions have been raised about the efficiency of deploying intrusion detection and prevention systems that are concerned with single devices rather than the overall network security situation. Researchers have shown an increased interest in designing network security situation awareness, which consists of event detection, situation assessment, and situation prediction. Generally, network security situation assessment is a process that evaluates the security situation of the entire network in a particular time frame and uses the result to predict the incoming situation. In this paper, we review existing network security situation assessment methods from three major categories with respect to their strengths and limitations. A list of consideration criteria is summarized for future situation assessment model design.

Yu-Beng Leau, Selvakumar Manickam, Yung-Wey Chong
Protecting Binary Files from Stack-Based Buffer Overflow

Vulnerabilities that exist in many software systems can be exploited by attackers to cause serious damage to users. One such attack that has become widespread in the last decade is the buffer overflow attack. If successful, the attacker can execute arbitrary code with the same access privileges as the attacked process. Thus, if the attacked process is a root process, attackers can execute any code they want, causing a security breach in the system. In this paper, we propose a new solution to buffer overflow attacks that protects return addresses from being overwritten. Our solution works with string library functions, such as strcpy(), by preventing access to memory locations beyond the frame pointer of a function, and thus preventing overwriting of the return address. Unlike other approaches that have been used to address buffer overflow attacks, our solution can detect and fix buffer overflow vulnerabilities in executables (i.e., the .exe or binary files). In other words, our solution does not require the availability of the program source code, which may not be available for many applications, and does not require any hardware modifications, which can be expensive. Therefore, we developed a tool that can convert a vulnerable program into a safe version that is protected against buffer overflow attacks.

Sahel Alouneh, Heba Bsoul, Mazen Kharbutli
Meet-in-the-middle Attack with Splice-and-Cut Technique on the 19-round Variant of Block Cipher HIGHT

We show a meet-in-the-middle (MITM) attack with the Splice-and-Cut technique (SCT) on the 19-round variant of the block cipher HIGHT. The original 32-round HIGHT was proposed by Hong et al. in 2006; it applies an 8-branch Type-2 generalized Feistel network (GFN) with a 64-bit data block and a 128-bit secret key. The MITM attack was proposed by Diffie and Hellman in 1977 as a generic method to analyze symmetric-key cryptographic algorithms. SCT was proposed by Aoki and Sasaki in 2009 to improve the MITM attack. In this paper we show that 19-round HIGHT can be attacked with 2^8 bytes of memory, 2^8 + 2 pairs of chosen plaintexts and ciphertexts, and 2^120.7 encryption operations using the MITM attack with SCT.

Yasutaka Igarashi, Ryutaro Sueyoshi, Toshinobu Kaneko, Takayasu Fuchida
Nonintrusive SSL/TLS Proxy with JSON-Based Policy

Placing an interception proxy between a client and a web server has its own implications. It is therefore more practical to take a "middle" approach that can moderate ongoing and future SSL/TLS sessions without compromising user privacy. A policy rule expressed as JSON schema and data is proposed for handling SSL/TLS connections delegated by a non-intrusive, pass-through proxy.

Suhairi Mohd Jawi, Fakariah Hani Mohd Ali, Nurul Huda Nik Zulkipli
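The authors' actual JSON schema is not reproduced in the abstract; the rule below is a purely hypothetical illustration of the schema/data idea, with invented field names, showing how a pass-through proxy could consult such a rule per session:

```python
import json

# Hypothetical policy rule; field names are invented, not the paper's schema.
policy_json = """
{
  "rule": "block-weak-tls",
  "match": {
    "min_version": "TLSv1.2",
    "forbidden_ciphers": ["RC4", "DES", "3DES"]
  },
  "action": "terminate"
}
"""

def evaluate(policy, session):
    """Pass-through proxy decision: intervene only when the observed
    session violates the policy's match conditions."""
    m = policy["match"]
    # String comparison suffices here only because these version names
    # share a common prefix and single-digit minor versions.
    if session["version"] < m["min_version"]:
        return policy["action"]
    if any(c in session["cipher"] for c in m["forbidden_ciphers"]):
        return policy["action"]
    return "allow"

policy = json.loads(policy_json)
print(evaluate(policy, {"version": "TLSv1.2", "cipher": "AES128-GCM-SHA256"}))  # allow
print(evaluate(policy, {"version": "TLSv1.0", "cipher": "RC4-SHA"}))            # terminate
```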
Secure Problem Solving by Encrypted Computing

The advancement of encrypted computing technologies has increased the demand for problem solving systems that allow customers and problem solving providers to guarantee privacy and confidentiality in the cloud environment. Meanwhile, with the diversity of customer levels, improving the ease of describing requirements (problems) has become a serious challenge. In this paper, we propose a structured natural language based query scheme that improves the descriptiveness of problems, and an efficient private information retrieval scheme that enhances the security level of encrypted computing.

Hiroshi Yamaguchi, Phillip C.-Y. Sheu, Shigeo Tsujii
Anomaly Detection from Log Files Using Data Mining Techniques

Log files are created by devices or systems in order to provide information about processes or actions that were performed. Detailed inspection of security logs can reveal potential security breaches and show us system weaknesses. In our work we propose a novel anomaly-based detection approach based on data mining techniques for log analysis. Our approach uses Apache Hadoop to enable processing of large data sets in parallel. Dynamic rule creation enables us to detect new types of breaches without further human intervention. The overall error rates of our method are below 10 %.

Jakub Breier, Jana Branišová
Improved Remote User Trust Evaluation Scheme to apply to Social Network Services

Social networking services are interactive services that allow users to build friendships or social relations with others on the Internet. They have played a central role in human interchange networks based on openness. Recently, however, groundless rumors have spread through abuse of this openness, and many users have been adversely affected by various attacks violating private information. Therefore, information on social networks should be delivered to, or provided by, trustworthy persons. Accordingly, this paper proposes a remote trust evaluation scheme that improves on existing schemes, and provides a method for improving security by applying the proposed scheme to access control on social networks.

Youngwoong Kim, Younsung Choi, Dongho Won
Threat assessment model for mobile malware

Today the smartphone is by far the most popular and widely used device, surpassing the laptop. Unfortunately, its popularity also makes it the primary target of malicious attacks. Mobile security, on the other hand, is still in its infancy, and improvements are needed to provide adequate protection to users. This paper contributes a threat assessment model that enables a systematic analysis and evaluation of mobile malware. The same model can also be used to evaluate the effectiveness of security protection measures.

Thanh v Do, Fredrik B. Lyche, Jørgen H. Lytskjold, Do van Thuan
Performance Evaluation of System Resources Utilization with Sandboxing Applications

Sandboxing is a popular technique used for safely executing untested code or testing untrusted programs inside a secure environment. It can be employed at the operating system level or at the application level. It limits the level of access requested by untested programs by running them inside a secure environment; therefore, any malicious or improperly coded program aiming to damage hardware or software resources will be stopped by the sandbox. In this paper, we assess the effect of sandboxing on the utilization of system resources. We examine and evaluate operating system performance with the Sandboxie, BufferZone, and Returnil sandbox applications. Different performance parameters are considered, such as the CPU execution time for each sandbox and the read/write speed of various input/output devices such as memory and disks. Moreover, we evaluated the mentioned sandboxing applications under the effect of a virus attacking the operating system. We defined our own performance metrics containing the most important parameters used in evaluating the related research work.

Tarek Helmy, Ismail Keshta, Abdallah Rashed
Fault Attacks by using Voltage and Temperature Variations: An Investigation and Analysis of Experimental Environment

Physical attacks are powerful tools for exploiting implementation weaknesses of embedded devices, even those using robust cryptographic algorithms. Various physical attack techniques have been researched, both to make practical several theoretical fault/error models proposed in the open literature and to outline new kinds of vulnerabilities. In this paper, we investigate and summarize the classification of physical attacks; in particular, we focus on fault attacks using voltage and temperature variations, one type of glitch attack among non-invasive attacks. We also investigate and compare several experimental environments for fault attacks using temperature and voltage variations.

Young Sil Lee, Non Thiranant, HyeongRag Kim, JungBok Jo, HoonJae Lee
Pseudoinverse Matrix over Finite Field and Its Applications

With the spread of smart devices such as smartphones and tablets, the challenge for these devices lies in security problems under limited computational capacity, which attracts researchers in both academia and industry. In this paper, we tackle two interesting security problems: changing a shared key between two users, and privacy-preserving auditing for cloud storage. Our solution is based on the pseudoinverse matrix, a generalization of the inverse matrix. In addition, it can be applied to devices with limited computational capacity due to its fast computation.

Dang Hai Van, Nguyen Dinh Thuc
Multi-Agent based Framework for Time-correlated Alert Detection of Volume Attacks

Recent and emerging cyber-threats have justified the need to keep improving network security technologies such as Intrusion Detection Systems (IDSs), to keep them abreast of rapidly evolving technologies that create diverse security challenges. A post-processing filter is required to reduce the false positives and the large number of alerts generated by network-based IDSs for the timely detection of intrusions. This paper investigates a statistical detection approach for volume anomalies such as Distributed Denial-of-Service (DDoS) attacks, through the use of a multi-agent framework that hunts for time-correlated abnormalities in different behaviours of network events. Employing statistical process-behaviour charts based on the Exponentially Weighted Moving Average (EWMA) one-step-ahead forecasting technique, the framework correlates undesirable deviations in order to identify abnormal patterns and raise alarms. This paper provides the architecture and mathematical foundation of the proposed framework prototype, describing the specific implementation and testing of the approach based on a network log generated from a 2012 cyber range simulation experiment as well as the DARPA 2000 datasets. Its effectiveness in detecting time-correlated anomaly alerts and in reducing the number of alerts and false positive alarms from the IDS output is evaluated.

Abimbola Olabelurin, Georgios Kallos, Suresh Veluru, Muttukrishnan Rajarajan
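The core detector named above is standard: an EWMA statistic compared against L-sigma control limits, alerting when a metric's smoothed value leaves the band. A minimal single-metric sketch (λ, L, the warm-up length, and the traffic series are arbitrary; the multi-agent correlation layer is not shown):

```python
def ewma_alerts(series, lam=0.3, L=3.0, warmup=10):
    """Flag indices where the EWMA of a traffic metric leaves its
    L-sigma control band (a single-metric process-behaviour chart)."""
    baseline = series[:warmup]
    mu = sum(baseline) / warmup
    var = sum((x - mu) ** 2 for x in baseline) / warmup
    limit = L * (var * lam / (2 - lam)) ** 0.5   # EWMA control limit
    z, alerts = mu, []
    for i, x in enumerate(series):
        z = lam * x + (1 - lam) * z              # one-step EWMA update
        if i >= warmup and abs(z - mu) > limit:
            alerts.append(i)
    return alerts

# Steady traffic around 100 pkt/s, then a sustained volume spike at t = 12.
traffic = [100, 101, 99, 100, 102, 98, 100, 101, 99, 100,
           100, 101, 400, 420, 410, 100, 100, 100, 100, 100]
alerts = ewma_alerts(traffic)
print(alerts)   # flags the spike from index 12, plus the EWMA's decay tail
```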
Side Channel Attacks on Cryptographic Module: EM and PA Attacks Accuracy Analysis

Extensive research on modern cryptography ensures significant mathematical immunity to conventional cryptographic attacks. However, side channel techniques such as power analysis and electromagnetic attacks are powerful tools for extracting the secret key from cryptographic devices. These techniques pose a serious threat to hardware implementations of cryptographic algorithms. In this paper an extensive analysis of side channel attacks on cryptographic devices is presented, in which we study EM and PA attack methods as side channel attacks on a hardware implementation of the crypto-module. Finally, we establish a comparison table among different attack tools and methods for accuracy analysis.

HyunHo Kim, Ndibanje Bruce, Hoon-Jae Lee, YongJe Choi, Dooho Choi
Exploring Bad Behaviors from Email Logs

Humans are the weakest factor in organization security and are always the target of attackers. Attackers find and exploit human vulnerabilities caused by bad behaviors. Organizations should proactively detect those bad behaviors and prevent them at first hand. This paper studies employees with the bad habit of sending corporate files to public email addresses that are not allowed by the organization. We use email logs as a source to generate a social network graph and collect two types of out-degree for each node: the regular out-degree and the out-degree to "not allowed" public email addresses. We analyze the correlation of these two numbers and find that people with this bad habit tend to send mail to public email addresses when they have to send files outside the company. The result of this paper gives us parameters for calculating a risk score in a further study using the integrated Social Network-Attack Graph (SN-AG) analysis approach.

Supachai Kanchanapokin, Sirapat Boonkrong
Using PKI to Provide Credential Delegation in non Web-based Federations

Authentication is basic functionality required by most services that provide access to protected resources or personalized content. In order to authenticate to services, users maintain sets of credentials that they use to prove their identity. Credential delegation allows users to seamlessly access multiple services across the network. The concept has demonstrated its utility in the scope of single-domain authentication mechanisms; therefore, emerging identity federations are expected to provide similar functions, too. Recently, various non web-based federation models have emerged; unfortunately, they do not properly cover credential delegation. In this paper we introduce a mechanism utilizing digital certificates and PKI, which provides support for credential delegation in non web-based federations. The viability of the concept is demonstrated by integrating the mechanism with the Moonshot federation framework. However, the solution forms an independent middleware layer that can be used by several federation models.

Daniel Kouřil, Marcel Poul, Michal Procházka
Modeling and Simulation of Identification Protocol for Wireless Medical Devices in Healthcare System Communication

The digital era is changing the nature of the health care delivery system through the integration of Information Technology's potential to improve the quality, safety, and efficiency of health care. Medical devices may be connected to wireless and wired networks and communicate with nearby receivers that are connected to landline networks, cellular systems, or broadband facilities providing more ubiquitous connectivity, allowing uninterrupted monitoring of patients in transit via the Internet. This promise of universal connectivity enables data availability services from high-end systems such as routers, gateways, firewalls, and web servers to low-end systems such as smartphones, tablets, etc. Hence, security has become an essential part of today's computing world, given the ubiquitous nature of the entities and the evolution of wireless technology. This paper presents a framework for healthcare system communication in which wireless medical devices are challenged with an identification protocol procedure that enables negotiation between wireless medical devices and specifies the authorization requirements that must be met before accessing the network and patients' data. To illustrate the feasibility of the work for real-world protocols, we simulate the scheme using the Scyther tool, applying the IEEE 802.16-2004 PKM protocols of the WiMAX protocol standards.

Hyun Ho Kim, Ndibanje Bruce, SuHyun Park, JungBok Jo, Hoon-Jae Lee
A Method for Web Security Context Patterns Development from User Interface Guidelines Based on Structural and Textual Analysis

Currently, only a small number of user agents present information on the web security context to the user in an easily understandable way. W3C has created the WSC-UI document as a standard of security suggestions for the web security context. Applying it in designing secure user agents requires human resources for identifying specifications, which takes much time and expense, and may also result in incompleteness. Security patterns have been used to collect solutions to recurring problems. Therefore, this research proposes a method for creating web security context patterns, based on the WSC-UI document, and for identifying the relationship structure of the patterns. The proposed patterns are validated and refined according to an initial validation list. Developers can specify security requirements based on the proposed patterns according to the specified application approach, gaining the benefits in designing a user agent that is aware of the web security context.

Pattariya Singpant, Nakornthip Prompoon
A Comparative Study of Combination with Different LSB Techniques in MP3 Steganography

Steganography hides the existence of data inside a cover file. Different file formats are used in steganography, such as text, image, audio, and video; of these, audio steganography is followed in this paper. One of the major objectives of hiding data using audio steganography is to hide the data in an audio file so that the changes in the intensity of the host's bits cannot be detected by the human auditory system. The focus of this paper is on a time domain technique, i.e., the LSB technique of audio steganography. The method used in this paper hides the data in a combination of LSBs instead of hiding the data in the least significant bit only. The results are compared using the PSNR, BER, and correlation parameters.

Mohammed Salem Atoum
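As an illustration of the "combination of LSBs" idea (heavily simplified: plain integer PCM samples instead of a real MP3 pipeline, and an invented 2-bit layout rather than the paper's scheme), embedding and extraction can be sketched as:

```python
def embed_lsb(samples, message, n_bits=2):
    """Hide message bytes in the n_bits least significant bits of each
    (non-negative) PCM sample, least significant bit first."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(samples) * n_bits:
        raise ValueError("cover too small")
    out = list(samples)
    for i in range(0, len(bits), n_bits):
        idx = i // n_bits
        sample = out[idx] & ~((1 << n_bits) - 1)   # clear the target LSBs
        for j, bit in enumerate(bits[i:i + n_bits]):
            sample |= bit << j
        out[idx] = sample
    return out

def extract_lsb(samples, length, n_bits=2):
    """Recover `length` bytes embedded by embed_lsb."""
    bits = [(s >> j) & 1 for s in samples for j in range(n_bits)]
    return bytes(sum(bits[i * 8 + k] << k for k in range(8))
                 for i in range(length))

cover = list(range(1000, 1100))     # stand-in 16-bit PCM sample values
stego = embed_lsb(cover, b"hi")
print(extract_lsb(stego, 2))        # b'hi'
```

With 2 LSBs per sample, each modified sample changes by at most 3, which is why such distortion is evaluated with PSNR, BER, and correlation rather than heard.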
Review of Digital Forensic Investigation Frameworks

Digital forensic investigation has seen tremendous change in the past 25 years. From the age of early computers to the current day of mobile and storage devices, the crime rate has grown as well. With the diversity of crimes, frameworks have also been modified over time to keep up with the pace of the crimes being committed. This paper amalgamates the major approaches and models that have helped shape the digital forensic process. Each discussed model is followed by its advantages and shortcomings.

Ritu Agarwal, Suvarna Kothari

.

Frontmatter
Knowledge Discovery in Dynamic Data Using Neural Networks

This article aims at knowledge discovery in dynamic data via classification based on neural networks. In our experimental study we used three different types of neural networks, based on the Hebb, Adaline, and backpropagation training rules. Our goal was to discover important market (Forex) patterns which repeatedly appear in the market history. The developed classifiers based upon neural networks should effectively look for the key characteristics of the patterns in dynamic data. We focus on the reliability of recognition made by the described algorithms with optimized training patterns, based on reducing the calculation costs. To interpret the data from the analysis, we created a basic trading system and traded all recommendations provided by the neural network.

Eva Volna, Martin Kotyrba, Michal Janosek
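
Of the three training rules compared above, the Hebb rule is the simplest. A minimal sketch on a toy bipolar AND problem (the dataset and learning rate are illustrative, not from the paper) might look like:

```python
def hebb_train(patterns, targets, lr=1.0):
    """Hebbian learning: w_i += lr * x_i * y for each training pair."""
    n = len(patterns[0])
    w = [0.0] * n
    b = 0.0
    for x, y in zip(patterns, targets):
        for i in range(n):
            w[i] += lr * x[i] * y
        b += lr * y
    return w, b

def predict(w, b, x):
    """Threshold the weighted sum at zero (bipolar output)."""
    s = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s >= 0 else -1

# Bipolar AND truth table, a classic Hebb-learnable pattern
X = [(1, 1), (1, -1), (-1, 1), (-1, -1)]
T = [1, -1, -1, -1]
w, b = hebb_train(X, T)
preds = [predict(w, b, x) for x in X]
```

Adaline and backpropagation replace this one-shot correlation update with error-driven gradient updates, which is what makes them usable on noisier market data.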
Data Mining for Industrial System Identification: A Turning Process

The modeling of an industrial process is always a challenging issue and has a significant effect on the performance of the industry. In this study, one of the most important industrial processes, a turning process, is considered as a black box system. Since it is also a dynamic system, i.e., its characteristics change over time, the system identification method has been applied to the measurement data in order to obtain an empirical model explaining a system output, surface roughness. The inputs of the system are feed rate, cutting speed and tool nose radius. According to the study, three non-parametric models, Box-Jenkins, autoregressive moving average with exogenous inputs (ARMAX) and output error (OE), are recommended for constructing mathematical models based on data mining of the measurements available from the manufacturing process. These system identification models are appropriate for modeling the dynamic turning process since they have the capability to construct the dynamic and noise parameters separately.

Karin Kandananond
Investigating the Effectiveness of E-mail Spam Image Data for Phone Spam Image Detection Using Scale Invariant Feature Transform Image Descriptor

The increasing number of spam images on mobile phones has become a serious problem, steadily annoying users. One big issue in developing an effective phone spam image detection system using machine learning and data mining techniques is the unavailability of sufficient phone spam image data. In this study, we demonstrate that utilizing similar e-mail spam image data, selected by chi-square similarity distance, is an effective way to develop a phone spam image classifier. We compared the performance of our approach with one using randomly selected e-mail spam image data and showed that our approach works better. Our analysis further suggests that a more sophisticated clustering algorithm could improve the performance.

So Yeon Kim, Yenewondim Biadgie, Kyung-Ah Sohn
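
The chi-square similarity distance used to pick similar e-mail spam images can be sketched as follows, assuming images are compared through normalized feature histograms (the histograms below are hypothetical):

```python
def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms; 0 means identical."""
    return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h1, h2))

# Hypothetical normalized histograms of a phone image and two e-mail images
phone_img = [0.2, 0.3, 0.5]
email_a   = [0.25, 0.25, 0.5]   # similar histogram
email_b   = [0.7, 0.2, 0.1]     # dissimilar histogram
closer = min([email_a, email_b], key=lambda h: chi_square_distance(phone_img, h))
```

E-mail images with the smallest distance to the phone-image distribution would be the ones added to the training set, rather than randomly selected ones.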
Features extraction for classification of focal and non-focal EEG signals

The electroencephalogram (EEG) is the most often used signal for detecting epileptic seizures. For successful epilepsy surgery, it is very important to localize the epileptogenic area. In this paper, a new method is proposed to classify focal and non-focal EEG signals. The EEG signal is decomposed by empirical mode decomposition (EMD). The average Renyi entropy and the average negentropy of the IMFs of the EEG signals are computed as features. The class discrimination ability of these features is quantified using the Kruskal-Wallis statistical test. These features are fed as input to a neural network classifier for classification of focal and non-focal EEG signals. Experimental results are presented to show the effectiveness of the proposed method.

Khushnandan Rai, Varun Bajaj, Anil Kumar
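
As one illustration, the Renyi entropy feature computed for each IMF could be implemented as below; the distribution is a toy example, and a real pipeline would first estimate a probability distribution from each IMF's samples:

```python
import math

def renyi_entropy(p, alpha=2.0):
    """Renyi entropy of order alpha for a probability distribution p."""
    if alpha == 1.0:
        # Shannon entropy is the limit as alpha -> 1
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** alpha for pi in p if pi > 0)) / (1.0 - alpha)

uniform = [0.25] * 4                      # maximally uncertain distribution
peaked = [0.97, 0.01, 0.01, 0.01]         # nearly deterministic distribution
h_uniform = renyi_entropy(uniform, alpha=2.0)
h_peaked = renyi_entropy(peaked, alpha=2.0)
```

The uniform distribution attains the maximum entropy log(n), while a peaked distribution scores lower, which is what makes the averaged entropy over IMFs discriminative between signal classes.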
Relations on Intuitionistic Fuzzy Soft Multi Sets

The theories of soft sets and multisets are vital mathematical tools for handling uncertainties about vague concepts. In this paper we define a new operation, the product of two intuitionistic fuzzy soft multi sets, and present the concept of relations on intuitionistic fuzzy soft multi sets. The notions of null relation and absolute relation are also defined. We study their properties and discuss reflexive, symmetric and transitive intuitionistic fuzzy soft multi relations. Some new results, along with illustrative examples, are put forward in this work.

Anjan Mukherjee, Ajoy Kanti Das
Towards Automatically Retrieving Discoveries and Generating Ontologies

For the web to become intelligent, machines need to be able to extract the nature and semantics of various concepts and the relationships between them. Most approaches involve manually teaching the machine about different entities and their properties, manually constructing an ontology. This paper discusses an approach where the necessary metadata is extracted automatically from Wikipedia, the online encyclopedia. This metadata is then used to compare documents, allowing them to be clustered together so that similar documents can be identified and alternative knowledge discovered. The results show that an ontology indicating the relationships between types of documents can be identified automatically and that alternative knowledge can also be discovered.

Kenneth Cosh
Improving process models discovery using AXOR clustering algorithm

The goal of process mining is to discover process models from event logs. Real-life processes tend to be less structured and more flexible. Faced with unstructured processes, classical process mining algorithms generate spaghetti-like process models which are hard to comprehend. One way to cope with these models consists of dividing the log into clusters in order to analyze reduced sets of cases. In this paper, we propose a new clustering approach where cases are restricted to activity profiles. We evaluate the quality of the formed clusters using established fitness and comprehensibility metrics, on the basis of a distance using the logical XOR operator. Through a significant real-life case study, we illustrate our approach and show its interest, especially for flexible environments.

Hanane Ariouat, Kamel Barkaoui, Jacky Akoka
Simple Approaches of Sentiment Analysis via Ensemble Learning

Twitter has become a popular microblogging tool whose user base grows every minute. It allows users to post messages, known as 'Tweets', of up to 140 characters each. Tweets have become extremely attractive to the marketing sector, since they can indicate customer success or presage public relations disasters far more quickly than web pages or traditional media. Moreover, classifying the sentiment polarity of tweet content as positive or negative is a currently active research topic. Our experiments on sentiment analysis of tweet contexts show that accuracy can be improved using ensemble learning, formed by the majority voting of Support Vector Machine, Naive Bayes, SentiStrength and Stacking.

Tawunrat Chalothom, Jeremy Ellman
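
The majority-voting combination of the four base classifiers can be sketched as follows; the per-classifier outputs are hypothetical:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by majority voting.

    Ties are broken by first-seen order via Counter.most_common.
    """
    combined = []
    for labels in zip(*predictions):
        combined.append(Counter(labels).most_common(1)[0][0])
    return combined

# Hypothetical outputs of four base classifiers on five tweets
svm   = ["pos", "neg", "pos", "neg", "pos"]
nb    = ["pos", "neg", "neg", "neg", "pos"]
senti = ["neg", "neg", "pos", "pos", "pos"]
stack = ["pos", "pos", "pos", "neg", "neg"]
ensemble = majority_vote([svm, nb, senti, stack])
```

The ensemble can only outperform its members when their errors are not perfectly correlated, which is why diverse base classifiers (lexicon-based SentiStrength alongside statistical learners) are combined.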
Effective Trajectory Similarity Measure for Moving Objects in Real-World Scene

Trajectories of moving objects provide fruitful information for analyzing the objects' activities; therefore, numerous studies have tried to obtain semantic information from trajectories by using clustering algorithms. In order to cluster trajectories, a similarity measure for them must be defined first. Most existing methods utilize dynamic programming (DP) based similarity measures to cope with trajectories of different lengths. However, DP-based similarity measures do not have enough discriminative power to properly cluster trajectories from real-world environments. In this paper, an effective trajectory similarity measure is proposed, based on geographic and semantic similarities that share the same scale. The importance of the geographic and semantic information can therefore be easily controlled by a weighted sum of the two similarities. Through experiments on a challenging real-world dataset, the proposed measure is shown to have better discriminative power than the existing method.

Moonsoo Ra, Chiawei Lim, Yong Ho Song, Jechang Jung, Whoi-Yul Kim
A Prediction of Engineering Students Performance from Core Engineering Course Using Classification

All engineering students in Thailand must complete four core engineering courses: mechanics, materials, drawing and computer programming. These four core courses form an essential foundation. Prediction of student academic performance helps instructors develop a good understanding of how well or how poorly students perform, so they can take proper proactive measures to improve student learning; students can likewise use predictions to aim for higher performance in the future. This paper focuses on developing a predictive model of student academic performance in core engineering courses. A total of 6,884 records were collected from the years 2004 to 2010. Five classification models were developed using a decision tree, naive Bayes, k-nearest neighbors, a support vector machine, and a neural network, respectively. The results show that the neural network model generates the best prediction, with 89.29 % accuracy.

Nachirat Rachburee, Wattana Punlumjeak, Sitti Rugtanom, Deachrut Jaithavil, Manoch Pracha
Evolutionary Circular-ELM for the Reduced-Reference Assessment of Perceived Image Quality

At present, image quality is very important: audiences need to receive an image as undistorted as the original. Image quality can be lost through storage, transmission, compression and rendering, so mechanisms that can assess visual quality in line with human perception are required. Computational Intelligence (CI) paradigms represent a suitable technology for this challenging problem. In this paper, the Evolutionary Circular-ELM (EC-ELM) is presented, which extends Circular-ELM (C-ELM), itself an extension of the Extreme Learning Machine (ELM), with Differential Evolution (DE) to select appropriate weights and hidden biases, improving performance on the visual quality assessment problem within the proposed framework. Experimental results on recognized benchmarks with four different types of distortion confirm that the EC-ELM maps visual signals into quality scores closer to the real quality scores than ELM, the Evolutionary Extreme Learning Machine (E-ELM) and the original C-ELM, and that it is also stable.

Sarutte Atsawaraungsuk, Punyaphol Horata
Enhanced Web Page Cleaning for Constructing Social Media Text Corpora

Web page cleaning is one of the most essential tasks in Web corpus construction. The intention is to separate the main content from navigational elements, templates, and advertisements, often referred to as boilerplate. In this paper, we particularly enhance Web page cleaning applied to pages containing comments and introduce a new training corpus for that purpose. Besides extending an existing boilerplate detection algorithm by means of a comment classifier, we train and test different classifiers on extended feature sets solving a two-class problem (content vs. boilerplate) on our corpus and an existing benchmark corpus. Results show that the proposed approach outperforms existing methods, particularly on comment pages from different domains. Finally, we point out that our trained classifiers are domain independent and, with only small adjustments, transferable to other languages.

Melanie Neunerdt, Eva Reimer, Michael Reyer, Rudolf Mathar
Finding Knee Solutions in Multi-Objective Optimization Using Extended Angle Dominance Approach

The aim of this paper is to develop a knee-based multi-objective evolutionary algorithm (MOEA), a method to find optimal trade-off solutions among all available solutions. In multi-objective optimization, knee regions represent parts implicitly preferred by the decision maker (DM). The proposed approach uses the extended angle dominance concept to guide the solution process towards knee regions. The extent of the obtained solutions can be controlled by means of a user-supplied density controller parameter. The approach is verified on two- and three-objective knee-based test problems. The results show that our approach is competitive with well-known knee-based MOEAs from a convergence viewpoint.

Sufian Sudeng, Naruemon Wattanapongsakorn
Affective Learning Analysis of Children in a Card Sorting Game

The purpose of this paper is to provide an affective learning analysis of children while they play a card sorting game. The electroencephalogram (EEG) signals of 8 preschoolers aged 4 to 6 years were collected (a) while they were playing a card sorting game and (b) while they were observing affective faces. Features were extracted from the EEG signals using Kernel Density Estimation (KDE). A Multi-Layer Perceptron (MLP) was used to classify the signals and generate affective maps of the EEG recorded while the children were playing the game. The initial results show that the children's affective states are unique and that different affects may drive a child's performance. This analysis shows the potential of the affective learning analysis approach for assessing educational tools such as computer games.

Marini Othman, Abdul Wahab, Abdul Qayoom, Mohd Syarqawy Hamzah, Muhamad Sadry Abu Seman
Application of Mean and Median Frequency Methods for Identification of Human Joint Angles Using EMG Signal

The analysis of surface electromyography (EMG) signals is generally based on three major issues: the detection of muscle force, muscle geometry, and muscle fatigue. To date, no single technique can analyse all three issues. Mean frequency (MNF) and median frequency (MDF) have been successfully applied as muscle force and fatigue indices in previous studies; however, there is a lack of consensus on the effect of muscle geometry as joint angles vary. In this paper, a modification of MNF and MDF using a min-max normalization technique is proposed to provide a consistent relationship between feature value and joint angle across subjects. The results show that MNF and MDF extracted from normalized EMG have a stronger linear relationship with elbow joint angle than the traditional MNF and MDF methods, increasing with increasing elbow angle during isometric flexion. As a result, the modified MNF and MDF features could be used as a universal index covering all three issues: muscle fatigue, muscle force, and muscle geometry.

Sirinee Thongpanja, Angkoon Phinyomark, Chusak Limsakul, Pornchai Phukpattaranont
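
A minimal sketch of the MNF and MDF features and the min-max normalization step, using a toy power spectrum rather than real EMG data:

```python
def mean_frequency(freqs, power):
    """MNF: power-weighted average frequency of the spectrum."""
    return sum(f * p for f, p in zip(freqs, power)) / sum(power)

def median_frequency(freqs, power):
    """MDF: frequency at which cumulative power reaches half the total."""
    half = sum(power) / 2.0
    acc = 0.0
    for f, p in zip(freqs, power):
        acc += p
        if acc >= half:
            return f
    return freqs[-1]

def min_max_normalize(values):
    """Rescale feature values to [0, 1], as in the paper's modification."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

freqs = [10, 20, 30, 40]          # Hz, toy frequency bins
power = [1.0, 2.0, 2.0, 1.0]      # symmetric toy power spectrum
mnf = mean_frequency(freqs, power)
mdf = median_frequency(freqs, power)
```

Normalizing each subject's feature values to [0, 1] removes between-subject offsets, which is what makes the joint-angle relationship consistent across subjects.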
A Study of Big Data Solution Using Hadoop to Process Connected Vehicle’s Diagnostics Data

Recently, we have witnessed a period in which things are connected to the Internet, and the vehicle has not been left behind: the connected car field is currently being explored. Connected vehicles will undoubtedly generate a huge amount of vehicle diagnostics data, which will be sent to remote servers or to vehicle cloud providers. As the amount of vehicle diagnostics data increases, the actors in the automotive ecosystem will find it difficult to perform real-time analysis in order to simulate or design further services based on the data gathered from connected vehicles. In this paper, the Apache Hadoop framework and its ecosystem, particularly Hive and Sqoop, are deployed to process vehicle diagnostics data and deliver useful outcomes that actors in the automotive ecosystem may use to provide new services to car owners. A study of a big data solution for processing vehicle diagnostics data from connected vehicles using Hadoop is proposed.

Lionel Nkenyereye, Jong-Wook Jang
Wiki SaGa: An Interactive Timeline to Visualize Historical Documents

Searching for information inside a repository of digitised historical documents is a very common task. A timeline interface that represents the historical content while performing the same search function reveals better results to researchers. This paper presents the integration of the SIMILE Timeline within a wiki, named Wiki SaGa, containing a digitised version of the Sarawak Gazette. Compared to a traditional list of documents, the proposed approach allows events to be displayed and relevant information to be searched.

Daniel Yong Wen Tan, Bali Ranaivo-Malançon, Narayanan Kulathuramaiyer
Discovering genomic associations on cancer datasets by applying sparse regression methods

Association analysis of gene expression traits with genomic features is crucial to identify the molecular mechanisms underlying cancer. In this study, we employ the sparse regression methods Lasso and GFLasso to discover genomic associations. Lasso penalizes a least squares regression by the sum of the absolute values of the coefficients, which in turn leads to sparse solutions. GFLasso, an extension of Lasso, fuses regression coefficients across correlated outcome variables, which is especially suitable for the analysis of gene expression traits having an inherent network structure as output traits. Our study considers the combined benefits of these computational methods and investigates the identified genomic associations. Real genomic datasets from breast cancer and ovarian cancer patients are analyzed by the proposed approach. We show that the combined effect of both methods has a significant impact in identifying the crucial cancer-causing genomic features with both weaker and stronger associations.

Reddy Rani Vangimalla, Kyung-Ah Sohn
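
The sparsity induced by the Lasso penalty comes from the soft-thresholding operator; the sketch below shows it shrinking small coefficients exactly to zero (the coefficient values are illustrative, and this is not the paper's GFLasso implementation):

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator, the core update in Lasso coordinate descent."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# Hypothetical least-squares coefficients: small ones become exactly zero,
# large ones are shrunk toward zero by the penalty lam.
beta_ols = [2.5, -0.3, 0.0, 1.1, -4.0]
beta_lasso = [soft_threshold(b, 1.0) for b in beta_ols]
```

Features whose coefficients are driven exactly to zero are dropped from the model, which is how Lasso selects a sparse set of candidate genomic features.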
The Adaptive Dynamic Clustering Neuro-Fuzzy System for Classification

This paper proposes a neuro-fuzzy method for classification using adaptive dynamic clustering. The method has three parts: the first finds the proper number of membership functions using adaptive dynamic clustering, the second transforms the values to binary, and the final part performs classification using a neural network. Furthermore, the weights from the neural network's learning process are used for feature elimination to perform rule extraction. The experiments used datasets from UCI to verify the proposed methodology. The results show the high performance of the proposed method.

Phichit Napook, Narissara Eiamkanitchat
A Hybrid Approach of Neural Network and Level-2 Fuzzy set

This paper presents a new high-performance algorithm for classification problems. The structure of the hybrid approach of neural network and level-2 fuzzy set includes two main processes. The first process is the learning algorithm, which applies a combination of the multilayer perceptron neural network and the level-2 fuzzy set. The outputs from the learning process are fed to the classification process, which uses the K-nearest neighbor method. The classification results on standard datasets show better accuracy than other high-performance neuro-fuzzy methods.

Jirawat Teyakome, Narissara Eiamkanitchat
Developing Term Weighting Scheme Based on Term Occurrence Ratio for Sentiment Analysis

Term weighting is an important task in sentiment classification. Inverse document frequency (IDF) is one of the most popular methods for this task; however, in some situations, such as supervised learning for sentiment classification, it does not weight terms properly, because it neglects category information and assumes that a term occurring in a smaller set of documents should get a higher weight. In this paper, I propose a sentiment classification framework focusing on the comparison of various term weighting schemes, including Boolean, TF, TFIDF and a novel term weighting scheme (TOW). I evaluated these methods on the Internet Movie Database corpus with four supervised learning classifiers and found TOW weighting most effective in experiments with the SVM, NB and NN algorithms. Based on these experiments, TOW weighting with the SVM algorithm yielded the best performance, with an accuracy of 93.45 %.

Nivet Chirawichitchai
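
The IDF baseline that TOW is compared against can be sketched as follows; the toy corpus is hypothetical, and the exact TOW formula is defined in the paper itself:

```python
import math

def idf(term, documents):
    """Inverse document frequency: log(N / df). Rarer terms score higher."""
    df = sum(1 for doc in documents if term in doc)
    return math.log(len(documents) / df) if df else 0.0

def tf_idf(term, doc, documents):
    """Term frequency in one document, scaled by corpus-level IDF."""
    return doc.count(term) * idf(term, documents)

# Hypothetical tokenized movie-review corpus
docs = [["great", "movie", "great"],
        ["bad", "movie"],
        ["great", "acting"],
        ["bad", "plot", "bad"]]
weight = tf_idf("great", docs[0], docs)
```

Note that IDF ignores which reviews are positive or negative; a category-aware scheme such as TOW uses that class information, which is the shortcoming of IDF the abstract points out.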
A Comparison of Artificial Neural Network and Regression Model for Predicting the Rice Production in Lower Northern Thailand

Lower Northern Thailand is one of the main regions producing the highest rice yields. If the emphasis is on producing rice yields that meet the standard, then key factors such as the characteristics of the rice farm, rice seed types, cultivation period, quantity of fertilizer used, and number of seeds must be clearly studied and understood. This paper studies factors influencing rice production and develops a model to predict rice yield per rai that can support farmers in planning their rice farming in Lower Northern Thailand. The aim is to compare the prediction accuracy of two popular predictive techniques for modelling rice yield, namely an artificial neural network (ANN) and regression. Root mean square error (RMSE) and mean absolute error (MAE) values are used to compare the prediction accuracy of the models. The result shows that the ANN is superior to the regression model in terms of prediction accuracy and is flexible to develop.

Anamai Na-udom, Jaratsri Rungrattanaubol
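
The two accuracy measures used for the comparison are straightforward to compute; a sketch with hypothetical yield values:

```python
import math

def rmse(actual, predicted):
    """Root mean square error: penalizes large deviations more heavily."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mae(actual, predicted):
    """Mean absolute error: average magnitude of the prediction errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical rice yields (kg/rai) and model predictions
y_true = [500.0, 620.0, 480.0]
y_pred = [520.0, 600.0, 480.0]
err_rmse = rmse(y_true, y_pred)
err_mae = mae(y_true, y_pred)
```

RMSE is always at least as large as MAE, and the gap between them grows when a model makes a few large errors, which is why reporting both gives a fuller picture of prediction accuracy.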
Study on Local Path Planning for Collision Avoidance in Vehicle-to-Vehicle Communication Environment

Local path planning is a path planning method that utilizes the dynamics and probabilities of a vehicle as criteria for determining whether the vehicle can drive on a candidate path. The system proposed in this study utilizes environmental data obtained through a vehicle-to-vehicle (V2V) communication environment to create a collision risk index. The collision risk index takes the time to collision (TTC) into account as a criterion for path planning. Analyzing the performance of the proposed algorithm through simulation verified that it plans routes more efficiently than an algorithm that does not take the TTC value into account. Therefore, the proposed collision avoidance algorithm considering the TTC value is expected to contribute to the reduction of car accidents, as well as to provide more efficient routes when applied to an actual system.

Gyoungeun Kim, Byeongwoo Kim
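
A basic time-to-collision computation of the kind the collision risk index builds on might look like this (the gap and speeds are illustrative):

```python
def time_to_collision(gap_m, v_follow_mps, v_lead_mps):
    """TTC: gap divided by closing speed; infinite when the gap is not closing."""
    closing = v_follow_mps - v_lead_mps
    if closing <= 0:
        return float("inf")  # follower not catching up, so no collision course
    return gap_m / closing

# 50 m gap, follower at 25 m/s, lead vehicle at 15 m/s -> closing at 10 m/s
ttc = time_to_collision(50.0, 25.0, 15.0)
```

A candidate path whose TTC falls below a safety threshold would receive a high collision risk index and be rejected by the planner.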
Vehicle Position Estimation using Tire Model

GPS is widely used in location estimation technology, which is essential for the stable driving of autonomous vehicles. However, GPS has problems such as reduced location accuracy during abrupt vehicle behavior at high speed, and limitations such as signal interruption in tunnels and downtown areas. To overcome these problems, an algorithm that combines various sensor information with longitudinal/lateral slip is required. This paper proposes a three-degree-of-freedom (3-DoF) vehicle dynamics model, to which Dugoff's tire model is applied, and an algorithm that combines various sensor information inside the vehicle using an extended Kalman filter. The performance of the proposed location estimation algorithm was analyzed and evaluated through simulations. The results confirm that the location estimate of the proposed algorithm is more accurate than that of the GPS-only method, even during abrupt changes in motion.

Jaewoo Yoon, Byeongwoo Kim
I know when you are happy - Emotion Detection

Affective computing is an interdisciplinary research field, and various approaches have been proposed for the detection of emotions; one of them is emotion detection from facial expressions. People react to situations in real life and have little control over their facial expressions, which can therefore be examined to learn about their emotional state. In this paper we propose an approach based on Laplacian of Gaussian filters for frontal facial images. This technique removes the Laplacian filter's susceptibility to noise, and the simplicity of these filters makes them suitable for timely computation. For emotion recognition, all possible two-class emotion combinations are used. Experiments are conducted on an expression dataset available for research purposes.

Abhishek Singh Kilak, Namita Mittal
Interconnection Learning between Economic Indicators in Indonesia Optimized by Genetic Algorithm

The economy is an important issue in any country since it involves many sectors, and the stability of economic conditions can be examined by predicting economic indicators. Unfortunately, previous predictions of individual economic indicators did not take the interconnections between them into account, so learning the dependencies among economic indicators still needs further research. Moreover, economics, as a complex and chaotic domain, requires differential dynamics to deal with its problems. For these reasons, this research not only observes the interconnection between economic indicators while predicting them, but also uses differential dynamics optimized by a genetic algorithm (GA). The system achieved accuracies between 20 % and 80 %. The 80 % accuracy was obtained when using economic indicators with the same characteristics, e.g., when the system observed GDP and GNI together: using data with similar trends enabled the fitness function of the GA to optimize the differential dynamics while predicting the economic indicators. In contrast, the accuracy dropped to around 20 % to 40 % when using economic indicators with different characteristics, for instance when learning the dependency between GDP and inflation while predicting the GDP time series. Based on this research, it can be concluded that the GA is able to optimize the learning of the dependencies among Indonesia's economic indicators, and that using indicators with the same characteristics gives the GA better results. This indirectly suggests that the government should consider the values of economic indicators sharing the same characteristics when monitoring economic conditions.

S Saadah, G. S. Wulandari
Classification Prediction of the Foot Disease Pattern Using Decision Tree Model

Data mining is used to find important and meaningful knowledge in large-scale data. The decision tree, among classification algorithms, has been applied to categorical and numeric attributes in different domains. The purpose of this study was to acquire significant information about singular disease groups and the biomechanical parameters related to their symptoms by developing a prediction model. Sample data from the records of 90 patients diagnosed with a singular disease were selected for analysis, 2,418 records in total. The dependent variable comprised 9 singular disease groups; 18 of 32 independent variables closely related to disease were selected and optimized. After the data were divided into training and test sets, the C5.0 algorithm was applied for analysis. In conclusion, 10 diagnosis rules were created and major symptom information was verified. On the basis of this study, additional analyses utilizing other data mining methods will be performed to improve accuracy.

Jung-Kyu Choi, Yonggwan Won, Jung-Ja Kim
Non-Preference Based Pruning Algorithm for Multi-Objective Redundancy Allocation Problem

A non-preference based pruning algorithm is proposed to rank Pareto-optimal solutions according to the cost and reliability trade-off for solving the multi-objective redundancy allocation problem. The proposed method is demonstrated on a multi-objective redundancy allocation problem with a mix of non-identical component types in each subsystem. The objectives of the system design are to maximize system reliability and minimize system cost simultaneously while satisfying system requirement constraints. The non-dominated sorting genetic algorithm-II (NSGA-II) finds an approximation of the Pareto-optimal solutions, after which K-means clustering is used to cluster the approximated solutions into trade-off regions. Thereafter, the Pareto-optimal solutions are ranked based on their cost and reliability trade-off compared to the centroid solution of each cluster. The results show that the proposed method is able to identify the most-compromised solution.

Tipwimol Sooktip, Naruemon Wattanapongsakorn, Sanan Srakaew
A Combined AdaBoost and NEWFM Technique for Medical Data Classification

A hybrid technique combining the AdaBoost ensemble method with the neural network with fuzzy membership function (NEWFM) method is proposed for medical data classification and disease diagnosis. Combining AdaBoost, a general method used to improve the performance of learning methods, with the 'standard' NEWFM, which is used as the base classifier, ensures better accuracy in medical data classification tasks and the diagnosis of diseases. To validate the proposal, four medical datasets related to epileptic seizure detection, Parkinson's, cardiovascular (heart), and hepatitis disease diagnoses were used. The results show an average classification accuracy of 95.8 % (comprising best accuracies of 99.5 % for epileptic seizure, 87.9 % for Parkinson's, 97.4 % for cardiovascular (heart) disease, and 98.7 % for hepatitis dataset classifications), which suggests that the proposed technique is capable of efficient medical data classification and has potential applications in disease diagnosis and treatment.

Khaled A. Abuhasel, Abdullah M. Iliyasu, Chastine Fatichah
On sentence length distribution as an authorship attribute

Understanding what makes written texts sound as if they are written by their author has been an unsolved problem for hundreds of years. Attributes of authorship are often clumped together in an attempt to identify an unknown author, while the practice of investigating a single attribute by eliminating the effect of all others has received little attention. One of the debated attributes is the size of the text segments that authors use to group words together. Texts consist of these segments, sentences, which are of different lengths, the values being distributed in ways that are assumed to be characteristic of the author. Comparing the statistics of paired text samples, we can show that differences in the statistics do in fact indicate a difference in the authorship of the texts. However, certain choices of metrics and units easily lead to random and meaningless results.

Miro Lehtonen
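
Collecting a sentence length distribution of the kind discussed above can be sketched as follows; the naive punctuation-based splitter is an assumption, not the paper's method:

```python
import re
from collections import Counter

def sentence_lengths(text):
    """Split text into sentences on terminal punctuation and count words in each."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def length_distribution(text):
    """Relative frequency of each sentence length in the text."""
    lengths = sentence_lengths(text)
    total = len(lengths)
    return {k: v / total for k, v in Counter(lengths).items()}

sample = "Short one. This sentence has five words. Another short one here."
dist = length_distribution(sample)
```

Two samples by the same author are expected to yield similar distributions, so a distance between paired distributions is what such a comparison would test, subject to the metric and unit caveats the abstract raises.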
Learning models for activity recognition in smart homes

Automated recognition of activities in a smart home is useful for the independent living of the elderly and the remote monitoring of patients. Learning methods are applied to recognize activities using the information obtained from the sensors installed in a smart home. In this paper, we present a comparative study of five learning models applied to activity recognition, highlighting their strengths and weaknesses under different challenging conditions. The challenges include high intra-class and low inter-class variations, unreliable sensor data, and an imbalanced number of activity instances per class. The same sets of features are given as input to all learning approaches. Evaluation is performed using four publicly available smart home datasets. Analysis of the results shows that the Support Vector Machine (SVM) and Evidence-Theoretic K-Nearest Neighbors (ET-KNN) performed better in correctly recognizing the smart home activities than the Probabilistic Neural Network (PNN), K-Nearest Neighbor (KNN) and Naive Bayes (NB) methods.

Labiba Gillani Fahad, Arshad Ali, Muttukrishnan Rajarajan
Tackling Class Imbalance Problem in Binary Classification using Augmented Neighborhood Cleaning Algorithm

Many natural processes generate some observations more frequently than others, resulting in imbalanced distributions which cause classifiers to be biased toward the majority class, because most classifiers assume a balanced distribution. To address the problem of class imbalance, a number of data preprocessing techniques, generally categorized into over-sampling and under-sampling methods, have been proposed over the years. The neighborhood cleaning rule (NCL) proposed by Laurikkala is among the most popular under-sampling methods. In this paper, we augment the original NCL algorithm by cleaning the unwanted samples using the CHC evolutionary algorithm instead of the simple nearest-neighbor-based cleaning in NCL, and name the augmented algorithm NCL+. The performance of NCL+ is compared to that of NCL on 9 imbalanced datasets using 11 different classifiers, and experimental results show noticeable accuracy improvements by NCL+ over NCL. Moreover, NCL+ is also compared to the popular over-sampling method SMOTE (synthetic minority over-sampling technique) and found to offer better results as well.

Nadyah Obaid Al Abdouli, Zeyar Aung, Wei Lee Woon, Davor Svetinovic
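The nearest-neighbor cleaning step that NCL performs (and that NCL+ replaces with CHC-based search) can be illustrated with a minimal sketch. The dataset, k, and the single ENN-style pass shown here are illustrative simplifications of Laurikkala's full rule:

```python
from collections import Counter

def knn_predict(X, y, query, k=3, skip=None):
    # k-NN vote over squared Euclidean distances, optionally excluding one index
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(X[i], query)), y[i])
        for i in range(len(X)) if i != skip
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def ncl_undersample(X, y, majority_label, k=3):
    """ENN-style cleaning pass: drop majority-class samples that are
    misclassified by their k nearest neighbours."""
    keep = [i for i in range(len(X))
            if not (y[i] == majority_label
                    and knn_predict(X, y, X[i], k, skip=i) != y[i])]
    return [X[i] for i in keep], [y[i] for i in keep]
```

A borderline majority sample sitting inside the minority cluster is removed while the minority class stays intact; NCL+ would instead search for the subset of samples to remove using the CHC evolutionary algorithm.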


Frontmatter
Software Effective Risk Management: An Evaluation of Risk Management Process Models and Standards

Different software risk management process models, professional standards and specific techniques have been presented in the literature by researchers and practitioners in the software industry to make the development of software projects more likely to succeed. In this study, different software risk management process models and professional standards are evaluated against the most effective risk management techniques and processes proposed by researchers in the last 13 years, in order to highlight the strengths and weaknesses of each model. The results show that there is no model which can be called the de facto effective risk management process model.

Jafreezal Jaafar, Uzair Iqbal Janjua, Fong Woon Lai
Validating Reusability of Software Projects using Object-Oriented Design Metrics

Reusability is a high-priority attribute that needs to be considered before developing good software projects. Without reusability, software projects would be hard to maintain or extend. To ensure that a minimum level of the reusability quality attribute is achieved in software design, two versions of jUnit, developed by software professionals, were assessed using the Quality Model for Object-Oriented Design (QMOOD). Even though the obtained results showed that the design of the jUnit projects met the standard, the results were not yet validated against another object-oriented design metrics framework. Therefore, this paper validates the design reusability of jUnit using the CK metrics. The results show that jUnit has a good quality design and that its reusability level can be used as a benchmark for designing other software projects.

Zhamri Che Ani, Shuib Basri, Aliza Sarlan
Plasticine Scrum: An Alternative Solution for Simulating Scrum Software Development

Scrum is an efficient software development process. However, in order to master its concepts, practitioners need a considerable level of software development skill and must go through cycles of actual activities. This can be expensive and time consuming. “Scrum simulation with LEGO bricks” is a very popular workshop for tackling these limitations. Unfortunately, due to its high cost, implementing this workshop in a large software engineering class can be financially challenging. This paper proposes an alternative low-cost solution for simulating Scrum software development, using inexpensive plasticine as the main material. It also reports the results of two pilot studies on this alternative simulation.

Sakgasit Ramingwong, Lachana Ramingwong
Understanding of Project Manager Competency in Agile Software Development Project: The Taxonomy

The growth of agile software development projects (ASDP) continues to be significant in the software industry. Project managers play an important role in ensuring project success, and the success of a project depends on the competency of its manager. Recognizing the lack of research on project manager competency in ASDP (skill, knowledge, personal attributes and behavior), this research introduces the skills, knowledge, personal attributes and behavior needed by a project manager in ASDP. The paper contributes to the relevant theory by developing a taxonomy of agile project manager competency. Practitioners can use this taxonomy as a sensitizing device to help project managers.

Kamalrufadillah Sutling, Zulkefli Mansor, Setyawan Widyarto, Sukumar Lecthmunan, Noor Habibah Arshad
The Design and Implementation of Around-View System using Android-based Image Processing

Nowadays, image processing products such as car black boxes and CCTV are sold in the market and offer convenience to users. In particular, black boxes help determine the cause of accidents while driving. However, they cover only the front and rear of the vehicle, so they cannot monitor the driver's blind spots outside their angle of view. To solve this problem, the black box concept was extended and the AVM (Around-View Monitoring) system was developed. An AVM system produces a bird's-eye view from above, so it can obtain images in all directions, i.e., 360-degree coverage around the vehicle. Conventionally, this requires a desktop PC to be installed in the vehicle. The AVM system proposed in this article removes this disadvantage: it is designed to run on a tablet using Android-based image processing.

Gyu-Hyun Kim, Jong-Wook Jang
Identification of Opportunities for Move Method Refactoring Using Decision Theory

The early versions of an object-oriented software system may function correctly, but its internal structure may not be well designed. Some classes may be god classes with several different behaviors. God classes have low cohesion, and some of their methods should be moved to other classes. Finding a target method of a class to be moved to another class is a challenging problem. This paper proposes an approach to identify opportunities for Move Method refactoring using decision theory. The approach searches for candidate classes and chooses the one class with the highest coupling value, which is used as the decision criterion of the Laplace method. A preliminary evaluation is performed on object-oriented software to demonstrate the effectiveness of the proposed approach. The results show that the proposed method can improve the design quality of the source code.

Siriwan Boonkwan, Pornsiri Muenchaisri
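The Laplace decision criterion mentioned in the abstract can be sketched as follows. The candidate class names and the coupling payoffs (coupling of the god-class method to each candidate under two hypothetical measurement scenarios) are invented for illustration; the paper's actual coupling metric is not reproduced here:

```python
def laplace_choice(payoffs):
    """Laplace criterion: assume all states equally likely and pick the
    alternative (candidate target class) with the highest mean payoff."""
    return max(payoffs, key=lambda alt: sum(payoffs[alt]) / len(payoffs[alt]))

# hypothetical coupling values of a method to two candidate target classes,
# measured under two different scenarios (the "states" of the decision problem)
payoffs = {"CustomerService": [3.0, 5.0], "OrderService": [7.0, 3.0]}
```

With equal state weights, OrderService wins (mean 5.0 vs 4.0), so the method would be moved there.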
Requirements Engineering for Cloud Computing in University Using i*(iStar) Hierarchy Method

Cloud computing is increasingly established in higher education as an option for modelling the quality of services and the education system. Requirements engineering is the process of formulating user and system needs; it begins by identifying the stakeholders of cloud computing at a university, whose operational activities require services that allow network access from anywhere at any time, e.g., networks, servers, storage, applications and services. The adoption of cloud computing at a university is highly related to the readiness of the university. Using the i* (iStar) hierarchy offers a good combination for improving software quality. This paper designs a requirements engineering framework based on the iStar hierarchy approach for the implementation of requirements engineering.

Sandfreni, Nabila Rizky Oktadimi, Kridanto Surendro
Power Consumption Profiling Method based on Android Application Usage

In this paper, we propose a method for collecting the essential data to profile the energy consumption of applications running on Android OS. Existing power-estimation methods are unable to account for all possible usage patterns, since developers can only prepare a limited number of profiling test cases. Our proposed method analyzes power consumption using a log collected during application use on a particular user's smartphone. In our method, logging code that tracks application usage data is automatically embedded into the application. The method uses this usage information to estimate power consumption and provide developers with helpful hints for decision making and application tuning. In this paper, we analyzed the power consumption of an open-source application based on the collected log and estimated the power-saving effects of the application. The power consumption of the application was reduced by tuning it according to the results of the analyses, confirming that the method provides valid information for determining power-saving techniques.

Hiroki Furusho, Kenji Hisazumi, Takeshi Kamiyama, Hiroshi Inamura, Tsuneo Nakanishi, Akira Fukuda
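A linear component-power model of the kind such log-based estimators typically assume can be sketched as follows. The component names and per-component power coefficients are hypothetical, not values from the paper:

```python
# hypothetical average power draw per component, in milliwatts (assumed values)
POWER_MW = {"cpu": 300.0, "wifi": 250.0, "gps": 400.0, "screen": 200.0}

def estimate_energy_mj(usage_log):
    """usage_log: list of (component, seconds_active) pairs taken from the
    embedded logging code. Returns estimated energy in millijoules
    (mW x s = mJ) under a simple linear power model."""
    return sum(POWER_MW[component] * seconds for component, seconds in usage_log)
```

For example, estimate_energy_mj([("cpu", 2.0), ("wifi", 1.0)]) yields 850.0 mJ; comparing such estimates before and after tuning is how a power-saving effect would be quantified.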
Evergreen Software Preservation: The Conceptual Framework of Anti-Ageing Model

A symptom of degradation in terms of quality is observed as the indicator of the ageing phenomenon in a software system. In humans and other living creatures, ageing is an inescapable manifestation, and the practice of delaying the ageing process is normally known as anti-ageing. We try to understand the process of ageing in software by studying the human ageing process. Unlike human ageing, software ageing can be delayed by identifying the factors that influence it. Ageing occurs when software degrades in terms of its quality, user satisfaction and dynamics. Previous studies indicated that software ageing factors can be classified into categories such as cost, technology, human, functionality and environment. Our past experience in software quality and certification motivates the development of a software anti-ageing model and its related areas, namely the ageing factors and the rejuvenation index. This paper presents the background issues in software ageing, including software quality and certification, and focuses on the conceptual framework of the software anti-ageing model and a preliminary formulation of the anti-ageing model.

Jamaiah H. Yahaya, Aziz Deraman, Zuriani Hayati Abdullah
A Framework Analysis for Distributed Interactive Multimedia System on Location-based Service

This paper presents a system solution to distribute and share school multimedia resources effectively. In particular, the paper presents the importance and feasibility of location-based services in a personalized, distributed and interactive multimedia system for a digital campus. We focus on building a wireless roaming network platform for mobile applications, guided by reliability, security, scalability and other design principles, such as object-oriented design patterns. Our goal is to achieve multimedia content distribution and information sharing in the wireless roaming network by designing system function modules and terminal server application modules. The results demonstrate that the designed system can provide a channel for external communication at the school management level and a platform for internal resource sharing. It can also provide a channel for independent study and an information exchange platform for school staff and visitors.

Shan Liu, Jianping Chai
An Empirical Validation of Coupling and Cohesion Metrics as Testability Indicators

The measurement of the quality attributes of software is a management approach to building quality software products. Testability is a measure of the capability of software to be subjected to testing. It is a desirable quality attribute because it ensures the reliability of software. This paper presents an empirical validation of coupling and cohesion metrics as indicators of the testability quality attribute of Object-Oriented (OO) software from a High-Level Design (HLD) perspective. Open-source OO software samples are used for the empirical analysis, and three test case complexity metrics are taken as the measure of the complexity of testing the software samples. The results of the empirical validation show that the coupling metrics and some of the cohesion metrics investigated are good indicators of the testability of OO software designs.

Amos Orenyi Bajeh, Shuib Basri, Low Tang Jung
Software Defect Prediction in Imbalanced Data Sets Using Unbiased Support Vector Machine

In the software assurance process, it is crucial to prevent a program with defective modules from being released to users, since doing so saves maintenance cost and increases software quality and reliability. There have been many prior attempts to automatically capture errors by employing conventional classification techniques, e.g., Decision Tree, k-NN, Naïve Bayes, etc. However, their detection performance was limited due to the class imbalance issue, since the number of defective modules is very small compared to that of non-defective modules. This paper aims to achieve a high prediction rate by employing an unbiased SVM called “R-SVM,” our version of SVM tailored to domains with imbalanced classes. The experiment was conducted on the NASA Metrics Data Program (MDP) data set. The results showed that our proposed system outperformed all of the major traditional approaches.

Teerawit Choeikiwong, Peerapon Vateekul
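Why the imbalance issue limits conventional classifiers is easy to see with a minority-class F1 sketch (the label vectors below are illustrative; R-SVM itself is not reproduced here): a classifier that predicts "non-defective" for every module scores 80% accuracy on this toy set yet has zero defect-detection ability.

```python
def f1_minority(y_true, y_pred, minority=1):
    """Precision, recall and F1 computed for the minority (defective) class only."""
    tp = sum(t == minority and p == minority for t, p in zip(y_true, y_pred))
    fp = sum(t != minority and p == minority for t, p in zip(y_true, y_pred))
    fn = sum(t == minority and p != minority for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

This is why imbalance-aware studies report per-class measures rather than overall accuracy.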
Evaluation of Separated Concerns in Web-based Delivery of User Interfaces

User Interfaces (UI) play a significant role in contemporary web applications. Responsiveness and performance are influenced by the UI design, the complexity of its features and the amount of transmitted information, as well as by network conditions. While traditional web delivery approaches separate out the presentation of the UI in the form of Cascading Style Sheets (CSS), a large number of presentation concerns are left tangled together in the structural description used for data presentation. Such tangling impedes concern reuse, which impacts the description size as well as caching options. This paper evaluates the separation of UI concerns from the perspective of UI delivery. Concerns are distributed to clients through various resources/channels, which impacts UI composition at the client side. This decreases the volume of transmitted information and extends caching options. The efficacy is demonstrated through experiments.

Tomas Cerny, Lubos Matl, Karel Cemus, Michael J. Donahoo
Separating out Platform-independent Particles of User Interfaces

User Interfaces (UIs) visualize a wide range of underlying computer application concerns. Such orthogonal concerns are present in even the simplest UIs. The expectation of support for users of various backgrounds, locations, technical skills, etc. serves to increase concern complexity. Nowadays, users typically access applications remotely from a variety of platforms, including web, mobile or even standalone clients. Providing platform-specific support for multiple UIs further increases the concern complexity. Such a wide range of concerns often results in a significant portion of the UI description being restated using platform-specific components, which increases development and maintenance effort. This paper aims to separate out the platform-independent particles of a UI that can be reused across various platforms. Such separation reduces information restatement, development and maintenance effort. The platform-independent particles are provided in a machine-readable format to support their reuse in platform-specific UIs.

Tomas Cerny, Michael J. Donahoo
Implementing Personal Software Process in Undergraduate Course to Improve Model-View-Controller Software Construction

This research implements the concepts of the Personal Software Process (PSP) with Model-View-Controller (MVC) software construction in an undergraduate software engineering course. Exercises were redesigned to support the MVC framework based on the traditional PSP exercises. The findings provide a guide for instructors to improve both personal and team performance in MVC software construction.

Wacharapong Nachiengmai, Sakgasit Ramingwong


Frontmatter
Personalized Care Recommendation Approach for Diabetes Patients Using Ontology and SWRL

Although many health organizations publish documents on detection and self-care recommendations for diabetes patients, this information is too general and inconsistent with each patient's conditions. This research proposes a personalized care recommendation approach for diabetes patients using ontology and SWRL. The objective is to develop diabetes knowledge-based ontologies (DKOs), expressed in the Web Ontology Language (OWL), for describing the patient profile, the general self-care practices for diabetes patients, and the complications, signs, symptoms and disorders of diabetes patients. The DKOs are mapped and combined with rules created using the Semantic Web Rule Language (SWRL) for inferring new knowledge or new relationships according to each patient's condition. The semantic rules enable semantic recommendations for personalized care corresponding to each patient's condition.

Benjamas Hempo, Ngamnij Arch-int, Somjit Arch-int, Cherdpan Pattarapongsin
An Experimental Study for Neighbor Selection in Collaborative Filtering

Similarity is the most critical index in collaborative filtering-based recommender systems, which suggest items to their customers by consulting the most similar neighbors. Current popular similarity measures may mislead the user to unwanted items in certain cases, due to their inherent properties. This study suggests a novel idea to significantly decrease such occurrences by enforcing qualifying conditions on neighbors, using some simple criteria, before consulting their ratings. Extensive experiments show that the proposed idea substantially improves the prediction performance of collaborative filtering based on existing similarity measures. This result is noticeable considering that such improvements are achieved simply by consulting only those neighbors satisfying the given criteria, without adopting any sophisticated similarity measure.

Soojung Lee
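One simple qualifying condition of the kind described above is requiring a minimum number of co-rated items before a neighbor's rating is consulted. The sketch below, with invented users, items and similarity values, shows the idea (the paper's specific criteria are not reproduced):

```python
def predict_rating(target, item, ratings, sims, min_co=2):
    """Similarity-weighted CF prediction that consults only neighbors
    co-rating at least min_co items with the target user."""
    num = den = 0.0
    for user, sim in sims[target].items():
        if item not in ratings[user]:
            continue
        co_rated = len(set(ratings[target]) & set(ratings[user]))
        if co_rated < min_co:
            continue  # unqualified neighbor: high similarity on too little evidence
        num += sim * ratings[user][item]
        den += abs(sim)
    return num / den if den else None
```

A neighbor with a high similarity score but no rating overlap is skipped, so spurious similarities cannot pull the prediction toward unwanted items.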
Transforming e-Procurement Platforms for PEPPOL and WCAG 2.0 Compliance
The anoGov-PEPPOL Project

The increase in the complexity inherent to public administrative activities, including public procurement, has led ANO to develop a public e-procurement platform through which Portuguese public entities can perform their contracts and acquisitions. Given the introduction of the European PEPPOL standards and the requirement for all Portuguese e-procurement platforms to be WCAG 2.0 level A compliant, the ANO company established a consortium with the UTAD University in order to improve the platform and develop the required features. Using a specially designed stage-based work methodology, the R&D project was carried out successfully and all initial goals were achieved.

José Martins, João Barroso, Ramiro Gonçalves, André Sousa, Miguel Bacelar, Hugo Paredes
Analyzing Developer Behavior and Community Structure in Software Crowdsourcing

Recently, software crowdsourcing has become an emerging development paradigm in software ecosystems. At present, most research efforts on software crowdsourcing focus on modelling its competitive nature, from the aspects of incentive mechanisms and developer decisions, using game theory. Little work has been done on analyzing the impact of developer behavior and community structure on software crowdsourcing practices. Based on social network modeling and analysis methodology, this paper studies a popular software crowdsourcing community named TopCoder. We discover that the online activities of TopCoder users can be characterized by a temporal bursty pattern. Such user behavior leads to similarity in the participants of TopCoder contests occurring within consecutive time intervals. Furthermore, we introduce a competition social network to study the influence of cooperation and competition between developers on their ratings in the TopCoder community. In addition to the community-wide developer network, the paper extends the modeling method to examine topological characteristics of TopCoder projects and reveal the correlation between these structural features and the outcome quality of the projects.

Hui Zhang, Yuchuan Wu, Wenjun Wu
A Semantic Similarity Assessment Tool for Computer Science Subjects Using Extended Wu & Palmer’s Algorithm and Ontology

This paper presents a process model and a system for calculating the correspondence between the content of courses in Computer Science and the standard of the Thailand Qualifications Framework for Higher Education (TQF:HEd). The aim is to improve the curricula of universities in Thailand by meeting the standards while making the process shorter and more convenient. We designed ontologies of the courses and TQF:HEd, and then developed the system as a web application that maps the information from the two ontologies using the extended Wu & Palmer algorithm and WordNet. Finally, the system summarizes the consistency of the course descriptions with the body of knowledge of TQF:HEd. Tests with sample data show that this method indicates whether a course description is consistent with the standards, as well as the relative importance of each part of the body of knowledge in the teaching of the subject.

Chayan Nuntawong, Chakkrit Snae Namahoot, Michael Brückner
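The Wu & Palmer measure that the system extends scores two concepts by the depth of their lowest common subsumer (LCS) relative to their own depths. A minimal sketch over a toy taxonomy follows; the concept names are invented and the paper's extension is not reproduced:

```python
def path_to_root(node, parent):
    # walk up the parent dict from a concept to the taxonomy root
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

def wu_palmer(c1, c2, parent):
    """Wu & Palmer similarity: 2*depth(LCS) / (depth(c1) + depth(c2)),
    with depth counted from the root (depth(root) = 1)."""
    depth = lambda n: len(path_to_root(n, parent))
    ancestors1 = set(path_to_root(c1, parent))
    for a in path_to_root(c2, parent):  # first shared ancestor is the LCS
        if a in ancestors1:
            return 2 * depth(a) / (depth(c1) + depth(c2))
    return 0.0
```

Identical concepts score 1.0, and concepts that share only the root score low, which is what makes the measure usable for ranking course-to-standard matches.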
Comparison of Architecture-Centric Model-Driven Web Engineering and Abstract Behavioural Specification in Constructing Software Product Line for Web Applications

We investigated how a software product line (SPL) for web applications can be realized by following an established web application development methodology called Architecture-Centric Model-Driven Web Engineering (AC-MDWE). The development process uses Abstract Behavioural Specification (ABS), an executable modelling language that provides SPL-related features. We created a case study by implementing a product line for e-commerce web applications. The product line is realisable using ABS with modifications to the original AC-MDWE process. ABS provides several benefits, such as control during product generation, feature traceability, and preserving the integrity of core assets. However, it is not yet ready for creating production-level web applications, and the generated artefacts lack readability.

Daya Adianto, Maya Retno Ayu Setyautami, Salman El Farisi
Prediction of Query Satisfaction Based on CQA-Oriented Browsing Behaviors

Browsing satisfaction with community-based websites has been studied mainly based on webpage content. In this paper, we exploratively analyze the factors affecting the browsing behaviors of client users to predict the query satisfaction level on a Community-based Question Answering (CQA) website. The experimental results show that different categories of information are affected by different factors, and that considering the factors of browsing behaviors can improve the prediction accuracy of query satisfaction.

Junxia Guo, Hao Han, Cheng Gao, Takashi Nakayama, Keizo Oyama
The Competition-based Information Propagation in Online Social Networks

Information propagation in online social networks is of great interest to many researchers. Common issues with existing models include the requirement for a complete network structure, topic-dependent model parameters, the assumption of isolated spreading, and so on. In this paper, we study the characteristics of information propagation for multiple topics based on data collected from Sina Weibo (one of the most popular microblogging services in China). Based on these observations, we propose a Competition-based Multi-topic Information Propagation (CMIP) model that does not require the network structure. From the perspective of topic competition, we treat information propagation as two stages: the gain and the loss of user attention resources. Simulation results validate the model and verify that it effectively generates the single-peak and succession phenomena in information propagation, consistent with the observations.

Liyuan Sun, Yadong Zhou, Xiaohong Guan
e-Learning Recommender System for Teachers using Opinion Mining

In recent years, e-learning has evolved as one of the better alternatives to the classroom approach. E-learning has crossed geographical boundaries and is now within the reach of every learner using the Internet. However, the mere presence of e-learning websites does not ensure that all their content is effective for learners. Generally, to learn a subject the learner has to traverse many websites for various topics, because no single website provides all the best content about the subject in one place. We therefore have to analyze learners' reviews of a website's subject content in order to deliver all the best content in a single place. In this paper we propose a new e-learning recommender system named A3. Using opinion mining, it analyzes learners' opinions about the subject content and recommends that the teachers who uploaded a tutorial to the website revise only the particular portion of the topic that learners find difficult to understand, rather than the complete topic. Over time, this system makes all the best content about the subject available in a single place.

Anand Shanker Tewari, Anita Saroj, Asim Gopal Barman

IWICT

Frontmatter
Evaluating Heuristic Optimization, Bio-Inspired and Graph-Theoretic Algorithms for the Generation of Fault-Tolerant Graphs with Minimal Costs

The construction of fault-tolerant graphs is a trade-off between cost and degree of fault tolerance. Thus the construction of such graphs can be viewed as a two-criteria optimization problem. Any algorithm should therefore be able to generate a Pareto front of graphs so that the right graph can be chosen to match the application and the user's need. In this work, algorithms from three different domains for the construction of fault-tolerant graphs are evaluated. Classical graph-theoretic algorithms, optimization approaches and bio-inspired approaches are compared regarding the quality of the generated graphs as well as their runtime requirements. As a result, recommendations for applying the right algorithm to a certain problem class are derived.

Matthias Becker, Markus Krömker, Helena Szczerbicka

ICWCIA

Real-time Night Visibility Enhancement Algorithm Using the Similarity of Inverted Night Image and Fog Image

In this paper, we propose an improved night visibility enhancement algorithm based on haze removal. The proposed method uses a new haze removal method in place of the conventional one; this de-hazing method produces better results and is faster than the traditional method. It additionally uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to sharpen the image. The proposed method can be applied to any application that uses a visible-light camera, and since real-time processing is possible, it is suitable for black boxes, vehicle cameras and cell phone cameras.

Jae-Won Lee, Bae-Ho Lee, Yongkwan Won, Cheol-Hong Kim, Sung-Hoon Hong
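The inversion trick behind such algorithms (an inverted low-light image statistically resembles a hazy one) can be sketched per pixel with a simplified dark-channel-prior dehazer. The airlight, omega and single-pixel dark channel below are illustrative simplifications; the paper's faster de-haze method and the CLAHE sharpening step are not reproduced:

```python
def dehaze_pixel(rgb, airlight=255.0, omega=0.95, t_min=0.1):
    """Dark-channel-prior dehazing applied to a single pixel
    (a real implementation takes the channel minimum over a patch)."""
    dark = min(rgb) / airlight            # dark-channel estimate
    t = max(1.0 - omega * dark, t_min)    # transmission estimate
    return tuple((c - airlight) / t + airlight for c in rgb)

def enhance_night_pixel(rgb):
    """Night enhancement via inversion: invert -> dehaze -> invert back."""
    inverted = tuple(255.0 - c for c in rgb)
    dehazed = dehaze_pixel(inverted)
    return tuple(255.0 - c for c in dehazed)
```

A dark pixel such as (20, 20, 20) inverts to a "hazy" bright pixel, is dehazed, and comes back noticeably brighter, which is the visibility gain the abstract describes.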
A Service Protection Mechanism Using VPN GW Hiding Techniques

The recent spread of smartphones and recent changes in the Internet environment have led users to demand safe services. In particular, VPN technology is being researched as a key technology for providing safe services within cloud or data-center environments. However, the openness of IP is a critical threat to VPN technology, which provides its service via the shared IP address of its gateway. The exposed IP address is vulnerable to many kinds of attacks, and thus the VPN gateway and its services are also vulnerable to these threats. This paper proposes the VHSP mechanism, which prevents exposure of the IP address by assigning a temporary IP address to the VPN gateway and its services. VHSP assigns temporary IP addresses on a per-user basis. Moreover, this paper verifies the performance of VHSP and the original VPN under various conditions.

PyungKoo Park, HoYong Ryu, GyungTae Hong, SeongMin Yoo, Jaehyung Park, JaeCheol Ryou
Networking Service Availability Analysis of 2N Redundancy Model with Non-stop Forwarding

This paper focuses on the availability of a networking service application with a 2N redundancy model and non-stop forwarding. The redundancy model is a form of resilience that ensures service availability in the event of component failure. In the model there are at most one active service unit and at most one standby service unit. The active unit provides the service, while the standby is prepared to take over the active role when the active unit fails. Non-stop forwarding is another method of increasing the availability of a networking service: the service packets continue to be forwarded based on a copy of the forwarding information until the failed component is recovered. We design our analysis model using Stochastic Reward Nets, and then analyze the relationship between availability and non-stop forwarding using SPNP.

Dong-Hyun Kim, Jae-Chan Shim, Ho-Yong Ryu, Jaehyung Park, Yutae Lee
Property Analysis of Classifiers for Sleep Snoring Detection

Sleep snoring has become a serious concern in long-term healthcare, as well as an indicative diagnosis of other critical diseases. Recently, many studies have shown that the snoring sound can be analyzed in different domains by various classifiers. In this paper, a comparison study with various classifiers is provided, analyzing their advantages and characteristics for sleep snoring detection. As a result, the correlational filter multilayer perceptron neural network (f-MLP) and support vector machine (SVM) classifiers achieved the better generalization performance, with a classification rate of over 96% for the time-domain snoring data set. Moreover, the filtered data output by the filter layer of f-MLP provided a highly discriminative feature set that enabled most of the classifiers to succeed. One important observation was that the ordinary multilayer perceptron (o-MLP) could not generalize with the time-domain input data.

Tan Loc Nguyen, Young Y. Lee, Su-il Choi, Yonggwan Won
An Innovative Detection Method Integrating Hybrid Sensors for Motorized Wheelchairs

In this paper, we illustrate an innovative detection method integrating hybrid sensors for motorized wheelchairs. By recognizing its surrounding circumstances, the wheelchair can prevent accidents as well as provide convenience for users. Its sensors are designed to detect obstacles so that the controller can drive the motors to avoid collisions or tipping over. Eight infrared distance sensors detect obstacles in all directions (360°). In addition, controlling the motor speed based on the measured distance enables safe collision avoidance.

Sanghyun Park, Hyunyoung Kim, Jinsul Kim, Taeksoo Ji, Myoung Jin Lee

IWSATA

Road Weather Information Transmission Method for Digital Multimedia Broadcasting

Many countries gather information on weather-related road surface conditions, but there is no proper way to deliver the gathered information to drivers on the road. This paper proposes a method to transmit road information, especially weather-related conditions, to drivers. The proposed method is verified via a terrestrial digital multimedia broadcasting system and can be deployed on other digital mobile broadcasting systems, for example DAB, HD Radio, etc.

SangWoon Lee
Color Overlapping Method of Projection System on LEDs

This paper proposes a primary color overlapping method for increasing the brightness of a projection system using light-emitting diodes (LEDs). The proposed approach drives the pulse-width-modulated signals of red, green and blue LEDs. The color overlapping method synthesizes the secondary colors yellow and cyan from the primary RGB colors. With the proposed method, the brightness of the projected image is improved by about 30% compared to the conventional non-color-overlapping method in an LED projection system.

Yongseok Chi, Youngseop Kim, Je-Ho Park, Kyoungro Yoon, Cheong-Ghil Kim, Yong-Hwan Lee
A System Design for Audio Level Measurement based on ITU-R BS.1770-3

Nowadays, inter-program level jumps in digital broadcasting programs have become a major source of sound nuisance; therefore, loudness control has become one of the most important audio issues. In the early days of digital audio, the level of a given piece of audio was determined by measuring the sample-peak level. Later, the concept of perceived loudness was introduced. In this paper, we introduce a system design for audio level measurement based on ITU-R BS.1770-3 in order to measure the true-peak audio level. The system board is designed with a TMS320C6727 floating-point Digital Signal Processor (DSP).

Sang Woon Lee, Hyun Woo Lee, Cheong Ghil Kim
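The core of the BS.1770 measurement is a channel-weighted mean-square energy mapped to an LKFS value. The sketch below omits the K-weighting pre-filter, gating and true-peak oversampling stages of the standard, so its numbers differ from a compliant meter:

```python
import math

# BS.1770 channel weights for L, R, C, Ls, Rs
G = {"L": 1.0, "R": 1.0, "C": 1.0, "Ls": 1.41, "Rs": 1.41}

def loudness_lkfs(channels):
    """channels: dict mapping channel name to a list of samples in [-1, 1].
    Returns -0.691 + 10*log10(sum of weighted per-channel mean squares),
    i.e. the BS.1770 loudness formula without the K-weighting pre-filter."""
    total = 0.0
    for name, samples in channels.items():
        mean_square = sum(s * s for s in samples) / len(samples)
        total += G[name] * mean_square
    return -0.691 + 10.0 * math.log10(total)
```

A full-scale sine in one channel has mean square 0.5, so this unweighted sketch reads about -3.70; a compliant meter with K-weighting reads -3.01 LKFS for the standard's 997 Hz reference tone.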
A Prototype Implementation of Contents Sharing System on Networked Heterogeneous Devices

This paper introduces a prototype design and implementation of a contents sharing service application on networked heterogeneous devices, motivated by users' need, as mobile devices spread widely, to share media contents and play them continuously while switching from one device to another. For example, N-Screen services allow users to manage their active sessions among multiple devices with different capabilities and architectures. In this paper, the basic sharing configuration consists of three device types: PC, smartphone and tablet. The application was implemented with RESTful web services and an open service platform. The prototype was successfully implemented, and its performance was evaluated on a sample 3D video with different file sizes to minimize load latency when changing devices. On the smartphone, latency decreased greatly, from 1.6 to 0.9 seconds, compared with the previous approach.

Cheong Ghil Kim, Dae Seung Park
Metadata
Title
Information Science and Applications
Edited by
Kuinam J. Kim
Copyright year
2015
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-46578-3
Print ISBN
978-3-662-46577-6
DOI
https://doi.org/10.1007/978-3-662-46578-3