2016 | Book | 1st Edition

Computational Science and Its Applications – ICCSA 2016

16th International Conference, Beijing, China, July 4-7, 2016, Proceedings, Part II

Edited by: Osvaldo Gervasi, Beniamino Murgante, Sanjay Misra, Ana Maria A.C. Rocha, Carmelo M. Torre, David Taniar, Bernady O. Apduhan, Elena Stankova, Shangguang Wang

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

The five-volume set LNCS 9786-9790 constitutes the refereed proceedings of the 16th International Conference on Computational Science and Its Applications, ICCSA 2016, held in Beijing, China, in July 2016.
The 239 revised full papers and 14 short papers presented at 33 workshops were carefully reviewed and selected from 849 submissions. They are organized in five thematic tracks: computational methods, algorithms and scientific applications; high performance computing and networks; geometric modeling, graphics and visualization; advanced and emerging applications; and information systems and technologies.

Table of Contents

Frontmatter

High Performance Computing and Networks

Frontmatter
Parallel Sparse Matrix-Vector Multiplication Using Accelerators

Sparse matrix-vector multiplication (SpMV) is an essential computational kernel for many applications such as scientific computing. Recently, the number of computing systems equipped with NVIDIA’s GPU and Intel’s Xeon Phi coprocessor based on the MIC architecture has been increasing. Therefore, the importance of effective algorithms for SpMV in these systems is increasing. To the best of our knowledge, while previous studies have reported CPU and GPU implementations of SpMV for a cluster and MIC implementations for a single node, implementations of SpMV for the MIC cluster have not yet been reported. In this paper, we implemented and evaluated parallel SpMV on a GPU cluster and a MIC cluster. As shown by the results, the implementation for MIC achieved relatively high performance for some matrices with a single process, but it could not achieve higher performance than other implementations with 64 MPI processes. Therefore, we implemented and evaluated the single SpMV kernel to improve the performance of parallel SpMV.

Hiroshi Maeda, Daisuke Takahashi
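For orientation, the sketch below shows the basic sequential SpMV kernel in the common CSR storage format; it is only an illustrative baseline (the storage format, variable names, and toy matrix are assumptions), not the authors' GPU or MIC cluster implementation.

```python
import numpy as np

def spmv_csr(values, col_idx, row_ptr, x):
    """Sequential sparse matrix-vector product y = A @ x, with A stored in CSR form."""
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        # Accumulate contributions of the nonzeros stored in row i.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# Tiny example: the 2x2 matrix [[4, 0], [1, 3]].
values  = np.array([4.0, 1.0, 3.0])
col_idx = np.array([0, 0, 1])
row_ptr = np.array([0, 1, 3])
print(spmv_csr(values, col_idx, row_ptr, np.array([1.0, 2.0])))  # -> [4. 7.]
```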
On the Cluster-Connectivity of Wireless Sensor Networks

Wireless sensor networks consist of sensor devices with limited energy resources and computational capabilities, hence network optimization and algorithmic development in minimizing the total energy or power to maintain the connectivity of the underlying network are crucial for their design and maintenance. We consider a generalized system model of wireless sensor networks whose node set is decomposed into multiple clusters, and show that the decision and the induced minimization problems of the cluster-connectivity of wireless sensor networks appear to be computationally intractable – completeness and hardness, respectively, for the nondeterministic polynomial-time complexity class. An approximation algorithm is devised to minimize the number of endnodes of inter-cluster edges within a factor of 2 of the optimum for the cluster-connectivity.

H. K. Dai, H. C. Su
A PBIL for Load Balancing in Network Coding Based Multicasting

One of the most important issues in multicast is how to achieve a balanced traffic load within a communications network. This paper formulates a load balancing optimization problem in the context of multicast with network coding and proposes a modified population based incremental learning (PBIL) algorithm for tackling it. A novel probability vector update scheme is developed to enhance the global exploration of the stochastic search by introducing extra flexibility when guiding the search towards promising areas in the search space. Experimental results demonstrate that the proposed PBIL outperforms a number of the state-of-the-art evolutionary algorithms in terms of the quality of the best solution obtained.

Huanlai Xing, Ying Xu, Rong Qu, Lexi Xu
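As background, here is a minimal sketch of the classic PBIL loop (sample a population from a probability vector, then shift the vector towards the best sample); the paper's modified update scheme and its load-balancing objective for network-coded multicast are not reproduced, and the one-max fitness below is a stand-in.

```python
import numpy as np

def pbil(fitness, n_bits, pop_size=50, lr=0.1, iters=200, seed=0):
    """Classic PBIL: maintain a probability vector, sample a population,
    and shift the vector towards the best sample in each generation."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                     # initial probability vector
    best, best_fit = None, -np.inf
    for _ in range(iters):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        fits = np.array([fitness(ind) for ind in pop])
        elite = pop[fits.argmax()]
        if fits.max() > best_fit:
            best, best_fit = elite.copy(), fits.max()
        p = (1 - lr) * p + lr * elite            # move the vector towards the elite sample
    return best, best_fit

# Stand-in fitness (one-max); the paper optimises a load-balancing objective instead.
print(pbil(lambda ind: ind.sum(), n_bits=20))
```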
A Proposed Protocol for Periodic Monitoring of Cloud Storage Services Using Trust and Encryption

The advantages of using cloud computing include scalability, availability and a virtually ‘unlimited’ storage capacity. However, it is challenging to build storage services that are at the same time safe from the customer point-of-view and that run in public cloud infrastructures managed by service providers that cannot be fully considered trustworthy. Owners of large amounts of data have to keep their data in the cloud for a long period of time without the need to keep copies of the original data or to access it. In such cases, questions of integrity, availability, privacy and trust are still challenges in the adoption of cloud storage services to ensure security, especially when losing or leaking information can bring significant damage, be it legal or business-related. With such concerns in mind, this paper proposes a protocol to periodically monitor the information stored in the cloud and the behaviour of contracted storage services. The proposed protocol, which is based on trust and encryption, is validated by analysis and simulation that demonstrate its use of computing resources relative to the cloud storage protection it achieves over time.

Alexandre Pinheiro, Edna Dias Canedo, Rafael Timóteo de Sousa Jr., Robson de Oliveira Albuquerque
Implementation of Multiple-Precision Floating-Point Arithmetic on Intel Xeon Phi Coprocessors

In this paper, we propose an implementation of multiple-precision floating-point addition, subtraction, multiplication, division and square root on Intel Xeon Phi coprocessors. Carry propagation in multiple-precision floating-point addition is a major obstacle to vectorization and parallelization. By using the carry skip method, the carry propagation in multiple-precision floating-point addition can be vectorized and parallelized. A parallel implementation of floating-point real FFT-based multiplication is presented, as multiplication is a fundamental operation in fast multiple-precision arithmetic. The experimental results of multiple-precision floating-point addition, multiplication, division and square root operations on an Intel Xeon Phi 5110P are then reported.

Daisuke Takahashi
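To illustrate why carry propagation is the obstacle the abstract refers to, the sketch below performs naive limb-by-limb multiple-precision addition (shown on integer limbs for simplicity), where each limb's carry depends on the previous one; the authors' vectorised carry-skip variant for the Xeon Phi is not shown, and the base and limb layout are assumptions.

```python
def mp_add(a, b, base=2**32):
    """Naive multiple-precision addition of two magnitudes stored as
    little-endian lists of 'limbs'.  The serial carry chain shown here is
    exactly the dependency that blocks vectorisation."""
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    result, carry = [], 0
    for i in range(n):
        s = a[i] + b[i] + carry      # each step waits on the previous carry
        result.append(s % base)
        carry = s // base
    if carry:
        result.append(carry)
    return result

# 2**32 + (2**32 - 1) == [2**32 - 1, 1] in base-2**32 limbs.
print(mp_add([0, 1], [2**32 - 1]))
```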
Towards a Sustainable Architectural Design by an Adaptation of the Architectural Driven Design Method

Sustainability is a global concern. It must be addressed by different sectors of society, including the information technology sector. Moreover, such effort should not focus only on the direct impacts of technology on the environment, but also on the software engineering discipline itself, which now faces other dimensions of sustainability. In this paper, an adaptation of the Attribute-Driven Design method that includes sustainability as a driver is introduced. The proposal is based on the sustainability guidelines established by the Karlskrona Manifesto. It involves a multidimensional sustainability analysis considering three levels of impacts and opportunities for each of the architectural components. The adaptation is motivated by the design of a sustainable architecture for a cloud-based personal health record. The designed architecture is termed Health Catalogue Repository. It offers cloud services, allowing interoperability and timely access to patients’ clinical information. Energy consumption and resource optimization are considered fundamental sustainability characteristics in the architectural design. The achieved design offers a better trade-off among quality attributes and sustainability constraints, reducing long-term impacts while increasing sustainability opportunities in the architectural components.

Luis Villa, Ivan Cabezas, Maria Lopez, Oscar Casas
The Design and Implementation on the Android Application Protection System

As an open-source mobile platform, Android faces severe security problems, and the applications running on this platform confront the same threats. This paper summarizes the security problems that Android applications face and surveys the current defense solutions. A security reinforcement system based on Dex protection is proposed in order to defend against dynamic monitoring and modification. This system combines static and dynamic defense solutions, achieves tamper-proofing and anti-debugging for Android applications, and improves the reliability and security of the software.

Cui Haoliang, Huang Ruqiang, Shi Chengjie, Niu Shaozhang
Extending the ITU-T G.1070 Opinion Model to Support Current Generation H.265/HEVC Video Codec

Online video streaming is one of the most promising applications in wide use today. Streaming such videos at high definition (HD) resolution or above consumes a large network bandwidth. Current generation video codecs like H.265/High Efficiency Video Coding (HEVC) and VP9 are expected to reduce this bandwidth requirement while providing an excellent viewing quality. ITU-T has developed a standardized parametric opinion model called G.1070 that tries to assess the Quality of Experience (QoE) of any multimedia content. The model outputs an overall multimedia quality Mq which is a combination of the video quality Vq and speech quality Sq. The function Vq has to be validated for different video codecs and formats by carrying out subjective experiments. In this paper we propose for the first time a set of coefficients that enables us to extend the G.1070 opinion model to support the H.265 video codec at full-HD resolution.

Debajyoti Pal, Tuul Triyason, Vajirasak Vanijja
New Advantages of Using Chains in Computing Multiple Probabilistic Connectivity

We consider the problem of reliability calculation for a network with unreliable communication links and perfectly reliable nodes. For such networks, we study two different reliability indices: network average pairwise connectivity and the average size of a connected subgraph that contains some special vertex. The problem of precise calculation of both these characteristics is known to be NP-hard. Both indices may be calculated or estimated through complete or partial enumeration of pairs of vertices and calculation of their pairwise reliability. Methods for speeding up this process in the case when there are chains in the graph structure are presented in the paper.

Alexey S. Rodionov, Denis A. Migov
A Delay-Driven Switching-Based Broadcasting Scheme in Low-Duty-Cycled Wireless Sensor Networks

Wireless Sensor Networks (WSNs) have proven to be an important part of everyday modern life. In such networks, broadcasting delivers network-wide configurations, code updates, or route finding requests to the sensor nodes. Addressing the delay performance of tree-based broadcasting in low-duty-cycled WSNs, this paper proposes a novel switching-based scheme to reduce the overall broadcasting delay given one sink node sending out the packet. Simulation results show that the proposed algorithm significantly improves delay compared to well-known schemes, while maintaining a comparable number of transmissions.

Dung T. Nguyen, Vyacheslav V. Zalyubovskiy, Thang Le-Duc, Duc-Tai Le, Hyunseung Choo
Memory-Aware Scheduling for Mixed-Criticality Systems

In this paper, by taking both memory-access and computation time costs into consideration, a two-phase execution model, in which a memory-access phase first fetches the instructions and required data and a computation phase follows, is proposed to model mixed-criticality tasks. Based on the proposed task model, a fixed-priority based scheduling algorithm is developed to schedule the mixed-criticality tasks. We first establish the theoretical foundations upon which to determine whether a mixed-criticality task set is schedulable under given memory-access and computation priorities; based on these theoretical conclusions, we then present how to apply the well-known Audsley’s algorithm to find the optimal priority assignment for both memory-access and computation phases. Extensive experiments have been conducted and the experimental results validate the effectiveness of our proposed approach.

Zheng Li, Li Wang
A Generalized Ant Routing Mechanism Framework in Mobile P2P Networks

With the rapid development of mobile peer-to-peer networks (MP2P) and the diversification of users’ demands, routing mechanisms have become an important research focus in MP2P. Because of its self-organizing characteristic, Ant Colony Optimization (ACO) has been widely applied in designing routing mechanisms for various networks. In ant routing protocols, a node routes by perceiving the pheromone in the network and deposits pheromone to direct subsequent routing. In this paper, the relationship between ACO and routing protocols in MP2P is discussed, and some representative ant routing protocols are compared and analyzed to derive general ant routing principles. A generalized ant routing mechanism framework is then proposed, a corresponding ant routing protocol is produced as a generalized solution, and its performance is shown through simulation experiments.

Dapeng Qu, Di Zhang, Dengyu Liang, Xingwei Wang, Min Huang
WACA: WearAble Device Based Home Control Assistant Framework

In this paper, we analyze the requirements and pain points of legacy wearable-based IoT and smart home services. In conventional systems, users are required to wear additional accessory devices and to move to specific areas to control and manage home devices. To remove these restrictions, we have implemented the WACA framework using well-known general-purpose wearable devices. We also present a gesture-interaction based device control architecture using the wearable devices to provide more efficient and valuable smart home services. Based on the experimental results, we show that the proposed system has potential as a wearable device-based universal control assistance architecture.

Bonhyun Koo, Simon Kong, Hyejung Cho, Lynn Choi
Media, Screen, Input, and Context Sharing System for D2D Services in Smart TV 2.0

A major effort has been put in recent years into the development of Device-to-Device (D2D) communications, which provide wireless connectivity for higher data rates and system capacity. Accordingly, various information sharing systems between smart devices have been developed and widely disseminated to make full use of the wireless network environment. However, due to low compatibility and performance, current sharing systems hardly meet users’ needs for seamless and bidirectional sharing features. In this paper, we propose an integrated sharing system among smart devices. The proposed system shares not only media, screen, and input data but also the user context usage information of the mobile devices. Our system can fully support the roles of both a source device and a destination device, whereas previous systems provide only one-directional data sharing. Furthermore, the proposed system can support high availability and usability by providing an integrated sharing environment. Through experimental development, we show that our system handles heterogeneous contexts and demonstrate the validity of the proposed design for future IoT multimedia systems.

Taeho Kong, Junghyun Bum, Hyunseung Choo
Indoor Location: An Adaptable Platform

Nowadays, it is clear that location systems are increasingly present in people’s lives. In general people often spend 80–90 % of their time in indoor environments, which include shopping malls, libraries, airports, universities, schools, offices, factories, hospitals, among others. In these environments, GPS does not work properly, causing inaccurate positioning. Currently, when performing the location of people or objects in indoor environments, no single technology can reproduce the same results achieved by the GPS for outdoor environments. One of the main reasons for this is the high complexity of indoor environments where, unlike outdoor spaces, there is a series of obstacles such as walls, equipment and even people. Thus, it is necessary that the solutions proposed to solve the problem of location in indoor environments take into account the complexity of these environments. In this paper, we propose an adaptable platform for indoor location, which allows the use and combination of different technologies, techniques and methods in this context.

Mário Melo, Gibeon Aquino, Itamir Morais
Sequential and Parallel Hybrid Approaches of Augmented Neural Networks and GRASP for the 0-1 Multidimensional Knapsack Problem

Many problems have solutions based on the 0-1 multidimensional knapsack problem. Since this combinatorial optimization problem is NP-hard, one of the approaches to solve it is the use of metaheuristics. For this problem, even heuristics consume a lot of time to find a solution, which motivates the search for alternatives capable of making the use of such techniques less time-consuming. Among these alternatives, the use of parallelization strategies deserves to be highlighted, since it may lead to reduced execution times and/or better quality results. In this work we propose a hybrid approach of augmented neural networks and GRASP for the 0-1 multidimensional knapsack problem, describe a sequential and a GPGPU implementation, and measure the achieved speedups. We also compare our results with those obtained by other metaheuristics. The obtained results show that the proposed approach can achieve better quality solutions than some of the other algorithms found in the literature. These solutions can lead to better solutions to real problems that can be modeled with the 0-1 MKP.

Bianca de Almeida Dantas, Edson Norberto Cáceres
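A minimal sequential GRASP skeleton for the 0-1 multidimensional knapsack is sketched below (greedy randomized construction from a restricted candidate list, followed by a simple swap-based local search); the augmented-neural-network component, the GPGPU parallelisation, and all parameter values are assumptions rather than the paper's method.

```python
import random

def grasp_mkp(profits, weights, capacities, iters=100, alpha=0.3, seed=0):
    """GRASP sketch for the 0-1 multidimensional knapsack problem."""
    rng = random.Random(seed)
    n, m = len(profits), len(capacities)

    def feasible(sol):
        return all(sum(weights[j][i] for i in sol) <= capacities[j] for j in range(m))

    best, best_profit = set(), 0
    for _ in range(iters):
        # Greedy randomized construction: pick items from a restricted
        # candidate list (RCL) of the most profitable still-feasible items.
        sol, candidates = set(), set(range(n))
        while True:
            rcl = sorted((i for i in candidates if feasible(sol | {i})),
                         key=lambda i: profits[i], reverse=True)
            rcl = rcl[:max(1, int(alpha * len(rcl)))]
            if not rcl:
                break
            item = rng.choice(rcl)
            sol.add(item)
            candidates.discard(item)
        # Local search: try swapping one chosen item for a more profitable one.
        improved = True
        while improved:
            improved = False
            for i in list(sol):
                for k in set(range(n)) - sol:
                    cand = (sol - {i}) | {k}
                    if feasible(cand) and profits[k] > profits[i]:
                        sol, improved = cand, True
                        break
                if improved:
                    break
        profit = sum(profits[i] for i in sol)
        if profit > best_profit:
            best, best_profit = set(sol), profit
    return best, best_profit

# Toy instance: 4 items, 2 resource constraints.
print(grasp_mkp([10, 7, 5, 3], [[4, 3, 2, 1], [2, 3, 4, 1]], [6, 6]))
```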
Computational Verification of Network Programs for Several OpenFlow Switches in Coq

OpenFlow is a network technology that enables central control of network equipment, complex packet forwarding, and flexible changes to network topologies. In OpenFlow networks, network equipment is separated into OpenFlow switches and OpenFlow controllers. OpenFlow switches do not have the controllers that usual network equipment has; OpenFlow controllers control OpenFlow switches and are configured by programs. Therefore, network configurations are realized by software. Such software can be created in several programming languages, NetCore being one of them. A verification method for NetCore programs has been introduced that uses Coq, a formal proof management system. This method, however, deals only with networks that consist of one OpenFlow switch. This paper proposes a methodology that verifies networks consisting of several OpenFlow switches.

Hiroaki Date, Noriaki Yoshiura
Parallelizing Simulated Annealing Algorithm in Many Integrated Core Architecture

The simulated annealing algorithm (SAA) is a well-established approach to the approximate solution of combinatorial optimisation problems. SAA allows for occasional uphill moves in an attempt to reduce the probability of becoming stuck in a poor but locally optimal solution. Previous work showed that SAA can find better solutions, but it takes much longer. In this paper, in order to harness the power of the very recent hybrid Many Integrated Core Architecture (MIC), we propose a new parallel simulated annealing algorithm customised for MIC. Our experiments with the Travelling Salesman Problem (TSP) show that our parallel SAA gains significant speedup.

Junhao Zhou, Hong Xiao, Hao Wang, Hong-Ning Dai
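For reference, the serial SAA skeleton that such parallel versions build on is sketched below for a TSP tour, using the Metropolis rule for the occasional uphill moves mentioned in the abstract; the cooling schedule, move type, and toy instance are assumptions, and the MIC-specific parallelisation is not shown.

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing_tsp(dist, t0=10.0, cooling=0.995, iters=20000, seed=0):
    """Serial simulated annealing for the TSP with segment-reversal (2-opt style) moves."""
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    rng.shuffle(tour)
    cost, t = tour_length(tour, dist), t0
    best, best_cost = tour[:], cost
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse a segment
        cand_cost = tour_length(cand, dist)
        # Metropolis rule: always accept improvements, sometimes accept uphill moves.
        if cand_cost < cost or rng.random() < math.exp((cost - cand_cost) / t):
            tour, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = tour[:], cost
        t *= cooling
    return best, best_cost

# Toy 4-city instance (symmetric distance matrix).
dist = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 8], [10, 4, 8, 0]]
print(simulated_annealing_tsp(dist))
```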
On Efficient SC-Based Soft Handoff Scheme in Proxy Mobile IPv6 Networks

This study examines mobility support functions in soft handoff and IP-based mobile networks, and distinguishes the control areas within a cell’s coverage area. Several important characteristics of cell configurations for soft handoff are used to propose new structures for efficient session control (SC) in mobile networks. A fixed-point strategy is proposed in order not only to determine the arrival rate of handoff traffic, but also to stably calculate the loss probabilities and set the optimum number of guard channels. We suggest a False Handoff Sessions (FHS) scheme for improving channel use efficiency based on mobility information. Numerical analyses indicate the efficiency of the presented Markov chain model and the advantage of the proposed soft handoff method.

Byunghun Song, Youngmin Kwon, Hana Jang, Jongpil Jeong, Jun-Dong Cho
Distributed Computing Infrastructure Based on Dynamic Container Clusters

Modern scientific and business applications often require fast provisioning of an infrastructure tailored to particular application needs. In turn, the actual physical infrastructure contains resources that might be underutilized by applications if allocated in dedicated mode (e.g., a process does not fully utilize the provided CPU or network connection). Traditional virtualization technologies can solve the problem partially; however, the overheads of bootstrapping a virtual infrastructure for each application and sharing physical resources might be significant. In this paper we propose and evaluate an approach to create and configure a dedicated computing environment tailored to the needs of particular applications, based on light-weight virtualization, also known as containers. We investigate available capabilities to model and create dynamic container-based virtual infrastructures sharing a common set of physical resources, and evaluate their performance on a set of test applications with different requirements.

Vladimir Korkhov, Sergey Kobyshev, Artem Krosheninnikov, Alexander Degtyarev, Alexander Bogdanov
Building a Virtual Cluster for 3D Graphics Applications

This paper discusses a possible approach to organizing a distributed visualization and rendering system infrastructure, based on a Linux environment with the use of virtualization technologies. Particular attention is paid to details that may be encountered during environment setup and operation and that may affect system performance and usability. Some applications and development tools are studied, as they can provide a quick start to exploring the computing resources.

Alexander Bogdanov, Andrei Ivashchenko, Alexey Belezeko, Vladimir Korkhov, Nataliia Kulabukhova, Dmitry Khmel, Sofya Suslova, Evgeniya Milova, Konstantin Smirnov
Great Deluge and Extended Great Deluge Based Job Scheduling in Grid Computing Using GridSim

Job scheduling is one of the most important research areas of Grid computing and has attracted much attention since its beginning. Job scheduling in Grid computing is an NP-complete problem due to Grid characteristics such as heterogeneity and dynamicity. Many heuristic algorithms have been proposed for Grid scheduling to make Grid computing viable. However, these heuristic methods are limited by the time constraints required for remapping jobs to Grid resources in such elastic and dynamic environments. Great Deluge (GD) is a practical solution for such a problem. Therefore, this paper presents Great Deluge and Extended Great Deluge (EGD) based scheduling algorithms for Grid computing. We also present a detailed implementation of GD and EGD in a reliable simulation platform, GridSim. This has two advantages. First, it eases the reimplementation process for future contributors, since developing such scheduling algorithms involves much complexity and ambiguity. Second, most research and experimental results, especially in the area of Grid scheduling, have used their own developed infrastructure to simulate the performance of their algorithms, so the question remains how well they would perform in a real-world environment. We also investigate the computation time and the number of soft-constraint violations of EGD against the conventional GD algorithm. The GD scheduling algorithm is able to provide a qualitative solution in shorter time for small Grid sizes, while EGD could produce schedules in shorter time for all cases.

Omid Seifaddini, Azizol Abdullah, Abdullah Muhammed, Masnida Hussin
Employing Docker Swarm on OpenStack for Biomedical Analysis

Biomedical analysis, in particular image and biosignal analysis, often requires several methods applied to the same data. The data is typically of large volume, so data transfer can become a bottleneck in remote analysis. Furthermore, biomedical data may contain patient data, raising data protection issues. We propose a highly virtualized infrastructure, employing Docker Swarm technology as the computing infrastructure. An underlying OpenStack-based IaaS cloud provides additional security features for a flexible and efficient multi-tenant analysis platform. We introduce the prototype infrastructure along with a sample use-case of multiple versions of a machine-learning method applied to feature sets extracted from multidimensional biosignal recordings from sleep apnea patients and healthy controls.

Christoph Jansen, Michael Witt, Dagmar Krefting
Untangling the Edits: User Attribution in Collaborative Report Writing for Emergency Management

This paper describes the progress of our ongoing research into collaborative report writing for emergency management using web technologies to improve communication and shared knowledge in emergency situations. Specifically, the work presented in this paper focuses on the development of a user attribution framework that enhances the Differential Synchronisation (diffsync) technique by exploiting its diff operation. This technique improves real-time collaborative editing for emergency management by combining the benefits of user attribution with diffsync features such as convergence, scalability, and robustness to poor network environments. As a proof of concept, we implement a prototype collaborative system and report results of simulations to test scalability, efficiency, and correctness. Further, we consider the potential benefits of this framework for web-based collaborative report writing in the context of emergency management.

Adrian Shatte, Jason Holdsworth, Ickjai Lee
An Improved Reconfiguration Algorithm for VLSI Arrays with A-Star

This paper describes a novel technique to speed up the reconfiguration of VLSI arrays. We propose an efficient algorithm based on the A-star algorithm for accelerating reconfiguration of power-efficient VLSI processor subarrays, to meet the real-time constraints and lower power consumption of embedded systems. The proposed algorithm treats the problem of constructing a local optimal logical column that has the minimum number of long interconnects as a shortest path problem. The local optimal column can then be constructed by applying the A-star algorithm with an appropriate heuristic strategy. The proposed algorithm greatly reduces the number of visits to the fault-free PEs for constructing a local optimal logical column and effectively decreases the reconfiguration running time. Experimental results show that the computation time can be improved by more than 38.64 % for a 128 × 128 host array with a fault density of 20 %, without loss of harvest.

Junyan Qian, Zhide Zhou, Lingzhong Zhao, Tianlong Gu
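As a generic reference for the search component, a minimal A-star sketch on a 4-connected grid with a Manhattan-distance heuristic follows; the paper's actual state space and cost model (logical columns with a minimum number of long interconnects) are not reproduced here.

```python
import heapq

def a_star(grid, start, goal):
    """A-star on a 4-connected grid; cells with value 1 are blocked.
    The Manhattan distance serves as an admissible heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), 0, start, [start])]
    closed = set()
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                if nxt not in closed:
                    heapq.heappush(open_heap, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(a_star(grid, (0, 0), (2, 0)))  # path around the blocked middle row
```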
Performance Evaluation of MAC Protocols in Energy Harvesting Wireless Sensor Networks

Wireless sensor networks are considered one of the most important elements of the upcoming Internet of Things. As sensor-based applications are widely deployed, the limited battery power of sensor nodes becomes a serious problem. Intrusions or malfunctions of legitimate sensor protocols can lead to the quick depletion of sensor batteries and a network failure. Energy harvesting facilities provide an attractive solution to this problem. The potential of energy harvesting wireless sensor networks can only be properly realized if correspondingly efficient network operation protocols are implemented. Development of these protocols requires corresponding mathematical tools for system performance evaluation. Most papers in this area focus on some concrete technical problem, but there is a lack of papers analyzing the common operating principles of energy harvesting wireless sensor networks. In this paper we partially fill this gap.

Vladimir Shakhov
Fog Networking: An Enabler for Next Generation Internet of Things

Fog networking, an emerging concept in the context of cloud computing, is the idea of bringing computation, communication and storage close to edge devices. Fog computing can offer low latency, geographically distributed mobile applications, and distributed control systems. On the other hand, software-defined networking is a concept that makes networking flexible and programmable. Together, these two technologies can create flexible and scalable networks to handle the heterogeneous and massively increasing applications of IoT. In this paper we discuss how these two technologies can interplay with each other to enable next generation IoT. We discuss why these technologies are important for IoT and what current architectures are available to support IoT.

Saad Qaisar, Nida Riaz
Application of Optimization of Parallel Algorithms to Queries in Relational Databases

All known approaches to parallel data processing in relational client-server database management systems are based only on inter-query parallelism. Nevertheless, it is possible to achieve intra-query parallelism by considering the query structure and applying mathematical methods of parallel computing for its equivalent transformation. This article presents an example of complex query parallelization and describes the applicability of graph theory and methods of parallel computing to both query parallelization and optimization.

Yulia Shichkina, Alexander Degtyarev, Dmitry Gushchanskiy, Oleg Iakushkin
Factory: Master Node High-Availability for Big Data Applications and Beyond

Master node fault-tolerance is a topic that is often neglected in discussions of big data processing technologies. Although failure of a master node can take down the whole data processing pipeline, this is considered either improbable or too difficult to handle. The aim of the studies reported here is to propose a rather simple technique to deal with master-node failures. This technique is based on temporary delegation of the master role to one of the slave nodes and transferring the updated state back to the master when one step of computation is complete. That way the state is duplicated, and computation can proceed to the next step regardless of a failure of the delegate or the master (but not both). We run benchmarks to show that a failure of the master is almost “invisible” to other nodes, and failure of the delegate results in recomputation of only one step of the data processing pipeline. We believe that the technique can be used not only in Big Data processing but in other types of applications.

Ivan Gankevich, Yuri Tipikin, Vladimir Korkhov, Vladimir Gaiduchok, Alexander Degtyarev, Alexander Bogdanov
Petri Nets for Modelling of Message Passing Middleware in Cloud Computing Environments

Cloud systems allow parallel applications to run on solutions with distributed heterogeneous architectures. Software development for a heterogeneous distributed environment requires a module-based design. The components in such a module system are connected by means of a telecommunications network enabling message passing. This article describes an interaction model for components in distributed applications. The model was designed based on the paradigm of Variable Speed Hybrid Petri Nets and allows system performance to be analysed at various tiers: selection of the optimum approach to load balancing between components; making scaling decisions to enhance the performance of certain modules; and fine-tuning the interaction between system components. The model is not contingent on the particular tools a user might employ to implement a solution; it also provides monitoring data integration functionality. The model contains descriptions of standard messaging patterns linking components of distributed applications. These patterns include request-reply and publish-subscribe. Load balancing algorithms for various usage schemes of these patterns have been developed for a cloud environment.

Oleg Iakushkin, Yulia Shichkina, Olga Sedova

Geometric Modeling, Graphics and Visualization

Frontmatter
A Comparative Study of LOWESS and RBF Approximations for Visualization

Approximation methods are widely used in many fields and many techniques have been published already. This comparative study presents a comparison of the LOWESS (locally weighted scatterplot smoothing) and RBF (Radial Basis Functions) approximation methods on noisy data, as they use different approaches. The RBF approach is generally convenient for high dimensional scattered data sets. The LOWESS method needs to find a subset of nearest points if the data are scattered. The experiments proved that LOWESS approximation gives slightly better results than RBF in the lower dimensional case, while in the higher dimensional case with scattered data the RBF method has lower computational complexity.

Michal Smolik, Vaclav Skala, Ondrej Nedved
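For orientation, the sketch below shows plain RBF approximation with Gaussian basis functions (solve a linear system for the weights, then evaluate the approximant); the shape parameter, basis choice, and 1D example are assumptions, and the LOWESS side of the comparison is not shown.

```python
import numpy as np

def rbf_fit(centers, values, shape=1.0):
    """Solve for weights w such that sum_j w_j * phi(|x_i - c_j|) ~= f(x_i)."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-(shape * d) ** 2)            # Gaussian radial basis function
    return np.linalg.lstsq(phi, values, rcond=None)[0]

def rbf_eval(x, centers, weights, shape=1.0):
    """Evaluate the RBF approximant at query points x."""
    d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(shape * d) ** 2) @ weights

# 1D example: approximate sin on a handful of sample points.
centers = np.linspace(0, np.pi, 7).reshape(-1, 1)
values = np.sin(centers).ravel()
w = rbf_fit(centers, values)
x = np.array([[0.5], [1.5], [2.5]])
print(rbf_eval(x, centers, w))                 # close to sin(0.5), sin(1.5), sin(2.5)
```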
Improving the ANN Classification Accuracy of Landsat Data Through Spectral Indices and Linear Transformations (PCA and TCT) Aimed at LU/LC Monitoring of a River Basin

In this paper an efficient Artificial Neural Network (ANN) classification method based on LANDSAT satellite data is proposed, studying the Cervaro river basin area (Foggia, Italy). LANDSAT imagery acquisition dates of 1984, 2003, 2009 and 2011 were selected to produce Land Use/Land Cover (LULC) maps covering a time trend of 28 years. Land cover categories were chosen with the aim of characterizing land use according to the level of surface imperviousness. Nine synthetic bands from the PC, Tasseled Cap (TC), Brightness Temperature (BT) and vegetation indices (Leaf Area Index LAI and the Modified Soil Adjusted Vegetation Index MSAVI) were identified as the most effective for the classification procedure. The advantages of using the ANN approach were confirmed, as it does not require a priori knowledge of the distribution model of the input data. The results quantify land cover change patterns in the river basin area under study and demonstrate the potential of multitemporal LANDSAT data to provide an accurate and cost effective means to map and analyze land cover changes over time that can be used as input for subsequent hydrological and planning analysis.

Antonio Novelli, Eufemia Tarantino, Grazia Caradonna, Ciro Apollonio, Gabriella Balacco, Ferruccio Piccinni
Automatic Temporal Segmentation of Articulated Hand Motion

This paper introduces a novel and efficient segmentation method designed for articulated hand motion. The method is based on a graph representation of temporal structures in human hand-object interaction. Along with the method for temporal segmentation we provide an extensive new database of hand motions. The experiments performed on this dataset show that our method is capable of a fully automatic hand motion segmentation which largely coincides with human user annotations.

Katharina Stollenwerk, Anna Vögele, Björn Krüger, André Hinkenjann, Reinhard Klein
A Method for Predicting Words by Interpreting Labial Movements

The study of lip movements is relevant to a series of interesting real-world applications that enhance the means of communication, as well as to medical applications. In the present paper we illustrate a method we implemented with the purpose of helping Amyotrophic Lateral Sclerosis (ALS) patients to communicate once the progress of the disease requires intubating the patient and the voice is lost. The method uses several subsystems to carry out such a complex task, and the results are promising. However, the method needs to be improved in order to make the system easier to use and more reliable in predicting the pronounced words.

Osvaldo Gervasi, Riccardo Magni, Matteo Ferri
A New 3D Augmented Reality Application for Educational Games to Help Children in Communication Interactively

In recent years, the use of technology to help children through augmented and alternative communication (AAC) has become an extremely vital task. In this paper, a novel three-dimensional human-computer interaction application based on augmented reality (AR) technology is presented for assisting children with special communication problems, for social innovation. To begin with, a three-dimensional human hand model is constructed to estimate and track users’ hand positions. An extended particle filter is applied to calculate the pose of the background and the positions of children. Likelihoods based on the edge map of the image and pixel color values are used to estimate the joint likelihood in the three-dimensional model. A flexible real-time hand tracking framework using the ‘golden energy’ scoring function is integrated for capturing regions of interest. An inertial tracking technique is used for calculating the quaternion. Three-dimensional models from Google SketchUp are employed. We then use a built QR code for scanning to access the system, and then select a three-dimensional designed cartoon character by applying the Vidinoti image application. After that, representative three-dimensional cartoons and augmented environments are overlaid in order to entertain the children. A printed coloring photo, called Augmented Flexible Tracking, is designed and provided in the system for visualization. The system runs in real time. Our experiments have revealed that the system is beneficial both quantitatively and qualitatively for assisting children with special needs in communicating interactively.

Chutisant Kerdvibulvech, Chih-Chien Wang
An Improved, Feature-Centric LoG Approach for Edge Detection

A Gaussian filter is used to smooth the input image to prevent false edge detection caused by image noise in the classic LoG edge detector, but it weakens the image features at the same time, with the result that some edges cannot be detected efficiently. To ameliorate this, this paper presents an improved, feature-centric LoG approach for edge detection. It first uses a non-local means filter based on a structural similarity measure, instead of a Gaussian filter, to smooth the input image, which preserves image features better; image edges are then extracted efficiently by the zero-crossing method from the smoothed image operated on by the Laplacian operator. Experimental results show that the proposed method improves the edge detection precision of the classic LoG edge detector, and the non-local means filter used in the presented method achieves better results than the other two typical filters with edge-preserving ability.

Jianping Hu, Xin Tong, Qi Xie, Ling Li
The Use of Geoinformation Technology, Augmented Reality and Gamification in the Urban Modeling Process

The aim of the paper is to present the concept of a geoinformation technology, extended by a so-called augmented reality (AR) module, to support the social geoparticipation process with respect to spatial planning. The authors propose to increase the level of residents’ activity by using gamification tools adapted to the cultural, historical, economic and social reality of the given city as well as precise 3D modelling techniques and VR/AR tools. The authors strongly believe that the implementation of this idea will enable not only the improvement of the spatial planning process but also the development of an open geoinformation society that will create the “smart cities” of the future.

Miłosz Gnat, Katarzyna Leszek, Robert Olszewski
A Study of Virtual Reality Headsets and Physiological Extension Possibilities

Since several world-renowned companies made serious investments in VR headsets in 2014, everything about VR has reemerged from the depths of fiction-like prototypes to the surface of actual consumer products. This reemergence is much stronger than in the past, with very active support from commercial VR content creation devices like 360-degree cameras. In this paper, we study important triggers and trends related to VR headsets from 2014 to now. In detail, we discuss the timeline of important events, high-potential related products together with example usage scenarios, major user experience concerns with possible solutions, and future possibilities led by VR headsets. We also propose novel usage scenarios in which recently popular heart rate monitoring wearable devices are used in combination with VR headsets in order to open up a new communication channel between the headset and the wearer.

Thitirat Siriborvornratanakul
A Novel Integrated System of Visual Communication and Touch Technology for People with Disabilities

Due to the current popularity of the Internet of Things (IoT), the research topic of communicating with, connecting, and supporting people remotely through the internet has become very popular in computational science in recent years. This paper presents a new integrated application to assist people with disabilities, based on enhanced technologies of visual and touch communication that exploit innovative information and communication technology (ICT). Our research aim is to help hearing-impaired people communicate both visually and affectively with loved ones who may live far away in a different part of the world. By integrating an augmented reality application for visual communication and a wearable jacket for touch communication, the system is able to support hard-of-hearing people through a human-computer interaction experience. A Google Cardboard is also built to allow people with hearing loss and deafness to have an immersive visual experience using augmented reality for geometric visualization. A hugging communication wearable tool, called T.Jacket, using sensor technology is then extended and applied to assist disabled people in hugging their loved ones remotely by reproducing an artificial hug sensation between two people. Experimental results are also included to show the robustness of the proposed integrated application.

Chutisant Kerdvibulvech
A Multi-classifier Combination Method Using SFFS Algorithm for Recognition of 19 Human Activities

In order to investigate human activity recognition and classification, which are significant for human-computer interaction (HCI), a multi-classifier combination method using the Sequential Forward Feature Selection (SFFS) algorithm is proposed in this paper for the recognition of 19 human daily and sports activities. The dataset, collected by wearable sensor units, is obtained from the UCI Machine Learning Repository. The main steps of this method are: (1) extracting features from the raw sensor data after preprocessing; (2) reducing features by the SFFS algorithm; (3) classifying activities by 10-fold cross validation with a multi-classifier combination algorithm based on grid search for parameter optimization. The experimental results indicate that, compared with other traditional activity recognition methods, which use principal component analysis (PCA) to reduce features or use a single classifier to classify activities, the multi-classifier combination method using SFFS achieves the best recognition performance with an average classification accuracy of 99.91 %.

Feng Lu, Danfeng Wang, Haoying Wu, Wei Xie
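As context for step (2), a generic sequential forward selection loop is sketched below with a stand-in k-NN evaluator from scikit-learn; the paper's SFFS variant, its multi-classifier combination, and the grid-searched parameters are not reproduced, and the synthetic data is purely illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def sequential_forward_selection(X, y, n_select, scorer=None):
    """Greedy forward selection: at each step add the feature whose inclusion
    gives the best cross-validated accuracy of a stand-in k-NN classifier."""
    scorer = scorer or (lambda Xs: cross_val_score(
        KNeighborsClassifier(n_neighbors=3), Xs, y, cv=3).mean())
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select and remaining:
        scores = [(scorer(X[:, selected + [f]]), f) for f in remaining]
        best_score, best_f = max(scores)
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

# Tiny synthetic example: only feature 0 is informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = (X[:, 0] > 0).astype(int)
print(sequential_forward_selection(X, y, n_select=2))
```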
Embedded Implementation of Template Matching Using Correlation and Particle Swarm Optimization

Template matching is an important technique used in pattern recognition. The goal is to find a given pattern, from a prescribed model, in a frame sequence. In order to evaluate the similarity of two images, the Pearson's Correlation Coefficient (PCC) is widely used. This coefficient is calculated for each of the image pixels, which entails a computationally very expensive operation. This paper proposes the implementation of template matching using the PCC-based method together with Particle Swarm Optimization (PSO) as an embedded system. This approach allows great versatility in using this kind of system in portable equipment. The results indicate that PSO is up to 158x faster than brute-force exhaustive search. The resulting co-design, with the PCC computation implemented in hardware and the PSO process in software, is thus a viable way to achieve real-time template matching, which is a prerequisite in real-world applications.

Yuri Marchetti Tavares, Nadia Nedjah, Luiza de Macedo Mourelle
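For reference, the sketch below computes Pearson's correlation coefficient between a template and every candidate position by brute force, which is the exhaustive search that the paper accelerates with PSO and a hardware PCC unit; the toy frame and sizes are assumptions.

```python
import numpy as np

def pcc(a, b):
    """Pearson's correlation coefficient between two equally-sized patches."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a @ b) / denom if denom > 0 else 0.0

def match_template(frame, template):
    """Brute-force search: evaluate the PCC at every position and keep the best.
    (The paper replaces this exhaustive scan with a PSO-driven search.)"""
    fh, fw = frame.shape
    th, tw = template.shape
    best_pos, best_score = None, -2.0
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            score = pcc(frame[y:y + th, x:x + tw], template)
            if score > best_score:
                best_pos, best_score = (y, x), score
    return best_pos, best_score

# Toy example: plant the template inside a noisy frame and recover its position.
rng = np.random.default_rng(1)
frame = rng.normal(size=(32, 32))
template = rng.normal(size=(8, 8))
frame[10:18, 5:13] = template
print(match_template(frame, template))  # -> ((10, 5), ~1.0)
```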
Visualizing High Dimensional Feature Space for Feature-Based Information Classification

Feature-based approaches represent an important paradigm in content-based information retrieval and classification. We present a visual approach to information retrieval and classification by interactively exploring the high dimensional feature space through visualization of 3D projections. We show how the grand tour can be used for 3D visual exploration of high dimensional feature spaces. Points that represent high dimensional feature observations are linearly projected into a 3D viewable subspace. Volume rendering using splatting is used to visualize data sets with a large number of records. It takes as input only aggregations of data records that can be calculated on the fly by database queries. The approach scales well to high dimensionality and large numbers of data records. Experiments on real world feature datasets show the usefulness of this approach to display feature distributions and to identify interesting patterns for further exploration.

Xiaokun Wang, Li Yang
Online Appearance Manifold Learning for Video Classification and Clustering

Video classification and clustering are key techniques in multimedia applications such as video segmentation and recognition. This paper investigates the application of incremental manifold learning algorithms to directly learn nonlinear relationships among video frames. Video frame classification and clustering are performed to the projected data in an intrinsic latent space. This approach has avoided partitioning video frames into arbitrary groups. It works even when the input video frames are under-sampled or unevenly distributed. Experiments show that video classification and clustering give better results in the latent space than in the original high dimensional space.

Li Yang, Xiaokun Wang
Patch Based Face Recognition via Fast Collaborative Representation Based Classification and Expression Insensitive Two-Stage Voting

Small sample size (SSS) is one of the most challenging problems in face recognition (FR). Recently, collaborative representation based classification with l2-norm regularization (CRC) has shown very effective face recognition performance with low computational cost. Patch based CRC (PCRC) can also handle the SSS problem well, and a more effective method conducts PCRC at different scales with various patch sizes (MSPCRC). However, computing reconstruction residuals on all patches is still time consuming. In this paper, we aim to improve performance for the SSS problem in face recognition and to decrease the computational cost. First, fast collaborative representation based classification (FCRC) is proposed to further decrease the computational cost of CRC. Instead of computing the reconstruction residual on all classes, FCRC computes the residual on a small subset of classes that have large coefficients; this strategy makes full use of the discriminative power of the representation coefficients while decreasing the computational cost. Our experimental results show that FCRC has a significantly lower computational cost than CRC and slightly outperforms it. FCRC is especially powerful when applied on patches. To further improve performance under varying expression, we use a two-stage voting method to combine the recognition outputs of all patches. Extended experiments show that the proposed two-stage voting based FCRC (TSPFCRC) outperforms many state-of-the-art face recognition algorithms and has a significantly lower computational cost.

Decheng Yang, Weiting Chen, Jiangtao Wang, Yan Xu
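For orientation, the core CRC step (ridge-regularised coding of a test sample over the training dictionary, then class-wise reconstruction residuals) is sketched below; the regularisation value and toy data are placeholders, and the patch decomposition, the FCRC class-subset shortcut, and the two-stage voting are not shown.

```python
import numpy as np

def crc_classify(X_train, labels, y, lam=0.01):
    """Collaborative representation classification with l2 regularisation.
    X_train: (d, n) dictionary whose columns are training samples,
    labels:  length-n array of class labels, y: (d,) test sample."""
    d, n = X_train.shape
    # Ridge-regularised coding: alpha = (X^T X + lam * I)^-1 X^T y
    alpha = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n), X_train.T @ y)
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        # Reconstruct y using only the coefficients belonging to class c.
        residuals[c] = np.linalg.norm(y - X_train[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)

# Toy example: two well-separated classes in 3-D feature space.
X = np.array([[1, 1, 0, 0], [0, 0, 1, 1], [0.1, 0, 0, 0.1]])
labels = np.array([0, 0, 1, 1])
print(crc_classify(X, labels, np.array([0.9, 0.1, 0.05])))  # -> 0
```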
A Video Self-descriptor Based on Sparse Trajectory Clustering

In order to describe the main movement of a video, a new motion descriptor is proposed in this work. We combine two methods for estimating the motion between frames: block matching and the image brightness gradient. We use a variable-size block matching algorithm to extract displacement vectors as motion information. The cross product between the block matching vector and the gradient is used to obtain the displacement vectors. These vectors are computed over a frame sequence, obtaining the block trajectory which contains the temporal information. The block matching vectors are also used to cluster the sparse trajectories according to their shape. The proposed method computes this information to obtain orientation tensors and to generate the final descriptor. The global tensor descriptor is evaluated by classification of the KTH, UCF11 and Hollywood2 video datasets with a non-linear SVM classifier. Results indicate that our sparse trajectory method is competitive with the well-known dense trajectories approach using orientation tensors, while requiring less computational effort.

Ana Mara de Oliveira Figueiredo, Marcelo Caniato, Virgínia Fernandes Mota, Rodrigo Luis de Souza Silva, Marcelo Bernardes Vieira
Facial Expression Recognition Using String Grammar Fuzzy K-Nearest Neighbor

Facial expression recognition can provide rich emotional information for human-computer interaction. It has recently become a more and more interesting problem. Therefore, we propose a facial expression recognition system using the string grammar fuzzy K-nearest neighbor. We test our algorithm on 3 data sets, i.e., the Japanese Female Facial Expression (JAFFE) database, the Yale database, and the Project-Face In Action (FIA) Face Video Database, AMP, CMU (CMU AMP) face expression database. The system yields 89.67 %, 61.80 %, and 96.82 % on JAFFE, Yale and CMU AMP, respectively. We also compare our results indirectly with existing algorithms. We consider that our algorithm provides results comparable to those of the existing algorithms without the need to crop an image beforehand.

Payungsak Kasemsumran, Sansanee Auephanwiriyakul, Nipon Theera-Umpon
Living in the Golden Age of Digital Archaeology

The aim of this work is to provide a short overview of the digital technologies most commonly available and used today for historical landscape investigations as well as for archaeological and palaeo-environmental studies. One of the main advantages of these techniques is their capability to provide a huge amount of information in a non-invasive, non-destructive way, also protecting and preserving cultural heritage. The impact of digital technologies on archaeology concerns researchers, professionals and end-users, and enables us not only to improve knowledge and documentation, but also fruition and sustainable touristic exploitation, as well as management and monitoring.

Rosa Lasaponara, Nicola Masini
Low Cost Space Technologies for Operational Change Detection Monitoring Around the Archaeological Area of Esna-Egypt

Cultural sites are continuously threatened by natural hazardous processes and human intervention. There is general agreement on the need for their protection for present and future generations. The approach to doing so, however, is often unclear, technologically inadequate, and/or lacking in financing. Pollution, urban encroachment, population pressure and major development projects are seriously impinging on the precious material heritage of man in innumerable cases. Remote sensing and GIS provide a historical database from which hazard maps may be generated, indicating which areas are potentially dangerous. Hazard zonation must be the basis for any environmental risk management project and should supply planners and decision-makers with adequate and understandable information. The objective of this paper is the detection and mapping of urban sprawl and agricultural areas around Esna city in order to assess their impact on the Esna temple and to propose mitigation measures.

Rosa Lasaponara, Abdelaziz Elfadaly, Wael Attia
Satellite Based Monitoring of Natural Heritage Sites: The Case Study of the Iguazu Park

Nowadays, satellite data have become increasingly available, offering a low-cost or even free-of-charge unique tool with great potential for operational monitoring of vegetation cover, quantitative assessment of urban expansion and urban sprawl, as well as monitoring of land use changes and soil consumption. This growing observational capacity has also highlighted the need for research efforts aimed at exploring the potential offered by data processing methods and algorithms, in order to exploit this invaluable space-based data source as much as possible. The work presented here concerns an application study on the monitoring of vegetation cover with the use of multitemporal (2010–2014) satellite MODIS data. The selected test site is the Iguazu park, which is highly significant, being one of the most threatened global conservation priorities (http://whc.unesco.org/en/list/303/). In order to produce synthetic maps of the investigated areas to monitor the status of vegetation and ongoing subtle changes, satellite data were processed using Principal Component Analysis (PCA). Results from our investigations pointed out an ongoing degradation trend.

Antonio Lanorte, Angelo Aromando, Gabriele Nolè, Rosa Lasaponara
Backmatter
Metadata
Title
Computational Science and Its Applications – ICCSA 2016
Edited by
Osvaldo Gervasi
Beniamino Murgante
Sanjay Misra
Ana Maria A.C. Rocha
Carmelo M. Torre
David Taniar
Bernady O. Apduhan
Elena Stankova
Shangguang Wang
Copyright Year
2016
Electronic ISBN
978-3-319-42108-7
Print ISBN
978-3-319-42107-0
DOI
https://doi.org/10.1007/978-3-319-42108-7
