
About This Book

This book presents the latest research findings, as well as innovative theoretical and practical research results, methods and development techniques related to P2P, grid, cloud and Internet computing. It also reveals the synergies among such large scale computing paradigms.

P2P, Grid, Cloud and Internet computing technologies have rapidly become established as breakthrough paradigms for solving complex problems by enabling aggregation and sharing of an increasing variety of distributed computational resources on a large scale.

Grid computing originated as a paradigm for high-performance computing, offering an alternative to expensive supercomputers through different forms of large-scale distributed computing. P2P computing emerged as a new paradigm following on from client-server and web-based computing, and has proved useful in the development of social networking, B2B (Business to Business), B2C (Business to Consumer), B2G (Business to Government), and B2E (Business to Employee) applications. Cloud computing has been described as a “computing paradigm where the boundaries of computing are determined by economic rationale rather than technical limits”; it has fast become the computing paradigm with applicability and adoption in all domains, providing utility computing at large scale. Lastly, Internet computing is the basis of any large-scale distributed computing paradigm. It has very quickly developed into a vast and flourishing field with enormous impact on today’s information societies, serving as a universal platform that comprises a large variety of computing forms such as grid, P2P, cloud and mobile computing.



13th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC-2018)

iDBP: A Distributed Min-Cut Density-Balanced Algorithm for Incremental Web-Pages Ranking

Link analysis on a distributed system is a viable choice for evaluating relationships between web-pages in a large web-graph. Each computational processor in the system contains a partial local web-graph and locally performs web ranking. Since distributed web ranking generally incurs penalties on execution time and accuracy from data synchronization, a web-graph can be partitioned into a desired structure before a link analysis algorithm is started, in order to improve execution time and accuracy. However, in real-world situations, the number of web-pages in the web-graph can increase continuously. Therefore, a link analysis algorithm has to re-partition the web-graph and re-perform web-page ranking every time new web-pages are collected. In this paper, an efficient distributed web-page ranking algorithm with min-cut density-balanced partitioning is proposed to improve the execution time of this scenario. The algorithm re-partitions the web-graph and re-performs the web-page ranking only when necessary. The experimental results show that the proposed algorithm outperforms conventional approaches in terms of ranking execution time and accuracy.

Sumalee Sangamuang, Pruet Boonma, Juggapong Natwichai

Fault-Tolerant Fog Computing Models in the IoT

A huge number of devices like sensors are interconnected in the IoT (Internet of Things). In order to reduce the traffic of networks and servers, the IoT is realized by the fog computing model, in which data, and the processes that handle the data, are distributed not only to servers but also to fog nodes. In our previous studies, the tree-based fog computing (TBFC) model was proposed to reduce the total electric energy consumption. However, if a fog node is faulty, some sensor data cannot be processed in the TBFC model. In this paper, we propose a fault-tolerant TBFC (FTBFC) model, with non-replication and replication variants to make fog nodes fault-tolerant. In the non-replication FTBFC model, another operational fog node takes over a faulty fog node. We evaluate the non-replication FTBFC models in terms of electric energy consumption and execution time.

Ryuji Oma, Shigenari Nakamura, Dilawaer Duolikun, Tomoya Enokido, Makoto Takizawa

Semi-synchronicity Enabling Protocol and Pulsed Injection Protocol for a Distributed Ledger System

Distributed ledger technologies have a central problem: latency. When transactions are to be accepted into the ledger, latency is incurred by transaction processing and verification, and for efficient governance of the ledger, high latency should be avoided. To help reduce latency, we offer a distributed ledger architecture, Tango, that mimics the IOTA Tangle design as articulated by Popov [1] in his seminal paper. We introduce a semi-synchronous transaction entry protocol layer to avoid asynchronism in the system, since an asynchronous system has high latency. We further model periodic pulsed injections into the evaluation layer from the entry layer to regulate the performance of the system.

Bruno Andriamanalimanana, Chen-Fu Chiang, Jorge Novillo, Sam Sengupta, Ali Tekeoglu

WebSocket-Based Real-Time Single-Page Application Development Framework

The main feature of WebSocket is establishing a persistent link between the client and the server, enabling them to perform full-duplex communication. This protocol can effectively address the communication issues within browsers. In many cases, however, the persistent link established by WebSocket is not fully utilized. In fact, WebSocket can also implement most functions of the HTTP protocol, but doing so involves additional difficulty and workload, and mature solutions and libraries are lacking. Therefore, we develop a WebSocket-based web application development framework that takes full advantage of the features and benefits of WebSocket and combines them with the popular single-page application development model, allowing developers to quickly build efficient and reliable web applications on top of our framework. Several experiments have been carried out and the results are presented to show the performance of the WebSocket framework.

Hao Qu, Kun Ma

Texture Estimation System of Snacks Using Neural Network Considering Sound and Load

This paper aims at the construction of a system that estimates the texture of snacks. The authors have rebuilt the equipment from the ground up in order to examine various foods. The system consists of an original apparatus and a simple neural network model. The apparatus examines the food by compressing it and observing load and sound simultaneously. The inputs of the neural network model are parameters expressing characteristics of the load change and the sound data. The model outputs a numerical value in the range [0, 1] representing the level of textures such as “crunchiness” and “crispness”. In order to validate the usefulness of the neural network model, an experiment is carried out using three kinds of snacks: rice crackers, potato chips and cookies. The model estimates appropriate texture values for snacks that were not used for training the neural network model.

Shigeru Kato, Naoki Wada, Ryuji Ito, Takaya Shiozaki, Yudai Nishiyama, Tomomichi Kagawa

Blockchain-Based Trust Communities for Decentralized M2M Application Services

Trust evaluation in decentralized M2M communities, where several end-users independently provide or consume M2M application services, enables the identification of untrustworthy nodes and increases the security level of the community. Several trust management systems using different trust evaluation techniques have been presented in the M2M application field. However, most of them do not provide a secure way to store the computed trust values in the community. Moreover, the trust agents participating in the trust evaluation process are not securely identified, which could lead to misbehavior among the trust agents and result in unreliable trust values. This research identifies several problems regarding decentralized M2M application services and the trust evaluation process. In order to overcome these issues, it proposes a novel approach that integrates blockchain technology into trust evaluation processes. Moreover, this publication presents a concept for using blockchain within a system for decentralized M2M application service provision. Finally, the combination of a P2P overlay and a blockchain network is introduced in order to verify the integrity of data.

Besfort Shala, Ulrich Trick, Armin Lehmann, Bogdan Ghita, Stavros Shiaeles

Parameterized Pulsed Transaction Injection Computation Model and Performance Optimizer for IOTA-Tango

To keep a cryptocurrency system at its optimal performance, it is necessary to utilize the resources and avoid latency in its network. To achieve this goal, it is crucial to inject unverified transactions dynamically and efficiently, enabling synchronicity based on the current system configuration and network traffic. To meet this need, we design the pulsed transaction injection parameterization (PTIP) protocol to provide a preliminary dynamic injection mechanism. To further assist the network in achieving its subgoals based on various house policies (such as maximal revenue to the network or maximum throughput of the system), we turn the house-policy-based optimization into a 0/1 knapsack problem. To solve these NP-hard problems efficiently, we adapt and improve a fully polynomial time approximation scheme (FPTAS) and dynamic programming as components in our approximate optimization algorithm.

Bruno Andriamanalimanana, Chen-Fu Chiang, Jorge Novillo, Sam Sengupta, Ali Tekeoglu
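The house-policy optimization above reduces to a 0/1 knapsack problem. As a minimal sketch of the underlying machinery (not the authors' FPTAS, which additionally rescales item values to trade accuracy for speed), the textbook pseudo-polynomial dynamic program looks like this; the item values and weights are illustrative:

```python
def knapsack_max_value(items, capacity):
    """Exact 0/1 knapsack via dynamic programming over capacities, O(n * W).

    items: list of (value, weight) pairs; capacity: integer weight budget W.
    An FPTAS would instead run a value-indexed DP on values rescaled by a
    factor derived from the accuracy parameter epsilon.
    """
    best = [0] * (capacity + 1)
    for value, weight in items:
        # iterate capacities downwards so each item is selected at most once
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]
```

For example, with items [(60, 10), (100, 20), (120, 30)] and capacity 50, the optimum picks the last two items for a total value of 220.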

A Real-Time Fog Computing Approach for Healthcare Environment

The increased use of IoT has contributed to the popularization of environments that monitor the daily activities and health of the elderly, children or people with disabilities. The requirements of these environments, such as low latency and rapid response, support the association of fog computing with healthcare environments, since providing low latency is one of fog's advantages. For this reason, we propose a hardware and software infrastructure capable of storing, processing and presenting monitoring data in real time, based on the fog computing paradigm. The main objective of our proposal is that the data be manipulated and processed under a hard time constraint.

Eliza Gomes, M. A. R. Dantas, Patricia Plentz

On Construction of a Caffe Deep Learning Framework based on Intel Xeon Phi

With the increase in processor computing power has come a substantial rise in the development of many scientific applications, such as weather forecasting, financial market analysis and medical technology, and the need for more intelligent data processing is growing significantly. Deep learning, as a framework able to understand abstract information such as images, text and sound, has become a challenging area in recent research. This makes accuracy and speed essential when implementing a large neural network. Therefore, in this paper we implement the Caffe deep learning framework on Intel Xeon Phi and measure the performance of this environment through three experiments. First, we evaluated the accuracy of the Caffe deep learning framework over several numbers of iterations on Intel Xeon Phi. For the speed evaluation, in the second experiment we compared the training time before and after optimization on Intel Xeon E5-2650 and Intel Xeon Phi 7210, using vectorization, OpenMP parallel processing and the Message Passing Interface (MPI) for optimization. In the third experiment, we compared multi-node execution results on two nodes of Intel Xeon E5-2650 and two nodes of Intel Xeon Phi 7210.

Chao-Tung Yang, Jung-Chun Liu, Yu-Wei Chan, Endah Kristiani, Chan-Fu Kuo

A Brief History of Self-destructing Data: From 2005 to 2017

2018 marks the 13th anniversary of Radia Perlman's introduction of the concept of self-destructing data in 2005. Even after all the big data-leakage events, such as PRISM and the Instagram photo leaks, big data still maintains its leading position among internet buzzwords. This paper strives to review and summarize the research on self-destructing data over the past decade. Comparisons between landmark methods and systems have also been made, with the purpose of inspiring graduate students and researchers to contribute to this field.

Xiao Fu, Zhijian Wang, Yong Chen, Yunfeng Chen, Hao Wu

The Implementation of a Hadoop Ecosystem Portal with Virtualization Deployment

The requirements of researching, analyzing, processing and storing big data are more and more important, because big data is increasingly vital for development in fields such as information technology, finance and medicine. Most big data environments are built on Hadoop or Spark. However, constructing these kinds of big data platforms is not easy for ordinary users, owing to the lack of professional knowledge and familiarity with the system. To make the big data platform easier to use for data processing and analysis, we implemented a web user interface integrating the big data platform, including Hadoop and Spark. We then packaged the whole big data platform, along with the web user interface, into a virtual machine image file so that users can construct the environment and do their jobs more quickly and efficiently. The web user interface not only reduces the difficulty of building a big data platform and saves time, but also provides excellent system performance. We also compared the performance of the web user interface and the command line using the HiBench benchmark suite.

Chao-Tung Yang, Chien-Heng Wu, Wen-Yi Chang, Whey-Fone Tsai, Yu-Wei Chan, Endah Kristiani, Yuan-Ping Chiang

A Model for Data Enrichment over IoT Streams at Edges of Internet

In this paper, some issues related to the efficiency of processing IoT data are addressed through semantic data enrichment and edge computing. The aim is to cope with big data streams at various levels, from the lowest level of data capturing to the highest level of Cloud platforms and applications. The objective is thus not only to extract the full knowledge contained in the data in real time, but also to solve the processing bottlenecks observed in IoT Cloud systems in which IoT devices are directly connected to Cloud servers. An architecture comprising various levels is introduced, where each level is in charge of specific functionalities in the overall processing chain. In particular, there is a focus on the layer of semantic data enrichment, in order to enable further processing and reasoning in the upper layers of the architecture. Some preliminary evaluation results are presented to highlight the issues and findings of this study, using a case study of pothole detection in roads based on a data stream collected by cars.

Reinout Van Hille, Fatos Xhafa, Peter Hellinckx

SQL Injection in Cloud: An Actual Case Study

SQL Injection is not a strange term for developers, maintainers and users of Web applications. It has haunted them for years since it was first discovered, and it was formally classified in 2002. Even in the Cloud era, SQL Injection is still the biggest risk on the internet according to statistics. Virtualization technologies used by Cloud service models such as SaaS, PaaS and IaaS fail to provide extra security against this kind of attack. In this paper we demonstrate how SQL Injection attacks can be performed in the Cloud, in order to explain their mechanism and principles.

Xiao Fu, Zhijian Wang, Yong Chen, Yunfeng Chen, Hao Wu

Smart Intrusion Detection with Expert Systems

Nowadays, security concerns over computing devices are growing significantly, due to the ever increasing number of devices connected to the network. In this context, optimising the performance of intrusion detection systems (IDS) is a key research issue in meeting the demanding security requirements of complex and large scale networks. Within IDS systems, attack classification plays an important role. In this work we propose and evaluate the use of the generalizing power of neural networks to classify attacks. More precisely, we use a multilayer perceptron (MLP) with the back-propagation algorithm and the sigmoidal activation function. The proposed attack classification system is validated and its performance studied on a subset of the DARPA dataset, known as KDD99, which is a public dataset labelled for IDS and previously processed. We analysed the results corresponding to different configurations, varying the number of hidden layers and the number of training epochs to obtain a low number of false results. We observed that a large number of training epochs is required, and that by using the entire data set consisting of 31 features the best classification is achieved for Denial-of-Service and Probe attacks.

Flora Amato, Francesco Moscato, Fatos Xhafa, Emilio Vivenzio
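To illustrate the kind of model used above, here is a minimal multilayer perceptron with the sigmoidal activation function. The weights are hand-picked to compute XOR (the classic problem a single-layer perceptron cannot solve) and are purely illustrative; they are not the weights learned by back-propagation in the paper's experiments on KDD99:

```python
import math

def sigmoid(z):
    """Sigmoidal activation function applied at every unit."""
    return 1.0 / (1.0 + math.exp(-z))

# Hand-picked weights for a 2-2-1 network computing XOR (illustrative only).
W1 = [[20.0, 20.0], [-20.0, -20.0]]   # hidden-layer weights, one row per unit
B1 = [-10.0, 30.0]                    # hidden-layer biases
W2 = [20.0, 20.0]                     # output-layer weights
B2 = -30.0                            # output-layer bias

def xor_net(x1, x2):
    """Forward pass: two sigmoid hidden units feeding one sigmoid output unit."""
    hidden = [sigmoid(w[0] * x1 + w[1] * x2 + b) for w, b in zip(W1, B1)]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + B2)
```

Rounding the outputs over the four Boolean inputs yields 0, 1, 1, 0, the XOR truth table; back-propagation, as used in the paper, arrives at weights of this shape automatically from labelled data.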

Cognitive Codes for Authentication and Management in Cloud Computing Infrastructures

This paper will describe new approaches to the creation of cognitive codes for authentication tasks. The authentication procedure will be connected with a visual CAPTCHA, which requires specific information or expert knowledge. Authentication protocols will be used to allow access for a trusted group of persons, based on their expertise and professional activities. For the new authentication protocols, some possible example applications will be presented, especially as implemented in distributed computing environments.

Marek R. Ogiela, Lidia Ogiela

Threshold Based Load Balancer for Efficient Resource Utilization of Smart Grid Using Cloud Computing

Cloud computing is an infrastructure that provides services to end users and increases the efficiency of the system. Fog computing is an extension of cloud computing which distributes the load of cloud servers over different fog servers and enhances the overall performance of the cloud. The Smart Grid (SG) is the combination of the traditional grid and information and communication technology. The purpose of integrating a cloud-fog based system with the smart grid in this paper is to enhance energy management services. A four-layered cloud-fog based architecture is proposed to reduce the load of power requests. Three different load balancing algorithms are used for efficient resource utilization: Round Robin (RR), Particle Swarm Optimization (PSO) and Threshold Based Load Balancer (TBLB). The service broker policies used in this paper are Dynamically Reconfigure with Load and Advanced Service Proximity. Comparing the results under both broker policies, TBLB performs better in terms of Response Time (RT) and Processing Time (PT); however, there is a comparable trade-off among cost, RT and PT.

Mubariz Rehman, Nadeem Javaid, Muhammad Junaid Ali, Talha Saif, Muhammad Hassaan Ashraf, Sadam Hussain Abbasi
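The abstract does not spell out the TBLB algorithm, but the general idea of threshold-based balancing can be sketched as follows: a request goes to the least-loaded VM whose load is below a threshold, and overflows to the next layer (e.g. the cloud) otherwise. The function names and unit-cost model below are hypothetical:

```python
def pick_vm(load, threshold):
    """Return the least-loaded VM still under the threshold, or None on overflow."""
    candidates = [v for v, l in load.items() if l < threshold]
    if not candidates:
        return None  # no fog VM available: forward the request upwards
    return min(candidates, key=lambda v: load[v])

def assign_requests(n_requests, vms, threshold):
    """Assign a stream of unit-cost requests, counting cloud overflows."""
    load = {v: 0 for v in vms}
    overflow = 0
    for _ in range(n_requests):
        v = pick_vm(load, threshold)
        if v is None:
            overflow += 1   # handled by the cloud layer instead
        else:
            load[v] += 1
    return load, overflow
```

With two VMs and a threshold of 4, ten requests fill both VMs to the threshold and the remaining two overflow to the cloud, which is the behaviour a threshold policy trades against pure round robin.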

A Fuzzy-Based Approach for the MobilePeerDroid System Considering Peer Communication Cost

In this work, we present a distributed event-based awareness approach for P2P groupware systems, in which awareness of collaboration is achieved by using primitive operations and services integrated into the P2P middleware. We propose an abstract model for achieving these requirements and discuss how this model can support awareness of collaboration in mobile teams. We present a fuzzy-based system for improving peer coordination quality according to four parameters, considering Peer Communication Cost (PCC) as a new parameter. This model will be implemented in the MobilePeerDroid system to give a more realistic view of the collaborative activity and better decisions for the group work, while encouraging peers to increase their reliability in order to support awareness of collaboration in the MobilePeerDroid mobile system. We evaluated the performance of the proposed system by computer simulations. From the simulation results, we conclude that when the AA, SCT and GS values increase, the peer coordination quality increases, but when PCC increases, the peer coordination quality decreases.

Yi Liu, Kosuke Ozera, Keita Matsuo, Makoto Ikeda, Leonard Barolli

On the Security of a CCA-Secure Timed-Release Conditional Proxy Broadcast Re-encryption Scheme

Proxy re-encryption plays an important role in secure data sharing in cloud storage. Many variants of proxy re-encryption exist; in this paper we focus on timed-release conditional proxy broadcast re-encryption. In this primitive, if and only if the condition and the time satisfy the requirements can the proxy re-encrypt the delegator (broadcast encryption set)’s ciphertext into the delegatee (another broadcast encryption set)’s ciphertext. Chosen-ciphertext security (CCA-security) is an important security notion for encryption schemes. In the CCA security model, the adversary can query the decryption oracle for help, with the only restriction that the challenge ciphertext cannot be queried to the decryption oracle. For CCA-security of timed-release conditional proxy broadcast re-encryption, the situation is more complicated, since this time the adversary has access not only to the decryption oracle for normal ciphertexts but also to the decryption oracle for re-encrypted ciphertexts and to the re-encryption key generation oracle. In 2013, Liang et al. proposed a CCA-secure timed-release conditional proxy broadcast re-encryption scheme. In this paper, we show that their proposal is not CCA-secure in the security model of CCA-secure timed-release conditional proxy broadcast re-encryption.

Xu An Wang, Arun Kumar Sangaiah, Nadia Nedjah, Chun Shan, Zuliang Wang

Cloud-Fog Based Load Balancing Using Shortest Remaining Time First Optimization

Micro Grids (MGs) are integrated with cloud computing to develop an improved Energy Management System (EMS) for end users and utilities. New applications are developed for data processing on the cloud, and fog computing is integrated to overcome the overloading of cloud data centers. A three-layered framework is proposed in this paper to handle the consumers' load. The first layer is the end-user layer, which contains clusters of smart buildings. These smart buildings consist of smart homes, each with multiple appliances, and controllers are used to connect with the fog. The second, central layer consists of fogs with Virtual Machines (VMs). Fogs receive user requests and forward them to the MG; if a request is out of bounds, the MG forwards it to the cloud through the fog. The third layer contains the cloud, which consists of data centers and the utility. For load balancing, three different techniques are used: Round Robin (RR), Throttled and Shortest Remaining Time First (SRTF), whose VM allocation results are compared. Results show that the proposed technique performs better cost-wise; however, RR and Throttled outperform SRTF overall. The Closest Data Center service broker policy is used for fog selection.

Muhammad Zakria, Nadeem Javaid, Muhammad Ismail, Muhammad Zubair, Muhammad Asad Zaheer, Faizan Saeed
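For reference, Shortest Remaining Time First is the preemptive discipline that always runs the job with the least remaining service time, preempting on new arrivals. A compact single-server simulation (a generic sketch, not the VM allocation code evaluated in the paper) is:

```python
import heapq

def srtf_completion_times(jobs):
    """jobs: list of (arrival, burst). Returns the completion time of each job.

    A min-heap keyed on remaining time picks the next job; the running job is
    preempted whenever a newly arrived job has a shorter remaining time.
    """
    events = sorted((a, b, i) for i, (a, b) in enumerate(jobs))
    ready = []            # heap of (remaining_time, job_index)
    done = {}
    t, k, n = 0, 0, len(jobs)
    while len(done) < n:
        # admit every job that has arrived by time t
        while k < n and events[k][0] <= t:
            a, b, i = events[k]
            heapq.heappush(ready, (b, i))
            k += 1
        if not ready:
            t = events[k][0]          # idle until the next arrival
            continue
        rem, i = heapq.heappop(ready)
        next_arrival = events[k][0] if k < n else float("inf")
        run = min(rem, next_arrival - t)   # run until done or next arrival
        t += run
        rem -= run
        if rem == 0:
            done[i] = t
        else:
            heapq.heappush(ready, (rem, i))
    return [done[i] for i in range(n)]
```

On the classic textbook workload [(0, 8), (1, 4), (2, 9), (3, 5)], the jobs complete at times 17, 5, 26 and 10, illustrating how SRTF favours short jobs at the expense of long ones.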

Mining and Utilizing Network Protocol’s Stealth Attack Behaviors

Network protocols' stealth attack behaviors have strong survivability, concealment and aggressiveness, and are not easily detected by existing security measures. In order to compensate for the shortcomings of existing protocol analysis methods, starting from the instructions that implement the protocol program, the normal behavior instruction sequences of the protocol are captured by dynamic binary analysis. Then, potential stealth attack behavior instruction sequences are mined by means of instruction clustering and feature distance computation. The mined stealth attack behavior instruction sequences are loaded into a general executing framework as inline assembly. Dynamic analysis is implemented on the self-developed virtual analysis platform HiddenDisc, and the security implications of the stealth attack behaviors are evaluated. Besides mining, analyzing and mounting targeted defenses against the stealth attack behaviors, we also formally transform them with our self-designed stealth transformation method; using the transformed stealth attack behaviors, a virtual target machine was successfully attacked without being detected. Experimental results show that the mining of protocol stealth attack behaviors is accurate, and that transforming and using them to increase our information offensive and defensive capability is also feasible.

YanJing Hu, Xu An Wang, HaiNing Luo, Shuaishuai Zhu

A Fuzzy-Based System for Selection of IoT Devices in Opportunistic Networks Considering Number of Past Encounters

In opportunistic networks, communication opportunities (contacts) are intermittent and there is no need to establish an end-to-end link between the communicating nodes. The enormous growth of devices having access to the Internet, along with the vast evolution of the Internet and the connectivity of objects and devices, has evolved into the Internet of Things (IoT). These networks raise different issues, one of which is the selection of IoT devices to carry out a task in opportunistic networks. In this work, we implement a fuzzy-based system for IoT device selection in opportunistic networks. Our system uses four input parameters: IoT Device's Number of Past Encounters (IDNPE), IoT Device Contact Duration (IDCD), IoT Device Storage (IDST) and IoT Device Remaining Energy (IDRE). The output parameter is the IoT Device Selection Decision (IDSD). The simulation results show that the proposed system makes proper selection decisions for IoT devices in opportunistic networks. The IoT device selection is increased by up to 15% and 27% by increasing IDNPE and IDRE, respectively.

Miralda Cuka, Donald Elmazi, Kevin Bylykbashi, Keita Matsuo, Makoto Ikeda, Leonard Barolli
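Fuzzy-based systems like the one above map crisp inputs through membership functions and combine them with rules. A toy sketch of that machinery, with a made-up rule and made-up parameter ranges (not the actual membership functions or rule base of the proposed system):

```python
def up(x, a, b):
    """Increasing shoulder membership: 0 below a, rising linearly to 1 at b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def selection_decision(encounters, energy):
    """One Mamdani-style rule: IF encounters are many AND energy is high
    THEN select the device. Rule strength is the minimum of the memberships.
    The ranges (0-10 encounters, 20-80% energy) are purely illustrative."""
    many = up(encounters, 0, 10)
    high = up(energy, 20, 80)
    return min(many, high)
```

Consistent with the trend reported in the abstract, the decision value grows monotonically as the number of past encounters or the remaining energy increases.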

Hill Climbing Load Balancing Algorithm on Fog Computing

The Cloud Computing (CC) concept is an emerging field of technology. It provides shared resources through its own Data Centers (DCs), Virtual Machines (VMs) and servers. People now shift their data to the cloud for permanent storage and easy online access. Fog is an extended version of the cloud: it offers more features, acts as temporary storage, and is easily accessible and secure for consumers. The Smart Grid (SG) fulfills consumers' electricity demand according to their requirements, and the Micro Grid (MG) is a part of the SG. There is therefore a need to balance the load of requests on the fog using VMs. Response Time (RT), Processing Time (PT) and delay are the three main factors discussed in this paper, using the Hill Climbing Load Balancing (HCLB) technique with the Optimize Response Time service broker policy.

Maheen Zahid, Nadeem Javaid, Kainat Ansar, Kanza Hassan, Muhammad KaleemUllah Khan, Mohammad Waqas

Performance Analysis of WMN-PSOSA Simulation System for WMNs Considering Weibull and Chi-Square Client Distributions

Wireless Mesh Networks (WMNs) have many advantages, such as low cost and high-speed wireless Internet connectivity; therefore, WMNs are becoming an important networking infrastructure. In our previous work, we implemented a Particle Swarm Optimization (PSO) based simulation system for node placement in WMNs, called WMN-PSO, as well as a simulation system based on Simulated Annealing (SA) for solving the node placement problem, called WMN-SA. In this paper, we implement a hybrid simulation system based on PSO and SA, called WMN-PSOSA. We analyse the performance of the WMN-PSOSA system by conducting computer simulations considering two types of mesh client distributions. Simulation results show that WMN-PSOSA performs better for the Weibull distribution than for the Chi-square distribution.

Shinji Sakamoto, Leonard Barolli, Shusuke Okamoto
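As background for the SA half of the hybrid, a generic simulated annealing loop looks like the sketch below; the cost function, neighbour move and cooling schedule are placeholders, not those of WMN-PSOSA (which optimizes mesh router positions):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=500):
    """Generic SA: accept worse solutions with probability exp(-delta/T)."""
    x = best = x0
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y                     # accept the move
        if cost(x) < cost(best):
            best = x                  # track the best solution seen so far
        t *= cooling                  # geometric cooling schedule
    return best
```

For instance, minimizing cost(x) = (x - 3)^2 with random moves drawn from [-1, 1] starting at x0 = 10 converges close to the optimum at 3; early high temperatures permit uphill moves that help escape local minima, which is exactly what SA contributes to the PSO hybrid.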

Automated Risk Analysis for IoT Systems

Designing and assessing the security of IoT systems is very challenging, mainly because new threats and vulnerabilities affecting IoT devices are continually discovered and published. Moreover, new (typically low-cost) devices are continuously plugged into IoT systems, introducing unpredictable security issues. This paper proposes a methodology aimed at automating the threat modeling and risk analysis processes for an IoT system. The methodology enables the identification of existing threats and related countermeasures, and relies upon an open catalogue, built in the context of EU projects, for gathering information about threats and vulnerabilities of the IoT system under analysis. In order to validate the proposed methodology, we applied it to a real case study based on a commercial smart home application.

Massimiliano Rak, Valentina Casola, Alessandra De Benedictis, Umberto Villano

Workshop SMECS-2018: 11th International Workshop on Simulation and Modelling of Engineering and Computational Systems


Integration of Cloud-Fog Based Platform for Load Balancing Using Hybrid Genetic Algorithm Using Bin Packing Technique

Smart grids (SGs) are used to accommodate the growing demand of electric systems and to monitor power consumption with bidirectional communication and power flows. Smart buildings are key partners of the smart grid in the energy transition. Smart grids co-ordinate the needs and capabilities of all generators, grid operators, end-users and electricity market stakeholders to operate all parts of the system as efficiently as possible, minimising costs and environmental impacts while maximising system reliability, resilience and stability. Users' demand for energy varies dynamically across different time slots, so the power grid needs ideal load balancing between the supply and demand of electricity between end-users and utility providers. The main characteristic of SGs is their heterogeneous architecture, which reduces the costly impact of blackouts, helps measure and reduce energy consumption, reduces the carbon footprint and provides the power quality for a range of needs. A cloud-fog based computing model is used to achieve the objective of load balancing in the SG. The cloud layer provides on-demand delivery of resources. The fog layer is an extension of the cloud that lies between the cloud and the end-user layer; it minimizes latency, enhances the reliability of cloud facilities and reduces the load on the cloud, because fog is an edge computing layer and analyzing data close to the device that collected it can make the difference between averting a disaster and a cascading system failure. End-users obtain electricity through Macrogrids (MGs) and utilities installed on the fog and cloud layers, respectively. The cloud-fog computing framework uses different algorithms to achieve the load balancing objective. In this paper, three are used: Round Robin (RR), Throttled, and a Hybrid Genetic Algorithm using the Bin Packing technique.

Muhammad Zubair, Nadeem Javaid, Muhammad Ismail, Muhammad Zakria, Muhammad Asad Zaheer, Faizan Saeed

More Secure Outsource Protocol for Matrix Multiplication in Cloud Computing

Matrix multiplication is a very basic computational task in many scientific algorithms. Recently, Lei et al. proposed an interesting outsourcing protocol for matrix multiplication in cloud computing. Their proposal is very efficient; however, we find that it is not secure from the viewpoint of cryptography. Concretely, the cloud can easily distinguish which of two candidate matrices has been outsourced. That is, their proposal does not satisfy the indistinguishability property under chosen-plaintext attack. Finally, we give an improved outsourcing protocol for matrix multiplication in cloud computing.

Xu An Wang, Shuaishuai Zhu, Arun Kumar Sangaiah, Shuai Xue, Yunfei Cao
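The generic idea behind such outsourcing protocols is that the client cheaply masks its matrices, lets the cloud do the O(n^3) multiplication, and cheaply unmasks the result. The sketch below uses random diagonal masks for illustration only; it is neither Lei et al.'s protocol (which uses sparse permutation-style matrices) nor the authors' improvement, and, in line with the abstract's critique of such schemes, this kind of masking by itself does not give indistinguishability under chosen-plaintext attack:

```python
import random
from fractions import Fraction

def matmul(X, Y):
    """Plain O(n^3) matrix product; in the protocol this step runs on the cloud."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def mask(A, B):
    """Client side: hide A and B as A' = D1*A*D2 and B' = D2^-1*B*D3
    using random nonzero diagonal matrices D1, D2, D3 (stored as vectors)."""
    n = len(A)
    d1, d2, d3 = ([Fraction(random.randint(1, 9)) for _ in range(n)]
                  for _ in range(3))
    Ap = [[d1[i] * A[i][j] * d2[j] for j in range(n)] for i in range(n)]
    Bp = [[B[i][j] * d3[j] / d2[i] for j in range(n)] for i in range(n)]
    return Ap, Bp, d1, d3

def unmask(Cp, d1, d3):
    """Client side: since A'*B' = D1*(A*B)*D3, divide entry (i, j)
    by d1[i]*d3[j] to recover A*B."""
    n = len(Cp)
    return [[Cp[i][j] / (d1[i] * d3[j]) for j in range(n)] for i in range(n)]
```

The client does only O(n^2) work for masking and unmasking, and correctness follows from D2 cancelling inside the product; the cryptographic weakness is that entries of A' still leak relations between entries of A.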

Load Balancing on Cloud Using Professional Service Scheduler Optimization

In the smart grid (SG), a fog computing based concept is used. Fog minimizes the load on the cloud: it stores data temporarily, covers a small area, and sends data to the cloud for permanent storage. In this paper, cloud and fog are integrated for better management of energy in smart buildings. In our proposed framework, demand created on the consumer side is managed by the fog. There are three different regions containing six clouds, and each fog is associated with a cluster of buildings. Each cluster contains thirty buildings and each building comprises ten homes. SGs are placed close to the buildings and are used to satisfy energy requests. For efficient use of energy in smart buildings, Virtual Machines (VMs) are used to handle the load on fog and cloud. Throttled, Round Robin (RR) and Professional Service Scheduler (PSS) are used as load balancing algorithms and are compared under the Closest Data Center service broker policy, which is used for best fog selection. The results show that PSS outperforms cost-wise, while RR and Throttled perform better overall.

Muhammad Asad Zaheer, Nadeem Javaid, Muhammad Zakria, Muhammad Zubair, Muhammad Ismail, Abdul Rehman
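PSS is specific to the paper, but the two baselines it is compared against are standard VM allocation policies. As a hedged sketch (class names and the busy-list interface are illustrative, not the paper's code), the key difference is that Round Robin ignores current load while Throttled only hands a request to an idle VM:

```python
class RoundRobin:
    """Hands each request to the next VM in circular order,
    regardless of how loaded that VM currently is."""
    def __init__(self, n_vms):
        self.n_vms, self.next_vm = n_vms, 0

    def allocate(self, busy):
        vm = self.next_vm
        self.next_vm = (self.next_vm + 1) % self.n_vms
        return vm

class Throttled:
    """Scans for the first idle VM; when every VM is busy the request
    is queued (signalled here by returning None)."""
    def __init__(self, n_vms):
        self.n_vms = n_vms

    def allocate(self, busy):
        for vm in range(self.n_vms):
            if not busy[vm]:
                return vm
        return None

rr = RoundRobin(3)
order = [rr.allocate([False] * 3) for _ in range(4)]  # 0, 1, 2, 0

th = Throttled(3)
choice = th.allocate([True, False, True])  # VM 1 is the first idle one
```

A service broker policy such as closest-data-center sits one level above this: it first picks the fog (data center), and only then does the chosen fog's load balancer pick the VM.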

Privacy Preservation for Re-publication Data by Using Probabilistic Graph

Given the dynamism of data-intensive applications, data can be changed by insert, update, and delete operations at any time. Thus, privacy models designed to protect static datasets might not cope effectively with dynamic datasets. m-invariance and m-distinct are well-known anonymization models proposed to protect private data in dynamic datasets. However, under their counting-based approach, the private data of a target user can still be revealed in internally or fully updated datasets when they are analyzed using an updated probability graph. In this paper, we propose a new privacy model for dynamic data publishing based on the probability graph. Subsequently, in order to study the characteristics of the problem, we propose a brute-force algorithm that preserves privacy while maintaining data quality. The experimental results show that our proposed model can guarantee the minimum probability of inferring a sensitive value.

Pachara Tinamas, Nattapon Harnsamut, Surapon Riyana, Juggapong Natwichai

Workshop SMDMS-2018: 9th International Workshop on Streaming Media Delivery and Management Systems


Evaluation of Scheduling Method for Division Based Broadcasting of Multiple Video Considering Data Size

When watching videos, many people receive the data by broadcasting. In general broadcasting systems, even though servers can concurrently deliver data to many clients, clients must wait until the first portion of the data is broadcast. In division-based broadcasting, several researchers have proposed scheduling methods to reduce the waiting time for delivering multiple videos, but they fail to consider cases where the data sizes of the videos differ. For division-based broadcasting systems, we have proposed a scheduling method for delivering multiple videos, called the multiple-video broadcasting method considering data size (MV-D). The MV-D method divides the data and produces an effective broadcasting schedule based on the data size of each video. In addition, the server can reduce the bandwidth required for delivering multiple videos. In this paper, we evaluate the MV-D method and confirm its effectiveness in reducing the waiting time compared with conventional methods.

Ren Manabe, Yusuke Gotoh
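MV-D's actual division rule is in the paper; the sketch below only illustrates the general division-based idea it builds on. If the first segment of every video shares one cyclic broadcast channel, a client that tunes in just after its segment has passed waits one full cycle, so the waiting time depends on the videos' data sizes (the fixed 4-way split here is a placeholder assumption, not MV-D's schedule):

```python
def worst_case_wait(video_sizes_mb, bandwidth_mbps, divisions=4):
    """Division-based broadcasting sketch: the first segment of every
    video is broadcast cyclically on a shared channel, and the worst-case
    client wait is one full cycle. The 1/divisions split is a placeholder;
    MV-D derives the actual division from each video's data size."""
    first_segments = [size / divisions for size in video_sizes_mb]
    cycle_seconds = sum(first_segments) / bandwidth_mbps
    return cycle_seconds

# Two videos of 100 MB and 60 MB over a 10 MB/s broadcast channel:
wait = worst_case_wait([100.0, 60.0], 10.0)  # 4.0 seconds
```

This also shows why equal-size scheduling breaks down for unequal videos: a fixed split over-allocates cycle time to small videos, which is exactly the case MV-D addresses.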

A Design of Hierarchical ECA Rules for Distributed Multi-viewpoint Internet Live Broadcasting Systems

With the recent popularization of omnidirectional cameras, multi-viewpoint live videos are now often broadcast via the Internet. However, in multi-viewpoint Internet live broadcasting services, the screen image differs according to the viewpoint selected by the viewer. Thus, one of the main research challenges for multi-viewpoint Internet live broadcasting is how to reduce the computational load of adding effects to different screen images. Processes in distributed multi-viewpoint Internet live broadcasting systems fall into several types, and they can be executed efficiently in distributed computing environments by taking these types into account. In this paper, we design hierarchical ECA (Event-Condition-Action) rules for distributed multi-viewpoint Internet live broadcasting systems. Hierarchical ECA rules are well suited to describing these processes, since they are simple, yet their combinations can realize complex processes.

Satoru Matsumoto, Tomoki Yoshihisa, Tomoya Kawakami, Yuuichi Teranishi

An Evaluation on Virtual Bandwidth for Video Streaming Delivery in Hybrid Broadcasting Environments

Most recent set-top boxes for digital video broadcasting connect to the Internet, so they can receive data both from broadcasting systems and from the Internet. Such hybrid broadcasting environments, in which clients can receive data from both broadcasting systems and communication systems, are well suited to video streaming delivery, since the two delivery paths compensate for each other's weaknesses. To reduce interruption time in hybrid broadcasting environments, I have proposed a data-piece elimination technique. However, the influence on interruption time of the virtual bandwidth, a parameter of the technique, has not been well investigated. In this paper, I evaluate this influence and discuss how to determine an appropriate virtual bandwidth.

Tomoki Yoshihisa

A Load Distribution Method for Sensor Data Stream Collection Considering Phase Differences

In the Internet of Things (IoT), various devices (things), including sensors, generate data and publish them via the Internet. We define continuous sensor data with different cycles as a sensor data stream and have proposed methods to collect distributed sensor data streams. In this paper, we describe a skip-graph-based collection system for sensor data streams that considers phase differences, and present its evaluation.

Tomoya Kawakami, Tomoki Yoshihisa, Yuuichi Teranishi
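The abstract only names the idea of phase differences; as a hedged illustration of why they matter for load distribution (the discrete slot model and the numbers are illustrative, not the paper's system), consider two sensors with the same collection cycle: identical phases make every collection slot collide, while offset phases interleave the load perfectly:

```python
from collections import Counter

def collection_slots(cycle, phase, horizon):
    """Discrete time slots at which a sensor with the given cycle and
    phase must be collected."""
    return [t for t in range(horizon) if t % cycle == phase]

# Two sensors with cycle 4: equal phases collide in every slot,
# while a phase offset of 2 spreads the collection load evenly.
same_phase = Counter(collection_slots(4, 0, 16) + collection_slots(4, 0, 16))
offset_phase = Counter(collection_slots(4, 0, 16) + collection_slots(4, 2, 16))
peak_same = max(same_phase.values())      # 2 collections in one slot
peak_offset = max(offset_phase.values())  # never more than 1
```

A collection system that assigns phases when it places streams on skip-graph nodes can thus flatten the peak load without changing any sensor's cycle.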

Workshop MWVRTA-2018: The 8th International Workshop on Multimedia, Web and Virtual Reality Technologies


Proposal of a Zoo Navigation AR Application Using Markerless Image Processing

In this research, we propose an inbound zoo navigation application using augmented reality (AR) technology based on markerless image processing. The application provides animal guide boards in multiple languages via AR and distributes animal quizzes to zoo visitors using beacons. Zoo visitors can enjoy an animal book, zoo navigation, animal character collection, and more via this mobile application. Moreover, zoo keepers can freely update the various contents provided by the mobile application via a content management web application.

Hayato Sakamoto, Tomoyuki Ishida

Implementation of a Virtual Reality Streaming Software for Network Performance Evaluation

Virtual Reality (VR) has become a popular technology among the general public owing to low-cost VR devices. However, it is difficult to keep playing VR contents under the network performance limitations that occur during rush hours on the Internet. For this reason, VR content should maintain its QoS (Quality of Service), and the QoS parameters should be changed on the fly without being noticed by the user. We have already introduced a QoS management framework for VR contents that assigns priorities and changes QoS parameters according to the limits of available resources and the user's requests. In this paper, we present the implementation of a VR streaming software tool used to find the appropriate reduction in data size for QoS parameters across different types of video formats.

Ko Takayama, Yusi Machidori, Kaoru Sugita

Remote Voltage Controls by Image Recognitions for Adaptive Array Antenna of Vehicular Delay Tolerant Networks

Automatic driving systems for automobiles have developed rapidly in recent years, but various challenges remain for the development of new applications. One of these challenges is maintaining stable wireless connections between automobiles, which is difficult because automobiles move fast and there may be radio obstacles such as trees or buildings along roads. Therefore, this paper proposes a Delay Tolerant Network system with an adaptive array antenna controlled by image recognition. The proposed system uses image recognition to locate the target automobiles, and the proper direction of the antenna is calculated by a Kalman filter algorithm. The antenna direction is then controlled by the difference of the voltages applied to the antenna elements. This paper reports, in particular, the implementation and experimental results of the voltage control for the adaptive array antenna, and discusses future research subjects.

Noriki Uchida, Ryo Hashimoto, Goshi Sato, Yoshitaka Shibata

A Collaborative Safety Flight Control System for Multiple Drones: Dealing with Weak Wind by Changing Drones Formation

In recent years, it has become possible for anyone to purchase high-performance drones at low prices. These drones are equipped with a high-definition camera, multiple compact cameras for flight control, a gyroscope, an infrared sensor, GPS, and a processor that handles video images and sensor information for flight control. Relatively stable flight is possible when drones are operated within the operator's sight. In this paper, we introduce our collaborative safety flight control system for multiple drones and consider the case of weak wind. To deal with weak wind, we propose that the drones change their formation. We add a Caution level to the safety levels: in the case of weak wind, the Caution level is activated and the drones change their positions (formation) in order to keep the distance between them.

Noriyasu Yamamoto, Noriki Uchida

Workshop DEM-2018: 5th International Workshop on Distributed Embedded Systems


Contact Detection for Social Networking of Small Animals

Biological research often tracks animals using collars containing a wireless sensor that transmits telemetry or positional data. However, when dealing with small animals, the size and weight of conventional telemetry devices are often an obstruction and can alter animal behavior. In this study we examine the viability of Bluetooth Low Energy (BLE) for developing a low-power contact logger that tracks contacts between small rodents. Using the BLE discovery process, a contact logger can reliably detect nearby loggers without the need to set up an actual connection. We manufactured a prototype with an extremely small footprint to demonstrate feasibility.

Rafael Berkvens, Ivan Herrera Olivares, Siegfried Mercelis, Lucinda Kirkpatrick, Maarten Weyn

Introduction of Deep Neural Network in Hybrid WCET Analysis

Safe and responsive hard real-time systems require the Worst-Case Execution Time (WCET) to determine the schedulability of each software task; missing planned deadlines could have fatal consequences. During development, system designers have to make decisions without any insight into the WCET of the tasks. Early WCET estimates help us perform design-space exploration of feasible hardware and thus lower overall development costs. This paper proposes to extend hybrid WCET analysis with deep learning models to support early predictions. Two models are created in TensorFlow to be compatible with our COBRA framework, which provides datasets based on hybrid blocks to train each model. The feed-forward neural network has a high convergence rate and is able to learn a trend in the features. However, the errors of the models are currently too large to predict meaningful upper bounds. To conclude, we summarise the problems we need to tackle to resolve the accuracy and convergence issues.

Thomas Huybrechts, Thomas Cassimon, Siegfried Mercelis, Peter Hellinckx
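The paper's models are TensorFlow feed-forward networks trained on COBRA's hybrid-block datasets. As a dependency-free stand-in, the sketch below fits a linear model by stochastic gradient descent to hypothetical block features (instruction count, memory accesses) and predicts the execution time of an unseen block; it illustrates the learning setup only, not the paper's architecture or data:

```python
def train(samples, lr=0.01, epochs=20000):
    """Fit time ~ w.features + b by stochastic gradient descent on the
    squared error: a minimal stand-in for a feed-forward WCET model."""
    dim = len(samples[0][0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, t in samples:
            err = sum(wi * xi for wi, xi in zip(w, x)) + b - t
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

# Hypothetical hybrid blocks: (instruction count, memory accesses) -> time
samples = [((3, 2), 12.0), ((6, 4), 24.0), ((1, 1), 5.0), ((8, 5), 31.0)]
w, b = train(samples)
pred = sum(wi * xi for wi, xi in zip(w, (5, 3))) + b  # close to 19
```

The abstract's caveat applies equally here: such a regressor learns an average-case trend, so its raw prediction is not a safe upper bound without an added margin.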

Distributed Uniform Streaming Framework: Towards an Elastic Fog Computing Platform for Event Stream Processing

More and more devices are connected to the Internet. These devices could help execute applications that would otherwise need to run on the cloud or on a system with more computational resources. To execute an application on multiple devices, we split it into multiple application components that stream events to each other. In this paper we present a framework that allows application components to stream events to each other. On top of this, we present a coordinator system that moves application components to other devices. This elasticity allows the coordinators to run application components on different devices depending on the context, in order to optimize resources such as network usage, response times, and battery life. The coordinators use an adapted version of the Contract Net Protocol, which allows them to find a local minimum in resource consumption. To verify this, three use cases are implemented.

Simon Vanneste, Jens de Hoog, Thomas Huybrechts, Stig Bosmans, Muddsair Sharif, Siegfried Mercelis, Peter Hellinckx
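The paper adapts the Contract Net Protocol, but the adaptation itself is not described in the abstract. The sketch below therefore shows only the plain announce-bid-award round the protocol is built on, with a hypothetical cost model in which loaded devices bid high and battery-poor devices refuse (all names and numbers are illustrative):

```python
class Device:
    def __init__(self, name, cpu_load, battery):
        self.name, self.cpu_load, self.battery = name, cpu_load, battery

    def bid(self, task_cost):
        """Estimated cost of running the task on this device; refuse
        (None) when the battery cannot cover it."""
        if self.battery <= task_cost:
            return None
        return task_cost * (1.0 + self.cpu_load)

def contract_net_round(devices, task_cost):
    """One announce-bid-award round: the coordinator announces the task,
    collects bids, and awards the task to the cheapest bidder."""
    bids = [(d.bid(task_cost), d) for d in devices]
    bids = [(cost, d) for cost, d in bids if cost is not None]
    if not bids:
        return None  # nobody can take the task
    _, winner = min(bids, key=lambda bid: bid[0])
    return winner

devices = [Device("gateway", 0.8, 100.0),
           Device("pi", 0.1, 50.0),
           Device("sensor", 0.0, 3.0)]
winner = contract_net_round(devices, 5.0)  # the lightly loaded "pi" wins
```

Repeating such rounds whenever the context changes is what yields a local (not necessarily global) minimum in resource consumption, as the abstract notes.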

Context-Aware Distribution In Constrained IoT Environments

The increased adoption of the IoT paradigm requires us to take a good look at the network load it creates. As adoption increases, so do the network load and server cost, causing a jump in required expenses. A solution for this is fog computing, where we distribute the cloud load over the network devices so that tasks are pre-processed before reaching the cloud level, or might not even have to reach the cloud level. To aid this research, we wrote a simulator that calculates the optimal spread of an application over the network devices and shows how this spread will occur. The spread is based on context: for example, processor-bound machines get smaller tasks and energy-bound machines get energy-efficient tasks. We use this simulator to compare algorithms for placing the application.

Reinout Eyckerman, Muddsair Sharif, Siegfried Mercelis, Peter Hellinckx
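The abstract describes context-aware placement only by example (processor-bound machines get smaller tasks, energy-bound machines get energy-efficient tasks). A minimal greedy sketch of that idea follows; the cost model, field names, and numbers are hypothetical, and the paper's simulator compares several, likely more sophisticated, placement algorithms:

```python
def place(tasks, devices):
    """Greedy context-aware placement sketch: each task (largest first)
    goes to the feasible device with the lowest context-weighted cost."""
    placement = {}
    for task in sorted(tasks, key=lambda t: -t["cpu"]):
        best, best_cost = None, float("inf")
        for dev in devices:
            if dev["cpu_free"] < task["cpu"]:
                continue  # not enough processing headroom on this device
            # Energy-bound devices weight a task's energy draw heavily,
            # so energy-hungry tasks drift to mains-powered machines.
            cost = task["cpu"] / dev["cpu_free"] + dev["energy_weight"] * task["energy"]
            if cost < best_cost:
                best, best_cost = dev, cost
        if best is None:
            return None  # no feasible device for this task
        placement[task["name"]] = best["name"]
        best["cpu_free"] -= task["cpu"]
    return placement

devices = [{"name": "gateway", "cpu_free": 4.0, "energy_weight": 0.0},
           {"name": "node", "cpu_free": 2.0, "energy_weight": 1.0}]
tasks = [{"name": "video", "cpu": 3.0, "energy": 5.0},
         {"name": "filter", "cpu": 1.0, "energy": 0.2}]
result = place(tasks, devices)  # video -> gateway, filter -> node
```

Even this toy version shows the trade-off a placement simulator must explore: the heavy task lands on the mains-powered gateway, while the light task moves to the battery node only because its energy draw is small.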

Towards a Scalable Distributed Real-Time Hybrid Simulator for Autonomous Vehicles

The rising popularity of autonomous cars calls for a safe testbed, but real-world testing is costly and dangerous, while simulation-based testing is too abstract. Therefore, a hybrid simulator is needed in which a real car can interact with many simulated cars. Such a simulator already exists, but it is far from scalable due to its centralised architecture and thus not deployable on many vehicles. This paper therefore presents a more distributed and scalable architecture that solves this problem. We assessed the overall performance and scalability of the new system by conducting four experiments using a 1/10th-scale car. The results show that the new distributed architecture outperforms the previous approach in terms of overall performance and scalability, thus paving the way to a safe, cost-efficient, and hyper-scalable testing environment.

Jens de Hoog, Manu Pepermans, Siegfried Mercelis, Peter Hellinckx

Challenges of Modeling and Simulating Internet of Things Systems

With the rise of complex Internet of Things systems, we see an increasing need for testing and evaluating these systems, especially when we expect emergent complex adaptive behavior to arise. Agent-based simulation is an often-used technique for this; however, the effectiveness of a simulation depends on the quality of the individual models. In this work we look in depth at the characteristics of Internet of Things devices, actors, and environments, and at how these characteristics can be used to find appropriate, performance-optimized modeling techniques and formalisms. Throughout this work we refer extensively to a custom-developed Internet of Things simulation framework and to relevant related literature.

Stig Bosmans, Siegfried Mercelis, Joachim Denil, Peter Hellinckx

Workshop BIDS-2018: International Workshop on Business Intelligence and Distributed Systems


Cyber Incident Classification: Issues and Challenges

The cyber threat landscape is changing rapidly, making the scientific classification of incidents for the purpose of incident response management difficult. Additionally, there are no universal methodologies for sharing information on cyber security incidents between the private and public sectors. Existing efforts to automate incident classification do not distinguish between ordinary events and threatening incidents, which can cause issues that permeate the entire incident response process. We describe a machine learning model that determines the probability that an event is an incident using the event's contextual information.

Marina Danchovsky Ibrishimova
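The abstract specifies the model only at the level of "probability that an event is an incident, from contextual information"; a common way to realize such a mapping is logistic regression. The sketch below uses hypothetical features and hand-picked weights purely for illustration; in practice the weights would be learned from labelled events:

```python
import math

def incident_probability(features, weights, bias):
    """Logistic model: squash a weighted sum of the event's contextual
    features into a probability that the event is a true incident."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical contextual features: [failed-login rate (0-1),
# off-hours flag (0/1), source reputation (0-1, higher is better)]
weights, bias = [2.0, 1.5, -3.0], -1.0
routine = incident_probability([0.1, 0, 0.9], weights, bias)
suspicious = incident_probability([0.9, 1, 0.1], weights, bias)
```

Thresholding the resulting probability is what separates ordinary events from incidents that should enter the response process, which is exactly the distinction the abstract says existing automation lacks.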

Outsourcing Online/offline Proxy Re-encryption for Mobile Cloud Storage Sharing

Outsourcing heavy storage and computation to cloud servers is becoming more and more popular, and how to securely share cloud storage is an important problem for many mobile users. Proxy re-encryption is a cryptographic primitive that can be used to share cloud data securely. To date, many kinds of proxy re-encryption schemes with various properties have been proposed, such as conditional proxy re-encryption and proxy re-encryption with keyword search. However, no existing work focuses on proxy re-encryption for mobile cloud storage sharing, where almost all users are mobile and have only resource-restricted equipment. In this paper we initiate this line of research: we give a basic outsourced online/offline proxy re-encryption scheme for mobile cloud storage sharing, and we leave many interesting open problems as future work.

Xu An Wang, Nadia Nedjah, Arun Kumar Sangaiah, Chun Shan, Zuliang Wang

DAHS: A Distributed Data-as-a-Service Framework for Data Analytics in Healthcare

Healthcare service providers, such as hospitals, generally maintain large collections of data. In the last decade, the healthcare industry has become aware that data analytics is a crucial tool for providing better services. However, several obstacles prevent the successful deployment of such systems, among them data quality and system performance. To address these issues, this paper proposes a distributed data-as-a-service framework that helps assure the level of data quality and improves the performance of data analytics. A preliminary evaluation suggests that the proposed system scales well to a large number of user requests.

Pruet Boonma, Juggapong Natwichai, Krit Khwanngern, Panutda Nantawad

Round Robin Inspired History Based Load Balancing Using Cloud Computing

The advancement of cloud computing (CC) has motivated the emergence of fog computing (FC). FC inherits the services of CC and divides the execution load across different small levels, which ultimately reduces the load on the cloud. FC stores data on a short-term basis and forwards it to the cloud for long-term storage. In this paper, a fog-based environment connected with a cloud and a cluster is proposed, managing data taken from end users. The proposed algorithm, Round Robin Inspired History Based (RRIHB), is inspired by round robin (RR) and works by using the history of previous VMs. Two service broker policies are also considered: the closest data center policy and the advanced broker policy. The three algorithms, RRIHB, RR, and Honey Bee (HB), are evaluated with these broker policies. RRIHB outperforms HB under both service broker policies, performs equally to RR under the closest data center policy, and outperforms RR under the advanced broker policy.

Talha Saif, Nadeem Javaid, Mubariz Rahman, Hanan Butt, Muhammad Babar Kamal, Muhammad Junaid Ali

