
2019 | Book

Internet and Distributed Computing Systems

12th International Conference, IDCS 2019, Naples, Italy, October 10–12, 2019, Proceedings

Edited by: Dr. Raffaele Montella, Angelo Ciaramella, Giancarlo Fortino, Dr. Antonio Guerrieri, Antonio Liotta

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 12th International Conference on Internet and Distributed Computing Systems (IDCS 2019), held in Naples, Italy, in October 2019.
The 47 revised full papers presented were carefully reviewed and selected from 145 submissions. The conference seeks inspiration in diverse areas (e.g. infrastructure and system design, software development, big data, control theory, artificial intelligence, IoT, self-adaptation, emerging models, paradigms, applications and technologies related to Internet-based distributed systems) to develop new ways to design and manage such complex and adaptive computation resources.

Table of Contents

Frontmatter
Table Tennis Stroke Recognition Based on Body Sensor Network

Table tennis stroke recognition is very important for athletes analyzing their sports skills: it can help players regulate their hitting movement and estimate energy expenditure during play. Different players have different stroke motions, which makes stroke recognition more difficult. In order to accurately distinguish stroke movements, this paper uses body sensor networks (BSNs) to collect motion data. Sensors collecting acceleration and angular velocity information are placed on the upper arm, lower arm and back, respectively. Principal component analysis (PCA) is employed to reduce the feature dimensions, and a support vector machine (SVM) is used to recognize strokes. Compared with other classification algorithms, the final experimental results (97.41% accuracy) show that the algorithm proposed in the paper is effective.

Ruichen Liu, Zhelong Wang, Xin Shi, Hongyu Zhao, Sen Qiu, Jie Li, Ning Yang
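
The PCA step of the pipeline above can be sketched with plain numpy (the data here is synthetic; the window count, feature count and number of retained components are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Synthetic feature matrix: 200 stroke windows x 60 features
# (e.g. statistics of acceleration/angular velocity from three sensors).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))

# PCA via eigendecomposition of the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # largest variance first
components = eigvecs[:, order[:10]]        # keep 10 principal components
X_reduced = Xc @ components                # (200, 10) inputs for the SVM

print(X_reduced.shape)
```

The reduced matrix would then be fed to the SVM classifier for stroke recognition.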
The Analysis of the Computation Offloading Scheme with Two-Parameter Offloading Criterion in Fog Computing

Fog computing provides an efficient solution for mobile computation offloading, keeping tight constraints on the response time of real-time applications. The paper takes into account the variation of tasks by introducing the joint distribution function of the required processing volume and the data size to be transmitted. We propose an offloading criterion based on the processing and data volumes of tasks and develop an analytical framework for evaluating the average response time and the average energy consumption of mobile devices. The developed framework is applied in a case study.

Eduard Sopin, Konstantin Samouylov, Sergey Shorgin
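
A toy version of a two-parameter offloading decision can be sketched as follows (all speeds and task parameters are invented for illustration; the paper's criterion and analytical framework are probabilistic and considerably more elaborate):

```python
# Toy two-parameter offloading rule: the decision depends jointly on the
# required processing volume (cycles) and the data size to be transmitted.
# All numbers are illustrative, not taken from the paper.

LOCAL_SPEED = 1e9        # CPU cycles per second on the mobile device
FOG_SPEED = 8e9          # CPU cycles per second on the fog node
UPLINK = 5e6             # bytes per second towards the fog node

def response_time(cycles, data_bytes, offload):
    if offload:
        return data_bytes / UPLINK + cycles / FOG_SPEED
    return cycles / LOCAL_SPEED

def should_offload(cycles, data_bytes):
    # Offload iff the predicted remote response time beats local execution.
    return response_time(cycles, data_bytes, True) < response_time(cycles, data_bytes, False)

heavy_small = (5e9, 1e6)     # compute-heavy, little data: good candidate
light_big = (1e8, 50e6)      # light compute, lots of data: keep local
print(should_offload(*heavy_small), should_offload(*light_big))
```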
Protecting Personal Data Using Smart Contracts

Decentralized Online Social Networks (DOSNs) have been proposed as an alternative to current centralized Online Social Networks (OSNs). Online Social Networks are based on centralized architectures (e.g., Facebook, Twitter, or Google+), while DOSNs do not have a service provider acting as a central authority, and users have more control over their information. Several DOSNs have been proposed in recent years. However, the decentralization of the OSN requires efficient solutions for protecting the privacy of users and for evaluating trust between users. Blockchain represents a disruptive technology that has been applied to several fields, including social networks. In this paper, we propose a manageable, user-driven and auditable access control framework for DOSNs using blockchain technology. In the proposed approach, the blockchain is used as a support for the definition of privacy policies. The resource owner uses the public key of the subject to define flexible role-based access control policies, while the private key associated with the subject's Ethereum account is used to decrypt the private data once access permission is validated on the blockchain. We evaluate our solution by exploiting the Rinkeby Ethereum testnet to deploy the smart contract and to evaluate its performance. Experimental results show the feasibility of the proposed scheme in achieving auditable and user-driven access control via a smart contract deployed on the blockchain.

Mohsin Ur Rahman, Fabrizio Baiardi, Barbara Guidi, Laura Ricci
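
The policy-check logic that the paper implements as an Ethereum smart contract can be sketched in plain Python (no real cryptography or blockchain here; keys and resource names are placeholder strings):

```python
# Plain-Python sketch of a role-based policy check of the kind the paper
# implements as an Ethereum smart contract. Keys are placeholder strings;
# no real cryptography or blockchain is involved.

policies = {}   # resource -> {subject public key: set of permitted actions}

def grant(resource, subject_pk, actions):
    policies.setdefault(resource, {})[subject_pk] = set(actions)

def check_access(resource, subject_pk, action):
    # On-chain, this lookup would be a contract call auditable by anyone.
    return action in policies.get(resource, {}).get(subject_pk, set())

grant("photo_album", "0xALICE", {"read"})
print(check_access("photo_album", "0xALICE", "read"))   # permitted
print(check_access("photo_album", "0xBOB", "read"))     # not permitted
```

In the real scheme, a positive check would release the encrypted resource, which only the holder of the matching private key can decrypt.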
Towards Environmental Impact Reduction Leveraging IoT Infrastructures: The PIXEL Approach

Ports are essential nodes in the global supply chain. Usually, port-cities experience growth and sustainability thanks to port activities and their related stakeholders. However, it is undeniable that port operations have an impact on the environment, the city and the citizens living nearby. To mitigate this impact, the ports of the future will need to be equipped with advanced tools enabling measurement of, and actuation over, harmful pollution sources. Modelling the footprint and gathering information about the causes are not easy tasks, as ports are complex environments with no standard procedures established yet for these purposes. These challenges are compounded by another current truth: data about real-time port operations are not optimally gathered nor exploited, mainly due to a marked lack of interoperability. PIXEL (Port IoT for environmental leverage) aims at creating the first smart, flexible and scalable solution reducing the environmental impact while enabling optimization of operations in port ecosystems. This approach relies on the most innovative ICT technology for ports, building upon an interoperable open IoT platform. PIXEL use cases are also presented in this paper, aiming to demonstrate, through various analytic services, a valid architecture drawing from heterogeneous data collection, data handling under a common model, data storage and data visualization. Besides that, PIXEL aims to decouple a port's size from its ability to deploy environmental impact mitigation, specifying an innovative methodology and an integrated metric for the assessment of the overall environmental impact of ports.

Ignacio Lacalle, Miguel Ángel Llorente, Carlos E. Palau
Adaptive Application Deployment of Priority Services in Virtual Environments

This paper introduces an adaptive application deployment service for virtualized environments (named DECIDE). The service facilitates the definition of customized cluster/cloud environments and the adaptive integration of scheduling policies for testing and deploying containerized applications. The service-based design of DECIDE and the use of a virtualized environment make it possible to easily change the cluster/cloud configuration and its scheduling policy. It provides a differentiated service for application deployment based on priorities, according to user requirements. A prototype of this service was implemented using Apache Mesos and Docker. As a proof of concept, a federated application for electronic identification (eIDAS) was deployed using the DECIDE approach, which allows users to evaluate different deployment scenarios and scheduling policies, providing useful information for decision making. Experiments were carried out to validate the service's functionality and the feasibility of testing and deploying applications that require different scheduling policies.

Jesus Carretero, Mario Vasile-Cabezas, Victor Sosa
An Adaptive Restart Mechanism for Continuous Epidemic Systems

Software services based on large-scale distributed systems demand continuous and decentralised solutions for achieving system consistency and providing operational monitoring. Epidemic data aggregation algorithms provide decentralised, scalable and fault-tolerant solutions that can be used for system-wide tasks such as global state determination, monitoring and consensus. Existing continuous epidemic algorithms either restart periodically at fixed epochs or apply changes in the system state instantly, producing less accurate approximations. This work introduces an innovative mechanism without fixed epochs that monitors the system state and restarts upon detecting system convergence or divergence. The mechanism achieves correct aggregation with an approximation error as small as desired. The proposed solution is validated and analysed by means of simulations under static and dynamic network conditions.

Mosab M. Ayiad, Giuseppe Di Fatta
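
A standard building block behind such systems, push-sum epidemic averaging with a simple convergence test acting as a restart trigger, can be sketched as follows (this is a textbook scheme on a complete graph, not the paper's mechanism):

```python
import random

# Push-sum epidemic averaging: every node keeps (value, weight) and halves
# its mass with a random peer each round; value/weight converges to the
# global mean. A restart would be triggered once all local estimates agree
# within epsilon.
random.seed(1)
n = 16
values = [float(i) for i in range(n)]        # initial sensor readings
true_mean = sum(values) / n

v = values[:]
w = [1.0] * n
for _ in range(400):
    for i in range(n):
        j = random.randrange(n)              # random gossip partner
        half_v, half_w = v[i] / 2, w[i] / 2
        v[i], w[i] = half_v, half_w
        v[j] += half_v
        w[j] += half_w

estimates = [vi / wi for vi, wi in zip(v, w)]
converged = max(estimates) - min(estimates) < 1e-6   # restart trigger
print(round(estimates[0], 6), converged)
```

Mass conservation (the sums of v and w never change) is what makes the common limit equal to the true mean.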
Using Sentiment Analysis and Automated Reasoning to Boost Smart Lighting Systems

Smart cities, arising all around the globe, encourage the birth of new and different urban infrastructures, with interesting challenges and opportunities. Within each smart city, a smart community emerges, which integrates technological solutions for the definition of innovative models for the smart management of urban areas. In this paper, we describe the research activities conducted within a smart city project, introducing a novel framework for managing smart lighting systems within a smart community. We start by describing the proposed framework, then we specialize it to the specific use case. One of the main novelties of the proposed approach is the use of automated reasoning and sentiment analysis to boost the smart lighting process.

Francesco Cauteruccio, Luca Cinelli, Giancarlo Fortino, Claudio Savaglio, Giorgio Terracina
In-network Hebbian Plasticity for Wireless Sensor Networks

In typical Wireless Sensor Networks (WSNs), all sensor data is routed to a more powerful computing entity. In the case of environmental monitoring, this enables data prediction and event detection. When the size of the network increases, processing all the input data outside the network creates a bottleneck at the gateway device, which introduces delays and increases the energy consumption of the network. To solve this issue, we propose using Hebbian learning to pre-process the data inside the wireless network. This method makes it possible to reduce the dimensionality of the sensor data without losing spatial and temporal correlations; furthermore, bottlenecks are avoided. Using a recurrent neural network to predict sensor data, we show that pre-processing the data in the network with Hebbian units reduces the computation time and increases the energy efficiency of the network without compromising learning.

Tim van der Lee, Georgios Exarchakos, Sonia Heemstra de Groot
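
In-network Hebbian pre-processing can be illustrated with Oja's rule, a stabilized Hebbian update whose weight vector converges to the first principal component of its inputs, so one unit can compress correlated sensor channels into a single output (a textbook sketch with synthetic data, not the paper's network):

```python
import numpy as np

# Oja's rule: w += eta * y * (x - y * w). The decay term -eta*y^2*w keeps
# the weights bounded, and w converges to the leading principal component.
rng = np.random.default_rng(0)

# Two strongly correlated sensor channels plus a little noise.
base = rng.normal(size=1000)
X = np.stack([base + 0.1 * rng.normal(size=1000),
              base + 0.1 * rng.normal(size=1000)], axis=1)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                      # the unit's compressed output
    w += eta * y * (x - y * w)     # Hebbian term with Oja's stabilization

print(np.round(np.linalg.norm(w), 2))   # self-normalizes towards 1
```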
A High Performance Modified K-Means Algorithm for Dynamic Data Clustering in Multi-core CPUs Based Environments

The K-means algorithm is one of the most widely used methods in data mining and statistical data analysis for partitioning several objects into K distinct groups, called clusters, on the basis of their similarities, so that parallel and distributed clustering algorithms have started to be designed. The main problem of this algorithm is that it requires the number of clusters as input data, but in real life it is very difficult to fix such a value in advance. In this work we propose a parallel modified K-means algorithm where the number of clusters is increased at run time in an iterative procedure until a given cluster quality metric is satisfied. To improve the performance of the procedure, at each iteration two new clusters are created by splitting only the cluster with the worst value of the quality metric. Furthermore, experiments in a multi-core CPU based environment are presented.

Giuliano Laccetti, Marco Lapegna, Valeria Mele, Diego Romano
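
The splitting idea can be sketched in a serial, one-dimensional form, using within-cluster variance as a stand-in quality metric (the paper's algorithm is parallel, multi-dimensional, and its quality metric may differ):

```python
import statistics

# Sketch of iterative splitting: keep splitting the cluster with the worst
# quality metric (here: within-cluster variance) until every cluster is
# "good enough". 1-D data, no parallelism.

def split_worst(clusters, threshold):
    while True:
        score = lambda c: statistics.pvariance(c) if len(c) > 1 else 0.0
        worst = max(clusters, key=score)
        if score(worst) <= threshold:
            return clusters
        mid = statistics.mean(worst)            # split around the centroid
        left = [x for x in worst if x < mid]
        right = [x for x in worst if x >= mid]
        clusters.remove(worst)
        clusters += [left, right]

data = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 9.0, 9.1, 8.9]
clusters = split_worst([data], threshold=0.05)
print(sorted(len(c) for c in clusters))
```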
Overcoming GPU Memory Capacity Limitations in Hybrid MPI Implementations of CFD

In this paper, we describe a hybrid MPI implementation of a discontinuous Galerkin scheme in Computational Fluid Dynamics which can utilize all the available processing units (CPU cores or GPU devices) on each computational node. We describe the optimization techniques used in our GPU implementation, making it up to 74.88x faster than the single-core CPU implementation in our machine environment. We also perform experiments on work partitioning between heterogeneous devices to measure the ideal load balance achieving optimal performance in a single node consisting of heterogeneous processing units. The key problem is that CFD workloads need to allocate large amounts of both host and GPU device memory in order to compute accurate results. Simply scaling out by adding more nodes with high-end scientific GPU devices imposes an economic burden, not to mention additional communication overheads. From a micro-management perspective, the workload size in each single node is also limited by its attached GPU memory capacity. To overcome this, we use ZFP, a floating-point compression algorithm, to save at least 25% of data usage in our workloads, with less performance degradation than using NVIDIA UM.

Jake Choi, Yoonhee Kim, Heon-young Yeom
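
The memory-capacity argument can be made concrete with a back-of-envelope calculation (all numbers below are illustrative; only the "at least 25% saved" figure comes from the abstract):

```python
# Back-of-envelope: how compression stretches the workload that fits in one
# GPU. Memory size and per-cell footprint are invented for illustration.

gpu_mem_bytes = 16 * 2**30          # 16 GiB of device memory
bytes_per_cell = 5 * 8              # e.g. 5 double-precision fields per cell

max_cells_uncompressed = gpu_mem_bytes // bytes_per_cell
compression_ratio = 0.75            # at least 25% saved, as reported for ZFP
max_cells_compressed = int(gpu_mem_bytes / (bytes_per_cell * compression_ratio))

print(max_cells_compressed / max_cells_uncompressed)  # ~1.33x larger workload
```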
Using Trust and “Utility” for Group Formation in the Cloud of Things

In this paper we consider a CoT (Cloud of Things) scenario where agents cooperate to perform complex tasks. Agents have to select reliable partners and, in some cases, they do not have enough information about their peers. In order to support agents in their choice and to maximize the benefits of their cooperation, we combine several contributions. First, we design a trust model which exploits the recommendations coming from the ego networks of the agents. Second, we propose to partition the agents into groups by exploiting trust relationships, allowing agents to interact with the most reliable partners. To this aim, we designed an algorithm named DAGA (Distributed Agent Grouping Algorithm) to form agent groups by exploiting available reliability and reputation information; the results obtained in a simulated scenario confirm its potential advantages.

Giancarlo Fortino, Lidia Fotia, Fabrizio Messina, Domenico Rosaci, Giuseppe M. L. Sarné
Unsupervised Anomaly Thresholding from Reconstruction Errors

Internet of Things (IoT) sensors generate massive streaming data which needs to be processed in real-time for many applications. Anomaly detection is one popular way to process such data and discover nuggets of information. Various machine learning techniques for anomaly detection rely on pre-labelled data which is very expensive and not feasible for streaming scenarios. Autoencoders have been found effective for unsupervised outlier removal because of their inherent ability to better reconstruct data with higher density. Our work aims to leverage this principle to investigate approaches through which the optimal threshold for anomaly detection can be obtained in an automated and adaptive fashion for streaming scenarios. Rather than experimentally setting an optimal threshold through trial and error, we obtain the threshold from the reconstruction errors of the training data. Inspired by image processing, we investigate how thresholds set by various statistical approaches can perform in an image dataset.

Maryleen U. Ndubuaku, Ashiq Anjum, Antonio Liotta
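
One family of statistical thresholds of the kind the paper investigates can be sketched as mean-plus-k-sigma over the training reconstruction errors (the errors below are given directly, standing in for an autoencoder's output; k = 3 is an illustrative choice):

```python
import statistics

# Unsupervised thresholding sketch: fit the threshold on training
# reconstruction errors as mean + 3 * std, then flag streaming samples
# whose error exceeds it. Well-reconstructed (dense) data stays below.
train_errors = [0.10, 0.12, 0.09, 0.11, 0.10, 0.13, 0.08, 0.11]
mu = statistics.mean(train_errors)
sigma = statistics.pstdev(train_errors)
threshold = mu + 3 * sigma

stream = [0.11, 0.95, 0.10]          # 0.95: a poorly reconstructed sample
flags = [e > threshold for e in stream]
print(flags)
```

In a streaming deployment the statistics would be updated incrementally so the threshold adapts as the data distribution drifts.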
Yet Another Way to Unknowingly Gather People Coordinates and Its Countermeasures

Apps running on a smartphone have the possibility to gather data that can act as a fingerprint for their user. Such data comprise the ids of nearby WiFi networks, features of the device, etc., and they can be a precious asset for offering e.g. customised transportation means, news, ads, etc. Additionally, since WiFi network ids can easily be associated with GPS coordinates, from the users' frequent locations it is possible to guess their home address, their shopping preferences, etc. Unfortunately, existing privacy protection mechanisms and permissions on Android OS do not suffice to prevent apps from gathering such data, which can be considered sensitive and should not be disclosed to a third party. This paper shows how an app using only the permission to access WiFi networks can send private data without the user's knowledge. Moreover, an advanced mechanism is proposed to shield user private data and to selectively obscure the data an app could spy on.

Gabriella Verga, Andrea Fornaia, Salvatore Calcagno, Emiliano Tramontana
Computation Offloading with MQTT Protocol on a Fog-Mist Computing Framework

Although computation offloading has been widely discussed in the research literature as an optimisation mechanism that can utilise remote CPU resources in the Cloud, it has not been extensively applied in highly constrained devices due to long latency and the unreliability of the Internet connection. In a previous work, a Fog-Mist Computing Framework was proposed, which claims to offer lower latency and to remove the need for persistent Internet connectivity. The underlying Fog-Mist Computing architecture is made up of Fog-Nodes and Mist-Nodes. The Mist-Nodes are placed directly within the edge fabrics, thus providing Mobile Devices with connectivity services and low-latency computational resources. In this paper, MQTT, a Publisher/Subscriber messaging protocol, is adopted to deploy Remote Function Invocation among the Nodes of such a Framework, at the edge of the network.

Pietro Battistoni, Monica Sebillo, Giuliana Vitiello
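
Remote Function Invocation over a Publisher/Subscriber protocol can be sketched with an in-memory broker standing in for MQTT (a real deployment would use an MQTT client library and broker; the topic names and payloads here are invented):

```python
# In-memory stand-in for an MQTT broker, sketching Remote Function
# Invocation between nodes: a Mist-Node subscribes to a request topic,
# executes the function, and publishes the result on a reply topic.

subscribers = {}     # topic -> list of callbacks

def subscribe(topic, callback):
    subscribers.setdefault(topic, []).append(callback)

def publish(topic, payload):
    for cb in subscribers.get(topic, []):
        cb(payload)

results = []
subscribe("rfi/reply/42", results.append)    # the caller awaits its reply

def mist_node_handler(payload):
    # The offloaded function: here just doubling a number.
    publish("rfi/reply/" + payload["reply_to"], 2 * payload["arg"])

subscribe("rfi/request/double", mist_node_handler)
publish("rfi/request/double", {"arg": 21, "reply_to": "42"})
print(results)
```

The decoupling shown here (caller and executor only share topic names) is what lets the invocation work without a persistent end-to-end connection.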
Load Balancing in Hybrid Clouds Through Process Mining Monitoring

An increasing number of organisations are harnessing the benefits of hybrid cloud adoption to support their business goals, achieving privacy and control in a private cloud whilst enjoying the on-demand scalability of the public cloud. However, the complexity introduced by combining public and private clouds worsens visibility in cloud monitoring with regard to compliance with given business constraints. Load balancing, as a technique for evenly distributing workloads, can be leveraged together with process mining to help ease the monitoring challenge. In this paper we propose a load balancing approach to distribute workloads in order to minimise violations of specified business constraints. The scenario of a hospital consultation process is employed as a use case in monitoring and controlling Octavia load-balancing-as-a-service in OpenStack. The results show a co-occurrence of constraint violations and Octavia L7 Policy creation, indicating a successful application of process mining monitoring in load balancing.

Kenneth K. Azumah, Sokol Kosta, Lene T. Sørensen
Distributed Processor Load Balancing Based on Multi-objective Extremal Optimization

The paper proposes and discusses distributed processor load balancing algorithms based on the nature-inspired approach of multi-objective Extremal Optimization. Extremal Optimization is used to define task migrations aimed at processor load balancing during the execution of graph-represented distributed programs. The analysed multi-objective algorithms are based on three or four criteria selected from the following four choices: the balance of the computational loads of processors in the system, the minimal total volume of application data transfers between processors, the number of task migrations during program execution, and the influence of task migrations on computational load imbalance and communication volume. The quality of the resulting load balancing is assessed by simulating the execution of the distributed program macro data flow graphs, including all steps of the load balancing algorithm. This is done following an event-driven model in a simulator of a message-passing multiprocessor system. An experimental comparison of the multi-objective load balancing to single-objective algorithms demonstrates the superiority of the multi-objective approach.

Ivanoe De Falco, Eryk Laskowski, Richard Olejnik, Umberto Scafuri, Ernesto Tarantino, Marek Tudruj
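
The extremal-optimization flavour of repeatedly mutating the worst component can be sketched for a single criterion, load balance (the paper's algorithms are multi-objective and also weigh communication volume and migration counts; loads below are invented):

```python
# Extremal-optimization flavoured load balancing for one criterion only:
# repeatedly take a task off the most loaded processor and migrate it to
# the least loaded one. This sketch ignores communication volume.

loads = {0: [5.0, 4.0, 3.0], 1: [1.0], 2: [2.0, 1.0]}   # processor -> task loads

def imbalance(loads):
    totals = [sum(ts) for ts in loads.values()]
    return max(totals) - min(totals)

start = imbalance(loads)
for _ in range(10):
    totals = {p: sum(ts) for p, ts in loads.items()}
    worst = max(totals, key=totals.get)
    best = min(totals, key=totals.get)
    if not loads[worst]:
        break
    task = min(loads[worst])          # migrate the lightest task off the worst CPU
    loads[worst].remove(task)
    loads[best].append(task)

print(imbalance(loads) <= start)
```

A real implementation would stop when no migration improves the objective rather than running a fixed number of steps.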
Argumentation-Based Coordination in IoT: A Speaking Objects Proof-of-Concept

Coordination of Cyberphysical Systems is an increasingly relevant concern for distributed systems engineering, mostly due to the rise of the Internet of Things vision in many application domains. Against this background, Speaking Objects has been proposed as a vision of future smart objects coordinating their collective perception and action through argumentation. Along this line, in this paper we describe a Proof-of-Concept implementation of the Speaking Objects vision in a smart home deployment.

Stefano Mariani, Andrea Bicego, Marco Lippi, Marco Mamei, Franco Zambonelli
Optimized Analytics Query Allocation at the Edge of the Network

The new era of the Internet of Things (IoT) provides the space where novel applications will play a significant role in people's daily lives through the adoption of multiple services that facilitate everyday activities. The huge volumes of data produced by numerous IoT devices make the adoption of analytics imperative to produce knowledge and support efficient decision making. In this setting, one can identify two main problems: the time required to send the data to the Cloud and wait for the final response, and the distributed nature of data collection. Edge Computing (EC) can offer the necessary basis for storing the collected data locally and providing the required analytics on top of them, limiting the response time. In this paper, we envision multiple edge nodes where the stored data are the subject of analytics queries. We propose a methodology for allocating queries, defined by end users or applications, to the appropriate edge nodes in order to save time and resources in the provision of responses. By adopting our scheme, we are able to request the execution of queries from only a subset of the available nodes, avoiding processing activities that would increase the response time. Our model performs the allocation in specific epochs and manages a batch of queries at a time. We present the formulation of our problem and the proposed solution, and provide the results of an extensive evaluation process that reveals the pros and cons of the proposed model.

Anna Karanika, Madalena Soula, Christos Anagnostopoulos, Kostas Kolomvatsos, George Stamoulis
MR-DNS: Multi-resolution Domain Name System

Users want websites to deliver rich content quickly. However, rich content often comes from separate subdomains and requires additional DNS lookups, which negatively impact web performance metrics such as First Meaningful Paint Time, Page Load Time, and the Speed Index. In this paper we investigate the impact of DNS lookups on web performance and propose Multi-Resolution DNS (MR-DNS) to reduce DNS resolutions through response batching. Our results show that MR-DNS has the potential to improve Page Load Time around 14% on average, Speed Index around 10% on average and reduce DNS traffic around 50%. We also discuss how these gains may be realized in practice through incremental changes to DNS infrastructure.

Saidur Rahman, Mike P. Wittie
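
The lookup-saving effect of response batching can be sketched by counting resolutions (the zone contents and batching policy below are invented for illustration; real DNS responses carry typed resource records):

```python
# Counting sketch: a page pulls content from several subdomains. Classic DNS
# resolves each name separately; a batching resolver in the style of MR-DNS
# returns the related subdomain records with the first answer.

zone = {
    "example.com": "192.0.2.1",
    "img.example.com": "192.0.2.2",
    "api.example.com": "192.0.2.3",
    "cdn.example.com": "192.0.2.4",
}
needed = list(zone)

# Classic resolution: one query per name.
classic_queries = len(needed)

# Batched resolution: the first answer populates every related record.
cache = {}
batched_queries = 0
for name in needed:
    if name not in cache:
        batched_queries += 1
        cache.update(zone)        # one response carries all subdomain records

print(classic_queries, batched_queries)
```

Fewer resolutions on the critical path is what shortens First Meaningful Paint and Page Load Time in the paper's measurements.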
Temporal-Variation-Aware Profit-Maximized and Delay-Bounded Task Scheduling in Green Data Center

An increasing number of enterprises deploy their business applications in green data centers (GDCs) to handle the irregular and drastic fluctuations in the task arrivals of global users. GDCs aim to schedule tasks in the most cost-effective way and achieve profit maximization by increasing green energy usage and reducing brown energy usage. However, the price of grid power, revenue, and solar and wind energy vary dynamically within tasks' delay constraints, which makes it highly challenging to maximize the profit of GDCs while strictly meeting those delay constraints. Different from existing studies, a Temporal-variation-aware Profit-maximized Task Scheduling (TPTS) algorithm is proposed to consider these dynamic differences and intelligently schedule all tasks to GDCs within their delay constraints. In each interval, TPTS solves a constrained profit maximization problem with a novel Simulated-annealing-based Chaotic Particle swarm optimization (SCP). Compared to several state-of-the-art scheduling algorithms, TPTS significantly increases throughput and profit while strictly meeting tasks' delay constraints.

Haitao Yuan, Jing Bi, MengChu Zhou
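
The simulated-annealing ingredient of SCP can be sketched on a toy one-dimensional profit curve (the real algorithm hybridizes annealing with chaotic particle swarm optimization and optimizes task-to-GDC schedules under delay constraints; the curve and parameters below are invented):

```python
import math, random

# Simulated annealing on a toy profit curve, sketching the acceptance rule
# SCP builds on: always accept improvements, accept worse moves with
# probability exp(delta / T), and cool T geometrically.
random.seed(0)

def profit(x):
    return -(x - 3.0) ** 2 + 10.0       # maximum profit 10 at x = 3

x = 0.0
temp = 1.0
for step in range(2000):
    cand = x + random.uniform(-0.5, 0.5)
    delta = profit(cand) - profit(x)
    if delta > 0 or random.random() < math.exp(delta / temp):
        x = cand
    temp *= 0.995                        # geometric cooling schedule

print(round(x, 1))
```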
A Lévy Walk and Firefly Based Multi-Robots Foraging Algorithm

Foraging constitutes one of the main benchmarks in robotics. It is the act of searching for objects/tokens and, when they are found, transporting them to one or multiple locations. Swarm intelligence based algorithms have been widely used for the foraging problem. The ambient light sensor technology in today's robots makes it easy to use and implement luminous swarm intelligence based algorithms such as the Firefly and Glow-worm algorithms. In this paper, we propose a swarm intelligence based foraging algorithm called the Lévy walk and Firefly Foraging Algorithm (LFFA), which hybridizes the Lévy Walk and the Firefly Algorithm. Numerical experiments to test its performance are conducted on the ARGoS robotic simulator.

Ouarda Zedadra, Antonio Guerrieri, Hamid Seridi, Giancarlo Fortino
An Overview of Wireless Indoor Positioning Systems: Techniques, Security, and Countermeasures

Interest in Indoor Positioning Systems (IPSs) has increased widely in recent years due to technological advancement. IPSs provide users with location information for various objects inside big buildings, typically using a mobile device. Different wireless technologies are available to provide location services, such as RF, Wi-Fi, Bluetooth, Visible Light Communication (VLC), etc. IPSs mainly determine the position by analyzing sensory information collected by the mobile device continuously in real time, unless the user turns off the service. Various services and security issues are associated with IPSs. Secure positioning has become more important and crucial to the success of the delivered service. Location service networks based on off-air signal measurement are susceptible to numerous attacks (e.g. wormhole, sinkhole and Sybil attacks). This paper aims to provide an integrated view of IPSs, their technologies, and the associated security threats that face such positioning systems. The paper compares different wireless indoor positioning technologies, explores potential attacks, and evaluates IPS protection mechanisms.

Mouna S. Chebli, Heba Mohammad, Khalifa Al Amer
Hybrid Software-Defined Network Monitoring

Software-defined networking (SDN), in which OpenFlow-enabled switches operate alongside traditional switches, has become a matter of fact in ISP network paradigms and is known as a hybrid SDN (H-SDN) network. When the centralized SDN controller is introduced into an existing network, significant improvements in network utilization as well as reduced packet losses and delays are expected. However, monitoring such networks is the main concern for better traffic-management decision making, which can lead to maximum throughput performance. There is, to our knowledge, only one article proposing an H-SDN monitoring scheme so far. Thus, this paper surveys several monitoring methods/techniques for both kinds of networks, then proposes taxonomy criteria to evaluate the various monitoring methods. The survey discusses the design concepts, accuracy and limitations of each, and finally summarizes future research directions for an integrated perspective on monitoring in H-SDN networks.

Abdulfatah A. G. Abushagur, Tan Saw Chin, Rizaludin Kaspin, Nazaruddin Omar, Ahmad Tajuddin Samsudin
Time-Sensitive-Aware Scheduling Traffic (TSA-ST) Algorithm in Software-Defined Networking

A time-sensitive-aware traffic scheduling system is capable of eliminating queuing delay in the network, resulting in hard real-time guarantees. Hence, this article aims to develop a time-sensitive-aware traffic scheduling system which is able to prevent multiple time-sensitive flows from being conducted along the same path simultaneously, so that the queueing delay can be eliminated. Under this premise, the Time-Sensitive-Aware Scheduling Traffic (TSA-ST) algorithm is proposed to reduce the time complexity of computing the transmission schedule while maintaining the quality of the scheduling system. Finally, the transmission schedule is computed over different network topologies to evaluate its performance and accuracy.

Ng Kean Haur, Tan Saw Chin
Engineering Micro-intelligence at the Edge of CPCS: Design Guidelines

The Intelligent Edge computing paradigm is playing a major role in the design and development of Cyber-Physical and Cloud Systems (CPCS), extending the Cloud and overcoming its limitations so as to better address the issues related to the physical dimension of data, and therefore of data-aware intelligence (such as context-awareness and real-time responses). Despite the proliferation of research works in this area, a well-founded software engineering approach specifically addressing the distribution of intelligence sources between the Edge and the Cloud is still missing. In this paper we propose some general criteria along with a coherent set of guidelines to follow in the design of distributed intelligence within CPCS, suitably exploiting the Edge and Cloud paradigms to effectively enable data intelligence and accounting for both symbolic and sub-symbolic approaches to reasoning. Then, we exploit the notion of micro-intelligence as situated intelligence for Edge computing, promoting the idea of an intelligent environment embodying rational processes meant to complement the cognitive processes of individuals, in order to reduce their cognitive workload and augment their cognitive capabilities. To demonstrate the general applicability of our guidelines, we propose Situated Logic Programming (SLP) as the conceptual framework for delivering micro-intelligence in CPCS, and Logic Programming as a Service (LPaaS) as its reference architecture and technological embodiment.

Roberta Calegari, Giovanni Ciatto, Enrico Denti, Andrea Omicini
NIOECM: A Network I/O Event Control Mechanism to Provide Fairness of Network Performance Among VMs with Same Resource Configuration in Virtualized Environment

In a virtualized environment, the hypervisor scheduler determines the degree of shared-resource occupancy of each virtual machine (VM) according to its CPU processing and provides fair CPU processing across VMs. However, VMs experience unfair network performance because the hypervisor scheduler's policy allocates resources based on CPU processing time. In this paper, we present NIOECM, a network I/O event control technique that controls the network I/O events of network-intensive VMs to guarantee fair network performance among VMs with the same resource configuration. NIOECM delays network processing on network-intensive VMs with a high rate of network I/O events; as a result, VMs with a low rate of network I/O events get more opportunity to occupy network resources. Our experiments show that this approach provides fairer network performance and does not cause performance interference on VMs performing other tasks.

Jaehak Lee, Jihun Kang, Heonchang Yu
Learning and Prediction of E-Car Charging Requirements for Flexible Loads Shifting

The availability of distributed renewable energy sources (RES), such as photo-voltaic panels, makes it possible to consume or accumulate energy locally, avoiding power peaks and losses along the power network. However, as the number of utilities in a household or a building increases, and energy must be shared equally and intelligently among utilities and devices, demand-side management systems must exploit new solutions for such energy-usage optimisation. Current demand-side management systems rely heavily on load shifting as a concrete way to align consumption with the fluctuating produced power and to maximise energy utilisation while avoiding wastage. Moreover, the introduction of e-cars in recent years has given a boost to smart charging, as it can provide the flexibility necessary for maximising self-consumption. However, we strongly believe that an effective demand-side management system must be able to learn and predict a user's habits and the energy requirements of her e-car, to better schedule load shifting and reduce energy wastage. This paper focuses on e-car utilisation, investigating the exploitation of machine learning techniques to extract and use such knowledge from power measurements at the charging plug.

Salvatore Venticinque, Stefania Nacchia
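The kind of habit learning mentioned above can be illustrated with a deliberately simple, hypothetical predictor (the paper investigates real machine learning techniques): a per-weekday average of past charging-session energy.

```python
from collections import defaultdict

def fit_habit_model(sessions):
    """Learn a per-weekday average energy demand (kWh) from past charging
    sessions.  `sessions` is a list of (weekday, kwh) pairs.  This is a
    deliberately simple stand-in for a real learned model of user habits."""
    by_day = defaultdict(list)
    for weekday, kwh in sessions:
        by_day[weekday].append(kwh)
    return {d: sum(v) / len(v) for d, v in by_day.items()}

def predict(model, weekday, default=0.0):
    """Predict the charging requirement for a weekday never seen -> default."""
    return model.get(weekday, default)
```

Such a prediction could then feed a load-shifting scheduler that reserves the forecast energy in advance.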
Making IoT Services Accountable: A Solution Based on Blockchain and Physically Unclonable Functions

Nowadays, an important issue in the IoT landscape is enabling the dynamic establishment of interactions among two or more objects that operate autonomously in a distributed and heterogeneous environment and participate in the enactment of accountable cross-organization business processes. Achieving this goal requires a decentralized and reliable approach. Here, we propose a solution based on physically unclonable function (PUF) and blockchain technologies, which are the building blocks of the devised IT infrastructure. The core of the authentication process is a purposely designed circuit for the PUF bitcell, implemented in a 65 nm CMOS technology. One of the most important aspects of this work is the concept of the accountability node, an element inspired by a blockchain 3.0 masternode. This is the key element of the proposed architecture: it acts as the main interface for cooperating services and IoT objects, relieving users and objects of the burden of interacting with the blockchain directly.

Carmelo Felicetti, Angelo Furfaro, Domenico Saccà, Massimo Vatalaro, Marco Lanuzza, Felice Crupi
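The challenge-response authentication a PUF enables can be sketched as follows; the keyed hash below is only a software stand-in for the unclonable 65 nm circuit, and the enrolment/one-time-use scheme is a generic textbook pattern, not the paper's protocol.

```python
import hashlib
import os

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    """Simulated PUF: a keyed hash stands in for the unclonable circuit,
    whose response depends on device-unique physical variations."""
    return hashlib.sha256(device_secret + challenge).digest()

def enroll(device_secret: bytes, n: int = 4) -> dict:
    """Enrolment: the verifier records challenge-response pairs (CRPs)."""
    return {c: puf_response(device_secret, c)
            for c in (os.urandom(16) for _ in range(n))}

def authenticate(crps: dict, respond) -> bool:
    """Send one stored challenge to the device; each CRP is used only once."""
    challenge, expected = crps.popitem()
    return respond(challenge) == expected
```

A cloned device without the physical secret cannot reproduce the expected responses, which is the property the accountability infrastructure builds on.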
Generation of Network Traffic Using WGAN-GP and a DFT Filter for Resolving Data Imbalance

The intrinsic features of Internet networks lead to imbalanced class distributions when datasets are built, a phenomenon called class imbalance that is attracting increasing attention in many research fields. Despite the performance losses it causes, class imbalance has not been thoroughly studied in network traffic classification, and previous work is limited to a few solutions and/or misleading methodological approaches. In this study, we propose a method for generating network attack traffic to address data imbalance in training datasets. For this purpose, traffic data was analyzed based on deep packet inspection and features were extracted from common traffic characteristics. Similar malicious traffic was generated for classes with few samples using Wasserstein generative adversarial networks (WGAN) with a gradient penalty. The experiments show that the accuracy on each dataset improved by approximately 5% and the false detection rate was reduced by approximately 8%. This study demonstrates that learning and classification can be enhanced by solving the performance degradation caused by data imbalance in the datasets used by deep-learning-based intrusion detection systems.

WooHo Lee, BongNam Noh, YeonSu Kim, KiMoon Jeong
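The gradient penalty at the heart of WGAN-GP can be shown in closed form for a toy linear critic (a real critic is a neural network and the gradient is obtained by automatic differentiation; this is only a sketch of the loss, not the paper's model).

```python
import math

def critic(w, x):
    """A minimal linear critic f(x) = w·x, standing in for a neural critic."""
    return sum(wi * xi for wi, xi in zip(w, x))

def wgan_gp_critic_loss(w, real, fake, lam=10.0):
    """WGAN-GP critic loss: f(fake) - f(real) + lam * (||grad_x f|| - 1)^2.

    The penalty is normally evaluated at a random interpolate of real and
    fake samples; for a linear critic the input gradient is the constant
    vector w, so it can be computed in closed form here.
    """
    grad_norm = math.sqrt(sum(wi * wi for wi in w))
    penalty = lam * (grad_norm - 1.0) ** 2
    return critic(w, fake) - critic(w, real) + penalty
```

The penalty vanishes exactly when the critic's gradient norm is 1, enforcing the 1-Lipschitz constraint that makes the Wasserstein estimate valid.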
Secure Cross-Border Exchange of Health Related Data: The KONFIDO Approach

This paper sets up the scene of the KONFIDO project in a clear way. In particular, it: (i) defines KONFIDO objectives and draws KONFIDO boundaries; (ii) identifies KONFIDO users and beneficiaries; (iii) describes the environment where KONFIDO is embedded; (iv) provides a bird's eye view of the KONFIDO technologies and how they will be deployed in the pilot studies of the project; and (v) presents the approach that the KONFIDO consortium will take to prove that the proposed solutions work. KONFIDO addresses one of the top three priorities of the European Commission regarding the digital transformation of health and care in the Digital Single Market, i.e. citizens' secure access to their health data, also across borders. To make sure that KONFIDO has a high impact, its results are exposed to the wide public through three substantial pilots in three distinct European countries (namely Denmark, Italy, and Spain).

Sotiris Diamantopoulos, Dimitris Karamitros, Luigi Romano, Luigi Coppolino, Vassilis Koutkias, Kostas Votis, Oana Stan, Paolo Campegiani, David Mari Martinez, Marco Nalin, Ilaria Baroni, Fabrizio Clemente, Giuliana Faiella, Charis Mesaritakis, Evangelos Grivas, Janne Rasmussen, Jan Petersen, Isaac Cano, Elisa Puigdomenech, Erol Gelenbe, Jos Dumortier, Maja Voss-KnudeVoronkov
Safety Management in Smart Ships

Smart Ships represent the next generation of ships: they use ICT to connect all the devices on board to support integrated monitoring and safe management. In such cyber-physical systems, software has the responsibility of bridging the physical components and creating smart functions. Safety is a critical concern in systems of this kind, whose malfunctioning may result in damage to equipment and injury to people. In this paper, we deal with this aspect by identifying two interconnected sub-systems: the shipboard power system and emergency management. The proposed architecture is developed through the H-entity multi-paradigm approach, in which heterogeneous technologies are interconnected. We propose to extend the MOISE+ organisational model to deal with systems of H-entities.

Massimo Cossentino, Luca Sabatucci, Flavia Zaffora
Managing Privacy in a Social Broker Internet of Thing

Smart homes, smart cities, smart everything and the Internet of Things (IoT) have an incredible impact on our lives. However, IoT devices are typically vulnerable to attacks, and the strategy for managing IoT security depends on the IoT model and the application field. The Social Internet of Things (SIoT) can be viewed as the evolution of the IoT in the same way social networks can be considered an evolution of the Internet. In this paper, we discuss security issues in a social IoT approach based on a social broker paradigm. In particular, we present a solution to manage information privacy in a SIoT environment built on a social broker.

V. Carchiolo, A. Longheu, M. Malgeri, G. Mangioni
A PageRank Inspired Approach to Measure Network Cohesiveness

The basics of the PageRank algorithm have been widely adopted in its variations, tailored to specific scenarios. In this work, we consider the Black Hole metric, an extension of the original PageRank that leverages a (bogus) black hole node to reduce the arc-weight normalization effect. We further extend this approach by introducing several black holes to investigate the cohesiveness of the network, a measure of the strength of the ties among its nodes. First experiments on real networks show the effectiveness of the proposed approach.

V. Carchiolo, M. Grassia, A. Longheu, M. Malgeri, G. Mangioni
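A minimal sketch of the black hole idea (not the paper's exact metric): run standard power-iteration PageRank after attaching a sink node that every node links to, and observe how much rank it absorbs.

```python
def pagerank_with_black_hole(adj, d=0.85, iters=100):
    """Power-iteration PageRank on `adj` (node -> list of successors), with an
    extra 'black hole' sink node that every node links to and that has no
    out-links.  Illustrative sketch of the idea only, not the paper's metric."""
    BH = "__black_hole__"
    nodes = list(adj) + [BH]
    out = {u: list(vs) + [BH] for u, vs in adj.items()}
    out[BH] = []                      # the black hole only absorbs rank
    n = len(nodes)
    r = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        nxt = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            if out[u]:
                share = d * r[u] / len(out[u])
                for v in out[u]:
                    nxt[v] += share
            else:                     # dangling: spread mass uniformly
                for v in nodes:
                    nxt[v] += d * r[u] / n
        r = nxt
    return r
```

On a symmetric cycle the real nodes end up with equal rank, while the black hole accumulates a large share; how that share varies is the kind of signal a cohesiveness measure could build on.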
TaRad: A Thing-Centric Sensing System for Detecting Activities of Daily Living

The Activities of Daily Living (ADLs) scale is widely used to evaluate the living abilities of patients and the elderly. Most approaches currently proposed for tracking ADL indicators are human-centric. Considering the privacy concerns raised by human-centric approaches, we propose a new thing-centric sensing system, named TaRad, which detects some ADL indicators (e.g., using the fridge, making a phone call) by identifying the vibration of objects when a person interacts with them. It consists of action transceivers (named ViNode), smartphones and a server. Taking into account the limited computation resources of the action transceiver, and the drift and accuracy issues of cheap sensors, a method for extracting features from the vibration signal, named ViFE, along with a light-weight activity recognition method, named ViAR, have been implemented in ViNode. In addition, an operator recognition method, named ViOR, has been proposed to recognize the person whose actions make the transceiver vibrate when two or more people are present in an area at the same time. Experimental results verify the performance of TaRad with different persons, in terms of the sensitivity with which activities are correctly detected and the probability of successfully recognizing the operators of the activities.

Haiming Chen, Xiwen Liu, Ze Zhao, Giuseppe Aceto, Antonio Pescapè
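Light-weight vibration features of the kind ViFE could compute can be sketched as follows; the specific features chosen here (RMS energy and zero-crossing rate) are illustrative assumptions, not the paper's actual feature set.

```python
import math

def vibration_features(signal):
    """Two cheap features over one vibration window, suitable for a
    resource-constrained node: RMS energy (how strong the vibration is)
    and zero-crossing rate (a rough proxy for its dominant frequency)."""
    n = len(signal)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / (n - 1)
    return rms, zcr
```

A light-weight classifier (such as nearest centroid over these two values) could then map windows to activities like "fridge door" versus "phone pickup".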
Distributed Genomic Compression in MapReduce Paradigm

In recent years, the biological data represented for computational analysis has grown considerably in size. Although the representation of such data is delegated to specific file formats, analyzing and managing it has become increasingly difficult due to its high dimensionality. For these reasons, a computational framework called Hadoop, based on the MapReduce paradigm for managing data in distributed systems, has been introduced. Beyond the performance gains offered by this framework, our aim is to introduce a distributed implementation of the DSRC compression method, decreasing the size of the output files and making them easier to process with ad-hoc software. Performance analysis will show the reliability and efficiency achieved by our implementation.

Pasquale De Luca, Stefano Fiscale, Luca Landolfi, Annabella Di Mauro
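The map/reduce split for compression can be sketched with standard zlib standing in for DSRC (an assumption made for illustration): chunks are compressed independently in the map step, so the work parallelizes, and framed together in the reduce step.

```python
import zlib

def map_compress(chunks):
    """Map step: compress each chunk of reads independently (parallelizable)."""
    return [zlib.compress(c.encode()) for c in chunks]

def reduce_concat(compressed):
    """Reduce step: collect per-chunk blocks with length-prefixed framing."""
    return b"".join(len(b).to_bytes(4, "big") + b for b in compressed)

def decompress_all(blob):
    """Walk the frames and inflate each block, recovering the chunks."""
    out, i = [], 0
    while i < len(blob):
        n = int.from_bytes(blob[i:i + 4], "big"); i += 4
        out.append(zlib.decompress(blob[i:i + n]).decode()); i += n
    return out
```

Because each frame is self-contained, downstream tools can also decompress a single chunk without reading the whole archive.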
Distributed ACO Based Reputation Management in Crowdsourcing

Crowdsourcing is an economical and efficient tool that hires human labour to execute tasks which are otherwise difficult to solve. Verifying the quality of workers is a major problem in crowdsourcing: we need to judge workers' performance based on their service history, and it is difficult to do so without hiring other workers. In this paper, we propose an Ant Colony Optimization (ACO) based reputation management system that can differentiate between good and bad workers. Through experimental evaluation, we show that the algorithm works well in realistic scenarios and efficiently distinguishes workers with higher reputations.

Safina Showkat Ara, Subhasis Thakur, John G. Breslin
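A minimal sketch of an ACO-style reputation update (the evaporation/deposit rule below is the generic ACO pattern, not necessarily the paper's exact formula): pheromone evaporates each round, and is reinforced whenever a worker's output is verified as good.

```python
def update_reputation(tau, worker, success, rho=0.1, deposit=1.0):
    """One pheromone step: evaporate, then deposit on verified good work.
    `tau` maps worker -> pheromone level, read as a reputation score."""
    tau[worker] = (1 - rho) * tau[worker] + (deposit if success else 0.0)

def simulate(worker_is_good, rounds=50):
    """Deterministic toy run: good workers always succeed, bad never do."""
    tau = {w: 1.0 for w in worker_is_good}
    for _ in range(rounds):
        for w, good in worker_is_good.items():
            update_reputation(tau, w, success=good)
    return tau
```

Evaporation makes old behaviour fade, so a worker cannot coast forever on past good work, while repeated deposits drive reliable workers' reputation toward deposit/rho.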
Osmotic Flow Deployment Leveraging FaaS Capabilities

Nowadays, the rapid development of emerging Cloud, Fog, Edge, and Internet of Things (IoT) technologies has accelerated advancement trends, forcing applications and information systems (IS) to evolve. In this hybrid and distributed ecosystem, managing service heterogeneity is complex, and provisioning services based on the classification and allocation of suitable computational resources remains a challenge. Osmotic Computing (OC), a promising new paradigm that allows service migration for beneficial resource utilization across Cloud, Edge, and Fog Computing environments, has been introduced as a potential solution to these issues. Driven by the need for simpler management, greater agility, flexibility and scalability, this paper proposes an innovative OC ecosystem leveraging Functions-as-a-Service (FaaS); it also introduces the concept of a hybrid architectural style combining microservices and serverless architectures. Furthermore, to support the FaaS-based OC ecosystem, an osmotic flow model for video surveillance in smart cities is presented. Several experiments have been carried out to validate the functionality, assess the performance, and further improve the understanding of the usability of the OC flow in real-world applications.

Alina Buzachis, Maria Fazio, Antonio Celesti, Massimo Villari
Secure and Distributed Crowd-Sourcing Task Coordination Using the Blockchain Mechanism

A complex crowd-sourcing problem such as open source software development has multiple sub-tasks and dependencies among them, and requires the workers working on these sub-tasks to coordinate their work. Current solutions to this problem employ a centralized coordinator, which decides on the sub-task execution sequence and, being centralized, faces problems related to cost, fairness and security. In this paper, we present a futuristic model of crowd-sourcing for complex tasks that mitigates these problems. We replace the centralized coordinator with a blockchain and automate the coordinator's decision-making process. We show that the proposed solution is secure and efficient, and that the computational overhead of employing a blockchain is low.

Safina Showkat Ara, Subhasis Thakur, John G. Breslin
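The ledger that replaces the centralized coordinator can be sketched as a minimal hash chain of sub-task completion records (an illustrative stand-in for a real blockchain, with no consensus layer):

```python
import hashlib
import json

def add_block(chain, record):
    """Append a sub-task completion record; each block commits to the
    previous block's hash, so past records cannot be silently altered."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    chain.append({"prev": prev, "record": record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def verify(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "0" * 64
    for blk in chain:
        body = json.dumps({"prev": prev, "record": blk["record"]}, sort_keys=True)
        if blk["prev"] != prev or blk["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = blk["hash"]
    return True
```

Workers and requesters can each verify the ledger independently, which is what removes the need to trust a single coordinator.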
CUDA Virtualization and Remoting for GPGPU Based Acceleration Offloading at the Edge

In the last decade, GPGPU virtualization and remoting have been among the most important research topics in computer science and engineering due to the rise of cloud computing technologies. Public, private, and hybrid infrastructures need such virtualization tools in order to multiplex and better organize computing resources. With the advent of novel technologies and paradigms, such as edge computing, code offloading in mobile clouds, and deep learning techniques, the need for computing power, especially from specialized hardware such as GPUs, has skyrocketed. Although many GPGPU virtualization tools are available nowadays, in this paper we focus on improving GVirtuS, our solution for GPU virtualization. The contributions of this work focus on the CUDA plug-in, in order to provide updated performance enabling the next generation of GPGPU code offloading applications. Moreover, we present a new GVirtuS implementation characterized by a highly modular approach with full multithread support. We evaluate and discuss benchmarks of the new implementation, comparing and contrasting the results with pure CUDA and with the previous version of GVirtuS. The new GVirtuS yields better results than its previous implementation, closing the gap with pure CUDA performance and blazing the trail for future improvements.

Antonio Mentone, Diana Di Luccio, Luca Landolfi, Sokol Kosta, Raffaele Montella
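The split-driver remoting a tool like GVirtuS performs can be caricatured as follows; the handlers and return values below are hypothetical stand-ins for real CUDA runtime calls, and in a real system a socket to a GPU-equipped backend would replace the direct function call.

```python
import json

# Hypothetical backend handlers standing in for real CUDA runtime entry points.
BACKEND_TABLE = {
    "cudaMalloc": lambda size: {"ptr": 0x1000, "size": size},
    "cudaFree":   lambda ptr: {"ok": True},
}

def frontend_call(name, *args):
    """Frontend (guest) side: marshal the API call into a message, ship it to
    the backend (here just a function call instead of a socket), unmarshal."""
    request = json.dumps({"name": name, "args": args})
    return backend_dispatch(request)

def backend_dispatch(request):
    """Backend (host) side: decode the message and run the real handler."""
    msg = json.loads(request)
    return BACKEND_TABLE[msg["name"]](*msg["args"])
```

The guest application is unmodified: it believes it calls the CUDA runtime, while the work actually executes wherever the backend and its GPU live.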
Design of Self-organizing Protocol for LoWPAN Networks

IoT technology is widely employed to solve large-scale monitoring problems, e.g. in the context of smart cities or smart industries. Nevertheless, several issues have to be addressed in this context. The number of nodes can be very large and, sometimes, nodes cannot be easily reached for human intervention. Other important issues are battery life and node failure, two aspects that can affect both the quality of service provided by the IoT system and its costs. To deal with these aspects we propose a LoWPAN (Low-power Wireless Personal Area Network) protocol that supports automatic network construction without any human intervention. The resulting network is a tree featuring a main node which, in turn, is linked with the wireless gateway, a number of middle nodes (the first layer of the tree), and leaves called end nodes. The network structure and the underlying protocol described in this paper are designed to address the configuration problem and to balance inter-node communication to ensure fair power consumption. The proposed approach also features self-repair capabilities, as it can perform automatic recovery after a node failure. A case study is briefly discussed to show a potential application of the described approach.

Matteo Buffa, Fabrizio Messina, Corrado Santoro, Federico Fausto Santoro
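The automatic tree construction and self-repair described above can be sketched with a simple BFS attachment rule (an assumption for illustration; the actual protocol's parent-selection rule may differ):

```python
def build_tree(links, root):
    """BFS parent assignment: each node attaches to the first reachable
    neighbour closest to the root (main node), yielding the tree layers."""
    parent, frontier = {root: None}, [root]
    while frontier:
        nxt = []
        for u in frontier:
            for v in links.get(u, []):
                if v not in parent:
                    parent[v] = u
                    nxt.append(v)
        frontier = nxt
    return parent

def self_repair(links, root, failed):
    """Automatic recovery: rebuild the attachment without the failed node,
    so orphaned end nodes re-attach through a surviving middle node."""
    pruned = {u: [v for v in vs if v != failed]
              for u, vs in links.items() if u != failed}
    return build_tree(pruned, root)
```

When a middle node dies, its end nodes reappear under an alternative middle node, with no human intervention, mirroring the self-repair capability described in the abstract.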
Rough–Fuzzy Entropy in Neighbourhood Characterization

Entropy has been used to characterize the neighbourhood of a sample on the basis of its k nearest neighbours when data are imbalanced, and many entropy measures have been proposed in the literature to better cope with vagueness by exploiting fuzzy logic, rough set theory and their derivatives. In this paper, a rough extension of entropy is proposed to measure uncertainty and ambiguity in the neighbourhood of a sample, using the lower and upper approximations from rough–fuzzy set theory to compute the entropy of the set of the k nearest neighbours of a sample. The proposed measure is more robust to noise and allows a more flexible modeling of vagueness than fuzzy entropy.

Antonio Maratea, Alessio Ferone
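A sketch of the idea under stated assumptions: the classic De Luca–Termini fuzzy entropy is evaluated on both the lower and the upper approximation of a sample's k-nearest-neighbour set, yielding an uncertainty interval (the paper's exact measure may differ).

```python
import math

def fuzzy_entropy(memberships):
    """De Luca-Termini fuzzy entropy: for each neighbour's membership degree
    mu, accumulate -mu*log2(mu) - (1-mu)*log2(1-mu), averaged over k."""
    h = 0.0
    for mu in memberships:
        for p in (mu, 1.0 - mu):
            if p > 0.0:
                h -= p * math.log2(p)
    return h / len(memberships)

def rough_entropy_bounds(lower_mu, upper_mu):
    """Rough extension (illustrative): evaluate the entropy on the lower and
    upper approximations of the neighbourhood, giving an interval instead of
    a single value."""
    return fuzzy_entropy(lower_mu), fuzzy_entropy(upper_mu)
```

A crisp neighbourhood (all memberships 0 or 1) gives entropy 0, maximal ambiguity (all memberships 0.5) gives entropy 1, and the lower/upper pair brackets the true uncertainty.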
StormSeeker: A Machine-Learning-Based Mediterranean Storm Tracer

The Mediterranean area is subject to a range of destructive weather events, including middle-latitude storms, Mediterranean sub-tropical hurricane-like storms (“medicanes”), and small-scale but violent local storms. Although predicting large-scale atmospheric disturbances is a common activity in numerical weather prediction, the tasks of recognizing, identifying, and tracing the trajectories of such extreme weather events within weather model outputs remain challenging. We present here a new approach to this problem, called StormSeeker, that uses machine learning techniques to recognize, classify, and trace the trajectories of severe storms in atmospheric model data. We report encouraging results detecting weather hazards in a heavy middle-latitude storm that struck the Ligurian coast in October 2018, causing disastrous damage to public infrastructure and private property.

Raffaele Montella, Diana Di Luccio, Angelo Ciaramella, Ian Foster
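Tracing a storm trajectory in model output can be caricatured as chaining the minimum-pressure cell of successive frames (a drastic simplification of the machine learning approach actually proposed, useful only to fix the idea of a trajectory in gridded data):

```python
def trace_storm(frames):
    """Given a sequence of 2D pressure grids (lists of rows), locate the
    minimum-pressure cell in each frame and return the sequence of its
    (row, col) positions, read as a crude storm track."""
    track = []
    for grid in frames:
        pos = min(((r, c) for r in range(len(grid))
                   for c in range(len(grid[0]))),
                  key=lambda rc: grid[rc[0]][rc[1]])
        track.append(pos)
    return track
```

A real tracer must also decide which minima belong to the same storm across frames and classify the event type, which is where the machine learning comes in.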
Smart Cities and Open WiFis: When Android OS Permissions Cease to Protect Privacy

The widespread availability of open WiFi networks in smart cities can be considered an advanced service for citizens. However, a device connecting to a WiFi access point gives away its location. On the one hand, the access point provider could collect and analyse the ids of connecting devices, and people choose whether to connect depending on how much they trust the provider. On the other hand, an app running on the device could sense the presence of nearby WiFi networks, with consequences for user privacy. Based on the permission levels and mechanisms of Android OS, this paper shows how an app attempting to connect to WiFi networks could reveal to a third party the presence of known networks, and hence a surrogate for the user's geographical location, without her being aware of it. This is achieved without resorting to GPS readings, hence without needing dangerous-level permissions. We propose a way to counteract this weakness in order to protect user privacy.

Gabriella Verga, Salvatore Calcagno, Andrea Fornaia, Emiliano Tramontana
Multidimensional Neuroimaging Processing in ReCaS Datacenter

In the last decade, a large number of neuroimaging datasets became publicly available in different archives, so there is an increasing need to manage heterogeneous data and to aggregate and process them by means of large-scale computational resources. The ReCaS datacenter offers the most important features for managing big datasets, processing them, storing results efficiently and making all pipeline steps available for reproducible data analysis. Here, we present a scientific computing environment in the ReCaS datacenter that addresses common problems of large-scale neuroimaging processing. We show the general architecture of the datacenter and the main steps of multidimensional neuroimaging processing.

Angela Lombardi, Eufemia Lella, Nicola Amoroso, Domenico Diacono, Alfonso Monaco, Roberto Bellotti, Sabina Tangaro
A Data Preparation Approach for Cloud Storage Based on Containerized Parallel Patterns

In this paper, we present the design, implementation, and evaluation of an efficient data preparation and retrieval approach for cloud storage. The approach includes a deduplication subsystem that indexes the hash of each content to identify duplicated data. Avoiding duplicated content reduces reprocessing time during uploads and other costs related to outsourced data management tasks. Our proposed data preparation scheme enables organizations to add properties such as security, reliability, and cost-efficiency to their contents before sending them to the cloud. It also creates recovery schemes for organizations to share preprocessed contents with partners and end-users. The approach further includes an engine that encapsulates preprocessing applications into virtual containers (VCs) to create parallel patterns that improve the efficiency of the data preparation and retrieval process. In a case study, real repositories of satellite images and organizational files were prepared for migration to the cloud using processes such as compression, encryption, encoding for fault tolerance, and access control. The experimental evaluation revealed the feasibility of using a data preparation approach for organizations to mitigate risks that could still arise in the cloud. It also revealed the efficiency of the deduplication process in reducing data preparation tasks and the efficacy of parallel patterns in improving the end-user service experience.

Diana Carrizales, Dante D. Sánchez-Gallegos, Hugo Reyes, J. L. Gonzalez-Compean, Miguel Morales-Sandoval, Jesus Carretero, Alejandro Galaviz-Mosqueda
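The deduplication subsystem's core idea can be sketched as a content-addressed index: a block is stored only if its digest has not been seen before (the choice of SHA-256 here is an assumption for illustration).

```python
import hashlib

class DedupIndex:
    """Content-addressed store: identical blocks share one stored copy."""

    def __init__(self):
        self.store = {}

    def put(self, data: bytes) -> str:
        """Index the block by its hash; duplicated content is not re-stored."""
        key = hashlib.sha256(data).hexdigest()
        if key not in self.store:
            self.store[key] = data
        return key

    def get(self, key: str) -> bytes:
        """Retrieve a block by its content hash."""
        return self.store[key]
```

Uploading a repository through such an index skips reprocessing and re-transfer of every block already present, which is exactly the cost the abstract reports saving.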
Distributed Training of 3DPyranet over Intel AI DevCloud Platform

Neural network architectures have demonstrated impressive results across a wide range of domains. The availability of very large datasets makes it possible to overcome the limitations of the training stage and achieve significant levels of performance. On the other hand, despite advancements in GPU hardware, training a complex neural network model still represents a challenge: a long time is required when the computation is demanded of a single machine. In this work, a distributed training approach for the 3DPyraNet model, built for a specific domain, namely emotion recognition from videos, is discussed. The proposed work distributes the training procedure over the nodes of the Intel DevCloud Platform and shows how training performance is affected, in terms of both computational demand and achieved accuracy, compared to using a single machine. The experimental results suggest the feasibility of the approach for challenging computer vision tasks even with limited computing power based exclusively on CPUs.

Emanuel Di Nardo, Fabio Narducci
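One plausible reading of the distributed scheme is synchronous data-parallel training, where each node computes gradients on its shard and the results are averaged before every update; a minimal sketch (the paper's actual parallelization may differ):

```python
def average_gradients(worker_grads):
    """All-reduce style averaging: given one gradient vector per worker
    (each a list of floats of equal length), return their element-wise mean,
    which every worker then applies to its model replica."""
    n = len(worker_grads)
    dim = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n for i in range(dim)]

def sgd_step(params, grads, lr=0.1):
    """One synchronous update applied identically on every node."""
    return [p - lr * g for p, g in zip(params, grads)]
```

Because every replica applies the same averaged gradient, the nodes stay in lockstep, trading communication cost for a near-linear reduction in per-node compute.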
Parallel and Distributed Computing Methodologies in Bioinformatics

A significant advantage of experimental techniques such as microarrays, mass spectrometry (MS), and next-generation sequencing (NGS) is that they produce an overwhelming amount of experimental omics data. All of these technologies come with the challenges of determining how the raw omics data should be efficiently processed or normalized and, subsequently, how the data can be adequately summarised or integrated, in order to be stored and shared, as well as to enable machine learning and/or statistical analysis. Omics data analysis involves the execution of several steps, each implemented with different algorithms, that demand a lot of computing power. The main problem is the automation of the overall analysis process, to increase throughput and to reduce manual intervention (e.g., users having to supervise some steps of the analysis manually). In this scenario, parallel and distributed computing technologies (i.e., the Message Passing Interface (MPI), GPU computing, and Hadoop MapReduce) are essential to speed up and automate the whole workflow of omics data analysis. Parallel and distributed computing enable the development of bioinformatics pipelines able to achieve scalable, efficient and reliable computing performance on clusters as well as in cloud computing.

Giuseppe Agapito
Backmatter
Metadaten
Titel
Internet and Distributed Computing Systems
herausgegeben von
Dr. Raffaele Montella
Angelo Ciaramella
Giancarlo Fortino
Dr. Antonio Guerrieri
Antonio Liotta
Copyright-Jahr
2019
Electronic ISBN
978-3-030-34914-1
Print ISBN
978-3-030-34913-4
DOI
https://doi.org/10.1007/978-3-030-34914-1
