
2018 | Book

Intelligent Distributed Computing XII

Editors: Prof. Dr. Javier Del Ser, Dr. Eneko Osaba, Dr. Miren Nekane Bilbao, Dr. Javier J. Sanchez-Medina, Dr. Massimo Vecchio, Dr. Xin-She Yang

Publisher: Springer International Publishing

Book Series: Studies in Computational Intelligence


About this book

This book gathers a wealth of research contributions on recent advances in intelligent and distributed computing, presenting both architectural and algorithmic findings in these fields. A major focus is placed on new techniques and applications for evolutionary computation, swarm intelligence, multi-agent systems, multi-criteria optimization and deep/shallow machine learning models, all of which are approached as technological drivers enabling autonomous reasoning and decision-making in complex distributed environments. Part of the book is also devoted to new scheduling and resource allocation methods for distributed computing systems. The book represents the peer-reviewed proceedings of the 12th International Symposium on Intelligent Distributed Computing (IDC 2018), which was held in Bilbao, Spain, from October 15 to 17, 2018.

Table of Contents

Frontmatter

Main Track

Frontmatter
Long Distance In-Links for Ranking Enhancement

Ranking is a widely used technique to classify nodes in networks according to their relevance. Increasing one's rank is a desirable feature in almost any context; several approaches have been proposed to achieve this goal by exploiting in-links and/or out-links with other existing nodes. In this paper, we focus on the impact of in-links on rank improvement (with the PageRank metric) and on their distance from the starting node. Results for networks of different types and sizes show that the best improvement comes from long-distance nodes rather than neighbours, somewhat subverting the commonly adopted social-based approach.

V. Carchiolo, M. Grassia, A. Longheu, M. Malgeri, G. Mangioni
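The in-link effect studied above can be made concrete with a minimal, self-contained sketch (a hypothetical five-node chain, not one of the paper's networks): power-iteration PageRank comparing the rank gain of a target node when a new in-link comes from a direct neighbour versus from the most distant node.

```python
def pagerank(edges, nodes, d=0.85, iters=100):
    """Plain power-iteration PageRank over a directed edge set."""
    nodes = list(nodes)
    out = {n: [v for (u, v) in edges if u == n] for n in nodes}
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        pr = {n: (1 - d) / len(nodes)
                 + d * sum(pr[u] / len(out[u]) for u in nodes if n in out[u])
              for n in nodes}
    return pr

nodes = range(5)
chain = {(i, i + 1) for i in range(4)}           # 0 -> 1 -> 2 -> 3 -> 4
base = pagerank(chain, nodes)
near = pagerank(chain | {(1, 0)}, nodes)         # in-link from a direct neighbour
far = pagerank(chain | {(4, 0)}, nodes)          # in-link from the most distant node
print(far[0] > near[0] > base[0])
```

The distant in-link closes a long cycle that recycles rank accumulated along the whole chain back to the target, while the neighbour's in-link only returns a share of its own modest rank, which is the intuition behind the paper's finding.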
Concept Tracking and Adaptation for Drifting Data Streams under Extreme Verification Latency

When analyzing large-scale streaming data towards resolving classification problems, it is often assumed that the true labels of the incoming data are available right after prediction. This assumption allows online learning models to efficiently detect and accommodate non-stationarities in the distribution of the arriving data (concept drift). However, this assumption does not hold in many practical scenarios where a delay exists between predictions and the arrival of their true class labels, to the point that this supervision may be lacking for an infinite period of time (extreme verification latency). In this case, the development of learning algorithms capable of adapting to drifting environments without any external supervision remains a challenging research area to date. In this context, this work proposes a simple yet effective learning technique to classify non-stationary data streams under extreme verification latency. The intuition motivating the design of our technique is to predict the trajectory of concepts in the feature space. The estimated region where concepts may reside in the future can then be exploited to produce more up-to-date predictions for newly arriving examples, ultimately enhancing accuracy during this unsupervised drifting period. Our approach is compared to a benchmark of incremental and static learning methods over a set of public non-stationary synthetic datasets. Results obtained by our passive learning method are promising and encourage further research aimed at generalizing its applicability to other types of drifts.

Maria Arostegi, Ana I. Torre-Bastida, Jesus L. Lobo, Miren Nekane Bilbao, Javier Del Ser
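The trajectory intuition above can be sketched generically (a nearest-centroid toy, not the authors' actual algorithm): track each class centroid over consecutive batches, extrapolate its motion linearly, and classify new unlabeled samples against the predicted centroids.

```python
def centroid(points):
    dim = len(points[0])
    return tuple(sum(p[i] for p in points) / len(points) for i in range(dim))

def next_centroid(prev, curr):
    """Linear extrapolation of where the concept's centroid will be next."""
    return tuple(2 * b - a for a, b in zip(prev, curr))

def classify(x, centroids):
    """Nearest predicted centroid decides the label of an unlabeled sample."""
    return min(centroids, key=lambda lab: sum((xi - ci) ** 2
                                              for xi, ci in zip(x, centroids[lab])))

# Class "A" drifts rightwards across batches; class "B" stays put (1-D toy data).
a_prev = centroid([(0.0,), (0.2,)])
a_curr = centroid([(1.0,), (1.2,)])
b_curr = centroid([(5.0,), (5.2,)])
predicted = {"A": next_centroid(a_prev, a_curr), "B": b_curr}
print(classify((2.0,), predicted))
```

Here the sample at 2.0 sits ahead of class A's last observed centroid (1.1) but exactly where the extrapolation places it next, which is the kind of unsupervised update the paper exploits.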
Adversarial Sample Crafting for Time Series Classification with Elastic Similarity Measures

Adversarial Machine Learning (AML) refers to the study of the robustness of classification models when processing data samples that have been intelligently manipulated to confuse them. Procedures aimed at furnishing such confusing samples exploit concrete vulnerabilities of the learning algorithm of the model at hand, by which perturbations can cause a given data instance to be misclassified. In this context, the literature has so far gravitated toward different AML strategies to modify data instances for diverse learning algorithms, in most cases for image classification. This work builds upon this background to address AML for distance-based time series classifiers (e.g., nearest neighbors), in which attacks (i.e., modifications of the samples to be classified by the model) must be intelligently devised by taking into account the similarity measure used to compare time series. In particular, we propose different attack strategies relying on guided perturbations of the input time series based on gradient information provided by a smoothed version of the distance-based model to be attacked. Furthermore, we formulate the AML sample crafting process as an optimization problem driven by the Pareto trade-off between (1) a measure of distortion of the input sample with respect to its original version; and (2) the probability of the crafted sample confusing the model. This formulated problem is efficiently tackled using multi-objective heuristic solvers. Several experiments are discussed so as to assess whether the crafted adversarial time series succeed in confusing the distance-based model under target.

Izaskun Oregi, Javier Del Ser, Aritz Perez, Jose A. Lozano
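The two-objective formulation can be illustrated with a tiny sketch, where handcrafted series and plain Euclidean distance stand in for the elastic measures and heuristic solvers used in the paper:

```python
def dist(a, b):
    """Euclidean distance; the paper targets elastic measures such as DTW."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def objectives(x, x0, true_proto, wrong_proto):
    """(distortion w.r.t. the original series, margin of a 1-NN-style model);
    a negative margin means the crafted series now looks like the wrong class."""
    return (dist(x, x0), dist(x, wrong_proto) - dist(x, true_proto))

def pareto_front(cands, x0, true_proto, wrong_proto):
    """Keep candidates not dominated in (distortion, margin), both minimized."""
    scored = [(objectives(x, x0, true_proto, wrong_proto), x) for x in cands]
    return [x for (o, x) in scored
            if not any(p[0] <= o[0] and p[1] <= o[1] and p != o
                       for (p, _) in scored)]

true_proto, wrong_proto = (0.0, 0.0, 0.0), (1.0, 1.0, 1.0)
x0 = (0.1, 0.1, 0.1)                        # original sample of the true class
cands = [x0, (0.6, 0.6, 0.6), (0.9, 0.9, 0.9)]
front = pareto_front(cands, x0, true_proto, wrong_proto)
print(len(front))
```

The three candidates trade distortion against confusion: the original is undistorted but classified correctly, while the stronger perturbations flip the margin at increasing distortion cost, so all three are Pareto-optimal.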
Slot Co-allocation Optimization in Distributed Computing with Heterogeneous Resources

In this work, we introduce slot selection and co-allocation algorithms for parallel jobs in distributed computing with non-dedicated and heterogeneous resources. A single slot is a time span that can be assigned to a task, which is a part of a parallel job. Launching a job requires the co-allocation of a specified number of slots starting and finishing synchronously. Some existing resource co-allocation algorithms assign a job to the first set of slots matching the resource request without any optimization (the first-fit type), while other algorithms are based on an exhaustive search. In this paper, algorithms for efficient and dependable slot selection are studied and compared with known approaches. The novelty of the proposed approach lies in a general algorithm that efficiently selects a set of slots according to a specified criterion.

Victor Toporkov, Anna Toporkova, Dmitry Yemelyanov
About Designing an Observer Pattern-Based Architecture for a Multi-objective Metaheuristic Optimization Framework

Multi-objective optimization with metaheuristics is an active and popular research field which is supported by the availability of software frameworks providing algorithms, benchmark problems, quality indicators and other related components. Most of these tools follow a monolithic architecture that frequently leads to a lack of flexibility when a user intends to add new features to the included algorithms. In this paper, we explore a different approach by designing a component-based architecture for a multi-objective optimization framework based on the observer pattern. In this architecture, most algorithmic components are observable entities, which naturally allows a number of observers to be registered. This way, a metaheuristic is composed of a set of observable and observer elements, which can be easily extended without modifying the algorithm. We have developed a prototype of this architecture and implemented the NSGA-II evolutionary algorithm on top of it as a case study. Our analysis confirms the improvement in flexibility brought by this architecture, pointing out the requirements it imposes and how performance is affected when adopting it.

Antonio Benítez-Hidalgo, Antonio J. Nebro, Juan J. Durillo, José García-Nieto, Esteban López-Camacho, Cristóbal Barba-González, José F. Aldana-Montes
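The observable/observer decomposition described above can be sketched in a few lines of Python (a toy (1+1) hill climber stands in for NSGA-II; class and method names are illustrative, not the prototype's actual API):

```python
class Observable:
    """Algorithmic components inherit from this to broadcast events."""
    def __init__(self):
        self._observers = []
    def register(self, obs):
        self._observers.append(obs)
    def notify(self, **data):
        for obs in self._observers:
            obs.update(**data)

class EvolutionaryAlgorithm(Observable):
    """Toy (1+1) hill climber; observers (loggers, plotters, stop conditions)
    attach to it without any change to the algorithm's code."""
    def run(self, fitness, x0, steps):
        best = x0
        for it in range(steps):
            cand = best + 1
            if fitness(cand) > fitness(best):
                best = cand
            self.notify(iteration=it, best=best)
        return best

class ProgressLogger:
    """An observer that records the search trace."""
    def __init__(self):
        self.trace = []
    def update(self, **data):
        self.trace.append((data["iteration"], data["best"]))

ea = EvolutionaryAlgorithm()
logger = ProgressLogger()
ea.register(logger)
ea.run(lambda x: -abs(x - 3), 0, 5)
print(logger.trace)
```

The key flexibility gain is that the logger (or any new component) is plugged in via `register` alone; the search loop never has to be edited.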
Scalable Inference of Gene Regulatory Networks with the Spark Distributed Computing Platform

Inference of Gene Regulatory Networks (GRNs) remains an important open challenge in computational biology. The goal of bio-model inference is to obtain, based on time series of gene expression data, the sparse topological structure and the parameters that quantitatively describe and reproduce the dynamics of the biological system. Nevertheless, the inference of a GRN is a complex optimization problem that involves processing S-System models, which include large amounts of gene expression data from hundreds (even thousands) of genes in multiple time series (assays). This complexity, along with the amount of data managed, makes the inference of GRNs a computationally expensive task. Therefore, the generation of parallel algorithmic proposals that operate efficiently on distributed processing platforms is a must in the current reconstruction of GRNs. In this paper, a parallel multi-objective approach is proposed for the optimal inference of GRNs, simultaneously minimizing the Mean Squared Error of the S-System model fit and the Topology Regularization value. A flexible and robust multi-objective cellular evolutionary algorithm is adapted to deploy parallel tasks in the form of Spark jobs. The proposed approach has been developed using the jMetal framework, and in order to perform parallel computation, we use Spark on a cluster of distributed nodes to evaluate candidate solutions modeling the interactions of genes in biological networks.

Cristóbal Barba-González, José García-Nieto, Antonio Benítez-Hidalgo, Antonio J. Nebro, José F. Aldana-Montes
Finding Best Compiler Options for Critical Software Using Parallel Algorithms

The efficiency of a software piece is a key factor for many systems. Real-time programs, critical software, device drivers, kernel OS functions and many other software pieces which are executed thousands or even millions of times per day require a very efficient execution. How this software is built can significantly affect its run time, since the context is that of compile-once/run-many. In this sense, the optimization flags used at compilation time are a crucial element for this goal and can make a big difference in the final execution time. In this paper, we use parallel metaheuristic techniques to automatically decide which optimization flags should be activated during the compilation of a set of benchmark programs. Choosing the appropriate flag configuration is a complex combinatorial problem, but our approach is able to adapt the flag tuning to the characteristics of the software, improving the final run times with respect to other widespread practices.

Gabriel Luque, Enrique Alba
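The idea can be sketched as a small evolutionary loop over flag bitstrings (sequential rather than parallel, and with an invented surrogate cost standing in for actually compiling and timing a benchmark; the flag names are only examples of common GCC options):

```python
import random

FLAGS = ["-funroll-loops", "-finline-functions",
         "-ftree-vectorize", "-fomit-frame-pointer"]

def run_time(mask):
    """Stand-in for 'compile with these flags and time the benchmark'; a real
    run would invoke the compiler and execute the program. The interactions
    below (synergies and penalties) are invented for illustration."""
    t = 10.0 - 2.0 * mask[0] - 1.5 * mask[2]
    if mask[0] and mask[1]:
        t -= 1.0                          # flag 1 only pays off when flag 0 is on
    return t + 0.5 * mask[3]              # flag 3 hurts this toy benchmark

def evolve(pop_size=8, gens=30, seed=1):
    """Tiny (mu + lambda)-style loop: keep the fastest half of the flag
    configurations, mutate each survivor by flipping one random flag."""
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in FLAGS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=run_time)
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            child[rnd.randrange(len(FLAGS))] ^= 1
            children.append(child)
        pop = parents + children
    return min(pop, key=run_time)

best = evolve()
print([f for f, on in zip(FLAGS, best) if on], run_time(best))
```

Because the search only needs a "compile and time it" oracle, the same loop adapts the flag choice to each program's characteristics, which is the core of the paper's approach.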
Drift Detection over Non-stationary Data Streams Using Evolving Spiking Neural Networks

Drift detection in changing environments is a key factor for active adaptive methods, which require trigger mechanisms for drift adaptation. Most approaches rely on a base learner that provides accuracies or error rates to be analyzed by a detection algorithm. In this work we propose the use of evolving spiking neural networks as a new form of drift detection, which exploits the architectural changes inherent to this particular class of models to estimate the drift location without requiring any external base learner. By virtue of its inherent simplicity and lower computational cost, this embedded approach is suitable for adoption in online learning scenarios with severe resource constraints. Experiments with synthetic datasets show that the proposed technique is very competitive when compared to other drift detection techniques.

Jesus L. Lobo, Javier Del Ser, Ibai Laña, Miren Nekane Bilbao, Nikola Kasabov

Energy

Frontmatter
A Hybrid Ensemble of Heterogeneous Regressors for Wind Speed Estimation in Wind Farms

This paper focuses on a problem of wind speed estimation in wind farms by proposing an ensemble of regressors in which the outputs of four different systems (Neural Networks (NNs), Support Vector Regressors (SVRs) and Gaussian Processes (GPRs)) are the input of a final prediction system (an Extreme Learning Machine (ELM) in this case). Moreover, we propose to use variables from atmospheric reanalysis data as predictive inputs for the systems, which opens the possibility of hybridizing numerical weather models with ML techniques for wind speed prediction in real systems. The experimental evaluation of the proposed system on real data from a wind farm in Spain has been carried out, with the subsequent discussion about the performance of the different ML regressors and the ensemble method tested in this wind speed prediction problem.

L. Cornejo-Bueno, J. Acevedo-Rodríguez, L. Prieto, S. Salcedo-Sanz
Bio-inspired Approximation to MPPT Under Real Irradiation Conditions

The aim of this paper is to study the possibilities of increasing the renewable power of a photovoltaic system through a barely tested bio-inspired algorithm. Photovoltaic energy has a high potential to grow, but it has a strong dependence on climate conditions. In particular, the power production of panels is reduced under partial shading conditions, which is a very common situation in several cities around the world. Therefore, the Maximum Power Point Tracker (MPPT) algorithm becomes critical to control the photovoltaic system. In this paper, we propose a novel MPPT algorithm based on the Artificial Bee Colony (ABC) bio-inspired method, and we explicitly define a fitness function based on power production. The ABC algorithm is more attractive than other bio-inspired methods due to its simplicity and ability to resolve the problem of choosing an ideal duty cycle. Specifically, it requires a reduced number of control parameters, and the initial conditions have no influence on the convergence. The analysis has been carried out using real meteorological and consumption data, testing the behavior of the algorithm on a standalone photovoltaic system operating only with direct current.

Cristian Olivares-Rodríguez, Tony Castillo-Calzadilla, Oihane Kamara-Esteban
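A compact sketch of ABC applied to duty-cycle selection follows (the two-peak power curve is a made-up stand-in for a partially shaded panel, and the employed/onlooker phases are merged for brevity; a real MPPT would measure the panel's output at each candidate duty cycle):

```python
import random

def power(d):
    """Toy P-V curve with two peaks mimicking partial shading (hypothetical)."""
    return 40 * d * (1 - d) + 25 * max(0.0, 1 - 40 * (d - 0.85) ** 2)

def abc_mppt(n_bees=10, limit=5, iters=40, seed=3):
    """Compact ABC loop: neighbourhood search around food sources (duty
    cycles in [0, 1]), abandoning exhausted sources via scout bees."""
    rnd = random.Random(seed)
    foods = [rnd.random() for _ in range(n_bees)]
    trials = [0] * n_bees
    best = max(foods, key=power)
    for _ in range(iters):
        for i in range(n_bees):
            k = rnd.randrange(n_bees)
            cand = min(1.0, max(0.0,
                       foods[i] + rnd.uniform(-1, 1) * (foods[i] - foods[k])))
            if power(cand) > power(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:             # scout bee: restart this source
                foods[i], trials[i] = rnd.random(), 0
        best = max(foods + [best], key=power)
    return best

d = abc_mppt()
print(round(d, 3), round(power(d), 2))   # should settle near the shaded global peak
```

The population-based search is what lets ABC escape the local maximum around d = 0.5 that would trap a classic hill-climbing MPPT under partial shading.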

Industry

Decision Making in Industry 4.0 Scenarios Supported by Imbalanced Data Classification

In recent years Data Science has emerged as one of the main technological enablers in many business sectors, including the manufacturing industry. Process engineers, who traditionally resorted to engineering tools for troubleshooting, have now embraced the support of data analysis to unveil complex patterns between process parameters and the quality of products and/or the performance of the production assets in the plant. This work elaborates on a practical methodology for conducting data analysis within an industrial environment. The most important contribution of the proposed method is its focus on the importance of hypothesis generation dynamics among multidisciplinary experts in the process, prior to data capture itself. To exemplify the practical utility of this prescribed procedure, evidence from a real industrial case study is provided, departing from the dynamic generation of a hypothesis around the reduction of defects in the delivered products. Interestingly, this process leads to an imbalanced data classification problem, for which an extensive benchmark of supervised learning algorithms and balancing preprocessing techniques is performed to accurately predict whether parts are defective. Insights are drawn from this analysis so as to yield recommended parameter values for different stages of the production process, thereby achieving a lower defect rate and, ultimately, a higher manufacturing quality of the industrial process.

Jesus Para, Javier Del Ser, Aitor Aguirre, Antonio J. Nebro
A Hybrid Optimization Algorithm for Standardization of Maintenance Plans

This paper presents an algorithm to solve the combinatorial optimization problem arising in the definition of preventive maintenance plans. This problem is not easy to solve, since tasks performed on assets can be redundant or not required. The objective is a more accurate definition of plans to reduce maintenance costs, for which different optimization algorithms can be used. A single optimization algorithm could find an optimal solution depending on the use case, but it does not deliver reliable results in a generic way. The new hybrid approach, combining four different algorithms, obtains better results than the use of the individual optimization algorithms.

Eduardo Gilabert, Egoitz Konde, Aitor Arnaiz, Basilio Sierra
Labelling Drifts in a Fault Detection System for Wind Turbine Maintenance

A failure detection system is the first step towards predictive maintenance strategies. A popular data-driven method to detect incipient failures and anomalies is the training of normal behaviour models by applying a machine learning technique like feed-forward neural networks (FFNN) or extreme learning machines (ELM). However, the performance of any of these modelling techniques can be deteriorated by the unexpected rise of non-stationarities in the dynamic environment in which industrial assets operate. This unpredictable statistical change in the measured variable is known as concept drift. In this article a wind turbine maintenance case is presented, where non-stationarities of various kinds can happen unexpectedly. Such concept drift events are to be detected by means of statistical detectors and window-based approaches. However, in real complex systems, concept drifts are not as clear and evident as in artificially generated datasets. In order to evaluate the effectiveness of current drift detectors, and also to design an appropriate novel technique for this specific industrial application, it is essential to have beforehand a characterization of the existing drifts. Given the lack of information in this regard, a methodology for labelling concept drift events in the lifetime of wind turbines is proposed. This methodology will facilitate the creation of a drift database that will serve both as a training ground for concept drift detectors and as valuable information to enhance the knowledge about maintenance of complex systems.

Iñigo Martinez, Elisabeth Viles, Iñaki Cabrejas
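A minimal example of the window-based detectors mentioned above (a generic two-window mean-shift test on a synthetic signal; the labelling methodology proposed in the paper is considerably richer than this):

```python
def detect_drifts(signal, win=30, thresh=3.0):
    """Flag time indices where the mean of the most recent window departs from
    the preceding reference window by more than `thresh` standard errors."""
    drifts = []
    for t in range(2 * win, len(signal) + 1):
        ref = signal[t - 2 * win: t - win]
        rec = signal[t - win: t]
        m_ref = sum(ref) / win
        m_rec = sum(rec) / win
        var = sum((x - m_ref) ** 2 for x in ref) / win
        se = (2 * var / win) ** 0.5 or 1e-12
        if abs(m_rec - m_ref) / se > thresh:
            drifts.append(t)
    return drifts

# Deterministic synthetic signal: a small periodic pattern whose level jumps
# at t = 100 (standing in for a turbine variable's drifting behaviour).
sig = ([0.01 * (i % 3) for i in range(100)]
       + [5 + 0.01 * (i % 3) for i in range(100)])
print(detect_drifts(sig)[0])      # first index flagged right after the jump
```

On real turbine data the jump would be far less crisp, which is precisely why the paper argues for a labelled drift database to tune and benchmark such detectors.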
Time Series Forecasting in Turning Processes Using ARIMA Model

A prediction model able to predict tool life and cutting edge replacement is tackled. The study is based on the spindle load during a turning process, with the aim of optimizing the productivity and cost of turning processes. The methodology proposed to address the problem encompasses several steps, the main ones being signal filtering, modeling of the normal behavior, and forecasting. The forecasting is carried out with an Autoregressive Integrated Moving Average (ARIMA) model. Results are compared with a robust ARIMA model and show that the preceding preprocessing steps are necessary to obtain greater accuracy in predicting future values of this specific process.

Alberto Jimenez-Cortadi, Fernando Boto, Itziar Irigoien, Basilio Sierra, German Rodriguez
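The differencing-plus-autoregression core of the approach can be sketched as follows (an AR(1) on first differences, i.e. an ARIMA(1,1,0) in miniature, fitted by least squares; the paper's actual models would be fitted with a statistical package, and the signal here is synthetic, not spindle-load data):

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = c + phi * x[t-1], the AR core of ARIMA."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    phi = (sum((a - mx) * (b - my) for a, b in zip(x, y))
           / sum((a - mx) ** 2 for a in x))
    return my - phi * mx, phi

def forecast(series, steps):
    """First-difference the series (the 'I' in ARIMA), fit AR(1) on the
    differences, then integrate the predicted differences back."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    c, phi = fit_ar1(diffs)
    level, d = series[-1], diffs[-1]
    out = []
    for _ in range(steps):
        d = c + phi * d
        level += d
        out.append(level)
    return out

# A trending signal with an alternating component.
series = [2 * i + (i % 2) for i in range(20)]
print(forecast(series, 3))   # → [40.0, 43.0, 44.0], continuing trend + alternation
```

Differencing removes the trend so the autoregression only has to model the stationary residual pattern, which mirrors why the paper's preprocessing steps matter for accuracy.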
A New Distributed Self-repairing Strategy for Transient Fault Cell in Embryonics Circuit

The embryonics circuit with a cell array structure has the prominent characteristics of distributed self-control and self-repair. The distributed self-repairing strategy is a key element in designing an embryonics circuit. However, all existing strategies for embryonics circuits mainly address permanent faults and neglect transient faults. It would be a huge waste of hardware if a cell were permanently eliminated due to a local transient fault, and this waste results in seriously low hardware utilization in environments dominated by transient faults. In this paper, a new distributed self-repairing strategy named the fault-cell reutilization self-repairing strategy (FCRSS) is proposed, in which cells with transient faults can be reused. Two mechanisms, elimination and reconfiguration, are combined: transient fault-cells can be reconfigured to achieve fault-cell reutilization. The design of all the modules is then described in detail. Finally, circuit simulation and reliability analysis results prove that FCRSS can increase the hardware utilization rate and system reliability.

Zhai Zhang, Yao Qiu, Xiaoliang Yuan

Mobility and Smart Cities

Frontmatter
Solving the Open-Path Asymmetric Green Traveling Salesman Problem in a Realistic Urban Environment

In this paper, a driving route planning system for multi-point routes is designed and developed. The routing problem has been modeled as an Open-Path and Asymmetric Green Traveling Salesman Problem (OAG-TSP). The main objective of the proposed OAG-TSP is to find a route between a fixed origin and destination, visiting a group of intermediate points exactly once, minimizing the CO₂ emitted by the car and the total distance traveled. Thus, the developed transportation problem is a complex and multi-attribute variant of the well-known TSP. For its efficient solving, three classic metaheuristics have been used: Simulated Annealing, Tabu Search and Variable Neighborhood Search. These approaches have been chosen for their easy adaptation and rapid execution times, something appreciated in this kind of real-world system. The system has been built in a realistic simulation environment, using the open source framework Open Trip Planner. Additionally, three heterogeneous scenarios have been studied in three different cities of the Basque Country (Spain): Bilbao, Gasteiz and Donostia. The results obtained conclude that the most promising technique for solving this problem is Simulated Annealing. The statistical significance of these findings is confirmed by a Friedman non-parametric test.

Eneko Osaba, Javier Del Ser, Andres Iglesias, Miren Nekane Bilbao, Iztok Fister Jr., Iztok Fister, Akemi Galvez
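A minimal sketch of simulated annealing on an open-path, asymmetric instance follows (the cost matrix is hypothetical, standing in for the blended distance/CO₂ estimates that the real system obtains from Open Trip Planner):

```python
import math
import random

def route_cost(order, cost, start, end):
    """Open path: start -> intermediate stops in `order` -> end (asymmetric costs)."""
    path = [start] + order + [end]
    return sum(cost[a][b] for a, b in zip(path, path[1:]))

def anneal(points, cost, start, end, t0=10.0, cooling=0.995, iters=4000, seed=7):
    """Plain simulated annealing over the visiting order of intermediate points."""
    rnd = random.Random(seed)
    order, best, t = points[:], points[:], t0
    for _ in range(iters):
        i, j = rnd.sample(range(len(order)), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]         # swap two stops
        delta = (route_cost(cand, cost, start, end)
                 - route_cost(order, cost, start, end))
        if delta < 0 or rnd.random() < math.exp(-delta / t):
            order = cand                            # accept worse moves early on
        if route_cost(order, cost, start, end) < route_cost(best, cost, start, end):
            best = order[:]
        t *= cooling
    return best

# Hypothetical asymmetric cost matrix over nodes 0..4 (note C[i][j] != C[j][i]).
C = [[0, 2, 9, 10, 7],
     [1, 0, 6, 4, 3],
     [15, 7, 0, 8, 3],
     [6, 3, 12, 0, 9],
     [10, 4, 8, 5, 0]]
best = anneal([1, 2, 3], C, start=0, end=4)
print(best, route_cost(best, C, 0, 4))
```

Because the origin and destination are pinned and the matrix is asymmetric, the search only permutes the intermediate stops, exactly the structure of the OAG-TSP above.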
Road Traffic Forecasting Using NeuCube and Dynamic Evolving Spiking Neural Networks

This paper presents a new approach for spatio-temporal road traffic forecasting that relies on the adoption of the NeuCube architecture based on spiking neural networks. The NeuCube platform was originally conceived and designed to process electroencephalographic (EEG) signals considering their temporal component and their spatial source within the brain. Its neural representation allows for a visual analysis of connectivity among different locations, and also provides a prediction tool harnessing the predictive learning capabilities of dynamic evolving Spiking Neural Networks (deSNNs). Taking advantage of the NeuCube features, this work focuses on the potential of spatially-aware traffic variable forecasts, as well as on the exploration of the spatio-temporal relationships among different sensor locations within a traffic network. Its performance, assessed over real traffic data collected at 51 locations in the center of Madrid (Spain), is superior to that of other machine learning techniques in terms of forecasting accuracy. Moreover, we discuss the interactions and relationships among sensors of the network provided by NeuCube, which may provide valuable insights into the traffic dynamics of the city under study towards enhancing its management.

Ibai Laña, Elisa Capecci, Javier Del Ser, Jesus L. Lobo, Nikola Kasabov
A Preliminary Study on Automatic Algorithm Selection for Short-Term Traffic Forecasting

Despite the broad range of Machine Learning (ML) algorithms, there are no clear baselines to find the best method and its configuration for a given Short-Term Traffic Forecasting (STTF) problem. In ML, this is known as the Model Selection Problem (MSP). Although Automatic Algorithm Selection (AAS) has proven successful in dealing with the MSP in other areas, it has hardly been explored in STTF. This paper delves into the benefits of AAS in this field. To this end, we have used Auto-WEKA, a well-known AAS method, and compared it to the general approach (which consists of selecting the best of a set of algorithms) over a multi-class imbalanced classification STTF problem. Experimental results show AAS to be a promising methodology in this area and allow important conclusions to be drawn on how to improve the performance of AAS methods when dealing with STTF.

Juan S. Angarita-Zapata, Isaac Triguero, Antonio D. Masegosa
Solving an Eco-efficient Vehicle Routing Problem for Waste Collection with GRASP

In this work we address the optimization of real waste collection on the island of La Palma (Canary Islands, Spain). The waste containers are of two types: paper-carton and plastic packaging. The optimization criterion in the problem is to collect those containers with the highest fill levels in such a way that the environmental impact is minimized. In order to solve this optimization problem, we first estimate the fill level of the containers by exploiting historical data, and then use a metaheuristic procedure to design the collection routes. The computational experiments reveal that the optimization technique is effective and efficient, as it improves the current collection process according to several eco-efficiency indicators.

Airam Expósito-Márquez, Christopher Expósito-Izquierdo, Julio Brito-Santana, José A. Moreno-Pérez
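The GRASP construction phase can be illustrated with a tiny container-selection sketch (estimated fill levels and truck capacity are invented; the paper's full problem also optimizes the routes and eco-efficiency indicators):

```python
import random

def grasp_collect(fill, capacity, alpha=0.8, restarts=20, seed=5):
    """GRASP-style construction: repeatedly build a plan by drawing containers
    from a restricted candidate list (RCL) of the fullest feasible ones, and
    keep the plan that collects the most waste within the truck capacity."""
    rnd = random.Random(seed)
    best, best_total = [], 0.0
    for _ in range(restarts):
        remaining, cap, plan = dict(fill), capacity, []
        while True:
            feas = {c: f for c, f in remaining.items() if f <= cap}
            if not feas:
                break
            hi, lo = max(feas.values()), min(feas.values())
            rcl = [c for c, f in feas.items() if f >= hi - alpha * (hi - lo)]
            pick = rnd.choice(rcl)             # greedy-randomized choice
            plan.append(pick)
            cap -= remaining.pop(pick)
        total = sum(fill[c] for c in plan)
        if total > best_total:
            best, best_total = plan, total
    return best, best_total

# Hypothetical estimated fill levels (litres) and truck capacity.
levels = {"A": 60, "B": 50, "C": 45, "D": 30, "E": 25}
plan, total = grasp_collect(levels, capacity=100)
print(sorted(plan), total)
```

The randomized RCL lets different restarts explore container combinations that a purely greedy "fullest first" rule would never try, which is the essence of GRASP.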
Fostering Agent Cooperation in AmI: A Context-Aware Mechanism for Dealing with Multiple Intentions

Ambient Intelligence (AmI) environments dynamically provide contextual information to the intelligent agents that interact with them. In such environments, could these agents cooperate to improve their goal achievement, considering multiple intentions from several agents? With multiple agents, cooperation will depend on each agent's own intentions. Agents adapt to dynamic changes in the environment using context-aware planning mechanisms such as the Contextual Planning System (CPS), which proposes an optimal plan for a single agent based on the current context. In this paper we present the Collective CPS (CCPS), an opportunistic cooperative planning mechanism for multiple agents in AmI environments. CCPS allows agents to partially delegate their own plans or to collaborate with other agents' plans during their execution, while retaining individual planning capabilities. A working scenario is shown for a realistic AmI environment, namely a Smart Campus.

Arthur Casals, Assia Belbachir, Amal El-Fallah Seghrouchni, Anarosa Alves Franco Brandão

Robotics and Video Games

Frontmatter
Distributed Formation Tracking of Multi Robots with Trajectory Estimation

This paper investigates distributed formation tracking of multiple robots following a virtual robot as reference trajectory, subject to communication failures. The objective is to propose a control approach that improves the performance of the formation in terms of stability and robustness. Assuming a fixed and directed communication topology, the control law is developed for each robot using an extended consensus algorithm with a time-varying reference trajectory. Meanwhile, a polynomial regression method is implemented for estimating the trajectory of the virtual robot to overcome communication failures. Finally, MATLAB simulations are carried out, and the comparative results demonstrate the effectiveness of the proposed approach.

Ali Alouache, Qinghe Wu
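The trajectory-estimation step can be illustrated with a small least-squares polynomial fit (normal equations solved by hand; the quadratic leader trajectory is made up, and a real controller would use a numerical library):

```python
def polyfit(ts, xs, deg=2):
    """Least-squares polynomial fit via the normal equations, solved with
    naive Gauss-Jordan elimination with partial pivoting."""
    n = deg + 1
    A = [[sum(t ** (i + j) for t in ts) for j in range(n)] for i in range(n)]
    b = [sum(x * t ** i for t, x in zip(ts, xs)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                b[r] -= f * b[col]
    return [b[i] / A[i][i] for i in range(n)]

def predict(coeffs, t):
    return sum(c * t ** i for i, c in enumerate(coeffs))

# Samples of a hypothetical leader trajectory x(t) = 1 + 2t + 0.5t^2; the fit
# lets a follower extrapolate the leader's position during a communication loss.
ts = [0, 1, 2, 3, 4]
xs = [1 + 2 * t + 0.5 * t * t for t in ts]
coeffs = polyfit(ts, xs)
print(round(predict(coeffs, 5), 3))
```

As long as the leader's recent motion is well approximated by a low-order polynomial, the follower can keep tracking through a dropout by evaluating the fitted model at future times.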
Applying Evolutionary Computation Operators for Automatic Human Motion Generation in Computer Animation and Video Games

This paper presents an evolutionary computation scheme for automatic human motion generation in computer animation and video games. Given a set of identical physics-driven skeletons seated on the ground as an initial pose (similar for all skeletons), the method applies forces on selected bones seeking a final stable pose with all skeletons standing. Such forces are initially random but are then modulated by a set of evolutionary operators (selection, reproduction, and mutation) to make the digital characters learn to stand up by themselves. An illustrative example is discussed in detail to show the performance of this approach. The method can readily be extended to other skeleton configurations and other interesting motions with little modification. Our approach represents a significant first step towards the automatic generation of motion routines by applying evolutionary operators.

Luis de la Vega-Hazas, Francisco Calatayud, Andrés Iglesias
A General-Purpose Hardware Robotic Platform for Swarm Robotics

Swarm intelligence is based on the notion that sophisticated behaviors can be obtained from the cooperation of many simple individuals with very limited intelligence, interacting through low-level interactions among themselves and with the environment using decentralized control and self-organization. Such interactions can lead to the emergence of intelligent behavior unknown to the individual agents. One of the most remarkable applications of swarm intelligence is swarm robotics, where expensive and sophisticated robots can be replaced by a swarm of simple, inexpensive micro-robots. In this context, this paper introduces a general-purpose hardware robotic platform suitable for swarm robotics. With a careful choice of its main components and its flexible and modular architecture, this robotic platform provides hardware support for the most popular swarm intelligence algorithms. As an illustration, the paper considers four of the most popular swarm intelligence methods and then describes the most relevant hardware features of our approach to support such methods (and arguably many other swarm intelligence approaches as well) for swarm robotics.

Nureddin Moustafa, Akemi Gálvez, Andrés Iglesias

Internet of Things

Frontmatter
A Deep Learning Approach to Device-Free People Counting from WiFi Signals

The last decade has witnessed a progressive interest from the community in inferring the presence of people from changes in the signals exchanged by deployed wireless devices. This non-invasive approach finds its rationale in manifold applications where providing counting devices to the people expected to traverse the scenario at hand is neither affordable nor viable in a practical sense, such as intrusion detection in critical infrastructures. A trend in the literature has focused on modeling this paradigm as a supervised learning problem: a dataset with WiFi traces and the associated number of people is assumed to be available a priori, which makes it possible for a supervised learning algorithm to learn the pattern between traces and the number of people. This paper advances over the state of the art by proposing a novel convolutional neural network that infers such a pattern over space (frequency) and time by rearranging the received I/Q information as a three-dimensional tensor. The proposed layered architecture incorporates further processing elements for a better generalization capability of the overall model. Results are obtained over real WiFi traces and compared to those recently reported over the same dataset for shallow learning models. The superior performance shown by the proposed model paves the way towards exploring the applicability of the latest advances in Deep Learning to this specific case study.

Iker Sobron, Javier Del Ser, Iñaki Eizmendi, Manuel Velez
Evolutionary Algorithms for Design of Virtual Private Networks

Virtual Private Networks (VPNs) are a well-known technology for creating protected communication links over the Internet. The paper offers a new approach to the problem of VPN design based on evolutionary algorithms (genetic and differential evolution). The distinctive feature of the considered problem is the joint accounting of network bandwidth, reliability and cost, whose indices are calculated on the basis of the proposed queuing-theory models. The experimental assessment of the suggested solutions shows that the evolutionary algorithms can improve VPN efficiency by up to 40% in comparison with the standard variants of its creation. The comparative assessment of the suggested evolutionary algorithms shows the higher convergence of the differential evolution algorithm.

Igor Kotenko, Igor Saenko
Forming Groups in the Cloud of Things Using Trust Measures

The need to manage complex and interactive activities is becoming a key challenge in the “Internet of Things” (IoT) and calls for large hardware and power resources. One possibility for facing this problem is to virtualize physical IoT environments over the so-called Cloud-of-Things (CoT), where each device is associated with one or more software agents working in the Cloud on its behalf. In this open and heterogeneous context, IoT devices obtain significant advantages from the social cooperation of software agents, and the selection of the most trustworthy partners for cooperation becomes a crucial issue, making it necessary to use a suitable trust model. The cooperation activity can be further improved by clustering agents into different groups on the basis of trust measures, allowing each agent to interact with the agents belonging to its own group. To this purpose, we designed an algorithm to form agent groups on the basis of information about reliability and reputation collected by the agents. In order to validate both the efficiency and effectiveness of our approach, we performed some experiments in a simulated scenario, which showed the significant advantages introduced by the use of the trust measures.

Giancarlo Fortino, Lidia Fotia, Fabrizio Messina, Domenico Rosaci, Giuseppe M. L. Sarné
Terrorism and War: Twitter Cascade Analysis

Misinformation spreading over online social networks is becoming more and more critical due to the huge number of information sources whose reliability is hard to establish; moreover, several human psychology factors, such as echo chambers and biased searches, plus the intensive use of bots, make the scenario difficult to cope with. Unprecedented opportunities for gathering data to enhance knowledge have nonetheless arisen, even though the threat of taking a fake for real, or vice versa, has greatly increased; urgent questions are therefore how to ascertain the truth and how to limit the flooding of fakes. In this work, we investigate the diffusion of true, false and mixed news through the Twitter network using a large, freely available dataset of fact-checked rumor cascades, which were also categorized into specific topics (here, we focus on Terrorism and War). Our goal is to assess how news spread depending on their veracity; we also try to provide an analytic formulation of the spreading process via a differential equation that approximates this phenomenon by properly setting the retweet rate.
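A differential-equation view of cascade growth, as hinted at in this abstract, can be sketched with a simple logistic model; the retweet rates and audience cap below are hypothetical placeholders, not values estimated from the Twitter dataset:

```python
# Sketch: logistic growth dN/dt = lam * N * (1 - N / K), integrated with
# Euler steps. N(t) = cumulative retweets, lam = retweet rate, K = audience
# cap. lam and K are illustrative values, not fitted to the real cascades.

def cascade_size(lam, K, n0=1.0, dt=0.01, steps=5000):
    """Return the trajectory of cascade size over time."""
    n, traj = n0, [n0]
    for _ in range(steps):
        n += dt * lam * n * (1.0 - n / K)
        traj.append(n)
    return traj

fast = cascade_size(lam=1.5, K=10_000)   # e.g. a fast-spreading rumour
slow = cascade_size(lam=0.5, K=10_000)   # e.g. a slower-spreading story
```

In this toy model the cascade with the higher retweet rate saturates near the audience cap sooner, which is the kind of qualitative difference between cascades that a fitted retweet rate can capture.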

V. Carchiolo, A. Longheu, M. Malgeri, G. Mangioni, M. Previti
A Genetic Algorithm with Local Search Based on Label Propagation for Detecting Dynamic Communities

Interest in community detection on networks that evolve over time has grown over the last years. Genetic Algorithms, and other bio-inspired methods, have been successfully applied to tackle the community finding problem in static networks. However, little research has been done on adapting these algorithms to temporal domains. This paper focuses on the design, implementation, and empirical analysis of a new Genetic Algorithm paired with a local search operator based on Label Propagation to identify communities in dynamic networks.
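Label Propagation, the basis of the local search operator mentioned in this abstract, can be sketched as follows; the synchronous min-tie-break update and the toy graph are illustrative simplifications, not the paper's exact operator:

```python
from collections import Counter

def label_propagation(adj, max_iters=20):
    """Synchronous label propagation sketch: every node adopts the most
    frequent label among its neighbours (ties broken by the smallest label).
    adj: dict mapping node -> iterable of neighbours."""
    labels = {v: v for v in adj}  # start with one unique label per node
    for _ in range(max_iters):
        counts = {v: Counter(labels[u] for u in adj[v]) for v in adj}
        new = {}
        for v in adj:
            top = max(counts[v].values())
            new[v] = min(l for l, c in counts[v].items() if c == top)
        if new == labels:  # converged
            break
        labels = new
    return labels

# Two 4-cliques joined by a single bridge edge (3-4).
clique1, clique2 = [0, 1, 2, 3], [4, 5, 6, 7]
adj = {v: [u for u in clique1 if u != v] for v in clique1}
adj.update({v: [u for u in clique2 if u != v] for v in clique2})
adj[3] = adj[3] + [4]
adj[4] = adj[4] + [3]

labels = label_propagation(adj)
```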

A. Panizo, G. Bello-Orgaz, D. Camacho
On the Design and Tuning of Machine Learning Models for Language Toxicity Classification in Online Platforms

One of the most concerning drawbacks derived from the lack of supervision in online platforms is their exploitation by misbehaving users to deliver offending (toxic) messages while themselves remaining unknown. Given the huge volumes of data handled by these platforms, the detection of toxicity in exchanged comments and messages has naturally called for the adoption of machine learning models to automate this task. In the last few years, Deep Learning models and related techniques have played a major role in this regard due to their superior modeling capabilities, which have made them stand out as the prevailing choice in the related literature. By addressing a toxicity classification problem over a real dataset, this work aims at shedding light on two aspects of this noted dominance of Deep Learning models: (1) an empirical assessment of their predictive gains with respect to traditional Shallow Learning models; and (2) the impact of using different text embedding methods and data augmentation techniques in this classification task. Our findings reveal that in our case study the application of non-optimized Shallow and Deep Learning models attains very competitive accuracy scores, thus leaving a narrow improvement margin for the fine-grained refinement of the models or the addition of data augmentation techniques.

Maciej Rybinski, William Miller, Javier Del Ser, Miren Nekane Bilbao, José F. Aldana-Montes
Ensuring Availability of Wireless Mesh Networks for Crisis Management

The paper addresses the problem of ensuring the availability of wireless mesh networks for crisis management. It proposes the concept of a system that provides availability by scanning the network and using drones to deliver emergency wireless communication modules that restore network connectivity. The authors also offer a model of a crisis management network to assess the availability of network nodes. A software prototype implementing a fragment of the crisis management network and a mechanism for ensuring its availability has been developed. The prototype is used to analyze the security of the network and to perform experiments.

Vasily Desnitsky, Igor Kotenko, Nikolay Rudavin

Medicine and Biology

Frontmatter
Automatic Fitting of Feature Points for Border Detection of Skin Lesions in Medical Images with Bat Algorithm

This paper addresses the problem of automatic fitting of feature points for border detection of skin lesions. This is an important task in the segmentation of dermoscopy images for semi-automatic early diagnosis of melanoma and other skin lesions. Given a set of feature points selected by a dermatologist, we apply a powerful nature-inspired metaheuristic optimization method called the bat algorithm to obtain the free-form parametric Bézier curve that best fits the points in the least-squares sense. Our experimental results on two examples of skin lesions show that the method performs quite well and might be applied to automatic fitting of feature points for border detection in medical images.
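The least-squares objective that such a metaheuristic would minimize over candidate control points can be sketched generically; the parameterization and helper routines below are assumptions for illustration, not the paper's exact formulation:

```python
from math import comb

def bezier_point(ctrl, t):
    """Evaluate a Bézier curve with control points ctrl (list of (x, y)) at t,
    using the Bernstein basis."""
    n = len(ctrl) - 1
    x = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * p[0]
            for i, p in enumerate(ctrl))
    y = sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * p[1]
            for i, p in enumerate(ctrl))
    return (x, y)

def lsq_error(ctrl, points, params):
    """Least-squares fitting error: the fitness a metaheuristic such as the
    bat algorithm would minimize, given a parameter value t_i per point."""
    err = 0.0
    for (px, py), t in zip(points, params):
        bx, by = bezier_point(ctrl, t)
        err += (bx - px) ** 2 + (by - py) ** 2
    return err

# Points sampled exactly from a known cubic Bézier: the true control
# polygon should give (numerically) zero error.
true_ctrl = [(0, 0), (1, 2), (3, 2), (4, 0)]
params = [i / 10 for i in range(11)]
points = [bezier_point(true_ctrl, t) for t in params]
```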

Akemi Gálvez, Iztok Fister, Iztok Fister Jr., Eneko Osaba, Javier Del Ser, Andrés Iglesias
Multi-objective Metaheuristics for a Flexible Ligand-Macromolecule Docking Problem in Computational Biology

The problem of molecular docking focuses on minimizing the binding energy of a complex composed of a ligand and a receptor. In this paper, we propose a new approach based on the joint optimization of three conflicting objectives: $$E_{inter}$$, which relates to the ligand-receptor affinity; $$E_{intra}$$, which characterizes the ligand deformation; and the RMSD score (Root Mean Square Deviation), which measures the difference in atomic distances between the co-crystallized ligand and the computed ligand. In order to deal with this multi-objective problem, three different metaheuristic solvers (SMPSO, MOEA/D and MPSO/D) are used to evolve a numerical representation of the ligand’s conformation. An experimental benchmark is designed to shed light on the comparative performance of these multi-objective heuristics, comprising a set of HIV-proteases/inhibitors complexes where flexibility was applied. The obtained results are promising, and pave the way towards embracing the proposed algorithms for practical multi-criteria decision making in the docking problem.

Esteban López Camacho, María Jesús García-Godoy, Javier Del Ser, Antonio J. Nebro, José F. Aldana-Montes
Identifying the Polypharmacy Side-Effects in Daily Life Activities of Elders with Dementia

This paper addresses the problem of polypharmacy management in older patients with dementia. We propose a technique that combines semantic technologies with big data machine learning techniques to detect deviations in daily activities which may signal the side effects of a drug-drug interaction. A polypharmacy management knowledge base was developed and used to semantically define drug-drug interactions and to annotate, with the help of doctors, significant registered deviations from the elders’ routines. The Random Forest classifier is used to detect the days with significant deviations, while the k-means clustering algorithm is used to automate the deviation annotation process. The results are promising, showing that such an approach can be successfully applied to assist doctors in identifying the effects of polypharmacy in patients with dementia.
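The clustering stage of such a pipeline can be sketched with a tiny one-dimensional k-means that separates days with ordinary deviation scores from outlier days; the scores, the two-cluster setup and the min/max initialization are illustrative assumptions, not the paper's data or method:

```python
def kmeans_1d(values, iters=50):
    """Minimal 1-D k-means with k=2: returns (centroids, assignments).
    Centroids start at the min and max values, a deterministic (if naive)
    initialization that is good enough for a sketch."""
    cents = [min(values), max(values)]
    assign = [0] * len(values)
    for _ in range(iters):
        assign = [min(range(2), key=lambda c: abs(v - cents[c]))
                  for v in values]
        new = []
        for c in range(2):
            members = [v for v, a in zip(values, assign) if a == c]
            new.append(sum(members) / len(members) if members else cents[c])
        if new == cents:  # converged
            break
        cents = new
    return cents, assign

# Hypothetical daily deviation scores: most days are routine (low scores),
# a few days deviate strongly (candidate drug-interaction side effects).
scores = [0.1, 0.2, 0.15, 0.12, 0.9, 0.18, 0.95, 0.11]
cents, assign = kmeans_1d(scores)
# Flag the days that fall in the higher-centroid cluster.
flagged = [i for i, a in enumerate(assign) if cents[a] == max(cents)]
```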

Viorica Chifu, Cristina Pop, Tudor Cioara, Ionut Anghel, Dorin Moldovan, Ioan Salomie

Other applications

Frontmatter
A Study of the Predictive Earliness of Traffic Flow Characterization for Software Defined Networking

Software Defined Networking (SDN) is a new network paradigm that decouples the control from the data plane in order to provide a more structured approach to developing applications and services. In traditional networks, the routing of flows is defined by masks and tends to be rather static. With SDN, the granularity of routing decisions can be downscaled to single TCP sessions, and routing can be performed dynamically within a single data stream. In this context, we propose a novel approach – coined micro flow aware routing – aimed at implementing routing of flows based on the properties of transport-level information, which is closely related to the type of application. Our proposed scheme relies on the early characterization of the flow based on statistical predictors, which are computed over a time window spanning the first packets exchanged over the session. We evaluate different window lengths over real traffic data to examine the Pareto trade-off between the earliness of flow characterization and its predictive accuracy. These results stimulate further research towards ensuring the practicality of the scheme.
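Statistical predictors over the first packets of a session, as described in this abstract, can be sketched like so; the specific features (packet sizes and inter-arrival times) are an illustrative assumption, not the paper's exact predictor set:

```python
# Sketch: early flow characterization features computed over the first N
# packets of a session. Feature choice is illustrative, not from the paper.
from statistics import mean, pstdev

def early_features(packets, window=10):
    """packets: list of (timestamp, size) tuples for one flow.
    Returns statistical predictors over the first `window` packets."""
    w = packets[:window]
    sizes = [s for _, s in w]
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(w, w[1:])]
    return {
        "mean_size": mean(sizes),
        "std_size": pstdev(sizes),
        "mean_gap": mean(gaps) if gaps else 0.0,
        "std_gap": pstdev(gaps) if gaps else 0.0,
    }

# A hypothetical flow: (seconds since flow start, packet size in bytes).
flow = [(0.00, 60), (0.01, 1500), (0.03, 1500), (0.06, 60), (0.10, 1500)]
feats = early_features(flow, window=5)
```

A classifier trained on such vectors could then steer the flow onto an appropriate route after only its first few packets, which is the earliness/accuracy trade-off the paper evaluates.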

Hegoi Garitaonandia, Javier Del Ser, Juanjo Unzilla, Eduardo Jacob
An e-Exam Platform Approach to Enhance University Academic Student’s Learning Performance

Nowadays, it is common for higher education institutions to use computer-based exams, partly or integrally, in their evaluation processes. The fact that exams are undertaken on a computer allows new features to be acquired that may provide more reliable insights into the behaviour and state of the student during the exam. Current performance monitoring approaches are either intrusive or based on productivity measures, and are thus often dreaded by workers. Moreover, these approaches do not take into account the importance and role of the numerous external factors that influence productivity. In this paper, we outline a non-intrusive and non-invasive performance monitoring approach, developed as a stress detection system. It is guided by psychological stress studies, as well as by the nature of stress detection during high-stakes exams, and relies on real-time analysis of mouse movements and decision-making behavioural patterns during exam execution, in order to enhance university students’ learning performance.

Radu Albastroiu, Anisia Iova, Filipe Gonçalves, Marian Cristian Mihaescu, Paulo Novais
Collective Profitability of DAG-Based Selling-Buying Intermediation Processes

We revisit our formal model of intermediation business processes and propose its generalization from trees to DAGs. With the new model, a company can use multiple sellers to better reach the market of potential buyers interested in purchasing its products. The sellers are engaged in transactions via a set of intermediaries that help connect with end customers, rather than acting directly in the market. This process can be represented by a complex DAG-structured business transaction. In this work, we present a formal model of such transactions based on DAGs, and we generalize our results regarding collectively profitable intermediation transactions.

Amelia Bădică, Costin Bădică, Mirjana Ivanović, Doina Logofătu
Knowledge-Based Metrics for Document Classification: Online Reviews Experiments

In this paper we propose a new method that addresses the document classification problem with respect to topic. The presented method takes into consideration only textual measures. We exemplify the method by considering three sets of documents on gradually different topics: (i) the first two sets contain reviews commenting on the characteristics of the reviewed entities, which are electronic devices – laptops and mobile phones; (ii) the third set contains reviews about touristic locations. All the review texts are written in Romanian and were extracted by crawling popular Romanian sites. The paper presents and discusses the evaluation scores obtained after the application of the textual measures.

Mihaela Colhon, Costin Bădică
Experiments of Distributed Ledger Technologies Based on Global Clock Mechanisms

This paper reports on some experiments using different global clock mechanisms in distributed ledger technologies. Recently, using global clocks in distributed systems has become practical due to the progress of small atomic clock devices. However, current distributed systems such as typical distributed ledger technologies assume traditional loosely synchronized clocks. In this paper, we have implemented logical and physical global clock mechanisms in a distributed ledger system and investigated how different clock mechanisms influence the performance and scalability of distributed ledger technologies. When comparing these clocks, we found that with logical global clocks the number of messages exchanged among the nodes increases with the number of nodes; thus, physical global clocks are more suitable than logical global clocks for use in distributed ledger systems. We also found that guaranteeing transaction ordering based on global time and maintaining transaction throughput become a tradeoff in distributed ledger systems.

Yuki Yamada, Tatsuo Nakajima
Backmatter
Metadata
Title
Intelligent Distributed Computing XII
Editors
Prof. Dr. Javier Del Ser
Dr. Eneko Osaba
Dr. Miren Nekane Bilbao
Dr. Javier J. Sanchez-Medina
Dr. Massimo Vecchio
Dr. Xin-She Yang
Copyright Year
2018
Electronic ISBN
978-3-319-99626-4
Print ISBN
978-3-319-99625-7
DOI
https://doi.org/10.1007/978-3-319-99626-4
