
About this book

This book constitutes the refereed proceedings of the 17th International Conference on Economics of Grids, Clouds, Systems, and Services, GECON 2020, held in Izola, Slovenia, in September 2020. Due to the COVID-19 pandemic, the conference was held virtually by the University of Ljubljana.
The 11 full papers and 9 short papers presented in this book were carefully reviewed and selected from 40 submissions. The papers are structured in selected topics, namely: Smartness in Distributed Systems; Decentralizing Clouds to Deliver Intelligence at the Edge; Digital Infrastructures for Pandemic Response and Countermeasures; Dependability and Sustainability; Economic Computing and Storage; Poster Session.

Table of Contents


Smartness in Distributed Systems


A Blockchain Consensus for Message Queue Based on Byzantine Fault Tolerance

Blockchain technology is developing rapidly, and many companies that develop and apply it have emerged. Most choose the consortium blockchain: because a consortium blockchain serves small-scale groups or institutions and requires identity authentication to join, security can be guaranteed to a certain extent. A blockchain is often considered a distributed accounting system, and an important mechanism for the stable operation of this accounting system is maintaining consistency among the distributed ledgers; the consensus mechanism in the blockchain is the algorithm that accomplishes this function. To improve throughput, many consortium blockchains use message queues as the consensus algorithm. However, message queue consensus cannot tolerate malicious nodes. To address this problem, this paper designs and implements a consensus for message queues based on Byzantine fault tolerance, which improves upon and combines the message queue and PBFT. Experiments verify that the consensus algorithm reaches a typical level of TPS and that the system still operates normally when malicious nodes appear within the tolerable range, demonstrating both robustness and efficiency.
Jiahui Zhang, Jingling Zhao, Xuyan Song, Yiping Liu, Qian Wu
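
The paper's key ingredient is PBFT's quorum rule. A minimal sketch (our own illustration, not the authors' implementation): with n = 3f + 1 replicas, a batch of queued messages commits only when at least 2f + 1 replicas report the same digest, so up to f Byzantine nodes cannot block or forge agreement.

```python
# Illustrative sketch of the PBFT quorum rule behind a Byzantine
# fault-tolerant message queue consensus (not the paper's code).
from collections import Counter

def quorum_size(n: int) -> int:
    """Votes needed to commit: 2f + 1, where f = (n - 1) // 3 is the
    maximum number of Byzantine replicas tolerated."""
    f = (n - 1) // 3
    return 2 * f + 1

def try_commit(votes: dict, n: int):
    """votes maps replica id -> digest of the message batch it validated.
    Returns the committed digest, or None if no quorum agrees."""
    digest, count = Counter(votes.values()).most_common(1)[0]
    return digest if count >= quorum_size(n) else None

# Example: 4 replicas (f = 1); one malicious replica reports a bogus digest.
votes = {"r0": "abc", "r1": "abc", "r2": "abc", "r3": "BAD"}
print(try_commit(votes, n=4))  # the honest quorum of 3 still commits "abc"
```

The plain message queue has no such rule: a single node that enqueues a forged batch is accepted, which is exactly the vulnerability the paper closes.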

Automatic Q.A-Pair Generation for Incident Tickets Handling: An Application of NLP

Chatbots answer customer questions mostly via manually crafted Question Answer (Q.A.) pairs. If organizations process vast numbers of questions, manual Q.A. pair generation and maintenance become very expensive and complicated. To reduce cost and increase efficiency, in this study we propose a low-threshold Q.A.-pair generation system that can automatically identify unique problems and their solutions from a large incident ticket dataset of an I.T. Shared Service Center. The system has four components: categorical clustering for structuring the semantic meaning of ticket information, intent identification, action recommendation, and reinforcement learning. For categorical clustering, we use a Latent Semantic Indexing (LSI) algorithm, and for intent identification, we apply Latent Dirichlet Allocation (LDA), both Natural Language Processing techniques. The actions are cleaned and clustered, and the resulting Q.A. pairs are stored in a knowledge base with reinforcement learning capabilities. The system can produce Q.A. pairs of which about 55% are useful and correct. This percentage is likely to increase significantly with feedback in its usage stage. With this study, we contribute to a further understanding of the development of automatic service processes.
Mick Lammers, Fons Wijnhoven, Faiza A. Bukhsh, Patrício de Alencar Silva

ProtectDDoS: A Platform for Trustworthy Offering and Recommendation of Protections

As the dependency of businesses on digital services increases, their vulnerability to cyberattacks increases, too. Besides providing innovative services, business owners must focus on investing in robust cybersecurity mechanisms to counter cyberattacks. Distributed Denial-of-Service (DDoS) attacks remain among the most dangerous cyberattacks, e.g., leading to service disruption, financial loss, and reputation harm. Although protection measures exist, a catalog of solutions is missing, which could help network operators to access and filter information in order to select suitable protections for specific demands.
This work presents ProtectDDoS, a platform offering recommendations of DDoS protections. ProtectDDoS provides a blockchain-based catalog, where DDoS protection providers can announce details regarding their services, while users can obtain recommendations of DDoS protections according to their specific demands (e.g., price, attacks supported, or geolocation constraints). ProtectDDoS’s Smart Contract (SC) maintains the integrity of data about protections available and provides tamper-proof reputation. To evaluate the feasibility and effectiveness of ProtectDDoS, a prototype was implemented and a case study conducted to discuss costs, including interactions with the SC.
Muriel Franco, Erion Sula, Bruno Rodrigues, Eder Scheid, Burkhard Stiller
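
The recommendation step described above amounts to filtering a catalog by user demands and ranking the matches. A toy sketch under our own assumptions (the catalog entries, field names, and ranking by price are hypothetical; the real platform stores the catalog and reputation in a Smart Contract):

```python
# Hypothetical catalog of DDoS protection offerings; in ProtectDDoS these
# records and the providers' reputation live in a blockchain Smart Contract.
PROTECTIONS = [
    {"name": "ScrubberA", "price": 100, "attacks": {"SYN flood", "UDP flood"}, "region": "EU"},
    {"name": "ScrubberB", "price": 60,  "attacks": {"SYN flood"},              "region": "US"},
    {"name": "ScrubberC", "price": 80,  "attacks": {"SYN flood", "HTTP flood"}, "region": "EU"},
]

def recommend(max_price, needed_attacks, region):
    """Return protections meeting every demand, cheapest first."""
    hits = [p for p in PROTECTIONS
            if p["price"] <= max_price
            and needed_attacks <= p["attacks"]     # all required attack types covered
            and p["region"] == region]
    return sorted(hits, key=lambda p: p["price"])

print([p["name"] for p in recommend(90, {"SYN flood"}, "EU")])  # ['ScrubberC']
```

ScrubberA is filtered out on price and ScrubberB on geolocation, leaving the one offering that satisfies all three constraints.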

Delivering Privacy-Friendly Location-Based Advertising Over Smartwatches: Effect of Virtual User Interface

The expansion of smartwatches and positioning technologies has offered marketers the opportunity to provide consumers with contextually relevant advertising messages. However, perceived privacy risk and a lack of privacy control are preventing consumers from using location-based advertising (LBA). This study explores how variation in the design of the user interfaces of smartwatches (regular user interface vs. virtual user interface) affects the perceived ease of privacy control, perceived privacy risk, and ultimately the intention of consumers to use LBA. A simple mediation analysis is conducted on the data collected from a between-subjects experiment (N = 335). The obtained results extend the growing literature on the topic of LBA and the privacy concerns related to it. The results indicate that a smartwatch augmented with a virtual user interface increases perceived ease of privacy control. Also, this has a direct positive effect on perceived privacy risk and the intention to use LBA. However, perceived privacy risk does not mediate the relationship between perceived ease of privacy control and the intention to use LBA. The findings of this study have important implications for various commercial players.
Emiel Emanuel, Somayeh Koohborfardhaghighi

Decentralising Clouds to Deliver Intelligence at the Edge


GEM-Analytics: Cloud-to-Edge AI-Powered Energy Management

Energy analysis, forecasting, and optimization methods play a fundamental role in managing Combined Heat and Power (CHP) systems for energy production, in order to find the most suitable operational point. Indeed, several industries owning such cogeneration systems can significantly reduce overall costs by applying diverse techniques to predict, in real time, the optimal load of the system. However, this is a complex task that requires processing a large amount of information from multiple data sources (IoT sensors, smart meters, and much more), and, in most cases, it is manually carried out by the energy manager of the company owning the CHP. For this reason, resorting to machine learning methods and new advanced technologies such as fog computing can significantly ease and automate real-time analyses and predictions for energy management systems that deal with huge amounts of data. In this paper we present GEM-Analytics, a new platform that exploits fog computing to enable AI-based methods for energy analysis at the edge of the network. In particular, we present two use cases involving CHP plants that need optimal strategies to reduce overall energy supply costs. In both use cases we show that our platform improves energy load predictions compared to baselines, thus reducing the costs incurred by industrial customers.
Daniele Tovazzi, Francescomaria Faticanti, Domenico Siracusa, Claudio Peroni, Silvio Cretti, Tommaso Gazzini

Using LSTM Neural Networks as Resource Utilization Predictors: The Case of Training Deep Learning Models on the Edge

Cloud and Fog technologies are steadily gaining momentum and popularity in research and industry circles, and both communities are concerned with resource usage. The present work aims to predict the resource usage of a machine learning application in an edge environment built from Raspberry Pi devices. It investigates various experimental setups and machine learning methods that act as benchmarks, allowing us to compare the accuracy of each setup. We propose a prediction model that leverages the time-series characteristics of resource utilization, employing an LSTM Recurrent Neural Network (LSTM-RNN). To arrive at a close-to-optimal LSTM-RNN architecture, we use a genetic algorithm. For the experimental evaluation, we used a real dataset constructed by training a well-known model on Raspberry Pi 3 devices. The results support the applicability of our method.
John Violos, Evangelos Psomakelis, Dimitrios Danopoulos, Stylianos Tsanakas, Theodora Varvarigou
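
A genetic search over network architectures can be sketched in a few lines. The search space, operators, and surrogate fitness below are our own toy stand-ins (a real run would train an LSTM-RNN per genome and use its validation error as fitness):

```python
# Toy genetic algorithm over LSTM hyperparameters (illustrative only; the
# search space and the surrogate fitness function are invented for the demo).
import random

random.seed(0)  # reproducible toy run

SEARCH_SPACE = {"layers": [1, 2, 3], "units": [16, 32, 64, 128], "lookback": [5, 10, 20]}

def random_genome():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(g):
    """Stand-in for the validation error of a trained LSTM-RNN; this toy
    surrogate simply prefers 2 layers, 64 units, and a lookback of 10."""
    return (abs(g["layers"] - 2)
            + abs(g["units"] - 64) / 64
            + abs(g["lookback"] - 10) / 10)

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in a}

def mutate(g, rate=0.2):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in g.items()}

def evolve(pop_size=12, generations=20):
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                    # lower error = fitter
        parents = pop[: pop_size // 2]           # elitist truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return min(pop, key=fitness)

print(evolve())
```

Keeping the fittest half of each generation (elitism) guarantees the best architecture found so far is never lost while crossover and mutation explore the rest of the space.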

Towards a Semantic Edge Processing of Sensor Data. An Incipient Experiment

This paper addresses a semantic stream processing pipeline, including data collection, semantic annotation, RDF data storage, and query processing. We investigate whether the semantic annotation step can be moved to the edge by designing and evaluating two alternative processing architectures. Experiments show that edge processing fulfills the low-latency requirement, facilitating parallel processing of the semantic enrichment of the sensor data.
Paula-Georgiana Zălhan, Gheorghe Cosmin Silaghi, Robert Andrei Buchmann

Distributed Cloud Intelligence: Implementing an ETSI MANO-Compliant Predictive Cloud Bursting Solution Using Openstack and Kubernetes

While solutions for cloud bursting already exist and are commercially available, they often rely on a limited set of metrics that are monitored and acted upon when user-defined thresholds are exceeded. In this paper, we present an ETSI MANO compliant approach that performs proactive bursting of applications based on infrastructure and application metrics. The proposed solution implements Machine Learning (ML) techniques to realise a proactive offloading of tasks in anticipation of peak utilisation that is based on pattern recognition from historical data. Experimental results comparing several forecasting algorithms show that the proposed approach can improve upon reactive cloud bursting solutions by responding quicker to system load changes. This approach is applicable to both traditional datacentres and applications as well as 5G telco infrastructures that run Virtual Network Functions (VNF) at the edge.
Francescomaria Faticanti, Jason Zormpas, Sergey Drozdov, Kewin Rausch, Orlando Avila García, Fragkiskos Sardis, Silvio Cretti, Mohsen Amiribesheli, Domenico Siracusa
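
The difference between reactive and proactive bursting can be illustrated with the simplest possible forecaster. The naive trend model below is our own stand-in for the ML forecasting algorithms the paper compares; the point is only the decision rule, which acts on the *predicted* load rather than the current one:

```python
# Illustrative proactive-bursting rule (not the paper's implementation):
# burst when the forecast load exceeds the threshold, instead of waiting
# for the current load to exceed it as a reactive system would.

def forecast_next(history, window=3):
    """Naive trend forecast: last value plus the mean recent slope."""
    recent = history[-window:]
    slope = (recent[-1] - recent[0]) / (len(recent) - 1)
    return history[-1] + slope

def should_burst(history, threshold):
    return forecast_next(history) > threshold

load = [40, 50, 60, 70]                   # steadily rising CPU utilisation (%)
print(should_burst(load, threshold=75))   # True: forecast 80 > 75, burst early
```

A reactive system with the same 75% threshold would not act yet (the current load is 70%), which is exactly the response-time gap the proactive approach closes.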

Digital Infrastructures for Pandemic Response and Countermeasures


A MDE Approach for Modelling and Distributed Simulation of Health Systems

Epidemic episodes such as COVID-19 have shown the need for simulation tools to support decision making, predict the results of control actions, and mitigate the effects of the virus. Simulation methods have been widely used by healthcare researchers and practitioners to improve the planning and management of hospitals and to predict the spread of disease. Simulating all involved aspects of an epidemic episode requires the modelling and simulation of large and complex Discrete Event Systems (DESs), supported by modular and hierarchical models that are easy for experts to use and that can be translated to efficient code for distributed simulation. This paper presents a model-driven engineering (MDE) approach to support the modelling of healthcare systems (HS) in epidemic episodes, combining different perspectives, and its translation to efficient code for scalable distributed simulations.
Unai Arronategui, José Ángel Bañares, José Manuel Colom

South Korea as the Role Model for Covid-19 Policies? An Analysis of the Effect of Government Policies on Infection Chain Structures

The fast increase of Covid-19 cases drew high attention from local and international authorities seeking to mitigate and reduce the propagation of the disease. Given the risks and negative effects inflicted by the spread of the pandemic, many countries established a series of policies reinforcing public protection from the virus. With respect to these policies, this study characterizes the infection chain structure in Korea and identifies changes in the structure over time. Furthermore, using multiple linear regressions, the impact of government policy interventions on the infection chain structure is measured. The analysis shows a high fluctuation in infection chain structures at the beginning of the pandemic, which decreases with the implemented policies. The findings serve as a foundation for policymakers to evaluate the success of policies and strategies for reducing the diffusion of Covid-19 and to make optimized resource allocation decisions.
Alexander Haberling, Jakob Laurisch, Jörn Altmann

Dependability and Sustainability


A Network Reliability Game

In an ad hoc network, access to a point depends on the participation of other, intermediate nodes. With each node behaving selfishly, we end up with a non-cooperative game in which each node incurs a cost for providing a reliable connection, but whose success depends not only on its own reliability investment but also on the investments of the nodes on a path to the access point. Our purpose here is to formally define and analyze such a game: the existence of an equilibrium outcome, and a comparison with the optimal cooperative case.
Patrick Maillé, Bruno Tuffin
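
The tension between selfish equilibria and the cooperative optimum shows up already in a deliberately tiny instance (our own construction, not the paper's general model): two relay nodes each choose whether to invest in reliability; the connection succeeds only if both invest, success is worth R to each node, and investing costs c.

```python
# Toy two-node reliability game: x = 1 means the node invests, x = 0 means
# it free-rides. Payoff of a node: R * x_self * x_other - c * x_self.

def best_response(x_other, R, c):
    # Investing pays off only when the expected gain R * x_other exceeds c.
    return 1 if R * x_other > c else 0

def find_equilibrium(x0, x1, R, c, rounds=10):
    """Iterate simultaneous best responses until a fixed point (if reached)."""
    for _ in range(rounds):
        x0, x1 = best_response(x1, R, c), best_response(x0, R, c)
    return x0, x1

# For R = 3, c = 1 there are two equilibria: both invest, or neither does.
print(find_equilibrium(1, 1, R=3, c=1))  # (1, 1): the cooperative optimum
print(find_equilibrium(0, 0, R=3, c=1))  # (0, 0): selfish play stuck at failure
```

Even though (1, 1) gives every node a strictly higher payoff, best-response dynamics started from mutual non-investment never leave (0, 0), illustrating why the equilibrium and cooperative outcomes must be compared formally.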

NuPow: Managing Power on NUMA Multiprocessors with Domain-Level Voltage and Frequency Control

Power management and task placement pose two of the greatest challenges for future many-core processors in data centers. With hundreds of cores on a single die, cores experience varying memory latencies and cannot individually regulate voltage and frequency, therefore calling for new approaches to scheduling and power management. This work presents NuPow, a hierarchical scheduling and power management framework for architectures with multiple cores per voltage and frequency domain and non-uniform memory access (NUMA) properties. NuPow considers the conflicting goals of grouping virtual machines (VMs) with similar load patterns while also placing them as close as possible to the accessed data. Implemented and evaluated on existing hardware, NuPow achieves significantly better performance per watt compared to competing approaches.
Changmin Ahn, Seungyul Lee, Chanseok Kang, Bernhard Egger

Multi-tier Power-Saving Method in Cloud Storage Systems for Content Sharing Services

The fast growth of cloud computing has a massive impact on power consumption in datacenters. In our previous study, we presented a power-saving method for cloud storage systems in which the stored data are periodically rearranged in a disk array in order of access frequency. Disks containing unpopular files can often be switched to power-saving mode, enabling power conservation. However, if such unpopular files become popular at some point, the disks containing them must spin up, which increases power consumption. To remedy this drawback, we present in this paper a multi-tier power-saving method for cloud storage systems. The idea behind our method is to divide the disk array into multiple tiers. The first tier, containing popular files, is always active for fast response, while lower tiers pack unpopular files for power conservation. To maintain this hierarchical structure, files are periodically migrated to neighboring tiers according to access frequency. To evaluate the effectiveness of the proposed method, we measured performance in simulations and in a prototype implementation using real access traces of approximately 60,000 time-series images spanning 3,000 h. In the experiments, we observed that our method consumed approximately 22% less energy than a system without any file migration among disks, while maintaining a preferred response time with an overall average of 86 ms in our prototype implementation.
Horleang Choeng, Koji Hasebe, Hirotake Abe, Kazuhiko Kato
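
The periodic migration step can be sketched as a single rebalancing round (our own simplification with hypothetical hot/cold thresholds, not the paper's algorithm): hot files are promoted one tier up, cold files demoted one tier down, so files only ever move between neighboring tiers.

```python
# Illustrative one-round tier rebalancing for a multi-tier storage array.
# tiers[0] is the always-active tier; higher indices are power-saving tiers.

def rebalance(tiers, counts, hot=10, cold=2):
    """Migrate each file at most one tier, based on its access count."""
    moves = []
    for i, tier in enumerate(tiers):
        for f in tier:
            c = counts.get(f, 0)
            if c >= hot and i > 0:
                moves.append((f, i, i - 1))        # promote popular file
            elif c <= cold and i < len(tiers) - 1:
                moves.append((f, i, i + 1))        # demote unpopular file
    for f, src, dst in moves:                      # apply after scanning
        tiers[src].remove(f)
        tiers[dst].add(f)
    return tiers

tiers = [{"a"}, {"b"}, {"c"}]
rebalance(tiers, {"a": 1, "b": 50, "c": 40})
print(tiers[0], sorted(tiers[1]), tiers[2])  # {'b'} ['a', 'c'] set()
```

Collecting the moves before applying them keeps a file from being migrated twice in one round; the now-cold "a" drops toward the power-saving tiers while the hot "b" reaches the always-active tier.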

Instant Virtual Machine Live Migration

Live migration of virtual machines (VMs) is an important tool for data center operators to achieve maintenance, power management, and load balancing. The relatively high cost of live migration makes it difficult to employ live migration for rapid load balancing or power management operations, leaving much of its promised benefits unused. The advance of fast network interconnects has led to the development of distributed shared memory (DSM) that allows a cluster of nodes to utilize remote memory with relatively low overhead. In this paper, we explore VM live migration over DSM. We present and evaluate a novel live migration algorithm on our own DSM implementation. The evaluation of live migrating various VMs executing real-life workloads shows that live migration over DSM can reduce the total migration time by 70% and the total amount of transferred data by 65% on average. The absolute average total migration time of only 1.1 s demonstrates the potential of live migration over DSM to lead to better load balancing, energy management, and total cost of ownership.
Changyeon Jo, Hyunik Kim, Bernhard Egger

Economic Computing and Storage


Towards Economical Live Migration in Data Centers

Live migration of virtual machines (VMs) enables maintenance, load balancing, and power management in data centers. The cost of live migration on several key metrics, combined with strict service-level objectives (SLOs), however, typically limits its practical application to situations where the underlying physical host has to undergo maintenance. As a consequence, the potential benefits of live migration with respect to increased resource usage and lower power consumption remain largely untapped. In this paper, we argue that live migration-aware SLOs combined with smart live migration algorithm selection provide an economically viable model for live migration in data centers. Based on a model predicting key parameters of VM live migration, an optimization algorithm selects the live migration technique that is expected to meet client SLOs while at the same time optimizing target metrics given by the data center operator. A comparison with the state of the art shows that the presented guided live migration technique selection achieves significantly fewer SLO violations while, at the same time, minimizing the effect of live migration on the infrastructure.
Youngsu Cho, Changyeon Jo, Hyunik Kim, Bernhard Egger
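
The selection step reduces to constrained optimization over predicted metrics. A sketch under our own assumptions (the technique names are standard, but the numbers are invented placeholders standing in for the paper's prediction model):

```python
# Illustrative SLO-aware technique selection. The per-technique metrics are
# hypothetical; in the paper they come from a predictive model of the VM.
TECHNIQUES = {
    "pre-copy":  {"downtime_ms": 50, "total_s": 12, "gb_sent": 6.0},
    "post-copy": {"downtime_ms": 5,  "total_s": 8,  "gb_sent": 4.0},
    "hybrid":    {"downtime_ms": 15, "total_s": 9,  "gb_sent": 4.5},
}

def select(slo, objective="gb_sent"):
    """Among techniques predicted to satisfy every client SLO bound, pick
    the one minimizing the operator's target metric; None if none qualify."""
    ok = [name for name, m in TECHNIQUES.items()
          if all(m[k] <= bound for k, bound in slo.items())]
    return min(ok, key=lambda n: TECHNIQUES[n][objective]) if ok else None

print(select({"downtime_ms": 20}))  # post-copy: least data among compliant
print(select({"downtime_ms": 1}))   # None: no technique can meet this SLO
```

Separating the client's constraints (the SLO bounds) from the operator's objective (here, data transferred) is what makes the model economically viable: each side's interest is encoded explicitly.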

Index-Selection for Minimizing Costs of a NoSQL Cloud Database

The index-selection problem in database systems is that of determining a set of indexes (data-access paths) that minimizes the costs of database operations. Although this problem has received significant attention in the context of relational database systems, the established methods and tools do not translate easily to the context of modern non-relational database systems (so-called NoSQL systems) that are widely used in cloud and grid computing, and in particular systems such as DynamoDB from Amazon Web Services. Although the index-selection problem in these contexts appears simple at first glance, due to the very limited indexing features, this simplicity is deceptive because the non-relational nature of these databases and indexes permits more complex indexing schemes to be expressed. This paper motivates and describes the index-selection problem for NoSQL databases, and DynamoDB in particular. It motivates and outlines a cost model to capture the specific monetary costs associated with database operations in this context. The cost model has not only been carefully checked for consistency using the system documentation but also been verified using actual usage costs in a live DynamoDB instance.
Sudarshan S. Chawathe
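
The core trade-off such a cost model captures can be sketched in a few lines. The prices and workload below are hypothetical placeholders, loosely modelled on a pay-per-request NoSQL service rather than actual DynamoDB pricing; the structure (indexes cut read cost but amplify write and storage cost) is the point:

```python
# Illustrative monetary cost model for NoSQL index selection (invented
# prices and workload; not the paper's model or DynamoDB's actual rates).
from itertools import chain, combinations

def monthly_cost(chosen, queries, writes, gb,
                 read_p=0.25, write_p=1.25, stor_p=0.10):
    """queries: (index, reads, speedup) triples -- a query pays the full
    read cost (factor 1.0) unless the index it needs is in `chosen`."""
    read_units = sum(r * (s if idx in chosen else 1.0) for idx, r, s in queries)
    write_units = writes * (1 + len(chosen))    # each index replicates every write
    storage_gb = gb * (1 + len(chosen))         # ...and duplicates storage
    return read_units * read_p + write_units * write_p + storage_gb * stor_p

def best_indexes(candidates, queries, writes, gb):
    """Exhaustive search over index subsets (fine for a few candidates)."""
    subsets = chain.from_iterable(
        combinations(candidates, k) for k in range(len(candidates) + 1))
    return min(subsets, key=lambda s: monthly_cost(set(s), queries, writes, gb))

# A read-heavy query justifies its index; a rarely-run query does not.
queries = [("by_user", 1000, 0.01), ("by_date", 10, 0.5)]
print(best_indexes(["by_user", "by_date"], queries, writes=50, gb=1.0))
```

Under these numbers only `by_user` pays for itself: the savings on its 1000 reads outweigh the extra write replication, while indexing the 10-read query would cost more than it saves.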

Poster Session


A Developer-Centric API Value Chain

In today’s digital economy, the creation of digital services attracts widespread interest due to its new role as the main driver of innovation. Application Programming Interfaces (APIs) and their emerging ecosystem are at the center of this digital service innovation trend. In this research, we propose a developer-centric API value chain and link it to the topic of online customer reviews (OCRs). The aim of our work is to highlight the important role of developers in the API value chain and in the success of a provider’s API.
Konrad Kirner, Somayeh Koohborfardhaghighi

Bridging Education Services and Consumer Expectations Through Reusable Learning Objects

A competitive job market requires higher education institutions to carefully update curriculums and explore new possibilities of technology-related education. In spite of the reforms in education systems, there is still a gap between the skills that the job market expects and the skill set of new graduates (Hinchliffe and Jolly 2011). This gap fertilizes a rising risk of increased unemployment among early graduates (Nghia 2018; Tan et al. 2017).
Djamshid Sultanov, Jörn Altmann

Exascale Computing Deployment Challenges

As Exascale computing proliferates, we see an accelerating shift towards clusters with thousands of nodes and thousands of cores per node, often on the back of commodity graphics processing units. This paper argues that this drives a once-in-a-generation shift of computation, and that the fundamentals of computer science therefore need to be re-examined. Exploiting the full power of Exascale computation will require attention to the fundamentals of programme design and specification, programming language design, systems and software engineering, analytic, performance, and cost models, and fundamental algorithmic design, as well as an increasing replacement of human bandwidth by computational analysis. As part of this, we argue that Exascale computing will require a significant degree of co-design and close attention to the economics underlying the challenges ahead.
Karim Djemame, Hamish Carr

