
About this Book

This book constitutes the refereed proceedings of the 8th IFIP WG 2.14 European Conference on Service-Oriented and Cloud Computing, ESOCC 2020, held in Heraklion, Crete, Greece, in September 2020.

The 6 full and 8 short papers presented in this volume were carefully reviewed and selected from 20 submissions. The main event comprised the main research track, which focused on presenting cutting-edge research in both the service-oriented and cloud computing areas. In conjunction, an industrial track was also held, aiming to bring academia and industry together by showcasing applications of service-oriented and cloud computing research in industry, especially in the form of case studies.

The chapters ‘Identification of Comparison Key Elements and their Relationships for Cloud Service Selection’ and ‘Technology-Agnostic Declarative Deployment Automation of Cloud Applications’ are available open access under a Creative Commons Attribution 4.0 International License.



Formal Methods


Testing Conformance in Multi-component Enterprise Application Management

Modern enterprise applications integrate various heterogeneous components, whose management must be suitably coordinated. It hence becomes crucial to check whether the management allowed by the implementation of an application component conforms to a given specification. One may indeed wish to replace component specifications with conforming implementations while ensuring that already planned management can still be enacted, or that no additional (potentially undesired) management activities get enabled. With this in mind, we propose a parametric relation for testing the conformance of the management of application components, based on an existing formalism for modeling multi-component application management (i.e., management protocols). We also discuss how such a relation can be exploited to ensure that replacing a specification with a conforming implementation continues to enable all already allowed management activities, and/or that no additional (potentially undesired) management activity gets enabled.
Jacopo Soldani, Lars Luthmann, Malte Lochau, Antonio Brogi
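The conformance idea described in this abstract can be illustrated with a toy trace-inclusion check between two labeled transition systems. This is a simplified sketch of the general notion, not the paper's parametric relation; all state and action names are invented.

```python
# Toy conformance check: an implementation conforms to a specification if
# it enables no management trace that the specification forbids.

def traces(lts, state, depth):
    """Enumerate all action sequences of length <= depth from `state`."""
    result = {()}
    if depth == 0:
        return result
    for action, target in lts.get(state, []):
        for t in traces(lts, target, depth - 1):
            result.add((action,) + t)
    return result

def conforms(impl, spec, start, depth=5):
    """Bounded trace inclusion: every impl trace must be allowed by spec."""
    return traces(impl, start, depth) <= traces(spec, start, depth)

# Spec allows configure -> start -> stop; impl omits 'stop' (still conforms),
# while impl2 enables an extra 'delete' action (does not conform).
spec = {"s0": [("configure", "s1")], "s1": [("start", "s2")], "s2": [("stop", "s0")]}
impl = {"s0": [("configure", "s1")], "s1": [("start", "s2")]}
impl2 = {"s0": [("configure", "s1"), ("delete", "s0")]}

print(conforms(impl, spec, "s0"))   # True
print(conforms(impl2, spec, "s0"))  # False
```

Note that this one-sided check only rules out additional undesired activities; ensuring that already planned management stays enabled would require the converse inclusion as well.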

Formalizing Event-Driven Behavior of Serverless Applications

We present new operational semantics for serverless computing that model the event-driven relationships between serverless functions, as well as their interaction with platform services such as databases and object stores. These semantics precisely encapsulate how control transfers between functions, both directly and through reads and writes to platform services. We use these semantics to define the notion of the service call graph for serverless applications that captures program flows through functions and services. Finally, we construct service call graphs for 12 serverless JavaScript applications, using a prototype of our call graph construction algorithm, and we evaluate their accuracy.
Matthew Obetz, Anirban Das, Timothy Castiglia, Stacy Patterson, Ana Milanova
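A service call graph of the kind described above can be sketched as a directed graph whose nodes are functions and platform services, with edges for both direct invocations and event-driven transfers through services. The wiring below is entirely hypothetical and does not reflect the paper's applications or algorithm.

```python
# Hypothetical service call graph: control can flow from a function into a
# platform service (a write) and from the service into another function
# (a trigger), so reachability must traverse both kinds of edges.

wiring = [
    ("upload_handler", "invokes", "resize_image"),
    ("resize_image", "writes", "thumbnails_bucket"),
    ("thumbnails_bucket", "triggers", "notify_user"),
]

def build_call_graph(wiring):
    graph = {}
    for src, _, dst in wiring:
        graph.setdefault(src, set()).add(dst)
    return graph

def reachable(graph, start):
    """All functions and services transitively reachable from `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

g = build_call_graph(wiring)
print(sorted(reachable(g, "upload_handler")))
# ['notify_user', 'resize_image', 'thumbnails_bucket']
```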

Probabilistic Verification of Outsourced Computation Based on Novel Reversible PUFs

With the growing number of commercial cloud-computing services, there is a corresponding need to verify that such computations were performed correctly. In other words, after a weak client outsources computations to an untrusted cloud, it must be able to ensure the correctness of the results with less work than re-performing the computations. This is referred to as verifiable computation. In this paper we present a new probabilistic verifiable computation method based on a novel Reversible Physically Unclonable Function (PUF) and a binomial Bayesian inference model. Our scheme links the outsourced software with the cloud-node hardware to provide a proof of computational integrity and of the correctness of the results with high probability. The proposed Reversible SW-PUF is a two-way function capable of computing partial inputs given its outputs. Given the random output signature of a specific instruction in a specific basic block of the program, only the computing platform that originally computed the instruction can regenerate the inputs of the instruction correctly to within a certain number of bits. To explore the feasibility of the proposed design, the Reversible SW-PUF was implemented in HSPICE using 45 nm technology. The probabilistic verifiable computation scheme was implemented in C++, and the Bayesian inference model was used to estimate the probability that the results returned from the cloud service are correct. Our proof-of-concept implementation of the Reversible SW-PUF exhibits good uniqueness compared to other types of PUFs, perfect reliability, and acceptable randomness. Finally, we demonstrate our verifiable computation approach on a matrix computation and show that it enables faster verification than existing verification techniques.
Hala Hamadeh, Abdallah Almomani, Akhilesh Tyagi
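The binomial Bayesian estimation step can be sketched with a simple Beta-Binomial posterior: spot-check a sample of returned signatures, and let each pass or fail update the belief that the cloud computed honestly. This is an assumption-laden illustration of the general idea, not the paper's exact model, and the prior parameters are invented.

```python
# Beta-Binomial posterior over the per-check success rate: with a
# Beta(alpha, beta) prior and `passes` successes / `fails` failures,
# the posterior mean of the success probability is given below.

def posterior_honest(passes, fails, alpha=1.0, beta=1.0):
    """Posterior mean of the per-check success rate under a Beta prior."""
    return (alpha + passes) / (alpha + beta + passes + fails)

# 20 verified signatures, all regenerated correctly -> high confidence.
print(round(posterior_honest(passes=20, fails=0), 3))  # 0.955

# A single failed regeneration noticeably lowers the estimate.
print(round(posterior_honest(passes=20, fails=1), 3))  # 0.913
```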

Cloud Service and Platform Selection


Multiplayer Game Backends: A Comparison of Commodity Cloud-Based Approaches

The development of resource-intensive complex distributed systems, such as the backend side of Massively Multiplayer Online Games (MMOGs), has shifted towards cloud-based approaches in recent years. Despite this shift, researchers and developers have mostly utilized proprietary clouds to provide services for such applications, thus leaving the area of commodity clouds largely unexplored. The use of proprietary clouds almost always occurs at the Infrastructure-as-a-Service layer, thereby imposing restrictions on the development of MMOGs. In previous work, we focused on the characteristics of MMOGs, outlining certain factors that prohibit their deployment on commodity clouds. In this paper, we evaluate the suitability of common public cloud platforms for developing and deploying the backend side of MMOGs. In our approach, we implement a simple MMOG over three popular public cloud platforms. Then, we evaluate their performance by measuring the latency of the game over each platform as well as the maximum size of game worlds supported by each approach. Our measurements show that approaches based on the Infrastructure-as-a-Service layer perform better than those based on the Platform-as-a-Service layer, which was expected. However, our results indicate that MMOGs based on the Platform-as-a-Service layer can also perform relatively well and within the bounds of real-time latency. Coupled with accelerated development and lower maintenance costs, Platform-as-a-Service technology paves the way for further development of MMOG-specific Backend-as-a-Service platforms.
Nicos Kasenides, Nearchos Paspallis

Are Cloud Platforms Ready for Multi-cloud?

Multi-cloud computing is gaining momentum as it offers various advantages, including vendor lock-in avoidance, better client proximity, and application performance improvement. As a result, various multi-cloud platforms have been developed, each with its own strengths and limitations. This paper compares these platforms in order to identify the best one and to ease the selection of the right platform based on user requirements and preferences. Further, it identifies the gaps in the current platforms that must be covered to enable the full potential of multi-cloud computing. Finally, it draws directions for further research.
Kyriakos Kritikos, Paweł Skrzypek, Feroz Zahid

Open Access

Identification of Comparison Key Elements and Their Relationships for Cloud Service Selection

Nowadays, the cloud computing industry is enjoying exponential growth, with several cloud service providers competing to be market leaders. Providers offering similar services usually describe them using different non-functional attributes. Thus, given the heterogeneity and diversity of service descriptions, selecting the appropriate cloud service becomes challenging: architects no longer know which criteria to use to select suitable cloud services. In this paper, we highlight the challenge of identifying key elements of comparison and their relationships for selecting cloud services. Further, we propose a methodology to solve this issue based on real data available from service providers and benchmarking work. Our methodology is validated on two case studies: cloud relational databases and cloud queuing services.
Anis Ahmed Nacer, Olivier Perrin, François Charoy

Deployment and Workflows


Deployable Self-contained Workflow Models

Service composition is a popular approach for building software applications from several individual services. Using imperative workflow technologies, service compositions can be specified as workflow models comprising activities that are implemented, e.g., by service calls or scripts. While scripts are typically included in the workflow model itself and can be executed directly by the workflow engine, the required services must be deployed in a separate step. Moreover, to enable their invocation, an additional step is required to configure the workflow model regarding the endpoints of the deployed services, i.e., IP-address, port, etc. However, a manual deployment of services and configuration of the workflow model are complex, time-consuming, and error-prone tasks. In this paper, we present an approach that enables defining service compositions in a self-contained manner using imperative workflow technology. For this, the workflow models can be packaged with all necessary deployment models and software artifacts that implement the required services. As a result, the service deployment in the target environment where the workflow is executed as well as the configuration of the workflow with the endpoint information of the services can be automated completely. We validate the technical feasibility of our approach by a prototypical implementation based on the TOSCA standard and OpenTOSCA.
Benjamin Weder, Uwe Breitenbücher, Kálmán Képes, Frank Leymann, Michael Zimmermann

Open Access

Technology-Agnostic Declarative Deployment Automation of Cloud Applications

Declarative approaches for automating the deployment and configuration management of multi-component applications are on the rise. Many deployment technologies exist, sharing the same baselines for enacting declarative deployments, even if based on different languages for specifying multi-component applications. The Essential Deployment Metamodel (EDMM) Modeling and Transformation Framework makes it possible to specify multi-component applications in a technology-agnostic manner and to automatically generate the technology-specific deployment artifacts for deploying an IaaS-based application. In this paper, we propose an extension of the EDMM Modeling and Transformation Framework to PaaS and SaaS, enabling application components to be deployed on PaaS platforms or implemented by instrumenting SaaS services. Given that not all existing deployment technologies support PaaS and SaaS deployments, we also propose the new EDMM Decision Support Framework, which determines which deployment technologies can be used to deploy an application specified with EDMM.
Michael Wurster, Uwe Breitenbücher, Antonio Brogi, Lukas Harzenetter, Frank Leymann, Jacopo Soldani
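The decision-support step can be pictured as a capability-matching filter: keep only those deployment technologies whose supported component kinds cover every component of the model. The technology names and their capabilities below are invented for illustration and do not reflect the framework's actual data.

```python
# Hypothetical capability matrix: which component kinds (IaaS, PaaS, SaaS)
# each deployment technology can handle. Invented values for illustration.
SUPPORTED = {
    "TechA": {"iaas", "paas"},
    "TechB": {"iaas"},
    "TechC": {"iaas", "paas", "saas"},
}

def usable_technologies(model_components):
    """Technologies whose capabilities cover every component kind used."""
    needed = {c["kind"] for c in model_components}
    return sorted(t for t, caps in SUPPORTED.items() if needed <= caps)

# An application mixing an IaaS-hosted API with a PaaS-hosted database
# rules out the IaaS-only technology.
app = [{"name": "db", "kind": "paas"}, {"name": "api", "kind": "iaas"}]
print(usable_technologies(app))  # ['TechA', 'TechC']
```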

Blockchain-Based Healthcare Workflows in Federated Hospital Clouds

Nowadays, security is one of the biggest concerns hindering the wide adoption of on-demand Cloud services. Specifically, one of the major challenges in many application domains is the certification of exchanged data. For these reasons, since the advent of Bitcoin and smart contracts in 2009 and 2015, respectively, healthcare has been one of the major sectors in which Blockchain has been studied. In this paper, by exploiting the intrinsic security features of Blockchain technology, we propose a Software as a Service (SaaS) that enables a hospital Cloud to establish a federation with others in order to arrange a virtual healthcare team, including doctors from different federated hospitals who cooperate to carry out a healthcare workflow. Experiments conducted on a prototype implemented by means of the Ethereum platform show that the overhead introduced by the Blockchain is acceptable considering the obvious advantages gained in terms of security.
Armando Ruggeri, Maria Fazio, Antonio Celesti, Massimo Villari



Monitoring Behavioral Compliance with Architectural Patterns Based on Complex Event Processing

Architectural patterns assist in the process of architectural decision making, as they capture architectural aspects of proven solutions. In many cases, the chosen patterns have system-wide implications on non-functional requirements such as availability, performance, and resilience. Ensuring compliance with the selected patterns is of vital importance to avoid architectural drift between the implementation and its desired architecture. Most patterns capture not only structural but also significant behavioral architectural aspects that need to be checked. If all properties of the system are known before runtime, static compliance checks of application code and configuration files might be sufficient. However, if aspects of the system evolve dynamically, e.g., due to manual reconfiguration, compliance with the architectural patterns also needs to be monitored at runtime. In this paper, we propose linking compliance rules to architectural patterns; these rules specify behavioral aspects of the patterns based on runtime events using stream queries. The queries serve as input for a complex event processing component to automatically monitor the architectural compliance of a running system. To validate practical feasibility, we applied the approach to a set of architectural patterns in the domain of distributed systems and prototypically implemented a compliance monitor.
Christoph Krieger, Uwe Breitenbücher, Michael Falkenthal, Frank Leymann, Vladimir Yussupov, Uwe Zdun
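A behavioral compliance rule of this kind can be sketched as a stateful check over a runtime event stream. The example below encodes a rule in the spirit of the Circuit Breaker pattern: after three consecutive call failures, no further call may be emitted until a reset event is observed. The rule, event names, and threshold are our own illustration, not the paper's rule set or query language.

```python
# Stream-based compliance check: scan the event stream, track the breaker
# state, and record the index of every call that violates the rule.

def check_circuit_breaker(events, threshold=3):
    failures, open_circuit, violations = 0, False, []
    for i, ev in enumerate(events):
        if ev == "call" and open_circuit:
            violations.append(i)        # call emitted despite open circuit
        elif ev == "failure":
            failures += 1
            if failures >= threshold:
                open_circuit = True     # breaker trips
        elif ev == "success":
            failures = 0                # consecutive-failure count resets
        elif ev == "reset":
            failures, open_circuit = 0, False

    return violations

stream = ["call", "failure", "failure", "failure", "call", "reset", "call"]
print(check_circuit_breaker(stream))  # [4]: the call at index 4 violates
```

A CEP engine would express the same logic declaratively as a stream query; the imperative scan above just makes the detected violation explicit.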

Towards Real-Time Monitoring of Data Centers Using Edge Computing

Introducing the Internet of Things paradigm to data centers enables real-time monitoring on a scale that has not been seen before. Real-time monitoring promises to reduce a data center’s operational costs and increase energy savings. As data centers can house over a hundred thousand servers, the potential number of data points that can be collected every minute is in the order of hundreds of millions. In this work-in-progress paper, we investigate the impact that real-time monitoring of data centers has on the network infrastructure, and demonstrate that the impact is indeed significant enough to disrupt the data center’s network. We therefore propose a preliminary solution based on edge computing that minimizes the load on the network when performing real-time monitoring of a data center.
Brian Setz, Marco Aiello

Modeling Users’ Performance: Predictive Analytics in an IoT Cloud Monitoring System

We explore the feasibility of predictive modeling, combined with the support of a suitably defined IoT Cloud infrastructure, to assess and report relative performances for user-specific settings during a bike trial. The matter is addressed by introducing a suitable dynamical system whose state variables are the so-called origin-destination (OD) flow deviations, obtained from prior estimates based on historical data recorded by mobile sensors installed directly in each bike through fast real-time processing of big traffic data. We then use Kalman filter theory to dynamically update an assignment matrix in this context and gain information about usual routes and distances. This leads us to a dynamic ranking system for the users of the bike trial community, making the award procedure more transparent.
Rosa Di Salvo, Antonino Galletta, Orlando Marco Belcore, Massimo Villari
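The Kalman-filter update at the heart of this approach can be illustrated in one dimension: fuse a prior estimate of an OD flow deviation with a noisy sensor measurement. The paper's actual state is a vector of OD deviations coupled through an assignment matrix; the scalar reduction and the numbers below are ours, purely for illustration.

```python
# Scalar Kalman filter measurement update: the gain weighs the prior
# against the measurement according to their variances.

def kalman_update(x_prior, p_prior, z, r):
    """Fuse a prior deviation estimate with a noisy flow measurement z."""
    k = p_prior / (p_prior + r)           # Kalman gain
    x_post = x_prior + k * (z - x_prior)  # corrected deviation estimate
    p_post = (1 - k) * p_prior            # reduced posterior variance
    return x_post, p_post

# Prior: no deviation expected, variance 4; a sensor reports a deviation
# of +10 trips with measurement noise variance 1.
x, p = kalman_update(x_prior=0.0, p_prior=4.0, z=10.0, r=1.0)
print(x, round(p, 3))  # 8.0 0.8
```

Because the measurement is four times more precise than the prior, the gain is 0.8 and the posterior sits close to the measurement, with much smaller variance.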

Data Distribution and Analytics


Multi-source Distributed System Data for AI-Powered Analytics

The emerging field of Artificial Intelligence for IT Operations (AIOps) utilizes monitoring data, big data platforms, and machine learning to automate operations and maintenance (O&M) tasks in complex IT systems. The available research data usually contain only a single source of information, often logs or metrics. The inability of single-source data to describe the precise state of distributed systems leads to methods that fail to make effective use of the joint information, thus producing a large number of false predictions. Therefore, current data limit the possibilities for greater advances in AIOps research. To overcome these constraints, we created a complex distributed system testbed that generates multi-source data composed of distributed traces, application logs, and metrics. This paper provides detailed descriptions of the infrastructure, testbed, experiments, and statistics of the generated data. Furthermore, it identifies how such data can be utilized as a stepping stone for the development of novel methods for O&M tasks such as anomaly detection, root cause analysis, and remediation.
The data from the testbed and its code are available at https://zenodo.org/record/3549604.
Sasho Nedelkoski, Jasmin Bogatinovski, Ajay Kumar Mandapati, Soeren Becker, Jorge Cardoso, Odej Kao

Blockchain- and IPFS-Based Data Distribution for the Internet of Things

Distributing data in a tamper-proof and traceable way is a necessity in many Internet of Things (IoT) scenarios. Blockchain technologies are frequently named as an approach to provide such functionality. Despite this, there is a lack of concrete solutions which integrate the IoT with the blockchain for data distribution purposes.
Within this paper, we present a middleware which connects to IoT devices and uses a blockchain to distribute IoT data with guaranteed integrity. Furthermore, the middleware can also distribute data in real time via a second channel. We implement our solution using the Ethereum blockchain and the InterPlanetary File System (IPFS).
Simon Krejci, Marten Sigwart, Stefan Schulte
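The combination of blockchain and IPFS described above typically follows the off-chain storage, on-chain anchor pattern: payloads live in a content-addressed store, while only their content identifiers are recorded in an append-only log. The sketch below uses simplified in-memory stand-ins for both systems; the class and function names are invented and do not reflect the middleware's actual API.

```python
import hashlib
import json

class ContentStore:
    """Stand-in for IPFS: blobs addressed by the hash of their content."""
    def __init__(self):
        self.blobs = {}

    def add(self, payload: bytes) -> str:
        cid = hashlib.sha256(payload).hexdigest()
        self.blobs[cid] = payload
        return cid

class Ledger:
    """Stand-in for the blockchain: an append-only list of anchors."""
    def __init__(self):
        self.entries = []

    def anchor(self, device_id: str, cid: str):
        self.entries.append((device_id, cid))

def publish(store, ledger, device_id, reading):
    """Store the reading off-chain and anchor its identifier on-chain."""
    payload = json.dumps(reading, sort_keys=True).encode()
    cid = store.add(payload)
    ledger.anchor(device_id, cid)
    return cid

def verify(store, cid):
    """Integrity check: the blob must hash back to its anchored CID."""
    return hashlib.sha256(store.blobs[cid]).hexdigest() == cid

store, ledger = ContentStore(), Ledger()
cid = publish(store, ledger, "sensor-42", {"temp": 21.5})
print(verify(store, cid))  # True
```

Tampering with the stored blob would change its hash and be caught by `verify`, which is the integrity guarantee the on-chain anchor provides; a second, direct channel can then deliver the same data in real time without waiting for block confirmation.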

