About this book

This book constitutes the refereed proceedings of the 21st IFIP WG 6.1 International Conference on Distributed Applications and Interoperable Systems, DAIS 2021, held in Valletta, Malta, in June 2021, as part of the 16th International Federated Conference on Distributed Computing Techniques, DisCoTec 2021.

The 7 regular papers and 3 short papers presented in this book were carefully reviewed and selected from 15 submissions.

DAIS addresses all practical and conceptual aspects of distributed applications, including their design, modeling, implementation and operation, the supporting middleware, appropriate software engineering methodologies and tools, as well as experimental studies and applications.

Table of Contents

Frontmatter

Cloud and Fog Computing

Frontmatter

A Methodology for Tenant Migration in Legacy Shared-Table Multi-tenant Applications

Abstract
Multi-tenancy enables cost-effective SaaS through resource consolidation. Multiple customers, or tenants, are served by a single application instance, and isolation is enforced at the application level. Service load for different tenants can vary over time, requiring applications to scale in and out. A large class of SaaS providers operates legacy applications structured around a relational (SQL) database. These applications achieve tenant isolation through dedicated fields in their relational schema and are not designed to support scaling operations. We present a novel solution for scaling in or out such applications through the migration of a tenant’s data to new application and database instances. Our solution requires no change to the application and incurs no service downtime for non-migrated tenants. It leverages external tables and foreign data wrappers, as supported by major relational databases. We evaluate the approach using two multi-tenant applications: Iomad, an extension of the Moodle Learning Management System, and Camunda, a business process management platform. Our results show the usability of the method, minimally impacting performance for other tenants during migration and leading to increased service capacity after migration.
Guillaume Rosinosky, Samir Youcef, François Charoy, Etienne Rivière
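
As a rough illustration of the building block the abstract refers to (external tables and foreign data wrappers), and not of the paper's actual migration procedure, the following Python sketch issues the standard PostgreSQL postgres_fdw statements that expose a remote instance's tables on the source database. Host names, credentials, and schema names are placeholders.

```python
# Minimal illustration of PostgreSQL foreign data wrappers (postgres_fdw):
# tables of a remote target instance become queryable on the source database
# as local "foreign" tables. All names and credentials below are made up.
import psycopg2

DDL = """
CREATE EXTENSION IF NOT EXISTS postgres_fdw;

CREATE SERVER IF NOT EXISTS target_instance
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'target-db.example.org', port '5432', dbname 'saas');

CREATE USER MAPPING IF NOT EXISTS FOR CURRENT_USER
    SERVER target_instance
    OPTIONS (user 'migrator', password 'secret');

CREATE SCHEMA IF NOT EXISTS remote_public;

-- Expose the remote schema locally; queries against these foreign tables
-- are forwarded to the target instance transparently.
IMPORT FOREIGN SCHEMA public
    FROM SERVER target_instance INTO remote_public;
"""

def setup_foreign_tables(dsn: str) -> None:
    """Run the FDW setup statements on the source database."""
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            cur.execute(DDL)

if __name__ == "__main__":
    setup_foreign_tables("dbname=saas user=migrator host=source-db.example.org")
```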

Network Federation for Inter-cloud Operations

Abstract
This paper introduces the NetFed network federation approach, which enables inter-cloud operations on the basis of shared overlay networks. This makes it possible to match particular application needs to the most suitable infrastructure provider. The agent-based federation approach utilizes WireGuard and GRE to deliver a flexible and transparent layer 2 or layer 3 overlay network while maintaining data integrity and confidentiality. The evaluation shows that our prototype can be deployed to arbitrary cloud platforms and incurs low traffic and processing overhead.
Johannes Köstler, Sven Gebauer, Hans P. Reiser
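
As a hedged illustration of the kind of overlay link the abstract mentions, and not of NetFed's agent logic, the sketch below generates a plain WireGuard point-to-point configuration for two nodes hosted at different providers. All keys, addresses, and endpoints are placeholders.

```python
# Generate a wg-quick configuration for one end of a layer-3 WireGuard tunnel
# between two cloud VMs. Keys and endpoints below are placeholders.
from textwrap import dedent

def wg_config(private_key: str, address: str, listen_port: int,
              peer_public_key: str, peer_endpoint: str,
              peer_allowed_ips: str) -> str:
    return dedent(f"""\
        [Interface]
        PrivateKey = {private_key}
        Address = {address}
        ListenPort = {listen_port}

        [Peer]
        PublicKey = {peer_public_key}
        Endpoint = {peer_endpoint}
        AllowedIPs = {peer_allowed_ips}
        PersistentKeepalive = 25
        """)

if __name__ == "__main__":
    # Node A in provider 1; its peer is node B in provider 2.
    print(wg_config("<node-a-private-key>", "10.10.0.1/24", 51820,
                    "<node-b-public-key>", "198.51.100.7:51820",
                    "10.10.0.2/32"))
```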

SpecK: Composition of Stream Processing Applications over Fog Environments

Abstract
Stream Processing (SP), i.e., the processing of data in motion, as soon as it becomes available, is a hot topic in cloud computing. Various SP stacks exist today, with applications ranging from IoT analytics to processing of payment transactions. The backbone of said stacks are Stream Processing Engines (SPEs), software packages offering a high-level programming model and scalable execution of data stream processing pipelines. SPEs have been traditionally developed to work inside a single datacenter, and optimised for speed. With the advent of Fog computing, however, the processing of data streams needs to be carried out over multiple geographically distributed computing sites: data is typically pre-processed close to where it is generated, then aggregated at intermediate nodes, and finally globally and persistently stored in the Cloud. SPEs were not designed to address these new scenarios. In this paper, we argue that large-scale Fog-based stream processing should rely on the coordinated composition of geographically dispersed SPE instances. We propose an architecture based on the composition of multiple SPE instances and their communication via distributed message brokers. We introduce SpecK, a tool to automate the deployment and adaptation of pipelines over a Fog computing platform. Given a description of the pipeline, SpecK covers all the operations needed to deploy a stream processing computation over the different SPE instances targeted, using their own APIs and establishing the required communication channels to forward data among them. A prototypical implementation of SpecK is presented, and its performance is evaluated over Grid’5000, a large-scale, distributed experimental facility.
Davaadorj Battulga, Daniele Miorandi, Cédric Tedeschi

Fault Tolerance and Big Data

Frontmatter

ASPAS: As Secure as Possible Available Systems

Abstract
Available-Partition-tolerant (AP) geo-replicated systems trade consistency for availability. They allow replicas to serve clients’ requests without prior synchronization. Potential conflicts due to concurrent operations can then be resolved using a conflict resolution mechanism, provided that operations are commutative and execution is deterministic. However, a Byzantine replica can diverge from deterministic execution of operations and break convergence. In this paper, we introduce ASPAS, an As-Secure-as-Possible highly Available System that is a Byzantine-resilient AP system. ASPAS follows an optimistic approach to maintain a single round-trip response time. It then allows the detection of Byzantine replicas in the background, i.e., off the critical path of client requests. Our empirical evaluation of ASPAS in a geo-replicated setting shows that its latency in the normal case is close to that of an AP system, and one order of magnitude better than that of classical BFT protocols that provide stronger (total ordering) guarantees, which are unnecessary in AP systems.
Houssam Yactine, Ali Shoker, Georges Younes
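
The abstract relies on the property that commutative, deterministic operations converge without coordination. As a generic, hedged illustration of that property, and not of the ASPAS protocol itself, the following Python sketch shows a grow-only counter CRDT whose merge function is commutative, associative, and idempotent, so replicas converge regardless of delivery order.

```python
# Toy grow-only counter CRDT: each replica tracks per-replica increments,
# and merging takes the element-wise maximum.
from collections import defaultdict

class GCounter:
    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.counts = defaultdict(int)

    def increment(self, n: int = 1) -> None:
        self.counts[self.replica_id] += n

    def merge(self, other: "GCounter") -> None:
        # Commutative and idempotent: order and repetition do not matter.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts[rid], c)

    def value(self) -> int:
        return sum(self.counts.values())

a, b = GCounter("a"), GCounter("b")
a.increment(2); b.increment(3)
a.merge(b); b.merge(a)
assert a.value() == b.value() == 5   # both replicas converge
```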

Portable Intermediate Representation for Efficient Big Data Analytics

Abstract
To process big data, applications have relied in recent years on data processing libraries that are, however, not optimized to work together for efficient processing. Intermediate Representations (IRs) have been introduced to unify essential functions into an abstract interface that supports cross-optimization between applications. Still, the efficiency of an IR depends on the architecture and the tools required for compilation and execution. In this paper, we present a first glance at a framework that provides an IR by creating containers with executable code from structures of data analytics functions described in an input grammar. These containers process data in query lists, and they can be executed either standalone or integrated with other big data analytics applications without the need to compile the entire framework.
Giannis Tzouros, Michail Tsenos, Vana Kalogeraki

Distributed Algorithms

Frontmatter

Shared-Dining: Broadcasting Secret Shares Using Dining-Cryptographers Groups

Abstract
We introduce a combination of Shamir’s secret sharing and dining-cryptographers networks, which provides (n − |attackers|)-anonymity for up to k − 1 attackers and has a manageable performance impact on dissemination. A k-anonymous broadcast can be implemented using a small group of dining cryptographers to first share the message, followed by a flooding phase started by group members. Members have little incentive to forward the message in a timely manner, as forwarding incurs costs, or they may even profit from keeping the message. In the worst case, this leaves the true originator as the only sender, rendering the dining-cryptographers phase useless and compromising their privacy. We present a novel approach that uses a modified dining-cryptographers protocol to distribute shares of an (n, k)-Shamir’s secret sharing scheme. All group members broadcast their received share through the network, allowing any recipient of k shares to reconstruct the message and enforcing anonymity. If fewer than k group members broadcast their shares, the message cannot be decoded, thus preventing privacy breaches for the originator. We demonstrate the privacy and performance results in a security analysis and a performance evaluation based on a proof-of-concept prototype. Throughput rates between 10 and 100 kB/s are enough for many real applications with high privacy requirements, e.g., financial blockchain systems.
David Mödinger, Juri Dispan, Franz J. Hauck
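
As a hedged illustration of the sharing primitive the abstract builds on, and not of the modified dining-cryptographers phase or the flooding phase, the following Python sketch implements a minimal (n, k)-Shamir scheme in which any k shares reconstruct the secret.

```python
# Minimal (n, k)-Shamir secret sharing over a prime field: any k of the n
# shares reconstruct the secret; fewer shares reveal nothing about it.
import secrets

P = 2**127 - 1  # a Mersenne prime, large enough for small integer secrets

def make_shares(secret: int, n: int, k: int) -> list[tuple[int, int]]:
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=424242, n=5, k=3)
assert reconstruct(shares[:3]) == 424242   # any 3 shares suffice
assert reconstruct(shares[2:5]) == 424242
```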

UCBFed: Using Reinforcement Learning Method to Tackle the Federated Optimization Problem

Abstract
Federated learning is a novel research area of AI technology that focuses on distributed training and privacy preservation. Current federated optimization algorithms face serious challenges in terms of speed and accuracy, especially in non-i.i.d. scenarios. In this work, we propose UCBFed, a federated optimization algorithm that uses the Upper Confidence Bound (UCB) method to heuristically select participating clients in each round’s optimization process. We evaluate our algorithm on multiple federated distributed datasets. Compared to the most widely used algorithms, FedAvg and FedOpt, the proposed UCBFed is superior in both final accuracy and communication efficiency.
Wanqi Chen, Xin Zhou
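
As a hedged sketch of the general UCB idea mentioned in the abstract (the exact reward definition and update rule of UCBFed are not reproduced here), the following Python class selects, each round, the clients with the highest upper confidence bound on their observed reward, e.g., a local accuracy gain.

```python
# Generic UCB1-style client selector for federated rounds.
import math

class UCBClientSelector:
    def __init__(self, num_clients: int, c: float = 2.0):
        self.counts = [0] * num_clients     # times each client was selected
        self.rewards = [0.0] * num_clients  # running mean reward per client
        self.t = 0                          # total selections so far
        self.c = c                          # exploration weight

    def select(self, m: int) -> list[int]:
        """Pick the m clients with the highest UCB score for the next round."""
        def score(i: int) -> float:
            if self.counts[i] == 0:
                return float("inf")         # explore unseen clients first
            bonus = math.sqrt(self.c * math.log(self.t) / self.counts[i])
            return self.rewards[i] + bonus
        return sorted(range(len(self.counts)), key=score, reverse=True)[:m]

    def update(self, client: int, reward: float) -> None:
        """Record the reward observed for a client after a training round."""
        self.t += 1
        self.counts[client] += 1
        self.rewards[client] += (reward - self.rewards[client]) / self.counts[client]
```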

Trusted Environments

Frontmatter

KeVlar-Tz: A Secure Cache for Arm TrustZone

(Practical Experience Report)
Abstract
Edge devices are increasingly in charge of storing privacy-sensitive data; in particular, implantables, wearables, and nearables can potentially collect and process high-resolution vital signs 24/7. Storing and performing computations over such data in a privacy-preserving fashion is of paramount importance. We present KeVlar-Tz, an application-level trusted cache designed to leverage Arm TrustZone, a popular trusted execution environment available in consumer-grade devices. To facilitate integration with existing systems, IoT devices, and protocols, KeVlar-Tz exposes a REST-based interface with connection endpoints inside the TrustZone enclave. Furthermore, it exploits the on-device secure persistent storage to guarantee durability of data across reboots. We fully implemented KeVlar-Tz on top of the Op-Tee framework and experimentally evaluated its performance. Our results showcase performance trade-offs, for instance in terms of throughput and latency, for various workloads, and we believe they can be useful for practitioners and, more generally, developers of systems for TrustZone. KeVlar-Tz is available as open source at https://github.com/mqttz/kevlar-tz/.
Oscar Benedito, Ricard Delgado-Gonzalo, Valerio Schiavoni

Analysis and Improvement of Heterogeneous Hardware Support in Docker Images

Abstract
Docker images are used to distribute and deploy cloud-native applications in containerised form. A container engine runs them with privileges separated according to namespaces. Recent studies have investigated security vulnerabilities and runtime characteristics of Docker images. In contrast, little is known about the extent of hardware-dependent features in them, such as processor-specific trusted execution environments, graphics acceleration, or extension boards. This problem can be generalised to missing knowledge about the extent of any hardware-bound instructions within the images that may require elevated privileges. We first conduct a systematic one-year evolution analysis of a sample of Docker images concerning their use of hardware-specific features. To improve the state of technology, we contribute novel tools to manage such images. Our heuristic hardware-dependency detector and our hardware-aware Docker executor hdocker give early warnings upon missing dependencies instead of leading to silent or untimely failures. Our dataset and tools are released to the research community.
Panagiotis Gkikopoulos, Valerio Schiavoni, Josef Spillner

Invited Paper

Frontmatter

Simulation of Large Scale Computational Ecosystems with Alchemist: A Tutorial

Abstract
Many interesting systems in several disciplines can be modeled as networks of nodes that can store and exchange data: pervasive systems, edge computing scenarios, and even biological and bio-inspired systems. These systems feature inherent complexity, and often simulation is the preferred (and sometimes the only) way of investigating their behavior; this is true both in the design phase and in the verification and testing phase. In this tutorial paper, we provide a guide to the simulation of such systems by leveraging Alchemist, an existing research tool used in several works in the literature. We introduce its meta-model and its extensible architecture; we discuss reference examples of increasing complexity; and we finally show how to configure the tool to automatically execute multiple repetitions of simulations with different controlled variables, achieving reliable and reproducible results.
Danilo Pianini

Backmatter
