
About this Book

This book constitutes the proceedings of the 14th IFIP International Conference on Distributed Applications and Interoperable Systems, DAIS 2014, held in Berlin, Germany, in June 2014. The 12 papers presented in this volume were carefully reviewed and selected from 53 submissions. They deal with cloud computing, replicated storage, and large-scale systems.

Table of Contents

Frontmatter

A Risk-Based Model for Service Level Agreement Differentiation in Cloud Market Providers

Abstract
Cloud providers may not always fulfil the Service Level Agreements with their clients because of outages in the data centre or inaccurate resource provisioning. Minimizing the Probability of Failure of the tasks that are allocated within a Cloud Infrastructure can be economically infeasible, because overprovisioning resources increases the cost. This paper aims to increase the fulfilment rate of Service Level Agreements on the infrastructure provider side while maximizing economic efficiency, by considering risk in the decision process. We introduce a risk model based on graph analysis for risk propagation, and we model it economically to offer three levels of risk to clients: moderate risk, low risk, and very low risk. The client may choose the risk level of the service and pay proportionally: the lower the risk, the higher the price.
Mario Macías, Jordi Guitart
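
To make the pricing intuition concrete, here is a minimal Python sketch of how a provider might scale prices across the three risk levels. The tier probabilities and the inverse-proportional rule are illustrative assumptions, not the paper's economic model.

    # Illustrative only: toy risk tiers and an inverse-proportional price rule;
    # the probabilities and the scaling are assumptions, not the paper's model.
    RISK_TIERS = {            # accepted probability of SLA violation per tier
        "moderate": 0.10,
        "low":      0.05,
        "very_low": 0.01,
    }

    def price(base_price: float, tier: str) -> float:
        # Lower accepted risk -> higher price, normalised to the moderate tier.
        return base_price * (RISK_TIERS["moderate"] / RISK_TIERS[tier])

    for tier in RISK_TIERS:
        print(tier, price(100.0, tier))  # moderate 100.0, low 200.0, very_low 1000.0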

Adaptive and Scalable High Availability for Infrastructure Clouds

Abstract
Infrastructure-as-a-Service (IaaS) clouds attract more and more customers with their flexibility and scalability, and even critical data and applications are now considered for cloud deployments. However, this demands resilient cloud infrastructures.
Hence, this paper approaches adaptive and scalable high availability for IaaS clouds. First, we provide a detailed failure analysis of OpenStack showing the impact of failing services on the cloud infrastructure. Second, we analyse existing approaches for making OpenStack highly available and pinpoint several weaknesses of current best practice. Finally, we propose and evaluate improvements to automate failover mechanisms.
Stefan Brenner, Benjamin Garbers, Rüdiger Kapitza

Trust-Aware Operation of Providers in Cloud Markets

Abstract
Online Reputation Systems allow markets to exclude providers that are untrustworthy or unreliable. System failures and outages may decrease the reputation of honest providers, causing them to lose potential clients. For that reason, providers require trust-aware management policies aimed at retaining their reputation when unexpected failures occur. This paper proposes policies for operating cloud resources that minimise the impact of system failures on reputation. On the one hand, we discriminate between clients under conflicting situations to favour those that would impact the provider's reputation most positively. On the other hand, we analyse the impact of management actions on the provider's reputation and revenue in order to select those with the least impact when an actuation is required. The validity of these policies is demonstrated through experiments for various use cases.
Mario Macías, Jordi Guitart

Scaling HDFS with a Strongly Consistent Relational Model for Metadata

Abstract
The Hadoop Distributed File System (HDFS) scales to store tens of petabytes of data despite the fact that the entire file system’s metadata must fit on the heap of a single Java virtual machine. The size of HDFS’ metadata is limited to under 100 GB in production, as garbage collection events in bigger clusters result in heartbeats timing out to the metadata server (NameNode).
In this paper, we address the problem of migrating HDFS' metadata to a relational model, so that we can support larger amounts of storage on a shared-nothing, in-memory, distributed database. Our main contribution is to show how to provide consistency semantics at least as strong as HDFS' while adding support for a multiple-writer, multiple-reader concurrency model. We guarantee freedom from deadlocks by logically organizing inodes (and their constituent blocks and replicas) into a hierarchy and having all metadata operations agree on a global order for acquiring both explicit locks and implicit locks on subtrees in the hierarchy. We use transactions with pessimistic concurrency control to ensure the safety and progress of metadata operations. Finally, we show how to improve the performance of our solution by introducing a snapshotting mechanism at NameNodes that minimizes the number of roundtrips to the database.
Kamal Hakimzadeh, Hooman Peiro Sajjad, Jim Dowling
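
The deadlock-freedom argument can be illustrated with a small Python sketch: if every metadata operation acquires locks on the inodes it touches in one agreed-upon global order, no cyclic wait can form. The in-memory inode class and id-based order are simplifying assumptions; the paper's system takes pessimistic locks inside database transactions.

    import threading

    # Simplified sketch: deadlock freedom via a single global acquisition order.
    # The Inode class and id-based ordering here are illustrative assumptions;
    # the actual system locks rows inside database transactions.
    class Inode:
        def __init__(self, inode_id: int, name: str):
            self.id = inode_id              # ids reflect the agreed global order
            self.name = name
            self.lock = threading.Lock()

    def run_with_locks(inodes, operation):
        # Every operation sorts its lock set the same way, so two operations
        # can never each hold a lock the other is waiting for (no cycles).
        ordered = sorted(inodes, key=lambda inode: inode.id)
        for inode in ordered:
            inode.lock.acquire()
        try:
            return operation()
        finally:
            for inode in reversed(ordered):
                inode.lock.release()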

Distributed Exact Deduplication for Primary Storage Infrastructures

Abstract
Deduplication of primary storage volumes in a cloud computing environment is increasingly desirable, as the resulting space savings contribute to the cost effectiveness of a large-scale multi-tenant infrastructure. However, traditional archival and backup deduplication systems impose prohibitive overhead for latency-sensitive applications deployed on these infrastructures, while current primary deduplication systems rely on special cluster filesystems, centralized components, or restrictive workload assumptions.
We present DEDIS, a fully-distributed and dependable system that performs exact and cluster-wide background deduplication of primary storage. DEDIS does not depend on data locality and works on top of any storage backend, however unsophisticated, centralized or distributed, that exports a basic shared block device interface. The evaluation of an open-source prototype shows that DEDIS scales out and adds negligible overhead even when deduplication and intensive storage I/O run simultaneously.
João Paulo, José Pereira
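
The core space-saving idea, exact deduplication by content hashing, can be sketched in a few lines of Python. The class below is a toy single-node illustration and deliberately ignores DEDIS's distribution, background operation, and fault tolerance.

    import hashlib

    # Toy single-node sketch of exact deduplication by content hashing.
    # DEDIS performs this in the background across a distributed block store;
    # none of that machinery is captured here.
    class DedupStore:
        def __init__(self):
            self.blocks = {}      # digest -> unique block payload
            self.refcount = {}    # digest -> number of logical references

        def write(self, data: bytes) -> str:
            digest = hashlib.sha256(data).hexdigest()
            if digest not in self.blocks:      # each unique block stored once
                self.blocks[digest] = data
            self.refcount[digest] = self.refcount.get(digest, 0) + 1
            return digest                      # volumes reference the digest

        def read(self, digest: str) -> bytes:
            return self.blocks[digest]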

Scalable and Accurate Causality Tracking for Eventually Consistent Stores

Abstract
In cloud computing environments, data storage systems often rely on optimistic replication to provide good performance and availability even in the presence of failures or network partitions. In this scenario, it is important to be able to accurately and efficiently identify updates executed concurrently. Current approaches to causality tracking in optimistic replication have problems with concurrent updates: they either (1) do not scale, as they require replicas to maintain information that grows linearly with the number of writes or unique clients; or (2) lose information about causality, either by removing entries from client-id based version vectors or by using server-id based version vectors, which causes false conflicts. We propose a new logical clock mechanism and a logical clock framework that together support a traditional key-value store API while capturing causality in an accurate and scalable way that avoids false conflicts. The mechanism maintains concise information per data replica, linear only in the number of replica servers, and allows data replicas to be compared and merged in time linear in the number of replica servers and versions.
Paulo Sérgio Almeida, Carlos Baquero, Ricardo Gonçalves, Nuno Preguiça, Victor Fonte
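
For background, the sketch below shows the classic version-vector operations whose shortcomings the paper addresses; the clock mechanism actually proposed in the paper is more elaborate and is not reproduced here.

    # Background sketch: classic version-vector causality (maps from server id
    # to counter). This is the baseline the paper improves on, not its new clock.
    def descends(a: dict, b: dict) -> bool:
        # True if clock a causally dominates clock b.
        return all(a.get(server, 0) >= n for server, n in b.items())

    def concurrent(a: dict, b: dict) -> bool:
        # Two updates conflict only if neither clock dominates the other.
        return not descends(a, b) and not descends(b, a)

    def merge(a: dict, b: dict) -> dict:
        # Pointwise maximum: the least clock that dominates both inputs.
        return {s: max(a.get(s, 0), b.get(s, 0)) for s in a.keys() | b.keys()}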

Cooperation across Multiple Healthcare Clinics on the Cloud

Abstract
Many healthcare units are creating cloud strategies and migration plans in order to exploit the benefits of cloud-based computing. This generally involves collaboration between healthcare specialists and data management researchers to create a new wave of healthcare technology and services. However, in many cases the technology pioneers are ahead of government policies, as cloud-based storage of healthcare data is not yet permissible in many jurisdictions. One approach is to store anonymised data on the cloud and maintain all identifying data locally. At login time, a simple protocol can allow clinicians to combine both sets of data for selected patients for the current session. However, the management of off-cloud identifying data requires a framework that ensures sharing and availability of data within clinics and the ability to share data between users in remote clinics. In this paper, we introduce the PACE healthcare architecture, which uses a combination of Cloud and Peer-to-Peer technologies to model healthcare units or clinics where off-cloud data is accessible to all, and where exchange of data between remote healthcare units is also facilitated.
Neil Donnelly, Kate Irving, Mark Roantree
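
A minimal sketch of the data split described above, with hypothetical field names and pseudonym keys: anonymised records live on the cloud, identifying data stays in the clinic, and the two are joined only for the patients selected in the current session.

    # Hypothetical sketch of the on/off-cloud split; all names and fields are
    # invented for illustration and are not PACE's actual schema.
    cloud_records = {     # pseudonym -> anonymised clinical data (cloud-side)
        "p-7f3a": {"diagnosis": "hypertension", "last_visit": "2014-03-02"},
    }
    local_identities = {  # pseudonym -> identifying data (kept off-cloud)
        "p-7f3a": {"name": "Jane Doe", "dob": "1950-11-23"},
    }

    def open_session(pseudonyms):
        # Combine both halves for the selected patients, for this session only.
        return {p: {**local_identities[p], **cloud_records[p]} for p in pseudonyms}

    print(open_session(["p-7f3a"]))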

Behave: Behavioral Cache for Web Content

Abstract
We propose Behave: a novel approach for peer-to-peer cache-oriented applications such as CDNs. Behave relies on the principle of behavioral locality, inspired by collaborative filtering: users that have visited similar websites in the past will have local caches that provide interesting content for one another.
Behave exploits epidemic protocols to build overlapping communities of peers with similar interests. Peers in the same one-hop community federate their cache indexes in a Behavioral cache. Extensive simulations on a real data trace show that Behave can provide zero-hop lookup latency for about 50% of the content available in a DHT-based CDN.
Davide Frey, Mathieu Goessens, Anne-Marie Kermarrec
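
Behavioral locality can be illustrated with a simple similarity score over browsing histories. The Jaccard measure below is an assumed stand-in; the epidemic protocol that actually builds the communities is not shown.

    # Sketch of ranking peers by behavioural similarity. Jaccard overlap is an
    # assumed similarity measure; the gossip-based overlay is not modelled.
    def jaccard(history_a: set, history_b: set) -> float:
        if not (history_a or history_b):
            return 0.0
        return len(history_a & history_b) / len(history_a | history_b)

    def best_neighbours(my_history: set, peers: dict, k: int) -> list:
        # peers: peer id -> that peer's set of visited URLs.
        ranked = sorted(peers, key=lambda p: jaccard(my_history, peers[p]),
                        reverse=True)
        return ranked[:k]   # federate cache indexes with the top-k peers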

Implementing the WebSocket Protocol Based on Formal Modelling and Automated Code Generation

Abstract
Model-based software engineering offers several attractive benefits for the implementation of protocols, including automated code generation for different platforms from design-level models. In earlier work, we proposed a template-based approach that uses Coloured Petri Net formal models with pragmatic annotations for automated code generation of protocol software. The contribution of this paper is an application of the approach, as implemented in the PetriCode tool, to obtain protocol software implementing the IETF WebSocket protocol. This demonstrates that our approach scales to real protocols. Furthermore, we perform formal verification of the CPN model prior to code generation, and test the implementation for interoperability against the Autobahn WebSocket test suite, obtaining 97% and 99% success rates for the client and server implementations, respectively. The tests show that the failures were mostly due to local and trivial errors in newly written code-generation templates, and not related to the overall logical operation of the protocol as specified by the CPN model.
Kent Inge Fagerland Simonsen, Lars Michael Kristensen

GreenBrowsing: Towards Energy Efficiency in Browsing Experience

Abstract
Web 2.0 allowed for the enhancement and revamp of web pages' aesthetics and interaction mechanics. Moreover, current web browsers function almost as a de facto operating system: they run "apps" along with other background plug-ins, all of which have an increasing energy impact, proportional to the rate of appearance of ever more sophisticated browser mechanisms and web content. We present the architecture of GreenBrowsing, a system that provides (i) a Google Chrome extension to monitor, rationalize and reduce the energy consumption of the browsing experience and (ii) a certification scheme for dynamic web pages, based on web-page performance counter statistics and analysis performed on the cloud.
Gonçalo Avelar, Luís Veiga

Making Operation-Based CRDTs Operation-Based

Abstract
Conflict-free Replicated Datatypes (CRDTs) are usually classified as either state-based or operation-based. However, the standard definition of op-based CRDTs is very encompassing, allowing even sending the full state, which blurs the distinction. We introduce pure op-based CRDTs, which can only send operations to other replicas, drawing a clear distinction from state-based ones. Datatypes with commutative operations can be trivially implemented as pure op-based CRDTs using standard reliable causal delivery. We propose an extended API – tagged reliable causal broadcast – that provides causality information upon delivery, and show how it can be used to also implement datatypes having non-commutative operations, through the use of a PO-Log – a partially ordered log of operations – inside the datatype. A semantically-based PO-Log compaction framework, using both causality and what we denote by causal stability, yields very compact replica state for pure op-based CRDTs, while also benefiting from small message sizes.
Carlos Baquero, Paulo Sérgio Almeida, Ali Shoker
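
A much-simplified Python sketch of a PO-Log-driven add-wins set: the middleware is assumed to supply each operation's timestamp and a causal precedence test `before(t1, t2)`, and the causal-stability-based compaction described in the paper is omitted.

    # Simplified PO-Log sketch of an add-wins set. `before(t1, t2)` is assumed
    # to come from tagged reliable causal broadcast; causal-stability
    # compaction is omitted.
    def deliver(polog, t, op, value, before):
        # polog: set of (timestamp, op, value) triples held by this replica.
        # A delivered op obsoletes every causally-prior op on the same element.
        polog = {(t2, op2, v2) for (t2, op2, v2) in polog
                 if not (v2 == value and before(t2, t))}
        if op == "add":                  # removes leave no trace of themselves:
            polog.add((t, op, value))    # they only cancel causally-prior adds
        return polog

    def elements(polog):
        # A concurrent add survives the pruning above, hence "add wins".
        return {v for (_, op, v) in polog if op == "add"}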

Autonomous Multi-dimensional Slicing for Large-Scale Distributed Systems

Abstract
Slicing is a distributed systems primitive that makes it possible to autonomously partition a large set of nodes based on node-local attributes. Slicing is decisive for automatically provisioning system resources for different services, based on their requirements or importance. One of the main limitations of existing slicing protocols is that only single-dimension attributes are considered for partitioning. In practical settings, it is often necessary to consider best compromises across an ensemble of metrics.
In this paper we propose an extension of the slicing primitive that allows multi-attribute distributed-systems slicing. Our protocol employs a gossip-based approach that does not require centralized knowledge and allows self-organization. It leverages the notion of domination between nodes, forming a partial order between multi-dimensional points, in a way similar to SkyLine queries for databases. We evaluate and demonstrate the benefits of our approach using large-scale simulations.
Mathieu Pasquet, Francisco Maia, Etienne Rivière, Valerio Schiavoni
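
The domination relation at the heart of the protocol is just a Pareto comparison of node-local attribute vectors, as in the sketch below (assuming, arbitrarily, that larger values are better on every dimension).

    # Sketch of SkyLine-style domination between nodes' attribute vectors,
    # assuming larger values are better on every dimension.
    def dominates(a: tuple, b: tuple) -> bool:
        # a dominates b: at least as good everywhere, strictly better somewhere.
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    # (cpu, bandwidth): (8, 100) dominates (4, 100); (8, 50) and (4, 100) are
    # incomparable, which is why domination only yields a partial order.
    assert dominates((8, 100), (4, 100))
    assert not dominates((8, 50), (4, 100)) and not dominates((4, 100), (8, 50))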

Bandwidth-Minimized Distribution of Measurements in Global Sensor Networks

Abstract
Global sensor networks (GSN) allow applications to integrate huge amounts of data using real-time streams from virtually anywhere. Queries to a GSN offer many degrees of freedom, e.g. the resolution and the geographic origin of data, and optimizing the distribution of data streams to many applications at scale is highly challenging. Existing solutions hence either limit flexibility with additional constraints or ignore the characteristics of sensor streams, where data points are produced synchronously.
In this paper, we present a new approach to bandwidth-minimized distribution of real-time sensor streams in a GSN. Using a distributed index structure, we partition queries for bandwidth management and quickly identify overlapping queries. Based on this information, our relay strategy determines an optimized distribution structure which minimizes traffic while being adaptive to changing conditions. Simulations show that total traffic and user perceived delay can be reduced by more than 50%.
Andreas Benzing, Boris Koldehofe, Kurt Rothermel
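
As a hypothetical illustration of what "overlapping queries" means here, the check below compares two geographic bounding boxes; the paper finds such overlaps through a distributed index rather than pairwise tests, and the query representation is an assumption.

    # Hypothetical sketch: two stream queries overlap if their geographic
    # bounding boxes intersect. The distributed index that makes this fast
    # is the paper's contribution and is not modelled here.
    def overlaps(q1, q2) -> bool:
        # q: (lat_min, lat_max, lon_min, lon_max)
        return not (q1[1] < q2[0] or q2[1] < q1[0] or
                    q1[3] < q2[2] or q2[3] < q1[2])

    # Overlapping queries can share one upstream sensor stream instead of two.
    print(overlaps((48.0, 49.0, 8.0, 10.0), (48.5, 50.0, 9.0, 11.0)))  # True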

A Fuzzy-Logic Based Coordinated Scheduling Technique for Inter-grid Architectures

Abstract
An inter-grid is a composition of small interconnected grid domains, each with its own local broker. The main challenge is to devise appropriate job scheduling policies that can satisfy goals such as global load balancing while maintaining the local policies of the different domains. Existing inter-grid methodologies are based either on centralised meta-scheduling or on decentralised scheduling that is carried out by local brokers, but without proper coordination. Both are suitable for interconnecting grid domains, but break down when the number of domains becomes large. Earlier we proposed Slick, a scalable resource discovery and job scheduling technique for broker-based interconnected grid domains, where inter-grid scheduling decisions are handled by gateway schedulers installed on the local brokers. This paper presents a decentralised scheduling technique for the Slick architecture, where cross-grid scheduling decisions are made using a fuzzy-logic based algorithm. The proposed technique is tested by simulating its implementation on 512 interconnected Condor pools. Compared to existing techniques, our results show that the proposed technique is better at maintaining overall throughput and load balancing as the number of interconnected grids increases.
Abdulrahman Azab, Hein Meling, Reggie Davidrajuh
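
A generic fuzzy-logic ingredient, shown for illustration only (the paper's actual membership functions and rule base are not reproduced): grading a domain's load into overlapping fuzzy sets, the kind of input that rules such as "if load is high, forward the job" can then combine.

    # Generic fuzzy-membership sketch; the sets and breakpoints are assumptions,
    # not the rule base used in the paper.
    def triangular(x: float, a: float, b: float, c: float) -> float:
        # Membership rising on [a, b] and falling on [b, c].
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    load = 0.7   # normalised load of a grid domain
    membership = {
        "low":    triangular(load, -0.01, 0.0, 0.5),  # 0.0
        "medium": triangular(load, 0.0, 0.5, 1.0),    # 0.6
        "high":   triangular(load, 0.5, 1.0, 1.01),   # 0.4
    }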

Distributed Vertex-Cut Partitioning

Abstract
Graph processing has become an integral part of big data analytics. With the ever-increasing size of graphs, one needs to partition them into smaller clusters that can be managed and processed more easily on multiple machines in a distributed fashion. While there exist numerous solutions for edge-cut partitioning of graphs, very little effort has been made for vertex-cut partitioning, despite the fact that vertex-cuts have proved significantly more effective than edge-cuts for processing most real-world graphs. In this paper, we present Ja-be-Ja-vc, a parallel and distributed algorithm for vertex-cut partitioning of large graphs. In a nutshell, Ja-be-Ja-vc is a local search algorithm that iteratively improves upon an initial random assignment of edges to partitions. We propose several heuristics for this optimization and study their impact on the final partitioning. Moreover, we employ a simulated annealing technique to escape local optima. We evaluate our solution on various graphs and with a variety of settings, and compare it against two state-of-the-art solutions. We show that Ja-be-Ja-vc outperforms the existing solutions: it not only creates partitions of any requested size, but also achieves a vertex-cut better than its counterparts and more than 70% better than random partitioning.
Fatemeh Rahimian, Amir H. Payberah, Sarunas Girdzijauskas, Seif Haridi
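
The simulated-annealing ingredient can be sketched independently of the partitioning details: a locally worse swap is still accepted with a probability that shrinks as the temperature cools, which is how local optima are escaped. The cost function and edge-swap mechanics of Ja-be-Ja-vc are not reproduced.

    import math, random

    # Standard simulated-annealing acceptance rule, shown as a sketch; the
    # partition cost and edge-swap mechanics of Ja-be-Ja-vc are omitted.
    def accept_swap(delta: float, temperature: float) -> bool:
        # delta: change in partitioning cost if the swap happens (< 0 improves).
        if delta < 0:
            return True               # always take improving swaps
        if temperature <= 0:
            return False              # cooled down: pure greedy local search
        return random.random() < math.exp(-delta / temperature)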

Multi-agent Systems Design and Prototyping with Bigraphical Reactive Systems

Abstract
Several frameworks and methodologies have been proposed to ease the design of Multi-Agent Systems (MAS), but the vast majority of them are tightly tied to specific implementation platforms. In this paper, we outline a methodology for MAS design and prototyping in the more abstract framework of Bigraphical Reactive Systems (BRS). In our approach, components and elements of the application domain are modelled as bigraphs, and their dynamics as graph rewriting rules. Desiderata can be encoded by means of type systems or logical formulae. Then, the BDI agents (i.e., their beliefs, desires and intentions) are identified and extracted from the BRS. This yields a prototype that can be run as a distributed bigraphical system, evolving by means of distributed transactional rewritings triggered by cooperating agents depending on their internal intentions and beliefs.
This methodology allows the designer to benefit from the results and tools from the theory of BRS, especially in the requirement analysis and validation phases. Among other results, we mention behavioural equivalences, temporal/spatial logics, visual tools for editing, for simulation and for model checking, etc. Moreover, bigraphs can be naturally composed, thus allowing for modular design of MAS.
Alessio Mansutti, Marino Miculan, Marco Peressotti

Erratum to: Distributed Applications and Interoperable Systems

Abstract
Erratum to: K. Magoutis and P. Pietzuch (Eds.) Distributed Applications and Interoperable Systems DOI: 10.1007/978-3-662-43352-2
The book was inadvertently published with an incorrect name of the copyright holder. The name of the copyright holder for this book is: © IFIP International Federation for Information Processing. The book has been updated with the changes.
Kostas Magoutis, Peter Pietzuch

Backmatter
