
About this Book

This book constitutes extended, revised and selected papers from the 6th International Conference on Cloud Computing and Services Science, CLOSER 2016, held in Rome, Italy, in April 2016.

The 16 papers presented in this volume were carefully reviewed and selected from a total of 123 submissions. The volume also contains two invited papers. CLOSER 2016 focused on the emerging area of cloud computing, inspired by recent advances related to infrastructures, operations, and service availability through global networks. It also studied the influence of service science in this area.



Invited Papers


Supporting Users in Data Outsourcing and Protection in the Cloud

Moving data and applications to the cloud allows users and companies to enjoy considerable benefits. These benefits, however, come with a number of security issues that should be addressed. Among them are the need to ensure that requirements on security, costs, and quality of service are satisfied by cloud providers, and the need to adopt techniques that properly protect data and applications. In this paper, we present different strategies and solutions that can be applied to address these issues.
S. De Capitani di Vimercati, S. Foresti, G. Livraga, P. Samarati

Native Cloud Applications: Why Monolithic Virtualization Is Not Their Foundation

Due to the current hype around cloud computing, the term ‘native cloud application’ is becoming increasingly popular. It suggests that an application fully benefits from all the advantages of cloud computing. Many users tend to consider their applications cloud native as soon as they are bundled as a monolithic virtual machine or container. Even though virtualization is fundamental to implementing the cloud computing paradigm, a virtualized application does not automatically exhibit all properties of a native cloud application. In this work, which extends a previous paper, we propose a definition of a native cloud application by specifying the set of characteristic architectural properties a native cloud application has to provide. We demonstrate the importance of these properties by introducing a typical scenario from current practice that moves an application to the cloud. The identified properties and the scenario show, in particular, why virtualization alone is insufficient to build native cloud applications. We also outline how native cloud applications respect the core principles of service-oriented architectures, which are currently receiving much attention in the form of microservice architectures. Finally, we discuss the management of native cloud applications using container orchestration approaches as well as the cloud standard TOSCA.
Frank Leymann, Uwe Breitenbücher, Sebastian Wagner, Johannes Wettinger



Delegated Audit of Cloud Provider Chains Using Provider Provisioned Mobile Evidence Collection

Businesses, especially SMEs, increasingly integrate cloud services into their IT infrastructure. They require assurance of the correct and effective implementation of security controls to attenuate the loss of control that is inherently associated with using cloud services. Giving this kind of assurance is traditionally the task of audits and certification performed by auditors. Cloud auditing becomes increasingly challenging for the auditor, given that today cloud services are often distributed across many cloud providers. There are Software as a Service (SaaS) providers that no longer own dedicated hardware for operating their services, but rely solely on other cloud providers at lower layers, such as Infrastructure as a Service (IaaS) providers. Auditing provider chains, that is, auditing cloud services provisioned across different providers, is challenging and complex for the auditor.
The main contributions of this paper are: an approach to automated auditing of cloud provider chains with the goal of providing evidence-based assurance about the correct handling of data according to pre-defined policies; and concepts of individual and delegated audits, together with a discussion of policy distribution and applicability aspects and a proposed lifecycle model. In the delegated auditing of cloud provider chains, a provider-provisioned platform for mobile evidence collection gathers evidence data on demand, as specified by the policy. Further, an extension of the Cloud Security Alliance’s (CSA) CloudTrust Protocol forms the basis of the proposed system for provider chain auditing.
Christoph Reich, Thomas Rübsamen

Trade-offs Based Decision System for Adoption of Cloud-Based Services

The decision to adopt any new technology in an organization is a crucial one, as it can have impact at the technical, economic, and organizational levels. Therefore, the decision to adopt cloud-based services has to be based on a methodology that supports a wide array of criteria for evaluating the available alternatives. Also, as these criteria or factors can be mutually interdependent and conflicting, a trade-offs-based methodology is needed to take such decisions.
This paper discusses the design, implementation, and evaluation of a prototype that automates the extended theoretical Trade-offs based Methodology for Adoption of Cloud-based Services (TrAdeCIS) developed in [5]. The system is based on Multi-attribute Decision Algorithms (MADA), which select the best alternative based on the decision maker's priorities among the criteria. The applicability of this methodology to the adoption of cloud-based services in an organization is validated with a use case and is even extended to other domains, especially to Train Operating Companies.
Radhika Garg, Marc Heimgartner, Burkhard Stiller
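As an illustration of the multi-attribute style of decision algorithm this abstract refers to, the following minimal sketch implements TOPSIS, one common MADA technique. The offerings, criteria, and weights are invented for illustration and are not taken from TrAdeCIS:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns) with TOPSIS.

    benefit[j] is True when a higher score on criterion j is better
    (e.g. availability) and False when lower is better (e.g. cost)."""
    m = np.asarray(matrix, dtype=float)
    # Vector-normalise each criterion column, then apply the weights.
    v = m / np.linalg.norm(m, axis=0) * np.asarray(weights)
    # Ideal best/worst depend on whether a criterion is benefit or cost.
    best = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - best, axis=1)
    d_worst = np.linalg.norm(v - worst, axis=1)
    return d_worst / (d_best + d_worst)   # closeness: higher ranks first

# Two hypothetical cloud offerings scored on monthly cost (lower is
# better), performance and availability (higher is better).
scores = topsis([[100, 0.8, 0.99], [80, 0.6, 0.95]],
                [0.5, 0.3, 0.2],
                [False, True, True])
```

The closeness score captures the trade-off between conflicting criteria: an alternative ranks highly only when it sits near the ideal best and far from the ideal worst across all weighted criteria simultaneously.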

Toward Proactive Learning of Multi-layered Cloud Service Based Application

Cloud computing is becoming a popular platform for delivering service-based applications (SBAs) built on service-oriented architecture (SOA) principles. Monitoring the performance and functionality of SBAs deployed across multiple cloud providers, and adapting them to variations/events produced by the several layers involved (infrastructure, platform, application, service, etc.), are challenges for the research community; the major challenge is handling the impact of the adaptation operations. A crucial dimension in industrial practice is the non-functional service aspects, which relate to Quality-of-Service (QoS). Service Level Agreements (SLAs) define quantitative QoS objectives and are part of a contract between the service provider and the service consumer. Although significant work exists on how SLAs may be specified, monitored, and enforced, few efforts have considered the problem of SLA monitoring in the context of Cloud Service-Based Applications (CSBAs), which cater for the tailoring of services using a mixture of Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS) solutions. With a preventive focus, the main contribution of this paper is a novel learning and prediction approach for SLA violations, which generates models capable of proactively predicting upcoming SLA violations and suggesting recovery actions to react to such violations before they occur. A prototype has been developed as a Proof-of-Concept (PoC) to ascertain the feasibility and applicability of the proposed approach.
Ameni Meskini, Yehia Taher, Amal El Gammal, Béatrice Finance, Yahya Slimani
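The proactive idea described above can be illustrated with a deliberately simple sketch: fit a trend to recent monitoring samples and extrapolate it to the SLA threshold. The paper's approach generates learned models; this linear-trend stand-in, with hypothetical numbers, only shows the "predict before the violation occurs" principle:

```python
from statistics import mean

def predict_violation(samples, threshold, horizon):
    """Extrapolate a least-squares trend over recent metric samples and
    report whether the SLA threshold will be crossed within `horizon`
    future steps. Returns (will_violate, steps_until_crossing_or_None)."""
    n = len(samples)
    xs = list(range(n))
    x_bar, y_bar = mean(xs), mean(samples)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, samples))
             / sum((x - x_bar) ** 2 for x in xs))
    if slope <= 0:                 # flat or improving: no violation forecast
        return False, None
    steps = (threshold - samples[-1]) / slope
    return (0 <= steps <= horizon), max(0, round(steps))

# Hypothetical response times (ms) creeping towards a 200 ms SLA limit.
forecast = predict_violation([100, 110, 120, 130], threshold=200, horizon=10)
# forecast == (True, 7): the trend crosses the limit in about 7 steps,
# leaving time to trigger a recovery action proactively.
```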

Map Reduce Autoscaling over the Cloud with Process Mining Monitoring

Over recent years, the traditional pressing need for fast and reliable processing solutions has been further exacerbated by the increase of data volumes produced by mobile devices, sensors, and almost ubiquitous internet availability. These big data must be analyzed to extract further knowledge.
Distributed programming models, such as Map Reduce, provide a technical answer to this challenge. Furthermore, when relying on cloud infrastructures, Map Reduce platforms can easily be provided with additional computing nodes at runtime (e.g., the system administrator can scale the infrastructure to meet temporal deadlines). Nevertheless, the execution of distributed programming models on the cloud still lacks automated mechanisms to guarantee Quality of Service (i.e., autonomous scale-up/-down behavior).
In this paper, we focus on monitoring Map Reduce applications (to detect situations where a temporal deadline will be exceeded) and on performing recovery actions on the cluster (by automatically providing additional resources to boost the computation). To this end, we exploit techniques and tools developed in the research field of Business Process Management: in particular, we focus on declarative languages and tools for monitoring the execution of business processes. We introduce a distributed architecture in which a logic-based monitor is able to detect possible delays and trigger recovery actions such as the dynamic provisioning of a congruent number of resources.
Federico Chesani, Anna Ciampolini, Daniela Loreti, Paola Mello
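A minimal sketch of the monitor-and-recover loop described above, assuming (as a strong simplification the paper does not make) steady job progress and perfectly linear speedup from added nodes; all figures are hypothetical:

```python
import math

def nodes_to_meet_deadline(progress, elapsed, deadline, current_nodes):
    """Estimate how many extra worker nodes to provision so that a running
    Map Reduce job meets its temporal deadline.

    progress : fraction of the job completed so far (0..1]
    elapsed  : seconds since the job started"""
    if progress <= 0:
        raise ValueError("no progress observed yet")
    budget = deadline - elapsed               # time we are still allowed
    if budget <= 0:
        raise ValueError("deadline already missed")
    projected_total = elapsed / progress      # naive completion estimate
    remaining = projected_total - elapsed     # time left at current size
    if remaining <= budget:
        return 0                              # on track: no recovery action
    # Spread the remaining node-seconds of work over the time budget.
    needed = math.ceil(current_nodes * remaining / budget)
    return needed - current_nodes

# Hypothetical job: 25% done after 100 s on 4 nodes, deadline at 250 s.
extra = nodes_to_meet_deadline(0.25, 100, 250, 4)
```

In the paper's architecture this decision is taken by a logic-based monitor observing execution events, rather than by the naive extrapolation used here.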

Detecting Anomaly in Cloud Platforms Using a Wavelet-Based Framework

Cloud computing enables the delivery of compute resources as services in an on-demand fashion. The reliability of these services is of significant importance to their consumers. The presence of anomalies in Cloud platforms can put this reliability into question, since an anomaly indicates deviation from normal behaviour. Monitoring enables efficient Cloud service provisioning management; however, most management efforts focus on the performance of the services, and little attention is paid to detecting anomalous behaviour in the gathered monitoring data. In addition, the existing solutions for detecting anomalies in Clouds lack a multi-dimensional approach. In this chapter, we present a wavelet-based anomaly detection framework that is capable of analysing multiple monitored metrics simultaneously to detect anomalous behaviour. It operates in both the frequency and time domains when analysing monitoring data that represents system behaviour. The framework is first trained on over seven days' worth of historical monitoring data to identify healthy behaviour. Based on this training, anomalous behaviour can be detected as deviation from the healthy system. The effectiveness of the proposed framework was evaluated on a Cloud service deployment use-case scenario that produced both healthy and anomalous behaviour.
David O’Shea, Vincent C. Emeakaroha, Neil Cafferkey, John P. Morrison, Theo Lynn
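The wavelet idea can be sketched in a few lines: learn the spread of high-frequency (detail) coefficients from healthy training data, then flag live windows whose coefficients deviate strongly. This sketch uses a single-level Haar transform on one metric only, whereas the chapter's framework analyses multiple metrics and operates in both frequency and time domains; the signals below are synthetic:

```python
import numpy as np

def haar_details(signal):
    """One-level Haar wavelet transform: high-frequency detail coefficients."""
    s = np.asarray(signal, dtype=float)
    s = s[: len(s) // 2 * 2]                     # truncate to even length
    return (s[0::2] - s[1::2]) / np.sqrt(2)

def detect_anomalies(train, live, k=3.0):
    """Flag windows of `live` whose detail coefficients deviate more than
    k standard deviations from the healthy `train` baseline."""
    base = haar_details(train)
    mu, sigma = base.mean(), base.std() + 1e-9   # epsilon avoids div-by-zero
    return np.abs((haar_details(live) - mu) / sigma) > k

# Synthetic metric stream: a smooth baseline, then the same stream
# with an injected fault at sample 100.
train = np.sin(np.linspace(0, 10, 200))
live = np.sin(np.linspace(0, 10, 200))
live[100] += 5.0
flags = detect_anomalies(train, live)            # True only at window 50
```

The detail coefficients respond to abrupt changes while ignoring the slow-moving baseline, which is what makes the wavelet domain attractive for separating anomalies from normal drift.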

Security SLA in Next Generation Data Centers, the SPECS Approach

Next generation Data Centers (ngDC) represent a significant evolution in how storage resources can be provisioned. They are cloud-based architectures offering flexible IT infrastructure and services through the virtualization of resources, managing compute, network, and storage resources in an integrated way. Despite the multitude of benefits available when leveraging a Cloud infrastructure, wide-scale Cloud adoption for sensitive or critical business applications still faces resistance. One of the key limiting factors holding back larger adoption of Cloud services is trust. To cope with this, data center customers need more guarantees about the security levels provided, creating the need for tools to dynamically negotiate and monitor security requirements. The SPECS project proposes a platform that offers security features with an as-a-service approach; furthermore, it uses Security Service Level Agreements (Security SLAs) as a means of establishing a clear statement between customers and providers that defines a mutual agreement. This paper presents an industrial use case from EMC that integrates the SPECS Platform with their innovative solutions for the ngDC. In particular, the paper illustrates how it is possible to negotiate, enforce, and monitor a Security SLA in a cloud infrastructure offering.
Massimiliano Rak, Valentina Casola, Silvio La Porta, Andrew Byrne

Dynamically Loading Mobile/Cloud Assemblies

Distributed applications that span mobile devices, computing clusters, and cloud services require robust and flexible mechanisms for dynamically loading code. This paper describes Lady: a system that augments the .NET platform with a highly reliable mechanism for resolving and loading assemblies, and arranges for safe execution of partially trusted code. Key benefits of Lady are the low latency and high availability achieved through its novel integration with DNS.
Robert Pettersen, Håvard D. Johansen, Steffen Viken Valvåg, Dag Johansen

Investigation of Impacts on Network Performance in the Advance of a Microservice Design

Due to REST-based protocols, microservice architectures are inherently horizontally scalable. That might be why the microservice architectural style is attracting more and more attention for cloud-native application engineering. Corresponding microservice architectures often rely on a complex technology stack that includes containers, elastic platforms, and software-defined networks. Astonishingly, there are almost no specialized tools to figure out the performance impacts that come along with this architectural style ahead of a microservice design. Therefore, we propose a benchmarking solution intentionally designed for this upfront design phase. Furthermore, we evaluate our benchmark and present some performance data to reflect some often-heard cloud-native application performance rules (or myths).
Nane Kratzke, Peter-Christian Quint

A Universal Approach for Compliance Management Using Compliance Descriptors

Trends like outsourcing and cloud computing have led to a distribution of business processes among different IT systems and organizations. Still, businesses need to ensure the compliance of these distributed processes with laws and regulations. This need has given rise to many new solutions for compliance management and checking. Compliance requirements arise from legal documents and are implemented in all parts of enterprise IT, creating a business-IT gap between legal texts and software implementation. Compliance solutions must bridge this gap as well as support a wide variety of compliance requirements. To achieve these goals, we developed an integrating compliance descriptor for compliance modeling on the legal, requirement, and technical levels, incorporating arbitrary rule languages for specific types of requirements. Using a modeled descriptor, a compliance checking architecture can be configured, including specific rule checking implementations. The graphical notation of the compliance descriptor and the formalism it is based on are described and evaluated using a prototype as well as expert interviews. Based on the evaluation results, an extension for compliance management in unstructured processes is outlined.
Falko Koetter, Maximilien Kintz, Monika Kochanowski, Thatchanok Wiriyarattanakul, Christoph Fehling, Philipp Gildein, Sebastian Wagner, Frank Leymann, Anette Weisbecker

Fostering the Reuse of TOSCA-based Applications by Merging BPEL Management Plans

Complex Cloud applications consist of a variety of individual components that need to be provisioned and managed in a holistic manner to set up the overall application. The Cloud standard TOSCA can be used to describe these components, their dependencies, and their management functions. To provision or manage the Cloud application, the execution of these individual management functions can be orchestrated by executable management plans, which are workflows able to deal with the heterogeneity of the functions. Unfortunately, creating TOSCA application descriptions and management plans from scratch is time-consuming, error-prone, and requires a lot of expert knowledge. Hence, to reduce the time and resources needed to set up the management capabilities of new Cloud applications, existing TOSCA descriptions and plans should be reused. To enable the systematic reuse of these artifacts, we proposed in a previous paper a method for combining existing TOSCA descriptions and plans, or building blocks thereof. One important aspect of this method is the creation of BPEL4Chor-based management choreographies for coordinating different plans, and how these choreographies can be automatically consolidated back into executable plans. This paper extends the previous one by providing a much more formal description of the choreography consolidation. To this end, a set of new algorithms is introduced, describing the different steps required to consolidate the management choreography into an executable management plan. The method and the algorithms are validated by a set of tools from the Cloud application management ecosystem OpenTOSCA.
Sebastian Wagner, Uwe Breitenbücher, Oliver Kopp, Andreas Weiß, Frank Leymann

Experimental Study on Performance and Energy Consumption of Hadoop in Cloud Environments

Big data applications are resource- and energy-intensive. Cloud providers wish to make better use of virtualization technologies in order to meet the evolving needs of infrastructures alongside growing demand. Container-based virtualization is increasingly popular in the high-performance domain; this work evaluates it in the context of big data and cloud computing. It focuses on Hadoop as a big data application and evaluates the performance impact and energy consumption. The objective is to understand the trade-off between performance and energy efficiency depending on the virtualization technology. The outcomes of this paper are: firstly, an evaluation of container-based virtualization on the cloud using Hadoop as a big data application; secondly, a comparison of traditional virtualization with the emerging container technology, analyzing the impact of coexisting virtual machines (or containers) on CPU, memory, hard disk throughput, and network bandwidth; thirdly, a reduction of big data application deployment costs using the cloud; fourthly, an in-depth study for the Hadoop community of resource consumption depending on the deployment environment. Our evaluation shows that: (i) container (Docker) technology enhances performance and saves energy compared to traditional virtualization; (ii) the performance of a container-based Hadoop cluster is significantly better than with traditional virtualization; (iii) the data replication rate influences job completion time; (iv) coexisting containers (or virtual machines) influence the energy consumption and completion time of the applications.
Aymen Jlassi, Patrick Martineau

Towards a Framework for Privacy-Preserving Data Sharing in Portable Clouds

Cloud storage is a cheap and reliable solution for users to share data with their contacts. However, the lack of standardisation and migration tools makes it difficult for users to migrate to another Cloud Service Provider (CSP) without losing contacts, resulting in a vendor lock-in problem. In this work, we aim to provide a generic framework, named PortableCloud, that is flexible enough to enable users to migrate seamlessly to a different CSP while keeping all their data and contacts. To preserve the privacy of users, the data in the portable cloud is concealed from the CSP by employing encryption techniques. Moreover, we introduce a migration agent that assists users in automatically finding a suitable CSP that can satisfy their needs.
Clemens Zeidler, Muhammad Rizwan Asghar

Cost Analysis Comparing HPC Public Versus Private Cloud Computing

The past several years have seen a rapid increase in the number and type of public cloud computing hardware configurations and pricing options offered to customers. In addition, public cloud providers have expanded the number and type of storage options and established incremental price points for storage and for network transmission of outbound data from the cloud facility. This has greatly complicated the analysis needed to determine the most economical option for moving general purpose applications to the cloud. This paper investigates whether this economic analysis can be extended to more computationally intensive HPC-type computations. Using an HPC baseline hardware configuration for comparison, the total cost of operations for several HPC private and public cloud providers is analyzed. The analysis shows under what operational conditions the public cloud option may be the more cost-effective alternative for HPC-type applications.
Patrick Dreher, Deepak Nair, Eric Sills, Mladen Vouk
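The break-even reasoning behind such a cost analysis can be sketched as simple amortisation arithmetic. The figures below are entirely hypothetical and omit the storage and egress-network pricing that the paper's analysis includes:

```python
def private_core_hour_cost(capex, lifetime_years, annual_opex, cores, utilization):
    """Amortised cost of one *used* core-hour on a private HPC cluster.

    capex       : hardware purchase cost
    annual_opex : power, cooling, and staff per year
    utilization : fraction of available core-hours actually used (0..1)"""
    hours = lifetime_years * 365 * 24
    total = capex + annual_opex * lifetime_years
    return total / (cores * hours * utilization)

def breakeven_utilization(capex, lifetime_years, annual_opex, cores, public_price):
    """Utilisation above which the private cluster undercuts the public
    per-core-hour on-demand price."""
    hours = lifetime_years * 365 * 24
    total = capex + annual_opex * lifetime_years
    return total / (cores * hours * public_price)

# Hypothetical cluster: $200k hardware over 4 years, $30k/year to run,
# 512 cores, versus a public price of $0.05 per core-hour.
u = breakeven_utilization(200_000, 4, 30_000, 512, 0.05)   # ~0.36
```

Under these assumed numbers the private cluster wins above roughly 36% sustained utilisation and the public cloud wins below it, which illustrates the paper's point that the answer depends on operational conditions.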

Ming: Model- and View-Based Deployment and Adaptation of Cloud Datacenters

For the configuration of a datacenter from bare metal up to the level of infrastructure-as-a-service (IaaS) solutions, there is currently neither a standard nor a common data model that is understood across deployment automation tools. Following a model- and view-based approach, Ming aims at holistically describing cloud datacenters. Establishing a respective metamodel, it supports different stakeholders with tailored views and permits the utilization of arbitrary deployment tools for providing the basic cloud service model. In addition to initial deployments, it targets (model-based) adaptation of datacenters to cover operational use cases such as extending a cloud with additional resources and providing software upgrades and patches for the deployed solutions.
Ta’id Holmes

A Generic Framework for Representing Context-Aware Security Policies in the Cloud

Enterprises are increasingly embracing cloud computing in order to reduce costs and increase agility in their everyday business operations. Nevertheless, due mainly to confidentiality, privacy and integrity concerns, many organisations are reluctant to migrate their sensitive data to the cloud. In order to alleviate these security concerns, this chapter proposes the PaaSword framework: a generic PaaS solution that provides capabilities for guiding developers through the process of defining appropriate policies for protecting their sensitive data. More specifically, this chapter outlines the construction of an extensible and declarative formalism for representing policy-related knowledge, one which disentangles the definition of a policy from the code employed for enforcing it. It also outlines the construction of a suitable Context-aware Security Model, a framework of concepts and properties in terms of which the policy-related knowledge is expressed.
Simeon Veloudis, Iraklis Paraskakis, Yiannis Verginadis, Ioannis Patiniotakis, Gregoris Mentzas
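The separation this chapter argues for, policy-related knowledge kept apart from the code that enforces it, can be sketched as follows. The attribute vocabulary and the first-applicable combining rule are illustrative choices, not PaaSword's actual formalism:

```python
# Each policy is pure data, disentangled from the enforcement code below.
# The attribute names are hypothetical, for illustration only.
POLICIES = [
    {"effect": "permit", "role": "doctor", "location": "hospital",
     "action": "read", "resource": "patient-record"},
    {"effect": "deny", "role": "*", "location": "*",
     "action": "*", "resource": "patient-record"},
]

def evaluate(request, policies=POLICIES):
    """First-applicable evaluation: return the effect of the first policy
    whose every attribute matches the request (or is the wildcard '*')."""
    for p in policies:
        if all(p[k] in ("*", request.get(k))
               for k in ("role", "location", "action", "resource")):
            return p["effect"]
    return "deny"                      # default-deny when nothing matches

# Context (here: location) changes the outcome without touching the code.
decision = evaluate({"role": "doctor", "location": "hospital",
                     "action": "read", "resource": "patient-record"})
```

Because the policies are declarative data, they can be authored, validated, and swapped independently of the enforcement engine, which is the property the chapter's formalism aims to provide at PaaS scale.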

Towards Outsourced and Individual Service Consumption Clearing in Connected Mobility Solutions

The work at hand targets the missing capabilities for data access and service clearing in connected mobility service solutions. Interviewed experts pointed out that current solutions in the mobility domain lack appropriate clearing capabilities as well as access to internally processed data. We address both limitations and elaborate a message protocol and respective interfaces that enable participants of a service solution to outsource their clearing process individually. The design of the interfaces and the message protocol is discussed. Their feasibility is demonstrated with the help of an additionally developed clearing service prototype. This prototype uses the interfaces in an automated fashion and calculates the costs for a set of transactions in accordance with a given contract. The presented interfaces and message protocol support service solution developers in developing the new marketplace capabilities demanded by participants. What kind of clearing might be applicable in an ecosystem of interconnected marketplaces is also discussed. This work closes the gap of limited data access and service clearing. Its findings can be used as a foundation for a standard on service clearing, or for any other use case that requires access to internal marketplace data.
Michael Strasser, Sahin Albayrak

