About this book

This book constitutes the thoroughly refereed proceedings of the Third International Conference on Cloud Computing and Services Science, CLOSER 2013, held in Aachen, Germany, in May 2013. The 8 papers presented were selected from 142 paper submissions. The papers cover the following topics: cloud computing fundamentals; services science foundations for cloud computing; cloud computing platforms and applications; and cloud computing enabling technologies.



SecLA-Based Negotiation and Brokering of Cloud Resources

As the popularity of Cloud computing has grown in recent years, the choice of Cloud Service Provider (CSP) has become an important issue from the user’s perspective. Although Cloud users are increasingly concerned about their security in the Cloud and may have specific security requirements, this choice is currently based on requirements related to the offered Service Level Agreements (SLA) and costs. Most CSPs do not provide user-understandable information regarding the security levels associated with their services, and thereby prevent users from negotiating their security requirements. In other words, users lack the technical means, in terms of tools and semantics, to choose the CSP that best suits their security demands. Industrial efforts on the specification of Cloud security parameters in SLAs, also known as “Security Level Agreements” or SecLAs, represent initial steps towards solving this problem. The aim of this paper is to propose a practical approach that enables user-centric negotiation and brokering of Cloud resources. The proposed methodology relies both on the notion of SecLAs for establishing common semantics between the CSPs and the users, and on a quantitative approach to evaluate the security levels associated with specific SecLAs.
This work is the result of joint effort on the security metrology-related techniques being developed by the EU FP7 projects ABC4Trust/SPECS and the framework for SLA-based negotiation and Cloud resource brokering proposed by the EU FP7 mOSAIC project. The feasibility of the proposed negotiation approach and its applicability to Cloud Federations are demonstrated in the paper with a real-world case study based on a scenario presented in the FP7 project SPECS. The presented scenario shows the negotiation of a user’s security requirements against a set of CSPs’ SecLAs, using both the information available in the Cloud Security Alliance’s “Security, Trust & Assurance Registry” (CSA STAR) and the WS-Agreement standard.
Jesus Luna, Tsvetoslava Vateva-Gurova, Neeraj Suri, Massimiliano Rak, Alessandra De Benedictis
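The quantitative evaluation of security levels described in this abstract can be pictured as a minimal matching sketch. The control names, security levels, and provider data below are hypothetical illustrations, not drawn from CSA STAR or the paper itself:

```python
# Minimal sketch of quantitative SecLA matching: a user's security
# requirements are compared against each provider's declared levels.
# Control names, levels, and provider data are illustrative only.

def secla_score(user_req, provider_secla):
    """Fraction of the user's security requirements met or exceeded."""
    met = sum(1 for control, level in user_req.items()
              if provider_secla.get(control, 0) >= level)
    return met / len(user_req)

def rank_providers(user_req, offers):
    """Order CSPs by how well their SecLA covers the user's requirements."""
    return sorted(offers, key=lambda csp: secla_score(user_req, offers[csp]),
                  reverse=True)

user_req = {"encryption": 3, "audit_logging": 2, "incident_response": 2}
offers = {
    "CSP-A": {"encryption": 3, "audit_logging": 1, "incident_response": 2},
    "CSP-B": {"encryption": 3, "audit_logging": 2, "incident_response": 3},
}
```

Ranking the offers by this score gives the broker an ordering it can use during negotiation; richer schemes would weight controls by user priority.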

Trustworthiness Attributes and Metrics for Engineering Trusted Internet-Based Software Systems

Trustworthiness of Internet-based software systems, apps, services and platforms is a key success factor for their use and acceptance by organizations and end-users. The notion of trustworthiness, though, is subject to individual interpretation and preference; e.g., organizations require confidence about how their business-critical data is handled, whereas end-users may be more concerned about usability. As one main contribution, we present an extensive list of software quality attributes that contribute to trustworthiness. Those software quality attributes have been identified by a systematic review of the research literature and by analyzing two real-world use cases. As a second contribution, we sketch an approach for systematically deriving metrics to measure the trustworthiness of software systems. Our work thereby contributes to a better understanding of which software quality attributes should be considered and assured when engineering trustworthy Internet-based software systems.
Nazila Gol Mohammadi, Sachar Paulus, Mohamed Bishr, Andreas Metzger, Holger Könnecke, Sandro Hartenstein, Thorsten Weyer, Klaus Pohl
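The stakeholder-dependent interpretation of trustworthiness mentioned above can be sketched as a weighted aggregation over quality attributes. The attribute scores and weights below are hypothetical, not taken from the paper:

```python
# Sketch of a stakeholder-weighted trustworthiness metric: the same
# per-attribute quality scores aggregate differently depending on which
# stakeholder's weights are applied. All numbers are illustrative.

def trustworthiness(scores, weights):
    """Weighted average of per-attribute quality scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[attr] * w for attr, w in weights.items()) / total

scores = {"confidentiality": 0.9, "usability": 0.6, "reliability": 0.8}
org_weights  = {"confidentiality": 3, "usability": 1, "reliability": 2}  # organization view
user_weights = {"confidentiality": 1, "usability": 3, "reliability": 2}  # end-user view
```

With these numbers the organization view yields a higher score than the end-user view, mirroring the abstract's point that trustworthiness is relative to the evaluator.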

Using Ontologies to Analyze Compliance Requirements of Cloud-Based Processes

In recent years, the concept of cloud computing has seen significant growth. The spectrum of available services covers most, if not all, aspects needed in existing business processes, allowing companies to outsource large parts of their IT infrastructure to cloud service providers. While this prospect may offer considerable economic advantages, it is hindered by concerns regarding information security as well as compliance issues. Relevant regulations are imposed by several sources, such as legal regulations or standards for information security, to an extent that makes it difficult to identify the aspects relevant for a given company. To support the identification of relevant regulations, we developed an approach that represents regulations in the form of ontologies, which can then be used to examine a given system for compliance requirements. Additional tool support is offered to check system models for certain properties that have been found relevant.
Thorsten Humberg, Christian Wessel, Daniel Poggenpohl, Sven Wenzel, Thomas Ruhroth, Jan Jürjens
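The ontology-based matching of regulations to a system can be pictured as a naive triple store plus matcher. The regulation names and property labels below are illustrative stand-ins, not the paper's actual ontology:

```python
# Naive sketch of ontology-based compliance lookup: regulations are
# stored as subject-predicate-object triples, and a matcher returns
# those whose scope overlaps the system's properties. Names are
# illustrative only.

TRIPLES = [
    ("DataProtectionLaw", "appliesTo", "personal_data"),
    ("PCI-DSS", "appliesTo", "payment_data"),
    ("ISO27001-A.12", "appliesTo", "cloud_storage"),
]

def relevant_regulations(system_properties):
    """Return regulations whose 'appliesTo' object matches a system property."""
    return sorted({subj for subj, pred, obj in TRIPLES
                   if pred == "appliesTo" and obj in system_properties})
```

A real ontology would also model subsumption (e.g., a property implying a more general one), which is what makes the ontology representation more powerful than this flat lookup.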

Assessing Latency in Cloud Gaming

With the emergence of cloud computing, diverse types of Information Technology services are increasingly provisioned through large data centers via the Internet. A relatively novel service category is cloud gaming, where video games are executed in the cloud and delivered to a client as an audio/video stream. While cloud gaming substantially reduces the demand for computational power on the client side, thus enabling the use of thin clients, it may also affect the Quality of Service through the introduction of network latencies. In this work, we quantitatively examined this effect, using a self-developed measurement tool and a set of actual cloud gaming providers. For the two providers and three games in our experiment, we found absolute increases in latency between approximately 40 ms and 150 ms, or between 85 % and 800 % in relative terms, compared to a local game execution. In addition, based on a second complementary experiment, we found mean round-trip times ranging from about 30 ms to 380 ms using WLAN and approximately 40 ms to 1050 ms using UMTS between a local computer and globally distributed compute nodes. Between pairs of compute nodes, results were in the range from approximately 10 ms to 530 ms. This highlights the importance of data center placement for the provision of cloud gaming services with adequate Quality of Service properties.
Ulrich Lampe, Qiong Wu, Sheip Dargutev, Ronny Hans, André Miede, Ralf Steinmetz
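The kind of round-trip measurement underlying these numbers can be sketched as follows; the probe callable and sample count are assumptions, and the function names are ours, not the paper's tool:

```python
import statistics
import time

# Sketch of a round-trip latency probe of the kind used to compare cloud
# and local game execution. `send_and_wait` stands in for one complete
# input-to-response round trip (e.g., a network request or an
# input-to-frame cycle).

def measure_rtt_ms(send_and_wait, samples=10):
    """Return (mean, median) round-trip time in milliseconds."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        send_and_wait()
        rtts.append((time.perf_counter() - start) * 1000.0)
    return statistics.mean(rtts), statistics.median(rtts)

def relative_increase_pct(cloud_ms, local_ms):
    """Latency overhead of cloud execution relative to local execution."""
    return (cloud_ms - local_ms) / local_ms * 100.0
```

For example, a cloud latency of 150 ms against a local baseline of 75 ms corresponds to a 100 % relative increase, which sits inside the 85 %–800 % range the abstract reports.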

Multi-dimensional Model Driven Policy Generation

As Cloud Computing provides agile and scalable IT infrastructure, QoS-assured services and customizable computing environments, it increases the call for agile and dynamic deployment and governance environments over multi-cloud infrastructure. To date, governance and Non-Functional Properties (NFPs, such as security or QoS) are managed in a static way, limiting the global benefits of deploying service-based information systems over multi-cloud environments. To overcome this limit, we propose a contextualised policy generation process that allows both agile management of NFPs in a multi-cloud context and secure deployment of the service-based information system. The last step of this Model Driven Policy Engineering approach uses policies as Model@runtime to select, compose, deploy and orchestrate NFP management functions depending on the exact execution context. Moreover, a dynamic governance loop including autonomic KPI management is used to continuously control the governance results.
Juan Li, Wendpanga Francis Ouedraogo, Frédérique Biennier
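Contextualised policy generation, as opposed to static configuration, can be sketched as deriving enforcement settings from the deployment context at generation time. The context keys and policy fields below are hypothetical illustrations, not the paper's model:

```python
# Sketch of contextualised policy generation: NFP enforcement settings
# (security, QoS) are derived from the execution context instead of
# being fixed statically at design time. All keys and values are
# illustrative assumptions.

def generate_policy(context):
    """Derive a security/QoS policy from the deployment context."""
    policy = {"encryption": "none", "audit": False, "replicas": 1}
    if context.get("data_sensitivity") == "high":
        policy["encryption"] = "aes-256"   # sensitive data: enforce encryption
        policy["audit"] = True
    if context.get("deployment") == "public_cloud":
        policy["audit"] = True             # public clouds: always audit
    if context.get("qos") == "high_availability":
        policy["replicas"] = 3             # QoS requirement: replicate
    return policy
```

Re-running generation whenever the context changes is what gives the governance loop its dynamic character.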

An Approach for Monitoring Components Generation and Deployment for SCA Applications

Cloud Computing is an emerging paradigm involving different kinds of Information Technology (IT) services. One of the major advantages of this paradigm lies in its pay-as-you-go economic model. The paradigm has received increasing attention in recent years regarding aspects such as deployment, scalability and elasticity, while monitoring remains not well explored. Almost all existing monitoring solutions neither allow monitoring requirements to be described at a fine granularity nor take scalability issues into account. In this paper, we propose a model that describes monitoring requirements for Service Component Architecture (SCA) applications at different granularities. We propose an approach that transforms SCA components initially designed without monitoring facilities to render them monitorable. In our approach, we use a micro-container-based mechanism to deploy components in the Cloud; this mechanism ensures the scalability of SCA applications. Our solution adopts a late-instantiation policy to reduce resource consumption, in line with the economic model of the Cloud. Our experiments demonstrate the efficiency of the solution.
Mohamed Mohamed, Djamel Belaïd, Samir Tata
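The late-instantiation policy mentioned above can be sketched as a wrapper that defers component creation until first use, so idle components consume no resources. The class and names are our illustrative assumptions, not the paper's micro-container implementation:

```python
# Sketch of late instantiation: the wrapped component is only created on
# its first invocation, keeping resource consumption aligned with the
# pay-as-you-go model. Names are illustrative.

class LazyComponent:
    def __init__(self, factory):
        self._factory = factory
        self._instance = None

    @property
    def instantiated(self):
        return self._instance is not None

    def invoke(self, *args, **kwargs):
        if self._instance is None:          # first call: create the component
            self._instance = self._factory()
        return self._instance(*args, **kwargs)

created = []
component = LazyComponent(lambda: created.append("instantiated") or (lambda x: x * 2))
```

Until `invoke` is called, no instance exists; afterwards the single instance is reused for every call.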

Locations for Performance Ensuring Admission Control in Load Balanced Multi-tenant Systems

In the context of Software as a Service offerings, multi-tenant applications (MTAs) increase efficiency by sharing one application instance among several customers. Due to the tight coupling and sharing of resources on all layers up to the application layer, customers may influence each other with regard to the performance they observe. Existing research on performance isolation of MTAs focuses on methods and concrete algorithms. In this paper, we present concerns that arise when serving a large number of users in a load-balancing cluster with multiple MTA instances. We identify potential positions in such an architecture where performance isolation can be enforced based on request admission control. Considering various approaches for request-to-instance allocation, our discussion shows that different positions come with specific pros and cons that influence the ability to performance-isolate tenants.
Manuel Loesch, Rouven Krebs
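Request admission control as an isolation mechanism, whether placed at the load balancer or at an MTA instance, can be sketched as a per-tenant concurrency quota. The quota scheme and values are illustrative assumptions, not the paper's algorithm:

```python
# Sketch of per-tenant admission control: a tenant that exceeds its
# concurrent-request quota is rejected, so a noisy tenant cannot degrade
# the performance other tenants observe. Quota values are illustrative.

class TenantAdmissionControl:
    def __init__(self, quotas):
        self.quotas = quotas                  # tenant -> max concurrent requests
        self.active = {t: 0 for t in quotas}  # tenant -> requests in flight

    def admit(self, tenant):
        if self.active[tenant] >= self.quotas[tenant]:
            return False                      # over quota: reject the request
        self.active[tenant] += 1
        return True

    def release(self, tenant):
        self.active[tenant] -= 1              # request finished
```

Placing this check before request-to-instance allocation isolates tenants cluster-wide, while placing it inside each instance only isolates them per instance, which is the trade-off the paper discusses.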

A Study on Today’s Cloud Environments for HPC Applications

With the advance of information technology, such as smaller circuits and hardware with lower energy consumption, the power of HPC (High-Performance Computing) resources keeps increasing, with the goal of running complex large-scale applications. In traditional computing, an organization has to pay high costs to build an HPC platform, purchasing hardware and maintaining it afterwards. On-premises HPC resources may not satisfy the demand of scientific applications when large-scale computations request more computing resources than are available in-house. Especially for SMEs (small and medium-sized enterprises), a temporarily increasing computing demand is challenging. Cloud computing is an on-demand, pay-as-you-go model that provides enormous, almost unlimited and scalable computing power in an instantly available way. Developing HPC applications for the cloud is therefore a worthwhile topic. In this paper, we focus on developing an HPC application deployment model based on the Windows Azure cloud platform, and an MPI framework for executing the application in the cloud. In addition, we present a combined HPC mode using cloud and on-premises resources together. Experiments run on a Windows cluster and the Azure cloud are compared, and their performance is analyzed with respect to the differences between the two platforms. Moreover, we study application scenarios for different HPC modes using cloud resources.
Fan Ding, Dieter an Mey, Sandra Wienke, Ruisheng Zhang, Lian Li
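Comparing HPC runs across the two platforms rests on standard speedup and efficiency metrics; a minimal sketch (the timing values in the test are made up for illustration, not the paper's measurements):

```python
# Standard metrics for comparing HPC runs on a local cluster and in the
# cloud. Timing inputs are supplied by the caller; no real measurements
# are assumed here.

def speedup(t_baseline, t_parallel):
    """How many times faster the parallel (or cloud) run is than the baseline."""
    return t_baseline / t_parallel

def parallel_efficiency(t_baseline, t_parallel, workers):
    """Speedup normalised by worker count (1.0 = ideal linear scaling)."""
    return speedup(t_baseline, t_parallel) / workers
```

Efficiency below 1.0 captures overheads such as the network latencies that typically distinguish cloud nodes from a dedicated cluster interconnect.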

