
2017 | Book

Research Advances in Cloud Computing

Edited by: Dr. Sanjay Chaudhary, Dr. Gaurav Somani, Dr. Rajkumar Buyya

Publisher: Springer Singapore


About this Book

This book addresses the emerging area of cloud computing, providing a comprehensive overview of the research areas, recent work and open research problems. The move to cloud computing is no longer merely a topic of discussion; it has become a core competency that every modern business needs to embrace and excel at. It has changed the way enterprise and internet computing is viewed, and this success story is the result of the long-term efforts of the computing research community around the globe. It is predicted that by 2026 more than two-thirds of all enterprises across the globe will run entirely in the cloud. These predictions have led to substantial funding for research and development in cloud computing and related technologies.

Accordingly, universities across the globe have incorporated cloud computing and its related technologies in their curriculum, and information technology (IT) organizations are accelerating their skill-set evolution in order to be better prepared to manage emerging technologies and public expectations of the cloud, such as new services.

Table of Contents

Frontmatter
Serverless Computing: Current Trends and Open Problems
Abstract
Serverless computing has emerged as a new compelling paradigm for the deployment of applications and services. It represents an evolution of cloud programming models, abstractions, and platforms, and is a testament to the maturity and wide adoption of cloud technologies. In this chapter, we survey existing serverless platforms from industry, academia, and open-source projects, identify key characteristics and use cases, and describe technical challenges and open problems.
Ioana Baldini, Paul Castro, Kerry Chang, Perry Cheng, Stephen Fink, Vatche Ishakian, Nick Mitchell, Vinod Muthusamy, Rodric Rabbah, Aleksander Slominski, Philippe Suter
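To make the programming model concrete, the following is a minimal sketch of a serverless (Functions-as-a-Service) action in the style of the Apache OpenWhisk Python runtime, one of the open-source platforms this kind of survey covers. The function and parameter names are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch of a FaaS action in the style of Apache OpenWhisk's Python
# runtime: the platform invokes main() with a dict of parameters and expects
# a JSON-serializable dict back. Names here are illustrative assumptions.

def main(params):
    name = params.get("name", "world")
    # All state must come from the request or from external services;
    # the platform may create or destroy instances between invocations.
    return {"greeting": f"Hello, {name}!"}
```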
Highly Available Clouds: System Modeling, Evaluations, and Open Challenges
Abstract
Cloud-based solution adoption is becoming an indispensable strategy for enterprises, since it brings many advantages, such as low cost. On the other hand, to meet this demand, cloud providers face a great challenge in resource management: how to provide highly available services relying on finite computational resources and limited physical infrastructure? Understanding the components and operations of a cloud data center is key to managing resources optimally and to estimating how physical and logical failures impact users’ perception. This chapter explores computational modeling theories to represent a cloud infrastructure, focusing on how to estimate and model cloud availability.
Patricia Takako Endo, Glauco Estácio Gonçalves, Daniel Rosendo, Demis Gomes, Guto Leoni Santos, André Luis Cavalcanti Moreira, Judith Kelner, Djamel Sadok, Mozhgan Mahloo
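As a point of reference for availability modeling, the sketch below applies the standard steady-state availability formula A = MTTF / (MTTF + MTTR) and the usual series/parallel composition rules for independent components. It is a generic reliability-engineering illustration under assumed failure and repair times, not the chapter's model.

```python
# Illustrative sketch (not the authors' model): steady-state availability
# from MTTF/MTTR, plus series/parallel composition of independent components.
from functools import reduce

def availability(mttf_h: float, mttr_h: float) -> float:
    """A = MTTF / (MTTF + MTTR)."""
    return mttf_h / (mttf_h + mttr_h)

def series(avails):
    """All components must be up (e.g., switch -> server -> storage)."""
    return reduce(lambda acc, a: acc * a, avails, 1.0)

def parallel(avails):
    """At least one replica must be up (redundant components)."""
    return 1.0 - reduce(lambda acc, a: acc * (1.0 - a), avails, 1.0)

# Example with assumed figures: two redundant servers behind a single switch.
server = availability(mttf_h=8760, mttr_h=8)
switch = availability(mttf_h=17520, mttr_h=4)
print(series([switch, parallel([server, server])]))
```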
Big Data Analytics in Cloud—A Streaming Approach
Abstract
There is a significant interplay between Big Data and Cloud. Data Mining and Data Analytics are required for the interpretation of Big Data. The literature on Cloud computing generally discusses infrastructure and architecture, but very little discussion is found on the algorithms required for mining and analytics. This chapter focuses on online algorithms for learning and analytics that can be used for distributed and unstructured data over the Cloud. It also discusses their time complexity, presents the architecture required for deploying them over the Cloud, and concludes by presenting relevant open research directions.
Ratnik Gandhi
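To illustrate what "online" means in this streaming setting, here is a classic single-pass, constant-memory statistic (Welford's running mean and variance). It is a generic example of the algorithm class discussed, not a specific algorithm from the chapter.

```python
# Single-pass, O(1)-memory statistic of the kind used in streaming analytics
# (Welford's online mean/variance); a generic illustration only.
class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self) -> float:
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for value in (3.0, 7.5, 4.2, 9.1):   # stand-in for an unbounded stream
    stats.update(value)
print(stats.mean, stats.variance)
```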
A Terminology to Classify Artifacts for Cloud Infrastructure
Abstract
Cloud environments are widely used to offer scalable software services. To support these environments, organizations operating data centers must maintain an infrastructure with a significant amount of resources. Such resources are managed by specific software to ensure service level agreements based on one or more performance metrics. Within such infrastructure, approaches to meet non-functional requirements can be split into various artifacts, distributed across different operational layers, which operate together with the aim of reaching a specific target. Existing studies classify such approaches using different terms, which are often used with conflicting meanings. Therefore, a common nomenclature defining the different artifacts is necessary, so that they can be organized in a more scientific way. To this end, we propose a comprehensive bottom-up classification to identify and classify approaches for system artifacts at the infrastructure level, and organize the existing literature using the proposed classification.
Fábio Diniz Rossi, Rodrigo Neves Calheiros, César Augusto Fonticielha De Rose
Virtual Networking with Azure for Hybrid Cloud Computing in Aneka
Abstract
Hybrid cloud environments are a highly scalable and cost-effective option for enterprises that need to expand their on-premises infrastructure. In every hybrid cloud solution, the issue of inter-cloud network connectivity has to be overcome to allow communications, possibly secure, between resources scattered over multiple networks. Network virtualization provides the right method for addressing this issue. We present how Azure Virtual Private Network (VPN) services are used to establish an overlay network for hybrid clouds in our Aneka platform. First, we explain how the Aneka resource provisioning module is extended to support the Azure Resource Manager (ARM) application programming interfaces (APIs). Then, we walk through the process of establishing an Azure Point-to-Site VPN to provide connectivity between Aneka nodes in the hybrid cloud environment. Finally, we present a case study of a hybrid cloud in Aneka and experiment with it to demonstrate the functionality of the system.
Adel Nadjaran Toosi, Rajkumar Buyya
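As a rough sketch of what driving ARM for a Point-to-Site VPN looks like, the snippet below issues a PUT against the ARM REST endpoint for a virtual network gateway. It is heavily hedged: the api-version, resource names, and the truncated request body are assumptions; a real deployment also needs an IP configuration, a gateway SKU, and client root certificates, and would normally go through the official Azure SDK rather than raw REST calls.

```python
# Hedged sketch only: creating a VPN gateway with a Point-to-Site client
# address pool via the ARM REST API using `requests`. Resource names, the
# api-version, and the (truncated) body are assumptions for illustration.
import requests

SUBSCRIPTION = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
GATEWAY = "aneka-p2s-gateway"          # hypothetical gateway name
TOKEN = "<oauth2-bearer-token>"

url = (f"https://management.azure.com/subscriptions/{SUBSCRIPTION}"
       f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Network"
       f"/virtualNetworkGateways/{GATEWAY}?api-version=2016-09-01")

body = {
    "location": "southeastasia",
    "properties": {
        "gatewayType": "Vpn",
        "vpnType": "RouteBased",
        "vpnClientConfiguration": {
            # Address pool handed out to Point-to-Site (on-premises) clients.
            "vpnClientAddressPool": {"addressPrefixes": ["172.16.201.0/24"]},
        },
    },
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {TOKEN}"})
print(resp.status_code)
```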
Building Efficient HPC Cloud with SR-IOV-Enabled InfiniBand: The MVAPICH2 Approach
Abstract
Single Root I/O Virtualization (SR-IOV) technology has been steadily gaining momentum for high-speed interconnects such as InfiniBand. SR-IOV-enabled InfiniBand has been widely used in modern HPC clouds with virtual machines and containers. While SR-IOV can deliver near-native I/O performance, recent studies have shown that locality-aware communication schemes play an important role in achieving high I/O performance on SR-IOV-enabled InfiniBand clusters. To discuss how to build efficient HPC clouds, this chapter presents a novel approach using the MVAPICH2 library. We first propose locality-aware designs inside the MVAPICH2 library to achieve near-native performance on HPC clouds with virtual machines and containers. Then, we propose advanced designs with cloud resource managers such as OpenStack and Slurm to make it easier for users to deploy and run their applications with the MVAPICH2 library on HPC clouds. Performance evaluations with benchmarks and applications on an OpenStack-based HPC cloud (i.e., the NSF-supported Chameleon Cloud) show that MPI applications with our designs are able to achieve near bare-metal performance on HPC clouds with different virtual machine and container deployment scenarios. Compared to running default MPI applications on Amazon EC2, our design can deliver much better performance. The MVAPICH2 over HPC Cloud software package presented in this chapter is publicly available from http://mvapich.cse.ohio-state.edu.
Xiaoyi Lu, Jie Zhang, Dhabaleswar K. Panda
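For context, the kind of measurement such evaluations rely on is a point-to-point latency micro-benchmark. The sketch below is a generic MPI ping-pong test using mpi4py; it illustrates the workload one would run over an MVAPICH2 installation, not the locality-aware designs described in the chapter.

```python
# Generic MPI ping-pong micro-benchmark (mpi4py), of the kind used to compare
# virtualized vs. bare-metal latency. Run with e.g.:
#   mpirun -np 2 python pingpong.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
msg = bytearray(8)          # 8-byte message
reps = 10000

comm.Barrier()
start = time.perf_counter()
for _ in range(reps):
    if rank == 0:
        comm.Send(msg, dest=1)
        comm.Recv(msg, source=1)
    elif rank == 1:
        comm.Recv(msg, source=0)
        comm.Send(msg, dest=0)
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"avg one-way latency: {elapsed / (2 * reps) * 1e6:.2f} us")
```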
Resource Procurement, Allocation, Metering, and Pricing in Cloud Computing
Abstract
Cloud computing is not only a popular paradigm for services offered over the Internet, but has also captured the interest of both academia and industry.
Akshay Narayan, Parvathy S. Pillai, Abhinandan S. Prasad, Shrisha Rao
Dynamic Selection of Virtual Machines for Application Servers in Cloud Environments
Abstract
Autoscaling is a hallmark of cloud computing as it allows flexible just-in-time allocation and release of computational resources in response to dynamic and often unpredictable workloads. This is especially important for web applications, whose workload is time dependent and prone to flash crowds. Most of them follow the 3-tier architectural pattern, and are divided into presentation, application/domain and data layers. In this work, we focus on the application layer. Reactive autoscaling policies of the type “Instantiate a new Virtual Machine (VM) when the average server CPU utilisation reaches X%” have been used successfully since the dawn of cloud computing. However, which VM type is most suitable for the specific application at a given moment remains an open question. In this work, we propose an approach for dynamic VM type selection. It uses a combination of online machine learning techniques, works in real time, and adapts to changes in the users’ workload patterns, application changes, and middleware upgrades and reconfigurations. We have developed a prototype, which we tested with the CloudStone benchmark deployed on AWS EC2. Results show that our method quickly adapts to workload changes and reduces the total cost compared to the industry standard approach.
Nikolay Grozev, Rajkumar Buyya
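The snippet below spells out the reactive baseline quoted in the abstract, paired with a naive cost-based choice of VM type. The VM catalogue, prices, and the 70% threshold are assumptions; the chapter's actual contribution replaces this kind of static choice with online machine learning.

```python
# Sketch of the industry-standard baseline: scale out when average CPU
# utilisation reaches X%, and pick the cheapest VM type that fits an
# (assumed) vCPU requirement. Catalogue and threshold are hypothetical.
VM_TYPES = {              # vCPUs, $/hour (illustrative figures)
    "m4.large":  {"vcpus": 2, "price": 0.10},
    "m4.xlarge": {"vcpus": 4, "price": 0.20},
    "c4.xlarge": {"vcpus": 4, "price": 0.199},
}
SCALE_OUT_THRESHOLD = 0.70   # X% = 70%

def should_scale_out(cpu_utilisations):
    avg = sum(cpu_utilisations) / len(cpu_utilisations)
    return avg >= SCALE_OUT_THRESHOLD

def pick_vm_type(required_vcpus):
    feasible = {t: s for t, s in VM_TYPES.items() if s["vcpus"] >= required_vcpus}
    return min(feasible, key=lambda t: feasible[t]["price"])

if should_scale_out([0.81, 0.76, 0.69]):
    print("launch", pick_vm_type(required_vcpus=4))
```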
Improving the Energy Efficiency in Cloud Computing Data Centres Through Resource Allocation Techniques
Abstract
The growth of power consumption in Cloud Computing systems is one of the current concerns of systems designers. In recent years, several studies have been carried out to find new techniques to decrease cloud power consumption. These techniques range from decisions on locations for data centres to techniques that enable efficient resource management. Resource Allocation, as a process of Resource Management, assigns available resources throughout the data centre in an efficient manner, minimizing power consumption and maximizing system performance. The contribution presented in this chapter is an overview of the Resource Management and Resource Allocation techniques that contribute to the reduction of energy consumption without compromising cloud user and provider constraints. We present key concepts regarding energy consumption optimization in cloud data centres. Moreover, two practical cases are presented to illustrate the theoretical concepts of Resource Allocation. Finally, we discuss the open challenges that Resource Management must face in the coming years.
Belén Bermejo, Sonja Filiposka, Carlos Juiz, Beatriz Gómez, Carlos Guerrero
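As background for the energy discussion, the sketch below uses the linear server power model that is common in energy-aware resource allocation studies (P(u) = P_idle + (P_max - P_idle)·u). The wattage figures are assumptions, and this is a generic illustration rather than a model proposed in the chapter.

```python
# Common linear power model used in energy-aware allocation studies
# (illustrative wattages, not the chapter's model).
def power_watts(utilisation: float, p_idle: float = 110.0, p_max: float = 250.0) -> float:
    return p_idle + (p_max - p_idle) * utilisation

def energy_kwh(utilisation: float, hours: float) -> float:
    return power_watts(utilisation) * hours / 1000.0

# Consolidating two half-loaded hosts onto one and switching the freed
# host off saves roughly its idle power:
print(2 * energy_kwh(0.5, 24), "kWh/day on two half-loaded hosts")
print(energy_kwh(1.0, 24), "kWh/day on one consolidated host")
```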
Recent Developments in Resource Management in Cloud Computing and Large Computing Clusters
Abstract
Cloud computing and large computing clusters consist of a large number of computing resources of different types, ranging from storage, CPU, memory, and I/O to network bandwidth. Cloud computing exposes resources as a single access point to end users through the use of virtualization technologies. A major issue in cloud computing is how to properly allocate cloud resources to different users or frameworks accessing the cloud. There are many complex, diverse, and heterogeneous workloads that need to coexist in the cloud and in large-scale compute clusters, hence the need for efficient means of assigning resources to the different users or workloads. Millions of jobs need to be scheduled in a small amount of time, so there is a need for a resource management and scheduling mechanism that can minimize latency and maximize efficiency. Cloud resource management involves allocating computing, processing, storage, and networking resources to cloud users in such a way that their demands and performance objectives are met. Cloud providers need to ensure efficient and effective resource provisioning while being constrained by Service Level Agreements (SLAs). This chapter discusses the differences and similarities between resource management in cloud computing and cluster computing, and provides detailed information about different types of scheduling approaches and open research issues.
Richard Olaniyan, Muthucumaru Maheswaran
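To make the scheduling problem concrete, here is a deliberately simple capacity-aware scheduler that places each incoming job on the node with the most free CPU. It is an assumed toy example of the decision such mechanisms make at scale, not a scheduler described in the chapter.

```python
# Toy cluster scheduler (assumed example): assign each job to the node with
# the most free CPU that can still satisfy the request.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    cpu_total: float
    cpu_used: float = 0.0

    @property
    def cpu_free(self) -> float:
        return self.cpu_total - self.cpu_used

def schedule(job_cpu: float, nodes: list) -> str:
    candidates = [n for n in nodes if n.cpu_free >= job_cpu]
    if not candidates:
        raise RuntimeError("no node can satisfy the request; queue or scale out")
    best = max(candidates, key=lambda n: n.cpu_free)
    best.cpu_used += job_cpu
    return best.name

cluster = [Node("n1", 16), Node("n2", 32), Node("n3", 8)]
for job in (4, 12, 10, 6):
    print(f"job needing {job} cores ->", schedule(job, cluster))
```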
Resource Allocation for Cloud Infrastructures: Taxonomies and Research Challenges
Abstract
Cloud computing datacenters dynamically provide millions of virtual machines in real-world cloud computing environments. A large number of research challenges have to be addressed toward efficient resource management of these cloud computing infrastructures. In the resource allocation field, Virtual Machine Placement (VMP) is one of the most studied problems, with several possible formulations and a large number of existing optimization criteria, considering solutions with high economical and ecological impact. Based on systematic reviews of the VMP literature, a taxonomy of VMP problem environments is presented to understand the different possible environments in which a VMP problem could be considered, from both provider and broker perspectives in different deployment architectures. Additionally, another taxonomy for VMP problems is presented to identify existing approaches for the formulation and resolution of the VMP as an optimization problem. Finally, a detailed view of the VMP problem is presented, identifying research opportunities to further advance cloud computing resource allocation.
Benjamín Barán, Fabio López-Pires
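As a reference point for the placement formulations surveyed here, the sketch below solves a single-resource VMP instance with the classic first-fit-decreasing bin-packing heuristic. The heuristic is a textbook baseline, included purely for illustration, not a contribution of the chapter.

```python
# First-fit-decreasing heuristic for a single-resource VMP instance
# (classic bin-packing baseline; illustrative only).
def place(vms, host_capacity):
    """vms: {vm_name: cpu_demand}; returns {host_index: [vm_names]}."""
    hosts = []                                   # remaining capacity per host
    placement = {}
    for vm, demand in sorted(vms.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if free >= demand:                   # first host that still fits
                hosts[i] -= demand
                placement.setdefault(i, []).append(vm)
                break
        else:                                    # otherwise open a new host
            hosts.append(host_capacity - demand)
            placement.setdefault(len(hosts) - 1, []).append(vm)
    return placement

print(place({"vm1": 8, "vm2": 4, "vm3": 4, "vm4": 2}, host_capacity=10))
```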
Many-Objective Optimization for Virtual Machine Placement in Cloud Computing
Abstract
Resource allocation in cloud computing datacenters presents several research challenges, where Virtual Machine Placement (VMP) is one of the most studied problems, with several possible formulations considering a large number of existing optimization criteria. This chapter presents the main contributions of the first studies of Many-Objective VMP (MaVMP) problems for cloud computing environments. In this context, two variants of MaVMP problems were formulated and different algorithms were designed to effectively address existing research challenges associated with the resolution of Many-Objective Optimization Problems (MaOPs). Experimental results demonstrated the correctness of the presented algorithms, their effectiveness in solving particular associated challenges, and their capability to solve problem instances with large numbers of physical and virtual machines for: (1) MaVMP for initial placement of VMs (static) and (2) MaVMP with reconfiguration of VMs (semi-dynamic). Finally, open research problems for the formulation and resolution of MaVMP problems for cloud computing (dynamic) are discussed.
Fabio López-Pires, Benjamín Barán
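The basic building block behind many-objective optimization is the Pareto dominance relation. The sketch below shows the standard dominance test and a brute-force Pareto front extraction; the objective vectors are made-up placeholders (e.g., energy, SLA violations, traffic), and this is not one of the chapter's MaVMP algorithms.

```python
# Pareto dominance test and brute-force Pareto front (all objectives
# assumed to be minimized); a generic illustration only.
def dominates(a, b):
    """True if a is no worse than b in every objective and strictly
    better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

candidates = [(120.0, 3, 0.9), (150.0, 1, 0.7), (120.0, 3, 1.1), (200.0, 0, 0.5)]
print(pareto_front(candidates))
```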
Performance Modeling and Optimization of Live Migration of Virtual Machines in Cloud Infrastructure
Abstract
The cloud infrastructure is a base layer that supports various types of computational and storage requirements of users through Internet-based service provisioning. Virtualization enables cloud computing to compute different workloads using cloud service models. The performance of each cloud model depends on how effectively workloads are managed to give optimal performance. Workload management is achieved by migrating virtual machines using the pre-copy algorithm. In this chapter, we have improved the pre-copy algorithm for virtual machine migration to calculate the optimal total migration time and downtime using three proposed models: (i) a compression model, (ii) a prediction model, and (iii) a performance model. The performance evaluation of different techniques using these three models is discussed in detail. Finally, we present open research problems in the field of resource utilization in cloud computing.
Minal Patel, Sanjay Chaudhary, Sanjay Garg
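For readers unfamiliar with pre-copy migration, the sketch below models the textbook iterative scheme: each round re-sends the pages dirtied during the previous round until the remainder is small enough for a final stop-and-copy. It is a simplified generic model with assumed parameters, not the chapter's improved algorithm.

```python
# Textbook pre-copy model (simplified sketch): estimate total migration time
# and downtime from memory size, page dirty rate, and network bandwidth.
def precopy(mem_mb, dirty_rate_mbps, bw_mbps, stop_copy_mb=50, max_rounds=30):
    to_send = mem_mb
    total_time = 0.0
    for _ in range(max_rounds):
        round_time = to_send / bw_mbps          # time to send this round
        total_time += round_time
        dirtied = dirty_rate_mbps * round_time  # pages dirtied meanwhile
        if dirtied <= stop_copy_mb or dirty_rate_mbps >= bw_mbps:
            downtime = dirtied / bw_mbps        # final stop-and-copy round
            return total_time + downtime, downtime
        to_send = dirtied
    return total_time, to_send / bw_mbps        # forced stop after max_rounds

total, down = precopy(mem_mb=4096, dirty_rate_mbps=100, bw_mbps=1000)
print(f"total migration time ~{total:.1f}s, downtime ~{down:.3f}s")
```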
Analysis of Security in Modern Container Platforms
Abstract
Containers have quickly become a popular alternative to more traditional virtualization methods such as hypervisor-based virtualization. Residing at the operating system level, containers offer a solution that is cheap in terms of resource usage and flexible in the way it can be applied. The purpose of this chapter is two-fold: first, we provide a brief overview of available container security solutions and how they operate, and second, we try to further elaborate and assess the security requirements for containers as proposed by Reshetova et al. We take a look at the current and past security threats and Common Vulnerabilities and Exposures (CVE) faced by container systems and see how attacks that exploit them violate the aforementioned requirements. Based on our analysis, we contribute by identifying further security requirements for container systems.
Samuel Laurén, M. Reza Memarian, Mauro Conti, Ville Leppänen
Identifying Evidence for Cloud Forensic Analysis
Abstract
Cloud computing provides increased flexibility, scalability, failure tolerance and reduced cost to customers. However, like any computing infrastructure, cloud systems are subjected to cyber-attacks. Post-attack investigations of such attacks present unusual challenges including the dependence of forensically valuable data on the deployment model, multiple virtual machines running on a single physical machine and multi-tenancy of clients. In this chapter, we use our own attack samples to show that, in the attacked cloud, evidence from three different sources can be used to reconstruct attack scenarios. They are (1) IDS and application software logging, (2) cloud service API calls and (3) system calls from VMs. Based on our example attack results, we present the potential design and implementation of a forensic analysis framework for clouds, which includes logging all the activities from both the application layer and lower layers. We show how a Prolog based forensic analysis tool can automate the process of correlating evidence from both the clients and the cloud service provider to reconstruct attack scenarios for cloud forensic analysis.
Changwei Liu, Anoop Singhal, Duminda Wijesekera
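To give a flavour of what correlating the three evidence sources involves, the sketch below groups cloud API calls and VM system calls that occur shortly after each IDS alert. The chapter's own tool is Prolog-based; this Python fragment, its field names, and the five-minute window are assumptions for illustration only.

```python
# Hedged illustration of correlating evidence from IDS/application logs,
# cloud API calls, and VM system calls by time proximity.
from datetime import datetime, timedelta

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M:%S")

def correlate(ids_alerts, api_calls, syscalls, window=timedelta(minutes=5)):
    """Group API calls and syscalls occurring shortly after each IDS alert."""
    timeline = []
    for alert in ids_alerts:
        t0 = parse(alert["time"])
        related = [e for e in api_calls + syscalls
                   if t0 <= parse(e["time"]) <= t0 + window]
        timeline.append({"alert": alert["signature"], "related": related})
    return timeline

ids = [{"time": "2017-03-01 10:02:11", "signature": "SQL injection attempt"}]
api = [{"time": "2017-03-01 10:03:40", "event": "CreateSnapshot"}]
sys = [{"time": "2017-03-01 10:04:02", "event": "execve /bin/sh"}]
print(correlate(ids, api, sys))
```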
An Access Control Framework for Secure and Interoperable Cloud Computing Applied to the Healthcare Domain
Abstract
The healthcare domain is an emergent application for cloud computing, in which the Meaningful Use Stage 3 guidelines recommend health information technology (HIT) systems to provide cloud services that enable health-related data owners to access, modify, and exchange data. This requires mobile and desktop applications for patients and medical providers to obtain healthcare data from multiple HITs, which may be operating with different paradigms (e.g., cloud services, programming services, web services), use different cloud service providers, and employ different security/access control techniques. To address these issues, this chapter introduces and discusses an Access Control Framework for Secure and Interoperable Cloud Computing (FSICC) that provides a mechanism for multiple HITs to register cloud, programming, and web services and security requirements for use by applications. FSICC supports a global security policy and enforcement mechanism for cloud services with role-based (RBAC), discretionary (DAC), and mandatory (MAC) access controls. The Fast Healthcare Interoperability Resources (FHIR) standard models healthcare data using a set of 93 resources to track a patient’s clinical findings, problems, etc. For each resource, an FHIR Application Program Interface (API) is defined to share data in a common format for each HIT that can be accessed by mobile applications. Thus, there is a need to support a heterogeneous set of information sources and differing security protocols (i.e., RBAC, DAC, and MAC). To demonstrate the realization of FSICC, we apply the framework to the integration of the Connecticut Concussion Tracker (CT²) mHealth application with the OpenEMR electronic medical record utilizing FHIR.
Mohammed S. Baihan, Steven A. Demurjian
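To illustrate the RBAC side of such a global policy, here is a minimal role-to-permission check over FHIR resource types. The roles, permissions, and operations are hypothetical examples, not the policies defined in the chapter.

```python
# Minimal RBAC check over FHIR resource types (hypothetical roles and
# permissions, for illustration of the enforcement idea only).
ROLE_PERMISSIONS = {
    "patient":  {("Patient", "read"), ("Observation", "read")},
    "provider": {("Patient", "read"), ("Patient", "write"),
                 ("Observation", "read"), ("Observation", "write"),
                 ("Condition", "read"), ("Condition", "write")},
}

def is_authorized(role: str, fhir_resource: str, operation: str) -> bool:
    return (fhir_resource, operation) in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("patient", "Observation", "read"))    # True
print(is_authorized("patient", "Condition", "write"))     # False
```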
Security and Privacy Issues in Outsourced Personal Health Record
Abstract
E-health effectively uses information and communications technology to support health-related services for its users.
Naveen Kumar, Anish Mathuria
Applications of Trusted Computing in Cloud Context
Abstract
Trusted computing is a technology that enables computer systems to behave in a given expected way. This goal is achieved by arming an isolated piece of hardware with embedded processing and cryptographic capabilities, such as encryption keys that are kept safe from software-layer attacks. This module is accessible to the rest of the computer system via a well-defined and tested application programming interface. Trusted computing protects the system against external attackers and even against the owner of the system. Cloud computing enables users to access vast amounts of computational resources remotely, in a seamless and ubiquitous manner. However, in some cloud deployment models, such as public cloud computing, users have very little control over how their own data is remotely handled and are not able to assure that their data is securely processed and stored. Cloud administrators and other parties can be considered threats in such cases. Given the ground that cloud computing has been gaining and the rate at which data is generated, transmitted, processed, and stored remotely, it is vital to protect it using means that address the ubiquitous nature of the cloud, including trusted computing. This chapter investigates applications of trusted computing in cloud computing areas where security threats exist, namely in live virtual machine migration.
Mohammad Reza Memarian, Diogo Fernandes, Pedro Inácio, Ville Leppänen, Mauro Conti
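As a simplified illustration of how attestation could gate live migration, the sketch below compares a destination host's reported software measurement against a set of known-good values before allowing the migration. The measurement values and the decision logic are assumptions for illustration, not the mechanism described in the chapter.

```python
# Simplified attestation-style check (assumed illustration): refuse to
# live-migrate a VM unless the destination host's reported measurement
# matches a known-good value.
import hashlib

KNOWN_GOOD = {
    # measurement of an approved hypervisor/boot stack (hypothetical value)
    hashlib.sha256(b"approved-hypervisor-build-4.2").hexdigest(),
}

def destination_trusted(reported_measurement_hex: str) -> bool:
    return reported_measurement_hex in KNOWN_GOOD

quote = hashlib.sha256(b"approved-hypervisor-build-4.2").hexdigest()
print("migrate" if destination_trusted(quote) else "refuse migration")
```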
Metadata
Title
Research Advances in Cloud Computing
Edited by
Dr. Sanjay Chaudhary
Dr. Gaurav Somani
Dr. Rajkumar Buyya
Copyright Year
2017
Publisher
Springer Singapore
Electronic ISBN
978-981-10-5026-8
Print ISBN
978-981-10-5025-1
DOI
https://doi.org/10.1007/978-981-10-5026-8