
2017 | Book

Cloud Service Benchmarking

Measuring Quality of Cloud Services from a Client Perspective


About this Book

Cloud service benchmarking can provide important, sometimes surprising insights into the quality of services and can lead to a more quality-driven design and engineering of complex software architectures that use such services. Starting with a broad introduction to the field, this book guides readers step-by-step through the process of designing, implementing, and executing a cloud service benchmark, as well as understanding and dealing with its results. It covers all aspects of cloud service benchmarking, i.e., both benchmarking the cloud and benchmarking in the cloud, at a basic level.

The book is divided into five parts: Part I discusses what cloud benchmarking is, provides an overview of cloud services and their key properties, and describes the notion of a cloud system and cloud-service quality. It also addresses the benchmarking lifecycle and the motivations behind running benchmarks in particular phases of an application lifecycle. Part II then focuses on benchmark design by discussing key objectives (e.g., repeatability, fairness, or understandability), defining metrics and measurement methods, and giving advice on developing one's own measurement methods and metrics. Next, Part III explores benchmark execution and implementation challenges and objectives, as well as aspects like runtime monitoring and result collection. Subsequently, Part IV addresses benchmark results, covering topics such as an abstract process for turning data into insights, data preprocessing, and basic data analysis methods. Lastly, Part V concludes the book with a summary, suggestions for further reading, and pointers to benchmarking tools available on the Web.

The book is intended for researchers and graduate students of computer science and related subjects looking for an introduction to benchmarking cloud services, but also for industry practitioners who are interested in evaluating the quality of cloud services or who want to assess key qualities of their own implementations through cloud-based experiments.

Table of Contents

Frontmatter

Fundamentals

Frontmatter
Chapter 1. Introduction
Abstract
We start this book with an introduction to what benchmarking is and what it is not, both in general and specifically in the context of cloud services. For that purpose, we introduce our notion of cloud services and discuss how cloud service benchmarking can be used to gain insights into these services’ quality characteristics. We furthermore introduce the idea of using cloud services as a testbed for benchmarking. Finally, at the end of this chapter, we provide an overview of the remainder of this book.
David Bermbach, Erik Wittern, Stefan Tai
Chapter 2. Terms and Definitions
Abstract
After having broadly introduced and motivated cloud service benchmarking in the previous chapter, we now provide a more thorough discussion of related terms and concepts. We start with discussing what cloud services, cloud service qualities, and cloud service benchmarking are, before differentiating benchmarking from the related practice of monitoring. Finally, we provide an overview of the essential components of cloud service benchmarking tools.
David Bermbach, Erik Wittern, Stefan Tai
Chapter 3. Quality
Abstract
Building on the already introduced key concepts and terms of cloud service benchmarking, we now focus on the quality of cloud services, which benchmarking aims to describe. We start by defining what quality is, both generally and in the context of cloud services, and by giving examples of such qualities. We then describe how distinct qualities are never isolated but rather form complex dependency graphs through direct and indirect tradeoffs and, finally, how these system qualities relate to SLA concepts on different layers of a software system stack.
David Bermbach, Erik Wittern, Stefan Tai
Chapter 4. Motivations
Abstract
We have by now introduced the main concepts of cloud service benchmarking and discussed the concept of system quality which cloud service benchmarking aims to measure. In this chapter, we focus on the different motivations for cloud service benchmarking, including SLA management, continuous quality improvement, and organizational process proficiency. Depending on the motivation, benchmarking will be used in different phases of an application lifecycle or may even be entirely decoupled from a concrete application. We also discuss how the different motivations affect the various benchmarking phases.
David Bermbach, Erik Wittern, Stefan Tai

Benchmark Design

Frontmatter
Chapter 5. Design Objectives
Abstract
The first part of this book introduced cloud service benchmarking, including its motivations and a variety of related concepts. We now focus on how to design effective cloud service benchmarks. In this chapter, we introduce the traditional key objectives of benchmark design, e.g., reproducibility, fairness, or understandability, and discuss why they are important. We then describe how these objectives need to change in the context of cloud services and cloud-based systems. Finally, we also discuss how concrete benchmarks may have to compromise one objective to reach another one and describe how the use of cloud services, both as system under test and as experimentation testbed, can influence these objectives.
David Bermbach, Erik Wittern, Stefan Tai
Chapter 6. Quality Metrics and Measurement Methods
Abstract
Building on the already introduced main design objectives of cloud service benchmarks, we now discuss key properties of quality metrics, which are used to assign values to a quality of interest in cloud service benchmarking. We especially focus on how disregarding these properties may (negatively) affect benchmark results. After providing examples of existing quality metrics, we present development strategies for both quality metrics and measurement methods.
David Bermbach, Erik Wittern, Stefan Tai
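The chapter's point about metric properties can be illustrated with a small sketch (illustrative Python of mine, not code from the book; the function and sample values are hypothetical): aggregating the same latency measurements with different metrics can lead to very different conclusions about service quality.

```python
import statistics

def latency_metrics(latencies_ms):
    """Summarize request latencies (in milliseconds) with several
    commonly used quality metrics."""
    ordered = sorted(latencies_ms)
    p99_rank = min(len(ordered) - 1, int(0.99 * len(ordered)))
    return {
        "mean": statistics.mean(ordered),
        "median": statistics.median(ordered),
        "p99": ordered[p99_rank],  # nearest-rank tail latency
    }

# A single slow outlier barely moves the median but dominates the tail:
samples = [10.0] * 99 + [500.0]
metrics = latency_metrics(samples)
```

Here the median stays at 10 ms while the 99th percentile jumps to 500 ms, which is why tail percentiles are often preferred over averages for user-facing qualities.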
Chapter 7. Workloads
Abstract
In the previous chapters, we have seen how to design good quality metrics and measurement methods that adhere to the benchmark design objectives from chapter 5. When using these methods to obtain measurement data, we need to generate stress for the system under test; for most system qualities, this stress is referred to as a workload. In this chapter, we therefore give an overview of the basic principles behind workload design and generation strategies. Examples include open vs. closed workloads, synthetic vs. trace-based workloads, or application-driven vs. micro-benchmarks.
David Bermbach, Erik Wittern, Stefan Tai
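The open vs. closed distinction mentioned in the abstract can be sketched roughly as follows (my own illustration, not code from the book): a closed workload couples request submission to request completion, while an open workload generates arrivals independently of the system's responses.

```python
import random

def run_closed(request_fn, clients, requests_per_client):
    """Closed workload: each (here sequentially simulated) client only
    issues its next request after the previous one has completed."""
    results = []
    for _ in range(clients):
        for _ in range(requests_per_client):
            results.append(request_fn())
    return results

def open_arrivals(rate_per_s, duration_s, seed=7):
    """Open workload: request arrival times follow a Poisson process,
    i.e., exponential inter-arrival gaps independent of service times."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_s)
        if t >= duration_s:
            return times
        times.append(t)
```

With an open workload, a slow system under test still receives requests at the configured rate, which can expose overload behavior that a closed workload would hide.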

Benchmark Execution

Frontmatter
Chapter 8. Implementation Objectives and Challenges
Abstract
The previous part of the book addressed how to design a good cloud service benchmark. In this part, we shift our focus from the design to its implementation as part of a benchmarking tool and (later on) its runtime execution. In this chapter, we start by introducing implementation objectives for cloud service benchmarks. Even with a careful benchmark design that considers all design objectives, the actual benchmark implementation can still run afoul of the goals initially set. Next to outlining implementation objectives, we therefore provide concrete examples of how they can be achieved in practice.
David Bermbach, Erik Wittern, Stefan Tai
Chapter 9. Experiment Setup and Runtime
Abstract
The previous chapter described the objectives and challenges for implementing a cloud service benchmark that has already been designed. Now having an implementation at hand, it can be used to run actual experiments. In this chapter, we discuss how to deploy, set up, and run such experiments. For this purpose, we start by outlining the typical process underlying experiment setup and execution. Afterwards, we discuss an important precondition for running experiments, namely, ensuring that the required resources are readily available when needed. We then dive into addressing challenges that occur directly before, during, and after running an experiment, including challenges associated with collecting benchmarking data, data provenance, and storing data.
David Bermbach, Erik Wittern, Stefan Tai

Benchmark Results

Frontmatter
Chapter 10. Turning Data into Insights
Abstract
Part III of this book addressed the execution of cloud service benchmarks. As such, it was also concerned with the collection of resulting benchmarking data, which to some degree co-occurs with benchmark execution. We now shift our focus to what to do with the resulting data. In this chapter, we start by introducing the general process for gaining insights from benchmarking data through preprocessing and analysis. We differentiate two fundamental approaches of data analysis, which depend on the benchmark’s original motivation. We end this chapter by providing an overview and discussion of different types of data analysis tools.
David Bermbach, Erik Wittern, Stefan Tai
Chapter 11. Data Preprocessing
Abstract
The previous chapter introduced the general process for gaining insights from benchmarking data through data analysis and two approaches to analysis. However, any analysis efforts will be limited by the quality of the input data. Therefore, in this chapter, we introduce data preprocessing methods that enhance data quality for later analysis steps. We start by outlining the characteristics of cloud benchmarking data, which affect the selection of the presented preprocessing methods as well as the selection of analysis methods presented in the next chapter. We then introduce concrete preprocessing methods for data selection, dealing with missing values, resampling of data, and data transformation.
David Bermbach, Erik Wittern, Stefan Tai
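Two of the preprocessing steps named above, resampling and handling missing values, could be sketched as follows (an illustrative sketch, not the book's code; function names and sample data are mine):

```python
from collections import defaultdict

def resample_mean(samples, window_s=1.0):
    """Aggregate raw (timestamp_s, value) measurements into fixed-size
    time windows, keeping one mean value per window."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[int(ts // window_s)].append(value)
    return {w: sum(vals) / len(vals) for w, vals in buckets.items()}

def mark_gaps(series, windows):
    """Make windows without any measurement explicit (None) so that a
    later step can interpolate or discard them deliberately."""
    return {w: series.get(w) for w in windows}

raw = [(0.1, 10.0), (0.4, 20.0), (2.2, 30.0)]
per_second = resample_mean(raw)           # windows 0 and 2 have data
complete = mark_gaps(per_second, range(3))  # window 1 is a gap
```

Making gaps explicit instead of silently skipping them matters for cloud benchmarking data, where missing measurements can themselves indicate unavailability of the service under test.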
Chapter 12. Data Analysis
Abstract
In the previous chapters on using benchmarking results, we learned about the importance of preprocessing data for analysis, and introduced two fundamental analysis approaches. In both approaches, a plethora of concrete data analysis methods can be used. In this chapter, we provide an overview of select data analysis methods used to gain insights from benchmarking data and exemplify their application.
David Bermbach, Erik Wittern, Stefan Tai
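One simple comparative analysis method, checking a new benchmark run against a baseline run for quality regressions, could look like this (a hypothetical sketch, not taken from the chapter):

```python
import statistics

def detect_regression(baseline, current, threshold=0.10):
    """Flag a regression if the current run's median latency exceeds
    the baseline median by more than `threshold` (relative change)."""
    base = statistics.median(baseline)
    return (statistics.median(current) - base) / base > threshold
```

Using the median rather than the mean keeps the check robust against individual outlier measurements; a real analysis would additionally test whether an observed difference is statistically significant.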
Chapter 13. Using Insights on Cloud Service Quality
Abstract
In the previous chapter, we learned how to gain insights from raw cloud service benchmarking measurement results. For these insights to be of value, they still need to be used for some purpose. In this chapter, we discuss two ways of leveraging insights from cloud service benchmarking. First, we describe how these insights can be communicated to interested parties. Second, we describe through examples how these insights can drive actions, e.g., when deciding on cloud service selection and configuration, or ultimately how to design applications that can either compensate for or leverage particular quality characteristics of underlying services.
David Bermbach, Erik Wittern, Stefan Tai

Conclusions

Frontmatter
Chapter 14. Getting Started in Cloud Service Benchmarking
Abstract
This chapter plays a special role among the others in that it does not introduce new content. Instead, it provides pointers to tools, organizations, and web resources that the interested reader can use to get started with hands-on cloud service benchmarking. It is intended as a reference rather than as a sequential process step like the other chapters. Therefore, readers without a current benchmarking need may skip this chapter and proceed directly to the conclusion in chapter 15.
David Bermbach, Erik Wittern, Stefan Tai
Chapter 15. Conclusion
Abstract
In the four previous parts of this book, we have covered the entire lifecycle of cloud service benchmarking: from defining goals and designing a benchmark, to implementing and ultimately executing it, to analyzing the resulting data for actionable insights. In this chapter, we summarize the lessons learned and point to topics worth further exploration.
David Bermbach, Erik Wittern, Stefan Tai
Backmatter
Metadata
Title
Cloud Service Benchmarking
Written by
David Bermbach
Erik Wittern
Stefan Tai
Copyright Year
2017
Electronic ISBN
978-3-319-55483-9
Print ISBN
978-3-319-55482-2
DOI
https://doi.org/10.1007/978-3-319-55483-9
