
2011 | Book

Performance Evaluation, Measurement and Characterization of Complex Systems

Second TPC Technology Conference, TPCTC 2010, Singapore, September 13-17, 2010. Revised Selected Papers

Edited by: Raghunath Nambiar, Meikel Poess

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the Second Technology Conference on Performance Evaluation and Benchmarking, TPCTC 2010, held in conjunction with the 36th International Conference on Very Large Data Bases, VLDB 2010, in Singapore, September 13-17, 2010. The 14 full papers and two keynote papers were carefully selected and reviewed from numerous submissions. The book considers issues such as appliances, business intelligence, cloud computing, complex event processing, database optimizations, data compression, energy and space efficiency, green computing, hardware innovations, high-speed data generation, hybrid workloads, very large memory systems, and virtualization.

Table of Contents

Frontmatter
Transaction Processing Performance Council (TPC): State of the Council 2010
Abstract
The Transaction Processing Performance Council (TPC) is a non-profit corporation founded to define transaction processing and database benchmarks and to disseminate objective, verifiable performance data to the industry. Established in August 1988, the TPC has been integral in shaping the landscape of modern transaction processing and database benchmarks over the past twenty-two years. This paper provides an overview of the TPC’s existing benchmark standards and specifications, introduces two new TPC benchmarks under development, and examines the TPC’s active involvement in the early creation of additional future benchmarks.
Raghunath Nambiar, Nicholas Wakou, Forrest Carman, Michael Majdalany
Liquid Benchmarks: Towards an Online Platform for Collaborative Assessment of Computer Science Research Results
Abstract
Experimental evaluation and comparison of techniques, algorithms, approaches or complete systems is a crucial requirement for assessing the practical impact of research results. The quality of published experimental results is usually limited for several reasons, such as limited time, the unavailability of standard benchmarks, or a shortage of computing resources. Moreover, achieving an independent, consistent, complete and insightful assessment of the different alternatives in a domain is a time- and resource-consuming task, and it must be repeated periodically to stay fresh and up-to-date. In this paper, we coin the notion of Liquid Benchmarks: online, public services that provide collaborative platforms to unify the efforts of peer researchers from all over the world, simplify the task of performing high-quality experimental evaluations, and guarantee a transparent scientific crediting process.
Sherif Sakr, Fabio Casati
A Discussion on the Design of Graph Database Benchmarks
Abstract
Graph Database Management Systems (GDBs) are gaining popularity. They are used to analyze huge graph datasets that naturally appear in many application areas to model interrelated data. The objective of this paper is to raise a new topic of discussion in the benchmarking community and to give practitioners a set of basic guidelines for GDB benchmarking. We strongly believe that GDBs will become an important player in the data analysis market, and with that, their performance and capabilities will also become important. For this reason, we discuss the aspects that are important from our perspective: the characteristics of the graphs to be included in the benchmark, the characteristics of the queries that are important in graph analysis applications, and the evaluation workbench.
David Dominguez-Sal, Norbert Martinez-Bazan, Victor Muntes-Mulero, Pere Baleta, Josep Lluis Larriba-Pey
A Data Generator for Cloud-Scale Benchmarking
Abstract
In many fields of research and business, data sizes are breaking the petabyte barrier. This imposes new problems and research possibilities for the database community. Usually, data of this size is stored in large clusters or clouds. Although clouds have become very popular in recent years, there is still little work on benchmarking cloud applications. In this paper we present a data generator for cloud-sized applications. Its architecture makes the data generator easy to extend and to configure. A key feature is the high degree of parallelism that allows linear scaling for arbitrary numbers of nodes. We show how distributions, relationships and dependencies in data can be computed in parallel with linear speedup.
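One common way to achieve the linear scaling described above is to make generation fully deterministic and seed-based, so that each node can produce its share of the data without coordinating with the others. The sketch below illustrates that general technique under invented assumptions (table name, column distributions, round-robin partitioning); it is not the authors' generator.

    import random

    def generate_partition(table, total_rows, node_id, num_nodes, base_seed=42):
        """Generate this node's share of `table` deterministically.

        Each row's pseudo-random stream is seeded from (table, row_id) only,
        so any node can (re)generate any row without coordination."""
        rows = []
        for row_id in range(node_id, total_rows, num_nodes):      # round-robin partitioning
            rng = random.Random(f"{base_seed}:{table}:{row_id}")  # string seeds are stable across processes
            customer_id = rng.randint(1, 1_000_000)               # hypothetical foreign key
            amount = round(rng.lognormvariate(3.0, 1.0), 2)       # skewed monetary value
            rows.append((row_id, customer_id, amount))
        return rows

    # Node 2 of 8 generates its share of a 10,000-row "orders" table independently.
    part = generate_partition("orders", 10_000, node_id=2, num_nodes=8)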
Tilmann Rabl, Michael Frank, Hatem Mousselly Sergieh, Harald Kosch
How to Advance TPC Benchmarks with Dependability Aspects
Abstract
Transactional systems are at the core of the information systems of most organizations. Although it is generally acknowledged that failures in these systems often have a significant impact on both the revenue and the reputation of companies, the benchmarks developed and managed by the Transaction Processing Performance Council (TPC) still focus on reporting bare performance. Each TPC benchmark has to pass a list of dependability-related tests (to verify ACID properties), but not all benchmarks require the performance of these mechanisms to be measured. While TPC-E measures the recovery time after some system failures, TPC-H and TPC-C only require functional correctness of such recovery. Consequently, systems used in TPC benchmarks are tuned mostly for performance. In this paper we argue that today's systems should also be tuned for a more comprehensive suite of dependability tests, and that a dependability metric should be part of TPC benchmark publications. The paper discusses why and how this can be achieved. Two approaches are introduced and discussed: augmenting each TPC benchmark in a customized way, by extending each specification individually; and pursuing a more unified approach, defining a generic specification that could be adjoined to any TPC benchmark.
Raquel Almeida, Meikel Poess, Raghunath Nambiar, Indira Patil, Marco Vieira
Price and the TPC
Abstract
The value of a benchmark metric is directly related to how relevant it is to the consumer.
The inclusion of a price/performance metric and an availability date metric in TPC benchmarks has provided added value to consumers since the first TPC benchmark was introduced in 1989. However, over time the relative value of these metrics has diminished – both because the base price of hardware and software comprises a smaller fraction of the total cost of ownership (TCO) and because TPC pricing requirements have not kept pace with changes in the industry. This paper aims to
  • Highlight the strengths provided by including price/performance and availability metrics in benchmarks.
  • Identify areas where the relative value of these metrics has diminished over time.
  • Propose enhancements that could restore their high value to the consumer.
Many of the ideas in this paper are the result of nearly a decade of discussions with many benchmark experts, and it would be difficult to identify an originator for each specific suggestion. However, this is almost certainly the first place where this collection of ideas has been presented comprehensively.
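As a back-of-the-envelope illustration of the metrics discussed above, the sketch below divides a hypothetical system price by a hypothetical performance result, and then shows how a TCO-style variant might additionally fold in recurring costs. All figures are invented, and the TCO variant is only a sketch in the spirit of the paper's argument, not one of its concrete proposals.

    # Hypothetical figures for illustration only; actual TPC pricing rules are far more detailed.
    system_price_usd = 850_000        # hardware + software + 3-year maintenance
    performance_tpmC = 1_200_000      # reported throughput in transactions per minute

    price_performance = system_price_usd / performance_tpmC
    print(f"price/performance: ${price_performance:.2f} per tpmC")

    # A TCO-flavored variant might also fold in recurring operating costs,
    # e.g. energy over the priced lifetime of the system (again, invented figures).
    monthly_energy_usd = 5_000
    tco_usd = system_price_usd + monthly_energy_usd * 36
    print(f"TCO/performance:   ${tco_usd / performance_tpmC:.2f} per tpmC")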
Karl Huppler
Impact of Recent Hardware and Software Trends on High Performance Transaction Processing and Analytics
Abstract
In this paper, I briefly survey some of the recent and emerging trends in hardware and software features that impact high performance transaction processing and data analytics applications. These features include multicore processor chips, ultra large main memories, flash storage, storage class memories, database appliances, field programmable gate arrays, transactional memory, key-value stores, and cloud computing. While some applications, e.g., Web 2.0 ones, were initially built without traditional transaction processing functionality in mind, system architects and designers are slowly beginning to address such previously ignored issues. The availability, analytics and response time requirements of these applications were initially given more importance than ACID transaction semantics and resource consumption characteristics. A project at IBM Almaden is studying the implications of phase change memory for transaction processing, in the context of a key-value store. Bitemporal data management has also become an important requirement, especially for financial applications. Power consumption and heat dissipation properties are also major considerations in the emergence of modern software and hardware architectural features. Considerations relating to ease of configuration, installation, maintenance and monitoring, and to improvement of total cost of ownership, have made database appliances very popular. The MapReduce paradigm is now quite popular for large scale data analysis, in spite of the major inefficiencies associated with it.
C. Mohan
EXRT: Towards a Simple Benchmark for XML Readiness Testing
Abstract
As we approach the ten-year anniversary of the first working draft of the XQuery language, one finds XML storage and query support in a number of commercial database systems. For many XML use cases, database vendors now recommend storing and indexing XML natively and using XQuery or SQL/XML to query and update XML directly. If the complexity of the XML data allows, shredding and reconstructing XML to/from relational tables is still an alternative as well, and might in fact outperform native XML processing. In this paper we report on an effort to evaluate these basic XML data management trade-offs for current commercial systems. We describe EXRT (Experimental XML Readiness Test), a simple micro-benchmark that methodically evaluates the impact of query characteristics on the comparison of shredded and native XML. We describe our experiences and preliminary results from EXRT’ing pressure on the XML data management facilities offered by two relational databases and one XML database system.
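To make the shredding trade-off concrete, here is a small, hypothetical example (not taken from EXRT) that flattens an XML order document into relational-style rows and then reconstructs it; a native XML store would instead keep the document intact and answer queries over it directly with XQuery or SQL/XML.

    import xml.etree.ElementTree as ET

    doc = """<order id="7">
               <customer>ACME</customer>
               <item sku="A1" qty="2" price="9.50"/>
               <item sku="B4" qty="1" price="3.25"/>
             </order>"""

    # "Shredding": flatten the document into rows for ORDERS and ORDER_ITEMS tables.
    root = ET.fromstring(doc)
    order_row = (int(root.get("id")), root.findtext("customer"))
    item_rows = [(int(root.get("id")), i.get("sku"), int(i.get("qty")), float(i.get("price")))
                 for i in root.findall("item")]

    # Reconstruction walks the rows back into XML, which is the cost EXRT-style
    # micro-benchmarks weigh against querying the document natively.
    rebuilt = ET.Element("order", id=str(order_row[0]))
    ET.SubElement(rebuilt, "customer").text = order_row[1]
    for _, sku, qty, price in item_rows:
        ET.SubElement(rebuilt, "item", sku=sku, qty=str(qty), price=str(price))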
Michael J. Carey, Ling Ling, Matthias Nicola, Lin Shao
Transaction Performance vs. Moore’s Law: A Trend Analysis
Abstract
Intel co-founder Gordon E. Moore postulated in his famous 1965 paper that the number of components in integrated circuits had doubled every year from their invention in 1958 until 1965, and predicted that the trend would continue for at least ten years. Later, David House, an Intel colleague, after factoring in the increase in the performance of transistors, concluded that integrated circuits would double in performance every 18 months. Despite this trend in microprocessor improvements, your favorite text editor still takes the same time to start, and your PC takes pretty much the same time to reboot, as it did 10 years ago. Does the same observation hold for the systems supporting the fundamental aspects of our information-based economy, namely transaction processing systems?
For over two decades the Transaction Processing Performance Council (TPC) has been very successful in disseminating objective and verifiable performance data to the industry. During this period the TPC's flagship benchmark, TPC-C, which simulates Online Transaction Processing (OLTP) systems, has produced over 750 benchmark publications across a wide range of hardware and software platforms, representing the evolution of transaction processing systems. TPC-C results have been published by over two dozen distinct vendors on over a dozen database platforms; some of these still exist, while others went under or were acquired, but TPC-C survived. Using this large benchmark result set, we compare TPC-C performance and price/performance to Moore's Law.
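For reference, the arithmetic behind the 18-month doubling that the paper measures TPC-C trends against is simple exponential growth; the figures below are illustrative only.

    # Performance improvement predicted by an 18-month doubling period,
    # relative to an arbitrary baseline (illustrative arithmetic only).
    def predicted_multiplier(months_elapsed, doubling_period_months=18):
        return 2 ** (months_elapsed / doubling_period_months)

    # Over roughly two decades of TPC-C results (~240 months), an 18-month
    # doubling would predict an improvement of about four orders of magnitude.
    print(f"{predicted_multiplier(240):,.0f}x")   # ~10,321x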
Raghunath Nambiar, Meikel Poess
TPC-V: A Benchmark for Evaluating the Performance of Database Applications in Virtual Environments
Abstract
For two decades, TPC benchmarks have been the gold standard for evaluating the performance of database servers. An area that TPC benchmarks had not addressed until now is virtualization. Virtualization is now a major technology in use in data centers, and is the number one technology on Gartner Group's Top Technologies List. In 2009, the TPC formed a Working Group to develop a benchmark specifically intended for virtual environments that run database applications. We describe the characteristics of this benchmark and provide a status update on its development.
Priya Sethuraman, H. Reza Taheri
First TPC-Energy Benchmark: Lessons Learned in Practice
Abstract
The TPC-Energy specification augments the existing TPC benchmarks with energy metrics. It is designed to help hardware buyers identify energy-efficient equipment that meets both their computational and budgetary requirements. In this paper we discuss our experience in publishing the industry's first-ever TPC-Energy result.
Erik Young, Paul Cao, Mike Nikolaiev
Using Solid State Drives as a Mid-Tier Cache in Enterprise Database OLTP Applications
Abstract
When originally introduced, flash-based solid state drives (SSDs) exhibited very high random read throughput with low, sub-millisecond latencies. However, in addition to their steep prices, SSDs suffered from slow write rates and reliability concerns related to cell wear. For these reasons, they were relegated to a niche status in the consumer and personal computer market. Since then, several architectural enhancements have been introduced that have led to a substantial increase in random write performance as well as a reasonable improvement in reliability. From a purely performance point of view, these high I/O rates and improved reliability make SSDs an ideal choice for enterprise On-Line Transaction Processing (OLTP) applications. However, from a price/performance point of view, the case for SSDs may not be clear: the price per GB of enterprise-class SSDs continues to be at least 10x higher than that of conventional magnetic hard disk drives (HDDs), despite a considerable drop in flash chip prices.
We show that a complete replacement of traditional HDDs with SSDs is not cost effective. Further, we demonstrate that the most cost-efficient use of SSDs for OLTP workloads is as an intermediate persistent cache that sits between conventional HDDs and memory, thus forming a three-level memory hierarchy. We also discuss two implementations of such a cache, in hardware and in software. For the software approach, we discuss our implementation of such a cache in an in-house database system; we also describe off-the-shelf hardware solutions. We develop a Total Cost of Ownership (TCO) model for all-SSD and all-HDD configurations, together with a modified OLTP benchmark that can scale I/O density to validate this model. Finally, we show how such SSD cache implementations can increase the performance of OLTP applications while reducing the overall system cost.
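The three-level hierarchy described above amounts to a read path that checks RAM first, then a persistent SSD cache, and only then the HDDs, promoting pages into the SSD tier on a miss. The following is a deliberately simplified, hypothetical sketch of that idea (no write path, plain LRU eviction); it is not the authors' implementation.

    from collections import OrderedDict

    class ThreeLevelStore:
        """Toy read path: RAM buffer pool -> persistent SSD cache -> HDD."""

        def __init__(self, ram_pages, ssd_pages, hdd):
            self.ram = OrderedDict()          # page_id -> data, kept in LRU order
            self.ssd = OrderedDict()          # larger and slower, but persistent
            self.ram_pages, self.ssd_pages, self.hdd = ram_pages, ssd_pages, hdd

        def read(self, page_id):
            if page_id in self.ram:                               # fastest: in-memory buffer pool
                self.ram.move_to_end(page_id)
                return self.ram[page_id]
            if page_id in self.ssd:                               # mid-tier hit: SSD cache
                self.ssd.move_to_end(page_id)
                data = self.ssd[page_id]
            else:                                                 # slowest path: magnetic disk
                data = self.hdd[page_id]
                self._put(self.ssd, self.ssd_pages, page_id, data)   # promote into SSD tier
            self._put(self.ram, self.ram_pages, page_id, data)
            return data

        @staticmethod
        def _put(tier, capacity, page_id, data):
            tier[page_id] = data
            if len(tier) > capacity:
                tier.popitem(last=False)                          # evict least recently used entry

    store = ThreeLevelStore(ram_pages=2, ssd_pages=8, hdd={i: f"page-{i}" for i in range(100)})
    store.read(17)    # HDD miss, promoted into SSD and RAM
    store.read(17)    # now served from RAM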
Badriddine M. Khessib, Kushagra Vaid, Sriram Sankar, Chengliang Zhang
Benchmarking Adaptive Indexing
Abstract
Ideally, realizing the best physical design for the current and all subsequent workloads would impact neither performance nor storage usage. In reality, workloads and datasets can change dramatically over time, and index creation impacts the performance of concurrent user and system activity. We propose a framework that evaluates the key premise of adaptive indexing, a new indexing paradigm in which index creation and re-organization take place automatically and incrementally, as a side-effect of query execution. We focus on how the incremental costs and benefits of dynamic reorganization are distributed across the workload's lifetime. We believe that measuring the costs and utility of the stages of adaptation provides relevant metrics for evaluating new query processing paradigms and comparing them to traditional approaches.
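A well-known instance of this paradigm is database cracking, in which each range query physically partitions a column a little further as a side effect of answering it. The sketch below illustrates that idea in a heavily simplified form (a single integer column, no concurrency, list copies instead of in-place swaps); it is not the framework or the systems evaluated in the paper.

    import bisect

    class CrackedColumn:
        """Cracking sketch: each range query refines the column's physical order."""

        def __init__(self, values):
            self.col = list(values)
            self.cracks = []          # sorted list of (pivot_value, position) boundaries

        def _crack(self, pivot):
            """Partition the piece containing `pivot` so that values < pivot come first."""
            i = bisect.bisect_left(self.cracks, (pivot,))
            if i < len(self.cracks) and self.cracks[i][0] == pivot:
                return self.cracks[i][1]                       # already cracked at this pivot
            lo = self.cracks[i - 1][1] if i > 0 else 0
            hi = self.cracks[i][1] if i < len(self.cracks) else len(self.col)
            piece = self.col[lo:hi]
            left = [v for v in piece if v < pivot]             # a real system partitions in place
            right = [v for v in piece if v >= pivot]
            self.col[lo:hi] = left + right
            pos = lo + len(left)
            self.cracks.insert(i, (pivot, pos))
            return pos

        def range_query(self, low, high):
            """Return values in [low, high); as a side effect, the column gets more ordered."""
            start, end = self._crack(low), self._crack(high)
            return self.col[start:end]

    col = CrackedColumn([7, 1, 9, 4, 12, 3, 8, 5])
    print(col.range_query(4, 9))     # first query pays to partition around 4 and 9
    print(col.range_query(5, 8))     # later queries touch only the relevant, smaller piece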
Goetz Graefe, Stratos Idreos, Harumi Kuno, Stefan Manegold
XWeB: The XML Warehouse Benchmark
Abstract
With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks; however, current decision support benchmarks do not support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse, based on a unified reference model for XML warehouses and featuring XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.
Hadj Mahboubi, Jérôme Darmont
Benchmarking Using Basic DBMS Operations
Abstract
The TPC-H benchmark has proved to be successful in the decision support area. Many database vendors and their hardware partners have used this benchmark to show the superiority and competitive edge of their products. Over time, however, TPC-H has become less representative of industry trends as vendors keep tuning their databases to this benchmark-specific workload. In this paper, we present XMarq, a simple benchmark framework that can be used to compare various software/hardware combinations. Our benchmark model is currently composed of 25 queries that measure the performance of basic operations such as scans, aggregations, joins and index access. It is based on the TPC-H data model due to its maturity and well-understood data generation capability. We also propose metrics to evaluate single-system performance and to compare two systems. Finally, we illustrate the effectiveness of this model by showing experimental results comparing two systems under different conditions.
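One conventional way to summarize a two-system comparison over such a query set is the geometric mean of per-query elapsed-time ratios. The sketch below uses invented timings and query names; the paper's actual metrics may differ.

    from math import prod

    # Hypothetical elapsed times (seconds) for the same queries on two systems.
    system_a = {"scan": 12.0, "aggregate": 8.5, "join": 30.0, "index_access": 1.2}
    system_b = {"scan": 10.0, "aggregate": 9.0, "join": 22.0, "index_access": 1.5}

    ratios = [system_a[q] / system_b[q] for q in system_a]      # >1 means B is faster on q
    geo_mean = prod(ratios) ** (1 / len(ratios))
    print(f"overall: system B is {geo_mean:.2f}x faster than A (geometric mean of ratios)")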
Alain Crolotte, Ahmad Ghazal
Assessing and Optimizing Microarchitectural Performance of Event Processing Systems
Abstract
Event Processing (EP) systems are being progressively used in business-critical applications in domains such as algorithmic trading, supply chain management, production monitoring, and fraud detection. To deal with high throughput and low response time requirements, these EP systems mainly use the CPU-RAM sub-system for data processing. However, as we show here, collected statistics on CPU usage and on CPU-RAM communication reveal that available systems are poorly optimized and grossly waste resources. In this paper we quantify some of these inefficiencies and propose cache-aware algorithms and changes to internal data structures to overcome them. We test the system before and after the changes, at both the microarchitecture and the application level, and show that: (i) the changes improve microarchitectural metrics such as clocks-per-instruction, cache misses and TLB misses; and (ii) some of these improvements result in very large application-level gains, such as a 44% improvement on stream-to-table joins with a 6-fold reduction in memory consumption, and an order-of-magnitude increase in throughput for moving aggregation operations.
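To give a flavor of the kind of data-structure change involved, compare recomputing a moving aggregate from scratch on every event with maintaining it incrementally in a fixed-size ring buffer, which keeps the working set small and contiguous. This is only an illustrative sketch, not the authors' implementation, and Python naturally hides the cache-level effects their changes target.

    from collections import deque

    class MovingSum:
        """Incremental moving aggregation over the last `window` events."""

        def __init__(self, window):
            self.buf = deque(maxlen=window)   # fixed-size ring buffer
            self.total = 0.0

        def push(self, value):
            if len(self.buf) == self.buf.maxlen:
                self.total -= self.buf[0]     # retract the event falling out of the window
            self.buf.append(value)            # deque with maxlen drops the oldest entry
            self.total += value
            return self.total                 # O(1) per event vs. O(window) recomputation

    agg = MovingSum(window=3)
    for v in [10, 20, 30, 40]:
        print(agg.push(v))                    # 10, 30, 60, 90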
Marcelo R. N. Mendes, Pedro Bizarro, Paulo Marques
Backmatter
Metadata
Title
Performance Evaluation, Measurement and Characterization of Complex Systems
Edited by
Raghunath Nambiar
Meikel Poess
Copyright year
2011
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-18206-8
Print ISBN
978-3-642-18205-1
DOI
https://doi.org/10.1007/978-3-642-18206-8
