
About this Book

This book constitutes the refereed proceedings of the 4th TPC Technology Conference, TPCTC 2012, held in Istanbul, Turkey, in August 2012.

It contains 10 selected peer-reviewed papers, 2 invited talks, a report from the TPC Public Relations Committee, and a report from the Workshop on Big Data Benchmarking (WBDB 2012). The papers present novel ideas and methodologies in performance evaluation, measurement, and characterization.

Table of Contents

Frontmatter

TPC Benchmark Roadmap 2012

Abstract
The TPC has played, and continues to play, a crucial role in providing the computer industry with relevant standards for total system performance, price-performance, and energy efficiency comparisons. Historically known for database-centric standards, the TPC is now developing standards for consolidation using virtualization technologies and multi-source data integration, and exploring new ideas such as Big Data and Big Data Analytics to keep pace with rapidly changing industry demands. This paper gives a high-level overview of the current state of the TPC in terms of existing standards, standards under development, and future outlook.
Raghunath Nambiar, Meikel Poess, Andrew Masland, H. Reza Taheri, Matthew Emmerton, Forrest Carman, Michael Majdalany

Incorporating Recovery from Failures into a Data Integration Benchmark

Abstract
The proposed TPC-DI benchmark measures the performance of Data Integration systems (a.k.a. ETL systems) given the task of integrating data from an OLTP system and other data sources to create a data warehouse. This paper describes the scenario, structure, and timing principles used in TPC-DI. Although failure recovery is very important in real deployments of Data Integration systems, certain complexities made it difficult to specify in the benchmark. Hence, failure recovery aspects have been scoped out of the current version of TPC-DI. The issues around failure recovery are discussed in detail and some options are described. Finally, the audience is invited to offer additional suggestions.
Len Wyatt, Brian Caufield, Marco Vieira, Meikel Poess

Two Firsts for the TPC: A Benchmark to Characterize Databases Virtualized in the Cloud, and a Publicly-Available, Complete End-to-End Reference Kit

Abstract
The TPC formed a subcommittee in 2010 to develop TPC-V, a benchmark for virtualized databases. We soon discovered two major issues. First, a database benchmark running in a VM, or even a consolidation scenario of a few database VMs, is no longer adequate. There is demand for a benchmark that emulates cloud computing, e.g., a mix of heterogeneous VMs and dynamic load elasticity for each VM. Second, waiting for system or database vendors to develop benchmarking kits to run such a benchmark is problematic. Hence, we are developing a publicly-available, end-to-end reference kit that will run against the open source PostgreSQL DBMS. This paper describes TPC-V and the proposed architecture of its reference kit; provides a progress report; and presents results from prototyping experiments with the reference kit.
Andrew Bond, Greg Kopczynski, H. Reza Taheri

Adding a Temporal Dimension to the TPC-H Benchmark

Abstract
The importance of time in decision support is widely recognized and has been addressed through temporal applications or through native temporal features by major DBMS vendors. In this paper we propose a framework for adding a new temporal component to the TPC-H benchmark. Our proposal includes temporal DDL, procedures to populate the temporal tables via insert-select, thereby providing history, and temporal queries based on a workload that covers the temporal dimension, broken down into current, history, and both. The queries we define as part of this benchmark include the typical SQL operators involved in scans, joins, and aggregations. The paper concludes with experimental results. While in this paper we consider adding temporal history to a subset of the TPC-H benchmark tables, namely Part/Supplier/Partsupp, our proposed framework addresses a need and uses, as a starting point, a benchmark that is widely successful and well understood.
Mohammed Al-Kateb, Alain Crolotte, Ahmad Ghazal, Linda Rose

Performance Per Watt - Benchmarking Ways to Get More for Less

Abstract
The electrical cost of managing information systems has always been a concern for those investing in technology. However, the focus has sharpened in recent years, both because of the rising cost of electricity and the falling costs of the other components of the equation.
To understand the efficiency of a computing solution, one needs a measure of throughput per watt (or watts per unit of work) that employs a workload relevant to the target load on the system and that operates at a capacity reflecting the target throughput of the final application. The goal of this paper is to introduce the reader to some of the available measures and to explain the relative merits of each.
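
As a quick illustration of the family of metrics named above (a generic formulation, not necessarily the exact definition used in any particular standard), performance per watt and its reciprocal can be written as:

$$
\text{performance per watt} = \frac{\text{throughput (units of work per second)}}{\text{average power (watts)}},
\qquad
\frac{\text{average power (watts)}}{\text{throughput}} = \frac{\text{joules}}{\text{unit of work}} .
$$
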
Karl R. Huppler

Revisiting ETL Benchmarking: The Case for Hybrid Flows

Abstract
Modern business intelligence systems integrate a variety of data sources using multiple data execution engines. A common example is using Hadoop to analyze unstructured text and then merging the results with relational database queries over a data warehouse. These analytic data flows are generalizations of ETL flows. We refer to multi-engine data flows as hybrid flows. In this paper, we present our benchmark infrastructure for hybrid flows and illustrate its use with an example hybrid flow. We then present a collection of parameters to describe hybrid flows. Such parameters are needed to define and run a hybrid flow benchmark. An inherent difficulty in benchmarking ETL flows is the diversity of operators offered by ETL engines. However, a commonality across all engines is extract and load operations, which rely on data and function shipping. We propose that by focusing on these two operations for hybrid flows, it may be feasible to revisit the ETL benchmark effort and thus enable comparison of flows for modern business intelligence applications. We believe our framework may be a useful step toward an industry-standard benchmark for ETL flows.
Alkis Simitsis, Kevin Wilkinson

MulTe: A Multi-Tenancy Database Benchmark Framework

Abstract
Multi-tenancy in relational databases has been a topic of interest for a couple of years. On the one hand, the ever-increasing capabilities and capacities of modern hardware easily allow multiple database applications to share one system. On the other hand, cloud computing leads to the outsourcing of many applications to service architectures, which in turn leads to offerings for relational databases in the cloud as well.
The ability to benchmark multi-tenancy database systems (MT-DBMSs) is imperative to evaluate and compare such systems and helps to reveal otherwise unnoticed shortcomings. With several tenants sharing an MT-DBMS, benchmarking differs considerably from classic database benchmarking and calls for new methods and performance metrics. Unfortunately, no single, well-accepted multi-tenancy benchmark for MT-DBMSs is available, and few efforts have been made regarding the methodology and general tooling of the process.
We propose a method to benchmark MT-DBMSs and provide a framework for building such benchmarks. To support the cumbersome process of defining and generating tenants, loading and querying their data, and analyzing the results, we propose and provide MulTe, an open-source framework that helps with all these steps.
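
As a rough illustration of the workflow sketched above (generate tenants, load their data, run per-tenant queries, analyze results), the following is a minimal, self-contained Python sketch; the names, the in-memory "store", and the metrics are hypothetical illustrations and are not MulTe's actual API.

```python
# Hypothetical sketch (not MulTe's API): generate tenants of different sizes,
# run their query workloads concurrently against one shared "system"
# (here, a single process with in-memory stores), and report per-tenant metrics.
import random
import statistics
import time
from concurrent.futures import ThreadPoolExecutor


def make_tenant(tenant_id, rows):
    """Define and generate one tenant with a simple in-memory data set."""
    return {"id": tenant_id, "data": {k: random.random() for k in range(rows)}}


def run_queries(tenant, queries):
    """Run point lookups for one tenant and record per-query latencies."""
    keys = list(tenant["data"])
    latencies = []
    for _ in range(queries):
        start = time.perf_counter()
        _ = tenant["data"][random.choice(keys)]  # the "query"
        latencies.append(time.perf_counter() - start)
    return latencies


if __name__ == "__main__":
    # Tenants of different sizes share the same system.
    tenants = [make_tenant(i, rows=1000 * (i + 1)) for i in range(4)]
    with ThreadPoolExecutor(max_workers=len(tenants)) as pool:
        results = list(pool.map(lambda t: run_queries(t, 10_000), tenants))
    # Analyze results per tenant, not just in aggregate.
    for tenant, lats in zip(tenants, results):
        p99 = statistics.quantiles(lats, n=100)[98]
        median = statistics.median(lats)
        print(f"tenant {tenant['id']}: median={median * 1e6:.1f}us p99={p99 * 1e6:.1f}us")
```

A real MT-DBMS benchmark would replace the in-memory store with per-tenant database connections and add tenant-level concerns such as fairness and interference; the point here is only the per-tenant generate-load-query-analyze loop that the abstract describes.
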
Tim Kiefer, Benjamin Schlegel, Wolfgang Lehner

BDMS Performance Evaluation: Practices, Pitfalls, and Possibilities

Abstract
Much of the IT world today is buzzing about Big Data, and we are witnessing the emergence of a new generation of data-oriented platforms aimed at storing and processing all of the anticipated Big Data. The current generation of Big Data Management Systems (BDMSs) can largely be divided into two kinds of platforms: systems for Big Data analytics, which today tend to be batch-oriented and based on MapReduce (e.g., Hadoop), and systems for Big Data storage and front-end request-serving, which are usually based on key-value (a.k.a. NoSQL) stores. In this paper we ponder the problem of evaluating the performance of such systems. After taking a brief historical look at Big Data management and DBMS benchmarking, we begin our pondering of BDMS performance evaluation by reviewing several key recent efforts to measure and compare the performance of BDMSs. Next we discuss a series of potential pitfalls that such evaluation efforts should watch out for, pitfalls mostly based on the author’s own experiences with past benchmarking efforts. Finally, we close by discussing some of the unmet needs and future possibilities with regard to BDMS performance characterization and assessment efforts.
Michael J. Carey

Data Historians in the Data Management Landscape

Abstract
At EDF, a leading energy company, process data produced in power stations are archived both to comply with legal archiving requirements and to support various analysis applications. Such data consist of timestamped measurements, retrieved for the most part from process data acquisition systems. After archival, past and current values are used for various applications, including device monitoring, maintenance assistance, decision support, statistics publication, etc.
Large amounts of data are generated in these power stations and aggregated in soft real-time – without operational deadlines – at the plant level by local servers. For this long-term data archiving, EDF has relied on data historians – such as InfoPlus.21, PI, or Wonderware Historian – for years. This is also true for other energy companies worldwide and, more generally, for industries based on automated processes.
In this paper, we aim to answer a simple, yet not so easy, question: how can data historians be placed in the data management landscape, from classical RDBMSs to NoSQL systems? To answer this question, we first give an overview of data historians and then discuss benchmarking these particular systems. Although many benchmarks are defined for conventional database management systems, none of them are appropriate for data historians. To establish a first objective basis for comparison, we therefore propose a simple benchmark inspired by EDF use cases and give experimental results for data historians and DBMSs.
Brice Chardin, Jean-Marc Lacombe, Jean-Marc Petit

Scalable Generation of Synthetic GPS Traces with Real-Life Data Characteristics

Abstract
Database benchmarking is most valuable if real-life data and workloads are available. However, real-life data (and workloads) are often not publicly available due to IPR constraints or privacy concerns. And even if available, they are often limited regarding scalability and variability of data characteristics. On the other hand, while easily scalable, synthetically generated data often fail to adequately reflect real-life data characteristics. While there are well established synthetic benchmarks and data generators for, e.g., business data (TPC-C, TPC-H), there is no such up-to-date data generator, let alone benchmark, for spatiotemporal and/or moving objects data.
In this work, we present a data generator for spatiotemporal data. More specifically, our data generator produces synthetic GPS traces, mimicking the GPS traces that GPS navigation devices generate. To this end, our generator is fed with real-life statistical profiles derived from the user base and uses real-world road network information. Spatial scalability is achieved by choosing statistics from different regions. The data volume can be scaled by tuning the number and length of the generated trajectories. We compare the generated data to real-life data to demonstrate how well the synthetically generated data reflect real-life data characteristics.
Konrad Bösche, Thibault Sellam, Holger Pirk, René Beier, Peter Mieth, Stefan Manegold

S3G2: A Scalable Structure-Correlated Social Graph Generator

Abstract
Benchmarking graph-oriented database workloads and graph-oriented database systems is increasingly becoming relevant in analytical Big Data tasks, such as social network analysis. In graph data, structure is found not so much inside the nodes as in the way the nodes are connected, i.e., in structural correlations. Because such structural correlations determine the join fan-outs experienced by graph analysis algorithms and graph query executors, they are an essential, yet typically neglected, ingredient of synthetic graph generators. To address this, we present S3G2: a Scalable Structure-correlated Social Graph Generator. This graph generator creates a synthetic social graph, containing non-uniform value distributions and structural correlations, which is intended as test data for scalable graph analysis algorithms and graph database systems. We generalize the problem by decomposing correlated graph generation into multiple passes, each of which focuses on one so-called correlation dimension and can be mapped to a MapReduce task. We show that S3G2 can generate social graphs that (i) share well-known graph connectivity characteristics typically found in real social graphs, (ii) contain certain plausible structural correlations that influence the performance of graph analysis algorithms and queries, and (iii) can be quickly generated at huge sizes on common cluster hardware.
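
To make the multi-pass idea concrete, here is a toy, single-process Python sketch in which each pass sorts nodes along one correlation dimension and preferentially connects nodes that are close in that order. It illustrates the general idea only; the attribute names, window scheme, and probabilities are invented for this sketch and are not the S3G2 algorithm or its MapReduce implementation.

```python
# Toy illustration of structure-correlated edge generation: one pass per
# "correlation dimension"; nodes similar along that dimension end up adjacent
# after sorting and are connected with higher probability than distant nodes.
import random

UNIVERSITIES = ["TU Delft", "ETH", "MIT", "NUS"]
CITIES = ["Amsterdam", "Zurich", "Boston", "Singapore"]


def generate_nodes(n):
    """Pass 0: generate nodes with correlated attributes (city ~ university)."""
    nodes = []
    for i in range(n):
        idx = random.randrange(len(UNIVERSITIES))
        # Most people live near where they studied; some moved elsewhere.
        city = CITIES[idx] if random.random() < 0.8 else random.choice(CITIES)
        nodes.append({"id": i, "university": UNIVERSITIES[idx], "city": city})
    return nodes


def edge_pass(nodes, dimension, window=20, budget=3):
    """One correlation pass: sort by one dimension, connect nearby nodes."""
    edges = set()
    order = sorted(nodes, key=lambda n: n[dimension])
    for pos, node in enumerate(order):
        for _ in range(budget):
            offset = random.randint(1, window)
            # Connection probability decays with distance in the sorted order.
            if pos + offset < len(order) and random.random() < 1.0 / offset:
                edges.add((node["id"], order[pos + offset]["id"]))
    return edges


if __name__ == "__main__":
    people = generate_nodes(1000)
    graph = set()
    for dim in ("university", "city"):  # one pass per correlation dimension
        graph |= edge_pass(people, dim)
    print(f"{len(people)} nodes, {len(graph)} edges")
```

Because similar nodes (same university, same city) become neighbors after sorting, the resulting edges are correlated with those attributes, which is the kind of structural correlation such a generator aims to reproduce at scale.
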
Minh-Duc Pham, Peter Boncz, Orri Erling

Benchmarking in the Cloud: What It Should, Can, and Cannot Be

Abstract
With the increasing adoption of Cloud Computing, we observe an increasing need for Cloud benchmarks, in order to assess the performance of Cloud infrastructures and software stacks, to assist with provisioning decisions for Cloud users, and to compare Cloud offerings. We understand our paper as one of the first systematic approaches to the topic of Cloud benchmarks. Our driving principle is that Cloud benchmarks must consider end-to-end performance and pricing, taking into account that services are delivered over the Internet. This requirement yields new challenges for benchmarking and requires us to revisit existing benchmarking practices in order to adapt them to the Cloud.
Enno Folkerts, Alexander Alexandrov, Kai Sachs, Alexandru Iosup, Volker Markl, Cafer Tosun

Characterizing Cloud Performance with TPC Benchmarks

Abstract
TPC benchmarks have become the gold standard for database benchmarking. Companies that publish TPC benchmarks have a significant investment in the workload, benchmark implementation, and publication requirements. We will explore ideas on how TPC benchmarks, with limited modification, can be used to characterize database performance in a cloud environment. This is a natural progression beyond the current TPC-VMS Specification, which leverages existing TPC benchmarks to measure database performance in a virtualized environment. The TPC-VMS Specification only addresses the consolidation of multiple databases in a virtualized or cloud environment. In addition to consolidation, we will address the cloud characteristics of load balancing, migration, resource elasticity, and deployment.
Wayne D. Smith

Setting the Direction for Big Data Benchmark Standards

Abstract
The Workshop on Big Data Benchmarking (WBDB 2012), held on May 8-9, 2012 in San Jose, CA, served as an incubator for several promising approaches to defining a big data benchmark standard for industry. Through an open forum for discussions on a number of issues related to big data benchmarking, including definitions of big data terms, benchmark processes, and auditing, the attendees were able to extend their own view of big data benchmarking as well as communicate their own ideas, which ultimately led to the formation of small working groups to continue collaborative work in this area. In this paper, we summarize the discussions and outcomes from this first workshop, which was attended by about 60 invitees representing 45 different organizations from industry and academia. Workshop attendees were selected based on their experience and expertise in the areas of management of big data, database systems, performance benchmarking, and big data applications. There was consensus among participants about both the need and the opportunity for defining benchmarks to capture the end-to-end aspects of big data applications. Following the model of TPC benchmarks, it was felt that big data benchmarks should include not only performance metrics but also price/performance, along with a sound foundation for fair comparison through audit mechanisms. Additionally, the benchmarks should consider several costs relevant to big data systems, including the total cost of acquisition, setup cost, and the total cost of ownership (including energy cost). The second Workshop on Big Data Benchmarking will be held in December 2012 in Pune, India, and the third meeting is being planned for July 2013 in Xi’an, China.
Chaitanya Baru, Milind Bhandarkar, Raghunath Nambiar, Meikel Poess, Tilmann Rabl

Backmatter
