
2002 | Book

Performance Evaluation of Complex Systems: Techniques and Tools

Performance 2002 Tutorial Lectures

Edited by: Maria Carla Calzarossa, Salvatore Tucci

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter
G-Networks: Multiple Classes of Positive Customers, Signals, and Product Form Results
Abstract
The purpose of this tutorial presentation is to introduce G-Networks, or Gelenbe Networks: product form queueing networks which include normal or positive customers, as well as negative customers which destroy other customers, and triggers which displace other customers from one queue to another. We derive the balance equations for these models in the context of multiple customer classes, show the product form results, and exhibit the traffic equations, which in this case, contrary to BCMP and Jackson networks, are non-linear. This leads to interesting issues of existence and uniqueness of the steady-state solution. Gelenbe Networks can be used to model large scale computer systems and networks in which signaling functions, represented by negative customers and triggers, are used to achieve flow and congestion control.
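For orientation, the single-class case without triggers already shows the flavor of the product form and of the non-linear traffic equations; the notation below is a common one from the G-network literature and is a sketch only, not the multi-class statement developed in the chapter. With service rate $r_i$ at queue $i$, external positive and negative arrival rates $\Lambda_i$ and $\lambda_i$, and routing probabilities $P^+_{ji}$, $P^-_{ji}$ for a customer leaving queue $j$ to enter queue $i$ as a positive or negative customer, the stationary distribution is
$$\pi(k_1,\dots,k_N) = \prod_{i=1}^{N} (1-q_i)\, q_i^{k_i}, \qquad q_i = \frac{\lambda^+_i}{r_i + \lambda^-_i},$$
where the effective arrival rates solve the non-linear traffic equations
$$\lambda^+_i = \Lambda_i + \sum_j q_j r_j P^+_{ji}, \qquad \lambda^-_i = \lambda_i + \sum_j q_j r_j P^-_{ji}.$$
The non-linearity arises because the unknowns $q_j$ appear inside both sums, which is why existence and uniqueness of the solution require a separate argument.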
Erol Gelenbe
Spectral Expansion Solutions for Markov-Modulated Queues
Abstract
There are many computer, communication and manufacturing systems which give rise to queueing models where the arrival and/or service mechanisms are influenced by some external processes. In such models, a single unbounded queue evolves in an environment which changes state from time to time. The instantaneous arrival and service rates may depend on the state of the environment and also, to a limited extent, on the number of jobs present.
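As a brief sketch of the method named in the title (generic notation, not necessarily the chapter's): when the balance equations of such a queue, in the region where the rates no longer depend on the queue length, can be written as a homogeneous vector difference equation
$$v_{j-1} A_0 + v_j A_1 + v_{j+1} A_2 = 0, \qquad j \ge j_0,$$
the spectral expansion solution expresses the stationary probability vectors as
$$v_j = \sum_k a_k\, \psi_k\, \xi_k^{\,j},$$
where the pairs $(\xi_k, \psi_k)$ satisfy $\psi_k (A_0 + \xi_k A_1 + \xi_k^2 A_2) = 0$ with $|\xi_k| < 1$, and the coefficients $a_k$ are fixed by the boundary equations and normalization.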
Isi Mitrani
M/G/1-Type Markov Processes: A Tutorial
Abstract
M/G/1-type processes are commonly encountered when modeling modern complex computer and communication systems. In this tutorial, we present a detailed survey of existing solution methods for M/G/1-type processes, focusing on the matrix-analytic methodology. From first principles and using simple examples, we derive the fundamental matrix-analytic results and lay out recent advances. Finally, we give an overview of an existing, state-of-the-art software tool for the analysis of M/G/1-type processes.
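A central object in the matrix-analytic treatment of M/G/1-type processes is the matrix G, the minimal nonnegative solution of G = sum_k A_k G^k, where the A_k are the blocks of the repeating portion of the transition matrix. The sketch below applies plain functional iteration to a truncated block sequence; it is illustrative only, and the tools surveyed in the tutorial rely on faster and numerically safer algorithms.

import numpy as np

def solve_G(A_blocks, tol=1e-12, max_iter=100000):
    """Functional iteration G <- sum_k A_k @ G**k on a truncated list of
    blocks A_blocks = [A_0, A_1, ...]; returns an approximation of the
    minimal nonnegative solution of G = sum_k A_k G^k."""
    m = A_blocks[0].shape[0]
    G = np.zeros((m, m))
    for _ in range(max_iter):
        G_new = np.zeros((m, m))
        G_pow = np.eye(m)                  # G**k, starting from k = 0
        for A_k in A_blocks:
            G_new += A_k @ G_pow
            G_pow = G_pow @ G
        if np.abs(G_new - G).max() < tol:
            return G_new
        G = G_new
    raise RuntimeError("functional iteration did not converge")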
Alma Riska, Evgenia Smirni
An Algorithmic Approach to Stochastic Bounds
Abstract
We present a new methodology based on stochastic ordering, algorithmic derivation of simpler Markov chains, and numerical analysis of these chains. The performance indices defined by reward functions are stochastically bounded by reward functions computed on much simpler or smaller Markov chains. This leads to an important reduction in numerical complexity. Stochastic bounds are a promising method for analyzing QoS requirements: it is sufficient to prove that a bound on the real performance satisfies the guarantee.
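One classical construction, usually attributed to Abu-Amsha and Vincent, conveys the algorithmic flavor: given a stochastic matrix P, it builds an st-monotone matrix Q whose rows dominate those of P in the strong stochastic (st) order, so that rewards given by increasing functions of the state are bounded from above. Whether this exact construction is the one used in the chapter is an assumption; the sketch is illustrative only.

import numpy as np

def st_monotone_upper_bound(P):
    """Build an st-monotone upper bounding stochastic matrix Q for P by
    working on tail sums F[i, j] = sum_{k >= j} P[i, k]."""
    F = np.cumsum(P[:, ::-1], axis=1)[:, ::-1]     # tail sums of each row
    Fq = np.empty_like(F)
    Fq[0] = F[0]
    for i in range(1, P.shape[0]):
        Fq[i] = np.maximum(Fq[i - 1], F[i])        # enforce st-monotonicity row by row
    Q = Fq.copy()
    Q[:, :-1] -= Fq[:, 1:]                         # back from tail sums to probabilities
    return Q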
J. M. Fourneau, N. Pekergin
Dynamic Scheduling via Polymatroid Optimization
Abstract
Dynamic scheduling of multi-class jobs in queueing systems has wide-ranging applications, but in general is a very difficult control problem. Here we focus on a class of systems for which conservation laws hold. Consequently, the performance space becomes a polymatroid, a polytope with a matroid-like structure, in which the vertices correspond to the performance under priority rules and are easily identified. This structure translates the optimal control problem into an optimization problem, which, under a linear objective, becomes a special linear program; and the optimal schedule is a priority rule. In a more general setting, conservation laws extend to so-called generalized conservation laws, under which the performance space becomes more involved; however, the basic structure that ensures the optimality of priority rules remains intact. This tutorial provides an overview of the subject, focusing on the main ideas, basic mathematical facts, and computational implications.
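A standard single-station example, stated here only as an illustration and not in the chapter's general notation, makes the connection concrete. For a multi-class M/G/1 queue, work conservation implies, for every subset $S$ of classes,
$$\sum_{i \in S} \rho_i W_i \;\ge\; b(S),$$
with equality when $S$ is the full set of classes, where $\rho_i$ is the load of class $i$, $W_i$ its mean waiting time, and $b(\cdot)$ a set function determined by the arrival and service parameters. These inequalities carve out a polytope with the matroid-like structure described above, whose vertices are the performance vectors of the strict priority rules. A linear objective over this polytope is therefore minimized at a vertex, i.e. by a priority rule; in the best-known instance, minimizing the weighted mean delays $\sum_i c_i \lambda_i W_i$, the optimal rule is the classical $c\mu$ rule: serve classes in decreasing order of $c_i \mu_i$.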
David D. Yao
Workload Modeling for Performance Evaluation
Abstract
The performance of a computer system depends on the characteristics of the workload it must serve: for example, if work is evenly distributed performance will be better than if it comes in unpredictable bursts that lead to congestion. Thus performance evaluations require the use of representative workloads in order to produce dependable results. This can be achieved by collecting data about real workloads, and creating statistical models that capture their salient features. This survey covers methodologies for doing so. Emphasis is placed on problematic issues such as dealing with correlations between workload parameters and dealing with heavy-tailed distributions and rare events. These considerations lead to the notion of structural modeling, in which the general statistical model of the workload is replaced by a model of the process generating the workload.
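As a toy illustration of why heavy tails make representativeness hard (a generic example, not a method from the survey): with Pareto-distributed job sizes of shape parameter below 2, a small fraction of the jobs carries a large share of the total work, so short traces and simple summary statistics can be badly misleading.

import numpy as np

rng = np.random.default_rng(0)

def pareto_job_sizes(n, alpha=1.5, x_min=1.0):
    """Inverse-CDF sampling of Pareto(alpha) job sizes; alpha < 2 gives infinite variance."""
    return x_min * (1.0 - rng.random(n)) ** (-1.0 / alpha)

sizes = pareto_job_sizes(100_000)
top_one_percent = np.sort(sizes)[-1_000:]
print("share of total work in the largest 1% of jobs:",
      top_one_percent.sum() / sizes.sum())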
Dror G. Feitelson
Capacity Planning for Web Services Techniques and Methodology
Abstract
Capacity planning is a powerful tool for managing quality of service on the Web. This tutorial presents a capacity planning methodology for Web-based environments, where the main steps are: understanding the environment, characterizing the workload, modeling the workload, validating and calibrating the models, forecasting the workload, predicting the performance, analyzing the cost-performance plans, and suggesting actions. The main steps are based on two models: a workload model and a performance model. The first model results from understanding and characterizing the workload and the second from a quantitative description of the system behavior. Instead of relying on intuition, ad hoc procedures and rules of thumb to understand and analyze the behavior of Web services, this tutorial emphasizes the role of models, as a uniform and formal way of dealing with capacity planning problems.
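A minimal sketch of where the workload model and the performance model meet: the utilization law and the open queueing-network residence-time formula turn measured service demands and a forecast arrival rate into a response-time prediction. The service demands and arrival rate below are hypothetical numbers chosen for illustration, not figures from the tutorial.

# Utilization law U_k = X * D_k and residence time R_k = D_k / (1 - U_k)
# for an open queueing network (queueing resources only).
demands = {"cpu": 0.008, "disk": 0.012, "lan": 0.003}   # service demand per request (s)
arrival_rate = 60.0                                      # forecast load (requests/s)

response_time = 0.0
for resource, D in demands.items():
    U = arrival_rate * D
    R = D / (1.0 - U)
    response_time += R
    print(f"{resource}: utilization {U:.2f}, residence time {R * 1000:.1f} ms")

print(f"predicted response time: {response_time * 1000:.1f} ms")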
Virgilio A. F. Almeida
End-to-End Performance of Web Services
Abstract
As the number of applications that are made available over the Internet rapidly grows, providing services with adequate performance becomes an increasingly critical issue. The performance requirements of the new applications span from a few milliseconds to hundreds of seconds. In spite of continuous technological improvement (e.g., faster servers and clients, multi-threaded browsers supporting several simultaneous and persistent TCP connections, access to the network with larger bandwidth for both servers and clients), the network performance as captured by response time and throughput does not keep up and progressively degrades. There are several causes of the poor “Quality of Web Services” that users very often experience. The characteristics of the traffic (self-similarity and heavy-tailedness) and the widely varying resource requirements (in terms of bandwidth, size and number of downloaded objects, processor time, number of I/Os, etc.) of web requests are among the most important ones. Other factors relate to the architectural complexity of the network path connecting the client browser to the web server and to the behavior of the protocols at the different layers.
In this paper we present a study of the performance of web services. The first part of the paper is devoted to the analysis of the origins of the fluctuations in web data traffic. This peculiar characteristic is one of the most important causes of the performance degradation of web applications. In the second part of the paper, experimental measurements of performance indices, such as end-to-end response time, TCP connection time, and transfer time, of several web applications are presented. The presence of self-similar characteristics in the traffic measurements is shown.
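One textbook way to check for the self-similarity mentioned above is the aggregated-variance method: for a self-similar count series, the variance of the m-aggregated series decays like m^(2H-2), so the Hurst parameter H can be read from the slope of a log-log regression. The sketch below is a generic implementation of that estimator, not the measurement procedure used in the paper; the trace name in the final comment is hypothetical.

import numpy as np

def hurst_aggregated_variance(x, block_sizes=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Estimate the Hurst parameter H from Var(X^(m)) ~ m^(2H - 2)."""
    log_m, log_var = [], []
    for m in block_sizes:
        n_blocks = len(x) // m
        aggregated = x[:n_blocks * m].reshape(n_blocks, m).mean(axis=1)
        log_m.append(np.log(m))
        log_var.append(np.log(aggregated.var()))
    slope, _ = np.polyfit(log_m, log_var, 1)
    return 1.0 + slope / 2.0

# e.g. hurst_aggregated_variance(packet_counts_per_10ms) on a measured count series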
Paolo Cremonesi, Giuseppe Serazzi
Benchmarking
Abstract
After a definition (a list of properties) of a benchmark, the major benchmarks currently in use are classified according to several criteria: ownership, definition of the benchmark, pricing, and the area covered by the benchmark. The SPEC Open Systems Group, TPC and SAP benchmarks are discussed in more detail. The use of benchmarks in academic research is discussed. Finally, some current issues in benchmarking are listed that users of benchmark results should be aware of.
Reinhold Weicker
Benchmarking Models and Tools for Distributed Web-Server Systems
Abstract
This tutorial reviews benchmarking tools and techniques that can be used to evaluate the performance and scalability of highly accessed Web-server systems. The focus is on design and testing of locally and geographically distributed architectures where the performance evaluation is obtained through workload generators and analyzers in a laboratory environment. The tutorial identifies the qualities and issues of existing tools with respect to the main features that characterize a benchmarking tool (workload representation, load generation, data collection, output analysis and report) and their applicability to the analysis of distributed Web-server systems.
Mauro Andreolini, Valeria Cardellini, Michele Colajanni
Stochastic Process Algebra: From an Algebraic Formalism to an Architectural Description Language
Abstract
The objective of this tutorial is to describe the evolution of the field of stochastic process algebra in the past decade, through a presentation of the main achievements in the field. In particular, the tutorial stresses the current transformation of stochastic process algebra from a simple formalism to a fully fledged architectural description language for the functional verification and performance evaluation of complex computer, communication and software systems.
Marco Bernardo, Lorenzo Donatiello, Paolo Ciancarini
Automated Performance and Dependability Evaluation Using Model Checking
Abstract
Markov chains (and their extensions with rewards) have been widely used to determine performance, dependability and performability characteristics of computer communication systems, such as throughput, delay, mean time to failure, or the probability to accumulate at least a certain amount of reward in a given time.
Due to the rapidly increasing size and complexity of systems, Markov chains and Markov reward models are difficult and cumbersome to specify by hand at the state-space level. Therefore, various specification formalisms, such as stochastic Petri nets and stochastic process algebras, have been developed to facilitate the specification of these models at a higher level of abstraction. Up till now, however, the specification of the measure-of-interest has often been done in an informal and relatively unstructured way. Furthermore, some measures-of-interest cannot be expressed conveniently at all.
In this tutorial paper, we present a logic-based specification technique to specify performance, dependability and performability measures-of-interest and show how, for a given finite Markov chain (or Markov reward model), such measures can be evaluated in a fully automated way. Particular emphasis will be given to so-called path-based measures and hierarchically-specified measures. For this purpose, we extend so-called model checking techniques to reason about discrete- and continuous-time Markov chains and their rewards. We also report on the use of techniques such as (compositional) model reduction and measure-driven state-space generation to combat the infamous state space explosion problem.
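To give a flavor of the logic-based style (the atomic propositions up, down and degraded below are hypothetical labels, not taken from the paper), measures over a labelled continuous-time Markov chain can be written as formulas such as
$$\mathcal{S}_{\ge 0.99}(\mathit{up}), \qquad \mathcal{P}_{\le 10^{-4}}\big(\lozenge^{[0,100]}\, \mathit{down}\big), \qquad \mathcal{P}_{\ge 0.8}\big(\mathit{up}\;\mathcal{U}^{[0,t]}\;\mathit{degraded}\big),$$
read as: the steady-state probability of being operational is at least 0.99; the probability of reaching a down state within 100 time units is at most 10^{-4}; and, as a path-based measure, the probability of staying operational until entering a degraded state within t time units is at least 0.8. A model checker evaluates such formulas against the Markov (reward) model in a fully automated way.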
Christel Baier, Boudewijn Haverkort, Holger Hermanns, Joost-Pieter Katoen
Measurement-Based Analysis of System Dependability Using Fault Injection and Field Failure Data
Abstract
The discussion in this paper focuses on the issues involved in analyzing the availability of networked systems using fault injection and the failure data collected by the logging mechanisms built into the system. In particular we address: (1) analysis in the prototype phase, using physical fault injection into an actual system. We use the example of a fault injection-based evaluation of a software-implemented fault tolerance (SIFT) environment (built around a set of self-checking processes called ARMORS) that provides error detection and recovery services to spaceborne scientific applications; and (2) measurement-based analysis of systems in the field. We use the example of a LAN of Windows NT based computers to present methods for collecting and analyzing failure data to characterize network system dependability. Both fault injection and failure data analysis enable us to study naturally occurring errors and to provide feedback to system designers on potential availability bottlenecks. For example, the study of failures in a network of Windows NT machines reveals that most of the problems that lead to reboots are software related and that, though the average availability evaluates to over 99%, a typical machine, on average, provides acceptable service only about 92% of the time.
Ravishankar K. Iyer, Zbigniew Kalbarczyk
Software Reliability and Rejuvenation: Modeling and Analysis
Abstract
Several recent studies have established that most system outages are due to software faults. Given the ever increasing complexity of software and the well-developed techniques and analysis for hardware reliability, this trend is not likely to change in the near future. In this paper, we classify software faults and discuss various techniques to deal with them in the testing/debugging phase and the operational phase of the software. We discuss the phenomenon of software aging and a preventive maintenance technique to deal with this problem called software rejuvenation. Stochastic models to evaluate the effectiveness of preventive maintenance in operational software systems and to determine optimal times to perform rejuvenation for different scenarios are described. We also present measurement-based methodologies to detect software aging and estimate its effect on various system resources. These models are intended to help develop software rejuvenation policies. An automated online measurement-based approach has been used in the software rejuvenation agent implemented in a major commercial server.
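A minimal sketch of the kind of optimization such stochastic models support, cast here as a classical age-replacement calculation rather than the chapter's own model: rejuvenate at age t, or repair on failure if that happens first, and choose t to minimize the long-run downtime per unit of operating time obtained from a renewal-reward argument. The Weibull aging parameters and the downtime costs below are hypothetical.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

shape, scale = 2.0, 100.0        # hypothetical Weibull aging parameters (hours)
c_f, c_r = 10.0, 1.0             # downtime of an unplanned failure vs. a planned rejuvenation

F = lambda t: 1.0 - np.exp(-(t / scale) ** shape)   # time-to-failure distribution

def downtime_rate(t):
    # expected downtime per cycle divided by expected operating time per cycle
    expected_uptime = quad(lambda u: 1.0 - F(u), 0.0, t)[0]
    return (c_f * F(t) + c_r * (1.0 - F(t))) / expected_uptime

best = minimize_scalar(downtime_rate, bounds=(1.0, 500.0), method="bounded")
print(f"rejuvenate roughly every {best.x:.0f} hours")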
Kishor S. Trivedi, Kalyanaraman Vaidyanathan
Performance Validation of Mobile Software Architectures
Abstract
Design paradigms based on the idea of code mobility have recently been introduced, where components of an application may (autonomously or upon request) move to different locations during the application execution. Moreover, software technologies are readily available (e.g. Java-based) that provide tools to implement these paradigms. Based on mobile code paradigms and technologies, different but functionally equivalent software architectures can be defined, and it is widely recognized that, in general, the adoption of a particular architecture can have a large impact on quality attributes such as modifiability, reusability, reliability, and performance. Hence, validation against specific attributes is necessary and calls for careful planning of this activity. Within this framework, the goal of this tutorial is twofold: to provide a general methodology for the validation of software architectures, where the focus is on the transition from the modeling of software architectures to the validation of non-functional requirements; and to instantiate this general methodology in the specific case of software architectures exploiting mobile code.
Vincenzo Grassi, Vittorio Cortellessa, Raffaela Mirandola
Performance Issues of Multimedia Applications
Abstract
The dissemination of Internet technologies, increasing communication bandwidth and processing speeds, and the growth in demand for multimedia information have given rise to a variety of applications. Many of these applications demand the transmission of a continuous flow of data in real time. As such, continuous media applications may have high storage requirements, high bandwidth needs, and strict delay and loss requirements. These pose significant challenges to the design of such systems, especially since the Internet currently provides no QoS guarantees for the data it delivers. An extensive range of problems has been investigated in recent years, from how to efficiently store and retrieve continuous media information in large systems, to how to efficiently transmit the retrieved information via the Internet. Although broad in scope, the problems under investigation are tightly coupled. The purpose of this chapter is to survey some of the techniques proposed to cope with these challenges.
Edmundo de Souza e Silva, Rosa M. M. Leão, Berthier Ribeiro-Neto, Sérgio Campos
Markovian Modeling of Real Data Traffic: Heuristic Phase Type and MAP Fitting of Heavy Tailed and Fractal Like Samples
Abstract
In order to support the effective use of telecommunication infrastructure, the “random” behavior of traffic sources has been studied since the early days of telephony. Strange new features, like fractal-like behavior and heavy tailed distributions, were observed in high speed packet switched data networks in the early ’90s. Since that time, a fertile line of research has aimed to find proper models to describe these strange traffic features and to establish a robust method to design, dimension and operate such networks.
In this paper we give an overview of methods that, on the one hand, allow us to capture important traffic properties like slow decay rate, Hurst parameter, scaling factor, etc., and, on the other hand, make possible the quantitative analysis of the studied systems using the effective analysis approach called the matrix geometric method.
The presentation of this analysis approach is accompanied by a discussion of the properties and limits of Markovian fitting of the typical non-Markovian behavior present in telecommunication networks.
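For orientation, the modelling objects behind phase type (PH) and MAP fitting can be summarized briefly; these are standard definitions, not the paper's specific fitting procedure. A PH distribution with initial vector $\alpha$ and sub-generator $T$ has complementary distribution function
$$\bar F(x) = \alpha\, e^{Tx}\, \mathbf{1},$$
and a Markovian arrival process (MAP) is a pair $(D_0, D_1)$ such that $D_0 + D_1$ is an irreducible generator and $D_1$ contains the rates of the transitions that generate an arrival. Such models can match a heavy tail only over a finite range; a common heuristic is a hyper-exponential approximation
$$\bar F(x) \approx \sum_{i=1}^{n} p_i\, e^{-\lambda_i x}$$
with geometrically spaced rates $\lambda_i$, which can then be fed directly into matrix geometric analysis.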
András Horváth, Miklós Telek
Optimization of Bandwidth and Energy Consumption in Wireless Local Area Networks
Abstract
In recent years the proliferation of portable computers, handheld digital devices, and PDAs has led to a rapid growth in the use of wireless technologies for the Local Area Network (LAN) environment. Beyond supporting wireless connectivity for fixed, portable and moving stations within a local area, wireless LAN (WLAN) technologies can provide a mobile and ubiquitous connection to Internet information services. The design of WLANs has to concentrate on bandwidth consumption, because wireless networks deliver much lower bandwidth than wired networks, e.g., 2-11 Mbps [1] versus 10-150 Mbps [2]. In addition, the finite battery power of mobile computers represents one of the greatest limitations to the utility of portable computers [3], [4]. Hence, a relevant performance-optimization problem is balancing the minimization of battery consumption against the maximization of channel utilization. In this paper, we study the bandwidth and energy consumption of the IEEE 802.11 standard, i.e., the most mature technology for WLANs. Specifically, we derive analytical formulas that relate the protocol parameters to the maximum throughput and to the minimal energy consumption. These formulas are used to define an effective method for tuning the protocol parameters at run time.
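The structure of such formulas can be sketched as follows; this is an illustrative decomposition, not the exact expressions derived in the paper. If $\bar m$ denotes the mean time spent transmitting useful payload per delivered frame and $\bar t_v$ the mean virtual transmission time, i.e. the total channel time consumed per delivered frame including headers, acknowledgements, idle backoff slots and collisions, then the maximum channel utilization is
$$\rho_{\max} = \frac{\bar m}{\bar t_v},$$
and the energy cost per useful bit is, analogously, the mean energy consumed over a virtual transmission time divided by the payload delivered in it. Tuning parameters such as the contention window trades the collision component of $\bar t_v$ (and of the energy) against the idle backoff component, which is what a run-time tuning method can exploit.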
Marco Conti, Enrico Gregori
Service Centric Computing - Next Generation Internet Computing
Abstract
In the not-too-distant future, billions of people, places and things could all be connected to each other and to useful services through the Internet. In this world, scalable, cost-effective information technology capabilities will need to be provisioned as a service, delivered as a service, metered and managed as a service, and purchased as a service. We refer to this world as service centric computing. Consequently, processing and storage will be accessible via utilities where customers pay for what they need, when they need it, and where they need it. This tutorial introduces the concepts of service centric computing and its relationship to the Grid. It explains a programmable data center paradigm as a flexible architecture that helps to achieve service centric computing. Case study results illustrate performance and thermal issues. Finally, key open research questions pertaining to service centric computing and Internet computing are summarized.
Jerry Rolia, Rich Friedrich, Chandrakant Patel
European DataGrid Project: Experiences of Deploying a Large Scale Testbed for E-science Applications
Abstract
The objective of the European DataGrid (EDG) project is to assist the next generation of scientific exploration, which requires intensive computation and analysis of shared large-scale datasets, from hundreds of terabytes to petabytes, across widely distributed scientific communities. We see these requirements emerging in many scientific disciplines, including physics, biology, and earth sciences. Such sharing is made complicated by the distributed nature of the resources to be used, the distributed nature of the research communities, the size of the datasets and the limited network bandwidth available. To address these problems we are building on emerging computational Grid technologies to establish a research network that is developing the technology components essential for the implementation of a world-wide data and computational Grid on a scale not previously attempted. An essential part of this project is the phased development and deployment of a large-scale Grid testbed.
The primary goals of the first phase of the EDG testbed were: 1) to demonstrate that the EDG software components could be integrated into a production-quality computational Grid; 2) to allow the middleware developers to evaluate the design and performance of their software; 3) to expose the technology to end-users to give them hands-on experience; and 4) to facilitate interaction and feedback between end-users and developers. This first testbed deployment was achieved towards the end of 2001 and assessed during the successful European Union review of the project on March 1, 2002. In this article we give an overview of the current status and plans of the EDG project and describe the distributed testbed.
Fabrizio Gagliardi, Bob Jones, Mario Reale, Stephen Burke
Backmatter
Metadata
Title
Performance Evaluation of Complex Systems: Techniques and Tools
Edited by
Maria Carla Calzarossa
Salvatore Tucci
Copyright year
2002
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-45798-5
Print ISBN
978-3-540-44252-3
DOI
https://doi.org/10.1007/3-540-45798-4