
2012 | Book

Measurement, Modelling, and Evaluation of Computing Systems and Dependability and Fault Tolerance

16th International GI/ITG Conference, MMB & DFT 2012, Kaiserslautern, Germany, March 19-21, 2012. Proceedings


About this book

This book constitutes the refereed proceedings of the 16th International GI/ITG Conference on Measurement, Modelling and Evaluation of Computing Systems and Dependability and Fault Tolerance, MMB & DFT 2012, held in Kaiserslautern, Germany, in March 2012.

The 16 revised full papers presented together with 5 tool papers and 5 selected workshop papers were carefully reviewed and selected from 54 submissions. MMB & DFT 2012 covers diverse aspects of performance and dependability evaluation of systems including networks, computer architectures, distributed systems, software, fault-tolerant and secure systems.

Table of Contents

Frontmatter

Full Papers

Availability in Large Networks: Global Characteristics from Local Unreliability Properties
Abstract
We apply mean-field analysis to compute global availability in large networks of generalized SIS and voter models. The main results provide comparison and bounding techniques for the global availability depending on the local degree structure of the networks.
Hans Daduna, Lars Peter Saul
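The mean-field idea in the abstract above can be illustrated with a minimal sketch. This is a generic SIS mean-field equation on a k-regular network, not the authors' generalized models, and all parameter values are illustrative assumptions:

```python
def global_availability(beta, delta, k, x0=0.9, dt=1e-3, t_end=50.0):
    """Euler-integrate the SIS mean-field ODE on a k-regular network:
        dx/dt = beta * k * x * (1 - x) - delta * x
    where x is the fraction of failed ("infected") nodes, beta the
    per-link failure-spread rate and delta the repair rate.  The stable
    fixed point is x* = 1 - delta / (beta * k) when beta * k > delta."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (beta * k * x * (1.0 - x) - delta * x)
    return 1.0 - x  # global availability = equilibrium fraction of up nodes

# Illustrative run: beta = 0.1, delta = 0.2, k = 4 gives x* = 0.5,
# i.e. a global availability of about 0.5.
```

The fixed point shows the qualitative dependence on the degree structure: raising the degree k increases the failed fraction and lowers availability, which is the kind of local-to-global relation the paper bounds.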
Stochastic Analysis of a Finite Source Retrial Queue with Spares and Orbit Search
Abstract
This paper presents an analytic approach for investigating a single-server finite-source retrial queue with spares and constant retrial rate. We assume that there is a single repair facility (server) and K independent parts (customers) in the system. The customers' lifetimes are exponentially distributed random variables. Once a customer breaks down, it is sent for repair immediately. If the server is idle upon the failed customer's arrival, the customer is repaired immediately; a failed customer that finds the server busy enters the retrial orbit. Upon completing a repair, the server searches the orbit for a customer, if any; however, when a new primary customer arrives during the search, the server interrupts the search and serves the new customer. Spares substitute for failed machines, and the system is maintained by replacing failed parts with spares and by repairing failed parts, which become spares once repaired. We carry out the steady-state analysis of the model and obtain various steady-state performance measures.
Feng Zhang, Jinting Wang
Bounds for Two-Terminal Network Reliability with Dependent Basic Events
Abstract
The two-terminal reliability problem has been widely studied since the 1970s, and many efficient algorithms have been proposed. Nevertheless, all of these algorithms assume that the system components are independent. For today's applications this assumption is no longer sufficient, because in fault-tolerant systems components may fail due to common-cause failures or fault propagation. We therefore propose an algorithm that handles such dependencies. In addition, lower and upper bounds can be obtained in case the algorithm cannot be run to completion. The performance and accuracy of the algorithm are demonstrated on a network with a recursive structure for which the exact result is given by a polynomial.
Minh Lê, Max Walter
Software Reliability Testing Covering Subsystem Interactions
Abstract
This article proposes a novel approach to quantitative software reliability assessment ensuring high interplay coverage for software components and decentralized (sub-)systems. The generation of adequate test cases is based on the measurement of their operational representativeness, stochastic independence and interaction coverage. The underlying multi-objective optimization problem is solved by genetic algorithms. The resulting automatic test case generation supports the derivation of conservative reliability measures as well as high interaction coverage. The practicability of the approach developed is finally demonstrated in the light of an interaction-intensive example.
Matthias Meitner, Francesca Saglietti
Failure-Dependent Timing Analysis - A New Methodology for Probabilistic Worst-Case Execution Time Analysis
Abstract
Embedded real-time systems are growing in complexity, far beyond simple closed-loop functionality. Current approaches for worst-case execution time (WCET) analysis are used to verify the deadlines of such systems. These approaches calculate or measure the WCET as a single value that is expected to be an upper bound on a system's execution time. Overestimation is accepted to make this upper bound safe, but modern processor architectures push this overestimation to unrealistic levels. In this paper we therefore show how probabilities from safety analysis models can be combined with elements of system development models to calculate a probabilistic WCET. The approach can be applied to systems that use fault-tolerance mechanisms, since such mechanisms are usually quantified using safety analyses to certify the system as highly reliable or safe. We also present a tool prototype implementing this approach, which provides reliable safe upper bounds by performing a static WCET analysis and overcomes the frequently encountered problem of dependence structures by using a fault injection approach.
Kai Höfig
A Calculus for SLA Delay Properties
Abstract
Service providers in Service-Oriented Architectures (SOA) often specify system performance via Service Level Agreements (SLAs) that reveal no details of how the system realizes its services. Without knowledge of service rates, analytic modeling of SOA performance is therefore difficult: service components are characterized only by the quantitative requirements in SLAs, which most modeling methods do not support.
We propose a calculus to model and evaluate SOA systems whose quantitative properties are described in SLAs. Instead of defining a system by its service capacity, we use flexible constraints on delays as found in SLAs; from these delays, approximate service rates sufficient to fulfill them are derived.
Sebastian Vastag
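As a toy illustration of deriving a service rate from a delay constraint (using a textbook M/M/1 approximation, which is our assumption here and not the calculus proposed in the paper):

```python
def required_service_rate(arrival_rate, max_mean_delay):
    """Smallest service rate mu such that the M/M/1 mean sojourn time
    stays within an SLA delay bound d:
        1 / (mu - lambda) <= d   <=>   mu >= lambda + 1 / d
    arrival_rate: lambda in requests/s; max_mean_delay: d in seconds."""
    return arrival_rate + 1.0 / max_mean_delay

# Illustrative numbers: 50 requests/s under a 100 ms mean-delay SLA
# requires a service rate of at least 60 requests/s.
```

The point of the calculation is the direction of inference the abstract describes: the SLA delay bound is the given, and a sufficient service capacity is derived from it rather than assumed.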
Verifying Worst Case Delays in Controller Area Network
Abstract
The Controller Area Network (CAN) protocol was developed to fulfill the high availability and timing demands of modern cars, but today it is also used in many other mission-critical applications with hard real-time requirements. We present a compact model of the CAN bus specified as a timed automaton and prove its applicability for estimating the worst-case delays that are crucial for hard real-time systems. Using our model we detected flaws in previous approaches to determining worst-case delays in CAN systems.
Nikola Ivkovic, Dario Kresic, Kai-Steffen Hielscher, Reinhard German
Lifetime Improvement by Battery Scheduling
Abstract
The use of mobile devices is often limited by the lifetime of their batteries. For devices that have multiple batteries, or the option to connect an extra battery, battery scheduling, which exploits the recovery properties of the batteries, can help to extend the system lifetime. Due to the complexity of the problem, work on battery scheduling in the literature is limited to either small batteries or very simple loads. In this paper, we present an approach based on the Kinetic Battery Model that combines real-size batteries with realistic random loads. The results show that battery scheduling indeed improves lifetime compared to the sequential usage of the batteries. The improvement depends mainly on the ratio between the average discharge current and the battery capacity. Our results show that for realistic loads one can achieve up to 20% improvement in system lifetime by applying battery scheduling.
Marijn R. Jongerden, Boudewijn R. Haverkort
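The Kinetic Battery Model (KiBaM) mentioned in the abstract above splits the charge into an available well and a bound well, with charge flowing between them; a drained battery "recovers" while idle. The sketch below is a simplified constant-load illustration of battery scheduling under KiBaM, with illustrative parameters and a naive switching policy, not the paper's setup with random loads:

```python
def kibam_step(y1, y2, i, c, k, dt):
    """One Euler step of the Kinetic Battery Model: y1 is the available
    charge, y2 the bound charge; charge flows from the bound well to the
    available well at rate k * (h2 - h1), where h1, h2 are the well heights."""
    h1, h2 = y1 / c, y2 / (1.0 - c)
    flow = k * (h2 - h1)
    return y1 + dt * (-i + flow), y2 + dt * (-flow)

def lifetime(schedule, cap=10.0, c=0.5, k=1e-3, i=0.01, dt=1.0, period=100):
    """Lifetime (in seconds) of two identical KiBaM batteries under a
    constant load current i.  'sequential' switches only when the active
    battery is drained; 'alternate' additionally switches round-robin
    every `period` seconds, letting the idle battery recover bound charge."""
    batts = [[c * cap, (1 - c) * cap], [c * cap, (1 - c) * cap]]
    t, active = 0.0, 0
    while any(b[0] > 0 for b in batts):
        if batts[active][0] <= 0:                   # active battery drained
            active = 1 - active
        for n in range(2):
            cur = i if n == active else 0.0         # idle battery recovers
            batts[n] = list(kibam_step(batts[n][0], batts[n][1], cur, c, k, dt))
        t += dt
        if schedule == 'alternate' and t % period < dt and batts[1 - active][0] > 0:
            active = 1 - active                     # periodic round-robin switch
    return t
```

The system dies when no battery has available charge left; any bound charge still trapped at that moment is the capacity that scheduling tries to recover during idle periods.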
Weighted Probabilistic Equivalence Preserves ω-Regular Properties
Abstract
Equivalence relations can be used to reduce the state space of a system model, thereby permitting more efficient analysis. This paper extends the notion of weighted lumpability (WL) defined on continuous-time Markov chains (CTMCs) to the discrete-time setting, i.e., discrete-time Markov chains (DTMCs). We provide a structural definition of weighted probabilistic equivalence (WPE), define the quotient under WPE and prove some elementary properties. We show that ω-regular properties are preserved when reducing the state space of a DTMC using WPE. Finally, we show that WPE is compositional w.r.t. synchronous parallel composition.
Arpit Sharma
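For intuition about the kind of state-space reduction discussed in the abstract above, here is a sketch of building a DTMC quotient under *ordinary* lumpability, a simpler and stricter notion than the paper's weighted probabilistic equivalence; the transition matrix and partition are illustrative:

```python
def lump(P, partition):
    """Quotient a DTMC transition matrix P (list of row-stochastic lists)
    under a candidate partition (list of lists of state indices).
    Returns the lumped matrix if the partition is ordinarily lumpable,
    i.e. every state in a block has the same aggregate probability of
    moving into each block; otherwise returns None."""
    block_of = {s: i for i, blk in enumerate(partition) for s in blk}
    m = len(partition)
    Q = [[None] * m for _ in range(m)]
    for i, blk in enumerate(partition):
        for s in blk:
            row = [0.0] * m
            for t, p in enumerate(P[s]):
                row[block_of[t]] += p               # aggregate over target block
            for j in range(m):
                if Q[i][j] is None:
                    Q[i][j] = row[j]
                elif abs(Q[i][j] - row[j]) > 1e-9:
                    return None                     # block members disagree
    return Q
```

Model checking the (smaller) quotient then suffices for any property the equivalence preserves; the paper's contribution is showing this preservation for all omega-regular properties under WPE.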
Probabilistic CSP: Preserving the Laws via Restricted Schedulers
Abstract
Extending Communicating Sequential Processes (CSP) by preserving the distributivity laws for internal choice, in the presence of probabilistic choice, has been an open problem so far. The problem stems from a well known disagreement between probabilistic choice and nondeterministic choice, that raises congruence issues for parallel composition. Recently, it has been argued that the congruence issue can be resolved only by restricting the power of the schedulers that resolve the nondeterminism. In our previous work, we have restricted the schedulers by suitably labeling the nondeterministic transitions. We have defined a ready-trace equivalence and a parallel composition with hiding for which the equivalence is a congruence. In this paper, we generalize our model and give a CSP-style axiomatic characterization of the ready-trace equivalence. From the axiomatization it follows that all distributivity axioms for internal choice from CSP are preserved, and no new axioms are added.
Sonja Georgievska, Suzana Andova
Heuristics for Probabilistic Timed Automata with Abstraction Refinement
Abstract
Probabilistic Timed Automata provide a theory to model and verify real-time systems with non-deterministic and probabilistic behaviors. The main approach to model checking Probabilistic Timed Automata is based on encoding the time behavior either with abstractions based on a region graph or with digitalization of clocks. In this paper we present a sound method that combines digitalization to encode time behavior and predicate abstraction to reduce the state space, allowing the analysis of models with possibly infinite numbers of locations. Our method is compatible with abstraction refinement techniques previously used for Probabilistic Automata. Based on experimental results, we show that the underlying digital semantics of clocks is prone to produce an overhead in the abstraction process that can sometimes make the model checking infeasible. To cope with this problem we present some heuristics to handle clocks and show their impact on the verification.
Luis María Ferrer Fioriti, Holger Hermanns
Simulative and Analytical Evaluation for ASD-Based Embedded Software
Abstract
The Analytical Software Design (ASD) method of the company Verum has been designed to reduce the number of errors in embedded software. However, it does not take performance issues into account, which can also have a major impact on the duration of software development. This paper presents a discrete-event simulator for the performance evaluation of ASD-structured software as well as a compositional numerical analysis method using fixed-point iteration and phase-type distribution fitting. Whereas the numerical analysis is highly accurate for non-interfering tasks, its accuracy degrades when tasks run in opposite directions through interdependent software blocks and the utilization increases. A thorough validation identifies the underlying problems when analyzing the performance of embedded software.
Ramin Sadre, Anne Remke, Sjors Hettinga, Boudewijn Haverkort
Reducing Channel Zapping Delay in WiMAX-Based IPTV Systems
Abstract
Due to the enormous improvement of networking technologies and advances in media encoding and compression techniques, IPTV has become one of the fastest-growing services on the Internet. When offered via wireless technologies (e.g., WiMAX), IPTV can pave the way for quad-play in next-generation networks and ubiquitous delivery. IPTV subscribers expect the same or even better Quality of Experience (QoE) compared with the services offered by traditional operators (e.g., cable or satellite). An important QoE element is the channel switching delay, also known as zapping delay. In this paper we propose a prediction-based prejoin mechanism that joins one or two TV channels (those likely to be selected next) in advance in order to shorten the channel switching time in WiMAX-based IPTV systems. A trace-driven simulation is conducted to evaluate the proposed method. The simulation results confirm a reduction of about 30% in average zapping delay.
Alireza Abdollahpouri, Bernd E. Wolfinger
Performance Evaluation of 10GE NICs with SR-IOV Support: I/O Virtualization and Network Stack Optimizations
Abstract
SR-IOV has been proposed to improve the performance and scalability of I/O in virtual machines, and some 10GE NICs supporting this functionality have already appeared on the market. In addition to SR-IOV support, these NICs all provide optimizations for various network layers within the OS kernel. In this paper we aim to present a comprehensive view of the performance gains from SR-IOV. This study is conducted by evaluating the performance of 10GE NICs with SR-IOV support at different layers in various virtualized environments.
Shu Huang, Ilia Baldine
Business Driven BCM SLA Translation for Service Oriented Systems
Abstract
A Business Continuity Management (BCM) impact analysis derives business-level BCM SLAs, which need to be translated to IT-level, infrastructure-level, and facility-level services. However, translating SLAs across a service-oriented system is not an easy task.
In this paper we present a new Petri-net-based procedure to define and translate BCM SLAs for service-oriented systems. As a result of our approach we are able to introduce a BCM SLA classification schema. We describe our approach in the context of a use case.
Ulrich Winkler, Wasif Gilani, Alan Marshall
Boosting Design Space Explorations with Existing or Automatically Learned Knowledge
Abstract
During development, processor architectures can be tuned and configured by many different parameters. For benchmarking, automatic design space explorations (DSEs) with heuristic algorithms are a helpful approach to find the best settings for these parameters according to multiple objectives, e.g. performance, energy consumption, or real-time constraints. But if the setup is slightly changed and a new DSE has to be performed, it will start from scratch, resulting in very long evaluation times.
To reduce evaluation times, in this article we extend the NSGA-II algorithm such that automatic DSEs can be supported by a set of transformation rules defined in a highly readable format, the Fuzzy Control Language (FCL). Rules can be specified by an engineer, thereby representing existing knowledge. Beyond this, a decision tree classifying high-quality configurations can be constructed automatically and translated into transformation rules. These rules can also be seen as a very valuable result of a DSE in their own right, because they allow drawing conclusions about the influence of parameters and describe regions of the design space with a high density of good configurations.
Our evaluations show that automatically generated decision trees can classify near-optimal configurations for the hardware parameters of the Grid ALU Processor (GAP) and M-Sim 2. Further evaluations show that automatically constructed transformation rules can reduce the number of evaluations required to reach the same quality of results as without rules by 43%, leading to a significant time saving of about 25%. In the demonstrated example, using rules also leads to better results.
Ralf Jahr, Horia Calborean, Lucian Vintan, Theo Ungerer

Tool Papers

IBPM: An Open-Source-Based Framework for InfiniBand Performance Monitoring
Abstract
In this paper, we present a tool for performance measurement of InfiniBand networks. Our tool analyzes the network and presents a comprehensible visualization of the performance and health of the network. InfiniBand network operators can use the tool to detect potential bottlenecks and optimize the overall performance of their network.
Michael Hoefling, Michael Menth, Christian Kniep, Marcus Camen
A Workbench for Internet Traffic Analysis
Abstract
The specification and development of models for Internet traffic is mainly conducted through measurement-driven performance modeling based on statistical analysis. Yet this type of analysis can be challenging, due to the complexity and, especially with large sample sizes, the sheer quantity of the data. For this reason, preliminary data examination by graphical means and appropriate visualization techniques can be of great value. In this paper, we present recent developments of the network traffic analyzer Atheris. Atheris has been designed specifically for measuring network traffic and visualizing its inherent properties. It performs traffic measurements at the packet layer, data extraction, and flow analysis, and enables the visual inspection of statistical characteristics.
Philipp M. Eittenberger, Udo R. Krieger
A Modelling and Analysis Environment for LARES
Abstract
This paper presents a toolset for modelling and analysis of fault tolerant systems based on LARES (LAnguage for REconfigurable Systems), an easy-to-learn formalism that allows its users to describe system structure, dynamic failure and repair behaviour.
Alexander Gouberman, Martin Riedl, Johann Schuster, Markus Siegle
Simulation and Statistical Model Checking for Modestly Nondeterministic Models
Abstract
Modest is a high-level compositional modelling language for stochastic timed systems with a formal semantics in terms of stochastic timed automata. The analysis of Modest models is supported by the Modest Toolset, which includes the discrete-event simulator modes. modes handles arbitrary deterministic models as well as models that include nondeterminism due to concurrency, using methods inspired by partial order reduction. In this paper, we present version 1.4 of modes, which includes several enhancements over previous prototypical versions, such as support for recursive data structures, interactive simulation and statistical model checking.
Jonathan Bogdoll, Arnd Hartmanns, Holger Hermanns
UniLoG: A Unified Load Generation Tool
Motivation
Utilities for generating artificial (synthetic) loads are very important for analyzing the performance and behavior of networks and the services they offer. Load generators built by industry are mainly dedicated hardware components with very high performance and stringent precision requirements. In research and academia, software-based load generators are more common because of their higher flexibility in operation and maintenance (e.g., easy deployment of load-generating modules in the network, or code customization for a specific research purpose), while components of real operating systems and protocol stacks can be used to guarantee realistic load generation at lower cost. However, many existing tools are dedicated to a specific modeling study (e.g., Guernica [1] with its specific Dweb model for Web traffic, or Harpoon [2] modeling IP traffic flows) or focus on generating traffic at one specific interface in a network (e.g., ITG [3] and Brute [4] were designed to generate traffic on UDP and TCP service interfaces). The proposed solutions often do not provide adequate flexibility, e.g., when the underlying model is to be modified or a completely new model is to be used. Therefore, this paper presents the unified load generator UniLoG, which combines the specification and generation of network loads in one coherent approach. The basic principle underlying the design of UniLoG is to start with a formal description of an abstract load model by means of a finite user behavior automaton (UBA, introduced in Sec. 2) and then to use interface-specific adapters to map the abstract requests to the concrete requests as they are “understood” by the service-providing component at the real interface in question. An overview of the distributed UniLoG architecture is given in Sec. 3, and a concrete example of its practical use in QoS studies for video streaming is presented in Sec. 4.
Andrey Kolesnikov

Selected Workshop Papers

Non Preemptive Static Priority with Network Calculus: Enhancement
Abstract
The paper addresses worst-case performance analysis of non-preemptive static priority scheduling within network calculus theory. Several previous studies exist, each generalizing others [8,1,7,3,10] by requiring weaker hypotheses or improving the accuracy of results. This paper presents a very general result, with an accuracy that appears, on preliminary examples, to be as good as that of all the others.
William Mangoua Sofack, Marc Boyer
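For context, here is the classic network-calculus delay bound that analyses like the one above refine; this is the textbook result for a single flow, not the paper's sharper non-preemptive static-priority analysis, and the numbers are illustrative:

```python
def nc_delay_bound(b, r, R, T):
    """Worst-case delay for token-bucket arrivals constrained by the
    arrival curve alpha(t) = b + r*t crossing a rate-latency server with
    service curve beta(t) = R * max(t - T, 0): the horizontal deviation
    between alpha and beta is T + b / R (finite only if r <= R).
    b: burst size, r: sustained rate, R: service rate, T: server latency."""
    assert r <= R, "stability: long-term arrival rate must not exceed R"
    return T + b / R

# Illustrative units: burst b = 2 Mb, rate r = 1 Mb/s, service rate
# R = 10 Mb/s, latency T = 5 ms  ->  bound of 5 ms + 200 ms = 205 ms.
```

In a static-priority setting, the service curve offered to a low-priority flow is what remains after higher-priority interference, so tighter residual service curves translate directly into tighter delay bounds via this same horizontal-deviation computation.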
A Demand-Response Calculus with Perfect Batteries
Abstract
We consider an electricity consumer equipped with a perfect battery, who needs to satisfy a non-elastic load, subject to external control signals. The control imposes a time-varying upper-bound on the instantaneous energy consumption (this is called “Demand-Response via quantity”). The consumer defines a charging schedule for the battery. We say that a schedule is feasible if it successfully absorbs the effects of service reduction and achieves the satisfiability of the load (making use of the battery). Our contribution is twofold. (1) We provide explicit necessary and sufficient conditions for the load, the control, and the battery, which ensure the existence of a feasible battery charging schedule. Furthermore, we show that whenever a feasible schedule exists, we can explicitly define an online (causal) feasible schedule. (2) For a given arrival curve characterizing the load and a given service curve characterizing the control, we compute a sufficient battery size that ensures existence of an online feasible schedule. For an arrival curve determined from a real measured trace, we numerically characterize the sufficient battery size for various types of service curves.
Jean-Yves Le Boudec, Dan-Cristian Tomozei
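The feasibility question in the abstract above can be made concrete in slotted time with a greedy causal schedule. This sketch is an assumption-laden simplification (discrete slots, lossless battery, greedy "draw as much as allowed" policy), not the paper's arrival-curve/service-curve characterization:

```python
def online_schedule(load, cap, battery_size, b0=0.0):
    """Greedy causal schedule for a perfect (lossless) battery.
    Each slot, draw from the grid up to the control cap; surplus energy
    charges the battery, deficits are covered by discharging it.
    load[t]: energy demanded in slot t; cap[t]: max grid draw in slot t.
    Returns the battery-level trajectory, or None if some demand
    cannot be met (battery would go below empty)."""
    b, traj = b0, []
    for l, c in zip(load, cap):
        draw = min(c, l + (battery_size - b))   # cover load, then fill headroom
        b += draw - l                           # surplus charges, deficit drains
        if b < -1e-9:
            return None                         # infeasible: load unmet
        traj.append(b)
    return traj
```

Drawing the maximum allowed each slot is the natural greedy policy for a perfect battery: banking energy early can only help absorb a later service reduction, which mirrors the paper's result that when any feasible schedule exists, an explicit online (causal) one does too.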
A Formal Definition and a New Security Mechanism of Physical Unclonable Functions
Abstract
The characteristic novelty of what is generally meant by a “physical unclonable function” (PUF) is precisely defined, in order to supply a firm basis for security evaluations and the proposal of new security mechanisms. A PUF is defined as a hardware device which implements a physical function with an output value that changes with its argument. A PUF can be clonable, but a secure PUF must be unclonable.
This proposed meaning of a PUF is cleanly delineated from the closely related concepts of “conventional unclonable function”, “physically obfuscated key”, “random-number generator”, “controlled PUF” and “strong PUF”. The structure of a systematic security evaluation of a PUF, enabled by the proposed formal definition, is outlined. Practically all current and novel physical (but not conventional) unclonable functions are PUFs by our definition. The proposed definition thereby captures the existing intuition about what a PUF is and remains flexible enough to encompass further research.
In a second part we quantitatively characterize two classes of PUF security mechanisms: the standard one, based on a minimum secret read-out time, and a novel one, based on challenge-dependent erasure of stored information. The new mechanism is shown to allow, in principle, the construction of a “quantum PUF” that is absolutely secure while not requiring the storage of an exponentially large secret. The construction of a PUF that is mathematically and physically unclonable in principle does not contradict the laws of physics.
Rainer Plaga, Frank Koob
Modeling and Analysis of a P2P-VoD System Based on Stochastic Network Calculus
Abstract
Peer-to-peer video-on-demand (P2P-VoD) is one of the most commercially important and technically challenging applications. In such systems, each video file is typically divided into sub-pieces that are stored possibly at different sending peers, and the receiving peer often needs to coordinate multiple senders by specifying a transmission schedule for them. In this paper, we investigate this transmission scheduling problem for a specific P2P-VoD system which is PPLive. We formalize the problem of scheduling sub-piece transmission based on stochastic network calculus, and analyze its delay performance. Based on the delay analysis, an optimization problem is formulated to design the schedule for sending peers to transmit their sub-pieces. The objective of this optimization is to maximize a defined reward and at the same time meet the delay constraint required to ensure video quality at the receiver.
Kai Wang, Yuming Jiang, Chuang Lin
Using NFC Phones for Proving Credentials
Abstract
In this paper we propose a new solution for mobile payments called Tap2 technology. To use it, users need only their NFC-enabled mobile phones and credentials implemented on their smart cards. The NFC device acts as a bridge between service providers and secure elements, and the secure credentials (on the card) are never revealed. In this way, secure authentication can be obtained by means of anonymous credentials, implemented on a smart card to provide the functionality with minimal data disclosure. We propose to use zero-knowledge proofs based on attribute-based anonymous credentials to meet the security and privacy requirements of mobile payments. Other use cases include online shopping, easy payment, eGovernment proofs, etc.
Gergely Alpár, Lejla Batina, Roel Verdult
Backmatter
Metadata
Title
Measurement, Modelling, and Evaluation of Computing Systems and Dependability and Fault Tolerance
Edited by
Jens B. Schmitt
Copyright year
2012
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-28540-0
Print ISBN
978-3-642-28539-4
DOI
https://doi.org/10.1007/978-3-642-28540-0