2017 | Book

Computer Performance Engineering

14th European Workshop, EPEW 2017, Berlin, Germany, September 7-8, 2017, Proceedings

About this book

This book constitutes the refereed proceedings of the 14th European Workshop on Computer Performance Engineering, EPEW 2017, held in Berlin, Germany, in September 2017.
The 18 papers presented in this volume, together with the abstracts of two invited talks, were carefully reviewed and selected from 30 submissions. The papers presented at the workshop reflect the diversity of modern performance engineering, with topics ranging from advances in Markov models and quantitative analysis, through model checking and cyber-physical systems, to performance, energy, and security.

Table of Contents

Frontmatter

Advances in Markov Models

Frontmatter
Analysis of Markov Decision Processes Under Parameter Uncertainty
Abstract
Markov Decision Processes (MDPs) are a popular decision model for stochastic systems. Introducing uncertainty in the transition probability distribution by giving upper and lower bounds for the transition probabilities yields the model of Bounded Parameter MDPs (BMDPs), which captures many practical situations with limited knowledge about a system or its environment. In this paper, the class of BMDPs is extended to Bounded Parameter Semi-Markov Decision Processes (BSMDPs). The main focus of the paper is the introduction and numerical comparison of different algorithms for computing optimal policies for BMDPs and BSMDPs; specifically, we introduce and compare variants of value and policy iteration.
The paper delivers an empirical comparison between different numerical algorithms for BMDPs and BSMDPs, with an emphasis on the required solution time.
Peter Buchholz, Iryna Dohndorf, Dimitri Scheftelowitsch
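As a rough illustration of the kind of interval value iteration such algorithms build on, the sketch below computes a pessimistic (lower-bound) value function for a BMDP; the order-based mass allocation and all model data are generic assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def worst_case_distribution(lower, upper, values):
    """Pick a transition distribution within [lower, upper] that minimises the
    expected value by pushing probability mass towards low-value states.
    Assumes well-formed bounds with lower.sum() <= 1 <= upper.sum()."""
    order = np.argsort(values)            # low-value states first
    p = lower.copy()
    slack = 1.0 - p.sum()                 # mass still to distribute
    for s in order:
        add = min(upper[s] - lower[s], slack)
        p[s] += add
        slack -= add
        if slack <= 0.0:
            break
    return p

def interval_value_iteration(lower, upper, reward, gamma=0.9, tol=1e-8):
    """Pessimistic (lower-bound) value iteration for a BMDP.
    lower/upper: arrays of shape (n_states, n_actions, n_states) with
    transition probability bounds; reward: shape (n_states, n_actions)."""
    n_states, n_actions, _ = lower.shape
    v = np.zeros(n_states)
    while True:
        q = np.empty((n_states, n_actions))
        for s in range(n_states):
            for a in range(n_actions):
                p = worst_case_distribution(lower[s, a], upper[s, a], v)
                q[s, a] = reward[s, a] + gamma * p @ v
        v_new = q.max(axis=1)             # best action against worst-case nature
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, q.argmax(axis=1)
        v = v_new
```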
Bounded Aggregation for Continuous Time Markov Decision Processes
Abstract
Markov decision processes suffer from two problems: the so-called state space explosion, which may lead to long computation times, and the memoryless property of states, which limits the modeling power with respect to real systems. In this paper we combine existing state aggregation and optimization methods into a new aggregation-based optimization method. More specifically, we compute reward bounds on an aggregated model by trading state space size for uncertainty. We propose an approach for continuous-time Markov decision models with discounted or average reward measures.
The approach starts with a partitioned state space consisting of blocks that represent an abstract, high-level view of the state space. The sojourn time in each block can then be represented by a phase-type distribution (PHD). Using known properties of PHDs, we can bound the sojourn times in the blocks and the reward accumulated during each sojourn by constraining the set of possible initial vectors, which yields tighter bounds for the sojourn times and, ultimately, for the average or discounted reward measures. Furthermore, given a fixed policy for the CTMDP, we can constrain the initial vector further, which improves the reward bounds. The aggregation approach is illustrated on randomly generated models.
Peter Buchholz, Iryna Dohndorf, Alexander Frank, Dimitri Scheftelowitsch
Interactive Markovian Equivalence
Abstract
Behavioral equivalences relate states that are indistinguishable to an external observer of the system. This paper defines two equivalence relations, interactive Markovian equivalence (IME) and weak interactive Markovian equivalence (WIME), for closed interactive Markov chains (IMCs). We define the quotient system under these relations and investigate their relationship with strong and weak bisimulation, respectively. Next, we show that both IME and WIME can be used for repeated minimization of closed IMCs. Finally, we prove that time-bounded reachability properties are preserved under IME and WIME quotienting.
Arpit Sharma

Advances in Quantitative Analysis

Frontmatter
Delay Analysis of Resequencing Buffer in Markov Environment with HOQ-FIFO-LIFO Policy
Abstract
Resequencing of customers during the service process results in delay distributions that are hard to analyze. A set of models with various service and resequencing policies has already been analyzed for memoryless arrival, service, and resequencing processes, with intensive use of transform-domain descriptions. In the case of Markov-modulated arrival, service, and resequencing processes, those methods are no longer applicable. In a previous work we analyzed the Markov-modulated case with the HOQ-FIFO-FIFO policy (the head-of-queue customer of the higher-priority FIFO queue is moved to a resequencing FIFO queue). In this work we investigate whether the approach remains applicable to a different service discipline, the HOQ-FIFO-LIFO policy.
It turns out that the analysis of the new service policy requires the solution of coupled quadratic matrix equations, which were separate in the HOQ-FIFO-FIFO case.
Rostislav Razumchik, Miklós Telek
Analysis of Timed Properties Using the Jump-Diffusion Approximation
Abstract
Density-dependent Markov chains (DDMCs) describe the interaction of groups of identical objects. In the case of large numbers of objects, a DDMC can be approximated efficiently either by a set of ordinary differential equations (ODEs) or by a set of stochastic differential equations (SDEs). While the ODE approximation does not maintain the stochasticity of the chain, the SDE approximation, also known as the diffusion approximation, can capture specific stochastic phenomena (e.g., bi-modality) and also has better convergence characteristics. In this paper we introduce a method for assessing temporal properties of a DDMC, specified in terms of a timed automaton, through a jump-diffusion approximation. The added value is in runtime: the costly simulation of a very large DDMC model can be replaced by a much faster simulation of the corresponding jump-diffusion model. We show the efficacy of the framework through the analysis of a biological oscillator.
Paolo Ballarini, Marco Beccuti, Enrico Bibbona, Andras Horvath, Roberta Sirovich, Jeremy Sproston
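To make the diffusion approximation concrete, here is a minimal Euler-Maruyama simulation of the Langevin (SDE) approximation of a density-dependent chain; the one-species reaction system and its rates are invented placeholders, and the jump component of the paper's jump-diffusion construction is omitted.

```python
import numpy as np

# Illustrative density-dependent chain: one species with production and
# degradation.  Change vectors and (concentration-scale) rate functions are
# placeholders, not the oscillator studied in the paper.
changes = np.array([+1.0, -1.0])
def rates(x):
    return np.array([2.0, 1.0 * x])        # production k1, degradation k2*x

def diffusion_approximation(x0, N, T, dt=1e-3, rng=np.random.default_rng(0)):
    """Euler-Maruyama simulation of the Langevin approximation
    dX = sum_j v_j a_j(X) dt + sum_j v_j sqrt(a_j(X)/N) dW_j."""
    steps = int(T / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for t in range(steps):
        a = rates(x[t])
        drift = changes @ a
        noise = changes * np.sqrt(np.maximum(a, 0.0) / N)
        x[t + 1] = x[t] + drift * dt + noise @ rng.normal(size=a.size) * np.sqrt(dt)
    return x

trajectory = diffusion_approximation(x0=1.0, N=1000, T=10.0)
```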
Stability Analysis of a Multiclass Retrial System with Coupled Orbit Queues
Abstract
In this work we consider a single-server system accepting N types of retrial customers, which arrive according to independent Poisson streams. In case of blocking, a type-i customer, \(i=1,2,\ldots,N\), is routed to a separate type-i orbit queue of infinite capacity. Customers from the orbit queues try to access the server according to the constant retrial policy. We consider coupled orbit queues: more precisely, orbit queue i retransmits a blocked customer of type i to the main service station after an exponentially distributed time with rate \(\mu_{i}\) when at least one other orbit queue is non-empty. Otherwise, if all other orbit queues are empty, orbit queue i changes its retransmission rate from \(\mu_{i}\) to \(\mu_{i}^{*}\). Such a scheme arises in the modeling of cooperative cognitive wireless networks, in which a node is aware of the status of the other nodes and adjusts its retransmission parameters accordingly in order to exploit their idle periods. Using the regenerative approach, we obtain necessary conditions for the ergodicity of the system and show that these conditions have a clear probabilistic interpretation. We also suggest a sufficient stability condition. Simulation experiments show that the obtained conditions delimit the stability domain with remarkable accuracy.
Evsey Morozov, Ioannis Dimitriou
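A minimal event-driven simulation of the coupled-orbit mechanism, useful for probing the stability region empirically; exponential service and all rate values are assumptions of this sketch, not of the paper.

```python
import numpy as np

def simulate_coupled_orbits(lam, mu, mu_star, service_rate, T,
                            rng=np.random.default_rng(1)):
    """Simulate the coupled-orbit retrial system.  lam[i]: Poisson arrival
    rate of type-i customers; mu[i]/mu_star[i]: constant retrial rate of
    orbit i when some other orbit is non-empty / when all others are empty.
    Exponential service is an assumption made for this sketch.
    Returns the time-average orbit sizes."""
    N = len(lam)
    orbit = np.zeros(N, dtype=int)
    busy = False
    t, area = 0.0, np.zeros(N)
    while t < T:
        # Active retrial rate of each non-empty orbit depends on the others.
        retrial = np.zeros(N)
        for i in range(N):
            if orbit[i] > 0:
                others_nonempty = orbit.sum() - orbit[i] > 0
                retrial[i] = mu[i] if others_nonempty else mu_star[i]
        rates = np.concatenate([lam, retrial, [service_rate if busy else 0.0]])
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        area += orbit * min(dt, T - t)
        t += dt
        event = rng.choice(len(rates), p=rates / total)
        if event < N:                       # new type-i arrival
            if busy:
                orbit[event] += 1           # blocked -> join orbit i
            else:
                busy = True
        elif event < 2 * N:                 # retrial attempt from orbit i
            i = event - N
            if not busy:                    # succeeds only if the server is idle
                orbit[i] -= 1
                busy = True
        else:                               # service completion
            busy = False
    return area / T
```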

Model Checking

Frontmatter
Model Checking the STL Time-Bounded Until on Hybrid Petri Nets Using Nef Polyhedra
Abstract
Hybrid Petri nets have been extended with so-called general transitions, which add one random variable and one dimension to the underlying state space for each firing of a general transition. We propose an algorithm for model checking the time-bounded until operator in hybrid Petri nets with two general transition firings, based on Boolean set operations on Nef polyhedra. A case study on (dis)charging an electric vehicle shows the feasibility of the approach. Results are validated against a simulation tool and computation times are compared.
Adrian Godde, Anne Remke
A New Approach to Predicting Reliable Project Runtimes via Probabilistic Model Checking
Abstract
For more than five decades, efforts to calculate exact probabilistic quantiles for generally distributed project runtimes have not been successful, due to the tremendous computation requirements paired with hard restrictions on the available computation power. The methods established today are PERT (Program Evaluation and Review Technique) and CCPM (Critical Chain Project Management). They make simplifying assumptions by focusing on the critical path (PERT) or estimating appropriate buffers (CCPM). In view of this, and since today's machines offer increased computation power, we have developed a new approach: for the calculation of more exact quantiles or, conversely, of the resulting buffer sizes, we combine the capabilities of classical reduction techniques for series-parallel structures with those of probabilistic model checking. In order to avoid the state space explosion problem, we propose a heuristic algorithm.
Ulrich Vogl, Markus Siegle
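As a simple point of comparison for the series-parallel structure of such project networks, the following Monte Carlo sketch estimates runtime quantiles of a hypothetical series-parallel project; the lognormal task durations and the project layout are invented, and the paper's probabilistic model checking step is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# Each leaf activity has a generally distributed duration (illustrative
# lognormal tasks).  A series block completes when its children run one after
# another (sum); a parallel block completes when the slowest child finishes (max).
def series(*children):
    return lambda n: sum(c(n) for c in children)

def parallel(*children):
    return lambda n: np.maximum.reduce([c(n) for c in children])

def task(median, sigma=0.25):
    return lambda n: rng.lognormal(np.log(median), sigma, size=n)

# Hypothetical project: design, then two implementation streams in parallel,
# then integration testing.
project = series(task(5.0),
                 parallel(series(task(8.0), task(3.0)), task(10.0)),
                 task(4.0))

samples = project(200_000)
for q in (0.5, 0.9, 0.99):
    print(f"{q:.0%} quantile of project runtime: {np.quantile(samples, q):.2f}")
```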

Cyber-Physical Systems

Frontmatter
Learning-Based Testing of Cyber-Physical Systems-of-Systems: A Platooning Study
Abstract
Learning-based testing (LBT) is a paradigm for fully automated requirements testing that combines machine learning with model-checking techniques. LBT has been shown to be effective for unit and integration testing of safety-critical components in cyber-physical systems, e.g., automotive ECU software.
We consider the challenges faced, and some initial results obtained, in an effort to scale up LBT to testing co-operative open cyber-physical systems-of-systems (CO-CPS). For this we focus on a case study of testing safety and performance properties of multi-vehicle platoons.
Karl Meinke
An Inspection-Based Compositional Approach to the Quantitative Evaluation of Assembly Lines
Abstract
We present a model-based approach to performance evaluation of a collection of similar systems based on runtime observations. As a concrete example, we consider an assembly line made of sequential workstations with transfer blocking and no buffering capacity, implementing complex workflows with random choices and sequential/cyclic phases with generally distributed durations and no internal parallelism. Starting from the steady state, an inspection mechanism is subject to some degree of uncertainty in the identification of the current phase of each workstation and is in any case unable to estimate remaining times. By relying on the positive correlation between delays at different workstations, we provide stochastic upper and lower approximations of the performance measures of interest, including the time to completion of the local workflow of each workstation and the time until a workstation starts a new job. Experimental results show that the approximated evaluation is accurate and feasible for lines of significant complexity.
Marco Biagi, Laura Carnevali, Tommaso Papini, Kumiko Tadano, Enrico Vicario

Performance, Energy and Security

Frontmatter
Machine Learning Models for Predicting Timely Virtual Machine Live Migration
Abstract
Virtual machine (VM) consolidation is among the key strategic approaches that can be employed to reduce energy consumption in large computing infrastructures. However, live migration of VMs is not a trivial operation, and consequently not all VMs can be easily consolidated in all circumstances. In this paper we present experiments attempting to live-migrate the Kernel-based VM (KVM) executing workloads from the SPECjvm2008 benchmark. In order to understand which factors influence live migration, we investigate three machine learning models for predicting successful live migration, using different training and evaluation sets drawn from our experimental data.
Osama Alrajeh, Matthew Forshaw, Nigel Thomas
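A hedged sketch of the kind of model comparison described: three off-the-shelf classifiers are trained to predict migration success from synthetic features; the feature names and data are placeholders standing in for the paper's SPECjvm2008 measurements.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature matrix: one row per migration attempt, with columns such
# as dirty-page rate, VM memory footprint and available bandwidth.  Features
# and synthetic labels are illustrative, not the paper's experimental data.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = (X[:, 0] * 2.0 < X[:, 2] + rng.normal(0, 0.2, 500)).astype(int)  # toy rule

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=4),
    "random forest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)     # predictive accuracy
    print(f"{name:>20}: {scores.mean():.3f} +/- {scores.std():.3f}")
```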
Model-Based Simulation in Möbius: An Efficient Approach Targeting Loosely Interconnected Components
Abstract
This paper addresses the generation of stochastic models for dependability and performability analysis of complex systems through automatic replication of template models inside the Möbius modeling framework. The proposed solution is tailored to systems composed of large populations of similar non-anonymous components, loosely interconnected with each other (as typically encountered in the electrical or transportation sectors). The approach is based on models that define channels shared among replicas, used to exchange the values of each state variable of a replica with the other replicas that need them. The goal is to improve the performance of simulation-based solvers with respect to the existing state-sharing approach when employed in modeling the addressed class of systems. Simulation results for the time overheads induced by both the channel-sharing and state-sharing approaches in different system scenarios are presented and discussed. They confirm the expected efficiency gain of the proposed channel-sharing approach in the addressed system context.
Giulio Masetti, Silvano Chiaradonna, Felicita Di Giandomenico
Analysis of Performance and Energy Consumption in the Cloud
Abstract
We analyze a cloud system represented by a hysteresis multi-server queueing system. It is characterized by forward and backward thresholds for the activation and deactivation of blocks of servers, each block representing a set of VMs (Virtual Machines). The system is represented by a complex Markov chain which is difficult to analyse when the size of the system is large. We propose both analytical and numerical mathematical methods for deriving the steady-state probability distribution. We then compute performance and energy consumption measures and define an overall cost taking both aspects into account. We compare the proposed methods with respect to computation time and analyse the impact of some parameters on the behaviour of the system.
Mehdi Kandi, Farah Aït-Salaht, Hind Castel-Taleb, Emmanuel Hyon
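To illustrate the kind of threshold-based Markov chain involved, the sketch below builds the generator of a simple hysteresis multi-server queue and solves for its steady state numerically; the activation/deactivation rules and the direct solver are illustrative simplifications, not the paper's analytical or numerical methods.

```python
import numpy as np

def hysteresis_ctmc(lam, mu, c, K, B, F, R):
    """Generator of a hysteresis multi-server queue (illustrative construction,
    not the paper's exact model).  State (n, k): n jobs in the system, k active
    blocks of c exponential servers each.  Block k+1 is switched on when an
    arrival pushes n up to F[k]; the last active block is switched off when a
    departure pulls n down to R[k]."""
    states = [(n, k) for n in range(B + 1) for k in range(1, K + 1)]
    index = {s: i for i, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for (n, k), i in index.items():
        if n < B:                                   # arrival
            k_up = k + 1 if k < K and n + 1 >= F[k - 1] else k
            Q[i, index[(n + 1, k_up)]] += lam
        if n > 0:                                   # departure
            rate = min(n, k * c) * mu
            k_down = k - 1 if k > 1 and n - 1 <= R[k - 2] else k
            Q[i, index[(n - 1, k_down)]] += rate
    Q -= np.diag(Q.sum(axis=1))
    return states, Q

def steady_state(Q):
    """Solve pi Q = 0 with sum(pi) = 1 directly (fine for moderate state spaces)."""
    A = np.vstack([Q.T, np.ones(Q.shape[0])])
    b = np.zeros(Q.shape[0] + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

states, Q = hysteresis_ctmc(lam=8.0, mu=1.0, c=4, K=3, B=40, F=[10, 20], R=[5, 12])
pi = steady_state(Q)
```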
Deriving Power Models for Architecture-Level Energy Efficiency Analyses
Abstract
In early design phases and during software evolution, design-time energy efficiency analyses enable software architects to reason about the effect of design decisions on energy efficiency. Energy efficiency analyses rely on accurate power models to estimate power consumption. Deriving power models that are both accurate and usable for design-time predictions requires extensive measurements and manual analysis. Existing approaches that aim to automate the extraction of power models focus on the construction of models for runtime estimation of power consumption. Power models constructed by these approaches do not allow users to identify the central set of system metrics that impact energy efficiency prediction accuracy. The identification of these central metrics is important for design-time analyses, as an accurate prediction of each metric incurs modeling effort. We propose a methodology for the automated construction of multi-metric power models using systematic experimentation. Our approach enables the automated training and selection of power models for the design-time prediction of power consumption. We validate our approach by evaluating the prediction accuracy of the derived power models for a set of enterprise and data-intensive application benchmarks.
Christian Stier, Dominik Werle, Anne Koziolek
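A minimal sketch of regression-based power model construction with metric selection; the metric names, synthetic data, and brute-force subset search are assumptions standing in for the paper's systematic experimentation and automated selection.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical measurement table: system metrics sampled under a benchmark
# workload together with the measured power draw.  Metric names and the
# synthetic data are illustrative, not taken from the paper.
metrics = ["cpu_util", "mem_bandwidth", "disk_io"]
rng = np.random.default_rng(3)
X = rng.uniform(size=(300, len(metrics)))
power = 80 + 120 * X[:, 0] + 15 * X[:, 1] + rng.normal(0, 3, 300)  # watts

# Try every non-empty subset of metrics and keep the one with the best
# cross-validated prediction error -- a brute-force stand-in for automated
# power model selection.
best = None
for r in range(1, len(metrics) + 1):
    for subset in combinations(range(len(metrics)), r):
        score = cross_val_score(LinearRegression(), X[:, list(subset)], power,
                                cv=5, scoring="neg_mean_absolute_error").mean()
        if best is None or score > best[0]:
            best = (score, subset)

chosen = [metrics[i] for i in best[1]]
print(f"selected metrics: {chosen}, CV error: {-best[0]:.2f} W")
```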
ADaCS: A Tool for Analysing Data Collection Strategies
Abstract
Given a model with multiple input parameters and multiple possible sources for collecting data for those parameters, a data collection strategy is a way of deciding from which sources to sample data in order to reduce the variance of the model's output. Cain and Van Moorsel have previously formulated the problem of finding an optimal data collection strategy when each parameter can be associated with a prior normal distribution and when sampling incurs a cost. In this paper, we present ADaCS, a new tool built as an extension of PRISM, which automatically analyses all possible data collection strategies for a model and selects the optimal one. We illustrate ADaCS on attack trees, a structured approach to analysing the impact and the likelihood of success of attacks and defenses on computer and socio-technical systems. Furthermore, we introduce a new strategy exploration heuristic that significantly improves on a brute-force approach.
John C. Mace, Nipun Thekkummal, Charles Morisset, Aad Van Moorsel
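The flavour of the optimisation can be sketched with normal priors and a brute-force search over sampling allocations; the linear output model, the numbers, and the budget are invented for illustration and do not reflect ADaCS internals.

```python
import numpy as np
from itertools import product

# Illustrative setting: a model output y = sum_i w_i * theta_i whose input
# parameters theta_i have normal priors; each source i delivers noisy samples
# of theta_i at a per-sample cost.  All values below are assumptions.
w          = np.array([1.0, 0.5, 2.0])   # output sensitivity to each parameter
prior_var  = np.array([4.0, 1.0, 9.0])   # prior variance of each parameter
noise_var  = np.array([1.0, 2.0, 4.0])   # sampling noise variance per source
cost       = np.array([1.0, 1.0, 3.0])   # cost per sample from each source
budget     = 20.0

def output_variance(n_samples):
    """Variance of the model output after n_samples[i] observations of
    parameter i (standard normal-normal posterior update)."""
    post_var = 1.0 / (1.0 / prior_var + n_samples / noise_var)
    return float(w**2 @ post_var)

# Brute-force search over allocations within the budget; a strategy
# exploration heuristic would prune this space.
best = None
for alloc in product(range(21), repeat=3):
    n = np.array(alloc, dtype=float)
    if n @ cost <= budget:
        v = output_variance(n)
        if best is None or v < best[0]:
            best = (v, alloc)

print(f"best allocation {best[1]} gives output variance {best[0]:.3f}")
```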

Case Studies

Frontmatter
Improving ZooKeeper Atomic Broadcast Performance by Coin Tossing
Abstract
ZooKeeper atomic broadcast (Zab) is at the core of the ZooKeeper system, enforcing a total order on service requests that seek to modify the replicated service state. Since it is a leader-based protocol, its performance degrades when the leader server has to handle increased message traffic. We address this concern by having the other, non-leader server replicas toss a coin and broadcast their acknowledgement of a leader's proposal only if the toss results in an outcome of Head. We model the coin-tossing process and derive analytical expressions for estimating the coin's probability of Head for a given arrival rate of service requests, such that the dual objectives of performance gains and traffic reduction can be accomplished. Experiments compare the performance of our coin-tossing Zab version (ZabCT) with that of Zab and confirm that the dual objectives are demonstrably met under heavy workloads. Moreover, ZabCT meets all requirements essential for the crash-tolerance provisions within Zab, which can therefore be adopted in any ZabCT implementation.
Ibrahim EL-Sanosi, Paul Ezhilchelvan
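A back-of-the-envelope sketch of the traffic trade-off (not the paper's analytical derivation): if each follower acknowledges independently with probability p, a binomial tail gives the chance that a quorum of acknowledgements still arrives, and one can search for the smallest workable p.

```python
from math import comb

def quorum_ack_probability(p, followers, quorum):
    """Probability that at least `quorum` of `followers` acknowledge a proposal
    when each acknowledges independently with probability p."""
    return sum(comb(followers, k) * p**k * (1 - p)**(followers - k)
               for k in range(quorum, followers + 1))

def minimal_head_probability(followers, quorum, target=0.999, step=0.001):
    """Smallest coin bias p that still reaches the quorum with probability at
    least `target` -- an illustrative calculation only."""
    p = step
    while p < 1.0 and quorum_ack_probability(p, followers, quorum) < target:
        p += step
    return min(p, 1.0)

followers, quorum = 4, 2   # assumed 5-server ensemble: leader + 4 followers,
                           # majority reached with 2 follower acks
p = minimal_head_probability(followers, quorum)
print(f"coin bias p = {p:.3f}, expected acks per proposal = {followers * p:.2f}"
      f" instead of {followers}")
```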
Modelling and Analysis of Commit Protocols with PEPA
Abstract
This paper introduces performance models of two-phase and three-phase commit protocols, specified formally using the Markovian process algebra PEPA. We show how we can investigate the performance of such distributed commit protocols to gain more insight into the system behaviour under different loads. The commit phases of the protocols are examined using discrete state space (CTMC) and fluid (ODE) analysis and then compared, to better understand how performance is affected by the different protocol behaviours.
Said Naser Said Kamil, Nigel Thomas
Stochastic Models for Solar Power
Abstract
In this work we develop a stochastic model for the solar power at the surface of the earth. We combine a deterministic model of the clear-sky irradiance with a stochastic model for the so-called clear-sky index to obtain a stochastic model for the actual irradiance hitting the surface of the earth. Our clear-sky index model is a 4-state semi-Markov process in which the state durations and the clear-sky index values in each state have phase-type distributions. We use per-minute solar irradiance data to tune the model, and are hence able to capture fluctuations at small time scales. We compare our model with the on-off power source model developed by Miozzo et al. (2014) for the power generated by photovoltaic panels, and with a modified version that we propose, in which the output current is frequently resampled instead of being constant for the duration of the "on" state. Computing the autocorrelation functions for all proposed models, we find that the irradiance model surpasses the on-off models and is able to capture the multiscale correlations that are inherently present in solar irradiance. The power spectral density of generated trajectories closely matches that of the measurements. We believe our irradiance model can be used not only in the mathematical analysis of energy-harvesting systems but also in their simulation.
Dimitra Politaki, Sara Alouf
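A toy version of the layered model structure, with a deterministic clear-sky profile modulated by a semi-Markov clear-sky index; the sinusoidal profile, gamma state durations, and per-state index ranges are placeholders for the paper's fitted clear-sky model and phase-type distributions.

```python
import numpy as np

rng = np.random.default_rng(7)

def clear_sky_irradiance(minute_of_day):
    """Toy deterministic clear-sky profile (placeholder for the clear-sky model
    used in the paper): zero at night, sinusoidal bump during the day."""
    day_fraction = (minute_of_day - 6 * 60) / (12 * 60)   # daylight 06:00-18:00
    return np.where((0 <= day_fraction) & (day_fraction <= 1),
                    1000.0 * np.sin(np.pi * day_fraction), 0.0)

# 4-state semi-Markov clear-sky-index model.  The transition matrix, mean state
# durations and per-state index ranges are invented for this sketch.
P          = np.array([[0.0, 0.7, 0.2, 0.1],
                       [0.4, 0.0, 0.4, 0.2],
                       [0.2, 0.5, 0.0, 0.3],
                       [0.1, 0.3, 0.6, 0.0]])
mean_stay  = np.array([60.0, 25.0, 15.0, 8.0])     # minutes per state
index_low  = np.array([0.90, 0.60, 0.30, 0.05])
index_high = np.array([1.05, 0.90, 0.60, 0.30])

def simulate_irradiance(minutes):
    state, t = 0, 0
    kappa = np.empty(minutes)                       # clear-sky index trace
    while t < minutes:
        stay = max(1, int(rng.gamma(2.0, mean_stay[state] / 2.0)))
        kappa[t:t + stay] = rng.uniform(index_low[state], index_high[state])
        t += stay
        state = rng.choice(4, p=P[state])
    minute = np.arange(minutes) % (24 * 60)
    return kappa * clear_sky_irradiance(minute)     # W/m^2

trace = simulate_irradiance(3 * 24 * 60)            # three simulated days
```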
Backmatter
Metadata
Title: Computer Performance Engineering
Editors: Philipp Reinecke, Antinisca Di Marco
Copyright Year: 2017
Electronic ISBN: 978-3-319-66583-2
Print ISBN: 978-3-319-66582-5
DOI: https://doi.org/10.1007/978-3-319-66583-2