
About this Book

This book constitutes the proceedings of the 20th International GI/ITG Conference on Measurement, Modelling and Evaluation of Computing Systems, MMB 2020, held in Saarbrücken, Germany, in March 2020.
The 16 full papers presented in this volume were carefully reviewed and selected from 32 submissions. They deal with scientific aspects of the measurement, modelling and evaluation of intelligent systems, including computer architectures, communication networks, distributed systems and software, autonomous systems, workflow systems, cyber-physical systems and networks, and the Internet of Things, as well as highly dependable, highly performant and highly secure systems.



Performance Analytics of a Virtual Reality Streaming Model

This work focuses on the post-analysis of performance results by means of Performance Analytics. The results to be post-analysed are provided by a Stochastic Fluid Flow Model (SFFM) of Virtual Reality (VR) streaming. Performance Analytics here means using the Machine Learning (ML) algorithm M5P to construct model trees, which we examine, among other aspects, for asymptotic behaviour and parameter impact in both uni- and multivariate settings. We gain valuable insights into key parameters and related thresholds of importance for good VR streaming performance.
Markus Fiedler

To Fail or Not to Fail: Predicting Hard Disk Drive Failure Time Windows

Due to the increasing size of today’s data centers as well as the expectation of 24/7 availability, the complexity of hardware administration continuously increases. Techniques such as the Self-Monitoring, Analysis, and Reporting Technology (S.M.A.R.T.) support the monitoring of the hardware. However, those techniques often lack algorithms for intelligent data analytics. In particular, integrating machine learning to identify potential failures in advance is promising for reducing administration overhead. In this work, we present three machine learning approaches to (i) identify imminent failures, (ii) predict time windows for failures, and (iii) predict the exact time-to-failure. In a case study with real data from 369 hard disks, we achieve an F1-score of up to 98.0% and 97.6% for predicting potential failures with two or more time windows, respectively, and a hit rate of 84.9% (with a mean absolute error of 4.5 h) for predicting the time-to-failure.
Marwin Züfle, Christian Krupitzer, Florian Erhard, Johannes Grohmann, Samuel Kounev
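The evaluation metrics named in the abstract are easy to pin down. As an illustrative sketch only (not the authors' pipeline), the following computes the F1-score for binary failure predictions and the mean absolute error, in hours, for time-to-failure estimates:

```python
def f1_score(y_true, y_pred):
    """F1-score for binary labels (1 = imminent failure predicted/observed)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def mean_absolute_error(y_true, y_pred):
    """MAE between predicted and actual time-to-failure (e.g., in hours)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
```

The 98.0%/97.6% and 4.5 h figures in the abstract are exactly these two quantities, evaluated on the authors' hard-disk data.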

Concurrent MDPs with Finite Markovian Policies

The recently defined class of Concurrent Markov Decision Processes (CMDPs) allows one to describe scenario-based uncertainty in sequential decision problems such as scheduling or admission problems. The resulting optimization problem of computing an optimal policy is NP-hard. This paper introduces a new class of policies for CMDPs on infinite horizons. A mixed integer linear program and an efficient approximation algorithm based on policy iteration are defined for the computation of optimal policies. The proposed approximation algorithm also improves the available approximate value iteration algorithm for the finite horizon case.
Peter Buchholz, Dimitri Scheftelowitsch
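Policy iteration, the building block of the paper's approximation algorithm, is standard for ordinary (non-concurrent) MDPs. A minimal sketch on a hypothetical two-state, two-action example (not a CMDP, just the discounted MDP core):

```python
# Transition structure: P[s][a] = list of (probability, next_state, reward).
P = {
    0: {'a': [(1.0, 0, 1.0)], 'b': [(1.0, 1, 0.0)]},
    1: {'a': [(1.0, 1, 2.0)], 'b': [(1.0, 0, 0.0)]},
}

def policy_iteration(P, gamma, tol=1e-10):
    policy = {s: next(iter(P[s])) for s in P}
    V = {s: 0.0 for s in P}
    while True:
        # Policy evaluation: iterate the Bellman operator for the fixed policy.
        while True:
            delta = 0.0
            for s in P:
                v = sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][policy[s]])
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < tol:
                break
        # Policy improvement: act greedily with respect to V.
        stable = True
        for s in P:
            best = max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                               for p, s2, r in P[s][a]))
            if best != policy[s]:
                policy[s], stable = best, False
        if stable:
            return policy, V
```

For discount factor 0.9, the optimal policy moves from state 0 to state 1 and then collects the reward of 2 forever; the concurrent setting of the paper adds scenario uncertainty on top of this scheme.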

A Stochastic Automata Network Description for Spatial DNA-Methylation Models

DNA methylation is an important biological mechanism to regulate gene expression and control cell development. Mechanistic modeling has become a popular approach to enhance our understanding of the dynamics of methylation pattern formation in living cells. Recent findings suggest that the methylation state of a cytosine base can be influenced by its DNA neighborhood. Therefore, it is necessary to generalize existing mathematical models that consider only one cytosine and its partner on the opposite DNA strand (CpG), in order to include such neighborhood dependencies. One approach is to describe the system as a stochastic automata network (SAN) with functional transitions. We show that single-CpG models can successfully be generalized to multiple CpGs using the SAN description and verify the results by comparison with extensive Monte Carlo simulations.
Alexander Lück, Verena Wolf
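The SAN formalism builds the global CTMC generator from small per-component generators. Without functional transitions, independent components combine via the Kronecker sum Q = Q1 ⊗ I + I ⊗ Q2; functional transitions, as used in the paper for neighborhood dependencies, generalize this by letting the local rates depend on the states of other automata. A minimal sketch of the independent case, with made-up 2x2 generators:

```python
def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def identity(n):
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def kron_sum(Q1, Q2):
    """Global generator of two independent components: Q1 (+) Q2."""
    left = kron(Q1, identity(len(Q2)))
    right = kron(identity(len(Q1)), Q2)
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(left, right)]
```

The appeal for methylation models is that the Kronecker structure never forms the full state space explicitly, which is what makes multi-CpG models tractable.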

An ns-3 Model for Multipath Communication with Terrestrial and Satellite Links

Some terrestrial Internet access links provide only low data rates. Internet access via geostationary satellites, on the other hand, provides data rates of up to 50 Mbit/s, but suffers from high latencies. Combining low data rate, low latency Internet access (e.g., DSL light) with high data rate, high latency Internet access (e.g., geostationary satellites) can exploit the advantages of both. We discuss multipath communication architectures and scheduling strategies for such heterogeneous link combinations. The solution described in this paper relies on Split TCP and Performance Enhancing Proxies. We implement and evaluate the solution with the network simulator ns-3. A well-established traffic model for loading websites is enhanced with different HTTP variants. The results show that combining heterogeneous link types can yield page load times close to those of a high data rate, low latency Internet access.
Jörg Deutschmann, Kai-Steffen Jens Hielscher, Reinhard German
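The trade-off behind such scheduling decisions can be illustrated with a back-of-the-envelope model: the transfer time of an object is roughly latency plus size divided by rate, so small objects favor the low-latency path and bulk transfers favor the high-rate path. The path parameters below are hypothetical, and this sketch ignores TCP dynamics, proxies, and everything else the paper actually simulates in ns-3:

```python
# Hypothetical path parameters: (one_way_latency_s, data_rate_bit_per_s).
PATHS = {
    "dsl_light": (0.030, 1_000_000),    # low data rate, low latency
    "satellite": (0.300, 50_000_000),   # high data rate, high latency
}

def transfer_time(size_bits, latency, rate):
    """First-order estimate: propagation delay plus serialization time."""
    return latency + size_bits / rate

def pick_path(size_bits):
    """Choose the path with the smaller estimated transfer time."""
    return min(PATHS, key=lambda p: transfer_time(size_bits, *PATHS[p]))
```

Under these numbers, a 10 kbit request goes over DSL light, while a 100 Mbit download goes over the satellite link, which is exactly the kind of split a multipath scheduler exploits.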

Model-Based Performance Predictions for SDN-Based Networks: A Case Study

Emerging paradigms for network virtualization like Software-Defined Networking (SDN) and Network Functions Virtualization (NFV) pose new challenges for accurate performance modeling and analysis tools. Performance modeling and prediction approaches that support SDN or NFV technologies therefore help system operators to analyze the performance of a data center and its corresponding network. The Descartes Network Infrastructures (DNI) approach offers a high-level descriptive language to model SDN-based networks, which can be transformed into various predictive modeling formalisms. However, these modeling concepts have not yet been evaluated in a realistic scenario.
In this paper, we present an extensive case study evaluating the DNI modeling capabilities, the transformations to predictive models, and the performance prediction using the OMNeT++ and SimQPN simulation frameworks. We present five realistic scenarios of a content distribution network (CDN), compare the performance predictions with real-world measurements, and discuss modeling gaps and calibration issues causing mispredictions in some scenarios.
Stefan Herrnleben, Piotr Rygielski, Johannes Grohmann, Simon Eismann, Tobias Hoßfeld, Samuel Kounev

Design of a Hybrid Genetic Algorithm for Time-Sensitive Networking

With Time-Sensitive Networking (TSN), the IEEE 802.1 Task Group is extending the Ethernet standard with time-sensitive capabilities to establish a common ground for real-time communication systems over Ethernet. The Time-Sensitive Networking Task Group introduces a time-triggered transmission approach in IEEE 802.1Qbv to enable the deterministic transmission of time-critical network traffic, which requires scheduling strategies. Genetic algorithms are well suited to solving these scheduling problems in Time-Sensitive Networks. The difficulty is to design the genetic algorithm such that it finds an optimal or near-optimal solution for different complex problems, taking performance and quality of the schedule into account. The complexity of schedules for TSN depends on the decision space of a network designer, which comprises combining a variable number of network participants with a variable number of TSN flows, as well as assuming fixed or flexible routes for the flows. In this paper, we discuss a design approach for a hybrid genetic algorithm, including a chromosome representation for the routing and scheduling problems in TSN, the choice of genetic operators, and a neighborhood search to find a near-optimal solution. Additionally, we introduce an approach to compress the resulting schedules. Our evaluations show that the proposed hybrid genetic algorithm competes with the well-adapted NEH algorithm in terms of schedule quality, and outperforms the NEH algorithm in terms of computing time.
Anna Arestova, Kai-Steffen Jens Hielscher, Reinhard German
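The hybridization idea (a genetic algorithm whose best individuals are polished by a neighborhood search) can be shown in miniature. The sketch below is purely illustrative: the chromosome is a flow ordering, the cost function is a toy sum of completion times on one link rather than an 802.1Qbv schedule, and a simple swap mutation stands in for the paper's genetic operators:

```python
import random

def total_completion_time(order, durations):
    """Toy schedule cost: sum of flow completion times on a single link."""
    t, cost = 0.0, 0.0
    for flow in order:
        t += durations[flow]
        cost += t
    return cost

def local_search(order, durations):
    """Neighborhood search: adjacent swaps until no improvement remains."""
    order = list(order)
    improved = True
    while improved:
        improved = False
        for i in range(len(order) - 1):
            cand = order[:]
            cand[i], cand[i + 1] = cand[i + 1], cand[i]
            if (total_completion_time(cand, durations)
                    < total_completion_time(order, durations)):
                order, improved = cand, True
    return order

def hybrid_ga(durations, pop_size=10, generations=20, seed=1):
    rng = random.Random(seed)
    n = len(durations)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: total_completion_time(o, durations))
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(n), 2)       # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=lambda o: total_completion_time(o, durations))
    return local_search(best, durations)         # hybrid step: polish locally
```

For this toy cost, adjacent-swap local search provably ends in the shortest-processing-time order, which is why the hybrid step reliably repairs whatever the genetic phase leaves imperfect; the paper's contribution lies in doing the analogous thing for real TSN routing and scheduling chromosomes.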

Performance Analysis for Loss Systems with Many Subscribers and Concurrent Services

We present models for the performance analysis of IPTV services. The main topic is the interplay of the restricted number of channels, the larger number of available programs to be transmitted over the channels, and the concurrent access of many users to the same program or channel (multicast services). In a simple Engset-like model, we compute loss probabilities without counting the number of concurrent users in the system. With a more detailed model, in which we explicitly count the number of concurrent users for the active channels, we show how to compute both the provider's revenue and loss probabilities. This model can be considered a stratified system of different queueing systems. In both cases, the main results are expressions of simple structure for the steady state distributions, indicating separability.
Hans Daduna, Ruslan Krenzler
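The classical Engset formula is the natural starting point for such models. As a sketch (the paper's models are richer, with multicast and concurrent users, but the Engset-like core looks as follows), the time congestion with a finite subscriber population is:

```python
from math import comb  # requires Python 3.8+

def engset_time_congestion(n_sources, c_channels, alpha):
    """Probability that all c channels are busy (time congestion) in an
    Engset model with n_sources subscribers, where alpha = lambda / mu
    is the offered traffic ratio per idle source."""
    denom = sum(comb(n_sources, k) * alpha**k for k in range(c_channels + 1))
    return comb(n_sources, c_channels) * alpha**c_channels / denom
```

The multicast twist in the paper is that a newly active channel may serve arbitrarily many additional subscribers at no extra channel cost, which is what makes avoiding the explicit user count attractive.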

On the Stochastic End-to-End Delay Analysis in Sink Trees Under Independent and Dependent Arrivals

Sink trees are a frequent topology in many networked systems; typical examples are multipoint-to-point label switched paths in Multiprotocol Label Switching networks or wireless sensor networks with sensor nodes reporting to a base station. In this paper, we compute end-to-end delay bounds using a stochastic network calculus approach for a flow traversing a sink tree.
For n servers with one flow of interest and n cross-flows, we derive solutions for a general class of arrivals with moment-generating function bounds. Comparing algorithms known from the literature, our results show that, e.g., pay multiplexing only once has to consider fewer stochastic dependencies in the analysis.
In numerical experiments, we observe that the reduced dependencies, and therefore fewer applications of Hölder’s inequality, lead to a significant improvement of the delay bounds with fractional Brownian motion as a traffic model. Finally, we also consider a sink tree with dependent cross-flows and evaluate the impact on the delay bounds.
Paul Nikolaus, Jens Schmitt

Graph-Based Mobility Models: Asymptotic and Stationary Node Distribution

Under standard assumptions on the stochastic behaviour of mobile nodes in a graph-based mobility model, we derive the stationary distribution for the network. This distribution also describes the asymptotic behaviour of the system. We consider closed systems (with a fixed number of moving nodes) as well as open systems (where nodes arrive at and depart from the graph-structured area). The stationary state shows that these graph-based models for mobile nodes are separable: for the open system, the stationary distribution is a product of independent coordinate processes, and for the closed system, conditional independence holds.
Hans Daduna

Parallelization of EM-Algorithms for Markovian Arrival Processes

Markovian Arrival Processes (MAPs) are widely used stochastic models to describe correlated events. For fitting the parameters of MAPs to measured data, the expectation-maximization (EM) algorithm is commonly seen as the best approach. Unfortunately, EM algorithms require a huge computational effort if the number of data points is large or the MAP has a large dimension. The classical EM algorithm runs sequentially through the data, which is necessary to consider dependencies between data points.
In this paper we present a parallel variant of the EM algorithm for MAPs with a general structure. The parallel version of the algorithm is developed for multicore systems with shared memory. It is shown that the parallel algorithm yields a significant speedup compared to its sequential counterpart.
Andreas Blume, Peter Buchholz, Jan Kriege

It Sometimes Works: A Lifting Algorithm for Repair of Stochastic Process Algebra Models

The paper presents an algorithm for lifting rate modification information from a flat Markovian model to its high-level modular description, specified with the help of a Stochastic Process Algebra (SPA). During the lifting, a specific set of transition rates in the model components is changed, and – if necessary – the interaction between the components is also modified, in order to realise context-dependent rate modifications. It is shown that the proposed algorithm cannot always find a suitable lifting, but if such a lifting exists, the algorithm is guaranteed to find one. Furthermore, the paper shows that for a certain class of SPA product form models, the lifting algorithm will always find a solution.
Amin Soltanieh, Markus Siegle

An Efficient Brute Force Approach to Fit Finite Mixture Distributions

This paper presents a brute force approach to fitting finite mixtures of distributions, considering the empirical probability density and cumulative distribution functions as well as the empirical moments. The fitting problem is solved using a non-negative least squares method, which determines a mixture from a larger set of candidate distributions.
The approach is experimentally validated for finite mixtures of Erlang distributions. The results show that a feasible number of component distributions, accurately fitting the empirical data, is obtained within a short CPU time.
Falko Bause
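The core mechanism is to evaluate every candidate density at the empirical sample points, stack these columns into a matrix A, and solve min ||Ax − b||² subject to x ≥ 0; the nonzero entries of x are the mixture weights. A sketch with an Erlang basis follows; the projected-gradient solver below is a simple stand-in for a proper NNLS routine (the paper presumably uses a standard solver), chosen only to keep the example self-contained:

```python
from math import exp, factorial

def erlang_pdf(x, k, lam):
    """Density of an Erlang(k, lam) candidate component."""
    return lam**k * x**(k - 1) * exp(-lam * x) / factorial(k - 1)

def nnls(A, b, steps=20000, eta=0.01):
    """Minimal projected-gradient NNLS: min ||Ax - b||^2 subject to x >= 0.
    A stand-in for an active-set NNLS solver; fine for tiny, well-scaled A."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        grad = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - eta * grad[j]) for j in range(n)]  # project onto x >= 0
    return x
```

In use, A[i][j] = erlang_pdf(x_i, k_j, lam_j) for sample point x_i and candidate component j, and b holds the empirical density values; components receiving weight zero simply drop out of the fitted mixture.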

Freight Train Scheduling in Railway Systems

Passenger train timetables in Europe are often periodic and predetermined for longer periods of time to facilitate travel planning. Freight train schedules, however, depend on the actual demand. Therefore, it is a common problem in railway systems to schedule additional freight train requests under consideration of a given timetable for passenger trains. In this paper, we present a model for railway systems that allows us to solve this scheduling problem as a constrained time-dependent shortest path problem. We adapt and implement an algorithm to solve this type of problem, examine our results, and discuss possible modifications and extensions of this approach.
Rebecca Haehn, Erika Ábrahám, Nils Nießen
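The time-dependent core of such a formulation can be sketched with a Dijkstra variant in which each edge carries a travel-time function of the departure time (here assumed FIFO: departing later never means arriving earlier). This sketch deliberately omits the constraints that encode the given passenger timetable, which are the hard part of the paper's problem:

```python
import heapq

def td_shortest_path(graph, source, target, t0):
    """Earliest arrival at target when departing source at time t0.
    graph[u] = list of (v, travel_time_fn), where travel_time_fn(t) is the
    traversal time of edge (u, v) when departing u at time t (FIFO)."""
    earliest = {source: t0}
    pq = [(t0, source)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == target:
            return t
        if t > earliest.get(u, float("inf")):
            continue  # stale queue entry
        for v, travel_time in graph[u]:
            arrival = t + travel_time(t)
            if arrival < earliest.get(v, float("inf")):
                earliest[v] = arrival
                heapq.heappush(pq, (arrival, v))
    return float("inf")
```

Under the FIFO property, label-setting remains correct exactly as in the static case; blocked track sections or timetable slots can be modelled by travel-time functions that grow during occupied intervals.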

A Tool for Requirements Analysis of Safety-Critical Cyber-Physical Systems

One of the key challenges in the design of Safety-Critical Cyber-Physical Systems is Requirements Analysis. Current Requirements Analysis approaches range from informal, human-centered ones that are hard to automate, to formal approaches that often lack freedom of expression. Furthermore, most approaches are general-purpose and do not focus on a particular domain, which makes identifying the specific requirements of a given domain non-trivial.
To overcome these challenges, this paper presents aDSL, a Domain-Specific Language and toolset for the Requirements Analysis of Safety-Critical Cyber-Physical Systems. The approach comprises a mixture of informal and formal elements to enable both automation and freedom of expression; a number of stakeholders introduce and negotiate their requirements. The aDSL language is used to describe all such requirements precisely, concisely and unambiguously. We have validated aDSL, using simulation techniques and actors that represent the stakeholders, on a case in the agro-machines domain. The proposed approach allows the discovery of requirements in a semi-automatic way.
Freek van den Berg, Boudewijn R. Haverkort

Automated Rare Event Simulation for Fault Tree Analysis via Minimal Cut Sets

Monte Carlo simulation is a common technique to estimate dependability metrics for fault trees. A bottleneck in this technique is the number of samples needed, especially when the interesting events are rare and occur with low probability. Rare Event Simulation (RES) reduces the number of samples when analysing rare events. Importance splitting is an RES method that spawns more simulation runs from promising system states. How promising a state is, is indicated by an importance function, which concentrates the information that makes this method efficient. Importance functions are typically given by domain and RES experts. This hinders re-utilisation and involves decisions entailing potential human error. Focusing on (general) fault trees, in this paper we automatically derive importance functions based on the tree structure. For this we exploit a common fault tree concept, namely cut sets: the more elements from a cut set have failed, the higher the importance. We show that the cut-set-derived importance function is an easy-to-implement and simple concept that can nonetheless compete with another (more involved) automatic importance function for RES.
Carlos E. Budde, Mariëlle Stoelinga
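The cut-set idea from the abstract lends itself to a one-line importance function. The sketch below is one plausible instantiation (the normalization by cut-set size and the max over cut sets are illustrative choices, not necessarily the authors' exact definition): a state is the more promising, the closer any minimal cut set is to being fully failed.

```python
def importance(failed, cut_sets):
    """Cut-set-derived importance of a system state.
    failed:   set of basic events that have failed in the current state.
    cut_sets: iterable of minimal cut sets (sets of basic events).
    Returns a value in [0, 1]; 1.0 means some cut set is complete,
    i.e. the top event (system failure) has occurred."""
    return max(sum(1 for e in cs if e in failed) / len(cs) for cs in cut_sets)
```

An importance-splitting engine would compare this value against level thresholds and spawn extra simulation runs whenever a run crosses a level upwards, concentrating effort on states close to system failure.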

