
2016 | Book | 1st edition

Measurement, Modelling and Evaluation of Dependable Computer and Communication Systems

18th International GI/ITG Conference, MMB & DFT 2016, Münster, Germany, April 4–6, 2016, Proceedings

Edited by: Anne Remke, Boudewijn R. Haverkort

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 18th International GI/ITG Conference on Measurement, Modelling and Evaluation of Computing Systems and Dependability and Fault Tolerance, MMB & DFT 2016, held in Münster, Germany, in April 2016.

The 12 full papers and 3 short papers included in this volume were carefully reviewed and selected from 23 submissions. The papers deal with the fields of performance evaluation, dependability, and fault tolerance of computer and communication systems. The relatively new topic of smart grids is also covered.

Table of Contents

Frontmatter
DDoS 3.0 - How Terrorists Bring Down the Internet
Abstract
Dependable operation of the Internet is of crucial importance for our society. In recent years Distributed Denial of Service (DDoS) attacks have quickly become a major problem for the Internet. Most of these attacks are initiated by kids that target schools, ISPs, banks and web-shops; the Dutch NREN (SURFNet), for example, sees around 10 such attacks per day. Performing attacks is extremely simple, since many websites offer “DDoS as a Service”; in fact it is easier to order a DDoS attack than to book a hotel! The websites that offer such DDoS attacks are called “Booters” or “Stressers” and are able to perform attacks with a strength of many Gbps. Although current attempts to mitigate attacks seem promising, analysis of recent attacks shows that it is quite easy to build next-generation attack tools able to generate DDoS attacks a thousand to a million times stronger than the ones we see today. If such tools are used by nation-states or, more likely, terrorists, it should be possible to completely stop the Internet. This paper argues that we should prepare for such novel attacks.
Aiko Pras, José Jair Santanna, Jessica Steinberger, Anna Sperotto
SGsim: Co-simulation Framework for ICT-Enabled Power Distribution Grids
Abstract
Empowering power grids with ICT is fundamental to the future power grid. Simulation plays an essential role in evaluating emerging smart grid applications. The presented co-simulation framework SGsim is based on two main simulators, OMNeT++ and OpenDSS. With newly added components, smart grid applications in the electricity distribution network can now be investigated and evaluated. Conservation Voltage Reduction (CVR) is a mechanism to reduce power demand and, eventually, energy consumption. In a case study, the co-simulation framework is used to explore the potential energy saving of applying a closed-loop CVR inside a residential power grid.
Abdalkarim Awad, Peter Bazan, Reinhard German
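Closed-loop CVR, as used in the case study, can be summarized as: lower the feeder voltage setpoint while the measured minimum customer voltage stays above the service limit. The toy controller below illustrates this logic under assumed limits; SGsim couples OMNeT++ and OpenDSS rather than using this placeholder model, and all names and constants here are invented for illustration.

    V_MIN = 0.95   # lower service-voltage limit in per unit (assumed)
    STEP = 0.005   # setpoint adjustment step in per unit (assumed)

    def cvr_controller(measured_min_voltage_pu, setpoint_pu):
        """Lower the feeder voltage setpoint as long as the weakest customer
        stays above V_MIN; back off when the limit is violated."""
        if measured_min_voltage_pu > V_MIN + STEP:
            return setpoint_pu - STEP   # demand falls roughly with voltage
        if measured_min_voltage_pu < V_MIN:
            return setpoint_pu + STEP   # restore the safety margin
        return setpoint_pu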
Improving Cross-Traffic Bounds in Feed-Forward Networks – There is a Job for Everyone
Abstract
Network calculus provides a mathematical framework for deterministically bounding backlog and delay in packet-switched networks. The analysis is compositional and proceeds in several steps. In the first step, a general feed-forward network is reduced to a tandem of servers lying on the path of the flow of interest. This requires deriving bounds on the cross-traffic of that flow. Tight bounds on cross-traffic are crucial for the overall analysis to obtain tight performance bounds. In this paper, we contribute an improvement on this first bounding step in a network calculus analysis. The improvement is based on the so-called total flow analysis (TFA), which has so far seen little usage as it is known to be inferior to other methods for the overall delay analysis. Yet, in this work we show that TFA can actually bring significant benefits in bounding the burstiness of cross-traffic. We investigate analytically and numerically when these benefits occur and show that they can be considerable, with several flows’ delays being improved by more than 40 % compared to existing methods – thus finally giving TFA’s existence a purpose.
Steffen Bondorf, Jens Schmitt
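For orientation, the hop-by-hop growth of cross-traffic burstiness that this first bounding step controls can be sketched with a standard network-calculus result (this illustrates the setting, not the authors’ TFA-based improvement): a token-bucket flow crossing a rate-latency server leaves it with increased burstiness.

    % token-bucket arrival curve and rate-latency service curve, with r <= R
    \alpha(t) = b + r\,t, \qquad \beta(t) = R\,[t - T]^{+}
    % output arrival curve via deconvolution:
    \alpha^{*}(t) = (\alpha \oslash \beta)(t) = b + r\,T + r\,t
    % i.e. the burstiness grows from b to b + rT at each traversed server,
    % and the flow's delay is bounded by D = T + b/R

The looser the cross-traffic burst bound b, the looser every downstream bound, which is why an improvement in this first step propagates through the whole analysis.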
Stochastic Analysis of Energy Consumption in Pool Depletion Systems
Abstract
The evolution of digital technologies and software applications has introduced a new computational paradigm that initially involves the creation of a large pool of jobs, followed by a phase in which all the jobs are executed in systems with limited capacity. For example, many libraries have started digitizing their old books, and video content providers such as YouTube or Netflix need to transcode their content to improve playback performance. Such applications are characterized by a huge number of jobs with different demands on computational resources such as CPU and GPU. Due to the very long computation time required to execute all the jobs, strategies to reduce the total energy consumption are very important.
In this work we present an analytical study of such systems, referred to as pool depletion systems, aimed at showing that very simple configuration parameters may have a non-trivial impact on the performance and especially on the energy consumption. We apply results from queueing theory coupled with an absorption-time analysis of the depletion phase. We show that different optimal settings can be found depending on the considered metric.
Davide Cerotti, Marco Gribaudo, Riccardo Pinciroli, Giuseppe Serazzi
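As a minimal illustration of the absorption-time idea (not the paper’s more general analysis), assume a pool of N statistically identical jobs drained by c servers, each completing jobs at exponential rate μ. The depletion phase is then a pure-death process, and its expected duration is

    E[T_N] = \sum_{k=1}^{N} \frac{1}{\min(k, c)\,\mu}
           = \frac{N - c}{c\,\mu} + \frac{1}{\mu} \sum_{k=1}^{c} \frac{1}{k}
             \qquad (N \ge c)

The tail of the sum, where fewer than c jobs remain and servers idle, is where configuration choices such as how many servers to keep powered begin to dominate the energy balance.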
Moving Queue on a Network
Abstract
We describe a queueing network model for mobile servers on a network’s graph. The underlying principle resembles the procedure of considering a “referenced node” in a static network or a network of mobile nodes. We investigate an integrated model where a “referenced mobile node” is described jointly with all other mobile nodes. The distinguishing feature is that we operate on distinct levels of detail: the microlevel for the “referenced mobile node” and the macrolevel for all other moving nodes. The main achievement is the explicit stationary distribution, which is of product form and indicates separability of the system in equilibrium.
Hans Daduna
A Multi-commodity Simulation Tool Based on TRIANA
Abstract
In this paper we extend the TRIANA-based simulator with a model for the heat demand of households. The heat demand is determined based on factors such as building properties, user setpoints and weather conditions. The simulator exploits the flexibility of both the electricity and heat components to optimize the streams of both commodities, heat and electricity.
Maryam Hajighasemi, Gerard J. M. Smit, Johann L. Hurink
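The paper’s heat-demand model is not reproduced here, but models of this kind typically build on the standard steady-state transmission loss of a building envelope, which already connects the three named factors:

    \dot{Q} = U \cdot A \cdot (T_{\mathrm{set}} - T_{\mathrm{out}})

where U is the overall heat transfer coefficient and A the envelope area (building properties), T_set is the user setpoint, and T_out is the outdoor temperature (weather conditions).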
Performance and Precision of Web Caching Simulations Including a Random Generator for Zipf Request Pattern
Abstract
The steadily growing Internet traffic volume for video, IP-TV and other content needs support from caching systems and architectures, which are provided in global content delivery networks as well as in local networks, on home gateways or user terminals. The efficiency of caching is important in order to save transport capacity and to improve throughput and delays.
However, since analytic solutions for the hit rate, the main caching performance measure, are not available even under the baseline scenario of an independent request model (IRM) with the usual Zipf request pattern and caching strategies, simulation methods are used to evaluate caching efficiency. Based on promising experience with simulation of caching methods in previous work, we study and verify two main prerequisites: First, a fast random Zipf rank generator is derived, which allows simulations to be extended to billions of requests. Moreover, the accuracy of alternative hit-rate estimators is compared based on second-order statistics. The results indicate that the sum of the request probabilities of the objects in the cache provides a more precise estimator of the hit rate than a simple hit count.
Gerhard Hasslinger, Konstantinos Ntougias, Frank Hasslinger
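A minimal sketch of both ingredients, assuming an inverse-CDF generator with a precomputed table (the paper derives its own, faster generator) next to the two hit-rate estimators the abstract compares:

    import bisect
    import random

    def make_zipf_sampler(n, alpha=1.0):
        """Sample ranks 1..n with P(rank k) proportional to 1/k**alpha.
        One binary search per draw; illustrative, not the paper's generator."""
        weights = [1.0 / k ** alpha for k in range(1, n + 1)]
        total = sum(weights)
        cdf, acc = [], 0.0
        for w in weights:
            acc += w / total
            cdf.append(acc)
        cdf[-1] = 1.0  # guard against floating-point round-off
        return lambda: bisect.bisect_left(cdf, random.random()) + 1

    def hit_count_estimate(requests, cache):
        # estimator 1: plain empirical hit rate over a request trace
        return sum(r in cache for r in requests) / len(requests)

    def probability_sum_estimate(cache, prob):
        # estimator 2: sum of request probabilities of cached objects,
        # which the paper finds to be the more precise one
        return sum(prob[obj] for obj in cache)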
PSTeC: A Location-Time Driven Modelling Formalism for Probabilistic Real-Time Systems
Abstract
Internet of Things (IoT) and Cyber-Physical Systems (CPS) have become important topics in both theory and industry. In some application domains, such as when specifying the behaviour of precision mechanics, we need to include features of spatial-temporal consistency. How to model probabilistic real-time systems in such domains is a challenge. This paper presents a modelling formalism, called PSTeC, for describing the behaviour of probabilistic real-time systems, focusing on spatial-temporal consistency with nondeterministic, probabilistic and real-time aspects. The consistency restricts a process to start and finish at the required location and time. Communication between agents is specified by interactive actions. The language we propose is an extension of STeC, a specification language for location-aware real-time systems, adding probabilistic operations so as to support the incorporation of probabilistic aspects. We first give a formal definition of the syntax of PSTeC, then focus on the details of its operational semantics, which maps a PSTeC term onto a Probabilistic Spatial-Temporal Transition System (PSTTS) following the structured operational semantics style. A simple example demonstrates the expressiveness of PSTeC.
Kangli He, Yixiang Chen, Min Zhang, Yuanrui Zhang
Analysis of Hierarchical Semi-Markov Processes with Parallel Regions
Abstract
We consider state charts with generally distributed state sojourn times and with parallel regions in composite states. This corresponds to semi-Markov processes (SMPs) with parallel regions consisting again of SMPs. The concept of parallel regions significantly extends the modeling power: it allows for the specification of non-memoryless activities that take place in parallel on many nested hierarchy levels. Parallel regions can be left either by final states or by exit states, corresponding to the maximum and the minimum of the sojourn times in the regions, respectively. Therefore, concurrent activities with synchronization and competition can easily be modeled. An SMP with parallel regions cannot simply be analyzed by flattening the state space. We propose an analysis based on a steady-state analysis of an embedded Markov chain (EMC) at the top level and a transient analysis at the composite-state level with limited computational effort. An expression for the asymptotic complexity of the analysis is also provided. An example SMP with parallel regions containing all modeling features is illustrated. We carry out experiments on the basis of this model and confirm the results by simulations.
Daniel Homm, Reinhard German
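The max/min correspondence mentioned in the abstract takes a compact form if one assumes the regions’ sojourn times to be independent with CDFs F_1, …, F_m (independence is assumed here purely for illustration):

    % leaving via final states: all regions must finish (maximum)
    F_{\mathrm{final}}(t) = \prod_{i=1}^{m} F_i(t)
    % leaving via an exit state: the first region to exit wins (minimum)
    F_{\mathrm{exit}}(t) = 1 - \prod_{i=1}^{m} \bigl(1 - F_i(t)\bigr)

Synchronization thus corresponds to the product of the CDFs, competition to the product of the survival functions.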
Combining Mobility Models with Arrival Processes
Abstract
The realistic modeling of mobile networks makes it necessary to find adequate models to mimic the movement of mobile nodes. In the past, various such mobility models have been proposed that either create synthetic movement patterns or are based on real-world observations. These models usually assume a constant number of mobile nodes for the simulation. Although in real-world scenarios new nodes will arrive and other nodes will leave the simulation area, little attention has been paid to modeling these arrivals and departures of nodes.
In this paper we present an approach to easily extend mobility models to support the generation of arrivals and departures. For three standard mobility models the effect of this extension on the performance measures of a simple mobile network is shown.
Jan Kriege
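A minimal sketch of the extension’s idea, assuming Poisson arrivals and geometric departures per simulation step; the concrete processes, parameters, and movement model here are illustrative placeholders, not those of the paper:

    import math
    import random

    AREA = (1000.0, 1000.0)  # simulation area in metres (assumed)
    LAMBDA_ARR = 0.2         # mean node arrivals per step (assumed)
    P_DEPART = 0.001         # per-node departure probability per step (assumed)

    def poisson(lam):
        # Knuth's method, adequate for small lam
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= limit:
                return k
            k += 1

    def step(nodes):
        """One simulation step: move all nodes, then apply departures and arrivals."""
        for n in nodes:  # placeholder movement; a real run would plug in
            n['x'] += random.uniform(-1.0, 1.0)  # e.g. a random-waypoint model here
            n['y'] += random.uniform(-1.0, 1.0)
        nodes[:] = [n for n in nodes if random.random() > P_DEPART]
        for _ in range(poisson(LAMBDA_ARR)):
            nodes.append({'x': random.uniform(0.0, AREA[0]),
                          'y': random.uniform(0.0, AREA[1])})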
Product Line Fault Tree Analysis by Means of Multi-valued Decision Diagrams
Abstract
The development of cyber-physical systems such as highly integrated, safety-relevant automotive functions is challenged by an increasing complexity resulting from both customizable products and numerous software and hardware variants. In order to reduce the time to market for scenarios like these, a systematic analysis of the dependencies between functions, as well as of the functional and technical variance, is required (cf. ISO 26262). In this paper we introduce a new approach which allows for a compact representation and analysis of the failure mechanisms of systems marked by numerous variants, also called Product Line Fault Trees (PLFTs), in a unified data structure based on Multi-valued Decision Diagrams (MDDs). Instead of analyzing the Fault Tree (FT) of each variant separately, the proposed method enables one to analyze all variants in a single step. Summing up, this article introduces a systematic modeling concept to analyze fault propagation in variant-rich systems.
Michael Käßmeyer, Rüdiger Berndt, Peter Bazan, Reinhard German
Resolving Contention for Networks-on-Chips: Combining Time-Triggered Application Scheduling with Dynamic Budgeting of Memory Bus Use
Abstract
One of the challenges in the design of integrated real-time systems deployed on modern multicore architectures is finding system configurations where all applications are guaranteed to complete their computations prior to their individual deadlines. Traditionally, timing feasibility analyses, i.e., schedulability tests, take activation patterns and worst-case execution times (WCET) of applications as input. In the setting of multicore architectures with shared infrastructure, WCETs are drastically overestimated: the number of accesses to a shared resource and their service times depend not only on the application itself, but are also significantly influenced by the resource’s use by applications executing on other cores. There are several ways to deal with this phenomenon and give guarantees for the timing behaviour of a real-time system deployed on concurrent hardware. One can either devise analysis techniques and accept the potential under-utilization of the hardware, or employ specific protocols for coordinating the resource sharing. In this paper, we do both: (a) we combine time-triggered, core-local scheduling of real-time applications with a dynamic budgeting scheme for controlling access to the main memory bus; (b) we show how the obtained access budgets can be used at design time to ensure timing correctness. The scheme is implemented in a microkernel-based operating system and we present experiments to investigate its performance.
Kai Lampka, Adam Lackorzynski
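The core of any such budgeting scheme can be sketched in a few lines. The version below is a static toy with per-core budgets replenished every period, whereas the paper’s scheme is dynamic and lives inside a microkernel; all names and parameters here are assumptions:

    PERIOD = 1_000                               # replenishment period in cycles (assumed)
    BUDGETS = {0: 300, 1: 300, 2: 200, 3: 200}   # per-core access budgets (assumed)

    class BusArbiter:
        def __init__(self):
            self.remaining = dict(BUDGETS)

        def on_period_start(self):
            # replenish every core's budget at each period boundary
            self.remaining = dict(BUDGETS)

        def request_access(self, core):
            """Grant a memory-bus access iff the core still has budget;
            a depleted core stalls until the next period."""
            if self.remaining[core] > 0:
                self.remaining[core] -= 1
                return True
            return False

Bounding each core’s bus accesses per period is what makes the interference from other cores, and hence the WCET inflation, analyzable at design time.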
The Weak Convergence of TCP Bandwidth Sharing
Abstract
TCP has been the dominant transmission protocol in the Internet for decades. It has proved its flexibility to adapt to unknown and changing network conditions. A distinguishing TCP feature is its comparably fair resource sharing. Unfortunately, this abstract fairness is frequently misinterpreted as convergence towards equal sharing rates. In this paper we show, in theory as well as in experiment, that TCP rate convergence does not exist. Instead, the individual TCP flow rate fluctuates persistently over a range close to one order of magnitude. The fluctuations are not short term but correlated over long intervals, so that the carried data volume converges rather slowly. The weak convergence does not negate fairness in general. Nevertheless, a particular transmission operation can deviate considerably.
Wolfram Lautenschlaeger
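The qualitative effect is easy to reproduce with a toy AIMD model in which losses hit the flows unsynchronized, roughly proportionally to their rates; all parameters below are arbitrary illustration, not the paper’s model or measurements:

    import random

    C = 100.0              # bottleneck capacity (arbitrary units)
    rates = [50.0, 50.0]   # two flows, initially at the fair share

    samples = []
    for _ in range(100_000):
        if sum(rates) > C:
            # unsynchronized loss: one flow backs off, picked ~ proportional to rate
            i = 0 if random.random() < rates[0] / sum(rates) else 1
            rates[i] /= 2.0
        else:
            rates = [r + 0.1 for r in rates]   # additive increase
        samples.append(rates[0])

    mean = sum(samples) / len(samples)
    print(f"flow 0: mean {mean:.1f}, min {min(samples):.1f}, max {max(samples):.1f}")

The long-run mean lands near the fair share C/2 while the instantaneous rate keeps swinging over a wide range, consistent with the fluctuation-around-fairness picture the paper develops for real TCP.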
Analysis of Mitigation Measures for Timing Attacks in Mobile-Cloud Offloading Systems
Abstract
Mobile cloud offloading has been proposed to migrate complex computations from mobile devices to powerful servers. While this may be beneficial from the performance and energy perspective, it certainly exhibits new challenges in terms of security due to increased data transmission over networks with potentially unknown threats. Among the possible security issues are timing attacks, which are not prevented by traditional cryptographic security. Random delays are a popular countermeasure in such systems, as they are easily deployed even if the source code of the application is not at hand. While the benefits are obvious, a random delay introduces a penalty that should be minimized. The challenge is to select the distribution from which to draw the random delays and to set mean and variance in a suitable way, such that the system security is maximized and the overhead is minimized. To tackle this problem, we have implemented a prototype that allows us to compare the impact of different random distributions on the expected success of timing attacks. Based on our model, the effect of random delay padding on the performance and security of offloading systems is analyzed in terms of response time and optimal rekeying rate. We found that the variance of the random delays is the primary factor influencing the mitigation effect. Based on our approach, the system performance and security can be improved as follows: starting from the mission time of a computing job, one can select a desired padding policy; from this, the optimal rekeying interval for the offloading system can be determined.
Tianhui Meng, Katinka Wolter
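Why variance rather than mean drives the mitigation can be seen with a back-of-the-envelope sketch; the distribution, parameters, and the sample-count heuristic below are assumptions for illustration, not the paper’s model:

    import random

    def respond(secret_time, pad_mean, pad_std):
        # pad the true processing time with a truncated-normal random delay
        pad = max(0.0, random.gauss(pad_mean, pad_std))
        return secret_time + pad

    def samples_to_distinguish(t0, t1, pad_std, threshold=2.0):
        """Rough rule of thumb: to separate two mean response times that
        differ by d under noise of standard deviation s, an attacker needs
        on the order of (threshold * s / d)**2 samples. A constant mean
        delay shifts both means equally and hides nothing; only the
        variance inflates the attacker's sample count."""
        d = abs(t1 - t0)
        return (threshold * pad_std / d) ** 2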
Capabilities of Raspberry Pi 2 for Big Data and Video Streaming Applications in Data Centres
Abstract
Many new data centres have been built in recent years in order to keep up with the rising demand for server capacity. These data centres require a lot of electrical energy and cooling. Big data and video streaming are two heavily used applications in data centres. This paper experimentally investigates the possibilities and benefits of using cheap, low-power and widely supported hardware in the form of a micro data centre with big data and video streaming as its main application area. For this purpose, multiple Raspberry Pi 2 Model B (RPi2) boards have been used to build a fully functional distributed Hadoop and video streaming setup that offers acceptable performance and opens up new research opportunities. We experimentally validated that the new setup fits a data centre environment by analyzing its performance, scalability, energy consumption, temperature and manageability. This paper proposes a high-concurrency, low-power setup in a small 1U form factor with an estimated 72 RPi2s as an interesting alternative to traditional rack servers.
Nick J. Schot, Paul J. E. Velthuis, Björn F. Postema
Ensemble-Based Uncertainty Quantification for Smart Grid Co-simulation
Abstract
Coupling independent models in the form of a co-simulation is a rather new approach to the design and analysis of Smart Grids. However, uncertainty in model parameters and outputs decreases the significance of simulation results. Therefore, this paper presents an ensemble-based uncertainty quantification system as an extension to the existing co-simulation framework mosaik.
Cornelius Steinbrink, Sebastian Lehnhoff, Thole Klingenberg
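The ensemble idea itself fits in a few lines: sample the uncertain parameters, run the coupled simulation once per sample, and report statistics of the outputs. In the sketch below, run_simulation is a hypothetical stand-in for a mosaik co-simulation run, and the parameter range is invented:

    import random
    import statistics

    def run_simulation(params):
        # placeholder model: output depends on an uncertain demand factor
        return 100.0 * params["demand_factor"] + random.gauss(0.0, 1.0)

    # draw an ensemble of parameter sets from the assumed uncertainty range
    ensemble = [{"demand_factor": random.uniform(0.9, 1.1)} for _ in range(100)]
    outputs = [run_simulation(p) for p in ensemble]

    print("mean output:", statistics.mean(outputs))
    print("output std :", statistics.stdev(outputs))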
Backmatter
Metadata
Title
Measurement, Modelling and Evaluation of Dependable Computer and Communication Systems
Edited by
Anne Remke
Boudewijn R. Haverkort
Copyright year
2016
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-31559-1
Print ISBN
978-3-319-31558-4
DOI
https://doi.org/10.1007/978-3-319-31559-1