
2014 | Book

Stochastic Model Checking. Rigorous Dependability Analysis Using Model Checking Techniques for Stochastic Systems

International Autumn School, ROCKS 2012, Vahrn, Italy, October 22-26, 2012, Advanced Lectures

Editors: Anne Remke, Mariëlle Stoelinga

Publisher: Springer Berlin Heidelberg

Book Series: Lecture Notes in Computer Science


About this book

Stochastic models are widely used in computer science, for instance in performance modeling and in the analysis of randomized algorithms and of the communication protocols that form the backbone of the Internet. Stochastic model checking is an important field in stochastic analysis that has rapidly gained popularity due to its powerful and systematic methods for modeling and analyzing stochastic systems. This book presents 7 tutorial lectures given by leading scientists at the ROCKS Autumn School on Stochastic Model Checking, held in Vahrn, Italy, in October 2012. The 7 chapters went through two rounds of reviewing and improvement and summarize the state of the art in the field, centered around the three areas of stochastic models, abstraction techniques, and stochastic model checking.

Table of Contents

Frontmatter
Analyzing Oscillatory Behavior with Formal Methods
Abstract
An important behavioral pattern witnessed in many systems is periodic recurrence. For example, most known living organisms are governed by a 24-hour rhythm that determines whether they are awake or asleep. On a larger scale, entire populations of organisms also fluctuate cyclically, as in predator-prey relationships. When such systems are treated deterministically, i.e., under the assumption that stochastic effects are negligible, their analysis is a well-studied topic. But when those effects play an important role, recent publications suggest that at least part of the system should be modeled stochastically. In that case, however, one quickly realizes that characterizing and defining oscillatory behavior is not a straightforward task that can be solved once and for all. Moreover, efficiently checking whether a given system oscillates, and if so, determining the amplitude of the fluctuations and the time between them, is intricate. This paper gives an overview of the existing literature on modeling formalisms for oscillatory systems, definitions of oscillatory behavior, and the respective analysis methods.
Alexander Andreychenko, Thilo Krüger, David Spieler
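The chapter contrasts deterministic and stochastic treatments of oscillation. As a minimal, illustrative sketch (the reaction rates are invented for demonstration, not taken from the chapter), a Gillespie simulation of a stochastic predator-prey model produces the kind of noisy fluctuations whose amplitude and period the surveyed methods try to characterize:

```python
import random

def gillespie_lotka_volterra(prey, pred, t_end, seed=42):
    """Stochastic simulation (Gillespie SSA) of a predator-prey model.

    Reactions:  prey -> 2 prey        (rate k1 * prey)
                prey + pred -> 2 pred (rate k2 * prey * pred)
                pred -> 0             (rate k3 * pred)
    """
    k1, k2, k3 = 1.0, 0.005, 0.6   # illustrative rate constants
    rng = random.Random(seed)
    t, trajectory = 0.0, [(0.0, prey, pred)]
    while t < t_end and prey > 0 and pred > 0:
        rates = [k1 * prey, k2 * prey * pred, k3 * pred]
        total = sum(rates)
        t += rng.expovariate(total)   # exponential time to the next reaction
        r = rng.uniform(0.0, total)   # choose a reaction proportionally to its rate
        if r < rates[0]:
            prey += 1
        elif r < rates[0] + rates[1]:
            prey, pred = prey - 1, pred + 1
        else:
            pred -= 1
        trajectory.append((t, prey, pred))
    return trajectory

traj = gillespie_lotka_volterra(prey=200, pred=100, t_end=20.0)
```

Note that a run may end early in extinction of one species, a purely stochastic effect that the deterministic treatment cannot exhibit, which is one reason the chapter argues that defining "oscillation" for stochastic models is subtle.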
A Tutorial on Interactive Markov Chains
Abstract
Interactive Markov chains (IMCs) constitute a powerful stochastic model that extends both continuous-time Markov chains and labelled transition systems. IMCs enable a wide range of modelling and analysis techniques and serve as a semantic model for many industrial and scientific formalisms, such as AADL, GSPNs and many more. Applications cover various engineering contexts ranging from industrial system-on-chip manufacturing to satellite designs. We present a survey of the state of the art in modelling and analysis of IMCs.
We cover a set of techniques that can be utilised for compositional modelling, state space generation and reduction, and model checking. The significance of the presented material and corresponding tools is highlighted through multiple case studies.
Florian Arnold, Daniel Gebler, Dennis Guck, Hassan Hatefi
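IMCs extend continuous-time Markov chains, and transient analysis of a CTMC is a basic subroutine in the model-checking techniques such tutorials cover. A minimal sketch of standard uniformization on a hypothetical two-state failure/repair model (the rates are chosen for illustration and are not from the chapter):

```python
import math

def transient(Q, p0, t, eps=1e-9):
    """Transient distribution p(t) = p0 * exp(Q t) of a CTMC via uniformization.

    Q: generator matrix (rows sum to 0), p0: initial distribution.
    For very large lam*t a numerically scaled variant would be needed.
    """
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) * 1.1  # uniformization rate
    # P = I + Q / lam is a stochastic matrix
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    term = list(p0)              # p0 * P^k, starting with k = 0
    weight = math.exp(-lam * t)  # Poisson(lam*t) probability of k jumps
    result = [weight * x for x in term]
    k = 0
    while weight > eps or k < lam * t:
        k += 1
        term = [sum(term[i] * P[i][j] for i in range(n)) for j in range(n)]
        weight *= lam * t / k
        result = [r + weight * x for r, x in zip(result, term)]
    return result

# Two-state availability model: failure rate 0.1, repair rate 0.5.
Q = [[-0.1, 0.1], [0.5, -0.5]]
p = transient(Q, [1.0, 0.0], t=10.0)
```

For this small model the result can be checked against the closed form p_up(t) = 5/6 + (1/6)e^{-0.6 t}.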
A Theory for the Semantics of Stochastic and Non-deterministic Continuous Systems
Abstract
The description of complex systems involving physical or biological components usually requires modelling complex continuous behavior induced by variables such as time, distance, speed, temperature, alkalinity of a solution, etc. Often, such variables can be quantified probabilistically to better understand the behavior of the complex system. For example, the arrival of events may be considered a Poisson process, or the weight of an individual may be assumed to follow a log-normal distribution. However, it is also common that the uncertainty about how these variables behave makes us prefer to leave out the choice of a particular probability distribution and instead model the choice as purely non-deterministic, as is the case when a system is intended to be deployed on a variety of very different computer or network architectures. The semantics of such systems therefore needs to be represented by a variant of probabilistic automata that involves continuous domains for the state space and the transition relation.
In this paper, we survey the theory of such models. We present the theory of the so-called labeled Markov processes (LMP) and their extension with internal non-determinism (NLMP). We show that in these continuous domains, the bisimulation relation can be understood in different ways. We relate the different bisimulations to one another and explore their expressiveness through examples. We also study variants of Hennessy-Milner logic that provide logical characterizations of some of these bisimulations.
Carlos E. Budde, Pedro R. D’Argenio, Pedro Sánchez Terraf, Nicolás Wolovick
On Abstraction of Probabilistic Systems
Abstract
Probabilistic model checking extends traditional model checking by incorporating quantitative information about the probability of system transitions. However, probabilistic models that describe interesting behavior are often too complex for straightforward analysis. Abstraction is one way to deal with this complexity: instead of analyzing the ("concrete") model, a simpler ("abstract") model that preserves the relevant properties is built and analyzed. This paper surveys various abstraction techniques proposed in the past decade. For each abstraction technique we identify in what sense properties are preserved or, alternatively, what suitable bounds on them it provides.
Christian Dehnert, Daniel Gebler, Michele Volpato, David N. Jansen
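To make the "bounds instead of exact preservation" idea concrete, here is a minimal, hypothetical sketch in the spirit of game-based abstraction: DTMC states are lumped into blocks, and the concrete state inside a block is resolved pessimistically resp. optimistically, yielding sound lower and upper bounds on reachability probabilities. The model and partition below are invented for illustration and are not from the chapter:

```python
def reachability_bounds(P, block, target_block, iters=200):
    """Lower/upper bounds on reachability probabilities via abstraction.

    P[s][t]: DTMC transition probabilities; block[s]: abstract state of s.
    Returns, per abstract state, bounds on the probability of reaching the
    target block, obtained by letting the concrete state within a block be
    chosen adversarially (lower bound) or favourably (upper bound).
    """
    n = len(P)
    blocks = sorted(set(block))
    members = {B: [s for s in range(n) if block[s] == B] for B in blocks}
    lo = {B: 1.0 if B == target_block else 0.0 for B in blocks}
    hi = dict(lo)
    for _ in range(iters):  # fixed-point iteration on the abstract game
        for B in blocks:
            if B == target_block:
                continue
            lo[B] = min(sum(P[s][t] * lo[block[t]] for t in range(n))
                        for s in members[B])
            hi[B] = max(sum(P[s][t] * hi[block[t]] for t in range(n))
                        for s in members[B])
    return lo, hi

# Concrete model: states 0,1 lumped into block 'A'; 2 is the target 'T',
# 3 an absorbing sink 'S'.  Exact reach probabilities are 0.5 and 0.1,
# so the abstraction can only report the interval [0.1, 0.5] for 'A'.
P = [[0, 0, 0.5, 0.5],
     [0, 0, 0.1, 0.9],
     [0, 0, 1.0, 0.0],
     [0, 0, 0.0, 1.0]]
lo, hi = reachability_bounds(P, block=['A', 'A', 'T', 'S'], target_block='T')
```

The gap between `lo['A']` and `hi['A']` is exactly the precision lost by the chosen partition, which is the trade-off the surveyed techniques negotiate.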
Computing Behavioral Relations for Probabilistic Concurrent Systems
Abstract
Behavioral equivalences and preorders are fundamental notions for formalizing the indistinguishability of transition systems and provide a means of abstraction and refinement. We survey a collection of models used to represent real concurrent probabilistic systems, the behavioral equivalences and preorders they are equipped with, and the corresponding decision algorithms. These algorithms follow the standard partition-refinement approach and reduce their complexity by exploiting efficient algorithms developed in the optimization community for optimization and flow problems.
Daniel Gebler, Vahid Hashemi, Andrea Turrini
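A minimal sketch of the standard refinement approach the survey describes: deciding probabilistic bisimulation on a small DTMC by repeatedly splitting blocks according to the probability mass each state sends into each block. Exact float comparison is used here for simplicity; the decision algorithms surveyed in the chapter are considerably more careful and more efficient:

```python
def bisimulation_classes(labels, P):
    """Partition refinement for probabilistic bisimulation on a DTMC.

    labels[i]: atomic label of state i;  P[i][j]: transition probability.
    Two states are bisimilar iff they share a label and assign the same
    probability to every equivalence class.
    """
    n = len(labels)
    block = {i: labels[i] for i in range(n)}  # initial partition: by label
    while True:
        def signature(i):
            # a state's block plus its probability mass into each block
            mass = {}
            for j in range(n):
                if P[i][j] > 0:
                    mass[block[j]] = mass.get(block[j], 0.0) + P[i][j]
            return (block[i], tuple(sorted(mass.items())))
        new_block = {i: signature(i) for i in range(n)}
        if len(set(new_block.values())) == len(set(block.values())):
            return new_block  # no block was split: the partition is stable
        block = new_block

# Four-state DTMC: states 0 and 1 (label 'a') each move with probability 1
# into an absorbing 'b'-labelled state, so 0 ~ 1 and 2 ~ 3.
labels = ['a', 'a', 'b', 'b']
P = [[0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]]
classes = bisimulation_classes(labels, P)
```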
Markov Reward Models and Markov Decision Processes in Discrete and Continuous Time: Performance Evaluation and Optimization
Abstract
State-based systems with discrete or continuous time are often modelled with the help of Markov chains. To specify performance measures for such systems, one can define a reward structure over the Markov chain, leading to the Markov Reward Model (MRM) formalism. Typical examples of performance measures definable in this way are time-based measures (e.g. mean time to failure), average energy consumption, monetary cost (e.g. for repair or maintenance), or combinations of such measures. These measures can also be regarded as objectives for system optimization. For that reason, an MRM can be enhanced with an additional control structure, leading to the formalism of Markov Decision Processes (MDPs).
In this tutorial, we first introduce the MRM formalism with different types of reward structures and explain how these can be combined into a performance measure for the system model. We provide running examples which show how some of the above-mentioned performance measures can be employed. Building on this, we extend to the MDP formalism and introduce the concept of a policy. The global optimization task (over the huge policy space) can be reduced to a greedy local optimization by exploiting the non-linear Bellman equations. We review several dynamic programming algorithms which can be used to solve the Bellman equations exactly. Moreover, we consider Markovian models in discrete and continuous time and study value-preserving transformations between them. We accompany the technical sections by applying the presented optimization algorithms to the example performance models.
Alexander Gouberman, Markus Siegle
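One of the dynamic programming algorithms reviewed in such tutorials is value iteration, which solves the Bellman optimality equations by repeated greedy local updates. A minimal sketch on an invented two-state MDP (the discount factor, rewards, and transition probabilities are illustrative, not from the chapter):

```python
def value_iteration(P, R, gamma=0.9, eps=1e-8):
    """Solve the Bellman optimality equations of a finite MDP.

    P[s][a][t]: transition probability, R[s][a]: expected reward,
    gamma: discount factor.  Returns optimal values and a greedy policy.
    """
    n = len(P)
    V = [0.0] * n
    while True:
        # Bellman update: V(s) = max_a [ R(s,a) + gamma * sum_t P(s,a,t) V(t) ]
        newV = [max(R[s][a] + gamma * sum(P[s][a][t] * V[t] for t in range(n))
                    for a in range(len(P[s])))
                for s in range(n)]
        if max(abs(x - y) for x, y in zip(newV, V)) < eps:
            break
        V = newV
    # greedy policy: pick the action achieving the maximum in each state
    policy = [max(range(len(P[s])),
                  key=lambda a: R[s][a] + gamma * sum(P[s][a][t] * V[t]
                                                      for t in range(n)))
              for s in range(n)]
    return V, policy

# State 0: action 0 'stay' earns reward 1 forever; action 1 'quit' earns 2
# once and moves to the absorbing zero-reward state 1.  With gamma = 0.9,
# staying is optimal: V(0) = 1 / (1 - 0.9) = 10 beats 2.
P = [[[1.0, 0.0], [0.0, 1.0]], [[0.0, 1.0]]]
R = [[1.0, 2.0], [0.0]]
V, policy = value_iteration(P, R)
```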
Applying Mean-Field Approximation to Continuous Time Markov Chains
Abstract
Mean-field analysis is used to study systems with a large number of components: it determines the emergent deterministic behaviour and how this behaviour changes when the system's parameters are perturbed. The computer science performance modelling and analysis community has found the mean-field method useful for modelling large-scale computer and communication networks. Applying mean-field analysis from the computer science perspective requires the following major steps: (1) describing how the agent populations evolve by means of a system of differential equations, (2) finding the emergent deterministic behaviour of the system by solving these differential equations, and (3) analysing properties of this behaviour. Depending on the system under analysis, performing these steps may become challenging, and modifications of the general idea are often needed. In this tutorial we use illustrative examples to discuss how the mean-field method is applied in different areas. We start from applications of the classical technique and move to cases where additional steps are required, such as systems with local communication. Finally, we illustrate the application of existing model checking analysis techniques.
Anna Kolesnichenko, Valerio Senni, Alireza Pourranjabar, Anne Remke
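Steps (1)-(3) above can be sketched on a textbook example: the mean-field limit of an SIS epidemic, where the fraction of infected agents obeys a single ODE whose equilibrium can be analysed directly. The parameters are illustrative and the example is not taken from the chapter:

```python
def mean_field_sis(beta, gamma, x0, t_end, dt=1e-3):
    """Mean-field (fluid) limit of an SIS epidemic over a large population.

    Step (1): the infected fraction x obeys  dx/dt = beta*x*(1-x) - gamma*x.
    Step (2): solve the ODE, here by simple forward-Euler integration.
    """
    x, steps = x0, int(t_end / dt)
    for _ in range(steps):
        x += dt * (beta * x * (1.0 - x) - gamma * x)
    return x

# Step (3): for beta > gamma the trajectory approaches the endemic
# equilibrium x* = 1 - gamma/beta, independently of the population size.
x = mean_field_sis(beta=2.0, gamma=1.0, x0=0.01, t_end=50.0)
```

The deterministic limit here replaces an underlying CTMC whose state space grows with the population, which is exactly the scalability argument the tutorial makes.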
Backmatter
Metadata
Title
Stochastic Model Checking. Rigorous Dependability Analysis Using Model Checking Techniques for Stochastic Systems
Editors
Anne Remke
Mariëlle Stoelinga
Copyright Year
2014
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-662-45489-3
Print ISBN
978-3-662-45488-6
DOI
https://doi.org/10.1007/978-3-662-45489-3
