Open Access 2018 | Original Paper | Book Chapter

Bayesian Statistical Parameter Synthesis for Linear Temporal Properties of Stochastic Models

Authors: Luca Bortolussi, Simone Silvetti

Published in: Tools and Algorithms for the Construction and Analysis of Systems

Publisher: Springer International Publishing


Abstract

Parameterized verification of temporal properties is an active research area, being extremely relevant for model-based design of complex systems. In this paper, we focus on parameter synthesis for stochastic models, looking for regions of the parameter space where the model satisfies a linear time specification with probability greater (or less) than a given threshold. We propose a statistical approach relying on simulation and leveraging a machine learning method based on Gaussian Processes for statistical parametric verification, namely Smoothed Model Checking. By injecting active learning ideas, we obtain an efficient synthesis routine which is able to identify the target regions with statistical guarantees. Our approach, which is implemented in Python, scales better than existing ones with respect to the state space of the model and the number of parameters. It is applicable to linear time specifications with time constraints and to more complex stochastic models than Markov Chains.

1 Introduction

Overview. Stochastic models are commonly used in many areas to describe and reason about complex systems, from molecular and systems biology to performance evaluation of computer networks. In all these cases, the system dynamics is usually described by high-level languages such as Chemical Reaction Networks [1], population models [2] or Stochastic Petri Nets [3], which generate an underlying Continuous Time Markov Chain (CTMC). Formal reasoning about these models often amounts to the computation of reachability probabilities. This is the basic tool behind successful Stochastic Model Checking tools like PRISM [4] or the more recent STORM [5]. These tools implement numerical algorithms that compute probabilities up to a given precision, though suffering from state space explosion, as well as simulation engines that allow statistical estimation when models are too large.
All classic quantitative verification tools assume that a model is fully specified, which is typically a strong assumption, particularly in application domains like systems biology, where many model parameters are estimated from data or are only known to belong to a given range. An alternative approach is that of parameterised verification, which tries to verify properties for a whole set of models, indexed by some parameters. In the case of stochastic models, this typically requires us to compute how reachability probabilities change as a function of model parameters, which is a much harder task [6]. A related problem is that of synthesis [7], where one looks for a subset of the parameter space where a given property (or multiple properties [8]) is guaranteed to be satisfied. Alternatively, one can try to design a system by finding a parameter value that maximises the probability of satisfying a specification.
Problem Statement. In this paper, we focus on parameter synthesis for CTMC models described by chemical reaction networks, benchmarking against the approach of [7].
More specifically, we consider the following problem. We have a collection of CTMCs, indexed by a parameter vector \(\varvec{\theta }\in \varTheta \), taking values in a bounded and compact hyperrectangle \( \varTheta \subset \mathbb {R}^k\). We assume that the CTMCs depend on \(\varvec{\theta }\) through their rates, and that this dependency is smooth. We consider a linear time specification \(\phi \) described by Metric Interval Temporal Logic [9], with bounded time operators. For each \(\phi \) and \(\varvec{\theta }\), we can in principle compute the probability that a random trajectory, generated by that specific CTMC, satisfies it, i.e. \(P_\phi (\varvec{\theta })\).
Our goal is to find a partition of the parameter space \(\varTheta \) composed of three classes: the positive class \(\mathcal {P}_\alpha \), composed of parameters where the probability of satisfying \(\phi \) is higher than a threshold value \(\alpha \); the negative class \(\mathcal {N}_\alpha \), composed of parameters where this probability is lower than \(\alpha \); and the undefined class \(\mathcal {U}_\alpha \), which collects all the other parameters. Following [7], we will look for a partition where the volume of the undefined class is lower than a given fraction of the volume of \(\varTheta \). This is the threshold synthesis problem.
Our approach will be statistical: we assume that models are too complex to numerically compute bounds on the reachability probability, and we only rely on the possibility of simulating the model. As a consequence, our solution to the parameter synthesis problem will have only statistical guarantees of being correct. For example, if a parameter belongs to \(\mathcal {P}_\alpha \), the confidence of this point satisfying \(P_\phi (\varvec{\theta })\ge \alpha \) will be larger than a prescribed probability (typically 95% or 99%), though for most points this probability will be essentially one, and similarly for \(\mathcal {N}_\alpha \). The challenge of such an approach is that estimating the satisfaction probability at many different points in the parameter space by simulation is very expensive and inefficient, unless we are able to share the information carried by simulation runs at neighbouring points in the parameter space.
Contributions. We propose a Bayesian statistical approach for parameter synthesis, which leverages a statistical parameterised verification method known as Smoothed Model Checking [6] and the nice theoretical approximation properties of Gaussian Processes [10]. Being based on a Bayesian inference engine, this naturally gives statistical error bounds for the estimated probabilities. Our algorithm uses active learning strategies to steer the exploration of the parameter space only where the satisfaction probability is close to the threshold. We also provide a prototype implementation of the approach in Python.
Despite being implemented in Python, our approach turns out to be remarkably efficient, being slightly faster than [7] for small models, and outperforming it for more complex and large models or when the number of parameters is increased, at the price of a weaker form of correctness. Compared to [7], we also have an additional advantage: the method treats the simulation engine and the routine verifying the linear time specification on individual trajectories as black boxes. This means that we can not only treat arbitrary MTL properties (while [7] is essentially restricted to non-nested CSL properties, i.e. reachability), but also other more complex linear time specifications (e.g. using hybrid automata, provided that the satisfaction probability is a smooth function of model parameters), and we can also apply the same approach to more complex stochastic models for which efficient simulation routines exist, like stochastic differential equations.
Related Work. Parameter synthesis of CTMCs is an active field of research. In [7, 11] the authors use Continuous Stochastic Logic (CSL) and uniformization methods for computing exact probability bounds for parametric models of CTMCs obtained from chemical reaction networks. In [12] the same authors extend their algorithm to GPU architectures to improve the scalability. The authors of these two papers solve two problems: one is threshold synthesis, the other is the identification of a parameter configuration maximising the satisfaction probability. In this paper we focus on the former, as we already presented a statistical approach to deal with the latter problem in [13] for the single objective case and in [8] for the multi-objective case. An alternative statistical approach for multi-objective optimisation is that of [14], where the authors use an ANOVA test to estimate the dominance relation. Another approach to parameter synthesis for CTMCs is [15], where the authors rely on a combination of discretisation of parameters with a refinement technique.
In this work we use a statistical approach to approximate the satisfaction probability function, building on Smoothed Model Checking [6]. This approach is applicable to CTMCs with rate functions that are smooth with respect to the parameters, and leverages statistical tools based on Gaussian Process regression [10] to learn an approximation of the satisfaction function from few observations. Moreover, this approach allows us to deal with a richer class of linear time properties than reachability, like those described by Metric Temporal Logic [9, 16], for which numerical verification routines heavily suffer from state space explosion [17]. Another statistical approach is that of [18], which combines sensitivity analysis, statistical model checking and uniform continuity to approximate the satisfaction probability function, but it is restricted to cases where the satisfaction probability is monotonic in the parameters. In contrast, Gaussian Process-based methods have no such restriction (as Gaussian Processes are universal approximators), and have also the advantage of requiring far fewer simulations than pointwise statistical model checking, as information is shared between neighbouring points (see [6] for a discussion in this sense). Parametric verification and synthesis approaches are more consolidated for Discrete Time Markov Chains [19], where mature tools like PROPhESY exist [20]; these rely on a symbolic representation of the reachability probability, which does not generalise to the continuous time setting.
Paper Structure. The paper is organized as follows. In Sect. 2 we discuss background material, including Parametric CTMCs, MITL, and Smoothed Model Checking and Gaussian Processes. In Sect. 3 we present our method in detail. In Sect. 4 we discuss experimental results, comparing with [7]. Conclusions and future work are discussed in Sect. 5.

2 Background

In this section we introduce the relevant background material: a formalism to describe the systems of interest, i.e. Parametric Chemical Reaction Networks, and one to describe linear time properties, i.e. Signal Temporal Logic. We then present smoothed model checking [21] and Gaussian Processes [10], which form the underlying statistical backbone of the parameter synthesis.

2.1 Parametric Chemical Reaction Networks

Chemical Reaction Networks [1] are a standard model of population processes, also known in the literature as Population Continuous Time Markov Chains [2] or Markov Population Models [22]. We consider a variant with an explicit representation of kinetic parameters.
Definition 1
A Parametric Chemical Reaction Network (PCRN) \(\mathcal {M}\) is a tuple \(({S},\mathbf {X},D,\mathbf {x_0},\mathcal {R},\varTheta )\) where
  • \({S} = \{s_1,\ldots ,s_n\}\) is the set of species;
  • \(\mathbf {X} = (X_1,\ldots ,X_n )\) is the vector of variables counting the amount of each species, with values \(\mathbf {X} \in D\), with \(D \subseteq \mathbb {N}^n\) the state space;
  • \(\mathbf {x_0}\in D\) is the initial state;
  • \(\mathcal {R} = \{r_1,\ldots ,r_m\}\) is the set of chemical reactions, each of the form \(r_j = (\mathbf {v_j},\alpha _j)\), with \(\mathbf {v_j}\) the stoichiometry or update vector and \(\alpha _j = \alpha _j(\varvec{X},\varvec{\theta })\) the propensity or rate function. Each reaction can be represented as
    $$\begin{aligned} r_j: u_{j,1}s_1+\ldots +u_{j,n}s_n \xrightarrow {\alpha _j} w_{j,1}s_1+\ldots +w_{j,n}s_n, \end{aligned}$$
    where \(u_{j,i}\) (\(w_{j,i}\)) is the amount of elements of species \(s_i\) consumed (produced) by reaction \(r_j\). With \(\varvec{u_j} = (u_{j,1},\ldots ,u_{j,n})\) (and similarly \(\varvec{w_j}\)), \(\varvec{v_j} = \varvec{w_j} - \varvec{u_j}\).
  • \(\varvec{\theta }= (\theta _1,\ldots , \theta _k)\) is the vector of (kinetic) parameters, taking values in a compact hyperrectangle \(\varTheta \subset \mathbb {R}^k\).
To stress the dependency of \(\mathcal {M}\) on the parameters \(\varvec{\theta }\in \varTheta \), we will often write \(\mathcal {M}_{\varvec{\theta }}\). A PCRN \(\mathcal {M}_{\varvec{\theta }}\) defines a Continuous Time Markov Chain [2, 23] on D, with infinitesimal generator Q, where \(Q_{\varvec{x},\varvec{y}} = \sum _{r_j\in \mathcal {R}} \{ \alpha _j(\varvec{x},{\varvec{\theta }})~|~\varvec{y} = \varvec{x}+\varvec{v_j} \} \), \(\varvec{x}\ne \varvec{y}\). We denote by \(P_{\varvec{\theta }}\) the probability measure over the paths \(Path^{\mathcal {M}_{\varvec{\theta }}}\) of such a CTMC.
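For concreteness, the CTMC defined by a PCRN can be simulated exactly with Gillespie's Stochastic Simulation Algorithm [1]. The following is a minimal sketch of such a simulator, not the implementation used in the paper; the function name and signature are illustrative.

```python
import numpy as np

def ssa_trajectory(x0, updates, propensities, theta, t_max, rng=None):
    """Minimal Gillespie SSA sketch for a PCRN M_theta.

    updates:      list of stoichiometry (update) vectors v_j
    propensities: list of callables alpha_j(x, theta) returning a rate
    Returns the jump times and the corresponding states, up to t_max.
    """
    rng = rng or np.random.default_rng()
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while True:
        rates = np.array([a(x, theta) for a in propensities])
        total = rates.sum()
        if total <= 0:                      # absorbing state: no reaction enabled
            break
        dt = rng.exponential(1.0 / total)   # exponential waiting time
        if t + dt > t_max:
            break
        t += dt
        j = rng.choice(len(rates), p=rates / total)  # index of the firing reaction
        x = x + updates[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)
```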

2.2 Metric Interval Temporal Logic

Metric Interval Temporal Logic (MITL [16]) is a linear time temporal logic used to reason about the future evolution of a path in continuous time. Generally this formalism is used to qualitatively describe the behaviour of trajectories of differential equations or stochastic models. The temporal operators we consider are all time-bounded, as in Signal Temporal Logic [9], a signal-based version of MITL. This implies that time-bounded trajectories are sufficient to verify every formula. The atomic predicates of MITL are inequalities on a set of real-valued variables, i.e. of the form \(\mu (\varvec{X}) {:=} [g(\varvec{X})\ge 0]\), where \(g:\mathbb {R}^n \rightarrow \mathbb {R}\) is a continuous function and consequently \(\mu : \mathbb {R}^n \rightarrow \{\top ,\bot \}\).
Definition 2
A formula \(\phi \in \mathcal {F}\) of MITL is defined by the following syntax:
$$\begin{aligned} \phi := \bot \,|\,\top \,|\, \mu \,|\, \lnot \phi \,|\, \phi \vee \phi \,|\, \phi \mathbf {U}_{[T_1,T_2]} \phi , \end{aligned}$$
(1)
where \(\mu \) are atomic predicates as defined above, and \(T_1<T_2<+\infty \).
The eventually and globally modal operators are defined as customary: \(\mathbf {F}_{[T_1,T_2]}\phi \equiv \top \mathbf {U}_{[T_1,T_2]} \phi \) and \(\mathbf {G}_{[T_1,T_2]}\phi \equiv \lnot \mathbf {F}_{[T_1,T_2]}\lnot \phi \). MITL formulae are interpreted over the paths \(\varvec{x}(t)\) of a PCRN \(\mathcal {M}_{\varvec{\theta }}\). We will consider here the Boolean semantics of [9], which, given a trajectory \(\varvec{x}(t)\), returns either true or false, referring the reader to [9] for its definition and for a description of monitoring algorithms. Combining this with the probability distribution \(P_{\varvec{\theta }}\) over trajectories induced by a PCRN model \(\mathcal {M}_{\varvec{\theta }}\), we obtain the satisfaction probability of a formula \(\phi \) as
$$\begin{aligned} P_\phi ({\varvec{\theta }}) \equiv P(\phi \mid \mathcal {M}_{\varvec{\theta }} ) := P_{\varvec{\theta }}(\{\varvec{x}(t)\in Path^{\mathcal {M}_{\varvec{\theta }}} ~|~(\varvec{x},0)\models \phi \}) \end{aligned}$$
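As an illustration, a Boolean monitor for a time-bounded until formula \(\mu _1 \mathbf {U}_{[T_1,T_2]} \mu _2\), evaluated at time 0 over the piecewise-constant trajectories produced e.g. by the SSA sketch above, could look as follows. This is only a sketch, not the monitoring algorithm of [9], and it assumes the trajectory covers at least \([0,T_2]\).

```python
def bounded_until(times, states, mu1, mu2, T1, T2):
    """Check (x, 0) |= mu1 U_[T1,T2] mu2 on a piecewise-constant trajectory.

    times:  increasing jump times, with times[0] == 0
    states: states[i] holds on the interval [times[i], times[i+1])
    mu1, mu2: atomic predicates, callables state -> bool
    """
    sat1 = [bool(mu1(x)) for x in states]
    sat2 = [bool(mu2(x)) for x in states]
    for i in range(len(states)):
        t_end = times[i + 1] if i + 1 < len(times) else float("inf")
        t = max(times[i], T1)            # earliest candidate witness time in segment i
        if not (sat2[i] and t <= T2 and t < t_end):
            continue
        # mu1 must hold on [0, t): on every full segment before i, and on the
        # initial part of segment i if the witness lies strictly inside it
        if all(sat1[:i]) and (sat1[i] or t == times[i]):
            return True
    return False
```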

2.3 Parametric Verification and Smoothed Model Checking

Given an MITL formula \(\phi \) and a CTMC \(\mathcal {M}_{\varvec{\theta }}\), we consider two verification tasks:
  • (Classic) Verification: compute or estimate the satisfaction probability \(P_\phi ({\varvec{\theta }})\) for a fixed \({\varvec{\theta }}\).
  • Parametric verification: compute or estimate the satisfaction probability \(P_\phi ({\varvec{\theta }})\) as a function of \({\varvec{\theta }}\in \varTheta \).
The classic verification task can be solved with specialised numerical algorithms [17, 24]. These methods calculate \(P_\phi ({\varvec{\theta }})\) by a clever numerical integration of the Kolmogorov equations of the CTMC. This approach, however, suffers from the curse of state space explosion, becoming inefficient for large or complex models. A viable alternative is rooted in statistics. The key idea is to estimate the satisfaction probability by combining simulation and monitoring of MITL formulas. In practice, for each trajectory \(\varvec{x}\) generated by a simulation of the CTMC \(\mathcal {M}_{\varvec{\theta }}\), we check whether \(\varvec{x} \models \phi \). This produces observations of a Bernoulli random variable \(Z_\phi \), which is equal to 1 if and only if the trajectory satisfies the property, and 0 otherwise. By definition, the probability of observing 1 is exactly \(P_\phi ({\varvec{\theta }})\), which can thus be estimated by frequentist or Bayesian statistical inference [25, 26].
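A minimal sketch of this pointwise statistical estimation, combining a simulator and a monitor such as the ones sketched above and returning a frequentist confidence interval (a Bayesian interval would work equally well), is given below; `simulate` and `monitor` are assumed helpers.

```python
import numpy as np
from scipy.stats import norm

def estimate_satisfaction(simulate, monitor, theta, n_runs=1000, conf=0.95):
    """Estimate P_phi(theta) by simulating n_runs trajectories and
    monitoring phi on each of them (statistical model checking at one point).
    simulate(theta) returns a trajectory; monitor(trajectory) returns a bool."""
    successes = sum(int(monitor(simulate(theta))) for _ in range(n_runs))
    p_hat = successes / n_runs
    # normal-approximation confidence interval for a Bernoulli proportion
    z = norm.ppf(0.5 + conf / 2)
    half_width = z * np.sqrt(p_hat * (1 - p_hat) / n_runs)
    return p_hat, (max(0.0, p_hat - half_width), min(1.0, p_hat + half_width))
```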
Parametric verification brings additional challenges. For PCRN, the numerical approach of [27] provides upper and lower bounds on the satisfaction function. By decomposing the parameter space in small regions, one can provide a tight approximation of the satisfaction function, at the price of a polynomial cost in the dimension of the state space and of an exponential cost in the dimension of the parameter space [27].
The statistical counterpart for parametric verification is known as Smoothed Model Checking [6]. This method combines simulations at a few points of the parameter space with state-of-the-art generalised regression methods from statistics and machine learning to infer an analytic approximation of the satisfaction function, mapping each \({\varvec{\theta }}\) to the corresponding value of \(P_\phi ({\varvec{\theta }})\). The basic idea is to cast the estimation of the satisfaction function as a learning problem: from the observation of a few simulation runs at some points of the parameter space, we wish to learn an approximation of the satisfaction function, with statistical error guarantees. Smoothed Model Checking solves this problem relying on Gaussian Process (generalised) regression, a Bayesian non-parametric method that returns at each point an estimate of the value of the satisfaction function together with confidence bounds, defining the region containing the true value of the function with a prescribed probability. The only substantial requirement for Smoothed Model Checking is that the satisfaction probability is smooth with respect to the parameters. This holds for MITL properties interpreted over PCTMCs [6]. Smoothed Model Checking will be the key tool for our synthesis problem, hence we will introduce it in more detail, after a brief introduction of its underlying inference engine, i.e. Gaussian Processes.
Gaussian Processes. Gaussian Processes (GPs) are a family of distributions over function spaces, used mostly for Bayesian non-parametric classification or regression. More specifically, a GP is a collection of random variables \(f(\varvec{x}) \in \mathbb {R}\) (\(\varvec{x} \in E\), a compact subset of \(\mathbb {R}^h\)) of which any finite subset defines a multivariate normal distribution. A GP is uniquely determined by its mean and covariance functions (also called kernels), denoted respectively by \(m: E \rightarrow \mathbb {R} \) and \(k: E \times E \rightarrow \mathbb {R}\), such that for every finite set of points \((\varvec{x}_1,\varvec{x}_2,\dots ,\varvec{x}_n)\):
$$\begin{aligned} f \sim \mathcal {GP}(m,k) \iff (f(\varvec{x}_1),f(\varvec{x}_2),\dots ,f(\varvec{x}_n))\sim \mathcal {N}(\mathbf {m},K) \end{aligned}$$
(2)
where \(\mathbf {m} = (m(\varvec{x}_1),m(\varvec{x}_2),\dots ,m(\varvec{x}_n))\) is the mean vector and \(K \in \mathbb {R}^{n \times n}\) is the covariance matrix, such that \(K_{ij} = k(\varvec{x}_i,\varvec{x}_j)\). From a functional point of view, a GP is a probability distribution on the set of functions \(g:E \rightarrow \mathbb {R}\). The choice of the covariance function is important from a modelling perspective, because it determines which functions will be sampled with higher probability from a GP, see [10].
GPs are popular as they provide a Bayesian non-parametric framework for regression and classification. Starting from a training set \(\{(\varvec{x}_i, y_i)\}_{i=1,\ldots ,n}\) of input \(\varvec{x}_i\) and output \(y_i\) pairs, and a prior GP, typically with zero mean and a given covariance function, GP regression computes a posterior distribution given the observations, which is another GP, whose mean and covariance depend on the prior kernel and the observation points. In particular, for real valued \(y_i\) and Gaussian observation noise, the posterior mean at a point \(\varvec{x}^*\) is a linear combination of the prior kernel \(k(\varvec{x}^*,\varvec{x}_i)\) evaluated at \(\varvec{x}^*\) and the observation points \(\varvec{x}_i\), with coefficients depending on the observations \(y_i\). The prior kernel thus plays a central role, and it sometimes depends on hyperparameters, which can be set automatically by optimising the marginal likelihood, as traditionally happens in Bayesian methods [10].
In this work we use the Gaussian Radial Basis Function (GRBF) kernel [10], as samples from a GP defined by it can approximate arbitrarily well any continuous function on a compact set E. The kernel is defined as
$$\begin{aligned} k(\varvec{x}_1, \varvec{x}_2) = \exp (-\Vert \varvec{x}_1 - \varvec{x}_2\Vert ^2/l^2), \end{aligned}$$
where l is the lengthscale hyperparameter, which roughly governs how far away observations contribute to predictions at a point (if \(\varvec{x}^*\) and \(\varvec{x}_i\) are much farther apart than l, then \(k(\varvec{x}^*,\varvec{x}_i)\) is approximately zero). Moreover, l determines the Lipschitz constant of the GRBF kernel, which is \(\frac{\sqrt{2/e}}{l}\), and a fortiori of the prediction itself (being a linear combination of kernel functions).
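To make the role of the kernel concrete, the following sketch computes the posterior mean and pointwise variance of plain GP regression with the GRBF kernel and Gaussian observation noise. Note that Smoothed Model Checking uses a Binomial observation model with an Expectation Propagation approximation instead; this sketch only illustrates how predictions are linear combinations of kernel evaluations.

```python
import numpy as np

def grbf_kernel(X1, X2, lengthscale):
    """GRBF kernel: k(x1, x2) = exp(-||x1 - x2||^2 / l^2)."""
    sq_dists = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / lengthscale ** 2)

def gp_posterior(X_train, y_train, X_test, lengthscale=0.1, noise=1e-2):
    """Posterior mean and pointwise variance of a zero-mean GP."""
    K = grbf_kernel(X_train, X_train, lengthscale) + noise * np.eye(len(X_train))
    K_s = grbf_kernel(X_test, X_train, lengthscale)
    K_ss = grbf_kernel(X_test, X_test, lengthscale)
    alpha = np.linalg.solve(K, y_train)              # K^{-1} y
    mean = K_s @ alpha                               # linear combination of kernels
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)
```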
Smoothed Model Checking. Smoothed Model Checking is a statistical method to estimate the function \(P_\phi ({\varvec{\theta }})\), casting it into a learning problem that takes as input the truth value of \(\phi \) for several simulations at different parameter values \({\varvec{\theta }}_1,\ldots ,{\varvec{\theta }}_n\), with a few simulation runs (\(M \ll +\infty \)) per parameter point. The method tries to reconstruct a real-valued latent function \(f({\varvec{\theta }})\), which is squeezed into [0, 1] via the Probit transform\(^{1}\) \(\varPsi \) to give the satisfaction probability at \({\varvec{\theta }}\): \(P_\phi ({\varvec{\theta }}) = \varPsi (f({\varvec{\theta }}))\). Let us denote by \( \mathcal {O}= [\mathbf {o}_1,\mathbf {o}_2,\dots ,\mathbf {o}_n ]\) the matrix whose rows \( \mathbf {o}_j \) are the Boolean M-vectors of the evaluations at \({\varvec{\theta }}_j\). Hence, each observation \( \mathbf {o}_j \) is an independent draw from a \(Binomial(M,P_\phi ({\varvec{\theta }}_j))\).
Smoothed Model Checking plugs these observations into a Bayesian inference scheme, assuming a prior p(f) for the latent variable f. As f is a random function, one can take as a prior a GP, specifying its mean and kernel function, and then invoke Bayes theorem to compute the joint posterior distribution of f at a prediction point \({\varvec{\theta }}^*\) and at the observation points \({\varvec{\theta }}_1,\ldots ,{\varvec{\theta }}_n\) as
$$\begin{aligned} p(f({\varvec{\theta }}^*), f({\varvec{\theta }}_1),\ldots ,f({\varvec{\theta }}_n)\mid \mathbf {o}) = \frac{1}{Z} p(f({\varvec{\theta }}^*), f({\varvec{\theta }}_1),\ldots ,f({\varvec{\theta }}_n)) \prod _{j=1}^n p(\mathbf {o}_j\mid f({\varvec{\theta }}_j)). \end{aligned}$$
In the previous expression, on the right hand side, Z is a normalisation constant, while \(p(f({\varvec{\theta }}^*), f({\varvec{\theta }}_1),\ldots ,f({\varvec{\theta }}_n))\) is the prior, which is a Gaussian distribution with mean and covariance matrix computed according to the GP. \(p(\mathbf {o}_j\mid f({\varvec{\theta }}_j))\), instead, is the noise model, which in our case is given by a Binomial density. By integrating out the value of the latent function at the observation points in the previous expression, one gets the predictive distribution
$$\begin{aligned} p(f({\varvec{\theta }}^*) \mid \mathcal {O}) = \int \prod _{j=1}^n d(f({\varvec{\theta }}_j)) p(f({\varvec{\theta }}^*), f({\varvec{\theta }}_1),\ldots ,f({\varvec{\theta }}_n)\mid \mathcal {O}). \end{aligned}$$
The presence of a Binomial observation model makes this integral analytically intractable, and forces us to resort to an efficient variational approximation known as Expectation Propagation [6, 10]. The result is a Gaussian form for the predictive distribution for \(p(f({\varvec{\theta }}^*) \mid \mathcal {O})\), whose mean and \(\delta \)-confidence region are then Probit transformed into [0, 1].
It is important to stress that the prediction of Smoothed Model Checking, being a Bayesian method, depends on the choice of the prior. In case of Gaussian Processes, choosing the prior means fixing a covariance function, which makes assumptions on the smoothness and density of the functions that can be sampled by the GP. The Gaussian Radial Basis Function is dense in the space of continuous functions over a compact set [28], hence it can approximate arbitrarily well the satisfaction probability function. By setting its lengthscale via marginal likelihood optimization, we are picking the best prior for the observed data.

3 Methodology

3.1 Problem Definition

We start by rephrasing the parameter synthesis problem defined in [7] in the context of Bayesian statistics, where truths are quantified probabilistically. The basic idea is that we will exhibit a set of parameters that satisfy the specification with high confidence, which in the Bayesian world means with high posterior probability. To recall and fix the notation, let \( \mathcal {M_\theta } \) be a PCRN defined over a parameter space \(\varTheta \), \(\phi \) an MITL formula, and \(\tilde{P}_\phi (\varvec{\theta })\) a statistical approximate model of the satisfaction probability of \(\phi \) at each point \(\varvec{\theta }\). In the Bayesian setting, \(\tilde{P}_\phi (\varvec{\theta })\) is in fact a posterior probability distribution over [0, 1], hence we can compute for each measurable set \(B\subseteq [0,1]\) the probability \(p(\tilde{P}_\phi (\varvec{\theta }) \in B)\).
Problem (Bayesian Threshold Synthesis): Let \( \mathcal {M_\theta }\), \(\varTheta \), \(\phi \), and \(\tilde{P}_\phi (\varvec{\theta })\) be as before. Fix a threshold \(\alpha \) and consider the threshold inequality \( P_\phi (\varvec{\theta }) > \alpha \), for the true satisfaction probability \( P_\phi (\varvec{\theta })\). Fix a volume tolerance \(\epsilon >0\) and a confidence threshold \(\delta \in (0.5,1]\). The Bayesian threshold synthesis problem consists in partitioning the parameter space \(\varTheta \) into three classes \(\mathcal {P}_\alpha \) (positive), \(\mathcal {N}_\alpha \) (negative) and \(\mathcal {U}_\alpha \) (undefined) as follows:
  • for each \(\theta \in \mathcal {P}_\alpha \), \(p(\tilde{P}_\phi (\varvec{\theta })> \alpha ) > \delta \)
  • for each \(\theta \in \mathcal {N}_\alpha \), \(p(\tilde{P}_\phi (\varvec{\theta }) < \alpha ) > \delta \)
  • \(\mathcal {U}_\alpha = \varTheta \setminus (\mathcal {P}_\alpha \cup \mathcal {N}_\alpha )\), and \(\frac{vol(\mathcal {U}_\alpha )}{vol(\varTheta )} < \epsilon \), where vol denotes the volume of a set.
Note that the set \(\mathcal {P}_\alpha \) solves the threshold synthesis problem defined above, while \(\mathcal {N}_\alpha \) solves the threshold synthesis problem \(P_\phi (\varvec{\theta }) < \alpha \).

3.2 Bayesian Parameter Synthesis: The Algorithm

Our Bayesian synthesis algorithm essentially combines Smoothed Model Checking (smMC) with an active learning step to adaptively refine the sets \( \mathcal {P}_\alpha , \mathcal {N}_\alpha , \mathcal {U}_\alpha \), trying to keep the number of simulations of the PCRN \( \mathcal {M_\theta }\) to a minimum. smMC is used to compute a Bayesian estimate of the satisfaction probability, given the samples of the truth of \(\phi \) accumulated up to a certain point. More specifically, we use the posterior distribution \(p(\tilde{P}_\phi (\varvec{\theta }))\) of the satisfaction probability at each \(\varvec{\theta }\) returned by smMC to compute the following two functions of \(\varvec{\theta }\):
  • \(\lambda ^+(\varvec{\theta },\delta )\) is such that \(p\left( \tilde{P}_\phi (\varvec{\theta }) < \lambda ^+(\varvec{\theta },\delta )\right) > \delta \)
  • \(\lambda ^- (\varvec{\theta },\delta )\) is such that \( p\left( \tilde{P}_\phi (\varvec{\theta })> \lambda ^-(\varvec{\theta },\delta )\right) > \delta \)
Essentially, at each point \(\varvec{\theta }\), \(\lambda ^+(\varvec{\theta },\delta )\) is the upper bound for the estimate \(\tilde{P}_\phi (\varvec{\theta })\) at confidence \(\delta \) (i.e. with probability at least \(\delta \), the true value \(P_\phi (\varvec{\theta })\) is less than \(\lambda ^+\)), while \(\lambda ^-(\varvec{\theta },\delta )\) is the lower bound. These two values will be used to split the parameter space into the three regions \( \mathcal {P}_\alpha , \mathcal {N}_\alpha , \mathcal {U}_\alpha \) as follows:
  • \(\varvec{\theta }\in \mathcal {P}_\alpha \text{ iff } \lambda ^-(\varvec{\theta }, \delta ) > \alpha \)
  • \(\varvec{\theta }\in \mathcal {N}_\alpha \text{ iff } \lambda ^+(\varvec{\theta }, \delta ) < \alpha \)
  • \(\mathcal {U}_\alpha = \varTheta \setminus (\mathcal {P}_\alpha \cup \mathcal {N}_\alpha )\), with \(\frac{vol(\mathcal {U}_\alpha )}{vol(\varTheta )} < \epsilon \)
To dig into how \(\lambda ^+\) and \(\lambda ^-\) are computed, recall that smMC computes a real-valued Gaussian process \(f_\phi (\varvec{\theta })\), with mean function \(\mu \) and covariance function k, from which the pointwise standard deviation can be obtained as \(\sigma (\varvec{\theta }) = \sqrt{k(\varvec{\theta },\varvec{\theta })}\). At each \(\varvec{\theta }\), the function \(f_\phi (\varvec{\theta })\) is Gaussian distributed, hence we can compute the upper and lower confidence bounds for the Gaussian, and then squeeze them into [0, 1] by the Probit transform \(\varPsi \). Letting \(\beta _\delta = \varPsi ^{-1}(\frac{\delta +1}{2})\), as customary when working with Normal distributions, we get the following bounds (a code sketch is given after the list):
  • \(\lambda ^+(\varvec{\theta },\delta )= \varPsi ( \mu (\tilde{f}_\phi (\varvec{\theta })) + \beta _\delta \sigma (\tilde{f}_\phi (\varvec{\theta })))\)
  • \(\lambda ^-(\varvec{\theta },\delta )=\varPsi ( \mu (\tilde{f}_\phi (\varvec{\theta })) - \beta _\delta \sigma (\tilde{f}_\phi (\varvec{\theta })))\)
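A minimal sketch of this classification step, assuming the posterior mean and standard deviation of the latent function at a set of grid points are available (e.g. from an smMC implementation), is the following; the Probit \(\varPsi \) is the standard normal CDF, computed with scipy.

```python
import numpy as np
from scipy.stats import norm

def classify_points(mean, std, alpha, delta=0.95):
    """Split grid points into positive / negative / undefined boolean masks,
    from the latent GP posterior mean and standard deviation at each point."""
    beta = norm.ppf((delta + 1) / 2)          # beta_delta = Psi^{-1}((delta+1)/2)
    lam_plus = norm.cdf(mean + beta * std)    # upper confidence bound on P_phi
    lam_minus = norm.cdf(mean - beta * std)   # lower confidence bound on P_phi
    positive = lam_minus > alpha
    negative = lam_plus < alpha
    undefined = ~(positive | negative)
    return positive, negative, undefined
```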
The Bayesian synthesis procedure is described in Algorithm 1, which after initialisation enters the main loop, where the computation of the positive, negative, and uncertain sets is carried out adaptively until convergence. Before proceeding further, we introduce some notation to describe regular grids, as they are used in the current implementation of the method. Let us consider the hyper-rectangular parameter space \(\varTheta = [w_1^-,w_1^+]\times \dots \times [w_n^-,w_n^+]\), where \(w_i^-\) and \(w_i^+\) are respectively the lower and the upper bound of the domain of the parameter \(\theta _i\). An \(\varvec{h}\)-grid of \(\varTheta \) is the set \(\varvec{h}\text{-grid } = \cup _{\varvec{m} \in M }\{ \varvec{w^-} + \varvec{m}*\varvec{h}\}\), where \(\varvec{h}=( h_1,\dots ,h_n)\), \(M = \{\varvec{m}\in \mathbb {N}^n \mid \varvec{w}^- + \varvec{m}*\varvec{h}\in \varTheta \}\), \(\varvec{w}^-=(w_1^-,\dots ,w_n^-)\), and \(*\) denotes elementwise multiplication. Given a grid, we define as a basic cell a small hyperrectangle of size \(\varvec{h}\) whose vertices are points of the grid.
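A regular \(\varvec{h}\)-grid over a hyperrectangle can be generated, for instance, as in the following sketch (an illustrative helper, not the paper's code):

```python
import numpy as np

def h_grid(w_minus, w_plus, h):
    """All points w_minus + m*h (elementwise) falling in the hyperrectangle
    [w_minus, w_plus]; returns an array of shape (#points, n)."""
    axes = [np.arange(lo, hi + 1e-12, step)
            for lo, hi, step in zip(w_minus, w_plus, h)]
    mesh = np.meshgrid(*axes, indexing="ij")
    return np.stack([m.ravel() for m in mesh], axis=-1)
```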
Initialisation. The initialisation phase consists in running some simulations of the PCRN at some points of the parameter space, to obtain a first reconstruction of the satisfaction function. As we do not need to be very precise in every part of the parameter space, but only for points \(\varvec{\theta }\) whose satisfaction probability \(P_\phi (\varvec{\theta })\) is close to the threshold \(\alpha \), we start by simulating the model on all parameters of a coarse grid \(\varvec{h_0}\text{-grid }\), with \(\varvec{h_0}\) chosen such that the total number of parameters \(\varvec{\theta }\) explored is reasonably small for smMC to be fast. The actual choice will depend on the number of dimensions of the parameter space, as the size of a grid grows exponentially with it. Once the grid \(\varvec{h_0}\text{-grid }\) is fixed, we simulate N runs of the model at each point and pass them to a monitoring algorithm for MITL, obtaining N observations of the truth value of the property \(\phi \) at each point of \(\varvec{h_0}\text{-grid }\), collected in the set \(\mathcal {S}\). We also initialise the sets \( \mathcal {P}_\alpha \), \(\mathcal {N}_\alpha \), and \(\mathcal {U}_\alpha \).
Computation of \(\varvec{\mathcal {P}_\alpha }\), \(\varvec{\mathcal {N}_\alpha }\), and \(\varvec{\mathcal {U}_\alpha }\) Regions. The algorithm then enters the main loop, first running smMC with the current set of sample points \(\mathcal {S}\) to compute the two functions \(\lambda ^+\) and \(\lambda ^-\). These are then used to update the regions \( \mathcal {P}_\alpha \), \(\mathcal {N}_\alpha \), and \(\mathcal {U}_\alpha \). Here we discuss several possible approaches.
Approach 1: Fixed Grid. The simplest approach is to partition the parameter space into small cells, i.e. using an \(\varvec{h}\text{-grid }\) with \(\varvec{h}\) small, and then assign each cell to one of the sets. The assignment will be discussed later, but it involves evaluating the functions \(\lambda ^+\) and \(\lambda ^-\) at each point of the grid. The method is accurate if each basic cell contains a fraction of the total volume much smaller than \(\epsilon \). However, this requires working with fine grids, whose size blows up quickly with the number of parameters. Practically, this approach is feasible up to dimension 3 or 4 of the parameter space.
Approach 2: Adaptive Grid. To scale better with the dimension of the parameter space, we can start evaluating the \(\lambda ^{+/-}\) functions on a coarse grid, and refine the grid iteratively only for cells that are assigned to the uncertain set, until a minimum grid size is reached.
Central in both approaches is how to guarantee that all points of a basic cell belong to the same set, inspecting only a finite number of them. In particular, we will limit the evaluation of the \(\lambda ^{+/-}\) functions to the vertices of each cell c, i.e. to the points in the grid \(\varvec{h}\text{-grid }\). Intuitively, this will work if the cell has a small edge size compared to the rate of growth of the satisfaction function, and the values of the satisfaction function at its vertices are all (sufficiently) above or below the threshold. However, we need to precisely quantify this “sufficient”. We sketch here two exact methods and a heuristic one, which performs well in practice. We discuss how to check that a cell belongs to the positive set, the negative one being symmetric.
Method 1: Global Lipschitz bound. This approach relies on computing the Lipschitz constant L of the satisfaction function. This can be obtained by estimating its derivatives (e.g. by finite differences, or better by learning them using methods discussed in [10]), and performing a global optimization of the modulus of the gradient after each call to smMC. Let \(d(\varvec{h})\) be the length of the largest diagonal of a basic cell c in an \(\varvec{h}\text{-grid }\). Consider the smallest value of the satisfaction function at one of the vertices of c, and call it \(\hat{p}\). Then the value of the satisfaction function in the cell is surely greater than \(\hat{p} - L d(\varvec{h})/2\) (after decreasing for at most half the diagonal, the function must increase again to reach the value of another vertex). The test then is \(\hat{p} - L d(\varvec{h})/2 \ge \alpha \).
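Under the stated assumptions (a known global Lipschitz constant L and cell diagonal \(d(\varvec{h})\)), the per-cell test can be sketched as follows:

```python
import numpy as np

def cell_is_positive(vertex_probs, h, L, alpha):
    """Global Lipschitz test for one basic cell: the satisfaction function
    inside the cell is bounded below by p_hat - L * d(h) / 2."""
    d = np.linalg.norm(h)        # largest diagonal of the cell
    p_hat = min(vertex_probs)    # smallest value at the cell vertices
    return p_hat - L * d / 2 >= alpha
```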
Method 2: Local Lipschitz bound. The previous method will suffer if the slope of the satisfaction function is large in some small region, as this will result in a large Lipschitz constant everywhere. To improve it, we can split the parameter space into subregions (for instance, by using a coarse grid), and compute the Lipschitz constant in each subregion. An alternative we are investigating is to compute in each cell of the grid a lower bound of the function \(f(\varvec{\theta })\) learned by the GP, exploiting its analytic expression.
Heuristic Method. In order to speed up computation and avoid computing Lipschitz constants, we can make the function \(\lambda ^-\) more strict. Specifically, we can use a larger \(\beta _\delta \) than the one required by our confidence level \(\delta \). For instance, for a 95% confidence, \(\beta _\delta = 1.96\), while we can use instead \(\beta _\delta =3\), corresponding roughly to a confidence of 99%. We couple this with a choice of the grid step \(\varvec{h}\) at least one order of magnitude smaller than the lengthscale of the kernel learned from the data (which is inversely proportional to the Lipschitz constant of the kernel and of the satisfaction function), which guarantees that the satisfaction function will vary very little in each cell. Under these conditions, we can be confident that if the stricter \(\lambda ^-\) is above the threshold at all vertices of the cell, then the same will hold for all points inside c for the less strict \(\lambda ^-\).
Refinement Step. After having built the sets \( \mathcal {P}_\alpha \), \(\mathcal {N}_\alpha \), and \(\mathcal {U}_\alpha \), we check if the volume of \(\mathcal {U}_\alpha \) is below the tolerance threshold. If so, we stop and return these sets. Otherwise, we need to increase the precision of the satisfaction function near the uncertain region. This means essentially reducing the variance inside \(\mathcal {U}_\alpha \), which can be obtained by increasing the number of observations in this region. Hence, the refinement step samples points from the undefined region \(\mathcal {U}_\alpha \), simulates the model a few times at each of these points, computes the truth of \(\phi \) for each trace, and adds these points to the training set \(\mathcal {S}\) of the smoothed model checking process. This refinement reduces the uncertainty bound in the undefined region, leading parts of it to be classified as positive \(\mathcal {P}_\alpha \) or negative \(\mathcal {N}_\alpha \). We iterate this process until the exit condition \(\frac{vol(\mathcal {U}_\alpha )}{vol(\varTheta )} < \epsilon \) is satisfied. The convergence of the algorithm is rooted in the properties of Smoothed Model Checking, which is guaranteed to converge to the true function with vanishing variance as the number of observation points goes to infinity. In practice, the method converges quite fast, unless the problem is very hard (i.e. the true satisfaction function is close to the threshold for a large fraction of the parameter space).
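Putting the pieces together, the overall active-learning loop (in its fixed-grid variant) can be sketched as below. `simulate_and_monitor` and `smmc_fit` are assumed helpers: the former returns the number of satisfying runs out of `n_runs` at a parameter point, the latter runs smoothed model checking on the accumulated training set and returns the latent posterior mean and standard deviation on the prediction grid; `classify_points` is the sketch given earlier.

```python
import numpy as np

def bayesian_synthesis(simulate_and_monitor, smmc_fit, grid, alpha,
                       eps=0.1, delta=0.95, n_runs=1000, n_refine=50, rng=None):
    """Sketch of the Bayesian threshold synthesis loop on a fixed fine grid."""
    rng = rng or np.random.default_rng()
    # initialisation: simulate on a coarse subsample of the prediction grid
    train = {tuple(th): simulate_and_monitor(th, n_runs) for th in grid[::10]}
    while True:
        mean, std = smmc_fit(train, grid)              # smoothed model checking
        pos, neg, und = classify_points(mean, std, alpha, delta)
        if und.mean() < eps:                           # vol(U)/vol(Theta) on a uniform grid
            return pos, neg, und
        # refinement: sample new observation points inside the undefined region
        candidates = grid[und]
        idx = rng.choice(len(candidates),
                         size=min(n_refine, len(candidates)), replace=False)
        for th in candidates[idx]:
            train[tuple(th)] = simulate_and_monitor(th, n_runs)
```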

4 Results

Implementation. We have implemented our algorithm in Python 3.6. The code is available at http://simonesilvetti.com/pycheck/. To improve the scalability of our algorithm, we profiled it to identify the most computationally expensive steps, among simulating the PCRN, checking the MITL formula on each trajectory, running smMC, and partitioning the parameter space. The most expensive part in our tests turned out to be the simulation step, which we performed using Gillespie's Stochastic Simulation Algorithm (SSA) [1]. To speed up simulations, we ran them in parallel leveraging the Numba [29] package of Python, which is well suited to executing array-oriented and math-heavy Python code. The smoothed model checking step, instead, is substantially independent of the number of repetitions per point; its execution time depends on the number of training points. This is why, compared with [6], we increased the number of simulations per parameter point and reduced the number of parameter points. We ran all the experiments on a Dell XPS, Intel Core i7-7700HQ 2.8 GHz, 8 GB 1600 MHz memory, equipped with Windows 10 Pro.
SIR Epidemic Model. We consider the popular SIR epidemic model [30], which is widely used to simulate the spreading of a disease among a population. The population of N individuals is divided into three classes:
  • susceptible S individuals that are healthy and vulnerable to the infection;
  • infected I individuals that are actively spreading the disease;
  • recovered R individuals, which gained immunity to the disease.
The version of SIR model we consider is defined by the following two chemical reactions:
$$\begin{aligned} r_1:S+I&\xrightarrow {\alpha _1}&2I \qquad \alpha _1= k_{i} \cdot \frac{X_s\cdot X_i}{N}\\ r_2:I&\xrightarrow {\alpha _2}&R \qquad \alpha _2= k_{r} \cdot X_i \end{aligned}$$
Here, \(r_1\) describes the possibility that a healthy individual gets the disease and becomes infected, while reaction \(r_2\) models the recovery of an infected agent. We describe the model as a PCRN with \(k_{i} \in [0.005,0.3]\), \(k_r \in [0.005, 0.2]\) and initial population \((S,I,R)=(95,5,0)\), and we consider the following MITL formula:
$$\begin{aligned} \phi = (I>0) \,\mathcal {U}_{[100,120]}\, (I=0) \end{aligned}$$
(3)
This formula expresses that the disease becomes extinct (i.e. \(I=0\)) between 100 and 120 time units. Note that for this model extinction will eventually happen with probability one, but the time of extinction depends on the parameters \(\varvec{\theta }=(k_{i},k_{r})\). In the following, we report experiments to synthesise the parameter region such that \(P_\phi (\varvec{\theta }) > \alpha \), with \(\alpha = 0.1\), volume tolerance \(\epsilon = 0.1\), and confidence \(\delta = 95\%\). We consider all possible combinations of free parameters to explore (i.e. \(k_i\) alone, \(k_r\) alone, and \(k_i\) and \(k_r\) together). The initial training set of the smoothed model checking approach has been obtained by sampling the truth values at parameters arranged on a grid as described in Sect. 3, of size 40 points for the 1D cases and 400 points for the 2D case. The satisfaction probability at each parameter vector of the training set, as well as at the parameter vectors sampled by the refinement process, has been estimated by simulating the PCRN and evaluating the MITL formula (3), with 1000 repetitions per parameter point.
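For illustration, this SIR experiment could be encoded for the simulator and monitor sketches of Sect. 2 as follows (`ssa_trajectory` and `bounded_until` are the assumed helpers introduced there; this is not the paper's code):

```python
import numpy as np

# SIR PCRN of Sect. 4: state (S, I, R), parameters theta = (k_i, k_r)
N = 100
x0 = np.array([95, 5, 0])
updates = [np.array([-1, 1, 0]),      # r1: S + I -> 2I
           np.array([0, -1, 1])]      # r2: I -> R
propensities = [lambda x, th: th[0] * x[0] * x[1] / N,   # alpha_1 = k_i * S * I / N
                lambda x, th: th[1] * x[1]]              # alpha_2 = k_r * I

# MITL formula (3): the infection dies out between 100 and 120 time units
mu_I_pos = lambda x: x[1] > 0
mu_I_zero = lambda x: x[1] == 0

times, states = ssa_trajectory(x0, updates, propensities, theta=(0.05, 0.05), t_max=120)
satisfied = bounded_until(times, states, mu_I_pos, mu_I_zero, 100, 120)
```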
Efficiency, Accuracy, and Scalability. The execution times of the experiments are reported in Table 1 (left). The results show a good performance of our statistical algorithm, despite it being implemented in Python rather than in a more efficient language like C. The execution times, as a percentage of those of the exact method reported in [7], are 42%, 18% and 7% for Case 1, Case 2 and Case 3, respectively. Our results are reported using the heuristic method to compute the sets and a fixed grid of small stepsize \(\varvec{h}\).
In Case 1, we also compare the three methods to classify the regions, computing the derivative of the satisfaction probability function by finite differences and (i) optimising it globally to obtain the Lipschitz constant (equal to 4.31), and (ii) optimising it in every cell of the fine prediction grid to compute a local Lipschitz constant. As for the heuristic method, we use \(\beta '_\delta = 3\) instead of \(\beta _\delta = 1.96\), and a grid step of order \(10^{-4}\), three orders of magnitude less than the lengthscale of the kernel, set to 0.1 by marginal likelihood optimization. All three methods gave the same results for the grid size we used. More specifically, the maximum displacement of the approximated satisfaction probability inside a cell is estimated to be 0.003.
As a statistical accuracy test, we computed the “true” value of the satisfaction probability (by deep statistical model checking, using 10000 runs) for points in the positive and negative sets close to the undefined set, and counted how many times these points were misclassified. More specifically, in Case 1 we consider 300 equally-spaced points between 0.07 and 0.1 (note that a portion of the undefined region is located in a neighbourhood of 0.05, see Fig. 1). All points turned out to be classified correctly, confirming the accuracy of the smMC prediction.
We also performed a scalability test with respect to the size of the state space of the PCRN model, increasing the initial population size N of the SIR model (Case 1). The results are reported in Table 1 (right). We increase the initial population size maintaining the original proportion \(\frac{I}{S}=\frac{1}{19}\). Moreover, we consider different thresholds \(\alpha \) and volume tolerances \(\epsilon \) in order to force the algorithm to execute at least one refinement step, as the shape of the satisfaction function changes with N. The execution time increases moderately, following a linear trend.
Table 1.
(LEFT) Results for the Statistical Parameter Synthesis for the SIR model with \(N=100\) individuals and the formula \(\phi = (I>0) \,\mathcal {U}_{[100,120]}\, (I=0)\). We report the mean and standard deviation of the execution time of the algorithm. The volume tolerance is set to 10% and the threshold \(\alpha \) is set to 0.1. The h-grid column shows the size \(\varvec{h}\) of the grid used to compute the positive, negative, and uncertain sets. (RIGHT) Scalability of the method w.r.t. the size of the state space of the SIR model, increasing the initial population N. \(\alpha \) and \(\epsilon \) are the threshold and volume tolerance used in the experiments.

5 Conclusions

We presented an efficient statistical algorithm for parameter synthesis, identifying the parameters for which an MITL specification is satisfied with probability greater than a given threshold. The algorithm is based on Bayesian statistics and leverages the powerful parametric verification framework of Smoothed Model Checking, integrating it into an active learning refinement loop which concentrates the computational effort of simulation near the critical region around the threshold \(\alpha \). The developed approach shows good performance in terms of execution time and outperforms the exact algorithm developed in [7], retaining good accuracy at the price of having only statistical guarantees.
Note that we compared with the performance of [7] and not with their GPU implementation [12], as our method uses only CPU computing power at the moment. However, it can be implemented on a GPU, leveraging e.g. [31], and we expect a substantial increase in performance. Fully distributing the computations of the algorithm on CPUs, beyond stochastic simulation alone, is also feasible; the hard part is parallelising GP inference [32].
Other directions for future work include the implementation of the adaptive grid strategy to construct the \(\mathcal {P}_\alpha \), \(\mathcal {N}_\alpha \), and \(\mathcal {U}_\alpha \) regions from the output of smMC, and a divide and conquer strategy to split the parameter space (and the uncertain set \(\mathcal {U}_\alpha \)) into subregions, to reduce the complexity of smMC. These two extensions are mandatory to scale the method to higher dimensions, up to 6–8 parameters. To scale even further, we plan to integrate techniques to speed up GP reconstruction: more classical sparse approximation techniques [10] and more recent methods for GPs tailored to work on grids [33, 34]. These techniques have a computational cost of O(n), instead of the \(O(n^3)\) of the standard implementation. Finally, we aim to combine our approach with the exact algorithm developed in [7]. The idea is to use our approach for a rough exploration of the parameter space, cutting out the regions which are, with high statistical confidence, above or below the considered threshold, and applying the exact approach in the remaining area, when feasible.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Footnotes
1
The Probit \(\varPsi (x)=p(Z \le x)\) is the cumulative distribution function of a standard normal distribution \(Z\sim \mathcal {N}(0,1)\), evaluated at the point x.
 
References
1. Gillespie, D.T.: Exact stochastic simulation of coupled chemical reactions. J. Phys. Chem. 81(25), 2340–2361 (1977)
2. Bortolussi, L., Hillston, J., Latella, D., Massink, M.: Continuous approximation of collective systems behaviour: a tutorial. Perform. Eval. 70(5), 317–349 (2013)
3. Haas, P.J.: Stochastic Petri nets for modelling and simulation. In: Winter Simulation Conference, vol. 1 (2004)
5. Dehnert, C., Junges, S., Katoen, J.P., Volk, M.: A storm is coming: a modern probabilistic model checker. arXiv preprint arXiv:1702.04311 (2017)
6. Bortolussi, L., Milios, D., Sanguinetti, G.: Smoothed model checking for uncertain continuous-time Markov chains. Inf. Comput. 247, 235–253 (2016)
7. Češka, M., Dannenberg, F., Paoletti, N., Kwiatkowska, M., Brim, L.: Precise parameter synthesis for stochastic biochemical systems. Acta Informatica 54(6), 589–623 (2017)
10. Rasmussen, C.E., Williams, C.K.I.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)
13. Bartocci, E., Bortolussi, L., Nenzi, L., Sanguinetti, G.: System design of stochastic models using robustness of temporal properties. Theoret. Comput. Sci. 587, 3–25 (2015)
15. Han, T., Katoen, J.P., Mereacre, A.: Approximate parameter synthesis for probabilistic time-bounded reachability. In: Proceedings of the Real-Time Systems Symposium, pp. 173–182 (2008)
16.
18. Jha, S.K., Langmead, C.J.: Synthesis and infeasibility analysis for stochastic models of biochemical systems using statistical model checking and abstraction refinement. Theoret. Comput. Sci. 412(21), 2162–2187 (2011)
19. Katoen, J.P.: The probabilistic model checking landscape. In: LICS, pp. 31–45 (2016)
21. Bortolussi, L., Sanguinetti, G.: Smoothed model checking for uncertain continuous time Markov chains. arXiv preprint arXiv:1402.1450 (2014)
23. Norris, J.R.: Markov Chains. Cambridge University Press, Cambridge (1998)
24. Baier, C., Haverkort, B., Hermanns, H., Katoen, J.P.: Model-checking algorithms for continuous-time Markov chains. IEEE Trans. Softw. Eng. 29(6), 524–541 (2003)
25. Younes, H.L., Simmons, R.G.: Statistical probabilistic model checking with a focus on time-bounded properties. Inf. Comput. 204(9), 1368–1409 (2006)
28. Steinwart, I.: On the influence of the kernel on the consistency of support vector machines. J. Mach. Learn. Res. 2, 67–93 (2002)
29. Lam, S.K., Pitrou, A., Seibert, S.: Numba: a LLVM-based Python JIT compiler. In: Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC, p. 7. ACM (2015)
30. Gardner, T.S., Cantor, C.R., Collins, J.J.: Construction of a genetic toggle switch in Escherichia coli. Nature 403(6767), 339–342 (2000)
31. Dai, Z., Damianou, A., Hensman, J., Lawrence, N.: Gaussian process models with parallelization and GPU acceleration. arXiv:1410.4984 (2014)
32.
33. Wilson, A., Nickisch, H.: Kernel interpolation for scalable structured Gaussian processes (KISS-GP). In: International Conference on Machine Learning, pp. 1775–1784 (2015)
34. Wilson, A.G., Hu, Z., Salakhutdinov, R., Xing, E.P.: Deep kernel learning. In: Artificial Intelligence and Statistics, pp. 370–378 (2016)
Metadata
Title
Bayesian Statistical Parameter Synthesis for Linear Temporal Properties of Stochastic Models
Authors
Luca Bortolussi
Simone Silvetti
Copyright Year
2018
DOI
https://doi.org/10.1007/978-3-319-89963-3_23