Meta-model-based importance sampling for reliability sensitivity analysis
Introduction
Modern engineering has to cope with uncertainty at the various stages of the design, manufacturing and operation of systems and structures. Such uncertainty arises either from the observed scatter of the environmental conditions in which products and structures will evolve, or from a lack of knowledge that results in the formulation of hopefully conservative assumptions. No matter their source, aleatory (observed) and epistemic (reducible) uncertainties can be dealt with in the unified framework of probabilistic methods for uncertainty quantification and risk-based engineering.
In particular, reliability analysis is the discipline which aims at quantifying the level of safety of a system in terms of a probability of failure. From now on, it is assumed that the uncertain parameters of the problem at hand are modeled by a random vector X whose joint probability distribution is explicitly known and dependent on a certain number of design parameters grouped in the vector d. In practice these design parameters are considered as mean values or, more generally, characteristic values of the random variables gathered in X. This assumption corresponds to the common situation where d gathers “ideal” dimensions whereas the randomness in X models the aleatory uncertainty in the manufacturing process due to tolerancing.
It is also assumed that there exists a deterministic computational model $\mathcal{M}$ which enables the assessment of the system's performance through a so-called limit-state function $g$. According to this setup, the failure probability is defined by the following integral:
$$p_f = \int_{\mathcal{F}} f_{\mathbf{X}}(\mathbf{x}) \,\mathrm{d}\mathbf{x},$$
where $\mathcal{F} = \{\mathbf{x} : g(\mathbf{x}) \leq 0\}$ is the failure domain and $f_{\mathbf{X}}$ is the joint probability density function of the random vector $\mathbf{X}$. The dependence of $g$ on the output of $\mathcal{M}$ will now be dropped for the sake of clarity in the notation, but it is important to remember that each evaluation of $g$ implies a run of the possibly expensive-to-evaluate computational model $\mathcal{M}$. Note that in general the design parameters d could also affect the limit-state function itself. This case is not addressed in the present paper though. We assume here that the parameters d only affect the distribution of X, as in [1], [2], [3], [4], because this considerably eases the reliability sensitivity analysis through the use of the importance sampling trick initially proposed by Rubinstein [5].
Introducing the failure indicator function $\mathbb{1}_{\mathcal{F}}$, the failure probability easily rewrites as its mathematical expectation:
$$p_f = \mathbb{E}\left[\mathbb{1}_{\mathcal{F}}(\mathbf{X})\right].$$
This enables the evaluation of the failure probability by the so-called Monte Carlo estimator [1]:
$$\hat{p}_f = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}_{\mathcal{F}}\left(\mathbf{X}^{(i)}\right),$$
where {X(i), i = 1, … , N} is a sample of N independent copies of the random vector X. Due to the central limit theorem, this estimator is unbiased and convergent, and its coefficient of variation is defined as follows (provided pf ≠ 0):
$$\delta_{\hat{p}_f} = \sqrt{\frac{1 - p_f}{N \, p_f}}.$$
From this expression, it appears that the lower the probability $p_f$, the greater the number $N$ of evaluations of $\mathbb{1}_{\mathcal{F}}$ (hence, the number of runs of $\mathcal{M}$). As an order of magnitude, one should expect a minimum sample size of $N = 10^{k+2}$ for estimating a failure probability of $10^{-k}$ with a 10% coefficient of variation. This clearly becomes intractable for expensive-to-evaluate failure indicator functions and low failure probabilities, which are both the trademark of engineered systems. Nowadays there exist a number of techniques to evaluate the failure probability at a far reduced computational cost.
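As an illustration of the crude Monte Carlo estimator and its coefficient of variation, the following Python sketch estimates a failure probability for a toy problem (the bivariate standard Gaussian input and the limit-state function are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    # Toy limit-state: failure when x1 + x2 >= 3, i.e. g(x) <= 0.
    return 3.0 - x.sum(axis=-1)

N = 100_000
X = rng.standard_normal(size=(N, 2))               # X ~ N(0, I_2)
failed = limit_state(X) <= 0.0                     # failure indicator 1_F(X)

pf_hat = failed.mean()                             # crude Monte Carlo estimator of p_f
cov_hat = np.sqrt((1.0 - pf_hat) / (N * pf_hat))   # estimated coefficient of variation
```

Here $p_f = \Phi(-3/\sqrt{2}) \approx 1.7 \times 10^{-2}$, so roughly 1700 of the $10^5$ runs fall in the failure domain; a target probability of $10^{-6}$ with the same relative accuracy would already require on the order of $10^8$ runs, which is the intractability discussed above.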
On the one hand, variance reduction techniques [1] aim at rewriting Eq. (2) in order to derive new Monte-Carlo-sampling-based estimators that feature a lower coefficient of variation than the one given in Eq. (4). Importance sampling [1], directional sampling [6], line sampling [7], [8] and subset simulation [9] all enable a great reduction of the computational cost compared to crude Monte Carlo sampling. Importance sampling and subset simulation are certainly the most widely applicable techniques because they are not based on any geometrical assumptions about the topology of the failure domain. Nonetheless, importance sampling is only a concept and still requires the choice of an instrumental density function, which is not trivial and strongly influences both the accuracy and the computational cost.
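To make the instrumental-density idea concrete, here is a minimal importance sampling sketch on a toy problem (the Gaussian instrumental density centred near the failure region is an illustrative choice; picking it well is precisely the non-trivial step mentioned above):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

# Toy setup (illustrative, not from the paper): X ~ N(0, I_2) and
# failure domain F = {x : x1 + x2 >= 3}, so p_f = Phi(-3/sqrt(2)) ~ 1.7e-2.
f_X = mvn(mean=[0.0, 0.0])            # nominal density of X
h = mvn(mean=[1.5, 1.5])              # instrumental density centred on F

N = 20_000
X = h.rvs(size=N, random_state=0)     # sample from the instrumental density
failed = X.sum(axis=1) >= 3.0
w = f_X.pdf(X) / h.pdf(X)             # importance weights f_X(x) / h(x)

pf_is = np.mean(failed * w)           # unbiased importance sampling estimator of p_f
```

Because the instrumental density concentrates the samples where failure happens, the estimator reaches a small coefficient of variation with far fewer runs than crude Monte Carlo.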
On the other hand, approximation techniques make use of meta-models that imitate the limit-state function $g$ (or at least the limit-state surface $\{\mathbf{x} : g(\mathbf{x}) = 0\}$) in order to reduce the computational cost. These meta-models are built from a so-called design of experiments {x(i), i = 1, … , m} whose size m does not depend on the order of magnitude of the failure probability but rather on the nonlinearity of the performance function and the dimension n of the input space. For instance, quadratic response surfaces [10], artificial neural networks [11], support vector machines [12], Kriging surrogates [13] and (sparse) polynomial chaos expansions [14], [15], [16] have been used for surrogate-based reliability analysis (see [17] for a review).
The most efficient variance reduction techniques (namely subset simulation) still require rather large sample sizes, which can potentially be reduced when some knowledge about the shape of the failure domain exists. Despite the increasing accuracy of meta-models, surrogate-based (also called plug-in) approaches, which consist in using emulators instead of the actual limit-state function, lack an error measure (much like the historical first- and second-order reliability methods, FORM/SORM). Starting from these two premises, a novel hybrid technique named meta-model-based importance sampling was proposed by Dubourg et al. [18], [19]. This technique makes use of Kriging predictors in order to approximate the optimal instrumental density function in an importance sampling scheme that theoretically reduces the estimation variance to zero.
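The following sketch conveys the flavour of such a hybrid scheme on a toy problem, using scikit-learn's Gaussian process regressor as the Kriging surrogate. It is only a loose illustration under simplifying assumptions: the limit-state, the design of experiments and the weighted-resampling stand-in used here to draw from a quasi-optimal density $h(\mathbf{x}) \propto \pi(\mathbf{x}) f_{\mathbf{X}}(\mathbf{x})$ are all choices made for this sketch, not the algorithm of [18], [19]:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def g(x):                                  # toy limit-state, failure when g(x) <= 0
    return 3.0 - x.sum(axis=-1)

# Small design of experiments for the Kriging surrogate (m = 40 runs of g).
X_doe = 2.0 * rng.standard_normal(size=(40, 2))
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gp.fit(X_doe, g(X_doe))

def pi(x):
    # Probability that the Kriging predictor classifies x as failed.
    mu, sd = gp.predict(x, return_std=True)
    return norm.cdf(-mu / np.maximum(sd, 1e-12))

# Stage 1: surrogate-only estimate p_eps = E[pi(X)] (cheap, no calls to g).
X1 = rng.standard_normal(size=(20_000, 2))
p_eps = pi(X1).mean()

# Stage 2: correction factor alpha = E_h[1_F(X) / pi(X)], with samples from
# h(x) ∝ pi(x) f_X(x) drawn here by weighted resampling (a crude stand-in
# for a proper sampler); only these 500 points require actual runs of g.
X2 = rng.standard_normal(size=(20_000, 2))
w = pi(X2)
w /= w.sum()
Xh = X2[rng.choice(len(X2), size=500, p=w, replace=True)]
alpha = np.mean((g(Xh) <= 0.0) / pi(Xh))

pf_metais = p_eps * alpha                  # corrected failure probability estimate
```

The correction factor turns the surrogate-based guess into an estimator of the true failure probability, which is exactly the error-measure property that plain plug-in approaches lack.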
This paper is not only concerned with the evaluation of the failure probability in Eq. (1) for a single value of the parameters d, but also with the analysis of its sensitivity with respect to the latter vector. Within the structural reliability community, this type of analysis is referred to as reliability sensitivity analysis [20], [21]. It provides important insight into system failure for risk-based decision making (e.g. robust control, design or reliability-based design optimization).
However, it was previously recalled that the accurate estimation of a single value of the failure probability is already computationally costly. Hence, assessing the failure probability sensitivity by means of repeated reliability analyses is absolutely not affordable. Starting from this premise, Au [22] proposed to consider the parameters d as artificially uncertain, and then use a conditional sampling technique in order to assess reliability sensitivity within a single simulation. It is concluded that this approach proves efficient for up to 2–3 parameters in d. Based on a similar idea, Taflanidis and Beck [23] developed an algorithm which enables the identification of a reduced set of the parameters d that minimizes the failure probability. It is applied to the robust control of the dynamic behavior of structural systems.
Here, the objective is to get a more local measure of the influence of d on the failure probability through the calculation of the gradient of the failure probability. This quantity then enables the use of gradient-based nonlinear constrained optimization algorithms for solving the reliability-based design optimization problem [24], [25], [26]. This topic has already received significant interest. For instance, Bjerager and Krenk [27] differentiated the Hasofer–Lind reliability index, which itself enables the calculation of the gradient of the failure probability. However, FORM may suffer from incorrect assumptions (namely, the linearity of the limit-state surface in the standard space and the uniqueness of the most probable failure point) that are hard to check in practice. Valdebenito and Schuëller [28] proposed a parametric approximation of the failure probability which then enables its differentiation. However, the accuracy of this approach is conditional on the ability of the proposed parametric model to fit the actual failure probability function.
The score function approach that is used here was initially proposed by Rubinstein [5] (see also [1, Chapter 7]). It features the double advantage that (i) it is a simple post-processing of a sampling-based reliability analysis (i.e. it does not require any additional calls to the limit-state function) and (ii) it can be applied to any Monte-Carlo-sampling-based technique (either on the actual limit-state function or on a surrogate). For instance, it has already been applied to importance sampling [1] and subset simulation [3].
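In the Gaussian case the score function is available in closed form, which allows a small self-contained sketch of the approach (the limit-state and the choice of d as the mean vector are illustrative; the key point is that the gradient estimate reuses the very samples of the reliability analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative): X ~ N(d, I_2) with d = (0, 0),
# failure when x1 + x2 >= 3.
d = np.zeros(2)
N = 1_000_000
X = rng.normal(loc=d, size=(N, 2))
failed = X.sum(axis=1) >= 3.0

# Score of a Gaussian with identity covariance w.r.t. its mean:
# d/dd log f_X(x; d) = x - d.
score = X - d

# Score function estimator: dp_f/dd = E[1_F(X) * score(X)].
# Same samples as the reliability analysis, no extra limit-state calls.
grad_pf = (failed[:, None] * score).mean(axis=0)
pf_hat = failed.mean()
```

For this toy case $p_f(\mathbf{d}) = \Phi\left((d_1 + d_2 - 3)/\sqrt{2}\right)$, so the exact gradient components equal $\varphi(-3/\sqrt{2})/\sqrt{2} \approx 0.0297$, which the estimate recovers.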
The goal of this paper is to show how the score function approach may be coupled with meta-model-based importance sampling for efficient reliability sensitivity analysis. The paper is divided into three sections. First, the basics of meta-model-based importance sampling are briefly summarized. Then, the score function approach is applied to the proposed estimator of the failure probability and the whole approach is eventually tested on three structural reliability examples.
Meta-model-based importance sampling for reliability analysis
This section recalls the basics of meta-model-based importance sampling. For a more exhaustive introduction, the interested reader is referred to the papers by Dubourg et al. [18], [19] and the Ph.D. thesis of Dubourg [29].
Meta-model-based importance sampling for reliability sensitivity analysis
This section deals with the calculation of the partial derivatives of the failure probability with respect to the parameters d in the probabilistic model using meta-model-based importance sampling together with the score function approach.
Application examples
The meta-model-based importance sampling procedure and the score function approach are now applied to a number of examples in structural reliability for the sake of validation and illustration. The results are compared to those obtained by FORM [27] or subset simulation [9], [3]. Some reliability sensitivity results are compared in terms of the so-called elasticities (see e.g. [42]). The elasticity of the failure probability with respect to a non-zero parameter $d_j$ is defined as follows:
$$e_{d_j} = \frac{d_j}{p_f} \, \frac{\partial p_f}{\partial d_j}.$$
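As a sanity check of the elasticity definition, a one-dimensional toy case (illustrative, not one of the paper's examples) where both $p_f$ and its derivative are known in closed form can be compared against the score-function estimate:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy 1-D case (illustrative): X ~ N(d, 1), failure when x >= 3, here d = 1.
d = 1.0
N = 2_000_000
x = rng.normal(loc=d, size=N)
failed = x >= 3.0

pf_hat = failed.mean()
grad_hat = (failed * (x - d)).mean()     # score-function estimate of dp_f/dd
elasticity = d / pf_hat * grad_hat       # e_d = (d / p_f) * dp_f/dd

# Closed form for comparison: p_f = Phi(d - 3) and dp_f/dd = phi(d - 3).
elasticity_exact = d * norm.pdf(d - 3.0) / norm.cdf(d - 3.0)
```

The elasticity being dimensionless, it allows a fair ranking of the parameters in d even when they have different units or orders of magnitude.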
Conclusion
The score function approach proposed by Rubinstein [5] proves to be an efficient tool for reliability sensitivity analysis. First, using an importance-sampling-like trick, he showed that the gradient of the failure probability turns out to be the expectation of the gradient of the log-likelihood of the failed samples with respect to the distribution of the random vector X. It means that reliability sensitivities can readily be obtained after a Monte-Carlo-sampling-based reliability analysis at a negligible additional computational cost.
References (44)
- Universal properties of kernel functions for probabilistic sensitivity analysis. Prob Eng Mech (2009)
- Subset simulation for structural reliability sensitivity analysis. Reliab Eng Sys Safety (2009)
- Local estimation of failure probability function by weighted approach. Prob Eng Mech (2013)
- The score function approach for sensitivity analysis of computer simulation models. Math Comput Simul (1986)
- Reliability of structures in high dimensions, part I: algorithms and applications. Prob Eng Mech (2004)
- Application of line sampling simulation method to reliability benchmark problems. Struct Safety (2007)
- Estimation of small failure probabilities in high dimensions by subset simulation. Prob Eng Mech (2001)
- A fast and efficient response surface approach for structural reliability problems. Struct Safety (1990)
- Sparse polynomial chaos expansions and adaptive stochastic finite elements using a regression approach. Comptes Rendus Mécanique (2008)
- An adaptive algorithm to build up sparse polynomial chaos expansions for stochastic finite element analysis. Prob Eng Mech (2010)