Structural Safety, Volume 49, July 2014, Pages 27–36

Meta-model-based importance sampling for reliability sensitivity analysis

https://doi.org/10.1016/j.strusafe.2013.08.010

Highlights

  • Meta-model-based importance sampling (meta-IS) makes use of Kriging to devise a quasi-optimal instrumental density.

  • The potential substitution bias which traditionally affects reliability methods that are based on meta-models is avoided.

  • The paper develops sensitivity measures derived from meta-IS that can be used in reliability-based design optimization.

Abstract

Reliability sensitivity analysis aims at studying the influence of the parameters of the probabilistic model on the failure probability of a given system. Such an influence may either be quantified over a given range of values of the parameters of interest by a parametric analysis, or only locally by means of its partial derivatives. This paper is concerned with the latter approach when the limit-state function involves the output of an expensive-to-evaluate computational model. In order to reduce the computational cost, it is proposed to compute the failure probability by means of the recently proposed meta-model-based importance sampling method. This method resorts to the adaptive construction of a Kriging meta-model which emulates the limit-state function. Then, instead of using this meta-model as a surrogate for computing the failure probability, its probabilistic nature is used to build a quasi-optimal instrumental density function for accurately computing the actual failure probability through importance sampling. The proposed estimator of the failure probability is recast as a product of two terms. The augmented failure probability is estimated using the emulator only, while the correction factor is estimated using both the actual limit-state function and its emulator in order to quantify the substitution error. This estimator is then differentiated by means of the score function approach, which enables the estimation of the gradient of the failure probability without any additional call to the limit-state function (or its Kriging emulator). The approach is validated on three structural reliability examples.

Introduction

Modern engineering has to cope with uncertainty at the various stages of the design, manufacturing and operation of systems and structures. Such uncertainty arises either from the observed scatter of the environmental conditions in which products and structures will operate, or from a lack of knowledge that results in the formulation of hopefully conservative assumptions. Regardless of its source, aleatory (observed) and epistemic (reducible) uncertainties can be dealt with in the unified framework of probabilistic methods for uncertainty quantification and risk-based engineering.

In particular, reliability analysis is the discipline which aims at quantifying the level of safety of a system in terms of a probability of failure. From now on, it is assumed that the uncertain parameters of the problem at hand are modeled by a random vector X whose joint probability distribution is explicitly known and dependent on a certain number of design parameters grouped in the vector d. In practice these design parameters are considered as mean values or, more generally, characteristic values of the random variables gathered in X. This assumption corresponds to the common situation where d gathers “ideal” dimensions whereas the randomness in X models the aleatory uncertainty in the manufacturing process due to tolerancing.

It is also assumed that there exists a deterministic computational model M which enables the assessment of the system's performance through a so-called limit-state function g. According to this setup, the failure probability is defined by the following integral:

$$p_f(\mathbf{d}) = \mathbb{P}\left[ g(\mathbf{X}, \mathcal{M}(\mathbf{X})) \le 0 \mid \mathbf{d} \right] = \int_{\mathcal{F}} f_{\mathbf{X}}(\mathbf{x} \mid \mathbf{d}) \,\mathrm{d}\mathbf{x}, \qquad (1)$$

where $\mathcal{F} = \{ \mathbf{x} \in \mathbb{X} : g(\mathbf{x}, \mathcal{M}(\mathbf{x})) \le 0 \}$ is the failure domain and $f_{\mathbf{X}}$ is the joint probability density function of the random vector X. The dependence of g on the output of M is dropped from now on for the sake of notational clarity, but it is important to remember that each evaluation of g implies a run of the possibly expensive-to-evaluate computational model M. Note that in general the design parameters d could also affect the limit-state function itself. This case is not addressed in the present paper, though. We assume here that the parameters d only affect the distribution of X, as in [1], [2], [3], [4], because this considerably eases the reliability sensitivity analysis through the importance sampling trick initially proposed by Rubinstein [5].

Introducing the failure indicator function $\mathbf{1}_{\mathcal{F}}$, the failure probability is easily rewritten as its mathematical expectation:

$$p_f(\mathbf{d}) = \int_{\mathbb{X}} \mathbf{1}_{\mathcal{F}}(\mathbf{x}) \, f_{\mathbf{X}}(\mathbf{x} \mid \mathbf{d}) \,\mathrm{d}\mathbf{x} = \mathbb{E}_{\mathbf{X}}\left[ \mathbf{1}_{\mathcal{F}}(\mathbf{X}) \right]. \qquad (2)$$

This enables the evaluation of the failure probability by the so-called Monte Carlo estimator [1]:

$$\hat{p}_f^{\mathrm{MCS}} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}_{\mathcal{F}}\left( \mathbf{X}^{(i)} \right), \qquad (3)$$

where $\{\mathbf{X}^{(i)},\ i = 1, \ldots, N\}$ is a sample of N independent copies of the random vector X. This estimator is unbiased and convergent, and the central limit theorem yields its coefficient of variation (provided $p_f \neq 0$):

$$\delta_{\mathrm{MCS}} = \sqrt{\frac{1 - p_f}{N \, p_f}}. \qquad (4)$$

From this expression, it appears that the lower the probability pf, the greater the number N of evaluations of 1_F (and hence of runs of M) required to reach a given accuracy. As an order of magnitude, one should expect a minimum sample size of N = 10^(k+2) for estimating a failure probability of 10^(-k) with a 10% coefficient of variation. This clearly becomes intractable for expensive-to-evaluate failure indicator functions and low failure probabilities, which are both the trademark of engineered systems. Nowadays there exist a number of techniques to evaluate the failure probability at a far lower computational cost.
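For illustration, a minimal crude Monte Carlo sketch of Eqs. (2)–(4) is given below (in Python, with a hypothetical two-dimensional limit-state function g that is not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    """Hypothetical, cheap limit-state function: failure when g(x) <= 0."""
    return 3.0 - x[:, 0] - x[:, 1]

N = 100_000                                   # Monte Carlo sample size
X = rng.normal(size=(N, 2))                   # N independent copies of X (standard normal here)
failed = (g(X) <= 0.0)                        # failure indicator 1_F(X^(i))

pf_hat = failed.mean()                        # Eq. (3): crude Monte Carlo estimator
cov = np.sqrt((1.0 - pf_hat) / (N * pf_hat))  # Eq. (4): coefficient of variation

print(f"pf ~ {pf_hat:.3e}, CoV ~ {cov:.1%}")
```

For this toy case pf is of the order of 10^(-2), so about 10^4 samples suffice for a 10% coefficient of variation, in line with the rule of thumb above; for pf = 10^(-6) the same rule already requires 10^8 model runs.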

On the one hand, variance reduction techniques [1] aim at rewriting Eq. (2) in order to derive new Monte-Carlo-sampling-based estimators that feature a lower coefficient of variation than the one given in Eq. (4). Importance sampling [1], directional sampling [6], line sampling [7], [8] and subset simulation [9] all enable a great reduction of the computational cost compared to crude Monte Carlo sampling. Importance sampling and subset simulation are certainly the most widely applicable techniques because they do not rely on any geometrical assumption about the topology of the failure domain F. Nonetheless, importance sampling is only a concept and still requires the choice of an instrumental density function, which is not trivial and strongly influences both the accuracy and the computational cost.
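To make the role of the instrumental density concrete, the following sketch (same hypothetical g as above; the shifted-Gaussian instrumental density is an arbitrary choice made here for illustration, not the paper's) implements the basic importance sampling estimator, where samples drawn from h are reweighted by the likelihood ratio f_X/h:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def g(x):
    """Hypothetical limit-state function: failure when g(x) <= 0."""
    return 3.0 - x[:, 0] - x[:, 1]

N = 5_000
shift = np.array([1.5, 1.5])                  # shift towards the (assumed) failure region
X = rng.normal(size=(N, 2)) + shift           # samples drawn from the instrumental density h

f_X = stats.multivariate_normal(mean=[0.0, 0.0]).pdf(X)   # nominal density f_X
h = stats.multivariate_normal(mean=shift).pdf(X)          # instrumental density h

pf_hat = np.mean((g(X) <= 0.0) * f_X / h)     # importance sampling estimator of pf
print(f"pf ~ {pf_hat:.3e}")
```

How well this works depends entirely on the choice of h, which is precisely the difficulty that the meta-model-based construction described below addresses.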

On the other hand, approximation techniques make use of meta-models that imitate the limit-state function (or at least the limit-state surface $\{\mathbf{x} \in \mathbb{X} : g(\mathbf{x}) = 0\}$) in order to reduce the computational cost. These meta-models are built from a so-called design of experiments $\{\mathbf{x}^{(i)},\ i = 1, \ldots, m\}$ whose size m does not depend on the order of magnitude of the failure probability but rather on the nonlinearity of the performance function g and the dimension n of the input space $\mathbb{X} \subseteq \mathbb{R}^n$. For instance, quadratic response surfaces [10], artificial neural networks [11], support vector machines [12], Kriging surrogates [13] and polynomial (resp. sparse polynomial) chaos expansions [14], [15], [16] have been used for surrogate-based reliability analysis (see [17] for a review).
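As an illustration of the plug-in (surrogate-based) use of a Kriging meta-model, the sketch below fits a Gaussian process to a small design of experiments and then estimates pf by Monte Carlo on the predictor only. It uses scikit-learn's Gaussian process regressor and the same hypothetical g as above; it is not the adaptive construction used in the paper:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(2)

def g(x):
    """Hypothetical limit-state function (stands for an expensive model run)."""
    return 3.0 - x[:, 0] - x[:, 1]

# Design of experiments: m calls to the actual limit-state function.
m = 30
X_doe = rng.uniform(-4.0, 4.0, size=(m, 2))
y_doe = g(X_doe)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=1.0),
                              normalize_y=True).fit(X_doe, y_doe)

# Plug-in estimate: crude Monte Carlo on the Kriging mean, no further calls to g.
N = 100_000
X = rng.normal(size=(N, 2))
mu = gp.predict(X)
pf_plugin = np.mean(mu <= 0.0)
print(f"plug-in pf ~ {pf_plugin:.3e}")
```

Such a plug-in estimate is cheap but carries the substitution bias discussed next, since the sign of the Kriging mean may disagree with the sign of g near the limit-state surface.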

The most efficient variance reduction techniques (namely subset simulation) still require rather large sample sizes, which can potentially be reduced when some knowledge about the shape of the failure domain exists. Despite the increasing accuracy of meta-models, surrogate-based (also called plug-in) approaches, which consist in using emulators instead of the actual limit-state function, lack an error measure (much like the historical first- and second-order reliability methods (FORM/SORM)). Starting from these two premises, a novel hybrid technique named meta-model-based importance sampling was proposed by Dubourg et al. [18], [19]. This technique makes use of Kriging predictors in order to approximate the optimal instrumental density function in an importance sampling scheme, which theoretically reduces the estimation variance to zero.
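According to the description given in the abstract, the resulting meta-IS estimator factorizes into an augmented failure probability, computed with the Kriging-based probabilistic classification function only, times a correction factor that calls the actual limit-state function to quantify the substitution error. The sketch below merely illustrates that two-term structure with hypothetical helper callables; it is not the adaptive algorithm of [18], [19]:

```python
import numpy as np

def meta_is_estimate(pi, g, sample_fX, sample_h, N_eps=10_000, N_corr=200):
    """Two-term meta-IS estimator (structural sketch only).

    pi        -- probabilistic classification function derived from the Kriging
                 predictor, i.e. the predictor's probability that g(x) <= 0
    g         -- actual (expensive-to-evaluate) limit-state function
    sample_fX -- callable drawing samples from the nominal density f_X
    sample_h  -- callable drawing samples from the quasi-optimal instrumental
                 density built from pi and f_X (e.g. by MCMC)
    """
    # Augmented failure probability: evaluated with the emulator only.
    X_eps = sample_fX(N_eps)
    pf_eps = np.mean(pi(X_eps))

    # Correction factor: evaluated with both the actual limit-state function
    # and its emulator, on a much smaller sample.
    X_corr = sample_h(N_corr)
    alpha_corr = np.mean((g(X_corr) <= 0.0) / pi(X_corr))

    # Meta-IS estimate of the failure probability.
    return pf_eps * alpha_corr
```

The key point is that the expensive function g is only called N_corr times, while the cheap emulator absorbs the bulk of the sampling effort.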

This paper is not only concerned with the evaluation of the failure probability in Eq. (1) for a single value of the parameters d, but also with the analysis of its sensitivity with respect to this vector. Within the structural reliability community, this type of analysis is referred to as reliability sensitivity analysis [20], [21]. It provides important insight into system failure for risk-based decision making (e.g. robust control, design or reliability-based design optimization).

However, it was recalled above that the accurate estimation of a single value of the failure probability is already computationally costly. Hence, assessing the sensitivity of the failure probability by means of repeated reliability analyses is not affordable. Starting from this premise, Au [22] proposed to consider the parameters d as artificially uncertain, and then to use a conditional sampling technique in order to assess reliability sensitivity within a single simulation. The author concludes that this approach remains efficient for up to 2–3 parameters in d. Based on a similar idea, Taflanidis and Beck [23] developed an algorithm which enables the identification of a reduced set of the parameters d that minimizes the failure probability. It is applied to the robust control of the dynamic behavior of structural systems.

Here, the objective is to obtain a more local measure of the influence of d on the failure probability through the calculation of the gradient of the failure probability. This quantity then enables the use of gradient-based nonlinear constrained optimization algorithms for solving the reliability-based design optimization problem [24], [25], [26]. This topic has already received significant interest. For instance, Bjerager and Krenk [27] differentiated the Hasofer–Lind reliability index, which in turn enables the calculation of the gradient of the failure probability. However, FORM may suffer from incorrect assumptions (namely, the linearity of the limit-state surface in the standard space and the uniqueness of the most probable failure point) that are hard to check in practice. Valdebenito and Schuëller [28] proposed a parametric approximation of the failure probability which then enables its differentiation. However, the accuracy of this approach is conditional on the ability of the proposed parametric model to fit the actual failure probability function.

The score function approach that is used here was initially proposed by Rubinstein [5] (see also [1, Chapter 7]). It features the double advantage that (i) it is a simple post-processing of a sampling-based reliability analysis (i.e. it does not require any additional calls to the limit-state function) and (ii) it can be applied to any Monte-Carlo-sampling-based technique (either on the actual limit-state function or on a surrogate). For instance, it has already been applied to importance sampling [1] and subset simulation [3].
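As a concrete illustration of this score function trick (a minimal sketch applied to crude Monte Carlo on the hypothetical g used above, with d taken as the means of independent Gaussian inputs; this is not the paper's meta-IS implementation), the gradient of pf is estimated from the very same failed samples as the probability itself:

```python
import numpy as np

rng = np.random.default_rng(3)

def g(x):
    """Hypothetical limit-state function: failure when g(x) <= 0."""
    return 3.0 - x[:, 0] - x[:, 1]

# Probabilistic model: X_j ~ N(d_j, sigma_j^2); d gathers the design parameters (means).
d = np.array([0.5, 0.5])
sigma = np.array([1.0, 1.0])

N = 200_000
X = d + sigma * rng.normal(size=(N, 2))
failed = (g(X) <= 0.0)

pf_hat = failed.mean()                      # failure probability estimate

# Score function of a Gaussian density with respect to its mean:
#   d ln f_X(x | d) / d d_j = (x_j - d_j) / sigma_j^2
score = (X - d) / sigma**2

# Gradient of pf (Rubinstein [5]): expectation of the score over the failed samples.
grad_pf = (failed[:, None] * score).mean(axis=0)

print(f"pf ~ {pf_hat:.3e}, grad pf ~ {grad_pf}")
```

No additional call to g is needed beyond the N evaluations already used for pf itself, which is the main appeal of the approach.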

The goal of this paper is to show how the score function approach may be coupled with meta-model-based importance sampling for efficient reliability sensitivity analysis. The paper is divided into three sections. First, the basics of meta-model-based importance sampling are briefly summarized. Then, the score function approach is applied to the proposed estimator of the failure probability and the whole approach is eventually tested on three structural reliability examples.

Section snippets

Meta-model-based importance sampling for reliability analysis

This section recalls the basics of meta-model-based importance sampling. For a more exhaustive introduction, the interested reader is referred to the papers by Dubourg et al. [18], [19] and the Ph.D. thesis of Dubourg [29].

Meta-model-based importance sampling for reliability sensitivity analysis

This section deals with the calculation of the partial derivatives of the failure probability with respect to the parameters d in the probabilistic model using meta-model-based importance sampling together with the score function approach.

Application examples

The meta-model-based importance sampling procedure and the score function approach are now applied to a number of structural reliability examples for the sake of validation and illustration. The results are compared to those obtained by FORM [27] or subset simulation [9], [3]. Some reliability sensitivity results are compared here in terms of the so-called elasticities (see e.g. [42]). The elasticity of the failure probability with respect to a non-zero parameter d is defined as follows:

$$e_d = \frac{\partial p_f}{\partial d} \, \frac{d}{p_f}.$$
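Given estimates of pf and of its gradient (for instance from the score-function sketch above), the elasticities are obtained as a trivial post-processing step; a minimal sketch (hypothetical function name) is:

```python
import numpy as np

def elasticities(d, grad_pf, pf):
    """Elasticity of the failure probability with respect to each non-zero
    parameter d_j: e_{d_j} = (d_j / pf) * dpf/dd_j."""
    return np.asarray(d) * np.asarray(grad_pf) / pf
```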

Conclusion

The score function approach proposed by Rubinstein [5] proves to be an efficient tool for reliability sensitivity analysis. First, using an importance-sampling-like trick, he showed that the gradient of the failure probability turns out to be the expectation of the gradient of the log-likelihood of the failed samples with respect to the distribution of the random vector X. It means that reliability sensitivities can readily be obtained after a Monte-Carlo-sampling-based reliability analysis at a negligible additional computational cost.

References (44)

  • V. Dubourg et al.

    Metamodel-based importance sampling for structural reliability analysis

    Prob Eng Mech

    (2013)
  • S.K. Au

    Reliability-based design sensitivity by efficient simulation

    Comput Struct

    (2005)
  • A. Taflanidis et al.

    Stochastic subset optimization for reliability optimization and sensitivity analysis in system design

    Comput Struct

    (2009)
  • M. Valdebenito et al.

    Efficient strategies for reliability-based optimization involving non-linear, dynamical structures

    Comput Struct

    (2011)
  • S.K. Au

    Augmenting approximate solutions for consistent reliability analysis

    Prob Eng Mech

    (2007)
  • F. Miao et al.

    Modified subset simulation method for reliability analysis of structural systems

    Struct Saf

    (2011)
  • R. Lebrun et al.

    An innovating analysis of the Nataf transformation from the copula viewpoint

    Prob Eng Mech

    (2009)
  • R. Lebrun et al.

    Do Rosenblatt and Nataf isoprobabilistic transformations really differ?

    Prob Eng Mech

    (2009)
  • R. Rubinstein et al.

    Simulation and the Monte Carlo method

    (2008)
  • P. Bjerager

    Probability integration by directional simulation

    J Eng Mech

    (1988)
  • Hurtado JE, Alvarez DA. Reliability assessment of structural systems using neural networks. In: Proceedings of the...
  • J. Hurtado

    Structural reliability – statistical learning perspectives

    (2004)