A sampling-based approach for probabilistic design with random fields

https://doi.org/10.1016/j.cma.2009.07.003

Abstract

An original technique to incorporate random fields non-intrusively in probabilistic design is presented. The approach is based on the extraction of the main features of a random field using a limited number of experimental observations (snapshots). An approximation of the random field is obtained using proper orthogonal decomposition (POD). For a given failure criterion, an explicit limit state function (LSF) in terms of the coefficients of the POD expansion is obtained using a support vector machine (SVM). An adaptive sampling technique is used to generate samples and update the approximated LSF. The coefficients of the orthogonal decomposition are considered as random variables with distributions determined from the snapshots. Based on these distributions and the explicit LSF, the approach allows for an efficient assessment of the probabilities of failure. In addition, the construction of an explicit LSF has the advantage of handling discontinuous responses. Two test problems are used to demonstrate the proposed methodology for the calculation of probabilities of failure. The first example involves the linear buckling of an arch structure whose thickness is a random field. The second problem concerns the impact of a tube on a rigid wall; the planarity of the walls of the tube is considered as a random field.

Introduction

Despite substantial research efforts and recent improvements, probabilistic design still faces major challenges. First, it is well known that the initial assumptions for the representation and quantification of uncertainties are of prime importance. For instance, a problem with spatial variability (e.g., the thickness distribution of a metal sheet) should be described with random fields, as they provide a more realistic representation than uncorrelated random variables. These assumptions are as important as the process used to propagate uncertainties. Second, in a simulation-based context, the nature of the problem might severely restrict the use of traditional algorithms. Of particular interest are problems with non-smooth or discontinuous responses, prohibitive computational costs, or disjoint failure regions. Computational design for crashworthiness is an example that encompasses all of these difficulties.

The probabilistic design literature is mostly dominated by approaches and applications in which uncertainties are quantified as independent random variables. Techniques such as Monte Carlo simulations (MCS) and first- and second-order reliability methods (FORM and SORM) [1], [2] are used to perform reliability assessment based on assumed probability density functions (PDFs). These reliability assessment techniques are also embedded within optimization problems to carry out reliability-based design optimization (RBDO) [3], [4], [5]. Many studies have also been dedicated to reducing the computational cost of these reliability assessment and RBDO techniques; approaches based on designs of experiments (DOE) and surrogate models (response surfaces and metamodels [6], [7]) are common.

Recently, the authors have introduced the notion of explicit design space decomposition [8], [9], [10], whereby the LSFs are constructed explicitly in terms of the design variables. The LSF construction is based on an SVM, which allows one to define the boundaries of failure regions that can be disjoint and non-convex. The approach allows for a straightforward calculation of the probability of failure using MCS. In addition, because this technique does not approximate responses but rather classifies them as failed or safe, it naturally handles discontinuities. The construction of explicit LSFs is complemented by an adaptive sampling scheme that minimizes the number of function evaluations while refining the LSF approximation [10]. Therefore, the explicit design space decomposition technique is aimed at handling the difficulties due to discontinuities, complex failure domains, and computational costs.

In the case of random fields, which is the focus of this article, the literature revolves around stochastic finite elements (SFE). SFE enable the propagation of uncertainties to obtain the distribution of the system’s responses using polynomial chaos expansion (PCE) [11]. The random field is represented by approximating it with a Karhunen–Loeve expansion [11]. The coefficients of the expansion are considered as random variables, and the response is expanded on a specific polynomial basis (Hermite, Legendre, etc.) chosen according to the assumed types of probability distributions.
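
For reference, a truncated Karhunen–Loeve expansion of a random field takes the standard form below (generic notation, given only as a reminder and not reproduced from [11]):

    S(X, \theta) \approx \bar{S}(X) + \sum_{i=1}^{n} \sqrt{\lambda_i}\, \xi_i(\theta)\, \phi_i(X)

where \bar{S}(X) is the mean field, \lambda_i and \phi_i(X) are the eigenvalues and eigenfunctions of the covariance kernel of the field, and the \xi_i(\theta) are uncorrelated random variables carrying the randomness.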

Several implementations of SFE are available in the literature. The early approaches required the modification of the equilibrium equations to account for the uncertainty in the stiffness matrix and the load vector [11]. This approach is by construction highly intrusive and requires dedicated codes. More recent methods overcome this limitation and allow for the determination of the PCE coefficients using deterministic “black-box” function evaluations (e.g., finite element analyses). Therefore, these methods can be used with available commercial simulation packages without modifying the code [12], [13], [14].

Studies with SFE typically assume a prior distribution of the random field. However, in practical situations, such as a random field generated by a manufacturing process, the characteristics of the random field are not known a priori. Therefore, the only way to characterize a random field with a certain level of confidence is from experimental observations. Another limitation of existing approaches is that the expansion of responses on a polynomial basis hampers the use of PCE for problems with discontinuities.

In this article, an alternative non-intrusive approach is proposed, which provides a combined solution to the difficulties of realistically representing random fields, handling discontinuous responses, and efficiently calculating a probability of failure. This is achieved by combining the explicit design space decomposition approach with a proper orthogonal decomposition (POD) for the characterization of random fields.

Based on a limited number of observations, referred to as snapshots, POD is used to extract the important features of a random field in the form of eigenvectors of its covariance matrix [15]. The eigenvalues provide an indication of the importance of the corresponding features, thus allowing one to gauge their individual contributions to the random field. This technique is similar to the one found in pattern recognition [16].
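
In this setting, the field is approximated on the reduced basis formed by the first n eigenvectors; written with generic notation (the symbols below are used for illustration and may differ from those of the paper):

    \hat{S}(X) = \bar{S}(X) + \sum_{i=1}^{n} \alpha_i\, \phi_i(X)

where \bar{S}(X) is the mean of the snapshots, \phi_i(X) are the retained eigenvectors of the covariance matrix, and \alpha_i the corresponding coefficients. A common practice is to choose n so that the retained eigenvalues account for a prescribed fraction of the total variance, \sum_{i=1}^{n} \lambda_i / \sum_{i=1}^{N} \lambda_i \geq \eta.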

Once the random field is characterized with the important features, the corresponding eigenvectors form a basis that is used to generate various random field configurations. This is required for design purposes as an initial set of experimental snapshots may not be sufficient. The random field is modified by varying the coefficients of the eigenvectors in the POD expansion. For this purpose, the response of the system is studied with a DOE [17], [18] with respect to the coefficients of the expansion. At this stage, the actual PDFs of the coefficients are not considered, and they are assumed uniformly distributed. This is done in order to extract as much information as possible over the whole design space.
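
As an illustration of this step, the sketch below draws coefficient samples uniformly within given bounds (Latin hypercube sampling is used here as a stand-in for the DOE of [17], [18]) and reconstructs the corresponding field instances; the variable names and the use of NumPy/SciPy are assumptions:

    import numpy as np
    from scipy.stats import qmc  # Latin hypercube sampling, used here as a generic space-filling DOE

    def sample_field_instances(mean_field, basis, coeff_bounds, n_samples, seed=0):
        """Generate random field instances S = mean + sum_i alpha_i * phi_i for
        coefficients alpha drawn uniformly within the given bounds.

        mean_field   : (N,) mean of the snapshots at the N measurement points
        basis        : (N, n) retained POD eigenvectors (one column per mode)
        coeff_bounds : (n, 2) lower/upper bound for each coefficient
        """
        n_modes = basis.shape[1]
        sampler = qmc.LatinHypercube(d=n_modes, seed=seed)
        unit = sampler.random(n_samples)                               # samples in [0, 1]^n
        alphas = qmc.scale(unit, coeff_bounds[:, 0], coeff_bounds[:, 1])
        fields = mean_field + alphas @ basis.T                         # (n_samples, N) field instances
        return alphas, fields

Each row of fields would then be fed to the deterministic solver (e.g., a finite element model) to obtain the corresponding response.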

The responses, generated for each sample of the DOE, are classified into failure and non-failure using a threshold value or a clustering technique such as K-means [19]. Clustering is used in the case of discontinuous responses. These two classes are then separated in the design space using an (explicit) SVM LSF [9]. In addition, in order to refine the LSF using a limited number of samples, an adaptive sampling technique is used [10]. The sampling strategy is based on the generation of samples that maximize the probability of misclassification of the SVM while avoiding redundancy of information.
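
A minimal sketch of this classification and SVM training step is given below, using scikit-learn; the kernel, its parameters, and the two-cluster labeling rule are assumptions made for illustration, not the settings of [9], [10]:

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def train_svm_lsf(alphas, responses, threshold=None):
        """Label the DOE samples as failed (-1) or safe (+1) and fit an explicit SVM boundary.

        alphas    : (n_samples, n) POD coefficients of each DOE sample
        responses : (n_samples,) scalar system responses (e.g., critical load factor)
        threshold : if given, label by comparison with the threshold; otherwise split
                    the responses into two clusters with K-means, which is useful
                    when the response is discontinuous.
        """
        if threshold is not None:
            labels = np.where(responses < threshold, -1, 1)
        else:
            clusters = KMeans(n_clusters=2, n_init=10).fit_predict(responses.reshape(-1, 1))
            failed_cluster = clusters[np.argmin(responses)]            # cluster containing the lowest response
            labels = np.where(clusters == failed_cluster, -1, 1)
        svm = SVC(kernel="rbf", C=1e3).fit(alphas, labels)             # sign of decision_function = predicted class
        return svm, labels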

The coefficients of the POD expansion are random variables whose distributions, obtained from the snapshots, are found through basic PDF fitting techniques. A similar approach was used in [20] for the probabilistic design of turbine engine blades, using a POD expansion of random blade geometries.
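
A basic sketch of this fitting step is given below using SciPy; projecting the snapshots onto the basis and fitting a normal distribution to each coefficient is an assumption made for illustration, and other distribution families could be tested in the same way:

    import numpy as np
    from scipy import stats

    def fit_coefficient_pdfs(snapshots, mean_field, basis):
        """Project the snapshots onto the POD basis and fit a PDF to each coefficient.

        snapshots : (M, N) matrix of M snapshots measured at N points
        Returns one frozen scipy distribution per retained mode.
        """
        coeffs = (snapshots - mean_field) @ basis                      # (M, n) coefficients of the snapshots
        pdfs = []
        for j in range(coeffs.shape[1]):
            mu, sigma = stats.norm.fit(coeffs[:, j])                   # simple normal fit per coefficient
            pdfs.append(stats.norm(loc=mu, scale=sigma))
        return pdfs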

In the proposed approach, probabilities of failure are efficiently calculated using MCS. This simplicity, which constitutes the novelty of the proposed approach, stems from the fact that the limit state function is defined explicitly in terms of the coefficients of the POD. As mentioned previously, the accuracy of the limit state function is improved through adaptive sampling [10], which limits the number of function evaluations.
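
With an explicit LSF s(\alpha), the Monte Carlo estimate of the probability of failure reduces to counting the samples that fall on the failure side of the boundary (standard estimator, written with generic notation; the sign convention for failure is arbitrary):

    P_f \approx \frac{1}{N_{MC}} \sum_{k=1}^{N_{MC}} I\left[ s\left(\alpha^{(k)}\right) < 0 \right]

where the \alpha^{(k)} are drawn from the fitted coefficient distributions and I[\cdot] is the indicator function.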

The proposed approach for the calculation of probabilities of failure is applied to two problems. The first problem consists of a three-dimensional arch structure whose thickness is considered as a random field. A failure criterion is defined based on a threshold value of the critical load factor for linear buckling. The second problem involves the impact of a tube on a rigid wall. The planarity of the tube walls is modified by a random field, which leads either to global buckling of the tube (considered as failure) or to crushing.

Section snippets

Summary of the proposed approach

For the sake of clarity, this section summarizes the main steps of the approach, which are described in detail in the following sections. The stages of the approach are (Fig. 1):

  • Collection of snapshots and construction of the covariance matrix.

  • Selection of the main features of the random field.

  • Expansion of the field on a reduced basis. Sampling of the coefficients using a uniform design of experiments (DOE).

  • Construction of an explicit LSF using SVM in the space of coefficients.

  • Refinement of the LSF through adaptive sampling.

Data collection and covariance matrix

The first step in the characterization of a random field is the collection of several observations of the random process output (e.g., a metal sheet after forming). The process generates a scalar random field S(X), function of the position X. M samples, outputs of this process, are obtained. On each sample, N measurements are performed at distinct positions. An example of observations, referred to as snapshots, is provided in Fig. 2. The snapshots can be condensed in the following matrix:

    \mathbf{S} = \begin{bmatrix} S_{11} & \cdots & S_{1N} \\ \vdots & \ddots & \vdots \\ S_{M1} & \cdots & S_{MN} \end{bmatrix}

where S_{ij} denotes the j-th measurement on the i-th sample.
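
A sketch of this step and of the subsequent feature extraction is given below: the snapshot matrix is centered, the sample covariance is formed, and its dominant eigenvectors are retained. The truncation criterion (retained fraction of the total variance) and all names are assumptions for illustration:

    import numpy as np

    def pod_from_snapshots(S, energy=0.99):
        """Compute a truncated POD basis from a snapshot matrix.

        S      : (M, N) matrix of M snapshots, each with N measurements
        energy : fraction of the total variance retained by the basis
        """
        mean_field = S.mean(axis=0)
        centered = S - mean_field
        cov = centered.T @ centered / (S.shape[0] - 1)                 # (N, N) sample covariance matrix
        eigvals, eigvecs = np.linalg.eigh(cov)                         # eigenvalues in ascending order
        eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]             # reorder to descending importance
        ratios = np.cumsum(eigvals) / eigvals.sum()
        n = int(np.searchsorted(ratios, energy)) + 1                   # smallest n reaching the energy target
        return mean_field, eigvecs[:, :n], eigvals[:n]

For a large number of measurement points N, the same basis can be obtained more economically from a singular value decomposition of the centered snapshot matrix.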

Sampling-based coefficient selection and response estimation

The characterization of the random field and of the coefficient distributions is accomplished using the data from the snapshots. However, the mere characterization of the random field is not sufficient to account for uncertainties in the design process. For this purpose, several instances of random fields (other than the snapshots) are created by using different linear combinations of the eigenvectors.

The combinations are defined by selecting the POD coefficients using a DOE. The bounds of the DOE are

Explicit limit state function

SVM is a classification tool that belongs to the class of machine learning techniques. The main feature of SVM lies in its ability to explicitly define complex decision functions that optimally separate two classes of data samples. Thus, once the coefficient samples are categorized into two classes, SVM can provide an explicit decision function (the limit state function) separating the distinct classes. The equation of the SVM LSF is obtained by setting the quantity s in Eq. (13) to zero.
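
For reference, the general kernel form of an SVM decision function is recalled below (standard notation; Eq. (13) of the paper is not reproduced here):

    s(\alpha) = b + \sum_{i=1}^{N_{SV}} \lambda_i\, y_i\, K(\alpha_i, \alpha)

where the \alpha_i are the support vectors, y_i = \pm 1 their class labels, \lambda_i the Lagrange multipliers, K the kernel function, and b the bias. The explicit LSF is the zero-level contour s(\alpha) = 0, and the predicted class of a new sample is the sign of s.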

Probability of failure calculation – MCS

The explicit LSF allows one to efficiently calculate the probability of failure using MCS. The PDFs of the coefficients, as found in Section 3.3, are used to generate Monte-Carlo samples. Predicting failure or non-failure at these samples involves calculating the sign of the analytical expression of the LSF (Eq. (13)). An example of calculation of the probability of failure is depicted in Fig. 6.
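
A minimal sketch of this Monte Carlo step is shown below, reusing a trained SVM classifier and the fitted coefficient distributions from the earlier sketches; the convention that a negative decision function value indicates failure is an assumption:

    import numpy as np

    def probability_of_failure(svm, pdfs, n_mc=100_000, seed=0):
        """Monte Carlo estimate of the probability of failure from an explicit SVM LSF.

        svm  : trained classifier exposing decision_function (e.g., sklearn SVC)
        pdfs : list of frozen scipy distributions, one per POD coefficient
        """
        rng = np.random.default_rng(seed)
        samples = np.column_stack([p.rvs(size=n_mc, random_state=rng) for p in pdfs])
        s = svm.decision_function(samples)                             # evaluate the explicit LSF
        return np.mean(s < 0.0)                                        # fraction of samples in the failure region

Because each evaluation is only an analytical expression, very large Monte Carlo sample sizes remain inexpensive compared to calls to the finite element model.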

Linear buckling of an arch structure

This section provides an example of the effect of a random field on the critical load factor of an arch structure. The structure is subjected to a unit pressure load on the top surface. The thickness of the arch should ideally be constant over the entire surface; however, it may vary due to uncertainties in the manufacturing processes. These variations are represented, for this study, by an artificial analytical random field (as opposed to real experimental data). The arch has a radius of R = 200

Conclusion

A technique for reliability assessment with random fields is proposed. A new sampling-based method is used for constructing various potential random field configurations. The method overcomes the need for a priori assumptions on the random field distribution by using experimental data and proper orthogonal decomposition. In addition, the SVM-based construction of explicit LSFs enables one to address discontinuous system responses, as successfully demonstrated on the tube impact problem.

Acknowledgement

The support of the National Science Foundation (Award CMMI-0800117) is gratefully acknowledged.

References (25)

  • D. Huang et al., Sequential kriging optimization using multiple-fidelity evaluations, Struct. Multidiscip. Opt. (2006)
  • B.J. Bichon, M.S. Eldred, L.P. Swiler, S. Mahadevan, J.M. McFarland, Multimodal reliability assessment for complex...