40 years FORM: Some new aspects?

https://doi.org/10.1016/j.probengmech.2015.09.012

Abstract

Forty years ago Hasofer and Lind wrote their seminal paper [13] about FORM, in which they described an algorithm for finding the beta point. This algorithm, generalized in 1978 by Rackwitz and Fiessler [23] to include non-normal random variables, is known as the Hasofer–Lind–Rackwitz–Fiessler (HL–RF) algorithm and is still an important tool for reliability calculations. Here its relation to standard numerical optimization is explained. Further, a simple method for computing the SORM factor is given, and the connection of FORM/SORM with dimension-reduction concepts is outlined.

Section snippets

FORM and the HL–RF algorithm

Given a continuous LSF (limit state function) $g(u_1,\dots,u_n)$ in the standard normal space, i.e. the $n$-dimensional Euclidean space with the PDF (probability density function)
$$f(u_1,\dots,u_n)=(2\pi)^{-n/2}\exp\Big(-\sum_{i=1}^n u_i^2/2\Big)=(2\pi)^{-n/2}\exp(-|u|^2/2).$$
The idea of FORM was to approximate the failure domain $F=\{u;\,g(u)<0\}$ by a halfspace. This halfspace was obtained by linearizing the LSF at the point $u^*$ where the limit state surface $G=\{u;\,g(u)=0\}$ has minimal distance to the origin, which means that the PDF is maximal there, …
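
As an illustration, the classical HL–RF iteration can be written in a few lines. The following is a minimal sketch, not code from the paper: the function names and starting point are illustrative assumptions, and the linear LSF is borrowed from the example section further below.

    import numpy as np
    from scipy.stats import norm

    def hl_rf(g, grad_g, u0, tol=1e-8, max_iter=100):
        # Classical HL-RF iteration for the beta point in standard normal space.
        u = np.asarray(u0, dtype=float)
        for _ in range(max_iter):
            grad = grad_g(u)
            # Next iterate: the point on the linearized limit state surface
            # g(u_k) + grad^T (u - u_k) = 0 lying along the gradient direction.
            u_new = (grad @ u - g(u)) / (grad @ grad) * grad
            if np.linalg.norm(u_new - u) < tol:
                return u_new, np.linalg.norm(u_new)
            u = u_new
        return u, np.linalg.norm(u)

    # Linear LSF from the example section: g(u) = 2 - 2*u1 - u2
    g = lambda u: 2.0 - 2.0 * u[0] - u[1]
    grad_g = lambda u: np.array([-2.0, -1.0, 0.0])

    u_star, beta = hl_rf(g, grad_g, u0=np.zeros(3))
    print(u_star, beta, norm.cdf(-beta))  # FORM: Pr(F) ~ Phi(-beta)

For this linear LSF the iteration converges in one step to $u^*=(0.8,0.4,0)$ with $\beta=2/\sqrt{5}$.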

Numerical optimization methods

Most deterministic minimization methods for differentiable functions are line search or trust region methods [20, Chapters 3 and 4]. Here only the former will be considered. Line search means that for finding the minimum of a function $f(x)$ a sequence $x_k$ of points, which should converge towards a minimum, is calculated iteratively in the following way:
$$x_{k+1}=x_k-\alpha_k H_k^{-1}\nabla f(x_k).$$
Here $x_k$ is the present iteration point, $\nabla f(x_k)$ the gradient, $H_k$ a symmetric and positive definite matrix and $\alpha_k$ the step length, …
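
As a sketch of one such step, the code below performs the update above with a simple Armijo backtracking rule for $\alpha_k$; the quadratic test function and the choice $H_k=I$ (steepest descent) are illustrative assumptions, not taken from the paper.

    import numpy as np

    def line_search_step(f, grad_f, x, H, c1=1e-4, alpha0=1.0):
        # One step x+ = x - alpha * H^{-1} grad f(x), with H symmetric positive
        # definite and alpha chosen by Armijo backtracking (sufficient decrease).
        g = grad_f(x)
        d = -np.linalg.solve(H, g)  # descent direction
        alpha = alpha0
        while f(x + alpha * d) > f(x) + c1 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        return x + alpha * d

    # Illustrative assumption: quadratic f with minimizer (1, 1), H = identity.
    f = lambda x: float(np.sum((x - 1.0) ** 2))
    grad_f = lambda x: 2.0 * (x - 1.0)
    x = np.zeros(2)
    for _ in range(50):
        x = line_search_step(f, grad_f, x, H=np.eye(2))
    print(x)  # approaches (1, 1)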

Modified HL–RF algorithm

Due to convergence problems with the original HL–RF algorithm, modifications were developed to overcome these deficiencies. In fact, as will be shown now, these modified approaches are variants of an older method, described in the following.

In 1970 Pshenichnyj [21] (see also [22]) proposed a method for constrained minimization. He considered the linearizations of the target function $f$ and the constraint function $g$:
$$f(x)\approx f(x_0)+\nabla f(x_0)^T(x-x_0),\qquad g(x)\approx g(x_0)+\nabla g(x_0)^T(x-x_0).$$
Since in general this optimization …
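
If the target function is taken to be $f(u)=\tfrac12|u|^2$, as in the beta-point problem, the linearized subproblem has a closed-form solution; a short derivation shows that this solution is exactly the HL–RF step:
$$\min_u \tfrac12|u|^2\quad\text{s.t.}\quad g(u_k)+\nabla g(u_k)^T(u-u_k)=0.$$
Stationarity of the Lagrangian gives $u=\lambda\,\nabla g(u_k)$ for a multiplier $\lambda$; inserting this into the linearized constraint yields
$$\lambda=\frac{\nabla g(u_k)^T u_k-g(u_k)}{|\nabla g(u_k)|^2},\qquad u_{k+1}=\lambda\,\nabla g(u_k),$$
which is the classical HL–RF update.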

The SORM factor

The FORM and SORM approximations for the failure probability are
$$\text{FORM:}\quad \Pr(F)\approx\Phi(-\beta),\qquad \text{SORM:}\quad \Pr(F)\approx\prod_{i=1}^{n-1}(1-\beta\kappa_i)^{-1/2}\,\Phi(-\beta).$$
The SORM factor is given in [5] in the following form:
$$\text{SORM factor}=\prod_{i=1}^{n-1}(1-\beta\kappa_i)^{-1/2}.$$
Later, some authors replaced the minus sign before the term $\beta\kappa_i$ by a plus sign and wrote instead $\prod_{i=1}^{n-1}(1+\beta\kappa_i)^{-1/2}$. This leads to confusion and also to wrong results if the meaning of the factor is not clearly understood. To explain the problem, there are two different definitions for …
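
A direct computation of the SORM approximation, in the minus-sign convention of [5], might look as follows. This is only a sketch: the curvatures $\kappa_i$ passed in must follow the same sign convention as the factor, otherwise the result is wrong, exactly the confusion discussed above.

    import numpy as np
    from scipy.stats import norm

    def sorm_probability(beta, kappas):
        # SORM: Pr(F) ~ prod_i (1 - beta*kappa_i)^(-1/2) * Phi(-beta),
        # with the minus-sign convention of [5]. The kappa_i must be the main
        # curvatures defined consistently with this convention; the plus-sign
        # form requires curvatures of the opposite sign.
        kappas = np.asarray(kappas, dtype=float)
        terms = 1.0 - beta * kappas
        if np.any(terms <= 0.0):
            raise ValueError("1 - beta*kappa_i must be positive for all i")
        return np.prod(terms) ** (-0.5) * norm.cdf(-beta)

    # Illustrative values (assumed, not from the paper): beta = 2, two curvatures.
    print(sorm_probability(2.0, [-0.1, 0.05]))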

Dimension reduction and FORM/SORM

A well-known problem in the analysis of the structure of data is to separate the important from the unimportant, i.e. to try to find a simple structure. With the increasing complexity of models it becomes more and more important to carve out a skeleton which describes the essentials of a system as simply as possible without too much loss of information.

Given many variables $X=(X_1,\dots,X_n)$ describing a system by $Y=f(X)$, can a simpler structure with far fewer variables be found without too …

Two examples for dimension reduction

Let a linear LSF of three variables be given:
$$g(u)=g(u_1,u_2,u_3)=2-2u_1-u_2.$$
From a three-dimensional random vector with a standard normal distribution a sample of 1000 realizations is taken. Using the SAVE (sliced average variance estimation) method and slicing the values of the LSF into 10 slices given by $[-1000,-3,-2,-1,0,0.5,1,1.5,2,3,1000]$, the eigenvectors are
$$\begin{pmatrix}-0.193&0.399&0.896\\-0.391&0.807&-0.443\\0.900&0.436&0.001\end{pmatrix}$$
and the eigenvalues
$$\begin{pmatrix}0.003&0&0\\0&0.005&0\\0&0&0.657\end{pmatrix}.$$
So the vector $v_1=(0.9,0.436,0)$ is the eigenvector with the largest eigenvalue, …
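
This construction can be reproduced in a few lines. The following is a minimal sketch under the assumption that the SAVE matrix is formed as $M=\sum_s p_s(I-\Sigma_s)^2$, with $\Sigma_s$ the within-slice sample covariance and $p_s$ the slice weight; a different random sample will of course not reproduce the numbers above exactly.

    import numpy as np

    rng = np.random.default_rng(0)  # assumed seed; the paper draws 1000 samples
    U = rng.standard_normal((1000, 3))
    Y = 2.0 - 2.0 * U[:, 0] - U[:, 1]  # linear LSF of the example

    edges = [-1000, -3, -2, -1, 0, 0.5, 1, 1.5, 2, 3, 1000]  # slice boundaries

    # SAVE matrix: M = sum_s p_s (I - Sigma_s)^2 with the within-slice covariance
    # Sigma_s and the fraction p_s of samples falling into slice s.
    M = np.zeros((3, 3))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (Y >= lo) & (Y < hi)
        if mask.sum() < 2:
            continue
        p = mask.mean()
        Sigma = np.cov(U[mask].T, bias=True)
        D = np.eye(3) - Sigma
        M += p * (D @ D)

    vals, vecs = np.linalg.eigh(M)  # eigenvalues in ascending order
    print(vals)         # one dominant eigenvalue, two near zero
    print(vecs[:, -1])  # ~ +-(0.894, 0.447, 0), the direction of (2, 1, 0)

The dominant eigenvector estimates the single important direction $(2,1,0)/\sqrt{5}$ of the linear LSF, matching $v_1$ above up to sampling error and sign.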

Conclusions

There are still some aspects of FORM/SORM, such as beta-point search methods and dimension-reduction problems, which have not been researched thoroughly. Concerning the search algorithms for the beta point, it might be useful to investigate methods which can achieve superlinear convergence rates and to study whether the increased computational effort pays off.

The computation of the SORM factor should be based on the result in Eq. (21) to avoid problems with the definition of curvature. In publications using …

References (27)

  • K. Breitung, Laplace integrals and structural reliability, …
  • K. Breitung, F. Casciati, L. Faravelli, Minimization and approximate SORM factors, in: C. Guedes Soares, et al. (Eds.), …
  • C. Bucher, et al., Response surfaces for reliability assessment, …
