A one-time truncate and encode multiresolution stochastic framework

https://doi.org/10.1016/j.jcp.2013.08.006

Abstract

In this work, a novel adaptive strategy for stochastic problems, inspired by the classical Harten framework, is presented. The proposed algorithm allows building, in a very general manner, stochastic numerical schemes starting from any type of deterministic scheme, and it handles a large class of problems, from unsteady to discontinuous solutions. Its formulation recovers the results of the interpolation theory underlying the classical multiresolution approach while extending it to uncertainty quantification problems. The present strategy permits building numerical schemes with higher accuracy than other classical uncertainty quantification techniques, together with a strong reduction of the computational cost and memory requirements. Moreover, the flexibility of the proposed approach allows employing any kind of probability density function, even discontinuous and time-varying ones, without introducing further complications into the algorithm. The advantages of the present strategy are demonstrated on several numerical problems where different forms of uncertainty distributions are taken into account, such as discontinuous and unsteady custom-defined probability density functions. In addition to algebraic and ordinary differential equations, numerical results for the challenging 1D Kraichnan–Orszag problem are reported in terms of accuracy and convergence. Finally, a two degree-of-freedom aeroelastic model for a subsonic case is presented. Though quite simple, the model allows recovering some key physical aspects of the fluid/structure interaction thanks to the quasi-steady aerodynamic approximation employed. The uncertainty is injected so as to obtain a complete parameterization of the mass matrix. All the numerical results are compared with a classical Monte Carlo solution and with a non-intrusive Polynomial Chaos method.

Introduction

Nowadays, the predictive capability of numerical simulations is fundamental for the optimization and control of engineering devices. However, estimating the confidence in a numerical simulation remains very challenging. In recent years, a strong effort has been devoted to developing efficient numerical methods that take randomness into account in numerical simulations.

The most popular and best-known method for uncertainty quantification (UQ) is the Monte Carlo method. Its development dates back to research on nuclear devices in the context of the Manhattan project and is due to Fermi, von Neumann and Ulam. The method is based on a stochastic procedure to generate realizations of the solution, from which the statistical moments can be computed. Despite its solid mathematical framework, it represents a very expensive approach for most practical applications because it requires a large number of realizations. Several improved versions of the classical Monte Carlo method have been proposed in the literature to increase the convergence rate, see for instance the recent work presented in [1], but they remain infeasible for complex problems when the evaluation of samples is expensive, as in most engineering problems.
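For concreteness, a minimal sketch of plain Monte Carlo moment estimation is given below; the model function and sampler are hypothetical placeholders, and the slow O(N^{-1/2}) decay of the sampling error of the mean is precisely what makes the method expensive when each sample is a costly solver run.

```python
import numpy as np

def mc_moments(f, sampler, n_samples, seed=0):
    """Plain Monte Carlo estimate of the mean and variance of f(xi).

    f       : deterministic solver evaluated at one realization xi
    sampler : draws one realization of the uncertain input
    """
    rng = np.random.default_rng(seed)
    vals = np.array([f(sampler(rng)) for _ in range(n_samples)])
    # The standard error of the mean decays only as n_samples**-0.5.
    return vals.mean(), vals.var(ddof=1)

# Hypothetical example: f(xi) = xi**2 with xi ~ U(0, 1);
# the exact mean is 1/3 and the exact variance is 4/45.
mean, var = mc_moments(lambda xi: xi ** 2,
                       lambda rng: rng.uniform(0.0, 1.0), 10_000)
```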

One of the most important classes of methods for UQ is based on the Polynomial Chaos (PC) representation. In the original work of Wiener [2], the solution is expanded in a Hermite polynomial basis, the so-called homogeneous chaos expansion. In recent years, Xiu and Karniadakis [3] demonstrated that optimal convergence with respect to non-Gaussian probability distributions can be achieved if the orthogonal bases are chosen following the so-called Wiener–Askey scheme. This leads to the well-known generalized Polynomial Chaos (gPC). Following the procedure introduced by Xiu and Karniadakis, i.e. employing the probability density function as a weight function in the search for the orthogonal basis, an optimal expansion basis can in principle be obtained for any kind of pdf; see [4], [5] for details. In practice, gPC is often used in combination with Galerkin projection techniques [6], following the idea of Ghanem and Spanos [7], who first extended the application of PC in combination with finite elements. gPC is recognized as one of the most efficient techniques thanks to its exponential rate of convergence. However, problems with discontinuities in the random space can lead to slow convergence. Difficulties can similarly be encountered in long-time integration problems [8], where the statistical properties of the solution change in time, causing a progressive loss of efficiency of the polynomial basis. Recently, Gerritsma et al. [9] proposed a time-dependent generalized Polynomial Chaos scheme based on the search for a time-varying optimal polynomial basis.
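For reference, the truncated PC expansion and its Galerkin coefficients take the standard textbook form

$$u(\xi) \approx \sum_{k=0}^{P} \hat{u}_k \, \Phi_k(\xi), \qquad \hat{u}_k = \frac{\langle u, \Phi_k \rangle}{\langle \Phi_k, \Phi_k \rangle}, \qquad \langle f, g \rangle = \int_{\Xi} f(\xi)\, g(\xi)\, p(\xi)\, \mathrm{d}\xi,$$

where the basis $\{\Phi_k\}$ is orthogonal with respect to the pdf $p(\xi)$; the Wiener–Askey scheme matches classical orthogonal polynomial families to classical pdfs (e.g. Hermite/Gaussian, Legendre/uniform).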

Another class of methods for UQ is based on the stochastic collocation (SC) approach [10]. This strategy builds polynomial interpolants, of the same dimensionality as the stochastic space, to approximate the solution. In order to reduce the computational cost for high-dimensional problems, these methods are often coupled with sparse grid techniques. The sparse grid strategy was proposed by Smolyak [11] and allows interpolating the function on a reduced subset of points with respect to the full tensorization set. This strategy mitigates the so-called curse of dimensionality [12], i.e. the exponential growth of the number of points with respect to the number of stochastic dimensions [13], [14].
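As a one-dimensional illustration of the collocation idea (not of the sparse grid machinery itself), the sketch below samples a model function at Gauss–Legendre nodes and integrates against a uniform pdf on [-1, 1]; the choice of nodes and weights is an assumption tied to that pdf.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def collocation_mean(f, n_nodes):
    """1D stochastic collocation: evaluate f at Gauss-Legendre nodes and
    integrate against the uniform density 1/2 on [-1, 1]."""
    nodes, weights = leggauss(n_nodes)
    values = np.array([f(x) for x in nodes])
    return 0.5 * weights @ values

# Hypothetical smooth response: the exact mean of exp(xi), xi ~ U(-1, 1),
# is (e - 1/e)/2; a handful of nodes already reaches near machine accuracy.
approx = collocation_mean(np.exp, 8)
```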

Handling non-smooth behavior in high-dimensional problems remains a very challenging issue; it is not completely solved even for low- or moderate-dimensional problems. In the context of gPC schemes, Wan and Karniadakis introduced an adaptive class of methods for addressing the discontinuity issue by using local basis functions, the multi-element generalized Polynomial Chaos (ME-gPC) [15]. This strategy relies on an adaptive decomposition of the domain on which local bases are employed. In order to treat discontinuous response surfaces, Le Maître et al. applied a multiresolution analysis to Galerkin projection schemes [16], [17]. This class of schemes relies on the projection of the uncertain data onto a multi-wavelet basis consisting of piecewise polynomial (smooth) functions. This approach is shown to be very CPU demanding; consequently, two cures have been explored in the context of adaptive methods: automatically refining the multi-wavelet basis or adaptively partitioning the domain.

More recently, unsteady stochastic problems have been solved by means of multi-element techniques employing the simplex collocation method [18]. For these stochastic collocation methods too, adaptive strategies have been proposed in order to tackle the discontinuity issue. In the work of Agarwal and Aluru [19], an adaptive stochastic collocation method based on recursive splitting of the domain was proposed; in this case the splitting of the domain and the adaptivity are applied directly to the sparse grid basis. A sparse grid collocation strategy based on piecewise multi-linear hierarchical basis functions was adopted by Ma and Zabaras [20] to recover the convergence lost by global polynomial approximations in the presence of discontinuities.

Recently, Abgrall et al. [21], [22], [23] introduced a new class of finite volume schemes capable of dealing with discontinuous problems, in both the physical and stochastic spaces, for shock-dominated flows. The so-called semi-intrusive (SI) scheme exhibits promising results in terms of accuracy and efficiency compared to the more classical Monte Carlo and gPC methods. The idea is to extend to the stochastic space the finite volume representation used in the deterministic scheme. The established framework of reconstruction techniques (ENO/WENO) in finite volume schemes can very easily be employed in the stochastic space with the SI scheme. This approach offers several advantages: extreme flexibility with respect to the form of the pdf (which can be discontinuous and unsteady), an easy implementation, and only a slight modification of the deterministic solver, preserving the number of equations.

The aim of the present work is to provide a framework, inspired by the classical multiresolution representation of Harten [24], capable of recovering the same results as this theory while including new features for the extension to stochastic problems. The proposed algorithm, the Truncate and Encode (TE) strategy, displays very good properties in terms of convergence and efficiency. Moreover, it allows handling an adaptive stochastic mesh in a very general way. This could allow, in the future, a very easy coupling with different kinds of numerical methods, for example SC and SI schemes. While in this work no dependence on the physical space is considered, the long-term objective is to build accurate numerical schemes, for a low or moderate number of uncertainties, capable of dealing with unsteady discontinuous solutions by means of unsteady refinement/derefinement capabilities in both the physical and stochastic spaces.

The approach proposed in the present work is based on a multiresolution concept, as in Le Maître's work. However, the approach differs completely, since no spectral projection is employed here, as explained in the next section. Moreover, the decision to reject a wavelet (equivalent to discarding an interpolation error, as in the original Harten framework) is based only on local tests; this differs from the Galerkin projection approach, where 1D energy estimators along the stochastic dimensions are used. For details on the multiresolution approach applied to Galerkin projection schemes, the reader can refer to the extremely exhaustive reference [6].

The paper is organized as follows. In Section 2, the classical multiresolution framework of Harten is illustrated, and some remarks are made on the analytical results associated with the compression procedure. The algorithm proposed in this work, i.e. the Truncate and Encode (TE) strategy, is presented in Section 3, while the accuracy-preserving time-advancement procedure is presented in Section 4. Differences between the classical MR framework and the present approach are discussed in Section 5. The accuracy and efficiency of the TE algorithm are verified on several numerical experiments in Section 6. In particular, algebraic and ordinary differential equations (both scalar and vectorial) and the 1D Kraichnan–Orszag problem, with uniform and discontinuous unsteady probability distribution functions, are considered. Finally, a two degree-of-freedom aeroelastic model is used to estimate the unsteady statistics of the motion. Note that all the presented results are compared with classical Monte Carlo methods and with a non-intrusive gPC approach. Some concluding remarks and perspectives are reported in Section 7.

Section snippets

Hartenʼs multiresolution framework

In this section, we briefly recall the classical Harten MR framework [25], [24], extended here to the case of a non-uniform measure. Consider a function $u(\xi)$ defined on a domain $\Xi = [0,1]$, and suppose that the values $u^0 = \{u_j^0\}_{j=0}^{N_0}$ are known on a uniform grid defined as
$$G^0 = \{\xi_j^0\}_{j=0}^{N_0}, \qquad \xi_j^0 = j h_0, \qquad h_0 = 1/N_0.$$
This grid is assumed to be the finest one, i.e. the highest resolution level. Now consider the set of nested dyadic grids $G^k$, with $0 \le k \le L$,
$$G^k = \{\xi_j^k\}_{j=0}^{N_k}, \qquad \xi_j^k = j h_k, \qquad h_k = 2^k h_0, \qquad N_k = N_0/2^k,$$
where $k=0$
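As a minimal illustration of the point-value setting, the sketch below encodes a finest-level sample into coarsest values plus per-level details, assuming a centered linear (second-order) prediction operator; the framework of Section 2 admits higher-order stencils and non-uniform measures.

```python
import numpy as np

def encode(u0, n_levels):
    """Harten point-value multiresolution encoding.

    u0 : point values on the finest grid, len(u0) = N0 + 1 with N0 = 2**m.
    Returns the coarsest-level values and, per level, the details, i.e.
    the interpolation errors at the odd points removed when coarsening.
    """
    details = []
    u = np.asarray(u0, dtype=float)
    for _ in range(n_levels):
        coarse = u[::2]                      # even points survive coarsening
        predicted = 0.5 * (coarse[:-1] + coarse[1:])   # linear prediction
        details.append(u[1::2] - predicted)  # detail = interpolation error
        u = coarse
    return u, details
```

Where the function is smooth the details decay with the order of the prediction operator, which is what makes thresholding them a viable compression (and, below, refinement) criterion.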

A one-time truncated-encoding strategy

The aim of this work is to develop a more flexible strategy, in the context of uncertainty quantification for compressible flow problems, in which the classical multiresolution framework of Section 2 is employed as a basis to build a non-intrusive technique for steady and unsteady problems. In Section 4, a procedure extending the present strategy to unsteady problems is presented, introducing a proper time-advancement procedure, and some numerical results will be reported in
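A minimal sketch of the coarse-to-fine idea, under the assumption of a linear prediction operator and a purely local threshold test (the paper's algorithm adds higher-order prediction and conservative safety refinements):

```python
import numpy as np

def interleave(a, b):
    """Merge grids/values as a[0], b[0], a[1], b[1], ..., a[-1]."""
    out = np.empty(len(a) + len(b))
    out[0::2] = a
    out[1::2] = b
    return out

def truncate_and_encode(f, n_levels, eps):
    """Build a fine-grid representation of f on [0, 1] from the coarsest
    dyadic grid, spending evaluations of f only where the local detail
    (prediction error) exceeds the threshold eps."""
    xs = np.linspace(0.0, 1.0, 3)              # coarsest level: 3 points
    us = np.array([f(x) for x in xs])
    active = np.ones(len(xs) - 1, dtype=bool)  # intervals not yet resolved
    n_evals = len(xs)
    for _ in range(n_levels):
        mid_x = 0.5 * (xs[:-1] + xs[1:])
        mid_u = 0.5 * (us[:-1] + us[1:])       # predicted midpoint values
        detail = np.zeros_like(mid_u)
        for i in np.flatnonzero(active):       # f called only where needed
            exact = f(mid_x[i])
            detail[i] = abs(exact - mid_u[i])
            mid_u[i] = exact
            n_evals += 1
        # Truncation: children stay active only if the parent interval was
        # active and its midpoint detail exceeds the threshold.
        active = np.repeat(active & (detail >= eps), 2)
        xs = interleave(xs, mid_x)
        us = interleave(us, mid_u)
    return xs, us, n_evals
```

For a smooth function, n_evals grows far more slowly than the size of the finest grid, while near steep gradients or discontinuities the large details keep the local refinement active.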

An accuracy preserving time-advancement strategy

In this section, we focus on a procedure able to deal with time-dependent probability density functions by employing the TE algorithm.

Some remarks on the difference between the classical and adaptive MR approach

In this section, some differences between the classical approach discussed in Section 2 and the adaptive procedure presented in Section 3 are illustrated. The TE algorithm is an encoding procedure with an embedded truncation capability. The application of the TE strategy produces a multiresolution representation $\hat{u}^M = (\hat{d}^1, \hat{d}^2, \ldots, \hat{d}^L, u^L)$, already truncated to the accuracy governed by the threshold $\varepsilon$. This structure allows reconstructing the solution at the finest level $u^0$ starting from
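To fix ideas, the inverse (decoding) pass consistent with the sketches above, again under the linear-prediction assumption, is shown below; details discarded by the truncation are simply zeros and cost nothing.

```python
import numpy as np

def decode(uL, details):
    """Rebuild finest-level point values from the coarsest values and the
    per-level details (details stored finest-first, as in encode above)."""
    u = np.asarray(uL, dtype=float)
    for d in reversed(details):              # apply coarsest level first
        predicted = 0.5 * (u[:-1] + u[1:])   # same linear prediction
        out = np.empty(2 * len(u) - 1)
        out[0::2] = u                        # coarse points carry over
        out[1::2] = predicted + d            # odd points: prediction + detail
        u = out
    return u
```

With untruncated details, decode exactly inverts encode; with thresholded details, the reconstruction error at the finest level is controlled by the threshold $\varepsilon$.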

Numerical results

In this section, the TE algorithm is applied to several numerical test cases in order to check its accuracy and convergence rate against some classical stochastic methods, such as quasi-Monte Carlo and Polynomial Chaos [6]. For probability distributions not belonging to the so-called Wiener–Askey scheme [3], a PC method is used to evaluate the statistics as in a collocation method (the function is multiplied by the pdf). This corresponds in practice to the computation
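For instance, once a TE pass has produced point values on its adapted grid, the first two moments can be estimated by pdf-weighted quadrature; the trapezoidal rule below is a placeholder for whatever quadrature the reconstruction order warrants, and pdf may be any (even discontinuous) density.

```python
import numpy as np

def _quad(xs, fs):
    """Trapezoidal quadrature on a (possibly non-uniform) grid."""
    dx = np.diff(xs)
    return np.sum(0.5 * dx * (fs[:-1] + fs[1:]))

def moments(xs, us, pdf):
    """Mean and variance of u by pdf-weighted quadrature on the TE grid."""
    w = np.array([pdf(x) for x in xs])
    mean = _quad(xs, us * w)
    var = _quad(xs, (us - mean) ** 2 * w)
    return mean, var
```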

Conclusions

In this work, an innovative adaptive strategy for stochastic problems, the TE algorithm, inspired by the classical Harten framework, has been presented. A representation of the solution on the finest grid is computed starting from the coarsest grid, with a low number of function evaluations: only a reduced set of point values on the finest grid is actually evaluated, while the remaining set is obtained by interpolation (from the previous levels). This procedure moves recursively, with a combination

Acknowledgements

Remi Abgrall has been partially supported by the ERC Advanced Grant ADDECCO No. 226316, while Gianluca Geraci has been fully supported by the ERC Advanced Grant ADDECCO No. 226316.

The authors are very grateful to anonymous referees for the time spent reading and analyzing the manuscript. Many insightful remarks helped the authors to improve the quality and readability of the paper.

References (33)

  • R. Abgrall et al., A semi-intrusive deterministic approach to uncertainty quantification in non-linear fluid flow problems, J. Comput. Phys. (2013)
  • S. Amat et al., Tensor product multiresolution analysis with error control for compact image representation, Signal Process. (2002)
  • N. Wiener, The homogeneous chaos, Am. J. Math. (1938)
  • X. Wan et al., Beyond Wiener–Askey expansions: handling arbitrary PDFs, J. Sci. Comput. (2005)
  • C. Soize et al., Physical systems with random uncertainties: chaos representations with arbitrary probability measure, SIAM J. Sci. Comput. (2004)
  • O. Le Maître et al., Spectral Methods for Uncertainty Quantification: With Applications to Computational Fluid Dynamics (2010)