
Structural Safety

Volume 39, November 2012, Pages 44-51

Quantification of model-form and parametric uncertainty using evidence theory

https://doi.org/10.1016/j.strusafe.2012.08.003

Abstract

It is common that two or more models can be created to predict responses of a physical system. Given a set of physical models, the response predictions might be significantly influenced by model-form uncertainty, which occurs due to the lack of certainty in selecting the true (or at least the best) one from the model set. In this paper, a mathematical methodology is developed to quantify both model-form and parametric uncertainty using expert evidence within evidence theory. Using the belief structure associated with evidence theory, degrees of belief are numerically specified for subsets of a model set. Response predictions supported by the subsets of a model set are integrated into a composite prediction using the disjunctive rule of combination. A nonlinear spring–mass system is utilized to demonstrate the process for implementing the proposed approach. Finally, the applicability of the approach to large-scale engineering problems is investigated through the problem of simulating a laser peening process using different material model theories.

Highlights

▸ A new approach is developed to quantify both model-form and parametric uncertainty using expert evidence within evidence theory.
▸ The constraint that probability theory imposes on assigning model probability is loosened to effectively represent imprecise expert judgments.
▸ Predictions of a model set which involve parametric uncertainty are combined using the disjunctive rule of combination.
▸ The applicability of the proposed approach to engineering problems is investigated by addressing a large-scale simulation problem.

Introduction

Computer simulation plays a vital role in analyzing complex physical phenomena as nonlinear modeling processes advance in the creation of modern products. When representing an identical physical system, different simulation models can be created depending on the sets of assumptions made to simplify the system. Also, a simulation model can vary according to decisions made in the modeling process with regard to the executor's preferences, the decision maker's requirements, or economic considerations. For instance, to analyze an engineering structure, a modeler can build a number of simulation models that differ in element type, geometry, shape function, mesh size, material behavior, expected operating load, linear/nonlinear behavior, or boundary condition. Hence, we may have two or more different simulation models to express an identical system.

The conventional way of addressing the situation where different simulation models are generated is to select, from among a set of possibilities, the single model believed to best describe the system under consideration [1,2]. However, a model selection process is inevitably accompanied by uncertainty that is not captured within the model selected from the set of possibilities [3]. Uncertainty associated with model selection, called model-form uncertainty because it resides in the model form, is due to the lack of confidence in selecting the best one from a given model set. Ignoring model-form uncertainty is problematic because it may lead to underestimation of the variability of predictions or to erroneous predictions.

In the fields of statistics, economics, environmental science, and engineering, Bayesian Model Averaging (BMA) has been extensively and successfully applied to quantify model-form uncertainty associated with the predictions of a model set [4–7]. BMA is a natural way to deal systematically with model-form uncertainty from a Bayesian point of view by basing response predictions on a set of plausible models. BMA requires the assignment of a prior probability to each model to represent model-form uncertainty, based on information available prior to the observation of experimental data, such as accumulated data and engineering expertise. Zio and Apostolakis [8] investigated the formal process of eliciting and interpreting expert judgments to evaluate prior model probability.
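For concreteness, the BMA combination described above can be sketched in a few lines of Python. The model names, prior probabilities, likelihood values, and predictions are purely illustrative assumptions, not values taken from the paper or its references.

# Minimal sketch of Bayesian Model Averaging (BMA): posterior model
# probabilities, obtained from assumed priors and likelihoods via Bayes' rule,
# weight the individual model predictions. All numbers are illustrative.
priors = {"M1": 0.5, "M2": 0.3, "M3": 0.2}          # prior model probabilities
likelihoods = {"M1": 0.8, "M2": 0.5, "M3": 0.1}     # p(data | model), assumed given
predictions = {"M1": 1.10, "M2": 1.25, "M3": 0.95}  # each model's response prediction

evidence = sum(priors[m] * likelihoods[m] for m in priors)
posterior = {m: priors[m] * likelihoods[m] / evidence for m in priors}

# BMA prediction: posterior-probability-weighted average of the model predictions
bma_prediction = sum(posterior[m] * predictions[m] for m in priors)
print(posterior, bma_prediction)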

The quantification of prior model probability using a corpus of expert knowledge is not a tractable task because it is hard to establish a mathematically explicit relation between the information available in expert systems about a model set and prior model probability. To avoid the difficulty of numerically specifying expert judgments, prior model probabilities are often assigned uniform values, as though no information were given. It is maintained that a uniform distribution of probability values cannot be a strict representation of the state of total ignorance [9]; because total ignorance implies that there should be no preference in assigning degrees of belief to the possible propositions, each proposition should receive the same degree of belief. In probability theory, the same degree of belief cannot be assigned to every possible proposition because a probability measure must satisfy the additivity axiom (Pr(θi ∪ θj) = Pr(θi) + Pr(θj) for two disjoint sets θi and θj). For instance, it is impossible that Pr(θ1) = Pr(θ2) = Pr(θ1 ∪ θ2) for two sets θ1 and θ2 with Pr(θ1) ≠ 0 or Pr(θ2) ≠ 0.

Evidence theory, which was initiated by Dempster [10] and developed by Shafer [11], is attractive for modeling imprecise human knowledge because of its relative flexibility. Evidence theory can assign numerical measures of uncertainty to overlapping sets and subsets of propositions as well as individual propositions because evidence theory is not subject to the additivity axiom. Distinct pieces of evidence can be effectively aggregated unless a significant conflict arises between two sources of evidence.
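To make the belief structure concrete, the following Python sketch assigns basic probability masses to subsets of a hypothetical three-model set, computes belief and plausibility for a proposition, and aggregates two assumed expert assignments with Dempster's rule of combination; all masses are illustrative assumptions.

# Frame of discernment: a set of three candidate models (illustrative).
THETA = frozenset({"M1", "M2", "M3"})

def bel(m, A):
    # Belief: total mass committed to subsets of A.
    return sum(v for B, v in m.items() if B <= A)

def pl(m, A):
    # Plausibility: total mass of focal elements that intersect A.
    return sum(v for B, v in m.items() if B & A)

def dempster(m1, m2):
    # Dempster's rule of combination for two BPAs on the same frame.
    combined, conflict = {}, 0.0
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Expert 1: some belief in M1 alone, some in {M1, M2}, the rest left uncommitted.
m1 = {frozenset({"M1"}): 0.4, frozenset({"M1", "M2"}): 0.3, THETA: 0.3}
# Expert 2: belief split between {M2, M3} and the full frame.
m2 = {frozenset({"M2", "M3"}): 0.5, THETA: 0.5}

m12 = dempster(m1, m2)
A = frozenset({"M1"})
print(bel(m12, A), pl(m12, A))   # the interval [Bel, Pl] bounds the support for M1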

Evidence theory has been successfully applied to quantify parametric uncertainty involved in mathematical problems and large-scale engineering applications [12–14]. The uncertainty in input parameters within a model is represented using the mathematical structure of evidence theory, and then is propagated through the model to obtain the corresponding representation of the uncertainty in model predictions.
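A minimal sketch of this propagation step is given below, assuming a hypothetical one-parameter model and illustrative focal intervals; neither the model nor the numbers are taken from the cited applications.

import numpy as np

def model(k):
    # Hypothetical response as a function of an uncertain stiffness-like parameter k.
    return 1.0 / np.sqrt(k)

# Basic probability assignment on the input: focal intervals with their masses.
input_bpa = [((0.8, 1.0), 0.5), ((0.9, 1.2), 0.3), ((0.7, 1.3), 0.2)]

# Propagate each focal interval by bounding the response over the interval
# (dense sampling here; optimization would typically be used in practice).
output_bpa = []
for (lo, hi), mass in input_bpa:
    k = np.linspace(lo, hi, 200)
    y = model(k)
    output_bpa.append(((y.min(), y.max()), mass))

print(output_bpa)   # focal intervals on the response, carrying the same masses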

Until now, no attempt has been made to quantify model-form uncertainty together with parametric uncertainty under evidence theory, even though model-form uncertainty is characterized as epistemic uncertainty stemming from imprecise knowledge, which evidence theory can handle with ease. Using the mathematical structures of evidence theory, a new approach is developed to combine the predictions of a model set involving parametric uncertainty as well as to quantify model-form uncertainty based on expert knowledge systems.

The paper is structured as follows: model-form uncertainty is mathematically specified on a model set based on expert evidence, and distinct pieces of evidence are aggregated using Dempster's rule of combination in Section 2. Predictions of a model set that involve parametric uncertainty are combined using the disjunctive rule of combination in Section 3. Then, the proposed approach is demonstrated using the numerical problem of a nonlinear spring–mass system in Section 4. Finally, the approach is applied to the large-scale engineering simulation of a laser peening process in Section 5.

Section snippets

Representation of model-form uncertainty by belief function

The Bayesian theory of subjective probability involves a restriction that assignment of belief to a model implies assignment of the remaining belief to the other models in a given model set; every member of a model set should be assigned a degree of belief (subjective probability). Evidence theory, known as a generalization of Bayesian probability theory, has the benefit of avoiding that restriction. A mathematical structure furnished by evidence theory allows us to flexibly represent model-form

Combination of response predictions by a model set

In general, given two or more different models with their own probabilities, the predictions from the individual models are averaged using the probability values as weights. This combination rule is not suited to the present research because degrees of belief are assigned to subsets of a model set. The disjunctive rule of combination is successfully utilized to combine the response predictions from the available models, which involve parametric uncertainty. The disjunctive rule
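A Python sketch of the disjunctive rule for two basic probability assignments on the same frame follows; the assignments are illustrative, and the implementation follows the standard definition in which the product of two focal masses is allocated to the union of the corresponding focal elements.

# Disjunctive rule of combination: unlike Dempster's (conjunctive) rule,
# masses are assigned to unions of focal elements, so no conflict term arises.
def disjunctive(m1, m2):
    combined = {}
    for B, v1 in m1.items():
        for C, v2 in m2.items():
            union = B | C
            combined[union] = combined.get(union, 0.0) + v1 * v2
    return combined

# Illustrative BPAs over a three-model frame (not values from the paper).
m1 = {frozenset({"M1"}): 0.6, frozenset({"M1", "M2"}): 0.4}
m2 = {frozenset({"M2"}): 0.7, frozenset({"M1", "M2", "M3"}): 0.3}

print(disjunctive(m1, m2))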

Problem description

The free vibration of a single-degree-of-freedom spring–mass system with a nonlinear spring is represented by μü + g(u) = 0, where μ is the mass and g(u) is a nonlinear spring-force function of the displacement u. Using different spring-force functions, different mathematical models are available to represent the free vibration of the nonlinear system. The three spring-force functions considered for this problem are g1(u) = αu^(1/3), g2(u) = βu + γu³, and g3(u) = δu + ζu/(1 + u²), where α, β, γ, δ, and ζ are the parameters used to specify the
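As a rough illustration of how such a model set can be exercised numerically, the following Python sketch integrates the free-vibration equation for the three spring-force functions; the parameter values and initial conditions are assumed for illustration only and do not reproduce the paper's study.

import numpy as np
from scipy.integrate import solve_ivp

# Free vibration: mu * u'' + g(u) = 0, written as a first-order system.
mu = 1.0
alpha, beta, gamma, delta, zeta = 1.0, 1.0, 0.5, 1.0, 0.5   # illustrative values

g = {
    "M1": lambda u: alpha * np.sign(u) * np.abs(u) ** (1.0 / 3.0),  # g1(u) = alpha*u^(1/3)
    "M2": lambda u: beta * u + gamma * u ** 3,                      # g2(u) = beta*u + gamma*u^3
    "M3": lambda u: delta * u + zeta * u / (1.0 + u ** 2),          # g3(u) = delta*u + zeta*u/(1+u^2)
}

def rhs(t, y, spring):
    u, v = y
    return [v, -spring(u) / mu]

t_eval = np.linspace(0.0, 10.0, 500)
for name, spring in g.items():
    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], t_eval=t_eval, args=(spring,))
    print(name, sol.y[0, -1])   # displacement of each model at the final time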

Problem description

Laser Peening (LP) is an advanced surface enhancement technique that has been shown to increase the fatigue life of metallic components. During the LP process, laser energy is converted into shock waves at the surface that induce compressive residual stresses. A detailed description of the LP process is found in the literature [18,19].

In simulating an LP process, accurate description of the material behavior is a challenging task because of the high strain rates experienced by the material. During a

Summary

A methodology is developed to quantify model-form uncertainty as well as parametric uncertainty associated with response prediction using the mathematical structures of evidence theory. The constraint that probability theory imposes on assigning model probability is loosened to effectively represent imprecise expert judgments. The disjunctive rule of combination is effectively utilized to combine predictions from the individual models comprising a subset of a model set. Because

Acknowledgments

The authors acknowledge the support of this research work through Contract FA8650-04-D-3446, DO #25, sponsored by Wright-Patterson Air Force Base, Ohio, USA.

