
About this Book

This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.

Table of Contents

Frontmatter

Chapter 1. Introduction and Necessary Distinctions

At the beginning of the previous century, it was traditional among engineers and physicists to perform an error analysis of both measurements and computations (Dubbel 1939). The enormous computational capabilities that have since become available enabled the development of complex computer models. These models make use of numerous parameters, various sub-models and large amounts of input data subject to uncertainty that is, however, frequently ignored. One reason in the past was the difficulty, if not impossibility, of performing an uncertainty analysis of the results of computationally demanding models. Combining powerful statistical methods, as described in the following chapters, with today’s computational capabilities opens the door to uncertainty analysis as a standard procedure.
This chapter walks through the developmental stages of a computer model and points out the sources of epistemic uncertainty at each stage. The difference between “epistemic uncertainty” and “aleatoric uncertainty” is explained; this distinction necessitates two different interpretations of “probability” for their quantification. Epistemic uncertainty is quantified using subjective probability. The need to separate epistemic and aleatoric uncertainties arises from the type of assessment question that the model result is to answer. This is explained and illustrated with practical examples.
Eduard Hofer

Chapter 2. STEP 1: Search

Model results are input to decision-making. The uncertainty of a model result is the effect of epistemic uncertainties accumulated on the path from the scenario description via the conceptual model to the mathematical model and its parameters. The latter is turned into a numerical model that is encoded and finally applied together with input data to arrive at the model result.
This chapter illustrates the search for the epistemic uncertainties with examples from each of the developmental stages on the path from the scenario description to the model application.
Eduard Hofer

Chapter 3. STEP 2: Quantify

This is the most important and most laborious analysis step. Its task is to express quantitatively what is known about each of the possibly important uncertainties of the computer model application. In other words, the task is to quantify the so-called state of knowledge. Subjective probability is the appropriate mathematical concept for state of knowledge quantifications. Section 3.1 briefly introduces this concept. Section 3.2 explains distinctions between data and model uncertainties that are important in the state of knowledge quantification. Ways to quantify the state of knowledge for uncertain data are presented in Sect. 3.3 while Sect. 3.4 presents those for model uncertainties. The subjects of Sect. 3.5 are the concept and sources of state of knowledge dependence together with ways of its quantification. The elicitation of the state of knowledge from experts and the corresponding probabilistic modelling are discussed in Sect. 3.6 for data as well as for models. Finally, Sect. 3.7 deals with the question of when and how to conduct a survey of expert opinion in order to arrive at state of knowledge quantifications that are supported by a broad base of expertise.
Eduard Hofer
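As a minimal illustration of the kind of state of knowledge quantification discussed in this chapter (not a procedure taken from the book), the following sketch fits a lognormal subjective probability distribution to a hypothetical expert's 5% and 95% quantiles; the elicited values 0.2 and 5.0 are invented for the example:

```python
import math

Z95 = 1.6449  # standard normal 95% quantile

def lognormal_from_quantiles(q05, q95):
    """Fit lognormal parameters to an expert's 5% and 95% quantiles.

    ln(q05) = mu - Z95 * sigma and ln(q95) = mu + Z95 * sigma,
    so mu and sigma follow directly from the two elicited values.
    """
    mu = (math.log(q05) + math.log(q95)) / 2.0
    sigma = (math.log(q95) - math.log(q05)) / (2.0 * Z95)
    return mu, sigma

# Hypothetical elicitation: the expert assigns 90% subjective
# probability to the parameter lying between 0.2 and 5.0.
mu, sigma = lognormal_from_quantiles(0.2, 5.0)
```

By construction, the fitted distribution reproduces the two elicited quantiles exactly; additional elicited quantiles would over-determine the two parameters and call for a different distribution family or a least-squares fit.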

Chapter 4. STEP 3: Propagate

Step 1 identified the potentially important uncertainties, and Step 2 expressed the corresponding states of knowledge, as well as any relevant state of knowledge dependences, probabilistically. These state of knowledge expressions now need to be propagated through the arithmetic and logic instructions of the computer model in order to obtain the subjective probability distributions that follow, in a logically consistent manner, for the model results. These distributions quantify the combined influence of the uncertainties on the model results and thereby express their state of knowledge. However, they remain unknown, since deriving them analytically is out of the question for most practically relevant models. Uncertainty analysis aims at determining estimates of uncertainty measures for the model results through random sampling according to these distributions. The uncertainty measures are either the subjective probability content of a specified value range of the model result or a value range that contains a specified amount of subjective probability. The sampling techniques commonly used, and explained in this chapter, are simple random sampling (SRS) and Latin Hypercube sampling (LHS). Statistical tolerance limits can be obtained from a relatively small SRS: a sample of size 93 already provides two-sided (95%, 95%) statistical tolerance limits, i.e. an interval that contains at least 95% of the subjective probability at a confidence level of at least 95%.
Other sampling techniques are required if the subjective probability of a specified value range of the model result is extremely small. Among those techniques are importance sampling and subset sampling. In order to explain the principle, a simple practical procedure is presented for both.
Eduard Hofer
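The two sampling schemes named above can be sketched as follows; the toy model and all function names are illustrative stand-ins, not code from the book:

```python
import numpy as np

def simple_random_sample(n, d, rng):
    # SRS: n independent points, uniform on the d-dimensional unit hypercube
    return rng.random((n, d))

def latin_hypercube_sample(n, d, rng):
    # LHS: each of the n equal-probability strata of every dimension
    # is hit exactly once; strata are paired by random permutation
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return u

def propagate(model, sample):
    # push each sampled parameter vector through the model
    return np.array([model(x) for x in sample])

rng = np.random.default_rng(42)
model = lambda x: x[0] + x[1] ** 2      # stand-in for the computer model

srs = simple_random_sample(93, 2, rng)
lhs = latin_hypercube_sample(93, 2, rng)

# With 93 SRS model runs, the sample extremes serve as two-sided
# (95%, 95%) statistical tolerance limits (Wilks' result cited above).
y = propagate(model, srs)
lo, hi = y.min(), y.max()
```

In practice the uniform hypercube coordinates would be mapped through the inverse distribution functions of the quantified states of knowledge before propagation.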

Chapter 5. Step 4: Estimate Uncertainty

Step 3 provided a random sample of values indirectly drawn according to the unknown subjective probability distribution for the model result. This distribution expresses the state of knowledge of the model result. It follows, in a logically consistent way, from the propagation of the state of knowledge quantifications at the level of uncertain parameters, model formulations and input data through the arithmetic and logic instructions of the model. Uncertainty measures are obtained from this subjective probability distribution either as one- or two-sided intervals containing subjective probability u/100, with u generally chosen as 90, 95 or higher, or as the subjective probability for values below or above a specified limit or from a specified range.
Estimates of the above measures are derived from the random samples obtained in Step 3. Computationally demanding models permit only small (≤100) to medium (several hundred) sample sizes. Therefore, one- or two-sided statistical tolerance limits need to be computed. The difference between confidence limits and statistical tolerance limits is explained, and the minimum sample sizes required for specified tolerance and confidence percentages are derived. These sample sizes depend only on the two chosen percentages and not on the number of uncertainties considered in the analysis.
A Latin Hypercube sample is expected to provide estimates with less variability. However, confidence statements are not available unless the estimates from several Latin Hypercube samples are used.
The chapter ends with a collection of graphical ways to present the uncertainty measures.
Eduard Hofer
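The minimum sample sizes mentioned above can be sketched with Wilks' formulas for distribution-free tolerance limits based on the sample extremes (a standard order-statistics result, assumed here rather than quoted from the book):

```python
def one_sided_confidence(n, p):
    # P(sample maximum exceeds the p-quantile) = 1 - p**n
    return 1.0 - p ** n

def two_sided_confidence(n, p):
    # P([min, max] covers at least probability p)
    #   = 1 - p**n - n * (1 - p) * p**(n - 1)
    return 1.0 - p ** n - n * (1.0 - p) * p ** (n - 1)

def min_sample_size(p, beta, confidence):
    # smallest n whose confidence reaches the required level beta
    n = 1
    while confidence(n, p) < beta:
        n += 1
    return n

n_one = min_sample_size(0.95, 0.95, one_sided_confidence)  # 59
n_two = min_sample_size(0.95, 0.95, two_sided_confidence)  # 93
```

Note that neither formula involves the number of uncertain parameters, which is why the required sample size is independent of the number of uncertainties considered.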

Chapter 6. Step 5: Rank Uncertainties

The primary analysis objective is to quantify the uncertainty of the results of the model application. This objective was achieved in Step 4 of the analysis. The message may be that the uncertainty of the model results is small enough so as to be of no concern. If this is the case, the present analysis step may be skipped. The uncertainty may, however, be such that meaningful decision-making is impossible or that the subjective probability for the violation of a limit value is too high. In this case, it will be of vital interest to rank the uncertain parameters, model formulations and input data with respect to their contribution to model result uncertainty. This task is known as “uncertainty importance analysis”. The term “sensitivity analysis”, although frequently used, is misleading as will be shown below.
Uncertainty importance measures from correlation, regression and variance decomposition, using raw or rank-transformed data, are formally derived and discussed with respect to their benefits and shortcomings. The discussion concludes with a table of recommendations for the choice of importance measures in four problem categories. Section 6.4 presents a method for identifying the uncertainties that tend to be responsible for the upper (or lower) quantile values of the subjective probability distribution of a model result. Section 6.5 illustrates, by practical cases, how importance analysis helps to improve the reliability of computer models and to enhance the quality of their results. The graphical presentation of importance rankings is discussed in Sect. 6.6.
Eduard Hofer
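A minimal sketch of correlation-based importance measures on raw and on rank-transformed data, using an invented three-input toy model (the dominant first input should come out on top of the ranking):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000
x = rng.random((n, 3))                              # three uncertain inputs
y = 5.0 * x[:, 0] + x[:, 1] ** 3 + 0.1 * x[:, 2]    # toy "computer model"

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def to_ranks(a):
    # rank transformation: replace each value by its rank 1..n
    return np.argsort(np.argsort(a)) + 1.0

# importance measures: correlation on raw data and on ranks (Spearman)
raw = [pearson(x[:, j], y) for j in range(3)]
rank = [pearson(to_ranks(x[:, j]), to_ranks(y)) for j in range(3)]

# rank inputs by the magnitude of their raw correlation with the result
ranking = sorted(range(3), key=lambda j: -abs(raw[j]))
```

Rank transformation dampens the influence of monotone nonlinearity (here the cubic term), which is one of the benefits and shortcomings weighed in the chapter.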

Chapter 7. Step 6: Present the Analysis and Interpret Its Results

The presentation of the analysis results in publications and in documentation for decision-makers should adhere to the tried and tested structure suggested in this chapter. It is suitable for all interested parties and may differ only in the degree of detail, depending on the intended recipient. Apart from the obvious components, like uncertain data, state of knowledge quantifications, uncertainty estimates and rankings, it starts out with the formulation of the assessment question that is to be answered by the model results and includes a brief description of the model, of the analysis software and of the elicitation process.
To-the-point, scientifically sound verbal formulations of the uncertainty statements as well as of the uncertainty importance rankings are required. The interpretation of the analysis results depends on whether or not the quality of the analysis satisfies certain conditions.
Eduard Hofer

Chapter 8. Practical Execution of the Analysis

The previous chapters looked at the uncertainty analysis from a methodological point of view. Now it is time to consider how its practical execution should be organized and how it might be supported by suitable software.
Following the sequence of analysis steps, it is explained in detail where custom-made or commercially available analysis software can support their practical execution, including the data exchange between the analysis software and the computer model. Four software packages are discussed, and their main features are compared.
Eduard Hofer

Chapter 9. Uncertainty Analysis When Separation of Uncertainties Is Required

This chapter is not about how to do an uncertainty analysis that quantifies aleatoric uncertainty. Instead it explains, for each of the six analysis steps discussed in Chaps. 2–7, how to proceed with the analysis of the epistemic uncertainties if the computer model also operates with aleatoric uncertainties and the combined effect of the latter is the actual model result. Differences from the case of only epistemic uncertainties are explained step by step. In Step 1, the uncertain parameters of the stochastic models, quantifying the aleatoric uncertainties, need to be identified. In Step 2, their state of knowledge is to be quantitatively expressed by subjective probability distributions. The most prominent difference lies in Step 3 (propagate), where the simulation will need to proceed with two nested sampling loops if the analysis of the aleatoric uncertainties is not already taken care of by the computer model. Distribution functions, quantifying the aleatoric uncertainty, and probabilities (in the classical interpretation) that may be read from those distributions are among the model results. In Step 4, it will be necessary to obtain quantitative expressions of their epistemic uncertainty. In Step 5, importance measures will need to be computed for the aleatoric uncertainties in the inner simulation loop and for the epistemic uncertainties in the outer loop. The epistemic uncertainty of the former importance measures will also be of interest. Step 6 then shows ways of presenting the information obtained on both simulation levels.
Eduard Hofer
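The two nested sampling loops of Step 3 can be sketched as follows; the stochastic model, its parameter distributions and the limit value 12.0 are illustrative assumptions, not taken from the book:

```python
import numpy as np

rng = np.random.default_rng(1)
N_EPISTEMIC, N_ALEATORIC = 50, 500

exceedance = []
for _ in range(N_EPISTEMIC):
    # outer loop: draw the uncertain parameters of the stochastic model
    # from their subjective (state of knowledge) distributions
    mean = rng.normal(10.0, 1.0)
    scale = rng.uniform(0.5, 2.0)

    # inner loop: sample the aleatoric variability given those parameters
    outcomes = rng.normal(mean, scale, N_ALEATORIC)

    # one model result per outer iteration: a probability in the
    # classical interpretation, read from the inner-loop distribution
    exceedance.append(np.mean(outcomes > 12.0))

# epistemic uncertainty of the exceedance probability,
# e.g. a two-sided 90% subjective probability interval
lo, hi = np.quantile(exceedance, [0.05, 0.95])
```

Each outer iteration yields one aleatoric distribution function; the spread across outer iterations expresses the epistemic uncertainty of that distribution and of any probability read from it.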

Chapter 10. Practical Examples

First, the application of a population dynamics model is analysed. The evolution of a fish population (anchovy) as well as that of their main natural predators (guano-birds) is simulated over the coming 20 years and for a given management strategy. The uncertainty analysis considers the combined effect of 37 uncertain data. “Harvestable anchovy biomass” and “guano-bird population” are the analysed main results. Their computed uncertainty is presented by statistical tolerance intervals with their lower and upper limit shown as functions over the given time. Three measures of uncertainty importance are computed over this time and are compared. The main uncertainty contributors are identified and presented together with the conclusions drawn from the analysis results.
The second example requires the separation of variability and epistemic uncertainty. A carcinogenic substance was accidentally released many years ago, and it is required to know the percentage of the exposed population with dose value above a given decision threshold. In total, over 3000 uncertain data contribute to the epistemic uncertainty of the computed percentage. The correlation between measured values and measurement errors is estimated, and its state of knowledge is taken into account following the method suggested in Chap. 3. Neglecting this correlation shifts the statistical tolerance intervals for the percentage of individuals with dose above the threshold to higher values. Additionally, the individual dose values and their uncertainty need to be known in order to decide about compensation. Uncertainty importance measures indicate where the state of knowledge needs to be improved.
Eduard Hofer