
1994 | Book

Predictability and Nonlinear Modelling in Natural Sciences and Economics

Edited by: J. Grasman, G. van Straten

Publisher: Springer Netherlands


About this book

Researchers in the natural sciences are faced with problems that require a novel approach to improve the quality of forecasts of processes that are sensitive to environmental conditions. Nonlinearity of a system may significantly complicate the predictability of future states: a small variation of parameters can dramatically change the dynamics, while sensitive dependence on the initial state may severely limit the predictability horizon. Uncertainties also play a role.
This volume addresses such problems with tools from chaos theory and systems theory, adapted for the analysis of problems in the environmental sciences. Sensitive dependence on the initial state (chaos) and on the parameters is analyzed using methods such as Lyapunov exponents and Monte Carlo simulation. Uncertainty in the structure and the parameter values of a model is studied in relation to processes that depend on the environmental conditions. These methods also apply to biology and economics.
For research workers at universities and (semi)governmental institutes for the environment, agriculture, ecology, meteorology and water management, and theoretical economists.

Table of contents

Frontmatter

Introduction

Introduction

Predicting the future behaviour of natural and economic processes is the subject of research in various fields of science. After a period of considerable progress by refining models in combination with large scale computer calculations, the scientific community is presently confronted with problems that require a novel approach to further extend the range of forecasts and to improve their quality. It is recognized that nonlinearity of a system may significantly complicate the predictability of future states of the system. A small variation of parameters can drastically change the dynamics, while sensitive dependence on the initial state may severely limit the predictability horizon.

J. Grasman, G. van Straten

Geophysics

Karl Popper and the Accountability of Scientific Models

Karl Popper has written extensively on the methodology of scientific research. In his “Postscript to the Logic of Scientific Discovery” he formulates the principle of scientific accountability: “Scientific determinism requires the ability to predict every event with any desired degree of precision, provided we are given sufficiently precise initial conditions. Our theory will have to account for the imprecision of the prediction: given the degree of precision which we require of the prediction, the theory will have to enable us to calculate the degree of precision in the initial conditions.” For systems that exhibit sensitive dependence on initial conditions, this demand is nearly impossible to meet. This paper focuses on the potential consequences of insisting on accountability in the development of mathematical models of chaotic systems.

H. Tennekes
Evaluation of Forecasts

Evaluation of forecasts encompasses the processes of assessing both forecast quality and forecast value. These processes necessarily play key roles in any effort to improve forecasting performance or to enhance the usefulness of forecasts.

A framework for forecast verification (the process of assessing forecast quality) based on the joint distribution of forecasts and observations — and on the conditional and marginal distributions derived from factorizations of this joint distribution — is described. The joint, conditional, and marginal distributions relate directly to basic aspects of forecast quality, and evaluation methods based on these distributions — and associated statistics and measures — provide a coherent, diagnostic approach to forecast verification. This approach — and its attendant methodology — is illustrated using a sample of probabilistic long-range weather forecasts.

A decision-analytic approach to the problem of assessing the value of forecasts is outlined, and this approach is illustrated by considering the so-called fallowing-planting problem. In addition to providing estimates of the value of state-of-the-art and hypothetically improved long-range weather forecasts, the results of this case study illustrate some of the fundamental properties of quality/value relationships. These properties include the inherent nonlinearity of such relationships and the existence of quality thresholds below which the forecasts are of no value.

The sufficiency relation is used to explore quality/value relationships; this relation embodies the conditions that must exist between the joint distributions of two forecasting systems to ensure that one system’s forecasts are better in all respects (i.e., in terms of quality and value) than the other system’s forecasts. The applicability of the sufficiency relation is illustrated by comparing forecasting systems that produce prototypical long-range weather forecasts. This application also demonstrates that quality/value reversals can occur when the multifaceted nature of forecast quality is not respected.

Some outstanding problems in forecast evaluation are identified and briefly discussed. Recommendations are made regarding improvements in evaluation methods and practices.

Allan H. Murphy, Martin Ehrendorfer
The Liouville Equation and Prediction of Forecast Skill

The Liouville equation represents the consistent and comprehensive framework for dealing with uncertainty arising in meteorological forecasts due to uncertainty in the initial condition. This equation expresses the conservation of the phase space integral of the number density of realizations of a dynamical system originating at the same time instant from different initial conditions, in a way completely analogous to the continuity equation for mass in fluid mechanics. Its solution describes the temporal evolution of the probability density function of the state vector of a given dynamical model.

The main purposes of this paper are (i) to review the basic form of the Liouville equation, (ii) to present the explicit general solution of the Liouville equation for a large class of dynamical systems in analytical terms, and (iii) to investigate the potential usefulness of the Liouville equation in the context of the prediction of forecast skill.

As an illustration, the general analytical solution of the Liouville equation is used to obtain the solution relevant for a low-dimensional chaotic dynamical system. The information contained in this solution is compared with results obtained by application of the method of ensemble forecasting. It is found that a large number of ensemble integrations is required in order to obtain estimates of statistics, such as means, variances, and covariances, with the accuracy that is obtained by integration of the solution of the Liouville equation over phase space, even in the low-dimensional situation considered.

The paper is concluded with a discussion of the fundamental role of the Liouville equation in dealing with initial state uncertainties in dynamical models, and of the problems that arise in this context. Even though some of these problems may be difficult to deal with in situations more realistic than considered here, the argument is made that the Liouville equation must be considered as an extremely valuable and useful guideline in the process of studying, developing and refining methods for the prediction of forecast skill.
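For readers unfamiliar with the ensemble method the chapter compares against the Liouville solution, a minimal Python sketch may help: it estimates forecast means and covariances by brute-force ensemble integration, with the Lorenz-63 system standing in for the low-dimensional chaotic model. The initial spread, ensemble size and lead time below are illustrative assumptions, not the chapter's settings.

    # Minimal sketch: ensemble estimation of forecast statistics for a
    # low-dimensional chaotic system (Lorenz-63 used here as a stand-in).
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0/3.0):
        x, y, z = u
        return [sigma*(y - x), x*(rho - z) - y, x*y - beta*z]

    rng = np.random.default_rng(0)
    n_members = 1000                 # moment estimates converge slowly with this
    u0 = np.array([1.0, 1.0, 1.0])
    ensemble = u0 + 0.01 * rng.standard_normal((n_members, 3))  # initial uncertainty

    finals = np.array([
        solve_ivp(lorenz, (0.0, 2.0), member, rtol=1e-8, atol=1e-8).y[:, -1]
        for member in ensemble
    ])
    print("ensemble mean:", finals.mean(axis=0))
    print("ensemble covariance:\n", np.cov(finals.T))

The slow convergence of such sample moments with ensemble size is exactly the effect the chapter quantifies against the phase-space integral of the Liouville solution.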

Martin Ehrendorfer
An Improved Formula to Describe Error Growth in Meteorological Models

In meteorological models, the logistic growth law has traditionally been used to describe the error growth due to sensitivity to the initial conditions. A detailed analysis of long range forecasting experiments with a GCM (general circulation model), as well as of simulations based on a simple 3-variable model, has revealed significant deviations from the logistic law. A natural generalization is proposed, giving a law that has been used previously for the description of biological growth. A new characteristic parameter, which can be interpreted as a saturation rate for error growth, is identified. Further studies, based on a simple 3-variable model for different magnitudes of the initial error, reveal a more complex behaviour having a transient initial regime that is independent of the error magnitude, a regime of exponential growth, and a “deceleration regime”. The deceleration regime as defined here includes both the phases of linear and saturated error growth in time. For the case of large initial errors, the vanishing of the exponential regime, as a result of the coalescence of the initial and deceleration regimes, gives a continuous decrease in error growth rate with time, which can be well represented by the Gompertz growth law.
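As a point of reference, the growth laws named in the abstract have the following standard forms. This is a sketch: the chapter's exact parameterization is not reproduced here, and the Richards form is offered as the usual "natural generalization" of the logistic law known from biological growth.

    # Standard forms of the growth laws named in the abstract (a sketch; the
    # chapter's own parameterization may differ). E_inf is the saturation
    # level of the error E, a the growth rate, nu a shape parameter.
    import numpy as np

    def logistic_rate(E, a, E_inf):
        return a * E * (1.0 - E / E_inf)

    def richards_rate(E, a, E_inf, nu):
        # Richards-type generalization; reduces to the logistic law at nu = 1
        return (a / nu) * E * (1.0 - (E / E_inf)**nu)

    def gompertz(t, E0, a, E_inf):
        # Closed-form Gompertz curve, the nu -> 0 limit of the Richards law
        return E_inf * np.exp(np.log(E0 / E_inf) * np.exp(-a * t))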

Jean-François Royer, Rodica Stroe, Michel Déqué, Stéphane Vannitsem
Searching for Periodic Motions in Long-Time Series

We present a method for the detection of periodic motions in long time series of data. In contrast to other commonly used approaches, e.g., empirical orthogonal functions, principal oscillation patterns and singular spectrum analysis, this method is oriented towards scalar quantities; therefore, it does not require the introduction of an arbitrary metric in the space of the dynamical variables. Nonlinear effects are included; needless to say, the higher the order of the nonlinearities included, the longer and more complicated the actual calculations become. We try the method on a Lorenz model and on a simple model of the dynamics of the atmosphere.
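As a rough illustration of what a scalar-oriented search for periodic motion can look like, the sketch below fits a single least-squares harmonic at a range of trial periods and picks the period with the smallest residual. This is a generic technique, not the chapter's algorithm, and the synthetic series and period grid are assumptions.

    # Sketch: least-squares fit of one harmonic of trial period T to a scalar
    # series; scanning T and comparing residuals flags a periodic component.
    import numpy as np

    def harmonic_residual(t, x, period):
        w = 2.0 * np.pi / period
        A = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        return np.linalg.norm(x - A @ coef)

    t = np.arange(500.0)
    x = np.sin(2 * np.pi * t / 37.0) \
        + 0.3 * np.random.default_rng(1).standard_normal(t.size)
    periods = np.arange(5.0, 100.0)
    best = periods[np.argmin([harmonic_residual(t, x, T) for T in periods])]
    print("best-fitting period:", best)   # close to 37 for this synthetic series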

Rubén A. Pasmanter
Comparison Study of the Response of the Climate System to Major Volcanic Eruptions and El Niño Events

This comparative study employs a special phase space representation of the coupled thermal behaviour of subsystems of the atmosphere. The course of the time derivatives or differences (especially the first ones) proves to be rather informative. Major volcanic eruptions (MVEs) are shown to activate a special dynamical state (a series of states) that, having some attractor properties, can be clearly traced up to the sixth year after the eruption with high statistical significance. The cooling of the troposphere tends to start in the southern hemisphere and, in the period from the first to the second year after the eruption, covers both hemispheres by a small but very definite amount. From the second to the third year a remarkable warming of both hemispheres follows nearly without exception. There is a quasi-periodic fluctuation of a little less than three years, the amplitude of which diminishes only slowly up to the sixth year but more rapidly afterwards.

The behaviour of the atmosphere in the same phase space representation in connection with El Niño events (ENEs) is rather different. The ENEs do not create the starting behaviour of the atmosphere that one might expect in view of the intensified latent heat flux into the atmosphere. There is rather a continuing fluctuation of the tropospheric temperature with a period of about 4 years. So the ENE is not simply the cause of this fluctuation, but a manifestation of the fluctuation during a special phase. There is some evidence that a global cooling of the troposphere stimulates the fluctuation mentioned.

The method used is a powerful tool to detect weak modes in complex and/or noisy systems. It is applicable to all systems that can be thought of as composed of some relevant subsystems. As an example the method is applied to the relation between the global/hemispheric behaviour and Middle European winters (MEWL), including the consequences of the global volcanic impact.

W. Böhme
Detection of a Perturbed Equator-Pole Temperature Gradient in a Spectral Model of the Atmospheric Circulation

This paper deals with the assessment of external perturbations in nonlinear chaotic dynamical processes using an extended Kalman filter treatment. We consider processes that can be modelled by a system of nonlinear ordinary differential equations and we use the extended Kalman filter to estimate the size of external perturbations. As an example to illustrate this approach we have analyzed a lower order spectral model of the atmospheric circulation with a perturbed equator-pole temperature gradient.
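The augmented-state idea behind such perturbation estimation can be sketched compactly: append the unknown perturbation to the state vector with trivial dynamics and let an extended Kalman filter estimate it along with the state. The toy one-dimensional map, noise levels and finite-difference Jacobian below are illustrative assumptions; the chapter applies the filter to a low-order spectral model instead.

    # Sketch of the augmented-state idea: treat the unknown external
    # perturbation p as an extra state with trivial dynamics and estimate it
    # with an extended Kalman filter. Toy 1-D discrete-time system.
    import numpy as np

    def step(s):                       # s = (x, p); x is the state, p the perturbation
        x, p = s
        return np.array([x + 0.1 * (np.sin(x) + p), p])   # p modelled as constant

    def jacobian(f, s, eps=1e-6):      # central finite-difference Jacobian of f at s
        J = np.empty((s.size, s.size))
        for i in range(s.size):
            ds = np.zeros(s.size); ds[i] = eps
            J[:, i] = (f(s + ds) - f(s - ds)) / (2 * eps)
        return J

    rng = np.random.default_rng(2)
    truth = np.array([0.5, 0.8])       # true perturbation p = 0.8
    s, P = np.array([0.5, 0.0]), np.eye(2)
    Q, R = 1e-6 * np.eye(2), np.array([[0.01]])
    H = np.array([[1.0, 0.0]])         # only x is observed

    for _ in range(200):
        truth = step(truth)
        y = truth[0] + 0.1 * rng.standard_normal()
        F = jacobian(step, s)
        s, P = step(s), F @ P @ F.T + Q                   # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        s = s + K @ (np.array([y]) - H @ s)               # update
        P = (np.eye(2) - K @ H) @ P
    print("estimated perturbation:", s[1])                # approaches 0.8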

Sipko L. J. Mous
A Simple Two-Dimensional Climate Model with Ocean and Atmosphere Coupling

To study the effect of changes in essential climate parameters (such as the solar flux), a two-dimensional ocean model for the northern hemisphere is constructed which takes into account advection and diffusion in the top layer and, for the unfrozen part, upwelling of cold water formed at the ice edge from the deeper parts of the ocean. At the ice-line, which acts as an internal boundary condition, continuity of temperature and heat flux is required. The model is solved analytically in terms of hypergeometric functions, which make it possible to determine the latitude of the ice-line from a non-linear equation and to study its stability in terms of the physical parameters.

E. J. M. Veling, M. E. Wit
Climate Modelling at Different Scales of Space

Climatic change calls for new tools to understand the relationships between atmosphere, vegetation and soil. At present, new methods are made possible by technologies such as remote sensing, telecommunication, computer science and Geographical Information Systems. Applied climatology investigates the variation in space and time of meteorological parameters relevant for the biological processes in an agroecosystem. The purpose of this work is to build up a methodology to extrapolate the values of maximum and minimum temperature starting from a reference station. This was done by classifying weather types and their relationships with topographic parameters such as elevation, slope, aspect, valley type, etc. Daily minimum and maximum temperatures are computed from the reference station. Compared with the experimental data, they show good agreement.

G. Maracchi

Agriculture

Simulation of Effects of Climatic Change on Cauliflower Production

The impact of climatic changes on summer cauliflower production in northern Europe has been assessed using a dynamic crop simulation model. The sensitivity of the model to changes in temperature, global radiation and atmospheric CO2 concentration was analyzed using historical weather data from several sites in Europe. Effects of varying the transplanting date and plant density were also studied.

Model simulations indicate that increasing atmospheric CO2 concentration may decrease the risk of loose heads in cauliflower. Higher CO2 concentrations may also enable a higher plant density than is currently used without detrimental effects on curd size and quality. Temperature was found to strongly affect the timing of cauliflower production, whereas the quality in terms of curd density is determined by a wider range of environmental conditions. In the model curd density is affected mainly by the balance between the source of and sink for assimilates. Plant density, atmospheric CO2 concentration and temperature were found to be the most important variables affecting the source-sink balance.

Jørgen E. Olesen, Kai Grevsen
Validation of Large Scale Process-Oriented Models for Managing Natural Resource Populations: A Case Study

The validation of models used in population management can be complicated by the number of component parts and by the large temporal and spatial scales often necessary. This is especially true for models developed for the analysis of management policy in forest-pest situations. In the case study considered here, a large-scale spruce budworm-forest simulation model (Jones, 1979) was tested by comparing its output with data collected annually by the Maine Forest Service survey at 1000 sites from 1975 to 1980.

In practice, model validation usually involves a comparison of observations, independent of those used to construct the model, with overall model output. This ‘typical’ validation was performed. In addition, separate tests were conducted on the model’s major components. These components represent the forest protection policy, the budworm-forest dynamics, and pest control efficacy.

The model’s output was not always consistent with the Maine survey data. In some situations, compensating inaccuracies in different model components allowed the overall model output to reasonably represent the overall behavior of the system. There were also problems in scaling the findings of studies of nonlinear population dynamics from small experimental plots to the much larger spatial scales used in the models. The results of this validation imply that some modification of the optimal management strategies suggested by the models is appropriate.

This paper concerns the validation of large-scale, process-oriented models in general. The spruce budworm-forest model is discussed as an illustrative case study.

R. A. Fleming, C. A. Shoemaker
Uncertainty of Predictions in Supervised Pest Control in Winter Wheat: Its Price and Causes

In supervised control, the economically optimal timing of pesticide application is the level of pest attack at which the projected costs of immediate control just equal the projected costs of no control. This level is called the damage threshold. Uncertainty about the costs of different strategies of chemical control of aphids (especially Sitobion avenae) and brown rust (Puccinia recondita) is calculated with a deterministic model. Sources of uncertainty, which comprise estimates of the initial state and parameters, future weather, and white noise, are modelled as random inputs. The consequences of uncertainty for damage thresholds are analyzed. The relative importance of the various sources of uncertainty for prediction uncertainty is calculated using a novel procedure.

Stochastic damage thresholds are lower than those calculated using average values for the sources of uncertainty. Thus, uncertainty causes earlier chemical control of pests and a higher input of pesticides. Due to the strongly skewed frequency distribution of the costs of no control, the probability of a positive return on pesticide expenditure at the stochastic damage threshold is only 30%. White noise in the relative growth rates of both aphids and brown rust is found to be the most important source of uncertainty. More accurate estimation of parameters and initial states in the current model results in only a marginal reduction of prediction uncertainty. Reduction of prediction uncertainty, and the concomitant reduction of recommended pesticide use, requires reducing the uncertainty associated with no chemical control by adopting a different approach to predicting the population dynamics of aphids and brown rust.
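The damage-threshold logic can be made concrete with a toy Monte Carlo calculation: project the cost of no control under an uncertain growth rate and find the attack level at which its expectation first exceeds the cost of immediate control. Every number below (costs, growth-rate distribution, projection horizon) is hypothetical.

    # Toy illustration of a stochastic damage threshold (all numbers
    # hypothetical, not from the chapter): the threshold is the attack level
    # at which expected costs of no control equal the costs of control.
    import numpy as np

    rng = np.random.default_rng(3)
    control_cost = 50.0                        # price of spraying, per ha
    damage_per_unit = 2.0                      # loss per unit of final attack

    def expected_no_control_cost(attack_now, n=10000):
        r = rng.normal(0.08, 0.04, size=n)     # uncertain relative growth rate
        final_attack = attack_now * np.exp(r * 30.0)   # 30-day projection
        return damage_per_unit * final_attack.mean()

    levels = np.linspace(0.1, 20.0, 200)
    costs = np.array([expected_no_control_cost(a) for a in levels])
    threshold = levels[np.argmax(costs >= control_cost)]
    print("stochastic damage threshold ~", round(threshold, 2))

Because the projected cost of no control is strongly right-skewed, the expected-cost threshold sits well below the level at which control pays off in the typical realization, mirroring the abstract's observation that uncertainty pulls the threshold down.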

Walter A. H. Rossing, Richard A. Daamen, Eligius M. T. Hendrix, Michiel J. W. Jansen
The Implications and Importance of Non-Linear Responses in Modelling the Growth and Development of Wheat

Crop simulation models are widely used to predict crop growth and development in studies of the impact of climatic change. In seeking to couple meteorological information to crop-climate models it must be remembered that many interactions between crops and weather are non-linear. Non-linearity of response means it is necessary to preserve the variability of weather sequences in order to estimate the effect of climate on agricultural production and to assess agricultural risk. To date, climatic change scenarios have mostly been constructed by applying changes in average weather parameters derived from General Circulation Models (GCMs) to historical data, and only a few studies have incorporated changes in climatic variability. Accordingly, a computer system, AFRCWHEAT 3S, was designed to couple the wheat simulation model AFRCWHEAT2 with a stochastic weather generator based on the series approach. The system can perform real-time simulations of crop growth and assess crop productivity and its associated risk before harvest, using recorded meteorological data from the current season supplemented by stochastically generated meteorological data. The considerable flexibility in constructing climatic scenarios on the basis of the weather generator makes AFRCWHEAT 3S a useful tool in studies of the impact of climatic change on wheat crops. Sensitivity analyses to changes in the variability of temperature and precipitation, as compared with changes in their mean values, were made for a UK location for winter wheat. Results indicated that changes in climatic variability can have a more profound effect on yield and its associated risk than changes in mean values.

Mikhail A. Semenov, John R. Porter
Growth Curve Analysis of Sedentary Plant Parasitic Nematodes in Relation to Plant Resistance and Tolerance

Growth curve analysis of sedentary plant parasitic nematodes on different hosts and at different population densities is used to assess host-plant suitability, including resistance and tolerance. The estimated parameters of host suitability can be used in pest management programs for economically important species such as potato cyst nematodes.

R. J. F. Van Haren, E. M. L. Hendrikx, H. J. Atkinson

Population Biology

Using Chaos to Understand Biological Dynamics

The application of nonlinear dynamics has begun to move beyond the problem of demonstrating the existence of nonlinearities (and chaos) in biological data. We are starting to see exciting cases where considerations of nonlinear dynamics can explain observed patterns, and give insight into the forces structuring biological phenomena. We present some examples of these, drawn from the analysis of measles epidemics and cardiac pathologies. We show that a phenomenon of many chaotic systems called transient periodicity can explain apparently qualitative shifts in the observed dynamics. Such shifts require no change of or perturbation to the system, but can be intrinsic features of purely deterministic dynamics. We show how the techniques of nonlinear forecasting can be used as analytical tools, both for quantifying the complexity of the time series in a biologically meaningful way and for determining how well a particular model accounts for the dynamics. We also post a warning for practitioners of “conventional” nonlinear time series analysis (such as dimension calculations): many biological time series are nonuniform, and this can seriously mislead the various algorithms in common use.
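The nonlinear forecasting referred to is generically of the delay-embedding, nearest-neighbour type. The sketch below shows the bare mechanics; the embedding dimension, neighbour count and logistic-map test series are assumptions, and the authors' actual algorithm may differ.

    # Minimal nearest-neighbour forecaster on a delay embedding, the generic
    # form of "nonlinear forecasting" (a sketch, not the authors' algorithm).
    import numpy as np

    def nn_forecast(series, embed_dim=3, k=5, horizon=1):
        # Predict the future of the last delay vector from the average
        # future of its k nearest historical neighbours.
        n = len(series) - (embed_dim - 1) - horizon
        lib = np.array([series[i:i + embed_dim] for i in range(n)])
        targets = np.array([series[i + embed_dim - 1 + horizon] for i in range(n)])
        query = series[-embed_dim:]
        idx = np.argsort(np.linalg.norm(lib - query, axis=1))[:k]
        return targets[idx].mean()

    # Example on a chaotic logistic-map series
    x = [0.4]
    for _ in range(500):
        x.append(3.9 * x[-1] * (1 - x[-1]))
    print("one-step forecast:", nn_forecast(np.array(x)))
    print("true next value:  ", 3.9 * x[-1] * (1 - x[-1]))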

Bruce E. Kendall, William M. Schaffer, Lars F. Olsen, Charles W. Tidd, Bodil L. Jorgensen
Qualitative Analysis of Unpredictability: A Case Study from Childhood Epidemics

The unpredictability of the recurrent outbreaks of childhood epidemics has been a matter of scientific dispute. At first sight the deterministic SEIR model, which divides the host population into four classes (Susceptible, Exposed, Infectious, Recovered), fails to explain the unpredictability, because realistic parameter values lead to periodic attractors. We show that these periodic attractors coexist with chaotic transients. The detailed geometrical analysis of this phenomenon suggests that chaotic transients have been underestimated in their importance for the dynamics on observable time scales. A second problem of the SEIR model is the high extinction probability of epidemics in finite populations. We argue that the immigration of infectives from outside is an essential parameter in this context. Immigration and the process of infection itself are sources of demographic fluctuations, which undergo subtle interaction with chaotic transients. The stochastic simulation of the SEIR model shows that the chaotic transients are permanently revisited. The demographic noise integrates chaotic transients and intermittent periodic episodes.
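For reference, the deterministic SEIR skeleton discussed here (and controlled in the next chapter) looks as follows when seasonal forcing of the contact rate is included; the parameter values are textbook measles-like figures, not taken from the chapter.

    # Standard seasonally forced SEIR model (a sketch; parameters are
    # textbook measles-like values and may differ from the chapter's).
    import numpy as np
    from scipy.integrate import solve_ivp

    mu = 0.02          # birth/death rate per year
    alpha = 35.84      # inverse latent period per year
    gamma = 100.0      # inverse infectious period per year
    beta0, beta1 = 1800.0, 0.28   # mean contact rate and seasonal amplitude

    def seir(t, y):
        S, E, I, R = y
        beta = beta0 * (1.0 + beta1 * np.cos(2.0 * np.pi * t))  # school-year forcing
        return [mu - beta * S * I - mu * S,
                beta * S * I - (alpha + mu) * E,
                alpha * E - (gamma + mu) * I,
                gamma * I - mu * R]

    sol = solve_ivp(seir, (0.0, 100.0), [0.06, 0.001, 0.001, 0.938],
                    rtol=1e-8, atol=1e-10)
    print("final infective fraction:", sol.y[2, -1])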

Ralf Engbert, Friedhelm R. Drepper
Control and Prediction in Seasonally Driven Population Models

Recent studies have shown, both theoretically and from data analysis, that the dynamics of certain childhood diseases in large populations can exhibit a wide range of behavior, periodic as well as chaotic. Mathematical models (such as the SEIR model with seasonal forcing) have been able to predict the onset of chaotic epidemics using parameters of childhood diseases such as measles, mumps, and chickenpox. A new method is presented to control certain unstable outbreaks by using inexpensive vaccine strategies. These controlled outbreaks are small in amplitude, periodic and, therefore, quite predictable. New techniques for general dynamical systems are presented and applied to this epidemiological problem, in which the epidemic can be controlled and directed into a small-amplitude, regular outbreak. The incidence of the epidemic may be reduced by tracking this controlled outbreak as a function of certain vaccine-strategy parameters.

Ira B. Schwartz, Ioana Triandaf
Simple Theoretical Models and Population Predictions

Using census data of spruce needleminer populations and their natural enemies from one stand of Norway spruce trees to estimate the parameters of a complex simulation model and two simple theoretical models, we predict spruce needleminer abundance in three other stands and compare the predictions with census data. The simple theoretical models were as good as the complex model in forecasting needleminer population numbers one year ahead. The reason may be that simple theoretical models capture the dominant structure of the dynamic system in an unbiased way.

Alan A. Berryman, Mikael Munster-Swendsen
Individual Based Population Modelling

Realistic models for the dynamics of populations of animals or bacteria should minimally account for the uptake and use of resources by individuals. In field situations, it is usually necessary to implement more advanced behaviour as well, such as interactions between individuals and spatial and temporal inhomogeneities. The dynamics of many heterotrophic systems can be understood by focusing on energy fluxes only, because mass fluxes tend to be closely coupled to them. Realistic and relatively simple descriptions of energy uptake and usage by individuals appeared to be possible for this purpose. Surface-area-related uptake, volume-related maintenance and storage dynamics are the main key elements. These non-specific descriptions distinguish three energy-defined life stages of an animal (embryo, juvenile and adult) and allow the derivation of body-size scaling relations of parameter values. Consistent application of the first law of thermodynamics at both the individual and the population level proves to restrict oscillations considerably in comparison with, for instance, Lotka-Volterra-based population dynamics. The dynamics of populations of energy-structured individuals can to some extent be simplified to a description of the energy uptake and use by the population in terms of that by individuals. These new objects, populations, can be linked into food chains and food webs to explore the potential dynamics of ecosystems. Realistic descriptions of a three-step microbial food chain have been obtained. Body-size scaling relations can be used to reduce the number of parameters of the system. The specification of ecosystem dynamics then reduces to that of particle size distributions. In this way it proved possible to explain, for instance, why food chains cannot have many links.

S. A. L. M. Kooijman
Ecological Systems are Not Dynamic Systems: Some Consequences of Individual Variability

Ecological systems are not dynamic systems (they are not “state variable models”), because they consist of individuals, which are all different. An individual-based model of a single population of competing individuals is presented to explain this. The model illustrates the meaning of “density dependence” at the level of individuals. Two time scales on which density dependence operates can be distinguished: “between-generation” and “within-generation”. For the latter, individual variability is an important factor.

Volker Grimm, Janusz Uchmański
Spatio-Temporal Organization Mediated by a Hierarchy in Time Scales in Ensembles of Predator-Prey Pairs

Spatio-temporal organization refers to functional changes in a system’s interaction network mediated by system dynamics. Complex, even chaotic, dynamics are claimed to be characteristic of an ecological community as a whole. In systems of this type the emphasis should be shifted from predicting individual trajectories to investigating organizational properties at the level of the system as a whole, such as the relationship between dynamic diversity, spatio-temporal organization and system function. Such effects are illustrated with a model where a hierarchy of different time scales is introduced into an ensemble of predator-prey pairs (PPP) by distributing the latter along the body weight axis. The PPPs share a common pool of a limiting resource. Model versions comprising a single PPP are characterized by a time-invariant steady state. As soon as further PPPs are added the system becomes unstable, exhibiting first periodic and then chaotic oscillations. In spite of the chaotic and unpredictable dynamics of the single PPPs, a number of system properties were found to be independent of the initial conditions chosen. The efficiency of resource utilization increases with an increasing number of PPPs due to the associated increase in the temporal organization of the network as a whole. The effects of spatio-temporal organization on system function are further illustrated by results from model versions where a dimension of space was introduced by assuming that the species diffuse along one spatial dimension.

Claudia Pahl-Wostl
Continental Expansion of Plant Disease: A Survey of Some Recent Results

In this paper we show how the continental expansion of fungal plant diseases can be modeled. The model takes the form of a set of two integral equations. The equations describe a systematic book-keeping of the dynamics of the number of foci in host fields. Using results on related models for the spatial expansion of epidemics, the velocity of expansion can be calculated. We present explicit formulae for the expansion velocity, both within one growing season and over successive growing seasons. The method thus developed is applied to the invasion of Phytophthora infestans. This example serves as a first verification of the model; it also shows what type of data are needed to gain insight into the development of quarantine pests.

F. Van Den Bosch, J. C. Zadoks, J. A. J. Metz
Modeling of Fish Behavior

The paper gives a survey of some fundamental principles which could form the basis for mathematical modeling of the large scale behavior (schooling) of fish in an ocean. The principle of maximization of comfort is presented as an acceptable fundamental hypothesis explaining behavior and the consequences of using this principle are discussed. A standard Kalman filtering structure is suggested for the estimation and prediction of fish behavior from observations made by research vessels etc.

Jens G. Balchen

Systems Sciences

Understanding Uncertain Environmental Systems

Developments over the past two decades in the identification of models of environmental systems are reviewed, with special reference to the quality and pollution of surface freshwaters. As in so many fields, the early 1970s were a time of great expectations: it would not be long, we believed, before the admittedly less well defined problems of environmental systems analysis would nevertheless yield to the already vast array of methods available from applied mathematics and control theory (which had been so successful in their application, for example, to the analysis of aerospace systems). Such a yielding has still to come to pass, at least for multivariable models of more than, say, five or six state variables. In the past decade, because of the seemingly insuperable difficulties of model identifiability, we have promoted the pragmatic view that what really matters is the ability to generate “robust” predictions that are maximally insensitive to a lack of identifiability. Such pragmatism, coupled with a continuing dearth of successful techniques of system identification, does not bode well. The digital computing technology on which we are able to realise our “set of concepts” (our models) continues to expand rapidly. A similar expansion, although less dramatically so, is apparent in the technology of instrumentation and remote sensing, through which our “given data” are acquired in ever greater volumes. No such expansion is evident in the capacity of the brain to juggle with disparate facts and figures until the ever more comprehensive, given data can be reconciled with the increasingly massive sets of concepts. Whither, then, is environmental system identification bound in the next decade? A modest attempt to answer this question will be made, by way of conclusion.

M. B. Beck
System Identification by Approximate Realization

We discuss the use of state variables in time series modelling. Current procedures are mostly based on realization theory. First certain parameters are estimated which describe the process, e.g., the system's impulse response or autocorrelations. These parameters are then transformed into an approximate state space model. In this note we suggest the opposite procedure. First an approximate state trajectory is estimated, and in a second stage a corresponding state space model is determined. This method makes it possible to infer several structural properties of the process from the observed data, in particular the dynamical structure (length of the involved time lags), causality (which variables are inputs and outputs), and the noise (whether it is stochastic or not).
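One concrete way to "first estimate an approximate state trajectory" from data is a truncated SVD of a Hankel matrix of the observations, after which a state-space model follows by regression. The sketch below illustrates that two-stage flavour; the window length, model order and test signal are assumptions, not the author's exact procedure.

    # Sketch: estimate an approximate state trajectory from a scalar series
    # by truncated SVD of its Hankel matrix, then fit the state dynamics.
    import numpy as np

    def approximate_states(y, window=10, order=2):
        N = len(y) - window + 1
        H = np.array([y[i:i + window] for i in range(N)])  # N x window Hankel matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        return U[:, :order] * s[:order]   # estimated states, one row per time step

    t = np.arange(200)
    y = np.sin(0.2 * t) + 0.05 * np.random.default_rng(4).standard_normal(t.size)
    X = approximate_states(y)
    # Second stage: a state transition matrix by regressing X[1:] on X[:-1]
    A, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
    print("estimated transition matrix:\n", A)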

Christiaan Heij
Sensitivity Analysis Versus Uncertainty Analysis: When to Use What?

Decision makers and other users of models are interested in model validity. From their viewpoint the important model inputs should be split into two groups, namely inputs that are under the decision makers' control versus (environmental) inputs that are not controllable. Specifically, users want to ask 'what if' questions about global (not local) sensitivities: what happens if controllable inputs are changed (scenario analysis), what if model parameters and structure change? Among the techniques to answer these questions are statistical design of experiments (such as fractional factorial designs) and regression analysis. These techniques may show that some non-controllable inputs of the model are important; yet these inputs may not be known precisely. Then risk or uncertainty analysis becomes relevant. Its techniques are Monte Carlo sampling, including variance reduction techniques (such as Latin hypercube sampling), possibly combined with regression analysis. Controllable inputs can be optimized through Response Surface Methodology (RSM).
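In practice the combination of Latin hypercube sampling with regression analysis mentioned above often looks like the following sketch, which ranks inputs by standardized regression coefficients. It requires scipy >= 1.7 for scipy.stats.qmc, and the model function and input ranges are hypothetical.

    # Sketch of the Latin hypercube + regression combination (hypothetical
    # model and input ranges; requires scipy >= 1.7 for scipy.stats.qmc).
    import numpy as np
    from scipy.stats import qmc

    def model(x):                      # hypothetical simulation model
        return 3.0 * x[:, 0] + x[:, 1]**2 + 0.1 * x[:, 2]

    sampler = qmc.LatinHypercube(d=3, seed=5)
    X = qmc.scale(sampler.random(n=500), l_bounds=[0, 0, 0], u_bounds=[1, 2, 10])
    y = model(X)

    # Standardized regression coefficients as global sensitivity measures
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
    print("standardized regression coefficients:", src)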

Jack P. C. Kleijnen
Monte Carlo Estimation of Uncertainty Contributions from Several Independent Multivariate Sources

An efficient random sampling method is introduced to estimate the contributions of several sources of uncertainty to the prediction variance of (computer) models. Prediction uncertainty is caused by uncertainty about the initial state, parameters, unknown (e.g. future) exogenous variables, noises, et cetera. Such uncertainties are modelled here as random inputs into a deterministic model, which translates input uncertainty into output uncertainty. The goal is to pinpoint the major causes of output uncertainty. The method presented is particularly suitable for cases where uncertainty is present in a large number of inputs (such as future weather conditions). The expected reduction of output variance is estimated for the case in which various (groups of) inputs become fully determined. The method can be applied if the input sources fall into stochastically independent groups. The approach is more flexible than conventional methods based on approximations of the model. An agronomic example illustrates the method. A deterministic model is used to advise farmers on the control of brown rust in wheat. Empirical data were used to estimate the distributions of the uncertain inputs. The analysis shows that effective improvement of the precision of the model's prediction requires alternative submodels describing pest population dynamics, rather than better determination of initial conditions and parameters.
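The key quantity of the method, the expected reduction of output variance if a group of inputs became fully determined, can be approximated by a nested Monte Carlo loop, as in this toy sketch. The model and input distributions are hypothetical, and the chapter's own sampling scheme is more efficient than this brute-force version.

    # Toy estimate of var(Y) - E[var(Y | group)], the expected variance
    # reduction if one independent input group became fully known.
    import numpy as np

    rng = np.random.default_rng(6)
    def model(a, b):                   # two independent input groups
        return np.sin(a) + 0.5 * b**2

    def expected_reduction_if_known(group, n_outer=200, n_inner=200):
        total = model(rng.normal(size=20000), rng.normal(size=20000)).var()
        cond_vars = []
        for _ in range(n_outer):
            if group == "a":
                a = rng.normal()       # fix group a, vary the rest
                cond_vars.append(model(a, rng.normal(size=n_inner)).var())
            else:
                b = rng.normal()       # fix group b, vary the rest
                cond_vars.append(model(rng.normal(size=n_inner), b).var())
        return total - np.mean(cond_vars)

    for g in ("a", "b"):
        print("expected variance reduction if", g, "known:",
              expected_reduction_if_known(g))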

Michiel J. W. Jansen, Walter A. H. Rossing, Richard A. Daamen
Assessing Sensitivities and Uncertainties in Models: A Critical Evaluation

Sensitivity and uncertainty analysis are important ingredients of the modelling process, and contribute substantially to reliable and efficient development, assessment and application of mathematical models. Quantifying how much the model components concerned contribute to the sensitivity and uncertainty in the model outputs is an essential issue in these analyses. An overview is given of various measures which are commonly used for assessing these contributions; their main features are discussed and critically evaluated.

P. H. M. Janssen
UNCSAM: A Software Tool for Sensitivity and Uncertainty Analysis of Mathematical Models

The paper addresses the important role of sensitivity and uncertainty analysis in the mathematical modeling process and discusses guidelines to perform these analyses. The main features are presented of the software package UNCSAM, which applies efficient Monte Carlo sampling in combination with regression and correlation analysis to perform sensitivity and uncertainty analyses on a large variety of simulation models. The use of UNCSAM is illustrated by an environmental application study.

P. S. C. Heuberger, P. H. M. Janssen
Set-Membership Identification of Non-Linear Conceptual Models

Identification of conceptual models nonlinear in the parameters from bounded-error data is considered. The assumption that errors are point-wise bounded implies that a set of parameter vectors is found instead of an ‘optimal’ parameter estimate. For our class of models, the Monte Carlo Set-Membership algorithm is appropriate to approximate the exact solution set by a number of feasible realizations. In addition to the feasible parameter set, representing the parametric uncertainty, information about the modelling uncertainty is also provided. In order to obtain realistic predictions both uncertainty sources must be quantified from the available data and evaluated over the prediction horizon. Three ‘real-world’ examples will illustrate the features of this set-membership approach to system identification and prediction.
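In its generic form the Monte Carlo set-membership algorithm samples candidate parameter vectors and retains those whose simulated outputs stay within the point-wise error bounds at every observation. The sketch below shows this for a hypothetical two-parameter model; bounds, ranges and sample size are assumptions.

    # Generic Monte Carlo set-membership sketch: keep candidate parameters
    # whose predictions fall inside the point-wise error bounds everywhere.
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.linspace(0.0, 5.0, 20)
    def model(theta):                  # hypothetical nonlinear model
        k, c = theta
        return c * (1.0 - np.exp(-k * t))

    y_obs = model((0.8, 2.0)) + rng.uniform(-0.1, 0.1, t.size)
    bound = 0.1                        # assumed point-wise error bound

    candidates = rng.uniform([0.1, 0.5], [2.0, 4.0], size=(20000, 2))
    feasible = np.array([th for th in candidates
                         if np.all(np.abs(model(th) - y_obs) <= bound)])
    print("feasible fraction:", len(feasible) / 20000)
    print("feasible parameter ranges:", feasible.min(axis=0), feasible.max(axis=0))

The spread of the retained vectors is the feasible parameter set the abstract refers to; its extent quantifies the parametric uncertainty carried forward to prediction.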

Karel J. Keesman
Parameter Sensitivity and the Quality of Model Predictions

Using SIM-PEL, a comprehensive model for the pelagic compartment of lake ecosystems, we analyse synergistic toxicant effects in lake ecosystems. We show that, even for a rather simple model, model predictions may depend strongly on the time horizon of the prediction and on the quality of the input parameters. For longer time spans, small errors in parameter estimation may lead to qualitatively wrong predictions of toxicant effects. Monte Carlo simulations make it possible to take errors in parameter estimation into account, but they require rather good estimates of the parameter variance.

Hans J. Poethke, Detlef Oertel, Alfred Seitz
Towards a Metrics for Simulation Model Validation

A large group of nonlinear dynamic simulation models can be seen as intermediates between hard (physical) and soft (management science) models, because they are based on insufficient or not generally accepted theories and hypotheses. This type of model (ecological, environmental, and economic) is characterized by highly uncertain outcomes, due to an uncertain, unidentifiable model structure, poorly known model parameters and uncertain model inputs. Most validation techniques offer merely a terminology and a procedural validation approach without any metrics. Let S be a part of reality which satisfies the constraints of a relevant experimental frame (specification of time, location, experimental conditions and relevant state variables). S can only be known by making observations of the real system. Any simulation model of the real system S has to be based on the available theoretical and other a priori knowledge. Each source of uncertainty will influence the model outcomes. Let O be the set of observations and M the set of model results, both within the same experimental frame and both including uncertainty ranges; validation then tests for the fit between M and O. In the terminology of Popper most models of this class are invalid (no perfect match of M and O) and have to be rejected. This paper suggests testing instead for the usefulness of a model in terms of model adequacy (which part of the system can be adequately simulated) and model reliability (which part of the model outcome matches system behavior). The test on model usefulness instead of model validity provides a metrics which helps to determine the scope of the model and increases its acceptability. The method is illustrated with examples.

Huub Scholten, Marcel W. M. Van Der Tol
Use of a Fourier Decomposition Technique in Aquatic Ecosystems Modelling

A quasilinear system of ordinary first-order differential equations of the type frequently used in ecosystems modelling (including mathematical models of aquatic ecosystems) is considered. It is assumed that the system is subject to periodical changes in coefficients and/or right-hand sides (due to the diurnal or seasonal character of the described ecological processes). The periodic component of the state variables caused by these disturbances is considered to be small enough to allow the usage of first-order Taylor formulae. Under these assumptions a decomposition of the system dynamics into a “slow motion” component and first-order Fourier harmonics is performed. The resulting set of equations can be solved with large time steps, still preserving information on the periodic as well as the smooth average components of the dynamical behaviour of the initial system. The performance of the method is evaluated using an algae growth equation, with light availability as the only growth-limiting factor. The results suggest that the proposed method is useful both for adjusting the average motion component and for evaluating the diurnal dynamics of algae. Further uses of the method are discussed and proposed.

I. Masliev
Multiobjective Inverse Problems with Ecological and Economical Motivations

The paper is devoted to some problems of multiobjective analysis which can be briefly outlined as follows. Two groups of restrictions on a variable x are given. The first is treated as preassigned constraints and depends on the vector parameter ν, while the second group corresponds to controllable restrictions depending on the vector parameter μ. The following reciprocal problems are studied: find the set of controllable parameters μ (the inputs of the system) which ensure the preassigned constraints on the variable x with ν given, and, vice versa, specify the set of guaranteed values ν (the outputs) for fixed μ. The paper deals with the structure, description and “extremal” elements of the above sets. Computerized implementation is also discussed. The questions under consideration are motivated by ecological and economical problems and are closely related to those investigated in Kurzhanski (1986), Konstantinov (1983), Nikonov (1988), (1992).

Oleg I. Nikonov
An Expert-Opinion Approach to the Prediction Problem in Complex Systems

The use of model forecasts for decision making should be optimized. With this in mind, the concept of modelling the future is discussed from an epistemological point of view and on the basis of a stochastic model interpretation. Traditional definitions of model statistics make reference to an ensemble of systems. Since this does not work for a complex system with a unique state, an alternative approach, based on the subjective (Delphi) opinion of a group of experts, is also considered. This approach is then generalized to the situation in which a set of competing models is available. With a Delphi method a certain likelihood can be assigned to each model. Once the statistics are defined, one may face the issue of predictability. In hindsight (in a 'hindcasting' mode) models can be validated by checking how accurately they have described observations, and they can be falsified when their predictions differ in an unlikely way from the observations. 'Forecasting' is different, because models can never be proven. Therefore, exact prediction of the future is impossible. Definitions of predictability (two examples will be given) necessarily refer to the range of modelled possibilities. It is argued that all model predictions — also those resulting from physical models — should be considered as scenarios. To make rational decisions the likelihood of all possible model forecasts has to be taken into account. In the case of complex systems and difficult decisions it appears useful to consider a large variety of models. Experts need not strive for consensus, because a diversity of opinions could lead to better decisions. It is recommended that more attention be paid to the Delphi aspects of forecast likelihoods.

Gerbrand J. Komen

Environmental Sciences

Critical Loads and a Dynamic Assessment of Ecosystem Recovery

A dynamic soil acidification model (SMART) is used to assess the impact of the so-called 50%-gap-closure scenario on the state of forest soils in Europe. This scenario aims to reduce the excess deposition of sulfur over the critical loads by at least 50% everywhere in Europe by the year 2000 at minimal emission reduction costs and is currently under discussion as a basis for a new UN/ECE sulfur protocol. The concentration of aluminum in soil solution is used as an indicator for potential damage due to acidifying deposition. The time required for soils to recover, i.e. to reach an aluminum concentration of less than 0.2 eq/m³, is computed and mapped on a 150×150 km² grid covering Europe. Results show that the implementation of the 50%-gap-closure scenario will “protect” an additional 10% of the European forest soils; however, they also indicate that the deposition of acidifying nitrogen compounds has to be reduced as well in order not to create new areas with an elevated risk of ecosystem damage.

J.-P. Hettelingh, M. Posch
Uncertainty Analysis on Critical Loads for Forest Soils in Finland

The aim of the study is to present a comprehensive and quantitative estimate of the uncertainty of critical load values and their exceedances in a regional study for Finland. The critical loads are used to set goals for future deposition rates of acidifying compounds such that the environment is protected. In this study the critical loads for forest soils are determined using a steady-state mass balance approach. The critical load for a particular receptor varies from site to site, depending on its inherent sensitivity; its allocation for acidifying sulphur and nitrogen also depends on the deposition patterns. The software package UNCSAM (UNCertainty analysis by Monte Carlo SAMpling techniques), developed at RIVM, has been used as a flexible tool for the analysis. The analysis presented here focuses on the estimation and effect of input parameter uncertainties. The study covers all relevant input parameters without preceding screening. The uncertainties are due to measurement errors or difficulties in the interpretation of measurement results. The effects of the uncertainties in the model structure and dose-response assumptions are not included in this study. The uncertainties are calculated both for different areas in Finland and aggregated for the whole country. The uncertainties found are reasonable in comparison with the largest input-parameter uncertainties. The most influential parameters are briefly described, both for the whole country and in their spatial distribution.

Matti P. Johansson, Peter H. M. Janssen
Monte-Carlo Simulations in Ecological Risk Assessment

Ecological risk assessment is usually based on two processes: exposure and effect assessment. Both have to deal with different kinds of uncertainties. Thus, predictions can only be probability statements about the expected hazard. In this paper we present two stochastic simulation-model approaches to analysing and predicting toxicant effects in freshwater plankton communities using Monte-Carlo techniques. The first is an individual-based model which uses detailed measurements of life-table data to simulate community dynamics in laboratory systems. The natural variability of individuals is reflected in the model by describing the life course of every individual according to the means and probability distributions of the measured life-table data. The second model is a compartment model of plankton communities in outdoor microcosms, where taxa are modelled via differential equations. In this approach uncertainties due to parameter estimation are incorporated by conducting many simulation runs with parameter values chosen from estimated probability distributions. We present application examples of both models and discuss some benefits and problems of using simulation models in ecological risk assessment.

U. Hommen, U. Dülmer, H. T. Ratte
Sensitivity Analysis of a Model for Pesticide Leaching and Accumulation

The sensitivity of pesticide leaching and accumulation to variations in pesticide properties, soil temperatures, soil water fluxes, and transport parameters was investigated with the general solute transport model SOTRAS. Pesticide interactions include non-linear Freundlich equilibrium sorption, temperature- and pressure-head-dependent first-order transformation kinetics, and plant uptake. For a number of pesticides with different mobilities and half-lives, Monte Carlo simulations were carried out with Latin hypercube samples over a preset range of the input parameter domain. The sensitivity of model outputs to model inputs was quantified by linear regression statistics. The time evolution of model sensitivity and the contribution of the various model inputs to the total sensitivity were quantified as well. The standardized analysis gives rapid quantitative information about model behaviour. The results of the analysis are used to determine which parameters should be measured in greater detail and which need further calibration. Results are also used to set up sampling strategies. In general, the accumulation of pesticides in the plough layer was very sensitive to model inputs influencing the transformation rate of the pesticide (soil temperature and half-life) and almost insensitive to sorption characteristics and soil water fluxes. Only in the case of very persistent and mobile pesticides was accumulation most sensitive to soil water fluxes. The concentration of pesticide in ground water was most sensitive to the Freundlich concentration exponent and, to a lesser extent, to the Freundlich coefficient, except for some pesticides which are hardly sorbed; the leaching of these pesticides was most sensitive to half-life and soil temperature. The linear regression model could not be used for pesticides with high sorption coefficients, even if the variation of the input was kept as low as 1%, but good results were obtained after logarithmic data transformation.
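For reference, the non-linear Freundlich equilibrium sorption isotherm mentioned above has the standard form s = K_F * c^n, with s the sorbed content and c the solution concentration; n < 1 makes sorption, and hence leaching, depend non-linearly on concentration. A one-line sketch with illustrative coefficients:

    # Standard Freundlich isotherm (coefficients below are illustrative,
    # not taken from the chapter).
    def freundlich(c, k_f=2.0, n=0.9):
        return k_f * c**n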

A. Tiktak, F. A. Swartjes, R. Sanders, P. H. M. Janssen
Bayesian Uncertainty Analysis in Water Quality Modelling

To analyse the influence of pollution and the impact of historical and future measures in river basins, an integrated modelling approach comprising the causality chain of emission, distribution and effects of toxicants was chosen. An integrated approach is also essential to study the interactions between abiotic and biotic components, which are considered vital to ecosystem functioning. After quantifying the sources of pollution, dynamic water quality models are used to determine the distribution of nutrients and toxicants over various spatial compartments. The models contain a large number of unknown parameters and therefore a statistical model analysis is needed. A combined uncertainty and sensitivity analysis procedure based on Bayesian inference is applied. This method leads to probability distributions for the uncertain parameters as well as for selected output variables, as shown in the results from the national water quality model. The inorganic matter in the water phase is analysed and calibrated on the sedimentation and resuspension parameters.

P. R. G. Kramer, A. C. M. De Nijs, T. Aldenberg
Modelling Dynamics of Air Pollution Dispersion in Mesoscale

In this paper a multilayer, dynamical model for air quality analysis is presented. The model has been designed for predicting the dispersion and deposition of air pollution in urban and industrial areas and for evaluating emission control strategies. The computer implementation of pollutant dispersion is based on numerically solving advection-diffusion equations. The wind field structure and dynamics are preprocessed by a specialized generator. Computational examples with real data are presented.

Piotr Holnicki
Uncertainty Factors Analysis in Linear Water Quality Models

The parameter uncertainties that result from the calibration of Biochemical Oxygen Demand (BOD) — Dissolved Oxygen (DO) river quality models are quantitatively analyzed as a function of the number of sampling points, of the parameter “true” values, and of the model complexity.
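Linear BOD-DO river quality models are classically of the Streeter-Phelps type; assuming that is the family meant here, the dissolved-oxygen deficit along the river follows the sag equation sketched below (rate constants are illustrative, per day).

    # Streeter-Phelps dissolved-oxygen sag (a sketch with illustrative
    # coefficients; L0: initial BOD, D0: initial DO deficit,
    # kd: deoxygenation rate, ka: reaeration rate, all per day).
    import numpy as np

    def do_deficit(t, L0=10.0, D0=1.0, kd=0.35, ka=0.7):
        return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
               + D0 * np.exp(-ka * t)

    t = np.linspace(0.0, 10.0, 6)
    print(do_deficit(t))   # deficit rises to a sag maximum, then recovers

The calibration problem the abstract analyzes is then the estimation of kd and ka (and their uncertainty) from DO samples along the reach.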

Andrzej Kraszewski, Rodolfo Soncini-Sessa
Uncertainty Analysis and Risk Assessment Combined: Application to a Bioaccumulation Model

A bioaccumulation model has been formulated for contaminant accumulation in meadow ecosystems. This type of model is generally parameter-rich, which poses problems for risk prediction due to parameter uncertainty. A procedure is needed to deal with model uncertainty and risk assessment simultaneously. The model was subjected to uncertainty analysis, leading to probability distributions of all model output variables. Uncertainty measures were calculated using a linear regression model. The probability that environmental standards or No Observed Effect Concentrations are exceeded was derived from the same distributions as used for the uncertainty analysis. The effects of different toxicant loading scenarios on these probabilities were calculated. The procedure discussed here facilitates the use of complex ecosystem models for risk assessment.

Theo P. Traas, Tom Aldenberg
Diagnosis of Model Applicability by Identification of Incompatible Data Sets Illustrated on a Pharmacokinetic Model for Dioxins in Mammals

When calibrating a model the problem often arises that different data sets (e.g. different outputs of the same system; data obtained under different conditions) are incompatible: parameter values that give an acceptable fit to one data set provide an unacceptable fit to another set. Occasionally, a previously calibrated model fits poorly to a new data set, but it may be possible to re-calibrate the model to both data sets. However, if data sets are in fact incompatible, calibration of the model to all available data may simply result in a poor compromise fit without any insight into the cause of the problem. This paper discusses how the situation can be analysed as a multi-objective optimization problem, with the goodness-of-fit values to the different data sets as optimization goals. It is shown that the set of Pareto-optimal solutions (i.e. where an increase in fit to a particular data set must necessarily lead to a decrease elsewhere) provides an efficient way to analyse the situation. If there is only a single Pareto-optimal point, the same set of parameter values can be used for all applications. If this is not the case, it will be shown how trade-offs between goodness-of-fit values can be indicated, and clusters of mutually compatible data sets (i.e. those that can be fitted by a single set of parameter values) can be identified. Analysis of the parameter values corresponding to the different clusters provides the insight needed to arrive at a more generally applicable model. The method proposed in this paper is illustrated on a simple growth model with artificial data sets as well as on a pharmacokinetic model for dioxins using data from a number of independent experiments on mice, rats and cows. The present analysis provides valuable insight for the further development of generic toxicokinetic models that can be used for interspecies extrapolations.
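The Pareto-optimality test described above can be stated generically: a candidate parameter vector is kept if no other candidate fits at least as well on every data set and strictly better on at least one. A minimal sketch, with random placeholder misfits instead of real goodness-of-fit values:

    # Generic Pareto-front filter over per-data-set misfits (lower is
    # better); the misfit values here are random placeholders.
    import numpy as np

    def pareto_front(misfits):
        # misfits: (n_candidates, n_datasets) array
        keep = []
        for i, f in enumerate(misfits):
            dominated = np.any(np.all(misfits <= f, axis=1) &
                               np.any(misfits < f, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    rng = np.random.default_rng(8)
    misfits = rng.random((200, 2))     # 200 candidates, 2 data sets
    front = pareto_front(misfits)
    print(len(front), "Pareto-optimal candidates out of 200")

A front that collapses to a single point signals mutually compatible data sets; an extended front exposes the trade-offs, and clustering along it identifies which data sets can share one parameter vector.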

Olivier Klepper, Wout Slob
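
The core of the method is identifying the Pareto-optimal parameter vectors among candidates scored against each data set. A minimal sketch (the lack-of-fit values are hypothetical; lower is better):

    import numpy as np

    def pareto_optimal(costs):
        # costs[i, j]: lack-of-fit of parameter vector i to data set j.
        # A vector is Pareto-optimal if no other vector is at least as good
        # on every data set and strictly better on at least one.
        keep = np.ones(len(costs), dtype=bool)
        for i in range(len(costs)):
            if keep[i]:
                dominated = (np.all(costs >= costs[i], axis=1)
                             & np.any(costs > costs[i], axis=1))
                keep &= ~dominated
        return keep

    costs = np.array([[1., 5.], [2., 2.], [5., 1.], [3., 3.], [6., 6.]])
    print(pareto_optimal(costs))     # -> [ True  True  True False False]

A single surviving point means one parameter set serves all data sets; several points trace the trade-off front, and data sets that share a best-fitting point form a mutually compatible cluster.
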
Regional Calibration of a Steady-State Model to Assess Critical Acid Loads

The Model to Assess Critical Acid Loads (MACAL) has been developed for assessing and mapping critical acid loads on a national scale. MACAL simulates steady-state soil solution concentrations of major ions in a forest soil at any given depth for a given deposition level. The critical acid load is calculated by inverse modelling from defined critical values for the Al3+ concentration and the Al3+/Ca2+ ratio. In order to minimize the uncertainty in the critical load computations, which is due to insufficient knowledge of parameter values, a multi-signal calibration of poorly defined but important model parameters was performed using a data set on soil solution concentrations at 150 forest stands in the Netherlands. Since no detailed data were available at the site scale (i.e. individual forest stands), a regional calibration was preferred: the cumulative distribution functions (CDF) of the model outputs for the 150 forest stands were fitted to those of the associated measurements. All model parameters could be identified with the objective function used, except for the forest filtering factors for nitrogen deposition. The calibration proved useful in reducing the ranges of some of the important model parameters, resulting in a lower uncertainty in model predictions.

J. Kros, P. S. C. Heuberger, P. H. M. Janssen, W. De Vries
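
Regional calibration here means matching distributions over stands rather than values at individual sites. A minimal sketch of a CDF-distance objective with a toy one-parameter "model" (MACAL itself, the parameter, and the Kolmogorov-Smirnov-type distance below are all illustrative assumptions):

    import numpy as np

    def cdf_distance(model_out, observed):
        # Maximum vertical distance between the two empirical CDFs.
        grid = np.sort(np.concatenate([model_out, observed]))
        Fm = np.searchsorted(np.sort(model_out), grid, side="right") / len(model_out)
        Fo = np.searchsorted(np.sort(observed), grid, side="right") / len(observed)
        return np.max(np.abs(Fm - Fo))

    rng = np.random.default_rng(1)
    obs = rng.lognormal(0.0, 0.4, 150)       # measurements at 150 stands
    base = rng.lognormal(0.0, 0.4, 150)      # toy per-stand model drivers

    # Pick the parameter whose output CDF best matches the measured CDF.
    best = min(np.linspace(0.5, 2.0, 31),
               key=lambda p: cdf_distance(p * base, obs))
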
Uncertainty Analysis for the Computation of Greenhouse Gas Concentrations in IMAGE

Uncertainties in simulations of greenhouse gas concentrations are analyzed. Greenhouse gas concentrations are simulated using the Atmospheric Composition model of IMAGE. Uncertainties arise, among other sources, from uncertainties in greenhouse gas emissions and in the parameters describing atmospheric processes. The total uncertainty in the greenhouse gas concentrations is quantified, as well as the contributions of the individual sources of uncertainty to this total, by means of a multi-dimensional Monte Carlo analysis using Latin Hypercube sampling. The focus is on the non-CO2 gases, such as methane and ozone, for which atmospheric processes play a key role in determining the changes in concentration.

Maarten S. Krol
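
Latin Hypercube sampling stratifies each uncertain input into as many equal-probability intervals as there are runs and samples each interval exactly once, covering the input space far more evenly than plain random sampling at the same cost. A minimal sketch on the unit cube (mapping the columns to actual emission and process-parameter ranges is omitted):

    import numpy as np

    def latin_hypercube(n_runs, n_params, rng):
        # One point per stratum per parameter; strata are permuted
        # independently across parameters to break accidental correlation.
        u = (rng.random((n_runs, n_params))
             + np.arange(n_runs)[:, None]) / n_runs
        for j in range(n_params):
            u[:, j] = rng.permutation(u[:, j])
        return u

    rng = np.random.default_rng(42)
    sample = latin_hypercube(100, 3, rng)    # e.g. 100 runs, 3 uncertain inputs
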

Economics

Forecast Uncertainty in Economics

Forecasting and policy analysis in (macro-)economics has been the core business of the Dutch Central Planning Bureau ever since its inception in 1945. Econometric models are used to organize knowledge and to ensure consistency among the many variables in the forecast. The track record shows large forecast errors, at both a one-year and a four-year horizon. Forecasts are always conditional on declared government policies, hence forecast errors can be partly traced to changes in economic policy. The paper reports on a Monte Carlo study of the relative importance of four different sources of forecast uncertainty; especially at somewhat longer horizons, uncertainty originating from error terms in the model and from errors in exogenous variables dominates uncertainty originating from preliminary data and uncertain model parameters. It is unlikely that forecast errors can be significantly reduced; indeed, continuous efforts are required to keep them at their current level. The main purpose of CPB's forecasting activities is to provide a benchmark for the preparation of economic policy. The uncertainty that comes with the forecast must of course also be communicated to the policy makers, so that they can develop contingency plans. By using different scenarios, 'no-regrets' policies can be separated from strategic policy decisions.

F. J. Henk Don
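
The Monte Carlo design can be pictured as switching the individual error sources on one at a time and comparing the resulting forecast spread. A deliberately toy sketch with an AR(1) "economy" (the model, coefficients and noise levels are invented for illustration and bear no relation to the CPB model):

    import numpy as np

    rng = np.random.default_rng(7)
    T, n = 4, 5000            # 4-year horizon, Monte Carlo replications

    def spread(eq, exo, dat, par):
        # Four switchable sources: equation error terms, exogenous-variable
        # errors, preliminary-data errors, parameter uncertainty.
        a = 0.8 + par * rng.normal(0, 0.05, n)
        y = 1.0 + dat * rng.normal(0, 0.5, n)
        for _ in range(T):
            y = a * y + 0.2 + eq * rng.normal(0, 1.0, n) \
                            + exo * rng.normal(0, 1.0, n)
        return y.var()

    total = spread(1, 1, 1, 1)
    for name, flags in [("equations", (1, 0, 0, 0)), ("exogenous", (0, 1, 0, 0)),
                        ("data", (0, 0, 1, 0)), ("parameters", (0, 0, 0, 1))]:
        print(name, spread(*flags) / total)
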
Some Aspects of Nonlinear Discrete-Time Descriptor Systems in Economics

In this paper we study nonlinear discrete-time descriptor (or singular, implicit, general) systems. Such systems arise frequently in modelling certain classes of economic relationships, and some of the problems connected with them concern the existence and uniqueness of solutions. By means of a step-by-step procedure we give, under generic conditions, a local reduction mechanism yielding a possibly lower-dimensional system in standard state-space form.

Th. Fliegner, H. Nijmeijer, Ü. Kotta
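
In generic notation (ours, not necessarily the paper's), the object of study and one step of the reduction can be summarized as follows:

    \[
      F(x_{k+1}, x_k) = 0, \qquad x_k \in \mathbb{R}^n,
    \]
    where $F$ need not be solvable for $x_{k+1}$. If
    $\partial F / \partial x_{k+1}$ has constant (but deficient) rank, the
    coordinates can locally be split as $x = (y, z)$ so that
    \[
      y_{k+1} = g(y_k, z_k), \qquad 0 = h(y_k, z_k);
    \]
    solving the algebraic constraint $h = 0$ for $z$ (when the implicit
    function theorem permits) and substituting back yields a
    lower-dimensional system in standard state-space form, and the step can
    be repeated.
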
Quasi-Periodic and Strange, Chaotic Attractors in Hicks' Nonlinear Trade Cycle Model

Hicks' nonlinear trade cycle model is an unstable multiplier-accelerator model combined with an 'income ceiling' and an 'investment floor'. We show that the simplest, 2-D version of the model can have a quasi-periodic attractor. When consumption and/or investment is distributed over several time periods, higher-dimensional versions of the model are obtained. We present numerical evidence that the 3-D Hicks model can have strange, chaotic attractors.

Cars H. Hommes
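
A minimal sketch of the 2-D Hicks map: an unstable linear multiplier-accelerator whose growing oscillations are bounded by the income ceiling and the investment floor (the coefficient values below are illustrative, not those used in the chapter):

    def hicks_path(y1, y0, c=0.75, b=1.5, floor=-1.0, ceiling=12.0, n=200):
        # Y_t = min(c*Y_{t-1} + max(b*(Y_{t-1} - Y_{t-2}), floor) + A, ceiling)
        # with autonomous expenditure A = 1; for this b the linear part is
        # unstable, so the ceiling and floor generate persistent bounded cycles.
        path = [y0, y1]
        for _ in range(n):
            invest = max(b * (path[-1] - path[-2]), floor)
            path.append(min(c * path[-1] + invest + 1.0, ceiling))
        return path

    orbit = hicks_path(4.5, 4.0)     # start near the steady state Y* = 4
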
Monte Carlo Experimentation for Large Scale Forward-Looking Economic Models

In this paper, a Monte Carlo experimentation scheme is developed for large-scale rational expectations models, with special attention to the theoretical foundations of the underlying deterministic algorithm and to the a posteriori statistical validation of the experimentation. The base deterministic algorithm is of the Newton-Raphson type. The Monte Carlo experimentation uses a perfect foresight approximation and therefore requires a posteriori validation. Numerical exercises on a canonical growth model are proposed in order to demonstrate the adequacy of the methodology, by evaluating either its purely numerical bias or the goodness of its perfect foresight approximation.

Raouf Boucekkine
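
For a linear toy version the perfect-foresight solution reduces to one stacked linear solve; for a nonlinear model the same stacked system is solved by Newton-Raphson, and the Monte Carlo layer redraws the shock paths and re-solves. A sketch under these assumptions (the model below is invented for illustration):

    import numpy as np

    # Toy forward-looking model: x_t = a*x_{t-1} + b*x_{t+1} + e_t, t = 1..T,
    # with boundary values x_0 = x_{T+1} = 0; perfect foresight replaces the
    # expectation of x_{t+1} by its realized value, stacking all T equations.
    a, b, T = 0.5, 0.3, 40
    A = np.eye(T) - a * np.eye(T, k=-1) - b * np.eye(T, k=1)

    rng = np.random.default_rng(3)
    # Monte Carlo over shock paths: each draw is one perfect-foresight solve.
    paths = [np.linalg.solve(A, rng.normal(0.0, 1.0, T)) for _ in range(1000)]
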
Erratic Dynamics in a Restricted Tatonnement Process with Two and Three Goods

It is well known that adjustment processes like tatonnement can show erratic dynamic behavior: the path of prices generated by such a process generally shows big jumps and varies over a wide range, which is economically unacceptable. A discrete tatonnement process in simple two- and three-goods exchange economies is studied. It is shown that, in the examples studied, the region in which the prices vary can be restricted to a neighborhood of an equilibrium if the relative price adjustment is bounded by a maximal rate of increase or decrease. Within this restricted region, any type of erratic dynamics remains possible.

Claus Weddepohl
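
A minimal sketch of the restriction: the classic multiplicative tatonnement step is clipped so that no relative price can move by more than a rate r per period (the excess-demand function below is a hypothetical two-goods toy with good 2 as numeraire, not the chapter's economy):

    import numpy as np

    def restricted_tatonnement(p0, z, lam, r, n=500):
        # Raw update p * (1 + lam * z(p)), clipped to [p*(1-r), p*(1+r)].
        p = np.asarray(p0, float)
        path = [p.copy()]
        for _ in range(n):
            raw = p * (1.0 + lam * z(p))
            p = np.clip(raw, p * (1.0 - r), p * (1.0 + r))
            path.append(p.copy())
        return np.array(path)

    def z(p):
        # Toy excess demand with equilibrium at p = (1, 1), chosen so the
        # unrestricted process overshoots and oscillates.
        return np.array([1.0 / p[0] - 1.0, 0.0])

    path = restricted_tatonnement([2.0, 1.0], z, lam=2.0, r=0.05)

With these toy values the unrestricted map oscillates wildly, while the clipped process is confined to a small neighborhood of the equilibrium, mirroring the chapter's qualitative result.
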
Chaotic Dynamics in a Two-Dimensional Overlapping Generations Model: A Numerical Investigation

Grandmont (1985) showed that chaotic fluctuations can occur in a 1-dimensional overlapping generations (OLG) model when the traders' offer curve is highly nonlinear due to a very strong income effect. We present a numerical investigation of the global dynamics of the 2-dimensional OLG model introduced by Grandmont (1992). There, chaotic output fluctuations already arise even when the income effect is not very strong.

Cars H. Hommes, Sebastian J. Van Strien, Robin G. De Vilder
Nonlinearity and Forecasting Aspects of Periodically Integrated Autoregressions

This paper deals with the forecasting and nonlinearity aspects of linear periodic models for seasonally observed time series that contain a single unit root. This unit root imposes a nonlinear restriction on the model parameters. Multi-step-ahead forecasts differ from those obtained from nonperiodic models in that they can reflect the slowly changing seasonal patterns observed within the estimation sample.

Philip Hans Franses
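
For quarterly data the prototype is a periodically integrated AR(1), written here in generic notation (ours, not necessarily the paper's):

    \[
      y_t = \phi_{s(t)}\, y_{t-1} + \varepsilon_t,
      \qquad \phi_1 \phi_2 \phi_3 \phi_4 = 1,
    \]
    where $s(t)$ denotes the season of observation $t$. Each seasonal
    equation is linear, but the product restriction (the single unit root)
    is nonlinear in the parameters; and because the $\phi_s$ differ across
    seasons, multi-step-ahead forecasts propagate a seasonal pattern that
    changes slowly over time.
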
Classical and Modified Rescaled Range Analysis: Some Evidence

In this paper it is shown that the 'modified' rescaled range statistic suggested by Lo (1991) is, for practical purposes, an unnecessary complication: one can instead first correct the time series for its idiosyncratic short-term dependence and then simply apply the 'classical' rescaled range test developed by Hurst (1951). The two methods are illustrated by applying them to an index of the Amsterdam Stock Exchange, and their empirical power is compared by means of a Monte Carlo simulation.

Ben Jacobsen
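
A minimal sketch of the classical statistic, plus the pre-whitening idea (the AR(1) filter and all numbers are illustrative assumptions, not the paper's procedure):

    import numpy as np

    def rescaled_range(x):
        # Hurst's R/S: range of the cumulative deviations from the mean,
        # divided by the sample standard deviation.
        x = np.asarray(x, float)
        y = np.cumsum(x - x.mean())
        return (y.max() - y.min()) / x.std()

    rng = np.random.default_rng(5)
    x = rng.normal(size=1000)                # placeholder for a return series

    # 'Classical-after-correction' route: remove short-term dependence with a
    # fitted short-memory model first (a crude AR(1) residual here, for
    # illustration), then apply Hurst's statistic to the filtered series.
    phi = np.corrcoef(x[:-1], x[1:])[0, 1]   # crude AR(1) estimate
    resid = x[1:] - phi * x[:-1]
    print(rescaled_range(resid))
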
Backmatter
Metadata
Title: Predictability and Nonlinear Modelling in Natural Sciences and Economics
Edited by: J. Grasman, G. van Straten
Copyright year: 1994
Publisher: Springer Netherlands
Electronic ISBN: 978-94-011-0962-8
Print ISBN: 978-94-010-4416-5
DOI: https://doi.org/10.1007/978-94-011-0962-8