
Open Access | Published: 04 March 2020

Determining Confidence Intervals, and Convergence, for Parameters in Stochastic Evacuation Models

Author: Angus Grandison

Published in: Fire Technology | Issue 5/2020


Abstract

A central issue when using stochastic egress models is how many simulations are required to accurately represent the modelled scenario. Engineers are mostly interested in a representative Total Evacuation Time (TET). However, convergence of the TET alone may not ensure that the full range of evacuation dynamics has been adequately represented. The average total egress curve (AC) has been suggested as an improved measure. Unfortunately, defining a confidence interval (CI) for the AC is problematic. CIs can robustly quantify the precision of many statistics and have been used to define convergence in egress modelling and other research fields. This paper presents a novel application of bootstrapping, functional analysis measures (FAMs), and a bisection algorithm to derive three FAM-based CIs representing the precision of the AC. These CIs were tested using a theoretical model to demonstrate the consistency of the coverage probability, the actual percentage of CIs that contain the theoretical parameter, with the nominal 95% confidence level (NCL). For two of the FAM-based CIs, the coverage probability was between 94.2% and 95.6% for all tested sample sizes between 10 and 4000 simulations. The third FAM-based CI's coverage probability was always greater than the NCL, making it a conservative estimate, but this presented no problems in practice. A FAM-based CI may suggest whether there is more or less variability in an earlier phase of the evacuation. A convergence scheme based on statistical precision, i.e. CI widths, is proposed and verified. The method can be extended to other statistics.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Abbreviations

CI: Confidence interval
CL: Confidence level
TET: Total evacuation time
MT: Mean TET
SD: Standard deviation (of TETs)
CDF: Cumulative distribution function
BCa: Bias corrected and accelerated (bootstrap)
FAM: Functional analysis measure
ERD: Euclidean Relative Distance
EPC: Euclidean Projection Coefficient
SC: Secant Cosine

1 Introduction

A stochastic approach to evacuation simulation [1, 2] is employed in many models [3–8] to reflect uncertainty in human behaviour [9]. If an (experimental or simulated) evacuation trial is repeated using the same population and the same initial conditions, it is possible that the evacuation will progress differently and may result in differences in total evacuation time (TET) or other statistics of interest. In addition to that uncertainty, there is also uncertainty related to the initial conditions, where the distribution of agents (e.g. number of agents and starting locations) and their attributes (e.g. walking speeds and response times) may also cause differences between trials. These uncertainties can be simulated by randomly sampling suitable distribution functions for those attributes. A simple random approach is adopted in Monte Carlo egress simulation, although other sampling methods could be utilised [10].
Many simulations are needed to represent the range of potential behaviours which may be expected for a particular scenario. However, this is set against the potentially significant computational cost of running a stochastic egress model, which an end user would wish to minimise. Thus, a major issue facing stochastic egress model users is determining when enough simulations have been performed to give an accurate representation of the modelled scenario and the statistics of interest. In the evacuation modelling literature [11–20] that examines this issue there are broadly two approaches suggested. The first approach is based on running a fixed number of simulations. The second approach is based on some dynamic assessment of the behaviour of the statistic(s) of interest.
When the first approach is used, a wide range for the required number of simulations has been suggested. For high speed train evacuation analysis, Li et al. [11] suggested that 10 simulations would suffice provided certain conditions were met. For large passenger ship analysis, the International Maritime Organisation (IMO) recommends that a minimum of 500 simulations should be used to determine the 95th percentile TET [12]. They additionally stated that a lower number of simulations would be allowable provided a suitable convergence methodology was followed [13]. For aircraft evacuation analysis, Galea [14] noted that 1000 simulations could typically be utilised to determine the 95th percentile TET. Meacham et al. [15] utilised 2000 simulations to construct a cumulative distribution function (CDF) of the TETs for building scenarios. There is no consensus on the number of simulations used and there is generally little justification for the number of simulations proposed. The disadvantage of this approach is that it is unknown whether the number of simulations is sufficient to represent the range of behaviours that may exist. Furthermore, the number of simulations performed could be excessive for the intended purpose, which could limit the number of potential scenarios or alternative designs that could be explored. This approach does have the advantage of requiring little additional modeller expertise, but it is also limited as some models may be very sensitive to the chosen number of simulations [16].
In the second approach, additional egress simulations are performed until some indicators of convergence have been satisfied. With this approach, the number of simulations performed is unknown a priori. The eventual number of simulations performed is based on the behaviour of the statistics of interest and should ensure that sufficient, and not excessive, computation is performed. Compared to running an arbitrary number of simulations, this requires some expertise in determining suitable convergence tolerances for the scenario being run. Two types of indicators have been proposed: the first type is based on confidence intervals (CIs) [21, 22] and the second type is based on examining successive differences in the statistics of interest with additional simulation.
CI based techniques have been successfully used in egress modelling [13, 20], and various other fields of study [23–28], to define convergence. For large passenger ship analysis, Grandison et al. [13] calculated the CI of the 95th percentile TET, using a binomial distribution, and incrementally tested, with additional simulations, the limits of the CI against a pass/fail criterion [12]. This technique was found to accurately determine whether a ship design had passed or failed, whilst minimising the computational effort required. Furthermore, they noted that this technique could be adapted to determine convergence based on a required precision and that the methodology could be extended to other suitable statistics of interest and types of egress analysis. A disadvantage of this method is that it can be difficult to determine CIs for some statistics. However, an advantage of this scheme is that the convergence indicators are directly linked to the precision of the statistics of interest.
A successive difference technique has been proposed by Ronchi et al. [17] and Ronchi and Nilsson [18]. Ronchi et al. [17] define convergence (i.e. sufficient repeat simulations have been performed) as occurring when their five statistics of interest (the MT, the SD, and three functional analysis measures (FAMs) of the average total egress curve (AC)) are all changing by less than their specified tolerances per simulation over a set number of simulations. It was suggested that these tolerances could be based on the uncertainty of the available safe egress time from a fire modelling analysis. Additionally, Lovreglio et al. [19] proposed that those tolerances could be based on values determined from a set of repeated evacuation experiments when used to validate a stochastic egress model. An advantage of the successive difference approach is the simplicity of calculating the convergence indicators. However, Ronchi et al. noted that 'The first limitation of the method is that it uses the concepts of convergence in mean and the central limit theorem rather than a statistical estimation of the expected values…'. This means that their convergence indicators do not directly relate to standard errors or CIs of the statistics of interest.
From the previous examples, the parameter of most interest is a representative TET. However, an analysis based solely on the TET may miss important details of the evacuation [16–20, 29], e.g. a set of egress trials could have similar representative TETs, but the individual evacuation dynamics could be very different. A more detailed analysis of the evacuation can be performed by additionally examining the exit time of certain percentages of the agents [18, 20]. However, an even greater level of detail can be achieved by examining the entire agent egress curve (an ordered vector of the individual agents' exit times) via FAMs [16, 17, 19, 29]. Furthermore, egress curves are a typical output from many evacuation models. Ronchi et al. used the convergence behaviour of the AC to represent behavioural uncertainty, i.e. the uncertainty associated with the stochastic nature of human behaviour.
CIs have been demonstrated to accurately and efficiently determine convergence for statistics of interest in egress modelling and other fields of study. Ronchi et al.’s use of the AC was an important innovation in studying egress variability. It is therefore desirable to extend their original method by applying CIs to the AC, thereby giving a more standard statistical interpretation of behavioural uncertainty. However, it is not intuitively obvious how to determine the CIs for this curve and no solution has been previously published. The key contribution of this paper is the development of FAM-based CIs for the AC (Sect. 3) with an accurate overall confidence level. This was achieved using the novel application of bootstrapping (see Sect. 2.1), FAMs (see Sect. 2.2), and a bisection algorithm (see Sect. 3.1). This addresses the first limitation of Ronchi et al.’s method and allows the benefits of the CI approach to convergence to be combined with the AC within a common framework. It is also advantageous to use the FAMs, to form the CIs, as they are somewhat familiar to the fire and evacuation research communities. FAMs were introduced to the fire safety field in 1999 [30] and the first reported use for evacuation was in 2012 [29]. A further contribution of this paper is the development of a convergence scheme. The convergence scheme is based on the same variables used by Ronchi et al. [17], i.e. AC, MT, and SD, but utilises CIs. The CI for the MT is based on the standard t-distribution expression (see Sect. 2) and the CI for the SD is obtained using a standard application of bootstrapping (see Sect. 2.1.1). This convergence scheme will efficiently determine when all the statistics of interest have reached their required statistical precisions. Utilising convergence of the AC helps to ensure that variability, between simulations, which is not expressed by examining TETs alone, is also captured.
The CIs and the convergence scheme are tested and verified using a simple theoretical model. Apart from demonstrating the correctness of the methods, that work also feeds into a discussion on how to determine appropriate tolerances for the FAM-based CIs of AC.

2 Statistical and Mathematical Background

This section provides a brief background on the necessary terminology and methods used in this paper. Initially, some standard statistical terminology will be described; this initial section can be skipped over by readers with a grounding in statistics. This is followed by an overview of “bootstrapping” with some worked examples (Sect. 2.1). Finally, a summary of functional analysis, used to compare egress curves, is given (Sect. 2.2).
A sample statistic is the value of an estimator based on a sample from the population of results. A population parameter can be considered as the value of an estimator based on the entire population of results. Examples of estimators include the mean TET (MT) (Eq. 1), the 95th percentile TET, and the standard deviation (SD) of TET. Ideally, one would want to know the parameter but practically it is (generally) only possible to obtain the statistic of a sample of results and a range of potential values for the parameter. The sample statistic would display variability depending on the nature of the sample obtained although the sample statistic tends to the value of the parameter as the size of the sample increases (Law of Large Numbers [22]).
$$MT = \frac{{\mathop \sum \nolimits_{i = 1}^{n} TET_{i} }}{n}$$
(1)
The “expected value” of a sample statistic is the mean value of a series of infinitely repeated independent instances of that sample statistic with the same sample size [22]. This will equate to the corresponding population parameter if the estimator is unbiased. This is the case for many estimators although the sample SD is slightly biased. Whilst the standard variance bias correction has been applied (Eq. 2), i.e. dividing by (n − 1) rather than n, there is still some residual bias. It is reasonable to neglect this bias given the large scale of the variability associated with the SD compared to the small size of the bias and the complexity of calculating that bias. The small bias that does exist also diminishes as the sample size increases.
$$SD = \sqrt {\frac{{\mathop \sum \nolimits_{i = 1}^{n} \left( {TET_{i} - MT} \right)^{2} }}{{\left( {n - 1} \right)}}}$$
(2)
Although it is generally impossible to know the population parameter exactly, it is possible to estimate a range of possible values for the population parameter based on the sample of results. This is a CI, which will differ between samples but will frequently include the parameter of interest. Ideally, this frequency is determined by the confidence level (CL). However, the actual frequency, known as the coverage probability, may differ in practice. A set of CIs for 50 different samples, with a 95% CL, is depicted in Fig. 1, where the vast majority, 47 (94%), of the sample CIs contain MT(∞) but 3 (6%) do not. The bracketed superscript indicates the sample size used to calculate the statistic; when the superscript is ∞ the statistic is equivalent to the population parameter. In general, only a single sample of results will be generated, although that single sample could contain hundreds of simulations, so only one CI will be generated per statistic of interest.
The calculation of the CI for certain parameters is straightforward due to the availability of standard expressions. For the MT, the CI with a 100(1 − α)% CL can be calculated using Eq. 3 where T−1(p, df) is the inverse t-distribution for percentile p with df degrees of freedom [22]. This expression assumes that the population SD of the TETs is unknown, which is generally the case in egress models/experiments, and is estimated with the sample SD. If the original data is normally distributed, then the expression is valid for n > 1. For non-normally distributed data, the expression is generally valid due to the central limit theorem for a larger n but is dependent on the nature of the original data distribution. Some sources advocate a minimum n of 30 although for skewed data a minimum n of 40 [31] has been suggested. It is further assumed that the data is independent and identically distributed.
$$100\left( {1 - \alpha } \right)\% CI = \left[ {\left( {MT^{\left( n \right)} + \text{T}^{ - 1} \left( {\frac{\alpha }{2},n - 1} \right) \times \frac{{SD^{\left( n \right)} }}{\sqrt n }} \right),\left( {MT^{\left( n \right)} + \text{T}^{ - 1} \left( {1 - \frac{\alpha }{2},n - 1} \right) \times \frac{{SD^{\left( n \right)} }}{\sqrt n }} \right)} \right]$$
(3)
In Eq. 3, the CI is quoted as a lower limit and an upper limit of the population parameter value, e.g. 150.0 with a 95% CI [135.0, 165.0]. The CI could also be quoted as the difference between the limits and the sample statistic, e.g. 150.0 with a 95% CI [− 15.0, + 15.0]. Those differences could also be quoted as percentages, e.g. 150.0 with a 95% CI [− 10%, + 10%]. If the CI is symmetric then it can be expressed in shortened form with a plus-minus limit, e.g. 150 ± 10% (95% CI).
A CI is a trade-off between its width, which should ideally be as narrow as possible, and its CL, which should ideally be as high as possible. Ideally, one would want a 100% confidence, but this leads to an infinitely wide CI which is not very useful. For a statistic calculated from a fixed number of data points, the CI width will increase with increasing CL. Having a very high CL will result in a very wide CI whilst a “low” CL with a correspondingly narrow CI is also unhelpful as there is little confidence that the CI contains the parameter of interest. A 95% CL is typically chosen as it is very likely to contain the population parameter whilst maintaining a relatively narrow width. When calculating the CIMT (Eq. 3) with varying CLs (see Table 1), a 99% CL gains an extra 4% in confidence but requires a 33% increase in CI width compared to a 95% CL. A higher 99.9% CL gains an extra 4.9% in confidence but requires a 71% increase in CI width compared to a 95% CL. Using a lower 68% CL (equivalent to 2 × standard error widths) reduces the CI width by 49% but there is a reduction in confidence of 27% compared to a 95% CL. From Eq. 3, the width (WMT) of the CIMT with a fixed CL will also tend to narrow with increasing n (see Eq. 4).
Table 1
Calculation of the Relative Widths of CIs for the Mean Value (Eq. 3) with Increasing CL for a Sample Size of 100

CL                                        68%   95%    99%    99.9%   100%
CI width relative to 2 × standard error   1     1.98   2.63   3.39    ∞
$$W_{\text{MT}} \left( n \right) \propto \frac{1}{\sqrt n }$$
(4)
In order to further discuss CIs, it is necessary to describe sampling distributions. A sampling distribution is the probability distribution of a statistic for a fixed sample size for all the possible samples that could be selected [22]. For example, a set of five TETs are obtained from repeated trials and the statistic of interest is the mean of those TETs. If this set of five trials is repeated, a further set of TETs can be obtained, and another mean of those values is calculated. Each time the process is repeated a different estimate for the mean value will be obtained, see Table 2.
Table 2
An Example Set of Samples Taken from a Distribution. Each Sample Consists of Five Values

Sample    Sample TETs                Mean TET
1         133, 127, 148, 140, 177    145
2         161, 124, 159, 145, 155    148.8
3         173, 147, 167, 134, 178    159.8
⋮         ⋮                          ⋮
n → ∞     129, 139, 172, 113, 116    133.8
If this could be repeated a very large number of times, then it would be possible to produce a distribution of the sample mean values, i.e. the sampling distribution. Assuming the TETs are normally distributed with an unknown population SD then it has been shown theoretically that the sampling distribution for the example can be represented by a t-distribution [22]. The mean of the sampling distribution is equal to the expected value and the population mean of the original TET distribution. From the central limit theorem, the standard deviation of the sampling distribution, also known as the standard error, is the standard deviation of the original TET distribution divided by √n.
From this sampling distribution, the interval that contains the innermost 100(1 − α)% of sample means can be described by Eq. 5 (see Fig. 2).
$$P\left( {\left\{ {MT^{\left( \infty \right)} + \text{T}^{ - 1} \left( {\frac{\alpha }{2},n - 1} \right) \times \frac{SD}{\sqrt n }} \right\} < MT^{\left( n \right)} < \left\{ {MT^{\left( \infty \right)} + \text{T}^{ - 1} \left( {1 - \frac{\alpha }{2},n - 1} \right) \times \frac{SD}{\sqrt n }} \right\}} \right) = 1 - \alpha$$
(5)
Equation 5 is not particularly useful as in general the value of the MT(n) is known and the MT(∞) is unknown. However, Eq. 5 can be algebraically recast [22] to make the MT(∞) the subject of the inequality leading to the probability that the MT(∞) lies within a CI with a CL of 100(1 − α)% (Eq. 6). This is a probabilistic statement of the CI given in Eq. 3.
$$P\left( {\left\{ {MT^{\left( n \right)} +\text{T}^{ - 1} \left( {\frac{\alpha }{2},n - 1} \right) \times \frac{SD}{\sqrt n }} \right\} < MT^{\left( \infty \right)} < \left\{ {MT^{\left( n \right)} + \text{T}^{ - 1} \left( {1 - \frac{\alpha }{2},n - 1} \right) \times \frac{SD}{\sqrt n }} \right\}} \right) = 1 - \alpha$$
(6)
For some statistics, such as the mean value (Eq. 3) and the 95th percentile value [13], a theoretical solution exists for determining the CI. However, for many statistics a theoretical solution may not exist or is difficult to derive. For those cases, it may be possible to use bootstrapping to approximate the sampling distribution and hence derive the CI.

2.1 Bootstrapping Concepts

Bootstrapping [32–36] is a widely used computational statistical technique. The premise of bootstrapping is that inference about a population from sample data can be modelled by resampling the sample data and performing inference about a sample from resampled data. Generally, the population distribution is unknown and only the sample of data is known. Bootstrapping uses this sample of data as an approximation to the true population distribution. The bootstrapping technique is applicable to many estimators although there are known difficulties associated with the modal value [37]. Care is also required when applying the technique to heavy-tailed distributions, particularly those with infinite variance [38, 39]; this is not anticipated to be a major issue in the egress modelling context.
A simple worked example, to determine the CI of the mean, follows. In practice, bootstrapping is not generally used to determine the CI of a mean value due to the wide applicability of Eq. 3; the primary intent of this example is to demonstrate the methodology. A set of n values is sampled from some unknown distribution, leading to the values in the "Original" row of Table 3. In this example, n is set to five. This data could be expensive to obtain, e.g. these could be TETs from a set of egress trials. The mean of the original sample is 139.5. The original sample is now resampled to create a bootstrap sample, which is a very inexpensive process. Each value in the original sample has the same, 1/n, chance of being selected. Values are randomly copied from the original sample n times to create a bootstrap sample (see Table 3). This implies that a particular value from the original sample may appear once, more than once, or not at all in a bootstrap sample; furthermore, only values from the original sample can appear in the bootstrap sample. For example, in bootstrap 1, 99.7 appears twice, 188.5 does not appear at all, and the other values appear once each. When n values have been selected, the mean value of the bootstrap sample is calculated. This process is repeated B times, where B ≫ n, to create a bootstrap sampling distribution of the mean (cf. Table 2). In this example B is set to 19, a far lower number of bootstrap samples than would be used in practice, chosen here for demonstration purposes.
Table 3
A Worked Example of Bootstrap Samples Based on an Original Sample of Five TETs

Sample        Sample TETs                        Mean TET
Original      134.0, 188.5, 149.5, 99.7, 126.1   139.5
Bootstrap 1   126.1, 149.5, 99.7, 99.7, 134.0    121.8
Bootstrap 2   188.5, 188.5, 149.5, 99.7, 99.7    145.2
Bootstrap 6   126.1, 188.5, 188.5, 188.5, 134.0  165.1
Bootstrap 10  126.1, 188.5, 188.5, 188.5, 188.5  176.0
Bootstrap 15  126.1, 134.0, 99.7, 126.1, 99.7    117.1
Bootstrap 19  149.5, 149.5, 134.0, 126.1, 126.1  137.0

Only selected bootstraps are listed for compactness.
The CI, with a 90% CL (the maximum CL that can be represented with 19 bootstraps), is obtained by taking the percentile values that bound the innermost 90% of the sampling distribution which are the 5th and 95th percentile values. The bth bootstrap index, of the vector of ascending ordered bootstrap means, representing the (100p)th percentile is given by Eq. 7.
$$b = \left( {B + 1} \right)p$$
(7)
This equates to the 1st and 19th values of the ordered bootstrap sampling distribution of the mean (see Table 4). The bootstrap 90% CI of the mean is [117.1, 176.0]. Using the conventional calculation (Eq. 3), the 90% CI is [108.3, 170.8]. The bootstrap CI is not particularly close to the conventional one, although this is not unreasonable given the limited sample size and small number of bootstraps. The accuracy of the method improves significantly as n and B increase, and as more sophisticated methods are used to select the required percentiles from the bootstrap sampling distribution.
Table 4
The Lower and Upper Ends of the Ordered Bootstrap Sampling Distribution of the Mean for the Worked Example

Percentile              0     5      10     90     95     100
Ordered index (Eq. 7)   n/a   1      2      18     19     n/a
Original index          n/a   15     1      6      10     n/a
Mean value              −∞    117.1  121.8  165.1  176.0  ∞
The above worked example was repeated using a sample size of 100 values and 1999 bootstraps. In this case the mean of the sample was 156.5. The bootstrap 90% CI, [151.1, 161.5], gives a good approximation to the conventionally determined 90% CI (Eq. 3), [151.4, 161.6], in this instance. The bootstrap sampling distribution of the mean for this case is presented as a histogram in Fig. 3. The bootstrap mean sampling distribution is similar in shape to a normal distribution and approximates the true sampling distribution.
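To make the resampling procedure concrete, the following minimal Python sketch reproduces the percentile bootstrap of the worked example; it is an illustration rather than code from the paper, and the seed and B = 19 are for demonstration only.

```python
# A minimal sketch of the percentile bootstrap CI of the mean, mirroring the
# worked example above; the seed and B = 19 are for demonstration only.
import numpy as np

rng = np.random.default_rng(42)

original = np.array([134.0, 188.5, 149.5, 99.7, 126.1])  # original sample (Table 3)
n, B = len(original), 19

# Resample n values with replacement, B times; record each bootstrap mean
boot_means = np.sort([rng.choice(original, size=n, replace=True).mean()
                      for _ in range(B)])

# Eq. 7: 1-based index b = (B + 1)p for the (100p)th percentile
b_lo = round((B + 1) * 0.05)    # -> 1st ordered value (5th percentile)
b_hi = round((B + 1) * 0.95)    # -> 19th ordered value (95th percentile)
print(f"Bootstrap 90% CI of the mean: "
      f"[{boot_means[b_lo - 1]:.1f}, {boot_means[b_hi - 1]:.1f}]")
```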
The simplest approach for selecting a centred CI with a 100(1 − α)% confidence level is to assume it is bounded by the 100(α/2)th and 100(1 − α/2)th percentile bootstrap samples. This is the approach used for the worked example above. However, this simple percentile approach has two notable problems. Firstly, it tends to give narrower CIs than expected when "small" sample sizes are used. Secondly, it does not account for skew and bias that may exist in the samples.
For the first problem, Hesterberg proposed a correction based on the normal distribution and the t-distribution [40]. This is based on correcting the percentile limits of the bootstrap sampling distribution of the mean. The “corrected” (100\(\acute{p}\))th percentile based on the original (100p)th percentile of sample size n is given by Eq. 8, where Φ(z) is the CDF of the standard normal distribution function. This correction always tends to make the CI more conservative but could result in an over-correction or insufficient correction of the required percentile dependent on the actual, but unknown, form of the sampling distribution. In the event of an over-correction, the CI will be wider than it should be but represents a more conservative estimate. If the CI is insufficiently corrected, then at least some correction is applied compared to no correction thus bringing the coverage probability of the CI closer to the nominal confidence level.
$$\acute{p} \left({p, n} \right) = {\Phi}\left({\sqrt {\frac{n}{n - 1}} {\text{T}}^{- 1} \left({p,n - 1} \right)} \right)$$
(8)
Table 5 illustrates the correction (Eq. 8) applied to the 5th percentile for various sample sizes. The effect of the correction is significant for small n but diminishes as n increases.
Table 5
The Value of the Corrected 5th Percentile (Eq. 8) Based on Sample Size n

n       10    20    30    40    100   1000
100ṕ    2.6   3.8   4.2   4.4   4.8   5.0
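The correction of Eq. 8 is straightforward to compute. The following sketch, assuming SciPy's norm and t distributions are available, approximately reproduces the Table 5 values (small differences may arise from rounding conventions).

```python
# A sketch of the small-sample percentile correction (Eq. 8); SciPy's norm
# and t distributions are assumed.
import numpy as np
from scipy.stats import norm, t

def corrected_percentile(p: float, n: int) -> float:
    """Eq. 8: corrected percentile via the standard normal CDF (Phi) and
    the inverse t-distribution with n - 1 degrees of freedom."""
    return norm.cdf(np.sqrt(n / (n - 1)) * t.ppf(p, n - 1))

for n in (10, 20, 30, 40, 100, 1000):
    print(f"n = {n:5d}: corrected 5th percentile = "
          f"{100 * corrected_percentile(0.05, n):.1f}")
```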
For the second problem associated with possible skew and bias in the samples, there are a number of advanced bootstrapping methods for determining appropriate limits for the CI, including the Bias Corrected and accelerated (BCa) method [32, 33], ABC, studentized, and double bootstrap [35]. The BCa bootstrap was selected due to its wide range of applicability [41], its relatively straightforward implementation, and its comparatively low computational requirements.
The BCa corrected 100(α/2)th percentile bootstrap index bl and 100(1 − α/2)th percentile bootstrap index bu, incorporating the small sample size correction (Eq. 8), are given by Eqs. 9 and 10 respectively, where \(\lfloor{x}\rfloor\) is the integer floor and \(\lceil{x}\rceil\) is the integer ceiling of x. Equation 11 is the bias correction factor where Φ−1 is the inverse CDF of the normal distribution, θ is the calculated statistic, and \(\theta_{j}^{*}\) is the jth bootstrap of the distribution of θ*. Equation 13 is the acceleration factor derived from the original sample data. The CIθ is [\(\theta_{{b_{l} }}^{*,o}\), \(\theta_{{b_{u} }}^{*,o}\)] where \(\theta_{b}^{*,o}\) is the bth bootstrap of the ordered distribution of θ*.
$$b_{\text{l}} = \left \lfloor{\left({B + 1} \right)\acute{p}\left({{\Phi}\left({z_{0} + \frac{{z_{0} +\Phi^{- 1}\left({\frac{\alpha}{2}} \right)}}{{1 - acc \times \left({z_{0} +\Phi^{- 1}\left({\frac{\alpha}{2}} \right)} \right)}}} \right),n} \right)}\right \rfloor$$
(9)
$$b_{\text{u}} = \left \lceil{\left({B + 1} \right)\acute{p}\left({{\Phi}\left({z_{0} + \frac{{z_{0} +\Phi^{- 1}\left(1-{\frac{\alpha}{2}} \right)}}{{1 - acc \times \left({z_{0} +\Phi^{- 1}\left(1-{\frac{\alpha}{2}} \right)} \right)}}} \right),n} \right)}\right \rceil$$
(10)
$$z_{0} = \Phi^{ - 1} \left( {\mathop \sum \limits_{j = 1}^{B} f\left( {\theta_{j}^{*} ,\theta } \right)/B} \right)$$
(11)
$$f\left( {\theta_{j}^{*} ,\theta } \right) = \left\{ {\begin{array}{*{20}c} {1 \,if\, \theta_{j}^{*} < \theta } \\ {\frac{1}{2}\, if\, \theta_{j}^{*} = \theta } \\ { 0 \,if\, \theta_{j}^{*} > \theta } \\ \end{array} } \right.$$
(12)
$$acc = \frac{{\mathop \sum \nolimits_{i = 1}^{n} \left( {\bar{\theta }_{\left( \cdot \right)} - \theta_{ - i} } \right)^{3} }}{{6\left[ {\mathop \sum \nolimits_{i = 1}^{n} \left( {\bar{\theta }_{\left( \cdot \right)} - \theta_{ - i} } \right)^{2} } \right]^{1.5} }}$$
(13)
\(\theta_{ - i}\) is \(\theta\) calculated using the original n simulations performed barring the ith simulation.
$$\bar{\theta }_{\left( \cdot \right)} = \frac{{\mathop \sum \nolimits_{i = 1}^{n} \theta_{ - i} }}{n}$$
(14)
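For readers wishing to experiment, the BCa calculation can be condensed into a compact sketch. This is an illustrative implementation of Eqs. 8–14, not the author's code; the function names and the choice of NumPy/SciPy are this sketch's own, and 'statistic' may be any scalar-valued estimator.

```python
# An illustrative sketch of the BCa interval (Eqs. 9-14), with the Eq. 8
# small-sample correction folded in.
import numpy as np
from scipy.stats import norm, t

def p_corrected(p, n):
    """Eq. 8 small-sample percentile correction."""
    return norm.cdf(np.sqrt(n / (n - 1)) * t.ppf(p, n - 1))

def bca_ci(data, statistic, alpha=0.05, B=2000, rng=None):
    rng = rng or np.random.default_rng()
    data = np.asarray(data)
    n = len(data)
    theta = statistic(data)

    # Ordered bootstrap sampling distribution of the statistic
    boots = np.sort([statistic(rng.choice(data, size=n, replace=True))
                     for _ in range(B)])

    # Eqs. 11-12: bias correction z0 (ties weighted by 1/2)
    frac = (np.sum(boots < theta) + 0.5 * np.sum(boots == theta)) / B
    z0 = norm.ppf(frac)

    # Eqs. 13-14: acceleration factor from the jackknife (leave-one-out)
    jack = np.array([statistic(np.delete(data, i)) for i in range(n)])
    d = jack.mean() - jack
    acc = np.sum(d ** 3) / (6.0 * np.sum(d ** 2) ** 1.5)

    def b_index(a, lower):
        """Eqs. 9-10: corrected 1-based index into the ordered bootstraps."""
        z = norm.ppf(a)
        p = norm.cdf(z0 + (z0 + z) / (1.0 - acc * (z0 + z)))
        b = (B + 1) * p_corrected(p, n)
        b = int(np.floor(b)) if lower else int(np.ceil(b))
        return min(max(b, 1), B)          # clamp to the available bootstraps

    return (boots[b_index(alpha / 2, lower=True) - 1],
            boots[b_index(1 - alpha / 2, lower=False) - 1])
```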

2.1.1 Bootstrapping the CI for the SD

If the TETs are normally distributed, the CISD can be determined using a Chi squared distribution with n − 1 degrees of freedom [22]. However, in general the TETs will not be normally distributed and those theoretically derived CIs could be inappropriate. The CISD is therefore determined using bootstrapping.
For n simulations there are n TETs that are used to compute SD(n). A single bootstrap SD* can be derived by randomly resampling, with replacement, those n TETs n times and calculating the SD of that bootstrap sample. This process is repeated B times, where B ≫ n, and leads to a bootstrap sampling distribution of the SD*. The CI of this ordered bootstrap sampling distribution of SD*,o approximates the CI for the true sampling distribution of SD(n). The BCa method is used to obtain the upper and lower limits of the CI.
The specific equations for calculating the BCa limits for the SD are obtained by substituting the SD into Eqs. 9–14, leading to Eqs. 15–21.
$$z_{{0,{\text{SD}}}} = \Phi^{ - 1} \left( {\mathop \sum \limits_{j = 1}^{B} f\left( {SD_{j}^{*} ,SD^{\left( n \right)} } \right)/B} \right)$$
(15)
$$acc_{\text{SD}} = \frac{{\mathop \sum \nolimits_{i = 1}^{n} \left( {\overline{SD}_{\left( \cdot \right)} - SD_{ - i} } \right)^{3} }}{{6\left[ {\mathop \sum \nolimits_{i = 1}^{n} \left( {\overline{SD}_{\left( \cdot \right)} - SD_{ - i} } \right)^{2} } \right]^{1.5} }}$$
(16)
$$SD_{ - i} = \sqrt {\frac{{\mathop \sum \nolimits_{k = 1}^{i - 1} \left( {TET_{k} - MT_{ - i} } \right)^{2} + \mathop \sum \nolimits_{k = i + 1}^{n} \left( {TET_{k} - MT_{ - i} } \right)^{2} }}{n - 2}}$$
(17)
$$MT_{ - i} = \frac{{\mathop \sum \nolimits_{l = 1}^{i - 1} TET_{l} + \mathop \sum \nolimits_{l = i + 1}^{n} TET_{l} }}{n - 1}$$
(18)
$$\overline{SD}_{\left( \cdot \right)} = \frac{{\mathop \sum \nolimits_{i = 1}^{n} SD_{ - i} }}{n}$$
(19)
$$b_{{{\text{l}},{\text{SD}}}} = \left \lfloor {\left({B + 1} \right)\acute{p} \left({{\Phi}\left({z_{{0,{\text{SD}}}} + \frac{{z_{{0,{\text{SD}}}} +\Phi^{- 1} \left({\frac{\alpha}{2}} \right)}}{{1 - acc_{\text{SD}} \times \left({z_{0,SD} +\Phi^{- 1} \left({\frac{\alpha}{2}} \right)} \right)}}} \right),n} \right)}\right \rfloor$$
(20)
$$b_{{{\text{u}},{\text{SD}}}} = \left \lceil {\left({B + 1} \right)\acute{p} \left({{\Phi}\left({z_{{0,{\text{SD}}}} + \frac{{z_{{0,{\text{SD}}}} +\Phi^{- 1} \left(1-{\frac{\alpha}{2}} \right)}}{{1 - acc_{\text{SD}} \times \left({z_{0,SD} +\Phi^{- 1} \left(1-{\frac{\alpha}{2}} \right)} \right)}}} \right),n} \right)}\right \rceil$$
(21)
The CISD is [\(SD_{{b_{{{\text{l}},{\text{SD}}}} }}^{*,o}\), \(SD_{{b_{{{\text{u}},{\text{SD}}}} }}^{*,o}\)] where the bootstrap index bl,SD is given by Eq. 20 and bu,SD is given by Eq. 21.
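As a usage sketch, the CISD is simply the BCa interval from the Sect. 2.1 sketch applied to the bias-corrected sample SD; the TET values below are invented for illustration.

```python
# Usage sketch: CI_SD is bca_ci applied to the bias-corrected (ddof = 1)
# sample SD; these TET values are invented for illustration.
import numpy as np

tets = np.array([134.0, 188.5, 149.5, 99.7, 126.1, 161.2, 145.3, 155.8])
lo, hi = bca_ci(tets, lambda x: np.std(x, ddof=1), alpha=0.05, B=2000)
print(f"95% CI for the SD: [{lo:.1f}, {hi:.1f}]")
```

Note that the leave-one-out statistics inside bca_ci then divide by n − 2, consistent with Eq. 17.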

2.2 Functional Analysis

The final area of mathematics used in this paper to define CIs for the AC is functional analysis, a branch of mathematics used to compare vectors or time series data. An egress curve is a vector of agent egress times. Unlike a scalar quantity such as the TET, where the difference between two TETs is merely a difference in magnitude, it is less obvious how to express the difference between vectors. The difference between the curves can be expressed using a set of FAMs.
The FAMs compare one curve (a) against another curve (b), using different criteria, and return a scalar value that meaningfully represents the similarity (or difference) between the two curves. The egress curve a = {a1, …, ak, …, am}, where ak is the exit time of the kth agent to exit and am is the exit time of the last (mth) agent to exit; egress curve b is similarly defined. The three FAMs used here are the Euclidean Relative Distance (ERD), the Euclidean Projection Coefficient (EPC), and the Secant Cosine (SC). These FAMs have been previously described in fire [30] and evacuation contexts [16, 17, 19, 29], so only concise details, suitable for egress curves, are given here.
The ERD (Eq. 22) represents the "relative distance" between two curves and will be zero when the curves are identical and greater than zero if the curves are different. The ERD varies between zero and infinity, with values nearer to zero implying a closer "distance" between the curves. In the appendix it is shown that the ERD of two egress curves can be compared to the normalised absolute difference (NAD), i.e. |TETa − TETb| ÷ TETb, between the two TETs of those egress curves. If the ERD is greater than the NAD, then there is more relative difference in the egress curves than in the TETs; this suggests there may be more variability in an earlier phase of the evacuation. If the ERD is less than the NAD, then there is less relative difference in the curves than in the TETs; this suggests that there may be more variability at the end of the evacuation.
$$ERD\left( {{\mathbf{\text{a}}},{\mathbf{\text{b}}}} \right) = \sqrt {\frac{{\mathop \sum \nolimits_{k = 1}^{m} \left( {a_{k} - b_{k} } \right)^{2} }}{{\mathop \sum \nolimits_{k = 1}^{m} b_{k}^{2} }}}$$
(22)
The EPC (Eq. 23) is the value needed to project one curve onto another. It equals one when the curves can be exactly "projected" onto each other, which will be the case when the curves are identical, and varies about one otherwise. EPC varies between zero and infinity, assuming ak ≥ 0 and bk ≥ 0 \(\forall k\), with values nearer to one implying a closer "projection" similarity between the curves.
$$EPC\left( {\mathbf{{\text{a}}},\mathbf{{\text{b}}}} \right) = \frac{{\mathop \sum \nolimits_{k = 1}^{m} a_{k} b_{k} }}{{\mathop \sum \nolimits_{k = 1}^{m} b_{k}^{2} }}$$
(23)
The SC (Eq. 24, where Δxk−s = xk − xk−s and s is the step size, s ≥ 1) compares the "shape" of two curves. The step size s is used to vary the smoothing of the curve: a step size of one implies no smoothing, and the smoothing effect increases as s increases. SC equals one when the curves have the same "shape", which will be the case when the curves are identical. SC varies between zero and one when the egress curves have different "shapes", with values nearer to one implying a closer similarity in shape. Egress curves monotonically increase, thus Δak−s ≥ 0 and Δbk−s ≥ 0 \(\forall k\), implying that SC(a, b) ≥ 0. The Cauchy–Schwarz inequality [42] (Eq. 25) implies that SC(a, b) ≤ 1.
$$SC\left( {{\mathbf{\text{a}}},\mathbf{{\text{b}}}} \right) = \frac{{\mathop \sum \nolimits_{k = 1 + s}^{m} \Delta a_{k - s} \Delta b_{k - s} }}{{\sqrt {\mathop \sum \nolimits_{k = 1 + s}^{m} \left( {\Delta a_{k - s} } \right)^{2} \mathop \sum \nolimits_{k = 1 + s}^{m} \left( {\Delta b_{k - s} } \right)^{2} } }}$$
(24)
$$\left( {\mathop \sum \limits_{k = 1 + s}^{m} \Delta a_{k - s} \Delta b_{k - s} } \right)^{2} \le \mathop \sum \limits_{k = 1 + s}^{m} \left( {\Delta a_{k - s} } \right)^{2} \mathop \sum \limits_{k = 1 + s}^{m} \left( {\Delta b_{k - s} } \right)^{2}$$
(25)
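Expressed in code, the three measures are direct translations of Eqs. 22–24. The following Python sketch (an illustration; the function names are this sketch's own) assumes two equal-length egress curves.

```python
# Sketches of the three FAMs (Eqs. 22-24) for equal-length egress curves.
import numpy as np

def erd(a, b):
    """Euclidean Relative Distance (Eq. 22): 0 when the curves are identical."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.sum((a - b) ** 2) / np.sum(b ** 2))

def epc(a, b):
    """Euclidean Projection Coefficient (Eq. 23): 1 when a projects exactly onto b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sum(a * b) / np.sum(b ** 2)

def sc(a, b, s=1):
    """Secant Cosine (Eq. 24): compares curve 'shape'; s >= 1 is the step size."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    da, db = a[s:] - a[:-s], b[s:] - b[:-s]   # differences x_k - x_{k-s}
    return np.sum(da * db) / np.sqrt(np.sum(da ** 2) * np.sum(db ** 2))
```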

3 Development of FAM-Based CIs for the AC

The average egress curve AC(n) = {ac1, …, ack, …, acm} is calculated by averaging all n generated egress curves ci (Eq. 26 shows the calculation for the kth component of AC(n)) from the simulations performed. A bootstrap average curve AC* is derived by randomly resampling, with replacement, those n generated curves n times, and then averaging. This process is repeated B times and leads to a bootstrap sampling distribution of the average egress curve. Unfortunately, it is difficult to directly assign a CI to a sampling distribution of curves. However, it is possible to create proxies that represent the precision of the AC indirectly. This can be achieved by transforming the AC sampling distribution of curves into usable sampling distributions of point values, via the FAMs, that can be assigned CIs. Each bootstrapped average curve AC* is compared to the average egress curve AC(n) with the FAMs to create sampling distributions of those measures, i.e. ERD*(AC*, AC(n)), EPC*(AC*, AC(n)), and SC*(AC*, AC(n)). This approximates the (generally) unknowable sampling distributions of all possible sample average egress curves AC(n) compared to the population (or expected) average egress curve AC(∞) using the FAMs, i.e. ERD(AC(n), AC(∞)), EPC(AC(n), AC(∞)), and SC(AC(n), AC(∞)). The CIs of the FAM-based sampling distributions represent the precision of the AC.
$$ac_{k} = \frac{{\mathop \sum \nolimits_{i = 1}^{n} c_{i,k} }}{n}$$
(26)
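The bootstrap construction just described can be sketched as follows; this is illustrative only, reusing the FAM sketches from Sect. 2.2, and the array name 'curves' and the function name are this sketch's own.

```python
# A sketch of the bootstrap FAM sampling distributions for the AC; 'curves'
# is an (n x m) array with one sorted egress curve per simulation, and
# erd/epc/sc are the FAM sketches from Sect. 2.2.
import numpy as np

def fam_sampling_distributions(curves, B=2000, rng=None):
    rng = rng or np.random.default_rng()
    n = curves.shape[0]
    ac_n = curves.mean(axis=0)                 # AC(n): Eq. 26, per component
    erds, epcs, scs = np.empty(B), np.empty(B), np.empty(B)
    for j in range(B):
        idx = rng.integers(0, n, size=n)       # resample n curves with replacement
        ac_star = curves[idx].mean(axis=0)     # bootstrap average curve AC*
        erds[j] = erd(ac_star, ac_n)           # ERD*(AC*, AC(n))
        epcs[j] = epc(ac_star, ac_n)           # EPC*(AC*, AC(n))
        scs[j] = sc(ac_star, ac_n)             # SC*(AC*, AC(n))
    return np.sort(erds), np.sort(epcs), np.sort(scs)
```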
The ERD sampling distribution can be represented by a semi-infinite distribution between zero and infinity, as the ERD values are greater than or equal to zero. It is anticipated that the distribution would be concentrated near zero, with the frequency tending to zero as the bootstrapped ERDs tend to infinity. When AC(∞) is unknown, the best estimate of ERD(AC(∞), AC(n)) is zero, as the best estimate of AC(∞) is AC(n). The 0th percentile of the ERD*,o sampling distribution is zero. The CI is obtained by using the 0th and 100(1−α)th percentiles of the ordered bootstrapped values as the lower and upper limits for a confidence level of 100(1−α)%. The small sample size correction (Eq. 8) is applied to the 100(1−α)th percentile; this is achieved by modelling the bootstrapped ERD measures using a folded normal distribution from zero to infinity with the (unfolded and folded) mode at zero. The corrected 100(1−α)th percentile bootstrap index bERD of the ordered bootstrapped ERD*,o values is given by Eq. 27, where Eqs. 28 and 29 map the percentile from the folded distribution to the full distribution and back again. The CIERD is [0, \(ERD_{{b_{\text{ERD}} }}^{*,o}\)].
$$b_{\text{ERD}} \left(\alpha \right) = \left \lceil {\left({B + 1} \right)p_{\text{folded}} \left({\acute{p} \left({p_{\text{full}} \left({1 - \alpha} \right),n} \right)} \right)}\right \rceil$$
(27)
$$p_{\text{full}} \left( {p_{\text{folded}} } \right) = \frac{{p_{\text{folded}} + 1}}{2}$$
(28)
$$p_{\text{folded}} \left( {p_{\text{full}} } \right) = 2p_{\text{full}} - 1$$
(29)
The SC sampling distribution can be represented by a distribution between zero and one, where it is expected that the values will be concentrated near one and the frequency will tend to zero as the SC values tend to zero. The best estimate of SC(AC(∞), AC(n)) is one, and the 100th percentile of the SC sampling distribution is one. The CI, with a confidence level of 100(1−α)%, is given by the limits of the corrected (100α)th and 100th percentiles of the ordered bootstrapped SC*,o measures. The corrected (100α)th percentile bootstrap index bSC is modelled, in a similar fashion to bERD, using Eq. 30, where Eqs. 31 and 32 map the percentile from the folded distribution to the full distribution and back again. The CISC is [\(SC_{{b_{\text{SC}} }}^{*,o}\), 1].
$$b_{\text{SC}} \left(\alpha \right) = \left \lfloor {\left({B + 1} \right)p_{\text{folded}} \left({\acute{p} \left({p_{\text{full}} \left(\alpha \right),n} \right)} \right)} \right \rfloor$$
(30)
$$p_{\text{full}} \left( {p_{\text{folded}} } \right) = \frac{{p_{\text{folded}} }}{2}$$
(31)
$$p_{\text{folded}} \left( {p_{\text{full}} } \right) = 2p_{\text{full}}$$
(32)
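The folded-percentile index calculations for the ERD and SC translate directly into code. The sketch below reuses the p_corrected function from the Sect. 2.1 BCa sketch and returns 1-based indices, as in Eq. 7.

```python
# A sketch of the one-sided folded-percentile indices (Eqs. 27-32);
# p_corrected is the Eq. 8 sketch from Sect. 2.1.
import numpy as np

def b_erd(alpha, B, n):
    """Eq. 27: upper index into the ordered ERD* values."""
    p_full = ((1 - alpha) + 1) / 2              # Eq. 28: folded -> full
    p_fold = 2 * p_corrected(p_full, n) - 1     # correct, then Eq. 29: full -> folded
    return int(np.ceil((B + 1) * p_fold))

def b_sc(alpha, B, n):
    """Eq. 30: lower index into the ordered SC* values."""
    p_full = alpha / 2                          # Eq. 31: folded -> full
    p_fold = 2 * p_corrected(p_full, n)         # correct, then Eq. 32: full -> folded
    return int(np.floor((B + 1) * p_fold))
```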
The EPC sampling distribution will vary between zero and infinity, where it is expected that the values will be concentrated about one and the frequency tends to zero as the EPC tends to either zero or infinity. This can be represented by a two-tailed distribution, and the CI of the ordered bootstrapped EPC measures is calculated using the BCa method (Sect. 2.1). The best estimate of EPC(AC(∞), AC(n)) is one. The specific equations for calculating the BCa limits for the EPC are obtained by substituting the EPC into Eqs. 9–14, leading to Eqs. 33–37.
$$z_{{0,{\text{EPC}}}} = \Phi^{ - 1} \left( {\mathop \sum \limits_{j = 1}^{B} f\left( {EPC\left( {{\mathbf{\text{AC}}}_{j}^{*} ,{\mathbf{\text{AC}}}^{\left( n \right)} } \right),1} \right)/B} \right)$$
(33)
$$acc_{\text{EPC}} = \frac{{\mathop \sum \nolimits_{i = 1}^{n} \left( {\overline{EPC}_{\left( \cdot \right)} - EPC\left( {{\mathbf{\text{AC}}}_{ - i}^{\left( n \right)} , {\mathbf{\text{AC}}}^{\left( n \right)} } \right)} \right)^{3} }}{{6\left[ {\mathop \sum \nolimits_{i = 1}^{n} \left( {\overline{EPC}_{\left( \cdot \right)} - EPC\left( {{\mathbf{\text{AC}}}_{ - i}^{\left( n \right)} , {\mathbf{\text{AC}}}^{\left( n \right)} } \right)} \right)^{2} } \right]^{1.5} }}$$
(34)
where \({\text{AC}}_{ - i}^{\left( n \right)}\) is an average egress curve calculated using the original egress curves barring the egress curve from the ith simulation.
$$\overline{EPC}_{\left( \cdot \right)} = \frac{{\mathop \sum \nolimits_{i = 1}^{n} EPC\left( {{\mathbf{\text{AC}}}_{ - i}^{(n)} , {\mathbf{\text{AC}}}^{(n)} } \right)}}{n}$$
(35)
$$b_{{{\text{l}},{\text{EPC}}}} \left(\alpha \right) = \left \lfloor \left({B + 1} \right)\acute{p} \left({{\Phi}\left({z_{{0,{\text{EPC}}}} + \frac{{z_{{0,{\text{EPC}}}} +\Phi^{- 1} \left({\frac{\alpha}{2}} \right)}}{{1 - acc_{\text{EPC}} \times \left({z_{{0,{\text{EPC}}}} +\Phi^{- 1} \left({\frac{\alpha}{2}} \right)} \right)}}} \right),n} \right) \right \rfloor$$
(36)
$$b_{{{\text{u}},{\text{EPC}}}} \left(\alpha \right) = \left \lceil \left({B + 1} \right)\acute{p} \left({{\Phi}\left({z_{{0,{\text{EPC}}}} + \frac{{z_{{0,{\text{EPC}}}} +\Phi^{- 1} \left(1-{\frac{\alpha}{2}} \right)}}{{1 - acc_{\text{EPC}} \times \left({z_{{0,{\text{EPC}}}} +\Phi^{- 1} \left(1-{\frac{\alpha}{2}} \right)} \right)}}} \right),n} \right) \right \rceil$$
(37)
The CIEPC is given by [\(EPC_{{b_{{{\text{l}},{\text{EPC}}}} }}^{*,o}\), \(EPC_{{b_{{{\text{u}},{\text{EPC}}}} }}^{*,o}\)] where bl,EPC and bu,EPC are given by Eqs. 36 and 37 respectively.

3.1 Overall Confidence Level for the Average Egress Curve

The precision of the AC is represented by three FAM-based CIs. Each of these CIs has an individual CL of 100(1−α)%. These three CIs can be considered to form a triple-CI that has a more complex coverage probability than a single CI as it is possible that the AC(∞) may lie within zero, one, two, or three of the CIs (see Fig. 4).
The overall confidence level is equivalent to P(ERD ∩ EPC ∩ SC), i.e. all FAM-based CIs are simultaneously satisfied, and is likely to be less than the individual CL. Although it is easy to control the individual CLs, what is sometimes required is the ability to set the overall CL. This is a well-known problem with multiple CIs and could be addressed by applying the Bonferroni correction to the individual CIs [43]. If the overall CL is 100(1 − αoverall)%, then the confidence level required for each of the individual CIs is given by Eq. 38, where NCI is the number of CIs (three for the average egress curve).
$$CL_{\text{Bonferroni}} = 100\left( {1 - \alpha_{\text{Bonferroni}} } \right)\% = 100\left( {1 - \frac{{\alpha_{\text{overall}} }}{{N_{\text{CI}} }}} \right)\%$$
(38)
However, this leads to a conservative estimate of the individual CLs, as the correction assumes the worst-case multi-CI probability coverage, e.g. the right-hand Venn diagram of Fig. 4, and therefore the CIs are potentially much wider than necessary. When three CIs are used, an overall 95% CL corresponds to each CI having an individual 98.3% CL. Fortunately, as the ERD*(AC*, AC(n)), EPC*(AC*, AC(n)), and SC*(AC*, AC(n)) sampling distributions are available, for the bootstrapped egress curves, it is possible to determine a more accurate overall confidence level. This is achieved by adjusting the individual confidence level of the CIs, using the bisection algorithm below, until the number of AC* bootstrap curves that are simultaneously within all three FAM-based CIs matches the required overall CL.
1. The required number (RAC) (Eq. 39) of AC* curves that needs to be contained simultaneously by all three CIs, for an overall CL of 100(1 − αoverall)%, is calculated by taking the maximum of the number of AC* that need to be held individually by CIERD (Eq. 40), CIEPC (Eq. 41), and CISC (Eq. 42) at the overall CL.
$$R_{\text{AC}} = { \hbox{max} }\left( {NB_{\text{ERD}} ,NB_{\text{EPC}} ,NB_{\text{SC}} } \right)$$
(39)
$$NB_{\text{ERD}} = b_{\text{ERD}} \left( {\alpha_{\text{overall}} } \right)$$
(40)
$$NB_{\text{EPC}} = b_{{{\text{u}},{\text{EPC}}}} \left( {\alpha_{\text{overall}} } \right) - b_{{{\text{l}},{\text{EPC}}}} \left( {\alpha_{\text{overall}} } \right) + 1$$
(41)
$$NB_{\text{SC}} = B - b_{\text{SC}} \left( {\alpha_{\text{overall}} } \right) + 1$$
(42)
 
2. The lower individual confidence level (LICL) and upper individual confidence level (UICL) for the bisection algorithm are set to Eqs. 43 and 44 (the Bonferroni correction) respectively. The actual individual CL that satisfies the overall CL lies between these two limits.
$$LICL = 100\left( {1 - \alpha_{\text{overall}} } \right)\%$$
(43)
$$UICL = 100\left( {1 - \alpha_{\text{overall}} /3} \right)\%$$
(44)
 
3. The bisection is now performed by setting the middle individual confidence level (MICL) using Eq. 45.
$$MICL = \frac{LICL + UICL}{2}$$
(45)
 
4. The CI limits based on MICL, 100(1 − αMICL)%, are calculated for the ERD, EPC, and SC sampling distributions, i.e. \(ERD_{{b_{\text{ERD}} \left( {\alpha_{\text{MICL}} } \right)}}^{*,o}\) (Eq. 27), \(EPC_{{b_{{{\text{l}},{\text{EPC}} }} \left( {\alpha_{\text{MICL}} } \right)}}^{*,o}\) (Eq. 36) and \(EPC_{{b_{{{\text{u}},{\text{EPC}} }} \left( {\alpha_{\text{MICL}} } \right)}}^{*,o}\) (Eq. 37), and \(SC_{{b_{\text{SC}} \left( {\alpha_{\text{MICL}} } \right)}}^{*,o}\) (Eq. 30).
 
5. Count the number (NAC) of AC* curves that are simultaneously contained by all three CIs with MICL. This is achieved using the following pseudo code. Note that the FAMs of \({\mathbf{AC}}_{j}^{*}\) compared to \({\mathbf{AC}}^{\left( n \right)}\) are \(ERD_{j}^{*}\), \(EPC_{j}^{*}\), and \(SC_{j}^{*}\).
$$\begin{aligned} & {\text{Set}}\,N_{AC} \,{\text{to }}\,{\text{zero}} \\ & {\text{DO}}\,j = \, 1\,{\text{ to}}\,B \\ & \quad \quad {\text{IF}}\,\left( {ERD_{j}^{*} \le ERD_{{b_{\text{ERD}} \left( {\alpha_{\text{MICL}} } \right)}}^{*,o} } \right){\text{AND}}\left( {SC_{j}^{*} \ge SC_{{b_{\text{SC}} \left( {\alpha_{\text{MICL}} } \right)}}^{*,o} } \right) \\ & \quad \quad {\text{AND}}\left( {EPC_{{b_{{{\text{l}},{\text{EPC}} }} \left( {\alpha_{MICL} } \right)}}^{*,o} \le EPC_{j}^{*} \le EPC_{{b_{{{\text{u}},{\text{EPC}}}} \left( {\alpha_{\text{MICL}} } \right)}}^{*,o} } \right){\text{THEN}}\,N_{AC} = N_{AC} + \, 1 \\ & \quad \quad {\text{ENDIF}} \\ & {\text{ENDDO}} \\ \end{aligned}$$
 
6. If NAC > RAC, then set UICL to MICL. Otherwise, set LICL to MICL.
 
7. If the difference between UICL and LICL is less than the bisection tolerance (0.1% is suggested), then the overall CL convergence has been achieved (continue to step 8). Otherwise, convergence has not been achieved (return to step 3).
 
8. Calculate the final CIs using the UICL by substituting αUICL into \(ERD_{{b_{\text{ERD}} \left( {\alpha_{\text{UICL}} } \right)}}^{*,o}\) (Eq. 27), \(EPC_{{b_{{{\text{l}},{\text{EPC}} }} \left( {\alpha_{\text{UICL}} } \right)}}^{*,o}\) (Eq. 36) and \(EPC_{{b_{{{\text{u}},{\text{EPC}} }} \left( {\alpha_{\text{UICL}} } \right)}}^{*,o}\) (Eq. 37), and \(SC_{{b_{\text{SC}} \left( {\alpha_{\text{UICL}} } \right)}}^{*,o}\) (Eq. 30) for an overall CL of 100(1 − αoverall)%. A condensed code sketch of steps 1–8 is given below.
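The following condensed Python sketch illustrates the bisection; it is not the author's implementation. It assumes the ordered FAM distributions (erds, epcs, scs) and the per-bootstrap FAM values in original order (erd_j, epc_j, sc_j) from the Sect. 3 sketch, the b_erd/b_sc index functions above, and b_epc_indices(alpha) standing in for the BCa limits of Eqs. 36–37.

```python
# A condensed sketch of the bisection (steps 1-8); CLs are handled as
# fractions (e.g. 0.95) rather than percentages.
def overall_cl_bisection(erd_j, epc_j, sc_j, erds, epcs, scs,
                         b_epc_indices, alpha_overall, B, n, tol=0.001):
    # Step 1: required simultaneous count R_AC (Eqs. 39-42)
    bl, bu = b_epc_indices(alpha_overall)
    r_ac = max(b_erd(alpha_overall, B, n),
               bu - bl + 1,
               B - b_sc(alpha_overall, B, n) + 1)
    # Step 2: bracket the individual CL (Eqs. 43-44, the Bonferroni bound)
    licl, uicl = 1 - alpha_overall, 1 - alpha_overall / 3
    while uicl - licl >= tol:              # step 7: 0.1% bisection tolerance
        micl = 0.5 * (licl + uicl)         # step 3: Eq. 45
        a = 1 - micl
        # Step 4: CI limits at the trial individual CL
        e_hi = erds[b_erd(a, B, n) - 1]
        s_lo = scs[b_sc(a, B, n) - 1]
        bl, bu = b_epc_indices(a)
        p_lo, p_hi = epcs[bl - 1], epcs[bu - 1]
        # Step 5: count AC* curves inside all three CIs simultaneously
        n_ac = sum(1 for j in range(B)
                   if erd_j[j] <= e_hi and sc_j[j] >= s_lo
                   and p_lo <= epc_j[j] <= p_hi)
        # Step 6: tighten the bracket
        if n_ac > r_ac:
            uicl = micl
        else:
            licl = micl
    return uicl    # step 8: compute the final CIs at alpha = 1 - UICL
```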
 

4 CI Convergence Scheme

Now that CIs have been developed for the statistics, it is possible to determine convergence with respect to the estimates of the population parameters. The CIMT is calculated using Eq. 3, the CISD is calculated using bootstrapping (Sect. 2.1.1), and the FAM-based CIs of the AC are calculated using the methodology described in Sect. 3. The width of a CI narrows (converges) with increasing sample size, e.g. Eq. 4. This enables a convergence approach based on incremental testing, where the number of trials can be progressively increased until the width of the CI is sufficiently reduced that it is less than the specified precision tolerance. The convergence of the MT, the SD, and the FAMs is determined by comparing the normalised widths (W) of the CIs against specified tolerances (Tol) (Eqs. 46–50) for a 100(1 − α)% CL.
$$W_{\text{MT}} = \frac{{CI_{\text{u, MT}} - CI_{\text{l, MT}} }}{MT} = \frac{{2 \cdot T^{ - 1} \left( {1 - \frac{\alpha }{2}, n - 1} \right) \cdot SD}}{MT\sqrt n } < Tol_{\text{MT}}$$
(46)
$$W_{\text{SD}} = \frac{{CI_{\text{u, SD}} - CI_{\text{l, SD}} }}{{SD^{\left( n \right)} }} = \frac{{SD_{{b_{{{\text{u, SD}} }} \left( \alpha \right)}}^{*,o} - SD_{{b_{{{\text{l, SD}} }} \left( \alpha \right)}}^{*,o} }}{{SD^{\left( n \right)} }} < Tol_{\text{SD}}$$
(47)
$$W_{\text{ERD}} = CI_{\text{u, ERD}} - CI_{{ 0^{\text{th}} , {\text{ERD }}}} = ERD_{{b_{\text{ERD}} \left( \alpha \right)}}^{*,o} < Tol_{\text{ERD}}$$
(48)
$$W_{\text{EPC}} = CI_{\text{u, EPC}} - CI_{\text{l, EPC}} = EPC_{{b_{{{\text{u, EPC}} }} \left( \alpha \right)}}^{*,o} - EPC_{{b_{{{\text{l,EPC}} }} \left( \alpha \right)}}^{*,o} < Tol_{\text{EPC}}$$
(49)
$$W_{\text{SC}} = CI_{{ 1 0 0^{\text{th}} , {\text{SC}}}} - CI_{\text{l, SC}} = 1 - SC_{{b_{{{\text{SC}} }} \left( \alpha \right)}}^{*,o} < Tol_{\text{SC}}$$
(50)
The convergence algorithm is depicted as a flowchart in Fig. 5. The user needs to specify the above tolerances (TolMT, TolSD, TolERD, TolEPC, and TolSC) as well as the minimum number of simulations (MIN_SIM), the maximum number of simulations (MAX_SIM), the number of bootstrap resamples (B), the CL, and the number of simulations performed between testing (K). Convergence is determined when all the CI widths are less than their specified tolerances. Convergence may not be achieved within MAX_SIM simulations; in that case, MAX_SIM simulations will be performed and the final CIs reported. If the user has no specific convergence requirements, then a set of values for the convergence attributes is suggested in Table 6. The tolerances are discussed in Sect. 6.
Table 6
Suggested Values for the Attributes of the CI Convergence Algorithm

Attribute   MIN_SIM    CL    K   B           TolMT   TolSD   TolERD    TolEPC   TolSC
Value       40+ [31]   95%   1   2000+ [32]  0.02    0.2     ½ TolMT   TolMT    ½ TolMT
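As an illustration, the convergence loop of Fig. 5 might be sketched as follows using the Table 6 defaults. Here run_simulation() and ci_widths() are placeholders for the user's egress model and the width expressions of Eqs. 46–50, and max_sim = 2000 is an arbitrary choice of this sketch, since Table 6 does not prescribe one.

```python
# A sketch of the convergence loop of Fig. 5 using the Table 6 defaults;
# run_simulation() and ci_widths() are caller-supplied placeholders.
def run_until_converged(run_simulation, ci_widths,
                        min_sim=40, max_sim=2000, k=1, B=2000, cl=0.95,
                        tol_mt=0.02, tol_sd=0.2,
                        tol_erd=0.01, tol_epc=0.02, tol_sc=0.01):
    tols = (tol_mt, tol_sd, tol_erd, tol_epc, tol_sc)
    results = [run_simulation() for _ in range(min_sim)]
    while True:
        # (W_MT, W_SD, W_ERD, W_EPC, W_SC) at the chosen CL, per Eqs. 46-50
        widths = ci_widths(results, B=B, cl=cl)
        converged = all(w < t for w, t in zip(widths, tols))
        if converged or len(results) >= max_sim:
            return results, widths, converged
        results += [run_simulation() for _ in range(k)]   # K more simulations
```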

5 Case Study

A theoretical case study model, similar to Ronchi et al. [17], was used to create multiple fictitious egress curves of 120 agents. The data is generated by pseudo-randomly [44] sampling a log-normal distribution with a mean of 12 s and standard deviation of 13.4 s (√180 s). A set of 120 values are generated {l1, …, l120} with the egress time (t) of the kth agent given by Eq. 51. Figure 6 depicts ten randomly generated egress curves using the case study model.
$$t_{k} = \mathop \sum \limits_{q = 1}^{k} l_{q}$$
(51)
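A sketch of this generator follows. Note that NumPy's lognormal sampler takes the mean and SD of the underlying normal distribution, so the stated mean (12 s) and variance (180 s²) are first converted via the standard lognormal moment relations; the function name and seed are this sketch's own.

```python
# A sketch of the case-study generator (Eq. 51): 120 lognormal inter-exit
# gaps with mean 12 s and variance 180 s^2, accumulated into an egress curve.
import numpy as np

def egress_curve(rng, m=120, mean=12.0, var=180.0):
    sigma2 = np.log(1.0 + var / mean ** 2)         # sigma^2 = ln(1 + v/mu^2)
    mu = np.log(mean) - 0.5 * sigma2               # mu = ln(mean) - sigma^2/2
    gaps = rng.lognormal(mu, np.sqrt(sigma2), m)   # {l_1, ..., l_m}
    return np.cumsum(gaps)                         # Eq. 51: t_k = sum of l_1..l_k

rng = np.random.default_rng(0)
curves = np.stack([egress_curve(rng) for _ in range(10)])  # cf. Fig. 6
```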
Using the case study model has two major advantages over using simulated or real data. Firstly, the generally unknowable population parameters are defined for the case study model so the methods can be tested and verified. The population MT(∞) is 1440 s (120 × 12 s), as the mean of the sum is the sum of the means. The population SD(∞) is 147.0 s (√120√180 s), as the variance of the sum is the sum of the variances and the standard deviation is the square root of the variance. The exit time of the kth agent of AC(∞) is given by Eq. 52.
$$t_{k}^{\left( \infty \right)} = k \times 12s$$
(52)
Secondly, there is very little computational cost in generating this data allowing a vast number of curves to be generated in a short timeframe. Virtually all the computational cost of the case study is incurred from the bootstrapping process rather than curve generation. In practice, it is anticipated that the computational cost of egress simulations (curve generation) will dwarf the computational costs of the bootstrapping process. In this test case study, the step size (s) used for the SC (Eq. 24) is one. This gives the most conservative value for the difference in shape between the curves.
The testing of the CIs is performed in three parts. The first part examines the behaviour of the CIs, within a single set of simulations, relative to the sample estimates of the statistics and the population parameters (Sect. 5.1). The second part examines the average behaviour of the CIs over multiple sets of simulations including the measured coverage probability and average CI width for a parameter at a specified sample size (Sect. 5.2). The third part examines the behaviour of the convergence scheme based on the CIs (Sect. 5.3).

5.1 CI and Error Behaviour within a Single Set of Simulations

The behaviour of the statistics (MT, SD, ERD, EPC, SC) and their CIs was examined over a set of two hundred simulations. This is an example set of simulations, and while the behaviour will not be the same every time, it highlights the expected behaviour. The statistics are compared to their equivalent population parameters and normalised, leading to the true errors (ε). The CIs are also normalised, in a similar fashion to ε, leading to normalised CI limits (v).
The normalised error (\(\varepsilon_{\text{MT}}\)) and CI limits (\(v_{\text{l,MT}}\), \(v_{\text{u,MT}}\)) for the MT are given by Eqs. 53, 54, and 55.
$$\varepsilon_{\text{MT}} = \frac{{MT^{\left( \infty \right)} - MT^{\left( n \right)} }}{{MT^{\left( n \right)} }}$$
(53)
$$v_{\text{l,MT}} = \frac{{T^{ - 1} \left( {\frac{\alpha }{2},n - 1} \right) \times \frac{{SD^{\left( n \right)} }}{\sqrt n }}}{{MT^{\left( n \right)} }}$$
(54)
$$v_{\text{u,MT}} = \frac{{T^{ - 1} \left( {\left( {1 - \frac{\alpha }{2}} \right),n - 1} \right) \times \frac{{SD^{\left( n \right)} }}{\sqrt n }}}{{MT^{\left( n \right)} }}$$
(55)
The normalised error (\(\varepsilon_{\text{SD}}\)) and CI limits (\(v_{\text{l,SD}} , v_{\text{u,SD}}\)) for the SD are given by Eqs. 56, 57, and 58.
$$\varepsilon_{\text{SD}} = \frac{{SD^{\left( \infty \right)} - SD^{\left( n \right)} }}{{SD^{\left( n \right)} }}$$
(56)
$$v_{\text{l,SD}} = \frac{{SD_{{b_{\text{l,SD}} }}^{*,o} - SD^{\left( n \right)} }}{{SD^{\left( n \right)} }}$$
(57)
$$v_{\text{u,SD}} = \frac{{SD_{{b_{\text{u,SD}} }}^{*,o} - SD^{\left( n \right)} }}{{SD^{\left( n \right)} }}$$
(58)
The normalised error (\(\varepsilon_{\text{ERD}}\)) and CI limit (\(v_{\text{ERD}}\)) for the ERD are given by Eqs. 59 and 60.
$$\varepsilon_{\text{ERD}} = ERD\left( {{\mathbf{AC}}^{\left( \infty \right)} ,{\mathbf{AC}}^{\left( n \right)} } \right)$$
(59)
$$v_{\text{ERD}} = ERD_{{b_{\text{ERD}} }}^{*,o}$$
(60)
The normalised error (\(\varepsilon_{\text{EPC}}\)) and CI limits (\(v_{\text{l,EPC}} , v_{\text{u,EPC}}\)) for the EPC are given by Eqs. 61, 62, and 63.
$$\varepsilon_{\text{EPC}} = EPC\left( {{\mathbf{AC}}^{\left( \infty \right)} ,{\mathbf{AC}}^{\left( n \right)} } \right) - 1$$
(61)
$$v_{\text{l,EPC}} = EPC_{{{\text{b}}_{\text{l,EPC}} }}^{*,o} - 1$$
(62)
$$v_{\text{u,EPC}} = EPC_{{b_{\text{u,EPC}} }}^{*,o} - 1$$
(63)
The normalised error (\(\varepsilon_{\text{SC}}\)) and CI limit (\(v_{\text{SC}}\)) for the SC are given by Eqs. 64 and 65.
$$\varepsilon_{\text{SC}} = 1 - SC\left( {{\mathbf{AC}}^{\left( \infty \right)} ,{\mathbf{AC}}^{\left( n \right)} } \right)$$
(64)
$$v_{\text{SC}} = 1 - SC_{{b_{\text{SC}} }}^{*,o}$$
(65)
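For reference, minimal implementations of the three FAMs in the forms used above are sketched below. They follow the standard functional-analysis definitions [30]; the SC version assumes uniformly spaced data points so that the (s·Δt)² normalising factors cancel, leaving only the s-step differences.

```python
import numpy as np

def erd(a, b):
    """Euclidean Relative Distance of curve a from reference curve b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(np.sum((a - b) ** 2) / np.sum(b ** 2))

def epc(a, b):
    """Euclidean Projection Coefficient of curve a onto reference curve b."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sum(a * b) / np.sum(b * b)

def sc(a, b, s=1):
    """Secant Cosine between curves a and b with step size s."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    da, db = a[s:] - a[:-s], b[s:] - b[:-s]   # s-step secant slopes
    return np.sum(da * db) / np.sqrt(np.sum(da ** 2) * np.sum(db ** 2))
```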
In Figs. 7, 8, 9, 10 and 11 the normalised errors and CIs are plotted for MT (Fig. 7), SD (Fig. 8), ERD (Fig. 9), EPC (Fig. 10), and SC (Fig. 11).
In Figs. 7, 8, 9, 10 and 11 it is apparent that, for this set of simulations, the CIs bound the true error. This is expected as the CI is an interval estimate of the population parameter, meaning that the population parameter is unlikely to lie outside the CI. In Fig. 8, the SD still has a significant error, ~ 0.1 (~ 10%), after 150 simulations, although this error is bounded by the CI. There is no obvious relationship between \(\varepsilon_{\text{SD}}\) and the other error measures.
There is an unexpectedly close similarity between the behaviour of \(\varepsilon_{\text{MT}}\) (Fig. 7) and \(\varepsilon_{\text{EPC}}\) (Fig. 10): both errors are near zero at 50 and 100 simulations, have a similar local peak value of approximately 0.02 at 67 simulations, and have similar values of approximately − 0.01 at 200 simulations. There is also a similarity between \(\varepsilon_{\text{ERD}}\) (Fig. 9) and \(\left| {\varepsilon_{\text{MT}} } \right|\): both errors are near zero at 50 and 100 simulations, have a local peak at 67 simulations, and have similar values of 0.01 at 200 simulations. Similarly, the CI limits for EPC and ERD have a similar trend and magnitude to the corresponding CI limits of MT. A brief algebraic analysis of \(\varepsilon_{\text{EPC}}\) and \(\varepsilon_{\text{ERD}}\) is given in the appendix; it demonstrates that \(\varepsilon_{\text{EPC}}\) and \(\varepsilon_{\text{ERD}}\) are approximately equivalent to \(\varepsilon_{\text{MT}}\) under certain conditions. This is useful as the ERD and EPC tolerances in the convergence scheme can then be related to the MT tolerance (see Sect. 6). It is also shown in the appendix that if \(\varepsilon_{\text{ERD}}\) is greater than \(\left| {\varepsilon_{\text{MT}} } \right|\), then there is more relative difference between the egress curves than between the TETs of those curves.
The convergence behaviour of SC (Fig. 11) is somewhat different from, and shows no obvious correlation with, the other statistics: the error and CI change comparatively smoothly as the number of simulations increases. The error also reduces more rapidly than for the other statistics, with the normalised CI less than 0.01 after 100 simulations and less than 0.005 after 200 simulations. The convergence behaviour unexpectedly appears to be 1/n, while that of the other statistics appears to be 1/√n. Although the general behaviour differs from the other CIs, the normalised \(CI_{\text{SC}}\) still bounds \(\varepsilon_{\text{SC}}\).

5.2 Coverage Probability and Average Width of CIs

Ten thousand sets of simulations of size n were generated to estimate the actual coverage probability of the CIs and the average normalised widths (Eqs. 46–50) of the CIs. This was repeated for eight sample sizes (10, 20, 30, 40, 100, 400, 1000, and 4000). The coverage probability is the percentage of CIs that contain the population parameter; ideally this should match the confidence level of the CI. For the MT and SD, the coverage probability is determined by counting the number of times the population value lies within the CI. For the average egress curve, the FAMs are used to compare the sample average egress curve to the population average egress curve, i.e. ERD(AC(∞), AC(n)), EPC(AC(∞), AC(n)), and SC(AC(∞), AC(n)), and those values are checked to see whether they lie within the corresponding CI.
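For the MT, the experiment reduces to the loop sketched below, reusing `egress_curve` and `mt_error_and_ci` from the earlier sketches; the FAM-based CIs follow the same pattern with the bootstrap interval in place of the t-interval.

```python
import numpy as np

def coverage_probability(n, n_sets=10_000, mt_inf=1440.0):
    """Fraction of 95% MT CIs, over n_sets sets of n runs, that contain
    the population mean (a sketch of the Sect. 5.2 experiment)."""
    hits = 0
    for _ in range(n_sets):
        tets = np.array([egress_curve()[-1] for _ in range(n)])
        eps, v_l, v_u = mt_error_and_ci(tets, mt_inf)
        # v_l <= eps <= v_u is equivalent to MT(inf) lying inside the CI.
        hits += (v_l <= eps <= v_u)
    return hits / n_sets
```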
In Table 7, the coverage probability of the CIs is generally good, with the \(CI_{\text{MT}}\) and \(CI_{\text{ERD}}\) coverage probabilities matching the confidence level almost perfectly, within the error band. The \(CI_{\text{EPC}}\) coverage probability is within the error band for sample sizes of 30 or more; even at a sample size of 10 the discrepancy is not great. However, the \(CI_{\text{SC}}\) coverage probability is unexpectedly higher than the nominal confidence level until a sample size of 4000 is reached. This implies that the \(CI_{\text{SC}}\) may be wider than necessary, but at least the CI is a conservative estimate. The \(CI_{\text{SD}}\) has the worst coverage probability of all the statistics but is still reasonable (> 90%) for sample sizes over 20. Even for a small sample size of 10 the coverage probability is 85.2%, which is arguably better than no error estimate at all. The \(CI_{\text{SD}}\) coverage probability tends to the nominal CL with increasing sample size, and even for small sample sizes the method is usable. The coverage probability of the \(CI_{\text{SD}}\) could perhaps be improved using a double bootstrap [35], but this would incur a significant additional computational cost.
Table 7
CI Coverage Probability (%) of MT, SD, ERD, EPC, and SC with Individual 95% CLs for a Range of Sample Sizes Over 10,000 Repetitions

| Statistic (CL) | Error band (%) | n = 10 | n = 20 | n = 30 | n = 40 | n = 100 | n = 400 | n = 1000 | n = 4000 |
|---|---|---|---|---|---|---|---|---|---|
| MT (95%) | ± 0.4 | 94.5 | 94.8 | 95.0 | 94.6 | 95.1 | 95.2 | 95.3 | 94.8 |
| SD (95%) | ± 0.4 | 85.2 | 91.7 | 92.4 | 93.0 | 93.3 | 94.6 | 94.7 | 94.9 |
| ERD (95%) | ± 0.4 | 95.0 | 94.7 | 95.2 | 95.6 | 95.3 | 94.9 | 94.8 | 95.0 |
| EPC (95%) | ± 0.4 | 94.2 | 94.4 | 95.1 | 95.2 | 95.1 | 94.8 | 95.1 | 94.9 |
| SC (95%) | ± 0.4 | 100 | 100 | 99.9 | 99.8 | 99.3 | 97.7 | 96.1 | 95.4 |
In Table 8, the widths of the MT, SD, ERD, and EPC CIs are approximately inversely proportional to the square root of the number of simulations performed. This is a typical relationship between CI width and the number of simulations and is similarly observed for CI widths of other parameters [13]. The SC width has a different convergence behaviour and is approximately inversely proportional to the number of simulations performed.
Table 8
Average Normalised CI Widths for MT, SD, ERD, EPC, and SC for Eight Sample Sizes Over 10,000 Repetitions

| Width of normalised CI | n = 10 | n = 20 | n = 30 | n = 40 | n = 100 | n = 400 | n = 1000 | n = 4000 |
|---|---|---|---|---|---|---|---|---|
| \(W_{\text{MT}}\) (95%) (Eq. 46) | 0.14 | 0.094 | 0.076 | 0.065 | 0.04 | 0.02 | 0.013 | 0.0063 |
| \(W_{\text{SD}}\) (95%) (Eq. 47) | 0.84 | 0.68 | 0.57 | 0.49 | 0.31 | 0.16 | 0.098 | 0.049 |
| \(W_{\text{ERD}}\) (95%) (Eq. 48) | 0.084 | 0.053 | 0.043 | 0.037 | 0.023 | 0.011 | 0.0072 | 0.0036 |
| \(W_{\text{EPC}}\) (95%) (Eq. 49) | 0.16 | 0.10 | 0.083 | 0.071 | 0.044 | 0.022 | 0.014 | 0.0069 |
| \(W_{\text{SC}}\) (95%) (Eq. 50) | 0.10 | 0.047 | 0.03 | 0.022 | 0.0084 | 0.002 | 0.00077 | 0.00019 |
The normalised width of the SD CI is notable as it is much wider than those of the other statistics. For example, after 1000 simulations \(W_{\text{SD}}\) is still about 10% of the SD, approximately a ± 5% error. For 40 simulations, \(W_{\text{SD}}\) is 0.49, which is equivalent to a [− 19%, + 30%] error at a 95% CL. For 10 simulations, \(W_{\text{SD}}\) is 0.84, which is equivalent to a [− 24%, + 60%] error. If an end user requires a precise estimate of the SD, then a large number of simulations will generally be required.
When the multi-CI correction scheme (Sect. 3.1) is used, Table 9 shows that the average individual CL (97.6%) is significantly lower than the conservative Bonferroni correction (98.3%), while the overall coverage probability remains above 95%. It should be noted that the SC coverage probability is much greater than its CL and thus contributes significantly to the overall coverage probability exceeding 95% below 4000 simulations. However, for the 4000-simulation samples the overall coverage probability is still above 95%, even though the SC coverage probability is by then in line with the computed CL.
Table 9
CI Coverage Probability for a Range of Sample Sizes Over 10,000 Repetitions when the Multi-CI Correction Scheme is Used for AC with an Overall CL of 95%

| | Error band | n = 10 | n = 20 | n = 30 | n = 40 | n = 100 | n = 400 | n = 1000 | n = 4000 |
|---|---|---|---|---|---|---|---|---|---|
| Average individual CL | ± 0.1% | 97.1% | 97.3% | 97.4% | 97.5% | 97.5% | 97.6% | 97.6% | 97.5% |
| ERD | ± 0.3% | 97.2% | 97.4% | 97.2% | 97.4% | 97.5% | 97.5% | 97.5% | 97.5% |
| EPC | ± 0.3% | 96.5% | 97.1% | 97.0% | 97.5% | 97.5% | 97.4% | 97.5% | 97.5% |
| SC | ± 0.3% | 100% | 100% | 100% | 100% | 99.9% | 99.1% | 98.4% | 97.6% |
| Overall coverage | ± 0.4% | 95.9% | 96.6% | 96.6% | 96.7% | 97.0% | 96.2% | 95.7% | 95.2% |
In Table 10 the corrected widths for an overall nominal 95% CL are given. These widths are slightly (~ 14%) wider than the widths in Table 8. For 40 simulations, the \(CI_{\text{ERD}}\) is [0, 0.042], the \(CI_{\text{EPC}}\) is [0.958, 1.042], and the \(CI_{\text{SC}}\) is [0.975, 1], with an overall confidence level of 95%. These could be reported as errors: an ERD error of 4.2%, an EPC error of ± 4.2%, and an SC error of 2.5%.
Table 10
Average Multi-CI Corrected Normalised CI Widths for AC at Various Sample Sizes Over 10,000 Repetitions

| Normalised CI width | n = 10 | n = 20 | n = 30 | n = 40 | n = 100 | n = 400 | n = 1000 | n = 4000 |
|---|---|---|---|---|---|---|---|---|
| \(W_{\text{ERD}}\) | 0.094 | 0.061 | 0.049 | 0.042 | 0.026 | 0.013 | 0.0082 | 0.0041 |
| \(W_{\text{EPC}}\) | 0.18 | 0.12 | 0.096 | 0.082 | 0.051 | 0.025 | 0.016 | 0.0079 |
| \(W_{\text{SC}}\) | 0.12 | 0.054 | 0.034 | 0.025 | 0.0091 | 0.0021 | 0.00081 | 0.00020 |

5.3 Convergence Testing

Samples of simulations were generated until convergence was achieved for the tested statistic at the stated tolerance. A minimum of 40 simulations [41] was performed before testing for convergence. This was repeated 10,000 times for each measure to estimate the actual coverage probability of the CI. For this case study each convergence criterion was tested in isolation, i.e. convergence was based solely on the tested statistic of interest. This is a reasonable way to test convergence as, ultimately, convergence depends on a single variable: the other variables must each be converged before final convergence is determined.
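For a single statistic, the convergence loop can be sketched as follows, again reusing `egress_curve` from the earlier sketch. The batch size between convergence checks is an illustrative choice; in the full scheme the same width test is applied to each statistic's CI in turn.

```python
import numpy as np
from scipy import stats

def run_until_converged(tol_mt=0.02, k_batch=10, n_min=40, alpha=0.05):
    """Add simulations until the normalised 95% CI width for the MT
    drops below tol_mt, testing only after at least n_min runs."""
    tets = []
    while True:
        tets.extend(egress_curve()[-1] for _ in range(k_batch))
        n = len(tets)
        if n < n_min:
            continue
        t = np.asarray(tets)
        half = stats.t.ppf(1 - alpha / 2, n - 1) * t.std(ddof=1) / np.sqrt(n)
        if 2 * half / t.mean() < tol_mt:   # normalised CI width (cf. Eq. 46)
            return t                       # converged sample of TETs
```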
The results in Table 11 illustrate that the methodology can generate CIs with an actual coverage close to the nominal 95% confidence level. The theory of CIs is based on variable CI limits with a fixed sample size; this is somewhat different to converging to a set CI width with a variable sample size. Intuitively, it would be expected that the coverage probability would be close to the nominal CL, and Ross [25] noted that the coverage probability matches the nominal CL for sufficiently large n.
Table 11
CI Coverage for the Convergence Statistics for a Range of Tolerances

| Statistic | Tolerance | Average sample size (SD) | CI coverage probability |
|---|---|---|---|
| MT | 0.06 | 47.6 (7.6) | 94.6% ± 0.4% |
| MT | 0.04 | 101.3 (15.5) | 94.8% ± 0.4% |
| MT | 0.02 | 405.5 (30.6) | 94.8% ± 0.4% |
| SD | 0.4 | 52.4 (19.0) | 91.6% ± 0.5% |
| SD | 0.2 | 211.5 (62.2) | 93.8% ± 0.5% |
| SD | 0.1 | 858.1 (150.3) | 93.9% ± 0.5% |
| ERD | 0.04 | 40.6 (2) | 97.9% ± 0.3% |
| ERD | 0.02 | 122.1 (17) | 94.3% ± 0.5% |
| ERD | 0.01 | 484.4 (34.4) | 94.4% ± 0.5% |
| EPC | 0.08 | 40.4 (1.6) | 93.9% ± 0.5% |
| EPC | 0.04 | 112.8 (17.8) | 93.5% ± 0.5% |
| EPC | 0.02 | 456.0 (34.7) | 94.2% ± 0.5% |
| SC | 0.02 | 43.3 (3.7) | 99.8% ± 0.1% |
| SC | 0.01 | 83.6 (6.3) | 99.5% ± 0.1% |
| SC | 0.005 | 161.6 (8.8) | 98.7% ± 0.2% |
As would be anticipated from Sects. 5.1 and 5.2, as the tolerance decreases the number of samples required for convergence increases. This increase is approximately quadratic in the reciprocal of the tolerance for all the measures apart from the SC: halving the tolerance roughly quadruples the required sample size. For the SC, the increase is approximately linear, with halving the tolerance roughly doubling the required sample size.

6 Discussion

The advantage of the convergence scheme developed here, compared to Ronchi et al.'s scheme, is that convergence is directly based on the required precision of the statistics of interest. The most intuitive tolerance to set is \(Tol_{\text{MT}}\). It has previously been suggested that the tolerance for Ronchi et al.'s original method could be based on uncertainty from either a fire modelling study determining the available safe egress time or from experimental egress trials [19]. Those techniques could also be applied to the CI-based method, provided those uncertainties are calculated using CIs. In lieu of that type of information, a typical choice for \(Tol_{\text{MT}}\) could be 0.02, the equivalent of a ± 1% error, but the choice depends on the user's requirements. However, the choice of tolerances for the other statistics, SD, ERD, EPC and SC, is less apparent.
Prescribing appropriate tolerances for the ERD, EPC and SC FAM-based CIs is problematic, but the case study suggests a potential way forward. It was found that \(W_{\text{EPC}}\), with a 95% confidence level, and \(W_{\text{MT}}\) have a similar magnitude (\(W_{\text{EPC}}\) is approximately 10% wider than \(W_{\text{MT}}\)) for the same sample size (see Table 10), and the behaviour of the corrected CI limits of MT and EPC (see Figs. 7 and 10) is also similar. It was also shown that under certain conditions the true errors measured by EPC and MT are approximately equivalent (see appendix). The ERD also has a behaviour and magnitude related to MT under certain circumstances: \(W_{\text{ERD}}\) is roughly half the magnitude of \(W_{\text{MT}}\) for the same sample size (see Table 10), and the upper confidence limits of the ERD and MT CIs are similar in magnitude and trend (see Figs. 7 and 9). It was also shown (see appendix) that the true errors measured by ERD and MT are equivalent under certain circumstances. In lieu of any other requirements, it is reasonable to base the tolerances of the ERD and EPC CIs on the specified tolerance of MT (Eqs. 66 and 67). Although there is no obvious relationship between the MT and SC convergence behaviour, it is suggested that \(Tol_{\text{SC}}\) is also based on the specified tolerance of MT (Eq. 68), unless the user has a specific requirement.
$$Tol_{\text{EPC}} = Tol_{\text{MT}}$$
(66)
$$Tol_{\text{ERD}} = \frac{{Tol_{\text{MT}} }}{2}$$
(67)
$$Tol_{\text{SC}} = \frac{{Tol_{\text{MT}} }}{2}$$
(68)
Setting \(Tol_{\text{SD}}\) in relation to \(Tol_{\text{MT}}\) is not recommended. From Table 8, even with 4000 simulations the average \(W_{\text{SD}}\) is 0.049. Assuming that the TETs follow a normal distribution, it can be calculated [22] that it would take ~ 10,000 simulations to achieve convergence for a \(Tol_{\text{SD}}\) of 0.02. Unless the user specifically requires a certain precision for the SD, it is suggested that \(Tol_{\text{SD}}\) be set between 0.2 and 0.4.
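A rough check of this figure, assuming normally distributed TETs so that the SD has a chi-square-based CI: the sketch below searches for the smallest n whose normalised CI width meets the tolerance. The exact count depends on the approximation used, but it confirms that a tight SD tolerance demands a sample size on the order of 10⁴.

```python
import numpy as np
from scipy import stats

def n_for_sd_tolerance(tol, alpha=0.05):
    """Smallest n (coarse search) whose chi-square CI for the SD has a
    normalised width <= tol, assuming normally distributed TETs."""
    n = 10
    while True:
        lo = np.sqrt((n - 1) / stats.chi2.ppf(1 - alpha / 2, n - 1))
        hi = np.sqrt((n - 1) / stats.chi2.ppf(alpha / 2, n - 1))
        if hi - lo <= tol:
            return n
        n = int(np.ceil(n * 1.05))   # coarse geometric step

print(n_for_sd_tolerance(0.02))  # on the order of 10^4 simulations
```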
There are circumstances where a user may wish to set the MT tolerance in units of time, e.g. ± 10 s, giving \(Tol_{\text{MT,seconds}}\) = 20 s. The tolerance of the FAM-based CIs cannot be directly expressed in seconds, but this is not a particular problem as \(Tol_{\text{MT,seconds}}\) is easily converted to a decimal (Eq. 69). This needs to be calculated at runtime as the MT is generally unknown a priori.
$$Tol_{\text{MT}} = \frac{{Tol_{\text{MT, seconds}} }}{MT}$$
(69)
In the previous case study, \(W_{\text{EPC}}/2\) and \(W_{\text{ERD}}\) are slightly larger than \(W_{\text{MT}}/2\). As \(W_{\text{ERD}}\) is greater than \(W_{\text{MT}}/2\), there is greater relative variability expressed in the AC than in the MT (see appendix). This is due to the nature of the theoretical case study model: the coefficient of variation (CV) of the exit time of the kth agent decreases with increasing k (see Eq. 70), so the earlier exiting agents exhibit the most relative variability, even though the absolute variability of the kth agent, i.e. \(SD_{k}\), increases with increasing k. The ERD gives greater weight to larger differences due to the squaring term (Eq. 22), leading to a small increase in \(W_{\text{ERD}}\) compared to \(W_{\text{MT}}/2\). In practical egress simulations there is potentially larger variation in the exit times of agents other than the last agent to exit, which would be reflected in the FAM-based CIs of the AC.
$$CV_{k} = \frac{{SD_{k} }}{{\bar{t}_{k} }} = \frac{{\sqrt {k \times 180} }}{k \times 12} = \frac{\sqrt 5 }{2\sqrt k }$$
(70)
The CI-based convergence scheme has a higher computational cost than Ronchi et al.'s original method: it requires generating the bootstrap samples, ordering the values, and calculating the BCa coefficients. However, it is anticipated that this overhead will be small compared to the computational cost of the evacuation simulations. If bootstrapping takes a significant amount of time relative to the evacuation simulations, the algorithm can be optimised by increasing the number of simulations (K) performed between convergence checks. The calculations in the convergence scheme are ordered so that the most computationally expensive bootstrapping, for the FAM-based CIs, is not performed until the MT and SD have been deemed to have converged.
The CI method also requires additional memory compared to Ronchi et al.'s original method, although this is not expected to be a serious limitation. The largest requirement is the need to store all n egress curves. For example, a 'large' problem consisting of a scenario with 100,000 agents repeated 1000 times would require less than 500 MB of RAM (100,000 × 1000 exit times at single precision is approximately 400 MB). This is a modest memory requirement for a modern computer.
Although the use of CIs addresses the first limitation noted by Ronchi et al., the other limitations they discussed still apply to the work performed here. For instance, egress curves can appear identical even when the dynamics of the evacuation differ. Furthermore, different egress curves compared to the AC can return the same ERD, EPC or SC measures. The convergence behaviours of the FAM-based CIs of the AC are therefore imperfect proxies for ensuring that all the variability in an egress simulation is adequately represented. However, these measures are superior to using the convergence of the MT alone when trying to determine whether enough simulations have been performed to represent the true variability of the evacuation scenario.

7 Concluding Comments

Ronchi et al. [17] pioneered the use of the convergence behaviour of the AC to better represent the convergence of the evacuation scenario in general. In this paper, their original approach was modified to use CIs. The CI approach has been shown to give reliable convergence, with reference to the interval estimates of the population parameters, when applied to the mean TET, the standard deviation of TETs, and the AC. The FAM-based CIs for the AC were calculated using a novel application of bootstrapping, FAMs, and a bisection algorithm. Although the methods described in this paper are more complex than Ronchi et al.'s original convergence indicators, the resulting CIs have a more standard statistical interpretation. The choice of convergence tolerances is therefore more straightforward, as it is simply the required statistical precision of the statistics of interest. The case study and the algebraic study (see appendix) of the ERD and EPC true errors showed that there can be equivalency between those errors and the error in MT; these studies provided guidance for setting the tolerances of ERD and EPC.
The convergence scheme is easily extended to include other statistics of interest. There are many more parameters that may be of interest, e.g. measures of congestion, that may not have a known CI. For example, Smedberg [45] suggested several statistics, such as exit flow rates, which are represented by time series data; FAM-based CIs and convergence for those statistics could be determined using the methods described in this paper. The calculation of the CIs and convergence should be implemented in software so that end users have easy access to the technology.
The CIs can also be applied to experimental or simulated trial data without the convergence scheme suggested here, in cases where the number of trials performed is limited by other criteria.

Acknowledgements

The author would like to thank John Ewer and Peter Lawrence for proofreading an earlier iteration of this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


Appendix: Equivalency of EPC and ERD Errors of the Mean Egress Curve with the Error in Mean Total Evacuation Time Under Certain Conditions

Under certain conditions it can be shown that the normalised errors \(\varepsilon_{\text{EPC}}\) and \(\varepsilon_{\text{ERD}}\), between the sample mean egress curve AC(n) and the population mean egress curve AC(∞), are equivalent to the normalised error \(\varepsilon_{\text{MT}}\) between the sample mean TET (MT(n)) and the population mean TET (MT(∞)). The \(\varepsilon_{\text{MT}}\) is given by Eq. 71.
$$\varepsilon_{\text{MT}} = \frac{{MT^{\left( \infty \right)} - MT^{\left( n \right)} }}{{MT^{\left( n \right)} }}$$
(71)
The \(\varepsilon_{\text{EPC}}\) can be expressed as Eq. 72.
$$\varepsilon_{\text{EPC}} = EPC\left( {{\mathbf{AC}}^{\left( \infty \right)} , {\mathbf{AC}}^{{\left( n \right)}} } \right) - 1 = \frac{{\left( {\mathop \sum \nolimits_{k = 1}^{m} ac_{k}^{\left( \infty \right)} ac_{k}^{\left( n \right)} } \right)}}{{\left( {\mathop \sum \nolimits_{k = 1}^{m} ac_{k}^{\left( n \right)} ac_{k}^{\left( n \right)} } \right)}} - 1$$
(72)
Substituting \(ac_{k}^{\left( \infty \right)}\) with \(\gamma_{k} MT^{\left( \infty \right)}\) and \(ac_{k}^{\left( n \right)}\) with \(\lambda_{k} MT^{\left( n \right)}\) in Eq. 72 leads to Eq. 73. Note that \(ac_{m} \equiv MT\), so \(\gamma_{m} = \lambda_{m} = 1\).
$$\varepsilon_{\text{EPC}} = \frac{{\left( {\mathop \sum \nolimits_{k = 1}^{m} \gamma_{k} MT^{\left( \infty \right)} \lambda_{k} MT^{\left( n \right)} } \right)}}{{\left( {\mathop \sum \nolimits_{k = 1}^{m} \left( {\lambda_{k} MT^{\left( n \right)} } \right)^{2} } \right)}} - 1$$
(73)
Simplifying Eq. 73 leads to Eq. 74.
$$\varepsilon_{\text{EPC}} = \frac{{\left( {\left( {\mathop \sum \nolimits_{k = 1}^{m - 1} \gamma_{k} \lambda_{k} } \right) + 1} \right)MT^{\left( \infty \right)} }}{{\left( {\left( {\mathop \sum \nolimits_{k = 1}^{m - 1} \lambda_{k}^{2} } \right) + 1} \right)MT^{\left( n \right)} }} - 1$$
(74)
When Eq. 75 is satisfied then \(\varepsilon_{\text{EPC}}\) will be approximately equal to \(\varepsilon_{\text{MT}}\) (Eq. 76).
$$\left( {\left( {\mathop \sum \limits_{k = 1}^{m - 1} \gamma_{k} \lambda_{k} } \right) + 1} \right) \approx \left( {\left( {\mathop \sum \limits_{k = 1}^{m - 1} \lambda_{k}^{2} } \right) + 1} \right)$$
(75)
$$\varepsilon_{\text{EPC}} \approx \frac{{MT^{\left( \infty \right)} }}{{MT^{\left( n \right)} }} - 1 = \frac{{MT^{\left( \infty \right)} - MT^{\left( n \right)} }}{{MT^{\left( n \right)} }} = \varepsilon_{\text{MT}}$$
(76)
Two specific examples of when Eq. 75 is satisfied are provided here. The first is when Eq. 77 is satisfied; the second is when both Eqs. 78 and 79 are satisfied.
$$\gamma_{k} \approx \lambda_{k} \forall k$$
(77)
$$\mathop \sum \limits_{k = 1}^{m - 1} \gamma_{k} \lambda_{k} \ll 1$$
(78)
$$\mathop \sum \limits_{k = 1}^{m - 1} \lambda_{k}^{2} \ll 1$$
(79)
The \(\varepsilon_{\text{ERD}}\) (Eq. 80) can be expressed as Eq. 81.
$$\varepsilon_{\text{ERD}} = ERD\left( {{\mathbf{AC}}^{\left( \infty \right)} , {\mathbf{AC}}^{{\left( {n} \right)}} } \right) = \sqrt {\frac{{\mathop \sum \nolimits_{k = 1}^{m} \left( {ac_{k}^{\left( \infty \right)} - ac_{k}^{\left( n \right)} } \right)^{2} }}{{\mathop \sum \nolimits_{k = 1}^{m} \left( {ac_{k}^{\left( n \right)} } \right)^{2} }}}$$
(80)
$$\varepsilon_{\text{ERD}} = \sqrt {\frac{{\left( {\mathop \sum \nolimits_{k = 1}^{m - 1} \left( {ac_{k}^{\left( \infty \right)} - ac_{k}^{\left( n \right)} } \right)^{2} } \right) + \left( {MT^{\left( \infty \right)} - MT^{\left( n \right)} } \right)^{2} }}{{\left( {\mathop \sum \nolimits_{k = 1}^{m - 1} \left( {ac_{k}^{\left( n \right)} } \right)^{2} } \right) + \left( {MT^{\left( n \right)} } \right)^{2} }}}$$
(81)
Two examples of approximate equivalency of \(\varepsilon_{\text{ERD}}\) and \(\left| {\varepsilon_{\text{MT}} } \right|\) (Eq. 82) are given here. The first occurs when Eq. 83 is true (using the identity in Eq. 84); the simplest case in which Eq. 83 is satisfied is when Eq. 85 is true. The second example is when both Eqs. 86 and 87 are true.
$$\varepsilon_{\text{ERD}} \approx \left| {\frac{{MT^{\left( \infty \right)} - MT^{\left( n \right)} }}{{MT^{\left( n \right)} }}} \right| = \left| {\varepsilon_{\text{MT}} } \right|$$
(82)
$$\frac{{\mathop \sum \nolimits_{k = 1}^{m - 1} \left( {ac_{k}^{\left( \infty \right)} - ac_{k}^{\left( n \right)} } \right)^{2} }}{{\mathop \sum \nolimits_{k = 1}^{m - 1} \left( {ac_{k}^{\left( n \right)} } \right)^{2} }} \approx \frac{{\left( {MT^{\left( \infty \right)} - MT^{\left( n \right)} } \right)^{2} }}{{\left( {MT^{\left( n \right)} } \right)^{2} }}$$
(83)
$$\frac{x}{y} \equiv \frac{\epsilon x}{\epsilon y} \equiv \frac{x + \epsilon x}{y + \epsilon y} = \frac{{x\left({1 + \epsilon} \right)}}{{y\left({1 + \epsilon} \right)}}$$
(84)
$$\frac{{\left| {ac_{k}^{\left( \infty \right)} - ac_{k}^{\left( n \right)} } \right|}}{{ac_{k}^{\left( n \right)} }} \approx \frac{{\left| {MT^{\left( \infty \right)} - MT^{\left( n \right)} } \right|}}{{MT^{\left( n \right)} }}\forall k$$
(85)
$$\left( {\mathop \sum \limits_{k = 1}^{m - 1} \left( {ac_{k}^{\left( \infty \right)} - ac_{k}^{\left( n \right)} } \right)^{2} } \right) \ll \left( {MT^{\left( \infty \right)} - MT^{\left( n \right)} } \right)^{2}$$
(86)
$$\left( {\mathop \sum \limits_{k = 1}^{m - 1} \left( {ac_{k}^{\left( n \right)} } \right)^{2} } \right) \ll \left( {MT^{\left( n \right)} } \right)^{2}$$
(87)
Figure 12 depicts two sets of contrived egress curves, with their errors reported in Table 12. The first set of curves satisfies both Eqs. 77 and 85, and all the errors are equal (0.2). The second set satisfies Eqs. 78, 79, 86 and 87, and \(\varepsilon_{\text{ERD}}\) and \(\varepsilon_{\text{EPC}}\) are approximately equal to \(\varepsilon_{\text{MT}}\) (0.19 ≈ 0.2).
Table 12
MT, EPC and ERD Errors for Two Sets of Contrived Curves (Fig. 12). The Values Demonstrate the Equivalency of the Error Measures Under Certain Conditions

| Set of curves | \(\varepsilon_{\text{MT}}\) | \(\varepsilon_{\text{ERD}}\) | \(\varepsilon_{\text{EPC}}\) |
|---|---|---|---|
| 1 | 0.2 | 0.2 | 0.2 |
| 2 | 0.2 | 0.19 | 0.19 |
By extending the analysis of the ERD relationship it can also be seen, from Eq. 81, that if Eq. 88 is true then \(\varepsilon_{\text{ERD}}\) will be greater than \(\left| {\varepsilon_{\text{MT}} } \right|\), indicating that there is more relative difference in the rest of the egress curve than in the TET; one specific circumstance in which this occurs is when Eq. 89 is true. Similarly, if Eq. 90 is true then \(\varepsilon_{\text{ERD}}\) will be less than \(\left| {\varepsilon_{\text{MT}} } \right|\), showing that there is less relative difference in the rest of the egress curve than in the TET; one specific circumstance in which this occurs is when Eq. 91 is true.
$$\frac{{\mathop \sum \nolimits_{k = 1}^{m - 1} \left( {ac_{k}^{\left( \infty \right)} - ac_{k}^{\left( n \right)} } \right)^{2} }}{{\mathop \sum \nolimits_{k = 1}^{m - 1} \left( {ac_{k}^{\left( n \right)} } \right)^{2} }} > \frac{{\left( {MT^{\left( \infty \right)} - MT^{\left( n \right)} } \right)^{2} }}{{\left( {MT^{\left( n \right)} } \right)^{2} }}$$
(88)
$$\frac{{\left| {ac_{k}^{\left( \infty \right)} - ac_{k}^{\left( n \right)} } \right|}}{{ac_{k}^{\left( n \right)} }} > \frac{{\left| {MT^{\left( \infty \right)} - MT^{\left( n \right)} } \right|}}{{MT^{\left( n \right)} }}\forall k$$
(89)
$$\frac{{\mathop \sum \nolimits_{k = 1}^{m - 1} \left( {ac_{k}^{\left( \infty \right)} - ac_{k}^{\left( n \right)} } \right)^{2} }}{{\mathop \sum \nolimits_{k = 1}^{m - 1} \left( {ac_{k}^{\left( n \right)} } \right)^{2} }} < \frac{{\left( {MT^{\left( \infty \right)} - MT^{\left( n \right)} } \right)^{2} }}{{\left( {MT^{\left( n \right)} } \right)^{2} }}$$
(90)
$$\frac{{\left| {ac_{k}^{\left( \infty \right)} - ac_{k}^{\left( n \right)} } \right|}}{{ac_{k}^{\left( n \right)} }} < \frac{{\left| {MT^{\left( \infty \right)} - MT^{\left( n \right)} } \right|}}{{MT^{\left( n \right)} }}\forall k$$
(91)
References
2. Kuligowski ED, Peacock RD, Hoskins BL (2010) A review of building evacuation models, 2nd edn. National Institute of Standards and Technology (NIST) technical note 1680
12. Revised guidelines for evacuation analysis for new and existing passenger ships, IMO MSC.1/Circ 1533, 6 June 2016
15. Meacham B, Lord J, Moore A, Fahy R, Proulx G, Notarianni K (2004) Investigation of uncertainty in egress models and data. In: Proceedings of the 3rd international symposium on human behaviour in fire, Belfast, UK, September 01–03, 2004, NRCC-47308, pp 419–428
17. Ronchi E, Reneke PA, Peacock RD (2014) A method for the analysis of behavioural uncertainty in evacuation modelling. Fire Technol 50:1545–1571
20. Jullien Q, Paillat J-L, Thiry-Muller A, Lardet P, Pinoteau N (2019) Use of statistical approach on stochastic building egress simulations applied to building EXODUS and pedestrian dynamics. In: Proceedings of Interflam 2019, pp 705–715
22. Bluman AG (2007) Elementary statistics: a step by step approach, 6th edn. McGraw-Hill, New York. ISBN 978-0-07-304825-3
23. Byrne MD (2013) How many times should a stochastic model be run? An approach based on confidence intervals. In: Proceedings of the 12th international conference on cognitive modeling, Ottawa, pp 445–450
24. Driels MR, Shin YS (2004) Determining the number of iterations for Monte Carlo simulations of weapon effectiveness. Technical report NPS-MAE-04-005, Naval Postgraduate School, Defense Threat Reduction Agency, April 2004
26. Winston WL (2000) Simulation modeling using @RISK. Duxbury Press, California. ISBN 978-0534380595
27. Jackel P (2003) Monte Carlo methods in finance. Wiley, Hoboken. ISBN 978-0471497417
28. Oberle W (2015) Monte Carlo simulations: number of iterations and accuracy. ARL-TN-0684, US Army Research Laboratory, July 2015
29. Galea ER, Deere S, Brown R, Filippidis L (2014) An evacuation validation data set for large passenger ships. In: Weidmann U, Kirsch U, Schreckenberg M (eds) Pedestrian and evacuation dynamics 2012. Springer, Cham
30. Peacock RD, Reneke PA, Davis WD, Jones WW (1999) Quantifying fire model evaluation using functional analysis. Fire Saf J 33(3):167–184
31. Barrett JP, Goldsmith L (1976) When is n sufficiently large? Am Stat 30(2):67–70
32. Efron B, Tibshirani R (1993) An introduction to the bootstrap. Chapman & Hall/CRC, Boca Raton. ISBN 0-412-04231-2
42. Steele JM (2004) The Cauchy–Schwarz master class: an introduction to the art of mathematical inequalities. The Mathematical Association of America, Washington, DC. ISBN 978-0521546775
45. Smedberg E (2019) The analysis of results of stochastic evacuation models. Report 5587, ISRN: LUTVDG/TVBB–5587–SE, Lund University, Sweden