
4. Size Distribution and Seismic Hazard

  • Open Access
  • 2025
  • OriginalPaper
  • Book chapter

Abstract

This chapter explores the intricate relationship between size distribution and seismic hazard, with a particular focus on extreme events and their profound implications for risk management. The text begins by examining the nature of extreme events, which are characterised by their deviation from the average behaviour of evolving systems, and shows how their occurrence is governed by the thickness of the tail of the probability distribution that defines the occurrence of events of a given size. The chapter then introduces the concept of power-law distributions, in particular the open-ended power law used to model the size distribution of seismic events. It discusses the properties of heavy-tailed distributions, including their fat tails, and how such distributions can give rise to unexpected and very large events, known as Black Swan events. The chapter also covers the concept of Dragon Kings, outliers of power-law distributions that result from positive feedback mechanisms, and provides examples of Dragon Kings in various data sets, including the distribution of city sizes and the distribution of earthquake energies. The text then turns to the main aspects of heavy-tailed distributions, including the unreliability of historical averages for prediction, the increasing differences between successively larger observations, and the non-decreasing ratio of successive record values, and it emphasises the importance of understanding these aspects for effective risk management strategies. The chapter also treats the scale-invariance property of power laws and its implications for the laws of nature, and it examines the Gutenberg-Richter frequency-magnitude relation and the Omori law for aftershock decay, both of which lack a characteristic scale. The text then examines the probability of unexpectedly large events in systems governed by different probability distributions, including Gaussian, exponential, and stretched exponential distributions, highlighting the higher probability of large events in systems described by exponential and stretched exponential distributions compared with Gaussian ones. The chapter further covers the open-ended power-law size distribution and its application to seismic events in Southern California, and it discusses the observation of power-law scaling of seismic events and deviations from the power law that may lie within the 95% confidence limits of a Poisson process. The text then examines the open-ended tapered power law and the upper truncated power law, providing a detailed analysis of their properties and applications, and discusses the estimation of the parameters of these power laws and the confidence limits for the power-law fit. The chapter concludes by highlighting the utility of power-law distributions for understanding seismic hazard and the limitations and benefits of using these distributions for hazard assessment.

4.1 Fat Tails, Power Laws, Extreme, and Unexpected Events

Extreme events can be considered as large deviations from the average behaviour in an evolving system. They are governed by the thickness of the tail of the probability distribution that defines the occurrence of events of a given size, i.e. their probability increases with the thickness of the tail of the underlying size distribution. One characteristic of heavy-tailed distributions is that a data set usually contains a few values that are very large compared to all the others. Extreme events are in fact part of the nonlinear dynamics of many complex systems.
The open-ended power law, \(N\left (\geq P\right ) = \alpha P^{-\beta }\), with \(\beta >0\) and the probability density function \(f\left (P,\beta \right ) = \beta P_{min}^{\beta } P^{-\beta -1}\), see Eq. (4.3), is an example of a fat-tailed distribution that for large potency P falls off more slowly than an exponential and much more slowly than a Gaussian, which is thin-tailed. Sooner or later an event at the end of a thick tail is going to happen, and it is going to be surprisingly big. These extreme events, defying normal-probability expectations, are also called Black Swan events (Taleb, 2007). While unexpected at first, they tend to be rationalised in hindsight, creating the impression that they could have been predicted or avoided.
Laherrere and Sornette (1999) and Sornette (2009) identified a number of data sets showing power laws with outliers that they claim are the result of positive feedback mechanisms. They call these events Dragon Kings. They document their presence in six different examples: distribution of city sizes, distribution of acoustic emissions associated with material failure, distribution of velocity increments in hydrodynamic turbulence, distribution of financial drawdowns, distribution of the energies of epileptic seizures, and distribution of earthquake energies.
The main aspects of heavy-tailed distributions are that historical averages are unreliable for prediction, that differences between successively larger observations increase, and that the ratio of successive record values does not decrease. An important lesson to be learned is that risk management strategies and planning based on a normal distribution can lead to serious under-preparation if the problem at hand in fact follows a power law distribution—the risk is in the tail of the distribution. The thicker the tail, the higher the probability of being surprised, so we should expect the unexpected.
If we order a data set drawn from a power law distribution, then the ratio between two consecutive observations also has a power law distribution. The sum of two power law distributions, \(f\left (P,\beta _{1}\right )\) and \(f\left (P,\beta _{2}\right )\), is a fat-tailed distribution very close to the power law with exponent \(\min \left (\beta _{1},\beta _{2}\right )\), but only for large values of P, see Fig. 4.1 left.
Fig. 4.1
Sum of power law distributions (left) and bodies and tails of selected thin- and thick-tailed probability distributions (right)
The power law has the property of scale invariance, i.e. the relative change \(N\left (\geq kP\right ) / N\left (\geq P\right ) = k^{-\beta }\) is independent of P, and therefore it lacks a characteristic scale. The scale transformation changes all lengths by the same factor. The question is: are the laws of nature invariant with respect to scale? This was already answered in the negative by Galileo (1638),1 see also a very good discussion of the subject in Weingartner (1996), who stated that, since atoms cannot be enlarged or reduced, laws in which basic physical constants play a role are not scale invariant. Larger systems do not contain larger atoms but just more atoms. The earthquake size distribution scaling can therefore only exist within a certain range of sizes; beyond that range, either the exponent of the distribution or the nature of the distribution needs to change to secure a finite energy or potency release.
Typical examples of power law scaling are the Gutenberg and Richter magnitude frequency relation and the Omori law for aftershock decay. Both lack a characteristic scale, i.e. there is no upper limit on magnitude in the former and there is no end to the duration of aftershock sequences in the latter.
The distance from the expected value of a given probability distribution is defined by \(ns_{d}\), where \(s_{d}\) is the standard deviation, see Fig. 4.1 right as an example. The probability of having an unexpectedly large event in a system governed by the Gaussian distribution of sizes is \(\Pr \left (ns_{d}\right )_{Gauss} \sim \exp \left [-\left (ns_{d}\right )^{2}/\left (2s_{d}^{2}\right )\right ] = \exp \left (-n^{2}/2\right )\). The probability of having an unexpectedly large event in a system governed by the exponential distribution of sizes is \(\Pr \left (ns_{d}\right )_{Exp} \sim \exp \left (-\left |ns_{d}\right |/s_{d}\right ) = \exp \left (-n\right )\), and by the stretched exponential distribution, \(\Pr \left (ns_{d}\right )_{SE} \sim \exp \left [-\left (\left |ns_{d}\right |/s_{d}\right )^{q}\right ] = \exp \left (-n^{q}\right )\), where \(q < 1\). The ratio \(\Pr \left (ns_{d}\right )_{Exp}/\Pr \left (ns_{d}\right )_{Gauss} = \exp \left (-n+n^{2}/2\right )\) and the ratio \(\Pr \left (ns_{d}\right )_{SE}/\Pr \left (ns_{d}\right )_{Gauss} = \exp \left [\left (n^{2}-2n^{q}\right )/2\right ]\).
Thus, the probability of a \(6s_{d}\) or 6-sigma event in a system described by the exponential distribution is \(10^{5}\) times more likely than in a system described by the Gaussian distribution. The tail of the stretched exponential distribution is fatter than the exponential one, and, for \(q=1/2\), it gives a 35 times higher probability of a \(6s_{d}\) event than the exponential. The power law tail is yet fatter than the stretched exponential.
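As a quick check of these ratios, the short Python sketch below (function and variable names are ours, chosen for illustration only) reproduces the factors of roughly \(10^{5}\) and 35 quoted above for a \(6s_{d}\) event.

```python
import math

def tail_prob(n, kind, q=0.5):
    """Unnormalised probability of an event n standard deviations out,
    for the three tail models discussed above."""
    if kind == "gauss":
        return math.exp(-n**2 / 2)
    if kind == "exp":
        return math.exp(-n)
    if kind == "stretched":
        return math.exp(-n**q)
    raise ValueError(kind)

n = 6
print(tail_prob(n, "exp") / tail_prob(n, "gauss"))      # ~1.6e5, i.e. ~10^5
print(tail_prob(n, "stretched") / tail_prob(n, "exp"))  # ~35 for q = 1/2
```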

4.2 Open-Ended Power Law (OE)

In a hard rock mass with random heterogeneities and with some regular geological structures undergoing loading, there are patches of rock resisting deformation where stresses are increasing, patches of diffusion where stresses are decreasing, and there are some passive volumes not influenced by loading. Locally, stress build-up and/or strength degradation may lead to fast relaxation via deformation jumps, or seismic events, that radiate and dissipate energy across the system. Over time, the size distribution of these events will, within a certain range, follow the power law, \(N\left (\geq P\right ) = \alpha P^{-\beta }\), where \(N\left (\geq P\right )\) is the number of events not smaller than P, where P is seismic potency. The same relation applies to seismic moment, M, and the radiated energy E. The parameter \(\alpha \) measures the level of seismic activity, and \(\beta \) is the exponent.
Remarkably, the distribution of sizes of earthquakes for Southern California is observed to be a power law with a constant exponent over more than six orders of magnitude (e.g. Christensen et al., 2002). This scaling is the property of regional dynamics rather than individual faults where it is more complex (e.g. Wesnousky et al., 1983; Main & Burton, 1984; Kijko & Stankiewicz, 1987; Wesnousky, 1994; Wiemer & Wyss, 2002). However, some deviations from the power law may lie within the 95% confidence limits of the Poisson process and therefore may not be significant enough to be regarded as “characteristic” or “bi-modal” (Jackson & Kagan, 2006; Kagan et al., 2012).
In mines, the power law scaling has been observed for seismic events as large as \(m = 5.0\) (Mendecki et al., 1988) and recently for small fractures with \(m = -4.0\) to \(-0.3\) (Kwiatek et al., 2010), although with varying exponents.
The open-ended (OE) potency size distribution power law can be written as
$$\displaystyle \begin{aligned} N\left(\geq P\right)=\alpha P^{-\beta},{} \end{aligned} $$
(4.1)
where P is potency (or energy, or seismic moment), \(\alpha \) estimates the number of seismic events with potency not smaller than one, \(\alpha =N(P\geq 1)\), and \(\beta >0\) is the exponent—the lower the \(\beta \) the heavier the tail of the distribution. Taking the logarithm of equation (4.1) gives \(\log N\left (\geq P\right ) = \log \alpha - \beta \log P\), which is the well-known (Gutenberg & Richter, 1944) relation
$$\displaystyle \begin{aligned} \log N\left(\geq m\right)=a-bm,{} \end{aligned} $$
(4.2)
where \(a = \log \alpha \), \(b = \beta \), and magnitude \(m = \log P\). The OE, or Gutenberg-Richter, relation has no upper limit on the event size. The assumption that solving \(\alpha P_{max1}^{-\beta } = 1\) gives the so-called one largest possible event, or the next largest event, \(P_{max1} = \alpha ^{1/\beta }\) or \(m_{max1} = a/b\), is incorrect. Note that the probability of having an event greater than or equal to \(P_{max1}\) in the OE distribution is small, but finite, and it increases as \(\beta \) decreases, \(\Pr \left (\geq P_{max1}\right ) = P_{min}^{\beta }/\alpha \).
The cumulative distribution function, \(F(P),\) i.e. the probability of having an event with a potency smaller than P, is \(\Pr \left (\leq P\right ) = N(\leq P) / N\left (\geq P_{min}\right ) = 1 - P_{min}^{\beta }P^{-\beta }\), where \(P_{min}\) is the minimum observed potency that fits the power law. The survival function \(S\left (P\right ) = \Pr \left (\geq P\right ) = 1 - \Pr \left (\leq P\right ) = P_{min}^{\beta }P^{-\beta }\). Parameter \(P_{min}\) is limited by the sensitivity of the monitoring system. One can expect that an increase in system sensitivity by \(\Delta P\) will increase the number of recorded events by \(N\left (\geq P_{min}-\Delta P\right ) / N\left (\geq P_{min}\right ) = P_{min}^{\beta } \left (P_{min}-\Delta P\right )^{-\beta }\). For an order of magnitude drop in \(P_{min}\), i.e. \(P_{min} - \Delta P = P_{min}/10\), one can expect to record \(\left (1/10\right )^{-\beta }\) times as many events, which for \(\beta = 1\) gives a ten-fold increase. The probability density function for the OE relation is
$$\displaystyle \begin{aligned} f\left(P\right)=dF\left(P\right)/dP=\beta P_{min}^{\beta}P^{-\beta-1}=\left(\frac{\beta}{P}\right)\left(\frac{P_{min}}{P}\right)^{\beta}.{} \end{aligned} $$
(4.3)
Figure 4.2 illustrates the behaviour of probability density function of the open-ended power law for different values of \(\beta \).
Fig. 4.2
Illustration of the PDF of the OE power law for different values of \(\beta \)
The number of events within the potency range \(P_{1}<P_{2}\) is \(N\left (P_{1},P_{2}\right ) = \alpha \left (P_{1}^{-\beta }-P_{2}^{-\beta }\right )\), and the probability of having this event is \(\Pr \left (P_{1},P_{2}\right ) = F\left (P_{2}\right ) - F\left (P_{1}\right ) = P_{min}^{\beta } \left (P_{1}^{-\beta }-P_{2}^{-\beta }\right )\). The mean value of the distribution \(\left \langle P\right \rangle = \intop _{P_{min}}^{\infty }Pf\left (P\right )dP / \intop _{P_{min}}^{\infty }f\left (P\right )dP = \beta P_{min} / \left (\beta -1\right )\), which is finite for \(\beta > 1\). Note that the mean of any finite sample is always finite, but as we draw more samples from the power law distributed population, this sample average tends to increase. One might mistakenly conclude that there is a time trend in such data.
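The drift of the sample mean is easy to demonstrate by Monte Carlo simulation. The minimal sketch below, with assumed parameter values, draws events from the OE power law by inverse-transform sampling of the survival function \(S\left (P\right ) = \left (P_{min}/P\right )^{\beta }\) and prints the running average, which keeps growing with the sample size when \(\beta \leq 1\).

```python
import numpy as np

rng = np.random.default_rng(0)
beta, p_min = 0.8, 1.0           # beta <= 1: the population mean does not exist

# Inverse-transform sampling from the OE power law, S(P) = (P_min / P)**beta
u = rng.uniform(size=1_000_000)
p = p_min * u ** (-1.0 / beta)

# The running sample mean keeps drifting upwards as more events are drawn,
# which can be mistaken for a time trend in the data.
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, p[:n].mean())
```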

4.2.1 Selection of \(P_{min}\)

\(P_{min}\), or \(\log P_{min}\), can be interpreted technically as the minimum observed potency that fits the data. A correctly selected \(P_{min}\) should also define the completeness of the catalogue, i.e. the lowest potency at which all events within a volume and time of interest, \(\left (\Delta V,\Delta t\right )\), are detected. While different methods for an objective and/or automatic threshold detection have been proposed, in practice the most reliable is the visual inspection of the potency frequency data. The potency frequency can be presented as a cumulative plot or an incremental plot. In the cumulative plot, the number of events increases or stays the same from high to low potencies. The incremental plot shows the number of events in each potency bin, \(\Delta P\), including the empty ones.
It is advisable to inspect both the cumulative and the incremental plot, since each value in a cumulative plot depends on all the preceding values and may hide fine details. It is important not to underestimate \(\log P_{min}\), since this may lead to underestimation of \(\beta \) and thus to overstating the hazard. The properly determined \(P_{min}\) in a well-behaved data set is the minimum potency above which the estimates of \(\beta \) are stable. Data below the \(P_{min}\) threshold should not be used in fitting the parameters of the power law distribution. However, in some applications, e.g. short-term hazard, dynamic triggering detection, and stability analysis, data below \(P_{min}\) are very useful.
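A rough numerical illustration of this stability criterion is sketched below for a synthetic, perfectly complete catalogue with \(\beta = 1\) above \(\log P = -3\); the threshold scan uses the maximum-likelihood estimator of Eq. (4.5) introduced in the next subsection, and all parameter values are assumed for illustration only.

```python
import numpy as np

def beta_oe(log_p, log_p_min):
    """Maximum-likelihood beta of the OE power law, Eq. (4.5), for a
    candidate threshold log_p_min; returns (beta, number of events used)."""
    sel = log_p[log_p >= log_p_min]
    return np.log10(np.e) / (sel.mean() - log_p_min), sel.size

# Synthetic complete catalogue: beta = 1.0 above log P = -3.0 (assumed values)
rng = np.random.default_rng(1)
log_p = -3.0 + rng.exponential(scale=np.log10(np.e) / 1.0, size=20_000)

# Scan candidate thresholds; beta should stabilise at and above the true P_min,
# while thresholds chosen too low bias beta downwards.
for lpm in np.arange(-3.5, -1.0, 0.25):
    b, n = beta_oe(log_p, lpm)
    print(f"log P_min = {lpm:5.2f}   n = {n:6d}   beta = {b:.3f}")
```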

4.2.2 Estimation of \(\alpha \) and \(\beta \) of the OE Relation

The \(\beta \)-value in the OE potency frequency relation \(N\left (\geq P\right ) = \alpha P^{-\beta }\) can be estimated by maximising the likelihood function (Utsu, 1964; Aki, 1965),
$$\displaystyle \begin{aligned} L\left(P_{j};\beta\right)=\prod_{j=1}^{n}f\left(P_{j}\right)=\prod_{j=1}^{n}\beta P_{min}^{\beta}P_{j}^{-\beta-1},{} \end{aligned} $$
(4.4)
where \(f\left (P\right )\) is the probability density function, n is the number of events above \(P_{min}\), and \(P_{min}\) is the potency selected to fit the distribution. The values of the probability density function \(f(P_{j})\) are very small, and when multiplied together the result may be too small to be accurately represented by a computer. This is why it is advisable to maximise the log likelihood function, \(\log L(P_{j};\beta )=\sum _{j=1}^{n}\log f\left (P_{j}\right )\), where multiplications are replaced by summation.
The \(\log L(P_{j};\beta )\) is monotonic giving the same maximum as the likelihood function \(L(P_{j};\beta )\), and it is also easier to differentiate to find its maximum. The point at which the \(\log L(P_{j};\beta )\) is maximum with respect to \(\beta \) is the solution of the equation \(\partial \log L\left (P_{j};\beta \right ) / \partial \beta = n / \left [\beta \ln \left (10\right )\right ] + \sum _{j=1}^{n}\log P_{min} - \sum _{j=1}^{n}\log P_{j} = 0\), which gives \(\left (n\log e\right )/\beta + n \log P_{min} - \sum _{j=1}^{n}\log P_{j} = 0\), and after simple algebra, the estimate of \(\beta \) of the open-ended relation is
$$\displaystyle \begin{aligned} \hat{\beta}_{OE}=\log\left(e\right)/\left(\overline{P}-\log P_{min}\right),{} \end{aligned} $$
(4.5)
where \(\log \left (e\right ) = 0.4343\) and \(\overline {P} = \left (1/n\right ) \sum _{j=1}^{n}\left (\log P_{j}\right )\). It follows that studying \(\beta \), or the b-value, is effectively equivalent to studying the mean \(\log P\) or mean magnitude. To correct for bias, one can multiply the \(\hat {\beta }_{OE}\) by \((n-1)/n\), which makes a difference only for small data sets.
For a sufficiently large data set, the standard deviation can be obtained from the second derivative of the log likelihood, \(s_{d}\left (\beta \right ) = \left [-\partial ^{2}\log L(\beta )/\partial \beta ^{2}\right ]^{-1/2} = \hat {\beta }_{OE}/\sqrt {n\log \left (e\right )}\) (Aki, 1965). Shi and Bolt (1982) gave a useful expression for the standard error of \(\beta \) when it varies slowly in time, which is very much the case in mines, and for large n,
$$\displaystyle \begin{aligned} s_{d}\left(\beta_{OE}\right)=\frac{\hat{\beta}_{OE}^{2}}{\log\left(e\right)}\sqrt{\frac{\sum_{j=1}^{n}\left(\log P_{j}-\overline{P}\right)^{2}}{n\left(n-1\right)}}.{} \end{aligned} $$
(4.6)
The parameter \(\hat {\alpha }_{OE}\) can be estimated from \(\log \hat {\alpha }_{OE} = \hat {\beta }_{OE} \overline {P} + \left (1/n\right ) \sum _{j=1}^{n}\)\(\log N\left (\geq P_{j}\right )\).
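The estimators above can be collected into a few lines of code. The sketch below is a minimal implementation of Eqs. (4.5) and (4.6), the bias correction, and the \(\hat {\alpha }_{OE}\) estimate; the function name and interface are ours, and the input is assumed to be an array of base-10 log-potencies.

```python
import numpy as np

LOG_E = np.log10(np.e)                                  # 0.4343

def fit_oe(log_p, log_p_min):
    """Maximum-likelihood fit of the OE power law to base-10 log-potencies.
    Returns beta (bias-corrected), two standard deviations, and alpha."""
    lp = np.sort(log_p[log_p >= log_p_min])
    n = lp.size
    p_bar = lp.mean()
    beta = LOG_E / (p_bar - log_p_min)                  # Eq. (4.5)
    beta *= (n - 1) / n                                 # small-sample bias correction
    sd_aki = beta / np.sqrt(n * LOG_E)                  # standard deviation as quoted in the text (Aki, 1965)
    sd_sb = (beta**2 / LOG_E) * np.sqrt(((lp - p_bar)**2).sum() / (n * (n - 1)))  # Eq. (4.6)
    log_n = np.log10(np.arange(n, 0, -1))               # log N(>= P_j) for the sorted sample
    log_alpha = beta * p_bar + log_n.mean()             # alpha estimate quoted in the text
    return beta, sd_aki, sd_sb, 10.0**log_alpha
```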

4.3 Open-Ended Tapered Power Law (OET)

Kagan (1993) suggested an OE relation that tapers to zero, obtained by multiplying the survival function of the OE power law by an exponential function, in the form given by Vere-Jones et al. (2001),
$$\displaystyle \begin{aligned} \Pr\left(\leq P\right)=F\left(P\right)=1-P_{min}^{\beta}P^{-\beta}\exp\left(\frac{P_{min}-P}{P_{c}}\right),{} \end{aligned} $$
(4.7)
where \(P_{c}\) is the soft upper cut-off. The survival function, \(S\left (P\right ) = 1 - F\left (P\right )\), for different exponents \(\beta \) is illustrated in Fig. 4.3 left, and the probability density function is
$$\displaystyle \begin{aligned} f\left(P\right)=\left(\frac{\beta}{P}+\frac{1}{P_{c}}\right)\left(\frac{P_{min}}{P}\right)^{\beta}\exp\left(\frac{P_{min}-P}{P_{c}}\right).{} \end{aligned} $$
(4.8)
The mean and the standard deviation can be obtained from a general formula for the higher-order moments given by Kagan and Schoenberg (2001),
$$\displaystyle \begin{aligned} E\left(P^{k}\right)=P_{min}^{k}+kP_{min}^{\beta}P_{c}^{k-\beta}\exp\left(\frac{P_{min}}{P_{c}}\right)\Gamma\left(k-\beta,\frac{P_{min}}{P_{c}}\right),\quad \text{for}\quad k=1,2,{} \end{aligned} $$
(4.9)
where \(\Gamma \left (k-\beta ,\frac {P_{min}}{P_{c}}\right ) = \intop _{P_{min}/P_{c}}^{\infty }\exp \left (-t\right )t^{k-\beta -1}dt\) is the upper incomplete Gamma function. The probability of having an event greater than or equal to \(P_{c}\) as a function of \(P_{c}/P_{min}\) is finite, \(\Pr \left (\geq P_{c}\right ) = \left (P_{c}/P_{min}\right )^{-\beta } \exp \left [\left (P_{c}/P_{min}\right )^{-1}-1\right ]\), which is illustrated in Fig. 4.3 right. If we assume \(\beta = \hat {\beta }_{OE}\), then the soft cut-off can be estimated from data by
$$\displaystyle \begin{aligned} P_{c}=\left(\frac{1}{n}\sum_{i=1}^{n}P_{i}^{2}-P_{min}^{2}\right)/\left[2\beta P_{min}+2\overline{P}\left(1-\beta\right)\right],{} \end{aligned} $$
(4.10)
where \(\overline {P}\) is the mean value of P (Kagan & Schoenberg, 2001).
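A minimal sketch of the moment-based estimate of the soft cut-off, Eq. (4.10), and of the tail probability \(\Pr \left (\geq P_{c}\right )\) quoted above is given below; the function names are ours, \(\overline {P}\) is taken as the linear mean of P as written in Eq. (4.10), and the estimator is meaningful for the typical case \(\beta < 1\).

```python
import numpy as np

def oet_soft_cutoff(p, p_min, beta):
    """Moment-based estimate of the OET corner potency P_c, Eq. (4.10),
    with beta fixed at the OE maximum-likelihood value (typical case beta < 1)."""
    p = p[p >= p_min]
    return ((p**2).mean() - p_min**2) / (2 * beta * p_min + 2 * p.mean() * (1 - beta))

def prob_ge_pc(p_c, p_min, beta):
    """Probability of an event not smaller than P_c under the OET law."""
    r = p_c / p_min
    return r**(-beta) * np.exp(1.0 / r - 1.0)
```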
Fig. 4.3
Probability density functions for OE and OET power law (left) and the probability of having an event greater than or equal to \(P_{c}\) as a function of \(P_{c}/P_{min}\)(right) for selected \(\beta \)

4.4 Upper Truncated Power Law (UT)

The upper truncated (UT) power law can be written as
$$\displaystyle \begin{aligned} N\left(\geq P\right)=\alpha\left(P^{-\beta}-P_{max}^{-\beta}\right),{} \end{aligned} $$
(4.11)
where \(N\left (\geq P\right )\) is the number of events not smaller than P. The number of events within the potency range \(\left (P_{1},P_{2}\right )\), where \(P_{2} > P_{1}\) is \(N\left (P_{1},P_{2}\right ) = \alpha \left (P_{1}^{-\beta }-P_{2}^{-\beta }\right )\). Parameter \(\alpha \) measures the level of seismic activity, \(\beta \) is the exponent, and \(P_{max}\) is the limit of the maximum expected event size for a given data set. The probability density and the cumulative distribution functions \(F\left (P\right )\) are
$$\displaystyle \begin{aligned} f(P)=\beta P^{-\beta-1}/\left(P_{min}^{-\beta}-P_{max}^{-\beta}\right),\qquad \Pr\left(\leq P\right)=1-\frac{\left(P^{-\beta}-P_{max}^{-\beta}\right)}{\left(P_{min}^{-\beta}-P_{max}^{-\beta}\right)},{} \end{aligned} $$
(4.12)
where \(\Pr \left (\geq P_{max}\right ) = 0\), and the survival function \(\Pr \left (\geq P\right ) = 1 - \Pr \left (\leq P\right )\), see Fig. 4.4 (e.g. Page, 1968; Cosentino et al., 1977; Burroughs & Tebbens, 2001; Kijko, 2004). Note that \(P_{max}\) needs to be greater than the maximum observed potency \(P_{maxo}\) for \(N\left (\geq P\right )\) to be positive. The probability of having an event in a given potency range is \(\Pr \left (P_{1},P_{2}\right ) = \left (P_{1}^{-\beta }-P_{2}^{-\beta }\right ) / \left (P_{min}^{-\beta }-P_{max}^{-\beta }\right )\). For example, for \(\beta = 0.75\), \(\log P_{min} = 0\), and \(\log P_{max} = 3\), we have \(\Pr \left (0.0\leq \log P\leq 1.0\right ) = 0.827\), \(\Pr \left (1.0\leq \log P\leq 2.0\right ) = 0.147\), and \(\Pr \left (2.0\leq \log P\leq 3.0\right ) = 0.026\).
Fig. 4.4
Survival function of the OE power law (left) and the UT and OET (right) for a selected \(\beta \)
The mean value of the UT power law is \(\left \langle P\right \rangle = \left [\beta /\left (1-\beta \right )\right ] \left (P_{max}^{1-\beta }-P_{min}^{1-\beta }\right ) / \left (P_{min}^{-\beta }-P_{max}^{-\beta }\right )\) for \(\beta \neq 1\), and \(\left \langle P\right \rangle = \ln \left (P_{max}/P_{min}\right ) / \left (P_{min}^{-1}-P_{max}^{-1}\right ) = P_{min}P_{max}\ln \left (P_{max}/P_{min}\right ) / \left (P_{max}-P_{min}\right )\) for \(\beta = 1\). The variance is \(Var\left (P\right ) = \left \langle P^{2}\right \rangle - \left \langle P\right \rangle ^{2}\), where \(\left \langle P^{2}\right \rangle = \left [\beta /\left (2-\beta \right )\right ]\left (P_{max}^{2-\beta }-P_{min}^{2-\beta }\right )/\left (P_{min}^{-\beta }-P_{max}^{-\beta }\right )\).
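The worked example above can be reproduced directly from Eq. (4.12); the short sketch below, with the same assumed values of \(\beta \), \(P_{min}\), and \(P_{max}\), returns the quoted probabilities.

```python
def ut_prob(p1, p2, p_min, p_max, beta):
    """Probability of an event in the potency range (p1, p2) under the UT law, Eq. (4.12)."""
    return (p1**-beta - p2**-beta) / (p_min**-beta - p_max**-beta)

beta, p_min, p_max = 0.75, 1.0, 1.0e3             # log P_min = 0, log P_max = 3
print(ut_prob(1.0, 1.0e1, p_min, p_max, beta))    # ~0.827
print(ut_prob(1.0e1, 1.0e2, p_min, p_max, beta))  # ~0.147
print(ut_prob(1.0e2, 1.0e3, p_min, p_max, beta))  # ~0.026
```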

4.4.1 Estimation of \(\alpha \) and \(\beta \) of the UT Relation

For the sake of convenience, the probability density and the survival functions of the upper truncated distribution may be written as
$$\displaystyle \begin{aligned} f\left(P\right)=\beta P_{min}^{\beta}P^{-\beta-1}/\left(1-R^{\beta}\right)\quad \Pr\left(\geq P\right)=P_{min}^{\beta}\left(P^{-\beta}-P_{max}^{-\beta}\right)/\left(1-R^{\beta}\right),{} \end{aligned} $$
(4.13)
where \(R = P_{min}/P_{max}\). The log likelihood function is \(\log L(P_{j};\beta )=\sum _{j=1}^{n}\log f\left (P_{j}\right )\), and taking its derivative with respect to \(\beta \) gives
$$\displaystyle \begin{aligned} \frac{n}{\hat{\beta}_{UT}}+\frac{1}{\log\left(e\right)}\frac{nR^{\hat{\beta}_{UT}}\log R}{1-R^{\hat{\beta}_{UT}}}-\frac{1}{\log\left(e\right)}\sum_{i=1}^{n}\left(\log P_{i}-\log P_{min}\right)=0,{} \end{aligned} $$
(4.14)
which can be simplified to
$$\displaystyle \begin{aligned} \hat{\beta}_{UT}=\log\left(e\right)\left[\overline{\log P}-\log P_{min}-\left(R^{\hat{\beta}_{UT}}\log R\right)/\left(1-R^{\hat{\beta}_{UT}}\right)\right]^{-1},{} \end{aligned} $$
(4.15)
where \(\overline {\log P} = \left (1/n\right )\sum \log P_{i}\) is the mean of the log-potencies, i.e. the logarithm of the geometric mean of the data. Assuming \(P_{min}\) and \(P_{max}\) are known, Eq. (4.15) can be solved numerically for \(\hat {\beta }_{UT}\) (Page, 1968). The standard deviation is given by
$$\displaystyle \begin{aligned} s_{d}(\beta_{UT})=\frac{1}{\sqrt{n}}\left[\frac{\log\left(e\right)}{\hat{\beta}_{UT}^{2}}-\frac{R^{\hat{\beta}_{UT}}\left(\log R\right)^{2}}{\log\left(e\right)\left(1-R^{\hat{\beta}_{UT}}\right)^{2}}\right]^{-1/2}.{} \end{aligned} $$
(4.16)
An approximate solution was given by Kijko and Funk (1994) by truncating the Taylor expansion of Eq. (4.15) at the second term,
$$\displaystyle \begin{aligned} \hat{\beta}_{UT}=\hat{\beta}_{OE}-\frac{\hat{\beta}_{OE}^{2}R^{\hat{\beta}_{OE}}\log\left(1/R\right)}{\log\left(e\right)\left(1-R^{\hat{\beta}_{OE}}\right)},{} \end{aligned} $$
(4.17)
where \(\hat {\beta }_{OE}\) is derived from Eq. (4.5). Note that for the same data set the estimate of \(\beta \) for the UT formulation will always be lower than \(\beta \) for the OE one. The estimate \(\hat {\alpha }_{UT}\) can be taken as \(\log \hat {\alpha }_{UT} = \left (1/n\right ) \sum _{j=1}^{n}\log N\left (\geq P_{j}\right ) - \left (1/n\right ) \sum _{j=1}^{n}\log \left (P_{j}^{-\hat {\beta }_{UT}}-P_{max}^{-\hat {\beta }_{UT}}\right )\).
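Equation (4.15) is easily solved by fixed-point iteration starting from the OE estimate; the sketch below is one possible implementation (function name and tolerances are ours) and converges in a few iterations for typical catalogues.

```python
import numpy as np

LOG_E = np.log10(np.e)

def beta_ut(log_p, log_p_min, log_p_max, tol=1e-8, max_iter=200):
    """Solve Eq. (4.15) for the UT exponent by fixed-point iteration,
    starting from the OE estimate of Eq. (4.5)."""
    lp = log_p[log_p >= log_p_min]
    mean_lp = lp.mean()
    r = 10.0 ** (log_p_min - log_p_max)          # R = P_min / P_max
    log_r = log_p_min - log_p_max                # log R
    beta = LOG_E / (mean_lp - log_p_min)         # OE starting value
    for _ in range(max_iter):
        new = LOG_E / (mean_lp - log_p_min - r**beta * log_r / (1.0 - r**beta))
        if abs(new - beta) < tol:
            return new
        beta = new
    return beta
```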

4.4.2 Comments on Data Selection and Parameter Estimation

It is important to select a data set and to determine the parameters of the power law correctly since a small change in the exponent \(\beta \) influences significantly the expected number of larger events. Typical mistakes and potential problems in fitting the power law to data:
1.
Using a data set polluted by blasts.
 
2.
Using a small data set, see Fig. 4.5.
Fig. 4.5
99% and 95% confidence limits for \(\beta \) as a function of the number of events (left). 95% confidence limits for \(\beta \) as a function of the number of events for UT power law for two bin sizes, \(\Delta _{logP}\)(right)
 
3.
Using data over a short time span. The typical power law distribution develops under slow, constant loading over a longer period of time.
 
4.
Selecting a time span of data during which the way of mining changed significantly, e.g. through the introduction of backfill or stabilising pillars, a more disordered and less concentrated mining method, or preconditioning by hydrofracturing or blasting.
 
5.
Selecting a data set that is the result of mixing data from two different, independent seismogenic areas.
 
6.
Fitting data with the linear least squares (LS) method as opposed to the maximum likelihood (ML) method. The LS method gives a standard deviation of \(\beta \) that is 2 to 3 times larger than ML.
 
7.
Fitting parameters to the cumulative data which, by the very nature of its construction, is correlated. This violates the basic assumption of independent observations.
 
8.
Underestimating \(P_{min}\), which delivers lower \(\beta \), i.e. overestimates hazard.
 
9.
Incorrect binning of data. Linear binning produces unacceptable results. Logarithmic binning is better, but the results depend on the bin size. Too coarse binning produces inaccurate results for two reasons. Firstly, the average computed from binned data is systematically overestimated. Secondly, the centre of the first bin differs from \(\log P_{min}\), and this influences the difference \(\left \langle \log P\right \rangle - \log P_{min}\), which is very small, sits in the denominator, and is therefore influential. Small bins are more accurate but frequently empty, and this leads to a biased estimate. If the seismic system delivers source parameters of reasonable accuracy, there is no reason for binning.
 

4.5 Confidence Limits

A confidence interval is the range of values of a sample statistic that, at a given level of probability called a confidence level, is likely to contain a population parameter. This is the interval that will include the population parameter a certain percentage of the time during repeated sampling. A confidence level is the degree of certainty that one wants to be able to place in the confidence interval. This is the probability that the parameter being estimated by the statistic falls within the confidence interval. The confidence limits are the upper and lower values of a confidence interval.
Confidence intervals and the standard error of the mean serve the same purpose. However, error bars around the mean that represent the standard error imply that only about 68% of them include the parametric mean, which is not always remembered. Error bars based on confidence intervals, usually 95%, explicitly imply that 95% of them include the parametric mean. In addition, for very small sample sizes, which apply to the larger events in size distribution statistics, the 95% confidence interval is larger than twice the standard error. Note that the lower the level of confidence, the narrower the confidence region. One would like to have a narrow confidence region with high probability, which is possible for normally distributed data and a great number of observations. For a data set drawn from a power law distribution, e.g. the size distribution of seismic events, there are very few observations of larger events and, for the same probability, the confidence region there is wider.
A typical assumption when estimating the confidence limits is that the data is normally distributed. This gives symmetrical confidence limits, which for means near zero or one give negative limits. This is obviously incorrect. Therefore, for data sets which are power law distributed, it is better to calculate the confidence interval based on the binomial or the Poisson distribution.

4.5.1 Confidence Limits for \(\beta \)

If the standard deviation of \(\beta \) is known and if the population is normally distributed, or if the number of events is greater than 30, the 99% confidence limits can be estimated as \(\beta \pm 2.58\cdot sd_{\beta }\), the 95% confidence limits can be estimated as \(\beta \pm 1.96\cdot sd_{\beta }\), and the 90% confidence limit as \(\beta \pm 1.645\cdot sd_{\beta }\). The higher the level of confidence, the wider the interval.
Figure 4.5 (left) shows the 99% and 95% confidence limits for the \(\beta \)-value as a function of the number of events. It shows that one would need well over 1000 events for 95% confidence errors \(<\) 0.1. Figure 4.5 (right) shows that, for the same confidence limit, the smaller the bin size the better the estimate.
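For completeness, the sketch below evaluates these normal-approximation limits; the numerical example uses hypothetical values of \(\beta \) and its standard deviation.

```python
Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.58}

def beta_confidence_limits(beta, sd_beta, level=0.95):
    """Normal-approximation confidence limits for the beta-value (n > ~30 events)."""
    return beta - Z[level] * sd_beta, beta + Z[level] * sd_beta

# Hypothetical values for illustration: beta = 1.2 with sd = 0.05
print(beta_confidence_limits(1.2, 0.05))          # (1.102, 1.298)
print(beta_confidence_limits(1.2, 0.05, 0.99))    # (1.071, 1.329)
```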

4.5.2 Confidence Limits for the Power Law Fit

Suppose that the data selected to fit the power law deviate from the model. The deviations may scatter randomly around the model or they may be systematic, e.g. there may be a surplus of events of a certain size that, if significant, could be considered characteristic. To be significant, the deviation has to exceed a certain level of confidence, and it is therefore important to test to what degree a given data set supports the model.
If we assume that the number of events in a given \(\log P\) bin can be estimated from the Poisson distribution, then the probability of having N events in that bin is \(\Pr \left (N\right ) = \left (\lambda ^{N}/N!\right ) \exp \left (-\lambda \right )\), where \(\lambda \) is the mean number of events in that bin. As per the power law, the mean number of events in a potency bin \(\Delta _{\log P_{j}} = P_{j} - P_{j-1}\) is \(N\left (P_{j-1},P_{j}\right ) = \alpha \left (P_{j-1}^{-\beta }-P_{j}^{-\beta }\right )\), where \(\alpha \) and \(\beta \) are derived from the data. According to the Poisson distribution, the probability of having \(N_{j}\) events in that bin is
$$\displaystyle \begin{aligned} \begin{array}{rcl} & &\displaystyle \Pr\left(N_{j}\right)=\left(1/N_{j}!\right)\left[\alpha\left(P_{j-1}^{-\beta}-P_{j}^{-\beta}\right)\right]^{N_{j}}\exp\left[-\alpha\left(P_{j-1}^{-\beta}-P_{j}^{-\beta}\right)\right],\\ & &\displaystyle j=1,2,...,n_{b},{} \end{array} \end{aligned} $$
(4.18)
where \(n_{b}\) is the number of bins. The upper and lower confidence limits for the centre potency in the jth bin are found by solving Eq. (4.18) for \(N_{j}\), assuming the required probability \(\Pr \left (N_{j}\right )\). The 95% confidence intervals are estimated using \(\sum \Pr \left (N_{j}\right ) = 0.975\) for upper limit and \(\sum \Pr \left (N_{j}\right ) = 0.025\) for lower limit, see, for example, Fig. 4.17.
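A minimal sketch of such a Poisson confidence band is given below; it uses SciPy's Poisson quantile function, the bin edges and fitted \(\alpha \), \(\beta \) are assumed inputs, and the 0.025 and 0.975 quantiles bracket the 95% band as described above.

```python
from scipy.stats import poisson

def poisson_band(alpha, beta, bin_edges, level=0.95):
    """Confidence band for the number of events per potency bin expected
    from the fitted power law, using the Poisson model of Eq. (4.18)."""
    q_lo, q_hi = (1 - level) / 2, 1 - (1 - level) / 2
    band = []
    for p_lo, p_hi in zip(bin_edges[:-1], bin_edges[1:]):
        lam = alpha * (p_lo**-beta - p_hi**-beta)    # expected count in the bin
        band.append((poisson.ppf(q_lo, lam), lam, poisson.ppf(q_hi, lam)))
    return band
```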

4.6 Utility of Power Law Distributions

4.6.1 Limitations and Benefits

The power law size distribution has some apparent limitations. In general, power laws are asymptotic, and as a consequence, their moments, e.g. mean or variance, exist only for a certain range of exponents. Therefore, there is a need to impose a hard upper limit on the maximum event size and to define the power law within a range, \(\left (P_{min},P_{max}\right )\), or to add an exponential term to its tail, to secure a finite energy release.
The power law size distribution disregards the time of seismic events, thus ignoring any potential trend in the data. The bulk of the data are small events which are not necessarily hazardous, and, in some cases, the processes leading to small events may be different from the processes leading to large events.
Inverting \(\alpha \), \(\beta \), and \(P_{max}\) simultaneously from data does not always deliver stable results. Therefore, there is a need to estimate \(P_{max}\) independently, e.g. by order statistics which regards the sequence of larger events. Because order statistics and the associated jumps in the history of record breaking events change as mining progresses, \(P_{max}\) needs to be reassessed regularly.
The power law also disregards the spatial distribution of seismic events. Subdividing space into sub-volumes without de-clustering, fitting a power law to the data extracted from each sub-volume, and calculating their probabilities is not the best strategy to estimate spatial hazard. Unless one assumes some form of spatial intensity function in a non-stationary Poisson process, there is nothing to extrapolate, and the future hazard will therefore look very much like the past. In addition, there are two potential problems with the subdivision of space: (1) Seismic activity within these sub-volumes may not be independent. (2) There is a trade-off between the spatial resolution and the amount of data one can extract from these sub-volumes, and therefore there may be insufficient data for a reliable power law fit, see Fig. 4.5.
The best way to address the spatial hazard is by numerical modelling of stresses and strains associated with future mining, calibrated with the existing seismic data (Linkov, 2006, 2013; Malovichko & Basson, 2014).
Nevertheless, the power law size distribution offers obvious benefits. It provides a useful association between its exponent and the seismic hazard-related factors, e.g. the state of stress, the system stiffness, rock mass and mine layout heterogeneity, potency production, information entropy or the unpredictability of larger events, and the stress transfer due to small and large events. It also gives an insight into the scaling relation between seismic potency and radiated seismic energy.
To take full advantage of the size distribution-based hazard, one needs to apply it carefully. Since the bulk of seismic activity in mines follows production, which can be very intermittent, the traditional recurrence times, \(\bar {t} = \Delta t / N\left (\geq P\right )\), and the associated probabilities may not be as useful as in crustal seismology, where loading is steady.
The other traditional size distribution parameters, namely the activity rate, the \(\beta \)-value, and “the one largest event, \(P_{max1}\)”, also do not measure seismic hazard consistently and reliably. The supposition that solving \(\alpha P_{max1}^{-\beta } = 1\) gives the one largest possible event or the next largest event, \(P_{max1} = \alpha ^{1/\beta }\), or \(m_{max1} = a/b\) in the Gutenberg-Richter relation, is incorrect. The fact that the data follow the OE power law more closely than the upper truncated one indicates the potential for even larger events and larger jumps in record-breaking events. Only when the largest observed events are significantly smaller than predicted by the OE relation can one infer that the size distribution hazard may be contained or controlled.
If rock extraction is not a linear function of time, one should resort to parameters based on volume mined, e.g. an average inter-event volume mined to generate an event above a certain size, \(\bar {V}_{m}\left (\geq P\right ) = V_{m} / N\left (\geq P\right )\), the probability that a larger seismic event will occur while extracting a given volume of rock, \(\Pr \left (\geq P,\Delta V_{m}\right )\), or the volume of ground motion \(V_{GM}\left (\geq \mbox{v},\Delta V_{m}\right )\).
As the extraction ratio increases and the overall stiffness of the rock mass is being degraded, there is a tendency for the \(\beta \)-value to decrease and \(\alpha \) to increase. If such a trend can be detected, quantified, and extrapolated, then seismic hazard assessment is less dependent on the assumption of stationarity (Mendecki, 2008).

4.6.2 Missing Potency and \(\beta \)

Seismic monitoring systems record events above their overall sensitivity level \(P_{min}\); the potency released below that level is therefore missing, and the ratio of the missing to the recovered potency depends on the slope, or \(\beta \)-value, of the observed potency frequency distribution.
The potency release by seismic events within the potency range \(P_{1}\) to \(P_{2}\) is \(P\left (P_{1},P_{2}\right ) = N\left (P_{1},P_{2}\right ) \intop _{P_{1}}^{P_{2}}Pf(P)dP / \intop _{P_{1}}^{P_{2}}f(P)dP\), which for \(\beta \neq 1\) gives
$$\displaystyle \begin{aligned} P\left(P_{1},P_{2}\right)=\alpha\beta\left(P_{2}^{1-\beta}-P_{1}^{1-\beta}\right)/\left(1-\beta\right). \end{aligned}$$
For \(\beta = 1\), one can integrate within the finite potency range, \(P\left (P_{1},P_{2}\right ) = \alpha \ln \left (P_{2}/P_{1}\right )\). For \(\beta < 1\), one can integrate from \(P_{1} = 0\) to a finite potency \(P\left (0,P_{2}\right ) = \alpha \beta P_{2}^{1-\beta } / \left (1-\beta \right )\), and for \(\beta > 1\), one can integrate from a finite potency to infinity, \(P\left (P_{1},\infty \right ) = -\alpha \beta P_{1}^{1-\beta } / \left (1-\beta \right )\). The equations for the number of events, \(N\left (P_{1},P_{2}\right )\), for both OE and UT relations are similar, with the exception that in the UT case the \(P_{2}\) can only go as far as \(P_{max}\).
The following ratio quantifies the UT portion of seismic potency below \(P_{min}\) that, if already produced by the rock mass, is missing because it could not have been recorded due to the limited sensitivity of the seismic monitoring system or because it has been produced aseismically,
$$\displaystyle \begin{aligned} \frac{P\left(0,P_{min}\right)}{P\left(P_{min},P_{max}\right)}=\frac{P_{min}^{1-\beta}}{\left(P_{max}^{1-\beta}-P_{min}^{1-\beta}\right)},\qquad \mbox{{$\left(\mbox{for}\:\beta<1\right)$}}.{} \end{aligned} $$
(4.19)
It can be shown that \(P\left (0,P_{min}\right )\!/\!P\left (P_{min},P_{max}\right )\!=\!\left [10^{\left (\log P_{max}-\log P_{min}\right )\left (1-\beta \right )}\!-\!1\right ]^{-1}\), i.e. it depends only on \(\beta \) and on the difference \(\log P_{max} - \log P_{min}\), regardless of \(P_{min}\), see Fig. 4.6. As expected, the higher the \(\beta \)-value the more potency is lost below the threshold level. For \(\log P_{max} - \log P_{min} = 3\) and \(\beta = 0.9\), there is 50/50 split between the observed and the missed potency.
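Rewritten in terms of \(\log P_{max}-\log P_{min}\), the ratio can be evaluated in one line; the sketch below (valid for \(\beta < 1\)) reproduces the 50/50 split quoted above.

```python
def missing_to_observed(delta_log_p, beta):
    """Ratio of potency missed below P_min to potency recorded between P_min
    and P_max, Eq. (4.19) rewritten in terms of log P_max - log P_min (beta < 1)."""
    return 1.0 / (10.0 ** (delta_log_p * (1.0 - beta)) - 1.0)

print(missing_to_observed(3.0, 0.9))    # ~1.0, i.e. a 50/50 split
```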
Fig. 4.6
Missing to observed potency ratio as a function of \( \log P_{max} - \log P_{min}\) for different \(\beta \)
Fig. 4.7
\( \log E\) vs. \( \log P\) plot of events with \( \log P \geq -1.0\). The red line represents the fit with coefficients d and c derived from Eq. (4.20), and the green one is by ordinary least squares (left). \( \log \sigma _{A}\) vs. \( \log P\) for the same data set (right)

4.6.3 Power Law and log(Energy) vs. log(Potency) Relation

In practice, the size distribution analysis is carried out in the magnitude and/or in the potency (moment) domain. However, the most appropriate measure of the strength of a seismic source is the radiated seismic energy, E, and, if estimated reliably, it should be the base for size distribution hazard estimation. If both seismic potencies and energies are available for the same data set, assuming the OE power law, we can write \(N\left (\geq P\right ) = \alpha _{P}P^{-\beta _{P}} = N\left (\geq E\right ) = \alpha _{E}E^{-\beta _{E}}\), which, after simple algebra, gives
$$\displaystyle \begin{aligned} \log E=\left(\beta_{P}/\beta_{E}\right)\log P+\left(1/\beta_{E}\right)\log\left(\alpha_{E}/\alpha_{P}\right),{} \end{aligned} $$
(4.20)
where subscripts P and E stand for potency and energy, respectively (Mendecki, 2013). This equation expresses the scaling relation \(\log E = d\log P + c\) via the parameters of the potency frequency and the energy frequency distributions, where \(d = \beta _{P} / \beta _{E}\) and \(c = \left (1/\beta _{E}\right ) \log \left (\alpha _{E}/\alpha _{P}\right )\). Parameters d and c are usually derived by fitting a straight line to data using standard least squares regression or generalised orthogonal regression (e.g. Mendecki, 1993, Figure 1.2). Since the parameters \(\alpha \) and \(\beta \) in the potency and the energy frequency distributions are derived by the maximum likelihood (ML) method, Eq. (4.20) offers an indirect way of ML fitting. Note that the \(\log E\) vs. \(\log P\) scaling may be affected by the inability of systems to record the wide frequency spectrum radiated from seismic sources. The exponent \(\beta _{E}\) scales inversely with the number of larger energy events, i.e. a lower \(\beta _{E}\) delivers a larger portion of high-energy events. The exponent \(\beta _{P}\) scales inversely with the number of larger potency events, i.e. a high \(\beta _{P}\) delivers a larger portion of smaller potency events. Since most of the seismic potency is delivered at low frequencies and most of the seismic energy at higher frequencies, larger events recorded in mines by 4.5 Hz or 14 Hz geophones may underestimate potency more than energy, and therefore the parameter d and the increase in apparent stress with potency may be overestimated.
Figure 4.7 left shows the \(\log E\) vs. \(\log P\) plot for the MineD data set, where the red line represents the fit with coefficients \(d = \beta _{P}/\beta _{E}\) and \(c = \left (1/\beta _{E}\right )\log \left (\alpha _{E}/\alpha _{P}\right )\), see Eq. (4.20), and the green line is by ordinary least squares. The colour here scales with \(\log \sigma _{A}\). Figure 4.7 right shows apparent stress, \(\sigma _{A}\), vs. \(\log P\) for the same data set, where colour indicates the time of the event.
Fig. 4.8
Apparent stress, \(\sigma _{A}\), vs. Z-coordinate of the event for the same data set as in Fig. 4.7 (left). Energy index vs. Z-coordinate of the event for the same data set (right)
From \(\log E = d\log P + c\), the apparent stress is \(\sigma _{A} = E/P = 10^{c} P^{d-1}\), which gives
$$\displaystyle \begin{aligned} \sigma_{A}=\left(\alpha_{E}/\alpha_{P}\right)^{1/\beta_{E}}P^{\left(\beta_{P}/\beta_{E}\right)-1},{} \end{aligned} $$
(4.21)
and for \(d = 1.0\), or \(\beta _{P} = \beta _{E}\), the apparent stress is constant and independent of potency \(\sigma _{A} = 10^{c}\), or \(\sigma _{A} = \left (\alpha _{E}/\alpha _{P}\right )^{1/\beta _{E}}\). The higher the driving stress at the source the higher the apparent stress.
The energy index of an event is the ratio of the observed radiated seismic energy of that event E, to the average energy \(\bar {E}(P) = 10^{d\log P+c}\) radiated by events of the observed potency P, for a given area of interest, \(EI = E/\bar {E}\left (P\right ) = E / 10^{d\log P+c} = 10^{-c} E / P^{d}\), which for \(d = 1.0\) would be proportional to the apparent stress (van Aswegen & Butler, 1993). Energy index can now be written as
$$\displaystyle \begin{aligned} EI=\left(\alpha_{E}/\alpha_{P}\right)^{1/\beta_{E}}E/P^{\left(\beta_{P}/\beta_{E}\right)}.{} \end{aligned} $$
(4.22)
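The three relations (4.20) to (4.22) translate directly into code; the sketch below is a minimal implementation with function names of our own choosing, taking the maximum-likelihood \(\alpha \) and \(\beta \) of the potency and energy distributions as assumed inputs.

```python
import numpy as np

def scaling_coefficients(alpha_p, beta_p, alpha_e, beta_e):
    """Slope d and intercept c of log E = d log P + c from the potency- and
    energy-frequency power laws, Eq. (4.20)."""
    return beta_p / beta_e, np.log10(alpha_e / alpha_p) / beta_e

def apparent_stress(p, d, c):
    """sigma_A = E / P = 10**c * P**(d - 1), Eq. (4.21)."""
    return 10.0**c * p**(d - 1.0)

def energy_index(e, p, d, c):
    """EI = E / E_bar(P) = 10**(-c) * E / P**d, Eq. (4.22)."""
    return e / 10.0**(d * np.log10(p) + c)
```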
Figure 4.8 top row shows a modest increase in apparent stress, \(\sigma _{A}\), and energy index, EI, over time. Figure 4.8 bottom row shows \(\sigma _{A}\) and EI vs. the Z-coordinate, where the size of the event scales with the radius of the source volume and the time of the event from the main shock. Note that here the lower the Z-coordinate, the deeper the event. Since 2010, most rock extraction was concentrated at depths 1050 and 950, and this modest increase in apparent stress and energy index over time can be attributed to the depth of mining and an increase in the extraction ratio at that depth.
Fig. 4.9
\( \log E\) vs. \( \log P\) plot of events with \( \log P \geq -1.0\). The red line represents the fit with coefficients d and c derived from Eq. (4.20), and the green one is by ordinary least squares (left). \( \log \sigma _{A}\) vs. \( \log P\) for the same data set (right)
The slope d of the \(\log E\) vs. \(\log P\) plot of earthquakes is frequently reported to be close to 1.0. However, Choy et al. (2006) reported that earthquakes occurring on immature faults radiate more high-frequency energy per unit of moment than earthquakes occurring on mature faults.
The slope d of the \(\log E\) vs. \(\log P\) plot of events recorded in mines is frequently reported to be higher than 1.0 (Mendecki, 1993, see also Fig. 4.7). However, there are exceptions. Figure 4.9 shows the \(\log E\) vs. \(\log P\) plot of 1738 events recorded during 1738 days between 2015 and 2019 in a deep tabular hard rock gold mine in South Africa. Here the slope d is practically 1.0; consequently, apparent stress is independent of potency, and the energy index is proportional to the apparent stress.
Fig. 4.10
The centre potency, \(P_{0.5}\), as a function of \(\beta \) for \( \log P_{min}=-3.0\) and for the selected values of \( \log P_{max}\)

4.6.4 Power Law and Stress Transfer

Small seismic events contribute very little to the total energy and potency release and to the cumulative co-seismic deformation, but they can make an important contribution to the spatial and temporal stress transfer within the seismically active rock mass (Hanks, 1992).
The major part of the stress transfer due to inelastic deformation associated with seismic activity takes place within the source volume, \(V = P / \Delta \epsilon \). Since the co-seismic stress drop, \(\Delta \sigma \), and the strain change, \(\Delta \epsilon = \Delta \sigma / \mu \), associated with larger events are remarkably similar to those of smaller events (Aki, 1972), the overall stress transfer due to small events may equal or even dominate that of large events.
Therefore, within the power law distribution, there is the centre potency, \(P_{0.5}\), that splits the potency release \(P\left (P_{min},P_{max}\right )\), and for a constant \(\Delta \epsilon \) the cumulative source volume, into two equal parts with similar contributions to stress transfer. Solving \(P\left (P_{min},P_{0.5}\right ) = P\left (P_{0.5},P_{max}\right )\), i.e. \(\left (P_{0.5}^{1-\beta }-P_{min}^{1-\beta }\right ) = \left (P_{max}^{1-\beta }-P_{0.5}^{1-\beta }\right )\), for \(\beta \neq 1\) gives
$$\displaystyle \begin{aligned} P_{0.5}=\left[0.5\left(P_{max}^{1-\beta}+P_{min}^{1-\beta}\right)\right]^{\frac{1}{1-\beta}}.{} \end{aligned} $$
(4.23)
For \(\beta = 1\), the centre potency \(P_{0.5} = \sqrt {P_{min}\cdot P_{max}}\), which is both the left- and the right-hand limit of the previous equation as \(\beta \rightarrow 1\). For \(\beta < 1\), we can integrate from \(P_{min} = 0\), which gives \(P_{0.5} = 0.5^{1/\left (1-\beta \right )} P_{max}\). For \(\beta > 1\), we can integrate from \(P_{min}\) to \(P_{max} = \infty \) and obtain \(P_{0.5} = 0.5^{1/\left (1-\beta \right )}P_{min}\). For example, for \(P_{min} = 0\) and \(\beta = 0.9\), \(\log P_{0.5} = \log P_{max} - 3\), and for \(P_{max} = \infty \) (OE relation) and \(\beta = 1.1\), \(\log P_{0.5} = \log P_{min} + 3\). Figure 4.10 shows that \(P_{0.5}\) is lower for lower \(P_{max}\) and decreases with increasing \(\beta \).
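A minimal sketch of Eq. (4.23), including the \(\beta = 1\) limit, is given below; the example values are those quoted in the text.

```python
def centre_potency(p_min, p_max, beta):
    """Potency P_0.5 splitting the total potency release into two equal halves,
    Eq. (4.23); the beta = 1 case uses the geometric-mean limit."""
    if abs(beta - 1.0) < 1e-9:
        return (p_min * p_max) ** 0.5
    return (0.5 * (p_max**(1.0 - beta) + p_min**(1.0 - beta))) ** (1.0 / (1.0 - beta))

# Example from the text: P_min = 0 and beta = 0.9 give log P_0.5 = log P_max - 3
print(centre_potency(0.0, 1.0e6, 0.9))    # ~1e3
```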
Fig. 4.11
Information entropy, H, as a function of \(\beta \) for the OE power law, i.e. \( \log P_{max}=\infty \), in blue and for the UT for selected \( \log P_{max}\) in red
While the sources of all seismic events smaller than \(\log P_{0.5}\) may have the same cumulative volume of inelastic deformation as the sources of events greater than \(\log P_{0.5}\), the influence of smaller events on stress transfer and on the possible triggering of larger events will depend on their spatial distribution. The more clustered they are, the more likely they are to trigger a larger event (Helmstetter et al., 2005).

4.6.5 Power Law and Information Entropy

Information is the currency of nature, but not all information is of equal value. The total amount of information in a system is the difference between the system’s current entropy and its maximum possible entropy. Following Shannon’s second paper (Shannon, 1948), the continuous information entropy of the probability density function of potency can be written as
$$\displaystyle \begin{aligned} H\left[f\left(P\right)\right]=\intop_{P_{min}}^{P_{max}}f\left(P\right)\log\left[1/f\left(P\right)\right]dP,{} \end{aligned} $$
(4.24)
where \(f\left (P\right )\) is the probability density function of the upper truncated potency frequency distribution, \(f(P) = \beta P^{-\beta -1} / \left (P_{min}^{-\beta }-P_{max}^{-\beta }\right )\).
In Eq. (4.24) \(\log \left [1/f\left (P\right )\right ]\) is the information content of event P with probability \(f\left (P\right )\), and if \(f\left (P\right )\) is high, then knowledge that event P occurred gives very little information, since it had a high probability of occurrence to start with.
The continuous entropy is not a limit of the discrete entropy when the bin size goes to zero. The entropies of continuous distributions have most, but not all, of the properties of the discrete case.
There is one important difference between the continuous and discrete entropies. In the discrete case, the entropy measures the randomness of the variable in an absolute way. The continuous formulation measures the entropy relative to the coordinate system, and the entropy can be negative. This is not important, though, if one is interested in the differences or in the rate of change between two or more entropies, since they are independent of the frame of reference.
In general, H measures the amount of uncertainty in a given distribution, which is a measure of unpredictability. After integration, the information entropy of the upper truncated potency power law is
$$\displaystyle \begin{aligned} \begin{array}{rcl} H& =&\displaystyle \log\left(\frac{P_{min}^{-\beta}-P_{max}^{-\beta}}{\beta}\right)+0.43\left(1+\frac{1}{\beta}\right)\\ & &\displaystyle +\left(\beta+1\right)\frac{P_{min}^{-\beta}\log P_{min}-P_{max}^{-\beta}\log P_{max}}{P_{min}^{-\beta}-P_{max}^{-\beta}},{} \end{array} \end{aligned} $$
(4.25)
which monotonically decreases with increasing \(\beta \). For the OE distribution, where \(P_{max} \rightarrow \infty \), \(H = \log \left (P_{min}/\beta \right ) + 0.43 \left (1/\beta +1\right )\).
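For reference, the sketch below evaluates Eq. (4.25) and its OE limit numerically; the function names are ours and the potencies are assumed to be in consistent units.

```python
import numpy as np

LOG_E = np.log10(np.e)    # the 0.43 constant in Eq. (4.25)

def entropy_ut(p_min, p_max, beta):
    """Information entropy (base 10) of the UT potency power law, Eq. (4.25)."""
    c = p_min**-beta - p_max**-beta
    num = p_min**-beta * np.log10(p_min) - p_max**-beta * np.log10(p_max)
    return np.log10(c / beta) + LOG_E * (1.0 + 1.0 / beta) + (beta + 1.0) * num / c

def entropy_oe(p_min, beta):
    """OE limit of the entropy, P_max -> infinity."""
    return np.log10(p_min / beta) + LOG_E * (1.0 + 1.0 / beta)
```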
Figure 4.11 shows the information entropy H as a function of \(\beta \) for the OE power law, i.e. \(\log P_{max} = \infty \), and for the UT one with three different values \(\log P_{max}\). It is clear that unpredictability increases with decreasing \(\beta \) for both OE and UT distributions, although more so for the OE one (Mendecki, 2012). Therefore, low \(\beta \)-values imply not only higher hazard, but also a less predictable size distribution of seismic events.
Fig. 4.12
The time evolution of energy index EI (a proxy for stress, in red) and the cumulative apparent volume, \(V_{A}\) (a proxy for deformation, in blue) during shaft pillar extraction progressing from the centre to its outer perimeter
The information entropy should not be confused with the thermodynamical or mechanical interpretation, which states that the amount of entropy increase is proportional to the loss of capacity of the system to deliver work.

4.6.6 Power Law Exponent, Rock Properties, Stress, and Stiffness

For a data set that obeys the power law size distribution, the exponent \(\beta \) is a statistical measure of the ratio of small to large events, and it decreases as the portion of the intermediate and large events increases.
In some cases, however, data points on the cumulative frequency plot indicate a double slope convex or concave character. One possible explanation of such deviations is the change in the sensitivity of the monitoring system during acquisition of data. Another possibility is that the data is generated by two spatially separated processes with different size distributions.
Concavity may be generated by a combination of the high activity of low-magnitude events induced by mining excavation(s) with a few larger events caused by an existing geological structure. This behaviour is frequently a function of spatial and/or temporal data selection. The more frequently observed convexity is caused by a deficit of larger events, which may be the result of the limited time span of the selected data. Its persistence, however, indicates that the seismic hazard is contained.
In general, the exponent \(\beta \) is positively correlated with the heterogeneity of the rock mass and with its stiffness and negatively with stress.
The rock mass heterogeneity depends on the spatial distribution of the sizes of, and distances between, strong or stressed and weak or de-stressed patches of rock, where seismic sources may nucleate and be stopped. An increase in rock heterogeneity results in a higher \(\beta \), since it is more likely that an initiated rupture will be stopped by a soft or hard patch before growing into a larger event (Mogi, 1962; Mori & Abercrombie, 1997). Scholz (1968) stated that the \(\beta \)-value varies inversely with stress. His reasoning is based on an argument similar to that for heterogeneity, namely that a rupture, once initiated, grows larger in a high stress regime.
Stiffness measures the rigidity of a system, i.e. its ability to resist deformation in response to an applied load. It scales positively with the ratio of the applied stress to the induced strain. Experimental studies on rock samples of equal degrees of heterogeneity under triaxial conditions have shown a decrease in the \(\beta \) of acoustic emission events with both the differential stress and the confining pressure, during all stages of the stress-strain regime, including the post-peak strain softening (e.g. Amitrano, 2003).
Observations in mines found a higher \(\beta \) in stiffer systems and a lower \(\beta \) in softer systems (Mendecki & van Aswegen, 1998; van Aswegen & Mendecki, 1999).
Figure 4.12 shows the time evolution of the energy index (a proxy for stress) and the cumulative apparent volume, \(V_{A}\) (a proxy for deformation), during a shaft pillar extraction progressing from the centre to its outer perimeter. Note that almost all events larger than \(\log E=7.5\), marked by arrows, occurred during softening past peak stress. Other parameters are: seismic stiffness \(K_{s}\), the b-value (or \(\beta \)), and the d-value in \(\log E=d\log P+c\).
Fig. 4.13
\( \log P_{max} \left (V_{meff}/\alpha \right )\)(left) and \( \log P_{max} \left (\beta \right )\)(right)
These observations do not contradict reports of decreasing \(\beta \) with increasing stress during the strain hardening regime, since there is a general loss of stiffness with increasing stress. However, in a strain softening regime, where the strength decreases with increasing strain, the stress is lower yet a lower \(\beta \) is observed.

4.7 Expected Maximum Event Size

4.7.1 What Is \(P_{max}\) or \(m_{max}\)

In earthquake seismology, \(m_{max}\) or \(\log P_{max}\) is the maximum magnitude, or potency, earthquake that a given seismogenic region can deliver (e.g. McGarr, 1976; Ward, 1997; Kijko, 2004; Pisarenko et al., 2008). In regions where the association of earthquakes and geological structures is evident, one can assume that the largest event will occur on one of these structures. The estimate of \(m_{max}\) can then be made based on the mapped fault length and geological record of displacement or from the empirical relation between the observed magnitudes and the source size. Alternatively, it can be estimated by numerical modelling of stresses. In regions where connection between geological structures and past earthquakes is less clear, the possible location of a future largest event is uncertain, and the \(m_{max}\) needs to be estimated from past observations.
Given an adequately long catalogue of data, the problem of estimating the maximum possible potency or magnitude of an earthquake in a given area may be reduced to finding the truncation point of the observed distribution of past events (Robson & Whitlock, 1964; Cooke, 1979; Van Der Watt, 1980; Kijko & Singh, 2011). In such a case, the complex details of the data on small and intermediate events are less important. However, all estimates of \(m_{max}\) are highly uncertain, and the upper bounds of the confidence intervals are acceptable only if reliable data over a long time interval, including several earthquakes with magnitudes close to \(m_{max}\), are available (Zoller & Holschneider, 2016). The assessment of the maximum magnitude of a fluid injection-induced earthquake was described by Zoller and Holschneider (2014) and Zoller (2022).
In mines, the geology of the ore body and of the surrounding rock mass is reasonably well explored, and therefore the maximum possible event size can be estimated by numerical modelling, assuming a complete stress drop on the main geological structures, or a complete failure of pillars, at different mining steps, i.e. different extraction ratios or times. However, a reliable estimate of \(m_{max}\), defined as the maximum possible event size during the lifetime of a mine, from the seismic catalogue alone is not possible.
At best, it can be guesstimated from the empirical relation between the footprint of a mine and the maximum observed event size. The main reason for the difficulty is that, unlike in a tectonic regime, the rate and the spatial and temporal distribution of loading in mines are highly variable, and this may have significant influence on the size distribution of seismic events, and therefore on seismic hazard. In addition, there are two opposing forces at work.
On one hand, an increase in extraction ratio and consequently the degradation of rock mass stiffness create conditions conducive for larger events to occur. On the other hand, to manage the situation, mines alter the way of mining. There are examples where, after changes to mine layout and/or the introduction of backfill, seismic hazard decreased. Therefore, \(m_{max}\) or \(P_{max}\) associated with mining is not a fixed parameter and needs to be estimated periodically as mining progresses. It is the next \(m_{max}\) or \(P_{max}\) which can be estimated and that needs to be managed.
Size of a Mine and the Maximum Event Size
In the absence of tectonic forces, the size of the largest possible event induced by mining scales approximately with the characteristic size of the mine, L. The upper bound relation between the maximum magnitude and the linear size of the mine is \(m_{max} = 2.0 \log L - 2.0\) (McGarr et al., 2002). For \(L = 1500\) m, there is a potential for moment-magnitude \(m_{HK} = 4.3\) (or \(\log P=5.07\)). For \(L = 2500\) m, the maximum \(m_{HK} = 4.8\) or \(\log P=5.82\) (\(\log P_{max} = 1.5m_{HK} - 1.38\)). There are many mines with characteristic dimension \(L \geq 1500\) m that did not generate seismic events of that size.
However, there is also a case where mining triggered an event along an intersecting geological structure which was considerably larger than the characteristic size of the mine. It is speculated that this is more likely in the presence of tectonic horizontal stresses. If mining takes place in an active tectonic regime, then the maximum possible event size within the mine could be the same as the maximum possible size earthquake in the area.
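As a quick numerical check of the scaling quoted above, the following minimal Python sketch evaluates the McGarr et al. (2002) upper-bound relation together with the magnitude-to-potency conversion used in this section; the function names and the example mine sizes are illustrative only.

```python
import math

def m_max_from_mine_size(L):
    """Upper-bound moment magnitude from the characteristic mine size L (metres),
    m_max = 2.0*log10(L) - 2.0 (McGarr et al., 2002), as quoted in the text."""
    return 2.0 * math.log10(L) - 2.0

def log_potency_from_magnitude(m_hk):
    """Magnitude-to-log-potency conversion used in the text: log P = 1.5 m_HK - 1.38."""
    return 1.5 * m_hk - 1.38

for L in (1500.0, 2500.0):
    m = m_max_from_mine_size(L)
    print(f"L = {L:.0f} m:  m_max = {m:.2f},  log P_max = {log_potency_from_magnitude(m):.2f}")
```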

4.7.2 Balance of the Effective Volume Mined and \(P_{max}\)

If we assume that the extraction of rock, the backfill placement, and the seismic and aseismic deformation take place within the volume \(\Delta V\) at discrete times, then one can construct the following step function that reflects a balance of inelastic deformation, \(B\left (t\right ) = V_{m}\left (t\right ) - V_{b}\left (t\right ) - V_{P}\left (t\right ) - V_{a}\left (t\right )\), where:
\(V_{m}\left (t\right ) = \sum _{t_{i}\leq t}V_{m}\left (t_{i}\right )\Theta \left (t-t_{i}\right )\) is the volume mined to date, where \(V_{m}\left (t_{i}\right )\) is the volume mined at \(t_{i}\) and \(\Theta \) is the Heaviside step function, \(\Theta \left (t\right ) = 0\) if \(t < 0\) and \(\Theta \left (t\right ) = 1\) if \(t \geq 0\);
\(V_{b}\left (t\right ) = \sum _{t_{j}\leq t}c_{b}\left (t\right )V_{b}\left (t_{j}\right )\Theta \left (t-t_{j}\right )\) is the volume of backfill placed to date, where \(V_{b}\left (t_{j}\right )\) is the backfill placed at \(t_{j}\) and \(c_{b}\left (t\right )\) is the efficiency of backfill that corrects for shrinkage and compaction;
\(V_{meff}\left (t\right ) = V_{m}\left (t\right ) - V_{b}\left (t\right )\) is the effective volume mined to date;
\(V_{P}\left (t\right ) = \gamma _{0}\sum _{t_{k}\leq t}P\left (t_{k}\right )\Theta \left (t-t_{k}\right )\) is the volume reduction due to seismic inelastic deformation to date, where \(P\left (t_{k}\right )\) is the seismic potency of the event at time \(t_{k}\), \(P_{I}\left (t_{k}\right )\) is the isotropic component of \(P\left (t_{k}\right )\), and \(\gamma _{0} = P_{I}\left (t_{k}\right ) / P\left (t_{k}\right )\) is the portion of the isotropic component of the total potency associated with seismic activity, which for events close to excavation faces is \(0.6 \leq \gamma _{0} \leq 0.9\) (McGarr, 1993);
\(V_{a}\left (t\right ) = \gamma _{a}\sum _{t_{k}\leq t}P\left (t_{k}\right )\Theta \left (t-t_{k}\right )\) is the volume of aseismic inelastic deformation to date, spatially and temporally synchronous with seismic activity, where \(0.1 \leq \gamma _{a} = P_{a}\left (t_{k}\right ) / P\left (t_{k}\right ) \leq 0.25\) and \(P_{a}\left (t_{k}\right )\) is the aseismic potency at \(t_{k}\).
The balance function can now be finally written as \(B\left (t\right ) = V_{meff}\left (t\right ) - \gamma \sum _{t_{k}\leq t}P\left (t_{k}\right )\Theta \left (t-t_{k}\right )\), where \(\gamma = \gamma _{0} + \gamma _{a}\).
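A minimal sketch of how the balance \(B\left (t\right )\) could be evaluated from discrete production and seismicity records, assuming the mined volumes, backfill placements and event potencies are available as arrays of times and increments; the variable names and the default values of \(\gamma _{0}\) and \(\gamma _{a}\) are illustrative choices within the ranges quoted above.

```python
import numpy as np

def balance(t, t_mined, v_mined, t_fill, v_fill, c_fill, t_events, potency,
            gamma_0=0.75, gamma_a=0.15):
    """Deformation balance B(t) = V_meff(t) - gamma * sum_{t_k <= t} P(t_k),
    with gamma = gamma_0 + gamma_a (illustrative defaults within the quoted ranges)."""
    t_mined, v_mined = np.asarray(t_mined), np.asarray(v_mined)
    t_fill = np.asarray(t_fill)
    v_fill, c_fill = np.asarray(v_fill), np.asarray(c_fill)
    t_events, potency = np.asarray(t_events), np.asarray(potency)

    v_m = v_mined[t_mined <= t].sum()              # volume mined to date, V_m(t)
    v_b = (c_fill * v_fill)[t_fill <= t].sum()     # effective backfill placed to date, V_b(t)
    v_meff = v_m - v_b                             # effective volume mined, V_meff(t)
    p_sum = potency[t_events <= t].sum()           # seismic potency produced to date
    return v_meff - (gamma_0 + gamma_a) * p_sum
```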
Scenario 1: Collapse-type closure. Here the current deficit of deformation \(B\left (t\right )\) is to be closed in a single large seismic event, which may or may not fit the observed power law size distribution. This can be considered the worst-case scenario. However, history shows that it is possible to engineer a pathological mining layout that at some stage collapses during one large complex seismic event. Without external loading, the maximum potency of such an event would be limited to \(P_{max}\left (t\right ) = B\left (t\right )\).
Scenario 2: Power law deficit closure. The deficit of deformation is closed due to seismic activity with a power law size distribution. In this case, \(P_{max}\) can be estimated from \(B\left (t\right ) = V_{meff}\left (t\right ) - \gamma \sum _{t_{k}\leq t}P\left (t_{k}\right )\Theta \left (t-t_{k}\right )\) by replacing the sum of seismic potency with a model of potency production derived from the size distribution, \(B\left (t\right ) = V_{meff}\left (t\right ) - P_{max}^{1-\beta }\cdot \gamma \alpha \beta / \left (1-\beta \right )\). By setting \(B\left (t\right ) = 0\), one can now calculate the \(P_{max}\) of the potency frequency distribution that would have closed the deficit of deformation:
$$\displaystyle \begin{aligned} \begin{array}{rcl} P_{max}\left(t\right)& =&\displaystyle \left[\frac{1-\beta}{\gamma\beta}\cdot\frac{V_{meff}\left(t\right)}{\alpha}\right]^{\frac{1}{1-\beta}},\\ \log P_{max}\left(t\right)& =&\displaystyle \frac{1}{1-\beta}\log\left[\frac{1-\beta}{\gamma\beta}\cdot\frac{V_{meff}\left(t\right)}{\alpha}\right],{} \end{array} \end{aligned} $$
(4.26)
where \(\alpha \) is the number of seismic events with \(\log P\geq 0\). Similar equations were derived by Wyss (1973), Smith (1976), McGarr (1976), and Molnar (1979) to estimate the expected maximum magnitude of earthquakes, and by McGarr (1984) for mines.
The interpretation of this equation is straightforward, namely, \(P_{max}\left (t\right )\) increases with the effective volume mined. For given \(V_{meff}\left (t\right )\), the more seismic and aseismic potency has been produced to date, i.e. higher \(\alpha \), lower \(\beta \) and higher \(\gamma \), the less there is to be produced to close the deficit, therefore, the lower \(P_{max}\left (t\right )\). One way to reduce \(P_{max}\), at least temporarily, is to manage the heterogeneity of the seismogenic volume. Such heterogeneous systems are stiffer and, for a given \(V_{meff}\), maintain higher \(\beta \) for longer and deliver lower \(V_{meff}/\alpha \), therefore lower \(P_{max}\), see Fig. 4.13.
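The sketch below evaluates Eq. (4.26) directly; it assumes \(0<\beta <1\) and uses illustrative (hypothetical) input values rather than the MineD parameters.

```python
import math

def log_p_max_from_deficit(v_meff, alpha, beta, gamma=0.9):
    """log10 of the P_max that would close the deformation deficit, Eq. (4.26).
    v_meff: effective volume mined [m^3]; alpha: number of events with log P >= 0;
    beta: power law exponent (0 < beta < 1); gamma = gamma_0 + gamma_a."""
    if not 0.0 < beta < 1.0:
        raise ValueError("this form of Eq. (4.26) assumes 0 < beta < 1")
    return math.log10((1.0 - beta) / (gamma * beta) * v_meff / alpha) / (1.0 - beta)

# Hypothetical illustration; note the strong sensitivity to beta when beta is close to 1.
print(log_p_max_from_deficit(v_meff=6.0e3, alpha=260, beta=0.95, gamma=0.9))
```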
Fig. 4.14
History of the record breaking events and the estimated range of \( \log P_{max}\) for MineD based on two overlapping data sets, DataSet1 (left) and DataSet2 (right)
Relation (4.26) is based on a few assumptions that may limit its application. Firstly, it implies that one knows the total potency produced due to mining in a given area. In practice, however, one can at best account only for that portion of seismic potency which has been released since the introduction of the monitoring system. Secondly, it assumes that the only stress acting in the area is the one induced by the particular volume mined, which may not be the case in scattered mining scenarios or in the presence of tectonic stresses. Then the effective volume mined \(V_{meff}\left (t\right )\) may be smaller than \(P_{maxo}^{1-\beta }\cdot \gamma \alpha \beta / \left (1-\beta \right )\), where \(P_{maxo}\) is the maximum observed potency. Misfitting the power law may also result in the balance \(B\left (t\right )\) being negative.

4.7.3 Order Statistics: \(P_{max}\)—The Upper Limit to the Next Largest Event

Order statistics represent the characteristics of a sample after it has been ordered, usually from the smallest to the largest observation. For random potencies \(P_{1},\ldots ,P_{n}\), \(P_{\left (k\right )}\) is the kth smallest value, called the kth order statistic, with \(P_{\left (1\right )} = \min \left (P_{1},\ldots ,P_{n}\right )\) and \(P_{\left (n\right )} = \max \left (P_{1},\ldots ,P_{n}\right )\). Let \(P_{\left (1\right )} = P_{min}\),…,\(P_{\left (n\right )} = P_{maxo} < P_{max}\) be the order statistics of a random sample of seismic potency of size n from a population with a continuous distribution function. The question here is: What is \(P_{max}\), the estimator for the upper bound of this random variable? The probability that any potency is not greater than some P is, by definition, the cumulative distribution function, \(\Pr \left (<P\right ) = F\left (P\right )\), but the probability that \(P_{max}\) is not greater than P is \(\Pr \left (P_{max}<P\right ) = F_{P_{max}}\left (P\right ) = \Pr \left (P_{1}<P,\:P_{2}<P,\ldots ,\:P_{n}<P\right ) = \prod _{i=1}^{n}\Pr \left (P_{i}<P\right ) = \left [F\left (P\right )\right ]^{n}\). The corresponding probability density function is \(dF_{P_{max}}/dP = f_{P_{max}}\left (P\right ) = n \left [F\left (P\right )\right ]^{n-1} f\left (P\right )\), where \(f\left (P\right ) = dF\left (P\right )/dP\). The mean value of the distribution function \(F_{P_{max}}\left (P\right )\) is obtained after integrating by parts, \(\left \langle P_{max}\right \rangle = \intop _{P_{min}}^{P_{max}}P\,d\left [F\left (P\right )\right ]^{n} = P_{max} - \intop _{P_{min}}^{P_{max}}\left [F\left (P\right )\right ]^{n}dP\), and therefore the estimator of \(P_{max}\) can be taken as
$$\displaystyle \begin{aligned} P_{max}=P_{maxo}+\intop_{P_{min}}^{P_{maxo}}F_{P_{maxo}}^{n}\left(P\right)dP=P_{maxo}+\sum_{i=1}^{n-1}\left(\frac{i}{n}\right)^{n}\left[P_{\left(i+1\right)}-P_{\left(i\right)}\right],{} \end{aligned} $$
(4.27)
where \(F_{P_{maxo}}\left (P\right )\) is the empirical distribution function based on order statistics, \(F_{P_{maxo}}\left (P\right ) = i/n\), for \(P_{\left (i\right )} \leq P \leq P_{\left (i+1\right )}\) and for \(i = 1\),...,\(n-1\), \(F_{P_{maxo}}\left (P\right ) = 0\) for \(P < P_{min}\), and \(F_{P_{maxo}}\left (P\right ) = 1\) for \(P \geq P_{maxo}\), and \(dP = P_{\left (i+1\right )}-P_{\left (i\right )}\) (Cooke, 1979). Rearranging the summation gives
$$\displaystyle \begin{aligned} P_{max}=2P_{maxo}-\sum_{i=0}^{n-1}P_{maxo-i}\left[\left(1-\frac{i}{n}\right)^{n}-\left(1-\frac{i+1}{n}\right)^{n}\right],{} \end{aligned} $$
(4.28)
where \(P_{maxo-i}\) for \(i=0\) is the largest observed potency, for \(i=1\) the second largest, and so on. For large n, for which \(\lim _{n\rightarrow \infty }\left (1-i/n\right )^{n} = \exp \left (-i\right )\) and \(\lim _{n\rightarrow \infty }\left [1-\left (i+1\right )/n\right ]^{n} = \exp \left (-i-1\right )\), it can be simplified to
$$\displaystyle \begin{aligned} P_{max}=2P_{maxo}-\left(1-\frac{1}{e}\right)\sum_{i=0}^{n-1}e^{-i}P_{maxo-i}.{} \end{aligned} $$
(4.29)
As expected, the estimator of the upper bound of a random variable is a function of the differences between its largest observations, \(P_{maxo-i} - P_{maxo-i-1}\), and only the first few differences are significant in estimating \(P_{max}\).
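A minimal implementation of the Cooke (1979) estimator in the form of Eq. (4.28); applying it to \(\log P\) values instead of potencies gives the slightly more conservative log-order-statistics variant discussed below. The usage values are synthetic and purely illustrative.

```python
import numpy as np

def upper_bound_estimate(values):
    """Order-statistics estimator of the upper bound, Eq. (4.28) (Cooke, 1979).
    'values' are the observed potencies P (or log P for the log-order variant)."""
    v = np.sort(np.asarray(values, dtype=float))[::-1]          # descending: v[0] = P_maxo
    n = v.size
    i = np.arange(n)
    weights = (1.0 - i / n) ** n - (1.0 - (i + 1.0) / n) ** n   # weights sum to 1
    return 2.0 * v[0] - np.sum(weights * v)

# Illustrative usage with synthetic log P values (not the MineD records):
print(upper_bound_estimate([0.2, 0.5, 0.9, 1.3, 1.6, 1.9, 2.1, 2.2]))
```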
The estimate can also be made on log-order statistics, i.e. \(\log P_{\left (1\right )}\),...,\(\log P_{\left (n\right )}\), which is slightly more conservative, since \(\left \langle \log P\right \rangle \leq \log \left (\left \langle P\right \rangle \right )\). Then Eq. (4.28) gives
$$\displaystyle \begin{aligned} \log P_{max}=\log P_{maxo}+\Delta\log P_{max},{} \end{aligned} $$
(4.30)
where \(\log P_{maxo}\) is the maximum observed \(\log P\) and \(\Delta \log P_{max}\) is the maximum expected jump from the history of observed jumps in record \(\log P\),
$$\displaystyle \begin{aligned} \begin{array}{rcl} \Delta\log P_{max}& =&\displaystyle 2\max\left(\Delta\log P_{maxo}\right)\\ & &\displaystyle -\sum_{i=0}^{n-1} \left[\left(1-\frac{i}{n}\right)^{n}-\left(1-\frac{i+1}{n}\right)^{n}\right]\Delta\log P_{maxo-i},{} \end{array} \end{aligned} $$
(4.31)
The order statistics estimate of \(P_{max}\) or \(\log P_{max}\) is independent of the underlying probability distribution, and therefore, one can make a reasonable estimate even if the data does not conform to any accepted potency frequency power law.
Note that in most cases seismic systems are installed in already operating mines, and therefore a part of the history of seismic records is missed. It is then very likely that larger record jumps will be recorded at the beginning of the monitoring period, which may not reflect the true history of record breaking events. If included in the calculations, these false large record jumps will overstate the hazard and need to be ignored. Initially, this introduces a degree of discretion, but it fades with time as new records occur.
Figure 4.14 left shows the history of record breaking \(\log P\) in MineD. The data set, called here DataSet1, starts on 07 September 2007 and ends on 13 June 2012, just after the largest event with \(\log P=2.24\) (\(m_{HK}=2.41\)), and includes 2281 events with \(\log P\geq -1.0 \left (m_{HK}\geq 0.25\right )\). Search for the first record started from the mean of all \(\log P\geq 0.5\), which was \(\log P=0.88\), and delivered eight forward counting records. The expected upper limit of the next record breaking event estimated by Eq. (4.30) delivered \(\log P_{max}=2.684\). Table 4.1 lists \(\log P\) and \(m_{HK}\) of all eight forward counting records and the eight largest events. Note that the largest events are not necessarily records.
Fig. 4.15
Probabilities of having exactly k records, \(\Pr \left (k,n \right )\), dashed lines, and at least k records, \(\Pr \left ( \geq k,n \right )\), solid lines, in n observations
Table 4.1 List of eight forward counting \(\log P\) records and eight largest \(\log P\) events at MineD over the period 07 September 2007 to 13 June 2012

Record \(\log P\): 1.05, 1.07, 1.14, 1.16, 1.20, 1.44, 1.82, 2.24
Largest \(\log P\): 1.45, 1.46, 1.47, 1.51, 1.53, 1.65, 1.82, 2.24
Figure 4.14 right shows the history of record breaking \(\log P\) in the same mine for the period 07 September 2007 to 07 July 2013, called here DataSet2, which is 388.8 days longer than DataSet1, and stops just after the next record breaking event on 07 July 2013 with \(\log P=2.61 \left (m_{HK}=2.66\right )\). The new record \(\log P=2.61\) is close to the upper limit of the predicted range \(2.24 \leq \log P \leq 2.684\).

4.7.4 Record Statistics—The Next Record Breaking Event

A record is an entry in a chronological sequence of data that exceeds all previous entries. Let us ask the following question: Given a set of random observations in time, how often will the record value be surpassed? In a sequence \(P_{j}\), \(j = 1, 2,\ldots , n\), of real, independent, and identically distributed variables in a time series, a record breaking high occurs at k if \(P_{k} = \max _{j\leq k}\left (P_{j}\right )\). \(P_{1}\) is always a record. The probability that a record high, or low, occurs at j is 1/j. The probability of having exactly k records in n samples can be calculated by a recursive formula, \(\Pr \left (k,n\right ) = \left (1-1/n\right ) \Pr \left (k,n-1\right ) + \left (1/n\right ) \Pr \left (k-1,n-1\right )\) for \(k \geq 1\) and \(n \geq 2\), with the initial values \(\Pr \left (k=0,n\geq 1\right ) = 0\) and \(\Pr \left (k=1,n=1\right ) = 1\), see Fig. 4.15, dashed lines. For larger sample size n, the asymptotic result gives \(\Pr \left (k,n\right ) = \left [\ln \left (n\right )\right ]^{k-1} / \left [n\left (k-1\right )!\right ]\). The probability of having at least k records in n samples then is \(\Pr \left (\geq k,n\right ) = 1 - \sum _{j=1}^{k-1}\Pr \left (j,n\right )\), see Fig. 4.15, solid lines.
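A small sketch of the recursion for \(\Pr \left (k,n\right )\), implemented iteratively to avoid deep recursion; the function names are illustrative.

```python
def record_probabilities(n):
    """Pr(exactly k records in n i.i.d. observations) for k = 0..n, from the recursion
    Pr(k, m) = (1 - 1/m) Pr(k, m-1) + (1/m) Pr(k-1, m-1), with Pr(1, 1) = 1."""
    pr = [0.0] * (n + 1)
    pr[1] = 1.0
    for m in range(2, n + 1):
        new = [0.0] * (n + 1)
        for k in range(1, m + 1):
            new[k] = (1.0 - 1.0 / m) * pr[k] + (1.0 / m) * pr[k - 1]
        pr = new
    return pr

def prob_at_least_k_records(k, n):
    """Pr(at least k records in n observations) = 1 - sum_{j < k} Pr(j, n)."""
    pr = record_probabilities(n)
    return 1.0 - sum(pr[1:k])

print(prob_at_least_k_records(5, 100))   # probability of at least 5 records in 100 samples
```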
Fig. 4.16
History of the record breaking events and the estimated range of \( \log P_{nrb}\) for MineD based on two overlapping data sets, assuming the UT distribution, in red, and the OET distribution, in green
The expected number of record highs (or lows), \(\left \langle N_{rb}\right \rangle \), is \(\sum _{j=1}^{n}\left (1/j\right )\), a harmonic series that grows without bound, though rather slowly. The proof that the harmonic series diverges was provided by Oresme (c. 1323-1382) in the fourteenth century and was published only in the fifteenth century by Johannes de Sancto Martino (Oresme, 1482). This supports the idea that any record can be beaten, but the time for this becomes longer and longer with the increasing number of records. For large n, the expected number of records can be approximated by \(\left \langle N_{rb}\left (n\right )\right \rangle \approx \ln \left (n\right ) + 0.577215\), and the variance by \(Var\left (N_{rb}\right ) = \sum _{j=1}^{n}\left (1/j\right ) - \sum _{j=1}^{n}\left (1/j^{2}\right ) \approx \ln \left (n\right ) - 1.0677\) (Table 4.2).
Table 4.2 The expected number of records in n independent observations

n: \(10^{1}\), \(10^{2}\), \(10^{3}\), \(10^{4}\), \(10^{5}\), \(10^{6}\)
\(\left \langle N_{rb}\right \rangle \): 2.93, 5.19, 7.49, 9.8, 12.1, 14.4
\(\sqrt {Var}\): 1.17, 1.88, 2.42, 2.85, 3.23, 3.57
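The entries of Table 4.2 can be checked directly with a few lines of Python; this is only a verification of the harmonic sums quoted above.

```python
import numpy as np

def expected_records(n):
    """Expected number of records and its standard deviation in n i.i.d. observations."""
    j = np.arange(1, n + 1, dtype=float)
    mean = np.sum(1.0 / j)
    var = mean - np.sum(1.0 / j ** 2)
    return mean, np.sqrt(var)

for exponent in range(1, 7):
    mean, sd = expected_records(10 ** exponent)
    print(f"n = 10^{exponent}:  <N_rb> = {mean:.2f},  sqrt(Var) = {sd:.2f}")
```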
The observed frequencies of record highs (or lows) can be used to infer whether or not the data set is random. The number of record breaking events can be determined both with time running forwards and with time running backwards. If future samples behave like earlier samples, then the number of records calculated forwards would be equal to the number of records calculated backwards. Statistically significantly more forward record breaking events signify an increased hazard and more backward records indicate that hazard is abating. The record sequence is distinctly non-stationary with increasing time, i.e. it becomes exponentially harder to beat the current record.
If \(P_{maxo}\) is the maximum potency in a data set of n observations, then, for a random data set, the probability of having a larger potency in the next \(n_{n}\) new observations is \(\Pr ( > P_{maxo}, n_{n} ) = n_{n} / \left (n+n_{n}\right )\). For large n, the number of new observations needed to beat the current record with probability \(\Pr \) is given by \(n_{\Pr } = n\cdot \Pr / \left (1-\Pr \right )\). So, there is 60% probability of beating the current record in \(1.5 n\) new observations.
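The two relations in the preceding paragraph translate into a couple of one-line functions; the 1.5n figure for a 60% probability follows immediately.

```python
def prob_beat_record(n, n_new):
    """Probability of beating the current record within n_new further random observations."""
    return n_new / (n + n_new)

def observations_to_beat_record(n, prob):
    """Number of new observations needed to beat the record with the given probability."""
    return n * prob / (1.0 - prob)

print(observations_to_beat_record(1000, 0.6))   # 1500.0, i.e. 1.5 n, as stated in the text
```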

4.7.5 The Expected Next Record Breaking Potency

Suppose that the selected data set on seismic potency is distributed according to the upper truncated (UT) power law with probability density function \(f\left (P\right ) = \beta P^{-\beta -1}/\left (P_{min}^{-\beta }-P_{max}^{-\beta }\right )\), where \(P_{min}\), \(P_{max}\), and \(\beta \) are parameters. The first randomly selected potency is, by definition, the first record breaking event, and it will most likely be the mean value of that distribution, \(P_{nrb(1)} = \left [\beta /\left (1-\beta \right )\right ] \left (P_{max}^{1-\beta }-P_{min}^{1-\beta }\right ) / \left (P_{min}^{-\beta }-P_{max}^{-\beta }\right )\) for \(\beta \neq 1\), or \(P_{nrb(1)} = \ln \left (P_{max}/P_{min}\right )/\left (P_{min}^{-1}-P_{max}^{-1}\right )\) for \(\beta = 1\). The value of the second record breaking event will be the mean value of that portion of the distribution that lies beyond the first record, \(P_{nrb\left (2\right )} = \intop _{P_{nrb\left (1\right )}}^{P_{max}}Pf\left (P\right )dP / \intop _{P_{nrb\left (1\right )}}^{P_{max}}f\left (P\right )dP\), and proceeding recursively, the relation between successive typical record potencies can be given by
$$\displaystyle \begin{aligned} P_{nrb\left(k\right)}=\frac{\beta\left(P_{max}^{1-\beta}-P_{nrb\left(k-1\right)}^{1-\beta}\right)}{\left[\left(1-\beta\right)\left(P_{nrb\left(k-1\right)}^{-\beta}-P_{max}^{-\beta}\right)\right]}\quad \mbox{or}\quad P_{nrb\left(k\right)}=\frac{\ln\left[P_{max}/P_{nrb\left(k-1\right)}\right]}{P_{nrb\left(k-1\right)}^{-1}-P_{max}^{-1}},{} \end{aligned} $$
(4.32)
for \(\beta \neq 1\) (left) and \(\beta = 1\) (right), and for \(k = 2,3,...,\) where \(P_{nrb\left (k-1\right )}\) is the potency of the previous (or the last) record breaking event (Mendecki, 2012). Note that \(P_{nrb\left (k\right )}\) given by Eq. (4.32) are not particularly sensitive to changes in \(\beta \). Applying a similar procedure, the next record breaking potency for the OET relation is
$$\displaystyle \begin{aligned} P_{nrb\left(k\right)}=P_{nrb\left(k-1\right)}+P_{nrb\left(k-1\right)}^{\beta}P_{c}^{1-\beta}\exp\left[P_{nrb\left(k-1\right)}/P_{c}\right]\varGamma\left[1-\beta,\frac{P_{nrb\left(k-1\right)}}{P_{c}}\right].{} \end{aligned} $$
(4.33)
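A sketch of both recursions, Eq. (4.32) for the UT power law and Eq. (4.33) for the OET relation, using SciPy's incomplete gamma function. The illustrative call uses parameter values similar to those quoted for DataSet1 below; since the parameter estimation itself is not reproduced here, the output should only be close to, not identical with, the values in the text.

```python
import numpy as np
from scipy.special import gamma, gammaincc

def next_record_ut(p_prev, p_max, beta):
    """Expected next record breaking potency for the UT power law, Eq. (4.32)."""
    if np.isclose(beta, 1.0):
        return np.log(p_max / p_prev) / (1.0 / p_prev - 1.0 / p_max)
    return (beta * (p_max ** (1.0 - beta) - p_prev ** (1.0 - beta))
            / ((1.0 - beta) * (p_prev ** (-beta) - p_max ** (-beta))))

def next_record_oet(p_prev, p_c, beta):
    """Expected next record breaking potency for the OET relation, Eq. (4.33);
    Gamma(1 - beta, x) is the (unregularised) upper incomplete gamma function, beta < 1."""
    a = 1.0 - beta
    upper_gamma = gamma(a) * gammaincc(a, p_prev / p_c)
    return p_prev + p_prev ** beta * p_c ** a * np.exp(p_prev / p_c) * upper_gamma

# Illustrative values close to DataSet1: last record log P = 2.24, beta ~ 0.95,
# UT log P_max = 2.684, OET log P_c = 1.85.
p_prev = 10 ** 2.24
print(np.log10(next_record_ut(p_prev, p_max=10 ** 2.684, beta=0.949)))
print(np.log10(next_record_oet(p_prev, p_c=10 ** 1.85, beta=0.951)))
```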
Figure 4.16 shows the histories of record breaking \(\log P\) in MineD, as in Fig. 4.14, and the estimated expected next record \(\log P\) assuming the UT power law distribution, in red, and the OET distribution, in green.
Fig. 4.17
Cumulative number of unbinned data vs. \( \log P\) for the first (left) and second (right) data sets in MineD. The three fits, marked by black, red, and green solid lines, are described in the text
The power law exponent for the UT distribution of DataSet1 is \(\beta = 0.949\) and for the OET \(\beta = 0.951\). For DataSet2 the respective values are \(\beta = 0.941\) and \(\beta = 0.942\). Assuming the UT distribution, the expected next record breaking \(\log P_{nrb} = 2.45\) for DataSet1 and \(\log P_{nrb} = 2.81\) for DataSet2. Assuming the OET distribution, \(\log P_{c}=1.85\) and \(\log P_{nrb} = 2.36\) for DataSet1, and \(\log P_{c}=2.41\) and \(\log P_{nrb} = 2.77\) for DataSet2.
Since it has been published in Mendecki (2012), the method has been applied to data sets from a number of mines in Australia, Africa, South America, Canada, and Europe and, with a few exceptions of extreme outliers, delivered reasonable results. Recently the method has been applied to injection-induced seismicity by Cao et al. (2020) and Verdon and Bommer (2020).
Figure 4.17 left shows the cumulative number of unbinned data vs. \(\log P\geq -1.0\) for DataSet1, where colour indicates the time of the event, with three fits. (1) The upper truncated (UT) potency frequency relation assuming \(\log P_{max}=2.684\) and the respective \(\alpha =257.612\) and \(\beta =0.949\), in black. (2) The upper truncated (UT) potency frequency relation assuming \(\log P_{max}=2.45\) and the respective \(\alpha =258.312\) and \(\beta =0.948\) in red. (3) The OET one with the soft upper cut-off \(\log P_{c}=1.85\) and the respective \(\alpha =256.594\) and \(\beta =0.951\), in green. The blue straight line represents the OE power law and is given as a reference. The light grey vertical spikes below the data illustrate what would be the empirical pdf if the data was binned with bin size 0.05. The dashed grey lines on both sides of the UT power law fit indicate 95% confidence limits, see Eq. (4.18).
Figure 4.17 right shows the cumulative number of unbinned data vs. \(\log P\geq -1.0\) for DataSet2 with the three fits. (1) The upper truncated (UT) potency frequency relation assuming \(\log P_{max}=3.04\) with the respective \(\alpha =324.356\) and \(\beta =0.941\) in black. (2) The upper truncated (UT) potency frequency relation assuming \(\log P_{max}=2.81\) with the respective \(\alpha =324.816\) and \(\beta =0.94\) in red. (3) The OET one with the soft upper cut-off \(\log P_{c}=2.41\) with \(\alpha =323.68\) and \(\beta =0.942\), in green.

4.7.6 Rank Plot

A rank plot of seismic potency, or energy, is a presentation of the potency, or energy, frequency data with the axes flipped: \(\log P\) is on the vertical axis and the rank, or \(N\left (\geq \log P\right )\), is on the horizontal axis. The statement that “the nth largest event has \(\log P\)” is equivalent to “there are n events greater than or equal to \(\log P\)”. Inverting the axes also inverts the exponent from \(\beta \) to \(1/\beta \).
Rank statistics is a useful tool to test the continuity of the power law size distribution over the entire range of available data and to estimate the next record breaking event size, \(\log P_{nrb}\), in cases where a small number of largest events deviate from the assumed power law (Sornette et al., 1996).
Firstly, all events are ranked so that the largest observed \(\log P\), \(\log P_{maxo}\), is in rank 1, the second largest, \(\log P_{maxo-1}\), in rank 2, the third largest, \(\log P_{maxo-2}\), in rank 3, et cetera. Note that the ranked series of events will almost never be in chronological order. The rank-ordering plot depicts \(\log P\) vs. the logarithm of the rank. Then, the largest observed event, \(\log P_{maxo}\), is shifted to rank 2, the second largest observed event, \(\log P_{maxo-1}\), to rank 3, and so on. A fit to the first n largest events in the shifted rank-ordering plot, extrapolated to rank 1, gives the estimate of the \(\log P_{nrb}\) event. The \(\log P_{max}\) is defined as the upper 95% confidence limit of \(\log P_{nrb}\). The choice of n depends on the structure of the rank-ordering plot. If the rank-ordering plot has two branches, the cross-over point would be the obvious candidate for selecting n. In most cases studied, the cross-over point is at \(10 \leq n \leq 50\), and therefore \(\log P_{nrb}\) is constrained by a small number of the largest events.
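A minimal sketch of the shifted rank-ordering estimate described above. The least squares fit here regresses \(\log _{10}\) of the shifted rank on \(\log P\), i.e. it assumes the fitted relation \(\log N = \log \alpha - \beta \log P\); the exact fitting convention used by the author may differ, so the function is illustrative only.

```python
import numpy as np

def next_record_from_rank_plot(log_p, n_fit=6):
    """Shifted rank-ordering estimate of log P_nrb (after Sornette et al., 1996).
    log_p: observed log potencies; n_fit: number of largest events used in the fit."""
    log_p = np.sort(np.asarray(log_p, dtype=float))[::-1]   # rank 1 = largest event
    shifted_rank = np.arange(2, n_fit + 2)                   # largest moved to rank 2, etc.
    # straight-line fit log10(N) = log10(alpha) - beta * log P to the n_fit largest events
    slope, intercept = np.polyfit(log_p[:n_fit], np.log10(shifted_rank), 1)
    beta, log_alpha = -slope, intercept
    return log_alpha / beta        # intersection with rank 1, where log10(N) = 0
```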
Figure 4.18 left shows the rank plot of \(\log P\) recorded at a mine where the two branches of the data can be differentiated, chosen here to illustrate the method. The cross-over point is assumed at the 6th largest event. Before shifting, the least square fit to the largest six events gives \(\alpha = 24.12\) and \(\beta = 0.44\).
Fig. 4.18
Rank-ordered \( \log P\) before shifting (left) and after shifting, with the estimated range of \( \log P_{nrb}\) (right)
Figure 4.18 right shows the rank plot of \(\log P\) of the same data set after shifting. The least square fit to the six largest events gives \(\alpha = 17.62\) and \(\beta = 0.30\), which at the intersection with rank 1 gives \(\log P_{nrb}=4.12 \pm 19.5\), and therefore one can assume the upper limit of \(\log P_{nrb} = \log P_{max}=4.51\).
Note that in this case the upper branch has a steeper slope, i.e. a lower \(\beta \) in the size distribution than the lower branch, which is the opposite of what is expected for tectonic earthquakes. This frequently happens towards the end of mining operations, when the extraction ratio is high and/or when mining approaches larger geological structures whose shear strength is comparable to the induced stresses.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Title: Size Distribution and Seismic Hazard
Author: Aleksander J. Mendecki
Copyright year: 2025
DOI: https://doi.org/10.1007/978-3-031-93239-7_4
Footnote 1 From what has already been demonstrated, you can plainly see the impossibility of increasing the size of structures to vast dimensions either in art or in nature; likewise the impossibility of building ships, palaces, or temples of enormous size in such a way that their oars, yards, beams, iron bolts, and, in short, all their other parts will hold together; nor can nature produce trees of extraordinary size because the branches would break down under their own weight; so also it would be impossible to build up the bony structures of men, horses, or other animals so as to hold together and perform their normal functions if these animals were to be increased enormously in height; for this increase in height can be accomplished only by employing a material which is harder and stronger than usual, or by enlarging the size of the bones, thus changing their shape until the form and appearance of the animals suggest a monstrosity.
 
Aki, K. (1965). Maximum likelihood estimate of b in the formula logN=a-bm and its confidence limits. Bulletin Earthquake Research Institute Tokyo University, 43, 237–239.
Aki, K. (1972). Earthquake mechanism. Tectonophysics, 13(1-4), 423–446.
Amitrano, D. (2003). Brittle-ductile transition and associated seismicity: Experimental and numerical studies and relationship with the b-value. Journal of Geophysical Research, 108(B1), 2044, 1–15. http://doi.org/10.1029/2001JB000680.
Burroughs, S. M., & Tebbens, S. (2001). Upper-truncated power laws in natural systems. Pure and Applied Geophysics, 158(4), 741–757.
Cao, N.-T., Eisner, L., & Jechumtalova, Z. (2020). Next record breaking magnitude for injection induced seismicity. First Break, 38.
Choy, G. L., McGarr, A., Kirby, S., & Boatwright, J. (2006). An overview of the global variability in radiated energy and apparent stress. In R. E. Abercrombie, A. McGarr, G. D. Toro, & H. Kanamori (Eds.), Earthquakes: Radiated Energy and the Physics of Faulting (pp. 43–58). American Geophysical Union.
Christensen, K., Danon, L., Scanlon, T., & Bak, P. (2002). Unified scaling law for earthquakes. PNAS, 99(1), 2509–2513.
Cooke, P. (1979). Statistical inference for bounds of random variables. Biometrika, 66(2), 367–374.
Cosentino, P., Ficarra, V., & Luzio, D. (1977). Truncated exponential frequency-magnitude relationship in earthquake statistics. Bulletin of the Seismological Society of America, 67(6), 1615–1623.
Galileo, G. (1638). Dialogues concerning two new sciences. Lodewijk Elzevir.
Gutenberg, B., & Richter, C. F. (1944). Frequency of earthquakes in California. Bulletin of the Seismological Society of America, 34, 185–188.
Hanks, T. C. (1992). Small earthquakes, tectonic forces. Science, 256, 1430–1431.
Helmstetter, A., Kagan, Y. Y., & Jackson, D. D. (2005). Importance of small earthquakes for stress transfers and earthquake triggering. Journal of Geophysical Research, 110(B05S08), 1–13. http://doi.org/10.1029/2004JB003286.
Jackson, D. D., & Kagan, Y. Y. (2006). The 2004 Parkfield earthquake, the 1985 prediction, and characteristic earthquakes: Lessons for the future. Bulletin of the Seismological Society of America, 96, S397–S409. http://doi.org/10.1785/0120050821.
Kagan, Y. Y. (1993). Statistics of characteristic earthquakes. Bulletin of the Seismological Society of America, 83(1), 7–24.
Kagan, Y. Y., & Schoenberg, F. P. (2001). Estimation of the upper cutoff parameter for the tapered Pareto distribution. Journal of Applied Probability, 38A, 158–175.
Kagan, Y. Y., Jackson, D. D., & Geller, R. J. (2012). Characteristic earthquake model, 1884-2011, R.I.P. Seismological Research Letters, 83(6), 951–953.
Kijko, A. (2004). Estimation of the maximum earthquake magnitude, Mmax. Pure and Applied Geophysics, 161(8), 1655–1681. http://doi.org/10.1007/s00024-004-2531-4.
Kijko, A., & Funk, C. W. (1994). The assessment of seismic hazards in mines. The Journal of The Southern African Institute of Mining and Metallurgy, 94(7), 179–185.
Kijko, A., & Singh, M. (2011). Statistical tools for maximum possible earthquake magnitude estimation. Acta Geophysica, 59(4), 674–700. http://doi.org/10.2478/s11600-011-0012-6.
Kijko, A., & Stankiewicz, T. (1987). Bimodal character of the distribution of extreme seismic events in Polish mines. Acta Geophysica Polonica, 35, 491–506.
Kwiatek, G., Plenkers, K., Nakatani, M., Yabe, Y., & Dresen, G. (2010). Frequency-magnitude characteristics down to magnitude \(-4.4\) for induced seismicity recorded at Mponeng gold mine, South Africa. Bulletin of the Seismological Society of America, 100(3), 1165–1173.
Laherrere, J., & Sornette, D. (1999). Stretched exponential distributions in nature and economy: Fat tails with characteristic scales. European Physical Journal B, 2, 525–539.
Linkov, A. M. (2006). Numerical modeling of seismic and aseismic events in three dimensional problems of rock mechanics. Journal of Mining Science, 42(1), 1–14.
Linkov, A. M. (2013). Numerical modelling of seismicity: Theory and applications, Keynote lecture. In A. Malovichko, & D. Malovichko (Eds.), 8th International Symposium on Rockbursts and Seismicity in Mines, Russia (pp. 197–218).
Main, I. G., & Burton, P. W. (1984). Physical links between crustal deformation, seismic moment and seismic hazard for regions of varying seismicity. Geophysical Journal of the Royal Astronomical Society, 79(2), 469–488.
Malovichko, D., & Basson, G. (2014). Simulation of mining induced seismicity using Salamon-Linkov method. In M. Hudyma, & Y. Potvin (Eds.), 7th International Conference on Deep and High Stress Mining (pp. 667–680). http://doi.org/10.13140/2.1.1365.0561.
McGarr, A. (1976). Upper limit to earthquake size. Nature, 262(5567), 378–379.
McGarr, A. (1984). Some applications of seismic source mechanism studies to assessing underground hazard. In N. C. Gay, & E. H. Wainwright (Eds.), Proceedings 1st International Symposium on Rockbursts and Seismicity in Mines, Johannesburg, South Africa (pp. 199–208). South African Institute of Mining and Metallurgy.
McGarr, A. (1993). Keynote Address: Factors influencing the strong ground motion from mining induced tremors. In R. P. Young (Ed.), Proceedings 3rd International Symposium on Rockbursts and Seismicity in Mines, Kingston, Ontario, Canada (pp. 3–12). Balkema, Rotterdam.
McGarr, A., Simpson, D., & Seeber, L. (2002). Case histories of induced and triggered seismicity. In W. H. K. Lee, H. Kanamori, P. C. Jennings, & C. Kisslinger (Eds.), International Handbook of Earthquake and Engineering Seismology (pp. 647–661). Academic Press.
Mendecki, A. J. (1993). Real time quantitative seismology in mines: Keynote Address. In R. P. Young (Ed.), Proceedings 3rd International Symposium on Rockbursts and Seismicity in Mines, Kingston, Ontario, Canada (pp. 287–295). Balkema, Rotterdam.
Mendecki, A. J. (2008). Forecasting seismic hazard in mines. In Y. Potvin, J. Carter, A. Diskin, & R. Jeffrey (Eds.), Proceedings 1st Southern Hemisphere International Rock Mechanics Symposium, Perth, Australia (pp. 55–69). Australian Centre for Geomechanics.
Mendecki, A. J. (2012). Size distribution of seismic events in mines. In Proceedings of the Australian Earthquake Engineering Society 2012 Conference, Queensland, pp. 1–20.
Mendecki, A. J. (2013). Characteristics of seismic hazard in mines: Keynote Lecture. In A. Malovichko, & D. A. Malovichko (Eds.), Proceedings 8th International Symposium on Rockbursts and Seismicity in Mines, St Petersburg-Moscow, Russia (pp. 275–292). ISBN: 978-5-903258-28-4.
Mendecki, A. J., & van Aswegen, G. (1998). System stiffness and seismic characteristics - A case study. In M. Ando (Ed.), International Workshop on Frontiers in Monitoring Science and Technology for Earthquake Environments, Japan (pp. F4–2).
Mendecki, A. J., van Aswegen, G., Brown, J. N. R., & Hewlett, P. (1988). The Welkom seismological network. In C. Fairhurst (Ed.), 3rd International Symposium on Rockbursts and Seismicity in Mines, 08-10 June 1988, Minneapolis, USA (pp. 237–244). Balkema, Rotterdam, 1990.
Mogi, K. (1962). Magnitude frequency relations for elastic shocks accompanying fractures of various materials and some related problems in earthquakes. Bulletin Earthquake Research Institute of Tokyo University, 40, 831–853.
Molnar, P. (1979). Earthquake recurrence intervals and plate tectonics. Bulletin of the Seismological Society of America, 69(1), 115–133.
Mori, J., & Abercrombie, R. E. (1997). Depth dependence of earthquake frequency-magnitude distribution in California: Implications for rupture initiation. Journal of Geophysical Research, 102(B7), 15,081–15,090.
Oresme, N. (1482). Tractatus de Configuratione Qualitatum et Motuum. Johannes de Sancto Martino.
Page, R. (1968). Aftershocks and microaftershocks of the great Alaska earthquake of 1964. Bulletin of the Seismological Society of America, 58(3), 1131–1168.
Pisarenko, V. F., Sornette, A., Sornette, D., & Rodkin, M. V. (2008). New approach to the characterization of Mmax and of the tail of the distribution of earthquake magnitudes. Pure and Applied Geophysics, 165(5), 847–888. http://doi.org/10.1007/s00024-008-0341-9.
Robson, D. S., & Whitlock, J. H. (1964). Estimation of a truncation point. Biometrika, 51(1-2), 33–39.
Scholz, C. H. (1968). Microfractures, aftershocks, and seismicity. Bulletin of the Seismological Society of America, 58(3), 1117–1130.
Shannon, C. E. (1948). A mathematical theory of communication. Part 2. Bell System Technical Journal, 27, 623–656.
Shi, Y., & Bolt, B. A. (1982). The standard error of the magnitude-frequency b value. Bulletin of the Seismological Society of America, 72(5), 1677–1687.
Smith, S. W. (1976). Determination of maximum earthquake magnitude. Geophysical Research Letters, 3(6), 351–354.
Sornette, D. (2009). Dragon-kings, black swans and the prediction of crises. International Journal of Terraspace Science and Engineering, 2(1), 1–18.
Sornette, D., Knopoff, L., Kagan, Y. Y., & Vanneste, C. (1996). Rank-ordering statistics of extreme events: Applications to the distribution of large earthquakes. Journal of Geophysical Research, 101(B6), 13,883–13,893.
Taleb, N. N. (2007). The black swan: The impact of the highly improbable. Random House.
Utsu, T. (1964). Estimation of b-value in the Gutenberg-Richter formula logN = a - bm. Paper read at the meeting of the Seismological Society of Japan.
van Aswegen, G., & Butler, A. G. (1993). Applications of quantitative seismology in South African gold mines. In R. P. Young (Ed.), Proceedings 3rd International Symposium on Rockbursts and Seismicity in Mines, Kingston, Ontario, Canada (pp. 261–266). Balkema, Rotterdam. ISBN: 9054103205.
van Aswegen, G., & Mendecki, A. J. (1999). Mine layout, geological features and seismic hazard. Final Report GAP 303, Safety in Mines Research Advisory Committee, South Africa, 1–91.
Van Der Watt, P. (1980). Note on estimation of bounds of random variables. Biometrika, 67(3), 712–714.
Verdon, J. P., & Bommer, J. J. (2020). Green, yellow, red, or out of the blue? An assessment of traffic light schemes to mitigate the impact of hydraulic fracturing-induced seismicity. Journal of Seismology. http://doi.org/10.1007/s10950-020-09966-9.
Vere-Jones, D., Robinson, R., & Yang, W. (2001). Remarks on the accelerated moment release model: Problems of model formulation, simulation and estimation. Geophysical Journal International, 144, 517–531.
Ward, S. N. (1997). More on Mmax. Bulletin of the Seismological Society of America, 87(5), 1199–1208.
Weingartner, P. (1996). Under what transformations are laws invariant? Laws and prediction in the light of chaos research. Springer.
Wesnousky, S. G. (1994). The Gutenberg-Richter or characteristic earthquake distribution, which is it? Bulletin of the Seismological Society of America, 84(6), 1940–1959.
Wesnousky, S. G., Scholz, C. H., Shimazaki, K., & Matsuda, T. (1983). Earthquake frequency distribution and the mechanics of faulting. Journal of Geophysical Research, 88(B11), 9331–9340.
Wiemer, S., & Wyss, M. (2002). Mapping spatial variability of the frequency-magnitude distribution of earthquakes. In R. Dmowska, & B. Saltzman (Eds.), Advances in Geophysics (vol. 45, pp. 259–302). Elsevier.
Wyss, M. (1973). Towards a physical understanding of the earthquake frequency distribution. Geophysical Journal of the Royal Astronomical Society, 31(4), 341–359.
Zoller, G. (2022). A note on the estimation of the maximum possible earthquake magnitude based on extreme value theory for the Groningen gas field. Bulletin of the Seismological Society of America, 112(4), 1825–1831.
Zoller, G., & Holschneider, M. (2014). Induced seismicity: What is the size of the largest expected earthquake? Bulletin of the Seismological Society of America, 104(6), 3153–3158. http://doi.org/10.1785/0120140195.
Zoller, G., & Holschneider, M. (2016). The earthquake history in a fault zone tells us almost nothing about mmax. Seismological Research Letters, 87, 132–137. http://doi.org/10.1785/022015017.