Abstract
This chapter starts with the authors’ views on different aspects of mine seismology, states the main differences between earthquakes and mine induced seismicity, lists the major factors influencing seismic hazard in mines, and proceeds to the general formulation of the seismic hazard management plan for seismic monitoring. Such a plan should specify (1) the objectives of seismic monitoring for a particular mine, (2) what the mine needs to have in terms of technology and skill, and (3) what the mine needs to do daily, weekly, monthly, and yearly to achieve the stated objectives. At the end of this chapter, almost as an appendix, is a short note on seismic magnitude scales.
1.1 Progress and Limitations in Seismic Monitoring in Mines
Over the last 40 years, considerable progress has been made in seismic monitoring technology, the quantification of seismic sources, and the quantification and analysis of mining induced seismicity. However, not enough is said about the limitations of seismic sensors, shortcomings in data acquisition, our limited understanding of statistical mechanics, non-linear mechanics and rock mass dynamics, and the tendency to over-interpretation.
Seismic Sensors and Systems
The world's first digital and intelligent seismic monitoring system, i.e. with A/D conversion at the sensor site, and with each site working independently and as a part of the network, was developed in South Africa and began routine operation in January 1988. There were 48 three-component sites, mostly located underground with a few on surface, to monitor seismic activity associated with gold mining in the Welkom area that spanned 20 km by 15 km (Mendecki et al., 1988). At that time more than a hundred thousand people worked daily at depths of 500 to 2500 m below surface, applying a scattered mining method imposed on the tabular reefs by geological conditions, extracting 28 million tonnes of ore and waste rock per year and producing almost 200 tonnes of gold.
Since then, a few hundred mines worldwide adopted digital seismic monitoring to manage seismic rock mass response to mining. Consequently, a number of geotechnical practitioners and geophysicists were exposed to, and trained in, the practical aspects of running seismic networks and in routine data interpretation. In the process, a new competence has emerged, namely that of mine seismologist.
Digital processing has benefited greatly from improvements in semiconductor technology over the past few decades, and, to a lesser extent, so has analogue signal conditioning and conversion. But transduction from ground motion to an electrical analogue, especially in the form of sensors rugged and small enough for installation in underground boreholes, has not. The result is that sensors are often the limiting factor in the performance of the seismic system as a whole. The principal reason is mass: even in the noisy environment of a mine, with events occurring relatively close to the sensors, useful signals are small, and the simplest way to decrease sensor noise is to increase the mass, whereas the improvement in semiconductor price/performance has been attained largely by shrinking the size of individual features. Applications with lower signal-to-noise ratio requirements, such as vehicle airbag triggers, quite successfully use low-cost semiconductor MEMS (micro-electro-mechanical systems) accelerometers. For mines, however, these lack dynamic range and can measure only the strongest ground motions, for example on the skin of an excavation during a damaging event. The passive geophone with coil and magnet, introduced over 60 years ago, is still holding its own in mine seismic monitoring. While geophones have limitations, we are not always able to utilise their capabilities fully, mainly because of incorrect installation and because of applying an approximate transfer function to translate the voltage generated by the geophone into instrumental ground velocities. In addition, there are transverse resonances (spurious frequencies) which make themselves felt at some sites but are not consistent enough from event to event to allow removal by site response correction.
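The voltage-to-velocity correction mentioned above can be sketched as a frequency-domain deconvolution of a nominal geophone transfer function. This is a minimal illustration, not any particular instrument's response: the natural frequency, damping, sensitivity, and water level below are assumed values.

```python
import numpy as np

def geophone_response(freqs, f0=4.5, damping=0.7, sensitivity=28.8):
    """Nominal transfer function (volts per m/s) of a coil-magnet geophone.
    f0 (natural frequency, Hz), damping (fraction of critical), and
    sensitivity (V/(m/s)) are illustrative values, not a real datasheet."""
    s = 2j * np.pi * freqs
    w0 = 2.0 * np.pi * f0
    return sensitivity * s**2 / (s**2 + 2.0 * damping * w0 * s + w0**2)

def voltage_to_velocity(volts, dt, water_level=0.01, **resp_kwargs):
    """Deconvolve the geophone response in the frequency domain.
    A water level stabilises the division where |H| is small (below f0);
    without it the correction magnifies low-frequency noise."""
    n = len(volts)
    H = geophone_response(np.fft.rfftfreq(n, dt), **resp_kwargs)
    h_floor = water_level * np.abs(H).max()
    # clamp the amplitude of H from below, keep its phase
    H_stab = np.maximum(np.abs(H), h_floor) * np.exp(1j * np.angle(H))
    return np.fft.irfft(np.fft.rfft(volts) / H_stab, n)
```

A signal well above the natural frequency, where the response is nearly flat, is recovered almost exactly; the water level mainly affects the band below f0.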
Optical cavity accelerometers have been constructed that use optical cooling to reduce the thermal noise of a small mass while offering the high dynamic range of electro-mechanical force-balance accelerometers and a wider bandwidth, but they are still relatively expensive.
To detect smaller events in mines, accelerometers or velocity sensors are generally used, which attenuate low frequencies compared to displacement sensors. This is problematic since the measurement of seismic potency is taken from the low-frequency asymptote of the displacement spectrum, which then requires integration of the velocity or acceleration ground motion recordings. The magnification of low frequencies inherent in the integration process, especially when compounded by the deconvolution of a velocity sensor response, must be carefully controlled to prevent the data from being swamped by noise. This can even lead to difficulties in picking the dominant frequency.
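The integration step and its low-frequency control can be sketched as follows; the corner frequency `f_hp` below which the magnified band is suppressed is an illustrative choice, not a recommended value.

```python
import numpy as np

def integrate_velocity(vel, dt, f_hp=1.0):
    """Integrate a ground-velocity record to displacement in the
    frequency domain.  Division by i*omega magnifies low frequencies,
    so everything below the (assumed) corner f_hp is zeroed out."""
    n = len(vel)
    freqs = np.fft.rfftfreq(n, dt)
    spec = np.fft.rfft(vel)
    with np.errstate(divide="ignore", invalid="ignore"):
        disp_spec = np.where(freqs > 0, spec / (2j * np.pi * freqs), 0.0)
    disp_spec[freqs < f_hp] = 0.0   # suppress the magnified band
    return np.fft.irfft(disp_spec, n)
```

For a pure tone above the corner, the result matches the analytic integral; in practice the choice of `f_hp` trades low-frequency fidelity (and hence the potency estimate) against noise.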
The new emerging technology in seismic monitoring is distributed acoustic sensing (DAS), which uses an optoelectronic instrument connected to an optical fibre to measure strain or strain rate at many positions along the fibre, effectively working as a seismic array (Lindsey & Martin, 2021). However, applications of DAS in underground mines are limited by the relatively high cost of interrogators, a higher noise floor, the single-component nature of the recorded signals, and the need for tight, uniform coupling of the fibre with the surrounding rock mass. Loosely coupled sections of fibre produce strong site effects, in some cases larger than the actual seismic wave (duToit et al., 2022). But, if installed and grouted in longer boreholes, one can use it to constrain source location, to measure strain changes close to hydrofracturing sites (Luo et al., 2021), and for seismic rock mass characterisation and exploration. It can also be used for monitoring tailings dams.
Seismic data on ground motion are grossly under-sampled in space and to a degree also in time. As I stated in Mendecki (1997b): “… by analysing only good quality waveforms or well behaved seismic events one is utilising only a fraction of the time rock mass responds to mining seismically. In the case of 1000 seismic events recorded and analysed per day of an average duration, say 0.1 second each, one is listening to the rock mass for only 0.1% of the day. Surely there is useful information lost during the remaining 99.9% of the time, since there are numerous coherent structures associated with convolved fracturing, unprocessable tremor type events and nonstochastic noise that constitutes a legitimate seismic rock mass response to mining”.
Hence, there was a need to acquire and utilise continuous data. Over the last 10 years or so, progress in data communication facilitated the collection of continuous, as opposed to triggered, waveforms, and today an average seismic network in a mine produces and acquires 10 billion samples of data per day, while the largest systems produce 100 billion samples. Such a massive amount of data puts a great demand on our capability to process, characterise, and quantify it. The use of DAS will magnify the problem. Recent progress in machine learning and convolutional neural networks (CNN) for phase picking facilitates the development of seismic monitoring workflow platforms to seed databases with useful information derived from continuous waveforms, e.g. see Trugman et al. (2022) or Zhang (2022), and Arrowsmith et al. (2022) for a review of big data seismology.
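Daily sample counts of this order follow directly from network size and sampling rate; the site count and rate below are hypothetical, chosen only to reproduce the quoted order of magnitude.

```python
def daily_samples(sites, components=3, sample_rate_hz=2000):
    """Samples produced per day by a continuously recording network.
    Site count and sampling rate here are illustrative assumptions."""
    return sites * components * sample_rate_hz * 86_400

# A hypothetical 20-site triaxial network sampling at 2 kHz already
# produces about 1e10 samples per day.
```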
Quantitative Seismology
Digital systems provide quality data that facilitate the quantification of ground motion parameters, seismic sources, and seismicity. Apart from the timing and location, a seismic source is quantified by seismic potency, or seismic moment, radiated seismic energy, and predominant frequency and is characterised by its source mechanism that decomposes the inelastic strain tensor into isotropic and deviatoric components.
The importance of the quality of seismological processing and the integrity of data can hardly be overestimated. The process starts with the location of seismic events, which is important for the following reasons. (1) It indicates the location of potential rockbursts. (2) All subsequent seismological processing, e.g. seismic source parameter determination and attenuation and velocity inversion, depends on location. (3) All subsequent interpretation of individual events depends on location, e.g. events far from active mining, close to excavations or, in general, in places not predicted by numerical modelling, may raise concern. (4) All subsequent interpretation of seismicity, e.g. spatial interaction, clustering—specifically localisation around planes—migration, spatial and temporal gradients of seismic parameters, and other patterns, is judged by location and timing. But accurate location is difficult because it relies on reconstructing the complex wave path from the source to the receiver and on picking the arrivals of the appropriate phases. Also, there are thousands of these events each day to be located and seismologically processed, and the response time matters—information delayed may be information denied—with consequences that may exceed the cost of monitoring.
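As a toy illustration of the location step, a grid search over candidate hypocentres with a homogeneous P-wave velocity can be sketched; the homogeneous velocity is a strong simplification of the complex wave path discussed above, and all numbers are hypothetical.

```python
import numpy as np

def locate_event(stations, t_arr, v_p, grid_pts):
    """Grid-search hypocentre from P arrivals in a homogeneous medium.
    For each candidate point the best-fitting origin time is the mean
    residual; the point with the smallest squared misfit wins.
    stations: (n, 3) coordinates [m], t_arr: arrival times [s]."""
    stations = np.asarray(stations, float)
    t_arr = np.asarray(t_arr, float)
    best = None
    for p in grid_pts:
        tt = np.linalg.norm(stations - p, axis=1) / v_p  # travel times
        t0 = np.mean(t_arr - tt)                         # origin time
        misfit = np.sum((t_arr - (t0 + tt)) ** 2)
        if best is None or misfit < best[0]:
            best = (misfit, np.asarray(p, float), t0)
    return best[1], best[2]
```

Real location codes replace the grid search with iterative inversion and the constant velocity with a calibrated model, but the residual-minimisation principle is the same.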
Seismicity is defined by a number of seismic events occurring within a given volume of rock over a certain period of time and is quantified mainly by four, largely independent, quantities. (1) Average time between events. (2) Average distance between consecutive events. (3) Cumulative seismic potency or seismic moment. (4) Cumulative radiated energy. A quantitative description of seismic events and of seismicity allows one to derive information about their size and time distributions, spatial and temporal pattern formation, migration or diffusion, and about changes in the strain and stress regime and the rheological properties of the rock mass associated with seismic radiation. Although seismic waveforms do not provide direct information about absolute stresses, they do provide useful information about stress orientation and about the co-seismic spatial and temporal strain and stress changes.
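The four quantities can be computed directly from a catalogue; the function below is a minimal sketch with assumed units and a made-up three-event catalogue as usage.

```python
import numpy as np

def quantify_seismicity(times, xyz, potency, energy):
    """Four largely independent quantities describing a catalogue:
    mean inter-event time, mean distance between consecutive events,
    cumulative potency, cumulative radiated energy.
    times [s], xyz [m] (n x 3), potency [m^3], energy [J] -- assumed units."""
    times = np.asarray(times, float)
    xyz = np.asarray(xyz, float)
    dt = np.diff(np.sort(times))                       # inter-event times
    dx = np.linalg.norm(np.diff(xyz, axis=0), axis=1)  # consecutive distances
    return {
        "mean_dt_s": float(dt.mean()),
        "mean_dx_m": float(dx.mean()),
        "sum_potency_m3": float(np.sum(potency)),
        "sum_energy_J": float(np.sum(energy)),
    }

# Hypothetical three-event catalogue:
q = quantify_seismicity([0, 10, 30],
                        [[0, 0, 0], [3, 4, 0], [3, 4, 12]],
                        [1, 2, 3], [10, 20, 30])
```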
In today’s practice, we quantify seismic sources by fitting spectra of the observed waveforms to a source model which predicts a far-field displacement spectrum that is constant at low frequencies and proportional to the seismic potency or seismic moment (Keilis-Borok et al., 1960) and decays as the inverse squared power at higher frequencies, the so-called \(\omega ^{2}\) model (Aki, 1967; Brune, 1970). In mines, seismic events close to excavations and associated fracture zones have larger isotropic components, therefore are slower, radiate less energy at higher frequencies, and deliver displacement spectra that decay according to an \(\omega ^{2.5}\) or, in some caving mines, even an \(\omega ^{3}\) model.
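The spectral fitting step can be sketched with a generalised flat-plus-falloff spectrum and a crude grid search for the flat level and corner frequency. This is a minimal sketch, not a production fitter: real processing works on noisy, attenuation-corrected spectra and fits in a more robust way.

```python
import numpy as np

def source_spectrum(f, omega0, f0, n=2.0):
    """Far-field displacement spectrum: flat level omega0 (proportional
    to potency/moment) below the corner f0, decaying as f**-n above it
    (n = 2 for the omega-squared model; 2.5-3 for some mine sources)."""
    return omega0 / (1.0 + (f / f0) ** n)

def fit_corner(f, spec, n=2.0, f0_grid=None):
    """Grid-search omega0 and f0 by least squares in log amplitude."""
    if f0_grid is None:
        f0_grid = np.geomspace(f.min(), f.max(), 200)
    best = None
    for f0 in f0_grid:
        shape = 1.0 / (1.0 + (f / f0) ** n)
        # closed-form flat level for this corner (log-domain mean)
        omega0 = np.exp(np.mean(np.log(spec) - np.log(shape)))
        err = np.sum((np.log(spec) - np.log(omega0 * shape)) ** 2)
        if best is None or err < best[0]:
            best = (err, omega0, f0)
    return best[1], best[2]
```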
In general, a simple point source representation of seismic sources, whether crack-like or pulse-like, does not include the co-seismic generation of rock damage in the source volume, which alters the seismic radiation (Johnson & Sammis, 2001; Sammis et al., 2009). There is ongoing work on more realistic sources with a volume component (Lyakhovsky et al., 2016; Kurzon et al., 2019; Johnson et al., 2021).
Production Data
The seismic rock mass response to mining is mainly the result of stress changes due to rock extraction. The catalogue of seismic data is used to gain insight into people’s exposure to seismicity and to quantify the evolving hazard as mining progresses. If seismic data are used in conjunction with data on rock extraction, i.e. the timing, location, and volume of the in situ rock extracted, then we may gain insight into possible causes of, and guide the effort into the prevention and control of, potential rock mass instabilities that could result in rockbursts.
Complexity of the Problem
On a macroscopic scale, all processes in nature are dissipative. Natural systems consist of a large number of elements which, at any given time, are not in the same state. Therefore, in order to accommodate the differences, a macroscopic system spontaneously generates local flow of energy and momentum in addition to those imposed by the external conditions. Close to equilibrium, the distribution of fluctuations is more or less random, the correlation time and the correlation length are short, and non-linearities are mostly hidden. Away from equilibrium, the system is more susceptible to the action of intermittent intrinsic instabilities that, due to their non-linear nature, are agents of spatial and temporal correlations. The finite values of spatial and temporal correlations measure the distance from equilibrium, and as this distance grows, the influence of non-linearities increases. When the range of spatial fluctuations increases, elements of the different parts of the system interact and the system can generate and maintain a reproducible relationship among its distant parts. In this process of self-organisation, the system creates spatial and temporal patterns that are not directly imposed by external forces. When the range of correlations becomes comparable with the system size, the resulting coherence, or order, may influence its behaviour qualitatively, and the system may become critical and undergo bifurcation, i.e. transition to another state (Nicolis & Prigogine, 1977). The approach to the critical point and the nature of the instability depends on the degree of disorder or heterogeneity in the system. Increase in disorder leads to a slower transition and diffused instability, while highly homogeneous systems may crack in one go with little precursory behaviour.
An important agent in the development of spatial and temporal correlations in overstressed rock is seismic activity itself. By breaking numerous asperities, seismic events smooth the system, allowing transfer of stresses over larger distances and thus paving the way for even larger events. Disorder, on the other hand, plays a stabilising role and, to a degree, can be engineered into the system by a scattered layout and/or by a scattered sequence of mining or blasting. Disordered directions of local stresses and a slower loading rate may also play a role, since they promote healing and thus stress roughening.
In mines, the dynamics is excited mainly by the transient deformation associated with sudden rock extraction. Each such excitation causes a certain spatial and temporal pattern of events to develop that prevails only over a limited period of time, after which the next excitation causes a new, and in many cases compounded, pattern of events. These responses change not only the rock mass properties but also the energy balance, redistributing that energy across a wide range of scales. As a consequence, power laws emerge in the size distribution of seismic events.
Modelling with Seismic Data
Science does not deliver a universal truth underlying a given natural phenomenon, but it does provide models of reality with various degrees of approximation. When building a model, we are trying to understand the mechanism that generates the observed data and to express it mathematically in the simplest possible way to capture the essence of the problem. But one can always find more than one model that can reproduce the observed data, the problem known as non-uniqueness. Therefore, models cannot be validated since we do not know if we use the correct equations to get a solution. What we can do is to verify the model to make sure that the equations used are solved correctly. To add credibility to the numerical modelling, we should try to adjust the model parameters to match the available relevant observations. This process is known as calibration. However, we can never claim that the model is calibrated because of the sparsity of observations and because dynamic systems evolve, so the same action may produce different outcomes in future. Frequently, after calibration, a model still delivers a wrong prediction, so there is a need to re-calibrate again and again. This is also the case when modelling the seismic rock mass response to mining: the more data we incorporate in trying to reconcile the model with observations the better; however, we can never claim that the model is calibrated. See an excellent paper on the subject of verification, validation, and calibration of numerical models by Oreskes et al. (1994).
Models are useful to analyse the influence of various aspects of the system on its behaviour, which helps to understand the model and, indirectly, the problem at hand. Here, the quality of a model may be quantified by the following ratio: the number of features the model can explain divided by the number of parameters (Ben-Zion, 2017). Models are also used to gain insight into the future behaviour of a system. While the predictive power of models is limited, their utility can be enhanced by scenario-based modelling, when we compare various plausible future scenarios with the same model parameters in order to formulate a better outcome.
Past seismic data offers a reasonable forecast of the intermediate future of the size distribution of seismicity, but it is less successful in forecasting the time distribution and even less regarding the space distribution. Numerical modelling, however, is better in forecasting the potential locations of larger events but not so their timing.
Training
Traditionally, the training and practice of mining and geotechnical engineers was based on classical mechanics of conservative, and therefore reversible, systems at equilibrium, with focus on static linear elastic concepts. Statics is concerned with the equilibrium state of bodies under the action of forces. In mechanics, equilibrium is a particular state in which both the velocities and the accelerations of all the material points of a system are equal to zero, that means no “fluctuations”, i.e. no movements relative to the rest frame of the centre of mass. Such mechanics prohibits the interaction of system components, and it excludes the possibility of emergent behaviour.
The assumption of linearity implies superposition, i.e. the end effect of the combined action of two different causes is merely the superposition of the effects of each cause taken individually. But in a non-linear system adding a small cause to one that is already present can induce effects surpassing by far the amplitude of the individual effects. Elasticity is well suited to explain the propagation of seismic waves in the far field, but it fails to explain the near and intermediate field deformation and the transfer of stresses and strains over larger distances.
A seismic source is a volume of rock where linear elasticity breaks down, and the processes leading to and resulting from such instability are driven by non-linear dynamics embedded in dissipative, and therefore irreversible, inhomogeneous, or disordered, systems which are far from equilibrium. While the static elastic approach offered, and will continue to offer, useful solutions, its validity is limited to systems at, or close to, equilibrium. The real practical benefit of the applied static elastic solutions is underpinned by the ability and the experience of geotechnical engineers to interpret results taking into account the limitations of the method in relation to the problem at hand.
Fat Tails, Power Laws, and Seismic Hazard
Extreme events can be considered as large deviations from the average behaviour in an evolving system (Frisch & Sornette, 1997). Their recurrences are characterised by the thickness of the tail that defines the probabilities of having large events. The thicker the tail, the higher the probability, so we should expect the unexpected. A thicker tail implies not only more large events, i.e. higher hazard, but also a less predictable size of the next largest event. Unfortunately, it also means a higher probability of being surprised by the Black Swan—a big event with major effect, unexpected at first but rationalised by hindsight. Such made-up explanations after the fact create the misconception that the causes of these events are understood and, hence, that they can be avoided or predicted.
It is therefore of great interest to find out how the smaller events interact in creating the conditions for larger events to occur. Failure can then be thought of as a scaling-up process in which failure at one scale is part of the damage accumulation of the larger instability.
Understanding the mechanisms generating power laws and their implications is one of the subjects of statistical seismology, which is a field of research at the interface of probability theory, stochastic modelling, and statistical physics. While physical models try to understand the process completely, the stochastic models estimate probability distributions of potential outcomes by allowing for random variation in one or more parameters. The main objective of statistical seismology though is to derive the large-scale laws of a physical system from the specification of the relevant microscopic elements and their interactions. One of the important practical applications of statistical seismology is the assessment of seismic hazard.
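A standard statistical-seismology estimate of the frequency-size power law is Aki's maximum-likelihood b-value; the sketch below demonstrates it on synthetic, exponentially distributed magnitudes, not on real mine data.

```python
import numpy as np

def b_value_mle(mags, m_min):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value for magnitudes at or above the completeness level m_min:
    b = log10(e) / (mean(m) - m_min).  A smaller b means a thicker tail,
    i.e. relatively more large events."""
    m = np.asarray(mags, float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

# Synthetic catalogue: Gutenberg-Richter with b = 1 corresponds to
# exponentially distributed magnitudes with scale 1/ln(10).
rng = np.random.default_rng(0)
mags = rng.exponential(scale=1.0 / np.log(10.0), size=20000)
```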
Prediction Versus Forecasting
A prediction can be understood as a deterministic binary statement, true or false, about a future event that can be validated or falsified with a single observation. A forecast can be defined as a statement of probability about a future event. In his excellent book, Prigogine (1980) writes “Even in physics, as in sociology, only various possible scenarios can be predicted”, which should be a guiding principle for practitioners involved in numerical modelling. One should, however, remember that the calculated probability reflects what we know about the system, and not what the system is actually doing. An individual forecast can never be validated by a single observation and requires multiple observations to establish a degree of confidence.
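One common way to score probabilistic forecasts against many observed outcomes is the Brier score; it is offered here as a generic illustration of why multiple observations are needed, not as a method prescribed by this chapter.

```python
import numpy as np

def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities and binary
    outcomes (0 = perfect; lower is better).  A single (forecast, outcome)
    pair says almost nothing; the score becomes meaningful only when
    averaged over many forecasts."""
    p = np.asarray(forecast_probs, float)
    o = np.asarray(outcomes, float)
    return float(np.mean((p - o) ** 2))
```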
The United Nations Disaster Relief Organization (UNDRO, 1979) defined hazard in general as the probability of occurrence, within a specific period of time in a given area, of a potentially damaging phenomenon.
Uncertainty
Uncertainty is the existence of more than one possibility, and it is measured by a set of probabilities assigned to a set of possibilities. Risk is a state of uncertainty where some of the possibilities involve a loss, and is measured by assigning losses to some possible outcomes. Therefore, one may have uncertainty without risk but not risk without uncertainty. The notion that events are uncertain is both complicated and uncomfortable, and therefore we tend to underestimate uncertainty and consequently underestimate risk.
In general, uncertainties can be divided into two categories: (1) Aleatory uncertainty, (from the Latin word alea for dice), or purely stochastic variability, where the ambiguity in outcome is inherent in the nature of the process, and no amount of additional measurements can reduce the inherent randomness. (2) Epistemic uncertainty, or ignorance or a lack of complete knowledge of the process, which results in certain influential variables not being considered, and affects our ability to model it. It also includes insufficient and inaccurate measurement. The epistemic uncertainty can be diminished by taking more data, by using more accurate instrumentation, by better experimental design and acquiring better insight into specific behaviour with which to develop more accurate models. The guiding concept in dealing with epistemic uncertainty should be that less data means larger uncertainty. While the aleatory variability reflects the natural randomness of the monitored process, the epistemic variability is the result of the uncertainty in the expected outcome due to a lack of knowledge or inaccurate measurements.
One way to deal with epistemic uncertainty is to use logic trees, first introduced by Kulkarni et al. (1984). A logic tree consists of a series of branches that describe the alternative models and/or parameter values. At each branch, there is a set of branch tips that represent the alternative credible models or parameter values. The weights on the branch tips represent the judgment about the credibility of the alternative models. The branch tip weights must sum to unity at each branch point. Only epistemic uncertainty should be on the logic tree. A common error is to put aleatory variability on some of the branches. Logic trees reflect the degree of belief of the group of experts in the alternative models. However, evaluating the alternative models involves considering alternative representations, new observations, and, in some cases, data from analogous regions. This process is also subjective. An alternative approach is to develop the single best model, but this requires a consensus by experts, which in some cases may be more difficult than constructing a logic tree.
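The logic-tree mechanics described above reduce to a weighted mean over branch tips, with the constraint that weights sum to unity; the branch values and weights in the usage example are purely illustrative.

```python
def logic_tree_mean(branches):
    """Weighted mean of a hazard estimate over logic-tree branch tips.
    branches: list of (weight, value) pairs for the alternative credible
    models; the branch-tip weights must sum to unity."""
    total_w = sum(w for w, _ in branches)
    if abs(total_w - 1.0) > 1e-9:
        raise ValueError("branch-tip weights must sum to unity")
    return sum(w * v for w, v in branches)

# Hypothetical alternative models for an annual exceedance probability:
estimate = logic_tree_mean([(0.5, 0.01), (0.3, 0.02), (0.2, 0.05)])
```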
Interpretation
Managing seismic hazard involves judgment under uncertainty. Judgment, frequently defined as “an intelligent use of experience” or “a sense of what is important”, is a vague attribute. In making predictions and judgments under uncertainty, people do not always follow probability theory but frequently rely on a limited number of heuristic principles: rules of thumb, educated guesses, intuitive judgments, experience-based reasoning, or simply common sense. These rules reduce the complex tasks of assessing probabilities to simpler judgmental operations that may be useful but may also lead to severe and systematic errors (Tversky & Kahneman, 1974; Kahneman, 2003). Our understanding of how judgment works is far from satisfactory, and the definition of common sense differs from person to person. Seismic hazard assessment delivers probabilities assuming a given probability distribution. However, we do not observe probability distributions, only a series of events from the system that generates data. So we assume a given probability distribution, say Poissonian, on the basis of limited data.
Motivational Bias
Human judgment under uncertainty is also affected by motivational bias, which may be even more critical than statistical misconceptions or errors. In some cases, an expert may be motivated by incentives to see things in a certain way. But most frequently an expert is defensively conservative or may want to suppress uncertainty in order to appear authoritative or knowledgeable, or has taken a strong stand in the past and wants to be consistent. In addition, there may be a conflict of interest between the loyalty demanded by the organisation the expert represents or even by the client he consults for, and the expert’s objectivity (Hogarth, 1975; Vick, 2002). To alleviate some of these problems, it is advisable to subject the applied methodologies, procedures, and results of analysis to peer review.
Communication
To be effective, mine seismologists and geotechnical engineers need to communicate their observations to mine management and convince them that their advice is credible. There is a difference between being vigilant, which helps to establish credibility, and being alarmist. The best advice here is: state the expected, monitor, and notify if the observed exceeds the expected. And when planning ahead, it can help to expect the unexpected.
1.2 Earthquakes and Mine Induced Seismicity
Use of Magnitude
Different mines use different magnitude scales that, in many cases, differ significantly and, in some cases, are not consistent over time. Therefore, in this book, the common logarithm of seismic potency, \(\log P\), is used as a measure of magnitude. Scalar seismic potency is a product of two parameters, \(P=\varDelta \epsilon \cdot V\), where \(\varDelta \epsilon \) is the strain change at the source during the seismic event and V is the source volume. In hard rock, strains \(\varDelta \epsilon \leq 10^{-6}\) are considered elastic and strains \(\varDelta \epsilon \geq 10^{-3}\) crack intact rock. Therefore, a seismic event with \(\log P=1.0\), i.e. Hanks-Kanamori \(m=1.59\), is not just 10 m\(^{3}\) of something: such an event creates approximately \(V = 10^{1}/10^{-3} = 10^{4}\) m\(^{3}\) of rock subjected to cracks, which is substantial and may, and frequently does, lead to damage if its source is close to an excavation.
\(\log P\) is simple, appropriate for the range of sizes of seismic events recorded in mines and independent of rigidity, and thus seismic hazard may be objectively compared between different mines and between different periods of time for the same mine. Table 1.1 translates selected values of \(\log P\) into Hanks and Kanamori magnitude, expressed here in terms of potency, \(m = \left ({2}/{3}\right )\log P + 0.92\) or \(\log P = \left ({3}/{2}\right )m - 1.38\) and to Nuttli magnitude used in some Canadian mines, \(m_{N}=0.97m+0.59\), which gives \(m_{N}=0.65\log P+1.48\) (Sonley & Atkinson, 2005).
Table 1.1
\( \log P\), Hanks and Kanamori, m, and Nuttli (Sonley & Atkinson), \(m_{N}\), magnitudes
\(\log P\)   -5.0   -4.0   -3.0   -2.0   -1.0    0.0    1.0    2.0    3.0    4.0    5.0
m            -2.41  -1.75  -1.08  -0.41   0.25   0.92   1.59   2.25   2.92   3.59   4.25
\(m_N\)      -1.77  -1.12  -0.47   0.18   0.83   1.48   2.13   2.78   3.43   4.08   4.73

m            -5.0   -4.0   -3.0   -2.0   -1.0    0.0    1.0    2.0    3.0    4.0    5.0
\(\log P\)   -8.88  -7.38  -5.88  -4.38  -2.88  -1.38   0.12   1.62   3.12   4.62   6.12
\(m_N\)      -4.26  -3.29  -2.32  -1.35  -0.38   0.59   1.56   2.53   3.50   4.47   5.44
A one-parameter description is inadequate to gain insight into the stress and strain changes at seismic sources. For that, one must quantify two independent source parameters: the seismic potency, P, and the radiated energy, E. Together they give an easy estimate of the apparent stress, \(\sigma _{A} = 10^{\log E-\log P}\), in Pascals, which is a model independent measure of stress change at the source.
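The scale relations above and the apparent-stress formula translate directly into code; the functions below simply encode the conversions stated in the text and reproduced in Table 1.1.

```python
def potency_to_hk(log_p):
    """Hanks-Kanamori magnitude from log potency: m = (2/3) log P + 0.92."""
    return (2.0 / 3.0) * log_p + 0.92

def hk_to_potency(m):
    """Inverse relation: log P = (3/2) m - 1.38."""
    return 1.5 * m - 1.38

def potency_to_nuttli(log_p):
    """Nuttli magnitude via m_N = 0.97 m + 0.59 (Sonley & Atkinson, 2005),
    i.e. m_N = 0.65 log P + 1.48."""
    return 0.65 * log_p + 1.48

def apparent_stress(log_e, log_p):
    """Apparent stress in Pa: sigma_A = 10**(log E - log P) = E / P."""
    return 10.0 ** (log_e - log_p)
```

For example, an event with \(\log P = 1.0\) maps to \(m = 1.59\) and \(m_N = 2.13\), matching Table 1.1.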
Tectonic Earthquakes
Earthquake driving forces cannot be controlled. They are fairly constant over time and relatively slow compared to changes in stresses induced by mining. For example, different segments of the San Andreas fault in California slip at rates of a few to 40 mm per year. This slow loading facilitates the process of self-organisation that leads to a state at which the system develops reproducible relationships among its distant parts. Over time the system creates well defined spatial and temporal patterns that are not directly imposed by external forces.
There is a specific model, called self-organised criticality, SOC, developed by Bak et al. (1987), that combines the concepts of self-organisation and criticality to explain complexity. It assumes that under very general conditions, dynamical systems, when slowly driven, organise themselves into a state with a structure whose statistical properties can be described by simple power laws. The system becomes critical in the sense that all parts of the system influence each other, and such systems fluctuate around a state of marginal stability. SOC is a phenomenological definition; a more constructive one is SDIDT, Slowly Driven Interaction-Dominated Threshold Systems (Jensen, 1998). The slow drive is needed to allow the system to relax from one metastable configuration to another. Interaction dominated means that the dynamics of the system is dominated by the mutual interaction between many degrees of freedom rather than by the intrinsic dynamics of the individual degrees of freedom. The effect of a threshold is to allow a large number of static metastable configurations. Threshold instability means that the incoming energy slowly builds up the profiles until the threshold is locally overcome, and the system then relaxes via fast events that dissipate the energy excess across the system.
A state of SOC would imply a slowly driven system far from equilibrium with small fluctuations about the critical state over large timescales and sensitivity to minor perturbations that could trigger large events that can span the length scale of the system. The model self-organises to produce a power law in the frequency size distribution, despite having very simple rules governing interactions and no tuning parameters. In the case of tectonics, the SOC would therefore require the crust to be perpetually near a state of global failure, rendering individual events unpredictable both temporally and spatially.
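The SDIDT picture above is commonly illustrated with the Bak–Tang–Wiesenfeld sandpile model: grains are added slowly at random sites, a site topples when it reaches a threshold of four grains, and relaxation proceeds through fast avalanches whose size distribution becomes power-law-like with no tuning. A minimal sketch (grid size, grain count, and seed are arbitrary choices here, not from the text):

```python
import random

def sandpile(n=30, grains=20000, seed=1):
    """Bak-Tang-Wiesenfeld sandpile: a site with 4 or more grains topples,
    sending one grain to each neighbour; grains falling off the edge
    dissipate. Returns the avalanche size triggered by each added grain."""
    random.seed(seed)
    z = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = random.randrange(n), random.randrange(n)
        z[i][j] += 1
        size = 0
        unstable = [(i, j)]
        while unstable:
            a, b = unstable.pop()
            if z[a][b] < 4:
                continue
            z[a][b] -= 4
            size += 1
            if z[a][b] >= 4:          # still unstable after one topple
                unstable.append((a, b))
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                x, y = a + da, b + db
                if 0 <= x < n and 0 <= y < n:
                    z[x][y] += 1
                    if z[x][y] >= 4:
                        unstable.append((x, y))
        sizes.append(size)
    return sizes

sizes = sandpile()
# once the pile has self-organised, avalanche sizes span many scales
```

The point of the sketch is the absence of tuning parameters: the same local threshold rule produces both the many small relaxations and the rare system-spanning avalanches.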
Alternatively, seismicity may be described as a process undergoing self-organised sub-criticality, SOSC, as suggested by Al-Kindy and Main (2003). SOSC is characterised by the system being below the critical point but still maintaining power law statistics on a local scale. An SOSC state would also suggest a finite degree of statistical predictability in the dynamics of the system, contrary to a pure SOC.
While the long term seismic and aseismic loading are the main preparatory processes for larger earthquakes, their exact timing can be influenced by transient loading caused by surface waves from other earthquakes (dynamic triggering), and/or earth tides, atmospheric pressure or heavy precipitation amongst others. These loading sources all produce a similar amplitude stress change on a fault that is three to four orders of magnitude less than the ambient stress on the fault, and they occur on timescales from short periods on the order of 10 to 100 seconds to long periods, of the order of a year (Johnston, 2017).
Mining Induced Seismicity
Induced seismicity results from stress changes that are at least on the same order as the ambient pre-mining stresses. Triggered seismicity results from stress changes that are considerably smaller than the ambient pre-mining stress, i.e. when the system is close to critical. Mining cannot induce large earthquakes but could potentially trigger such events (McGarr et al., 2002).
The seismic rock mass response to mining is a result of stress changes due to rock extraction, blasting, hydrofracturing, and, to a lesser extent, due to the relocation of the extracted rock to rock waste sites, and therefore it is not a spontaneous process. Mining induces stresses at a particular place, at a particular time, and at a particular rate which are all highly variable compared to the tectonic regime. The average rate of deformation induced by mining is at least two orders of magnitude greater than the average deformation rate of tectonic plates. The bulk of seismic activity in mines starts with rock extraction, increases with the extraction ratio of the ore body, and, with the exception of large-scale mining taking place over a number of years, stops rather quickly with the cessation of mining.
Figure 1.1 left shows the cumulative number of events with \(\log P\geq -2.0\) during 1435 hours before and 2200 hours after the main shock (MS) of \(\log P = 2.61 \left (m = 2.66\right )\), where the size of each event scales with the radius of its source volume and the colour indicates the distance of the event from the main shock.
Fig. 1.1
Cumulative number (left) and distances (right) of seismic events to the MS with \( \log P=2.61\)
The first aftershock with \(\log P\geq -2.0\) was recorded 4 seconds, and the second 6.4 seconds, after the MS. Within the first 16 hours after the main shock, there were 165 events, giving an activity rate of just over 10 events/hour; within the first 171 hours, the rate of activity dropped to 1.4 events/hour. Figure 1.1 right shows the distances from the MS during the same time, where colour scales with the logarithm of apparent stress, \(\log \sigma _{A} = \log \left (E/P\right )\). The bulk of the immediate aftershocks spread almost 500 m from the MS.
With a very few exceptions, aftershock sequences after larger seismic events in mines or after major production blasts are not as well developed as they are after tectonic earthquakes, mainly due to faster relaxation facilitated by the presence of openings and extensive fracturing. However, there are a few exceptions where large events which are triggered by mining are actually driven by local tectonic stresses. In such cases, there are more aftershocks, and the aftershock sequences last longer and extend beyond the mining operations.
One such example is a \(\log P=5.47 \left (m=4.57\right )\) reverse slip event triggered by mining and driven mainly by tectonic sub-horizontal stress. Figure 1.2 shows the top and section views of the mine, including the three nearest mining operations. The main shock is marked by the large light grey sphere centred at the very bottom of the mine, and its aftershocks are coloured according to time. The size of each event represents the radius of the source volume taken as a sphere, \(V=P/\varDelta \epsilon \), where \(\varDelta \epsilon \) is the assumed strain change at the source, in this case \(\varDelta \epsilon =1\cdot 10^{-4}\) to make small aftershocks visible. The main shock initiated off the bottom edge of the mine and propagated over one kilometre away from the mine along a sub-horizontal geological structure. The immediate aftershocks were at the bottom of the mine and away from the mine on the structure. The first aftershock, with \(\log P=2.66\), occurred one minute after the main shock and located within the mine; the second largest aftershock, marked in dark yellow, with \(\log P=2.54\), occurred 37 days later.
Fig. 1.2
Top (left) and section view (right) of the mine, including the nearest three mining operations. The main shock is shown in light grey and aftershocks coloured by the time of their occurrence
Figure 1.3 left shows the cumulative number of events with \(\log P\geq -3.0\) during 1338.7 hours before and 3730.5 hours after the MS shown here as the large red circle.
Fig. 1.3
Cumulative number (left) and distances to the main shock of seismic events with \( \log P \geq -3.0\) before and after the main shock
Colour here scales with distance to the MS, and the size of each event represents the radius of the source volume taken as a sphere, \(V=P/\varDelta \epsilon \), where \(\varDelta \epsilon \) is the assumed strain change at the source, in this case \(10^{-2}\). There was a steady rate of seismic activity before the main shock, \(\lambda _{B}=N\left (\log P\geq -3.0\right )/\Delta t_{B}=0.484\) events per hour.
Figure 1.3 right shows distances of seismic events to the main shock over the same period of time, where colour scales with the logarithm of apparent stress, \(\log \sigma _{A} = \log \left (E/P\right )\). The spatial distribution of events before the MS is mostly concentrated at the mine, at distances of 300 to 500 m from the future MS, with only a few events at distances over 500 m. From the very beginning of the aftershock sequence, the spatial distribution of aftershocks extended to over 2000 m from the MS. Approximately 2900 hours after the MS, one can see seismic activity at 200 m from the MS associated with the rehabilitation of mining operations.
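The source-volume radius used to scale event symbols in Figs. 1.1, 1.2, and 1.3 follows directly from \(V=P/\varDelta \epsilon \) with the source taken as a sphere. A short sketch of that calculation (the example values echo the figures above):

```python
import math

def source_radius(logP, d_eps):
    """Radius (m) of a spherical source volume V = P / d_eps,
    with potency P = 10**logP in m^3 and assumed strain change d_eps."""
    V = 10.0 ** logP / d_eps
    return (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)

# the logP = 2.61 main shock of Fig. 1.1 with assumed strain change 1e-4
r = source_radius(2.61, 1e-4)   # roughly 100 m
```

Note that the radius depends on the assumed strain change: choosing a smaller \(\varDelta \epsilon \) inflates all source volumes, which is exactly how small aftershocks were made visible in Fig. 1.2.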
Smaller events in mines are highly correlated in space and in time with mining, while the largest events tend to occur after the extraction ratio and/or the depth of mining reach a critical level. The spatial distribution of larger events, however, is influenced by past mining layout and by the geological structures influenced by mining, and their temporal distribution is more random than that of small events.
Less frequently, rockbursts result from a sudden shear fracture through pristine rock where no faults or discernible planar weaknesses existed before (Gay & Ortlepp, 1978). Driving a rupture through intact rock requires higher effective stress than driving the same size event along a pre-existing weakness, and it would therefore produce higher ground velocities and radiate more seismic energy.
Small events are more clustered in time than larger events, whose temporal distribution tends towards a random, or Poissonian, one.
1.3 Seismic Hazard Factors in Mines
In mines, the rock mass dynamics is driven by the transient deformation associated with rock extraction, which is at least two orders of magnitude greater than the average slip rate of tectonic plates.
Where the excavation is created by blasting, the blasts themselves have a limited or local effect. The effects of mining are exacerbated by the proximity of older mine openings, the proximity of geological structures, and local tectonic stresses.
If the balance of the influx of energy and dissipation is not maintained, the system is susceptible to discharging its surplus energy in the form of larger seismic events, after which mining operations are frequently suspended and resumed only days or even months later.
Notwithstanding the complex nature of the inducement, the seismic rock mass response to mining still develops reasonably regular size distributions and time relaxation patterns, which seismologists and geotechnical engineers use to manage seismic-related risk.
Seismic hazard in mines is positively correlated with the following natural factors:
1.
Virgin rock stress, which is a combination of depth and the local tectonic stress
2.
The bulk strength of the rock mass
3.
The degree of homogeneity, or smoothness, of the rock mass including geological features, specifically those with shear strength comparable to the shear stresses induced by mining excavations
In addition, there are a number of factors related to mining that may exacerbate the intensity of the seismic rock mass response to mining:
1.
The extraction ratio
2.
The spatial extent of the mined-out area or volume
3.
The rate of mining and the spatial and temporal sequence of extraction
4.
The additional stress induced by adjacent mining
5.
The smoothness of the mine layout itself and in relation to the geological structures (Mendecki, 2012; Mendecki, 2013a)
Smoothness can be defined by the dimensionality of the object, as measured by its fractal dimension: the lower the fractal dimension, the smoother the object. Fractals appear similar at any scale of observation. In mathematical terms, fractal objects exhibit fractional dimensionality, that is, they are neither lines, surfaces, nor volumes; their dimension falls in between the classical dimensions of Euclidean geometry. An object is fractal when its length L is a function of the length \(\lambda \) of the measuring device, \(L\sim \lambda ^{1-D}\), where D is the fractal dimension (Mandelbrot, 1967, 1975). If \(N\left (\lambda \right )\) is the number of cubes of size \(\lambda \) needed to cover the object, then the box counting fractal dimension of an object can be estimated by \(D = \ln N\left (\lambda \right )/\ln \left (1/\lambda \right )\) (Barnsley, 1988). Fractal dimension increases with the degree of irregularity, or raggedness, of the object.
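The box-counting estimate \(D = \ln N\left (\lambda \right )/\ln \left (1/\lambda \right )\) can be sketched directly in a few lines; for a smooth curve such as a straight line the estimate should come out near 1. The sampling density and box sizes below are arbitrary illustration choices:

```python
import math

def box_count(points, lam):
    # number of boxes of side lam needed to cover the 2-D point set
    return len({(math.floor(x / lam), math.floor(y / lam)) for x, y in points})

def fractal_dimension(points, lams):
    # least-squares slope of log N(lam) versus log(1/lam)
    xs = [math.log(1.0 / l) for l in lams]
    ys = [math.log(box_count(points, l)) for l in lams]
    n = len(xs)
    return (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
           (n * sum(x * x for x in xs) - sum(xs) ** 2)

# a densely sampled straight line is a smooth (non-fractal) object: D near 1
line = [(t / 10000.0, t / 10000.0) for t in range(10001)]
D = fractal_dimension(line, [0.1, 0.05, 0.02, 0.01])
```

A ragged object, such as a scatter of fracture traces, would cover proportionally more boxes at small \(\lambda \) and so yield a larger slope, i.e. a higher D.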
Seismic events originate and stop at inhomogeneities in the rock mass. By breaking numerous asperities, they smooth the system, allowing transfer of stresses over larger distances and thus paving the way for even larger events. Assuming that large seismic events are those small ones that were not stopped soon enough, then one way to manage seismic hazard is to engineer a mine layout that, together with the natural structures, is “rough” enough to limit the extent of ruptures. It has been argued that mining scenarios that introduce spatial heterogeneity, or roughness, may de-correlate the system and be less likely to develop larger dynamic instabilities (e.g. van Aswegen & Mendecki, 1999; Handley et al., 2000; Vieira et al., 2001; Mendecki, 2001 Figure 8; Mendecki, 2005; Durrheim et al., 2005).
1.4 Probabilistic Hazard: Long, Intermediate, and Short Term
Probabilistic seismic hazard analysis (PSHA) is a methodology that estimates how frequently a given level of a given ground motion parameter can be reached or exceeded at a point of interest, X, in future time \(\Delta T\), \(\Pr \left [\geq GMP\left (X\right ),\Delta T\right ]\), where the GMP can be the peak ground velocity, PGV , or the cumulative absolute displacement, CAD.
The PSHA incorporates all potential sizes of seismic events, the frequency of their occurrence and source distances via the ground motion prediction equation (GMPE) to estimate the combined probability of exceedance at X for different time periods \(\varDelta T\). It also disaggregates seismic hazard into its contribution from event size, distance, and the normalised residual, expressed in terms of the number of standard deviations from the median ground motion estimated by the assumed GMPE. The main objective of disaggregation is to show which component dominates seismic hazard at a given site.
For mines, the PSHA is frequently limited to the size distribution hazard, i.e. assessment of the probability that a potentially damaging event \(\geq \log P\) will occur inside a given seismogenic volume of rock, \(\Delta V\), in future time \(\Delta T\), or while extracting a given volume of rock \(\Delta V_{m}\), i.e. \(\Pr \left [\geq \log P\left (\Delta V\right ),\Delta T\:\mbox{or}\:\Delta V_{m}\right ]\). To estimate this probability, we need to assume the size and the time distributions of seismic events for a given seismogenic volume of rock. The most frequently used size distribution is the upper truncated power law, and it is recommended that the upper limit for a given data set be estimated independently. As for time distribution it is frequently assumed that seismic activity can be described by a stationary Poisson process, whether in the time or in the volume mined domain, and this simplification is in part neutralised by making regular hazard estimates and comparing differences. In cases where there is a clear trend in seismic activity, one can try to extrapolate the parameters of the size distribution, and alternatively, one can apply the non-stationary Poisson process with a suitable intensity function.
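Under exactly these assumptions (an upper-truncated power law for the size distribution and a stationary Poisson process in time), the exceedance probability can be sketched as follows. The exponent b, the truncation points, and the catalogue numbers below are placeholder values for illustration, not figures from the text:

```python
import math

def rate_above(x, n_total, dt, b, xmin, xmax):
    """Rate of events with log P >= x under an upper-truncated power law,
    given n_total observed events with log P in [xmin, xmax] over a span dt."""
    num = 10.0 ** (-b * x) - 10.0 ** (-b * xmax)
    den = 10.0 ** (-b * xmin) - 10.0 ** (-b * xmax)
    return n_total * num / den / dt

def exceedance_prob(x, n_total, dt, big_t, b=1.0, xmin=-3.0, xmax=3.0):
    # stationary Poisson process: Pr[>= x in big_t] = 1 - exp(-rate * big_t)
    lam = rate_above(x, n_total, dt, b, xmin, xmax)
    return 1.0 - math.exp(-lam * big_t)

# e.g. 5000 events over 678 days: probability of log P >= 1.0 in the next 30 days
p = exceedance_prob(1.0, 5000, 678.0, 30.0)
```

The same function evaluated with \(\Delta V_{m}\) in place of \(\Delta t\) gives the hazard in the volume mined domain.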
Limiting PSHA in mines to \(\Pr \left [\geq \log P\left (\Delta V\right ),\Delta T\:\mbox{or}\:\Delta V_{m}\right ]\) is justified for smaller operations, where the linear size of the target event, \(\log P\), is comparable with the size of the mine. Moreover, most seismic systems for mines are designed to locate events and to estimate seismic source parameters, mainly potency P and radiated energy E, rather than to estimate the resulting ground motion at the skin of excavations. For this reason, seismic sensors are placed in boreholes, away from excavations, to avoid the very site effects that amplify ground motion at certain frequencies and contribute to damage. Ground motion prediction equations derived from such measurements will therefore underestimate the seismic load and need to be corrected.
Seismic risk is a product of the probability of occurrence and the potential liability or vulnerability, \(\mathcal {L}\left (X\right )\), at a given site, \(\mbox{Risk}\left (\geq \mbox{v},\Delta T\right )=\int _{X}\left \{ \Pr \left [\geq \mathrm {v}\left (X\right ),\Delta T\right ]\cdot \mathcal {L}\left (X\right )\right \} dX\). Since probability is dimensionless, seismic risk is expressed in the units of liability \(\mathcal {L}\). Information on liability or vulnerability is usually more difficult to quantify than the information on seismic hazard, because it involves not only the material damage, injury and in some cases loss of life, but also the cost of business disruption, which is difficult to predict.
The intermediate term hazard covers a period of up to one year, and results are presented as the expected probabilities versus different size events for selected time periods, e.g. 1, 3, 6, and 12 months. The long term hazard in a mining context should cover a period between one year and the expected life span of the mine but, because the uncertainties increase with time, 5 years may be sufficient. For the long term hazard, one can estimate the expected probabilities associated with different size events, \(\log P\), or, for specific size events associated with selected recurrence times, \(\log P\left (\bar {t}_{1\mbox{y}}\right )\), \(\log P\left (\bar {t}_{2\mbox{y}}\right )\), \(\log P\left (\bar {t}_{3\mbox{y}}\right )\), and \(\log P\left (\bar {t}_{5\mbox{y}}\right )\), and for the estimated next record breaking \(\log P_{nrb}\), versus time. Note that the data set used for the long term hazard assessment may frequently be the same as for the intermediate one. In such a case, the only difference is the presentation of results in the time domain, as opposed to the magnitude or \(\log P\) domain, and the confidence limits.
The short term hazard would cover a period of less than a month and include the estimates of interval (or re-entry) probabilities immediately after sudden loading by larger seismic events or after production blasts.
In many cases, seismic hazard can only be estimated by means of numerical modelling, since there may not be enough seismic data available to perform such analysis. However, the results of numerical studies should be confirmed regularly as soon as data becomes available.
Limitations
There is an exchange of views in the literature on the utility of PSHA. The spectrum of views is wide, from suggestions to drop it altogether (e.g. Mulargia et al., 2017) to the more pragmatic, stating that the shortcomings of the method do not invalidate the existence of the hazard curve, which comprises the basic assumption for PSHA (Anderson & Biasi, 2016).
There is a difference in the nature of the ground motion hazard to underground structures due to seismic events in mines and to surface structures due to earthquakes. The main difference is the distances involved. The near-source ground motion due to smaller and larger seismic events is similar, but the hypocentral distances to underground excavations in mines are very small, from a few metres to a few hundred metres, whereas earthquakes are usually many kilometres away. Therefore, earthquake engineers are mainly concerned with ground motions from larger earthquakes, but these earthquakes are less frequent and their activity rates are less certain. Recurrence times for large events in mines are also uncertain. One can then expect that hazard maps for mines can better estimate the potential for less severe damage caused by smaller and medium size events than for the infrequent large events.
This is why mines prone to larger events may wish to supplement probabilistic analysis with deterministic analysis, i.e. ground motion simulation. Such a simulation involves kinematic modelling of ground motion produced by sources defined by their expected maximum potency, or magnitude, and placed at the most likely locations that can produce the strongest level of peak ground velocity at a given site or sites. The likely locations of these sources may be determined by numerical modelling of the induced shear stresses on geological structures. In a kinematic model, the source process is defined by the spatial and temporal distribution of the slip vector, the local slip velocity function, and the rupture velocity, without taking into consideration the forces and stresses acting at the source, see Mendecki and Lötter (2011) and Mendecki (2016).
Probabilistic and deterministic methods for hazard assessment have advantages and disadvantages. Probabilistic methods can be viewed as inclusive of all deterministic events with a non-zero probability of occurrence. In this context, a proper deterministic method that models a particular larger event should ensure that that event is realistic, i.e. with a finite probability of occurrence. This points to the complementary nature of deterministic and probabilistic analyses: Deterministic events can be checked with a probabilistic analysis to ensure that the event is probable, and probabilistic analyses can be checked with deterministic events to see that rational, realistic hypotheses of concern have been included in the analyses (McGuire, 2001). However, the deterministic analysis can better account for specifics, i.e. the path and the site effects associated with a given strategic infrastructure.
1.5 Seismic Hazard in the Time and Volume Mined Domain
The mean recurrence interval between events above a certain potency estimated over the period of time \(\Delta t\) is \(\bar {t}\left (\geq P\right ) = \Delta t/N\left (\geq P\right )\), where \(\varDelta t\) is the time span of the data used and \(N\left (\geq P\right )\) is the number of events not smaller than P over that time. The seismic activity rate for these events is then \(1/\bar {t}\left (\geq P\right )\). The mean volume mined between events above a certain size during extraction of a volume of rock \(V_{m}\) is \(\bar {V}\left (\geq P\right ) = V_{m}/N\left (\geq P\right )\). The number of events, \(N\left (\geq P\right )\), per unit of volume of rock extracted is \(1/\bar {V}\left (\geq P\right )\). Note that the terms “mean inter-event time” and “mean recurrence interval” may be deceptive, since in many cases the dispersion from the mean, as measured by the standard deviation, is comparable with the mean value. If the standard deviation of the observed recurrence intervals is less than 50% of the mean recurrence interval, then the seismic behaviour may be assumed to be periodic rather than episodic, and calculated probability estimates can be considered reasonable. For a very short time interval into the future, \(\varDelta T < \bar {t}\left (\geq P\right )\), the probability of having an event with potency not smaller than P can be estimated as \(\Pr (\geq P) \simeq \varDelta T/\bar {t}\left (\geq P\right )\). Calculated recurrence intervals or mean volumes mined that stretch well beyond the span of the data set, \(\Delta t\) or \(V_{m}\), should be treated with caution.
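The estimates in this paragraph reduce to simple arithmetic on the catalogue; a short sketch (the event counts and spans below are synthetic illustration values):

```python
import statistics

def hazard_stats(n_events, dt, big_t):
    """Mean recurrence interval t_bar = dt / N, activity rate 1 / t_bar, and
    the short-interval probability Pr ~ big_t / t_bar, valid for big_t < t_bar."""
    t_bar = dt / n_events
    prob = big_t / t_bar if big_t < t_bar else None
    return t_bar, 1.0 / t_bar, prob

def looks_periodic(intervals):
    # periodic rather than episodic if the standard deviation of the observed
    # recurrence intervals is below 50% of their mean
    return statistics.pstdev(intervals) < 0.5 * statistics.mean(intervals)

# e.g. 12 events >= P over a 678 day data set, probability over the next 10 days
t_bar, rate, prob = hazard_stats(12, 678.0, 10.0)
```

Replacing `dt` with the volume mined \(V_{m}\) and `big_t` with a planned extraction \(\Delta V_{m}\) gives the same estimate in the volume mined domain.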
Example
Data sets from three mines, A, B, and C, were collected over the same two year period, \(\Delta t = 678\) days, all related to tabular mining with vertical principal stresses but with different geological structures, mining layouts, extraction ratios, depths, and rates of mining. Mine A practiced long-wall mining of a highly extracted tabular reef and mine B the sequential grid method. In both mines the rock extraction took place at practically the same depth of 3300 m. The extraction ratio of the tabular reef in mine A is over 80%, in mine B is 70%, and in mine C 60%. A simple numerical elastic model shows that due to the higher extraction ratio the mean vertical stress calculated over the un-mined areas in mine A is 1.7 times higher than in mine B. Mine C practiced scattered mining, imposed by the presence of larger faults, at an average depth of 1755 m.
Figure 1.4 compares cumulative volume mined and cumulative seismic potency in the three mines. Mine B delivered the highest production, production rate, and seismic potency release. However, mine A delivered the highest observed potency release per volume mined, \(\sum P/V_{m}\). Compared to A and B, the seismic potency release in mine C is low. Note that the production in B is 4.3 times higher than in A; however, the potency release per day is only 1.4 times higher, and the potency release per unit of volume mined is 3.1 times lower. Clearly, the most informative parameter of the inherent hazard is \(\sum P/V_{m}\), and if it is unacceptably high, then a slower pace of mining only delays the inevitable.
Fig. 1.4
All cumulative: volume mined versus time (left), potency release versus time (centre), and potency release versus volume mined (right) for mines A, B, and C over 678 days
Seismic hazard induced by mining should therefore be expressed as a function of time, e.g. by the mean inter-event times, and as a function of rock extraction, e.g. by the mean inter-event volumes mined. During times when mining is suspended, seismic activity decays very quickly and the mean inter-event times increase, indicating lower hazard. The mean inter-event volumes mined, however, stay the same since the hazard potential did not change, and when mining is resumed, it will be very much the same as it was before.
Figure 1.5 left shows seismic hazard estimated for the three mines A, B, and C in the time domain for \(\Delta T = 10\) days, assuming their current rates of mining. Seismic hazard at mine C is clearly the lowest, although converging with mine B for large potency events. Note the crossover point between mines A and B, below which seismic hazard for data set B is higher. Figure 1.5 right shows seismic hazard potential, estimated in the volume mined domain, assuming all three mines extract the same volume of rock, \(\Delta V_{m} = 1000\) m\(^{3}\), otherwise for the same set of parameters. Clearly, the hazard potential at mine A is the highest of all three mines, then at mine B, but again slowly converging with mine C for large potency events.
Fig. 1.5
Seismic hazard for the three mines in the time domain (left), and in the volume mined domain (right)
1.5.1 Seismogenic Volume, Time Span of Data, and De-clustering
Probabilistic hazard analysis is based on data selected from a given volume of rock, \(\varDelta V\), over a certain period of time, \(\varDelta t\). The volume \(\varDelta V\) should include all seismic events that are interdependent. Many properties of induced seismic activity indicate a hierarchical organisation, suggesting a connection among events in space and in time. The range of spatial interaction is at least equal to the size of the largest event. Kijko et al. (1993) detected similarities in probability patterns between clusters of seismic activity separated by more than 1000 m, although they did not consider production. If the linear size of a stand-alone mine is comparable with the size of the largest expected event, then \(\varDelta V\) should be defined by the volume of seismic activity within and around the mine. For the same reason, two or more adjacent mines may be treated as one seismogenic volume. It is useful to test the spatial extent of the seismic response to larger production blasts or to larger seismic events which, in some cases, may cover the entire mine.
Similar considerations need to be applied to \(\varDelta t\), which should span as far back as possible. Since mining scenarios may change, it is advisable to select the past data that is most relevant to future mining. The calculated probabilities over future time \(\varDelta T\) larger than \(\varDelta t\) should be treated with caution. One should avoid sliding time windows and rather resort to cumulative windows and then normalise to compare. The future seismic rock mass response to mining is reasonably informed by the data on the size distribution, but it is less informed by the data on the time distribution, and even less by its space distribution. There is very little information about the possible location of future larger events in the past data. To gain insight into the spatial distribution of seismic hazard, it is better to resort to numerical stress modelling.
De-clustering
Conceptually, one can try to separate seismic activity into two components: events that are independent (background events or main shocks) and events that are triggered either by static or dynamic stress changes. This process of discrimination is called de-clustering, e.g. Gardner and Knopoff (1974), Reasenberg (1985), Bottiglieri et al. (2009). In mines, the problem is aggravated by blasting and/or hydrofracturing that induce the bulk of seismic activity. Smaller blasts frequently produce waveforms that are difficult to discriminate from those of seismic events.
The objective of de-clustering is to make the working data set more stationary in time and more random, or disordered, in space, i.e. more Poissonian, so it can be treated as independent in time and/or in space. Because there is no physical difference between foreshocks, main shocks, and aftershocks, clusters of events are usually defined by their proximity in space and/or by the fact that they occur at rates greater than the seismicity rate averaged over a longer duration. Since most of the “triggered” events are small, de-clustering would lower the exponent of the power law, overstating seismic hazard for larger events (Mizrahi et al., 2021). However, de-clustering is an ill-posed problem without a unique solution, and therefore it is subjective. In most mines, seismic activity is strongly dependent, and de-clustering would either decimate the data set, leaving not much to work with, or only partially de-cluster, leaving a distorted data set. There is a case to be made for reintroducing aftershocks and foreshocks into the probabilistic hazard assessment (Taroni, 2024). A careful selection of the seismogenic volume in space and in time, a slightly higher \(\log P_{min}\), and a more frequent assessment of time varying hazard may offer a more objective solution. One can also account for a trend by extrapolating the parameters of the size distribution beyond the observed volume mined or the observed time span.
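A window-based de-clustering pass in the spirit of Gardner and Knopoff (1974) can be sketched as follows. The fixed space and time windows here are illustrative placeholders (real schemes scale the windows with event size), and, as noted above, any such choice is subjective:

```python
def decluster(events, t_win, d_win):
    """Window de-clustering sketch. events are tuples (t, x, y, z, logP).
    Any event falling within t_win in time and d_win in distance of a
    larger event is flagged as dependent; the rest are kept as background.
    Window values are fixed, illustrative assumptions."""
    events = sorted(events, key=lambda e: e[4], reverse=True)  # largest first
    independent, flagged = [], set()
    for i, e in enumerate(events):
        if i in flagged:
            continue
        independent.append(e)
        for j in range(i + 1, len(events)):
            f = events[j]
            dt = abs(f[0] - e[0])
            dd = ((f[1]-e[1])**2 + (f[2]-e[2])**2 + (f[3]-e[3])**2) ** 0.5
            if dt <= t_win and dd <= d_win:
                flagged.add(j)   # dependent on a larger event
    return independent

# a large event, a nearby small aftershock, and a distant unrelated event
catalogue = [(0.0, 0.0, 0.0, 0.0, 2.0),
             (0.5, 10.0, 0.0, 0.0, -1.0),
             (100.0, 5000.0, 0.0, 0.0, 0.0)]
background = decluster(catalogue, t_win=1.0, d_win=100.0)
```

Running the sketch on the tiny catalogue above keeps the large event and the distant one and drops the nearby small event, illustrating how most removed events are small ones.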
1.5.2 Testing the Quality and Integrity of Data
This is an important and frequently underestimated, or even overlooked, process that can influence the results of seismic hazard analysis considerably. It is also a process that takes time and that should be performed by an experienced mine seismologist.
Firstly, one needs to select all available data, regardless of their quality. This original data set should be stored for future reference.
Secondly, one should recognise, document, and then remove all artifacts, e.g. blasts, ore-pass noise, test pulses, spikes, and misassociated and split events, and identify gaps in data recordings. Some split events, i.e. larger complex events separated by the seismologist during processing into two or more simple ones, may significantly affect the time, and to a degree the size, distribution analysis. It is useful to plot the spatial distribution of seismic events, the activity rate versus time, and the activity versus time of day or day of week. It is also useful to plot the peak ground velocity (PGV), its associated frequency, and distance for each seismic station. Some blasts generate higher frequency PGV at close stations. One should test for amplitude saturation caused either by exceeding the output range of an amplifier, which produces squared-off waveforms, or by exceeding the travel limit of the inertial mass within the geophone, known as displacement clipping. Seismic sensors used in mine networks can only sustain a few mm of displacement, and, for a given peak ground velocity, the peak ground displacement, PGD, increases as the associated frequency f decreases, approximately \(PGD = PGV/\left (2\pi f\right )\). Displacement clipping produces a sharp reversal in velocity and is difficult to spot on velocity waveforms, but it can be seen more easily on displacement traces as a sharp triangular peak reaching the maximum displacement. If not recognised, amplitude saturation will distort seismic source parameter calculations and ground motion analysis.
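The displacement-clipping screen described above follows directly from \(PGD = PGV/\left (2\pi f\right )\); a sketch, where the 2 mm travel limit is an assumed sensor specification, not a value from the text:

```python
import math

def peak_ground_displacement(pgv, freq):
    # PGD ~ PGV / (2*pi*f): displacement grows as the associated frequency drops
    return pgv / (2.0 * math.pi * freq)

def displacement_clipped(pgv, freq, travel_limit=2e-3):
    """Flag possible displacement clipping; pgv in m/s, freq in Hz,
    travel_limit in metres (assumed geophone travel, e.g. 2 mm)."""
    return peak_ground_displacement(pgv, freq) >= travel_limit

# 0.1 m/s at 2 Hz implies ~8 mm of displacement: well past a 2 mm travel limit
# 0.01 m/s at 5 Hz implies ~0.3 mm: within range
```

Screening the catalogue with such a check flags records whose low-frequency PGV is large enough that the displacement trace should be inspected for the triangular clipping signature.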
Thirdly, one should test the quality of seismological processing, specifically seismic potency, P, radiated energy, E, and corner frequency, \(f_{0}\). Here it is useful to test the relation between seismic potency and radiated energy on a log-log plot, \(\log E\) versus \(\log P\), the relation between the S- and P-wave energy, \(\log E_{S}\) versus \(\log E_{P}\), the relation between the S- and P-wave corner frequencies, \(\log f_{0S}\) versus \(\log f_{0P}\), the relation between the S-wave corner frequency and potency, \(\log f_{0S}\) versus \(\log P\), and check for outliers and patterns. Source parameters are also influenced by mislocation, and if the hypocentral distances are overestimated so are the source parameters, in some cases enormously. Plotting the recorded PGVs at each site versus distance for different \(\log P\) in colour helps to identify events with suspect source parameters and/or poor location.
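The \(\log E\) versus \(\log P\) screen above can be automated as a simple regression-plus-residual check. This is only a sketch of the idea, assuming a roughly linear log-log relation; the outlier threshold and all names are illustrative choices, not a prescribed procedure.

```python
import math

def fit_logE_logP(P, E):
    """Least-squares fit of log10(E) = a + b*log10(P) and residuals,
    a simple screen for outliers in the potency-energy relation."""
    x = [math.log10(p) for p in P]
    y = [math.log10(e) for e in E]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    return a, b, resid

def flag_outliers(resid, k=2.0):
    """Flag events whose residual exceeds k standard deviations."""
    n = len(resid)
    s = (sum(r * r for r in resid) / n) ** 0.5
    return [abs(r) > k * s for r in resid]
```

Events flagged here would be returned for reprocessing; as the text notes, gross outliers often turn out to be mislocations rather than genuine source-parameter anomalies.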
One should also examine in detail the quality of seismological processing of at least the 100 largest events, since they have a great influence on the results. Here it is important to test the influence of near-field recordings on results of source parameter calculations, mainly on seismic potency, energy, and corner frequency.
The potency and energy range of seismic events that a system can recover is limited by its frequency range \(\left (f_{1},f_{2}\right )\), which is mainly determined by the capabilities of the seismic sensors. For example, in hard rock with \(v_{S} = 3.6\) km/s and \(\mu = 30\) GPa, the largest event for which the system can recover 85% of seismic potency is \(P \simeq 7.41\Delta \sigma /\left (10f_{1}\right )^{3}\), which for a reasonably high stress drop of \(\Delta \sigma = 3\) MPa and \(f_{1} = 3\) Hz (i.e. for 4.5 Hz sensor) gives \(\log P=2.9\) (Mendecki, 2013b). It is advisable to secure at least 80% recovery of seismic potency. Note that the estimates of seismic potency and radiated energy can be strongly influenced by their dependence on the applied correction for attenuation, Q. Apart from location and source parameters, one should try to resolve the source mechanisms of larger events and their spatial association with geological structures. One should also check if any changes made to the seismological processing software over time introduced step changes in the results.
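The worked number in the paragraph above can be reproduced directly. The sketch below just evaluates the quoted rule \(P \simeq 7.41\Delta \sigma /(10f_{1})^{3}\); the function name is an assumption for the example.

```python
import math

def max_logP_recoverable(stress_drop_pa: float, f1_hz: float) -> float:
    """Largest log10(P) for which ~85% of seismic potency is recovered,
    using the text's rule for hard rock: P ~ 7.41*stress_drop/(10*f1)**3."""
    return math.log10(7.41 * stress_drop_pa / (10.0 * f1_hz) ** 3)

# Stress drop 3 MPa, f1 = 3 Hz (i.e. a 4.5 Hz sensor) -> log P = 2.9
print(round(max_logP_recoverable(3e6, 3.0), 1))  # 2.9
```

Raising \(f_{1}\) (stiffer, higher-frequency sensors) shrinks this upper limit quickly, since it enters the rule cubed.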
Since seismic hazard in mines is driven by rock extraction, it is also useful to examine plots of cumulative potency versus time, cumulative potency versus volume mined in relation to cumulative volume mined versus time, see Fig. 1.4.
It needs to be recognised that the definition of a seismogenic volume and the selection of the appropriate data set introduce a degree of subjectivity.
The objective of this section is to facilitate preparation of the seismic monitoring part of seismic hazard management plans (SHMP) for seismically active mines. As mining goes deeper and the footprint and the extraction ratio of the ore bodies increase, more underground mines will identify seismicity as being the principal hazard, i.e. hazard that has a reasonable potential to result in multiple deaths in a single incident or a series of recurring incidents.
The process starts, or should start, before mining commences with deriving the expected seismic rock mass response to mining, which constitutes the first reference seismic hazard. In such a case, there is no seismic system, therefore no data, so the expected hazard depends on expert opinion taking into account the nature of the ore body, its geological setting, the planned mine layout, the results of numerical modelling of the expected stresses and strains, and the experience of other mines in similar conditions. However, in many cases at their inception, mines did not expect to experience seismic problems, and therefore, frequently the first hazard assessment is done at a later stage, either when underground workers reported “rock noises” or after the first incident of damage caused by a perceived seismic event. Most seismically active mines monitor seismic rock mass response to mining, and in these cases seismic hazard should be assessed quantitatively in terms of probabilities of exceedance of certain magnitudes and the resulting ground motion.
The seismic monitoring part of a good SHMP should be logical, consistent, and quantitative. All criteria should be defined to facilitate quantification and each routine task and action, timed and costed. The plan should also provide a framework to ensure compliance with the local health and safety regulations. At the same time, it should be realistic, i.e. should not be overly ambitious to make sure that the mine will be able to deliver on its own commitment. It should be reviewed every 6 months, or every time the seismic rock mass response to mining delivers the “unexpected”.
In preparation, it is useful to follow a four-step process accepted by the industry. (1) Identify the type and the nature of seismic related hazards, e.g. slip on geological structures, bursting of pillars, or coal, gas, and rock outburst from a face in coal mines. (2) Assess the risk, i.e. the expected consequences, associated with each type of hazard as measured by the magnitude and the probability or frequency of occurrence, and then score it in the company accepted risk matrix. (3) Specify measures to be taken to manage seismic hazard and minimise the risk, e.g. changes to mine layout and/or the sequence of mining, rate of mining, introduction or changes to regional or local support, introduction of preconditioning, etc. (4) Reassess seismic hazard and review control measures. The seismic part of the SHMP should include the following:
1.
The specific objectives of seismic monitoring for the mine.
2.
What the mine needs to have to achieve the stated objectives, in terms of seismic monitoring technology and the people and skills involved.
3.
What the mine needs to do to achieve the stated objectives, i.e. a list of tasks the mine needs to perform daily, weekly, monthly, and yearly.
Having the seismic part of the SHMP, one can prepare a seismic trigger action response plan (TARP) that defines conditions, called triggers, that need to be continuously checked for and the respective actions to follow when those conditions occur. Actions are triggered when seismic behaviour deviates from the expected and could become hazardous. The SHMP and TARPs should be reviewed twice a year or every time seismic rock mass response to mining exceeds expectations in a significant way.
1.6.1 Objectives of Seismic Monitoring in Mines
Routine seismic monitoring in mines enables the quantification of exposure to seismicity and provides a logistical tool to guide the effort into prevention, control, and alert of potential rock mass instabilities that could result in damage, injury, or loss of life. The scope of seismic monitoring depends on the expected seismic rock mass response to mining. Mines with very low seismic hazard may opt for a limited scope of monitoring in order to register any potential changes in seismic response to mining in time and in space. As seismic activity increases, the scope of monitoring needs to be revised.
The objectives of seismic monitoring are an integral part of the seismic hazard risk management strategy adopted by mine management. The overall objective of seismic monitoring is to measure if the current seismic response to mining is as expected.
One can define the following general objectives of monitoring the seismic response of the rock mass to mining (modified after Mendecki, 1997a and Mendecki et al., 1999):
1.
Rescue. To detect and locate potentially damaging seismic events, to alert management, and to assist in rescue operations.
2.
Prevention. To confirm the rock mass stability related assumptions made during the mine design process and enable an audit of, and corrections to, the particulars of a given design while mining, for example:
(a)
Monitor seismic behaviour and the mechanism of seismic events associated with known geological structures and test for unknown geological structures.
(b)
Resolving spatial, temporal, and geometrical characteristics of co-seismic inelastic strain in selected areas of the mine to reconcile with corresponding numerical model(s) and to constrain numerical modelling of future mining scenarios.
(c)
Recording strong ground motion in solid rock and at the skin of excavations to quantify site amplification and assist in support design.
(d)
Monitoring the propagation of a caving and the development of the yield front in caving mines.
(e)
Quantify seismic rock mass response to production by plotting the cumulative potency versus the cumulative volume of rock extraction, see Fig. 1.4 right.
(f)
Confirm the expected level of ground motion produced by larger events in solid rock and at the skin of excavations to confirm the support design, and monitor the consumption of deformation capacity of support due to seismicity.
(g)
Model the expected seismicity associated with different scenarios of future mining.
(h)
To monitor and to quantify an increase in seismic activity caused by larger seismic events or by production blasts in order to define the area-specific temporal exclusion zones and to guide the re-entry into the working places.
3.
Hazard Assessment. To quantify the expected exposure to seismicity in terms of the probability of occurrence, within a specific period of time in a given area, of potentially damaging events, and potentially damaging ground motion in the intermediate and long term. The latest hazard results should be compared with previous ones, differences discussed, explained, and documented.
4.
Alerts. To detect strong and unexpected changes in the spatial and/or temporal behaviour of seismic parameters that could lead to instability and affect working places immediately or in the short term.
5.
Back analysis. To improve both the mine design and the seismic monitoring processes. All seismic events regardless of size that caused fatality, injury, or damage need to be back analysed thoroughly. It is also advisable to back analyse all near misses, i.e. seismic events that did not result in consequences—but had the potential to do so. Results of back analysis should form the basis for a regular critical review of the applied seismic risk management strategy, guidelines, and procedures.
A quantitative description of seismic events and seismicity is considered as a minimum requirement to accomplish these objectives.
1.6.2 System Requirements
The transducers, data acquisition hardware, and processing software which comprise seismic monitoring systems can best be characterised in terms of the amplitudes and frequencies of the ground motion which they can faithfully represent, and the average rate at which events may be recorded and processed.
One should also specify the minimum location accuracy and the minimum and maximum potency and energy that the system can recover from waveforms.
Dynamic Range in Amplitude
The dynamic range in amplitude is commonly defined as the ratio of the highest amplitude that can be measured to the RMS noise level, which is considered to equal the amplitude of the smallest detectable signal. The ratio \(R=A_{\mathrm {peak}}/N_{\mathrm {RMS}}\) is often expressed in decibels, \(\mathrm {dB}=20\cdot \log _{10}R\). If the environmental noise within the mine is within \(10^{-7}\) to \(10^{-6}\) m/s and the expected maximum amplitude of ground velocity is 1 m/s, then to measure it we need a dynamic range of at least 120 dB. Miniature geophones can easily measure ground noise of \(10^{-7}\) m/s and less, but peak amplitudes are limited by internal displacement. A 4.5 Hz geophone with 0.7 damping and 4 mm peak-to-peak travel can accommodate 56 mm/s at the natural frequency, giving a minimum dynamic range of 115 dB. A 15 Hz geophone can tolerate higher tilt angles, which in turn reduces the available travel to as little as 0.5 mm, and lower damping is often used to enhance sensitivity at lower frequencies, all of which could reduce the peak velocity to 16 mm/s or less, with a corresponding dynamic range of 104 dB.
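The dB figures quoted above follow directly from the definition \(\mathrm{dB}=20\log_{10}(A_{\mathrm{peak}}/N_{\mathrm{RMS}})\); a minimal check:

```python
import math

def dynamic_range_db(peak: float, noise_rms: float) -> float:
    """Dynamic range in dB: 20*log10(A_peak / N_RMS)."""
    return 20.0 * math.log10(peak / noise_rms)

# 4.5 Hz geophone: 56 mm/s peak over 1e-7 m/s RMS noise -> ~115 dB
print(round(dynamic_range_db(56e-3, 1e-7)))  # 115
# 15 Hz geophone at 16 mm/s -> ~104 dB
print(round(dynamic_range_db(16e-3, 1e-7)))  # 104
# 1 m/s expected peak over 1e-6 m/s noise -> 120 dB
print(round(dynamic_range_db(1.0, 1e-6)))    # 120
```

Note that with the quieter \(10^{-7}\) m/s noise floor, the 1 m/s requirement would demand 140 dB, which is why the text asks for "at least" 120 dB.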
One can increase the signal range and the travel limits of geophones by overdamping. With normal damping of 0.7, the frequency response is flat to ground velocity above the natural frequency, \(f_{n}\), and is proportional to \(f_{n}^{2}\) below, which is caused by a double pole at that frequency. As the damping increases beyond 1, the poles separate, in such a way that the product of the pole frequencies remains constant. Between these poles, the velocity response is proportional to frequency, effectively making it flat to acceleration over this frequency range. For 4.5 Hz geophones, the maximum damping which can reasonably be achieved is 3.4, which means the acceleration response covers the frequency band from 0.7 Hz to 30 Hz. In this configuration, the ADC voltage clip limit is raised by a factor of 5 to 0.5 m/s, which is then slightly greater than the minimum internal displacement clip limit. The spectra need to be corrected for this response when calculating source parameters.
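The pole separation described above can be made explicit. Assuming the standard second-order geophone response, a double pole at \(f_{n}\) with damping \(h>1\) splits into poles at \(f_{n}(h \pm \sqrt{h^{2}-1})\), whose product is indeed the constant \(f_{n}^{2}\); the closed form is our reconstruction, not a formula given in the text.

```python
import math

def split_poles(f_n_hz: float, damping: float):
    """Pole frequencies of an overdamped (h > 1) geophone: the double
    pole at f_n separates into f_n*(h - s) and f_n*(h + s), s = sqrt(h^2-1),
    keeping the product of pole frequencies constant at f_n^2.
    Between the poles the response is flat to acceleration."""
    s = math.sqrt(damping ** 2 - 1.0)
    return f_n_hz * (damping - s), f_n_hz * (damping + s)

# 4.5 Hz geophone overdamped to h = 3.4: poles near 0.7 Hz and 30 Hz,
# matching the acceleration band quoted in the text.
lo, hi = split_poles(4.5, 3.4)
print(round(lo, 1), round(hi))
```

The spectra recorded in this configuration must, as stated above, be corrected for this modified response before source parameters are calculated.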
Sensors and Frequency Range
The potency and energy range of seismic events that a system can recover is limited by its frequency range \(\left (f_{1},f_{2}\right )\), which is mainly determined by the capabilities of seismic sensors. For the conventional \(\omega ^{2}\) model, at \(f_{1}=0.2f_{0}\) we can recover 96%, at \(0.42f_{0}\) 85%, and at the corner frequency only 50% of seismic potency. The conventional 4.5 Hz sensors are capable of recording frequencies down to \(f_{1}=3\) Hz, and therefore, for seismic events with stress drop \(\Delta \sigma =5\cdot 10^{5}\) Pa, they can reliably recover 85% of potency up to \(\log P=2.05\) or \(m=2.32\). The omnidirectional 14 Hz geophones are capable of recording reliably down to \(f_{1}=10\) Hz and can recover 85% of potency up to \(\log P=0.5\) or \(m=1.28\). If we assume that both sensors are capable of recording frequencies up to \(f_{2}=2000\) Hz, then they can recover 85% of seismic energy down to \(\log P=-2.3\) or \(m=-0.58\). While the underestimation of seismic potency does not have a significant effect on the potency-based magnitude, it has a notable effect on the estimation of the energy index, apparent stress, and apparent volume.
Assuming the relation between the S-wave corner frequency, \(f_{0}\), S-wave velocity, \(v_{S}\), and source radius, \(r = 0.3v_{S}/f_{0}\), (Brune et al., 1979), and that \(P = 16\Delta \epsilon r^{3}/7\) (Eshelby, 1957), we can derive the following simple relation: \(f_{0}=0.395\,v_{S}\sqrt [3]{\Delta \epsilon /P}\). Since \(v_{S} = \left (\mu /\rho \right )^{1/2}\), one can construct a nomogram, see Fig. 1.6, representing the relations between these variables for hard rocks, defined here by \(\mu = 37\) GPa, \(\rho = 2700\) kg\(/\)m\(^{3}\), and \(v_{S} = 3700\) m/s (top of the band) and for soft rocks by \(\mu = 7.2\) GPa, \(\rho = 1800\) kg\(/\)m\(^{3}\), and \(v_{S} = 2000\) m/s (bottom of the band).
Fig. 1.6
Expected S-wave corner frequency and source radius as a function of \( \log P\) for different strain drops and rock type (after Mountfort & Mendecki, 1997)
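A point on the Fig. 1.6 nomogram can be computed directly from the two cited relations, \(r = 0.3v_{S}/f_{0}\) and \(P = 16\Delta\epsilon r^{3}/7\). The sketch below is an illustrative check using the hard-rock values defined above; the function name is an assumption.

```python
def corner_frequency_hz(potency_m3: float, strain_drop: float,
                        v_s_m_s: float) -> float:
    """S-wave corner frequency from r = 0.3*v_S/f0 (Brune) and
    P = (16/7)*strain_drop*r**3 (Eshelby), i.e.
    f0 = 0.3*v_S*(16*strain_drop/(7*P))**(1/3) ~ 0.395*v_S*(strain_drop/P)**(1/3)."""
    r = (7.0 * potency_m3 / (16.0 * strain_drop)) ** (1.0 / 3.0)  # source radius, m
    return 0.3 * v_s_m_s / r

# Hard rock: v_S = 3700 m/s, strain drop = 5e5 Pa / 37 GPa ~ 1.35e-5.
# For log P = 2.05 this gives f0 near 7 Hz, which is why a 4.5 Hz sensor
# recording down to f1 = 3 Hz (~0.42*f0) still recovers ~85% of potency.
print(round(corner_frequency_hz(10 ** 2.05, 5e5 / 37e9, 3700.0), 1))  # ≈ 7.2
```

This reproduces, to rounding, the \(\log P=2.05\) recovery limit quoted for 4.5 Hz sensors in the preceding paragraph.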
Figure 1.7 left shows the recovery of seismic potency as a function of the ratio of the lowest available frequency \(f_{1}\) to the corner frequency \(f_{0}\) for the \(\omega ^{1.5}\), \(\omega ^{2}\), and \(\omega ^{3}\) models. Figure 1.7 right shows that the \(\omega ^{2}\) model produces 18% of energy below the corner frequency \((\)left of \(f_{2}/f_{0}=1)\), the \(\omega ^{1.5}\) model less than 10%, while the \(\omega ^{3}\) model almost 50%. The energy recovery in Fig. 1.7 right is calculated for \(f_{1} = 0.2f_{0}\) and \(f_{2}\) varying from \(0.2f_{0}\) (0% recovery) to \(10f_{0}\) (87% recovery). Therefore, for the \(\omega ^{2}\) model, the smallest event for which we can recover at least 85% of energy is \(P \simeq 0.1\Delta \sigma /\left (f_{2}/10\right )^{3}\), which for \(\Delta \sigma = 3\) MPa and \(f_{2} = 1000\) Hz gives \(\log P=-0.5\).
Fig. 1.7
Recovery of seismic potency as a function of \(f_{1}/f_{0}\)(left) and radiated energy \(E \left (0.2f_{0},f_{2} \right )/E \left (f_{1}=0,\infty \right )\) as a function of \(f_{2}/f_{0}\)(right) for the \(\omega ^{2}\) model in red, the \(\omega ^{1.5}\) in green, and the \(\omega ^{3}\) in blue. To secure a finite energy, the recovery for the \(\omega ^{1.5}\) model is defined as \(E \left (0.2f_{0},f_{2} \right )/E \left (f_{1}=0.01f_{0},f_{2}=100f_{0} \right )\). The large black dots on the \(\omega ^{2}\) model indicate particular recoveries of potency and energy. After Mendecki (2013b)
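The lower limit quoted above follows from the rule \(P \simeq 0.1\Delta \sigma /(f_{2}/10)^{3}\); a minimal check (the function name is an assumption for the example):

```python
import math

def min_logP_for_energy(stress_drop_pa: float, f2_hz: float) -> float:
    """Smallest log10(P) with at least 85% energy recovery (omega^2 model),
    using the text's rule P ~ 0.1*stress_drop/(f2/10)**3."""
    return math.log10(0.1 * stress_drop_pa / (f2_hz / 10.0) ** 3)

# Stress drop 3 MPa, f2 = 1000 Hz -> log P = -0.5, as in the text
print(round(min_logP_for_energy(3e6, 1000.0), 1))  # -0.5
```

Together with the upper limit set by \(f_{1}\), this brackets the event-size window over which a given sensor suite yields reliable potency and energy estimates.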
A mine wide system equipped with 4.5 Hz sensors should have at least one site capable of recording three-component strong ground motions of at least 0.5 m/s at lower frequencies.
Sensor Orientation
There are two aspects to sensor orientation: Firstly, if the lower natural frequency 4.5 Hz geophones are not installed precisely vertically or horizontally, they do not function correctly; secondly, the true directions of ground motion must be found for each event to assist in location and to estimate its mechanism. The 4.5 Hz geophones are installed in vertical or in horizontal boreholes up to 10 m from the skin of excavations; therefore, it is relatively easy to secure proper orientation. The omnidirectional higher frequency geophones are frequently installed in longer boreholes, and it is recommended to install orientation sensing electronics in the borehole sensors.
Seismic Data and Data Access
Data recorded by the seismic system should be available in open formats and be easily accessible:
1.
Event Data. The user should be able to define a template format that can be exported on a regular basis. It should contain at least the following data per event: event time (UTC and local), location (X, Y, Z), seismic potency for P-wave and S-wave, radiated energy for P-wave and S-wave, corner frequency for P-wave and S-wave, the number of triggered stations, and the number of accepted stations.
2.
Trigger Data. For each triggered station, the following data should be logged: Trigger ID, Trigger Time, PGV or PGA, and Duration. This should be exportable in ASCII or CSV.
3.
Seismogram Data. Seismograms of triggered or continuous data should be exportable to an open format such as ASCII, miniSEED, or SEG-Y. Such seismogram data should have clear indications of how gain factors are defined and which filtering or processing, if any, has been applied to the exported data. Additional metadata such as site coordinates, sensor type, including sensor characteristics and configuration of the associated sites, should be exportable to simple textual formats. Timing information such as sampling rate and reference times per seismogram should be clear and well documented.
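The open-format requirement above means exported data should be trivially machine-readable. The sketch below parses a trigger-data CSV of the kind listed in item 2; the column names and sample rows are hypothetical, since the actual template depends on the system's export configuration.

```python
import csv
import io

# Hypothetical CSV layout following the trigger-data fields listed above;
# real column names depend on the system's export template.
SAMPLE = """trigger_id,trigger_time_utc,pgv_mm_s,duration_s
17,2023-05-01T02:14:07.125Z,12.4,1.8
18,2023-05-01T02:14:07.310Z,3.1,1.2
"""

def read_triggers(text: str):
    """Parse trigger-data CSV into dictionaries with numeric PGV/duration."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["pgv_mm_s"] = float(row["pgv_mm_s"])
        row["duration_s"] = float(row["duration_s"])
        rows.append(row)
    return rows

triggers = read_triggers(SAMPLE)
print(len(triggers), max(t["pgv_mm_s"] for t in triggers))  # 2 12.4
```

Keeping such exports in plain CSV or ASCII, as required above, makes the data accessible without vendor software.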
Rock Extraction Data
Rock extraction data provides an important constraint on the interpretation of the seismic rock mass response to mining. During times when mining is suspended, seismic activity decays very quickly, and the mean inter-event times increase, indicating lower hazard. The mean inter-event volumes mined, however, stay the same since the hazard potential did not change, and when mining is resumed, it will be very much the same as it was before. For this reason, seismic hazard induced by mining should be expressed as a function of time and as a function of rock extraction. For caving mines, rock extraction data should be separated into undercutting and drawing or mucking.
Sensitivity and Location Accuracy
The location of a seismic event is assumed to be a point within the seismic source that triggered the set of seismic sites used to locate it. The interpretation of location, if accurate, depends on the nature of the rupture process at the source—if a slow or weak rupture starts at a certain point, the closest site(s) may record waves radiated from that very point while others may only record waves generated later in the rupture process by a higher stress drop patch of the same source. One needs to be specific in determining the arrival times if the location of rupture initiation is sought, otherwise the location will be a statistical average of different parts of the same source.
Since the source of a seismic event has a finite size, the attainable location accuracy of all seismic events in a given area should be within the typical size of an \(m_{min}\) or \(\log P_{min}\) event that defines the sensitivity of the seismic network, i.e. the magnitude above which the system records all events by the minimum number of stations to secure the required accuracy of location.
Seismological Processing and Source Parameters
A seismic event is considered to be described quantitatively when apart from its timing, t, and location, \(X = \left (x,y,z\right )\), reliable values are obtained for at least two independent parameters pertaining to the seismic source, namely seismic potency, P, and radiated seismic energy, E. Mining induced seismic events are complex, and underground excavations are frequently a part of the source volume; it is therefore useful to invert for source mechanism and to decompose it into the isotropic and deviatoric components.
Seismicity is defined for a given volume, \(\varDelta V\), over a certain time, \(\varDelta t\), and it can be quantified by the basic four, largely independent, quantities: (1) average time between events, \(\bar {t}\), (2) average distance, including source sizes, between consecutive events, \(\bar {X}\), (3) cumulative seismic potency, \(\sum P\), and (4) cumulative radiated energy, \(\sum E\). Other parameters, e.g. apparent stress, \(\sigma _{A}=E/P\), and apparent volume, \(V_{A}=\mu P^{2}/E\), can be derived from seismic potency and energy. From these four independent quantities, we can derive a number of statistical parameters related to co-seismic deformation and associated changes in the strain rate, stress, and rheology of the process. These parameters are described in the chapter on Monitoring Rock Mass Stability.
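The derived parameters quoted above follow directly from potency and energy; a minimal sketch, with the shear modulus \(\mu\) assumed here to be 30 GPa for illustration:

```python
MU_PA = 30e9  # assumed shear modulus for hard rock, Pa (illustrative)

def apparent_stress(energy_j: float, potency_m3: float) -> float:
    """Apparent stress sigma_A = E/P (Pa); note P = M0/mu, so this
    equals the usual mu*E/M0."""
    return energy_j / potency_m3

def apparent_volume(energy_j: float, potency_m3: float, mu: float = MU_PA) -> float:
    """Apparent volume V_A = mu*P**2/E (m^3), as defined in the text."""
    return mu * potency_m3 ** 2 / energy_j

# An event with log P = 2 (P = 100 m^3) radiating 1e7 J:
print(apparent_stress(1e7, 100.0))   # 1e5 Pa apparent stress
print(apparent_volume(1e7, 100.0))   # 3e7 m^3 apparent volume
```

Both quantities are ratios of the two independent source parameters, which is why the quality checks on \(P\) and \(E\) earlier in the chapter matter for everything derived downstream.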
To quantify the ground motion hazard, one needs to develop the ground motion prediction equation (GMPE), in case of smaller mines for the entire mine or, in case of larger mines, for different mining areas.
1.6.3 Recommended Analysis
A seismic monitoring system provides a large amount of potentially useful data. To accomplish the stated objectives, it is recommended to define routine tasks that need to be performed regularly. These tasks may vary depending on the degree of seismic exposure. However, any unexpected or unusual seismic behaviour needs to be recognised and communicated. Mines with elevated seismic hazard after production blasts or after larger seismic events should institute a higher resolution seismic monitoring as a part of the re-entry decision-making process. This may involve lowering trigger levels and/or monitoring individual triggers or monitoring continuous ground motions at selected sites. This can be done in addition to monitoring triggered events. The intermediate and long term seismic hazard assessment should be done at least once a year and also every time the maximum observed event size has been exceeded.
Examples of Daily Tasks
Check the system performance.
Identify possible outliers of seismological processing and return them for reprocessing.
Check for strong and unexpected changes in seismic activity. Unexpected activity close to excavations and events far from active mining, close to a shaft or, in general, in places not predicted by numerical modelling, should be noted.
Compare the time of day plot of seismic activity with the average over the last week. Be mindful of the linear size and orientation of these events, since their spatial influence in certain directions may be considerably greater than that indicated by the routine plots of events as dots or spheres on a mine plan.
Note any unusual activity after production blasts and after larger seismic events.
Examples of Weekly Tasks
Note any recurrent system performance issues during the last week.
Test the integrity of data, e.g. temporal gaps, the presence of blasts and ore-pass noise.
Review locations and magnitudes of events recorded over the last week in terms of their distances from excavations. Persistent activity close to excavations and events far from active mining, close to a shaft or, in general, in places not predicted by numerical modelling, may raise concern.
Examine if these unusual locations of seismic events are correct.
Update plots of cumulative potency versus time and versus the cumulative volume of rock extraction and compare them with a short term history, e.g. with the mean value for the last week or month.
Examples of Monthly Tasks
Note any persistent system performance issues during the past month.
Review locations and magnitudes of events recorded over the last month in terms of their distances from excavations. Persistent and unexpected activity close to excavations and events far from active mining, close to a shaft or, in general, in places not predicted by numerical modelling, should be reconciled with the expected behaviour and explained.
Update plots of the cumulative potency versus time and versus the cumulative volume of rock extraction. An increase in the rate of the cumulative potency versus cumulative volume mined, or versus time, may signify an unstable seismic deformation, and an accelerated potency release may indicate a temporary loss of control over the seismic rock mass response to mining.
Check if time of day and day of week plots of seismic activity are as expected.
Update the list with details of the 10 largest events and the list of record breaking events at the mine.
Update the list with details of the 10 largest recorded ground motions at each monitoring site.
Watch for amplitude saturation caused either by exceeding the output range of an amplifier or by exceeding the travel limit of the inertial mass within the geophone.
Undertake advanced analysis of larger or damaging seismic events, i.e. location uncertainty, source mechanism, the character of aftershock activity, and the spatial distribution of strong ground motion.
Test for any possible precursory behaviour to these events, e.g. activity rate and the cumulative apparent volume versus energy index plots.
Examples of Yearly Tasks
Quantify the performance of the seismic system and compare with previous years.
Reassess the configuration and the sensitivity of the seismic network.
Examine the location accuracy of seismic events.
The recommended location accuracy in a given area should be within the typical size of an event of that magnitude which defines the sensitivity of the seismic network for that area.
Reassess the suitability of currently employed sensors to accurately record strong ground motion.
Review lists of largest events, and record breaking events and largest recorded ground motions.
Compare and reconcile seismic activity with the latest numerical stress model.
Evaluate intermediate and long term seismic hazard and compare it with previous years.
Review seismic responses to production blasts, threshold levels for ground motion alerts and for re-entry, and the re-entry protocol.
Back analyse larger seismic events and comment on their effect on excavations.
Estimate the site effect on the skin of critical excavations and reconcile with the current support design.
Review seismic hazard management plan for seismic monitoring.
Forensic Analysis of Larger Events
This is a part of back analysis, and the main objective here is to understand the cause(s) of these events and their impact on mine safety and the mine infrastructure. Such analysis includes the inversion of the mechanism of the main shock and its aftershocks, if any, to see if these events are associated with an unknown geological structure, and the ground motion study to reconcile support performance with the observed and simulated seismic loading. Frequently larger events are complex, comprising a few sub-events, and it is important to establish their spatial distribution with respect to pillars and geological structures.
1.7 A Short Note on Magnitude Scales
In 1931, the Japanese seismologist Kiyoo Wadati constructed a chart of the logarithm of the maximum ground motion versus distance for 31 shallow, 5 intermediate, and 5 deep earthquakes and noted that the plots for different earthquakes formed parallel concave lines (Wadati, 1931). In 1934, Charles Richter constructed a similar diagram of peak ground motion versus distance for southern California, see Fig. 1.8, and used the fact that earthquakes of different size gave almost parallel curves to create the first earthquake magnitude scale. Richter published his work in January 1935 (Richter, 1935). This is how Richter described his initial observations.
I suggested that we might compare earthquakes in terms of the measured amplitudes recorded at the Wood-Anderson torsion seismographs in California with an appropriate correction for distance. Wood and I worked together on the latest events, but we found that we could not make satisfactory assumptions for the attenuation with distance. I found a paper by Professor K. Wadati of Japan in which he compared large earthquakes by plotting the maximum ground motion against distance to the epicentre. I tried a similar procedure for our stations, but the range between the largest and smallest magnitudes seemed unmanageably large. Dr. Beno Gutenberg then made the natural suggestion to plot the amplitudes logarithmically. I was lucky because logarithmic plots are a device of the devil. I saw that I could now rank the earthquakes one above the other. Also, quite unexpectedly, the attenuation curves were roughly parallel on the plot. By moving them vertically, a representative mean curve could be formed, and individual events were then characterised by individual logarithmic differences from the standard curve. This set of logarithmic differences thus became the numbers on a new instrumental scale. Very perceptively, Mr. Wood insisted that this new quantity should be given a distinctive name to contrast it with the intensity scale. My amateur interest in astronomy brought out the term “magnitude”, which is used for the brightness of a star.
Richter defined the local magnitude of an event \(m_{L}\) at a given recording station as
\(m_{L}=\log A-\log A_{0}(R),\)
where A is the maximum zero-to-peak horizontal amplitude measured in mm on a Wood-Anderson torsion seismograph with magnification of 2080 at 0.8 second period (Uhrhammer & Collins, 1990), at epicentral distance R, and \(A_{0}(R)\) is the reference maximum amplitude for the same distance. The local magnitude then is a relative measure of the strength of a seismic event, and a unit increase in magnitude corresponds to a ten-fold increase in the amplitude of ground displacement. The reference amplitude \(A_{0}\)[mm] as a function of distance R [km] was given by Richter in a table that can be approximated by the following formulae: \(\log A_{0} = 0.15 - 1.6\log R\) for distances less than 200 km and \(\log A_{0} = 3.38 - 3.0\log R\) for distances between 200 and 600 km. Richter arbitrarily chose the reference amplitudes so that the earthquakes he dealt with did not have negative magnitudes. At \(R=100\) km, \(A_{0}\simeq 0.001\) mm, so on a Wood-Anderson seismograph the zero-to-peak amplitude for a magnitude \(m_{L}=3\) event would measure \(A=0.001\cdot 10^{3}=1\) mm, i.e. \(\log A=0\). To compute the displacement of ground motion \(u(R)\), one must divide the amplitude measured on the instrument by its magnification, thus for \(m_{L}=3\) at 100 km \(u=1/2080=0.00048\) mm. The final magnitude of an event is taken as an average from a number of stations surrounding the event.
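The single-station calculation above can be sketched directly from the piecewise approximation of \(\log A_{0}\); this is an illustrative check only, valid for the 0-600 km range covered by the formulae.

```python
import math

def log_A0(R_km: float) -> float:
    """Richter's reference amplitude log10(A0) in mm, approximated
    piecewise as given in the text (valid to ~600 km)."""
    if R_km < 200.0:
        return 0.15 - 1.6 * math.log10(R_km)
    return 3.38 - 3.0 * math.log10(R_km)  # 200-600 km

def local_magnitude(A_mm: float, R_km: float) -> float:
    """Single-station local magnitude: m_L = log10(A) - log10(A0(R))."""
    return math.log10(A_mm) - log_A0(R_km)

# 1 mm zero-to-peak on a Wood-Anderson at 100 km gives m_L close to 3,
# consistent with A0(100 km) ~ 0.001 mm.
print(local_magnitude(1.0, 100.0))
```

The network magnitude is then, as stated above, the average of such station values.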
The response of the Wood-Anderson seismometer, with nearly constant displacement amplification over the frequency range for local earthquakes in California, is implicitly included in Richter’s definition of local magnitude. These instruments were of low sensitivity and capable of recording only horizontal ground motion, hence poorly suited to study local micro-earthquakes with well developed vertical components of ground motion. They have been mostly out of use since the late 1960s, and local magnitudes are now computed from Wood-Anderson equivalent records derived from modern high-gain seismographs (Brune & Allen, 1967; Bakun et al., 1978; Uhrhammer & Collins, 1990).
To cater for earthquakes at larger distances, the magnitude scale was later extended by the introduction of the surface wave magnitude, developed by Gutenberg (1945a): \(m_{S}=\log (u/T)+c_{1}\log \Delta +c_{2}\), where u is the maximum amplitude of Rayleigh waves in micrometres, T is the period, approximately 20 seconds, \(\Delta \) is the distance in degrees, and \(c_{1}\) and \(c_{2}\) are calibration constants. Gutenberg (1945a) originally worked out \(c_{1}=1.656\) and \(c_{2}=1.818\), which applied to Pasadena, and in 1964 the International Association of Seismology and Physics of the Earth’s Interior (IASPEI) adopted \(c_{1}=1.66\) and \(c_{2}=3.3\), proposed by Vanek et al. (1962). The \(m_{S}\) scale is suitable for shallow earthquakes that generate well-developed surface waves and whose source duration is not much greater than 20 seconds.
To cater for earthquakes at all depths, Gutenberg (1945b) developed the body wave magnitude scale. The body wave magnitude is given by \(m_{b}=\log (u/T)+q(\Delta ,h)\), where T is the period associated with the maximum body wave amplitude, measured generally at \(T =\) 1 second, and \(q(\Delta ,h)\) is the calibrating function to correct for the epicentral distance \(\Delta \), depth h, and site effects. The body wave magnitude is more suitable for smaller earthquakes of shorter duration.
The body wave and surface wave magnitudes coincide only for earthquakes of magnitude 6.5; for smaller events \(m_{b}\) is the larger of the two, and for larger events \(m_{S}\) is the larger. Gutenberg and Richter (1956) related the three magnitude scales as follows: \(m_{b}=0.63m_{S}+2.5\) and \(m_{S}=1.27\left (m_{L}-1\right )-0.016m_{L}^{2}\).
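The two conversion formulae can be checked numerically; a minimal sketch (the function names are illustrative):

```python
def mb_from_ms(m_s):
    """Gutenberg & Richter (1956): m_b = 0.63 m_S + 2.5."""
    return 0.63 * m_s + 2.5

def ms_from_ml(m_l):
    """Gutenberg & Richter (1956): m_S = 1.27 (m_L - 1) - 0.016 m_L**2."""
    return 1.27 * (m_l - 1.0) - 0.016 * m_l ** 2

# For a small event m_b exceeds m_S, for a large one the opposite holds:
# mb_from_ms(5.0) = 5.65 > 5.0, while mb_from_ms(7.0) = 6.91 < 7.0.
```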
There is also the \(m_{Lg}\) magnitude defined by Nuttli (1973) for regional distances based on the maximum amplitude of Lg waves, \(m_{Lg}=\log u+0.83\log \Delta +c(\Delta -0.09)\log e+3.81\), where u is in micrometres, \(\Delta \) is in degrees, and c is an attenuation coefficient that differs between regions: \(c=0.07\) for the central USA and \(c=0.53\) for California. Lg waves are of relatively short period, 1 to 6 seconds, have large arrival amplitudes with predominantly transverse motion, and propagate along the surface with velocities close to the average shear velocity in the upper part of the continental crust (Aki & Richards, 2002). The Nuttli magnitude was at one stage used to quote the larger mine related events in Canada.
Before the advent of high dynamic range digital networks, waveforms of larger earthquakes recorded at distances less than 200 km were frequently saturated, and the maximum amplitude could not be measured. To alleviate the problem, Bisztricsany (1958) proposed to determine magnitude from the duration of long period surface waves. A basis for the duration magnitude was later given by Aki (1969), who pointed out that duration is virtually independent of distance within 100 km. One can therefore determine the magnitude of events that are only approximately located. Aki (1969) also suggested that the energy in the coda of signals from local events comes from back-scattered waves.
The duration magnitude \(m_{D}\), determined from short period sensors, can be defined as \(m_{D}=c_{1}\log t_{D}+c_{2}R+c_{3}\), where \(t_{D}\) is the duration of the waveform in seconds measured from the P-wave arrival and R is the distance in km. The total duration of an event is defined as the time interval between the onset of the first arrival and the point at which the signal falls and remains below the background noise level. The duration of ground motion depends, in part, on amplitude and thus may be influenced by site amplification, rock mass attenuation, the radiation pattern of the source, and the background noise level. It is therefore important to estimate the duration magnitude at as many stations surrounding the event as possible, and the final magnitude is taken as an average over the stations. For small events, with signal duration dominated by the S-P interval, the duration magnitude may not be a stable estimate of event size because it does not properly measure the energy back-scattered from distant points. In such cases, it is recommended to measure duration from the arrival of the S-wave. In most cases quoted in the literature, the dependence of the duration magnitude on distance is rather weak. Today, the direct estimate of magnitude is performed routinely by national or regional seismological networks equipped with low-frequency sensors able to quantify larger earthquakes.
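A sketch of the duration measurement and of \(m_{D}\) as defined above. The calibration constants \(c_{1}\), \(c_{2}\), \(c_{3}\) are region- and network-specific and are not given in the text, so the default values below are placeholders for illustration only.

```python
import math

def signal_duration(trace, noise_level, p_index, fs):
    """t_D in seconds: from the P arrival (sample p_index) to the last
    sample that still exceeds the background noise level, after which
    the signal remains below it."""
    last = 0
    for i in range(p_index, len(trace)):
        if abs(trace[i]) > noise_level:
            last = i - p_index
    return last / fs

def duration_magnitude(t_d, r_km, c1=2.0, c2=0.0035, c3=-0.87):
    """m_D = c1*log10(t_D) + c2*R + c3; c1..c3 are placeholder values."""
    return c1 * math.log10(t_d) + c2 * r_km + c3
```

In practice the final \(m_{D}\) is the average of the per-station estimates, as with the local magnitude.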
In most areas of the world, the low range of magnitudes of seismic events recorded in mines is not well covered by the regional or national seismological networks, and it is difficult to calibrate their records against local magnitudes. The modern seismic systems deployed in mines derive radiated energy and seismic potency, or seismic moment, from recorded waveforms. Since the underlying intention of different magnitude definitions is to offer an equivalent measure to earthquakes radiating the same amount of energy, one would like to scale the computed radiated energy with magnitude. Gutenberg and Richter (1956) derived an empirical relationship between the radiated seismic energy E, in joules, and the surface wave magnitude of larger earthquakes, \(\log E=1.5m+4.8\) (1.2).
Choy and Boatwright (1995) calculated the radiated energy of 397 earthquakes with \(m\geq \)5.8 by integrating the path and radiation pattern corrected velocity-squared spectra and, assuming the slope 1.5, the least-squares regression fit yielded \(\log E=1.5m+4.4\). This indicates that, on average, the Gutenberg-Richter formula (1.2) may overestimate the radiated energy by a factor of 2.5. According to Eq. (1.2), a unit increase in magnitude corresponds to an approximately 30-fold increase in radiated energy. Although the relation (1.2) is empirical, the implied scaling \(m\sim \frac {2}{3}\log E\) can be explained for most moderate to large earthquakes in terms of a simple dislocation model with a constant stress drop, while very small earthquakes are likely to satisfy \(m\sim \log E\) (Haskell, 1964; Kanamori & Anderson, 1975).
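The factor-of-2.5 discrepancy and the roughly 30-fold energy step per magnitude unit follow directly from the two regressions; a quick numerical check (E in joules):

```python
def energy_gr(m):
    """Gutenberg-Richter, Eq. (1.2): log10 E = 1.5 m + 4.8."""
    return 10.0 ** (1.5 * m + 4.8)

def energy_cb(m):
    """Choy & Boatwright (1995): log10 E = 1.5 m + 4.4."""
    return 10.0 ** (1.5 * m + 4.4)

# energy_gr(m) / energy_cb(m) = 10**0.4, about 2.5, for any m, and
# energy_gr(m + 1) / energy_gr(m) = 10**1.5, about 31.6.
```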
The radiated seismic energy of the P- or S-wave is proportional to the integral of the radiation pattern corrected far-field velocity pulse squared \(\dot {u}^{2}(t)\) of duration \(t_{s}\) and, in the frequency domain, to the square of the velocity power spectrum of the P- or S-wave \(V_{P,S}^{2}(f)\): \(E_{P,S}=4\pi \rho v_{P,S}R^{2}\int _{0}^{t_{s}}\dot {u}^{2}(t)\,dt\),
where \(\rho \) is the rock density, \(v_{P,S}\) is the P- or S-wave velocity, and R is the distance from the source. The total radiated energy is \(E=E_{P}+E_{S}\). The computation of the radiated seismic energy from waveforms requires a wide frequency bandwidth of the monitoring system, preferably from 0.2\(\cdot f_{0}\) to 10\(\cdot f_{0}\), where \(f_{0}\) is the predominant frequency of a given event. Such bandwidth is frequently not available for smaller events, either due to the limited capabilities of the sensors used or due to the insufficient sampling frequency of the data acquisition units. Moreover, the rate at which seismic events are produced in mines does not always allow for careful, time-consuming processing with proper corrections for attenuation and site effects. As a result, the radiated energy of smaller events is rather underestimated, in some cases by up to a factor of 10, which corresponds to an error in magnitude of 0.7 units. In general, the energy-based magnitude scale struggles to represent small and large events adequately. The energy measure of the strength of an earthquake, \(K=\log E\), was adopted in Russia (Rautian, 1960) and then in Polish coal mines (Gibowicz, 1963; Wierzchowska & Dubinski, 1973).
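As a sketch of the time-domain route, the integral of the velocity pulse squared can be discretized with the trapezoidal rule. The \(4\pi \rho v R^{2}\) prefactor assumed here corresponds to radiation-pattern-corrected far-field amplitudes with spherical spreading; real processing would also apply the attenuation and site corrections noted above.

```python
import math

def radiated_energy(vel, dt, rho, v, r):
    """E = 4*pi*rho*v*R^2 * integral of udot^2 dt (trapezoidal rule).
    vel: radiation-pattern-corrected far-field velocity samples [m/s],
    dt: sample interval [s], rho: density [kg/m^3], v: wave speed [m/s],
    r: hypocentral distance [m]."""
    integral = sum(0.5 * (a * a + b * b) * dt for a, b in zip(vel, vel[1:]))
    return 4.0 * math.pi * rho * v * r ** 2 * integral
```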
The energy available for seismic radiation during shear dislocation \(\bar {u}\) over the area A driven by the average stress \(\bar {\sigma }\) can be approximated by \(E=\xi \bar {\sigma }P=\xi \bar {\sigma }M/\mu \), where \(\xi \) is the seismic efficiency. The product \(\xi \bar {\sigma }\) is called the apparent stress, \(\sigma _{A}=E/P\) (Aki, 1966), which is a model independent measure of stress release at the source and does not depend on rigidity. Seismic sources similar in terms of their potency may differ by up to two orders of magnitude in radiated energies, reflecting stress differences at the place and at the time of their occurrence (Mendecki, 1993). Comparing \(E=\sigma _{A}P=\sigma _{A}M/\mu \) with \(m = \left ({2}/{3}\right )\log E-3.2\) gives \(m = \left ({2}/{3}\right )\left (\log P+\log \sigma _{A}\right )-3.2\), which, assuming a constant apparent stress for larger earthquakes of 1.5 MPa and rigidity \(\mu = 30\) GPa, gives \(m=\left ({2}/{3}\right )\log P+0.92\),
which is the magnitude-moment or magnitude-potency relation (Hanks & Kanamori, 1979).
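Under the constant apparent stress assumption, the magnitude-potency relation can be evaluated directly; a minimal sketch (potency in m³, moment in N·m, stresses in Pa):

```python
import math

def magnitude_from_potency(p, sigma_a=1.5e6):
    """m = (2/3)(log10 P + log10 sigma_A) - 3.2, with sigma_A = 1.5 MPa."""
    return (2.0 / 3.0) * (math.log10(p) + math.log10(sigma_a)) - 3.2

def magnitude_from_moment(m0, mu=30e9):
    """Same relation expressed through the moment, M = mu * P, mu = 30 GPa."""
    return magnitude_from_potency(m0 / mu)

# magnitude_from_moment reproduces the Hanks-Kanamori scale,
# m ~ (2/3) log10 M0 - 6.07, to within rounding.
```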
The assumption of constant apparent stress implies a slope of 1.0 on the \(\log E\) versus \(\log P\) plot, which is not always supported by data. In a more general case, \(\log E =d\log P+c\), where d is the slope and c is the intercept that measures the \(\log E\) released by a seismic event with \(\log P=0\). In such a case, apparent stress scales with potency as \(\log \sigma _{A}=(d-1)\log P+c\), and for \(d=1\), apparent stress is independent of potency, \(\sigma _{A}=10^{c}\). For \(d>1\), apparent stress increases with increasing seismic potency. By combining \(\log E = 1.5m + 4.8\) with \(\log E =d\log P+c\), one obtains a more general magnitude-potency relation \(m=\left ({2}/{3}\right )\left (d\log P+c\right )-3.2\).
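Combining the two expressions as above gives, for any fitted slope d and intercept c (a minimal sketch):

```python
def magnitude_general(log_p, d, c):
    """m = (2/3)(d*log10 P + c) - 3.2, from log E = d log P + c and Eq. (1.2)."""
    return (2.0 / 3.0) * (d * log_p + c) - 3.2

def log_apparent_stress(log_p, d, c):
    """log10 sigma_A = (d - 1) log10 P + c."""
    return (d - 1.0) * log_p + c

# With d = 1.5 and c = 5.22, the fit quoted for the deep South African
# gold mines, the relation reduces to m = log10 P + 0.28.
```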
The slope d of the \(\log E\) vs. \(\log P\) plot of events recorded in mines is frequently reported to be higher than 1.0. Spottiswoode and McGarr (1975) calculated seismic moments, radiated energies and magnitudes of 24 events associated with deep level mining on the East Rand in South Africa from analog waveforms recorded at a surface site above the mine. They confirmed the relation (1.2) for events in the magnitude range \(0 < m < 3\) and reported \(\log M=1.2m+10.7\), which, for \(\mu =30\) GPa, gives \(m=0.833\log P-0.186\) and, together with Eq. (1.2), gives \(\log E=1.25\log P+4.52\). Mendecki (1993) and van Aswegen and Butler (1993) analysed high dynamic range digital waveforms of thousands of events in the range \(-0.5 < m < 3.5\), recorded underground at the West Rand, Klerksdorp, and Welkom gold mines. They reported \(\log E=1.5(\pm 0.1)\log (\mu P)-10.5(\pm 0.5)\), which, for \(\mu = 30\) GPa, gives \(\log E=1.5\log P+5.22\) and \(m=\log P+0.28\).
A similar relation, \(m=\log P+0.72\), was obtained by Ben-Zion and Zhu (2002) for small earthquakes, \(m<3.5\), recorded by Abercrombie (1996) with 4.5 Hz sensors located in a deep borehole in California. For data sets with events in the magnitude range \(1.0<m<6.0\), recorded by the TERRAscope/TriNet network, the linear fit is \(m=0.74\log P+0.98\), but the best overall fit is \(m=4.04\sqrt {\log P+4.86}-8.07\) (Ben-Zion & Zhu, 2002). This indicates that a non-linear term is required to describe the scaling between potency and magnitude values over a broad range of sizes with a single relation; a linear potency-magnitude scaling relation can approximate only a limited range of data. The local magnitudes used by Ben-Zion and Zhu (2002) were determined independently by the Southern California Short Period Network.
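The two Ben-Zion and Zhu (2002) fits can be compared numerically; the divergence of the linear fit outside its calibration range is what motivates the non-linear term:

```python
import math

def m_linear(log_p):
    """Linear TriNet fit: m = 0.74 log10 P + 0.98 (limited range)."""
    return 0.74 * log_p + 0.98

def m_nonlinear(log_p):
    """Best overall fit: m = 4.04 * sqrt(log10 P + 4.86) - 8.07."""
    return 4.04 * math.sqrt(log_p + 4.86) - 8.07
```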
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Abercrombie, R. E. (1996). The magnitude-frequency distribution of earthquakes recorded with deep seismometers at Cajon Pass. Tectonophysics, 261, 1–7.
Aki, K. (1966). Generation and propagation of G waves from the Niigata earthquake of June 16, 1964. Part 2: Estimation of earthquake moment, released energy, and stress strain drop from the G-wave spectrum. Bulletin Earthquake Research Institute Tokyo University, 44, 73–88.
Aki, K. (1967). Scaling law of seismic spectrum. Journal of Geophysical Research, 72, 1217–1231.
Aki, K. (1969). Analysis of the seismic coda of local earthquakes as scattered waves. Journal of Geophysical Research, 74(2), 615–631.
Aki, K., & Richards, P. G. (2002). Quantitative seismology (2nd ed.). University Science Books.
Al-Kindy, F. H., & Main, I. G. (2003). Testing self-organized criticality in the crust using entropy: A regionalized study of the CMT global earthquake catalogue. Journal of Geophysical Research, 108(B11), 2521. https://doi.org/10.1029/2002JB002230.
Anderson, J. G., & Biasi, G. P. (2016). What is the basic assumption for probabilistic seismic hazard assessment? Seismological Research Letters, 87(2A), 323–326. https://doi.org/10.1785/0220150232.
Arrowsmith, S. J., Trugman, D. T., MacCarthy, J., Bergen, K. J., Lumley, D., & Magnani, B. (2022). Big data seismology. Reviews of Geophysics, 60(2), e2021RG000769. https://doi.org/10.1029/2021RG000769.
Bak, P., Tang, C., & Wiesenfeld, K. (1987). Self-organized criticality: An explanation of 1/f noise. Physical Review Letters, 59, 381–384.
Bakun, W. H., Houck, S. T., & Lee, W. H. K. (1978). A direct comparison of “synthetic” and actual Wood-Anderson seismograms. Bulletin of the Seismological Society of America, 68(4), 1199–1202.
Barnsley, M. F. (1988). Fractals everywhere. Academic Press.
Ben-Zion, Y., & Zhu, L. (2002). Potency-magnitude scaling relation for southern California earthquakes with \(1.0<M<7.0\). Geophysical Journal International, 148, F1–F5.
Bisztricsany, E. (1958). A new method for the determination of the magnitude of earthquakes. Geophys. Kozlemen, 7, 69–96.
Bottiglieri, M., Lippiello, E., Godano, C., & de Arcangelis, L. (2009). Identification and spatiotemporal organization of aftershocks. Journal of Geophysical Research, 114(B03303), 1–12. https://doi.org/10.1029/2008JB005941.
Brune, J. N. (1970). Tectonic stress and the spectra of seismic shear waves from earthquakes. Journal of Geophysical Research, 75(26), 4997–5009.
Brune, J. N., & Allen, C. R. (1967). A microearthquake survey of the San Andreas fault system in southern California. Bulletin of the Seismological Society of America, 57, 277–296.
Brune, J. N., Archuleta, R. J., & Hartzell, S. (1979). Far-field S-wave spectra, corner frequencies, and pulse shapes. Journal of Geophysical Research, 84(B5), 2262–2272. https://doi.org/10.1029/JB084iB05p02262.
Choy, G. L., & Boatwright, J. L. (1995). Global patterns of radiated seismic energy and apparent stress. Journal of Geophysical Research, 100(B9), 18,205–18,228.
Durrheim, R. J., Spottiswoode, S. M., Roberts, M. K. C., & van Z. Brink, A. (2005). Comparative seismology of the Witwatersrand Basin and Bushveld Complex and emerging technologies to manage the risk of rockbursting. The Journal of The South African Institute of Mining and Metallurgy, 105, 409–416.
du Toit, H. J., Goldswain, G., & Olivier, G. (2022). Can DAS be used to monitor mining induced seismicity? International Journal of Rock Mechanics and Mining Sciences, 155, 105127. https://doi.org/10.1016/j.ijrmms.2022.105127.
Eshelby, J. D. (1957). The determination of the elastic field of an ellipsoidal inclusion and related problems. Proceedings of the Royal Society of London, Series A, Mathematical and Physical Sciences, 241(1226), 376–396.
Frisch, U., & Sornette, D. (1997). Extreme deviations and applications. Journal de Physique I, 7(9), 1155–1171.
Gardner, J. K., & Knopoff, L. (1974). Is the sequence of earthquakes in Southern California, with aftershocks removed, Poissonian? Bulletin of the Seismological Society of America, 64(5), 1363–1367.
Gay, N. C., & Ortlepp, W. D. (1978). Anatomy of a mining-induced fault zone. Geological Society of America Bulletin, 90, 47–58.
Gibowicz, S. (1963). Magnitude and energy of subterrane shocks in Upper Silesia. Studia Geophysica et Geodaetica, 7(1), 1–37.
Gutenberg, B. (1945a). Amplitudes of surface waves and magnitudes of shallow earthquakes. Bulletin of the Seismological Society of America, 35, 3–12.
Gutenberg, B. (1945b). Amplitudes of P, PP and S and magnitudes of shallow earthquakes. Bulletin of the Seismological Society of America, 35, 57–69.
Gutenberg, B., & Richter, C. F. (1956). Earthquake magnitude, intensity, energy, and acceleration: (Second paper). Bulletin of the Seismological Society of America, 46(2), 105–145.
Handley, M. F., de Lange, J. A. J., Essrich, F., & Banning, J. A. (2000). A review of the sequential grid mining method employed at Elandsrand Gold Mine. The Journal of The Southern African Institute of Mining and Metallurgy, 100(3), 157–168.
Hanks, T. C., & Kanamori, H. (1979). A moment magnitude scale. Journal of Geophysical Research, 84, 2348–2350.
Haskell, N. (1964). Total energy and energy spectral density of elastic wave radiation from propagating faults. Bulletin of the Seismological Society of America, 56, 1811–1842.
Hogarth, R. (1975). Cognitive processes and the assessment of subjective probability distribution. Journal of the American Statistical Association, 70(350), 271–289. https://doi.org/10.2307/2285808.
Jensen, H. J. (1998). Self-organized criticality: Emergent complex behavior in physical and biological systems. Cambridge Lecture Notes in Physics 10 (1st ed.). Cambridge University Press.
Johnson, L. R., & Sammis, C. G. (2001). Effects of rock damage on seismic waves generated by explosions. Pure and Applied Geophysics, 158, 1869–1908. https://doi.org/10.1007/PL00001136.
Johnson, S. E., Song, W. J., Vel, S. S., Song, B. R., & Gerbi, C. C. (2021). Energy partitioning, dynamic fragmentation, and off-fault damage in the earthquake source volume. Journal of Geophysical Research: Solid Earth, 126, e2021JB022616. https://doi.org/10.1029/2021JB022616.
Johnston, C. W. (2017). Stress modulation of earthquakes: A study of long and short period stress perturbations and the crustal response, Ph.D. thesis. University of California, Berkeley.
Kanamori, H., & Anderson, D. L. (1975). Theoretical basis of some empirical relations in seismology. Bulletin of the Seismological Society of America, 65(5), 1073–1095.
Keilis-Borok, V. I., Bessanova, E. N., Gotsadze, O. D., Kirilova, I. V., Kogan, S. D., Kikhtikova, T. I., Malinovskaya, C. N., Pavola, G. I., & Sarskii, A. A. (1960). Investigation of the mechanism of earthquakes (English translation). American Geophysical Union, Consultants Bureau, New York.
Kijko, A., Funk, C. F., & Brink, A. v Z. (1993). Identification of anomalous patterns in time dependent mine seismicity. In R. P. Young (Ed.), Proceedings 3rd International Symposium on Rockburst and Seismicity in Mines, Kingston, ON, Canada (pp. 205–210).
Kulkarni, R. B., Youngs, R. R., & Coppersmith, K. J. (1984). Assessment of confidence intervals for results of seismic hazard analysis. In 8th World Conference on Earthquake Engineering, San Francisco (pp. 263–270).
Kurzon, I., Lyakhovsky, V., & Ben-Zion, Y. (2019). Dynamic rupture and seismic radiation in a damage–breakage rheology model. Pure and Applied Geophysics, 176, 1003–1020. https://doi.org/10.1007/s00024-018-2060-1.
Lindsey, N. J., & Martin, E. R. (2021). Fibre-optic seismology. Annual Review of Earth and Planetary Sciences, 49, 309–336.
Luo, B., Jin, G., & Stanek, F. (2021). Near-field strain in distributed acoustic sensing-based microseismic observation. Geophysics, 86(5), 1SO–Z1. https://doi.org/10.1190/geo2021-0031.1.
Lyakhovsky, V., Ben-Zion, Y., Ilchev, A., & Mendecki, A. (2016). Dynamic rupture in a damage-breakage rheology model. Geophysical Journal International, 206, 1126–1143. https://doi.org/10.1093/gji/ggw183.
Mandelbrot, B. B. (1967). How long is the coast of Britain? Statistical self-similarity and fractional dimension. Science, 156, 636–638.
McGarr, A., Simpson, D., & Seeber, L. (2002). Case histories of induced and triggered seismicity. In W. H. K. Lee, H. Kanamori, P. C. Jennings, & C. Kisslinger (Eds.), International Handbook of Earthquake and Engineering Seismology (pp. 647–661). Academic Press.
McGuire, R. K. (2001). Deterministic vs. probabilistic earthquake hazards and risks. Soil Dynamics and Earthquake Engineering, 21, 377–384.
Mendecki, A. J. (1993). Real time quantitative seismology in mines: Keynote Address. In R. P. Young (Ed.), Proceedings 3rd International Symposium on Rockbursts and Seismicity in Mines, Kingston, ON, Canada (pp. 287–295). Balkema.
Mendecki, A. J. (1997a). Seismic monitoring in mines (1st ed., 262 pp.). Chapman and Hall.
Mendecki, A. J. (1997b). Principles of monitoring seismic rock mass response to mining: Keynote Address. In S. J. Gibowicz & S. Lasocki (Eds.), Proceedings 4th International Symposium on Rockbursts and Seismicity in Mines, Krakow, Poland (pp. 69–80). Balkema.
Mendecki, A. J. (2001). Data-driven understanding of seismic rock mass response to mining: Keynote Address. In G. van Aswegen, R. J. Durrheim, & W. D. Ortlepp (Eds.), Proceedings 5th International Symposium on Rockbursts and Seismicity in Mines, Johannesburg, South Africa (pp. 1–9). South African Institute of Mining and Metallurgy.
Mendecki, A. J. (2005). Persistence of seismic rock mass response to mining. In Y. Potvin & M. R. Hudyma (Eds.), Proceedings 6th International Symposium on Rockburst and Seismicity in Mines, Perth, Australia (pp. 97–105). Australian Centre for Geomechanics.
Mendecki, A. J. (2012). Size distribution of seismic events in mines. In Proceedings of the Australian Earthquake Engineering Society 2012 Conference, Queensland (pp. 1–20).
Mendecki, A. J. (2013a). Characteristics of seismic hazard in mines: Keynote Lecture. In A. Malovichko & D. A. Malovichko (Eds.), Proceedings 8th International Symposium on Rockbursts and Seismicity in Mines, St Petersburg-Moscow, Russia (pp. 275–292). ISBN 978-5-903258-28-4.
Mendecki, A. J. (2013b). Frequency range, logE, logP and magnitude. In A. Malovichko & D. A. Malovichko (Eds.), Proceedings 8th International Symposium on Rockbursts and Seismicity in Mines, St Petersburg-Moscow, Russia (pp. 167–173). ISBN 978-5-903258-28-4.
Mendecki, A. J. (2016). Mine seismology reference book: Seismic Hazard (1st ed.). Institute of Mine Seismology. ISBN 978-0-9942943-0-2. www.imseismology.org/msrb/.
Mendecki, A. J., & Lötter, E. C. (2011). Modelling seismic hazard for mines. In Australian Earthquake Engineering Society 2011 Conference, Barossa Valley.
Mendecki, A. J., van Aswegen, G., Brown, J. N. R., & Hewlett, P. (1988). The Welkom seismological network. In C. Fairhurst (Ed.), 3rd International Symposium on Rockbursts and Seismicity in Mines, 08–10 June 1988, MN, USA (pp. 237–244), Balkema, 1990.
Mendecki, A. J., van Aswegen, G., & Mountfort, P. (1999). A guide to routine seismic monitoring in mines. In A. J. Jager & J. A. Ryder (Eds.), A Handbook on Rock Engineering Practice for Tabular Hard Rock Mines (chap. 9, pp. 287–309). The Safety in Mines Research Advisory Committee.
Mizrahi, L., Nandan, S., & Wiemer, S. (2021). The effect of declustering on the size distribution of mainshocks. Seismological Research Letters, 92(4), 2333–2342.
Mountfort, P., & Mendecki, A. J. (1997). Seismic transducers. In A. J. Mendecki (Ed.), Seismic monitoring in mines (1st ed., chap. 1, pp. 1–20). Chapman and Hall.
Mulargia, F., Stark, P. B., & Geller, R. J. (2017). Why is probabilistic seismic hazard analysis (PSHA) still used? Physics of the Earth and Planetary Interiors, 264, 63–75.
Nicolis, G., & Prigogine, I. (1977). Self-organization in nonequilibrium systems. From dissipative structures to order through fluctuations (491 pp.). John Wiley and Sons.
Nuttli, O. W. (1973). Seismic wave attenuation and magnitude relations for eastern North America. Journal of Geophysical Research, 78, 876–885.
Oreskes, N., Shrader-Frechette, K., & Belitz, K. (1994). Verification, validation, and confirmation of numerical models in the earth sciences. Science, 263, 641–646.
Prigogine, I. (1980). From being to becoming. Time and complexity in the physical sciences. W. H. Freeman and Company.
Rautian, T. G. (1960). Earthquake energy. Transaction of the Joint Institute of Physics of the Earth, 9, 35–114.
Richter, C. F. (1935). An instrumental earthquake magnitude scale. Bulletin of the Seismological Society of America, 25, 1–32.
Sammis, C. G., Rosakis, A. J., & Bhat, H. S. (2009). Effects of off-fault damage on earthquake rupture propagation: Experimental studies. Pure and Applied Geophysics, 166, 1629–1648. https://doi.org/10.1007/s00024-009-0512-3.
Sonley, E., & Atkinson, G. M. (2005). Empirical relationship between moment magnitude and Nuttli magnitude for small-magnitude earthquakes in Southeastern Canada. Seismological Research Letters, 76(6), 752–755.
Spottiswoode, S. M., & McGarr, A. (1975). Source parameters of tremors in a deep-level gold mine. Bulletin of the Seismological Society of America, 65(1), 93–112.
Trugman, D. T., Fang, L., Ajo-Franklin, J., Nayak, A., & Li, Z. (2022). Preface to the focus section on big data problems in seismology. Seismological Research Letters, 93(5), 2423–2425.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Uhrhammer, R. A., & Collins, E. R. (1990). Synthesis of Wood-Anderson seismograms from broadband digital records. Bulletin of the Seismological Society of America, 80(3), 702–716.
UNDRO. (1979). Natural disasters and vulnerability analysis. Report of expert group meeting. UNDRO - United Nations Disaster Relief Coordinator, Geneva.
van Aswegen, G., & Butler, A. G. (1993). Applications of quantitative seismology in South African gold mines. In R. P. Young (Ed.), Proceedings 3rd International Symposium on Rockbursts and Seismicity in Mines, Kingston, ON, Canada (pp. 261–266). Balkema. ISBN 90 5410320 5.
van Aswegen, G., & Mendecki, A. J. (1999). Mine layout, geological features and seismic hazard. Final report gap 303. Safety in Mines Research Advisory Committee, South Africa (pp. 1–91).
Vanek, J., Zapotek, A., Karnik, V., Kondorskaya, N. V., Riznichenko, Y. V., Savarensky, E. F., Soloviev, S. L., & Shebalin, N. V. (1962). Standardization of magnitude scales. Izvestiya, Academy Nauka SSSR, Geophysics, 2, 153–158.
Vick, S. G. (2002). Degrees of belief: Subjective probability and engineering judgment. American Society of Civil Engineers Press.
Vieira, F. M. C. C., Diering, D. H., & Durrheim, R. J. (2001). Methods to mine the ultra-deep tabular gold-bearing reefs of the Witwatersrand Basin, South Africa. In W. A. Hustrulid & R. L. Bullock, Underground mining methods: Engineering fundamentals and international case studies (pp. 691–704). Society for Mining, Metallurgy and Exploration, Inc (SME).
Wadati, K. (1931). Shallow and deep earthquakes. Geophysical Magazine, Tokyo, 4, 231–283.
Wierzchowska, Z., & Dubinski, J. (1973). Methods to calculate energy of seismic events in Upper Silesia (in Polish). Report, Central Mining Institute, Katowice, Poland.
Zhang, M., et al. (2022). LOC-FLOW: An end-to-end machine learning-based high-precision earthquake location workflow. Seismological Research Letters, 93(5), 2426–2438. https://doi.org/10.1785/0220220019.