2005 | Book

Statistical Seismology

About this book

Statistical Seismology aims to bridge the gap between physics-based and statistics-based models. This volume provides a combination of reviews, methodological studies, and applications, which point to promising efforts in this field. The volume will be useful to students and professional researchers alike who are interested in stochastic modeling, both for probing the nature of earthquake phenomena and as an essential ingredient of earthquake forecasting.

Table of Contents

Statistical Seismology

Stochastic models with an increasing component of physical reasoning have been slowly gaining acceptance over the last two decades. The subject of statistical seismology aims to bridge the gap between physics-based models without statistics, and statistics-based models without physics. This volume, which is based largely on papers presented at the 3rd International Workshop on Statistical Seismology, held in Juriquilla, Mexico in May 2003, may serve to illustrate the range of issues now coming under the statistical seismology heading. While the papers presented may not solve the problem of bridging the gap, they indicate routes by which it is being approached.

The Role of Heterogeneities as a Tuning Parameter of Earthquake Dynamics

We investigate the influence of spatial heterogeneities on various aspects of brittle failure and seismicity in a model of a large strike-slip fault. The model dynamics is governed by realistic boundary conditions consisting of constant velocity motion of regions around the fault, static/kinetic friction laws, creep with depth-dependent coefficients, and 3-D elastic stress transfer. The dynamic rupture is approximated on a continuous time scale using a finite stress propagation velocity (“quasidynamic model”). The model produces a “brittle-ductile” transition at a depth of about 12.5 km, realistic hypocenter distributions, and other features of seismicity compatible with observations. Previous work suggested that the range of size scales in the distribution of strength-stress heterogeneities acts as a tuning parameter of the dynamics. Here we test this hypothesis by performing a systematic parameter-space study with different forms of heterogeneities. In particular, we analyze spatial heterogeneities that can be tuned by a single parameter in two distributions: (1) high stress drop barriers in near-vertical directions and (2) spatial heterogeneities with fractal properties and variable fractal dimension. The results indicate that the first form of heterogeneities provides an effective means of tuning the behavior while the second does not. In relatively homogeneous cases, the fault self-organizes into large-scale patches and big events are associated with inward failure of individual patches and sequential failures of different patches. The frequency-size event statistics in such cases are compatible with the characteristic earthquake distribution and large events are quasi-periodic in time. In strongly heterogeneous or near-critical cases, the rupture histories are highly discontinuous and consist of complex migration patterns of slip on the fault. In such cases, the frequency-size and temporal statistics follow approximately power-law relations.
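
The second family of heterogeneities above, a random field with tunable fractal dimension, can be generated by spectral synthesis. The sketch below is a minimal illustration of that general technique, not the authors' construction; the dimension relation quoted in the comment follows one common convention for fractional Brownian surfaces.

    import numpy as np

    def fractal_field(n, beta, seed=0):
        """2-D random field with isotropic power spectrum ~ k**(-beta).
        Larger beta gives a smoother field; for a fractional Brownian
        surface one common convention relates the fractal dimension to
        the spectral exponent by D = (8 - beta) / 2."""
        rng = np.random.default_rng(seed)
        k = np.fft.fftfreq(n)
        kx, ky = np.meshgrid(k, k)
        kk = np.hypot(kx, ky)
        kk[0, 0] = np.inf                          # suppress the zero mode
        amplitude = kk ** (-beta / 2.0)            # amplitude ~ sqrt(power)
        phase = np.exp(2j * np.pi * rng.uniform(size=(n, n)))
        field = np.fft.ifft2(amplitude * phase).real
        return (field - field.mean()) / field.std()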

Aftershock Statistics

The statistical properties of aftershock sequences are associated with three empirical scaling relations: (1) Gutenberg-Richter frequency-magnitude scaling, (2) Båth’s law for the magnitude of the largest aftershock, and (3) the modified Omori’s law for the temporal decay of aftershocks. In this paper these three laws are combined to give a relation for the aftershock decay rate that depends on only a few parameters. This result is used to study the temporal properties of aftershock sequences of several large California earthquakes. A review of different mechanisms and models of aftershocks is also given. The scale invariance of the process of stress transfer caused by a main shock and the heterogeneous medium in which aftershocks occur are responsible for the occurrence of scaling laws. We suggest that the observed partitioning of energy could play a crucial role in explaining the physical origin of Båth’s law. We also study the stress relaxation process in a simple model of damage mechanics and find that the rate of energy release in this model is identical to the rate of aftershock occurrence described by the modified Omori’s law.
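
As a concrete illustration of how the three laws combine, the sketch below evaluates one standard form of the combined aftershock rate, in the spirit of the construction described above though the paper's exact parameterization may differ. The normalization is an assumption of this sketch: the expected number of aftershocks at or above the Båth magnitude m_main − Δm is set to one.

    import numpy as np

    def aftershock_rate(t, m, m_main, b=1.0, dm=1.2, c=0.1, p=1.2):
        """Rate (per unit time) of aftershocks with magnitude >= m at time t
        after a mainshock of magnitude m_main, combining Gutenberg-Richter
        scaling (b), Bath's law (dm) and the modified Omori law (c, p).

        Normalization (an assumption of this sketch): the expected number
        of aftershocks at or above the Bath magnitude m_main - dm is one,
        using that the integral of (t + c)**-p over t > 0 is
        c**(1 - p) / (p - 1)."""
        m_bath = m_main - dm                      # Bath's law: largest aftershock
        amp = (p - 1.0) * c ** (p - 1.0)
        return amp * 10.0 ** (-b * (m - m_bath)) / (t + c) ** p

    # Example: daily rate of m >= 3 aftershocks one day after an m 7
    # mainshock (times in days): aftershock_rate(1.0, 3.0, 7.0)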

Stochastic Branching Models of Fault Surfaces and Estimated Fractal Dimensions

We discuss simulations of nonplanar fault structures for a variant of the geometric stochastic branching model of Kagan (1982) and perform fractal analyses with 2-D and 3-D box-counting methods on the simulated structures. One goal is to clarify the assumptions associated with the geometric stochastic branching model and the conditions for which it may provide a useful tool in the context of earthquake faults. The primary purpose is to determine whether typical fractal analyses of observed earthquake data are likely to provide an adequate description of the underlying geometrical properties of the structure. The results suggest that stochastic branching structures are more complicated and quite distinct from the mathematical objects that have been used to develop fractal theory. The two families of geometrical structures do not share all of the same generalizations, and observations related to one cannot be used directly to make inferences on the other as has frequently been assumed. The fractal analyses indicate that it is incorrect to infer the fractal dimension of a complex volumetric fault structure from a cross-section such as a fault trace, from projections such as epicenters, or from a sparse number of representative points such as hypocenter distributions.
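
A box-counting estimate of the kind discussed above can be sketched in a few lines; this is a generic 3-D implementation with caller-supplied box sizes, not the authors' code.

    import numpy as np

    def box_counting_dimension(points, scales):
        """Box-counting dimension of a 3-D point set.

        points: (n, 3) array of coordinates; scales: box sizes, same units.
        Returns the slope of log N(s) versus log(1/s), where N(s) is the
        number of occupied boxes of side s.  Sparse samples (e.g., a few
        hypocenters) can badly underestimate the dimension of the
        underlying structure, which is the caveat raised above."""
        points = np.asarray(points, dtype=float)
        origin = points.min(axis=0)
        counts = []
        for s in scales:
            idx = np.floor((points - origin) / s).astype(int)  # box index per point
            counts.append(len(np.unique(idx, axis=0)))         # occupied boxes
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales)), np.log(counts), 1)
        return slope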

Power-law Distributions of Offspring and Generation Numbers in Branching Models of Earthquake Triggering

We consider a general stochastic branching process, which is relevant to earthquakes as well as to many other systems, and we study the distributions of the total number of offspring (direct and indirect aftershocks in seismicity) and of the total number of generations before extinction. We apply our results to a branching model of triggered seismicity, the ETAS (epidemic-type aftershock sequence) model. The ETAS model assumes that each earthquake can trigger other earthquakes (“aftershocks”). An aftershock sequence results in this model from the cascade of aftershocks of each past earthquake. Due to the large fluctuations of the number of aftershocks triggered directly by any earthquake (“fertility”), there is a large variability of the total number of aftershocks from one sequence to another, for the same mainshock magnitude. We study the regime in which the distribution of fertilities μ is characterized by a power law ∼1/μ^{1+γ}. For earthquakes we expect such a power-law distribution of fertilities with γ = b/α, based on the Gutenberg-Richter magnitude distribution ∼10^{−bm} and on the increase ∼10^{αm} of the number of aftershocks with the mainshock magnitude m. We derive the asymptotic distributions p_r(r) and p_g(g) of the total number r of offspring and of the total number g of generations until extinction following a mainshock. In the regime γ < 2, for which the distribution of fertilities has an infinite variance, we find $$p_r(r) \sim 1/r^{1 + 1/\gamma}$$ and $$p_g(g) \sim 1/g^{1 + 1/(\gamma - 1)}$$. This should be compared with the distributions $$p_r(r) \sim 1/r^{1 + 1/2}$$ and $$p_g(g) \sim 1/g^{2}$$ obtained for standard branching processes with finite variance. These predictions are checked by numerical simulations. Our results apply directly to the ETAS model, whose preferred values α = 0.8−1 and b = 1 put it in the regime where the distribution of fertilities has an infinite variance. More generally, our results apply to any stochastic branching process with a power-law distribution of offspring per mother.
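
The cascade statistics described above are straightforward to reproduce numerically. The sketch below simulates one branching cascade with Gutenberg-Richter magnitudes and productivity ∼10^{αm}, so the fertility tail exponent is γ = b/α; the parameter values are illustrative assumptions, with the branching ratio set below one so cascades terminate.

    import numpy as np

    rng = np.random.default_rng(0)
    b, alpha, n_branch, m0 = 1.0, 0.9, 0.9, 3.0   # b-value, productivity, branching ratio, cutoff
    beta = b * np.log(10.0)                        # GR magnitudes: m - m0 ~ Exp(beta)
    kappa = n_branch * (b - alpha) / b             # fixes the mean number of direct offspring

    def cascade(max_events=10**6):
        """Total offspring r and number of generations g of one cascade."""
        r, g = 0, 0
        mags = m0 + rng.exponential(1.0 / beta, size=1)        # the ancestor
        while len(mags) and r < max_events:
            # each event of magnitude m triggers Poisson(kappa * 10**(alpha*(m - m0))) offspring
            k = rng.poisson(kappa * 10.0 ** (alpha * (mags - m0))).sum()
            r += k
            if k:
                g += 1
            mags = m0 + rng.exponential(1.0 / beta, size=k)    # next generation
        return r, g

    # Over many cascades, the tail exponents of r and g should approach
    # 1 + 1/gamma and 1 + 1/(gamma - 1), with gamma = b / alpha.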

Interevent Time Distribution in Seismicity: A Theoretical Approach

This paper presents an analysis of the distribution of the time τ between two consecutive events in a stationary point process. The study is motivated by the discovery of unified scaling laws for τ for the case of seismic events. We demonstrate that these laws cannot exist simultaneously in a seismogenic area. Under very natural assumptions we show that if, after rescaling to ensure Eτ = 1, the interevent time has a universal distribution F, then F must be exponential. In other words, Corral’s unified scaling law cannot exist in the whole range of time. In the framework of a general cluster model we discuss the parameterization of an empirical unified law and the physical meaning of the parameters involved.
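
The exponential baseline singled out by this result is easy to see in the simplest stationary case; the sketch below checks it for a simulated Poisson process (an illustration of the theorem's benchmark case, not of the general proof).

    import numpy as np

    rng = np.random.default_rng(1)
    times = np.cumsum(rng.exponential(scale=2.0, size=100_000))  # stationary Poisson process
    tau = np.diff(times)
    tau /= tau.mean()                                            # rescale so E[tau] = 1
    # For a Poisson process the rescaled interevent time is Exp(1):
    print(np.mean(tau > 1.0), np.exp(-1.0))                      # both near 0.368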

Methods for Measuring Seismicity Rate Changes: A Review and a Study of How the Mw 7.3 Landers Earthquake Affected the Aftershock Sequence of the Mw 6.1 Joshua Tree Earthquake

The development of fault interaction models has triggered the need for an accurate estimation of seismicity rate changes following the occurrence of an earthquake. Several statistical methods have been developed in the past to serve this purpose, each relying on different assumptions (e.g., stationarity, Gaussianity) pertaining to the seismicity process. In this paper we review these various approaches, discuss their limitations, and propose further improvements. The feasibility of mapping robust seismicity rate changes, and more particularly rate decreases (i.e., seismicity shadows), in the first few days of an aftershock sequence, is examined. To this aim, the hypothesis of large numbers of earthquakes, hence the use of Gaussian statistics, as is usually assumed, must be dropped. Finally, we analyse the modulation in seismicity rates following the June 28, 1992 Mw 7.3 Landers earthquake in the region of the April 22, 1992 Mw 6.1 Joshua Tree earthquake. Clear instances of early triggering (i.e., in the first few days) followed by a seismicity quiescence are observed. This could indicate the existence of two distinct interaction regimes, the first caused by the destabilisation of active faults by the travelling seismic waves, and the second due to the remaining static stress perturbation.
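
One exact alternative to Gaussian rate-change statistics, suitable for the small counts discussed above (offered here as a generic sketch, not necessarily the estimator proposed in the paper), conditions on the total number of events and applies a binomial test:

    from scipy.stats import binomtest

    def rate_change_pvalue(n_before, t_before, n_after, t_after):
        """Exact test for a Poisson rate change between two time windows.

        Conditional on the total count, the number of events falling in
        the 'after' window is binomial with success probability
        t_after / (t_before + t_after); this remains valid for small
        counts, where Gaussian approximations break down."""
        p0 = t_after / (t_before + t_after)
        return binomtest(n_after, n_before + n_after, p0).pvalue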

Approximating the Distribution of Pareto Sums

Heavy-tailed random variables (rvs) have proven to be an essential element in modeling a wide variety of natural and human-induced processes, and the sums of heavy-tailed rvs represent a particularly important construction in such models. Oriented toward both geophysical and statistical audiences, this paper discusses the appearance of the Pareto law in seismology and addresses the problem of the statistical approximation of the sums of independent rvs with common Pareto distribution F(x) = 1 − x^{−α}, x ≥ 1, for 1/2 < α < 2. Such variables have infinite second moment, which prevents one from using the Central Limit Theorem to solve the problem. This paper presents five approximation techniques for the Pareto sums and discusses their respective accuracy. The main focus is on the median and the upper and lower quantiles of the sum’s distribution. Two of the proposed approximations are based on the Generalized Central Limit Theorem, which establishes the general limit for the sums of independent identically distributed rvs in terms of stable distributions; these approximations work well for large numbers of summands. Another approximation, which replaces the sum with its maximal summand, has less than 10% relative error for the upper quantiles when α < 1. A more elaborate approach considers the two largest observations separately from the rest of the observations, and yields a relative error under 1% for the upper quantiles and less than 5% for the median. The last approximation is specially tailored for the lower quantiles, and involves reducing the non-Gaussian problem to its Gaussian equivalent; it too yields errors less than 1%. An approximation of the observed cumulative seismic moment in California illustrates the developed methods.
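
The max-summand approximation mentioned above is simple to check by simulation; the sketch below uses inverse-transform sampling of the Pareto law with illustrative values of α and n.

    import numpy as np

    rng = np.random.default_rng(2)
    alpha, n, trials = 0.8, 50, 100_000
    # Inverse transform: if U ~ Uniform(0, 1), then (1 - U)**(-1/alpha)
    # has the Pareto distribution F(x) = 1 - x**(-alpha), x >= 1.
    x = (1.0 - rng.uniform(size=(trials, n))) ** (-1.0 / alpha)
    q_sum = np.quantile(x.sum(axis=1), 0.99)
    q_max = np.quantile(x.max(axis=1), 0.99)
    print(q_sum, q_max, abs(q_sum - q_max) / q_sum)   # small relative error for alpha < 1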

The Entropy Score and its Uses in Earthquake Forecasting

Suppose a forecasting scheme associates a probability p* with some observed outcome. The entropy score given to this forecast is then −log p*. This article provides a review of the background to this scoring method, its main properties, and its relationships to concepts such as likelihood, probability gain, and Molchan’s ν-τ diagram. It is shown that, in terms of this score, an intrinsic characterization can be given for the predictability of a given statistical forecasting model. Uses of the score are illustrated by applications to the stress release and ETAS models, electrical signals, and M8.
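
In code, the score and its companion quantity are one-liners; the sketch below simply restates the definitions above (the probability-gain relation follows directly from the definition of the score).

    import numpy as np

    def entropy_score(p_star):
        """Entropy score of a forecast: -log of the probability assigned
        to the outcome that was actually observed (lower is better)."""
        return -np.log(p_star)

    def probability_gain(p_model, p_reference):
        """Gain of a forecast over a reference model, expressed through
        the difference of entropy scores: exp(score_ref - score_model)."""
        return np.exp(entropy_score(p_reference) - entropy_score(p_model))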

A Simplified Approach to Earthquake Risk in Mainland China

Conventional earthquake loss procedures have limitations when applied to assess the social and economic impacts of recent disastrous earthquakes. This paper addresses the need for an applicable model for estimating the significant increases of earthquake loss in mainland China. Earthquake casualties were studied first; they are strongly related to earthquake strength, occurrence time (day or night), and the distribution of population in the affected area. Using data on earthquake casualties in mainland China from 1980 to 2000, we suggest a relationship between average loss of life and earthquake magnitude. Combined with information on population density and earthquake occurrence times, we use these data to derive a further relationship between the loss of life and factors such as population density, intensity, and occurrence time of the earthquake. Earthquakes that occurred from 2001 to 2003 were used to test the given relationships. This paper also explores the possibility of using a macroeconomic indicator, here GDP (Gross Domestic Product), to roughly estimate earthquake exposure in situations where no detailed insurance or similar inventories exist, thus bypassing some problems of the conventional method.

Test of the EEPAS Forecasting Model on the Japan Earthquake Catalogue

The EEPAS (“Every Earthquake a Precursor According to Scale”) model is a method of forecasting earthquakes based on the notion that the precursory scale increase (Ψ) phenomenon occurs at all scales in the seismogenic process. The rate density of future earthquake occurrence is computed directly from past earthquakes in the catalogue. The EEPAS model has previously been fitted to the New Zealand earthquake catalogue and successfully tested on the California catalogue. Here we describe a further test of the EEPAS model in the Japan region spanning 1965–2001, initially on earthquakes with magnitudes exceeding the threshold value 6.75. A baseline model and the Gutenberg-Richter b-value were fitted to the JMA catalogue over the learning period 1926–1964. The performance of EEPAS, with the key parameters unchanged from the New Zealand values, was compared with that of the baseline model over the testing period, using a likelihood ratio criterion. The EEPAS model proved superior. A sensitivity analysis shows that this result is not sensitive to the choice of the learning period or b-value, but that the advantage of EEPAS over the baseline model diminishes as the magnitude threshold is lowered. When key model parameters are optimised for the Japan catalogue, the advantage of EEPAS over the baseline model is consistent for all magnitudes above 6.25, although less than in the New Zealand and California regions. These results add strength to the proposition that the EEPAS model is effective at a variety of scales and in a variety of seismically active regions.
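
The likelihood ratio criterion used here is standard for rate-density forecasts; the sketch below gives its generic form (a minimal sketch, not the EEPAS implementation), with the model's expected event count supplied by the caller.

    import numpy as np

    def poisson_loglik(rate_at_events, expected_total):
        """Log likelihood of an inhomogeneous Poisson rate-density forecast:
        sum of log rate densities at the observed earthquakes, minus the
        expected number of events over the space-time-magnitude volume."""
        return np.sum(np.log(rate_at_events)) - expected_total

    def per_event_gain(ll_model, ll_baseline, n_events):
        """Mean log likelihood ratio per earthquake; positive values favour
        the model over the baseline."""
        return (ll_model - ll_baseline) / n_events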

Tidal Stress/Strain and the b-values of Acoustic Emissions at the Underground Research Laboratory, Canada

The correlation between the b-values of acoustic emissions (AEs) and the phase of the moon was investigated at the Underground Research Laboratory (URL) in Canada. The same data as those used in Iwata (2002) were examined, which showed that the occurrence of AEs is correlated with the phase of the moon. It was expected, therefore, that the b-value of the AEs would also be sensitive to tidal stress/strain fluctuations. We investigated the variation of the b-values as a function of the phase of the moon. Results show that b-values immediately following the times of full/new moon are higher than those at other times. Using AIC (Akaike Information Criterion) and random (Monte Carlo) simulations, it was confirmed that this feature is statistically significant. We also investigated whether or not there was a change in the b-values immediately before the times of full/new moon, but no statistically significant change was observed. The results suggest that the effect of stress/strain fluctuations on AE occurrences at the URL is asymmetric with respect to the times of full/new moon.
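
A standard way to estimate b-values and test whether two samples differ, consistent in spirit with the AIC-based testing described above though not necessarily the authors' exact procedure, is the Aki-Utsu maximum-likelihood estimate plus an AIC comparison of pooled versus separate b-values:

    import numpy as np

    def b_value(mags, m_min):
        """Aki-Utsu maximum-likelihood b-value for magnitudes >= m_min."""
        return np.log10(np.e) / (np.mean(mags) - m_min)

    def gr_loglik(mags, m_min, b):
        """Log likelihood of the Gutenberg-Richter (exponential) model."""
        beta = b * np.log(10.0)
        return len(mags) * np.log(beta) - beta * np.sum(np.asarray(mags) - m_min)

    def delta_aic(mags1, mags2, m_min):
        """AIC of one common b-value (1 parameter) minus AIC of separate
        b-values (2 parameters); clearly positive values favour distinct
        b-values for the two samples."""
        pooled = np.concatenate([mags1, mags2])
        aic_common = 2 - 2 * gr_loglik(pooled, m_min, b_value(pooled, m_min))
        aic_split = 4 - 2 * (gr_loglik(mags1, m_min, b_value(mags1, m_min))
                             + gr_loglik(mags2, m_min, b_value(mags2, m_min)))
        return aic_common - aic_split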

Use of Potential Foreshocks to Estimate the Short-term Probability of Large Earthquakes, Tohoku, Japan

This study seeks to construct a hazard function for earthquake probabilities based on potential foreshocks. Earthquakes of magnitude 6.5 and larger that occurred between 1976 and 2000 in an offshore area of the Tohoku region of northeast Japan were selected as events for estimating probabilities. Later events of multiplets and aftershocks were omitted from the target set. As a result, a total of 14 earthquakes were employed in the assessment of models. The study volume spans 300 km (East-West) × 660 km (North-South) × 60 km in depth. The probability of a target earthquake occurring at a certain point in time-space depends on the number of small earthquakes that occurred per unit volume in that vicinity. In this study, we assume that the hazard function increases geometrically with the number of potential foreshocks within a constrained space-time window. The parameters for defining potential foreshocks are magnitude, spatial extent and lead time to the point of assessment. The time parameter is studied over a range of 1 to 5 days (in 1-day steps), and the spatial parameter over 20 to 100 km (in 20-km steps). The model parameters of the hazard function are determined by the maximum likelihood method. The most effective hazard function examined was the following: When an earthquake of magnitude 4.5 to 6.5 occurs, the hazard for a large event is increased significantly for one day within a 20 km radius surrounding the earthquake. If two or more such earthquakes are observed, the model expects a 20,000 times greater probability of an earthquake of magnitude 6.5 or greater than in the absence of such events.
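
The hazard model described above has a simple computational core; the sketch below uses the paper's preferred window (one day, 20 km, magnitude 4.5-6.5), while the function names and the flat-Earth distance are illustrative assumptions of this sketch.

    import numpy as np

    def count_potential_foreshocks(cat_t, cat_xyz, cat_m, t_now, x_now,
                                   lead_days=1.0, radius_km=20.0,
                                   m_lo=4.5, m_hi=6.5):
        """Number of potential foreshocks in the space-time window that
        precedes the assessment point (t_now, x_now)."""
        recent = (cat_t > t_now - lead_days) & (cat_t <= t_now)
        near = np.linalg.norm(cat_xyz - x_now, axis=1) <= radius_km
        sized = (cat_m >= m_lo) & (cat_m <= m_hi)
        return int(np.sum(recent & near & sized))

    def hazard(baseline, n_foreshocks, factor):
        """Hazard increasing geometrically with the foreshock count k:
        lambda = baseline * factor**k."""
        return baseline * factor ** n_foreshocks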

A Point-process Analysis of the Matsushiro Earthquake Swarm Sequence: The Effect of Water on Earthquake Occurrence

Temporal characteristics of the famous Matsushiro earthquake swarm were investigated quantitatively using point-process analysis. Analysis of the earthquake occurrence rate revealed not only the precise and interesting process of the swarm, but also the relation between pore-water pressure and the strength of the epidemic effect, and the modified Omori-type temporal decay of earthquake activity. The occurrence rate function λ(t) for this swarm is represented well as $$\lambda(t) = f(t) + \sum_{t_j < t} \kappa\, e^{\gamma (m_j - m_{th})} / (t - t_j + c)^p,$$ where f(t) represents the contribution of the swarm driver, which in this case was water erupting from depth, and the second term represents an epidemic effect of the modified Omori type. Based on changes in the form of f(t), this two-year-long swarm was divided into six periods and one short transitional epoch. The form of f(t) in each period revealed the details of the water eruption process. In the final stage, f(t) decayed according to the modified Omori form, while it decayed exponentially during the brief respite in the water eruption in the fourth period. When an exponential decay of swarm activity is observed, we have to be cautious of a sudden restart of the violent activity. The epidemic effect is stronger when the pressure of the pore water is higher. Even when the pressure is not high, the p value of the epidemic effect is small when there is plenty of pore water. However, the epidemic effect produced about a quarter of the earthquakes even though there was not much pore water in the rocks.
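
The rate function above translates directly into code; the sketch below is a plain evaluation of λ(t) for a given catalogue and driver term f(t) (parameter fitting by maximum likelihood, as in the paper, is not shown).

    import numpy as np

    def occurrence_rate(t, t_events, m_events, f, kappa, gamma, c, p, m_th):
        """lambda(t) = f(t) + sum over past events j of
        kappa * exp(gamma * (m_j - m_th)) / (t - t_j + c)**p."""
        past = t_events < t
        epidemic = np.sum(kappa * np.exp(gamma * (m_events[past] - m_th))
                          / (t - t_events[past] + c) ** p)
        return f(t) + epidemic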

Seismic Hazard Evaluation Using Markov Chains: Application to the Japan Area

Seismogenic regions within some geographic area are interrelated through tectonics and seismic history, although this relation is usually complex, so that seismicity in a given region cannot be predicted in a straightforward manner from the activity in other region(s). We present a new statistical method for seismic hazard evaluation, based on modeling the transition probabilities of seismicity patterns in the regions of a geographic area during a time interval as a Markov chain. Application of the method to the Japan area renders good results, considering the occurrence of a high-probability transition as a successful forecast. For magnitudes M ≥ 5.5 and time intervals Δt = 0.10 year, the method yields a 78% aftcast (forecast of data already used to evaluate the hazard) success rate for the entire catalog, and an indicative 80% forecast success rate for the last 10 transitions in the catalog. A byproduct of the method, regional occurrence probabilities determined from the transition probabilities, also provides good results; aftcasts of regional activity have a 98% success rate, and those of activity in the highest-probability region about an 80.5% success rate. All results are superior to those from the null hypotheses (a memoryless Poissonian, fixed-rate, or uniform system) and have vanishingly small probabilities of resulting from purely random guessing.
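
The core of such a method is the maximum-likelihood estimate of the transition matrix from the observed sequence of regional seismicity patterns; the sketch below shows that step in generic form (the state encoding and the high-probability success criterion are left to the caller).

    import numpy as np

    def transition_matrix(states, n_states):
        """Row-stochastic transition matrix estimated by counting observed
        transitions in a sequence of pattern indices (0 .. n_states-1)."""
        counts = np.zeros((n_states, n_states))
        for a, b in zip(states[:-1], states[1:]):
            counts[a, b] += 1.0
        rows = counts.sum(axis=1, keepdims=True)
        return np.divide(counts, rows, out=np.zeros_like(counts),
                         where=rows > 0)

    # Aftcast/forecast of the next pattern from the current pattern s:
    # next_state = np.argmax(P[s])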

Preliminary Analysis of Observations on the Ultra-Low Frequency Electric Field in the Beijing Region

This paper presents a preliminary analysis of observations of ultra-low frequency ground electric signals from stations operated by the China Seismological Bureau over the last 20 years. A brief description of the instrumentation, operating procedures and data processing procedures is given. The data analyzed consist of estimates of the total strengths (cumulative amplitudes) of the electric signals during 24-hour periods. The thresholds are set low enough so that on most days a zero observation is returned. Non-zero observations are related to electric and magnetic storms, occasional man-made electrical effects, and, apparently, some pre-, co-, or postseismic signals. The main purpose of the analysis is to investigate the extent to which the electric signals can be considered preseismic in character. For this purpose the electric signals from each of five stations are jointly analyzed with the catalogue of local earthquakes within circular regions around the selected stations. A version of Ogata’s Lin-Lin algorithm is used to estimate and test the existence of a preseismic signal. This model allows the effect of the electric signals to be tested, even after allowing for the effects of earthquake clustering. It is found that, although the largest single effect influencing earthquake occurrence is the clustering tendency, there remains a significant preseismic component from the electrical signals. Additional tests show that the apparent effect is not postseismic in character, and persists even under variations of the model and the time periods used in the analysis. Samples of the data are presented and the full data sets have been made available on local websites.

Metadata
Title
Statistical Seismology
Copyright Year
2005
Electronic ISBN
978-3-7643-7375-7
Print ISBN
978-3-7643-7295-8
DOI
https://doi.org/10.1007/3-7643-7375-X